Mistaken extrapolations, limited imagination, and other common mistakes that distract us from thinking more productively about the future.
We are surrounded by hysteria about the future of artificial intelligence and robotics—hysteria about how powerful they will become, how quickly, and what they will do to jobs.
I recently saw a story in MarketWatch that said robots will take half of today’s jobs in 10 to 20 years. It even had a graphic to prove the numbers.
The claims are ludicrous. (I try to maintain professional language, but sometimes …) For instance, the story appears to say that we will go from one million grounds and maintenance workers in the U.S. to only 50,000 in 10 to 20 years, because robots will take over those jobs. How many robots are currently operational in those jobs? Zero. How many realistic demonstrations have there been of robots working in this arena? Zero. Similar stories apply to all the other categories where it is suggested that we will see the end of more than 90 percent of jobs that currently require physical presence at some particular site.
Mistaken predictions lead to fears of things that are not going to happen, whether it’s the wide-scale destruction of jobs, the Singularity, or the advent of AI that has values different from ours and might try to destroy us. We need to push back on these mistakes. But why are people making them? I see seven common reasons.
Overestimating and underestimating
Imagining magic
Performance versus competence
Suitcase words
Exponentials
Hollywood scenarios
Speed of deployment
Read More
Monthly Archives: May 2019
Systems Archetypes
We live in a world of events. Things happen and we respond: a machine breaks down, we buy a new machine; sales drop, we launch an ad campaign; profits fall, we lay off workers. Each event creates another event, in an endless stream of cause-and-effect relationships. At this level of understanding, all we can do is react to things that happen to us. If we begin to see the world as patterns of behavior over time, we can anticipate problems (patterns of machine breakdowns, cycles of sales slumps, periodic profit squeezes) and accommodate them (schedule maintenance work, institutionalize ad cycles, sharpen cost-cutting skills). Managing at this level allows us to anticipate trends and accommodate them. We are still responding to events, but in a more proactive manner. If we go deeper, to the level of systemic structure, we can begin to see what creates the behaviors we observe, and then take action to change those structures. This allows us to address the source of a problem rather than just treat its symptoms. The power of systems thinking comes from this focus on the level of systemic structure, where the greatest leverage for solving problems lies. Read More
Artificial Intelligence Can Now Copy Your Voice: What Does That Mean For Humans?
It takes just 3.7 seconds of audio to clone a voice. This impressive—and a bit alarming—feat was announced by Chinese tech giant Baidu. A year ago, the company’s voice cloning tool, called Deep Voice, required 30 minutes of audio to do the same. This illustrates just how fast the technology to create artificial voices is accelerating. In just a short time, the capabilities of AI voice generation have expanded and become more realistic, which makes it easier for the technology to be misused. Read More
Probability Cheatsheet v2.0
Data Science Cheatsheet
Building a Silicon Brain
In 2012, computer scientist Dharmendra Modha used a powerful supercomputer to simulate the activity of more than 500 billion neurons—more, even, than the 85 billion or so neurons in the human brain. It was the culmination of almost a decade of work, as Modha progressed from simulating the brains of rodents and cats to something on the scale of humans.
The simulation consumed enormous computational resources—1.5 million processors and 1.5 petabytes (1.5 million gigabytes) of memory—and was still agonizingly slow, 1,500 times slower than the brain computes. Modha estimates that to run it in biological real time would have required 12 gigawatts of energy, about six times the maximum output capacity of the Hoover Dam. “And yet, it was just a cartoon of what the brain does,” says Modha, chief scientist for brain-inspired computing at IBM Almaden Research Center in northern California. The simulation came nowhere close to replicating the functionality of the human brain, which uses about the same amount of power as a 20-watt lightbulb…
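The quoted figures make for a striking back-of-the-envelope comparison. Taking the article's numbers (12 gigawatts for real-time simulation, ~20 watts for the brain) and the Hoover Dam's roughly 2.08-gigawatt maximum output (a figure not in the article), the gap works out as follows:

```python
# Energy gap between the simulation and the biological brain, using the
# figures quoted above. The Hoover Dam capacity is an outside estimate.
sim_power_watts = 12e9      # estimated power to run the simulation in real time
brain_power_watts = 20.0    # approximate power draw of a human brain
hoover_dam_watts = 2.08e9   # Hoover Dam maximum output (~2,080 MW)

ratio_vs_brain = sim_power_watts / brain_power_watts
ratio_vs_dam = sim_power_watts / hoover_dam_watts

print(f"Simulation would need ~{ratio_vs_brain:,.0f}x the brain's power")
print(f"That is ~{ratio_vs_dam:.1f}x the Hoover Dam's maximum output")
```

In other words, the simulation is roughly 600 million times less energy-efficient than the organ it caricatures, which is the motivation for the brain-inspired hardware discussed next.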
But the fact that computers “think” very differently than our brains do actually gives them an advantage when it comes to tasks like number crunching, while making them decidedly primitive in other areas, such as understanding human speech or learning from experience. If scientists want to simulate a brain that can match human intelligence, let alone eclipse it, they may have to start with better building blocks—computer chips inspired by our brains.
So-called neuromorphic chips replicate the architecture of the brain—that is, they talk to each other using “neuronal spikes” akin to a neuron’s action potential. This spiking behavior allows the chips to consume very little power and remain power-efficient even when tiled together into very large-scale systems. Read More
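The spiking behavior described above can be illustrated with a leaky integrate-and-fire neuron, the basic event-driven unit that neuromorphic hardware emulates. This is a minimal sketch with illustrative parameters, not a model of any particular chip:

```python
# A minimal leaky integrate-and-fire neuron: input is integrated over time
# while "leaking" away, and the neuron emits a spike only when its membrane
# potential crosses a threshold. Between spikes it is silent, which is why
# spiking hardware can be so power-efficient.
def simulate_lif(input_current, threshold=1.0, leak=0.95):
    potential = 0.0
    spikes = []
    for i_t in input_current:
        potential = potential * leak + i_t   # leaky integration of input
        if potential >= threshold:
            spikes.append(1)                 # spike event, like an action potential
            potential = 0.0                  # reset after firing
        else:
            spikes.append(0)                 # no event, no output activity
    return spikes

# A constant weak drive produces only occasional spikes.
out = simulate_lif([0.3] * 20)
print(out)  # spikes every 4th step: [0, 0, 0, 1, 0, 0, 0, 1, ...]
```

Because output is communicated only at spike events rather than continuously, large tiled arrays of such units stay idle most of the time, which is the power advantage the excerpt describes.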
Brains Speed Up Perception by Guessing What’s Next
Imagine picking up a glass of what you think is apple juice, only to take a sip and discover that it’s actually ginger ale. Even though you usually love the soda, this time it tastes terrible. That’s because context and internal states, including expectation, influence how all animals perceive and process sensory information, explained Alfredo Fontanini, a neurobiologist at Stony Brook University in New York. In this case, anticipating the wrong stimulus leads to a surprise, and a negative response.
But this influence isn’t limited to the quality of the perception. Among other effects, priming sensory systems to expect an input, good or bad, can also accelerate how quickly the animal then detects, identifies and reacts to it.
Years ago, Fontanini and his team found direct neural evidence of this speedup effect in the gustatory cortex, the part of the brain responsible for taste perception. Since then, they have been trying to pin down the structure of the cortical circuitry that made their results possible. Now they have. Last month, they published their findings in Nature Neuroscience: a model of a network with a specific kind of architecture that not only provides new insights into how expectation works, but also delves into broader questions about how scientists should think about perception more generally. Moreover, it falls in step with a theory of decision making that suggests the brain really does leap to conclusions, rather than building up to them. Read More
The Terrifying Potential of the 5G Network
Two words explain the difference between our current wireless networks and 5G: speed and latency. 5G—if you believe the hype—is expected to be up to a hundred times faster. (A two-hour movie could be downloaded in less than four seconds.) That speed will reduce, and possibly eliminate, the delay—the latency—between instructing a computer to perform a command and its execution. This, again, if you believe the hype, will lead to a whole new Internet of Things, where everything from toasters to dog collars to dialysis pumps to running shoes will be connected. Remote robotic surgery will be routine, the military will develop hypersonic weapons, and autonomous vehicles will cruise safely along smart highways. The claims are extravagant, and the stakes are high. One estimate projects that 5G will pump twelve trillion dollars into the global economy by 2035, and add twenty-two million new jobs in the United States alone. This 5G world, we are told, will usher in a fourth industrial revolution. Read More
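The "two-hour movie in under four seconds" claim is easy to sanity-check. Assuming a movie file of about 5 GB and a peak 5G throughput of 10 Gbit/s (both illustrative figures, not from the article):

```python
# Rough check of the movie-download claim: time = file size in bits / bit rate.
movie_bytes = 5 * 10**9          # ~5 GB for a two-hour HD movie (assumed)
throughput_bps = 10 * 10**9      # 10 Gbit/s peak 5G rate (assumed)

download_seconds = movie_bytes * 8 / throughput_bps
print(f"Download time: {download_seconds:.0f} seconds")  # -> 4 seconds
```

So the headline number is achievable, but only at peak rates that real-world deployments rarely sustain, which is part of why the excerpt hedges with "if you believe the hype."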
Is AI in China Overestimated?
We hear a lot about China as the future giant of AI, but it is worth putting China’s level of achievement in perspective. A study published by Elsevier in February of this year helps us do exactly that.
In this study, we find that the share of scientific publications of Chinese origin increased from 9% over the period 1998–2002 to 24% between 2013 and 2017. This is indeed significant, but it is still less than the current 30% share of articles of European origin, though higher than the 17% from the USA alone. However, quantity does not mean quality: on the impact score of the articles (which measures article citations and their exposure at conferences), Chinese articles are rated between 1.08 and 1.77, while European articles are rated between 1.22 and 2.77, and American articles between 2.45 and 5.84. Read More
Security lapse exposed a Chinese smart city surveillance system
Smart cities are designed to make life easier for their residents: better traffic management that clears congested routes, public transport that runs on time, and cameras that keep a watchful eye from above.
But what happens when that data leaks? One such database was open for weeks for anyone to look inside.
Security researcher John Wethington found a smart city database accessible from a web browser without a password. He passed details of the database to TechCrunch in an effort to get the data secured. Read More