An “ancient” Bible of the 21st century

Imagine for a minute an ancient manuscript of a Bible written circa the mid-21st century (about 45 years from now):

“After the age of Aquarius came the age of the supercomputer. And God saw that it was good. But then the people quickly replaced it with the cell phone, and very shortly thereafter with the smart phone. And the smart phone, as it was called in those days, was indeed a lot smarter than the supercomputers of ancient times, but dumber than dumb when compared to the chips people embedded in their hearts, and limbs, and brains. And God saw it and said: OMG! What have I wrought…haven’t they learned from the Greeks that all tragedy begins with hubris?”


A modern version of the “ancient” story

Without invoking God or religion, some of the brightest minds of our age are sounding the alarm: Mankind is constructing a new version of the Tower of Babel: Artificial Intelligence (AI). And like the ancient tower, if we continue to build it, it will bury us.

Now, I am not talking about your run-of-the-mill alarm: these are geniuses like Stephen Hawking and visionaries firmly anchored in reality, like Elon Musk. If you read how their warnings are portrayed in the media, you can’t escape the smirky tone of “well, a bunch of eggheads indulging in speculation…”.

Here is CNET, a website with healthy respect for Science and Technology: “Stephen Hawking: Humans Evolve Slowly, AI Can Stomp Us Out.”

It goes on to quote an interview Hawking gave to the BBC:

“Hawking said he fears that a complete artificial intelligence would simply do away with us.

AI ‘would take off on its own, and redesign itself at an ever-increasing rate,’ he mused. The result would quite simply be that this new, exalted intelligence would see no need for our cumbersome, turgid ways. Or, as he put it: ‘Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.'”

This isn’t the first time in recent months that Hawking has predicted our doom. In May, he warned that the moral goodness of AI depends on who controls it. In June, he cautioned that robots “might simply turn out to be smarter than us.”

And here is the headline from CNBC.com: “Rise of the Machines! Musk warns of ‘summoning the demons’ with AI.”


Is this the dawning of the Singularity?

Let’s examine. When Hawking talks about AI outstripping biological evolution, he knows whereof he speaks. A branch of AI is based on algorithms called “evolutionary programming,” which are designed to constantly improve and perfect themselves using natural-selection principles. Such programs can iterate and reiterate in a matter of seconds what took evolution millions of years. At some point, argues Hawking, these programs will become so much smarter than their creators that they will, in fact, become sentient (i.e., able to perceive or feel things) and shuck off the yoke of their slow and dumb human masters.
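To make the idea concrete, here is a minimal, toy sketch of such an evolutionary loop. It is not anyone’s production system; the fitness function (counting 1-bits in a string) and all the parameter values are illustrative assumptions, but the cycle of selection, crossover, and mutation is the same one real evolutionary programs run, thousands of generations per second:

```python
import random

def evolve(length=20, pop_size=50, generations=100, mutation_rate=0.05):
    """Toy evolutionary loop: evolve bitstrings toward all-ones ("OneMax")."""
    fitness = lambda ind: sum(ind)  # count of 1-bits; higher is fitter
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:
            return gen, pop[0]          # a perfect individual emerged
        parents = pop[: pop_size // 2]  # selection: keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # crossover point
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]  # mutation
            children.append(child)
        pop = parents + children
    pop.sort(key=fitness, reverse=True)
    return generations, pop[0]
```

On a problem this small, the population typically converges on a perfect solution within a few dozen generations; the point is that each “generation” costs microseconds, not the millennia biological evolution needs.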

The real problem is that, by then, it’s going to be too late, because we won’t even realize when the machines have become sentient beings. So the option of “pulling the plug” in time is illusory. Even if we try to regulate the field, as we did with DNA synthesis, there will always be a rogue scientist who is simply too curious.

The Washington Post reports on Elon Musk’s speech at MIT in which he told the audience that the tech sector should be “very careful” about pioneering AI, calling it “our biggest existential threat.” On several occasions, Musk has called the technology a big risk that can’t be controlled. At MIT, Musk carried the metaphor a bit further than he has in the past. “With artificial intelligence, we are summoning the demon,” The Post quoted Musk as saying.


Moral outsourcing

Musk’s comments highlighted a budding ethical debate in the broader society about whether machines should be able to think for themselves. Some ethicists and technology practitioners are concerned about the potential for what Oxford University recently called “moral outsourcing.”

In a blog post last year, Oxford scholars cautioned that “when a machine is ‘wrong,’ it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could. Simple algorithms should be extremely predictable, but can make bizarre decisions in ‘unusual’ circumstances.”


Theoretical? Think again

Now, if you think this is a theoretical problem, far into the future, think again.

After acquiring British technology firm DeepMind earlier this year, Google bowed to the growing controversy over AI by agreeing to establish an ethics board that would oversee its efforts to create conscious machines. The search giant has made steady advances to make its applications more convenient to users by making them increasingly autonomous.

The Huffington Post reports that in 2011, the co-founder of DeepMind, the artificial intelligence company recently acquired by Google, made an ominous prediction more befitting a ranting survivalist than an award-winning computer scientist:

“Eventually, I think human extinction will probably occur, and technology will likely play a part in this,” DeepMind’s Shane Legg said in an interview with Alexander Kruel. Among all forms of technology that could wipe out the human species, he singled out artificial intelligence, or AI, as the ‘number 1 risk for this century.'”

So there you have it, from the creator of these “increasingly autonomous” machines, himself.


Who is Ray Kurzweil?

We can’t talk about the future of AI without a nod in Ray Kurzweil’s direction. In a long article, Time Magazine tells a charming anecdote:

On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I’ve Got a Secret (Steve Allen was the host). He then played a short musical composition on a piano. His secret? The piano piece was composed by a clunky computer that he had built himself.

Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It’s an act of self-expression; you’re not supposed to be able to do it if you don’t have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.

Computing power is increasing exponentially, which means that not only are computers getting faster, but the doubling time of computing power is getting shorter and shorter. There might conceivably come a moment when they are capable of something comparable to human intelligence—artificial intelligence.

All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness—not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, anything we can do. This is the point of the Singularity.

From that point on, there’s no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. We would be no match for this monster.


Why is it so difficult to accept?

We watch those Terminator and Matrix movies, and we chalk it up to “science fiction”. Journalists can’t help a bemused, sometimes outright dismissive attitude. Why?

Because our brains are not wired to think exponentially. Nature functions in a linear progression. A leopard runs at you as fast as it can, but that’s it; it cannot accelerate exponentially. And when you run away, you can increase your speed only incrementally. This is also why we are so bad at predicting the future. We think in moving averages, meaning that we extrapolate from the last 5-7 data points, constantly updating the average by dropping the oldest point and adding the newest. This is linear thinking; just ask Malthus.
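A minimal sketch of that bias, with made-up numbers: take a quantity that doubles every step, and forecast its next value the way our brains do, by linearly extrapolating the last few data points. The linear guess always undershoots, and the gap only widens over time:

```python
def linear_forecast(history, window=5):
    """Project one step ahead using the average step size of the last `window` points."""
    recent = history[-window:]
    avg_step = (recent[-1] - recent[0]) / (len(recent) - 1)
    return history[-1] + avg_step

actual = [2 ** n for n in range(12)]   # 1, 2, 4, ..., 2048: doubles every step
guess = linear_forecast(actual[:-1])   # "moving average" forecast of the final point
print(guess, actual[-1])               # prints 1264.0 2048: the linear guess falls far short
```

Even with the five most recent points in hand, the linear extrapolation misses the true value by nearly 40 percent after a single step; a few steps further out, it is off by orders of magnitude.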


So what’s the answer?

I don’t know, and I am in good company. None of the knowledgeable thinkers in the field has an answer. But they all keep warning us that if we don’t do something about it, the end, as they say, is nigh.

Have a nice rest of your day…

Dov Michaeli, MD, PhD
Dov Michaeli, MD, PhD loves to write about the brain and human behavior as well as translate complicated basic science concepts into entertainment for the rest of us. He was a professor at the University of California San Francisco before leaving to enter the world of biotech. He served as the Chief Medical Officer of biotech companies, including Aphton Corporation. He also founded and served as the CEO of Madah Medica, an early stage biotech company developing products to improve post-surgical pain control. He is now retired and enjoys working out, following the stock market, travelling the world, and, of course, writing for TDWI.
