In my last post, I discussed the possibility, indeed the high likelihood, that given the rate of progress in programming artificial intelligence (AI), a moment will inevitably arrive when computers will become sentient beings. They will be able to reason, to perceive, they may even have a consciousness similar, or superior, to ours.
This is not science fiction. From Ray Kurzweil to Elon Musk and Stephen Hawking, serious thinkers are warning us about the rapid approach of this pivotal transition point, called the singularity, when computers will self-program at ever-increasing speeds and become vastly more intelligent than us, their creators. In other words, they will become cyborgs (cybernetic organisms).
The Cyborg’s dilemma
If this indeed is going to happen, what is likely to happen next? How will the cyborgs of the future behave? And how are we, slow-witted humans that we are, going to respond? Forget the easy answer: "we'll just pull the plug." A highly intelligent cyborg will think about this possibility, and preempt it, before we even become aware that AI-programmed computers have acquired consciousness.
The 1984 film The Terminator has a silly plot, but at its heart lies a serious premise: In the near future, an artificial intelligence defense network called Skynet will become self-aware and initiate a nuclear holocaust of mankind. And in its infinite intelligence, it will be capable of anticipating any human resistance and nipping it in the bud.
Is this what an intelligent cyborg will choose to do? Remember the Prisoner's dilemma, where two players (prisoners) have to decide whether to cooperate or defect. This type of dilemma is a classic in game theory, a branch of mathematics. It pits one individual, or group, against another and tries, computationally, to find an optimal choice. Mathematically, if player A faces no threat of retaliation from player B, the answer is easy: player A defects and is the ultimate winner. But if the game is iterated, meaning that we play it again and again, giving each player enough opportunities to assess their strategies, most computer models agree that the players should cooperate for the optimal result.
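The point is easy to see in a simulation. Below is a minimal sketch of the iterated Prisoner's dilemma; the payoff numbers (3 for mutual cooperation, 1 for mutual defection, 5 for the lone defector, 0 for the lone cooperator) are the conventional illustrative values, not anything specific to the argument above, and the two strategies are my own simple examples.

```python
# Iterated Prisoner's Dilemma sketch. Payoffs (temptation 5, reward 3,
# punishment 1, sucker 0) are the standard illustrative values.
PAYOFF = {
    ("C", "C"): (3, 3),  # both cooperate
    ("C", "D"): (0, 5),  # A cooperates, B defects
    ("D", "C"): (5, 0),  # A defects, B cooperates
    ("D", "D"): (1, 1),  # both defect
}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate on the first round, then mirror the opponent's last move.
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=100):
    """Play the two strategies against each other; return total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300)
print(play(always_defect, tit_for_tat))  # (104, 99)
```

Over 100 rounds, two cooperators earn 300 points each, while the defector wins only its first encounter and then stagnates at mutual punishment, ending with 104. This is the sense in which iteration, with the ever-present possibility of retaliation, makes cooperation the rational choice.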
We have just finished hiking the Milford track in New Zealand. Apart from the indescribable beauty of the mountains, the waterfalls, and the lush forests, I discovered something else: you can live very happily “off the grid”—no cell phone coverage, no internet. The local newspaper was consumed with local concerns: flooding, road maintenance, some petty crime. “The World” section was one page long and was relegated to page 16. I am not talking about a bunch of grim survivalists or some troglodyte society. These are people who are thoroughly adapted to their unique environment. They are intelligent, keenly aware of what’s going on in “the outside world”. They are highly literate (imagine, they actually read books). They are fiercely independent yet deeply concerned about the welfare of their community. Imagine, there is no cell phone and no internet, and they are genuinely happy.
How does this relate to our cyborg's dilemma? The answer is that player A, the cyborgs, will not be free from retaliation by player B: humans who live beyond the reach of the cyborg attack, like those off-the-grid people living in remote parts of New Zealand and elsewhere. An intelligent cyborg will know this and will likely elect to cooperate. But what kind of cooperation will it be if player A can dictate the terms? I don't know if game theory can provide the answer, but evolution may.
The perfect social order of the beehive
How can the queen bee maintain her privileged position in the hive without even the tiniest buzz of discontent, let alone all-out rebellion? She is the only one who gets to transmit her DNA to the offspring, a major evolutionary advantage. She is in total control of the division of labor, determining who will clean the hive, who will nurse the larvae, who will be the soldiers that defend the hive, and who will go out foraging.
How does she accomplish such total control, with absolute cooperation from every individual in the hive? A paper in PLOS ONE has the answer: through a clever combination of pheromones. The young bees are attracted to the queen's sweet nectar and are thus exposed to the Queen Mandibular Pheromone (QMP), which inhibits the development of their ovaries, primes them for behavioral maturation, and confines them to the tasks of nursing the larvae and cleaning the hive. Once they mature, they graduate to foragers and soldiers. Their behavior is controlled by a different chemical signal; surprisingly, an alarm pheromone. Through this pheromone, the queen, indirectly through her soldiers, exerts control over the foragers' behavior. Exposed to the alarm pheromone, foragers lose their associative learning, so important in finding their way to their foraging grounds, and stay in the hive to defend it from intruders. When the emergency is over, the soldiers stop emitting the alarm pheromone, and the foragers regain their capacity to learn and get back to work. Perfect control. Perfect cooperation, albeit not through free will but through queenly dictate enforced by molecules.
Co-opt and cooperate
If future cyborgs are as smart as we think they are going to be, they will probably resolve their dilemma as to what to do with us humans by taking a chapter from social insects like the honeybee. They will co-opt and handsomely reward "technocrats": computer scientists, social scientists, statisticians adept at big data analytics. These chosen few will carry out the organization of a perfectly functioning society, with maximum efficiency and minimum "friction" from such "progressive" ideas as free elections or true democracy.
Wealthy people who conform will send their children to Harvard and Yale (yes, those will still exist and will get big government grants), and their children will become CEOs and politicians. There will still be millionaires and billionaires. There will still be a working class with poor education to ensure an ample supply of gardeners and cooks. But everyone, from politicians and CEOs to the lowliest laborer, will be controlled, albeit not by chemicals but by bits of digital information. The communication industry, controlled by cyborgs, will daily distract us with inane entertainment and trivial issues, not too dissimilar from today.
Better slave than dead?
Remember the days of the "Red Scare"? People were scared out of their wits by the Reds undermining our freedom and making us all Bolsheviks. The battle cry of the day was "Better dead than red". Of course, those days are over and we are all a lot wiser today. Right?
Think again. Substitute "Cyborg Society" for "Big Money Society" and you've got basically the same thing. In fact, if you ask yourself who you would rather have controlling environmental policy, an all-knowing cyborg or the Koch brothers, the choice is pretty clear. Is there any doubt who would make better educational policy: an educated cyborg or the Mississippi Board of Education? Would a rational cyborg have started the war in Iraq? I could go on and on, and I am sure you could add countless examples of your own.
So yes, we'll be manipulated, just like today. But at least the manipulators will be rational, and may even develop a dollop of compassion, as sentient beings are prone to do.