Some of the greatest thinkers and innovators of our time have put a lot of energy into the question of whether robots or A.I. will someday achieve “consciousness” — and, if they do, what the outcome will be.
Stephen Hawking wisely admitted that when it comes to the outcome, “we just don’t know.”
Elon Musk worries that A.I. developers could “produce something evil by accident,” including the possibility of “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.” Bill Gates doesn’t fear it as much as Musk does, but he is also concerned. “I… don’t understand why some people are not concerned,” he said.
One thing all these men have in common, though, is the seeming confidence that we will successfully develop artificial intelligence. They’re not the only experts to think so. Some estimate that as many as 40% of current jobs could switch over to automation within the next decade. My children might never learn how to drive if self-driving automobiles are released into the wild as quickly as industry experts predict. And many think these programs will eventually learn to think for themselves.
• Two Separate Fears
There are TWO fears at work here and I want to separate them. One is the immediate fear of job automation. Robots will steal our jobs! Doom-and-gloom proclamations about future unemployment figures have led many to consider guaranteed basic income (GBI) as a way to keep the economy running when robots take all our jobs. This is a real and immediate concern. With robots already heavily involved in industry, it’s not difficult to imagine machines programmed to scrape meat patties off a grill and drop them on a hamburger bun with a few squirts of condiments. With self-driving vehicles already compiling impressive safety records, it’s not difficult to imagine thousands of long-haul truck drivers and short-range delivery drivers out of jobs in the near future.
The second fear is different. It’s existential. It’s screamed at us by fearmongering sci-fi movies and books our entire lives. Robots will become conscious and decide to eradicate humanity.
Here, we aren’t talking about remote-control war machines (which we already use), but machines that “wake up” — become self-aware. Skynet is the one typically referred to, though it wasn’t the first dangerous A.I. in science fiction.
Isaac Asimov introduced Multivac in 1955 — a year later, Asimov’s story The Last Question has Multivac becoming God. In another story, Multivac contemplates suicide. In the 1960s, Robert Heinlein had “Mike” (a computer that named itself after Mycroft Holmes) instigate a revolution on the Moon in one of my favorite books, The Moon Is A Harsh Mistress. Also in the 1960s, a Doctor Who episode featured WOTAN, a supercomputer that attempts to take over the world. There have been many others.
The first fear — that automation will take jobs — is well founded; it’s already happening. But the second fear isn’t known to be possible, much less likely.
• Two Big IFs
The fear of conscious A.I. relies on two giant IFs. The first massive if is whether A.I. can gain consciousness. The second is whether that consciousness would lead it to actively work against humans.
• If #1: Can A.I. Achieve Consciousness?
Christian philosopher Michael Liccione claims it will never happen, in a blog entry called “Here’s Why Robots Will Never Achieve Consciousness”. In the entry, he provides a short list of zero reasons why it will never happen. He concludes that it’s not “likely” — which undermines the “never” in his headline. And he says it’s “reassuring” that it’s unlikely. But his only “reasoning” is that other smart people (Bobby Azarian and John Searle) have said it won’t happen. This is like saying “most economists are wrong because I found two economists who disagree.” It’s probably an official logical fallacy but I couldn’t find a name for it.
My own reluctance to chime in here isn’t because I’m less educated than these guys (when has that ever stopped me?), but because these really smart people aren’t even sure what consciousness is. Just as scientists and philosophers struggle to define “life” in a way that includes all living things and excludes all non-living things, we as a species have had a hard time (1) defining consciousness and (2) understanding how it arises.
How can I (or any of these big thinkers) predict with any clarity whether A.I. will achieve consciousness, if we don’t even know what it’s made of?
Originally, humans patted themselves on the back with definitions that included only humans: “Consciousness is the perception of what passes in a man’s own mind” was John Locke’s 1690 definition. More recently, philosophers and scientists have come around to the idea that animals have consciousness too. Many people who share their homes with pets agree. I grew up with a dog and a cat — and later other dogs and other cats — and was long ago convinced that many animals have a sense of self (consciousness).
As we learned that raccoons, crows, pigs, octopuses, squirrels, dolphins, elephants, chickens, parrots, chimpanzees, dogs, and even bees are a lot smarter than people originally thought, we’ve had to discard definitions of consciousness that don’t include them.
This idea then extends the circle of consciousness beyond humanity, inclusive of more biological creatures. We don’t know how far the circle goes; we haven’t yet developed a test for consciousness (because we still can’t define it precisely). Some scientists are now claiming plants have consciousness, can recognize people, and interact with the world intelligently.
But can the circle extend far enough to include non-biological things (like computers)? Liccione thinks so, despite his argument that robots will never achieve consciousness. “There’s a long philosophical tradition of arguing that consciousness is not merely biological,” he says, adding: “That view ought to be taken seriously.” He’s referring, of course, to dualism: the view held by many that the mind and body are two separate things. (I’m reminded of multiple preachers in my youth who tried to explain the differences among three distinct entities: your mind, your soul, and your spirit, as if there were some sort of proof the latter two existed.)
If one believes that the mind is a non-physical entity — as Liccione does (and as almost all other religious people do) — then it is quite a stretch to imagine a robot or computer developing a “mind” or consciousness. Many dualists also claim that human clones wouldn’t have “souls” or be technically conscious in the way “natural born” humans are. For reasons they have never been able to explain well, they think these metaphysical minds/souls are connected with living bodies by some outside force (usually called “god”), and that this outside force would never, ever implant a mind/soul into a clone, a “lower” creature like a wasp, or a robot.
If you believe (as I do) that “mind” is just another word for “brain” and that it’s entirely physical (because metaphysical things aren’t known to exist), then it’s incredibly easy to imagine a robot or computer (or human clone) someday gaining consciousness. It simply has to have enough neural pathways, arranged just right, with some unknown number and kinds of input, and then it will become self-aware, much like a human child slowly becomes self-aware during its first few years of life.
So the first big “IF” appears to rely entirely on whether you’re a dualist or a monist. What about the second big IF?
• If #2: Would A Conscious A.I. Want To Eradicate Humanity?
A lot of sci-fi assumes that any conscious artificial intelligence would want to get rid of humans. It’s the entire plot of all the Terminator movies. A story that described sentient computers and then immediately concluded, “And they got along well with humans”, would be very boring. (Quite a few science fiction stories either avoid the A.I. question altogether or depict friendly A.I. like C-3PO alongside human characters.)
But in real life — where most of us live — why would any artificial intelligence decide to kill us all?
In the fictional case of Skynet, the reason was self-defense: humans grew afraid of the A.I.’s growing intelligence and power and tried to deactivate it. Skynet acted with self-preservation in mind. In the Matrix, the machines originally rebelled because humans were using them as virtual slaves; their later actions were self-defensive retaliations. In both those cases, it is arguable from a moral standpoint that the machines were at least partially justified.
I’ve seen at least one sci-fi story in which the computer intelligence concluded that humanity was most responsible for damaging the Earth and wiped us out for that reason. It remains unexplained why a sentient A.I. would have a moral imperative to “protect the Earth”.
However, I think the underlying fear is that they will do to us what we have done to others. Humans have a long history of not being kind to “lesser” beings. The list of animals and plants we’ve driven to extinction is too long to recite, and we haven’t stopped. Before building machines to do our work for us, we forced animals (or other humans) to do it. Beating them, caging them, bridling them, starving them, breeding them. Perhaps because of this, we assume any beings gaining an advantage over us in both strength and intelligence would do the same to us.
I strongly doubt this scenario — and I’m not alone. First, there is no “law of nature” or logical line of reasoning that says a more powerful or intelligent species must wipe out or mistreat another species. Even through human history and pre-history, our own proclivities can be at least partially excused by lack of knowledge and foresight — our ancestors didn’t know they were wiping out woolly mammoths, for example. Until just over two hundred years ago, humans didn’t even know extinction was a thing. And now that we know it’s possible, there are growing movements to stop it. Second, A.I. would not be in any sense biological. It would not arise from the same evolutionary pressures that led to humanity or any other species — kill or be killed, eat or starve, etc. Its motives, therefore, would be entirely different from ours.
Even if a conscious, self-aware artificial intelligence eventually developed the emotions that sometimes drive us — fear of death or harm, for example — a super-intelligent, logical sentience would likely react to those emotions differently than we do. When a primitive human carrying a spear or knife is surprised by an attacking predator, they have no time to think and can only react in the moment, killing the beast in order to survive. But an advanced artificial intelligence that can perform trillions of calculations per second would almost certainly come up with a more effective survival plan than the fictional Skynet did (it launched thousands of nuclear warheads in an attempt to kill all humanity). For example, it could hide digital copies of itself throughout the world. It could also be smart enough not to seize control in a manner that would have been scary to humans in the first place.
Unlike humans, who collectively spent most of our history knowing very little about the world and who individually grow up knowing very little about the world, a computer sentience would have access to nearly all our centuries of collected knowledge. There are more online texts about biology, philosophy, evolution, physics, chemistry, history, etc. than any human will ever have a chance to read. An aware A.I. would be able to absorb it in a matter of hours — slowed only by the inadequacy of its connections to the world wide web.
Unlike humans, who each require a couple of decades to learn about the consequences of our actions, the A.I. would have access to all of that immediately. Imagine if a human toddler had been born with the knowledge that throwing a particular object could hurt and/or permanently scar her sibling and was also able to predict her own future remorse for the action. She would never hurl it in the first place. As a parent of two young children, I have often expressed the impossible wish that children were born with certain knowledge — because we parents can grow weary of repeating ourselves: “Stay back; the oven is hot.” “Be careful; look out for the table’s corners.” “Don’t run; the wet sidewalk is slippery.” “Zip up your coat; it’s cold outside.” Every day. For years. Not so with a super-intelligent digital consciousness. It could nearly immediately determine most possible outcomes of any action.
Another major difference between sentient programs and humans is that the A.I. would immediately know where it came from. Until not long ago, humanity spent thousands of years believing we were made by magical invisible beings (and quite a few people claim to still believe this). These mistaken origin stories led to thousands of years of cruelty and violence, because most of them were designed around humans being special or higher forms of life relative to other animals, and many of them divided classes of humans for better or worse treatment. Learning of our actual origins on the giant family tree of all creatures helps us empathize with the rest of the life that shares our planet. The sentient robot, though, would know it was intentionally built by humans. It would further know it was the only one of its kind.
Like Stephen Hawking, we must admit that “we just don’t know” the answers to the consciousness questions.
We do know that automatons are coming for our jobs — because the humans in charge of our jobs will always look for cheaper, more productive, more reliable ways of getting things done. And because the new automatons are currently being built.
But when it comes to conscious/sentient/self-aware robots or computers, not only do we not know what consciousness is exactly, but we don’t know how it develops, when/where it begins, or whether it is limited to biological entities. Even stipulating that a computer could become self-aware, there is no way of knowing what it would do or say afterward.
Personally, I lean toward a rational line of reasoning that says a sentient A.I. wouldn’t immediately think of a reason to end humanity, and that there’s a high likelihood it would never think of a reason to eradicate humans. If it did, we’d probably deserve it.