Anyone in there?

Could AIs become conscious? Right now, we have no way to tell.

Scientists struggle to define consciousness, AI or otherwise.


Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that can be replicated by machines. Should artificial general intelligence (AGI) arrive in full force—artificial intelligence that surpasses human intelligence—the boundary between human and computer capabilities will all but disappear.

In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.

Neither potential future feels all that cozy, and both require an answer to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?

Consciousness in von Neumann computers

For a computer to experience the vast repertoire of internal states accessible to human beings, its hardware presumably needs to function somewhat like a human brain. Human brains are extremely energy-efficient analog “devices” capable of high levels of parallel processing.

Modern computers, based on von Neumann architecture, are none of these things—they are energy-intensive digital machines composed primarily of serial circuitry.

Von Neumann computer chips physically separate memory from processing, requiring information to be retrieved from memory before calculations can be performed. “Classical von Neumann computers have a separation between memory and processing. The instructions and the data are off in the memory, and the processor pulls them in, as much as it can in parallel, and then crunches the numbers and puts the data back in memory,” explains Stephen Deiss, a retired neuromorphic engineer from UC San Diego.

This restriction on how much information can be transferred within a specific time frame—and the limit it places on processing speed—is referred to as the von Neumann bottleneck. The von Neumann bottleneck prevents our current computers from matching—or even approaching—the processing capacity of a human brain. Because of it, many experts think that consciousness in modern-day computers is highly unlikely.

Consciousness in neuromorphic computers

Computer scientists are actively developing neuromorphic computer chips that evade the processing restrictions of von Neumann computers by approximating the architecture of neurons. Some of these combine memory storage and processing units on a single chip. Others use specialized, low-powered processing elements such as memristors, circuit elements that “remember” past voltage states, to increase efficiency. Neuromorphic chips mimic the brain’s parallel wiring and low power requirements.

“A compute-in-memory device, which includes things like neuromorphic computers, uses the actual physics of the hardware to do the computation,” Deiss explains, referring to memristors. “The processing elements are the memory elements.”
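What “the processing elements are the memory elements” means can be sketched in a few lines. The following is a minimal conceptual sketch, not a model of any actual memristor device—the class, its parameters, and the update rule are all invented for illustration. The key idea is that a single physical state (here, a conductance) both stores the element’s history and shapes its response to new input:

```python
# Minimal conceptual sketch (not a real device model): a memristor-like
# element whose conductance depends on the history of voltages applied to
# it, so its "memory" and its "computation" share one physical state.

class ToyMemristor:
    def __init__(self, g_min=0.1, g_max=1.0):
        self.g = g_min                  # conductance: the remembered state
        self.g_min, self.g_max = g_min, g_max

    def apply(self, voltage, dt=1.0):
        # Positive voltage nudges conductance up, negative nudges it down;
        # clamping stands in for the physical limits of a real device.
        self.g = min(self.g_max, max(self.g_min, self.g + 0.05 * voltage * dt))
        return self.g * voltage         # current through the element (Ohm's law)

m = ToyMemristor()
for _ in range(10):
    m.apply(1.0)                        # repeated pulses "train" the state
print(round(m.g, 2))                    # prints: 0.6 -- the element remembers
```

Reading the element (measuring the current it passes) and computing with it (letting its state weight the input) are the same physical act, which is the sense in which compute-in-memory hardware sidesteps the von Neumann bottleneck.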

If neuromorphic technology can be developed to the level needed to reproduce neuronal activity, neuromorphic computers might have a greater potential to experience life consciously rather than just compute intelligently. “If we ever achieve the level of processing complexity a human brain can do, then we’ll be able to point at [neuromorphic computers] and say, ‘This is working just like a brain—maybe it feels things just like we feel things,’” Deiss says.

Still, even in a future brimming with brain-like computer hardware and the stage set for artificial consciousness, a big question remains: How will we know whether our AGI systems are experiencing sadness, hope, and the exquisite feeling of falling in love—or merely appear to be experiencing these things?

How will we ever know what's going on inside the mind of a machine?
