
Why a computer will never be truly conscious

Many advanced artificial intelligence projects say they are working toward building a conscious machine, based on the idea that brain functions merely encode and process multisensory information. The assumption, then, is that once brain functions are properly understood, it should be possible to program them into a computer. Microsoft recently announced that it would spend US$1 billion on a project to do just that.

So far, though, attempts to build supercomputer brains have not even come close. A multi-billion-dollar European project that began in 2013 is now largely understood to have failed. That effort has shifted to look more like a similar but less ambitious project in the U.S., developing new software tools for researchers to study brain data, rather than simulating a brain.

Some researchers continue to insist that simulating neuroscience with computers is the way to go. Others, like me, view these efforts as doomed to failure because we do not believe consciousness is computable. Our basic argument is that brains integrate and compress multiple components of an experience, including sight and smell – which simply can’t be handled in the way today’s computers sense, process and store data.

Brains don’t operate like computers

Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work.
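
To make that contrast concrete, here is a minimal Python sketch. It is a toy illustration rather than anything from neuroscience or from this article: the names and numbers are invented, and the Hebbian-style update is only a stand-in for the adaptive process described above.

```python
# Toy contrast (invented names, not a biological model):
# a computer stores a record verbatim in an addressed memory block,
# while a brain-like system changes connection strengths with every
# new experience, so storing and processing are the same adaptive act.

# Computer-style storage: the data sit unchanged until overwritten.
memory_blocks = {}
memory_blocks["experience_001"] = {"sight": "round table", "smell": "coffee"}

# Hebbian-style adaptation: repeated co-activation strengthens a link.
connection_strengths = {("table_unit", "coffee_unit"): 0.1}

def experience(pre, post, learning_rate=0.05):
    """Each experience nudges a connection; nothing is stored verbatim."""
    key = (pre, post)
    connection_strengths[key] = connection_strengths.get(key, 0.0) + learning_rate

for _ in range(10):                      # ten co-occurrences of table and coffee
    experience("table_unit", "coffee_unit")

print(memory_blocks["experience_001"])   # exactly what was written
print(connection_strengths)              # a strength that drifted with experience
```

The point of the sketch is only that the second store has no fixed record to read back: what it "knows" is spread across strengths shaped by a history of interactions, not deposited in a block.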

The mind actively explores the environment to find elements that guide the performance of one action or another. Perception is not directly related to the sensory data: A person can identify a table from many different angles, without having to consciously interpret the data and then ask their memory whether that pattern could be created by alternate views of an item identified some time earlier.

[Image caption: Could you identify all of these as a table right away? A computer would likely have real trouble. L to R: pashminu/Pixabay; FDR Presidential Library/Flickr; David Mellis/Flickr, CC BY]

Another way to see this is that even the most mundane memory tasks are associated with multiple areas of the brain – some of which are quite large. Skill learning and expertise involve reorganization and physical changes, such as changing the strengths of connections between neurons. Those transformations cannot be replicated fully in a computer with a fixed architecture.

Computation and awareness

In my own recent work, I’ve highlighted some additional reasons that consciousness is not computable.

[Photos: Werner Heisenberg (Bundesarchiv, Bild 183-R57262/Wikimedia Commons, CC BY-SA), Erwin Schrödinger (Nobel Foundation/Wikimedia Commons) and Alan Turing (Wikimedia Commons).]

A conscious person is aware of what they’re thinking, and has the ability to stop thinking about one thing and start thinking about another – no matter where they were in the initial train of thought. But that’s impossible for a computer to do. More than 80 years ago, pioneering British computer scientist Alan Turing showed that there can be no general method for proving whether an arbitrary computer program will ever stop on its own – and yet that ability is central to consciousness.

His argument is based on a trick of logic in which he creates an inherent contradiction: Imagine there were a general process that could determine whether any program it analyzed would stop. The output of that process would be either “yes, it will stop” or “no, it won’t stop.” That’s pretty straightforward. But then Turing imagined that a crafty engineer wrote a program that included the stop-checking process, with one crucial element: an instruction to keep the program running if the stop-checker’s answer was “yes, it will stop.”

Running the stop-checking process on this new program would necessarily make the stop-checker wrong: If it determined that the program would stop, the program’s instructions would tell it not to stop. On the other hand, if the stop-checker determined that the program would not stop, the program’s instructions would halt everything immediately. That makes no sense – and the nonsense gave Turing his conclusion: there can be no way to analyze a program and be entirely certain whether it will stop. So it’s impossible to be certain that any computer can emulate a system that can definitely stop its train of thought and change to another line of thinking – yet certainty about that capability is an inherent part of being conscious.
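
Turing’s construction can be written out almost literally in code. The sketch below is mine, not his: it assumes a hypothetical would_stop function standing in for the stop-checker, and shows how the crafty program forces any answer that function could give into contradiction.

```python
# A sketch of Turing's contradiction. Suppose, for the sake of argument,
# that a perfect stop-checker existed: would_stop(program, argument)
# answers True if program(argument) eventually stops, False otherwise.
# Turing's point is that no such function can be written; the body below
# is only a placeholder for the assumption.

def would_stop(program, argument):
    """Hypothetical general stop-checker (cannot actually exist)."""
    raise NotImplementedError("no general stop-checker can be written")

def crafty(program):
    """The crafty engineer's program: do the opposite of the prediction."""
    if would_stop(program, program):   # checker says "yes, it will stop" ...
        while True:                    # ... so keep running forever
            pass
    else:                              # checker says "no, it won't stop" ...
        return                         # ... so stop immediately

# Now consider crafty run on itself, i.e. crafty(crafty):
#  - If would_stop(crafty, crafty) returned True, crafty(crafty) would loop
#    forever, so the checker was wrong.
#  - If it returned False, crafty(crafty) would stop at once, wrong again.
# Either answer contradicts itself, so the assumed stop-checker cannot exist.
```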

Even before Turing’s work, German quantum physicist Werner Heisenberg showed that there is a distinct difference between the nature of a physical event and an observer’s conscious knowledge of it. This was interpreted by Austrian physicist Erwin Schrödinger to mean that consciousness cannot come from a physical process, like a computer’s, that reduces all operations to basic logic arguments.

These ideas are confirmed by medical research findings that there are no unique structures in the brain that exclusively handle consciousness. Rather, functional MRI shows that different cognitive tasks happen in different areas of the brain. This has led neuroscientist Semir Zeki to conclude that “consciousness is not a unity, and that there are instead many consciousnesses that are distributed in time and space.” That type of limitless brain capacity isn’t the sort of challenge a finite computer can ever handle.

