Can there be artificial consciousness?


For now, humans are the only beings we know to possess that quality of mind we call consciousness. What exactly it is – to be conscious, aware, sentient – is hard to say. Even harder is the question of whether other beings can be conscious too. So far that question has mostly been asked about animals; but what about machines?

The Turing test

In his 1950 paper Computing Machinery and Intelligence, Alan Turing opened with: “I propose to consider the question, ‘Can machines think?’” In it he suggested one way of determining whether a machine can think. It became known as the Turing test and goes as follows: a human judge holds a conversation, separately, with both another human and a computer (which is able to communicate in the judge’s language), without being told which is which. Afterwards, the judge has to determine which participant was the human and which was the machine. If the machine can deceive the judge into thinking it is human, the machine is said to have passed the test.

But is such a test enough to decide whether a machine can think? Many experts in the fields of artificial intelligence and philosophy doubt it, and the philosopher John Searle in particular has influenced this discussion heavily with his Chinese room thought experiment. In it, he imagined a person without any knowledge of the Chinese language being placed inside a room and handed letters written in Chinese. Their task is to respond to these letters using a catalogue which contains all possible Chinese conversations, along with instructions stating which Chinese symbols are an appropriate answer to which other symbols. To simulate a conversation, the person simply looks up the incoming symbols in the catalogue and copies out the corresponding answer. The person outside, sending and receiving the letters, would be convinced they were conversing with a real Chinese speaker, while the person inside the room doesn’t have the slightest clue what the symbols mean. So you wouldn’t say that this person understands Chinese. By the same argument, you also wouldn’t say that a machine passing the Turing test truly understands the language it is processing; it merely follows its instructed rules.
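The mechanism Searle describes is essentially a lookup table. The following Python sketch (the rules and phrases are invented for illustration; Searle’s imagined catalogue would of course be astronomically larger) makes the point concrete: the program produces plausible replies purely by matching symbols, and nothing in it represents what any symbol means.

```python
# A toy Chinese room: incoming symbols are matched against a rule book
# and the listed reply is copied out. No meaning is represented anywhere.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",       # invented example rule
    "你叫什么名字": "我叫小明",     # invented example rule
}

def room_operator(message: str) -> str:
    """Return the scripted reply for a message, or a stock fallback."""
    return RULE_BOOK.get(message, "请再说一遍")  # "please say that again"

print(room_operator("你好吗"))
```

The operator (and the program) can pass for a speaker exactly as long as the catalogue covers the conversation, yet understands nothing – which is Searle’s point.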

Searle thus concludes that a machine – regardless of how intelligent it seems – is still just shuffling symbols around and won’t achieve true consciousness. This is a good counterargument against the Turing test, but not against the possibility of machine consciousness in general.

What is the cause of consciousness?

In order to determine whether machines could achieve consciousness, we’d first need to clarify what consciousness is and what causes it. Only once we’ve found that out can we say whether machines can also develop it. At least in theory.

There are plenty of different approaches to explaining consciousness, from religious and supernatural ones to philosophical and scientific, neurological ones. But basically they all fall into two main camps: dualism versus materialism.

Dualism argues that we, as thinking beings, consist of two parts made of completely different substances: body and mind. While the body is a material manifestation, bound by physical laws and mortal, the mind is immaterial, outside any physical framework and perhaps even immortal. On this view, the substance of mind is pure, supernatural consciousness.

Materialism, on the other hand, argues that our mind arises from physical matter organised into neurons and the brain – the picture supported by advances in neuroscience. The brain doesn’t merely produce our mind; it is our mind. All our emotions, memories and thoughts are physical processes – dauntingly complex ones, but physical nevertheless. Consciousness, too, would then be a physical phenomenon.

Which of these views is correct is crucial for the question of whether artificial intelligence could develop consciousness.

Should dualism hold true and consciousness be something immaterial, then machines could never develop true consciousness as we humans have it. AI could only ever simulate it.

Should materialism be true, on the other hand, then there is at least the theoretical possibility of building an artificial system – a machine – that could develop consciousness just as our brains do.

Projections of research in artificial intelligence

If you’re interested in the current state of artificial intelligence research and its outlook, we highly recommend the TEDx talk by one of the leading figures in AI research and futurology, Ray Kurzweil:

[Embedded video: Ray Kurzweil’s TEDx talk]

EDIT: Upcoming discussion about AI and singularity in Vienna

This just in: an event organised by Erik Unger will take place next Wednesday, 29 January. It is “A meetup for open minds interested in the technological singularity, exponential developments, artificial intelligence, robotics, space, radical life extension and transhumanism. We’ll have a presentation with an introduction to those topics, followed by a discussion. Drinks and mingling after the official program.”
Here’s the Facebook event. We will be there!

