Future Science: Might robots one day develop self-awareness?

January 12, 2016
[Photo: a Nao robot]

Selmer Bringsjord, a US-based academic working in the fields of Computer Science and Cognitive Science, has been trying to give robots some degree of self-awareness. Is this possible, or even reasonable? In this interview with L’Atelier, Professor Bringsjord talks about these issues.

It was last July that a Nao robot displayed its first signs of self-awareness – or at least gave indications that it was aware of itself and its existence. Behind this startling experiment was Professor Selmer Bringsjord, who heads the Rensselaer Artificial Intelligence and Reasoning Laboratory in New York state.

Professor Bringsjord’s team put three programmed Nao robots through a test. Each robot has the ability to speak, but the researchers had pressed the mute button on two of them and then told all three that two of them had been given ‘dumbing pills’. The robots were asked to work out which of them had received the pills. Since none of them could know at that point, all three tried to answer “I don’t know”. Two remained silent, while the third said it aloud. Then, having recognised its own voice, it corrected itself: “Sorry, I know now. I was able to prove that I was not given a dumbing pill.”
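The logic of the test itself is simple enough to sketch in code. The snippet below is an illustrative reconstruction only: Bringsjord’s actual system reportedly works by constructing formal logical proofs rather than running a script like this, and the class, method and robot names here are hypothetical.

```python
# A minimal sketch of the reasoning behind the 'dumbing pill' test.
# Illustrative reconstruction only -- not Bringsjord's actual code,
# which builds formal proofs; all names here are hypothetical.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted  # True if this robot was given a 'dumbing pill'

    def try_to_say(self, sentence):
        """Attempt to speak; a muted robot produces only silence."""
        return None if self.muted else sentence

    def answer_riddle(self):
        # Step 1: no robot can know in advance which two were muted,
        # so the only honest answer is "I don't know".
        attempt = "I don't know."
        heard_own_voice = self.try_to_say(attempt) is not None

        # Step 2: a robot that hears the answer spoken in its own voice
        # can prove it was not muted -- but only if it recognises that
        # voice as its own, which is where self-awareness comes in.
        if heard_own_voice:
            return (attempt + " Sorry, I know now. I was able to prove "
                    "that I was not given a dumbing pill.")
        return None  # muted robots remain silent


robots = [Robot("R1", muted=True), Robot("R2", muted=True),
          Robot("R3", muted=False)]
for robot in robots:
    print(robot.name, "->", robot.answer_riddle())
```

The crucial step is the second one: the robot has to connect the voice it hears back to itself – to use ‘I’ correctly – and that link between perception and self-reference is exactly what the test probes.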

This intriguing evidence of self-awareness in a robot was followed by an open letter signed by Elon Musk, Stephen Hawking, Steve Wozniak and several others warning against the potential dangers of creating autonomous machines. We therefore asked Selmer Bringsjord to enlighten us on the subject. Our basic question: will robots one day achieve self-awareness, or even – to take the idea a stage further – have a soul?

Would it be true to say that you’re working on giving robots a soul?

Yes, we could say that I’m working on giving robots the behaviours that one usually associates with having a soul. Typically, what we would call having a soul is based on observable behaviour and attitudes. I’ve developed a test to identify such behaviour, and I’ve put my robots through it.

So to what extent might your robots be self-aware? Do your tests reveal any limits?

Well, we can talk about two levels of ‘consciousness’, i.e. self-awareness.

The first level is associated with a structure and processes linked to taking decisions. It’s when an ‘agent’ is taking decisions that we can see whether or not it is self-aware. The second level of consciousness is all about perceptions, inner feelings. If, for example, you like to ski fast, then once you’re up on your skis and moving at high speed, you feel the experience, you enjoy it. You internalise what is happening, you make it your own. I don’t think robots could ever have this type of feeling. We can, however, programme robots so that they do exhibit signs of self-awareness – behaviour arising from self-awareness – or reasoning.

In philosophy, the first, mechanistic, level of consciousness is often known as ‘access consciousness’. The second is called ‘phenomenal consciousness’ or ‘subjective consciousness’. If we’re talking about non-subjective types of consciousness, there’s certainly no limit to what we can get robots to do. If you watch Blade Runner, which is in many ways the forerunner of all the recent futuristic science fiction films presaging widespread use of artificial intelligence, you can see the basic premises. Nowadays we’re able to design android robots that would be hard to distinguish from humans on the basis of their behaviour and reactions alone.

Is your work inspired by futuristic science fiction?

Yes, of course. I was talking to my colleague Jean-Gabriel Ganascia [Note: interview with J-G Ganascia coming shortly on L’Atelier Trends] about this relationship between literature, narrative art – including cinema – and artificial intelligence. There are certainly links. US science fiction movies are very rich and inspiring in this respect.

Literature, however, represents a real challenge for artificial intelligence. Could a machine ever be creative enough to compose a literary work? Wouldn’t robots’ creative capacity be too limited? And yet experiments are already under way to try to get robots to write articles…

Yes, I’ve heard about that. I’ve met researchers who are working on such things – not just in the field of university research, but also at companies. I’ve also written on the subject of what robots can and cannot do in terms of writing stories or fiction.

Of course, the job of a journalist is to write about realities, report facts…

Well, I don’t think machines will be able to write highly creative prose, whether fiction or fact. But they could write simple factual prose, yes. And machines will certainly be brought into this field. You could, for example, see them writing up summaries of sports events. In short, robots will be useful for basic content, as is already the case in other fields. Voice-recognition systems are already particularly good at carrying out routine tasks in cars – starting the engine, opening the windows, turning on the radio and adjusting the volume, and so on. This may seem trivial right now, but this incursion of artificial intelligence into the automobile industry will spread to all sectors… including journalism.
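As a concrete illustration of the kind of ‘simple factual prose’ Bringsjord has in mind, here is a minimal template-based generator for football results, the approach behind early robo-journalism tools. It is a hypothetical sketch: the function name, the phrasing rules and the sample match are all invented for illustration, not taken from any real product.

```python
# A toy template-based generator for match summaries -- a sketch of the
# 'simple factual prose' discussed above; everything here is hypothetical.

def summarise_match(home, away, home_score, away_score):
    """Turn raw match facts into a one-sentence summary."""
    if home_score == away_score:
        return f"{home} and {away} played out a {home_score}-{away_score} draw."
    winner, loser = (home, away) if home_score > away_score else (away, home)
    margin = abs(home_score - away_score)
    verb = "edged" if margin == 1 else "beat"  # vary wording by margin
    return (f"{winner} {verb} {loser} "
            f"{max(home_score, away_score)}-{min(home_score, away_score)}.")

print(summarise_match("Lyon", "Marseille", 2, 1))
# -> Lyon edged Marseille 2-1.
```

Everything here is fill-in-the-blanks: the program only restates facts it is handed, which is precisely the ‘basic content’ Bringsjord expects machines to take over first.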

What would you say to the suggestion that we’re digging our own graves? Aren’t we playing Frankenstein here?

Oh, the Frankenstein story has very little to do with artificial intelligence. The ‘technology’ that Mary Shelley describes in her novel is actually all about alchemy. In the novel, Frankenstein didn’t really know what he was doing. This is a long way away from an initiative involving explicit programming – creating a programme following specific principles in order to obtain a precise result.
On the other hand you could compare the Frankenstein myth to machine learning. These days, when people study machine learning, the final results appear to come out of a sort of ‘black box’: they’re not really sure how those results were achieved. When we’re talking about autonomous machines, there are three things you can single out that invite the comparison with Frankenstein. Firstly, you don’t know how the machine works, because it operates as a ‘black box’; secondly, the machine has a certain degree of autonomy; and thirdly, the machine is powerful, rather like Frankenstein’s monster.
So if you have a powerful, autonomous system and you don’t understand how its intelligence works, then you could indeed be in a very tricky situation.

Last summer there was an accident in Germany. A worker was killed due to a fault in a factory machine. Don’t we need above all to make all these machines safer?

Yes, I heard about that terrible accident, but it had nothing to do with a sophisticated autonomous machine.
Nevertheless, the incident does raise an interesting point about the rise of autonomous mobile systems, such as self-driving cars. It’s true that we’re not investing enough in safe systems. The only way to ensure that this type of accident doesn’t happen is to verify these systems systematically, and unfortunately we’re not investing enough in that sort of ongoing work.

© L’Atelier BNP Paribas