What rights should robots have?

June 27, 2016
Robot law

As artificial intelligence technology improves, robots are growing ever more similar to human beings. So should there be ‘robot rights’ to protect them from people?

In 1942, Russian-born American science fiction writer Isaac Asimov drew up his ‘Three Laws of Robotics’: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

For almost 75 years, these clauses have inspired research and thinking on robot rights. They were even taken up in the first ‘Ethics Charter for Robots’, drawn up by South Korea in 2007. However, Asimov’s laws now seem rather simplistic and obsolete, as they are centred on humans rather than robots, and thinking on the ethics and rights of machines is starting to go further. As robots increasingly come to resemble people, both in intelligence and appearance, some digital technology researchers argue that it is time to provide them with the legal status they currently lack. Dutch computer-science expert Ronald Siebes puts it this way: “Robots are starting to look so much like humans that we also have to think about protecting them in the same way as we protect humans.”

Formal rights needed to control our use of AI?

“A robot is an unidentified legal object,” underlines Alain Bensoussan, a lawyer known for his advocacy of robot rights. “The more independent the machine is of its owner, the closer it gets to meriting human rights. I’m not responsible for my car, which can drive itself to Toulouse, in the same way that I’m responsible for my toaster,” he explained to French weekly news magazine L’Express in April 2015.

As L’Atelier has previously reported, robots look increasingly likely to act as companions in our daily lives. Nowadays they are able to converse and show ‘artificial empathy’, French psychoanalyst and psychiatrist Serge Tisseron explained to L’Atelier in January. Their ability to understand voice intonation and human facial expressions is bringing them closer to us and, according to engineer Nell Watson, responding to a question at The Next Web Conference held in Amsterdam in late May, it will then “be very difficult not to get attached to them or to love them.” She reckons that people will very soon have the same kind of relationship with machines as they now have with their pets. And just as animal rights exist and cruelty to animals is sanctioned by law, one might argue that robots should also enjoy rights designed to prevent mistreatment or other abuses.

Lawyer Murielle Cahen suggests in her blog that “the act of physically striking a robot would be sanctioned by law, not to avoid material damage in itself but rather to safeguard human feelings and uphold the interests of our society.”

It is clear that human beings having unfettered control over machines could well lead to excesses, such as using them with malicious intent. AI is, for example, capable of analysing a person, generating a profile and then tracking that person. It would amount to abuse if someone were to order an AI system to look for passwords on the basis of elements in the life of a certain person – such as name and date of birth, which are the most common bases for passwords. A robot would also be able to absorb or generate false information that could pass for true, and transmit it to another robot. Fraud and false documents will become common currency in the near future, predicts Nell Watson, which, she forecasts, “will encourage adoption of technologies such as the blockchain, which are very difficult to falsify.” Moreover, if we want robots to behave responsibly, we will have to recognise them as people, she suggests.

Basically, Robot Laws will need to establish ethical norms in the relevant field of action and determine the extent of robots’ autonomy in taking decisions. Does a robot have the right to say ‘no’, for instance, or must it do anything and everything a human being asks of it? What happens if a child trains a robot to play out in the road, or if a drunken driver wants to take the wheel? “If a robot is to be an effective servant, then it has to be able to say ‘no’,” argues Nell Watson. The issue of self-defence follows on from this: does a robot have such a right? If a robot is protecting a human being or goods, can it go as far as using lethal force? Google recently announced its intention to create an ‘off’ button for artificial intelligence, which opens up a whole new minefield when it comes to using AI. Under what conditions could owners – or non-owners – press the button? Would the robot have the right to stop people from deactivating it?

These are difficult questions, and they seem to justify establishing Robot Laws today. Serge Tisseron told the Next Web Conference audience: “What I’m worried about is that today we’re not paying enough attention to creating a legislative and educational framework. Scientific progress is inevitable. We need to think about educational and regulatory reference points right now. We mustn’t wait until robots are already there to start thinking about the best ways to manage them.” Ronald Siebes added: “Since a robot is today able to understand us, why don’t we put ourselves in the place of the robot?”

Robot ‘ethics’: a thorny concept

The first step is to define a robot’s ethical field of application. Several bodies are already working on this: in France, the National Centre for Scientific Research (CNRS) and CERNA, an organisation set up to look into the ethics of scientific research and digital technologies; at European level, the EU’s RoboLaw project; and at world level, Google and OpenEth, an ‘ethical explication engine’ that aims to crowdsource ethical heuristics for autonomous systems.

Nell Watson, who co-founded OpenEth, feels it is essential that artificial intelligence should take on board the ethics of business and privacy. “A machine must respect the wishes of a human, be transparent, and at the same time respect privacy. It must be able to identify the information its user doesn’t want to divulge, for example health or financial data,” she told the conference audience. Many researchers agree on the question of transparency: a robot should not deceive its users but should tell them openly about the actions it has undertaken and the reasons why it did so.

So what is the legal standpoint on all this? Lawyer Alain Bensoussan suggests creating the status of ‘robot person’ along the lines of the ‘moral person’ – i.e. a registered company or organisation. He feels it is essential to be able to identify robots, for example through a social security number – “We use the figure 1 to stand for men and 2 for women, so why not 3 for robots?” – and to give each one a legal representative and an insurance policy. The policy would then provide compensation for third parties. In return, a Robot Charter entitling robots to respect and dignity would be drawn up.

When it comes to responsibility for actions and legal liability, Alain Bensoussan sees this as a cascade from the robot manufacturer to the owner and the user. Other experts working on these issues suggested the idea of a ‘black box’ which could trace responsibility “more objectively” in the event of an accident and raised the issue of the liability of those responsible for physically maintaining robots.

Drawing up an ethical framework for robot use is no easy task, given that not everyone shares the same values or defines morality in the same way; these of course differ according to culture, generation, religion, education, and so on. Nell Watson sketched out the process: “You have to deconstruct concepts and explain them in the most basic way possible, so that everyone can understand. You have to establish the primary rules, a bit like those that you first learn as a child.” Then you have to explore the concepts and their variations. The objective is not to create a single ethic but to “map out a space and potential links between machines and other agents,” she argued. And who will take on this task? Ethical decisions cannot be left to researchers alone but must be sifted through at all levels of society, stressed Nell Watson: “I firmly believe in the wisdom of the crowd. We need poets, politicians, priests, programmers, and so on, so that we can discuss these topics all together.”

So should robot rights be something like human rights, or company rights? Perhaps they should draw on both. Nell Watson believes that rights between humans and machines should be as equitable as possible, but not necessarily equal, in the same way that children and their parents, doctors and their patients, and companies and their employees have differing rights. “There’s always a balance, a happy medium, between rights and duties for everyone. We need to apply this to robots as well, because the worst behaviours in human civilisation arise from the overriding idea that one group of individuals should be treated differently from another.”

This engineer and thinker, who is a member of the faculty at California-based Singularity University, is however deeply concerned about how readily people will welcome the integration of robots into society. “We cannot predict what machines will actually be like, but human beings are easier to predict. We’ve had some sad examples in the past. I just hope that this time civilisation will be able to cope with this new emerging framework for machine morality,” she concluded.
