The road to AI regulation might still be a long one

March 09, 2017
Keywords: Prospective, AI, Europe

Over 2,300 experts in artificial intelligence, working at the behest of the Future of Life Institute (FLI), have drawn up a set of 23 ethical principles providing guidelines for the ongoing development of AI. But what does this mean in practical terms for future innovations, and what direction should they take?

Technological innovations often tend to outpace the civic and human activities they were originally intended to support. Any attempt to change the world through technology must therefore take ethical considerations on board if such advances are to continue. For some years now, the exponential development of AI and machine learning has raised questions about the viability – and the desirability – of these achievements. Back in 1940 Isaac Asimov drew up his Three Laws of Robotics, which state: “1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” As AI technologies improve, many scientists are becoming worried that one day artificial intelligence will surpass, or even replace, Man. Even a scientist as eminent as Stephen Hawking was moved to tell the BBC of his fears that “the development of full artificial intelligence could spell the end of the human race.”

However, the latest survey from Deloitte shows that people from Generation Y (those born between the early 1980s and the early 2000s – also known as Millennials) do on the whole trust artificial intelligence. Some 62% of those polled agreed that AI is a driver of change, progress and productivity gains. Moreover, a majority of the Millennials interviewed believe that these new systems have the potential to stimulate creativity and open up horizons for future innovation. In fact, French people appear to be more optimistic about progress in AI and its repercussions for the economy than their peers elsewhere in Europe. At any rate, given the interest shown by young people, it is a fair bet that companies will step up their efforts to speed up research and development in the AI field. So, as the race to develop artificial intelligence hots up, it is not unreasonable to take a step back and examine the huge risks which humanity might be running.

With this in mind, the Boston-based volunteer-run research and outreach organisation Future of Life Institute hosted a major gathering with a view to agreeing on guidelines for AI research and AI-linked projects. The aim was not so much to draw up a set of statutes governing AI as to build a framework for AI practices and the direction the work is taking. The stated objective of the FLI’s 23 principles is to provide “a framework to help artificial intelligence benefit as many people as possible”. So far so good, but it remains to be seen whether and how the procedures set out in the framework can be enacted by the competent authorities, whether the principles can or should be made legally binding, and what the practical consequences will be for future innovation in this field.

Collective impulse from experts, entrepreneurs and scientists

The idea of enacting rules to examine the benefits and consequences of AI-linked activities in both the near and longer term is nothing new. Readers will remember, for instance, that last September digital giants Google, Facebook, Amazon, Microsoft and IBM founded the Partnership on AI, a collaborative organisation whose basic aim is to lead ethics debates on AI, under the philanthropic guise of educating the public. Here is proof, if proof were needed, that governance of AI is a fundamental issue.

However, while the stated aim of this agreement is to highlight ethical and moral guidance for progress in this field, in reality the partnership is a lobby working to promote the member companies’ commercial interests. By taking the lead on deciding the direction AI development should take, the partners are looking to ward off in advance any moral criticism they might face.

It is clear that the debate over AI has been monopolised by the tech giants for quite some time now. But what is new about the FLI principles is the form the discussions have taken and the participants who have mobilised to address the issues. The Future of Life Institute, which initiated the process, was founded by a team including Max Tegmark, a cosmologist at MIT; Jaan Tallinn, a co-founder of Skype; and theoretical cosmologist Anthony Aguirre. FLI’s main mission is to orient scientific and technological research around “safeguarding life and developing optimistic visions of the future”, thus offering a human-centric framework whose basic purpose is to rationalise and ‘humanise’ technological innovation. It is easy to see how important this approach is when it comes to the future of AI development. FLI works in close collaboration with scientists, university faculty and philanthropists worldwide, organising frequent conferences to bring them together, such as the 2015 AI conference in San Juan, Puerto Rico, and, more recently, the EdGlobal X conference in Boston last May.


Wake-up call for tech leaders?

It is worth emphasising that the most recent FLI conference, Beneficial AI 2017, held in January at Asilomar in California, brought together scientists and academic researchers specialising in robotics as well as tech industry figures, including PayPal co-founder Elon Musk, IBM researcher Francesca Rossi and Demis Hassabis, co-founder of British artificial intelligence company DeepMind, which was subsequently acquired by Google. Such attendance guaranteed a range of opinions and perspectives broad enough to draw up principles of common interest. The drafting of the 23 principles drew both on existing expert studies and documents, including White House reports and Stanford University papers, and on proposals from major firms in the sector and influential lobbies.

Nevertheless, the only result to have come out of these debates so far is a simple overall guideline agreeing that these technologies should move in a direction that will enable them to “benefit as many people as possible”. Clearly, this was all about securing basic agreement that there should be an ethical framework, rather than assigning any real responsibility to tech decision-makers. Some commentators have described the text as a ‘practical guide’ to the scope of these principles. In reality, however, it is restricted to underlining the need for moral rules, modelled on human rights, intended to harness artificial intelligence in the service of human intelligence and thus prevent potential abuses as AI systems are taken forward; it puts forward no practical risk-prevention mechanism.

Non-binding principles of doubtful practical application

This lack of ambition to draw up real rules and standards is the text’s major weakness. The principles are not binding, and there is no intention to force compliance or to pursue non-compliant parties in court. The agreement is intended as encouragement rather than a real framework for the governance of AI-based systems. No political body, whether local, national or supranational, has been invited to collaborate on the work, even though such bodies are the only ones with the power to enshrine the principles in law or regulation, and there has been no concrete progress on this front. This leaves the whole issue of the safety of intelligent systems at square one for the moment. Collaboration with elected authorities and political bodies will be necessary if the principles are to be made binding and provide effective regulation of AI development going forward.

However, the Future of Life Institute seems determined to press on along its chosen path. The FLI wants its 23 principles to guide the “major change that is coming, over unknown timescales but across every segment of society”. To that end, it intends to broaden the debate, calling on all citizens to participate. Technology and ethics are not always perfectly in sync, and while this initiative reflects a genuine growing awareness, the road to regulation of AI-linked activities still looks to be a very long one.
