AI: Should machines be trained to unlearn?

June 30, 2016
Machine Unlearning

Machine learning looks set to become a central facet of many fields in the future, but computers’ ability to ‘unlearn’ will also become essential for data security.

The quantitative explosion in digital data stemming from the surge in Internet communication and the widespread use of sensors is today a major driver of business opportunities for companies. In this new world, a great deal of ink is being spilled on the subject of progress in ‘machine learning’.

This increasingly common expression denotes families of algorithms which enable computer-aided systems to accumulate knowledge and intelligence automatically without being explicitly programmed to do so. Machine learning methods have applications in a wide range of fields including the manufacturing industries (process optimisation), the finance sector (risk management), the luxury goods and wider online markets (strategic marketing), defence (situational analysis) and in the biomedical sector (patient typology).


However, ‘machine learning’ is far from foolproof, and when applied to economic or political questions it looks certain to raise some major issues. This is why developers are starting to research how to make machines unlearn what they have taught themselves.

People are poor at ‘unlearning’; computers do better

The problem is that, as digital processes and computer analysis play an ever greater role, spam and hacking techniques are also on the rise. These generate masses of erroneous or highly targeted data with the aim of manipulating aggregated information and distorting the conclusions that algorithms reach.
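To make that distortion concrete, here is a minimal Python sketch, with invented numbers purely for illustration, of how a burst of fabricated data points can flip the verdict drawn from an aggregate:

    # Hypothetical illustration: fake reviews distorting an aggregated score.
    genuine_ratings = [4.5, 4.0, 5.0, 4.5, 4.0]   # real customer feedback
    injected_ratings = [1.0] * 20                  # mass-produced fake reviews

    def average(ratings):
        return sum(ratings) / len(ratings)

    print(average(genuine_ratings))                     # 4.4: product looks good
    print(average(genuine_ratings + injected_ratings))  # 1.68: verdict flipped

Any system that learns from such aggregates absorbs the distortion along with the data.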

Drawing of a ‘learning’ robot

It is therefore becoming increasingly necessary to ensure source authentication and traceability in digital decision-making chains, and to set up more effective control mechanisms. But what about correcting and/or deleting information in predictive systems? Human beings show a strong cognitive bias towards basing their perceptions, judgements and decisions on what they already know, and are generally very poor at forgetting or ‘unlearning’ mental constructs. Computers, on the other hand, do have this ability to ‘forget’ – i.e. to delete information.

An increasing number of scientists and opinion leaders are now calling for unlearning, correction and forgetting tools to be embedded in computer programs. Early in 2016, Yinzhi Cao and Junfeng Yang, assistant professors at Lehigh University and Columbia University (USA) respectively, received federal funding to develop ‘machine unlearning’ tools designed to modify or delete operations and data at various levels of granularity.
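In the research behind this work (‘Towards Making Systems Forget with Machine Unlearning’, presented at the IEEE Symposium on Security and Privacy in 2015), Cao and Yang propose recasting learning algorithms into a ‘summation form’: the model depends on a small number of aggregate sums rather than on the raw training samples, so a sample can be forgotten by subtracting its contribution from those sums. The toy Python sketch below illustrates the principle with a naive-Bayes-style word counter; the class and method names are hypothetical, not the authors’ code:

    from collections import Counter

    class UnlearnableSpamFilter:
        # Toy illustration of summation-based unlearning: the model stores
        # only running counts, so a training sample's contribution can be
        # subtracted without retraining on the whole dataset.

        def __init__(self):
            self.word_counts = {True: Counter(), False: Counter()}  # per-class word sums
            self.class_counts = Counter()                           # per-class sample sums

        def learn(self, words, is_spam):
            self.class_counts[is_spam] += 1
            self.word_counts[is_spam].update(words)

        def unlearn(self, words, is_spam):
            # Forgetting a sample = decrementing the sums it contributed to.
            self.class_counts[is_spam] -= 1
            self.word_counts[is_spam].subtract(words)

        def spam_score(self, word):
            spam = self.word_counts[True][word]
            ham = self.word_counts[False][word]
            return spam / (spam + ham) if spam + ham else 0.5

Because only the aggregate counts depend on the training data, deleting a maliciously injected message becomes a quick correction to a few sums rather than a retraining run over everything the system has ever seen.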

Computer code as law?

These efforts are an essential step towards what has been described as ‘augmented intelligence’. At the turn of the millennium, US legal scholar Lawrence Lessig predicted the normative power of computer code in an argument encapsulated by the famous dictum ‘code is law’.


Today, with the development of machine learning, deep learning and blockchain technology, and their anticipated applications in the banking, legal and insurance fields, the expression ‘code is law’ is taking on an added dimension. An increasing number of activities are governed by underlying computer applications, whether at an individual or distributed level. Human rules often leave a lot of room for interpretation and can benefit from the assistance of computer formulae, which enable greater precision and rigour, but we should beware of their limitations as well.

The field of machine learning needs to adapt: it must incorporate new ways of handling disputes, errors and data piracy, arbitrate between pre-existing information and privacy rights (including the emerging ‘Right to be Forgotten’), and distinguish the purely processable data aspects from the human aspects, which remain open to interpretation. Fortunately, the real world cannot be reduced to complex exchanges programmed into algorithms.



Assistant Professor Yinzhi Cao giving the keynote speech on ‘Machine Unlearning’ at the 37th IEEE Symposium on Security and Privacy, San Jose, California, May 23-25, 2016

Article by Pierrick Bouffaron and Arnaud Auger, senior strategic analysts at L'Atelier North America
