The field of application of artificial intelligence (AI) is widening, and looks set to encompass smartphones in the near future. The phone’s operating system will then be able to work rather like nerve cells do, and your device will be capable of positively identifying both categories of objects and individual human faces.
By Arthur de Villemandy October 22, 2014
Hollywood movie clichés aside, substantial progress is actually starting to be made in the field of artificial intelligence, due to the unflagging enthusiasm of academic institutions on the one hand and commercial companies on the other: major technology companies are working in partnership with university research laboratories to find ways of integrating AI into devices used by the general public. Meanwhile, in addition to the machine-learning projects already underway at the Google X lab, directed by the Group’s co-founder Sergey Brin, the Mountain View giant recently acquired DeepMind, a company specialising in AI. The UK-based startup, which has developed software that learns by combining machine-learning with neuroscience techniques, moved into the Google fold in January. Moreover, research being carried out at wireless telecommunications products and services company Qualcomm should enable this so-called ‘deep learning’ technology to be incorporated into our smartphones in the near future.
Qualcomm, which makes electronic chips for smartphones, is aiming to include AI as standard functionality in mobile devices in the coming years. The company’s researchers are in the process of developing an application which is able to identify different types of scenery or background in a photo – cityscape, landscape, sunset, close-up, etc – using image-recognition techniques, and to optimise the camera settings on that basis. Even more impressive is the claim made by Qualcomm’s head of software research, Charles Bergan, during a conference at MIT in September that “maybe the app will detect that it’s a soccer game in progress and look for that moment when the ball is just lifting off.” However, at the moment AI technology is being directed first and foremost towards human face recognition. A face-recognition app which Bergan demonstrated at the MIT gathering succeeded in identifying his face despite reportedly having been trained to recognise his features using only a short, shaky, and poorly lit video of his face. Unsurprisingly, image recognition is very much in demand from smartphone manufacturers. Recently Google tested out deep learning techniques on 10 million images taken from YouTube videos. The results were roughly twice as accurate as those of any image-recognition software the company had previously used, and the technology was able, for example, to categorise several different cats as all being cats.
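To make the scene-recognition idea concrete, here is a minimal sketch of how a photo, once reduced to a feature vector, might be assigned a scene label and a matching camera preset. The feature names, training values, and preset names are all hypothetical, and the nearest-centroid rule stands in for the far larger learned models Qualcomm’s researchers would actually use:

```python
import math

# Hypothetical per-photo features: (brightness, edge_density, warm_hue_ratio).
# A real deep-learning system would extract thousands of learned features;
# these toy values exist only to illustrate the classification step.
TRAINING = {
    "cityscape": [(0.5, 0.9, 0.2), (0.6, 0.8, 0.3)],
    "landscape": [(0.7, 0.4, 0.3), (0.8, 0.3, 0.2)],
    "sunset":    [(0.4, 0.2, 0.9), (0.3, 0.3, 0.8)],
    "close-up":  [(0.6, 0.6, 0.5), (0.5, 0.5, 0.6)],
}

# Illustrative mapping from recognised scene to a camera preset.
SETTINGS = {
    "cityscape": "high contrast",
    "landscape": "small aperture",
    "sunset":    "warm white balance",
    "close-up":  "macro focus",
}

def classify(features):
    """Assign a photo to the scene whose average (centroid) feature
    vector lies closest in Euclidean distance."""
    def centroid(points):
        return tuple(sum(axis) / len(points) for axis in zip(*points))

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    return min(TRAINING, key=lambda scene: dist(features, centroid(TRAINING[scene])))

scene = classify((0.35, 0.25, 0.85))   # a warm, low-detail photo
print(scene, "->", SETTINGS[scene])    # sunset -> warm white balance
```

The point of the sketch is the pipeline shape – features in, scene label out, preset chosen – rather than the particular classifier, which in a shipping app would be a trained neural network rather than a centroid rule.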
The thrust of deep learning technology, which MIT Technology Review listed among its ten breakthrough technologies for 2013, is basically to simulate the activity of the neurons in the neocortex of our brains. This zone, which is located in the external layer of the cerebral hemispheres, is involved in spatial representation and language functions, and more widely in perception and reaction. In fact deep learning is not particularly new. In the 1980s Professor Kunihiko Fukushima, a Japanese pioneer in the field of neural networks, invented an artificial neural network, which he called the ‘neocognitron’, that has a hierarchical multilayered architecture and acquires the ability to recognise visual patterns through learning. Thirty years on, commercial companies are now starting to get interested in the technology with a view to predictive data analysis. Ersatz Labs, a San Francisco-based startup, runs a paid-for platform for creating tailored algorithms that help companies to analyse their data more meaningfully. Moreover, recent advances in computational power have been driving progress in the field. High-powered computer chips such as IBM’s recently unveiled TrueNorth have enabled progress in image and language recognition, two key features that look set to make our smartphones even smarter.
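The hierarchical, multilayered idea behind the neocognitron and its modern descendants can be sketched in a few lines: each artificial neuron computes a weighted sum of its inputs and fires past a threshold, and stacking layers lets the network represent patterns no single neuron can. For clarity the weights below are set by hand to compute XOR, a classic function a single neuron cannot learn; a real deep network would learn its weights from data instead:

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    passed through a hard threshold activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def layer(inputs, weight_rows, biases):
    """A layer applies several neurons in parallel to the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

def network(x):
    """Two stacked layers computing XOR.

    The hidden layer extracts intermediate features (an OR detector and
    an AND detector); the output layer combines them (OR and not AND).
    Weights here are hand-set for illustration, not learned.
    """
    hidden = layer(x, [[1, 1], [1, 1]], [-0.5, -1.5])
    output = layer(hidden, [[1, -1]], [-0.5])
    return output[0]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", network([a, b]))
```

Each layer turns the previous layer’s outputs into slightly more abstract features – exactly the property that, scaled up to many layers and millions of learned weights, lets deep networks pick out cats or faces in raw images.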