3D Immersion: the Next Standard Feature for Smartphones?

August 14, 2014
Mantis Vision

Israeli startup Mantis Vision has developed a set of tools which should make our mobile ICT devices capable of recording and manipulating 3D images.

Now that taking photos and recording videos on mobile devices have become commonplace, 3D looks set to be the next step in mobile user content creation and immersion. 3D has already been put to a range of uses, from motion capture in video games to dual-lens cameras shooting genuinely stereoscopic ‘3D’ films. In even more advanced scientific projects, 3D is being used to model near space, not to mention the overcrowded field of virtual reality. The basic idea is to capture data about an environment with a relatively simple device and then transcribe it in such a way as to create the experience of moving through it virtually. The 3D world is not generated by a software programme as with synthetic imaging, but arises directly from the space filmed by a mobile device. Founded in 2005 by Amihai Loven, Mantis Vision has proved very popular in this ecosystem with its toolkit for capturing and processing 3D images. The company’s MV4D solution is in fact central to research being carried out by Google to make 3D a new standard feature for mobile, just as still photography has been since 2003 and video recording since 2010.

MV4D technology incorporated into smartphones

MV4D technology is basically the fusion of two functions, neither of which is new in itself. The first is a depth sensor, which enables the user to move through a space in three dimensions, as when using 3D modelling software. The second is a set of dedicated light projectors, which give the image sufficient intensity to be manipulated from every angle. Functionality which is now fairly common, such as motion capture – popularised by Microsoft’s Kinect – is used to increase user immersion. For the Kinect, Microsoft used licensed technology developed by PrimeSense, a direct competitor of Mantis Vision, which was subsequently bought by Apple in 2013 for $300 million. Mantis Vision uses a motion sensor to facilitate movement within an image. Mantis Vision’s aim is not only to make 3D accessible on all devices but also to enable 3D manipulation of images – both videos and photos. Once a video has been recorded in 3D format, it can be edited and effects added – changing the viewing angle, adding dynamic special effects, and so on.
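To make the principle behind this kind of depth capture more concrete, the sketch below shows how a single depth frame from a sensor of this type can be back-projected into a 3D point cloud using standard pinhole-camera intrinsics. It is an illustrative sketch of the general technique only, not Mantis Vision's MV4D software; the function name and the intrinsic values are assumptions made for the example.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into a 3D point cloud.

    depth: (H, W) array of depth values along the camera's z-axis.
    fx, fy, cx, cy: pinhole camera intrinsics (focal lengths, principal point).
    Returns an (N, 3) array of XYZ points in the camera frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                            # back-project along x
    y = (v - cy) * z / fy                            # back-project along y
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels with no depth reading

# Example with a synthetic 240x320 depth frame and made-up intrinsics.
depth = np.full((240, 320), 1.5)                     # everything 1.5 m away
cloud = depth_to_point_cloud(depth, fx=300.0, fy=300.0, cx=160.0, cy=120.0)
print(cloud.shape)                                   # (76800, 3)
```

Once pixels have been lifted into 3D in this way, operations such as changing the viewing angle or isolating an object become geometric transformations of the point cloud rather than edits to a flat image.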

Longer-term vision

The Mantis Vision technology is central to Google’s Project Tango. The Tango prototype is an Android smartphone-like device which tracks its own 3D motion and creates a 3D model of the environment around it. Google has a cluster of major development collaborators, including Bosch and OmniVision. Meanwhile Google is making massive investments in an array of technologies – of which Mantis Vision’s is one – in order to develop the first standard for 3D image capture on mobile. Google has made a prototype development kit for Android available in order to attract developers to the platform, a strategy which the company earlier used for Google Glass. With the devices that Google believes it will sell under Project Tango, MV4D technology will make it possible to immerse oneself in a 3D environment. In addition, users will be able to scan the space around them and control each of its elements, for example isolating a body in a room and modifying the decor. Longer term, Google is talking about the total experience that better mastery of 3D could bring once myriads of mobile devices are equipped with the new technology. It goes without saying that immersion in a virtual universe which can be explored in several dimensions will improve as the number of users sharing their 3D recordings grows.
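As a rough illustration of what tracking the device's own motion while modelling its surroundings involves, the hedged sketch below fuses successive camera-frame scans into a single world-frame point cloud using an estimated pose (a rotation plus a translation) for each scan. It assumes pose estimates are already available and is not Project Tango's actual pipeline; the function and variable names are invented for the example.

```python
import numpy as np

def accumulate_scan(world_model, points_cam, rotation, translation):
    """Fuse one camera-frame scan into a growing world-frame point cloud.

    points_cam: (N, 3) points in the device's camera frame (e.g. from a depth sensor).
    rotation: (3, 3) rotation matrix giving the device's orientation in the world frame.
    translation: (3,) position of the device in the world frame.
    """
    points_world = points_cam @ rotation.T + translation  # apply R * p + t to each point
    return np.vstack([world_model, points_world])

# Example: the same scan taken from two different poses merged into one model.
model = np.empty((0, 3))
scan = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])          # points 1 m in front of the camera
pose1_R, pose1_t = np.eye(3), np.array([0.0, 0.0, 0.0])      # first viewpoint at the origin
theta = np.pi / 2                                            # device turned 90 degrees about the y-axis
pose2_R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(theta), 0.0, np.cos(theta)]])
pose2_t = np.array([1.0, 0.0, 0.0])                          # and moved 1 m along x
model = accumulate_scan(model, scan, pose1_R, pose1_t)
model = accumulate_scan(model, scan, pose2_R, pose2_t)
print(model.shape)                                           # (4, 3)
```

The point is simply that a device which knows where it is at every moment can stitch what it sees into one coherent 3D model, which is the capability Project Tango is built around.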
