Nowadays a lot of research is going into perfecting the use of hand gestures to control mobile devices, using a variety of technologies. Researchers in Zurich have taken the process a step further with their new algorithm, based on highly formalised gestures.
Professor Otmar Hilliges and his team at ETH, the Swiss Federal Institute of Technology in Zurich, are working on a gesture recognition system for use with mobile phones. The team believes that the underlying algorithm, which has been developed by Jie Song, a Master’s student at ETH, will expand the range of potential interactions with smart devices beyond what other techniques have achieved to date.
The Zurich system uses the smartphone’s built-in camera to register its environment. It does not assess depth or colour, just the shape of the gesture and the parts of the hand used. To command your phone you will need to execute a series of very precise movements. For the moment the programme recognises only six gestures, so the range is rather limited. However, the app is not yet on the market and the team are planning to add more control gestures before contemplating launching it.
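The article describes the approach only at a high level: the classifier looks at the shape of the gesture and which parts of the hand are used, ignoring depth and colour, and must reject ambiguous input. As a loose illustration of that idea (not the ETH team's actual code; all part labels, templates, and thresholds here are hypothetical), a simple part-set matcher might look like this:

```python
# Illustrative sketch only: classify a gesture by comparing the set of
# hand parts detected in a camera frame against a small template
# library, with a rejection threshold so that everyday movements
# (e.g. inspecting your nails) are not misread as commands.

GESTURE_TEMPLATES = {
    # Hypothetical templates: gesture name -> hand parts involved
    "pistol":  frozenset({"thumb", "index"}),
    "victory": frozenset({"index", "middle"}),
    "flat":    frozenset({"thumb", "index", "middle", "ring", "little"}),
}

def classify(parts_detected, reject_below=0.5):
    """Return the best-matching gesture name, or None if ambiguous.

    parts_detected: iterable of hand-part labels, assumed to come from
    an earlier segmentation step run on the camera image.
    """
    observed = frozenset(parts_detected)
    best, best_score = None, -1.0
    for name, template in GESTURE_TEMPLATES.items():
        # Jaccard similarity between observed and template part sets
        union = len(observed | template)
        score = len(observed & template) / union if union else 0.0
        if score > best_score:
            best, best_score = name, score
    # Reject weak matches rather than guess at a command
    return best if best_score >= reject_below else None

print(classify(["thumb", "index"]))   # → pistol
print(classify(["palm"]))             # too ambiguous → None
```

A real system would of course derive the part sets and shape features from image data rather than labels, but the rejection step mirrors the team's stated requirement that control gestures be clearly distinguishable from natural movements.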
The most novel aspect of the ETH programme is the type of gestures used – which might at first glance seem rather strange. For example, mimicking the firing of a pistol means, depending on the context, that the user wants to switch to another browser tab, to change the map’s view from satellite to standard, or (unsurprisingly!) to shoot down enemy planes in a video game. The team stress that the control gestures must be clearly distinguishable from everyday natural movements, so that the app will not make a mistake and delete text when, for example, you are just inspecting the state of your nails.
The algorithm and the basic system appear to work, but where are the real advantages of this ‘solution’, and are smartphone users ready to learn what amounts to a sign language in order to add to their range of device controls? “People got used to operating computer games with their movements,” replies a confident Professor Hilliges. True enough, but is playing a video game on a large screen on the same functional level as gesticulating at your phone in order to convey action commands?
Researchers are currently investigating the use of a number of different technologies for gesture control. Connected rings are slowly making headway, and recently the University of Washington in Seattle has been looking into using reflected smartphone signals for this purpose. But are users ready for any of this? The Leap Motion controller, a hardware sensor device that enables you to make hand and finger motions as input to a computer, has still not gained a firm foothold in the market a year after launch. In fact, for the moment the gesture recognition market remains very small, with various technical innovations jostling for attention and the questions piling up.