Object Differentiation is Key to Robot Autonomy

January 16, 2013

If robots that serve human beings are to attain an adequate degree of independence, we need to
enable them to deal with the objects commonly found in their working environment, not all of
which remain stationary at all times.

If a robot is to act more independently, it needs to understand its environment
and the objects found in it. A good many projects are under way in this field.
Researchers at MAST (Micro Autonomous Systems & Technology) and at the European
Software Centre are working on one such project, designed to enable robots to map the
environment in which they operate and thus assess potential obstacles. However, such
studies rarely take account of the fact that the objects themselves might move at the
same time as the robot, as would be the case for chairs or doors in the workplace, for
instance.
Now a group of researchers from the computer science departments of Stanford and Carnegie Mellon Universities has developed an algorithm that enables robots to identify objects around
them and classify them in terms of potential movement.

Mapping, identification and hierarchy

Like their MAST colleagues, the researchers* from Stanford and Carnegie Mellon
installed laser sensors on their robots in order to develop the algorithm. These sensors
enabled the robots first of all to map the area and to detect any object which might
represent an obstacle. This analysis of the robot environment took place over nine
separate sessions, during which the researchers altered the layout of the area, shifting
or entirely removing objects. Using the algorithm, the robots were able to ‘remember’
those changes and could then classify the objects which had disappeared or moved into
one category, and those which had not moved at all into another. In this way the robots
‘learn’ to group similar objects according to characteristics they have in common and
thus to estimate, with a reasonable degree of probability, which objects might prove to
be an obstacle at different times within the same environment.
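
To make the idea concrete, here is a minimal Python sketch of how such a two-way
classification might work: objects tracked across several mapping sessions are labelled
movable if they disappeared or shifted, and similar shapes then share an estimated
probability of movement. The class and function names, the shape signature and the
tolerance are assumptions made for illustration, not the researchers' actual implementation.

from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    shape_signature: tuple                          # coarse shape descriptor from laser scans
    positions: list = field(default_factory=list)   # one (x, y) per session, None if absent

def classify(obj: TrackedObject, move_tolerance: float = 0.05) -> str:
    """Label an object 'movable' if it disappeared or shifted between sessions."""
    seen = [p for p in obj.positions if p is not None]
    if not seen or len(seen) < len(obj.positions):
        return "movable"                            # vanished in at least one session
    ref = seen[0]
    for x, y in seen[1:]:
        if abs(x - ref[0]) > move_tolerance or abs(y - ref[1]) > move_tolerance:
            return "movable"                        # shifted more than the tolerance (metres)
    return "stationary"

def movement_probability(objects: list, query_shape: tuple) -> float:
    """Fraction of objects sharing a shape signature that were labelled movable."""
    similar = [o for o in objects if o.shape_signature == query_shape]
    if not similar:
        return 0.0
    movable = sum(1 for o in similar if classify(o) == "movable")
    return movable / len(similar)

# Usage: both chairs moved or vanished across sessions, the cabinet never did,
# so a newly detected chair-shaped object is judged likely to move.
chair_a = TrackedObject(("chair",), [(1.0, 2.0), (1.4, 2.1), None])
chair_b = TrackedObject(("chair",), [(3.0, 0.5), (3.0, 0.5), (2.2, 0.9)])
cabinet = TrackedObject(("cabinet",), [(0.0, 4.0), (0.0, 4.0), (0.0, 4.0)])
print(movement_probability([chair_a, chair_b, cabinet], ("chair",)))   # -> 1.0
print(classify(cabinet))                                               # -> 'stationary'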

Results and limitations

Although the researchers from Stanford and Carnegie Mellon have shown that this new
approach works well in practice, they also note in their conclusions that the approach has
its limitations. To identify non-stationary objects, the current segmentation approach
requires that objects do not move during the robotic mapping phase and that they are
spaced far enough apart from each other, around 5 cm, to be identified as separate objects
(see the sketch below). In addition, the algorithm only takes the shape of an object into
account in order to identify it; it does not currently learn other attributes such as
relationships between multiple objects or non-rigid object structures. It might therefore
be useful to combine this approach with the ‘artificial skin’ project implemented some
time ago by the Technical University of Munich, where the artificial skin is covered with
tiny sensors detecting the variations in temperature, vibration and pressure associated
with the sense of touch.
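
As a simple illustration of the spacing constraint described above, the following Python
sketch segments a laser scan into separate objects only when neighbouring points are more
than a minimum gap apart. The 5 cm threshold and the point format are assumptions made
for the example, not details taken from the paper.

import math

def segment_scan(points, min_gap: float = 0.05):
    """Group consecutive (x, y) laser points; start a new object at gaps larger than min_gap metres."""
    segments = []
    current = []
    for p in points:
        if current and math.dist(current[-1], p) > min_gap:
            segments.append(current)        # gap is large enough: the previous object ends here
            current = []
        current.append(p)
    if current:
        segments.append(current)
    return segments

# Two surfaces 4 cm apart merge into a single segment; at 6 cm they stay separate.
close = [(0.00, 0.0), (0.01, 0.0), (0.05, 0.0), (0.06, 0.0)]
print(len(segment_scan(close)))   # -> 1 (objects too close to be told apart)
apart = [(0.00, 0.0), (0.01, 0.0), (0.07, 0.0), (0.08, 0.0)]
print(len(segment_scan(apart)))   # -> 2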

*Dragomir Anguelov, Rahul Biswas, Daphne Koller, Benson Limketkai, Scott Sanner
(Computer Science Department, Stanford University); Sebastian Thrun (School of Computer
Science, Carnegie Mellon University)
