Walking Without Thinking on Complex Terrain
Nowadays, the large majority of robots that move through their environment, whether for domestic or industrial applications, use wheels. Compared to legs, wheels have several advantages on flat ground: they are energy efficient, since they avoid unnecessary movements, and they offer excellent stability thanks to their constant contact with the ground. Human environments being mostly flat, wheeled robots are often the preferred choice: autonomous cars on roads, Roombas indoors, or autonomous lawnmowers in gardens. However, things get trickier as soon as conditions are no longer ideal. Wheels can handle obstacles as long as those obstacles are small relative to the wheels (a modest pothole in a road, bumpy grass, a small step). In harsher situations, wheeled robots are unable to proceed and have to stop. That is where legged robots come into play. Legs offer more versatility thanks to their wider range of motion and their discontinuous contacts with the ground. Legged robots could be used to explore hazardous places, since they can cross rubble or holes by properly choosing the contact locations of their feet; they could also monitor plants or carry loads in buildings with several floors. For all these applications, efficient control algorithms are needed to move through complex environments while avoiding obstacles, climbing stairs, or jumping over holes.
As part of the MEMMO project, funded under Horizon 2020 grant agreement n°780684, a collaboration between the Gepetto team of LAAS-CNRS and the University of Edinburgh has been set up to develop a unified, yet tractable, approach to complex motion generation. The researchers want to enable robots to walk in complex scenarios, adapting their trajectory in real time to remain robust to perturbations or changes in the environment. Their control algorithms can leverage a 3D map of the surroundings to find the best contact locations for upcoming footsteps. For now, the map is given directly to the robot for simplicity, since the environment is known; later, it could be reconstructed in real time with an on-board camera. To date, efforts have focused on locomotion for the small quadruped Solo, which is part of the Open Dynamic Robot Initiative (https://open-dynamic-robot-initiative.github.io/), a project that provides all the instructions needed to build your own robot out of 3D-printed parts and off-the-shelf components.
The robot controller works as follows. First, it chooses which surfaces the upcoming footsteps should land on, among the pool of surfaces available in the robot's surroundings. For instance, if two surfaces are separated by a hole, it can place the next two footsteps before the hole and the following ones after it, in order to cross the gap. While a foot is in the air, the precise target location of the contact is computed at high frequency by solving an optimization problem that uses a simplified model of the robot (to limit the computational cost). Knowing the current location of the foot and its target, a trajectory through the air and the movements the robot should perform can be deduced. This controller has been validated both in simulation and in real-life experiments with Solo, in an indoor scenario featuring stairs and holes.
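To make the pipeline concrete, here is a minimal Python sketch of its three stages: picking a landing surface, refining the footstep target on that surface, and interpolating the swing-foot motion. All names (`select_surface`, `optimize_footstep`, `swing_trajectory`) are ours, and the clamp-to-surface step is only a crude stand-in for the simplified-model optimization the actual controller solves at high frequency; this is an illustration, not the MEMMO/Solo implementation.

```python
import numpy as np

def select_surface(target_xy, surfaces):
    """Pick the candidate surface whose center is closest to the nominal
    footstep location (stand-in for the surface-selection stage)."""
    centers = [s.mean(axis=0) for s in surfaces]
    dists = [np.linalg.norm(c[:2] - target_xy) for c in centers]
    return int(np.argmin(dists))

def optimize_footstep(nominal_xy, surface):
    """Clamp the nominal target into the surface's axis-aligned bounds,
    a crude stand-in for the high-frequency footstep optimization."""
    lo, hi = surface.min(axis=0), surface.max(axis=0)
    return np.clip(np.append(nominal_xy, lo[2]), lo, hi)

def swing_trajectory(start, target, apex_height, phase):
    """Swing-foot position at phase in [0, 1]: linear interpolation in
    x/y plus a parabolic height profile to step over obstacles."""
    pos = (1.0 - phase) * start + phase * target
    pos[2] += apex_height * 4.0 * phase * (1.0 - phase)
    return pos

# Two flat square surfaces (corner points) separated by a gap.
before_gap = np.array([[0.0, 0.0, 0.0], [0.4, 0.0, 0.0],
                       [0.4, 0.4, 0.0], [0.0, 0.4, 0.0]])
after_gap = np.array([[0.8, 0.0, 0.1], [1.2, 0.0, 0.1],
                      [1.2, 0.4, 0.1], [0.8, 0.4, 0.1]])

nominal = np.array([0.9, 0.2])                    # nominal next footstep (x, y)
idx = select_surface(nominal, [before_gap, after_gap])
foot = optimize_footstep(nominal, [before_gap, after_gap][idx])
mid = swing_trajectory(np.array([0.0, 0.2, 0.0]), foot, 0.05, 0.5)
```

With the nominal target past the gap, the selector picks the far surface and the swing trajectory lifts the foot over the hole, peaking halfway through the step.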