Watch an AI Robot Dog Rock an Agility Course It’s Never Seen Before

Acrobatic robot feats make for great marketing, but these displays are typically highly choreographed and painstakingly programmed. Now researchers have trained a four-legged AI robot to tackle complex, previously unseen obstacle courses in real-world conditions.

Creating agile robots is challenging due to the inherent complexity of the real world, the limited amount of data robots can collect about it, and the speed at which decisions need to be made to carry out dynamic movements.

Companies like Boston Dynamics regularly release videos of their robots doing everything from parkour to dance routines. But as impressive as these feats are, they typically involve humans painstakingly programming every step or training the robots over and over in the same highly controlled environments.

This process seriously limits the ability to transfer skills to the real world. But now, researchers from ETH Zurich in Switzerland have used machine learning to teach their robot dog ANYmal a suite of basic locomotive skills that it can then string together to tackle a wide variety of challenging obstacle courses, both indoors and outdoors, at speeds of up to 4.5 miles per hour.

“The proposed approach allows the robot to move with unprecedented agility,” write the authors of a new paper on the research in Science Robotics. “It can now evolve in complex scenes where it must climb and jump on large obstacles while selecting a non-trivial path toward its target location.”

To create a flexible yet capable system, the researchers broke the problem down into three parts and assigned a neural network to each. First, they created a perception module that takes input from cameras and lidar and uses them to build a picture of the terrain and any obstacles in it.

They combined this with a locomotion module that had learned a catalog of skills designed to help it traverse different kinds of obstacles, including jumping, climbing up, climbing down, and crouching. Finally, they merged these modules with a navigation module that could chart a course through a series of obstacles and decide which skills to invoke to clear them.
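The three-module split described above can be sketched in code. This is a hypothetical illustration only: every class, method, and skill name here is invented for clarity, while the real modules are trained neural networks running on the robot's onboard hardware.

```python
import numpy as np

# Hypothetical sketch of the perception / locomotion / navigation split.
# All names and interfaces are invented for illustration.

class PerceptionModule:
    """Fuses camera and lidar input into a local picture of the terrain."""
    def build_terrain_map(self, camera_frames, lidar_points):
        # Stand-in: a flat 64x64 elevation grid instead of a learned model.
        return np.zeros((64, 64))

class LocomotionModule:
    """A catalog of learned skills for traversing different obstacles."""
    SKILLS = ("walk", "jump", "climb_up", "climb_down", "crouch")
    def execute(self, skill, terrain_map, target):
        assert skill in self.SKILLS
        return f"joint targets for {skill}"

class NavigationModule:
    """Charts a course and decides which skill to invoke next."""
    def select_skill(self, terrain_map, target):
        # Placeholder policy: walk on flat ground, climb anything tall.
        return "walk" if terrain_map.max() < 0.1 else "climb_up"

def control_step(perception, navigation, locomotion, sensors, target):
    # One tick of the pipeline: perceive, plan, then act.
    terrain = perception.build_terrain_map(sensors["camera"], sensors["lidar"])
    skill = navigation.select_skill(terrain, target)
    return locomotion.execute(skill, terrain, target)
```

The design point is the separation of concerns: perception builds a terrain picture, navigation picks a skill, and locomotion turns that skill into motor commands, so each network can be trained and improved somewhat independently.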

“We replace the standard software of most robots with neural networks,” Nikita Rudin, one of the paper’s authors, an engineer at Nvidia, and a PhD student at ETH Zurich, told New Scientist. “This allows the robot to achieve behaviors that were not possible otherwise.”

One of the most impressive aspects of the research is that the robot was trained entirely in simulation. A major bottleneck in robotics is gathering enough real-world data for robots to learn from. Simulation sidesteps this by running many virtual robots through trials in parallel, far faster than is possible with physical hardware.

But translating skills learned in simulation to the real world is tricky due to the inevitable gap between simple virtual worlds and the hugely complex physical world. Training a robotic system that can operate autonomously in unseen environments both indoors and outdoors is a major achievement.

The training process relied purely on reinforcement learning—effectively trial and error—rather than human demonstrations, which allowed the researchers to train the AI model on a very large number of randomized scenarios rather than having to label each manually.
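The training pattern described above, pure trial and error over randomized scenarios with no human demonstrations, can be shown with a toy example. This is not the authors' method: the real system trains neural-network policies in massively parallel simulation, whereas this sketch uses a one-parameter policy and a crude finite-difference search purely to illustrate how scenario randomization shapes learning.

```python
import random

# Toy illustration: trial-and-error policy search over randomized scenarios.
# Every name and number here is invented; the real system uses deep RL.

def random_scenario(rng):
    # Domain randomization: each episode gets a differently sized obstacle.
    return {"obstacle_height": rng.uniform(0.0, 1.0)}

def reward(jump_power, scenario):
    # Clearing the obstacle earns 1.0; expending effort costs a little.
    cleared = jump_power >= scenario["obstacle_height"]
    return (1.0 if cleared else 0.0) - 0.1 * jump_power

def train(episodes=2000, lr=0.05, eps=0.05, seed=0):
    """Finite-difference policy search, a crude stand-in for RL."""
    rng = random.Random(seed)
    jump_power = 0.0  # one-parameter "policy"
    for _ in range(episodes):
        scenario = random_scenario(rng)
        # Try two perturbed policies and move toward the better one.
        gain = reward(jump_power + eps, scenario) - reward(jump_power - eps, scenario)
        jump_power += lr * gain
    return jump_power
```

Because the obstacle is re-randomized every episode, the policy cannot memorize one course; it is pushed toward a setting that clears the whole distribution of obstacles, which is the same reason randomized simulation helps skills transfer to unseen real-world layouts.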

Another impressive feature is that everything runs on chips installed in the robot, rather than relying on external computers. And as well as being able to tackle a variety of different scenarios, the researchers showed ANYmal could recover from falls or slips to complete the obstacle course.

The researchers say the system’s speed and adaptability suggest robots trained in this way could one day be used for search and rescue missions in unpredictable, hard-to-navigate environments like rubble and collapsed buildings.

The approach does have limitations though. The system was trained to deal with specific kinds of obstacles, even if they varied in size and configuration. Getting it to work in more unstructured environments would require much more training in more diverse scenarios to develop a broader palette of skills. And that training is both complicated and time-consuming.

But the research is nonetheless an indication that robots are becoming increasingly capable of operating in complex, real-world environments. That suggests they could soon be a much more visible presence all around us.

Image Credit: ETH Zurich