Robot Dog Learns to Walk Starting on its Back
New system lets robots learn by experience and automatically adapt
In just one hour, a robotic dog taught itself to walk, starting from flat on its back.
A research team from the University of California, Berkeley, developed a new algorithm based on reinforcement learning (RL) to train robots without simulations or demonstrations.
The system gives robots a trial-and-error learning model that could be groundbreaking for efficient real-world robot training.
The Dreamer program uses reinforcement learning to ‘train’ robots through continuous feedback, rewarding a bot once it successfully completes a task.
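As a rough illustration of that feedback loop, here is a minimal reinforcement-learning sketch in Python. It is not the authors' code: it uses the generic Gymnasium API, with CartPole-v1 standing in for a real robot environment and a random policy where a learned one would go.

```python
# Minimal sketch of a reward-driven RL loop (illustrative only).
# CartPole-v1 stands in for a real robot environment.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset()

for step in range(1000):
    action = env.action_space.sample()  # a trained policy would choose here
    obs, reward, terminated, truncated, info = env.step(action)
    # The reward is the "feedback": the agent adjusts its behavior
    # over time to collect more of it.
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```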
Dreamer was applied to four robots to test its RL capabilities in practice.
A quadruped robot learned to stand up from its back and walk within an hour, and then taught itself to withstand pushes and to roll back over when knocked down.
The team also trained two robotic arms to pick and place objects using camera images, with results that outperformed model-free baselines. Deployed on a wheeled robot, Dreamer learned to navigate to a goal position using only camera images.
Reinforcement learning has typically been considered inefficient for training robots because of the vast amount of trial and error required before a robot adapts its behavior. Training in simulation can sidestep that cost, but simulators capture only a narrow range of real-world situations and are prone to inaccuracies.
The Dreamer system differs from conventional deep reinforcement learning in that it learns a predictive ‘world model’ from a small amount of real-world interaction; the robot then uses that model to imagine the outcomes of candidate actions and plan its movements.
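To make that idea concrete, below is a heavily simplified sketch of learning inside a learned world model, written in PyTorch. It is not Dreamer's implementation: the network sizes, names, and training step are illustrative assumptions, meant only to show how a policy can be improved from predicted rollouts rather than real-world trials.

```python
# Illustrative world-model sketch (not Dreamer's actual code).
# A learned dynamics model predicts the next state and reward, so the
# policy can be trained on "imagined" rollouts instead of real trials.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2  # illustrative sizes, not from the paper

# Hypothetical world model: maps (state, action) -> (next state, reward).
dynamics = nn.Sequential(
    nn.Linear(state_dim + action_dim, 64), nn.ELU(),
    nn.Linear(64, state_dim + 1),
)
# Hypothetical policy: maps state -> action.
policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ELU(),
    nn.Linear(64, action_dim), nn.Tanh(),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def imagined_return(start_state, horizon=15):
    """Roll the policy forward inside the learned model, summing rewards."""
    state, total_reward = start_state, torch.zeros(())
    for _ in range(horizon):
        action = policy(state)
        prediction = dynamics(torch.cat([state, action], dim=-1))
        state, reward = prediction[..., :-1], prediction[..., -1]
        total_reward = total_reward + reward
    return total_reward

# One policy update: maximize the return predicted by the world model.
# (In Dreamer, the world model itself is continually trained on the
# small amount of real interaction collected so far.)
start = torch.zeros(state_dim)
loss = -imagined_return(start)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because each imagined rollout is cheap compared with a real-world trial, this style of training lets a robot improve far faster than trial and error alone would allow.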
While the initial results are promising, further testing is needed to see how the algorithm responds to different situations, and challenges remain in the time it takes to program each robot.
The results of the study, “DayDreamer: World Models for Physical Robot Learning”, were posted to the arXiv preprint server.