Google DeepMind’s AI-Powered Robots Play Soccer
The team said the tests demonstrate deep reinforcement learning's ability to train robots in motion, coordination and environmental awareness
Google DeepMind has used deep reinforcement learning to teach tiny bipedal robots to play soccer.
The team posted the results on social media, showcasing the robots’ ability to walk, turn, kick and get up again after falling over. Over time, the small-scale robots were able to anticipate opponents’ movements and even block their shots.
“Robotics has long used games like soccer to test complex motion, coordination and environment awareness,” they wrote.
“Our agents were trained in simulation using the MuJoCo physics engine and then transferred to small humanoid robots with 20 actuated joints.”
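For readers unfamiliar with MuJoCo, the open-source physics engine named above, the sketch below shows the basic simulate-and-control loop such training is built on. It is not DeepMind's code: the single-joint toy model and the random policy are illustrative stand-ins for the 20-joint humanoid and the learned controller described in the paper.

```python
# Minimal MuJoCo simulation loop (illustrative only).
# Assumptions: the toy XML model and random "policy" below are placeholders,
# not DeepMind's training setup.
import numpy as np
import mujoco  # pip install mujoco

TOY_MODEL_XML = """
<mujoco>
  <worldbody>
    <body name="link" pos="0 0 1">
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" size="0.05" fromto="0 0 0 0 0 -0.5" mass="1"/>
    </body>
  </worldbody>
  <actuator>
    <motor joint="hinge" ctrlrange="-1 1"/>
  </actuator>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(TOY_MODEL_XML)
data = mujoco.MjData(model)

rng = np.random.default_rng(0)
for _ in range(1000):
    # A trained RL policy would map observations (joint angles and
    # velocities, ball position, etc.) to motor commands; random
    # torques stand in for it here.
    data.ctrl[:] = rng.uniform(-1.0, 1.0, size=model.nu)
    mujoco.mj_step(model, data)  # advance the physics by one timestep

print(f"final joint angle: {data.qpos[0]:.3f} rad")
```

In the actual research, a policy trained against loops like this one was then transferred to physical robots, with the simulated dynamics standing in for the real hardware during training.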
In tests, the robots walked 181% faster, turned 302% faster, took 63% less time to get up and kicked a ball 34% faster than a scripted baseline.
Full details of the research were published in the journal Science Robotics.
According to the team, the research demonstrated that deep reinforcement learning can achieve full-body control of humanoid robots, with potential applications to larger-scale robots in the future.
“The robots exhibited emergent behaviors in the form of dynamic motor skills such as the ability to recover from falls and also tactics like defending the ball against an opponent,” Science Robotics editor Amos Matsiko wrote. “The robot movements were faster when using their framework than a scripted baseline controller and may have potential for more complex multi-robot interactions.”