Stanford Team Creates Multi-Sensory Robot Training Platform
The new platform trains robots using both visual and audio cues
Stanford University researchers have created a new robot-training platform, Sonicverse, which uses both audio and visual signals to train robots for navigation tasks.
Designed as a simulation platform for robots that rely on both camera and microphone feeds, Sonicverse models the sounds robots may produce or detect as they complete tasks, creating what the researchers describe as a more “realistic” training environment.
“Developing embodied agents in simulation has been a key research topic in recent years … However, most of them assume deaf agents in silent environments, while we humans perceive the world with multiple senses,” said the team. “Sonicverse models realistic continuous audio rendering in 3D environments in real time.”
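At its simplest, continuous audio rendering of the kind the team describes means attenuating and delaying each sound source according to the listener’s position as the agent moves through the scene. The sketch below is a minimal illustration of that idea only, not Sonicverse’s actual renderer; the inverse-distance falloff and the function and variable names are assumptions made for the example, and real simulators also model reverberation, occlusion, and binaural effects.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C


def render_mono(source_pos, listener_pos, signal, sample_rate):
    """Attenuate and delay a mono sound source for a listener position.

    A toy stand-in for continuous 3D audio rendering: gain falls off
    with distance, and the signal arrives after a propagation delay.
    """
    distance = np.linalg.norm(np.asarray(source_pos) - np.asarray(listener_pos))
    gain = 1.0 / max(distance, 1.0)                        # inverse-distance falloff
    delay = int(sample_rate * distance / SPEED_OF_SOUND)   # samples of travel time
    return np.concatenate([np.zeros(delay), gain * signal])


# Example: a 1 kHz tone heard from 5 m away, sampled at 16 kHz.
sr = 16_000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1_000 * t)
heard = render_mono([5.0, 0.0, 0.0], [0.0, 0.0, 0.0], tone, sr)
```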
In experiments, the researchers used the platform to train a simulated TurtleBot, requiring it to move through its environment and reach a set destination without colliding with obstacles.
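Conceptually, an audio-visual agent like the simulated TurtleBot consumes both observation streams at every control step. The following sketch shows what such a loop could look like; the `env` interface, observation keys, and the trivial fusion policy are hypothetical placeholders for illustration, not Sonicverse’s real API, and a trained agent would use learned encoders instead.

```python
import numpy as np


class AudioVisualAgent:
    """Toy policy that fuses camera and microphone observations."""

    def act(self, rgb, audio):
        visual_feat = rgb.mean(axis=(0, 1))          # crude per-channel image summary
        audio_feat = np.abs(np.fft.rfft(audio))[:8]  # crude low-band spectral summary
        features = np.concatenate([visual_feat, audio_feat])
        # Map the fused features to one of four discrete motions.
        return int(features.sum()) % 4               # 0: fwd, 1: left, 2: right, 3: stop


def run_episode(env, agent, max_steps=500):
    """Hypothetical episode loop; `env` stands in for a Sonicverse-style
    environment that returns both visual and audio observations."""
    obs = env.reset()
    for _ in range(max_steps):
        action = agent.act(obs["rgb"], obs["audio"])
        obs, reward, done, info = env.step(action)
        if done:  # reached the goal or collided
            break
```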
The trained system was then deployed on a real TurtleBot operating in an office environment.
The tests were successful, and Sonicverse is now available online for training AI agents and robotic systems.