AI Robotics Company Gives Robots ‘Human-Like’ Reasoning, Decision-Making Skills
The platform gives robots the ability to make autonomous decisions on the fly and self-reflect on actions to improve behavior
AI robotics company Covariant has launched RFM-1 (Robotics Foundation Model 1), a platform that gives robots “human-like” reasoning capabilities, in what the company says is the first time generative AI has “successfully” given commercial robots a deeper understanding of language and the physical world.
RFM-1 was trained on text, images, video, robot actions and physical measurements from both real-world tests and online data, creating a foundational dataset for robots to identify and mimic actions.
“Robotics Foundation Models require access to a vast amount of high-quality multimodal data,” said Peter Chen, Covariant’s CEO. “These models require data that reflects the wide range of information a robot needs to make decisions, including text, images, video, physical measurements and robot actions.
“Unlike AI for the digital world, there is no internet to scrape for large-scale robot interaction data with the physical world. So we built a highly scalable data collection system which has collected tens of millions of trajectories by deploying a large fleet of warehouse automation robots to dozens of customers around the world.”
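To make that data description concrete, the following is a minimal, hypothetical Python sketch of what a single multimodal trajectory record combining text, images, robot actions and physical measurements might contain. All field names are illustrative assumptions; the article does not describe Covariant's actual data schema.

```python
# Hypothetical sketch of one multimodal trajectory record of the kind described
# above. Field names are illustrative assumptions, not Covariant's schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TrajectoryStep:
    camera_frame: bytes            # raw image bytes from the robot's camera
    joint_positions: List[float]   # measured robot state at this step
    commanded_action: List[float]  # action the robot executed
    gripper_force: float           # physical measurement, e.g. grip force in newtons


@dataclass
class Trajectory:
    task_description: str                       # natural-language instruction
    steps: List[TrajectoryStep] = field(default_factory=list)
    outcome: str = "unknown"                    # e.g. "pick_succeeded" / "pick_failed"
```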
Using AI-generated videos, the platform can predict how objects will react to robotic actions, simulate potential actions and select the best one based on its predicted outcome.
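For a concrete picture of that simulate-and-select loop, here is a minimal, hypothetical Python sketch. The predictive model, action fields and scoring function are placeholder assumptions, not Covariant's actual RFM-1 interface.

```python
# A minimal, hypothetical sketch of "simulate candidate actions, predict the
# outcome of each, pick the best". The predictive model here is a stub.
import random
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Action:
    # e.g. a candidate grasp pose for a picking robot (illustrative fields)
    grasp_x: float
    grasp_y: float
    grasp_angle: float


def predict_outcome_score(scene, action: Action) -> float:
    """Stand-in for a learned video/world model: given the current scene and a
    candidate action, return a score for the predicted outcome (random here)."""
    return random.random()


def select_best_action(scene, candidates: List[Action],
                       predict: Callable[[object, Action], float]) -> Action:
    """Simulate each candidate with the predictive model and keep the action
    whose predicted outcome scores highest."""
    return max(candidates, key=lambda a: predict(scene, a))


if __name__ == "__main__":
    scene = "current camera image"  # placeholder observation
    candidates = [Action(random.uniform(0, 1), random.uniform(0, 1),
                         random.uniform(0, 3.14)) for _ in range(8)]
    best = select_best_action(scene, candidates, predict_outcome_score)
    print("chosen action:", best)
```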
The platform also gives robots an understanding of the English language to enable greater human-robot collaboration.
Covariant said the platform addresses a shortcoming of manually programmed robots, which often lack the flexibility and versatility needed in real-world scenarios. With RFM-1, robots can make autonomous decisions on the fly and self-reflect on actions to improve behavior.
“To create value at scale, robots must understand how to manipulate an unlimited array of items and scenarios autonomously,” the company said in a statement.
Covariant said the platform broadens robots’ industry applications, including for domestic, health care, retail and industrial settings.
“Recent advances in Generative AI have demonstrated beautiful video creation capabilities, yet these models are still very disconnected from physical reality and limited in their ability to understand the world robots are faced with,” said Pieter Abbeel, Covariant’s chief scientist.
“Covariant’s RFM-1, which is trained on a very large dataset that is rich in physical robot interactions, represents a significant leap forward towards building generalized AI models that can accurately simulate the physical world.”