Robot Learns to Respond to Human Facial Expressions

The Columbia University design uses AI to predict and mimic human facial expressions to improve human-robot interactions

Scarlett Evans, Assistant Editor, IoT World Today

April 4, 2024

Study's lead author, Yuhang Hu (Photo: John Abbott/Columbia Engineering)

Columbia University engineers have designed an AI-enabled robotic face that can anticipate and mimic human expression, which the team said is a “major advance” in human-robot interactions.

While programs such as ChatGPT are increasingly used to improve robots’ verbal communication, researchers are still developing robots’ physical communication skills, such as facial expressions and body language.

The team said their design, dubbed Emo, is a step forward in this field. 

Developed by the university’s Creative Machines Lab, Emo can anticipate facial expressions and greet humans with a smile; the team said it can predict a smile 840 milliseconds before it happens.

The robot is equipped with 26 actuators covered by soft silicone skin, with high-resolution cameras integrated into its eyes to allow it to make eye contact.


To train the robot, the team developed two AI models: one that analyzes changes in a human’s face to predict their upcoming facial expression, and another that generates the motor commands needed to produce the corresponding expression on the robot’s face.

Over a few hours, the robot learned the correlation between facial expressions and motor commands, a process the team called “self-modeling.” After training, Emo could predict people’s facial expressions by identifying small changes in their faces.
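The article doesn’t give implementation details, so purely as a rough illustration, the sketch below shows how such a two-model pipeline might be wired up in PyTorch: a predictor that watches a short window of facial-landmark frames and guesses the upcoming expression, and an inverse model that maps a target expression to actuator commands. The landmark count, window length, and network architectures are illustrative assumptions; only the 26-actuator count comes from the article.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not Columbia's actual design)
NUM_LANDMARKS = 68   # assumed facial-landmark count
OBS_WINDOW = 10      # assumed number of past camera frames observed
NUM_ACTUATORS = 26   # actuator count, per the article

class ExpressionPredictor(nn.Module):
    """Model 1 (sketch): watches a short window of landmark frames
    and predicts the landmark layout of the person's next expression."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_WINDOW * NUM_LANDMARKS * 2, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_LANDMARKS * 2),
        )

    def forward(self, frames):
        # frames: (batch, OBS_WINDOW, NUM_LANDMARKS, 2)
        return self.net(frames.flatten(start_dim=1))

class InverseModel(nn.Module):
    """Model 2 (sketch): maps a target expression to the motor commands
    that would reproduce it on the robot's own face. In the article's
    "self-modeling" phase, training pairs could come from the robot
    issuing motor commands while watching the resulting expressions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_LANDMARKS * 2, 128),
            nn.ReLU(),
            nn.Linear(128, NUM_ACTUATORS),
            nn.Tanh(),  # commands normalized to [-1, 1]
        )

    def forward(self, landmarks):
        # landmarks: (batch, NUM_LANDMARKS * 2)
        return self.net(landmarks)

# Inference pipeline: predict the person's next expression, then
# drive the actuators toward the matching pose.
predictor, inverse = ExpressionPredictor(), InverseModel()
frames = torch.randn(1, OBS_WINDOW, NUM_LANDMARKS, 2)  # stand-in camera input
predicted_expression = predictor(frames)
motor_commands = inverse(predicted_expression)
print(motor_commands.shape)  # torch.Size([1, 26])
```

Splitting the problem this way means the robot never needs an explicit mapping from human faces to its own motors: the predictor handles perception, and the self-learned inverse model handles actuation.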


“I think predicting human facial expressions accurately is a revolution in human-robot interaction,” said Yuhang Hu, study lead author. “Traditionally, robots have not been designed to consider humans' expressions during interactions. Now, the robot can integrate human facial expressions as feedback.

“When a robot makes co-expressions with people in real-time, it not only improves the interaction quality but also helps in building trust between humans and robots. In the future, when interacting with a robot, it will observe and interpret your facial expressions, just like a real person.”

“Although this capability heralds a plethora of positive applications, ranging from home assistants to educational aids, it is incumbent upon developers and users to exercise prudence and ethical considerations,” said Hod Lipson, lead researcher. “But it’s also very exciting: by advancing robots that can interpret and mimic human expressions accurately, we’re moving closer to a future where robots can seamlessly integrate into our daily lives, offering companionship, assistance, and even empathy.

“Imagine a world where interacting with a robot feels as natural and comfortable as talking to a friend.” 

