Seeing Eye Robot Dog Debuts at Binghamton University

Engineers from the university also created a “leash-tugging” interface for the dog to understand and respond to users’ commands

Scarlett Evans, Assistant Editor, IoT World Today

October 31, 2023

2 Min Read
[Image: Binghamton University's robot guide dog. Credit: Binghamton University]

Researchers at Binghamton University in New York have created a seeing-eye robot dog to improve accessibility for visually impaired people.

According to the team, the project responds to ongoing issues with seeing-eye dog accessibility, as these dogs cost around $50,000 and take several years to train. By creating robotic alternatives, the team said it hopes to provide a more cost-effective and readily available option for people with visual impairments.

“We were surprised that throughout the visually impaired and blind communities, so few of them are able to use a real seeing eye dog for their whole life,” said Shiqi Zhang, lead engineer on the project. “We checked the statistics and only 2% of them are able to do that.” 

Robot dogs can also have maps of certain locations and terrains built into their navigation system, allowing them to identify the best route to get from A to B and warn users of upcoming changes to terrain. 

Zhang’s team demonstrated their technology in October, with the robot dog shown leading a person around a lab hallway and responding to directive input. Designed with an array of cameras and sensors to monitor and navigate through its environment, the robot dog leads a user around, helping avoid obstacles along the way. 


The team also developed a “leash-tugging” interface, trained using reinforcement learning, that allows the robot to understand and respond to a user pulling the leash in different directions, turning in response to the tugs.

“In about 10 hours of training, these robots are able to move around, navigating the indoor environment, guiding people, avoiding obstacles and at the same time, being able to detect the tugs,” Zhang said.

Next, the team said it is looking at integrating a natural language interface into the design to enable greater communication between robot and user.

“Our next step is to add a natural language interface. So ideally, I could have a conversation with the robot based on the situation to get some help,” said David DeFazio, a researcher on the team. “Also, intelligent disobedience is an important capability. For example, if I’m visually impaired and I tell the robot dog to walk into traffic, we would want the robot to understand that we should disregard what the human wants in that situation. Those are some future directions we’re looking into.”

The team is also investigating real-world tests in collaboration with the Syracuse chapter of the National Federation of the Blind.

“If this is going well, then potentially in a few years we can set up this seeing-eye robot dog at shopping malls and airports,” Zhang said. “It’s pretty much like how people use shared bicycles on campus.” 

About the Author

Scarlett Evans

Assistant Editor, IoT World Today

Scarlett Evans is the assistant editor for IoT World Today, with a particular focus on robotics and smart city technologies. Scarlett has previous experience in minerals and resources with Mine Australia, Mine Technology and Power Technology. She joined Informa in April 2022.
