Design Engineering

MIT system aims to make navigation easier for people with visual impairments

Staff   

The system includes a 3D camera, a belt with separately controllable vibrational motors distributed around it, and an electronically reconfigurable Braille interface.

New technologies are emerging every day to make life easier for people with visual impairments. One of the most common tools available to the visually impaired is the metal-tipped white cane. A cane has its limitations, however, which is why researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) set out to develop a new system that reduces the navigation challenges visually impaired people face on a daily basis.

New algorithms power a prototype system for helping visually impaired users avoid obstacles and identify objects. Courtesy of the MIT researchers.

The team hopes the new system will give users more information about their environment. It does so through three components: the 3D camera, the belt of vibrational motors, and the Braille interface. The system can be used in conjunction with a cane or as an alternative to one.

The 3D camera is worn in a pouch hanging from the user’s neck, and a dedicated processing unit runs the team’s proprietary algorithms. The sensor belt has five vibrating motors evenly spaced across its front, and the reconfigurable Braille interface is worn at the user’s side.

“We did a couple of different tests with blind users,” says Robert Katzschmann, a graduate student in mechanical engineering at MIT and one of the paper’s two first authors. “Having something that didn’t infringe on their other senses was important. So we didn’t want to have audio; we didn’t want to have something around the head, vibrations on the neck — all of those things, we tried them out, but none of them were accepted. We found that the one area of the body that is the least used for other senses is around your abdomen.”

The algorithm quickly identifies surfaces and their orientations from the 3D camera data. According to the researchers, the algorithm first groups the camera’s pixels into clusters of three. Because each pixel has associated location data, each cluster determines a plane. If the orientations of the planes defined by five nearby clusters are within 10 degrees of one another, the system concludes that it has found a surface. It doesn’t need to determine the extent of the surface or what type of object the surface belongs to; it simply registers an obstacle at that location and begins buzzing the associated motor if the wearer gets within 2 meters of it.
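The researchers’ implementation isn’t reproduced here, but a minimal sketch of that plane-agreement test might look like the following. The function names, data layout, and NumPy-based math are illustrative assumptions, not the team’s code.

```python
import numpy as np

ANGLE_TOLERANCE_DEG = 10.0  # planes within 10 degrees count as one surface
ALERT_DISTANCE_M = 2.0      # start buzzing the motor inside this range

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through one three-pixel cluster."""
    n = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(n)
    return n / norm if norm > 0 else None

def is_surface(normals):
    """True if the planes of five nearby clusters agree within 10 degrees."""
    for i in range(len(normals)):
        for j in range(i + 1, len(normals)):
            # abs() treats opposite-facing normals as the same orientation
            cos = abs(np.dot(normals[i], normals[j]))
            if np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))) > ANGLE_TOLERANCE_DEG:
                return False
    return True

def check_clusters(clusters, wearer_pos):
    """Return an obstacle location if the clusters form a nearby surface."""
    normals = [plane_normal(*c) for c in clusters]
    if any(n is None for n in normals) or not is_surface(normals):
        return None
    centroid = np.mean([p for c in clusters for p in c], axis=0)
    if np.linalg.norm(centroid - wearer_pos) <= ALERT_DISTANCE_M:
        return centroid  # caller buzzes the belt motor facing this direction
    return None
```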

The user receives tactile signals through the belt motors that vary in frequency, intensity, and duration, as well as in the intervals between them. For example, when the system is in chair-finding mode, a double pulse indicates the direction in which a chair with a vacant seat can be found.
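The article names the signal dimensions (frequency, intensity, duration, interval) without giving their values, so the vocabulary below is purely illustrative: a hypothetical Pulse type and a stand-in driver loop showing how such patterns could be encoded.

```python
from dataclasses import dataclass

@dataclass
class Pulse:
    frequency_hz: float  # vibration frequency
    intensity: float     # drive strength, 0.0 to 1.0
    duration_s: float    # how long the buzz lasts
    pause_s: float       # gap before the next pulse

# Illustrative values only; the actual pulse parameters are not published
# in the article.
SIGNALS = {
    "obstacle_near": [Pulse(200.0, 0.8, 0.30, 0.10)],
    # chair-finding mode: a double pulse toward a vacant chair
    "chair_found": [Pulse(150.0, 0.5, 0.15, 0.10),
                    Pulse(150.0, 0.5, 0.15, 0.40)],
}

def signal(motor_index, event):
    """Play an event's pulse pattern on one of the five belt motors."""
    for pulse in SIGNALS[event]:
        # stand-in for the actual motor-driver I/O
        print(f"motor {motor_index}: {pulse}")
```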

For the Braille interface, the team included two rows of five reconfigurable Braille pads. Symbols displayed on the pads describe the objects in the user’s environment. The symbol’s position in the row indicates the direction in which it can be found; the column it appears in indicates its distance. A user adept at Braille should find that the signals from the Braille interface and the belt-mounted motors coincide.
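As a rough sketch of that mapping, assuming a 90-degree camera field of view, a 4-meter range, and two distance bins (none of which are specified in the article), a detected object could be assigned to a pad like this:

```python
ROWS, COLS = 2, 5  # two rows of five reconfigurable Braille pads

def pad_for_object(bearing_deg, distance_m, fov_deg=90.0, max_range_m=4.0):
    """Map a detected object to a (row, col) pad on the 2 x 5 display.

    Position encodes direction and row encodes distance; the field of
    view, range, and distance bins here are illustrative assumptions.
    """
    # direction: split the field of view into five equal sectors,
    # mirroring the five belt motors
    sector = (bearing_deg + fov_deg / 2) / fov_deg * COLS
    col = min(max(int(sector), 0), COLS - 1)
    row = 0 if distance_m < max_range_m / 2 else 1  # near vs. far
    return row, col

# e.g. a chair 10 degrees right of center, 1.5 m away
print(pad_for_object(10.0, 1.5))  # -> (0, 3)
```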

In tests, the chair-finding system reduced subjects’ contacts with objects other than the chairs they sought by 80 percent, and the navigation system reduced the number of cane collisions with people loitering around a hallway by 86 percent.

The researchers are presenting their work at the International Conference on Robotics and Automation, where they will describe the system and a series of usability studies they conducted with visually impaired volunteers.

www.mit.edu
