“Socially aware” autonomous robot to observe codes of pedestrian traffic

Robots face four main challenges when it comes to autonomously moving through heavily trafficked areas: localization, perception, motion planning and control.

August 30, 2017

As you walk down the street, do you ever notice the pedestrian conventions fellow walkers follow? Do you move to the left when passing another person? Walk consistently on the right side in keeping with the flow of traffic? Do you saunter or walk at a steady pace?

Engineers at MIT have designed a robot to better understand the general codes and conduct of pedestrian traffic. The robot is fully autonomous with “socially aware navigation”, allowing it to keep pace with foot traffic while observing general rules of the sidewalk.


Engineers at MIT have designed an autonomous robot with “socially aware navigation” that can keep pace with foot traffic while observing these general codes of pedestrian conduct. Photo courtesy of the researchers.

In test situations, the robot successfully avoided collisions while keeping up with the flow of traffic.

“Socially aware navigation is a central capability for mobile robots operating in environments that require frequent interactions with pedestrians,” says Yu Fan “Steven” Chen, who led the work as a former MIT graduate student and is the lead author of the study. Chen goes on to explain that small robots could potentially operate on sidewalks for package and food delivery. Or personal mobility devices could transport people in large, crowded spaces once the technology is developed.

The robot faces four main challenges when it comes to autonomously moving through heavily trafficked areas: localization (knowing where it is in the space), perception (recognizing the surroundings), motion planning (optimizing path plans) and control (physically executing the desired path).

Chen and colleagues looked for standard approaches to solve the first two challenges. When it came to localization, the team used open-source algorithms to map the robot’s environment and determine positioning. For the perception challenge, the team outfitted the robot with off-the-shelf sensors, such as webcams, a depth sensor, and a high-resolution lidar sensor.

“The part of the field that we thought we needed to innovate on was motion planning,” says co-author and graduate student Michael Everett. “Once you figure out where you are in the world, and know how to follow trajectories, which trajectories should you be following?”

The most challenging situation is dealing with pedestrian-heavy environments — people’s paths can be highly unpredictable. In order to deal with these situations, roboticists can take a trajectory-based approach by programming a robot to compute an optimal path that accounts for everyone’s desired trajectories. These trajectories must be inferred from sensor data, because people don’t explicitly tell the robot where they are trying to go. The problem with this option is that it can take forever to compute.
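The trajectory-based idea can be pictured as a search over candidate robot paths scored against predicted pedestrian trajectories. This is a minimal illustrative sketch, not the researchers' planner; the function names, cost terms, and collision penalty are all assumptions:

```python
def min_clearance(robot_path, pedestrian_path):
    # Smallest robot-pedestrian distance over the prediction horizon.
    return min(
        ((rx - px) ** 2 + (ry - py) ** 2) ** 0.5
        for (rx, ry), (px, py) in zip(robot_path, pedestrian_path)
    )

def best_path(candidates, predicted_pedestrians, goal, safe_dist=0.5):
    # Trajectory-based planning: choose the candidate path that ends
    # closest to the goal while staying clear of every predicted
    # pedestrian trajectory (inferred from sensor data in practice).
    def cost(path):
        gx, gy = goal
        ex, ey = path[-1]
        goal_cost = ((ex - gx) ** 2 + (ey - gy) ** 2) ** 0.5
        # Large penalty for any predicted near-collision.
        penalty = sum(
            1000.0
            for ped in predicted_pedestrians
            if min_clearance(path, ped) < safe_dist
        )
        return goal_cost + penalty

    return min(candidates, key=cost)
```

Scoring every candidate path against every pedestrian's predicted trajectory is what makes this approach slow: the work grows with the number of candidates, the number of pedestrians, and the length of the prediction horizon.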

“Your robot is just going to be parked, figuring out what to do next, and meanwhile the person’s already moved way past it before it decides ‘I should probably go to the right,’” Everett says. “So that approach is not very realistic, especially if you want to drive faster.”

Another option is using “reactive-based” approaches, in which a robot is programmed with a simple model, using geometry or physics, to quickly compute a path that avoids collisions. However, according to Everett, because of the unpredictability of human behavior, the robot tends to collide with people who veer off a set path.
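A reactive-based rule might look like the following sketch: a fixed geometric check with no trajectory inference. The function name, one-step lookahead, and hard-coded swerve angle are assumptions for illustration, not the actual method under discussion:

```python
import math

def reactive_velocity(robot_pos, robot_vel, ped_pos, ped_vel, safe_dist=0.6):
    # Reactive approach: project both agents one step ahead; if they
    # would come too close, swerve right by a fixed angle.
    rx, ry = robot_pos
    vx, vy = robot_vel
    px, py = ped_pos
    ux, uy = ped_vel
    nx = (rx + vx) - (px + ux)  # predicted next-step separation
    ny = (ry + vy) - (py + uy)
    if math.hypot(nx, ny) >= safe_dist:
        return robot_vel  # no conflict predicted, stay on course
    # Rotate velocity 30 degrees clockwise (a swerve to the right).
    # A pedestrian who veers off the assumed straight-line path can
    # still defeat this simple model, as noted above.
    c, s = math.cos(-math.pi / 6), math.sin(-math.pi / 6)
    return (c * vx - s * vy, s * vx + c * vy)
```

The speed of this rule is its appeal: one geometric check per pedestrian, rather than a search over whole trajectories.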

“The knock on robots in real situations is that they might be too cautious or aggressive,” Everett says.

The team set out to find a way to enable the robot to adapt to this unpredictable behavior while continuously moving with the flow and following typical social codes of pedestrian conduct.

The team turned to reinforcement learning, a machine learning approach in which they ran computer simulations to train the robot to take certain paths, given the speed and trajectory of other objects in the environment. They also incorporated social norms into this offline training phase, encouraging the robot in simulations to pass on the right and penalizing it when it passed on the left.
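The social-norm shaping described above can be pictured as a reward function evaluated during simulated training. The specific reward values here are illustrative assumptions, not the actual rewards used by the researchers:

```python
def social_reward(reached_goal, collided, passing_side):
    # Hypothetical reward shaping in the spirit of the training above:
    # reward reaching the goal, penalize collisions, and add a small
    # norm term favoring passes on the right over the left.
    reward = 0.0
    if reached_goal:
        reward += 1.0
    if collided:
        reward -= 0.25
    if passing_side == "right":
        reward += 0.125  # encouraged in simulation
    elif passing_side == "left":
        reward -= 0.125  # penalized in simulation
    return reward
```

Over many simulated episodes, a policy trained against a signal like this learns to prefer right-hand passes without that rule ever being hard-coded into its motion.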

This offline training gets the robot “up to speed” before it faces real-life situations.

The researchers enabled the robot to reassess its environment and adjust its path every one-tenth of a second. In this way, the robot can continue rolling through a hallway at a typical walking speed of 1.2 meters per second without pausing to reprogram its route.
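That replanning cycle can be sketched as a 10 Hz sense-plan-act loop. The helper names (`sense`, `plan`, `act`) are hypothetical placeholders, not part of the researchers' system:

```python
import time

def run_at_10hz(sense, plan, act, duration_s, period=0.1):
    # Reassess the environment and adjust the path every tenth of a
    # second: sense -> plan -> act, then sleep out the rest of the
    # 100 ms cycle so the robot never stops to replan.
    steps = round(duration_s / period)
    for _ in range(steps):
        start = time.monotonic()
        act(plan(sense()))
        elapsed = time.monotonic() - start
        if elapsed < period:
            time.sleep(period - elapsed)
    return steps
```

At 1.2 meters per second, each 0.1-second cycle covers about 0.12 meters, so the plan is refreshed many times over the course of a single stride by a nearby pedestrian.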

For real-life testing, the team drove the robot through the busy, winding halls of MIT, where it was able to operate autonomously for 20 minutes at a time.

“We wanted to bring it somewhere where people were doing their everyday things, going to class, getting food, and we showed we were pretty robust to all that,” Everett says. “One time there was even a tour group, and it perfectly avoided them.”

Going forward, Everett says, he plans to explore how robots might handle crowds in pedestrian environments.

Chen and Everett’s co-authors include former postdoc Miao Liu, and Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics at MIT. This research was funded by Ford Motor Company.

