
MIT researchers develop imaging technique for better self-driving cars

Staff   


This new computational method improves the resolution of time-of-flight depth sensors 1,000-fold, with wide-ranging applications.


A comparison of the cascaded GHz approach with Kinect-style approaches, visualised on a key. From left to right: the original image, a Kinect-style approach, a GHz approach, and a stronger GHz approach. (Courtesy of the researchers)

The Camera Culture group at MIT’s Media Lab has developed a new approach to time-of-flight imaging that significantly increases the depth resolution.

According to the team, depth resolution increases 1,000-fold, which could make self-driving vehicles more practical, especially in conditions such as fog, where measuring distance accurately has traditionally been difficult.

At a range of 2 metres, existing time-of-flight systems have a depth resolution of about a centimetre, which is good enough for the assisted-parking and collision-detection systems on today’s cars.

“As you increase the range, your resolution goes down exponentially,” explains Achuta Kadambi, a joint PhD student in electrical engineering and computer science and media arts and sciences and first author on the paper.


In a long-range scenario, you generally want your car to detect an object as far away as possible so it has time to respond.

“You may have started at 1 centimetre, but now you’re back down to [a resolution of] a foot or even 5 feet. And if you make a mistake, it could lead to loss of life,” adds Kadambi.

The MIT researchers’ system, by contrast, offers a depth resolution of 3 micrometres at a range of 2 metres.

Kadambi also conducted tests in which he sent a light signal through 500 metres of optical fibre with regularly spaced filters along its length, to simulate the power falloff incurred over longer distances, before feeding it to his system. Those tests suggest that at a range of 500 metres, the MIT system should still achieve a depth resolution of only a centimetre.

With time-of-flight imaging, a short burst of light is fired into a scene, and a camera measures the time it takes to return, which indicates the distance of the object that reflected it. Light-burst length is one of the factors that determines system resolution.
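To make the arithmetic concrete, here is a minimal Python sketch, not the researchers’ code, of that relationship: the burst travels out and back, so depth is half the round-trip time multiplied by the speed of light, and depth resolution is set directly by timing resolution.

```python
# Minimal sketch of the time-of-flight relationship (illustrative only):
# a light burst travels to the object and back, so depth = c * t / 2,
# and depth resolution scales directly with timing resolution.

C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_time_s: float) -> float:
    """Depth implied by a measured round-trip time of a light burst."""
    return C * round_trip_time_s / 2.0

def depth_resolution(timing_resolution_s: float) -> float:
    """Smallest resolvable depth step for a given timing resolution."""
    return C * timing_resolution_s / 2.0

# ~67 picoseconds of timing resolution corresponds to about 1 cm of depth;
# resolving 3 micrometres would need timing on the order of 20 femtoseconds.
print(depth_resolution(67e-12))  # ~0.010 m
print(depth_resolution(20e-15))  # ~3e-06 m
```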

Kadambi explains that other imaging techniques, such as interferometry, can achieve higher resolution. In interferometry, a light beam is split in two: half is kept circulating locally, while the other half (the sample beam) is fired into a visual scene. The reflected sample beam is recombined with the locally circulated light, and the phase difference between the two beams yields a very precise measure of the distance the sample beam has travelled.
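As a rough illustration of why interferometry is so precise, the recovered phase maps to a path-length difference that is a small fraction of the optical wavelength. The sketch below assumes a 1550 nm source purely for illustration; note that the measurement wraps around every wavelength, so on its own it is only unambiguous over a very short range.

```python
import math

# Illustrative sketch of interferometric distance recovery (the 1550 nm
# wavelength is an assumption, not tied to the researchers' setup). The
# phase difference between reference and sample beams maps to a
# path-length difference, valid only modulo one wavelength.

WAVELENGTH = 1550e-9  # assumed near-infrared wavelength in metres

def path_difference(phase_rad: float, wavelength: float = WAVELENGTH) -> float:
    """Path-length difference recovered from a measured phase difference,
    valid modulo one wavelength."""
    return (phase_rad % (2 * math.pi)) / (2 * math.pi) * wavelength

# A quarter-cycle phase shift at 1550 nm resolves ~388 nm of path difference:
print(path_difference(math.pi / 2))  # ~3.9e-07 m
```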

One of the challenges with this method is that it requires careful synchronisation of the two light beams. Kadambi explains that interferometry would never be used in a vehicle because of its sensitivity to vibration. Instead, the team is combining ideas from interferometry and LIDAR to make a better system, and Kadambi adds that they are also drawing on ideas from acoustics.

“The fusion of the optical coherence and electronic coherence is very unique,” says Ramesh Raskar, an associate professor of media arts and sciences and head of the Camera Culture group. “We’re modulating the light at a few gigahertz, so it’s like turning a flashlight on and off millions of times per second. But we’re changing that electronically, not optically. The combination of the two is really where you get the power for this system.”
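One way to read that description: electronic modulation at gigahertz rates yields a phase measurement that is extremely fine but wraps every few centimetres, and a coarser measurement can then pick out the correct wrap. The sketch below illustrates that general coarse-to-fine phase-unwrapping idea with assumed modulation frequencies of 50 MHz and 2 GHz; it is not the paper’s exact cascaded method.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def depth_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Depth implied by the phase shift of amplitude modulation at
    mod_freq_hz; ambiguous beyond half the modulation wavelength."""
    unambiguous_range = C / (2.0 * mod_freq_hz)
    return (phase_rad % (2 * math.pi)) / (2 * math.pi) * unambiguous_range

def unwrap_fine_with_coarse(coarse_depth_m: float, fine_phase_rad: float,
                            fine_freq_hz: float) -> float:
    """Use a coarse depth estimate to choose the integer number of wraps
    for a fine, high-frequency phase measurement."""
    half_wavelength = C / (2.0 * fine_freq_hz)
    frac = (fine_phase_rad % (2 * math.pi)) / (2 * math.pi)
    n = round(coarse_depth_m / half_wavelength - frac)
    return (n + frac) * half_wavelength

# Example with assumed frequencies: a 50 MHz coarse measurement locates the
# object to within ~3 m; a 2 GHz fine measurement then refines the estimate.
coarse = depth_from_phase(1.7, 50e6)   # ~0.81 m, coarse
print(unwrap_fine_with_coarse(coarse, 0.9, 2e9))  # ~0.84 m, refined
```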

The research team’s paper appeared in IEEE Access. 

www.mit.edu
