MIT robotic system tracks moving objects more precisely than vision system
By DE Staff | Automation, Machine Building
TurboTrack system uses RFID tags to home in on moving targets in milliseconds and to within a centimeter.
Called TurboTrack, the system uses an RFID tag applied to any object. A reader sends a wireless signal that reflects off the RFID tag and other nearby objects, and rebounds to the reader. An algorithm sifts through all the reflected signals to find the RFID tag’s response. Final computations then leverage the RFID tag’s movement – even though this usually decreases precision – to improve its localization accuracy.
Although other systems have used RFID tags in a similar way, they have had to trade speed for increased accuracy or vice versa. The MIT researchers were able to preserve both through a technique inspired by super-resolution imaging, in which images from multiple angles are stitched together to achieve a finer-resolution image.
To reach that high resolution, the system combines a standard RFID reader with a helper component that broadcasts a wideband signal comprising multiple frequencies. The system then captures all the signals rebounding off objects in the environment, including the signals specific to the RFID tag. Because these signals travel at the speed of light, the system can compute a “time of flight” and thereby gauge the location of the tag, as well as the other objects in the environment.
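The time-of-flight idea can be illustrated with a minimal sketch (this is not MIT's code, just the basic geometry): a radio signal's round-trip travel time, multiplied by the speed of light and halved, gives the distance to the reflecting object.

```python
# Illustrative time-of-flight distance estimate, not the TurboTrack
# implementation: half the round-trip time times the speed of light.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_time_of_flight(round_trip_seconds: float) -> float:
    """Estimate the distance to a reflector from a round-trip signal time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection arriving after roughly 6.67 nanoseconds implies a
# reflector about 1 meter away.
print(distance_from_time_of_flight(6.67e-9))
```

Centimeter-level accuracy by this route demands sub-nanosecond timing resolution, which is part of why the wideband helper signal matters.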
To zoom in on the tag’s sub-centimeter location, the researchers developed what they call a “space-time super-resolution” algorithm. The algorithm combines the location estimates for all rebounding signals, including the RFID signal, derived using time of flight. Using some probability calculations, it narrows that group down to a handful of potential locations for the RFID tag.
As the tag moves, the angle of its signal changes slightly, and that change also corresponds to a particular location. The algorithm can then use the angle change to track the tag’s distance as it moves. By constantly comparing that changing distance against the distances measured from all the other signals, it can locate the tag in three-dimensional space. This all happens in a fraction of a second.
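The spirit of comparing a tag's distance against distances from several known antenna positions can be sketched as a toy multilateration search (the antenna positions and the brute-force method here are illustrative assumptions, not the “space-time super-resolution” algorithm):

```python
import math

# Toy 2D multilateration sketch: find the grid point whose distances
# to several known antennas best match the measured distances.
# Not the TurboTrack algorithm; positions and method are illustrative.

def locate(antennas, distances, step=0.01, extent=3.0):
    """Brute-force least-squares search over a square grid."""
    best, best_err = None, float("inf")
    steps = int(extent / step)
    for i in range(steps + 1):
        for j in range(steps + 1):
            pt = (i * step, j * step)
            err = sum((math.dist(pt, a) - d) ** 2
                      for a, d in zip(antennas, distances))
            if err < best_err:
                best, best_err = pt, err
    return best

antennas = [(0.0, 0.0), (3.0, 0.0), (0.0, 3.0)]
true_tag = (1.2, 0.8)
distances = [math.dist(true_tag, a) for a in antennas]
print(locate(antennas, distances))  # close to (1.2, 0.8)
```

A real system would refine the estimate continuously as the tag moves rather than search a grid from scratch, which is how sub-second updates become feasible.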
The researchers say the system could replace computer vision for some robotic tasks. As with its human counterpart, computer vision is limited by what it can see, and it can fail to notice objects in cluttered environments. Radio frequency signals have no such restrictions: They can identify targets without visualization, within clutter and through walls.
To validate the system, the researchers attached one RFID tag to a cap and another to a bottle. A robotic arm located the cap and placed it onto a bottle held by another robotic arm. In another demonstration, the researchers tracked RFID-equipped nanodrones during docking, maneuvering, and flying. In both tasks, the system was as accurate and fast as traditional computer-vision systems, while working in scenarios where computer vision fails, the researchers report.
“If you use RF signals for tasks typically done using computer vision, not only do you enable robots to do human things, but you can also enable them to do superhuman things,” says Fadel Adib, an assistant professor and principal investigator in the MIT Media Lab, and founding director of the Signal Kinetics Research Group. “And you can do it in a scalable way, because these RFID tags are only 3 cents each.”
Adib’s group has been working for years on using radio signals for tracking and identification purposes, such as detecting contamination in bottled foods, communicating with devices inside the body, and managing warehouse inventory. The other Media Lab co-authors on the paper are visiting student Qiping Zhang, postdoc Yunfei Ma, and Research Assistant Manish Singh. The work was sponsored, in part, by the National Science Foundation.