Design Engineering

Robots fitted with tactile sensors show improved sensitivity and dexterity

Staff   


Two MIT research groups are incorporating GelSight sensor technology into robot grippers for enhanced functionality.

Two MIT teams are looking to improve robots' sensitivity and dexterity by mounting sensors on the grippers of robotic arms.


A GelSight sensor attached to a robot’s gripper enables the robot to determine precisely where it has grasped a small screwdriver, and to remove it from and insert it back into a slot, even when the gripper blocks the screwdriver from the robot’s camera. Photo: Robot Locomotion Group at MIT.

It was about eight years ago that Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, which provides a detailed 3D map of any surface it comes into contact with.

Now two research groups are leveraging this technology to make robots even more responsive.

In one paper, Adelson’s group uses the data from the GelSight sensor to enable a robot to judge the hardness of surfaces it touches.


In the other, Russ Tedrake’s Robot Locomotion Group at CSAIL uses GelSight sensors to enable a robot to manipulate smaller objects than was previously possible.

The GelSight sensor consists of a block of transparent rubber, one face of which is coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object’s shape.

The metallic paint makes the object’s surface reflective, so its geometry becomes much easier for computer vision algorithms to infer. Mounted on the sensor opposite the paint-coated face of the rubber block are three colored lights and a single camera.
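The article does not spell out how the camera image is turned into geometry, but a standard approach for this kind of setup (one camera, three differently colored lights) is photometric stereo: each light direction yields one brightness measurement per pixel, and together they constrain the surface normal. The sketch below is a minimal Python/NumPy illustration under the assumption of known light directions and a Lambertian (matte) reflectance; the function name and data layout are illustrative, not taken from the GelSight software.

```python
import numpy as np

def estimate_normals(images, light_dirs):
    """Classic photometric stereo: recover per-pixel surface normals from
    three images taken under three known, distant light directions.

    images:     array of shape (3, H, W), one grayscale image per light
    light_dirs: array of shape (3, 3), unit light-direction vectors
    """
    imgs = np.asarray(images, dtype=np.float64)      # (3, H, W)
    L = np.asarray(light_dirs, dtype=np.float64)     # (3, 3)

    h, w = imgs.shape[1:]
    I = imgs.reshape(3, -1)                          # (3, H*W) intensities

    # For a Lambertian surface, I = L @ (albedo * n); solve for the
    # scaled normal at every pixel in one least-squares call.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)        # (3, H*W)

    albedo = np.linalg.norm(G, axis=0) + 1e-8        # per-pixel reflectance
    normals = (G / albedo).T.reshape(h, w, 3)        # unit surface normals
    return normals, albedo.reshape(h, w)
```

Integrating the recovered normals (for example with a Poisson solver) then yields a height map, i.e. the detailed 3D map of the touched surface.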

In both sets of experiments, a GelSight sensor was mounted on one side of a robotic gripper.

Gauging Gripping Strength

One of the challenges for autonomous robots is gauging an object’s softness or hardness, not only to determine how much gripping force is needed, but also to predict how the object will behave when moved, stacked, or laid on different surfaces.

The MIT researchers looked at how humans interact with objects to determine their hardness. Essentially, we judge hardness by pressing on a surface and seeing how much it deforms.

Wenzhen Yuan, a graduate student in mechanical engineering and first author on the paper from Adelson’s group, used confectionery molds to create 400 groups of silicone objects, with 16 objects per group. In each group, the objects had the same shapes but different degrees of hardness.

Then she pressed a GelSight sensor against each object manually and recorded how the contact pattern changed over time, essentially producing a short movie for each object. Yuan then extracted five frames from each movie, evenly spaced in time, which together captured how the object deformed as it was pressed.

The data was fed into a neural network, which automatically looked for correlations between changes in contact patterns and hardness measurements.
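The paper’s exact network is not described here, but the pipeline as summarized above, sampling five evenly spaced frames from the contact video and regressing a hardness score, can be sketched as follows. This is a hedged illustration using PyTorch; sample_frames, HardnessRegressor, and the layer sizes are placeholders, not the architecture the MIT group used.

```python
import numpy as np
import torch
import torch.nn as nn

def sample_frames(video, n_frames=5):
    """Pick n_frames evenly spaced frames from a tactile-contact video.

    video: array of shape (T, H, W, C) holding the GelSight image sequence.
    """
    idx = np.linspace(0, len(video) - 1, n_frames).round().astype(int)
    return video[idx]                                   # (n_frames, H, W, C)

class HardnessRegressor(nn.Module):
    """Toy CNN mapping a stack of contact frames to a scalar hardness score.
    The sampled frames are concatenated along the channel axis."""
    def __init__(self, n_frames=5, channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_frames * channels, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)                    # scalar hardness score

    def forward(self, x):                               # x: (B, n_frames*C, H, W)
        return self.head(self.features(x).flatten(1))
```

In practice the five frames from sample_frames would be reshaped into a single (1, n_frames*channels, H, W) tensor and the network trained against measured hardness values for each silicone object.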

The resulting system takes frames of video as inputs and produces hardness scores with very high accuracy. Yuan also conducted a series of informal experiments in which human subjects palpated fruits and vegetables and ranked them according to hardness. In every instance, the GelSight-equipped robot arrived at the same rankings.

Yuan is joined on the paper by her two thesis advisors, Adelson and Mandayam Srinivasan, a senior research scientist in the Department of Mechanical Engineering; Chenzhuo Zhu, an undergraduate from Tsinghua University; and Andrew Owens, who did his PhD in electrical engineering and computer science at MIT and is now a postdoc at the University of California at Berkeley.

Accurate Robotic Positioning

For the most part, an autonomous robot uses some sort of computer vision system to guide its manipulation of objects in its environment. Such systems are highly accurate, but when the robot picks up a small object that is largely covered by its gripper, the vision-based estimate of the object’s location becomes unreliable.

The Robot Locomotion Group ran into exactly this problem during the Defense Advanced Research Projects Agency’s Robotics Challenge (DRC), when their robot had to pick up and turn on a power drill.

The team turned to GelSight as a solution to this problem. Greg Izatt, a graduate student in electrical engineering and computer science and first author on the new paper, together with co-authors Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering; Adelson; and Geronimo Mirano, another graduate student in Tedrake’s group, designed control algorithms that use a computer vision system to guide the robot’s gripper toward a tool and then hand location estimation over to a GelSight sensor once the robot has the tool in hand.

A key challenge was reconciling the data produced by the vision system with the data produced by the tactile sensor. Because GelSight is itself camera-based, its output was much easier to integrate with the visual data than that of a conventional touch sensor would be.

In Izatt’s experiments, a robot with a GelSight-equipped gripper had to grasp a small screwdriver, remove it from a holster, and return it. Of course, the data from the GelSight sensor don’t describe the whole screwdriver, just a small patch of it. But Izatt found that, as long as the vision system’s estimate of the screwdriver’s initial position was accurate to within a few centimeters, his algorithms could deduce which part of the screwdriver the GelSight sensor was touching and thus determine the screwdriver’s position in the robot’s hand.
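The article does not detail Izatt’s algorithm, but one common way to frame the problem is as a local registration: start from the vision system’s rough pose estimate and refine it by aligning the small point cloud sensed by the GelSight patch against a point cloud sampled from the screwdriver’s 3D model. The sketch below uses a basic ICP-style refinement with a Kabsch update in NumPy and SciPy; the 5 mm matching gate, the function names, and the frame conventions are illustrative assumptions, not the published method.

```python
import numpy as np
from scipy.spatial import cKDTree

def refine_pose(sensor_pts, model_pts, R0, t0, iters=20):
    """Toy ICP refinement: align the small point cloud sensed by the GelSight
    patch to a point cloud sampled from the tool's 3D model.

    sensor_pts: (N, 3) points measured by the tactile sensor (gripper frame)
    model_pts:  (M, 3) points sampled over the tool model (model frame)
    R0, t0:     initial tool-pose guess from the vision system
    Returns a refined rotation R and translation t (model -> gripper frame).
    """
    R, t = R0.copy(), t0.copy()
    tree = cKDTree(sensor_pts)

    for _ in range(iters):
        # Transform the model into the gripper frame with the current pose
        # and match each model point to its nearest sensed point.
        moved = model_pts @ R.T + t
        dists, idx = tree.query(moved)

        # Keep only model points that land near the sensed patch; the
        # GelSight patch covers just a sliver of the tool.
        mask = dists < 0.005                      # 5 mm gate (assumption)
        if mask.sum() < 3:
            break
        src, dst = moved[mask], sensor_pts[idx[mask]]

        # Kabsch: best-fit rigid update between the matched point sets.
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = dst.mean(0) - dR @ src.mean(0)

        # Compose the incremental correction with the current pose estimate.
        R, t = dR @ R, dR @ t + dt
    return R, t
```

Because the tactile patch is small, a refinement like this only works when the initial vision-based estimate is already close, which matches the article’s observation that the starting pose had to be accurate to within a few centimeters.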

“I think that the GelSight technology, as well as other high-bandwidth tactile sensors, will make a big impact in robotics,” says Sergey Levine, an assistant professor of electrical engineering and computer science at the University of California at Berkeley. “In the future, we will see these kinds of learning methods incorporated into end-to-end trained manipulation skills, which will make our robots more dexterous and capable, and maybe help us understand something about our own sense of touch and motor control.”

www.mit.edu
