Soft robotic gripper uses three fingers to pick up, manipulate and model new objects

UC San Diego engineers’ three-fingered, pneumatic gripper tactilely senses objects with nanotube sensors.

October 15, 2017
Staff

Engineers at the University of California San Diego have designed a soft robotic gripper that can pick up and manipulate objects without any training or even seeing them.

Credit: University of California San Diego

The engineering team, led by Michael T. Tolley, a roboticist at the Jacobs School of Engineering at UC San Diego, designed the gripper with three fingers, each made of three soft, flexible pneumatic chambers that move when air pressure is applied.

The gripper has more than one degree of freedom, enabling it to manipulate the objects it's holding to suit the operation it needs to perform.
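To make the idea of extra degrees of freedom concrete, here is a minimal sketch (not the team's actual control code; chamber counts match the article, but the modes, pressure values, and function names are illustrative assumptions): with three independently pressurized chambers per finger, the same hardware can produce different finger poses just by choosing different pressure patterns.

```python
# Hypothetical sketch of multi-DOF pneumatic finger control.
# Chamber counts come from the article; all pressures, mode names,
# and functions below are illustrative assumptions, not the UCSD design.

CHAMBERS_PER_FINGER = 3
FINGERS = 3

def pressure_pattern(mode):
    """Return per-chamber pressures (kPa, illustrative) for a grasp mode."""
    patterns = {
        "curl":  [40, 40, 40],   # equal inflation: finger curls inward
        "pinch": [60, 20, 0],    # uneven inflation: fingertip tilts for a pinch
        "twist": [0, 30, 60],    # reversed gradient: finger bends the other way
    }
    return patterns[mode]

def gripper_command(modes):
    """One pressure setpoint list per finger, one mode per finger."""
    return [pressure_pattern(m) for m in modes]

# Driving the three fingers with different patterns lets the gripper
# reorient an object in-hand rather than just squeeze it:
cmd = gripper_command(["curl", "pinch", "twist"])
print(cmd)
```

The point of the sketch is only that asymmetric pressure commands across chambers are what buy the gripper its in-hand manipulation, as opposed to a single open/close actuator.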

The team tested the gripper on an industrial Fetch Robotics robot and demonstrated that it could pick up, manipulate and model a wide range of objects.

From screwing in lightbulbs to turning screwdrivers, the gripper can pick up and operate a variety of objects in low-light, low-visibility situations.

“We designed the device to mimic what happens when you reach into your pocket and feel for your keys,” said Tolley.

One way the team achieved this was by covering each finger with a smart sensing skin made of silicone rubber, with sensors of conducting carbon nanotubes embedded within the skin.

Fetch robotic gripper finger with sensors

The gripper has three fingers, each made of three soft flexible pneumatic chambers, which are able to move when air pressure is applied. Credit: University of California San Diego

As the fingers move and flex, the conductivity of the nanotubes changes. This allows the sensing skin to record and detect when the fingers are moving and coming into contact with an object. The data the sensors generate is transmitted to a control board, which puts the information together to create a 3D model of the object the gripper is manipulating. It’s a process similar to a CT scan, where 2D image slices add up to a 3D picture.
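The pipeline described above can be sketched in a few lines. This is a loose illustration, not the team's algorithm: the calibration constants, geometry, and function names are all assumptions. It maps a resistance change to a bend angle, treats each grasp as a 2D "slice" of fingertip contact points, and stacks slices into a rough 3D point set, echoing the CT-scan analogy.

```python
# Hypothetical sketch (not the UCSD team's code): estimate finger bend from
# nanotube strain-sensor resistance, then stack per-grasp contact slices
# into a rough 3D point cloud, loosely analogous to CT slice reconstruction.
import math

def bend_angle(resistance_ohms, r_rest=100.0, gauge=2.0):
    """Map a sensor's resistance change to a bend angle (radians).
    r_rest and gauge are illustrative calibration constants."""
    strain = (resistance_ohms - r_rest) / (r_rest * gauge)
    return strain * math.pi  # assume full-scale strain ~ 180 degrees of curl

def contact_points(finger_angles, finger_length=0.08):
    """Approximate each fingertip contact as a 2D point from its bend angle."""
    pts = []
    for i, theta in enumerate(finger_angles):
        phi = 2 * math.pi * i / 3          # fingers spaced 120 degrees apart
        r = finger_length * math.cos(theta)  # radial reach shrinks as finger curls
        pts.append((r * math.cos(phi), r * math.sin(phi)))
    return pts

def model_object(grasp_slices, slice_height=0.01):
    """Stack per-grasp 2D contact slices at increasing heights into 3D points."""
    cloud = []
    for z, angles in enumerate(grasp_slices):
        for (x, y) in contact_points(angles):
            cloud.append((x, y, slice_height * z))
    return cloud

# Example: two grasps (slices), three sensor readings (one per finger) each
slices = [[bend_angle(r) for r in (110.0, 115.0, 112.0)],
          [bend_angle(r) for r in (120.0, 118.0, 121.0)]]
cloud = model_object(slices)
print(len(cloud))  # 3 fingers x 2 slices = 6 points
```

A real system would fuse many more readings per finger and fit a surface to the points, but the slice-stacking structure is the same idea the article describes.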

The team is working on adding machine learning and artificial intelligence to data processing so that the gripper will actually be able to identify the objects it’s manipulating, rather than just model them.

Researchers are also exploring using 3D printing for the gripper’s fingers to make them more durable.

jacobsschool.ucsd.edu

