Q&A: Engineers create 3D objects with “embodied logic”
The actuators work off of if/then statements, responding to programmed stimuli found in specific environments.
A team of engineers at the University of Pennsylvania recently published a paper detailing their approach to 3D printing objects imbued with “embodied logic.”
While the 3D printed objects won’t be making any sophisticated decisions any time soon, they have been programmed with a series of if/then statements that allow the devices to react to the world around them according to specific environmental stimuli.
One example used throughout "Bifurcation-based embodied logic and autonomous actuation," which was published in the journal Nature Communications, was the Venus Flytrap. Tiny hairs on the leaves of the plant can recognize when weight is detected, causing the trap to close. After partially closing, the plant reassesses the size of the enclosed object and awaits continued stimulation to determine whether to open and release the object or to close the rest of the way and begin digestion.
We spoke with one of the main authors of the paper, professor Jordan Raney from the Architected Materials Laboratory at UPenn, about the process.
How did you become involved with this specific project?
Before coming to [the University of] Pennsylvania I did a postdoc in a 3D printing lab at Harvard. One of the projects I worked on was 3D printing fibre composites. For example, you could put short carbon fibres inside of a resin and, as it's 3D printed, control the fibre alignment, generating some very specific mechanical properties.
I had other, separate work related to instabilities. If you compress a beam, or if you squeeze a pencil hard enough for example, eventually it will buckle, or even snap, laterally, in the direction perpendicular to how you're squeezing it.
We 3D printed these elastomers that have some of these built-in instabilities, where the beams buckle and unbuckle in really interesting configurations. You can lock energy into the buckled state. It's like stretching a rubber band, except this rubber band stays stretched when you let go of it. So you've locked energy into that elastic stretching.
The question that I posed to my students and to myself was “Is there a way that we can embed some degree of intelligence or responsiveness into the material and the shape of that material, rather than the more traditional mechatronic solution that we’re used to in engineering of sensors, microprocessors and batteries?”
Ultimately we wanted systems that could respond to their environment so they could transform and change shape or function in response to whatever might be going on in the environment. Whether that’s a change in the humidity or the change in the light or someone stepping on it.
An artificial Venus Flytrap closes when weight is detected and the actuator is exposed to a solvent
Space is dedicated in the paper to the process of creating specific geometric parameters in order for these embedded logic objects to work properly. How did you and the team make that happen?
The original work for that was a major part of what I did at Harvard. It was basically an observation growing out of the rubber band analogy I mentioned previously, where you take a rubber band and constrain it somehow in a rigid frame. We take these beams and constrain them, and depending on their precise aspect ratio or their specific angle relative to the direction of loading, you can observe that sometimes these beams will stay buckled, which is referred to as bistability.
Bistability means that you have two different equilibrium configurations or shapes that the system can be in with zero force applied. You get a snap between these two stable configurations by pushing through an energy barrier that separates one from the other.
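The two stable configurations and the energy barrier between them can be pictured with a toy double-well energy landscape. This is only an illustrative sketch, not the paper's actual beam model; the function and its coefficients are chosen for simplicity.

```python
# Toy double-well energy landscape illustrating bistability:
# E(x) = (x^2 - 1)^2 has two stable equilibria (energy minima) at
# x = -1 and x = +1, separated by an energy barrier at x = 0.
# Snapping between the two states means pushing through that barrier.

def energy(x: float) -> float:
    """Elastic energy of the toy bistable system at configuration x."""
    return (x**2 - 1)**2

wells = [-1.0, 1.0]        # the two zero-force equilibrium configurations
barrier = energy(0.0)      # height of the barrier separating them

for x in wells:
    assert energy(x) == 0.0  # each well is a local energy minimum

print(f"barrier height: {barrier}")
```

Printing the structures "right on the boundary," as Raney describes below, corresponds to making this barrier very shallow, so a small environmental input is enough to trigger the snap.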
It was first observed that sometimes this happens and sometimes it doesn't. The question was "What are the specific geometric parameters that define this? Why does it happen sometimes and not other times?" At Harvard we had a previous study, which we cite in this current work, where we went through this question using numerical simulations, where tests are run through a computer, as well as actual experimentation, where we started rapidly 3D printing these different geometries.
We would print slightly different forms each time and see which ones stay compressed and which ones don’t. We went through the whole process of mapping geometric parameters to the state of “Are they bistable or are they not bistable?”
With 3D printing, we can print these structures right on that boundary. We make them bistable, but just barely, and then they become very sensitive sensors for the environment. It doesn't take very much energy from the environment for them to suddenly cross the boundary from bistable to monostable, what's called a bifurcation.
If you think of the stretched rubber band that's just waiting there, and you do something to remove the constraints, it's going to immediately snap back to its original configuration and release all of the energy in the stretched material. That's essentially what we're doing. We've come up with a detailed way to control precisely when a thing that has stored energy in its elasticity will release that energy, based on an environmental cue.
The foundation of those environmental cues is a series of if/then statements you and the team created, the embedded logic of the object so to speak. Walk me through that process.
If you think of a transistor as a one or a zero, like a switch, it's kind of what we have here. Because of this bistability, where you have these two configurations, it's either snapped open or it's closed. The rubber band is either stretched or it's relaxed. That's our one and our zero, our bit so to speak.
We can combine several units of these ones and zeros and because each one of these units is responsive to the environment in some way, we can combine these in different ways to require particular instances in the environment in order for actuation to occur.
The simplest thing is the Venus Flytrap, where you have to set some object inside of it to get it to close. But we can have this little locking mechanism on the side of it that has to be stimulated by the environment for the gate, and so the flytrap, to be activated.
This would be something like “Okay the fly trap closes if an object is stuck inside of it, but only if the lights are on, or only if the humidity level is above a certain percent.” It’s that variable, the light or the humidity, that’s going to trigger the disengaging of the lock.
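In software terms, the flytrap's condition is a simple AND of two inputs. A minimal sketch, with purely illustrative names and an assumed humidity threshold (the real device encodes this in geometry and material, not code):

```python
# Hypothetical sketch of the flytrap's embodied AND condition: the trap
# closes only if an object is detected AND the environmental "lock" has
# been released (here, humidity above an assumed threshold of 0.6).

def flytrap_closes(object_detected: bool, humidity: float,
                   humidity_threshold: float = 0.6) -> bool:
    lock_released = humidity > humidity_threshold  # environmental trigger
    return object_detected and lock_released       # the embodied AND gate

print(flytrap_closes(True, 0.8))   # object present, lock released -> True
print(flytrap_closes(True, 0.3))   # object present, lock engaged  -> False
```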
We’ve assembled a few different gates. There’s the AND gate, OR gate and something called the NAND gate. From the NAND gate is what’s called functionally complete. In computer science that just means that you could develop any circuit, any logic, or any sequence of logic purely from that gate. If you can make one of those you can combine any number of them into any other complex logic.
While no one is going to make a computer out of this technology, theoretically you could combine these NAND gates to make really complex and responsive logic and systems. You could program in very specific things.
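The functional-completeness claim is easy to verify in code: NOT, AND and OR can each be built purely from NAND, and from those any Boolean circuit follows. A short self-contained check:

```python
# NAND is functionally complete: NOT, AND and OR composed from NAND alone.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)          # NOT x == NAND(x, x)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))    # AND == NOT(NAND)

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))  # OR via De Morgan's law

# Exhaustive check over all inputs confirms the compositions.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("NAND compositions verified")
```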
When exposed to a solvent, both actuators expand, knocking off objects keeping the box closed
I don’t know why you would want to do this but you could program a box that’s like “Only open if the temperature is above 26 degrees and less than 29 degrees and the sun is shining but don’t open ever if there’s oil nearby.” You’re starting to embed these different kinds of logic into these little bistable units based on the material that you construct them. That’s the broad Idea, how intelligent can I make pieces of material instead of relying on sensors and microprocessors and coding?
Part of the research discusses the need to infuse the material with glass and cellulose fibres. What was the reasoning behind that, and how did you do it?
One of the core competencies of our lab is developing new 3D printing materials. What we do is take things, whether it's an elastomer resin, an epoxy resin or hydrogels, and we make the precursors of the material, and we can mix in whatever we need to mix in before printing.
In this case, we were using a silicone elastomer. We're not putting the glass fibres in after the fact; we take two liquids that you mix together, and we mix in the glass fibres at that same time with the curing agent. So we have this glass-fibre elastomer resin composite, and that's what we extrude. After we pattern it, we stick it in a furnace and cross-link it, which makes the rubber that you think of as a silicone elastomer material. The glass is already included at that point.
Similarly for the other material, the cellulose fibres, we're using a precursor that will become a hydrogel that swells in the presence of water. We're mixing in the cellulose fibres ahead of time, then printing that and cross-linking it after the fact. It's part of the ink material.
So then, what kind of printing setup are you using? It must be difficult to find a commercial system for this to work.
The problem with using an end-to-end system is that most commercial 3D printers really don't support the kinds of materials that we use. There isn't any commercial system that's naturally built for these types of materials.
It’s all our own custom stuff. What we use is just an extrusion process, we have syringes and extrude our own material. We can extrude it simply by applying pressure or by using a volumetric pump. Depending on the material we can use a different system to force the extrusion. For the 3D printing aspect, we mount our own materials and our own material extrusion apparatus onto any sort of 3D motion controlled system.
We could buy a commercial CNC mill, just without the mill on it, and mount our own material extrusion system. We use the commercial system to control the 3D motion and then use our own stuff to control the material extrusion.
What types of design challenges did you and the team run into working on this?
One thing that’s tricky here is that we really need the fibres in the material because we need what we call anisotropic swelling. The trick is that when we apply a stimulus we want the material to swell slightly more one direction than the other. That’s critical because that’s what gets us in this geometric phase diagram between the bistability and the monostability. We need that non-uniform change to the geometry to jump across the gap from bistability to monostability. To get to anisotropy you have to put something in the material itself that makes it want to swell more one way than the other.
We lined up fibres, and that makes it harder for the material to swell in the direction the fibres are aligned. The problem is that getting fibres in there can create problems with printability. When you're putting a lot of solids into an ink, you can get things like nozzle clogging, which becomes very annoying. It's a practical challenge you have to think about.
An actuator with embodied logic releasing its elastic energy
Another thing that’s difficult is if you want the kind of logic that I was describing you really need what you might call orthogonal materials. You want materials that respond very specifically to one particular stimulus but not to any others. In the case of our silicone, this thing swells in response to oils or other sources of nonpolar solvents. It doesn’t swell in response to water, which we wanted. We wanted to make sure it responds to one signal only.
One of the tricky things here is that this thing would also swell if you just heated it up a whole bunch. Heat, in this case, isn't the stimulus that we intend, so in principle, if you had a system like this just floating out in the ocean, its response could change depending on whether it's in the hot summer heat or frigid winter weather. The temperature will affect the swelling as well. So in terms of a long-term commercialized approach for an engineering industry problem, you'd really have to think about what the environment is and whether or not you might inadvertently trigger the system.
Is that an equation problem or something to do with the simulations? How was that problem solved?
It was definitely both. Especially in the initial work, we just tried it in the lab and watched roughly what worked and what didn't. We supported that with some pretty detailed simulations. There were some added complexities because the anisotropic material begins to affect your geometric phase diagram. We used the simulations to inform roughly where we needed to be to get good enough anisotropy for our purposes. I would describe it as an iterative loop between experiments and simulations that got us through it.
You talked about wanting the material to only actuate to a specific stimulus, does that mean you would make specific materials for specific situations?
You could make relatively general materials designed for specific stimuli. In this work, all we did was use silicones that respond to nonpolar solvents, oils, that kind of thing, and hydrogels that respond to water. We used those as a demonstration of what the technology could do with these logic gates.
But the cool thing is that there are many other researchers who focus on just developing new active materials that respond to specific stimuli. For example, there are things like liquid crystal elastomers that people can make that respond very precisely to temperature changes or changes in light.
The way I see this technology, we demonstrated the basic principles of embodied logic with two materials, but rather than designing a bunch of new materials from scratch, the next step would probably be trying to integrate some of the other nice materials that others have worked on, into the scheme of embedded logic. And then you would probably have some kind of fail-safe within the device itself rather than trying to re-engineer new material.
…And with these new materials people are creating, you'd have this pick-and-play model to pick from. 10 or 12 different materials to choose from, let's say.
That’s the vision for this long-term. You’d have a pallet of material that you could select from “I need temperature responsiveness within this range, I need light responsiveness in this range.” And with these multi-material 3D printers you could pick and choose where you want to place these different materials so that they actuate and activate different components at different times.
You mentioned oil spills earlier, but what other use cases do you see for this technology?
There are strengths and weaknesses to every technology. One of the key strengths of this is that you can do long-term monitoring of a particular environment with nothing happening for months or years. And finally, when the stimulus is introduced, you get a very rapid actuation event and it does what it's supposed to do. If you were talking about something like an oil spill, traditionally you would probably have a solid-state sensor that's pinging the environment at 10 Hz. It would constantly loop and ping the environment: "Do I have oil, do I have oil, do I have oil?" And that takes power.
You’d have a constant power drain with a computer monitoring the sensor and probably some other actuation system that’s waiting for a response to a trigger. That seems rather unnecessarily complex for a situation like that. The benefit here is that you don’t have a battery or a computer. It’s just pre-programmed by its shape and material to sit there indefinitely until it encounters the stimulus its been programmed for.
One other place where that could be useful is in microfluidic processing: places where you have constant fluid flow for long periods of time, but you want to make sure you don't have a contaminant leaking into the system. You'd have these almost as safety valves or gates that would close it off. They can sit there for many months during the processing.
So it actuates and cuts off the flow of liquid, or sucks up oil as we previously discussed. But what about notification? If we needed to ping someone that this was happening, how would that work?
That would be a question about how you do system integration. All this does is give you controlled actuation without any power. You can use that actuation to do whatever you need to do. It physically moves very energetically, so you could use that to pinch off the input valve and deactivate the fluidic chip in that example. If you wanted notification, then you could still have an antenna or some sort of Bluetooth device or computer that is powered off, sitting there. The actuation could flip the switch to turn it on and ping whoever needs to be notified. At least the device at the site isn't constantly pinging the environment; it's just turned on once you actually need that communication or notification.
You can find the full paper here