Soft optoelectronic sensory foams with proprioception


Since its inception, the field of soft robotics has advanced from one-degree-of-freedom contractile actuators with open-loop control [i.e., McKibben artificial muscles (1, 2)] to active three-degree-of-freedom mechanisms (3–7), devices with closed-loop control (8–10), and high-force actuators (11, 12). Contemporary elastomeric machines can also have both exteroception and proprioception through embedded strain and pressure sensors (13–17), enabling them to sense and respond to external forces (18). As elastomeric machines continue to grow in complexity and as roboticists push the boundary of soft robot functionality, more sophisticated sensing will become necessary.

For a soft robot to robustly interact with its environment, it must know its current shape in three dimensions (3D). To know its own configuration, an inherently compliant system must be able to sense deformation—whether it is self-induced through actuation or externally inflicted. The most commonly used sensors in soft robots are either surface mounted for pressure and touch detection (14, 17, 19, 20) or embedded along neutral bending axes to measure the global curvature of a robot limb (8, 21–23). These sensors are typically integrated to measure a specific type of deformation (e.g., pressure at a certain point or bending along a certain axis), which limits the information they can provide about a robot’s configuration. To fully know a soft robot’s shape, we may need to fabricate sensors that can detect arbitrary deformations; however, it may suffice to pattern high densities of currently available sensors and either derive a complex analytical model or apply machine learning (ML) techniques. Such an approach has been used on sensor systems to create devices such as a gesture recognition system, a pressure sensor, and a robotic skin (24–27). As a step toward soft actuator proprioception, we present an elastomeric foam that can sense macroscopic deformation via embedded optical waveguides, using ML and statistical techniques to interpret transmitted light intensities.

Here, we present an elastomeric foam sensor system that we trained to sense when it is being bent and twisted. To achieve this goal, we embedded an array of optical fiber terminals into the base layer of an elastomeric foam (Fig. 1). The fibers served both to illuminate the foam and to detect diffuse reflected light. We bent and twisted the foam to known angles and recorded the intensity of the diffuse reflected light leaving each fiber. To produce models that predict the foam’s deformation state from the internally reflected light, we applied ML techniques to the data (Fig. 2 and movie S1). We chose ML over deriving a theoretical model because the latter would have required accurately characterizing a large number of independent variables: foam porosity, foam geometry, strut geometry, optical fiber placement, optical fiber terminal orientation, the refractive index of the silicone, the loss of the optical fibers, and the absorption of the silicone. Diffusing wave spectroscopy (DWS) in cellular and colloidal substances has previously been used to gather information about microstructural statistics (28); however, this technique does not yield macroscopic shape specificity and has not been applied to robotics. We combined DWS with ML to create a soft robotic sensor that can sense whether it is being bent, twisted, or both, and to what degree(s).
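The data-gathering procedure described above can be sketched as a simple acquisition loop. Everything below is a hypothetical illustration: `set_deformation` and `read_fiber_intensities` are placeholder stand-ins for the actual motor and photodetector interfaces (which are not specified here), and the fiber count and angle sweep are illustrative, not the authors' values.

```python
# Hypothetical sketch of assembling the training set: deform the foam to a
# known angle, record the diffuse reflected intensity at every fiber
# terminal, and store the reading alongside its deformation label.
import numpy as np

def set_deformation(mode, angle):
    # Placeholder for commanding the test rig to a known bend/twist angle.
    pass

def read_fiber_intensities(n_fibers=30):
    # Placeholder for reading one intensity value per fiber terminal.
    return np.random.rand(n_fibers)

samples, labels = [], []
for mode in ["bend+", "bend-", "twist+", "twist-"]:
    for angle in np.linspace(0, 30, 50):   # sweep of known angles (illustrative)
        set_deformation(mode, angle)
        samples.append(read_fiber_intensities())
        labels.append((mode, angle))

X = np.vstack(samples)   # one row of fiber intensities per labeled sample
print(X.shape)           # (200, 30)
```

Each row of `X` is then a feature vector for the ML models discussed below, with `labels` providing either categorical (mode) or real-valued (angle) training targets.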

Fig. 1 Foam assembly design.

(A) Left: Foam and optical fiber assembly in three stages of fabrication. Right: Cross section of foam and optical fiber assembly in three stages of fabrication. (B) Diagram of foam and optical fiber assembly.

Fig. 2 Sensor functionality.

(A and B) Optical fiber terminals from which light intensity is read. (C to E) Real images of deformed foam and optical fiber assembly. (F to H) Real images of deformed foam and optical fiber assembly overlaid with computer reconstruction of the assembly’s state.

To detect sensor deformation, we selected and evaluated two distinct approaches. The first approach used single-output classification to detect whether the sensor was being bent or twisted, followed by single-output regression to predict the magnitude. This approach allowed us to detect one deformation mode at a time. The second approach enabled us to detect bending and twisting simultaneously by using multi-output regression. To model the foam’s state for the first approach, we defined two variables: deformation mode and angle. Deformation mode is a categorical variable that can hold one of the following four values: bend positive, bend negative, twist positive, or twist negative. Angle is a real-valued number corresponding to the magnitude of the bend or twist experienced by the foam. By using the values of deformation mode as training data labels, we trained a single-output categorical model to predict the type of deformation. Then, by using the values of angle as training data labels, we trained four single-output regression models (one for each deformation mode) to predict the magnitude of the deformation after the deformation had been categorized. We compared three classifiers [k-nearest neighbors (kNN), support vector machines (SVMs), and decision trees] and six regression models [kNN, SVMs, decision trees, Gaussian processes (GPs), linear models, and multilayer perceptrons (MLPs; also known as neural networks)]. The best classifiers had a test error rate of 0, and the best regression models had a test mean absolute error of 0.06°. For the second, multi-output approach, we modeled the foam’s state as a 2D vector of real-valued numbers representing the bend and twist angles experienced by the foam. With this label format, we trained a multi-output regression model to predict the bend and twist angles simultaneously. We compared three multi-output regression models—kNN, linear models, and MLPs—and found that the best model had a test mean absolute error of 0.01°.
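The two modeling approaches can be illustrated with a minimal scikit-learn sketch on synthetic data. This is not the authors' pipeline: the fiber count, the intensity response in `fake_intensities`, and the angle range are invented stand-ins for the measured data, and only kNN is shown out of the classifiers and regressors compared in the text.

```python
# Sketch of both approaches on synthetic data, assuming each sample is a
# vector of diffuse reflected-light intensities (one per fiber terminal).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

rng = np.random.default_rng(0)
n_fibers = 30
modes = ["bend+", "bend-", "twist+", "twist-"]

def fake_intensities(mode, angle):
    # Hypothetical response model: each deformation mode attenuates a
    # disjoint subset of fibers in proportion to the angle, plus noise.
    w = np.zeros(n_fibers)
    lo = modes.index(mode) * 7
    w[lo:lo + 7] = np.linspace(1.0, 0.2, 7)
    return 1.0 - 0.01 * angle * w + rng.normal(0, 0.005, n_fibers)

X, y_mode, y_angle = [], [], []
for mode in modes:
    for angle in np.linspace(0, 30, 200):
        X.append(fake_intensities(mode, angle))
        y_mode.append(mode)
        y_angle.append(angle)
X, y_mode, y_angle = np.array(X), np.array(y_mode), np.array(y_angle)

# Approach 1: classify the deformation mode, then predict its magnitude
# with a separate single-output regressor per mode.
Xtr, Xte, mtr, mte, atr, ate = train_test_split(
    X, y_mode, y_angle, random_state=0)
clf = KNeighborsClassifier(5).fit(Xtr, mtr)
regs = {m: KNeighborsRegressor(5).fit(Xtr[mtr == m], atr[mtr == m])
        for m in modes}
pred_mode = clf.predict(Xte)
pred_angle = np.array([regs[m].predict(x[None, :])[0]
                       for m, x in zip(pred_mode, Xte)])
print("mode error rate:", np.mean(pred_mode != mte))
print("angle MAE (deg):", np.mean(np.abs(pred_angle - ate)))

# Approach 2: one multi-output regressor predicting signed (bend, twist)
# angles simultaneously; kNN regression is natively multi-output.
signs = {"bend+": (1, 0), "bend-": (-1, 0), "twist+": (0, 1), "twist-": (0, -1)}
Y = np.array([np.array(signs[m]) * a for m, a in zip(y_mode, y_angle)])
Xtr2, Xte2, Ytr2, Yte2 = train_test_split(X, Y, random_state=0)
multi = KNeighborsRegressor(5).fit(Xtr2, Ytr2)
print("multi-output MAE (deg):", np.mean(np.abs(multi.predict(Xte2) - Yte2)))
```

Swapping `KNeighborsRegressor` for `MLPRegressor` or `LinearRegression` (and `KNeighborsClassifier` for `SVC` or `DecisionTreeClassifier`) would reproduce the kind of model comparison described in the text; the errors on this toy data are not comparable to the reported 0.06° and 0.01° figures.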