The Social And Digital Systems (SANDS) Group is a transdisciplinary research collective within the School of Arts, Media, and Engineering at Arizona State University. Our materially-oriented work draws on approaches from computer science, interaction design, the humanities, and philosophy of technology. Our current research examines bottom-up participation in science, DIY (Do It Yourself) methods, and the mechanisms by which expertise and knowledge are scaffolded among communities of practice.
In order to visualize the techniques, process, and emotions of sketch artists, we have sought to display elements of traditional drawing processes. To do so, we created an interactive system that unobtrusively tracks the freehand drawing process (the movement and pressure of the artist’s pencil) on a traditional easel. The system outputs the recorded information as video renderings and 3D-printed sculptures.
To test our system, we held a user study with 6 experienced artists who created multiple pencil drawings using our easel. The resulting digital and physical outputs from our system revealed vast differences in drawing speeds, styles, and techniques. The easel, video renderings, and bas-relief sculptures will be presented at the ACM Twelfth International Conference on Tangible, Embedded and Embodied Interactions (TEI 2018) in Stockholm, Sweden. You can read the write-up here: (TEI 2018 publication)
Our interactive system is a traditional drawing easel which has been augmented with a pencil tracking system and a pencil pressure sensing system.
To track the movements of the pencil, our system uses two cameras, which are mounted on the top and left sides of the easel. Images captured by the cameras are used to determine the vertical and horizontal location of the pencil. To make the tracking easier, we covered the drawing pencils in a layer of blue ink and mounted green colored background strips along the bottom and right edges of the easel. The horizontal and vertical locations of the drawing pencil are determined by locating the blue color blob created by the pencil against the green background.
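As a rough illustration of this step, the per-camera search can be sketched as a one-dimensional blob centroid: scan the pixels along the green strip and average the indices that read as blue. The thresholds and pixel values below are invented for illustration and are not the system's actual calibration.

```python
# Sketch of the per-camera blob search (hypothetical thresholds, not the
# lab's actual pipeline). Each camera sees the blue pencil tip against a
# green strip; the centroid of "blue" pixels along the strip gives one
# coordinate (the other camera supplies the second axis).

def locate_pencil(scanline):
    """scanline: list of (r, g, b) pixels sampled along the green strip.
    Returns the centroid index of blue-dominant pixels, or None."""
    blue_idx = [i for i, (r, g, b) in enumerate(scanline)
                if b > 150 and b > g + 40 and b > r + 40]  # crude blue test
    if not blue_idx:
        return None  # pencil lifted or occluded
    return sum(blue_idx) / len(blue_idx)

# Example: green background with a blue pencil spanning indices 4-6.
green = (40, 200, 60)
blue = (30, 40, 220)
line = [green] * 4 + [blue] * 3 + [green] * 5
print(locate_pencil(line))  # centroid of indices 4, 5, 6 -> 5.0
```

In practice this would run on full camera frames (e.g. via HSV thresholding), but the centroid-of-a-color-blob idea is the same.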
The pencil pressure sensing system is based on acoustic sensing, since we observed that the sound created by friction between the pencil and paper can be used to approximate the pencil pressure. While this relationship is not reliable enough to measure subtle variations of pressure, it is sufficient for detecting the major changes. To record sound, we placed 12 modules (each containing a microphone and microcontroller) in a 3 × 4 grid on the back side of the easel. Weighted averages of the three sensors closest to the pencil are used to determine the pencil pressure exerted on the drawing surface.
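The fusion step could be sketched as inverse-distance weighting over the three microphones closest to the tracked pencil position. The grid layout matches the 3 × 4 arrangement described above, but the weighting scheme and amplitude values are assumptions for illustration, not the system's actual calibration.

```python
import math

# Hypothetical sketch of the pressure estimate: inverse-distance weighting
# over the three microphone modules nearest the tracked pencil position.
# Mic layout is the 3 x 4 grid from the write-up; weights and amplitudes
# are illustrative assumptions.

MICS = [(x, y) for y in range(3) for x in range(4)]  # 3 x 4 grid

def estimate_pressure(pencil, amplitudes):
    """pencil: (x, y) in grid units; amplitudes: sound level per mic."""
    dists = sorted((math.dist(pencil, m), i) for i, m in enumerate(MICS))
    nearest = dists[:3]                      # three closest sensors
    weights = [1.0 / (d + 1e-6) for d, _ in nearest]
    total = sum(weights)
    return sum(w * amplitudes[i] for w, (_, i) in zip(weights, nearest)) / total

# Pencil sitting directly on mic 0: its amplitude dominates the estimate.
amps = [0.9] + [0.1] * 11
print(round(estimate_pressure((0, 0), amps), 3))
```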
Visualizing the Data
To display the recorded data, we chose to render the pencil speed and the pressure as an animation. In the animations, the pencil speed is determined by calculating the distance between data points. The pencil strokes which were drawn in slow, medium, or high speeds are represented distinctly in the visualization using green, yellow, and red colors. The different pressure levels are depicted using different line thicknesses.
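A minimal sketch of that speed classification might look like the following; the numeric thresholds are made up, but the mapping of slow, medium, and fast strokes to green, yellow, and red follows the description above.

```python
import math

# Minimal sketch of the speed classification used in the animations.
# Speed is the distance between consecutive samples (the samples are
# assumed to arrive at a fixed rate); the thresholds are invented.

def stroke_colors(points, slow=2.0, fast=6.0):
    """points: sampled (x, y) pencil positions. Returns one color per
    segment: green (slow), yellow (medium), or red (fast)."""
    colors = []
    for a, b in zip(points, points[1:]):
        speed = math.dist(a, b)
        if speed < slow:
            colors.append("green")
        elif speed < fast:
            colors.append("yellow")
        else:
            colors.append("red")
    return colors

path = [(0, 0), (1, 0), (4, 4), (14, 4)]
print(stroke_colors(path))  # ['green', 'yellow', 'red']
```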
In addition, we created another program that generates 3D bas-relief models displaying the drawing data. Bas-relief is a type of sculpture that consists of a projected image with little overall depth, such as Egyptian hieroglyphs or coins. In our models, the thickness of the ridges is based on the speed of the drawing, while the height of the ridges is based on the pressure of the drawing stroke. The height of the ridge can be compounded if several lines are drawn over the same area.
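The compounding behavior can be sketched as a heightmap that accumulates a pressure-scaled deposit wherever a stroke sample lands. The grid size and the pressure-to-height scale here are illustrative assumptions, not the actual model generator.

```python
# Hedged sketch of the bas-relief compounding: each stroke sample deposits
# height proportional to pressure, and heights accumulate when strokes
# revisit the same cell. Grid size and scale are illustrative assumptions.

def build_relief(samples, width=8, height=8, scale=1.0):
    """samples: (x, y, pressure) triples; returns a heightmap grid."""
    grid = [[0.0] * width for _ in range(height)]
    for x, y, pressure in samples:
        grid[y][x] += pressure * scale  # overlapping strokes compound
    return grid

# Two strokes over the same cell stack up to the height of one firm stroke.
strokes = [(2, 3, 0.5), (2, 3, 0.5), (4, 3, 1.0)]
relief = build_relief(strokes)
print(relief[3][2], relief[3][4])  # 1.0 1.0
```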
Artists Exploring the System
To explore the possibilities of our system, we conducted a study with six local artists, including MFA students, cartoonists, and a primary school art teacher. Each artist was invited to a drawing session during which they created three sketches, two of objects in the room (a lamp and flower pot) and one of whatever they wanted. In between creating the sketches, the artists were shown the video rendering and the 3D bas-relief rendering of the sketch they had just completed. All artists who took part in our study considered our tracking system to be unobtrusive and were interested in seeing the visualizations of their pencil movements.
The video renderings revealed unique characteristics among the drawing styles of participants. For example, they clearly showed that some participants, particularly the cartoonists, tend to use thicker lines in their drawings when compared to the others. The artists felt that the system could be useful both for teaching beginning artists and as a tool to study the evolution of a particular artist’s style.
Screenprinting is one of the most popular DIY printing methods and has been used for many years to produce static visual representations in various scales and forms.
Part of our work at SANDS explores screenprinting as a DIY fabrication process that can be used to embed interactive properties into a range of substrates including paper, fabric, vinyl, wood, or acrylic. This project is aligned with recent trends in “smart” materials, whereby instead of using external components, responsive behavior and/or visualization is incorporated into the material itself.
Using off-the-shelf materials, we developed low-cost light-sensitive, temperature-responsive, and conductive screenprinting inks. We applied these inks in manual screenprinting to consistently reproduce photochromic, thermochromic, and conductive properties across different substrates. To explore possible application areas, we held a workshop with local artists who experimented with our screenprinting methods and applied them to their practice. The workshop resulted in two interactive pieces showcased at a local gallery.
This summer, we set out to explore the opportunities for applying DIY smart material fabrication in youth STEAM (STEM + the arts) domains. We developed a week-long summer camp module for junior high school youths as part of a Digital Culture outreach program at our university.
During the first day, students explored photochromism by mixing UV-responsive pigments with screenprinting inks and exploring the colors with a UV light and sun exposure. Students also worked in groups to set up screens from pre-cut vinyl stencils and make their first prints using the photochromic inks they created.
Days 2 and 3 introduced basic electronic concepts and students worked on designing their own stencils in Adobe Fireworks and screenprinting a folding switch circuit. This project also taught students the concept of “registering” or aligning multiple printed layers on the same material. The final project included a conductive strip that served as part of a folding switch, an LED and coin cell battery that completed the circuit, and a thermochromic image that was printed to decorate the switch.
Days 4 and 5 were used to create a screen-printed storyboard that illustrated a narrative created by the entire class. The inks and concepts learned in the class served as prompts for each frame of the storyboard and served as action points in the story (the final story consisted of four frames which used regular, photochromic, thermochromic, and conductive elements).
We see screenprinting as parallel to many existing, successful initiatives that incorporate tangible media into art and science curriculums. In our work, screenprinting combines elements from the fine arts, including one of the oldest forms of printmaking, with modern technologies such as vinyl cutting, and advancements in material science.
A particularly unique feature of screenprinting is that it naturally supports collaborative making. The physical aspects of the printing process and the reproducibility of the prints enables individuals to make and keep a copy of the group project. This makes screenprinting an exciting platform for STEAM, as collaborative exploration is a key tenet of informal learning.
Our STEAM course shows the potential of manual screenprinting as a DIY fabrication technique for youth makers. Our overall findings demonstrate several unique features of screenprinting: a low barrier to entry for smart material fabrication, a collaborative maker practice, and a creative integration of STEAM concepts.
This is a continuation of Melting Materials for Mold Making, where we describe some of our experiments to create molds of wax, chocolate, and jello using 3D printed models and silicone molds, and 3D-Designed Molds for Baking and Freezing, where we experiment with baking and freezing food using silicone molds.
Our focus is using 3D prints to fabricate molds for culinary exploration. To determine what types of 3D designs and recipes work well to create customized, detailed dishes, we held a workshop with culinary enthusiasts.
Participants were invited to attend a workshop, which introduced them to our software system and workflow for generating 3D food molds. Over the course of the following week, they submitted drawings and photographs to be converted into 3D prints by our system. The participants then experimented with different recipes in their own homes, and kept in touch with the group by sharing their designs and recipes through a private group on a social network. During this time, they also had the option to create additional designs, and those were 3D printed and made into silicone molds for them to experiment with.
Types of Molds and Designs
The most common participant requests were to make multiple silicone molds of each print, create interconnected designs, and fabricate additional silicone molds of household items.
Several of the participants requested the option to make multiple silicone models of each 3D design. While it takes 2-3 hours to create one of the 3D prints, each silicone mold can be made in about thirty minutes. As such, participants were able to make several silicone molds from a single 3D print. This is a clear benefit of using molds over directly 3D printing the food, since having multiple molds allowed the participants to have several copies of the food design made simultaneously, whereas a 3D printer can only create one copy at a time.
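The time savings can be made concrete with a quick back-of-the-envelope calculation, using 2.5 hours as a midpoint of the stated 2–3 hour print time:

```python
# Back-of-the-envelope comparison of the two production routes.
# 2.5 h per 3D print is a midpoint of the 2-3 hours stated above;
# 0.5 h per silicone mold matches the "about thirty minutes" figure.

PRINT_HOURS = 2.5
MOLD_HOURS = 0.5

def mold_route(copies):
    # one master 3D print, then one silicone mold per simultaneous copy
    return PRINT_HOURS + copies * MOLD_HOURS

def direct_print_route(copies):
    # a single printer makes one copy at a time
    return copies * PRINT_HOURS

# For one copy direct printing is faster; for several copies, molds win,
# and the mold copies can also be filled simultaneously.
for n in (1, 4):
    print(n, mold_route(n), direct_print_route(n))
```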
Participants also noted the usefulness of interconnected designs. Such designs are beneficial because they allow relatively simple designs to be multiplied into complex forms, and, by changing the number of molds used, allow meals to be scaled to the needs of the person cooking.
below: examples of interconnected designs
In addition to making molds from 3D prints, two participants also made silicone molds of household objects. The downside of these deeper shapes is that they limit the types of food that can be molded: to remove the original reference object, the silicone mold had to be cut in half and then pressed back together while the food set. While this works for thick batter or melted chocolate, participants found that materials like liquid gelatin or egg whites leak out through any cuts in the silicone before they have time to harden.
below: example of molds made from household objects
Recipes and Food Experiments
Most of the participants focused on single-ingredient foods that could easily transform from a liquid to a solid state, such as chocolate, egg yolk, gelatin, pancake, and flan. As our participants discovered from their experiments, other materials, such as wonton wraps, can also be shaped in the molds, though they require simpler molds composed of smooth surfaces.
below: example of wonton wraps
For designs, participants suggested using food appearance and shape to encourage diners to make healthy choices. In addition, they were interested in using the shape of the food to confuse or intrigue the diner as to what taste they may encounter.
Overall, our participants’ experiments revealed that molds with smooth surfaces worked well universally, whereas molds with fine details worked best with frozen and gelatin-based foods. Hot foods were the most problematic, as they are often soft and difficult to remove from the molds. Depending on the ingredients, it may be more effective to freeze the meal in the mold, remove it, and then re-heat the food.
In the future, 3D models can be tailored more specifically to the foods they are applied to. For instance, our software might be altered to preview several different 3D models from one 2D image to show variable levels of detail and depth. Each 3D model could then be customized to maximize detail based on the specific attributes and limitations of the different foods being worked with. That way, culinary enthusiasts could visualize and compare what the finished dish would look like depending on their design and choice of ingredients.
below: examples of same molds being used to make different foods. In the future, models could be generated to best serve different food materials
Since our participants enjoyed and appreciated the social-sharing aspects of this study, it could be beneficial to create a broader social sharing platform to aggregate 3D designs and recipes, thereby scaffolding a broad base of knowledge to advance and expand what food enthusiasts can create.
In addition, our approach offers insights for developing future high fidelity food-based 3D printing technologies. For example, our study shows that there is a definite interest in providing healthier options, as well as a desire to create several portions simultaneously in order to facilitate a shared dining experience. This indicates that future food 3D printers could focus on offering expanded food options outside of sweets and treats, and explore ways of generating food that encourages a communal, rather than isolated, dining experience.
Since food 3D printing technology could become ubiquitous in future years, it would be prudent to make sure the technology does not inhibit, and ideally encourages, healthy eating and social engagement.
The solar cooking app is a tool for conveniently documenting recipes for solar-cooked meals as well as designs for solar ovens. It allows solar cookers to store their recipe and oven designs, share them with others, and interact with other solar cookers. Most importantly, the iOS application focuses on the differences between conventional cooking and cooking food using the sun, enabling quick and easy creation of recipes while still maintaining accuracy.
Figure 1 Figure 2
In its current version, the app has a login view (Figure 1) that supports login via Facebook, and a main view (Figure 2) that displays all the recipes and ovens added to the app so far, along with buttons for navigating to other functionality. Tapping a recipe shows its details (Figure 3). Tapping the settings button lets the logged-in user view, and potentially edit, their own profile (Figure 4). The search button lets users search all existing recipes (Figure 5) using a wide variety of constraints. Finally, the + button adds a new recipe or oven (Figure 6).
Figure 3 Figure 4
To gauge what our target users would want from an app geared specifically to solar recipes, our team conducted a workshop with solar cooking enthusiasts. Throughout the workshop, we learned about the many factors to consider when using the sun’s heat to cook, and compiled a list of possible attributes a recipe could have, including but not limited to: oven type (a recipe has to be custom geared to work with a particular oven type); outside temperature (distinct from the internal temperature of the oven’s cooking compartment, and a factor in a recipe’s chance of success); altitude; and tags representing the need to redirect the oven, potentially replaceable by a frequency-of-redirection attribute. Some of these fields apply to ovens as well as recipes.
During the workshop, users were also prompted to use the then-current version of the app. Their interactions revealed issues with the app’s flow and cosmetic appearance, and helped pave the way for new ideas to guide the upcoming user interface development.
Figure 5 Figure 6
We hope to have an initial version/prototype of the app available to users for testing this summer of 2017. By then, it is our aim to have a polished app that, in addition to the main functionalities mentioned, also allows commenting on recipes and ovens posted to the app. In an effort to continuously improve the app, testers will be encouraged to submit their experience with the interface and suggest any ideas or improvements that could help us make the app better.
On July 12th, students from ASU Digital Culture and The Design School presented their LIFE/LIGHT project at the Biodesign Challenge summit at MoMA, New York. The project was developed in the AME 410 Interactive Materials course and finalized for the competition.
This was the third edition of the summit, which engages enthusiasts who combine design with biotechnology. It is one of the largest biodesign events in the US and draws attention from a growing community of designers and researchers.
Around twenty teams from various schools and countries participated in the Challenge this year. First prize went to the team from Central Saint Martins, UK, which presented the concept of Quantumworm Mines. The runners-up were students from the University of Edinburgh, UK, whose research project “UKEW 2029” drew parallels between biology and socio-political trends.
What if we worked with the natural world that surrounds us to design with and within its natural patterns, schedules, and properties instead of forcing it to work inharmoniously around ours? How can we be more aware of how we impact the environment we share — even at a microscopic level? #Biodesignchallenge #asu
A post shared by ASU Herberger Institute (@asuherberger) on Jun 23, 2017 at 11:55am PDT
The ASU project explored the potential of bioluminescent unicellular organisms and examined questions of cohabitation and control in a man-made environment. LIFE/LIGHT is an algae-driven living building system that produces fuel and light when properly cared for.
We were designing in the middle ground between artifact, living nature, and humanity, where the behavior of each component of the system influences its performance (see Figure 1).
Figure 1. Concept diagram.
Choosing our components from the broad fields of nature and artifacts, we decided to look into the relationship between dinoflagellates, capricious algae that illuminate the ocean near a number of coastal cities, including San Diego, and architecture, the medium for most human activities.
With increasing concerns about ecology, the notion of living architecture has arisen. In the age of the Anthropocene, living buildings adapt to the constant flux of technological, social, and environmental conditions through integration with living nature.
Perhaps the best example is the rice paddies of South Vietnam (Image 1), a sustainable artifact of agriculture and the built environment that has existed for centuries.
Image 1. Rice paddies in South Vietnam, stock photo.
Among other inspirational examples are the Algae-fueled building in Hamburg designed by ARUP (image 2), the proposal by Mitchell Joachim for homes grown like plants (image 3), and the interactive installation by David Benjamin that visualized ecological conditions for the citizens of Seoul (image 4).
Image 2. BIQ algae-powered building in Hamburg, image courtesy of ARUP.
Image 3. FabTreeHub, image courtesy of Mitchell Joachim.
Image 4. Living Light, Seoul, image courtesy of David Benjamin and The Living New York.
Image 5. Bioluminescent dinoflagellate, stock photo.
Dinoflagellates are unicellular algae plankton, chosen for the LIFE/LIGHT project due to the following qualities:
Bioluminescence is the ability of living organisms to produce light. The “cold light” produced by dinoflagellates wastes no energy compared to conventional electrically generated light.
When agitated by movement, the algae colony produces light for a short period of time.
Dinoflagellate photosynthesis converts CO2 into glucose, which provides residual potential energy within cultures long after decay.
Conversion to biofuel
Dinoflagellates may contain large amounts of high-quality lipids, the principal component of fatty acid methyl esters, making harvested cultures a suitable bioresource for biodiesel production.
Dinoflagellates are marine organisms that thrive in a natural medium of marine water, making them suitable for growth in coastal cities using only natural salt water resources.
ALGAE AT DAR
The dinoflagellates were grown in SANDS lab as part of the Digital Art Ranch at ASU.
This space supports DIY biology as well as other forms of researching interactive materials (image 6).
Image 6. The experience of working with dinoflagellates, photos from DAR.
Growing algae takes a lot of patience and attentiveness. Not only did we have to keep the algae in a specific medium for lack of fresh marine water, but we also had to synchronize their day and night cycles with the lab’s operating hours.
During the day cycle (~12 hours), photosynthesis happens and the algae transform CO2 into glucose. During the night cycle (also ~12 hours), they multiply and exhibit bioluminescence when agitated. Like humans, dinoflagellates are active during the day, rest during the night, and are very irritated when their rest is interrupted.
The optimal living conditions for dinoflagellates are room temperatures of 18 to 24°C (65 to 75°F) with no rapid temperature fluctuations. Lighting was provided by a white LED lamp, which can be swapped for a cool white fluorescent light.
Time was a limiting factor: cultures took a week or two after shipping to regain their properties, which left open the possibility that an ordered culture was no longer alive on arrival. Sub-dividing cultures takes another 3–4 weeks for the subcultures to regain their properties, and testing cultures has to be done over a span of days to weeks to determine the necessary culturing actions.
A typical dinoflagellate flash of light contains about 100 million photons and lasts about a tenth of a second. For testing, we suggest using a control sample and comparing luminous values on a scale of 10. One must also be careful not to “stimulate” the culture before actually measuring its light output, because the first flash produces much more light than each successive flash.
The biggest challenge was patience and constant awareness of the cultures. They can be unforgiving: once they bioluminesce, they need additional time to recharge before the effect can be seen again. Documenting the effect at an appropriate time was also a problem, since for the circadian rhythms to align with daytime documentation, the cultures had to be in a night cycle during the day. This created a problem of space: we had to devise a small container that would keep temperatures low and block light pollution from the room it was placed in.
DINOFLAGELLATE BUILDING FACADE SYSTEM
The project is a living building system that attaches to buildings in coastal cities and relies on algae for light and fuel production. It uses ocean water as a medium for the dinoflagellates. The system consists of tubes filled with algae-infused fluid, distributed operational nodes that control the water flow, and a controlling device.
Image 7. Facade system sketch
The system works in three modes of operation: day, night, and harvesting organic residues for biofuel production.
Figure 2. System elements
During the day, water is supplied from the ocean and distributed to LIFE/LIGHT and other building systems, e.g. cooling. The algae-infused fluid flows into the tubes attached to the building facade, where it is exposed to the sun.
Figure 3. Day mode
At night, the algae-infused water fills the interior tube system, which prevents its exposure to city night illumination. When moved, the fluid gives off cold light that supports quiet night activities inside the building.
This mode involves the most interaction between humans and the system. Humans and algae share the same habitat and have to live in harmony for the system to work. If the night cycle is disrupted by a person’s late-night activities, the algae do not multiply. When a person moves within the space, the algae tubes move too, triggering bioluminescence and illuminating the space.
Figure 4. Night mode
At the end of dinoflagellates life cycle, they become a residual organic matter that can be harvested in order to produce biofuel.
Figure 5. Harvesting mode
The operational node illustrates a highway of tracks within the system: numerous tubes ensure the cultures are filtered, harvested, and transported to the correct location.
Image 8. Operational node
Inspired by thermostats, the control unit provides a basis for displaying information and controlling additional systems in a house.
The three buttons allow the most critical options of the system to be chosen.
Additionally, the display holds a small sample of the dinoflagellates to be tested. Depending on the condition of the sample compared to the previous one, the filter option can be accepted, cycling the dinoflagellate culture and providing more medium.
Image 9. Controlling node
The design needed to offer a simple form of communication to the user who performs maintenance on the architecture-embedded system. Not only does it provide information on the LCD screen, but it can also control other system operations as needed.
Image 10. Inspiration for the controlling node. Image courtesy of Honeywell.
The questions then are:
What is the boundary between an artifact and nature? Is the LIFE/LIGHT system alive?
Would you co-inhabit a space with algae and adjust your habits so that both species thrive, or control it remotely and transform living creatures into a utility?
The project was submitted by Loren Benally and Veronika Volkova, with contributions from Jacob Sullivan and Ryan Wertz.
As part of the Digital Culture Summer Institute, the SANDS lab organized a bioart module for junior and high school students. Working with Cassandra Barrett and Kat Fowler, we developed a week-long summer camp course that invites students to create petri dish art using bacteria and antibiotic substances.
Our design studio was recently approved for BSL-1 (biosafety level 1) clearance, which means we can now (officially) work with minimally risky bacteria and procedures. Fun fact: we might be the first design lab to get this clearance through the ‘proper’ layers of paperwork and inspections at our University!
Our work embraces the DIYbio movement, which aims to make biology accessible outside of professional laboratories. So during the first day of camp, we showed the students how to sterilize lab equipment with a pressure cooker. According to the CDC guidelines, this means the materials must be kept at 121°C and 15 psi for 30 minutes. It’s usually a pretty exciting 30 minutes to be watching the pressure cooker.
The next few days of the camp were spent practicing aseptic (sterile) lab technique to streak plates with different pigmented bacteria. We used our trusty old DIY incubator that we made in-house to culture our art at 26°C.
We used several regular antibiotics (Ampicillin and Streptomycin) as well as antibiotic items the students brought from home to shape the growth of the bacteria. Essentially, we did the Kirby–Bauer diffusion test for antibiotic sensitivity, whereby growth is hindered around the effective antibiotics.
Our students brought an impressive and very creative range of substances to test for antibiotic properties, including hand soap, pennies, dog antibiotics, Neosporin (very effective), Tylenol, and toothpaste (not very effective at killing bacteria, it turns out!).
We also added food coloring to our media to add a background color to the petri dish art.
For the final project, we asked the students first to sketch out the layout for their bacteria art piece, including what bacteria, background color, and antibiotic substances they wanted to use on their petri dishes. Can we say we did rapid lo-fi prototyping for biology 🙂 ?
The resulting bacteria images inspired us to write bioart haikus, and some of these were pretty deep.
Finally, the students used a graphic design program to convert their favorite petri dish images into stencils for vinyl cutting and screenprinting. The last day of screenprinting was chaotic and messy, but order emerged just like the haiku said 😉
Huge thanks to everyone who helped run this awesome class, and to the creative and thoughtful students who are now excited to take a bio course at their schools even if they don’t get a printed T-shirt out of it next time.
This is a continuation of Melting Materials for Mold Making, where we describe some of our experiments to create molds of wax, chocolate, and jello using 3D printed models and silicone molds.
Here we are presenting new additions to the model making software and further experiments with different types of food.
3D Model Generator Additions
The program we have been using to generate our models works by taking a black and white 2D image and transforming it into a depth map, where the lighter parts of the image are raised up and the dark parts are lowered. In order to save on time and material costs for the 3D printing, we have also made the models hollow in the back.
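A minimal sketch of that depth-map transform, assuming grayscale input in the 0–255 range and an arbitrary maximum depth (both choices are illustrative, not the program's actual parameters):

```python
# Simplified sketch of the depth-map step: lighter pixels are raised,
# darker pixels lowered. The 0-255 input range and 5.0 max depth are
# illustrative assumptions.

def image_to_depth(gray, max_depth=5.0):
    """gray: 2D list of grayscale values in 0..255.
    Returns a heightmap where white (255) maps to max_depth, black to 0."""
    return [[v / 255.0 * max_depth for v in row] for row in gray]

img = [[0, 128, 255]]
print(image_to_depth(img))  # black stays at 0.0, white rises to 5.0
```

The real program additionally hollows out the back of the model to save print time and material.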
Below-left: photograph of Antonio Canova’s Bust of Venus Italica. Center: bas-relief 3D model generated from that picture. Right: back of the 3D model
In addition, we created a version of the program that uses a color signifier (in this case, red) to subtract part of the image from the finished model. This way, the resulting model will not be limited to the rectangular dimensions of the original 2D image.
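The red-signifier subtraction can be sketched as a per-pixel mask layered onto the same depth mapping; the "redness" test below is an assumption, not the program's actual rule.

```python
# Sketch of the red-signifier subtraction: pixels that are clearly red
# are dropped from the model entirely, so the result is no longer bound
# to the rectangle of the source image. The redness test is an assumption.

def depth_with_mask(pixels, max_depth=5.0):
    """pixels: 2D list of (r, g, b). Red areas return None (no geometry);
    other pixels map brightness to depth."""
    def cell(r, g, b):
        if r > 180 and g < 80 and b < 80:   # crude "signifier red" test
            return None                      # subtracted from the model
        gray = (r + g + b) / 3
        return gray / 255.0 * max_depth
    return [[cell(*p) for p in row] for row in pixels]

row = [(255, 0, 0), (255, 255, 255), (0, 0, 0)]
print(depth_with_mask([row]))  # [[None, 5.0, 0.0]]
```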
Below: model generated using the version of the program that subtracts red space. Left: Antonio Canova’s Bust of Venus Italica with red background. Middle: generated model. Right: back of model.
As in Melting Materials for Mold Making, silicone putty is used to create a negative of the 3D model. All food is then cast in the silicone putty mold and has no direct contact with the 3D print. This is because 1) the flexibility of silicone makes it significantly easier to remove molded food after it has hardened, and 2) while we are using food-safe 3D printed materials, the temperature limits of their food safety are not entirely known. For the following food tests, we specifically chose Silicone Plastique putty, since it is food safe and can withstand temperatures up to 450 degrees Fahrenheit.
Before each baking test, the silicone mold was thoroughly washed and sprayed with cooking spray.
Sugar cookie – We found that Pillsbury sugar cookies (oven, 350 F, 12 minutes) did not closely stick to the mold, largely because of air pockets that formed in the cookie. Below-left: silicone mold. Right: sugar cookie.
Pancake – While we could not get a complete result with the Aunt Jemima pancake mix, we were able to get some promising details in the pancakes, and further experiments with cooking time / temperature / pancake mix could likely result in a functional pancake mold.
Below-left: (oven, 375 F, 12 minutes) Pancake was still gooey
Below-center: (oven, 375 F, 17 minutes) Pancake was fluffy, though still slightly undercooked. Part with detail (hair) stuck to silicone mold
Below-right: (oven, 375 F, 12 minutes) Significantly less batter was poured into the mold with the hope that it would cook faster. This was successful, and the resulting pancake was fully cooked. Part of the pancake was stuck to the mold, but some nice detailing (bun and part of hair) was successfully preserved.
Eggs – The eggs cooked fairly evenly in the oven and were overall easy to remove from the mold without causing any damage. They were also successful in capturing details from the silicone mold.
Sunny side up (oven, 350 F, 12 minutes)
Below-left: sunny side up egg still in mold. Center: egg removed from mold with yolk still intact. Right: yolk broken open
Whisked egg (oven, 350 F, 12 minutes)
Below-left: whisked egg still in mold. Right: egg taken out of mold. Part of the egg was still slightly gooey, which caused a chunk of the hair to become stuck to the silicone mold.
Liquid was poured into the silicone mold and then placed in the freezer overnight. Overall, the frozen models were the most successful in capturing fine details from the silicone mold.
Below-left: ice (frozen tap water). Right: popsicle made from Bolthouse Farms breakfast smoothie.