Author Archive

Grow You A Great Big Web Of Wires


Grow You a Great Big Web of Wires is a simple project built from copper tape, an Arduino Micro, lots of LED lights, and a mess of wires. I wanted to explore what our homes and indoor spaces might look and feel like with artificial stand-ins for nature. This project was part sculpture, part installation, and part futures imagining.

The leaves of the ‘plants’ are made of conductive copper tape and activate string LEDs when touched together. They are beautiful, but not alive, nor do they clean the air. The leaves look like leaves, the wires look like roots, but at best they are a facsimile. The plants are meant to spark conversations about the place of nature in our indoor lives.

“This project came to light from a love of plants; I found myself in a contemplation of what our homes might look like with more of the artificial and less of the natural. The forms I created are plant-like, but are missing that chlorophyllic energy of something alive, though there is electrical energy flowing through each. These little plants are imaginings of a halfway point of the uncanny natural valley.” – Grow You a Great Big Web of Wires, 2018

Idea Process

In my initial thinking about this project I wanted to explore the gestures and communication design of plants and trees in nature.

I began pondering these questions to spark my ideation:

  • How do we think about communication methods of nature?
  • How can we use these notions to improve the way we as humans communicate?
  • How does the medium of physical technology change how we interact with these creations?
  • What are these things that plants talk about? Imagine a conversation between a pair of plants.
Idea sketches

Material Process

I began by imagining how these plants would look. I knew that I wanted to recreate the way traditional house plants look in our homes. I went to source some wire at the hardware store and came back with 8 meters of galvanized steel wire.

Lots of experimentation with leaf shape and engraving veins into the copper tape. This was one of the shapes I was happy with.

The galvanized steel was pliable and easy to cut, but was VERY messy!

I began manufacturing many leaves for the tree form. It reminded me of a beautiful fall day, but not quite.

I began sculpting with the galvanized wire and initially had an idea of a tree shaped from a thousand wires all twisted together; however, it proved more difficult than I had hoped. It could certainly happen, but it would take much more time than I had. So I settled on creating a tree form out of some copper pipe I had lying around, then wound the steel wire around and through the pipe to create a base sketch of a tree, taking cues from the plants in my house to structure the stems and leaf patterns.

In creating this tree form I had a good contemplation session about how different metal is from plant life.

After building the form of the tree I began another plant. In working with the copper tape while making the leaves, I began to get a sense of the best way to use this material. I keep some Sansevieria plants in my home in the places where there is not a lot of light. They are an extremely hardy plant with beautiful sculptural leaves; they don’t need to be watered often and can live through almost anything, so they seemed like an excellent candidate to recreate. Using big long pieces of copper tape and solid core wire, I began to form the plant by making leaves of different lengths and anchoring them in the plant pot using a base made out of a plastic lid.

My first Sansevieria leaf and my first proof of concept, using a copper tape switch and a simple pullup resistor connection.

The final Snake plant

Code and Circuit Process

The circuit for this project was graciously offered to me by Veda Adnani from her project The Kid’s Puzzler. It was quite a simple setup utilizing the digital pins on the Arduino Micro. The proof of concept was easy to create; however, there was much testing to be done with the copper tape once it was formed into the leaves and attached to the LEDs. There were a lot of real-world differences once the circuit was connected: sometimes the copper didn’t make a connection, sometimes the LED wires didn’t connect in the breadboard. It ended up taking a lot of troubleshooting and patience to come up with a setup that would work every time.
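At its heart, the setup is a digital read with the internal pullup enabled. Below is a minimal sketch of that pattern; the pin numbers are assumptions, and the real circuit (in the repo linked below) switches a whole string of LEDs rather than a single pin, so treat this as an illustration rather than the project’s actual code.

```cpp
// Minimal copper-tape switch: the leaves close the circuit to ground,
// pulling the input LOW and lighting the LEDs while contact holds.
// Pin numbers are assumptions, not the project's actual wiring.
const int leafSwitchPin = 2; // copper tape leaves, wired between pin and ground
const int ledStringPin = 9;  // string LEDs (through a transistor if they draw much current)

void setup() {
  pinMode(leafSwitchPin, INPUT_PULLUP); // reads HIGH until the leaves touch
  pinMode(ledStringPin, OUTPUT);
}

void loop() {
  bool leavesTouching = (digitalRead(leafSwitchPin) == LOW);
  digitalWrite(ledStringPin, leavesTouching ? HIGH : LOW);
}
```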

Circuit Diagram

Github Code

https://github.com/lc-w/GYAGBWoW

Final Presentation

The second part of this project was the actual installation of the plants. I decided to display the work in the hallway between the Experimental Media Room and the main Graduate Gallery, a transitional space that could be peaceful and allow viewers a quiet moment in the dark to reflect on these plants. The one issue I had been wrestling with all week was the walls, which were covered in a wheat-pasted repeat pattern created by Inbal Newman. I had huge plans of covering the wall in large paper, or a projection, or even hanging a long white curtain in front of the work. But through the process of making the wire plant I realized that Inbal’s art and my creations had a good synergy and complemented each other, so I began to wonder if the works could be incorporated together. The final display was directly in front of the piece, and it did work well.

The title card in front of Inbal’s wheat paste wall.

Title and Description Cards

The whole scene lit up and ready to glow.

The installation was quiet and contemplative: the two plants were placed on plinths with the title and info cards next to them, and there was a mess of string lights everywhere. There was a lot of positive feedback. I was happy with the installation as a first iteration, but much like the first iteration of Grow You a Jungle, I want to go bigger! I am envisioning this project in a room with a multitude of copper plants, perhaps set up in a kitchen or bedroom area as the setting. In the future I would like to join the two projects together, using real plants as a switch for sound and the copper plants as a switch for light, creating a cycle of dependency between the plants and humans.

Detail shots of both plants

References

One of the first projects I researched while thinking about this project was Botanicus Interacticus, a multi-faceted work from the Disney Research lab.

Botanicus Interacticus is “a technology for designing highly expressive interactive plants, both living and artificial. Driven by the rapid fusion of computing and living spaces, we take interaction from computing devices and place it in the physical world using living plants as an interactive medium.” (Sato 2012)

Botanicus Interacticus

This project uses the electrical currents in plants to enable a person to create music by touching a plant. They also created artificial plants that responded to touch. It was a look at how we can program interactivity into the world around us using the electricity that is inherent in it. I was interested in understanding how our gestures could be examined and used to reveal new ways of connecting to nature, and this project was a big influence on my thinking.

Sonnengarten

Sonnengarten is an interactive light installation that reveals the relationship of plants and light. When a user presses their hand against the plant installation, “for a short time the plant is symbolically deprived of its energy of life” (Sonnengarten 2015), so the light in the installation changes. This project had me thinking about how the lack of light indoors can affect a plant’s growth, and how its survival is reliant on the person taking care of it. The cycle of dependency came to mind, and I began to think about how to connect the ideas from Grow You a Jungle to this new project.

Final Thoughts

This project was actually a lot more challenging than I initially expected, mostly due to working with the copper tape. It was an exercise in learning your material and pushing it to the limits of its use. In working with the circuit and doing all the troubleshooting, I came to a stronger understanding of how to fix connection issues: something as small as a solder joint needs to be checked in the process of discovering the problem.


Synchrobots

By: Amreen Ashraf, Lauren Connell-Whitney, and Olivia Prior

Overview

Figure 1: Our two robots preparing themselves for their first demo in front of a large group of people.

Synchrobots are randomly choreographed machines that attempt to move in sync with each other. The machines receive commands to start and stop their wheels, as well as the pauses in between movements. Even though the two machines receive the same commands, differences in weight, shape, and environment can cause the robots to fall out of sync. The machines execute the commands as they travel across the room, sometimes bumping into each other or into walls. The randomness of the movements creates a perceived intelligence, though there is none. A machine may nearly hit a wall but avoid it at the last second; this could be interpreted as very intentional, but it is the human interpretation that creates these stories. These randomized, synchronized movements give the machines a sense of life and delight viewers into following what they will do next.

Code

https://github.com/alusiu/experiment-4/

Idea Process 

Figure 2: We had a huge range of ideas from the beginning. We did manage to keep the idea of duality in our final project.

We began ideating by talking about how we could affect machines using outside data: for instance, the number of people passing through the OCAD doors, or the wind at any given moment in Antarctica. We then began developing an idea for bots that would rove around and crash into things. They would be built from two pieces of steel that could crash together and make a thunderous noise. However, constructing something made to bump into things with force seemed like a tall challenge, and possibly not one we wanted to tackle just yet.

Figure 3: Quick sketches of how we could install our “thunder bots”.

Next we had an idea to create bots that would find each other in the dark: our Marco Polo bots. This was the idea we began moving forward with: three bots that would seek and find a “leader” bot; once it was found, another bot would become the leader. This idea led to a thought about migration and how birds pass around the leadership role so that the whole flock can stay strong.

Figure 4: One of our initial process sketches for our Marco Polo bots.

Figure 5: Our workflow written out for the Marco Polo bots.

Our "pseudo-code" hand written out for Marco Polo bots

Figure 6: Our “pseudo-code” hand written out for Marco Polo bots.

Creation Process 

Figure 7: Our prefab bots were simple to put together, but made of very cheap parts. One of the DC motor gears ended up breaking inside of the casing, so we did emergency surgery using another 180-degree servo’s gears. It was nice to take stuff apart and see how everything worked.

We began with a trip to Creatron, where the array of wheels and servo motors was vast and costly. We ended up bringing home a lovely red chassis kit that came with two rear wheels attached to two DC motors and a stabilizing front wheel. It was a great place to start, and getting the wheels working was proof that our little bot could move if we wanted it to.

Video 1: A clip showing the robot moving with both DC motors connected to the Feather microcontroller.

Coding, Fabricating, & Development

Networking Platform & Iterating on Ideas

Our first step in development was to find a platform for networking our robots that was not PubNub. Unfortunately, the PubNub API was not able to receive data published from an Arduino: users were only able to subscribe to data. Since our initial idea prior to Synchrobots was for three robots to talk to each other by sending coordinates, we needed a platform that would allow us to both send and receive data. After some research, we decided to pursue adafruit.io to network our robots.

Adafruit.io allowed us to very easily send and receive data through different feeds. The primary issue adafruit.io presented was that the account could only send and receive data up to 30 times a minute. This meant that we could not continuously send or receive data the way we could have with PubNub.

This presented some issues for our initial idea: we wanted our robots to be talking with each other continuously. Because we were developing for three robots, each one could only send and receive data ten times a minute. We discussed it amongst ourselves and decided to develop an idea that required two robots that were not continuously sending and receiving data. We also decided that if we changed our idea we would not scrap everything and start from the beginning; we would be able to reuse most of the code and thinking from our previous hand-written pseudo-code.
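To stay under that cap, the sender only needs to space out its publishes. Here is a minimal sketch of a throttled sender using the Adafruit IO Arduino library; the feed name, credentials, timing, and duration range are assumptions, not our actual values.

```cpp
// Hypothetical throttled sender: publishes a random movement duration,
// staying under adafruit.io's 30-messages-per-minute account cap.
#include "AdafruitIO_WiFi.h"

#define IO_USERNAME "io-username" // placeholder credentials
#define IO_KEY      "io-key"
#define WIFI_SSID   "wifi-ssid"
#define WIFI_PASS   "wifi-pass"

AdafruitIO_WiFi io(IO_USERNAME, IO_KEY, WIFI_SSID, WIFI_PASS);
AdafruitIO_Feed *moveTime = io.feed("move-time"); // assumed feed name

const unsigned long MIN_GAP_MS = 2100; // ~28 messages/minute, safely under the cap
unsigned long lastSend = 0;

void setup() {
  io.connect();
  while (io.status() < AIO_CONNECTED) {
    delay(500); // wait for the connection
  }
  randomSeed(analogRead(A0)); // seed from a floating analog pin
}

void loop() {
  io.run(); // keeps the MQTT connection alive
  if (millis() - lastSend >= MIN_GAP_MS) {
    int runMs = random(500, 4000); // how long the robots should roll, in ms
    moveTime->save(runMs);
    lastSend = millis();
  }
}
```

Each robot runs the mirror image of this: it registers a handler with moveTime->onMessage(...) and calls io.run() in loop(), so both robots receive the same value at nearly the same moment.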

Additionally, by this time we had purchased our prefabricated platforms for the robots. The movement of these bots was neither linear nor consistent. We decided that our initial idea of “Marco Polo bots” would not work well with these movements, and pursued an idea that would allow the robots to move more freely.

This brought us to the idea of Synchrobots: two robots that receive the same data set and execute it at the same time. We were interested in how the robots would respond to the same data, and how the presentation of two robots would bring up connotations of partners: a couple dancing, or friends moving around.

At the start of development, we found that adafruit.io has a feature that let us view a real-time visualization of the data being sent. This was a very useful tool for our development, since it let us see what values were being sent by each feed while the robots were moving, without requiring our microcontrollers to be connected to our machines to read serial logs.

Adafruit dashboard showing a data visualization of the randomly generated times for our robots.

Figure 8: Adafruit dashboard showing a data visualization of the randomly generated times for our robots.

Issues with DC motors and Servos

Our prefabricated platforms came with two DC motors and wheels each. When we purchased them we were certain that the motors in the casings were servos, and were surprised to find DC motors in servo casings. The DC motors would have worked well, but unlike servos they had no signal pin, which posed problems for us since we needed to command both wheels independently through the microcontroller. The motors were also very cheaply made, and one of the gears broke fairly early into development. We took the motor apart and attempted to fix the gear using the same part from a servo we were not using.

Figure 9: We attempted to fix the gear of our DC motor.

Additionally, one of our Feathers stopped working while we were attempting to control the DC motors; we suspect this was a power management issue that we were unaware could happen. After a day’s worth of research, a Feather passing away, and more attempts at controlling the DC motors, we decided to get servo motors instead.

Figure 10: One of our Feather microcontrollers stopped working during development.

Unfortunately, this did not solve all of our problems. The servos were installed in a way that left them “flipped”: when executing code that would turn them on and off, the wheels would turn in opposite directions. The next issue to tackle was finding out how to rotate the servos in the same way. We finally found a solution that drives the wheels with mirrored commands: one servo is driven toward one end of its range while the other is driven toward the opposite end.
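In code, the fix amounts to writing mirrored values to the two servos. Here is a minimal sketch of the idea with assumed pins; for continuous-rotation servos, write() sets speed and direction, and 90 is the stop point. (On an ESP32 Feather, the ESP32Servo library provides the same Servo API.)

```cpp
#include <Servo.h> // or ESP32Servo on an ESP32 Feather; same API

Servo leftWheel;
Servo rightWheel;

void setup() {
  leftWheel.attach(12);  // assumed pins
  rightWheel.attach(13);
}

void driveForward() {
  // The servos are mounted facing opposite directions, so opposite
  // commands make the wheels roll the same way.
  leftWheel.write(180); // full speed one way
  rightWheel.write(0);  // full speed the other way
}

void stopWheels() {
  leftWheel.write(90); // 90 stops a continuous-rotation servo
  rightWheel.write(90);
}

void loop() {
  driveForward();
  delay(2000);
  stopWheels();
  delay(1000);
}
```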

Video 2: A clip featuring both of our wheels spinning in the same way, and pausing in between times sent to the robots. 

Setup

Our setup used three Feathers in total: two on the robots receiving the data, and one connected to WiFi sending the data. We designed the process as follows:

Figure 11: Synchrobots process diagram.

We wanted to use randomly generated times so that the robots would be anthropomorphized as viewers watched them. The random times would reduce the chance of patterns that viewers engaging with the robots for a long time could pick up. We were also attracted to this idea because, throughout the project, we had been talking about the character that machines take on through real-life obstacles and variances, which became apparent in our little bots as we constructed them: one wheel was slightly wobbly, and one robot was heavier because of a different battery pack. All small details that would make the real-life outcome slightly different between the two.

Figure 12: Deep in thought with the code and circuit construction with the servos.

Next we began soldering the circuit, a simple one at that. Our initial plan also included blinking LEDs on the robots, and possibly robotic chatter through a speaker, as if they were talking to one another while roving. Near the end of the process we realized this may have been too much for this one project and decided to keep it for a further iteration at another time.

Figure 13: Soldering the circuit; picky, fun, detail work.

Fabrication 

Once we had our components soldered we discussed the casing for the microcontrollers. The debate amongst the team was how to keep the robots looking industrial while adding features that elevated the anthropomorphized qualities we wanted. We thought that an entire casing hiding the wires and microcontrollers would read too much like a critter, while leaving the hardware bare would look unintentional.

We had some spare copper tubing and LED fairy lights around the studio, and experimented to see how adding more industrial material onto the robots would change the experience of the machines roving around the room. We placed the copper tubing in the front with LED lights and found that it resembled an eye. This was the perfect middle ground between robot and critter. To complement the front, we encased some of the wires in copper tubing at the back.

Figure 14: Assembling the robots with the copper tubing as casing for the wires, and as an “eye” in the front.

We had two types of LED lights, one “cool” strip and one “warm” strip. To create small differences between the two robots, we decided that one robot would house the cool strip and the other the warm strip.

Figure 15: Experimenting with the LED lights and copper tubing on the front of the robot.

Final

Hardware

  • 3 x Adafruit Feather ESP32
  • 2 x Feetech FT-MC-001 kits (chassis)
  • 4 x servo motors
  • 2 x protoboards
  • 2 x rechargeable battery packs

Figure 16: Synchrobots ready and waiting.

Figure 17: Team of Amreen, Olivia and Lauren holding both finished Synchrobots.

Project Context

A work that provided a larger context for this project is Lichtsuchende: Exploring the Emergence of a Cybernetic Society by David Murray-Rust and Rocio von Jungenfeld, a project that created a community of robotic beings that interact through light. The project examined how we as humans can design for other beings, namely machines. It is a beautiful project that went through many iterations, with the creators observing how the robots reacted in the space and learning from their behaviours and group patterns.

Figure 18: Lichtsuchende: the robot community in action.

Final Thoughts

Demoing our Synchrobots was a success. They seemed as if they were dancing at some points; they roved around separately; they even crashed into people and walls. It was a wonderful display of humans and machines interacting. People were delighted by them; some didn’t know how to react. It was similar to watching people interact with a baby: some were overly cautious and timid about the interactions, while others actively and playfully engaged with the bots.

We received overall positive feedback and great advice on how we could carry our project forward. After the demonstration of our bots, Kate Hartman suggested that we could mount a GoPro camera on the wall and have it observe the bots as they move about a room until the battery runs out. This is something we might like to pursue to track the patterns of our bots through time and space.

As we saw from the reactions of our classmates, the movement and the meeting of the bots was a real cause for delight. Another suggested direction was to look at LEGO Mindstorms, tiny bots from LEGO that come as hardware with preinstalled software. Nick Puckett suggested dissecting electric toothbrushes to get the vibration motors, which would lead to creating small “dumb bots”. There were more suggestions to attach a pen or a marker to the bots as they moved around the room, as a look into bot art. This idea of letting the bots create work through movement was interesting, but we had looked at a similar project while researching and decided against it because there were already many such projects; we wanted the bots to move freely without a purpose attached to the movement. The feedback we will implement first, if we take this project forward, is thinking about interactions between the machines. During this project we explored the concept of interaction using LED strips: we got the LEDs working, but we hadn’t coded how the LEDs would react when the bots interacted. This would be the most crucial point in the further development of the project.

References

  • Šabanović, Selma, and Wan-Ling Chang. “Socializing Robots: Constructing Robotic Sociality in the Design and Use of the Assistive Robot PARO.” Ai & Society, vol. 31, no. 4, 2015, pp. 537–551., doi:10.1007/s00146-015-0636-1.
  • Murray-Rust, Dave, and Rocio von Jungenfeld. “Thinking through Robotic Imaginaries.” RTD Conference, figshare, 20 Mar. 2017. Online. Internet. 26 Nov. 2018. Available: https://figshare.com/articles/Thinking_through_robotic_imaginaries/4746973/1.
  • DeVito, James. “Bluefruit LE Feather Robot Rover.” Memory Architectures | Memories of an Arduino | Adafruit Learning System, 2016, learn.adafruit.com/bluefruit-feather-robot.
  • Gagnon, Kevin. “Control Servo Power with a Transistor.” Arduino Project Hub, 2016, create.arduino.cc/projecthub/GadgetsToGrow/control-servo-power-with-a-transistor-3adce3.
  • McComb, Gordon. “Ways to Move Your Robot.” Servo Magazine, 2014, www.servomagazine.com/magazine/article/May2014_McComb.
  • PanosA6. “Start-Stop Dc Motor Control With Arduino.” Instructables.com, Instructables, 21 Sept. 2017, www.instructables.com/id/Start-Stop-Dc-Motor-Control-With-Arduino/.
  • Schwartz, M. “Build an ESP8266 Mobile Robot.” Memory Architectures | Memories of an Arduino | Adafruit Learning System, 2016, learn.adafruit.com/build-an-esp8266-mobile-robot/configuring-the-robot.

Grow You A Jungle


Grow You a Jungle was created with the intention of bringing a little simple joy and life to the process of watering plants. Taking care of a lot of houseplants, you begin to think about the life and time of a plant: how the growth is hard to see, yet sometimes a rustling can be heard and the leaves are moving, growing, dropping. Seeing significant growth in a plant can take time.

Being in the woods or a jungle you notice the crashing noise and the movement of all of the life around you, creating a cacophonous hum of living. Indoors you start to forget how alive everything truly is. I wanted this project to bring a bit of the lush movement of nature to the indoors.

Process Ideation

Throughout the semester I had wanted to do a project that involved plants, but hadn’t found a group project it fit into. So I began to formulate an idea involving a plant and the idea of time; after using the orientation sensor, I realized I also wanted to examine the concept of movement and gestures. I am currently in a class called Experiences and Interfaces, which has led me through a lot of thinking about movement and our interactions with the world through gestural action. I realized one night, while putting off watering many of my plants, how artificial this gesture of pouring water is. The houseplant market is booming, yet it is just a facsimile of the natural world.


I began to think about creating a small, simple experience that would enhance the idea of a plant growing from being watered, eyes and ears engaged. Projection has always been of interest to me for its use of scale, darkness, and light. An image came to mind of a dark room coming alive with the sounds of nature, slowly creeping up at the action of a watering can feeding a garden. I decided to move forward and create.

Process Video and Sound

The process of creating the final video for the installation came through much iteration and testing. I made the decision to create it using found footage with Creative Commons licensing, since the timeline was too short for me to film my plants with any significant changes tracked. I wanted the video to have a lush, ephemeral quality with lots of light and dark and movement. In my personal photography work I frequently create images using overlays and intense saturation, and I decided to use this same technique for the video. So I took to creating layers of plants growing in Adobe Premiere Pro; this was a process of testing and tweaking, and I used several opacity masks in the end to get the look I wanted. Below are some other videos I created before settling on my final.

The sound portion of this project took huge cues from a website I found called Jungle Life, a user-controlled jungle sound player. I spent a bit of time in the jungles of Costa Rica a couple of years ago and have very intense memories of sitting in the rainforest listening to the deafening sounds changing and moving around me. I wanted this to be the sound triggered by the watering can. So, as with the images, I found about 15 Creative Commons samples of jungle and forest sounds and took to creating a timeline of all of them at various levels of highs and lows.

The Jungle Life sound player.

Process Arduino

Initially, when the project was announced, I was a bit intimidated by the idea of creating the entire code and circuit by myself, but it was a much needed challenge and confidence booster in the end. I take very well to building on knowledge that I already have, so I decided to keep everything as tidy and simple as I could to tell the story I wanted. In class one day I set up the orientation sensor successfully and realized how versatile this feature could be. My final circuit ended up being this exact setup, taken from the orientation sensor tutorial in class. When setting up my board I ran into some problems and could not figure out why it wasn’t working. After 20 minutes of pulling my hair out and rewiring most of the board, I realized that one of the wires I had cut had split and wasn’t making a connection. A good lesson in checking the small stuff thoroughly.
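For reference, here is a minimal sketch of that kind of read-and-send loop, based on the Adafruit BNO055 library the class tutorial uses; the axis choice and timing are assumptions, and the actual thresholding happens on the p5 side.

```cpp
// Reads the BNO055 orientation sensor and streams one axis over serial,
// so p5.js can decide when the watering can is "pouring".
#include <Wire.h>
#include <Adafruit_Sensor.h>
#include <Adafruit_BNO055.h>

Adafruit_BNO055 bno = Adafruit_BNO055(55); // default I2C address

void setup() {
  Serial.begin(9600);
  if (!bno.begin()) {
    while (1); // sensor not found: check the wiring
  }
}

void loop() {
  sensors_event_t event;
  bno.getEvent(&event);
  Serial.println(event.orientation.z); // tilt shows up as an angle change
  delay(100);
}
```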


Final Circuit

Fritzing diagram of the final circuit.

Process P5

Building the code was the most intimidating part in my mind. As with the board, I built on some of the code that had been provided by Nick and Kate. I began to slowly break down each line, what it meant and how it functioned. I did several Google searches to assist in writing this code, and as had been mentioned, there really is no search result for “How to make a watering can trigger a video”. I honestly didn’t even find much for “How to make an orientation sensor trigger a video”. There were, however, many pages of documentation about using Processing to do this. It seemed like a big task to switch languages at that point, so I decided to proceed with p5.

My realization was that what I would have to write was an if/else statement. Which I did successfully, or so I thought, but it wasn’t triggering the video. After another couple of hours of painful searching I posed the question to my classmates. One noticed that I hadn’t been using the draw function. This had been intentional initially, as I just needed the video to play on the screen, but I hadn’t taken into account the action that would need to trigger and loop. Once I moved my toggleVid(); if/else logic into the draw function, BOOM! It worked. This moment felt like I had won a medal. Something I keep learning every time I code is how tedious and time consuming it can be, and that the learning will never end. Persistence and variety in method are surely the key to this one.
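The structure that finally worked looks roughly like this; the threshold, file name, and variable names are stand-ins, and the actual code is in the repo below.

```javascript
let video;
let tilt = 0;        // updated from the orientation sensor's serial data
let playing = false;

function setup() {
  noCanvas();
  video = createVideo('jungle.mp4'); // stand-in file name
  video.hide();
}

// draw() runs every frame, so the if/else re-checks the tilt continuously.
// Outside of draw() it ran only once on load, which is why nothing triggered.
function draw() {
  if (tilt > 45 && !playing) {        // stand-in "pouring" threshold
    video.loop();
    video.show();
    playing = true;
  } else if (tilt <= 45 && playing) {
    video.pause();
    video.hide();
    playing = false;
  }
}
```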

Github

https://github.com/lc-w/grow-you-a-jungle

Demo

Context

I have been ruminating for a couple of years on the Québécois NFB short film “The Plant”, directed by Joyce Borenstein and Thomas Vamos. It is a beautiful film that delivers a timeline of obsession through the relationship of a man and a plant, and it was filmed in my old house in Montréal. Something I kept remembering about this film was how the plant’s movements turn from joyous to vicious and wild in the time it would take to see a small amount of growth in a real live plant, which led me to think about how, when you take a time lapse of a plant, you can see exactly how wild and alive the growth really is.

A still from “The Plant”.

Another project that lent some inspiration was the wonderful art group teamLab from Japan. They create incredible immersive experiences using digital technologies and huge real-life installations. They believe that the digital world and art can be used to create new relationships between people, and they achieve this with interactive work that responds to users’ movements. I am interested in creating simple installations in my own work that make people reflect on their place in the world and how they interact with it: the small, magical changes that can occur when you make an action or decision. One of their projects below uses projection to bring life to a tea ceremony. Another overlays projected animals in a natural environment to make the user contemplate our place in the world and how we may be the top predator of the life cycle.

Stills from teamLab installations.

Final Thoughts

This project began simple and stayed simple, but I do not think that lessens its value or success. I am very happy with the outcome and am hoping to come back to this idea of triggering growth in the future. In the final presentation there were some wonderful comments about refining the way the sound and video reacted to the movement; I am banking these for future iterations. Something else I wanted to explore was how scale could amplify the feelings this project evokes, possibly using a whole room filled with plants and projection mapping to sculpt the way the videos look and feel. The more we build technology into our daily lives, the more aware I become of our need for natural life. Exploring this idea further will make the world we live in more whole and will allow sustainable-living ideology to flow more freely into our making and ideas.

Resources

  • “Adafruit BNO055 Absolute Orientation Sensor.” Memory Architectures | Memories of an Arduino | Adafruit Learning System, learn.adafruit.com/adafruit-bno055-absolute-orientation-sensor/processing-test.
  • “Adafruit BNO055 Absolute Orientation Sensor.” Memory Architectures | Memories of an Arduino | Adafruit Learning System, learn.adafruit.com/adafruit-bno055-absolute-orientation-sensor?view=all.
  • “Free Forest Sound Effects.” Free Sound Effects and Royalty Free Sound Effects, www.freesoundeffects.com/free-sounds/forest-10008/.
  • “Free Jungle Sound Effects.” Free Sound Effects and Royalty Free Sound Effects, www.freesoundeffects.com/free-sounds/jungle-10009/.
  • Pigeon, Stéphane. “The Sound of the Jungle, without the Leeches.” The Ultimate White Noise Generator • Design Your Own Color, mynoise.net/NoiseMachines/jungleNoiseGenerator.php.
  • Koenig, Mike. “Birds In Forest Sounds | Effects | Sound Bites | Sound Clips from SoundBible.com.” Free Sound Clips, soundbible.com/547-Birds-In-Forest.html.
  • Koenig, Mike. “Frogs In The Rainforest Sounds | Effects | Sound Bites | Sound Clips from SoundBible.com.” Free Sound Clips, soundbible.com/251-Frogs-In-The-Rainforest.html.
  • Koenig, Mike. “Rainforest Ambience Sounds | Effects | Sound Bites | Sound Clips from SoundBible.com.” Free Sound Clips, soundbible.com/1818-Rainforest-Ambience.html.
  • teamLab. “Flowers Bloom in an Infinite Universe inside a Teacup.” TeamLab / チームラボ, www.teamlab.art/ew/flowersbloom/.
  • teamLab. “Living Things of Flowers, Symbiotic Lives in the Botanical Garden.” TeamLab / チームラボ, www.teamlab.art/w/lives2018botanical/.
  • “Timelapse Footage.” Openfootage, www.openfootage.net/timelapse-footage/.
  • Vamos, Thomas. “The Plant.” National Film Board of Canada, National Film Board of Canada, 1 Jan. 1983, www.nfb.ca/film/the_plant/.
  • “Video.” p5.Js | VIdeo, p5js.org/examples/dom-video.html.

Shadow Play

A Multiscreen Project
by Olivia Prior and Lauren Connell-Whitney

// Description //

Shadow Play is an interactive web application that directs visitors through an improvised shadow puppet show on the go. The application uses the built-in flashlight on the participant’s phone to create a theatre scene wherever they may be.

Shadow Play in action.

Shadow Play randomly generates wait times, performing times, performance cues, and shadow puppet actions every round for each participant. The randomly generated wait and performing times stagger participants so that not everyone is acting out a shadow puppet at once, nor is anyone performing the entire time. The randomly generated cues are a combination of an adverb, a verb, and a .gif image of a shadow puppet demonstration. Viewers are invited to interpret these cues in whichever way they prefer, and are also encouraged to interact with the other performers near them.

Shadow Play also requires audience interaction, depending on the cue. Some of the shadow puppet demonstrations require both of the performer’s hands at once. The performer will need to call upon an audience member to either create the demonstrated puppet with them, or alternatively to hold onto their phone and direct the light at their hands for them.

// How it works //

  • Participants turn on their phone’s flashlight upon entering the application.
  • After confirming their flashlight has been turned on, the user is redirected to an instructions page.
  • Upon clicking “Let’s play”, each participant is given a random wait time between 20-60 seconds.
  • When the timer is up, a shadow puppet demonstration in the form of a gif, an adverb and a verb, and a new random timer between 20-45 seconds appear on the screen (this cue-and-timer cycle is sketched in code below).
  • Participants act out the shadow puppet in a collaborative group setting for the designated time.
  • Once the timer is up, the participant’s screen returns to the previous timer screen with a new time between 20-60 seconds.
  • If the user is ever confused about the instructions, they can return to the instructions page by clicking the “i” in the lower right corner.
  • The process repeats until the user ends the game by clicking the “x” button in the lower left corner.
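A minimal sketch of that cue-and-timer cycle in p5.js; the word lists, ranges, and function names are illustrative, and the actual implementation is in the repo below.

```javascript
// Sample data: the real app loads fuller lists and gif demonstrations.
const adverbs = ['absentmindedly', 'boldly', 'sleepily'];
const verbs   = ['build', 'dance', 'wave'];
const puppets = ['bird.gif', 'bunny.gif', 'lilsnail.gif'];

function startWaiting() {
  const waitMs = random(20, 60) * 1000;    // wait 20-60 seconds
  setTimeout(startPerforming, waitMs);
}

function startPerforming() {
  const performMs = random(20, 45) * 1000; // perform 20-45 seconds
  // p5's random() picks a random element when handed an array.
  const cue = `${random(adverbs)} ${random(verbs)}`;
  const gif = random(puppets);
  console.log(cue, gif); // stand-in: the real app renders the cue and gif
  setTimeout(startWaiting, performMs);     // then back to the wait screen
}

function setup() {
  noCanvas();
  startWaiting(); // begins the cycle after "Let's play"
}
```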

// Code //

https://github.com/alusiu/shadow-puppets

// Process Journal //

Initial Ideas Day 1

We initially began by talking about general project ideas that appealed to us, places where one might encounter lots of screens, and ways we use multiple screens.

We discussed and shared common interests and goals that we wanted to incorporate into the design of this project. From our documented conversations, the common interests were: a self-directed narrative while using the application; laughter and physical interaction amongst participants; moving users around the physical space; and randomly generated data, so that each user’s experience was unique and open to their own interpretation.

More Initial Ideas Day 1

More ramblings of the things that sparked creativity and ideas that we may want to shape our project around.

// Ideation //

The concept for Shadow Play arose from discussing all of the phone features a user has direct control over. The list from the discussion included: compass, camera, microphone, texts, flashlight, browser, and accelerometer. After creating a list of possible ways to use each of these features, we decided to pursue a product that relies on the flashlight, due to the simplicity of users being able to control this particular feature directly (rather than writing a program that relies solely on JavaScript to prompt the phone’s features, such as the compass). Something that was important to us in these discussions was creating an interaction between the user, the screen, and the real world.

// Proof of concept //

We tested a quick proof of concept by casting a shadow puppet onto the wall, and decided to pursue the concept of an interactive puppet show directed through a web application. The decision was made after seeing the simplicity of the action, and how complex the resulting shadow could be depending on the placement of the flashlight (farther away creates bigger shadows; the shadow is affected by how many lights are around, etc.). The placement of the person and the light naturally encouraged movement and interaction, which were on our list of desired project goals. Something exciting about this was the possibility for storytelling and childlike imagination.

// Other possible ideas //

A big decision that was frequently discussed during our design process was how to direct a narrative in our product. Initially we decided upon creating a very explicit narrative experience. Our idea was to generate two shadow puppet demonstrations with a verb. The idea, though similar to our current iteration, was directed as a game rather than an improvised theatre application. The user would create the first shadow puppet on the screen, then act out the verb on the corresponding shadow puppet beneath it. The user would have 15 seconds to complete this action before the demonstrations and the verb switched to new options; completing the action within the 15 seconds earned a point. We decided against this idea because we were not interested in making a points-based game; it would detract from the fun nature of acting out the puppets. The game concept also detracted from the act of shadow puppetry itself: the shadow puppets were not required for the interaction, rather they were supplemental.

Our idea evolved from this game, though. We decided that one shadow puppet demonstration, a pairing of an adverb and a verb, and a random time constraint allowed users to participate and create a changing environment. Giving everyone a random time for waiting and for performing the characters ensured that the theatre was always changing.

// Workflow Diagram //


Our very simple MVP, in the bottom right of this note, took us from the initial stages of ideation to the final show. It is a concept that I (LCW) found to be one of the most enlightening and useful of the whole project.

The workflow for the project was discussed immediately to determine the scope, tasks, and process of the project, and to determine what the minimum viable product (MVP) would be to achieve our desired goal of an interactive shadow puppet application. The workflow determined the next steps: prioritization of tasks, discovering which pages would need to be wireframed, technical requirements, and technical organization.

// Tasks //

Our week of work planned out in moveable sticky notes, another valuable workflow tool that Olivia taught me (LCW).

From the workflow diagram we assessed all the tasks and functionality our product would need. We wrote each task on an individual post-it note and drew out a calendar of the upcoming week. On the calendar we placed the post-it notes on the dates we needed to start each task. The post-it notes allowed us to move tasks around and physically play with our schedule to determine which work needed to be done first, and in which order, to allow for the most efficient product completion. We placed each of our own initials on the tasks that each of us would lead.

// Technical planning //

More workflow planning.

From the workflow diagram, we assessed the functions that were required for the project. We decided to use p5.js as our main library because it provided the functionality to create and style everything in a responsive canvas (sketched below). The assessed functions were incorporated into the post-it note tasks placed on the calendar. We set up a Git repository and created a development branch for push/pull requests.

Our final function diagram that enabled us to see how we needed to proceed with building the project.
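The responsive canvas itself is a small amount of p5.js; a minimal sketch of the pattern:

```javascript
// A canvas that always fills the phone's viewport and resizes with it.
function setup() {
  createCanvas(windowWidth, windowHeight);
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}
```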

// Wireframes //

We used two types of wireframes: low and high quality. The low-quality wireframes were sketched out to allow us to start the technical side of the project. As we were developing the application we simultaneously started working on the high-quality wireframes. Our process was to get all application functionality working first, and then to style the product once all of our elements were being rendered onto the page. Once both the high-quality wireframes and the functionality were completed, we applied the wireframes to the product.

Drawn Wireframes

These initial drawings enabled us to move forward with coding while leaving the design and styling until after the bones of the project had been written.

The low-quality wireframes were simply sketched out so that we were able to iterate on the product workflow and think freely through drawing. The high-quality wireframes were created using Adobe XD, which allowed us to style a theme and a UI that was easy for our users to understand. The theme was inspired by old Charlie Chaplin movies, to place an emphasis on theatre and narrative.

High Quality Wireframes

This was the final design. We decided to use the simplest design solutions to direct the user: a classic silent-film font, “Windsor Condensed”, and a yellowish tint reminiscent of low light.

// Technical implementation //

Our project required data for the randomly generated aspects of our product. To store the data for the verbs, adverbs, and images, we created individual JavaScript files that held the information in arrays. We loaded these files in index.html and used them as global variables that we called in our sketch.js file.
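A sketch of that pattern, with assumed file and variable names:

```javascript
// adverbs.js: one of the data files, loaded in index.html before sketch.js:
//   <script src="data/adverbs.js"></script>
//   <script src="sketch.js"></script>
var adverbs = ['absentmindedly', 'boldly', 'sleepily'];

// sketch.js can then read the array as a global:
// let word = random(adverbs); // p5's random() picks an element from an array
```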

We also required videos or gifs for our shadow puppet demonstrations. We recorded and edited the images ourselves to have full control over our application content.

The shadow puppet demonstration gifs: bird, bunny, scrappy pup, hungry Kermit, bossy goose, and lil snail.

// The Play //

// Discoveries //

Much of the building of this project was smooth(ish) once we had created our idea and laid out our workflow and wireframe plans. However, the most interesting part of this whole process, and where the project came alive, was the demonstration and watching users interact with the tool. All the joys of the idea, and the flaws in the user interaction, became more apparent. One particular comment from a user made me think about whether our UI was efficient enough: while trying to play the game, one user couldn’t figure out why the image on screen wasn’t moving with their own hands, when in fact it was only an image and not a camera view of what was going on in front of the phone. This comment, along with a suggestion to pair this project with augmented reality, was particularly enlightening as to the possibilities of this project’s future.

Some users took issue with the countdown timer page and suggested adding a directive comment while the timer is running; this was another comment that made us think a little deeper about the user experience of our design, and it was valuable in rethinking the whole flow of the play. Each moment must be directed and clear, even if the user is meant and encouraged to use their own imagination and creativity to play.

This is a good example of a timer that is too long! Who wants to wait 52 seconds when everyone else is having fun?

Another discovery that became clear while watching the group play was that the adverb/verb combinations, while funny, were possibly not logically directing the user to perform the action with their shadow puppet. Some of the word combinations were too vague or simply could not be completed with the puppet. How would one “absentmindedly build” with a shark?

User testing also revealed that there may be a more ideal set of times for group play depending on the number of players. The times we had set for a group of 20 could be shortened for a group of 4, suggesting different game-play settings for different group sizes.

// Challenges //

Some of the challenges we faced initially seemed so simple. Why could we not get our viewport to be responsive? The answer, discovered later, turned out to be an oversight that could be classed as a typo: we had forgotten to include our .css file in the index page. Small trials like this are wonderful learning experiences, reminders to think simple and start from the beginning.

Another challenge came at the beginning of this process: actually nailing down an idea and sticking to it. We went back and forth on game-play ideas and whether this was the right direction to take the project. Would our players understand what to do? Is it a game? Do we tell people where to go? How to act? Do we get them to each act out a part in a story? These are all valid questions, but what we learned is that simple planning is best. We spent quite a bit of time adding complexities that didn’t add enough to the play experience to matter.

// Project Context //

These two projects were inspiration for our final product. The first, “Guten Touch” by Multitouch Barcelona, demonstrated physical interaction with screens: participants actively paint the screens with a given paintbrush, which promotes physical movement. This project inspired our goal of getting people physically moving in the space, and the use of darkness to encourage a sense of unrestrained play.

The second project, #MIMMI, was inspiration for the collaborative aspect of our product. #MIMMI takes Twitter data from a city to create a display that responds to the mood of the tweets. Together the data creates a narrative of the city and encourages group participation in the greater display. Our project reflects #MIMMI by offering a collaborative canvas that creates an ever-changing, real-time narrative. The participation of the users is what creates the display, similar to #MIMMI.

// References //

Get random item from JavaScript array. Resourced from: https://stackoverflow.com/questions/5915096/get-random-item-from-javascript-array
How Can We Merge Our Digital and Physical Communities? Resourced from: http://www.urbaindrc.com/mimmi/

MIMMI Comes to Minneapolis Convention Center Plaza. Resourced from:
https://www.minneapolis.org/media/news-releases/mimmi-comes-minneapolis-convention-center-plaza/

#MIMMI. Resourced from: http://www.invivia.com/portfolio/mimmi/

Shadow Puppets. Resourced from: https://www.google.ca/search?q=shadow+puppets&source=lnms&tbm=isch&sa=X&ved=0ahUKEwids6jUvvjdAhUBMqwKHcXjDY4Q_AUIDigB&biw=1377&bih=677&dpr=2

Create Image. Resourced from: https://p5js.org/examples/image-create-image.html

Countdown Timer. Resourced from: https://editor.p5js.org/marynotari/sketches/S1T2ZTMp-

P5.js Reference Library. Resourced from: https://p5js.org/reference/

p5.js loadFont function? Resourced from: https://stackoverflow.com/questions/26110959/p5-js-loadfont-function

Armengol Altayó, Daniel and Multitouch Barcelona, directors. Guten Touch. Vimeo, 19 Feb. 2009, vimeo.com/3288753.

“Barcelona 2008 – The Exhibition.” Red Bull Music Academy, www.redbullmusicacademy.com/about/projects/barcelona2008.

Hu, Ray. “Talk to Me 2011: ‘Hi, a Real Human Interface’ by Multitouch Barcelona.” Core77, 20 Oct. 2011, www.core77.com/posts/20829/Talk-to-Me-2011-Hi-a-Real-Human-Interface-by-Multitouch-Barcelona.

Multitouch Barcelona, director. Natural Paint. Vimeo, 14 Nov. 2008, vimeo.com/2240497.

Multitouch Barcelona, director. Sabadell. Vimeo, 2 July 2012, vimeo.com/45056797.

“Multitouch Barcelona.” IdN World, www.idnworld.com/creators/MultitouchBarcelona.

“Multitouch Barcelona.” Vimeo, vimeo.com/multitouchbcn.

Vilar de Paz, Xavier, and Marvin Milanese. “MULTITOUCH BARCELONA. HOW ‘HEAT’ TECHNOLOGY.” Digicult, digicult.it/design/multitouch-barcelona-how-heat-technology/.

Vilar, Xavi. “Guten Touch.” Xavi Vilar, works.2783.me/guten-touch/.

 
