Category: Experiment 3

Stories We Tell


Stories We Tell, by Naomi Shah

Experiment 3: Creation and Computation
Github link:




Project Description:

Stories are structured causalities: strings of words form sentences and pages, recalling a past, painting a thought or envisioning a future. ‘Stories We Tell’ is an exploration of such microfiction narratives.


As you turn the knob, sentences change, new meaning emerges and new stories are created.

The combinations and permutations selected by users provide insight into experiences, thoughts, moods and even the worldview of a user when they string together seemingly disparate sentences to create coherence.





Project Context:

As a consumer of non-fiction books, cinema and graphic novels, I have always been fascinated by exploring the ways in which new stories emerge. Since the beginning of Creation and Computation, I have wanted to experiment with migrating videos and stories into more interactive spaces, using participatory methods to devise ways in which audiences can become active users. This is my first attempt at generating stories through code.


I took inspiration in prose and storytelling from Terribly Tiny Tales, a crowd-sourced micro-fiction storytelling platform based out of India. The platform caters to the shrinking attention span of readers by limiting its stories to just 2000 characters. These succinct, often poetic microfiction narratives drew me in when I was younger; their ambiguity invited engagement and interpretation. This was the starting point of inspiration for my project, exploring how I might adapt the format into the ‘This and That’ brief for Experiment 3.


More Inspiration:

A source of inspiration was ‘Pitchstorm’, a game I recently purchased through Kickstarter. This party game puts players in the position of unprepared writers pitching movie ideas to the world’s worst executives. Through combinations and permutations of 164 randomly drawn character and plot cards, players must create the premise for a movie. As a player, I enjoyed how the random dealing of character and plot cards created possibilities for diverse stories, throwing players into a frenzy as they tried to concoct larger worlds from the minimal sentences on the cards. I had the chance to play this game while building this project, which gave me some insights into the next steps beyond the MVP.


I came across plenty of resources that let users create stories using prompts, guidelines or random generators. Another inspiration was the TVTropes Story Generator, a website that gives users a set of elements to use as inspiration for building their stories. Random sentences are assigned to elements of the narrative such as Setting, Plot, Narrative Device, Hero, Villain, Character As Device and Characterization Device. By hitting the refresh button, another set of random sentences is generated.


Similarly, the Amazing Story Generator is a book that lets users mix and match three different elements (setting, character and plot) to create unique story ideas.


Hardware Used:

1 Breadboard

1 Arduino Micro

1 USB cable

4 10k Potentiometers

4 differently coloured knobs

M/M Jumper Wires


Other Materials Used:

1 Cardboard Box

2 Rubber Bands


Software Used:


Arduino IDE










Considering this was my first solo coding assignment, I was intimidated by the task. However, it gave me the opportunity to assess everything I had learnt (and had not learnt) and apply it to this project. I found myself needing to revise basic concepts such as arrays, booleans and map functions, with only myself as a resource, since I was building this project outside the OCAD environment and back in my native environment. Because I am in the sleepy coastal town of Goa, resources are hard to find, and the challenges of building the project here were different. I stumbled a lot and spent long stretches in trial and error, and in the process I built my project.


My project was technically simple. I wanted to use four potentiometers, each assigned to a different category of sentences. Each potentiometer would then be used to ‘scroll’ between the sentences in its category. I tackled the Arduino and p5.js parts separately, and then combined the two using the p5.serialcontrol app.


I started by first writing down the sentences for each category, which I felt were open-ended enough to complement any combination of sentences to form random stories.  



Location:

That night in the kitchen,

It was a bright day in the garden

The walls of the fortress were high

Emotion:

She felt scared by what stood before her

She felt peace in that moment

She had a lump in her throat

Action:

She took a knife and shoved it into his stomach

And ran her fingers through her hair

She pet the back of her iguana

Dialogue:

‘I am not going to let you get the better of me’, she yelled.

‘It’s a new beginning’, she said.

‘Don’t let me down’, she whispered.


Wiring the potentiometers to the Arduino and writing the code for four potentiometers instead of one was simple. However, every time I ran the code, the values from 0 to 1023 would not show up in the serial monitor; instead it displayed a series of special characters. It took me a while to realise that the problem occurred every time I charged my laptop, owing to an earthing problem in my home.





For p5.js, I started off by doing several Coding Train tutorials to understand how to piece together my project. At first, I created a local JSON file, with the intention of increasing the number of sentences for each category at a later stage. However, I eventually added the sentences into global arrays for each of the four categories: Location, Emotion, Action and Dialogue. I chose three sentences for each category and mapped each sentence to a range; for example, sentence one covers the range 0–300, sentence two the range 300–600, and so on, all the way up to 1023, which is the top of the analog range the potentiometer returns to the Arduino.
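The mapping described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the array contents come from the Location category, and the function name and the choice of equal-width bands are assumptions.

```javascript
// Illustrative: map a potentiometer reading (0-1023) to one of the
// sentences in a category by dividing the analog range into equal bands.
const locations = [
  "That night in the kitchen,",
  "It was a bright day in the garden",
  "The walls of the fortress were high"
];

function sentenceForValue(value, sentences) {
  const bandSize = 1024 / sentences.length;   // ~341 per band for 3 sentences
  const index = Math.min(
    Math.floor(value / bandSize),
    sentences.length - 1                      // clamp 1023 into the last band
  );
  return sentences[index];
}
```

Turning a knob sweeps the reading across the bands, so each third of the knob's travel selects a different sentence.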


I created a slider to scroll through each category, using booleans to make sure that I was calling the data correctly. Once this was working, I set up the port and the serial connection between Arduino and p5.js. This created some complications, and ‘p5.SerialPort is not a constructor’ was an error the program would throw from time to time. I scanned every line one by one and fixed the errors, big and small, until they had all vanished. I was almost sure my code would not work, but when it eventually did I was ecstatic.
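With four potentiometers, the Arduino typically sends each set of readings as one comma-separated line (e.g. via Serial.println). A hedged sketch of the p5.js-side parsing is shown below; the line format and function name are assumptions for illustration, not the project's actual code.

```javascript
// Hypothetical parser for one serial line of four potentiometer
// readings sent as "v1,v2,v3,v4".
function parsePotLine(line) {
  const parts = line.trim().split(",").map(Number);
  if (parts.length !== 4 || parts.some(Number.isNaN)) {
    return null;                       // ignore malformed or partial lines
  }
  // Clamp each reading into the expected analog range 0-1023.
  return parts.map(v => Math.max(0, Math.min(1023, v)));
}
```

Returning null for malformed lines matters in practice, because serial reads can deliver partial lines when the sketch first connects.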




I then spent some time designing the page with some basic HTML and CSS, giving it a clean and minimal aesthetic to package it to completion.




I re-purposed a packaging box to become the controller for ‘Stories We Tell’. I was lucky to have some carpenters at my disposal who assisted me in using power tools to drill four holes into the cardboard. Through these holes emerge the four coloured knobs that the user turns to control the various aspects of the microfiction narrative. The knobs were purchased from Creatron before my departure from Toronto. The knobs and the text on screen are colour-coordinated, allowing users to quickly distinguish which knob corresponds to which element on the screen.















Future Possibilities

An obvious next step for this project would be to populate each story category (Location, Action, Emotion, Dialogue) with several more sentences, ideally hundreds to encourage endless possibilities for generating unique stories.


While building this project, I had several ideas that developed along the way, all revolving around the core principle of scrolling through random, unconnected and disparate sentences to string together something that exhibits clarity and coherence. In this case, the sum of the individual storytelling elements is greater than the parts, generating stories that demonstrate a user’s experience, worldview, thoughts or mood. With variations made and layers added to this project, listed below are some potential future possibilities I would like to explore further:


Brainstorming tool for filmmakers and writers

Ideal for filmmakers or writers, ‘Stories We Tell’ could be a tool to jump-start a brainstorming session for fresh short stories, novels, scripts, screenplays, or improv sessions. Added layers of complexity to this project could involve users being encouraged to either convert their microfiction narratives into more long-form narratives, or perhaps generate a pitch for their microfiction narratives. Other variations of this could involve different kinds of elements within stories such as genre, characters, conflict, resolution, etc.


Stories created by children, for other children

As children increasingly take to the digital world for purposes ranging from entertainment to education, this could be a tool to develop cognitive abilities and creativity among children by allowing them to create their own stories. The tool could be tweaked to become more visual, relying far more on illustrations and images to appeal to a younger demographic. Children could create their own stories, save them, share them with their friends and read stories created by others, all via the same platform.


Design research tool using storytelling for researchers

If the sentences and the categories that contain them were modified to suit a specific context, this could become a design research tool, where story making and story building can be used by researchers, clients and other participants to make sense of complex, interconnected situations. Narratives generated by simply rotating the knobs and building seemingly disparate sentences into a story can help shed light on the way people perceive themselves and their environment. This could also be useful prior to survey development, allowing researchers to gauge emotions and issues related to a situation, and then probe further through one-on-one interviews.




3.4: Boolean Variables – p5.js Tutorial. (2015, September 10). Retrieved from

7.3: Arrays of Objects – p5.js Tutorial. (2017, October 10). Retrieved from

Long story short. (n.d.). Retrieved from

Pitchstorm. (n.d.). Retrieved from

Story Generator. (n.d.). Retrieved from

Storytelling. (n.d.). Retrieved from

The Coding Train. (2015, October 30). 10.3: What is JSON? Part II – p5.js Tutorial. Retrieved from


First Flight (An Interactive Paper Airplane Experience.)

Experiment 3:

By: Georgina Yeboah

Here’s the Github link:



Figure 1. “First Flight (An Interactive Paper Airplane Experience), 2018.” Photo taken at OCADU Grad Gallery.

First Flight (FF) (2018) is a tangible interactive experience in which users steer a physical paper airplane to control the orientation of an on-screen sky, so that they appear to be flying with the screen while attempting to pass through as many virtual hoops as they can.


Figure 2. “First Flight Demo at OCADU Grad Gallery.” 2018


Figure 3. First Flight Demo at OCADU Grad Gallery (2018).

Video Link:

The Tech:

The installation includes:

  • x1 Arduino Micro
  • x1 BNO055 Orientation Sensor
  • x1 Breadboard
  • x1 Laptop
  • A Couple of Wires
  • Female Headers
  • 5 Long Wires (going from the breadboard to the BNO055)
  • A Paper Airplane

Process Journal:

Thursday Nov 1st, 2018: Brainstorming to a settled idea.

Concept: Exploring Embodiment with Tangibles Using a Large Monitor or Screen. 

I thought about a variety of ideas leading up to the airplane interaction:

  1. Using a physical umbrella as an on/off switch to change the state of a projected animation. If the umbrella was closed, it would be sunny; if it was open, the projection would show an animation of rain.
  2. Picking up objects to detect a change in distance (possibly using an ultrasonic sensor). Different animations could be triggered by different objects (for example, picking up sunglasses from a platform would trigger a summer beach-scene projection).
  3. I also thought about using wind/breath as an input to trigger movement of virtual objects, but was unsure of where or how to get a sensor for it.
  4. I later thought about using a potentiometer to create a clock that triggers certain animations representing the time of day. A physical ferris wheel that would control a virtual one and cause some sort of animation was also among my earliest ideas.

Figure 4. First initial ideas of embodiment.



Figure 5. Considering virtual counterparts of airplane or not.

Monday Nov 5th, 2018:

I explored and played with shapes in 3D space using the WEBGL mode in p5.js. I learned a lot about WEBGL and the properties of its z-axis.


Figure 6. Screenshot of Airplane.Js code.

I looked at the camera properties and reviewed the syntax from Daniel Shiffman’s “Processing P3D” document. My plan was to set the CSS background gradient and later attach the orientation sensor to control the camera instead of the mouse.
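Swapping mouseX/mouseY for sensor data might be sketched as below. This is a hedged illustration, not the project's code: it assumes heading and pitch angles arrive in degrees (as a BNO055 typically reports) and converts them to an eye position for p5's camera(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ) call. The function name is invented.

```javascript
// Illustrative: convert orientation angles (degrees) into a camera eye
// position orbiting the origin at a fixed radius.
function cameraEyeFromOrientation(headingDeg, pitchDeg, radius) {
  const h = headingDeg * Math.PI / 180;
  const p = pitchDeg * Math.PI / 180;
  return {
    x: radius * Math.cos(p) * Math.sin(h),
    y: radius * Math.sin(p),
    z: radius * Math.cos(p) * Math.cos(h)
  };
}
// In draw(): const e = cameraEyeFromOrientation(heading, pitch, 500);
//            camera(e.x, e.y, e.z, 0, 0, 0, 0, 1, 0);
```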


Figure 7. Camera syntax in WEBGL. Controls the movement of the camera with mouseX and MouseY.



Figure 8. First Flight’s Interface using WEBGL.

Tuesday Nov 6th, 2018.

I had planned to add cloud textures to the sky but never found the time to do so. I did manage to add my gradient background, though, using CSS.

I also planned to add obstacles to make reaching the hoops challenging, but didn’t include them due to time constraints and prioritization; I thought they would be best suited for future work.

Thursday Nov 8th, 2018.

On the eve of the critique, I successfully soldered long wires to the female headers that would attach to the BNO055 orientation sensor. The sensor sits nicely on top of the paper airplane’s head, covered with extra paper. On the other end, the wires connect to a breadboard on which the Arduino Micro sits.


Figure 9. BNO055 orientation sensor sits nicely on top of the paper airplane.

References and Inspirations:

I wanted to play with the idea of embodiment. Since I have worked with VR systems in conjunction with tangible objects for a while, I wanted to revisit those kinds of design ideas, but with a screen instead of immersive VR. A monitor big enough to carry the engagement seemed a simple enough way to explore this idea of play with a paper airplane.

I looked online for inspiring graphics to help me start building my world. I wanted this to be a form of play so I wanted the world I’d fly through to be as playful and dynamically engaging as possible while flying.


Active Theory created Paper Planes, a web application, for the Google I/O event back in 2016 (Active Theory). It was an interactive web-based activity in which guests at the event could send and receive digital airplanes from their phones by gesturing a throw towards a larger monitor. Digital paper airplanes could be thrown and received across 150 countries (Active Theory). The gesture of creating and throwing in order to engage with a larger whole through a monitor inspired my project’s playful gesture of play and interactivity.


Figure 10. Active Theory. (2016). Paper Planes’ online web-based installation.

The CodePad:

This website features a lot of programmed graphics and interactive web elements. I happened to come across this WEBGL page by chance and was inspired by the shapes and gradients of the world it created.


Figure 11. Meyer, Chris. (n.d.) “WebGL Gradient”. Retrieved from


P5.Js Reference with WEBGL:

I found that the torus (the donut) was part of WEBGL, and alongside the cone I thought these would be interesting shapes to play and style with. The torus would wind up becoming my array of hoops for the airplane to fly through.
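An array of hoops could be laid out as in the sketch below. This is a hedged illustration under assumed conventions (hoops spaced along the negative z-axis, random x/y offsets); the function name and spacing values are not from the project.

```javascript
// Illustrative: generate hoop positions, evenly spaced into the scene
// along z, each with a random offset within [-maxOffset, maxOffset].
// The rand parameter is injectable so the layout can be tested.
function makeHoops(count, spacing, maxOffset, rand = Math.random) {
  const hoops = [];
  for (let i = 0; i < count; i++) {
    hoops.push({
      x: (rand() * 2 - 1) * maxOffset,
      y: (rand() * 2 - 1) * maxOffset,
      z: -spacing * (i + 1)          // farther hoops are deeper into the scene
    });
  }
  return hoops;
}
// In draw(), each hoop might be rendered with:
//   push(); translate(h.x, h.y, h.z); torus(60, 8); pop();
```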



Figure 12. P5.Js. (n.d) “Geometries” Retrieved from

Future work:

Currently, there are many iterations and features of this project I would like to add or expand on. I would like to finalize the environment and create a scoring system so that the user collects points when they pass through a hoop: the more hoops you fly through, the more points you get. Changing the gradient background of the environment over time is another feature I would like to work on. I believe there is a lot of potential in First Flight to eventually become a fully playful and satisfying experience with a paper airplane.


3D Models. Cgtrader. (2011-2018). Similar free VR / AR / Low Poly 3D Models. Retrieved from

Active Theory. (n.d.). Paper Planes. Retrieved from

Dunn, James. (2018). Getting Started with WebGL in P5. Retrieved on Nov 12th 2018 from

McCarthy, Lauren. (2018). Geometries. P5.js Examples. Retrieved from

Meyer, Chris. (2018). WebGL Gradient. Codepad. Retrieved from

Paperplanes. (n.d). Retrieved from

Shiffman, Daniel. (n.d). P3D. Retrieved from

W3Schools. (1999-2018). CSS Gradients. Retrieved from


Generative Poster

Project by: 
Josh McKenna



Browser Experience Here.


Experiment 3, This & That, was introduced to us as an opportunity to work individually and explore a concept involving communication between Arduino and p5. The idea for my experiment originated from an experience earlier in the semester, when I attended the Advertising & Design Club of Canada’s annual design talk; this year’s event featured multiple graphic design studios from San Francisco (see Figure 1). At the end of the presentation I bought one of three posters they had made, each with small differences from the others. It was the first time I remember having the choice to buy the same poster in terms of its system and graphical elements, but with some variability between each design. Inspired by this experience, I recognized an opportunity to use generative design to produce variability within graphic design artefacts. I felt it could add value or incentive for the attendee of an expo or event to bring home a personalized version of a poster that extends the graphical identity of that event. This project experiments with that idea, allowing the user to explore the identity of a preset graphical system expressed through various compositions, powered by an Arduino controller.


Figure 1: The Advertising & Design Club of Canada’s Design San Francisco event poster for the 2018 event.

Recognizing that the variability demonstrated within my generative posters would be part of a larger system, I began my ideation by revisiting my favourite graphic design text, Josef Müller-Brockmann’s Grid Systems in Graphic Design. From there I continued looking at work from the Bauhaus, and eventually at more contemporary work by the studio Sulki & Min. It was while examining Sulki & Min’s archived projects that I came across a body of work I felt could be expanded upon within the time parameters and scope of this project (see Figure 2).


Figure 2: Perspecta 36: Juxtapositions by Sulki & Min, 2014


Through an Arduino powered controller, users will be able to modulate and induce variability into poster design via generative computing.


The project’s hardware components were fairly simple. Altogether, the electrical components used in this experiment were an Arduino Micro board, a potentiometer and two concave push buttons. See the Fritzing diagram below for the full schematic (Figure 3).

Figure 3: Fritzing diagram of electrical components and Arduino

Because of the constraints of this project and my limited skill set in fabrication, I decided to focus the majority of the project on developing the browser experience. When it came to constructing an actual controller to house the electronics, a simple cardboard jewelry box was sourced (see Figure 4). The controller itself includes the aforementioned potentiometer, a blue push button and a black push button.

The most important aspect of the physical casing was simply whether it worked or not. Compared to the ideation and execution of the browser side of this project, minimal time was spent planning the physical form of the Arduino controller.



Figure 4: The Arduino Controller component as part of the Generative Poster Experiment (Top). Inside the physical container (Bottom).


The approach I decided to move forward with was simple: first, determine and define the limits of the system’s variability. Keeping a strong reference to Juxtapositions by Sulki & Min (Figure 2), the first rule of the graphic system was that all of the circles in each new sketch would fall along divisional lines on the x-axis of the canvas. I originally divided the canvas’s width into quarters, but landed on ninths, as I felt the widescreen of a browser worked best with that ratio. The code therefore places each generated circle’s x position at a randomly selected multiple of 1/9 of the browser window’s width. Its y position is then randomized within the height of the canvas. The original concept intended for the user to be able to edit, with a potentiometer, the fraction into which the canvas’s width is divided, but that functionality was eventually scrapped because of scope issues (although it can be reintroduced manually by uncommenting code in the sketch.js file).
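The first rule might be sketched as follows. This is a hedged illustration, not the sketch.js code: the function name is invented, and the choice to skip the two canvas edges (columns 1 through 8 of the nine divisions) is an assumption.

```javascript
// Illustrative: snap a circle's x position to a random multiple of 1/9
// of the canvas width; randomize y within the canvas height.
// rand is injectable so the placement can be tested deterministically.
function randomCircle(canvasW, canvasH, divisions = 9, rand = Math.random) {
  const column = 1 + Math.floor(rand() * (divisions - 1)); // columns 1..8
  return {
    x: canvasW * column / divisions,
    y: rand() * canvasH
  };
}
```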

The second rule of the system was that a large circle would appear in one of four quadrants each time the sketch is redrawn. This circle acts as the poster’s primary element, and because of its dominance in the composition, I decided to give the user the ability to manipulate its sizing from large to small through a potentiometer linked to the Arduino controller (see Figure 5). This functionality was also the easiest to map to a potentiometer.
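The second rule reduces to a linear mapping from the potentiometer's analog range to a diameter, much like p5's map(value, 0, 1023, minD, maxD). The sketch below is illustrative; the function name and the min/max diameter bounds are assumptions.

```javascript
// Illustrative: scale the dominant circle's diameter linearly with the
// potentiometer reading (0-1023), between assumed bounds.
function largeCircleDiameter(potValue, minD = 100, maxD = 600) {
  return minD + (potValue / 1023) * (maxD - minD);
}
```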

Finally, the third rule of the composition was that equivalent numbers of medium and very small circles would be drawn relative to the small circles. The ratio of medium and very small circles to small circles was experimented with; finally, a 4:1 ratio (M+VS : S) was decided upon. This ratio was not editable by the user when interacting with the Arduino controller.
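One way to realize the stated 4:1 (M+VS : S) ratio is to build the list of circle sizes in fixed groups, as sketched below. The function name, the group structure, and the size labels are assumptions for illustration, not the project's code.

```javascript
// Illustrative: each group contributes 2 medium, 2 very small and
// 1 small circle, keeping the 4:1 ratio of (medium + very small) to small.
function circleSizes(groups) {
  const sizes = [];
  for (let i = 0; i < groups; i++) {
    sizes.push("medium", "medium", "verySmall", "verySmall", "small");
  }
  return sizes;
}
```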

Originally I wanted to add controls to the Arduino portion of the project over the rate at which each set of circle sizes grows over the course of the sketch. However, this proved to be outside the scope of this project, as I was not able to find a way to incorporate the functionality from both technical and aesthetic viewpoints.

To give a sense of pacing and movement to the otherwise static reference, I felt that all of the generated circles should have specific growth rates as they expand to fill the canvas.


Figure 5: Ideation of the Generative Poster project

The variability of the posters is only apparent when the user redraws the sketch, so a refresh/redraw function was incorporated into the Arduino controller through the blue concave push button. By refreshing, the user can cycle through randomly generated posters and decide which composition suits them best. Finally, the save-image function was assigned to the other push button.


I believe this project was executed to the standard I set at the beginning. To my excitement, during the critique I was able to see some of the different posters people made based on the algorithm and system I had laid out (see Figures 6 and 7).


Figure 6: Example of Generative Poster

The idea is for the user to sense when a composition forms into something that visually resonates with them; they can then choose to save it. During the critique, each user’s selected composition was printed onto paper, framing that moment in time, so that the user could have their own physical copy of the experience.


Figure 7: Randomly generated posters during Experiment 3 Critique


The Coding Train. (2017, January 9). Coding Challenge #50.1: Animated Circle Packing – Part 1. Retrieved from

Hertzen, N. V. (2016). html2canvas – Screenshots with JavaScript. Retrieved from

NYU ITP. (2015, October 4). Lab: Serial Input to P5.js – ITP Physical Computing. Retrieved from

Puckett, N., & Hartman, K. (2018, November 2). DigitalFuturesOCADU/CC18. Retrieved from

StackOverflow. (2011, March). How to randomize (shuffle) a JavaScript array? Retrieved from

Sulki & Min. (2014, March). Archived Designs. Retrieved from

Voice Kaleidoscope








Voice Kaleidoscope takes voice patterns from the computer’s microphone and outputs colours and patterns on a circular LED matrix. It was created as a tool for pattern thinkers on the autism spectrum who have trouble interpreting facial expressions.



Voice Kaleidoscope was created as a tool to help communicate emotion through patterns and colours. Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by significant impairments in social interaction, in verbal and non-verbal communication, and by repetitive/restricted behaviours; individuals with ASD also experience significant cognitive impairments in social and non-social information processing. Facial emotion perception is significantly affected in ASD, yet little is known about how individuals with ASD misinterpret facial expressions in ways that make it difficult to accurately recognize emotion in faces. By representing vocal expression as a pattern, this device can serve as a communication tool.





There are many ways voice can be translated into patterns. I was curious about the fluctuations in voice and emotion, and was fascinated to see sound waves translated into frequency. I wanted to see what these patterns would look like and how they could help me conceptualize the design of my own project. Through a HAM radio club I found someone who was willing to talk to me about sound frequency and the beautiful patterns of sound seen through an oscilloscope.




Early in the process I was quite secure in my concept. Having a friend with a family member who relates more to colours and patterns, I always wondered why there wasn’t a tool to facilitate the interpretation of human emotions for people who face these barriers. It was also very important for me to get out of my comfort zone with coding. I wanted to embark on a journey of learning, even if I was afraid of not sticking with what I already knew I could execute. I knew that output from p5.js to Arduino would be much more challenging than the input infrastructure I had become comfortable with. I was adamant that this also be a journey of taking chances and true exploration. This project was about communication and growth.


While researching pattern thinking and ASD tools in classrooms, my project went through an initial metamorphosis. At first I envisioned the design as a larger light matrix with literal kaleidoscope features; further into the thought process I decided this communication tool should be more compact, small enough to fit into a backpack. Earlier versions also had construction plans for a complex face with cut-out shapes.





I started with the code right away; I knew my biggest hurdle would be getting p5.js working with the Arduino. I began thinking about the architecture of the project: how voice would flow through p5.js and into the Arduino, and what that code would look like.

Initially I had to decide how the microphone would be incorporated into the design. I explored adding a microphone to the breadboard versus using the computer’s built-in microphone. At this stage I got started on the serial control application right away, and ran into many issues with the application crashing. The first step was to design the voice interface in p5.js, which was a difficult task: I wanted to incorporate the same number of LEDs into the design without it becoming overcomplicated and messy. While designing the interface I began testing the microphone’s interaction with p5.js, trying to capture the animation of voice flickering in the sketch, and looked up code variations for turning the built-in microphone on.
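In p5.sound, the built-in microphone is typically read with p5.AudioIn's getLevel(), which returns an amplitude roughly between 0 and 1; to send that to the Arduino over serial, it is commonly scaled to a single byte. The sketch below is a hedged illustration of that scaling step; the function name is invented.

```javascript
// Illustrative: convert a microphone amplitude (roughly 0.0-1.0, as
// returned by p5.AudioIn's getLevel()) into one byte (0-255) suitable
// for serial.write() to the Arduino.
function levelToByte(level) {
  const clamped = Math.max(0, Math.min(1, level));  // guard against spikes
  return Math.round(clamped * 255);
}
```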

After this was set up and working, I moved back to the JSON file and the serial control app. There were still connection issues. In this first stage of coding the console could not stay on an open port, so I tested variations of turning serial control on and getting it to stay on a specific port. I discovered the port kept changing frequently; reinstalling fixed the issue temporarily.


Putting together the board and LED lights:

For the LED matrix I decided to use three WS2812B LED pixel rings. For initial testing of the rings, while deciding how to power and lay out my breadboard, I kept the rings separate.


I had to figure out how to daisy-chain the components so that a single data line led in and out to the Arduino. While powering up the lights, I discovered that an external 5-volt power source wasn’t enough. Some online sleuthing revealed that a 12- or 9-volt power source run through a DC-to-DC converter would be better for my LEDs.


Coding the Arduino:

During this process I had to decide what the light patterns would look like. I went through many colour variations and patterns, and decided on a chase pattern with colour variations for loudness: how loud or soft the voice was would determine how many times the light travelled around the rings. I had to test variations of brightness; even with the 9-volt power source the LEDs drained power quickly and flickered. The rings proved to be very different operationally from the strips.
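The loudness-to-pattern mapping described here can be sketched as follows. This is an illustrative model of the idea, not the Arduino code: the thresholds (64 counts per extra lap) and the hue mapping are assumptions.

```javascript
// Illustrative: map an incoming loudness byte (0-255) to chase settings,
// i.e. how many times the light travels around the rings and what hue
// the pattern takes.
function chaseSettings(loudness) {
  const laps = 1 + Math.floor(loudness / 64);      // 1-4 trips around the rings
  const hue = Math.round((loudness / 255) * 360);  // louder = further around the wheel
  return { laps, hue };
}
```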

Finalizing the board and testing:

Once the lights and board were operational, I dove into testing the p5 files with the Arduino. There were many calibrations between the two. At first I could see that the port was open, but I was unsure whether there was communication in the Google console. Since I couldn’t use the serial monitor in Arduino to see anything, I initially had a hard time telling whether the Arduino was connecting. I could see numbers running in the console and an open port, but still could not get an open connection. I went back to researching what notifications should appear in the console when an Arduino is connected. I found the connection notification but still could not get it running after going over the code. Finally, after a reboot, my microphone and p5.js files were connecting with the Arduino and I could see my voice patterns in the matrix.


This experiment brought my learning to a whole new level with JSON and serial communication. I learned the ins and outs of not just input but output as well. Even though there were many connection issues, working through these problems made me a better coder and builder. The feedback about expanding this into a much-needed communication tool, and seeing how these ideas could improve the lives of other people, was valuable encouragement to keep following this train of thought and to continue exploring methods of assisting people through technology.

Added notes on future expansion for this project:

  • Make different sizes of this device: smaller for wearables, or larger for environments such as presentations.

  • Incorporate a study on voice patterns and light, and how these relate to autism and pattern-oriented thinkers.

  • Expand the p5.js interface to reflect any findings from the study, and develop the design based on them.


Article on Autism

P5.js to Arduino

Serial Call Response

Article Autism and Emotions

Paper on Emotions and Pattern Research

References p5









Experiment 3: This+That


RGB-Geometry Control Tool


By Mazin Chabayta



Project Description

My project is made of three potentiometers, three sliders, and a strip of LEDs, all connected to an Arduino Micro, with an external power supply (3V x 3) for the LEDs. It functions as a basic graphic tool which gives the user control over the color of the background and the geometric shape in the foreground.

It is mainly designed as an educational tool for children with autism; however, as I was building it, I identified many other interesting variations and applications.








For this experiment, we were tasked to create an interaction between physical components (Arduino Micro) and a digital interactive canvas (p5).


Coming from a very tactile, hands-on background, working with coding and digital builds was a challenge. However, I am interested in working with basic visual elements and creating interesting artistic pieces and interactions with them. So, my goal was to enter this p5 challenge from a direction I was interested in and would enjoy working with.


Once I identified what I wanted to learn and explore during this experiment, I started considering a concept that would resonate with me and speak to something I really care about. Coming from a background that exposed me to several cases of autism, I decided to venture into bringing together the worlds of technology and education for special needs. Through my research I discovered that there have been many positive results from this merger: “Despite exciting preliminary results, the use of ICT remains limited. Many of the existing ICTs have limited capabilities and performance in actual interactive conditions” (Boucenna et al. 2014). In the abstract, the authors agree that more effort needs to be invested in utilising technology for educating children with autism.


When I identified the challenges ahead of me, I started thinking of a function for this project and eventually landed on this idea: an educational tool / toy for children with special needs, specifically autism, that is beneficial, entertaining and attractive. I have also always believed in DIY concepts and in creating solutions that people can easily download and replicate at home.


Since I am not a professional in the field, I did not want to rely solely on my personal experience dealing with children with special needs. So, I started my research into available tools and theories on what has been observed to work best or least. I came across an article called “Teaching Tips for Children and Adults with Autism” by Dr. Temple Grandin, who has personal experience growing up and overcoming autism. In her article, Dr. Grandin provides valuable insights into how the minds of autistic children work and what tools and techniques work best. Grandin suggests to “Use concrete visual methods to teach number concepts. My parents gave me a math toy which helped me to learn numbers. It consisted of a set of blocks which had a different length and a different color for the numbers one through ten” (Grandin 2002). As a graphic designer, I am trained to establish a visual line of communication with my audience, so this presented itself as an opportunity, and a challenge, to find the most appropriate visual language to communicate with a child with autism. In addition to that, Grandin also believes that “Many individuals with autism have difficulty using a computer mouse. Try a roller ball (or tracking ball) pointing device that has a separate button for clicking. Autistics with motor control problems in their hands find it very difficult to hold the mouse still during clicking” (Grandin, 2002). Based on those key insights from Dr. Grandin, I was able to gain an understanding of how the tool should physically look and the type of interactivity I would want the user to have with it.


Observing existing toys that are designed for children, I decided to use basic geometry and colors as a line of interaction between the device (the user) and the screen. In addition, since I wanted to keep the affordances of the device simple, I compared the input devices available to us and decided that potentiometers and sliders would be appropriate options. In the end, the primary graphic elements I decided to use were the three RGB values, plus the size, rotation, and number of angles of the shape. Eventually, I added a strip of LEDs in order to reflect the color on the screen in a physical component, which in my opinion was necessary for the child to see the interaction happen in real time.


Design & Build

In the beginning I wanted to create a small four-sided pyramid, with the interaction knobs spread across its sides: one geometry knob on each of three sides and the three RGB knobs on the fourth. However, this direction quickly started showing some issues. My main issue was that it is not suitable for children with special needs, especially autistic children, because the tools for interaction are spread out around the device, and this can be confusing. Since it was not clear which knob did what, I decided to use one surface that holds all the interactions.
















Once I started sketching the new design, I noticed that having six similar potentiometers could get confusing, and since RGB levels are usually presented as sliders on screen, I decided that the RGB controls should actually be sliders. So, after securing all the input components I needed to build my device, I started connecting cables and ensuring that all my inputs functioned properly and, once soldered, maintained a secure connection.


Photo of the build and Fritzing sketch of the breadboard


Video of initial testing



Working with physical prototypes is always very beneficial for me, because I quickly get a sense of the feel of the device, and more importantly, the size and space this device gives me. So, after working with my initial prototype for some time, I realized that I needed a bigger prototype in order to house all components and maintain accessibility to the inside of the device. At this point, I had a strong idea of the measurements and the shape of the final prototype, which gave me more time to focus on the functionality and the coding side of the process.





For my second and final prototype, I repurposed a small cardboard box, which had an opening/closing method that I was looking for in order to ensure I am able to access it easily without hindering my connections.






Since I am still trying to wrap my head around p5, I wanted to identify a personal strength in p5 and a personal challenge. My strength when it comes to this project would be the geometry, and my challenge was the ‘map’ function and then communicating it from Arduino to p5.


Looking through the examples on p5, I found several inspirations for how I could add interactions. What stood out the most was the geometric polygon example in p5, which allows the user to change the size, the rotation, and the number of sides of a polygon, essentially letting the user change the shape completely. I saw a great opportunity in this, since it is directly related to my concept, and I quickly decided to use this function as my main foreground element. The interaction was based on the values of the three potentiometers, mapped in Arduino from 0–1023 down to ranges of 0–100 for the number of polygon sides, 0–100 for rotation, and 0–800 for size.
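As an illustration of how such a polygon element works (this is a sketch, not the original p5 example code), the vertices of a regular polygon can be computed from the three mapped controls:

```javascript
// Compute the vertices of a regular polygon from the three mapped controls:
// number of sides, rotation (radians), and size (radius).
// In p5 these points would be drawn with beginShape()/vertex()/endShape().
function polygonVertices(cx, cy, radius, nSides, rotation = 0) {
  const vertices = [];
  for (let i = 0; i < nSides; i++) {
    const angle = rotation + (i * 2 * Math.PI) / nSides;
    vertices.push([cx + radius * Math.cos(angle), cy + radius * Math.sin(angle)]);
  }
  return vertices;
}
```

Turning a knob then simply changes one of the three arguments, which is why the shape appears to morph continuously.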


Screenshots of the p5 code and the Arduino code


After that, I mapped the values of the three sliders from 0–1023 to 0–255 and assigned each to be an R, G, or B value. This combination gave the user full control over the visual elements on the page, while maintaining the threshold of simplicity in the visuals that is necessary for my concept as an educational tool for children with autism.
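Both mappings above are the same linear scaling. As a minimal sketch (the actual mapping ran in the Arduino code, which is not reproduced here), here is a plain-JavaScript re-implementation of p5's map() applied to the ranges described:

```javascript
// Linear scaling from one range to another, equivalent to p5's map().
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Potentiometers: 0-1023 mapped to the polygon parameters.
const sides = Math.round(mapRange(512, 0, 1023, 0, 100)); // number of sides
const rotation = mapRange(1023, 0, 1023, 0, 100);         // rotation
const size = mapRange(0, 0, 1023, 0, 800);                // size

// Sliders: 0-1023 mapped to 0-255 for each RGB channel.
const red = Math.round(mapRange(1023, 0, 1023, 0, 255));
```

The same function handles all six inputs; only the output range differs per control.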


Once I had my mapped values in the ranges necessary for meaningful interaction, the rest was simple application of what we learnt in class. I assigned the necessary port, established a connection between Arduino and p5, and started seeing a feed of data flowing from the physical components onto the screen, in real-time, which was quite satisfying.


This process was very educational for me because it was the first time I was able to create, as simple as it may be, a controller that allows the user to interact with the computer outside the conventional way, which was eye-opening. It inspires me to push those boundaries, experiment with more interactions, and achieve different results.



Going Forward


Even though this concept was created for children with autism, while working on it I discovered more applications for it. For example, since I have a background in tattoo art, I can see how a device like this could be used as a quick tool to generate complex patterns from a simple geometric motif. A tool like this would be a great addition for a tattoo artist, since it maintains the hands-on feel of tattoo design rather than a complex computer program, which might take away from the authenticity of the experience. Of course, the functionality would have to be changed to reflect the requirements of a tattoo artist; since color might not be of importance, changing the functionality of the sliders to apply mathematical equations that generate complex patterns could replace the RGB values. I am interested in pursuing this option and will probably have different variations of this device one day.



Asynchronous Playback


By Olivia Prior

GitHub link

Figure 1: Asynchronous Playback being activated by a viewer standing on the mat


Asynchronous Playback is an art installation that reflects the viewer’s posture through constantly altering video playback. The viewer is invited to step on a mat that activates the start of a video. The mat detects the distribution of weight between the viewer’s left and right side as they watch. As the viewer settles into one side of their body, the right and left halves of the video fall out of sync: the side that detects higher force speeds up the corresponding side of the video. If the viewer shifts their weight to the opposite side, the other side of the video will speed up and the first will return to standard speed. If either side does not detect weight, that side of the video will not play. If one side detects weight while the other does not, that side’s playback will speed up to twice as fast.

The piece requires user interaction for the video to continue forward, which makes it a collaborative performance: viewers have to take turns stepping on the mat to progress the video forward, and to sync up the two screens.

This work also comments on the experience of viewing video art; viewers are never told when the video starts, how long it is, or what part of it they are entering at. Asynchronous Playback can create a new way to experience video art.


My initial idea for this project was to create an art installation that focused on the act of viewing. The p5.js library has a strength in image making and manipulation. I wanted to create an interactive piece that mimicked the viewer engaging with the piece visually.

The first iterations of this theme used a painting (or an image) that would warp depending on the viewer’s posture. This was inspired by the act of standing and absorbing a visual work in an art gallery, and the comfort of falling into a resting position. I chose not to pursue this idea of an image mimicking the viewer’s posture after reflecting on my own experience in a gallery. Most paintings or images become fleeting when grouped together and do not hold a viewer’s attention for long. As well, if I were to pursue a reactive image, I would want the response to be gradual, not immediate. Additionally, the idea felt a bit like a novelty, since the image only reflected the posture and I could not see much more that it could offer as an experience. I did not do a proof of concept for this idea because of my doubts about how engaging it would be.

I iterated on this theme by changing the responsive medium to video art. Video has attributes that easily allow for participation and engagement: the ability to alter playback speed, to play and pause, and, most importantly, the medium naturally invites viewers to engage directly with the work by pausing and watching the content. With this in mind I started to design how this interaction would work.

Figure 2: Initial sketch of the mat in relation with the screen


Upon sketching ideas, I considered using two mats to prompt two separate viewers to be the input for the piece. On reflection, I thought that having two users focusses the piece on the relationship between the two people, rather than the relationship between the viewer and the screen. I kept this idea in the back of my mind, since the only difference in construction between having two mats control a video and having a single user control the playback with their posture was either creating one mat with two sensors or splitting the pair into separate casings. The code would fundamentally be the same. I decided to continue forward and let testing the experience on a small scale determine what it should be.


Technology decisions

I considered a few different options when researching what type of sensor to use as an input. One option was a contact microphone that would be triggered by sound or rustling: if the microphone were put into a mat, it would be able to pick up the movement of the material as the viewer settled into a position. This input seemed quite static and didn’t offer the ability to measure pressure. While browsing various component sites, I came across a “Force, Flex, & Load” sensor that measures weight. The sensor’s documentation suggested this was a viable option, so I bought one to test as a proof of concept.

Figure 3: Force Sensor found from Creatron.


Proof of concept

Before embarking on development, I needed to ensure that p5.js would be able to dynamically adjust and stop video playback. In the p5 reference library there is an entire document on altering playback speed on videos. I implemented the test code from the document and found that it worked exactly how I needed it.

My next test was to see if the code would work if I used two videos on the screen to play. I used the same code as before, but loaded two videos and duplicated the speed control buttons for the second video. The videos were able to load simultaneously, and be controlled separately.

Hooking up the sensors

The following step was to hook up the device to my Arduino, and to use the p5 serial control application to connect the input from the Arduino to the output of the p5.js sketch.

I followed along with the Adafruit Flex, Force & Load hook-up guide. One interesting point in the connection diagram was that the FF&L sensor’s sensitivity changed depending on how much resistance was connected in the circuit. The Arduino sketch given in the documentation included calculations depending on the resistance the assembler had chosen. The documentation also noted that the higher the resistance, the more sensitive and specific the feedback from the sensor.

The documentation used a 3.3k Ohm resistor. I did not have one in my kit, but the document said you could create a parallel circuit using three to four 330 Ohm resistors to get a similar output. To get the sensor working, I connected this circuit using their recommendation. I did note that if I had time I would pick up a resistor that was 3.3k Ohms, or as close as possible, to save myself the soldering of two parallel circuits and to save space on my proto-board.

I connected four 330 Ohms resistors to my circuit and uploaded the sample code from the documentation.

Figure 4: The FF&L sensor connected to the Arduino, using a parallel circuit of four 330 Ohm resistors.


Connecting the FF&L Sensor to p5.js

I modified the sample code to receive the input from two FF&L sensors. When either sensor was triggered, its value was put into JSON and sent to my p5 file through the serial controller. If there was no value from a sensor, it would return 0.

I did not map the values right away, as I was unsure of the maximum and minimum values I could receive from the sensor. I was also unsure how to evaluate the differing values from the sensors and correlate them to playback.

The initial test, once the FF&L sensors were connected to my sketch.js file, was to receive the data from both sensors and compare the values. The program would take whichever value was higher and play the video that correlated with that sensor; the video for the lower value would pause. If both values were 0 (no weight detected), neither video would play. This test was a successful proof of concept.
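A minimal sketch of that proof-of-concept logic, with illustrative names (the original Arduino and sketch.js code is not reproduced here):

```javascript
// Package both sensor readings as JSON, defaulting missing readings to 0,
// mirroring the serial payload described above.
function packageReadings(left, right) {
  return JSON.stringify({ left: left || 0, right: right || 0 });
}

// Decide which video plays: the side with the higher reading plays,
// the other pauses; if both read 0, both pause.
function chooseAction(left, right) {
  if (left === 0 && right === 0) return { left: "pause", right: "pause" };
  if (left >= right) return { left: "play", right: "pause" };
  return { left: "pause", right: "play" };
}
```

On the p5 side, the JSON string would be parsed from the serial stream each frame and fed into the comparison.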


Figure 5: Screenshot from my proof of concept footage found here on YouTube.

Workflow of Code  

My first step was to chart out the interactions from the viewer and how I wanted the video to react to the sensor feedback. I created a gold-plated diagram (one that included the next steps and more advanced features) so that I could develop with the future steps in mind. Charting out the interactions would allow me to think about the patterns that would arise from the viewer’s interactions and inform how to structure my code.


Figure 6: Interaction flow chart for Asynchronous Playback. High definition version of this diagram found here due to WordPress not being able to upload the high quality image

I used this process as a model to determine the thresholds for the video playback. My proof of concept already had the base logic for the third case on the right of the diagram, “If only one sensor is activated”. I started with this base as scaffolding to work out how I should map a significant increase or decrease on one side.

The sensor data was returned in grams and would max out at 25000 g. I mapped the sensor range from 0–25000 to 0–10. I did not want the sensor to alter the feedback significantly if the viewer’s weight was distributed unevenly by anything under 5 pounds.

I chose my thresholds to be 0–2, 3–7, and 8–10 to determine how significant the weight difference was between the two sensors.
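A sketch of that mapping and those thresholds, with hypothetical function names (not the original code):

```javascript
// Map a weight in grams (0-25000) down to the 0-10 scale described above.
function mapWeight(grams) {
  const clamped = Math.max(0, Math.min(25000, grams));
  return Math.round((clamped / 25000) * 10);
}

// Classify the mapped value into the three threshold bands: 0-2, 3-7, 8-10.
function weightBand(mapped) {
  if (mapped <= 2) return "low";
  if (mapped <= 7) return "medium";
  return "high";
}
```

Note that a shift of under 5 pounds (roughly 2268 g) maps to at most 1 on this scale, so it stays in the lowest band and does not change the playback significantly.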

My testing of these thresholds initially used either my thumbs or my palms placing weight on both sensors on the table. This worked for the most part, but I wanted to test the sensors underneath material. At this point I was unsure whether the sensors would register weight with material on top of them.

Coding Challenges

The biggest challenge I came across when implementing the code was an “asynchronous playback” issue. My code would execute loop and pause commands so quickly that sometimes a video would be told to pause and play in the same loop, stopping the playback of that specific video. I tried to avoid this by creating Booleans for each video that would switch to true if the video was playing and false if it was not. This system worked very well about three out of five times the footage played; the other times, the asynchronous playback issue would occur after about a minute or two.
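The Boolean guard described above can be sketched like this, with a stub object standing in for the p5 MediaElement (names and structure are illustrative, not the original code):

```javascript
// Wrap a video object so that play() and pause() are only forwarded when
// they would actually change state, preventing redundant play/pause calls
// from being issued in the same draw loop.
function makeGuardedVideo(video) {
  let isPlaying = false; // the Boolean tracking the video's state
  return {
    play() {
      if (!isPlaying) { video.play(); isPlaying = true; }
    },
    pause() {
      if (isPlaying) { video.pause(); isPlaying = false; }
    },
  };
}
```

Calling play() twice in a row only reaches the underlying video once, which is the behaviour the Booleans were meant to enforce.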

My backup option was reverting to my proof of concept and having only one video play at a time. I chose not to implement this because I thought it was an interesting experience to at least see the two videos play simultaneously and slowly fall out of sync. The proof of concept provided a much more physical experience, causing the viewer to constantly sway back and forth between the two pads. I wanted to at least demonstrate for the viewer the causation of standing on the mat to start both videos.

Implementing the different speeds was effective, but I found that, because of the asynchronous playback issue, it was hard for both videos to play at the same time. As a result, one side of the video was more commonly at twice the speed rather than subtly ahead.

Another challenge I encountered was that when loading the videos I needed to use an SFTP client, because Chrome does not allow videos to play automatically when loaded locally. I followed the “how to start a server on your machine locally” tutorial from The Coding Train and used the Python command “python -m SimpleHTTPServer”. This saved time and allowed for a less clunky workflow than having to load files onto my SFTP client.

“Gold-Plating” Coding Challenge

I tried to implement the step in the workflow where, when the viewer is still for a certain amount of time, the code checks whether the videos are synched and, if not, finds the video that is behind and plays it until they are synched.

I attempted this by finding the timecode of each video using the JavaScript media property currentTime. If the videos did not have an equal current time, the code would find the one that was behind and play it until the current times matched. This was difficult to implement because the playing video would not pause exactly when it reached the other video’s value, due to how the code was executed. The current time was compared to the second, and the code would more often than not miss the exact second. I attempted to match the videos within a couple of seconds, but this did not work well: the videos did not match perfectly, and they would align infrequently.
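The resync attempt can be sketched as a small decision function, with the “within a couple of seconds” matching expressed as a tolerance window (names and structure are assumptions, not the original code):

```javascript
// Compare the two videos' currentTime values and decide which, if either,
// should keep playing so the one that is behind can catch up.
function resyncDecision(timeA, timeB, tolerance = 2) {
  const gap = timeA - timeB;
  if (Math.abs(gap) <= tolerance) return "synced";
  return gap < 0 ? "playA" : "playB"; // play the video that is behind
}
```

The difficulty described above comes from when this check runs: if it only fires once per draw loop, the catching-up video can overshoot the tolerance window between checks.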

Another gold-plated feature I attempted to implement was having the speed increase gradually. I attempted this using a for loop that would count up or down until it reached the intended speed. This did not work as gradually as I wanted, nor did it have a noticeable effect on the interaction. It would have been more noticeable if I had included delays within the for loop, but I did not want to interfere with the consistent input coming from the microcontroller. After spending some time troubleshooting this feature, I decided not to include it in this version.

Hardware assembly

Parts Overview

Parts list overall:

  • Proto-board
  • Female headers
  • Flex, Force, & Load Sensor x 2
  • 3.9k Ohms Resistor x 2
  • Arduino Micro
  • Wire
  • Memory foam
  • Paper
  • Tape
  • Plastic floor covering
  • Felt


I had a half-size proto-board that I had purchased for another side project. Instead of buying new materials, I decided to use this proto-board. As well, I did not want the casing for my hardware to be large and overwhelming, as I did not want it to distract from the visual of the mat. Because my board was smaller, I did not have space to include a parallel circuit of 330 Ohm resistors for each sensor, so I purchased two 3.9k Ohm resistors as replacements to save space. I tested both sensors with these resistors and found that they provided as much accuracy as the 330 Ohm circuit.

Figure 7: Close up of soldered proto-board



Figure 8: Full view of soldered proto-board with both FF&L sensors


Figure 9: Fritzing diagram for reference when soldering




I wanted to use a material that gave viewers some haptic feedback. I went to Home Hardware to look at ready-made materials such as the mats they had in stock. The only mats they had were standard mats meant for outdoor shoes; these were rather flat and would not give tactile feedback as the viewer stood on top of them. Home Hardware had small sheets of one-and-a-half-inch-thick memory foam in stock, so I chose to buy this to test with my sensors.


Figure 10: Testing the memory foam with the sensors, the creation of my “Minimum Viable Product” (MVP) (Footage of this test here)

The memory foam allowed the viewer’s foot to leave an impression in the material as they switched their standing position. I did not like the appearance of plain memory foam, so I searched Chinatown for a clear mat to go on top of it. I did not want the common carpet-covered black door mat most often found in hardware stores; I wanted the imagery to be related to a mat, but separated enough that the user did not simply register the object as a door mat. I found clear covering, sold in bulk by the foot, of the kind often used to top carpets, and bought enough to wrap around the pieces of memory foam.

Having the memory foam allowed me to produce a “Minimum Viable Product” (MVP): the sensors successfully picked up the pressure beneath the memory foam mat.

Continued Assembly of Mat

I placed the plastic on top of the memory foam but found that aesthetically it was not pleasing: the plastic was entirely clear and the memory foam was in two pieces, and I did not like the look of the seam beneath the plastic. A classmate offered me felt from their previous project, which I tested wrapping around the memory foam.

I wanted a signifier that the user was supposed to stand on the mat. I felt that the mat was symbolic enough, but experimented with cutting out footprints and placing them on top of the felt. I really liked the contrast of the blue and the white aesthetically. To attach the sensors onto the back, I gently taped them onto a piece of paper, and then attached the paper to the back of the mat.


Figure 11: The plastic covering and the white felt covering



Figure 12: The plastic and felt covering over the memory foam, with cut out footprints



Figure 13: The completed mat without the sensors attached



Figure 14: The paper with the sensors attached



For the box, I found a simple cardboard box and made a small hole in it with scissors to allow space for the Arduino wire to pass through. The wires for the sensors were quite thin, so I did not need to make incisions for them; the box could lie directly on top of them. I covered the box in the same white felt to match the mat. I did not cover it in the plastic mat covering because I did not want to give the signifier that the user could step on it.

Figure 15: The mat assembled with the casing for the proto-board


Choice of film

As I was testing the playback of the video, I used an old clip from my own project documentation. The video worked well because it had lots of movement, and it was short, so it loaded quickly for iterative coding. I duplicated the video to mock the look of two screens.

Figure 16: The demo footage loaded twice onto the page to test out the code


In my original idea, I had envisioned a video split into two halves, and since I had my MVP I decided to explore this option. I liked the movement and the immediacy of action in the demo clip, so I wanted something with those same qualities. I thought a nature documentary might be an interesting combination of movement and imagery. I also thought it might be interesting to have a longer video, rather than a short one that loops like my demo clip: the length would make it very unlikely for a viewer to stumble upon the same part of the clip if they returned to the gallery, since the video is not in constant auto-play.

I found some free stock footage videos of waves and other nature, but all of them were too short. I decided to go onto the National Film Board website to browse through open source films. I came across a familiar one from my past work at a canoeing camp called “Path of the Paddle: Solo Whitewater” by Bill Mason. I downloaded the film, and used Adobe Premiere Pro to slice the video in half. For the purpose of loading and the circumstance of having a shorter critique, I shortened the film to only twenty-five minutes rather than the full fifty-five minutes.

Slicing the video in half was trickier than anticipated because of exporting: the film did not export at a high enough resolution, but I decided to keep it because the imagery was still interesting to interact with.


Figure 17: the left side of the sliced footage from “Path of the Paddle: Solo Whitewater”


Figure 18: Full screenshot of “Path of the Paddle: Solo Whitewater”



Figure 19: Viewer interacting with the video, a couple minutes in with the videos out of sync

Figure 20: Mat in front of the projected videos


Video of piece in action here.

I presented the project using a projector, placing the mat on the floor in front of the wall. I disclosed to the audience that the code was a bit “buggy” due to the asynchronous playback errors from Chrome. The two videos played simultaneously for about twenty seconds before hitting the error. As stated before, the error only causes one video to pause, while the other continues to respond to the sensor.

The response to the project was positive. People noted that it was interesting to create a piece that made them aware of their posture, since posture is not something consciously noted very often. Someone noted that if they came into a gallery, they would know exactly how to approach the piece because of the design and the symbols on the mat. It was also suggested that using frames to move the video forward, rather than the loop and pause functions, could prevent the asynchronous playback issue.

I refreshed the page where the videos were loaded in an attempt to get them to play longer than the initial attempt. People noted that even though one video would stop playing, it did not spoil the entire experience.


Figure 21: Viewer changing foot position on the mat to change the playback of the footage


I am very happy with the result of my project, even though the code has a few bugs concerning the playback of the videos. I think overall it conveyed a new way of experiencing video art through reflecting posture and requiring physical interaction from the viewer to play.

I am surprised by the artifact of the mat because I had no immediate visual in mind for how it should look or operate when I first designed this project. I enjoyed letting the available materials inform my decisions rather than setting out with a preconceived notion.

I do not think p5.js was the best platform for this idea. If I were to continue pursuing it, I would use software like Max/MSP and Jitter to control the video speed; controlling video from JavaScript becomes too focused on avoiding asynchronous playback errors. The p5 Serial Controller application was a bit buggy as well. I had to keep a very “clean” environment for the code to execute, closing all of my files and the p5 Serial Controller every time I wanted to update my code. This was cumbersome as part of my workflow, but if I did not do it, the p5 Serial Controller would start to lag drastically with the data being communicated between my Arduino Micro and my p5 sketch.

As well, I would not implement what I noted as “gold-plated” features if I were to continue developing this project. I do not think that having the video speed gradually ramp up or down adds anything to the overall experience. It is also an interesting experience to see the video speed jump suddenly, which causes a more abrupt awareness in the viewer: it can make them question whether the video is supposed to change speed all of a sudden, or whether their actions caused it. I am unsure if I would add the feature for the video to slowly return to the synced position. I think there is something engaging about the viewer coming in on a screen that is already out of sync, and it could be confusing if a viewer arrived while the video was in the process of returning to a synced position.

Overall, I am proud of the execution of this project and am happy with the reception. I would definitely continue on this project if time allows.

Related Articles

I took primary inspiration from the video art piece “24 Hour Psycho” by Douglas Gordon. This piece changes the way the viewer engages with video art by altering the time of a well-known film: “Psycho” is slowed down to take twenty-four hours, running at roughly two frames per second rather than the standard twenty-four. It would be nearly impossible for a viewer to watch the entire film in one sitting. I studied this piece during my undergraduate degree as a pivotal artwork in the dialogue about the relationship between time and video art. “24 Hour Psycho” also comments on the physicality of film: by slowing the film down, the audience becomes aware of its underlying static qualities when the frames are presented almost individually.

My piece relates to “24 Hour Psycho” by altering time and the interaction with the video piece. The user manipulates the time of the video through their interaction, creating a new experience of it; the video is, in a way, brought into the physical world through the control of the viewer’s body. Like “24 Hour Psycho”, “Asynchronous Playback” would be very difficult to watch in one viewing. The way the sensors are programmed means the user would need to start with equal pressure on both sensors to begin playback at exactly the same time, and the act of stepping on the mat makes this nearly impossible. The way the interaction is designed makes this piece difficult to engage with in one sitting, unlike many video art pieces.


Force Sensitive Resistor HookUp Guide. Resourced from:

HTML Audio/Video DOM currentTime Property. Resourced from:

Lee, Nathan. 2006. The Week Ahead: June 11 – June 17. FILM. New York Times: New York, USA. Resourced from:

Mason, Bill. 1977. Path of the Paddle: Solo Whitewater. National Film Board of Canada. Resourced from:

Parish, Allison. 2018. Working with video. Resourced from:

Reference: .speed(). Resourced from: 


The Pun-nity Mirror (for Hijabis)

EXPERIMENT 3: This + That

by Carisa Antariksa



The Pun-nity Mirror is a prototype of a conceptual vanity mirror that responds to existing beauty standards surrounding the Hijab within the online hijab community. It turns phrases that are self-deprecating, such as “I look like an egg with the hijab on,” or “that’s a huge tent,” into comedic and motivational comments as the user looks into the mirror. These pun-pliments (Pun-like compliments) are activated on the website/screen as the user touches a specific part of the mirror.


This prototype mirror is a commentary on how vanity has become a significant factor in influencing a Muslim woman’s intentions to wear the hijab. By placing comedy into the mix, I wanted to create an experiment that tackles the issue through a light-hearted angle.

Github Link:


Ideation Process 

I went into this project with Kate’s words in mind (“Have fun with it!”) and began by testing all the examples when I had the chance. The concept came about later in the process, as I first wanted to see the possibilities of all the input and output sensors and how I could then apply them to a list of ideas. Eventually, I narrowed my list down to the inputs and outputs I was most interested in exploring: the capacitive sensor, the electret microphone and infrared (IR) LEDs. I took a week to explore these elements, but in the time given I was only able to fully explore capacitive sensing, both with the sensor itself and by creating one with the Arduino. I discovered Capacitive Sensing for Dummies and explored ways I could apply it to my personal project.

Furthermore, in terms of the idea itself, I was heavily interested in visualization, especially through Perlin noise. I studied how it worked and thought of ways to apply it to text, as demonstrated by Kiel Mutschelknaus. Could I apply it to existing vectors I had made in previous works? Or perhaps to other shapes? I kept these questions in mind while brainstorming. Along with this exploration into algorithms within p5, I also thought of more realistic ideas concerning my identity, as representation is something I am personally passionate about.


In the end, I chose to do a micro-project about the hijab, which I thought suited the opportunity. There were not many existing precedents that explored this religious symbol in a “relatable” context, so to speak, so I wanted to start by creating one. I approached the idea by first browsing things I would encounter on Twitter and Instagram regarding comments and opinions on the hijab within the community itself. Modest hijab fashion is a continuously booming industry, giving rise to influencers from many different backgrounds and nationalities. This in turn forms many narratives among Muslims about how the hijab is represented and the impressions it gives to the general public. More often than not there is a serious undertone to these opinions, whether it is judgement or what some define as freedom of expression. An example is the controversy that arose around one of many hijabi influencers, Dina Torkia, over responses to how she now wears the hijab (link to article here.)

Rather than taking on the heavier side of this subject, I wanted to introduce a perspective I was more comfortable with. I often discuss different hijab styles with my close friends and how some styles look different on us depending on our face or head shape. We throw around words like ‘egg’, ‘tent’ or even ‘looking “sick”’ and laugh about it. I was also reminded of some Muslim women’s own reasoning behind wearing the hijab, ranging from free will to family influence, and how it can still be heavily affected by vanity regardless. This thought led me to search whether others who wear the headscarf think and feel the same way.


Screencap of “hijab egg” search results on Twitter

The results were amusing, to say the least, and led me to sketch my final idea. I thought of common puns that could go along with these terms, such as “eggscellent” and “ten-tastic.” Since this was a prototype, I decided to narrow it down to these two phrases as they are used in conversation, whether comically or as a teasing comment. I personally added one more, the word “ehh,” since it is one I often use to tell myself and my friends to be okay with “today’s look.”


Within this process, I also received advice from Veda Adnani to use a mirror to suit the purpose. It was more practical than using the actual scarf I owned and creating a flimsy wearable, and a more compact way to present the intended concept. From this, the idea of a vanity mirror + puns came to fruition.

Programming & Presenting the Concept

Website (p5)

The illustrations for the website were adapted from a previous project to fit this concept. The vector graphics came from a visual style project I had created for a visual novel game.


Original Vector Illustration


Altered Vector Illustration & Tent, Egg, Leaf base for the Hijab shapes

These were then applied in the final renderings of the screens shown earlier in this post. The final vector images were placed under if statements and would change when the wire was connected to ground (0).
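That if-statement logic can be sketched like this (a simplified illustration, not the project's code: the variable names are mine, the "ehh" wording is assumed, and the real sketch swaps images rather than returning strings):

```javascript
// Each wire is read as a pull-up input: 1 = untouched, 0 = connected to ground.
// The grounded wire decides which pun-pliment screen to draw.
const puns = {
  egg: "Eggscellent!",              // responds to the "egg" comments
  tent: "Ten-tastic!",              // responds to the "tent" comments
  ehh: "Ehh, today's look is okay!" // assumed wording for the third phrase
};

function activePun(readings) {
  // readings e.g. { egg: 1, tent: 0, ehh: 1 } -> the wire reading 0 wins
  for (const [name, value] of Object.entries(readings)) {
    if (value === 0) return puns[name];
  }
  return null; // nothing grounded: keep the default mirror screen
}
```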


I consulted peers who had used the capacitive sensor in the previous experiment to help me with the Arduino aspect. I was warned that there were many challenges to using the sensor, as it is quite sensitive once connected to power. I started with the circuit I found in Capacitive Sensing for Dummies and later adapted it to the capacitive pull-up resistor code from Kate’s example in class. The circuit evolved as follows:


Initial Circuit


Final Circuit

The first circuit pulled values to the serial monitor as it touched conductive objects, while the final circuit was a simple 1–0 pull-up. I included an LED as an indicator: if it lit up, the circuit was working, and this element performed perfectly. I decided to employ the final circuit for this prototype; with more time, there would be a better chance to use the actual capacitive sensor element to allow activation from a finger touch.

Arduino to p5 (and the problems.)

Once the LED lit up, I was sure that linking it to my p5 website would be no problem, as it only required connecting the serial port. However, I underestimated this: my images did not load when I turned it on. I thought this was a laptop problem and switched over to the Windows laptop. This made no difference, as it was most probably a webserver problem, so I continued testing with the original circles from the code examples.


Even after testing it on multiple local webservers and getting help from my peers Olivia and Omid, the images still did not load. After some debugging, the images finally showed up, thanks to Omid: instead of function preload(), he suggested I use createImg(). This solved my agony from the night before.


function preload() to createImg

However, time constraints prevented me from fully changing the code before the presentation on Friday. I created a demo video to present the concept and walked through what should happen once the code worked.


The feedback was positive overall despite the mishap, with many suggestions on how to push the project forward, from using an API to gather information from certain key terms on social media, to adding elements to the website that could provide better context for the concept of vanity and the hijab. Kate also mentioned that linking comedy to fashion is quite rare, and that this project offered a semblance of it.

Presentation day

Completing the Experiment

I was determined to make the code work. I then debugged the .js file, rearranged the elements and reconstructed the connection of the Arduino to the mirror. After spending time on perfecting the placement of the elements, I finally opened the serial connection to the p5 website. Eventually, things worked as planned.

Future Iterations

This was my first time planning and constructing hardware for the Arduino by myself and I came to some realizations:

  • The mirror lent to me by Veda was stainless steel and would easily conduct the current. When wiring the mirror, I placed copper tape all around the top part, where the indicators for where to clip or touch are. Once I connected the circuit to power and “touched” one word, all of them responded in p5. I then placed wood under the pins and taped them in place, and afterwards layered duct tape on top to prevent all of the images from changing in the p5 sketch.


Evolution of mirror connection

  • Using the capacitive sensor would have created a more viable product, as the user could actually touch part of the mirror instead of using alligator clips.

This project is only the start and has the potential to evolve into a series of micro-experiments that examine other aspects of communities as specific as hijab fashion-influencer culture. Although this prototype only scratched the surface, there are more aspects of vanity to explore. Data and commentary on what can be considered the threshold of intentions in wearing the hijab could be collected and expressed in more creative and engaging ways than merely creating and posting on a social media profile.


Behance. “SPACE TYPE GENERATOR: _v.Field.” Behance,

Instructables. “Capacitive Sensing for Dummies.”, Instructables, 13 Oct. 2017,

“Language Settings.” p5.Js | Home,

RCP, Sean T at. “Hijab Egg – Twitter Search.” Twitter, Twitter, 31 Mar. 2018,

The National. “Influencer Dina Torkia Says Online Hijabi Community Is Becoming like a Toxic Cult.” The National, The National, 5 Nov. 2018,

Artful Protest


By Maria Yala

Creation & Computation Experiment 3 / This & That


Project Description

Artful Protest is an interactive art installation, using p5 and Arduino over a serial connection, that invites participants to use art as a form of protest. It was inspired by this quote by Carrie Fisher:

“Take your broken heart, make it into art.”

The installation is a form of peaceful protest, designed around the idea that the freedoms we enjoy every day are not guaranteed and that we have to fight constantly to keep them. In the installation, participants are presented with a screen projecting current problems in the world, e.g. Black people being brutally murdered by police, families and children being separated at the US border, or innocent people killed in churches, synagogues and schools. There is also a cardboard protest sign that controls the images being projected. If the blank physical sign is picked up, i.e. a person begins protesting, the projection changes to a digital protest sign animation. The digital protest sign changes depending on how high or low the physical cardboard sign is held. If the cardboard sign is put down, the initial screen is projected again, inviting participants to protest once more.


Process Journal & Reflection

Initially, I wanted to recreate a voting booth experience to communicate the idea that people need to get active and vote to create or push for local/global change. However, upon further reflection and consultation with a few members of my cohort, I realized I was overthinking the idea. I decided to keep it simple and focus on the experience to be created, not on the code or technology. I started by looking at the experience on a wide scale before narrowing down how I would execute it. I knew I wanted to keep the political angle, and I wanted something visual and interactive, where a viewer would become a participant. I was thinking of the Women’s March and protest signs, and came up with the idea of a peaceful protest that used art.

[ Protest signs ] + [ Making one’s voice heard ] + [ Peaceful protest ] = Artful Protest


Once I had a general direction, I began looking for inspiration. Below are some of the images and work from other artists that inspired Artful Protest.


‘Imperfect Health: The Medicalization of Architecture’ Exhibition


type / dynamics installation at the stedelijk museum amsterdam


Interactive Art Installation by Akansha Gupta

I was inspired by the idea of hidden information being revealed and text being projected on the screen in the art pieces and installations above. I took elements of these ideas and used them in Artful protest where protest signs are hidden and then revealed depending on how high the physical protest sign is held.

Arduino Hardware & Code

For this project, a simple Arduino setup was used to collect information about how high the cardboard sign was held. I used an SR04 ultrasonic sensor to collect the proximity data. I found that this sensor was very finicky and would return strange readings. To fix this, I used a suggestion found in Kate Hartman’s code here: the NewPing library in the Arduino resource library. The library helps return cleaner values from the proximity sensor by letting you set a maximum reading distance. I set my maximum to 200, so the range of values returned was 0–200 cm. The library also provides a function that returns the distance calculated in cm. Artful Protest uses a serial connection where proximity data is received from the Arduino as input and sent over to p5.


Serial Connection

I chose to send the distance data over to p5 as JSON (JavaScript Object Notation). I had some problems using pretty print when sending JSON over serial: on the p5 side I would get an error about unexpected whitespace. I was able to resolve this by removing the pretty print function and using printTo instead, i.e. p5Send.printTo(Serial);
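On the p5 side, each incoming serial line can then be parsed defensively. This is a sketch under assumed names (the post does not show the exact payload shape; `{"distance": ...}` is my assumption):

```javascript
// Parse one serial line of JSON, e.g. '{"distance": 125}', and pull out
// the distance reading; return null for partial or corrupted lines so a
// bad read never crashes the sketch.
function parseDistance(line) {
  try {
    const data = JSON.parse(line);
    return typeof data.distance === "number" ? data.distance : null;
  } catch (e) {
    return null;
  }
}
```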

P5 Animations

On the p5 receiving side, the proximity data is used to determine what mode the installation is in. There are two modes, watching and protesting. Using the received data and a global boolean variable ‘isProtesting’, initialised as false, I determine which mode to set: if the distance is less than or equal to 30cm, the mode is ‘watching’ and ‘isProtesting’ is set to false; if the distance is greater than 30cm, the mode is ‘protesting’ and ‘isProtesting’ is set to true.

In the draw loop, I then use the ‘isProtesting’ boolean to determine which animation to “play”. I use a global variable ‘animation’, initialized to 0. Using the sensor value from Arduino, I determine a new value for ‘animation’ based on the height of the physical protest sign.
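Pulling the thresholds together, the mode and animation selection might look like this (a sketch: the cutoffs of 30, 80 and 100 cm come from this post's animation descriptions, but the function and field names are mine):

```javascript
// Map the proximity reading (cm) to the installation's mode and which
// protest-sign animation to draw.
function pickAnimation(distanceCm) {
  if (distanceCm <= 30) return { mode: "watching", animation: 0 };   // sign down
  if (distanceCm < 80)  return { mode: "protesting", animation: 1 }; // sign held low
  if (distanceCm < 100) return { mode: "protesting", animation: 2 }; // sign held higher
  return { mode: "protesting", animation: 3 };                       // sign held highest
}
```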


Sketches and initial idea designs

Animation 0 – Initial Screen.

The screen below is drawn when the variable animation = 0. It is the initial screen drawn when the mode is ‘watching’. The screen is drawn using text that is confined to the width and height of the screen.


Animation 1 – Little Sign, Yuge Feminist Screen

This screen is drawn when the recorded distance is less than 80cm. The animated part, “Yuge Feminist”, is drawn by changing the color of the text from black to white and back, creating the illusion that the text appears from and disappears into the black background. The colors are generated using the sin() function to produce values in the range from white to black.
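One way to produce that black-to-white oscillation with sin() is sketched below (my own minimal version, not the project's exact code; the speed constant is an assumption):

```javascript
// Oscillate a greyscale value between black (0) and white (255):
// sin() is in [-1, 1], so remap it to [0, 255].
function fadeValue(frameCount, speed = 0.05) {
  return Math.round(((Math.sin(frameCount * speed) + 1) / 2) * 255);
}
```

In the draw loop, `fill(fadeValue(frameCount))` would make the text fade in and out against the black background.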


Animation 2 – Dump Trump Screen

This screen is drawn when the recorded distance is less than 100cm. The animation is created by toggling between drawing two string variables, “Dump” and “Trump”. To create the toggling effect, I use a counter variable dumpCnt which is incremented by 1 every time animation 2 is drawn. I then check whether dumpCnt is odd or even using the modulo function: if dumpCnt is even, “Dump” is drawn; if it is odd, “Trump” is drawn. When I got this animation working, I realized the frame rate was too fast for the effect I wanted, so in the setup function I set the frameRate to 2. However, this slowed down my other animations, so I resolved it by setting a different frameRate for each animation within the draw function.
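The odd/even toggle reduces to a one-line modulo check (a sketch using the counter variable name from the text):

```javascript
// dumpCnt is incremented by 1 each time animation 2 is drawn;
// even counts draw "Dump", odd counts draw "Trump".
function wordToDraw(dumpCnt) {
  return dumpCnt % 2 === 0 ? "Dump" : "Trump";
}
```

With frameRate(2), the word therefore flips twice per second.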


Animation 3 – Love is Love Screen

This screen is drawn when the recorded distance is 100cm or greater. It shows an animation of an explosion of balls forming the word ‘LOVE’. This is done by tracing the font used to draw ‘LOVE’ on the screen; the tracing is then turned into points, using x and y coordinates. The animation was based on The Coding Train steering challenge, which can be found here. Each point is initially drawn at a randomly generated x,y coordinate, then moves to its target position, assigned according to the tracing of the original text. This is what creates the animation of points exploding on the screen and moving to a set location. The points are created as objects of the class Vehicle; they are held in an array which is iterated over to draw them on the screen. Each vehicle has a target x,y coordinate, hue (color), starting position, velocity, acceleration, radius, max speed and max force.

To assign colors to each point/vehicle, I set the color mode to HSL; while iterating over the points as vehicles are assigned targets, I assign a color and increment a hue variable, producing the rainbow effect.
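A highly condensed version of the steering behaviour described above might look like this (a sketch using plain numbers instead of p5.Vector, and only the “arrive” part; the full Coding Train version also models acceleration, max force and a flee behaviour):

```javascript
// Each point is a Vehicle that moves toward its assigned target,
// capping its speed and slowing down as it arrives.
class Vehicle {
  constructor(x, y, targetX, targetY) {
    this.x = x; this.y = y;
    this.targetX = targetX; this.targetY = targetY;
    this.maxSpeed = 10;
  }
  update() {
    const dx = this.targetX - this.x;
    const dy = this.targetY - this.y;
    const dist = Math.hypot(dx, dy);
    if (dist === 0) return;                      // already arrived
    const speed = Math.min(this.maxSpeed, dist); // "arrive": slow near target
    this.x += (dx / dist) * speed;
    this.y += (dy / dist) * speed;
  }
}
```

Spawning each Vehicle at a random position and calling update() every frame produces the “explosion” that settles into the traced text.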


Designing the protest sign

The protest sign was created using a sheet of cardboard material. I chose cardboard as it is a material that is used a lot during protest marches, and cardboard signs featured heavily during the 2017 Women’s March which was my main source of inspiration. I chose not to add a pole as a handle for the sign because I felt that it restricted the interactions with how a person would hold the sign. I created a small pocket at the back of the cardboard to hold my breadboard and sensor.

During testing, I realized that the cardboard was quite heavy, and when lifting it up higher it got significantly harder to hold it up. I wasn’t too bothered by this as I felt that it added another layer to the experience. During presentation, I got a suggestion to maybe tie this into the types of protest signs displayed i.e. when talking about heavy issues, show these protest signs when the physical protest sign is held up highest.


The physical cardboard protest sign with pocket at the back for the breadboard

Future expansions

I would like to add more protest signs and possibly include an orientation sensor, to determine another way in which the sign is being held. I also got the suggestion during critique, that I could explore having multiple people interacting and having that affect the protest signs shown. This would allow multiple people to join in on the interaction. I think this is an avenue I could possibly explore further, perhaps even adding more protest materials to the installation, not just one protest sign, and I could use an additional screen.

What I learned

I particularly enjoyed this experiment because it helped me simplify my work and focus on creating meaningful, organic interactions rather than showing off what a piece of tech can do. The quote below summarizes my findings 🙂

The artist should never contemplate making a work of art that is about something; a successful work of art can only ever be about nothing. The artist’s complete negation of intent thus creating a reflective surface into which the critic, curator or collector can gaze and see only himself – Sol LeWitt, Paragraphs on Conceptual Art, 1967

References & Resources

Artful Protest Code on Github

Code – Resources

Creative Coding – Text & Type

Rainbow Paintbrush in p5

The Coding Train – Steering Code Challenge

CC18 Digital Futures Class Github-

Images – Resources


“Bird” a toy to help understand machine/animal interaction.

Project By Amreen Ashraf

Project Github:







I started out wanting to work with sound. My first two projects were based around using code and electronics to control and manipulate sound, so I began by looking at my code from Experiment 1, where I had sketched a circle that moved to music with the help of Daniel Shiffman’s Coding Train videos. As somebody new to coding, the one thing I was sure of was the idea of using code to create audiovisual and sound experiments, and my first round of brainstorming and ideation built on that concept. Through the week, after being introduced to various other examples in class and trying a few of them out, I wanted to explore the idea of “connection” between humans and their machines through a virtual pet with a physical companion, much like 90’s toys such as the Tamagotchi. This idea of connecting with a virtual pet slowly evolved into not having the human in the mix at all. As technology and tech products get smarter at recognizing our natural environment, could technology connect to this environment just as humans can? Could technology come to care for other sentient beings and look after them the way humans have? These were some of the larger questions at the back of my mind during this project.


Project description

“Bird” is a toy operated by a proximity sensor which senses an animal’s body and sets off a toy, helping the human connect with their pet while being physically away from home. With the rise of “vertical” living and urbanization, most people live in small spaces with their animal companions. Unlike dogs, who need to be taken on constant walks, cats usually stay indoors, which can leave them lazy and inactive. There is a myriad of cat toys and products aimed at keeping indoor cats active and healthy, but most of these toys require humans to be present to play with their companions.


Final Circuit Board

For the final circuit board I used a proximity sensor and a servo connected to the Arduino Micro.



The tech


Arduino Micro

Servo Motor

Proximity Sensor



The Process Journal

First round of brainstorming:

30-31st October

I first started with the idea of working on a project that used sound. My first brainstorm session was built around that idea:


I tried out a few examples from the class GitHub provided by Kate and Nick. One example used two potentiometers, and another increased the brightness of an LED. I played with combining those two examples into one audiovisual experiment. As I don’t have a strong background in code, throughout this course I’ve used the examples provided in class as a basis to build upon.

Round 2

2-4th November

During the weekend a week before the project was due, not completely satisfied with the idea, I went back to the drawing board. Growing up I always had animal companions; currently I have 5 beautiful cats who live in Sri Lanka, my hometown. As a master’s student who is away from them all the time, it is hard to commit to taking care of a pet. This sparked the idea of using technology to comfort the part of me that wished for an animal companion: some sort of virtual pet which could comfort and respond just like an actual one.


I quickly drew a sketch of an initial concept. The idea was to use light sensors as input to a cat sketch in p5.js. The light sensors would be connected to a physical piece of hardware, maybe in the shape of a cat, which when touched or petted would produce a sound and start the vibration motors.


I also connected with the idea of a “cat rave”, just to give it an element of fun and silliness. The idea of technology being silly and fun appeals to me a lot. The “cat rave” was partly inspired by this video, whose soundtrack I actually used in my final project.

Round 3

5-7th November

I got working on the light sensors and had difficulty reading two sensor values from them. Even during Experiment 2, I realized I did not particularly enjoy working with the light sensor, and getting the Arduino serial monitor to read two values was not working out for me. On the Monday before the project was due, I approached Kate in class and she gave me a few pointers, such as using a Serial.print(",") line to separate the two readings from the light sensors. I gave it one more try, then decided to abandon the light sensor and go back to my favourite sensor, the proximity sensor. I felt that Experiment 2 had prepared me well to work with it.

This is the point in my project where I had to go back to the drawing board. I realized that if I used the proximity sensor, I couldn’t use the vibration motors, and the first idea no longer made sense. This is where I took the human out of the idea completely: what if it worked purely between the technology and the animal?

I used the examples from the class GitHub for the proximity sensor and decided to add the servo as the part that moves; my thought at the time was to use the servo as some sort of toy. This is when I zeroed in on gearing this creation specifically towards cats, linked to the idea of a toy which mimics prey.



I used 200 cm as the max distance for the proximity sensor and 30 cm as the threshold to initialize and start the servo.
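That threshold logic, sketched in JavaScript for clarity (the project runs it on the Arduino; the function name is mine, and I assume the sensor library reports 0 for out-of-range readings, as NewPing does):

```javascript
// The toy only activates for a valid reading within 30 cm;
// a reading of 0 means the sensor saw nothing within its 200 cm max.
function shouldActivateToy(distanceCm) {
  return distanceCm > 0 && distanceCm <= 30;
}
```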


Round 4

6-9th November

By the time I got the proximity sensor and servo connected, it was Wednesday morning and I had two days until the project was due. My plan for the p5.js part up to this point was a simple sketch of a cat which would make the sound “meow”; I tried to sketch the shape of a cat using an ellipse and two triangles. With time running out, I skipped this step and instead mapped an image to the sensor value. I used the image below, tied to the proximity sensor value, so that it would be activated only when the cat is close to the machine. The cat approaching triggers a dual action: the image appears and the servo starts.



I used craft feathers and popsicle sticks to construct a movable object that would appeal to cats.



  1. I found it much more satisfying to work in a group
  2. I should have spent more time sketching out my idea in p5.js


“Kurt Box” by Erik De Luca

A project I found inspiring was one I had presented for Case Study 2: “Kurt Box”, which aimed at using electronics to interact with non-humans. “Kurt Box” uses a microphone to sense an animal approaching, and when the animal nears the object, it plays a Nirvana song. This idea of using electronics to extend the human/machine connection to non-human sentient beings is vastly appealing to me as a designer. Coming from a human-centered design background, I usually aim to practice it, but recently, especially after learning more about electronics, I’ve been curious about using these devices to aid in understanding our surroundings and nature.



  1. Puckett, N., & Hartman, K. (n.d.). Retrieved November 12, 2018, from
  2. Deluca, E. (2017). “Kurt Box”. Retrieved November 12, 2017, from
  3. Image cool cat. (n.d.). Retrieved November 8, 2018, from
  4. Image kitten chasing butterfly. (n.d.). Retrieved November 12, 2018, from
  5. Official Meow Mix Song – Cats at a Rave! (2014, September 24). Retrieved November 8, 2018, from

Grow You A Jungle


Grow You A Jungle was created with the intention of bringing a little simple joy and life to the process of watering plants. When you take care of a lot of houseplants, you begin to think about the life and time of a plant: the growth is often hard to see, yet sometimes a rustling can be heard and the leaves are moving, growing, dropping. Seeing significant growth in a plant can take time.

Being in the woods or a jungle you notice the crashing noise and the movement of all of the life around you, creating a cacophonous hum of living. Indoors you start to forget how alive everything truly is. I wanted this project to bring a bit of the lush movement of nature to the indoors.

Process Ideation

Throughout the semester I had wanted to do a project that involved plants, but hadn’t found a group project it fit into. So I began to formulate an idea involving a plant and the idea of time, and after using the orientation sensor, I realized that I wanted to examine the concept of movement and gestures. I am currently in a class called Experiences and Interfaces, which has led me through a lot of thinking about movement and our interactions with the world through gestural action. One night, while putting off watering many of my plants, I realized how artificial this gesture of pouring water is. The houseplant market is booming, yet it is just a facsimile of the natural world.


I began to think about creating a small simple experience that would enhance the idea of a plant growing from being watered, eyes and ears engaged. Projection has always been of interest to me, for its use of scale and darkness and light. An image of a dark room coming alive with the sounds of nature slowly creeping up at the action of a watering can feeding a garden came to mind. I decided to move forward and create.

Process Video and Sound

The process of creating the final video for the installation came through much iteration and testing. I made the decision to create it using found footage with Creative Commons licensing, since the timeline was too short for me to film my plants with any significant changes tracked. I wanted the video to have a lush, ephemeral quality with lots of light and dark and movement. In my personal photography work I frequently create images using overlays and intense saturation, and I decided to use this same technique for the video. So I took to creating layers of plants growing in Adobe Premiere Pro, a process of testing and tweaking. In the end I used several opacity masks to get the look I wanted. Below are some other videos I created before settling on my final.

The sound portion of this project took huge cues from a website I found called Jungle Life, a user-controlled jungle sound player. I spent a bit of time in the jungles of Costa Rica a couple of years ago and have very intense memories of sitting in the rainforest listening to the deafening sounds changing and moving around me. I wanted this to be the sound that would be triggered by the watering can. So, as with the images, I found about 15 Creative Commons samples of jungle and forest sounds and took to creating a timeline of all of them in various levels of highs and lows.


Process Arduino

Initially, when the project was announced, I was a bit intimidated by the idea of creating the entire code and circuit by myself, but it was a much-needed challenge and confidence booster in the end. I take very well to building on knowledge that I already have, so I decided to keep everything as tidy and simple as I could to tell the story I wanted. In class one day I set up the orientation sensor successfully and realized how versatile this feature could be. My final circuit ended up being this exact setup, taken from the orientation sensor tutorial in class. When setting up my board I ran into some problems and could not figure out why it wasn’t working. After 20 minutes of pulling my hair out and rewiring most of the board, I realized that one of the wires I had cut had split and wasn’t making a connection. A good lesson in checking the small stuff thoroughly.
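The pouring gesture itself comes down to reading the orientation sensor and deciding whether the can is tilted far enough. A sketch of that decision in JavaScript, assuming the sensor's pitch angle in degrees arrives over serial (the 45° threshold and the function name are illustrative assumptions, not values from my sketch; the real threshold would be tuned by holding the can and watching the sensor's output):

```javascript
// Decide whether the watering can is "pouring" from its tilt angle.
// POUR_ANGLE is an assumed threshold in degrees.
const POUR_ANGLE = 45;

function isPouring(pitchDegrees) {
  // Tilting past the threshold in either direction counts as pouring.
  return Math.abs(pitchDegrees) >= POUR_ANGLE;
}
```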



Final Circuit


Process P5

Building the code was the most intimidating part in my mind. So, as with the board, I built on some of the code that had been provided by Nick and Kate. I began to slowly break down each line, what it meant and how it functioned. I did several Google searches to assist in writing this code, and as had been mentioned, there really is no search for “How to make a watering can trigger a video”. I honestly didn’t even find much for “How to make an orientation sensor trigger a video”. There were, however, many pages of documentation about using Processing to do this. It seemed like a big task to switch languages at that point, so I decided to proceed with p5.

My realization was that what I would have to write is an if/else statement. Which I did successfully, or so I thought. But it wasn’t triggering the video. After another couple of hours of painful searching, I posed the question to my classmates. One noticed that I hadn’t been using the draw function; this had been intentional initially, as I just needed the video to play on the screen, but I hadn’t taken into account the action that would need to trigger and loop. So once I moved my toggleVid() if/else logic into the draw function, BOOM! It worked. This moment felt like I had won a medal. Something I keep learning every time I code is how tedious and time-consuming it can be, and that the learning will never end. Persistence and variety in method is surely the key to this one.
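The reason the check had to live in draw() is that p5.js only re-evaluates code placed there, once per frame, so a one-time if/else never sees the sensor value change. A minimal sketch of that per-frame toggle, with hypothetical names standing in for my actual toggleVid() setup:

```javascript
// Per-frame video toggle: only act when the pouring state actually
// changes, rather than calling play/pause on every frame.
// `videoPlaying` mirrors the state the real toggleVid() tracked.
let videoPlaying = false;

// Called once per frame from draw(), like toggleVid() was.
// Returns the action taken ("play", "pause", or "none") so the
// video element is only poked on a state change.
function toggleVid(pouring) {
  if (pouring && !videoPlaying) {
    videoPlaying = true;
    return "play";   // in p5.js: vid.loop()
  }
  if (!pouring && videoPlaying) {
    videoPlaying = false;
    return "pause";  // in p5.js: vid.pause()
  }
  return "none";
}
```

Because draw() runs continuously, the toggle sees every change in the sensor reading; outside draw() it would have run exactly once at startup and never again.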




I have been ruminating for a couple of years on the Québécois NFB short film “The Plant”, directed by Joyce Borenstein and Thomas Vamos. It is a beautiful film that delivers a timeline of obsession through the relationship of a man and a plant, and was filmed in my old house in Montréal. Something I kept remembering about this film was how the plant’s movements turn from joyous to vicious and wild in the time it would take to see a small amount of growth in a real live plant, which led me to think about how a time lapse of a plant shows exactly how wild and alive the growth really is.


Another project that lent some inspiration was the wonderful art group teamLab from Japan. They create incredible immersive experiences using digital technologies and huge real-life installations. They believe that the digital world and art can be used to create new relationships between people, and they achieve this through interactive work that responds to users’ movements. I am interested in creating simple installations in my own work that will make people reflect on their place in the world and how they interact with it, the small, magical changes that can occur when you make an action or decision. One of their projects below uses projection to bring life to a tea ceremony. Another overlays projected animals in a natural environment to make the user contemplate our place in the world and how we may be the top predator of the life cycle.



Final Thoughts

This project began simple and stayed simple, but I do not think that lessens its value and success. I am very happy with the outcome and am hoping to come back to this idea of triggering growth in the future. In the final presentation there were some wonderful comments about refining the way the sound and video reacted to the movement; I am banking these for future iterations. Something else I wanted to explore was how scale could amplify the feelings that this project evokes, possibly using a whole room filled with plants and projection mapping to sculpt the way the videos look and feel. The more we build technology into our daily lives, the more aware I become of our need for natural life. Exploring this idea further will make the world we live in more whole and will allow sustainable living ideology to flow more freely into our making and ideas.


  • “Adafruit BNO055 Absolute Orientation Sensor.” Memory Architectures | Memories of an Arduino | Adafruit Learning System,
  • “Free Forest Sound Effects.” Free Sound Effects and Royalty Free Sound Effects,
  • “Free Jungle Sound Effects.” Free Sound Effects and Royalty Free Sound Effects,
  • Ir, and Stéphane Pigeon. “The Sound of the Jungle, without the Leeches.” The Ultimate White Noise Generator • Design Your Own Color,
  • Koenig, Mike. “Birds In Forest Sounds | Effects | Sound Bites | Sound Clips from” Free Sound Clips,
  • Koenig, Mike. “Frogs In The Rainforest Sounds | Effects | Sound Bites | Sound Clips from” Free Sound Clips,
  • Koenig, Mike. “Rainforest Ambience Sounds | Effects | Sound Bites | Sound Clips from” Free Sound Clips,
  • teamLab. “Flowers Bloom in an Infinite Universe inside a Teacup.” TeamLab / チームラボ,
  • teamLab. “Living Things of Flowers, Symbiotic Lives in the Botanical Garden.” TeamLab / チームラボ,
  • “Timelapse Footage.” Openfootage,
  • Vamos, Thomas. “The Plant.” National Film Board of Canada, National Film Board of Canada, 1 Jan. 1983,
  • “Video.” p5.Js | VIdeo,
