Category: Experiment 3

First Flight (An Interactive Paper Airplane Experience)

Experiment 3:

By: Georgina Yeboah

Here’s the Github link:



Figure 1. "First Flight (An Interactive Paper Airplane Experience, 2018)." Photo taken at OCADU Grad Gallery.

First Flight (FF) (2018) is an interactive tangible experience in which users steer a physical paper airplane to control the orientation of a virtual sky, creating the sense of flying with the screen while attempting to fly through as many virtual hoops as they can.

Figure 2. "First Flight Demo at OCADU Grad Gallery." 2018.

Figure 3. First Flight Demo at OCADU Grad Gallery (2018).

Video Link:

The Tech:

The installation includes:

  • x1 Arduino Micro
  • x1 BNO055 orientation sensor
  • x1 breadboard
  • x1 laptop
  • A handful of jumper wires
  • Female headers
  • 5 long wires (running from the breadboard to the BNO055)
  • A paper airplane

Process Journal:

Thursday Nov 1st, 2018: Brainstorming to a settled idea.

Concept: Exploring Embodiment with Tangibles Using a Large Monitor or Screen. 

I thought about a variety of ideas leading up to the airplane interaction:

  1. Using a physical umbrella as an on/off switch to change the state of a projected animation: if the umbrella was closed, the projection would show sun; if it were open, it would show an animation of rain.
  2. Picking up objects to detect a change in distance (possibly using an ultrasonic sensor). Different objects could trigger different animations (for example, picking up sunglasses from a platform would trigger a summer beach-scene projection).
  3. Using wind/breath as an input to move virtual objects, though I was unsure of where or how to get a sensor for it.
  4. Using a potentiometer to create a clock that triggers certain animations to represent the time of day. A physical Ferris wheel controlling a virtual one and causing some sort of animation was also among my earliest ideas.
Figure 4. First initial ideas of embodiment.


Figure 5. Considering virtual counterparts of airplane or not.

Monday Nov 5th, 2018:

Explored and played with shapes in 3D space using the WEBGL mode in p5.js. I learned a lot about WEBGL and the properties of its z-axis.

Figure 6. Screenshot of Airplane.js code.

I looked at the camera properties and reviewed their syntax in Daniel Shiffman's "Processing P3D" tutorial. Later, I planned to set the CSS background gradient and attach the orientation sensor to control the camera instead of the mouse.

Figure 7. Camera syntax in WEBGL. Controls the movement of the camera with mouseX and mouseY.
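The mouse-to-camera mapping can be sketched as a pure helper (a hypothetical reconstruction, not the sketch's actual code; the ranges and eye distance are assumptions):

```javascript
// map() re-implemented here so the helper also runs outside p5.
function mapRange(v, inLo, inHi, outLo, outHi) {
  return outLo + ((v - inLo) / (inHi - inLo)) * (outHi - outLo);
}

// Returns the nine arguments p5's camera() expects:
// eye position, look-at point, and up vector.
function cameraFromMouse(mouseX, mouseY, width, height) {
  const camX = mapRange(mouseX, 0, width, -200, 200);
  const camY = mapRange(mouseY, 0, height, -200, 200);
  return [camX, camY, 500, // eye sits 500 units back (assumed distance)
          0, 0, 0,         // always look at the origin
          0, 1, 0];        // y-axis is up
}

// In draw() this would be spread into p5's camera():
//   camera(...cameraFromMouse(mouseX, mouseY, width, height));
```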


Figure 8. First Flight's interface using WEBGL.

Tuesday Nov 6th, 2018.

I had planned to add cloud textures for the sky but never found the time to do so. I did manage to add my gradient background, though, using CSS.
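As a rough illustration (a hypothetical helper, not the project's code), the gradient value CSS expects could be assembled like this:

```javascript
// Builds the CSS value for a top-to-bottom sky gradient.
function skyGradient(topColor, bottomColor) {
  return `linear-gradient(to bottom, ${topColor}, ${bottomColor})`;
}

// In the browser this could be applied as (colours are assumptions):
//   document.body.style.background = skyGradient('#87CEEB', '#FFD9A0');
```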

I also planned to add obstacles to make reaching the hoops challenging, but I didn't include them due to time constraints and prioritization; I thought they would be better suited to future work.

Thursday Nov 8th, 2018.

On the eve before the critique, I successfully soldered long wires to the female headers that would attach to the BNO055 orientation sensor. The sensor would sit nicely on top of the paper airplane's nose, covered with extra paper. On the other end, the wires would connect to the breadboard where the Arduino Micro sat.

Figure 9. BNO055 orientation sensor sits nicely on top of the paper airplane.

References and Inspirations:

I wanted to play with the idea of embodiment. Since I've worked with VR systems in combination with tangible objects for a while, I wanted to revisit those kinds of design ideas, but with a screen instead of immersive VR. A monitor large enough to sustain engagement seemed a simple enough way to explore this idea of play with a paper airplane.

I looked online for inspiring graphics to help me start building my world. I wanted this to be a form of play, so I wanted the world I'd fly through to be as playful and dynamically engaging as possible.


Paper Planes by Active Theory was a web application created for the Google I/O event back in 2016 (Active Theory). It was an interactive web-based activity in which guests at the event could send and receive digital airplanes from their phones by gesturing a throw toward a larger monitor. Digital paper airplanes could be thrown and received across 150 countries (Active Theory). The gesture of creating and throwing in order to engage with a larger whole through a monitor inspired my project's exploration of playful gesture and interactivity.

Figure 10. Active Theory. (2016). Paper Planes' online web-based installation.

The CodePad:

This website features a lot of programmed graphics and interactive web elements. I happened to come across this WEBGL page by chance and was inspired by the shapes and gradients of the world it created.

Figure 11. Meyer, Chris. (n.d.). "WebGL Gradient". Retrieved from


P5.Js Reference with WEBGL:

I found that the torus (the donut) was part of WEBGL, and along with the cone, I thought they would be interesting shapes to play and style with. The torus would wind up becoming my array of hoops for the airplane to fly through.



Figure 12. p5.js. (n.d.). "Geometries". Retrieved from

Future work:

Currently, the project has many iterations and features I would like to add or expand on. I would like to finalize the environment and create a scoring system so that users collect points when they pass through a hoop: the more hoops you fly through, the more points you get. Changing the environment's gradient background after a period of time is another feature I would like to work on. I believe there is a lot of potential in First Flight to eventually become a fully playful and satisfying experience with a paper airplane.


3D Models. CGTrader. (2011-2018). Similar free VR / AR / low-poly 3D models. Retrieved from

Active Theory. (n.d.). Paper Planes. Retrieved from

Dunn, James. (2018). Getting started with WebGL in p5. Retrieved on Nov 12th, 2018 from

McCarthy, Lauren. (2018). Geometries. p5.js examples. Retrieved from

Meyer, Chris. (2018). WebGL Gradient. Codepad. Retrieved from

Paper Planes. (n.d.). Retrieved from

Shiffman, Daniel. (n.d.). P3D. Retrieved from

W3Schools. (1999-2018). CSS gradients. Retrieved from


Generative Poster

Project by: 
Josh McKenna



Browser Experience Here.


Experiment 3, This & That, was introduced to us as an opportunity to work individually and explore a concept involving communication between Arduino and p5. The idea for my experiment originated from an experience I had earlier in the semester, when I attended the Advertising & Design Club of Canada's annual design talk, which this year featured multiple graphic design studios from San Francisco (see Figure 1). At the end of the presentation I bought one of three posters they had made, each with small differences from the others. It was the first time I remember having the choice to buy the same poster, in terms of its system and graphical elements, but with some variability between each design. Inspired by this experience, I recognized an opportunity to use generative design to produce variability within graphic design artefacts. I felt it could add value or incentive for the attendee of an expo or event to bring home a personalized version of a poster that extended the graphical identity of that event. This project experiments with that very idea and allows the user to explore the identity of a preset graphical system, expressed through various compositions driven by an Arduino controller.


Figure 1: The Advertising & Design Club of Canada’s Design San Francisco event poster for the 2018 event.

Recognizing that the variability demonstrated within my generative posters would be part of a larger system, I decided to begin my ideation by revisiting my favourite graphic design text, Josef Müller-Brockmann's Grid Systems book. From there I continued looking at work from the Bauhaus and eventually at more contemporary work by the studio Sulki & Min. It was while examining Sulki & Min's archived projects that I came across a body of their work that I felt could be expanded upon within the time parameters and scope of this project (see Figure 2).


Figure 2: Perspecta 36: Juxtapositions by Sulki & Min, 2014


Through an Arduino powered controller, users will be able to modulate and induce variability into poster design via generative computing.


The project's hardware components were fairly simple. Altogether, the electrical components used in this experiment included an Arduino Micro board, a potentiometer, and two concave push buttons. See the Fritzing diagram below for the full schematic (Figure 3).

Figure 3: Fritzing diagram of electrical components and Arduino

Because of the constraints of this project and my limited fabrication skillset, I decided to focus the majority of the project on developing the browser experience. When it came to constructing an actual controller to house the electronics, a simple cardboard jewelry box was sourced (see Figure 4). The controller itself includes the aforementioned potentiometer, a blue push button, and a black push button.

The most important aspect of the physical casing was simply whether it worked. Compared to the ideation and execution of the browser aspect of this project, minimal time was spent planning the physical form of the Arduino controller.



Figure 4: The Arduino Controller component as part of the Generative Poster Experiment (Top). Inside the physical container (Bottom).


The approach I decided to move forward with was simple: I first had to determine and define the limits of the system's variability. Keeping a strong reference to Juxtapositions by Sulki & Min (Figure 2), the first rule of the graphic system was that all of the circles in each new sketch would fall along divisional lines on the x-axis of the canvas. I originally divided the canvas's width into quarters, but landed on ninths, as I felt the widescreen of a browser worked best within that ratio. The code now had to place each generated circle's x position at a randomly selected multiple of 1/9 of the browser window's width. From there, its y position would be randomized within the height of the canvas. Originally the concept intended for the user to edit the fraction into which the canvas's width would be divided via a potentiometer, but that functionality was eventually scrapped because of scope issues (although it can be reintroduced manually by uncommenting code in the sketch.js file).
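The first rule can be sketched as a small helper (names are assumptions, and p5's random() is replaced by an injectable source so the logic can be tested outside the browser):

```javascript
// x snaps to a random multiple of width/divisions (columns 1..divisions-1,
// so circles stay inside the canvas); y is free within the canvas height.
function circlePosition(width, height, divisions = 9, rand = Math.random) {
  const column = 1 + Math.floor(rand() * (divisions - 1));
  return {
    x: (width / divisions) * column,
    y: rand() * height,
  };
}
```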

The second rule of the system was that a large circle would appear in one of four quadrants each time the sketch was redrawn. This circle acts as the poster's primary element, and because of its dominance in the composition, I decided to give the user the ability to manipulate its size from large to small through a potentiometer linked to the Arduino controller (see Figure 5). This functionality was also the easiest to map to a potentiometer.
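A hedged sketch of the potentiometer mapping: an Arduino analogRead() returns 0-1023, which could be scaled onto the primary circle's diameter (the diameter range is an assumption, not the project's actual values):

```javascript
// Scales a 10-bit potentiometer reading (0-1023) onto a diameter range,
// clamping out-of-range serial values for safety.
function primaryDiameter(potValue, minD = 50, maxD = 400) {
  const t = Math.min(Math.max(potValue / 1023, 0), 1);
  return minD + t * (maxD - minD);
}
```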

Finally, the third rule of the composition was that equivalent numbers of medium and very small circles would be drawn relative to the proportion of small circles. The ratio of medium and very small circles to small circles was experimented with, and a 4:1 ratio (M+VS : S) was finally decided upon. This ratio was not editable by the user when interacting with the Arduino controller.
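Reading the 4:1 ratio above as two medium and two very small circles for every small one, the size list could be generated like this (pixel sizes are assumptions, not the project's values):

```javascript
// For each small circle, emit equal numbers of medium and very small
// circles, four of them in total (the 4:1 M+VS : S ratio).
function circleSizes(smallCount, sizes = { medium: 30, small: 20, verySmall: 8 }) {
  const out = [];
  for (let i = 0; i < smallCount; i++) {
    out.push(sizes.small);
    out.push(sizes.medium, sizes.medium, sizes.verySmall, sizes.verySmall);
  }
  return out;
}
```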

Originally I wanted the Arduino portion of the project to also control the rate at which each set of circle sizes would grow over the course of the sketch. However, this proved to be outside the scope of this project, as I was not able to find a way to incorporate this functionality from both a technical and an aesthetic viewpoint.

To give a sense of pacing and movement to the otherwise static original reference, I felt that all of the circles generated should have specific growth rates as they expand to fill the canvas.


Figure 5: Ideation of the Generative Poster project

The variability of the posters would only be recognized if the user could redraw the sketch, so a refresh/redraw function was incorporated into the Arduino controller through a blue concave push button. By refreshing, the user is able to cycle through randomly generated posters and decide which composition suits them best. Finally, the print-screen/save-image function was assigned to the other push button.
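The two button roles could be routed from incoming serial data roughly as follows (the JSON field names are assumptions, not the project's actual protocol):

```javascript
// Dispatches a serial message from the controller to sketch actions:
// blue button redraws the poster, black button saves the canvas.
function handleControllerMessage(msg, actions) {
  const data = JSON.parse(msg);
  if (data.blueButton === 1) actions.redraw();
  if (data.blackButton === 1) actions.save();
  return data;
}
```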


I believe this project was executed to the standard I set at the beginning. To my excitement, during the critique I was able to see some of the different posters people made based on the algorithm and system I laid out (see Figures 6 and 7).


Figure 6: Example of Generative Poster

The idea is for the user to sense when a composition forms into something that visually resonates with them; they can then choose to save it. During the critique, each user's selected composition was printed onto paper, framing that moment in time, so that the user could have their own physical copy of the experience.


Figure 7: Randomly generated posters during Experiment 3 Critique


The Coding Train. (2017, January 9). Coding Challenge #50.1: Animated Circle Packing – Part 1. Retrieved from

Hertzen, N. V. (2016). html2canvas – Screenshots with JavaScript. Retrieved from

NYU ITP. (2015, October 4). Lab: Serial Input to P5.js – ITP Physical Computing. Retrieved from

Puckett, N., & Hartman, K. (2018, November 2). DigitalFuturesOCADU/CC18. Retrieved from

StackOverflow. (2011, March). How to randomize (shuffle) a JavaScript array? Retrieved from

Sulki & Min. (2014, March). Archived Designs. Retrieved from

Voice Kaleidoscope








Voice Kaleidoscope takes voice patterns from the computer's microphone and outputs them onto a circular LED matrix as colours and patterns. It was created for people on the autism spectrum who have trouble interpreting facial expressions, and particularly for pattern thinkers with autism spectrum disorder (ASD).



Voice Kaleidoscope was created as a tool to help communicate emotion through patterns and colours. Facial emotion perception is significantly affected in autism spectrum disorder (ASD), yet little is known about how individuals with ASD misinterpret facial expressions in ways that make it difficult to accurately recognize emotion in faces. ASD is a neurodevelopmental disorder characterized by significant impairments in social interaction and in verbal and non-verbal communication, and by repetitive/restricted behaviours. Individuals with ASD can also experience significant cognitive impairments in social and non-social information processing. By taking vocal expression and representing it as a pattern, this device can serve as a communication tool.





There are many variations on the way voice can be turned into patterns. I was curious about the fluctuations in voice and emotion, and it was fascinating to see sound waves translated into frequency. I wanted to see what these patterns would look like and how they could help me conceptualize the design of my own project. Through a HAM radio club I found someone willing to talk to me about sound frequency and about the beautiful patterns sound makes on an oscilloscope.




Early in the process I was quite secure in my concept. Having seen a friend with a family member who relates more to colours and patterns, I had always wondered why there wasn't a tool to facilitate the interpretation of human emotions for people who face these barriers. It was also very important for me to get out of my comfort zone with coding. I wanted to embark on a journey of learning, even if I was afraid of not sticking with what I already knew I could execute. I knew that output from p5.js to Arduino would be much more challenging than the input infrastructure I had grown comfortable with. I was adamant that this also be a journey of taking chances and true exploration. This project was about communication and growth.


While researching aspects of pattern thinking and ASD tools in classrooms, my project went through an initial metamorphosis. At first I imagined the design as a larger light matrix with literal kaleidoscope features; further into the thought process I decided this communication tool should be more compact, easy to fit into a backpack or most carrying mechanisms. Earlier versions also had construction plans for a complex face with cut-out shapes.





I started with the code right away; I knew my biggest hurdle would be getting p5.js working with Arduino. I began thinking about the architecture of the project. My first design step was to think through the flow of how voice would move through p5.js and into the Arduino, and what that code would look like.

Initially I had to decide how the microphone would be incorporated into the design. I explored adding a microphone to the breadboard versus using the computer's built-in microphone. At this stage I also got started on the p5 serial control application right away; there were many issues with the application crashing. The first step was to design the voice interface in p5.js, which was a difficult task: I wanted to incorporate the same number of LEDs into the design without it becoming overcomplicated and messy. While designing the interface I began testing the microphone's interaction with p5.js. I was trying to capture the animation of the voice flickering in the p5.js sketch, and started looking up code variations for turning on the built-in microphone.

After this was set up and working, I moved back to the JSON and serial control app. There were still connection issues. In the first stage of coding, the console could not stay on an open port, so I kept testing variations of turning serial control on and getting it to stay on a specific port. I discovered the port kept changing frequently. I decided to reinstall, and that fixed the issue temporarily.


Putting together the board and LED lights:

For the LED matrix I decided to use three WS2812B LED pixel rings. For initial testing of the rings, and while deciding how to power and lay out my breadboard, I kept the rings separate.


I had to figure out how to daisy-chain the components so that a single data line would run in and out to the Arduino. While powering up the lights, I discovered that an external 5-volt power source wasn't enough. Some online sleuthing suggested that running a 12- or 9-volt source through a DC-DC converter would be better for my LEDs.


Coding the Arduino:

During this process I had to decide what the light patterns would look like. I went through many colour and pattern variations, and settled on a chase pattern with colour variations for loudness. How loud or soft the voice was would determine how many times the light travelled around the rings. I also had to test variations of brightness: even with the 9-volt power source, the LEDs drew power quickly and flickered. The rings proved to be very different operationally from the strips.
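The loudness mapping described above can be sketched as a helper (thresholds, band names, and revolution counts are assumptions, not the project's actual values):

```javascript
// Maps a normalized mic level (0.0 silent .. 1.0 loud) to how many times
// the chase travels around the rings and which colour band it uses.
function chaseSettings(level) {
  const clamped = Math.min(Math.max(level, 0), 1);
  return {
    revolutions: 1 + Math.floor(clamped * 4), // 1..5 trips around the rings
    colourBand: clamped < 0.33 ? 'cool' : clamped < 0.66 ? 'warm' : 'hot',
  };
}
```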

Finalizing the board and testing:

Once the lights and board were operational, I dove into testing the p5 files with the Arduino. There were many calibrations between p5 and the Arduino. At first I could see that the port was open, but was unsure whether there was communication in the Chrome console. Since I couldn't use the Arduino serial monitor, I initially had a hard time discerning whether the Arduino was connecting: I could see numbers running in the console and an open port, but still could not get an open connection. I went back to researching what notifications I should see in the console when the Arduino is connected. I found the connection notification, but still could not get it running after going over the code. Finally, with a reboot, my microphone and p5.js files were connecting with the Arduino and I could see my voice patterns in the matrix.


This experiment brought my learning to a whole new level of JSON and serial communication. I learned the ins and outs of not just input but output as well. Even though there were many connection issues, working through these problems made me a better coder and builder. Feedback about expanding a much-needed communication tool, and seeing how these ideas could improve people's lives, encouraged me to keep following this line of thought and to continue exploring ways of assisting people through technology.

Added notes on future expansion for this project:

  • Make different sizes of this device: wearables, or larger versions for environments such as presentations.

  • Incorporate a study on voice patterns, light, and how they relate to autism and pattern-oriented thinkers.

  • Expand the p5.js interface to reflect any findings from the study, and evolve the design based on them.


Article on Autism

P5.js to Arduino

Serial Call Response

Article Autism and Emotions

Paper on Emotions and Pattern Research

References p5







Asynchronous Playback

By Olivia Prior

GitHub link

Figure 1: Asynchronous Playback being activated by a viewer standing on the map

Asynchronous Playback is an art installation that reflects the viewer's posture through constantly altering video playback. The viewer is invited to step on a mat that starts a video. The mat detects the distribution of weight between the viewer's left and right sides as they watch. As the viewer settles into one side of their body, the left and right halves of the video fall out of sync: the side that detects higher force speeds up the corresponding side of the video. If the viewer shifts their weight to the opposite side, the other side of the video speeds up and the first returns to standard speed. If either side does not detect weight, that side of the video will not play. If one side detects weight while the other does not, the playing side's video speeds up to twice as fast.
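The playback rules above can be condensed into one decision function (a sketch; the exact speed constants are assumptions, since the piece's actual values are not stated):

```javascript
// Given the force readings from the left and right sensors, returns the
// playback speed for each side of the video (0 means that side is stopped).
function playbackSpeeds(leftForce, rightForce) {
  const NORMAL = 1.0, FAST = 1.5, DOUBLE = 2.0, STOPPED = 0;
  if (leftForce === 0 && rightForce === 0) return { left: STOPPED, right: STOPPED };
  if (leftForce === 0) return { left: STOPPED, right: DOUBLE }; // only one side stood on
  if (rightForce === 0) return { left: DOUBLE, right: STOPPED };
  if (leftForce > rightForce) return { left: FAST, right: NORMAL }; // heavier side leads
  if (rightForce > leftForce) return { left: NORMAL, right: FAST };
  return { left: NORMAL, right: NORMAL }; // evenly balanced
}
```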

The piece requires user interaction for the video to move forward, which makes it a collaborative performance: viewers have to take turns stepping on the mat to progress the video and to sync the two screens back up.

This work also comments on the experience of viewing video art: viewers are never told when the video starts, how long it is, or what part of it they are entering. Asynchronous Playback can create a new way to experience video art.


My initial idea for this project was to create an art installation that focused on the act of viewing. The p5.js library has a strength in image making and manipulation. I wanted to create an interactive piece that mimicked the viewer engaging with the piece visually.

The first iterations of this theme used a painting (or an image) that would warp depending on the viewer's posture. This was inspired by the act of standing and absorbing a visual work in an art gallery, and the comfort of falling into a resting position. I chose not to pursue the idea of an image mimicking the viewer's posture after reflecting on my own experience in a gallery: most paintings or images become fleeting when grouped together and do not hold a viewer's attention for long. As well, if I were to pursue a reactive image, I would want the response to be gradual rather than immediate. Additionally, the idea felt a bit novel, since the image only reflected posture, and I could not see much more that it could offer as an experience. I did not do a proof of concept for this idea because of my doubts about how engaging it would be.

I iterated on this theme by changing the responsive medium to video art. Video has attributes that easily allow for participation and engagement: playback speed can be altered, it can play and pause, and most importantly the medium naturally invites viewers to engage directly with the work by pausing and watching the content. With this in mind I started to design how this interaction would work.

Figure 2: Initial sketch of the mat in relation with the screen

While sketching ideas, I considered using two mats to prompt two separate viewers as the input for the piece. Upon reflection, I thought that having two users focusses on the relationship between the two beings, rather than the relationship between the viewer and the screen. I kept this idea in the back of my mind, since for construction the only difference between having two mats control a video and having a single user control the playback with their posture was either building one mat with two sensors or splitting the pair into separate casings; the code would be fundamentally the same. I decided to continue forward and let small-scale testing of the experience determine what it should be.


Technology decisions

I considered a few different options when researching what type of sensor to use as an input. One option was a contact microphone that would be triggered by sound or rustling: if the microphone were put into a mat, it would pick up the movement of the material as the viewer settled into a position. This input seemed quite static and didn't offer the ability to measure pressure. While browsing various component sites, I came across a "Force, Flex, & Load" sensor that measures weight. The sensor documentation suggested this was a viable option, so I bought one to test as a proof of concept.

Figure 3: Force Sensor found from Creatron.

Proof of concept

Before embarking on development, I needed to ensure that p5.js would be able to dynamically adjust and stop video playback. In the p5 reference library there is an entire document on altering playback speed on videos. I implemented the test code from the document and found that it worked exactly how I needed it.

My next test was to see if the code would work if I used two videos on the screen to play. I used the same code as before, but loaded two videos and duplicated the speed control buttons for the second video. The videos were able to load simultaneously, and be controlled separately.

Hooking up the sensors

The following step was to hook up the device to my Arduino, and as well to use the p5 Serial Control application to connect the input from Arduino to the output of p5.js sketch.

I followed along with the Adafruit Flex, Force & Load hookup guide. One interesting point in the connection diagram was that the FF&L sensor's sensitivity varied depending on how much resistance was in the circuit. The Arduino sketch given in the documentation included calculations that depended on the resistance the assembler had chosen, and the documentation noted that the higher the resistance, the more sensitive and specific the feedback from the sensor would be.

The documentation used a 3.3k Ohm resistor. I did not have one in my kit, but the document said you could create a parallel circuit using three to four 330 Ohm resistors to get a similar output. To get the sensor working, I decided to connect this circuit using their recommendation. I noted that if I had time I would pick up a resistor at, or as close as possible to, 3.3k Ohms, to save myself the soldering of the parallel circuits and to save space on my protoboard.

I connected four 330 Ohms resistors to my circuit and uploaded the sample code from the documentation.

Figure 4: The FF&L sensor connected to the Arduino, using a parallel circuit of four 330 Ohm resistors.

Connecting the FF&L Sensor to p5.js

I modified the sample code to receive input from two FF&L sensors. When either sensor was triggered, its value would be put into JSON and sent to my p5 file through the serial controller. If there was no value from a sensor, it would return 0.
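On the p5 side, the incoming two-sensor message could be normalized roughly like this (field names are assumptions; a missing or malformed reading falls back to 0, as described above):

```javascript
// Parses one serial line of sensor JSON into { left, right } grams,
// defaulting to 0 when a field is missing or the line is malformed.
function readSensors(raw) {
  let data;
  try {
    data = JSON.parse(raw);
  } catch (e) {
    return { left: 0, right: 0 }; // ignore garbled serial lines
  }
  return {
    left: Number(data.left) || 0,
    right: Number(data.right) || 0,
  };
}
```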

I did not map the values right away, as I was unsure of the maximum and minimum values I could receive from the sensor. I was also unsure how to evaluate the differing values from the sensors and correlate them to playback.

The initial test, once the FF&L sensors were connected to my sketch.js file, was to receive the data from both sensors and compare the values. The program would take whichever value was higher and play the video correlated with that sensor; the video with the lower value would pause. If both values were 0, neither video played. This test was a successful proof of concept.

Figure 5: Screenshot from my proof of concept footage found here on YouTube.

Workflow of Code  

My first step was to chart the interactions from the viewer and how I wanted the video to react to sensor feedback. I created a gold-plated diagram (one that included next steps and more advanced features) so that I could develop with the future in mind. Charting the interactions let me think about the patterns that would arise from the viewer's behaviour, and informed how to structure my code.

Figure 6: Interaction flow chart for Asynchronous Playback. A high-definition version of this diagram is available here, as WordPress could not upload the high-quality image.

I used this flow chart as a model to determine the thresholds for video playback. My proof of concept already had the base logic for the third case on the right of the diagram, "If only one sensor is activated". I started from this scaffolding to work out how to map a significant increase or decrease on one side.

The sensor data was returned in grams and maxed out at 25000 g. I mapped the sensor range 0-25000 to 0-10. I did not want the sensor to alter the feedback significantly if the viewer's weight was unevenly distributed by anything under 5 pounds.

I chose my thresholds to be 0-2, 3-7, and 8-10 to determine how significant the weight change was between the two sensors.
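The mapping and thresholds above can be sketched as follows (the band names are assumptions; only the 0-25000 → 0-10 scaling and the 0-2 / 3-7 / 8-10 bands come from the text):

```javascript
// Scales a reading in grams (0-25000) onto 0-10 and classifies how
// significant the weight is using the three threshold bands.
function weightBand(grams) {
  const scaled = Math.min(Math.max(grams, 0), 25000) / 2500; // 0..10
  if (scaled <= 2) return 'slight';
  if (scaled <= 7) return 'moderate';
  return 'significant';
}
```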

I initially tested these thresholds with my thumbs, or by placing weight on both sensors on the table with my palms. This worked for the most part, but I wanted to test the sensors underneath material; at this point I was unsure whether they would register weight with material on top of them.

Coding Challenges

The biggest challenge I came across when implementing the code was an “Asynchronous Playback” issue. My code would execute loop and pause commands so quickly that sometimes a video would be told to pause and play in the same draw loop, stopping the playback of that specific video. I tried to avoid this by creating Booleans for each video that would switch to true if the video was playing, and false if it was not. This system worked very well about three out of five times the footage played; other times it took only a minute or two before the asynchronous playback issue occurred.
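The Boolean guard described above can be sketched as follows. This is a minimal illustration, not the project's actual code: `setPlaying` is a hypothetical helper, and `video` is assumed to expose `loop()` and `pause()` like a p5 MediaElement.

```javascript
// Only issue a play or pause command when the desired state differs
// from the recorded state, so a video is never told to both play and
// pause within the same draw loop.
function setPlaying(video, state, shouldPlay) {
  if (shouldPlay && !state.isPlaying) {
    video.loop();
    state.isPlaying = true;
  } else if (!shouldPlay && state.isPlaying) {
    video.pause();
    state.isPlaying = false;
  }
  return state.isPlaying;
}
```

Calling `setPlaying` repeatedly with the same desired state is then a no-op, which is the point of the guard.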

My backup option was reverting to my proof of concept and having only one video play at a time. I chose not to implement this because I thought it was an interesting experience to see the two videos play simultaneously and slowly fall out of sync. The proof of concept provided a much more physical experience, causing the viewer to constantly sway back and forth between the two pads, but I wanted to at least demonstrate the causation of standing on the mat starting both videos for the viewer.

Implementing the different speeds was effective, but because of the asynchronous playback issue it was at times hard for both videos to play at the same time. As a result, one video more commonly played at twice the speed rather than subtly ahead.

Another challenge I encountered was loading the videos: I needed to use an SFTP client to load my files because Chrome does not allow videos to play automatically when loaded locally. I followed the “how to start a server on your machine locally” tutorial from The Coding Train and used the Python command “python -m SimpleHTTPServer”. This saved time and allowed for a less clunky workflow than having to load files onto my SFTP client.

“Gold-Plating” Coding Challenge

I tried to implement the step in the workflow where, once the video has been still for a certain amount of time, the code checks whether the videos are synced, finds the video that is behind, and plays it until they are synced.

I attempted this by finding the timecode of each video using the JavaScript property currentTime. If the videos did not have an equal current time, the code would find the one that was behind and play it until the current times matched. This was difficult to implement because of how the code executed: the current time resolves to a second, and the loop would more often than not miss the exact second at which the values matched. I attempted to match the videos within a couple of seconds instead, but this did not work well. The videos did not match perfectly, and they aligned only infrequently.
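The tolerance-based comparison attempted above can be sketched as a pure function. `resyncAction` is a hypothetical name; in the sketch its result would decide which video keeps playing while the other pauses.

```javascript
// Compare the two videos' currentTime values within a tolerance
// (in seconds) instead of requiring an exact match, and report which
// video should keep playing to catch up.
function resyncAction(timeA, timeB, tolerance) {
  if (Math.abs(timeA - timeB) <= tolerance) return 'synced';
  return timeA < timeB ? 'playA' : 'playB';
}
```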

Another gold-plated feature I attempted was having the speed increase gradually. I implemented it using a for loop that would count up or down until it reached the intended speed. This did not work as gradually as I wanted, nor did it have a noticeable effect on the interaction. It would have been more noticeable if I had included delays within the for loop, but I did not want to interfere with the consistent input coming from the microcontroller. After spending some time troubleshooting this feature, I decided not to include it in this version.
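A for loop is not gradual here because it runs to completion within a single draw() call, so the intermediate speeds are never rendered. A per-frame alternative, sketched below with the hypothetical helper `stepSpeed`, would nudge the speed by a small step on each draw() call instead:

```javascript
// Move the current speed one small step toward the target speed.
// Called once per frame, this ramps gradually instead of jumping.
function stepSpeed(current, target, step) {
  if (Math.abs(target - current) <= step) return target;
  return current + Math.sign(target - current) * step;
}
```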

Hardware assembly

Parts Overview

Parts list overall:

  • Proto-board
  • Female headers
  • Flex, Force, & Load Sensor x 2
  • 3.9k Ohm resistor x 2
  • Arduino Micro
  • Wire
  • Microfoam
  • Paper
  • Tape
  • Plastic floor covering
  • Felt


I had a half-size proto-board that I had purchased for another side project, so instead of buying new materials I decided to use it. I also did not want the casing for my hardware to be large and overwhelming, distracting from the visual of the mat. Because the board was smaller, I did not have space for the parallel circuit of 330 Ohm resistors for each sensor, so I purchased two 3.9k Ohm resistors as replacements to save space. I tested both sensors with these resistors and found that they provided as much accuracy as the 330 Ohm ones.
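For reference, the wiring behind a force sensor and a fixed resistor is a voltage divider: the two resistances split the supply voltage, and the Arduino reads the voltage across the fixed resistor. This is a hedged sketch of that math only; the values are illustrative, not measured from the actual circuit.

```javascript
// Voltage divider: output voltage across the fixed resistor when a
// sensor of sensorOhms and a fixed resistor of fixedOhms divide the
// supply voltage.
function dividerVoltage(supply, sensorOhms, fixedOhms) {
  return supply * (fixedOhms / (sensorOhms + fixedOhms));
}
```

With a 5V supply and equal resistances, the reading sits at mid-scale; as the sensor's resistance drops under load, the reading rises toward the supply voltage.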


Figure 7: Close up of soldered proto-board



Figure 8: Full view of soldered proto-board with both FF&L sensors



Figure 9: Fritzing diagram for reference when soldering



I wanted to use a material that gave the viewers some sort of haptic feedback. I went to Home Hardware to look at ready-made materials such as the mats they had in stock. The only mats available were standard outdoor shoe mats, which were rather flat and would not give tactile feedback as the viewer stood on them. Home Hardware did have small sheets of one-and-a-half-inch-thick memory foam in stock, so I bought some to test with my sensors.


Figure 10: Testing the memory foam with the sensors, the creation of my “Minimum Viable Product” (MVP) (Footage of this test here)

The memory foam allowed the viewer’s foot impression to stay in the material as they switched their standing position. I did not like the appearance of plain memory foam, so I searched Chinatown for a clear mat to go on top of it. I did not want the common carpet-covered black door mat most often found in hardware stores; I wanted the imagery to be related to a mat, but separated enough that the user did not simply register the object as a door mat. I found by-the-foot clear covering of the kind often used to top carpets and bought enough to wrap around the pieces of memory foam.

Having the memory foam allowed me to produce a “Minimum Viable Product” (MVP): the sensors picked up pressure beneath the memory foam mat, which was a success.

Continued Assembly of Mat

I placed the plastic on top of the memory foam but found it aesthetically unpleasing: the plastic was entirely clear, the memory foam was in two pieces, and I did not like the look of the seam beneath the plastic. A classmate offered me felt from their previous project, which I tested by wrapping it around the memory foam.

I wanted a signifier that the user was supposed to stand on the mat. I felt that the mat was symbolic enough, but experimented with cutting out footprints and placing them on top of the felt. I really liked the contrast of the blue and the white aesthetically. To attach the sensors onto the back, I gently taped them onto a piece of paper, and then attached the paper to the back of the mat.



Figure 11: The plastic covering and the white felt covering



Figure 12: The plastic and felt covering over the memory foam, with cut out footprints



Figure 13: The completed mat without the sensors attached



Figure 14: The paper with the sensors attached


For the box, I found a simple cardboard box and made a small hole in it with scissors to allow the Arduino wire to pass through. The wires for the sensors were quite thin, so I did not need to make incisions for them; the box could lie directly on top of them. I covered the box in the same white felt to match the mat. I did not cover it in the plastic mat covering because I did not want to give the signifier that the user could step on it.


Figure 15: The mat assembled with the casing for the proto-board

Choice of film

As I was testing video playback, I used an old clip from my own personal project documentation. The video worked well because it had lots of movement, and it was short so it would load quickly for iterative coding. I duplicated the video to mock up the look of two screens.


Figure 16: The demo footage loaded twice onto the page to test out the code

In my original idea I had envisioned a video split into two halves, and since I had my MVP I decided to explore this option. Because I liked the movement and the immediacy of action in the demo clip, I wanted something with those same qualities. I thought a nature documentary might be an interesting combination of movement and imagery. I also thought it might be interesting to have a longer video rather than a short looping one like my demo clip: the length would make it very unlikely for a viewer to stumble upon the same part of the clip on a return visit to the gallery, since the video is not in constant auto-play.

I found some free stock footage videos of waves and other nature, but all of them were too short. I decided to go onto the National Film Board website to browse through open source films. I came across a familiar one from my past work at a canoeing camp called “Path of the Paddle: Solo Whitewater” by Bill Mason. I downloaded the film, and used Adobe Premiere Pro to slice the video in half. For the purpose of loading and the circumstance of having a shorter critique, I shortened the film to only twenty-five minutes rather than the full fifty-five minutes.

Slicing the video in half was trickier than anticipated due to exporting. The film did not export at a high enough resolution, but I decided to keep it because the imagery was still interesting to interact with.


Figure 17: the left side of the sliced footage from “Path of the Paddle: Solo Whitewater”


Figure 18: Full screenshot of “Path of the Paddle: Solo Whitewater”



Figure 19: Viewer interacting with the video, a couple minutes in with the videos out of sync


Figure 20: Mat in front of the projected videos

Video of piece in action here.

I presented the project using a projector, placing the mat on the floor in front of the wall. I disclosed to the audience that the code was a bit “buggy” due to the asynchronous playback errors from Chrome. The two videos played simultaneously for about twenty seconds before the error occurred. As stated before, the error only causes one video to pause, while the other video continues to respond to the sensor.

The response to the project was positive. People noted that it was interesting to create a piece that made people aware of their posture, since posture is not something consciously noted very often. Someone noted that if they came into a gallery, they would know exactly how to approach the piece because of the design and the symbols on the mat. It was also suggested that advancing the video by frames, rather than using the loop or pause functions, could prevent the asynchronous playback issue.

I refreshed the page where the videos were loaded in an attempt to get them to play longer than the initial attempt. People noted that even though one video would stop playing, it did not spoil the entire experience.



Figure 21: Viewer changing foot position on the mat to change the playback of the footage

I am very happy with the result of my project, even though the code has a few bugs concerning the playback of the videos. I think overall it conveyed a new way of experiencing video art through reflecting posture and requiring physical interaction from the viewer to play.

I am surprised by the artifact of the mat, because I had no immediate visual in mind for how it should look or operate when I first designed this project. I enjoyed letting the materials available in the stores inform my decisions rather than setting out with a preconceived notion.

I do not think p5.js was the best platform for this idea. If I were to continue pursuing it, I would use software like Max/MSP and Jitter to control the video speed, since controlling video from JavaScript means spending too much effort avoiding asynchronous playback. The p5 Serial Controller application was a bit buggy as well: I had to keep a very “clean” environment for the code to execute, closing all of my files and the p5 Serial Controller every time I wanted to update my code. This was cumbersome as part of my workflow, but if I did not do it, the p5 Serial Controller would start to lag drastically with the data being communicated between my Arduino Micro and my p5 sketch file.

As well, I would not implement the “gold-plated” features if I were to continue developing this project. I do not think having the video speed ramp up or down gradually adds anything to the overall experience; it is interesting to see the speed jump suddenly, which causes a more sudden awareness in the viewer. That experience can make the viewer question whether the video is supposed to change speed all of a sudden, or whether their own actions caused it. I am unsure whether I would add the feature where the videos slowly return to a synced position. I think there is something engaging about the viewer coming in on a screen that is already out of sync, and it could be confusing if a viewer arrived while the videos were in the process of resyncing.

Overall, I am proud of the execution of this project and am happy with the reception. I would definitely continue on this project if time allows.

Related Articles

I took primary inspiration from the video art piece “24 Hour Psycho” by Douglas Gordon. The piece changes the way the viewer engages with video art by altering the duration of a well-known film: “Psycho” is slowed down to last twenty-four hours, so that each second advances roughly two frames rather than the standard twenty-four frames per second. It would be nearly impossible for a viewer to watch the entire film in one sitting. I studied this piece in my undergraduate degree as a pivotal artwork in the dialogue about the relationship between time and video art. “24 Hour Psycho” also comments on the physicality of film: by slowing the film down, the audience becomes aware of its underlying static qualities as the frames are presented almost individually.

My piece relates to “24 Hour Psycho” by altering time and the interaction with the video. The user manipulates the video’s time through their interaction, creating a new experience of it; the video is, in a way, brought into the physical world through the control of the viewer’s body. Like “24 Hour Psycho”, “Asynchronous Playback” would be very difficult to watch in one viewing. The way the sensors are programmed means the user would need to start with equal pressure on both sensors to begin playback at exactly the same time, and the act of stepping on the mat makes this nearly impossible. The interaction design makes this piece difficult to engage with in one sitting, unlike many video art pieces.


Force Sensitive Resistor HookUp Guide. Resourced from:

HTML Audio/Video DOM currentTime Property. Resourced from:

Lee, Nathan. 2006. The Week Ahead: June 11 – June 17. FILM. New York Times: New York, USA. Resourced from:

Mason, Bill. 1977. Path of the Paddle: Solo Whitewater. National Film Board of Canada. Resourced from:

Parrish, Allison. 2018. Working with video. Resourced from:

Reference: .speed(). Resourced from: 


The Pun-nity Mirror (for Hijabis)

EXPERIMENT 3: This + That

by Carisa Antariksa



The Pun-nity Mirror is a prototype of a conceptual vanity mirror that responds to existing beauty standards surrounding the hijab within the online hijab community. It turns self-deprecating phrases, such as “I look like an egg with the hijab on” or “that’s a huge tent,” into comedic and motivational comments as the user looks into the mirror. These pun-pliments (pun-like compliments) are activated on the website/screen as the user touches a specific part of the mirror.


This prototype mirror is a commentary on how vanity has become a significant factor in influencing a Muslim woman’s intentions to wear the hijab. By placing comedy into the mix, I wanted to create an experiment that tackles the issue through a light-hearted angle.

Github Link:


Ideation Process 

I went into this project with Kate’s words in mind (“Have fun with it!”) and began by testing all the examples when I had the chance. The concept came later in the process, as I first wanted to see the possibilities of all the input and output sensors and how I could apply them to a list of ideas. Eventually I narrowed my list down to the inputs and outputs I was most interested in exploring: the capacitive sensor, the electret microphone, and infrared (IR) LEDs. I took a week to explore these elements, but in the time given I was only able to fully explore capacitive sensing, both with the sensor itself and by creating one with the Arduino. I discovered Capacitive Sensing for Dummies and explored ways I could apply it to my personal project.

Furthermore, in terms of the idea itself, I was heavily interested in visualization, especially through Perlin noise. I studied how it worked and thought of ways to apply it to text, as demonstrated by Kiel Mutschelknaus. Would I be able to apply it to existing vectors I had made in previous works? Or perhaps to other shapes? I kept these questions in mind as I brainstormed. Along with this exploration into algorithms within p5, I also thought of more grounded ideas that concerned my identity, as representation was something I was personally passionate about.


In the end, I chose to do a micro-project about the hijab, which I thought suited the opportunity. There were not many existing precedents that explored this religious symbol in a “relatable” context, so to speak, so I wanted to start by creating one. I approached the idea by first browsing things I would encounter on Twitter and Instagram regarding comments and opinions on the hijab within the community itself. Modest hijab fashion is a continuously booming industry, giving rise to many influencers from different backgrounds and nationalities. This in turn forms many narratives among Muslims in terms of how the hijab is represented and the impressions it gives to the general public. More often than not there is a serious undertone to these opinions, whether it is judgement or what some define as freedom of expression. An example is the issue that arose around one of many hijabi influencers, Dina Torkia, over responses to how she now wears the hijab (link to article here.)

Rather than taking the heavier side of this subject, I wanted to introduce a perspective I was more comfortable with. I often discuss different hijab styles with my close friends, and how some styles look different on us depending on our face or head shape. We throw around words such as ‘egg’, ‘tent’ or even ‘looking “sick”‘ at each other and laugh about it. I was also reminded of some Muslim women’s reasoning behind wearing the hijab, ranging from free will to family influence, and how it can still be heavily affected by the vanity aspect regardless. This thought led me to search whether others who wear the headscarf think and feel the same way.


Screencap of “hijab egg” search results on Twitter

The results were amusing, to say the least, which led me to sketch my final idea. I thought of common puns that could go along with these terms, such as “eggscellent” and “ten-tastic.” Since this was a prototype, I narrowed it down to these two phrases as they are used in conversation, whether comically or as teasing comments. I personally added one more, the word “ehh,” since it is one I often use to tell myself and my friends to be okay with “today’s look.”


During this process, I also received advice from Veda Adnani to use a mirror to suit the purpose. It was a more practical way to implement the project than using the actual scarf I owned and creating a flimsy wearable, and it was also a more compact way to present the intended concept. From this, the idea of a vanity mirror + puns came to fruition.

Programming & Presenting the Concept

Website (p5)

The illustrations for the website were something I had done in a previous project and altered to fit the concept. The vector graphics came from a visual style project I had created for a visual novel game.


Original Vector Illustration


Altered Vector Illustration & Tent, Egg, Leaf base for the Hijab shapes

These were then applied in the final renderings of the screens shown earlier in this post. The final vector images were placed under if statements and would change when the wire was connected to ground (a reading of 0).


I consulted peers who had used the capacitive sensor in the previous experiment to help me with the Arduino aspect. I was warned that the sensor poses many challenges, as it is quite sensitive once connected to power. I started with the circuit I found in Capacitive Sensing for Dummies and later adapted it to the capacitive pull-up resistor code from Kate’s example in class. The circuit evolution is as follows:


Initial Circuit


Final Circuit

The first circuit pulled values to the serial monitor as it touched conductive objects, while the final circuit was a simple 1–0 pull-up. I included an LED as an indicator: if it lit up, the circuit was working. This element performed perfectly. I decided to employ the final circuit for this prototype; if time allowed later, there would be a better chance to use the actual capacitive sensor element to allow activation from a finger touch.
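The pull-up logic above can be sketched in a few lines. With a pull-up resistor the pin reads 1 at rest and drops to 0 when the contact is connected to ground, and the p5 sketch swaps images on a 0. The function and image names here are hypothetical stand-ins for the actual sketch's variables.

```javascript
// Map a pull-up pin reading (1 at rest, 0 when touched/grounded)
// to the image that should be displayed.
function imageForReading(reading) {
  return reading === 0 ? 'punImage' : 'defaultImage';
}
```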

Arduino to p5 (and the problems.)

Once the LED lit up, I was sure the process of linking it to my p5 website would be no problem, as it only required connecting the serial port. However, I underestimated this: my images did not load when I turned it on. I thought this was a laptop problem and switched over to the Windows laptop. This made no difference, as it was most probably a web server problem, and I continued to test with the original circles in the code examples.


Even after testing it on multiple local web servers and getting help from my peers Olivia and Omid, the images still did not load. After some debugging, the images finally showed up, thanks to a suggestion from Omid: instead of loading them in function preload(), he suggested I use the createImg() function. This ended up solving my agony from the night before.


function preload() to createImg

However, time constraints meant I could not fully change the code before the presentation on Friday. I created a demo video to present the concept and walked through what should happen once the code worked.


The feedback was overall positive regardless of the mishap, and there were many suggestions on how to push the project forward, from using an API to gather information from certain key terms on social media, to adding elements to the website that provide better context for the concept of vanity and the hijab. Kate also mentioned that linking comedy to fashion is quite rare, and this project offered a semblance of it.


Completing the Experiment

I was determined to make the code work. I debugged the .js file, rearranged the elements, and reconstructed the connection of the Arduino to the mirror. After spending time perfecting the placement of the elements, I opened the serial connection to the p5 website, and eventually things worked as planned.

Future Iterations

This was my first time planning and constructing hardware for the Arduino by myself and I came to some realizations:

  • The mirror lent to me by Veda was stainless steel and would easily conduct current. When constructing the wiring onto the mirror, I placed copper tape around the top part, where the indicators for where to clip or touch are. Once I connected the circuit to power and “touched” one word, all of them responded in p5. I then placed wood under the pins and taped them in place, and afterwards layered duct tape on top to prevent all of the images from changing at once in the p5 sketch.


Evolution of mirror connection

  • Using the capacitive sensor would have created a more viable product, as the user could actually touch the part of the mirror instead of using alligator clips.

This project is only the start and has the potential to evolve into a series of micro-experiments that explore other aspects of communities as specific as hijab fashion-influencer culture. Although this prototype only scratched the surface, there are further aspects of vanity that can be explored. Data and commentary on what can be considered the threshold of intentions in wearing the hijab could be collected and expressed in more creative and engaging ways than merely creating and posting on a social media profile.


Behance. “SPACE TYPE GENERATOR: _v.Field.” Behance,

Instructables. “Capacitive Sensing for Dummies.”, Instructables, 13 Oct. 2017,

“Language Settings.” p5.Js | Home,


RCP, Sean T at. “Hijab Egg – Twitter Search.” Twitter, 31 Mar. 2018,

The National. “Influencer Dina Torkia Says Online Hijabi Community Is Becoming like a Toxic Cult.” The National, The National, 5 Nov. 2018,

Artful Protest


By Maria Yala

Creation & Computation Experiment 3 / This & That


Project Description

Artful Protest is an interactive art installation, using p5 and Arduino over a serial connection, that invites participants to use art as a form of protest. It was inspired by this quote by Carrie Fisher:

“Take your broken heart, make it into art.”

The installation is a form of peaceful protest, designed around the idea that the freedoms we enjoy every day are not guaranteed and that we have to fight constantly to keep them. Participants are presented with a screen projecting current problems in the world, e.g. Black people being brutally murdered by police, families and children being separated at the US border, or innocent people killed in churches, synagogues, and schools. There is also a cardboard protest sign that controls the images being projected. If the blank physical sign is picked up, i.e. a person begins protesting, the projection changes to a digital protest sign animation. The digital protest sign changes depending on how high or low the physical cardboard sign is held. If the cardboard sign is put down, the initial screen is projected again, inviting participants to protest once more.


Process Journal & Reflection

Initially, I wanted to recreate a voting booth experience to communicate the idea that people need to get active and vote to create or push for local and global change. However, upon further reflection and consultation with a few members of my cohort, I realized that I was overthinking the idea. I decided to keep it simple and focus on the experience to be created, not on the code or technology. I looked at the experience on a wide scale before narrowing down how I would execute it. I knew I wanted to keep the political angle, and I wanted something visual and interactive, where a viewer would become a participant. Thinking of the Women’s March and protest signs, I came up with the idea of a peaceful protest that used art.

[ Protest signs ] + [ Making one’s voice heard ] + [ Peaceful protest ] = Artful Protest


Once I had a general direction, I began looking for inspiration. Below are some of the images and work from other artists that inspired Artful Protest.


‘Imperfect Health: The Medicalization of Architecture’ Exhibition


type / dynamics installation at the stedelijk museum amsterdam


Interactive Art Installation by Akansha Gupta

I was inspired by the idea of hidden information being revealed and text being projected on the screen in the art pieces and installations above. I took elements of these ideas and used them in Artful Protest, where protest signs are hidden and then revealed depending on how high the physical protest sign is held.

Arduino Hardware & Code

For this project, a simple Arduino setup was used to collect information about how high the cardboard sign was held. I used an HC-SR04 ultrasonic sensor to collect the proximity data. I found that this sensor was very finicky and would return strange readings. To fix this, I used a suggestion found in Kate Hartman’s code, found here: using the NewPing library from the Arduino resource library. The library helps receive cleaner values from the proximity sensor by allowing a maximum reading to be set. I set my maximum to 200, so the range of values returned was 0–200cm. The library also provides a function that returns the distance calculated in cm. Artful Protest uses a serial connection where proximity data is received from the Arduino as input and sent over to p5.


Serial Connection

I chose to send the distance data over to p5 using JSON (JavaScript Object Notation). I had some problems using pretty print when sending JSON over serial: on the p5 side I would get an error about unexpected whitespace. I was able to resolve this by removing the pretty print function and using printTo instead, i.e. p5Send.printTo(Serial);
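On the receiving side, each serial line can be parsed into a distance value with standard JSON.parse. This is a hedged sketch only: `parseDistance` is a hypothetical helper, and the key name `"distance"` is an assumption for illustration rather than the project's actual field name.

```javascript
// Parse one incoming serial line of JSON and extract the distance,
// returning null for partial or malformed lines so the sketch can
// simply skip them.
function parseDistance(line) {
  try {
    const data = JSON.parse(line);
    return typeof data.distance === 'number' ? data.distance : null;
  } catch (e) {
    return null; // ignore garbled or incomplete serial data
  }
}
```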

P5 Animations

On the p5 receiving side, the proximity data is used to determine what mode the installation is in. There are two modes: watching and protesting. Using the received data and a global boolean variable ‘isProtesting’, which is initialized as false, I determine which mode to set. If the distance is less than or equal to 30cm, the mode is ‘watching’ and ‘isProtesting’ is set to false. If the distance is greater than 30cm, the mode is ‘protesting’ and ‘isProtesting’ is set to true.

In the draw loop, I then use the ‘isProtesting’ boolean to determine which animation to “play”. I use a global variable ‘animation’ which is initialized to 0. Using the sensor value from Arduino, I determine a new value for animation based on the height of the digital protest sign.
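Combining the mode check with the per-animation distances described in the sections that follow (30cm, 80cm, 100cm), the selection can be sketched as one function. `selectAnimation` is a hypothetical helper name summarizing the logic, not the sketch's actual code.

```javascript
// Pick the animation index from the sign's height (distance in cm):
// at or below 30cm the installation is 'watching' (initial screen),
// above that the thresholds choose between the three protest signs.
function selectAnimation(distanceCm) {
  if (distanceCm <= 30) return 0; // watching: initial screen
  if (distanceCm < 80) return 1;  // Little Sign, Yuge Feminist
  if (distanceCm < 100) return 2; // Dump Trump
  return 3;                       // Love is Love
}
```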


Sketches and initial idea designs

Animation 0 – Initial Screen.

The screen below is drawn when the variable animation = 0. It is the initial screen drawn when the mode is ‘watching’. The screen is drawn using text that is confined to the width and height of the screen.


Animation 1 – Little Sign, Yuge Feminist Screen

This screen is drawn when the distance recorded is less than 80cm. The animated part, “Yuge Feminist”, is drawn by changing the color of the text from black to white and back, creating the illusion that the text appears out of and disappears into the black background. The colors are generated using a sine function to produce values within the range of white to black.


Animation 2 – Dump Trump Screen

This screen is drawn when the distance recorded is less than 100cm. The animation is created by toggling the drawing of two string variables, “Dump” and “Trump”. To create the toggling effect, I use a counter variable dumpCnt, which is incremented by 1 every time animation 2 is drawn. I then check whether dumpCnt is odd or even using the modulo function: if dumpCnt is even, “Dump” is drawn; if it is odd, “Trump” is drawn. When I got this animation working, I realized that the frame rate was too fast for the effect I wanted, so in the setup function I set the frameRate to 2. However, this ended up slowing down my other animations, so I resolved it by setting a different frame rate for each animation within the draw function.
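The toggle above reduces to a modulo check on the counter. A minimal sketch, with `wordForFrame` as a hypothetical helper name:

```javascript
// dumpCnt is incremented once per draw of animation 2; the modulo
// check alternates which word appears on screen.
function wordForFrame(dumpCnt) {
  return dumpCnt % 2 === 0 ? 'Dump' : 'Trump';
}
```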


Animation 3 – Love is Love Screen

This screen is drawn when the distance recorded is 100cm or greater. It shows an animation of an explosion of balls forming the word ‘LOVE’. This is done by tracing the font used to draw ‘LOVE’ on the screen. The tracing is then turned into points, using the x and y co-ordinates. The animation was based on The Coding Train Steering Challenge, which can be found here. Each point is initially drawn at a randomly generated x,y co-ordinate, then moves to its target position, which is assigned according to the tracing of the original text. This is what creates the animation of the points exploding on the screen and moving to a set location. The points are created as objects of the class Vehicle. They are held in an array, which is iterated over to draw them on the screen. Each vehicle has a target x,y co-ordinate, hue (color), starting position, velocity, acceleration, radius, max speed and max force.
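
The per-point steering step can be sketched as a plain function (a simplified version of the Coding Train technique referenced above; the name `steerToward` and the flat vector objects are illustrative):

```javascript
// Illustrative: one steering step for a point seeking its target.
// Desired velocity points straight at the target at maxSpeed; the
// steering force is (desired - current velocity), capped at maxForce.
function steerToward(pos, vel, target, maxSpeed, maxForce) {
  let dx = target.x - pos.x;
  let dy = target.y - pos.y;
  const d = Math.hypot(dx, dy) || 1;   // avoid division by zero
  dx = (dx / d) * maxSpeed;            // desired velocity x
  dy = (dy / d) * maxSpeed;            // desired velocity y
  let sx = dx - vel.x;
  let sy = dy - vel.y;
  const m = Math.hypot(sx, sy);
  if (m > maxForce) {                  // limit the steering force
    sx = (sx / m) * maxForce;
    sy = (sy / m) * maxForce;
  }
  return { x: sx, y: sy };             // applied as acceleration
}
```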

To assign colors to each point / vehicle, I set the color mode to HSL and then while iterating over the points when vehicles are getting targets assigned, I assign a color and iterate over a hue variable, causing the rainbow effects of the points.


Designing the protest sign

The protest sign was created using a sheet of cardboard material. I chose cardboard as it is a material that is used a lot during protest marches, and cardboard signs featured heavily during the 2017 Women’s March which was my main source of inspiration. I chose not to add a pole as a handle for the sign because I felt that it restricted the interactions with how a person would hold the sign. I created a small pocket at the back of the cardboard to hold my breadboard and sensor.

During testing, I realized that the cardboard was quite heavy, and when lifting it up higher it got significantly harder to hold it up. I wasn’t too bothered by this as I felt that it added another layer to the experience. During presentation, I got a suggestion to maybe tie this into the types of protest signs displayed i.e. when talking about heavy issues, show these protest signs when the physical protest sign is held up highest.


The physical cardboard protest sign with pocket at the back for the breadboard

Future expansions

I would like to add more protest signs and possibly include an orientation sensor, to detect other ways in which the sign is being held. I also got the suggestion during critique that I could explore having multiple people interact and having that affect the protest signs shown, allowing multiple people to join in on the interaction. I think this is an avenue I could explore further, perhaps adding more protest materials to the installation rather than just one protest sign, and possibly an additional screen.

What I learned

I particularly enjoyed this experiment because it helped me simplify my work and focus on creating meaningful, organic interactions rather than showing off what a piece of tech can do. The quote below summarizes my findings 🙂

The artist should never contemplate making a work of art that is about something; a successful work of art can only ever be about nothing. The artist’s complete negation of intent thus creating a reflective surface into which the critic, curator or collector can gaze and see only himself – Sol LeWitt, Paragraphs on Conceptual Art, 1967

References & Resources

Artful Protest Code on Github

Code – Resources

Creative Coding – Text & Type

Rainbow Paintbrush in p5

The Coding Train – Steering Code Challenge

CC18 Digital Futures Class Github

Images – Resources


“Bird” a toy to help understand machine/animal interaction.

Project By Amreen Ashraf

Project Github:







I started out with the project wanting to work with sound. My first two projects were based around the idea of sound and using code and electronics to control and manipulate it. I began by looking at my code for experiment 1, where I had sketched out a circle that moved to music with the help of Daniel Shiffman’s Coding Train videos. As somebody who is new to coding, the one thing I was sure of was the idea of using code to create audiovisual and sound experiments, and my first round of brainstorming and ideation looked at building on that concept further. Through the week, after being introduced to various other examples in class and trying out a few of them, I wanted to explore the idea of “connection” between humans and their machines by using a virtual pet with a physical companion, almost like 90’s toys such as the Tamagotchi. This idea of connecting with a virtual pet slowly evolved into not having the human in the mix at all. As technology and tech products get smarter at recognizing our natural environment, could technology connect to this environment just as humans can? Could technology come to care for other sentient beings and look after them the way humans have? These were some of the larger questions at the back of my mind during this project.


Project description

“Bird” is a toy operated by a proximity sensor, which senses an animal’s body and uses this to set off a toy, helping a pet stay engaged while its human is physically away from home. With the rise of “vertical” living and urbanization, most people live in small spaces with their animal friends and companions. Unlike dogs, who need to be taken on constant walks, cats usually stay indoors, which sometimes causes them to be lazy and inactive. There are a myriad of cat toys and products aimed at keeping indoor cats active and healthy; however, most of these toys require that humans be present to play with their companions.


Final Circuit Board

For the final circuit board I used a proximity sensor and a servo connected to the Arduino Micro.



The tech


Arduino Micro

Servo Motor

Proximity Sensor



The Process Journal

First round of brainstorming:

30-31st October

I first started with the idea of working on a project that used sound as an example. My first brainstorm session was built around that idea:


I tried out a few examples from the class GitHub provided by Kate and Nick. One of the examples used two potentiometers, and another increased the brightness of an LED. I played around with combining those two examples into one audiovisual experiment. As I don’t have a very strong background in code, throughout this course I’ve used the examples provided in class as a basis to build upon.

Round 2

2-4th November

During the weekend a week before the project was due, not completely satisfied with the idea, I went back to the drawing board. Growing up I always had animal companions; currently I have 5 beautiful cats who live in Sri Lanka, my hometown. As a masters student who is away from them all the time, it is hard to commit to taking care of a pet. This sparked the idea of using technology to comfort the part of me that wished to have an animal companion. I came up with the idea of some sort of virtual pet, which could comfort and respond just like an actual pet.


I quickly drew out a sketch with an initial concept and idea. The idea was to use light sensors as input to connect to a cat sketch in p5.js. The light sensors would be connected to a physical piece of hardware, maybe in the shape of a cat, which when touched or petted would produce a sound and start the vibration motors.


I also connected with the idea of a “cat rave”, just to give it an element of fun and silliness. The idea of technology being silly and fun appeals to me a lot. The “cat rave” was partly inspired by this video, whose soundtrack I actually used in my final project.

Round 3

5-7th November

I got working on the light sensors and had difficulties handling two sensor values at once. Even during Experiment 2, I realized I did not particularly enjoy working with the light sensor. Trying to get the Arduino serial monitor to read two values was not working out for me. On the Monday before the project was due, I approached Kate in class and she gave me a few pointers, such as using a Serial.print(",") line to separate the two readings I was getting from the light sensors. I gave it one more try, then decided to abandon the light sensor and go back to my favourite sensor, the proximity sensor. I felt that Experiment 2 had prepared me well to work with it.
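
With a comma printed between the two readings on the Arduino side, the receiving sketch can split each line back into numbers. This is an illustrative p5-side helper (the function name is mine, not from the project):

```javascript
// Illustrative: a serial line like "312,87" (two light-sensor
// readings separated by a comma on the Arduino side) split back
// into numbers on the p5 side.
function parseReadings(line) {
  return line.trim().split(',').map(Number);
}
```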

This is the point in my project where I had to go back to the drawing board. I realized that if I used the proximity sensor, I couldn’t use the vibration motors, and the first idea no longer made sense. This is where I took the human completely out of the idea. What if it was an idea which worked purely with the technology and the animal?

I used the examples from the class GitHub for the proximity sensor and decided to add the servo as the part that moves. My thought process at this time was to use the servo as some sort of toy. This is when I zeroed in on the idea that this creation would be specifically geared towards cats; the idea was linked to having a toy which mimics prey.



I used 200 cm as max distance for the proximity sensor and 30 cm to initialize and start the servo.
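
The threshold logic can be sketched as a plain function (names are illustrative; in the actual build this check lives in the Arduino sketch driving the servo):

```javascript
// Illustrative: with 200 cm as the sensor's maximum useful range,
// the servo toy only activates once the cat comes within 30 cm.
// A reading of 0 is treated as "no echo" and ignored.
function shouldActivateToy(distanceCm, maxRange = 200, trigger = 30) {
  return distanceCm > 0 && distanceCm <= maxRange && distanceCm <= trigger;
}
```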


Round 4

6-9th November

By the time I got the proximity sensor and servo connected, it was Wednesday morning and I had two days until the project was due. My idea for the p5.js part of the project up to this point was a simple sketch of a cat which would make the sound “meow”. I tried to sketch out the shape of a cat using an ellipse and two triangles. With time running out, I skipped this step and instead mapped an image to the sensor value. I used the image below, tied to the proximity sensor value, so that it would only be activated when the cat is close to the machine. This action of the cat approaching starts a dual action: the image appearing and the servo starting.



I used craft feathers and popsicle sticks to construct a movable object that would appeal to cats.



  1. I found it much more satisfying to work in a group
  2. I should have spent more time sketching out my idea in p5.js


"Kurt Box" by Erik De Luca

“Kurt Box” by Erik De Luca

A project I found inspiring was one I had presented for case study 2: “Kurt Box”, which was aimed at using electronics to interact with non-humans. “Kurt Box” uses a microphone to sense an animal approaching, and when the animal nears the object it plays a Nirvana song. This idea of using electronics as a way of extending the human/machine connection to non-human and other sentient beings is vastly appealing to me as a designer. Coming from that background, I usually aim to practice human-centered design, but recently, especially after learning more about electronics, I’ve been curious about using these devices to aid in understanding our surroundings and nature.



  1. Puckett, N., & Hartman, K. (n.d.). Retrieved November 12, 2018, from
  2. Deluca, E. (2017). “Kurt Box”. Retrieved November 12, 2017, from
  3. Image cool cat. (n.d.). Retrieved November 8, 2018, from
  4. Image kitten chasing butterfly. (n.d.). Retrieved November 12, 2018, from
  5. Official Meow Mix Song – Cats at a Rave! (2014, September 24). Retrieved November 8, 2018, from

Grow You A Jungle


Grow You A Jungle was created with the intention of bringing a little simple joy and life to the process of watering plants. Taking care of a lot of houseplants, you begin to think about the life and time of a plant: how the growth is hard to see sometimes, yet sometimes a rustling can be heard and the leaves are moving, growing, dropping. Seeing the significant growth of a plant can take time.

Being in the woods or a jungle you notice the crashing noise and the movement of all of the life around you, creating a cacophonous hum of living. Indoors you start to forget how alive everything truly is. I wanted this project to bring a bit of the lush movement of nature to the indoors.

Process Ideation

Throughout the semester I had wanted to do a project that involved plants, but hadn’t found a group project that it fit into. So I began to formulate an idea to involve a plant and the idea of time, and after using the orientation sensor, I realized that I wanted to examine the concept of movement and gestures. I am currently in a class called Experiences and Interfaces which has led me through a lot of thinking about movement and our interactions with the world through gestural action. I realized one night, while putting off watering many of my plants, how artificial this gesture of pouring water is. The houseplant market is booming, yet it is just a facsimile of the natural world.


I began to think about creating a small simple experience that would enhance the idea of a plant growing from being watered, eyes and ears engaged. Projection has always been of interest to me, for its use of scale and darkness and light. An image of a dark room coming alive with the sounds of nature slowly creeping up at the action of a watering can feeding a garden came to mind. I decided to move forward and create.

Process Video and Sound

The process of creating the final video for the installation came through much iteration and testing. I made the decision to create it using found footage with Creative Commons licensing, since the timeline was too short for me to film my plants with any significant changes tracked. I wanted the video to have a lush, ephemeral quality with lots of light and dark and movement. In my personal photography work I frequently create images using overlays and intense saturation, and I decided to use this same technique for the video. So I took to creating layers of plants growing in Adobe Premiere Pro, a process of testing and tweaking. I used several opacity masks in the end to get the look I wanted. Below are some other videos I created before settling on my final.

The sound portion of this project took huge cues from a website I found called Jungle Life, a user-controlled jungle sound player. I spent a bit of time in the jungles of Costa Rica a couple years ago and have very intense memories of sitting in the rainforest listening to the deafening sounds changing and moving around me. I wanted this to be the sound triggered by the watering can. So, as with the images, I found about 15 Creative Commons samples of jungle and forest sounds and took to creating a timeline of all of them in various levels of highs and lows.


Process Arduino

Initially when the project was announced I was a bit intimidated by the idea of creating the entire code and circuit by myself, but it was a much needed challenge and confidence booster in the end. I take very well to building on knowledge that I already have, so I decided to keep everything as tidy and simple as I could to tell the story I wanted. In class one day I set up the orientation sensor successfully and realized how versatile this feature could be. My final circuit ended up being this exact setup, taken from the orientation sensor tutorial in class. When setting up my board I ran into some problems and could not figure out why it wasn’t working. After 20 minutes of pulling my hair out and rewiring most of the board… I realized that one of the wires I had cut had split and wasn’t making a connection. A good lesson in checking the small stuff thoroughly.



Final Circuit


Process P5

Building the code was the most intimidating part in my mind. So, as with the board, I built on some of the code that had been provided by Nick and Kate. I began to slowly break down each line, what it meant and how it functioned. I did several Google searches to assist in writing this code, and as had been mentioned, there really is no search for “How to make a watering can trigger a video”. I honestly didn’t even find much for “How to make an orientation sensor trigger a video”. There were, however, many pages of documentation about using Processing to do this. It seemed like a big task to switch languages at that point, so I decided to proceed with p5.

My realization was that what I would have to write is an if/else statement. Which I did successfully, or so I thought. But it wasn’t triggering the video. After another couple hours of painful searching, I posed the question to my classmates. One noticed that I hadn’t been using the draw function. This had been intentional initially, as I just needed the video to play on the screen, but I hadn’t taken into account the action that would need to trigger and loop. Once I moved my toggleVid() if/else into the draw function, BOOM! It worked. This moment felt like I had won a medal. Something I keep learning every time I code is how tedious and time consuming it can be, and that the learning will never end. Persistence and variety in method is surely the key to this one.
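
The fix matters because p5’s draw() runs once per frame, so the if/else is re-evaluated continuously. A minimal sketch of that decision, extracted as a plain function (the 45-degree tilt threshold and the return values are assumptions for illustration, not the project’s actual numbers):

```javascript
// Illustrative: decide each frame whether the video should play,
// based on the orientation sensor's tilt reading. In the real
// sketch this if/else runs inside draw(), once per frame, which
// is what made the trigger finally work.
function videoState(tiltDegrees) {
  if (tiltDegrees > 45) {
    return 'play';   // watering gesture detected (assumed threshold)
  }
  return 'pause';
}
```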




I have been ruminating for a couple years on the Québécois NFB short film, “The Plant”, directed by Joyce Borenstein and Thomas Vamos. It is a beautiful film that delivers a timeline of obsession through the relationship of a man and a plant and was filmed in my old house in Montréal. Something that I kept remembering about this film was how the plant’s movements turn from joyous to vicious and wild in the time it would take to see a small amount of growth in a real live plant. Which lead me to think about how when you take a time lapse of a plant you can see exactly how wild and alive the growth really is.


Another project that lent some inspiration was the wonderful art group teamLab from Japan. They create incredible immersive experiences using digital technologies and huge real-life installations. They believe that the digital world and art can be used to create new relationships between people, and they achieve this with interactive work that responds to the user’s movements. I am interested in creating simple installations in my own work that will make people reflect on their place in the world and how they interact with it – the small, magical changes that can occur when you make an action or decision. One of their projects below uses projection in a tea ceremony to bring life to the process. Another overlays projected animals in a natural environment to make the user contemplate our place in the world and how we may be the top predator of the life cycle.



Final Thoughts

This project began simple and stayed simple, but I do not think that lessens its value and success. I am very happy with the outcome and am hoping to come back to this idea of triggering growth in the future. In the final presentation there were some wonderful comments on refining the way the sound and video react to movement; I am banking these for future iterations. Something else I wanted to explore was how scale could amplify the feelings this project evokes, possibly using a whole room filled with plants and projection mapping to sculpt the way the videos look and feel. The more we build technology into our daily lives, the more aware I become of our need for natural life. Exploring this idea further will make the world we live in more whole and will allow sustainable living ideology to flow into our making and ideas more freely.


  • “Adafruit BNO055 Absolute Orientation Sensor.” Memory Architectures | Memories of an Arduino | Adafruit Learning System,
  • “Free Forest Sound Effects.” Free Sound Effects and Royalty Free Sound Effects,
  • “Free Jungle Sound Effects.” Free Sound Effects and Royalty Free Sound Effects,
  • Ir, and Stéphane Pigeon. “The Sound of the Jungle, without the Leeches.” The Ultimate White Noise Generator • Design Your Own Color,
  • Koenig, Mike. “Birds In Forest Sounds | Effects | Sound Bites | Sound Clips from” Free Sound Clips,
  • Koenig, Mike. “Frogs In The Rainforest Sounds | Effects | Sound Bites | Sound Clips from” Free Sound Clips,
  • Koenig, Mike. “Rainforest Ambience Sounds | Effects | Sound Bites | Sound Clips from” Free Sound Clips,
  • teamLab. “Flowers Bloom in an Infinite Universe inside a Teacup.” TeamLab / チームラボ,
  • teamLab. “Living Things of Flowers, Symbiotic Lives in the Botanical Garden.” TeamLab / チームラボ,
  • “Timelapse Footage.” Openfootage,
  • Vamos, Thomas. “The Plant.” National Film Board of Canada, National Film Board of Canada, 1 Jan. 1983,
  • “Video.” p5.Js | VIdeo,

BLOCK BUSTER – Fight your way through the digital realm!

Block-Buster (A game using serial communication between p5 and Arduino)


Link to Github Repository –



This is an Arduino and HTML based game built on serial communication. Shoot enemies on screen by pointing at them with a physical gun outside the screen, giving the game a more tactile feel. Press the trigger on the gun to shoot enemies. Dodge enemy fire by actually moving away from the sensor on the computer, engaging yourself more in the game. Time your shots right! Welcome to Block Buster.

The game uses an Arduino Micro hooked to an ultrasonic sensor, which detects whether the gun is in front of the screen or not. The presence of the gun in front of the sensor determines if the player is dodging. The gun is also connected to the microcontroller and has a momentary switch built into it in the form of a trigger. Pressing the trigger fires at the enemy.




1. Conceptualization and Brainstorming

After the first brief, I decided to explore the idea of a music visualizer (specifically for 8-bit music). I researched a few ideas and followed a few projects that were done with 8-bit music.

Tinkering with that idea for a bit did not get me closer to the visualizer, but it paved the way to the base of my project. I decided to go for an 8-bit game with a twist: implementing a physical object and a more concrete reaction with a retro concept.

The game that I remember doing this beautifully, and one of my major inspirations for this project, would have to be Duck Hunt.


I was obsessed with the game as a kid. You would point your gun at the screen to shoot down ducks; a physical trigger press and the position of your gun determined your shot. That was pretty awesome for me as a child. This became my prime inspiration for Block-Buster.


2. Narrowing down (Art style and Implementation)

Next was art style and implementation (on both the Arduino and p5 sides). I looked at more 8-bit games for reference and settled on the old Tron game as my major inspiration for the art style; it fit perfectly into my story. The game narrative of protecting the digital realm comes again from Tron, the original Digimon series and numerous other digital-realm rescue games.


I decided not to use any external images or videos, and to use p5 to sketch the entire layout, adding a more authentic retro feel to the game and saving processing power by using just simple lines and shapes.

I tried to go for a 2D game at first but did not like the look, so I decided to go for a 3D style. I looked into the old Doom game and how it rendered simple lines in perspective to create a 3D feel.


3. Concept Art

Since I had my idea finalized in my head, I decided to make concept art for the game, jotting the game in my head down into its first physical mockup.



4. Mechanics and Dynamics of the game (Game design)

Next came designing the game so that it was an overall enjoyable experience. I could count on the interaction of dodging and shooting via a physical gun in front of the screen to be a pretty playful and fun interaction (more on that in the implementation part ahead). The next task was to fix the entities, mechanics and rules of the game.


The Game : 

  • Simple FPS game with a fixed perspective and a retro style
  • Mission – To defeat the corrupt digibits and save the digital realm.
  • A 1v1 showdown combat system where you dodge and try to shoot the enemy.

Simple mechanics : 

  • Shoot (trigger on the gun)
  • Dodge (move the gun away from screen)
  • Time your movements right.


5. Implementing the Idea (Execution)


Link to Github Repository –

From the p5 end, as I mentioned before, all the elements of the game were sketched using simple shapes in p5. This was particularly challenging, as the lines of code kept racking up and keeping track of every line became a little hard. You can refer to the git link and check out the code, as it is explained in detail there.

I would like to highlight my key learning and interesting facts I found while coding for this project:

  • The lerp function is a pretty nice tool which enables shapes to move from point A to B in a seamless fashion. This came in particularly useful when I was implementing the enemy bullets.
  • To create the illusion of movement in the player, entities and objects of the game, you just keep updating their coordinate values using nested for loops. This works pretty well and is not very hard; keeping track of the code just became an issue later in the project.
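
p5’s lerp() is a one-line linear interpolation; the same idea in plain JavaScript, as it might be used to slide a bullet from point A to point B by stepping `amt` from 0 to 1:

```javascript
// Illustrative: linear interpolation between a and b.
// amt = 0 gives a, amt = 1 gives b, values in between blend.
function lerp(a, b, amt) {
  return a + (b - a) * amt;
}
```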

I used the ultrasonic sensor to track the gun’s position (in front of the screen or away). This created the beautiful interaction of dodging and gave the game a more fun and playful feel. I decided to add a trigger on the gun itself via a momentary switch mounted on the gun I was planning on fashioning.

The circuitry was pretty basic :


Then came the hard part: parsing the data from Arduino into p5. I used serial communication at the beginning, which works perfectly fine for sending a single data value to p5. I did have some issues, as the sensor was sending values to p5 too fast, and they ended up being read wrong. This was solved after a session of debugging and help: it was suggested that I add a line break between every instance of data being read. This solved the problem of sending values from the sensor to the system perfectly!
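
With a line break after every reading, the receiving side can buffer incoming characters and only parse complete lines, so fast readings no longer run together. An illustrative p5-side sketch of that buffering (names are mine, not the project’s):

```javascript
// Illustrative: accumulate serial chunks and only hand complete
// newline-terminated readings to the game.
let buffer = '';

function onSerialData(chunk, handle) {
  buffer += chunk;
  let i;
  while ((i = buffer.indexOf('\n')) >= 0) {
    handle(Number(buffer.slice(0, i)));  // one complete reading
    buffer = buffer.slice(i + 1);        // keep the partial remainder
  }
}
```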

But when I decided to add a button to the mix, I had to switch to the JSON method to send multiple values (namely the button value and the sensor reading) to the p5 code. By creating a JSON object it was easy to parse these values, and it worked nicely after a serious debugging session.
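
With both values packed into one JSON line, the p5 side parses them in a single step. A minimal sketch, assuming key names like `button` and `distance` (the project’s actual keys may differ):

```javascript
// Illustrative: the Arduino sends one JSON line per update, e.g.
// {"button":1,"distance":42}; p5 parses both values at once.
function parseArduinoJson(line) {
  const { button, distance } = JSON.parse(line);
  return { button, distance };
}
```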





6. Fabrication and Testing


Finally, I came down to fabricating the whole thing. I decided to go for Block-Techs (Lego knock-offs) to fabricate the gun, because I was kinda familiar with using them alongside Arduino at this point. I gave the gun a flat front so it would be easy for the sensor, clipped to the top of the laptop screen, to detect.

Finally, I integrated the whole thing and went for the final test sequence. Once this base was ready and working I spent my final hours tweaking the values to make the game more balanced and fun.

  • Added two new levels to the game.
  • Added a boss fight at the end of the game.
  • Tweaked the enemy speed and shooting frequency to increase the difficulty in the higher levels.
  • Tested the game with different people to get feedback on the difficulty of the game.
  • Rechecked the dodging mechanism using the gun, and the ease of doing so by the users.
  • Added a health system in the game (the player can tank 10 hits before dying).
  • Introduced a basic in-game UI which shows player health and the stage level.


7. Presentation and Feedback

Here is the video from my final presentation and demonstration of the project.




1. FPS game using p5.js by jakezhong



2. P5.js game “Dodge” by worldofcodes


3. P5.js layers reference by Luqian chen


4. Simple visualizer using p5.js by


5. Music visualizer using p5.js by Blazar David


6. P5.js documentation


7. P5.bots references by sarahGP (connecting p5 with microcontrollers)


8. P5.js example: Non-orthogonal reflections (for bullets)


9. Physical Computing basics

10. Digital Futures Github

Winter Is Here!

Experiment 3: Winter is Here

by Tabitha Fisher


Winter is Here is a multimedia installation that fully embraces the inevitable. The project uses a p5 snowflake simulator in combination with an Arduino, which allows the user to “turn up the dial” on winter for a fully immersive experience.


Process Journal

Step one: make a thing control another thing. A daunting task when you can barely understand your previous things. See, in the last two assignments I had the benefit of working with some very smart people. We balanced out each other’s strengths and weaknesses. I can delegate, I can facilitate and I can ideate, but I’m not great at doing when I don’t know what I’m doing. I am also not great at retaining information without having time to let it sink in. Which is a problem when that is the guiding philosophy of this particular course. To rephrase that Jenn Simmons quote: The only skill I know is how to identify what I don’t know… and then what???

I began this project knowing that I had to keep it simple. My goal was to have something as soon as possible with the thought that at any moment this can be taken away. Basically, working from the general to the specific. It’s a philosophy I use while drawing/writing/animating – the idea that your work is never truly finished so you must have something to show at every stage of the process. It’s not meant to be as melodramatic as it sounds.

I knew I wanted to understand the code. In the previous group projects I understood the theory behind some of the code, but I couldn’t confidently explain how it works. So, rather than start with some lofty plan that I couldn’t possibly execute I decided to work from what was already working as a basis for my experiment. My goal was to create a project that I could slowly take the time to understand.

In class we were learning about the JSON protocol and how it allows the Arduino and p5 to talk with each other. There was an in-class example that used the on-screen mouse position to control two LEDs. After a few tries and a bit of help I was able to get it working, which was quite exciting. I also happened to have my potentiometer hooked up from a previous exercise, and it was at this point that I realized how I can have multiple devices on the same breadboard without having them interfere with each other. There was also this serial port thing that was fairly new – in order for JSON to work we had to use serial control software and add the serial port name into our code. I managed to get that working too. Knowing you can do something in theory is very different from actually doing it yourself.

Fig. 1 - Two things on the thing and nothing exploded!

Fig. 1 – Two things on the thing and nothing exploded!

After this I thought since I’ve already got a potentiometer on here I should try to get it to control some stuff. In the p5 examples page there was a particle simulation for falling snowflakes. I ran this idea past Nick and we discussed possibilities for displaying the project. I thought it would be pretty lame to just have it on a laptop and shake it like a snow globe, so Nick suggested that I check out what’s available in the AV Rental room. That’s when I learned all about short throw and long throw projectors!

At this point, I had to remember how to get the P5 example to display on my computer. Basic, I know, but at this point we had tried out so many other new things in class that it was a struggle to remember the steps. Oh yeah… I had to make a new folder with an index.html document and copy the code into a javascript document. And open the index.html in Chrome. Baby steps, people…

Fig. 2 - Mockup of my dial that I used from a class example.

Fig. 2 – Mockup of my dial that I used from a class example.

I got the snow working but I had no idea how to control it – or even begin trying to figure out that information. We had been given a document from the in-class assignment that I had already loaded into my Arduino but I wasn’t sure how to figure out the code. A few classmates had some ideas but I wasn’t able to fully follow along, which had me even more confused. One of them even got the snow to fall but I didn’t understand how they did it and there were way too many snowflakes, and I couldn’t fix that either. There were many challenges such as remembering the shortcut to access the console – not to mention remembering it was even called ‘console’ so I could google it and find out for myself. At this point my n00b brain was completely fried as I came to the realization that all my computer knowledge up until this point has relied solely on an understanding of a few select programs and outside of them I am a helpless baby kitten.

Fig. 3 – Way too many snowflakes, my computer is dying! But at least the potentiometer is working!

After speaking with Nick he suggested that I put the Arduino away for now and spend the weekend trying to understand the code as well as the controls for the snowflake properties. Could I figure out how to change the number of snowflakes? How about the size? So I went home and adjusted some of the code on my own. To start, I knew I wanted to make the screen larger than a little rectangle so I played around with the canvas size. Then, I changed the colour to blue! Next, I wanted to try adding an image but the usual method wasn’t working.

Fig. 4 – Bigger canvas! Blue canvas!
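
Looking back, the knobs I was hunting for boil down to a handful of numbers. Here’s a simplified, stand-alone sketch of the idea (my own illustrative function, not the actual p5 example’s code):

```javascript
// Simplified model of the snowflake parameters I was trying to control.
// (Illustrative only -- the real p5 example stores its flakes differently.)
function makeSnowflakes(count, minSize, maxSize, canvasW, canvasH) {
  const flakes = [];
  for (let i = 0; i < count; i++) {
    flakes.push({
      x: Math.random() * canvasW,   // random start position on the canvas
      y: Math.random() * canvasH,
      // each flake gets a random size within the chosen range
      size: minSize + Math.random() * (maxSize - minSize),
    });
  }
  return flakes;
}

// fewer, bigger flakes: change the numbers, not the drawing code
const flakes = makeSnowflakes(100, 2, 6, 1280, 720);
```

Once the flakes live behind a parameterized function like this, “fewer flakes” or “bigger flakes” is just a different argument, with no surgery on the drawing code required.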

During Explorations class I tried to add images by putting one in my library and writing the file name into the code, but nothing was happening. I found an example from the P5js site that described how to load and display an image using the command line to create a simple server. It was not working as described, so I abandoned that option and took Kate’s advice to host my sketch on the OCADU server for now. I just wanted to see if the image was working!

Fig. 5 – Oh yeah, remember cyberduck? What a quack… #dadjokes

After a bit of tinkering I got it to do the thing! Meaning, my image was on screen and my snow was falling. Super great. Except this wasn’t a permanent solution since Arduino can’t talk to my website over the OCAD server… le sigh.

Fig 6. – Webspace snow video

So I put aside the idea of working with the image on screen for the moment and focused on trying to get the Arduino to talk to the p5 sketch, which had been working a few days earlier. But when I sat down to recreate the interaction I couldn’t remember what I needed to do. I knew I had to load something into the Arduino, but what? I had taken notes but suddenly they no longer made sense. Was that something located in my project folder? Weren’t they supposed to be .ino files – and how come I wasn’t seeing one of those? And how come when I tried to load my index.html page I got… these horrible grey streaks??
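
With hindsight, the browser half of that Arduino link is one small step: the .ino sketch streams the dial’s analog reading (0–1023) over serial as lines of text, and the p5 side turns each line into a flake count. A hedged sketch of just that mapping, with names that are mine rather than the class example’s:

```javascript
// Map a raw potentiometer reading (0-1023, sent as one text line over
// serial) to a snowflake count. Function name is my own placeholder.
function flakeCountFromSerial(line, maxFlakes) {
  const reading = parseInt(line.trim(), 10);
  if (Number.isNaN(reading)) return 0;                // ignore garbled lines
  const clamped = Math.min(1023, Math.max(0, reading)); // keep in sensor range
  return Math.round((clamped / 1023) * maxFlakes);
}
```

So `flakeCountFromSerial("512\n", 200)` gives 100: turn the dial halfway, get half the flakes.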

Fig. 7 – The saddest snow.

At this point I was very concerned that I had somehow ruined the whole thing. It was already Wednesday and the project was due on Friday. Going back to my work ethos that you should always be at some form of “done” for every stage of a project… at this point, it seemed as though I had no project at all. Dark times. I spent hours fussing with the code, trying to get it back to where it was a few days earlier, but to no avail. I had a meeting with Nick and Kate that afternoon where I was hoping to discuss presentation possibilities, but at that point I had nothing to show and no idea how to fix it. I really can’t recall a time when I felt more lost on a project.

Nick got me back on the right track by starting over with the original p5 example. It turned out that I had just messed up something in the code, and within a few minutes he was able to get it going again. I was relieved but also pretty frustrated that I wasn’t able to figure it out on my own. He also pointed out the videos on Canvas that showed us how to make a server on our computers. That’s exactly what I needed to run the sketch with my images. Then it was time to head off to work for the evening, so any testing would have to wait until the following day.
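
For the record, the “server on your computer” step can be as small as one command. Python’s built-in server is one option (this assumes Python 3 is installed; the Canvas videos may have shown a different tool, and the port number is arbitrary):

```shell
# From inside the sketch folder: serve the files at http://localhost:8000
# so the browser (and loadImage) can fetch them over HTTP.
python3 -m http.server 8000 &
SERVER_PID=$!
sleep 1                                   # give the server a moment to start
curl -s http://localhost:8000/ > /dev/null && echo "server is up"
kill "$SERVER_PID"                        # stop it when you're done testing
```

With the server running, the sketch lives at http://localhost:8000 instead of a file:// path, which is what lets images load.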

Fig 8. – Building it up again

I was able to spend the next day applying these changes in the time between my other classes, and for the first time I was able to get through it all smoothly. I was simply retracing my steps, but this time I was understanding why I was doing them. When my Arduino failed to speak with my sketch I knew to re-select the correct serial port. I had a better understanding of the purpose of the index.html file vs the javascript file. I swapped my old background image for a new one. I knew where to go if I wanted to upload the .ino file to my Arduino. I felt as though I was controlling my project rather than allowing it to control me!

Fig 9. – Hey, things are working!

Fig. 10 – Thanks, coding train!

The best part was getting the image up onto the wall. By chance I had positioned the projector so that it was tilted towards the corner of the room, and when I turned it on the image came to life. Because the image was a nature scene it created the feeling of a little immersive world. A simple one, but still. The projector took it from what is essentially a 90s-era screen saver to an actual installation project. I’ve never made such a thing before. Generally, I don’t make ‘art’. I make assets that are part of a great big commercial venture (animated tv!). Or, if I make something for myself I do so using methods I already know (films! sketchbooks!). But nothing that I’ve made has ever been in a gallery. I have always wished I could do this, but modern forms of installation art have always seemed so mysterious. Nuit Blanche-type stuff for fancy art folks. How do they come up with those ideas? What are the guiding principles behind this style of making?

Fig. 11. – My baby on the big screen!

I can draw stuff and I can draw it pretty well, but DF requires an entirely different set of skills. It would have never dawned on me to consider the effects of scale on a projected image unless it was a comparison between a film screened in a festival vs on a phone. I remembered back to the second project when Kate described the opportunities of working with physical materials. In terms of code the project may be technically simple but the way we use the physical environment can turn the project into something special. I knew I wanted to create this immersive snow environment but when I saw my classmates react so positively to the projection I thought it could be about revelling in the thing we dread… winter… with its dark days and icy sidewalks. My project could be about embracing the best sides of winter and all the things that make it special. Cozy scarves and hats. Hot drinks and chocolate chip cookies. A little holiday ‘muzak’ for ambiance. The comfort of the familiar. A metaphor for this particular journey of mine, perhaps?


Fig. 12 – Toques!


On the morning of the presentation I gathered the last of my supplies and left time to set up and ensure that my project was running properly. One of my classmates had suggested using Incognito while working on the project to ensure that my changes updated properly on the browser. It also had the side benefit of being quite dark, which helped it blend in with the rest of my image while I was presenting. In a moment of brazen recklessness I decided to pull the yellow and blue LEDs from my breadboard moments before the start of class. They weren’t exactly needed anymore, and I felt that I understood my setup enough to do it with confidence. Thankfully I was right. Then, glorious synchronicity. I learned that another classmate had brought an office-sized vessel of coffee to share, which they generously donated to my cause – I was planning to pick up something similar for my installation. I had grabbed some hats and scarves at Black Market and found some tasty-looking cookies at Rabba to share. They need to be tasty or what’s the point?

When it was time to present I cranked the Bublé tunes and found myself feeling… strangely exposed. My project was not nearly as sophisticated as the work of my classmates. It had taken me two weeks to use a preexisting snow simulation and make it work with a dial. What’s so special about that? Could you even call it an experiment? Well, for me it was, and here’s why. Up until the moment I started at DF my value as an artist (and maybe as a person?) had been measured by my ability to draw. This has always been my currency. It’s at the very core of me, but in a way that’s very limiting. I came to this program to explore the unexplored and expose myself to methods of working that I know nothing about. Well, coding is one of those things.

Interestingly, during the critique Kate made the point that she wished I had used some of my drawings on this project. I agree that would have been great, but admittedly it hadn’t crossed my mind because my last few months of schooling have been about exploring worlds beyond that person. It is very easy for me to dress up one of my projects with a nice drawing. I know that’s not how she meant it, but I wanted to have a reason for using my drawings and I hadn’t quite arrived there yet. In a way I think I needed that reset. Maybe it’s silly to purposefully disengage from that part of myself, but I was hoping that I’d benefit from the distance. Like going away on a very long holiday to somewhere completely new, only to return with a newfound appreciation for the familiar. Having gone through this process I now feel that I’m ready to reimagine what’s possible.

Fig. 13 – Final Snow



Underwater Aquarium – Windows 98

Once I started working with the P5 snow example I noticed how it gave off a 90s screensaver vibe. I really love that kitschy aesthetic. While I wasn’t able to fully explore the options here because I was so caught up with managing the basics, I kept them in mind when selecting the image. I have fond memories of those fish.

Office party photo booth

These days it seems as though every office holiday party needs to be equipped with some kind of photo booth. My favourite part about it is the fanciness of the suits in contrast with the silliness of the props. When I think of an installation this is what comes to mind – probably because I’ve experienced more office holiday parties than immersive art projects. But there’s an earnestness to it all that I love very much.


Winter as a national tragedy

I am very interested in the way citizens of a large city collectively gripe about specific topics throughout the year. It seems as though everything is always the worst. Winter especially. Maybe it’s somehow cathartic for us to perform this strange ritual at the turn of each season?



Original snowflake simulation example:


Instructions on loading images (not super useful, however):


Making a local server for the P5 Sketch:

