Experiment 2 – Pet Me (If You Can)

Project Title
Pet Me (If You Can)

Team Members
Jignesh Gharat, Neo Chen, and Lilian Leung

Project Description
Our project explores creating a creature character that can surprise viewers through interactivity, using two distance sensors. The experiment is an example of the living effect: giving a machine a life of its own and using distinct modes of operation to express different emotions through the creature.

The creature was built with two Arduinos, three servos, a row of LEDs, and two distance sensors. It sits on a pedestal and moves of its own accord, surprising viewers who come near by closing its mouth and making its eye movements erratic.

Project Video

You can access the code for the experiment here

Project Context

To create a creature using servos and sensors, we explored the ongoing question “Why do we want our machines to appear alive?” raised by Simon Penny, a new media artist and theorist. Caroline Seck Langill, in The Living Effect: Autonomous Behaviour in Early Electronic Media Art (2013), argues that we give machines lifelike characteristics to elicit a response from the audience that is suggestive of a fellow life-form, achieving a living effect: we do not attempt to re-create life, but rather to “make things have a life of their own.”

Our original intention was to create a Halloween-themed creature, or a security-like box that would guard a valuable item such as jewellery, or something for everyday use such as guarding cookies inside a cookie-jar-like shape.

Langill (2013) proposes three characteristics of the living effect: first, an adherence to behaviour rather than resemblance; second, that the effect is one of a whole body in space with abilities and attributes; and third, a potential for flaws, accidents, and technical instabilities, since imperfections allow one to recognize the living effect within a synthetic organism.

We began prototyping with the oscillation of two servos for the eyes, taping Post-it notes over the servos as stand-in pupils to tune the movement of the pupils to a natural speed, easing it with Nick Puckett’s animationTools Arduino library.
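The easing idea can be sketched in a few lines: each frame, the current angle moves a fraction of the remaining distance to the target. This is a minimal JavaScript model of the concept, not the actual animationTools API (the function name is hypothetical, and the real sketch runs on Arduino).

```javascript
// Each frame, nudge the current angle a fixed fraction of the way to the
// target. Small fractions give slow, natural-looking motion without jumps.
function easeTowards(current, target, easing) {
  return current + (target - current) * easing;
}

// Simulate a pupil servo easing from 0 degrees toward 180 at speed 0.1.
let angle = 0;
for (let frame = 0; frame < 50; frame++) {
  angle = easeTowards(angle, 180, 0.1);
}
console.log(angle.toFixed(1)); // approaches 180 without overshooting
```

The same function serves all three modes: only the easing fraction changes, which is why a single speed parameter (0.1, 0.2, 0.8) is enough to shift the creature’s mood.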

In Robotics facial expression of anger in collaborative human–robot interaction (2019), Reyes, Meza, and Pineda describe how expressive robotic systems improve feedback and engagement with viewers; emotions such as anger produced the most effective responses from participants. With the minimal facial expressions possible using the components available, we tried to replicate a human-like expression as an indicator of the creature’s current mode of operation.

From there we incorporated the main body of the creature (the box) and began exploring ways to make it open. Our initial thought was to have some sort of lever outside and above the box that would pull the lid open with thread or fishing wire.


We also explored placing the servo on the side of the box, but were concerned the motor couldn’t push the lid open across its full width from one side. In the end we placed the servo inside the box, at the back centre, where it could push the lid open with the assistance of a curved arm reaching the lid. We then tested for the right servo angle and range, so as not to push the servo out of its spot inside the box or open the box too wide.

Servo Testing Gif

Before laser cutting our final shapes, we tested each component separately on the breadboard to make sure the circuit was functioning before soldering each piece. From there we built out the facial features: the opening box became the mouth, with a laser-cut tongue-like shape lit up by red LEDs. We laser cut the pupil and iris to attach to the servos, and made a small enclosure to hide the actuators. All the cables are looped inside the box and tucked into the back to keep them tidy when the creature opens its mouth.



The creature has three modes of operation:

  1. Within the two-metre “safety zone”, the eyes oscillate slowly from 0 to 180 degrees at a speed of 0.1. In this range the servo controlling the mouth props it open, as the creature deems the area “safe”, and the LEDs in the tongue piece are lit.
  2. In the middle zone, the creature becomes “conscious” of viewers and the eye speed increases to 0.2, signifying hesitation or caution.
  3. When viewers enter the “danger zone”, within approximately one metre of the object, the eye speed increases to 0.8 and the mouth snaps shut.
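The three modes boil down to a function of measured distance. As a sketch of the idea (in JavaScript, modeling behaviour that actually runs on the Arduinos; thresholds and field names are illustrative):

```javascript
// Map a distance reading (mm) to the creature's mode of operation.
// 2 m and 1 m thresholds follow the zones described above.
function modeFor(distanceMm) {
  if (distanceMm > 2000) {
    return { zone: "safe", eyeSpeed: 0.1, mouthOpen: true, ledsOn: true };
  } else if (distanceMm > 1000) {
    return { zone: "conscious", eyeSpeed: 0.2, mouthOpen: true, ledsOn: true };
  }
  return { zone: "danger", eyeSpeed: 0.8, mouthOpen: false, ledsOn: false };
}
```

Keeping the mapping in one pure function like this makes it easy to retune the zone boundaries during testing without touching the servo or LED code.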

To avoid overloading a single Arduino and to keep the electrical circuit consistent, we split the build across the two boards: one sensor drives the two eye servos, while the other drives the LEDs and the servo that opens the mouth.

One of our challenges was the noise generated by the sensors, which caused the modes of operation to fluctuate, the mouth opening and then immediately dropping even when viewers were a safe distance beyond the threshold. We adjusted the settings to shorten the middle range, turning the sensor’s noise disturbance into what reads more like a laughing motion by the creature.
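Another common way to tame this kind of sensor noise (an assumption on our part, not the fix the team used) is to debounce the mode: only switch when several consecutive readings agree, so a single noisy spike cannot flip the creature’s state.

```javascript
// Only accept a new zone after `required` consecutive readings agree.
function makeDebouncer(required) {
  let candidate = null; // zone currently being counted
  let count = 0;        // consecutive readings of that zone
  let settled = null;   // last accepted zone
  return function update(zone) {
    if (zone === candidate) {
      count++;
    } else {
      candidate = zone;
      count = 1;
    }
    if (count >= required) settled = zone;
    return settled;
  };
}

// A single noisy "danger" spike no longer flips the creature out of safe mode.
const update = makeDebouncer(3);
["safe", "safe", "safe", "danger", "safe"].forEach(z => update(z));
```

The trade-off is a small reaction delay (here, three sensor readings), which in practice is barely visible but removes the mouth’s flickering.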

After our presentation, we got feedback as to how we could better incorporate the sensors into the experiment, so that the experiment can become more mobile and easily placed in different situations.


Our solution was to mount the creature on a pedestal with the sensors hidden below the surface, so it stands upright as if it were an exhibition piece. The creature takes on a personality of its own: its eyes oscillate as if patrolling the surrounding area, and when viewers approach it closes its mouth and looks downward in a humbler composure.







MaxBotix. (2017, January 13). How to Use an Ultrasonic Sensor with Arduino [With Code Examples]. Retrieved from https://www.maxbotix.com/Arduino-Ultrasonic-Sensors-085/.

Circuit Digest. (2018, April 25). Controlling Multiple Servo Motors with Arduino. Retrieved from https://www.youtube.com/watch?time_continue=9&v=AEWS33uEwzA

Langill, Caroline. (2013). “The Living Effect: Autonomous Behavior in Early Electronic Media Art.” Relive: Media Art Histories. Cambridge, MA: MIT Press. pp. 257-274.

Programming Electronics Academy. (2019, July 2). Arduino Sketch with millis() instead of delay(). Retrieved from https://programmingelectronics.com/arduino-sketch-with-millis-instead-of-delay/.

Puckett, N. (n.d.). npuckett/arduinoAnimation. Retrieved from https://github.com/npuckett/arduinoAnimation.

Reyes, M. E., Meza, I. V., & Pineda, L. A. (2019). Robotics facial expression of anger in collaborative human–robot interaction. International Journal of Advanced Robotic Systems, 16(1), 172988141881797. doi: 10.1177/1729881418817972

Experiment 2: Pro Yoga!

Names: Liam Clarke, Rittika Basu & Katlin Walsh

Project Description: 

Project “Pro Yoga” blends fun and fitness for everyone. It is an installation for exercise in which both the body and the mind are engaged, for physical, mental, and spiritual upliftment through the art of asanas. This interactive training set-up can be used in any fitness centre or yoga studio, or at home. The goal of the activity is to make the tri-coloured sets of LEDs activate at various heights by stretching one’s limbs toward the ultrasonic range finders through different yoga positions. In the prototype, the goals are set by verbal instructions from a yoga instructor; a next generation would use a virtual instructor. Activating the LEDs functions as an indicator letting participants know they are in the correct position. For example, the instructor might ask the participant to make the blue LED (placed in front at a height of 5'2"), the red LED (placed on the right side at ground level), and the green LED (placed behind at a height of 5'2") light up together. The user’s objective is to stretch out his or her arms and legs to activate the specified LEDs and hold that pose steady for a set amount of time before the next pose.


Work-in-Progress Images:

Setting the Ultrasonic Range Finder at different heights and orientations for testing the LED’s blinking functions from various ranges of distance.





Final Images:





Interaction Video:

Project Context:



Our brainstorming started with sketching possible concepts while exploring various functions of servos and LEDs. We also discussed various projects built with Arduino, found on online platforms like Pinterest, Design Boom Magazine, Arduino Project Hub, YouTube, etc.

Several ideas involved creating a space for performing arts and divergent forms of physical activity. The first was a dance contest in which a ‘Thumbs Signal Sticker’ could be attached to the servo: essentially a multiplanar Dance Dance Revolution, with LEDs triggered by the distance sensor. The servo attachment rotates to a ‘Thumbs-Up’ (180°, indicating a good grade), a ‘Thumbs-Down’ (0°, indicating a bad grade), or, if the performance goes undetected, a ‘Thumbs in the Middle’ (90°, indicating an average grade).

Multiplanar DDR led to a ‘Body Twister Game’ where the LEDs will blink according to the movements of contestants. We planned to create groups of 2-3 and follow the rules of the original ‘Twister’ game, where contestants would get tangled through trying to activate LEDs. In this version of the game, sensors would be placed at three points around the user to create a common area where the sensors converged. Players would need to be mindful of all body parts present in order to trigger and maintain colors for a specified period of time.

After several trials, and deciding the earlier ideas were lacking without a further-developed interface, we reworked the project into a ‘Health and Fitness System’. We decided on an interactive installation that would be enjoyable and healthy: a gamified home yoga centre. In this version, users become more mindful of the space they occupy, allowing them to return to a meditative state while engaging in their practice. Through changing colours, spiritual meditation and mindfulness of one’s body can be encouraged in a guided practice without interrupting the user’s thoughts.



Creating and Prototyping:

We further refined our brainchild into ‘yoga-training equipment’ in which three ultrasonic range finders (LV-MaxSonar-EZ0) are placed in a triadic arrangement. Each section is accompanied by three LEDs of different colours, i.e., red, blue, and green. Each section is placed at a different height and orientation, and the code is calibrated differently depending on the target limb for that particular sensor.

The idea is to activate the ‘Blinking’ functions with the aid of Yoga Postures. While calibrating within the code, we had to keep physiology and the body motions of yoga in mind as they relate to the distance sensors.

Initially, we tested the prototype by using one LED and observing how it blinks with input data from the distance sensor. Adding on more sensor/LED setups, we experimented with orientations, lengths and heights of the three sensors.

The code is relatively simple: it activates specific LEDs based on input data from the distance sensor. The only issue was finding the best-suited input range to make holding the pose difficult but not impossible.
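The core check can be sketched as follows: each sensor station lights its LED only while the distance reading falls inside that station’s calibrated window. This is a JavaScript model of the logic, not the project’s Arduino code; the millimetre windows are the example ranges quoted in the walkthrough, and the station order is illustrative.

```javascript
// One calibrated window per sensor/LED station (bounds in mm).
const stations = [
  { led: "red",   min: 700, max: 1000 }, // ground level
  { led: "green", min: 300, max: 500 },  // at 5'2"
  { led: "blue",  min: 600, max: 800 },  // at 5'2"
];

// readings holds one distance (mm) per station, in the same order.
// Returns the names of the LEDs that should currently be lit.
function litLeds(readings) {
  return stations
    .filter((s, i) => readings[i] >= s.min && readings[i] <= s.max)
    .map(s => s.led);
}
```

Tuning difficulty then amounts to narrowing or widening the `min`/`max` windows per station.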


Moving beyond the context of this project, the use of RGB LEDs, battery packs, and additional sensors would be investigated to create a product that is more compact, and able to be placed in a room without wires being shown. This would only require small changes to the code framework, in addition to some hardware upgrades to swap LED types.



Data Collection: Researching Yoga and Asanas



While selecting the yoga postures for our installation, we had to consider various factors. We shortlisted simple postures that could be carried out by beginners, elderly individuals, and even kids; they are enjoyable, healthy, and integrate easily with our Arduino installation. They are listed below:

  1. Mountain Pose/Tadasana 
  2. Warrior 1 & 2 Pose/Virabhadrasana (referred to in our experiment)
  3. Triangle Forward Pose/Trikonasana
  4. Raising Hands in Lotus Pose/Padmasana

Execution: An example of an experiment scenario

Step-by-Step instructions for training the participant to perform the Tadasana Asana or the Warrior Yoga Pose

  1. Welcome to the Pro Yoga experience.
  2. Kindly step up onto the yoga mat.
  3. We will start with the Mountain Pose/Tadasana, a standing asana that forms the foundation of other standing yoga poses.
  4. Keep breathing in and out slowly throughout the whole process.
  5. Bend your left leg forward and try to turn the red LED on (its blinking window may be set from 700 mm to 1000 mm, placed at ground level). Here the participant is expected to bend her leg and press forward on the knee until the specified LED blinks.
  6. Stretch out your left hand, keep the arm at shoulder level, and try to turn the green LED on (its blinking window may be set from 300 mm to 500 mm, placed at a height of 5'2"). Here the participant is expected to stretch her left hand toward the sensor, keeping the joints at 180 degrees, until the specified LED blinks.
  7. Now extend your right hand straight in the opposite direction, keep the arm at shoulder level, and try to turn the blue LED on (its blinking window may be set from 600 mm to 800 mm, placed at a height of 5'2"). Here the participant is expected to stretch her right hand toward the sensor, keeping the joints at 180 degrees, until the specified LED blinks.
  8. Finally, the participant is expected to hold the performed Yoga posture (Warrior Pose here) for 10-30 seconds and then relax before executing the next Asana/Yoga pose.
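The final “hold the pose” step can be sketched as a small timer: the pose counts only once all three LEDs have stayed lit for the required duration, and any break resets the clock. This is our assumption about how such a check could work, not code from the project; the 10-second hold below is one end of the 10-30 second range mentioned above.

```javascript
// Returns a tick function: call it each update with whether all LEDs are lit
// and how much time has passed; it reports whether the hold is complete.
function makeHoldTimer(requiredSeconds) {
  let held = 0;
  return function tick(allLedsLit, dtSeconds) {
    held = allLedsLit ? held + dtSeconds : 0; // breaking the pose resets the clock
    return held >= requiredSeconds;
  };
}
```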


This project references YogAI, a “smart personal trainer” yoga instructor that guides the user through a workout: it extracts anatomical key points to detect the posture configuration and gives continuous feedback for posture correction. Our project derives from this idea in the sense of having a user guide throughout the training session, but Pro Yoga focuses more on easy, universal poses, with special attention to stretching the limbs, along with positive reinforcement (blinking LEDs).

1.“YogAI: Smart Personal Trainer”. Arduino Project Hub, 2019 https://create.arduino.cc/projecthub/yogai/yogai-smart-personal-trainer-f53744.

2. “Change Your Meditation With Colors Spirituality”. Yogi Times, 2015 https://www.yogitimes.com/article/meditation-colors-spirituality

3. “New Energy Geographies: A Case Study of Yoga, Meditation, and Healthfulness”. Journal of Medical Humanities, 2015


4. “E-traces creates visual sensations from ballerinas” Arduino Blog, 2014


5. This project involves an LED that changes color according to the movement of one’s hand towards the four directions. We were referring to its set-up and coding for our project development.

“Motion Controlled Color Changer!”. Arduino Project Hub, 2016 https://create.arduino.cc/projecthub/gatoninja236/motion-controlled-color-changer-299217?ref=tag&ref_id=motion&offset=0

6. We referred to various images of yoga and asana positions for studying, comprehending, and shortlisting postures that could be used in our project. We went for simple poses that could be carried out by beginners, elderly individuals, and even kids.

6.1 RelaxingRecord.com. Top Ten Yoga Positions For Beginners. 2019 http://www.relaxingrecords.com/2015/11/17/top-ten-yoga-positions-for-beginners/.

6.2  Fitwirr. Fitwirr 24 Yoga Poses For Beginners – Yoga Kids (Laminated Poster) – Kids Yoga Poses – Yoga Children – Yoga For Kids -Yoga Wall Arts – Yoga Poster. 2019 https://www.amazon.com/Fitwirr-Yoga-Poses-Beginners-Laminated/dp/B07C1SQK6L.

7. Puckett, Nicholas. October 4 – Videos I Creation & Computation 001. 2019 https://canvas.ocadu.ca/courses/30331/pages/october-4-videos.

8. Puckett, Nicholas. October 8 – Videos I Creation & Computation 001. 2019 https://canvas.ocadu.ca/courses/30331/pages/october-8-videos



• project title 


• names of group of members 

Neo Chen & Rajat Kumar

• project description 

The game we created was inspired by the classic game SNAKE, which we believe most people have played before. We wanted to bring this game into a more physical, interactive form. It requires all players to use their phones, and the number of players is preferably even, since each round is played in pairs. (When a player is singled out, he or she is given an extra number to balance out the game.)


  1. Shake your device to generate your random number, and don’t let anyone else see it 🙂
  2. You will be competing in pairs; show your number to your opponent.
  3. The person with the bigger number wins.
  4. The loser has to follow the winner for the rest of the game (forming the snake).
  5. But that doesn’t mean your game is over: once you join the winner, your number goes to the winner.
  6. In the next round, the winner compares a total that includes the loser’s number.
  7. The same goes for the rest of the rounds; a player who is singled out gets extra points toward their total to balance out the game.
  8. The last person standing wins the game!

Extra: the final winner will have a long line of “losers” following behind, forming the shape of the SNAKE; the winner is asked to take a selfie with everyone standing behind.
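One round under these rules can be sketched in plain JavaScript (the player object shape is hypothetical): the higher total wins, the loser joins the winner’s tail, and the loser’s points are absorbed into the winner’s total for later rounds.

```javascript
// Resolve one pairing: winner absorbs the loser's points and tail.
function playRound(a, b) {
  const [winner, loser] = a.total >= b.total ? [a, b] : [b, a];
  winner.total += loser.total;                         // loser's number goes to the winner
  winner.tail = winner.tail.concat(loser, loser.tail); // the snake grows
  loser.tail = [];
  return winner;
}

const p1 = { name: "A", total: 7, tail: [] };
const p2 = { name: "B", total: 12, tail: [] };
playRound(p1, p2); // B wins and now competes with a total of 19, with A in tow
```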

• visuals (photographs & screenshots)







• edited video of the 20 screen experience (1 minute or less)

•code https://github.com/Rajat1380/CODE_OCAD/tree/master/EXPERIMENT_1


• Project Context 

Our first intention was to make a mini ice-breaker game, since all of us had just joined the program and barely knew each other. We researched social games, looking for a good one to transform into our own, but could not agree on one that represented our idea. Then the thought popped up: why not have everyone play the snake game most of us have played before, but with a twist that gives players more chances to be interactively involved?

After finalizing the concept, we broke development down into the minimum viable product of code needed to make the game function. We prioritized the random number generation and started building. p5’s preload() function came to the rescue: initially it worked fine generating randomly from 4-5 images, but failed with 21. We spent most of our time figuring it out until Jun Lee pointed out the error. The main problem was with the for loop: it ran before all the images had loaded, so we saw nothing on the screen, and we had to preload all the images manually. Images then showed on the desktop but not on mobile; that turned out to be because we were previewing in present mode, and it was solved by previewing in fullscreen mode. We also had to fight with Adobe XD for three hours to export the images in PNG format.
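Listing the 21 image paths up front keeps the preload step mechanical: each path can then be handed to loadImage() inside p5’s preload(). A small sketch of that idea (the filename pattern is hypothetical, not the project’s actual asset names):

```javascript
// Build the list of image paths so each can be passed to loadImage()
// inside p5's preload(), which blocks setup() until all assets are ready.
function imagePaths(count) {
  const paths = [];
  for (let i = 1; i <= count; i++) {
    paths.push(`number_${i}.png`);
  }
  return paths;
}
```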




We originally wanted the game to combine the snake game with a bluffing game by allowing players to bluff about the number they randomly draw, which would make it more challenging, like a psychological game. But that would lengthen each round, and we were having trouble with how to get people into pairs (we later decided to use handwritten cards for that). Not being able to use networking between phones was a huge issue, because we wanted to lower the chance of people drawing the same number; our solution was to have two opponents who draw the same number redraw. (Although the chance of people getting the same number and also being paired together is pretty low.)

We implemented the random number generation with the device’s shake ability. With an MVP of our game, we playtested with the numbers, and with a larger group we found one flaw: doing the math on pairing, after the 3rd and 4th rounds one player gets left out. We needed to balance the gameplay to maintain the fun of the game.


After some math, we decided to give 15 points in the 3rd round and 25 in the 4th round to the player who got left out.


Then we decided to develop more functionality. We implemented a button, as we were thinking about the bluffing aspect of the game. Surprisingly, the button showed on the desktop but not on the mobile phone. We searched StackOverflow and got one clue, that there was some OpenGL-related code we had to write in the HTML; we tried but failed. All we wanted was a button to hide the number and show it when required. Since that did not work out, we shifted our focus to changing the background of the number to make it less readable. For the end of the game, we thought of creating a selfie function with a face-tracking filter. We found some existing face-tracking and filter code for p5.js online that we wanted to incorporate, but unfortunately it only worked on the website: on the phone the image lagged and the filter would not show up. This was not what we were looking for, so we gave up on it as part of our game.

The major issue throughout the whole process was bringing what we wrote in the p5.js web editor onto mobile devices; we found that many functions we wanted to put into our game only worked on the computer, which limited the final result we were able to present.

• Presentation



(The snake formation)

• Reference

Face Tracking and Filter https://editor.p5js.org/jeeyeonr/sketches/Hyy5W3GiQ

Input and Button https://p5js.org/examples/dom-input-and-button.html

Create Button https://p5js.org/reference/#/p5/createButton

Device Shaken https://p5js.org/reference/#/p5/deviceShaken

Set Shaken Threshold https://p5js.org/reference/#/p5/setShakeThreshold

Snake (Video Game Genre) https://en.wikipedia.org/wiki/Snake_(video_game_genre)

Bluff (Poker) https://en.wikipedia.org/wiki/Bluff_(poker)



Forest Escape!

Group 8: Katlin Walsh & Jessie Zheng

Project Description

“In order to build trust and friendship in each other as we get into our digital futures program, the university has ordered us on a mandatory team building exercise in the woods. 

What Kate and Nick thought was going to be a traditional hike in the woods with some trust falls quickly turns into something unexpected. After walking for hours we finally admit to ourselves that we’re lost, but not to worry! As a digital futures cohort, we all pull out our phones and try to find a map to get home.

Alas, with no GPS signal, the only thing that seems to be working is a strange webpage, directing you to the location of a paper map. You have to work together to make sure you’re not looking for a clue that’s already been found. And be careful: as soon as you lift your finger, the clue will disappear.”

Forest Escape! is a game that builds teamwork and communication skills within a group. Not only does it engage players within a physical meeting space, but it also encourages them to interact within a virtual space by interacting with their phones. By combining traditional geocaching, escape the room, and team building exercises, Forest Escape! aims to get players to think critically on how to use their individual devices effectively as a team. The game is purposefully designed so that any number of players can participate, prompting groups to create a strategy in order to work together and complete the challenge. 

Watch Video



A fun grouping game that entails random (shuffling, counting, clustering).

Priya Bandodkar & Jun Li






Random (ANIMALS, MONSTERS & HUMANS) is a smartphone-based interactive physical game experience. The game facilitates both person-to-person and person-to-phone interactions. It can be played with 20 players and up, in an open space.

It is a character-based game containing a variety of single and mixed sets of personas, which are either animals, monsters or humans. On hitting start, the code shuffles, randomises and assigns a persona or a mixed set to each player. This is completely random, based on when the participant chooses to stop the shuffle, thus leading to a dynamic range of character-sets on the floor. The host then announces a pairing condition between these animals, monsters and humans (for instance, 1 human, 2 animals, 2 monsters need to form a team). The players now have to group up with other participants in a way that their cluster meets the announced pairing condition. They then advance to the next round. This continues until we have a winner(s).

It was interesting to see how the game playfully turned out to be an integrity test, as some players sneaked in multiple shuffles to find a matching character, and thus survive longer in the game.





We explored and brainstormed to generate concepts that could weave in what we thought was the essence of this experiment: creating one connected experience across 20 screens without networking. We also wanted to build an engagement that allowed users to have fun. Some of the ideas that came to mind were creating a shopping aisle or gallery of unusual products, using smartphones as interactive displays, and building a world map installation, with phones as different regions and cities around the world interactively depicting their landscapes and interesting facts. Although these concepts had creative scope and aligned with the initial brief, we realised they lacked the element of fun. We did a second round of ideation, drawing inspiration from ice-breaker games like Musical Chairs.[1] That is when we came across the ‘Group in the numbers of…’ game by Michael Hartley.[2] It requires each player to pair up with other participants so that they total up to the number announced by the host. It was a simple concept, but it had the potential to serve as a footing we could build on.

Concept Development

Taking this concept further, we asked: what if each player had a dynamic, random number for each round? Visualising it in a digital context, the smartphone was the ideal choice for introducing this twist.


Looking up code resources, we realised that p5’s math references could help generate a dynamic number with each click. Here is a process video of testing this out:
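The tap-to-stop shuffle can be modelled with a couple of lines of plain JavaScript. This is a sketch of the mechanic rather than the project’s actual code (names are illustrative; in p5.js the per-frame step would live in draw() and the tap in touchStarted() or mousePressed()):

```javascript
// Each frame advances the current character; the player's tap freezes
// whichever index is current, producing the "random" pick.
function makeShuffler(characterCount) {
  let index = 0;
  let running = true;
  return {
    step() { if (running) index = (index + 1) % characterCount; }, // called each frame
    stop() { running = false; return index; },                     // called on tap
  };
}
```

Because the outcome depends on exactly when the participant taps, the resulting character assignment is effectively random across players.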

Furthermore, to lend an organic feel to the game, we brought characters into play instead of numbers. We also sub-categorised the characters into families of animals, humans, and monsters (earlier, aliens) with a view to introducing an additional layer of challenge. We further considered adding mixed sets of characters to shake up the combination possibilities between participants, as drawn below. Here is a process video of testing the functionality with images:


Game Flow Visualisation

We mapped the flow of the game early on for two reasons—gauge the scope and plan for milestones, and foresee possible challenges. It was succinctly visualised as below:

  1. The game starts with 20 individual participants in an open area with 1 phone each.
  2. The game is established using an animation with the game title and button to proceed and play.
  3. The rules of the game are explained to participants verbally and/or via a rule page within the game.
  4. The participants trigger the shuffle on their respective phones either by using a phone shake or by tapping the screen.
  5. Pictures or animated GIF loops of character start shuffling in a random order.
  6. Players tap the screen to stop the shuffle on one character.
  7. The facilitator picks a pairing condition between different characters based on numbers (from a bowl of paper chits, may need to be improvised depending on players left) and announces it. 
  8. This requires players to pair up in groups to meet the announced criteria. Players have 30 seconds to do this.
  9. Players quickly pair up to form teams.
  10. The ones left out, are eliminated.
  11. We continue with more rounds (3-4 rounds) until we have a winner(s).
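The pairing-condition check at the heart of each round can be sketched as follows (a plain-JavaScript illustration of the rule, not code from the game): a group survives if its persona counts exactly match the host’s announcement.

```javascript
// condition is e.g. { human: 1, animal: 2, monster: 2 }.
// A group passes only if its persona counts match exactly, with no extras.
function meetsCondition(group, condition) {
  const counts = {};
  for (const persona of group) {
    counts[persona] = (counts[persona] || 0) + 1;
  }
  const needed = Object.values(condition).reduce((a, b) => a + b, 0);
  return group.length === needed &&
         Object.keys(condition).every(k => (counts[k] || 0) === condition[k]);
}
```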


We built on the random() functionality tested during prototyping. While developing the code with GIFs, we ran into a roadblock using random GIF loops in place of the images: the GIFs did not work on either browser (Safari or Chrome). Looking up references on animated GIFs [2], we found a way to play a single GIF, but the same logic did not scale to loading 18+ GIFs. We also realised a repository of GIFs would take a toll on the game’s loading time due to their relatively large file size. To overcome this predicament, we decided to introduce colour into our images: we added code that picked a different background colour from a selected palette each time. This multiplied the image possibilities many times over, making the game less repetitive.
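The workaround can be sketched in a few lines: pairing each static character image with a random background from a 4-colour palette multiplies the visible variety (18+ characters times 4 colours gives over 70 variants). The hex values below are illustrative, not the game’s actual palette.

```javascript
// Illustrative 4-colour palette (not the game's actual colours).
const palette = ["#F2C14E", "#4E8098", "#E4572E", "#76B041"];

// Pick a random character and a random background colour for this round.
function randomVariant(characters) {
  const character = characters[Math.floor(Math.random() * characters.length)];
  const background = palette[Math.floor(Math.random() * palette.length)];
  return { character, background };
}
```

In p5.js, the chosen background colour would be passed to background() before drawing the character image each frame.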

Visual Aesthetics

We were initially exploring graphic styles that would lend well to creating GIF loops. The purpose of including animated characters was essentially because they would add so much more life to the screen. We had narrowed down to the pixel art and the doodle style.

We went ahead with the doodle style (right), because it had more scope to bring out different character personalities through facial expressions. Below are the character designs and GIF loop:

When we were unable to proceed with GIFs midway through the development phase, we decided to alternatively uplift the visual aesthetics by adding colours from a 4-colour palette.


The idea of randomising the background color in code led to an even more diverse image repository for the game.


We launched the ‘random (ANIMALS, MONSTERS & HUMANS)’ game at the Creation and Computation Experiment 1 Critique on 27 September 2019. The video features highlights from the rounds of the game played by our classmates:

Observations during gameplay

  • Players that got eliminated in the early rounds were keen on being a part of the game in some other way.
  • It was a playful integrity test as some players sneaked in multiple shuffles to find a matching image to stay longer in the game.
  • Participants adapted to the game fairly quickly, and were able to pair up in the initial 10 seconds or so.
  • There was a glitch in viewing the gameplay on the Android interface leading to cropped images.
  • Everyone seemed to have enjoyed it. There was a request for more rounds in order to decide a single winner.


  • Building an interactivity in p5.js using random images with an optimal loading time.
  • Working with constraints of screen size for different platforms by adding the background color in the code.
  • For Priya: Embracing the steep learning curve that came with coding for the first time, and developing physical computing skills in a span of 20 days.
  • For Lee: Applying p5.js to create a working prototype of the idea and learning designing skills from Priya.


Reflections

  • Including GIF loops would have made the game experience even more interesting and dynamic.
  • We would be able to restrict players from tapping multiple shuffles (read, cheating) in one round by adding the ongoing round number functionality on the screen.
  • Rotation and device-shaken functionalities would not be suitable for this game, as players need to run and quickly make groups.
  • The game itself was successful, as the players enjoyed and wanted to play more rounds. We thus achieved the objective we had in mind.


Next steps

  • Including the ongoing ‘number of round’ functionality.
  • Finding a way to involve players that get eliminated in the initial rounds.
  • Working on including GIFs.
  • Adding sound effects to make the game more interesting and playful.
  • Making it more versatile to work on different operating systems and browsers.


https://editor.p5js.org/leelijun961118/present/PBjABRx9C6  (developed for smartphone platform)




[1] “How to Play Musical Chairs.” wikiHow, 29 Mar. 2019, https://www.wikihow.com/Play-Musical-Chairs.

[2] Hartley, Michael. “Game of Getting Into Groups of Given Numbers | Dr Mike’s Math Games for Kids.” Dr-mikes-math-games-for-kids.com, 2019. http://www.dr-mikes-math-games-for-kids.com/groups-of-given-numbers.html.


Random functionality: https://p5js.org/reference/#group-Math

Phone functionality: Touches: https://p5js.org/reference/#/p5/touches

Using GIFs in p5.js:

Discussion: https://github.com/processing/p5.js/issues/3380

Library: https://github.com/wenheLI/p5.gif/

Example: https://editor.p5js.org/kjhollen/sketches/S1bVzeF8Z

p5.js to GIF:https://www.youtube.com/watch?v=doGFUaw_2yI

Array: https://www.youtube.com/watch?v=VIQoUghHSxU&list=PLRqwX-V7Uu6Zy51Q-x9tMWIv9cueOFTFA&index=27

Loading Animation: https://www.youtube.com/watch?v=UWgDKtvnjIU





 (Creation & Computation DIGF-6037-001)

Team: Jignesh Gharat & Nadine Valcin | Mentors: Kate Hartman & Nick Puckett

Project description

Eggxercise is a hybrid digital and physical version of the traditional egg-and-spoon race where participants balance an egg on a spoon while racing to a finish line. It replaces the typical raw egg used in the classic version with a digital egg on a mobile screen. If the egg touches the side of the screen, it breaks, displaying the contents of a cracked egg with the message “Game over”. 

Because of space and time constraints, a relay race format was used for the in-class demonstration. The participants were divided into 3 teams whose members had to go through an obstacle course made out of a simple row of chairs. When they finished their leg, they had to pass the phone to the next member of their team without breaking the egg. If at any moment the egg broke, the participant holding the mobile phone had to reload the game and wait out the 5-second countdown to resume the race.

Presentation Day

Project context

We both shared a strong desire to explore human-computer interaction (HCI) by creating an experience with an interface that forced participants to use their bodies in order to complete a task. That physical interaction had to be sustained (unlike a momentary touch on the screen or click of a mouse) and had to be different from the many daily interactions people have with their smart devices such as reading, writing, tapping, and scrolling. In other words, we were searching for an experience that would momentarily disrupt the way people use their phones in a surprising way that simultaneously made them more aware of their body movements. 

We also wanted to produce something that was engaging in the simple manner of childhood games that elicit a sense of abandon and joy. It had to have an intuitive interface and an immediacy that didn’t require complex explanations or a high level of skill, but it simultaneously had to provide enough of a challenge as to require a high level of engagement. As Eva Hornecker remarks:

“We are most happy when we feel we perform an activity skillfully […] Tangible and embodied interaction can thus be a mindful activity that builds upon the innate intelligence of human bodies.” (23)

poseNet()  Library (ML)

We explored the different ways in which the body could be used as a controller, mainly through sound and movement. PoseNet, a machine learning model that allows for real-time human pose estimation (https://storage.googleapis.com/tfjs-models/demos/posenet/camera.html), offered the possibility of interacting with a live video image of a person’s face. We envisioned an experience that would attach a virtual object to a person’s nose and allow the person to move that object along a virtual path on a computer screen. This led to the idea of a mouse following its nose to find a piece of cheese. It would force users to move their entire upper body in unusual ways in order to complete the task.
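A minimal sketch of reducing an ml5 poseNet result to a nose position (the shape of the `poses` array follows ml5's documented `'pose'` callback; the helper name is ours):

```javascript
// Extracts the nose position from an ml5 poseNet result, or null if no
// person (or no nose keypoint) was detected. In the sketch, draw() would
// place the cheese-chasing mouse at the latest position this returns.
function nosePosition(poses) {
  if (poses.length === 0) return null;
  const nose = poses[0].pose.keypoints.find(k => k.part === 'nose');
  return nose ? { x: nose.position.x, y: nose.position.y } : null;
}
```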

We then moved on to the idea of controlling an object on a mobile device through voice and gestures. Building on our desire to make something inspired by childhood games, we decided to transpose the egg-and-spoon game. We didn’t want a traditional touch interaction, so we used the accelerometer and gyroscope data on the phone to sense tilting, rotation, and acceleration to control the movements of a virtual egg on a mobile phone. This allowed for immediate and unmediated feedback to users, who could quickly gauge the acceptable range of motion required not to break the egg. This can be seen as an application of a direct manipulation interface (Hutchins, 315), where the object represented, in this case the virtual egg, behaves in a similar fashion to a real egg placed on a moving flat surface. The interface also feels more direct because the user’s intention to balance the egg, as demonstrated by their hand movement, produces the expected results, following the normal rules of physics.
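The tilt-to-roll mapping described above can be sketched in plain JavaScript (a minimal model with an assumed gain constant and function names, not the project's actual code):

```javascript
// One simulation step of the virtual egg: the device's tilt readings
// accelerate the egg, and touching any screen edge breaks it.
// g is an assumed gain from tilt angle (degrees) to acceleration.
function stepEgg(egg, rotationX, rotationY, w, h) {
  const g = 0.2;                      // assumed tilt-to-acceleration gain
  const vx = egg.vx + rotationY * g;  // tilting sideways rolls left/right
  const vy = egg.vy + rotationX * g;  // tilting forward/back rolls up/down
  const x = egg.x + vx;
  const y = egg.y + vy;
  const broken = x < 0 || x > w || y < 0 || y > h; // egg hit a screen edge
  return { x, y, vx, vy, broken };
}
```

In p5.js, `draw()` would call `stepEgg(egg, rotationX, rotationY, width, height)` each frame and show the cracked-egg “Game over” screen once `broken` is true.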

Eggxercise aims to investigate a natural user interface (NUI), where the interaction with the smart device is direct, tangible and consistent with our natural behaviours. Within that paradigm, it aligns with the multidisciplinary field of tangible embodied interaction (TEI), which explores the implications and rich potential of interacting with computational objects within the physical world, and with the projects and experiments of MIT’s Tangible Media Group, led by Hiroshi Ishii, which continuously searches for new ways “to seamlessly couple the dual worlds of bits and atoms.” (tangible.media.mit.edu)

Mobile phone game play demo

Our project integrates a virtual object on a physical device that responds to movement in the physical world in a realistic way. In that way, it is related to the controllers that are used in gaming devices such as the Nintendo Wii and the Sony Playstation. It also has the embodied interaction of the Xbox Kinect while maintaining a connection to a real-world object.

Game court

The game was played between 3 teams of 6 participants on a 10 m relay track, laid out as illustrated in the image below.

Game Court

The code for our P5 Experiment can be viewed on GitHub:


Technical issues

The sound for Eggxercise launched automatically on Android devices. We spent a lot of time trying to get the sound to work on the iPhone only to accidentally discover that users had to touch the screen to activate the sound on those devices.

Sound output:

  • Android device – browser used: Firefox.
  • iPhone device – browser used: Safari, but sound only starts after the first touch on the screen.

Next steps
  • A background with lerped colour so the screen turns red as the egg approaches the edges.
  • More levels: increasing the speed of the egg, adding a second egg or obstacles on the screen.
  • A timer indicating how long the users had managed to balance the egg.
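The first of these next steps, a background that reddens near the edges, could be sketched like this (an illustrative helper of our own; in p5 the returned value would feed `lerpColor` between the normal background and red):

```javascript
// Returns 0 when the egg is at the centre of the screen and 1 when it
// touches an edge, suitable as the interpolation amount for lerpColor.
function edgeCloseness(x, y, w, h) {
  const d = Math.min(x, y, w - x, h - y); // distance to the nearest edge
  const half = Math.min(w, h) / 2;        // farthest possible edge distance
  return 1 - Math.max(0, Math.min(1, d / half));
}
// In p5: background(lerpColor(white, red, edgeCloseness(egg.x, egg.y, width, height)));
```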

Observations / User test
  • The participants were using different browsers on mobile phones with different operating systems and specifications, and all the phones were connected over wifi to load the code from p5.js. It was difficult to start the game at the same time because loading times varied.
  • Participants having a large screen size had an advantage over others.
  • Adding obstacles on the path made the game more challenging and fun.
  • The background sound (Hen.mp3) did enhance the experience as the motion of the egg changed the speed and the amplitude of the sound.
  • Participants were having a good time using the app themselves as well as watching others play.

Key learnings

  • Getting started with creative coding and understanding the basic workflow of p5.js and JavaScript.
  • Integrating graphics with code.
  • Making the screen adaptive to any screen size (Laptop & mobile phones).
  • Exploring various interaction and interface patterns using code, e.g. touch, voice, motion tracking, swipe and shake.


Hornecker, Eva, “The Role of Physicality in Tangible and Embodied Interactions”,  Interactions, March-April 2011, pp.19-23.

Hutchins, Edwin L., James D. Hollan and Donald A. Norman, “Direct Manipulation Interfaces”, Human-Computer Interaction, vol. 1, 1985, pp. 311-338.

Tangible Media Group, tangible.media.mit.edu/vision/. Accessed September 28, 2019.

poseNet() library: https://ml5js.org/reference/api-PoseNet/

Amplitude Modulation: https://p5js.org/examples/sound-amplitude-modulation.html

Frequency Modulation: https://p5js.org/examples/sound-frequency-modulation.html

The Coding Train: https://www.youtube.com/channel/UCvjgXvBlbQiydffZU7m1_aw

Hen Sound Effect: https://www.youtube.com/watch?v=7ogWsIYJyGE

Road Hawgs

Creation and Computation | Experiment 1
By: Catherine Reyto & Arshia Sobhan

Project Description

Road Hawgs is a two-team game played using phones as physical pieces on the game field. Each team tries to get its road across its own finish line through obstacles on the field, while trying to prevent the other team from succeeding. In this game, phone displays are used as multi-dimensional game pieces, giving players access to all the tools they need in one place.

The core concept of this game is group game-play using phones, devices that are often blamed for isolating people in social contexts, even in a typical multi-player computer game where players stare at their own screens without any physical interaction. This game combines the togetherness of traditional board games with the new possibilities a mobile phone screen can provide as a game piece.

Project Context

We were both interested in the idea of people using their phones as tactile objects to physically connect with one another, but it took us a few iterations before we landed on the roadblock game.  

Group game-play is reminiscent of childhood, when structured social activities were routine and commonplace. We had this in mind – the act of people gathered around, say, a puzzle or a board game – in considering how we would approach this experiment. We were interested in exploring what the experience of group games feels like: being lost in focus but with occasional bursts of bickering, cheering or laughter, competitive impulses, and most of all a strong sense of ‘togetherness’. We imagined a group leaned in shoulder-to-shoulder, far removed from the isolating tendencies typically associated with personal-use digital screens of any sort. We started thinking about the environments that tend to go hand-in-hand with group game-play: living-room floors, basement rec rooms, cottages, and eventually got to the idea of camping. This, in turn, led us to our first iteration of an image-matching concept. The idea involved the whole group ‘assembling’ a tent (an image of one, that is, with all 20 phone screens making up the canvas), working together to build it out piece by piece, not far off from the real-life experience of threading the poles hand-over-hand to set up a tent in the woods. And just like real camping, when the work is done and it’s time to kick back and enjoy the view (or at least the fire), we thought of setting up all the laptop screens to mimic the experience with an emitted glow or panoramic image. But we were dissuaded by the technical challenges that this level of orchestration involved. It entailed either networking, which was ruled out, or a level of programming beyond our three weeks with p5.



We were still attached to the collaborative puzzle work of image-matching and spent the next session brainstorming. We were both drawn to visual patterns and the power of code to transform complex graphics into drastically new mosaic designs with just a few taps (or clicks). We really liked the idea of working with Islamic tile patterns, both on account of their captivating beauty and because, like code, the designs are grounded in mathematical principles. But as Jessie and Kaitlin discussed with their scavenger hunt map, we anticipated that the variety of screen sizes would be too disruptive to the visual rhythm.

Photo Credit: www.sigd.org
Illustrating the visual interference caused by the framing around various phone screen displays

We also became increasingly aware of an overarching issue beyond screen-size interference.  For the class to interact with the screens towards a common goal, we both felt a challenge was needed. Not simply for the sake of competition or upping the ante, but rather to continue with our ideas about group game-play. We wanted to see our classmates working together, sharing frustrations and accomplishments as they competed in large groups.    

Figuring out our challenge led us through several iterations of wheel-spinning and creative frustration. We kept falling short of the target with concepts that were visually stimulating but too easily achieved, to the point of risking complacency. We frequently turned to the work of artist and designer Purin Panichphant for inspiration, eventually coming across the artwork that led us to the idea of matching pieces. Panichphant’s Optical Maze Generator allowed us to make that final connection, though at first only in an abstract sense. As soon as we saw the maze and how it worked, it was unanimously agreed that we’d build our idea from it. We tapped around on the screen, rotating the squares of a grid of shape patterns, and began to visualize the idea of positioning vertical and horizontal road pieces.

Panichphant’s Optical Maze Generator


Designing the Game

A vision came to mind from the old version of the PC game SimCity (Will Wright, 1989). The game involves strategically building a metropolis on an allotted budget in order to grow the population and, in turn, increase that budget to continue expansion and growth. One of the greatest satisfactions came from laying down pieces of road pavement, because it signified enough profit to invest in infrastructure.

SimCity, 1989 Video Game (Photo Credit: imgur.com)

We cut up several sheets of paper into phone screen-sized portions, then plotted out a system of match points (mid-width, mid-height, and all corners) for each screen. The shapes could be combined in many ways by simply rotating the sheets of paper to match connection points from one road piece to another. The strategic positioning of the road pieces was devised with the building blocks of Tetris (designed by Alexey Pajitnov) in mind: minimal variation (we used four shapes), relying greatly on rotation for combining point-to-point shape connections.




To create a bit of context and make the game more interesting, we threw in a few literal roadblocks, in the form of a river, a train passing and construction/road work. Each ‘blocker’ presented its own challenge: rivers need bridges to cross, as do trains, unless you choose to simply wait for the train to pass (miss a turn), and construction sites limit or alter your route. We added ‘relief’ pieces to the mix: a bridge for crossing the river and train tracks, and a ‘bribe’ to override the construction.  

With our pieces laid out, we felt good about having everything we needed to make a game that was simple yet clever enough that we could imagine it actually being played outside the classroom, by kids and adults alike. We just needed to strategize the game rules, and quickly learned that there is nothing simple about that. Game design is a puzzle in itself, or a story with a definitive beginning, middle and end that needs a delicate balance of pain and gain points. We wanted to focus on the collective experience of the whole class but keep the element of competition a priority. To solve this, we divided the class into two large teams pitted against one another on the same road. The tricky part came in trying to define their common goal: was it to gain more distance (finish line), or more points (flags)? We also had to keep the demo time in mind, which meant omitting the luxury of a first-round trial run. A complex set of rules could make for a far more interesting game, but we were always aware of having to keep it at a basic level. We also added the factor of luck by having a dice toss determine the number of moves each team has in a turn.

The title of the game has a double meaning. According to Wikipedia, a road hog is a motorist who drives recklessly or inconsiderately, making it difficult for others to proceed safely or at a normal speed. Since the goal of the game is to be the first team to reach the finish line, the players place pieces haphazardly, their strategy in selection curtailed by the pressure of the group (like the round-robin scenario in table-tennis). Because both teams are ‘building’ the same road, they are detrimentally dependent on one another to win, making a play on the term ‘hog’, as in to hoard for oneself. That concept was inspired by the billiards game “9-Ball”, which takes a non-linear approach to winning (rather than a cumulative tally of points).

Final Version of the Game

After discussing different scenarios, we finalized these rules for the game:

  • Two teams are differentiated with two colours (team pink and team green)
  • The teams build the same road in turns
  • The number of moves in each turn is determined by a dice toss
  • Each team has its own respective finish line (placed side by side).
  • There are some obstacles in the field that prevent teams from going straight
  • Each team can use as many blockers as they want to deter the other team
  • When a team is blocked, they need to use a relief tool to get past the block
  • Teams have to physically use their phones in the game. They rotate them when finding the desired direction of a road piece, and they stack one phone on top of another when using blockers and reliefs (ie. bridge over the river).


Blockers

  • Dynamite: Destroys the last three moves (road pieces)
  • River: Blocks the road (“bridge” is needed to pass)
  • Train: Blocks the road (“bridge” is needed to pass)
  • Construction: limits the directions to continue (“bribe” can be used to pass through in any direction)


Reliefs

  • Bridge: To pass the river and the train
  • Bribe: To pass through construction in any desired direction




Using p5 and Technical Challenges

As far as p5 was concerned, in spite of our limited knowledge, we were pretty good at communicating approach strategies. We caught one another if an idea seemed out of scope, and Arsh really stepped up when it came to tackling challenges like adding a swipe mechanism.  The swipe was the fundamental feature needed for easy, intuitive game-play, as well as a great solution to simplifying our navigation. We aimed to keep every aspect of the game as minimal as possible because we anticipated the loss of time from explaining game rules in the demo.  

After finalizing the tools, we designed a simple navigation system using tap and swipe. Players have three separate tabs, for road pieces, blockers and reliefs, accessed by swiping left and right. In each tab, they can then tap to toggle through subsets (i.e. road shapes, blocker types). Although tap was quite easily achieved using the “touchStarted” event and variables to loop the toggle, the swipe function was not very straightforward. After some searching and testing, we finally used this example code from Shiffman incorporating hammer.js. It enables swipes in all four directions and worked properly on iOS and Android in all browsers. We only needed the left and right swipes to give access to blockers and reliefs, with road pieces being the default toolset on the home screen.
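The tab-and-tap navigation can be modelled as a small state machine (a sketch with assumed names and subset counts; the real sketch binds these transitions to hammer.js swipe events and p5's touchStarted):

```javascript
// Illustrative model of the navigation: swiping moves between the three
// toolsets, tapping cycles through the subsets of the current toolset.
const TABS = ['roads', 'blockers', 'reliefs'];
const SUBSETS = { roads: 4, blockers: 4, reliefs: 2 }; // counts from the game

function makeNav() {
  return { tab: 0, item: 0 }; // road pieces are the default home toolset
}

function onSwipe(nav, dir) { // dir: -1 = swipe left, +1 = swipe right
  const tab = (nav.tab + dir + TABS.length) % TABS.length;
  return { tab, item: 0 };   // reset the selection when changing tabs
}

function onTap(nav) { // bound to p5's touchStarted in the sketch
  const count = SUBSETS[TABS[nav.tab]];
  return { tab: nav.tab, item: (nav.item + 1) % count };
}
```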

The dice toss was also executed in p5 using the shake gesture to mimic the gesture of a real-life dice toss. The only factor that was in need of some tweaking was the shake threshold (setShakeThreshold()). After a bit of ‘road’ testing, we finally settled on a threshold of 40. But for presentation’s sake, yes – Nick had a good point, a real die would have sufficed.
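In outline, the shake-triggered die could look like this (`setShakeThreshold` and `deviceShaken` are p5.js functions; `rollDie` and the wiring are our illustrative sketch):

```javascript
// Returns a uniform die face from 1 to 6. `rand` is injectable so the
// logic can be tested without relying on Math.random.
function rollDie(rand = Math.random) {
  return Math.floor(rand() * 6) + 1;
}
// In the p5 sketch (illustrative):
//   function setup() { setShakeThreshold(40); } // threshold we settled on
//   function deviceShaken() { face = rollDie(); } // fires on a real shake
```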

We felt a little restrained by our limited skill-level. There were plenty of cute extras we had to rule out, like small animals scurrying across the road pieces for idle animation. We were both eager to challenge ourselves with p5, but the time restrictions added a precarious element to our codebase. It was and still is apparent that we could do with some refactoring, as doing so would lead to a free playground for adding and experimenting. Because we were sharing the codebase, and some of the code had been pulled from elsewhere (the foundation of the Swipe feature was courtesy of Shiffman), that sometimes led to hesitation about tampering with one another’s code. But we also worked really well at overcoming issues in the code when we were able to sit down together to work through them.

Presentation Reflection

Presenting first meant it was really difficult to gauge how to make the best use of our time. Right off the bat, it was apparent that we had been too detailed in our projector tutorial of the game rules. In hindsight, it would have been more efficient to lead our classmates straight to the QR codes, leaving ample time for everyone to figure out the game in a hands-on trial run.

It was a painful oversight that we hadn’t thought to load the QR code for the die into one of our own phones before we started the game. We didn’t want to interrupt the flow of the lineups, as we were wary about how much time we had left. This led to the call-out of ‘fake’ die rolls, the sort of on-the-spot thinking that happens in a worked-up presentation.

In spite of what became a bit of a chaotic moment, it was really satisfying to see the game successfully play out.  We had anticipated that long start where the road needed to grow close enough to the finish line before the real fun of the game kicked in. In our own test-runs of the game, limited to just two phones, we were still able to see that we needed dispersed pain/pressure points in order to overcome that issue.  We resolved that being master game-designers might take a few more iterations yet. But in the meantime, we had achieved what we had set out to do. We got to watch our classmates compete and cheer and laugh as they used their phones like blocks from a classic board game.








Shiffman. (n.d.). hammer.js swipe. Retrieved from p5.js Web Editor: https://editor.p5js.org/projects/HyEDRsPel

Hammer.js. (n.d.). Retrieved from Swipe Recognizer: http://hammerjs.github.io/recognizer-swipe/

Pajitnov, A. (1984, June 6). Tetris Analysis. Retrieved from http://cmugameresearchlibrary.pbworks.com/w/page/3984534/Tetris%20Analysis

Phanichphant, P. (n.d.). Experiments with P5.js. Retrieved from http://purin.co/Experiments-with-P5-js

Phanichphant, P. (n.d.). Optical Maze. Retrieved from http://p5js.site44.com/019/index.html

THE MARRIAGE OF DIGITAL AND ANALOG: THE FUTURE OF GAMING? (2016, November 11). Retrieved from https://cmon.com/news/the-marriage-of-digital-and-analog-the-future-of-gaming

Stern, Luke, and Sam Wander. (2015). Game of Phones. Retrieved from https://boardgamegeek.com/boardgame/179701/game-phones

Wright, Will. (February 2, 1989) SimCity. DOS, Maxis

Experiment 1: Wake them up!

Wake them up! is an interactive experience with a family of Sleepy Monsters displayed across multiple screens, that wake up with pre-programmed, randomly assorted mobile user-interactions.

Manisha Laroia & Rittika Basu

Project Mentors
Kate Hartman & Nick Puckett

The experience consisted of many virtual ‘Sleepy Monsters’, and the participant’s task was to ‘wake them up’ by interacting with them. The experiment was an attempt to assign personalities and emotions to smartphones and create delight through the interactions.

The participants were organized into four groups and assigned a QR code each. They had to scan it, wake up the monster, keep it awake, and move to the next table to wake up the next monster. Eventually they would have woken up all four monsters and collected them all.

For the multiscreen aspect of the experience, we created four Sleepy Monster applications, each with its own colour, hint, and wake-up gesture. Each Sleepy Monster was programmed to pick a colour from a predefined array of colours in setup, so that when the code was loaded onto a mobile phone, each of the 20 screens would show a differently coloured monster. For each one, we added an indicative response, a pre-programmed reaction of the application to a particular gesture, to tell users whether or not a given gesture was the one that works for this Monster, so they could try a different one. Participants tried various smartphone interactions, which involved speaking to, shaking, running with, and tapping the screen. The monsters responded differently to different inputs. There were four versions of the monster for mobile devices, and one was created for the laptop as a bonus.

Sleepy Monster 1
Response: Angry face with changing red shades of the background
Wake up gesture: Rotation in the X-axis

Sleepy Monster 2
Response: Eyes open a bit when touch detected
Wake up gesture: 4 finger Multitouch

Sleepy Monster 3
Response: Noo#! text displays on Touch
Wake up gesture: Tap in a specific pixel area (top left corner)

Sleepy Monster 4
Response: zzz text displays on Touch
Wake up gesture: Acceleration in X-axis causes eyes to open

*Sleepy Monster 5
We also created a web application as an attempt to experiment with keyboard input as a way to interact with the virtual Sleepy Monster. Pressing the ‘O’ key would wake up the monster.
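The wake-up conditions listed above could be expressed as simple predicates over p5's sensor and touch values (the thresholds and names here are illustrative assumptions, not the project's actual code):

```javascript
// Each monster wakes only when its own gesture test passes. The sensor
// object stands in for p5 globals like rotationX, accelerationX, touches.
const WAKE_TESTS = {
  monster1: s => Math.abs(s.rotationX) > 30,     // rotation in the X-axis
  monster2: s => s.touches >= 4,                 // 4-finger multitouch
  monster3: s => s.touchX < 50 && s.touchY < 50, // tap in top-left corner
  monster4: s => Math.abs(s.accelerationX) > 15, // acceleration in X-axis
};

function isAwake(monster, sensors) {
  return WAKE_TESTS[monster](sensors);
}
```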

The four Sleepy Monsters and the interactions inbuilt:

The participant experience:

Github link for the codes





Project Context
Moving to grad school thousands of miles away from home started off with excitement, but also with unexpectedly irregular sleep patterns. Many of the international students were found napping on the softest green couch in the studio, sipping cups of coffee like a magic potion and hoping for it to work! Amongst them were two sleepy heads, us (Manisha and Rittika), perpetually yawning, trying to wrap our heads around p5.

The idea stemmed from us joking about creating a wall of phones, each displaying a yawning monster, and seeing its effect on the viewer. Building on that, we thought of vertical gardens with sleeping animals that awaken with different user interactions, or having twenty sleeping phones with each user figuring out how to wake their phone up. Eventually, we narrowed it down to twenty Sleepy Monsters, with each participant trying out different interactions with their phone to wake them up!


The way phones are integrated into our lives today, they are not just mere devices but more like individual electronic beings that we wake up, talk to, play with and can’t live without. No wonder we feel we’ve lost part of ourselves when we forget to bring our smartphone along (Suri, 2013). We wanted the user to interact with their Sleepy Monster (on the phone) and experience the emotions of the monster: getting angry if woken up, saying NO NO if tapped, refusing to wake up unless they had discovered the one gesture that would cause it to open its eyes. Adding a personality to their personal device was an attempt to humanize it. The experience was meant to create a moment of delight once the user is able to wake up the Sleepy Monster, and to instill the excitement of now having a fun virtual creature in their pocket to play with or collect. The wake-up-and-collect element of the experience was inspired by the cat-collecting easter egg game on Android Nougat and the Pokémon Go mania for collecting virtual Pokémon.


By assigning personalities to the Monsters and having users interact with them, it was interesting to see the different ways the users tried to wake them.

From shouting WAKE UP! at their phones, to poking the virtual eyes, to vigorously shaking them, it was interesting to see users employ methods they would usually use on people.

The next steps with these Sleepy Monsters could be a playful application to collect them, or morning alarms or maybe do-not-disturb(DND) features for device screens.

Day 1: We used the ‘Create your Portrait’ exercise as a starting point to build our understanding of coding. Both of us had limited knowledge of programming and we decided to use the first few days to actively try our hand at p5 programming, trying to understand different functions, the possibilities of the process and understanding the logic. Key resources for this stage were The Coding Train youtube videos by Daniel Shiffman and the book Make: Getting Started with p5.js by Lauren McCarthy.


Day 3: Concept brainstorming led us to questions about the various activities we could implement and what functions were possible. We spent the next few days exploring different interactivity and writing short code sketches based on the Reference section of the p5js.org website. Some early concepts revolved around creating a fitness challenge, a music-integrated experience, picture puzzles, math puzzle games, or digital versions of conventional games like tic-tac-toe, catch-ball or ludo.


Day 6: We did a second brainstorm, now with a clearer picture of the possibilities within the project scope. A lot of our early ideas were tending towards networking, but through this brainstorm we looked at ways in which we could replace the networking aspects with actual people-to-people interactions. Once we had the virtual Sleepy Monster concept narrowed down, we started defining the possible interactions we could build for the mobile interface.


Day 8: We sketched out the Monster faces for the visual interface and prototyped them in p5. In parallel, we programmed the interactions as individual pieces of code to try out each of them: acceleration mapped to eye-opening, rotation mapped to eye-opening, multitouch mapped to eye-opening, audio playback, and random colour selection on setup.

Day 10: The next step involved combining the interactions into one final code, where the interactions would execute as per conditions defined in the combined code. This stage involved a lot of trial and error, as we would write the code and then run it on different smartphones with varying OSes and browsers.

Day 10-15 : A large portion of our efforts in this leg of the project was focussed on bug fixing and preparing elements(presentation, QR codes for scanning, steps for the demo & documenting the experience) for the final demo, simplifying the experience to fit everything in the allotted time of 7 minutes per team.

Getting the application to work in different browsers and on different operating systems was an unforeseen challenge we faced during code trials. The same problem popped up during the project demo: on Android it worked best in Firefox, and on iOS it worked best in Chrome.

Another challenge was seamlessly coordinating the experience for 20 people. We did not anticipate the chaos or irregularity that comes with multiple people interacting with multiple screens.

A further issue came up with audio playback. We had incorporated a snoring sound for the Sleepy Monster to play in the background when the application loaded. The sound playback worked well in Firefox on Android devices but didn’t run in Chrome or on iOS devices. On iOS, the application stopped running, with a Loading… message appearing each time.

Defining absolute values for acceleration and rotation sensor data
Randomizing the background colour on each setup of the code
Executing multiple smartphone interactions: acceleration, rotation, touch, multitouch, device shake, and pixel-area-defined touches
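Two of the techniques above can be sketched in plain JavaScript. These are hypothetical illustrations (function names and values are ours), showing the general shape of taking absolute sensor values and picking a random background colour once per load, as p5.js random() would when called from setup():

```javascript
// 1) Absolute sensor values: treat movement in either direction the same,
// so a shake left registers just like a shake right.
function absoluteShake(ax, ay, az) {
  return Math.abs(ax) + Math.abs(ay) + Math.abs(az);
}

// 2) Random background colour chosen once per load. `rand` is injectable
// so the choice can be tested deterministically.
function randomBackground(palette, rand = Math.random) {
  return palette[Math.floor(rand() * palette.length)];
}
```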

Meet the Sleepy Monsters by scanning the QR codes


    1. “Naoto Fukasawa & Jane Fulton Suri On Smartphones As Social Cues, Soup As A Metaphor For Design, The Downside Of 3D Printing And More”. Core77, 2013, https://www.core77.com/posts/25052/Naoto-Fukasawa-n-Jane-Fulton-Suri-on-Smartphones-as-Social-Cues-Soup-as-a-Metaphor-for-Design-the-Downside-of-3D-Printing-and-More.
    2. McCarthy, Lauren et al. Getting Started with p5.js. Maker Media, Inc., 2015, pp. 1-183. https://ebookcentral.proquest.com/lib/oculocad-ebooks/reader.action?docID=4333728
    3. Henry, Alan. “How To Play Google’s Secret Neko Atsume-Style Easter Egg In Android Nougat”. Lifehacker.com, 2016, https://lifehacker.com/how-to-play-googles-secret-neko-atsume-style-easter-egg-1786123017
    4. Pokémon Go. Niantic, Nintendo and The Pokémon Company, 2016.
    5. “Thoughtless Acts?”. Ideo.com, 2005, https://www.ideo.com/post/thoughtless-acts
    6. Rosini, Niccolo et al. “‘Personality-Friendly’ Objects: A New Paradigm for Human-Machine Interaction”. IARIA, ACHI 2016: The Ninth International Conference on Advances in Computer-Human Interactions, 2016.
    7. Wang, Tiffine, and Freddy Dopfel. “Personality of Things”. TechCrunch, 2019, https://techcrunch.com/2019/07/13/personality-of-things/
    8. The Coding Train. 3.3: Events (mousePressed, keyPressed) – Processing Tutorial. 2015, https://www.youtube.com/watch?v=UvSjtiW-RH8
    9. The Coding Train. 7.1: What Is an Array? – p5.js Tutorial. 2015, https://www.youtube.com/watch?v=VIQoUghHSxU
    10. The Coding Train. 2.3: JavaScript Objects – p5.js Tutorial. 2015, https://www.youtube.com/watch?v=-e5h4IGKZRY
    11. The Coding Train. 5.1: Function Basics – p5.js Tutorial. 2015, https://www.youtube.com/watch?v=wRHAitGzBrg
    12. The Coding Train. p5.js Random Array Requests (whatToEat). 2016, https://www.youtube.com/watch?v=iCXBNKC6Wjw
    13. “Learn | p5.js”. p5js.org, 2019, https://p5js.org/learn/interactivity.html
    14. Puckett, Nick. “Phone Scale”. 2019. https://editor.p5js.org/npuckett/sketches/frf9F_BBA

Experiment 1 – Echosystem


Group: Masha, Liam, Arsalan

Code: https://github.com/lclrke/Echosystem

Project description

This installation involves 20+ screens and participants that create a network through sound. Incoming sound is measured by the devices, and this data influences the visual and auditory aspects of the installation. Sound data is used as a variable within functions to affect the size and shape of the visuals. Audio synthesis in p5.js creates sound responsive to the participants’ input, and the features of the oscillators are also determined by data from the external audio input.
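The amplitude-to-visuals idea can be sketched as follows. This is a hedged illustration (helper names and ranges are ours): p5.sound's Amplitude.getLevel() returns a level of roughly 0..1, which is rescaled to a drawable size in the same way p5.js's map() function would do it:

```javascript
// Linearly rescale a value from one range to another, mirroring p5.js map().
function mapRange(v, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * ((v - inMin) / (inMax - inMin));
}

// Quiet input gives a small shape, loud input a large one.
function diameterFromLevel(level) {
  return mapRange(level, 0, 1, 20, 400);
}
```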

While the network depends on our participation, the devices concurrently relay messages through audio data. After we start the “conversation”, there is a cascading effect as the screens interact with each other through sound, creating a bidirectional communication network via analog transmissions.

Visually, every element on the screen is affected by participant and device interactions. We created a voice-synchronized procession of lines, colour and sound that highlights and explores sound as a drawn experience. The installation is continuously changing at each moment: the incoming audio data influences how each segment is drawn in terms of shape, number of lines and scale. This is in contrast to a drawing or painting, which is largely fixed in time, and creates an opportunity to draw with voice and sound. Through interaction, the participants are able to affect the majority of the piece, bridging installation and performance art.


Week 1

The aim of our early experiments was to create connections between participants through the devices rather than an external dialogue. We started by brainstorming various ideas and found there were two directions:

  1. A game or a play, which would involve and entertain participants;
  2. An audio/visual installation based on interaction between participants and devices.

First, we planned to create something funny and entertaining and sketched some ideas for the first direction.

OCAD Thesis Generator: Participants would generate a random nonsensical thesis and subsequently have to defend it.


Prototype: https://editor.p5js.org/liamclrke/present/9fBGEz9CH

Racing: Similar to slot car racing, you have to hum to stay within a certain speed in order to not crash. Too quiet and you’ll lose.

Inspiration: https://joeym.sgedu.site/ATK302/p5/skate_speed/index.html

Design Against Humanity: Screens used as cards. Each screen is a random object when pressed. Have to come up with the product after. Ex. “linen” & “waterspout” → so what does this do?

Panda Daycare: Pandas are set to cry at random intervals. Have to shake/interact with them to make them not cry.

Sketch: https://editor.p5js.org/liamclrke/present/MpxLmb1jQ


Week 2

After further exploring P5.js, we decided we were more interested in creating an interactive installation rather than a game.  

Raw notes/ideas for installation:

Wave Machine: An array of screens would form an ocean. Using amplitude measurement from incoming sound, the ocean would get rougher depending on the level of noise. Moving across an array of screens making noise would create a wave.

Free-form Installation: Participants activate random sounds, images and video with touch and voice. Content includes words in different languages, bright videos and gradients, and various sounds. (This idea was developed into the final version of the experiment.)

Week 3 

We agreed to work on the art installation, which involves sounds, images and videos affected by participant interaction. An installation project seemed more attractive and closer to our interests than a game. We figured we could combine our skills to create a stronger project and function as a cohesive team.

That week we produced graphic sketches and short videos, and chose the sounds we wanted to use in the project:



At this step, we took inspiration from James Turrell and his work with light and gradients.

Week 4

Uploading too many images, sounds and videos made the code run slowly on devices with less processing power. We changed the concept to one visual sketch and used p5.js audio synthesis.

We were looking for a modular shape that expressed the sound in an interesting way, apart from a directly representative waveform. We started with complicated gradients that overtaxed the processors of mobile phones, so we dialed down certain variables in the draw function. Line segment density was a factor of amplitude multiplied by a variable, which we lowered until the image could be processed without latency.
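The density calculation described above can be sketched like this (a hypothetical illustration; the variable names are ours, and the density factor was the value we lowered by trial until phones kept up):

```javascript
// Number of line segments drawn each frame: amplitude (0..1) times a
// density factor, floored at 1 so something always draws.
function lineCount(amplitude, densityFactor) {
  return Math.max(1, Math.round(amplitude * densityFactor));
}
```

Lowering `densityFactor` trades visual richness for frame rate, which is exactly the tuning knob we needed for older phones.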

The final image is a linear abstraction, drawn through external and internal sound.

Project Concept:

The project was inspired by multiple artworks.

Voice Array by Rafael Lozano-Hemmer: When a participant speaks into an intercom, the audio is recorded and the resulting waveform is represented in a light array. As more speech is recorded, the waveforms are pushed down the horizontal array, playing back 288 previous recordings. When a recording reaches the end, it is released as a solo clip. This inspired us to use audio as a way to sync devices without a network connection.


Paul Sharits: Screens: Viewing Media Installation Art by Kate Mondloch was used for research, and within it we discovered Paul Sharits’ work with screens. Sharits was known for avant-garde filmmaking, often presented across multiple screens and accompanied by experimental audio. We took this concept and reformatted it into an interactive design.


Manfred Mohr: Mohr is a pioneer of digital art who uses algorithms to create complex structures and shapes. The visual simplicity driven by more complex underlying theory was a creative driver for the first iteration of Echosystem.


Challenges and solutions:

  1. The first challenge was lag from overloading processors with multiple video, sound and image files. These files slowed down the code, especially on phones. Therefore, we decided to use p5.sound synthesis and creative coding to draw the image.
  2. The first sketches were based only on touch, which did not create a strong enough interaction between participants, so the solution was to add voice and sound, which affect the characteristics (amplitude and pitch) of the oscillators.
  3. In previous ideas it was difficult to affect videos and images (scaling and filters), so we created a simplified image in p5.js consisting of lines of different colours. This step allowed us to vary the number of lines drawn by audio input data.
  4. In the beginning, to organize the physical space, we planned to build a round stand for the devices. This would create a circle and bring participants together around the installation. However, the different sizes and weights of the devices complicated construction.

  5. Another idea was to hang the screens from the ceiling, but the construction was too heavy. Without the right equipment, we simplified these concepts and used flat horizontal surfaces to place the screens, so the number and size of devices was not limited.

  6. The synthesizer built in p5.js led to a number of challenges. The audible low and high ends of a tablet differed greatly from a phone, resulting in certain frequencies sounding unpleasant depending on the device’s speaker. Through trial and error, we narrowed the pitch range that could be modulated by audio input for maximum clarity across multiple devices. There was also an issue of a continuous feedback loop, so the oscillator’s amplitude had to be calibrated in a similar fashion; the devices had to be within a certain distance range or they would feed back continuously. Finally, we added a low-pass filter as a fail-safe to control the sound, since the presentation setup would be less controlled than our tests.
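The calibration can be sketched as two clamped mappings. This is an illustrative sketch only (the ranges are ours, not the project's measured values): the mic level drives oscillator pitch only inside a band that sounded clear on both phone and tablet speakers, and the amplitude is capped to tame the feedback loop between nearby devices:

```javascript
function clamp(v, lo, hi) {
  return Math.min(hi, Math.max(lo, v));
}

// Mic level 0..1 modulates pitch only within a safe, clear-sounding band.
function oscFrequency(micLevel) {
  return clamp(220 + micLevel * 440, 220, 660); // Hz
}

// Amplitude is capped well below full volume so devices picking up each
// other's output don't spiral into runaway feedback.
function oscAmplitude(micLevel) {
  return clamp(micLevel * 0.6, 0, 0.4);
}
```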


Although we managed to involve 20 screens and classmates in the process of creating sounds and images, the design of the presentation’s logistics could have been more concrete. With preparation and set placement of screens, the project is highly scalable, far above 20 screens and participants.

The first question we asked upon receiving the assignment was whether we could overcome sync issues while keeping the devices off a network. Through the use of responsive sound we created an analog network, resulting in a visual installation that blurs the line between participant and artist.


  1. Early Abstractions (1947-1956) Pt. 3. https://www.youtube.com/watch?v=RrZxw1Jb9vA
  2. Mondloch, Kate. Screens: Viewing Media Installation Art. University of Minnesota Press, 2010.
  3. Shiffman, Daniel. 17.9: Sound Visualization: Graphing Amplitude – p5.js Sound Tutorial. https://youtu.be/jEwAMgcCgOA
  4. Sketches made in Processing by Takawo. https://www.openprocessing.org/sketch/451569
  5. Rafael Lozano-Hemmer – Various works. http://www.lozano-hemmer.com/projects.php
  6. United Visual Artists – Volume. https://www.uva.co.uk/features/volume
  7. Carsten Nicolai – unidisplay. https://collabcubed.com/2012/10/16/carsten-nicolai-unidisplay/
  8. James Turrell. http://jamesturrell.com/



Experiment 1 – Camp Fire

Project Title: Campfire
Team Members: Nilam Sari (No. 3180775) and Lilian Leung (No. 3180767)

Project Description:

Our experiment was an exploration of how we could create a multi-screen experience that would speak to the value of ‘unplugging’ and having a conscious, present discussion with our classmates, using the symbolism of the campfire.

From the beginning, we were both interested in creating an experience that would bring people together and be able to have a sort of digital detox and engage in deeper face-to-face conversation. We wanted to play along with the current trend of digital minimalism and the Hygge lifestyle focused on simpler living and creating deeper relationships.


While our project would only provide about a 10-minute reprieve from our connected lives, we wanted to bring to attention, while we’re in a digitally-led program, that face-to-face conversation and interaction is just as important for improving our ability to empathize with one another.

Visual inspiration was taken from the aesthetic of the campfire as well as the abstract shapes used in many meditative and mental health apps, such as Pause, developed by ustwo.


ustwo, Pause: Interactive Meditation (2015)

How it Works:

The sketch is laid out with three main components: the red and orange gradient background, the fire crackling audio, and the transparent visuals of fire. 

On load, the .mp3 audio file plays with a slow fade-in of the red and orange gradient background. The looped audio file’s volume is dependent on mic input, so more discussion from the participating group amplifies the volume. The fire graphics at the bottom adjust in size depending on the volume of the mic input, creating a flickering effect similar to a real campfire.
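The core of that loop can be sketched as a single function. This is a hedged illustration (helper names and ranges are ours, not the sketch's exact values): each frame, the mic level scales both the crackling loop's volume and the flame graphic's size:

```javascript
// Linearly rescale a value from one range to another, mirroring p5.js map().
function mapRange(v, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * ((v - inMin) / (inMax - inMin));
}

// Compute both outputs from one mic level reading per frame.
function fireState(micLevel) {
  return {
    volume: mapRange(micLevel, 0, 1, 0.1, 1.0),    // never fully silent
    flameScale: mapRange(micLevel, 0, 1, 0.8, 1.4) // flickers with conversation
  };
}
```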

To lower the volume and fade the fire, users can shake their devices; the acceleration of the x-axis causes the volume to lower and the tint of the images to decrease to 0. This motion recreates shaking out a lit match.

Development of the Experiment

September 16, 2019

Our initial thought about visually representing the campfire was to recreate an actual fire. However, we realized that since we intended to lay all the phones flat on a surface, and fire is seen from a vertical perspective, a literal flame wouldn’t read well, so we went with a more abstract approach using gradients.

The colours chosen were taken from the natural palette of fire, though we also explored a sense of contrast through the use of gradients.

Gradient Studies

Righini, E. (2017) Gradient Studies

Originally we tried building a gradient in RGB, but while digging into controlling the gradient and switching values, Lilian wasn’t yet comfortable working with multiple values once we needed them to change based on audio input levels.

Instead, we began developing a set of gradients we could use as transparent .png files. This gave us more control over their appearance and made the gradients more dynamic and easier to manipulate.


Initial testing of the .png gradients as a proof of concept worked: we managed to get the gradient image to grow using the mic AudioIn input.

While Lilian was working on the gradients of the fire, Nilam was trying to figure out how to add the microphone input and make the gradient correspond to the volume of the mic input. One of her solutions was mapping.

The louder the input volume, the higher the red value gets and the redder the screen becomes. This way we could change the background to a raster image and, instead of lowering the RGB value to 0 to create black, change its opacity to 0 to show the darker gradient image behind it.
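That mapping can be sketched as follows. This is a hedged illustration (the exact ranges are ours, not the final sketch's values): louder input pushes the red channel up, while opacity rather than brightness is mapped so the darker gradient image can show through:

```javascript
// Linearly rescale a value from one range to another, mirroring p5.js map().
function mapRange(v, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * ((v - inMin) / (inMax - inMin));
}

// Mic level 0..1 -> red channel 60..255: louder means redder.
function redFromVolume(level) {
  return Math.round(mapRange(level, 0, 1, 60, 255));
}

// Mic level 0..1 -> alpha 0..255: silence reveals the dark gradient behind.
function alphaFromVolume(level) {
  return Math.round(mapRange(level, 0, 1, 0, 255));
}
```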

Nilam made edits to Lilian’s version of the experiment and integrated the microphone input and mapping into the interface she had already developed.

September 19, 2019

Our Challenges

We were still trying to figure out why mic and audio input and output were working on our laptops but not on our phones. The translation of mic input into increases in the fire’s size seemed laggy, even after attempting to down-save our images.

On our mobile devices, the deviceShake function seemed to be working; while laggy on Firefox, running the sketch in Chrome provided better, more responsive results. Another issue was that once we started changing the tint transition in our sketch, deviceShake would sometimes stop working entirely.

We wanted a less abrupt, smoother transition from the microphone input, so we tried to figure out whether there were functions like delay. We couldn’t find anything, so we decided to try using an if statement instead of mapping.
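One common way to get that kind of smoother transition, had we found it at the time, is per-frame easing rather than a delay. This is a sketch of the general technique, not the code we actually shipped:

```javascript
// Exponential smoothing: each frame, move the displayed value a fraction of
// the way toward the target, so sudden mic spikes fade in gradually instead
// of jumping.
function smooth(current, target, easing = 0.05) {
  return current + (target - current) * easing;
}
```

Calling `smooth()` every frame in draw() with the latest mic level as `target` would give the fire a gentle swell instead of an abrupt jump.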

We found out from our Google searches that there may be a bug that stopped certain p5.js functions like deviceShaken from working after the iOS update this past summer: while laggy, it still worked on Lilian’s Android Pixel 3 phone, but it never worked on Nilam’s iPhone.

Audio Output

          Nilam (iPhone 6)   Lilian (Pixel 3)
Chrome    No                 Yes
Firefox   No                 Yes
Safari    No                 N/A

deviceShaken Function

          Nilam (iPhone 6)   Lilian (Pixel 3)
Chrome    No                 Yes
Firefox   No                 No
Safari    No                 N/A

Additionally, Lilian started exploring mobile rotation and acceleration to finesse the functionality of the experiment, and began looking at how we could incorporate noise values to recreate organic movement. We were inspired by these examples using Perlin noise.


To add the new noise graphic, we used the createGraphics() and clear() functions to create an invisible canvas on top of the gradient that lets the bezier curves leave trails, so they look like a flame. It clears itself and repeats the process after every 600 frames to reduce loading problems in the sketch.
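The reset logic reduces to a simple frame-count check (a sketch with the p5.js drawing calls omitted; the function name is ours):

```javascript
// The overlay graphics buffer is cleared once every `interval` frames so the
// bezier trails don't accumulate forever and bog the sketch down.
function shouldClear(frameCount, interval = 600) {
  return frameCount > 0 && frameCount % interval === 0;
}
```

In draw(), this would gate a call to the overlay buffer's clear() before the next trail segment is drawn.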

September 21, 2019

After reviewing our code, we realized some of our audio issues were caused by Chrome’s privacy restriction disabling auto-playing audio. Our mic problem arose because we had placed our code within the setup() function, where it runs only once; once we moved it into the draw() function, the audio worked much better.

September 23, 2019 – The Phone Stand


After getting feedback on our prototype, we started creating a stand on which we could place everyone’s phones during the presentation. We laid out two rows of stands, the outer circle holding 12 phones and the inner circle holding 8, as we explored how we could better recreate the ‘fire’ in our multi-screen presentation.


We started by sketching the layout for the phone stand. The slot size is based on the widest phone in our class. We then went to the Maker Lab, drilled into the circular foam, and chiselled out the middle sections to create indents the phones could sit within.


The next step was to apply a finish to the foam. We used black matte spray paint to cover it. The foam deteriorated a little from the aerosol of the spray paint, which we foresaw, but after a test coat it didn’t seem to damage the structure much, so we decided to proceed.

September 26, 2019 – deviceShaken to accelerationX


Finding that the mobile deviceShake event wasn’t working, Lilian created a new sketch testing the opacity and audio level using accelerationX as the new variable. The goal was to test whether changes in acceleration caused the audio volume to decrease and the images to fade out. accelerationX seemed to provide more consistent results and was added into the main Experiment 1 sketch.
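The accelerationX fade can be sketched as a per-frame step. This is a hedged illustration (the threshold and step values are ours, not the sketch's exact numbers): when the phone is shaken hard enough along the x-axis, volume and tint step down toward zero, like shaking out a lit match:

```javascript
// One frame of the fade: if |accX| exceeds the threshold, decrement volume
// and tint; otherwise leave the state untouched.
function fadeStep(state, accX, threshold = 5, step = 5) {
  if (Math.abs(accX) <= threshold) return state;
  return {
    volume: Math.max(0, state.volume - step / 255), // volume is 0..1
    tint: Math.max(0, state.tint - step)            // tint is 0..255
  };
}
```

Repeated shakes keep calling this from draw() until both values bottom out at zero and the fire is fully dimmed.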

User Flow

This experiment is primarily conversation-led. Setup by the facilitators involves creating a wide open space for everyone to sit and dimming or turning off the lights to recreate a night scene. Users are then asked to load the p5 experiment and join together in a circular formation in the room.

Users should allow the fire animation to load and place their phone into the circular phone stand. The joined phones coming together recreate the digital campfire. Facilitators can then speak to the importance of coming together and face-to-face conversation.

The session can run as long as needed. When the session is finished, users can shake their phones to dim the fire and lower the volume of the fire crackling audio.

However, the spray paint did have an impact on the foam around the slots: it melted the foam and the paint wouldn’t dry. For future reference, we could have used gesso before the spray paint, but for this iteration we had to improvise and used paper towels.

The Presentation


Photo Credit: Nick Puckett





The code for our P5 Experiment can be viewed on GitHub

Project Context

The project’s concept came from our mutual interest in creating a multi-screen experience that would bring classmates together in an exercise, rather than being just an experiment using p5 events. After brainstorming a couple of ideas and possibilities within our limited personal experience with programming, we came up with the idea of ‘unplugging’ and giving our full attention to the people around us without the distraction of devices, except that it is facilitated by screens.

We wanted the experience to be about ‘unplugging’ to recognize the value (even within a digital media program) that time away from screens is just as beneficial and an opportunity for self-reflection. While technology allows us to extend ourselves within the virtual space, there are also many consequences to our real life relationships and physical composure.

As described in the Fast Company article What Really Happens To Your Brain And Body During A Digital Detox (2015), experts explain that our connectedness with our digital devices alters our ability to empathize, read each other’s emotions, and maintain eye contact in real-life interactions.

After our presentation, we looked at Sherry Turkle’s work in Reclaiming Conversation: The Power of Talk in the Digital Age (2016). Turkle describes face-to-face conversation as the most humanizing method of communication, one that allows us to develop a capacity for empathy. People use their phones for the illusion of companionship when real-life relationships feel lacking, and our connectedness online leads us to discount the potential for empathy and intimacy in face-to-face conversation.

We chose a campfire as the visual inspiration for our p5 sketch because of the casual ritual it represents today, providing both warmth and comfort while people connect with nature. Fire is pervasive across all of human history, but within the present context we use it as a symbol of voluntarily disconnecting from technology and giving one’s self the opportunity to nurture relationships with nature and those close to us.

Expanding upon the campfire into the ceremonial practice of a bonfire, fire has been used throughout history as a method of bringing individuals together for a common goal, whether for celebrations or folklore customs.

Rather than working with the literal visual depiction of fire, we chose to take visual cues from mobile meditation apps. 

We don’t believe our experiment provides all these benefits, but we wanted to use it as a reminder, while we’re in a digitally-led program, that face-to-face interaction is just as important. By providing each of our classmates a moment of self-reflection, we’re given a unique opportunity to evaluate what we would like to offer one another and to create.

We think our presentation helped our classmates take a break from the hecticness of constantly having to look at multiple screens while working on their Experiment 1 projects. One piece of feedback we received was that the presentation would have been more successful towards the end of class, when everybody had spent more time looking at screens.

If we were to develop the experiment further, we could explore using the phone’s camera input to dim the fire based on eye contact, encouraging users to look away from their screens while in conversation. Further improvements could include the functionality to blow on the phone to dim the fire, which would require ranges of mic input to distinguish between conversation and blowing on the phone.


2D and 3D Perlin noise. (n.d.). Retrieved September 21, 2019, from https://genekogan.com/code/p5js-perlin-noise/.

Righini, Evgeniya. “Gradient Studies.” Behance, 2017, www.behance.net/gallery/51830921/Gradient-Studies.

Turkle, S. (2016). Reclaiming conversation: The power of talk in a digital age. NY, NY: Penguin Books.

ustwo. (2015) Pause: Interactive Meditation, apps.apple.com/ca/app/pause-interactive-meditation/id991764216.