Category: Experiment 4

Sound and Heart Beats – Interactive Mat

Music Beats & Heart Beats, by Alicia Blakey


Music Beats & Heart Beats is an interactive installation that allows users to wirelessly send sounds and interact with a digital record player. Through the installation, you can send someone either a sound beat or a heartbeat. Listening to certain music, or to the sound of a loved one's heartbeat, has been shown to help improve mood and reduce anxiety.

When a user opens the application connected to the interactive record player, they can see when others are playing songs. The digital record player starts spinning when a user interacts with the app that corresponds to the installation, and LED lights at the pin of the record player indicate that music is being played. The interaction can also be initiated through touch sensors.

This art installation also conceptualizes the experience of taking a moment to engage your senses of hearing and touch: to have fun, take a few minutes out of your day to feel good, and listen to sounds that are good for your body and mind.






Initially, I had a few variations of this idea that encompassed the visuals of music vibrations and heartbeat blips. After the first iteration, the art and practice of putting on a record engaged with the act of listening more. The visual aspect of watching a record play is captivating in itself; I always notice that after someone puts on a record, they stay and watch it spin. There is something mesmerizing about the intrinsic components of this motion. I wanted to create an interaction that was more responsive with colour, light, and sound. Expanding on the cyclical nature of the turntable as a visual, the intent was to create an environment.







While choosing materials, I decided to use a force-sensitive resistor (FSR) with a round, 0.5″-diameter sensing area. The FSR varies its resistance depending on how much pressure is applied to the sensing area: the harder the force, the lower the resistance. When no pressure is applied, its resistance is larger than 1 MΩ, and it can sense applied force anywhere in the range of 100 g to 10 kg. I also used a WS2812B NeoPixel strip enveloped in plastic tubing. The LED strip required 5 V power while the Feather controller required 3 V power, so to make running power along the board easier I used an AC-to-DC converter that supplied 3 V and 5 V along the two sides of the breadboard.
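As a rough sketch of the electronics described above: the FSR and a fixed 10k resistor form a voltage divider, and the analog reading can be converted back into the FSR's resistance. The pin wiring, the 12-bit ADC resolution of the ESP32, and all names here are assumptions for illustration, written in JavaScript for consistency with the rest of the project's code.

```javascript
// Assumed constants for a Feather ESP32 reading an FSR through a
// 10k pull-down resistor (a hypothetical sketch, not the actual firmware).
const VCC = 3.3;        // Feather logic voltage
const R_FIXED = 10000;  // fixed 10k resistor in the divider
const ADC_MAX = 4095;   // 12-bit ADC full-scale reading

// Convert a raw ADC reading (taken across the fixed resistor) into the
// FSR's resistance in ohms. Lower resistance means harder pressure.
function fsrResistance(adcReading) {
  const vOut = (adcReading / ADC_MAX) * VCC;
  if (vOut === 0) return Infinity; // no pressure: resistance above 1 MΩ
  return (R_FIXED * (VCC - vOut)) / vOut;
}
```

The same divider math applies on any microcontroller; only `VCC` and `ADC_MAX` change.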





During testing, it proved more effective to have the video sequence sit over the controller by changing the z-index styling. My next step was to apply a mask style over the whole desktop page to prevent clicks from altering the p5 sketch. I styled controller.js to be in the same location on both desktop and mobile so it could share PubNub x/y click locations. The media.js file would connect with controller.js for play and stop commands. One of the initial issues was a long loading time for the mobile client; the solution was to set a variable with inline JavaScript that stops the mobile client from running the onload audio function. The mobile and desktop sites worked on Android but not on iPhone: PubNub would initiate on Android phones, but in the end I could not debug the iOS issue. If the desktop HTML page was loading its media.js while a mobile client was trying to communicate with it, the result was unexpected behaviour. A possible solution would be a callback in the desktop page that tells the mobile client it has loaded.
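A minimal sketch of that proposed callback fix, with the actual PubNub wiring left out and hypothetical names throughout: the mobile client queues its messages until the desktop announces that media.js has finished loading, avoiding the race described above.

```javascript
// Hypothetical sketch: wrap the mobile client's publish function so that
// messages are held until the desktop's "I am loaded" notice arrives.
function makeMobileSender(publish) {
  let desktopReady = false;
  const queue = [];
  return {
    // Called when the desktop publishes its "loaded" callback message.
    onDesktopReady() {
      desktopReady = true;
      while (queue.length) publish(queue.shift()); // flush held messages
    },
    // Called for every interaction; holds messages until the desktop is up.
    send(msg) {
      if (desktopReady) publish(msg);
      else queue.push(msg);
    },
  };
}
```

The desktop side would simply publish its "loaded" message at the end of media.js setup.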







  • Breadboard
  • Jumper cables
  • Flex, Force, & Load Sensor x 3
  • YwRobot Power Supply
  • Adafruit Feather ESP32
  • Wire
  • 4×8 Canvas Material
  • Optoma Projector
  • 6 x 10kΩ resistors
  • 3.2 ft plastic tubing


I decided to use a breadboard instead of a protoboard this time because the interactive touch-sensitive mat was large. In order for the prototype to remain mobile, I needed to be able to disconnect the LEDs and power converter; it was easier to roll the mat up this way and quickly reconnect everything. Since I was running over 60 LEDs, I used a 9-volt power supply to run through the converter. I originally tested with the 3.7k resistors but found the sensors were not very responsive. I then replaced them with the 10k resistors, and the mat's sensitivity improved greatly and became more accurate.
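As a back-of-envelope check on why the strip needed its own supply: WS2812B LEDs are commonly rated at up to about 60 mA each at full white, so a strip of 60 can draw several amps at 5 V. A tiny sketch of that arithmetic (the per-LED figure is a typical datasheet value, not measured on this build):

```javascript
// Worst-case current draw for a WS2812B strip, assuming the commonly
// cited figure of up to 60 mA per LED at full white brightness.
function stripCurrentAmps(ledCount, mAPerLed = 60) {
  return (ledCount * mAPerLed) / 1000;
}

const worstCase = stripCurrentAmps(60); // 3.6 A at 5 V, worst case
```

In practice animations rarely hit full white on every pixel, but the supply and converter still need headroom well beyond what a microcontroller pin can provide.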


The outcome of my project was interesting: people were really absorbed in just watching the video projected onto the interactive mat. Being able to control the LEDs was a secondary approach that users seemed to enjoy, but just watching the playback while listening to music seemed to induce a state of calm and happiness. The feedback and response to the installation was very positive; it was noted that the projection was hypnotic in nature, and the installation was designed to bring a state of calm and enjoyment. Although the LEDs were very responsive with the touch sensors, there was some flicker on the LEDs, I think due to the converter dwindling. I had purchased it used, but after this experience with the YwRobot converter I would buy new ones for other projects. Other comments suggested adding another interaction to the p5.js sketch to let users control the motion of the record in the video with the sensors. The overall reaction was very promising for this prototype. I'm extremely happy with the conclusion of this project: there was the definitive emotional reaction that the project was designed for.













Space Quest Pizza

By Ladan, Shikhar, and Peiheng

Project description:

Space Quest Pizza is an endless arcade game with a simple goal (with a twist): collect the pizza and avoid the enemies. The goal may be simple, but it is the interactions that add difficulty to the game, as multiple players control the same character and have to work together to survive.


GitHub Link:



1. Brainstorming:


Initially with this project, we wanted to explore how to combine music and visualizers into a game. We decided as a group to work with the strengths we had, which were game design and visual design.

In our meeting, we talked about ideas we could execute: a music-based dungeon crawler, or a MIDI-controller game (Guitar Hero inspired). We discussed pros and cons, as well as feasibility and the skills each of us could contribute to the project.

Finally, we had a direction. We wanted to create a game that was aesthetically strong and incorporated music somehow. Our initial game idea was a dungeon crawler that incorporated an aspect of cooperation that players had to figure out as they played. The music drove the aesthetic and rhythm of the game: every time a player got a power-up, it would change the genre of the music playing as well as the aesthetic of the game. We named it DISCHORD.

2. Dischord game

These are the initial sketches for our first game idea, Dischord.


In the first ideation session, we talked about the things we would like to explore with this project. There was an interest in gameplay and music, and we came up with a dungeon crawler that was driven aesthetically and dynamically by music.

We started out building the game with the p5.play games library. We initially created the game before we were fully introduced to PubNub, thinking we would just build the game out fully and incorporate the network afterwards, not realizing that the network would affect the type of messages we could send.

In our first meeting/question session with Nick and Kate, we realized we couldn't just incorporate PubNub into the backend after creating the game. Both Nick and Kate made suggestions on how to move forward; the one we felt we could execute in the time we had was to create a browser-based game with a phone controller. Once we had a pivot point, we started trying to fold their suggestion into the game we already had, but found it difficult to connect it with PubNub.

We were not able to get the p5.play library to work with PubNub, so we decided to start from scratch. We found a piece of code that incorporated a second-screen controller, which moved a ball using the device's built-in accelerometer. After talking about what kind of interaction we wanted, we simplified it to multiple players controlling one avatar, with the controls reduced to moving a ball up and down on screen. With that simpler code skeleton we were able to start world building, define the goals of the game, and build out the rest of the interaction. It was at this point that we moved away from the Dischord concept and toward Space Quest Pizza.


3.  Space Quest Pizza

As mentioned above, we started to adjust the game architecture because of PubNub. This idea came into existence when our old game could not be achieved due to technical issues. We went to the game lab and spoke to them about interactions in games, and we also looked at a few games that used unique interactions in the context of networking. This is what drove us to the idea of Space Quest Pizza: a simple avoid-and-collect game that requires immense collaboration and dialogue between all players.


In the end, we decided that the game consists of a master screen and several mobile devices. Users can take any of the four buttons (up, down, left, and right) and use these different keys to control the movement of the game character on the master screen.

The game character needs to avoid the enemy characters that constantly appear in the game; if it touches an enemy, the character dies. It needs to try its best to catch falling pizzas and increase its score, which is displayed at the top of the master screen.

At the same time, on the first page of the game, we set two options: a two-person group and a four-person group. In the two-person group, the device interfaces get an up-down button and a left-right button; in the four-person group, players get up, down, left, and right buttons respectively. In addition, users need to scan the QR code on the first page of the master screen to get the button interface on their phone. The QR code does not come with any introduction; users need to press the button to find out which direction they have. We wanted this sense of mystery to make the game more fun and to get users working closely with each other from the beginning.


Coding process:

1. Game skeleton

The game idea was derived from our brainstorming, where we wanted a multiplayer game that was intense and in which music was a big element.

Dischord was supposed to be a multiplayer, music-based dungeon crawler where all players had the common objective of surviving and getting to the goal. All the obstacles and power-ups were based on three music genres (pop, rock, and R&B), and the obstacles in the game had a visualizer behaviour.

In terms of the code for Space Quest Pizza, we started off with p5.play to help us with collisions and sprite creation. After working on the level design, we realized that establishing a PubNub connection was not feasible.

We went back to the drawing board and changed the idea; that's where we started on Space Quest Pizza, this time without the p5.play library.

For the enemies, we used the keep function to allow them to chase the player. Each enemy targets an area around the player, which gives each of them a separate behaviour of its own.

For the collisions, we simply compared the x and y coordinate values of the enemies and the player.
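The two behaviours above can be sketched as pure functions: a chase step that nudges each enemy toward an offset point near the player (giving each enemy its own behaviour), and a collision test that compares x/y coordinates. All names and the box-overlap threshold are illustrative assumptions, not the actual game code.

```javascript
// Move an enemy one step toward its target point: the player's position
// plus a per-enemy offset, so each enemy chases a slightly different spot.
function chaseStep(enemy, player, speed) {
  const dx = player.x + enemy.offsetX - enemy.x;
  const dy = player.y + enemy.offsetY - enemy.y;
  const dist = Math.hypot(dx, dy);
  if (dist === 0) return enemy; // already at the target
  return {
    ...enemy,
    x: enemy.x + (dx / dist) * speed,
    y: enemy.y + (dy / dist) * speed,
  };
}

// Collision: compare x/y values, treating both sprites as size-by-size boxes.
function collides(a, b, size) {
  return Math.abs(a.x - b.x) < size && Math.abs(a.y - b.y) < size;
}
```

Calling `chaseStep` for every enemy on each frame, then `collides` against the player, reproduces the chase-and-die loop described above.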



Music was an important part of both of the game ideas we had. For Dischord, as seen in the video, the concept we were going for was that the environment moved according to the beat of the music; once the player got a power-up, it changed the gameplay environment's colour and rhythm. The video below shows the initial idea of how the environment would move based on the music. This was something we ultimately wanted to add to Space Quest Pizza but didn't have enough time for.

The finalized music added in the background of SQP was music that Ladan composed; once we heard it as a group, we agreed it went well with the gameplay.


Pubnub connection:

In the initial game plan, the connection was that when users played the game, their characters' actions would be displayed synchronously on all users' interfaces. We tried to complete this interaction by controlling different devices through PubNub. At the beginning, we had a relatively simple understanding of PubNub: we believed that data, images, and functions could all be transmitted through it. After many coding attempts and discussions with Kate, we found that PubNub could not transmit images or even animations, so this plan was undoubtedly difficult to implement.

So, as mentioned in the ideation section, Nick suggested that we adjust our idea to a more achievable plan. Instead of displaying the game synchronously across multiple devices, all users who enter the game control the movement of a single character on the master screen through a handheld device. The only messages sent across PubNub would be up, down, right, and left.

Therefore, in the final game, users click the buttons on the mobile phone screen to control the movement of a character on the master game screen, so as to score points by eating pizza or to avoid enemies. We set different values from 1 to 4 for the up, down, right, and left buttons. When PubNub transmits a value to the master game screen, the XY position of the character changes according to the received value, thus generating movement.
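The control scheme above can be sketched as a small lookup on the master screen: each received value from 1 to 4 becomes a change in the character's x/y position. The step size and the value-to-direction mapping here are assumptions for illustration, not the actual game code.

```javascript
// Assumed mapping from the published button value to a movement delta.
const STEP = 10;
const MOVES = {
  1: { dx: 0, dy: -STEP }, // up
  2: { dx: 0, dy: STEP },  // down
  3: { dx: -STEP, dy: 0 }, // left
  4: { dx: STEP, dy: 0 },  // right
};

// Called whenever a PubNub message carrying a button value arrives.
function applyMove(character, value) {
  const move = MOVES[value];
  if (!move) return character; // ignore unknown values
  return { x: character.x + move.dx, y: character.y + move.dy };
}
```

Keeping the payload down to a single small number is what makes the scheme fit PubNub's message-only transport.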


Visual design:

One of the major decisions that drove the design aesthetic was that our group members wanted to explore pixel art. Once we decided on the look and feel of the game, it was easy to go forward with designing the layout and characters. The reason we wanted to explore pixel art was to support the nostalgic feeling of video games; we also wanted to combine the retro arcade feel with the modern technology of networking. This would engender emotional resonance with our game.

We went through a few iterations of the background first.











We picked a space theme for the game, which was very nostalgic of the arcade feel of games like Galaga and Asteroids. After that we started on our pixel art iterations: we started simple and kept improving the visuals from the bottom up. The colour palette was a dark hue with a blue shade, and for the game entities we used a lighter palette so the main elements and background could be distinguished easily.


To make the game visually more relevant to the theme, we downloaded an Alien Language font from the internet and made different buttons to display on the phone.


User testing on the last day:


In the game mechanics, we planned to have the character earn one point for every pizza it ate, to improve the player's motivation. However, due to time and code constraints, this function was not fully implemented.

In addition, the QR codes are close to each other, and since there is no hint, when a group of people rush to try the game (as on presentation day), some people scan the same QR code, causing some confusion.

We received positive feedback on the look and feel of the game. One of the parts we could have improved was the achievement portion of the gameplay: it wasn't clear to players what they were supposed to do unless we told them, and it also wasn't clear when the game ended.

Future iterations:

We plan on adding a few new features in the game in terms of mechanics and interactions.

  • We plan to randomize the controls between the players at some point during the game, to add another level of complexity and have players collaborate to an even greater extent.
  • Showing how long the players stay alive, to make them feel like they are achieving something.
  • We plan on making this an exhibition game project where the controls would be split depending upon the number of people in the room. We plan on achieving this using an ultrasonic sensor at the entrance of the room, which can detect when someone enters.
  • For the next iteration we want to incorporate music into the gameplay, with sounds for when the player is moving as well as music in the background, plus a small visualization with the stars moving and twinkling to the music, to give the gameplay more dynamism.




The Flower of Life

Experiment 4: Network


Team Members: Naomi Shah, Carisa Antariksa, Mazin Chabayta


Project Description

‘The Flower of Life’ is a kinetic sculpture that represents the life and death rates of various countries around the world through two oppositely spinning disks. The installation attempts to visually represent populations across countries by the speed at which the disks spin, while an accompanying screen communicates the quantitative figures. The goal of this kinetic sculpture is to physicalize and merge the ideas of life and death into a single whole shape that is almost pulsating with life.

How it works

A user steps up to ‘The Flower of Life’ installation, enters the name of a country of their choice into one laptop, and hits ‘enter’. The quantitative population data of birth and death rates is then reflected on a second laptop screen. This, in turn, influences the installation, causing two disks mounted on 360-degree servo motors to start rotating in opposite directions, animating the information displayed on the screen. The orange disc represents births while the black disc represents deaths. ‘The Flower of Life’ signifies the eternal circle of life through its constantly rotating disks.


Digital Visualization of the Kinetic Sculpture



Before we split up into groups, a number of us from the cohort decided to approach forming teams differently for this experiment. We decided to meet and do a group brainstorming session so that we may collaboratively arrive at our concepts with each member contributing to as many ideas as possible. Over two days, we mapped out our disparate ideas and ‘workshopped’ with one another to build up our concepts. We developed around 6-7 strong ideas and finally narrowed it down to just a couple that we were interested in exploring. The concepts that were discussed and developed during our self-initiated workshop allowed us to form teams on the basis of what interested us personally.

From the beginning, our group showed interest in using physical computing as an element for this experiment, and after Cy Keener’s talk a few weeks ago, we were inspired to experiment with physical computing as a form of data physicalization. Furthermore, we were keen on experimenting with global data, brought into the classroom for participants to interact with.



Converging into an idea

Our ideation process began by first assessing the kind of dataset we wanted to visually represent through our installation. We also wanted our project to speak to everyone, regardless of where they are from or their background. Our cohort has people from many different parts of the world, and we wanted to bring everyone in front of our sculpture and have them form a relationship with it. Continuing that line of thought, we eventually found the type of data that has a presence inside every one of us: life and death, through population numbers.

Taking on the task of visualizing life and death can be tricky because they are subjective experiences that can have different meanings for each of us. However, life and death are natural occurrences that happen every second of every minute all around the world, and we wanted to communicate that. In addition, we wanted to avoid a depressing impact on our audience, so we believed the visual representation could not be unpleasant to look at. Eventually, we decided to use a traditional and well-recognized symbol for life, “the flower of life”: a symbol that has been used by different civilizations since the 9th century BC and that today is usually associated with life and birth.


Other iterations of the kinetic sculpture

final spiral

Final form of the kinetic sculpture


High Priority Tasks:

We broke up our tasks on the basis of the core elements of this installation and then categorized them into high and low priority. Our high priority tasks involved deciphering the code that would be the foundation of our project and building the basic prototype of the installation. Our low priority tasks would extend itself to refining the experience of interacting with the installation through beautification and conceptual layering. We regularly assessed our tasks against our time frame.

How to pull data from an API using p5js

This was the first time that any of us had worked with API data. There were two ways we could have approached this: we could either pull synchronous data from a locally hosted JSON file, or pull a simulation of real-time data from a URL API using asynchronous callbacks. The former would have required us to use PubNub to meet the networking requirement, while the latter gave us the option of not using PubNub at all. We needed to find data that best reflected our concept, or that allowed us to tweak the concept only minutely, and that would determine whether we would use a locally hosted JSON file or a URL API.

Translating quantitative data visually

We needed to make sure that we were pulling specific quantitative information from the API data in p5.js that would influence the kinetic sculpture via the Arduino. Our biggest priority was to make sure that the difference in numbers between life and death actually translated to the sculpture, with the disks moving at varying speeds to represent populations.

Designing and building the kinetic sculpture

While this project had immense possibilities for visual representation, we chose to keep our fabrication simple and work with materials from the university’s inventory, given the time constraint. We made quick iteration sketches to determine how it would look and how it should be built. We decided to start with just one pair of disks, but to make more for various countries if time permitted.

Low Priority Tasks:

Designing the web pages for input of country and output of quantitative data

We wanted to design the UI for the information input and output pages on both screens to aesthetically complement the installation ‘Flower of Life’. This would give the project an overall sense of finesse and completion while allowing for more engagement from participants.

Multiple disks for representation

We wanted to shortlist a selected number of countries and make a pair of disks for each, representing the population of each of these countries. This would have allowed us to create a more wholesome data visualization through the tangible exploration of data.


Once we had our concept ready, we started to plan our building process. We considered several options for presenting the kinetic sculpture, like hanging it on the wall or placing it on a pedestal. However, after realizing that the flower needed to be at eye level for the visual effect to be experienced by users, we eventually decided to build a pedestal about 6 ft high, made of wood for stability. Next, we quickly sketched the structure and all its parts and sourced most of them from our OCAD facilities.


Materials Used:

  • 20 inches of acrylic sheets in varying colors
  • Wood for pedestal and frame around rotating disks
  • 2x 360-degree servo motors
  • 1x Breadboard
  • 1x Arduino Micro
  • 1x External power source with wiring
  • Jumper wires


We started our fabrication process with a consultation with Reza, the Maker’s Lab manager, and based on his advice we made some crucial changes to the structure. He advised using a cross base rather than a square base, which made the structure lighter and easier to move around. At this point, we also decided to add a frame around the disks in order to hide the flatness of the acrylic disks and give more depth to the sculpture.


Fabrication Process 1

For the disks, we were limited to a maximum weight of 4.8 kg each, since this is the servo’s capacity. So, although we considered using wood for the disks, acrylic was lighter and more colorful, which was more appropriate for our requirements. After sourcing the acrylic, we used the laser cutting machine to perform the cuts on the disks.


Fabrication Process 2

Since we decided to use bigger servo motors with more torque to handle the size of the disks, we tested them ahead of time. During the testing stage, we noticed that the servo motors drew too much power from the Arduino board, so we had to come up with a better solution for power. We realized that our sculpture is not mobile, so instead of investing in batteries, we used the internal power converter inside a wall adapter: we stripped a USB cable, connected power and ground to the breadboard, and used those terminals as a power source for the servos. That ensured a constant supply of power.



Power supply link  + Cut USB cable

In the spirit of last minute complications, one of our disks fell and broke into multiple pieces. We tried gluing it back together, but eventually got them laser cut again on the morning of the presentation.



Coding Process

P5 – Serial Connection to Arduino

Since most of the functions happen in p5, figuring out this portion of the code was a constant challenge. It was also the first time we had attempted to use p5 to send data over a serial connection to Arduino. We first divided the p5 code into the main functions needed and then started writing them separately. We began with the examples we had from class for the p5-to-Arduino serial connection, which gave us some trouble at first, but eventually we were able to maintain a connection between the two.

P5 – API Call & JSON Objects

This was the first time we had pulled data from a live source. It was a challenge to understand what APIs are, how they work, and how each API is called differently. We experimented a lot with the API callbacks and even explored using weather conditions from a free online weather API. There were a lot of query tests, from placing the .json information into function setup() to making an asynchronous callback from that data in function draw(). This was a manual way to call the information, but it would not have been a viable option because the content of the .json file was quite big (about 220 countries were listed).

json in setup

Placing data in setup()

Aside from trying to work with a .json file, we also experimented with using a URL for the API. This proved successful, as we were able to pull data by specifying the correct path.


Calling data from URL

Eventually, we decided to make the JSON file work. We took the information we found online in text (.txt) format and converted it accordingly. After extensive research, we found the best way to write a JSON array of objects, with each object holding the set of data we needed to send to Arduino. We learned the correct way to create that file and then call it from a p5 sketch; this was an excellent learning experience that will be very useful for us in the future. The file was then served successfully through a Chrome web-server extension (200 OK), and the preload function loaded the .json.
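A hedged sketch of what that array of objects and its lookup might look like; the field names (country, birthRate, deathRate) and the sample values are assumptions, not the actual file. In p5 the file would be loaded with loadJSON() in preload(); the lookup itself is plain JavaScript.

```javascript
// Assumed shape of the converted JSON file: one object per country,
// holding the values that eventually go to the Arduino.
const sample = [
  { country: "Canada", birthRate: 10.3, deathRate: 7.7 },
  { country: "Japan", birthRate: 7.5, deathRate: 10.2 },
];

// Find the entry matching a submitted country name, or null if absent.
function findCountry(data, name) {
  return data.find((entry) => entry.country === name) || null;
}
```

Structuring the data as an array of small objects keeps both the p5 lookup and the serial payload simple.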

P5 – PubNub

Another extremely useful tool that we learned to use during this experiment was PubNub. For our project, we needed the user to be able to enter a country name in order to see it reflected on the servos, and we needed that connection to be wireless. So, we created a PubNub channel called ‘Kinetic Life’ and used that channel’s ‘subscribe’ and ‘publish’ functions to submit commands over the internet. We assigned one of the computers to subscribe to the channel in order to receive data, and the other to publish in order to send out commands. So, computer A sent a call for the JSON file, through PubNub, to computer B, which pulled the JSON and sent the relevant response over serial to the Arduino.

P5 – Mapping Data

One of the main lessons learned in experiment 3 was the correct way to map values. Mapping allows us to view any data, no matter its size or complexity, as a simple output, either on screen or for the Arduino. Although it is a simple line of code, used correctly it can be a powerful tool.
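For reference, p5's map() is a simple linear rescaling. A minimal re-implementation follows, with the input and output ranges chosen purely for illustration (the actual ranges used for the sculpture may differ):

```javascript
// Linearly rescale value from [inMin, inMax] to [outMin, outMax],
// the same formula p5's map() uses.
function mapValue(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) * (outMax - outMin)) / (inMax - inMin);
}

// e.g. scale a birth rate in the range 0-50 (per 1000 people) to a
// servo speed value in the range 0-90.
const speed = mapValue(25, 0, 50, 0, 90);
```

Whatever the raw rates look like, one call like this turns them into a value the servo code can consume directly.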

Arduino – Data Receiving & Output

The code for the Arduino was fairly simple and straight to the point. After opening a serial connection between p5 and Arduino, it is fairly easy to tell the Arduino to receive that data and output it through the servo. However, we quickly learned that servos are not as simple as they may seem, and hacking them is not as easy a task. After several sessions of trial and error, we were able to control the speed and direction of the servos’ rotation based on the data coming in from the p5 sketch.
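Part of what makes continuous-rotation servos unintuitive: with the Arduino Servo library, write(90) is stop, values below 90 spin one way and values above 90 spin the other, faster the further from 90. A small sketch of how a signed speed derived from the p5 data might be converted into that write value (names and the -90..90 range are assumptions, written in JavaScript for consistency with the rest of the post's code):

```javascript
// Convert a signed speed (-90 = full speed one way, 0 = stop,
// 90 = full speed the other way) into the 0-180 value a continuous
// servo expects from Servo.write(), with 90 meaning "stop".
function servoValue(signedSpeed) {
  const clamped = Math.max(-90, Math.min(90, signedSpeed));
  return 90 + clamped;
}
```

One disk would receive the positive speed and the other its negation, producing the opposite rotations.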


Final Circuitry


Presentation Day


Strengths Observed:

We invited participants to come forward and input the name of a country of their choice and see the disks spin as a visualization of the birth and death rates. We observed that many people put in names of countries they come from or have roots in, and expressed surprise over some of the data that came forward. Furthermore, many participants reported feeling mesmerized by the design of the disks that gave the effect of an optical illusion.

Limitations Observed:

Each time a participant input the name of a new country, we had to manually reset the Arduino. It would have been more effective to either have a reset button or allow the discs to rotate only for a specific time before a new country name input was fed into the input box. This would have improved the user experience considerably.

Furthermore, multiple discs representing various countries would have been much more effective as a data visualisation, allowing a comparison to arise between birth and death rates across different geographical regions. Not only would this have been more visually striking, but would have also allowed participants to reflect on growing or declining populations across the world at the same time.

Finally, gaining more control over the speed of the servos would also have helped make the contrast between birth and death rates more pronounced. While we did succeed in influencing the speed slightly, it would perhaps not have been obvious had the participants not been looking out for it.

Feedback from Participants:

Creating multiple disks for effective Data Visualisation

Creating multiple disks to represent different countries would have been more impactful because of a comparison that would arise, while also looking more striking aesthetically. This is something that we did intend to do initially. However, troubleshooting the code at every stage of the process occupied much of our time, without giving us the time to build further on our fabrication.

Improved Data Input

Participants were instructed to input a capital letter at the start of every country’s name, which would then show the quantitative data and allow the disks to spin. One of the limitations of our dataset was that it yielded results only if the input was written in this specific format. A few options provided during the feedback session were:

  1. Creating a drop-down menu instead of allowing participants to type it out, hence avoiding mistakes with capital letters.
  2. Creating a map interface to make it more striking, immersive and also aesthetically pleasing

Future scope:

Making interactive data visualisation tangible and experiential can be a fantastic way to immerse participants in large datasets that would otherwise be missed as passive, quantitative information. Not only does data physicalization give participants the opportunity to immerse themselves in different contexts, it can also tell an effective narrative through the data it represents.

It has the ability to inspire a sense of wonder at how its underlying technology allows us to explore, create and communicate with each other in new ways.

The ‘Flower of Life’ is a first prototype toward this ambitious goal. The experiment can be extended with diverse datasets and interactions to tell a narrative about our world’s growing population, especially in the context of climate change and depleting resources. While digital data visualisation can also be effective at allowing interactivity, the tangible and tactile interaction of an installation allows for a more immersive sensory experience.

Another possible outcome of the ‘Flower of Life’ could be a call to action upon interacting with the installation, focusing on not what we already intuitively know, but where this knowledge could take us in the future.


Flower Of Life – A Thorough Explanation. Retrieved November 26, 2018, from

Population API. Retrieved November 26, 2018, from

Servo Won’t Stop Rotating. Retrieved November 26, 2018, from

Servo 360 Continuous Rotation. Retrieved November 26, 2018, from

Toddmotto/public-apis. Retrieved November 26, 2018, from

Weather API – Free Weather API JSON And XML – Developer API Weather For Website – Apixu. Retrieved November 26, 2018, from

Wolfram, S. (2002, January 1). Note (d) For Why These Discoveries Were Not Made Before: A New Kind Of Science | Online By Stephen Wolfram [Page 872]. Retrieved November 25, 2018, from


By: Amreen Ashraf, Lauren Connell-Whitney, and Olivia Prior


Partner robots

Figure 1: Our two robots preparing for their first demo in front of a large group of people.

Synchrobots are randomly choreographed machines that attempt to move in sync with each other. The machines receive commands to start and stop their wheels, as well as the pauses between movements. Even though the two machines receive the same commands, differences in weight, shape, and environment can cause the robots to fall out of sync. The machines execute the commands as they travel across the room, sometimes bumping into each other or into walls. The randomness of the movements creates a perceived intelligence, though there is none. When a machine nearly hits a wall but avoids it at the last second, it can be read as very intentional, but it is the human observer who creates these stories. These randomized, in-sync movements give the machines a sense of life and make viewers delight in following what they will do next.


Idea Process 

Figure 2: We had a huge range of ideas from the beginning. We did manage to keep the idea of duality in our final project.

We began ideating by talking about how we could affect machines using outside data, for instance, the number of people passing through the OCAD doors, or the wind at any given moment in Antarctica. We then developed an idea for bots that would rove around and crash into things. They would be built from two pieces of steel that could crash together and make a thunderous noise. However, constructing something made to bump into things with force seemed like a tall challenge, and possibly not one we wanted to tackle just yet.

Figure 3: Quick sketches of how we could install our “thunder bots”.

Next we had an idea to create bots that would find each other in the dark; our Marco Polo bots. This was the idea we began moving forward on: three bots that would seek and find another “leader” bot; once it was found, another bot would become the leader. This idea led to a thought about migration and how birds pass around the leadership role so that the whole mechanism can stay strong.

Figure 4: One of our initial process sketches for our Marco Polo bots.

Figure 5: Our workflow written out for Marco Polo bots.

Figure 6: Our “pseudo-code” hand written out for Marco Polo bots.

Creation Process 

Figure 7: Our prefab bots were simple to put together, but made of very cheap parts. One of the DC motor gears broke inside its casing, so we performed emergency surgery using another 180-degree servo’s gears. It was nice to take things apart and see how everything worked.

We began with a trip to Creatron, where the array of wheels and servo motors was vast and costly. We ended up bringing home a lovely red chassis kit that came with two rear wheels attached to two DC motors and a stabilizing front wheel. It was a great place to start, and getting the wheels working was proof that our little bot could move if we wanted it to.

Video 1: A clip showing the robot moving with both DC motors connected to the feather micro-controller. 

Coding, Fabricating, & Development

Networking Platform & Iterating on Ideas

Our first step in development was to find a resource for networking our robots that was not PubNub. Unfortunately, the PubNub API was not able to receive data published from an Arduino – users were only able to subscribe to data. Since our initial idea prior to Synchrobots was for three robots to talk to each other by sending coordinates, we needed a platform that would let us both send and receive data. After some research, we decided to network our robots with Adafruit IO, which let us very easily send and receive data through different feeds. The primary issue it presented was that the account could only send and receive data up to 30 times a minute. This meant we could not send or receive continuously the way we could with PubNub.

This presented some issues for our initial idea: we wanted our robots to be talking with each other continuously. Because we were developing for three robots, each one could only send and receive data ten times a minute. We discussed it amongst ourselves and decided to develop an idea that required two robots which were not continuously sending and receiving data. We also decided that if we changed our idea we would not scrap everything and start from the beginning; we would re-use most of the code and thinking from our previous hand-written pseudo-code.
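The 30-messages-per-minute cap translates to a minimum interval between sends, which can be enforced with a small gate on the sender. A sketch of that logic; the factory name and the way the cap is divided are our assumptions for illustration:

```javascript
// Sketch of rate-limit gating for a feed capped at a fixed number of
// messages per minute (e.g. 30 on the account described above).
function makeRateGate(maxPerMinute) {
  const minIntervalMs = 60000 / maxPerMinute; // e.g. 2000 ms for 30/min
  let lastSent = -Infinity;
  return function canSend(nowMs) {
    if (nowMs - lastSent >= minIntervalMs) {
      lastSent = nowMs; // record the send and allow it
      return true;
    }
    return false;       // too soon; drop or queue the message
  };
}
```

Splitting the cap across robots (ten sends a minute each for three robots) just means constructing each gate with a smaller `maxPerMinute`.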

Additionally, by this time we had purchased our prefabricated platforms for the robots. The movement of these bots was neither linear nor consistent. We decided that our initial “Marco Polo bots” idea would not work well with these movements, and chose to pursue an idea that would let the robots move more freely.

This brought us to the idea of Synchrobots: two robots that would receive the same data set, and execute at the same time. We were interested in how the robots would respond to the same data, and how the presentation of two robots would bring up connotations of partners, a couple dancing, or friends moving around.

At the start of development, we found that Adafruit IO has a feature that let us view a real-time visualization of the data being sent. This was a very useful tool, since it let us see what values were being sent by each feed while the robots were moving, without requiring our microcontrollers to be connected to our machines for serial logs.

Figure 8: Adafruit dashboard showing a data visualization of the randomly generated times for our robots.

Issues with DC motors and Servos

Our prefabricated platforms each came with two DC motors and wheels. When we purchased them we were certain the motors in the casings were servos, and were surprised to find DC motors in servo casings. The DC motors would have worked well, but they did not have a data pin, which posed a problem: we needed to command both wheels independently through the microcontroller. The motors were also very cheaply made, and one of the gears broke fairly early in development. We took the motor apart and attempted to fix the gear using the same part from a servo we were not using.

Figure 9: We attempted to fix the gear of our DC motor.

Additionally, one of our feathers stopped working while we were attempting to control the DC motors; we suspect this was a power management issue we had not known could happen. After a day’s worth of research, a feather passing away, and repeated attempts to control the DC motors with our feather microcontrollers, we decided to switch to servo motors.

Figure 10: One of our feather microcontrollers stopped working during development.

Unfortunately, this did not solve all of our problems. The servos were installed “flipped” relative to one another: when executing the same code, the wheels would turn in opposite directions. The next issue to tackle was getting the servos to rotate the same way. We finally found a solution that starts the wheels from opposite ends of the range: one is commanded from 180 degrees down to 0, and the other from 0 degrees up to 180.
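The fix amounts to mirroring the command for the flipped servo within the Arduino Servo library’s standard 0–180 range. A minimal sketch of the idea (the function names are ours, and this is illustrative logic rather than our exact firmware):

```javascript
// Because one servo is mounted flipped, mirror its command within the
// 0-180 range so both wheels physically rotate the same way.
function mirrorAngle(angle) {
  return 180 - angle;
}

// Produce the pair of commands for one shared "drive" angle.
function wheelCommands(angle) {
  return { left: angle, right: mirrorAngle(angle) };
}
```

On continuous-rotation servos the extremes of the range map to full speed in each direction, so mirroring the value flips the rotation without any wiring changes.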

Video 2: A clip featuring both of our wheels spinning in the same way, and pausing in between times sent to the robots. 

Set up

Our setup used three feathers in total: one feather connected to Wi-Fi sending the data, and one on each of the two robots receiving it. We designed the process as follows:

Figure 11: Synchrobots process diagram.

We wanted randomly generated times so that viewers would anthropomorphize the robots as they watched. Random times reduce the chance that viewers engaging with the robots for a long time pick up on patterns. We were also attracted to this idea because, throughout the project, we had been talking about the character of machines that emerges through real-life obstacles and variances, which became apparent in our little bots as we constructed them: one had a slightly wobbly wheel, the other was heavier because of a different battery pack. All small details that would make the real-life outcome slightly different between the two.
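The sender feather’s job can be sketched as a generator of random move/pause pairs that are broadcast identically to both robots. The ranges and names below are illustrative assumptions, not the values we shipped; the random source is injectable so the logic is testable:

```javascript
// Sketch of the sender's command generation: one random driving time and
// one random pause time, sent unchanged to both robots.
function randomCommand(random = Math.random) {
  const moveMs = Math.floor(500 + random() * 2500);  // assumed 0.5-3 s drive
  const pauseMs = Math.floor(250 + random() * 1750); // assumed 0.25-2 s rest
  return { moveMs, pauseMs };
}
```

Because both robots receive the same `{ moveMs, pauseMs }` pair, any divergence in their paths comes entirely from physical differences, which is the effect described above.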

Figure 12: Deep in thought with the code and circuit construction with the servos.

Next we began soldering the circuit, a simple one at that. Our initial plan also included blinking LEDs on the robots and possibly robotic chatter from a speaker, as if they were talking to one another while roving. Near the end of the process we realized this may have been too much for one project and decided to keep it for a further iteration at another time.

Figure 13: Soldering the circuit; picky, fun, detail work.


Once we had our components soldered, we discussed the casing for the microcontrollers. The debate amongst the team was how to keep the robots looking industrial while adding features that elevate the anthropomorphized qualities we wanted. We thought a full casing that hid the wires and microcontrollers would read too much like a critter, while leaving the bare-bones hardware exposed would look unintentional.

We had some spare copper tubing and LED fairy lights around the studio, and experimented to see how adding more industrial material onto the robots would change the experience of the machines roving around the room. We placed the copper tubing in the front with LED lights and found that it resembled an eye: the perfect middle ground between robot and critter. To complement the front, we encased some of the wires in copper tubing at the back.

Figure 14: Assembling the robots with the copper tubing as casing for the wires, and as an “eye” in the front

We had two types of LED lights: one “cool” strip and one “warm” strip. To create a small difference between the two robots, we decided that one would house the cool strip and the other the warm strip.

Figure 15: Experimenting with the LED lights and copper tubing on the front of the robot



  • 3 X Adafruit feather ESP32
  • 2 X Feetech FT-MC-001 kits (chassis)
  • 4 X Servo Motors
  • 2 X Protoboards
  • 2 X Rechargeable battery packs


    Figure 16: Synchrobots ready and waiting.


Figure 17: Team of Amreen, Olivia and Lauren holding both finished Synchrobots.

Project Context

A work that provided larger context for this project is Lichtsuchende: Exploring the Emergence of a Cybernetic Society by David Murray-Rust and Rocio von Jungenfeld, a project that created a community of robotic beings that interact through light. The project examined how we as humans can design for other beings, namely machines. It is a beautiful project that went through many iterations of how the robots reacted in the space, learning from their behaviours and group patterns.

Figure 18: Lichtsuchende: the robot community in action

Final Thoughts

Demoing our Synchrobots was a success. At some points they seemed as if they were dancing; they roved around separately; they even crashed into people and walls. It was a wonderful display of humans and machines interacting. People were delighted by them; some didn’t know how to react. It was similar to watching people interact with a baby: some were overly cautious and timid about the interaction, while others actively and playfully engaged with the bots.

We received overall positive feedback and great advice on how to carry the project forward. After the demonstration, Kate Hartman suggested that we could mount a GoPro camera on the wall and have it observe the bots as they move about a room until the battery runs out. This is something we might pursue to track the patterns of our bots through time and space.

As we saw from the reactions of our classmates, the movement and meeting of the bots was a real cause for delight. One suggested direction was to look at LEGO Mindstorms, tiny bots by the LEGO company that come as LEGO hardware with preinstalled software. Nick Puckett suggested dissecting electric toothbrushes to salvage their vibration motors, which could lead to creating small “dumb bots”. There were also suggestions to attach a pen or a marker to the bots as they moved around the room, as a look into bot art. The idea of letting the bots create work through movement was interesting, but we had seen a similar project while researching and decided against it because there were already many such projects; we wanted the bots to move freely without a purpose attached to the movement. If we do take this project forward, the feedback we will implement next concerns interactions between the machines. During this project we explored the concept of interaction using LED strips: we got the LEDs working, but we had not coded how the LEDs would react when the bots interacted. This would be the most crucial point in the further development of the project.


  • Šabanović, Selma, and Wan-Ling Chang. “Socializing Robots: Constructing Robotic Sociality in the Design and Use of the Assistive Robot PARO.” Ai & Society, vol. 31, no. 4, 2015, pp. 537–551., doi:10.1007/s00146-015-0636-1.
  • Conference, RTD, Dave Murray-Rust, and Rocio von Jungenfeld. “Thinking through robotic imaginaries”. figshare, 20 Mar. 2017. Online. Internet. 26 Nov. 2018. Available:
  • DeVito, James. “Bluefruit LE Feather Robot Rover.” Memory Architectures | Memories of an Arduino | Adafruit Learning System, 2016,
  • Gagnon, Kevin. “Control Servo Power with a Transistor.” Arduino Project Hub, 2016,
  • McComb , Gordon. “Ways to Move Your Robot.” Servo Magazine, 2014,
  • PanosA6. “Start-Stop Dc Motor Control With Arduino.”, Instructables, 21 Sept. 2017,
  • Schwartz, M. “Build an ESP8266 Mobile Robot.” Memory Architectures | Memories of an Arduino | Adafruit Learning System, 2016,


by Tyson Moll, Joshua McKenna, Nicholas Alexander

Audio by Astrolope





Overview: The project is a participatory online installation inviting users to join a collaborative work of art from anywhere in the world. Visitors to the website are assigned a paint colour and treated to a view of their canvas: a rapidly spinning disc. With the click of a mouse, the user sees their colour of paint dropped onto the disc and is treated to the mess it makes as it splatters off the surface. As pieces are completed, users can log in to the gallery and enjoy their handiwork.


How It Works: The installation receives painting commands from wireless controllers and transmits the information to a device that creates spin art.

The mechanical aspect drops paint onto a spinning wheel using gravity and solenoid valves. The valves are controlled by an Arduino microcontroller and held above the spinning wheel by a custom-built shelving unit. Paint is fed into the solenoid valves via plastic tubing and water-sealant tape. The spinning wheel is attached to an industrial motor from a salvaged Sears fan head, which can be operated at three different speeds. Several cameras are also fixed to the shelving unit to provide video footage of the action.

The digital aspect of the project is split across four nodes: the Arduino code, the Controller code, the Client code, and Twitch streaming. The Arduino code receives commands from the Controller and operates the solenoids as instructed. The Controller code (written with p5.js) receives instructions via PubNub from any number of Clients and forwards them over USB serial to the Arduino. Programmed with the jQuery library, the Client code lets participants send “Paint” commands to the Controller through PubNub, with selection options for different colours as well as a live stream of the spinning wheel. To livestream the process, we used a program called OBS (Open Broadcaster Software) and three webcams to share a video stream of the process on Twitch.
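The Client-to-Arduino path can be sketched as a tiny two-stage translation: the Client publishes a JSON paint command over PubNub, and the Controller reduces it to a single character per colour for USB serial. The wire format and names below are our illustrative assumptions, not the project’s verbatim protocol:

```javascript
// Assumed single-character serial codes, one per paint colour.
const SERIAL_CODES = { red: 'r', yellow: 'y', blue: 'b' };

// Client side: build the PubNub payload for a paint command.
function clientMessage(colour) {
  return JSON.stringify({ action: 'paint', colour });
}

// Controller side: turn an incoming payload into the byte to forward
// over USB serial, or null if the message is not a valid paint command.
function toSerial(message) {
  const { action, colour } = JSON.parse(message);
  if (action !== 'paint' || !(colour in SERIAL_CODES)) return null;
  return SERIAL_CODES[colour];
}
```

Keeping the serial side down to one byte means the Arduino’s parsing loop stays trivial, which matches how simple the Arduino node is described as being.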



Process Journal:
This project began with a desire to explore the possibilities afforded by the connective technology the assignment required (PubNub and JavaScript libraries). We brainstormed a series of concepts and tools we would be excited to work with and looked for overlap, in order to land on a solid project idea we could all be excited about.

From there we knew that we wanted to explore creating an activity that would be fun to join in with others on, from anywhere in the world, and that might even be improved by not being in proximity. We examined similarly-themed projects including Studio Puckey’s Do Not Touch and Studio Moniker’s Puff Up Club, taking elements we liked and spinning them into something new. What was fun about these projects? What made them unique to their medium?


We were inspired in particular by these two projects’ sense of collaboration on a tangibly satisfying act and their sense of play. We seriously considered several projects (a remote bubble-wrap-popping machine, a model that users would slowly flood drip by drip, and a power-sander that users would guide click by click through common household objects) while continuing to rapidly generate other design ideas that fit our constraints.

Early design ideations: a device that would slowly scrape away layers from a scratch-board surface, a remote-controlled saw, a power-sander slowly destroying a shoe, and a cut-your-own-tree holiday diorama

We settled on something with a fun, positive message, but kept the sense of mess and chaos. The basis for what the project would become was in place.

We adjourned to do research and begin to gather materials. We conceived of the project as having three core pillars: the physical build of the apparatus, the coding, and the web-side interface.

We began the build of the apparatus by considering its requirements: it would have to be sturdy enough to hold up a considerable amount of liquid and hold the Arduino components steady and dry.

We considered and discarded several designs before landing on the final version with some help from the incomparable Reza Safaei.

Late design ideations. Here you can see the designs that would inform the final apparatus, as well as some explorations of how to realize the spinning plate.

We built the skeleton of the apparatus by assembling a tall box frame, spaced widely enough to allow us to reach inside and access whatever parts we placed there fairly easily. Knowing that we would want at least two levels of shelving for paint and valve control, we drilled multiple guide holes along each pillar so we could make adjustments quickly and easily. We had been debating how best to realize the spinning-wheel portion of the apparatus (we considered a pottery wheel, a belt-and-gear system, and a drill-and-friction-driven system, among others) when we found a discarded but functional Sears fan from the 1970s. We removed its cage and fan blades, then made and affixed a wooden plate to its central hub.

The fans of the era appear to have been built with impressive torque; we had hoped that the fan’s adjustable speeds might afford us interesting opportunities in adjusting the speed of the paint, but it was so powerful that we settled on keeping it at the first setting. We spent some time exploring the possibility of adding a dimmer to the fan, but eventually shelved it as being out of scope.

The Arduino component of the project presented new challenges for us, as this was the first project most of us had encountered that required careful power management.

We chose solenoid valves as the best machinery for our purposes, having judged that the code required to control them (a simple HIGH/LOW command to open/close the valves) would be simple to send over PubNub. The solenoids required 12 volts to function, far more than the Arduino Micro could supply, so we looked into managing multiple power sources. This led to the inclusion of diodes to protect the circuitry and transistors to act as switches for the solenoids. Ultimately the Arduino component proved to be among the simplest aspects of the build: once we had the circuitry working for one valve, it only needed to be repeated exactly for the other two, and we were correct in judging that a simple HIGH/LOW command would effectively manage the valve. Our first iteration of the circuitry became our final iteration, and when troubleshooting we only ever needed to check connections.


We selected plastic tubing of the same diameter as our solenoid valves. The tubing kept its shape strongly; we used a heat gun to straighten it out, at which point it screwed in tightly to the valves. It required only a small amount of waterproof sealing tape to make the connection from valve to tubing watertight. We had a tougher time connecting the tubing to the 2L pop bottles we had chosen as paint receptacles for their ease and simplicity. The mouth of the bottles was slightly too wide to connect to the tubing as easily as the valves had. We managed to seal the connection between tube and bottle using a combination of duct tape to hold the tubing tight and sealing tape to keep it watertight.

The process of coding the communication protocol for the device was relatively straightforward; we used example code provided to the class as a backbone for the PubNub communication interface and the serial communication between computer and Arduino. The only message we needed to send across the devices was a means of denoting the colour of paint to drop. To ensure that paint was actually dispensed from the solenoid, we implemented a short delay in the duration of the dispense signal. The only other features coded independently of the web interface were two Arduino buttons for debugging the solenoids and a live display of incoming PubNub messages, two time-saving features for troubleshooting the devices.
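The dispense-signal delay amounts to enforcing a minimum open time on the valve: an open/close pulse shorter than some threshold releases no paint. A sketch of that scheduling logic, where the function name and the 150 ms minimum are assumed figures for illustration:

```javascript
// Sketch: schedule the HIGH (open) and LOW (close) events for one valve,
// clamping the open time to an assumed minimum so paint actually falls.
function valveEvents(startMs, requestedMs, minPulseMs = 150) {
  const openFor = Math.max(requestedMs, minPulseMs);
  return [
    { at: startMs, state: 'HIGH' },          // energize solenoid: valve opens
    { at: startMs + openFor, state: 'LOW' }, // de-energize: valve closes
  ];
}
```

On the Arduino side the same effect is a `digitalWrite(pin, HIGH)`, a short delay, then `digitalWrite(pin, LOW)`.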


For the minimum viable product, we wanted the interface to let the user select between paint colours and paint with red, blue, or yellow. As a group, we felt some sort of visual feedback was needed to show the user that they had painted with their selected colour. Originally we proposed an animation that would float above the paint button each time the user clicked: either a paint drop in the selected colour, or a “+1” indicator. Because of time constraints, we opted for a counter in the top right corner showing the total number of paint drops from all users combined.
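The shared counter is just an accumulator over incoming paint messages. A minimal sketch of the idea; the factory shape and the per-colour breakdown are our additions for illustration, not the shipped code:

```javascript
// Sketch of the global drop counter shown in the top-right corner:
// every incoming paint command bumps a shared total.
function makeDropCounter() {
  const counts = { total: 0 };
  return {
    record(colour) {
      counts.total += 1;
      counts[colour] = (counts[colour] || 0) + 1; // per-colour tally
    },
    get total() { return counts.total; },
    byColour(colour) { return counts[colour] || 0; },
  };
}
```

A user-specific counter, as proposed for the next revision, would simply key a second tally on a client identifier instead of a colour.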


With another revision of this project we would include a user-specific counter or visual element so that the person interacting with the paint button knows exactly how much they themselves are contributing to the artwork. Additionally, we would replace the “Project 4” text in the bottom left corner with an HTML element that updates with each plate painted.

We developed graphical assets in Illustrator based on a primary colour theme of red, yellow, and blue. Viewers could click to adjust their colour and press a magic paint button to deliver a command to our machine. Central to the web interface was a live stream of the wheel in motion. We had intended an instantaneous video feed, but encountered an approximately 8-second delay in the streaming process, which we believe is a result of the data transmission speed from computer to computer. These assets were all programmed in jQuery.

During development of the web interface, we were interested in including a particle system to visually display the colours being submitted live. We discovered that jQuery and p5.js canvases seem to conflict with one another in terms of interactability; although solutions were available to remedy the error (e.g. adjusting layering or converting the jQuery elements to p5.js), we were short on time and decided to render the live feed of paint drops textually.


The solenoids available to us were only rated for use with water, so we were concerned about damaging them with thicker paint. Local art professional Jordana Heney advised us that the best options for our purposes would be watercolour or watered-down tempera paint. The expense of watercolour precluded its use, so we went with tempera. Later, vegetable-based inks and dyes were brought to our attention; we did not have the chance to experiment with them, but would like to in future.

We experimented with paint thicknesses to get a sense of what moved well through the solenoid, left a good colourful impression, and could be consistently reproduced. We settled on a ratio of approximately 1 part paint to 1 part water, give or take depending on brand. Just slightly thicker than water but not so thick as to make the valves malfunction, this was the ratio we stuck with for the rest of the project.

Apart from some minor rewrites to the code and UI tweaks, once every pillar was connected the apparatus worked perfectly. We tested several varieties of paper before settling on paper plates as our paint surface, as their porous surface and shape were a good fit for our paint consistency and the size of our spinner.

After our crit we returned to the apparatus to create multiple variations on the painted plates, in order to better capture the different results our apparatus generated.




Project Context:

Spin art was the driving concept behind the machine’s functionality. Although interaction between the device and the wheel is presently minimal, we took great inspiration from the techniques employed in developing such artworks. Callen Shaub, who works out of Toronto creating gallery-standard works, is an excellent example of the practice. The project aligns with fun, exploratory, tongue-in-cheek internet installation art like Do Not Touch and Puff Up Club. The intended experience is to share an out-of-the-ordinary action with people, see what others did, and consider your own action in that light.

It also exists within the same sphere as participatory art installations such as The Obliteration Room by Yayoi Kusama, where guests are given circular stickers to place anywhere in a room, and Bliss by Urs Fischer and Katy Perry, where participants can slowly destroy the artwork to reveal new colours while adding colours of their own. The creators have laid out a framework, but it is the participants who define the actual final visual state of the artwork. The act of participating is the experience of the art; the final outcome is, perhaps, irrelevant.


Next Steps:

Based on feedback and testing we would expand this project by experimenting with different inks and receptacle media. Paper plates were something of a stopgap, as was tempera paint; they were choices we made out of necessity keeping time and budget in mind. Given the time we would experiment with multiple media and generate a large volume of work.

Once work is generated we would like to explore arranging it. Seeing many instances of the works juxtaposed might reveal interesting patterns, and playing with the arrangement would be as involved a project as their creation.

We would also like to improve the speed of interactions, add more valves and colours, automate paint reloads, and industrialize the entire process so it can be left unsupervised for long periods of time while still generating artwork.








Bliss (n.d.). Retrieved from

Controlling A Solenoid Valve With Arduino. (n.d.). Retrieved from

Shaub, Callen. (n.d.) Callen Shaub. Retrieved from

Studio Puckey. (n.d.). Do Not Touch. Retrieved from

Studio Moniker. (n.d.). Puff Up Club. Retrieved from

THE OBLITERATION ROOM. (n.d.). Retrieved from




By Maria Yala

Creation & Computation – Experiment 4 / Networking


FindWithFriends is a collaborative web-app game built using p5.js and PubNub, a data stream network used to send and receive messages between players’ devices. When the game starts, each player is presented with an identical game board made up of a matrix of clickable tiles and a list of words to find; each player is assigned a random colour to differentiate them from the others. Words are arranged on the board in varying directions: forwards, backwards, upwards, downwards, and diagonally. Players can play collaboratively or competitively. Every time a player clicks on a tile, the tile’s colour changes to the player’s assigned colour. Once a player finds a word, they can lock its tiles to prevent other players from stealing them. If a player clicks on an unlocked coloured tile, it turns white again. When the ‘lock tiles’ button is clicked, the player’s score is calculated and drawn on the screen.
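The click rule described above can be captured as one small pure function. A sketch under assumed field names (`locked`, `colour`), not the game’s actual code:

```javascript
// Sketch of the tile-click rule: locked tiles ignore clicks, a white
// unlocked tile takes the clicking player's colour, and a coloured
// unlocked tile resets to white.
function clickTile(tile, playerColour) {
  if (tile.locked) return tile;                           // steal-proof
  if (tile.colour === 'white') return { ...tile, colour: playerColour };
  return { ...tile, colour: 'white' };                    // toggle off
}
```

Returning a new object rather than mutating in place makes the rule easy to test and to replay from a PubNub message log.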


For this project, I wanted to learn more about creating custom classes in JavaScript, as I didn't have much experience with them. Additionally, I was drawn to the idea of working visually with objects in a matrix when we went over some examples in class. I wanted to challenge myself to learn more about custom classes and nested for-loops.

Ideation & Inspiration

My initial inspiration was to create something to do with collaborative storytelling; however, since I was thinking of working with grids/matrices, I ended up choosing word games, particularly crosswords and word find puzzles. In the end, I settled on the idea of a collaborative word find game where each player is identified by a different color. This was inspired mainly by a word find game in a zine I had created and by the game WordsWithFriends.





Step 1 – The Tile Class

I created a custom class to represent the tile objects of the game board. In the beginning I tried both circular and square tiles, using the ellipse() and rect() functions. Each tile had the following attributes: x and y coordinates and size dimensions. The Tile class also had a display function that, when called, would draw the tile at the object's x and y position.
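A minimal sketch of such a Tile class (my own reconstruction, not the project's actual code) might look like this; display() assumes it runs inside a p5.js sketch, where rect() is a global:

```javascript
// Hypothetical reconstruction of the Tile class described above.
class Tile {
  constructor(x, y, size) {
    this.x = x;       // x coordinate on the canvas
    this.y = y;       // y coordinate on the canvas
    this.size = size; // width/height of the tile
  }

  // Draws the tile at its stored position. rect() is a p5.js
  // global, so this method only works inside a running sketch.
  display() {
    rect(this.x, this.y, this.size, this.size);
  }
}
```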

Step 2 – The Matrix

I began by testing: creating a 3×3 matrix of square objects on the screen. This was done using a nested for-loop that, on each inner iteration, would create a new tile, passing the x and y positions from the for-loop to a mapping function to generate a position on the screen. The matrix was restricted to a 600×600 region, and these were the dimensions used to map coordinates.
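The nested loop could be sketched roughly as below (names are my own; p5.js provides map() for this, but a plain equivalent is included so the snippet stands alone):

```javascript
// Plain equivalent of p5.js map(): rescale v from one range to another.
function mapRange(v, inMin, inMax, outMin, outMax) {
  return outMin + ((v - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Build an n x n grid of tile descriptors inside a boardSize x boardSize
// region, mapping grid indices to pixel centres.
function buildGrid(n, boardSize) {
  const tiles = [];
  const cell = boardSize / n;
  for (let row = 0; row < n; row++) {
    for (let col = 0; col < n; col++) {
      const x = mapRange(col, 0, n, 0, boardSize) + cell / 2;
      const y = mapRange(row, 0, n, 0, boardSize) + cell / 2;
      tiles.push({ row, col, x, y, size: cell });
    }
  }
  return tiles;
}
```

With n = 3 and boardSize = 600, this yields nine tiles with centres at 100, 300 and 500 on each axis.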

An image showing testing of a 3×3, 15×15, and 30×30 matrix on a screen portion of 600×600 pixels.


Step 3 – The Letters

I then created a second custom class to overlay letters on each tile. The Letter objects were also created using a nested for-loop iterating over an array of letters. Each Letter object had the following attributes: a letter and x and y coordinates. The Letter class had one method, a display function that, when called, draws the letter on the corresponding tile. Below is the 3×3 sample array that was iterated over to generate a 3×3 array of Letter objects.

var test = ['A','P','J','X','E','I','C','O','W'];

An image showing the letters overlaid onto a 3×3 game board


Step 4 – Clicking on tiles

Upon success with drawing the tiles and letters onto the canvas, I began working on interacting with the objects on the screen so that a tile would change color when clicked. I started by creating a variable, pcolor, to hold the player's random color assignment, which is generated when the page is loaded. Using the mousePressed() function, I got the x and y position of the mouse when the user clicked and passed it to a new method in the Tile class, clickCheck(). This function used the x and y coordinates of the player's click and the x and y coordinates of the tile, calculating the distance between the two to determine whether the player had clicked within the tile's radius. If the click was within the radius, the color of the tile would change from white to the player's color. Here I also updated the Tile class, adding the clickCheck() function and color attributes, i.e., r, g, and b for RGB color mode. The nested for-loop that created the array of Tile objects was then updated to create tiles as white initially. I originally used mouseClicked() but changed it to mousePressed() because, during testing, I found that mouseClicked() worked on a laptop but not on the iPad.
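The clickCheck() logic can be sketched as a standalone function (a reconstruction under my own names; p5.js offers dist(), for which Math.hypot is the plain equivalent):

```javascript
// Hypothetical sketch of the clickCheck() logic: measure the distance
// from the click to the tile centre and claim the tile if the click
// falls within the tile's radius.
function clickCheck(tile, mx, my, r, g, b) {
  const d = Math.hypot(mx - tile.x, my - tile.y); // plain stand-in for p5's dist()
  if (d < tile.size / 2) {
    tile.r = r; // take the player's colour
    tile.g = g;
    tile.b = b;
    return true;
  }
  return false; // click missed this tile
}
```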

An image showing testing of clicking on tiles to change their color, with color randomly updating upon page refresh


Step 5 – Adding another player

Once basic game functionality was working for one player, I began to integrate PubNub to allow for multiplayer functionality. I updated the mousePressed() function to publish a message to PubNub; upon receipt of a message back, in the readIncoming() function, clickCheck() would be called. The messages passed to and from PubNub carried the mouse x and y coordinates and the player color, which were then passed to the clickCheck() function to update the tiles accordingly.
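The round trip might look roughly like this (the channel name and message fields are my own guesses, not the project's actual schema; pubnub is an initialized PubNub JS SDK instance):

```javascript
// On mousePressed(): publish the click instead of mutating locally,
// so every device applies the same update when the message arrives.
function publishClick(pubnub, mx, my, color) {
  pubnub.publish({
    channel: "findwithfriends", // assumed channel name
    message: { x: mx, y: my, r: color.r, g: color.g, b: color.b },
  });
}

// On readIncoming(): run the incoming click against every tile; each
// tile's clickCheck() decides whether the click landed on it.
function readIncoming(board, msg) {
  for (const tile of board) {
    tile.clickCheck(msg.x, msg.y, msg.r, msg.g, msg.b);
  }
}
```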

An image of a game screen showing a 2 player game where each player is a different color


Step 6 – Testing Tile Size When Matrix Size Increases

I changed the test array containing letters (as shown below), switching from the 3×3 grid to a 7×7 grid, to begin testing how the grid would look with words placed in different directions. I also tested adjusting the size of the tiles so that the background would be covered.

Figuring out correct size for the tiles on a new, and larger game board


Testing the game using a game board with circular tiles and 3 players


I ended up removing the circular tiles as square tiles were more visually pleasing because they didn’t overlap with the other tiles.

Step 7 – The “Steal tiles” feature

Here I updated the Tile class, adding 3 new attributes: 'c', a string to hold the color id of the tile (e.g., white would be 255255255); 'isLocked', a boolean indicating whether a tile is locked; and 'isWhite', a boolean indicating whether a tile is white or colored. I used the 'isWhite' variable to detect clicks on tiles: a tile is created as white, and when it is clicked, its color changes and this variable is set to false. When a user clicks on a tile that has already been clicked, I compare the tile's 'isWhite' value and its current color id to determine whether it is being stolen or the click is simply an undo. If it is an undo click, the color reverts to white; if not, another player is stealing the tile. I had trouble implementing the undo click because I was calling the clickCheck() function twice, i.e., in mousePressed() and again in readIncoming(). This caused the color of the tile to change from the player's color to white and then remain white. I solved this by removing the call to clickCheck() in the mousePressed() listener.
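The undo-versus-steal decision can be sketched as one function (an illustrative reconstruction; the color-id strings follow the 255255255 convention described above, and the return labels are mine):

```javascript
// Hypothetical sketch of the claim/undo/steal decision on a tile.
// A tile looks like { c: "255255255", isWhite: true, isLocked: false }.
function applyClick(tile, playerId) {
  if (tile.isLocked) return "locked";   // locked tiles cannot change hands
  if (tile.isWhite) {
    tile.c = playerId;                  // free tile: claim it
    tile.isWhite = false;
    return "claimed";
  }
  if (tile.c === playerId) {
    tile.c = "255255255";               // same player clicked again: undo
    tile.isWhite = true;
    return "undone";
  }
  tile.c = playerId;                    // a different player: steal the tile
  return "stolen";
}
```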

An image showing testing the “undo click” feature and “steal tiles” feature

Step 8 – A “Lock tiles” feature

I used the 'isLocked' boolean to prevent a tile from being stolen by another player. I also added a button on the screen that, when clicked, would lock all the tiles that had the same color as the player. To do this I created 3 new methods: lock(), a function to pass the player color to the tile's lock function; updateLock(p), a function to update the other players' screens, locking all tiles belonging to a particular color; and pubLock(), a function to publish a message indicating a lock has occurred. I also added a new boolean, 'lockPressed', used to determine what kind of message was being sent, i.e., a normal message or a lock message. If the lock id in the message was 0, it was a normal message; if it was 1, it was a lock message, and the readIncoming() function would call updateLock(p) for the player who initiated the lock.
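A compact sketch of this flow (names and message shape are my own reconstruction; lock id 0 means a normal click message and 1 a lock message, as described above):

```javascript
// Lock every tile that matches the player's colour id; return how many
// tiles were locked so the caller can react.
function updateLock(board, playerId) {
  let locked = 0;
  for (const tile of board) {
    if (!tile.isWhite && tile.c === playerId) {
      tile.isLocked = true;
      locked++;
    }
  }
  return locked;
}

// Build an outgoing message; the lock field tells readIncoming()
// whether this is a normal click (0) or a lock event (1).
function makeMessage(x, y, playerId, lockPressed) {
  return { x, y, c: playerId, lock: lockPressed ? 1 : 0 };
}
```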

Step 9 – Final 15×15 matrix & Game Text (Hints)

I settled on a 15×15 grid for the final game and, using a word find puzzle from the Huffington Post, created a new array to hold the matrix. Text hints were drawn to the side of the game board with a 'lock tiles' button underneath. When choosing the theme for the puzzle, I wanted a topic that was universal and a little controversial, so I ended up in the realm of politics and Donald Trump with the "Who Has Trump Offended?" puzzle from the Huffington Post. The 15×15 grid was chosen as the size that best allowed legibility and precise clicks on a tablet and laptop using a fixed portion of the canvas.

"Who Has Trump Offended?" puzzle from The Huffington Post


The final 15×15 game board matrix array


The final 15×15 game board matrix


Step 10 – Adding a Score

I updated the code, adding a points array that would be mapped to tiles the same way the letters were. In the points system I created, tiles in words positioned forwards or downwards were worth 1 point each; tiles in words going backwards were worth 2 points each; tiles in words positioned diagonally were worth 3 points each; and tiles in the hidden word were worth a bonus of 2 points each. Tiles that were not part of any word were assigned 0 points. I also added a global score variable to calculate a player's score based on locked tiles. Calculation was triggered when the lock button was clicked, and the score was then drawn on the screen in a large font, colored in the player's color.
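The scoring pass could be sketched like this (an illustrative reconstruction: the points matrix runs parallel to the letter matrix, and only locked tiles in the player's colour count):

```javascript
// Sum the points of every locked tile belonging to the player.
// points is a 2D array parallel to the letter grid; each tile knows
// its row/col, its colour id c, and whether it is locked.
function scoreFor(board, points, playerId) {
  let score = 0;
  for (const tile of board) {
    if (tile.isLocked && tile.c === playerId) {
      score += points[tile.row][tile.col];
    }
  }
  return score;
}
```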

The points matrix for the final 15×15 game board


Testing the score calculation – PRESS is 5 points, HISTORIANS is 10 points


Testing the game

Video link of test from the image below – Testing the game with 2 players playing on iPad & Laptop


End result of the test between two players


Presentation (Setup & the experience )

For the presentation, I decided to use iPads instead of a combination of iPads and laptops, mostly to ensure mobility; I didn't want the players to be tied to one place. Although I provided 5 iPads, I anticipated that people might form teams when playing and would pick up the iPads or move about with them. I did use one laptop, hooked up to a projector, so that onlookers would not be left out of the FindWithFriends experience. Projecting the game board on the wall was a choice that ended up benefiting the presentation, as it heightened the game experience while people watched the tiles change color. Ultimately, the game experience changed when players realized they could steal each other's tiles; once words had been found, they proceeded to see who could get the most colored tiles on the screens. Below are some videos and images from the presentation.



Video link to the presentation

Feedback & Future Plans

Some future adjustments I would like to make would be to add the other players' scores onto each individual screen so as to heighten the competitive aspect. I would also like to explore turning this into an installation piece. This is inspired by feedback from the cohort and by how the game turned from a simple word find into a different game once players were notified that they could "steal" each other's tiles; it shifted from collaborative to competitive play when I explained the lock button's functionality. Comments were also made about the choice of theme, Trump and politics. While playing, some players would spontaneously shout out things like "I found humanity" or "I'm stealing immigrants". The words being found have the potential to make people uncomfortable, and I would like to explore this further, perhaps by playing with different contexts. It was also suggested that, since the game board was projected onto the wall, a new interaction could have players pick tiles on the wall itself. This is also something I would like to explore in the future.


Code on github – FindWithFriends

Reference Links








(manufactured) realities

Project by: April De Zen, Veda Adnani and Omid Ettehadi
GitHub Link:


Figure 1.1: Introduction screen, featured on projection screens and mobile devices

Project overview
"Falsehood flies, and the Truth comes limping after it." – Jonathan Swift
(manufactured) realities is a project created to step outside the status quo and truly evaluate whether our beliefs are based on facts. To do this, the team selected six news stories: three true stories, and three stories that were released and later retracted or debunked by credible news organizations. The increase in conflicting information has fuelled much discussion. 'Our inability to parse truth from fiction on the Internet is, of course, more than an academic matter. The scourge of "fake news" and its many cousins – from clickbait to "deep fakes" (realistic-looking videos showing events that never happened) – have experts fearful for the future of democracy.' (Steinmetz, Time Magazine, 2018) The six articles are presented, and after each one, participants are given a chance to vote on whether they believe it or challenge it. Once all votes are in, the servo device shows the results of the poll as the projector reveals whether the story is true or fraudulent. After all six questions are answered, the exercise ends on a results page showing the overall accuracy of the group as well as the accuracy for each question.


Figure 2.1: Veda, Omid and April on presentation day
Figure 2.2: (manufactured) realities on display, projection in back and servo device in front

Intended context
How we receive information is more complex than ever before. It used to be as simple as picking up a book or a newspaper to update yourself on the news, current events or specialized educational materials. These media sources are held to rigorous ethical standards, and if they ever breach this code of conduct, a retraction must be printed and released to the public. The more retractions, the less credible the publication becomes. Nice and simple. In 2018, this simplicity has been turned on its head, all thanks to the internet. Nowadays, we are constantly overwhelmed with information: some of it playful and useless, some educational and enlightening, but some streams of information are created only to conflict and confuse the public. With all of this content being released hourly to various public channels, there is more emphasis on releasing information first and less concern about releasing accurate information. There has been a shift from reading credible sources from publishers to consuming information from our favourite 'content creators'. These new content creators are not bound by any rigorous code of conduct and simply publish what they believe to be true. They also share articles with their subscribers and/or followers, further amplifying a story without knowing (or caring) whether it is in fact credible. 'A false story is much more likely to go viral than a real story' (Meyer, The Atlantic, 2018). Media awareness is a long-standing issue; it is very easy for the person with the microphone to sway a crowd in their favour. The time we live in now goes far beyond that; we simply do not know what to believe anymore.

Product video

Production materials
Our aim in this project was to use a combination of hardware and software to create a seamless and straightforward evocative experience to spark conversation. The following materials were used for this project:

  • 1x Arduino Micro
  • 1x Laptop
  • 1x USB cord
  • 1x Strip of NeoPixel Lights
  • 2x Servos
  • 1x Breadboard
  • 1x Plywood
  • 1x Parchment paper
  • 1x Projector

During the ideation phase, the team came up with many exciting options. The focus of each of our ideas was to create something that is extremely relevant and pertinent.

Idea 1: Constructing communication systems in a collapsed society
Communication devices can be made of scraps, discarded plastic debris and e-waste
This can be a way to communicate levels of remaining natural resources
‘Citizen scientists can take advantage of this unfortunate by-product of “throwaway culture” by harvesting the sensor technology that is often found in e-waste.’ (link)
Our team went to watch the Anthropocene movie for inspiration

Idea 2: A better way to communicate coffee needs
Texting and slack is not a sufficient way to get the coffee needs of a large group
Create an ‘if this then that’ type app, with your regular order saved and ready
When someone asks the app if anyone needs coffee, instantly they receive the orders

Idea 3: Broken telephone game
Using the sensors, we already have on our phones
Create a game to pass messages from phone to phone
Somehow creating a way to scramble the messages

Idea 4: Digital version of a classic board game
Pictionary, a digital version that can be played in tandem, speed rounds?

Idea 5: Think piece, 'Challenging assumptions and perceptions.'
There is currently no way to validate content and communications on the internet
Create a survey for everyone to do at the same time, generate live results on screen
Use this to gauge perceptions or bias

Once we listed out all the ideas, we gave ourselves a day to think through what we felt most excited to do. We returned the next day and unanimously agreed to proceed with our think piece, 'Challenging assumptions and perceptions.' We were also mindful of the potential scalability of this experience. While the prototype itself was built for a small group of people, the intent was to set the foundation for a product that could easily scale to a more extensive experience and audience in the future.

Process Map
Once the idea was finalized, the next step was to flesh out all the details, including the flow of the experience. We began the process by creating the user flow diagrams. We broke down the hardware, software, and API instances, and how each of them is interconnected. It was vital to iterate the different pieces of the puzzle and see how they fit together.


Figure 3.1: User flow diagram

Once the flow was set, we focused on information architecture across all the devices. We used Adobe Illustrator to create wireframes with placeholder content. This helped us visualize the skeleton for the experience. We decided to use the projector as the centrepiece and the mobile phones as a balloting device.

Projector Experience Wireframes
The Projector experience would hold the critical question screens, the response screens and the final result screen to conclude the experience.




Figure 4.1: Projector Wireframes

Mobile Experience Wireframes
Mobile devices around the room would function purely as ballots, and the projector would take center stage as soon as the voting process ended. The team put much consideration into the flow of the participant's attention. Since there would be three interfaces in play, we made sure to include as much visual feedback as possible so that participants knew where to look and when.




Figure 5.1: Mobile Phone Wireframes

Finding the news stories
The team took the selection of stories very seriously and took the time needed to research and find surprising news that shook the world when it was released. We remembered stories that had stood out for us in the past, and looked for current pressing issues that were making news. We also divided the stories into true reports and false ones. For this project, we felt it was important not to make up false stories but instead to find stories that were released as true before being retracted later. This was crucial for the overall project objective. The team checked multiple sources and initially created a database of 15 stories before shortlisting 6, in a random order of fake and true stories, and thereafter began the UI design process.


Figure 6.1: April and Omid searching for news stories
Figure 6.2: Veda and Omid searching for news stories

User Interface Design
We began the interface design process with the introduction screen. We didn't want to create something static, so we went with a moving background. For the identity design, we wanted to create something striking and beautiful at the same time. We used Adobe Illustrator and Photoshop for all the designs. Another difficult problem we faced was coordinating the three different interfaces, which is why visual feedback guiding participants' attention was so important.


Figure 7: Introduction screen, displayed on projector and mobile devices

The team thought it essential to add a disclaimer screen to ensure that the exercise was well received. While we tried to be as mindful as possible when picking the stories, we knew it was equally important to respect our cohort's and faculty's sentiments. We then shifted focus to the news articles in question.


Figure 8.1: Top right, Screen which displays the article
Figure 8.2: Top left, Screen which displays whether article is true or not
Figure 8.3: Bottom right, Screen which displays disclaimer
Figure 8.4: Bottom left, Screen which displays final results and accuracy percentage


Figure 9.1: First draft of the User Interface
Figure 9.2: Veda hard at work designing two versions of UI, one for projector and one for mobile


Figure 10.1: Right, Mobile Screen UI for ballots
Figure 10.2: Left, Mobile Screen, feedback to allow a user to know they have completed the vote

Controlling the flow of the experience was a high priority. To do this, we decided to create three different pages: a display page to show the news articles and the answers on the projection screen; an admin page, giving a button to a 'moderator' who keeps track of which page is shown; and a user page, which acts as a ballot for every person involved in the experience. We knew how important it was to choose articles that relate to today's world and cover topics that people are very opinionated about. To have more time to find the right questions, we decided to start with a simple structure for the program.

Connection to PubNub
For our first step, we created the connection from the three pages to PubNub and tested the communication between them. The admin page sends data to PubNub commanding which page is to be shown on the other two pages. The user page receives data from the admin page and transmits data to the display page with the user's votes. The display page receives data from the admin and user pages to display the number of votes. Once everything was working, we added all the articles to the display page and tested the program to make sure everything ran correctly. We then added a final page to show the results of the survey and allow the users to reflect on the experience they had just had.
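The routing between the three pages could be sketched as a single message handler (the message fields here are illustrative, not the project's actual schema):

```javascript
// Sketch of the display page's message handling: "page" messages come
// from the admin page and advance the experience; "vote" messages come
// from the user (ballot) pages and update the tallies.
function handleMessage(state, msg) {
  if (msg.type === "page") {
    state.page = msg.page;      // admin moved everyone to a new screen
  } else if (msg.type === "vote") {
    if (msg.believe) {
      state.believe++;          // participant believed the article
    } else {
      state.challenge++;        // participant challenged it
    }
  }
  return state;
}
```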


Figure 11.1: Connection to PubNub and testing
Figure 11.2: Veda and Omid working on UI and Coding
Figure 11.3: Testing of the final infrastructure

Servos and NeoPixel hardware
After creating the basic coding structure and testing it by sending messages from each page to the others, we added communication to the Arduino Micro over a serial connection. Initially, we wanted to use the Feather board and connect it to PubNub, but because of the difficulties we had connecting our boards to OCAD U's WiFi, we decided to stick with the Arduino Micros that we had already gotten to know well. We tested the communication by sending the board angles for the servos based on the votes received in each category. The original idea included lights transitioning through 3 phases: a standby state would show white lights, the polling state would use a colour library from Adafruit to show a rainbow of colours, and finally, when the poll was complete, the lights would turn green. Unfortunately, the code for the pixel lights conflicted with the servo code, so we had to drop the colour library and opt for a solid RGB colour; blue matched nicely with the final designs. The final use of the pixel lights included only one state: flashing on and off with a 10-second delay for each.
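The vote-to-angle mapping sent over serial might look like the sketch below (an illustrative guess at the conversion, not the project's actual code): the share of "believe" votes sweeps a needle across the servo's 0–180 degree range.

```javascript
// Map poll results to a servo angle: all "challenge" votes -> 0,
// an even split -> 90, all "believe" votes -> 180. The resulting
// integer would be written to the Arduino over the serial connection.
function votesToAngle(believe, challenge) {
  const total = believe + challenge;
  if (total === 0) return 90; // no votes yet: centre the needle
  return Math.round((believe / total) * 180);
}
```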

Adding Sound
We wanted the users to be focused on answering, so we decided to add audio recordings of each news item to keep a good pace and allow participants to ingest the content with ease. Adding the sound to the code wasn't difficult once it was recorded and edited.


Figure 12.1: Setting up the circuit for the servos and NeoPixel lights
Figure 12.2: Testing the servos before completing fabrication


Figure 13: Final Circuit Diagram

As a final step to make sure the experience was smooth, we tested each component of the program and ran many trials to make sure everything worked correctly.


Figure 14.1: Running final tests before presentation begins, ballot and article screen working correctly
Figure 14.2: Servo device and article results screen working correctly

Although it wasn't necessary for this project to have any hardware, we all wanted to add something tangible, for two reasons. First, to utilize our student access to a maker lab and learn more about how to use the equipment. Second, we wanted this experience to happen only in person and not be a simple online survey that disappears from your mind the moment it is completed. Our team had an idea nailed down quite early, and we were eager to get the fabrication underway as soon as possible, knowing that other teams would also be using the Maker Lab. The goal was to finish the fabrication process in the first week. We initially wanted to 3D print a casing for the servo motors since we knew the laser cutting machine was down. Using both Illustrator and Autodesk Fusion 360, we created an STL file that could be read by the printer. This was a big learning curve, since no one on the team had used the 3D software before. On Wednesday we met with Reza and were advised to wait for the maintenance of the laser cutting machine to be completed, since the execution of our design would be better on that device. Based on Reza's advice, we went back to the original Illustrator file, which could be used by the laser cutting software. Waiting for the laser cutter to be fixed did throw us off our schedule, but we were able to pull it all together.


Figure 15.1: 3D View of the design, front view
Figure 15.2: 3D View of the design, back view

The first attempt was cut on cardboard to check the dimensions of the design and the quality of the cut patterns. In this process, we realized how small the lines in the background pattern were once laser cut: some of the lines broke as soon as they were touched, partly due to the ripples in the cardboard. To make sure this would not happen in our final product, we went back to the design once again and increased the thickness of the problem areas.


Figure 16.1: Laser cutter in action
Figure 16.2: Prototyping on cardboard, pattern was too intricate and needed to be reworked slightly
Figure 16.3: Prototyping on cardboard, front of the design

We decided to go with a thin layer of plywood. Reza was concerned that some pieces would jump out during the cutting process and hurt either the machine or the design, so he set the depth of the laser incisions to not cut completely through. Since there was a natural curve to the piece of plywood, some pieces came out easily, but other parts needed to be cut out later with an X-Acto knife.


Figure 17.1: Assembling final wooden casing that will house the servo device
Figure 17.2: Cutting out the patterns on the wood body
Figure 17.3: Cutting out the patterns on the wood body, back view

For the final project, we decided to add an LED Strip to the design so that we could highlight the moments when the users had to look at the servos. To hide all the electronics in the design and further infuse the light, we added a layer of parchment paper behind the patterns.


Figure 18.1: Final circuit prototype, back view
Figure 18.2: Adding parchment paper to hide the circuit and diffuse the light more effectively
Figure 18.3: Fabrication of final product

Presentation & Critique
For the critique, we wanted to make sure that everything would go smoothly, so we started early in the day and made sure we had enough time to test the design after connecting everything in the Gallery. Once connected, we ran through a few tests and triple-checked that the servo was working. Once the computer was connected to the projector, the display would not go full screen when we were using Firefox, so to improve the presentation of the project we switched to another browser. Unfortunately, the serial connection was left idle and disconnected before the actual presentation, causing the servos not to move at all.
The feedback we received was positive. The topic was relevant, and many shared similar concerns. There was a concern about the mobile device continuing to show the timer after the vote was cast. There was shock from everyone on seeing the final results page display the overall accuracy: it was quite low, sitting at a 41% accuracy rating. Two articles in particular were quite convincing even though they were both untrue, and most people had believed them. Having done much research on fake news, we expected people to accept false ideas that fell into their own confirmation bias.

Upon reflection, there are a number of minor tweaks we would make to this project based on the flow of the first presentation to a large group of people. First, the sound coming from the computer was not loud enough, and many participants were straining either to hear or to quickly read what was on the screen; a wireless speaker is required. Second, we tried to design the experience to require little interaction from our team during the polling. Working off this assumption, when we noticed a silence in the room or a look of confusion on a participant's face, we realized we needed to be better prepared to guide them through. Third, after the vote was cast, we moved on to reveal the truth behind the story; the issue was that too much content was provided, and we don't think anyone read what was on the screen. This is a flaw in the UX that can be fixed with a bit of editing and by altering the hierarchy of the content. Fourth, and possibly most important, we did not put enough thought into the interface used by the 'moderator.' This interface did not show the timer that the participants saw on their screens, so the moderator was never completely sure when to switch to the next article. Also, if or when the servo device decides not to work, it would be a bonus for the moderator's interface to show the voting results, so that they could at least be delivered to the participants verbally. The team learned a great deal about effectively delivering a message to a group of people using multiple interfaces, effective communication feedback and the importance of presence upon delivery.

Steinmetz, K. (2018, August 09). How Your Brain Tricks You Into Believing Fake News. Retrieved November 26, 2018, from

Meyer, R. (2018, March 12). The Grim Conclusions of the Largest-Ever Study of Fake News. Retrieved November 26, 2018, from

English, J. (2016, November 08). Believe It Or Not, This Is Our Very Own River Yamuna. Retrieved November 26, 2018, from

“The Office” Women’s Appreciation. (n.d.). Retrieved November 26, 2018, from

McLaughlin, E. C. (2017, April 26). Suspect OKs Amazon to hand over Echo recordings in murder case. Retrieved November 26, 2018, from

Gilbert, D. (2018, November 20). A teenage girl in South Sudan was auctioned off on Facebook. Retrieved November 26, 2018, from

The truth behind ‘Fake fingers being used for orchestrating a voting fraud’ rumour. (2018, September 30). Retrieved November 26, 2018, from

Sherman, C. (2018, November 21). Why the women suing Dartmouth over sexual harassment are no fans of Betsy DeVos. Retrieved November 26, 2018, from

ABSTRACT (2018) by Georgina Yeboah

ABSTRACT (2018): An Interactive Digital Painting Experience by Georgina Yeboah

(Figures 1-3. New Media Artist Georgina Yeboah’s silhouette immersed in the colours of ABSTRACT. Georgina Yeboah. (2018). Showcased at OCADU’s Experimental Media Gallery.)

ABSTRACT’s Input Canvas:

ABSTRACT’s Online Canvas:

GitHub Link:

Project Description: 

ABSTRACT (2018) is an interactive digital painting collective that tracks and collects simple, ordinary strokes from users' mobile devices and, in real time, translates them into lively, vibrant strokes projected on a wall. The installation was projected onto the wall of the Experimental Media room at OCADU on November 23rd, 2018. ABSTRACT's public canvas is also accessible online, so participants and viewers alike can engage and be immersed in the wonders of ABSTRACT anytime, anywhere.

The idea of ABSTRACT was to express and celebrate the importance of user presence and engagement in a public space, starting from a private or enclosed medium such as a mobile device. Since people tend to be encased in their digital worlds, absorbed in their phones or closed off in their own bubbles, it was important to acknowledge how significant their presence is outside of that space and what users have to offer the world simply by existing. The users make ABSTRACT exist.

Here’s the latest documented video of ABSTRACT below:


Process Journal:

Nov 15th, 2018: Brainstorming  Process

(Figures 4-6. Initial stages of brainstorming on Nov 15th.)

Ever since Experiment 1, I’ve wanted to do something involving strokes. I was also interested in creating a digital fingerprint that could be left behind by anyone who interacted with my piece. I kept envisioning something abstract yet anonymous for a user’s online input. While trying out different ways of picturing what I wanted to do, I started thinking about translating strokes into different ones as an output, at first just between canvases on my laptop. I wanted to go further by outputting more complex brush strokes from the simple, ordinary ones I drew on my phone: a simple stroke could output a squiggly one in return, or a drawing of a straight line could appear diagonally on screen. I kept playing with this idea until I decided to manipulate only the colour of the strokes’ output for the time being.

Nov 19th 2018: Playing with strokes in P5.JS and PubNub

Using Pubnub’s server to connect P5’s javascript messages I started to play with the idea of colours and strokes. I experimented with a couple of outputs and even thought about having the same traced strokes projected on the digital canvas too with other characteristics but later felt the traced strokes would hinder the ambiguity I was aiming for. I also noticed that I was outputting the same randomization of colours and strokes both on mobile and on the desktop which was not what I wanted.

Nov 21st,2018: Understanding Publishing and Subscribing with PubNub


Figure 9. Kate Hartman’s diagram on Publishing and Subscribing with PubNub.

After a discussion with my professors, I realized that all I needed to do to distinguish the strokes I input from the ones output later was to create another JavaScript file that would publish only the variables I passed into my ellipse calls:

Figure 10. Drawn primitive shapes and their incoming variables, delivered from another JavaScript file inside the touchMoved() function.
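The publish/subscribe split shown in the figure can be sketched as follows. This is a minimal, runnable stand-in rather than the project code, with a stubbed transport in place of PubNub; the channel name and stroke attributes are hypothetical.

```javascript
// Minimal sketch of the publish/subscribe split: one file publishes the
// touch position from touchMoved(), another file subscribes and turns each
// incoming point into a vibrant ellipse. A local stub stands in for PubNub.

const subscribers = [];

// Stand-in for PubNub's publish({channel, message}).
function publish(channel, message) {
  subscribers.forEach((cb) => cb(channel, message));
}

// Stand-in for PubNub's message listener registration.
function subscribe(callback) {
  subscribers.push(callback);
}

// Subscriber side: turn an incoming point into a vibrant ellipse spec.
const drawn = [];
subscribe((channel, msg) => {
  drawn.push({ x: msg.x, y: msg.y, diameter: 40, alpha: 120 });
});

// Publisher side: what touchMoved() would send for each touch point.
publish("abstract", { x: 50, y: 75 });
console.log(drawn.length); // one ellipse spec recorded
```

The key point is that the subscriber owns the drawing style: the publisher only ever sends raw coordinates, and the second file decides how vibrant the output stroke looks.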

Nov 22nd and 23rd 2018: Final Touches and Critique Day

On the eve of the critique I managed to create two distinguishable stroke styles: ordinary simple strokes on one HTML page with its own JS file, and vibrant stroke outputs on the other. The connection was successful. I decided to add triangles to the vibrant strokes and play around with the opacity to give the brush stroke more character. I later tested it with another user, and we both enjoyed how fun and fluid the interaction was.

Figure 11. User testing with another participant.

Figure 12. Simple white strokes creating vibrant strokes on the digital canvas of Abstract.

Here are some stills with their related strokes:

Figure 15. Output of Vibrant strokes from multiple users’ input.

Overall, the critique was a success. When the installation was projected in the public space, users engaged and interacted with the strokes they displayed on the wall. Some even got up and took pictures as strokes danced around them and their silhouettes. It was a true celebration of user presence and engagement.

Figure 16. A participant getting a picture taken in front of ABSTRACT’s digital canvas.


Figure 17. Experimental Media room where ABSTRACT was installed.


Figure 18. Georgina Yeboah standing in front of her installation ABSTRACT in the Experimental Media room at OCADU.

Related References

One of my biggest inspirations for interactive installations that require user presence and engagement, like ABSTRACT, has always been the work of Camille Utterback. Her commissioned piece Abundance (2007) tracked the movements and interactions of passersby in San Jose’s plaza, creating interesting projections of colours and traces across the building. Much of Utterback’s work uses spatial movement and user presence to reflect the life interacting and existing in the work’s space.


Multiuser drawing. (n.d.). Retrieved from

Kuiphoff, J. (n.d.). Pointillism with PubNub. Retrieved November 21, 2018, from

Npucket and katehartman. (2018, November 26). CC18 / experiment 4 / p5 / pubnub / 05_commonCanvas_dots/. GitHub. Retrieved from

Utterback, C. (2007). Abundance. Retrieved from

PittsburghCorning. (2011, April 8). Camille Utterback – Abundance. Retrieved from

The ‘Call Mom’ Project

By Frank, Jingpo, Tabitha

Project Description:

Mom misses you. She wants to know why you never call. The ideal moment to call Mom would be that fleeting period of time before bed where she’s snuggled under the blankets with a hot tea and a good book, just about to drift off to sleep. But of course she hasn’t told you this; you’re just expected to know through a nonexistent form of parent-child telekinesis.

Call Mom is an Arduino-based networking project that uses a light sensor to determine when her reading lamp is switched on and sends you a notification that she’s in relaxation mode, ready to hear from you. The device is housed inside a vintage book that blends in seamlessly with her bedroom decor. Powered by a simple battery pack, it’s a low-maintenance, internet-connected device that sits inconspicuously by her bedside. Some moms will want to know how it works and others won’t care in the least, but it’s a universal truth that moms just want to hear your voice and know that they haven’t been forgotten amidst the chaos of your busy life.

Github Link:


The first iteration of our project was a simple device that allowed parents to send their young children messages while they were at school, kind of like a kid-friendly pager. As we continued to develop the idea, we discussed how children have trouble perceiving time the way adults do, so parents and teachers could program reminders with friendly icons to mark key moments throughout the day. Based on Frank’s initial sketch, we decided that the object should resemble a wooden children’s toy, with an LCD screen in the front and a simple button on top for the child to confirm that they had received the message.

Frank’s first sketch for the kid-friendly messaging device.

We quickly ran into trouble when we realized that the Feather and our LCD screen were not compatible with PubNub. After discussing the situation with Kate, we determined it was best to pivot to a new idea. In the brainstorming session we explored other ways of marking the passage of time, but many of these ideas felt like watered-down versions of Google Calendar. So we went back to the initial concept: networking. What does it mean to network? Why do we seek connection? Rather than think it through intellectually, we distilled our project down to the universal feeling of separation through distance. Longing to be with the person you care about the most. Late-night calls to your loved one, miles apart, still knowing that you’re both looking out the window at the same night sky. I just want to know that you’re thinking of me.

While discussing new directions Frank made drawings on the blackboard to help solidify our ideas.

We continued to explore the idea of a remote and networked self. Tabitha explained how she keeps her favourite travel destinations on her phone’s weather app to help her imagine that she’s somewhere else. On a rainy Toronto day she can see the current weather in Paris and concoct an escapist fantasy of the adventures she would be having if she were there instead. Jingpo told us that she does something similar to help her imagine what her mom is doing overseas. She described the experience of never knowing the best time to call her mom, who lives far away in another time zone. The best time to call is usually right before bed. Frank mentioned hearing about a project where the artist created a networked sensor that would alert him when his mother was seated in her favourite chair. It was these elements combined that caused the “ah-ha!” moment: a light sensor that could send you a message when Mom’s bedside lamp turns on and she has settled down for the night with a cup of tea and a good book. Now we had a project to address the question “What’s the best time to call Mom?”

Coding Process: (Frank)

Hardware & Coding the Device

This is the hardware we used to achieve this project:

– Adafruit Feather ESP8266

– 7mm photoresistor

– 1K Resistor

The Fritzing diagram for the circuit:


As you can see, the circuit is super simple. We focused primarily on the functionality of what we were trying to achieve.

What do the parts do?

The photoresistor is a simple device that measures the amount of light it receives. It sends this constantly changing number to an analog pin on the Arduino (analog pins are labelled with a letter before the number; we used A3, since A1 and A2 cannot be used in this instance because WiFi on the ESP8266 disables those two pins).
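The sensor sits in a voltage divider with the 1K resistor from the parts list, so the number on the analog pin is just the divider output scaled to the ADC range. Here is a back-of-the-envelope sketch of that mapping, written in JavaScript for illustration; it assumes a 10-bit ADC (0-1023) and a wiring where the photoresistor sits between the supply and the pin, so the actual values on the board will differ.

```javascript
// Illustrative voltage-divider math (not project code). With the LDR on the
// supply side and a fixed 1K resistor to ground, the pin voltage is
// Vout/Vcc = Rfixed / (Rfixed + Rldr), scaled to a 10-bit ADC reading.

function adcReading(ldrResistanceOhms, fixedResistanceOhms) {
  const fraction = fixedResistanceOhms / (fixedResistanceOhms + ldrResistanceOhms);
  return Math.round(fraction * 1023);
}

// Bright light: LDR resistance drops, so the reading rises.
console.log(adcReading(500, 1000));    // brighter room
// Darkness: LDR resistance climbs, so the reading falls toward zero.
console.log(adcReading(100000, 1000)); // dark room
```

This is why a single threshold works: turning on the lamp moves the reading by a large margin compared with ambient light.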

When the Arduino is powered on, the loop() function constantly monitors the sensor reading, and when it goes above a certain number it triggers an event that sends a message to IFTTT (an internet service that provides hooks into various notification protocols). The IFTTT service, which in our case uses Webhooks, sends an email and a notification to the user ID set in the Arduino code.

We can easily extend this notification to reach up to 12 different email addresses if we use the Gmail action through IFTTT. We did not use this during our presentation or in the code, as it was unreliable at times and was causing our code to get glitchy.

The code also makes sure we receive only one notification until the lamp is turned off; if the lamp is turned on again, it will send another notification.

A feature we would like to add in the next iteration is time of day: the code would bypass the notification outside of 8pm-12am. This would also make the device very power efficient if we used a switch to power down the WiFi when it is not needed, which would allow the device to be made very small with different hardware and a better power-management circuit.
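The proposed time-of-day bypass is a simple check; sketched here in JavaScript rather than Arduino C++, with the 8pm-12am window taken from the description above.

```javascript
// Sketch of the proposed time-of-day bypass: only let a notification
// through between 8pm and midnight (24-hour local time).

function shouldNotify(hour, lampJustTurnedOn) {
  const inWindow = hour >= 20 && hour < 24; // 8pm-12am
  return lampJustTurnedOn && inWindow;
}

console.log(shouldNotify(21, true));  // true: lamp on at 9pm
console.log(shouldNotify(14, true));  // false: 2pm, outside the window
```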

Trouble Shooting the Code:

The code, although straightforward, needed some mental gymnastics to get sorted out. These were the main challenges facing us.

1. How can we reliably get the notification to trigger? We were given PubNub as an internet protocol to use, which is essentially a messenger service: we would have the Arduino send a message to PubNub, which would in turn send us a notification. This was easier said than done. PubNub has a lot of APIs that connect to it which could do this, but setting them up on the server side and figuring out the API documentation in a matter of a few days was very technical. We are not inherently coders, and most of these technologies are introduced to us at the start of the week alongside two or three other projects we have on the go. Given that limitation, we looked for a simpler solution that could meet the wireless communication needs of this project. IFTTT was the answer; there were good YouTube videos demonstrating how to set up the applet in IFTTT and use the API key. The one I referred to is listed in the references below.

2. The second problem we faced was that the trigger would keep going off as soon as the light was turned on. This is a simple fix in retrospect, but at the time it was a head-scratcher for people who don’t come from an intensive coding background. The solution was a conditional boolean flag that bypasses the trigger once the light has been turned on and the flag is set to true.

3. The final piece of the puzzle was getting the notification to trigger reliably and finding a good threshold light reading that would not trigger in normal lighting conditions. We would also have liked to use the Gmail action, which would notify multiple people (siblings) at the same time, but this proved unreliable because of the IFTTT service. It still works and can be used in the code, but it might skip some of the times the lamp is turned on, as IFTTT has an issue with this applet.
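The boolean bypass and the threshold described in points 2 and 3 can be combined in a small state machine. This sketch is in JavaScript rather than the project’s Arduino C++, and the threshold value 600 is illustrative, not the tuned reading from the installation.

```javascript
// Sketch of the two fixes above: a light threshold tuned so ambient light
// doesn't trigger, and a boolean latch so only one notification fires per
// lamp-on event.

const THRESHOLD = 600; // illustrative; the real value was tuned by hand
let lampOn = false;    // the latch from fix #2
let notifications = 0;

// Called once per loop() pass with the current sensor reading.
function checkSensor(reading) {
  if (reading > THRESHOLD && !lampOn) {
    lampOn = true;      // latch: suppress repeats while the lamp stays on
    notifications += 1; // stand-in for the IFTTT webhook request
  } else if (reading <= THRESHOLD) {
    lampOn = false;     // lamp off: re-arm for the next evening
  }
}

// Lamp on (three readings), off, then on again: repeats are suppressed,
// so exactly one notification fires per lamp-on event.
[100, 700, 720, 710, 100, 750].forEach(checkSensor);
console.log(notifications); // 2
```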

The Internet of Things

We looked to the Internet of Things for inspiration in building this project, which we envision becoming a standalone product in the future. The electronics are embedded in a real book, creating a sense of continuity with a nostalgic artifact we are all used to having on a bedside table. We would like to add a practical use to the book in the future, such as recording all the times we did call our mom because of it. It also blends into our daily lives, silently performing what some would consider a mundane task, but it links us across continents, giving us the tangible feeling of knowing our loved ones are getting ready to call it a night.

We could extend this project by adding an internet log of every time the lamp was switched on and off, which would show our parents’ sleep patterns, though we question the usefulness of this and whether they would want that kind of monitoring. It could also be used for parents in aged-care homes: a gentle nudge on our phones letting us know the rhythm of their lives.

Where the wild things are

We are taken into the world of the imagination, where we find a thread of connection in a simple SMS, sent not by our moms but by a machine placed silently beside their bedside tables, reminding us that they matter. We live in a world of over-communication from all sides; we may receive a message every day from loved ones forwarding jokes or making casual remarks, but there is something magical about a voice, and about the time we spend reflecting on the time and space they might be in.

A sketch I did thinking about our concept over the weekend.


Fabrication: (Tabitha)

We decided early on that it was important for our project to feel nostalgic. Working from the idea of a mom’s bedside table we began to think of objects that could house a light sensor, Arduino and battery pack that wouldn’t feel out of place. We settled on a book because it would be large enough to house the components and could be sent through the mail to wherever Mom lives.

Items from my home including the book corner.

I looked around my apartment for objects that fit within the vision we had for the project. My family is really into antiques and my husband works for the library so I had no shortage of materials to choose from in our home. In selecting these objects I tried to create a vignette with universal appeal. Even though the items come from my family it was important that I wasn’t recreating the bedside table of my own mom. To connect emotionally with our audience they needed specific details but also the freedom to project their own Mom onto these objects. So in our case that meant a cup of tea, an assortment of books, a lamp and an old family portrait.

Building the book required several hours of gluing and snipping.

After selecting a musty copy of Heidi from the never-to-be-read section of our bookshelf I headed to the Maker Lab where Reza shook his head and said there were no shortcuts in this case, each page had to be cut and glued by hand! At over 250 pages this had me wishing I had chosen a smaller book… but thankful that I hadn’t picked something ridiculous like Ulysses. The light sensor was placed on the front cover of the book and two holes were drilled to connect the wires to the Arduino. At the end I attached velcro to the front cover so the components would stay in place.

The final setup at the Grad Gallery.

Before the presentation we set up the table with the light-sensitive book as well as props to help build the world of this fictional mom character. There was a cup of tea – brown rice tea, low in caffeine since it’s just before bed! She keeps by her bedside a copy of Heloise’s Kitchen Hints as well as  Machiavelli’s The Prince and we leave you to decide which of the two books has the greater influence! After it was all set up we did one more test to make sure we were receiving messages from the Feather to our phones.

Future Development of a User Interface: (Jingpo)

We decided to keep the main function and the user interface simple this time, but for future development of this project we are thinking of a web interface that provides users with other useful ancillary functions. After the class critique last Friday, we found one interesting insight: sometimes it’s very difficult to call our parents when we haven’t been in contact for quite a while.

We got a very good reaction to this project from the international students in our class. The emotional response while demonstrating the project exceeded our expectations; many said they would buy the device if it were for sale.

Follow-up functions for products:

Possibilities for a calling interface.


1. Web page: When you receive an email or text message alert, you could click through to an external webpage.
The whole webpage would be created in p5.js. The image on the page changes when the light is turned on and off, and users can visually check data such as local weather, current temperature, humidity, air quality, date and time. We found it is possible to request the weather for the district of the city/town Mom lives in by passing its city name as a parameter in the API request.

2. Sensors:
If possible, we could add a temperature sensor to the device, so users would know not only their local temperature but also the actual temperature at Mom’s home.

3. Click-to-call links:
In most cases an international call is very expensive, so people usually choose to call their moms online. It would be great to create click-to-call links for mobile browsers, so they can call their mom directly through a link without downloading or opening a video chat application such as FaceTime or Skype. We found meta tags and syntax that call the Skype or FaceTime applications from a website.

4. Data generator:
Hopefully we can access the user’s personal data and provide some useful statistics, such as “What is her average bedtime?”, “When did you last call her?”, or “How long did she read yesterday?”. We care about our parents’ health and want to know if they go to bed on time even though we don’t call them every day.

5. Chat topics:
We are very interested in the insight that sometimes it feels difficult to call our parents. You miss her voice and want to call, but something holds you back; after struggling a bit, you end up choosing to text instead. If possible, we could randomly suggest topics you can talk about with your mom.
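The click-to-call idea in point 3 relies on standard URI schemes that mobile browsers hand off to the dialer, Skype, or FaceTime. This is a small sketch of generating such links; the contact values are placeholders, and scheme support varies by device and installed apps.

```javascript
// Sketch of click-to-call link generation using standard URI schemes.
// Contact values below are placeholders.

function callLinks(phoneNumber, skypeUser, facetimeAddress) {
  return {
    tel: "tel:" + phoneNumber,               // native phone dialer
    skype: "skype:" + skypeUser + "?call",   // Skype call action
    facetime: "facetime:" + facetimeAddress, // FaceTime on iOS
  };
}

const links = callLinks("+15551234567", "mom.skype", "mom@example.com");
console.log(links.tel); // tel:+15551234567
```

On the notification webpage, each of these strings would simply become the href of an anchor tag, so tapping it opens the corresponding app.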

A tracker that documents mom's routine before bed.

Class Critique and Conclusions:

Presenting our project to the class.

We had discussed doing a role play scenario where one of us would act as the mom but that didn’t seem like the right direction. As we were setting up Tabitha had the thought of calling her real mom during the presentation – coincidentally they had just been talking on the phone that morning. So Tabitha sent a synopsis of the presentation as well as some photos of the setup and told her mom to be herself and say whatever she’d normally say before bed. She was very excited to be asked to participate in the project!

Sending secret messages to my mom in class.

The feedback we received was complimentary, but most striking was the emotional response that happened while demonstrating the project with Tabitha’s mom. We were able to see the heart of the project reflected in the faces of our classmates. There was a sense of understanding why this simple device matters and how it can make a big difference in a very small way.

We were asked to think about practical aspects like future iterations of the device. Suggestions included shrinking it down to bookmark size, repurposing it for different family members, and networking it with communication messengers to develop a calling interface. The gallery project could be expanded by creating multiple character vignettes using the bedside table theme. However, no technology can fully address the question “Now that she’s on the phone, what do I say to her?” Sometimes it’s very difficult to relate to our parents, as they are disconnected from our day to day. But perhaps that’s beside the point. Moms are resilient, and all they want is a brief acknowledgement that they are loved through the simple act of saying goodnight.

References/Context: The following are some items, blogs and resources that inspired us and helped develop our project.


This weather station was the initial inspiration for our parent-child communication device. The plan was to use its LCD screen; however, we changed the direction of our project.

As we were still exploring the calendar idea we found this project from a previous digital futures class.

This was useful in trying to troubleshoot the lcd screen and arduino connections.

This is the video I referred to for help with setting up IFTTT; he uses block code to set up the function.

This is the block code editor for the ESP8266. It makes troubleshooting a little easier if you can’t follow normal syntax, and it also provides the C++ code: if you build your logic in the block code editor, you can copy and paste the generated code into your Arduino sketch.


Tabitha – The inspiration to call my mom came from years spent listening to Jonathan Goldstein interview his family on CBC’s Wiretap as well as a general interest in ‘ordinary people’ as performers. Three years of Second City training has taught me the power of unscripted acting for its spontaneity and truthfulness, but I especially love it when untrained actors are brought onstage. All it takes is a short briefing about the premise and away they go. That’s when the magic happens.

