Category: Experiment 3

LateBoard [EXPERIMENT 3]

Afaq Karadia, April Xie



Custom-made for student lounge spaces at OCAD University, LateBoard is an interactive bulletin board that lets students remotely notify their peers that they are running late, and where they are.

Via a URL, users can select from six locations: home, coffee shop, 100 McCaul, 205 Richmond, travelling by TTC, and the bathroom. The corresponding building on the LateBoard then lights up as a physical notification for students in the lounge.

This prototype is meant to be installed and displayed in the Digital Futures Grad Lounge (Room 610, 205 Richmond Street).









********************************* HOW LATEBOARD WORKS  *********************************

  1. User A opens URL
  2. User A selects one of six location buttons to “turn on”
  3. Selected button will trigger the light for the corresponding location on the LateBoard in the DF Lounge
  4. User A turns location “off” by pressing button again. Light for the corresponding location on LateBoard turns off. The light is now available for others to turn on.
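The toggle logic in steps 1-4 can be sketched roughly as follows (plain JavaScript; the location names come from our list above, but the function and message shapes are illustrative, not our production code):

```javascript
// Sketch of the LateBoard toggle flow. The publish callback stands in
// for whatever transport is used (e.g. a PubNub channel); its message
// shape here is an assumption for illustration.
const LOCATIONS = ["home", "coffee shop", "100 McCaul", "205 Richmond", "TTC", "bathroom"];

function makeBoard(publish) {
  // One boolean per location; true = the light is currently on.
  const lit = new Array(LOCATIONS.length).fill(false);
  return {
    // Toggle a location on/off and notify the physical board.
    toggle(index) {
      lit[index] = !lit[index];
      publish({ location: index, on: lit[index] });
      return lit[index];
    },
    isLit(index) { return lit[index]; },
  };
}
```

Pressing a location button calls `toggle` once to turn the light on, and pressing it again frees the light for someone else, matching step 4 above.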

********************************* PHASE 1 – IDEATION *********************************


We were interested in making:

  1. a physical group “dashboard” or notification board,
  2. designed for a specific meeting space,
  3. to replace a particular type of message commonly sent among those who use the space.

We decided to design a notification board for the needs of our Digital Futures 2018 cohort. We chose our cohort’s main meeting hub, the DF Grad Lounge (sixth floor, 205 Richmond) as our target location. 

Our DF cohort actively flips back and forth between various chat-groups on WhatsApp – the more group projects, the more chat-groups are made. One of the more common messages sent around is people’s whereabouts as they are rushing to the Grad Lounge for a meeting.

The Grad Lounge could be considered a “physical forum”, in which people log in and out (enter to chat, then exit). We decided to play with this idea of the Lounge as a physical user forum, and prototype a physical “dashboard” to match. The board would allow students running late to the Lounge (forum) to let others know of their whereabouts.

We hoped to create a physical, ambient alternative to the dozens of “running late to the Lounge” messages sent within our cohort each day.


We decided to implement six location options for this iteration of LateBoard. We thought about prominent locations we all came to frequent as OCAD students, and settled on the following:

Home sleeping

Coffee shop

100 McCaul

205 Richmond

Travelling via TTC

Bathroom

Initial material ideas:

  • Lo-fi materials for a playful feel
  • Copper wire / tape circuits
  • LEGO



We designed this board to be installed on a prominent wall in the Digital Futures graduate lounge space, room 610, 205 Richmond Street.

The grad lounge is the primary meeting space and work hub for the students of the Digital Futures graduate program. Its multiple circular tables are constantly occupied with students, prototypes, and stray parts. The location for the board had to be in a central spot, but not visually or physically intrusive, as we did not want to add to the clutter. 

We explored both horizontal (table-top) and vertical (poster-style) configurations for the board.

Concept 1: Whimsical Lo-Fi Material

Horizontal table-top 3D city model

Humorous Lo-Fi playful aesthetic

Location ‘hotspots’ animated with LEDs, servos, and speakers

(e.g. coffee cup lid flapping open and shut, toilet making flushing sounds, snoring coming out of a house.)

Possible materials: LEGO, foamcore or 3D printed components


Conclusions: We realized that a horizontal configuration was not well suited to the environment of the Digital Futures lounge. The space is jam-packed with group seating, and table space is a precious commodity. Taking up additional horizontal space in the lounge would not enhance the environment – it would only get in the way.

Concept 2: Ambient Furniture and Departure Boards

Large vertically hung board; sleeker, more ambient quality

Rationale: The DF lounge is noisy, crowded, and we wanted something that would enhance the ambience of the room and evoke whimsy and calm, while not adding to the distraction and cacophony of the space.  

We were inspired by MIT Media Lab researcher David Rose’s philosophy for designers to aim for IOT objects in home spaces that are “seamless and transparent”.

David Rose advocates for smart objects designed for peripheral attention: subtlety and seamlessness, minimal demand for attention, everyday gestures, glanceability, and subtle incidental interfaces. A summary of his portfolio of work can be found here.

Our objective then became to blend LateBoard into the background so as not to distract, while remaining engaging enough for passersby to notice.

We were further inspired by airport departure boards. A type of status board familiar to all, departure boards are vertical, ambient, and able to blend into the environment, yet they prompt engagement from passersby with gentle, rhythmic screen transitions.


They are large, clean, prominent, informative, and inoffensive in the airport environment.

Other examples of physical internet-connected dashboards were few and far between – most dashboards we found were limited to displaying analytics on a screen.


David Rose’s Google Latitude Doorbell, part of his Ambient Furniture project at MIT Media Lab, was the most compelling example we found of an ambient IOT dashboard.


The piece “…plays subtle chimes to tell you when someone’s on their way, and getting home soon. Knowing that Mom’s ten minutes away lets you put the pasta on at exactly the right time. Each person in the house has their own signature chime.”


Sketch of Concept 2 – six LEDs embedded via copper wire / tape circuit in the outline of a cityscape

With these examples in mind, we came to our second aesthetic and design iteration.

***********************PHASE 2: EXPLORING CAPABILITIES ********************


With one week to go before critique day, we understood that the scope of the project might not allow us to build all the desired capabilities into this prototype of LateBoard.

We divided and prioritized the capabilities desired. We agreed to focus first on making a robust prototype that showcased priority 1, then consider priorities 2 and/or 3 and/or 4 if there was time.

The priorities we agreed on, in order:

  1. 6 LEDs on LateBoard are activated remotely via serial network  
    • Possible extra animation with one servo
    • Wiring with regular connecting wire, circuitry hidden at the back of the board
    • The details of “city map” will be drawn or laser cut
  2. Connecting 6 LEDs on LateBoard with copper wire / tape
    • Wiring LEDs with copper wire / tape, arranged as outline of city scape (Concept 2)
  3. Component on LateBoard to identify who was sending the incoming message
    • Web interface would include textbox to submit name
    • Possible options for displaying message sender on LateBoard:
      • LED matrix with scrolling letters of name user enters in textbox
      • Speaker that reads name out loud using P5 speech library
      • One LED per person, lights up when that person is messaging. Interface asks you to select location, and select who you are (out of 20 people in cohort)
  4. Ability for users in proximity to LateBoard to send request for location to anyone with identifier on LateBoard
    • What this would entail: identifiers would need to be activated by a button, sending a message to user
    • Challenge: circuitry too complex; logistics of creating an app for the interface is difficult; are we putting too many restraints on participation?


  • We were fascinated with the conductive and malleable properties of copper wire and tape. Copper tape has become a cult crafting favourite for creating DIY circuits for greeting cards and other inventive paper electronics. We purchased the Chibitronics starter kit to investigate making copper tape circuits for LEDs. Chibitronics was started by Jie Qi, a PhD candidate at the MIT Media Lab, who developed small LED stickers with a built-in resistor that make copper tape circuits easy to build and light with a coin cell battery.
  • In order to connect a copper circuit with Arduino, we were aware we’d need to use alligator clips.


  • Upon exploration of copper circuitry, we realized that turning the wiring into art would be another project unto itself. We decided to focus on getting basics up and running first – lighting six LEDs through serial communication.
  • We would compromise regarding design by laser-cutting the city onto plywood.
  • The design would be subtle, yet playful. Diffused lights behind sand-blasted acrylic for the windows of our buildings would provide the ambient notification for the LateBoard.

***********************PHASE 3: EXECUTION ********************

  1. Got basic Arduino and P5.js sketches talking to each other via serial communication. Used p5.serialcontrol to bridge communication between Arduino and P5.js
  2. Added the control of one LED via p5.js over serial communication
  3. Extended sketch to control six LEDs via p5.js over serial communication
  4. Purchased 27” x 48” piece of ⅛” birch plywood for the board
  5. Made and submitted a .ai design file for laser cutting onto the board. Select segments of the design were cut out so LED light could shine through
  6. Purchased and soldered six neopixel rings for the board. We selected neopixel rings over regular LEDs for programmable colour capabilities – could play around with colour change as added dimension of communication, if there was time
  7. Imported neopixel library into Arduino sketch, connected with p5.js over serial communication
  8. Purchased acrylic sheets and added sandblast texture with sandpaper; attached to back of board with hot glue to cover building windows
  9. Made and attached cardboard boxes at back of board to house and hide hardware behind each window
  10. Soldered extra wiring onto neopixels and inserted one neopixel ring per box; connected final circuit to Arduino, assigned boxes with a number for the final p5.js and Arduino sketch
  11. Designed icons and web interface
  12. Tested web interface with neopixel ring circuit
  13. One neopixel burnt out due to inadequate soldering – purchased a new one and re-soldered
  14. Uploaded sketch to PubNub so that any user can access our interface via URL
  15. The laser-cut lines, although beautiful, were too thin to see from a distance. For the purposes of the final video, we filled the lines in with black pen, and coloured the buildings with marker.
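Steps 1-3 and 7 hinge on the message format sent from p5.js to the Arduino over p5.serialcontrol. A sketch of one possible encoding (the one-character-per-light format here is an assumption for illustration, not necessarily the exact protocol we used):

```javascript
// Encode the on/off state of the six neopixel rings as a single
// serial line, e.g. [true,false,true,...] -> "101000\n". On the
// Arduino side, the firmware would read the line and set each ring.
function encodeState(lights) {
  if (lights.length !== 6) throw new Error("expected six lights");
  return lights.map(on => (on ? "1" : "0")).join("") + "\n";
}
```

In a p5.js sketch this string would be written with something like `serial.write(encodeState(...))` whenever a button is toggled on the web interface.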




Final board – back

Front of board


100 McCaul – detail


100 McCaul – detail with colour

***********************FUTURE ITERATIONS********************

In future, we would like to incorporate the following into LateBoard:

  1. More “late locations” to include on board
  2. Increase thickness of laser cut lines; add deliberate colour scheme 
  3. Play with ways to identify user who is sending the “late” message, e.g. neopixel colour capabilities, LED matrix scrolling names, etc. 
  4. Play with ways to visualize more than one person per location (is there a way to concentrate or change colour depending on the amount of people logging in?)
  5. Updating interface to display information on who is where, along with the physical board
  6. Conceive way for LateBoard and interface to display users who were currently in the lounge (who is “logged in” to the forum)
  7. Explore ways for users within proximity of physical board to ‘nudge’ remote users to send their location



by: Mahsa Karimi & Rana (Nana) Zandi
Experiment #3 (Messaging)

P5 Sketch:
Controller One:
Controller Two:




Voyager is a two-player computer game set in outer space. Each player has a unique controller made of different sensors (inputs) to play the game.

Player One
This controller uses an ultrasonic sensor to direct the spaceship to Mars without hitting any asteroids. Depending on the distance of the player's hand from the sensor, the spaceship moves between the left, centre, and right paths, trying to escape the dropping asteroids. The spaceship has 4 lives to get to Mars.

Player Two
This controller uses three push buttons to prevent the spaceship from reaching Mars by dropping asteroids on it. Each button is associated with a path (Left, Right or Centre) in the game. This player follows the movement of the spaceship to drop asteroids accordingly, and has 12 attempts to stop the spaceship.


The game starts with the spaceship launching from Earth, heading to Mars. Along the way the spaceship will pass through obstacles in the form of asteroids. The spaceship has a total of 4 lives: if 4 asteroids hit the spaceship the game will be over, and if the spaceship has passed 12 dropping asteroids then it has successfully landed on Mars. The graphics below show different scenes from the game:
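The lane mapping and win/lose rules described above could be sketched like this (JavaScript; the distance thresholds in centimetres are invented for illustration, not our calibrated values):

```javascript
// Map an ultrasonic distance reading to one of the three paths.
// Thresholds are illustrative; the real controller was calibrated by hand.
function distanceToLane(cm) {
  if (cm < 10) return "left";
  if (cm < 20) return "center";
  return "right";
}

// 4 asteroid hits ends the game; surviving 12 drops means landing on Mars.
function gameStatus(hits, passed) {
  if (hits >= 4) return "game over";
  if (passed >= 12) return "landed on Mars";
  return "in flight";
}
```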




Initially, we went forward with a completely different idea. As we progressed further in its development we realized we were not fulfilling the assignment requirements: the Arduino half of the project was forced into the equation and could have been eliminated. The outcome of the first ideation would have been the same regardless of whether the sensor on the Arduino was sending data or not.
This idea involved a light sensor attached to the computer, measuring the amount of “blue” light an individual was receiving in a day. The result would then have been posted on a web page where friends could compare each other's exposure to light from their computers.

As a team we decided to start over and come up with a concept that not only met all the requirements, but also made the outcome more engaging. At last we decided to create a game, and made sure that the end result targeted the objectives of the project. Some of the games that we looked into as part of research and ideation were “Asteroids” and arcade car games, which involved a moving object shooting at dropping obstacles.





Initially we thought of a game where a boat moves horizontally from one side of the page to the other; once it reaches the end of the path it jumps up a line and continues on. Along the way there were obstacles that the boat had to jump over to make it successfully to the finish line.

In order to give the player a sense of movement, the boat itself had to ride through the path from right to left. To make the game more interactive and the movements cleaner, we decided to further polish the idea before we started coding.


As a group we are both fascinated with outer space and decided to come up with a game that revolves around it. What better than going to Mars? In this iteration we decided to have the main object (spaceship) stationary along the y-axis, moving only slightly to the left or right to escape the dropping obstacles (asteroids). To give the players a sense of movement, we designed vertically moving stars that continuously drop from the top of the page to the bottom, giving the illusion of vertical movement.




Having developed the concept behind the game, we moved on to designing the characters. Each character was designed in Illustrator and later imported as a .png file into P5.js.




The narration portion of VOYAGER, including the visuals, is based on the popularity of the idea of making humans a multi-planetary species. Elon Musk recently debuted his plans for colonizing Mars. As space exploration technology advances and costs decrease, dreams of humans exploring the galaxy get closer to reality. We wanted to further popularize the “Mars Generation” through our game.

Our other motto for the project was a quote by U.S. President Barack Obama. In a major space policy speech at Kennedy Space Center on April 15, 2010, he predicted a crewed Mars mission to orbit the planet by the mid-2030s, followed by a landing: “By the mid-2030s, I believe we can send humans to orbit Mars and return them safely to Earth.” Although it is still a few years until what he predicted, we wanted to keep up the excitement by introducing a game revolving around the same topic.


This controller included an ultrasonic sensor that moves the spaceship from one path to another (Left, Centre or Right) according to the distance of the player's hand.










This controller included three push buttons; each button is associated with a path within the game (Left, Centre or Right). The player pushes the buttons according to the position of the spaceship.


  1. This project was an opportunity for us to further explore P5 and Arduino coding, bringing everything together to design an interface that includes both.
  2. We were introduced to networking, using PubNub for real-time data streaming.
  3. Exploring the world of computer games.
  4. int vs. long: the long type is a longer (32-bit) format of the int type.
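To illustrate learning (4): on an Arduino Uno an int is 16 bits (-32,768 to 32,767) while a long is 32 bits, so values that outgrow an int wrap around, which is why things like millis() belong in a long. A small JavaScript simulation of that wrap-around:

```javascript
// Simulate how a 16-bit Arduino `int` wraps around when a value
// exceeds its range: keep the low 16 bits, then reinterpret the
// result as a signed number.
function toInt16(n) {
  const x = n & 0xFFFF;               // keep low 16 bits
  return x >= 0x8000 ? x - 0x10000 : x; // reinterpret as signed
}
```

For example, storing 32768 in a 16-bit int silently becomes -32768, a classic source of timing bugs.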


  • More ergonomic controllers. In the case of the ultrasonic controller, being able to hide the sensor entirely. This can be done if the sensor is placed on its side inside the box; the gesture in front of the sensor would then be horizontal rather than vertical.
  • Smoother transition of visuals – When 12 asteroids are passed the images of Mars, Astronaut and the welcome sign just appear on the screen. There can be a smoother transition.
  • Visual representation of the number of lives and number of asteroids left. At the moment the players have to remember the numbers.
  • Start button to start the game.
  • Refresh button or play again button within the interface.
  • Introducing whole body interaction (movement) for the player with the ultrasonic sensor.


  5. Creation and Computation in class examples





Project description

This is a wearable project about painting with Neo Pixels through speech. Spoken words are converted to visuals associated with the word or words. This can be done remotely through messaging, where we can remotely paint each other's accessories. We propose to do this through pixel conversations in which two people wear Neo Pixel accessories.

Since we wanted to use speech, the most natural tool for expressing intimate emotions and a fun trigger for our project, we started looking for a speech recognition library. However, we found that this technology is not widely supported: the Google Chrome desktop browser supports it, while the Chrome mobile browser does not support it at all. Since we were also using p5.js as our visual design library, we found it easy to use the p5.speech library for the speech recognition part. p5.speech is a JavaScript library that provides simple, clear access to the Web Speech and Speech Recognition APIs, allowing for the easy creation of sketches that can talk and listen.
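As a sketch of the word-to-visual mapping that a p5.speech onResult callback could feed, here is a minimal lookup (the trigger words and RGB colours are invented for illustration, not our actual palette):

```javascript
// Hypothetical trigger-word palette: each recognised word maps to an
// RGB colour that would be sent on to the Neo Pixel display.
const PALETTE = { love: [255, 0, 128], sun: [255, 200, 0], sea: [0, 100, 255] };

// Return the colour for the first trigger word in a recognised phrase,
// or null if nothing in the phrase matches. In the sketch, this result
// would come from the speech recogniser's result string.
function wordToColor(phrase) {
  for (const word of phrase.toLowerCase().split(/\s+/)) {
    if (word in PALETTE) return PALETTE[word];
  }
  return null;
}
```

In a browser sketch, `wordToColor` would be called from the recogniser's result callback, and any non-null colour forwarded over serial to the Arduino driving the pixels.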


Circuit diagram

This is the circuit diagram for the Neo Pixel display, where the incoming string is sorted and then converted to Neo Pixel display.


Code Links:

Library Link:

Simple speech Recognition Example:


Messaging Structure











Code: One of the coding challenges we ran into was making speech recognition continuous and setting English as the default recognized language. Although the library supports continuous recognition, and there is a piece of code to achieve it, in real browser tests the browser only stayed active for a short time: once it stopped receiving voice data, it went into an inactive state, and we could only refresh the page manually. We might need to add a “click the button to refresh the page” function in the final product. Secondly, because the coder's desktop language setting was Chinese and English, no matter what you said, the speech was always interpreted as one of two possibilities, English or Chinese. This was very annoying, as we wanted English words as the trigger words. But in a second test on a tester's desktop whose language setting was English only, the problem was solved. That means the speech recognition depends on the user's language setting: if a user's local language is not English, the data will never be able to trigger the function and run the Arduino.

Neo Matrix: Tricky library by Adafruit.

Experiment 3: Lost in Space



by Ania Medrek and Orlando Bascunan

Link to code:

Link to process video:

Link to circuit diagram:

Lost in Space is an interactive shape and colour recognition game. The game has physical and digital components: a game box placed in front of players on a table, and a web interface players use to input responses. Up to 10 players can join.





Designate who will be the Game Master. The Game Master controls the server, which allows him to trigger the game to start. The game box is placed in front of the players (everyone must have a clear view of it). Each player opens the game page in a browser on their laptop.

When all players have signed up to play by inputting their names, the Game Master will trigger the game to start.


Inside the game box, there are five cards that are programmed to pop up one at a time. Each card shows one colour and one object (ex. green alien). When the first card is triggered, players have five seconds to determine its colour and type and compare it with the four cards that have appeared on their screens. The first player to click on the card that DOES NOT have the colour or object on it gains the most points. The players have five seconds to click a card. After ten rounds, the player with the most points wins.






As a group, we agreed early on that a game would be a fun way to incorporate networking, messaging and notifications. We were inspired by games like Ghost Blitz and Anomia which are simple and easy to learn. Our first challenge was to pick a theme for the game and work out the game structure. The space idea had a wide potential for objects and colour, so we decided to use a green alien, a purple robot, a blue planet, an orange spaceship and a red UFO as the basis of the game. We drew from other card-matching games to come up with the idea of mixing objects and colours that the player will have to sort out.

There are only eight different cards that show up on a player's screen throughout the whole game, and only four appear per round. We worked out that eight options (Cards A-H) were enough using the chart below.



There is nothing random about the game. There is only one winning card in each of the 10 rounds. There are five pre-programmed rounds that are played twice. We designed the game to be as straightforward to code and design as possible. The trick of the game is that it is so fast-paced that it feels like there are dozens of randomised cards and options.
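The round rule described above, finding the one card that shares neither the colour nor the object of the card that popped up, can be sketched as a small function (the card shape here is an assumption for illustration):

```javascript
// Given the card that popped up in the box and the four on-screen
// options, return the index of the winning card: the one that has
// NEITHER the same colour NOR the same object. Returns -1 if none.
function findWinningCard(popped, options) {
  return options.findIndex(
    c => c.colour !== popped.colour && c.object !== popped.object
  );
}
```

Because each pre-programmed round is built so that exactly one option satisfies this rule, the server only needs to compare a player's pick against this index to score the round.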

The programming part of this project consists of three different programs: the client, the server, and the Arduino. The server controls the game flow, registers the clients, and calculates the score. It also controls the Arduino by sending it messages to trigger whichever flag is needed for each game round.

The client part is what each individual player sees on their screen. It displays the instructions and gameplay details, including the cards. It registers the users’ card choices.

The server and clients communicate using PubNub. We created different ‘types’ of messages, such as ‘join’, for when a user joins the game and ‘pick’ for when they choose a card. Other messages like ‘start’, ‘nextRound’ and ‘gameover’ were also used to control the game flow.
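A minimal sketch of how the server might route those PubNub message types (the handler bodies here are placeholders, not our actual game logic):

```javascript
// Build a router that dispatches incoming PubNub messages by their
// 'type' field ('join', 'pick', 'start', 'nextRound', 'gameover', ...).
function makeRouter(handlers) {
  return function route(msg) {
    const handler = handlers[msg.type];
    if (!handler) throw new Error("unknown message type: " + msg.type);
    return handler(msg);
  };
}
```

The server would register one handler per type, e.g. `makeRouter({ join: m => addPlayer(m.name), pick: m => scorePick(m) })`, and pass `route` as the PubNub subscribe callback.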

We also needed a bit of HTML to allow players to input their names.

Once the code structure was figured out and the graphics were complete, we moved on to creating the physical game box. We followed a simple circuit diagram to learn what parts we needed to regulate the power and not fry all our servos. We used a transistor, two electrolytic capacitors, and connected everything to ground to make sure this would not happen.

We found the servos very temperamental. They would reset and start twitching around whenever a new Arduino code was uploaded, or just seemingly for no reason at all. Every time this happened, we had to unplug them from the power source and re-calibrate them so that the cards were at exactly a 90-degree angle.

We had to build a few different prototypes of the game box. The first box was too small, and tape was not enough to keep the servos steady. We ended up hot-glue-gunning the servos to a piece of foam. We tried a version made with cardboard, but foam proved to be sturdier and looked more polished.


Project Context

Our game is part of the ongoing trend to update the homely card game. Some of the most popular examples of this trend are Pokemon's recent AR creation and the Dungeons and Dragons multi-platform empire. It's crazy to think Pokemon first became famous as a card-collecting activity. Even classics like Monopoly and Solitaire now have high-tech reincarnations. Board games have included electronic physical 'game-boxes' for decades now (for example, Monopoly's battery-operated cash counter). Lost in Space was inspired by these games, and took things one step further by creating a web interface that works in sync with the physical game box.


Creation & Computation Experiment 3

Sara Gazzaz and Mudit Ganguly


‘Project: Marcelo’ – converts paint strokes into a language




Marcelo is a messaging tool that uses brushes as input devices. It was developed to be used as a tool that helps users engage in nonverbal communication over long distances.

We aimed to convert the physical action of making a painting into messages that could be sent across the internet to another user who would then read the message and reply back using a brush as well.  Since the brush is limited in terms of strokes we had to use a language that uses limited inputs. Morse code proved to be the simplest and most efficient answer.

The entire system is controlled by an Arduino, which receives sensor data from a flex sensor built into the brush. The data is then sent to P5.js, which sends it across the internet using PubNub. The information is picked up by P5.js on the receiver's end, which displays it in the console, where it is translated by the receiver in real time.



Target Audience + USES

The possible uses of such an interface can be in performance art, couples therapy, long distance communication and interactive art practices.

How it works


2 X Arduino Micros
2 X Flex Sensors
2 X LED strips (blue)
2 X Breadboards
2 X Laptops
2 X 9 Volt Power Adaptors
2 X Transistor Pins
2 X Diodes
4 X 10k Resistors
2 X Briefcases
2 X Brushes
2 X Plastic wrap
2 X Micro USB cables
Paint & Paint Containers
Glue Gun
Canvas Boards
Painting Paper
Conductive Wire

We broke apart a brush and inserted a flex sensor into it along with the bristles. Each flex sensor was covered in protective plastic wrap that protected it from any paint or water. The sensors were then attached to the breadboard with long wires that ran inside a wooden handle we built. We used a wooden cylinder that we sawed in half and carved out to make space for the wires to go through the brush.
To hold the bristles and flex sensor in place and attach them to the handle, we covered them with felt, which also gave users a comfortable grip and concealed the whole mechanism.





The LED strips were placed on the rims of the briefcases. They provided feedback to the users: whenever a flex sensor was bent beyond a certain point, the LED strip would light up, notifying the user that a sensor value had been registered. These strips were also attached to the breadboard.

The LED strips required more voltage than the Arduino provided, so we used an external 9-volt power supply attached to the breadboard to power the LED strips, while the Arduino was powered by a micro USB cable. The Arduino was connected to a laptop running both the Arduino code and the P5.js code.
Two laptops are used as part of the system to demonstrate how two people would communicate with each other. Each one displays data through the console log in P5.js.

The briefcase is where all these components came together. It held the LED strips that lit up as well as the laptop; all the other components were hidden underneath the laptop. Over the laptop was a white foam board that held the canvas in place. It covered the laptop's keyboard so that the laptop was viewed only as a screen. The paint containers were glued onto the board with two colours of paint (blue & yellow).



Circuit Diagram


System Architecture

Software CODE:


Arduino Code available on GitHub (add link) for Flex sensor 1

Arduino Code for Flex sensor 2
P5.JS Code


The Arduino Micro microcontroller on the breadboard runs simple firmware that reads the sensor values, converts them, and prints them to the serial output. Since each flex sensor has its own native range of values, we had to develop separate code for each sensor.

The result is that the sensor values are converted into a ‘dash’, ‘dot’ or ‘blank’ value depending on the direction in which the bristles of the brush are pressed. This data is then written to the Arduino's serial output.

This packaged data is then sent to the P5.js sketches we were running simultaneously. P5.js needed an additional program (P5 Serial Monitor) to pick up the data packets that the Arduino was sending it.

P5.js then displays this incoming data package in its own Console Log. Once the data is in the log it is then sent over the internet via PubNub to the second laptop that also had P5.js running. These ‘dots’ , ‘dashes’ or ‘blanks’ were then displayed on the second laptop’s P5.js Console log. This is the basic interaction.

Note: Users should refer to a key diagram that translates the English alphabet into Morse code, both so they know what the messages being sent to them say, and so they know the kind of strokes required to send the message they intend for the other user.


Once the second user receives the morse code they had to translate it to decipher the message and then they replied back using the same mechanism.
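The manual translation step works like a tiny Morse lookup. A minimal decoder (only a handful of letters shown for brevity; the receiver did this with a key diagram rather than code) might look like:

```javascript
// Minimal Morse lookup table; unknown symbols decode to "?".
const MORSE = { ".-": "A", "-.-.": "C", "....": "H", "..": "I", "-": "T" };

// Decode a space-separated Morse string, e.g. "-.-. .- -" -> "CAT".
function decodeMorse(code) {
  return code.trim().split(/\s+/).map(sym => MORSE[sym] || "?").join("");
}
```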


Arduino Code:
We began by calibrating the flex sensor. We found the complexity of the movement and the responsiveness compelling, and thought it would translate well to the movements of a painter. We divided the range of values into 3 sections that would register either a ‘dot’, a ‘dash’ or a ‘blank’, which are the fundamental communication tools used in Morse code. We then incorporated the LED strips and made them light up whenever a ‘dot’ or a ‘dash’ had been registered. Finally, we coded the values to appear on the Arduino's console log.
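That three-way split of the sensor range might be sketched as follows (the threshold values here are invented for illustration; each sensor needed its own calibration):

```javascript
// Classify a raw flex-sensor reading as 'blank', 'dot' or 'dash'.
// The low/high cut-offs are hypothetical defaults; the real firmware
// used per-sensor calibrated values.
function classifyReading(value, low = 300, high = 600) {
  if (value < low) return "blank"; // brush at rest
  if (value < high) return "dot";  // light bend
  return "dash";                   // strong bend
}
```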

P5.js Code:
In P5.js we had to code the software to pick up the Arduino data from the particular port on that laptop. We also had to add the PubNub integration using our own keys. Finally, we designed the code so that only incoming information from the second laptop would be displayed in P5.js. This was important, as we didn't want users to get confused between the messages they sent and the messages they received: User 1 only saw incoming messages from User 2, and vice versa.

Network Communication

There were two communication links involved in the project: Arduino to P5.js, and P5.js to PubNub.

Arduino to P5.js
P5.js needs an additional program (the P5 Serial Monitor) to pick up the data packets that the Arduino sends it. This setup has a lot of requirements and, even then, is not the most stable connection. The only way to troubleshoot it was to restart the Arduino, the P5 Serial Monitor, and P5.js, then hope for the best. It was not a reliable way to troubleshoot, as it was hit-and-miss with the temperamental software.

P5.js to PubNub
PubNub gives users a set of credentials to insert into the P5.js code to enable internet connectivity. Even with these credentials in place, we faced multiple obstacles. Thanks to Marcelo Luft, we managed to overcome them and send our data packets over the internet.



On the day of the presentation we divided the class into two groups, separated by a whiteboard that acted as a barrier so the space read as two separate rooms. We then began communication between two users (Sara and Mudit).

We shared messages like ‘Sup’, ‘Yo’, and ‘Hi’. We then allowed our classmates to try it out: Ania sent the message ‘Cat’ to Orlando.








Case Studies

Yuri Suzuki is a sound artist, designer and electronic musician whose recent work explores the physical and technological characteristics of sound production, an interest that has arisen since the loss of the music library stocked in his laptop when the hard drive crashed.
For his Royal College of Art graduation show in 2008, he presented work which involved an innovative way of playing conventional vinyl records, including Sound Chaser (a miniature electric circuit constructed from pieces of old records on which small cars circulate and transmit sound) and the Finger Player, a transmitter handled like a thimble, enabling the physical experience of the retransmission of sound by running a finger along a record.
Suzuki’s intention is ‘‘to raise public awareness of the way in which sound and music is produced’’, and in most cases this occurs through performances and workshops requiring public participation. For Mudam Summer Project he is therefore presenting workshops led by invited artists and creators that tackle a variety of themes, such as learning the basic principles of electronic music and the creation of sound pieces using transformed objects.


Save Your Friend

Samaa Ahmed and Jeffin Philip

Project Description

With our hectic schedules and intense workloads, we spend ever increasing hours in front of laptop screens trying to get everything done on time.

In an effort to stay on top of our professional and academic commitments, we often neglect our own well-being.

One aspect of this is the stress that we put on our spine by sitting down in uncomfortable positions. This can have long term effects on our health, including an increased risk of heart disease, diabetes, and even premature death (Harvard Health Blog).

Save Your Friend is an interactive tracking website that displays information from “posture sensors”. It encourages friends to alert each other if they notice someone’s posture is unhealthy, i.e. that they are leaning or slouching.


While brainstorming, we came up with many interesting ideas and interpretations of “messaging” systems, but many of our concepts would have been easier to execute with a Particle Photon, e.g. clothes hooks that lit up based on a weather report, or an alarm clock that would squirt water.

Those projects did not quite fit with the brief that we were assigned, so then we thought about how to maximize the utility of the tools that we were allowed/required to use in this experiment: an Arduino, p5.js, and a screen – either as input or output. Based on those criteria, and thinking about the overall context of this experiment — to highlight the ubiquity of technology — we narrowed down our idea further.

We wanted to create a product that would be useful to us as OCAD students. We realized that in our cohort we spend a lot of time sitting down, working on assignments, drinking coffee, feeling stressed, and otherwise not taking great care of our bodies.

We initially thought about creating a coaster that could be plugged into our laptop which would measure how much coffee (or other beverages) we drink within a certain time period. We thought about representing different beverage options on a webpage, allowing a user to select an option, and then calculating the difference in weight from when the user started drinking that beverage to when they finished. We would then compare this to the daily recommended averages to notify users if they were engaging in healthy or unhealthy habits. For example, if the user selected water, and they drank 8 glasses a day, this would be a great mechanism for them to track their habits. But if the user was instead only drinking coffee or hard liquor, this notification system would be a harsh reminder and accountability tool. Although this was an idea that we were very excited about, we realized that it did not enhance a multi-user experience.

With the thoughts we had in mind for the coaster idea, we decided to refine the concept and think of other ways in which we would create a tool that would help foster healthier habits that we could implement in our daily routines at OCAD. Thus, “Save Your Friend” was born.


We wanted to create a flexible, customizable set of sensors that could be attached to any piece of furniture, and added or removed based on preference and needs. Our prototype focused on office chairs with three back sensors, but these sensors could be attached to anything to track all types of movement or posture. For example, a reclining chair might need six sensors, or you could attach many sensors to a mattress to track your sleep movements.

Project Context

This project was developed in the context of other wearable technology, posture-tracking devices, and mobile apps. We looked at the Darma cushion and app and the Lumo Back wearable and app to situate our project within an ecosystem of similar “Internet of Things” body-monitoring products.


Darma Co. is “the world’s first smart cushion that monitors your posture, sitting habits, stress level, and coaches you to sit better.” It is a combination of a seat cushion with pressure sensors and an app that notifies you about your posture.

We liked aspects of this product: the user interface is sleek and simple, and the cushion is movable. However, we found it limiting that the seat cushion is the only point of contact for measuring posture. We thought a more robust posture-sensing system should include sensors on the back of the chair to ensure that users are sitting upright.

Lumo Back is a wearable sensor that tracks posture, steps, sitting position, and how you sleep. “The sensor provides a gentle vibration when you slouch to remind you to sit or stand straight. It is worn on your lower back and designed to be slim, sleek and so comfortable that you barely feel it when you have it on. The sensor connects wirelessly to a free mobile app which tracks your daily activities, including steps, sitting, standing, running, and sleeping.”

We liked the fact that Lumo provides a physical notification to users when they are slouching, and we considered integrating a similar feature into our product; however, we did not have enough time to explore this fully. We also liked that Lumo is flexible and tracks different metrics, but we did not want to create something users had to wear on their bodies. Our user interface was also inspired by Lumo.

Sketches and Design Files


We created a simple graphic of a person sitting in different positions to illustrate the effects of posture on one’s back.

In the “best” situation, the person’s back is highlighted green to show that they are sitting up straight and all of the pressure sensors are engaged. In the slightly less optimal situation, the person’s back is yellow (only two sensors engaged), and in the worst situation the person’s back is red, with only one pressure sensor engaged.
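This mapping from engaged sensors to colour can be sketched as a small function, assuming green, yellow, and red for three, two, and one (or zero) engaged sensors:

```javascript
// Map the number of engaged back/seat sensors to a posture colour.
// Mapping assumed from the description: 3 = upright, 2 = leaning,
// 1 or 0 = slouching.
function postureColor(engagedSensors) {
  if (engagedSensors >= 3) return 'green';
  if (engagedSensors === 2) return 'yellow';
  return 'red';
}
```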

We considered adding a fourth option, where only the sensor on the seat of the chair would be engaged, and there would be no contact with the back of the chair — maybe if you were resting your head on your desk. However, we decided not to for this demonstration.

The sketch above illustrates the design of the sensors we created. One of the challenges was finding the right size of sensor, with a large enough surface area of conductive fabric to complete the circuit. Because the sensors are triggered by leaning back onto them, rather than pressing them between your fingers, the pressure is more dispersed. We therefore created several iterations of the sensor to figure out the size that worked best for each one.

We also created a sketch to show where the sensors would be placed on the chair. We chose to simplify our design slightly, with one sensor on the seat, and two on the back. The limitations of using an Arduino meant that the breadboard would have to be close enough to the laptop that it could be connected via USB cable, so we decided to attach the breadboard directly to the chair, and the wire would act as a “proximity” guideline to ensure that the sensors would be plugged in to the laptop. This also fit in with our concept, using the proximity of the chair to laptop to reinterpret what counts as a “wearable” sensor.


Process Journal

There were three main phases of development in this experiment:

  1. Code development in p5.js
  2. Technical development
    • Arduino code
    • Creating sensors
  3. Networking using PubNub

The first stage of development in this project was creating the interaction in p5.js. The video below shows initial testing that would change graphics on mouse clicks and show two users on one screen.

The second stage was integrating the visuals and the technical components. The video below shows the graphics changing by touching wires together. These same wires would then be used to connect our pressure sensors to the circuit.

We then created a prototype of the sensors that would respond to pressure and trigger the interactions with both the circuit and visuals. Our initial prototype was very small, which we used to test with our hands, rather than our backs or bodies sitting in a chair. We quickly realized that this was problematic for our project development because the pressure that we exert by squeezing fabric together between our fingers is much more concentrated and direct than we would by sitting down on a sensor. Although this test was useful in terms of testing the circuit and interactivity of the product, we had to iterate many times to find the correct size.



One of the main issues we had was in creating the sensors. We wanted to maximize the conductivity of the circuit, but then had problems with returning the circuit/visuals to their “resting” state position. This was because once someone had sat on the sensors, the conductive fabric did not separate easily and go back to normal (when they were not touching). To fix this, we added a small strip of foam on top of one of the layers of conductive fabric. When someone sat on the sensor, the foam strip would compress/contract, and the conductive fabric would come into contact with each other to complete the circuit. However, when that person gets up, the foam expands and pushes the layers of conductive fabric apart, thus breaking the circuit and restoring the setup to “resting” state.


Another challenge was creating the two-user experience. Because Jeffin and I have different operating systems on our laptops (I have a Mac, he has a PC), the code used on his computer had to be changed to be usable on mine. Additionally, since users can plug a device into any USB port on their systems, there should be a way to dynamically assign the port address instead of writing it directly into the code. Unfortunately, we only realized this on Friday and did not have enough time to fix the code to get the two-user experience working during our presentation. We were able to show what it would be like if two users were sitting at the same time, with each user seeing the other’s posture on their screen, but a lag between the two operating systems (or browsers) meant the information was not shown in real time. This is something we wanted to fix but could not address within the time limit of this project.
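The dynamic port assignment we would add in a next iteration could look like the sketch below. The name patterns are assumptions based on typical platforms: Mac serial ports usually contain ‘usbmodem’, while Windows ports look like ‘COM3’.

```javascript
// Pick the first serial port that looks like an Arduino, instead of
// hard-coding a port name per laptop. Patterns are illustrative.
function pickArduinoPort(portNames) {
  return (
    portNames.find(
      (name) => name.includes('usbmodem') || /^COM\d+$/.test(name)
    ) || null
  );
}

// In p5, the list of available ports would come from the serial
// library's port-listing callback, and the chosen name would be
// passed to serial.open().
```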

Another feature we developed, though it was not easy to see in our presentation, was a timer showing how long a person had been sitting down. Next to the user graphics on the screen, a small circle cycles from bright green, to dull green, to brown, to red to indicate the time a user has spent sitting. However, the timer restarts every time the position changes, so if someone is slouching (in a “bad” posture) but momentarily straightens up, the timer resets. This reduces the accuracy of the readings and makes the tracker less reliable; it is something we would need to fix in the next iteration of this project.
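The timer’s colour cycle can be sketched as a pure function of elapsed time. The thresholds below are placeholders for illustration, not the values we used:

```javascript
// Map seconds spent in one position to the indicator colour cycle:
// bright green -> dull green -> brown -> red. Thresholds assumed.
function sittingColor(seconds) {
  if (seconds < 600) return 'bright green'; // under 10 minutes
  if (seconds < 1800) return 'dull green';  // under 30 minutes
  if (seconds < 3600) return 'brown';       // under an hour
  return 'red';                             // time to stand up
}
```

Fixing the reset bug would mean feeding this function a timer that only restarts when the user actually stands, not on every posture change.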

Code and Demo


Project Title:
‘Love Box’
by Katie Micak & Natasha Mody


Initial Idea:
Our project emerged from and was greatly inspired by Zoroastrianism, one of the world’s oldest monotheistic religions. Zoroastrians worship in fire temples, where a sacred fire is kept burning to signify an eternal flame, and fire is always present during special prayers and ceremonies. This germ of thought led to the idea of ‘light’ and the use of NeoPixel LEDs as a visual interpretation.

To widen the realm, we started to think of how light might act as a method of communication and an indicator of connection over distance. After much research and the exploration of multiple diverse ideas, we settled on the concept of a ‘Love Box’.

Concept & Project Description:
The purpose of ‘Love Box’ is to give couples in long-distance relationships a simple visual way to communicate that they are thinking of each other. The device could be personalized to tell each other simply where they are, with different colours signifying different circumstances, such as “I’m at work” or “I’m safe”. The object represents the feeling of missing someone and, in turn, provides comfort to people living far away from their loved ones. All of this is triggered via an ‘LED Illusion Mirror’, an infinity mirror that lights up in a colour at the click of a browser or app button.

We chose the object of the infinity mirror to talk loosely about flattened space and light travel as a metaphor. It could function well in a domestic space as light has an implied presence and could represent a person easily.



Process Journal & Code:
This project was very interesting but quite challenging at the same time, especially given that both Katie and I come from backgrounds that don’t involve coding. We are in fact novices in all programming languages, but very eager to learn. That said, we eventually worked out an idea that we hoped would be ‘simple’ and easily implemented.

Our overall process involved a web browser triggering the output, i.e. the illusion mirror we created with a wood frame, glass, film, mirror, and LEDs. The web design was a simple colour wheel divided into four quadrants. On a mouse press, the LEDs in the illusion mirror light up with the respective colour.
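The quadrant hit-testing can be sketched as below. The colour assigned to each quadrant is an assumption for illustration; the wheel is taken to be centred in the canvas.

```javascript
// Map a mouse press on the colour-wheel canvas to one of four
// quadrant colours. Quadrant-to-colour assignments are illustrative.
function quadrantColor(x, y, width, height) {
  const left = x < width / 2;
  const top = y < height / 2;
  if (top && left) return 'red';
  if (top && !left) return 'yellow';
  if (!top && left) return 'blue';
  return 'green';
}

// In the p5 sketch, mousePressed() would call this with mouseX/mouseY
// and publish the resulting colour so the Arduino can set the LEDs.
```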

We initially made p5 and Arduino work independently. We then moved on to the communication between p5 and Arduino through PubNub, working toward a successful end product. We wrote two p5 sketches – one that received from PubNub, and another that connected PubNub to the Arduino. This initially seemed to work but was very inconsistent. We relentlessly explored multiple options and versions of the code, alongside much user testing and help from our peers. We were very close to cracking it, but unfortunately not quite successful. We are not sure why, and need to work on it further to gain a true understanding of the technical processes we engaged in.

Project Context & Reference Links:
The reference links (below) were a source of inspiration and helped accelerate our brainstorming and ideation. We landed on a concept after much research and the exploration of many possibilities.



Group Members

Thoreau Bakker

Afrooz Samaei


PacNet is a browser-based game, based on the classic arcade game and built in P5.js. It features a simplified interface and does away with many of the original game elements, focusing instead on interactivity through networking. The main character is navigated with a custom-made controller, built from ready-made components and an Arduino Micro housed in a custom-designed 3D-printed enclosure. Additional characters are represented by small circles and navigate using a keyboard. The main player controls Pacman, chasing the circles on the screen. As other players open the web page, they each get a circle which they can control with the laptop’s arrow keys, trying to prevent their circle from being caught by Pacman.


Design Process

The main source of inspiration for this game came from the Cineplex TimePlay. The idea was to create a collaborative game experience, in which the players either play against each other or play with each other to reach a target.




The scenario that we intended to create was to have a main player playing at the arcade machine, and the rest of the players play on their laptops or portable devices.





The first game concept that we came up with was a game in which the main player has to reach a target with the help of the others. The players would have to collect and move some blocks and place them on top of each other, in order for the main player to climb up the blocks and reach the target.



The players would have to move the blocks and place them on top of each other (Source: Phaser)


Another concept was to create a game in which the players compete against each other. We thought of a game in which the player has to collect some objects, like stars, which are controlled by other players. So, as the main player chases the objects, the other players have to control theirs and prevent it from being caught.

The player has to collect the stars, which are controlled by other players (Source: Phaser)

However, the main challenge was to create all the game objects and send the related data to PubNub in order to build a networked game. In addition, since P5.js’s game libraries are not as complete as dedicated game engines, we spent a considerable amount of time experimenting with Phaser, a JavaScript game framework. After many efforts and iterations, we finally decided to narrow down the game mechanics and assets so we could effectively build the network. Hence, we built a networked version of Pacman and created the assets from scratch using P5.js drawing functions.

Simulating the Arcade Machine

For the sake of this prototype, we simulated the arcade machine using a laptop, a thumb joystick, and an Arduino Micro. The joystick and the Arduino are housed inside a custom-designed 3D-printed box and connected to a laptop. The values read by the Arduino are used to control the movements of Pacman and are published to PubNub to be transferred to all web pages.
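The joystick-to-direction mapping can be sketched as a pure function. The analog range (0–1023, centred near 512) is typical for a thumb joystick on an Arduino, and the dead-zone thresholds here are illustrative:

```javascript
// Convert raw thumb-joystick readings into a Pacman movement
// direction, with a dead zone so the character holds still at rest.
const LOW = 300;  // below this: stick pushed one way
const HIGH = 700; // above this: stick pushed the other way

function joystickToDirection(xVal, yVal) {
  if (xVal < LOW) return 'left';
  if (xVal > HIGH) return 'right';
  if (yVal < LOW) return 'up';
  if (yVal > HIGH) return 'down';
  return 'none'; // inside the dead zone
}

// The resulting direction would be published to PubNub so every open
// web page moves the same Pacman.
```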



Game Played on browser


Circuit Diagram


Challenges and Outcomes

This project proved to be especially challenging, both technically and conceptually. As a team, we invested considerable time early on, brainstorming with a very open and broad scope and trying to explore all possible avenues. In an effort to keep project satisfaction high for both of us, we wanted to find an idea that excited us both equally. This proved challenging, as different directions resonated with each of us in different ways.

Despite the challenges, the project was extremely valuable in terms of learning outcomes. We were not only able to gain new skills regarding networking, but also regarding the scope and being realistic about what is achievable in a given amount of time. Our initial vision involved a game that incorporated not only hardware and networked input but also a complex interaction involving group coordination and physics for a final goal. While the idea was (and still is) a strong one, the goal was far too ambitious for our skill level and time constraints.

Future iterations of the game involve making the connections smoother, reducing the delays, and building a mobile version so it can be played on smartphones and tablets as well.


Links to the Game:

Main Player:

Other Players:

Link to Code:
