Frame It Up!


By Finlay Braithwaite and Tommy Ting

Frame It Up is an interactive screen-based game best played with 10 or more people. Players carry their laptops and physically walk around the room as they play. Frame It Up is a choreography-generating game influenced by Twister.



Using your laptop, open the URL in Google Chrome.

Read the instructions.

Click ‘PLAY’ to enter the game.

You are presented with a name and gestural prompt.

Use your camera to find the person and ask them to perform the prompt.

Click anywhere to take a picture.

Pictures are saved onto your laptop.

After capturing a prompt, a new prompt appears.

Take pictures of the person with the new prompt.

Repeat until the one-minute timer runs out.

Every minute, on the minute, all players are provided with a new name and prompt.



  1. To negotiate with other players in the room to capture an image of a person and a gestural prompt.
  2. To generate random acts of choreography and dance movements that highlight humans’ relationship with technology.


P5.js Code


Supporting Visuals

Presentation Day







Process Journal

Day 01 [2017.10.16]: Experiment 2 Introductions

We came up with a few different ideas on our first day. We were interested in using the camera function, but inherent in the camera technology is the conversation of ethics and more specifically privacy. We wanted to use the camera in a critical way that would open up discussions around ethics.

  1. “No Pervert!” Using the camera, the screen will direct you to point it at someone in order to “see what lies underneath”, but once you line it up with a body, it will generate a message saying “Why would you ever want to do that?”
  2. “Conversation Helper” Your mobile device will connect you with another user, then it prompts you with some conversation topics
  3. “Colour Matcher” Using the mobile device’s gyroscope, you have to rotate your phone to the right x, y, and z coordinates to match the colour of the text to the colour of the canvas background.
  4. “Shake It Up” Shake your phone to generate a prompt naming another player in the room; once you’ve located them, shake again to generate a prompt for a body part, then take a picture.

After coming up with a few different ideas, we decided to go with Shake It Up. We were interested in the human movement this game would generate. It touched on the themes we were both keen to explore in this experiment: physical interaction with digital technology, and movement and dance.

Day 02 [2017.10.17]: Coding

The first major hurdle was to get the video camera to work in a consistent and predictable way. The number of different possible device types, makes, and models made this a daunting task. We were fairly determined to use smartphones and tap into their cameras as the technical underpinning for our project. We ran into some basic hurdles getting video to work even in a rudimentary fashion. Chrome, for example, demands that a page be served over https:// before it will engage the camera, for security and privacy reasons. This means that code has to be uploaded frequently to such a server for development and testing. Dreamweaver became our go-to editor as it facilitates automatic SFTP sync on save. It also has built-in GitHub integration, which is a dream come true.


As the working title suggests, getting the shake-input code to work would be imperative to our development. However, our early testing led us to conclude that shaking would not be an effective way to move through a serial sequence of interactions, as unintentional double shakes and phantom shakes were difficult to avoid in code. This investigation was illuminating: it demonstrated that our user flow had too many stages and device interactions in its sequence. We felt this took away from the experience, as the device became the focus of the experience rather than a catalyst. We played with the idea of cycling the random person and body-part prompts on a timer, not relying on interaction at all. It would also be a great moment if this timer were set to a common clock on all devices, so that new prompts were generated for all players simultaneously.
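The common-clock idea needs no networking at all: if every device derives its prompt from wall-clock time, they all flip at the top of each minute. A minimal sketch of that logic (the helper name is illustrative, not from our code):

```javascript
// Derive the current prompt index from wall-clock time, so all
// devices change prompts simultaneously at the top of each minute.
function promptIndex(nowMs, promptCount) {
  const minute = Math.floor(nowMs / 60000); // minutes since the epoch
  return minute % promptCount;              // cycle through the prompt pool
}
```

In p5.js this could be checked each frame against Date.now(); as long as the devices’ clocks agree to within a second or so, the prompts stay in sync.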

Day 03 [2017.10.18]: Back to the Drawing Board

One immediate concern we had was capturing pictures of someone’s body part without their consent. Although it would call attention to the problems of privacy, we thought this was too simplistic and literal. We went back to the whiteboard to brainstorm new ideas.


We came up with a few different ideas for new prompts. One was to use colours, moods, and feelings; this would be more abstract and would give players the choice to interpret the prompt however they want, though it is still not consensual.

Next was to use an RGB or grayscale value; the player has to find the matching colour on their person’s body with their camera. This would make our project more “game-like”, but we didn’t know how to use the camera to calculate colour values. Moreover, it still doesn’t solve our consent issue.

Lastly, we came up with a list of gestures such as a head nod, a smile, a right-hand shake, a left middle finger, and a right peace sign. This immediately solves our consent problem, since you have to ask your person to perform the task. It also creates more of a negotiation between you and the other players. Finally, it adds a much richer dimension to our initial interest, which was to use this game to create random acts of dance and choreography.

Day 04 [2017.10.19-23]: Coding (Cameras, Mobiles to Laptops)

Eureka! We were starting to make real progress on the video front. Kate Hartman had suggested that we ‘time box’ this problem, giving up on it if we didn’t get the results we needed in a specified amount of time. The biggest challenge we overcame was specifying which of a mobile device’s cameras was used. The p5.js video capture allows for constraints compliant with the W3C specification, which includes language to request different camera types. The type we were interested in was ‘environment’, the non-selfie, outward-facing camera on the back of a phone. Finding the correct syntax to connect this constraint to p5.js was elusive and frustrating, but eventually my Android phone took a brave step and faced the world. With this victory, we began working with the video image and integrating it into our code. To accommodate variable screen and camera resolutions, we created a display system that would respond to four possibilities:

  • Camera width narrower than a horizontal (landscape) display.
  • Camera width wider than a horizontal (landscape) display.
  • Camera width narrower than a vertical (portrait) display.
  • Camera width wider than a vertical (portrait) display.

With these four scenarios, our video placement would respond to the parameters and crop and place itself accordingly.
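The four scenarios reduce to one “cover” calculation. A minimal sketch of that math (a hypothetical helper, not our actual sketch code): scale the camera frame so it covers the display, then centre-crop the excess. The returned source rectangle is what the source parameters of p5.js’s image() would receive.

```javascript
// Scale the camera frame to cover the display, then centre-crop:
// returns the source rectangle (in camera pixels) to sample.
function coverCrop(camW, camH, dispW, dispH) {
  const scale = Math.max(dispW / camW, dispH / camH); // cover, don't letterbox
  const sWidth = dispW / scale;   // how much of the camera frame fits
  const sHeight = dispH / scale;
  const sx = (camW - sWidth) / 2;  // centre the crop horizontally
  const sy = (camH - sHeight) / 2; // and vertically
  return { sx, sy, sWidth, sHeight };
}
```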

In this meticulous process we encountered a bug in the p5.js reference. With the function ‘image(img, dx, dy, dWidth, dHeight, sx, sy, [sWidth], [sHeight])’, you can crop an image and place it into your canvas, possibly resizing it in the process. However, in working with this code it appears that the destination (d) and source (s) coordinates are reversed from the documentation. We will investigate further and let p5.js know if this is indeed the case.

This code was important as we wanted to crop our video instead of resizing it. We wanted a clean ⅓ height band of video centered in the middle of the screen. We wanted this to resize smoothly and adapt to variables of screen and camera resolution. We felt a crop would give us a natural zoom that would enhance the image finding aspect of the game and would also lower the CPU overhead of live video resizing.


Tommy’s phone won’t open %#%^#@^. Try as we might, our code worked well on Android but not on iPhone, particularly Tommy’s iPhone. With the time box in shambles and our project in jeopardy, everything was on the table, including revisiting other ideas or generating new ones. Realizing that the majority of portable devices available to us were made by Apple, we swallowed our pride and began developing for laptops. Unfortunately, we didn’t have the ability or time to figure out a way to include both Android phones and laptops, so we went with laptops only.

Despite our worst fears, the laptops were great and added new dimensions to the game. People could see themselves being captured and could adjust their position and pose to assist in play. This interactive feedback element would not have been possible with a phone’s ‘environment’ camera.

Day 05 [2017.10.24]: Playtesting

The playtest was extremely revealing and gave us a lot of insight into how to quickly resolve some immediate issues. We noticed three main issues:

  • Our sketch did not work consistently on iOS; some phones worked, most didn’t.
  • People were upset by not having a specific end goal; namely, they were confused about what to do after they framed the person up with the corresponding body part.
  • The 1-minute timer was too long, since it was easy and simple to find the person and the body part.

It also confirmed what we had hoped for:

  • The scuffle to locate the person and the body part resulted in a dance amongst the players.
  • People had to negotiate with each other in order to find their body part.

Play Test

Day 06 [2017.10.26]: Refinement in Code and Game Concept

On our last day, we refined the game visual interface from small details such as font size and stroke shade to adding a photo capture feature.

The last major coding hurdle turned out to be fairly easy. Neither of us had made an app with multiple states or scenes; our code to this point relied on one loop for the entire experience. We needed to make a splash page to introduce and explain the game. We could have done it as a separate HTML launch page, but we wanted to try doing it in a single p5.js sketch. To start, Tommy created the launch page in one sketch and I finished the details on the main code. By using a simple ‘if’ statement tied to a button on the splash page, we were able to have users move cleanly from one state to the next. Huzzah!
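The state switch boils down to a tiny pure function. This is an illustrative sketch, not our exact code; in the sketch, mousePressed() would call it and draw() would branch on the current state:

```javascript
// Advance from the splash screen to the game when PLAY is hit;
// any other input leaves the state unchanged.
function nextState(state, playButtonHit) {
  if (state === 'splash' && playButtonHit) return 'play';
  return state;
}
```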

The coding details of this project were fun mini-challenges. We attempted to make everything proportional to the display, such as the text and the video size and placement. A fun example: the button size is tied proportionally to the font size, which is tied proportionally to the overall number of pixels in the canvas.

Another fun detail was randomness. The colours are all randomly generated, giving the game a fun look that’s different every time. However, in our tests, users complained that the text often blended into the background and became difficult to read. We set some rules to enforce that the randomly generated colours have a specified minimum difference in hue. Changing the p5.js colour mode to a hue-based system instead of RGB made colour picking of this nature possible.

Making the sounds random was a larger challenge than anticipated. Generating a random hue is one thing, but randomly selecting from a pool of sound clips is another. With sound, we wanted to generate a fun and chaotic reinforcement of the experience, with each device emitting sounds unique from the next. To achieve this, each device loads ten random sounds from a pool of fifty-one. At each sound cue, the code randomly selects one of these ten files for playback. Loading all fifty-one sounds would have increased the loading time and made the experience fairly buggy, considering there’s already a live video input in play. This seemed like a good compromise.
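The minimum-hue-difference rule can be sketched as a re-roll loop (a hypothetical helper with an injectable random source, not our exact code; our sketch used p5.js’s HSB colour mode):

```javascript
// Keep re-rolling the foreground hue until it is at least minDiff
// degrees from the background hue on the 360-degree colour wheel.
function pickContrastingHue(bgHue, minDiff, rand = Math.random) {
  let hue, diff;
  do {
    hue = rand() * 360;
    diff = Math.abs(hue - bgHue);
    diff = Math.min(diff, 360 - diff); // wrap-around distance
  } while (diff < minDiff);
  return hue;
}
```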

Playing with sounds

Finally, we changed the focusArray from a list of body parts to a list of gestures, including positive, neutral, and negative gestures. We decided this would be more interesting, as people would have to negotiate even more with other players in order to capture their photograph. It acts as a consensual prompt, which mitigates the privacy issues. We also decided to keep the 1-minute timer instead of speeding it up (as suggested by our playtest). Within the minute, the player must capture as many different gestural prompts as possible. Lastly, using gestural prompts instead of finding body parts creates more of a dance, which fit better within our conceptual framework of contemporary dance.


Project Context

While doing some initial research for the project, we were immediately drawn to the relationships between kinaesthetics, human bodies, dance and choreography and camera technology.

Although Frame It Up is a game, we were more interested in the choreographic outcomes of playing it. We found that the game was able to generate random acts of choreography, which materialized our interest in humans’ relationship with technology in the form of dance. We deliberately chose the camera as the main device connecting the players, as the act of taking someone’s picture is inherently violent (Sontag 1977), and we wanted to explore this violence through dance and play. We were informed by Jane Desmond’s idea that how we move, and how one moves in relation to others, comes from a place of desire (Desmond, p. 6), and by gestus, a theatre technique created by director Bertolt Brecht that understands gestures as an integral part of the human character, its wishes, and its desires (Baley, 2004). We wanted to investigate how we move with each other and amongst each other when our violent technological devices have become both embedded within and extended out from how we express desire.

Although it wasn’t our original idea, carrying the laptops around intensely highlighted our increasingly posthuman bodies. The soundtracks we used were all compiled from laugh tracks, which call attention to humanity’s happiness, playfulness, and desire, but also its violence and brutality. We also looked to the works of choreographer Pina Bausch. Bausch’s work highlights the violence of men and the suffering and oppression of women to an incredibly uncomfortable degree. Her work “forces her audiences to confront discomfort: they are painful to look at but impossible to turn away from” (Avadanei, p. 123). Using Susan Sontag’s understanding of the camera as a weapon, dance theory, and Pina Bausch’s work, our goal is for Frame It Up to be both a playful game and a tool for generating choreography that explores the relationships between privacy, human desire, and technology.



Avadanei, Naomi J. “Pina Bausch: An Unspoken Explorations of the Human Experience.” Women & Performance: A Journal of Feminist Theory, vol. 24, no. 1, 7 May 2014, pp. 123–127. doi:10.1080/0740770X.2014.894289.

Baley, Shannon. “Death and Desire, Apocalypse and Utopia: Feminist Gestus and the Utopian Performative in the Plays of Naomi Wallace.” Modern Drama, vol. 47, no. 2, Summer 2004, pp. 237–249. doi:10.1353/mdr.2004.0018.

Desmond, Jane, editor. Dancing Desires. The University of Wisconsin Press, 2001.

Sontag, Susan. On Photography. Picador, 1977.


Project title: Bablebop


Roxanne Baril-Bédard
Dikla Sinai
Emilia Mason

Project description:
Bablebop is a smartphone and human-interaction game in which the players must elect who will be the next ruler of the planet. The game randomly assigns a character to each player; based on their character’s personality, players vote and try to convince the other players to give them a crown. Each crown counts as one vote. The player with the most crowns wins the game and becomes the ruler of Bablebop.

1. Wear the phone in a pouch around your neck.
2. Press the “Start to play” button.
3. Read about your character. You must act according to your character’s personality.
4. Start campaigning: you have 4 minutes to convince everyone you talk to to give you a crown. You can band together with other members of your species to take the win as a team.
5. These are elections and you have to vote. Give a crown to those you think deserve it.
6. Give dirt to those who don’t deserve to rule Bablebop.

Input: Tapping Device Screen
Output: Changing screens, button blinking

Blobs: The Blobs can see it all and use what they see to compliment everyone. You will have to use your compliment power to win this election. Make everyone feel extra good to get as many crowns as possible. Don’t fall for tricks and compliments: give crowns to those you think deserve them and dirt to the ones you think are lying.

Blarks: The Blarks are very manipulative and charming. You will need to convince everyone you deserve to win this election. Get as many crowns as possible. Lie if you have to. Don’t fall for tricks and compliments: give crowns to those you think deserve them and dirt to the ones you think are lying.

Blims: The Blims are very smart. You can read between the lines and know not to trust anyone. Don’t fall for tricks and compliments: give crowns to those you think deserve them and dirt to the ones you think are lying.

Game story:



Problems faced

-It was hard to find a way to have a full-size image, as the sketch’s width and height did not fill the window. Ultimately, we settled for having slightly squished images in every browser using windowWidth and windowHeight, and sized the buttons as fractions of the window’s width and height so they wouldn’t be deformed.

Consequently, it works only on windows that are taller than they are wide.
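An aspect-preserving alternative to the slightly squished image is to compute a uniform scale. A minimal sketch, assuming a plain “fit inside the window” policy (this is not the code we shipped):

```javascript
// Scale an image to fit entirely inside the window while
// preserving its aspect ratio (letterboxing instead of squishing).
function fitInside(imgW, imgH, winW, winH) {
  const scale = Math.min(winW / imgW, winH / imgH);
  return { w: imgW * scale, h: imgH * scale };
}
```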

Another hurdle was understanding how timers work: how to make a round last a certain amount of time, and how to create a loop that keeps track of the time passed since a change in state.
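The round-timer logic can be factored into a small object that records when a state change happened and compares elapsed time against the round length. A sketch assuming the game’s 4-minute round (names are illustrative; in p5.js the clock would be millis()):

```javascript
const ROUND_MS = 4 * 60 * 1000; // one 4-minute campaigning round

// Track a round from the moment the state changed.
function makeRound(startedAt) {
  return {
    startedAt,
    isOver(now) { return now - startedAt >= ROUND_MS; },
    remaining(now) { return Math.max(0, ROUND_MS - (now - startedAt)); }
  };
}
```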

We still have a bug related to browsers and operating systems. On Android in Chrome, the game skips one screen because it reads a second tap. We tried to fix it with a timer, so a certain time would have to pass before a second tap could register, but that completely broke it for iOS, so right now Android still skips a screen. We chose to go with the code that let iOS work best, since most people in the class have iPhones.
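The debounce we tried can be sketched as a lockout window: a second tap arriving too soon after the first is ignored. (This is the mechanism that helped Android but broke iOS in our tests; shown only to illustrate the idea, not as the final code.)

```javascript
// Returns a tap filter: accept(now) is true only if at least
// lockoutMs have passed since the last accepted tap.
function makeTapFilter(lockoutMs) {
  let lastTap = -Infinity;
  return function accept(now) {
    if (now - lastTap < lockoutMs) return false; // double tap: ignore
    lastTap = now;
    return true;
  };
}
```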

-The code would be simpler if we understood how to make objects and for loops. As of now, it is almost 500 lines.

Another challenge was animating the buttons and giving them a sense of feedback. We managed to draw ellipses on top of a button to give it a pressed look, but we also tried to have the buttons pulsate in the last quarter of the round and could not figure out how to play the animation (successively drawing the image bigger and smaller) slowly enough to be visible to the human eye.

Finally, optimizing for mobile so the game wouldn’t be resource-hungry was a challenge too, because the images we were loading were so big that sometimes they just wouldn’t load. We also implemented state loops so the images wouldn’t redraw themselves every frame.

Design Files:





Photographs and Videos:

Bablebop 1

Bablebop 2

Bablebop 3



October 16th
Dikla and Emilia are paired to work together on this assignment.
During their first meet-up they decide their project will consist of dancing lessons using 20 screens. Ten of the screens will give directions on how to move the upper part of your body, and the other ten screens will give instructions on how to move the lower part of your body.

October 19th
Dikla and Emilia have a video call and decide to change the main topic of the project.
After reading some news from Nicaragua, we decided to focus on Sexual Abuse and Rape.

October 20th
Roxanne joined our project and we discussed what we could make regarding our new topic.

We started our meeting brainstorming and agreed on the next points:

Main topic: Sexual Abuse and Rape.
Possible points of the game:
- How easy it is to destroy someone
- Predators exist everywhere
Abuser – Survivor: Asymmetrical games

The Just World Hypothesis

Possible inputs: Camera picking up movement.
Characters: Predators: 2, Preys: 2, Bystanders: 2
Predators silence bystanders and Preys give trust tokens
We were trying to find a way to have the game’s procedural rhetoric make a point about the social dynamics surrounding sexual abuse. We explored the idea that bystanders, making up to 80% of the players, would have to take a side.

References mentioned during the meeting.




October 22nd
Change game dynamic to Pacman’s logic. We tried to define the mechanics of the game.
At this point we wanted the characters to give possible powers to each other.

Define Steps:
Step 1: The code randomizes which character each screen gets.
Step 2: Once every screen has a character, the user must tap for instructions.
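Step 1 amounts to each phone independently drawing a random species. A minimal sketch (the species names are from the game; the helper itself is hypothetical, with an injectable random source for testing):

```javascript
const SPECIES = ['Blob', 'Blark', 'Blim'];

// Each screen independently draws a random character.
function assignCharacter(rand = Math.random) {
  return SPECIES[Math.floor(rand() * SPECIES.length)];
}
```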


October 23rd
Roxanne started working on Code
Dikla started illustrations
Emilia began description of characters, story, and instructions

We discussed making a special background animation to set the mood of our game: the background would go from night to day and day to night to let the users know the in-game time. We would use a projector for this. We were thinking of having night and day phases because they would relate to different characters and the times each could use their specific powers.

We also created a design guideline to define color scheme and type of illustration.
At some point, we discussed the idea of using the phone’s camera as an input and having each player take a selfie and use that photo as the image of their character. For aesthetic reasons, we chose to design the three characters instead.

Presentation of the game to Nick during class time:
As we were explaining, we realized we needed to simplify our idea.

Suggestions from Nick:
-Make a storyboard, wireframe or user flow to get a better idea of our game
-More interactions on the screen than in person.
-Divide our information and instructions in Public info and Private info for the users.

October 24th
Our game idea went through another iteration. We figured that, because of the time constraints, we needed to simplify the gameplay loop a lot, since we would only have 5 minutes or so to play.
We managed to define the game logic and the steps.



Step 1: Log into Website
Step 2: Grab your phone and put it in the ziplock bag
Step 3: Press the button to be randomly assigned your character. Your identity is a secret. A button in the middle reads “Press here to get your character”.
Step 4: The screen shows a card with your secret character: the image and name of the character, plus “Tap for more info”.
Step 5: Tapping gets you a new card with the character’s information.
There is 1 card per character:
“Instructions: Wear the phone around your neck and wait to hear the instructions from the game managers.”

Step 6: Give the instructions in person:
During the next few minutes, your job is to convince others to give you crowns. You will use your character’s personality to do so.
All of you can vote, you can give either crowns or dirt, use your judgment wisely when giving a crown or dirt.

Step 7: The screen will show two buttons for players to press. We decided to use Blarks, Blobs, Blims, and Bablebop as names to avoid words from any particular language. English is not our first language, and we decided to incorporate this experience of having problems pronouncing words.

During this day we also discussed how the users would behave. We had doubts about whether they would want to play, which is why we decided to give users the ability to give both crowns and dirt. We thought having two options would be an incentive for users to engage. We talked about the personalities of our classmates: they would enjoy giving and getting crowns but would also laugh when giving or receiving dirt.

October 26th:
The code needed debugging. Roxanne spent a good amount of time making sure the game worked on iOS and Android. We were facing some difficulties with Android devices.

We set the timer in the code and tested it on different phones.

We also developed the characters’ information and instructions. The process of iterating on the story was very helpful to make sure the instructions made sense.

This day we also made the “phone necklaces”. We tested different bag sizes and discussed the length for the string.

Why using ziplock bags and strings to make “phone necklaces”:
-Allows players to use their hands while trying to convince other players to give them crowns.
-Allows players to make their vote secret.
-Made the interactions more fun and personal.

October 27th
This day we decided the amount of time the users would have to convince other players to give them crowns (4 minutes).

Roxanne tested different ways to make sure the two buttons were working and were giving feedback that they were being pressed.

Observations during gameplay:
-Players were pressing the “Start to Play” button ahead of time. They were eager to play and weren’t engaging much with the character personalities. Similarly, the players didn’t really register and interact with other members of their species.
-Some players didn’t want to put their phones in the “phone necklaces”.
-Some players were giving crowns and dirt.
-Some players were cheating and giving themselves crowns (you can see this happening in the videos).
-One player’s device didn’t run the game.
-Most of the players were laughing very loudly.
-Groups were organically made, one person would try to convince and others would vote.
-Some players did not make any groups and were mingling with the other players.
-Some players felt uncomfortable with the “phone necklaces”.

Takeaways for future iterations:
-Give clear instructions in the game for players to read, and then explain them IRL.
-Find a way for players not to cheat.
-“Phone necklaces” should be in the shape of the mobile device, some phones were placed horizontally in the ziplock bags and it made it hard for the players to vote.
-The string for the “phone necklaces” should be shorter for some players and longer for others. Some players were very short, and taller players found it difficult to vote on their devices.
-More animation and screen candy would be nice.
-Maybe integrate rules for team wins, having players of each species collect crowns together, in order to have more team play and less of a free-for-all.

Project Context:

Video games of the Oppressed:
Just-world Hypothesis:
Werewolf Card Game:
Secret Hitler Game:
Spent Game:



…im melting

Karo Castro-Wunsch + i



a dialogue with a sentient glacier. it’s a needy glacier, granted, but a valid one. a glacier that just wants to be cool. don’t we all have a right to be cool? a basic glacier with froyo dreams. dreams that are melting away.

Multiscreen is used here to bring visibility to the choices people make in their interactions with social issues. Donations are traditionally made anonymously, whereas this project brings the people interacting with the ‘climate change propaganda’ into the same space, next to each other. Making the interaction public puts pressure on those involved to make a move and put down $$$. This is not necessarily a good or effective method of propaganda, as there are definite benefits to anonymous donations, starting with personal differences in what is considered an important cause. Multiscreening also gives each user their own interface to interact with the piece, while each interface remains part of a larger cohesive whole. This emphasizes the we’re-all-in-it-togetherness of global climate issues.



* in order to engage with the piece solo, enter the values 1,1,0 into the text inputs, then click the ++++ button.

WASD: move

QE: tilt

RF: vertical strafe

click to cycle the narrative



The goal of this project is founded in the intention of making rallying propaganda for climate change. Of the many elements of our natural backdrop being lost in this process of change, glaciers are a large and symbolic one. They’re more generic than the loss of individual species and more focused, visually. In order to draw an empathic response to the glacial loss, the idea is to anthropomorphize the glacier in a maybe anime-esque way and drape a human narrative over a crystal losing its integrity. There are so many metaphors to play with here with regard to the losing of shape, of hardness, of clarity, of majesty, of terrain (ice terrain). I attempted to communicate the loss of coolness (losing your cool) by projecting it onto the inanimate symbolic glacier.

The interactivity of the piece was the most difficult to determine. It was originally planned to be more complex, with objects in the 3D environment that could be offered to the glacier, attempting to placate it, none of which would get to the root of the problem: the user’s own IRL actions. The virtual glacier, as a symbolic form, is unsatisfiable and unfixable, its purpose being to do ‘what all ads are supposed to do: create an anxiety relievable by purchase.’ Purchase here means action in the form of buying trees. Note: the buying of trees is to offset your carbon footprint and get the glacier her (its?) coolness back. It was decided, though, that interaction in the 3D space would add little to the message of the piece. In fact, the final product in its current state also has too much going on; a simpler, more dialled-back scene composition would be more effective. The choice to add the distractions button was to call attention to the user’s own inevitable distraction, the distraction that pulls us away from things that are difficult to engage with. The idea is that by calling this out, by making a visible button for distraction, the user can pseudo-pacify themselves for a moment, which would be short-lived, and then be brought back to confronting the issue at hand.

Technically, much of what was set out to be accomplished was accomplished: the embodiment of a glacier in the form of a human textured with reflective surfaces, the incorporation of water texturing to show the melting of the glacier, the multiscreen code to split up the scene into smaller pieces, the incorporation of distraction gifs.
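The multiscreen split amounts to giving each screen one cell of the full scene. A rough sketch of that math (a hypothetical helper; in three.js the result would map onto PerspectiveCamera.setViewOffset):

```javascript
// Which sub-rectangle of the full fullW x fullH view should the
// screen at grid position (col, row) render, given a cols x rows grid?
function viewOffset(fullW, fullH, col, row, cols, rows) {
  return {
    x: (fullW / cols) * col,
    y: (fullH / rows) * row,
    w: fullW / cols,
    h: fullH / rows
  };
}
```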

This piece was too dry to succeed in its goal of drawing an emotional response from its users and moving them to action. The aesthetic of the 20-screen interactive collage was also, I think, too fractured and fragmented for people to really engage well with it. I’m inspired to attempt to create a piece more in line with the one made by another group, which involved taking pictures of each other in poses and using that sort of physical action as a segue to social action, a sort of catharsis of the body. The beginning of any propaganda piece should be easily engaging and emotionally charged in order to move people.


Propaganda has been around for a while. Governments, religions, and corporations all love it and use it to further their ideologies. Propaganda is always about some sort of war, and the better-looking and more engaging it is, the more it helps its cause. Climate change is a war against ourselves and our own over-consumptive tendencies; it stands to reason that we need some good propaganda for it.

Stylistically, the piece harkens to the net art/vaporwave aesthetic, with its 3D models and scene overlaid with text and .gifs.


three.js was used with FlyControls and SoundJS.

water shader :


Phoneless Xmas!

Put Your Phone Away & Have a Merry Christmas!
by  Jad Rabbaa & Yiyi Shao

Pictures and Videos of the final product:

Christmas has always been the time of year when people, friends, and families get together. Nevertheless, with the new generation’s continuing addiction to mobile phones, people forget that they are surrounded by others, even when they deliberately go to events such as Christmas dinners. People tend to get distracted immediately.

The only solution is to be away from phone screens and this Christmas tree is the right solution and excuse.

Inspired by the new, unconventional style of modern Christmas trees, and by the tradition of bringing family members together to collaborate on the Christmas decorations as a social activity, this project saw the light of day.

  1. Host installs the tree at home and sends the link to the guests.
  2. Guests browse to the link; each phone displays a Christmas ornament in a different colour. Guests then place their phones in one of the clear sockets on the tree.
  3. Interaction: the ornaments on screen swing back and forth with the rotation of the phone. On tap, they also play different Christmas sounds and wishes of “Merry Christmas” in several languages of the world.
  4. Outcome: as long as the phones are on the tree, guests are forced to be social and engage with everyone, bringing Christmas dinners back to how they used to be.

  1. In the 16th century, Christmas trees were real green pines. Later, people started using artificial trees made from plastic. Recently, the fashion has shifted to unconventional materials such as crystals, glass, and light bulbs, and sometimes even odd objects such as cups or bottles.
  2. Some cultures throw a pre-Christmas party so all the family members gather to help decorate the tree.
  3. People have become antisocial during family dinners and parties.

Inspirations links:

You can find the final code for our project here.

* Day 1: 18 october 2017:
We started brainstorming the big idea for our project and came up with many ideas:

We mainly wanted to use the multiple screens (2D planes) as layers to create a 3D effect. We thought about making one landscape with 3 to 4 different planes (layers), where the first layer would be trees that react to the movement of the phones, whether horizontal or vertical.

That idea was a little limited by the angle of view, so we thought about the universe instead: creating a 3D effect of stars and constellations with the Milky Way in the background.

We were also interested in using one mobile phone as a speaker (playing a song or music) to influence the other devices, visualizing how close or far away it is by using the sound’s volume and frequency as input.

That idea made us think of simulating instrument sounds with taps and using the 20 screens as a symphony. With a little more research we found that this idea had been explored last year, so we thought of something else that combines the visual effects with the sound effects we were interested in: a cylindrical art installation.

We did some research and found the Tibetan prayer wheel to be a good example to draw inspiration from.

We started sketching the structure as shown in the sketch above.

* Day 2: Friday 20 october 2017:
We started sketching the structure of the prayer wheel.
We decided the visuals would be inspired by Tibetan culture, and we thought of the lotus shape.
We also researched online code examples to simulate the bloop sound.
We then decided we could create the illusion of rotation when all the phones are aligned by letting the graphic move from right to left to give a spinning effect, and that we would use real Tibetan characters on the wheel if we went with this project.

We spoke to Kate about it and we decided to go forward.

* Day 3: Monday 23 October 2017:
We talked about the materials needed and drew a new sketch for the project display as an installation. The main problems we considered are as follows:

  • How do we hang 20 smartphones on a structure so that our audience can interact with them?
  • How do we avoid screens smashing into each other when they swing?
  • What material is suitable for showing both the visualization and the sound? Most importantly, the material should hold the phones well, and it should not take us a very long time to set everything up on Friday.
  • How can we adapt the example code, in particular, to generate sound like a musical instrument?

After discussion, we agreed that clear plastic bags were the only choice: once we attach them to the installation, the only thing people need to do is put their smartphones into the bags, so we don’t need to tie up every single smartphone and hang it on Friday.

sketch 2 

We talked to Nick about our idea; his suggestions were as follows:

  • Instead of hanging 3 phones vertically on one cylinder, we can divide the cylinder into three parts, so each phone is attached at its own level.
  • The sound library in p5.js is worth referencing if we want to make an instrument-like sound.

* Day 4: Tuesday 24 October 2017:

When we sketched out the new structure with 4 different pieces, we found that its final appearance looked very similar to a Christmas tree. Since it is nearly Christmas time and we already miss our families, we decided to alter the concept from a prayer wheel to a Christmas tree. A digital prayer wheel is a very interesting idea, but we would need to research Tibetan culture and Buddhism more deeply to deliver the final piece in a condition appropriate for our audience to understand. If we had more time, we would go further in this direction.

Sketch 3

Sketching the measurements of the structure and meeting with Reza to talk about execution.

Sketch 4

Sketch 5

Working with example code written by someone else is very difficult, because the one we found contains complicated arguments that confused us.

Sound research:

  1. How the example code works
    • The example code uses acceleration events and touch on a mobile device. Only while a touch is active is the acceleration data mapped into notes, and as the device moves, the accelerometer senses different values.
    • Note.js is the other JavaScript file included in the project folder; it uses the Web Audio API to make a synth. In the code, ‘createOscillator’ is the function that generates sound, and the waveform is set to sawtooth.
    • In Note.js, filtering, volume, and pitch are the variables that vary the sound.
  2. How sound works in the p5.js sound library:
    • Vibrations make sound in the physical environment; likewise in code, sounds are generated by mathematical functions in the form of waves. There are four basic waveforms in the digital environment: sine, square, triangle, and sawtooth. Amplitude is the distance between the top and bottom of the wave, which controls the volume of the sound. Period is the length of one wave cycle, which controls the pitch of the sound (1/period = frequency, measured in Hz). As we are trying to improve the rather plain sound in Bloop Dance and make it sound like a bell, the sine wave is gentler and closer to our goal.

Image source: Wikipedia

  • By wrapping multiple waves in an ADSR envelope, the result eventually sounds like a musical instrument, given a certain set of values. A means Attack time, D means Decay time, S means Sustain level, R means Release time.

Image source: Wikipedia

  • Tone.js is another library that can create music in the browser; it helps schedule synths and effects built on top of the Web Audio API.
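The waveform and ADSR ideas above can be sketched with the p5.js sound library. This is a minimal illustration, not the project’s code; the 440 Hz frequency and the envelope values are arbitrary choices, and it assumes the p5.sound library is loaded alongside p5.js.

```javascript
// Pure helper: frequency in Hz from a wave period in seconds (1/period).
function frequencyFromPeriod(periodSeconds) {
  return 1 / periodSeconds;
}

let osc, env;

function setup() {
  createCanvas(400, 400);
  // A sine wave is gentler than sawtooth and closer to a bell.
  osc = new p5.Oscillator('sine');
  osc.freq(frequencyFromPeriod(1 / 440)); // period of a 440 Hz tone
  osc.amp(0);   // the envelope will control the volume
  osc.start();

  // ADSR: attack time, decay time, sustain ratio, release time.
  env = new p5.Envelope();
  env.setADSR(0.01, 0.2, 0.2, 0.5);
  env.setRange(1.0, 0.0); // attack level, release level
}

function touchStarted() {
  env.play(osc); // one bell-like "bloop" per tap
}
```

A short attack with a quick decay is what pushes a plain sine tone toward a bell-like sound.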

This part is too complicated and time consuming. I believe there must be a way to make the Bloop Dance example sound better after spending more time figuring out the Web Audio API, Tone.js, and the p5.js sound library. It would be super cool to generate sound directly from the acceleration data without using an extra sound file.

However, we only have 2 days left for this project. Considering time management and deliverability, and since the final sound output would not differ much between synthesis and loading a sound file, we decided to shift to an easier solution and write the code completely from scratch.

* Day 5 : Wednesday 25 October 2017:
Problems proposed to Kate and Nick:
  1. Whether p5.min.js is included in p5.js (it is simply the minified version of the same library)
  2. How to use Tone.js
Meeting with Reza to execute the structure: we went to Reza to finalize the structure and install the clear cases.

Video of Building installation with Reza:

* Day 6: Thursday 26 October 2017:
After meeting with Kate and Nick yesterday, the visualization finally appeared on the screen with Nick’s help. However, it was running on the wrong values: the ball shook unsteadily and remained in the same position on the screen, and only when the device was shaken very hard did the ball start to move.

We knew the value must be wrong, so we tried to find a way to make a console window (like Arduino’s serial monitor) to print the values on the screen. With Feng’s help, we finally found the problem: the difference between acceleration and orientation.
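An on-screen “console” like this can be sketched in a few lines of p5.js. This is our reconstruction, not the code we used, and `readout` is a hypothetical helper name:

```javascript
// Pure helper: format a labelled sensor reading to two decimals.
function readout(label, value) {
  return label + ': ' + value.toFixed(2);
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  textSize(24);
}

function draw() {
  background(0);
  fill(255);
  // accelerationX and rotationX are built-in p5.js device variables,
  // so both readings can be compared side by side on the phone itself.
  text(readout('accelerationX', accelerationX), 20, 40);
  text(readout('rotationX', rotationX), 20, 80);
}
```

Drawing the values every frame makes it obvious which variable actually moves through the range you expect as the phone tilts.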





Image resource:


The accelerometer provides XYZ values describing the acceleration of the device along each axis (including gravity, g = 9.81 m/s²), and these values are very small. We had mapped accelerationX between -90 and 90 to reset the position of the ball, but after checking back with the p5.js reference we found that the orientation data is what is expressed in degrees, which is the right range.

<<< One main problem solved! >>>

The other problem we faced was mapping the orientation degrees to fit our purpose of playing sound. The phone hangs vertically on the tree, and we wanted the first sound people hear when tapping to be the original one; the pitch then changes depending on the angle as the device swings back and forth. This time, we again used the console window to find the right angle with the right value, did some maths, and voilà! Another problem solved!

– Concerning the colours of the balls, we wanted them to be randomly selected so that classmates would each have a different colour on their phones and the installation would look like a regular Christmas tree. We started by randomizing the RGB values, but the problem was that we sometimes got dark colours that didn’t contrast enough with the green background, so we decided to create an array of 9 fixed bright colours (see the picture below), defined as #26FF00, #0BF9F9, and so on.
The ornament was originally just a round 2D shape, and we wanted to leave it looking digital, but to add more depth we added light and dark sides and some shiny dots on top (as shown in the picture below).
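The palette approach can be sketched as follows. Only #26FF00 and #0BF9F9 come from our notes; the other seven hex values here are placeholders, and `pickColour` is a hypothetical helper name:

```javascript
// Fixed bright palette: every entry contrasts with the green background.
// Only the first two values are from our notes; the rest are placeholders.
const PALETTE = ['#26FF00', '#0BF9F9', '#FF0099', '#FFE600', '#FF6A00',
                 '#9D00FF', '#00A2FF', '#FF3B3B', '#FFFFFF'];

// Pure helper: choose a palette entry given a random value in [0, 1).
function pickColour(r) {
  return PALETTE[Math.floor(r * PALETTE.length)];
}

// In the sketch, each device would pick its ornament colour once on load:
// let ballColour;
// function setup() { ballColour = pickColour(Math.random()); }
```

Choosing from a fixed palette trades variety for guaranteed contrast, which is exactly the problem fully random RGB values had.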

The challenge was to move the whole ball as one entity. The Y position of the ball moves based on a calculation of the percentage of the screen height, reflecting the rotation of the phone.
Through testing and some maths (a lot of maths), we succeeded in finding the right equation for all five ellipses to move proportionally.

Logic: the vertical position of the ball is 0 if the rotation is -90.
           The vertical position of the ball is the screen’s height if the rotation is +90.
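This logic amounts to a linear map from rotation to screen height. A minimal sketch of it (not our exact equation; `ballY` is a hypothetical helper name):

```javascript
// Map a rotation in [-90, +90] degrees linearly onto [0, screenHeight]:
// -90 -> 0 (top), 0 -> middle, +90 -> screenHeight (bottom).
function ballY(rotationDeg, screenHeight) {
  return ((rotationDeg + 90) / 180) * screenHeight;
}

// In the sketch, the five ellipses that make up the ball would all be
// drawn at fixed offsets from this one Y, so they move as one entity:
// const y = ballY(rotationX, height);
```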

– We followed the same logic for the sound’s speed and pitch. For some reason, at a rotation of 0 degrees the sound seemed much slower than it should be, and we needed the phone’s sound to be normal when it is in a vertical position (its normal position when hung on the tree).

After some trial and error, we altered the minimum and maximum values of the orientation: instead of 0 to 180, we added 30 degrees (so 30 to 210), and the sound seemed fine when the phone was vertical.
When we tilt the phone in one direction, the sound becomes faster and the pitch gets higher, and vice versa.
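A hedged sketch of that mapping: the 30-degree offset is from our notes, but the divisor of 120 (so that an assumed vertical reading of 90, plus the offset, gives a normal playback rate of 1.0) is our reconstruction, and `playbackRate` is a hypothetical helper name.

```javascript
// Shift the orientation range by 30 degrees (0..180 becomes 30..210)
// and map it to a playback rate, with 120 -> 1.0 (normal speed).
function playbackRate(orientationDeg) {
  const shifted = orientationDeg + 30;
  return shifted / 120;
}

// With a p5.SoundFile, rate() changes speed and pitch together:
// sound.rate(playbackRate(rotationX));
```

Tilting past vertical pushes the rate above 1.0 (faster, higher pitch); tilting the other way pulls it below 1.0, matching the swinging behaviour described above.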

– We came up with a new idea: recording different voices saying “Merry Christmas” in different languages. Firstly, we couldn’t find a good sound library, and we couldn’t guarantee that 20 phones playing together wouldn’t be noisy. Secondly, while we were working in the study room looking for a jingle-bells song, Mudit was curious and asked why we were looking for a Christmas song sung in English. Very good question! Christmas is an international festival, so why not record wishes in different languages, since we have many international students here? In the end, we collected 20 different languages in total!

– Finalizing the structure with more decoration.



We believe this project has lots of potential. Whether seen as an interactive art installation or a commercial product, it questions the relationship between humans and social media and that infamous addiction to continuously checking our phones.
Beyond this project, we envision a more developed tool, taking it further by adding a function to record people’s voices saying wishes and uploading them to a cloud service, so that each time people tap and interact with a Christmas ball on the tree, they hear different wishes from everyone in the family, or even everyone in the world (with a different button to select areas). People could also customize their own Christmas ball by choosing the colour or shape of the ornament, or maybe even take a picture of themselves during setup before placing the phone in its socket and hanging it on the tree.


Example code:



“What Do We Have Here?” by Quinn and Ramona

Title: “What Do We Have Here?”


Project Members: Quinn Rockliff and Ramona Caprariu

Project Description:

The parameter for this assignment was to use 20 screens. Using this as a platform, we set out to create a game that would be an educational experience for us as well as the users. Since neither of us had prior experience with p5.js, we wanted the resulting game to stress the importance of getting to know the vocabulary and relationships behind the simple interactions possible with this coding language, as that is what we found most important in our process.

The game “What Do We Have Here?” was developed from all of our brainstorming and trials.

Development Journal:


We began our journey in class on Monday, brainstorming different possibilities. We discussed the concept of coworking spaces, shared desk space, and all their implications. We found it intriguing to enter a space where you could somehow see it as it was used by the previous person, and for that to become a method of developing a bond or intimacy. Our initial idea thus developed into creating a ‘desk top’ from phones that could sense the imprints of all the objects placed on top of them and then translate that information into patterns and colours. We took the next couple of days to mull over exactly how we would get the phones to use haptics to ‘sense’ objects rather than fingers.



We came together and decided against the initial idea, seeing as we are both new to coding and wanted to stay within the realm of feasibility. All of our focus on how we would teach ourselves the language of p5.js led us to think about creating a game. Then a theme emerged that we kept returning to: camp games and childhood games. Naturally, as we kept coming back to the 20-screen requirement, we decided to play off a popular game that uses a similar number of ‘playing cards’: Guess Who! Our idea spun off this game with a one-on-one layout, with 10 phones/webpages for each player. Instead of just displaying faces like the original game, we agreed that each screen should have an effect that could be described in ways we are learning through this p5.js process. We created a list of interactions.

We split up the list so that we would each be in charge of different pages and then during our game play, have everybody assigned to access one of the 10 pages (in 2 different sets) and then that would create the ‘game board’.



We went to Michael’s to gather materials for our game board. We knew that learning how to code all ten webpages was a priority, but even more so, we had to understand how the game would be played. If all the phones lay flat, a player would not be able to conceal which phone they had selected. We decided to use foam to cut slots for the ‘average’ phone size; this way, phones could be slotted in quickly as well as turned around when eliminated from the game.

Note to selves: do not try to cut foam like this again. It is messy, makes a snowy mess, and is not easy to be precise with. You will end up covering it in sparkly gold paper and fancy purple tape.

After 2 hours of fighting with an X-Acto knife and foam, we had the outline of our board!





We met in between classes and reviewed some of our issues with the code. Some of the common issues we faced were:

How do we make all of these webpages look related?

We decided to pick a colour scheme and a shape scheme. A 400×400 ellipse would be placed in the middle of the screen whenever possible. This would create an identifying relationship between all of the screens, as well as make the game more difficult since they all seemingly look alike!

Secondly, we picked a colour scheme which we would input into the code whenever we could to add to the effect.




This decision, while not related to the technical code, really brought all of our webpages together. It created a final design that had intention and looked good!

How do we stop the webpage from dragging with our finger?

Ramona did some research and found a beautiful little line of code. Lives were changed.
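We don’t have that exact line here, but a common fix is cancelling the browser’s default touchmove behaviour so the page no longer scrolls or rubber-bands while players drag on the screen; `blockScroll` is a hypothetical name.

```javascript
// Cancel the browser's default drag-to-scroll on touch devices.
function blockScroll(event) {
  event.preventDefault();
  return false;
}

// In the browser, register it as a non-passive listener so
// preventDefault() is actually honoured:
// document.addEventListener('touchmove', blockScroll, { passive: false });
```

The CSS property `touch-action: none` on the canvas is another one-line way to achieve the same effect.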


We both continued to work diligently on our code and worked out the kinks along the way with the help of Coding Rainbow, Google, and p5.js examples online.

We got all of our code uploaded with Cyberduck and prepared for our presentation by creating stricter rules and printing them onto cards to distribute to players.

  1. LERP






  1. SNAKE – arm






We presented our game in class for a critique. Although we prepared to the best of our ability, there was some chaos getting everyone’s phones loaded up with the webpages and slotted into the game board. This was anticipated but still took up a lot of our presenting time. We were able to run two quick-fire rounds of our game, which went well and exemplified the interactivity and playfulness of the game. Some players expressed confusion with some of the screens, saying they were not sure exactly what they were doing or couldn’t remember what the x-y coordinates affected: all questions and concerns we hoped would arise in order to spark conversation and inquiries into the relationship we have with the content placed in front of us. In the critique we discussed future iterations and potential applications of the game.




Video Documentation


You can find the code for all ten of our webpages here:


Experimenting in the final class was a valuable experience in helping us ascertain how our intentions with this project play out. We got the chance to observe how all the participants individually chose to interact with their screens and how natural it was to explore the different ways in which these interactions are possible.


For this project we looked to the classic structure of the game “Guess Who”, which uses the same characters, each with defining features, on the two sides of the board. We wanted to educate the class and ourselves by using the classic examples on p5.js. We also wanted to think about how people interact with screens: what are our first instincts? Do we swipe, tap, shake? How have the apps we use and the interfaces of our screens determined our movements and relationships to interactive design? When there are no words, how do we interpret the information we are provided with? And ultimately, how can we communicate this to others?




Written in the Stars


By Kylie Caraway and Emma Brito

Written in the Stars operates like a digital puzzle that requires teamwork between 20 participants and their phone screens in order to view the entire night sky. It begins with a physical printed map of the sky with the constellations’ names, but devoid of their images. To see a constellation, participants must go online on their phone, click on a link for a specific constellation, and then raise and tilt their phone slightly, as though viewing the sky through it. Once the phone is tilted to a specific degree, the image of the constellation appears. To see all 20 constellations at once, 20 people must participate in order to piece together the map and its proper constellations.

The fact that each screen only displays one constellation at a time is an important feature. Used alone, a screen offers only a small visible fragment of the night sky. This means that the screens, and the people holding them, rely on interaction with others to complete the puzzle and the entire image of the night sky.

Github Code


  1. Andromeda
  2. Aquarius
  3. Aries
  4. Cancer 
  5. Capricorn 
  6. Cassiopeia 
  7. Centaurus 
  8. Draco 
  9. Gemini 
  10. Leo
  11. Libra
  12. Orion
  13. Pegasus 
  14. Pisces 
  15. Sagittarius
  16. Scorpio 
  17. Taurus
  18. Ursa Major
  19. Ursa Minor
  20. Virgo

Process Journal



When we first received the assignment, we quickly decided on using stars and constellations as the focus of the project. This backdrop could utilize simple shapes in complex ways, which we found to be both doable and effective in p5.js.

Initially we liked the idea of having all of the constellations in a single 3-dimensional space, so that as a device turned, the sky-scape would change as well. We liked the idea of people having their own experience and perspective within the same space. (We later realized that this would rule out interaction between participants, eliminating the need for 20 phones in a particular space.)

Beginnings/Trial and Error:

We found a p5.js example called “orbit” that created a 3-dimensional space and allowed us to hang shapes within it. When used on a laptop, the canvas would orient to the mouse as it was dragged, yet snap back to the original view when the mouse button was released, which made it hard to create a realistic night atmosphere. We decided we would instead use phones and devices with an internal compass, so that the change in position was registered based on the rotation of the device. Laptops were ruled out as a result.

Unfortunately, we also quickly found the orbit code difficult to manipulate. We couldn’t randomize the spheres within the code to mimic a starry sky, and it was difficult to pinpoint where we wanted new shapes to go. 2D planes were also very difficult to place in the 3D view. The 3-dimensional space itself was limited in size on our phones, which would make including all 20 of our constellations impossible.


The New Plan:

We scrapped the 3D orbit after we realized it wasn’t going to work well for us, and instead decided on a 2D iteration of the night sky, as Kate suggested. We decided to give each user one constellation, a piece of the larger puzzle of the universe surrounding us. Using p5.js, we would create a 2D landscape, a constellation, and an interaction based on tilting the phone, producing an interactive experience that relies on the participation of 20 users.

Atmosphere / Arrays:


At first, we searched for code or examples of astronomical atmospheres that created linkages between the stars as you clicked (this idea can be visualized through particles.js). Unfortunately, we could not get the particles.js library and code to work within our canvas: there were errors in the JavaScript console between pieces of code within the particles library, which were too daunting to troubleshoot. Next, we looked at parallax effects using arrays. These seemed to work best on laptops but would not translate well to a phone without a mouse-hover function; they also felt more appropriate for a video game (such as Asteroids) than for an observational experience. Finally, we found a star-array code that did not rely on interaction or extra libraries. This became our basis, creating an atmospheric background to surround our constellations. We changed portions of the code because the stars were too slow and not visible on our phones, adjusting the frames per second, the ellipses’ colours and sizes, and the orbit’s location.
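A minimal star-array background in the spirit of the one we adapted (the counts, sizes, and speeds here are illustrative, not the original values):

```javascript
const NUM_STARS = 200;
let stars = [];

// Pure helper: move a star down by `speed`, wrapping back to the top edge.
function advance(y, speed, h) {
  const ny = y + speed;
  return ny > h ? 0 : ny;
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  for (let i = 0; i < NUM_STARS; i++) {
    stars.push({ x: random(width), y: random(height),
                 d: random(1, 3), speed: random(0.2, 1) });
  }
}

function draw() {
  background(10, 10, 30); // deep night-sky blue
  noStroke();
  fill(255);
  for (const s of stars) {
    s.y = advance(s.y, s.speed, height);
    ellipse(s.x, s.y, s.d, s.d);
  }
}
```

Tuning the per-star speed and diameter ranges is what made the drift visible on small phone screens.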


Kylie incorporated gyroscope-measurement code into our project so that tilting the phone would make a constellation appear, mimicking the act of looking up to stargaze. The video of the constellation would then play on a loop until the phone was lowered and no longer tilted. We focused only on the variable “beta”, which measures how much the phone is tilted on the X-axis. At first, we told the program to draw the constellation only when beta was greater than 120. While this angle is closer to how users actually look up into the sky, we realized it would create problems with our map on a flat wall, so we changed the code to draw the constellation when beta is greater than 80, letting people view the constellations while holding their phones against the map on the wall.
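The tilt trigger can be sketched like this (our reconstruction, not the project code; ‘gemini.jpg’ is a placeholder filename, and p5’s built-in rotationX corresponds to the device’s beta value):

```javascript
const TILT_THRESHOLD = 80; // degrees; lowered from 120 for the wall map

// Pure helper so the threshold is easy to test and tweak.
function tiltedEnough(beta) {
  return beta > TILT_THRESHOLD;
}

let constellation;

function preload() {
  constellation = loadImage('gemini.jpg'); // placeholder filename
}

function draw() {
  background(0);
  // Show the constellation only while the phone is tilted up
  // (or held flat against the wall map).
  if (tiltedEnough(rotationX)) {
    image(constellation, 0, 0, width, height);
  }
}
```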


Once we got a grasp on the new kind of code we would be pursuing, we started making the constellations themselves. We chose to include the 12 astrology signs, as well as 8 of the better-known constellations. The astrology signs were important to us because they are commonly a subject that prompts conversation and interaction between users. We decided to create the constellations as simple animations in After Effects, and then play the video over the array. Each constellation would follow the same basic format with a few variations (colour, effect, shapes) in order to keep them stylistically aligned without becoming repetitive. We also toyed with the idea of animating the myths behind the constellations, which would only be seen if users pressed the constellation, but we decided against this for 3 reasons: 1) users pressing an image that is tilted above their heads would make for an uncomfortable, non-intuitive experience; 2) this feat of ultimately 40 animations was beyond our scope and could not be accomplished in our timeframe; 3) getting one video to work was proving impossible, and two videos relying on multiple, sequential interactions was asking for disaster. Although we tried numerous ways to get the videos/GIFs to work, our code did not want to embed the frames within the canvas. In the end, we used 1920 x 1080 JPEG photos, so we could keep the quality of the design without large file sizes.

Challenges and Issues:

Our biggest challenges in this project revolved around code issues. We had slight issues at first with WEBGL and 3D scapes. Pieces were difficult to move around, the code was very sensitive to any changes, the orbit control did not move the way we visualized it would on a phone, and the 3D space it constructed felt too confined for our project. Through these issues, we opted for a 2D space instead.

We also had issues with star arrays and background visuals (as mentioned before). After trial and error, we researched different arrays that depicted constellations, and ultimately found one that was easy to understand, implement and edit, and fit nicely within our project.

Our largest obstacles were displaying the video files and GIFs of our animated constellations. Our first signs of trouble began when we rendered the constellations. We wanted to keep their alpha channel so users could view the stars behind them, but the video files were huge (200 MB or more), and both Atom and Sublime crashed when we used them as assets in the code. We then tried to take the videos and create PNG sequences from them. Atom and Sublime didn’t like this either, because our animations were anywhere from 5 to 10 seconds long (looking back now, I believe this could be why we were unable to play the videos or sequences). We downloaded the p5.play library and attempted to run the PNG sequence, but the animation would never load. We finally decided we had to scrap the alpha channel and plan for a background colour behind the constellations. This realization forced us to change the location of the code so a black box would not appear on top of our array: ultimately, we had to draw the star array after the constellation, so that the canvas background and the constellation background would blend seamlessly.

We also tried GIFs, with no optimal results. The GIFs would not load as videos: they would either hold the first frame (creating a still image), draw at a strange spot on the screen outside the canvas, or not draw at all (the most common scenario). We attempted to download a GIF p5.js library, but there were issues with the library and the GIF would never play. We also attempted to use the p5.dom library and elements code to run the GIFs. By using “createImg” rather than “loadImage”, the GIFs would finally appear… except they drew over the array, regardless of the location of the code, and their position changed based on the device. In the end, the GIFs never operated the way we wanted.

Going back to video, we attempted to make smaller videos that would load more easily onto the phones. Unfortunately, the videos would either crash the site, never load, load only the first frame, or load outside of the canvas and ask you to press play, which would open the video in a new tab. After multiple attempts to implement the code, from the p5.js book and website to other tutorials online, we could never get the video to load. After meeting with Kate and Nick on Wednesday, we tried taking the code apart; even after separating the various portions, we could not get our videos to load within a canvas on either iPhone or Android phones. In the end, we decided to use images rather than video. The images loaded quickly, were placed in the right location, and were reliable, working on both types of phones.


Code Issue Examples:

In this example, we attempted to load an MP4 file. While this code would respond to the gyroscope and place the video in the proper location, it would only show the first frame of the video.

In this example, we used the p5 GIF library in an attempt to load a GIF. The GIF would not display in the proper location, would not play, and would not respond to the gyroscope code. This was our most complete failure, as nothing worked properly.

In this example, we used p5 element code. This was our closest success story: the GIF responded to the gyroscope code, it played, and it was centered on iOS. Unfortunately, it also drew over our array, creating a black box around the GIF even though the code was placed beneath the array. Additionally, when we attempted this on an Android phone, the GIF would not center and created issues with the canvas fitting the phone’s screen size.


Our attempts to deconstruct the code: removing all other code, we tried to load an MP4 video by itself. No success. I assume this was a result of our video file sizes or the video length.


In our first iteration, our map consisted of both the constellations’ names and diagrams. We were relying on the map to help users place their constellation in the larger image, but we realized that the devices would be obsolete if the information was already on the map, so we removed the constellation graphics to create a game element. Removing the constellations allows the user to have an incomplete visual without the assistance of their devices.

Final Iteration:


Regardless of the iterations this project went through, we are very happy with the final incarnation of Written in the Stars. It differs from our original plan, since an image is presented rather than a video, but the other features are present: the gyroscope prompts the image to appear, while the array is a constant. We incorporated the physical map to encourage interaction between people and devices; after all, stargazing has long been a social activity, with stories and mythologies associated with each constellation. We also provided information on the zodiac signs so participants unfamiliar with astrology could learn their sign, as well as horoscopes for a fun read and a conversation piece to connect with our interactive installation. As each participant has their own constellation, they can join with others to create a full atmosphere of the night sky.

Vimeo link here :

Sketches, Designs, and Photographs


Sketch of our initial brainstorming ideas. While we scrapped the 3D atmosphere and 2D game, we implemented our original user experience, complete with animation, gyroscope, and our revised sky map.

We considered incorporating written information, such as a constellation’s history, science, or mythology, but we decided it clogged up the screen and detracted from the overall image of the night sky.

This depicts our colour palette, as well as the aesthetic style we implemented in our project. We strove for clean and simple lines, ellipses and stars, with limited colour options, in order to remain cohesive, yet have enough variety to be visually appealing.


This is a process image of the creation of the Cancer constellation in After Effects. This is one of the smaller of the 20 constellations included in Written in the Stars.


Gemini was our test constellation within the code. This is an image of the visual that appears after the gyroscope is activated. It was with this image that we first realized the video wasn’t playing and launched a series of trial and errors in order to attempt to make the animation play.

Graphic of all of our constellations


First design of our map. Kylie made the mistake of building it in Photoshop at 72 by 48 inches with a resolution of 300 pixels per inch. The file was huge, wouldn’t save, and kept crashing. She finally managed to save it as a PDF, but at 1.64 GB the print shop she sent it to would not accept a file that large. She then recreated it in Illustrator. While she couldn’t reproduce the faint nebula texture she had used in Photoshop, Illustrator was the preferable tool overall: the map could be resized to any desired dimensions, the print shop preferred Illustrator files, and the file size was under 1 MB. Lesson learned: use Illustrator for large prints. Another lesson learned: don’t wait until 2 days before the project is due to get your map printed. Print shops love to charge around a 500% markup for a rush order…


Our poster rolled out for the first time!


Final Presentation Day!

Written in the Stars in action:

The presentation was successful. The coding and images worked properly and we got the desired reaction from the class. We also turned out the lights for added effect.



Our project can be contextualized through a couple of different avenues. As touched on previously, it was important to us to include the astrological constellations because of the personal connection and sense of ownership people feel toward their sign; we need look no further than the fact that horoscopes are a staple in nearly every newspaper. This even caused a stir in 2011, when astronomers said the moon’s gravitational pull had changed the axis of the Earth, resulting in different astrology signs for each month. After the public commotion, NASA had to put out a statement reminding the community that astrology is, in fact, not science.

Stars, and the night sky in general, have been a popular subject throughout history in various forms of art, and later in media. From their initial use by the Babylonians as a storytelling technique, to their representation in artwork such as Salvador Dali’s illustrations of the signs and Vincent Van Gogh’s iconic “The Starry Night,” to astronomy’s current popularity as both a marketing and social tool, there is no question regarding the human affinity for the stars. Given the ubiquity of astrology and the love of stargazing, this project is relatable to a wide audience. We wanted to capitalize on the social aspect of this activity as well, and the 20-screen requirement allowed us to do this.

While Written in the Stars currently serves as an installation that encourages communication and interaction, it could be further developed as an educational tool to teach astronomy. The project could also be used for data visualization. Both NASA’s Kepler Space Telescope, which monitors astronomical phenomena, and the European Space Agency’s Gaia telescope, which has produced a revolutionary catalogue of the structure of stars in the Milky Way galaxy, serve as models for this direction.

In the end, Written in the Stars has the potential to be used for discussions about physical sciences as well as social sciences, as a digital puzzle that can be used for entertainment, group participation, and to illustrate “the unique cognitive-emotional link that makes us the intelligent creatures we are” as we sort through pieces of “randomness” and “information” in order to create a full, comprehensive picture of our surroundings (Mutalik).

References and Influences

It’s impossible to talk about our Written in the Stars project without mentioning the Sky Map app. It serves as both inspiration and aspiration for this project. While our project differs from Sky Map because of our focus on people working together rather than an individual experience, Sky Map is thorough and places all the constellations within one space. We would like to move this project forward to include geolocation of the constellations, as Sky Map has effectively implemented throughout its app.

When we searched online for images of constellations and maps, we noticed extreme variations in constellation forms, number of stars, and constellation locations. We decided to use a reputable source, National Geographic, as our reference for constellation formations, locations, and our map iteration.

Kelsey Oseid’s book, What We See in the Stars: An Illustrated Tour of the Night Sky, provided inspiration for our visual aesthetic, as well as information about the constellations. Although we were unable to get animations running in this prototype, Oseid’s book will continue to be a great reference as this project develops into an interactive storytelling tool about constellations, astrology, and the science behind our universe. Our horoscope source provided the horoscopes and dates for each of the zodiac signs. During the installation, we handed out slips of paper with the constellation, the dates of the zodiac sign, our website link, and the corresponding horoscope. This extra detail got participants engaged with their constellations before the installation began.

This was the initial code for the array we used. We altered both the size of the ellipses and the colour of the background to better suit our phone screens. Other code was gradually simplified, altered, and added to, from changes in frame rate, to the position and flow of the orbit, to the number of stars.

This is where we got the code for the gyroscope/accelerometer. We used it to measure the phone’s position as we moved and tilted the phone. We realized we would only be using the beta variable, so we removed alpha and gamma. We then deleted the rectangle and the code that showed the values of each axis. In the end, we ran a simple “if” statement, so that when the beta value was above 80, the code would draw the constellation.
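The “if” statement described above might look like the following minimal sketch. This is a hedged reconstruction: the drawing functions are placeholders, and the threshold check is pulled into its own helper.

```javascript
// Sketch of the beta-threshold check. Beta is the front-to-back tilt
// reported by the deviceorientation event (0 = flat, 90 = upright).
let beta = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
  // Keep only beta from the event; alpha and gamma are unused.
  window.addEventListener('deviceorientation', (e) => { beta = e.beta; });
}

// Pure helper: the constellation appears once the phone is tilted past 80.
function shouldShowConstellation(b) {
  return b > 80;
}

function draw() {
  background(0);
  drawStarArray(); // the always-on orbiting star array (placeholder)
  if (shouldShowConstellation(beta)) {
    drawConstellation(); // placeholder for the constellation image
  }
}
```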

We also referenced ( and Make: Getting Started with p5.js for multiple portions of our code.


Alessio, Devin. “The Cocktail You Should Be Drinking Based on Your Zodiac Sign.” Elle Decor, 21 Dec., the-cocktail-you-should-be-drinking-based-on-your-zodiac-sign/. Accessed 25 Oct. 2017.

“Daily Horoscopes.” Accessed 26 Oct. 2017.

Darley, James. “A Map of the Heavens.” National Geographic, Dec. 1957, x/n/xng195712a_90.jpg. Accessed 25 Oct. 2017. Map.

Garreau, Vincent. Particles.js. Accessed 20 Oct. 2017.

Guarino, Ben. “Chaos in the Zodiac: Some Virgos Are Leos Now (But NASA Couldn’t Care Less).” The Washington Post, 26 Sept. 2016. Accessed 25 Oct. 2017.

Johnson, Michele, editor. “What Does Kepler Have Its Eye On?” NASA, 31 Aug. 2017, image-feature/what-does-kepler-have-its-eye-on. Accessed 25 Oct. 2017.

Kuiphoff, John. “Gyroscope with P5js.” Coursescript, 2017, notes/interactivecomputing/mobile/gyroscope/. Accessed 25 Oct. 2017.

Max. “[p5.js] Starfield.” Codepen, 9 Oct. 2016. Accessed 25 Oct. 2017.

McCarthy, Lauren, editor. P5.js. Accessed 25 Oct. 2017.

McCarthy, Lauren, et al. Make: Getting Started with P5.js. San Francisco, Maker Media, 2016.

Mutalik, Pradeep. “Can Information Rise from Randomness?” Quanta Magazine, 7 July 2015. Accessed 25 Oct. 2017.

Oseid, Kelsey. What We See in the Stars: An Illustrated Tour of the Night Sky. Ten Speed Press.

Popova, Maria. “Salvador Dali Illustrates the Twelve Signs of the Zodiac.” Brain Pickings, 19 Aug. 2013. Accessed 25 Oct. 2017.

Sky Map. Android and iPhone app, Mobius Entertainment, 2016.

Wolchover, Natalie. “From Gaia, a Twinkling Treasure Trove.” Quanta Magazine, 14 Sept. 2016. Accessed 25 Oct. 2017.

“Lord of the Dance” (Feng Yuan and Dave Foster – Creation & Computation – Exp. 2)

Process Journal (Lord of the Dance)

Feng Yuan and Dave Foster

DIGF-6037-001 – Creation and Computation (Kate Hartman and Nicholas Puckett)

Brief Project Description:

Link 20 (or more) screens (input device type optional – output device type optional – PC, Mac, phone, etc.) such that the result produces an interactive experience or display for 1 or more users.

Tuesday, Oct. 17th:

We met in the 6th floor DF lab space at 12:00.

Project Design Discussion:

As a beginning, we discussed several ideas for the project (short description, discussion results noted below tentative titles):

  • “Proximity Alarm” (your friends are close)
    • Your screen (phone, tablet or computer) gives you an alert when any of 20 specified people or their device comes close.
    • Several problems here (not insurmountable – probably – but complex). For one, how do we establish the “trigger” (Bluetooth signal? TCP/IP address? etc.)? Also, how do we reliably (and preferably simply) code for this?
  • “Proximity Alert” (any Bluetooth device)
    • Your screen (phone, tablet or computer) gives you an alert with an available “radar screen” display of the direction and proximity of any Bluetooth device within a set radius. The idea being to warn us if a camera-equipped device might be nearby.
    • As above with the “trigger” etc. Also does not really address the “20 screen” portion of the proposed project.
  • “Join the Choir”
    • Each screen in the group of 20 (phone, tablet or computer) is given a “voice” in the choir (bass, tenor, alto, soprano, etc.) at random upon “logging in” to an established website. Once the trigger number is reached for the site (20 per project specifications), the “choir” begins to sing (we were unable to pick one tune for this).
    • The trigger is simpler (just a counter on the site) but issues of timing would (we speculated) be problematic. We could not see a way to reliably code around this question.
  • “Lord of the Dance” (Mah Na Mah Na)
    • Based on the Muppet sketch “Mah Na Mah Na”. Each screen becomes a separate but linked numbered site (from 1 to 20), all linked to a central node or site. All 20 screens would receive and display the 2 “singers” and the base “tune” from the central node. Site 1 would (initially) also receive the “little furry guy” or “dancer” who contributes the “mahna mahna” for a verse or two. At that point the “dancer” begins to “riff” on the tune for a bar or two and the “singers” on that screen react “disapprovingly”, to which he responds by “jumping” to a random screen in the array with the allowed “Mah Na Mah Na”, and the song and dance continue.
    • We believe this is do-able and decided to go with this idea.

Research/Build Work:


  • Produced a basic mock-up sketch of the project using Balsamiq (see below)



  • Began research of the required code at the site.
  • Began basic coding.
    • Some discussion (no hard conclusions reached) about what the “look” of the characters should be given the attempted “simplicity” of the coding desired by both participants.
    • Achieved the beginnings of the “dance” on at least one screen.

Friday Oct. 20:


Showed Kate the Balsamiq mockup and described the idea to check acceptability within project parameters (seems to be OK and codeable – if we can make it work)

Monday Oct. 23:


Feng had some concerns regarding separation of sound track for singers and furry-guy.  It would be simpler to code if the furry-guy is separate from the singers.  Several files will have to be created for variation and to separate the “riff” and “reaction” animations.  It was decided that Dave would work on the sound file(s) while Feng coded the movement(s).  Next meeting scheduled for Tuesday, Oct. 24 @ noon.

Research/Build Work:


  • Downloaded WavePad (audio editor) for work with the MP3 file of “Mah Na Mah Na”
  • Began separation of “singer” and “furry-guy” tracks


  • Began coding for drawing characters for use in routine

Tuesday, Oct. 24:


As a result of Feng’s consultation with Nick, we decided that:

  • There was too much of the “server” paradigm in our original idea
  • There was (possibly) too little user-user/screen-screen interaction

As a consequence, we discussed a couple of ways in which to more closely conform to the project guidelines:

  • We discussed making the “dancer’s” movements contingent upon a mouse-click or enter key from each screen.
    • Could not come up with a way to time this to the “Mah Na Mah Na” tune
    • Could not decide exactly how to trigger the user response.
  • The above triggered a thought from Feng — what about a variant of “Whack-a-Mole”? (illustration from Balsamiq mockup below)


    • Advantages:
      • No need to start from scratch for basic idea
      • We could keep the “Mah Na Mah Na” tune and background “singers” intact (no requirement to separate the character’s sound tracks)
      • Simplifies “stimulus/response” or “user interaction” portion of assignment.
      • Simplifies the programming and selection of the “dancer” character.

Research/Build Work:


  • Used Photoshop to remove extraneous background from the picture (see below) and passed it to Feng



  • Began coding of the characters and the “Whack-a-Mole” game
  • Learning P5.JS from The Coding Train and P5.gif.js

Inspiration for the background design:

Wednesday, Oct. 25:

Meeting with Nick and Kate re: concerns with project:

  • Nick reiterated his objection to the server based portion of the concept
    • Recommended some simplification of the concept for “Whack-a-Mole” format
      • 4 “states” required
        • “Mole absent”
        • “Mole up”
        • “Mole hit”
        • “Mole missed”
  • Both Kate and Nick recommended not being “married to” the whole Muppets theme.
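The four recommended “states” above can be sketched as a tiny state machine. The state and event names here are our own placeholders, not anything from the meeting:

```javascript
// Minimal whack-a-mole state machine covering the four recommended states.
const STATES = { ABSENT: 0, UP: 1, HIT: 2, MISSED: 3 };

// event: 'popup' | 'whack' | 'timeout' | 'reset' (hypothetical names)
function nextState(state, event) {
  if (state === STATES.ABSENT && event === 'popup') return STATES.UP;
  if (state === STATES.UP && event === 'whack') return STATES.HIT;
  if (state === STATES.UP && event === 'timeout') return STATES.MISSED;
  if ((state === STATES.HIT || state === STATES.MISSED) && event === 'reset')
    return STATES.ABSENT;
  return state; // ignore events that don't apply in the current state
}
```

Each screen in the array would hold one such state and render a different character drawing per state.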


  • Work on the visual design of the game and draw the characters.

Thursday, Oct. 26:


Feng has simplified and coded the characters (illustrations below), allowing for the “states” recommended in Wednesday’s meeting. Asked to have the tune “chopped” into manageable portions. Discussion/decision as to which type of device(s) to use, as well as the “array” required to play the game. iPads (x9) and iPhones (x11) chosen as the best option given the game format. Discussion as to whether 1 screen for 20 players or 20 screens for 1 player best fits the project parameters and game format. Single player with 20 screens selected as the best format.



Research/Build Work:


  • Continued work on code
  • Work on the characters’ animation.


  • Chopped the tune into acceptable portions (“Mah Na Mah Na”, “Riffs” and 4 different “boop de de de” segments) and E-mailed WAV files to Feng.
  • Worked on Balsamiq mockup of array with input from Feng (illustration below):


Second idea (3 tables in a “U”) chosen as best for single-player format.

Friday, Oct. 27 – presentation day:

Met in the DF lab at 10:30 to finalize. Decided to use Mac screens, as the code appears to work better there than on other devices.

Code available at :

Images from Presentation are below:



Antiboredom. P5.gif.js.

“Best 25+ Animal Muppet Ideas on Pinterest | Drum Kits, Drummers and Rudimental Meaning.” Pinterest. N.p., n.d. Web. 26 Oct. 2017.

“Hyperspace Image.” Google Images. N.p., n.d. Web. 26 Oct. 2017.

“Language Settings.” P5.js | Reference. N.p., n.d. Web. 26 Oct. 2017.

“Mahna Mahna » Free MP3 Songs Download.” Free MP3 Songs Download – N.p., n.d. Web. 26 Oct. 2017.

The Coding Train on YouTube. P5.js Sound Tutorial.

Umiliani, Piero. “Mah-na-mah-na.” Mah-na-mah-na. Parlophone, n.d. MP3.

VHTrayanov. “Muppet Show – Mahna Mahna…m HD 720p Bacco… Original!” YouTube. YouTube, 02 Oct. 2010. Web. 26 Oct. 2017.

The Apples Game

Members: Roxanne Henry, Margot Hunter


Oh what a missed pun opportunity! The name of the game is to find your pair; why oh why didn’t I name this Find Your Pear?

The game loosely revolves around the idea of the memory card game. My original concept was to use the phones as cards: the phones would be laid face down in a grid, and players would have to find matches the same way the card game is played. However, I didn’t think it got people as involved with one another as I would have liked; it could still very well be considered a single-player game. So I started thinking of ways of involving each person and their own personal device. The idea came to me that if each person were a card, they would have the agency to go find their partners themselves. This way, they would have to interact with one another, and with each other’s devices, in order to determine whether they were a pair.

Development Journal

Day 1

The plan moving forward, then, was to have 20 different apple slices that matched up in 10 different ways. I was adamant about randomizing the distribution process, but also about making the game fair and ensuring everyone would have a partner. I knew this would be impossible without some centralized list that kept up to date with the client-side allocation of apples.
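The fair-but-random deal described above can be sketched as building a deck of 10 pairs (20 slices) and shuffling it, so every slice is guaranteed a partner. The slice names here are hypothetical, not the real asset names:

```javascript
// Build a shuffled list of 20 apple slices that form 10 guaranteed pairs.
// `rnd` defaults to Math.random but can be injected for deterministic tests.
function buildDeck(pairCount, rnd = Math.random) {
  const deck = [];
  for (let i = 1; i <= pairCount; i++) {
    deck.push(`apple-${i}-left`, `apple-${i}-right`); // hypothetical names
  }
  // Fisher-Yates shuffle: randomizes order without losing any slice.
  for (let i = deck.length - 1; i > 0; i--) {
    const j = Math.floor(rnd() * (i + 1));
    [deck[i], deck[j]] = [deck[j], deck[i]];
  }
  return deck;
}
```

The centralized list in the actual game plays the role of this deck, with each player claiming the next available slice.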

I started looking into server-side controls. Originally I had a file with a list of the apple slice names, and the client-side code would look at the file to find out which apples were available for picking. However, I needed a way for the client-side code to tell the file once a specific apple had been used, and it is unfortunately impossible for client-side code to write directly into server-side files, for obvious security reasons.



So, I investigated the possibility of using server-side scripting to do the writing. It took me a while to figure out which server-side scripting languages had been installed on the webspaces, but I soon discovered they supported both Node.js and PHP. I had slightly more experience with PHP, so I started with that. Unfortunately, there seemed to be a permissions problem, and I didn’t want to spend too long debugging it; I know from experience that that kind of error can come from any level of the security protocols. I took one go at asking IT for write permissions to the servers, and when that fell through, I immediately started looking for another option. I didn’t want to waste too much time on things I wasn’t certain I could make work.


Day 2

Moving on, I looked into external API-enabled database solutions; brief consultation with Nick had reminded me of their existence. It didn’t take too long to find one that was free to use. I signed up, created my database, and started getting to know the API.


To my surprise, it was fairly simple to set up my code, using only p5, to communicate through the API to the database. I hadn’t expected a library dedicated to drawing and animation to have a powerful selection of HTTP methods, but I was pleasantly proven wrong. The biggest challenge with this API was making sure I had set up my CORS-enabled API key properly. It took a few tries of reading the examples and the API documentation, plus some brute-force testing, to figure out the happy combination I needed to access the database. It turns out a terminal slash in the default URL means something pretty specific to the API, and it was throwing off all my results. It’s always the small things.

Soon enough, I had an infrastructure that would randomly select an apple from a list of available apples. There was still a small risk of duplicates being attributed: the gap between the client-side code attributing a random apple and updating the database was still long enough that a second player could fetch the SAME list of apples as the first, meaning their randomly attributed apple could, in theory, be the same. But the approach severely limited the chances of this, and that was good enough for the requirements of the project. It would have been impossible to guarantee total fairness unless the random apple were selected and immediately updated by the server itself, and I didn’t have the time to learn whether the database service I was using even had that capability.
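The claim-and-update flow might look like the sketch below. This is a hedged reconstruction, not the project’s actual code: the endpoint URL, API key, and field names (`claimed`, `_id`, `filename`) are placeholders, and it uses the standard fetch API directly (p5’s httpGet/httpPost offer similar calls). The comment marks exactly where the race condition lives.

```javascript
const API_URL = 'https://example-db.example.com/rest/apples'; // placeholder
const API_KEY = 'YOUR-CORS-ENABLED-KEY';                      // placeholder

// Pure selection step: pick one apple from the available list.
// `rnd` defaults to Math.random but can be injected for testing.
function pickRandomApple(available, rnd = Math.random) {
  return available[Math.floor(rnd() * available.length)];
}

async function claimApple() {
  const headers = { 'x-apikey': API_KEY, 'Content-Type': 'application/json' };
  // 1. GET the list of apples still marked available.
  const rows = await (await fetch(API_URL, { headers })).json();
  const available = rows.filter((r) => !r.claimed);
  if (available.length === 0) {
    showOutOfApplesImage(); // placeholder for the "no apples left" image
    return;
  }
  const mine = pickRandomApple(available);
  // 2. Write the claim back. Steps 1 and 2 are not atomic, so two players
  // who GET at nearly the same moment can still, rarely, draw the same apple.
  await fetch(`${API_URL}/${mine._id}`, {
    method: 'PUT',
    headers,
    body: JSON.stringify({ claimed: true }),
  });
  loadImage(mine.filename, (img) => displayApple(img)); // p5 image load
}
```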

Day 3

I finally had access to apple drawings that I could test image loading with. Loading the images based on the incoming image file name worked out quickly and easily; I wasn’t sure why I thought that would be difficult, but it really wasn’t. Something that did vex me momentarily and without explanation, however, was that using displayWidth and displayHeight gave me tiny apples on mobile, though it displayed correctly on PC. I found that using windowWidth and windowHeight worked better, with the reverse problem on PC. This was fine, since the game is easier to play on mobile overall.
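The sizing fix can be sketched as follows: size the canvas from windowWidth/windowHeight (the viewport) rather than displayWidth/displayHeight (the whole screen in device pixels). The asset name and the 80% fit factor are assumptions, not the project’s real values:

```javascript
let apple;

function preload() {
  apple = loadImage('apple-01.png'); // placeholder asset name
}

function setup() {
  createCanvas(windowWidth, windowHeight); // viewport, not full screen
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight); // keep the canvas in sync
}

// Pure helper: scale factor that fits the image inside 80% of the viewport.
function fitScale(viewW, viewH, imgW, imgH) {
  return (Math.min(viewW, viewH) * 0.8) / Math.max(imgW, imgH);
}

function draw() {
  background(255);
  const s = fitScale(width, height, apple.width, apple.height);
  image(apple, (width - apple.width * s) / 2, (height - apple.height * s) / 2,
        apple.width * s, apple.height * s);
}
```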

The hardest part I wasn’t expecting came later when I found that the API’s GET method was a little slow. I decided to create a custom loading animation, in order to entertain the player while they waited. A simple thing, only 15 frames long, of the game’s eponymous apple being eaten, and then exploding back into a full apple. I thought the loadImage() function would load a gif as easily as a png. I was wrong in the most obnoxious way: it loaded the gif, alright, but only the first frame. RIP.

I started looking for solutions. After about 30-45 minutes of research, the first one I found was a suggestion about using loadImg() instead. This worked, except not in the way I expected, and certainly not the way I wanted. It created an HTML img element outside the canvas, without transparency, and without p5 control over it. This was not a good solution.

Day 4

I moved on to the p5.gif library, which I found worked wonderfully during testing. It was simple to use and worked the same way as loadImage(), except one would use loadAnimation(). The honeymoon phase wore off fast, though, when I realized it doesn’t work on mobile. Ugh!

Next, I found which, thankfully, worked really well on all devices I tested on. It required a few extra lines of code, but it was worth it to get the loading gif to finally work.

Finally, I added an image that would appear when the database was out of usable apples to inform the user of the system’s status. I think this is a really important piece of information to share with the user.

Day 5 – Morning of the presentation

The game only works on my phone, for some reason; everyone else is getting security errors. Nick and Kate show up and I share my concerns. They suggest I host it elsewhere. Of course. I feel that if I had hosted it elsewhere from the start, I would have avoided a lot of the server problems of the first few days. Alas. I have about 10 minutes to change the hosting. I quickly set up a new git repository and throw my code there. Everything loads, and I have no idea why. I made several rapid-fire changes in my rush to get it working; I think the fix may have been adding “.js” to the end of the include in my index file. Originally it had worked fine without it, but I suspect GitHub’s hosting has stricter rules about that. I guess I’ll never know! It worked in the end and people seemed to enjoy it, so all’s well that ends well.




The code is available through, and also hosted on Github.

Video (thanks to Tommy for filming!)

Available at Vimeo.

Project context and bibliography

So the project was originally going to be a card game, and then it became a human card game. It’s a bit difficult for me to frame it as “a game which aimed to connect people physically instead of through technology,” because I don’t personally believe the two have to be mutually exclusive. Sure, in this game’s case it could have been real, physical cards instead, but why not use the phone, which everyone already has? It makes for impromptu games, no planning ahead required; the technology lets us access it any time we want, without worrying about bringing the card game with us. Of course, several modifications would have to be made to allow for this. Custom game lobbies for groups of players and truly random assignment of the apple halves would need to be implemented, for starters. Another good modification would be to change from apples to pears. Gotta be punny. But ultimately, I do not think this project’s aim was to bring people to interact outside of technology, but rather to embrace its possibilities in a context where people just want an impromptu icebreaker game.

Antiboredom. “Antiboredom/p5.gif.js.” GitHub. December 20, 2016. Accessed October 24, 2017.


“Apples-for-the-teacher-gift-bushelbasket.jpg.” Digital image. Two Sisters Crafting. Accessed October 24, 2017.


Pedercini, Paolo. “ – a game library for p5.js.” – a game library for p5.js. Accessed October 24, 2017.


Brig. “Processing 2.x and 3.x Forum.” Processing 2.0 Forum. Accessed October 22, 2017.


“Reference.” P5.js | reference. Accessed October 2017.

The Team at. “Plug and Play database service.” May 31, 2016. Accessed October 17, 2017.


Title: Dough Not Game

Group: Savaya Shinkaruk and Max Lander

Project Description

Our game grew out of brainstorming a project for our multiscreen assignment.

The goal of this experiment is to create an interactive experience for 20 screens. This could be 20 laptops lined up in a row, 20 phones laid out in a grid, or something of your own imagining. Possible inputs include camera or mouse. Possible responses will be reviewed in class. Students are responsible for developing their own conceptual framework. – Nick and Kate

When we were first assigned this project, we wanted to create something that would be fun but would also play with people’s emotions within a pre-designed art installation. We need an audience to create the image; with no audience, there is no art piece.

We go into more depth about the journey of our process in our blog, but the overall description of our project is:

We created a game where people interact with each other as much as they interact with their phones. The idea is an exhibit where you and others need to work together to create a large image – large enough that 20 screens are needed, because each phone displays one piece of the whole image.

To play the game, navigate to the link on your smartphone. Once it has loaded, make an erasing motion on your phone screen to reveal an image; once you have ‘scratched’ enough to see most of the image, shake your phone to stop the ellipses (circles) from making the image disappear. Zoom in on the image if necessary to align the edges with your screen, and line up your phone with your fellow players’ complementary images.
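The scratch-and-shake mechanic might look like the minimal sketch below. This is a hedged reconstruction, assuming the puzzle-piece image matches the canvas size; the asset name, patch size, ellipse rate, and shake threshold are all placeholders:

```javascript
let piece;          // this phone's puzzle piece (placeholder asset)
let locked = false; // set true once the player shakes the phone

function preload() {
  piece = loadImage('piece-01.png'); // placeholder
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  background(200);        // the cover the player scratches through
  setShakeThreshold(60);  // p5's shake sensitivity (guessed value)
}

// Pure helper: drop a covering ellipse every 10th frame, unless locked.
function shouldDropEllipse(frame, isLocked) {
  return !isLocked && frame % 10 === 0;
}

function draw() {
  if (shouldDropEllipse(frameCount, locked)) {
    // The "computer fights back": random ellipses re-cover the image.
    fill(200);
    noStroke();
    ellipse(random(width), random(height), 60, 60);
  }
}

function touchMoved() {
  // Scratching copies a small patch of the hidden image onto the canvas.
  copy(piece, mouseX, mouseY, 40, 40, mouseX, mouseY, 40, 40);
  return false; // prevent the browser from scrolling
}

function deviceShaken() {
  locked = true; // shaking stops the ellipses from eating the image
}
```

Because draw() never clears the background, scratched patches persist until an ellipse happens to cover them again.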

Be sure to test the game because there are lots of fun emotional factors!

So, continue on to the rest of our page to read more about the ‘Dough Not Game’ team and our journey.

About the team

Savaya Shinkaruk: Savaya Shinkaruk is a fashion stylist and journalist with a keen interest in wanting to blend components of the online fashion industry with design. She graduated with a BA in communications in 2017 and is completing her MDes in Digital Futures at OCAD University.

Max Lander: N. Maxwell Lander is a photographer, designer, game-maker and hedonist. His work often blurs the line between disgust and desire, and involves a lot of fake and real blood. He enjoys making things that engage with gender, kink, violence, and neon. In particular, his work critically engages with masculinity in ways that range from the subtle and playful to brutal and unnerving.




October 16, 2017

Today we were introduced to our second assignment in our Creation and Computation class.

This assignment is to come up with a digital concept that is both interactive and shown on 20 digital screens. These screens can include iPhones, computers, or iPads.

During class hours, Max and I discussed some possibilities for what we could do for this assignment.

NOTE: Our research for this project mainly came from us using Make: Getting Started with p5.js. And just testing code from there and re-testing it till it worked the way we wanted it to.

Also from the Reference page on the website. And again, just testing and re-testing code till it worked in a way that we wanted it to. 

Here are some initial ideas we started with:

  1. We want our digital screen to be a phone.
  2. Each person gets a piece of a full image on their phone screen.
  3. The interactive element is to have people walk around and look for their puzzle match so people can create a large image.
  4. The other interactive element is to have each person go to the link where their image will be and have to do a ‘scratch and sniff’ method to find out what their image is.

We liked the puzzle concept so much that we decided to keep it, but to figure out a way to make it more challenging to code and design – and to add a more frantic interactive element.

Here are things we need to remember when putting our project together:

  • This assignment has to be interactive on a digital and physical level – which we accomplished in our brainstorming ideas.
  • We have to code it in a way so people don’t get the same image every time.
  • We need to make it digitally challenging for us to code but for people to play too.

In the end our concept is:

You download the code – erase to see what image you get – then assemble a puzzle – with a frustrating factor (which we need to create).

We came up with a game where you interact with people in the room by using your phone to create a larger and full image. Each person in the room will have a different piece to the puzzle – and to figure out what your image is, you have to use your finger on your phone to make it appear.

Additions to our concept:

  • Will the image you are trying to see stop so you don’t have to keep using your finger to get the image on your screen?
  • What will the frustrating factor be? One idea we really like is to have random ellipses popping up on the screen while you are erasing. As if the computer is fighting back.
  • A colour theme.
  • An image.

From here we both went home and researched and tested ways to make the design and coding more intricate, take the game to the next level, and make it look better visually.

Here are some sketches of the ideas we came up with:



End of day one.




October 20, 2017

Over the week Max and I were busy with other school projects – but kept in touch via Facebook when we had an idea or coded something new.

During our conversations we both felt we needed to add another element to the interactive portion of people ‘scratching’ to see the image on their phone screen – which was mentioned on day one, where we talked about using random ellipses.

In class on October 20, 2017 we re-connected face to face about some of the ideas we thought about and started to get to work.

Things to think about / research to code / we want to incorporate:

  • Cut a full image into an even number of squares: either 20 pieces, OR fewer than 20 pieces so that pieces turn up more frequently.
  • Have a reload button so people can reload to get a new image if someone else already has their image.
  • Figure out a way to have an image fit to the screen.
  • Random versus sequential (x+1, x+2) when it comes to who gets what image.
  • How does the image stop?
  • List of images // each page refresh will place someone randomly in that list // and will then have a button to move sequential through that list.
  • Pick a phone // 'scratch and sniff' process with that phone to figure out the image // see if it links to a buddy's image // if not, press next image
  • We will use 20 phones // but talked about only using 10 // could change this later.
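The "random versus sequential" idea in the list above could be sketched like this in p5.js: each page load drops the player at a random index in the list of pieces, and the NEXT IMAGE button walks forward through the list, wrapping at the end (the count of 20 and the `pieceN.png` naming are assumptions):

```javascript
const PIECES = 20; // assumed filenames: piece0.png ... piece19.png

let index;

// Each page refresh places the player randomly in the list.
function startIndex(total) {
  return Math.floor(Math.random() * total);
}

// The NEXT IMAGE button then moves sequentially, wrapping at the end
// so every piece stays reachable from any starting point.
function nextIndex(current, total) {
  return (current + 1) % total;
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  index = startIndex(PIECES);
  const btn = createButton('NEXT IMAGE');
  btn.mousePressed(() => {
    index = nextIndex(index, PIECES);
  });
}
```

Because everyone walks the same ring from a different random start, two players who keep pressing NEXT will eventually cover every piece between them.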

Images to use:

    • Doughnuts.
    • Something poetic.
    • Something sensational.
    • We need an image that doesn't have a lot of detail in it, so it will be easy for people to find their match.
    • So we decided on the doughnut image, shown below.

Testing period for some of the things we wanted to add / change / test:

P5 Code / what we looked at / what is working / what is not:

  • Background image – full image or to cover with a colour to punch through?
  • Adding and using plain JavaScript as well, so we can see how to load images from a different folder; we are unsure whether p5 can do this on its own.
  • p5's copy code, which copies whatever you tell it to; in our case, we are telling it to copy the image we put in the background.
  • Loaded an image; when you click, it copies (stamps) the loaded image onto your canvas. BUT because the image isn't full width, it isn't filling the window – we can stretch it or find a suitably sized image.
  • Trying to find an image that takes the size of the screen – like 'fit to content' in InDesign.
  • Issue we keep seeing: it only copies at its default size. Why is this?
  • Maybe try to make a canvas and fit the image to that…
  • We are trying to avoid putting another background colour on top of the image and then erasing or copying; after testing, we found we can't erase.
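One possible way around the "only copies at its default size" issue in the list above: `copy()` takes separate source and destination dimensions, so passing the canvas size as the destination stretches the stamp to fill the window. A hedged sketch (`bg.png` is a placeholder filename):

```javascript
let bg;

function preload() {
  bg = loadImage('bg.png'); // placeholder asset name
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  // source region: the whole image; destination: the whole canvas,
  // so the stamp is scaled (stretched) to fill the window.
  copy(bg, 0, 0, bg.width, bg.height, 0, 0, width, height);
}

// If stretching distorts too much, a cover-style scale factor preserves
// the aspect ratio while still filling the canvas.
function coverScale(imgW, imgH, canW, canH) {
  return Math.max(canW / imgW, canH / imgH);
}
```

Drawing the image at `bg.width * coverScale(...)` by `bg.height * coverScale(...)` would fill the canvas without distortion, cropping the overflow instead.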

Codes we are using / are wanting to use:

This list will keep being updated as we move throughout our project

  • Random Background image code
  • Copy code
  • Random Ellipse code
  • Fill code
  • Stroke code
  • Random Background code
  • Button for next image code
  • Anti-Bounce code
  • Shake code
  • FrameRate code

Game names:

    • Digital scratch game
    • Puzzle scratch game
    • Puzzle party game
    • Scratch and party game
    • Erase and puzzle game
    • Scratch array game
    • Do Not game – Winner! – but we changed it… See what we changed it to as you read through the rest of our blog.

Ways to make the game visually interesting:

  • Play with the colour theme.
  • Add a background colour image – jk leave it white.
  • Make the circles a different colour / grow / change colour /
  • Have the copy be white.

After a day of testing and trying our ideas, some things worked and others didn't. We assigned jobs for each of us to work on over the weekend, until we meet again on Monday.

Based on the ideas we had last week though, here is a video of the first trial run:

Max: figure out a way to make the image fill the page without breaking the copy code.

Savaya: make it more visually appealing, play with the copy code and its colour, and add another 'frustrating' element.

Here is a video to show the practiced colour theme of the random ellipses (not final):

End of day two.




October 23, 2017

Today we worked on the blog a little bit more to showcase our process – both as a group and separately (always thinking of the group).

Here is an image to show the colour theme of the ellipses:


These colours were chosen from the Doughnut image using Adobe Color CC

We also finalized the concept of the game:

The full image our class will be putting together – if they can – will be an image of doughnuts. We will be using 20 screens, and each person should get a different piece to create the final image.

You will download the code we give you and see a blank screen. Use your finger in an erasing motion on your screen to reveal which piece of the image you get. If you get one that someone else already has, click the NEXT IMAGE button and try again until you have a piece no one else has.

THE CATCH: as you try to reveal the image on your screen, another function is simultaneously deleting it (random ellipses) – so keep swiping fast!! And will you ever be able to put together the complete image? Yes! When you have your image, shake your phone to stop the ellipses from copying over it.
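In p5.js terms, the shake step maps naturally onto the `deviceShaken()` event; a rough sketch of how it could gate the ellipse attack (the threshold value is a guess that would need tuning):

```javascript
let frozen = false;

function setup() {
  createCanvas(windowWidth, windowHeight);
  setShakeThreshold(30); // p5's default; raise it to demand a harder shake
  noStroke();
}

// p5 calls this when the accelerometer change exceeds the shake threshold.
function deviceShaken() {
  frozen = true; // one shake permanently stops the ellipses
}

function draw() {
  if (frozen) return; // the attack stops once the player shakes the phone
  fill(random(255), random(255), random(255));
  ellipse(random(width), random(height), 30, 30);
}
```

Using a flag rather than `noLoop()` keeps `draw()` running, so other behaviour (like redrawing the revealed piece) could continue after the shake.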

Image of the emotional goal for the player using the game:


So, now that we have figured out the context of the game and are working to finalize the colour theme, we are seeing a few issues.

A couple problems we are seeing:

  • When we use the erasing motion on a phone screen, on both Android and iPhone the movement isn't smooth: it moves the whole page rather than letting you 'scratch' to reveal your image with no extra movement. But yay! We fixed it. (In CSS: position:fixed and overflow:hidden)
  • The random ellipses are a little bit TOO crazy. We need to figure out a way to slow them down. (the discovery of frameRate!)
  • We need to re-size the doughnut image to make it larger, so that each person's small puzzle piece covers their phone screen.
  • When putting the ellipse code and the game code together, the scale of the ellipses was doing something weird to the image.
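For reference, the two fixes mentioned in the list above amount to only a few lines as we understand them; here they are applied from inside the sketch itself (the frame rate value is a guess at the right "mild panic" setting, not a magic number):

```javascript
function setup() {
  createCanvas(windowWidth, windowHeight);

  // Fix 1: stop phones from panning/scrolling the page while scratching.
  // Equivalent to the CSS rules position: fixed; overflow: hidden on <body>.
  document.body.style.position = 'fixed';
  document.body.style.overflow = 'hidden';

  // Fix 2: tame the ellipse attack. draw() normally runs ~60 times per
  // second; capping it slows the ellipses without touching any other logic.
  frameRate(10);
}
```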

The next step after solving these issues / steps to fix:

  • When cutting the images we realized we cannot cut it into 20 pieces because of the size. So – the glitch in our assignment will be only 18 people will actually be apart of the whole image, but everyone will be apart of the game. – Decided not to do this – and we made it so it would be for 20 screens.
  • When sizing the image into pieces we should air on the smaller size because people can enlarge the image after to ‘connect’ the dots as close as possible.

Here is an image to show how we are going to break up the whole image:


End of day three.




October 24, 2017

Today we worked on turning the 'get a new image' tab on the page into an image, so that when you scroll over it with your finger it no longer gets highlighted (which it was doing as plain text).

Here is an image of the New Image link:


Because the default p5 button doesn't seem to have a built-in way to display an image, we decided to change the CSS properties of all buttons to show the above image. This solution would not work if we were using multiple buttons, but we are not, so yay! If we were using multiple buttons, we imagine the way to do it would be to create the buttons in HTML and then look into how to link them to a p5 function.
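A hedged sketch of how the same effect could be achieved per-button through p5's DOM API instead of a global CSS rule, which would also scale to multiple buttons (the asset name `new-image.png` and the sizes are placeholders):

```javascript
// Styles one p5 button to show an image instead of the default chrome.
function styleAsImage(btn, src, w, h) {
  btn.html('');                                   // remove the text label
  btn.style('background-image', 'url(' + src + ')');
  btn.style('background-size', 'cover');
  btn.style('border', 'none');
  btn.style('width', w + 'px');
  btn.style('height', h + 'px');
  return btn;
}

function setup() {
  const btn = createButton('NEXT IMAGE');
  styleAsImage(btn, 'new-image.png', 120, 40);    // placeholder asset name
}
```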

We also worked on finalizing the speed and look of the ellipses. They feel much faster on a phone than they do on a computer, so after fiddling around we decided to slow them down a little bit more, so as to produce only a mild panic.

We also decided to update the name of our game from The Do Not Game to The Dough Not Game.

End of day four.




October 26, 2017

Today we worked on making our game look good and finalizing all the code and the flow of the game. We did this via Facebook whenever we thought of something we would need to add to make the interaction of the game smooth.

Here is an image to show how it will look once everyone has gotten an image on their phone:


We asked on our DF Grads 2019 Facebook page what kind of phones everyone had, to make sure the pieces of the image would fit on people's screens. Based on everyone's answers, some people will have to enlarge their image on their phone screen to match their partners'.

Here is an image to show everyone responding to our question on Facebook:


This information also re-assured us that using 20 screens will work!

For the interaction portion of the game, we talked about how it might be a little crazy for everyone to be running around looking for their puzzle partner – so we added a little tip to our instructions.

The tip: to make this process easier, try getting into groups or partners so you can work on pieces of the puzzle together, rather than doing it individually and hoping you will find someone whose image matches yours.

At the end of all our brainstorming, coding, and hard work, here is a GIF to give you a preview of our game, The Dough Not Game:

End of day five.




October 27, 2017

To take a look at our code click this link: The Dough Not Game Code

To play the game go to this link:

Here is an image of our branding:

Here is the final video of everyone playing our game:

Here is an image to show everyone's interaction with the game:


Here is an image to show the whole image everyone created by playing the game:

Processed with VSCO with f2 preset

In the end there were not 20 phones in use, but it still created a cool picture!

End of day six.



Dough Not Game:

The Dough Not Game is a fun art and digital installation that lets you connect with others in a single room while it plays with your emotions.

To play The Dough Not Game: download the code that is shared with you, then use an erasing motion on your phone screen to reveal a piece of a complete image. Once you have 'scratched' enough to see most of the image, shake your phone to stop the random ellipses from making it disappear. From there, find others playing the game and connect with them to match your pieces of the puzzle; once you have found your puzzle partners, you will create the full image across your phone screens. Remember, you need up to 20 people to play this game.

Here is a tip to make this game a little more interactive and fun: try getting into groups or partners so you can work on pieces of the puzzle together, rather than doing it individually and hoping you will find someone whose image matches yours.

Take a picture and tag us in it to show us the image you made with your peers!

Project Members: Max Lander and Savaya Shinkaruk.

Project Context:

The Dough Not Game was designed and created to be a fun art and digital installation game for people bonding in a common space. It is now common for people to enter a shared space and automatically turn to their digital screens, connecting online rather than face to face – so Max and Savaya created this game so people could connect face to face while still needing a digital screen to make the interaction happen.

We also made it so people have to come together and work together to create a bigger picture – while also working on their own 'erasing and scratching' skills to find a piece of that bigger image. Most of our research on how to create this game consisted of brainstorming fun game ideas, then using our p5.js book and the p5.js website to make them happen.

During our brainstorming we both liked games where you are given a scrambled image and have to move the pieces around to complete it. So we took this concept, went with it, and talked through how we could build it with our understanding of the p5 code. And so, The Dough Not Game was created.

With the goal of our assignment in mind – to create an interactive experience for 20 screens – we aimed for an interactive component on your own screen that is also interactive with other people in a common space. With that in mind, we created something that was fun to make and play, and explored different areas of a new coding system (p5). Starting from a broad concept, we thought about what we would like to use and do when making an interactive piece.


The Dough Not Game Code

URL link:

Supporting Visuals and Design Files to show our process to our final code and URL link to our game are throughout the blog post above. ^^


McCarthy, L., Reas, C., & Fry, B. (2015). Make: Getting Started with p5.js. (1st ed.). San Francisco, CA: Maker Media

P5.Js. (n.d.). References. Retrieved October 16, 2017, from

End of experiment.


Experiment #2 – RGBelieve it!

Group Members: Roxolyana Shepko-Hamilton & Kristy Boyce

Project Title: RGBelieve it !

Description: RGBelieve it! is an interactive, multiscreen colour-tracking experience. It works as a game or a stand-alone art installation, depending on the user's needs/mood. Using computer vision and webcams, RGBelieve it! tracks colour and creates a variety of "stunning" shape-based on-screen experiences.

For our project, we wanted to get more familiar with capturing and tracking motion via a computer’s webcam.

We focused our research around the following questions:

How do we use multiple computers to track motion and show the effect of it on each screen? Is it possible to have a motion sensor on each computer connected to a central website? How would the website automate the output? Can we make motion based art? What about a self portrait done via video captured from a web cam?

Sketches & Brainstorming:

  • Create art with your body
  • Lay all of the phones down and use as a giant interface that you can interact with
  • Phones are all touch-based: motion tracking? Gestural? Drawing input, swipe, etc. Could we make the phone vibrate?
  • Computer screens – light sensitivity, motion tracking, speakers in computers,

We decided to try and create a multi-screen, motion activated art installation; when you moved, the visualization on the screens would move, grow and ideally draw.

So we researched people who had created similar work and watched tutorials on painting with pixels, edge detection, etc. We got the webcam up and running fairly quickly, but found filters like "pixellate" a little too basic. Though it would look cool to have 20 screens with different looks (heatmap, pixelation, edge detection, etc.), we worried that using the filter function was too basic in terms of adding our own touch and really creating something new.

One of our first ideas was to have a wall of vintage-looking TVs, like one might have seen in a 1960s department store – but of course, in this case, the "TVs" would be PNG files layered over our webcam video feed.


Edge detection with a png tv border
Motion tracking with webcam is a comparison of one frame to the next, shifting of the color


TV wall sketch





Links to some of our first early research:

To open webcam in browser

Brightness Mirror – p5.js Tutorial This video looks at how to create an abstract mirror in a p5.js canvas based on the brightness values of the pixels from a live video feed:
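In outline, the brightness-mirror idea from that tutorial reads each pixel of the live video feed and draws something sized by its brightness; a rough sketch of our understanding (sampling step and sizes are our choices, not the tutorial's exact values):

```javascript
let video;
const STEP = 16; // sample every 16th pixel to keep the frame rate usable

// Pure helper: rough brightness of an r,g,b pixel, in the range 0-255.
function brightnessOf(r, g, b) {
  return (r + g + b) / 3;
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide(); // draw from the pixel data, not the raw <video> element
  noStroke();
}

function draw() {
  background(0);
  video.loadPixels();
  for (let y = 0; y < video.height; y += STEP) {
    for (let x = 0; x < video.width; x += STEP) {
      const i = (y * video.width + x) * 4; // pixels array is RGBA, 4 per pixel
      const b = brightnessOf(video.pixels[i], video.pixels[i + 1], video.pixels[i + 2]);
      fill(255);
      // brighter pixels become bigger dots, forming an abstract mirror
      ellipse(x, y, map(b, 0, 255, 2, STEP), map(b, 0, 255, 2, STEP));
    }
  }
}
```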



Experimentation Cont’d:

We tried to create our motion/colour tracked output in a variety of ways:

  • Edge detection
  • Pixellate
  • Particle field (examples we found were way over our heads but that didn’t stop us from spending a few days tinkering with it!)
  • A faux heat mapping effect
  • Optical Flow

Video of particle effect responding to magenta on the webcam

Particle 1
Particle 2
Particle 3
Experimenting with webcam on the ipad


We got the particle effects working and got them fullscreen (windowWidth, windowHeight, etc.), broke the particle effect, then fixed it. The original particle code was a different version than the JavaScript file we had downloaded. One technical issue was how to hide the webcam window while still drawing the data from it.
To solve this, we tried the following:

  • Getting rid of the window entirely (didn't work)
  • Hiding it through display:none (didn't work)
  • Minimizing the window's size so it wouldn't appear on screen (didn't work)

We eventually realized the #video and #canvas elements had to be on the screen in order for the colour tracking to work. Solution: change the opacity of #video and #canvas so they don't appear on screen but still exist there invisibly!
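The fix, as we applied it, boils down to something like this (the element ids `#video` and `#canvas` come from our own markup):

```javascript
// Keeps an element mounted (so tracking.js can read from it) but invisible.
// Unlike display: none, opacity: 0 leaves the element rendered and readable.
function hideButKeepTracking(el) {
  el.style.opacity = '0';
  el.style.position = 'absolute'; // also take it out of the page flow
  return el;
}

// Usage in the page, once the DOM is ready:
// hideButKeepTracking(document.querySelector('#video'));
// hideButKeepTracking(document.querySelector('#canvas'));
```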

Once we got the code working locally and appearing roughly in the layout we wanted, today was mainly an exploration day. We messed around with the colour of the circles, the shape/size, how many in a cluster, etc. One of the biggest issues we are finding is that we can't figure out how to change the specific colour the code tracks. What if we wanted different colours to be tracked? Ideally, we would come up with at least four different colours for the website(s) to track, and load the different folders onto webspace. We'd have four almost identical websites, which would open up the opportunity to create a sort of matching game: find which site responds to your colour!

Our sense is that the relevant code is in the tracking.min.js file. There is a colour-tracking entry associated with 'Magenta', but there are no RGB or hex values, just the word 'Magenta.'
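If that reading is right, tracking.js's registerColor API would let us define our own named colours as r,g,b predicates, the same way 'magenta' is built in. A hedged sketch for a custom team colour (the thresholds are our guesses and would need tuning):

```javascript
// Predicate: does this pixel count as "team green"? Thresholds are guesses.
function isTeamGreen(r, g, b) {
  return r < 100 && g > 150 && b < 100;
}

function setupTracker() {
  // Register the custom colour under a name, alongside the built-in magenta.
  tracking.ColorTracker.registerColor('teamGreen', isTeamGreen);
  const tracker = new tracking.ColorTracker(['magenta', 'teamGreen']);

  tracker.on('track', function (event) {
    // event.data is a list of matched rectangles: {x, y, width, height, color}
    event.data.forEach(rect => console.log(rect.color, rect.x, rect.y));
  });

  tracking.track('#video', tracker, { camera: true });
}
```

With four such predicates, each of the four near-identical sites could register and track a different team colour.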

Another potential extension of this project would be to create or affect sound along with the particles – or at least call up an appropriate sound effect to go along with the interaction!


After our first meeting with Kate, she suggested that what we were trying to do was A. Computer Vision and B. HARD.

We then looked at Kyle McDonald’s work and additional tutorials from Coding Rainbow that focused more on painting with pixels and Computer Vision:

Nick provided a simpler sample for creating a colour track and response system


  • A lot of the resources we found were in processing as opposed to p5.js
  • We found a working demo of a particle effect, but it was so complex that even editing it was a steep learning curve, and we had trouble really making it our own. It also turned out to be a time sink, since in the end Nick was able to point us towards a much simpler baseline as a (re)starting point.
  • Our lack of JavaScript understanding held us back in terms of knowing how to show our effects without showing the captured video, and how to layer visuals (PNGs vs. video, etc.).

With our new, dumbed-down approach we started again. While keeping the idea of capturing video via the webcam, our new approach involved drawing simple shapes in response to motion. We also started to consider more analog ways of interacting with RGBelieve it!, like a game involving sticky notes.

One of our game layout sketches


Sticky notes to be placed on each user/player
Sana in action





Colour tracking via the webcam and tracking.js: the camera detects the colour, then draws a rectangle whose size and x, y coordinates change based on the motion captured via the webcam. Basically, you can put a magenta sticky note on your forehead and dance like a maniac in front of the webcam, and a shape will be drawn that "dances" with you. If you thought it was hip to be square before, now it's righteous to be a rectangle!
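The draw step described above, sketched in p5.js terms: tracking.js reports each matched region as a rectangle (x, y, width, height), which the sketch mirrors on the canvas. Wiring the two together this way is our sketch, not the project's exact code:

```javascript
let detections = [];

// Called by tracking.js on every 'track' event; each entry in event.data
// is a matched rectangle: { x, y, width, height, color }.
function onTrack(event) {
  detections = event.data;
}

function draw() {
  background(255);
  fill(255, 0, 255, 150); // translucent magenta
  for (const d of detections) {
    rect(d.x, d.y, d.width, d.height); // the shape "dances" with the note
  }
}
```

The handler would be hooked up with `tracker.on('track', onTrack)` on the tracker shown earlier in the post.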

Shapes move along with tracked blue ellipses.



Working from some examples, Kristy was able to get more complicated shapes with their own motion to show up on screen when magenta was detected by the webcam, but couldn't get any further tracking operable.


Sana was able to integrate the code from the complicated shapes (which had their own separate movement) so that they move along with our simple tracked blue object.

Videos of Drawing/Tracking & Motion:

Working on making the “start” page responsive
Brainstorming a name

Game Logistics Setup:

To prepare for the in-class gameplay and to save time, we assigned each participant a team and a URL to load onto their laptops in advance. We listed this on our Digital Futures Facebook page and came early to load the browsers manually, to ensure a timely setup for the presentation.

These are our groups (as chosen by this random list organizer)


Final name and logo



Welcome to RGBelieve It!

These are your teams:



Prof. Kate
Roxanne H.
Prof. Nick
Roxanne B.



Please locate the sticky note pads that correspond to your team’s colour.


Each team member will place 2 sticky notes onto themselves; this can be on their shoulder, knee, etc. Choose the sticky note location wisely, as once you have placed the 2 sticky notes on your body, the locations are final!


You will also carry extra sticky notes, but keep these in a pocket, or concealed in some way!


Once everyone has placed their sticky notes on themselves, the game will begin once everyone hits START. Make sure to allow RGBelieve It! to access your webcam!


Walk by each screen to see what happens. If the screen reacts in some way, that means you have found a screen that matches your team colour. Mark the screen with a sticky note of your team colour.


The first team to locate 6 or more screens that react to their colour wins!






URL list, randomized and up to 20 (we made sure to have at least 6 of each colour): savaya, feng, sana, kristy, kylie, roxanne, quinn, ramona, emilia, chris, max, jad, emma, sean, tommy, roxanne b., dikla, karo
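For the record, the random grouping that a "random list organizer" performs can be reproduced with a Fisher–Yates shuffle plus a round-robin deal (a generic sketch, not the actual tool we used):

```javascript
// Fisher-Yates shuffle: returns a new, randomly ordered copy of the list.
function shuffle(list) {
  const a = list.slice();
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Deals shuffled names round-robin into the requested number of teams,
// so team sizes differ by at most one.
function dealTeams(names, teamCount) {
  const teams = Array.from({ length: teamCount }, () => []);
  shuffle(names).forEach((name, i) => teams[i % teamCount].push(name));
  return teams;
}
```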

In Class Demo:


Chaos ensues



We see RGBelieve it! as the first phase of research for a much larger project involving motion capture and art creation via gesture. Much of our inspiration, in terms of possibilities, came from artists working with motion capture and computer vision. Caitlin Sikora's interactive web application Self Portrait (2015) “…uses motion capture data and Markov models to generate new movement data in real time.”

We really connected with the way her animations responded to the user and the way user data influenced the output. One of her other projects, I need-le you, baby, used p5.js and webcam pixel data, which made us think that many great interactions were possible using these same technologies. And though that's true, Sikora's knowledge and ability in this area are leaps and bounds ahead of ours. But from an inspiration perspective, her work was invaluable.


Moving forward, we see a large-screen art installation as the likely final output, wherein the user stands in front of a screen larger than themselves, and their gestures are captured by a camera, then rendered in real time on screen in some artistic visual output. Whether the motion is captured by placing magenta mittens and kneepads, for example, on the user, or via other motion-capture techniques, will require more thought and research.


McCarthy, L., Reas, C., & Fry, B. (2015). Make: Getting Started with p5.js. (1st ed.). San Francisco, CA: Maker Media

P5.Js. (n.d.). References. Retrieved October 17, 2017, from

Camera and Video Control with HTML5

Motion Detection with JavaScript