Category: Experiment 2



Afaq Ahmed Karadia and Ginger Gio

Project Title:

Hexade Orchestra

Project description:


The project was about creating an interaction between screens and the user, and most importantly about the way this interaction creates an interactive environment and how a connection can be made between these interactions and the users. With this in mind, we came up with different ideas for how to formulate that interaction across 20 screens. We first decided to do something with speech and typography, but unfortunately a few devices were not capable of accepting speech input.


Then we decided to do something with sound and gestures: how a user can interact with sound using gestures. During ideation, we researched music and the history of music, how different musical instruments evolved in different regions of the world, and the different genres of music.



While researching music and its evolution, we came across the point that nowadays electronic music is taking the place of classical music. During our research we found an article called

The Feeling’s Mutual: Why classical and electronic music aren’t the polar opposites you might expect

which gives a pretty good idea of the differences between the two genres and why they are different from each other. What we need nowadays is a platform where the new generation can identify the cultural values of music; after all, electronic music also derives from classical music.

In classical music, and back in the ancient Greek period, the orchestra was at its peak. But what is an orchestra?

The word “orchestra” was used to describe the place where musicians and dancers performed in ancient Greece. The orchestra, or symphony orchestra, is generally defined as an ensemble mainly composed of bowed stringed instruments, percussion, wind and brass instruments. Often, the orchestra is composed of 100 musicians and may be accompanied by a chorus or be purely instrumental. In today’s setting, the word “orchestra” not only pertains to a group of musicians but also to the main floor of a theater.

An example of early music pieces for modern day symphony orchestras is evident in the works of Claudio Monteverdi, specifically his opera Orfeo.


Musical Instruments of the Orchestra

strings (cellos, double bass, violas, first and second violins)

brass (trumpets, horns)

woodwinds (bassoons, clarinets, oboes, flutes)

percussion (timpani)


During the 19th century, more instruments were added to the orchestra, including the trombone and tuba. Some composers created pieces that needed very large orchestras. However, in the late 20th century, composers opted for smaller ensembles such as chamber orchestras.

The Conductor

Composers play many different roles: they can be performers, songwriters, educators or conductors. Conducting is more than just waving a baton with a flourish. A conductor’s job may look easy, but in reality it is one of the most demanding and highly competitive fields in music. Here are several resources that explore the role of conductors, as well as profiles of well-respected conductors in history.


Keeping the idea of the conductor in mind, something clicked: what if we came up with an electronic orchestra in which, with the help of a conductor, all 20 screens interact with each other? During our research we came up with a couple of ideas for an electronic, screen-based orchestra. We also found another example of an electronic orchestra:


Here they are using instruments, but they are also generating sound with the help of MIDI controllers and other electronic music instruments. We also found a couple of examples of interactive orchestras.


Project Process :

Nowadays we can find lots of examples of music apps and music software, most of which are based on modules. These modules are very simple graphical representations of music instruments, mostly used for electronic music production.



The real challenge for us was to create an orchestra in its real meaning. Along the way we came across a case study:

The Stanford Mobile Phone Orchestra (MoPhO) is a first-of-its-kind ensemble that explores social music-making using mobile devices (e.g., iPhones and iPads). Far beyond ring-tones, MoPhO’s interactive social-musical works and research take advantage of the unique technological capabilities of today’s hardware and software, transforming multi-touch screens, built-in accelerometers, gyroscopes, microphones, cameras, GPS, data networks, and computation into powerful, mobile, and personal musical instruments and experiences. MoPhO was instantiated in 2007 at CCRMA, Stanford University, by faculty member and director Ge Wang, Deutsche Telekom senior research scientist (now faculty at University of Michigan, co-director 2007-2009) Georg Essl, and visiting CCRMA researcher Henri Penttinen, with CCRMA Artistic Coordinator Chryssie Nanou, 2007-2008 MA/MST students, and generous initial support from Nokia. MoPhO performed its first public concert in January 2008 and continues to serve as a research platform for social, mobile music.


After doing research and lots of hard work, we came up with the idea of presenting the orchestral instruments in the form of illustrations, so that people can understand what they are playing and what the instrument looks like, since most of us are not familiar with most of the instruments. In this way, people gain a deeper knowledge of the instrument they are playing. The way these instruments are played was also a real challenge for us, so we introduced gestures into our app: playing the flute or the rubab by shaking the phone requires a gesture, and that is the most amazing part of our app. During user testing we found it interesting that gesture-based interactivity is the most powerful thing in our app.
You can find these examples here, with video reference:

code :

Regarding the code, we started with a speech library, but unfortunately some of the phones are not compatible with speech recognition technology, so we switched to the idea of doing something with sound and gestures.
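A shake gesture like the one used for the flute and rubab can be detected from the accelerometer values that p5.js exposes on mobile browsers (accelerationX/Y/Z). The sketch below is an illustration of that idea, not our actual code; the function name and the threshold value are assumptions to be tuned per device.

```javascript
const SHAKE_THRESHOLD = 30; // assumed value; tune per device sensitivity

let prev = { x: 0, y: 0, z: 0 };

// Returns true when the total change in acceleration since the last
// sample exceeds the threshold (the same idea p5.js uses internally
// for its deviceShaken() callback).
function isShake(acc) {
  const delta =
    Math.abs(acc.x - prev.x) +
    Math.abs(acc.y - prev.y) +
    Math.abs(acc.z - prev.z);
  prev = { x: acc.x, y: acc.y, z: acc.z };
  return delta > SHAKE_THRESHOLD;
}

// In the p5.js draw() loop this would be called with
// { x: accelerationX, y: accelerationY, z: accelerationZ },
// and a true result would trigger the instrument's sound.
```

Because the check compares consecutive samples rather than absolute values, simply holding the phone at an angle does not trigger a note; only a rapid movement does.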





After doing this experiment we learned lots of new things about interactivity: more about how users interact with an environment and how they make connections between those interactions, which is the best part of any interactive activity.


Experiment 2 (Multiscreen) – What Is Next?


What Is Next ?

by: Mahsa Karimi & Jeffin Philip
Creation and Computation
October 30, 2016


Puzzle 1  Puzzle 2  Puzzle 3  Puzzle 4  Puzzle 5
Puzzle 6  Puzzle 7  Puzzle 8  Puzzle 9  Puzzle 10
Puzzle 11  Puzzle 12  Puzzle 13  Puzzle 14  Puzzle 15
Puzzle 16  Puzzle 17  Puzzle 18  Puzzle 19  Puzzle 20




WHAT IS NEXT? is a memory game consisting of a series of puzzles; solving these mini puzzles leads to finishing the game. To start, a player chooses a cell phone and solves the corresponding puzzle. Once the puzzle is solved, the player gets to a screen that states a fun fact about Canada with a complementing illustration. The same player then chooses a second cell phone and solves another puzzle. If the second fact and illustration match the first, both phones are eliminated and the player gets one point. If they do not match, no phone is eliminated and the player has to memorize the placement of each illustration for future rounds.


  1. All players were given a card with a number on top, which corresponded to the location of their phone in the game.
  2. Each card also held a link to the website for that specific mini puzzle and illustration.
  3. All phones are laid on a table according to the placement chart and the numbers on the given cards.
  4. Player 1 chooses a cellphone and solves its puzzle.
  5. Once the puzzle is solved, the illustration and a fun fact appear on the screen.
  6. The same player chooses another cellphone and repeats the steps.
  7. If the illustrations and the facts on both phones match, those phones are eliminated and that player scores one point.
  8. If they do not match, the player has to memorize the illustrations and the placement of the phones for future rounds.
  9. The other players repeat the same steps.
  10. The player with the most points wins the game.
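The matching rules above reduce to a small piece of logic. This is a minimal sketch, assuming each phone's solved puzzle reveals a fact id; the function names and the example fact ids are illustrative, not the project's actual code.

```javascript
// Create a game state from the list of fact ids, one per phone.
function makeGame(factIds) {
  return { facts: factIds.slice(), eliminated: new Set(), scores: {} };
}

// A player picks two phones (by index). If the revealed facts match,
// both phones are eliminated and the player scores a point.
function pickPair(game, player, a, b) {
  const match = game.facts[a] === game.facts[b];
  if (match) {
    game.eliminated.add(a);
    game.eliminated.add(b);
    game.scores[player] = (game.scores[player] || 0) + 1;
  }
  return match;
}
```

With 5 facts used 4 times across 20 screens, each fact id would simply appear four times in the array passed to `makeGame`.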





Our group was very interested in using this opportunity to come up with an interactive activity. One of the ideas we had prior to coming up with the “What Is Next?” puzzle and memory game was to use the 20 screens to showcase a story that starts on one screen and runs through the rest of the screens to complete the narration (video below). We were not able to continue with this idea since it required networking the screens, which was out of the scope of the project, but it is an idea to hold onto and execute in the future. Another idea was to divide each cellphone screen into eight smaller squares (2 columns and 4 rows). Each square represented a pixel whose colour could be changed by tapping on it; once the 20 phones were all put together, participants could use the gridded devices to create pixel art (picture below). Finally, we decided to let go of this idea as well and brainstorm further in order to come up with a more engaging and interactive activity.


Researching further into puzzles, we both got interested in producing a series of mini puzzles that, when all solved, would lead to solving a bigger game. The narrative of the game is built around fun facts about Canada. We agreed on designing 10 mini puzzles and using each puzzle twice to cover all 20 screens. Later in the coding process, due to the shortage of time and the scope of the project, we decided to decrease the number of mini puzzles to 5 and use each of them 4 times to cover all the screens. Initially we looked for photographs online that could support our fun facts about Canada, but later we decided to illustrate the images ourselves so that the entire game had a more cohesive interface.

The first iteration of our design had 20 screens with simple gestures to unlock the images. We identified 10 interesting facts about Canada. As the player clears a mini-game, a photograph explaining one of these facts appears on the screen. The gestures included actions like swipe, double tap etc.

To make the game more engaging, we decided to replace the simple gestures with mini-games and puzzles. We studied the possibilities of the p5.js library and came up with a few ideas for the mini-games. The aim was to create 10 different puzzles, so that each game repeats only once in the 20-screen experience. However, because of the time constraint, we went with 5 mini-games. This helped us refine the final visuals and interactivity of the games. To create a cohesive visual appeal throughout the game, the final photographs of facts were replaced with illustrated fact cards.





1. This project was an opportunity for us to explore different ways of coding. For example, in the puzzle where 3 strips of colour had to be tapped so that all 3 strips end up paused on the same colour, we made one class and created three objects of that class. This way of coding helped us reduce redundancy.

2. Once all the coding was done, we had to test each puzzle on different phones. The games were playable on all Apple devices (iPad, iPhone). The interaction was smoother and faster on newer-generation iPhones (iPhone 7, 6s, 6 and 5s) than on older generations (iPhone 5, 5c and 4).
The interaction followed the same pattern on Android devices: smoother on newer-generation phones than on older ones.

3. Depending on the processor of each device, the animations can run faster or slower. In order to solve this issue we need to use time functions rather than simple for loops.
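Both points can be sketched together in plain JavaScript: one class instantiated three times for the colour-strip puzzle, with the colour derived from elapsed milliseconds (as p5.js's millis() would supply) instead of a per-frame counter, so fast and slow processors stay in step. The class name, methods, cycle length, and per-strip offsets are illustrative assumptions, not the project's actual code.

```javascript
const CYCLE_MS = 500; // ms per colour step (illustrative value)

class Strip {
  constructor(palette, offsetMs) {
    this.palette = palette;
    this.offsetMs = offsetMs; // stagger so the three strips differ
    this.paused = false;
    this.pausedIndex = 0;
  }
  // Colour is derived from elapsed time, not a frame counter, so a
  // slow phone and a fast phone show the same colour at the same moment.
  colourAt(nowMs) {
    if (this.paused) return this.palette[this.pausedIndex];
    return this.palette[Math.floor((nowMs + this.offsetMs) / CYCLE_MS) % this.palette.length];
  }
  // Tapping freezes the strip on whatever colour it is showing now.
  tap(nowMs) {
    this.pausedIndex = Math.floor((nowMs + this.offsetMs) / CYCLE_MS) % this.palette.length;
    this.paused = true;
  }
}

const palette = ['red', 'green', 'blue'];
const strips = [new Strip(palette, 0), new Strip(palette, 500), new Strip(palette, 1000)];

// The puzzle is solved when all three strips are paused on the same colour.
function solved(strips, nowMs) {
  return strips.every(s => s.paused) &&
         strips.every(s => s.colourAt(nowMs) === strips[0].colourAt(nowMs));
}
```

In a p5.js sketch, `draw()` would call `colourAt(millis())` for each strip and `touchStarted()` would call `tap(millis())` on the tapped strip.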



  1. To add more puzzles so that none would be repeated throughout the game
  2. It would be more engaging to have fewer than 20 players (the number of participants in the class).
  3. Play with different inputs for the puzzles, such as audio and camera.



  1. Pop the Lock application
  2. Unlock Me application

Don’t Spill


By: Shreeya Tyagi, Sara Gazzaz

A mobile water balance game designed to raise awareness on the issue of water scarcity in rural parts of the world.

Project Context:

Balancing water in a container is a common practice in many parts of the world. Yet nearly 1 in 10 people worldwide, twice the population of the United States, live without clean water. The majority live in isolated rural areas and spend hours every day walking to collect water for their families. Not only does walking for water keep kids out of school or take up time that parents could be using to earn money, but the water often carries diseases that can make everyone sick. Access to clean water means education, income and health, especially for women and kids.

In Africa alone, women spend 40 billion hours a year walking for water.

Access to clean water gives communities more time to grow food, earn an income, and go to school — all of which fight poverty.


Project Description:

Based on the above, we wanted to create a playful game that allows 20 users to take part in balancing water on their heads. We wanted our project to create an experience that lets users step into the shoes of people who carry water on their heads for miles in drought-struck areas of the world.

We did this by creating a body-balancing game in which each participant gets a water bottle with instructions to open a link on their phone. They are told to each take a headpiece based on their preferred size (small, medium, or large) and insert their phone horizontally into the sleeve on the headpiece.
The link opens moving water on a screen against a background of land affected by droughts. Participants are asked to try walking while balancing the water on their heads, taking care not to spill it. Each time they spill, the water level decreases accordingly. We incorporated sound to enhance the visual effects.
They are scored on how much water they have lost by checking the number of droplets left at the top left of the screen. They can then refer to the water bottle given to them to know the percentage of water lost.


Presentation of Prototype at Project Critique:











Phase I – Gyroscope Data

It was difficult to find code for accessing the gyroscope through p5.js and to incorporate it to create the movement of a virtual container.

The image above shows our initial test. We were testing the accelerometer to see how it responds to tilting the phone sideways as well as back and forth. After this test we adjusted the tilt so it would be more sensitive and cause more spilling, since we are dealing with water.
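The tilt handling can be illustrated with two small functions. p5.js exposes the device's rotation as rotationX/rotationY in degrees; the sensitivity factor and spill threshold below are made-up values standing in for the ones we tuned by testing.

```javascript
const SENSITIVITY = 2.5;  // higher = water reacts to smaller tilts (assumed value)
const SPILL_ANGLE = 20;   // degrees of tilt that counts as a spill (assumed value)

// Map a device tilt (degrees) to a horizontal offset of the water surface.
function waterOffset(tiltDegrees) {
  return tiltDegrees * SENSITIVITY;
}

// A spill is registered when the tilt in either axis passes the threshold.
function isSpill(rotX, rotY) {
  return Math.abs(rotX) > SPILL_ANGLE || Math.abs(rotY) > SPILL_ANGLE;
}

// In a p5.js draw() loop these would be fed rotationX and rotationY,
// shifting the drawn water and decrementing the droplet score on a spill.
```

Raising SENSITIVITY makes the water slosh visibly at small tilts, which is the adjustment described above.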


Phase II – Introducing Sine waves to a container 

Sine waves are regular 2D waves, and we used them to simulate a moving fluid in our virtual container. We had never used sine waves before, and it was a challenge to use them to create a water-like effect.
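A sine-wave water surface is a single function of position and time; sampling it across the width of the screen and filling below the curve produces the moving-fluid effect. The amplitude and wavelength here are illustrative, not the project's actual values.

```javascript
const AMPLITUDE = 20;    // wave height in pixels (assumed value)
const WAVELENGTH = 120;  // pixels per full wave (assumed value)

// Height of the water surface at horizontal position x at time t
// (seconds). Drawing this for every x across the screen and filling
// below it gives the fluid effect; advancing t animates the wave.
function surfaceY(x, t, baseline) {
  const k = (2 * Math.PI) / WAVELENGTH; // spatial frequency
  return baseline + AMPLITUDE * Math.sin(k * x + t);
}
```

In a p5.js sketch, `draw()` would loop x from 0 to width, calling `surfaceY(x, millis() / 1000, waterLevel)` and drawing a vertex at each point.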

Phase III – Using Sine waves inside a container

It was a challenge to make the wave and the container move together. In our second test we started out by constraining the water in a container instead of having it fill the whole screen. It worked out well, but we had to change the orientation to horizontal and fill the screen with water instead, so it would be more visible to participants; when tested, the container was too small to understand.
Another problem we faced in this trial was that water was too complex to deal with because of its many attributes: water particles are bonded to other particles in a “class”, which required more complex coding than we had learnt. Since we had no background in coding visual water spillage, we thought of using GIFs that would appear when someone spilled water due to imbalance, but we crossed that out because it didn’t allow users to feel the flow and movement of water.




Phase IV – Using the phone as a container

The fluid effect is a 2D sine-wave simulation; it would have been interesting to do a 3D simulation, and this would be our next challenge in developing the game idea further.

Our third test was trying to fill the screen with water and draw waves using small circles that make the lines flow across the screen.




Phase V – Using the phone as a container space for the fluid simulation

Working with various platforms was a huge challenge. Our sketch worked fine on iOS but gave trouble on various Android devices.

Below is an image of the first container, which we planned to photograph to show spillage, as visualised in our GIF plan. The video is our reference for how we wanted the spill to look.


We were trying to take videos of real spills from this water container, which we coloured with blue paint.
In this picture we also used a ground made out of clay, painted and carved to look like land that has been through droughts.



From the start we knew we wanted to use the act of balancing in some way, so we decided to go for a headpiece that users insert the phone into.
At first we brainstormed a vertical sleeve that would be attached to the headpiece using a strong clip-like structure, but then we changed to a horizontal layout because it allowed us to use more screen space; since we are using mobile screens, we needed to make it as visible as possible for participants to see what’s happening on their fellow participants’ screens.
Before deciding to have participants interact by seeing the water-balancing act on their fellow participants rather than on themselves, we wrestled with adjusting the phone’s placement so that users could see how well they were balancing the water. We brainstormed the idea of making the phone face the user by extending a piece from the headpiece towards the front of the face, holding a mirror that reflects the phone on their forehead using clips. Another option was to hang the phone itself in a sleeve in front of the face, level with the eyes. See photo 1.0.

We eliminated the mirror idea, and the whole idea of users seeing how they are balancing the water, because it was not our goal. We wanted the goal of this project to be about how people who balance water on their heads don’t have the ability to see it; the main reason was also for the 20 screens and users to interact with each other and guide each other on how well they are balancing the water.

PHOTO 1.0

Photo reference for how we would like our headpiece to look.

Testing the fit of the headpiece, which turned out to be irritating and caused discomfort due to the hard surface of the phone on the forehead.


Using a sponge to make it more comfortable for the user by creating a cushion for the phone to rest on their forehead.
Attaching an elastic band to the plastic sleeve using staples for strength.


Testing the water level from high to low.

Water labels designed with instructions to open the given link on participants’ phones.
Process of cutting and pasting score droplets representing the water percentage of the actual bottle.





Orlando Bascunan, April Xie

Project Description

HUNT is a smartphone-based physical game that puts a twist on classic Tag. The game can be played with 4 or more players in an open space. Using easy-to-make armband wearables, players turn their phones into game piece and dashboard.

The program randomizes each user’s role as either deer or wolf. Wolves must eat deer by tapping on the screens attached to their arms. Eaten deer reincarnate as wolves; the last deer standing wins the game.

Surprising and funny interactions emerge between players, as they can move only when their screen says so, through programmed intervals of “stop” and “go”.




Through exploration and brainstorming, we chose the following design concepts for our game:

  1. Screen as promoting person-person interaction (vs. person-screen)
  2. Experiment with competition vs collaboration
  3. Strategy-building with restricted movement
  4. Observing rule manipulation in a screen-based and physical hybrid
  5. Encourage humour and body contortions to protect the game token
  6. Emotions: adrenaline, competition, frustration, agitation, humour, unexpected social dynamics

Process Ideation

Our game was inspired by many physical and mystery games we played as kids:

  1. Mystery games – the “bad guy” is a secret, and the group must figure out who it is before he/she “kills” everyone
  2. Guess Who?
  3. Clue
  4. Tag, Blind tag
  5. Freeze tag – restricting movement
  6. Octopus
    The ‘octopus’ stands in the middle of the room, while the rest of the group stands against a wall. When the octopus shouts “Octopus”, the group runs to the other end of the room, trying not to get caught.

Through ideation, we decided to create a game of “tag” that involved:

  • Inputs: phone accelerometer
  • Outputs: screen instructions

Ideas that were not used in final concept

  • Integrating sounds into the game for stronger connection between screen and physical world
  • Use accelerometer for added difficulty features, e.g. needing to keep arm as still as possible  
  • Combining deer and wolf movement patterns in prime numbers
  • Sound-input to “recharge” movement allowance (e.g. yelling into microphone)
  • Timers / “health bar” of movement allowance in the form of waning moons
  • Having all deer move in sync
  • Mood/theme: “frog” vs. “fly” – warrior flies??

Constraints/discoveries through iteration

  • Graphics work slowly in browsers
  • Full game experience can only be 5-7 mins. Minimize instructions, maximize intuitive design through staging
  • Phone models will exhibit varying speeds in browsers – in sync moving not possible
  • Sound input not working in p5.js and no library for connecting vibration through web browser
  • Keep graphics simple and utilitarian – focus of interaction is in physical world, not on screen.

Movement design

  • We bodystormed various time intervals and movement constraints to decide the best set of rules to encourage narrow escapes and close chases
    • Deer: we wanted them to be constrained to time intervals – increases difficulty, as it takes away decision-making for when to move (and desperation when wolf is close).
    • Wolves: we wanted to give them choice of movement, where movement is considered “ammunition”, depleted quickly and reloaded quickly.
  • We decided to use:
    • Deer: two seconds GO, two seconds STOP
    • Wolf: 1.5 seconds POUNCING, 1.5 seconds STOP (to recharge pounce), POUNCE used whenever player chose strategically.

Gameworlds as creative medium for the player

  • One of the dynamics we were most interested in observing was player interaction with the gameworld we created – a hybrid of screen-based and physical games.
  • Screen-based and physical games typically have different dynamics between gameworld and player
    • Screen-based: No “rules”, but rather “physical laws” that players can creatively hack/ take advantage of loopholes or quirks in program.
    • Physical-based: less tolerance for “cheating” or “hacking” the rules.
  • We wanted to make “physical laws” for HUNT, the screen-based experience blending into the physical realm. We hypothesized it would make players feel like avatars in the real world, and we were excited to see how creative players would be with the gameworld we gave them.




Staging: We rented RHA Room 301 for the class critique – an open space with plenty of room to run around.

Aesthetics – visual and audio

  • We used a neon colour palette to better grab player’s visual attention
  • During the exhibit round on Oct 28, we played the song “Warriors” from the movie soundtrack for Hero – an ominous, battle-cry of a song. This, juxtaposed with the playful, bright, child-like graphics of HUNT created a competitive yet humorous environment for players.


The code is divided into ‘modes’ that represent the states the game is currently in.

It starts at ‘home’, an initial screen with a day and night animation that invites the user to tap to start. After the user taps the screen, he/she is assigned deer or wolf mode. In the final version there is a 20% chance of being assigned as a wolf.

Wolf mode waits for the user to exceed a specific amount of movement registered by the accelerometer to start the moving phase, which ends after a brief time and is followed by the stop phase, which punishes the user for moving by restarting the waiting time.

Deer mode is purely time-based and is synchronized with the global clock, so if played on high-end phones, the whole ‘herd’ moves at the same time.

Tapping on a deer’s phone triggers a dying animation, after which that player continues as a wolf.
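The role assignment and the clock-synchronized deer phase can be sketched in plain JavaScript; the function names, the injected random source, and the exact intervals are illustrative rather than the project's actual code.

```javascript
const DEER_INTERVAL_MS = 2000; // two seconds GO, two seconds STOP

// Role assignment on the first tap: a 20% chance of wolf.
// rand is injected (e.g. Math.random) so the logic is testable.
function assignRole(rand) {
  return rand() < 0.2 ? 'wolf' : 'deer';
}

// Deer phase derived from wall-clock time: even two-second windows
// are GO, odd windows are STOP, so every deer phone flips in sync
// without any networking (assuming accurate device clocks).
function deerPhase(nowMs) {
  return Math.floor(nowMs / DEER_INTERVAL_MS) % 2 === 0 ? 'GO' : 'STOP';
}
```

Because the phase depends only on the shared clock, phones never need to talk to each other; clock drift and slow rendering on older devices are what break the herd's synchronization.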


Armband Wearable

  • We chose the arm for the placement of the phone so that it could be:
    • Accessed and read as a dashboard
    • Nimbly and interestingly protected from wolves
      First iteration: cling wrap

      The cling wrap did its job sufficiently (kept phone in place, screen could still be tapped and swiped).
      We then looked for a more robust wearable solution.

      Second iteration: ziplock bags and stretch utility straps with velcro

      One side of the ziplock bag was reinforced with tape, then had slots cut into it to thread the utility strap through.
      The utility strap is wrapped around the player’s wrist twice, then secured with velcro.

Word Choice

  • Because our game couldn’t be graphics-heavy, a lot of importance was placed on word choice for the go/stop mechanism. We needed simple, colourful words to make the desired actions intuitive.
  • We also thought about the types of movement different verbs would evoke
  • WOLF: desire for words to give player
    • Sense of autonomy
    • Sense that only small movement is allowed at a time
    • Understanding that movement had to be recharged and then used, like ammunition
    • We decided to use
      “Ready” – ability to move is now recharged
      “Pouncing” – player is moving and depleting movement allowance
      “Stop” – player can’t move while move is being re-charged
  • DEER: desire for words to give player
    • Sense of passivity and lack of autonomy over movement
    • We decided to use
      “Go” and “Stop”



We launched HUNT at the Creation and Computation Experiment 2 Critique on Oct 28, 2016. The video features highlights from the round of HUNT played by classmates.

Observations during gameplay

  • During the exhibit round, a classmate creatively placed her wearable on her leg
  • Two players used their height to their advantage and kept arms in the air, phones far from reach
  • One player hid behind projector screen
  • Interesting use of arms
  • Very heightened fear arousal (e.g. made noise behind a player, she jumped and screamed)
  • The element of mystery as to who the wolves were heightened paranoia
  • Phone became extended part of body – players were very startled when someone would sneak up to touch their phone. Although there is no intended physical contact in the game, physicality extended into the phone.
  • Players struggled to coordinate movements in the physical world and receive feedback from the screen – too much looking up and down. Some even forgot to look at their screens after a while. It was an interesting experiment to see how much people would adhere to the screens or break the rules.

Takeaways for future iterations

  • Improve functionality of armband wearable – customizing fit for phone pocket; using adjustable straps instead of velcro
  • Investigate ways to communicate actions through haptic feedback (vibration) and/or sound
  • Investigate other playful iterations of the game – blindfolded wolf, game played in the dark
  • Play with different movement constraints between wolf and deer


Broken Space

Team Members

Katie Micak 

Afrooz Samaei









Broken Space is an interactive installation designed to display footage of the galaxy on 20 laptop screens placed on a grid. Each laptop randomly displays one piece of the final video. Users have to put the pieces together by mouse clicking in order to form the final image. Once the first frame is properly composed, the users press the mouse wheel to make the video play.


Design Process 

Afrooz comes from a background in engineering, and Katie comes from an art making field. When we came together we found that communicating our ideas to each other was difficult at first, since we were viewing the project from different perspectives informed by our histories in image making.

In order to arrive at a concept, we described how we viewed screens and interactions, what we should consider about images in terms of abstraction, and how we could look at building the experience.

After a few drawings and conversations, we finally began showing each other images we found inspiring. Katie showed Nam June Paik’s video wall sculptures, and Afrooz showed an interactive iPad work exhibited at a Japan expo in 2014. As it turned out, we were generally speaking about creating the same type of experience: one that involved a large scale and images that would move over multiple screens, like a painting.

Our first rendition of this experiment came in the form of a wall of laptops showing an abstract image. This image would be altered through effects when a viewer passed by; we would do this by accessing the webcam. After an investigation into coding, and further conversation, we decided we could distil our idea into a simpler format while still achieving the same impact.


Here are the core elements we chose to investigate:

-Scale: a wall of laptops showing the video, or using the laptops as a sculptural material that would inform the work.

-Interaction: since the screens would not be networked, we would have to keep the interaction to one screen only. We decided that the interaction would then be among the users: they would have to collaborate.

-Abstraction: we both knew we would be working with a number of variables that would cause any image we chose to appear ‘less than perfect’, and our tactic for dealing with this was to choose a more flexible image. It would have to create a larger piece once all of the screens showed it at once.

-Physical Space: We wanted to build a space that would provide a center of focus.

-Concept: We wanted our concept to reflect the technology we were using, and capitalize on the limitations of the project.


Realization of concept

After we decided that we would be building a grid to hold and display our image, we began searching for abstract videos that would show well across many screens, even with breaks between monitors/ shelves/ etc.

We began within the visual language of technology, or data visualization, since it belongs entirely to computers. We shared images of energy scans, brain scans, and colour fields. Katie began looking at colour as it exists on computers and in nature, and was drawn to a suggestion on her YouTube page. It was of space. Space it was! A simple enough concept: we would use a space video as a ‘painting’. It was effective because it has a lot of colour and movement, and these images incorporated elements of design.

When constructing the puzzle we went through a number of iterations. First, we tried the opening image of our video, a nebula. This would not work, because viewers would have no orientation for how to construct the image; it was too abstract. Then we tried an astronaut and an illustration of a planet: also too ambiguous.


Finally, we decided on a beautiful image of a planet, with the text ‘Broken Space’ as a point of orientation. It would be simpler for viewers working collaboratively to decipher and put together.


The final video is cut into twenty pieces. The pieces are stored in an array. Once the web page loads, a video is randomly displayed on the screen. The users go through the images (first frames of the videos) by LEFT clicking (forward) and RIGHT clicking (backward). Clicking the mouse wheel (or pressing the space bar, in case a mouse is not available) plays the video. For simplicity and a better display, the Enter key toggles the browser to fullscreen.
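The interaction above can be sketched roughly as follows. The wrap-around index helpers are the core logic; the p5.js event wiring (identifiers like `videos` and `current` are our illustration, not the project's actual code) shows how the clicks and keys would map onto them.

```javascript
// Wrap-around navigation through an array of n videos.
// Pure helpers, usable with any array length.
function nextIndex(i, n) { return (i + 1) % n; }
function prevIndex(i, n) { return (i - 1 + n) % n; }

// Illustrative p5.js event wiring (identifiers are assumptions):
// let videos = [];                              // the 20 video elements, loaded in preload()
// let current = Math.floor(Math.random() * 20); // random starting piece
// function mousePressed() {
//   if (mouseButton === LEFT)   current = nextIndex(current, videos.length);
//   if (mouseButton === RIGHT)  current = prevIndex(current, videos.length);
//   if (mouseButton === CENTER) videos[current].play();
// }
// function keyPressed() {
//   if (key === ' ') videos[current].play();          // space bar fallback
//   if (keyCode === ENTER) fullscreen(!fullscreen()); // toggle fullscreen
// }
```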

Link to the Web Page:

The Day of Exhibition

For the activity, viewers were asked to place their computers on shelves, which we custom-built for the project, and each viewer was given a wireless mouse to activate the screens. Using the LEFT and RIGHT click functions, viewers could click through the web pages until they found the image that matched their screen's place in the composition.

Once all the images were in place, the viewers were given a surprise moment when the puzzle turned into a larger video.

There are 20 videos with 20 different audio tracks. The decision to incorporate audio was to utilize the 40 speakers that were readily available through the laptops. The audio scape was created by editing compositions using found material from the NASA Soundcloud page (open source). We chose different sounds for different locations of the laptops: the bottom row had more of a bass sound, the top row had a higher-pitched, laser-like sound, and the middle rows had more tangible sounds like speaking and beeping that are tied to spacecraft – man-made sounds. Listening to this audio in this context created an atmosphere of space, and the sounds were meant to swing from peaceful audio to a rising tension; the audio moves the viewers through emotional space.

As mentioned earlier, we were concerned with bringing the viewer into the work physically and controlling the space. The viewers had to physically click and talk to solve the puzzle. We also closed the blinds to create a dark immersive environment, and we kept the viewers away from their laptops so that they could view the work both in parts and as a whole. The audio filled the room when the video played. These elements were important in creating an experiential piece with more emotional weight. We hoped to create a sense of wonder and awe in the viewers, and that they would feel something from this experience.

It was interesting to see how viewers organized themselves to reach their goal of constructing an image. One person became the leader, checking the individual screens against the whole image and approving when each was in the correct position. The group worked well and quickly, creating the image by communicating, listening, and experimenting.



Broken Space is a piece about networks on the cosmic level. We acknowledge connections to create a whole; we see the universe as both organized and random. We hope that this metaphor translates to the internet: individual participation creates structure and larger portraits that change over time, providing insight into overarching concepts of collaboration and unity even though these connections are mediated.



Tam Tam Digital

by Samaa Ahmed and Thoreau Bakker


When we first started working on the Multiscreen Experiment, and looked at the project brief, we framed our project around a few key questions and considerations:

  1. How could we create interactivity between screens without networking them? (This was a big challenge when we were in the conceptualization stage!)
  2. How could we create an intuitive, user friendly experience, where we would not need to spend a lot of time explaining the concept or how the project worked? What common themes, concepts, or practical knowledge could we draw on?
  3. How could our project enhance the multiscreen experience? For example, how could we create a multiscreen experience that was more than just visual?
  4. How could we create an experience that was not dependent on multiscreen, but improved by it? Thinking about ideas of participation, for example, would everyone have to start the experience at the same time, or could they opt-in/out as they wanted? How do we create an experience that feels organic to the users, so that we could translate this project out of the scope of our classroom?


With the above considerations in mind, we began to brainstorm ideas and concepts on which to base our project. We thought about incorporating text, puzzles, games, randomization, movement, gifs, videos, and experimenting with different gestures and even webcams as inputs.

In our mind map, we noticed that there were some recurring ideas. Firstly, we kept coming back to the visual of a grid.


Secondly, all of our ideas had a very sleek, simple user interface. In the image below, you can see us starting to play around with the idea of modules. For example, if there were 20 screens, perhaps there would be 5 different ‘modular’ interfaces with different shapes, and then there would be 4 variations of those shapes in different colours. Each of those colour variations would then be animated slightly differently, but in a similar way to the other modular shapes.


After our brainstorm we had a few key concepts that we wanted to incorporate into our final design:

  • Grid
  • Music/sound/audio:
    • Beats
    • Percussion
    • Drums?
  • Sleek and simple design
  • Participatory design
  • Modular
    • Responsive

We used the following images to create a moodboard of sorts. By using visual cues, it was easy for us to start to see connections between the concepts and to integrate these separate ideas into a cohesive whole.

We thought of creating a digital beat-maker grid that could have different sounds/instruments, be responsive, and jammable.

We thought of digital beat makers, digital drum sets, and drum pads. They manage to create an interactive, responsive, tactile experience using touch pads and screens, in a convincing way. Below are some of the visuals we drew from.

Digital drum set

Drum pad

Reloop Beatpad

iOS drum maker

From that idea, we started sketching and creating designs for what the interface could look like, and refining our idea. We came up with a name, and a final concept, Tam Tam Digital.

Project Description

Tam Tam Digital is an interactive, collaborative drum circle experience that allows users to ‘jam’ using the touch features on their phone or tablet screens, or by clicking their mouse on their laptop. They can jam on their own, or with others, and create rhythms, beats, and compositions using unique sounds that simulate percussion beats.

The project is inspired by the idea of drum circles, where people gather together as a community to explore, experiment, and create. Anyone can join the drum circle, and they need not have any prior drumming or musical experience. Participants in the drum circle respond to each other’s beats as well as riff by themselves. The goal of the drum circle is improvisation and making it up as you go along. That is the beauty in it.

‘Tam Tam Digital’ is inspired by The Tam Tams – a weekly festival that takes place in Montreal in Mont Royal park. The focal part of the festival, and what it has become renowned for, is a large drum circle.


We chose the drum circle concept because we felt that it lent itself well to the multiscreen experience. As the screens in this project are not linked or networked, we wanted to play with the idea of participation. Using ‘participation’ as a cue, we kept a few key themes in mind: Everyone need not join the experience at the same time. People can opt in and out. Everyone should be able to enjoy the experience without feeling lost or overwhelmed. Basically, we wanted to create an experience that would be intuitive, easy, fun, and largely unregulated so that people could be as active or involved as they wanted.

Design Process

Drum circles are very tactile, immersive experiences. The feel of the drum, the feeling of community, the (literal) vibes that are created, are essential to the experience. So, how would we translate that into a digital context?

Using digital beatpads as inspiration, we researched them further to see how DJs and electronic musicians interact with them. What could we borrow from those interactions to inform the design for ours?

Click to play


Click to play

The videos above show how the visuals of the ‘beat pads’ respond to touch and complement the sound that is being produced. The second video shows the collaborative possibilities of using a beatpad (in this case, a Launchpad Pro).

Design Considerations

Using the beatmaker and Launchpad Pro interfaces as an inspiration, we started creating user interface designs. In the design below, when the touchscreen or mouse was pressed, a sound would play and the image would invert.

Grid Design, by Samaa Ahmed

Grid Design, by Samaa Ahmed

This design used primary and secondary colours and solid white shapes. These shapes would be drawn in p5.js, not imported as vector files.

We played around with different aesthetic options, for example, using more complicated icons instead of solid shapes. We also thought about using a more tonal colour scheme, instead of such stark and different colours. We thought that shades of pinks, blues, and purples would convey the relationships/connectedness between the sounds when they were played together.


Design swatches

  • We also discussed whether the gestural inputs should be movement, shakes, sliding, or just tap/touch.
  • Should we only do instruments? (Should we also do dancers?)
  • How long should the interaction be? (In sync with the sound)
  • Should the visuals respond to the sound in a more organic way? (More than just inverting?)

We tested these different considerations using code.

Code Development 

First, we played with the idea of movement, using gifs instead of images, vectors, or static shapes.

We decided this was too messy and that the visuals should be drawn in p5, not imported as images or gifs.

The reason we chose to draw the shapes was to get as much experience with coding as possible, since neither of us has a coding background. This would also allow us to make the shapes more responsive, and the interactions would be more ‘organic’ – as the code would actually determine the visuals, rather than a pre-designed image file.

Second, we tried to get the visuals to sync with the audio, so that the shape would invert simultaneously with the beat of the drum. We wanted the visuals to return to their ‘original’ position after the interaction was over.
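One minimal way to sketch this timed invert-and-revert (assuming the sound's duration is known; the identifiers here are our illustration, not the final project code):

```javascript
// Returns true while the interaction is "live": the shape stays
// inverted from the moment of the press until the sound finishes.
// pressTime of -1 means no press has happened yet.
function isInverted(pressTime, soundDurationMs, now) {
  return pressTime >= 0 && (now - pressTime) < soundDurationMs;
}

// Illustrative p5.js usage (identifiers are assumptions):
// let pressTime = -1, hit; // hit loaded with loadSound() in preload()
// function mousePressed() { pressTime = millis(); hit.play(); }
// function draw() {
//   const inv = isInverted(pressTime, hit.duration() * 1000, millis());
//   background(inv ? 255 : 0);  // swap background and foreground
//   fill(inv ? 0 : 255);
//   ellipse(width / 2, height / 2, 200);
// }
```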

The problem with this test was the spacing of the shape in relation to the size of the screen. It was difficult for us to centre the shape on different sized screens and across different platforms (e.g. differences between iOS and Android).

Third, we wanted to see whether we should use sound loops (multiple beats of audio) or ‘hits’ (only one beat).

This test used the play/pause function in p5, which was easier to use with sound loops. However, we were concerned that having 20 different sound loops playing simultaneously would get too ‘messy’, so we decided to stick with ‘hits’.

Finally, we wanted the visuals to be more responsive to the sound, so we wanted to create variation and animation.

We liked the way that this visual worked with the sound, but not all of our sounds had as much tonality to them, so we used a combination of inverting visuals with ‘pulsing’ ones in our final design.
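The ‘pulsing’ response can be approximated by mapping the current audio level to a diameter. A sketch under assumed names, using p5.sound's `p5.Amplitude` level (a value between 0 and 1):

```javascript
// Map an amplitude level (0..1) to a circle diameter, so the
// shape swells with the loudness of the sound.
function pulseDiameter(level, baseDiameter, range) {
  return baseDiameter + level * range;
}

// Illustrative p5.js usage (identifiers are assumptions):
// let amp;
// function setup() { amp = new p5.Amplitude(); }
// function draw() {
//   background(20);
//   const d = pulseDiameter(amp.getLevel(), 100, 200);
//   ellipse(width / 2, height / 2, d);
// }
```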

Final Code

Design Decisions

We liked the tonal colour palette, so we decided to create our interface in shades of pink and blue.

Because we were trying to simulate a drum beat, we decided that the input should just be touch, and we should only use circles to give the feeling of hitting the head of a drum.

We also changed our design ‘structure’ from a grid interface, to a circle.


Circle Design, by Samaa Ahmed

We kept the home screen interface cohesive with the audio screen interface to give the feeling of playing one, collaborative instrument.


Tam Tam Digital home screen interface. Created using HTML, by Samaa Ahmed


Click to play video showing final code development and interaction, by Thoreau Bakker

We even snuck in a kitten sound.

Demo Day

Our in class presentation went well. (Link to presentation here.)

Everyone seemed to get the hang of how the project worked, and enjoyed jamming out. (We have lots of videos of people dancing! *cough, Mahsa, cough*)

We brought in a sound mixer to connect people’s phones/laptops to a speaker, to amplify the sound. It was an interesting juxtaposition, having an analog mixer at the center of this digital instrument that we created.


Ania and Katie riffing next to each other, each on separate screens, creating a responsive beat


Tam Tam Digital circle set up


Creating a responsive, interactive beat


Click to play



Click to play

Case Studies

Tam Tam Digital is situated within a context of other collaborative instrument projects. The following case studies informed the development of our project and provided references for some of the interactivity that we were trying to achieve.


The ARC – Audience Reactive Composition – is a large-scale interactive music experience using light, sound, touch, and physical design. It was developed for Deloitte Digital by Dave and Gabe Interactive Installation Studio who wanted to explore the question “What is the future of music?” It was exhibited at the South by Southwest (SXSW) music festival in 2016. The concept for the experience is to create a collaboration between artist and audience, enabled by a series of unique tactile instruments.

Source: Dave and Gabe

Collaborative Instrument

This instrument, created by designer Matt West, requires two people to play it. One controls the instrument’s pitch and the other controls its rhythm. Each player gets to control one piece which then fits into the main base station where the sound is emitted. West says, “For this instrument to be played successfully the two musicians will need to constantly communicate or rehearse together until they know the piece of music and each other perfectly.”

Source: Matt West


According to its designer, Ean Golden, Orbit asks the question: “What would happen if you allowed people to control the music in a night club? Would they have fun, be inspired and choose to dive deeper into music?”

Instead of a DJ, Orbit allows people to use its table interface to play any part of a mix. The purpose of Orbit is to bring people together and help them connect, using music as a collaborative tool. Orbit is equipped with 12 large jog wheels, similar to those found on a DJ controller. Each touch-sensitive wheel plays one instrument in the “mix”. When pressed, the instrument plays, and when turned it changes the instrument’s tone or rhythm.

Click to play

Social Lamellaphone

The Social Lamellaphone is a musical instrument created by Australian artist Gary Warner. It is made from discarded street sweeper bristles, fastened together in a circular shape, and is designed to be played by several people in tandem in a manner similar to an African thumb piano.

Click to play


Avatar Orchestra Metaverse

Collaborative Instrument

Drum Circle

Inside the #ARC: Interview with Creators Dave and Gabe

Orbit Collaborative Music

p5.js Examples: Sound, Load/Play Sound, Play Mode, Oscillator Frequency

p5.js References: MouseIsPressed, Sound Library

SXSW Interactive 2016: Sound Idea 

The Mont Royal Tam Tams

The Social Lamellaphone




Oblivion experience

Name of group members: Rana Zandi (NANA), Natasha Dinyar Mody

Project title: Oblivion


Project description:

Oblivion is a non-linear, multi-screen narrative experience based on how memories function within the human brain, using a hyperlink structure.

The narrative explores the protagonist Lily’s memories of the people around her at school.

Although each reader experiences the narrative on their own, the experience is completed as a group. Since the narrative uses a hyperlink structure, it provides readers with an illusion of choice and allows each individual to read the same story through a different path, enabling them to compare and contrast their knowledge of the events within the story with one another.

Supporting the narrative is a timer that enhances the group experience. Users can compare their results (pages completed) at the end of the allotted time (15:00 minutes).

Also embedded within each page of the story are supporting visuals (3 clocks) that change based on the movement of the mouse and the path taken by the user, enhancing not only the concept but also the multiscreen experience.

Process context:

Oblivion explores various concepts:

  • Memories & the hyperlink structure – According to Ray Kurzweil in his book “How to Create a Mind”, memories work like a hyperlink structure. Memories have the ability to be linked to each other and to recall one another based on new memories (experiences). For example, for 6 months you might have a neighbour with a funny-looking moustache. Years later, walking down the street, a man with a moustache may pass you by. Suddenly, a memory of your old neighbour will pop into your head, even though the man who passed you by might not look similar to your old neighbour at all. Oblivion explores the same notion: through the hyperlink structure, the memories of Lily (the protagonist) get recalled by one another.
  • Memories – What is a memory? Our brain doesn’t hold on to memories like a computer or a filing cabinet that one can open and browse through. In fact, memories don’t exist at all, even though the entire functionality of the human brain is based on memories and experiences. Each time an individual experiences something new, a specific neurological pattern (neurons passing messages to one another; think of it like a lightning pattern) takes place within the brain. It is the recalling of this pattern that we call memory. When an individual recalls a memory, the same neurological pattern assigned to that “memory” gets recreated. This means that each time you recall a memory, the whole experience takes place in your brain as if it were the first time. However, each time this pattern gets recreated it is a bit different from the previous ones. Hence, some researchers believe that each time you recall a memory it drifts further and further away from what actually took place the first time that neurological pattern was created. In short, the memories you never recall are the safest. Oblivion was written based on this concept. The story forces the reader to remain in a state of oblivion through loops of recalled memories, unable to tell which of the events or characters are real according to the protagonist (Lily).
  • Time – The clocks within Oblivion – Oblivion starts with a first page holding 7 hyperlinks for the reader to choose from. Each link leads to a different page that unfolds a certain event in time while holding several links that repeat the same process. The only way the reader can know whether they are traveling backward or forward in time is through the clocks. When the mouse is not moving, the clocks are frozen in time. As the reader moves the mouse on the screen, the clocks are in a state of oblivion: some move clockwise and some counter-clockwise, at varied speeds. When the reader has to decide which link to choose, hovering the mouse over a link makes ALL the clocks go either clockwise or counter-clockwise, hinting at the direction of time.
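The clock behaviour described above reduces to one rule per frame: frozen when the mouse is still, each clock's own randomized direction and speed when it moves, and a single shared direction when a link is hovered. A sketch with assumed names (not the project's actual code):

```javascript
// Per-frame change (in degrees) of one clock's hands.
// moving:   is the mouse currently moving?
// hoverDir: +1 (forward-in-time link), -1 (backward link), 0 (no link hovered)
// ownDir:   this clock's own randomized direction (+1 or -1)
// speed:    this clock's own randomized speed
function clockDelta(moving, hoverDir, ownDir, speed) {
  if (!moving) return 0;                    // frozen in time
  const dir = hoverDir !== 0 ? hoverDir : ownDir; // hover overrides chaos
  return dir * speed;
}

// Illustrative p5.js usage (identifiers are assumptions):
// angle += clockDelta(mouseX !== pmouseX || mouseY !== pmouseY,
//                     hoverDir, ownDir, speed);
```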

Process journal:

  • Phase One – Writing the story – It was challenging to come up with the narrative. Knowing who our personas were helped us narrow down a theme. Since our readers were going to be our classmates, and the majority of them were girls, we decided to write a love story based on familiar events taking place within the school environment. Since NANA is currently researching memories and cognition, we decided to go with this concept. It was tricky, however, to embed the hyperlink structure while writing the story. Each page had to link back to certain other pages, and this had to be kept in mind throughout the writing.
  • Phase Two – Coding the story – Coding the story wasn’t hard at all. Using basic HTML & CSS, we had a layout and a base to work from.
  • Phase Three – The timer – Setting up the timer was very challenging. We had to figure out how to create a fixed frame that could hold both the countdown clock and the content. Both of us, being very new to coding, struggled to understand pages and pages of JavaScript from various Google searches. We found a timer in one of our searches, but we had to study and learn its code. We asked for help from other classmates, and even some second-year students. Finally, we understood it well enough to tweak it with our own time and aesthetics. Through this phase, we learned to keep our code more organized in order to understand what we were doing. Sometimes our code wouldn’t work and we couldn’t figure out why; a second-year student’s suggestion to duplicate our files, stay organized, and sometimes work backwards helped us finish this phase successfully.
  • Phase Four – The visuals – The p5.js experience was a mission of its own. Phases three and four took the longest. Creating the clocks wasn’t as challenging as embedding them within our story and coding them to go backwards and forwards in time according to the movement of the mouse. After figuring out the code, we had to go through the story and embed it for each link (and there were many links!).
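The countdown from Phase Three boils down to formatting a remaining-seconds value on each tick. A minimal, framework-free sketch (our own illustration, not the timer code we borrowed):

```javascript
// Format remaining seconds as MM:SS, e.g. 900 -> "15:00".
function formatCountdown(totalSeconds) {
  const m = Math.floor(totalSeconds / 60);
  const s = totalSeconds % 60;
  return String(m).padStart(2, '0') + ':' + String(s).padStart(2, '0');
}

// Illustrative browser usage (element id is an assumption):
// let remaining = 15 * 60; // the 15:00 session
// const id = setInterval(() => {
//   document.getElementById('timer').textContent = formatCountdown(remaining);
//   if (--remaining < 0) clearInterval(id);
// }, 1000);
```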

References & resources:

Experiment 2: 100 Futures Lane

by Ania Medrek and Bijun Chen

Project Description









100 Futures Lane is an interactive model of an apartment building that allows the user to play the part of the nosey neighbour. There are 20 different window narratives users can tap and play with, from a couple fighting to a unicorn vomiting rainbows. The project uses audio, animation, and P5.js to bring simple digital illustrations to life.

The apartment building structure is a laser-cut piece of masonite, attached, by magnet, to a 5-foot plank of fibrewood. The plank allows 100 Futures Lane to be portable and self-supporting. When propped up against a wall, the project sits at a slight angle, making it easier to view and interact with.


Links to Videos

Process Video:

Presentation Day Video:



GitHub link:

Our code was written with phone screens in mind. On a computer screen, some images will appear bigger than others because we experimented with sizing. The larger files work best on 6-inch smartphones, while the smaller ones are better for 4-inch iPhones. In the end, pinching the screen to zoom in and out was the simplest solution.

Link to Interactive Windows

100 Futures Lane


Process Journal

We were inspired to create an interactive apartment building by looking across the street and imagining how fun it would be to see different scenarios in each window. We knew it would be too simple to apply one piece of code to 20 phones, so we set out to create 20 different ‘window’ narratives. During the first week, we came up with the initial 10 illustrations and used those to test different code examples from the p5.js reference guide. Originally, we wanted to use shake, flip, tap, and swipe so that there would be a large variety of interactions, and the apartment building would be like a digital ‘dollhouse’.

By trial and error, we learned that shake, flip, and swipe worked — but not very well. Flip was particularly glitchy, and it was hard to figure out what angle the phone needed to be at to trigger an action. We decided it would be too confusing for the user to figure out what to shake and what to tap, so we stuck with the most intuitive input and made all interactions tap-based. To simulate movement, most of the window scenarios are slightly changed PNG files appearing in a sequence, for example the Cat Sequence (which also meows):
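A tap-advanced PNG sequence like the one described can be sketched as follows (the file names, frame count, and sound asset are our assumptions, not the project's actual files):

```javascript
const FRAME_COUNT = 4; // assumed number of frames in a sequence

// Advance to the next frame, wrapping back to the first.
function nextFrame(i) { return (i + 1) % FRAME_COUNT; }

// Hypothetical frame file names, e.g. "cat0.png" .. "cat3.png".
function frameName(i) { return 'cat' + i + '.png'; }

// Illustrative p5.js usage:
// let frames = [], current = 0, meow;
// function preload() {
//   for (let i = 0; i < FRAME_COUNT; i++) frames.push(loadImage(frameName(i)));
//   meow = loadSound('meow.mp3'); // hypothetical asset
// }
// function draw() { image(frames[current], 0, 0, width, height); }
// function touchStarted() {
//   current = nextFrame(current);
//   meow.play();
//   return false; // stop the browser from scrolling on tap
// }
```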

We were still hoping to incorporate swipe into some windows, but when we tried it on the phones, it jerked the whole browser around and was overall less effective than simple taps. The Plants and Crime windows would have been nice as ‘swipe’ in particular; maybe a later iteration could use bigger screens that would allow a smoother interaction.

After we had most of our drawings and coding finished, building the structure was the next step. We decided the common denominator of screen size would be 3x2 inches. We mapped out the apartment’s face in Illustrator, then again in a separate file the laser cutter could register. We planned to build an actual 3D model of a building, with walls, a roof, and all. But when we spoke to Reza in the maker lab, he pointed out that we had forgotten to leave enough space for the little shelves the phones would sit on. We adjusted our blueprint and had a second apartment face laser cut. We decided to ditch the 3D model and create a self-supporting wall. This solved the problem of keeping the phones in place and made the project more user-friendly.

On critique day, we presented first, hoping to get as much setup as possible done before class started. We asked those with iPhones to line up and scan QR codes we had prepared to match specific window spots. This process took longer than we thought, and in a future iteration we would try to make the ‘loading’ time shorter. The main challenge was accommodating all the different shapes and sizes of phones. We created little blocks to prop up smaller phones, but putting it all together in class was a longer process than we would have liked.

To make lining up a little more fun, we had binoculars for participants to use, so that they could get excited to play the part of the ‘nosey neighbour’. In future iterations of the project, we would explore using iPads and IPad minis to make the windows larger and test out longer and more complex interactions using P5.js.

References and Resources

P5.js Reference page:

Ted Talk by Aparna Rao: Art that craves your attention:

We were inspired by Aparna Rao’s Ted Talk about enticing art, in particular, her ‘Framework’ example. In ‘Framework’ little care-free characters run around the frame of a window. This is a light-hearted piece of work that has a strong impact, similar to what we were going for in 100 Futures Lane.
