Re-Do / Un-Do / Over-Do – Starfish Generator

Digital Ocean

Michael Shefer – 3155884

Andrew Ng-Lun – 3164714

Brian Nguyen – 3160984



For the final assignment for Atelier 2, our group set out to revisit one of our very first assignments. Although we weren’t all initially in the same group, everyone enjoyed the approach to the concept of starfish and saw potential in expanding on it. When tackling ideas on how to expand the initial project, the group settled on interactivity amongst individuals. We explored ideas where individuals would interact with the starfish and the environment as well as with other people. Ultimately, we went back to the initial concept of generating a starfish and worked on creating an interactive aspect where individuals could create their personalized starfish and add it to an archive of other people’s creations. The project operates as follows: on one screen, the individual can alter physical properties of the starfish such as the number of legs, length of legs, thickness, and colour. Along with that, users can search for any image they want, and it is texture-mapped over the starfish. Once the creation is done, the user sends the starfish to a second screen that holds all previously generated starfish.


When approaching the project we prioritized expanding on previous attempts, concepts, and limitations, and wanted to rebuild the project as a new experience. Some ideas went as far as removing the starfish aspect entirely and focusing solely on generation and interactivity. Ultimately, we continued with the generator and focused on implementing previous suggestions.


The first starfish generator prototype used PubNub to link the various screens together, but for this attempt we utilized Firebase to archive all the user-generated starfish, which eventually appear in our aquarium. The different screens, such as the generator itself and the final display screen of the aquarium, operated as websites of their own hosted on GitHub. One of the previous limitations of the starfish generator was the lack of customization, since users were only able to select from a small array of options. To expand on this, we gave the user the ability to search for an image to use as the texture of the starfish. Using Google Images (along with WebGL), the first result is selected and then mapped over the starfish as a texture. Additionally, we opted to use sliders instead of a fixed selection of options, giving users more variation and freedom with their creations.
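The flow above can be pictured as packaging the slider values and the chosen image into a single record before it is archived. The sketch below is a hypothetical JavaScript rendition of that step (the real project pushed records like this to Firebase); every field and function name here is illustrative, not taken from the project code.

```javascript
// Hypothetical sketch: bundle slider values and the searched image into
// a starfish record ready to be archived (e.g. pushed to Firebase).
function makeStarfishRecord(sliders, imageUrl) {
  return {
    legs: Math.round(sliders.legs),   // sliders give fractions; legs must be whole
    legLength: sliders.legLength,     // outer radius of each leg
    thickness: sliders.thickness,     // inner radius / body size
    color: sliders.color,             // base fill colour
    textureUrl: imageUrl,             // first Google Images result
    createdAt: Date.now()             // lets the aquarium order arrivals
  };
}

const record = makeStarfishRecord(
  { legs: 5.4, legLength: 120, thickness: 40, color: '#ff7f50' },
  'https://example.com/texture.jpg'
);
console.log(record.legs); // -> 5
```

In the actual project, a record like this would be written to the shared archive and read back by the aquarium screen.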

First iteration of the starfish being sent to the aquarium


With the customization built, we also worked on adding life to our 2D objects by incorporating animations into the starfish. Once archived and added to the aquarium, the starfish travel at random, with noise added to mimic the organic movement of starfish. Although we did experiment with applying other functions, such as sine, to the starfish, we settled on noise simply because of how we constructed the object. Other explorations saw us attempting to use Bézier curves to construct the starfish, but we ran into similar complications because of how we built the inner and outer radius of the starfish.
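The inner/outer-radius construction mentioned above can be sketched in plain JavaScript: the outline of a starfish is a polygon whose vertices alternate between the two radii around a circle. This is an illustrative sketch, not the project's p5.js code, and the function name is made up.

```javascript
// Build a star-shaped outline from an inner and outer radius:
// vertices alternate tip (outer) / valley (inner) around a circle.
function starfishVertices(legs, innerR, outerR) {
  const pts = [];
  const step = Math.PI / legs; // half a leg per step: tip, valley, tip...
  for (let i = 0; i < legs * 2; i++) {
    const r = (i % 2 === 0) ? outerR : innerR;
    const a = i * step;
    pts.push({ x: r * Math.cos(a), y: r * Math.sin(a) });
  }
  return pts;
}

const pts = starfishVertices(5, 40, 120);
console.log(pts.length); // -> 10 vertices: 5 tips and 5 valleys
```

In p5.js, a per-frame `noise()` offset added to the starfish's position would then produce the drifting, organic movement the text describes.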


Finally, the background was created in Photoshop, and since we were using WebGL we also had to map the PNG of the aquarium over a plane to serve as our canvas background.

Final Prototype Build

The Aquarium Populated with People’s Creations

Explorations and Challenges

Although the final build resembled what we had initially drafted out on paper, we weren’t able to incorporate all our ideas either due to limitations or complications.

We wanted to emphasize interactivity with our generated starfish by allowing the user either to name their starfish or, similar to the previous prototype build, give the starfish a scientific name. Then, once in the aquarium, users would be able to hover over any starfish and see its designated name. Unfortunately, since WebGL was such an integrated part of the program, it limited our capabilities with text.

Toronto & CO: AR Tour Book

Documentation: Atelier Unit 3: Final Project: Ambient Sound

Kiana Romeo | Dimitra Grovestine | Tyra D’Costa | Sam Sylvester | Ola Soszynski


Our concept for our final project was developed from the idea of tourism and everything that the city of Toronto has to offer. Originally, we thought about having our set space be the entire city, where tourists could walk around and discover history through images and audio. We then decided to make an interactive promotional piece for Toronto that could immerse tourists, here and elsewhere in the world, in our city’s culture.



Being such a diverse and ever-changing city, Toronto always has something new and exciting to learn about and explore. Although experiencing what our city has to offer firsthand is ideal, it is not always as simple as jumping on a plane and flying overseas, and even for those who want to do just that, it is a good idea to first explore what the destination has to offer in terms of tourism and entertainment. By creating an immersive and interactive tour pamphlet, we have given people the opportunity to experience Toronto from the comfort of their own home before their visit to the city.

Project Breakdown

  • Promotional Piece (a graphically designed pamphlet for tourists to access)
  • Representative Blender objects for each highlighted Toronto element
  • A general ambient sound
  • A corresponding sound or story through audio for each highlighted Toronto element
  • An app created through Unity


Division of Roles

Unity App Development  – Tyra

Blender 3D Modelling – Kiana, Dimitra & Tyra

Audio Design – Sam, Tyra & Dimitra 

Graphic Design – Tyra & Ola


  1. Research
  2. Develop App in Unity
  3. Model Objects in Blender
  4. Record and Mix Sound in Reaper
  5. Design and Print Magazine in Photoshop

Unity Testing

One of the first tests that Tyra conducted for Unity was combining the camera with the 3D objects. Tyra did this by first building an AR business card and attaching video, sound, and images to the AR object in Unity.


Next, Tyra started to build the final application: she built five image-tracking anchors. Nested within the anchors are five different prefabs containing the modeled objects from Blender. In each prefab, modifications were made to include ambient sound and touch-based interaction. Using Lean Touch, Tyra was able to incorporate an interactive aspect between the user and the AR object. The Lean Touch script allows users to scale, rotate, and move the objects within the AR space. The last step was to replace the test objects, images, and videos with the final project assets.




For many of us, the models created in this project were our first attempts at working in Blender without the help of an instructor. Our modeling team was able to put together five models: a coffee cup (Tyra), a timbit (Dimitra), a hockey puck (Dimitra), a CD (Dimitra), and a teepee (Kiana). These models were then exported and sent to Tyra to be uploaded into the Unity AR application.

With Materials



Without Materials


Graphic Design

The booklet was centered around being a visual aid for the rest of the experience. As such, Tyra designed the layout to be quite simple, with a focus on photos and headlines of the topics covered. We debated having either filler text or full information, and eventually settled on filler text, due to time constraints and so as not to distract from the core of the simulation. The pages were laid out in InDesign, with a modern layout and format. This process was relatively simple, as InDesign is something Ola has worked in for several years; the biggest dilemma was the print layout of the pages, which Ola resolved by changing the page setup and order. The booklet itself is hand-bound, to make the product last longer and feel a bit more solid.



The audio team brainstormed and mixed together a variety of Canadian-inspired artwork, such as poems, songs, and field recordings, using Reaper. These were then sent to Tyra and uploaded into the Unity AR application.

  • Bad Canadian Songs (Sound Sourcing done by Dimitra, Editing done by Sam)
  • Hockey Arena Foley (Sound Sourcing done by Dimitra, Editing done by Sam)
  • Hey Trudeau Poem  (Sound Sourcing done by Dimitra, Editing done by Sam)
  • Crickets (Field Recording done by Tyra, Editing done by Tyra)
  • Crowd Sounds (Field Recording done by Tyra, Editing done by Tyra)




Collapsing City – Immersive Sound and Augmented Spaces

Collapsing City – An AR Experience

Michael Shefer – 3155884

Andrew Ng-Lun – 3164714

Brian Nguyen – 3160984

Rosh Leynes – 3163231


For our immersive sound and augmented spaces project, we set out to create a narrative space developed with sound and the assistance of visuals, all experienced within an AR environment. A set of four rooms was modeled and built in Unity, all with variations in concept and layout. All the rooms are connected to each other, allowing for linear and seamless storytelling. Essentially, with a phone, the audience maneuvers through the four different rooms and listens to the sounds of each room to understand the narrative. As the user approaches certain objects in the AR space, a proximity sound plays to further tell the story. The narrative of the space follows a city collapsing due to global tensions. The initial room is a regular day in the city, accompanied by music, children playing, and basic city chatter. The scene connects to a room with a television that cuts from a game of football to an emergency broadcast of nuclear war. The next scene is a dense city environment with a ruined building, air-raid sirens, and fire. The final scene is the city in complete ruin, with buildings decimated, rubble scattered, and the ambient howl of the wind.




When developing the concept, we knew that we wanted to tell the story through different rooms and environments with variations of sound accompanying them, so we organized the rooms with a rough sketch first.



Originally we wanted to set the whole scene onto a plane that the audience could look over, getting closer to details and experiencing the change in the narrative, but we then decided that having the audience go through each room individually, experiencing it as if they were in the environment, would yield a stronger reaction and connection to the narrative. When creating the rooms separately, we initially had the idea of having the audience teleport to each room after stepping through a door frame or entrance to the scene. We scrapped that idea because our intent was to have the audience experience the change in the environment, so we bridged all the individual rooms together to create a seamless experience. As we were all working on one Unity project and making frequent changes, we used Unity Collaborate, which allowed us to import each other’s changes.



Since sound was a main component of this experiment, we worked with triggers and ambient sounds. On top of the general ambience in each scene to establish the environment, we included triggers that would aid in telling the narrative. For instance, when approaching the television, a soccer game cutting into an emergency broadcast would play. Additionally, when approaching rubble, the sound of explosions would play to convey what had happened. Although we had visuals and 3D models to create our environments, the sound was crucial to telling the story.



We experienced several challenges throughout the development of our experiment, which were eventually resolved, but two stood out: the sound triggers and the scale of our approach.

For sound, we wanted each scene to vary in atmosphere, so we used triggers to achieve this. The triggers require something physical to pass through them in order to trigger the sound, so when the camera moves into the room with the television, something had to interact with the trigger to activate the sound. We worked around this by attaching a cube to the camera and adding a Rigidbody component to it so that it could interact with the triggers.
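Outside of Unity, the trigger behaviour described above boils down to a distance test with entry detection: a sound should fire once when the camera first enters a trigger volume, and not retrigger while it stays inside. This is an illustrative JavaScript rendition of that logic, not the project's C# code; all names are made up.

```javascript
// Create a spherical trigger volume; calling the returned function with a
// position reports true only on the frame the position first enters it.
function makeTrigger(center, radius) {
  let inside = false;
  return function check(pos) {
    const dist = Math.hypot(pos.x - center.x, pos.z - center.z);
    const nowInside = dist <= radius;
    const fired = nowInside && !inside; // rising edge: just entered
    inside = nowInside;
    return fired;                       // true => play the room's sound
  };
}

const tvTrigger = makeTrigger({ x: 0, z: 0 }, 2);
console.log(tvTrigger({ x: 5, z: 0 })); // false: still outside
console.log(tvTrigger({ x: 1, z: 0 })); // true: entered, sound plays once
console.log(tvTrigger({ x: 0, z: 1 })); // false: still inside, no retrigger
```

Unity's trigger colliders do this entry detection for you (`OnTriggerEnter`), which is why the camera needed a physics body attached.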


The largest challenge we encountered was how we approached the experiment. We really enjoyed the idea of having variety in scenes to tell a narrative, so we focused most of our development on building these scenes and rooms, accompanied by sound, particles, and an assortment of sourced 3D models. Throughout development we had to regularly scale down our scene sizes and limit particles and lighting to run the build effectively on phones. In the end, the build ran with lag, and users weren’t able to make it through the first scene due to its sheer scale.


The Dating Game

Tyra D’Costa | Ola Soszynski | Samantha Sylvester

The Dating Game is intended to be an interactive storytelling tabletop interface. Our inspiration came from the short story “Cat Person” by Kristen Roupenian, which recently went viral after being published in The New Yorker. We wanted to mold Roupenian’s short story into what we recognized as the dreaded, and inevitable, stages of dating in today’s day and age. The work that we have created is meant to discuss the drastically changing landscape of romance in the 21st century in comparison to the age-old sexual motivators found in human nature.


How it Works

The board itself is connected to an Arduino UNO and MAXUINO; it has eight wired buttons and two conductive switches that can be activated by touch. When activated in order, the buttons tell a story through audio and visual assets that we created with After Effects and Audition. In between the video sequences, the user is prompted to make decisions about the story and characters; however, regardless of what they choose, the story ends the same way. The user interaction is intended to reflect the frustrated, sometimes powerless feeling that Roupenian’s main character, Margot, feels in the short story “Cat Person”.

Our Process


  • Copper tape
  • 8 sensors (button, sound, mic, pressure, etc.)
  • Speaker
  • Thin Plate of Wood


  • Projector
  • Arduino Uno
  • Soldering iron
  • Speakers

Software Used:

  • Adobe Premiere
  • Arduino
  • Maxuino
  • Touchdesigner
  • Adobe Illustrator

1.         Brainstorming

We came up with a lot of ideas, and potential topics to explore by simply making a flowchart to help interrogate various elements of the UX/UI design. One of the main things we wanted to be part of the experience was the ability for the user to leave input.






2.         Voice Acting and Recording

Ola was able to take the original narrative and construct a new script that was simplified and more compatible with the aims of our project. Using the new and improved script, we headed to the recording studio, where we worked together to record the voice-over narration. Next, we made a list of background sounds we wanted to add to the narration and recorded those too. Sam took the raw audio files and mixed them together to create 8 separate audio tracks for the final output.


3.         Visuals and Animation

Ola worked in After Effects to create 8 individual animation files that would play with the push of the buttons. However, all of the files were corrupted and the work was lost, resulting in a very frustrated Ola. Luckily, Sam came up with the idea of using colors rather than visual scenarios to communicate the ideas and emotions in the story. Together, Ola and Sam did some research on color theory and applied the knowledge to the visual element of the project.



4.         Technical Work

Tyra put together the user experience design framework and the Max patches required to activate the functionality of the board. Tyra began by laying out the exact flow and structure of the user experience via pen-and-paper wireframing. The process began with designing the layout of the board itself, as well as the content and syntax of the prompted user interactions. The main Max patch consists of eight buttons that open and play media files when activated; each button has a different video and sound that becomes part of the overall story. Within the main patch is a sub patch, which is connected to the ‘swipe left’ and ‘swipe right’ feature of the board that makes it interactive.

The sub patch is responsible for opening and playing video files from two separate lists. When prompted, the user has to choose to ‘swipe left’ or ‘swipe right’; whichever they choose activates the next video file from the corresponding list. At the same time, the sub patch eliminates the video file that was not played because it was not chosen by the user. This was an attempt to sequence the movie files and make sure they only play given the right conditions. However, the interactive functionality of this project was not developed as well as it could have been. Unfortunately, the Max subscription Tyra was using was unlicensed and ran out before she could debug the problem with the sub patch. Overall, the sub patch does achieve its base function of movie sequencing; however, it does not eliminate the unselected video files, which is necessary to maintain the syntax of the story progression.
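The sequencing the sub patch was meant to perform can be sketched as two parallel lists that advance in lockstep: each swipe plays the next clip from the chosen list and discards the corresponding clip from the other list, so an unchosen branch can never play later out of order. This is an illustrative JavaScript sketch of that logic, not the actual Max patch; all names are made up.

```javascript
// Two-list story sequencer: play from the chosen list, and eliminate the
// same-position clip from the other list so it can never play out of order.
function makeSequencer(leftClips, rightClips) {
  const left = [...leftClips];
  const right = [...rightClips];
  return function swipe(direction) {
    const chosen = direction === 'left' ? left : right;
    const other  = direction === 'left' ? right : left;
    other.shift();         // eliminate the unselected clip
    return chosen.shift(); // clip to play next
  };
}

const next = makeSequencer(['L1', 'L2'], ['R1', 'R2']);
console.log(next('left'));  // -> 'L1' plays; 'R1' is discarded
console.log(next('right')); // -> 'R2' plays; 'L2' is discarded
```

The elimination step is the part that never worked in the final patch, which is why the story's syntax could break.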


5.         Rapid Prototyping

Tyra  designed the physical board that holds the electronic components using Illustrator and a Laser Cutter from the Rapid Prototyping Center. Ola designed the circuit and put it all together. 





6.         Putting it all together

Together we integrated the circuit into the physical prototype board, then plugged in the Arduino and connected it to the Max patcher. Finally, we hooked up the projector and speakers so that the visual and audio experience could be shifted from the centrality of a screen to the openness of the physical world.


The story takes place over time, but is told over the course of a few button presses from the user, filling and concentrating the space with feelings of nostalgia, relatability, and humour. Through these fragmented experiences, the interpretation of the work can vary from viewpoint to viewpoint, making each interaction with the space unique. For example, someone who relates to the story might understand the place in which it is situated as somewhere they belong and feel heard. Whereas someone who is confused or puzzled by the story might interpret the space as distant, maybe even emotionally attacking.


The Time Traveller’s Clock: Murder at Acorn Dorms

adelaine Fischer-Bernhut, 3161996
Salisa Jatuweerapong, 3161327

February 26, 2019

Atelier II: Collaboration
Prof. Simone Jones



YOU are one of Toronto Police Commission’s top detectives, with the best track record for solving cases the force has seen in 65 years. Having never left a case unsolved, your co-workers joke that it is like you have the ability to “be in the room where it happens”. That is merely a rumor, of course; you dismiss it easily, of course. When asked what your secret technique is, you say: “my ears”. While that technically isn’t a lie, your true secret lies in your clock: imbued with a mysterious magical presence, it allows you to turn back time to hear all the sounds that are happening in a specific place.

TODAY, you have just received news of a MURDER at ACORN DORMS, of an unfortunate 19-year-old girl. Your subordinate brings you all the information on the suspects they were able to find, but it is now your turn. You have already set the location in your clock to the dorm room. Time to figure out who did the deed.

Are you ready?

Continue reading “The Time Traveller’s Clock: Murder at Acorn Dorms”

Studio Murder Mystery


Joseph Eiles

Ermar Tanglao

Vijaei Posarajah

Narrative Spaces – Studio Murder Mystery

Project Description

For our Narrative Spaces project we chose to create a murder mystery that takes place within a music studio. The player takes on the role of a detective investigating a cold case wherein the lead singer of an 80’s rock band was found dead within the recording studio. In the story we have six characters: Gabriel Newel, the bassist; Jim Petty, the drummer; Teddy Lorne, the guitarist; Micky Strats, the singer; Max Powers, the manager; and Herman Dale, the studio tech.

Each character had their own motive for committing the crime. Gabriel’s motive was that he was jealous of the lead singer and disputed his position within the band. Teddy’s was that the victim cheated with his girlfriend. Jim’s was that the victim owed him a large sum. Max’s was that he was enabling the victim’s drug habit and profiteering from him. Lastly, Herman’s motive is inconclusive.

Within the scene we placed objects that acted as clues such as a pill bottle, a bag of flour that acted as cocaine and bloody fingerprints on the piano. For clues such as the autopsy report and the police report we decided to hide them around the studio as we wanted the player to explore the area as a detective would.

For the audio portion of the project we decided to create switches that turn on the audio; these switches are placed on the clues that the player has to find. The position of each clue is based on where its scene takes place, such as scene two taking place in the recording booth. The clues themselves are based on what happened in the scene: the ripped contract belongs to the first scene, wherein the manager discusses the contract with the band; the microphone is the second scene, wherein the band is practicing and a dispute breaks out between the singer and bassist; and the skull, the third scene, represents the skull of the singer who died and what happened before and during his murder. The player is also equipped with a glove with conductive fabric sewn to it, which closes the switch when placed upon a clue and plays the audio file assigned to that clue.

Process and Documentation

To create the audio for the scenes, we each assigned ourselves a character and recorded ourselves in the recording booth. We first read through the script, trying to figure out how we were going to voice each character according to their personality. When recording, we did each character’s lines simultaneously for that scene and cut it into multiple audio files when finished. For the sound effects, we had to improvise some sounds, such as kicking a chair, slamming the door, shaking a pill bottle, and moving sugar around on a wooden slab.

For the editing process, we edited our audio in Adobe Premiere. We decided that two of us would create our own renditions of the audio and see which one fit best. One rendition had a slight delay added to create the idea of a flashback happening; the other had more sound effects but no delay. We decided the audio without the delay was much better, so we went with that. During the intruder scene, we added delay, echo, amplification, reverb, and pitch correction to mask the killer’s voice.


Initial Arduino setup testing.


Studio recording session.


Audio Recording Script.


Editing of the scenes through Adobe Audition.



Prop evidence used for the Murder Mystery.


Final Setup for the clues and evidence.


Arduino setup under the desk.

Arduino Circuit

For our switches we used tinfoil, as it was relatively reliable as an on/off switch and it was a material with which we had a lot of experience and expertise from previous projects. The tinfoil was arranged so that there was a gap between the two strips, and the circuit would be completed once the user placed the palm of the glove, lined with conductive fabric, over both strips. The glove worked perfectly from both a technical and a thematic perspective: it was able to complete the circuit, and it fit the idea of a detective putting on special gloves so that they don’t compromise any evidence in an investigation.


The Arduino circuit consisted of six tinfoil strips that were connected to a 5V power source, a ground connection that was filtered through 10k resistors, wires that connected them to digital inputs, and alligator clips that connected the jumper wires to the tinfoil. We chose to work with digital inputs as we were essentially working with on/off switches and thus had little use for anything analog.

Maxuino Code


The Maxuino Patch consisted of three switches that would play a sound file once the user placed their hand upon one of three clues. The code was arranged in such a way that the sound files are preloaded and played once the switch sends a ‘1’. The sound file is able to continuously play until the end of the file even if the user releases their hand from the clue and the switch is turned off. Earlier challenges with the Maxuino patch included a sort of ‘double trigger’ wherein the sound file would get played twice as the switch recognized both a ‘1’ and ‘0’ as a valid input; this was later fixed by preloading the files and placing a ‘sel 1’ to filter out any potential repeats.
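The 'sel 1' fix described above is a simple edge filter: the switch reports both 1 (hand placed) and 0 (hand released), and only the 1 should start playback. A JavaScript analogue of that filter (illustrative only, not the Max code) looks like this:

```javascript
// Analogue of Max's 'sel 1': react only when the incoming value is 1,
// ignoring the 0 the switch sends on release, so the sound plays once.
function makeSel1(onTrigger) {
  return function receive(value) {
    if (value === 1) onTrigger(); // 0 (release) is filtered out
  };
}

let plays = 0;
const input = makeSel1(() => plays++);
input(1); // hand placed on the clue -> sound plays
input(0); // hand released -> filtered out, no second play
console.log(plays); // -> 1
```

Combined with preloading the files, this removes the "double trigger" the patch originally suffered from.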

Context and Inspiration

Our project was developed from a basic concept of interactive objects contributing to an overall sound-based narrative. The idea of using a skull for one of the objects allowed us to narrow the project down to an interactive murder mystery. Our project consisted of both paper and non-interactive evidence to provide a grounding for the murder mystery, as well as three pieces of evidence that can be interacted with via a conductive glove, which closes a circuit to play an audio file related to the murder mystery.

For research purposes we looked at how murder mystery games are set up and what is required for an adequate experience. It was from the blog on ( that we realized that our game would need a very specific theme, and that for recording purposes we would require a very detailed script listing the types of sounds to record. As a group we decided to use the recording studio as a set for the murder mystery and agreed on a theme based around an 80’s rock band aesthetic. We also came up with a plot and the backstory for the characters involved, as well as who would be the culprit in the end.

For the recording of the scripted scenes, we researched other similar audio-based mystery games and came across “Wonderland” (, a mystery and puzzle game based solely on audio and exploring your surrounding environment. This led us to decide that our three audio scenes would use the recording studio as a set that players can explore, as the murder took place in that same environment. This was to be evident in the recordings themselves, as all sounds should be possible within the provided environment.

We also researched whether other projects have attempted to create a murder mystery game using interactive elements based around Arduino, and found (, which uses a choose-your-own-adventure element: a story is told to the player, and they make the decisions offered in the story by picking colored cards that are read by a color sensor to continue the story. Our project shares a similarity in that the Arduino is used as a trigger to deliver a portion of the story to the participant, but unlike their linear story, ours is presented to the player in fragments that they have to assemble.

We used Aaron Oldenburg’s research paper “Sonic Mechanics: Audio as Gameplay” ( as a guide to further explore how an audio-based game can have aspects of interactivity. This covers how spatial awareness can be built using sound alone, and how nuances in movement can be recorded and interpreted as sound within a closed environment.


Audio Files –

Github Link –


Fútbol – Narrative Space

Fútbol – An Interactive Experience


Michael Shefer – 3155884

Andrew Ng-Lun – 3164714

Brian Nguyen – 3160984

Francisco Samayoa – 3165936


For our narrative space we set out to create the environment of a soccer game using physical, visual, and audio installations. The installation was built as follows: three pylon stations are spread across the ground, accompanied by digital sensors, and projected on top of the installation is a mapping of a field with visuals and animations. All the sound and visuals are controlled by the digital sensors. Once the player passes one of the pylon stations, they press the switch, which triggers an animation from the projector to lead the player onward, as well as a variety of announcer recordings. This continues until the player scores a goal at the end of the installation. On top of all of this, we implemented a backdrop of crowd-cheering audio samples through speakers. The purpose of this concept was to enthrall the participants in an intense soccer experience where their actions and movements would draw a reaction from a crowd and the announcers. The bright animated visuals and loud dynamic audio were intended to fill the participant with emotion.


A lot of work had to be done to create our narrative space, so we broke it down into three portions: audio, visual, and physical. For the audio, we set out to the recording studio because we wanted to use as much self-recorded material as we could. Mainly, we wanted to record commentary that would reflect the actions of the player, whether they scored a goal, missed a goal, or passed through the pylons. For this we brainstormed and created a brief script to follow.


After going through our recorded audio in the studio we decided that we wanted a variety of announcers for the narrative space so that the participants would not get tired of hearing the same voice over and over. Additionally, we discovered that improvising yielded more genuine commentary.


After having every member of the group finish their recordings, we brought the audio files into FL Studio. With FL Studio, we were able to alter the audio files to resemble the sound of stadium commentary. Additionally, since we wanted to create a pool of commentary to be called upon at random, we had to cut our long studio recordings into short phrases, which we then organized in Maxuino.


For the physical portion of the installation, we created a simple digital switch using cardboard and tinfoil that acts as a pressure sensor. Two cardboard sheets with tinfoil and conductive thread act as the main triggers for the audio and visuals.


Originally the tinfoil spanned most of the cardboard sheet, but after a trial run the data showed that the switches were constantly being completed when we didn’t intend them to be, so we had to alter the sizes and placement of the tinfoil and add more space and resistance between the sheets.


Since the placement of the tinfoil was centered, we added pylons on each side of the switch, forcing the players to go over top of the cardboard and press onto the centre in order to yield data. Additionally, we had to make some changes to the goal post that contained the IR sensor. The IR sensor struggled to detect the ball, no matter its speed, when it went past the two posts we set up. If we increased the sensitivity, the IR sensor would occasionally pick up the movement of the player instead. We worked around this by placing a cone in the centre of the two posts that the player had to shoot for. With this, the IR sensor easily picked up the movement of the cone being struck and in turn triggered the celebratory goal animations and audio.


The visual aspect was created entirely in Photoshop, along with sourced GIFs. The animations consisted of the arrows that lead the player through the pylons and to the goal, the accompanying sourced GIFs, and the flashing celebratory goal animations. Each phase of the installation was split into four videos. We constructed the presentation by projecting the field and its animations and placing the cardboard buttons and goal post accordingly. We encountered issues with the visuals responding to the triggers: although data was being read and transferred to trigger the animations, there was an occasional delay between the transitions of videos because the files were too large, but manually triggering the transitions worked well.

Video of Final Product


Pictured above is the Maxuino patch used for organizing all the switches and their interactivity with the various commentary and visuals.


Documentation and Process Work

Stadium Field with various stages and animations

Audio Files of the various commentary along with sampled crowd audio

Additional documentation

Sneaking Out – Narrative spaces [ATLR II EX2]

Mahnoor Shahid – 3162358

Siyue Yasmine Liang – 3165618

Jin Annie Zhang – 3161758


Narrative Spaces – Sneaking Out [ATLR II EX2]

Our narrative revolves around you quietly trying to sneak out of the house at night without waking your parents. Your parents, however, have secretly set up traps around the house that play loud alarm sounds when triggered. The installation's setting represents the entryway of a house. You have to stay as quiet as possible and avoid these alarms while exiting. If an alarm goes off, you have to stop it by placing other objects on the sensors or by finding another workaround. As the interaction starts, your pet dog wakes up and starts barking for food. Your first challenge is to feed the dog within 30 seconds, before your parents wake up!

Interaction Steps

The installation has a challenging procedure that the player has to overcome.

  1. The first challenge is to stop the dog from barking by giving him food within 30 seconds (before the parents wake up).
  2. You have to grab the food bowl from the table and place it on the mat to stop the barking.
  3. The food bowl is on a light sensor, so when the bowl is taken off the table, the light sensor triggers an alarm sound.
  4. Now, the player has to find another object and cover the light sensor with it to stop the alarm.
  5. They can take a painting off the wall and place it on the sensor.
  6. As they move towards the door, there is a light beam from a flashlight shining on a light sensor; if the player crosses it, an alarm will trigger, so they must step over the beam.
  7. Near the door, there is a safe that the player has to open to get the car keys.
  8. The code has three numbers that are hidden around the room.
  9. The player has to strategically walk around the room and search for them without triggering the light beam alarm.
  10. The number can be found behind the painting, on the dog bowl, and near the table lamp.
  11. Once the code is found and correctly entered, the safe is unlocked and the player can grab the keys and sneak out!



Max Patches

Dog bowl switch and light sensor alarm trigger

This max patch had one digital switch and one analog light sensor.

The dog barking starts at the beginning of the game. When the dog bowl is placed on the mat, the digital switch closes, and the dog sound stops.

The dog bowl was on top of a light sensor. When the bowl is moved, the lamp shines directly on the sensor and an alarm sound triggers. The player has to cover the sensor with something else.
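The patch's two conditions can be sketched as a pair of checks (in JavaScript for illustration only; the real logic is a Max patch, and the threshold value is an assumption):

```javascript
// Dog-bowl stage: a digital switch under the mat silences the barking,
// while a light sensor under the bowl sounds an alarm the moment the
// bowl stops covering it. LIGHT_THRESHOLD is an assumed value.
const LIGHT_THRESHOLD = 500; // an uncovered sensor under the lamp reads high

function dogBowlStage({ matSwitchClosed, lightReading }) {
  return {
    barking: !matSwitchClosed,              // bowl on mat closes the switch
    alarm: lightReading > LIGHT_THRESHOLD,  // bowl lifted → sensor sees lamp
  };
}

// Bowl still on the table: sensor covered, dog barking.
dogBowlStage({ matSwitchClosed: false, lightReading: 50 });
// → { barking: true, alarm: false }

// Bowl moved to the mat, but the table sensor now sees the lamp:
dogBowlStage({ matSwitchClosed: true, lightReading: 900 });
// → { barking: false, alarm: true }
```

The second call shows why the stage chains: solving the barking problem is what exposes the light sensor and creates the next problem.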






Light Beam “Alarm System”


This Max patch has one analog light sensor. By default, the flashlight shines on the sensor, which keeps the alarm sound off. When a person passes through the light beam, the sensor receives no light and the alarm sound plays. The player has to make sure not to block the light beam to avoid alarms.
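Note that this check is the inverse of the dog-bowl sensor: here the alarm fires when the reading drops, because the flashlight normally keeps it high. A minimal sketch (illustrative JavaScript; the real logic is a Max patch, and the threshold is assumed):

```javascript
// Beam-break detector: the flashlight keeps the sensor reading high,
// so a low reading means someone is blocking the beam.
const BEAM_THRESHOLD = 200; // assumed cutoff between lit and blocked

const beamBroken = reading => reading < BEAM_THRESHOLD;

beamBroken(850); // flashlight on sensor → false (quiet)
beamBroken(40);  // player blocks beam  → true  (alarm)
```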






Atmosphere Sound


To give a nighttime atmosphere, we played crickets chirping in stereo.


Keypad and servo motor lock

The player has to find the code numbers 5, 9, and 4 and enter them in that specific order. When the code is typed into the keypad in the correct order, the servo motor rotates 90 degrees into a horizontal position. This unblocks the hole and allows the player to open the box cover.
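The lock logic amounts to matching an ordered digit sequence. An illustrative sketch in JavaScript (the real version runs on the microcontroller; the unlock callback stands in for the servo rotation, and the return labels are ours):

```javascript
// Ordered-code lock: digits must arrive as 5, 9, 4. A wrong digit resets
// progress (and would drive the buzzer); completing the code fires the
// unlock callback, i.e. rotating the servo 90° to horizontal.
const CODE = [5, 9, 4];

function makeLock(onUnlock) {
  let position = 0; // how many digits of the code have matched so far
  return function pressKey(digit) {
    if (digit === CODE[position]) {
      position += 1;
      if (position === CODE.length) {
        onUnlock();        // servo rotates, hole is unblocked
        position = 0;
        return "unlocked";
      }
      return "ok";
    }
    position = 0;          // wrong digit: start over
    return "buzzer";
  };
}

let unlocked = false;
const pressKey = makeLock(() => { unlocked = true; });
[5, 9, 1, 5, 9, 4].map(pressKey);
// → ["ok", "ok", "buzzer", "ok", "ok", "unlocked"]
```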




Code for the lock system




Materials

  • lamp
  • dog bowl
  • conductive fabric (digital switch)
  • dog mat
  • flashlight
  • 2 light sensors
  • painting
  • stickers

Audio Files

  • Dog barking (Barks for 30 seconds, then “Who’s there” dialogue starts).
  • Crickets
  • Alarm ringing
  • Buzzer sound when the code entered is incorrect.



Playing with Fire


Dimitra Grovestine, 3165616

Melissa Roberts, 3161139

Kiana Romeo, 3159835


This project is a physical experience of a German children’s story, Die Gar Traurige Geschichte mit dem Feuerzeug. The story is about a girl who is left home alone with her two cats, plays with matches, and accidentally sets herself on fire.

The juxtaposition between the project and the story lies in the fact that the user must play with fire in order to experience the story, and the moral of the story is not to play with fire. Through the manipulation of objects related to the children’s tale (a cat, a matchbox, a picture of a little girl, a pair of shoes, a casket, and a shoe-box house) the experiencer navigates through the story without ever reading it.

Though the story is narrated, and that narration is triggered by some objects, the story is not exposed to the user. The narration is in the story’s original language: German, meaning the user hears the story being told, but doesn’t understand what’s being said. The German narration, the images pasted onto the inside of the roof, and the cues on the objects hint towards the story, encouraging user interaction.


Continue reading “Playing with Fire”

Pub Trivia

Tyra D’Costa | Shiloh Light – Barnes | Omar Qureshi

Pub Trivia is a decentralised trivia game targeted towards groups of friends or pub-goers who want to play trivia where trivia is not being offered. Decentralised means there is no single command screen with exclusive control over the game's data or processes; the game stays dynamic as players communicate data directly between each other. Our trivia game requires each member to join a team, which they can do individually or collaboratively. Once all the players have joined, the questions begin and the console logs each player's score, the top three players, and which players have guessed correctly or incorrectly. The trivia game we put together focuses on orchestral music facts from the early 1800s to the late 1900s. Ideally, the player would have a multitude of topics and themes to choose from, but for the sake of demonstration we chose to focus on the topic of a random non-fiction book we were given. This documentation seeks to display the key concepts and functionality required to develop a decentralised trivia-style game as an app or webpage.

See the code here:

We decided to use PubNub for the backend because its Data Stream Network (DSN) and real-time data transfer capabilities are particularly useful for getting the players' devices to communicate with each other. We used p5.js for the game logic because it is what we are most familiar with, and it is fairly simple to build the necessary functions and structures with it. For the front end we formatted with HTML and CSS, separating the visual aspects of the code from the functional aspects to avoid errors and organise the code more effectively. We started with front-end development, which consisted of Illustrator mockups and the p5.js DOM library. We then built a loose wireframe of pure HTML buttons, labels, and inputs, which served as a playground environment to test whether PubNub would be a viable option to move forward with.



We settled on a simple PubNub system that sends messages on one channel to join a game and on another channel to answer questions, so each channel can carry a different object storing specific information. Each time a player answers a question, for example, a message is sent to all other users containing the player's name and the letter they selected (A, B, C, or D). The clients continue to receive answers until the total number of answers equals the total number of players. Once the last answer arrives, the clients remove all the elements from the screen and generate the results screen; a simple three-second timer then returns all players to the question screen. The client keeps selecting questions from the array until the number of questions asked equals a preset limit. Overall the development process went relatively smoothly, except for one problem that restricted our ability to support a dynamic number of players. We wanted to invite a variable number of players into the game depending on the circumstance, but because we could not integrate PubNub's "presence" features, we ended up with a fixed "numberOfPlayers" variable. If we continue with this project, we will focus on integrating a dynamic player count, along with general bug fixes and UI improvements.
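The answer-channel handling described above can be sketched with the network stubbed out. In the project, PubNub delivers each answer message to every client's subscribe callback; here a plain function stands in for that callback, and the field names are illustrative rather than our exact message schema:

```javascript
// Collect one answer message per player; when the count reaches
// numberOfPlayers, score the round and hand results to the UI.
const numberOfPlayers = 3; // fixed, since we couldn't use PubNub presence

function makeAnswerCollector(correctLetter, onAllAnswered) {
  const answers = [];
  // Called once per incoming message on the "answers" channel,
  // e.g. { player: "Tyra", letter: "B" }.
  return function onMessage(msg) {
    answers.push(msg);
    if (answers.length === numberOfPlayers) {
      const results = answers.map(a => ({
        player: a.player,
        correct: a.letter === correctLetter,
      }));
      onAllAnswered(results); // build the results screen from this
      answers.length = 0;     // reset for the next question
    }
  };
}

let lastResults = null;
const onMessage = makeAnswerCollector("B", r => { lastResults = r; });
onMessage({ player: "Tyra", letter: "B" });
onMessage({ player: "Shiloh", letter: "C" });
onMessage({ player: "Omar", letter: "B" });
// lastResults → [{player:"Tyra",correct:true},
//                {player:"Shiloh",correct:false},
//                {player:"Omar",correct:true}]
```

Because `numberOfPlayers` is a constant, a late joiner simply never completes the count, which is exactly the limitation that PubNub's presence features would have removed.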

CSS Reference: ferrysdayoff et al. Jun 11, 2018. "Styling HTML Forms". [Web Article]. Retrieved from:

  • Explains how most of the CSS was done. Including how to align buttons and input bars, edit button properties and add svg files to code.

JavaScript Reference: Berhanu, Yaphi. May 23, 2017. "How to Make a Simple JavaScript Quiz". [Web Article]. Retrieved from:

  • Explains the basic functionality, problem solving tasks, and features that need to be developed in order to create a trivia game. For example, we learned that we would need to use a loop to fill in the answer choices for the current question.

PubNub Reference: PubNub. Aug 30, 2016. "Solutions Home". [Webpage]. Retrieved from:
Inspiration + Ideating   

Once we chose "The Story of Orchestral Music and Its Times" as the book to base the project on, we contemplated the different interactive forms in which players could express their knowledge of orchestral history. Inspired by quiz games such as pub trivia, Kahoot, and HQ Trivia, we decided on a real-time multiplayer trivia game. From there we went back and forth between simply asking question after question and displaying results at the end, or showing a continuous real-time results screen with each player's current score. We went with the continuous option to keep players engaged and competitive with one another.