Alchamation : Making Magic with Augmented Reality



Salisa Jatuweerapong | Tyra D’Costa | Mahnoor Shahid


In this augmented reality project, our team set out to create a magical experience using the technology we learned in the last design challenge. We all enjoyed working with Unity and collectively decided to build on our AR development skills using Vuforia. Our main goals for this project were to include elements of storytelling and narrative, and to use original artwork for all of the assets. We settled on developing an app that lets users combine our designed artwork to create ‘magical powers’, realized as shaders in Unity. The artwork was printed on temporary tattoo paper and stickers so that users could apply ‘powers’ to their own bodies and interact with others in a playful and engaging way.


We derived our inspiration from Mirages and Miracles, a 2017 art exhibition by Claire Bardainne & Adrien Mondot. This work gave us the idea to integrate particle systems into our designs in a way that brought the artwork to life.


Mirages and Miracles Exhibition Pieces

Our challenge for this piece was to create a collaborative AR experience: to inspire social collaboration and movement from a medium that unfortunately often ends up socially isolating. Our approach was to engineer interactions off the screen, which we decided could be done through interactions with the tracking images. Thematically, we were inspired by alchemy and magical-girl themes: the idea of mixing different elements and powers (including those of friendship!) to create something bigger and better.


In the process sketches pictured below we have outlined two different foundations for the user experience design. The one on the left, design idea one, shows a scenario where users place their hand in the middle of the plinth and use their phone to activate the visual effects. The image on the right, design idea two, shows a scenario in which users place their hand into the plinth under an iPad to activate the visual effects. We also considered using wearable jewelry like bracelets or rings instead of body art; however, we opted for the body art as a more intuitive design feature. In the end, we went with design idea one because the iPad camera we were using was inconsistent and laggy with image tracking.


            Design Idea One                                      Design Idea Two 


Final Design Idea


Art Direction: Tattoos & Stickers 

Unused Tattoo Design Iterations

The idea behind the three tattoos was that when overlapped on top of each other, they would create the design seen on the acrylic plinth. Hence we decided on a geometric design for our tattoos, as basic shapes can combine to create interesting designs. To keep a common theme through our tattoos, we repeated an identical circular outline, arrows, triangles, squares and dashed lines.


One of our challenges was to keep the tattoos simple enough to transfer easily onto the skin without ripping the design. Another aspect we had to keep in mind was the Vuforia image-tracking rating: the tattoos had to be detailed yet asymmetrical to have a high feature count. Consequently, we had to edit two of our tattoos to be more asymmetrical to increase their rating to 5 stars. As a result, our original idea of the tattoos overlapping to show the exact design on the acrylic was dropped.

Iteration Process for best quality Image Tracking

Art Direction: Plinth

The artwork pictured on the acrylic plinth is a combination of all three tattoo designs layered on top of one another. We used this layered design to create a laser-cutter-friendly template, which we rasterized, scored and cut into a large acrylic sheet. The last step was to carefully glue the LED light strip onto the sides of the plinth so that the light would diffuse evenly and aesthetically.


Tech Design: Shaders and Particle Systems 


The visuals of the shaders would define the “magic” of the experience. We wanted our experience to elicit a sense of play above all; that was our rationale for choosing more “cartoony” effects. While early tests used VFX cinema-style light bursts and photo-realistic flames, we wanted our final “powers” to be less realistic and closer to a low-poly video game style. This was our first time working with shaders: Mannie explored Shader Graph early on, while Salisa experimented with the code route. Salisa’s shaders made it into the final build, with code created by closely following this tutorial. Each magic power is two shaders combined: two stacked balls that create a more interesting effect. An animation script was also written to make each “power” bloom to life upon activation. To show that each tattoo had latent power, a generic particle effect was applied to each tattoo so it would sparkle when detected by Vuforia; only when another tattoo was detected would a power bloom to life.
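The bloom animation can be sketched as a simple ease-out curve. This is a plain JavaScript stand-in for illustration only; the actual project used a C# animation script in Unity, and the specific curve shape here is an assumption:

```javascript
// Ease-out cubic: starts fast, settles gently into the full-size orb.
function easeOutCubic(t) {
  return 1 - Math.pow(1 - t, 3);
}

// elapsed and duration in seconds; returns the orb's scale factor in [0, 1].
// Each frame, the power orb's transform would be scaled by this value.
function bloomScale(elapsed, duration) {
  const t = Math.min(Math.max(elapsed / duration, 0), 1); // clamp to [0, 1]
  return easeOutCubic(t);
}
```

In Unity the same idea is usually driven from `Update()` or an `AnimationCurve`; the function above just makes the shape of the "bloom" explicit.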


Tech Design: Image Tracking and Distance Tracking

Thought Process involved in designing the distance tracking script

Using Unity, we created a C# script that accurately tracks the distance between three image targets. This was an integral part of the system because we wanted users to be close enough together to activate their ‘power combinations’. When users view their tattoos through our AR app, a weak particle system is activated, displaying to the user their unique power element (green core, blue flame or fireball). The script allows the system to detect when more than one image is being tracked. If two different image targets are being tracked and the total distance between them is small enough, the system activates a stronger shader effect depending on the image-target combination. When three image targets are tracked at an optimal distance, the users activate the largest shader effect, called ‘Big Blast’.
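The core of that script's logic can be sketched as follows. This is plain JavaScript rather than the project's actual C#, and the activation distance, function names, and return values are illustrative assumptions:

```javascript
// Assumed threshold in AR-space units; the real script tunes this by hand.
const ACTIVATION_DISTANCE = 0.3;

// Euclidean distance between two tracked-target positions.
function distance(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// targets: array of { id, x, y, z } for currently tracked tattoos.
// Returns 'idle' (weak particles only), 'pair' (combined power),
// or 'bigBlast' (all three tattoos close together).
function powerState(targets) {
  if (targets.length < 2) return 'idle';
  const allClose = targets.every((a, i) =>
    targets.every((b, j) => i === j || distance(a, b) <= ACTIVATION_DISTANCE));
  if (!allClose) return 'idle';
  return targets.length >= 3 ? 'bigBlast' : 'pair';
}
```

In the Unity version, the positions would come from the Vuforia image-target transforms each frame, and the returned state would enable the matching shader effect.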

Testing Proof of concept with Game Objects

Scripting Distance Tracking – In this process video you will see the first stages of creating the distance tracking script

Testing proof of concept on body

Activating Shaders and Particle Systems – In this process video we solidify our proof of concept by successfully activating shaders and particle systems with the distance-tracking script

Narrative Design

We created a brief narrative surrounding the installation to tie the visual elements and artwork together. In the future we would like to do some more character building by creating character models and personas that associate with the different tattoo types.

Narrative Concept

Narrative – Storytelling Presentation Video – In this video we detail the story of the three brave archers as part of the installation.

Finished Installation


CLICK HERE TO SEE VIDEO OF WORKING FINAL INSTALLATION – see the final product in action.


In this installation, users are drawn in by the narrative video and encouraged to take on a character persona. Each character has a specific tattoo associated with a unique elemental superpower. Users can apply the tattoo of their choice to their hands, then approach the plinth to find two other users with different tattoo markings. By connecting with other users at the plinth, a user can activate the full potential of their powers; as they combine tattoos with others, they unlock new, stronger powers with greater effects. In the future we hope to incorporate what we have done with this project into a gameplay setting. This could include anything from a VR game environment to a tabletop board game; ideation is still in the works. There’s potential for this to grow as a commercial game and ARG, along the lines of trading card games and LARPing.



Re-do: 90’s Quiz Game

90’s Quiz Game

Kiana Romeo -3159835

Concept/ Context

Today, mobile games are everywhere on everyone’s handheld device. Platform games and puzzles are among the most popular, but it’s quiz games that are beginning to take the world by storm. One game, HQ, is growing increasingly popular. It works by connecting millions of players worldwide into one huge quiz game in which 12 questions are asked and a process of elimination takes place: players get booted from the game once they answer a question incorrectly. Although the biggest incentive to play is the huge cash prize won by reaching the final question successfully, the extreme randomness of the game is incredibly entertaining as well. The questions begin with simple knowledge tests like “what colour do red and blue make?” but end with extremely complicated ones that no one with an average knowledge of the world would know. The thrill of racing against a clock to win a prize is an addictive experience, and I therefore wanted to create a game that would provide that, but also follow a theme.

The original project featured untimed questions about the 80s, which was fun but didn’t have the competitive edge that great quiz games do. The era I resonate with most is the 90s: the style, the shows, and the toys and games are all so intriguing to me. As well, most of my classmates were either born in the 90s or grew up in them, so the nostalgia factor would be enticing too.


Answer button, buzzer, name input

The players log onto the game server by opening one of the four buzzer HTML pages. Each has a different button design to create a more personalized experience. Once there, players can enter their name into a box and claim their spot in the game.


Their points are automatically set to 0 and can only increase if the host sees fit to award them any. When the game begins, the first question pops up. If a player knows the answer, they “smash” their buzzer; a sound plays and the player can answer the question verbally.


Timer, points, changing question slides, resetting the game

The host controls almost everything in the game. When a player answers a question correctly, the host awards a point. The host also controls the timer when a player buzzes in, as well as the slides.

Question slides


The question slides simply have the questions on them; when the host is ready to move on, they press “Next” on their side and the slide changes.


While the code worked perfectly separately, it was when I tried to tie it all together with PubNub that things started to go awry. I faced many challenges, most of them still unfixed. One challenge with the player controls was managing the game based on who buzzed first. It was difficult to make the first button press lock the others, so I worked around it by having each button press send a message to the host’s console telling them who hit it first.
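The workaround described above can be sketched as a small "first buzz wins" tracker on the host side. This is an illustrative sketch, not the project's actual code; the channel and field names in the comment are assumptions:

```javascript
// Host-side tracker: every buzz is reported, but only the first one per
// question is kept, so the host console always shows who buzzed first.
function makeBuzzTracker() {
  let firstBuzzer = null;
  return {
    // Called for every incoming buzz message; returns the first buzzer so far.
    onBuzz(playerName) {
      if (firstBuzzer === null) firstBuzzer = playerName;
      return firstBuzzer;
    },
    // Host clears the tracker when moving to the next question.
    reset() { firstBuzzer = null; }
  };
}

// On the player side, each button press would publish a message, e.g. with
// the PubNub JS SDK (names assumed):
// pubnub.publish({ channel: 'buzzers', message: { player: playerName } });
```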

Another challenge was controlling the timer from the player’s side. I wanted the 30-second timer to start counting down when a player buzzed in, but I found that to be a difficult task. I instead opted to add it to the host controls, and while I still don’t have it working properly, I believe it is better for the host of the game to have control over that anyway.
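The host-controlled countdown can be sketched as a small state object. In a p5 sketch this would be ticked once per second from `draw()` or `setInterval`; the structure and names below are assumptions for illustration:

```javascript
// A 30-second answer timer that only counts down after the host starts it.
function makeAnswerTimer(seconds = 30) {
  let remaining = seconds;
  let running = false;
  return {
    start() { running = true; },      // host presses "start timer"
    tick() {                          // called once per second
      if (running && remaining > 0) remaining -= 1;
      if (remaining === 0) running = false; // time's up
      return remaining;
    },
    remaining() { return remaining; },
    reset() { remaining = seconds; running = false; } // next question
  };
}
```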

The last issue I had was getting the slides to work. This issue also occurred in the original project, where pressing “Next” did not change the slide; in fact, the slides did not even show. The “loading” message stayed on the screen the entire time and the question slides never appeared. I therefore had to resort to using PowerPoint to present the slides, as I wasn’t able to use p5 and PubNub to control them.


Illustrator and Photoshop

The first thing I did was create the slides and button JPEGs. When I work, I like to have my visual components done first and then worry about the technical side, as the aesthetics and theme inspire me. At first I wanted to create the JPEGs in Illustrator, but I soon realized it would be harder than I thought to create good images and that it would be much easier to stick with Photoshop. For the buttons I was inspired by the ones used in game shows to buzz in answers: large buttons on square stands that players slam down to activate. I drew one button and came up with as many 90’s-like colour schemes as possible, switching them up for each button. This was a huge upgrade from the controllers of the old game, which weren’t as aesthetically pleasing and didn’t really go with the theme. As well, since these buttons were created based on feedback I got from the other project, I believe they are as well made and user-friendly as they can be.

For the slides, I basically copied the format I used for the first version of the game. The feedback I received on that game was that my colour schemes and aesthetic choices were mostly 90’s centric instead of 80’s. I was therefore able to just use my old slides and change the questions.

Research/ PubNub/ P5

I didn’t have to do much research on the theme of the project, as I am pretty well versed in 90s culture, whether it be fashion, pop culture or toys, but extensive research and thorough investigation had to take place to get PubNub to work, and even then it still did not do exactly what I wanted. On the other hand, JavaScript is something I am pretty good at, and I was able to get each component of the project to work separately. As most of the project consisted of JPEGs, it was mostly “mouseClicked” functions that made the project work.

 Critique/ Conclusion

Unfortunately on the day of presentations I came down with a stomach sickness and could not attend. Based on the critique I got from the first project though, I changed a lot of things to improve it. For the most part I am proud of how it turned out in the end and hope to somewhat get it working during the summer.

Code for the game







Atelier Art Heist

Donato Liotino

Ermar Tanglao

Joseph Eiles

Sam Sylvester

Vijaei Posarajah

Waters Art Gallery Heist


Project Concept:

For project four we decided to revisit project two’s theme of Narrative Spaces, in which we had created an interactive murder mystery based around a soundscape environment. We had learned that by not providing a clear plot summary or scenario introduction, most participants were lost and reliant on us for help. The murder mystery itself relied solely on finding evidence through audio, so there was a limit to the potential for interactivity. Based on the feedback given for that project, we sought to expand upon the narrative-space concept by creating opportunities for more interactivity within a self-reliant closed environment.

We thought of the concept of an “art heist” as a way to build upon the murder mystery and transition into an escape room. Much like a murder mystery, a small group would interact within a curated environment with a goal in mind. For the interactive elements we relied on Arduino to create a keypad-based safe, a potentiometer-based safe, and a laser security system based around flex sensors. Much like project two, for the soundscape elements we developed a script and recorded scenes that would be triggered when participants touched specific objects within the room. To allow the participants to be self-reliant within the closed environment, upon entry they were given paper clues and played an audio introduction explaining the plot, the goal, the rules and the tools they were provided with. The art gallery environment itself was created to replicate a private gallery focused on Renaissance art. This included Renaissance paintings covered in invisible-ink clues that could be found using a black light, as well as miniature statues on plinths. The participants are tasked with finding a specific art piece hidden within a locked box protected by flex-sensor lasers. As in the murder mystery project, they use a conductive glove to trigger scenes based around “memories” captured within the room; these clues point to the correct paintings, which hold the numbers used to crack a potentiometer safe containing the key they require. Below is the detailed sequence of events that showcases all the moving parts within the escape room.

Sequence of Events:

1) Enter the room, listen to the mission brief (Max) and receive an envelope containing:

– Brochure of art pieces (clue)

– Security Memo clue for keypad (aid)

2) Timer starts with music.

3) Find number code for the security keypad in the security memo.

4) Disable security keypad (arduino) and find black light inside. (The black light is needed to identify painting clues)

5) Find 3 audio memory clues (Max), each leading to a single painting.

– School of Athens painting (clue)

– Christ among Doctors painting (clue)

– Niccolò Mauruzi da Tolentino at the Battle of San Romano painting (clue)

6) Search each painting with black light for a potentiometer number clue.

7) Crack potentiometer safe (arduino) with the 3 numbers found in the right sequence.

8) Get key inside potentiometer safe.

9) Open masterpiece lockbox with key, avoid laser security flex sensors (arduino).

10) Leave room with masterpiece before time runs out. (20 mins).


After settling upon the idea of an art heist, we began conceptualizing our three puzzles. As we had relied mainly on tinfoil circuits in our previous studio murder mystery project, we wanted to create more elaborate and complex puzzles that showcased the creative potential of the Arduino. Keeping with the theme of the art heist, we bounced around several ideas, from giving the players a codex to decipher a code found in the paintings, to blacking out certain words in a book to provide a password. After much brainstorming and deliberation we settled upon a potentiometer safe, a keypad, and flex-sensor trip wires: they were achievable within the time we were given, we had a relatively good idea of how each puzzle would function in terms of sensors and circuitry, and they fit the theme of the art heist fairly well.

Studio recording for mission brief and audio scenes

3d printed statues & 3d modeling lock box. Discussion for keypad code.

Working on potentiometer, development of keypad lock.

Setup and testing of flex sensor lasers.

Audio editing for timer sound effect, incomplete potentiometer lockbox.

Invisible ink testing, and art gallery installation.

Preparation for gallery show.

Collaboration notes discussing traps within room, and initial room layout.

Video Documentation:

Max Patch and Sound:

Github Link:

The voice acting was all recorded in the audio lab using Adobe Audition with a condenser mic. To edit the audio, I exported the session into Logic Pro. Since the scenes were recorded by person and not in order of lines, I pieced each person’s dialogue lines into each scene. I cleaned up the audio using EQ and vocal effects, as well as adding in sounds from The Legend of Zelda, Super Mario, telephone sounds, and sounds found in the Max patch.


Grandpa: Vijaei

Narrator: Ermar

Security: Donato

Vernita: Ola

The Max patch is a set of toggles, each of which triggers an audio file to play its scene; it’s essentially a simplified DJ sampler.


Puzzle One Code:

Puzzle One Circuit:

Puzzle Two Code:

Puzzle Two Circuit:

Third Puzzle:

Opening the final chest is a fairly simple task. The chest’s keyhole is surrounded by a series of five red threads, which act as “laser beam” sensors, like one might see in a spy movie. One end of each thread is secured to a wall and the other end is attached to one of five flex sensors. Over half of each sensor is taped flat against the wall opposite the other end of the thread so that the threads are taut, whilst bending the sensors as little as possible.

The flex sensors are each attached to an analog input of an Arduino with 10K resistors. The Arduino runs off the power of a laptop and sends the five values over serial as a single string to p5, which interprets and assigns the values to five variables. The p5 code uses the sound library and has a function that plays an alarm sound file whenever a sensor goes beyond a designated threshold. It also has a small display on the canvas to help visualize the values the flex sensors are reading and to set the values at which the alarm starts playing.
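The p5-side logic described above can be sketched in a few lines. The serial format and the threshold value here are assumptions; the actual sketch tunes its thresholds from the on-canvas display:

```javascript
// Assumed analogRead value above which a thread counts as "tripped".
const ALARM_THRESHOLD = 600;

// Parse one serial line like "312,305,640,298,310" into five numbers.
function parseReadings(line) {
  return line.trim().split(',').map(Number);
}

// True if any flex sensor has bent past the threshold (a thread was hit).
function isTripped(readings, threshold = ALARM_THRESHOLD) {
  return readings.some(v => v > threshold);
}

// In the p5 sketch this would drive the sound library, roughly:
// if (isTripped(parseReadings(serialLine))) alarm.play();
```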

Puzzle Three Code and 3D Model CAD files:

Puzzle Three Circuit:



The potentiometer box was designed with non-destructive modelling in Fusion 360. When designing the mechanism, we calculated the dimensions of the base of the Thinker statue, the space it needed in order to slide and reveal the hole, and the distance from the shaft of the motor to the inside wall. That way, everything moved smoothly without the statue falling off, the hole being too small, or there not being enough room to store the Arduino components and power.

The box can be disassembled into four 3D-printed parts. There is the base, a 165-millimetre cube with three holes in the front. Beside each hole is a small slot for the potentiometers to hook onto so that they do not rotate when being turned.

The second component is a platform that goes into the box first. It is as tall as the servo is deep. The purpose of it is to create a platform for the chest’s key to be placed on as high as possible, to both provide room to hide the Arduino components, as well as make the key reachable since the hole on top is rather small for a whole hand to fit into. It also has a small section cut out in the back so that it can go around where the motor mount attaches.

The third component is the servo mount. The servo goes in after the platform and slides snugly into a dovetail slot. The reason the servo goes in after the platform and is not just printed as part of the base box is so that the platform would not need to have a large hole in it to fit around the mount, thereby exposing the Arduino and wiring, or letting the key fall to where the players cannot reach. The mount also has a small cylinder which fits through one of the holes of the servo which would regularly be used for a screw so that the motor doesn’t slide out of the exposed front. The front is exposed so that the wires on the front can fit in while sliding in from the top.

Finally, the lid has three main aspects. The lid is the same width and length as the box’s height and has an inset perimeter ridge along the bottom so that it does not slide off when the motor turns. The top of the lid has two holes in it. One hole is for the Statue to cover where the key is visible, and the second hole is fitted for the servo to poke out and turn the statue.

Re-Do / Un-Do / Over-Do – Starfish Generator

Digital Ocean

Michael Shefer – 3155884

Andrew Ng-Lun – 3164714

Brian Nguyen – 3160984



For the final assignment for Atelier 2, our group set out to revisit one of our very first assignments. Although we weren’t all initially in the same group, everyone enjoyed the approach to the starfish concept and saw potential in expanding on it. When tackling ideas on how to expand the initial project, the group settled on interactivity amongst individuals. We explored ideas where individuals would interact with the starfish, the environment, and other people. Ultimately, we went back to the initial concept of generating a starfish and worked on creating an interactive aspect where individuals could create a personalized starfish and add it to an archive of other people’s creations. The project operates as follows: on one screen, the individual can alter physical properties of the starfish such as the number of legs, the length of the legs, the thickness of the body, and the colour. Along with that, users can search for any image they want and have it texture-mapped over the starfish. Once the creation is done, the user sends the starfish to a second screen that holds all previously generated starfish.


When approaching the project we prioritized expanding on previous attempts, concepts, and limitations and wanted to rebuild the project as a new experience. It went as far as completely removing the starfish aspect and focusing on generating and interactivity. Ultimately, we continued with the generator and focused on implementing previous suggestions.


The first starfish generator prototype used PubNub to link the various screens together, but for our attempt we utilized Firebase to archive all the user-generated starfish, which eventually appear in our aquarium. The different screens, such as the generator itself and the final aquarium display, operate as websites of their own hosted on GitHub. One of the previous limitations of the starfish generator was the lack of customization, since users were only able to select from a small array of options. To expand on this, we gave users the ability to search for an image to use as the starfish’s texture: using Google Images (along with WebGL), the first result is selected and then mapped over the starfish as a texture. Additionally, we opted for sliders instead of a fixed selection of options, giving users a far greater range of values and more freedom with their creations.
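Before a starfish is archived, its slider values need to be packaged into a record. The field names below, and the Firebase call in the closing comment, are assumptions based on the description rather than the project's actual schema:

```javascript
// Bundle the generator's slider values and chosen texture into one record
// suitable for archiving.
function makeStarfishRecord(params) {
  return {
    legs: params.legs,             // slider: number of legs
    legLength: params.legLength,   // slider: leg length
    thickness: params.thickness,   // slider: body thickness
    color: params.color,
    textureUrl: params.textureUrl, // first Google Images result used as texture
    createdAt: Date.now()
  };
}

// With the Firebase web SDK, this record would then be pushed to the archive,
// e.g. firebase.database().ref('starfish').push(record), and the aquarium
// page would listen for 'child_added' events to draw each new arrival.
```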

First iteration of the starfish being sent to the aquarium


With the customization built, we also worked on adding life to our 2D objects by animating the starfish. Once archived and added to the aquarium, the starfish travel at random, with noise added to mimic organic starfish movement. Although we did experiment with applying other functions, such as sine, to the starfish, we settled on noise simply because of how we constructed the object. Other explorations saw us attempting to use Bézier curves to construct the starfish, but we ran into similar complications because of how we built the starfish’s inner and outer radius.
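The noise-driven drift can be sketched as a per-frame update step. p5's `noise()` returns smooth values in [0, 1]; a simple deterministic stand-in is used here so the update can run outside p5, and the speed and offset values are assumptions:

```javascript
// Stand-in for p5's noise(): smooth, deterministic, always in [0, 1].
function fakeNoise(t) {
  return 0.5 + 0.5 * Math.sin(t * 1.3) * Math.cos(t * 0.7);
}

// Move one starfish each frame. Noise is mapped from [0, 1] to [-1, 1]
// so the starfish can drift in any direction, mimicking organic movement.
// star.seed offsets each starfish into its own region of the noise field.
function updateStarfish(star, t, speed = 1.5, noiseFn = fakeNoise) {
  star.x += (noiseFn(t + star.seed) * 2 - 1) * speed;
  star.y += (noiseFn(t + star.seed + 1000) * 2 - 1) * speed;
  return star;
}
```

In the actual p5 sketch, `noiseFn` would simply be `noise` and `t` would advance with `frameCount`.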


Finally, the background was created within Photoshop and since we were using WebGL we also had to map the PNG of the aquarium over a plane to work as our canvas background.

Final Prototype Build

The Aquarium Populated with People’s Creations

Explorations and Challenges

Although the final build resembled what we had initially drafted out on paper, we weren’t able to incorporate all our ideas either due to limitations or complications.

We wanted to emphasize interactivity with our generated starfish by allowing the user to name their starfish or, similar to the previous prototype build, give the starfish a scientific name. Then, once in the aquarium, users would be able to hover over any starfish and see its designated name. Unfortunately, since WebGL was such an integral part of the programming, it limited our capabilities with text.

Toronto & CO: AR Tour Book

Documentation: Atelier Unit 3: Final Project: Ambient Sound

Kiana Romeo | Dimitra Grovestine | Tyra D’Costa | Sam Sylvester |Ola Soszynski


Our concept for our final project was developed from the idea of tourism and everything the city of Toronto has to offer. Originally, we thought about having our set space be the entire city, where tourists could walk around and discover history through images and audio. We then decided to make an interactive promotional piece for Toronto which could immerse tourists, here and elsewhere in the world, in our city’s culture.



Being such a diverse and ever-changing city, there’s always something new and exciting to learn about and explore here in Toronto. Although experiencing what our city has to offer firsthand is ideal, it is not always as simple as jumping on a plane and flying overseas, and even for those who want to do just that, it is definitely a good idea to first explore what the destination has to offer in terms of tourism and entertainment. By creating an immersive and interactive tour pamphlet, we have given people the opportunity to experience Toronto from the comfort of their own home before their visit to the city.

Project Breakdown

  • Promotional Piece (a graphically designed pamphlet for tourists to access)
  • Representative Blender objects for each highlighted Toronto element
  • A general ambient sound
  • A corresponding sound or story through audio for each highlighted Toronto element
  • An app created through Unity


Division of Roles

Unity App Development  – Tyra

Blender 3D Modelling – Kiana, Dimitra & Tyra

Audio Design – Sam, Tyra & Dimitra 

Graphic Design – Tyra & Ola


  1. Research
  2. Develop App in Unity
  3. Model Objects in Blender
  4. Record and Mix Sound in Reaper
  5. Design and Print Magazine in Photoshop

Unity Testing

One of the first tests Tyra conducted for Unity combined the camera with the 3D objects. She did this by first building an AR business card and attaching video, sound and images to the AR object in Unity.


Next, Tyra started to build the final application with five image-tracking anchors. Nested within the anchors are five different prefabs containing the modelled objects from Blender. In each prefab, modifications were made to include ambient sound and touch-based interaction. Using Lean Touch, Tyra was able to incorporate an interactive aspect between the user and the AR object: the Lean Touch script allows users to scale, rotate and move the objects within the AR space. The last step was to replace the test objects, images and videos with the final project assets.




For many of us, the models created in this project were our first attempts at working in Blender without the help of an instructor. Our modelling team put together five models: a coffee cup (Tyra), a Timbit (Dimitra), a hockey puck (Dimitra), a CD (Dimitra), and a teepee (Kiana). These models were then exported and sent to Tyra to be uploaded into the Unity AR application.

With Materials



Without Materials


Graphic Design

The booklet was centred around being a visual aid for the rest of the experience. As such, Tyra designed the layout to be quite simple, with a focus on photos and headlines of the topics covered. We debated between filler text and full information, and eventually settled on filler text, due to time constraints and so as not to distract from the core of the simulation. The pages are laid out in InDesign with a modern layout and format. This process was relatively simple, as InDesign is something Ola has worked in for several years, with the biggest dilemma being the print layout of the pages, which Ola settled after changing the page setup and order. The booklet itself is hand-bound, so that the product lasts longer and feels a bit more solid.



The audio team brainstormed and mixed together a variety of Canadian-inspired pieces, such as poems, songs and field recordings, using Reaper. These were then sent to Tyra and uploaded into the Unity AR application.

  • Bad Canadian Songs (Sound Sourcing done by Dimitra, Editing done by Sam)
  • Hockey Arena Foley (Sound Sourcing done by Dimitra, Editing done by Sam)
  • Hey Trudeau Poem (Sound Sourcing done by Dimitra, Editing done by Sam)
  • Crickets (Field Recording done by Tyra, Editing done by Tyra)
  • Crowd Sounds (Field Recording done by Tyra, Editing done by Tyra)




Collapsing City – Immersive Sound and Augmented Spaces

Collapsing City – An AR Experience

Michael Shefer – 3155884

Andrew Ng-Lun – 3164714

Brian Nguyen – 3160984

Rosh Leynes – 3163231


For our immersive sound and augmented spaces project we set out to create a narrative space developed with sound, assisted by visuals, all experienced within an AR environment. A set of four rooms was modeled and built in Unity, each with variations in concept and layout. All the rooms are connected to each other, allowing for linear and seamless storytelling. Essentially, with a phone, the audience maneuvers through the four different rooms and listens to the sounds of each room to understand the narrative. As the user approaches certain objects in the AR space, a proximity sound plays to further tell the story. The narrative of the space follows a city collapsing due to global tensions. The initial room is a regular day in the city, accompanied by music, children playing, and basic city chatter. The scene connects to a room with a television that cuts from a game of football to an emergency broadcast of nuclear war. The next scene is a dense city environment with a ruined building, air-raid sirens, and fire. The final scene is the city in complete ruin, with buildings decimated, rubble scattered, and an ambient howl of the wind.




When developing the concept, we knew that we wanted to tell the story through different rooms and environments, each with a variation of sounds to accompany it, so we first organized the rooms with a rough sketch.



Originally we wanted to set the whole scene onto a plane where the audience could look over and closer at details while experiencing the change in the narrative, but we then decided that having the audience go through each room individually, experiencing it as if they were in the environment, would yield a stronger reaction and connection to the narrative. When creating the rooms separately, we initially had the idea of having the audience teleport to each room after stepping through a door frame or entrance to the scene. We scrapped that idea because our intent was to have the audience experience the change in the environment, so we bridged all the individual rooms together to create a seamless experience. As we were all working on one Unity project and making frequent changes, we used Unity Collaborate, which allowed us to import each other's changes.



Since sound was a main component to this experiment, we worked with triggers and ambient sounds. On top of general ambience in each scene to establish the environment, we included triggers that would aid in telling the narrative. For instance, if approaching the television, a soccer game cutting into an emergency broadcast would play. Additionally, if approaching rubble, the sound of explosions would trigger to play to visualize what had happened. Although we had visuals and 3D models to create our environments, the sound was crucial to tell the story.



We experienced several challenges throughout the development of our experiment, which were eventually resolved, but two stood out: the sound triggers and the scale of our approach.

For sound, we wanted each scene to vary in atmosphere, so we used triggers to achieve this. A trigger requires something physical to pass through it in order to fire the sound. So when the camera moves into the room with the television, something had to interact with the trigger in order to activate the sound. We worked around this by attaching a cube to the camera and adding a Rigidbody component to it so that it would be able to interact with the triggers.
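The trigger logic described above can be modeled outside Unity as a small Python sketch (illustrative only; the box coordinates, clip name, and class names are invented, not from the project): a proxy box follows the camera, and each trigger volume fires its clip only on first entry.

```python
# Hypothetical model of a sound trigger: a proxy "cube" follows the
# camera, and a clip fires once when the cube first enters the volume.

def overlaps(box_a, box_b):
    """Axis-aligned overlap test; each box is (min_x, min_y, max_x, max_y)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

class SoundTrigger:
    def __init__(self, volume, clip_name):
        self.volume = volume        # trigger region in room coordinates
        self.clip_name = clip_name  # e.g. "tv_broadcast" (made-up name)
        self.fired = False          # play the clip only on first entry

    def update(self, camera_box):
        if not self.fired and overlaps(camera_box, self.volume):
            self.fired = True
            return self.clip_name   # caller starts playback
        return None
```

Called every frame with the camera proxy's current box, this returns the clip name exactly once, mirroring how a Unity trigger collider fires `OnTriggerEnter` a single time per entry.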


The largest challenge we encountered was how we approached the experiment. We really enjoyed the idea of having variety, especially in scenes that tell a narrative, so we focused most of our development on building these scenes and rooms, accompanied by sound, particles, and an assortment of sourced 3D models. Throughout development we had to regularly scale down our scene sizes and limit particles and lighting to effectively run the build on phones. In the end, the build ran with lag, and users weren't able to make it through the first scene due to its sheer scale.


The Dating Game

Tyra D’Costa | Ola Soszynski | Samantha Sylvester

The Dating Game is intended to be an interactive storytelling tabletop interface. Our inspiration came from the short story “Cat Person” by Kristen Roupenian, which recently went viral after being published in The New Yorker. We wanted to mold Roupenian’s short story into what we recognize as the dreaded, inevitable stages of dating in today’s day and age. The work we have created is meant to discuss the drastically changing landscape of romance in the 21st century in comparison to the age-old sexual motivators found in human nature.


How it Works

The board itself is connected to an Arduino Uno and Maxuino; it has eight wired buttons and two conductive switches that can be activated by touch. When activated in order, the buttons tell a story through audio and visual assets that we created with After Effects and Audition. In between the video sequences, the user is prompted to make decisions about the story and characters; however, regardless of what they choose, the story ends the same way. The user interaction is intended to reflect the frustrated, sometimes powerless feeling that Roupenian’s main character Margot feels in the short story “Cat Person”.

Our Process


Materials Used:

  • Copper tape
  • 8 sensors (button, sound, mic, pressure, etc.)
  • Speaker
  • Thin plate of wood


  • Projector
  • Arduino Uno
  • Soldering iron
  • Speakers

Software Used:

  • Adobe Premiere
  • Arduino
  • Maxuino
  • Touchdesigner
  • Adobe Illustrator

1.         Brainstorming

We came up with many ideas and potential topics to explore simply by making a flowchart to help interrogate various elements of the UX/UI design. One of the main things we wanted to be part of the experience was the ability for the user to provide input.






2.         Voice Acting and Recording

Ola took the original narrative and constructed a new script that was simplified and more compatible with the aims of our project. Using the new and improved script, we headed to the recording studio, where we worked together to record the voice-over narration. Next, we made a list of background sounds we wanted to add to the narration, and we recorded those too. Sam took the raw audio files and mixed them together to create eight separate audio tracks for the final output.


3.         Visuals and Animation

Ola worked in After Effects to create eight individual animation files that would play at the push of a button. However, all of the files were corrupted and the work was lost, resulting in a very frustrated Ola. Luckily, Sam came up with the idea of using colors rather than visual scenarios to communicate the ideas and emotions in the story. Together, Ola and Sam did some research on color theory and applied the knowledge to the visual element of the project.



4.         Technical Work

Tyra put together the user experience design framework and the Max patches required to activate the functionality of the board. Tyra began by laying out the exact flow and structure of the user experience via pen and paper sketch wire framing. The process began with designing the layout of the board itself, as well as the content and syntax of prompted user interactions. The main Max patch consists of eight buttons that open and play media files when activated, each button has a different video and sound that becomes part of the overall story. Within the main patch is a sub patch, this sub patch is connected to the ‘swipe left’ and ‘swipe right’ feature of the board that makes it interactive.

The sub patch is responsible for opening and playing video files from two separate lists. When prompted, the user has to choose to ‘swipe left’ or ‘swipe right’; whichever they choose activates the next video file from the corresponding list. At the same time, the sub patch eliminates the video file that was not chosen by the user. This was an attempt to sequence the movie files and make sure they only play under the right conditions. However, the interactive functionality of this project was not developed as well as it could have been. Unfortunately, the Max subscription Tyra was using was unlicensed and ran out before she could debug the problem with the sub patch. Overall, the sub patch does achieve its base function of movie sequencing; however, it does not eliminate the unselected video files, which is necessary to maintain the syntax of the story progression.
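The intended sub-patch behaviour can be sketched outside Max as a small Python model (illustrative only; the class name and file names are made up): each swipe plays the next clip from the chosen list and discards its counterpart from the other list, which is the elimination step the actual patch was missing.

```python
# Logic model of the swipe-driven sequencer: two parallel lists of
# video files, one clip per decision point in each list.

class SwipeSequencer:
    def __init__(self, left_clips, right_clips):
        self.left = list(left_clips)
        self.right = list(right_clips)

    def swipe(self, direction):
        """direction is 'left' or 'right'; returns the clip to play."""
        if not self.left or not self.right:
            return None  # story finished
        left_clip = self.left.pop(0)
        right_clip = self.right.pop(0)
        # Both clips leave their lists: the unchosen one is discarded so
        # it can never play later, keeping the story's syntax intact.
        return left_clip if direction == "left" else right_clip
```

For example, with lists `["L1.mp4", "L2.mp4"]` and `["R1.mp4", "R2.mp4"]`, swiping right then left plays `R1.mp4` followed by `L2.mp4`, and `L1.mp4`/`R2.mp4` are never shown.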


5.         Rapid Prototyping

Tyra  designed the physical board that holds the electronic components using Illustrator and a Laser Cutter from the Rapid Prototyping Center. Ola designed the circuit and put it all together. 





6.         Putting it all together

Together we integrated the circuit into the physical prototype board, then we plugged in the Arduino and connected to the Max patcher. Finally, we hooked up the projector and speakers so that the visual and audio experience could be shifted from the centrality of screen to the openness of the physical world.


The story takes place over time, but is told over the course of a few button presses from the user, filling the space with feelings of nostalgia, relatability, and humour. Through these fragmented experiences, the interpretation of the work can vary from viewpoint to viewpoint, making each interaction with the space unique. For example, someone who relates to the story might understand the place in which it is situated as somewhere they belong and feel heard, whereas someone who is confused or puzzled by the story might interpret the space as distant, maybe even emotionally hostile.


Studio Murder Mystery


Joseph Eiles

Ermar Tanglao

Vijaei Posarajah

Narrative Spaces – Studio Murder Mystery

Project Description

For our Narrative Spaces project we chose to create a murder mystery that takes place within a music studio. The player takes on the role of a detective investigating a cold case wherein the lead singer from an 80’s rock band was found dead within the recording studio. In the story we have six characters; Gabriel Newel the bassist, Jim Petty the drummer, Teddy Lorne the guitarist, Micky Strats the singer, Max Powers the manager, and Herman Dale the studio tech.

Each character had their own motive for committing the crime. Gabriel’s motive was that he’s jealous of the lead singer and disputes his position within the band. Teddy’s was that the victim cheated with his girlfriend. Jim’s motive was that the victim was in debt with him for a large sum. Max’s motive was that he’s enabling the victim’s drug habit and profiteering from him. Lastly, Herman’s motive is inconclusive.

Within the scene we placed objects that acted as clues such as a pill bottle, a bag of flour that acted as cocaine and bloody fingerprints on the piano. For clues such as the autopsy report and the police report we decided to hide them around the studio as we wanted the player to explore the area as a detective would.

For the audio portion of the project, we created switches that turn on the audio; these switches are placed on the clues that the player has to find. The position of each clue is based on where its scene takes place, such as scene two taking place in the recording booth. The clues themselves are based on what happened in the scene: the ripped contract belongs to the first scene, wherein the manager discusses the contract with the band; the microphone is the second scene, wherein the band is practicing and a dispute breaks out between the singer and the bassist; and the skull, the third scene, represents the skull of the singer who died and what happened before and during his murder. The player is also equipped with a glove with conductive fabric sewn to it, which closes a switch when placed upon a clue and plays the audio file assigned to that clue.

Process and Documentation

To create the audio for the scenes, we each assigned ourselves a character and recorded ourselves in the recording booth. We first read through the script, figuring out how we were going to voice each character according to their personality. When recording, we performed each character’s lines together for a given scene and cut the take into multiple audio files when finished. For the sound effects, we had to improvise some sounds, such as kicking a chair, slamming the door, shaking a pill bottle, and moving sugar around on a wooden slab.

For the editing process, we edited our audio in Adobe Premiere. We decided that two of us would each create our own rendition of the audio and see which one fit best. One rendition had a slight delay added to it to create the idea of a flashback happening; the other had more sound effects but no delay. We decided that the audio without delay was much better, so we went with that. During the intruder scene, we added delay, echo, amplification, reverb, and pitch correction to mask the killer’s voice.


Initial Arduino setup testing.


Studio recording session.


Audio Recording Script.


Editing of the scenes through Adobe Audition.



Prop evidence used for the Murder Mystery.


Final Setup for the clues and evidence.


Arduino setup under the desk.

Arduino Circuit

For our switches we used tinfoil, as it was relatively reliable as an on/off switch and a material with which we had a lot of experience from previous projects. The tinfoil was arranged so that there was a gap between the two strips, and the circuit would be completed once the user placed the palm of the glove, lined with conductive fabric, over the two strips. The glove worked perfectly from both a technical and a thematic perspective: it was able to complete the circuit, and it fit the idea of a detective putting on special gloves so as not to compromise any evidence in an investigation.


The Arduino circuit consisted of six tinfoil strips that were connected to a 5V power source, a ground connection that was filtered through 10k resistors, wires that connected them to digital inputs, and alligator clips that connected the jumper wires to the tinfoil. We chose to work with digital inputs as we were essentially working with on/off switches and thus had little use for anything analog.

Maxuino Code


The Maxuino Patch consisted of three switches that would play a sound file once the user placed their hand upon one of three clues. The code was arranged in such a way that the sound files are preloaded and played once the switch sends a ‘1’. The sound file is able to continuously play until the end of the file even if the user releases their hand from the clue and the switch is turned off. Earlier challenges with the Maxuino patch included a sort of ‘double trigger’ wherein the sound file would get played twice as the switch recognized both a ‘1’ and ‘0’ as a valid input; this was later fixed by preloading the files and placing a ‘sel 1’ to filter out any potential repeats.
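The ‘sel 1’ fix described above has a simple analogue outside Max (a sketch for illustration, not the actual patch): only a rising 0→1 transition from the switch starts playback, so the 0 that arrives when the hand lifts off the clue cannot re-trigger the clip.

```python
# Rising-edge filter: mirrors what 'sel 1' does in the Max patch by
# passing only 0 -> 1 transitions and ignoring everything else.

class EdgeTrigger:
    def __init__(self):
        self.last = 0  # previous raw reading from the switch

    def read(self, value):
        """value is the raw 0/1 from the digital input.
        Returns True only on a 0 -> 1 transition."""
        rising = (self.last == 0 and value == 1)
        self.last = value
        return rising
```

Feeding it the stream 0, 1, 1, 0, 1 yields one trigger for each distinct press (at the two 0→1 edges), never on release.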

Context and Inspiration

Our project was developed from a basic concept of interactive objects contributing to an overall sound-based narrative. The idea of using a skull for one of the objects allowed us to narrow our project down to an interactive murder mystery. Our project consisted of paper and other non-interactive evidence to ground the murder mystery, as well as three pieces of evidence that can be touched with a conductive glove to close a circuit and play an audio file related to the murder mystery.

For research purposes we looked at how murder mystery games are set up and what is required for an adequate experience. It was from a blog on the subject that we realized our game would need a very specific theme, and that for recording purposes we would require a very detailed script listing the types of sounds to record. As a group we decided to use the recording studio as the set for the murder mystery and agreed on a theme based around an 80’s rock band aesthetic. We came up with a plot and backstories for the characters involved, as well as who would be the culprit in the end.

For the recording of the scripted scenes, we researched other similar audio-based mystery games and came across “Wonderland”, a mystery and puzzle game based solely on audio and on exploring your surrounding environment. This led us to decide that our three audio scenes would use the recording studio as a set that players can explore, since the murder took place in that same environment. This was to be evident in the recordings themselves, as all sounds should be possible within the provided environment.

We also researched whether other projects have attempted to create a murder-mystery-style game with interactive elements based around Arduino, and found one that uses a choose-your-own-adventure element: a story is told to the player, and they make the decisions offered in the story by picking colored cards that are read by a color sensor to continue the story. Our project shares a similarity in that the Arduino is used as a trigger element to provide a portion of the story to the participant, but unlike their linear story, ours is presented to the player in fragments that they have to assemble.

We used Aaron Oldenburg’s research paper “Sonic Mechanics: Audio as Gameplay” as a guide to further explore how an audio-based game can have aspects of interactivity. This covers how we build spatial awareness using sound alone, and how recorded nuances in movement can be interpreted as sound within a closed environment.


Audio Files –

Github Link –


Fútbol – Narrative Space

Fútbol – An Interactive Experience


Michael Shefer – 3155884

Andrew Ng-Lun – 3164714

Brian Nguyen – 3160984

Francisco Samayoa – 3165936


For our narrative space we set out to create the environment of a soccer game using physical, visual, and audio installations. The installation was built like so: three pylon stations are spread across the ground, accompanied by digital sensors, and projected on top of the installation is a projection mapping of a field with visuals and animations. All the sound and visuals are controlled by the digital sensors. Once the player passes one of the pylon stations, they press the switch, which triggers an animation from the projector to lead the player onward, as well as a variety of announcer recordings. This continues until the player scores a goal at the end of the installation. On top of all of this, we also implemented a backdrop of crowd-cheering audio samples through speakers. The purpose of this concept was to enthrall participants in an intense soccer experience where their actions and movements draw a reaction from the crowd and the announcers. The bright animated visuals and loud dynamic audio were intended to fill the participant with emotion.


A lot of work had to be done to create our narrative space and so we broke it down into three portions: audio, visual, and physical. For audio we set out to the recording studio because we wanted to use as much self-recorded material as we could. Mainly, we wanted to record commentary that would reflect the actions of the player whether they were to score a goal, miss a goal, or pass through the pylons. For this we brainstormed and created a brief script to follow.


After going through our recorded audio in the studio we decided that we wanted a variety of announcers for the narrative space so that the participants would not get tired of hearing the same voice over and over. Additionally, we discovered that improvising yielded more genuine commentary.


After every member of the group finished their recordings, we brought the audio files into FL Studio. With FL Studio, we were able to alter the audio files to resemble the sound of stadium commentary. Additionally, since we wanted to create a pool of commentary to be called upon at random, we had to cut our long studio recordings into short phrases, which we then organized in Maxuino.
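One way such a pool might be drawn from at random is sketched below in Python (an illustration of the idea, not the Maxuino patch; the clip names and the no-back-to-back-repeats rule are assumptions):

```python
import random

# Sketch of a commentary pool that picks clips at random while never
# playing the identical phrase twice in a row.

class CommentaryPool:
    def __init__(self, clips, rng=None):
        self.clips = list(clips)
        self.rng = rng or random.Random()
        self.last = None  # most recently played clip

    def next_clip(self):
        # exclude the previous clip so commentary never repeats back to back
        choices = [c for c in self.clips if c != self.last]
        clip = self.rng.choice(choices)
        self.last = clip
        return clip
```

Seeding the `rng` argument makes the selection reproducible, which is handy when testing the installation.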


For the physical portion of the installation, we created a simple digital switch using cardboard and tinfoil that would act as a pressure sensor. Two cardboard sheets with tinfoil and conductive thread attached were the main triggers for the audio and visuals.


Originally the tinfoil spanned most of the cardboard sheet, but after a trial run the data showed that the switches were constantly being completed when we didn't intend them to be, so we had to alter the sizes and placement of the tinfoil, as well as add more space and resistance between the strips.


Since the placement of the tinfoil was centered, we added pylons on each side of the switch, forcing the players to go over the cardboard and press onto the centre in order to yield data. Additionally, we had to make some changes to the goal post that contained the IR sensor. The IR sensor struggled to detect the ball, no matter its speed, when it went past the two posts we set up. If we increased the sensitivity, the IR sensor would occasionally pick up the movement of the player instead. We worked around this by placing a cone in the centre of the two posts that the player had to shoot for. With this, the IR sensor easily picked up the movement of the cone being struck, which in turn triggered the celebratory goal animations and audio.


The visual aspect was created entirely in Photoshop, along with sourced gifs. The animations consisted of the arrows that lead the player through the pylons and to the goal, the accompanying sourced gifs, and the flashing celebratory goal animations. Each phase of the installation was split into four videos. We constructed the presentation by projecting the field and its animations and placing the cardboard buttons and goal post accordingly. We encountered issues with the visuals responding to the triggers: although data was being read and transferred to trigger the animations, there was an occasional delay between transitions because the video files were too large, though manually triggering the transitions worked well.

Video of Final Product


Pictured above is the Maxuino patch used for organizing all the switches and their interactivity with the various commentary and visuals.


Documentation and Process Work

Stadium Field with various stages and animations

Audio Files of the various commentary along with sampled crowd audio

Additional documentation

Sneaking Out – Narrative spaces [ATLR II EX2]

Mahnoor Shahid – 3162358

Siyue Yasmine Liang – 3165618

Jin Annie Zhang – 3161758


Narrative Spaces – Sneaking Out [ATLR II EX2]

Our narrative revolves around you quietly trying to sneak out of the house at night without waking up your parents. But your parents have secretly set up traps around the house that play loud alarm sounds when triggered. The setting of the installation is meant to represent the entryway of a house. You have to try your best to stay quiet and avoid these alarms while exiting. If an alarm sounds, you have to stop it by replacing objects on the sensors or by other methods. As the interaction starts, your pet dog wakes up and starts barking for food. Your first challenge is to give the dog food within 30 seconds, before your parents wake up!

Interaction Steps

The installation has a challenging procedure that the player has to overcome.

  1. The first challenge is to stop the dog from barking by giving him food within 30 seconds (before the parents wake up).
  2. You have to grab the food bowl from the table and place it on the mat to stop the barking.
  3. The food bowl is on a light sensor, so when the bowl is taken off the table, the light sensor triggers an alarm sound.
  4. Now, the player has to find another object and cover the light sensor with it to stop the alarm.
  5. They can take a painting off the wall and place it on the sensor.
  6. As they move towards the door, there is a light beam from a flashlight shining on a light sensor; if the player crosses it, an alarm will trigger, so they must step over the light beam.
  7. Near the door, there is a safe that the player has to open to get the car keys.
  8. The code has three numbers that are hidden around the room.
  9. The player has to strategically walk around the room and search for them without triggering the light beam alarm.
  10. The numbers can be found behind the painting, on the dog bowl, and near the table lamp.
  11. Once the code is found and correctly entered, the safe is unlocked and the player can grab the keys and sneak out!



Max Patches

Dog bowl switch and light sensor alarm trigger

This Max patch has one digital switch and one analog light sensor.

The dog barking starts at the beginning of the game. When the dog bowl is placed on the mat, the digital switch closes, and the dog sound stops.

The dog bowl was on top of a light sensor. When the dog bowl is moved, the lamp shines directly on the sensor and an alarm sound triggers. The player has to replace the bowl with something else.






Light Beam “Alarm System”


This Max patch has one analog light sensor. By default, the flashlight shines on the sensor, which keeps the alarm sound off. When a person passes through the light beam, the sensor receives no light and the alarm sound plays. The player has to make sure not to block the light beam to avoid the alarms.
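The beam alarm reduces to a simple threshold check, sketched here in Python (the threshold value and function names are illustrative assumptions; in the actual patch the comparison happens on the analog sensor value in Max):

```python
# Light-beam alarm model: the flashlight keeps the analog reading high;
# a body blocking the beam drops the reading below the threshold.

BEAM_THRESHOLD = 300  # raw analog units; would need calibration per room

def alarm_on(sensor_reading):
    """True when the beam is blocked (sensor sees too little light)."""
    return sensor_reading < BEAM_THRESHOLD
```

Note the inverted logic compared to the dog-bowl trap: here light on the sensor means all clear, and darkness means the alarm should sound.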






Atmosphere Sound


To give a night-time atmosphere, we played cricket chirps through the stereo speakers.


Keypad and servo motor lock

The player has to find the code numbers 5, 9, and 4 and enter them in that specific order. When the correct code is typed into the keypad, the servo motor rotates 90 degrees, ending up in a horizontal position. This unblocks the hole and allows the player to open the box cover.
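The lock logic can be sketched as a small state machine (a Python illustration of the behaviour described above, not the actual Arduino code; the reset-on-wrong-digit rule and return values are assumptions):

```python
# Keypad lock model: digits must arrive in the exact order 5, 9, 4.
# A wrong digit resets the entry (and would sound the buzzer); a full
# match rotates the servo to 90 degrees, unblocking the box lid.

CODE = [5, 9, 4]

class KeypadLock:
    def __init__(self):
        self.pos = 0          # how many correct digits entered so far
        self.servo_angle = 0  # 0 = blocking, 90 = open

    def press(self, digit):
        if digit == CODE[self.pos]:
            self.pos += 1
            if self.pos == len(CODE):
                self.servo_angle = 90  # unblock the box cover
                return "unlocked"
            return "ok"
        self.pos = 0  # wrong digit: start over
        return "buzzer"
```

Pressing 5, 9, 4 in order opens the lock; any out-of-order digit returns "buzzer" and restarts the sequence.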




Code for the lock system




Materials Used:

  • lamp
  • dog bowl
  • conductive fabric (digital switch)
  • dog mat
  • flashlight
  • 2 light sensors
  • painting
  • stickers

Audio Files

  • Dog barking (Barks for 30 seconds, then “Who’s there” dialogue starts).
  • Crickets
  • Alarm ringing
  • Buzzer sound when the code entered is incorrect.