Blog Post 5

Mika Hirata



  • Luna

Luna was a calm and relaxing experience with amazing sound effects. The visuals were extremely beautiful, and the details made me feel like I was inside a painting. At first, I found the controls a little hard to understand, since the grabbing motion only worked on some objects. Also, I started from the middle of the game, so it was hard to understand that I had to clear the puzzles to move on to the next level. Since the game has puzzles, it had more depth of storytelling rather than just being a pretty environment. Particle effects are used on many objects, so the whole experience was very magical. As Max pointed out, this experience is made for one person only, who views the game in the VR headset. Unlike the other games, the people surrounding the player were not able to understand or join in the magical feelings, so it was an isolated experience.



  • Beat Saber

I really enjoyed playing Beat Saber, because I had watched a girl playing it on Facebook and had always wanted to try it. The game itself is very simple and easy to understand. The user experience and user interface were both very friendly, so I was able to play it without any directions. The sound effects were also amazing, which made the game very exciting. I noticed that while the player is playing, the people around the player were also having fun and were engaged with the game. Moreover, the scoring system made the game very competitive and enjoyable for many people, so it was not an isolated experience like Luna.


  • Lone Echo

Lone Echo was a very interesting and unique experience, which made me feel like I was actually in space. The 3D models used in the game were extremely impressive and the quality was amazing. I really liked the dialogue of the story, which let me become the captain of the space shuttle. The movement was zero-gravity, which made the whole experience more realistic and exciting. I would love to create something like this for the next project, and it was definitely very inspirational in terms of having a clear concept for the environment.



Posted in Uncategorized | Comments closed

Sky Lantern Festival



Sky Lantern Festival is a VR experience meant to be simultaneously relaxing, beautiful, and contemplative. Unlike conventional uses of VR, the user is meant to be seated or lying down, facing the sky. With the theme being ‘Biography of Things’, I believe sky lanterns are a timeless representation of objects that have been given agency. Although I have never experienced one in person, through research and appreciated feedback I have been able to replicate the journey of their release. The first version, presented at the exhibition, was a purely audiovisual experience with no interactivity with the environment. Based on feedback from classmates and critics at the exhibition, the project was successful and only required minor improvements to complement the cinematic experience. The second version of this project aimed not only to make the noted improvements but also to further expand on interaction with the sky lanterns without dramatically changing the overall experience. This involved collecting tweets from Twitter’s API and displaying them in a caption style similar to films. Of course, this functionality really does qualify as ‘OVER-DO’-ing it, so I decided to add the ability to toggle the newer features on or off.

Process & Presentation



Mobile VR Instructions

Instructional cards created to encourage others watching to try it using Google Cardboard

Link to recorded video for presentation


Window House

By Erika Davis, Ziyi Wang, and Natalie Le Huenen


This project is open source. View the entire project on GitHub:

Project Description:

Window House is an interactive digital world where viewers explore surreal fictional environments. Our aim was to create something that’s timeless and can be experienced universally. Free of any form of language, all interactions involve visual and auditory symbols. From there, viewers are encouraged to create their own interpretations. Visual aesthetics and pleasant ambient music are emphasized to create a sense of harmony. The environmental design and architecture also reflect the dreamlike quality of existing fantasy worlds found in digital games. The lighting is exaggerated to make the environment feel more dynamic, and also gives the viewer a hint as to where to proceed to the next area. Glowing ‘portals’ fill the screen with light and transport the viewer upon interaction.

One particular theme we wanted to look at is adding unexpected movement to objects that are always static in the real world. Experimenting with digital tools allows us to explore interactions that are physically impossible. Building our world using Unity allowed us to adapt modified 3D game physics and mechanics to our custom use.

Window House aims to create a deep feeling of mystery and curiosity. We decided to continue developing upon our original project, which was a single room filled with mysterious pots, acting as “windows” allowing you to peer into different worlds. Images on pottery (something that is always still in the real world) move and come to life when you approach or look at them.


3D modelling the first scene

Experimenting with lighting and post-processing effects

Experimenting with Unity’s “Post-Processing Stack” allowed us to boost the look of ambient lighting in the scenes and push the detailing to its highest potential. Creating a filmic look, adding a ‘glow’ to the lights, creating a subtle lens-dirt effect and adding a slight vignette helps add immersion.

Using baked lighting involves rendering lighting effects and applying them as textures to models in the scene. We used baked lighting and light probes for a more realistic effect.

Light cookies are black-and-white textures used to represent the light cast by emitters onto objects in the scene. We created custom light cookies to tweak the appearance of all window and portal lights in the scenes.

Programming movement into the textures: Adding movement to the art on the pottery involved modifying the ‘offset’ value of each texture every frame. Achieving a ‘parallax scrolling’ effect involved setting the scrolling speed of each texture to a different value, giving the illusion of 3D movement.
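The per-frame offset logic can be sketched as follows (Python for illustration only; the project's actual implementation is a Unity C# script, and the layer names and speed values here are made up):

```python
# Sketch of parallax scrolling: each texture layer's horizontal offset
# advances at its own speed every frame; the differing speeds create
# the illusion of 3D depth.

def advance_offsets(offsets, speeds, dt):
    """Advance each layer's offset by its scroll speed, wrapping at 1.0
    since texture coordinates repeat every whole unit."""
    return {layer: (offsets[layer] + speeds[layer] * dt) % 1.0
            for layer in offsets}

# Hypothetical layers: the 'near' layer scrolls faster than the 'far' one.
offsets = {"near": 0.0, "far": 0.0}
speeds = {"near": 0.30, "far": 0.05}   # texture units per second

for _ in range(60):                    # simulate one second at 60 fps
    offsets = advance_offsets(offsets, speeds, 1.0 / 60.0)
```

After one simulated second the near layer has scrolled six times farther than the far layer, which is what sells the depth effect.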


Parallax scrolling in a texture

Creating transparent rectangular textures to wrap around meshes


Programming player interaction: The pottery reacts to the player’s movement. The texture-offsetting algorithm stores all the data about the player’s 3D position, the player’s viewing angle, and the 3D position of each pot. To simplify calculations, this 3D data is projected onto a 2D surface where all the vector calculations happen.

The custom script accepts values to tweak the calculations to create a result that feels the most intuitive for viewers.
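The projection step can be sketched like this (Python for illustration; the actual texture offsetter is a Unity C# script, and `sensitivity` stands in for the tweakable values the script accepts):

```python
import math

def texture_offset(player_pos, player_yaw, pot_pos, sensitivity=0.1):
    """Project the player and pot onto the horizontal (x, z) plane and
    derive a horizontal texture offset from the angle between the
    player's viewing direction and the direction to the pot.
    sensitivity is an illustrative tuning constant."""
    # Drop the vertical (y) component: all vector math happens in 2D.
    dx = pot_pos[0] - player_pos[0]
    dz = pot_pos[2] - player_pos[2]
    angle_to_pot = math.atan2(dx, dz)      # bearing from player to pot
    relative = angle_to_pot - player_yaw   # angle relative to view direction
    # Wrap into [-pi, pi] so the offset stays continuous as the player turns.
    relative = (relative + math.pi) % (2 * math.pi) - math.pi
    return relative * sensitivity
```

A pot straight ahead yields a zero offset; pots off to either side shift their art proportionally to the viewing angle.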

View the texture offsetter script:

A glowing portal

Transitioning the player between the three scenes involves using a ‘portal’ system. Upon colliding with hitboxes designated as portal colliders, the next scene is loaded.
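In outline, the portal system reduces to a trigger callback that swaps the scene; here is a Python stand-in for that Unity C# logic (class, tag, and scene names are illustrative):

```python
# Minimal stand-in for the trigger logic described above: each portal
# collider carries the name of the scene it leads to, and a collision
# with the player triggers the load.

class Portal:
    def __init__(self, destination_scene):
        self.destination_scene = destination_scene

class Game:
    def __init__(self, start_scene):
        self.current_scene = start_scene

    def on_trigger_enter(self, collider_tag, portal):
        # Only the player should trip the portal, not other objects.
        if collider_tag == "Player":
            self.current_scene = portal.destination_scene

game = Game("scene1")
game.on_trigger_enter("Player", Portal("scene2"))
```

In Unity the same check lives in an `OnTriggerEnter`-style callback on the portal's hitbox, with the scene load handled by the engine.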

View the scene transition script:


The animated window light

Creating the animated window light involved making a keyframe animation for the light’s rotation and colour. This lets players witness the passage of time more dynamically. The “Volumetric Fog” package was also added to add to the effect.

Creating water

We used a modified version of our texture offsetter script to animate the fountain water. A simple water texture is wrapped around a cylinder model with the top and bottom faces removed, and is scrolled to give the illusion of falling water. A water splashing sound effect is also present in the center of the fountain. The closer to the fountain the viewer is, the more prominent the sound effect is.
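The distance-driven loudness can be sketched as inverse-distance attenuation (Python for illustration; in practice Unity's 3D audio source rolloff handles this automatically, and the constants here are made up):

```python
def splash_volume(listener_pos, source_pos, max_volume=1.0, min_dist=1.0):
    """Inverse-distance attenuation: full volume within min_dist of the
    source, falling off as 1/d beyond it (an illustrative approximation
    of an audio rolloff curve)."""
    d = sum((a - b) ** 2 for a, b in zip(listener_pos, source_pos)) ** 0.5
    if d <= min_dist:
        return max_volume
    return max_volume * min_dist / d
```

Stepping from two metres away to four halves the volume again, giving the "louder as you approach the fountain" behaviour described above.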

Creating the second scene:


Final images of the second scene

Studying and testing the structure of different vaults in Rhino

Creating the staircase

Putting little parts together

Modeling columns

The second scene is a very open space allowing you to look at the stars outside. They move across the sky very quickly, similar to the rotating light in the first scene. This is accomplished by offsetting the skybox’s texture every frame.

An early view of the starry skybox outdoors

Creating the third scene:


The skybox of the third scene was hand-painted and also rotates. The scene uses light and fog tweaked to match the skybox, creating an intense sense of depth for the monolithic structure.

A view of the final scene

Walking on the bottom creates a feeling of ‘floating’ as you can float over empty holes in the architecture.


Builds for Windows and Mac:



D E S K – A Vaporwave Experience

By Harit Lad, Sanmeet Chahil and William Selviz


D E S K is a VR experience showcasing the life of a former office worker packing up their dearest, yet mundane, belongings with a V A P O R W A V E twist.

Project Description

We developed this VR experiment after deconstructing the human experience we call nostalgia. This feeling needs a host, whether it’s an object, an aesthetic, or an art piece, in order to remain meaningful over time. The team was inspired by the lo-fi or “vaporwave” aesthetic. This internet subculture uses nostalgia as a tool to evoke satirical melancholy, combining ’80s and ’90s imagery with current trends, events, and technology. The idea of reminiscing about the past from an existentially neutral perspective seems more appealing than living through our painful realities. Nostalgia has the power to take us back to a moment in time, and even make us say “those were the good times”. Combining this concept with VR technology allowed us to construct a paradigm-shifting interpretation of this unlucky, yet relatable, event. Vaporwave is dead, long live vaporwave.


For Mac OS – Click here

For Windows – Click here


Our Final Documentation can be found here



Mika Hirata 3154546, Vivian Wong 3158686, Anran Zhou 3157820


HOKUSAI is an interactive art installation experience of culturally influenced art illustrating the evolution of the famous Japanese artist, Katsushika Hokusai.

Projected on a paper-like hanging screen, we bring Hokusai’s artwork to life, in an inviting, open, interactive space. Layered in a parallax formation to display depth, animations are activated with motion detected by a Kinect. The installation features reworked and refined digital paintings, special sound effects, the artist’s quotes, and self-composed, culturally-influenced music.

Beginning with a looping animated scene of the artist’s name, the audience is invited to approach the piece. Once a user is detected by the Kinect, the screen reveals a dynamic representation of Hokusai’s painting, The Great Wave of Kanagawa. The viewer can interact with the painting, as the waves and particles follow the viewer’s horizontal location. The closer the viewer gets to the painting, the louder the accompanying music and waves become. The audience is directly invited to experience the culture of the artist. As an interactive installation, the viewer is no longer only the audience, rather, the viewer becomes a part of the piece.


For our final project, we decided to continue our previous VR project using a completely different medium. Due to our lack of knowledge of Unity and VR, we struggled greatly with developing the previous project. As a result, we decided to take this project into Processing, a program that our team has much more experience with. We also wanted to represent our concept in a more digital way.


We originally had six different scenes in the VR version, but decided to only work with one scene for the final project. We dissected the scene into separate layers to be manipulated by user interactions using Photoshop. We determined our interactions and visuals in a storyboard initially.

Separated layers of the bottom and high waves in the scene.

Initial Storyboard

In After Effects, we first attempted to create videos with quotes in a calligraphy-like style. The writing animations ended up taking too long to produce for each scene, so we used special ink-like effects to recreate it.


We started putting the scene together by importing libraries, adding the layers of images one at a time. Once we had a basic setup of waves figured out, we included the SimpleOpenNI user3D example template. To get the interaction to work, we replaced the x-axis locations of specific images with the userX center of mass variable. We used mouseX to prototype our sketch motion animations when we did not have the Kinect at hand. We learned to calibrate the Kinect with the map function.
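Calibrating with the map function amounts to linearly rescaling the Kinect's centre-of-mass x value into the sketch's pixel range. A sketch of that rescaling (Python re-implementation of Processing's built-in `map()`; the input range shown is a hypothetical calibration, not the project's measured values):

```python
def map_range(value, in_min, in_max, out_min, out_max):
    """Re-implementation of Processing's map(): linearly rescale `value`
    from the range [in_min, in_max] to [out_min, out_max]."""
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# Hypothetical calibration: the Kinect reports userX roughly in
# [-1000, 1000] millimetres, and the sketch is 1920 pixels wide.
user_x_mm = 0
screen_x = map_range(user_x_mm, -1000, 1000, 0, 1920)
```

During development, `mouseX` can be substituted for `user_x_mm` to prototype the same interaction without the Kinect, exactly as described above.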

We created functions to oscillate boats back and forth, to play videos, and to fade in scenes and assets. We also created separate sketches for the Processing particles animation. To clean up the sketch, everything was organized to have its own function.
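The oscillation and fade helpers can be sketched as small time-based functions (Python for illustration; the project used Processing, and the amplitude, speed, and duration values here are made up):

```python
import math

def boat_x(base_x, t, amplitude=30.0, speed=0.8):
    """Oscillate a boat back and forth around base_x with a sine wave.
    amplitude is in pixels, speed in radians per second (illustrative)."""
    return base_x + amplitude * math.sin(speed * t)

def fade_alpha(elapsed, duration=2.0):
    """Linear fade-in: alpha climbs from 0 to 255 over `duration`
    seconds, then holds at fully opaque."""
    return min(255.0, 255.0 * elapsed / duration)
```

Calling these every frame with the current time drives the back-and-forth motion and the scene fades; keeping each behaviour in its own function mirrors the clean-up described above.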

Code on Github

Sparkles on the drawing:

Main Sketch:


We tested different setups to determine the best way to present the piece. We found it effective to project onto a large hanging sheet of mylar with a mirrored display, so that viewers can interact with the piece up close without blocking the projection.


Hand Detection

Initially, we planned to add a sparkle animation following the position of the waving hand. We tried to use the User3D example file for hand detection, but it did not work properly when we combined the code with our sketch, so in the end we decided to make the sparkle animation follow body movement.

We also often had trouble determining how to use the Kinect Processing library, SimpleOpenNI. While it was simple to use with the default example sketches, it became more challenging when combining it with our own scene, due to the several translate() and perspective() functions it came with. The scene resolution and display were often altered by these issues. Sometimes the scene was mirrored, or unintentionally displayed a zoomed-in version of the original image. We had to start the sketch from scratch, testing every time we changed or added code, to identify the issue.

We had attempted to add a third interaction to our installation: originally, the number of users detected by the Kinect could affect the scene as well. Depending on the number of users detected, the boats would oscillate at different speeds; the more users detected, the faster the boats would oscillate. Unfortunately it did not work, and the boats spun around 360° unrealistically. In the final sketch, the boats would also stop oscillating after the sketch reset, which we could not debug.

Sketch of the final outcome:


How we positioned the Kinect, the projector, and the user.

On the day of the presentation, we noticed that our sketch ran with a lag. During our testing and development sessions, we had never encountered this problem. It could have been due to the resolution change, or to too much going on in the sketch for Processing to handle. We lowered the particle count in the sparkle animation, which helped a bit.

Final Product


Final Product Videos:

Master Folder:


The interactive art installation approach proved to be more effective than the VR piece. By simplifying and reducing the project to one scene, we could focus on making the scene’s interactions and visuals more appealing. Taking the project further, we would refine the animations, continue to debug the code, remove the lag, and focus on improving the viewer’s relationship to the installation. We gained valuable knowledge and experience from working with the Kinect and Processing throughout the development of this project.


When You’re Gone

Sydney Pallister, Julianne Quiday, Pandy Ma, Vivian Fu, Tania Samokhvalova

Project Concept

The focus was to create a small indoor space that players could explore and find a narrative within. The game followed two key concepts: storytelling and the transition of object agency. Together, we designed a room to tell the personal story of two people and their relationship with death and grief. Players would be able to explore this ordinary room and learn, along the way, what happened. We decided to use voice lines to guide the story forward. This allowed us to create a second character while leaving the distinction between the characters unclear.

Altogether, When You’re Gone explores the passage of grief and how it can transform one’s outlook on life. The player, alongside the main character, eventually experiences survivor’s guilt and the inability to move on.

In order to manage within the given time frame, we created a list of tasks and split them into weeks. Items of priority would be completed in the first week to allow for more time in the final week. Each week we were able to complete a new build of the game, and finally reach our final product.

Maya modeling/assets

Initially, the plan was to create an ordinary room with various trinkets and objects to narrate the story. Together, we created a list of possible things to be found in a room and the meaning that could be tied to them. From there, we decided how much we could accomplish in the given time and how many objects would be needed to successfully tell the story. Finally, we decided on seven significant objects.


The second half of modelling was about creating the room. First, we decided on a floor plan and took furniture references from Ikea catalogs. Then we assigned one person to work on the furniture, while another worked on the interactive objects. The room we all decided on was the bedroom, which allowed for multiple objects with backstories. The build consisted of complete models with found royalty-free textures. As we progressed to the final stages of the second iteration, the majority of objects had their textures UV mapped, to ensure there would be no texture glitches as well as to allow custom textures. All finished models were then moved into Unity to assemble the room.

2D assets/drawings

The overall goal was to move away from using stock photos and toward our own artwork. To accomplish this, we assigned two people to work on painting photos and the skybox. These drawings included newspaper prints, photographs, book covers, and posters. Creating our own works allowed us to have more personal pieces that tied together with the story.


Unity UI/GUI

The first version of the game started out as just the game itself; the player starts in the room and ends with the final line of the game. There were no clear signs as to when the game would begin and conclude. As a result, we decided to add UI menus throughout the game in order to create a better flow for the player’s experience.

The Start menu was added first to give the player options to actually play the game, learn the story behind it, or exit the game. The Controls and Credits tabs were added to provide guidance for players who have never used a controller before (a fact we learned at the TSV open show). We chose from two different backgrounds, each an image of the outdoor terrain of the game.


The next menu added was the Pause menu which was a gateway for the players to momentarily leave the game if desired. The Pause menu would appear when the player hit escape, and the options would be to resume or quit the game.

The last menu added was the Restart menu to give a clear indication of when the game ended. A black screen would fade in after the final audio line of the game and the player was able to go back to the main menu or quit the game.

There were definitely some obstacles when creating these menus, some of which were unavoidable. With two people running Unity for the coding aspect, it was a challenge trying to send the menus back and forth and then directing the other on how to add them in from scratch. Converting from mouse and keyboard to controller also took some time because of player options that were not enabled in the Unity inspector. However, being able to finally add these menus was definitely an accomplishment and also created a natural flow to the game.
Unity Builds/Coding

This project started with a coding base that we made for our project The Biography of Things. The initial code simply triggered the voicelines to play when players clicked on the correct object. Voicelines corresponded with specific objects and were tagged accordingly; after a certain number of objects had been clicked, a counter was set to trigger a new set of voicelines after each object clicked in succession.


For the improvements to this project, a main issue was that there was no way of stopping a player from clicking on all the other objects and having the voicelines all play at once. This was fixed by creating a counter that would add one while a line was playing; if the counter was equal to one, players couldn’t click on any other objects. Once the line finished, the counter subtracted back to zero, allowing players to click on other objects and trigger the cycle once again.
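The counter described above is effectively a one-slot lock; here is a Python stand-in for the Unity C# logic (class and method names are illustrative):

```python
class VoicelinePlayer:
    """One-slot lock: while a line is playing the counter sits at 1 and
    clicks on other objects are ignored; finishing the line returns it
    to 0, re-enabling clicks."""
    def __init__(self):
        self.playing = 0
        self.log = []

    def click(self, obj):
        if self.playing == 1:
            return False          # a line is already playing; ignore click
        self.playing += 1
        self.log.append(obj)      # this object's voiceline starts
        return True

    def line_finished(self):
        self.playing -= 1

p = VoicelinePlayer()
p.click("letter")             # starts the letter's voiceline
blocked = p.click("photo")    # ignored: a line is still playing
p.line_finished()
allowed = p.click("photo")    # now accepted
```

The same pattern generalises to any "only one at a time" audio or animation constraint.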

Early into testing it became obvious that players required an indicator to know when the cursor was pointed at an object. Many different scripts were tested to change the material color of the object but had issues with the compatibility of the raycast script. Instead, we worked with particle effects. Once the player had the cursor focused on the object and the raycast hit, the particle effect would change to a blue color. This was done by referencing the particle effect with the corresponding object name and changing its start color to blue when it was looked at, and white when it was not. Once it was clicked, the particle effect was set to stop.
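The highlight behaviour pairs each object with a particle effect and recolours it based on raycast focus; sketched below in Python as a stand-in for the Unity C# component (colour values and names are illustrative):

```python
BLUE, WHITE = (0, 0, 255), (255, 255, 255)

class ObjectHighlight:
    """Particle effect tied to one interactive object: blue while the
    cursor's raycast hits the object, white otherwise, and stopped
    entirely once the object has been clicked."""
    def __init__(self, name):
        self.name = name
        self.color = WHITE
        self.emitting = True

    def update(self, raycast_hit_name):
        if self.emitting:
            self.color = BLUE if raycast_hit_name == self.name else WHITE

    def on_click(self):
        self.emitting = False    # particle effect set to stop after a click

h = ObjectHighlight("letter")
h.update("letter")   # cursor focused on the object -> highlight turns blue
```

Matching by object name is what lets one generic script serve every interactive object in the room.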

The next steps were to customize the post-processing settings as well as the FPS controller. Many playtesters found that the player controller moved and looked around the room far too quickly, which took them out of the game. The settings were adjusted according to these suggestions: the head bob was set to be less dramatic, and various walk speeds and sensitivities were tested until they better suited the game.


Research was done on the post processing stack settings, what each one accomplished and what feel they gave the room. The various values and options were tweaked until a suitable color palette and lens filtering system matched with the art of the game and accented its theme.

The custom skybox we made presented a unique challenge, as none of our group members had experience with making one. The first attempt was to make a cubemap and set the material as a six-sided image. This worked, and the skybox could be rotated with a simple script to make it seem like the clouds were moving. This approach, however, very clearly made the sky look like an actual box.


There were very clear seams at the vertices of the cubemap, which ruined the immersion of the game and looked very unclean. The next step was to map it onto an inverted sphere: once Tania, the game’s background artist, had repainted the texture to fix the distortion this would cause, the material could be read as a latitude-longitude cubemap. Initially the repainted cubemap was attached to a sphere that had been inverted in 3D software (Blender). This left no seams, but the clouds were very stretched.

Finally, we used Photoshop to convert the cubemap to a horizontal format, then converted that into a panorama, which we imported into Unity as a texture. This solved the issue. Putting an actual inverted sphere into the scene would have worked as well, except that it gave strange coloring issues.


To give the setting a bit more character, people suggested adding ocean sounds and background noises. To do this, two different audio sources were placed outside of the room, which allowed them to have their own 3D sound: they get louder as the player approaches the window, and have stereo effects that move from ear to ear when headphones are plugged in.

One final step was to make the game input compatible with both mouse and keyboard and an Xbox controller. Unity has built-in settings that support some of the controller mapping, but the rest must be done manually. Controller input mappings were added to the Unity input menu and then coded accordingly. OR notation (“ || ”) was used in the code so that the game would accept both a mouse click and the A button as input.
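The dual-input condition is a single boolean OR; a Python stand-in for the C# check (parameter names are illustrative):

```python
def interact_pressed(mouse_click, a_button):
    """Accept either device: a mouse click OR the controller's A button
    triggers the interaction, mirroring the `||` condition in the game code."""
    return mouse_click or a_button
```

Each input is polled per frame, and either one satisfying the condition fires the same interaction path.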

Progression of Each Build:



Code Gist:


Game Executable:






Mustafa Abdel Fattah

Atelier II: Collaboration

Project #4

Project Description

FRAGMENTS is a projection installation created by Mustafa Abdel Fattah. It illustrates the concept of faded experiences and raises the question: if objects had memories, what would they be? Following the perspective of a children’s toy, a loop of the toy’s distorted memories is revealed, and it’s up to the viewer to interpret them.

FRAGMENTS began as an exploration into the significance of childhood memories to adults and how memories shape adults into who they are. Associated with one’s childhood are children’s toys, which are known to be linked to some of the deepest memories of every individual. This piece aimed to show a glimpse of the memories that accompany a children’s toy, and subtly commented on the subject of passing time and growing age. As the concept became refined, FRAGMENTS grew to showcase the passing of time and the diminishing significance of children’s toys in one’s life as toys become just another memory.

Click to View Final PDF Documentation


Wiring Team



January 25, 2018






Things to look in to:

  1. Communicating between boards
  2. Multiple nanos and multiple screens
  3. Connecting the uno with sensors
  4. Photoresistor, potentiometer, pressure sensor (just one of each)
  5. Communication of sensor nano to all other nanos
  6. Redundancy
  7. Labelling system
    1. On screens
    2. On Nanos
    3. On wires
    4. Everything
  8. Troubleshooting system

Materials Needed:

  • Wires
  • Photoresistor (1)
  • Potentiometer (1)
  • Pressure sensor (1)
  • Protoboard (1)
  • Headers (60)
  • Breadboard (1)


What if we used the PWM pins for float and int information (photoresistor, potentiometer, pressure sensor) and the digital out pins for boolean information (this is on, this is not)?


Wiring Legend

  • Red: Power (5V)
  • Black: Ground (GND)
  • Yellow: SDA (OLED), A4
  • Green: SCL (OLED), A5
  • Orange: I2C (Nano to Nano) ( RX + TX, MAYBE! )


Final Documentation:


Initial wiring test:

One of the first steps was to start soldering power headers into protoboards, allowing for the large amount of connections that would be needed. We left a few extra pins, partly for inter-board connections, and partly in case of faulty connections (this came in handy).

At first, we planned to power the OLEDs using the 5V and GND pins on the Arduinos. This was scrapped in favour of keeping separate power headers on the protoboards, making for denser boards, but sturdier connections.

On the initial test day, after the three separate protoboards were put together, we tested each OLED screen and Arduino to make sure they were functioning properly and that the connections from the boards were correct. There was one faulty protoboard, which meant having to fix a short so that the connections were no longer bridged.

It was also at this time that we added a third header to transfer state information from the central Arduino in the control panel.

Everyone was sent home with an individual screen and Nano, so the final wiring job, aside from the protoboard power connections, couldn’t be started until the morning of the show, which was the real challenge.


Wiring to the Domes

Labour for dome-wiring was divided in two sections. One team dealt with the base, and the other simultaneously dealt with the internals of each dome.

We started by drilling an extra hole in the case for the 5V power cable.

The first problem that came up when attempting to wire the Nanos to the OLEDs in each dome was figuring out the organization of the wires. We only had two colours (red and black) to work with, which immediately led to some confusion as to where everything belonged and connected. We used black-red-black-red as a mnemonic for the OLED wiring pattern. In some instances, this pattern was reversed, leading to a substantial number of burnt-out screens.

The wires got tangled easily, which made it difficult to find the correct connections when testing each individual screen that wasn’t working. This issue was exacerbated by our decision to dispense with numeric labelling conventions, connecting any Arduino to any OLED in any dome. That choice made troubleshooting less a matter of, “Screen 24 is down, something’s up with Nano 24,” and more a matter of, “This screen is down, check all of the Nanos.”



With each individual dome hooked up, we tested if the nanos and screens would power on. Initially, none of the nanos had any power. The DC power jack’s positive and negative terminals had been bridged by spare solder, bypassing the assembly entirely. This was solved by de-soldering the jack completely, cleaning the site of its connection, then re-soldering it with extra care taken.

With that fixed, we realized most of the screens had no power or just would not turn on. In part, this was due to the aforementioned wiring reversals in the OLEDs. The screens which had their wires reversed were burnt out, and needed replacing. This largely happened in the desert dome; the final one to be installed.

Problems in the desert dome were compounded by a lack of distinction between power and signal cables, which was not resolved due to time constraints.

Some screens resisted all troubleshooting. In some cases, swapping out either Arduino or the screen itself solved the problem. In other instances, the issue could be resolved by finding loose jumpers. However, a rare few simply resisted all solutions, and so remained unsolved.


Thoughts on the Prototype

For next time, a different system of organizing the wiring would have made this quicker and simpler: labelling each wire and OLED with a specific number, so we could tell what wasn’t plugged in properly and which ones weren’t functioning correctly, and make a note of it. As it was, when we opened up the wiring box and there were clearly a couple of shorts going on, there was no way to sort through them besides the labels we had put on all of the data cords. At that point we had to go through each wire and Arduino individually to see where the problem was and whether anything had been fried or plugged in wrong.

As for the OLED screens and creatures themselves, another organizational system that would have been beneficial would have been assigning numbers and domes to each person, in order to know which OLEDs hadn’t been turned in to us or were having difficulties.


Wiring Images/ Materials:

Process photos:


Posted in Uncategorized | Leave a comment

Coding Team

Jerez Bain 3152579
Vivian Wong 3158686
Anran Zhou 3157820
Mika Hirata 3154546
Natalie Le Huenen 3156341
Katrina Larson 3159249


Neuro (Subject to Change) // SanDome

An interactive art installation in which the audience is given the power to change the environmental conditions of three different worlds (Forest, Desert, and Water) by adding heat, poison, or water. Virtual creatures inhabiting these worlds respond and react to the changes according to their traits, in either a positive or negative way. Neuro // SanDome focuses on the connection between these worlds and creatures, similar to how neurons in the brain connect and react to inputs.

As part of the coding team, we were responsible for the functionality aspect of the project, rather than the design. However, the choices we made helped shape the design and limitations of the project. Our challenge… the code has to function, or the whole project falls apart.

It was absolutely vital that we did everything we could to prevent that.


In order to successfully develop the code with minimal error, we needed to make sure that we had a plan. Referring to the initial design of the project, we listed out all of the tasks we had to complete, and produced a timeline. We gave priority to completing the animation base template first, as the entire class would need it to complete their individual parts.

We wrote pseudocode to understand and determine the logic behind our code. We communicated our ideas with the Enclosure and Wiring teams and made changes based on their input. We divided our team into three parts: Animation, Communication, and Control Panel/Central Brain, and got straight to work.


Preparing a ‘template’ which houses each set of animations loaded on each Arduino Nano was crucial for having all the components synchronized.

We began by making an early working template for shape-based animations to be placed in. Using the shape functions provided by the u8g2 library, simple movements created by shifting the coordinates of shapes and lines can be combined to create animated creatures.

We chose a set timing for the animations: three frames per second. The template is set to loop through animation frames in the order of frame 1, frame 2, frame 3, and frame 2 again. This creates a 'breathing' or 'ping-pong' effect, maximizing the use of six frames per creature (three 'happy' frames and three 'sad' frames).

hframe1photo hframe2photo hframe3photo
Testing the timing and frame order of an example ‘happy animation’ on an OLED screen

We found that creating animations with this method works but is time-consuming. Verifying that the animations are working as intended involves re-uploading the template to the Arduino Nano every time, which takes several minutes.

Experimenting with drawn images as animation frames involved designing a process where image information can easily be added to the existing animation template. We created instructions to guide the rest of the crew in using the template, since there are many steps involved in ensuring the image information is compatible with the OLED screens.


hframe001 hframe002 hframe003 sframe001 sframe002 sframe003

Drawn images use black-and-white pixels to tell the OLED screen which pixels are on and which are off. Exporting the image as an xbitmap (.xbm file) and opening it with a text editor displays the image information which can be moved into the animation template using the Arduino IDE.


As the animation template was developed independently of the communication system, merging the code involved ensuring that there was enough onboard memory in each Arduino Nano. Fortunately, drawn xbitmap animation frames can be stored within the Nano’s static memory instead of the RAM, which the communication system uses more heavily.


For the communication, we started off by using the SoftwareSerial library on Arduino, which uses software to replicate serial functionality so that other digital pins on the board can act as RX and TX. This allowed us to have multiple software serial ports (on the Arduino Uno and Arduino Nano).

Process Images:

First trial of the Arduino communication:
Since the two Arduino Unos did not share a common GND, we were not able to get the Arduinos communicating with each other.



Second trial of the Arduino communication:
Second failure, this time connecting two Arduino Nanos using an OLED. The two Arduinos still did not share a common GND.


Third trial of the Arduino communication:
We started using the SoftwareSerial library with both Arduinos plugged into the same laptop.


Fourth trial of the Arduino communication:
We successfully connected the Arduino Nanos using one laptop, lighting up a green LED to confirm the Arduinos were communicating.

img_3499 img_3498


Before we could even start writing the control panel code, we had to have a working prototype of the control panel inputs. Using sample code, we tested each sensor individually to make sure it was working and that the wiring was correct.

20180118_092945 design-proposal-circuit-control-panel-a_bb

We then rewrote the code so that all sensors could be used at the same time.



We worked individually on the code at first, testing out our own versions. We added checking functions with if-statements that referred to the range of each sensor value.

There were several ways to write the control panel code. In one version, we used booleans to store the input information according to the if-statements. In another version, we replaced the boolean type with int types defined as either 0 or 1.

Working in collaboration with the communication code team, we merged their working code with the control panel code to test. Unfortunately, we kept getting strange readings in the Serial Port. It should have sent '1's but kept sending '-1's, and the occasional '49' (which is the ASCII code for the character '1'). Through research and trial and error, the error was fixed by replacing Serial.print() with Serial.write() and using boolean variables.


Test 1:

We replaced the original animation template images with text that signified which input was on when testing initially.

However, when we switched the Uno out for the Nano, our original code stopped working. To fix this, we had to switch Serial.write() back to Serial.print() and use integer variables of 1, 2, or 3 according to the input.

Central Brain OLED Animation:

Since we decided to create three types of conditions (fire, water, and poison), we created three patterns of animations on the OLED display.

  • If the FSR pressure sensor is pressed, then poison animation is on.
  • If the photoresistor detects a certain amount of light, then fire animation is on.
  • If the potentiometer is twisted over a level, then water animation is on.

We used if-statements to declare each type of condition and made sure the animation switches based on the value the sensor detects.

URLS for the code and videos:

Vimeo link for condition animations(water, poison and fire animations):

Github link for condition codes:

Images of animations:

img_3547  img_3550 img_3553

When we were ready, we merged the communication code and the central brain condition animation code with the control panel code.

Final Test video:


Throughout our experience, we have learned that good instructions go a long way. The animation template and instructions made it easy for anyone to successfully create their creature. During Communication and Control Panel coding, there were multiple times when we uploaded code to the wrong Arduino, and it became a nuisance to ensure the ports and Arduino board were correct each time. It was also interesting how the Arduino Uno and Nano required different methods to send data through the Serial Port: while the Uno worked better with Serial.write in bytes and booleans, the Nano worked best with Serial.print in integers.

Out of all of the tasks, merging everyone's code together was certainly one of the most challenging. We attempted to merge the code several times, only to be met with no working results. When we merged the code slowly, step by step, testing each time, we finally succeeded. Coding always comes with a constant need for research, debugging, and troubleshooting. With luck, we are able to stumble upon solutions for the obstacles that we face. Good commenting on everyone's code was extremely helpful for avoiding misunderstandings and confusion.

Delay & Millis

At the beginning, we were using delay() to control each function, such as the checkLight, checkWater, and checkPressure functions for the prototype. But it turned out that these functions could not run at the same time, since delay() blocks all other code while it waits. Thus, we replaced delay() with millis(), which allows different sensors and functions to be triggered at the same time, each at a different rate.

We created intervals and timers for each function and used millis() to calculate elapsed time.



Since we had the control panel wiring ready, we took responsibility for attaching the wired components inside the control panel dome. There were a few challenges with fitting everything inside the dome, and making sure that the wires did not disconnect. We used a half breadboard for the Nano and replaced long jumper wires with manually cut, colour-coded wires. The potentiometer and FSR sensor were the most difficult to keep in place. We had to raise the height of the pressure sensor by using an extra breadboard as a platform. We used alligator clips for the pressure sensor as the jumper wires did not fit, and taped everything into place.

Image inside the dome for central brain:

img_6095 img_6096

Final outcome images:




All things considered, developing the code for this project was a huge undertaking and quite challenging. A huge part of this journey was teaching ourselves and each other, and learning as a group and as individuals. As individuals, we taught and shared our knowledge with one another. As a team, we learned to communicate and work together, both in our small groups and with different teams.

The success of the entire project depended heavily on how well we could organize time and communicate our progress and required tasks with each other. We developed an effective way of communicating  and sharing files with one another that made this possible considering the size of the project team.

The journey was nerve-wracking, frustrating, challenging, but well-worth the result. The project helped us all see how many people working on their individual parts can come together and make a fantastic project. Success would never have been possible without everyone’s hard work and commitment.

All project code and video links below.


Posted in Experiment 5 | Leave a comment

Enclosure Team


Upon finalization of the supercomputer idea, we asked one of the original Niero group members from the design proposal assignment to give us further insight with regard to structure and their vision of how we should design the enclosure for this project.


We relayed some ideas around the newly formed group about how it should function and specific design aspects that would contribute to the overall aesthetic. Soon enough, we landed on the idea to feature these screens on domes.


The domes were suggestive of various environments that can affect everyone's creatures on their respective OLED screens. We got to this idea by looking at architectural modelling of terrain and other topographic designs to gain a perspective on how it might look.


In the ideation stage we wanted to incorporate the idea of the brain and its neurons, staying true to the proposed name, Niero. The plan was to take the idea of brain cells and their connections and leave the wires exposed to simulate the interconnectedness of the brain. After we assessed the discrepancies with exposing real wires in the design, we moved towards a biome design to showcase elemental effects that change each creature's animation to either happy or sad.



To best communicate our design aspirations, we sketched out a rough design of the structure and how it would look in a real-world context. The sketches show concepts of the design, informed by intermittent conversation with the wiring team about how it would work out and the exact sizing of the structure itself.


In this sketch we are trying to explain how the single-dome design would work, with the idea of splitting the dome into sections. This is also the rough sketch of the wiring on top of the dome.


In this sketch we are looking at the wireframe of what needs to be connected, and how things would be connected underneath the structure and on top. This sketch was not an actual design concept but a way of looking at the system: the functions of the hardware and the way the wiring team could possibly put things together.


For the final design we landed on a 'shoe-box' structure: a box with a hinged lid to house the wires, boards, and Nanos inside. This also gave us the most space for the project.


The top of the lid has four domes: three for the respective environments and a fourth to house the controller that lets users trigger the different elements. Each environment dome is decorated in an appropriate setting to resemble the natural element that affects the creature.


We wanted to create an ambient environment that the creatures could blend into, rather than a world with no allusion or connection to the original idea. We also kept with the idea of the brain by placing the controller in between all of the domes, along with the wiring, so that it would work out better for the wiring team and also be aesthetically pleasing.


After the final draft of the design was fully thought out, we began to brainstorm about potential materials. We first considered using stiff foam boards, layering them or making a box out of them to ease the placement of wires and Nanos. After going through the pros and cons of various materials, we settled on wood, as it gave an organic feeling. For the domes we wanted a semi-circular shape, so we vacuum-formed plastic using a basketball as the mould. The plastic was thin enough to mould into the shape we wanted under heat.


Once these two essential pieces were complete, we focused on the decoration and foliage of the domes so that they would accurately depict the environments we chose. For the aquatic dome, we chose seaweed-like plants and lily-pad flowers. For the desert dome, we chose cacti and dried plants. Finally, for the forest biome we chose tree-like plastic pieces.



Upon completion of the design and ideation of the construction, we brought together each individual component to get a bigger picture and anticipate the final product.


The first thing to be made was the box, so everything would be able to fit into it. The box was made from wood with three separate door hinges to allow for easy access. Thanks to careful craftsmanship, every facet of the box was well cut and accentuated its clean design. Additionally, a lip was added to the lid of the box for ease of opening.


The next things to create were the domes themselves. This was not only the hardest part, but took several tries to get to the right size and look. The first couple were made by vacuum-forming plastic draped over a basketball. The first one was very oversized and looked quite odd.


The second and third ones were smaller and had a unique design that followed our ideation of how the dome should look. The fourth and fifth domes turned out a lot better, with a form we were all pleased with.


After the box and domes were complete, we needed a method of mounting the OLED screens on the domes and feeding wires through them. First, we tried cutting into the cold plastic, but it did not work out well. Given the abundance of soldering equipment, we resorted to using a soldering iron to burn exact-sized holes into the domes. This step was interesting because when plan A didn't work, we were quick to adapt and move forward with plan B. Burning holes into the plastic allowed for a customized look and feel, with any stray pieces of plastic sanded down.


The next step was to make holes in the lid so that all the necessary wiring could run through it into the box for storage. We outlined the domes once they were trimmed of extra material, and made adequate holes in the lid using a large drill bit, which left plenty of room for the wires to be fed into the box. Holes were made for the domes, the controller, and the power source.


Later on, we added the decorated domes with the OLED screens and wired them to the Nanos for data transfer, ground, and power. One very satisfying touch on the domes was the Japanese textured paper, which gave a natural, 3D effect. Topped off with foliage, it made for a very aesthetically pleasing piece.


Lastly, we combined everything and physically added the wiring to the inside of the structure. The structure worked out with only minor last-minute adjustments. The problems we ran into right at the end were getting power into the box and ensuring ease of access to the wiring when troubleshooting. These were taken care of right away with fast, collaborative adjustments.



The final product was exactly what we imagined, and it turned out really well. We successfully held true to our concept all the way to the end of the project. The detailing we wanted and the overall look and aesthetic came through in the physical model. Another thing that came through very well was our awareness of the wiring team's system and aspirations, which let us build the structure in a way that suited how things were placed in the box. And when everything was working, the screens and Nanos really made the unique environments stick out with a strange, otherworldly feeling.


One of the suggestions we received from other instructors for a redesign or second version of this structure was to create a sliding panel instead of a lid. This would allow easier access to the wiring and make the project easier to work with in terms of wiring, organizing, and troubleshooting. The sliding design would also be more efficient, because sliding is easier than lifting and threading wires. Another suggestion was to experiment with moulding different sphere shapes that could relate to or carry different meanings within the piece; for example, a topographic view of the box, a terrain-like environment separated according to how each section can affect the creatures.


As we were discussing the overall appearance of the structure and about improving the discrepancies, a lot of the talk was geared towards custom hardware and creating special pieces that can organize the wiring and components a lot better. Suggestions were to make the screens seamless to the dome or environment. Another was to create a longer structure that has different levels and areas along the structure. And there was also talk about creating custom screens and breadboards that would be more efficient and result in better effects for the aesthetics and design. Having custom material allows for unique shapes and organization to occur.


To conclude, the construction process was a lot of work, from ideation and creation through the building phase and finally the presentation of the final product. The process was tough and required a lot of collaboration between group members and separate groups to make decisions and design choices. The problems that arose pushed us to think further about the logistics of other groups' requirements and our own methods for getting things done. A lot of time-coordination was needed so that everything that had to be built was finished and viable, made all the more pressured by our limited access to the open workshops and other crafting facilities. Ultimately, each group member was able to utilize their unique skill-set to contribute to the final product, and even learned something new by focusing on a different aspect of the product for a change.

Posted in Experiment 5 | Leave a comment
