Category: General Posts



What if water could be an instrument?


Fluid Resonance DJ Gary

If the vibrations from a single drop of water in the ocean could be suspended in momentary isolation, what infinite arrays of symphonic arrangements could we hear? A constant flow of signals in the tide of the universe, codified in sounds, waiting to be experienced in time. There are certain moments when we are moved by sound: a direct emotional connection to the physical movement of orchestrated disturbances in the air; an unseen but pervasive, invasive and volumetric presence. These characteristics are what concern me now, and are the focus of the project. The relational interactions that form in space, codifying the event and its parameters, are subtly and violently manoeuvred by invisible actions, a subtext underlying the visible surface, just as sound changes its timbre with the material surface it reverberates on, yet continues to imbue itself into the substrata of matter.

Music makes time present. Or, at least, it makes one aware of time, even if one loses track of it. Take, for example, Leif Inge’s 9th Beet Stretch, a reimagined version of Beethoven’s 9th Symphony stretched into a 24-hour journey (a sample can be heard here on RadioLab at 4:23, or listen to the full streaming version here). The remastered continuous audio materializes the presence of sound and the distillation of a captured moment, giving one pause to reflect on the mortal moments that stream by, unnoticed, every minute. In this example, we are transported into the life of a musical movement in its own existence. In contrast, another way of thinking about the relation of time and sound comes in the form of crickets. An analog field recording of a soundscape of crickets was slowed down, edited to a speed that scaled the cricket’s lifespan up to the lifespan of a human. What emerges is a harmonic layering of triadic chords playing in syncopated rhythm, like the ebb and flow of a call and response. (Note: the field recording has since been reported to have been accompanied by opera singer Bonnie Jo Hunt, who recalled: “…And they sound exactly like a well-trained church choir to me. And not only that, but it sounded to me like they were singing in the eight-tone scale. And so what–they started low, and then there was something like I would call, in musical terms, an interlude; and then another chorus part; and then an interval and another chorus. They kept going higher and higher.” (ScienceBlogs 2013).) When we slow down, speed up, alter, change, and intersect intangible concepts into human-scaled pieces we can hold, we have an opportunity to glimpse, from our point of view, a dimension outside our horizon that we never would have encountered in its habitual form.
It may not grant us full access to aspirations of knowing or truth, but the discovery of interrelated phenomena – whether time and music, water and sound, or natural and computational glitches causing anomalies – gives us a better understanding of the effects and consequences of the tools used to define a language, which, in turn, define our state of being and future intent.

And what of the water project?

The original pairing of Processing and music began with an introduction to cymatics – a term describing experiments in a substance’s patterned response to various sine-wave tones.

And here’s a polished music video of many such experiments compiled into a performance:


Water revealed the vibratory patterns of tones in consistent yet exciting designs, which opened the exploration into sound processing. Minim became the library that handled the recall of pre-recorded instrumentation (or any .wav / .mp3 file); however, it was not the first library experimented with. Beads, a sound-synthesis library with the capacity to generate sine-wave tones, provided an introduction to visualizing a simple waveform.


beads interaction sine wave


Close-up of Frequency modulating on-screen.

The position of the on-screen pointer, moved by the mouse, changed the wave’s modulation and frequency in relation to its vertical and horizontal movement, respectively. The input of movement and proximity changed the pitch of the output tone.

Another variation of sound visualization from Beads was the Granulation example. This exercise took a sample of music, then ‘flipped’ the composition, pushing and pulling the tones, stretching them into digitized gradations of stepped sound. Imagine a record player turning a 45rpm disc with minute spacers between every 1/16 of a second, turning at 33rpm (but with the same pitch): the digital composition of finite bits reveals itself, yet links the tones in a digitized continuum. This would later become very influential in the final performance of the sound generated by water.
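The granulation idea can be sketched outside of Beads: chop a source into short grains and repeat each grain, so the result lasts longer while each grain keeps its original playback rate (and therefore its pitch). A minimal plain-Java illustration of the principle – not the Beads implementation itself:

```java
// Naive granular time-stretch: repeat each short grain of the source so the
// result is longer, while each grain keeps its original playback rate (pitch).
// Conceptual sketch only; the project used the Beads library's Granulation example.
public class GranularStretch {
    // stretch the input by an integer factor using grains of grainSize samples
    public static double[] stretch(double[] input, int grainSize, int factor) {
        double[] out = new double[input.length * factor];
        int pos = 0;
        for (int start = 0; start < input.length; start += grainSize) {
            int end = Math.min(start + grainSize, input.length);
            for (int rep = 0; rep < factor; rep++) {   // repeat the grain
                for (int i = start; i < end; i++) {
                    out[pos++] = input[i];
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[] sample = {0.1, 0.2, 0.3, 0.4};
        double[] stretched = stretch(sample, 2, 3);  // three times longer
        System.out.println(stretched.length);        // 12
    }
}
```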

An inquiry into the physical properties of cymatics proved challenging. Initial investigations were conducted with a thickened, non-Newtonian fluid (water and cornstarch).

Gary blows a speaker.


It was soon discovered that commercial-grade hardware and equipment would be needed to achieve an effective result. Though the direction was promising, time did not permit further exploration. (Gary Zheng continued to explore cymatics to great effect based on these initial experiments.)

A second option was to simulate cymatics through visual processing, leading to some play with Resolume, a software used for sound visualization, popular among DJs to augment their sets with responsive graphic media.

Resolume

Initially, the layered track interface and set-bpm files made this an easy-to-use software medium. Pre-made .mov or .wav files could be loaded to simulate interaction with beat-heavy tracks. For entertainment value, Resolume has much to offer and is easily accessible. But spontaneity is removed from the equation: the output depends on the user’s technical knowledge of the software and the constraints of the program.

motion vs colour detection

motion of water detection

This method of investigation revealed interesting physical responses to sound and, in turn, inverted the cymatics experiments – from sound causing form, to form resulting in sound feedback. The intrinsic displacement patterns on the water’s surface could create an effect through captured video; thus water became the focus as an instrument of motion represented by auditory output, no longer an after-effect of sound.

A deductive experiment compared two forms of video motion detection, based on exercises conducted earlier in group lessons (code originating from Daniel Shiffman’s samples). First, there was colour detection. This version of motion detection would have dictated the physical properties of objects and/or additive coloured substances in the water. Adding elements complicates the design process and alters the baseline state of the water interface, so this was not a favourable option.

motion detection senses obscenities.

Motion Censor: motion detection senses obscenities.

Motion detection test using video processing. Video pixels are selected in the active frame to detect which colours within a certain threshold will be chosen to be tracked; in this case, the off-colour gestures turned the camera from a motion sensor into a motion censor.

Next up was the motion gesture test, a basic differencing visual of motion knocked out in black pixels, with a set threshold calibrated for the scene.
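The differencing test boils down to one comparison per pixel: the colour distance between the current and previous frame, checked against a threshold. A small stand-alone sketch of that test (plain Java; the `countMotion` helper and greyscale frames are illustrative, not from the project code):

```java
// Minimal frame differencing: a pixel counts as "motion" when the colour
// distance between the current and previous frame exceeds a threshold --
// the same per-pixel test the Processing sketch applies.
public class MotionDiff {
    public static boolean isMotion(float r1, float g1, float b1,
                                   float r2, float g2, float b2, float threshold) {
        float dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
        double diff = Math.sqrt(dr * dr + dg * dg + db * db);  // like dist()
        return diff > threshold;
    }

    // count motion pixels between two greyscale frames (illustrative helper)
    public static int countMotion(float[] current, float[] previous, float threshold) {
        int points = 0;
        for (int i = 0; i < current.length; i++) {
            if (isMotion(current[i], current[i], current[i],
                         previous[i], previous[i], previous[i], threshold)) {
                points++;
            }
        }
        return points;
    }

    public static void main(String[] args) {
        float[] prev = {10, 10, 10, 200};
        float[] curr = {10, 10, 250, 200};
        System.out.println(countMotion(curr, prev, 50));  // 1
    }
}
```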

motion difference


early tests of water detection through gesture motion

The gesture test proved less discriminating about which pixels it flagged, so the existing conditions of light and material properties would be critical in the final set-up of the performance, especially for a clear, barely detectable substance like water. A visualization of the water’s surface captured by the video camera early on indicated that the camera’s sensitivity would be sufficient, and better still in a controlled environment.

A third and most important layer of the experimentation was the implementation of the split-screen lesson introduced to us as a group: an application using coloured elements to respond to motion. Coloured items appeared, indicating the detection of movement in the designated zone on the screen.


Split screen divisions

hand detection grid format

Layout of grid zones

At this point, the design of the project became clear. A music interface would be created with water, from which motion would be detected through user interaction (imagine water-drop syringes annotating musical notes in a pool; notes vary on a scale depending on where you release the droplets). The vibration of the water is also augmented by a graphic icon, colour-coded to represent the different tones. Once the user has made the connection between the interface, colour cues, notes and zones, the ability to improvise and create a melody through patterning becomes intuitive.

As the design of a musical interface called for a variety of designated tones, a grid was mapped out to correspond to a simplified scale. Eight tones would be selected, representing a major key to start with harmonic layering. A small threshold kept low-level motion from registering, maintaining a relatively neutral background, while a graphic icon was required to track the gesture: this gave visual feedback to the user, making the interface orientation and the navigation of the zones understandable.
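The grid lookup itself is simple arithmetic: four 160-pixel columns and two rows of a 640×480 frame give eight zones, each mapped to one of the eight tones. A hypothetical helper showing that mapping (the actual sketch spells the zones out as an if/else chain):

```java
// Sketch of the 8-zone grid lookup for a 640x480 frame: four columns
// (the splitLine values) and two rows give zone indices 0..7, each of
// which would be mapped to one of the eight guitar tones.
public class ToneGrid {
    public static int zoneIndex(int x, int y, int width, int height) {
        int col = x / (width / 4);           // 0..3, columns 160 px wide
        int row = (y < height / 2) ? 0 : 1;  // upper or lower half
        return row * 4 + col;                // 0..7
    }

    public static void main(String[] args) {
        System.out.println(zoneIndex(10, 10, 640, 480));    // 0 (upper left 1)
        System.out.println(zoneIndex(630, 470, 640, 480));  // 7 (lower right 2)
    }
}
```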


gesture motion and tracking with graphic icon

An important aspect of the grid layout and the user interaction was a method of knowing where the user was affecting the interface, as it was a visual representation augmenting the real physical interactions of the water. A static image appearing intermittently did not represent the user’s action of dropping water, so a sequenced animation (GIF) was created in Photoshop.


GIF sequence of 15 frames developed in Photoshop

Eight unique variations (colours) of a 15-frame GIF were created. Then a GIF library was sourced to introduce the animation into the code: GifAnimation was used to activate the series of images. There were at least two ways to integrate the animation: as a sequence of still images, or as a compiled GIF (the latter was chosen in this instance). For further information, here is a link to start.

In order for the GIF to be successful, it had to follow the pixels in the zone it was assigned to, and it needed to appear in an approximated area where the most changes occurred in the video processing. What transpired was a coloured “droplet” image appearing where the real droplets of water were being played out on the water’s surface. This part of the program code would not have been possible without the consultation of Hart Sturgeon-Reed, who helped to apply the following:

///draw the gifs

if (points>80)


//Upper left 1
if (xPosition <splitLine && yPosition <height/2)

…and so on, for each tonal zone.

To recap: the foundation of gesture motion detection was layered with split-screen detection, divided first into quadrants, then into eighths of the screen (default 640×480). Video processing also enabled tracking of the GIF, which was implemented with the GifAnimation library. Finally, Minim was used for playback of pre-recorded, royalty-free audio. In this case, the default notes of a guitar were selected as the basis for the sounds – a simple foundation, easily recognizable, with the potential to grow in complexity.

A fundamental leap of concept occurred in the playback results. Initially, the pre-recorded single-note tone would play, and a simple identification of a sound would be the result. Minim can play a complete song if needed, recalling it at the critical moment; this method may slow down the recall, however, and since the tones for the project were short, the recall required quick access and activation. Another drawback of the Play loop was its one-time cycle: a reset was required afterwards, did not behave as expected, and the tone often cut short as other tones were activated by the water’s motion. To counter the stuttering effect, troubleshooting with the Trigger loop produced interesting results. As the motion of the water continuously recalibrated the video detection whenever its surface broke from the droplets, the tones were triggered with a constant reset, creating a continuous overlap of sounds, not unlike the earlier experiments with the Beads granulation example. So here we are with a unique sound that is no longer like a guitar, because it retriggers itself in a constant throng of floating, suspended long notes weaving between each other. It is guitar-like, yet it is pure Processing sound, delivered from simulation to simulacra, activated by the natural element of rippling water.
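The difference between Play and Trigger can be sketched numerically: each trigger starts a fresh copy of the sample from the beginning, and copies that overlap in time sum rather than cutting each other off. A toy mixer illustrating that behaviour (arrays stand in for audio buffers; this is a conceptual sketch, not Minim code):

```java
// Why Trigger sounded different from Play: each trigger starts a fresh copy
// of the sample, and overlapping copies sum in the output. Arrays stand in
// for audio buffers here; this is a sketch of the behaviour, not Minim code.
public class TriggerMix {
    // mix a sample into the output starting at a given sample offset
    public static void trigger(double[] out, double[] sample, int offset) {
        for (int i = 0; i < sample.length && offset + i < out.length; i++) {
            out[offset + i] += sample[i];  // overlapping triggers accumulate
        }
    }

    public static void main(String[] args) {
        double[] tone = {1.0, 1.0, 1.0, 1.0};  // a 4-sample "guitar note"
        double[] out = new double[8];
        trigger(out, tone, 0);  // first droplet
        trigger(out, tone, 2);  // retriggered before the first copy ends
        // samples 2 and 3 carry both copies: an overlap, not a cut-off
        for (double v : out) System.out.print(v + " ");
    }
}
```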

The second point to note about the visual and auditory feedback was the glitches. In the code, parameters defined each zone’s area within the screen (160×240 pixels), from which an approximate point of contact was determined, placing the GIF droplet icon where the most action occurred in each zone. But as the water’s surface continued to overlap and ripple into adjacent zones, the icons appeared to blip outside their originating boundaries, often overlapping one another. This was reinforced by the seeming overlap of tones, when in fact each tone was activated individually; due to the immeasurably small lapse between triggered effects, two tones would sometimes sound as though they were playing simultaneously when they were bouncing back and forth between each sound. The flowing state of the contained water would activate multiple zones at once; I can only surmise that the Processing sequence arbitrarily determined which zone and tonal value to play in order of activation, constantly re-evaluated as the water produced new results, changing the sounds by the millisecond.
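The “approximated area” is the centroid of the changed pixels – xsum/points and ysum/points – which is why the icon drifts when ripples spread across zone boundaries. A minimal version of that calculation:

```java
// The droplet icon's position is the centroid of the changed pixels in a
// frame: xsum/points and ysum/points, the same approximation the sketch
// uses before deciding which zone (and tone) to fire.
public class MotionCentroid {
    public static int[] centroid(int[][] motionPixels) {  // each entry is {x, y}
        int xsum = 0, ysum = 0, points = motionPixels.length;
        for (int[] p : motionPixels) {
            xsum += p[0];
            ysum += p[1];
        }
        return new int[]{xsum / points, ysum / points};
    }

    public static void main(String[] args) {
        int[][] ripple = {{100, 200}, {110, 210}, {120, 190}};
        int[] c = centroid(ripple);
        System.out.println(c[0] + "," + c[1]);  // 110,200
    }
}
```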

The set-up: a podium, web camera, projector, speakers, source light, laptop, water dish and two water droppers. Simplicity was key, considering the varying levels of sensory input and stimulation; even so, the learning curve was short, thanks to the immediacy and responsiveness of the performance.

(Two other experiments to note: FullScreen – an application whereby the Processing viewing window stretches to the native size of the computer (laptop) screen – and Frames – the calibrating tool indicating how often the camera input refreshes its data – did not synchronize, due to the limitations of the external web camera used. The 640×480 resolution was not preferable, but served its purpose in speed and responsiveness.)

Prototype Performance

From the outset, the physical interactions of the environment and the design (reflection of light on the water surface, the container vessel, the camera positioning…) were discussed in detail, yet the focus remained on the programming concept. For a designer of physical space, the programming content, in conjunction with the hardware and sensitivity to environmental conditions, was a negotiation process requiring constant testing throughout the stages of development. Such practice-based research produces unexpected results and informs the process through reflective and iterative methods. The most challenging aspect is the interaction of elements in their most basic state. The approach to this project was to respect the properties of water, and to work with water as the central vehicle of the creative concept. This now includes the unpredictability of its changing yet constant state.

Phase II

Future stages to the project entail expanding the range of tonal values / octaves in the instrument, which could include secondary mechanisms to “flip” to another octave, or output of sound. Recorded feedback of either visual or auditory information could become an additional layer to the performance. A designed vessel for the water interface is to be reviewed.

Other considerations:

Spatial and cognitive approaches to virtual and digital stimuli in environments have the potential to be accessible touchpoints of communication, whether it be for healthcare, or as community space.

The initial mapping of the split screen into zones carries into another project in progress, and informs the development of a physical space responding with feedback based on distance, motion and time on a much larger scale.



Many thanks to Gary Zheng, Stephen Teifenbach Keller, and Hart Sturgeon-Reed for their help and support.


// Jay Irizawa Fluid Resonance: Digital Water
// base code started with Learning Processing by Daniel Shiffman
// Example 16-13: Simple motion detection
//thanks to Hart Sturgeon-Reed for the graphic icon detection

import gifAnimation.*;

PImage[] animation;
Gif loopingGif;
Gif loopingGif1;
Gif loopingGif2;
Gif loopingGif3;
Gif loopingGif4;
Gif loopingGif5;
Gif loopingGif6;
Gif loopingGif7;
GifMaker gifExport;

//minim sound library

import ddf.minim.spi.*;
import ddf.minim.signals.*;
import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.ugens.*;
import ddf.minim.effects.*;

// Variable for capture device
Capture video;
// Previous Frame
PImage prevFrame;
// How different must a pixel be to be a “motion” pixel
float threshold = 50;

float motionTotL;
float motionTotL1;
float motionTotL2;
float motionTotL3;
float motionTotR;
float motionTotR1;
float motionTotR2;
float motionTotR3;

float maxMotionL = 3000;
float maxRadiusL = 60;
float radiusL;

float maxMotionL1 = 3000;
float maxRadiusL1 = 60;
float radiusL1;

float maxMotionL2 = 3000;
float maxRadiusL2 = 60;
float radiusL2;

float maxMotionL3 = 3000;
float maxRadiusL3 = 60;
float radiusL3;

float maxMotionR = 3000;
float maxRadiusR = 60;
float radiusR;

float maxMotionR1 = 3000;
float maxRadiusR1 = 60;
float radiusR1;

float maxMotionR2 = 3000;
float maxRadiusR2 = 60;
float radiusR2;

float maxMotionR3 = 3000;
float maxRadiusR3 = 60;
float radiusR3;

float splitLine = 160;
float splitLine1 = 320;
float splitLine2 = 480;
float splitLine3 = 0;

int xsum = 0;
int ysum = 0;
int points = 0;
int xPosition = 0;
int yPosition = 0;

//Minim players accessing soundfile
Minim minim;
AudioSample player1;
AudioSample player2;
AudioSample player3;
AudioSample player4;
AudioSample player5;
AudioSample player6;
AudioSample player7;
AudioSample player8;

void setup() {
size(640, 480);
video = new Capture(this, width, height);
// Create an empty image the same size as the video
prevFrame = createImage(video.width, video.height, RGB);

loopingGif = new Gif(this, "DropGifditherwhite.gif");
loopingGif1 = new Gif(this, "DropGifBlue.gif");
loopingGif2 = new Gif(this, "DropGifGreen.gif");
loopingGif3 = new Gif(this, "DropGifYellow.gif");
loopingGif4 = new Gif(this, "DropGifRed.gif");
loopingGif5 = new Gif(this, "DropGifBlueDrk.gif");
loopingGif6 = new Gif(this, "DropGifOrange.gif");
loopingGif7 = new Gif(this, "DropGifPurple.gif");
minim = new Minim(this);

// load a file, give the AudioPlayer buffers that are 1024 samples long
// player = minim.loadFile("found.wav");

// load the guitar-note samples with 2048-sample buffers
player1 = minim.loadSample("1th_String_E_vbr.mp3", 2048);
player2 = minim.loadSample("2th_String_B_vbr.mp3", 2048);
player3 = minim.loadSample("3th_String_G_vbr.mp3", 2048);
player4 = minim.loadSample("4th_String_D_vbr.mp3", 2048);
player5 = minim.loadSample("5th_String_A_vbr.mp3", 2048);
player6 = minim.loadSample("6th_String_E_vbr.mp3", 2048);
player7 = minim.loadSample("C_vbr.mp3", 2048);
player8 = minim.loadSample("D_vbr.mp3", 2048);

video.start(); // start capturing (required in Processing 2 and later)
}

void captureEvent(Capture video) {
// Before we read the new frame, always save the previous frame for comparison
prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
prevFrame.updatePixels();
video.read(); // read the new image from the camera
}

void draw() {

// make the display, camera and previous-frame pixel arrays available
loadPixels();
video.loadPixels();
prevFrame.loadPixels();
//reset motion amounts
motionTotL = 0;
motionTotL1 = 0;
motionTotL2 = 0;
motionTotL3 = 0;
motionTotR = 0;
motionTotR1 = 0;
motionTotR2 = 0;
motionTotR3 = 0;
xsum = 0;
ysum = 0;
points = 0;

// Begin loop to walk through every pixel
for (int x = 0; x < video.width; x ++ ) {
for (int y = 0; y < video.height; y ++ ) {

int loc = x + y*video.width; // Step 1, what is the 1D pixel location
color current = video.pixels[loc]; // Step 2, what is the current color
color previous = prevFrame.pixels[loc]; // Step 3, what is the previous color

// Step 4, compare colors (previous vs. current)
float r1 = red(current);
float g1 = green(current);
float b1 = blue(current);
float r2 = red(previous);
float g2 = green(previous);
float b2 = blue(previous);
float diff = dist(r1, g1, b1, r2, g2, b2);

// Step 5, How different are the colors?
// If the color at that pixel has changed, then there is motion at that pixel.
if (diff > threshold) {
// If motion, tint the pixel blue
pixels[loc] = color(0, 50, 150);

xsum += x; // holder variable
ysum += y; // holder variable
points++; // how many points have changed since the last frame

// tally motion per zone (bodies reconstructed from the variable names;
// the originals were lost in the post's formatting)
//upper left 1
if (x < splitLine && y <= height/2) { motionTotL += diff; }
//lower left 1
else if (x < splitLine && y > height/2) { motionTotL1 += diff; }
//upper left 2
else if (x > splitLine && x < splitLine1 && y < height/2) { motionTotL2 += diff; }
//lower left 2
else if (x > splitLine && x < splitLine1 && y > height/2) { motionTotL3 += diff; }
//uppermid right 1
else if (x > splitLine1 && x < splitLine2 && y <= height/2) { motionTotR += diff; }
//lowermid right 1
else if (x > splitLine1 && x < splitLine2 && y > height/2) { motionTotR1 += diff; }
//upper right 2
else if (x > splitLine2 && y < height/2) { motionTotR2 += diff; }
//lower right 2
else if (x > splitLine2 && y > height/2) { motionTotR3 += diff; }

} else {
// If not, display black
pixels[loc] = color(0);
}
}
}
updatePixels();

//line(splitLine3,240,width, 240);

///draw the gifs

if (points > 80) {
// centroid of the changed pixels: where the droplet hit
xPosition = xsum / points;
yPosition = ysum / points;

// per-zone icon and tone (bodies reconstructed: draw that zone's droplet
// GIF at the centroid and trigger its note; the zone-to-player mapping
// shown here is an assumption)
//Upper left 1
if (xPosition < splitLine && yPosition < height/2) {
image(loopingGif, xPosition, yPosition);
player1.trigger(); // E string
}
//Lower left 1
else if (xPosition < splitLine && yPosition > height/2) {
image(loopingGif1, xPosition, yPosition);
player2.trigger();
}
//Upper Left 2
else if (xPosition > splitLine && xPosition < splitLine1 && yPosition < height/2) {
image(loopingGif2, xPosition, yPosition);
player3.trigger();
}
//Lower Left 2
else if (xPosition > splitLine && xPosition < splitLine1 && yPosition > height/2) {
image(loopingGif3, xPosition, yPosition);
player4.trigger();
}
//Uppermid right 1
else if (xPosition > splitLine1 && xPosition < splitLine2 && yPosition < height/2) {
image(loopingGif4, xPosition, yPosition);
player5.trigger();
}
//Uppermid right 2
else if (xPosition > splitLine2 && yPosition < height/2) {
image(loopingGif5, xPosition, yPosition);
player6.trigger();
}
//Lowermid right 1
else if (xPosition > splitLine1 && xPosition < splitLine2 && yPosition > height/2) {
image(loopingGif6, xPosition, yPosition);
player7.trigger();
}
//Lower right 2
else if (xPosition > splitLine2 && yPosition > height/2) {
image(loopingGif7, xPosition, yPosition);
player8.trigger();
}
}

println("Motion L: " + motionTotL + " Motion R: " + motionTotR);
}


Processing Fishing


For the water park project I decided to use colour tracking to affect a Processing animation. I created a “fish tank” with various coloured laser-cut fish and “barnacles” that the user fishes out of the tank using a magnetic fishing rod. When a fish or barnacle comes into the webcam’s view, its colour is detected and the swimming fish in my Processing animation change colour based on the fish or barnacle caught. The project is interactive because the user physically effects the change in the animation and can see that change. It fits the theme of water in three ways. First, thematically, the visual elements are related to fish and fishing. Second, the fish and barnacles sit in a tank of water waiting to be caught. Third, the Processing animation illustrates swimming, undulating fish. I modified code from:




First I decided to have “fish” as my colour-tracking objects, so I had them laser cut from transparent blue, orange and green plexi. My plan was to use these fish alone and in combination to create mixed colours (explained in the experiments below).




Once I abandoned my original idea of having the fish on sliders, I changed my vessel to a fish tank and made a magnetic fishing rod to retrieve the coloured pieces, so the webcam could view them as they came out of the tank. I made the fishing rod by drilling a hole in both ends of a square dowel and feeding a string through both holes, then gluing a magnet to the long end of the string.



The laser cut fish outfitted with magnets. All the pieces ready to go into the tank.

Project Context:

This is an example of a project using Processing and OpenCV. It is similar to mine because it is also a one-player game using the same software tools. Instead of colour tracking, the project uses motion detection, comparing the current frame with the last frame: if there was movement, a bubble pops; if not, a bubble is drawn. The goal is to pop all the bubbles. His code also has to update constantly to look for new movement, whereas my code constantly updates to look for new colours. Like my game, there is no set ending or event in the code once all the bubbles have been popped, as his project is also a basic first start. I would have liked to add a different Processing animation that occurs once all the fish and barnacles have been caught.

Project Experiments

1: This is my first assignment from Body Centrics using Processing

For this assignment I chose a sensor I had never used before: the flex sensor. The first human movement that came to mind with regard to bending or flexing was exercise. This set-up of sensors determines how effectively someone is performing a specific exercise; in this case, a squat.


Flex sensors are attached behind each knee and a light sensor is placed under the person performing the exercise. The maximum is reached when both flex sensors are fully bent by the knees and the light sensor reads 105 (in this case). At this point, the knees are fully bent in a squatting position and the body is low to the ground, blocking available light from the light sensor. I modified the provided code by changing the layout, colours, and circle positions. I then made the circles grow as they approach their respective maximums, so that as the person performs the exercise the circles grow and shrink accordingly, with a large circle for each sensor being the goal.
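The growing and shrinking circles are a range-mapping problem, like Processing’s map(): normalize the raw reading between its calibrated minimum and maximum, then scale it to a radius. A sketch of that mapping (the 0–105 light-sensor range comes from the post; the radius values are placeholders):

```java
// Map a raw sensor reading between its calibrated min and max onto a
// circle radius, like Processing's map() with clamping. The min/max
// must be recalibrated per room, as the post notes.
public class SquatMeter {
    public static float mapReading(float value, float inMin, float inMax,
                                   float outMin, float outMax) {
        float t = (value - inMin) / (inMax - inMin);
        t = Math.max(0, Math.min(1, t));  // clamp to the calibrated range
        return outMin + t * (outMax - outMin);
    }

    public static void main(String[] args) {
        // light sensor reads 105 at full squat (per the post); radius 10..80 is a placeholder
        System.out.println(mapReading(105, 0, 105, 10, 80));  // 80.0, full-size circle
        System.out.println(mapReading(0, 0, 105, 10, 80));    // 10.0, resting
    }
}
```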



These are screen shots of the Processing animation. The first image shows the resting body where both flex sensors are straight and flat and full ambient light is reaching the light sensor. The second image shows the sensors reaching their maximums. All maximums and minimums in the code have to be altered and calibrated in each new room due to differing light sources. I chose a yellow circle for the light sensor and two pink circles for the flex sensors.



For the set-up, the breadboard and Arduino sit on the floor underneath the person. The two flex sensors are affixed to velcro bands that wrap around the leg above the knee, with each sensor stuck onto the leg underneath the knee bend. This set-up could be modified to test the efficacy of a push-up (with flex sensors at the elbow bend) or a sit-up (with the flex sensors along the stomach muscles).

2: Processing sketch of stick figure me

This is the second time I used Processing in class. I made a stick figure of myself that shows when the mouse is pressed. This was a good way for me to learn how to create a drawing in Processing from scratch. I started with a 100×200 canvas and centred the middle line, triangle skirt and ellipse head. I then added two arcs for bangs, with a gap in the middle for a part, and two long rectangles for hair, followed by triangle shoes and two small ellipses for eyes. Finally, I added an if statement so that the drawTegan() function is called while the mouse is pressed.

3: Face tracking Body Centrics assignment



You will need white eyeshadow or face paint, black eyeshadow or face paint, foundation, tape and an optional curling iron.


First I did my hair. I curled all my hair to give it some body. I then added two buns on either side of my head to obscure the shape of my head. I also used a large front section to swoop over one of my eyes and pinned it to the side. The long curls also obscure the shape of my jawline.


I then wanted to remove dimension from my other ocular region. I used white eyeshadow all around my eye and on my eyelashes. I then used foundation to remove any darkness from my eyebrow.


My next step was obscuring the bridge of my nose. I taped a large triangular shape across my nose and filled it in with black eyeshadow.


To remove dimension from my mouth region, I taped another shape onto my lips and filled the lips in with foundation and the skin part with black eyeshadow to give the illusion of opposite light and dark areas.


Final look

This anti face tutorial is based on Adam Harvey’s CV Dazzle project and follows his tips for reclaiming privacy.

1. Makeup

For my look, I avoid enhancing or amplifying any of my facial features. Instead I use dark makeup on light areas and light makeup on dark areas to confuse face-tracking technology.

2. Nose Bridge

The nose bridge is a key element that face tracking looks for. I obscured it by using dark makeup in an irregular shape over the nose bridge area.

3. Eyes

The position and darkness of the eyes are key features for face tracking. Both ocular regions are dealt with here in different ways. A large section of hair completely obscured one and light makeup conceals the dimension of the other ocular region.

4. Masks

No masks or partial masks are used to conceal my face.

5. Head

Face detection can also be blocked by obscuring the elliptical shape of one’s head. I used buns on either side of my head, and long curls to break up the roundness of my head and jaw.

6. Asymmetry

Most faces are symmetrical and face detection software relies on this. All hair and makeup elements are different on each side of my face.

4: Colour Tracking

This is me experimenting with colour tracking, using the gesture-based colour tracker Nick provided us with. When I mouse-press something in view, its colour is recorded and the tracker draws a blob of that colour over the object and anything else that is that colour. Here I learned that the tracker is not as specific as I would like, and because of lighting it picks up on things you don’t want it to. This issue came up a lot when I was testing my project: anything I was wearing, the colour of my skin, the colour of the wall behind the tank, or even low lighting would make the colour tracker pick up on things I didn’t want it to.
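Colour tracking of this kind usually reduces to a colour-distance threshold: record a target colour on mouse press, then flag any pixel within some distance of it. A sketch of that test (plain Java; the helper names and threshold are illustrative), which also shows why a loose threshold picks up skin, walls and clothing:

```java
// Sketch of mouse-press colour tracking: record a target colour, then flag
// any pixel whose colour distance to the target is under a threshold. A
// loose threshold is why skin, walls and clothing get picked up too.
public class ColourTracker {
    public static double colourDist(int r1, int g1, int b1, int r2, int g2, int b2) {
        int dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }

    public static boolean matches(int[] pixel, int[] target, double threshold) {
        return colourDist(pixel[0], pixel[1], pixel[2],
                          target[0], target[1], target[2]) < threshold;
    }

    public static void main(String[] args) {
        int[] orangeFish = {255, 140, 0};  // colour recorded on mouse press
        System.out.println(matches(new int[]{250, 135, 10}, orangeFish, 40));  // true
        System.out.println(matches(new int[]{40, 40, 200}, orangeFish, 40));   // false
    }
}
```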

5: Modification of fish_sketch

At this point I had decided to modify this sketch for my game project. Here I try changing variables like size, position, fill colour and stroke weight. When I make the background colour (0,0), the fish do not clear their previous positions; instead you see the track of where each fish has been throughout the sketch, and therefore the colours that have been tracked throughout the game. This could potentially become part of the game in the future: a multi-player game where whoever’s colour covers the highest percentage of the screen at the end would win. It reminds me of this:


6: Fish sliders



My first idea was to have this plastic container as my tank. I laser cut rectangles of plexi that I glued together to create channels for sliders that the fish would be attached to. The player would change the position of the fish to change the colour of the Processing fish, as in the final game, but in this set-up the fish could be placed in front of or behind each other. The colour tracker would then be able to pick up specific colour mixes, as the plexi is transparent and would create new colours (two fish together, or all three). In the end I abandoned this idea because the sliders I made didn’t slide properly, and the fish would not stay attached to them.

7: Fish transparency

The next issue with my original idea was that the colour tracker had trouble identifying the colour mixes, as they ended up too dark. In this video I attempt to have the colour tracker identify orangeblueFish, bluegreenFish and allFish; they all just end up a dark purplish colour. Another issue is that the surfaces of the fish reflect the computer screen and confuse the tracker. At this point I decided to switch to my final “Processing Fishing” idea. Beyond the laser-cut fish, I also wanted other coloured items that the tracker could identify.

8: Foam sea creatures

photo 3 (17) photo 1 (19)


At this point I decided to try tracking these foam sticker sea creatures. I first tried gluing magnets to them so they could sit in the tank with the laser-cut fish, but no amount of weight would make them sink to the bottom. I then attached them to a stick so the player could add them into webcam view, but this didn’t really fit with the fishing idea I had come up with. I then decided to make coloured “barnacles” as extra tank items that do sink to the bottom. In this video I show the tracker identifying all the colours of the extra foam creatures. Some were too close to the colours of the laser-cut fish, which confused the colour tracker.

Project 3: City as Amusement Park- Simon Says Lanterns (Tegan Power & Rida Rabbani)

Project Proposal:

Project Video:

Project Description:

Our project is an interactive two-player game installation using XBee communication. It is inspired by Simon Says, but has been altered to be two-player and to transform an existing space into an installation for game play. Lanterns of different sizes and colours communicate with each other, and players attempt to match the colour sequence that the other player has input. A set of three small lanterns (blue, green, red) have large arcade buttons that player one may press six times in any order. This input corresponds to the large lanterns hanging on the other side of the room. Once player one has input the sequence, the corresponding LED string colours blink in order on player two’s side. Player two must then match the sequence, inputting their response by tapping the large lanterns, which activates the tilt switch in each. If player two matches the sequence correctly, all six lanterns on both ends blink in congratulations; if player two fails, a buzzer sounds. This game could transform any space, indoor or outdoor, and could be packaged and sold as an “install-it-yourself” party game.


Transmit (small lanterns) code:

Receive (large lanterns) code:

Writing the code was by far the most challenging aspect of this project and took several days to evolve and become functional. We first tested Arduino-to-Arduino XBee communication by having buttons on one end control LEDs on the other. We did this by assigning each button pin a number (1, 2, 3) that would serial-print to the LED side. The LED side would read the incoming number and light up the corresponding LED (1 = blue, 2 = green, 3 = red). (We wanted the player one and player two roles to be interchangeable, but once we realized the difficulty of the code we had taken on, we decided to stick to one-way communication that would simply reset at the end of a turn.)

Once this was set up, we focused on the receiving end. The first thing we did there was create variables for player one’s and player two’s sequence inputs. Player one has six button presses, so the code looks for button presses and fills the six “YourHold” slots. We then added booleans to indicate when each player’s turn was over. Once YourTurnIsOver is true, the input sequence is digitally written to the corresponding LED strings in the large lanterns for player two to memorize. Player two then fills the six “MyHold” slots the same way, by button presses of 1, 2 or 3; when finished, MyTurnIsOver is true.

At this point the YourHold and MyHold values are compared. Each of the six comparisons is nested inside the last, because if any value does not match there is no need to check the others. If all six match, all LedPin outputs blink; if they do not, the buzzer on pin 13 is set HIGH. The system then resets for the next match.
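The nested comparison described above can also be flattened into a single loop that stops at the first mismatch. A standalone sketch of that check (array names mirror the YourHold/MyHold variables from our code; the rest is illustrative, not our exact Arduino sketch):

```cpp
const int SEQ_LEN = 6;

// Returns true only if player two's taps (myHold) reproduce
// player one's button sequence (yourHold) exactly.
// Values are 1 = blue, 2 = green, 3 = red, as serial-printed over XBee.
bool sequenceMatches(const int yourHold[SEQ_LEN], const int myHold[SEQ_LEN]) {
    for (int i = 0; i < SEQ_LEN; ++i) {
        if (yourHold[i] != myHold[i])
            return false;  // first mismatch: no need to check the others
    }
    return true;  // all six match: blink every lantern in congratulations
}
```

On a match the sketch would blink all LedPin outputs; on a mismatch it would drive the buzzer pin HIGH, as described above.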






Case Studies:

Case Study 1-Intel World Interactive Amusement Park

Intel came to Inhance needing an environment in which to explain their Intelligent System Frameworks in an educational and enjoyable way.

Interaction: It features an animated theme park that allows up to thirty people to engage with the wall, bringing up floating windows of content amidst the rides, roller coasters and people moving about the park.

Technology: The result is the Intel Amusement Park Experience, an interactive multitouch application displayed on a 3×2 multi-screen LCD wall. It integrates with a Social Media Photo Booth App that allows attendees to take photos that superimpose their faces on a roller coaster ride. The photos can be sent to Facebook, Twitter, the Intel®World Wall and their email.

Narrative: The wall brings all of Intel’s products into one environment to show connectivity throughout the entire park. The goal was to deliver the same emotion one experiences in an amusement park, drawing attendees to the wall to touch it and learn. The result was constant excitement on people’s faces and large clusters of people touching the wall. Created primarily for trade shows, including Embedded World, Mobile World Congress, Design West and Cornell Cup, it was highly successful.

Case Study 2-Lagoon Amusement Park

Amusement parks are all about speed. Whether it’s riding a massive roller coaster or plummeting 70 feet inside a tubular water slide, guests want to go fast.

Interaction: Lagoon is now able to satisfy the needs of its employees and guests with the updated card-printing technology, bringing the park back to its desired speed.

Narrative: Now that Lagoon Amusement Park has established its current system, computer stations at the gates can track Season Passport access information and provide valuable marketing information. “We’re trying to increase our per person usage through promotions such as our Season Passport Holder coupon books,” Young said. This allows the park to operate at full capacity all day long, letting guests get their season passports quickly and in a fun way.

Case Study 3- XD Dark Ride

Set in the iconic Filmpark Babelsberg just outside Berlin, this full turnkey project was the first installation of the XD Dark Ride in the world.

Interaction: XD Dark Ride is an immersive and interactive theatre with a capacity of 24 seats, so many people can ride at once.

Technology: Adding interactivity to state-of-the-art immersive special effects, it has revolutionized the world of ride simulation by combining video game technology with 3D animated movies.

Narrative: The first XD Dark Ride theatre project in Europe, it is a conversion of a pre-existing spherical structure into a one-of-a-kind interactive dome integrating the world’s largest interactive screen (16 m wide).
Case Study 4- Wizarding World of Harry Potter
The latest installment of The Wizarding World of Harry Potter is scheduled to open this summer in Orlando’s Universal Studios theme park. The new attraction features London and the magic-packed Diagon Alley.
Interaction: Guests will not only be able to enter the arches of the Leicester Square facade but will be immersed in a bustling wizarding hub within a Muggle city, where towering buildings are slightly askew, with steep staircases and jagged edges galore.
Technology: In the real-life version, visitors will be in awe of the marble lobby and cavernous passageways. They’ll take off from here on a multi-sensory thrill ride through the vaults. And the dragon that will perch atop the bank building (reminiscent of when it escapes from the bank in the series) really does blow a giant ball of fire quite frequently. The thrill ride requires visitors to don 3D Glasses and features 360-degree themed sets, intense 4K animations, and 3D projection systems for complete immersion.
Narrative: Guests around the world were impressed by the immersive experience Universal created and the meticulous attention to detail used to bring the Harry Potter stories to life, realizing the world of the books as a real-life version of the story.

Photos and Diagrams:


photo 1photo 2


Soldering the arcade button to longer leads. Installing the button into the small lanterns by wrapping wire around it and the metal piece in the centre of the lantern. LED string is fit into the small lanterns and affixed to the sides to keep it in. Long leads come out the bottom for later connection to the Arduino.

10841221_10152630338748477_483116723_n-2photo 3photo 5

All three small lanterns are affixed to the black player one board. Construction paper covers with button holes are attached to the top to hide electronics inside each lantern.

photo 4-2photo (22)


Large lanterns are also filled with corresponding coloured LED string affixed to the edges. The tilt sensor is soldered to long leads and affixed to the bottom of the metal lantern structure. The tilt sensor had to be placed at a very specific angle so that player two’s tap would fully close the switch. Long leads are soldered to the other end of the tilt switches and LED string for connection to the Arduino.

photo 1-2photo 2-2


Final setup: Hanging large lanterns for player two, board mounted small lanterns for player one.


photo 2-3photo 1-3




  • 2 Arduinos
  • 2 Xbees
  • 6 Lanterns
  • 3 Tilt sensors
  • 6 sets of LED string
  • 3 Buttons
  • 2 9V Batteries

Circuit Diagram:

small lanterns breadboard prototype: buttons are replaced with large coloured arcade buttons installed in small lanterns. LEDs are replaced with red, green and blue LED string inside small lanterns.


large lanterns breadboard prototype: buttons are replaced with tilt sensors. LEDs are replaced with red, green and blue LED string inside large lanterns.






Notes on process:

We started off thinking of different ideas generated by the theme of the project. With the theme being an amusement park, we wanted something visual that engaged people to join in or interact with the installation. Initially we wanted to create an environment that could be experienced both inside and outside; however, once we started working with our Simon Says idea, it really didn’t matter where the lanterns were placed as long as they could establish communication.

Then we had to decide whether we wanted a one-player game, with a single player interacting with the Simon Says lanterns, or two players playing against each other while the rest of the audience enjoyed the lanterns creating a pattern and lighting up.

After settling on the two-player game installation we had to work with different materials. At first we were thinking of using balloons, but we decided on lights and lanterns with the XBees inside them, as we could not find balloons with enough cavity space. When we proposed the idea we were also advised that larger lanterns and materials would make a bigger impact.

The code was the more complicated part. With a lot of help from Ryan we got code that stored arrays and chunks of sequences; the point where we got stuck was getting the buttons to respond to the sequence of lights.

At the same time, we managed the materials and sensors and how they respond to one another. On the day of the final presentation we were still experimenting with the stability of the materials as well as the code. It was more complicated than we thought: although the code was able to store the sequence, the XBee communication was lost along the line, and when we did get the radios to communicate, one of the buttons kept sending faulty data; at some point our simple on/off button became a sensor detecting movement near it. It was finally with Ryan’s help that we got the circuit to work as a Simon Says game, but by then it was too late to set it up with the tilt sensors and lights on the larger lanterns, which, despite our last-minute attempts, would not receive any data for the LED lights unless connected directly to the Arduino.

Project Context:

Although the Simon Says Arduino demo was a very simple demonstration that did not use the XBees, it gave us an idea of how to send information back and forth and let us test the LEDs and match them using the buttons. The next step was to translate this into our more complicated wireless use of the Simon Says back-and-forth within the lanterns, making it more interactive.

These case studies helped us explore not only the potential of real-time technology but also how experiential and interactive attractions, sets and props add to the touch and feel of an environment. Provoking the senses and working with familiarity and surprise makes the audience curious about and interested in the space and its attractions.

“LightGarden” by Chris, Hart, Phuong & Tarik











A serene environment in which OCAD students are invited to unwind and get inspired by creating generative artwork with meditative motion control.

The final Creation & Computation project, a group project, required transforming a space into an “amusement park” using hardware, software, and XBee radio transceivers.

We immediately knew that we wanted to create an immersive visual and experiential project — an interactive space which would benefit OCAD students, inspire them, and help them to unwind and meditate.

The Experience










Image credit: Yushan Ji (Cynthia)

We chose to present our project at the Graduate Gallery, as the space has high ceilings, large unobscured empty walls and parquet flooring. We wanted the participants to feel immersed in the experience from the moment they entered our space. A branded plinth with a bowl of pulsing, glowing rock pebbles greets the participants and invites them to pick up one of the two “seed” controllers resting on the pebbles. These wireless, charcoal-coloured controllers have glowing LEDs inside them, and the colours of these lights attract the attention of participants from across the space.

Two short-throw projectors display the experience seamlessly onto a corner wall. By moving the seeds around, the participant can manipulate a brush cursor and draw on the display. The resulting drawing uses complex algorithms that create symmetric, mandala-like images. To enhance the kaleidoscope-like visual style, the projection is split between two walls with the point of symmetry centred on the corner, creating an illusion of three-dimensional depth and enhancing immersion.

By tilting the seed controller up, down, left, and right, the participant shifts the position of their brush cursor. Holding the right button draws patterns based on the active brush; clicking the left button changes the selected brush. Each brush is linked to a pre-determined colour, which is indicated by the LED light on the seed as well as by the on-screen cursor. Holding down both buttons for 3 seconds resets that participant’s drawing without affecting the other user.
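Tilt steering amounts to mapping accelerometer readings to a cursor velocity: the further the seed is tilted from level, the faster the cursor moves, clamped so it never jumps. A hedged sketch of one axis of that mapping (the level reading, gain, and speed cap are made-up values, not the ones from our Processing sketch):

```cpp
#include <algorithm>

// Map one accelerometer axis reading to a per-frame cursor delta.
// 'level' is the reading when the controller is held flat (assumed ~512
// for a 10-bit ADC); 'gain' converts tilt into pixels per frame;
// the result is clamped to +/- maxSpeed so noise can't fling the cursor.
double tiltToDelta(int reading, int level, double gain, double maxSpeed) {
    double delta = (reading - level) * gain;
    return std::max(-maxSpeed, std::min(maxSpeed, delta));
}
```

The same function applied to the second axis gives the vertical delta, and the two deltas are added to the cursor position each frame.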

To complement and enhance the meditative drawing experience, ambient music plays throughout, and wind-chime sounds are generated as the player uses the brushes. Bean bags were also available in the space to give participants the option of experiencing LightGarden while standing or sitting.

The visual style of the projections was inspired by:

  • Mandalas
  • Kaleidoscopes
  • Zen Gardens
  • Fractal patterns
  • Light Painting
  • Natural symmetries (trees, flowers, butterflies, jellyfish)









Image credit: Yushan Ji (Cynthia)

Interactivity Relationships

LightGarden is an interactive piece that incorporates various relationships between:

  • Person to Object: exhibited in the interaction between the player and the “seed” controller.
  • Object to Person: the visual feedback (the cursor on the screen responds predictably whenever the player tilts the controller or clicks a button, through changes in its location and appearance on the screen), as well as auditory feedback (the wind-chime sound fades in when the draw button is clicked), lets users know that they are in control of the drawing.
  • Object to Object: our initial plan was to use the Received Signal Strength Indicator to illustrate the relationship between the controller and the anchor (e.g. the shorter the distance between the anchor and the “seed” controller, the faster the pulsing light on the anchor goes).
  • Person to Person: since there are two “seed” controllers, two players can use their individual controllers to collaboratively produce generative art with different brushes and colours.

The Setup

  • 2 short-throw projectors
  • 2 controllers, each with an Arduino Fio, an XBee, an accelerometer, two momentary switches, one RGB LED, and a lithium polymer battery
  • 1 anchor point with an Arduino Uno, a central receiver XBee, and RGB LED
  • An Arduino Uno with Neo Pixel strip

System Diagram















The Software

Processing was used to receive user input and generate the brush effects. Two kinds of symmetry were modeled in the program: bilateral symmetry across the x-axis, and radial symmetry ranging from two to nine points. In addition to using different colors and drawing methods, each brush uses different kinds of symmetry, to ensure that each one feels significantly different.
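Radial symmetry means every brush position is repeated at n rotated copies around the centre, and bilateral symmetry adds a mirror of each copy across the x-axis. The same math in a standalone C++ sketch (the Processing code worked in screen coordinates; this uses the centre of symmetry as the origin for clarity):

```cpp
#include <cmath>
#include <vector>

const double PI = 3.14159265358979323846;

struct Pt { double x, y; };

// Generate the symmetric copies of one brush position.
// n is the radial order (two to nine in our sketch); if bilateral is
// true, each rotated point is also reflected across the x-axis.
std::vector<Pt> symmetryPoints(Pt p, int n, bool bilateral) {
    std::vector<Pt> out;
    for (int i = 0; i < n; ++i) {
        double a = 2.0 * PI * i / n;
        double c = std::cos(a), s = std::sin(a);
        Pt q{p.x * c - p.y * s, p.x * s + p.y * c};  // rotate about origin
        out.push_back(q);
        if (bilateral) out.push_back(Pt{q.x, -q.y});  // mirror copy
    }
    return out;
}
```

With sixfold radial symmetry plus bilateral reflection this yields twelve points per stroke, matching the “twelve cursor positions” the ripple brush draws around.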

Each controller is assigned two brushes, which it can switch between with the toggle button. A base class was written for the brushes that keeps track of its own drawing and overlay layers and handles all of the necessary symmetry generation. Each implemented brush then extends that class and overrides the default drawing method. There is also an extension of the default brush class that allows for smoothing, which is used by the sand brush.

One major downside discovered late in development was that the P2D rendering engine won’t actually make use of the graphics card unless drawing is done in the main draw loop of the Processing sketch. Most graphics work in the sketch is first rendered off-screen, then manipulated and combined to create the final layer, so as a result the graphics card was not utilized as effectively as it could have been.

Here is a listing of the four brushes implemented for the demonstration:











1. Ripple Brush

This brush uses a cyan color, and fades to white as it approaches the center of the screen. It uses both bilateral symmetry and a sixfold radial symmetry, which makes it well suited for flower-like patterns. It draws a radial burst of dots around each of the twelve cursor positions (six radial points reflected across the x-axis), and continually shifts their position to orient them toward the center of the screen. With smoothing effects applied, this creates multiple overlapping lines which interweave to create complex patterns.

2. Converge Brush

This brush uses a dark indigo color, and draws lines which converge toward the center of the drawing. It has bilateral symmetry and eightfold radial symmetry. As the lines approach the edge of the screen, a noise effect is applied to them, creating a textured effect. Because all lines converge, it creates a feeling of motion, and draws the viewer toward the center of the image.

3. Sand Brush

This brush uses a vibrant turquoise color, and like the ripple brush fades to white as it nears the center of the image. It draws a number of particles around the brush position; the size, number, and spread of these particles increases as the brush approaches the outer edge, creating a scatter effect. This brush uses sevenfold radial symmetry, but does not have bilateral symmetry applied, which allows it to draw spiral patterns which the other brushes cannot make.

4. Silk Brush

This brush uses a purple color and has the most complex drawing algorithm of the brushes. It generates nine quadratic curves originating from the position the stroke was started to the current brush position. The effect is like strands of thread pulled up from a canvas. The brush has bilateral symmetry but only threefold radial symmetry so that the pattern is not overwhelming. Because it creates such complex designs, it is well suited for creating subtle backgrounds behind the other brushes.

The Controllers and Receiver










Image credit: Yushan Ji (Cynthia) and Tarik El-Khateeb

Seed Controller – Physical Design

When considering the design and functionality of our controllers, we started the endeavour with a couple of goals. These goals were determined by very real limitations of our intended hardware, most notably the XBee transceiver and the 3-axis accelerometer. We knew we needed accelerometer data for our visuals, and in order to have reliable, consistent data, the base orientation of the accelerometer needed to be fairly standardized. Furthermore, the XBee transceiver’s signal strength drops severely when the line of sight is blocked, whether by hands or other physical objects. Taking this into consideration, we designed a controller that would suggest the correct way of being held. The single affordance we used to do this was an RGB LED that illuminates and signifies what we wanted to be the “front” of the controller.










Image credit: Tarik El-Khateeb and Phuong Vu.

Initially we started with hopes of creating a 3D-printed, custom-shaped controller (by amending a ready-made 3D model); however, after some experimentation and prototyping, we quickly came to the conclusion that it was not the right solution given the time constraints of the project. In the end, we decided to go with found objects that we could customize to suit our needs. A plastic soap dish became the unlikely candidate, and after some modifications we found it to be perfect for our requirements.

To further suggest controller orientation, we installed two momentary push-buttons that act as familiar prompts for how to hold it, preventing the user from aiming the controller with just one hand. These buttons also engage the drawing functions of the software and allow for customization of the visuals.

The interaction model was as follows:

  1. Right Button Pressed/Held Down – Draw pixels
  2. Left Button Pressed momentarily – Change draw mode
  3. Left and Right Buttons held down simultaneously for 1.5 seconds – clears that user’s canvas.










Image credit:  Tarik El-Khateeb

Seed Controller – Electronics

We decided early on to use the XBee transceivers as our wireless medium to enable cordless control of our graphical system. A natural fit when working with XBees is the Arduino Fio, a lightweight 3.3 V microcontroller that would fit into our enclosures. Using an Arduino meant that we could add an accelerometer, an RGB LED, two buttons and the XBee without worrying about a shortage of IO pins, as would be the case with an XBee alone. By programming the Fio to poll the momentary buttons, we could account for the duration of each button press. This allowed some basic on-device processing of data before sending instructions over wireless, helping reduce unnecessary transmission. Certain commands like “clear” and “change mode” were handled by the controllers themselves, significantly increasing the reliability of these functions.
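Polling the buttons and timing each press is what lets the controller distinguish a momentary tap (“change mode”) from the long two-button hold (“clear”) before anything is broadcast. A simplified, testable version of that classification (millisecond timestamps are passed in instead of calling Arduino’s millis(); the thresholds are illustrative, and note the writeup gives both 1.5 s and 3 s for the clear hold):

```cpp
// Classify a completed button interaction from its timestamps.
// On the Fio this would decide which command gets broadcast over the
// XBee, keeping "clear" and "change mode" decisions on the controller.
enum Command { NONE, CHANGE_MODE, CLEAR_CANVAS };

Command classifyPress(bool leftHeld, bool rightHeld,
                      unsigned long pressedAtMs, unsigned long releasedAtMs) {
    unsigned long duration = releasedAtMs - pressedAtMs;
    if (leftHeld && rightHeld && duration >= 1500)
        return CLEAR_CANVAS;   // both buttons held long enough
    if (leftHeld && !rightHeld && duration < 500)
        return CHANGE_MODE;    // momentary left tap
    return NONE;               // right-button drawing is streamed live
}
```

Because the decision happens on the device, a dropped radio packet can at worst lose a moment of drawing, never a mode change or an accidental canvas clear.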

In the initial period of development we had hoped to use the XBee-Arduino API, as certain features seemed very appealing to us. But as the experimenting began, it was clear that even though it was an API, there were still several low-level functions that significantly complicated the learning process and overall interfered with our development. We made a strategic decision to cut our losses with the API and instead use the more straightforward, yet significantly less reliable, method of broadcasting serial directly and parsing it in Processing after the wireless receiver relays it. Here is an example of the data being transmitted by each controller:
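The original capture of the transmitted data is not reproduced in this post. Purely as an illustration, assuming each controller broadcast a comma-separated line such as "1,512,488,0,1" (a hypothetical layout: controller id, two accelerometer axes, two button states), the receive-side parse reduces to splitting on commas. A sketch of that parsing in C++:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Parse one comma-separated serial line into integer fields.
// The field layout (id, accelX, accelY, btnLeft, btnRight) is a guess
// for illustration, not the actual packet our controllers sent.
std::vector<int> parsePacket(const std::string& line) {
    std::vector<int> fields;
    std::stringstream ss(line);
    std::string token;
    while (std::getline(ss, token, ','))
        fields.push_back(std::stoi(token));
    return fields;
}
```

The Processing sketch did the equivalent with its split() function on each serial line relayed by the receiver.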




LightGardenControllerSchematic LightGardenControllerBreadBoard





Circuit diagrams for the Seed Controllers.
Wireless Receiver

In order to receive the wireless commands from both of our controllers, we decided to create an illuminated receiver unit. The unit comprises an Arduino Uno, an RGB LED and an XBee; it acts as a simple relay, forwarding the serial data received via the XBee to the USB port of the computer for the Processing sketch to parse. We used the SoftwareSerial library to emulate a second serial port on the Uno so we could transmit the data as fast as it was being received. In terms of design, instead of hiding the device we decided to feature it prominently in the user’s view: a pulsing white LED indicates that it serves a functional purpose, and our hope was that it would remind users that wireless transmission is occurring, something we take for granted nowadays.

LightGarden_Reciever_Schematic LightGarden_Reciever_BreadBoard





Circuit diagrams for the Wireless Receiver.


Branding strategy:

The LightGarden logo is a mix of two fonts from the same typeface family: SangBleu serif and sans serif. The intentional mix of serif and sans serif fonts is a reference to the mix and variety of effects, colours and brushes that are featured in the projections.

The icon consists of outlines of four seeds in motion symbolizing the four cardinal directions, four members of the group as well as the four main colours used in the visual projection.


Image credit: Tarik El-Khateeb

Colour strategy:

Purple, Turquoise, Cyan and Indigo are the colours chosen for the brushes. The rationale behind using cold colours instead of warm colours is that the cold hues have a calming effect as they are visual triggers associated with water and the sky.

Purple reflects imagination, Turquoise is related to health and well-being, Cyan represents peace and tranquility and Indigo stimulates productivity.


Sound plays a major role in our project. It is an indispensable element; without it the experience cannot be whole. Because the main theme of our project is to create a meditative environment, it was important to choose sound that was itself meditative: enhancing rather than distracting from the visual experience. We needed a sound that was organic, could be looped, and yet would not bore participants in the long run.

To fulfill all of the aforementioned requirements, we decided to go with ambient music, an atmospheric, mood-inducing genre. The song “Hibernation” by Sync24 (Sync24, 2005) was selected as the background music. Using Adobe Audition (Adobe, 2014), we cut out the intro and outro of the song and beat-matched the ending and beginning of the edited track so that it can be seamlessly looped.










Image credit: Screen captures from Adobe Audition

Sound was also used to give auditory feedback to the user of our “seed” controller: whenever the player clicks the draw button, a sound is played to signal that the action of drawing is being carried out. For this purpose we employed the sound of wind chimes, known for inducing an atmospheric sensation, as used in Ambient Mixer (Ambient Mixer, 2014). In our application the ambient song plays repeatedly in the background, whereas the wind-chime sound fades in and out every time the player clicks and releases the draw button, allowing the chimes to fuse organically into the ambient music. To do this we used Beads, a Processing library for handling real-time audio (Beads project, 2014). Beads contains features for playing audio files and for generating timed transitions of the audio signal, i.e. sequences of changes in its amplitude. When the draw button is clicked, the amplitude of the wind-chime signal increases; conversely, when the draw button is released, its amplitude decreases.
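The fade behaviour is an amplitude envelope: on press the chime gain ramps toward 1, on release it ramps back to 0, so the chimes swell into and out of the ambient bed instead of clicking on and off. A minimal sketch of one envelope step (the ramp rate is illustrative; in our project Beads’ envelope objects handled this):

```cpp
// Advance a linear amplitude envelope by one control frame.
// 'target' is 1.0 while the draw button is held, 0.0 once released;
// 'step' is the per-frame ramp rate (illustrative, not a Beads value).
double envelopeStep(double current, double target, double step) {
    if (current < target) {
        current += step;
        if (current > target) current = target;  // clamp at the target
    } else if (current > target) {
        current -= step;
        if (current < target) current = target;
    }
    return current;
}
```

Calling this every frame with the button state mapped to the target produces the fade-in on click and fade-out on release described above.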


Case Studies

One: Pirates of the Caribbean: Battle for Buccaneer Gold

Pirates of the Caribbean: Battle for Buccaneer Gold is a virtual reality ride and interaction game at DisneyQuest, an “indoor interactive theme park”, located in Downtown Disney at the Walt Disney World Resort in Florida. (Wikipedia. 2014)

The attraction is 5 minutes long and follows a linear storyline in which Jolly Roger the Ghost Pirate appears on screen and tells the participants that their pirate ship is seeking treasure and that they can win this treasure by sinking other ships and taking their loot. The ship sails through different islands and comes across many ships to battle. 4:30 minutes into the ride, the pirate ghost re-appears and informs the players that they have to battle him and his army of skeletons in order to be able to keep any treasure they won by battling the ships. Once all the ghosts and skeletons have been defeated the final score appears on the screen.

The attraction can be experienced by up to five participants. One individual steers the pirate ship using the realistic helm on the attraction, inside a detail-rich 3D computer-generated virtual ocean with islands and ships. Up to four players control cannons to destroy other ships; the cannons use wireless technology to “shoot” virtual cannonballs on the screen.

The attraction uses wrap-around 3D screens, 3D surround sound, and a motion-platform ship that fully engages the participants and makes them feel like real pirates on a real ship. (Shochet. 2001)


Two: Universal Studios Transformers: The Ride 3D

Transformers: The Ride 3D (Universal Studios, 2011) is a 3D indoor amusement ride situated in Universal Studios Hollywood, Universal Studios Florida and Universal Studios Singapore. The ride is an exemplary case study of how a thrill ride, when combined with visual, auditory and physical simulation technologies, can create an experience so immersive that it blurs the borderline between fiction and reality.

The setup of this attraction consists of a vehicle mounted on a motion platform that runs along a 610-metre track. Each vehicle can carry up to 12 riders, who throughout the ride are exposed to different kinds of effects, such as motion, wind (including hot air and air blasts), water spray, fog, vibration, and 18-metre-high 3D projections showing various Transformers characters (Wikipedia, 2014). Along the ride, participants have a chance to “fight along side with Optimus and protect the AllSpark from Decepticons over four stories tall” (Universal Studios, 2011).


Three: Nintendo Amiibo

The Nintendo Amiibo platform is a combination of gaming consoles and physical artifacts that take the form of well-known Nintendo character figurines (Wikipedia, 2014). The platform is one of many following the same trend: small NFC (near-field communication) equipped devices that, when paired with a console, add features to that console or game. NFC is a technology built up from RFID (Radio Frequency Identification), and most smartphones are now equipped with it (AmiiboToys, 2014).

The Amiibos have a small amount of memory (1-4 KB) and allow certain games to store data on the figurine itself (AmiiboToys, 2014). One example is the newly released Super Smash Bros. game for the Wii U: the figurines “contain” NPCs (non-playable characters) that match the appearance of the character. These characters improve their abilities based on your own playing habits, and apparently become quite hard to beat! (IGN, 2014)

The interesting aspect of the Amiibo line, and others like it, is the interaction between the digital representation of the character and the physical figurine itself. By using NFC, the experience seems almost magical, something a physical connection would most likely ruin. There is a relationship between the player and the object, but also between the player and the on-screen character, especially when that character aggravates the player because its skills are improving. The transparency of the technology helps dissolve the boundary between the physical object and the fully animated character.


Four: Disney MagicBand

The fourth case study focuses not on an attraction in an amusement park but on a new one-billion-dollar wearable technology introduced in the Walt Disney parks: the MagicBand (Wikipedia, 2014).

The MagicBand is a waterproof plastic wristband that contains a short-range RFID chip as well as Bluetooth technology. The bands come in adult and child sizes and store information on them. The wearer can use them as a hotel room key, park ticket, special fast-pass ticket, photo pass, and as a payment method for food, beverages and merchandise (Ada, 2014).

The MagicBands also contain a 2.4 GHz transmitter for longer-range wireless communication, which lets the system track the band’s location within the parks and link on-ride photos and videos to the guest’s photo-pass account.

Thomas Staggs, Chairman of Walt Disney Theme Parks and Resorts, says that the band in the future might enable characters inside the park to address kids by their name. “The more that their visit can seem personalized, the better. If, by virtue of the MagicBand, the princess knows the kid’s name is Suzy… the experience becomes more personalized,” says Staggs. (Panzarino. 2013)


References & Project Context

3D printing:



Adobe. 2014. Adobe Audition. Retrieved from

Ambient Mixer. (2014). Wind chimes wide stereo. Retrieved from

Beads project. 2014. Beads library. Retrieved from

Sync24. “Hibernation.” Chillogram., 22 December 2005. Web. 01 Dec. 2014. Retrieved from


Case Study 1:

Disney Quest – Explore Zone. Retrieved from

Shochet, J. and Banker, T. 2001. GDC 2001: Interactive Theme Park Rides. Retrieved from
Wikipedia. 2014. Disney Quest. Retrieved from


Case Study 2:

Inside the Magic. 2012. Transformers: The Ride 3D ride & queue experience at Universal Studios Hollywood. Retrieved from

Universal Studios. 2011. Transformers: The Ride 3D. Retrieved from
Wikipedia. 2014. Transformers: The Ride. Retrieved from


Case Study 3:

AmiiboToys, (2014) Inside Amiibo: A technical look at Nintendo’s new figures. Retrieved from

IGN, (2014). E3 2014: Nintendo’s Amiibo Toy Project Revealed – IGN. Retrieved from [Accessed 10 Dec. 2014].

Wikipedia. (2014). Amiibo. Retrieved from


Case Study 4:

Ada. 2014. Making the Band – MagicBand Teardown and More. Retrieved from

Panzarino, M. 2013. Disney gets into wearable tech with the MagicBand. Retrieved from

Wikipedia. 2014. MyMagic+. Retrieved from


Project Inspirations / Context:

The City of Love – Wearables and more.. Chen & Frank & Mehnaz

IMG_0354 (1)

Ideation & Brainstorming                                                                                                                           

When we decided to create a wearable device for an amusement park environment, our first thoughts centred on human emotion and human contact. Since Toronto is an immigrant city, our first idea was to let people connect on a different level with loved ones far away. We thought we could map a camera view of a person onto an object, with matching installations on both sides of the world for two people communicating through computers. When we found out that XBees cannot travel that far, we reshaped our thoughts around keeping it to a short distance while adding more love to our idea.

The next step was developing the idea of HUGGING in a meaningful manner, with a narrative behind it.

IMG_0356         IMG_0352        IMG_0349


On a street in Toronto, the large installation takes its place. Made of plastic (white acrylic) boxes, the structure represents the cityscape. Two projectors are installed so that they project onto all four sides of the sculpture. Using projection-mapping software, the images represent the cities known as the most romantic in the world, alongside an image of Toronto.

One block carries a flashing banner asking: “Can Toronto Be The City of Love?”

One person wears a shirt with a sewn-in wearable device (XBee, LilyPad and conductive material) and welcomes visitors, who also wear a conductive shirt or a necklace. When a visitor hugs the person wearing the device, they close the circuit, and an identifying coloured LED lights up on the device-wearer’s shirt. This colour represents the first person to hug. At the same time, bubbles of the same colour start floating up the structure’s surface, representing it filling with love.

When a second person hugs, a new coloured LED takes its place and a new colour starts floating up the structure. As more people hug, the structure collects more colours, representing more love. For people without the shirts, such as passers-by, we designed a necklace to share the experience. The shirts and the necklace can be thought of as a marketing feature or as support for a cause; as a result of public contribution, they could be the key items to be sold and/or given away.

IMG_0433                                        IMG_0434

Necklace with added conductive fabric: we also used sheet copper to connect the two pieces of the necklace, giving it enough flex to reach and close the circuit on the shirts.

 3D Printing:

Before settling on the cityscape idea, we considered making a small 3D-printed model of the actual sculpted piece.


Rhino 3D model representing HUG


Rhino 3D Sketch


Four views of Rhino 3D model



The City of Love is made of;

3 wearable shirts,

LilyPad Arduino 328 Main Board

Lilypad XBee

2 XBee devices

JST Lipo Battery connector

Arduino board


Conductive fabric

Conductive thread



IMG_2684  IMG_0418  IMG_2691 (1)



How did we make it work?

We decided it would be better if the person wore the device, so we chose the LilyPad Arduino 328 Main Board, an XBee and a LilyPad XBee to send and receive the wireless signal.

As this project was our first time working with XBee, we needed to experiment with the XBees many times.

1. XBee—XBee: two XBees send and receive signals from each other.

2. XBee—Arduino: we used an XBee on a breadboard through the Arduino to send and receive signals to another XBee. We had a problem here: we couldn’t upload code to the Arduino while it was connected to the XBee, so we had to disconnect the two pins (TX and RX) during upload so the Arduino could receive the code.


3. XBee—LilyPad Arduino & XBee,

IMG_2691 (1)


The first problem was that we couldn’t send a signal from the Arduino to the XBee, although we could receive a signal from the XBee on the Arduino. After discussing it with our TA (which made him think a lot too), we assumed the RX pin on the LilyPad should link to the RX pin on the LilyPad XBee, but Ryan suggested the problem was that the RX pin on the LilyPad should link to the TX pin on the LilyPad XBee, and the TX pin to the RX pin. We tried it, and it worked. Another question was that the LilyPad Arduino 328 Main Board has no connector for a battery; it has six holes, and we were not quite sure what they were for. After searching the schematic for the board on SparkFun, we found the power hole and the ground. Then we bought a JST LiPo battery connector and soldered it onto the LilyPad.

At the beginning of the project, we wanted to use a pressure sensor to trigger the signal for the interaction. After discussion, we came to think of it not as a button but as an initiator, so it could be anything magical, like conductive fabric. At this point the project started to be more fun. We used the conductive fabric as the buttons: we made one fabric circle connected to power and two others connected to ground. The ground-connected fabric was also linked to pins 7 and 8, with a resistor between the pins and the fabric.


When the fabric circle on pin 7 or pin 8 touches the fabric attached to power, the Arduino recognizes the signal and sends it to Processing to control the animation. We set up two colours, blue and yellow. We sewed the circuit onto a black microfiber shirt with conductive thread, creating the “base” shirt. We also sewed larger pieces of conductive fabric onto a blue shirt and a yellow shirt. When the person wearing the blue shirt hugs the person wearing the base shirt, the conductive fabric on the blue shirt covers the two fabric circles on the base shirt, closing the circuit, and the Arduino recognizes the HUG.
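In sketch form, the identity detection is just a pin lookup. Here is a minimal Python model (Python for illustration only; `read_pin` stands in for Arduino’s `digitalRead`, and the pin numbers and colours follow the wiring described above):

```python
HIGH, LOW = 1, 0

# Pin 7 detects the blue shirt's circle, pin 8 the yellow shirt's,
# mirroring the base-shirt wiring described above.
PIN_COLOURS = {7: "blue", 8: "yellow"}

def detect_hug(read_pin):
    """read_pin(pin) stands in for Arduino's digitalRead(pin).
    Returns the colour of the hugging shirt, or None if no circuit
    is closed."""
    for pin, colour in PIN_COLOURS.items():
        if read_pin(pin) == HIGH:
            return colour
    return None
```

For example, `detect_hug(lambda p: HIGH if p == 7 else LOW)` identifies the blue shirt.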

We sewed two LEDs, linked to pins 10 and 13, onto the base shirt to indicate whether the HUG works. Sewing is hard work; it was very challenging to attach the LEDs to a synthetic material with no grip. Because the threads shouldn’t overlap each other, we had to run each connection in a different direction.

IMG_0430  IMG_0419  IMG_0421


After this step we started testing and found connection problems. For example, when the person wearing the base shirt was a different height, the conductive area on the coloured shirts did not meet the circles. So we added more fabric to the yellow shirt to create more coverage. With more testing we found the best area of the body for contact when two people hug, and sewed the larger conductive fabric there.

IMG_2699  hugteam  IMG_0420

IMG_0422    IMG_0421    IMG_0424


IMG_2703 (1)



In terms of the Arduino code, there are two main functions needed to achieve the interaction between human hugging and the computer. One is to communicate with the Processing program; the other is to detect signals from different digital pins.

The principle of detecting different identities can be seen as a simple case of lighting an LED by pressing a button. Specifically, on our T-shirt, the conductive fabric acts as the button: opening or closing the circuit produces a HIGH or LOW state, which the digital pins of the Arduino board can detect easily. Therefore, by reading different pins, we can detect different identities.

IMG_2688    IMG_2687 (1)

Arduino code

To achieve the collective love-bubble effect, we use communication between Processing and Arduino. Processing receives different values from the different digital pins, so it can detect the different identities as well. A dot of a single colour becomes active after one circuit has been completed. The effect of increasingly colourful dots rising is achieved by decreasing each dot’s y value, and randomness is added to vary the speed of the animation. Another function of Processing is to play music after the first closed circuit is detected.
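A minimal sketch of that rising-bubble update, written in Python as a model of the Processing logic (the speed range is illustrative; y decreases upward, as in screen coordinates):

```python
import random

class Bubble:
    def __init__(self, x, y, colour):
        self.x, self.y, self.colour = x, y, colour
        # a random speed varies the dynamic effect, as described above
        self.speed = random.uniform(0.5, 2.0)

    def update(self):
        # decreasing y makes the dot rise toward the top of the screen
        self.y -= self.speed

# one blue bubble, spawned at the bottom of a 480-pixel-high canvas
bubbles = [Bubble(100, 480, "blue")]
for _ in range(60):            # about one second of animation at 60 fps
    for b in bubbles:
        b.update()
```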

IMG_2724 IMG_2726 cond

Processing code – Collective love effect

Collective love effect (test version without XBee)

To fit the surface of the structure, we used VPT to adjust the interface. Also, to create the sense of a city, we simulated a neon-light effect on our slogan by using random colours on the text.

Neon light effect

Creating the structure:

We wanted to use this setting as a street installation in downtown Toronto: a large base structure of acrylic blocks covering roughly a 10′ × 10′ area, about 10 feet high, with images projected on four or more surfaces. A real installation could bring further development, such as different interactive images on each surface in addition to the city pictures and the code projection.

persp  IMG_0431  IMG_0446

Mapping with VPT

IMG_0445  IMG_0444




Bill of Materials

Assembly List

Label Part Type Properties
LED1 Blue LED type single color; color Blue; polarity common cathode
LED2 Blue LED type single color; color Blue; polarity common cathode
Part1 Lilypad Arduino Board type Lilypad Arduino
Part2 LilyPad XBee type Wireless; protocol XBee
R3 220Ω Resistor bands 4; pin spacing 400 mil; tolerance ±5%; package THT; resistance 220Ω
R4 220Ω Resistor bands 4; pin spacing 400 mil; tolerance ±5%; package THT; resistance 220Ω
U1 LIPO-1000mAh variant 1000mAh; package lipo-1000

Shopping List

Amount Part Type Properties
2 Blue LED type single color; color Blue; polarity common cathode
1 Lilypad Arduino Board type Lilypad Arduino
1 LilyPad XBee type Wireless; protocol XBee
2 220Ω Resistor bands 4; pin spacing 400 mil; tolerance ±5%; package THT; resistance 220Ω
1 LIPO-1000mAh variant 1000mAh; package lipo-1000



Case Studies: 

1- Pillow Fight Club Study

  • Interaction: person to person, person to object

When two people have a pillow fight, the interaction sends a signal to display images.

  • Technology: sensing and display

They used the same technology as our project: XBees and Arduino, with Processing software creating images as the result of a wireless interaction.

  • Narrative: simple

It is a simple narrative which is already a life experience for many people.


2- Super Hero Communicating Cuffs

The Superhero Communicator Cuffs enable brave souls to call on their partners in a time of need. This tutorial demonstrates how to send and receive wireless signals without the use of microcontrollers or programming. You will learn how to configure XBee radios, build a basic soft circuit, and work with conductive thread and conductive fabric.

How it works: each pair of cuffs has an electronic switch made of conductive fabric. When the wrists are crossed, a wireless signal is transmitted which activates the LED on your partner’s set of cuffs, signalling to them that you need Super Hero assistance! Since you’ll be making two pairs of communicator cuffs, this tutorial is great to do with a friend!

  • Interaction: person to person, person to object
  • Technology: sensing and wireless communication
  • Narrative: simple


3- SMILE – Interactive Lights

SMILE was originally created for an all-night outdoor installation at Toronto Nuit Blanche in the historic Fort York park. Each cube is outfitted with a high-brightness RGB LED, a SLA battery, and is wirelessly programmable. Additionally, the cubes can form a mesh-network, communicating with each other or receiving commands from a central computer.

  • Interaction: person to object, object to person
  • Technology: wireless communication
  • Narrative: simple


4- Dream Jammies

Icon: Chizuko Horman
Embroidery: Melody Litwin

Dream Jammies are pajamas which are aware of your body in several ways. They know whether you are standing or laying down, tossing or lying quietly. Dream Jammies also know your body temperature. This information is relayed to your partner’s iPhone, and expressed on their screen in color, changing in realtime.

As you lay down to sleep, the screen fades from green to blue, the shade of blue reflecting your body temperature. As you roll around, the screen flickers red. By shaking the iPhone your partner is able to reach out, causing the chest of your pajamas to vibrate. Not pleasant while you sleep, but a perfect alarm clock. Not only are you able to keep in touch while living on opposite sides of the world, Dream Jammies offer insight into how you sleep by capturing data as you snooze.







Make: Wearable Electronics, Hartman, K., 2014, Toronto, Canada

Social Body Lab, OCAD

Super Hero Communicating Cuffs

Moment Factory

Project2: Blinking Jellyfish


When I think about a water park, the first image in my mind is an undersea view. Of all the things undersea, my favourite is the jellyfish. It’s an amazing animal: translucent, soft, and its movement is fascinating. It is also influenced by light, taking on the colour of whatever light shines toward it. Actually, the reason I love jellyfish comes from a scene in Life of Pi, directed by Ang Lee. In that film, when night falls, groups of these glowing blue animals surface, mirroring the blue moon, swimming slowly among the undersea plants; the view is fantastic. This is where my idea comes from.


I wanted to try using Processing to make some graphic creations and to attempt some animation with it, interacting with sensors to give the pictures more artistic effects and to give participants more fun and more visual enjoyment.

Experiment1-Getting RGB information from Colour Sensor

At first, I searched for a sensor to use and found one called the ENV-RGB detector. It’s very convenient: it can be put directly in water to take readings. So my original idea was to let participants interact with physical things in real water using this sensor, and I did a series of experiments. But the results were not satisfying, because the sensor was very unstable; sometimes it inexplicably couldn’t receive a signal. It also wasn’t easy to find information about the sensor online; on its blog, others were asking questions about it too. I asked my friends for help, but they couldn’t figure out a solution either. So I had to consider other ideas or other sensors.

Experiment2-Changing Colour Sensor

Then I decided to change to a more common colour sensor and found a board called the TCS3200D. There is more information about this sensor online, including experiment videos, so it’s easier to get help. I had some background experience with a colour sensor called the ColorPAL, which has only three pins, but the TCS3200D has ten pins and its program code is more difficult. Since I couldn’t buy a ColorPAL at the time, I had to research and study this ten-pin sensor. I followed a series of online tutorials and watched videos and other materials about it. Finally, I could get RGB values from this sensor very stably and modify the relevant code myself.
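The TCS3200 reports each colour channel as a square wave whose pulse width you measure (for example with Arduino’s `pulseIn()`); shorter pulses mean brighter light. A hedged Python sketch of the scaling step, where the min/max pulse constants are made-up calibration values you would measure against white and black references:

```python
def pulse_to_channel(pulse_us, min_pulse=30.0, max_pulse=500.0):
    """Map a measured pulse width (microseconds) to a 0-255 channel value.
    min_pulse/max_pulse are assumed calibration readings; shorter
    pulses mean brighter light, so the mapping is inverted."""
    pulse_us = min(max(pulse_us, min_pulse), max_pulse)   # clamp
    scale = (max_pulse - pulse_us) / (max_pulse - min_pulse)
    return round(scale * 255)
```

Selecting the red, green and blue filters in turn and running each reading through this mapping yields a stable RGB triple.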

Experiment3-Controlling RGB-LED with Arduino

After I got the RGB values, I wanted not only to show them on the screen, but also to do some more direct, fun experiments. So I used an RGB LED to display them. This is a very interesting LED: you can modify values in the Arduino to change its colour to whatever you want, or set random values to let it change colour randomly. That is the experiment I did to show random colour changes.

Experiment4-Synchronizing LED’s colour with Sensor

After controlling the sensor and the RGB LED separately, I tried to combine them. As I was not very familiar with the program code, this process took me a long time; it is not as simple as putting the two pieces of code together. The colour-sensor code I used was an example in which the work inside void loop() was a newly defined function, so when I wrote my output code in void loop(), the program kept looping that function and the output was never successfully sent to the LED, so the LED didn’t work. At the beginning I didn’t know the reason, so I tried many times to solve the problem and even attempted other kinds of colour-sensor code. At last, with help from classmates, I finally solved it, and the idea could continue successfully.

Experiment5-Drawing an image with processing.

As the project’s main platform is Processing, I read some books about Processing carefully. It’s very easy to draw geometric figures with Processing, but if you want to draw something composed of curves, for example a jellyfish, which is a soft animal, it’s a little difficult. At the beginning, I downloaded some similar examples, learned how their authors wrote them, and then tried to modify them toward what I wanted. Slowly, it became a pattern a little like the tentacles of a jellyfish.

Experiment6-Modifying the image and making animation

To make the pattern I drew more vivid, I dropped the code I had adapted from others and wrote a new piece (the new code, of course, drew on that earlier experience). I defined some key points on each line, which is composed of many points; by controlling these key points I could animate the drawing. For example, by rotating and copying the lines around a centre, I got the body of the jellyfish; then I divided the lines into three groups and adjusted their widths to control the form of the jellyfish. I also rotated the whole animation to make it look more natural and vivid. Although the final effect is not very obvious, the process was very difficult, because I had to calculate many values in the code to do it.
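The rotate-and-copy step amounts to applying a 2D rotation about the centre to each key point. A small Python sketch of the math (the point lists and copy count are illustrative, not the actual sketch code):

```python
import math

def rotate_point(x, y, cx, cy, angle):
    """Rotate (x, y) about centre (cx, cy) by angle radians."""
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(angle) - dy * math.sin(angle),
            cy + dx * math.sin(angle) + dy * math.cos(angle))

def radial_copies(line, centre, n):
    """Copy a polyline n times around the centre, evenly spaced,
    the trick used above to build the jellyfish body from one line."""
    cx, cy = centre
    return [[rotate_point(x, y, cx, cy, 2 * math.pi * i / n)
             for (x, y) in line]
            for i in range(n)]
```

Animating the key points before copying then moves every tentacle in unison.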

Experiment7-Communicating with processing.

With the duties of Arduino and Processing finished, I tried to make them communicate with each other. This process was also troublesome: several times the values sent to Processing were not accurate, or could not be sent at all. Eventually I found the example the lecturer had shown us in class, a method for communicating from Arduino to Processing; I applied it to my project and modified the code. At last, I got the correct RGB values from the Arduino and used them to control the image in Processing.
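A common pattern for this kind of serial link (and, as I understand it, the one the class example follows) is for the Arduino to print the three values as one comma-separated line. A Python model of the receiving side’s parsing, with the line format as an assumption:

```python
def parse_rgb_line(line):
    """Parse a serial line like '123,45,200' into an (r, g, b) tuple.
    Returns None for incomplete or garbled lines, which guards against
    the inaccurate values mentioned above."""
    parts = line.strip().split(",")
    if len(parts) != 3:
        return None
    try:
        r, g, b = (int(p) for p in parts)
    except ValueError:
        return None
    if all(0 <= v <= 255 for v in (r, g, b)):
        return (r, g, b)
    return None
```

Dropping malformed lines instead of using them keeps the jellyfish’s colour from flickering on serial glitches.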

Experiment8-Making boxes

Actually, at first I wanted to use coloured paper directly, but when I did, I found the colour sensor was too unstable: a little light, or somebody moving, could influence its readings. So I tried changing the paper’s shape to make the signal more stable. At first I just folded the paper’s two sides together, but light still came in from the top and bottom, so I spent a whole afternoon making boxes to enclose the sensor. The result was satisfying; with the lights off, these boxes are very beautiful. In the end I improved the boxes by taping the last side closed, leaving just a small hole for the sensor. Finally it could recognize the RGB of the paper much better.
















I wanted the jellyfish, as a representative of the undersea world, to show its charming scenes to everyone. At different depths of the sea, the amount of light that reaches down differs, so the colour the light reflects on the jellyfish differs too. I wanted to use the colour sensor to give the colour that participants present to the jellyfish I made. One point I ignored is that jellyfish generally appear in groups, but I drew only one. If I made more forms and set more rules to create more effects, my work could become more interesting; for example, I could store the recognized colours and use them to create different jellyfish, or other interactions.

Through observing and studying other students’ work and the lecturers’ explanations, I think my project could have had more ample visuals and more interactivity. I was perhaps too focused on making the jellyfish’s form and animation move vividly and realistically, and spent too much time on that, while neglecting the more interactive and interesting aspects of the work.

Through this course and its variety of experiments, I found there are many directions to choose from when working with water. Returning to the initial idea of water detection, I think water can be considered in itself. Water is the source of life, and the quality of drinking water is closely related to our daily lives. In my country, pollution from industrialization is now very serious, so people are slowly beginning to pay attention to the health of their diet. The pH value of water is a common topic of daily concern, so I searched for information about pH and became very interested in a pH sensor.

So maybe I will do some experiments about this sensor in the later study, and expect to do more work.

Another idea concerns the electrolysis of water: in this way, we can separate minerals from water, and different minerals will show different colours and states. These ideas are much more interesting and meaningful than simply changing the colour of the water directly.


Arduino Tutorial:

Online Shop-RGB LED Breakout (5050):

TCS3200 Color Sensor:



Underwater Dream is a game that can be played with your mind. The purpose of this project is to create a dream-like underwater experience.

This idea came from my own dream experience. When I dream, I can experience amazing stories that never happened in real life. If I realize I am dreaming, I can control the dream with my mind. For example, if I want a cake, I can imagine a cake in my hand, and after I blink my eyes I will see a cake in front of me, like magic.

So there are some points that make dream experience different from real life:

1. Without real activity

2. Change surroundings by imagination (and blink)

3. Fantastic story, but looks real

Because many unexpected physical errors appeared (and were difficult to fix) in Project 1, I didn’t want to build many physical installations. So this time I chose a game, which is a good platform for interaction.

To decide what element can be used for water park, I had a brainstorming:

Screen Shot 2014-11-17 at 7.00.50 AM



To achieve these points, this game combines three different data sources: an EEG sensor, an Arduino, and a webcam.

The EEG sensor is used to achieve points 1 and 2. The player moves forward with the attention level, plays music with the meditation level, and destroys barriers by blinking.

The Arduino connects water in the real world to the virtual water by detecting the water level.

The webcam is used with OpenCV to detect the player’s face, making the world on screen look “real” because the player gets a different viewing angle by moving their head.

Here is a reference video about the “real” 3D effect:

To summarize, this game can be played without keyboard and mouse, only mind and motion.


About the scene, there are two plans:

First, the whole scene is a circle, so the game can loop and the player can turn automatically. There are three themes for the whole map: Coral Valley, Sunken Ship, and Underwater Graveyard.

Second, the scenes are straight paths where the player cannot turn. There are two themes for two scenes, Life and Death. The player starts in the Life scene, where all the scenery is light and lively. After experiencing the whole scene, he/she is transported to the Death area, full of dark, horrible skeletons and the remains of a sunken ship. After this, the player is brought back to the Life area to complete the cycle.



Unity 3D


3ds Max



OpenCV for Processing





The main challenge is how to transfer data between the different sources. The data come from the Arduino, the webcam and the EEG sensor into Processing, and are then transferred to Unity.

Screen Shot 2014-11-17 at 6.42.26 AM


From Processing to Unity

First of all, I had to make sure Processing could be connected with Unity. I found a video in which Processing controls the scale of a cube in Unity. Following the video, I downloaded the library called oscP5; with it, numbers can be sent to Unity successfully. This is the base of the whole project, which had to be confirmed before everything else started.
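Under the hood, oscP5 sends standard OSC (Open Sound Control) packets over UDP. A minimal Python sketch of the wire format for a single float message (the address `/attention` is illustrative):

```python
import struct

def osc_pad(b):
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, value):
    """Encode one OSC message: address pattern, the type-tag string
    ',f' (one float argument), then the big-endian float32 itself."""
    return (osc_pad(address.encode()) +
            osc_pad(b",f") +
            struct.pack(">f", value))

msg = osc_message("/attention", 0.75)
```

A Unity-side OSC listener decodes the same layout, which is why a plain number sent from Processing arrives intact.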

From Arduino to Processing

I use an ultrasonic sensor in this project to detect the water level in a box. The game will start only when the box is filled with water.
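The ping sensor reports the round-trip echo time; converting that to a water level is simple geometry. A Python sketch of the arithmetic (the sensor height and fill threshold are illustrative; the sensor is assumed to point down at the water surface from the top of the box):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # at roughly 20 degrees C

def echo_to_distance_cm(duration_us):
    # the echo time covers the round trip, so halve it
    return duration_us * SPEED_OF_SOUND_CM_PER_US / 2

def water_level_cm(duration_us, sensor_height_cm):
    # the nearer the surface, the shorter the echo, the higher the level
    return sensor_height_cm - echo_to_distance_cm(duration_us)

def box_is_full(duration_us, sensor_height_cm, full_threshold_cm):
    # the game starts only once the level passes the threshold
    return water_level_cm(duration_us, sensor_height_cm) >= full_threshold_cm
```

For example, an echo of about 583 µs corresponds to a surface roughly 10 cm below the sensor.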

Here is the circuit:

Screen Shot 2014-11-15 at 9.45.25 PM

Here is an experiment, for testing if the ping sensor can detect the water level:

//Experiment 1

(Some experiment descriptions are under 200 words because I didn’t write them in order; some descriptions are in other sections, and some cover more than one experiment.)

Here is another experiment testing the data transfer from Arduino to Processing to Unity.

//Experiment 4

From EEG to Processing


The first time I learned about EEG was from a TED talk about Emotiv; I was so surprised by the technology (and this is an important reason why I chose Digital Futures). I have a NeuroSky MindWave EEG sensor, but because I had no coding background, I didn’t use it until I learned Processing. There is a library for the EEG sensor called ThinkGear, and fortunately the NeuroSky company provides lots of development tools on their website, some of them free.

To transfer the data from the EEG sensor to Processing, two pieces of software are necessary: MindWaveManager and ThinkGearConnector. The first connects the EEG sensor to the laptop; the second is the socket that helps transfer data from the sensor to Processing. Both can be downloaded from the website.
The library provides several variables: attentionLevel, meditationLevel, blinkStrength, delta, theta, low_alpha, high_alpha, low_beta, high_beta, low_gamma, mid_gamma. Their levels reflect the user’s state. In this project, only attentionLevel, meditationLevel and blinkStrength are used.

This is Experiment 2: Data from EEG sensor to Processing

In this experiment, I used attentionLevel to control the height of this rectangle, and the blinkStrength to change the color of it. The harder I blink my eyes, the more vivid the color is.
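The mapping from ThinkGear values to visuals can be sketched as simple scaling. A Python model, assuming the usual ThinkGear ranges (0 to 100 for attention and meditation, roughly 0 to 255 for blink strength); the maximum height is illustrative:

```python
def attention_to_height(attention, max_height=400):
    # attentionLevel is reported on a 0-100 scale; scale it to pixels
    return attention / 100 * max_height

def blink_to_saturation(blink_strength):
    """Clamp and normalize blinkStrength (assumed 0-255) so a harder
    blink gives a more vivid colour, as described above."""
    return min(max(blink_strength, 0), 255) / 255
```

Each new frame, the sketch redraws the rectangle at `attention_to_height(...)` and tints it with `blink_to_saturation(...)`.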

//Experiment 2

Here is Experiment 5: using attention to control movement and movement speed. The player moves faster by concentrating more, and if the player loses concentration, he/she stops.

//Experiment 5

In Experiment 6, the blink function was added. When the strength of my blink reaches a certain level (which prevents slight blinks from disturbing the result), the barrier disappears. This function simulates the dream situation I mentioned in the first part. The advantage of this approach is that the player never sees the barrier disappear, because it happens while the player’s eyes are closed.

//Experiment 6

From Webcam to Processing


To track the face with the webcam, the OpenCV library is used. Starting from the face-tracking example, I located the middle point of the face and used its position to control the camera in Unity. The final effect: when you move your head, the camera in Unity moves and rotates with your movement.
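The face-to-camera mapping can be modelled as taking the midpoint of the detected face rectangle and normalizing it around the frame centre. A Python sketch (the 640 × 480 frame size is an assumption, and the x axis is mirrored so the view follows the head like a window):

```python
def face_center(x, y, w, h):
    # face detection returns a bounding rectangle; take its midpoint
    return (x + w / 2, y + h / 2)

def camera_offset(cx, cy, frame_w=640, frame_h=480):
    """Normalize the face centre to [-1, 1] around the frame centre.
    Unity can map this pair to camera position/rotation. x is mirrored
    so that moving your head left shifts the view left."""
    return (-(cx / frame_w * 2 - 1), cy / frame_h * 2 - 1)
```

Sending the two offsets over the same Processing-to-Unity link drives the parallax effect.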

//Experiment 3

Work in Unity

After transferring data from Processing to Unity, the remaining work was completed in Unity.

The first challenge is that Processing is not supported in Unity, so I had to use JavaScript, which I had never used before. I completed the JavaScript code by learning from examples, online tutorials and the official Unity manual. I found that even though JavaScript and Processing are different languages, they still have similarities, so the knowledge I gained from Processing helped me learn JavaScript.

In Unity, I move the whole scene instead of moving the character. In Experiment 3 I had already tested the camera-movement function; in Experiment 5 the movement function was tested too, and in Experiment 6 the basic gameplay was completed. But those all relate to Processing. Here is the functionality that works purely within Unity.

This experiment is an improved version of Experiment 6. Because I need the game to loop automatically without stopping to restart, the barriers have to appear again. Experiment 7 implements this function: after the second barrier is destroyed, the previous one reappears at its original position.

//Experiment 7

Work in Processing

Experiment 8 is for background music controlled by the meditation level. When the meditation level is over 50, a piece of music is played, using a library called Minim. The music will not play a second time until it has finished playing the first time.
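The once-per-playback rule is a small state machine: trigger on the threshold, then ignore further triggers until the track ends. A Python model of that gate (the threshold matches the description above; the track length is a stand-in, since Minim can report playback state directly):

```python
class MeditationMusic:
    """Starts a track when meditation exceeds a threshold, and refuses
    to retrigger until the current playback has finished."""

    def __init__(self, threshold=50, track_length=30.0):
        self.threshold = threshold
        self.track_length = track_length       # seconds, illustrative
        self.playing_until = 0.0               # time current playback ends

    def update(self, meditation, now):
        # returns True on the frame when playback should start
        if meditation > self.threshold and now >= self.playing_until:
            self.playing_until = now + self.track_length
            return True
        return False
```

While the track is playing, even a very high meditation level does not restart it; once it finishes, the next threshold crossing plays it again.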

//Experiment 8

Experiment 9 tests the OBJLoader library. I tried an example, but when I imported my own OBJ file, it couldn’t show the texture, and I don’t know how to solve this problem. OBJLoader was another option in case Processing couldn’t transfer data to Unity, but obviously Processing doesn’t support 3D as conveniently as Unity does.

//Experiment 9

Experiment 10 is where I tried to make the player turn automatically. Unfortunately I only completed half of it: because I did not set the original position properly at first, it was difficult to change later. I tried two methods to trigger the turn. The first used distance, but it was not accurate, so I switched to collision detection. That worked much better, but since I did not have enough time to fix the positions, I did not add this function to the final version. I also forgot to record this part.
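The inaccuracy of the first, distance-based trigger can be illustrated with a small Python sketch (hypothetical numbers, not the project's code): when the player moves several units per frame, the position can jump straight past the trigger radius, which is why a collision volume works better.

```python
# Sketch: a distance-threshold turn trigger, checked once per frame.
import math

def near_turn_point(player, point, radius=1.0):
    return math.dist(player, point) <= radius

# Moving 3 units per frame can skip over a 1-unit trigger radius entirely:
frames = [(0.0, 0.0), (3.0, 0.0), (6.0, 0.0)]
print(any(near_turn_point(p, (4.5, 0.0)) for p in frames))  # False: trigger missed
```

A collider, by contrast, is tested against the swept path of the moving body by the physics engine, so fast movement cannot tunnel past it in the same way (within the engine's limits).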



Processing and Unity:


The 3D models were made in 3ds Max, and the textures were drawn in Photoshop.

Here are some reference pictures:


Here are some pictures for modelling:


Import 3d models to Unity:


I also added many effects to this scene, such as lighting, fog, and particles.

4. Conclusion

Here is the final version of the project:


This is not really a project meant for public exhibition; it concentrates on personal experience. Still, I think a multi-player version would be more attractive, and because of the time limit I did not complete everything I wanted to show.

I really learned a lot from this project. It was the first time I made something with code, and I am excited about learning a programming language (even though I met many difficulties in both Processing and JavaScript, some of which remain unsolved). I now think coding is very important in a project, because I found that many ideas cannot be realized without programming knowledge.

Misty Window




An interactive installation that displays a movie hidden under a circular viewport, controlled by custom webcam-based tracking code that follows the viewer.

The idea behind my project was formed through a mixture of different concepts and inspirations. I knew from the beginning that I wanted to make an interactive installation; the question was where to start. I faced many difficulties deciding on a direction, so to proceed I explored Processing and conducted simple experiments to understand the program's limitations. I tested different forms of water, from bubbles to dry ice, but due to the time constraint my options were limited. Following my tests, I tried to find a way to use water as a trigger; since we were working in Processing, I chose to use projection in an unconventional way. After a great deal of research for inspiration, I came upon various projects that utilize projection; one of them projected a 3D rabbit onto steam using three cameras.



The installation consists of a fog screen, a webcam, and a projector. The fog screen serves as the display for the movie, and the choice of movie was inspired by the concept of the project: since it is all about water, it was fitting to choose a video that explores the sea and the creatures it holds. The installation works as follows: as the viewer approaches, a movie is projected onto the fog. As they get closer, the webcam detects their face using the FaceOSC app (a tool for prototyping face-based interaction), and they can explore the video through a circular viewport. The viewport is scaled and positioned according to the movement of the head: as viewers move closer, the circle grows, and as they step back it shrinks, letting them control the zoom level of the scene. Another part of the installation is a box with a bright LED that only lights up when it comes into contact with the steam. Viewers are provided with transparent boxes that they fill with the steam coming from the screen. This represents
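The closer-means-bigger viewport behaviour can be sketched as a linear mapping. This is an illustrative Python sketch with assumed numbers, not the installation's Processing code; FaceOSC reports a pose "scale" value that grows as the face nears the camera, which is the input assumed here.

```python
# Sketch: map FaceOSC's pose scale (larger when the face is closer to the
# webcam) to the viewport circle's diameter, clamped to screen limits.

def viewport_diameter(face_scale, min_scale=2.0, max_scale=8.0,
                      min_d=100, max_d=600):
    """Linearly map face_scale in [min_scale, max_scale] to a diameter."""
    t = (face_scale - min_scale) / (max_scale - min_scale)
    t = max(0.0, min(1.0, t))          # clamp: face too far / too close
    return min_d + t * (max_d - min_d)

print(viewport_diameter(2.0))   # far away -> 100.0 (small peephole)
print(viewport_diameter(8.0))   # up close -> 600.0 (wide zoomed view)
```

The same normalized `t` could also drive the video's zoom factor, so the peephole size and zoom level stay in step as the viewer moves.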




Software:

  • Processing
  • FaceOSC
  • Video Projection Tool (VPT)



Materials:

  • 2 Humidifiers
  • 47 Straws
  • 2 hoses
  • Black board frame
  • 1 Projector
  • Macbook pro
  • Logitech webcam

























The Drift Table

Water Light Graffiti



Smile TV








Experiment 1

In this experiment I wanted to test the projector and explore projection on different surfaces. I used VPT, a multipurpose projection software tool, and since it was my first time using it I explored it by projecting onto surfaces with various shapes. In this video, I mapped a video that came installed with the program onto a sweatshirt, using three layers of the same video to cover the whole shirt. Due to my limited experience with the program, it took me a while to cover all of the edges, but after multiple tries I achieved the goal. This helped me understand the tricks involved in mapping and how to apply that knowledge to my project. As I learned to control each layer, I discovered VPT's capacity to produce complex projects.




Experiment 2

The next step was to choose the surface and the items involved. By the time of this experiment, I had already figured out the concept behind the project, so I played with the materials. My first choice was to project on steam, so I bought a humidifier and a bundle of straws to produce steam with the structure of a fog screen. I also took the hose from my vacuum to connect the straws to the humidifier. The tricky part of this experiment was the uncertainty of how clear the projection would be. While testing, I found that it is best to project imagery with less detail for the best resolution; the colors should be bright, with strong contrast between them.




Experiment 3

After the previous experiment, I had to figure out the position of the projector relative to the fog screen. In doing so, I encountered multiple problems. I built three prototypes to find the optimal position, but sadly each one failed. One problem was with the straws: I had to tape every pair of straws together multiple times to ensure the steam would not escape, but the tape could not seal the steam in because I had not used waterproof tape. After solving this issue, I began testing the position of the fog screen. The first position I tested was based on my initial idea for the project, with the fog screen viewed from the top. However, since the steam came in from the side and the straws were laid horizontally, the steam was not strong enough and water kept piling up inside the straws. Another issue was that the projector had to be placed within a certain distance, and the best position for that was to have the straws arranged vertically.




Experiment 4:

In this experiment, I tested the FaceOSC app for the first time. To figure out how it actually works, I had to look for examples. This is one of the codes I found; it belongs to Adam Bendror. I tried to mimic the same style to understand more about the tracking of each feature, but I couldn't because of the time limit. This illustration was made in DMesh. Still, I learned how exactly tracking works in FaceOSC.






Experiment 5&6:

In this experiment I wanted to explore the visuals projected on the steam. I wanted the viewer to explore the screen through something similar to a peephole. At first I didn't know how to achieve that in Processing; I tried to draw a circle, but of course it wasn't successful and I couldn't figure out how to make it work. So I tried another option, less direct but much simpler: I drew a black rectangle and subtracted a circle from its middle in Photoshop. This rectangle follows the head movements via FaceOSC, so I had to make the shape really big so that when the camera tracks the face, we never see the edge of the rectangle.

After I achieved that, I explored what I could hide underneath the peephole, so I loaded a movie clip, and the result was amazing: I loved the movement in the video and the soothing music. I then made another version with a white rectangle instead of the black one, to compare later in projection which has more impact. The best approach when projecting on steam is to choose visuals with the least detail and with strong colors and contrast, to obtain the best resolution.





Experiment 7

The idea was to connect two applications working on the same video. I wanted to create visuals in Processing and pipe them into VPT for live interactive projection. I thought it would be hard and tricky, since I was already using both FaceOSC and Processing and was now adding VPT to the mix. However, passing the frames across turned out to be an easy solution, because Syphon handles live video very well. As explained on the official website, "Syphon is an open source Mac OS X technology that allows applications to share frames – full frame rate video or stills – with one another in real time." It lets a third-party application access the video. To use Syphon, I had to download the library in Processing and then write a couple of lines of code.





Experiment 8:

In this experiment, I tried to add another interactive element to the installation that highlights the water (mist). After settling on the final concept of the installation, I started exploring ideas that could be realized with Arduino.

After much research I decided to use a steam sensor to trigger a bright light. The viewer is offered a transparent box that they can fill with the magical mist (the steam coming from the screen), and after a couple of seconds the box lights up. Once the mist vanishes, the light fades out as well. This experiment had a mesmerizing effect, with the colorful LED surrounded by an aura of fog.
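The light-up-then-fade behaviour can be sketched as a per-loop update. This is an illustrative Python model of the Arduino logic, not the actual sketch; the sensor threshold and fade step are assumed values.

```python
# Sketch: the LED jumps to full brightness while the steam sensor reads
# above a threshold, and fades out step by step once the mist is gone.

def step_brightness(brightness, sensor, threshold=300, rise=255, fade=25):
    if sensor > threshold:
        return rise                       # mist present: full brightness
    return max(0, brightness - fade)      # mist gone: fade out gradually

b = 0
for reading in [400, 400, 100, 100, 100]:  # mist fills the box, then vanishes
    b = step_brightness(b, reading)
print(b)  # 255 - 3*25 = 180, mid-fade
```

On an Arduino this would run once per `loop()` iteration, with `brightness` written to the LED pin via `analogWrite`; the asymmetry (instant rise, gradual fall) is what gives the slow fade-out after the mist disperses.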




Processing is a very useful tool for achieving great results, but it takes time. In this project I encountered multiple problems, all related to timing. As I was adjusting and mounting my installation, things started to collapse: some of the straws filled with water, which stopped the steam from coming out. Everything had worked perfectly the week and the night before, but an hour before the presentation nothing seemed to work. At the last minute I managed to save some of the straws, but I couldn't show the second part of the installation.



Thank you to Jenna and Glen for their tremendous help.







