Category: Project 2



What if water could be an instrument?


Fluid Resonance DJ Gary

If the vibrations from a single drop of water in the ocean could be suspended in momentary isolation, what infinite arrays of symphonic arrangements could we hear? A constant flow of signals in the tide of the universe, codified in sounds, waiting to be experienced in time. There are certain moments when we are moved by sound, a direct emotional connection to the physical movement of orchestrated disturbances in the air; an unseen but pervasive, invasive and volumetric presence. Characteristics such as these are what concern me now, and are the focus of this project. The relational interactions that form in space, codifying the event and its parameters, are subtly and violently manoeuvred by invisible actions, a subtext underlying the visible surface, just as sound changes its timbre with the material surface it reverberates on, yet continues to imbue the substrata of matter.

Music makes time present. Or, at least, it makes one aware of time, even if one loses track of it. Take, for example, Leif Inge's 9 Beet Stretch, a reimagined version of Beethoven's 9th Symphony stretched into a 24-hour journey (a sample can be heard here on RadioLab at 4:23, or listen to the full streaming version here). The remastered continuous audio materializes the presence of sound and the distillation of a captured moment, giving one pause to reflect on the mortal moments that stream by in subconscious sub-fluences every minute, without awareness. In this example, we are transported into the life of a musical movement in its own existence; in contrast, another way of thinking about the relation of time and sound comes in the form of crickets. An analog field recording of a soundscape of crickets was slowed down, edited to a speed scaled up from a cricket's lifespan to the equivalent of a human's. What emerges is a harmonic layering of triadic chords playing in syncopated rhythm, like the ebb and flow of a call and response. (Note: the recording has since been reported to have been accompanied by opera singer Bonnie Jo Hunt, who recalled: "…And they sound exactly like a well-trained church choir to me. And not only that, but it sounded to me like they were singing in the eight-tone scale. And so what–they started low, and then there was something like I would call, in musical terms, an interlude; and then another chorus part; and then an interval and another chorus. They kept going higher and higher." (ScienceBlogs 2013).) When we slow down, speed up, alter, change, or translate intangible concepts into human-scaled pieces we can hold, we have an opportunity to reveal insights into a dimension outside our horizon, one we never would have encountered in its habitual form.
It may not grant us full access to aspirations of knowing or truth, but the discovery of interrelated phenomena – whether it be time and music, water and sound, or natural and computational glitches causing anomalies – gives us a better understanding of the effects and consequences of the tools used to define a language, which in turn define our state of being and future intent.

And what of the water project?

The original exploration of Processing and music began with an introduction to cymatics – a term used to describe experiments in a substance's patterned response to various sine-wave tones.

And here’s a polished music video of many such experiments compiled into a performance:


Water revealed the vibratory patterns of tones in consistent yet exciting designs, which then began the exploration into sound processing. Minim became the library used to recall pre-recorded instrumentation (or any .wav / .mp3 file); however, it was not the first library to be experimented with. Beads, a sound-synthesis library with the capacity to generate sine-wave tones, provided an introduction to visualizing a simple waveform.


beads interaction sine wave


Close-up of Frequency modulating on-screen.

Moving the mouse pointer across the screen changed the wave modulation and the frequency in relation to its vertical and horizontal position, respectively. The input of movement and proximity changed the auditory pitch of the tone output.
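The interaction reduces to two linear mappings from pointer position to tone parameters. A plain-Java sketch of just that mapping step (the ranges here are hypothetical; the actual Beads example wires these values into its sine-wave generator):

```java
public class MouseToTone {
    // Linearly map a value from one range to another (like Processing's map()).
    static double map(double v, double inLo, double inHi, double outLo, double outHi) {
        return outLo + (v - inLo) * (outHi - outLo) / (inHi - inLo);
    }

    public static void main(String[] args) {
        int width = 640, height = 480;
        int mouseX = 320, mouseY = 120;   // example pointer position

        // Horizontal position -> frequency (pitch), e.g. 110 Hz to 880 Hz
        double freq = map(mouseX, 0, width, 110, 880);
        // Vertical position -> modulation depth, e.g. 0 to 200
        double modDepth = map(mouseY, 0, height, 0, 200);

        System.out.println(freq + " Hz, depth " + modDepth); // 495.0 Hz, depth 50.0
    }
}
```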

Another variation of sound visualization from Beads was the Granulation example. This exercise used a sample of music, then ‘flipped’ the composition, pushing and pulling the tones, stretching them into digitized gradations of stepped sound. Imagine a record player turning a 45rpm disc with minute spacers in between every 1/16 of a second, turning at 33rpm (but with the same pitch) – the digital composition of finite bits reveals itself, but links the tones in a digitized continuum. This would later become very influential in the final performance of the sound generated by water.
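The grain idea can be sketched in a few lines of plain Java (a toy illustration only, not the Beads granulator): cut the sample into fixed-size grains and repeat each one, stretching the duration without re-pitching the material inside a grain.

```java
public class Granulate {
    // Chop the input into fixed-size grains and repeat each grain,
    // stretching total duration while each grain keeps its original pitch.
    static double[] granulate(double[] input, int grainSize, int repeats) {
        int grains = input.length / grainSize;
        double[] out = new double[grains * grainSize * repeats];
        int o = 0;
        for (int g = 0; g + grainSize <= input.length; g += grainSize)
            for (int r = 0; r < repeats; r++)
                for (int i = 0; i < grainSize; i++)
                    out[o++] = input[g + i];
        return out;
    }

    public static void main(String[] args) {
        // A 4-sample "recording" stretched to twice its length
        double[] stretched = granulate(new double[] {1, 2, 3, 4}, 2, 2);
        System.out.println(java.util.Arrays.toString(stretched));
        // -> [1.0, 2.0, 1.0, 2.0, 3.0, 4.0, 3.0, 4.0]
    }
}
```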

An inquiry into the physical properties of cymatics proved to be challenging. Initial investigations were conducted with a coagulated fluid (water and cornstarch).

Gary blows a speaker.


It was soon discovered that commercial-grade hardware and equipment would be needed to achieve an effective result. Though promising, time would not permit further explorations. (Gary Zheng continued to explore cymatics to great effect based on these initial experiments.)

A second option was to simulate cymatics through visual processing, leading to some play with Resolume, a software used for sound visualization, popular among DJs to augment their sets with responsive graphic media.

Resolume

Initially, the layered track interface and set-bpm files made this an easy-to-use software medium. Pre-made .mov or .wav files could be loaded to simulate interaction with beat-heavy tracks. For entertainment value, Resolume has much to offer and is easily accessible. But spontaneity is removed from the equation of output, which depends instead on the user's technical knowledge of the software and the constraints of the program.

motion vs colour detection

motion of water detection

The method of investigation revealed interesting physical responses to sound, and in turn inverted the experiments of cymatics – from sound causing form, to form resulting in sound feedback. The intrinsic displacement of the water's surface could create an effect through captured video; thus water became the focus as an instrument of motion represented by auditory output, no longer an after-effect of sound.

A deductive experiment compared two forms of video motion detection, based on exercises conducted earlier in group lessons (code originating from Daniel Shiffman's samples). First, there was colour detection. This version of motion detection would have dictated the physical properties of objects and/or additive coloured substances in the water. Adding elements complicates the design process and alters the baseline state of the water interface, so this was not a favourable option.


Motion Censor: motion detection senses obscenities.

Motion detection test using video processing. Video pixels are selected in the active frame to detect which colours within a certain threshold will be chosen to be tracked; in this case, the off-colour gestures turned the camera from a motion sensor into a motion censor.

Next up was the motion gesture test, a basic differencing visual of motion knocked out in black pixels, with a set threshold calibrated for the scene.
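The differencing itself is a per-pixel colour-distance check against a threshold, as in Shiffman's example. A self-contained Java sketch of that step (hypothetical names, with colours packed as 0xRRGGBB ints rather than Processing's color type):

```java
public class MotionDiff {
    // Euclidean distance between two packed 0xRRGGBB colours.
    static double colourDist(int c1, int c2) {
        int r1 = (c1 >> 16) & 0xFF, g1 = (c1 >> 8) & 0xFF, b1 = c1 & 0xFF;
        int r2 = (c2 >> 16) & 0xFF, g2 = (c2 >> 8) & 0xFF, b2 = c2 & 0xFF;
        return Math.sqrt((r1 - r2) * (r1 - r2) + (g1 - g2) * (g1 - g2) + (b1 - b2) * (b1 - b2));
    }

    // Count pixels whose colour changed more than the threshold between frames.
    static int motionPixels(int[] prev, int[] curr, float threshold) {
        int points = 0;
        for (int i = 0; i < curr.length; i++)
            if (colourDist(prev[i], curr[i]) > threshold) points++;
        return points;
    }

    public static void main(String[] args) {
        int[] prev = { 0x000000, 0xFFFFFF, 0x808080 };
        int[] curr = { 0x000000, 0x000000, 0x858585 };
        // Only the second pixel changed beyond the threshold of 50
        System.out.println(motionPixels(prev, curr, 50)); // 1
    }
}
```

Raising the threshold makes the detector less sensitive, which is why the value had to be calibrated for the scene's lighting.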

motion difference


early tests of water detection through gesture motion

The gesture test proved less discriminating about which affected pixels it detected, so the existing conditions of light and material properties would be critical in the final set-up of the performance, especially for a clear, less-detectable substance such as water. An early visualization of the water's surface captured by the video camera indicated that the camera's sensitivity would be sufficient, and better still in controlled environments.

A third and most important layer to the experimentation was the implementation of the split screen lesson introduced to us as a group, as an application using coloured elements to respond to motion. Coloured items appeared,  indicating the detection of movement in the designated zone on the screen.


Split screen divisions

hand detection grid format

Layout of grid zones

At this point, the design of the project became clear. A music interface would be created with water, from which motion would be detected through user interaction (imagine water-drop syringes annotating musical notes in a pool; notes vary on a scale depending on where you release the droplets). The vibration of the water is also augmented by a graphic icon, colour-coded to represent the different tones. Once the user has made the connection between the interface, colour cues, notes and zones, the ability to improvise and to create a melody through patterning becomes intuitive.

As the design of a musical interface called for a variety of designated tones, a grid was mapped out to correspond to a simplified scale. Eight tones were selected, representing a major key as a starting point for harmonic layering. A motion threshold kept the relatively neutral background from registering, while a graphic icon was required to track the gesture: this gave visual feedback to the user, making the interface orientation and the navigation of zones understandable.
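Assuming the 640×480 frame and the splitLine values used later in the code (160, 320, 480), with a horizontal division at half height, the eight-zone lookup can be condensed into a single function. A hypothetical plain-Java condensation of the project's if/else chain:

```java
public class ZoneGrid {
    // Map a pixel to one of 8 zones: 4 columns of 160 px, 2 rows of 240 px.
    static int zoneIndex(int x, int y, int width, int height) {
        int col = Math.min(x / (width / 4), 3);   // 0..3, left to right
        int row = (y < height / 2) ? 0 : 1;       // 0 = upper, 1 = lower
        return row * 4 + col;                     // 0..7, one tone per zone
    }

    public static void main(String[] args) {
        System.out.println(zoneIndex(100, 100, 640, 480)); // upper left 1  -> 0
        System.out.println(zoneIndex(500, 400, 640, 480)); // lower right 2 -> 7
    }
}
```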


gesture motion and tracking with graphic icon

An important aspect of the grid layout and user interaction was knowing where the user was affecting the interface, since the visual representation augmented the real, physical interactions with the water. It was determined that a static image appearing intermittently did not represent the action of dropping water, so a sequenced animation (GIF) was created in Photoshop.


GIF sequence of 15 frames developed in Photoshop

Eight unique colour variations of a 15-frame GIF were created. A GIF library was then sourced to introduce the animation into the code; GifAnimation was used to activate the series of images. There were at least two ways to integrate the animation: as a sequence of still images, or as a compiled GIF (the latter was chosen for this instance). For further information, here is a link to start

In order for the GIF to be successful, it had to follow the pixels in its assigned zone, and it needed to appear in the approximate area where the most change occurred in the video processing. What transpired was a coloured “droplet” image appearing where the real droplets of water were playing out on the water's surface. This part of the program code would not have been possible without the consultation of Hart Sturgeon-Reed, who helped to apply the following:

///draw the gifs

if (points>80)


//Upper left 1
if (xPosition <splitLine && yPosition <height/2)

…and so on, for each tonal zone.
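The points, xsum and ysum accumulators behind that check come from the pixel loop: once enough pixels have changed (points > 80), the icon is drawn at the average changed-pixel position. A plain-Java sketch of that centroid step (hypothetical, stripped of the Processing specifics):

```java
public class MotionCentroid {
    // Average the coordinates of every changed pixel to place the droplet icon.
    static int[] centroid(int[] xs, int[] ys) {
        int xsum = 0, ysum = 0;
        for (int x : xs) xsum += x;   // accumulated as the pixel loop runs
        for (int y : ys) ysum += y;
        int points = xs.length;       // number of changed pixels this frame
        return new int[] { xsum / points, ysum / points };
    }

    public static void main(String[] args) {
        int[] xs = { 100, 110, 120 };
        int[] ys = { 200, 210, 220 };
        int[] c = centroid(xs, ys);
        System.out.println(c[0] + "," + c[1]); // 110,210
    }
}
```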

To recap, the foundations of gesture motion detection were layered with split-screen detection, the screen thereafter divided into quadrants, then into eighths (default 640×480). Video processing also enabled tracking of the GIF, implemented with the GifAnimation library. Finally, Minim was used for playback of pre-recorded, royalty-free audio. In this case, the default notes on a guitar were selected as a basis for the sounds – a simple foundation, easily recognizable, with the potential to grow in complexity.

A fundamental leap of concept occurred in the playback results. Initially, a pre-recorded single-note tone would play, and a simple identification of a sound was the result. Minim can play a complete song if needed, recalled at the critical moment; this method may slow down the recall, however, and since the tones for this project were short, the recall required quick access and activation. Another drawback of the Play loop was its one-time cycle: a reset was required after each playback, it did not reset as expected, and the tone often cut short as other tones were activated by the water's motion. To counter the stuttering effect, troubleshooting with the Trigger loop had interesting results. As the motion of the water continuously recalibrated the video detection whenever its surface was broken by droplets, the tones were triggered with a constant reset, creating a continuous overlap of sounds, not unlike the earlier granulation experiments with the Beads library. So here we are with a unique sound that is no longer like a guitar, because it retriggers itself in a constant throng of floating, suspended long notes weaving between one another. It is guitar-like, yet it is pure Processing sound, delivered from simulation to simulacra, activated by the natural element of rippling water.
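The Play-versus-Trigger difference can be modelled in a few lines of plain Java (a toy model only, not Minim's actual implementation): a one-shot play refuses to restart mid-playback and must be rewound, while a trigger restarts the sample on every call, which is what lets overlapping ripples pile notes on top of one another.

```java
public class TriggerVsPlay {
    static class Sample {
        int position = -1;      // -1 means not playing
        final int length;
        Sample(int length) { this.length = length; }

        // Play-style: a one-shot that refuses to restart mid-playback.
        boolean play() {
            if (position >= 0 && position < length) return false;
            position = 0;
            return true;
        }
        // Trigger-style: always restarts from the top, so rapid water
        // motion re-fires the note before it finishes.
        void trigger() { position = 0; }
        void advance(int frames) { if (position >= 0) position += frames; }
    }

    public static void main(String[] args) {
        Sample s = new Sample(100);
        System.out.println(s.play());   // true: starts playing
        s.advance(10);
        System.out.println(s.play());   // false: still mid-playback, note stutters
        s.trigger();                    // restarts instantly, no reset needed
        System.out.println(s.position); // 0
    }
}
```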

The second point to note about the visual and auditory feedback was the glitches. In the code, parameters defined an area within the screen (each zone was 160×240 pixels) from which an approximate point of contact was determined, marking where the most action occurred in each zone with the GIF droplet icon. But as the water's surface continued to overlap and ripple into adjacent zones, the icons appeared to blip outside their originating boundaries, often overlapping one another. This was reinforced by the seeming overlap of tones, when in fact each tone was activated individually; because of the immeasurably small lapses between triggered effects, two tones would sometimes sound as if they were playing simultaneously when they were actually bouncing back and forth. The flowing state of the contained water would activate multiple zones at once; I can only surmise that Processing arbitrarily determined which zone and tonal value to play in order of activation, while constantly re-evaluating as the water produced new results, changing the sounds by the millisecond.

The set-up: a podium, webcam, projector, speakers, source light, laptop, water dish and two water droppers. Simplicity was key, considering the varying levels of sensory input and stimulation; even so, the learning curve was quick thanks to the immediacy and responsiveness of the performance.

(Two other experiments are worth noting: FullScreen – whereby the Processing viewing window stretches to the native size of the laptop screen – and Frames – the calibration tool indicating how often the camera input refreshes its data – did not synchronize, due to the limitations of the external webcam used. The 640×480 resolution was not preferable, but it served its purpose in speed and responsiveness.)

Prototype Performance

From the outset, the physical interactions of the environment and the design (reflection of light on the water surface, the containing vessel, the camera positioning…) were discussed in detail, yet the focus remained on the programming concept. For a designer of physical space, negotiating the programming content with the hardware and the sensitivity of environmental conditions required constant testing throughout the stages of development. Such practice-based research produces unexpected results, and informs the process through reflective and iterative methods. The most challenging aspect is the interaction of elements in their most basic state. The approach to this project was to respect the properties of water, and to work with water as the central vehicle of the creative concept. This now includes the unpredictability of its changing yet constant state.

Phase II

Future stages to the project entail expanding the range of tonal values / octaves in the instrument, which could include secondary mechanisms to “flip” to another octave, or output of sound. Recorded feedback of either visual or auditory information could become an additional layer to the performance. A designed vessel for the water interface is to be reviewed.

Other considerations:

Spatial and cognitive approaches to virtual and digital stimuli in environments have the potential to be accessible touchpoints of communication, whether it be for healthcare, or as community space.

The initial mapping of the split screen into zones carries into another project in progress, and informs the development of a physical space responding with feedback based on distance, motion and time on a much larger scale.



Many thanks to Gary Zheng, Stephen Teifenbach Keller, and Hart Sturgeon-Reed for their help and support.


// Jay Irizawa Fluid Resonance: Digital Water
// base code started with Learning Processing by Daniel Shiffman
// Example 16-13: Simple motion detection
//thanks to Hart Sturgeon-Reed for the graphic icon detection

import gifAnimation.*;

PImage[] animation;
Gif loopingGif;
Gif loopingGif1;
Gif loopingGif2;
Gif loopingGif3;
Gif loopingGif4;
Gif loopingGif5;
Gif loopingGif6;
Gif loopingGif7;
GifMaker gifExport;

//minim sound library

import ddf.minim.spi.*;
import ddf.minim.signals.*;
import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.ugens.*;
import ddf.minim.effects.*;

// Variable for capture device
Capture video;
// Previous Frame
PImage prevFrame;
// How different must a pixel be to be a "motion" pixel
float threshold = 50;

float motionTotL;
float motionTotL1;
float motionTotL2;
float motionTotL3;
float motionTotR;
float motionTotR1;
float motionTotR2;
float motionTotR3;

float maxMotionL = 3000;
float maxRadiusL = 60;
float radiusL;

float maxMotionL1 = 3000;
float maxRadiusL1 = 60;
float radiusL1;

float maxMotionL2 = 3000;
float maxRadiusL2 = 60;
float radiusL2;

float maxMotionL3 = 3000;
float maxRadiusL3 = 60;
float radiusL3;

float maxMotionR = 3000;
float maxRadiusR = 60;
float radiusR;

float maxMotionR1 = 3000;
float maxRadiusR1 = 60;
float radiusR1;

float maxMotionR2 = 3000;
float maxRadiusR2 = 60;
float radiusR2;

float maxMotionR3 = 3000;
float maxRadiusR3 = 60;
float radiusR3;

float splitLine = 160;
float splitLine1 = 320;
float splitLine2 = 480;
float splitLine3 = 0;

int xsum = 0;
int ysum = 0;
int points = 0;
int xPosition = 0;
int yPosition = 0;

//Minim players accessing soundfile
Minim minim;
AudioSample player1;
AudioSample player2;
AudioSample player3;
AudioSample player4;
AudioSample player5;
AudioSample player6;
AudioSample player7;
AudioSample player8;

void setup() {
size(640, 480);
video = new Capture(this, width, height);
// Create an empty image the same size as the video
prevFrame = createImage(video.width, video.height, RGB);

loopingGif = new Gif(this, "DropGifditherwhite.gif");
loopingGif1 = new Gif(this, "DropGifBlue.gif");
loopingGif2 = new Gif(this, "DropGifGreen.gif");
loopingGif3 = new Gif(this, "DropGifYellow.gif");
loopingGif4 = new Gif(this, "DropGifRed.gif");
loopingGif5 = new Gif(this, "DropGifBlueDrk.gif");
loopingGif6 = new Gif(this, "DropGifOrange.gif");
loopingGif7 = new Gif(this, "DropGifPurple.gif");
minim = new Minim(this);

// load a file, give the AudioPlayer buffers that are 1024 samples long
// player = minim.loadFile("found.wav");

// load a file, give the AudioPlayer buffers that are 2048 samples long
player1 = minim.loadSample("1th_String_E_vbr.mp3", 2048);
player2 = minim.loadSample("2th_String_B_vbr.mp3", 2048);
player3 = minim.loadSample("3th_String_G_vbr.mp3", 2048);
player4 = minim.loadSample("4th_String_D_vbr.mp3", 2048);
player5 = minim.loadSample("5th_String_A_vbr.mp3", 2048);
player6 = minim.loadSample("6th_String_E_vbr.mp3", 2048);
player7 = minim.loadSample("C_vbr.mp3", 2048);
player8 = minim.loadSample("D_vbr.mp3", 2048);
}

void captureEvent(Capture video) {
// Before reading the new frame, always save the previous frame for comparison!
prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
prevFrame.updatePixels();
video.read(); // Read the new image from the camera
}

void draw() {


//reset motion amounts
motionTotL = 0;
motionTotL1 = 0;
motionTotL2 = 0;
motionTotL3 = 0;
motionTotR = 0;
motionTotR1 = 0;
motionTotR2 = 0;
motionTotR3 = 0;
xsum = 0;
ysum = 0;
points = 0;

// Begin loop to walk through every pixel
for (int x = 0; x < video.width; x ++ ) {
for (int y = 0; y < video.height; y ++ ) {

int loc = x + y*video.width; // Step 1, what is the 1D pixel location
color current = video.pixels[loc]; // Step 2, what is the current color
color previous = prevFrame.pixels[loc]; // Step 3, what is the previous color

// Step 4, compare colors (previous vs. current)
float r1 = red(current);
float g1 = green(current);
float b1 = blue(current);
float r2 = red(previous);
float g2 = green(previous);
float b2 = blue(previous);
float diff = dist(r1, g1, b1, r2, g2, b2);

// Step 5, How different are the colors?
// If the color at that pixel has changed, then there is motion at that pixel.
if (diff > threshold) {
// If motion, tint the pixel (a blue, in this case)
pixels[loc] = color(0, 50, 150);

xsum+=x; //holder variable
ysum+=y; // holder variable
points++; //how many points have changed / increase since the last frame

//upper left 1
if(x<splitLine && y<=height/2)
//lower left 1
else if(x<splitLine && y>height/2)
//upper left 2
else if(x>splitLine && x<splitLine1 && y<height/2)
//lower left 2
else if(x>splitLine && x<splitLine1 && y>height/2)
//uppermid right 1
else if(x>splitLine1 && x<splitLine2 && y<=height/2)
//lowermid right 1
else if(x>splitLine1 && x<splitLine2 && y>height/2)
//upper right 2
else if(x>splitLine2 && y<height/2)
//lower right 2
else if(x>splitLine2 && y>height/2)

else {
// If not, display black
pixels[loc] = color(0);

//line(splitLine3,240,width, 240);

///draw the gifs

if (points>80)


//Upper left 1
if (xPosition <splitLine && yPosition <height/2)
// E string
//Lower left 1
else if (xPosition <splitLine && yPosition >height/2)
// Upper Left 2
else if (xPosition >splitLine && xPosition <splitLine1 && yPosition <height/2)

//Lower Left 2
else if (xPosition >splitLine && xPosition <splitLine1 && yPosition >height/2)


//Uppermid right 1
else if (xPosition >splitLine1 && xPosition <splitLine2 && yPosition <height/2)

//Uppermid right 2
else if (xPosition >splitLine2 && yPosition <height/2)

//Lowermid right 1
else if (xPosition >splitLine1 && xPosition <splitLine2 && yPosition >height/2)

//Lower right 2
else if (xPosition >splitLine2 && yPosition >height/2)

println("Motion L: " + motionTotL + " Motion R: " + motionTotR);


Processing Fishing


For the water park project I decided to use colour tracking to affect a Processing animation. I created a “fish tank” with various coloured laser-cut fish and “barnacles” that the user fishes out of the tank using a magnetic fishing rod. When a fish or barnacle comes into the webcam's view, its colour is detected and the swimming fish in my Processing animation change colour based on the fish or barnacle you have caught. This project is interactive because the user physically effects the change in the animation and can see that change. It fits the theme of water in three ways: first, thematically, the visual elements are related to fish and fishing; second, the fish and barnacles sit in a tank of water waiting to be caught; third, the Processing animation illustrates swimming, undulating fish. I modified code from:




First I decided to have “fish” as my colour-tracking objects, so I had them laser cut from transparent blue, orange and green plexi. My plan was to use these fish alone and in combination to create mixed colours (explained in the experiments below).




Once I abandoned my original idea of having the fish on sliders, I decided to change my vessel to a fish tank and use a magnetic fishing rod to retrieve the coloured pieces, so the webcam could view them as they came out of the tank. I made the fishing rod by drilling a hole in both ends of a square dowel and feeding a string through both holes, then gluing a magnet to the long end of the string.



The laser cut fish outfitted with magnets. All the pieces ready to go into the tank.

Project Context:

This is an example of a project using Processing and OpenCV. It is similar to mine because it is also a one-player game using the same software tools. Instead of colour tracking, it uses motion detection, comparing the current frame with the last frame: if there was movement, a bubble pops; if not, a bubble is drawn. The goal is to pop all the bubbles. His code also has to update constantly to look for new movement, whereas my code constantly updates to look for new colours. Like my game, there is no set ending or event once all the bubbles have been popped, as his project is also a basic first start. I would have liked to add a different Processing animation that occurs once all fish and barnacles have been caught.

Project Experiments

1: This is my first assignment from Body Centrics using Processing

For this assignment I chose to use a sensor I had never used before, the flex sensor. The first human movement that came to mind with regards to bending or flexing was exercise. This set up of sensors determines how effectively someone is performing specific exercises; in this case, a squat.



(click to enlarge)

Flex sensors are attached behind each knee and a light sensor is placed under the person performing the exercise. The maximum is reached when both flex sensors are fully bent at the knee and the light sensor reads 105 (in this case). At this point, the knees are fully bent in a squatting position and the body is low to the ground, blocking available light from the light sensor. I modified the provided code by changing the layout, colours and circle positions. I then made the circles grow as they approach their respective maximums, so that as the person performs the exercise the circles grow and shrink accordingly, with a large circle for each sensor being the goal.



These are screen shots of the Processing animation. The first image shows the resting body where both flex sensors are straight and flat and full ambient light is reaching the light sensor. The second image shows the sensors reaching their maximums. All maximums and minimums in the code have to be altered and calibrated in each new room due to differing light sources. I chose a yellow circle for the light sensor and two pink circles for the flex sensors.



For the set-up, the breadboard and Arduino sit on the floor underneath the person. The two flex sensors are affixed to Velcro bands that wrap around the leg above the knee; each flex sensor is then stuck onto the leg beneath the knee bend. This set-up could be modified to test the efficacy of a push-up (with flex sensors at the elbow bend) or a sit-up (with the flex sensors along the stomach muscles).

2: Processing sketch of stick figure me

This is the second time I used Processing in class. I made a stick figure of myself that shows when the mouse is pressed. This was a good way for me to learn how to create a drawing in Processing from scratch. I started with a 100×200 canvas and centred the middle line, triangle skirt and ellipse head. I then added two arcs for bangs, with a gap in the middle for a part, and two long rectangles for hair, followed by triangle shoes and two small ellipses for eyes. Finally, I added an if statement on mousePressed so that the drawTegan function appears when the mouse is pressed and not otherwise.

3: Face tracking Body Centrics assignment



You will need white eyeshadow or face paint, black eyeshadow or face paint, foundation, tape and an optional curling iron.


First I did my hair. I curled all my hair to give it some body. I then added two buns on either side of my head to obscure the shape of my head. I also used a large front section to swoop over one of my eyes and pinned it to the side. The long curls also obscure the shape of my jawline.


I then wanted to remove dimension from my other ocular region. I used white eyeshadow all around my eye and on my eyelashes. I then used foundation to remove any darkness from my eyebrow.


My next step was obscuring the bridge of my nose. I taped a large triangular shape across my nose and filled it in with black eyeshadow.


To remove dimension from my mouth region, I taped another shape onto my lips and filled the lips in with foundation and the skin part with black eyeshadow to give the illusion of opposite light and dark areas.


Final look

This anti-face tutorial is based on Adam Harvey’s CV Dazzle project and follows his tips for reclaiming privacy.

1. Makeup

For my look, I avoided enhancing or amplifying any of my facial features. Instead, I used dark makeup on light areas and light makeup on dark areas to confuse face-tracking technology.

2. Nose Bridge

The nose bridge is a key element that face tracking looks for. I obscured it by using dark makeup in an irregular shape over the nose bridge area.

3. Eyes

The position and darkness of the eyes are key features for face tracking. Both ocular regions are dealt with here in different ways. A large section of hair completely obscured one and light makeup conceals the dimension of the other ocular region.

4. Masks

No masks or partial masks are used to conceal my face.

5. Head

Face detection can also be blocked by obscuring the elliptical shape of one’s head. I used buns on either side of my head, and long curls to break up the roundness of my head and jaw.

6. Asymmetry

Most faces are symmetrical and face detection software relies on this. All hair and makeup elements are different on each side of my face.

4: Colour Tracking

This is me experimenting with colour tracking using the gesture-based colour tracker Nick provided us with. When I mouse-press something in view, the colour is recorded and the tracker draws a blob of that colour over the object and anything else that is that colour. Here I learned that the tracker is not as specific as I would like and picks up on things you don’t want it to because of lighting. This issue came up a lot when I was testing my project: anything I was wearing, the colour of my skin, the colour of the wall behind the tank, or even low lighting would make the colour tracker pick up on things I didn’t want it to.
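Under the hood, this kind of colour tracker scans each frame for the pixels closest to the recorded target colour, which is exactly why a similarly coloured wall or shirt gets picked up too. A minimal Java sketch of the matching step (hypothetical names, following the colour-distance approach in Shiffman's colour-tracking example):

```java
public class ColourTracker {
    // Euclidean distance between two packed 0xRRGGBB colours.
    static double colourDist(int c1, int c2) {
        int r1 = (c1 >> 16) & 0xFF, g1 = (c1 >> 8) & 0xFF, b1 = c1 & 0xFF;
        int r2 = (c2 >> 16) & 0xFF, g2 = (c2 >> 8) & 0xFF, b2 = c2 & 0xFF;
        return Math.sqrt((r1 - r2) * (r1 - r2) + (g1 - g2) * (g1 - g2) + (b1 - b2) * (b1 - b2));
    }

    // Index of the frame pixel closest to the recorded target colour.
    static int bestMatch(int[] pixels, int target) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < pixels.length; i++) {
            double d = colourDist(pixels[i], target);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        int orangeFish = 0xFF8000, bluePlexi = 0x0040FF, greenFish = 0x00C040;
        int[] frame = { bluePlexi, orangeFish, greenFish };
        // A slightly different orange still matches the orange fish
        System.out.println(bestMatch(frame, 0xF07010)); // 1
    }
}
```

Because the tracker always returns *some* closest pixel, adding a minimum-distance threshold is the usual way to stop it from latching onto skin or background when nothing close to the target colour is in view.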

5: Modification of fish_sketch

At this point I had decided to modify this sketch for my game project. Here I try changing variables like size, position, fill colour and stroke weight. When I make the background colour (0,0), the fish do not update their position; instead, you see the trail of where each fish has been within the sketch, showing the colours that have been tracked throughout the game. This could potentially become part of the game in the future: a multi-player game where whoever’s colour covers the highest percentage of the screen at the end wins. It reminds me of this:


6: Fish sliders



My first idea was to have this plastic container as my tank. I laser cut rectangles of plexi and glued them together to create channels for sliders the fish would be attached to. The player would change the position of the fish to change the colour of the Processing fish, as in the final game, but in this set-up the fish could be placed in front of or behind one another. The colour tracker would then be able to pick up specific colour mixes, as the plexi is transparent and combinations create new colours (two fish together, or all three). In the end I abandoned this idea because the sliders I made didn’t slide properly, and the fish would not stay attached to them.

7: Fish transparency

The next issue with my original idea was that the colour tracker had trouble identifying the colour mixes because they ended up too dark. In this video I attempt to have the colour tracker identify orangeblueFish, bluegreenFish and allFish; they all just end up a dark purplish colour. Another issue is that the surfaces of the fish reflect the computer screen and confuse the tracker. At this point I decided to switch to my final "Processing Fishing" idea. Beyond the laser-cut fish, I also wanted other coloured items that the tracker could identify.

8: Foam sea creatures



At this point I decided to try tracking these foam sticker sea creatures. I first tried gluing magnets to them so they could sit in the tank with the laser-cut fish, but no amount of weight would make them sink to the bottom. I then attached them to a stick so the player could add them into the webcam's view, but this didn't really fit with the fishing idea I had come up with. Finally, I made coloured "barnacles" as extra tank items that do sink to the bottom. In this video I show the tracker identifying all the colours of the extra foam creatures. Some were too close to the colours of the laser-cut fish, which confused the colour tracker.



Arduino / LED Matrix Control


Processing / LED Model / Trackpad Interface








Several years ago, while I was still living in South Africa, I experienced a beautiful natural phenomenon. While walking through the shallow waters of a salt-water river mouth, I noticed a shimmer in the water as it swirled around my legs and feet. Thinking it was merely a figment of my imagination, I dismissed it; however, moments later the water unmistakably illuminated all around me and I realized that something beautiful was in fact occurring. Bioluminescent plankton contained in the water from a red tide earlier that day were lighting up in amazing displays of colour as the water was disturbed. Fascinated by the effect, I decided to collect some of the water and the organisms within it so I could observe them further at home; unfortunately they didn't last long and died the next day. This experience stuck with me from that moment on and further fuelled my fascination with the natural world and the astonishing beauty it contains.

When starting this project with the assignment theme of water as an interface, my mind instantly went back to this childhood experience, and I wanted to somehow convey it through technological means. I already knew that tonic water contains quinine, a substance known to fluoresce when exposed to ultraviolet light, and I wanted to exploit this property to achieve the effect. By using UV LEDs and tonic water, my hope was to create an interactive experience that would recreate the sense of wonder I felt back then as a child.

To aid me in this endeavour I found various sources to help influence my design choices. The first, a YouTube video depicting the same effect I witnessed, provided a refresher on the visual aesthetics these bioluminescent creatures manifest; I wanted to achieve the same level of vivid colour in my project. Another source of inspiration was an exhibition created by the artist Shih Chieh Huang after he had been studying bioluminescent creatures in the deep ocean. Similar to my approach, he used neon-coloured LEDs to convey the same effect as those creatures.

This was accompanied by more research into bioluminescence, as well as other artworks that have incorporated the phenomenon. Furthermore, I was able to find good examples of existing, non-art-based projects that successfully used UV LEDs and tonic water to create glowing liquid interfaces, one of which is referenced below. This was significant because it demonstrated that my intended use of the materials would work.


Nemerov, Alexander. "The Glitter of Night Hauling." Magazine Antiques 179.3 (2012): 146-155. Art Source. Web. 4 Dec. 2014.





1 – UV LED Through Tonic Bottle

The first thing I decided to test was the effect the UV LEDs had on the tonic water. The desired effect was for the tonic water to fluoresce a bright blue when the UV light would shine through it. My concern was that the low-power LEDs would not be a strong enough light source for the quinine in the tonic water to react.

As seen in the video, the UV LED did have an effect on the tonic water albeit not as strong as I had hoped. I noticed that the effect of the light was intensified when I aimed the light through the opening of the bottle (avoiding the plastic). This would suggest that the plastic of the bottle is filtering out some of the weak UV light.

My initial design had the LEDs below the plastic container, in an attempt to avoid the tricky work involved in submerging the LEDs and making their electrical connections water-tight. After these results, I think I may have to submerge the LEDs.

2 – UV LED Submerged vs Through Plastic

After the previous test results, it made sense to progress to actually submerging an LED in the tonic water, to see if the improvement was good enough to justify committing the time to waterproofing.

I wasn't sure how best to test the UV LED in liquid, so I rigged up the light using crocodile clips and simply dipped the encased surface of the LED into the tonic water, making sure not to get moisture between the electrical contacts.

As soon as the LED met the tonic water and began to submerge, I could tell that the effect was much better than with the LED simply on the exterior. The light seemed to create a cleaner beam, illuminating a clear patch of the tonic water. After this test I decided that I would have to submerge the LEDs in order to get the best effect.

3 – Testing UV LED Matrix With Arduino Code

This was a simple test of the LED matrix. I used a simple piece of Arduino code to write a PWM value to each LED in order, so that I could make sure each channel was working and that the corresponding LED was in the position I expected it to be. Essentially it was a test of my addressing system. As you can see in the video, I was fortunate that each LED worked the first time around. At this point I knew that if I could get the Processing sketch and the Arduino to communicate via serial, I could achieve the desired effect.
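The channel test can be sketched as follows (Python standing in for the Arduino loop; channel count and values are illustrative): write full brightness to one channel at a time and confirm the lit LED is where its index says it should be.

```python
# Python stand-in for the Arduino channel test: light one channel at a
# time so each LED's physical position can be checked against its index.
def sweep(num_channels=16, on=255, off=0):
    for ch in range(num_channels):
        frame = [off] * num_channels
        frame[ch] = on
        yield ch, frame

frames = list(sweep(4))
print(frames[2][1])  # [0, 0, 255, 0] -- only channel 2 lit
```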

4 – Processing Sketch

Another major part of the project is the middleware Processing sketch that interprets the input (yet to be determined), plots those coordinates onto a matrix, and updates the Arduino's UV LED output depending on position. I took this approach because I was apprehensive about coding the project without a visual aid showing what the output would be. It also allowed me to test the system with a simple input: my computer's mouse.

A crucial part of the program was the collision detection between the input coordinates and the various LEDs' positions. I was unsure how to code collision detection, but after some research I found a solution. This test covers the collision detection as well as the addition of a buffer around each LED, allowing me to configure the distance a finger would need to be from each light to activate it.
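The collision test boils down to a point-in-circle check; a minimal Python sketch (illustrative coordinates and buffer size; the project's version lives in the Processing sketch):

```python
import math

# Point-in-circle collision: an LED activates when the input coordinate
# is within buffer_radius pixels of it. Radius and positions illustrative.
def led_hit(finger, led, buffer_radius=30):
    return math.hypot(finger[0] - led[0], finger[1] - led[1]) <= buffer_radius

leds = [(50, 50), (150, 50), (50, 150), (150, 150)]
active = [led for led in leds if led_hit((60, 55), led)]
print(active)  # [(50, 50)]
```

Enlarging `buffer_radius` is exactly the "buffer around each LED" knob: it sets how close a finger must come before the light responds.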

The test was a success, and I'm confident the sketch will be able to handle the visual aspects of the LEDs: brightness, fade, location, etc.

5 – Serial Comms

Another major part of the project was to test the serial communication between the Arduino and the Processing sketch. The Processing sketch was responsible for controlling all of the light values and then outputting the value for each LED via serial to the Arduino. The Arduino's only job was to receive the array of values from Processing and update the PWM driver, in turn controlling the current value of the matrix.

My concern was that the Arduino would not be able to receive all 16 values at the same time, as serial communication can be a bit flaky; I knew some tweaking would be necessary to get the timing right. As you can see in the test, there were initially problems: the Processing sketch was transmitting too fast for the Arduino to keep up. I made some changes, and by the end of the test you can see I was able to get a pretty fluid motion across the LED matrix.
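One common fix for that kind of overrun is to frame each batch of values behind a sync byte, so a receiver that falls behind can drop bytes until the next packet boundary. The sketch below is a hypothetical Python illustration of that scheme, not the exact tweak I made.

```python
# Hypothetical sync-byte framing for the 16-value packets (Python
# illustration; the real code is Processing/Arduino). 0xFF marks the
# start of a packet, so PWM values are capped at 254 and a receiver
# that loses sync can skip ahead to the next header.
def make_packet(values, header=0xFF):
    assert len(values) == 16 and all(0 <= v <= 254 for v in values)
    return bytes([header] + values)

def parse_packet(buf, header=0xFF):
    start = buf.rfind(bytes([header]))
    if start == -1 or len(buf) - start < 17:
        return None  # no complete packet received yet
    return list(buf[start + 1:start + 17])

pkt = make_packet(list(range(16)))
print(parse_packet(b"\x07" + pkt))  # [0, 1, ..., 15]
```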

6 – Kinect Test

The Kinect was another option for the input method. I had planned to position a Kinect pointing at the container of tonic water and use hand and finger tracking to watch users interacting with the liquid. I had concerns about this approach because of its limitations: the angle would have to be very precise, and there would be issues detecting the fingertips once they were submerged in the liquid.

This test was to find out the complexities of setting up the Kinect with my software and the types of constraints I would have to employ in order to get the desired results. Of all the data the Kinect can provide, I only needed the points of the fingers in the liquid.

As you can see in the experiment video, the Kinect worked relatively well; however, there would have to be some major additions to the library used. In the end I decided against using the Kinect, as I felt it was overcomplicating the project. I really wanted the interface to be self-contained, and using an external sensor seemed to detract from that.

7 – Cap-Touch Basic Test

The initial plan for the project was to include capacitive touch sensors within the liquid itself in order to calculate the position of contact between a user's finger and the tonic water. This seemed like the best method to create an actual interface out of the liquid. I was unsure that this method could work and knew it would take a while to work out the complexities of the hardware as well as the software.

This test was the first step in determining whether this approach would not only work but be viable within the time frame of the project. I wasn't sure if tonic water would even conduct the same way water does, but I knew water had been used like this, so I had hope it would work too.

I was able to get the capacitive sensor to work well; however, due to time constraints I had to abandon the plan. I hope to do more testing, though, and perhaps incorporate the feature into a "version 2.0".




Project 2: Blinking Jellyfish


When I think of a water park, the first image in my mind is an undersea view, and of everything under the sea my favourite creature is the jellyfish. It's an amazing animal: pellucid and soft, and its movement is very interesting. It can also be influenced by light, taking on the colour of whatever light shines on it. Actually, the reason I love jellyfish comes from a scene in Life of Pi, directed by Ang Lee. In that film, when night falls, groups of these blue ocean animals surface, mirroring the blue moon, swimming slowly and crowding the undersea plants; the view is fantastic. This is where my idea comes from.


I wanted to use Processing to make some graphic creations and attempt some animation with it, interacting with sensors so that the images take on more artistic effects and participants get more fun and visual enjoyment.

Experiment1-Getting RGB information from Colour Sensor

At first, I searched for sensors I could use and found one called the ENV-RGB detector. It's very convenient: it can be put directly into water to take readings, so my original idea was to let participants interact with physical things in real water through this sensor, and I did a series of experiments. But the results were not satisfying. The sensor was very unstable, sometimes inexplicably failing to receive a signal, and it was not easy to find information about it online; on its blog, others were asking questions about this sensor too. I asked my friends for help, but they couldn't figure out how to solve the problem either, so I had to consider other ideas or other sensors.

Experiment2-Changing Colour Sensor

Then I decided to change to a more common colour sensor and found a sensor board called the TCS3200D. There is much more information about this sensor online, including experiment videos, so it's easier to get help. I had some background experience with colour sensors, but the one I had used before, the ColorPAL, has just three pins; the TCS3200D has ten pins, and its program code is more difficult too. Since I couldn't buy a ColorPAL in time, I had to research and study this ten-pin sensor. I followed a series of online tutorials and watched videos and read materials about it. Finally, I could read RGB values from the sensor very stably and modify the relevant code myself.

Experiment3-Controlling RGB-LED with Arduino

After I got the RGB values, I wanted not only to show them on screen but also to do some more direct, fun experiments, so I found an RGB LED to display them. This is a very interesting LED: you can modify values in the Arduino code to change its colour to whatever you want, or set random values to let it change colour randomly. This experiment shows the random changes.
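The random mode is just three random PWM values; a quick Python sketch of the idea (the real code runs on the Arduino, writing to the LED's three colour pins):

```python
import random

# Python sketch of the random-colour mode: the Arduino writes three
# random 0-255 PWM values to the LED's red, green and blue pins.
def random_rgb(rng):
    return tuple(rng.randrange(256) for _ in range(3))

rng = random.Random(7)     # seeded only so the sketch is repeatable
colour = random_rgb(rng)
print(all(0 <= c <= 255 for c in colour))  # True
```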

Experiment4-Synchronizing LED’s colour with Sensor

After controlling the sensor and the RGB LED separately, I tried to combine them. As I was not very familiar with the program code, this process took me a long time; it is not as simple as putting the two pieces of code together. The colour sensor code I used was an example in which the work inside "void loop" is really a setup function, so when I wrote my output code in "void loop", the program kept looping that function, the output was never successfully sent to the LED, and the LED didn't work. At the beginning I didn't know the reason, so I tried many times to solve the problem, even attempting other kinds of colour sensor code. At last, with help from classmates, I solved it and the idea could continue successfully.

Experiment5-Drawing an Image with Processing

As the project's main platform is Processing, I read some books about it carefully. It's very easy to draw geometric figures with Processing, but drawing something composed of curves, like the jellyfish, which is a soft animal, is a little more difficult. At the beginning I downloaded some relevant examples, learned how their authors had written them, and then tried to modify them into what I wanted. Slowly, a pattern emerged that looked a little like the tentacles of a jellyfish.

Experiment6-Modifying the image and making animation

To make the pattern I drew more vivid, I dropped the code I had adapted from others and wrote a new piece, which of course drew on that earlier experience. I defined some key points in each line, which is composed of many points, and animated the figure by controlling those key points. For example, I rotated and copied the lines around a centre to get the body of the jellyfish, then divided the lines into three groups and adjusted their widths to control its form. I also rotated the whole animation to make it more lifelike. Although the final effect is not very obvious, the process was very difficult, because I had to calculate many values in the code.
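The rotate-and-copy step can be sketched in Python (illustrative point values; the project's code is in Processing):

```python
import math

# Rotate-copy one tentacle line around the centre to build the radial
# jellyfish body -- a Python sketch of the Processing technique above.
def rotate_point(p, centre, angle):
    x, y = p[0] - centre[0], p[1] - centre[1]
    c, s = math.cos(angle), math.sin(angle)
    return (centre[0] + x * c - y * s, centre[1] + x * s + y * c)

def copy_line(line, centre, copies):
    return [[rotate_point(p, centre, 2 * math.pi * k / copies) for p in line]
            for k in range(copies)]

tentacle = [(0, 10), (0, 20), (0, 30)]
body = copy_line(tentacle, (0, 0), 4)
print(len(body))  # 4 rotated copies
```

Animating the key points before the copy step makes every rotated tentacle move in sympathy, which is what gives the radial body its coherent motion.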

Experiment7-Communicating with Processing

The Arduino and Processing parts were both finished, so I tried to make them communicate with each other. This process was also troublesome: several times the values sent to Processing were not accurate, or could not be sent at all. Eventually I found the example the lecturer had shown us in class, a method for communicating from Arduino to Processing, so I applied it to my project and modified the code. At last, I got the correct RGB values from the Arduino and used them to control the image in Processing.

Experiment8-Making boxes

Actually, at first I wanted to use coloured paper directly, but when I did, I found that the colour sensor was too unstable; a little light or somebody moving could influence its readings. So I changed the paper's shape to make the signal more stable. At first I just folded the paper's two sides together, but light still came in from the top and bottom, so I spent a whole afternoon making boxes to enclose the sensor. The result was satisfying: with the lights off, these boxes are very beautiful. In the end I improved the boxes by taping up the last side and leaving just a small hole to put the sensor in, so it could finally recognize the RGB of the paper more reliably.
















I wanted the jellyfish, as a representative of the undersea world, to show its charming scenes to everyone. At different depths of the sea, different amounts of light penetrate, so the colour the light casts on a jellyfish differs too. I used the colour sensor to put the colour the participants presented onto the jellyfish I made. One thing I ignored, though, is that jellyfish generally appear in groups, and I drew only one. If I made more forms and set more rules to create more effects, my work could become more interesting; for example, I could store the recognized colours and use them to create many different jellyfish, or other interactions.

Through observing and studying other students' work and the lecturers' explanations, I think my project could have had richer visuals and more interactivity. I was perhaps too focused on making the jellyfish's form and animation move vividly and realistically, and spent too much time on it, while neglecting the more interactive and interesting aspects of the work.

Through this course and its variety of experiments, I found that there are actually many directions one can take with water. Returning to the initial idea of detecting water, I think it is entirely possible to consider water in itself. Water is the source of life, and the quality of drinking water is closely related to our daily lives. In my country, pollution from industrialization is now very serious, so people are slowly beginning to pay attention to the health of their diet. Water's pH value is a topic of common daily concern, so I searched for information about pH and became very interested in a pH sensor.

So maybe I will do some experiments with this sensor in later study, and I expect to do more work with it.

Another idea concerns the electrolysis of water, by which we can separate minerals from water. Different minerals in the water show different colours and states. These ideas are much more interesting and meaningful than directly changing the colour of the water.


Arduino Tutorial:

Online Shop-RGB LED Breakout (5050):

TCS3200 Color Sensor:



Underwater Dream is a game that can be played with your mind. The purpose of this project is to create a dream-like underwater experience.

This idea came from my own dream experience. When I dream, I can experience amazing stories that never happened in real life, and if I realize I am dreaming, I can control the dream with my mind. For example, if I want a cake, I can imagine there is a cake in my hand, and after I blink my eyes I will see a cake in front of me, like magic.

So there are some points that make the dream experience different from real life:

1. No real physical activity

2. Surroundings changed by imagination (and a blink)

3. Fantastic stories that nevertheless look real

Because many unexpected physical errors appeared (and were difficult to fix) in Project 1, I didn't want to build a large physical installation. So this time I chose a game, which is a good medium for interaction, as the platform.

To decide what elements could be used for a water park, I did some brainstorming:




To achieve these points, this game combines three different data sources: an EEG sensor, an Arduino, and a webcam.

The EEG sensor is used to achieve points 1 and 2. The player moves forward via attention level, plays music via meditation level, and destroys barriers by blinking their eyes.

The Arduino is used to connect water in the real world to the virtual water, by detecting the water level.

The webcam is used with OpenCV to detect the player's face, making the world on screen look "real" because the player gets a different viewing angle by moving their head.

Here is a reference video about the “real” 3D effect:

To summarize, this game can be played without keyboard or mouse, using only mind and motion.


About the scene, there are two plans:

First, the whole scene is a loop so that the game can circulate, and the player turns automatically. There are three themes for the whole map: Coral Valley, Sunken Ship, and Underwater Graveyard.

Second, the scenes are straight paths in which the player cannot turn. There are two themes across two scenes, Life and Death. The player starts the game in the Life scene, in which all the scenery is light and lively. After experiencing the whole scene, they are transported to the Death area, full of dark, horrible skeletons and the remains of a sunken ship. After this, the player is brought back to the Life area to complete the loop.



         Unity 3d


3Ds max



         openCV for Processing





The main challenge is how to transfer data between the different sources: data comes from the Arduino, the webcam, and the EEG sensor into Processing, and is then passed on to Unity.



From Processing to Unity

First of all, I had to make sure Processing could be connected with Unity. I found a video in which Processing controls the scale of a cube in Unity. As the video suggested, I downloaded the library called oscP5; with it, numbers can be sent to Unity successfully. This is the basis of the whole project and had to be confirmed before everything else started.

From Arduino to Processing

I use an ultrasonic sensor in this project to detect the water level in a box. The game will start only when the box is filled with water.
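The water-level check reduces to converting the ping's echo time into a distance and comparing it to a threshold. A Python sketch of the idea (the real code runs on the Arduino; the threshold is illustrative):

```python
# Python sketch of the ultrasonic water-level check (the real version is
# Arduino code; threshold illustrative). The sensor looks down at the
# water, so a shorter echo means a higher water level.
def ping_to_cm(echo_us):
    # sound travels ~0.0343 cm/us; halve for the round trip
    return echo_us * 0.0343 / 2

def tank_full(echo_us, full_distance_cm=5.0):
    return ping_to_cm(echo_us) <= full_distance_cm

print(tank_full(200))   # True  (~3.4 cm to the surface)
print(tank_full(1000))  # False (~17 cm: box still empty)
```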

Here is the circuit:


Here is an experiment, for testing if the ping sensor can detect the water level:

//Experiment 1

(Some experiment descriptions are under 200 words because I didn't write them in order; some descriptions are in other sections, and some cover more than one experiment.)

Here is another experiment, testing the data transfer from Arduino to Processing to Unity.

//Experiment 4

From EEG to Processing


I first learned about EEG from a TED talk about Emotiv; I was so amazed by the technology (and this is an important reason why I chose Digital Futures). I have a NeuroSky MindWave EEG sensor, but because I had no coding background, I didn't use it until I learnt Processing. There is a library for the EEG sensor called ThinkGear, and fortunately the NeuroSky company provides lots of development tools on their website, some of them free.

To transfer the data from the EEG sensor to Processing, two pieces of software are necessary: MindWave Manager and ThinkGear Connector. The first is used for connecting the EEG sensor to the laptop; the other is the socket that helps transfer data from the sensor to Processing. Both can be downloaded from the website.
There are several variables provided in the library: attentionLevel, meditationLevel, blinkStrength, delta, theta, low_alpha, high_alpha, low_beta, high_beta, low_gamma, mid_gamma. Their levels can reflect the user's mental state. In this project, only attentionLevel, meditationLevel and blinkStrength are used.

This is Experiment 2: Data from EEG sensor to Processing

In this experiment, I used attentionLevel to control the height of this rectangle, and blinkStrength to change its colour. The harder I blink my eyes, the more vivid the colour is.

//Experiment 2

Here is Experiment 5: using attention to control movement and speed. The player moves faster by concentrating more, and if the player loses concentration, he/she stops.

//Experiment 5

In Experiment 6, the blink function has been added. When my blink strength reaches a certain level (which prevents slight blinks from disturbing the result), the barrier disappears. This function is used to simulate the dream situation I mentioned in the first part. The advantage is that the player never sees the barrier disappear, because it happens while the player's eyes are closed.
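The two EEG mappings so far, attention driving speed and a blink threshold gating the barrier, can be sketched in Python (the project's versions are in Processing; all thresholds here are illustrative):

```python
# Python sketch of the two EEG mappings (Processing in the project;
# thresholds illustrative). NeuroSky levels run 0-100.
def move_speed(attention, stop_below=30, max_speed=5.0):
    # below the cutoff the player stops; above it, speed scales with attention
    return 0.0 if attention < stop_below else max_speed * attention / 100

def blink_destroys(blink_strength, threshold=55):
    # only a deliberate, hard blink clears the barrier; slight blinks don't
    return blink_strength >= threshold

print(move_speed(80))      # 4.0
print(blink_destroys(30))  # False
```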

//Experiment 6

From Webcam to Processing


For tracking the face with the webcam, the OpenCV library is used. Starting from the face-tracking example, I located the middle point of the face and used its position to control the camera in Unity. The final effect is that when you move your head, the camera in Unity moves and rotates with your movement.
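A Python sketch of the mapping from the tracked face to a camera offset (hypothetical frame size and sensitivity; in the project the values travel on to Unity via OSC):

```python
# Python sketch of face-position -> camera offset (hypothetical frame
# size and sensitivity; the project sends the values to Unity over OSC).
def face_to_camera(face_rect, frame_w=640, frame_h=480, scale=2.0):
    x, y, w, h = face_rect            # OpenCV-style bounding box
    cx, cy = x + w / 2, y + h / 2     # middle point of the face
    nx = (cx - frame_w / 2) / (frame_w / 2)   # normalise to [-1, 1]
    ny = (cy - frame_h / 2) / (frame_h / 2)
    return (nx * scale, -ny * scale)  # flip y: image y grows downward

offset = face_to_camera((280, 200, 80, 80))
print(offset)  # a centred face gives (almost) no offset
```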

//Experiment 3

Work in Unity

After transferring data from Processing to Unity, the remaining work was completed in Unity.

The first challenge is that Processing is not supported in Unity, so I had to use JavaScript, which I had never used before. I wrote the JavaScript by learning from examples, online tutorials and the official Unity manual. I found that even though JavaScript and Processing are different languages, they have enough in common that the knowledge I gained from Processing was helpful in learning JavaScript.

In Unity, I move the whole scene instead of moving the character. In Experiment 3 I already tested the camera movement, in Experiment 5 the movement function, and in Experiment 6 the basic gameplay, but those all involve Processing. Here is functionality that takes effect purely in Unity.

This experiment is an improved version of Experiment 6. Because I need the game to circulate automatically without stopping to restart, the barriers should reappear; Experiment 7 is for this function. The effect is that after the second barrier is destroyed, the previous one reappears in its original position.

//Experiment 7

Work in Processing

Experiment 8 is for background music controlled by the meditation level. When the meditation level goes over 50, a piece of music is played, using a library called Minim. The music will not play a second time until it has finished playing the first time.
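The play-once gating can be sketched as a tiny state machine in Python (Minim handles the actual playback in the sketch; clip length is illustrative, the threshold of 50 is from the description above):

```python
# Python sketch of the meditation-gated music (Minim does the real
# playback; clip length illustrative). The clip never restarts until
# it has finished playing.
class MeditationMusic:
    def __init__(self, length_s=10.0, threshold=50):
        self.length, self.threshold = length_s, threshold
        self.playing_until = -1.0

    def update(self, meditation, now_s):
        if now_s >= self.playing_until and meditation > self.threshold:
            self.playing_until = now_s + self.length
            return True   # start (or restart) the clip
        return False

m = MeditationMusic()
print(m.update(60, 0.0))   # True: meditation over 50, clip starts
print(m.update(90, 5.0))   # False: clip still playing, no restart
print(m.update(60, 12.0))  # True: clip finished, may play again
```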

//Experiment 8

Experiment 9 tests the OBJLoader library. I tried an example, but when I imported my own obj, it couldn't show the texture and I don't know how to solve this problem. OBJLoader was another option in case Processing couldn't transfer data to Unity, but obviously Processing doesn't support 3D as conveniently as Unity does.

//Experiment 9

Experiment 10 is where I tried to let the player turn automatically. Unfortunately I only completed half of it: because I didn't set the original position properly at first, it was difficult to change later. I tried two methods to trigger the turn. The first used distance, but it wasn't accurate, so I tried collision instead; that worked much better, but because I didn't have enough time to fix the positioning, I didn't add this function to my final version. I also forgot to record this part….



Processing and Unity:


The 3D models were made in 3ds Max, and the textures were drawn in Photoshop.

Here are some reference pictures:


Here are some pictures for modelling:


Import 3d models to Unity:


I also added many effects, such as light, fog and particles, to this scene.

4. Conclusion

Here is the final version of the project:


Actually, this is not a project meant for showing to the public; it concentrates on personal experience. But I think a multi-player version would be more attractive. Also, because of the time, I didn't complete everything I wanted to show.

From this project I really learnt a lot. This is the first time I made something involving code. I am excited about learning a programming language (even though I met a lot of difficulties, both in Processing and JavaScript, some of which remain unsolved). I think coding is very important in a project, because I found that many ideas cannot be realized for lack of programming knowledge.

Misty Window




An interactive installation that displays a movie hidden under a circular viewport, controlled by custom tracking code that follows the viewer via a webcam.

The idea behind my project was formed through a mixture of different concepts and inspirations. I knew from the beginning that I wanted to do an interactive installation; the question was where to start. I faced many difficulties in deciding the direction I wanted to take, so to proceed I decided to explore Processing and conduct simple experiments to understand the limitations of the program. I tried to test different forms of water, from bubbles to dry ice, but due to the time constraint my options were limited. Following my tests, I tried to figure out a way to use water as a trigger, and since we are using Processing, I chose to use projection in an unconventional way. After a great deal of research for inspiration, I came upon various projects that utilize projection; one of them was a projection of a 3D rabbit on steam using three cameras.



The installation consists of a fog screen, a webcam, and a projector. The fog screen is intended as a display for the movie. The choice of projected movie was inspired by the concept of the project: since it is all about water, it was fitting to choose a video that explores the sea and the creatures it holds. The installation works as follows: as the viewer approaches, a movie is projected onto the fog. As they get closer, the webcam detects their face using FaceOSC, a tool for prototyping face-based interaction, and they can explore the video through a circular viewport. The viewport is scaled and positioned according to the movement of the head: as viewers move closer to the installation, the circle increases in size, and as they move further away it decreases, allowing them to control the zoom level of the scene. Another part of the installation is a box with a bright LED that only lights up when it contacts the steam. Viewers are provided with transparent boxes that they fill with the steam coming from the screen. This represent




  • Processing
  • FaceOSC
  • Video Projection Tool (VPT)



  • 2 Humidifiers
  • 47 Straws
  • 2 hoses
  • Black board frame
  • 1 Projector
  • Macbook pro
  • Logitech webcam

























The Drift table

Water Light Graffiti



Smile TV








Experiment 1

In this experiment I wanted to test the projector and explore projection onto different surfaces. I used VPT, a multipurpose projection software tool; since it was my first time using it, I explored it by projecting onto surfaces with various shapes. In this video, I mapped a video that came installed with the program onto a sweatshirt, using three layers of the same video to cover the whole shirt. Due to my limited experience with the program, it took me a while to cover all of the edges, but after multiple tries I was able to achieve the goal. This helped me understand the tricks involved in mapping and how to apply this knowledge to my project. As I learned how to control each layer, I discovered VPT's capacity to produce complex projects.




Experiment 2

The next step in projection was to choose the surface and the items involved. By the time of this experiment I had already figured out the concept behind the project, so I played with the materials. My first choice was to project onto steam, so I bought a humidifier and a number of straws to produce steam with the structure of a fog screen. I also took the hose from my vacuum to connect the straws to the humidifier. The tricky part of this experiment was the uncertainty over how clear the projection would be. While testing, I found the best approach is to project imagery with less detail for the best resolution; the colours should be bright, with strong contrast between them.




Experiment 3

After the previous experiment, I had to figure out the position of the projector relative to the fog screen. In trying to do so, I encountered multiple problems. I built three prototypes to find the optimal position, but sadly each one of them failed. One of the problems was with the straws: I had to tape each pair of straws together multiple times to ensure that the steam would not escape. The tape could not seal in the steam, however, because I did not use waterproof tape. After solving this issue, I commenced testing the position of the fog screen. The first position I tested was based on my initial idea for this project, in which the fog screen would be viewed from the top. However, since the steam came from the side and the straws were laid horizontally, the steam was not strong enough and water kept pooling up inside the straws. Another issue was that the projector has to be placed within a certain distance, and the best arrangement for that is to position the straws vertically.




Experiment 4:

In this experiment, I tested the FaceOSC app for the first time. In order to figure out how it actually works, I had to look for examples. One of the pieces of code I found belongs to Adam Bendror. I tried to mimic the same style to understand more about the tracking of each feature, but I could not because of the time limit. This illustration was made in DMesh. Still, I worked on learning exactly how tracking works in FaceOSC.






Experiment 5&6:

In these experiments I wanted to explore the visuals projected on the steam. I wanted the viewer to explore the screen through something similar to looking through a peephole. At first I did not know how I could achieve that in Processing; I tried to draw a circle, but of course it was not successful and I could not figure out how to make it work. So I tried another option, less direct but much simpler: I drew a black rectangle and subtracted a circle from its middle in Photoshop. This rectangle follows the head movements using FaceOSC, so I had to make the shape really big so that when the camera tracks the face, we cannot see the edge of the rectangle. After I achieved that, I wanted to explore what I could hide underneath the peephole, so I loaded a movie clip and the result was amazing: I loved the movement in the video and the soothing music. I then made another version with a white rectangle instead of the black one, to see later in projection which had more impact. When projecting on steam, it is best to choose visuals with the least detail and with strong colors and contrast to obtain the best resolution.
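The mask-positioning idea can be sketched as a small bit of arithmetic. This is a hypothetical sketch outside of Processing; it assumes the face position arrives normalized to 0..1 (in the real sketch it comes from FaceOSC via OSC messages), and the oversized mask dimensions are placeholders.

```java
// Sketch of the oversized peephole mask: center the mask's hole on the
// tracked face. Making the mask several times larger than the canvas
// ensures its outer edge never becomes visible as the head moves.
public class PeepholeMask {
    // Linear map, like Processing's map().
    static double map(double v, double a, double b, double c, double d) {
        return c + (v - a) * (d - c) / (b - a);
    }

    // Top-left corner at which to draw a maskW x maskH mask image so its
    // central hole sits over the face on a screenW x screenH canvas.
    static double[] maskTopLeft(double faceX, double faceY,
                                int screenW, int screenH,
                                int maskW, int maskH) {
        double holeX = map(faceX, 0, 1, 0, screenW);
        double holeY = map(faceY, 0, 1, 0, screenH);
        return new double[] { holeX - maskW / 2.0, holeY - maskH / 2.0 };
    }
}
```

In the draw loop, the returned coordinates would be passed straight to `image(maskImg, x, y)` on top of the playing movie.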





Experiment 7

The idea was to connect two applications working on a video together. I wanted to create visuals in Processing and pipe them into VPT for live interactive projection. I thought it was going to be hard and tricky, since I was already using both FaceOSC and Processing and was now adding VPT to the mix. However, passing the frames across turned out to be an easy solution. All of this was possible because a Syphon server works really well with audio and video. As explained on the official website, “Syphon is an open source Mac OS X technology that allows applications to share frames – full frame rate video or stills – with one another in real time.” It allows third-party applications to access any video. To apply Syphon, I had to download the library in Processing and then write a couple of lines of code.





Experiment 8:

In this experiment, I tried to add another interactive part to the installation that highlights the water (mist). After coming up with the final concept of the installation, I started exploring ideas that could be done with Arduino.

After much research, I decided to use a steam sensor to trigger a bright light. The viewer is offered a transparent box that they can fill up with the magical mist (the steam coming from the screen), and after a couple of seconds the box lights up. Once the mist slowly vanishes, the light fades out as well. This experiment had a mesmerizing effect, with the colorful LED surrounded by an aura of fog.
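The light-up-and-fade behaviour can be sketched as a simple control loop. This is logic only, written here in plain Java as a hedged sketch; on the actual hardware it would run inside an Arduino loop with analogRead/analogWrite, and the threshold and ramp rates below are assumptions, not measured values.

```java
// Sketch of the box's LED behaviour: ramp up quickly while mist is
// detected by the steam sensor, fade out slowly once it vanishes.
public class SteamLed {
    static final int THRESHOLD = 300; // hypothetical reading for "mist present"
    int brightness = 0;               // 0..255, like a PWM duty cycle

    // One control step; returns the new brightness.
    int step(int sensorReading) {
        if (sensorReading > THRESHOLD) {
            brightness = Math.min(255, brightness + 15); // fast ramp up
        } else {
            brightness = Math.max(0, brightness - 5);    // slow fade out
        }
        return brightness;
    }
}
```

The asymmetric ramp rates are what produce the effect described above: the box lights up within a couple of seconds of being filled, then lingers as the mist disperses.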




Processing is a very useful tool for achieving greatness, but that takes time. In this project I encountered multiple problems, all related to timing issues. As I was adjusting and mounting my installation, things started to collapse: some of the straws filled with water, which stopped the steam from coming out. Everything had actually worked perfectly the week and the night before, but an hour before the presentation nothing seemed to work. At the last minute I was able to save some of the straws, but I could not show the second part of the installation.



Thank you to Jenna and Glen for their tremendous help.








The Swimming Pool

The Swimming Pool is a first-stage prototype for an art installation exploring the use of swimming pools in European and North American films as a signifier of forthcoming disaster, often resulting in the death or murder of one of the characters, committed or experienced by a protagonist. In this case a swimming pool plays the role of the proverbial rifle which, if introduced into a scenario, has to end up shooting someone. The two films used in the installation are the French/Italian film “The Swimming Pool” (La Piscine, 1969) and the British/French film “Swimming Pool” (2003).

The idea of a miniature swimming pool was an immediate reaction to the water park assignment. I had been thinking about trying a miniature installation for quite some time but never had a chance to attempt it. I had seen both films a long time ago, but they somehow remained in my memory, and this was a perfect opportunity to try out the concept.

The work process: 

My first idea was to have an open rectangular container into which I would project a video of a woman swimming. The interactive component was to have the swimmer follow the movement direction of the viewer. If the viewer moved to the right, a video of the woman swimming in that direction would start; if the viewer moved to the left, the swimmer would appear from the right side of the container and swim to the left. When there was no movement, the swimming pool would remain empty. For that, three videos would have to be loaded into Processing, and the code would have to switch from one to another with the change of direction. Instead of a sensor I was going to use the webcam and the movement-tracking code which Nick showed us. I started with the code. Chris Olsen helped me brainstorm the code and we came up with the sketch.
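The planned three-video switch reduces to a tiny piece of selection logic. The sketch below is hypothetical (the motion tracker itself and the clip loading are not shown): it assumes the webcam tracking code reports a direction as -1 (left), 0 (no movement), or +1 (right).

```java
// Sketch of the planned switching logic: pick which of the three clips
// to show based on the viewer's movement direction.
public class PoolSwitcher {
    static final int EMPTY = 0, SWIM_LEFT = 1, SWIM_RIGHT = 2;

    static int clipFor(int direction) {
        if (direction > 0) return SWIM_RIGHT; // viewer moves right
        if (direction < 0) return SWIM_LEFT;  // viewer moves left
        return EMPTY;                         // no movement: pool stays empty
    }
}
```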


Experiment 1

I bought a container and attempted to project on it from a small Pico projector.


Experiment 2

The downside of this idea was that in order for the effect to really work I needed a long container, so that a viewer could move alongside it with the swimmer. But there were no such containers at the store. I bought a few identical ones to put next to one another, but the look and feel was cheap and uninteresting. In the store I had found a variety of large glass boxes, which looked interesting and gave me the idea of placing the swimming pool inside a box. I went back to the store and bought one to experiment with, to see if the projection would look better inside the box.

Experiment 3

I was still experimenting with the set-up of the box, adding grass and water.

The idea for the interaction also changed: I decided to use a photo sensor and Arduino together with Processing. The idea evolved into having a sensor inside the box, so the projection would be triggered when the box was opened; when the box closed, a black rectangle would cover the image. As a sample for the code I used a sketch I found. I made an appointment with Jackson, our TA, and we worked on the code for almost three hours. However, we could not establish reliable communication between Arduino and Processing, and the troubleshooting took a long time. We established the values for the sensor, but println was reading a different value. It seems the issue was that Arduino was sending a number but Processing was receiving it as a string. This is the unfinished Processing code from that session:

import processing.serial.*; // added: required for the Serial class
import*;     // added: required for the Movie class

Movie theMov;
Movie theMov2;
boolean isPlaying;
boolean isLooping;
boolean direction = false;
int lValue = 0;

String[] inSensor = new String[1];
Serial myPort; // Create object from Serial class
int val;       // Data received from the serial port

void setup() {
  size(640, 480);

  // Initialize the Serial object and print serial ports
  //String portName = Serial.list()[7];
  myPort = new Serial(this, "/dev/tty.usbmodem411", 9600);
  // myPort.bufferUntil('\n');

  theMov = new Movie(this, "Act 1 transition 2-Medium.m4v");
  theMov.loop(); // plays the movie over and over
  theMov2 = new Movie(this, "Act 1 transition 2-Medium.m4v");
  theMov2.loop(); // plays the movie over and over
  isPlaying = true;
  isLooping = true;
}

void draw() {
  // if (myPort.available() != 0) { // If data is available,
  //   val =;        // read it and store it in val
  //   println(int(val));
  // }

  if (val >= 10) {
    isPlaying = true;
  }
  if (val < 10) {
    // theMov.stop();
    isPlaying = false;
    println("movie stopped");
    direction = !direction;
  }

  if (isPlaying == false) {
    // theMov2.stop();
    // rect(0, 0, width, height);
  }
  if (isPlaying == true) {
    if (direction == true) {
      image(theMov, 50, 50); // mouseX-theMov.width/2, mouseY-theMov.height/2
    }
    if (direction == false) {
      image(theMov2, 50, 50); // mouseX-theMov.width/2, mouseY-theMov.height/2
    }
  }
}

void serialEvent(Serial myPort) {
  String inString = myPort.readStringUntil('\n');
  if (inString != null) {
    inString = trim(inString);     // strip the trailing newline/carriage return
    val = int(inString);           // parse only after trimming: the data arrives as text
  }

  //inSensor = split(inString, ",");
  //if (inSensor.length >= 0)
  //  lValue = int(inSensor[0]);
}

void movieEvent(Movie m) {;
}

void keyPressed() {
  // if (key == 'p') {
  //   // toggle pausing
  //   if (isPlaying) {
  //     theMov.pause();
  //   } else {
  //   }
  //   isPlaying = !isPlaying;
  // } else if (key == 'l') {
  //   // toggle looping
  //   if (isLooping) {
  //     theMov.noLoop();
  //   } else {
  //     theMov.loop();
  //   }
  //   isLooping = !isLooping;
  // } else if (key == 's') {
  //   // stop playing
  //   theMov.stop();
  //   isPlaying = false;
  // } else if (key == 'j') {
  //   // jump to a random time
  //   theMov.jump(random(theMov.duration()));
  // }
}

Experiment 4

As I was working on the code, my idea for the project continued to evolve. I wanted a more elaborate set-up than just one box. Since my concept revolved around two movies, I wanted two spaces, one representing each movie. So I went back to the store and bought two identical boxes to try the idea. Two boxes immediately felt much better, and it was clear that it was the right choice.



Experiment 5

Here I am trying out the edited video of the swimmer in one of the boxes.


Experiment 6

After experimenting with the video and the grass, I felt that the set-up in the boxes had to be more elaborate, with miniature objects that could be arranged to signify a murder. I bought some samples of miniature furniture and tried different things out until I found the set-up that felt right.


Experiment 7

While working with the miniatures, I felt that something was missing visually from the set-up. The projections were too small and there was a lot of negative space on the sides. I also felt the narration needed another through-line. In both movies, much of the action in the swimming pool is observed, spied on from the main house, and encounters between men and women are reflected in or visible through the windows of the house. Yet the house was not represented in my set-up. I decided to use tablets fastened to the sides of the boxes to display close-ups of the voyeurs spying on the swimmers. I edited four videos and put two of them on the tablets on the sides of the boxes.

Experiment 8

In order to have one video feed playing in two swimming pools, I had to calculate the size of the frame and create a mask that would allow the two videos to play side by side in the same frame. I also wanted lighting for the grass and the miniatures on the sides of the swimming pools. My friend and editor Scott Edwards helped me create and layer the main video for the projector.

Experiment 9 

This is the process we went through, measuring the area and creating the mask in Photoshop.



Work on the code

The code had to be completely rewritten. Now one movie ran on a loop throughout the presentation. The boxes were wired with two light sensors. When both lids were closed, a black rectangular mask blocked the entire image. When one of the lids was open, one side of the black mask was removed and that video became visible. If both lids were open, both black masks were removed and both videos were visible. The communication between Arduino and Processing was simplified, with only four signals sent to Processing using binary code: 00, both boxes closed; 01, left box open; 10, right box open; 11, both boxes open.
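The four-signal protocol above can be decoded with two bit tests. A minimal sketch of the decoding side (the serial transport itself is omitted):

```java
// Decode the simplified two-lid protocol: the Arduino sends one of four
// codes (0, 1, 2, 3 - i.e. binary 00/01/10/11).
public class LidDecoder {
    // bit 0 -> left box open, bit 1 -> right box open
    static boolean leftOpen(int code)  { return (code & 0b01) != 0; }
    static boolean rightOpen(int code) { return (code & 0b10) != 0; }
}
```

Each frame, the sketch would draw the black mask over whichever half of the frame reports a closed lid.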






It was very interesting to work on this project. The evolution of the idea and its execution in such a short time frame made me very focused, and I enjoyed the challenge. I was also happier than with my previous project, because I decided to approach it from the start as an art installation rather than just a technical challenge. Thinking in a broader sense was more satisfying and the ideas were more interesting. I would like to continue working on this project in the future; I already have a few ideas about how to make it into a full-scale art installation. For that I will have to figure out the manufacturing of the boxes, and I would like to experiment with 3D printing the miniature set. I would also use screens instead of projections, and will try to shoot tilt/shift as Nick suggested.




Magic Fish Pond



My second project is called Magic Fish Pond. A fishpond was the first image that came to my mind when I heard that what we made should relate to water. I know Tom has told me that usually your third idea is the best you can think of, and I also know the theme of fish is not new or interesting enough, at least as a concept, since most of us will immediately think of fish. But there were two reasons I still insisted on developing this idea. One was my childhood memories. When I was little I lived with my grandpa in a city called Suzhou, which is widely known for its garden architecture. Nearly all the gardens there are built with a fishpond, so watching fish with my grandpa was the happiest time of my childhood. My deepest impression is that fish are beautiful in their colors, shapes, and movement patterns alike. Even today I still love to observe fish, as it always gives me a sense of calm. So I started to think about creating something to represent this feeling.


The other reason is that in East Asia fish are treated as a special symbol, as people believe fish will bring them luck and fortune. This is also why people loved to build a fishpond in their yard in the old days. Watching fish represents people's pursuit of happiness, their love of beautiful things, and their wish for a better future. It is interesting, though, that most of these ideas have no visual manifestation. From this point of view I divided my work into three parts, each showing one function or pleasure of watching fish.





After deciding to use fish as the subject, my first task was to define the form of the fish. It would be hard to convey the soft and sprightly feeling of a fish's body if I just loaded a picture of a fish and moved it around in Processing. I thought about it for a while, and the best solution came from a toy I own.


This crocodile is made of wood, and it can writhe like a real crocodile when you bend it. The magic behind it is that it is made of a group of semi-independent units, half-connected to each other, so it feels very life-like when you bend or wiggle it. This reminded me of an example in the book Learning Processing: in one chapter, Daniel Shiffman teaches us to draw a snake-like shape out of a chain of circles. Likewise, this example could be used to draw a fish.
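The chain-of-circles idea can be sketched in a few lines. This is a hedged sketch of the technique rather than Shiffman's exact code: each segment eases a fixed fraction of the way toward the one in front of it, so when the head moves the body follows with a whip-like lag, much like the wooden crocodile's half-connected units.

```java
// Sketch of a segment chain: circle i follows circle i-1, producing an
// organic, snake-like body from nothing but easing.
public class SegmentChain {
    double[] x, y;

    SegmentChain(int n) { x = new double[n]; y = new double[n]; }

    // Pin the head to (headX, headY); every other segment closes half
    // the gap to its leader each frame.
    void update(double headX, double headY) {
        x[0] = headX;
        y[0] = headY;
        for (int i = 1; i < x.length; i++) {
            x[i] += (x[i - 1] - x[i]) * 0.5;
            y[i] += (y[i - 1] - y[i]) * 0.5;
        }
    }
}
```

In the sketch, each (x[i], y[i]) becomes the center of one ellipse, with the diameters tapering toward the tail to suggest a fish's silhouette.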


After finishing the shape, the second step was to determine the colour of the fish. At first I tried to mimic a real fish's colours, but the result was not as good as I had hoped. Then one day I found a picture of phytoplankton.


Phytoplankton are marine microbes. They are bioluminescent and emanate a blue glow. That translucent effect strongly attracted me.


Considering that I would use water as my projection interface, I decided to draw my fish semi-transparently to create a fluorescent effect. At first I used only blue as my main colour, but as my idea kept developing I ended up using three gradients, six colours in total, to render my fish.



As for the movement, I again need to thank Daniel Shiffman: his book The Nature of Code gave me a lot of help. I learned most of the particle-system and genetic algorithms from that book. My personal experience is that atan2(), noise(), dist() and the trigonometric functions are some of the most important functions to learn and use; applied properly, they will create really organic movement patterns for you.
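To make the point about atan2() concrete, here is a minimal sketch of the kind of steering it enables (my own illustration, not code from the book): aiming a fish at a target reduces to one heading angle, after which one step of motion is just a cosine and a sine.

```java
// Sketch of atan2-based steering: compute the heading to a target and
// take one fixed-speed step along it.
public class Steering {
    // Heading (radians) from the fish at (x, y) to its target (tx, ty).
    static double headingTo(double x, double y, double tx, double ty) {
        return Math.atan2(ty - y, tx - x);
    }

    // One movement step toward the target; returns the new position.
    static double[] stepToward(double x, double y,
                               double tx, double ty, double speed) {
        double a = headingTo(x, y, tx, ty);
        return new double[] { x + speed * Math.cos(a), y + speed * Math.sin(a) };
    }
}
```

Easing the heading instead of snapping to it, and perturbing it with noise(), is what turns this straight-line seek into the organic swimming described above.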


I decided to use the Leap Motion as my main sensor as soon as I started the project. The reason is simple: hand gestures are the most natural way to interact with fish. We use our hands to feed them, to play with them, even to catch them. In my project I wanted to use hands to achieve most of my goals without needing to touch anything: switching among three different modes, changing the size of the fish, and commanding the shoal's movement. Luckily, Processing has a library for controlling the Leap Motion called LeapMotionP5, developed by the famous generative design studio Onformative; thanks to Onformative for their work. The library covers all the API functions of the Leap Motion and is very easy to use. After some consideration, I chose the swipe gesture to switch between the three modes and the circle gesture to control the size of the fish.

Apart from gesture control, I remember that when I was little I liked to paddle the water back and forth while I watched the fish. Thinking about it now, that was a very instinctive behaviour: people always want to interact with something, and water was the only thing I could touch at the time. So I got the idea of summoning the fish by stirring the water. The problem then was how to detect the water flow. At first I thought of using a water-flow sensor, but such sensors only work with very rapid flow, so they do not work well in this particular scenario. Searching online, I found that a flex sensor was my ideal solution, as it is sensitive enough and easy to use. After some experiments I created my own water-flow sensor.



While researching how to use the flex sensor, I found another sensor called a gyroscope. This sensor is familiar to us because we all have one in our cellphones; it measures orientation based on the principles of angular momentum. I ran across this sensor online and immediately thought of using it in my project to control the swimming direction of the shoal. But after connecting the sensor to the Arduino, I found the numbers in the raw data were unbelievably huge to use, so I had to read the datasheet, and on page 13 I found that I could convert the raw accelerometer data to g (9.8 m/s²) by dividing by a factor of 16384. After this adjustment the data finally came back to normal.
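The datasheet fix is a one-line scaling, sketched below. (The 16384 counts-per-g factor is the one quoted above from the sensor's datasheet for its default range; the helper names are my own.)

```java
// Convert the sensor's raw accelerometer counts into physical units:
// 16384 counts per g, and 1 g = 9.8 m/s^2.
public class ImuScale {
    static final double COUNTS_PER_G = 16384.0;

    static double rawToG(int raw)   { return raw / COUNTS_PER_G; }
    static double rawToMs2(int raw) { return rawToG(raw) * 9.8; }
}
```

So a raw reading of 16384 means exactly 1 g; without the division, the sketch was seeing values tens of thousands of times too large.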


Projection and interface:

Before this project, I had seen several projects using water as an interface. What impressed me most was the project AquaTop, which did really well in trying different materials as interfaces and exploring ubiquitous computing. Its use of water in particular inspired me a lot, so from the very start of this project I had decided to use water as my project's carrier. I believe water as a projection interface has two main benefits. One is that water can add visual depth and texture to your images when used in the right way: for example, in my testing I found that all images projected on the water had a halo around them, which makes the whole picture look more aesthetically pleasing. The other benefit is that with water as the display there is no screen-size limitation, so a larger container can be used to create a better final effect.


The last puzzle:

After finishing all the parts above, I still felt my project was missing its most important element: emotion. I believe all great artworks have one thing in common: they let the audience become emotionally involved. People love art, music, and movies because they can empathize with them and reflect through them. This made me think again about the behaviour of watching fish. One day I checked Google's DevArt website and found a project called Wishing Wall that was really interesting; the background music in particular touched me profoundly. It is a project about visualizing wishes. Watching it, I suddenly realized that people exhibit a similar behaviour when they watch fish: sometimes they like to toss a coin into the pond and make a wish at the same time. So I started to think about another way to record this beautiful moment, because wishing is itself very meaningful and full of emotion. Why not seize this opportunity to create something interesting out of the idea, something other people could also see and interact with?

Finally I focused on words. Integrating the concepts of fish and wish, I developed the idea of using fish to spell out what the audience wants to say. In terms of the code, I chose a Processing library called Geomerative. This library can split text into a number of segments, and each segment is defined as the destination of one fish. In this way the audience can type in whatever they want to wish, and a corresponding number of fish will be summoned to spell it out.



During the final presentation, I could feel that most of the audience liked the wishing part best, which more or less confirmed my original point of view. It also encouraged me to create more engaging and immersive experiences in my third project.




Source Code



Circuit Diagram




Shiffman, D. (2008). Learning Processing. Amsterdam: Morgan Kaufmann/Elsevier.

Shiffman, D., Fry, S. and Marsh, Z. (n.d.). The Nature of Code.

Bohnacker, H., Gross, B., Laub, J. and Lazzeroni, C. (2012). Generative design. New York: Princeton Architectural Press.

Colossal (2014). A Maldives Beach Awash in Bioluminescent Phytoplankton Looks Like an Ocean of Stars. [online] [Accessed 15 Nov. 2014].

Yamano, S. (2014). AquaTop – An Interactive Water Surface. [online] [Accessed 15 Nov. 2014].

Onformative (2014). this is onformative, a studio for generative design. [online] [Accessed 15 Nov. 2014].

Geomerative (2014). Geomerative. [online] [Accessed 15 Nov. 2014].

DevArt (2014). DevArt. Art made with code. [online] [Accessed 15 Nov. 2014].

