Category: General Posts

The Swimming Pool

The Swimming Pool is a first-stage prototype for an art installation that will explore the use of swimming pools in European and North American films as a signifier of forthcoming disaster, often resulting in the death or murder of one of the characters, committed or experienced by a protagonist. In this case the swimming pool plays the role of the proverbial rifle which, once introduced into a scenario, has to end up shooting someone. The two films used in the installation are the French/German film “The Swimming Pool” (1969) and the British/French film “Swimming Pool” (2003).

The idea of a miniature swimming pool was an immediate reaction to the water park assignment. I had been thinking about trying a miniature installation for quite some time but had never had a chance to attempt it. I had seen both films a long time ago, but they had somehow stayed in my memory, and this was a perfect opportunity to try out the concept.

The work process: 

My first idea was to have an open rectangular container into which I would project a video of a woman swimming. The interactive component was to have the swimmer follow the direction of the viewer’s movement: if the viewer moved to the right, a video of the woman swimming in that direction would start; if the viewer moved to the left, the swimmer would appear from the right side of the container and swim to the left; and if there was no movement, the swimming pool would remain empty. For that, three videos would have to be loaded into Processing, and the code would have to switch from one to another with each change of direction. Instead of a sensor I was going to use a webcam and the motion-tracking code that Nick showed us. I started with the code. Chris Olsen helped me brainstorm the code, and we came up with the sketch.


Experiment 1

I bought a container and attempted to project onto it from a small Pico projector.

 

Experiment 2

The downside of this idea was that, for the effect to really work, I needed a long container so that a viewer could move alongside it with the swimmer, but there were no such containers at the store. I bought a few of the same container to put next to one another, but the look and feel was cheap and uninteresting. At the store I had found a variety of large glass boxes, which looked interesting and gave me the idea of placing the swimming pool inside a box. I went back, bought one, and experimented with it to see if the projection would look better inside the box.

Experiment 3

I was still experimenting with the setup of the box, adding grass and water.

The idea for the interaction also changed, and I decided to use a photosensor and Arduino to work with Processing. The idea evolved into having a sensor inside the box so the projection could be triggered when the box was opened; when the box closed, a black rectangle would cover the image. As a starting point for the code I used a sketch I found at http://www.silentlycrashing.net/p5/libs/video/. I made an appointment with Jackson, our TA, and we worked on the code for almost three hours. However, we couldn’t establish reliable communication between Arduino and Processing, and the troubleshooting took a long time. We established the values for the sensor, but println() was showing a different value. It seems the issue was that Arduino was sending a number but Processing was receiving it as a string. This is the unfinished Processing code from that session:

import processing.serial.*;  // needed for the Serial class
import processing.video.*;

Movie theMov;
Movie theMov2;
boolean isPlaying;
boolean isLooping;
boolean direction = false;
int lValue = 0;

String[] inSensor = new String[1];
Serial myPort; // Create object from Serial class
int val;       // Data received from the serial port

void setup() {
  size(640, 480);

  // Initialize the Serial object and print the available serial ports
  println(Serial.list());
  //String portName = Serial.list()[7];
  myPort = new Serial(this, "/dev/tty.usbmodem411", 9600);
  //myPort.bufferUntil('\n');
  //println(Serial.list()[7]);

  theMov = new Movie(this, "Act 1 transition 2-Medium.m4v");
  theMov.loop();  // plays the movie over and over
  theMov2 = new Movie(this, "Act 1 transition 2-Medium.m4v");
  theMov2.loop(); // plays the movie over and over
  isPlaying = true;
  isLooping = true;
}

void draw() {
  background(0);

  //  if (myPort.available() != 0) { // If data is available,
  //    val = myPort.read();         // read it and store it in val
  //    println(int(val));

  if (val >= 10) {
    //theMov.play();
    isPlaying = true;
  }
  else if (val < 10) {
    //theMov.stop();
    isPlaying = false;
    println("movie stopped");
    direction = !direction;
  }

  //  }

  if (isPlaying == false) {
    theMov.stop();
    //theMov2.stop();
    fill(0);
    //rect(0, 0, width, height);
  }
  else if (isPlaying == true) {
    if (direction == true) {
      theMov.play();
      image(theMov, 50, 50); //mouseX-theMov.width/2, mouseY-theMov.height/2);
    }
    else if (direction == false) {
      theMov2.play();
      image(theMov2, 50, 50); //mouseX-theMov.width/2, mouseY-theMov.height/2);
    }
  }
}

void serialEvent(Serial myPort) {
  String inString = myPort.readStringUntil('\n');
  if (inString != null) {
    inString = trim(inString);
  }

  //inSensor = split(inString, ",");
  //if (inSensor.length >= 0) {
  //  lValue = int(inSensor[0]);
  //}

  //println(inString);
  val = int(inString);
  println(val);
}

void movieEvent(Movie m) {
  m.read();
}

void keyPressed() {
  //  if (key == 'p') {
  //    // toggle pausing
  //    if (isPlaying) {
  //      theMov.pause();
  //    } else {
  //      theMov.play();
  //    }
  //    isPlaying = !isPlaying;
  //
  //  } else if (key == 'l') {
  //    // toggle looping
  //    if (isLooping) {
  //      theMov.noLoop();
  //    } else {
  //      theMov.loop();
  //    }
  //    isLooping = !isLooping;
  //
  //  } else if (key == 's') {
  //    // stop playing
  //    theMov.stop();
  //    isPlaying = false;
  //
  //  } else if (key == 'j') {
  //    // jump to a random time
  //    theMov.jump(random(theMov.duration()));
  //  }
}

 

Experiment 4

As I was working on the code, my idea for the project continued to evolve. I wanted a more elaborate setup than just one box. Since my concept was built around two movies, I wanted two spaces, one representing each movie. So I went back to the store and bought two identical boxes to try the idea. Two boxes immediately felt much better, and it was clear this was the right choice.

 

 

Experiment 5

I’m trying out the edited video of the swimmer in one of the boxes.

 

Experiment 6

After experimenting with the video and the grass, I felt that the setup inside the boxes had to be more elaborate, with miniature objects that could be arranged to signify a murder. I bought some samples of miniature furniture and tried different things out until I found a setup that felt right.

 

Experiment 7

While working with the miniatures I felt that something was missing visually from the setup. The projections were too small and there was a lot of negative space on the sides. I also felt that the narrative needed another through line. In both movies, much of the action in the swimming pool is observed, spied on, from the main house, and encounters between men and women are reflected in or visible through the windows of the house. Yet the house was not represented in my setup. I decided to fasten tablets to the sides of the boxes to display close-ups of the voyeurs spying on the swimmers. I edited four videos and put two of them on the tablets on the sides of the boxes.

Experiment 8

In order to have one video feed playing in two swimming pools, I had to calculate the size of the frame and create a mask that would allow the two videos to play side by side in the same frame. I also wanted lighting for the grass and the miniatures on the sides of the swimming pools. My friend and editor Scott Edwards helped me create and layer the main video for the projector.

Experiment 9 

This is the process we went through, measuring the area and creating the mask in Photoshop.

 

 

Work on the code

The code had to be completely rewritten. Now one movie ran on a loop throughout the presentation. The boxes were wired with two light sensors. When both lids were closed, a black rectangle blocked the entire image. When one of the lids was opened, one side of the black mask was removed and that video became visible. If both lids were open, both black masks were removed and both videos were visible. The communication between Arduino and Processing was simplified: only four signals were sent to Processing, encoded in binary. 00 – both boxes closed; 01 – left box open; 10 – right box open; 11 – both boxes open.
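The final sketch lives in the repository linked below, but as a rough illustration of the scheme, here is a minimal Processing sketch of how the receiving side could look, assuming the Arduino sends each two-digit code as one line of text. The serial port index, the movie file name, and the half-and-half mask geometry are placeholders, not the project’s actual values.

import processing.serial.*;
import processing.video.*;

Serial myPort;
Movie pool;                 // one looping movie that spans both pools
boolean leftOpen = false;
boolean rightOpen = false;

void setup() {
  size(640, 480);
  myPort = new Serial(this, Serial.list()[0], 9600);  // pick the right port index
  myPort.bufferUntil('\n');
  pool = new Movie(this, "swimmers.m4v");             // placeholder file name
  pool.loop();
}

void draw() {
  background(0);
  image(pool, 0, 0, width, height);
  // Black masks cover whichever half is still closed.
  noStroke();
  fill(0);
  if (!leftOpen)  rect(0, 0, width/2, height);
  if (!rightOpen) rect(width/2, 0, width/2, height);
}

void serialEvent(Serial p) {
  String code = p.readStringUntil('\n');
  if (code == null) return;
  code = trim(code);
  if (code.length() < 2) return;
  rightOpen = (code.charAt(0) == '1');  // "10" or "11" means the right box is open
  leftOpen  = (code.charAt(1) == '1');  // "01" or "11" means the left box is open
}

void movieEvent(Movie m) {
  m.read();
}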

swimmingpool_bb

swimmingpool_schem                   photo 2

SOURCE CODE

GitHub

Conclusion

It was very interesting to work on this project. The evolution of the idea and the execution in such a short time frame made me very focused, and I enjoyed the challenge. I was also happier with it than with my previous project because I approached it from the start as an art installation rather than just a technical challenge. Thinking in a broader sense was more satisfying and the ideas were more interesting. I would like to continue working on this project in the future; I already have a few ideas about how to turn it into a full-scale art installation. For that I will have to figure out the manufacturing of the boxes, and I would like to experiment with 3D printing the miniature set. I would also use screens instead of projections and try shooting tilt-shift footage, as Nick suggested.

 

 

 

Magic Fish Pond

1


Concept


The second project I did is called Magic Fish Pond. A fishpond was the first image that came to my mind when I heard that what we needed to do should be something related to water. I know Tom has told me that usually your third idea is the best one you can come up with, and I also know that the theme of fish is not new or interesting enough, at least in terms of concept, since most of us will immediately think of fish. But in the end there were two reasons why I still insisted on developing this idea. The first is my childhood memories. When I was little I lived with my grandpa in a city called Suzhou, which is widely known for its garden architecture. Almost all of the gardens are built with a fishpond, so watching fish with my grandpa was the happiest time of my childhood. My deepest impression is that fish are beautiful in their colours, their shapes, and their movement patterns. Even today I still love to observe fish, as it always gives me a sense of calm. So I started to think about creating something to represent this feeling.

fish garden1

The other reason is that in East Asia the fish is treated as a special symbol, as people believe fish will bring them luck and fortune. This is also why people loved to build fishponds in their yards in the old days. Watching fish represents people’s pursuit of happiness, their love of beautiful things, and their wish for a better future. It is interesting, though, that most of these wishes have no visual manifestation. From this point of view I divided my work into three parts, and in each part I wanted to show one function or pleasure of watching fish.

441_501931 Bret-Fish6

 


Process


Form:

After deciding to use fish as the subject, my first task was to define the form of the fish. It would be hard to convey the soft, sprightly feeling of a fish’s body if I just loaded a picture of a fish and moved it around in Processing. I thought about it for a while, and the best solution came from a toy I own.

Screen Shot 2014-11-13 at 0.02.17 Screen Shot 2014-11-13 at 0.02.32

This crocodile is made of wood, and it can writhe like a real crocodile when you bend it. The trick is that it is built from a group of semi-independent units that are half-connected to each other, so it feels very lifelike when you bend or wiggle it. This reminded me of an example in the book Learning Processing, in which Daniel Shiffman teaches us to draw a snake-like shape out of a trail of circles. The same approach can be used to draw a fish.
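As a rough illustration of that idea (my own minimal sketch, not the exact example from the book or the project code), a chain of circles can trail the mouse, with each segment easing toward the one ahead of it; that is what produces the soft, wiggling body.

int numSegments = 20;
float[] xs = new float[numSegments];
float[] ys = new float[numSegments];

void setup() {
  size(640, 480);
  noStroke();
}

void draw() {
  background(10, 20, 40);
  // Each segment eases toward the position of the segment ahead of it.
  for (int i = numSegments - 1; i > 0; i--) {
    xs[i] = lerp(xs[i], xs[i - 1], 0.4);
    ys[i] = lerp(ys[i], ys[i - 1], 0.4);
  }
  // The head follows the mouse.
  xs[0] = lerp(xs[0], mouseX, 0.4);
  ys[0] = lerp(ys[0], mouseY, 0.4);

  // Draw the body as circles that taper toward the tail.
  for (int i = 0; i < numSegments; i++) {
    float d = map(i, 0, numSegments - 1, 24, 4);
    fill(80, 180, 255, 150);   // semi-transparent, for a glow-like look
    ellipse(xs[i], ys[i], d, d);
  }
}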

Screen Shot 2014-11-12 at 23.44.00

After finishing the shape, the second step was to determine the colour of the fish. At first I tried to mimic a real fish’s colours, but the result was not as good as I had hoped, until one day I found a picture of phytoplankton.

bio-2 bio-4 bio-6

Phytoplankton are marine microbes. They are bioluminescent and emit a blue glow, and that translucent effect strongly attracted me.

Vaadhoo-Island-Maldives-2 1177458243604_6byjcA4X_l

Since I would be using water as my projection interface, I decided to draw my fish semi-transparently to create a fluorescent effect. At first I used only blue as my main colour, but as my idea kept developing I ended up using three gradients, six colours in total, to render my fish.

upload-Clione-limacina-1

 

As for the movement, I again need to thank Daniel Shiffman. His book The Nature of Code gave me a lot of help; I learned most of the particle system and genetic algorithms from it. My personal experience is that atan2(), noise(), dist() and the trigonometric functions are some of the most important functions to learn, because if you use them properly they will create a really organic movement pattern for you.
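As a small illustration of what those functions can do (my own sketch, not code from the project), here a single fish wanders using noise() to drive its velocity and atan2() to point its body along its heading.

float x, y;
float tx = 0, ty = 1000;   // independent noise offsets for the x and y velocity

void setup() {
  size(640, 480);
  x = width / 2;
  y = height / 2;
  noStroke();
}

void draw() {
  background(10, 20, 40);

  // noise() returns a smoothly varying value in 0..1; remap it to a velocity.
  float vx = map(noise(tx), 0, 1, -2, 2);
  float vy = map(noise(ty), 0, 1, -2, 2);
  x = constrain(x + vx, 0, width);
  y = constrain(y + vy, 0, height);
  tx += 0.01;
  ty += 0.01;

  // atan2() gives the angle of the velocity vector, i.e. the fish's heading.
  float heading = atan2(vy, vx);

  pushMatrix();
  translate(x, y);
  rotate(heading);
  fill(80, 180, 255, 160);
  ellipse(0, 0, 30, 12);              // a simple elongated body
  triangle(-15, 0, -25, -6, -25, 6);  // tail
  popMatrix();
}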

Interaction:

I decided right away to use the Leap Motion as my main sensor when I started the project. The reason is simple: hand gestures are the most natural way to interact with fish. We use our hands to feed them, to play with them, even to catch them. In my project I wanted to use hands to achieve most of my goals without needing to touch anything, including switching among three different modes, changing the size of the fish, and commanding the shoal’s movement. Luckily, Processing has a library for controlling the Leap Motion called LeapMotionP5, developed by the generative design studio Onformative. Thanks to Onformative’s work, the library covers all of the Leap Motion API functions and is also very easy to use. After some consideration, I chose to use the swipe gesture to switch between the three modes and the circle gesture to control the size of the fish.

Apart from gesture control, I remember that when I was little I liked to paddle the water back and forth while I watched the fish. Thinking about it now, that was a very instinctive behaviour: people always want to interact with something, and water was the only thing I could touch at the time. So I had the idea of summoning the fish by stirring the water. The problem then became how to detect the water flow. At first I thought of using a water flow sensor, but those only work with very rapid flow, so they do not suit this scenario. After some searching online I found that a flex sensor was my ideal solution, as it is sensitive enough and easy to use. After some experiments I created my own water flow sensor.
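The Processing side of that idea could look something like the minimal sketch below, assuming the Arduino simply prints the flex sensor’s analogRead() value (0–1023) once per loop. The serial port index, baud rate, and stir threshold are placeholders to be tuned against the real sensor.

import processing.serial.*;

Serial myPort;
int restingValue = -1;      // captured from the first reading
int stirThreshold = 40;     // how far from rest counts as "stirring"
boolean summoning = false;

void setup() {
  size(640, 480);
  myPort = new Serial(this, Serial.list()[0], 9600);
  myPort.bufferUntil('\n');
}

void draw() {
  background(summoning ? color(20, 60, 120) : color(0));
  fill(255);
  text(summoning ? "fish summoned" : "water is still", 20, 30);
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  int value = int(trim(line));
  if (restingValue < 0) restingValue = value;           // calibrate once at startup
  summoning = abs(value - restingValue) > stirThreshold; // a big deflection means the water was stirred
}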

 

2014-10-22 13.43.24 2014-11-13 02.39.30 2014-11-13 02.39.04

While researching how to use the flex sensor, I found another sensor called a gyroscope. This sensor is familiar to all of us because we each have one in our phone; it measures orientation based on the principles of angular momentum. I ran across this sensor online and immediately thought of using it in my project to control the swimming direction of the shoal. But after connecting the sensor to the Arduino, I found the numbers in the raw data were unbelievably huge. I had to read the datasheet, and on page 13 I found that I could convert the raw accelerometer data to units of g (9.8 m/s²) by dividing by a factor of 16384. After this adjustment the data finally looked normal.
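As a small sketch of that conversion on the Processing side, assuming the Arduino prints the three raw accelerometer axes as one comma-separated line (e.g. "16200,-340,512") and that the sensor is on the range where 16384 counts equal 1 g:

import processing.serial.*;

Serial myPort;
float ax, ay, az;   // acceleration in g

void setup() {
  size(400, 200);
  myPort = new Serial(this, Serial.list()[0], 9600);
  myPort.bufferUntil('\n');
}

void draw() {
  background(0);
  fill(255);
  text(nf(ax, 1, 2) + " g, " + nf(ay, 1, 2) + " g, " + nf(az, 1, 2) + " g", 20, 100);
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  String[] raw = split(trim(line), ',');
  if (raw.length < 3) return;
  ax = int(raw[0]) / 16384.0;   // raw counts -> g
  ay = int(raw[1]) / 16384.0;
  az = int(raw[2]) / 16384.0;
}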

2014-10-29 16.32.38 2014-10-29 20.42.18 2014-10-31 13.47.07

Projection and interface:

Before this project I had seen several projects using water as an interface. The one that impressed me most is AquaTop, which did really well in exploring different materials as interfaces and in ubiquitous computing. Their use of water in particular inspired me a lot, so from the very beginning of this project I had decided to use water as my project’s carrier. I believe water as a projection interface has two main benefits. One is that water can add visual depth and texture to your images when used in the right way; for example, in my tests I found that all images projected onto the water had a halo around them, which makes the whole picture more aesthetically pleasing. The other benefit is that since we use water as the display, we have no screen-size limitation, so we can use a larger container to create a better final effect.

2014-11-13 02.36.46 2014-11-13 02.37.01 2014-11-11 19.52.30 2014-11-11 19.51.42

The last puzzle:

After finishing all the parts above, I still felt that my project was missing its most important element – emotion. I believe all great artworks have one thing in common: they let the audience get emotionally involved. People love art, music, and movies because they can empathize with them and reflect on them. This made me think about the behaviour of watching fish all over again. One day I was browsing Google’s DevArt website and found a project called Wishing Wall that was really interesting; the background music in particular touched me profoundly. It is a project about visualizing wishes. While watching it I suddenly realized that people have a similar behaviour when they watch fish: sometimes they like to toss a coin into the pond and make a wish at the same time. So I started to think about another way to record this beautiful moment, because the act of wishing is itself meaningful and full of emotion. Why not take this opportunity to build something interesting out of the idea that other people can also see and interact with?

In the end I focused on words. By combining the concepts of fish and wish, I developed the idea of using fish to spell out what the audience wants to say. In terms of code, I chose a Processing library called Geomerative. This library can split text into many small segments, and each segment is defined as the destination of one fish. In this way the audience can type whatever they want to wish for, and a certain number of fish will be summoned to spell out the wish.
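A minimal sketch of that text-to-targets step might look like the following, assuming the Geomerative API roughly as documented on its site. The font file and the word are placeholders, and in the real project each sampled point would become the destination of a moving fish rather than a static dot.

import geomerative.*;

RPoint[] targets;

void setup() {
  size(800, 300);
  RG.init(this);
  RFont font = new RFont("FreeSans.ttf", 150, CENTER);  // any .ttf in the data folder
  RG.setPolygonizer(RG.UNIFORMLENGTH);
  RG.setPolygonizerLength(8);            // spacing between sampled points
  RGroup word = font.toGroup("WISH");    // the audience's typed wish would go here
  targets = word.getPoints();            // one destination per fish
  noStroke();
}

void draw() {
  background(10, 20, 40);
  translate(width/2, height/2 + 50);
  fill(80, 180, 255, 160);
  for (RPoint p : targets) {
    ellipse(p.x, p.y, 6, 6);             // stand-in for a fish parked at its target
  }
}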

 


Conclusion


During the final presentation, I could feel that most of the audience liked the wishing part the most, which more or less proved my original point of view. It also encouraged me to create more engaging and immersive experiences in my third project.

IMG_2012

IMG_1995

 


Source Code


Github

 


Circuit Diagram


Screen Shot 2014-11-14 at 23.36.12

 


Reference


Shiffman, D. (2008). Learning Processing. Amsterdam: Morgan Kaufmann/Elsevier.

Shiffman, D., Fry, S. and Marsh, Z. (n.d.). The nature of code.

Bohnacker, H., Gross, B., Laub, J. and Lazzeroni, C. (2012). Generative design. New York: Princeton Architectural Press.

Colossal, (2014). A Maldives Beach Awash in Bioluminescent Phytoplankton Looks Like an Ocean of Stars. [online] Available at: http://www.thisiscolossal.com/2014/01/bioluminescent-beach-maldives/ [Accessed 15 Nov. 2014].

Yamano, S. (2014). AquaTop – An Interactive Water Surface. [online] Sngymn.github.io. Available at: http://sngymn.github.io/aquatopdisplay/ [Accessed 15 Nov. 2014].

Onformative.com, (2014). this is onformative a studio for generative design.. [online] Available at: http://www.onformative.com/ [Accessed 15 Nov. 2014].

Ricardmarxer.com, (2014). Geomerative. [online] Available at: http://www.ricardmarxer.com/geomerative/ [Accessed 15 Nov. 2014].

Devart.withgoogle.com, (2014). DevArt. Art made with code.. [online] Available at: https://devart.withgoogle.com/#/ [Accessed 15 Nov. 2014].

 

Swimming in Darkness – Interactive Video Project

swimming cover

Swimming in Darkness is an interactive video art installation for the class project on the theme of Water Park. It is intended to be used as a display piece, possibly in a high traffic area such as a foyer.

The way it works is that the user approaches the installation and is instructed to wave their hands in front of the sensor, which then controls the brightness of four different videos.

Execution: Video

There were two parts to how the project was executed. The first was capturing the videos, and the second was implementing them in an interactive setup.

To capture the videos, a vase of water was set up in a dark room and various videos were projected onto it. Sometimes just basic colours, instead of videos, were sent from the projector. Then the vase of water would be shaken, or other liquids such as cream were added to it. A DSLR camera was used, so the focus could also be adjusted for added effect.

Vase with water and cream

One trick used in capturing the videos was to employ VPT to project video only onto the vase and nothing around it.

To create the interactive element of the project, Max was used for everything from capturing data from the Leap Motion sensor, to playing the videos and projecting the final output.

Patch breakdown 

Project code on Github

max msp patch

Cover your eyes – the entire patch

To begin with the Leap Motion, the aka.leapmotion object was used. The way it works is that it needs to be fed a metro to make it start running. From that point, it sends out a whole range of data, such as how many hands are detected and the movement of your individual fingers. This large data stream is fed into the route object, which is a way of separating the different types of data. From here, we routed the location of the palms, which basically means we are simply tracking the location of the whole hand.

leap motion part of patchjj

The data we receive is a list of x, y, and z coordinates. Three numbers being spat out simultaneously is not really workable, so we use the unpack object to further separate them. The range of most of these values was roughly -250 to 250, so the scale object was employed to make the ranges a little more useful, sweeping from 0 to 2 (and all the small decimal numbers in between).

To play the videos, the jit.qt.movie object was used. With a read message, we tell it which video to play, and we make sure to save the patch in the same folder as the videos so it has no problem finding them when you set up the installation. The videos are fed into the jit.brcosa object, which allows us to control the brightness using the data from the Leap Motion.

two video players

In order to mix the videos together, two movie objects are fed into jit.op, which applies ‘operations’ to two video streams. These operations can do things such as adding two videos together, or more complex processes such as taking the absolute difference; think of them as similar to layer styles in Photoshop. We run this operation on two more videos, and then combine everything with one last jit.op object.

windows

Finally, for display purposes we have a small video window inside the patch for previewing; this is the jit.pwindow object. There is also a jit.window object, which enables us to project to a second screen. The ‘fullscreen’ message is very important, otherwise we will be stuck with a basic window.

Deployment and Improvements

Displaying this project was satisfactory. The Leap Motion worked OK, but in the future I might reserve it for personal performances because it’s an odd object to have sitting on the table. Ideally this would be projection mapped onto a spherical shape, but I’m not very skilled in assembling sculptures of this sort yet.

The four videos created sort of a random effect, and although the hands controlled brightness, it still needed a greater level of interaction with the user. Another problem that arose close to finishing was that the videos were playing sound. I tried a few different techniques in Max to mute the videos, but they didn’t work. As a result I couldn’t further develop a sound portion of this project as hoped.

Experiments

To get to the final product, I conducted a series of experiments, as outlined below.

Experiment #1 – Dropping Cream into Water

For this experiment, I prepared a vase full of tap water and placed it in front of a piece of foam that served as a screen. I adjusted the camera’s height on the tripod to put it at the same level as the water. I then carefully dropped a bit of whipping cream (35%) into the vase. At first I was underwhelmed and thought I hadn’t done it properly, but slowly the cream began to form a neat shape as it made its way to the bottom. For the next attempt, I poured in a much larger mass of cream, but it seemed to be too much and quickly fogged up the whole vase.


Although it would be very hard to achieve results similar to what professional photographers are doing in this regard, it was a worthwhile experiment. With more time and ingredients, I’m sure some amazing work could be done with this.

Experiment #2 – Dropping cream onto concave surface

Following up on dropping cream into the vase, I placed a concave glass bowl over the lens of the camera. To begin, I had to point the camera up at the ceiling and find a part of it with no lights or pipes that would interfere with the purity of the image.

On my first try, I realized the bowl was way too close to the camera, so it was impossible to get any focus. Even after moving it up a bit, it was still not easy to focus, but I was able to capture some video of liquids moving around. The cream was too thick, so I started adding more water to get some motion going. I realized that 35% whipping cream is nice for some uses but can be too thick for others.

Experiment #3 – Computer vision tracking of fish

I was interested in pursuing a computer vision portion of this project, so I ran some tests with aquarium footage found on Vimeo. I loaded the video file into a Jitter patch that is part of the cv.jit collection by Jean-Marc Pelletier (based on OpenCV). While it did seem able to track the main fish in the video, I had no idea how to make use of what was happening on screen. If I can figure this out in the future, the program could be used for all sorts of colour and face tracking.

Experiment #4 – Leap Motion in Max 

To execute this experiment, I had to research which Leap Motion objects were available for Max and Processing. I eventually found the aka.leapmotion object, which was functioning, but I had to do some searching about how to get the data out of it.  Even when I was able to get the data out, it was spitting out too much of it. I decided to take a short nap, and within minutes of closing my eyes, the answer came to me.

I proceeded to hook the various outputs into knobs so I could start to make sense of the movements. From there I was able to hook the data streams into the videos later on.

Experiment #5 – Shooting videos in the dark

This experiment involved setting up a projector and pointing it at the vase. I prepared a few different videos and basic lighting effects in order to do this. As I shot the video onto the water, I would go into Jitter and further manipulate the video to help create the effects I was looking for.  I was satisfied with the different videos that were captured, so I was happy to go ahead and combine the results of this experiment with the previous one involving Leap Motion, and this combination completed the project.

Conclusion

This was a fun project which helped me string together a few areas that I’m interested in. I found that the Leap Motion is very useful for certain things, but has its limitations for installations. I would like to keep working with the Kinect to see how it can improve interactions with the public. I also got the chance to experiment with water and liquids, which I might not have otherwise thought to play with.

References

Cycling ’74 Max 6 – Used for the interactive portion of this project. Not only was the software useful, but the help files were used to reference how various processes are put together.

Cycling ’74 User Forums – For things like aka.leapmotion, the forums were crucial to overcoming various roadblocks.

CV.Jit – For experiment #3, Jean-Marc Pelletier’s extensive software was used.

Fish tank – Video used in experiment #3

 Project code on Github

VA (Visual Amplifier) Kit Prototype 1 for Audio Art Installations (Project 2)

My initial project started out pretty open-ended; I didn’t really have a set direction or idea of what I wanted to do. I was aware that the experiments were supposed to be mini projects that could relate to the final project, or that the culmination could be something unrelated to the experiments. I thought maybe I should do a MIDI controller that used water as an interface, since it had been done before, but I wasn’t completely sold on the idea.

 

I think I was hoping that after all the experiments I might be able to tie everything together into a final concept. Project 2 turned out to be an audio-visual installation, which was nice since it was my first art-type installation.

 

I used Ableton to launch my audio clips and used the keyboard to trigger my visuals in Processing, with the videos coded in.

 

 

Demonstration

 

 

Reality – My Gear

My gear

Imagined expectations – The Gear
The reflector

 

The documentation of the experiments below provides an overview of my:

– Process & Experiments
– Designs + Prototypes
– Different ideas considered
– Conclusion/further work

 

Process & Experiments

Experiment 0: DIY Inductor coils

This experiment wasn’t related to my project in any way, but it was something I did in an audio workshop I attended recently and thought I would include here. We were given some coils and tools to build our own inductor coils. The final product enables you to pick up electromagnetic sounds from the environment that are generally inaudible.

 

 

1

 

2

 

3

4

 

Experiment 1: Working with videos using Processing

 

I played around with some Processing code to see what I could do with videos and was thinking of using Processing as a way to trigger videos and audio. I had to learn how to play video in code first, then experimented with ways to blend videos and apply filters.

I’m glad I did this experiment, because later on, when I realized my options for executing the project the way I wanted were limited, I had this to fall back on. I used videos from a video archive and applied effects like filters, pixel manipulation, and blending.
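For reference, a minimal Processing sketch along those lines might look like this; the clip file names are placeholders, and the tint-based blend and the filter are just examples of the kinds of effects described above.

import processing.video.*;

Movie clipA, clipB;

void setup() {
  size(640, 480);
  clipA = new Movie(this, "archive1.mov");  // placeholder clips in the data folder
  clipB = new Movie(this, "archive2.mov");
  clipA.loop();
  clipB.loop();
}

void draw() {
  background(0);
  tint(255, 255);           // first layer fully opaque
  image(clipA, 0, 0, width, height);
  tint(255, 120);           // second layer semi-transparent, so the two blend
  image(clipB, 0, 0, width, height);
  filter(POSTERIZE, 4);     // a simple filter over the combined result
}

void movieEvent(Movie m) {
  m.read();
}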

 

I got some really interesting visuals, which I wish I could have used for my demonstration; in the end I resorted to using archive film clips in the actual demonstration.

 

Around this time I also started checking out VJ software because I felt it might be something I could use. I was interested in running my demonstration on one platform, either Ableton or Processing, because I wanted to use my MIDI controller.

 

Options for using Processing here were running thin, since I had trouble integrating the code for visual triggering, MIDI control, and sound. I had learned how to use Minim and images/video in Processing, but getting into the MidiBus made me feel I might have to find other ways to run my audio-visual installation.

 

 

Experiment 2: Drum Machine

I used Processing and found some code to create a drum machine. I still did not have a clear idea; well, that’s not entirely accurate: I had an idea, but everything still felt a little floaty as far as what my main project would be. Actually, I felt a little worried around this period because, compared with the rest of my peers, I felt like I was exploring an open-ended grid that might not lead to a cohesive picture of what I was really trying to do.

Mostly I was working with code to learn and play with. Still, I was creating a sketch for a drum machine, so I did enjoy the exercise, and it’s something I might revisit at a later time. Playing around with a drum machine in Processing also gave me the idea of a visualizer machine based on sequencers, similar to a drum machine.

 

 

Experiment 3: Making a synth

While doing research for my project I found some sites that used Processing to make DIY synths.

They mentioned libraries like Beads, ControlP5, and the MidiBus that can be used to design your own synthesizers. Over the course of my project these libraries kept coming up, which I take to mean this is something I should probably learn in the near future, since there are specific areas of digital technology I’m interested in exploring.

I found out that the Beads library allows for more flexibility in making and designing sounds for synths. What I enjoyed about my process for this project was that it got me exploring audio-visual work generally and putting in some hours practising Processing.

 

 

Experiment 4: Working with images

I used Processing to get some basic know-how on loading images and morphing them into something else. I considered processing some pictures and then setting them up along with my videos, so the pictures would be distorted and cut against the videos. In the end, though, I felt a mix of pictures and videos would feel like a static experience and not really enhance my installation.

 

 

Experiment 5: Turning pictures into sounds

While trying to figure out what my final project was going to be, and in between the various sites and applications I scoured across the web, I ran into a Max application made by a musician who gave away his Max patch with his album.

The patch allows pictures to generate music. His music was not to my taste, but I thought the patch was interesting because it provided a fresh perspective on the process of writing music.

For my project it also led me to consider whether I could integrate a camera into my work. I thought of using the water controller to take pictures of the user; these pictures would then be converted to music and projected back as audio-visual feedback for the user to experience.

I did not take this route because the software was manual rather than automated: you had to drop pictures in a folder and then play the music. I felt this couldn’t fit in and abandoned the concept.

Later on I found a couple of Max patches online that had to do with converting pictures into sound, or vice versa.

 

 

Experiment 6: EarSketch

While getting busy with Processing and trying to find my niche within the goals of my project, I got into exploring different ways of composing music. I used EarSketch, a free online coding environment for learning programming. It’s a little like Processing in that it’s a teaching tool meant to get non-coders into coding by doing fun stuff. This was music generation by coding, producing algorithm-based music.

I would do this by setting up the code, inserting audio files from the sound library, and then setting the parameters before running the code. The software was originally made to let non-coders get used to coding and ease them into learning Python. While I would probably never actually use this unless I was bored or curious, I wish I had gotten to know about this program sooner; it’s actually a fun way to learn programming.

 

 

Experiment 7: Audio reactive in processing

More code for experimentation and learning. Ah, Processing, you have begun to process me into code.

void setup();

This was a good application that I’m sure I might use again one day. I did not use this program in my project because I wanted to create a more interactive experience for the audience. I wasn’t sure how to modify this code to use a MIDI controller to interact with the visuals and cue different music tracks. I wanted my visuals to morph and respond to the cues I was sending out over MIDI, using Ableton as my audio engine together with Processing. I know this can be done, but during this period, although I had some leads, I wasn’t able to pull it off in code.

 

 

Experiment 8: Fast Fourier transform

This Processing application worked like a visual equalizer playing along to a music track that’s coded in. I’d like to work with this again someday and maybe apply it in different ways. I’m glad to have learned it.

I’d like to be able to use an equalizer that morphs into many different visuals while still functioning like an equalizer. My project depended on good visuals for the audience to see the effects of my projection, so a standard equalizer was not something I wanted to use.
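For reference, a minimal audio-reactive sketch of this kind, using the Minim library that ships with Processing, might look like the following; the audio file name is a placeholder.

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;

void setup() {
  size(640, 360);
  minim = new Minim(this);
  player = minim.loadFile("track.mp3", 1024);  // placeholder track in the data folder
  player.loop();
  fft = new FFT(player.bufferSize(), player.sampleRate());
}

void draw() {
  background(0);
  fft.forward(player.mix);                 // analyse the current audio buffer
  stroke(80, 180, 255);
  for (int i = 0; i < fft.specSize(); i++) {
    // each frequency band becomes one vertical line; louder bands draw taller lines
    line(i, height, i, height - fft.getBand(i) * 4);
  }
}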

 

 

Experiment 9 : Reflections, mirrors, surfaces, projector

 

The other component of my project was to use projections of my visuals. I was interested in finding interesting ways I could spread the visuals across the wall with just one projector. This was something I had had in mind for a while; as a performer type, I wondered how I could work around limitations when it came to gear. I used to wonder how I could break up the light from a projector to amplify the visual effects. Could I play around with some small mirrors?

It seemed like too much trouble. Was there an easier way around this, some sort of object that could help me do it?

 

The reflector

 

For my project, I thought about possibly building a kind of “visual amplifier”, until I realized, at the TA’s suggestion, that I should probably look for alternatives and work with what I could find for demonstration purposes. The first thing I did was pay a visit to one of the lecturers at the art and design materials faculty to find out what materials I could play with to get the effect I wanted.

I started with a slab of acrylic glass and cut it into smaller pieces using one of the machine cutters at the plastics and materials department in the main building.

 

Razor

 

 This was an interesting experience. Much care exercised. Don’t show this to my mother 😉

Cut mirrors
Victims of the Cutter

 

The acrylic glass was very similar to a mirror, which I liked, but it mostly reflected the image or light rather than breaking it up.

 

Snowman

Another imagined design for a Reflektor: a mirrored object about the size of a ninja shuriken but shaped like origami. It would be placed on the ground and given mirror properties similar to a convex mirror.

I got a small disco ball and placed it inside a plastic cylinder. This produced interesting visuals, but the image was generally small. It looked decorative, but I wanted images that were clear enough while still being kaleidoscopic.

 

 Multiscreen
Initial test for projection using mini Disco balls. Has potential

 

I decided to get more disco balls; I thought that if I added more of them, maybe I could enhance the image. This isn’t really the right way to think about the problem, I noted later, since the mirrors on a disco ball are actually quite small. Still, a few more disco balls later, more shiny things like fake diamonds really added to the effect.

 

Fake diamonds

 

Crystal method disco punk

 

I was looking for something that would inflate the visual image and also help spread it across the walls. With a little research, I found that a convex or spherical mirror would probably be what I wanted to get. It might not break up the image, but it can inflate it across the wall, making it immersive, an approach I found out has been used for home-based virtual experiments and screen immersion.

 

Most of the convex mirrors were expensive, and Amazon seemed like a good option, but my completion date for the project was quite close.

 

The initial two-piece curved mirror I purchased from Surplus didn’t really reflect the image well. It would make a good security mirror in a store, but it didn’t have the properties of a convex mirror; it looked like one, but it was not.

 

The properties were different. I found some reflective paper at an art shop and decided to wrap it around the curved mirror I had got from Active Surplus. When I had pieced together the two fake convex mirrors from Active Surplus and wrapped the reflective art paper over the surface, I think I got closer to the properties of a convex mirror.

 

I got surprising results when I tested it.

Green1Green2Green3Green4Green5

My colleague Frank helps me out manning the projector

 

 

 

 

 

Wrapped

Green spectre

 

 

You can watch a video of this experiment here

https://vimeo.com/111607666

 

 Conclusion &  further work

 

Surprisingly, all my experiments with surfaces, mirrors, and directions of projection yielded effects that I felt fit the water park theme. The icing on the cake would have been managing to add the MIDI water controller component to my project, but since the effect I got from the projection had an ethereal, watery feel to it, I was pleased with the outcome.

 

There is definitely a lot of room to expand this concept if I include more light sources, multiple projectors, mirrors, and more materials that reflect light differently, not to mention the possible pursuit of creating the Reflektor someday. I’ve read that 3D printers allow new complexity in design and function for product manufacturing that conventional production can’t always match; I’m convinced this expands not only the possibilities of projection mapping but also general visual atmosphere and portability.

 

I would also consider using some VJ software for future work, mostly because it comes with great visuals and could probably make my audio installations a lot more exciting. My process for creating this work had quite an unexpected outcome: it started off chaotically with no real goal in mind, only a vague notion, and as a result I also came up with further ideas on other possible routes I could explore related to what I’ve done here.

 

I have begun to consider using Ableton in new ways beyond what I previously used the software for, which was just plain recording and production. It’s likely I will use Ableton along with VJ software to add more atmosphere to performance events. There is a variety of VJ software out there to get into; I may not have used any for this project, but I would really like to in the future.

 

Reference 

http://www.creativeapplications.net/processing/quasar-processing-sound/

http://www.ponnuki.net/2011/05/diy-soft-synth-midi-controller-processing/

http://blog.dubspot.com/3-max-for-live-devices-to-add-visuals-to-your-performance-ganz-graf-mod-x-vizzable-2-motionmod/

http://vj-dj.arkaos.net/blog/2009/06/hints-tips/grandvj-ableton-live-and-akai-mpd24

http://www.exploratorium.edu/snacks/corner_reflector/

http://projection-mapping.org/turn-living-room-omnimax/

http://news.beatport.com/convert-visual-images-into-sound-in-ableton/

 

Kinect with the Sea

Code

https://gist.github.com/Jesskee/5ad0eb3a2f582e7987ef 

Sketches & Diagrams

 IMG_5630 IMG_5665

 

Photos

IMG_5634 IMG_5636 IMG_5637 IMG_5638 IMG_5641 IMG_5644 IMG_5647 IMG_5650 IMG_5659 IMG_5688 IMG_5693 IMG_5695 IMG_5698

Videos

Project Context

My project is based on an idea created by my colleague Glen Zhang. He projected on the same material, but in a table version: by reflecting the projector off a mirror, he was able to get the images onto his cloth. He showed me the video early in the year, and it had fascinated me ever since. I am really glad to exercise my own version of this concept, executing it with a water theme and incorporating more elements.

10754509_1557034784528601_211582597_o

To branch off from here, I did some more research to see if other companies or developers have used this method. I came across a group called Klang Figuren (klangfiguren.com), who have developed a prototype with this concept. They seem to use it for a variety of different things, because you can program any two-dimensional multi-touch image and alter it with a perception of depth. The following is a link to their Vimeo video for this prototype, which can also be found on their website: http://vimeo.com/42846180. They used it to display certain muscles and body parts of the human anatomy. It also looks as if they played around with making it visually aesthetic and interactive, with dots that followed your hand around. And the last thing I saw in that video was a virtual city that could be bombed when you touch the screen.

8 Project Experiments

Experiment 1 – Changing the background

I started off with open source code and first wanted to change the background. Doing research on the Internet, I figured out how to replace the background image. Once I had changed the background, it showed only a portion of the image I had selected, so I had to find a way to show the full image. First I resized it in Photoshop, and second I expanded the height and width in Processing to show more of the image.
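A minimal illustration of that second fix (my own sketch, with a placeholder file name): the background image is simply drawn at the sketch’s full width and height instead of its native size.

PImage bg;

void setup() {
  size(640, 480);
  bg = loadImage("ocean.jpg");        // placeholder background image
}

void draw() {
  image(bg, 0, 0, width, height);     // stretch/shrink the image to fill the sketch
}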

Screen Shot 2014-11-07 at 4.45.24 PM

Experiment 2 – Connecting the Kinect

Challenge: I must adjust the image resolution to the touch points allowed by the Kinect. The surface area covered by the Kinect is not the same resolution as the image I am using as the background.

Solution: combine the depth image and the actual image so that they synchronize with each other.

Experiment 3 – Testing Kinect sensor

I needed to see at what distance the Kinect would sense my hand, which would activate the ripples in the water. The problem I faced was the Kinect not differentiating my hand from my body and other objects, hence causing the ripple effect everywhere. I fixed this by adding ‘if’ statements to differentiate my arm from my body.
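A sketch of that depth-threshold idea might look like the following, assuming the Open Kinect for Processing library (the project may have used a different Kinect library). Only points closer than a chosen distance count as a touch, so the body and the wall behind the hand are ignored; the threshold values are placeholders to calibrate against the actual setup.

import org.openkinect.processing.*;

Kinect kinect;
int kinectW = 640, kinectH = 480;  // Kinect v1 depth resolution
int minDepth = 400;                // raw readings closer than this are treated as noise
int maxDepth = 700;                // anything farther (the body, the wall) is ignored

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
}

void draw() {
  background(0);
  int[] depth = kinect.getRawDepth();
  stroke(255);
  for (int x = 0; x < kinectW; x += 4) {
    for (int y = 0; y < kinectH; y += 4) {
      int d = depth[x + y * kinectW];
      if (d > minDepth && d < maxDepth) {
        point(x, y);   // close enough to be the hand: this is where a ripple would go
      }
    }
  }
}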

Experiment 4 – Testing the screen

When I ran the sketch, it was smaller than my computer screen, which meant that when I connected my computer to the projector, it wouldn’t show only that image. If I made the sketch full screen, it wouldn’t coordinate well with the touch points of the Kinect, so what I had to do was adjust the distance between the projector and the frame I built so that only the Processing sketch was shown.

Experiment 5 – Testing Kinect sensor with my screen

I didn’t have my screen set up until the morning of the presentation, so I had to see how the projection with the Kinect would respond. Because the projector had to be a certain distance from the screen to show my whole sketch, I had to adjust how far away the Kinect should be and calibrate my code. I had some last-minute trouble fixing this and didn’t have enough time. Unfortunately, I had to place the Kinect in front of my screen, but I will fix it to work as intended.

Experiment 6 – Projector to screen, screen to user

One of the things I noticed is that touching the right side of the screen caused ripples on the left side. This is because the projector projects the image from behind the screen, producing a mirrored image from the user’s side. I had to find a way to flip the image so that the ripple corresponds to the location where the person is touching the screen.
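A minimal sketch of that fix: mirror the touch point’s x coordinate before drawing the ripple. Here the mouse stands in for the Kinect touch point.

void setup() {
  size(640, 480);
  noFill();
  stroke(255);
}

void draw() {
  background(0);
  float touchX = mouseX;               // stand-in for the x position reported by the Kinect
  float mirroredX = width - touchX;    // flip around the vertical centre line
  ellipse(mirroredX, mouseY, 60, 60);  // the ripple now appears where the finger actually is
}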

More experiments to be continued…

Documentation

October 30 – I still have no clue what I want to do for my project. The subject seems very appealing to me, but my only concern is my level of skill with Processing.

November 3 – This class was very useful because I came out of it with a concrete idea for my project. My idea is to create a user-interactive experience that really wows the audience. To do this, I will use the Kinect as the input method and a projector as the output method. The user will be able to touch the screen, and at that touch point a ripple effect will appear, creating the illusion that they are touching water.

Now that I had the idea, I started playing around with Processing. I went to openprocessing.com to find some code I could incorporate into my design. I took a course on innovation, and it taught me that the greatest innovators used combinations of existing products, so I plan to use open source code, modify it, and add some new applications. The code I selected to start with is perfect for the idea I had. In today’s class I accomplished changing the background and adjusting the size.

November 6 – I bought all the materials I need to construct the physical portion of my project.

I went to Home Depot to buy all the wood I needed. I had my friend Anthony accompany me because it seemed like a two-man job. We brought all the wood and the rest of the tools back to the studio and worked on it there until very late in the night.

November 7 – Today I wrapped up the coding part of my project. With the help of Glen (DFI Student) who lent me his Kinect, we sat down together to incorporate this code into my existing code.

November 10 – Today is the day of the presentation. I came to school early to finish my frame and also test out the projection. Unfortunately, I came across a few problems: the screen view cannot go full screen due to the limits of the Kinect; the ripples appear on the opposite side from the hand because the projector projects onto the back of the screen; and lastly, it wasn’t detecting the hand from where I placed the screen. I had to alter my project temporarily in order to present, so I put the Kinect in front of the screen. Unfortunately, the user wasn’t able to interact with the screen by touching the spandex, but they were able to cause the ripples just by hand motion. At this point I still want to fix my project to work as intended.

 

Water Park – Water and Sky

 

When it comes to water, sky is the limit!

 

The Water Park project was fun to work on. When it comes to water, there are millions of ideas passing by when brainstorming; experimenting could certainly go on forever.

 

Sky and Water 1 –  1938, Woodcut

 

Sky and Water I

M.C Escher

Maurits Cornelis Escher (1898-1972) is one of the world’s most famous graphic artists. His art is enjoyed by millions of people all over the world, as can be seen on the many web sites on the internet.

He is most famous for his so-called impossible constructions, such as Ascending and Descending and Relativity, and for his transformation prints, such as Metamorphosis I, Metamorphosis II and Metamorphosis III, Sky & Water I, and Reptiles.

 

Water and Sky

Project #1: Sky and Water 1 – Escher

Presenting overlay & image coordinates:

Theme and inspiration:

When I was searching for my project, one of Escher’s strongest images came to mind as soon as the word water brought it up, and I thought it would be a great metaphor for such a dreamy theme. Escher played with forms, symmetry, and illusion, all of which apply to water’s movement and to how it changes the elements surrounding it. Art affects, creates, and transforms humans and human behaviour. Whichever medium art comes in, it always brings some kind of human interaction with it.

interaction

1. reciprocal action, effect, or influence.
2. Physics.
– the direct effect that one kind of particle has on another, in particular, in inducing the emission or absorption of one particle by another.
– the mathematical expression that specifies the nature and strength of this effect.

Interaction, a word to which some misunderstanding is always attached, does not necessarily mean a human’s physical contact with another person or with an object. Today’s technology brings social media, visuals, and audio, all of which add value to interacting with art.
My inspiration is Escher’s work, his way of seeing the world, and how his art interacts with us.

Goal: My goal with this project is to apply code and technology to physical objects, creating an art installation that communicates with children and youth in a water park environment through physical experiences and objects.

Task:

1- Installation:

-Creating a screen frame

-Stretching fabric on the frame

-Creating milky water environment as projection surface.

-Creating a paper kite

-Setting an electric fan

-Creating a fishing rod and a toy fish

-A data projector

 

Materials;

Wood frame

Ryon fabric

Duct tape

milk + water

Plastic bin

An electric fan

Paper kite

Wood fishing rod and string

a toy fish

Computer

 

2- Writing the code by Processing:

The code included layered images; using the overlay approach I created sliding features in a loop and added extra single cut-out bird images to the actual artwork. As Escher’s work continues and has no limit to its repetition, I thought having the code run in a loop was the best way to express its infinity.

Experiments:

E1:Changing background image

https://vimeo.com/111661743

At the very beginning of this step, I had to create a new background for my project.

The original background consisted of two parts that equally divided the height of the actual artwork. At first the background was created using two equal-sized rectangles coloured in solid colours with the fill command. After having the top layer (the original image), I decided to add photorealistic images of sky and water; this way the graphic work meets the real world, much as, in the context of technology, dreams meet real objects.

Using the add file command, I tried adding the image to my code. In the meantime I declared the image in the setup() section and used that name for the image when writing the code. But in this experiment it did not work, as the image was either missing from its original location or named differently than it was in the code.

Also, before setting its coordinates, the image was created using graphics software, and pre-calculated values were added for its height and width.

 

E2:Adjusting image coordinates

https://vimeo.com/111662622

After adding the main image, the water part, which includes the fish images, was cut in half and saved as a JPG image to be worked on. I cut out a single fish image, emptied its background, and then coloured it in the graphics software GIMP. Changing its scale and adding it back to the code as a separate layer brought a coordinate problem, as it wasn’t part of the original image anymore.

I went through many trials in the code until I found the right coordinates to locate the fish on the water surface; previously it had been on the sky image.

I also noticed that adjusting the coordinates of the single sliding layer does not affect any other feature, except that the run time of the loop gets shorter.

 

E3:Adding birds & changing amount

https://vimeo.com/111668345

Escher’s work is playful, so I wanted to add more action and magic to show how it could come to life. I cut and pasted single birds from the main image, only three birds to be repeated in the code, and created PNG images of each bird with no background.

Each bird needed to stay at its original X, Y coordinates in the individual images. Before thinking of, and getting help with, the overlay approach, I was originally planning to enter each bird by its coordinates. This would have resulted in a heavy image in the code; overlay helped solve the problem, and the bird flock is created in a different tab of the same sketch.

Increasing the bird count expanded the original artwork, while reducing it raised questions about its emptiness.

 

E4:Movement speed and changing its value

https://vimeo.com/111663588

x += movementSpeed; // move the Escher image to the right

In this experiment I tried different numbers for the slide movement speed. The goal was to move the main image more slowly than the big coloured fish, to give the feeling that the biggest and fastest fish gets the food and also gets caught: in the physical interaction, the person with the fishing rod waits for the big coloured fish to approach and tries to catch it, but ends up catching a toy fish instead.

When the value added to x gets bigger, the image moves faster.
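As a minimal illustration of this sliding layer (my own sketch, with a placeholder image name), the image moves right by movementSpeed each frame and wraps around, which produces the endless loop described above.

PImage escher;
float x = 0;
float movementSpeed = 1.5;   // bigger value -> faster slide

void setup() {
  size(800, 400);
  escher = loadImage("skyandwater.jpg");  // placeholder image in the data folder
}

void draw() {
  background(255);
  x += movementSpeed;                     // move the image to the right
  if (x > width) x = -escher.width;       // wrap around for an endless loop
  image(escher, x, 0);
}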

E5:Experimenting the single image coordinates with Graphic app Gimp

https://vimeo.com/111689243

As I tried to explain above, a single bird image was adjusted in GIMP to be added as a separate layer image. After cutting and pasting, I saved it as a PNG image and brought it back into GIMP to re-identify its coordinates, creating guides on a gridded surface. After adding it to the bird flock code, changing the values resulted in the image overlapping the original artwork at the wrong coordinates.

Most importantly, this experiment helped me the understand benefits of using grid before starting each processing project. Even with a simple explanation of X and Y coordinates are the very basics of coding.
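
A grid can also be drawn directly in Processing while lining up layers. The helper sketch below is an illustration I am adding here (it was not part of the original project); the file name is a placeholder.

PImage art;
int gridStep = 50;   // spacing of the guide lines in pixels

void setup() {
  size(800, 600);
  art = loadImage("escher.jpg");   // placeholder file name
}

void draw() {
  background(0);
  if (art != null) image(art, 0, 0);
  stroke(255, 0, 0, 120);
  for (int x = 0; x < width; x += gridStep) line(x, 0, x, height);   // vertical guides
  for (int y = 0; y < height; y += gridStep) line(0, y, width, y);   // horizontal guides
}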

E6: Experimenting with surfaces for projection

https://vimeo.com/111686995

Before deciding on the screen-and-water installation for my presentation, I tried many surfaces: walls at different angles, table tops, furniture, clear water, and ceramic. The best results came from milky white water and shiny rayon fabric. Since I wanted to include a wind effect using a fan, the silky rayon worked best.

I also used the wind effect on the water surface, especially for the second project: the cartoon orange fish is supposed to be captured by hand, while the wind is intended to make that playful and harder.

Shooting this experiment raised the big question of where to place the projector, since it needed a certain angle to cover the L-shaped screen-and-water installation. I wanted to use the VPT projection-mapping software, but the code was already written, and I found out that already having a background would make this process harder and require adjustments to the code. As this was a late discovery, it taught me to plan better at the very first phase of each step, as well as to think about the last step at every step.

 

Project #2: Water and Sky 2 – Birds and Fish

presenting mousePressed & image sliding

Goal: My goal for this second project, which is built on the base of the previous one, was more of an experiment in itself. Here I wanted to surprise the user with more interaction and let them scoop the projected fish image out of the water by hand.

E7: Adjusting the mousePressed count

https://vimeo.com/111670501

The second project included the same sliding features and code, but this time, instead of the Escher image, I wanted to create something more like a game display for kids. This became another experiment, because the allowed number of mouse presses determined whether the presentation ran flawlessly. Before I changed the value from 20 to 200, the game stopped after 20 presses. By trying different values I figured out that the bigger the number, the longer the action continues.
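
A minimal sketch of that press-limit behaviour; the variable names are placeholders and not the ones in my gist.

int pressCount = 0;
int maxPresses = 200;   // raising this from 20 to 200 keeps the game running longer

void setup() {
  size(800, 600);
}

void draw() {
  background(0);
  fill(255);
  if (pressCount < maxPresses) {
    // keep animating / accepting interaction while presses remain
    text("Presses used: " + pressCount + " / " + maxPresses, 20, 30);
  } else {
    text("Game stopped after " + maxPresses + " presses", 20, 30);
  }
}

void mousePressed() {
  pressCount++;   // each click counts toward the limit
}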

E8: Changing the bird mapping speed value

In this experiment I gave different values to the single-bird mapping speed. As the value got higher, birds were added to the image more slowly.

Challenge: In general, the project had three main speeds that needed to follow each other, and managing them and finding the right combination was challenging. The big coloured fish needed to move fastest, then the sliding image, and after that the single birds. The speed difference between the bird additions and the sliding image mattered most, because the goal is to be able to see every added bird before the sliding artwork reaches the right side.
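
One way to think about those three speeds is sketched below; the shapes stand in for the fish, the sliding artwork and the bird counter, and all the numbers are illustrative rather than the values in the final code.

float fishSpeed = 4;      // the big coloured fish moves fastest
float slideSpeed = 1.5;   // the sliding artwork is slower
int birdInterval = 90;    // add one bird every 90 frames; a larger value means slower additions

float fishX = 0;
float slideX = 0;
int birdsAdded = 0;

void setup() {
  size(800, 600);
}

void draw() {
  background(20);
  slideX += slideSpeed;
  fishX += fishSpeed;
  if (frameCount % birdInterval == 0) {
    birdsAdded++;                       // a new bird joins the flock
  }
  fill(255);
  text("birds added: " + birdsAdded, 20, 30);
  ellipse(fishX % width, 450, 40, 20);  // stand-in for the fish, in the water half
  rect(slideX % width, 150, 60, 60);    // stand-in for the sliding image, in the sky half
}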

 

Sky and Water I

IMG_0167

Creating the screen, adding fabric with staples

IMG_0170

Fabric screen – rayon

IMG_0173

Locating the fan

IMG_0176

Adding duct-tape on the wood frame edge to cover and secure the fabric

 

 

Project Idea #2 – Creating salad dressing by using an increments code and mixing it with a servo and magnets

IMG_0178

Salad Dressing Project idea #2 Calibrating Olive Oil & Vinegar

IMG_0179

Shopping list for the olive oil & vinegar experiment

IMG_0180

Thinking process for Salad Dressing project

 

Project Idea #3 – Moving water droplets (with projected images) by using chemicals

https://www.youtube.com/watch?v=P5uKRqJIeSs

Project idea #3-Moving decanol droplet

Alcohol is attracted to salt 

The droplets start to move when they sense salt in their environment.

“Salt is the stimulus that makes them move. They move because the salt gradient provides a different energy landscape. It is like taking a ball that is laying still on a flat surface and then suddenly make the surface hilly. The ball will roll to the lowest accessible point. That is what the droplet is doing. Without a salt gradient every direction in which a droplet might move looks the same (flat). But with a salt gradient coming from one direction the droplet can move energetically downhill into the salt gradient. And stronger salt concentrations will attract the droplet more,” says Martin Hanczyc.

http://pubs.acs.org/appl/literatum/publisher/achs/journals/content/langd5/2014/langd5.2014.30.issue-40/la502624f/20141008/images/medium/la-2014-02624f_0010.gif

 

decanol

Project Idea #4 – Using a colour wheel with selective-colour code to create complementary colours projected on water

IMG_0184

Project idea #4 – Colour wheel projected on water

IMG_0185

Brainstorming on water

IMG_0186

Researching for salad dressing project

IMG_0187

Adding the idea of increment control with Processing to the colour wheel

IMG_0189

Shopping list for Salad dressing project

 

 

Setting Projector

IMG_5684 IMG_5683 IMG_5682

 

Early Development of Water and Sky

IMG_0103

IMG_0104 IMG_0106 IMG_0107 IMG_0108 IMG_0109 IMG_0110

 

FINAL https://vimeo.com/111707214


 

IMG_5716 IMG_5723 IMG_5717 IMG_5719 IMG_5718 IMG_5721 IMG_5720 IMG_5722final

 

 

IMG_0112

 

 

Codes:

Water and Sky 2 – Mousepressed

https://gist.github.com/maydemir/2493bed879f96b1b53a5

Water and Sky 1 – Escher

 https://gist.github.com/maydemir/58cf016aacfa802f46f4

References:

http://www.mcescher.com

http://pubs.acs.org/appl/literatum/publisher/achs/journals/content/langd5/2014/langd5.2014.30.issue-40/la502624f/20141008/images/medium/la-2014-02624f_0010.gif

Code Academy

Sayal Electronics

http://www.sayal.com/zinc/index.asp

“Getting Started with Processing”

Casey Reas & Ben Fry

 

Special Thanks To:

Hart Sturgeon-Reed and Chen Ji for their help with my code

 

 

Project 02 – Rivers

Introduction

Rivers is a media art installation piece that combines video, interactivity, tactile sensing and virtual reality audio. Rivers functions both as a documentary and creative experience that explores the rivers and ravines of Toronto through their soundscapes and video images. This experience is interactively augmented by the physical sensation of touching its water covered surface and producing meditative visual and sonic reactions as a response.

Rivers appears as a wooden box with a water-covered acrylic top surface and a pair of headphones. The acrylic surface displays video images taken from rivers and ravines in the Toronto area, while 3D audio soundscapes, captured in the same locations, can be simultaneously heard through headphones retrofitted with a head-tracking sensor (Ambisonic and Binaural processing). The touch interactions alter the video images, simulating the disturbance of small “virtual” white particles floating on the river, and simultaneously generate sounds that change according to the movements of the hand and the number of fingers placed on the acrylic surface. The soundscapes and the touch-generated sounds are delivered using 3D sound technology to create the aural sensation of physically being at the river location and of having the generated sounds move around the listener. The generated sounds also move in this virtual space, following the same direction as the touch gestures.

The videos and the audiovisual interactivity change approximately every two minutes, switching to different “scenes” representing different locations along the rivers. In each of these scenes the produced sounds and the physics of the visual interaction change.

Rivers can be approached from different perspectives:

  • As a visual and tactile only piece where the headphones are not worn but the audience focuses on the visual interaction and tactile perception of the water.
  • As a virtual audio experience where the audience doesn’t look at or touch the piece but concentrates only on exploring the 3D audio soundscapes by facing different directions (eyes can be closed for increased effect).
  • As a sound art (musical) experience where the audience focuses on finding different ways of producing and controlling the sounds produced by touching on different areas of the surface and using a varying number of fingers.
  • As an integrated experience covering all of the above.

 

[click on images for full version]

20141110_214439 20141110_214432 IMG_5761 IMG_5766 IMG_5791

 

 

Context

This piece is heavily inspired by the Acoustic Ecology movement, soundscape-based sound art and the sound field recording practice as manifested by the Phonography movement. These areas all share the goal of producing some level of awareness about our connection with our surrounding environments (natural, social, geographical, architectural) through the understanding and artistic use of sound, considering all of its implications as a physical phenomenon and as an aural perception experienced by all living organisms. Acoustic Ecology has a scientific basis but a strong involvement of the artistic community, probably due to having been founded by a group of music composers and sound artists led by the Canadian composer and SFU professor R. Murray Schafer.

Rivers became an interesting opportunity for combining sound art with a visual interactive interface where both the visual and the acoustic can exist independently or as a combination.

Related web resources

Acoustic Ecology:

Phonography:

 

System description and functionality

Hardware

The hardware I used in Rivers is all self-contained in a wooden box with an acrylic top cover that functions as a rear-projection surface and water container. It consists of:

  • 1 pico video projector
  • 1 Leap Motion sensor (IR camera)
  • 1 Mac Mini computer
  • 1 Pair of sealed-back headphones
  • 1 Arduino Pro Mini based head-tracking sensor with Bluetooth

An infrared webcam was required (as explained in the sections below). I used the Leap Motion since it was already accessible to me and because it contains two IR cameras and three IR LEDs for lighting. Only one of the cameras was used, which means that possibly any other USB IR webcam could have been used instead; the advantages of the Leap Motion’s cameras are the small form factor, the wide-angle lenses that allow them to be placed in close proximity, and the possibility of stereoscopic imaging and depth sensing (see the experiments below; not used for the final project).

The orientation sensor is required to enhance the 3D virtual audio effect. The Arduino sends serial data via Bluetooth containing the yaw, pitch and roll orientation angles in degrees. These values are generated by the fusion of the data obtained from three sensors: a gyroscope, an accelerometer and a magnetometer. The components I used made it easy to assemble, the only challenge being finding a way to make it compact enough that it could be attached to the headphones’ headband. The components are:

In the near future I will design and 3D print a case for this module. The battery must be positioned with its top (where the cables connect to the cell) away from the magnetometer since this part seems to produce a strong magnetic field that interferes with it. More information about the firmware running in the Arduino can be found in the next section.

The headphones should preferably be ear-enclosing, studio-monitor quality with sealed-back drivers for increased sound isolation. The headphone jack is located on the front of the box and internally connected to the headphone output of the Mac Mini. The head-tracking module is attached to the middle of the headband using a velcro fastener.

IMG_5756 IMG_5779 IMG_5752 IMG_5751 IMG_5750 IMG_5749

 

Software

Source code available on GitHub

The software running on the Mac Mini was programmed using OpenFrameworks plus the following libraries:

I chose OpenFrameworks over other alternatives (like Processing or Max/MSP) due to my own personal interest in learning more about this particular platform and because the native binaries it produces, programmed in C++, offer better performance. I tested the performance of LiquidFun and OpenCV under both Processing and OpenFrameworks, and the latter showed a much faster and more fluid (no pun intended!) response and rendering. The LiquidFun engine worked great as a way of simulating the physical interaction of floating particles. This library is developed by Google as an extension of the well-known Box2D physics engine.

I named the resulting control application with the code-name Suikinkutsu, a Japanese word used to name a particular type of garden fountain consisting of an underground resonation chamber that produces sound as drops of water fall into it. The logic of the application contains the following processes:

  1. Background Video and audio playback: For each scene, a video file is automatically played using an ofVideoPlayer instance. The corresponding soundscape is also played by sending a score event to the Csound instance. At the end of each video a new one is loaded and a new soundscape triggered.
  2. Virtual particles: An instance of the ofxBox2d object is filled with an ofxBox2dParticleSystem object containing 5000 white particles. The gravitational behaviour of these particles changes according to the scene.
  3. Video capturing: Using the Leap Motion API the black and white (infrared) frame image from one of the cameras is obtained via an instance of the Leap::Controller object.
  4. Image processing: The pixel information from the Leap camera is transferred to an ofxCvGrayscaleImage object and resized to remove the distortion (the images from the Leap are elongated on the x axis).
  5. Background removal: The ofxCvGrayscaleImage::absDiff() function is called on the image to remove the background by comparing it to an image captured right after the application is launched. The background can be recaptured at any moment by pressing the spacebar key to compensate for ambient light changes (a future development will automatize this).
  6. Brightness thresholding: The image is thresholded using ofxCvGrayscaleImage::threshold() to produce a binary image for OpenCV. The threshold level can also be adjusted during runtime by pressing the ‘=’ and ‘-‘ keys (a future development will save this and all settings on a preferences text file so they don’t need to be hardcoded).
  7. OpenCV contour and blob detection: An instance of ofxCvContourFinder is used to detect the blobs appearing in the processed image. The fingertips of the user will reflect the IR light from the Leap and appear as white circles in the thresholded image. This is also possible since the white acrylic  material allows the IR light to pass through. The light and images from the projector do not interfere with this process since they remain in the visible light segment of the spectrum and are not detected by the IR camera. The blob size limits can be set via the ‘a’ and ‘s’ keys for the maximum area and ‘z’ and ‘x’ keys for the minimum area size.
  8. Particle animation: The centroids of the detected blobs are mapped to the width and height of the projection and used to spawn invisible circular ofxBox2dCircle objects in the ofxBox2d world at the same locations where the fingertips are detected. The repulsion forces of these objects are set to make it appear as if the touch causes the particles to disperse.
  9. Head-tracking: The yaw, pitch and roll data received from the head-tracking module, via serial communication through an instance of the Razor object, is sent to Csound to be used in the sound processing. Using only the gyroscope and accelerometer it would be possible to determine the orientation, but since these sensors are not perfect (particularly the gyroscope) a reference point is needed to correct for their error accumulation and drift. This is where the magnetometer comes in, with the earth's magnetic north becoming the required reference. This also means that the zero-degrees position (front) will always point towards magnetic north. That can be useful for aligning the soundscapes to their actual geographical positions, but for this project I required the front to point towards the installation. To achieve this, I programmed the application to offset the orientation values by an amount equal to the current orientation of the sensor at the moment of pressing the “c” key, so that orientation becomes the zero-degrees position on all axes (yaw, pitch and roll); a sketch of this offset logic appears after this list. This calibration is required right after starting the application and in case of occasional drift.
  10. Sound synthesis: The x position of the OpenCV blob centroids is sent to the Csound instance to be used in the synthesis of sounds and to virtually position them spatially in the vicinity of the same location (mapped to a -180 to 180 degrees frontal area). This spatial location is varied by a random fluctuation to give it a more organic sensation. The more blobs are detected the more sounds will be produced although due to performance limits the number of sounds is currently limited to 3. The sound synthesis is performed using time-stretching  and freezing of the spectral FFT analysis (Phase Vocoder) performed on three different short audio samples that I recorded previously: wind chimes, knocks on a wooden box and crickets, each one used on a different scene. When the user places a finger on the surface, its x position is mapped to the length of the audio sample so the sound contents at that time location are heard. If the user doesn’t move the finger then the sound is “frozen” which means that what is heard is only the sound contained at that moment in time.
  11. Spatial sound processing: The soundfields of the ambisonic soundscape recordings and the ambisonically panned synthesized sounds are mixed and rotated in the opposite direction of the angular orientation reported by the head-tracking sensor. By doing this, the sounds appear to the listener to remain in their positions, enhancing the effect of sonic virtual reality. This process is similar to the one performed for visual VR (e.g. Oculus Rift), where the orientation of the viewer’s head is used to rotate the camera in the 3D scene. A binaural reverb is added to the synthesized sounds to enhance the sensation of being in a physical space, using Csound’s hrtfreverb diffuse-field reverberator opcode. The diffuse field is composed of the sound reflections or reverberation on walls or objects; these don’t contribute to sound localization but are used by the human brain to determine the size of a physical space. The final ambisonic mix is transcoded to binaural, using the virtual speaker process, so it can be delivered via headphones.
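
The re-zeroing described in step 9 boils down to storing the current raw angles as offsets when “c” is pressed and subtracting them afterwards. The actual application is written in OpenFrameworks/C++; the short Processing sketch below only illustrates that offset logic, with a simulated yaw value standing in for the serial data.

float rawYaw, rawPitch, rawRoll;           // values as they arrive from the head-tracker
float yawOffset, pitchOffset, rollOffset;  // captured when 'c' is pressed

void setup() {
  size(400, 200);
}

void draw() {
  // stand-in for serial input: sweep the raw yaw so the sketch runs on its own
  rawYaw = frameCount % 360;

  float yaw   = wrapDegrees(rawYaw - yawOffset);
  float pitch = rawPitch - pitchOffset;
  float roll  = rawRoll - rollOffset;

  background(0);
  fill(255);
  text("raw yaw: " + nf(rawYaw, 1, 1) + "   calibrated yaw: " + nf(yaw, 1, 1), 20, 100);
}

void keyPressed() {
  if (key == 'c') {
    // the current orientation becomes the new zero on every axis
    yawOffset = rawYaw;
    pitchOffset = rawPitch;
    rollOffset = rawRoll;
  }
}

// keep angles in the -180..180 range after subtracting the offset
float wrapDegrees(float a) {
  while (a > 180) a -= 360;
  while (a <= -180) a += 360;
  return a;
}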

Besides the key commands described above, and for calibration purposes, the application can also display the raw and thresholded images captured by the Leap by pressing the “i” key. This is useful for checking whether the background needs to be recaptured or whether the presence of ambient IR light makes it necessary to change the filtering threshold. Pressing the “t” key establishes the connection to the head-tracker, which is also useful in case of losing the connection.

Audio and visual content production

The videos and soundscapes used in Rivers were collected by me using a Canon T3i digital SLR camera and ambisonic sound recording equipment. The ambisonic microphone used is my own B-Format prototype, constructed using 3D-printed metallic and plastic parts and 10mm electret microphone capsules (one omnidirectional and three bi-directional). This microphone was not built for this project; it is part of a longer personal research process.

The sound recorder used is a modified Zoom H2 that had its four internal microphone capsules removed and a 5-pin XLR connector installed instead. The parts for the modification were designed by the sound engineer Umashankar Manthravadi and are available on Shapeways (he also designed parts to build A-Format ambisonic microphones). My 3D designs for the B-Format microphone will also eventually be available through the same supplier.

IMG_5780 IMG_5781 IMG_5782 20141109_115427

 

 

Development Process

Experiment 1: Video projection

This is a simple experiment where I wanted to find how a laser pico projector would interact with different surfaces. I knew that trying to project on clear water wouldn’t work with a light bulb based projector so I wanted to confirm that a laser one would also not be that effective on clear water.

From the experience with my previous project I found that light through white acrylic would yield an interesting effect, so I tested that too. I found that the projection on acrylic caused some light refraction that made the image a bit blurry, even though laser projectors are known to always remain in focus regardless of the distance to the projection surface. This effect wasn’t detrimental to my objectives and instead contributed to a good visual effect.

 

Experiment 2: Leap Motion as “see-through” IR camera

I’ve known of other systems utilizing IR cameras to track objects or fingertips placed on a transparent surface. As an example, the reacTIVision system used in the Reactable uses this combination to sense the objects placed on the surface and to function as a touch interface. I wanted to explore this possibility while still not knowing how it would be applied to my project. I don’t possess an IR camera, so I wanted to find out whether the IR cameras in the Leap Motion could be used for this purpose.

In order to sense touch on the surface of a semi-transparent white acrylic (like the one used in the previous experiment), I found that an IR camera also worked very well, since it is not affected by the video projection. Since I knew that I wanted to combine projection and computer vision, this seemed to be a good way of avoiding confusing OpenCV with the images being projected. For this experiment I used Processing with the Leap Motion library by Darius Morawiec to get access to the feed of the IR cameras. I could have also used the TUIO framework (used in reacTIVision) but I decided I wanted to implement my own, simpler system.

The results of the experiment were successful, with the exception of realizing that the intensity of the IR LEDs on the Leap Motion is not user-controllable through the API and that they self-adjust depending on the levels of ambient IR light. It seems that the reflection of the IR LEDs on the acrylic surface caused the Leap to increase their brightness, making the image filtering process imperfect due to some sections being burned out. A future experiment would be to determine whether a non-IR-reflective acrylic surface could be used.

 

Experiment 3: Magnetometer response

One of my early ideas was to use the acrylic surface covered with water (the same acrylic tray design I used for the final version) together with floating objects that could be placed on the water by the user. These objects would have magnets embedded in them, and I would use one or more magnetometer sensors under the tray to sense the magnetic fields and use that data for creating the visuals projected from above. For this experiment I bought two magnetometer modules that I attached to an Arduino.

When coding the I2C communication I realized that the specific module I purchased didn’t allow setting a custom address, so it would be hard to have more than one connected to the same I2C bus (the Arduino Uno has only one). As an alternative I could have used a bit-banging I2C library (like this one) for the second module, but for this experiment I used only one. The purpose of the experiment was to find out what kind of electromagnetic variations I would get with the magnet in close proximity and over the different axes of the magnetometer. I also verified how changes in polarity would affect the values.

 

Experiment 4: Floating magnet and electromagnet interaction

This is the point where I had the idea of adding a sound component to my installation that would be generated using the electromagnetic data from the magnetometers. Following the same idea of electromagnetic objects floating on water, the user would place the floating objects on the water tray and move them around to change the characteristics of the synthesized sounds. I then thought I could also make them move by themselves when nobody was interacting, by using electromagnets placed under the tray. For controlling these electromagnets I would use a PNP transistor and an Arduino to switch a 12-volt current going into the coils.

For this experiment I used the coils found inside two different relays I purchased from a surplus store (Active Surplus). One of the coils appeared not to produce a very strong electromagnetic field, probably due to having a lower count of wire turns. A second, larger one worked much better and it did provide some push to the floating wooden cubes with embedded magnets. These cubes floated quite nicely without the magnet, but after drilling a hole in the centre to insert the magnet, the volume of the cube wasn’t enough to make them remain on the surface. A second test was done with a light, foamy material.

 

Experiment 5: Producing a depth map with the Leap Motion

After my floating magnets experiment I thought I could try to use the stereoscopic IR cameras on the Leap Motion to generate a depth map. I did some initial searching on the internet and couldn’t find a definitive answer about this being possible at all. I still thought it could be a nice way of having access to a compact and inexpensive alternative to the depth-mapping functions of the Kinect. OpenCV has the StereoBM and StereoSGBM classes that could be used to generate the depth map.

I programmed a test in Processing and initially found that it didn’t work at all. I then thought this could be due to the images not aligning properly so I added to the sketch the option of shifting one of the images horizontally to the left or right by pressing keys and moving the image one pixel at a time. This gave some results that started to look promising since I was at least able to see some contours and areas that matched the objects (my hand) placed under the Leap with the depth represented as shades of grey.

After some more research on the internet I finally came across this post on the Leap Motion forum about someone else trying to do the same and getting better results. It seemed that the problem lies in the fact that the images from the Leap Motion are distorted due to the fish-eye type of lens mounted on the cameras. I also discovered that the Leap Motion API has some functionalities for correcting this distortion but at this point it seemed too involved of a process for the amount of time I had to develop this project so I decided to leave it for a future one.

 

Experiment 6: Colour tracking using a webcam

In this experiment I was looking into the alternative of using colour tracking to track floating objects on the surface of the water, viewed from above, and use that information for sound synthesis. This would allow someone to place floating cubes of different colours and use the colour and position information to create or modify different sounds. At this point I decided to switch to OpenFrameworks since I was already considering using other libraries like LiquidFun to generate the visuals (and as mentioned before, this seemed to perform much better in OF than in Processing).

The challenges were the expected ones: the webcam was too noisy and introduced a lot of randomness into the system; the light variations made tracking very inconsistent and unstable; I wouldn’t be able to use any kind of projection on the water or the bottom of the container, since this would add extra confusion to the system; and background removal wouldn’t be a solution either, due to the constant changes in the projection. I did manage to get some level of success with the tracking, but not precise enough for my purposes. The reflections on the water also introduced a lot of noise into the system, as seen in the experiment video below (the colour circles are the tracked positions, and the different video views show the direct image from the webcam, the HSB decomposition and the thresholded binary feed sent to OpenCV).

 

Experiment 7: LiquidFun interactivity

Going back to using the Leap Motion infrared cameras to avoid interference from the projected video, I experimented with just tracking the object position from above and mapping that to the LiquidFun world to make the particles move. The experiment had a good level of success, but the automatically controlled intensity of the LEDs in the Leap Motion caused large variances that made it hard to use. These variances were mostly caused by the hands entering the scene under the Leap and changing the amount of light reflected on the shiny acrylic surface. As an alternative I experimented with just using a webcam and hand gestures to interact with the particles, which worked very well.

At this stage I also got the head-tracker to communicate with the OpenFrameworks application so I tested the performance of having the tracker, OpenCV and LiquidFun working at the same time.

By now, I had finished building the wooden box and acrylic tray so I was able to test the performance of the Leap Motion placed inside the box behind the acrylic to detect touch. It was a successful experiment and the intensity of the LEDs remained stable due to not having any other objects placed in between the Leap and the projection surface.

 

Experiment 8 and conclusions: The acoustic ecology art experiment

Having most of the technical details figured out, I started thinking about the artistic content that this interactive platform could present. Making a connection between my personal interest in sound art (strongly inspired by exercising attentive listening to our surrounding environments) and the fact that water was already present as part of the interface, I decided to test using images and field recordings captured near bodies of water. My initial thought was to gather visuals and sounds at different locations along the lake shore in Toronto, but at the time there were strong gusting winds in that area that would have made sound recording very difficult. Then it came to my mind that ravines are usually found in lower-level land and sheltered by trees, so I went out scouting for a good location.

This experiment was very successful because, even though I’ve known for a long time about the work of other artists combining sound art with interactivity in an installation (and have also done sound design for them), for the first time I’m producing my own interactive installation piece that I feel satisfied with and that accomplishes this combination. I really enjoyed working on this project, despite its difficulties and the limitations of time. I’m looking forward to adding more content to this platform by gathering videos and soundscapes from a wider range of ravines. I will also be looking for venues where I can present this installation in the near future.

 

 

Project 2: Brain Activity

 

Untitled

 

Background:

In order to create a theme around the concept of water, it was important to understand the properties of projection along with its interaction with Processing. As this was my first encounter with Processing, exploring its multiple assets and the role it could play in the development of my final project was critical.

At first, the only water projection I could fully picture was quite literally using the projector to project water in a tank onto a surface as simple as a wall. However, after researching and going through amazing work by some artists, my perspective became broader. I began to understand that this was only one of the ways digital technologies can come into play with the physical world.

Project Description:

The idea behind the project was the activity that goes on in the brain: thoughts and processes trigger brain frequencies, which communicate and connect with other parts of the body and as a result produce a certain action or movement.

Although the brain is a complex organ, I represented it simply, using a mirror, water and an interactive frequency generator built in Processing, with the sound played through a speaker.

The way this worked was that every time you interacted with a low-level frequency in Processing, it generated a sound strong enough to travel through the connected speaker into the mirror, which had water placed on it. These vibrations created patterns that varied the mirror’s projection, in contrast to the constant reflection before the mouse interaction.

Code:

https://github.com/ridasrabbani/assignment-2/blob/master/assignment.ino

Process:

The first step of the project was understanding the project brief in relation to my Processing knowledge and to projection. Over the weeks, seeing the different ways digital and physical spaces could be combined made it clear how a variety of materials and software could enable users to interact with multiple points of the project.

The software could be as simple as the camera built into the laptop, which allowed the projection, or as complex as MadMapper. With an understanding of what each offered in terms of projection, the final result became possible by choosing the correct combination of mediums.

The next step was to look at existing works on the theme of water or projection using Processing and a physical medium. Distort Yourself was an example of one such work that had a complex yet strong idea with a fairly simple execution. In a similar way, I wanted to execute an idea that would come across to the audience as something they wanted to dive deeper into and interact with. The brain, an organ that is so complex yet whose processes and activity can be reflected on a medium, came to me, and I wanted to use a system similar to that of Distort Yourself but bring it to life just as The Abyss creatures were projected.

My next step was to look at the way different materials interact with one another to best project my idea. To do so I looked at different sizes of mirrors and speakers, and although size was an important element, the quality and weight of the mirrors and speakers were crucial. The heavier the mirror, the fewer vibrations and patterns it allowed on the layer of water. At the same time, I wanted to create a comfortable sound, without bursting any eardrums, that still produced the effect on the layer of water. After surveying the market and getting ideas from people, I was able to get a mirror with a large enough surface, rather than a frame, that covered the projection space.

10751571_10152581487858477_18178307_n10811503_10152582294253477_2080520156_n10805170_10152582695378477_323625154_n

 

 

 

The beads library had a wide range of choices to choose from. Although I was thinking of using a player at first, I wanted the users to interact with the frequency screen in Processing, so I chose to use the Lesson 10 interaction. The element of giving users complete control was still missing, which is why I altered the code to allow full mouse control: starting from point 0, an interaction is built by clicking on any space on the screen and keeping the mouse pressed.

void mousePressed()
{
  ac.start();   // start the Beads AudioContext: sound plays only while the mouse is held
}

void mouseReleased()
{
  ac.stop();    // stop the audio as soon as the mouse is released
}

However, I still wanted a lower frequency to be produced, to create better patterns and vibrations, which is why I altered the frequency ratio from 10 to 0.1:

updatePixels();
// mouse listening code: map the mouse position to the FM parameters
carrierFreq.setValue((float)mouseX / width * 1000 + 50);          // left-right controls the carrier frequency
modFreqRatio.setValue((1 - (float)mouseY / height) * 0.1 + 0.1);  // lowered ratio range (0.1 instead of 10) for deeper vibrations
}

Even with the lowest possible frequency ratio, I noticed that the lowest frequency was produced when the mouse was pressed in the extreme top-left corner of the screen.
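
For context, here is a minimal frequency-modulation sketch assembled from the standard Beads examples (not my exact Lesson 10 derivative), showing where the two setValue() calls and the mouse handlers above fit together; the gain value and modulation depth are assumptions.

import beads.*;

AudioContext ac;
Glide carrierFreq, modFreqRatio;

void setup() {
  size(displayWidth, displayHeight);
  ac = new AudioContext();
  carrierFreq = new Glide(ac, 500);
  modFreqRatio = new Glide(ac, 1);
  // the modulator's frequency follows the carrier frequency times the ratio
  Function modFreq = new Function(carrierFreq, modFreqRatio) {
    public float calculate() {
      return x[0] * x[1];
    }
  };
  WavePlayer modulator = new WavePlayer(ac, modFreq, Buffer.SINE);
  // the carrier frequency is wobbled by the sine modulator
  Function fmFreq = new Function(modulator, carrierFreq) {
    public float calculate() {
      return x[0] * 200 + x[1];
    }
  };
  WavePlayer fm = new WavePlayer(ac, fmFreq, Buffer.SINE);
  Gain g = new Gain(ac, 1, 0.2);   // keep the level comfortable
  g.addInput(fm);
  ac.out.addInput(g);
  // note: ac.start() is only called in mousePressed(), so sound plays only while the mouse is held
}

void draw() {
  background(0);
  carrierFreq.setValue((float)mouseX / width * 1000 + 50);          // left-right controls pitch
  modFreqRatio.setValue((1 - (float)mouseY / height) * 0.1 + 0.1);  // lowered ratio for deeper vibrations
}

void mousePressed() {
  ac.start();
}

void mouseReleased() {
  ac.stop();
}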

 

Finally, I wanted the sketch to run full screen in Processing, so I changed the size() line to:

size(displayWidth, displayHeight);

 

Diagram of the System:

 

diagram

Photographs:


 

 

10748639_10152581180718477_628418883_n
10807007_10152581181063477_179630985_n (1)

10799453_10152581181093477_699078001_n

 

 

Project Video:

 

 

 

Other diagrams:

brain-hi

Daydreaming-on-the-job-this-brain-wave-reading-helmet-knows-video--b8035a79cd

 

Experiments:

Experiment 1: Using For Loops

 

This was the first Processing code I followed along with the tutorial in order to create loops. Loops not only simplify actions you would otherwise have to repeat again and again; they also show how important calculation and coordination with the rest of the code is in Processing. Although I got the code to run and execute on the screen the first time, it gave me squares of colour the second time around. That may have been because I missed an important loop line or ran into another technical issue I didn’t account for. Doing this exercise also gave me access to the large palette of colours available in Processing. In the final code of my second assignment, though, I chose to vary the palette using RGB values to reflect the theme of my project, the brain, using dark red for the frequency and black for the background:

color fore = color(102, 0, 0);   // dark red used for the frequency display
color back = color(0, 0, 0);     // black background

However, in the future I want to be able to choose from the colour themes available in a palette library.
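
A simple nested for loop in the spirit of this experiment, drawing a grid that alternates between the two project colours; this is an illustration I am adding here, not the tutorial's exact code.

color fore = color(102, 0, 0);   // dark red, used for the frequency in the final sketch
color back = color(0, 0, 0);     // black background

void setup() {
  size(400, 400);
  noStroke();
}

void draw() {
  for (int y = 0; y < height; y += 40) {
    for (int x = 0; x < width; x += 40) {
      // alternate colours like a checkerboard
      if ((x / 40 + y / 40) % 2 == 0) {
        fill(fore);
      } else {
        fill(back);
      }
      rect(x, y, 40, 40);
    }
  }
}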

 

Experiment 2: Adding Sound

Adding sound was my first encounter with sound in Processing, and it helped me understand how the sound element of media can add life to a sketch or a picture. It was also an important aspect of my final project: while doing this experiment, the tutorial gave me an understanding of how libraries can be used to create a sketch and can be further modified depending on the function you want them to perform. I also got access to the beads library, which later became an important part of my code as the skeleton of the interaction between the user and the frequency. This experiment required a WAV file and a JPEG to add a visual layer with a sound file on top of it. The problem I faced was with the sound itself: it only played for a second on the first try, but re-adding the sound after the picture worked the second time around.

Experiment 3: Beads Sound Library

The choice available in the beads library made it a hard decision to pick a particular sketch for processing sound. At first I wanted to use a music player and simply play a low-frequency sound that worked well with my projection, but I didn’t want to take the easy way out. I also wanted an interaction, and although the interaction sketch worked well, it was still missing the component that gave the user full control. Hence I had to modify it to give better results and work only while the mouse was clicked and held. This also meant that each user got a unique experience and got to explore the sketch from scratch without having to reload it each time. In the future, when I have more time to experiment, I want to add a sound of my own choice and then allow interaction with it at different frequencies, to see how I can distort existing music and create variations.

Experiment 4: The Abyss Creatures

The Abyss is a 3D space that allows interaction between graphics and animation with Processing, giving control not only over the basic creatures’ shapes and drawing but also over their movement, focus and information. Andreas Gysin created this as a workshop for design students to explore this space in Processing and allow programming at different levels. It allows interaction, and the basic creatures come alive as graphical output. This inspired me to see projection not just as a tool but as a way to rebuild a space for individual creatures, and multiple creatures, to coexist with one another.

The Abyss code allowed me to recreate the creature and gave me ideas for my future project: creating a virtual space people can explore and recreate. To me this experiment seemed very similar to the games I played while growing up, such as The Sims, where you create a virtual world with the properties you see fit and characters with as many qualities as you want. The personal aspect of the experience was something that I wanted to create in my final project. At one point I also wanted to work on this experiment as my final project, but because of the locked layers I was not able to edit the information about the movement and shape of each creature.

Experiment 5: Frequency Interaction

The frequency interaction was one of the most important experiments of my project, as it was the basis on which the sound was generated. It involved playing around with the frequency changes, the screen size, the basic elements that changed the whole feel of the project (the colours), and most importantly the mouse control, which allowed screen interaction only while the mouse was pressed.

However, I would have liked to add an element to the code where you do not necessarily need to keep the mouse pressed, as this is only one of the interactions with the project. The mirror reflection and patterns are another, and that would have required something to be pressed once and to keep running until something else was pressed to stop it. This, together with other elements such as a different, more comfortable or less pitchy sound, would have allowed less focus on the frequency interaction and more on the mirror interaction.

Nevertheless, this was one of the best sketches in the beads sound library and, with a little tweaking here and there, worked really well with my final project.

Experiment 6: Projection

Projection was one of the experiments that followed me right to the end. Even right before my turn I was setting up the Processing screen with the projector, since the Mac settings were different from the Windows settings I am used to. The projection in my case was meant to show the interaction with the frequency screen; however, I also wanted to project the mirror image changing with the generated frequency, and the before and after.

MadMapper was also an interesting tool, which I wanted to use to highlight an installation complementing mine on the wall. Trying out projection as part of the class generated many ideas about how it can be used with the traditional projector. Projection was certainly different from how I had imagined it, from projecting onto the room itself to concentrating on certain points in the room.

This helped me in my project, as I learnt that people can project on different surfaces and materials and use different tools to create the right effect on the spectator. In my case this was a layer of water on the mirror, which created patterns and variations different from the original reflection.

Experiment 7: Mad Mapper

MadMapper, which seemed confusing yet artistic to me at the start, became much clearer once we practically tried out a projection with it. It is not only about the 2D image on a surface but about creating 3D patterns, objects and textures, all by sizing and aligning the projection against the objects and surfaces it works best with. It was fun because you can perfect the art and recreate it as many times as you like, since it is an easy, prototype-friendly piece of software. I have only used it twice, once as part of a tutorial to distort an image and the second time with friends who had tried it before. It was an amazing experience to see the simple projection play out and transform as part of an environment.

Although we had this session very close to the final project demonstration, I really wanted either to add this element of mapping or to recreate my final project with this new understanding of the software. It was also interesting to see many people implement it as part of their demonstration or use it to create the right mood. In the future I want to try it out with After Effects and an LED set-up.

Experiment 8: Gesture Based Detection

I carried out the gesture-based experiments through the built-in camera, which allowed detection of colour, movement, distance and direction. In this case I altered the code so that the red distance dot disappeared completely as soon as the finger moved close to the camera; the sketch detected this through the pitch-black darkness of the screen, hence detecting a certain colour. The code that allowed this was:

void mousePressed() {
  // save the colour under the mouse click in the trackColor variable
  int loc = mouseX + mouseY * video.width;
  trackColor = video.pixels[loc];
}

void keyPressed() {
  // toggle the camera view on and off with the 'v' key
  if (key == 'v') {
    videoToggle = !videoToggle;
  }
}

This came closest to colour detection and worked better than the other interactions with the camera. Although I wanted to incorporate this element of detection into the final project, by detecting a change in the image or a certain movement in the water, the unreliable results of the speaker and the mirror at different stages meant I was not able to test and implement it before the final project.

 

Inspirational works and their connection with my project:

Distort Yourself (We Are Narcisses) by Bertrand Lanthiez and Chloé Curé uses the simple idea of a mirror and sound to create a distortion and make viewers question their real image. This project is more visual than it is interactive, and its sound is generated by a distance sensor. The visual and perceptive element of this work, which was so finely polished, inspired me to dig deeper and create a connection with something that has always been there yet is not so noticeable. That made me think of the brain, which is such a complex organ yet whose processes are so generative; hence I wanted to use the idea of frequencies as the essential input.

However, the concept of the frequencies being constant did not quite sit with me and I wanted to dive deeper, which is when I came across Aural Architecture – Visualize the Sound by Sung Ting Hsieh. It also applies the concept of water, using a microphone to pick up oscillations from structures and emphasize the connection between the spaces and the audience. That is how I thought of bringing in the element of interaction and alteration of frequency on a Processing screen, and then on the output screen, which was the mirror, through variations and patterns.

While working on my project with elements of interaction and interesting projections, I came across a third project, The Abyss by Filip Visnjic, a 3D space in Processing where creatures can be drawn, built and released. Although the idea in itself was interesting, what I really got from the experience was that the most basic information, such as a name and a birthday, can bring to life a concept that is otherwise hard to visualize. By naming, creating and defining the movement of the creature, I felt not only part of the atmosphere but also better able to visualize the entire projection. While working on my project this feeling really resonated with me, and I wanted to concentrate on how, while going through my installation, people would think of everyday objects, actions and things differently in terms of how they contribute to the universe, just as one mind becomes two and infinite brain activity creates patterns, processes and variations.

Expectations and Outcomes

Although the functional aspect of the project played out the same way I imagined it, at one point I did want to add OpenCV elements to it, which would have allowed control over which part of the mirror the pattern was created on, as well as a sound that was more stable yet created the same low frequency required for the vibrations.

As for the design elements of the project, I had imagined working with a bigger mirror and, later in the execution, something that would look like a floating brain. However, the frequency generated by the speaker didn’t prove strong enough to work well on a larger surface. As for the brain, even though I had a basic template, waterproofing and better planning were needed for my execution to resemble a brain.

As for the Processing display and function, that aspect of the project went smoothly, as it allowed me to generate the lowest-frequency noise by changing the frequency ratio, and to make the screen full-size with an extra line in the code: size(displayWidth, displayHeight);

 

 

 
