Project 2, “Klaus Wallpaper/Mirror”


When I applied to the DFI program in the winter of 2012, I was working on a project loosely referred to as “Klaus Mirror Wallpaper.” In it, I envisioned a mirror, similar to the one in Snow White, which one could peer into but which would have different images looking back. In my mind, the images would ripple interactively, like a pebble dropped into water, and pop up out of the mirror.

http://www.youtube.com/watch?v=Z_-pDpLVVNc

I wanted it to double as wallpaper, its eeriness based on how it could camouflage itself into the rest of the wallpaper. At the time, I considered this to be magical, or at the very least too technically difficult for me to accomplish on my own. Instead, I came up with a Flash animation version of what I had in mind. I made a music loop for this, called “Klaus Tomato Complete”.

Images

Klaus Nomi was a German underground No Wave/opera performer living in New York City in the late seventies and early eighties. I have long been a fan of Nomi’s ability to transform visual art into a persona (through costume and song), a sort of precursor to Lady Gaga. Around the time I was initially working on this project, I was listening to a cover he did of Donna Summer’s “I Feel Love.”

I made a song that was inspired by this.

https://soundcloud.com/chante/friend-of-slime

“Will Munro: Total Eclipse” was the first art show I saw when I moved to Toronto in 2010, at Frank’s Gallery (AGO). The poster for this show included a large image of Klaus Nomi.

Will Munro

 

This poster has been hanging on the wall of my various apartments since that time. I like the idea of overtly acknowledging pop culture obsessions and making more art with and about them.

For “Klaus Wallpaper/Mirror”, I used an image of Klaus Nomi that I found on a fan Tumblr. I used this image as the wallpaper because I like taking a character and turning him into a highly reproducible iconic figure (Andy Warhol). I like Walter Benjamin’s ideas in “The Work of Art in the Age of Mechanical Reproduction”, where he discusses how the more something is mass-produced, the less authentic it becomes: “Even with the most perfect reproduction, one thing stands out: the here and now of the work of art - its unique existence in the place where it is at this moment… The here and now of the original constitute the abstract idea of its genuineness. The whole province of genuineness is beyond technological reproducibility.” (Benjamin, 8)

Due to my highly limited Photoshop skills, I erased one of the figure’s heads by accident, but I prefer it that way.

For Halloween of that year, I found a Martha Stewart craft, “Mirror Glow Eyes”, which is a different take on the Snow White mirror and gives off an animated feel. I love it.

Mirror Glow Eyes

 http://www.marthastewart.com/268895/mirror-glow-eyes

For the images that went on top of the Klaus Nomi heads, I used photos that I had taken in 2006 of video games at an arcade in Hong Kong. I have always been fascinated by the video game aesthetic: make something as synthetic as possible. I like the juxtaposition between the Klaus Nomi heads and the arcade photos.

I was also inspired by Tasman Richardson’s 2012 show, “Necropolis”, at MOCCA. He takes film and plays with it in all sorts of formats that become interactive installations. The following clip gives an experience of his work.

 Process

 Working on this project has involved many steps.

  1. Load all of the images.
  2. Figure out the exact coordinates where the arcade photos should go so that they sit exactly on top of the Klaus heads.
  3. For some reason, after I loaded them and started playing around with my ‘if statements’, they all became huge. Take the arcade photos, resize them, and touch them up in Photoshop.
  4. Create a new sketch, where I load all of these photos again.
  5. Create all of my ‘if statements’ for where I want the photos to be placed. I want it to be so that, depending on where you are standing, the arcade photos pop up over Klaus’ head. This means that, at first, I have to give vague coordinates around each square for where the person has to stand in order for the arcade photo to pop up. Because these coordinates often align, it is difficult to have a person stand in one position without a few heads popping up at the same time.
  6. Endless playing with OpenCV and Capture, which is endlessly inspiring but confusing, because I haven’t been able to figure out how to incorporate the demos into my sketch. So far, some other demos that I would like to play with and incorporate into Klaus Mirror Wallpaper, or another project at a future time, are:

- Suitscan: have video playing before; then, when you stand somewhere, everything stops and it plays the split screen, plus colour changes.

- In OpenCV, copy is cool; the image, for some reason, reminds me of using many LFOs in music, something about how they drag.

- Spatiotemporal: half screen.

- Time displacement: ripple delay.

- Pocky

- Framingham

- In Movie, I like scratch.

 More Process Notes:

I was having a lot of problems trying to use OpenCV; I didn’t know where to put it in my code: at the beginning or at the end? I went to see Nick, and he stayed with me for what felt like over an hour and helped me TREMENDOUSLY. He told me that I needed to employ network communication for the project: get another sketch running at the same time as mine. The initial one would be the ‘client’ and the other one, with OpenCV, would be the ‘server’. One of the reasons for this was to be able to keep my initial dimensions for the image (1500 x 1200). Otherwise, using OpenCV in my initial sketch would only let me handle the small square that it initially wants to use (hope that is clear).

I was ecstatic when everything finally worked. The room needs to be dark for things to work most efficiently, and it’s a bit rough, but wow, what gratification to see my images come and go depending on where I am standing!!

Now to incorporate audio:

I added the Minim library and was able to have one sketch successfully play my accompanying loop, “Klaus Tomato Complete”, although when I tried to put it into my client sketch, trouble arose. I spent a few hours trying to incorporate it into my main sketch, when it eventually occurred to me to do with Minim what I had done with OpenCV: give it its own sketch!! Duh hickey. I added the code, plus one line so that it would loop, and presto: all three sketches work together to create my Klaus Mirror/Wallpaper!

The Code

Remember, I used three sketches. I am having trouble logging into GitHub, so I will have to give you my code here.

1. The server, which was in charge of face detection:

import hypermedia.video.*;
import java.awt.Rectangle;
import processing.net.*;

OpenCV opencv;

// contrast/brightness values
int contrast_value = 0;
int brightness_value = 0;

Server server; // make a new server
float faceX;
float faceY;
String sendData;
byte zero = 0; // terminator byte so the client knows where a message ends

void setup() {

  size(320, 240);

  opencv = new OpenCV(this);
  opencv.capture(width, height); // open video stream
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT); // load detection description, here -> front face detection: "haarcascade_frontalface_alt.xml"

  server = new Server(this, 5204);
}

public void stop() {
  opencv.stop();
  super.stop();
}

void draw() {

  // grab a new frame
  // and convert to gray
  opencv.read();
  opencv.convert(GRAY);
  opencv.contrast(contrast_value);
  opencv.brightness(brightness_value);

  // proceed with detection
  Rectangle[] faces = opencv.detect(1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40);

  // display the image
  image(opencv.image(), 0, 0);

  // draw face area(s)
  noFill();
  stroke(255, 0, 0);
  for (int i = 0; i < faces.length; i++) {

    // centre of the detected face rectangle
    faceX = faces[i].x + faces[i].width / 2.0;
    faceY = faces[i].y + faces[i].height / 2.0;

    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);

    ellipse(faceX, faceY, 20, 20);
  }

  // scale the 320x240 camera coordinates up to the client's 1500x1200 canvas
  sendData = map(faceX, 0, 320, 0, 1500) + "," + map(faceY, 0, 240, 0, 1200);
  server.write(sendData);
  server.write(zero);
}

 

2. The client, which was responsible for my images:

import processing.net.*;

Client client;

PImage img1; // Klaus Background
PImage img2; // Boobs
PImage img3; // Egyptian Eye
PImage img4; // Clown
PImage img5; // Yellow Buttons
PImage img6; // Kind Text
PImage img7; // Pink Buttons
PImage img8; // Girl Face
PImage img9; // Sunset
float faceX;
float faceY;

void setup() {

  client = new Client(this, "127.0.0.1", 5204); // connect to the face-detection server

  size(1500, 1200);
  img1 = loadImage("Klaus Background.jpg");
  img2 = loadImage("Boobs28.jpg");
  img3 = loadImage("EgyptianEye28.jpg");
  img4 = loadImage("Clown28.jpg");
  img5 = loadImage("YellowButtons.jpg");
  img6 = loadImage("KindText28.jpg");
  img7 = loadImage("PinkButtons28.jpg");
  img8 = loadImage("GirlFace28.jpg");
  img9 = loadImage("Sunset28.jpg");
}

void draw() {
  receiveData(); // put this at the top of draw()

  image(img1, 0, 0);

  // img2 Boobs
  if (faceX > 240 && faceY > 70) {
    image(img2, 216, 40, 60, 90);
  }

  // img3 Egyptian Eye
  if (faceX > 500 && faceY > 150) {
    image(img3, 675, 337, 60, 90);
  }

  // img4 Clown
  if (faceX > 400 && faceY > 600) {
    image(img4, 446, 641, 60, 90);
  }

  // img5 Yellow Buttons
  if (faceX > 1100 && faceY > 150) {
    image(img5, 1132, 336, 60, 90);
  }

  // img6 Text
  if (faceX > 800 && faceY > 70) {
    image(img6, 903, 42, 60, 90);
  }

  // img7 Pink Buttons
  if (faceX > 210 && faceY > 150) {
    image(img7, 216, 336, 60, 90);
  }

  // img8 Girl Face
  if (faceX > 1100 && faceY > 200) {
    image(img8, 1132, 40, 60, 90);
  }

  // img9 Sunset
  if (faceX > 1100 && faceY > 50) {
    image(img9, 1132, 641, 60, 90);
  }
}

void receiveData() {
  boolean sTest = false;
  String[] inCoords = new String[2];
  byte zero = 0;
  String data;

  // read data up to the zero byte the server appends to each message
  if (client.available() > 0) {
    data = client.readStringUntil(zero);
    // println(data);

    try {
      inCoords = splitTokens(data, ",");
      sTest = true;
    } catch (NullPointerException e) {
      sTest = false;
    }

    if (sTest) {
      faceX = int(inCoords[0]);
      faceY = int(inCoords[1]);
    }
  }
}

3. The music

import ddf.minim.*;

Minim minim;
AudioPlayer song;

void setup()
{
  size(100, 100);

  minim = new Minim(this);

  // this loads "Klaus Tomato Complete.mp3" from the data folder
  song = minim.loadFile("Klaus Tomato Complete.mp3");
  song.loop(); // loop() starts playback and repeats the file indefinitely
}

void draw()
{
  background(0);
}

 

**Important to note that one must press play first for the server, second for the client, and third for the music; otherwise it won’t play.

***One thing that I noticed through playing with this is that because it can be hard for the camera to pick up the user’s face, the user (me) sometimes has to wave their arms around, and it becomes a sort of dance to get the attention of the small green light at the top of the camera screen.

****I would love to see this on a big projector!

 

Bibliography

Benjamin, Walter. The Work of Art in the Age of Mechanical Reproduction. Penguin Great Ideas. London: Penguin, 2008.


Kinect! Translating Information Through Gesture & Sound

Research Presentation: Kinect

Presented: October 2013

Kinect Research Report draft

Kroma-The Chromotherapy Machine

My initial inspiration for the project came from an installation at the Tate Modern: an encompassing wall of LED screens that kept changing colour at random. When it turned red, the user felt warm; a bluish hue felt like a sharp temperature drop. Although the following images are not from the same exhibit, they accurately depict the sense of space I experienced. They are from artist James Turrell’s installation at the Guggenheim Museum in New York.

James Turrell

Chromotherapy has been defined as the treatment of disease, or healing, using specific colour(s) as “medicine”. The earliest suggestion dates back to the time of Avicenna in the late 10th century, when he recommended that certain colours could be used in the treatment of disease. Several schools of thought have developed over the centuries, but science has generally regarded chromotherapy as a “pseudo-science” and accused it of offering nothing but the “placebo effect” to the people who are treated by it and swear by its effectiveness.

My curiosity was about the effect colours have on people, as it was presented in architecture school, where we were taught the psychology of colour, e.g. never use black in a restaurant interior, since it is not appetizing and will turn customers off food. Also worth mentioning is the use of pink in psych wards, where people who have displayed aggressive behaviour have been placed in rooms painted entirely pink (including the ceiling and the floor).

Pink curbs aggression

Initial design considerations were based on the user experience and an exploration into the methods of dispensing chromotherapy. The image below depicts the school of thought which sets the entire body as a target for chromotherapy versus what the eye (and as a result the brain) perceives.


Choosing mind over matter, I opted to create an environment which would serve the latter (the eye/mind connection vs. the body).

User experience considerations


Kroma is a prototype for a machine that can be used as a meditation catalyst. The setup includes a screen/poster which gives users instructions on how to interact with it. Cards with emotions (one emotion per card) are placed next to the screen, and the user is invited to sit down, choose the card which best describes the emotion being felt at that time, and place the card in the slot provided. Voilà! When the card is in place, the screen and the surrounding environment turn into the colour associated with the emotion selected.

Although the list of colours associated with “healing” is extensive and varies according to the school of thought, the following list is the formula used for Kroma.

ORANGE - Happy: happy, hopeful, optimistic, positive, elated, overjoyed

GREEN - Sad: sad, upset, depressed, pessimistic, distressed, crying

AQUA - Dazed & Confused: hopeless, irritated, incoherent, confused, dazed, stressed out, lacking focus

RED - Anger: stressed out, angry, jealous, hatred, negative

BLUE - Creative/Mental Block: mental block, creative block

PINK - Aggressive: suppressing aggression, appetite suppression, stress relief, inciting relaxation

Fiducial markers pasted on the back of the cards were used, with colour values attributed to each marker. The code used was the marker-tracking code, with additional “if” statements telling the machine to change the colour of the screen when a specific marker is added. reacTIVision was used to recognize the fiducial markers and their values. An additional element added to the design was the use of a “word” to help spark the user’s meditation; word selection was carefully made in direct context to the emotion being “treated”. A rough sketch of the marker-to-colour logic follows.
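As an illustration of that logic, here is a minimal sketch of how it could look with the TuioProcessing client that ships with reacTIVision. This is my reconstruction for illustration only; the marker IDs, colour values, and window size are assumptions, not the exact Kroma code.

import TUIO.*;

TuioProcessing tuioClient;
color screenColor = color(0); // neutral screen before a card is inserted

void setup() {
  size(800, 600);
  // reacTIVision must be running; it sends TUIO messages on port 3333 by default
  tuioClient = new TuioProcessing(this);
}

void draw() {
  background(screenColor); // flood the screen with the current therapy colour
}

// called by TuioProcessing when a fiducial marker enters the camera's view
void addTuioObject(TuioObject tobj) {
  int id = tobj.getSymbolID();
  if (id == 0) screenColor = color(255, 140, 0);        // ORANGE - happy
  else if (id == 1) screenColor = color(0, 160, 60);    // GREEN - sad
  else if (id == 2) screenColor = color(0, 200, 200);   // AQUA - dazed & confused
  else if (id == 3) screenColor = color(220, 0, 0);     // RED - anger
  else if (id == 4) screenColor = color(0, 60, 220);    // BLUE - creative/mental block
  else if (id == 5) screenColor = color(255, 105, 180); // PINK - aggression
}

// called when the card is pulled back out of the slot
void removeTuioObject(TuioObject tobj) {
  screenColor = color(0);
}

void updateTuioObject(TuioObject tobj) { }

void refresh(TuioTime frameTime) { }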

Next Steps 

Kroma has many applications. The most important consideration for its development would be the method of delivery to the user. Since it is a meditative tool, it would definitely require an environment that successfully isolates the user from other distractions. Additional ambience tools, like sound, could be added to the experience. Explorations with colour transitions, or a broader spectrum of colours, are also possible.

The Video

The Code

https://github.com/umaramanullah/Project-2/tree/master

Resources & References

Yousuf Azeemi, S. T., & Raza, S. M. (n.d.). A Critical Analysis of Chromotherapy and Its Scientific Evolution. NCBI. Retrieved October 29, 2013, from http://www.ncbi.nlm.nih.gov/pmc/articles/PM

Colour Psychology. (n.d.). ThinkQuest. Retrieved October 29, 2013, from http://library.thinkquest.org/07aug/02208/

Fiducial Marker. (n.d.). RoboRealm. Retrieved October 29, 2013, from http://www.roborealm.com/help/Fiducial.ph

Chinese Character Waterfall

 


 

GitHub address: https://github.com/KlaudiaHan/project-2

The argument between traditional art and digital art is still fierce. Although this argument continues, digital art has already taken an important place in the contemporary art world. Since the 1970s, various names have been used to describe the process, including computer art and multimedia art, and digital art is itself placed under the larger umbrella term new media art. The impact of digital technology has transformed activities such as painting, drawing and sculpture, while new forms, such as net art, digital installation art, and virtual reality, have become recognized artistic practices.[1]

 

As multimedia technology has been more and more developed, exhibiting digitized replicas has become a popular alternative means of displaying cultural heritage, ranging from archeological sites and statues [Carrozzino et al. 2009; Koller et al. 2009; Foni et al. 2010], to paintings [Chu and Tai 2001; Zhu et al. 2004; Lin et al. 2009]. Compared to traditional exhibitions, the digital exhibitions not only provide viewers with a means to interact with an exhibit without actually contacting it, but also are capable of revealing background knowledge [Foni et al. 2010]. Most of the state-of-the-art multimedia exhibitions present a painting or parts of it in the form of animations [Chu and Tai 2001; Zhu et al. 2004] sometimes being triggered by users using intelligent user interfaces [SeoulAliveGallery 2008; Lin et al. 2009]. [2]

 

Some digital exhibitions build an overtly digital interface, which declares that the exhibition is different from traditional art. Others mix elements of both traditional and digital art. The main purpose of this project is to find a proper way to combine traditional art with digital techniques.

 

The Context

  • Build a virtual wash-painting environment for users.
  • Pure traditional Chinese style: combine the traditional art with digital techniques.
  • Make a virtual figure of the user in order to immerse the user in the work.
  • Let the user get feedback from the work by interacting with the content, to improve the user’s engagement.

The main aim of this installation is to promote wash-painting culture and to explore people’s acceptance of a mixed format of arts. It is a new attempt at applying digital art on a foundation of traditional art. There is a view that digital art is not a kind of real art and that digital art is contrary to traditional art. Combining them is therefore important for both artists and audiences: it shows that these two formats of art can live together in one artwork. Rather than separating digital art from traditional art, it is better to find a way for both to win.

 

The code research

 

The simulation of the waterfall is based on the source code of the particle example in the Processing library. The method for forming a Chinese-character waterfall is to push a set of characters into an array and choose a character randomly from the array whenever a new waterfall character is generated. A small sketch of this idea follows.
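A compact sketch of that array-plus-random-pick method could look like the following. This is my illustration of the approach rather than the project’s actual code; the character set is arbitrary, and a font with CJK glyphs may be needed for the characters to render.

String[] chars = { "山", "水", "云", "月", "风" }; // the pool of characters to draw from
ArrayList<Glyph> glyphs = new ArrayList<Glyph>();

// one falling character of the waterfall
class Glyph {
  float x, y, speed;
  String c;
  Glyph() {
    x = random(width);
    y = 0;
    speed = random(1, 4);
    c = chars[int(random(chars.length))]; // choose a character at random from the array
  }
  void update() { y += speed; }
  void display() { text(c, x, y); }
}

void setup() {
  size(480, 640);
  textFont(createFont("Serif", 24)); // a system font with CJK glyph support
  fill(60);
}

void draw() {
  background(245);
  if (frameCount % 5 == 0) glyphs.add(new Glyph()); // generate a new character
  for (int i = glyphs.size() - 1; i >= 0; i--) {
    Glyph g = glyphs.get(i);
    g.update();
    g.display();
    if (g.y > height) glyphs.remove(i); // discard characters that fall off screen
  }
}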

 

Generating a transparent figure from a colour video was another problem. The method I found is to turn the colour video into a black-and-white video with the adaptiveThreshold function [3], then scan the pixels of the image taken from the video and change each white pixel into a transparent one.

 

PImage transparentImage(PImage img) {
  // 'brown' is the ink colour used for the figure; it is defined elsewhere
  // in the sketch, e.g. color brown = color(80, 50, 20);
  PImage transImage;
  transImage = img; // note: this aliases img rather than copying it
  for (int i = 0; i < img.width * img.height; i++) {
    if ((img.pixels[i] & 0x00FFFFFF) == 0x00FFFFFF) {
      transImage.pixels[i] = 0; // white pixels become fully transparent
    } else {
      transImage.pixels[i] = brown; // everything else becomes the ink colour
    }
  }
  transImage.format = ARGB; // reinterpret the pixels as having an alpha channel
  transImage.updatePixels();
  return transImage;
}

Another important piece of code I researched is blob detection, provided by the BlobDetection library. It can find the edge of a shape whose colour differs from the colours beside it. [4] A snippet showing the library in use is below.
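Roughly, the library is used like the following sketch, adapted from the example on the BlobDetection site; the image name and threshold value are assumptions for illustration.

import blobDetection.*;

BlobDetection theBlobDetection;
PImage img;

void setup() {
  size(640, 480);
  img = loadImage("figure.jpg"); // any image (or video frame) to outline
  img.loadPixels();
  theBlobDetection = new BlobDetection(img.width, img.height);
  theBlobDetection.setThreshold(0.4f); // brightness level separating blob from background
  theBlobDetection.computeBlobs(img.pixels);
}

void draw() {
  image(img, 0, 0);
  stroke(255, 0, 0);
  // walk every detected blob and draw its edge segments
  for (int n = 0; n < theBlobDetection.getBlobNb(); n++) {
    Blob b = theBlobDetection.getBlob(n);
    if (b == null) continue;
    for (int m = 0; m < b.getEdgeNb(); m++) {
      EdgeVertex eA = b.getEdgeVertexA(m);
      EdgeVertex eB = b.getEdgeVertexB(m);
      if (eA != null && eB != null) {
        // edge vertices are normalized to 0..1, so scale them to the canvas
        line(eA.x * width, eA.y * height, eB.x * width, eB.y * height);
      }
    }
  }
}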


Application

This installation could be scaled up to a huge screen in a public square, allowing more people to interact with the virtual image at once.

 

[1] Digital Art, by imjith, Jan. 2012.

[2] Annotating Traditional Chinese Paintings for Immersive Virtual Exhibition

[3]http://urbanhonking.com/ideasfordozens/2013/07/10/announcing-opencv-for-processing/

[4] http://www.v3ga.net/processing/BlobDetection/index-page-home.html

Interactive street art

http://vimeo.com/77822058


My project started with the idea of creating a political statement using Processing. In my first approach I intended to use video capture with a transparent image to make the Mexican president talk about whatever I wanted, and since I would be able to record it, I could watch it any time and laugh at him (I found the code in the “Drawing Movie” example in the Processing library). I thought it was cool, but it was just me listening to myself talk about something only I care about.

http://vimeo.com/77759903

Browsing the Internet looking for ideas, I ran across many memes of the Mexican president that people have customized. I was amazed at how creative people can be when they have the opportunity to participate, so I decided to make an interactive political/social poster.

In order to create this poster I needed a canvas (my screen with an image as a background), brushes (fiducials), a grid (reacTIVision) and lots of ink (Processing code).


I started by setting up the canvas, loading images as the background. After that I began coding using the mouse as the brush; the drawing was smooth and uniform, but when I switched to the fiducials, instead of drawing a line I was just spotting colour. I needed some help, so I checked the examples that ship with Processing and found Pulses, which is basically a loop that draws shapes (ellipses, for example) while a particular event is happening, such as the mouse being pressed. I decided to incorporate this code into my project; a simplified version is sketched below.

pulses
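Here is a short sketch in the spirit of that Pulses example (my approximation; in the poster, the trigger and position came from a tracked fiducial rather than the mouse):

float angle = 0;

void setup() {
  size(800, 600);
  background(255);
  noStroke();
}

void draw() {
  // only paint while the "brush" event is active (mouse pressed here)
  if (mousePressed) {
    angle += 5;
    float val = cos(radians(angle)) * 12.0; // pulsing radius
    // a ring of ellipses around the brush position
    for (int a = 0; a < 360; a += 75) {
      float xoff = cos(radians(a)) * val;
      float yoff = sin(radians(a)) * val;
      fill(0);
      ellipse(mouseX + xoff, mouseY + yoff, val / 2, val / 2);
    }
    fill(255);
    ellipse(mouseX, mouseY, 2, 2);
  }
}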

Once I got the code working, I decided to record a video of the creation process, and since I still had the code from the beginning of the project, it was easy to do. Recording creative processes is good for documentation and research purposes.

Here is the link to my code: https://github.com/dbeto/Project-2/blob/master/Interactive%20poster

http://vimeo.com/77760174

The ultimate goal of this project is to be projected in different parts of the city, recording people’s interactions and the creation of a visual outcome addressing the social issues they might have. This documentation can then be projected in other areas of the city to create awareness through empathy.

Finally, I have the video of the interaction that the DF students had with the poster.

http://vimeo.com/77761928

Int x=13

500 – x  words about context

I have always been interested in design and art that actively participate in political and social matters to create awareness. I am happy to see how designers and artists express their opinions about political issues or denounce social injustices through their artworks.

But what amazes me most is when these visual communicators get out of the galleries, recover public spaces, and create stronger political statements. As examples we can see the work of the Mexican muralists Diego Rivera and David Alfaro Siqueiros in the mid-1900s or, more recently, the street art of Shepard Fairey and Banksy.


 


Although most of these artworks are beautiful, and sometimes conceptually deep, they are constantly tagged as simple acts of vandalism, as the destruction of a communal space by a single individual, when on the contrary they most often express the opinion of a great part of the population, and the issues these pieces of art address are far more important than the wall that serves only as their canvas.

Another thing I find really interesting is how people are becoming more involved in politics using new information and communication technologies. Being heard is no longer a privilege of the few but an opportunity for many. Posting on Facebook, tweeting, and creating memes to criticize the work of public figures are examples of this new free participation.

So, what if instead of “destroying” we create, and what if instead of just watching we participate? Well, I bet we can do it using a projector, some sensors, and lines and lines and lines of code.

First, let’s take a look at some related existing projects.

AIR GRAFFITI, by FotoMaster: this product offers the experience of creating digital graffiti. To have that experience you can either rent it for $2,000.00 for 4 hours or buy its software license for $4,950, plus infrared spray cans, an infrared webcam, a projector and so on. The software is really fun but limited, and when the 4 hours pass, all your fun and creations leave with them.

SWEATSHOPPE video painting: in 2009, Bruno Levy and Blake Shaw started a project that relates video projection and architecture in a more interactive way. This project, called “video painting”, uses technology to paint videos onto walls with a special roller. The project is not commercial and has been evolving since then. The artists have done a SWEATSHOPPE tour in the US and Europe, documenting their work in different landscapes.

 

These projects are really cool. Since we are living in the 2.0 era, where we can use open-source software like Processing, hardware is more affordable, companies are more open to independent development, and artists are including digital technology and co-creation in their artwork, a project like interactive street art for social awareness makes sense to me, and I look forward to keeping working on it.

 

 

Spoke & Mirrors / Nebulous Talk Iterative Sound Visualizer

Spoke & Mirrors

Description

Spoke & Mirrors is intended to visually capture the fleeting nature of conversation & interaction via iterative/recursive visuals. In its ideal form, discreet microphones/inputs will capture conversation between individuals, speeches, or ambient sound, convert it to a visual form (in this case smoke-like shapes), then, depending upon application, reveal it as an art print, live projection or manifestation of the sonic environment.

Much of the impetus for this concept developed through a combined interest in natural systems and the fleeting nature of conversations, relational moments & words as manifestations of (fleeting) thought: in essence, an effort to capture the ephemeral. This sentiment of creating tangibility out of the intangible links directly to my proposed thesis, engaging with macro- & micro- (natural) phenomena via scientific data to create 3D-printed & visual manifestations, working as translations conceivable at a human scale.

/ / /

Possibilities
There are numerous iterations in which this concept could manifest:

  1. Hidden mics that respond to pitch, talking speed, tempo, etc. & create reactive visual objects more representative of the individual(s). This avenue was explored & I believe it is quite feasible, if complicated.
  2. Discreet mics for weddings, romantic/intimate meetings, or the recording of palliative care/last moments as a personal memento.
  3. Artwork projection via ambient sound in rail-stations, restaurants, cafés, etc.
  4. Mobile app with shareable images via social media, printable.
  5. I believe this concept could mesh well with Umar’s Colour Therapy project where mood-altering visuals are paired with colour &/or psychologically probing questions, furthering customization & allowing for deeper affect. For example, perhaps cloud-like shapes flit, form & dissolve in an effort to calm.


Issues
Some of the issues I ran into during the creation & discovery of this project include:

  1. Difficulty using dual mics in Processing. This was perhaps one of the most disappointing points, as it limited the direct ability to capture conversations in the envisioned manner.
  2. Algorithms become complicated & learning to control randomness certainly proves challenging, albeit fascinating & worthy of further exploration. Many iterations became ‘persistent’, refusing to halt, while at the same time creating strange & mesmerizing visuals akin to lava, dripping paint, play dough & other strange forms.
  3. Low resolution: pixel-based iteration is less suitable for print; vector options (Bézier, etc.) appear limited within the Processing environment. Alternate options will be explored & most likely something equally suitable can be created with expandable options.
  4. As ridiculous as it sounds, these became rather addictive to watch, particularly when paired with famous speeches, philosophical debate (CBC Ideas) or one’s favourite music.

 

This One Goes to 11

 

/ / /

On Github

Code for Nebulous Talk

Code for Speech/Talks on White

As you’ll see, additional code variations are provided on Github, exploring variations in colour, reaction thresholds and algorithm alterations (in terms of particle age, persistence, speed, etc). The intention would be to give the user control over colour, speed & other variables to increase customization, all contained in one code/product/service.

/ / /

Sketches & Design
Through perusal of openprocessing.org, code using Perlin points as the basis of movement for the visual aspect of this project carried much appeal. This system was conceived as “a function for generating coherent noise over a space. Coherent noise means that for any two points in the space, the value of the noise function changes smoothly as you move from one point to the other — that is, there are no discontinuities.” (Zucker, 2001). Essentially, it calculates an average of averages taken from a series of points. (A short sound-reactive sketch using this noise follows the notes on Ken Perlin below.)

This noise meets 3 intended criteria:

  1. Apparent randomness — appearing random to the human eye.
  2. Reproducible — the same input should manifest the same output.
  3. Smooth Transitions — No sharp edges.

Perlin noise was created for the movie Tron by Ken Perlin, professor in the Department of Computer Science at New York University, who is also the founding director of the Media Research Lab at NYU & Director of the Games for Learning Institute.

As I will not pretend to understand (or worse yet, explain) the maths at work in Perlin noise, further info can be found here.
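To make the mechanics concrete, here is a minimal sound-reactive sketch that combines a Minim microphone input with Perlin-noise-driven motion. It is an illustration of the technique under my own assumptions (particle count, speeds, colours), not the Nebulous Talk code itself.

import ddf.minim.*;

Minim minim;
AudioInput in;
int num = 200;
float[] x = new float[num], y = new float[num];
float[] t = new float[num];

void setup() {
  size(800, 600);
  background(255);
  stroke(0, 20); // faint strokes accumulate into smoke-like trails
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 512); // microphone input
  for (int i = 0; i < num; i++) {
    x[i] = random(width);
    y[i] = random(height);
    t[i] = random(1000); // each particle samples its own region of noise space
  }
}

void draw() {
  float level = in.mix.level(); // current input loudness, roughly 0..1
  for (int i = 0; i < num; i++) {
    // noise() returns a smooth value in [0,1]; map it to a heading angle
    float angle = noise(t[i], i * 0.01) * TWO_PI * 4;
    float step = 0.5 + level * 20; // louder sound pushes the particles faster
    float nx = x[i] + cos(angle) * step;
    float ny = y[i] + sin(angle) * step;
    line(x[i], y[i], nx, ny);
    x[i] = nx;
    y[i] = ny;
    t[i] += 0.005; // small steps through noise space give smooth turns
    // respawn particles that wander off the canvas
    if (x[i] < 0 || x[i] > width || y[i] < 0 || y[i] > height) {
      x[i] = random(width);
      y[i] = random(height);
    }
  }
}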

 

Chicken scratches that help a bundle despite appearances.


/ / /

Screenshots & Videos

Nebula Inception Visualization (Vimeo)

MLK’s “I Have a Dream” Speech Visualization

Empire Strikes Back: Asteroids Visualization (Vimeo)

Fire Nebula Sound Visualization (Vimeo)

Filament Styled Nebula (Vimeo)

 

 

Presentation PDF:  P2_Pres_F72

/ / /

Context

Science, art, design & programming have long searched for — & created — patterns, visuals & algorithms in an effort to explain the natural world, develop models and harness the beauty inherent within natural systems. This search renews continually as our models grow more complicated in step with increased computing power. That evolution has been used to impressive effect in the modelling of characters, backgrounds, hair, and associated special effects in movies and video games. The increases in computing power allow for greater use of ‘polygons’ — the underlying structure of graphics — onto which textures & colour are mapped; at current levels, the effect is one of near-perfect realism. Fine examples of these graphics can be seen in the works of Pixar, games by id Software (creators of Doom), as well as most modern movie special effects, where ubiquitous usage has become possible thanks to nearly seamless presentation.

One of the most popular visualizers comes from iTunes — where variable, reactive shapes & light-objects react to the music played. These, though, are not printable / capturable via iTunes itself & do not entirely “react”, as there is movement & creation even without sound.

The works of Jared Tarbell at Levitated.net provide much inspiration (sadly found after this project’s close), as does Benoit Mandelbrot, who created / discovered fractals through the underlying mathematics which generate these intricate forms. Much of electronic music has utilized visualizations as well, although often ‘cold’ in their affect, as seen in this example: Visualizing Sound – Audio Responsive Generative Visuals | Made with Processing. Videos & recent performances by DJ Amon Tobin use projection, mapping & representative graphics throughout. (vimeo.com/24502224)

Although impressive & engaging, this ‘coldness’ is where I believe a gap exists, which allows for new visualization opportunities — in particular, outputting these visuals as aesthetic-conceptual objects — creating a tangible form of the ethereal through the virtual.

Another context in which this project can be viewed is through the phenomenon of synesthesia, where one (or more) senses are activated by another, unrelated sense (i.e. “seeing sound” or attributing colours to letter-forms). This sensorial confusion can be used as a conceptual impetus, providing unexpected translation-spaces ripe for exploration. The use of sound to create visuals has been explored through extensive variations, where visualizations may provide new moods & aesthetics & create space for critical exploration.

I stumbled upon an amazing project called “Digiti Sonus – Advanced Interactive Fingerprint Sonification Using Visual Feature Analysis” by Yoon Chung Han — which, to some extent, approaches the investigation of unique physical structures in relation to sound from the opposite direction. This relates to the possibility of creating visual iterations that can then be read back & played in a loop of create, play, sense, record, play, repeat. Although no doubt extremely complicated, the possibility of adding in ‘random’, or at least constrained-random, elements could, in theory, create new evolving soundscapes & visuals during each iteration.

 

/ / /

 

Links & Research

Code for the Perlin based visuals: openprocessing.org/sketch/10475

Sound input with Minim: code.compartmental.net/tools/minim/quickstart/

_

Further info on Perlin Noise: freespace.virgin.net/hugo.elias/models/m_perlin.htm

Benoit Mandelbrot: Fractals & the Art of Roughness at TED: ted.com/talks/benoit_mandelbrot_fractals_the_art_of_roughness.html

Perlin, Ken. Noise Machine. noisemachine.com/talk1

The American Synesthesia Association. Retrieved from www.synesthesia.info

_

Carpenter, Siri. (2001, March). Everyday Fantasia: The World of Synesthesia. The APA Monitor, Vol 32, No. 3

Gleick, James. (1987) Chaos: Making a New Science. New York, NY: Penguin Books.

Turok, Neil. (2012) The Universe Within: From Quantum to Cosmos. Toronto, ON: House of Anansi Press.

Zucker, Matt. The Perlin Noise Math FAQ. 2001. Retrieved from http://webstaff.itn.liu.se/~stegu/TNM022-2005/perlinnoiselinks/perlin-noise-math-faq.html

 

Project 2_Interactive Fireworks

Project Description

Interactive Firework is designed around the idea of expressing a traditional scene in a more expressive and interactive way. Even though it imitates a traditional firework, I think it should not only represent fireworks but also inspire the imagination of other objects, depending on each audience member’s own perceptions and experience. In order to interact better with audiences, Interactive Firework adopts colour detection in its design: the colour of the firework changes based on the colours of the audience’s clothes, which creates two-way communication with audiences and conveys the message that Your Colours Inspire Colours.

Github link

https://github.com/tulipsun/interactive_firework/tree/master

Sketches & Design files

interactive firework


Screen Shots


 

Video

https://vimeo.com/77762352


Context

The idea for this project originally came from a wedding proposal with fireworks. However, fireworks are no longer allowed in Canada for personal use. So I started thinking about reproducing that romantic scene using screen-based interactive design: colour detection senses the colour of the dress a person is wearing and generates particles with the same colours.

The project uses colour detection to create an interaction with audiences through the colour of their dresses, since the colour of a person’s dress can serve as an important medium for self-expression. A variety of psychological studies show a close relationship between colour preferences and personality. The book Showing Your True Colours, written by Mary Miscisin in 2001, talks about how colour preference reflects personality and behaviour; it also suggests how people from each colour group can maximize their advantages and cooperate effectively with others. Therefore, I believe colour detection can effectively improve communication with audiences in this project.

In addition, simulation is an important capability that Processing offers, and a specific section of the examples shows different simulation works built with Processing. One example, Multiple Particle Simulation, demonstrates changes and movements by adjusting the acceleration of particles. Anders Fisher has also developed a firework simulation that imitates a real firework. However, I hesitated to adopt a simulation that could only remind people of fireworks: I believe a more abstract expression of an object brings more interaction, since people usually have different perceptions of an abstract object, which inspires imagination. A rough sketch of the colour-detection idea follows.
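As a rough sketch of that idea, the following reads the webcam pixel at the centre of the frame (where the viewer would stand) and tints a simple particle burst with it. This is my reconstruction for illustration; the sampling position and particle behaviour are assumptions, not the project’s code.

import processing.video.*;

Capture cam;
color detected = color(255);

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  noStroke();
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();
  if (cam.pixels.length > 0) {
    // sample the colour at the centre of the frame, where the viewer stands
    detected = cam.pixels[(cam.height / 2) * cam.width + cam.width / 2];
  }
  background(0);
  // a crude burst of particles tinted with the detected clothing colour
  fill(detected, 180);
  for (int i = 0; i < 40; i++) {
    float a = random(TWO_PI);
    float r = random(100);
    ellipse(width / 2 + cos(a) * r, height / 2 + sin(a) * r, 4, 4);
  }
}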

References:

http://www.positivelymary.com/Shopping/Showing_Our_True_Colors_Book_PeekInside.pdf

http://www.openprocessing.org/sketch/17259

Jia Le’s Colour Personality: http://www.baike.com/wiki/性格色彩学

Facial Capture system


 

Inspiration

Facial capture systems have been used in the film and game industries for over 5 years. For instance, in Avatar, James Cameron used this technology to create incredible aliens that can hardly be identified as computer generated. Further, many games, such as the recent “The Last of Us”, used facial mocap technology to create realistic human characters.

However, the facial capture technology used in industry, whether marker-based or markerless, shares the same disadvantage: it is too expensive. Thus, I wondered if there is a way to bring this “expensive technology” into everyday life, especially for us “poor” developers. So here is my idea: use the webcam built into a laptop as a video input, then use colour-tracking code in Processing to track different markers on a person’s face. Finally, collect the coordinate data from the markers to rebuild my face with its different expressions.


 

As you can see, there are different coloured markers on my face. Further, in order to have more colours available, I split the screen into two parts, top and bottom, so I can track the same colour in different parts of the screen. Eventually, I was able to track 11 markers on my face, enough to rebuild my face as well as its expressions.

The first step is to pick a colour by pressing 1, 2, 3, 4, etc.; each number represents an individual colour on the screen. Once the colours are picked, the next step is to press the “B” key to draw lines between the markers, which gives a rough shape of my face. Finally, pressing “V” generates a huge number of random points that float around each marker, and a function called “drawAwesome”, which is fantastic, draws lines between the small floating dots, creating a more precise shape of my face.

Video Link: https://vimeo.com/77754776

The Code

I wrote the code in two separate files. One is the main part, including draw, keyPressed, and the video capture. The other part, which is essential, is the coordinate-calculation function. What this part does is scan every pixel of the current frame, compare each pixel with the picked colour, and then calculate the centre of those colours in the current frame. I used a for loop that scans the screen once for each assigned colour.

Within the second file there are a few draw functions; I used basic primitives such as ellipses and lines to connect the different markers. Obviously, a for loop is very useful in this scenario. A simplified, single-marker version of the tracking idea is sketched below.
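This is my sketch of that scan-and-average technique, not the code in the repo; the distance threshold and mouse-based colour picking are assumptions for illustration.

import processing.video.*;

Capture cam;
color trackColor = color(255, 0, 0); // track red until a colour is picked
float threshold = 40;                // how close a pixel must be to count as a match

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  cam.loadPixels();
  float sumX = 0, sumY = 0;
  int count = 0;
  // scan every pixel and accumulate the positions of matching ones
  for (int y = 0; y < cam.height; y++) {
    for (int x = 0; x < cam.width; x++) {
      color c = cam.pixels[y * cam.width + x];
      float d = dist(red(c), green(c), blue(c),
                     red(trackColor), green(trackColor), blue(trackColor));
      if (d < threshold) {
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  if (count > 0) {
    // the centroid of all matching pixels marks the marker's position
    ellipse(sumX / count, sumY / count, 16, 16);
  }
}

void mousePressed() {
  // pick the colour under the mouse as the new tracking target
  trackColor = cam.pixels[mouseY * cam.width + mouseX];
}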

Code link: https://github.com/BrandyYang/project2_facial_capture/tree/master

Next Step

Currently, I have to use a different colour for each marker, which is very inconvenient: because the human face is dominated by colours ranging from yellow to purple, that leaves me only green, blue, and red to work with. So the next step is to try to use one colour for all of the markers. By doing this, not only can I have more markers, but the tracking results will also be more accurate. Further, exporting the marker data would be another step, so that I can apply the data to a 3D model in other software such as Maya, 3ds Max or C4D. There is still a long way to go!

References

Color Tracking: http://jamesalliban.wordpress.com/2008/11/16/colour-detection-in-processing/

Draw Lines Between Nodes: http://www.openprocessing.org/sketch/93078