FLUID RESONANCE: AN AUDIO+VISUAL DIGITAL WATER EXPERIENCE

What if water could be an instrument?

Fluid Resonance DJ Gary

If the vibrations from a single drop of water in the ocean could be suspended in momentary isolation, what infinite arrays of symphonic arrangements could we hear? A constant flow of signals in the tide of the universe, codified in sounds, waiting to be experienced in time. There are certain moments when we are moved by sound, a direct emotional connection to the physical movement of orchestrated disturbances in the air; an unseen but pervasive, invasive and volumetric presence. Characteristics such as these are what concern me now, and are the focus of this project. The relational interactions that form in space, codifying the event and its parameters, are subtly and violently manoeuvred by invisible actions – a subtext underlying the visible surface – just as sound changes its timbre with the material surface it reverberates against, yet continues to seep into the substrata of matter.

Music makes time present. Or, at least, it makes one aware of time, even if one loses track of it. Take, for example, Leif Inge’s 9 Beet Stretch, a reimagined version of Beethoven’s 9th Symphony stretched into a 24-hour journey (a sample can be heard here on RadioLab at 4:23, or listen to the full streaming version here). The remastered continuous audio notation materializes the presence of sound and the distillation of a captured moment, giving one a chance to reflect on the mortal moments that stream by in subconscious sub-fluences every minute, without awareness. In this example, we are transported into the life of a musical movement in its own existence; in contrast, another way of thinking about the relation of time and sound comes in the form of crickets. An analog field recording of a soundscape of crickets was slowed down, edited to a speed equivalent to a human lifespan scaled up from a cricket’s. What emerges is a harmonic layering of triadic chords playing in syncopated rhythm, like the ebb and flow of a call and response. (Note: the field recording has since been reported to have been accompanied by opera singer Bonnie Jo Hunt, who recalled: “…And they sound exactly like a well-trained church choir to me. And not only that, but it sounded to me like they were singing in the eight-tone scale. And so what–they started low, and then there was something like I would call, in musical terms, an interlude; and then another chorus part; and then an interval and another chorus. They kept going higher and higher.” (ScienceBlogs 2013).) When we slow down, speed up, alter, change, intersect intangible concepts into human-scaled pieces to hold, we have an opportunity to reveal insights, from our point of view, into a dimension outside our horizon – one we never would have encountered in its habitual form. It may not grant us full access to aspirations of knowing or truth, but the discovery of interrelated phenomena – whether it be time and music, water and sound, or natural and computational glitches causing anomalies – gives us a better understanding of the effects and consequences of the tools used to define a language, which, in turn, defines our state of being and future intent.

And what of the water project?

The original intent of combining Processing and music began with an introduction to cymatics – a term describing experiments in which a substance forms patterned responses to various sine-wave tones.

And here’s a polished music video of many such experiments compiled into a performance:

 

Water revealed the vibratory patterns of tones in consistent yet exciting designs, which began the exploration into sound processing. Minim became the library that handled playback of pre-recorded instrumentation (or any .wav / .mp3 file); however, it was not the first library to be experimented with. Beads, a sound-synthesis library with the capacity to generate sine-wave tones, provided an introduction to visualizing a simple waveform.

beads interaction sine wave

Close-up of Frequency modulating on-screen.

The position of the on-screen pointer, moved by the mouse, changed the wave’s modulation and frequency in relation to its vertical and horizontal movement, respectively. The input of movement and proximity changed the pitch of the output tone.
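As a rough illustration of that interaction – not the original sketch, only a minimal reconstruction assuming the standard Beads building blocks (AudioContext, WavePlayer, Glide, Gain) – the mouse-to-tone mapping can be written as:

// Minimal Beads sketch: mouse position controls the frequency and loudness of a sine tone.
import beads.*;

AudioContext ac;
Glide freqGlide;   // smooths frequency changes
Glide gainGlide;   // smooths loudness changes

void setup() {
  size(640, 480);
  ac = new AudioContext();
  freqGlide = new Glide(ac, 440, 50);                      // start at 440 Hz, 50 ms glide time
  gainGlide = new Glide(ac, 0.2, 50);
  WavePlayer sine = new WavePlayer(ac, freqGlide, Buffer.SINE);
  Gain gain = new Gain(ac, 1, gainGlide);
  gain.addInput(sine);
  ac.out.addInput(gain);
  ac.start();
}

void draw() {
  background(0);
  freqGlide.setValue(map(mouseX, 0, width, 100, 1000));    // horizontal position -> pitch
  gainGlide.setValue(map(mouseY, 0, height, 0.4, 0.0));    // vertical position -> loudness
}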

Another variation of sound visualization from Beads was the Granulation example. This exercise took a sample of music, then ‘flipped’ the composition, pushing and pulling the tones, stretching them into digitized gradations of stepped sound. Imagine a record player turning a 45 rpm disc with minute spacers every 1/16 of a second, turning at 33 rpm (but at the same pitch) – the digital composition of finite bits reveals itself, yet links the tones in a digitized continuum. This would later become very influential in the final performance of the sound generated by water.
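A rough sketch of that kind of granulation, loosely following the Beads GranularSamplePlayer example (the file name and parameter values here are placeholders rather than the ones used in the exercise):

// Rough granulation sketch: small, overlapping grains smear a sample into a stepped continuum.
import beads.*;

AudioContext ac;

void setup() {
  size(640, 480);
  ac = new AudioContext();
  Sample sample = SampleManager.sample(dataPath("loop.wav"));     // placeholder audio file
  GranularSamplePlayer gsp = new GranularSamplePlayer(ac, sample);
  gsp.setGrainSize(new Static(ac, 60f));        // grain length in ms
  gsp.setGrainInterval(new Static(ac, 40f));    // time between grain starts in ms
  gsp.setRandomness(new Static(ac, 0.1f));      // slight jitter between grains
  gsp.setRate(new Static(ac, 0.25f));           // stretch playback to quarter speed without dropping pitch
  Gain g = new Gain(ac, 2, new Static(ac, 0.3f));
  g.addInput(gsp);
  ac.out.addInput(g);
  ac.start();
}

void draw() {
  background(0);
}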

An inquiry into the physical properties of cymatics proved to be challenging. Initial investigations were conducted with a non-Newtonian fluid (water and cornstarch).

Gary blows a speaker.

It was soon discovered that commercial-grade hardware and equipment would be needed to achieve an effective result, and time did not permit further exploration. (Gary Zheng continued to explore cymatics to great effect based on these initial experiments.)

A second option was to simulate cymatics through visual processing, leading to some play with Resolume, software used for sound visualization and popular among DJs for augmenting their sets with responsive graphic media.

Resolume

Initially, the layered track interface and set-bpm files made this an easy-to-use software medium. Pre-made .mov or .wav files could be loaded to simulate interaction with beat-heavy tracks. For entertainment value, Resolume has much to offer and is easily accessible. But spontaneity is removed from the equation of output, which depends on the user’s technical knowledge of the software and the constraints of the program.

motion vs colour detection

motion of water detection

This method of investigation revealed interesting physical responses to sound and, in turn, inverted the cymatics experiments – from sound causing form, to form resulting in sound as feedback. The displacement of the water’s surface could create an effect through captured video; water thus became the focus as an instrument of motion represented by auditory output, no longer an after-effect of sound.

A deductive experiment compared two forms of video motion detection, based on exercises conducted earlier in group lessons (code originating from Daniel Shiffman’s processing.video samples, www.learningprocessing.com). First, there was colour detection. This version of motion detection would have dictated the physical properties of tracked objects, or required additive coloured substances in the water. Adding elements complicates the design process and alters the baseline state of the water interface, so this was not a favourable option.

Motion Censor: motion detection senses obscenities.

Motion detection test using video processing. Pixels in the active frame are compared against a chosen colour, and those within a certain threshold are tracked; in this case, the off-colour gestures turned the camera from a motion sensor into a motion censor.
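For reference, the colour-tracking approach works roughly as follows – a minimal sketch in the spirit of Shiffman’s example, with an arbitrary target colour and threshold:

// Rough colour-tracking sketch: find the pixel closest to a target colour and mark it.
import processing.video.*;

Capture video;
color trackColour = color(255, 0, 0);   // arbitrary target: red

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {
  video.loadPixels();
  image(video, 0, 0);

  float closestDist = 500;   // current best distance in colour space
  int closestX = 0;
  int closestY = 0;

  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      int loc = x + y * video.width;
      color current = video.pixels[loc];
      float d = dist(red(current), green(current), blue(current),
                     red(trackColour), green(trackColour), blue(trackColour));
      if (d < closestDist) {
        closestDist = d;
        closestX = x;
        closestY = y;
      }
    }
  }

  // Mark the best match only if it is reasonably close to the target colour.
  if (closestDist < 50) {
    noFill();
    stroke(255);
    ellipse(closestX, closestY, 24, 24);
  }
}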

Next up was the motion gesture test, a basic differencing visual of motion knocked out in black pixels, with a set threshold calibrated for the scene.

motion difference

early tests of water detection through gesture motion

The gesture test proved to be less discriminating in which pixels it detected; therefore, the existing lighting conditions and material properties would be critical in the final set-up of the performance, especially for a clear, less detectable substance such as water. An early visualization of the water’s surface captured by the video camera indicated that the camera’s sensitivity would be sufficient, and better still in controlled environments.

A third and most important layer of the experimentation was the implementation of the split-screen lesson introduced to us as a group: an application using coloured elements to respond to motion. Coloured items appeared, indicating the detection of movement in a designated zone of the screen.

Split screen divisions

hand detection grid format

Layout of grid zones

At this point, the design of the project became clear. A music interface would be created with water, from which motion would be detected through user interaction (imagine water-drop syringes annotating musical notes in a pool; notes vary on a scale depending on where you release the droplets). The vibration of the water is also augmented by a graphic icon, colour-coded to represent the different tones. Once the user has made the connection between the interface, colour cues, notes and zones, the ability to improvise and to create a melody through patterning becomes intuitive.

As the design of a musical interface called for a variety of designated tones, a grid was mapped out to correspond to a simplified scale. Eight tones would be selected, representing a major key as a starting point for harmonic layering. A small motion threshold filtered out lesser disturbances, keeping the background relatively neutral, while a graphic icon was required to track the gesture: this gave visual feedback to the user, making the interface orientation and navigation of the zones understandable.
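One way to express that grid – a hypothetical helper shown here only for illustration, since the project’s actual code uses the chain of splitLine comparisons reproduced further down – is to collapse a pixel position into one of the eight tone zones:

// Hypothetical helper: collapse an (x, y) position into one of 8 zones
// (4 columns x 2 rows on a 640x480 frame), one zone per tone.
int zoneIndex(int x, int y, int w, int h) {
  int col = constrain(x / (w / 4), 0, 3);   // 0..3 across the width
  int row = (y < h / 2) ? 0 : 1;            // 0 = upper half, 1 = lower half
  return row * 4 + col;                     // 0..7
}

An index like this could then look up the matching sample and GIF colour for the zone.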

gesture motion and tracking with graphic icon

An important aspect of the grid layout and the user interaction was a method of knowing where the user was affecting the interface, as it was a visual representation augmenting the real physical interactions of the water. It was determined that a static image appearing intermittently did not represent the user action of dropping water, so a sequenced animation (GIF) was created in Photoshop.

GIF sequence of 15 frames developed in Photoshop

Eight unique variations (colours) of a 15-frame GIF were created. Then, a GIF library was sourced to introduce the animation into the code. GifAnimation was used to activate the series of images. There were at least a couple of ways to integrate the animation: as a sequence of still images, or as a compiled GIF (the latter was chosen in this instance). For further information, here is a link to start: http://extrapixel.github.io/gif-animation/
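Following the library’s own examples, the two routes look roughly like this (the file name is a placeholder):

// Two ways to use gifAnimation: as a playable Gif object, or as an array of still frames.
import gifAnimation.*;

Gif loopingGif;    // compiled GIF, keeps its own frame timing
PImage[] frames;   // individual frames, advanced manually

void setup() {
  size(640, 480);
  loopingGif = new Gif(this, "DropGifBlue.gif");        // placeholder file in /data
  loopingGif.loop();
  frames = Gif.getPImages(this, "DropGifBlue.gif");
}

void draw() {
  background(0);
  image(loopingGif, 100, 200);                          // option 1: self-playing GIF
  image(frames[frameCount % frames.length], 400, 200);  // option 2: step through frames by hand
}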

In order for the GIF to be successful, it had to follow the pixels in the zone it was assigned to, and it needed to appear in an approximated area where the most changes occurred in the video processing. What transpired was a coloured “droplet” image appearing where the real droplets of water were being played out on the water’s surface. This part of the program code would not have been possible without the consultation of Hart Sturgeon-Reed, who helped to apply the following:

///draw the gifs

if (points>80)
{xPosition=xsum/points;
yPosition=ysum/points;

println(xPosition);
println(yPosition);
}

//Upper left 1
if (xPosition <splitLine && yPosition <height/2)
{
radiusL=map(motionTotL,0,maxMotionL,0,maxRadiusL);
image(loopingGif,xPosition,yPosition,radiusL,radiusL);

…and so on, for each tonal zone.

To recap, the foundation of gesture motion detection was layered with split-screen detection, which was divided into quadrants and then into eighths of the screen (default 640×480). Video processing also enabled tracking of the GIF, implemented with the GifAnimation library. Finally, Minim was used for playback of pre-recorded, royalty-free audio. In this case, the default notes of a guitar were selected as the basis for the sounds – a simple, easily recognizable foundation with the potential to grow in complexity.

A fundamental leap of concept occurred in the playback results. Initially, the pre-recorded single-note tone would play, and a simple identification of a sound would be the result. Minim is capable of playing a complete song if needed, by recalling it at the critical moment. This method can slow down the recall, however, and since the tones for the project were short, the recall required quick access and activation. Another drawback of the Play loop was its one-time cycle: a reset was required thereafter, did not always reset as expected, and the tone often cut short as other tones were activated through the water motion. To counter the stuttering effect, trouble-shooting with the Trigger loop produced interesting results. As the motion of the water continuously recalibrated the video detection when its surface broke from the droplets, the tones were triggered with a constant reset, creating a continuous overlap of sounds, not unlike the earlier experiments with the Beads granulation example. So here we are with a unique sound that is no longer like a guitar, because it retriggers itself in a constant throng of floating, suspended long notes weaving between each other. It is guitar-like, yet it is pure Processing sound, delivering it from simulation to simulacra, activated by the natural element of rippling water.
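Reduced to a minimal sketch (the file name is a placeholder), the difference between the two playback styles looks like this:

// Minimal contrast between Minim's AudioPlayer (play/rewind) and AudioSample (trigger).
import ddf.minim.*;

Minim minim;
AudioPlayer player;   // streams the file; one pass per play(), must be rewound
AudioSample sample;   // holds the file in memory; trigger() restarts it instantly

void setup() {
  size(200, 200);
  minim = new Minim(this);
  player = minim.loadFile("note.mp3");          // placeholder file in /data
  sample = minim.loadSample("note.mp3", 2048);
}

void draw() {
  background(0);
}

void keyPressed() {
  if (key == 'p') {
    player.rewind();   // without this, a finished player stays parked at the end of the file
    player.play();
  }
  if (key == 't') {
    sample.trigger();  // rapid triggers restart and layer the sample, producing the overlapping wash
  }
}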

The second point to note about the visual and auditory feedback was the glitches. In the coding structure, parameters were set to define areas within the screen (each zone was 160×240 pixels) from which an approximation would be determined, in order to represent the point of contact where the most action occurred in each zone with the GIF droplet icon. But as the water surface continued to overlap and ripple into adjacent zones, the icons appeared to blip outside of their originating boundaries, often overlapping one another. This was reinforced by the seeming overlap of tones, when in fact each tone was activated on an individual basis; due to the immeasurably small lapse times between each triggered effect, two tones would sometimes sound as if they were playing simultaneously, when they were actually bouncing back and forth between sounds. The flowing state of water in its contained form would activate multiple zones at once; I can only surmise that the Processing sequence arbitrarily determined which zone and tonal value to play in order of activation, yet was constantly being re-evaluated as the water continuously produced new results, changing the sounds by the millisecond.

The set-up: a podium, web camera, projector, speakers, source light, laptop, water dish and two water droppers. Simplicity was key, considering the varying levels of sensory input and stimulation; even so, the learning curve was quick thanks to the immediacy and responsiveness of the performance.

(Two other experiments to note: FullScreen – an application whereby the Processing viewing window stretches to the native size of the computer (laptop) screen – and Frames – the calibrating tool indicating how often the camera input refreshes data – did not synchronize, due to the limitations of the external web camera used. The 640×480 resolution was not preferable, but did serve its purpose in speed and responsiveness.)

Prototype Performance

From the outset, the physical interactions of the environment and the design (reflection of light on the water surface, the container vessel, the camera positioning…) were discussed in detail, yet the focus remained on the programming concept. For a designer of physical space, reconciling the programming content with the hardware and its sensitivity to environmental conditions was a negotiation process, requiring constant testing throughout the stages of development. Such practice-based research produces unexpected results and informs the process through reflective and iterative methods. The most challenging aspect deals with the interaction of elements in their most basic state. The approach to this project was to respect the properties of water, and to work with water as the central vehicle of the creative concept. This now includes the unpredictability of its changing yet constant state.

Phase II

Future stages of the project entail expanding the range of tonal values / octaves in the instrument, which could include secondary mechanisms to “flip” to another octave or another output of sound. Recorded feedback of either visual or auditory information could become an additional layer to the performance. A designed vessel for the water interface is to be reviewed.

Other considerations:

Spatial and cognitive approaches to virtual and digital stimuli in environments have the potential to be accessible touchpoints of communication, whether it be for healthcare, or as community space.

The initial mapping of the split screen into zones carries into another project in progress, and informs the development of a physical space responding with feedback based on distance, motion and time on a much larger scale.

 

 

Many thanks to Gary Zheng, Stephen Teifenbach Keller, and Hart Sturgeon-Reed for their help and support.

 

// Jay Irizawa Fluid Resonance: Digital Water
// base code started with Learning Processing by Daniel Shiffman
// http://www.learningprocessing.com
// Example 16-13: Simple motion detection
//thanks to Hart Sturgeon-Reed for the graphic icon detection

import gifAnimation.*;

PImage[] animation;
Gif loopingGif;
Gif loopingGif1;
Gif loopingGif2;
Gif loopingGif3;
Gif loopingGif4;
Gif loopingGif5;
Gif loopingGif6;
Gif loopingGif7;
GifMaker gifExport;

//minim sound library

import ddf.minim.spi.*;
import ddf.minim.signals.*;
import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.ugens.*;
import ddf.minim.effects.*;

import processing.video.*;
// Variable for capture device
Capture video;
// Previous Frame
PImage prevFrame;
// How different must a pixel be to be a “motion” pixel
float threshold = 50;

float motionTotL;
float motionTotL1;
float motionTotL2;
float motionTotL3;
float motionTotR;
float motionTotR1;
float motionTotR2;
float motionTotR3;

float maxMotionL = 3000;
float maxRadiusL = 60;
float radiusL;

float maxMotionL1 = 3000;
float maxRadiusL1 = 60;
float radiusL1;

float maxMotionL2 = 3000;
float maxRadiusL2 = 60;
float radiusL2;

float maxMotionL3 = 3000;
float maxRadiusL3 = 60;
float radiusL3;

float maxMotionR = 3000;
float maxRadiusR = 60;
float radiusR;

float maxMotionR1 = 3000;
float maxRadiusR1 = 60;
float radiusR1;

float maxMotionR2 = 3000;
float maxRadiusR2 = 60;
float radiusR2;

float maxMotionR3 = 3000;
float maxRadiusR3 = 60;
float radiusR3;

float splitLine = 160;
float splitLine1 = 320;
float splitLine2 = 480;
float splitLine3 = 0;

int xsum = 0;
int ysum = 0;
int points = 0;
int xPosition = 0;
int yPosition = 0;

//Minim players accessing soundfile
Minim minim;
AudioSample player1;
AudioSample player2;
AudioSample player3;
AudioSample player4;
AudioSample player5;
AudioSample player6;
AudioSample player7;
AudioSample player8;

void setup() {
size(640, 480);
video = new Capture(this, width, height);
video.start();
// Create an empty image the same size as the video
prevFrame = createImage(video.width, video.height, RGB);

//Gif
loopingGif = new Gif(this, "DropGifditherwhite.gif");
loopingGif.loop();

loopingGif1 = new Gif(this, "DropGifBlue.gif");
loopingGif1.loop();

loopingGif2 = new Gif(this, "DropGifGreen.gif");
loopingGif2.loop();

loopingGif3 = new Gif(this, "DropGifYellow.gif");
loopingGif3.loop();

loopingGif4 = new Gif(this, "DropGifRed.gif");
loopingGif4.loop();

loopingGif5 = new Gif(this, "DropGifBlueDrk.gif");
loopingGif5.loop();

loopingGif6 = new Gif(this, "DropGifOrange.gif");
loopingGif6.loop();

loopingGif7 = new Gif(this, "DropGifPurple.gif");
loopingGif7.loop();
minim = new Minim(this);

// load a file, give the AudioPlayer buffers that are 1024 samples long
// player = minim.loadFile("found.wav");

// load each sample, give the AudioSample buffers that are 2048 samples long
player1 = minim.loadSample("1th_String_E_vbr.mp3", 2048);
player2 = minim.loadSample("2th_String_B_vbr.mp3", 2048);
player3 = minim.loadSample("3th_String_G_vbr.mp3", 2048);
player4 = minim.loadSample("4th_String_D_vbr.mp3", 2048);
player5 = minim.loadSample("5th_String_A_vbr.mp3", 2048);
player6 = minim.loadSample("6th_String_E_vbr.mp3", 2048);
player7 = minim.loadSample("C_vbr.mp3", 2048);
player8 = minim.loadSample("D_vbr.mp3", 2048);
}

void captureEvent(Capture video) {
// Save previous frame for motion detection!!
prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height); // Before we read the new frame, we always save the previous frame for comparison!
prevFrame.updatePixels();
// Read the new image from the camera
video.read();
}

void draw() {

loadPixels();
video.loadPixels();
prevFrame.loadPixels();

//reset motion amounts
motionTotL = 0;
motionTotL1 = 0;
motionTotL2 = 0;
motionTotL3 = 0;
motionTotR = 0;
motionTotR1 = 0;
motionTotR2 = 0;
motionTotR3 = 0;
xsum = 0;
ysum = 0;
points = 0;

// Begin loop to walk through every pixel
for (int x = 0; x < video.width; x ++ ) {
for (int y = 0; y < video.height; y ++ ) {

int loc = x + y*video.width; // Step 1, what is the 1D pixel location
color current = video.pixels[loc]; // Step 2, what is the current color
color previous = prevFrame.pixels[loc]; // Step 3, what is the previous color

// Step 4, compare colors (previous vs. current)
float r1 = red(current);
float g1 = green(current);
float b1 = blue(current);
float r2 = red(previous);
float g2 = green(previous);
float b2 = blue(previous);
float diff = dist(r1, g1, b1, r2, g2, b2);

// Step 5, How different are the colors?
// If the color at that pixel has changed, then there is motion at that pixel.
if (diff > threshold) {
// If motion at this pixel, tint it blue
pixels[loc] = color(0,50,150);

xsum+=x; //holder variable
ysum+=y; // holder variable
points++; //how many points have changed / increase since the last frame

//upper left 1
if(x<splitLine && y<=height/2)
{
motionTotL++;
}
//lower left 1
else if(x<splitLine && y>height/2)
{
motionTotL1++;
}
//upper left 2
else if(x>splitLine && x<splitLine1 && y<height/2)
{
motionTotL2++;
}
//lower left 2
else if(x>splitLine && x<splitLine1 && y>height/2)
{
motionTotL3++;
}
//uppermid right 1
else if(x>splitLine1 && x<splitLine2 && y<=height/2)
{
motionTotR++;
}
//lowermid right 1
else if(x>splitLine1 && x<splitLine2 && y>height/2)
{
motionTotR1++;
}
//upper right 2
else if(x>splitLine2 && y<height/2)
{
motionTotR2++;
}
//lower right 2
else if(x>splitLine2 && y>height/2)
{
motionTotR3++;
}
}

else {
// If not, display black
pixels[loc] = color(0);
}
}
}
updatePixels();

//stroke(255);
//line(splitLine,0,splitLine,height);
//line(splitLine1,0,splitLine1,height);
//line(splitLine2,0,splitLine2,height);
//line(splitLine3,240,width, 240);

///draw the gifs

if (points>80)
{xPosition=xsum/points;
yPosition=ysum/points;

println(xPosition);
println(yPosition);
}

//Upper left 1
if (xPosition <splitLine && yPosition <height/2)
{
radiusL=map(motionTotL,0,maxMotionL,0,maxRadiusL);
image(loopingGif,xPosition,yPosition,radiusL,radiusL);
// E string
//player1.rewind();
//player1.play();
player1.trigger();
}
//LEFT
//Lower left 1
else if (xPosition <splitLine && yPosition >height/2)
{
radiusL1=map(motionTotL1,0,maxMotionL1,0,maxRadiusL1);
image(loopingGif1,xPosition,yPosition,radiusL1,radiusL1);
//player5.rewind();
// player5.play();
player5.trigger();
}
// Upper Left 2
else if (xPosition >splitLine && xPosition <splitLine1 && yPosition <height/2)
{

radiusL2=map(motionTotL2,0,maxMotionL2,0,maxRadiusL2);
image(loopingGif2,xPosition,yPosition,radiusL2,radiusL2);
//player2.rewind();
//player2.play();
player2.trigger();
}
//Lower Left 2
else if (xPosition >splitLine && xPosition <splitLine1 && yPosition >height/2)
{

radiusL3=map(motionTotL3,0,maxMotionL3,0,maxRadiusL3);
image(loopingGif3,xPosition,yPosition,radiusL3,radiusL3);
//player6.rewind();
//player6.play();
player6.trigger();
}

//RIGHT
//Uppermid right 1
else if (xPosition >splitLine1 && xPosition <splitLine2 && yPosition <height/2)
{

radiusR=map(motionTotR,0,maxMotionR,0,maxRadiusR);
image(loopingGif4,xPosition,yPosition,radiusR,radiusR);
//player3.rewind();
//player3.play();
player3.trigger();
}
//Uppermid right 2
else if (xPosition >splitLine2 && yPosition <height/2)
{

radiusR2=map(motionTotR2,0,maxMotionR2,0,maxRadiusR2);
image(loopingGif5,xPosition,yPosition,radiusR2,radiusR2);
//player4.rewind();
//player4.play();
player4.trigger();
}
//Lowermid right 1
else if (xPosition >splitLine1 && xPosition <splitLine2 && yPosition >height/2)
{

radiusR1=map(motionTotR1,0,maxMotionR1,0,maxRadiusR1);
image(loopingGif6,xPosition,yPosition,radiusR1,radiusR1);
//player7.rewind();
//player7.play();
player7.trigger();
}
//Lower right 2
else if (xPosition >splitLine2 && yPosition >height/2)
{

radiusR3=map(motionTotR3,0,maxMotionR3,0,maxRadiusR3);
image(loopingGif7,xPosition,yPosition,radiusR3,radiusR3);
//player8.rewind();
//player8.play();
player8.trigger();
}
println("Motion L: "+motionTotL+" Motion R: "+motionTotR);

}

00001

 

 

hell_01_world:

An investigation into the design process of Arduino project 00001.

Challenge: create an object / artifact found in a “haunted spaceship”. H.R. Giger, anyone?

Concept:

Spinning head activates upon approach, spurts out blood as it gains velocity.

Foreseeable challenges:

  1. blood – create a barrier.
  2. Element of surprise – create a mirror effect, lighting up from behind the glass to reveal the spinning head.
  3. Casing – design product housing for sensors and breadboard / computer.

Sketch of concept

Object of inspiration

 

Method of process:

  • Break down the elements to trouble-shoot input ranges. This includes the sensors, mapping interface, and testing mechanical parts.
  • Test atomic parts of code to interact with the physical parts.
  • Stitch the components together physically, and build the code by integrating the parts step-by-step.
  • Trouble-shoot mechanics of code and physical output.
  • Design infrastructure and physical interface to house, protect, present and communicate the project.
  • Test in environment.

Step 1: investigating PIR sensor

Looking for a sensor that could detect movement, the PIR (passive infrared) sensor was selected in hopes of a greater reach for detecting movement.

It turned out, however, that the movement it detects covers a wide field and is not selective, versus the IR sensor, which has a select detection range of about 40 degrees in the direction the sensor is facing.

PIR sensor

First investigation into IR sensor

Illustration of sensor mechanics http://www.education.rec.ri.cmu.edu/content/electronics/boe/ir_sensor/1.html

Spectrum of sensor http://www.elecfreaks.com/wiki/index.php?title=PIR_Motion_Sensor_Module:DYP-ME003

Resolution: an IR sensor – a Sharp 2Y0A21, for ranges between 4 cm and 80 cm – was selected. Initially the IR was chosen over the sonar sensor, as the design required the sensor to be hidden from view behind acrylic. This turned out not to be the case, as will be described later.

Upon reflection, the PIR sensor is better suited for HIGH / LOW applications, whereas the IR proximity sensor returns ranges of proximity, conducive to mapping and controlling those ranges to activate the mechanical parts of the project.
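A minimal range-reading test of this kind (the pin and threshold values are placeholders, not the final calibration) is simply an analogRead compared against a threshold:

// Minimal Sharp IR range test: print the raw analog value and flag
// when an object crosses a chosen threshold.
const int irPin = A0;
const int irThreshold = 350;   // raw value roughly corresponding to the desired distance

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(irPin);   // 0..1023; within the sensor's range, closer objects read higher
  Serial.print("IR raw = ");
  Serial.print(raw);
  Serial.println(raw > irThreshold ? "  -> in range" : "");
  delay(100);
}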

Step 2

 

Choice of motor for the spin.

A stepper motor with limited rotational range was not an option, and, once the spinning object of choice was determined, the weight of the object suggested that a more powerful motor was required, beyond the scope of a 5V power source. 12V motors were investigated; however, the design did not allow enough surface contact for the object to be attached. A 30V motor was obtained after viewing samples, with the intent of reducing the power source to control the velocity (after many attempts, the affixed head simply flew off with significant trajectory).

30V mechanical DC motor.

Step 3

Testing coded parts

As stated, the PIR sensor was ill-suited for the project – the feedback was randomized, and no detectable pattern could be discerned.

PIR sensor with analog print values. Randomized.

Once an IR sensor was integrated, patterns of detectable values were determined between 4 cm and 80 cm, with 40 cm being the optimal range of detection desired.

Mapped threshold of detection for the IR sensor.

30-40cm threshold.

An external DC motor with 12V battery power was required. Driving the motor called for a transistor, a TIP120 (Darlington transistor), to control the power in and out of the motor, and a diode is incorporated to absorb the charge that can kick back from the motor after the power is turned off – this information and the diagram below are courtesy of http://bildr.org/2011/03/high-power-control-with-arduino-and-tip120/

Image provided by bildr.org, demonstrating external DC motor w/ transistor and diode.

Test prototype of the DC motor and external power with TIP120 transistor and diode. 2.2K resistor required for the TIP120.

Glen and Stephen help to review the board after initial set-up.

Once the motor was integrated with the IR sensor, testing proved successful – the motor was activated, lights were added, and the IR detected motion at a defined threshold. Lights and motor were activated at varying threshold levels, building a two-step process.

Running prototype of lights, DC motor in action, activated by IR sensor.
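A bench test along these lines – the pin number is an assumption; the motor runs from its own supply through the TIP120, with the flyback diode across the motor terminals – can be as simple as ramping the PWM duty cycle:

// Bench test for a DC motor driven through a TIP120 on a PWM pin.
// The Arduino only switches the transistor; the motor draws from its external supply.
const int motorPin = 9;   // assumed PWM pin wired to the TIP120 base via a resistor

void setup() {
  pinMode(motorPin, OUTPUT);
}

void loop() {
  // Ramp the duty cycle up and back down to check the motor's response.
  for (int speed = 0; speed <= 255; speed += 5) {
    analogWrite(motorPin, speed);
    delay(50);
  }
  for (int speed = 255; speed >= 0; speed -= 5) {
    analogWrite(motorPin, speed);
    delay(50);
  }
}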

Step 4

Building the components:

The critical junction is between the motor and the spinning object. The weight, size and speed of the object, in conjunction with the motor’s power and torque, define the physical parameters of the design. A plastic knob was acquired to be used as a connector to the head, which fit without any allowance for error. This in turn was bonded to the spinning dais of the motor with weld-bond sealant to fill the crevices and surface.

Two plastic knobs were adhered together with trichloroethylene bonding agent. An aluminum male connector was cut and customized for fitment into the shaft of the motor head.

 

Bonding components with different material properties calls for an integral system of design to withstand the rotational forces.

The motor required the construction of a 6” plinth base to house the mechanics, in addition to the spinning head, itself defined as 6”. This led to the construction of a clear acrylic box to house the performance and to protect viewers from any possible projectiles.

Step 5

Eventually a decision was made not to incorporate the projectile blood, as this would be a one-time performance. Another decision eliminated the “two-way mirror” effect, as it became apparent that the transparency of the box signalled the desire to make the mechanics (but not the electronics) visible. Horror is oftentimes hidden to amplify the element of surprise, yet the opposite can be true: sometimes the most terrifying elements of humanity are seen in plain sight, and the machinist aesthetic reveals everything about the desire to replace the once-human form, spinning in a post-human eternal gesture.

Structural composition was determined through AutoCAD simulation for clearances and optimal visibility of the performance. IR components, cabling and base platforms were laid out with actual material thicknesses (3/8” optical acrylic).

AutoCad development of template.

Graphics etched into the reveal created allowances to obscure the lower base form, where the electronic breadboard and cables were collected. Allowances for cables were designed to ensure connectivity and access. As sharing code is part of the culture of the Creative Commons initiative, a decision was made (right or wrong) to etch the code used in the program right into the physical structure. This is the DNA of the interactive program.

Illustrator output file for CNC laser etching and cutting template. Precision to 1/16” allowance.

Graphics created in two-tone dispersion to simulate a gradient transition from translucent to clear.

Base housing for the DC motor – 6” acrylic customized with circular-cut framing.

The IR sensor also had to be integrated into the surface, as, during testing, it was evident that the acrylic would have obscured any detection or reading if the sensor were placed behind it. Therefore, the sensor was left visible at the base of the housing.

IR sensor revealed through the acrylic, flush with the external surface.

Construction of the housing was performed with trichloroethylene solvent.

Additional views of the base and housing for electronics.

Wire (mis)management.

Step 6

Testing the product required calibrating the threshold of the IR sensor within the environment. Different environments changed the sensitivity, but it was easy enough to determine optimal distances. The final product entailed modifying the external power source, dropping the voltage to 6V instead of 12V: the torque of the rotation, in conjunction with the weight of the spinning head object (weighted to simulate a real baby’s head), proved too powerful for the base, which was intentionally not affixed, for ease of access. The application of a potentiometer resulted in a staggered on / off rotation, with the off intervals indicative of the potentiometer’s mapped variable. This can be attributed to the conversion of a digital signal into a pulsed output – the power itself is not modulated, but the interval of the signal is, resulting in an adjusting rotational rhythm. The following video does not demonstrate the pulse / potentiometer intervention, but is more indicative of the reduced voltage fed into the signal.

machineHead

Step 7

Documenting and recording the display developed into a narrative of its own. The original intent of the design was to have the Arduino produce audio amplification of pre-recorded sound effects; however, post-production enabled the inherent audio of the DC motor gears to be amplified, lending its own authentic voice to the soundtrack. Two tracks were developed – one with a music score (video option embedded) and one without. A part of the track is slowed down to 20% to match the video speed, adding a different, intimate dimension of time to the experience. The environmental sound of the fluorescent ballast humming is also amplified to showcase the actual field-recording process, at the beginning and end of the video. Very few transition effects were added, but timing and negative space / black-outs enhanced the true lighting effect reacting to the IR sensor.

Post-production editing – Adobe Premiere CC. Minimal transition effects were used, and sound effects were generated by the gnashing of the DC motor gears and field recording of the fluorescent lights. No added colour treatments were required.

Final thoughts:

Coding into the physical world is extremely satisfying when the connection is made. Digital input with physical output is like a pure synthesis of energy in raw power, refined only by the imagination. Like all things, bugs are inherent to the process, and, like all design, a considerable amount of hacking is required to manufacture the results envisioned. Process is purity, and all things eventually reveal themselves as the only real magic in this world.

Code

Note: a post-evaluation attempt to incorporate a long timer value for the potentiometer – replacing the delay – changed little in the output of the DC motor; however, constant motor signals versus delayed signals transferred variably, at seemingly randomized mapped values. Further investigation is required.
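A sketch of that delay-free idea (an illustration only, not the configuration that was tested) replaces delay(potVal) with a millis()-based interval, so the rest of the loop keeps running between pulses:

// Sketch of the delay-free pulsing idea: the potentiometer sets the interval
// between motor on/off switches, timed with millis() instead of delay().
const int analogInPin = A0;     // IR sensor
const int analogOutMotor = 9;   // TIP120 base via resistor
const int potPin = 5;           // potentiometer
const int threshold = 350;

unsigned long lastToggle = 0;   // time of the last on/off switch
bool motorOn = false;

void setup() {
  pinMode(analogOutMotor, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int sensorValue = analogRead(analogInPin);
  int interval = analogRead(potPin);   // 0..1023, used directly as milliseconds between switches

  if (sensorValue > threshold) {
    // Toggle the motor whenever the pot-defined interval has elapsed, without blocking.
    if (millis() - lastToggle >= (unsigned long) interval) {
      motorOn = !motorOn;
      digitalWrite(analogOutMotor, motorOn ? HIGH : LOW);
      lastToggle = millis();
    }
  } else {
    motorOn = false;
    digitalWrite(analogOutMotor, LOW);
  }
}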

/*test_IRSensr_threshmotor_pot.ino
the following code has been appropriated from 3 variations with a sample
as follows. Hello_01_World activated 10 06 2014, Jay Irizawa

Analog input, analog output, serial output

Reads an analog input pin, maps the result to a range from 0 to 255
and uses the result to set the pulsewidth modulation (PWM) of an output pin.
Also prints the results to the serial monitor.

The circuit:
* potentiometer connected to analog pin 0.
Center pin of the potentiometer goes to the analog pin.
side pins of the potentiometer go to +5V and ground
* LED connected from digital pin 9 to ground

created 29 Dec. 2008
modified 9 Apr 2012
by Tom Igoe

This example code is in the public domain.

*/

// These constants won’t change. They’re used to give names
// to the pins used:
const int analogInPin = A0; // Analog input pin that the IR is attached to
const int analogOutMotor = 9; // Analog output pin that the motor is attached to
const int ledPin1 = 8;
const int ledPin2 = 7;
const int potPin = 5; // potentiometer analog read

int sensorValue = 0; // value read from the IR
int outputValue = 0; // value output to the PWM (analog out)
int threshold = 350; // distance from IR to activate Motor (approx 20″)
int potVal = 0; // pot variable to store the value coming from sensor
int outputSpeed = 0;
void setup() {
// initialize serial communications at 9600 bps:
pinMode(analogOutMotor, OUTPUT);
pinMode(ledPin1, OUTPUT);
pinMode(ledPin2, OUTPUT);
Serial.begin(9600);
}

void loop() {
// read the analog in value:
sensorValue = analogRead(analogInPin);
// map it to the range of the analog out:
outputValue = map(sensorValue, 0, 1023, 0, 255);
// change the analog out value:
analogWrite(analogOutMotor, outputValue);

potVal = analogRead(potPin);
outputSpeed = map(potVal, 0, 1023, 0, 255);
analogWrite(analogOutMotor, outputSpeed);

// potentiometer delay
potVal = analogRead(potPin); // read value from pot analog dial
if (sensorValue > threshold) {
digitalWrite(analogOutMotor, HIGH);
delay(potVal);
digitalWrite(analogOutMotor, LOW);
delay(potVal);
}
else if (sensorValue < threshold) {
digitalWrite(analogOutMotor, LOW);
}
// set up lights
if (sensorValue > 300) {
digitalWrite(ledPin1, HIGH);
}
else if (sensorValue < 300) {
digitalWrite(ledPin1, LOW);
}
if (sensorValue > 300) {
digitalWrite(ledPin2, HIGH);
}
else if (sensorValue < 300) {
digitalWrite(ledPin2, LOW);
}

// print the results to the serial monitor:
Serial.print("sensor = ");
Serial.print(sensorValue);
Serial.print("\t output = ");
Serial.println(outputValue);

// wait 2 milliseconds before the next loop
// for the analog-to-digital converter to settle
// after the last reading:
delay(2);
}

Final Fritz diagram of arduino schematic
