Interactive Digital Tree by Harjot

Hello everyone! It was great to see everybody’s final projects showcased during our last class. Definitely a successful finale for the course.

Approach

As mentioned in my proposal post, I decided to formulate a thesis topic that suits my interests and what I hope to explore in greater detail over a two-year timeframe.

In my thesis work, I plan to explore the presence of self in a world where social networking and data mobilization have gained predominance. I look to employ technology, in the form of pre-existing computer applications and custom-made hardware configurations, to discover how individuals relate to one another and their environment both emotionally and psychologically. Additionally, I am interested in utilizing the human body as a natural interface, one which is already rich with information and sensory modes of communication, in various design projects.

Creating a moment of emotional contact between a user and nature was my specific concern and so I wanted to create natural ways to communicate with a digital object which would draw parallels with the ways in which one interacts physically with nature.

Original Mission Statement

I want to visualize an ethereal fantasy tree on a large projection screen which grows and changes before users as a result of interaction. User behaviour is the key input in my concept and comes in the form of interaction with a custom-made peripheral device–something like a magical sphere. Ideally, the spherical object would be lit up from inside and use colour as well as (fingers crossed) vibration to respond to user input. I am also considering using a webcam or (again, fingers crossed) the Kinect so that I may capture user movement to further influence the reaction behaviours of the tree. I have envisioned this scenario taking place in a dark environment where focus is initially given to the ‘magical sphere’ object, towards which a curious user walks and with which he/she hopefully decides to interact, leading to the rapid birth and growth of a beautifully intricate tree structure that takes influence from user input to guide its progressive growth. My motivation for this concept comes from my understanding that nature (the organic world) benefits the brain in ways that allow individuals to think and feel more clearly. Studies on children with ADD have found that increased exposure to nature–e.g. visiting a park before going to school, or taking occasional recess breaks–helped to calm the mind and allowed students to perform better in school than usual; a similar trend was observed in control groups. This is also the thinking behind the belief that patients recover more quickly when they can see trees from their windows. [Ref: “The Cognitive Benefits of Interacting With Nature” by Marc G. Berman, John Jonides, and Stephen Kaplan; “Recess in Elementary School: What Does the Research Say?” by Olga S. Jarrett; “Recess—It’s Indispensable!” by Olga S. Jarrett and Sandra Waite-Stupiansky]

Context & Concept

The concept: the portrayal of a tree which alters its behaviour in response to user engagement with an external object. I felt that creating a scene and placing the user in a visual context from the start was important in creating a strong narrative. This was accomplished through background imagery depicting a dark forest with almost dead-looking tree trunks. The addition of music, via the Minim audio library, helped heighten the sense of presence within that context; a somber tune playing in the background for the entirety of the sketch conveys the isolation and out-of-touch associations made with the limited human-to-nature interaction that exists today as a result of rapid urban development. Touch and speech were the modes of communicating with nature that I felt it necessary to explore.

The Arduino Device

Now this is where I had some difficulties! Unfortunately, my XBee wireless kit did not arrive in time, so enclosing my peripheral hardware was really not a viable option and I was not able to come up with any last-minute fixes (excuses, I know…I should have been better prepared!). The morning of the presentation I decided that wireless would still put my device over the edge, so I went over to Creatron and made a not-so-wise investment in a BlueSMiRF Silver Bluetooth module (BTW: I had a very negative experience with the sales clerk at Creatron…he was extremely rude and we had a not-so-nice exchange of words). Although I managed to get the Bluetooth module up and running (after reading MANY how-to guides online), the thing was very moody and developed a life of its own…which is why I decided not to chance my presentation on the Bluetooth capability, being unsure whether I would be able to showcase my project successfully. In the end, the ‘wireless’ device consisted of an Arduino and breadboard, both mounted on a compact tray, along with an electret microphone, a push switch, and a force sensor. The three interactive devices were all connected to the Arduino as analog inputs communicating via serial. The BlueSMiRF was connected to the RX and TX pins on the Arduino (reversed) and to GND and 3.3V. I also invested in a 9V battery adapter for a complete wireless experience.

Serial Communication

The Visual Color Mixer example on the Arduino website helped me send data from multiple sensors through the Arduino to Processing without the values interfering when engaging with multiple input sensors simultaneously. The trick is basically to capture the string of data coming in over the serial port and separate the value for each input sensor with commas, which designates the parameters for value input in Processing.
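The split-and-parse step can be sketched in plain Java (Processing is Java-based); the class and method names here are illustrative, not from the actual sketch:

```java
// Sketch of the comma-split parsing used to demux three sensor values
// from one serial line. parseLine is a hypothetical helper; the real
// sketch does this inside Processing's serialEvent().
public class SerialParse {
    // Split "mic,pot,force\n" into three floats; return null if the line is incomplete.
    static float[] parseLine(String inString) {
        if (inString == null) return null;
        String[] parts = inString.trim().split(",");
        if (parts.length < 3) return null; // partial line: ignore it
        float[] vals = new float[3];
        for (int i = 0; i < 3; i++) {
            vals[i] = Float.parseFloat(parts[i].trim());
        }
        return vals;
    }

    public static void main(String[] args) {
        float[] v = parseLine("512,0,300\n");
        System.out.println(v[0] + " " + v[1] + " " + v[2]); // 512.0 0.0 300.0
    }
}
```

Rejecting lines with fewer than three fields is what protects against the torn, partially-received strings that show up when the sender and reader are not in sync.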

Function & Interaction

  • Push Switch = Grows the tree from a simple root into a more complex structure, with a unique random generation each time. The tree stops growing on switch release.
  • Mic Sensor = Conditionals placed on the mic made it sensitive to close-contact blowing, which caused the tree to visibly move and contort rapidly with a colour change.
  • Force Sensor = Touching the force sensor resulted in what was meant to be a physical exchange between user and tree, shown by lighting up the tree in an orange and red colour scheme…depicting warmth…life energy. The sensor was conditioned to accept moderately forceful touches.
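Collapsed into a single state function, the three interactions above look roughly like this; the thresholds mirror the sketch (a quiet mic band of 444–459, force readings up to 200, pot reading 0 when the switch is held), but the state names and the single-return structure are my own simplification:

```java
// The three interaction conditionals collapsed into one state function.
// Thresholds mirror the sketch; state names are illustrative labels.
public class TreeState {
    static String react(float mic, float pot, float force) {
        if (pot == 0) return "GROW";                        // switch held: grow the tree
        if (force > 0 && force <= 200) return "WARM_GLOW";  // moderate touch: orange/red scheme
        if (mic <= 444 || mic >= 459) return "WIND";        // blowing: contort with colour change
        return "IDLE";                                      // quiet, untouched tree
    }

    public static void main(String[] args) {
        System.out.println(react(450, 1, 0)); // quiet mic, no touch -> IDLE
        System.out.println(react(450, 0, 0)); // switch pressed -> GROW
    }
}
```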

Future Iteration 

I would definitely work on creating a unique and appropriate physical interface to further contribute to immersion and presence with regard to the visual piece and all that it conveys. I was planning to create a small tree trunk which would house the Arduino device and have sensors incorporated into its body; Play-Doh seems like an interesting material to experiment with in creating such an object. The use of video tracking, which I was also relying on heavily in my initial concept formulation, would be great. I was hoping to have the user guide the tree’s X position based on their left/right position in front of the screen (almost as if to provide the user with fictional shade), with their distance from the screen controlling the location of the tree on the Z axis.

DEMO

Arduino Code

//DIGF 6B02: Creation & Computation
//Final Project, November 2011
//by Harjot Bal
//LIVING TREE
//Arduino Code
//A combination of Processing and Arduino were used to allow users to interact physically with a digital tree.

//Declare serial input variables.
const int mic = A0;
const int pot = A2;
const int force = A5;

void setup() {

//Set baud rate for capture.
Serial.begin(115200);

}

void loop() {

  //Read the analog values and print them, separating each with a comma.
  Serial.print(analogRead(mic));
  Serial.print(",");
  Serial.print(analogRead(pot));
  Serial.print(",");
  Serial.println(analogRead(force));

}

Processing Code

//DIGF 6B02: Creation & Computation
//Final Project, November 2011
//by Harjot Bal
//LIVING TREE
//Processing Code
//A combination of Processing and Arduino were used to allow users to interact physically with a digital tree.
//Reference: http://blog.superkrut.se/data/tree/applet/sketch_050113a.pde - Superkrut's processing tree code helped me in building the tree structure.
//Reference: Background image used was royalty-free via http://www.sxc.hu/.
//Reference: Music track "Wounded.mp3" used was royalty-free via http://incompetech.com/.

//Call Minim library for audio.
import ddf.minim.*;
//Start AudioPlayer via Minim.
Minim minim;
AudioPlayer player;
//Declare a variable for the graphic to be imported.
PImage b;
//Call Serial library for serial inputs.
import processing.serial.*;
//3 serial inputs used (shown as variables). Note: "pot" can be replaced with a switch.
float mic = 0;
float pot = 0;
float force = 0;
//Declare serial class.
Serial myPort;
//Variable for a new tree root is set to 0.
int nr=0;
//Declare the body part to be made.
BodyPart root;
//Variable for where the base of the tree starts, can be changed in terms of dimensions.
float rootAngleX=1500;
float rootAngleY=0;

void setup() {

  //Size of screen, with call to 3D library.
  size(1400, 800, P3D);
  //New audio object via Minim.
  minim = new Minim(this);
  //Call track which is located inside data folder.
  player = minim.loadFile("Wounded.mp3", 1024);
  //Play the audio (on Run of sketch).
  player.play();
  //Color mode used with range of 1.
  colorMode(RGB, 1);
  //Define the background image via the variable b, image to be located in data folder.
  b = loadImage("pic.jpg");
  //Root object setup.
  root = new BodyPart(0, 0);
  //List all the available serial ports
  println(Serial.list());
  myPort = new Serial(this, Serial.list()[0], 115200);

}

void draw() {

  //Call 'b' image through background.
  background(b);
  //Lights up the 3D cylinders used in the tree based on orientation.
  lights();
  //Move the base of the tree according to the 3D plane: X, Y, and Z axes.
  translate(width/2, height,-50);
  //Rotate base so that it is upward.
  rotateX(rootAngleX/1000);
  rotateZ(rootAngleY/1000);
  //Call the function which starts growth of the tree.
  root.grow();
  //After the root grows, reset the angle position of the cylinders to 0.
  root._angPosX = 0;

}

//Define the body part class with all required variables.
class BodyPart {
  BodyPart[] children = new BodyPart[3];
  //New root.
  private int child_nr=0;
  //Width of tree.
  private float _width = 0;
  //Height of tree.
  private float _height = 0;
  //How fast tree grows horizontally.
  private float _widthspeed;
  //How fast tree grows vertically.
  private float _heightspeed;
  //Openness of tree branches on the X plane.
  private float _angleX;
  //Openness of tree branches on the Y plane.
  private float _angleY;
  //Variable created for manipulating 'mic' input. The wind factor.
  public float brandnewX;
  //The affected variable as a result of 'mic' input.
  public float _angPosX;
  //Range of angles for tree branches on X plane.
  private float newangleX;
  //Range of angles for tree branches on Y plane.
  private float newangleY;
  //Dictates when to grow new branches.
  private float nextchild;

//Inputs values to tailor growth of tree.
public BodyPart(float angleX, float angleY){
    //Initial variables defined.
    _angleX = angleX;
    _angleY = angleY;
    _angPosX = 0;
    _widthspeed=.04;
    _heightspeed=.3;
    nextchild = 2;
  }

//Tree development while growing handled here.
public void grow(){
    //If the switch is pushed or the potentiometer is turned...GROW!
    if(pot == 0){
      //Default tree color.
      fill(.3,.3,.3,.9);
      //Vertical growth.
      _height+=_heightspeed;
      //Horizontal growth.
      _width+=_widthspeed;
      //Conditional for when to grow a new "child" (branch), dependent on the width of the previous cylinder and the number of branches it has thus far.
      if(_width>nextchild && child_nr<2){
        //Random branch growth; the highest the cylinders can stack is 20x vertically.
        nextchild+=random(1,20)*(child_nr+1);
        //Vertical growth slows to 40% of the current rate.
        _heightspeed*=.4;
        //Randomized range for openness of branches on the X plane.
        newangleX = random(.2,.4);
        //Randomized range for openness of branches on the Y plane.
        newangleY = random(.2,.8);
        //Input the randomized growth values and use them for each new child (branch) grown.
        children[child_nr++] = new BodyPart(random(-newangleX,newangleX), random(-newangleY,newangleY));
      }
    }

    //Call the function that colors the tree.
    paint();

    //Keep tree growing in respect to current branches.
    for(int i = 0;i<child_nr;i++){
      pushMatrix();
      children[i].grow();
      popMatrix();
    }

}

//Draw each cylinder according to input values.
void drawCylinder(float topRadius, float bottomRadius, float tall, int sides) {

  //3D cylinder shape creation.
  float angle = 0;
  float angleIncrement = TWO_PI / sides;
  beginShape(QUAD_STRIP);
  for (int i = 0; i < sides + 1; ++i) {
    vertex(topRadius*cos(angle), 0, topRadius*sin(angle));
    vertex(bottomRadius*cos(angle), tall, bottomRadius*sin(angle));
    angle += angleIncrement;
  }
  endShape();
  //Rotate the cylinder according to tree once made.
  rotateX(-PI/2);

}

private void paint(){  

    //Color of tree, grayish.
    fill(.3,.3,.3,.9);
    //Darkened tips of branches.
    stroke(2);
    strokeWeight(2);

    //Movement of branches and everything connected to them, again oriented to tree position.
    rotateX(_angleX+sin(brandnewX)/10+PI/2);
    rotateY(_angleY+sin(brandnewX)/10);

    //If the force sensor is pushed alone, detecting a value less than 200, change of color and transparency!
    if(force <= 200 && pot !=0 && force !=0){
      fill(_width/10,_width/15,random(.2,.3),random(.4,.6));
    }
    //Or else do this.
    else{
     //Left out for now!!
     // fill(_width/30,random(.3,.4),random(.3,.4)/*,random(.3,.4)*/);
    }
    //If the mic senses sound outside a certain threshold range, only then change the speed of the random movement of the tree branches.
    if(mic > 444 && mic < 459){
      //Within set range, so don't change branch movement.
      brandnewX = _angPosX+=.05/_width;
    }
    //The WIND factor!
    else{
      //change number value to increase/decrease amount of wind perceived.
      brandnewX = _angPosX+=5/_width;
      //Colorize the tree when wind is present.
      fill(random(.4,.5),random(.7,.8),_width/10,random(.4,.5));
    }
    //The values for drawing each cylinder.
    drawCylinder(_width/1.5, _width/1.5, _height, 10);
    //Placement of cylinder in respect to tree.
    translate(0,0,_height);
   }

}

//Dial in the serial values.
void serialEvent (Serial myPort) {

  //Read port until new values stop coming in.
   String inString = myPort.readStringUntil('\n');
  //Ignore anything that isn't a value.
   if (inString != null) {
  //Trim off any whitespace.
   inString = trim(inString);
  //Split the string on the commas and convert the resulting substrings into an integer array.
   float[] input = float(split(inString, ","));
  //If the array has at least three elements, you know you got the whole thing.
   if (input.length >=3) {
  //Map the values to any range desired.
   mic = map(input[0], 0, 1023, 0, 1023);
   pot = map(input[1], 0, 1023, 0, 1023);
   force = map(input[2], 0, 1023, 0, 1023);
   }
  }

}

//Stop the music.
void stop() {

  //Closes Minim audio classes when done with them.
  player.close();
  minim.stop();

  super.stop();
}


Final Project Documentation

Concept:
The project I presented is a critique of Augmented Reality (AR) applications. It is an iPad application that uses the iPad’s live camera feed. I am using an AR library called BazAR (http://cvlab.epfl.ch/software/bazar/) to handle object detection in the application. The idea is to build a library of everyday objects that would be recognized by the system. When you place the iPad over top of an object, the system detects that object and begins projecting additional information about it over the live camera feed.

Scenario:
During the presentation I demonstrated the project using a pack of Belmont cigarettes. When the system detects the pack, the image on the iPad is displayed as a hyper-over-saturated version of the cigarette packaging. This is meant to imply the hyper-reality of the branding: the rhetoric used to sell a particular product. Using the iPad’s multitouch–by swiping over the hyper-image of the cigarettes–new layers and news story headlines are revealed. By slowly peeling away layers of the object, the user is exposed to multiple perspectives relating to that particular object.

Expansion:
Thinking beyond this prototype, I would want to build an ongoing database of different objects and allow user-generated content to contribute to the system. Similarly, I would want the system to begin to integrate not just physical objects such as products, but also structures such as buildings. By expanding to explore buildings, the system could also address the layer of institutions (universities, government buildings, corporate buildings, etc.). This added layer could lend a playful yet serious element to the work.

Photos:


Code:
The software was written in C++ using the openFrameworks library. It was developed in Xcode, but I am just uploading the external C++ files. The testApp.m and testApp.h are the main files where the setup, update and draw functions occur.

Here are the files in a .zip file:
Ftsonis Source Code (ZIP)

Earth Project: Linh, Liz and Shuting

Earth Project

Shuting, Linh & Liz

Sample Video of Earth Project in Action:
Earth Video

Purpose
To create an educational, thought-provoking visual/interactive installation that combines an object (the globe), Processing, Arduino, sensors, video and user engagement, provoking people to reexamine their personal behavior as well as a basic survey of contemporary human behavior and its resounding impacts on the Earth’s environment.

Concept
Traditionally, we refer to the Earth as our “mother,” but when you stop to think about the true cause-and-effect influence in this relationship, one quickly realizes that the dynamics are reversed: it is we humans who occupy the mothering role, responsible for taking care of the planet today so that the future Earth can be fully functioning and healthy, not the other way around, with humans positioned as dependents of the Earth. (For example, consider the sometimes-frightening influence parents have on their children—see Alfred Hitchcock’s Psycho, George Bush or Foucault.) In this interactive installation, we draw on the notion of nature versus nurture. If the Earth is inherently self-sufficient and high-functioning, the negative influence of human behavior just might have something to do with global warming and the disruption of peace in the Earth’s natural climate.

Through Earth Project, we hope to express the sometimes invisible positive and negative effects we as humans have on the Earth and our shared natural environment.

Interaction Process / User Experience
In front of the user is a hanging globe. Projected on the wall in front of the user is video footage of the world from outer space. When the user interacts with the globe (by touch [covering one of the three light sensors], blowing or speaking roughly, or shaking/tilting the globe), videos of natural disasters and happenings such as hurricanes, tornadoes, avalanches and volcanoes are projected on the wall. When the globe is stable, and/or receives direct light through a final light sensor, pleasant video footage (a happy and healthy ecosystem) is projected.

Materials

  • Hardware
    • Projector
    • LED Lights
    • Laptop
    • Arduino
    • Sensors:
      • Light Sensors
      • Microphone / Sound Sensor
      • Tilt Sensor
  • Lots of Wire
  • String to Hang the Globe
  • Prototype Materials
  • Table to Present the Installation and Cover the Guts
  • Software
    • Processing
    • Arduino
    • Sound: Music
    • Processing Video Library
    • Video Footage of the Earth

Context

  • Earth Project is intended to be an interactive art gallery exhibition. Our current design is a second-round iteration and therefore is still, in our minds, a functioning prototype of sorts. We wanted the globe to be approachable–reminiscent of a globe you might find in a library, study or classroom. Additionally, we wanted the globe hung at a level accessible to the greatest number of people possible, within arm’s reach.

EOU Analysis

Environment

The installation is set up within a small, dark and quiet room, where the user can vaguely see an Earth floating in the center of the room and a screen projected on the wall in front of it.

The user acts as a mother examining their sick child (the Earth) by touching it and talking to it. Therefore, the installation should feel personal, private, comfortable, dark and quiet.

User

Anyone who is interested in interacting with the installation can be a user, although due to the fragile set-up, we do not allow children to touch the objects or people with pets to go inside the room.

Due to the sensitivity of the sensors, only one user at a time is allowed to interact with the object.

Users can touch, blow on, shake or talk to the Earth; they are also given a flashlight to light up the Earth as desired. Based on the nature of the behavior, a different video and sound is played on the screen. E.g. if the user softly touches and whispers to the sculpture, different videos of natural disasters and their sounds are projected onto the wall, and the user can see what is happening to the Earth, examining which disaster is happening where in the world.

Object

A sculpture of the Earth made from paper mâché, foam, tape and paint hangs from the ceiling by a thin rope that the user cannot see clearly in the dark–creating the feeling that the globe is floating in the universe and that the human is a god with the power to create and examine the creation.

Different sensors were carefully attached to the surface of the Earth sculpture so that it responds sensitively to human behavior; these include sound, light and tilt sensors.

Videos and sounds are played and projected on the wall; they were edited to loop over and over smoothly in case the user wants to keep enjoying the video. The sounds were mixed so that they mingle together when played at the same time.

Approach

We began by revising our first final-project idea, an interactive art installation based on manipulating sound and video, and (after hours of brainstorming) arrived at a new one: Earth Project. Maintaining the interactive and gallery-based attributes of our preliminary idea, we moved to a more concrete and layered concept.

To begin, we brainstormed by drafting approach and concept notes. Once we firmly decided on our idea, we investigated physical materials to use for the globe. We initially purchased a small children’s basketball from Canadian Tire. We intended to cover it with paper mâché so that we could cut a small doorway into the ball to place the Arduino and adhere the sensors and wiring.

Before covering the basketball with mâché, we tested our idea by covering a balloon with pulp paper mâché to test the size, feasibility and concept. During one of our work sessions, Anne walked by and suggested that we use a Styrofoam ball as the globe, as it’s already completely round, light and easy to find. We modeled our next (and final presentation) iteration on the use of a Styrofoam ball. We took the ball to the Plastics Lab at 100 McCaul and cut it in half using a band saw. We dug out space for the Arduino and breadboard and the power cord and, after mapping, cut inset/feed holes for our sensors. After all of the code and programming tests were complete, the soldering was done and the wiring was firmly in place, we taped the ball back together, and Linh and Shino covered it with a black and gray-scale map of the Earth. We attached string we borrowed from Mayan and hung the ball from the ceiling.

Making the Paper Mâché Prototype

As for the installation display stand, we looked around our DFI shared workspace and found a tall wooden box that was just about the right height for the display. We covered it with white paper, put the laptop and projector inside, draped a white plastic tablecloth over the box and cut a hole for the projector’s light/image projection. Linh made a project logo and we adhered it to the wall to complete the “gallery” display look.

Installation Prep & Concept Development Photos

Setting up the installation

Problems & Difficulties

We ran into some pretty basic issues that were only complicated by the short timeframe we had to complete the project—stabilizing the code/programming and getting the sensors to work properly.

When we programmed the video content into the code, we lost the audio and couldn’t figure out how to get it back or manipulate the code to restore it. We ended up having to produce the project without sound for the installation which we all found to be a compromise from our ideal vision for the project.

The code, though we spent countless hours working on it (honestly, Linh and Shuting headed up this aspect of the project, don’t be surprised, but I was present), proved extremely difficult to stabilize. We determined that the heavier the work we asked Arduino and Processing to do, the more unstable the code became. If we had more time, stabilizing the code would be the first revision we’d tackle.

We also came into a bit of trouble deciding which tilt sensor would be most appropriate for the project. We considered using a Tri-Ax Breakout sensor, as well as multiple X, Y & Z sensors and tilt sensors under the recommendation of the owner of Creatron. In the end, we chose to stick with the standard tilt sensor from our basic electronics kit.

The mic/sound sensor was very sensitive and difficult to work with. In the end, the sound sensor functioned more as a switch than as an analogue sensor.

Lastly, we neglected to test our sensors in the exact (or close to exact) conditions we planned on having in the project installation space. This resulted in very different sensor readings and ratios. Our final project therefore was highly sensitive to light and worked best in near complete darkness and silence.
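One possible fix, sketched here under assumed names and an arbitrary 20% margin: sample the ambient level at startup in the installation space itself, and derive each trigger threshold from that baseline rather than hard-coding it.

```java
// Startup-calibration sketch: average ambient sensor readings and set a
// trigger threshold relative to that baseline. The 20% margin and all
// names are illustrative assumptions, not from the project code.
public class Calibrate {
    static int threshold(int[] ambientSamples, double margin) {
        long sum = 0;
        for (int s : ambientSamples) sum += s;
        double baseline = (double) sum / ambientSamples.length;
        return (int) Math.round(baseline * (1.0 + margin)); // fire only above ambient
    }

    public static void main(String[] args) {
        int[] darkRoom = {40, 42, 38, 40}; // ambient light readings in the dark room
        System.out.println(threshold(darkRoom, 0.20)); // 48
    }
}
```

Calibrating in place would make the installation much less sensitive to the difference between workspace and gallery lighting.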

Misc Issues
Shino burnt her finger taking a turn at soldering. We lost the receipt to take the children’s basketball back to Canadian Tire—once we realized we didn’t need it.

Potential Future Directions & Revisions

The comments from our presentation gave us some great leads on how we might revise or edit the project in the future. One of the most important pieces of feedback we received was that the globe could be modified or recreated to be more approachable, encourage playful engagement and potentially be less fragile. Taking the next iteration to another level to include some of these suggestions would require us to utilize wireless (XBee) communication.
We also heard that the rhetoric of the “natural disasters” (and the ratio of “positive” to “negative” video content) did seem a bit dark and suggested that humans only have negative impacts on the Earth. In the next iteration, we would be sure to include more positive video content that would be triggered by human interaction.

Overall, we did an incredible job of planning, group work-shopping and working collaboratively. Even down to the minuscule but vital elements of production (such as soldering), we managed our time well, took equal ownership of the project, and jumped in both where we were not completely comfortable with the task at hand (e.g. Processing or soldering) and where we felt expertly comfortable.


CODE:

For Processing:
import processing.video.*;
import processing.serial.*;

Serial myPort;  // Serial port you are using
int[] buff = new int[10];
int num1 = 120, num2 = 0, num3 = 0, num4 = 0, num5 = 0;
int thd11 = 100, thd12 = 130, thd2 = 20, thd3 = 240, thd4 = 240, thd5 = 25;

Movie theMov0;
Movie theMov1;
Movie theMov2;
Movie theMov3;
Movie theMov4;
Movie theMov5;
boolean isDefault = true;
boolean isMovie1 = false;
boolean isMovie2 = false;
boolean isMovie3 = false;
boolean isMovie4 = false;
boolean isMovie5 = false;

void setup() {

myPort = new Serial(this, Serial.list()[0], 9600);
myPort.clear();
size(1440, 900);

theMov0 = new Movie(this, "Earth_Intro.mov");
theMov1 = new Movie(this, "hurricans.mov");
theMov2 = new Movie(this, "tournadoes.mov");
theMov3 = new Movie(this, "volcano.mov");
theMov4 = new Movie(this, "avalanches.mov");
theMov5 = new Movie(this, "happy earth.mov");

theMov0.loop();
}

void draw() {
while(myPort.available() > 0){
//read values
for (int i=0; i < 9; i++) {
buff[i] = buff[i+1];
}

buff[9] = myPort.read();
if (buff[9] == 1) {
num1= buff[8];
}
if (buff[9] == 2) {
num2= buff[8];
}
if (buff[9] == 3) {
num3= buff[8];
}
if (buff[9] == 4) {
num4= buff[8];
}
if (buff[9] == 5) {
num5= buff[8];
}

println("1:");
println(num1);
println("2:");
println(num2);
println("3:");
println(num3);
println("4:");
println(num4);
println("5:");
println(num5);
//select movies

}

if((num1 < thd11 || num1 > thd12) && !isMovie1){
isMovie1 = true;
theMov1.loop();
}
else if(num1 > thd11 && num1 < thd12 && isMovie1){
isMovie1 = false;
theMov1.stop();
}

if(num4 > thd4 && !isMovie2){
isMovie2 = true;
theMov2.loop();
}
else if(num4 <= thd4 && isMovie2){
isMovie2 = false;
theMov2.stop();
}

if(num3 > thd3 && !isMovie3){
isMovie3 = true;
theMov3.loop();
}
else if(num3 <= thd3 && isMovie3){
isMovie3 = false;
theMov3.stop();
}

if(num5 > thd5 && !isMovie4){
isMovie4 = true;
theMov4.loop();
}
else if(num5 <= thd5 && isMovie4){
isMovie4 = false;
theMov4.stop();
}

if(num2 < thd2 && !isMovie5){
isMovie5 = true;
theMov5.loop();

isMovie1 = false;
theMov1.stop();
isMovie2 = false;
theMov2.stop();
isMovie3 = false;
theMov3.stop();
isMovie4 = false;
theMov4.stop();
isDefault = false;
theMov0.stop();
}
else if (num2 >= thd2 && isMovie5){
isMovie5 = false;
theMov5.stop();
}
//if(num1 < thd1 && num2 < thd2 && num3 < thd3 && num4 < thd4 && num5 < thd5){
if(num1 > thd11 && num1 <= thd12 && num2 > thd2 && num3 < thd3 && num4 < thd4 && num5 < thd5){
isDefault = true;
theMov0.loop();

isMovie1 = false;
theMov1.stop();
isMovie2 = false;
theMov2.stop();
isMovie3 = false;
theMov3.stop();
isMovie4 = false;
theMov4.stop();
isMovie5 = false;
theMov5.stop();
}
//else if ((num2 >= thd2 || num3 >= thd3 || num4 >= thd4 || num5 < thd5 ) && isDefault){
else{
isDefault = false;
theMov0.stop();
}

//draw movies
background(0);

if(isDefault){
theMov0.read();
image(theMov0, 1440/2-320, 900/2-240, 640, 480);
}
else if(isMovie5){
theMov5.read();
image(theMov5, 1440/2-320, 900/2-240, 640, 480);
}
else{
if(isMovie1){
theMov1.read();
image(theMov1, 120, 0, 600, 450);
}
if(isMovie2){
theMov2.read();
image(theMov2, 120, 900/2, 600, 450);
}
if(isMovie3){
theMov3.read();
image(theMov3, 1440/2, 900/2, 600, 450);
}
if(isMovie4){
theMov4.read();
image(theMov4, 1440/2, 0, 600, 450);
}
}

myPort.clear();
}

For Arduino:

int analogPin2 = A0;//light sensor 2 brown wire
int analogPin4 = A1;//tilt sensor blue wire
int analogPin5 = A2;//sound sensor blue wire
int analogPin1 = A3;//light sensor 1 yellow wire
int analogPin3 = A4;//light sensor 3 purple wire

int temp1 = 0;//value of light sensor 1
int temp2 = 0;//value of light sensor 2
int temp3 = 0;//value of light sensor 3
int temp4 = 0;//value of tilt sensor
int temp5 = 0;//value of sound sensor

void setup() {
Serial.begin(9600);
}

void loop() {
temp1=analogRead(analogPin1);
temp1=int(map(temp1,0,1023,2,250));//light sensor 1
temp2=analogRead(analogPin2);
temp2=int(map(temp2,0,1023,2,250));//light sensor 2
temp3=analogRead(analogPin3);
temp3=int(map(temp3,0,1023,2,250));//light sensor 3
temp4=analogRead(analogPin4);
temp4=int(map(temp4,0,1023,2,250));//tilt sensor
temp5=analogRead(analogPin5);
temp5=int(map(temp5,0,1023,2,250));//sound sensor

// send each reading as a marker byte (1-5) followed by the mapped sensor value
Serial.write(1);
Serial.write(temp1);
Serial.write(2);
Serial.write(temp2);
Serial.write(3);
Serial.write(temp3);
Serial.write(4);
Serial.write(temp4);
Serial.write(5);
Serial.write(temp5);
}
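On the Processing side, the marker byte tells the sketch which sensor the following value byte belongs to. A minimal sketch of that decoding logic (written in plain Java for illustration; the `SensorDecoder` class and its `decode` method are our own names, not code from the project):

```java
// Decodes a stream of (marker, value) byte pairs as sent by the Arduino loop above.
// Markers 1..5 identify the sensor; the byte after each marker is its mapped reading (2..250).
public class SensorDecoder {
    // Returned array: values[1..5] hold the latest reading for sensors 1..5.
    public static int[] decode(int[] bytes) {
        int[] values = new int[6];
        int i = 0;
        while (i + 1 < bytes.length) {
            int marker = bytes[i];
            if (marker >= 1 && marker <= 5) {
                values[marker] = bytes[i + 1]; // the byte after the marker is the reading
                i += 2;
            } else {
                i++; // out of sync: skip bytes until a marker turns up
            }
        }
        return values;
    }
}
```

Note that the mapped values (2-250) overlap the marker range 2-5, so a decoder like this can lose sync mid-stream; the sketch assumes it starts reading on a marker byte.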

Weather Projection – Faysal, Shino and Cathy

Introduction

Talking about the weather is always a good way to start a conversation! After we collected some sensors that could measure outside conditions, we decided to combine them and build a weather station! From projecting sensor-controlled drawings onto a wall, to projections on a canvas with windmills on it, we finally decided to create a projection that interacts with the body!

Concept

Inspired by virtual fitting rooms, we decided to have something similar that would help you decide what to wear from your closet while taking into consideration the weather conditions.

As soon as the camera detects a specific colour on your body, a type of clothing that corresponds to the outside weather is projected directly onto the body. If our weather station detects low temperatures, an image of a coat is projected; if high temperatures, a t-shirt is displayed instead. Besides the temperature sensor, we also included a humidity sensor, a light sensor and a Hall effect sensor. The humidity sensor detects rain and thus triggers the projection of an umbrella (closed or open depending on the amount of humidity in the air). The light sensor darkens or lightens the lenses of glasses that are also projected on the body. The Hall effect sensor helped us “capture” the speed of the wind: we would build a windmill inside that turns as fast as a windmill set outside, giving an idea of the wind speed.

Arduino

For the hardware we used an Arduino Uno board, a temperature sensor, a humidity sensor, a light sensor and a Hall effect sensor. We also used a board that we mounted on top of the Arduino so we could connect the sensors more easily (see image below).

 

Arduino/Processing Code

We first needed to get our sensors working properly, then collect the data from the sensors and feed it to Processing, which would generate the projections. At first, we had trouble connecting all the sensors together, but thanks to Shuting's help we were able to get our hands on her Arduino code, which can read 5 sensors at the same time. Unfortunately it did not solve our problem completely: the sequence of variables Processing was receiving from the Arduino wasn't the same as the order of the analog inputs. So we asked Jim for help and he showed us the Firmata library, which can be uploaded directly to the board without changing anything. On the Processing side, we just used the Arduino library to read the variables.

Below shows our Processing work:

1. We started by using println(arduino.analogRead(0)) to test the variables from each sensor, so we could learn their range and how they change, and then adjust the thresholds to display different types of clothing, sunglasses and an umbrella using “if…else if…else”.
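The threshold logic described above might look something like this sketch (plain Java for illustration; the threshold values and image names are placeholders we made up, not the project's calibrated values):

```java
// Maps a raw temperature reading to the clothing image that should be projected.
// The cut-off values below are illustrative placeholders, not the project's own.
public class ClothingPicker {
    public static String pick(int temperatureReading) {
        if (temperatureReading < 300) {
            return "coat.png";     // cold: project a coat
        } else if (temperatureReading < 600) {
            return "sweater.png";  // mild: project a sweater
        } else {
            return "tshirt.png";   // warm: project a t-shirt
        }
    }
}
```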

2. We then decided to make it more interactive. Ideally it would track the human body: when people step into the frame, clothes are projected onto them. A motion sensor came to mind, either as a switch or as a tracking sensor that would sense the body entering the frame. As that seemed hard to realize, we decided to go with a webcam that would track colour (which we learned towards the end of our Processing class). The example tracked whichever colour was chosen with the mouse; ours needed to track only one specific colour that we assigned in advance. We simplified the code, combined it with the changing of clothes as well as the umbrella code, and then created a function to draw the glasses. Finally we combined everything together and tested it.

By the way, thanks to Harjot’s code on assignment 4, we were able to control the video tracking and activate it only when we push a button.

 

Usually when you push a button, the value changes, but when you remove your finger, the value goes back to the original one. We made sure that each time the button is pushed the value toggles, from 1023 to 0 or from 0 to 1023.
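A common way to get that latching behaviour is to toggle only on the press edge, i.e. when the reading changes from released to pressed, and to ignore everything else. A minimal sketch of the idea (plain Java; the `ToggleButton` class and its names are ours, not the project's code):

```java
// Turns a momentary push button into a toggle: the output flips once per press,
// on the transition from released to pressed, and ignores the release.
public class ToggleButton {
    private boolean wasPressed = false;
    private int value = 1023; // current latched value

    // Call with each new raw reading; returns the latched value.
    public int update(boolean pressedNow) {
        if (pressedNow && !wasPressed) {        // rising edge: a fresh press
            value = (value == 1023) ? 0 : 1023; // flip between the two states
        }
        wasPressed = pressedNow;
        return value;
    }
}
```

On real hardware the raw reading would also need debouncing, which this sketch omits.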

 

Original Screen:

 

Screen when button is pushed:

 

3. Before the button is pushed we wanted the projection on the wall to be decorative, so we searched on openprocessing.org and found a nice clock code that shows the date and time. It was quite nice as its colours change over time.

 

4. To make the projection more attractive we decided to introduce an electronic photo album along with the clock. We used the PNG photo format, and made the pictures appear randomly as they rotate and scale in size (it also looked like a digital collage). The “tint” made the collage cool, but surprisingly it also affected the colour tracking part and made the clothes look darker, so we eventually deleted that part.

 

5. We finally combined the collage’s code along with the previous color tracking code.

Projection of Collage and Clock:

 

 

 

Projection of Clothes:

 

The light sensor, humidity sensor, and temperature sensor affect what is projected onto the user's body, while the webcam tracks the pink dot held by the user (the pink dot is the colour the webcam is assigned to track).

>>> Our processing code can be accessed here:  processing_code

The Windmill 

No weather project would be complete without a windmill! As previously mentioned, we used a Hall effect sensor to detect the speed of the wind. We also decided to use a separate Arduino board for this circuit.

The wind-detecting windmill was the physical, interactive part of our project. Our design challenge was to create a device that would let people know the outside wind conditions while they're inside. So we built a windmill outside the room that measures the wind speed and uses it as input data to drive another windmill inside the room. When the windmill inside the room rotates, people get an idea of the wind speed outside their door. (We first thought of having an indoor fan that would blow air on the user's face so they could feel the outside wind, but since our fan wasn't strong enough, we dropped that idea.)

Building the windmill 

1. Determining which sensor we would need to detect the wind speed.

Jim gave us many suggestions and recommended the Hall effect sensor for detecting the wind speed. After doing much research, the Hall effect sensor was definitely the best way to calculate the RPM (revolutions per minute, a measure of the frequency of rotation).

The sensor works with a magnet: it registers a pulse each time the magnet passes close to it.
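With one magnet on the windmill, each pulse is one revolution, so counting pulses over a fixed interval gives the RPM directly. A minimal sketch of that calculation (plain Java; the one-magnet assumption and the interval are illustrative, and `WindSpeed` is our own name):

```java
// Converts a pulse count from the Hall effect sensor into RPM.
// Assumes one magnet on the windmill, i.e. one pulse per revolution.
public class WindSpeed {
    public static int rpm(int pulses, int intervalMs) {
        // revolutions counted in the interval, scaled up to a full minute
        return pulses * 60000 / intervalMs;
    }
}
```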

2. Building the device that detects the RPM

Setting up the Hall effect sensor: we connected the Hall effect sensor as an input device and the motor as an output device (see image below).

 

Fixing the Hall effect sensor in the box and setting the magnet on the back of the windmill:

 

>>> Our arduino code for the windmill can be accessed here: windmill_code

Fixing a motor and putting a windmill on it:

Here is a video of the windmill in action!!

 

Challenges

One of the challenges we faced was the inaccuracy of the body detection: sometimes the projection was shaky and did not always detect the colour on the user.

Sensor delay: we sometimes had delays as the sensor data would take time to travel and project the right image.

Next Steps

We definitely would like to explore the use of the Kinect with our project in the future. We would then have better accuracy and a better interactive experience!

Final Project Documentation: “Global IceCubes”

“Global IceCubes”, in its demo/final project form, was an installation where the user could enjoy a relaxing drink of scotch on the rocks… however, after the first sip was taken, the visitor watched their ice cubes melt quickly (more quickly than they would normally expect). They were able to slow the melting process by talking to their ice cube. Conversation cards were provided in case the user was at a loss for words.

Bartender talks to scotch connoisseur, Mark Thoburn

Project Evolution

Here is my original design statement:

“The vast majority of the world’s population live in cities. Global issues — from climate change to deforestation — are geographically, temporally, and emotionally distant from the vast majority of us. I am interested in exploring ways to bring us into direct contact with global issues by bringing remote data sources and their associated narratives into urban settings, in physical (tangible) and public ways. This project aims to do that by injecting remote data and narratives of environmental impact into what is normally a private, enjoyable moment.”

The original project proposal for “Global IceCube” changed somewhat as I worked it up, mostly in my decision to focus on one main type of user interaction (talking), and to not incorporate a data feed into the project. More on that below.

Process Notes

This is presented in a linear form, however there was overlap between each step and considerable iteration throughout the entire process.

1. Concept Refinement and Testing:

Working from the original concept, I used simple props — a trivet, a glass of scotch with ice — to bounce the idea off people and get some initial feedback. I also spent some time with my ice cube tamagotchi, testing the melt rate of ice cubes in scotch, as well as doing several scotch and water freezing tests… delicious.

 

User testing concept

Ice cube tamagotchi

Testing natural melt rate

Testing freezing (scotch and water)

2. Research:

This consisted of “expert interviews” (specific questions posed to faculty, a plastics technician, and a data scientist) and secondary research on glaciers, possible data streams, hardware options for the heater, material options for the housing, and available code libraries.

At this point in the process I decided to switch from cooling the ice cube and playing with the absence or presence of cold, to working with heat. I also decided not to control the warming of the ice cube using an external data feed, for example glacier warming data from the World Glacier Monitoring Service, various CO2 emissions sources, data specific to the user, etc. While I may experiment with that in later versions, I decided that the external context (the reference to global warming) wasn't necessary, and that the technical challenges were beyond the scope of this project. I also decided to focus on seeing how one simple user interaction, talking to the ice cube, worked before experimenting with multiple user interactions.

 

3. Prototype circuit, write code and test:

In terms of code, I used the StandardFirmata Arduino sketch to let Processing talk to the Arduino, the Minim audio library, and a chunk of code adapted from Jim (Ruxton) that let me use Minim to measure sound levels when the user was talking (or not talking) into the mic. I could then set a threshold for what counted as talking (how loud and for how long) and therefore when the heater and the corresponding visual feedback would be turned on or off.

The main challenge here was accounting for natural pauses in people's speech. The first version of the code worked, except that it required the user to talk without pausing. I became so zoned in on the timeTalking and timeNotTalking sections of the code that I didn't even consider digging into the Minim library. An elegant solution proposed by Jim, increasing the audio buffer sample size to average out the pauses, helped me solve the problem.
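The core of that fix is averaging the level over a window large enough that brief silences don't drop the average below the talking threshold. A minimal sketch of the idea (plain Java; in the actual project Minim supplies the level values, and the window size, threshold, and `TalkDetector` name here are our illustrative assumptions):

```java
// Decides whether the user is "talking" from a stream of audio level samples.
// Averaging over a window smooths out the short pauses in natural speech.
public class TalkDetector {
    private final double[] window;   // ring buffer of recent level samples
    private int next = 0;
    private final double threshold;

    public TalkDetector(int windowSize, double threshold) {
        this.window = new double[windowSize];
        this.threshold = threshold;
    }

    // Feed one level sample (e.g. a per-frame loudness value); returns the current decision.
    public boolean update(double level) {
        window[next] = level;
        next = (next + 1) % window.length;
        double sum = 0;
        for (double v : window) sum += v;
        return sum / window.length > threshold; // compare the windowed average, not the raw sample
    }
}
```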

Testing with breadboard

Turning heater (resistor) on/off

 

4. Design/Build housing:

This was just a matter of researching materials to find one that would house the hardware and still allow for visual feedback to show through. I mapped out the physical setup of the hardware that would need to be contained within the housing, assembled a cut list and then had the good fortune to work with (assist) the plastics workshop technician, Jon Kuisma, in assembling the housing from translucent acrylic.

Housing measurements and cut list

Housing (translucent acrylic)

5. Build heater, solder circuit and assemble final installation:

After experimenting with different conductive materials for the coaster, I ended up building a quick-and-dirty heater using a 5 ohm (20 watt) resistor taped to a 4″ x 4″ piece of aluminum (from a baking pan). It took some time experimenting with different rigging systems (wire, brackets) and glues before I settled on aluminum tape, which I didn't know existed until then, but was glad to find: it was the simplest solution, dealt well with heat, and worked.

 

Heating unit

This was my first soldering experience other than soldering two wires together. While I didn’t exactly have the hands of a surgeon, the circuit was relatively simple and I got it done with the benefit of good advice from the lab technician, George Dougherty, and helpful equipment (“third hands” are fantastic).

Here are some images of the final circuit worked up from the breadboard prototype:

 

Final circuit

Final circuit, back

6. Demo preparation and final testing: I prepared “conversation cards” (inspired by Monty Python), in case the user was at a loss for words. I also prepared a short script as an improv aid in my role as bartender. While this helped somewhat in the “performance” / demo, it’s hard work for an introvert. A true improv actor playing the role of bartender would have been better. I also spent time setting the room up and adjusting to the specific dimensions and light and sound characteristics of the room. I was able to account for differences in ambient noise and street sounds by adjusting thresholds in the code.

 

Code / Final Hardware & Materials List

The final hardware and materials setup consisted of:

1 x Arduino Uno

1 x 5 Ω (20 W) resistor with aluminum heat sink
1 x TIP120 transistor with aluminum heat sink
1 x 1 kΩ resistor

1 x red LED
1 x blue LED
2 x 330 Ω resistors

22 gauge solid core jumper wire
3 x alligator clips
1 x protoboard

1 x Audio-Technica PRO 24 stereo microphone
9 x “conversation cards”
1 x opaque plastic box (housing) with cutout for coaster

1 x Glenlivet 12 year-old scotch
1 x Balvenie doublewood 12 year-old scotch
1 x package of salted almonds
1 x bucket of ice

Download the code [PDF]

Overall Challenges & Successes

The group critique provided some insight into the challenges and successes of this first (demo) version, although it would be useful to see what happens when testing is conducted with individual users in the intended setting: a gallery location where the installation space is small and mostly contained/private within a public space (i.e., with surrounding traffic from other gallery visitors).

The critique demonstrated that the challenges that were identified in the conceptual stage were indeed accurate. The main user experience challenges present during the critique revolved around issues of appropriate feedback and communication of interaction possibilities, specifically:

1) Appropriate feedback, specifically the changes in temperature / when the user had turned the heat on or off. I chose to use the computer screen and LEDs (changing both from red to blue) as a visual cue for when the heater was turned on or off. While the colour change was obvious during testing, the final installation location had much more ambient light; I didn’t have time to adapt and so the changes were too subtle. I think this would ultimately be a fairly easy fix using different materials and visual/aural cues. More on that below.

2) Communicating the interaction possibilities to the user, specifically that their ice cube was melting at a faster rate than usual and that they could talk to it to slow the melt rate. While I designed out a few possibilities for this, I ended up choosing to handle this through a live actor (me as bartender). I’m not sure if a more practiced improv actor would have done better, but it felt a bit clunky with me in the starring role. Communicating with the user will be one of the main challenges going forward.

3) Physical interaction challenges: the setup — a mic on a stand, no stool in front of the bar, and the heater-as-coaster — meant that the user (the serious scotch enthusiast, Mark Thoburn) could break the system, for example, choosing to hold on to his glass and not put it down on the coaster. While I realized this during late-stage user testing (and even wrote a line in my improv script to account for it), the fact that it occurred highlighted the limits of the installation and had an overall negative impact on the user experience. It also meant that the user was free to roam and so there wasn’t as cozy a relationship between the user and the bar setup as there might otherwise have been.

4) Handling variability: Mark choosing to lead the entire class in a sing-along to slow the warming (instead of interacting with the ice cubes on his own), although fun, wasn't what I expected, mostly because the intended user scenario was a mostly private experience within a public (gallery) setting. Still, it clearly showed that it's impossible to fully predict how users will interact and that I need to account for that somehow.

5) The glacial pace of change: I don’t mind that the experience is meant to be longer than usual, in fact that’s what I like about it, but user expectations need to be set appropriately and then dealt with effectively. This ties back into appropriate feedback for me, but it also touches on the need to find a better hook into the experience and making sure that at some point early on, the user makes an emotional connection with some aspect of the installation, or that something gets them to keep interacting with the system.

This demo (final project) version was always intended as a first step study for a potentially larger project, for both practical reasons of feasibility — time, resources, as well as my emergent, to be generous, coding abilities — as well as the need to user test the most basic of interactions before expanding the project further.

Overall, I was happy with the process — while working with a group would have been easier, choosing to work alone forced me to get my hands dirty in putting together all aspects of the project.

In terms of successes…seeing a few people (in testing before the final demo) talking to their ice cube showed a glimpse of what I think might be an interesting interaction to expand upon. I also think there might be some potential in the theatre aspect of the demo — mixing live theatre / improv artists with an interactive scotch and ice cube installation (or interactive installations in general). Overall, I had mixed feelings about the results. Early responses to the concept ranged from perplexed to very enthusiastic; ultimately what I took away from user testing the project at various stages and the final demo/critique was that it’s a potentially interesting concept that needs some work to make it a satisfying user experience.

Future Improvements / Next Steps

One of the first things I’d like to do before taking any further steps would be some more user testing of the installation in its current form.

Overall improvements would need to deal with the challenges mentioned above. A few specific ideas for possible improvements and areas to explore (some of which came out during the critique), include:

– Bringing the user closer to the glass, in part by more closely integrating the speaking action (mic), heat source, and even feedback mechanisms (perhaps, although not necessarily by having it all self-contained). This might allow for a more intimate experience with the glass/ice, and also allow for more natural user interactions, for example, alleviating the necessity for the user to place the glass on the coaster/heater in order for the system to work;

– Research and experiment with new materials and better feedback mechanisms, including thermochromic inks for feedback based on temperature;

– Experiment with different timescales of user experience and perhaps even with multiple locations (i.e., starting the user experience in the gallery/installation and building something in that continues the experience once the user leaves);

– Explore other interactions the user could have with the ice cubes, including more physical/direct interactions, and perhaps even multiple user / group interactions;

– Explore an effective data-driven component to the installation.

Note Float! Cathy Chen & Maayan Cohen

OVERALL CONCEPT

A playful interactive musical experience for children. We adopted the idea from Andre Michelle's Tone Matrix (http://lab.andre-michelle.com/tonematrix) and adapted it to a physical form. We both wanted to make something big, soft and physical for children to interact with, and the Tone Matrix seemed a perfect fit for what we could start approaching.

APPROACH

Software and Hardware

At first we tried to find a library that would allow us to play notes from the keyboard. SoundCipher (http://explodingart.com/soundcipher/) was the first one we tried and the one we ended up using.

PROTOTYPES

Initial Software Prototype
The code basically imitated how the Tone Matrix works; however, we wanted to make a smaller-scale version, since the scope of the project was limited by the amount of time we were given. We therefore initially decided on forty notes on the keyboard:

8 notes per column
5 notes per row (so five octaves)
with each note going up incrementally on a pentatonic scale

so the pattern is:
first octave: EGABDEGA
second octave: EGABDEGA
third octave: EGABDEGA
fourth octave: EGABDEGA
fifth octave: EGABDEGA

We acquired the note numbers from this website: http://tonalsoft.com/pub/news/pitch-bend.aspx
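The note numbers used here follow the E minor pentatonic scale (E G A B D), whose interval pattern in semitones is 3, 2, 2, 3, 2. A small sketch that generates the MIDI numbers from that pattern (plain Java, just to illustrate where values like 64, 67, 69, 71, 74, 76 come from; the `Pentatonic` class is our own):

```java
// Generates MIDI note numbers walking up the E minor pentatonic scale (E G A B D).
public class Pentatonic {
    private static final int[] STEPS = {3, 2, 2, 3, 2}; // semitone gaps E->G->A->B->D->E

    public static int[] scale(int startNote, int count) {
        int[] notes = new int[count];
        int note = startNote;
        for (int i = 0; i < count; i++) {
            notes[i] = note;
            note += STEPS[i % STEPS.length]; // cycle through the interval pattern
        }
        return notes;
    }
}
```

Starting from E = 64 this reproduces the sequence 64, 67, 69, 71, 74, 76, 79, 81, 83 used in the code below.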

After we got the keyboard working with the code, we moved on to the hardware part of the operation. The main concept we used is an “array of arrays”:

________
|__|__|__| <- an array with length 3 of floats = an “octave”
|__|__|__|
|__|__|__|
|__|__|__|
|__|__|__|
|__|__|__|
|__|__|__|
|__|__|__|
^
an array with length 8 of “octaves”

Here is the code for testing the various tone possibility for keypad:

/**
DIGF 6B01 Final Assignment Keyboard Test Version 4
November 11 2011
* Create a music keyboard from the qwerty keyboard.
* Capture key presses and map the ascii key number
* to a MIDI pitch number and play a note immediately.
*
* A SoundCipher example by Andrew R. Brown
*/

import arb.soundcipher.*;

SoundCipher sc = new SoundCipher(this);

float [][] chord = new float [6][4];
boolean [][] pressed = new boolean [6][4];
int scaleCounter = 0;

void setup() {
  for (int i = 0; i < 6; i++) {
    for (int j = 0; j < 4; j++) {
      chord[i][j] = -1;
      pressed[i][j] = false;
    }
  }
}

void draw(){

    sc.playChord(chord[scaleCounter], 80, 0.3);

    scaleCounter++;

    if (scaleCounter == 6) {
        scaleCounter = 0;
    }

    delay(180);

}

void keyPressed()
{
  if (key == 'z') {
    if (!pressed[0][0]) {
      chord[0][0] = 64;
      pressed[0][0] = true;
    }
    else {
      chord[0][0] = -1;
      pressed[0][0] = false;
    }
  }
  else if (key == 'x') {
    if (!pressed[1][0]) {
      chord[1][0] = 67;
      pressed[1][0] = true;
    }
    else {
      chord[1][0] = -1;
      pressed[1][0] = false;
    }
  }
  else if (key == 'c') {
    if (!pressed[2][0]) {
      chord[2][0] = 69;
      pressed[2][0] = true;
    }
    else {
      chord[2][0] = -1;
      pressed[2][0] = false;
    }
  }
  else if (key == 'v') {
    if (!pressed[3][0]) {
      chord[3][0] = 71;
      pressed[3][0] = true;
    }
    else {
      chord[3][0] = -1;
      pressed[3][0] = false;
    }
  }
  else if (key == 'b') {
    if (!pressed[4][0]) {
      chord[4][0] = 74;
      pressed[4][0] = true;
    }
    else {
      chord[4][0] = -1;
      pressed[4][0] = false;
    }
  }
  else if (key == 'n') {
    if (!pressed[5][0]) {
      chord[5][0] = 76;
      pressed[5][0] = true;
    }
    else {
      chord[5][0] = -1;
      pressed[5][0] = false;
    }
  }

   ////////////////////////////////////////

  else if (key == 'a') {
    if (!pressed[0][1]) {
      chord[0][1] = 79;
      pressed[0][1] = true;
    }
    else {
      chord[0][1] = -1;
      pressed[0][1] = false;
    }
  }
  else if (key == 's') {
    if (!pressed[1][1]) {
      chord[1][1] = 76;
      pressed[1][1] = true;
    }
    else {
      chord[1][1] = -1;
      pressed[1][1] = false;
    }
  }

  else if (key == 'd') {
    if (!pressed[2][1]) {
      chord[2][1] = 74;
      pressed[2][1] = true;
    }
    else {
      chord[2][1] = -1;
      pressed[2][1] = false;
    }
  }
  else if (key == 'f') {
    if (!pressed[3][1]) {
      chord[3][1] = 71;
      pressed[3][1] = true;
    }
    else {
      chord[3][1] = -1;
      pressed[3][1] = false;
    }
  }
  else if (key == 'g') {
    if (!pressed[4][1]) {
      chord[4][1] = 69;
      pressed[4][1] = true;
    }
    else {
      chord[4][1] = -1;
      pressed[4][1] = false;
    }
  }
  else if (key == 'h') {
    if (!pressed[5][1]) {
      chord[5][1] = 67;
      pressed[5][1] = true;
    }
    else {
      chord[5][1] = -1;
      pressed[5][1] = false;
    }
  }

   ////////////////////////////////////////

  else if (key == 'q') {
    if (!pressed[0][2]) {
      chord[0][2] = 69;
      pressed[0][2] = true;
    }
    else {
      chord[0][2] = -1;
      pressed[0][2] = false;
    }
  }
  else if (key == 'w') {
    if (!pressed[1][2]) {
      chord[1][2] = 71;
      pressed[1][2] = true;
    }
    else {
      chord[1][2] = -1;
      pressed[1][2] = false;
    }
  }
  else if (key == 'e') {
    if (!pressed[2][2]) {
      chord[2][2] = 74;
      pressed[2][2] = true;
    }
    else {
      chord[2][2] = -1;
      pressed[2][2] = false;
    }
  }
  else if (key == 'r') {
    if (!pressed[3][2]) {
      chord[3][2] = 76;
      pressed[3][2] = true;
    }
    else {
      chord[3][2] = -1;
      pressed[3][2] = false;
    }
  }
    else if (key == 't') {
    if (!pressed[4][2]) {
      chord[4][2] = 79;
      pressed[4][2] = true;
    }
    else {
      chord[4][2] = -1;
      pressed[4][2] = false;
    }
  }
    else if (key == 'y') {
    if (!pressed[5][2]) {
      chord[5][2] = 81;
      pressed[5][2] = true;
    }
    else {
      chord[5][2] = -1;
      pressed[5][2] = false;
    }
  }

   ////////////////////////////////////////

    else if (key == '1') {
    if (!pressed[0][3]) {
      chord[0][3] = 83;
      pressed[0][3] = true;
    }
    else {
      chord[0][3] = -1;
      pressed[0][3] = false;
    }
  }

    else if (key == '2') {
    if (!pressed[1][3]) {
      chord[1][3] = 81;
      pressed[1][3] = true;
    }
    else {
      chord[1][3] = -1;
      pressed[1][3] = false;
    }
  }
    else if (key == '3') {
    if (!pressed[2][3]) {
      chord[2][3] = 79;
      pressed[2][3] = true;
    }
    else {
      chord[2][3] = -1;
      pressed[2][3] = false;
    }
  }
    else if (key == '4') {
    if (!pressed[3][3]) {
      chord[3][3] = 76;
      pressed[3][3] = true;
    }
    else {
      chord[3][3] = -1;
      pressed[3][3] = false;
    }
  }
    else if (key == '5') {
    if (!pressed[4][3]) {
      chord[4][3] = 74;
      pressed[4][3] = true;
    }
    else {
      chord[4][3] = -1;
      pressed[4][3] = false;
    }
  }
    else if (key == '6') {
    if (!pressed[5][3]) {
      chord[5][3] = 71;
      pressed[5][3] = true;
    }
    else {
      chord[5][3] = -1;
      pressed[5][3] = false;
    }
  }
}
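The long if/else chain above can be collapsed by mapping each key to its grid position and pitch in a lookup table. A sketch of the idea (plain Java; the key positions and pitches are taken from the chain above, but the `NoteGrid` class and its method names are ours, and only a few keys are shown):

```java
import java.util.HashMap;
import java.util.Map;

// Replaces the per-key if/else chain: each key maps to {column, row, pitch},
// and pressing a key toggles that cell between its pitch and -1 (silent).
public class NoteGrid {
    private final float[][] chord = new float[6][4];
    private final boolean[][] pressed = new boolean[6][4];
    private final Map<Character, int[]> keyMap = new HashMap<>();

    public NoteGrid() {
        for (float[] row : chord) java.util.Arrays.fill(row, -1);
        // first key of each row from the original chain; the full table would list all 24
        keyMap.put('z', new int[]{0, 0, 64});
        keyMap.put('a', new int[]{0, 1, 79});
        keyMap.put('q', new int[]{0, 2, 69});
        keyMap.put('1', new int[]{0, 3, 83});
    }

    public void keyPressed(char key) {
        int[] m = keyMap.get(key);
        if (m == null) return;                   // key not part of the grid
        int i = m[0], j = m[1];
        pressed[i][j] = !pressed[i][j];          // toggle the cell
        chord[i][j] = pressed[i][j] ? m[2] : -1; // play its pitch, or silence it
    }

    public float pitchAt(int i, int j) { return chord[i][j]; }
}
```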

First Physical Interactive Choice (sensor): Photoresistors

We tried connecting the Arduino with a sensor. Just for a prototype, we used four photoresistors to activate the sound. One problem we found with photoresistors is that consecutive readings jitter around the threshold: if we ask for the sound to turn on when the sensor reads > 400, the photoresistor reads 420, then 422, and so on, rapidly turning the sound on and off. Still, we could see that the sensor was reading values and turning on the music according to our code, so the first communication between the sensor and the code worked!
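That rapid on/off flicker around a single threshold is usually fixed with hysteresis: use a higher threshold to turn on and a lower one to turn off. A minimal sketch (plain Java; the threshold values and the `Hysteresis` class name are illustrative, not from the project):

```java
// Avoids flicker from a jittery sensor by using two thresholds:
// the reading must rise above onAt to turn on, and fall below offAt to turn off.
public class Hysteresis {
    private final int onAt, offAt;
    private boolean on = false;

    public Hysteresis(int onAt, int offAt) {
        this.onAt = onAt;   // e.g. 420
        this.offAt = offAt; // e.g. 380
    }

    public boolean update(int reading) {
        if (!on && reading > onAt) on = true;
        else if (on && reading < offAt) on = false;
        return on; // jitter between offAt and onAt no longer toggles the state
    }
}
```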

Second Physical Interactive Choice (sensor): Piezo Transducer

The second sensor we tried was piezo. We both felt piezo would be a perfect “sensor” for this project as it reacts with a knock or a touch. We grabbed a simple tutorial for piezo just to try to make it work: http://www.arduino.cc/en/Tutorial/KnockSensor Other examples that made us believe that piezo would work is: http://www.flickr.com/photos/peplop/3766646109/

However, we encountered many problems with the piezo transducer, as it picks up noise even when we are not interacting with it. Eventually we asked Jim for help. Jim told us it is essentially a tiny microphone, which is why it picks up noise and signals even when untouched. The piezo transducer therefore wouldn't work for us. Jim suggested a switch might serve our purpose better, so we turned to our third choice of interaction.

Third Physical Interactive Choice: Boolean Switch

Jim introduced us to micro switches. The main idea in our code was to use a boolean with the switch: when someone touches it once, the light/note turns on and stays on; when someone touches it again, it turns off.

     

Fourth Physical Interactive Choice: Reed Switch

Jim also introduced us to reed switches, which are switches operated by an applied magnetic field, so a magnet is needed to trigger them. The reason we turned to reed switches is that with our conductive fabric, the block had to be placed on the fabric in a certain orientation for the interaction to work; with a reed switch, the interaction works whichever way the individual block is placed on the fabric.

Fifth Physical Interactive Choice: Simple Switch

However, we discovered that if we arrange the conductive fabric so that both strips touch both sides of the conductive fabric on the block at the same time, the interaction works no matter how the block is put down on the fabric. Therefore, given the change of plan from a press interaction to a lift-and-drop interaction, we discarded the boolean code and switched to a basic switch code, where an interaction occurs whenever two conductive surfaces touch. This became the final decision to build on for our final prototype.

P.S. We used a lot of aluminum foil for the various prototypes. We like aluminum foil.

               

Final configurations for different shapes

          

MUSIC

We discovered that with the music simply going incrementally up the pentatonic scale across all five octaves, the result isn't very interesting. So we devised a few iterations of how the scale and octaves should perform, and plugged them into the keyboard code:

The time signature that we ended up with is 2/6

B (xi) 71 D (re) 74 E (mi) 76 G (so) 79 A (la) 81 B (xi) 83
A (la) 69 B (xi) 71 D (re) 74 E (mi) 76 G (so) 79 A (la) 81
G (so) 67 A (la) 69 B (xi) 71 D (re) 74 E (mi) 76 G (so) 79
E (mi) 64 G (so) 67 A (la) 69 B (xi) 71 D (re) 74 E (mi) 76

 

B (xi) 83 A (la) 81 G (so) 79 E (mi) 76 D (re) 74 B (xi) 71
A (la) 69 B (xi) 71 D (re) 74 E (mi) 76 G (so) 79 A (la) 81
G (so) 79 E (mi) 76 D (re) 74 B (xi) 71 A (la) 69 G (so) 67
E (mi) 64 G (so) 67 A (la) 69 B (xi) 71 D (re) 74 E (mi) 76 

 

B (xi) 71 D (re) 74 E (mi) 76 G (so) 79 A (la) 81 B (xi) 83
A (la) 81 G (so) 79 E (mi) 76 D (re) 74 B (xi) 71 A (la) 69
G (so) 79 E (mi) 76 D (re) 74 B (xi) 71 A (la) 69 G (so) 67
E (mi) 64 G (so) 67 A (la) 69 B (xi) 71 D (re) 74 E (mi) 76

 

PHYSICAL MAKING

We considered several variations and arrangements for the project. We initially considered a grid resembling the Tone Matrix; however, we wanted to make the project more mobile in some way, so we thought of various possible interactions, such as placing individual notes separately on a tree so the music would play according to the leaf pressed. We then considered a board with notes on it, activated by placing blocks, and thought of decorating the board in an interesting narrative way so children could read stories while making music. The idea we finally settled on is a round background fabric with variously shaped blocks for interaction.

Making of the Block

We had always wanted to use foam for the shapes; we tested round foam and square foam and decided: why not use various shapes of foam to make the interaction more interesting and engaging? We also used various colours of felt fabric, plus lights, to get a variety of shapes and colours. In total we devised eight shapes and six felt colours, for 24 notes in all. To make each block feel firmer, we combined two pieces of foam to increase its volume.

To decrease the number of digital pins used on the Arduino Mega, we decided to put each light, with a battery holder and battery, directly on the foam itself. On the night before the project was due, we found that the blocks were either not flat enough or not heavy enough for the interaction between the blocks and the table to work reliably; we then decided to put a rock in between the two pieces of foam, and sew!

               

List of sewing for the blocks:
sewing the lights to the battery holder
sewing the fabric into various shapes
sewing the conductive fabric to the back of the shape
sewing the battery holder to the conductive fabric at the back of the shape

     

Making of the Round Fabric 

Trying to make a perfect circle out of a big piece of fabric was a challenge for us. Measuring out forty-eight strips equally was even more challenging on the not-so-perfect circle that we made.

     

Maayan ended up learning how to use a sewing machine and sewed all the individual strips to the round fabric, while Cathy focused on the other parts of the sewing that needed to be done by hand; Cathy has hated sewing machines since grade 8 Home Economics class, when she unintentionally sewed a big hole in her final project, a pair of boxers. Cathy is very thankful to, and proud of, Maayan for dealing with that complicated and sophisticated machine!

We connected the round fabric with conductive fabric, and each end of the conductive fabric has a clasp sewn to it. Each clasp is then soldered to a wire, which connects to the board.

GENERAL ISSUES & CONCERNS

We wanted to hide the technology as much as possible; however, this was difficult because we anticipated many wires hanging down from the round table. We bought two proto boards, and Jim taught us how to connect them together to the Arduino Mega.

     

Array Issue: Given the arrangement of notes, we could use an array of arrays; however, because the number for each note (from the note library provided above) is different and our scale arrangement is irregular, it was hard for us to devise an algorithm for the arrangement and configuration of the notes. We therefore hardcoded each note number in Processing, which also makes it easier for us to change an individual note.
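For reference, one algorithmic alternative would be a flat lookup table from block index (the order the bytes arrive from the Arduino) to MIDI note number, read straight off the final scale arrangement. This is only a sketch of what the hardcoded if-blocks encode, note values only; the real Processing sketch also places each note into a particular chord slot.

```cpp
const int totalSignal = 24;

// MIDI note number for each block index; the four rows of six mirror
// the final scale arrangement in the tables above.
const int noteForBlock[totalSignal] = {
    64, 67, 69, 71, 74, 76,  // E  G  A  B  D  E
    79, 76, 74, 71, 69, 67,  // G  E  D  B  A  G
    81, 79, 76, 74, 71, 69,  // A  G  E  D  B  A
    71, 74, 76, 79, 81, 83,  // B  D  E  G  A  B
};

// Note to play for block i: its pitch while the block is down,
// -1 (silence) once it is lifted.
int noteFor(int i, bool down) {
    return down ? noteForBlock[i] : -1;
}
```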

Surprisingly, using an Arduino Mega was harder than expected in some ways. For some reason, not all the digital pins worked properly once everything was plugged in. We had to test note by note and pin by pin to find which note worked with which pin. Eventually we found that the even-numbered pins cooperated with us better than the odd-numbered ones, so we tested each even-numbered pin one by one to ensure that the arrangement of notes and the interaction made sense.
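The even-pin workaround reduces to a small mapping, sketched here as a helper function (mirroring the pin assignment in the final Arduino code, where the flaky pin 20 is remapped to 23):

```cpp
// Switch i lives on an even-numbered Mega pin, counting down from 52.
// Pin 20 misbehaved for us, so it is remapped to pin 23.
int pinForSwitch(int i) {
    int pin = 52 - i * 2;
    return (pin == 20) ? 23 : pin;
}
```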

     

FINAL ARDUINO CODE

/**
DIGF 6B01 Final Assignment Keyboard Test Version 17
November 29 2011 

*/

const int numOctave = 4;  //number of rows we have
const int numNotes = 6;   //number of columns we have
const int totalSignal = numOctave * numNotes; //total number of switches
int activeButton = 0;

int LEDoutput[totalSignal]; //LED pins, one per switch
int button[totalSignal];    //switch pins, one per switch

void setup() {
    Serial.begin(9600);
    Serial.println(totalSignal);

    for(int i = 0; i < totalSignal; i++) {
      LEDoutput[i] = 2 + i;    //LED pins 2..25
      button[i] = 52 - i * 2;  //even-numbered pins, 52 down to 6
      if (button[i] == 20) {   //pin 20 misbehaved for us, so remap it
        button[i] = 23;
      }
      pinMode(LEDoutput[i], OUTPUT);
      pinMode(button[i], INPUT);
    }
}

void loop() {

  for(int i = 0; i < totalSignal; i++) {

    //read each switch and forward its state to Processing
    activeButton = digitalRead(button[i]);
    Serial.write(activeButton);

    //light the matching LED while the switch is closed
    if(activeButton == HIGH){
      digitalWrite(LEDoutput[i], HIGH);
    } else {
      digitalWrite(LEDoutput[i], LOW);
    }
  }
}

 

FINAL PROCESSING CODE

/**
DIGF 6B01 Final Assignment Keyboard Test Version 13
November 29 2011 

Source: A SoundCipher Library by Andrew R. Brown
*/

import arb.soundcipher.*;  //soundcipher libraries
import processing.serial.*;

int numOctave = 4; // number of rows
int numNotes = 6; // number of columns
int totalSignal = numOctave * numNotes; // total number of notes

int currentByte = 0; // inByte counter
int [] inByte = new int [totalSignal];
/*
a new integer of array is called inByte;
it communicates between arduino and processing; informs on the number of signals
in this case, it is name of the individual notes
*/

int sensorValue; //

Serial myPort;
//boolean [][] on = new boolean [numNotes][numOctave]; //an array an of array
float [][] chord = new float [numNotes][numOctave]; //an array of an array
int scaleCounter = 0; //metronome 

SoundCipher sc = new SoundCipher (this); 

void setup(){
  String portName = Serial.list()[0];
  myPort = new Serial(this, portName, 9600);

  for(int i = 0; i < numNotes; i++){
   for(int j = 0; j < numOctave; j++){
     chord [i][j] = -1;
    // on [i][j] = false;
   }
 }

}

void draw(){

  /*
  sc is the new object. We access the function playChord from sc.
  chord is the entire canvas of notes
  scale counter acts as a metronome that accesses an individual column within the chord.
  chord [] gives you a column
  chord [] [] gives you specific note
  100 is the volume
  0.3 is the length of each note
  */
  sc.playChord(chord[scaleCounter], 100, 0.3);

  /* scaleCounter acts as the metronome: it keeps moving right until it reaches
  the total number of notes (the end of the columns), then goes back to the beginning */
  scaleCounter++;

  if(scaleCounter == numNotes){
    scaleCounter = 0;
  }

  //determines the tempo
  delay(180);
}

void serialEvent(Serial myPort){

  /*
  the system stores each incoming switch state in place using currentByte;
  this ensures multiple notes can be played when the scale counter reaches
  the position of those notes.
  */

  inByte[currentByte] = myPort.read();
  //println(inByte[currentByte]);
  currentByte++;

  /* thus only once the system has received the state of every note (totalSignal
  bytes) does it perform the following tasks */

   if (currentByte == totalSignal) {

   if (inByte[0] > 0) { //if inByte[0], the switch reading, is bigger than 0
        chord[0][0] = 64; //play note 64;
      } else {
        chord[0][0] = -1; //don't play any note
      }

    if (inByte[1] > 0) {
        chord[1][0] = 67;
      } else {
        chord[1][0] = -1;
      }

    if (inByte[2] > 0) {
        chord[0][1] = 69;
      } else {
        chord[0][1] = -1;
      }

 if (inByte[3] > 0 ) {
        chord[1][1] = 71;
      } else {
        chord[1][1] = -1;
      }

 if (inByte[4] > 0) {
        chord[2][0] = 74;
      } else {
        chord[2][0] = -1;
      }

 if (inByte[5] > 0) {
        chord[2][1] = 76;
      } else {
        chord[2][1] = -1;
      }

     ///////////////////////////////

  if (inByte[6] > 0) {
        chord[3][0] = 79;
      } else {
        chord[3][0] = -1;
      }

  if (inByte[7] > 0) {
        chord[3][1] = 76;
      } else {
        chord[3][1] = -1;
      }

  if (inByte[8] > 0) {
        chord[4][0] = 74;
      } else {
        chord[4][0] = -1;
      }

  if (inByte[9] > 0) {
        chord[4][1] = 71;
      } else {
        chord[4][1] = -1;
      }
  if (inByte[10] > 0) {
        chord[5][0] = 69;
      } else {
        chord[5][0] = -1;
      }

 if (inByte[11] > 0) {
        chord[5][1] = 67;
      } else {
        chord[5][1] = -1;
      }

       ///////////////////////////////

 if (inByte[12] > 0) {
        chord[0][2] = 81;
      } else {
        chord[0][2] = -1;
      }      

 if (inByte[13] > 0) {
        chord[1][2] = 79;
      } else {
        chord[1][2] = -1;
      }

 if (inByte[14] > 0) {
        chord[2][2] = 76;
      } else {
        chord[2][2] = -1;
      }

  if (inByte[15] > 0) {
        chord[3][2] = 74;
      } else {
        chord[3][2] = -1;
      }

   if (inByte[16] > 0) {
        chord[4][2] = 71;
      } else {
        chord[4][2] = -1;
      }

 if (inByte[17] > 0) {
        chord[5][2] = 69;
      } else {
        chord[5][2] = -1;
      }
      ////////////////////////////////////////

     if (inByte[18] > 0) {
        chord[0][3] = 71;
      } else {
        chord[0][3] = -1;
      }

    if (inByte[19] > 0) {
        chord[1][3] = 74;
      } else {
        chord[1][3] = -1;
      }

    if (inByte[20] > 0) {
        chord[2][3] = 76;
      } else {
        chord[2][3] = -1;
      }

   if (inByte[21] > 0) {
        chord[3][3] = 79;
      } else {
        chord[3][3] = -1;
      }

   if (inByte[22] > 0) {
        chord[4][3] = 81;
      } else {
        chord[4][3] = -1;
      }

   if (inByte[23] > 0) {
        chord[5][3] = 83;
      } else {
        chord[5][3] = -1;
      }   

    println("1: " + inByte[0] + "    " + inByte[1]+ "    " + inByte[2]+ "    "+inByte[3]+ "    "+inByte[4]+ "    "+inByte[5]);
    println("2: " + inByte[6] + "    " + inByte[7]+ "    " + inByte[8]+ "    "+inByte[9]+ "    "+inByte[10]+ "    "+inByte[11]);
    println("3: " + inByte[12] + "    " + inByte[13]+ "    " + inByte[14]+ "    "+inByte[15]+ "    "+inByte[16]+ "    "+inByte[17]);
    println("4: " + inByte[18] + "    " + inByte[19]+ "    " + inByte[20]+ "    "+inByte[21]+ "    "+inByte[22]+ "    "+inByte[23]);

    currentByte = 0; //after performed all these, current goes to zero
  }

}

FUTURE IMPROVEMENT

We would like to incorporate lights into the project so that the light could indicate the pace of the music.
We are also interested in incorporating letters into the experience, to allow people to deliver interesting messages while interacting in making music based on the pentatonic scale.


AR Project Proposal

Concept:
I am proposing to write software that will use live camera tracking and function as an Augmented Reality application.

I am interested in using location data to add layers on the screen about a specific location, which will allow a person to learn specific pieces of information about that area. The idea is seeing through revealing: while viewing a specific landscape on the tablet, the user can peel back layers overlaid on the screen. Some ideas I currently have are historical photos of the location, crime data, and historical events that have taken place there. I want to create a user experience that is playful but also informative. I also want to play with the idea of Augmented Reality as a pseudo-reality.

Technical:
-iPad
-C++ and OpenFrameworks

Background:
Recently Amazon released their Flow app, which scans products and finds prices, reviews, and similar products in their system. It’s a really smooth way of mixing “reality” with overlays of information.

Similar Work:
ZeitagTO
Bionic Eye
WorkSnug

Dustin’s availability

Last week Dustin told me he would only be available at this week’s lab for about an hour, and that he was going to be out of the country for the next two lab sessions. If I’m not mistaken, that spans most of our final project prep time. Could we get some confirmation of his availability? Kate and Jim, are you ready for lots and lots of questions in his absence?