Serial Communication Tutorials

These tutorials are eye-opening!

Here is a link to my ‘homework’ video:

For the first Call/Response exercise I used two different sensors – tilt and light – with ‘wonky’ results. For the second Call/Response I used two of the same sensor – both light sensors – with much smoother, less erratic results.

Unfortunately my trial copy of Max 6 has expired, so I wasn’t able to try the patches.


The Lightness of Sound

By Jeremy Littler and Mitzi Martinez

What is The Lightness of Sound?

See The Lightness of Sound:


Based on the LUMARCA design created by Albert Hwang, Matt Parker and Eliot Woods, The Lightness of Sound is an installation designed to visualize music in a 3D light display. It adds a new dimension to the musicians’ interaction with their performance: a visual element, displayed in a physical object, that reacts to every note being played.

For the musicians, this completes the sensory experience: by playing they physically touch the instrument to produce notes, they hear the sound, and they see the volumetric display in the WireMap react to their input with changes in the length and color of the light.

For the spectators, it also adds a new dimension: they can see the synergy between music, instrument and musician condensed in a physical object that displays these interactions and the flow of the music in a very visual, almost tactile way.

How does it work?

On the technical side, the goal of the Lightness of Sound project was to integrate a wireless MIDI system (in this case a MIDI guitar) with a projected visualization. Ideally, live music could be performed that blended the visual and auditory elements into a seamless experience for the audience. In theory, the wireless MIDI system would allow musicians to create dynamic musical compositions that influenced the projected content, while at the same time being influenced musically by the visualizations. The core elements of the wireless MIDI system were as follows:

1)   A MIDI guitar with support for battery-based operation.

2)   An Arduino with MIDI input capabilities.

3)   An XBee-based transmission system (for MIDI).

4)   A Processing application that integrated audio and projection playback functions.

5)   A virtual MIDI system that enabled Processing to convert MIDI notes to sound.

MIDI OUTPUT (FOR INFORMATIONAL PURPOSES)

In terms of the hardware/software requirements, Arduinos are easily configured for MIDI output (see example below).


The hardware requirements are a MIDI connector (PRT-09536 from SparkFun), three segments of wire and a 220 ohm resistor. To output MIDI from Processing, you can install the RWMidi library or MidiBus. MIDI content generated by Processing can be patched via Apple’s IAC Driver to GarageBand (or other sound generators), or connected to a USB-MIDI interface (e.g., the Uno) and then taken as an input in the sound generator. There is an example of this approach at Instructables. The process of configuring an Arduino for MIDI input, as was required by this project, is more complex…
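As a minimal sketch of what MIDI output involves (this is a general illustration, not code from the project): a MIDI note-on message is just three bytes, which an Arduino would write to its serial port at the MIDI baud rate of 31250. The helper below builds those bytes; the function name and 0-based channel convention are my own assumptions.

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// A MIDI note-on message is three bytes: a status byte (0x90 | channel),
// the note number (0-127), and the velocity (0-127). On an Arduino the
// same bytes would be sent with Serial.write() after Serial.begin(31250).
std::array<uint8_t, 3> buildNoteOn(uint8_t channel, uint8_t note, uint8_t velocity) {
    return { static_cast<uint8_t>(0x90 | (channel & 0x0F)),  // status: note-on, channel 0-15
             static_cast<uint8_t>(note & 0x7F),              // data bytes must have bit 7 clear
             static_cast<uint8_t>(velocity & 0x7F) };
}
```

A note-off is either a 0x80 status byte or, as many instruments do, a note-on with velocity 0.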


Developing a solution based on MIDI input is more complex than MIDI output. There are essentially two ways to configure an Arduino as a MIDI input device. The least costly option is to breadboard a MIDI IN interface (see below) using basic electronic components.

Source: Notes and Volts:


The parts requirements for this project are:


A 6N138 optocoupler

A 270 ohm resistor

A 220 ohm resistor

A 1N914 diode

A MIDI socket

3 wires

The total cost of these items is under $10.00.

Note: If connected, this circuit will conflict with programming the Arduino, so it must be disconnected while uploading sketches. It would be worth installing a “programming” switch to simplify development.

Despite the circuit being fairly simple, Jeremy spent an entire day attempting to get it to work (see below) without success. Simply put, the circuit would not communicate MIDI information to the Arduino. Perhaps others will have more luck with it. It’s worth noting that comments online suggest others were also unable to get this circuit to function. It’s possible that changes to the Arduino environment or hardware have rendered this circuitry inoperative.

The second method (as used in this project) of enabling MIDI IN on an Arduino is to attach the SparkFun MIDI Shield to the Arduino (PRT-09595 from SparkFun). However, it’s worth noting that this module (which handily includes a programming/running switch) does block access to the RX/TX headers on the Arduino. This presents a challenge when combining MIDI Input with an XBee via an Arduino.

To receive notes from the MIDI shield an Arduino library is required: specifically, the Arduino MIDI Library (version 3.2+). Because the MIDI shield takes over the serial communications subsystem on the Arduino, it is not possible to simultaneously use the MIDI shield and an XBee over the hardware serial port. To overcome this limitation, the Arduino can be configured to communicate with the XBee using the SoftwareSerial library. However, this limits the baud rate of the XBee communication to a maximum of 9600 bps.

Source: SparkFun

The Arduino code for the MIDI IN project is as follows:


// Wireless Midi to Xbee
// By Jeremy Littler
// Using 2 Key Libraries. See below
// Note: on the Yamaha EZ-AG the Note Range is 41-76
// ****************

//   ooo    ===================
//   ====================[]===[]=====   ROCK ON!
//   ooo    ===================

// SoftwareSerial Library from Arduino
// Arduino Midi Library from fortyseveneffects (at) gmail (dot) com

// To overcome the Sparkfun MIDI adapter taking over
// the physical serial port, use SoftwareSerial
// to communicate with the XBee.

// Don't forget that the MIDI Shield needs to
// be manually switched (near the MIDI plugs)
// between Program and Operation mode.

#include <SoftwareSerial.h>
#include <MIDI.h>  // Add Midi Library

// LED is defined to determine (easily) the MIDI Value
#define LED 13    // Arduino Board LED is on Pin 13

// Use SoftwareSerial to talk to the XBee.
// Pin 11 outputs the MIDI value: connect a jumper wire
// from Pin 11 on the Arduino to the XBee station.
SoftwareSerial myXbeeSerial(10, 11); // RX, TX

String MidiNoteString = "02#";

// Channel (Ignored), Pitch (Kept), and Velocity (Ignored)
void MyHandleNoteOn(byte channel, byte pitch, byte velocity) {
  // Have received data, so prepare it to be printed to the XBee
  MidiNoteString = "02#" + String(pitch);
  if (velocity == 0) { // A NOTE ON message with velocity zero is actually a NOTE OFF
    digitalWrite(LED, LOW); // Turn LED off
    MidiNoteString = "02#" + String("999");
  } else {
    digitalWrite(LED, HIGH); // Turn LED on
  }
  myXbeeSerial.println(MidiNoteString); // Forward the packet to the XBee
}

void setup() {
  pinMode(LED, OUTPUT); // Set Arduino board pin 13 to output
  myXbeeSerial.begin(9600); // SoftwareSerial is only reliable up to 9600 bps
  // OMNI sets it to listen to all channels; MIDI.begin(2) would listen on channel 2 only
  MIDI.begin(MIDI_CHANNEL_OMNI); // Initialize the Midi Library
  MIDI.turnThruOff(); // Kill Thru mode as it hogs performance (Sysex isn't needed either)
  // Register a function to be called for Note On messages
  // (Note Off arrives as a Note On with velocity 0 on this guitar)
  MIDI.setHandleNoteOn(MyHandleNoteOn);
}

void loop() { // Main loop: continually check what MIDI commands have been received
  MIDI.read();
  // myXbeeSerial.println("Hello, world?");
}

Two XBee units are configured using CoolTerm (in OS X) with the following settings:

XBee Receiver:

Pan ID: 2171 - ATID2171 (XBee 1 and 2 must have the same Pan ID).



XB (MIDI) Sender:

Pan ID: 2171 - ATID2171



Both XBee units should be set to a data rate of 9600 bps (ATBD3 in CoolTerm).
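For reference, a CoolTerm session along these lines would apply the settings above on each XBee (the +++ guard sequence, ATWR write and ATCN exit are standard XBee AT-command conventions; ATBD3 selects 9600 bps):

```
+++            (pause ~1 s; XBee replies OK and enters command mode)
ATID2171       set the Pan ID to 2171 (must match on both units)
ATBD3          set the serial baud rate to 9600 bps
ATWR           write the settings to non-volatile memory
ATCN           exit command mode
```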

 One XBee is wired to the headers on the SparkFun MIDI shield, as follows:

  • Pin 11 to Pin 3 on the XBee
  • +5V to Pin 1 on the XBee
  • GND to GND on the XBee

We reworked the Processing sketch created by Matt Parker, changing the display and adding the MIDI read functionality. The project visualization that we used is the following:



Wiremap Renderer for 2 Globes


For more information on the project please visit:

This program builds two separate 3d globes. I have two separate functions (& sets of variables) because I haven't quite yet figured out how to call a function twice. Elementary stuff, I know, but I'll get to it when I can.

Some conventions to be aware of:

1 - This particular program builds in millimeters.
2 - Also, I use a left-handed coordinate system, with positive X going right, positive Y going up, and positive Z going forward.

// Fullscreen stuff:

/* Variable declarations */

/* Physical Wiremap, in inches */

float depth = 70.0; // The mapline lies 70 inches from the projector's focal point
float map_length = 32.0; // The mapline is 32 inches wide
float depth_unit = 0.500; // Each depth unit is half an inch
float map_unit = 0.7; // Each mapline unit is 0.7 inches
int wire = 64; // There are 64 wires in this Wiremap
float depth_thickness = 30.0; // How deep is the field (perpendicular to the mapline)
/* Projector */

float ppi = 40; // Pixels per inch (unit conversion). Only true for the mapline plane
int string_pix_count = 9; // How many columns of pixels are being projected on each string // Mitzi's comments: here I changed the thickness of the lines.
/* Map */

float[] map = new float[wire]; // example: map[0] = 90 means that the first wire is 45 inches in front of the mapline
float[] x_by_ind = new float[wire]; // x coordinate for each wire
float[] z_by_ind = new float[wire]; // z coordinate for each wire
/* Globe A */

float[] globe = new float[3]; // globe x,y,z coords
float radius = 31.00; // default globe radius
int dot_height = 15; // height of surface pixels.
boolean render_globe = true; // toggle globe rendering

/* Key input */

float step = .2; // how far the globe moves / button press
boolean mouse = true; // is mouse clicked?
int colorval_r = 200; // red
int colorval_g = 100; // green
int colorval_b = 255; // blue
boolean xpin = false; // the mouse controls the globe's y & z axis
boolean ypin = true; // x & z
boolean zpin = false; // x & y
int start_time = 0; // for beat mapper
int end_time = 0; //
float beat_multiplier = 1; // multiplies frequency of how often beat hits


/* Beat Mapper Variables */

int[] last_32 = new int[32]; // last 32 times the spacebar has been pressed
int times_struck; // number of times spacebar struck since timer was reset
int first_strike; // millis value for when timer was reset
int period = 500; // time between beats (this is the metronome)
int offset = 1; // how far away in time we are from the last beat
/* wave variables */
int trail = 350; // number of iterations of past mouse clicks we keep
int[] click_time = new int[trail]; // array of times (millis) associated w/ clicks
int[] click_x = new int[trail]; // array of x locales for clicks
int[] click_y = new int[trail]; // array of y locales for clicks
float[] click_x_trans = new float[trail]; // translations from mouse x to xyz
float[] click_y_trans = new float[trail]; // translations from mouse y to xyz
float amplitude = .6; // amplitude of waves
int decay = 3000; // how long it takes for waves to die down
float wave_velocity = .035; // inches/milliseconds
int trail_frequency = 10; // milliseconds - NOT frequency of wave, but how often a new value gets pushed into the trail arrays (above)
int trail_cycle_count; // this gets updated once every (trail_frequency)
int trail_cycle_count_compare; // this is used to check to see if we need a new value
int water_color = 0; //

float plane_angle = .0; // the angle of the plane of the water (think m in y = mx + b)
float plane_intercept = 0; // where the plane intersects the origin (think b in y = mx + b)
import processing.opengl.*;
// ******* JL CODE *******
import rwmidi.*;
MidiOutput output;
import processing.serial.*;
Serial myPort; // Create object from Serial class
String val ="0"; // Data received from the serial port
String CurrentNote = "0"; // Current Note in the Serial Buffer
String LastNotePlayer = "0"; // Last Note Played
int PitchRanged = 0; // Converts the Midi Note Value to a More Predictable Range Value

String inBuffer = ""; // Serial Port Value Received
String postinBuffer; // Used to Trim WhiteSpace etc.
String line;
// Used to Control the Color of the Tips in the Project Window
int tipR = 200;
int tipG = 70;
int tipB = 205;
float tipvalue=0;
int GlobeTip = 255;

// Controls the Globe Movement Across the X Axis
float midi_mousex = 0;
// Controls the Globe Movement Across the Y Axis
float midi_mousey = 0;
// Total Notes Played. This creates an iterative environment
int totalnotesplayed = 0;
// Controls
float midi_mouse_glide_x = 0;
float midi_mouse_glide_y = 0;

// END JL CODE *******



static public void main(String args[]) {
PApplet.main(new String[] { "--present", "olasverdes" });
}

void setup() {

// List The Serial Devices

// Soft Serial can only send at 9600 safely. Therefore, the Xbee has
// Been Configured to 9600 Baud.
myPort = new Serial(this, Serial.list()[0], 9600);
// Configures The System for Midi
// creates a connection to IAC as an output
output = RWMidi.getOutputDevices()[0].createOutput(); // talks to garageband

size(displayWidth, displayHeight);
//size(displayWidth, displayHeight, OPENGL);
colorval_r = 25; //Mitzi's notes: color of the globe that floats over the wave
colorval_g = 200; //Mitzi's notes: color of the globe that floats over the wave
colorval_b = 100; //Mitzi's notes: color of the globe that floats over the wave

loader(); // load the wire map data for this display
}

void draw() {
fill(0); //Mitzi's notes: this is the background color
rect(0, 0, width, height);
sineSurface(); // render the wave surface and globes each frame
}

void sineSurface() {

/* trail_frequency appends clicks to the mouse trail arrays */

int remainder = millis() % trail_frequency;
trail_cycle_count = (millis() - remainder) / trail_frequency;
if (trail_cycle_count != trail_cycle_count_compare) {
trail_cycle_count_compare = trail_cycle_count;

// JL Changed
append_click(int(midi_mouse_glide_x), int(midi_mouse_glide_y));
// append_click(mouseX, mouseY);
}

float[] time_since_click = new float[trail]; // the difference between now and the array of clicks
float[] amp_modifier = new float[trail]; // the amp according to decay and time since click
float[] distance_since_pass = new float[trail]; // the distance since the head of the wave has passed the string
float[] distance_since_pass_fraction = new float[trail]; // the distance gets multiplied by a fraction for beat mapping (period)
float[] time_since_pass = new float[trail]; // amount of time that has passed since head of wave & wire intersection
float[] wave_head_distance = new float[trail]; // distance between epicenter and head of wave
float[] amp = new float[trail]; // amplitude of wave @ wire point according to mouse movement & beatmapping

/* for each wire... */

for(int i=0; i<wire; i+=1) {
float final_amp = z_by_ind[i]*plane_angle + plane_intercept ; // the baseline for the final amplitude is an upward slope when looking @ the wiremap... used y = mx + b

for(int x = 0; x < trail; x ++ ) {

float local_hyp = sqrt(sq(x_by_ind[i] - click_x_trans[x])+sq(z_by_ind[i] - click_y_trans[x]));
time_since_click[x] = millis() - click_time[x];
wave_head_distance[x] = time_since_click[x] * wave_velocity;
distance_since_pass[x] = wave_head_distance[x] - local_hyp;
distance_since_pass_fraction[x] = distance_since_pass[x] / float(period / 6);
time_since_pass[x] = distance_since_pass[x] / wave_velocity;
if (time_since_pass[x] > 0 && time_since_pass[x] < decay) {
amp_modifier[x] = time_since_pass[x] / decay - 1;
} else {
amp_modifier[x] = 0;
}
amp[x] = - amplitude * amp_modifier[x] * sin(2 * PI * distance_since_pass_fraction[x]);

final_amp = final_amp + amp[x];
}

float y_top_coord = final_amp;
float y_bot_coord = -20;
float y_top_proj = y_top_coord * depth / z_by_ind[i]; // compensate for projection morphing IN INCHES
float y_bot_proj = y_bot_coord * depth / z_by_ind[i];
float y_height_proj = y_top_proj - y_bot_proj;
// JL Modified - Added Variables for TipR, TipG and TipB
fill(tipR,tipG,tipB); //Mitzi's notes: here you change the color of the tip in every line // draw a rectangle at that intersect

//int tipR = 200
//int tipG = 70
//int tipB = 205
// rect 1 is top dot for sliver
float left1 = i * (width) / wire;
float top1 = (height/ppi - y_top_proj) * ppi;
float wide1 = string_pix_count;
float tall1 = dot_height;
rect(left1, top1, string_pix_count, tall1); // draw a rectangle at that intersect

// rect 3 is filler for sliver
fill(200, 255, water_color); // Mitzi's notes: here you change the color of the wave.

float left3 = i * (width) / wire;
float top3 = (height/ppi - y_top_proj) * ppi + dot_height;
float wide3 = string_pix_count;
float tall3 = y_height_proj * ppi - (dot_height * 2);
rect(left3, top3, string_pix_count, tall3); // draw a rectangle at that intersect

// Jeremy Changed Code
float globe_x = (midi_mousex / float(width)) * (map_length) - (map_length / 2);
// float globe_x = (mouseX / float(width)) * (map_length) - (map_length / 2);
float globe_z = depth - (midi_mousey) / float(height) * (depth_thickness);
// float globe_z = depth - (mouseY) / float(height) * (depth_thickness);

float y_amp = globe_z*plane_angle + plane_intercept;
for(int x = 0; x < trail; x ++ ) {

float local_hyp = sqrt(sq(globe_x - click_x_trans[x])+sq(globe_z - click_y_trans[x]));
time_since_click[x] = millis() - click_time[x];
wave_head_distance[x] = time_since_click[x] * wave_velocity;

distance_since_pass[x] = wave_head_distance[x] - local_hyp;
distance_since_pass_fraction[x] = distance_since_pass[x] / float(period / 6);
time_since_pass[x] = distance_since_pass[x] / wave_velocity;
if (time_since_pass[x] > 0 && time_since_pass[x] < decay) {
amp_modifier[x] = - time_since_pass[x] / decay + 1;
} else {
amp_modifier[x] = 0;
}
amp[x] = - amplitude * amp_modifier[x] * sin(2 * PI * distance_since_pass_fraction[x]);

y_amp = y_amp + amp[x];
}
float globe_y = y_amp;
float radius = (sin(TWO_PI * float((millis() - offset) % period) / float(period)) + 1) / 2;
if (render_globe == true) {
gen_globe(globe_x, -globe_y, globe_z, 5);
//println(globe_x + " " + globe_y);
}
}
}
void append_click(int local_mouseX, int local_mouseY) {
click_time = subset(click_time, 1);
click_x = subset(click_x, 1);
click_y = subset(click_y, 1);
click_x_trans = subset(click_x_trans, 1);
click_y_trans = subset(click_y_trans, 1);
click_time = append(click_time, millis());
click_x = append(click_x, local_mouseX);
click_y = append(click_y, local_mouseY);
click_x_trans = append(click_x_trans, (local_mouseX / float(width)) * (map_length) - (map_length / 2));
click_y_trans = append(click_y_trans, depth - (local_mouseY) / float(height) * (depth_thickness));
}

void gen_globe(float x, float y, float z, float rad) {
for(int i = 0; i < wire; i += 1) {
if((x_by_ind[i] >= (x - rad)) && (x_by_ind[i] <= (x + rad))) { // if a wire's x coord is close enough to the globe's center
float local_hyp = sqrt(sq(x_by_ind[i] - x) + sq(z_by_ind[i] - z)); // find the distance from the wire to the globe's center
if(local_hyp <= rad) { // if the wire's xz coord is close enough to the globe's center
float y_abs = sqrt(sq(rad) - sq(local_hyp)); // find the height of the globe at that point
float y_top_coord = y + y_abs; // find the top & bottom coords
float y_bot_coord = y - y_abs; //
float y_top_proj = y_top_coord * depth / z_by_ind[i]; // compensate for projection morphing
float y_bot_proj = y_bot_coord * depth / z_by_ind[i];
float y_height_proj = y_top_proj - y_bot_proj;
/* Top dot */
fill(colorval_r, colorval_g, colorval_b); // Mitzi's notes: here you can change the color of the globe as well. Fill the globe pixels this color
float left1 = i * (width) / wire;
float top1 = (height/ppi - y_top_proj) * ppi + dot_height; // ppi = pixel / mm. These are conversions to & from pixels and mm
float wide1 = string_pix_count;
float tall1 = y_height_proj * ppi - (dot_height * 2);
rect(left1, top1, wide1, tall1);
fill(GlobeTip); // Mitzi's notes: here you change the color of the tip of the globe. If you put fill(255) it will be white.

/* Top Surface */
float left2 = i * (width) / wire;
float top2 = (height/ppi - y_top_proj) * ppi;
float wide2 = string_pix_count;
float tall2 = dot_height;
rect(left2, top2, wide2, tall2);

/* Bottom Surface */
float left3 = i * (width) / wire;
float top3 = (height/ppi - y_bot_proj) * ppi - dot_height;
float wide3 = string_pix_count;
float tall3 = dot_height;
rect(left3, top3, wide3, tall3);
}
}
}
}


void mousePressed() {
if (mouseButton == LEFT) {
append_click(mouseX, mouseY);
append_click(mouseX, mouseY);
append_click(mouseX, mouseY);
append_click(mouseX, mouseY);
append_click(mouseX, mouseY);
append_click(mouseX, mouseY);
append_click(mouseX, mouseY);
} else if (mouseButton == RIGHT) {
if (water_color == 255) {
water_color = 0;
} else {
water_color = 255;
}
}
}

void keyPressed() {

/* Globe A */
if (true == true) {
if (key == 'w') { // adds value to the dimension that the mouse cannot move in
if (xpin == true) {
globe[0] = globe[0] + step;
} else if (ypin == true) {
globe[1] = globe[1] + step;
} else if (zpin == true) {
globe[2] = globe[2] + step;
}
} else if (key == 's') {
if (xpin == true) { // subtracts value from the dimension that the mouse cannot move in
globe[0] = globe[0] - step;
} else if (ypin == true) {
globe[1] = globe[1] - step;
} else if (zpin == true) {
globe[2] = globe[2] - step;
}
} else if (key == 'e') { // adds to radius
radius = radius + step;
} else if (key == 'd') { // subtracts from radius
radius = radius - step;
} else if (key == 'a') { // allows mouse control for radius (hold down 'a' and bring mouse up or down)
radius = (height - mouseY) * .8;
mouse = false;
} else if (key == 'q') { // stops ball in place so that you can pop it somewhere else
mouse = false;
} else if (key == 'z') { // color control (hold down buttons and bring mouse up or down)
colorval_r = (height - mouseY) * 255 / height;
} else if (key == 'x') {
colorval_g = (height - mouseY) * 255 / height;
} else if (key == 'c') {
colorval_b = (height - mouseY) * 255 / height;
} else if (key == 'v') {
colorval_r = (height - mouseY) * 255 / height;
colorval_g = (height - mouseY) * 255 / height;
colorval_b = (height - mouseY) * 255 / height;
} else if (key == '1') { // x y z pin switches
xpin = true;
ypin = false;
zpin = false;
} else if (key == '2') {
xpin = false;
ypin = true;
zpin = false;
} else if (key == '3') {
xpin = false;
ypin = false;
zpin = true;
} else if (key == 't') { // beat mapper buttons - start, stop, effects, and multipliers
start_time = millis();
} else if (key == 'y') {
end_time = millis();
period = end_time - start_time;
offset = start_time % period;
} else if (key == 'g') {
beat_multiplier = 1;
} else if (key == 'h') {
beat_multiplier = 2;
} else if (key == 'j') {
beat_multiplier = 4;
} else if (key == 'k') {
beat_multiplier = 8;
} else if (key == 'b') {
if (render_globe == false) {
render_globe = true;
} else {
render_globe = false;
}
}
}
}
void keyReleased() {
if (mouse == false) {
mouse = true;
}
if (key == ' ') {
if (millis() - last_32[31] > 1500) {
last_32[31] = 0;
}
last_32 = subset(last_32, 1);
last_32 = append(last_32, millis());
times_struck = 32;
first_strike = last_32[0];
for (int i = 31; i >= 0; i--) {
if (last_32[i] == 0) {
times_struck = 31 - i;
first_strike = last_32[i+1];
break;
}
}
if (times_struck > 1) {
period = (last_32[31] - first_strike) / (times_struck - 1);
offset = last_32[31];
}
}
}
void loader() { // loads data for this particular wiremap
map[0] = 15;
map[1] = 13;
map[2] = 0;
map[3] = 29;
map[4] = 37;
map[5] = 6;
map[6] = 31;
map[7] = 14;
map[8] = 9;
map[9] = 0;
map[10] = 12;
map[11] = 24;
map[12] = 3;
map[13] = 26;
map[14] = 39;
map[15] = 18;
map[16] = 3;
map[17] = 28;
map[18] = 11;
map[19] = 18;
map[20] = 1;
map[21] = 20;
map[22] = 24;
map[23] = 8;
map[24] = 7;
map[25] = 22;
map[26] = 17;
map[27] = 34;
map[28] = 37;
map[29] = 1;
map[30] = 23;
map[31] = 10;
map[32] = 2;
map[33] = 33;
map[34] = 6;
map[35] = 34;
map[36] = 27;
map[37] = 12;
map[38] = 19;
map[39] = 25;
map[40] = 11;
map[41] = 14;
map[42] = 5;
map[43] = 15;
map[44] = 27;
map[45] = 4;
map[46] = 25;
map[47] = 8;
map[48] = 32;
map[49] = 35;
map[50] = 7;
map[51] = 30;
map[52] = 21;
map[53] = 4;
map[54] = 16;
map[55] = 2;
map[56] = 20;
map[57] = 17;
map[58] = 38;
map[59] = 22;
map[60] = 32;
map[61] = 36;
map[62] = 30;
map[63] = 10;

for(int i = 0; i < trail; i ++ ) {
click_time[i] = 0 - (i * 500);
}
for(int j=0; j<wire; j++) { // calculate x and z coordinates of each wire
float xmap = (0 - (map_length / 2)) + j*map_unit;
float hyp = sqrt(sq(xmap) + sq(depth));
z_by_ind[j] = depth - map[j]*depth_unit;
x_by_ind[j] = xmap - xmap*map[j]/hyp*depth_unit;
}
}


void serialEvent (Serial myPort) {

try {

// The Midi Guitar - EZ AG I am using
// Produces Midi Note Values from a
// Total Range of
// Open E = 52
// High E on 6th String = 88
// You will need to adjust these values (e.g., using Processing's map() function)
// to reflect your particular MIDI device
// 1st (Low) 52 Open - 64
// 2nd 57 Open to 69
// 3rd 62 open to 74
// 4th 67 open to 79
// 5th 71 open to 83
// 6th 76 open to 88
inBuffer = myPort.readStringUntil('\n');
postinBuffer = trim(inBuffer);
CurrentNote = postinBuffer.substring(3,5);
//CurrentNote = trim(CurrentNote);
// send a note 200 mS long
int duration = 200;
int note = 0;
int channel = 1; // GB seems to ignore this, but the method needs it

// Definitely wrap this code or there may be catastrophic exceptions if the serial port gets overloaded

try {

if (CurrentNote == null) {
} else {

if (CurrentNote.equals("99")) {
int velocity = 0;
note = 0;
int success = output.sendNoteOn(channel, note, velocity); // sendNoteOn returns 0 = fail, 1 = success
delay(duration); // note length -see keys below for better solution
success = output.sendNoteOff(channel, note, velocity); // sendNoteOff returns 0 = fail, 1 = success
note = 0;

} else {

// This is protected by a try block to ensure the sketch never locks up

try {

note = int(CurrentNote); // midi notes are numbered 0 -127 -- not all notes are played by all voice
LastNotePlayer = (CurrentNote); // Store the last note in a variable
int velocity = 127; // range 0-127, 127= max
int success = output.sendNoteOn(channel, note, velocity); // sendNoteOn returns 0 = fail, 1 = success
delay(duration); // note length -see keys below for better solution
success = output.sendNoteOff(channel, note, velocity); // sendNoteOff returns 0 = fail, 1 = success
totalnotesplayed = totalnotesplayed + 1;
// Get the Integer Value of the Note

} catch (Exception e) {
}

try {

// Sets the Color Value of the Tip
tipR = int(map(note, 52, 88, 150, 255));
tipG = int(map(note, 52, 88, 30, 90));
tipB = int(map(note, 52, 88, 0, 10));

// Displays Red at the Highest points of the scale
if (note >= 80) {
tipR = 255;
tipG = 0;
tipB = 0;
}
// Note Yellow Value = tipR = 255, tipG = 255, tipB = 0;

// Moves the Globe along the X-Axis
midi_mousex = int(map(note, 52, 88, 500, 2100));
midi_mousex = midi_mousex + log(random(50));
midi_mousey = int(map(note, 52, 88, 200,600));
midi_mousey = midi_mousey + log(random(50));

// Sets the Amplitude of the Waves
// High Frequency notes have greater impact on amplitude
//if (note < 65) {
int ampMap = int(map(note, 52, 88, 0, 50));
amplitude = ampMap / 50.0 + .9; // divide by 50.0 - integer division would always yield 0 here

//} else {
//int ampMap = int(map(note, 52, 88, 0, 50));
//amplitude = ampMap/50 + .9 + random(0, .65);

// Controls the X and Y movements of the Main Bars
midi_mouse_glide_x = int(map(note, 52, 88, 300, 500));
midi_mouse_glide_y = int(map(note, 52, 88, 100, 300));
midi_mouse_glide_x = midi_mouse_glide_x + random(200, 300);
midi_mouse_glide_y = midi_mouse_glide_y + random(200, 300);
// Color of the Globe
// This creates a Red Value. Impacted by Note Value
colorval_r = 255;
colorval_g = 0;
colorval_b = int(map(note, 52, 70, 255, 0));
// The Color of the Bars. Mapped from Note Value
water_color = int(map(note, 52, 88, 0, 255));

// Control the decay rate of the bars. These are mapped to the note value.
// The total notes played also affects the decay value.
// Assign to the globals - re-declaring them here would only create unused locals.
decay = int(map(note, 52, 88, 3000, 6000));
decay = decay - totalnotesplayed;

// Control the wave velocity and trail frequency (also globals)
wave_velocity = map(note, 52, 88, 0.035, 2);
trail_frequency = int(map(note, 52, 88, 30, 50));

} catch (Exception e) {
}

}
}

} catch (Exception e) {
}

} catch (Exception e) {
}
}




 The wireless MIDI system should be placed in a robust but portable project case (see example below):

The instrument selected for this project was a Yamaha EZ-AG MIDI Guitar. The EZ-AG is no longer in production. However, any MIDI instrument is a potential candidate, assuming that the device can be battery powered.

Processing is not a particularly flexible tool for sound playback scenarios. Therefore, the MIDI information will likely need to be “piped” using a virtual MIDI configuration (e.g., via Audio MIDI Setup in OS X) to GarageBand or a similar MIDI-capable DAW. The process for wiring up GarageBand specifically (though this could apply to other audio applications) is presented in detail at Hex705. This element was essential to getting the “Pad” playback sounds required for the Lightness of Sound project.



To build the WireMap for the display, Mitzi followed this instructable:

We wanted to create smaller versions of it, but the wiring for them proved to be a real challenge: the smaller the version, the more difficult it is to place a wire without moving the one next to it.

Mitzi built 2 prototypes before settling on a final version that is slightly smaller than the one described in the Instructable.


The biggest limitation of the approach identified is latency, as the SoftwareSerial library limits the data rate to 9600 bps. With roughly 10 bits per byte on the wire, that is about 1 ms per character, so a packet such as “02#60” plus its line ending costs several milliseconds before any MIDI processing begins. Players must be careful to avoid overloading the MIDI buffer. The next iteration of this project would determine whether a custom-built MIDI input module would improve MIDI transmission speeds and, therefore, reduce latency issues. Building a custom module would also significantly reduce the cost of each transmission system and, therefore, enable more instruments to be employed.

During testing the wireless MIDI circuit performed flawlessly for over 4 hours. However, on the day of the presentation the device became extremely unreliable (Murphy’s Law!). This may have been due to the number of XBee units in the performance space, as a large number of projects were being exhibited at the time. The lesson from this experience is that wireless communications can be very unpredictable. To compensate, attention should be paid to isolating the XBee units from cross-frequency traffic by modifying the XBee channel settings (there are 16 frequency bands) and by verifying that the devices have unique Pan IDs. As each XBee has a pre-programmed serial number, it is also possible to add this information to the Processing code to verify that the data packets are coming from the correct XBee units. Finally, it would have been prudent to add cyclic redundancy checks (CRC)/error checking to the communication routines. This would eliminate spurious note data.
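As a sketch of what that error checking could look like (the “02#&lt;note&gt;” packet format is the one used above, but the checksum scheme itself is an assumption, not something implemented in this project): the sender appends a simple XOR checksum to each packet, and the receiver drops any packet whose checksum does not recompute. The ‘*’-plus-two-hex-digits convention is borrowed from NMEA 0183.

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: XOR all payload bytes into one checksum byte,
// rendered as two hex digits so the packet stays printable ASCII.
// The Arduino would send appendChecksum("02#60"); Processing would
// call verifyChecksum() and discard the note on a mismatch.
std::string appendChecksum(const std::string& payload) {
    unsigned char sum = 0;
    for (char c : payload) sum ^= static_cast<unsigned char>(c); // XOR over every payload byte
    const char hex[] = "0123456789ABCDEF";
    std::string out = payload;
    out += '*';
    out += hex[sum >> 4];   // high nibble
    out += hex[sum & 0x0F]; // low nibble
    return out;
}

bool verifyChecksum(const std::string& packet) {
    size_t star = packet.rfind('*');
    if (star == std::string::npos || star + 3 != packet.size()) return false; // malformed
    return appendChecksum(packet.substr(0, star)) == packet; // recompute and compare
}
```

An XOR checksum only catches single-byte corruption reliably; a true CRC (e.g., CRC-8) would be stronger, but even this would have filtered out most of the spurious notes seen during the performance.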

In terms of the WireMap, building it in foam core allowed us to move the wires to calibrate it more easily than if it had been constructed in acrylic. However, its fragility also made it easy to bow, causing some of the wires to miss the light of the projector.


We started the project with the idea of building 2 or more WireMaps that could communicate with each other via XBees and react to user interaction. However, the difficulty of building smaller versions in foam core brought us to a better concept: building a bigger one that could react to music.

For the future we foresee that the ability to implement a wireless MIDI approach opens up virtually unlimited performance possibilities. In addition to utilizing existing MIDI technologies it is possible to design entirely new processes for eliciting audience interaction.  The first step in exploring the performance options would be to investigate the range of MIDI instruments currently available. All MIDI instruments are considered to be “controllers”. However, the devices tend to be used primarily as instruments or as dedicated input controllers (see examples below):

While the above MIDI controllers are physical devices, it is possible to represent their functionality in virtual form (i.e., in code).  This is significant as physical MIDI devices are often expensive and have strict power requirements (e.g., many MIDI devices are not portable).


The option of implementing MIDI controllers on mobile devices (tablets in particular) significantly lowers the cost of providing large numbers of devices to “performers”. In addition, “virtual” MIDI devices can be programmed with innovative control surface designs. There are many virtual MIDI SDKs available that enable custom control surface modelling. At present, the Jazz Mutant Lemur is considered to be amongst the most flexible (see below), but designers are free to choose from a very large palette of audio/MIDI design components. Virtual MIDI devices typically communicate performance information via Open Sound Control (OSC) or via dedicated MIDI interfaces. However, MIDI’s compactness enables it to be easily routed through any communication protocol.
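That compactness is easy to see in code (a minimal sketch; this helper is illustrative and not taken from any particular MIDI library):

```java
public class MidiMessage {
    // A channel-voice MIDI message is at most three bytes: a status byte
    // (high bit set, low nibble = channel) plus two 7-bit data bytes. That
    // makes it trivial to embed in any transport payload (XBee frame,
    // OSC blob, plain serial stream, ...).
    static byte[] noteOn(int channel, int pitch, int velocity) {
        return new byte[] {
            (byte) (0x90 | (channel & 0x0F)), // status: Note On + channel
            (byte) (pitch & 0x7F),            // data 1: note number
            (byte) (velocity & 0x7F)          // data 2: velocity
        };
    }

    public static void main(String[] args) {
        byte[] msg = noteOn(0, 60, 100); // middle C on channel 1
        System.out.printf("%02X %02X %02X%n", msg[0] & 0xFF, msg[1], msg[2]); // 90 3C 64
    }
}
```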

Finally, it’s possible to develop entirely novel instruments and control surfaces for musical interaction. Custom instruments can be simple or complex, standalone or interconnected. Instruments can even be implemented as sculptural elements.





 Is it possible to create a collective musical/visual experience by providing the audience with MIDI instruments?


The participatory audio concept for the Lightness of Sound project is a physical environment where the line between audience and performers is blurred. It is envisioned as a space where primary performers (i.e., a “band” or DJs) encourage their “audience” to create a collective music experience. This real-time generative musical environment would be visibly projected. See below:

It is hoped that this environment would encourage the audience to create complex and evolving musical performances. The projected content would act as a key element in the feedback loop as it would encourage the audience to push themselves musically.


One of the concepts for the project was to implement a class of MIDI instrument based on didgeridoos. As this physical instrument requires significant practice (breathing control skills etc.) to become proficient, the concept was adapted to enable a more approachable and portable solution (see below):

Source: Jeremy Littler. Concept for Electronic Didgeridoo

Additional instruments would be developed to support rhythmic control (portable drum pads and triggers) and melodic performance instruments. These items would be distributed to the “audience” in the performance space.  The performers would also be encouraged to download an application to their portable devices that would enable them to contribute to both the sound scape and projected elements via a virtual MIDI controller. Finally, sensors would be installed to pick-up the audience’s motion, temperature etc. This information would alter the soundscape and the visualization at the same time.

Weatherbe (working title)

WEATHERBE – by Demi Kandylis, Ryan Rizzo and Heather Phenix


Our inspiration for the Weatherbe project was informed by nature, and the desire to render the weather into an artistic and comparative visualization of data from a specific set of cities across the globe. But why? For one, we found the important connection between weather and mood in humans to be intriguing, and the notion of how the relationship with one’s self can be deeply affected by weather patterns that are very much beyond our control. We thought about how weather is an increasingly common topic of conversation – often the basis of small talk – and more often the basis of important discussions surrounding disaster and the never-ending influx of opinions on global warming. It’s often the marker of how favourable a city is to live in, how we relate to each others’ state of being, whether together or apart, and how we decide on anything from which modes of transport to use, to how many layers to wear on a daily basis. Weather, in some ways, can be considered as central to how we live our lives as time.


When thinking about what kind of data visualizations could do justice to such a force, we realized that sometimes the most simple, uniform things can be the most effective. Our data sets needed to be representative of motion in an interesting way. We delved into creating mesmerizing simplicity of changing and interconnected shapes, and were inspired by Matthew and Susan Gorbet’s Chronos and Kairos project,  “a robotic suspended sculpture located at Gate 19 of the San Jose International Airport. An abstract timepiece. The choreographed movement of 65 suspended aluminum elements depict varying representations of time.”

Overall, we set out to play with temperature, wind speed, humidity and wind direction. We needed to create sketches that could build – increase and decrease – over time when fed data from the weather APIs of cities of our choice…which brought us to our next question…which cities?

We thought in terms of our audience – the most relatively interesting cities would naturally be those of our classmates…displaying the mild, November humidity of Tehran, juxtaposed with the icy, snowy winds of Saskatoon, the dry desert heat of Beersheba, and the moderately cold Toronto would surely hit the mark.

Additional inspirations below:

Data vis- modern approaches

Data vis processing code from Ben Fry (author of visualizing data)

Khan Academy

IBM think Exhibit Data Wall

Amon Tobin’s show


Curtain (like Harjot’s Kinect project)





In creating the Weatherbe project, with the assigned wireless criteria in mind, we first set out to create something simple that could be represented wirelessly through various forms of digital and physical light. The digital came first. Ryan and I discussed with Demi that our main goal for the project was to contribute to the coding, since this was not our strength and it was his. We discussed the use of projected light on mirrors, and looked at various examples of code that we found interesting to play with:

Things got particularly frustrating when it came to the code. Ryan and I could create simple shapes that moved, but, for whatever reason, we never got to the point of implementing our work into the final product. Little did we know, Demi had been toiling away on a visualization that seemed to check the boxes for all of us in terms of its aesthetics. Here is the code he used:


class Particle {
  PVector location;
  PVector velocity;
  PVector acceleration;
  float lifespan;
  float wind;
  PVector dir;
  float temp;
  float hum;
  color col;

  Particle(PVector l, float curLifespan) {
    acceleration = new PVector(0, 0.07);
    // The second random() range was truncated in the original post;
    // (-1, 0) is a plausible reconstruction.
    velocity = new PVector(random(-1, 1), random(-1, 0));
    location = l.get();
    lifespan = curLifespan;
  }

  void run(WeatherObject wo) {
    wind = map(wo.wind, 0, 100, 0, 0.5);
    dir = wo.dir;
    temp = wo.temp;
    hum = wo.hum;
    col = wo.col;
    update();
  }

  void update() {
    location.add(dir);
    velocity.add(new PVector(wind * dir.x, wind * dir.y));
    lifespan -= 2.0;
  }

  void display() {
    // Drawing code was omitted from the original excerpt.
  }

  boolean isDead() {
    return lifespan < 0.0;
  }
}

class ParticleSystem {
  ArrayList<Particle> particles;
  float curLifespan = 400;

  ParticleSystem() {
    particles = new ArrayList<Particle>();
  }

  void addParticle() {
    particles.add(new Particle(new PVector(width/2, 50), curLifespan));
  }

  void run(int w, PVector d, int t, int h) {
    Iterator<Particle> it = particles.iterator();
    while (it.hasNext()) {
      Particle p =;
      if (p.isDead()) {
        // The loop body was truncated in the original; dead particles
        // are pruned here.
        it.remove();
      }
    }
  }
}

class WeatherObject {
  String city;
  float wind;
  PVector dir;
  float temp;
  float hum;
  color col;

  WeatherObject() {
  }
}

class XbeeMultiLightControl implements Runnable {

  // Compass indices for wind direction
  int N = 0;
  int NE = 1;
  int E = 2;
  int SE = 3;
  int S = 4;
  int SW = 5;
  int W = 6;
  int NW = 7;

  int rVal = 100;
  int gVal = 0;
  int bVal = 0;

  int fadeSpeed = 50;
  int pulse = 10;

  int[] payload;

  boolean isRunning;

  XBeeAddress64[] addr;

  XBee xbee;

  // Reconstructed: the serial port name was lost in the original post;
  // substitute the port your coordinator XBee is attached to.
  String serialPort = "/dev/tty.usbserial";

  XbeeMultiLightControl() {
  }

  void init() {
    addr = new XBeeAddress64[4];
    addr[0] = new XBeeAddress64("00 13 a2 00 40 98 99 85");
    addr[1] = new XBeeAddress64("00 13 a2 00 40 98 99 8c");
    addr[2] = new XBeeAddress64("00 13 a2 00 40 98 99 83");
    addr[3] = new XBeeAddress64("00 13 a2 00 40 92 d7 c7");

    xbee = new XBee();

    try {, 9600);
    } catch (Exception e) {
      System.out.println("XBee failed to initialize");
    }
  }

  void pulsate(int dir, int rv, int gv, int bv, int speed, int pulse) {
    fadeSpeed = speed;
    payload = new int[] {rv, gv, bv, speed, pulse};
  }

  void run() {
    if (!isRunning) {
      //isRunning = true;
      sendValue(addr[0], payload);
      sendValue(addr[1], payload);
      sendValue(addr[2], payload);
      sendValue(addr[3], payload);

      println(" thread is done!");
      //isRunning = false;
    }
  }

  void sendValue(XBeeAddress64 addr64, int[] payload) {
    ZNetTxRequest tx = new ZNetTxRequest(addr64, payload);
    try {
      ZNetTxStatusResponse status = (ZNetTxStatusResponse) xbee.sendSynchronous(tx, 500);
      if (status.isSuccess()) {
        // Delivery acknowledged by the remote XBee.
      }
    } catch (XBeeException e) {
      // Transmission failed; error handling was not included in the original excerpt.
    }
  }
}
And here is how it looked:
Demi created the XBee setup, which initially consisted of 4 breadboards with 2 LEDs each, connected to 4 individual XBees drawing from 4 rotating weather APIs. The lights were programmed to change colour according to the colour of the weather data being displayed on screen, which corresponded to the weather APIs of 4 different cities (Tehran, Toronto, Beersheba, Saskatoon). This on-screen weather data was represented by a continuous loop of interconnected floating triangles, with colour again corresponding to individual city weather data, and direction and speed corresponding to wind direction and velocity from the weather APIs. Ryan and I toyed with a variety of light setups, and finally settled on using cubes with draft paper and cardboard as a base.
There are many situations where we feel the Weatherbe could be a valuable installation, both indoors and outdoors. Whether displayed on the north, south, east and west corners of an intersection on University Avenue, at an airport or a train terminal, there are many reasons you may be interested in “seeing” the weather. Maybe you’re in a controlled environment, like a car or an office building, and you want to know what it feels like outside. Maybe you’re waiting to board a plane and you want a quick overview of the current weather in the city you’re traveling to relative to where you are now. Or maybe you’re interested in having a smaller version of the installation in your home, representing cities that mean something to you, or to people you care about.
We think Weatherbe presents an efficient, vivid and beautiful way to experience the weather. It’s a unique way to visualize how the weather feels in various cities simultaneously, providing an accurate scope of various pieces of weather data with one simple display technique.

Project 3: onemile


by Yuxi, Hudson, and Maz.

Concept: onemile

The goal of this project is to create a connection between individuals revolving around sharing past experiences. The system uses a hood, which records the experiences of the wearer (light, sounds, and step) at a random time when the hood is worn down. This recording is transmitted to a base station whenever the hood comes into proximity, triggering data visualization of that recording. A previous recording, from another hood wearer, is then transferred back down to the hood at this point, creating a network effect of shared experiences. The downloaded experience is then played back by the act of wearing the hood up, connecting two individuals across time, location and experiences.

Experience recording (Data 1)

Single user experience with data transmission, sending (data 1) and receiving (data 2)

Following the path of a single recorded experience (data 1) over time between two hoods

Playback of sent experience (data 1)




The Object: Hood

Hoods serve many purposes, weather protection being foremost. They also provide social cues: a hood can communicate that a person does not want to connect with other people, since it creates a personal space that only the hood wearer occupies.

We’ve flipped these perceptions and experiences around. Instead of withdrawing into a hood, it becomes a social experience that is tapped into: two people occupy the same hood space. Pulling up the hood removes the wearer from the situation they’re currently in and projects them into a new space created by the sense experience of another hood wearer.

Experience Interpretation

A tweet, post or a profile picture allows an individual to project a self-image that they choose for their social network to interact with and experience. We want to challenge this: what would the interaction with and experience of another person be without knowing their name, demographic, or likeness; all the things we’re used to having, to identify that person? How would this experience be interpreted by each user?

As this is an interpretive experience, we don’t believe this to be a more “real” or “authentic” means of conveying a person or experience. What we are providing is a means of experiencing another person in a unique, unfamiliar way. New experiences have the power to open our eyes to other possibilities; perhaps onemile can provide such opportunities.


When worn down, our Arduino-equipped hood randomly initiates recording of the hood wearer’s experience. This recording consists of the light values, via photo sensor, steps taken, via accelerometer, and sound values, via microphone. The hood wearer is alerted that recording is taking place by the rhythmic toggling of a fan, near the hood wearer’s neck, to simulate breathing. This creates the illusion of another person closely following the hood wearer; which, in turn, acts as a reminder that their current experience will eventually be occupied by another person.

Our hood is equipped with an XBee for wireless communication of stored data. When a hood wearer approaches our data visualization base station, the hood’s XBee transmits its stored data to the base station’s XBee. At this point, the Processing sketch running on our base station triggers data visualization of that recording. In addition to viewing their own data visualizations, users are presented with averages of other, past hood wearers’ data. This allows users to connect on a deeper level with their own experience by quantifying it alongside other users’ experiences.

After a hood’s recorded data is transmitted to a base station a previous recording, from another hood wearer, is then transferred back down to the hood and any existing recordings are deleted/overwritten. When the hood is worn up this shared recording from another hood wearer is played back: light values via an LED strip around the hood edge, steps taken via a vibration motor at the chest, and sound values via speakers at ear level.



We began this project with three wholly different directions: linking two places digitally and letting inputs filter across, wearable technologies such as transformable clothing, and creating hardware that acts as a driver for a new form of social network. Brainstorming sessions acted to distill out the idea that resonated most with each team member: creating shared experiences through wearable technology.

Developing this idea began by first understanding what the experience of using our project would be like, as well as why one would use it. Discussion revolved around individuals who may currently be having particularly bad experiences. Our thought was that perhaps we could offer a means of escapism: a wearable device that would project oneself into another person’s experience, a better experience. We also discussed the possibility of the opposite being true. Perhaps individuals having a bad experience could be comforted by the presence of people piggybacking off their experience and offering some form of solace. This would have taken the form of a simulated hug.

The form factor of the hood came out of discussions of devices and clothing that act to remove people from the present space, in particular, reflections on people wearing hoods and headphones on public transit. In our interpretation, these individuals seem detached from the reality around them.

We also had to take stock of what inputs and outputs could be shared. We explored sound, light, breath, hug, pulse, vibration, GPS coordinates, and social media feeds. This quickly led us to our final three inputs/outputs as cost, complexity and feasibility were weighed. Transmitting a hug was far too complex and potentially heavy, GPS data was costly and, like social media feeds, the form the GPS output would take was unclear.

Tasks and Roles

Development on the hood generally broke down into three main categories and three sub categories. The main categories consisted of the hood, hood code, and visualization code. The sub categories consisted of the hood hardware, data, and branding. Each team member was charged with leading one main category while also contributing to the two related sub categories. The following diagram illustrates the categories, their relationships, and the team members charged to them:

Flow of ideas and work between Yuxi, Hudson, and Maz

Hardware / Software – Prototype Unit

Prototype unit
Initial development focused primarily on the hardware and on developing a prototype of the sensors that would eventually find their way into the hood. This prototype guided decisions made in other aspects of the project. For instance, it was from this prototype that we discovered we needed space for two battery packs in the hood (see power limitations). There were cases, however, where external influences affected the layout of the prototype: our initial idea to use a flex sensor to detect the state of the hood proved impractical, so the prototype design was changed to a switch.

Power limitations
The Arduino can provide power in three ways: 5V at 40 mA from each I/O pin, 3.3V at 50 mA from the dedicated 3.3V pin, and up to about 500 mA safely from the dedicated 5V pin. The devices underpinning our hood require widely varying levels of voltage and current. Powering directly off the I/O pins and the 3.3V pin, and using the 5V pin in combination with TIP120s to regulate the power flow, seemed to be the power solution we required. Sadly, I found that drawing current from the 5V pin with our fan (to simulate breathing) caused our sensor data, acceleration and frequency, to fluctuate, making high-precision data logging impossible. A low-pass filter to regulate the voltage was a likely solution, but my implementation of one seemed to suppress all sensor variation altogether. The only present solution was to power the fan off a separate power supply, despite the added weight.
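For future reference, the same tuning trade-off appears in a software smoothing filter: the coefficient decides how much real variation survives alongside the noise. A minimal exponential-moving-average sketch (illustrative only, not the filter we actually built):

```java
public class SensorSmoother {
    // Exponential moving average: alpha near 1 tracks the signal closely;
    // alpha near 0 flattens it, which is the likely failure mode when a
    // filter "limits all sensor variation altogether".
    double alpha;
    double state;
    boolean primed = false;

    SensorSmoother(double alpha) { this.alpha = alpha; }

    double filter(double sample) {
        if (!primed) { state = sample; primed = true; }
        else state = alpha * sample + (1 - alpha) * state;
        return state;
    }

    public static void main(String[] args) {
        SensorSmoother gentle = new SensorSmoother(0.5);  // keeps variation
        SensorSmoother heavy  = new SensorSmoother(0.02); // nearly flat output
        for (double s : new double[] {100, 300, 100, 300}) {
            System.out.printf("gentle=%.1f heavy=%.1f%n", gentle.filter(s), heavy.filter(s));
        }
    }
}
```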

Prototyping / development unit

Fritzing Diagram of the hood’s circuitry

Storing and Recalling Data

Arduino code flow / if statement decision tree.

Recorded experience data is saved into arrays in the Arduino’s internal memory, with one array for each of the following: step time, light change time, light change value, sound change time, and sound change value. The two arrays for each of light and sound are linked via a shared counter used for incrementing through both recording and replaying. Step also has a counter used to increment through step times, but no associated value array (such as step intensity).

The data in these arrays is what is transferred whenever there is communication between our Arduino-equipped hood and our data-visualizing Processing sketch. An extra byte is transmitted first to identify the data type; then the array is stepped through and transmitted with a comma delimiter.

To Arduino and Processing, this data looks like…


A stylized, human readable (used for debugging), version of that same data…

Step # : 0 , at time: 11
Light # : 0 , at time: 5 , with brightness: 128
Sound # : 0 , at time: 5 , with frequency: 0
Step # : 1 , at time: 944
Light # : 1 , at time: 870 , with brightness: 127
Sound # : 1 , at time: 956 , with frequency: 1069
Step # : 2 , at time: 1815
Light # : 2 , at time: 1763 , with brightness: 125
Sound # : 2 , at time: 1208 , with frequency: 0
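The transfer scheme described above can be sketched roughly as follows (a hypothetical encoder/decoder pair; the type tags and exact delimiter handling are illustrative, not our precise wire format):

```java
import java.util.Arrays;

public class HoodDataCodec {
    // Hypothetical one-byte type tags for each stream (not the project's actual values).
    static final char STEP = 'S', LIGHT = 'L', SOUND = 'A';

    // One identifying byte, then the array's values joined by commas,
    // terminated by a newline.
    static String encode(char type, int[] values) {
        StringBuilder sb = new StringBuilder().append(type);
        for (int v : values) sb.append(',').append(v);
        return sb.append('\n').toString();
    }

    // Inverse: strip the type byte and parse the comma-delimited values.
    static int[] decode(String line) {
        String[] parts = line.trim().split(",");
        int[] values = new int[parts.length - 1]; // skip the type byte
        for (int i = 1; i < parts.length; i++) values[i - 1] = Integer.parseInt(parts[i]);
        return values;
    }

    public static void main(String[] args) {
        String packet = encode(LIGHT, new int[] {5, 870, 1763}); // light-change times
        System.out.print(packet);                                // L,5,870,1763
        System.out.println(Arrays.toString(decode(packet)));     // [5, 870, 1763]
    }
}
```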

Memory limitations
Using this system of arrays, memory capacity was an ever-present concern. Recording any more than 50 data points per sensor was a surefire way to crash our Arduino Uno test system, because the hood is wholly reliant on the Arduino’s built-in memory. Suffice it to say, that does not provide much space for data storage. Time constraints prevented any deep exploration of using an SD-card reader to expand the memory. To be functional without the expanded memory an SD card would provide, I had to be very strict about how often data was logged, or else risk running out of space in seconds.
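A quick back-of-envelope check shows why ~50 points was the ceiling (assuming the Uno's 2 KB of SRAM and avr-gcc's 2-byte `int`; the exact array element types in our sketch may differ):

```java
public class MemoryBudget {
    // Arduino Uno figures: 2 KB of SRAM for all runtime data; an avr-gcc int is 2 bytes.
    static final int SRAM_BYTES = 2048;
    static final int INT_BYTES  = 2;

    // Bytes consumed by the five recording arrays at a given capacity
    // (step time, light time, light value, sound time, sound value).
    static int arrayBytes(int pointsPerSensor) {
        return 5 * pointsPerSensor * INT_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(arrayBytes(50) + " of " + SRAM_BYTES + " bytes"); // 500 of 2048
        // The remainder must hold the stack, Serial buffers and library
        // globals, which is why pushing much past 50 points locked up the Uno.
    }
}
```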

Hardware – The Hood

Prototype hood

Pattern, Fabric, Sewing and adding details
Concept- providing a private mental space anywhere
Solution: large sized hood, cover eyes, thick but soft fabrics.

Technical requirements – speakers, fan and vibration motor have to be placed on the right places in order to guarantee the experience.
Solution: nylon fastener tape, snaps and pockets boxes made by fabric.

Hood component pockets

Design requirements – To hide the wires
Solution: two layers, covers and pockets.

Hood sensor layout sketch

Hood circuitry and control system (Arduino)

Hooking up circuits
Due to lack of experience, we decided to use a breadboard, which caused us a lot of trouble at first. We later realized that it would have been better to solder everything onto a protoboard in the first place. We decided to make the change on the day of the presentation. Unfortunately, we did not have enough time to test everything, and the system broke before the presentation. It was a hard lesson for the team. I realized that it is very important to fully understand the specific features of different circuit boards and parts before implementing them in a project, as well as to test their suitability for wearable gadgets. That way, in future projects, we will be able to choose appropriate parts and minimize the chance of malfunctions.

Hood breadboard circuit – initial sensors’ connection to Arduino

Hood protoboard circuit – final sensors’ connection to Arduino

Up/Down detecting sensor
In order to realize our concept – hood down: recording, hood up: replaying – we needed to figure out the appropriate sensors to detect the hood’s movements. We tested a number of sensors, including a tilt sensor, a flex sensor, a force sensor, a magnet, snaps and a homemade sensor (nuts). Although all of them worked, we were still not satisfied because the fabric could not hold them very well. In the end we decided to use clasps, which were not as good as the other sensors tested, but worked consistently.

1. The hood could not be worn fully down, because the Arduino sat right above the fan; with the hood down, the fabric could not bend around it.

Solution: I moved the Arduino to the place where the battery pack was held (and moved the battery pack elsewhere), which solved the problem.

2. Due to the technical requirements, we had to add one more battery pack. The problem was that putting both battery packs at the back of the hood would have made it too heavy and unbalanced.
Solution: I used the same extra fabric to make two straps and placed one battery pack at the bottom of each strap so that they hang in front of the user’s body, balancing the weight between the front and the back of the hood.

3. The most difficult thing was keeping the fan always facing the back of the hood wearer’s neck; the fabric was too soft to hold the fan at that exact spot.
Solution: I crossed the two straps (mentioned earlier for the battery packs) over the fan area. When the hood is worn, the weight of the battery packs stretches the straps, which in turn pulls the back of the hood up while holding the fan in the right place. In addition, I created a “scarf” at the bottom of the hood, which can be bound around the user’s shoulders to steady the entire hood.

Hood internal layer (inside out) and fan with cover

Reflection from the project
From this project I realized that making clothes for wearable technology is tricky; sometimes the concept of the project can greatly influence the design. However, I really enjoyed the process and loved solving the challenges through better design solutions. I have a strong passion for wearable technology, not only because I enjoy making clothes, but also because I want to create things that are practical to use. Working out the conflicts between design and technical needs is my favourite part of the process. Starting from this project, I want to pay more attention to the current state of wearable technology, as well as the future trends of this industry.

Inside of the hood

Outside of the hood

Sensor locations on hood

Software – Visualization

Data visualization design, hand sketches

As a general rule, we opted for iconic visuals to represent the input data of the user. Our frame of influence was web icons, software navigation icons and video game symbols. We considered including a pulsing heart or lung in the corner of the data viz display to reflect the “breathing fan” embedded in the hood, but the visualization got a bit cluttered, so it was excluded. Obvious choices for the sound input were speakers and EQ waves, but finally a speaker icon emitting pulsing waves was chosen for its simplicity and universal recognition. A light bulb was chosen to represent the light input for similar reasons. Lastly, for the movement input, we played around with a few whimsical angles, such as GIFs of the Beatles walking across Abbey Road, and the stop hand and walking man found on street lights. Eventually, we all fell in love with a simple walking-man GIF that also matched the styles of the other two icons. Consistency in design choices was key to maintaining the aesthetic quality of the visualization.

Final Processing data visualization software

I received tremendous technical support from Hudson regarding the engineering of the code: how the data viz receives input from the Arduino, and how the average data sets for each user experience are switched at the command of the viewer. The Processing library GifAnimation provided support for integrating the GIF files into the display, and from that I learned to adapt them for our specific requirements.

It was important for me to make sure that the data viz display was very much a part of the onemile experience and not simply something patched on. To address this, I kept the branding of the hood and the data viz display consistent and unified. This involved continual coordination of colour, shape and texture with Yuxi.

Additionally, I wished to engage the hood wearer with the visualizations of their data beyond just looking at them. My intent was to have them engage directly with the data. Specifically, regions of the display would respond dynamically when moused over, providing additional information about the data in a look consistent with the rest of the visualization so as not to be jarring.

Future Improvements


Ideally we would augment the memory of the Arduino with an SD-card reader. Time constraints prevented this from being an option for this iteration. Additionally, our current data storage and recall methods, both on the hood and in the visualization system, limit our ability to store large amounts of data and to recall any data after power-off. We explored the possibility of storing data in a text file on the computer for later recall, but again found time to be a limiting factor.


Our team explored the possibility of recording light colour in addition to light intensity, but found the hardware solutions available to us to be lacking. The distance from which colour could be perceived by the colour sensor was limited to only a few centimetres. As such, we chose not to implement a colour sensor in this project, though the thought of providing such data is still appealing and worth further investigation.

Sound detection is another area where hardware proved to be a limiting factor. The microphone in our current setup reports frequency but fails to report amplitude. As a result, we are not able to get a true reading of loudness around our hood as we had intended. Instead we are only able to report on sound activity (changes in frequency).

Control Unit and Power

The hood is very heavy at the moment. This is partly a result of the two power supplies and partly because of the Arduino we used.

Regarding the batteries, initially our design had called for a single power supply to power the Arduino which, in turn, would power the rest of the hood.  We had to scrap this due to interference created in our sensor data by the fan powering on and off. A low-pass filter to regulate the voltage is an ideal solution and we’d like to explore this more.

Similarly, our choice of Arduino was not our first selection. Our Arduino Uno test system, a unit with 32 KB of built-in flash memory, would lock up from lack of memory when recording more than 50 sensor events. For comparison, the Arduino LilyPad, a better Arduino for wearables, has only 16 KB of flash, only 14 KB of which is accessible; our sketch alone is 13 KB. The Arduino LilyPad, at least in our project’s current configuration, does not work.

Data Transmission

Unlike our original concept, the hood doesn’t transmit or play back data in real time. Our concept had originally been to transmit data in real time over Wi-Fi networks as a high-bit-rate stream, but there did not appear to be much precedent in this area. As such, we scaled back to a better-documented technological solution: using XBees to transmit recorded data logs. Ideally, given enough time, we would explore and implement our initial concept with Wi-Fi data streaming.


The following is a link to a Dropbox folder containing our code:

Sources / Inspiration

Hood and Wearable Technology


A vest that gives you a squeeze for every facebook “Like” you get
By Melissa Kit Chow

Fashion Trends

Scarf hood trend

As found on Google image search.

Photos here


Data Logging

We explored many devices that log daily activity, such as step count and sleep patterns. This research proved vital when we began to visualize our logged data. Of particular help was the method by which the Jawbone Up logs step data, as steps cannot be graphed in the same mode as light or sound.

Jawbone UP

Nike+ Fuel Band

Data Visualization


Nicholas Felton’s Personal Annual Reports that weave numerous measurements into a tapestry of graphs, maps and statistics that reflect the year’s activities.

Box o’ Secrets by Pui, John, Andrew & Cris

The idea for the Box o’ Secrets began with the desire to make online communication more human.  We thought about the distance between family and loved ones and wondered whether that gap could be bridged with the technology we are working with.

At some point, our conversation turned to the content of the communication as opposed to the medium of communication.  We thought that one thing loved ones share are secrets.  In many ways we give them to others so that they can help bear the weight.  We looked at Post Secret and its popularity.  Post Secret is a space where people can safely release their most guarded secrets; funny ones, sweet ones, dark ones.


At this point, our project pivoted and we wanted to create something that would receive secrets as input and output them in a way that anonymises and transforms the secret into something different … maybe beautiful.  Transmogrify if you will.

There were two things to consider in this project: the input and the output.  The input was important because it was the interface through which a person would feel comfortable enough to divulge a secret.  At first, we wanted to create a wearable device activated by gesture.  We thought of creating a pair of mittens with a built-in microphone that would be activated by the gesture we make when whispering in someone’s ear.  We also thought about using a tin can attached to a string to evoke a sense of childhood and nostalgia.


Our conversation around output went down many routes.  We debated over how much to anonymise the speaker and how much to abstract the secret.  At the very beginning of the project we were interested in data visualization, so we discussed many ways we could use the fluctuations in speech to drive some kind of visualization.  We looked at data visualization artists like Aaron Koblin and the way he creates animations that illustrate data in an interesting and beautiful way.

However, we felt that this approach, though technically challenging for us, would be in some ways expected and straightforward.  So we wanted to think about ways the secret could be visualized physically.  We looked at artists like Zimoun, who uses simple mechanisms implemented in powerful ways.

We thought about creating a mobile or a spinning lantern that would move and react to the voice of the secret teller.  We also wanted there to be some artifact of the experience, so decided to incorporate a long exposure image from a DSLR camera mounted in the installation.

In the end, we decided that the viewer’s only experience of the visualization should be the final long exposure photograph.  The entire mechanism that visualizes the secret would be miniaturized and hidden from view.



In terms of input, the mittens proved more difficult than expected.  Streaming audio through the Arduino was beyond our technical abilities in the timeframe.  We pivoted the concept slightly to an on-screen interface linked to the Box o’ Secrets, which records the visualized secret and then uploads the resulting image to a Tumblr blog through IFTTT (‘If This Then That’).

The main challenge was to get the servos to react to the voice so that the resulting visual would be a unique reflection of the incoming audio.  We also had many discussions about how the final image should look, and played with a variety of combinations of servos, LEDs, springs and reflective paper.  Making the visualization physical was also intended to make the final result unexpected and unpredictable.
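One way to make each secret trace a unique pattern is to map the voice’s amplitude onto a servo angle and offset it by a per-recording seed. This is a hedged sketch of that idea, not our actual servo code; the constants and the jitter formula are invented for illustration:

```java
public class ServoMap {
    // Map an audio amplitude (0.0-1.0) onto a servo angle (0-180 degrees),
    // offset by a per-secret seed so each recording draws a distinct pattern.
    public static int amplitudeToAngle(double amplitude, int seed) {
        double a = Math.max(0.0, Math.min(1.0, amplitude));   // clamp input
        int base = (int) Math.round(a * 180.0);               // scale to servo range
        int jitter = (seed * 31) % 21 - 10;                   // deterministic offset, -10..+10
        return Math.max(0, Math.min(180, base + jitter));     // clamp to servo limits
    }

    public static void main(String[] args) {
        // Same amplitude, different seeds: different angles
        System.out.println(amplitudeToAngle(0.5, 7));
        System.out.println(amplitudeToAngle(0.5, 3));
    }
}
```

Because the offset is derived from the seed rather than `random()`, a given secret always produces the same trace, while two different secrets diverge.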


Please follow the links to a diagram of the system flow and a video of the development process:


Processing Code

Arduino1 Code

Arduino2 Code 

In the process of experimentation we produced some beautiful images using a combination of Arduino-controlled servos and lights and a human-controlled camera.  We believe this could be an avenue for further exploration in the future; however, we were unable to recreate this type of image with a fully automated system.  This was our favourite image.


Connect Four

Connect Four team for the last project

_Ruzette Tanyag

_Ryan Maksymic

_Borzu Talaie

Free Processing coding book

It seems to be aimed at more advanced visualization in Processing (like particles and fractals), but it’s worth a look (I found the instructions pretty good). It’s also under a Creative Commons licence, so you can read it for free (and donate some money if you wish).


I have whispered to the internet.

I have whispered to the internet, and it has heard me. But only locally, so not really.

I’ve been learning to send data over the internet via Pachube/Cosm today and have managed to make values readable locally through my browser. My Arduino is running StandardFirmata and sending potentiometer values to Processing. Processing is using the eeml library to send values through port 5210.
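Under the hood, StandardFirmata reports each analog pin as a three-byte message: a command byte (0xE0 ORed with the pin number) followed by the 10-bit value split into two 7-bit bytes, least significant first. A small sketch of decoding one such message (this follows the published Firmata protocol framing, not the internals of the Processing library):

```java
public class FirmataAnalog {
    static final int ANALOG_MESSAGE = 0xE0;  // Firmata analog report command

    // Decode a 3-byte Firmata analog message into {pin, value}.
    // The 10-bit reading is packed as two 7-bit bytes, LSB first.
    public static int[] decode(int command, int lsb, int msb) {
        if ((command & 0xF0) != ANALOG_MESSAGE)
            throw new IllegalArgumentException("not an analog message");
        int pin = command & 0x0F;
        int value = (lsb & 0x7F) | ((msb & 0x7F) << 7);
        return new int[]{pin, value};
    }

    public static void main(String[] args) {
        // Pin 2 reporting a full-scale reading: bytes 0xE2, 0x7F, 0x07
        int[] r = decode(0xE2, 0x7F, 0x07);
        System.out.println("pin " + r[0] + " = " + r[1]);  // pin 2 = 1023
    }
}
```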

The next step is setting up a Cosm feed that can read these values. However, I’ve been having a bit of trouble so far, mainly because most tutorials were written for Pachube, and when Pachube switched to Cosm the interface changed, so further learning adventures are needed.

Here is a link to the tutorial page for reading and sending data through processing and arduino. That page contains all the library downloads that you need.

One thing to keep in mind if you are using Firmata with Processing: the StandardFirmata sketch defaults to a baud rate of 57600, whereas the Pachube Processing code defaults to 115200. Make sure you change either the Firmata or the Processing baud rate so the two match, otherwise the data will be garbled.

See you soon friends.