Box o’ Secrets by Pui, John, Andrew & Cris

The idea for the Box o’ Secrets began with the desire to make online communication more human.  We thought about the distance between family and loved ones and wondered whether that gap could be bridged with the technology we are working with.

At some point, our conversation turned to the content of the communication as opposed to the medium. We thought that one thing loved ones share is secrets. In many ways we give them to others so that they can help bear the weight. We looked at PostSecret and its popularity. PostSecret is a space where people can safely release their most guarded secrets: funny ones, sweet ones, dark ones.


At this point, our project pivoted and we wanted to create something that would receive secrets as input and output them in a way that anonymises and transforms the secret into something different … maybe beautiful.  Transmogrify if you will.

There were two things to consider in this project: the input and the output. The input was important because it was the interface through which a person would feel comfortable enough to divulge a secret. At first, we wanted to create a wearable device that was activated by gesture. We thought of creating a pair of mittens with a built-in microphone that would be activated by the gesture we make when whispering in someone’s ear. We also thought about using a can attached to a string to evoke a sense of childhood and nostalgia.


Our conversation around output went down many routes. We debated how much to anonymise the speaker and how much to abstract the secret. At the very beginning of the project we were interested in data visualization, so we discussed many ways we could use the fluctuations in speech to activate some kind of visualization. We looked at data visualization artists like Aaron Koblin and the way he creates animations that illustrate data in an interesting and beautiful way.

However, we felt that this approach, though technically challenging for us, would in some ways be expected and straightforward. So we wanted to think about ways the secret could be visualized physically. We looked at an artist like Zimoun, who uses simple mechanisms implemented in a powerful way.

We thought about creating a mobile or a spinning lantern that would move and react to the voice of the secret teller.  We also wanted there to be some artifact of the experience, so decided to incorporate a long exposure image from a DSLR camera mounted in the installation.

In the end, we decided that the only part of the visualization the viewer should experience was the final long-exposure photograph. The entire mechanism that visualizes the secret would be miniaturized and hidden from view.



In terms of input, the mittens proved more difficult than expected. Streaming audio through the Arduino did not seem possible given our technical abilities and the timeframe. We pivoted the concept slightly to an onscreen interface that would link to the Box o’ Secrets, which would record the visualized secret and then upload the resulting image to a Tumblr blog through ‘If This Then That’ (IFTTT).

The main challenge was to get the servos to react to the voice in such a way that the resulting visual would be a unique reflection of the incoming voice. We also had many discussions about how the final image should look, and played with a variety of combinations of servos, LEDs, springs and reflective paper. The choice to make the visualization physical was also intended to make the final result unexpected and unpredictable.
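The core of that mapping was turning incoming loudness into servo motion. A minimal sketch of the kind of mapping we mean (hypothetical, not our exact code; assumes the Minim audio library on the Processing side and an Arduino listening for angle bytes over serial):

```processing
// Hypothetical sketch: map microphone loudness to a servo angle
// and send it to the Arduino as a single byte over serial.
import ddf.minim.*;
import processing.serial.*;

Minim minim;
AudioInput in;
Serial port;

void setup() {
  size(200, 200);
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 512);
  port = new Serial(this, Serial.list()[0], 9600);  // first serial port, illustrative
}

void draw() {
  float level = in.mix.level();                 // 0.0 (silence) up toward 1.0 (loud)
  int angle = int(map(level, 0, 0.5, 0, 179));  // quiet -> 0, loud -> full sweep
  port.write(constrain(angle, 0, 179));         // Arduino reads the byte and drives the servo
}
```

On the Arduino side, each received byte can be fed straight into `servo.write()`.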


Please follow the links to a diagram of the system flow and a video of the development process:


Processing Code

Arduino1 Code

Arduino2 Code 

In the process of experimentation we created some beautiful images using a combination of Arduino-controlled servos and lights and a human-operated camera. We believe this could be an avenue for further exploration in the future. However, we were unable to recreate this type of image with a fully automated system. This was our favourite image.


Box man: Body remote robot car

–final video

This is Peggy and Hank’s project 3. Our concept is to use the player’s body gestures to control a robot worker that solves a box man (Sokoban-style) puzzle. Originally, we started by thinking about how to use wireless technology to connect two objects, rather than making a game. For instance, Hank’s wife has always wanted a coffee machine that starts brewing her coffee every morning the moment she gets up to wash her face; and I have always wanted to make something for couples in long-distance relationships (maybe two umbrellas that project your lover’s surroundings onto the inside of your own umbrella, or two bangles that turn your pulse into data and send it to your lover’s bangle). However, we thought those ideas were not fun enough, so we changed to a game. (Hank dreams of robot cars, and I like solving puzzles, so we combined our ideas.)

Step 1: build the robot

The biggest problem in this step was that the robot car was hard for us to program; we didn’t know where to start. Officially, it is programmed in C++, but we knew nothing about C++, so we did a lot of research to figure out how we could code it in the Arduino environment. After that, we could control its speed and direction.

–forward: motors.setSpeeds(motorSpeed,motorSpeed);

–backward: motors.setSpeeds(-motorSpeed,-motorSpeed);

–right: motors.setSpeeds(motorSpeed,-motorSpeed);

–left: motors.setSpeeds(-motorSpeed,motorSpeed);

–controlled by the buttons on the car


  unsigned char button = buttons.waitForPress(BUTTON_A | BUTTON_B | BUTTON_C);

  if (button == BUTTON_A) {
    // e.g. turn left
    motors.setSpeeds(-motorSpeed, motorSpeed);
  }
  if (button == BUTTON_C) {
    // e.g. turn right
    motors.setSpeeds(motorSpeed, -motorSpeed);
  }
  if (button == BUTTON_B) {
    // drive forward
    motors.setSpeeds(motorSpeed, motorSpeed);
  }

Next, we studied the pin map for the 3pi robot car and found out which pins we could use (only 3 free pins were available). –button control:

And then, we tried to control the car wirelessly.

–Xbee talk

–Wireless button

–Keyboard control: read data from processing
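On the Processing side, keyboard control can be as simple as sending one character per arrow key (a sketch of the idea, not our exact code; the character codes are illustrative):

```processing
// Hypothetical sketch: send 'w'/'s'/'a'/'d' command bytes to the Arduino
// when the arrow keys are pressed.
import processing.serial.*;

Serial port;

void setup() {
  size(200, 200);
  port = new Serial(this, Serial.list()[0], 9600);  // first serial port, illustrative
}

void draw() {
  // nothing to draw; we only react to key presses
}

void keyPressed() {
  if (keyCode == UP)    port.write('w');  // forward
  if (keyCode == DOWN)  port.write('s');  // backward
  if (keyCode == LEFT)  port.write('a');  // turn left
  if (keyCode == RIGHT) port.write('d');  // turn right
}
```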


// Arduino 1: get data from Processing, and send it through the XBee

// Arduino 2: on the car, receive data from the XBee
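The forwarding logic on Arduino 1 is essentially a serial pass-through (a minimal sketch, assuming the XBee sits on a SoftwareSerial port; the pin numbers are illustrative):

```arduino
// Hypothetical Arduino 1 sketch: read command bytes from Processing on the
// USB serial port and forward them to the XBee on a SoftwareSerial port.
#include <SoftwareSerial.h>

SoftwareSerial xbee(2, 3);  // RX, TX -- illustrative pin choice

void setup() {
  Serial.begin(9600);  // from Processing
  xbee.begin(9600);    // to the XBee on the car
}

void loop() {
  if (Serial.available() > 0) {
    xbee.write(Serial.read());  // pass each byte straight through
  }
}
```

Arduino 2 on the car does the mirror image: read a byte from its XBee and call the matching `motors.setSpeeds()` command.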


Step 2: connect Kinect

–The code to read the kinect:


–Arduino:read from Processing

–Robot car

And then, we wanted to control the turning angle so that the car turns exactly 90 degrees every time. We chose to change the code on Arduino 1. Here is the code:
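A minimal sketch of the idea (hypothetical, using the Pololu Orangutan motor library; the turn duration `TURN_90_MS` is a value you tune by experiment for your surface and battery level):

```arduino
// Hypothetical sketch: turn the 3pi roughly 90 degrees by spinning the
// wheels in opposite directions for a fixed, hand-tuned time.
#include <OrangutanMotors.h>

OrangutanMotors motors;

const int motorSpeed = 80;
const int TURN_90_MS = 250;  // tune until the car turns 90 degrees

void turnRight90() {
  motors.setSpeeds(motorSpeed, -motorSpeed);  // spin in place, clockwise
  delay(TURN_90_MS);
  motors.setSpeeds(0, 0);                     // stop
}

void turnLeft90() {
  motors.setSpeeds(-motorSpeed, motorSpeed);  // spin in place, counter-clockwise
  delay(TURN_90_MS);
  motors.setSpeeds(0, 0);
}
```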


Step 3: build the maze

The original design was supposed to look like this: But when the car pushed a box, we could not guarantee that it pushed the centre of the box, so the box would deviate from the path and the car could not keep moving strictly vertically or horizontally. Besides, more blocks in the maze made it easier for the signal between the XBees to be blocked. So I modified the maze into this: and added a board to the front of the car (to help it push). After all the red boxes are pushed onto their target places, the LEDs turn on and a win sound effect plays.

Step 4: display

When I played the game before our presentation, I realized I had to keep looking at my laptop to see the background change in order to check my gestures. Thus, I couldn’t focus on the maze or control the robot well. Kate and Nick suggested I set up a projector to project the background colour onto the maze, so that I wouldn’t need to pay attention to my laptop and the game could run more smoothly.

–maze with projector

Arduino listening to the internet via WiFly

Sketch Ball – and variables

float x = 0;
float y = 0;
float xspeed = 2.2;
float yspeed = 1.5;
float r = 32;

void setup() {
  size(640, 360);
}

void draw() {
  background(0);

  // Add the current speed to the x and y locations.
  x = x + xspeed;
  y = y + yspeed;

  // Remember, || means “or.”
  if ((x > width) || (x < 0)) {
    // If the object reaches either edge, multiply speed by -1 to turn it around.
    xspeed = xspeed * -1;
    r = 64;
  }
  if ((y > height) || (y < 0)) {
    yspeed = yspeed * -1;
    r = 64;
  }

  // Display the circle at the current location, shrinking it back toward 32.
  ellipse(x, y, r, r);
  r = constrain(r - 2, 32, 64);
}


Text Files

In response to some questions today, here are some examples dealing with Reading/Writing text files
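For example, the basic pattern in Processing uses `loadStrings()` and `saveStrings()` (the filenames here are illustrative):

```processing
// Read a text file into an array of lines, then write lines back out.
String[] lines = loadStrings("data.txt");  // looks in the sketch's data folder
for (int i = 0; i < lines.length; i++) {
  println(i + ": " + lines[i]);
}

String[] out = { "first line", "second line" };
saveStrings("output.txt", out);            // written relative to the sketch folder
```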


Hey gang!

I’ve been playing around with IFTTT a bunch since Thursday’s class. Here’s something that I threw together:


I made an IFTTT recipe that searches Instagram for the tag “#cntower” and then posts the photos on Tumblr. It works really well; however, I do have to look through all of the photos and delete the ones that I don’t like. There have been over 250 photos posted over the last two days. I think I’m going to keep the recipe on for a week and see how many more photos I can collect!

$1 gesture input

I haven’t tried this yet, but it could be interesting:

simple gesture input Processing

A Sketch I like

A Perlin-noise-based force field, by guru:
PVector[] pos;
int[] age;

void setup() {
  size(600, 600);
  background(0);
  stroke(255, 40);
  pos = new PVector[2000];
  age = new int[2000];
  for (int i = 0; i < pos.length; i++) {
    pos[i] = new PVector(random(width), random(height));
    age[i] = 0;
  }
}

void draw() {
  // Draw every particle at its current position.
  for (int i = 0; i < pos.length; i++) {
    point(pos[i].x, pos[i].y);
  }
  // Move each particle along the noise field; respawn it when it
  // leaves the screen or gets too old.
  for (int i = 0; i < pos.length; i++) {
    pos[i].add(new PVector(2 * noise(pos[i].x * 0.02, pos[i].y * 0.016) - 1,
                           2 * noise(pos[i].x * 0.017, pos[i].y * 0.011) - 1));
    age[i]++;
    if (pos[i].x < 0 || pos[i].x > width || pos[i].y < 0 || pos[i].y > height || age[i] > 100) {
      pos[i] = new PVector(random(width), random(height));
      age[i] = 0;
    }
  }
  // Occasionally reseed the noise field and jitter all particles.
  if (random(1000) > 999) {
    noiseSeed((long) random(10000));
    for (int i = 0; i < pos.length; i++) {
      pos[i].add(new PVector(random(-2, 2), random(-2, 2)));
    }
  }
}