All posts by Andrew Hicks




Concept: First Thoughts 
When first introduced to the tamagotchi project, the challenge of eliciting an emotional response through computational means led me to think about making computational elements as human as possible. Considering which parts of the body visually express the most emotion, I settled on eyebrows. This stemmed from my brief involvement with studying (more so admiring) traditional character animation, and how much a few simple angled lines can convey.

Concept: Theme
Over the last year, I have been fixated on the definition of “value” and what it means not only to myself, but to others. What creates value with or without money? What do we get out of value, and how long does the value of something last, or how quickly can it depreciate? When do people start or stop caring about the value of something, whether measured in time, money or emotion? I circled around these thoughts for my final tamagotchi presentation.

I knew I wanted to involve currency of some sort, but was not set on using actual money. After describing my focus for the tamagotchi project to my colleague, Leon Lu, he quickly guided me in the right direction of using time as the value for my project. Although using time was a great idea, I still wanted someone’s time to be valued or validated by something other than expression being the final transaction. I wanted emotion to manifest by some quantifiable means, with a transaction as the (figurative) period at the end of the sentence when interacting and connecting with my tamagotchi pen pal. As a result, I used pennies as the final trade-off for someone’s time.

In the end, I wanted the tamagotchi to do the following:
1) Upon entry, spectators would notice the sad-looking tamagotchi.

2) Upon facing the tamagotchi (within 5 to 25 cm), it would slowly shift from sad to a happier state, raising its eyebrows gradually over 20 seconds.

3) If the spectator spent over 20 seconds with the tamagotchi, it would dispense a penny. After the spectator left, the tamagotchi would default back to a sad-looking state.

4) However, if the spectator left before 20 seconds had passed, the tamagotchi would become angry for 5 seconds, then default back to being sad.
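The four behaviours above amount to a small state machine. Here is a plain C++ sketch of that logic; the state names, eyebrow angles and function names are my own illustration (the actual Arduino sketch drove servos directly from sonar readings):

```cpp
#include <algorithm>

// Hypothetical model of the planned tamagotchi behaviour.
// distanceCm comes from the sonar sensor; dwellMs is how long the
// spectator has been in range; sinceLeftMs is time since they walked away.

enum Mood { SAD, WARMING, HAPPY, ANGRY };

// A spectator "faces" the tamagotchi between 5 and 25 cm.
bool inRange(int distanceCm) {
    return distanceCm >= 5 && distanceCm <= 25;
}

// Mood as a function of presence and dwell time:
// - nobody engaged: SAD
// - in range under 20 s: WARMING (eyebrows rising)
// - in range 20 s or more: HAPPY (penny dispensed)
// - left early: ANGRY for 5 s, then back to SAD
Mood moodFor(bool present, long dwellMs, long sinceLeftMs) {
    if (present) return dwellMs >= 20000 ? HAPPY : WARMING;
    if (dwellMs > 0 && dwellMs < 20000 && sinceLeftMs < 5000) return ANGRY;
    return SAD;
}

// Eyebrow servo angle ramps linearly from 0 (sad) to 90 (happy) over 20 s.
int eyebrowAngle(long dwellMs) {
    long clamped = std::min(dwellMs, 20000L);
    return static_cast<int>(clamped * 90 / 20000);
}
```

On hardware, a loop would read the sonar each tick, update the timers, and write `eyebrowAngle()` to the two eyebrow servos.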

I started building a simple prototype using some white foam core, two 180-degree servo motors, a sonar sensor, an Arduino with breadboard, and two popsicle sticks as eyebrows.

From here I was able to establish basic emotions such as anger, sadness and contentment using a few lines of code.

Initial Code
I knew early on that the timing of emotion would have to be controlled in order for spectators to understand the emotional process of my project. As a result, I adopted a servo library that allows speed control of 180-degree servo motors. In the library’s write call, three arguments control the angle, the speed and a blocking boolean, e.g. (45, 20, false). The speed of the motors is controlled with values between 1 and 175.
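The effect of such a speed-limited move can be modeled in plain C++; this is my own illustration of the idea, not the library’s internals:

```cpp
#include <cstdlib>

// Model of a speed-limited servo move: each tick, the shaft advances at
// most `speed` degrees toward the target angle, so a larger speed value
// means a faster (shorter) sweep.
int stepToward(int current, int target, int speed) {
    int delta = target - current;
    if (std::abs(delta) <= speed) return target;   // close enough: snap
    return current + (delta > 0 ? speed : -speed); // otherwise one step
}

// Number of ticks a full move takes at a given speed (ceiling division).
int ticksFor(int current, int target, int speed) {
    int distance = std::abs(target - current);
    return (distance + speed - 1) / speed;
}
```

A move like (45, 20, false) would then reach 45 degrees in three ticks from 0, while the `false` flag in the real library means the call returns without waiting for the move to finish.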


Having tested my eyebrow prototype and basic code setup, I began fabricating the coin dispenser for my final piece. I was lucky to find a plastic spool, which I believe was used for wire, with the same diameter as an American penny.


Penny Dispenser
Referencing the shafts I created for our research group assignment, I fabricated a shaft that would push pennies off of the plastic spool chassis one by one. For this to work, a lot of time was spent creating just enough space for a single penny to be dispensed. Coincidentally, the popsicle sticks I used had almost the same depth as a penny; along with some glue, the space created was enough for one penny to be ejected from the spool at a time. I got very lucky. I glued the servo and shaft into a slot I made (using a Dremel tool) in the plastic base of the spool. All worked well, except for getting each penny to drop from the spool. I had to provide a front-end lift for each penny, so that the dispensing arm would hit it exactly on its edge. To do so, I glued on some wire that I could adjust to the exact height needed for each penny to be pushed out and dropped.

The face of my tamagotchi was created using half-inch foam core. Using Adobe Illustrator, I drew out the face over six 8.5×11 artboards to be printed and then transferred onto the foam core. Doing this provided a solid plan as to how and where the back-end and front-end components should meet, as precision had already become a big learning curve for me going forward with this project.


Using the maker lab, I cut the foam core down with an X-Acto knife. I was hoping to use a bandsaw, but the foam core was not dense enough. My concern was getting a perfect-looking circle for the face of my tamagotchi, but I was surprised at how well just an X-Acto knife and fine-grain sandpaper could smooth a circle out of the foam core material.

I cut out the eyebrows and moustache by hand, as well as a mouthpiece that I did not have time to fully implement on the face. The mouthpiece was intended to provide a happier-looking expression when the tamagotchi was meant to be overly happy with someone’s prolonged presence.

Closer to the final presentation, coding all the emotional states and user-interaction variables became a challenge. As a result, I coded a plan B state in which the tamagotchi would provide a penny for every two seconds a spectator spent with it, while emoting an angry face as it dispensed each penny.

I placed my piece on a pillar facing the entrance of the gallery. This was intentional, so that users could see the sad emotional state of the piece from a distance.

I used an iPhone power adapter to provide a constant 5V supply to the Arduino board on the back of the piece. Oddly enough, the power supply was the last thing I considered when presenting the piece, given the attention I had paid to everything else in the project.

Overall, I was very pleased with the outcome of the project. I believe I set an attainable challenge for myself that was enjoyable, yet still stretched me without becoming frustrating. Going forward, I would add the mouth servo motor and spend more time with the coding component of the project to better understand how to code states.

Fritzing Diagram:



Robot Chicken Auto Vehicle


Yashodha, Janmesh, Hamster and Andrew

Our goal was to create a line-following autonomous vehicle that could also drop a ball when confronted with an obstacle. Although we have yet to get our robot chicken to successfully drive on its own, we have learned a great deal.

For a visual walkthrough of our process, please watch the following video:

Q8 Array Sensor & Continuous Servo Motors
We used a Q8 Array contrast sensor as our method of line following, driving two 360-degree continuous servos. Our original method was to use the Q8 Array sensor with an analog output; however, after much difficulty figuring out the equation in the code, our team decided to research another method of input for the sensor. As a result, we found a digital approach that was much easier for us to understand and implement on our vehicle. Although the digital option was easier to understand, we faced a challenge in getting our vehicle to process the commands for stopping and dropping the ball while in motion.

We found it very difficult to get both of our continuous servo motors to run at the same speed. Our solution came from accidentally finding a dial on each motor that would increase or decrease its speed depending on which direction it was turned.

Sonar and Servo (Ball drop)
Using a servo coded to a sonar sensor, we were able to achieve a ball-drop technique that fit our concept of a chicken laying an egg upon confronting an obstacle. We added a two-second delay for the comedic timing of our concept, as having all the actions happen at once felt too sudden.

Link to code:

Fritzing Diagram: 


Wooden Mirror Research Project

Andrew Hicks, Leon Lu, Davidson Zheng & Alex Rice-Khouri.

Daniel Rozin is a NY based artist, educator and developer who is best-known for incorporating ingenious engineering and his own algorithms to make installations that change and respond to the presence and point of view of the viewer. He’s also a Resident Artist and Associate Art Professor at ITP, Tisch School of the Arts, NYU.

Merging the geometric with the participatory, Rozin’s installations have been celebrated for their kinetic and interactive properties. Grounded in gestures of the body, the mirror is a central theme in his work. Surface transformation becomes a means to explore animated behaviours, representation and illusion. He explores the subjectivity of self-perception and blurs the line between the digital and the physical.

“I don’t like digital, I use digital. My inspirations are all from the analog world.”

He created the Wooden Mirror in 1999 for the BitForms Gallery in New York. The mirror is an interactive sculpture made up of 830 square tiles of reflective golden pine. A servo rotates each tile on its axis, reflecting a varying amount of light and in turn creating a gradient of colour. A hidden camera behind the mirror, connected to a computer, decomposes the image into a map of light intensity.

The mirror is meant to explore the inner workings of image creation and human visual perception.

The Research Technique 

How Shape Detection Works 

The fundamental idea of representing a person from their form can be as simple or complex as you want it to be. In our case, we tried to trace the rough contours of a shape (e.g. your hand) using light sensors. The same basic principle of activating and deactivating pixels based on relative intensity applies just as well to Photoshop’s spot-healing tools as to line-following robots, facial recognition, and all of Daniel Rozin’s reflective exhibits.

Rozin’s Wooden Mirror (most likely) used a simple bitmap, expressing how far a tile should pivot by the average brightness of a swatch of pixels. You determine the number of digital pixels per physical pixel by the scale of the video resolution relative to the roughly 29×29 mirror grid. All of Rozin’s recent projects (Penguins, PomPoms, etc.) use a Kinect and some combination of the image-analysis and edge-detection tools found in the OpenCV framework. The most popular algorithm for this is Canny edge detection.
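As a sketch of this bitmap idea (my reconstruction, not Rozin’s actual code), each tile’s angle can come from averaging the block of camera pixels behind it:

```cpp
#include <vector>

// Map a grayscale camera frame onto a grid of tile angles: average the
// brightness of the block of pixels behind each physical tile, then scale
// that average (0..255) to a tilt angle (0..maxAngle).
// `image` is row-major, w x h; the tile grid is cols x rows; w and h are
// assumed to divide evenly for simplicity.
std::vector<int> tileAngles(const std::vector<int>& image, int w, int h,
                            int cols, int rows, int maxAngle) {
    std::vector<int> angles(cols * rows);
    int bw = w / cols, bh = h / rows;  // camera pixels per tile
    for (int ty = 0; ty < rows; ++ty) {
        for (int tx = 0; tx < cols; ++tx) {
            long sum = 0;
            for (int y = ty * bh; y < (ty + 1) * bh; ++y)
                for (int x = tx * bw; x < (tx + 1) * bw; ++x)
                    sum += image[y * w + x];
            int avg = static_cast<int>(sum / (bw * bh));   // 0..255
            angles[ty * cols + tx] = avg * maxAngle / 255; // brightness -> tilt
        }
    }
    return angles;
}
```

For the Wooden Mirror, each resulting angle would then be written to that tile’s servo, so brighter regions of the camera image tilt their tiles further toward the light.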

Canny Edge Detection 

1) Reduce Noise 

Look for any unusually high or low points in 5×5 grids, using a Gaussian filter. Think lens flares from the sun or dust specks on the lens itself. It’s the same idea as the histogram in Adobe Lightroom that lets you filter out any completely dark or light areas.
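A simplified illustration of this smoothing step (using a 3×3 Gaussian kernel rather than the 5×5 of full Canny, and leaving border pixels unchanged for brevity):

```cpp
#include <vector>

// Blur a grayscale image with a 3x3 Gaussian kernel. Each interior pixel
// becomes a weighted average of itself and its eight neighbours, which
// flattens isolated spikes (dust specks, lens flares) before edge finding.
std::vector<int> gaussianBlur3x3(const std::vector<int>& img, int w, int h) {
    static const int k[3][3] = {{1, 2, 1}, {2, 4, 2}, {1, 2, 1}}; // sums to 16
    std::vector<int> out = img;  // borders stay as-is
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            int sum = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    sum += k[dy + 1][dx + 1] * img[(y + dy) * w + (x + dx)];
            out[y * w + x] = sum / 16;
        }
    return out;
}
```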



2) Find the Intensity Gradient

Scan the image left to right, row by row, to see which pixels are darker (higher value) than their neighbours. Then scan top to bottom, column by column, to find the same thing. You do this for every pixel.
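The two scans amount to taking a horizontal and a vertical derivative at each pixel. A simplified version (full Canny uses Sobel kernels, but the idea is the same):

```cpp
#include <cstdlib>
#include <vector>

// Gradient strength at an interior pixel of a row-major grayscale image:
// dx compares the left and right neighbours (the row scan), dy compares
// the top and bottom neighbours (the column scan). The combined magnitude
// |dx| + |dy| is large exactly where intensity changes sharply - an edge.
int gradientAt(const std::vector<int>& img, int w, int x, int y) {
    int dx = img[y * w + (x + 1)] - img[y * w + (x - 1)];
    int dy = img[(y + 1) * w + x] - img[(y - 1) * w + x];
    return std::abs(dx) + std::abs(dy);
}
```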


3) Suppression

Now you suppress any points that are not local maxima. Points whose values are larger than both of their neighbours get assigned a 1; the neighbours get flattened to 0. This gives you a 1-pixel-thin edge; a line. The entire map can be represented in binary.
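A one-dimensional sketch of this suppression (real Canny compares along the gradient direction in 2D, but a single row shows the idea):

```cpp
#include <vector>

// Non-maximum suppression along one row of gradient values: a point keeps
// a 1 only if it is strictly greater than both horizontal neighbours;
// everything else flattens to 0, leaving 1-pixel-thin edges.
std::vector<int> suppressRow(const std::vector<int>& grad) {
    std::vector<int> out(grad.size(), 0);
    for (int i = 1; i + 1 < static_cast<int>(grad.size()); ++i)
        if (grad[i] > grad[i - 1] && grad[i] > grad[i + 1])
            out[i] = 1;
    return out;
}
```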


4) Thresholding (Hysteresis)

There are two thresholds: a minimum value and a maximum value.

There are three categories: [Below minimum], [Between min-max], [Above maximum].

For points that fall between the minimum and maximum thresholds: if they’re attached to a “sure edge” they get counted; if they’re not beside a sure edge, they get thrown away.
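A one-dimensional sketch of this hysteresis rule (2D Canny checks all eight neighbours, but one row shows the logic):

```cpp
#include <vector>

// Hysteresis thresholding over one row of gradient values.
// First classify: above `hi` = sure edge (2), between `lo` and `hi` =
// maybe (1), below `lo` = rejected (0). Then keep "maybe" points only
// when an immediate neighbour is a sure edge.
std::vector<int> hysteresis(const std::vector<int>& grad, int lo, int hi) {
    int n = static_cast<int>(grad.size());
    std::vector<int> cls(n);
    for (int i = 0; i < n; ++i)
        cls[i] = grad[i] >= hi ? 2 : (grad[i] >= lo ? 1 : 0);
    std::vector<int> out(n, 0);
    for (int i = 0; i < n; ++i) {
        if (cls[i] == 2) out[i] = 1;                      // sure edge: keep
        else if (cls[i] == 1) {                           // maybe: keep only
            bool nearSure = (i > 0 && cls[i - 1] == 2) || // beside a sure edge
                            (i + 1 < n && cls[i + 1] == 2);
            out[i] = nearSure ? 1 : 0;
        }
    }
    return out;
}
```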


Our Prototype 

When first assessing the project’s needs, it became apparent that materials and scale were going to be an important part of making our prototype work as accurately as possible. A heavy wooden frame was used to house nine 180-degree servo motors and nine 3.5”×3.5” light frame panels made out of popsicle sticks and tracing paper.


A foam core divider was placed inside the frame to prevent light from interfering with the other photoresistors within the box, and also served as a base for each of our nine servo motors and photoresistor sensors. Each servo motor was fitted with a chassis, made of popsicle sticks, to ensure steady forward and backward motion when pushing the 3.5”×3.5” panels. Each chassis was accompanied by a wire connecting the servo motor to a “floating” arm that would push the panels back and forth.


We considered building a 90-degree angled arm to push the panels an even greater distance, but using only a wire arm, our team was able to move each panel 0.75” back and forth, enough to achieve the desired effect.


Our build allows users to interact with the prototype by shining an LED light on each 3.5”×3.5” panel to trigger its servo in whatever sequence the user chooses, creating an interactive pixelated effect that mirrors the user’s actions. The inverse, using shadows, can also be achieved by reversing our input method in code.

Video 1:

Video 2:

Available at
A single Arduino controls a row of three photoresistors and three micro servo motors. The half-rotation motor positions are set to 0 initially. They are then set to variable values (60, 120, 180) relative to the photoresistor values (i.e. the higher the photoresistor value, the greater the motor position). For simplicity, we set the motor to 180 degrees if the photoresistor exceeds a certain threshold; otherwise, the motor is set back to 0 degrees, thus pulling each grid square inwards or pushing it outwards.
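Both mappings can be sketched in plain C++ (the threshold values here are assumed for illustration; on the Arduino, the reading would come from `analogRead` on a 0-1023 scale):

```cpp
// Simplified rule actually used: snap the panel fully out above a
// threshold, fully back otherwise.
int panelPosition(int lightReading, int threshold) {
    return lightReading > threshold ? 180 : 0;
}

// The graded alternative described in the text: the higher the
// photoresistor reading, the greater the motor position (60/120/180).
// Cut-off values (300/600/900) are my own illustrative choices.
int gradedPosition(int lightReading) {
    if (lightReading > 900) return 180;
    if (lightReading > 600) return 120;
    if (lightReading > 300) return 60;
    return 0;
}
```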

In terms of speed control, we tried to incorporate deceleration into the backwards movement. The current mechanism is meant to let the motor move at a decreasing speed until it reaches 80% of the difference between the last position and the target position, then at a much lower speed for the rest of the difference, using a loop and a timer for each servo motor. A more effective implementation may be achieved with VarSpeedServo.h, which allows asynchronous movement of up to 8 servo motors, with advanced speed controls such as defining a sequence of (position, speed) value pairs.
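The two-stage deceleration can be modeled like this (a simplified sketch with assumed fast/slow speeds, not the exact loop-and-timer code we ran):

```cpp
#include <cstdlib>

// Two-stage deceleration: travel at the `fast` speed until 80% of the
// distance from start to target is covered, then crawl the remaining 20%
// at the `slow` speed. Avoids floating point by comparing
// covered/total < 0.8 as covered*5 < total*4.
int speedFor(int start, int target, int current, int fast, int slow) {
    int total = std::abs(target - start);
    int covered = std::abs(current - start);
    return covered * 5 < total * 4 ? fast : slow;
}
```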

Fritzing Diagram:


Workshop 2: Daily Devices, Andrew Hicks

After some research on this assignment, I decided to make a simple, low-powered vacuum cleaner using the fan, button and lights that came with our Creatron kit. I saw some RC-based vacuums (like the Roomba) that used sensors to navigate and pick up dirt, but thought this was too much to take on for a one-week assignment. Therefore I planned on making a handheld vacuum instead.

I got everything almost working until I realized that the fan needed more power. I tried following tutorials to add more power to the fan, but only ended up shorting out (or so I assume) the fan and a few LEDs. I was left with nothing. NOTHING! Except my board, which I now think is on the fritz…

My backup plan after the short-out was to create a two-button elevator system, but my research concluded that this was going to be too time consuming. As a result, I ended up playing with the DC and servo motors.

After hours of fooling around and experimenting with new things, I realized that I was running out of time. I kept thinking of household devices that didn’t actually exist, and tried to turn my mistakes into household devices…

In the end, with the time I had left, I still wanted something to show. I ended up creating a clock with the servo motor that came with our Creatron kit. The only issue, I realized afterward, is that the servo motor would only travel 180 degrees, not 360. As a result, my clock will only count up to 35 seconds and then restart from where it began.

Going forward, I would like to program the code I have to adjust to time on other planets 😉



Workshop 1: DigiBird, Andrew Hicks

Behold! The singing bird and his dancing bean!

I used the magnetic motor and magnet to create the dancing bean. I had to unravel some of the copper wiring to extend the reach of the dancing bean’s dance pen.

I switched out the internal speaker for a larger one found in a Creatron kit to make a louder sound, “disguised” as a megaphone.

I hid most of the electronics beneath the cardboard and bowl I used as a base. I would have also hidden the copper wires, but had already attached the bird and electronics to the base. I would have also made a sign and a circus tent to accompany the top hat.

Documentation Photos:
