Wooden Mirror Research Project
Andrew Hicks, Leon Lu, Davidson Zheng & Alex Rice-Khouri.

Introduction
Daniel Rozin is a New York-based artist, educator and developer best known for combining ingenious engineering with his own algorithms to make installations that change and respond to the presence and point of view of the viewer. He's also a Resident Artist and Associate Art Professor at ITP, Tisch School of the Arts, NYU.

Merging the geometric with the participatory, Rozin's installations have been celebrated for their kinetic and interactive properties. Grounded in gestures of the body, the mirror is a central theme in his work. Surface transformation becomes a means to explore animated behaviors, representation and illusion. He explores the subjectivity of self-perception and blurs the line between the digital and the physical.

“I don’t like digital, I use digital. My inspirations are all from the analog world”

He created the Wooden Mirror in 1999 for the BitForms Gallery in New York. The mirror is an interactive sculpture made up of 830 square tiles of reflective golden pine. A servo rotates each square tile on its axis, reflecting a varying amount of light and in turn creating a range of tones. A hidden camera behind the mirror, connected to a computer, decomposes the image into a map of light intensity.

The mirror is meant to explore the inner workings of image creation and human visual perception.

The Research Technique 

How Shape Detection Works 

The fundamental idea of representing a person from their form can be as simple or complex as you want it to be. In our case, we tried to trace the rough contours of a shape (e.g. your hand) using light sensors. The same basic principle of activating and deactivating pixels based on relative intensity applies just as well to Photoshop's spot healing tools as to line-following robots, facial recognition, and all of Daniel Rozin's reflective exhibits.

Rozin's Wooden Mirror (most likely) used a simple bitmap, expressing how far a tile should pivot by the average brightness of a swatch of pixels. The number of digital pixels per physical pixel is set by the ratio of the video resolution to the roughly 29×29 tile grid. All of Rozin's recent projects (Penguins, PomPoms, etc.) use a Kinect and some combination of the image-analysis and edge-detection tools found in the OpenCV framework. The most popular algorithm for this is Canny edge detection.
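To make that concrete, here's a minimal sketch (not Rozin's actual code) of how a camera frame could be averaged down to one brightness value per tile, assuming OpenCV; the function name and the 29×29 grid size are our own estimates:

```cpp
#include <opencv2/opencv.hpp>

// Reduce a camera frame to one 0-255 brightness value per mirror tile.
cv::Mat frameToTileMap(const cv::Mat &frame, int cols = 29, int rows = 29) {
    cv::Mat gray, tiles;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);   // work with intensity only
    // INTER_AREA averages the source pixels that fall inside each tile's swatch,
    // which is exactly the "brightness of a swatch of pixels" idea above.
    cv::resize(gray, tiles, cv::Size(cols, rows), 0, 0, cv::INTER_AREA);
    return tiles;   // each value can then drive how far one physical tile pivots
}
```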

Canny Edge Detection 

1) Reduce Noise 

Smooth out any unusually bright or dark points by running a 5×5 Gaussian filter over the image. Think lens flares from the sun or dust specks on the lens itself. It's a similar idea to the histogram in Lightroom or Photoshop that lets you filter out completely black or blown-out areas.

[Image: Screen Shot 2015-11-10 at 10.52.29 AM]
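As a rough sketch of this step (assuming OpenCV; the function name is ours):

```cpp
#include <opencv2/opencv.hpp>

// Step 1: smooth the grayscale image with the 5x5 Gaussian kernel mentioned above.
cv::Mat reduceNoise(const cv::Mat &gray) {
    cv::Mat smoothed;
    // sigma = 0 tells OpenCV to derive the sigma from the kernel size.
    cv::GaussianBlur(gray, smoothed, cv::Size(5, 5), 0);
    return smoothed;   // dust specks and hot pixels get averaged into their neighbours
}
```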

 

2) Find the Intensity Gradient

Scan the image left to right, row by row, to measure how sharply each pixel's intensity changes compared to its horizontal neighbours. Then scan top to bottom, column by column, to measure the same thing vertically. Doing this for every pixel gives each one a gradient: how strong the change in brightness is, and which direction it runs in.

[Image: Screen Shot 2015-11-10 at 10.52.52 AM]
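A sketch of the two scans, again assuming OpenCV (Sobel filters are the usual way Canny implementations compute the gradient):

```cpp
#include <opencv2/opencv.hpp>

// Step 2: measure how quickly intensity changes horizontally and vertically.
void intensityGradient(const cv::Mat &smoothed, cv::Mat &magnitude, cv::Mat &angle) {
    cv::Mat gx, gy;
    cv::Sobel(smoothed, gx, CV_32F, 1, 0);   // change along each row (left to right)
    cv::Sobel(smoothed, gy, CV_32F, 0, 1);   // change down each column (top to bottom)
    // Combine both passes into edge strength and edge direction for every pixel.
    cv::cartToPolar(gx, gy, magnitude, angle, true);   // angle in degrees
}
```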

3) Non-Maximum Suppression

Now you suppress any points that aren't actually the peak of an edge. A pixel whose gradient value is larger than both of its neighbours (along the direction of the gradient) gets assigned a 1; the neighbours get flattened to 0. This gives you a 1-pixel-thin edge, a line, and the entire map can be represented in binary.

[Image: Screen Shot 2015-11-10 at 10.53.28 AM]
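A simplified sketch of the suppression pass; a full implementation compares each pixel against its two neighbours along the gradient direction, but for brevity this version only handles the horizontal case described above:

```cpp
#include <opencv2/opencv.hpp>

// Step 3: keep only local maxima so each edge collapses to a one-pixel-thin line.
cv::Mat suppress(const cv::Mat &magnitude) {
    cv::Mat thin = cv::Mat::zeros(magnitude.size(), CV_8U);
    for (int y = 1; y < magnitude.rows - 1; ++y) {
        for (int x = 1; x < magnitude.cols - 1; ++x) {
            float m = magnitude.at<float>(y, x);
            // A pixel that beats both horizontal neighbours gets a 1;
            // everything else stays 0, giving the binary edge map.
            if (m >= magnitude.at<float>(y, x - 1) && m >= magnitude.at<float>(y, x + 1))
                thin.at<uchar>(y, x) = 1;
        }
    }
    return thin;
}
```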

4) Thresholding (Hysteresis)

There are two thresholds: a minimum value and a maximum value.

That gives three categories: below the minimum, between the two, and above the maximum.

Points above the maximum are counted as "sure edges" and points below the minimum are thrown away. For points that fall between the two thresholds, if they're attached to a sure edge they get counted; if they're not, they get thrown away too.

[Image: Screen Shot 2015-11-10 at 10.53.41 AM]
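In practice, OpenCV's Canny() rolls all four steps into a single call; the two numbers after the images are the minimum and maximum thresholds described above (the values here are placeholders, not ones tuned for the Wooden Mirror):

```cpp
#include <opencv2/opencv.hpp>

// Steps 1-4 in one call: blur, gradient, suppression and hysteresis thresholding.
cv::Mat detectEdges(const cv::Mat &gray) {
    cv::Mat edges;
    // Above 150 = sure edge, below 50 = thrown away,
    // in between = kept only if connected to a sure edge.
    cv::Canny(gray, edges, 50, 150);
    return edges;   // white one-pixel-wide lines on a black background
}
```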

Our Prototype 

Build
When first assessing the project's needs, it became apparent that materials and scale were going to be an important part of making our prototype work as accurately as possible. A heavy wooden frame was used to house nine 180-degree servo motors and nine 3.5”x3.5” lightweight panels framed with popsicle sticks and covered in tracing paper.

[Image: IMG_5053]

A foam core divider was placed inside the frame to prevent light from interfering with the other photoresistors in the box, and to act as a base for each of our nine servo motors and photoresistor sensors. Each servo motor was fitted with a popsicle-stick chassis to ensure a steady forward and backward motion when pushing the 3.5”x3.5” panels. Each chassis was paired with a wire connecting the servo motor to a “floating” arm that pushes the panels back and forth.

[Image: IMG_5041]

We considered building a 90-degree angled arm to push the panels an even further distance, but the simple wire arm already moved each panel 0.75” back and forth, enough travel to achieve the desired effect.

[Images: IMG_5043, IMG_5042]

Our build lets users interact with the prototype by shining an LED light on any of the 3.5”x3.5” panels to trigger its servo, in whatever sequence the user chooses, creating an interactive pixelated effect that mirrors the user's actions. The inverse, using shadows instead of light, can be achieved by reversing our input mapping in code.

Video 1:

Video 2:

Code
Available at https://github.com/Minsheng/woodenmirror
A single Arduino controls a row of three photoresistors and three micro servo motors. The half-rotation servo positions are set to 0 initially, then set to values (60, 120, 180) relative to the photoresistor readings (i.e. the higher the photoresistor value, the greater the motor position). For simplicity, we set a motor to 180 degrees if its photoresistor reading exceeds a certain threshold; otherwise the motor is set back to 0 degrees, pulling each tile inwards or pushing it outwards.
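A simplified Arduino sketch of that threshold behaviour (not the exact code in the repo; pin numbers and the threshold value are placeholders):

```cpp
#include <Servo.h>

const int NUM_PIXELS = 3;
const int sensorPins[NUM_PIXELS] = {A0, A1, A2};   // photoresistor voltage dividers
const int servoPins[NUM_PIXELS]  = {9, 10, 11};    // micro servos
const int THRESHOLD = 600;                         // 0-1023 analog reading

Servo servos[NUM_PIXELS];

void setup() {
  for (int i = 0; i < NUM_PIXELS; i++) {
    servos[i].attach(servoPins[i]);
    servos[i].write(0);                 // all panels start pulled in
  }
}

void loop() {
  for (int i = 0; i < NUM_PIXELS; i++) {
    int light = analogRead(sensorPins[i]);
    // Bright light pushes the panel out; darkness pulls it back in.
    servos[i].write(light > THRESHOLD ? 180 : 0);
  }
  delay(50);                            // small settling delay between scans
}
```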

In terms of speed control, we tried to incorporate deceleration into the backward movement. The current mechanism is meant to let the motor move at a decreasing speed until it has covered 80% of the difference between the last position and the target position, then at a much lower speed for the rest of the way, using a loop and a timer for each servo motor. A more effective implementation might use VarSpeedServo.h (github.com/netlabtoolkit/VarSpeedServo), which allows asynchronous movement of up to 8 servo motors, with finer control over speed such as defining a sequence of (position, speed) value pairs.
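For reference, this is roughly how the VarSpeedServo alternative would look, based on our reading of the library's documentation (we did not use it in the final prototype):

```cpp
#include <VarSpeedServo.h>

VarSpeedServo panel;

void setup() {
  panel.attach(9);
  panel.write(0, 255, true);    // move to the start position at full speed, wait until done
}

void loop() {
  panel.write(180, 120, true);  // push the panel out at a moderate speed (blocking)
  panel.write(0, 30, true);     // pull it back in much more slowly (the "deceleration")
}
```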

Fritzing Diagram:

[Image: WoodenMirrorPrototype_bb]