Category Archives: Research Projects


Research Project # 7 – Connecting Light

Project by: Jazmine Yerbury, Margarita Castro, Marcus Gordon & Hammadullah Syed.

Connecting Light was an outdoor art installation that combined LED lights and wireless communications technology. It was installed in Northern England along Hadrian's Wall which, as the image below shows, stretches for 73 miles (117 kilometres). The wall was built starting in 122 AD, during the rule of the Roman emperor Hadrian.

[Image: Hadrian's Wall]

A glimpse of the installation shows the scale of the work, spanning the wall's large territory. Lights in the balloons blink in response to audience members sending and receiving messages via the balloons and a mobile app on their phones.

 Who was behind this project?

The development team behind this project was a coalition of members of YesYesNo, spearheaded by Zach Lieberman, a well-known artist, researcher and hacker dedicated to exploring new modes of expression and play.

YesYesNo LLC is an interactive collective that specializes in the creation of engaging, magical installations combining creativity, artistic vision and cutting-edge R&D.

For this project, 400 Digi Programmable XBees and 20 ConnectPort X4 gateways were required, all for 400 balloons lined up along the 73 miles of Hadrian's Wall.

[Image: balloon LEDs]

Zach Lieberman explains that each balloon is programmable, allowing the intensity of its colours to be controlled. Based on our research, it seems control was split between user input and automatic adjustments that the XBee makes in response to weather conditions.

The equipment for each balloon was well organized:

[Images: the equipment for each balloon]

This transmedia map shows how the system takes text messages as input, passes them through the X4 gateways, and transmits them to the XBee modules controlling the lights.

[Image: transmedia system map]
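To make that flow concrete, here is a minimal Arduino-style sketch of what the balloon-side logic might look like. This is our own hedged reconstruction, not YesYesNo's firmware: we assume the XBee runs in transparent serial mode and simply forwards two-byte channel/value commands to a microcontroller driving the balloon's LED (the real installation ran on Digi Programmable XBees, whose programming model differs).

```
// Hypothetical balloon controller: reads 2-byte commands ('R' + value,
// 'G' + value, 'B' + value) arriving from an XBee in transparent serial
// mode and sets RGB LED intensity. Pin choices are assumptions.
const int RED_PIN = 9, GREEN_PIN = 10, BLUE_PIN = 11;  // PWM pins

void setup() {
  Serial.begin(9600);             // common XBee default baud rate
  pinMode(RED_PIN, OUTPUT);
  pinMode(GREEN_PIN, OUTPUT);
  pinMode(BLUE_PIN, OUTPUT);
}

void loop() {
  if (Serial.available() >= 2) {  // wait for a channel byte + value byte
    char channel = Serial.read();
    byte value = Serial.read();
    if (channel == 'R') analogWrite(RED_PIN, value);
    if (channel == 'G') analogWrite(GREEN_PIN, value);
    if (channel == 'B') analogWrite(BLUE_PIN, value);
  }
}
```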

[Image: Zach Lieberman]

Zach Lieberman is the co-founder of openFrameworks, an open-source C++ toolkit designed to assist the creative process by providing a simple and intuitive framework for experimentation. openFrameworks is designed to work with several commonly used libraries, such as FreeType (fonts), FreeImage (image saving and loading) and OpenCV. It is distributed under the MIT License, which gives everyone the freedom to use openFrameworks in any context, commercial or non-commercial.

His aim is to use technology in a playful way to break down the fragile boundary between the visible and invisible – augmenting the body’s ability to communicate.

Through his work, he looks for the "open mouth" phenomenon of being in awe: when something is so great that your conscious mind has no power over your physical body and your jaw drops. It can also be perceived as a gateway to someone's heart.

 Connected Colour – RGB Morse code transmitter and decoder

First Prototype “Hello”:

Our first prototype for this research project was very simple: text typed into the serial monitor is output as blinking, beeping Morse code.
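Below is a minimal sketch in the spirit of that first test (pin numbers and timing are our assumptions, not the exact code we ran): text from the serial monitor is blinked on an LED and beeped on a piezo buzzer.

```
// Minimal Morse encoder sketch: serial text -> LED blinks + buzzer beeps.
const int LED_PIN = 13, BUZZER_PIN = 8;  // assumed pins
const int UNIT_MS = 120;                 // one Morse time unit

// Morse table for a-z (dot = '.', dash = '-')
const char* MORSE[26] = {
  ".-","-...","-.-.","-..",".","..-.","--.","....","..",".---","-.-",
  ".-..","--","-.","---",".--.","--.-",".-.","...","-","..-","...-",
  ".--","-..-","-.--","--.."
};

void emit(int units) {               // LED + buzzer on for n units
  digitalWrite(LED_PIN, HIGH);
  tone(BUZZER_PIN, 700);
  delay(units * UNIT_MS);
  digitalWrite(LED_PIN, LOW);
  noTone(BUZZER_PIN);
  delay(UNIT_MS);                    // gap between symbols
}

void setup() {
  pinMode(LED_PIN, OUTPUT);
  pinMode(BUZZER_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available()) {
    char c = tolower(Serial.read());
    if (c == ' ') { delay(7 * UNIT_MS); return; }  // word gap
    if (c < 'a' || c > 'z') return;                // skip other characters
    for (const char* p = MORSE[c - 'a']; *p; p++)
      emit(*p == '.' ? 1 : 3);       // dot = 1 unit, dash = 3 units
    delay(2 * UNIT_MS);              // letter gap (1 unit already elapsed)
  }
}
```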

Second Prototype “Morse Code (sound decoder)”:

We built a circuit and code that let the user input text. The text is output as Morse-encoded beeps; the signal is then read by an electret mic and decoded back into English on another device.

Final prototype:

For the final prototype, we created a circuit with two sets of RGB LEDs, with styrofoam lamps to diffuse the light. The code allowed text to be encoded into a Morse code signal of light blinks, which was then read by a photocell and decoded back into English on another device.

Sets of RGB LEDs and styrofoam lamps:

[Image: RGB LED and styrofoam lamps]

  1. The first LED circuit, with two RGB LEDs, was connected to a buzzer and to a second circuit of two RGB LEDs.
  2. Both circuits were connected to the Arduino's PWM digital pins.
  3. The two lamp prototypes diffused and enhanced the light from the RGB LED circuits.
  4. Through the Morse encoder Arduino code, words are input through the serial port.
  5. Each letter is generated in Morse code on RGB light circuit #1 and the buzzer.
  6. RGB light circuit #2 replicates every letter from RGB light circuit #1 in Morse code.

Decoder:
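The decoder is easiest to summarize as a sketch. The following is a hedged illustration of the photocell receiving side (pins, threshold and timing are assumptions): it times how long the light stays above a brightness threshold and classifies each flash as a dot or a dash; a lookup table like the encoder's would then map the symbols back to English letters.

```
// Hedged sketch of the receiving side: a photocell (LDR) voltage divider
// on A0 watches the sender's LED. Flash durations are classified as
// dots or dashes and printed over serial.
const int LDR_PIN = A0;
const int THRESHOLD = 600;   // assumed analog reading for "light on"
const int UNIT_MS = 120;     // must match the sender's time unit

void setup() { Serial.begin(9600); }

void loop() {
  // wait for the light to come on
  while (analogRead(LDR_PIN) < THRESHOLD) {}
  unsigned long start = millis();
  // wait for the light to go off again
  while (analogRead(LDR_PIN) >= THRESHOLD) {}
  unsigned long duration = millis() - start;
  // anything under ~2 units is a dot, anything longer is a dash
  Serial.print(duration < 2UL * UNIT_MS ? '.' : '-');
}
```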

The Senster by Edward Ihnatowicz (Research Project Group 6)

A groundbreaking cybernetic sculptor, Edward Ihnatowicz explored the interaction between his robotic works and their audience. One of the first computer-controlled interactive robotic works of art, The Senster, a 15-foot-long cybernetic sculpture commissioned by Philips in the early 70s, is widely considered one of Ihnatowicz's greatest achievements. The sculpture sensed the behaviour of visitors through their sounds and movements. It reacted by drawing closer to what it saw as interesting and friendly gestures and sounds, while "shying" away from less friendly, louder ones.

Cybernetic Art

Robotics in art might be said to have become a normal notion today. Of course, that hasn't always been the case. Edward Ihnatowicz, a scientific artist in his own right, experimented with robotics and cybernetics around the middle of the 20th century, experiments that are thought to have led to some of the most groundbreaking cybernetic art, which we now, perhaps, take for granted.

One of his grander creations was the Senster, built over a period of more than two years. The life-like "being" was one of the first to catch the public's attention, a fact that can be credited to Philips, who commissioned the venue at which the Senster was shown from 1970 to 1974. As Ihnatowicz himself notes in his cybernetic art – A personal statement, "[The Senster] was the first sculpture to be controlled by a computer". (Ihnatowicz, n.d.)


Figure 1. “Robots in Art”. Retrieved November 14, 2015, from http://senster.com/robots_in_art/

Edward Ihnatowicz (1926–1988)

Born in Poland in 1926, Ihnatowicz became a war refugee at the age of 13, seeking refuge in Romania and Algiers. Four years later he moved to Britain, the country that served as his home for the remainder of his life.

Ihnatowicz attended the Ruskin School of Art, Oxford, from 1945 to 1949, where he "studied painting, drawing and sculpture […] and dabbled in electronics. […] But then he threw away all of his electronics to concentrate on the finer of the fine arts." Ihnatowicz would later call the move the "[s]tupidest thing I've ever done. […] I had to start again from scratch 10 years later." (Reffin-Smith, 1985).

For over a decade he created bespoke furniture and interior decoration, until 1962, when he left his home in hopes of finding his artistic roots. For the next six years he lived in an unconverted garage, experimenting with life sculpture, portraiture and sculpture made from scrap cars. It was during this time that Ihnatowicz found "technological innovation opening a completely new way of investigating our view of reality in the control of physical motion" (Ihnatowicz, n.d.). Brian Reffin-Smith restated this in 1985, writing that Ihnatowicz "is interested in the behaviour of things" and feels that "technology is what artists use to play with their ideas, to make them really work." (Reffin-Smith, 1985).

Ihnatowicz goes on to say that the "[p]rincipal value of art is its ability to open our eyes to some aspect of reality, some view of life hitherto unappreciated" and that in the day and age of technology, the artist "can embrace the new revolution and use the new discoveries to enhance his understanding of the world" (Ihnatowicz, n.d.).

Since our project is based on one of Ihnatowicz's works, the Senster, we felt our group shared his views on learning by doing. Especially relatable to us was the fact that Ihnatowicz "only learned about computing and programming while already constructing the Senster" (Ihnatowicz, n.d.). His "own self-taught command of scientific and technical detail is equalled by very few other artists", Jonathan Benthall wrote about Ihnatowicz, suggesting, in our opinion, that where there's a will there's a way (Benthall, 1971).

Figure 2. A picture of Edward Ihnatowicz working on the controls for a smaller scale of The Senster. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/senster/sensterstructure/index.htm

Ihnatowicz’s work – SAM

In 1968 Ihnatowicz created SAM (Sound Activated Mobile). According to Ihnatowicz, SAM was “the first moving sculpture which moved directly and recognisably in response to what was going on around it” making it perhaps one of the first interactive and dynamic sculptures. Ihnatowicz stated that SAM was “an attempt to provide a piece of kinetic sculpture with some purposefulness and positive control of its movement” (Ihnatowicz, n.d.).


Figure 3. SAM. Retrieved November 14, 2015, from http://www.senster.com/ihnatowicz/SAM/sam2.htm

Ihnatowicz discovered that “one of the shapes [he] had developed for SAM had a very close equivalent in nature in the claw of a lobster. It appears that lobsters are some of the very few animals that have very simple, hinge-like joints between the sections of their exoskeletons. […] A lobster’s claw was, therefore, inevitably, the inspiration for [his] next piece, the Senster”.

Ihnatowicz’s work – The Senster

In an interesting Gizmodo article from 2012, the writer, Lewis, describes the origins of the Senster: "[i]t was Ihnatowicz's interest in the emulation of animal movement that led him to become a pioneer of robotic art. Recording a lioness in its cage in a zoo, Ihnatowicz noticed the big cat turn and look at the camera then look away, leading him to ponder creating a sculpture that could do something similar in an art gallery, with the same feeling of a moment of contact with another seemingly sentient being." (Lewis, 2012).


Figure 4. The Senster. Retrieved October 10, 2015, from http://www.senster.com/ihnatowicz/senster/sensterphotos/index.htm

Commissioned by Philips for the Evoluon in Eindhoven, Holland, from 1970 – 1974, the Senster “was the first sculpture to be controlled by a computer”, as mentioned earlier, with its realization taking more than two years (Ihnatowicz, n.d.).


Figure 5. One of Ihnatowicz’s initial sketches of The Senster. Retrieved October 10, 2015, from http://www.researchgate.net/publication/221629713_The_development_of_a_cybernetic_sculptor_Edward_Ihnatowicz_and_the_senster

About 15 feet long, the Senster responded to sound using four microphones located on the front of its “head”, and also responded to movement, which it detected by means of radar horns on either side of the microphone array. Ihnatowicz wrote that the “microphones would locate the direction of any predominant sound and home in on it […] the rest of the structure would follow them in stages if the sound persisted. Sudden movements or loud noises would make it shy away” (Ihnatowicz, n.d.).

On its appearance and the overall experience, one of the Evoluon’s visitors, Brian Reffin-Smith, wrote “[t]he sight of this big, swaying head coming down from 15ft away to hover uncertainly in front of you was more moving than you’d suppose.” He underlines the Senster’s almost hypnotic powers by saying “[c]ouples, it is said, had wedding photographs taken in front of it. Kids watched it for four, five hours at a time.” (Reffin-Smith, 1985).

Aleksandar Zivanovic, editor of the very informative senster.com, provides all kinds of thorough information about the Senster, Ihnatowicz himself and his work, including technical details. On how the Senster perceived its world, he mentions that it “used two Hewlett-Packard doppler units (with custom made gold plated antenna horns) to detect movement near its ‘head’.” (Zivanovic, n.d.).


Figure 6. The Senster’s eyes that tracked movement – a spare of Hewlett-Packard doppler unit. Retrieved November 14, 2015, from http://www.senster.com/ihnatowicz/senster/sensterradar/index.htm

In 1972, Jasia Reichardt went into detail on the Senster’s sensing/actuating flow: “the sounds which reach the two channels are compared at frequent intervals through the use of the control computer, and reaction is motivated when the sounds from the two sources match as far as possible. What occurs visually is that the microphones point at the source of sound and within a fraction of a second the Senster turns towards it.” (Reichardt, 1972).


Figure 7. The Senster’s ears – an array of four microphones – that tracked sound. Retrieved October 10, 2015, from http://www.senster.com/ihnatowicz/senster/sensterphotos/index.htm

The control computer, a Philips P9201, was actually a re-badged Honeywell 16-series machine with 8K of memory, a punched paper tape unit and a teletype. Zivanovic makes a rather funny point when he mentions that the computer's "insurance value [translated to current currency] was worth more than [his] parent's three bedroom house in London". (Zivanovic, n.d.).

On a similar note, in his essay on Ihnatowicz, Zivanovic writes "[The Senster's overall] system was insured for £50,000 – the equivalent of around US $4.5m in current value [2005] – when it was shipped from London to Eindhoven in 1970" (Zivanovic, 2005).


Figure 8. The Senster’s brain – a Philips P9201 computer. The left cabinet held the hydraulics equipment and the right cabinet was the computer itself. Retrieved October 10, 2015, from http://www.senster.com/ihnatowicz/senster/senstercomputer/index.htm

The cybernetic sculpture held itself up by three static “legs” while it moved via six electro-hydraulic servo-systems, based on the aforementioned lobster’s claw, allowing six degrees of freedom.

Taking into account the large size of his creation, Ihnatowicz had some help from Philips' technicians in realizing an economical way of moving the Senster. Their combined efforts resulted in driving the sculpture's movements through constant acceleration and deceleration (Benthall, 1971).

This life-like sense of motion, combined with the Senster's large scale and multiple streams of sensory data, contributed to the illusion of a real live creature. In retrospect, Ihnatowicz said that "[t]he complicated acoustics of the hall and the completely unpredictable behaviour of the public made the Senster's movements seem a lot more sophisticated than they actually were." (Ihnatowicz, n.d.). Reichardt agrees, stating that "[s]ince the Senster responds to a number of stimuli simultaneously its reactions are more life-like and less obvious than if merely the volume of sound were to provoke a slow or fast movement." (Reichardt, 1972).

SPOT – Our group’s Prototype Based on the Senster

Who is SPOT?

Our version of the Senster is SPOT: a dynamic cybernetic sculpture that loves to daydream and people-watch. When SPOT is on its own, it daydreams, swaying its head and body from side to side, taking in the environment. When somebody walks up to SPOT, it excitedly looks up into that person's eyes and becomes curious. SPOT observes that person, following their movement with its head and body.

How does SPOT work?

Alone, SPOT will sway its head and body in slow random patterns via its 3 servo motor joints. One is placed at the base for panning (rotation about the x-axis), another is attached to SPOT’s “neck” for leaning forwards and backwards, and the last servo joint tilts SPOT’s “head” (rotation about the y-axis).

SPOT uses OpenCV (an open-source computer vision library) and a webcam to detect faces when in view. If a face is detected, SPOT stops its random movement and begins to follow a user’s face where the bottom and top servos are given commands based on the detected face’s x and y coordinates relative to the camera’s center position. The middle servo is not controlled by external data – such as movement or sound – but rather it serves as a pre-programmed dramatic device to imbue SPOT with more sense of life and dynamic movement.
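As an illustration of this control scheme, here is a hedged sketch of what the Arduino side could look like (the serial protocol and pin choices are our own illustration, not the project's exact code): Processing sends the detected face's offset from the camera centre as "x,y" lines, and the pan and tilt servos nudge toward it.

```
// Hedged sketch of SPOT's Arduino side: Processing sends the face's
// offset from the camera centre as "x,y\n"; the pan/tilt servos step
// proportionally toward it. The middle "neck" servo (not shown) runs a
// pre-programmed sway rather than following sensor data.
#include <Servo.h>

Servo pan, tilt;
int panAngle = 90, tiltAngle = 90;   // start centred

void setup() {
  Serial.begin(9600);
  pan.attach(9);                     // assumed pins
  tilt.attach(10);
}

void loop() {
  if (Serial.available()) {
    int dx = Serial.parseInt();      // face offset from centre, pixels
    int dy = Serial.parseInt();
    if (Serial.read() == '\n') {
      // proportional step: small offsets barely move, large ones more
      panAngle  = constrain(panAngle  - dx / 40, 0, 180);
      tiltAngle = constrain(tiltAngle + dy / 40, 0, 180);
      pan.write(panAngle);
      tilt.write(tiltAngle);
    }
  }
}
```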

Making of SPOT – Birth

Having researched Edward Ihnatowicz and – among other of his creations – the Senster, our group came to a common understanding that, in Edward’s words, “most of our appreciation of the world around us comes to us through our interpretation of observed or sensed physical motion. […] For an artificial system to display a similar sense of purpose it is necessary for it to have a means of observing and interpreting the state of its environment” (Ihnatowicz, n.d.).

With this in mind, we each went to our separate corners to think of ways to breathe life into an otherwise dead object of electrical components. On returning, we compared notes and sketches of different ideas and implementations, ranging from a selfie-taking Selfie-Bot, to an interactive teddy bear, to an origami-inspired set of interactions and movements.

[Images: initial Selfie-Bot idea sketches]

In the end we settled on a combination of ideas that resulted in the original concept for SPOT: a cybernetic sculpture that loves its ball and would somehow ask for it back if it went missing. We were aware from the beginning that the technical scope of this concept would be multi-faceted and complex, especially in the limited timeframe, but we concluded that the project was ambitious enough that executing even a portion of it would produce a product with a synergy of interesting concepts and technical accomplishments.

With this common trajectory in place, we started to think of possible components and materials we might need. From sketches we could determine that, for a single moving joint, we would need a servo fastened onto a static and sturdy material, with the servo's "horn" connected to the next piece serving as the moving component. Other considerations included how SPOT would know that it had its ball, how SPOT would sense and recognize its ball, and how we could imbue SPOT with a sense of personality and character.

[Image: initial component analysis sketch]

Making of SPOT – Searching for a way to sense the world

We found tutorials online where Processing was used as a way to compute sensory data which was sent to Arduino for actuating on the data. Researching, testing and drawing from online tutorials like Tracking a ball and rotating camera with OpenCV and Arduino (AlexInFlatland, May 2013) and Face Tracking with a Pan/Tilt Servo Bracket (zagGrad, July 2011) we began to see the light at the end of the tunnel.

Some of the working code from the tutorials had to be tweaked and molded to meet our needs. With Greg Borenstein's great library OpenCV for Processing (Borenstein, n.d.), we found ourselves facing software compatibility issues that forced us to downgrade from Processing 3.0 to Processing v2.4 in order to take advantage of the computer vision library. We employed a common face-detection algorithm from the library, which Borenstein explains in one of his short Vimeo videos (Borenstein, July 2013).

Making of SPOT – Initial tests and experiments

Soon thereafter we turned to the actuators. The servos in question would change SPOT's field of view based on where a face was detected: SPOT's two-dimensional visual field would be altered by repositioning its head in three-dimensional space. Deciding to take things step by step, we started with basic initial tests: two servos set up within a pan/tilt bracket, controlled by two potentiometers, one for the x-axis and one for the y-axis.
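That first test boils down to a few lines of Arduino code, reconstructed here as a minimal sketch (pin choices are assumptions): each potentiometer reading is mapped directly onto a servo angle.

```
// Minimal pan/tilt test: two potentiometers drive the bracket directly.
#include <Servo.h>

Servo panServo, tiltServo;

void setup() {
  panServo.attach(9);    // assumed pins
  tiltServo.attach(10);
}

void loop() {
  // map each pot's 0-1023 reading onto the servo's 0-180 degree range
  panServo.write(map(analogRead(A0), 0, 1023, 0, 180));
  tiltServo.write(map(analogRead(A1), 0, 1023, 0, 180));
  delay(15);             // give the servos time to move
}
```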

From there, the next step was to experiment with face detection by implementing Borenstein’s OpenCV for Processing library examples; “FaceDetection” and “LiveCamTest”.

We also decided that SPOT could potentially recognize its very own ball by tracking an object with a specific colour reading. Jordi Tost's HSVColorTracking, an example based on Borenstein's aforementioned library, demonstrated a solution to the object-recognition problem (Tost, n.d.).

https://youtu.be/XaIqykLtTt4

Along the way, we had to keep SPOT's behavioural flow in mind (when is it searching for faces, when is it searching for its ball, and so on) so as not to get too caught up on a single aspect.

[Image: SPOT behaviour storyboard]

Making of SPOT – Version 1

At this point, we had the different components working separately. The next step was to introduce them to one another by putting a USB webcam into the mix.

We went through trials and errors finding a webcam that was compatible with Mac and Processing, and ended up borrowing Marcus Gordon's webcam.

Pulling through those issues, we got a working version 1 of SPOT detecting and tracking faces with the two servos!

https://youtu.be/FBnOscWJu-o

Making of SPOT – Version 2

After the thrill of having a version 1 working properly, we wanted to increase SPOT’s range of motion for a version 2. By increasing the distance from the panning X-axis to the tilting Y-axis, SPOT’s movements would be more noticeable and hopefully more memorable.

In the video linked below fellow classmate Marcelo Luft interacts with SPOT version 2, which replicates the facial detection and tracking of version 1, but includes one extra joint and a custom built apparatus.

https://youtu.be/OrEnsvVu1mw

It was also important to figure out and visualize in a 3-dimensional space where a viewer would see and interact with SPOT and its ball.

[Images: shape and presentation sketches for SPOT and its ball]

Making of SPOT – Final Presentation

Unfortunately, we ran into technical issues two days before our presentation. Realizing time was running short and we needed a dependable finished product, we decided to drop the entire ball-following narrative and focused instead on making the robot detect faces and move around with the most realism and fewest bugs possible. The final product moved smoothly during its random movements and recognized faces reliably.

When our classmates arrived for class, we had set up SPOT as the primary focus point of the room with a friendly message behind it encouraging people to interact with it.

[Images: SPOT set up for the final presentation]

The video linked below shows some of our classmates' responses to meeting SPOT in person, which are very interesting. To our delight, students commonly described SPOT's random movements and face-tracking in anthropomorphic terms. Students tried to get SPOT's attention and, when they failed, interpreted the robot's behaviour personally. This project and experiment revealed some of the fundamental properties of interactive objects that can create the illusion that an object is alive and responding personally to a user.

Although not as ambitious as our original behaviour idea for SPOT, considering that our group is a collection of newcomers to dynamic cybernetic sculpture, the version we went with is quite a feat, in our not-so-humble opinion. And who knows: in time, perhaps SPOT will get bored of faces and will insist on getting something more geometrical. A colourful ball, perhaps?


Research Project Group 6: Egill R. Viðarsson, Ling Ding, Michael Carnevale, Xiaqi Xu


References

AlexInFlatland (May 16, 2013). Tracking a ball and rotating camera with OpenCV and Arduino. Youtube. Retrieved on October 29, 2015, from https://www.youtube.com/watch?v=O6j02lN5gDw

Borenstein, Greg (July 8, 2013). Face detection with OpenCV in Processing. Vimeo. Retrieved on November 10, 2015, from https://vimeo.com/69907695

Benthall, Jonathan (1971). Science and Technology in Art Today. Thames and Hudson. London. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/articles/ihnatowicz%20brochure.pdf

Borenstein, Greg (n.d.). OpenCV for Processing. Github. Retrieved on November 6, 2015, from https://github.com/atduskgreg/opencv-processing

Ihnatowicz, Edward (n.d.). cybernetic art – A personal statement. Senster.com. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/articles/ihnatowicz%20brochure.pdf

interactivearch (July 15, 2008). SAM – Cybernetic Serendipity. Youtube. Retrieved on October 10, 2015, from https://www.youtube.com/watch?v=8b52qpyV__g

interactivearch (January 12, 2008). The Senster. Youtube. Retrieved on October 10, 2015, from https://www.youtube.com/watch?v=1jDt5unArNk

Lewis, Jacob (August 30, 2012). How the Tate Brought a Pioneering Art-Robot Back Online. Gizmodo. Retrieved on October 12, 2015, from http://www.gizmodo.co.uk/2012/08/how-the-tate-has-brought-a-pioneering-art-robot-back-online/

Reffin-Smith, Brian (1985). Soft Computing – Art and Design. Computing. Addison Wesley. London. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/articles/ihnatowicz%20brochure.pdf

Reichardt, Jasia (1972). Robots: Fact, Fiction, Prediction. Thames and Hudson. London. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/articles/ihnatowicz%20brochure.pdf

Tost, Jordi (n.d.). HSVColorTracking. Github. Retrieved on November 6, 2015, from https://github.com/jorditost/ImageFiltering/blob/master/SimpleColorTracking/HSVColorTracking/HSVColorTracking.pde

zagGrad (July 15, 2011). Face Tracking with a Pan/Tilt Servo Bracket. Sparkfun. Retrieved on October 29, 2015, from https://www.sparkfun.com/tutorials/304

Zivanovic, Aleksandar (n.d.). Senster – A website devoted to Edward Ihnatowicz, cybernetic sculptor. Retrieved on October 10, 2015, from http://www.senster.com/

Zivanovic, Aleksandar (2005). The development of a cybernetic sculptor: Edward Ihnatowicz and the senster. Researchgate.com. Retrieved on October 12, 2015, from http://www.researchgate.net/publication/221629713_The_development_of_a_cybernetic_sculptor_Edward_Ihnatowicz_and_the_senster

Wooden Mirror Research Project

Andrew Hicks, Leon Lu, Davidson Zheng & Alex Rice-Khouri.

Introduction
Daniel Rozin is a New York-based artist, educator and developer, best known for incorporating ingenious engineering and his own algorithms into installations that change and respond to the presence and point of view of the viewer. He is also a Resident Artist and Associate Art Professor at ITP, Tisch School of the Arts, NYU.

Merging the geometric with the participatory, Rozin's installations have been celebrated for their kinetic and interactive properties. Grounded in gestures of the body, the mirror is a central theme in his work. Surface transformation becomes a means to explore animated behaviours, representation and illusion. He explores the subjectivity of self-perception and blurs the line between the digital and the physical.

“I don’t like digital, I use digital. My inspirations are all from the analog world”

He created the Wooden Mirror in 1999 for the BitForms Gallery in New York. The mirror is an interactive sculpture made up of 830 square tiles of reflective golden pine. A servo rotates each square tile on its axis, reflecting a varying amount of light and in turn creating a gradient of tone. A hidden camera behind the mirror, connected to a computer, decomposes the image into a map of light intensity.

The mirror is meant to explore the inner workings of image creation and human visual perception.

The Research Technique 

How Shape Detection Works 

The fundamental idea of representing a person from their form can be as simple or complex as you want it to be. In our case, we tried to trace the rough contours of a shape (e.g. your hand) using light sensors. The same basic principle of activating and deactivating pixels based on relative intensity applies just as well to Photoshop's spot-healing tools as it does to line-following robots, facial recognition, and all of Daniel Rozin's reflective exhibits.

Rozin's Wooden Mirror (most likely) used a simple bitmap, expressing how far a tile should pivot by the average brightness of a swatch of pixels. The number of digital pixels per physical pixel is determined by the scale of the video resolution relative to the roughly 29×29 mirror grid. All of Rozin's recent projects (Penguins, PomPoms, etc.) use a Kinect and some combination of the image analysis and edge detection found in the OpenCV framework. The most popular algorithm for this is Canny edge detection.
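The bitmap idea can be pictured with a short, plain C++ illustration (ours, not Rozin's code; the camera resolution and angle range are assumptions): average each camera block's brightness and map the average to a tile angle.

```
// Illustration of the downsampling idea: average each camera block's
// brightness and map it to a tile pivot angle.
#include <cstdint>
#include <cstdio>

const int CAM_W = 290, CAM_H = 290;  // assumed camera resolution
const int GRID = 29;                 // ~29x29 tile grid
const int BLOCK = CAM_W / GRID;      // camera pixels per physical tile

// frame is an 8-bit grayscale image, row-major
int tileAngle(const uint8_t* frame, int tx, int ty) {
  long sum = 0;
  for (int y = ty * BLOCK; y < (ty + 1) * BLOCK; y++)
    for (int x = tx * BLOCK; x < (tx + 1) * BLOCK; x++)
      sum += frame[y * CAM_W + x];
  int avg = sum / (BLOCK * BLOCK);   // 0 (dark) .. 255 (bright)
  return avg * 90 / 255;             // map to an assumed 0-90 degree pivot
}

int main() {
  static uint8_t frame[CAM_W * CAM_H] = {0};  // dummy black frame
  std::printf("tile (0,0) angle: %d\n", tileAngle(frame, 0, 0));
}
```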

Canny Edge Detection 

1) Reduce Noise 

Look for any unusually high or low points in 5×5 grids and smooth them out with a Gaussian filter. Think lens flares from the sun or dust specks on the lens itself. It's the same idea as the histogram in Lightroom or Photoshop that lets you filter out any completely dark or light areas.


2) Find the Intensity Gradient

Scan the image left to right, row by row, to see which pixels differ most strongly in value from their neighbours; then scan top to bottom, column by column, to find the same thing. You do this for every pixel.


3) Suppression

Now suppress any points that are not actually on an edge. Pixels whose gradient values are larger than both of their neighbours' get assigned a 1; the neighbours get flattened to 0. This gives you a one-pixel-thin edge: a line. The entire map can be represented in binary.


4) Thresholding (Hysteresis)

There are two thresholds: a minimum value and a maximum value. These give three categories: [Below minimum], [Between min and max], [Above maximum].

Points above the maximum are "sure edges", and points below the minimum are discarded. For points that fall between the minimum and maximum thresholds, if they're attached to a sure edge they get counted; if they're not, they get thrown away.
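In practice, all four steps are wrapped into a single OpenCV call. A minimal C++ usage sketch (the threshold values are arbitrary illustrations):

```
// Minimal OpenCV usage sketch: the whole Canny pipeline in a few calls.
#include <opencv2/opencv.hpp>

int main() {
  cv::Mat src = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);
  if (src.empty()) return 1;                         // no image found
  cv::Mat edges;
  cv::GaussianBlur(src, edges, cv::Size(5, 5), 1.4); // step 1: denoise
  cv::Canny(edges, edges, 50, 150);  // steps 2-4: gradient, suppression,
                                     // hysteresis thresholding
  cv::imwrite("edges.png", edges);
  return 0;
}
```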


Our Prototype 

Build
When first assessing the project's needs, it became apparent that materials and scale would be an important part of making our prototype work as accurately as possible. A heavy wooden frame was used to house nine 180-degree servo motors behind nine 3.5"×3.5" light framed panels made of popsicle sticks and tracing paper.


A foam core divider was placed inside the frame to prevent light from interfering with the other photoresistors within the box, and to serve as a base for each of our nine servo motors and photoresistor sensors. Each servo motor was fitted with a chassis, made of popsicle sticks, to ensure a steady forward and backward motion when pushing the 3.5"×3.5" panels. Each chassis was accompanied by a wire connecting the servo motor to a "floating" arm that pushed the panels back and forth.


We considered building a 90-degree angled arm to push the panels a greater distance, but using only a wire arm our team was able to move each panel 0.75" back and forth, enough to achieve the desired effect.


Our build lets users interact with the prototype by shining an LED light on each 3.5"×3.5" panel, triggering each servo in whatever sequence the user chooses and creating an interactive pixelated effect that mirrors the user's actions. The inverse, using shadows, can also be achieved by reversing our input method in code.

Video 1:

Video 2:

Code
Available at https://github.com/Minsheng/woodenmirror
A single Arduino controls a row of three photoresistors and three micro servo motors. The half-rotation motor positions are set to 0 initially. They are then set to values (60, 120, 180) relative to the photoresistor readings (i.e. the higher the photoresistor value, the greater the motor position). For simplicity, we set a motor to 180 degrees when its photoresistor exceeds a certain threshold; otherwise the motor is set back to 0 degrees, pulling each grid cell inwards or pushing it outwards.
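The mapping described above can be pictured with a simplified, single-channel sketch (our illustration; see the repo for the real three-channel code):

```
// Simplified single-channel version of the logic described above:
// one photoresistor snaps one servo between 0 and 180 degrees.
#include <Servo.h>

const int SENSOR_PIN = A0;
const int THRESHOLD = 500;   // assumed light threshold
Servo panel;

void setup() {
  panel.attach(9);           // assumed pin
  panel.write(0);            // panels start pulled inwards
}

void loop() {
  if (analogRead(SENSOR_PIN) > THRESHOLD)
    panel.write(180);        // bright light: push the panel outwards
  else
    panel.write(0);          // dark: pull it back in
  delay(50);
}
```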

In terms of speed control, we tried to incorporate deceleration into the backwards movement. The current mechanism lets the motor move at a decreasing speed until it covers 80% of the difference between the last position and the target position, then at a much lower speed for the rest of the difference, using a loop and a timer for each servo motor. A more effective implementation might use VarSpeedServo.h (github.com/netlabtoolkit/VarSpeedServo), which allows asynchronous movement of up to 8 servo motors, with advanced speed controls such as defining a sequence of (position, speed) value pairs.
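For comparison, here is how the same eased backwards movement might look using VarSpeedServo's write(angle, speed, wait) call (pin and speed values are illustrative):

```
// Hedged sketch of the eased movement using the VarSpeedServo library.
#include <VarSpeedServo.h>

VarSpeedServo panel;

void setup() {
  panel.attach(9);              // assumed pin
}

void loop() {
  panel.write(180, 60, true);   // push out at a moderate speed, wait
  // ease back in: faster for the first ~80% of the travel, then slow
  panel.write(36, 40, true);    // 180 -> 36 covers 80% of the difference
  panel.write(0, 10, true);     // crawl through the final 20%
  delay(500);
}
```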

Fritzing Diagram:


Research Project 2 – TENDER NOISE – James/Nimrah/Fusun

TENDER NOISE


Over the past few decades, mechatronic machines and robotic techniques have been used extensively to create new musical instruments, performances, and sound installations. We can divide these works into two main categories:
1. Mechatronic machines used to create instruments inspired by existing musical instruments, with modified and augmented capabilities.
2. Works that incorporate mechatronic techniques in conjunction with non-musical objects, whose sonic output is "noise". Their objective is to explore the potential aesthetic value of a normally mundane sonic palette.


Our project researched the sound sculptures of Mo H Zareei, who combines mechatronic techniques with non-musical objects.

THE BRUTALIST NOISE ENSEMBLE
https://www.youtube.com/watch?v=mNPt8ov1bSA

ARTIST – Mo H Zareei
Mo H Zareei is a PhD candidate at Victoria University of Wellington. His research considers cross-disciplinary methodologies in the creation of audiovisual installations and kinetic sculptures, data sonification/visualization, and instrument and interface design. His work focuses on hybridizing polarities such as noise and structure, digital and physical, visceral and intellectual. You can see all his works on his website:
http://m-h-z.net/

ZAREEI’S INSPIRATION – BACKGROUND OF NOISE MOVEMENT
Zareei indicates that he has been highly influenced by noise music.
Luigi Russolo was perhaps the first noise artist. His 1913 manifesto, L'Arte dei Rumori (The Art of Noises), stated that the industrial revolution had given modern men a greater capacity to appreciate more complex sounds. Russolo found traditional melodic music confining, and he envisioned noise music as its future replacement. The manifesto was written as a letter to his friend, the futurist composer Francesco Balilla Pratella.

Russolo states that "noise" first came into existence as a result of 19th-century machines. Before then, the world was a quiet, if not silent, place. The earliest "music" was very simplistic, created with very simple instruments, and many early civilizations considered the secrets of music sacred, reserving it for rites and rituals. He refers to the chord as the "complete sound," the conception of various parts that make up and are subordinate to the whole. He notes that while early music tried to create sweet and pure sounds, it progressively grew more complex, with musicians seeking to create new and more dissonant chords.

Russolo claims that music has reached a point that no longer has the power to excite or inspire. He urges musicians to explore the city with “ears more sensitive than eyes,” listening to the wide array of noises that are often taken for granted, yet (potentially) musical in nature.

“The variety of noises is infinite. If today, when we have perhaps a thousand different machines, we can distinguish a thousand different noises, tomorrow, as new machines multiply, we will be able to distinguish ten, twenty, or thirty thousand different noises, not merely in a simply imitative way, but to combine them according to our imagination.”

INTONARUMORI

The original instruments were destroyed during the war.

[Image: Russolo's intonarumori, 1919]

Russolo designed and constructed a number of noise-generating devices called intonarumori, and assembled a noise orchestra to perform with them. There were 27 varieties of intonarumori in total, with different names. The instruments were completely acoustic, not electronic. The boxes had various types of internal construction to create different types of noise; often a wheel rattled or bowed a string attached to a drum, with the drum functioning as an acoustic resonator.

Re-Construction


As part of its celebration of the 100th anniversary of Italian Futurism, the Performa 09 biennial, in collaboration with the Experimental Media and Performing Arts Center (EMPAC) and the San Francisco Museum of Modern Art, invited Luciano Chessa (author of the book Luigi Russolo, Futurist: Noise, Visual Arts, and the Occult) to direct a reconstruction project producing accurate replicas of Russolo's legendary intonarumori. The project recreated the first set of 16 intonarumori (8 noise families of 1-3 instruments each, in various registers) that Russolo built in Milan in 1913.
https://www.youtube.com/watch?v=BYPXAo1cOA4

CONTEMPORARY ARTISTS
Pierre Schaeffer // Pierre Henry // Art of Noise // Adam Ant // Einstürzende Neubauten // Test Dept // DJ Spooky // Dywane Thomas, Jr. // The Sufis // Francisco López // Panayiotis Kokoras // Intonarumori // R. Henry Nigl // Material // Jean-Luc Hervé Berthelot // Spiral-Shaped Mind // Marinos Koutsomichalis // Luciano Chessa // The New Blockaders // Radium Audio // Zimoun // Martin Messier & Nicolas Bernier

PIERRE HENRY’s work: https://www.youtube.com/watch?v=AOqfWj0HqNE
Zimoun’s Work: https://player.vimeo.com/video/7235817?color=ffffff&title=0&byline=0&portrait=0&badge=0
Martin Messier & Nicolas Bernier’s Work: https://player.vimeo.com/video/95706212?color=ffffff&title=0&byline=0&portrait=0

AN ARTISTIC SONIC EXPRESSION COMPRISING THREE MECHATRONIC SOUND SCULPTURES: RASPER // MUTOR // RIPPLER

Rasper is a link between the physically produced noise of a mechatronic sound object and the pulse-based digital noise of laptop-produced glitch music.
It consists of: DC motor // Solenoid // Plastic disc // Spring steel // LED strip
https://www.youtube.com/watch?v=J_uVXuH_iSw


CRACKLER VS RASPER
The crackler is an instrument Russolo made back in the early 20th century, producing two distinct sounds: a metallic crackling noise in the high-pitched instruments, and a strident metallic clashing in the low ones. Rasper can be considered its contemporary version.

Constant Principle:
– Controlling the speed of vibrations
– Controlling the tension

Differences:
Crank – DC motor
String – Spring
Lever – Solenoid
Manual – Automated


MUTOR: DRONE CHORUS OF METRICALLY MUTED MOTORS
https://www.youtube.com/watch?v=cSCwSGFjIHE
A mechatronic instrument that produces sonic output through the buzzing of DC motors and the actuation noises of solenoids. It consists of: DC motor // Solenoid // LED // Spring steel


RIPPLER: A MECHATRONIC SOUND SCULPTURE
https://www.youtube.com/watch?v=jxHJLCzwlak
In Rippler, the actuation noise of a solenoid is amplified and transduced through a thin sheet of metal. It consists of: Push solenoid // Thin sheet of steel // LED


OUR APPROACH – TENDER NOISE

'Tender Noise' is an interactive sound installation that communicates through the noise of our heartbeats, re-interpreting us as mundane objects. It involves two participants and provokes a discovery of the other by exposing our hearts to each other through sound. The shared experience generates different emotions between participants based on their closeness. The installation can also offer a playful environment, letting participants control the sound by manipulating their heartbeat (by running, breathing, and so on).


PROCESS

WIRING


Basic Workflow
Users drive Tender Noise with their heartbeats. They connect to the system via a pulse sensor; when a heartbeat is detected, it triggers the actuator, creating a thumping sound, and sends a light pulse flowing across the LED strip.

Build Components
Tender Noise consists of 3 major components:
1. Electromagnetic Linear Actuator
2. LED light strip
3. Heartrate monitor (pulse sensor)

Electromagnetic Linear Actuator
– A linear dual-shaft mechanism
– The inner shaft houses a magnetic coil
– The inner shaft rests over a stack of neodymium magnets
– When a current is applied to the magnetic coil, it creates a magnetic field
– This magnetic field runs in opposition to the field of the neodymium magnets, so the two push against one another while current flows, pushing the inner shaft upwards
– When the current stops flowing into the coil, its magnetic field is lost, causing the inner shaft to drop and create a thumping sound when it makes contact with the magnets below
– Note: a separate 3V supply, resting on its side, powers the actuator. This current flows into the Arduino board, but can only reach the magnetic coil when the Arduino opens a gate (the gate is opened when a heartbeat is detected; see the sketch below).
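A hedged sketch of that gating logic (pins, threshold and timing are assumptions, not the project's exact code):

```
// When the pulse sensor's analog signal crosses a threshold, a
// transistor "gate" on pin 7 briefly lets current into the actuator
// coil; cutting the current drops the shaft and makes the thump.
const int PULSE_PIN = A0;    // pulse sensor signal
const int GATE_PIN = 7;      // transistor driving the coil
const int BEAT_THRESHOLD = 550;
bool beatInProgress = false;

void setup() {
  pinMode(GATE_PIN, OUTPUT);
}

void loop() {
  int signal = analogRead(PULSE_PIN);
  if (signal > BEAT_THRESHOLD && !beatInProgress) {
    beatInProgress = true;         // rising edge: one beat detected
    digitalWrite(GATE_PIN, HIGH);
    delay(60);                     // energize the coil: shaft lifts
    digitalWrite(GATE_PIN, LOW);   // cut current: shaft drops, "thump"
  } else if (signal < BEAT_THRESHOLD) {
    beatInProgress = false;        // wait for the next rising edge
  }
}
```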

[Diagram: actuator structure]

LED Light Strip
–  A chained series of addressable LED lights
–  There are 30 lights on the string
–  Single heartbeats are pushed across the strip as distinct "pulses"
–  Each pulse is a series of 7 blue or red lights
–  A gradient is applied to the pulse by coloring individual lights in a pulse darker/lighter
–  The wire is designed to accommodate up to 4 pulses at the same time

 Basic workflow:
– Pulses remain idle, with the first position in a pulse residing at -1, until the pulse is triggered
–  After being triggered, the positions of each of the 7 lights in a pulse are incremented by 1 every 20 ms
–  They continue to move until the last light in the pulse occupies position 30 (1 more than the last LED light position on the strip).
– They are then reset and the pulse is put in an idle state

[Diagram: LED pulse model]
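The pulse logic described above might look something like this in code. We assume a WS2812-style addressable strip driven by the Adafruit_NeoPixel library; the real build's strip and driver may differ:

```
// Up to 4 pulses of 7 LEDs each slide along 30 LEDs, advancing one
// position every 20 ms, with a dimming gradient toward the tail.
#include <Adafruit_NeoPixel.h>

const int NUM_LEDS = 30, PULSE_LEN = 7, MAX_PULSES = 4;
Adafruit_NeoPixel strip(NUM_LEDS, 6, NEO_GRB + NEO_KHZ800);  // data pin 6

int pulseHead[MAX_PULSES] = {-1, -1, -1, -1};   // -1 = idle

void triggerPulse() {              // called when a heartbeat arrives
  for (int i = 0; i < MAX_PULSES; i++)
    if (pulseHead[i] < 0) { pulseHead[i] = 0; return; }
}

void setup() { strip.begin(); }

void loop() {
  strip.clear();
  for (int i = 0; i < MAX_PULSES; i++) {
    if (pulseHead[i] < 0) continue;
    // draw 7 LEDs behind the head, dimming toward the tail (gradient)
    for (int j = 0; j < PULSE_LEN; j++) {
      int pos = pulseHead[i] - j;
      if (pos >= 0 && pos < NUM_LEDS)
        strip.setPixelColor(pos, strip.Color(0, 0, 255 - j * 30));
    }
    // reset to idle once the tail has left the strip
    if (++pulseHead[i] > NUM_LEDS + PULSE_LEN - 2) pulseHead[i] = -1;
  }
  strip.show();
  delay(20);                       // one position every 20 ms
  if (random(50) == 0) triggerPulse();  // demo trigger; real input is a beat
}
```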

Heartrate Monitor
–  A single padded, standalone sensor. (Some pulse sensors have multiple pads that need to be attached to multiple places on the body). This sensor only needs to be attached to one place on the body.
–  Can be attached to the finger, wrist, or ear
–  Triggers individual thumps of the linear actuator and corresponding light pulses on the LED

CODING
The project's full code is here: https://github.com/jamesessex/TenderNoise

THANK YOU

Research Project: Volume by United Visual Artists

United Visual Artists (UVA) was founded in 2003 in London, UK by Matthew Clark, Chris Bird, and Ash Nehru. Originally, the three came together to create lighting and visuals for a concert by the London-based band Massive Attack. Since then, they have showcased their work through many exhibitions and galleries, and have been featured in multiple publications, winning award after award for their creative approach to combining architecture, live performance, installations, sculpture, and technology.

UVA has worked in Russia, the UK, Australia, Hong Kong, Paris, and other cities across the globe. Apart from working with lasers, radars, and scanners, UVA also lectures internationally. They tour different universities, and even reached Toronto in September 2011 to speak to students at both Ryerson University and OCAD University.

Volume

As described on UVA’s site:

“UVA’s large-scale installation Volume first appeared in the garden of London’s V&A museum in 2006 and has since traveled as far as Hong Kong, Taiwan, St. Petersburg and Melbourne.

It consists of a field of 48 luminous, sound-emitting columns that respond to movement. Visitors weave a path through the sculpture, creating their own unique journey in light and music.

The result of a collaboration with Massive Attack, Volume won the D&AD Yellow Pencil in 2007 for Outstanding Achievement in the Digital Installation category.”

 

Inspiration:

The inspiration for Volume was twofold: the PlayStation brief, which was to engage people in an emotional experience for the launch of the PS3 in the UK, and Monolith, an installation UVA displayed at the John Madejski Garden on Onedotzero's transvision night. Monolith emits soothing colours and calming sounds when no one is near. As people approach, the colours and sounds become louder and harsher, forcing people to step back to find their comfort zone. Monolith wasn't entirely successful from an interaction point of view: it drew more people than anticipated, so it spent too much time in 'overload' mode. But it did 'work' in that it created a powerful aura and transformed the space.


3 interaction layers


Model Overview


Technical Overview

To create Volume, UVA would have used their proprietary software and hardware platform, D3. D3 allows artists to control many different pieces of hardware and tooling for installations, performances and other visuals. In the case of Volume, D3 controls 48 LED towers, infrared cameras, and 48 individual speakers, one located in each LED tower.

The D3 software offers Real Time Simulation, Projector Simulation, Sequencing, Content Mapping, Playback, Configuring Output, d3Net, Show Integration, and Backup & Support. D3 also offers purpose-built hardware to run the software efficiently.


As mentioned above, D3 does real-time simulations of the artwork. Here's a screenshot of one of the available simulations for Volume: there is a timeline for the different events and interactions, as well as a digital representation of each of the 48 LED towers.

[Screenshot: D3 simulation of Volume]

For motion tracking, we suspect they used some sort of IR aerial grid system, similar to the illustration below from the Panasonic website. This method would allow participants' locations to be tracked and monitored relatively simply, while keeping down cost by minimizing the number of IR cameras required in the installation.

[Illustration: Panasonic Grid-EYE infrared array sensor]

The audio for Volume was written by Massive Attack, since UVA had an existing relationship with the band. The sound artist Simon Hendry also worked on additional effects for multiple iterations of the installation. The D3 software and hardware connect to the installation's audio through MIDI (Musical Instrument Digital Interface) controls feeding the music production software Logic Pro.

Prototype

To prototype the Volume installation we went through multiple iterations. In the first iteration we tried an 8×8 RGB LED matrix with a MAX7219 LED driver chip.


We got it up and running but were unable to control individual LEDs or display unique colours. So we switched to the 74HC595 chip, bit-shifting the control output from the Arduino so that three pins can control 8 LEDs per chip. Chips can be daisy-chained, with each additional chip controlling another 8 LEDs.


There is a lot of wiring, but it is a good way to communicate with multiple LEDs while keeping some pins on the Arduino open for other sensors.
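A minimal sketch of the 74HC595 approach (pin numbers are assumptions): three Arduino pins shift one byte per chip out to the daisy-chain, and a latch pulse updates all LEDs at once.

```
// Three pins drive two daisy-chained 74HC595s: one byte per chip,
// each bit switching one LED.
const int DATA_PIN = 11, CLOCK_PIN = 12, LATCH_PIN = 8;

void setup() {
  pinMode(DATA_PIN, OUTPUT);
  pinMode(CLOCK_PIN, OUTPUT);
  pinMode(LATCH_PIN, OUTPUT);
}

void writeLeds(byte first, byte second) {
  digitalWrite(LATCH_PIN, LOW);
  // bytes travel through the chain, so the last chip's byte goes first
  shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, second);
  shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, first);
  digitalWrite(LATCH_PIN, HIGH);   // latch: all 16 LEDs update at once
}

void loop() {
  writeLeds(0b10101010, 0b01010101);  // alternate LEDs across two chips
  delay(250);
  writeLeds(0b01010101, 0b10101010);
  delay(250);
}
```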

For user motion and location detection we are using a sonar sensor; for the audio we are using an Arduino library called Mozzi, with a small speaker connected directly to the Arduino.
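Combining the two might look roughly like the following hedged sketch (we assume an HC-SR04-style sonar and Mozzi's standard sine-wave example structure; this is an illustration, not our exact prototype code):

```
// Sonar distance modulates the pitch of a Mozzi sine oscillator.
#include <MozziGuts.h>
#include <Oscil.h>
#include <tables/sin2048_int8.h>

const int TRIG_PIN = 4, ECHO_PIN = 5;   // assumed sonar pins
Oscil<SIN2048_NUM_CELLS, AUDIO_RATE> aSin(SIN2048_DATA);

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  startMozzi();                 // audio out on Mozzi's default PWM pin
}

void updateControl() {
  // ping the sonar (pulseIn blocks briefly; fine for a rough demo,
  // though it can cause audible glitches in Mozzi's output)
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long echo = pulseIn(ECHO_PIN, HIGH, 20000);  // microseconds, 20 ms cap
  int cm = echo / 58;                           // rough cm conversion
  aSin.setFreq(220 + cm * 10);  // closer visitor = lower pitch
}

int updateAudio() {
  return aSin.next();           // -128..127 sample
}

void loop() { audioHook(); }
```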

Schematic


Prototype Video

 Final Video

Github Source Files

Github source for the master Arduino that runs the LEDs and sensor.

Github source for the Arduino that runs the Mozzi instance.