All posts by Egill Viðarsson

WOOD-e: The Curious Sound-Searching Cube

WOOD-e at exhibition December 10th at OCAD grad building. Pic by Hammadullah (aka Hamster) Syed.

Meet WOOD-e.

Being of the inquisitive kind, WOOD-e enjoys listening for the loudest shout, whistle, bump or crash in the vicinity. As he would no doubt attest – if he could – he’s not exactly hard of hearing. Sporting two quite sensitive electret microphones, this curious cube does his best to pick out the most dominant sound source with his attentive, blinking gaze.

Conception of WOOD-e

Coming out of a group research project on The Senster, the next step was to think of ideas for our class’s follow-up Tamagotchi project.

I went through an extensive period of brainstorming where I looked at numerous things. One of the initial thoughts I had was to explore different physical human interactions based on their basic senses and try to figure out which ones were more intuitive than others.

When doing our prototype based on the Senster, named SPOT, I noticed that our classmates intuitively went for hand gestures and shouts when trying to catch SPOT’s attention. Delving deeper into those and other interactions I tried to visualize a sense of measure and scale for each interaction – sketching out forms appropriate (or not) to those interactions.

Interaction exploration sketches

Interaction exploration sketches

Shape exploration sketches

From the beginning I knew I wanted to move away from a face tracking robot like our SPOT and try to explore more ways of interacting with humans. In retrospect, though, as I’m writing this, I realize I might have been influenced by the Senster more than I initially thought.

The course I eventually decided to take was to make a robotic head: a cube with two elliptical cuts in front and a strong light source emitting beams from within. Via sound localization, with electret microphones on either side, the head would turn on its neck to locate the most dominant sound source at any given time – effectively shining light on the matter.

WOOD-e light beam sketches

To add to the head’s believability of being alive, whenever the measured sound stayed below a set threshold for a set amount of time, a pair of eyelids would slowly shut out the light beams from within the head. It would seem as though the head had fallen asleep.

Furthermore, in an attempt to add more emotion to the head, I drew inspiration from The Iron Giant. The film’s character designers quite cleverly imbued the iron robot with different emotional states through different positions of its eyelids.

WOOD-e sketches eyelid animation

Expression map from eyelid positions

Sharp-eyed readers may have noticed the absence of light beams and sleeping behaviour in WOOD-e’s video above. Don’t worry, the reasons for their absence will be explained later on.

Prototyping WOOD-e: Version 1 and 1.5

To keep the ball rolling I rummaged for a cardboard box of a similar size to what I had imagined. After cutting two holes in the front of the box to represent eyes, I mounted the box on a standing lamp at home. On the inside of the head I taped cardboard eyelids in different states to see what kind of emotion it could portray.

WOOD-e v1 standing lamp prototyping

From that session I started to think more about the presentation of the head; should it stand on the ground, and if so, how high should its eyeline be compared to an average-height human? Maybe it would be more interesting to have the head hang from the ceiling?

WOOD-e, should he hang or stand?

Meanwhile, I also did sound tests with two microphones and LEDs, and then another where my bicycle light was hooked up to the main servo located in between the microphones. Whichever side was noisier, the servo would turn toward it, using the extreme positions of 5° and 175°.
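
Something along these lines is what that test boiled down to. The sketch below is only a minimal reconstruction: it assumes the two microphones sit on A0 and A1 and the servo signal on pin 9, and which extreme maps to which side depends entirely on the wiring.

#include <Servo.h>

Servo headServo;
const int micLeftPin = A0;   // left microphone (assumed wiring)
const int micRightPin = A1;  // right microphone (assumed wiring)

void setup() {
  headServo.attach(9);       // head-turning servo signal pin (placeholder)
  headServo.write(90);       // start facing forward
}

void loop() {
  int left = analogRead(micLeftPin);
  int right = analogRead(micRightPin);

  // Turn toward whichever side is louder, using the 5° and 175° extremes.
  if (left > right) {
    headServo.write(175);
  } else if (right > left) {
    headServo.write(5);
  }
  delay(100); // crude pause so the servo isn't flooded with new positions
}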

I wanted to see what the same function would look like with the initial cardboard head. I also wanted to get a feel of how the circuitry and eyelid mechanisms could be fitted within the head.

Initial layout of electric components

These really helpful hands-on tests raised several issues: weight distribution, a more accurate way of localizing sound, better structural materials and assembly options – not to mention light and power sources.

Exploring assembly with “fingers” technique

Fabrication of WOOD-e

Having decided, from the work done on prototype 1.5, on 6 mm thick Baltic birch plywood as the material for the head, I prepared a file for laser cutting at the OCAD Rapid Prototyping Lab. The following day I picked up the cut pieces.

WOOD-e v2 fresh from laser cutting

In a nice feedback session, Demi mentioned some pragmatic difficulties with hanging things from the ceilings in OCAD’s space. Furthermore, he stressed that the head-turning servo should not bear the head’s weight; it should only turn the head. Following that discussion I decided to have the head stand instead of hang from the ceiling, and to look into my weight issues.

Instead of having tiny wooden supports around the underside of the base, the ever-helpful Reza at OCAD’s Maker Lab introduced me to a turntable bearing known as a Lazy Susan. The component is made up of two flat metal rings with numerous ball bearings riding in a circular race between them. A turning plate like this would hold the weight of the head while the head-turning servo would solely turn it.

With Reza’s help – and later that of the faculty in room 170 at the main OCAD building – I assembled my laser-cut plywood pieces together with my chosen electronic components.

Inside layout of electric components
Sliding channels for WOOD-e’s eyelids
WOOD-e being wood glued together

WOOD-e Tests

On December 5th I got WOOD-e to track a dominant sound source. The head-turning servo was powered by an external 9V 1A adapter, stepped down through a 5V regulator before being hooked up to the servo. The Arduino and the head’s microphones were running off my Mac’s USB port.

Next I wanted to fit the top and bottom eyelids. Starting with the top one, I needed to calibrate the rotation angles for when the eyes were open and half-closed. I also wanted the servo movements to run at varying speeds, so that when the head gets sleepy the movements are slower.
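
The trick for slower, sleepier movement is simply to step the servo one degree at a time with a small delay in between, and to make the delay longer the sleepier WOOD-e is. A minimal sketch of that idea, with a placeholder pin and angles standing in for the calibrated values:

#include <Servo.h>

Servo topEyelid;
const int openAngle = 20;        // calibrated open angle (placeholder value)
const int halfClosedAngle = 70;  // calibrated half-closed angle (placeholder value)

void setup() {
  topEyelid.attach(10);          // top eyelid servo pin (placeholder)
  topEyelid.write(openAngle);
}

// Move to a target angle one degree at a time; a larger stepDelayMs means a slower, sleepier movement.
void moveEyelid(int targetAngle, int stepDelayMs) {
  int current = topEyelid.read();
  int step = (targetAngle > current) ? 1 : -1;
  while (current != targetAngle) {
    current += step;
    topEyelid.write(current);
    delay(stepDelayMs);
  }
}

void loop() {
  moveEyelid(halfClosedAngle, 30); // sleepy: slow close
  delay(1000);
  moveEyelid(openAngle, 5);        // alert: quick open
  delay(1000);
}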

With that pretty much settled, next up was doing the same thing with the bottom eyelid. This proved more troublesome for several reasons. The bottom eyelid servo’s extended arm did a sort of woodpecker drumming, and if it persisted the arm would fall off the servo. The same thing would sometimes happen with the top eyelid’s servo. Aesthetically, because the bottom eyelid’s servo was on the same side as the top eyelid’s, its arm was visible through one of WOOD-e’s eye holes.

Eyelid servo arm visible through one of WOOD-e’s eyeholes

In an attempt to solve my woodpecker servo issues, the following day I purchased a 5V 1A 5200mAh battery pack, hoping to hook it up to the Arduino’s USB port and power the whole thing that way. Although I seemed to have succeeded in creating a pre-scripted timer controlling the speed of the eyelids after a certain period of time, I kept running into power issues when trying to have the head-turning servo and eyelid servos working at the same time.

Following a final feedback session with Demi and Nick, I got a 5V 5A adapter for WOOD-e’s servos and a 9V 1A adapter for its sensors, regulated through the Arduino.

Nick also mentioned that since my eyelids were made from cardboard and were therefore relatively light, it could be wise to use the Arduino Servo library’s attach() and detach() functions to avoid the woodpecker drumming. The likely reason for the servos’ glitchy behaviour – perhaps they were being offset by tiny currents in the circuitry – was that they were continually trying to reach a set position, missing the mark by a small measure, and then trying to correct themselves.
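
A minimal sketch of that attach()/detach() idea: power the servo only while it needs to move, give the light cardboard eyelid time to arrive, then detach so the servo stops hunting for a position it keeps missing. Pin and angles are placeholders.

#include <Servo.h>

Servo topEyelid;
const int eyelidPin = 10;    // placeholder pin
const int openAngle = 20;    // placeholder angles
const int closedAngle = 70;

// Attach, move, wait for the arm to arrive, then detach so no holding torque is applied.
void moveAndRelease(int angle) {
  topEyelid.attach(eyelidPin);
  topEyelid.write(angle);
  delay(400);               // enough time for the lightweight cardboard eyelid to get there
  topEyelid.detach();       // a detached servo can't drum
}

void setup() {
}

void loop() {
  moveAndRelease(closedAngle);
  delay(2000);
  moveAndRelease(openAngle);
  delay(2000);
}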

Nearing the finish line

With our exhibition just around the corner, speed in the final steps was vital.

Replicating the top eyelid’s servo fabrication for the bottom eyelid, I moved it to the floor side of WOOD-e’s interior so the arm wouldn’t be seen through the eyes.

Just after I got the eyelids working together, the main head-turning servo became non-responsive. After a ton of troubleshooting and debugging attempts, I re-did the circuitry from the ground up, which, thankfully, had everything running again. Why that worked, I’m unsure of to this day.

Remembering I wanted WOOD-e to have a pair of light beams coming out of his eyes, I bought a pair of super-bright LEDs that proved too weak to illuminate from the inside out. I thought I could buy more LEDs and connect them together to *maybe* get a strong enough light source, but unless WOOD-e were set in a dim room the light would likely still not be sufficient, and the time and effort I had put into making WOOD-e look nice and impressive would go to waste since he would be hard to see. I made the hard decision to cut the light beam feature.

The final, and one of the most time-consuming, issues I had was with timers in the Arduino code. I thought I had managed to build timers earlier on for when WOOD-e should start to feel sleepy (if the sound levels stayed low for a set amount of time) and anxious (if the sound rose above the set average), but I started to realize those timers were not really dynamic at all. They were too pre-determined and scripted, since they weren’t resetting when the opposite condition proved true.

For almost three whole days I worked on trying to figure out timers: how to set them, how to reset them, and how to use them to let the eyelid servos finish their movement before being detached. I somehow kept running into logical errors where my resets never took effect since I was, I assume, subtracting the current time from itself but never actually resetting the reference point. I apologize if this writing seems confusing; the reason is I still haven’t figured out where I went wrong.
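
In hindsight, the pattern I was after looks something like the sketch below: remember when the quiet period started, and reset that timestamp whenever the opposite (loud) condition occurs. This is a reconstruction of the intent rather than the code I actually had; the threshold and delay values are placeholders.

const int micPin = A0;
const int quietThreshold = 200;           // placeholder sound level
const unsigned long sleepDelayMs = 10000; // fall asleep after 10 s of quiet (placeholder)

unsigned long quietSince = 0;
bool asleep = false;

void setup() {
  Serial.begin(9600);
  quietSince = millis();
}

void loop() {
  int level = analogRead(micPin);

  if (level > quietThreshold) {
    quietSince = millis();   // loud again: reset the timer by moving the reference point
    if (asleep) {
      asleep = false;
      Serial.println("waking up");
    }
  } else if (!asleep && millis() - quietSince > sleepDelayMs) {
    asleep = true;           // quiet for long enough: fall asleep
    Serial.println("falling asleep");
  }
}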

With the exhibition starting the following day, I decided I didn’t want to let all the hard work of making the eyelids go entirely to waste. To still imbue WOOD-e with a sense of life and purpose, I wrote him a blinking function with random open-eyes intervals and a quick 250 ms closed-eyes interval.
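
The blinking boils down to something like this sketch. The eyelid movement itself is stubbed out, and the open-eyes interval range is a placeholder; only the 250 ms closed interval is the value mentioned above.

unsigned long nextBlinkAt = 0;

void closeEyelids() { /* drive the eyelid servos closed (see the earlier sketches) */ }
void openEyelids()  { /* drive the eyelid servos open */ }

void setup() {
  randomSeed(analogRead(A5));                    // unconnected pin gives a rough random seed
  nextBlinkAt = millis() + random(2000, 8000);   // first random open-eyes interval (placeholder range)
}

void loop() {
  if (millis() > nextBlinkAt) {
    closeEyelids();
    delay(250);                                  // quick 250 ms closed-eyes interval
    openEyelids();
    nextBlinkAt = millis() + random(2000, 8000); // schedule the next blink
  }
  // ...sound localization keeps running here...
}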

In order for WOOD-e to stay relatively in tune with changing sound levels, Demi and Marcelo Luft suggested writing an auto-calibration function that runs whenever WOOD-e is restarted. This seemed like a clever thing to do, so I wrote it so that when WOOD-e was turned on, for the first 3 seconds he would shut his eyes, sum the measured sound on his left and right, and finally take the average over that timeframe. That way, if he had been switched on for a while in a relatively empty room and the room suddenly filled with people, I could pull his plug for a quick second and plug him back in, making him listen to his new surroundings.
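
The calibration routine amounts to something like the following sketch. The pins are placeholders and the eyelid calls are left out, but the 3-second listening window and the left/right averaging follow what I described above.

const int micLeftPin = A0;   // placeholder pins
const int micRightPin = A1;
float ambientLevel = 0;

// Listen for the first 3 seconds after power-up and average both microphones.
void calibrateAmbient() {
  unsigned long start = millis();
  unsigned long sum = 0;
  unsigned long samples = 0;
  while (millis() - start < 3000) {
    sum += analogRead(micLeftPin);
    sum += analogRead(micRightPin);
    samples += 2;
  }
  ambientLevel = (float)sum / samples; // baseline for the new surroundings
}

void setup() {
  Serial.begin(9600);
  calibrateAmbient();
  Serial.print("ambient baseline: ");
  Serial.println(ambientLevel);
}

void loop() {
  // compare fresh readings against ambientLevel to decide when WOOD-e gets sleepy or anxious
}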

Fine-tuning the calibration and testing the blinking with the sound localization in the space the day before and on the day of the exhibition proved to be really helpful in getting the feeling that I wanted.

Exhibition day

WOOD-e in the wild

At the exhibition people tried to grab WOOD-e’s attention by waving at him, poking him in the eyes, touching him or, least often, speaking to him.

WOOD-e in the wild

I noticed WOOD-e was perhaps getting a bit jealous of the attention the pair of Fusun’s wonderful *Under the Table* legs were getting.

WOOD-e jealous of Fusun’s legs

Twitching left and right trying to pick out the loudest noise at any given time, perhaps WOOD-e himself didn’t give a clear enough impression of what guests should or shouldn’t do with him. Maybe it was because he was clean and proper in a clean and proper gallery space, whereas Fusun’s enticing wire-netted legs contrasted beautifully with the space, drawing people in with ease.

I tend to be my own worst critic most of the time, and despite things not going quite as planned, I really am content with how WOOD-e turned out and how he came to be.

The public can prove to be one of the most difficult demographics to do projects for, and one can rarely plan the appropriate steps all the way through. Assumptions – in conception, function, fabrication and coding – are pitfalls that take a strong sense of perseverance and a willingness to keep testing to avoid throughout the process.

WOOD-e and me

The Senster by Edward Ihnatowicz (Research Project Group 6)

A groundbreaking cybernetic sculptor, Edward Ihnatowicz explored the interaction between his robotic works and their audience. One of the first computer-controlled interactive robotic works of art, The Senster, a 15-foot-long cybernetic sculpture commissioned by Philips in the early 70s, is widely considered one of Ihnatowicz’s greatest achievements. The sculpture sensed the behaviour of visitors through their sounds and movements. It then reacted by drawing closer to what it saw as interesting and friendly gestures and sounds while “shying” away from less friendly, louder ones.

Cybernetic Art

It might be said of robotics in art that they have become a normal notion today. Of course, that hasn’t always been the case. Edward Ihnatowicz, a scientific artist in his own right, experimented with robotics and cybernetics around the middle of the 20th century, experiments that are thought to have led to some of the most groundbreaking cybernetic art – art that, perhaps, we now take for granted.

One of his grander creations was the Senster, built over a period of more than two years. The life-like “being” was one of the first to catch the public’s attention – a fact which can be credited to Philips, who commissioned the venue at which the Senster was shown from 1970 to 1974. As Ihnatowicz himself notes in his cybernetic art – A personal statement, “[The Senster] was the first sculpture to be controlled by a computer” (Ihnatowicz, n.d.).


Figure 1. “Robots in Art”. Retrieved November 14, 2015, from http://senster.com/robots_in_art/

Edward Ihnatowicz (1926–1988)

Born in Poland in 1926, Ihnatowicz became a war refugee at the age of 13, seeking refuge in Romania and Algiers. Four years later he moved to Britain – a country that served as his home for the remainder of his life.

Ihnatowicz attended the Ruskin School of Art, Oxford, from 1945 to 1949, where he “studied painting, drawing and sculpture […] and dabbled in electronics. […] But then he threw away all of his electronics to concentrate on the finer of the fine arts.” Ihnatowicz would later call the move the “[s]tupidest thing I’ve ever done. […] I had to start again from scratch 10 years later.” (Reffin-Smith, 1985).

For over a decade he created bespoke furniture and interior decoration, until 1962 when he left his home in hopes of finding his artistic roots. The next six years he lived in an unconverted garage, experimenting with life sculpture, portraiture and sculpture made from scrap cars. It was during this time Ihnatowicz would find “technological innovation opening a completely new way of investigating our view of reality in the control of physical motion” (Ihnatowicz, n.d.). Brian Reffin-Smith restated this in 1985 writing that Ihnatowicz “is interested in the behaviour of things” and that he feels that “technology is what artists use to play with their ideas, to make them really work.” (Reffin-Smith, 1985).

Ihnatowicz himself goes on by saying that the “[p]rincipal value of art is its ability to open our eyes to some aspect of reality, some view of life hitherto unappreciated” and that in the day and age of technology, the artist “can embrace the new revolution and use the new discoveries to enhance his understanding of the world” (Ihnatowicz, n.d.).

With our project being based on one of Ihnatowicz’s works, the Senster, we felt our group shared his views on this aspect of learning-by-doing. What was especially relatable to us was the fact that Ihnatowicz “only learned about computing and programming while already constructing the Senster”. (Ihnatowicz, n.d.). His “own self-taught command of scientific and technical detail is equalled by very few other artists”, Jonathan Benthall wrote about Ihnatowicz, suggesting in our opinion that where there’s a will there’s a way (Benthall, 1971).

Figure 2. A picture of Edward Ihnatowicz working on the controls for a smaller scale of The Senster. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/senster/sensterstructure/index.htm

Ihnatowicz’s work – SAM

In 1968 Ihnatowicz created SAM (Sound Activated Mobile). According to Ihnatowicz, SAM was “the first moving sculpture which moved directly and recognisably in response to what was going on around it” making it perhaps one of the first interactive and dynamic sculptures. Ihnatowicz stated that SAM was “an attempt to provide a piece of kinetic sculpture with some purposefulness and positive control of its movement” (Ihnatowicz, n.d.).


Figure 3. SAM. Retrieved November 14, 2015, from http://www.senster.com/ihnatowicz/SAM/sam2.htm

Ihnatowicz discovered that “one of the shapes [he] had developed for SAM had a very close equivalent in nature in the claw of a lobster. It appears that lobsters are some of the very few animals that have very simple, hinge-like joints between the sections of their exoskeletons. […] A lobster’s claw was, therefore, inevitably, the inspiration for [his] next piece, the Senster”.

Ihnatowicz’s work – The Senster

In an interesting Gizmodo article from 2012, the writer, Lewis, writes about the origins of the Senster. He says “[i]t was Ihnatowicz’s interest in the emulation of animal movement that led him to become a pioneer of robotic art. Recording a lioness in its cage in a zoo, Ihnatowicz noticed the big cat turn and look at the camera then look away, leading him to ponder creating a sculpture that could do something similar in an art gallery, with the same feeling of a moment of contact with another seemingly sentient being.” (Lewis, 2012).


Figure 4. The Senster. Retrieved October 10, 2015, from http://www.senster.com/ihnatowicz/senster/sensterphotos/index.htm

Commissioned by Philips for the Evoluon in Eindhoven, Holland, from 1970 – 1974, the Senster “was the first sculpture to be controlled by a computer”, as mentioned earlier, with its realization taking more than two years (Ihnatowicz, n.d.).


Figure 5. One of Ihnatowicz’s initial sketches of The Senster. Retrieved October 10, 2015, from http://www.researchgate.net/publication/221629713_The_development_of_a_cybernetic_sculptor_Edward_Ihnatowicz_and_the_senster

About 15 feet long, the Senster responded to sound using four microphones located on the front of its “head”, and also responded to movement, which it detected by means of radar horns on either side of the microphone array. Ihnatowicz wrote that the “microphones would locate the direction of any predominant sound and home in on it […] the rest of the structure would follow them in stages if the sound persisted. Sudden movements or loud noises would make it shy away” (Ihnatowicz, n.d.).

On its appearance and the overall experience, one of the Evoluon’s visitors, Brian Reffin-Smith, wrote “[t]he sight of this big, swaying head coming down from 15ft away to hover uncertainly in front of you was more moving than you’d suppose.” He underlines the Senster’s almost hypnotic powers by saying “[c]ouples, it is said, had wedding photographs taken in front of it. Kids watched it for four, five hours at a time.” (Reffin-Smith, 1985).

Aleksandar Zivanovic, editor of the very informative senster.com, provides all kinds of thorough information about the Senster, Ihnatowicz himself and his work, including technical details. On how the Senster perceived its world, he mentions that it “used two Hewlett-Packard doppler units (with custom made gold plated antenna horns) to detect movement near its ‘head’.” (Zivanovic, n.d.).


Figure 6. The Senster’s eyes that tracked movement – a spare Hewlett-Packard doppler unit. Retrieved November 14, 2015, from http://www.senster.com/ihnatowicz/senster/sensterradar/index.htm

In 1972, Jasia Reichardt went into detail on the Senster’s sensing/actuating flow: “the sounds which reach the two channels are compared at frequent intervals through the use of the control computer, and reaction is motivated when the sounds from the two sources match as far as possible. What occurs visually is that the microphones point at the source of sound and within a fraction of a second the Senster turns towards it.” (Reichardt, 1972).


Figure 7. The Senster’s ears – an array of four microphones – that tracked sound. Retrieved October 10, 2015, from http://www.senster.com/ihnatowicz/senster/sensterphotos/index.htm

The control computer, a Philips P9201, was actually a re-badged Honeywell 16 series machine with 8K of memory, a punched paper tape unit and a teletype. Zivanovic makes a rather funny point when he mentions that the computer’s “insurance value [translated to current currency] was worth more than [his] parent’s three bedroom house in London” (Zivanovic, n.d.).

On a similar note, in his essay on Ihnatowicz, Zivanovic writes “[The Senster’s overall] system was insured for £50,000 – the equivalent of around US $4.5m in current value [2005] – when it was shipped from London to Eindhoven in 1970” (Zivanovic, 2005).


Figure 8. The Senster’s brain – a Philips P9201 computer. The left cabinet held the hydraulics equipment and the right cabinet was the computer itself. Retrieved October 10, 2015, from http://www.senster.com/ihnatowicz/senster/senstercomputer/index.htm

The cybernetic sculpture held itself up by three static “legs” while it moved via six electro-hydraulic servo-systems, based on the aforementioned lobster’s claw, allowing six degrees of freedom.

Taking into account the large size of his creation, Ihnatowicz had some help from Philips’ technicians in realizing an economical way of moving the Senster. Their combined efforts resulted in the sculpture’s movements being driven by constant acceleration and deceleration (Benthall, 1971).

This life-like sense of motion combined with the Senster’s large scale and multiple streams of sensory data contributed to giving off the illusion of a real live creature. In retrospect, Ihnatowicz said that “[t]he complicated acoustics of the hall and the completely unpredictable behaviour of the public made the Senster’s movements seem a lot more sophisticated than they actually were.” (Ihnatowicz, n.d.). Reichardt agrees, stating that “[s]ince the Senster responds to a number of stimuli simultaneously its reactions are more life-like and less obvious than if merely the volume of sound were to provoke a slow or fast movement.” (Reichardt, 1972).

SPOT – Our group’s Prototype Based on the Senster

Who is SPOT?

Our version of the Senster is SPOT; a dynamic cybernetic sculpture that loves to daydream and people-watch. When SPOT is on its own, it daydreams; swaying its head and body from side to side, taking in the environment. When somebody walks up to SPOT, it excitedly looks up into that person’s eyes and becomes curious. SPOT observes that person, following their movement with its head and body.

How does SPOT work?

Alone, SPOT will sway its head and body in slow random patterns via its 3 servo motor joints. One is placed at the base for panning (rotation about the x-axis), another is attached to SPOT’s “neck” for leaning forwards and backwards, and the last servo joint tilts SPOT’s “head” (rotation about the y-axis).
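
The daydreaming sway can be sketched roughly as below; the pins, angle ranges and timing are placeholders rather than the values we actually used.

#include <Servo.h>

Servo baseServo, neckServo, headServo;

void setup() {
  baseServo.attach(9);    // placeholder pins for the three joints
  neckServo.attach(10);
  headServo.attach(11);
  baseServo.write(90);    // start from a neutral pose
  neckServo.write(90);
  headServo.write(90);
  randomSeed(analogRead(A5));
}

// Drift a joint toward a random nearby target so the motion reads as slow and aimless.
void swayTo(Servo &s, int target) {
  int current = s.read();
  int step = (target > current) ? 1 : -1;
  for (int a = current; a != target; a += step) {
    s.write(a);
    delay(20); // a small delay per degree keeps the sway slow
  }
}

void loop() {
  swayTo(baseServo, random(60, 120));
  swayTo(neckServo, random(70, 110));
  swayTo(headServo, random(60, 120));
  delay(random(500, 1500)); // pause before the next drift
}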

SPOT uses OpenCV (an open-source computer vision library) and a webcam to detect faces in view. If a face is detected, SPOT stops its random movement and begins to follow the user’s face: the bottom and top servos are given commands based on the detected face’s x and y coordinates relative to the camera’s centre position. The middle servo is not controlled by external data – such as movement or sound – but rather serves as a pre-programmed dramatic device to imbue SPOT with more sense of life and dynamic movement.
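
On the Arduino side, the face-following boils down to mapping the detected coordinates onto servo angles. The sketch below assumes Processing sends the face’s x and y as two bytes (each scaled to 0–255) over serial; our actual protocol, pins and mapping may have differed, so treat this purely as an illustration.

#include <Servo.h>

Servo panServo;   // bottom servo: follows the face's x coordinate
Servo tiltServo;  // top servo: follows the face's y coordinate

void setup() {
  Serial.begin(9600);
  panServo.attach(9);    // placeholder pins
  tiltServo.attach(10);
}

void loop() {
  if (Serial.available() >= 2) {
    int faceX = Serial.read();                   // 0-255, relative to the camera frame
    int faceY = Serial.read();
    panServo.write(map(faceX, 0, 255, 180, 0));  // mirrored so the head follows the face
    tiltServo.write(map(faceY, 0, 255, 0, 180));
  }
}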

Making of SPOT – Birth

Having researched Edward Ihnatowicz and – among other of his creations – the Senster, our group came to a common understanding that, in Edward’s words, “most of our appreciation of the world around us comes to us through our interpretation of observed or sensed physical motion. […] For an artificial system to display a similar sense of purpose it is necessary for it to have a means of observing and interpreting the state of its environment” (Ihnatowicz, n.d.).

With this in mind, we all went to our separate corners to think of ways to breathe life into an otherwise dead object of electrical components. On returning, we compared notes and sketches of different ideas and implementations, ranging from a selfie-taking Selfie-Bot, to an interactive Teddy Bear, to an origami-inspired set of interactions and movements.

initial-ideas-selfieBot-lores-1

initial-ideas-selfieBot-lores-2

In the end we settled on a combination of ideas that resulted in the original idea for SPOT; a cybernetic sculpture that loves its ball and would somehow ask for it back if it went missing. We were aware from the beginning that the technical scope of this concept would be multi-faceted and complex, especially in the limited timeframe, but we concluded that the project was ambitious enough that the execution of even a portion of this project would produce a product with a synergy of interesting concepts and technical accomplishments.

With this common trajectory in place, we started to think of possible components and materials we might need. From sketches we could determine that for a single moving joint we would need a servo fastened onto a static and sturdy material, with the servo’s “horn” connected to the next piece serving as the moving component. Other considerations included how SPOT would know that it had its ball, how it would sense and recognize the ball, and how we could imbue SPOT with a sense of personality and character.

prototype-inital-components-analysis-lores-1

Making of SPOT – Searching for a way to sense the world

We found tutorials online where Processing was used to process sensory data, which was then sent to an Arduino to act on. Researching, testing and drawing from online tutorials like Tracking a ball and rotating camera with OpenCV and Arduino (AlexInFlatland, May 2013) and Face Tracking with a Pan/Tilt Servo Bracket (zagGrad, July 2011), we began to see the light at the end of the tunnel.

Some of the working code from the tutorials had to be tweaked and molded to meet our needs. With Greg Borenstein’s great library OpenCV for Processing (Borenstein, n.d.) we found ourselves facing software compatibility issues, forcing us to downgrade from Processing 3.0 to Processing v2.4 in order to take advantage of the visual processing library. The library uses a common face-detection algorithm, which Borenstein explains in one of his short Vimeo videos (Borenstein, July 2013).

Making of SPOT – Initial tests and experiments

Soon thereafter we would turn to the actuators. The servos in question would change SPOT’s field of view based on where a face would be detected. SPOT’s two-dimensional visual field would be altered by the repositioning of its head in three-dimensional space. Deciding to take things step by step, we started with basic initial tests with two servos set up within a pan/tilt bracket and controlled them using different potentiometers, one for the x-axis and one for the y-axis.
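
That first test is essentially the classic two-potentiometer setup: each pot is read, scaled to the servo’s range and written straight to its servo. A minimal sketch, with placeholder pins:

#include <Servo.h>

Servo panServo, tiltServo;
const int panPotPin = A0;   // placeholder wiring
const int tiltPotPin = A1;

void setup() {
  panServo.attach(9);
  tiltServo.attach(10);
}

void loop() {
  // analogRead gives 0-1023; map it onto the servo's 0-180 degree range
  panServo.write(map(analogRead(panPotPin), 0, 1023, 0, 180));
  tiltServo.write(map(analogRead(tiltPotPin), 0, 1023, 0, 180));
  delay(15); // give the servos time to move
}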

From there, the next step was to experiment with face detection by implementing Borenstein’s OpenCV for Processing library examples; “FaceDetection” and “LiveCamTest”.

We also decided that SPOT could potentially recognize his very own ball by tracking an object with a specific colour reading. Jordi Tost’s HSVColorTracking – an example based on Borenstein’s aforementioned library – demonstrated a way to solve the object-recognition problem (Tost, n.d.).

https://youtu.be/XaIqykLtTt4

Along the way, we had to keep in mind SPOT’s behavioural flow – when is it searching for faces, when does it do the same for its ball, etc. – so as not to get too caught up on a single aspect.

prototype-v2-behaviour-storyb-lores-1

Making of SPOT – Version 1

At this moment in time, we had the different components working separately. Next step was to introduce them to one another by putting a USB webcam into the mix.

We went through some trial and error finding a webcam that was compatible with the Mac and Processing, and ended up borrowing Marcus Gordon’s webcam.

Pulling through those issues, we got a working version 1 of SPOT detecting and tracking faces with the two servos!

https://youtu.be/FBnOscWJu-o

Making of SPOT – Version 2

After the thrill of having a version 1 working properly, we wanted to increase SPOT’s range of motion for a version 2. By increasing the distance from the panning X-axis to the tilting Y-axis, SPOT’s movements would be more noticeable and hopefully more memorable.

In the video linked below fellow classmate Marcelo Luft interacts with SPOT version 2, which replicates the facial detection and tracking of version 1, but includes one extra joint and a custom built apparatus.

https://youtu.be/OrEnsvVu1mw

It was also important to figure out and visualize in a 3-dimensional space where a viewer would see and interact with SPOT and its ball.

prototype-v2-shape-and-presentation-lores-1

 

prototype-v2-shape-and-presentation-lores-2

 

Making of SPOT – Final Presentation

Unfortunately, we ran into technical issues two days before our presentation. We realized time was running short and we needed a finished product we could count on for the presentation. We decided to drop the entire ball-following narrative and focused instead on making the robot detect faces and move around with the most realism and fewest bugs possible. The final product moved smoothly during its random movements and recognized faces reliably.

When our classmates arrived for class, we had set up SPOT as the primary focus point of the room with a friendly message behind it encouraging people to interact with it.

presentation-prototype-2

presentation-prototype-1

The video linked below shows some of the students’ responses to meeting SPOT in person, which are very interesting. To our delight, students commonly described SPOT’s random movements and face-tracking in anthropomorphic terms. Students tried to get SPOT’s attention and, when they failed, interpreted the robot’s behaviour personally. This project and experiment revealed some of the fundamental properties of interactive objects that can create the illusion that an object is living and responding personally to a user.

Although not as ambitious as our original behaviour idea for SPOT, considering the fact that our group is a collection of newcomers to dynamic cybernetic sculpture, our version that we went with is quite the feat – in our not-so-humble opinion. And who knows, in time, perhaps SPOT will get bored of faces and will insist on getting something more geometrical. A colourful ball perhaps?


Research Project Group 6: Egill R. Viðarsson, Ling Ding, Michael Carnevale, Xiaqi Xu


References

AlexInFlatland (May 16, 2013). Tracking a ball and rotating camera with OpenCV and Arduino. Youtube. Retrieved on October 29, 2015, from https://www.youtube.com/watch?v=O6j02lN5gDw

Borenstein, Greg (July 8, 2013). Face detection with OpenCV in Processing. Vimeo. Retrieved on November 10, 2015, from https://vimeo.com/69907695

Benthall, Jonathan (1971). Science and Technology in Art Today. Thames and Hudson. London. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/articles/ihnatowicz%20brochure.pdf

Borenstein, Greg (n.d.). OpenCV for Processing. Github. Retrieved on November 6, 2015, from https://github.com/atduskgreg/opencv-processing

Ihnatowicz, Edward (n.d.). cybernetic art – A personal statement. Senster.com. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/articles/ihnatowicz%20brochure.pdf

interactivearch (July 15, 2008). SAM – Cybernetic Serendipity. Youtube. Retrieved on October 10, 2015, from https://www.youtube.com/watch?v=8b52qpyV__g

interactivearch (January 12, 2008). The Senster. Youtube. Retrieved on October 10, 2015, from https://www.youtube.com/watch?v=1jDt5unArNk

Lewis, Jacob (August 30, 2012). How the Tate Brought a Pioneering Art-Robot Back Online. Gizmodo. Retrieved on October 12, 2015, from http://www.gizmodo.co.uk/2012/08/how-the-tate-has-brought-a-pioneering-art-robot-back-online/

Reffin-Smith, Brian (1985). Soft Computing – Art and Design. Computing. Addison Wesley. London. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/articles/ihnatowicz%20brochure.pdf

Reichardt, Jasia (1972). Robots: Fact, Fiction, Prediction. Thames and Hudson. London. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/articles/ihnatowicz%20brochure.pdf

Tost, Jordi (n.d.). HSVColorTracking. Github. Retrieved on November 6, 2015, from https://github.com/jorditost/ImageFiltering/blob/master/SimpleColorTracking/HSVColorTracking/HSVColorTracking.pde

zagGrad (July 15, 2011). Face Tracking with a Pan/Tilt Servo Bracket. Sparkfun. Retrieved on October 29, 2015, from https://www.sparkfun.com/tutorials/304

Zivanovic, Aleksandar (n.d.). Senster – A website devoted to Edward Ihnatowicz, cybernetic sculptor. Retrieved on October 10, 2015, from http://www.senster.com/

Zivanovic, Aleksandar (2005). The development of a cybernetic sculptor: Edward Ihnatowicz and the senster. Researchgate.com. Retrieved on October 12, 2015, from http://www.researchgate.net/publication/221629713_The_development_of_a_cybernetic_sculptor_Edward_Ihnatowicz_and_the_senster

Workshop 2: Daily Devices, the Washing Machine

Our washing machine at home seemed so uninteresting that I decided to choose it for analysis, in the hope of making it a bit more interesting.

I set up camp in the bathroom with lights, camera and not a whole lot of action.

documenting-the-washing-machine-1

I did, though, I think, figure out how the machine’s different knobs, buttons and LEDs work with one another and how they put the machine to work. I put together a sort of combined UI and user flow for an example setting of the washing machine.

For the “make-an-inspired-copy-prototype” part of the assignment, I deviated from actuators working with water and found in the 5V fan a similar trait: a spin.

So I decided to put a spin on things (pun à la Andrew Hicks intended) and thought how I could put together an interesting concept based on the washing machine but with a twist (pun, again).

At Hvíta húsið, a wonderful advertising agency I worked for from 2012 to 2015, they tried their hand at an open office space. I for one am all for increased transparency and collaboration between co-workers. That I was put smack in the middle of the open work area was perhaps not as ideal – for me at least.

The new situation also didn’t play well with keeping the AC efficient during the warm summers.

So, despite my attempts at getting people to talk to team members by walking over to them instead of shouting across the room – and, in the same moment, across me – I thought of an “AC powered by silence” setting for the air conditioning systems at one’s workplace.

An example: it’s a super warm Tuesday, the new AC is doing great and everybody’s working on their projects within decent noise levels. Then, around 10:30 am, the workplace starts to get noisier and louder, slowly turning the AC off (given that it has the above setting turned on).

The noisy chatter would then flow into conversations of “Oh, my! How warm it is! Isn’t the AC working?” and “I know, right?! I’m sweating like a pig!” to which one could respond with “The AC is perfectly fine. It’s just waiting for the sweet silence”.

A working demo can be seen here of a loud-mouthed stegosaurus disturbing the work force.

I’m aware that this concept might come off as a bit bitter from my end. Don’t get me wrong, from time to time I sure like a break from projects to do other, louder stuff. Maybe just not where people are trying to continue on with their own projects in peace.

hardware-diagram-1

 

// Sketch code below. The original map() call used integer arguments that were less than
// ideal for a 0.5-1.0 V range, so the float-aware mapfloat() is used here instead
// and the result is constrained to the 0-255 PWM range.

const int fanPin = 9;
const int speakerPin = A0;

const int sampleWindow = 50; // Sample window width in mS (50 mS = 20Hz)
unsigned int sample;

void setup() {
  Serial.begin(9600);

  pinMode(fanPin, OUTPUT);
  pinMode(speakerPin, INPUT);
}

// map function with floats
float mapfloat(float x, float in_min, float in_max, float out_min, float out_max) {
  return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}

void loop() {
  unsigned long startMillis = millis(); // Start of sample window
  unsigned int peakToPeak = 0; // peak-to-peak level

  unsigned int signalMax = 0;
  unsigned int signalMin = 1024;

  // collect data for 50 mS
  while (millis() - startMillis < sampleWindow) {
    sample = analogRead(speakerPin);
    if (sample < 1024) { // toss out spurious readings
      if (sample > signalMax) {
        signalMax = sample; // save just the max levels
      } else if (sample < signalMin) {
        signalMin = sample; // save just the min levels
      }
    }
  }

  peakToPeak = signalMax - signalMin; // max - min = peak-peak amplitude
  double speakerVolts = (peakToPeak * 3.3) / 1024; // convert to volts

  // quiet room (0.5 V) = fan at full speed, loud room (1.0 V) = fan off
  int soundCtrlFan = constrain((int)mapfloat(speakerVolts, 0.5, 1.0, 255, 0), 0, 255);

  Serial.print("speakerVolts value is: ");
  Serial.print(speakerVolts); // check mic analog input values

  Serial.print("\t soundCtrlAC value is: ");
  Serial.println(soundCtrlFan);

  analogWrite(fanPin, 0); // fan kept off while initial testing
  // analogWrite(fanPin, soundCtrlFan); // enable this line to let silence drive the fan
}

 

 

Workshop 1: DigiBird – Egill Rúnar Viðarsson

Photo of *schematics* with Arduino and DigiBird interacting via sound

Hey guys!

I’m relatively new to micro controllers but some fiddling with the Arduino Starter Kit last summer has definitely given me some base to work with.

An idea I had was to have the DigiBird sit on a rubber 2 1/2″ wheel laid on its side and connected to a DC motor. Then I’d hook up the electret mic as an input, so when the DigiBird would chirp/sing, the mic would catch the sound, the sketch would interpret the analog values as volts and power the DC motor attached to the rubber wheel – effectively spinning the bird while it’s chirping/singing.

As a tester, I successfully have an LED blink in time with the DigiBird when it makes noise.

The hard part has been getting the DC motor to work – at all.

It has something to do with the DC motor needing more voltage to operate, maybe from a wall socket instead of just the USB cord.

Will continue with this later hopefully in order to have the DigiBird spinnin’ while it’s singin’!

– Egill R

Arduino and DigiBird interacting via sound and blinking LED

Sketch code

/****************************************
Example Sound Level Sketch for the
Adafruit Microphone Amplifier
// https://learn.adafruit.com/adafruit-microphone-amplifier-breakout/measuring-sound-levels
****************************************/

/*
The Audio signal from the output of the amplifier is a varying voltage.
To measure the sound level, we need to take multiple measurements to find
the minimum and maximum extents or “peak to peak amplitude” of the signal.

In the example below, we choose a sample window of 50 milliseconds.
That is sufficient to measure sound levels of frequencies as low as 20 Hz – the lower limit of human hearing.

After finding the minimum and maximum samples, we compute the difference
and convert it to volts and the output is printed to the serial monitor.
*/

const int sampleWindow = 50; // Sample window width in mS (50 mS = 20Hz)
unsigned int sample;
int ledPin = 2;

void setup()
{
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
}

void loop()
{
  unsigned long startMillis = millis(); // Start of sample window
  unsigned int peakToPeak = 0; // peak-to-peak level

  unsigned int signalMax = 0;
  unsigned int signalMin = 1024;

  // collect data for 50 mS
  while (millis() - startMillis < sampleWindow)
  {
    sample = analogRead(0);
    if (sample < 1024) // toss out spurious readings
    {
      if (sample > signalMax)
      {
        signalMax = sample; // save just the max levels
      }
      else if (sample < signalMin)
      {
        signalMin = sample; // save just the min levels
      }
    }
  }
  peakToPeak = signalMax - signalMin; // max - min = peak-peak amplitude
  double volts = (peakToPeak * 3.3) / 1024; // convert to volts

  // Serial.println(volts); // check mic analog input values
  if (volts >= 0.5) {
    digitalWrite(ledPin, HIGH);
  } else {
    digitalWrite(ledPin, LOW);
  }
}
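
If I get the power sorted, the next step would be something like the sketch below: the same 50 mS peak-to-peak measurement, but PWM-ing a transistor that drives the DC motor instead of switching the LED. This is only a sketch of the idea, not something I have had running; it assumes the motor gets its own supply and a flyback diode, and pin 9 is a placeholder.

const int motorPin = 9;        // PWM pin to the transistor driving the motor (placeholder)
const int sampleWindowMs = 50; // same 50 mS sample window as above

void setup() {
  pinMode(motorPin, OUTPUT);
}

void loop() {
  unsigned long startMillis = millis();
  unsigned int signalMax = 0;
  unsigned int signalMin = 1024;

  // collect data for 50 mS, exactly as in the LED sketch above
  while (millis() - startMillis < sampleWindowMs) {
    unsigned int sample = analogRead(0);
    if (sample < 1024) {
      if (sample > signalMax) signalMax = sample;
      else if (sample < signalMin) signalMin = sample;
    }
  }
  double volts = ((signalMax - signalMin) * 3.3) / 1024;

  if (volts >= 0.5) {
    // louder singing = faster spin; keep at least some PWM so the motor overcomes stall
    int speed = constrain((int)((volts - 0.5) * 2.0 * 255.0), 80, 255);
    analogWrite(motorPin, speed);
  } else {
    analogWrite(motorPin, 0); // quiet: let the wheel coast to a stop
  }
}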