

Research Project # 7 – Connecting Light

Project by: Jazmine Yerbury, Margarita Castro, Marcus Gordon & Hammadullah Syed.

Connecting Light was an outdoor art installation that combined LED lights with wireless communications technology. It was installed along Hadrian’s Wall in Northern England. As you can see in the image below, Hadrian’s Wall stretches for 73 miles (117 kilometres). It was built starting in 122 AD during the rule of the Roman emperor Hadrian.

wall

A glimpse into the installation shows the scale of this work spanning the large territory of the wall. Lights in the balloons blink in response to the audience sending and receiving messages via the balloons and a mobile app on their phones.

 Who was behind this project?

The development team behind this amazing project was a coalition of members of YesYesNo, spearheaded by Zach Lieberman, a well-known artist, researcher and hacker dedicated to exploring new modes of expression and play.

YesYesNo LLC is a new interactive collective that specializes in the creation of engaging, magical installations that combine creativity, artistic vision and cutting edge R&D.

For this project, 400 Digi programmable XBee modules and 20 ConnectPort X4 gateways were required – all of this for 400 balloons lined up along the 73 miles of Hadrian’s Wall.

leds

Zach Lieberman explains that each balloon is programmable, allowing the intensity of its colours to be controlled. Based on our research, it seems control was split between user input and automatic adjustments that the XBee makes in response to weather conditions.
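As a rough illustration of that kind of control – our own minimal sketch, not the installation’s firmware – an Arduino-class board attached to an XBee can set an LED’s intensity from a single byte arriving over the XBee’s serial link. The pin numbers and the one-byte protocol here are assumptions.

```
// Minimal sketch (our assumption, not the installation's firmware):
// each incoming serial byte from the XBee sets the balloon's brightness.
const int RED_PIN = 9;     // PWM pins driving an RGB LED (illustrative)
const int GREEN_PIN = 10;
const int BLUE_PIN = 11;

void setup() {
  Serial.begin(9600);      // an XBee in transparent mode appears as a serial port
  pinMode(RED_PIN, OUTPUT);
  pinMode(GREEN_PIN, OUTPUT);
  pinMode(BLUE_PIN, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    int brightness = Serial.read();     // 0-255 intensity sent by the gateway
    analogWrite(RED_PIN, brightness);   // here all channels share one value
    analogWrite(GREEN_PIN, brightness);
    analogWrite(BLUE_PIN, brightness);
  }
}
```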

The equipment for each balloon was well organized:

equipment

equipment2

This transmedia map shows how the system takes text messages as input, routes them through the X4 gateways, and transmits them to the XBee modules controlling the lights.

map

zach

Zach Lieberman is the cofounder of openFrameworks, an open source C++ toolkit designed to assist the creative process by providing a simple and intuitive framework for experimentation. openFrameworks is designed to work with several commonly used libraries such as FreeType (fonts), FreeImage (image saving and loading) and OpenCV. It is distributed under the MIT License, which gives everyone the freedom to use openFrameworks in any context, commercial or non-commercial.

His aim is to use technology in a playful way to break down the fragile boundary between the visible and invisible – augmenting the body’s ability to communicate.

Through his work, he looks for the “open mouth” phenomenon – the state of being in awe, when something is so striking that your conscious mind loses hold of your body and your jaw drops. He also sees the open mouth as a gateway to someone’s heart.

 Connected Colour – RGB Morse code transmitter and decoder

First Prototype “Hello”:

Our first prototype for this research project was very simple: text typed into the serial monitor is output as blinking, beeping Morse code.
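The original sketch is not reproduced here, but the idea can be approximated as follows: read characters from the serial monitor and play each one as Morse code on an LED and a piezo buzzer. Pin numbers and timing constants are illustrative, not the values we used.

```
const int LED_PIN = 13;
const int BUZZER_PIN = 8;
const int DOT_MS = 150;                 // base time unit (illustrative)

// International Morse code for A-Z
const char* MORSE[26] = {
  ".-", "-...", "-.-.", "-..", ".", "..-.", "--.", "....", "..", ".---",
  "-.-", ".-..", "--", "-.", "---", ".--.", "--.-", ".-.", "...", "-",
  "..-", "...-", ".--", "-..-", "-.--", "--.."
};

void blinkSymbol(int durationMs) {
  digitalWrite(LED_PIN, HIGH);
  tone(BUZZER_PIN, 700);                // 700 Hz beep while the LED is on
  delay(durationMs);
  digitalWrite(LED_PIN, LOW);
  noTone(BUZZER_PIN);
  delay(DOT_MS);                        // gap between symbols
}

void sendChar(char c) {
  if (c == ' ') { delay(DOT_MS * 7); return; }   // word gap
  if (!isAlpha(c)) return;                       // skip anything else
  const char* code = MORSE[toupper(c) - 'A'];
  for (int i = 0; code[i] != '\0'; i++) {
    blinkSymbol(code[i] == '.' ? DOT_MS : DOT_MS * 3);
  }
  delay(DOT_MS * 2);                    // extra gap between letters
}

void setup() {
  Serial.begin(9600);
  pinMode(LED_PIN, OUTPUT);
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    sendChar((char)Serial.read());      // e.g. type "Hello" in the serial monitor
  }
}
```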

Second Prototype “Morse Code (sound decoder)”:

A circuit and code that let the user input text. The text is output as Morse-encoded beeps; the signal is then read by an electret mic and decoded back into English on another device.

Final prototype:

For the final prototype, we created a circuit with two different sets of RGB LEDs, diffused through styrofoam lamps. The code encodes text into Morse code light blinks, which are read by a photocell and decoded back into English on another device.

Sets of RGB LEDs and styrofoam lamps:

lamps

  1. The first circuit, with two RGB LEDs, was connected to a buzzer and to a second two-RGB-LED circuit.
  2. Both circuits were connected to the Arduino’s PWM digital pins.
  3. The two lamp prototypes diffused and enhanced the light from the RGB LED circuits.
  4. Using the Morse encoder Arduino code, words are input through the serial port.
  5. Each letter is generated in Morse code on RGB light circuit #1 and the buzzer.
  6. RGB light circuit #2 replicates every letter from RGB light circuit #1 in Morse code.

Decoder:
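A rough sketch of the decoder logic described above – the photocell reads the blinks and each light pulse is classified as a dot or a dash by its duration – might look like this. The threshold and timing values are assumptions, and mapping the printed symbols back to letters mirrors the encoder’s lookup table.

```
const int PHOTOCELL_PIN = A0;
const int LIGHT_THRESHOLD = 500;    // analog level separating "on" from "off" (tune per room)
const unsigned long DASH_MS = 300;  // pulses longer than this count as dashes
const unsigned long GAP_MS = 400;   // dark gaps longer than this end a letter

bool lightOn = false;
unsigned long changedAt = 0;

void setup() {
  Serial.begin(9600);
  changedAt = millis();
}

void loop() {
  bool nowOn = analogRead(PHOTOCELL_PIN) > LIGHT_THRESHOLD;
  unsigned long now = millis();

  if (nowOn != lightOn) {
    unsigned long duration = now - changedAt;
    if (lightOn) {
      // A light pulse just ended: classify it by length.
      Serial.print(duration < DASH_MS ? '.' : '-');
    } else if (duration > GAP_MS) {
      // A long dark gap ended: the previous letter is complete.
      Serial.print(' ');
    }
    lightOn = nowOn;
    changedAt = now;
  }
}
```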

The Senster by Edward Ihnatowicz (Research Project Group 6)

A groundbreaking cybernetic sculptor, Edward Ihnatowicz explored the interaction between his robotic works and their audience. One of the first computer-controlled interactive robotic works of art, The Senster – a 15-foot-long cybernetic sculpture commissioned by Philips in the early 1970s – is widely considered one of Ihnatowicz’s greatest achievements. The sculpture sensed the behaviour of visitors through their sounds and movements. It then reacted by drawing closer to what it read as interesting and friendly gestures and sounds while “shying” away from less friendly, louder ones.

Cybernetic Art

Robotics in art might be said to have become a normal notion today. Of course, that hasn’t always been the case. Edward Ihnatowicz, a scientific artist in his own right, experimented with robotics and cybernetics around the middle of the 20th century, experiments that are thought to have led to some of the most groundbreaking cybernetic art – art that, perhaps, we now take for granted.

One of his grander creations was the Senster, which was built over a period of more than two years. The life-like “being” was one of the first to catch the public’s attention – a fact which can be credited to Philips, who commissioned the piece and provided the venue at which the Senster was shown from 1970 to 1974. As Ihnatowicz himself notes in his cybernetic art – A personal statement, “[The Senster] was the first sculpture to be controlled by a computer” (Ihnatowicz, n.d.).

robots-in-art-from-senster-dot-com-1

Figure 1. “Robots in Art”. Retrieved November 14, 2015, from http://senster.com/robots_in_art/

Edward Ihnatowicz (b: 1926 – d: 1988)

Born in Poland in 1926, Ihnatowicz became a war refugee at the age of 13, seeking refuge in Romania and Algiers. Four years later he moved to Britain, the country that served as his home for the remainder of his life.

Ihnatowicz attended the Ruskin School of Art, Oxford, from 1945 to 1949, where he “studied painting, drawing and sculpture […] and dabbled in electronics. […] But then he threw away all of his electronics to concentrate on the finer of the fine arts.” Ihnatowicz went on to call the move the “[s]tupidest thing I’ve ever done. […] I had to start again from scratch 10 years later.” (Reffin-Smith, 1985).

For over a decade he created bespoke furniture and interior decoration, until 1962 when he left his home in hopes of finding his artistic roots. The next six years he lived in an unconverted garage, experimenting with life sculpture, portraiture and sculpture made from scrap cars. It was during this time Ihnatowicz would find “technological innovation opening a completely new way of investigating our view of reality in the control of physical motion” (Ihnatowicz, n.d.). Brian Reffin-Smith restated this in 1985 writing that Ihnatowicz “is interested in the behaviour of things” and that he feels that “technology is what artists use to play with their ideas, to make them really work.” (Reffin-Smith, 1985).

Ihnatowicz himself goes on by saying that the “[p]rincipal value of art is its ability to open our eyes to some aspect of reality, some view of life hitherto unappreciated” and that in the day and age of technology, the artist “can embrace the new revolution and use the new discoveries to enhance his understanding of the world” (Ihnatowicz, n.d.).

With our project being based on one of Ihnatowicz’s works, the Senster, we felt our group shared his views on this aspect of learning-by-doing. What was especially relatable to us was the fact that Ihnatowicz “only learned about computing and programming while already constructing the Senster”. (Ihnatowicz, n.d.). His “own self-taught command of scientific and technical detail is equalled by very few other artists”, Jonathan Benthall wrote about Ihnatowicz, suggesting in our opinion that where there’s a will there’s a way (Benthall, 1971).
in_lab-lrg

Figure 2. A picture of Edward Ihnatowicz working on the controls for a smaller scale of The Senster. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/senster/sensterstructure/index.htm

Ihnatowicz’s work – SAM

In 1968 Ihnatowicz created SAM (Sound Activated Mobile). According to Ihnatowicz, SAM was “the first moving sculpture which moved directly and recognisably in response to what was going on around it” making it perhaps one of the first interactive and dynamic sculptures. Ihnatowicz stated that SAM was “an attempt to provide a piece of kinetic sculpture with some purposefulness and positive control of its movement” (Ihnatowicz, n.d.).

sam-now01-lrg

Figure 3. SAM. Retrieved November 14, 2015, from http://www.senster.com/ihnatowicz/SAM/sam2.htm

Ihnatowicz discovered that “one of the shapes [he] had developed for SAM had a very close equivalent in nature in the claw of a lobster. It appears that lobsters are some of the very few animals that have very simple, hinge-like joints between the sections of their exoskeletons. […] A lobster’s claw was, therefore, inevitably, the inspiration for [his] next piece, the Senster”.

Ihnatowicz’s work – The Senster

In an interesting Gizmodo article from 2012, the writer, Lewis, writes about the origins of the Senster. He says, “[i]t was Ihnatowicz’s interest in the emulation of animal movement that led him to become a pioneer of robotic art. Recording a lioness in its cage in a zoo, Ihnatowicz noticed the big cat turn and look at the camera then look away, leading him to ponder creating a sculpture that could do something similar in an art gallery, with the same feeling of a moment of contact with another seemingly sentient being.“ (Lewis, 2012).

senster10-lrg

Figure 4. The Senster. Retrieved October 10, 2015, from http://www.senster.com/ihnatowicz/senster/sensterphotos/index.htm

Commissioned by Philips for the Evoluon in Eindhoven, Holland, from 1970 – 1974, the Senster “was the first sculpture to be controlled by a computer”, as mentioned earlier, with its realization taking more than two years (Ihnatowicz, n.d.).

senster-earlysketch-by-ihnatowicz-1

Figure 5. One of Ihnatowicz’s initial sketches of The Senster. Retrieved October 10, 2015, from http://www.researchgate.net/publication/221629713_The_development_of_a_cybernetic_sculptor_Edward_Ihnatowicz_and_the_senster

About 15 feet long, the Senster responded to sound using four microphones located on the front of its “head”, and also responded to movement, which it detected by means of radar horns on either side of the microphone array. Ihnatowicz wrote that the “microphones would locate the direction of any predominant sound and home in on it […] the rest of the structure would follow them in stages if the sound persisted. Sudden movements or loud noises would make it shy away” (Ihnatowicz, n.d.).

On its appearance and the overall experience, one of the Evoluon’s visitors, Brian Reffin-Smith, wrote “[t]he sight of this big, swaying head coming down from 15ft away to hover uncertainly in front of you was more moving than you’d suppose.” He underlines the Senster’s almost hypnotic powers by saying “[c]ouples, it is said, had wedding photographs taken in front of it. Kids watched it for four, five hours at a time.” (Reffin-Smith, 1985).

Aleksandar Zivanovic, editor of the very informative senster.com, provides all kinds of thorough information about the Senster, Ihnatowicz himself and his work, including technical details. On how the Senster perceived its world, he mentions that it “used two Hewlett-Packard doppler units (with custom made gold plated antenna horns) to detect movement near its ‘head’.” (Zivanovic, n.d.).

senster-parts-dopplerunit-lrg

Figure 6. The Senster’s eyes that tracked movement – a spare of Hewlett-Packard doppler unit. Retrieved November 14, 2015, from http://www.senster.com/ihnatowicz/senster/sensterradar/index.htm

In 1972, Jasia Reichardt went into detail on the Senster’s sensing/actuating flow: “the sounds which reach the two channels are compared at frequent intervals through the use of the control computer, and reaction is motivated when the sounds from the two sources match as far as possible. What occurs visually is that the microphones point at the source of sound and within a fraction of a second the Senster turns towards it.” (Reichardt, 1972).

senster5

Figure 7. The Senster’s ears – an array of four microphones – that tracked sound. Retrieved October 10, 2015, from http://www.senster.com/ihnatowicz/senster/sensterphotos/index.htm

The control computer, a Philips P9201, was actually a re-badged Honeywell 16-series machine with 8K of memory, a punched paper tape unit and a teletype. Zivanovic makes a rather funny point when he mentions that the computer’s “insurance value [translated to current currency] was worth more than [his] parents’ three bedroom house in London“. (Zivanovic, n.d.).

On a similar note, in his essay on Ihnatowicz, Zivanovic writes “[The Senster’s overall] system was insured for £50,000 – the equivalent of around US $4.5m in current value [2005] – when it was shipped from London to Eindhoven in 1970)” (Zivanovic, 2005).

senster-cpu-Philips P9201-controlpanel-lrg

Figure 8. The Senster’s brain – a Philips P9201 computer. The left cabinet held the hydraulics equipment and the right cabinet was the computer itself. Retrieved October 10, 2015, from http://www.senster.com/ihnatowicz/senster/senstercomputer/index.htm

The cybernetic sculpture held itself up by three static “legs” while it moved via six electro-hydraulic servo-systems, based on the aforementioned lobster’s claw, allowing six degrees of freedom.

Taking into account the large size of his creation, Ihnatowicz had some help from Philips’ technicians in realizing an economical way of moving the Senster. Their combined efforts resulted in producing the sculpture’s movements through constant acceleration and deceleration (Benthall, 1971).

This life-like sense of motion, combined with the Senster’s large scale and multiple streams of sensory data, contributed to the illusion of a real, living creature. In retrospect, Ihnatowicz said that “[t]he complicated acoustics of the hall and the completely unpredictable behaviour of the public made the Senster’s movements seem a lot more sophisticated than they actually were.” (Ihnatowicz, n.d.). Reichardt agrees, stating that “[s]ince the Senster responds to a number of stimuli simultaneously its reactions are more life-like and less obvious than if merely the volume of sound were to provoke a slow or fast movement.” (Reichardt, 1972).

SPOT – Our group’s Prototype Based on the Senster

Who is SPOT?

Our version of the Senster is SPOT; a dynamic cybernetic sculpture that loves to daydream and people-watch. When SPOT is on its own, it daydreams; swaying its head and body from side to side, taking in the environment. When somebody walks up to SPOT, it excitedly looks up into that person’s eyes and becomes curious. SPOT observes that person, following their movement with its head and body.

How does SPOT work?

Alone, SPOT will sway its head and body in slow random patterns via its 3 servo motor joints. One is placed at the base for panning (rotation about the x-axis), another is attached to SPOT’s “neck” for leaning forwards and backwards, and the last servo joint tilts SPOT’s “head” (rotation about the y-axis).

SPOT uses OpenCV (an open-source computer vision library) and a webcam to detect faces when in view. If a face is detected, SPOT stops its random movement and begins to follow a user’s face where the bottom and top servos are given commands based on the detected face’s x and y coordinates relative to the camera’s center position. The middle servo is not controlled by external data – such as movement or sound – but rather it serves as a pre-programmed dramatic device to imbue SPOT with more sense of life and dynamic movement.
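Our code is not embedded in this post, but the Arduino side of such a setup can be sketched as follows, assuming the Processing sketch sends the detected face’s offset from the camera’s centre as a comma-separated “x,y” line over serial. The pin numbers, step sizes and angle limits are illustrative.

```
#include <Servo.h>

Servo panServo;    // base: follows the face's x offset
Servo neckServo;   // middle joint: pre-programmed sway for "liveliness"
Servo tiltServo;   // head: follows the face's y offset

int panAngle = 90;
int tiltAngle = 90;

void setup() {
  Serial.begin(9600);
  panServo.attach(9);     // illustrative pin choices
  neckServo.attach(10);
  tiltServo.attach(11);
}

void loop() {
  // Expect lines like "34,-12\n": the face's offset from the camera centre, in pixels.
  if (Serial.available() > 0) {
    int dx = Serial.parseInt();
    int dy = Serial.parseInt();
    if (Serial.read() == '\n') {
      // Nudge the servos a little towards the face rather than jumping to it.
      panAngle  = constrain(panAngle  - dx / 20, 20, 160);
      tiltAngle = constrain(tiltAngle + dy / 20, 40, 140);
      panServo.write(panAngle);
      tiltServo.write(tiltAngle);
    }
  }

  // The middle servo sways slowly regardless of input, as described above.
  int sway = 90 + 25 * sin(millis() / 2000.0);
  neckServo.write(sway);
}
```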

Making of SPOT – Birth

Having researched Edward Ihnatowicz and – among other of his creations – the Senster, our group came to a common understanding that, in Edward’s words, “most of our appreciation of the world around us comes to us through our interpretation of observed or sensed physical motion. […] For an artificial system to display a similar sense of purpose it is necessary for it to have a means of observing and interpreting the state of its environment” (Ihnatowicz, n.d.).

With this in mind, we each went to our separate corners to think of ways to breathe life into an otherwise dead object of electrical components. On returning, we compared notes and sketches of different ideas and implementations, ranging from a selfie-taking Selfie-Bot, to an interactive Teddy Bear, to an origami-inspired set of interactions and movements.

initial-ideas-selfieBot-lores-1

initial-ideas-selfieBot-lores-2

In the end we settled on a combination of ideas that resulted in the original idea for SPOT; a cybernetic sculpture that loves its ball and would somehow ask for it back if it went missing. We were aware from the beginning that the technical scope of this concept would be multi-faceted and complex, especially in the limited timeframe, but we concluded that the project was ambitious enough that the execution of even a portion of this project would produce a product with a synergy of interesting concepts and technical accomplishments.

With this common trajectory in place, we started to think of possible components and materials we might need. From sketches we could determine that for a single, moving joint we would need a servo to be fastened onto a static and sturdy material while the servo’s “horn” would be connected to the next piece that would serve as the moving component. Other considerations included how SPOT would know that its ball is in possession, how SPOT would sense and recognize its ball, how could we imbue SPOT with a sense of personality and character?

prototype-inital-components-analysis-lores-1

Making of SPOT – Searching for a way to sense the world

We found tutorials online where Processing was used to compute sensory data, which was then sent to an Arduino to act on. Researching, testing and drawing from online tutorials like Tracking a ball and rotating camera with OpenCV and Arduino (AlexInFlatland, May 2013) and Face Tracking with a Pan/Tilt Servo Bracket (zagGrad, July 2011), we began to see the light at the end of the tunnel.

Some of the working code from the tutorials had to be tweaked and molded to meet our needs. Using Greg Borenstein’s great OpenCV for Processing library (Borenstein, n.d.), we ran into software compatibility issues that forced us to downgrade from Processing 3.0 to Processing 2.4 in order to take advantage of the computer vision library. We employed the library’s common face-detection algorithm, which Borenstein explains in one of his short Vimeo videos (Borenstein, July 2013).

Making of SPOT – Initial tests and experiments

Soon thereafter we would turn to the actuators. The servos in question would change SPOT’s field of view based on where a face would be detected. SPOT’s two-dimensional visual field would be altered by the repositioning of its head in three-dimensional space. Deciding to take things step by step, we started with basic initial tests with two servos set up within a pan/tilt bracket and controlled them using different potentiometers, one for the x-axis and one for the y-axis.
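That first test looked much like the classic two-potentiometer pan/tilt exercise; a minimal version, with illustrative wiring, is:

```
#include <Servo.h>

Servo panServo;   // bottom servo in the pan/tilt bracket
Servo tiltServo;  // top servo

void setup() {
  panServo.attach(9);    // illustrative pins
  tiltServo.attach(10);
}

void loop() {
  int panValue  = analogRead(A0);              // potentiometer for the x-axis
  int tiltValue = analogRead(A1);              // potentiometer for the y-axis
  panServo.write(map(panValue, 0, 1023, 0, 180));
  tiltServo.write(map(tiltValue, 0, 1023, 0, 180));
  delay(15);                                   // give the servos time to move
}
```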

From there, the next step was to experiment with face detection by implementing Borenstein’s OpenCV for Processing library examples; “FaceDetection” and “LiveCamTest”.

We also decided that SPOT could potentially recognize its very own ball by tracking an object with a specific colour reading. Jordi Tost’s HSVColorTracking – an example based on Borenstein’s aforementioned library – demonstrated a solution to this object-recognition problem (Tost, n.d.).

https://youtu.be/XaIqykLtTt4

Along the way, we had to keep in mind SPOT’s behavioural flow – when is it searching for faces, when is it doing the same for its ball, etc. – so as not to get too caught up on a single aspect.

prototype-v2-behaviour-storyb-lores-1

Making of SPOT – Version 1

At this moment in time, we had the different components working separately. The next step was to introduce them to one another by putting a USB webcam into the mix.

We went through some trial and error finding a webcam that was compatible with both the Mac and Processing, and ended up borrowing Marcus Gordon’s webcam.

Pulling through those issues, we got a working version 1 of SPOT detecting and tracking faces with the two servos!

https://youtu.be/FBnOscWJu-o

Making of SPOT – Version 2

After the thrill of having a version 1 working properly, we wanted to increase SPOT’s range of motion for a version 2. By increasing the distance from the panning X-axis to the tilting Y-axis, SPOT’s movements would be more noticeable and hopefully more memorable.

In the video linked below fellow classmate Marcelo Luft interacts with SPOT version 2, which replicates the facial detection and tracking of version 1, but includes one extra joint and a custom built apparatus.

https://youtu.be/OrEnsvVu1mw

It was also important to figure out and visualize in a 3-dimensional space where a viewer would see and interact with SPOT and its ball.

prototype-v2-shape-and-presentation-lores-1

 

prototype-v2-shape-and-presentation-lores-2

 

Making of SPOT – Final Presentation

Unfortunately, we ran into technical issues two days before our presentation. We realized time was running short and we needed a reliably finished product for the presentation. We decided to drop the entire ball-following narrative and focused instead on making the robot detect faces and move around with the most realism and fewest bugs possible. The final product moved smoothly during its random movements and recognized faces reliably.

When our classmates arrived for class, we had set up SPOT as the primary focus point of the room with a friendly message behind it encouraging people to interact with it.

presentation-prototype-2

presentation-prototype-1

The video linked below shows some of the students’ very interesting responses to meeting SPOT in person. To our delight, students commonly described SPOT’s random movements and face-tracking in anthropomorphic terms. Students tried to get SPOT’s attention and, when they failed, interpreted the robot’s behaviour personally. This project and experiment revealed some of the fundamental properties of interactive objects that can create the illusion that an object is living and responding personally to a user.

Although not as ambitious as our original behaviour idea for SPOT, considering the fact that our group is a collection of newcomers to dynamic cybernetic sculpture, our version that we went with is quite the feat – in our not-so-humble opinion. And who knows, in time, perhaps SPOT will get bored of faces and will insist on getting something more geometrical. A colourful ball perhaps?


Research Project Group 6: Egill R. Viðarsson, Ling Ding, Michael Carnevale, Xiaqi Xu


References

AlexInFlatland (May 16, 2013). Tracking a ball and rotating camera with OpenCV and Arduino. Youtube. Retrieved on October 29, 2015, from https://www.youtube.com/watch?v=O6j02lN5gDw

Borenstein, Greg (July 8, 2013). Face detection with OpenCV in Processing. Vimeo. Retrieved on November 10, 2015, from https://vimeo.com/69907695

Benthall, Jonathan (1971). Science and Technology in Art Today. Thames and Hudson. London. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/articles/ihnatowicz%20brochure.pdf

Borenstein, Greg (n.d.). OpenCV for Processing. Github. Retrieved on November 6, 2015, from https://github.com/atduskgreg/opencv-processing

Ihnatowicz, Edward (n.d.). cybernetic art – A personal statement. Senster.com. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/articles/ihnatowicz%20brochure.pdf

interactivearch (July 15, 2008). SAM – Cybernetic Serendipity. Youtube. Retrieved on October 10, 2015, from https://www.youtube.com/watch?v=8b52qpyV__g

interactivearch (January 12, 2008). The Senster. Youtube. Retrieved on October 10, 2015, from https://www.youtube.com/watch?v=1jDt5unArNk

Lewis, Jacob (August 30, 2012). How the Tate Brought a Pioneering Art-Robot Back Online. Gizmodo. Retrieved on October 12, 2015, from http://www.gizmodo.co.uk/2012/08/how-the-tate-has-brought-a-pioneering-art-robot-back-online/

Reffin-Smith, Brian (1985). Soft Computing – Art and Design. Computing. Addison Wesley. London. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/articles/ihnatowicz%20brochure.pdf

Reichardt, Jasia (1972). Robots: Fact, Fiction, Prediction. Thames and Hudson. London. Retrieved on October 10, 2015, from http://www.senster.com/ihnatowicz/articles/ihnatowicz%20brochure.pdf

Tost, Jordi (n.d.). HSVColorTracking. Github. Retrieved on November 6, 2015, from https://github.com/jorditost/ImageFiltering/blob/master/SimpleColorTracking/HSVColorTracking/HSVColorTracking.pde

zagGrad (July 15, 2011). Face Tracking with a Pan/Tilt Servo Bracket. Sparkfun. Retrieved on October 29, 2015, from https://www.sparkfun.com/tutorials/304

Zivanovic, Aleksandar (n.d.). Senster – A website devoted to Edward Ihnatowicz, cybernetic sculptor. Retrieved on October 10, 2015, from http://www.senster.com/

Zivanovic, Aleksandar (2005). The development of a cybernetic sculptor: Edward Ihnatowicz and the senster. Researchgate.com. Retrieved on October 12, 2015, from http://www.researchgate.net/publication/221629713_The_development_of_a_cybernetic_sculptor_Edward_Ihnatowicz_and_the_senster

GOLF Project – Shenzhen Bat, The Line Following Autonomous Robot

 

shenzenbatcover

Vision evolved

The Name


The bot was named after Shenzhen, the famous technology city in southern China known for hosting a great number of companies that manufacture electronic parts, including ones we used in our project, such as the DC motors and sensors. The robot also carries “Bat” in its name for a very simple reason: like a bat, it uses ultrasonic sensors to detect walls and barriers, and it is mounted on a beautiful, shiny black chassis.

The Flow


Stage 0 (Start)

  • Student puts the ping pong ball within the arms of the robot
  • Volunteer orients the robot towards any direction at the starting position.

Stage 1

  • Robot finds the correct direction to drive towards
  • Robot drives until track is identified

Stage 2

  • Robot drives following the line
  • Robot reaches / identifies destination or ending area

Stage 3 (End)

  • Robot stops driving
  • Robot releases the ping pong ball
  • Robot drives backwards slightly

The Design Details


shenzenbatcover2

The robot is designed to follow a black line on a white plane. 

Initial Route Finding

  • When the robot initializes it uses an ultrasonic sensor to determine if an obstacle is in front of it. If obstacles are detected the robot spins rightwards until it finds an open route.
  • After a route is found the robot begins the calibration process. 

Line-following Algorithm

  • When the robot is oriented towards an open route it begins to calibrate its line-detecting algorithm by reading black/white values from the six sensors and rotating left and right to get a range of samples.
  • After calibration is complete, the robot identifies the black line and starts moving forward.
  • The robot continuously calculates an “error” value, which represents the black line’s position relative to the robot (i.e., whether the line lies to the robot’s left or right).
  • The robot uses the “error” value to determine how much power to send to the left and right motors so that it stays on, or finds its way back to, the track. For instance, if the left and right motor powers are written as a tuple (60, 0), the robot turns right, since the left motor runs at 60 while the right motor stops. A minimal sketch of this steering logic is shown after this list.
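The sketch below is a simple proportional controller, not our full program; the pins, base speed and gain are illustrative, and in the real robot the error comes from the line sensor array rather than a fixed example value.

```
const int LEFT_PWM_PIN = 5;     // PWM inputs of the motor driver (illustrative)
const int RIGHT_PWM_PIN = 6;
const int BASE_SPEED = 30;      // nominal power for both motors (0-255, illustrative)
const float KP = 0.04;          // proportional gain, tuned by trial and error

// error < 0: the line is to the robot's left; error > 0: it is to the right.
void steerFromError(int error) {
  int correction = (int)(KP * error);
  int leftPower  = constrain(BASE_SPEED + correction, 0, 255);
  int rightPower = constrain(BASE_SPEED - correction, 0, 255);
  analogWrite(LEFT_PWM_PIN, leftPower);
  analogWrite(RIGHT_PWM_PIN, rightPower);
}

void setup() {
  pinMode(LEFT_PWM_PIN, OUTPUT);
  pinMode(RIGHT_PWM_PIN, OUTPUT);
}

void loop() {
  // With these example numbers, an error of 750 gives the (60, 0) case above:
  // the left motor runs at 60, the right motor stops, and the robot turns right.
  steerFromError(750);
  delay(50);
}
```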

Goal Detection

  • If, during the line-following routine, the robot detects an obstacle in front of it using the ultrasonic sensor, it comes to a halt. The robot then lifts its arms to drop the cargo (i.e., the ping pong ball) and slowly drives backwards for a moment before stopping completely.

Schematic Design

The following diagram depicts the wiring of the robot and the components used. The second ultrasonic sensor is not connected, as it is only used when ball capturing is enabled.

The schematic diagram of Shenzhen Bat, drawn using Fritzing.

The Design Process


ShenzhenBatDesignProcessDiagram

This diagram describes the five main areas we explored in building Shenzhen Bat: control, motors, sensors, chassis design, and the actual coding. It shows where we changed ideas, which components didn’t work out, and which components and functions made it into the finished robot. More details of our work and thought process are introduced in the following sections.

The Code


We programmed using the Arduino IDE and the Processing IDE and managed versions using GitHub, creating multiple branches for different features. What we used for the demonstration was our “plan B” – a single ultrasonic sensor for wall detection, without ball capturing. Our “plan A” was to place a second ultrasonic sensor close to the ground so that it would detect the ball, trigger the arm movement to “capture” it, and then resume driving.

The code references a variety of library examples as well as the 3pi Robot PID control example, but the main logic was written by us based on the requirements of the GOLF project. We tried to write modular code by putting each particular sequence of actions into a single helper function. We also added a DEBUG mode for printing useful information to the serial console, which really helped us troubleshoot and understand the robot’s state.
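The repository itself is not reproduced here, but the DEBUG convention amounted to a compile-time switch roughly like the one below (the macro names are our own, for illustration):

```
#define DEBUG 1   // set to 0 to strip all serial logging from the build

#if DEBUG
  #define DEBUG_PRINT(msg) Serial.println(msg)
#else
  #define DEBUG_PRINT(msg)
#endif

void setup() {
  Serial.begin(9600);
  DEBUG_PRINT("Shenzhen Bat: starting calibration");
}

void loop() {
  // Helper functions wrap a particular sequence of actions, and each one can
  // report its state without cluttering the main logic.
  DEBUG_PRINT("state: line following");
  delay(1000);
}
```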

The code samples we referenced are linked in the relevant revision sections below.

Revisions


DC Motors + QTR-8A ANALOG LINE SENSOR ARRAY

Initially, we used the cheapest 5V 83RPM DC motor sets for the robot’s movement. The algorithm we used was taken from this tutorial, which uses the same sensor array and two continuous servo motors. We also used a DUAL MOTOR DRIVER to control the motors.

After we adjusted the code and connected the wires, the robot moved for the first time and followed a simple route. But somehow one of the readings from the sensor array was peaking irregularly, causing the robot to go off the track.

Another roadblock we weren’t aware of was that our two DC motors were running at different speeds, even with the same voltage applied to both. Our potential solutions to this problem were to switch from the QTR sensor to IR sensors, manually calculate and apply a voltage offset to each motor, or modify the sensing/driving algorithm.

Code sample used: http://diyhacking.com/projects/DIY_LineFollower.ino

DC Motors + 5 IR Sensors Rev1

We made our own “sensor array” using five IR sensors. We got more stable readings from the sensors, but the robot was still unable to manage sharp turns and tended to overshoot. Another problem was heavy consumption of jumper wires and resistors. We used a full breadboard for hooking up all the wires. We eventually decided to revert back to the more compact QTR sensor (see below). 

DC Motors + 5 IR Sensors Rev2

We found a better algorithm in another tutorial. The algorithm seemed to handle very sharp turns at high speed using some custom error-correction mechanisms. More specifically, it converted the sensor readings to a binary representation and then determined the relative power to each motor by looking up a self-defined mapping between binary values and error-adjustment values. After we incorporated the new algorithm into our robot, it tended to swerve off course to the right. At that point it was clear that the speeds of the two motors were different when given the same power, and the algorithm alone was not enough to compensate. We suspected that either there wasn’t enough power for the motors or the chassis was too heavy and imbalanced for them. These hypotheses led to the next iteration.

Code sample used: http://42bots.com/competitions/arduino-line-following-code-video/

METAL GEAR MOTOR – 6V 100RPM + 5 IR Sensors

We bought a new set of motors with higher RPM and greater torque. Since the motors could handle more weight, we laser-cut an acrylic chassis, updated the design, and relocated all components onto it. The results were, however, disappointing, and the robot was still unable to reliably follow the line. We discovered that (1) since the motors were drawing power from the Arduino, they were not getting enough voltage, and (2) the algorithm was still not robust enough. We then discovered other existing libraries for QTR sensor arrays which used what is referred to as a PID control algorithm. We looked into different implementations of PID control, such as the Arduino PID library, the AutoTune library, this tutorial, and eventually the most useful one, which the 3pi Robot uses. At this point, we decided to switch back to the QTR sensor array with a new implementation of PID control.

METAL GEAR MOTOR – 6V 100RPM + QTR-8A ANALOG LINE SENSOR ARRAY

We made a large improvement to the mechanical design of our robot by creating a sensor-array holder at the front and adding a front wheel for smoother turns. We drew many insights on motor selection, battery selection, chassis material, and the PID algorithm from this guide.

With the adoption of the libraries, the sensor array produces an “error” value that can be used for the PID computation. Trial after trial, tweak after tweak, the robot followed the line and made good turns for the first time! With this breakthrough, we felt comfortable working on track visualization, wall detection, bluetooth positioning, ball capture/release mechanisms, and a refined mechanical design.
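Our full sketch is not included in this post; the core of a PID loop in the style of the 3pi example, written against the Pololu QTRSensors library’s older QTRSensorsAnalog interface, looks roughly like this. The pin assignments and gains are illustrative, not our tuned values.

```
#include <QTRSensors.h>

// Six reflectance sensors on analog inputs 0-5 (older QTRSensorsAnalog API).
QTRSensorsAnalog qtra((unsigned char[]) {0, 1, 2, 3, 4, 5}, 6);
unsigned int sensorValues[6];

const int LEFT_PWM_PIN = 5;     // PWM inputs of the motor driver (illustrative)
const int RIGHT_PWM_PIN = 6;
const int BASE_SPEED = 120;
const float KP = 0.05;          // proportional gain (illustrative)
const float KD = 1.0;           // derivative gain (illustrative)
int lastError = 0;

void setup() {
  pinMode(LEFT_PWM_PIN, OUTPUT);
  pinMode(RIGHT_PWM_PIN, OUTPUT);
  // Calibrate for a few seconds; the robot rotates over the line during this
  // phase (the drive code for that sweep is omitted here).
  for (int i = 0; i < 200; i++) {
    qtra.calibrate();
  }
}

void loop() {
  // readLine() returns 0-5000; 2500 means the line is centred under the array.
  int position = qtra.readLine(sensorValues);
  int error = position - 2500;

  int correction = KP * error + KD * (error - lastError);
  lastError = error;

  analogWrite(LEFT_PWM_PIN, constrain(BASE_SPEED + correction, 0, 255));
  analogWrite(RIGHT_PWM_PIN, constrain(BASE_SPEED - correction, 0, 255));
}
```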

LINE FOLLOWING + WALL / BALL DETECTION

The first working model used obstacle avoidance with ultrasonic sensors for initial direction correction as well as endpoint detection. However, there was a problem with differentiating between the wall and the ball. Our attempted solution was to place two ultrasonic sensors one above the other. The top sensor would detect walls and obstacles while the bottom sensor would detect the ball and trigger a temporary stop for the arms to be lifted and lowered.

In our control logic we enabled the sensors at different times so that the robot would capture the ball while driving and release it at the endpoint. One issue we ran into was the lag in each loop caused by the two ultrasonic sensors, which interfered with the timing of the PID control. To make sure it wasn’t a hardware failure, we tested the ultrasonic sensor’s sight range for distance, the vertical sensing height of its cone, and the tilt angle of the wall (as the ultrasonic pulse could have been bouncing back from the ground ahead).

TestingDistance2

The robot occasionally succeeded, with both sensors enabled, in driving from start to finish and correctly releasing the ball. But due to the low success rate, we chose to take the bottom sensor out. This became our “Plan B”, in the sense that the ball would not be captured by the robot but rather set within its arms by the user. We also tried to develop bluetooth-to-bluetooth communication in order to point the second ultrasonic sensor at the robot itself, from an external beacon.

Another issue we had with using two ultrasonic sensors was the lack of digital pins on the Arduino. We considered using a shift register for pin extension, but the problem was easily fixed by tying each sensor’s ECHO and TRIGGER pins together and driving them from a single pin, as sketched below.
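A minimal sketch of that single-pin trick, using only core Arduino calls (the pin number is illustrative): the pin is driven as an output to send the trigger pulse, then switched to an input to time the echo.

```
const int PING_PIN = 7;   // TRIG and ECHO tied together on one pin (illustrative)

long readDistanceCm() {
  // Send the 10 microsecond trigger pulse.
  pinMode(PING_PIN, OUTPUT);
  digitalWrite(PING_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(PING_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(PING_PIN, LOW);

  // Switch the same pin to input and time the echo pulse.
  pinMode(PING_PIN, INPUT);
  long duration = pulseIn(PING_PIN, HIGH, 30000);  // time out after ~30 ms
  return duration / 58;                            // ~58 us per cm, round trip
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(readDistanceCm());
  delay(100);
}
```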

LINE FOLLOWING + BALL CAPTURE/RELEASE 

Our initial idea was to create a launcher to shoot the ball to the target/destination. One way to do this would be to bring two DC motors close to each other and have them rotate in opposite directions: if the ping pong ball made contact with the two spinning wheels, it would be shot forward or sucked in. This method, however, adds too much weight to the robot and is inefficient, costing more power and more digital pins.

We also attempted to capture the ball using fans to pull the ball toward the robot. We discovered that by overlapping multiple fans we increased their ability to pull in or push away objects. We chose not to pursue this method however due to a lack of precision and concerns for reliability. We settled on what we considered to be the most straightforward method and used two servo motors to create a moving frame, or “arms”, for capturing. This mechanism required the least resources, was robust, and was a good mechanistic exercise. The robot proved to have no problems carrying a ball with the arms.

LINE FOLLOWING + VISUALIZATION WITH PROCESSING 

Once we achieved proof of concept with the PID motor control and successful sensing functions, we decided to begin research and development into visualizing the robot’s movement path and behaviour using Processing and serial communication. This part of the project made sense considering the next project in the course would focus on serial communication and Processing. The Processing code is custom designed and developed.

Getting serial communication was a bit of a hurdle considering we had never worked with it before. Getting the serial information from Arduino onto Processing took some work but there are good tutorials for beginners online.

We have uploaded the Processing and Arduino code onto GitHub. It has currently been set up as a simulation so anyone can run it and see it (the serial communication code has been commented out so people can refer to it).  Every line of the Processing code has been commented for easy use and understandability. The most important functions we learned through this exercise in Processing were the translate() function (which moves a drawn object), the rotate() function (which rotates an object about its origin point based on specific angle values), and the pushMatrix/popMatrix() functions (which control which objects are affected by outside operations like the translate() and rotate() functions).

The visualization basically worked by getting commands from Arduino which were used to update the Processing algorithm. The basic commands were “drive left”, “drive right”, “drive straight”, “calibration mode”, and “finale mode”. Depending on the commands the visualization would draw the car moving left, right, straight, or show text on the screen saying “Calibration!” or “Release the ball!”. A downloaded image of a world map was also loaded and pasted onto the center of the Processing screen. THE SIMULATION WILL NOT WORK UNLESS YOU DOWNLOAD THE IMAGE. OTHERWISE, COMMENT OUT ANY LINES THAT REFERENCE “image” or “img”.

We decided to send the serial data from Arduino onto Processing using bluetooth rather than USB. Some tips regarding this process include (1) you can’t upload new sketches into Arduino through USB if the bluetooth is plugged in and (2) the TX and RX communication pins on the bluetooth must be connected to the RX and TX pins on the Arduino, respectively.

If you want to send multiple values to Processing from Arduino you have to do a simple workaround. Arduino sends data one byte at a time and Processing must fill an array one byte at a time before it is done one iteration of communication. A useful tutorial for beginners can be found here (https://www.youtube.com/watch?v=BnjMIPOn8IQ).
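On the Arduino side, the simplest form of that workaround is to send a fixed-length packet, one byte per value, so Processing knows when a complete update has arrived. The command codes below are placeholders rather than our exact protocol.

```
// Example packet: [command, leftPower, rightPower], one byte each.
// Command codes are illustrative: 'L' = drive left, 'R' = drive right,
// 'S' = straight, 'C' = calibration mode, 'F' = finale mode.
void sendStatus(char command, byte leftPower, byte rightPower) {
  Serial.write(command);
  Serial.write(leftPower);
  Serial.write(rightPower);
}

void setup() {
  Serial.begin(9600);   // the bluetooth module simply mirrors this serial port
}

void loop() {
  sendStatus('S', 120, 120);   // Processing fills a 3-byte buffer per update
  delay(100);
}
```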

ROTATION SPEED WITH SPEED ENCODER

As described above, our motors were rotating at different speeds despite identical voltages being passed through each. We initially thought we could solve this by offsetting the voltage sent to each motor in our code. To precisely determine rotation speeds of each motor in relation to input voltage we built a speedometer. We did not use this solution as the PID algorithm compensated for this offset. More details on the rotary/speedometer device can be found in Michael Carnevale’s Daily Devices project in the course records.

ALTERNATIVE PLAN – BLUETOOTH DISTANCE FINDING AND BEACONS 

When we first managed to get the robot to follow the line, and realized we needed error correction, we started exploring the possibility of wireless navigation. We looked into GPS modules but couldn’t use them effectively indoors. We then explored bluetooth RSSI values. They’re typically used to express signal strength on cellphones and laptops. But since RSSI varies with the distance from one bluetooth module to another, it looked like a viable option to locate the robot without relying on sensor vision. The idea was to position 3 modules in a triangle around the car, and a fourth on the car, to imitate how satellites triangulate outdoor GPS units.

Exploring RSSI, we found out that the values returned are never as linear in the real world as they are in ideal conditions. Every bluetooth device, wall, and wireless obstruction adds noise to the values. They’re reliable within 0–10 m, but vary too closely to one another past 10 metres, and can still give false readings within 10 metres. This meant that they weren’t reliable enough for triangulating 3 bluetooth modules. They can be used for a basic increasing/decreasing polarity of distance (i.e. “are we closer to the goal, or further away?”), but the reliable range of values requires a course at least 10 metres long to vary across. Our test course was half a metre.

Wifi signal strength seemed like another alternative for distance finding, but suffered from the even worse issues of wireless interference and unreliability.

Pozyx, a Kickstarter company, promised an accurate indoor bluetooth positional tracking system. With anchors around the walls, their trilateration system promises centimetre accuracy, but they weren’t releasing their product until December 2015.

ShenzhenBatAlt2

The last bluetooth locating option to explore was an external beacon. We used this tutorial (link) and its sample script to try to set up a pair of beacons that could communicate with the bluetooth module on the robot. The script added a level of complexity, by letting each module have a fluid changeable role (master or slave), so the protocol could be reused for the Tamagotchi project. Detecting roles and issuing AT commands dynamically, at potentially different baud rates, proved too unreliable, and so we moved on to pre-setting each module with an assigned role. Later iterations will look at better modules that allow direct setting of master/slave roles.

Final Notes


On the day of the presentation, our robot did not quite finish the second course even though it had done so successfully in past tests. We started to wonder whether this was caused by the PID control logic, the batteries, or the surface it drove on. We had not been able to test it thoroughly with extreme cases and in different environments. In fact, we did quite a few drive tests with black tape on a light wooden table and with a white poster on a dark wooden floor and a light wooden table, but not on the uneven carpet. We had experienced low batteries before, but not interference with the bluetooth signal (used for the visualization).

Overall, I still think we covered a good range of conditions and made reasonable design choices on control logic, materials, and parts. Some improvements I would consider include further tweaking of the PID constants, reducing the lag caused by the ultrasonic sensor, synchronizing the speed of the two motors programmatically, and finding a more effective power source.

Credits


ShenzhenBatCrew
Photo taken by Hammadullah Syed on November 5th 2015

These names will be written in the history of autonomous robotics forever.

  • Alex Rice-Khouri
  • Davidson Zheng
  • Marcelo Luft
  • Michael Carnevale

Robot Chicken Auto Vehicle


Yashodha, Janmesh, Hamster and Andrew

Our goal was to create a line-following autonomous vehicle that could also drop a ball when confronted with an obstacle. Although we have yet to have our robot chicken successfully drive on its own, we have learned a great deal.

For a visual walkthrough of our process, please watch the following video:

Q8 Array Sensor & Continuous Servo Motors
We used a Q8 array contrast sensor as our method of line following, driving two 360° continuous servos. Our original method was to use the sensor’s analog output; however, after much difficulty figuring out the equations in the code, our team decided to research another method of input for the sensor. As a result we found a digital approach that was much easier for us to understand and implement on our vehicle. Although the digital option was easier to understand, we faced a challenge in getting our vehicle to process the commands for stopping and dropping the ball while in motion.

We had great difficulty getting both of our continuous servo motors to run at the same speed. Our solution came from accidentally finding a dial on each motor that increases or decreases the servo’s speed depending on which direction it is turned.

Sonar and Servo (Ball drop)
Using a servo triggered by a sonar sensor, we were able to achieve a ball-drop technique that fit our concept of a chicken laying an egg upon confronting an obstacle. We added a two-second delay for comedic timing, as having all the actions happen at once felt too sudden.
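Our full sketch is linked below; the core of the egg-drop behaviour can be sketched like this, with illustrative pins, distances and servo angles.

```
#include <Servo.h>

const int TRIG_PIN = 7;         // HC-SR04 style sonar (illustrative pins)
const int ECHO_PIN = 8;
const int STOP_DISTANCE_CM = 15;

Servo trapdoorServo;            // holds the "egg" (ping pong ball)

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);
  return duration / 58;
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  trapdoorServo.attach(9);
  trapdoorServo.write(0);       // trapdoor closed
}

void loop() {
  long distance = readDistanceCm();
  if (distance > 0 && distance < STOP_DISTANCE_CM) {
    // Obstacle ahead: the drive motors would be stopped here, then we wait
    // two seconds for comedic timing before "laying the egg".
    delay(2000);
    trapdoorServo.write(90);    // open the trapdoor and release the ball
  }
  delay(60);
}
```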

Link to code: https://docs.google.com/document/d/1euXkkahhhpXUAaspoCcHJeBNePOgM1hQ9esgewygzt4/pub

Fritzing Diagram: 

Screen Shot 2015-11-11 at 9.47.34 PM

Golf Project Alcohol MVP

The objective of creating a self-driving/autonomous robot that would take a ping-pong ball from point A to point B was successfully accomplished. Our group began with the idea of creating a car that followed a trail of wine/alcohol to its desired location.

IMG_2952

The first problem we ran into was that the alcohol sensor could detect alcohol but could not detect where the alcohol was coming from. We therefore decided to use a threshold that would trigger the car to shut off once that “smell” was detected: rather than using the alcohol to guide the car, it was used to stop the car over the target.

robot-overview-diagram


To differentiate ourselves from most of the other groups, we went with the idea of navigating the course using a colour sensor. When a particular threshold (that is, a colour, in this case red) was met, the program told the car to steer in the desired direction. If the car turned too much and went off course, a contrasting colour (yellow) steered it back onto the course. Digging deeper, the two colours have different colour temperatures. We tested the colours to translate them into corresponding numerical values: yellow read lower than 2,400 while red read higher than 35,000. With those values we created thresholds, for example: when the colour sensor detects anything < 2,400, turn the robot to the left; when it detects anything > 35,000, turn the robot to the right. When those thresholds were met, the car reacted. In a nutshell, the colours were used as barriers and borders helping to keep the car within the playing field.
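We have not reproduced our exact sketch here, but the threshold logic can be illustrated with a TCS34725-style colour sensor and Adafruit’s library (our assumption about the part; the turn helpers are stubs standing in for the motor code):

```
#include <Wire.h>
#include "Adafruit_TCS34725.h"

// Assumes a TCS34725 colour sensor on I2C and Adafruit's library.
Adafruit_TCS34725 tcs(TCS34725_INTEGRATIONTIME_50MS, TCS34725_GAIN_4X);

const uint16_t YELLOW_MAX_TEMP = 2400;    // yellow reads below this
const uint16_t RED_MIN_TEMP = 35000;      // red reads above this

void turnLeft()  { /* skew the motors left  (motor driver code omitted) */ }
void turnRight() { /* skew the motors right (motor driver code omitted) */ }

void setup() {
  Serial.begin(9600);
  tcs.begin();
}

void loop() {
  uint16_t r, g, b, c;
  tcs.getRawData(&r, &g, &b, &c);
  uint16_t colorTemp = tcs.calculateColorTemperature(r, g, b);

  if (colorTemp < YELLOW_MAX_TEMP) {
    turnLeft();        // drifted onto the yellow boundary: steer back left
  } else if (colorTemp > RED_MIN_TEMP) {
    turnRight();       // drifted onto the red boundary: steer back right
  }
  // otherwise: keep driving straight
}
```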

IMG_2963

When red was detected the car would turn right; when yellow was detected the car would turn left. The car’s motors were cheap and presented our group with many problems related to the discrepancy in their speeds. To counter this, we added a potentiometer to control the slower wheel’s speed and match it to the faster wheel’s, helping the car steer straight.

Course 1 James

The following is a detailed micro-perspective on our procedure:

Path to Autonomy/Lessons Learned

Initial Approach

Precision: In our initial ideation sessions, our goal was to achieve precision in the way the robot navigated.  We wanted it to travel a very linear, unwavering path.

To achieve this we assumed we would need:

  • Beacons placed throughout the course at corners and key positions that the robot could detect and move toward
  • The beacons would act as broadcasters of pings, in the form of sounds, colours, or lights, and the robot would be equipped with sensors that could detect the pings
  • The robot would need a mechanism, such as a rotating turret, that could locate the beacons, then move toward them
  • The basic workflow for the robot would be to: a) locate a beacon, b) move toward the beacon, c) stop when close to a beacon, d) repeat the process until the final destination is reached

This raised some initial concerns and requirements:

  • The final destination beacon would need to signify itself as being different than other beacons; it should signify that the course is complete
  • What would happen if the robot couldn’t locate a beacon? Would it be perpetually frozen or wander aimlessly?
  • What if the robot went off course? Would it be aware of this?  How would it regulate itself?
  • How many towers would be required? (We did not know what the third course looked like, so we would have to guess and/or over-prepare.)

 

Next Step – Investigation

With our general approach decided, our next step was to investigate sensors and determine which ones would get us to our goal.  We looked at each in the following order:

Laser:

  • These can be very precise. However, they require that a receiver be placed on the beacons, and the receiver is very tiny.  It seemed highly impractical to go this route.  The margin for error was way too large.

Sound:

  • Sound emitters and receivers seemed like a good approach. They lent themselves nicely to the pinging tower concept we were addressing.
  • The towers could emit distinct sounds. We believed we could use this such that the beacons emitted distinct “left”, “right”, and “stop” sounds.  The robot would be equipped with a receiver and know how to react to the sounds.
  • However, after an initial prototype, we determined that this approach would not work.
  • The sound sensors are very good at detecting the presence of sound, but not its location. If you clap in a room, the sound can be detected, but it appears to originate from everywhere in the room, not from a single person.
  • The sound sensors are also not good at detecting particular sounds. They can be used to detect intensities/thresholds of sounds, but if two dissimilar sounds have similar intensities, it is very difficult to determine which is which.
  • Workarounds were explored. For instance, we could have erected walls around the course, have the robot ping a single sound, and then using a sound sensor on both its left and right sides, detect how centered it is within two walls.  We would determine this by how quickly the ping returns on the left relative to the right, or by how intense the returned sound is on the left or right.
  • The workarounds seemed overly complex, and presented more challenges, so sound was abandoned as a possible solution.

Cameras:

  • We considered using a camera along with the Image Tracking capabilities of most Augmented Reality kits
  • Images or shapes could be placed around the course. The robot could find and move toward them.
  • The solution required at minimum a reliable Bluetooth connection to a computer or mobile device that would act as the brain.
  • It could have worked, but it was overly complex, and we wanted to keep the brain of the robot self-contained on the Arduino board, so we abandoned this investigation as well.

Alcohol:

  • The notion of using a gas or alcohol sensor appealed to us because it seemed very unique
  • Our initial thought was that we could use alcohol like an imaginary line drawn on the course from start to finish. The robot would effectively sniff around for the line, then follow it to the finish line.
  • This re-raised some of our initial concerns about going off course and self-correction
  • Moreover, after we built a prototype, we quickly learned that the sensors are great at detecting alcohol, but they work in “one off” shots. Once alcohol is detected, the sensor needs to be removed from the source of alcohol to reset (sober up) itself before being used again.
  • We also noticed that the sensor needs to be very hot to work properly.
  • However, we were determined to use the sensor in our design, so we decided to use it as the “stop” marker on the finish line target.

Proximity/Ultrasonic Sensors:

  • All along we assumed we’d use proximity sensors, either to stop the robot when it is close to a beacon, or to erect walls around the course and use the sensors as bumpers. If the robot was close to a wall, it would stop, reverse for a bit, turn, and then continue moving forward.
  • We built a prototype with this, and were successful. At this stage we had our first working autonomous robot!
  • However, we still needed a way for the robot to know that it should turn in a particular direction. We considered using 2 proximity sensors, putting one on the left and the other on the right.  If the robot detected a collision on its right, it would know to turn left, and vice-versa.

Colour:

  • We wanted a robot with more intelligence than the simple bumping technique used with proximity sensors.
  • Initial prototyping with a colour sensor showed that it was quite accurate and responsive.
  • We thought we could therefore use the proximity sensor along with walls to detect a boundary, and place coloured paper near those boundaries. Once the robot detected a wall, it could read the colour underneath and know to turn left or right or stop.
  • It very soon thereafter occurred to us that there was a simpler, more flexible approach. Namely, we could abandon the walls and the proximity sensors, and instead place coloured paper on the courses to act as imaginary walls.
  • We built an initial prototype. It showed promise, and we selected colour detection as our path to success!

 

Successful approach and its Struggles

After we had chosen colour detection as our means of navigation and alcohol as our means of stopping, we began integrating all of the components, testing, and refining our designs.

This presented several obstacles that we had to overcome.

Colour sensors fluctuate greatly:

  • We initially read raw R, G, B values, but very quickly discovered that these fluctuate dramatically. Reading these values produces nothing more than noise – especially when the position of the sensor changes in relation to a colour, and under differing lighting conditions.
  • Next, we tried to look for spikes in the individual R, G, B values. That is, instead of looking for a particular R, G, B value, we looked for spikes on each of the channels.  This approach was an improvement, but was still not very accurate.
  • Our third approach was to look for colour temperature. This worked well, as the sensors are able to fairly accurately detect thresholds of colour temperatures.  We found that red, blue, and yellow colour temperature thresholds were very easy to detect.
  • However, when this approach was applied to a moving robot, it stopped working roughly 30% of the time. The movement of the robot was more than the colour sensors could handle.  This could be due to subtle variations in colours across a path.
  • After some careful analysis of the sensor output, we realized that when the variable “c” (the Clear value) changed, it was a good indicator of a significant detection. Specifically, the sensor can signify a change in colour temperature, but not always signify a change in “c”.  When “c” drops below its standard input, a “true” colour temperature reading has occurred.
  • We therefore used “c” as a flag. If, and only if, the reading on “c” changes, the robot responds to the colour temperature reading.  This approach reduced false positive readings dramatically.  Our robot was now fully capable of detecting colours – even while moving! (A minimal sketch of this gating logic follows after this list.)
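In code, that flag amounts to a small gate around the colour reading; a minimal sketch, again assuming a TCS34725-style sensor (as in the earlier example) and an illustrative drop threshold:

```
#include <Wire.h>
#include "Adafruit_TCS34725.h"

Adafruit_TCS34725 tcs(TCS34725_INTEGRATIONTIME_50MS, TCS34725_GAIN_4X);

uint16_t baselineClear = 0;      // "standard" clear-channel reading over the open course
const uint16_t CLEAR_DROP = 200; // how far c must fall to count as a real hit (tuned)

void setup() {
  Serial.begin(9600);
  tcs.begin();
  uint16_t r, g, b;
  tcs.getRawData(&r, &g, &b, &baselineClear);   // sample the open course once
}

void loop() {
  uint16_t r, g, b, c;
  tcs.getRawData(&r, &g, &b, &c);
  if (baselineClear > c && (baselineClear - c) > CLEAR_DROP) {
    // Only now do we trust the colour temperature and react to it.
    uint16_t colorTemp = tcs.calculateColorTemperature(r, g, b);
    Serial.println(colorTemp);
  }
}
```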

Lessons Learned

  • Simplicity, and more leniency in the robot’s path, work best.
  • The use of colour detection as a means of detecting boundaries meant that:
    • The robot could move off a linear path, but still always move in the right general direction. It would always be able to regulate/correct itself (as long as the course is completely encased in colours).  This is achieved because if it runs into (on top of) a colour wall, it will always know the correct direction to turn in order to move back on the path that takes it to the target.
    • No extra mechanisms such as rotating turrets were required on the robot, simplifying the design
    • No extra walls or beacons were required on the course, just colour.
  • Our approach meant that the robot could adapt to a lot of different courses right away without having to make any changes to the logic and/or the positioning of beacons/walls on the course.
    • All that is required is to make sure there is a left coloured imaginary wall and a right one, and that the course is encased in colour. The robot will always be able to keep moving left and right until it reaches its destination.
  • Turning based on time is problematic
    • In our implementation, when the robot makes a turn, it runs its wheels in opposite directions for 500 milliseconds.
    • The problem here is that batteries drain quickly and deliver varying current, which means that the speed of the motors fluctuates.
    • This can result in over-turning and under-turning.
    • In future implementations, turns should produce exact angles through normalization. This could be achieved by computing rotations, or by using speed/voltage as a multiplier against the time of rotation (see the sketch below).
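
As a rough illustration of the speed/voltage-multiplier idea, the sketch below scales the turn time by a battery voltage read through an assumed divider on A0; the pin, divider ratio and nominal values are all placeholders.

```cpp
// Rough sketch of scaling turn time by measured battery voltage. Assumes the
// battery is read through a 1:1 voltage divider on A0 and that 500 ms produces
// the desired turn at a nominal 6.0 V; all numbers are placeholders.
const int BATTERY_PIN = A0;
const float DIVIDER_RATIO = 2.0;        // assumed 1:1 divider
const float NOMINAL_VOLTS = 6.0;
const unsigned long NOMINAL_TURN_MS = 500;

float readBatteryVolts() {
  int raw = analogRead(BATTERY_PIN);
  return (raw / 1023.0) * 5.0 * DIVIDER_RATIO;
}

unsigned long normalizedTurnMs() {
  float v = readBatteryVolts();
  if (v < 1.0) v = NOMINAL_VOLTS;       // ignore a disconnected divider
  // Lower voltage -> slower motors -> turn a little longer, and vice versa.
  return (unsigned long)(NOMINAL_TURN_MS * (NOMINAL_VOLTS / v));
}

void setup() { Serial.begin(9600); }

void loop() {
  Serial.println(normalizedTurnMs());   // turn duration to use right now
  delay(1000);
}
```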

PROJECT GOLF: I.R. Sensor Autonomous Arduino Car

Introduction

The assignment was to create an autonomous artifact that transports a ping pong ball from point A to point B through 2 different courses, following specific directions.

To achieve this goal, infrared receiver/emitter technology is used to actuate an Arduino-based car. The car receives IR signals emitted by beacons strategically placed on the proposed course, which navigate the vehicle to its target.


Objectives:

  • Create an autonomous object to transport a ping pong ball to a specific target.
  • The vehicle should self-orient to an endpoint on 3 proposed courses.
  • Using IR technology, the autonomous car should be able to receive a specific signal from a beacon, move towards it, stop, do a 360º scan to find the next beacon’s signal, and move towards that, following the given track.
  • When the vehicle reaches the end of the course, it should be able to deliver the ping pong ball at the end point.

Components Used  

IRarduino-MotorServo

In order to create an autonomous vehicle that receives specific IR signals from fixed beacons and moves towards them accordingly, we required the following components:

Primary Arduino (Riri)

– Arduino Uno
– IR Receiver 

Secondary Arduino (Rob)

– Arduino
– Solderless breadboard
– H-bridge
– DC Motors 5V (2x)
– Wheels to fasten onto DC motors (2x)
– Servo for middle wheel
– Wheel for middle wheel servo
– Custom bracket for middle wheel
– 5V regulator for servo middle wheel
– Servo for trapdoor containing ping pong ball.

See Bill of Materials here

BeaconEnd

 IR_emitter

– Arduino Duemilanove
– Breadboard
– IR LED
– Resistor for IR LED (330 ohm)
– Ultrasonic Sensor
– Baltic fir plywood box

Beacon1 (can be repeated ad infinitum)

– Arduino Duemilanove
– Breadboard
– IR LED
– Resistor for IR LED (330 ohm)
– Ultrasonic Sensor
– Baltic fir plywood box

See Bill of Materials here

 Runway Course

– 6 circuits of 3 or more LEDs
– 100k resistors

 Chassis

– Laser cut wood (Baltic Birch Plywood)
– Balsa wood


 

Dividing the workload

In the beginning, we divided the creation of our soon-to-be autonomous, self-driving robot into separate tasks of movement and sensing:

Egill drew initial sketches for a dummy chassis to test out ideas of what components we might include.

initial-sketch-build-1

Egill created the working engine using 2 DC motor-powered wheels at the sides of the car. A third, servo-powered wheel was added later at the front of the vehicle to turn and steer the car through the courses.

Initially we decided to create a line-following robot using an IR array.

At an early stage, an ultrasonic sensor was used to prevent the vehicle from colliding with obstacles. The servo guiding the front wheel would turn to the left until the ultrasonic sensor found no obstacle, then the servo would turn back to the right and the car would drive straight ahead.
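
A minimal sketch of that early obstacle-avoidance behaviour might look like the following, assuming an HC-SR04-style ultrasonic sensor and a steering servo; the pin numbers and the 20 cm threshold are assumptions rather than the project’s values.

```cpp
// Steer left while an obstacle is in range, straighten out when the path is
// clear. Pins and the 20 cm threshold are placeholders.
#include <Servo.h>

const int TRIG_PIN = 7;
const int ECHO_PIN = 8;
const int STEER_PIN = 9;

Servo steering;

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // time out after 30 ms
  return duration / 58;                            // approx. cm for HC-SR04
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  steering.attach(STEER_PIN);
  steering.write(90);            // front wheel straight
}

void loop() {
  long cm = readDistanceCm();
  if (cm > 0 && cm < 20) {
    steering.write(45);          // obstacle ahead: turn the front wheel left
  } else {
    steering.write(90);          // clear: straighten out and drive on
  }
  delay(50);
}
```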

 

Transporting the ping pong ball

At one stage, a 12V fan was suggested to create a vacuum that would suck up the ball at the start point and let it drop at the end point. The idea was discarded because that fan required a lot of power and other fans were not strong enough.

Finally, a servo powered trap door was used to carry the ball inside the vehicle’s chassis and release it at the end of the course.

Getting everything together

Testing and assembly included tweaking the motor functions, changing beacon IR IDs, conceptualizing a theme, and troubleshooting after assembling the separate parts.

After a few changes were suggested, we began working on a final prototype. Initial beacons were created, each of which comprised an IR emitter and an Arduino, eventually used in conjunction with ultrasonic sensors, all housed in a wooden box.

IMG_9694

At three separate points throughout the process, we gave ourselves until the end of the day before giving up on using IR LED emitters/receivers.  If we had stuck with one of our other initial plans, whether line-following or obstacle avoidance, we might have had a perfectly autonomous, self-guiding robot on the day of the presentation.


Movement

For agility purposes, we decided to go with an H-bridge IC chip for the DC motors. That way we could control the turning direction of the motors, giving us a range of movement options: forward, backward, and spinning clockwise/counter-clockwise on the spot.
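
As an illustration of those movement options, the sketch below drives two DC motors through an L293D/L298-style H-bridge using two direction pins per motor; the pin numbers are placeholders and the enable/speed pins are omitted for brevity.

```cpp
// Forward, backward and spin-on-the-spot via an H-bridge's direction pins.
// Pin numbers are assumed; PWM speed control is left out for brevity.
const int LEFT_A = 2,  LEFT_B = 3;    // left motor direction pins
const int RIGHT_A = 4, RIGHT_B = 5;   // right motor direction pins

void setMotor(int pinA, int pinB, int dir) {  // dir: 1 forward, -1 back, 0 stop
  digitalWrite(pinA, dir > 0 ? HIGH : LOW);
  digitalWrite(pinB, dir < 0 ? HIGH : LOW);
}

void forward()  { setMotor(LEFT_A, LEFT_B, 1);  setMotor(RIGHT_A, RIGHT_B, 1);  }
void backward() { setMotor(LEFT_A, LEFT_B, -1); setMotor(RIGHT_A, RIGHT_B, -1); }
void spinCW()   { setMotor(LEFT_A, LEFT_B, 1);  setMotor(RIGHT_A, RIGHT_B, -1); }
void spinCCW()  { setMotor(LEFT_A, LEFT_B, -1); setMotor(RIGHT_A, RIGHT_B, 1);  }
void stopAll()  { setMotor(LEFT_A, LEFT_B, 0);  setMotor(RIGHT_A, RIGHT_B, 0);  }

void setup() {
  int pins[] = {LEFT_A, LEFT_B, RIGHT_A, RIGHT_B};
  for (int i = 0; i < 4; i++) pinMode(pins[i], OUTPUT);
}

void loop() {
  forward();  delay(1000);   // drive forward for a second
  spinCW();   delay(500);    // spin on the spot
  stopAll();  delay(1000);   // rest
}
```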

IMG_9575

Early on, the robot was going to be a tripod in which a middle wheel or a ball would serve only as a stabilizer.  This static middle wheel was later replaced with a servo and a custom bracket holding a third wheel.

Initial tests with the DC motors and servo revealed power issues, so we divided the power sources into two sections based on our robot’s vision (sensors) and movement (actuators). A micro servo was used to swing open a trapdoor at the bottom of the chassis to release the ball.


Voiceshield

To make things more interesting and fun, we thought of including a voiceshield (an Arduino shield that allows you to record and play back up to 4 minutes of audio), so that audio samples could be played at certain parts of the course and the robot could narrate its own behaviour.


The two headed beast – Introducing a 2nd arduino

After programming and testing a voiceshield, we assembled it onto our robot’s Arduino, but an unexpected error occurred. The voiceshield interfered with the H-bridge so that only one of the DC motors was receiving enough power or logic to run. Since a voiceshield depends on digital pins 2-5, we moved the H-bridge to other pins, but we ran out of digital pins.

A separate Arduino was used for “doing the talking” with the voiceshield, while the primary Arduino would do everything else, including sending signals over to the additional Arduino via the Wire.h library to tell it to play samples at certain touch points.


More issues

With the two motors and the servo running smoothly together, and an IR receiver getting programmed signals from different IR LEDs, testing was done on how the sensing parts of the robot affected the moving parts.

We sadly realized, at a really bad time, that the IR receiver was interfering with the DC motors, and that the voiceshield interfered with the DC motors as well.

 Prioritizing

Since the voiceshield was making other communications lag, and switching places did not help either, we sadly decided to drop the voiceshield in favour of the ultrasonic sensor and the IR on the additional Arduino.

While we had figured out how to send a signal between the two Arduinos via Wire.h, we had little luck sending over plural signals, i.e. one for the received IR signal and another for the distance sensed via the ultrasonic sensor.

So we had the idea: why not just put the ultrasonic sensors on the IR LED beacons themselves? Each beacon would send out a certain signal to begin with, and when the robot came within a certain distance, its IR LED would send out a different signal, at which point the robot would stop and drop the ball.
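
A hedged sketch of that beacon behaviour is shown below, assuming the classic (2.x) IRremote API, where IRsend transmits on pin 3 of an Uno/Duemilanove, and an HC-SR04-style ultrasonic sensor; the two NEC codes are placeholders, since the project’s actual signal codes are not documented here.

```cpp
// Beacon sketch: broadcast a default IR code, and switch to a "you are close"
// code when the robot comes within 15 cm. Codes and pins are placeholders.
#include <IRremote.h>

const int TRIG_PIN = 7;
const int ECHO_PIN = 8;
const unsigned long SIGNAL_FAR  = 0xF7C837;   // placeholder "come towards me"
const unsigned long SIGNAL_NEAR = 0xF7EA15;   // placeholder "you are close"

IRsend irsend;   // classic IRremote sends on pin 3 of an Uno/Duemilanove

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  return pulseIn(ECHO_PIN, HIGH, 30000) / 58;
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  long cm = readDistanceCm();
  if (cm > 0 && cm < 15) {
    irsend.sendNEC(SIGNAL_NEAR, 32);   // robot is within 15 cm of this beacon
  } else {
    irsend.sendNEC(SIGNAL_FAR, 32);    // keep broadcasting the default signal
  }
  delay(100);
}
```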


Behaviour Relationship

Course 1

  1. The IR receiver on Riri (the primary Arduino) looks for BeaconEnd’s signal 1. While it is not receiving that signal, Riri sends a default x=0 over to Rob (the secondary Arduino), so Rob spins his wheels in opposite directions with the middle wheel turned sideways.
  2. When the IR receiver on Riri picks up BeaconEnd’s signal 1, Riri sends Rob x=1, which makes Rob go forward by spinning both wheels forwards and rotating the middle wheel to a forward position.
  3. When BeaconEnd senses a distance of less than 15cm, it sends out signal 2. Riri receives it and writes x=2 to Rob, who now stops for 3 seconds and turns the trap door 90°, dropping the ball. After 3 seconds, Rob turns sideways, goes forward and then stops, signaling he’s happy and done for the day. (A sketch of the Riri/Rob exchange follows below.)
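
The Riri → Rob exchange can be sketched with the Wire.h library as two small sketches, one per board; the I2C address (8) and the motor code are assumptions, while the meaning of x follows the steps above.

Riri (primary, I2C master):

```cpp
// Riri decodes the IR receiver elsewhere and reports the current state x.
#include <Wire.h>

byte x = 0;   // 0 = search/spin, 1 = go forward, 2 = stop & drop (per the course logic)

void setup() { Wire.begin(); }

void loop() {
  // ... decode the IR receiver here and update x accordingly ...
  Wire.beginTransmission(8);   // Rob listens on I2C address 8 (assumed)
  Wire.write(x);
  Wire.endTransmission();
  delay(100);
}
```

Rob (secondary, I2C slave):

```cpp
// Rob receives x and drives the wheels, middle-wheel servo and trap door.
#include <Wire.h>

volatile byte x = 0;

void receiveEvent(int howMany) {
  while (Wire.available()) x = Wire.read();
}

void setup() {
  Wire.begin(8);                  // join the bus as slave address 8
  Wire.onReceive(receiveEvent);
}

void loop() {
  if (x == 0) {
    // spin the wheels in opposite directions, middle wheel turned sideways
  } else if (x == 1) {
    // both wheels forward, middle wheel straight: drive towards the beacon
  } else if (x == 2) {
    // stop, open the trap door ~90 degrees, drop the ball, then finish
  }
}
```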

 Course 2

  1. The IR receiver on Riri looks for Beacon1’s signal 3. While it is not receiving that signal, Riri sends a default x=0 over to Rob, so Rob spins his wheels in opposite directions with the middle wheel turned sideways.
  2. When the IR receiver on Riri picks up Beacon1’s signal 3, Riri sends Rob x=3, which makes Rob go forward by spinning both wheels forwards and turning the middle wheel into a forward position.
  3. When Beacon1 senses a distance of less than 15cm, it sends out signal 4. Riri receives it and writes x=4 to Rob, who now spins his wheels in opposite directions with the middle wheel turned sideways.
  4. This keeps going until Riri picks up BeaconEnd’s signal 1. Riri sends Rob x=1, which makes Rob go forward by spinning both wheels forwards and turning the middle wheel into a forward position.
  5. When BeaconEnd senses a distance of less than 15cm, it sends out signal 2. Riri receives it and writes x=2 to Rob, who now stops for 3 seconds and turns the trap door 90°, dropping the ball. After 3 seconds, Rob turns sideways, goes forward and then stops, signaling he’s happy and done for the day.

Course 3

In theory, we would simply connect two additional beacons, place them at the appropriate places on the course, and the robot would do its thing. Unfortunately, we ran into problems which prevented us from arriving at this solution in real life. We are still not 100% certain that there is a way to make this system work without spending months on it.


Final thoughts

Even though it didn’t work as well as we had hoped, we advanced our knowledge of Arduino and our understanding of how robots are put together. We learned to cut our losses sooner, and not to feel bad about going to plan B when plan A is a relentless impediment.

 

 

RedBot – Autonomous Robot Vehicle

RedBot

As a Graduate Research Project at OCAD U, we were challenged to make an autonomous robot using the Arduino Uno microcontroller. We built RedBot as an introductory idea for Red Bull events that could set up a new adventure with these little autonomous monsters.


TEAM

Nimrah Syed
Ling Ding
Jason Tseng
Marcus Gordon



THE CHALLENGE

To create an autonomous device that follows a preset path to reach its destination carrying a ping pong ball. The challenge included three different courses listed below:

Course A

img58

Course B

img62

Course C

img66

No details for ‘Course C’ were released until the presentation day.  We were given 30 minutes to figure out the third course after the criteria were given.  In the end, it looked a little like this:

img103

STARTING IDEAS

LEGO LIFT

Create a new path above the course using lifts. When the ping pong ball is put on the path, the first lift goes up so that the ball can travel towards point B automatically. When the ball arrives at the position of the second lift, that lift goes down in order to use gravity to keep the ball moving. Depending on the course requested, the lifts would create the specific path needed to move the ball.

SPHERO

img79
Inspired by Sphero, we were thinking of altering the ball by adding features within it. The main method of adding those features was Orb Basic, the Sphero’s Basic language. Features included an accelerometer, gyroscope, Bluetooth and, most importantly, the Sphero’s spherical nature.  This was supposed to be the main engine behind a chariot that would carry the ping pong ball.

CONDUCTIVE PATH

img82 img84
The idea was to use a conductive material as the path for the ping pong ball. This was beneficial because the fabric could serve as the layout for the course design itself.


STRATEGY

The RedBot modus operandi
The inspiration was to make an autonomous robotic car powered by Red Bull. Hence, the name RedBot!

Motivated by Red Bull events, a car was a natural thing to associate with the brand. RedBot, as much an energy drink as it was a fast car, was designed to detect a path and identify objects that helped it navigate its given course.

ELECTRONIC PARTS

(2) N-Channel MOSFETs
(2) 5V DC Motors
(1) Arduino Uno
(1) Arduino Prototype Shield v.5
(2) HC-SR04 Ultrasonic Sensors

PROTOTYPING LIFECYCLE

img86 img88

img91 img99

CIRCUIT DIAGRAM

RedBot Final - Sketch_bb

TECHNICAL OVERVIEW

Each course has its own program specifically designed for it. It is possible to combine all three programs into one, but in their current state they are not combined. Note that all turns are performed with one wheel moving and the other stationary. Turn durations are static, set within the programs themselves. Autonomy is derived from the proximity sensors, which are used to detect when to perform the pre-timed turns. The exact timing depends on the desired angle and turning speed, which in turn depend on battery power, motor specifications, balance of the device, wheel traction, etc.

Course A:

There are three movement modes used for course A.

Wall Correction Mode

When the on button is pushed, the RedBot first checks to see if there is a wall in front of it. If a wall is detected, RedBot turns right for a certain amount of time, enough so that the device no longer faces the wall. RedBot then switches to orientation mode. If RedBot does not detect a wall when it is turned on, it switches to orientation mode immediately.

Orientation Mode

RedBot turns right in small steps until it detects a wall (the same wall used for wall correction mode). RedBot then turns left for a certain amount of time, enough so that the device is now aligned with the course. The device then switches to movement mode.

Movement Mode

RedBot moves forward in small steps until it detects a wall in front of it. RedBot then stops.  The first wall is placed on the course at a fixed position away from the starting point to standardize RedBot’s alignment. Note that the wheel acting as the centre of the turn (the right wheel) must be placed at the exact centre of the starting point for the alignment standardization to work. The second wall is placed on the course at a fixed distance beyond the goal point, facing RedBot, so that the device will stop exactly at the goal when it detects the wall.
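
A minimal sketch of this three-mode logic is shown below (the project’s own program is linked beneath it); the pins, step durations and the 20 cm “wall” threshold are assumptions.

```cpp
// Three-mode state machine: wall correction -> orientation -> movement -> done.
// Uses one HC-SR04-style sensor; motor helpers are placeholders for the
// MOSFET/motor wiring, and all durations/thresholds are assumed.
const int TRIG_PIN = 7, ECHO_PIN = 8;

enum Mode { WALL_CORRECTION, ORIENTATION, MOVEMENT, DONE };
Mode mode = WALL_CORRECTION;

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  return pulseIn(ECHO_PIN, HIGH, 30000) / 58;
}

bool wallAhead() { long cm = readDistanceCm(); return cm > 0 && cm < 20; }

void turnRight(unsigned long ms) { /* right wheel stopped, left wheel forward */ delay(ms); }
void turnLeft(unsigned long ms)  { /* left wheel stopped, right wheel forward */ delay(ms); }
void forwardStep()               { /* both wheels forward for a short step   */ delay(100); }
void stopMotors()                { /* all motor pins LOW */ }

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  if (wallAhead()) turnRight(800);   // wall correction: turn away from the wall
  mode = ORIENTATION;
}

void loop() {
  switch (mode) {
    case ORIENTATION:
      if (wallAhead()) { turnLeft(800); mode = MOVEMENT; }  // aligned with the course
      else             { turnRight(100); }                  // keep scanning in small steps
      break;
    case MOVEMENT:
      if (wallAhead()) { stopMotors(); mode = DONE; }       // second wall marks the goal
      else             { forwardStep(); }
      break;
    default:
      break;
  }
}
```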

[Arduino Code for Course A]

Course B:

When RedBot is turned on it moves forward in steps until it detects the half-wall placed at the corner via its lower proximity sensor. RedBot then turns left for a specific amount of time, enough so that it now faces the goal point. The device continues to move forward until it detects the full wall placed a bit beyond the goal with both its proximity sensors. The device then stops.

Half-wall detection is the code that tells the device when to turn, while full-wall detection is the code that tells the device when to stop. Given this logic pattern, it is possible to have RedBot navigate its way around any course, so long as the turn durations are pre-declared within the program. For example, it is possible to set all turn durations to the standardized time needed for a 45-degree turn, and use half walls to navigate the device around a course comprised of turns at angles that are multiples of 45 degrees.
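
The half-wall/full-wall pattern can be sketched as follows (again, not the linked program): a half wall is seen only by the lower sensor and acts as a turn cue, while a full wall is seen by both sensors and acts as a stop cue; pins, thresholds and the turn duration are assumptions.

```cpp
// Two HC-SR04-style sensors: lower-only detection = half wall (turn),
// both sensors = full wall (stop). Motor helpers are placeholders.
const int LOW_TRIG = 7,  LOW_ECHO = 8;    // lower proximity sensor
const int UP_TRIG  = 11, UP_ECHO  = 12;   // upper proximity sensor

long readCm(int trig, int echo) {
  digitalWrite(trig, LOW);  delayMicroseconds(2);
  digitalWrite(trig, HIGH); delayMicroseconds(10);
  digitalWrite(trig, LOW);
  return pulseIn(echo, HIGH, 30000) / 58;
}

bool nearWall(int trig, int echo) { long cm = readCm(trig, echo); return cm > 0 && cm < 20; }

void forwardStep()             { /* both wheels forward briefly */ delay(100); }
void turnLeft(unsigned long t) { /* left wheel stopped, right wheel forward */ delay(t); }
void stopMotors()              { /* all motor pins LOW */ }

void setup() {
  pinMode(LOW_TRIG, OUTPUT); pinMode(LOW_ECHO, INPUT);
  pinMode(UP_TRIG,  OUTPUT); pinMode(UP_ECHO,  INPUT);
}

void loop() {
  bool lower = nearWall(LOW_TRIG, LOW_ECHO);
  bool upper = nearWall(UP_TRIG, UP_ECHO);

  if (lower && upper)       { stopMotors(); while (true) {} }  // full wall: goal reached
  else if (lower && !upper) { turnLeft(700); }                 // half wall: pre-timed turn
  else                      { forwardStep(); }                 // clear: keep stepping forward
}
```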

[Arduino Code for Course B]

Course C:

Due to time constraints, the program for course C is a quick rig of the code for course B. The upper proximity sensor is disabled, and the full stop at the goal is triggered via a variable that acts as a counter for how many walls the device has encountered.

When RedBot is turned on it moves forward in steps until it detects the half-wall placed at the corner via its lower proximity sensor.  The device then turns left for a specific amount of time, enough so that it now faces the next wall. This bit of code is repeated for the next two angles. By now, the counter will have reached 3, meaning the device has so far encountered 3 walls. When it detects the last wall positioned beyond the goal point, the counter reaches 4, and upon reaching that count RedBot stops all actions.

[Arduino Code for Course C]

 

CONCLUSION

To conclude, RedBot has been able to achieve its goal of following a path from point A to point B, and bringing its ping pong ball to the finish line.  Although not as fast as we originally conceived, our Red Bull sponsored idea for an autonomous vehicle was a success.

Our minimalistic approach to RedBot’s design demonstrated how a crazy idea can eventually lead to a streamlined engine of simplicity.  With sensors limited to sonar and code logic simplified to left/right instructions only, RedBot achieved greatness through the speed of thought.

img101

Wooden Mirror Research Project

Andrew Hicks, Leon Lu, Davidson Zheng & Alex Rice-Khouri.

Introduction
Daniel Rozin is a NY based artist, educator and developer who is best-known for incorporating ingenious engineering and his own algorithms to make installations that change and respond to the presence and point of view of the viewer. He’s also a Resident Artist and Associate Art Professor at ITP, Tisch School of the Arts, NYU.

Merging the geometric with the participatory, Rozin’s installations have been celebrated for their kinetic and interactive properties. Grounded in gestures of the body, the mirror is a central theme in his work. Surface transformation becomes a means to explore animated behaviours, representation and illusion. He explores the subjectivity of self-perception and blurs the line between the digital and the physical.

“I don’t like digital, I use digital. My inspirations are all from the analog world”

He created the Wooden Mirror in 1999 for the BitForms Gallery in New York. The mirror is an interactive sculpture made up of 830 square tiles of reflective golden pine. A servo rotates each square tile on its axis, thus reflecting a certain amount of light and in turn creating a varying gradient of colour. A hidden camera behind the mirror, connected to a computer, decomposes the image into a map of light intensity.

The mirror is meant to explore the inner workings of image creation and human visual perception.

The Research Technique 

How Shape Detection Works 

The fundamental idea of representing a person from their form can be as simple or complex as you want it to be. In our case, we tried to trace the rough contours of a shape (e.g. your hand) using light sensors. The same basic principle of activating and deactivating pixels based on relative intensity applies just as well to Photoshop’s spot-healing tools as to line-following robots, facial recognition, and all of Daniel Rozin’s reflective exhibits.

Rozin’s Wooden Mirror (most likely) used a simple bitmap, expressing how far a tile should pivot by the average brightness of a swatch of pixels. You determine the number of digital pixels per physical pixel from the scale of the video resolution relative to the roughly 29×29 mirror grid. All of Rozin’s recent projects (Penguins, PomPoms, etc.) use a Kinect and some combination of the image analysis and edge detection found in the OpenCV framework. The most popular algorithm for this is Canny edge detection.
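
As an illustration of that bitmap idea (not Rozin’s actual code), the OpenCV snippet below averages each camera frame down to a 29×29 grid and maps each cell’s brightness to a tile angle.

```cpp
// Downscale the frame to the tile grid with area averaging, then map each
// cell's brightness (0..255) to a pivot angle (0..180). Illustration only.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cam(0);
    cv::Mat frame, gray, grid;

    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        // INTER_AREA averages each swatch of pixels into one "physical pixel".
        cv::resize(gray, grid, cv::Size(29, 29), 0, 0, cv::INTER_AREA);

        for (int y = 0; y < grid.rows; ++y) {
            for (int x = 0; x < grid.cols; ++x) {
                int brightness = grid.at<uchar>(y, x);      // 0..255
                int tileAngle  = brightness * 180 / 255;    // 0..180 degrees
                // send tileAngle to the servo driving tile (x, y) here
                (void)tileAngle;
            }
        }
        cv::imshow("grid", grid);
        if (cv::waitKey(30) == 27) break;   // Esc to quit
    }
    return 0;
}
```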

Canny Edge Detection 

1) Reduce Noise 

Smooth out any unusually high or low points in 5×5 neighbourhoods using a Gaussian filter. Think lens flares from the sun or dust specks on the lens itself. It’s the same idea as the histogram in Lightroom or Photoshop that lets you automatically filter out any completely dark or light areas.

Screen Shot 2015-11-10 at 10.52.29 AM

 

2) Find the Intensity Gradient

Scan the image left to right, row by row, to measure how strongly each pixel differs from its horizontal neighbours; then scan top to bottom, column by column, to do the same vertically. Doing this for every pixel gives the intensity gradient.

Screen Shot 2015-11-10 at 10.52.52 AM

3) Suppression

Now you suppress any points that are not local maxima. Pixels whose gradient value is larger than both of their neighbours along the gradient direction get assigned a 1; the neighbours get flattened to 0. This gives you a one-pixel-thin edge (a line), and the entire map can be represented in binary.

Screen Shot 2015-11-10 at 10.53.28 AM

4) Thresholding (Hysteresis)

There are two thresholds: a minimum value and a maximum value. That gives three categories: below the minimum, between the minimum and maximum, and above the maximum.

Points above the maximum are kept as “sure edges” and points below the minimum are thrown away. For points that fall between the two thresholds, if they’re attached to a sure edge they get counted; if they’re not beside a sure edge, they get thrown away.

Screen Shot 2015-11-10 at 10.53.41 AM
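
Taken together, these steps amount to a Gaussian blur followed by OpenCV’s cv::Canny call, which performs the gradient, suppression and hysteresis stages. A minimal illustration (not code from the project):

```cpp
// Canny edge detection with OpenCV: 5x5 Gaussian noise reduction, then
// gradient + non-maximum suppression + hysteresis inside cv::Canny.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("hand.jpg", cv::IMREAD_GRAYSCALE);  // placeholder file
    if (img.empty()) return 1;

    cv::Mat blurred, edges;
    cv::GaussianBlur(img, blurred, cv::Size(5, 5), 1.5);  // step 1: noise reduction
    cv::Canny(blurred, edges, 50, 150);                   // steps 2-4: min=50, max=150

    cv::imwrite("edges.png", edges);
    return 0;
}
```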

Our Prototype 

Build
When first assessing the project’s needs, it became apparent that materials and scale were going to be an important part of making our prototype work as accurately as possible. A heavy wooden frame was used to house nine 180-degree servo motors and nine 3.5”x3.5” light panels framed out of popsicle sticks and tracing paper.

IMG_5053

A foam core divider was placed inside the frame to prevent light from interfering with the other photoresistors within the box, and to serve as a base for each of our nine servo motors and photoresistor sensors. Each servo motor was fitted with a chassis made of popsicle sticks to ensure a steady forward and backward motion when pushing the 3.5”x3.5” panels. Each chassis was accompanied by a wire connecting the servo motor to a “floating” arm that would end up pushing the panels back and forth.

IMG_5041

Building a 90-degree angled arm to push the panels an even greater distance was considered, but even using only a wire arm, our team was able to move each panel 0.75” back and forth, enough distance to achieve the desired effect.

IMG_5043IMG_5042

Our build allows users to interact with our prototype by shining an LED light on each 3.5”x3.5” panel to trigger each servo in whatever sequence the user chooses, creating an interactive pixelated effect that mirrors the user’s actions. The inverse, using shadows, can also be achieved by reversing our input method in code.

Video 1:

Video 2:

Code
Available at https://github.com/Minsheng/woodenmirror
A single Arduino controls a row of three photoresistors and three micro servo motors. The half-rotation motor positions are set to 0 initially, then set to values (60, 120, 180) relative to the photoresistor readings (i.e. the higher the photoresistor value, the greater the motor position). For simplicity, we set the motor to 180 degrees if the photoresistor exceeds a certain threshold; otherwise the motor is set back to 0 degrees, thus pulling each panel inwards or pushing it outwards.
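
A minimal sketch of that row logic, with assumed pin numbers and an assumed threshold (the project’s actual code is in the repository linked above):

```cpp
// Three photoresistors drive three servos: above the threshold the panel is
// pushed out (180 degrees), otherwise pulled back in (0 degrees).
#include <Servo.h>

const int SENSOR_PINS[3] = {A0, A1, A2};
const int SERVO_PINS[3]  = {9, 10, 11};
const int THRESHOLD      = 600;            // assumed light-level cutoff (0..1023)

Servo servos[3];

void setup() {
  for (int i = 0; i < 3; i++) {
    servos[i].attach(SERVO_PINS[i]);
    servos[i].write(0);                    // all panels start pulled in
  }
}

void loop() {
  for (int i = 0; i < 3; i++) {
    int light = analogRead(SENSOR_PINS[i]);
    // Push the panel out when its photoresistor is lit, pull it back otherwise.
    servos[i].write(light > THRESHOLD ? 180 : 0);
  }
  delay(50);
}
```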

In terms of speed control, we tried to incorporate deceleration into the backwards movement. The current mechanism is meant to let the motor move at a decreasing speed until it covers 80% of the difference between the last position and the target position, then at a much lower speed for the rest of the difference, using a loop and a timer for each servo motor. A more effective implementation might use VarSpeedServo.h (github.com/netlabtoolkit/VarSpeedServo), which allows asynchronous movement of up to 8 servo motors, with advanced speed controls such as defining a sequence of (position, speed) value pairs.
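
For reference, a short illustration of the VarSpeedServo.h alternative, based on that library’s write(position, speed[, wait]) call; the pin and the numbers are placeholders.

```cpp
// VarSpeedServo lets each move carry its own speed, and can optionally block
// until the move completes. Pin and values are placeholders.
#include <VarSpeedServo.h>

VarSpeedServo panel;

void setup() {
  panel.attach(9);
  panel.write(180, 30, true);   // push the panel out slowly (speed 30), wait until done
  panel.write(0, 120, false);   // pull back faster, without blocking loop()
}

void loop() {}
```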

Fritzing Diagram:

WoodenMirrorPrototype_bb