All posts by Jordan Shaw

Curious

IMG_1954

Curious is an interactive installation that introduces viewers to the idea of technology having its own personality. This behaviour is achieved by making Curious interested in, yet timid around, the viewers observing it within the gallery. Curious is composed of three main sections. The first is the frame, which is suspended a couple of feet below the ceiling; it holds the majority of the electronics along with the two required stepper motors. The second is a horizontal light bar suspended by two cables, one at each end of the bar, each attached to a stepper motor located directly above the light bar on the installation frame. The final section is the sonar sensor array, which consists of five sensors located a couple of centimetres off of the ground.

In its default state the light bar is suspended a few feet below the installation frame, glowing from the one meter, 60-pixel NeoPixel strip fed through the acrylic tube. The light bar creates light patterns to try to generate interest from people in the gallery; this is the installation's way of beckoning viewers towards itself. When a viewer gets close enough, the light bar starts to mimic their movement as they move from the left side of the bar to the right, essentially trying to bond with them through imitation in motion. If the viewer gets too close, Curious becomes nervous and scared: it fades its lights completely off and raises itself back up to sit just under the suspended frame, trying to hide from the unfamiliar creature, the gallery viewer. Finally, when the viewer backs away, Curious lowers itself from above in an investigative manner, lowering one side of the light bar and then the other while animating the light back and forth, then slowly lowering the whole bar at a consistent speed while the LEDs perform a number of creative light patterns.
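This behaviour can be pictured as a distance-driven state machine. The sketch below is my own simplified illustration of that loop; the state names, thresholds, and the stubbed helper are assumptions, not the installation's actual code.

```cpp
// A minimal sketch (not the installation's actual code) of the distance-driven
// state machine described above. Thresholds and state names are illustrative.
enum State { ATTRACT, MIMIC, HIDE, INVESTIGATE };
State state = ATTRACT;

const int NEAR_CM = 40;   // "too close": Curious hides
const int MID_CM  = 120;  // close enough to start mimicking

int readClosestSonarCm() {
  // Placeholder: return the nearest of the five sonar readings here.
  return 200;
}

void setup() {}

void loop() {
  int closest = readClosestSonarCm();

  switch (state) {
    case ATTRACT:
      // idle glow / beckoning light patterns
      if (closest < NEAR_CM)      state = HIDE;
      else if (closest < MID_CM)  state = MIMIC;
      break;
    case MIMIC:
      // animate the lit region to follow the viewer from left to right
      if (closest < NEAR_CM)      state = HIDE;
      else if (closest > MID_CM)  state = ATTRACT;
      break;
    case HIDE:
      // fade the LEDs off and step both motors to raise the bar under the frame
      if (closest > MID_CM)       state = INVESTIGATE;
      break;
    case INVESTIGATE:
      // lower one side, then the other, animating patterns on the way down
      state = ATTRACT;  // once the bar is back at its default height
      break;
  }
}
```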

IMG_1972

Features

The frame that holds all of the electronics hangs from the ceiling, and there is no guarantee of what the height of the gallery ceiling will be. This made it pivotal to devise a way to adjust and configure Curious so it could behave despite these uncertainties. To solve the problem of not knowing how high the frame would be off the ground, and how far the motors would have to raise and lower the light bar, I created different modes in the Arduino logic to set the starting position of the light bar and to adjust the steps for each motor in case they went out of sync. This communication happens over the serial port or through the HC-05 Bluetooth module. With this "Configure Mode" I was able to wirelessly control the motors individually and correct any errors that prevented the light bar from sitting completely horizontal in its default state. Another mode, also controllable via serial or Bluetooth, changes how the installation behaves depending on how busy the gallery is. There are low-traffic and high-traffic settings that change how the installation reacts to its surroundings. The low-traffic mode behaves as described above, with Curious exhibiting its pseudo-personality, while the high-traffic setting keeps the light bar at a static height and illuminates it with glowing, rotating colours animating from left to right.
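As a rough idea of how that kind of remote configuration can be wired up, here is a minimal sketch of single-character commands arriving over USB serial or an HC-05 on SoftwareSerial. The pins, command letters, and step sizes are hypothetical, not the installation's actual protocol.

```cpp
// Hypothetical sketch of mode switching and per-motor corrections driven by
// single-character commands from Serial or an HC-05 Bluetooth module.
#include <SoftwareSerial.h>

SoftwareSerial bt(10, 11);        // RX=10 (to HC-05 TX), TX=11 (to HC-05 RX); assumed pins

enum Mode { RUN_LOW_TRAFFIC, RUN_HIGH_TRAFFIC, CONFIGURE };
Mode mode = RUN_LOW_TRAFFIC;

long leftOffsetSteps = 0;         // manual corrections applied in Configure Mode
long rightOffsetSteps = 0;

void handleCommand(char c) {
  switch (c) {
    case 'c': mode = CONFIGURE;         break;   // enter Configure Mode
    case 'l': mode = RUN_LOW_TRAFFIC;   break;   // quiet-gallery behaviour
    case 'h': mode = RUN_HIGH_TRAFFIC;  break;   // busy-gallery behaviour
    case 'q': if (mode == CONFIGURE) leftOffsetSteps  -= 10; break;
    case 'w': if (mode == CONFIGURE) leftOffsetSteps  += 10; break;
    case 'o': if (mode == CONFIGURE) rightOffsetSteps -= 10; break;
    case 'p': if (mode == CONFIGURE) rightOffsetSteps += 10; break;
  }
}

void setup() {
  Serial.begin(9600);
  bt.begin(9600);                 // HC-05 default data baud rate
}

void loop() {
  if (Serial.available()) handleCommand(Serial.read());
  if (bt.available())     handleCommand(bt.read());
  // ...run the behaviour for the current mode, applying the step offsets...
}
```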

IMG_1970 IMG_1937

Prototypes

When coming up with the form and design of Curious I went through more than a couple of iterations and revisions. I sketched out my ideas and got feedback from classmates and friends on which they preferred. Below are some selected sketches to illustrate my thought process on the design before I started manufacturing the installation.

IMG_1986 IMG_1990 IMG_1989 IMG_1988

Challenges

The main challenges I faced when working through the different iterations and stages of prototyping and developing Curious were:

  • getting the proper voltage for the NeoPixels to run correctly
  • getting the correct voltage and current for the stepper motors to work properly without burning out the Adafruit Motor Shield and its integrated circuits (ICs)
  • the weight of the cables and their flexibility when winding up the light bar
  • the stepper motors' axle diameter compared to the diameter of the hole in the pulley meant for the axle
  • reading and storing the returned distances from the five different sonar sensors on the sonar bar

Solving the voltage issues with the NeoPixels and the stepper motors took a lot of reading and testing of different circuits. I had to add a 1000uF capacitor for the NeoPixels to work correctly. With the stepper motors, I originally tried to drive them with 5 volts and 1 amp, but that was too weak; I needed more power. I then tried 9 volts and 2 amps, which also didn't work, because the motors needed exactly 5 volts. With 5 volts being a requirement, I added a voltage regulator, which got 5 volts to the motors but kept overheating, and it could only output 1 amp, which as mentioned above wasn't powerful enough. I still needed more amps. The solution was a different power supply that outputs 5 volts and 3 amps. Once I had this piece of the puzzle working correctly I was in business. Even with 3 amps there was still a little bit of motor slippage, which I think is because the motors could accept even more current while the motor shield I am using can only handle 3 amps, even after I added heat sinks to the motor shield's motor driver IC chips.

In the next iteration of Curious I would like to use smaller but more powerful motors that also need fewer amps. With what I learned about stepper motors I would be able to greatly improve how they are integrated into the installation. For this current version I had to spend a lot of time figuring out the correct power and configuration to control the stepper motors.

It was a challenge to come up with a way to connect the pulleys, with their large axle holes, to the very small axles of the stepper motors. Initially I wrapped tape around the motor axle to increase its diameter, but this came loose after the motors had been running for 10 to 15 minutes: as the motors heated up, the glue on the tape started to melt and lost its adhesive properties. The second solution was an axle extender, crafted out of wood, that attached to the motor axle. This worked, but it pushed the pulley too far out from the motor, which wasn't what I was looking for either. The end solution, as embarrassing as it is, was to fold thin pieces of cardboard to a height of about half the diameter of the pulley's hole and use them to centre the pulley on the axle while screwing down the pulley's axle bolt. This would need to be revisited in the future; perhaps a bushing or a piece of custom-made hardware is needed.

IMG_1903

With my prototype I used a very flexible and lightweight rope to connect the pulleys on the motors to the light bar. When I was getting ready for the install in the gallery, I switched over to a thin cable for aesthetic purposes. This was problematic because the cable needed more weight pulling down on it to stay taut, and the light bar did not provide enough. Not having enough weight also prevented the cable from winding tightly onto the axle, which caused some slippage. To solve this in the future I would use another form of cable, or weigh down the light bar enough to keep the cables taut so they wind onto the pulleys nicely.

To store the sonar values I forked the Arduino Array library (https://github.com/jshaw/arduino-array), updated it to work with newer versions of the Arduino IDE, and added the `getMinIndex` and `getMaxIndex` functions to help with Curious.
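For illustration, the min-index idea those helpers provide looks like this with a plain C array (this is a stand-in, not the library's source): the index of the smallest reading tells Curious which sonar, and therefore which part of the bar, the nearest viewer is in front of.

```cpp
// Illustration only: the same min-index idea the forked Array library's
// getMinIndex() provides, written here with a plain C array.
const int NUM_SONARS = 5;
int distances[NUM_SONARS];

// Returns the index of the smallest stored distance, i.e. the sonar
// closest to a viewer.
int getMinIndexPlain(int values[], int count) {
  int minIndex = 0;
  for (int i = 1; i < count; i++) {
    if (values[i] < values[minIndex]) minIndex = i;
  }
  return minIndex;
}

void setup() { Serial.begin(9600); }

void loop() {
  // distances[] would be filled from the five sonar sensors here.
  int closest = getMinIndexPlain(distances, NUM_SONARS);
  Serial.println(closest);
  delay(100);
}
```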

The final hiccup in the development of Curious was the way the Ping sonar sensor library behaved when trying to read the returned values from multiple sensors. This was an issue because I was also controlling 60 NeoPixels, two stepper motors, and a Bluetooth chip along with the five sonar sensors. To solve the inaccurate returned data, and the fact that the delay built into the Ping sonar library was messing up some of the timing, I switched over to the NewPing library, which allowed for better readings from multiple sonar sensors.
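A minimal sketch of polling five sensors with NewPing follows; the trigger and echo pins and the maximum distance are placeholders rather than the installation's wiring.

```cpp
// A minimal sketch, under assumed pin numbers, of polling five HC-SR04-style
// sensors with the NewPing library.
#include <NewPing.h>

#define MAX_DISTANCE 200   // cm; readings beyond this return 0

// Trigger/echo pins below are placeholders, not the installation's wiring.
NewPing sonars[5] = {
  NewPing(2, 3, MAX_DISTANCE),
  NewPing(4, 5, MAX_DISTANCE),
  NewPing(6, 7, MAX_DISTANCE),
  NewPing(8, 9, MAX_DISTANCE),
  NewPing(10, 11, MAX_DISTANCE)
};

unsigned int distances[5];

void setup() { Serial.begin(9600); }

void loop() {
  for (int i = 0; i < 5; i++) {
    delay(30);                          // short gap so echoes don't cross-talk
    distances[i] = sonars[i].ping_cm(); // 0 means no echo within MAX_DISTANCE
    Serial.print(distances[i]);
    Serial.print(i < 4 ? '\t' : '\n');
  }
}
```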

I’m looking forward to working on this project further, iterating on its features and functionality, fine-tuning it, and taking it to the next level!

Video

Schematic

curious_bb

Source Code

https://github.com/jshaw/cc-antisocial

https://github.com/jshaw/cc-antisocial-processing

https://github.com/jshaw/arduino-array

Autonomous Car

Introduction

IMG_1677

The Golf Project was a challenge to move a ping-pong ball from point A to point B by any means necessary. Each group had to develop a robotic system that would sense the course, its surroundings, and any obstacles, and then react accordingly. We were given course one and course two ahead of time; course three would remain a secret until the day of the presentation. Below we have included images of all three courses we had to navigate.

Process

To figure out how we were going to build a semi-autonomous car, we started by researching possible solutions that would allow the car to sense its surroundings. Below are some ideas we (Jordan and Manik) considered:

  1. Multiple sonar sensors that would wrap around the vehicle.
  2. Infra-red (IR) emitters lining the track with IR sensors on the car ensuring it doesn’t travel outside of the IR bounds.
  3. Program different routes into the car’s “navigation system” so that the car would know its route: e.g. “go straight for 1 meter, then make a 30 degree left turn, then go straight for 0.8 meters, then stop”.
  4. Employ an array of IR sensors and receivers to track lines on the floor and use a sonar sensor for initial orientation for starting and stopping.

When we learned that the first challenge would start with the vehicle oriented in an arbitrary direction away from the start direction, we removed possibility 3 as an option. We knew we needed to program the car to move based on sensors rather than hand-coding its route.

We decided to go with the sonar sensor and the QTR-8A IR (infrared) array.

Prototypes

When we decided on a strategy, a number of issues needed to be addressed:

  • We didn’t have extensive knowledge of motors before this project, so we started by prototyping the motor and navigation system for the vehicle.
  • The first step was confirming that we could turn the DC motors on and off.
  • We followed up by confirming that switching the direction of the current changed the direction of the motors.

To give us more control over the DC motors we added an L293 motor driver, also known as a dual H-Bridge microchip. This allowed us to control the speed of the DC motors and the direction the motors could spin.

The vehicle needed to make turns at various radiuses and speeds using PWM. To prototype the vehicle's behaviour with the motor driver, we wrote a set of motor controls that looped, using timers to drive the motors in multiple directions. A potentiometer allowed us to adjust the speed of the motors based on the values it sent. The motor motions we tested were stop, start, left, right, and reverse.
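The simplified sketch below shows the general pattern we followed: direction through the L293's input pins, speed through PWM on the enable pins, scaled by the potentiometer. Pin numbers and timings are stand-ins, not our exact wiring.

```cpp
// A simplified sketch, with assumed pin assignments, of driving two DC motors
// through an L293 dual H-bridge: direction via the input pins, speed via PWM
// on the enable pins, scaled by a potentiometer.
const int ENA = 5, IN1 = 4, IN2 = 3;   // left motor (enable must be a PWM pin)
const int ENB = 6, IN3 = 7, IN4 = 8;   // right motor
const int POT = A0;

void setMotor(int enPin, int inA, int inB, int speed, bool forward) {
  digitalWrite(inA, forward ? HIGH : LOW);
  digitalWrite(inB, forward ? LOW : HIGH);
  analogWrite(enPin, speed);            // 0-255 duty cycle
}

void setup() {
  pinMode(IN1, OUTPUT); pinMode(IN2, OUTPUT);
  pinMode(IN3, OUTPUT); pinMode(IN4, OUTPUT);
  pinMode(ENA, OUTPUT); pinMode(ENB, OUTPUT);
}

void loop() {
  int speed = map(analogRead(POT), 0, 1023, 0, 255);

  // forward
  setMotor(ENA, IN1, IN2, speed, true);
  setMotor(ENB, IN3, IN4, speed, true);
  delay(2000);

  // gentle left turn: slow the left wheel relative to the right
  setMotor(ENA, IN1, IN2, speed / 3, true);
  setMotor(ENB, IN3, IN4, speed, true);
  delay(1000);

  // stop
  analogWrite(ENA, 0);
  analogWrite(ENB, 0);
  delay(1000);
}
```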

Our next challenge was setting up the IR array, allowing the vehicle to follow a line on the ground to its destination.
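As a rough sketch of what the line following boils down to (not our actual course code), the array's readings can be combined into a weighted line position and a proportional correction applied to the wheel speeds. Only six of the QTR-8A's eight channels are read here, since an Uno has six analog inputs; the pins and gain are assumptions.

```cpp
// Rough line-following sketch: weighted line position from the IR readings,
// proportional correction to the two wheel speeds. Pins and gain are assumed.
const int NUM_IR = 6;
const int IR_PINS[NUM_IR] = {A0, A1, A2, A3, A4, A5};

const int ENA = 5, IN1 = 4, IN2 = 3;   // left motor on the L293 (assumed pins)
const int ENB = 6, IN3 = 7, IN4 = 8;   // right motor
const int BASE_SPEED = 150;

// Weighted average of readings: 0 = line far left, 5000 = line far right.
long readLinePosition() {
  long weighted = 0, sum = 0;
  for (int i = 0; i < NUM_IR; i++) {
    int v = analogRead(IR_PINS[i]);    // darker surface -> higher reading
    weighted += (long)v * (i * 1000L);
    sum += v;
  }
  return (sum > 0) ? weighted / sum : 2500;   // assume centre if nothing seen
}

void setup() {
  pinMode(IN1, OUTPUT); pinMode(IN2, OUTPUT);
  pinMode(IN3, OUTPUT); pinMode(IN4, OUTPUT);
  pinMode(ENA, OUTPUT); pinMode(ENB, OUTPUT);
  digitalWrite(IN1, HIGH); digitalWrite(IN2, LOW);   // both motors forward
  digitalWrite(IN3, HIGH); digitalWrite(IN4, LOW);
}

void loop() {
  long error = readLinePosition() - 2500;    // distance from centre
  int correction = error / 20;               // proportional gain, tuned by eye

  analogWrite(ENA, constrain(BASE_SPEED - correction, 0, 255));
  analogWrite(ENB, constrain(BASE_SPEED + correction, 0, 255));
}
```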

Once we had the basic navigation solved for the vehicle, we realized that we didn't always want the car to start right away after we uploaded a new Arduino sketch or turned on the power. To give us a bit more control we added a start/stop button to the car. This allowed us to calibrate the IR sensors and set the vehicle up for its tests, and it came in handy for preventing runaway projects.

Once we had the car following the set lines, we moved on to a solution that would allow the vehicle to orient itself towards the finish line. Initially, we considered boxing in the vehicle except for an exit door. For visual simplicity, Chris suggested we develop the vehicle so that it “looks” for a pillar parallel to the direction it needs to travel.
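One possible shape for that orientation routine, shown purely as an illustration with stubbed motor helpers and assumed pins, is to spin in place until the sonar sees the pillar within an expected range.

```cpp
// Illustration only (not our exact routine): spin in place at power-up until
// the front sonar sees the pillar within an expected range, then stop.
#include <NewPing.h>

NewPing sonar(12, 13, 200);          // trigger, echo, max cm (assumed pins)

const unsigned int PILLAR_MAX_CM = 60;   // pillar expected within this distance

void spinInPlace() {
  // left wheel forward, right wheel backward (H-bridge pins omitted here)
}

void stopMotors() {
  // set both enable pins to 0
}

void setup() {
  // orient once at power-up
  while (true) {
    unsigned int cm = sonar.ping_cm();
    if (cm > 0 && cm < PILLAR_MAX_CM) break;   // pillar found: we're facing it
    spinInPlace();
    delay(50);
  }
  stopMotors();
}

void loop() {
  // line following continues from here
}
```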

We also spent some time thinking about ways to add some flair to the vehicle. We looked at adding a sound system or some other sort of movement. One difficulty was picking something that wouldn't use too much power and could be done with only the two Arduino pins we had available.

We decided to add a fun tune based on the Arduino.cc ToneMelody sketch, but ran into problems with the coding. When the melody “turned over”, its delay() calls halted the car's movement. We (Manik and Alison) tried replacing the delay in the melody with alternative code, without success. Unfortunately, we couldn't resolve this issue before presentation time.
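One non-blocking approach worth trying in a future iteration is to advance the melody with millis() instead of delay(), so the motor logic keeps running between notes. This is a sketch of that idea, not the code we presented, and the notes are just an example tune.

```cpp
// Sketch of a non-blocking melody: notes advance with millis() so the rest of
// loop() (motor control, sensors) keeps running. The tune is only an example.
const int SPEAKER_PIN = 9;

const int melody[]       = {262, 294, 330, 349, 392, 440, 494, 523};
const int noteLengthMs[] = {250, 250, 250, 250, 250, 250, 250, 500};
const int NUM_NOTES = 8;

int currentNote = 0;
unsigned long noteStartedAt = 0;

void setup() {
  noteStartedAt = millis();
  tone(SPEAKER_PIN, melody[0], noteLengthMs[0]);
}

void loop() {
  // advance to the next note when the current one (plus a short pause) is done
  if (millis() - noteStartedAt >= (unsigned long)(noteLengthMs[currentNote] * 1.3)) {
    currentNote = (currentNote + 1) % NUM_NOTES;   // "turn over" without blocking
    noteStartedAt = millis();
    tone(SPEAKER_PIN, melody[currentNote], noteLengthMs[currentNote]);
  }

  // ...line following and motor control continue here every pass...
}
```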

Parts

The parts used in manufacturing the car are as follows:

  • 1 Arduino Uno
  • 1 L293 Motor Driver
  • 1 QTR-8A IR array sensor
  • 2 DC motors
  • 2 wheels
  • 1 ball bearing caster for the front wheel
  • 1 ½ sized breadboard
  • 1 button switch
  • 1 sonar sensor
  • 1 220k resistor
  • 2 9V battery packs
  • cardboard for the body

Challenges

There was a fair amount of debugging to get the QTR-8A array to properly control the wheel directions and speeds. We needed to ensure enough power was sent to all of the components.

The solution was to separate the power for the Arduino and sensors from the power for the two DC motors. To get this circuit to work we also had to make sure all of the parts shared a common ground.

We also had to do some physical debugging for the orientation of the vehicle in the first challenge.  Considerations included how thick the line used by the IR sensor needed to be and at what turn radiuses the sensor was still able to continue following the line.

Courses

The courses that we were given are shown below. We were given them at actual size so that we could test and debug our cars before the final presentation.

IMG_1665

Course One.

IMG_1703

Course Two.

Testing

While testing we saw that the vehicle often worked but would occasionally deviate from the line. From debugging and analyzing these situations, we were fairly certain the cause was an inconsistent line colour. We had used markers to draw the lines, but the black wasn't always dark enough in places, and the line edges were not perfectly straight, which affected the sensors. To solve this we used black electrical tape for the line. After implementing this solution, the vehicle's performance was more consistent.

We also came up with ways to try and correct any navigation errors by adding additional lines to the course. This kind of worked.

The interesting aspect of the error checking lines is that they are visual representations of issues that could be added back into the code. Developers add error checking into their code to help control the end product of the executed code. Essentially we have visual artifacts of these error checks on the course, which we believe is very fascinating.

Final product

With our current setup, the car successfully navigates and completes courses one and two. The difficulty came when we journeyed onto course three.

IMG_1678 IMG_1692

 

Improvements

After going through the first two challenges we were presented with the mystery course three. We had 45 minutes to come up with a solution that would allow our vehicle to navigate this challenge.

IMG_1705

The Infamous Mystery Course Three!

In this time, we realized that even though our car was semi-autonomous, we still had some hard-coded values and areas where we had made assumptions based on the known courses but hadn't accounted for the third course. For instance, we didn't consider making multiple turns on and off a patterned or textured surface that could prevent the IR array sensor from working correctly.

In the 45 minutes we had to update our car, we were able to make some tweaks so the vehicle could begin handling more complicated courses. We did this by switching the purpose of the sonar sensor: once the car found its orientation, the sonar sensor was repurposed to look for pillars on the course, telling the car to make a required left turn.

Reflecting on how we could improve our vehicle, it's clear we could improve our logic. Instead of trying to find a line and follow it, ensuring the car doesn't leave that line, we could reconfigure the logic so the car goes straight until a sensor tells it to change course. This change offers greater flexibility over a variety of scenarios.

If we had a total of four sonar sensors, one on each side of the vehicle, we could give each sensor a specific task in the navigation process. The vehicle could drive straight as long as the front sonar returns a valid distance, while the left and right sonars look for physical signifiers indicating which direction the car should turn; once the turn is executed, it continues moving forward.
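A rough sketch of that proposed logic follows; this is an idea for a next iteration, not code we ran, with assumed pins and print statements standing in for the actual motor routines.

```cpp
// Sketch of the proposed four-sonar navigation: drive straight while the
// front is clear, turn when a side sonar detects a marker. Pins are assumed.
#include <NewPing.h>

NewPing front(2, 3, 200), rear(4, 5, 200), left(6, 7, 200), right(8, 9, 200);
// (rear reserved here for backing-up checks)

const unsigned int OBSTACLE_CM = 25;   // front blocked closer than this
const unsigned int MARKER_CM   = 40;   // side pillar/marker within this range

void setup() { Serial.begin(9600); }

void loop() {
  unsigned int f = front.ping_cm();
  unsigned int l = left.ping_cm();
  unsigned int r = right.ping_cm();

  if (l > 0 && l < MARKER_CM) {
    Serial.println("turn left");       // placeholder for the actual turn routine
  } else if (r > 0 && r < MARKER_CM) {
    Serial.println("turn right");
  } else if (f == 0 || f > OBSTACLE_CM) {
    Serial.println("drive straight");  // front clear (0 means no echo in range)
  } else {
    Serial.println("stop");
  }
  delay(100);
}
```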

Schematic

This was done with Fritzing and is exported below.

autonomous-car_bb

Code

We have the code for our vehicle on Github under Creation & Computation: Autonomous Vehicle.

Project by: Jordan Shaw, Manik Perera Gunatilleke and Alison Bruce.

 

Research Project: Volume by United Visual Artist

United Visual Artists (UVA) was founded in 2003 in London, UK by Matthew Clark, Chris Bird, and Ash Nehru. Originally, the three came together to create lighting and visuals for a concert by the London-based music group Massive Attack. Since then, they have showcased their work through many exhibitions and galleries and have won many awards. The group has been featured in multiple publications, winning award after award for their creative approach to combining architecture, live performances, installations, sculptures, and technology.

UVA has done work in Russia, the UK, Australia, Hong Kong, Paris, and other cities across the globe. Apart from working with lasers, radars, and scanners, UVA also lectures internationally. They tour different universities and even came to Toronto to speak to students at both Ryerson University and OCAD University in September 2011.

Volume

As described on UVA’s site:

“UVA’s large-scale installation Volume first appeared in the garden of London’s V&A museum in 2006 and has since traveled as far as Hong Kong, Taiwan, St. Petersburg and Melbourne.

It consists of a field of 48 luminous, sound-emitting columns that respond to movement. Visitors weave a path through the sculpture, creating their own unique journey in light and music.

The result of a collaboration with Massive Attack, Volume won the D&AD Yellow Pencil in 2007 for Outstanding Achievement in the Digital Installation category.”

 

Inspiration:

The inspiration for Volume was both the PlayStation brief, which was to engage people in an emotional experience for the launch of the PS3 in the UK, and Monolith, an installation displayed by UVA at the John Madejski Garden on Onedotzero's Transvision night. Monolith emits soothing colours, and you hear calming sounds when no one is near. As people approach, the colours and sounds become louder and harsher, forcing people to step back to find their comfort zone. Monolith wasn't entirely successful from an interaction point of view: it drew more people than anticipated, so it spent too much time in ‘overload’ mode. But it did ‘work’ in that it created a powerful aura and transformed the space.

monolith-1_medium

3 interaction layers

ppt-5

Model Overview

ppt-6

Technical Overview

To create Volume, UVA would have used their proprietary software and hardware called D3. D3 allows artists to control many different pieces of hardware and tools for installations, performances, and other visuals. In the case of Volume, D3 controls 48 LED towers, infrared cameras, and 48 individual speakers, one located in each of the LED towers.

The D3 software has capabilities for Real Time Simulation, Projector Simulation, Sequencing, Content Mapping, Playback, Configuring Output, d3Net, Show Integration, and Backup & Support. To run the software efficiently, D3 also offers purpose-built hardware for running it.

d34x4_web_2

backpanel4x4

As mentioned above, D3 does real-time simulations of the artwork. Here's a screenshot of one of the available simulations for Volume. You can see that there is a timeline for the different events and interactions, as well as a digital representation of each of the 48 LED towers.

xl

For motion tracking we suspect that they are using some sort of IR array grid system similar to the illustration below, found on the Panasonic website. This method would allow participants' locations to be tracked and monitored relatively simply, while keeping costs down by minimizing the number of IR cameras required in the installation.

grid-eye_infrared_array_sensor_graphic_-_panasonic_industrial_devices

The audio for Volume was written by Massive Attack, since UVA had an existing relationship with the band. The sound artist Simon Hendry has also worked on additional effects for multiple iterations of the installation. The D3 software and hardware connect to the installation's audio via MIDI (Musical Instrument Digital Interface) controls into the music production software Logic Pro.

Prototype

To prototype the Volume installation we went through multiple iterations. In the first iteration we tried out an 8x8 RGB LED matrix and a MAX7219 LED driver chip.

IMG_1384 IMG_1395

We got it up and running but were unable to control individual LEDs or display unique colours. So instead we switched to the 74HC595 chip, bit-shifting the control output from the Arduino so that three pins can control 8 LEDs per chip. Each chip can be daisy-chained to another chip, which controls a further 8 LEDs.
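A small sketch of that idea, assuming typical 74HC595 wiring with placeholder pin numbers: three pins clock a byte out to each chip in the chain, and the latch updates all outputs at once.

```cpp
// Sketch of driving two daisy-chained 74HC595s with three Arduino pins.
// Pin numbers are placeholders, not our prototype's exact wiring.
const int DATA_PIN  = 11;  // DS
const int CLOCK_PIN = 12;  // SHCP
const int LATCH_PIN = 8;   // STCP

void writeLeds(byte firstChip, byte secondChip) {
  digitalWrite(LATCH_PIN, LOW);
  // Shift the byte for the farthest chip first; it ripples down the chain.
  shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, secondChip);
  shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, firstChip);
  digitalWrite(LATCH_PIN, HIGH);   // latch: all 16 outputs update at once
}

void setup() {
  pinMode(DATA_PIN, OUTPUT);
  pinMode(CLOCK_PIN, OUTPUT);
  pinMode(LATCH_PIN, OUTPUT);
}

void loop() {
  // Walk a single lit LED across the 16 chained outputs.
  for (int i = 0; i < 16; i++) {
    unsigned int pattern = 1u << i;
    writeLeds(pattern & 0xFF, (pattern >> 8) & 0xFF);
    delay(100);
  }
}
```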

IMG_1423 IMG_1469 2

There is a lot of wiring, but it is a good way to communicate with multiple LEDs while still keeping some pins on the Arduino open for other sensors.

For the user motion and location detection we are using the sonar sensor; for the audio we are using the Arduino library Mozzi, with a small speaker connected directly to the Arduino.
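For reference, this is the basic shape of a Mozzi sketch (the classic Mozzi 1.x structure), with the sonar distance mapped to pitch as one way the pieces could connect. The rates, mapping, and stubbed sonar reader are assumptions rather than our exact prototype code.

```cpp
// Skeleton of a Mozzi sketch: updateControl() adjusts the oscillator, and
// updateAudio()/audioHook() generate the audio stream. The distance-to-pitch
// mapping is only an illustration of how the sonar could drive the sound.
#include <MozziGuts.h>
#include <Oscil.h>
#include <tables/sin2048_int8.h>

#define CONTROL_RATE 64              // control updates per second

Oscil<SIN2048_NUM_CELLS, AUDIO_RATE> osc(SIN2048_DATA);

// Placeholder: the real prototype reads the sonar sensor here.
int readSonarCm() {
  return 100;
}

void setup() {
  startMozzi(CONTROL_RATE);
  osc.setFreq(220);
}

void updateControl() {
  int cm = readSonarCm();
  // closer visitor -> higher pitch, loosely echoing Volume's responsiveness
  osc.setFreq((int) map(constrain(cm, 10, 200), 10, 200, 880, 110));
}

int updateAudio() {
  return osc.next();                 // Mozzi's audio output pin is 9 on an Uno
}

void loop() {
  audioHook();                       // keeps the audio buffer filled
}
```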

Schematic

C_C_UVA_Volume_bb

 

Prototype Video

 Final Video

Github Source Files

Github source for the master Arduino that runs the LEDs and sensor.

Github source for the Arduino that runs the Mozzi instance.

StopLight & Pedestrian Crosswalk

IMG_1349 IMG_1347

I was thinking about a device that everyone uses every day, and settled on a stoplight. There are multiple levels of complexity to it, and it involves a couple of timers, which I am still trying to wrap my head around when there are more than two things happening at once. My initial goal was to write this as an Arduino library so that, theoretically, someone with an Arduino Mega could set up a mini city grid.

Stoplights work on timers and sensors. A default timer alternates which direction of traffic flows at a regular interval. However, if a person or a car is waiting, a sensor lets the system know. When this message is received, the default timer for switching the lights is overridden by a shorter wait time for whoever triggered the sensor.
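As a minimal illustration of that timer-plus-sensor idea (my own sketch, not the library I was attempting to write), a millis()-based cycle can have its green phase cut short when a pedestrian button is pressed.

```cpp
// Minimal timer-plus-sensor stoplight: a millis()-based light cycle whose
// green phase is shortened when the pedestrian button is pressed.
const int NS_GREEN = 2, NS_YELLOW = 3, NS_RED = 4;   // north-south lights
const int BUTTON   = 7;                              // pedestrian request

const unsigned long NORMAL_GREEN_MS  = 20000;
const unsigned long REQUEST_GREEN_MS = 5000;         // shortened by a request
const unsigned long YELLOW_MS        = 3000;

enum Phase { GREEN, YELLOW, RED };
Phase phase = GREEN;
unsigned long phaseStart = 0;
unsigned long greenLength = NORMAL_GREEN_MS;

void showPhase(Phase p) {
  digitalWrite(NS_GREEN,  p == GREEN);
  digitalWrite(NS_YELLOW, p == YELLOW);
  digitalWrite(NS_RED,    p == RED);
}

void setup() {
  pinMode(NS_GREEN, OUTPUT); pinMode(NS_YELLOW, OUTPUT); pinMode(NS_RED, OUTPUT);
  pinMode(BUTTON, INPUT_PULLUP);
  phaseStart = millis();
  showPhase(phase);
}

void loop() {
  // a pedestrian request cuts the remaining green time short
  if (phase == GREEN && digitalRead(BUTTON) == LOW) {
    greenLength = REQUEST_GREEN_MS;
  }

  unsigned long elapsed = millis() - phaseStart;
  if (phase == GREEN && elapsed >= greenLength) {
    phase = YELLOW;
  } else if (phase == YELLOW && elapsed >= YELLOW_MS) {
    phase = RED;
  } else if (phase == RED && elapsed >= NORMAL_GREEN_MS) {  // cross traffic's turn
    phase = GREEN;
    greenLength = NORMAL_GREEN_MS;                          // reset to default
  } else {
    return;                                                 // no phase change yet
  }
  phaseStart = millis();
  showPhase(phase);
}
```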

I was able to find a tutorial which helped me through the timer business. It was useful to have something complex as a reference. However I consistently ran into issues trying to get a library written.

IMG_1382

 

 

Potential Enhancements

A way to make this example more like a real stoplight would be to add additional sensors for the cars. I was looking around to see what could recognize a car waiting. Since real stoplights seem to use magnetic detection for cars, I found this SparkFun Hall effect sensor, which is a magnetic switch and would probably do the trick.

Schematic

StopLight

 

Reference:

https://www.arduino.cc/en/Hacking/LibraryTutorial

https://www.arduino.cc/en/Reference/APIStyleGuide

http://www.instructables.com/id/Traffic-Lights-Beginner-Arduino-Project/?ALLSTEPS

http://www.electroschematics.com/10178/arduino-traffic-light-system/

 

Social Artifacts

IMG_1338

Social Artifacts is a look at how experiences shared through digital mediums can have their meaning, or the context of the “now”, manipulated, misinterpreted, or even lost.

We all share our experiences on social media, but we often overlook how those experiences become altered through the loss of context and of all the sensory information we were exposed to. Your followers' understanding is often a simpler version of what you experienced, because they did not have access to the same sensory information you did.

Documentation

Social Artifacts reads the analog audio output from the Digibird, interprets the original audio, then manipulates it to varying degrees controlled by a potentiometer. The manipulated audio is then broadcast to a secondary speaker. The secondary speaker in this scenario is hidden in the tree, as if it were a speakerphone of the digital world. This reinterpreted audio uses the same frequency as the original Digibird output, but is simplified to purely digital tones, which overpower the original higher-quality audio.
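A rough sketch of one way this can work (an assumption about the approach, not the project's exact code): estimate the bird's pitch from threshold crossings on the analog input, then re-emit it as a square wave with tone(), detuned by the potentiometer.

```cpp
// Illustration only: approximate the incoming pitch by counting rising
// threshold crossings in a short window, then replay it as a square wave
// skewed by the potentiometer. Pins and scaling are assumptions.
const int AUDIO_IN   = A0;   // Digibird speaker line (suitably conditioned)
const int POT_IN     = A1;
const int SPEAKER    = 9;
const int THRESHOLD  = 512;  // midpoint of the 10-bit ADC range

void setup() {}

void loop() {
  // Count rising crossings of the threshold over a 50 ms window.
  unsigned long windowStart = millis();
  int crossings = 0;
  bool above = analogRead(AUDIO_IN) > THRESHOLD;

  while (millis() - windowStart < 50) {
    bool nowAbove = analogRead(AUDIO_IN) > THRESHOLD;
    if (nowAbove && !above) crossings++;
    above = nowAbove;
  }

  // crossings per 50 ms -> approximate frequency in Hz
  int freq = crossings * 20;

  // The pot skews how faithfully the pitch is reproduced.
  int skew = map(analogRead(POT_IN), 0, 1023, -200, 200);

  if (freq > 100) {
    tone(SPEAKER, constrain(freq + skew, 100, 4000), 60);
  } else {
    noTone(SPEAKER);
  }
}
```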

During this project, while I was playing around with the bird's speaker cables, the speaker connection fell off of the chip altogether. I tried to solder the speaker back onto the bird's chip without luck, as illustrated in the image below. To solve this problem I biked over to Toys “R” Us to pick up a new Digibird :S

IMG_1320

 

To give an idea of the “prototype” outside of the box, this is what the whole project looked like in its raw state.

IMG_1313

Schematic

The Arduino prototype board is a replacement for the Digibird.

 

BirdSchamatic_bb

The source for the project is also posted on Github.