Category Archives: GOLF

GOLF Project – Shenzhen Bat, The Line Following Autonomous Robot

 

shenzenbatcover

Vision evolved

The Name


The bot was named after Shenzhen, the famous technology city in southern China. The city is known for hosting a great number of companies that manufacture electronic parts, including the ones we used on our project, such as the DC motors and sensors. The robot also carries Bat in its name for a very simple reason: it uses ultrasonic sensors to detect walls and barriers, and it is mounted on a beautiful shiny black chassis.

The Flow


Stage 0 (Start)

  • Student puts the ping pong ball within the arms of the robot
  • Volunteer orients the robot towards any direction at the starting position.

Stage 1

  • Robot finds the correct direction to drive towards
  • Robot drives until track is identified

Stage 2

  • Robot drives following the line
  • Robot reaches / identifies destination or ending area

Stage 3 (End)

  • Robot stops driving
  • Robot releases the ping pong ball
  • Robot drives backwards slightly

The Design Details


shenzenbatcover2

The robot is designed to follow a black line on a white plane. 

Initial Route Finding

  • When the robot initializes it uses an ultrasonic sensor to determine if an obstacle is in front of it. If obstacles are detected the robot spins rightwards until it finds an open route.
  • After a route is found the robot begins the calibration process. 

Line-following Algorithm

  • When the robot is oriented towards an open route it begins to calibrate its line-detecting algorithm by reading black/white values from the six sensors and rotating left and right to get a range of samples.
  • After calibration is complete, the robot identifies the black line and starts moving forward.
  • The robot continuously calculates the “error” value, which represents the robot’s position relative to the black line (i.e., whether the line is to the left or right of the robot).
  • The robot uses the “error” value to determine how much power to send to the left and right motors in order to stay on (or relocate) the track. For instance, if the left and right motor values are represented by a tuple such as (60, 0), the robot will turn right, as the left motor runs at 60 while the right motor stops (see the sketch below).
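To make the tuple example concrete, here is a minimal sketch of how an error value could be turned into left/right motor powers. The pin numbers, the gain constant, and the readLineError() stub are placeholders for illustration, not our exact code.

```cpp
// Minimal sketch of mapping a line-position "error" value to left/right motor power.
// Pin numbers, constants, and readLineError() are hypothetical placeholders.
const int LEFT_PWM  = 5;   // assumed PWM pin driving the left motor
const int RIGHT_PWM = 6;   // assumed PWM pin driving the right motor
const int BASE_SPEED = 60; // forward speed when centred on the line
const float GAIN = 0.05;   // proportional gain, tuned by trial and error

int readLineError() {
  // Placeholder: a real version would compute this from the calibrated
  // sensor-array readings (negative = line to the left, positive = to the right).
  return 0;
}

void setMotors(int left, int right) {
  analogWrite(LEFT_PWM,  constrain(left,  0, 255));
  analogWrite(RIGHT_PWM, constrain(right, 0, 255));
}

void setup() {
  pinMode(LEFT_PWM, OUTPUT);
  pinMode(RIGHT_PWM, OUTPUT);
}

void loop() {
  int error = readLineError();
  int correction = (int)(GAIN * error);
  // A large positive error drives the right motor toward 0 while the left
  // motor keeps running, so the robot turns right (the (60, 0) case above).
  setMotors(BASE_SPEED + correction, BASE_SPEED - correction);
}
```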

Goal Detection

  • If, during the line-following routine, the robot detects an obstacle in front of it using the ultrasonic sensor, it stops driving. The robot then lifts its arms to drop the cargo (i.e., the ping pong ball) and slowly drives backwards for a moment before stopping completely (see the sketch below).
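A rough sketch of this stop-and-release behaviour is shown below. The pins, the distance threshold, and the servo angle are assumptions, and only one of the two arm servos is shown; it illustrates the flow rather than reproducing our actual routine.

```cpp
// Sketch of the stop-and-release behaviour when an obstacle is detected ahead.
// Pins, thresholds, and servo angles are illustrative assumptions.
#include <Servo.h>

const int TRIG_PIN = 7;
const int ECHO_PIN = 8;
const int LEFT_PWM = 5;
const int RIGHT_PWM = 6;
Servo arm; // one of the two arm servos on the real robot

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000); // timeout to avoid long stalls
  return duration / 58;                           // microseconds -> centimetres
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(LEFT_PWM, OUTPUT);
  pinMode(RIGHT_PWM, OUTPUT);
  arm.attach(9);
}

void loop() {
  long d = readDistanceCm();
  if (d > 0 && d < 10) {      // wall closer than ~10 cm: treat as the goal
    analogWrite(LEFT_PWM, 0); // stop driving
    analogWrite(RIGHT_PWM, 0);
    arm.write(90);            // lift the arm to release the ball
    delay(500);
    // A real version would then drive backwards briefly before halting.
    while (true) { }
  }
  // Otherwise the line-following code would run here.
}
```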

Schematic Design

The following diagram depicts the schematic of the robot and the components used. The second ultrasonic sensor is not connected, as it is only used when ball capturing is enabled.

The schematic diagram of Shenzhen Bat, illustrated using Fritzing.

The Design Process


ShenzhenBatDesignProcessDiagram

This diagram describes the five main areas we explored in building Shenzhen Bat: control, motors, sensors, chassis design, and the actual coding. It shows where we changed ideas, which components didn’t work out, and which components and functions made it into the finished robot. More details of our work and thought process are introduced in the following sections.

The Code


We programmed using the Arduino IDE and the Processing IDE and managed versions using GitHub. We created multiple branches for different features. What we used for the demonstration was our “plan B” – using a single ultrasonic sensor for wall detection without ball capturing. Our “plan A” was to put an ultrasonic sensor close to the ground so that it would detect the ball, trigger the arm movement to “capture” it, and then resume driving.

The code references a variety of library examples as well as the 3pi Robot PID control, but the main logic was created by ourselves based on the requirements of the GOLF project. We tried to write modular code by putting each particular sequence of actions into a single helper function. We also have a DEBUG mode for printing useful information to the serial console, which really helped us troubleshoot and understand the states of the robot.

A list of code references:

Revisions


DC Motors + QTR-8A ANALOG LINE SENSOR ARRAY

Initially, we used the cheapest 5V 83RPM DC motor sets for the robot’s movement. The algorithm we used was taken from this tutorial, which uses the same sensor array and two continuous servo motors. We also used a DUAL MOTOR DRIVER to control the motors.

After we adjusted the code and connected the wires, the robot moved for the first time and followed a simple route. But somehow one of the readings from the sensor array was peaking irregularly, causing the robot to go off the track.

Another roadblock we weren’t aware of was that our two DC motors were running at different speeds, even with the same voltage applied to both. Our potential solutions to this problem were to switch from the QTR sensor to IR sensors, manually calculate and apply a voltage offset to each motor, or modify the sensing/driving algorithm.

Code sample used: http://diyhacking.com/projects/DIY_LineFollower.ino

DC Motors + 5 IR Sensors Rev1

We made our own “sensor array” using five IR sensors. We got more stable readings from the sensors, but the robot was still unable to manage sharp turns and tended to overshoot. Another problem was heavy consumption of jumper wires and resistors. We used a full breadboard for hooking up all the wires. We eventually decided to revert back to the more compact QTR sensor (see below). 

DC Motors + 5 IR Sensors Rev2

We found a better algorithm in another tutorial. The algorithm seemed to handle very sharp turns at high speed using some custom error-correction mechanisms. More specifically, it converted the sensor readings to a binary representation, and then determined the relative power to each motor by looking up a self-defined mapping between binary values and error adjustment values. After we incorporated the new algorithm into our robot, it tended to swerve off course to the right. At that point it was clear that the speeds of the two motors were different when given the same power, and the algorithm itself was not enough to compensate. We suspected that either there wasn’t enough power for the motors or the chassis was too heavy and imbalanced for them. These hypotheses led to the next iteration.

Code sample used: http://42bots.com/competitions/arduino-line-following-code-video/
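The general shape of that binary mapping could look roughly like the following. The thresholds, pin numbers, and error table below are illustrative guesses rather than the tutorial’s exact values.

```cpp
// Sketch of converting five IR sensor readings into a binary pattern and
// looking up an error value from a hand-defined mapping.
// Thresholds, pins, and error values are illustrative assumptions.
const int SENSOR_PINS[5] = {A0, A1, A2, A3, A4};
const int THRESHOLD = 500; // analog reading above this counts as "black"

int readPattern() {
  int pattern = 0;
  for (int i = 0; i < 5; i++) {
    if (analogRead(SENSOR_PINS[i]) > THRESHOLD) {
      pattern |= (1 << i); // set bit i when sensor i (counted from the right) sees the line
    }
  }
  return pattern; // e.g. 0b00100 = line centred, 0b00001 = line far right
}

int patternToError(int pattern) {
  switch (pattern) {
    case 0b00100: return 0;   // centred
    case 0b00110: return 1;   // slightly right
    case 0b00010: return 2;
    case 0b00011: return 3;
    case 0b00001: return 4;   // far right
    case 0b01100: return -1;  // slightly left
    case 0b01000: return -2;
    case 0b11000: return -3;
    case 0b10000: return -4;  // far left
    default:      return 0;   // line lost or ambiguous: keep previous behaviour
  }
}

void setup() {}

void loop() {
  int error = patternToError(readPattern());
  // error would then be used to adjust the power sent to each motor
}
```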

METAL GEAR MOTOR – 6V 100RPM + 5 IR Sensors

We bought a new set of motors with higher RPM and greater torque. Since the motors could handle more weight, we laser cut an acrylic chassis, updated the design, and relocated all components onto it. The results were, however, disappointing, and the robot was still unable to reliably follow the line. We discovered that (1) since the motors were drawing power from the Arduino, they did not get enough voltage, and (2) the algorithm was still not robust enough. We then discovered other existing libraries for QTR sensor arrays which used what is referred to as a PID control algorithm. We looked into different implementations of PID control, such as the Arduino PID library, the AutoTune library, this tutorial, and eventually the most useful one, which the 3pi Robot uses. At this point, we decided to switch back to the QTR sensor array with a new implementation of PID control.

METAL GEAR MOTOR – 6V 100RPM + QTR-8A ANALOG LINE SENSOR ARRAY

We made large improvements to the mechanical design of our robot by creating a sensor array holder in the front and adding a front wheel for smoother turns. We drew many insights on motor selection, battery selection, chassis material, and the PID algorithm from this guide.

With the adoption of the libraries, the sensor array produces an “error” value that can be used for the PID computation (a sketch follows below). Trial after trial, tweak after tweak, the robot followed the line and made good turns for the first time! With this breakthrough, we felt comfortable working on track visualization, wall detection, bluetooth positioning, ball capture/release mechanisms, and a refined mechanical design.
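The heart of that computation is a small PID (in practice mostly PD) update. The sketch below shows the general shape of it, assuming the error has already been read from the calibrated sensor array; the gain constants are placeholders to be tuned, not our final values.

```cpp
// Minimal sketch of the PID update used for line following. The error is
// assumed to come from the sensor array's readLine()-style function, centred
// so that 0 means "on the line". All constants are placeholder values.
const float KP = 0.08;   // proportional gain
const float KI = 0.0005; // integral gain (often left near 0 for a PD controller)
const float KD = 1.5;    // derivative gain (reacts to how fast the error changes)
const int MAX_SPEED = 100;

int lastError = 0;
long integral = 0;

void computeMotorSpeeds(int error, int &leftSpeed, int &rightSpeed) {
  integral += error;
  int derivative = error - lastError;
  lastError = error;

  int correction = (int)(KP * error + KI * integral + KD * derivative);

  leftSpeed  = constrain(MAX_SPEED + correction, 0, MAX_SPEED);
  rightSpeed = constrain(MAX_SPEED - correction, 0, MAX_SPEED);
}

void setup() {}

void loop() {
  int error = 0; // placeholder: would be read from the calibrated sensor array
  int left, right;
  computeMotorSpeeds(error, left, right);
  // left and right would then be written to the motor driver with analogWrite()
}
```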

LINE FOLLOWING + WALL / BALL DETECTION

The first working model was to use obstacle avoidance with ultrasonic sensors for initial direction correction as well as endpoint detection. However, there was a problem with differentiating between the wall and the ball. Our attempted solution was to place two ultrasonic sensors one above the other. The top sensor would detect walls and obstacles, while the bottom sensor would detect the ball and trigger a temporary stop for the arms to be lifted and lowered.

In our control logic we enabled the sensors at different times so that the robot would capture the ball while driving and release it at the endpoint. One issue we ran into was the lag in each loop caused by the two ultrasonic sensors, which interfered with the timing of the PID control. To make sure it wasn’t a hardware failure, we tested the ultrasonic sight range for distance, the vertical sensing height of the cone, and the tilt angle of the wall (as the ultrasonic pulse could have been bouncing back from the ground ahead).

TestingDistance2

The robot occasionally succeeded, with both sensors enabled, in driving from start to finish and correctly releasing the ball. But due to the low success rate, we chose to take the bottom sensor out. This became our “Plan B”, in the sense that the ball would not be captured by the robot but rather set within its arms by the user. We also tried to develop bluetooth-to-bluetooth communication in order to point the second ultrasonic sensor at the robot itself from an external beacon.

Another issue we had with using two ultrasonic sensors was the lack of digital pins on the Arduino. We briefly considered using a shift register for pin extension, but the problem was easily fixed by connecting each sensor’s ECHO and TRIGGER pins to a single shared pin (see the sketch below).
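For anyone curious, the single-pin trick works by toggling the shared pin between output (to fire the trigger pulse) and input (to time the echo). The sketch below is one possible version, with an assumed pin number and timeout; it is not lifted from our repository.

```cpp
// Sketch of driving an HC-SR04-style sensor with ECHO and TRIG tied to one pin.
// The pin is switched to OUTPUT to send the trigger pulse and back to INPUT
// to time the echo. Pin number and timeout are assumptions.
const int PING_PIN = 7;

long readDistanceCm() {
  pinMode(PING_PIN, OUTPUT);
  digitalWrite(PING_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(PING_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(PING_PIN, LOW);

  pinMode(PING_PIN, INPUT);
  long duration = pulseIn(PING_PIN, HIGH, 30000); // 30 ms timeout
  return duration / 58; // convert microseconds to centimetres
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(readDistanceCm());
  delay(100);
}
```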

LINE FOLLOWING + BALL CAPTURE/RELEASE 

Our initial idea was to create a launcher to shoot the ball to the target/destination. One way to do this would be to bring two DC motors close to each other and have them rotate in opposite directions. If the ping pong ball made contact with the two rotating wheels, it would be shot forwards or sucked in. This method, however, adds too much weight to the robot and is inefficient, costing the robot more power and digital pins.

We also attempted to capture the ball using fans to pull the ball toward the robot. We discovered that by overlapping multiple fans we increased their ability to pull in or push away objects. We chose not to pursue this method however due to a lack of precision and concerns for reliability. We settled on what we considered to be the most straightforward method and used two servo motors to create a moving frame, or “arms”, for capturing. This mechanism required the least resources, was robust, and was a good mechanistic exercise. The robot proved to have no problems carrying a ball with the arms.

LINE FOLLOWING + VISUALIZATION WITH PROCESSING 

Once we achieved proof of concept with the PID motor control and successful sensing functions, we decided to begin research and development into visualizing the robot’s movement path and behaviour using Processing and serial communication. This part of the project made sense considering the next project in the course would be focused on using serial communication and Processing. The Processing code is custom designed and developed.

Getting serial communication was a bit of a hurdle considering we had never worked with it before. Getting the serial information from Arduino onto Processing took some work but there are good tutorials for beginners online.

We have uploaded the Processing and Arduino code onto GitHub. It has currently been set up as a simulation so anyone can run it and see it (the serial communication code has been commented out so people can refer to it).  Every line of the Processing code has been commented for easy use and understandability. The most important functions we learned through this exercise in Processing were the translate() function (which moves a drawn object), the rotate() function (which rotates an object about its origin point based on specific angle values), and the pushMatrix/popMatrix() functions (which control which objects are affected by outside operations like the translate() and rotate() functions).

The visualization basically worked by getting commands from Arduino which were used to update the Processing algorithm. The basic commands were “drive left”, “drive right”, “drive straight”, “calibration mode”, and “finale mode”. Depending on the commands the visualization would draw the car moving left, right, straight, or show text on the screen saying “Calibration!” or “Release the ball!”. A downloaded image of a world map was also loaded and pasted onto the center of the Processing screen. THE SIMULATION WILL NOT WORK UNLESS YOU DOWNLOAD THE IMAGE. OTHERWISE, COMMENT OUT ANY LINES THAT REFERENCE “image” or “img”.

We decided to send the serial data from Arduino onto Processing using bluetooth rather than USB. Some tips regarding this process include (1) you can’t upload new sketches into Arduino through USB if the bluetooth is plugged in and (2) the TX and RX communication pins on the bluetooth must be connected to the RX and TX pins on the Arduino, respectively.

If you want to send multiple values to Processing from Arduino, you have to do a simple workaround. Arduino sends data one byte at a time, and Processing must fill an array one byte at a time before one iteration of communication is complete (a sketch of the Arduino side is shown below). A useful tutorial for beginners can be found here (https://www.youtube.com/watch?v=BnjMIPOn8IQ).
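On the Arduino side, one common workaround is to send the values as comma-separated text terminated by a newline, so that Processing can buffer until the newline and split the string. The command codes below are hypothetical placeholders, not our exact protocol.

```cpp
// Arduino-side sketch of sending several values per update so the Processing
// sketch can read one packet at a time. Command codes are illustrative.
void setup() {
  Serial.begin(9600);
}

void loop() {
  int command = 2;     // e.g. 0 = straight, 1 = left, 2 = right, 3 = calibration, 4 = finale
  int speedLeft = 60;  // placeholder motor values to visualize
  int speedRight = 0;

  Serial.print(command);
  Serial.print(',');
  Serial.print(speedLeft);
  Serial.print(',');
  Serial.println(speedRight); // the newline marks the end of one packet

  delay(100);
}
```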

ROTATION SPEED WITH SPEED ENCODER

As described above, our motors were rotating at different speeds despite identical voltages being passed through each. We initially thought we could solve this by offsetting the voltage sent to each motor in our code. To precisely determine rotation speeds of each motor in relation to input voltage we built a speedometer. We did not use this solution as the PID algorithm compensated for this offset. More details on the rotary/speedometer device can be found in Michael Carnevale’s Daily Devices project in the course records.

ALTERNATIVE PLAN – BLUETOOTH DISTANCE FINDING AND BEACONS 

When we first managed to get the robot to follow the line, and realized we needed error correction, we started exploring the possibility of wireless navigation. We looked into GPS modules but couldn’t use them effectively indoors. We then explored bluetooth RSSI values. They’re typically used to express signal strength on cellphones and laptops. But since RSSI varies with the distance from one bluetooth module to another, it looked like a viable option to locate the robot without relying on sensor vision. The idea was to position 3 modules in a triangle around the car, and a fourth on the car, to imitate how satellites triangulate outdoor GPS units.

Exploring RSSI, we found out that the values returned are never as linear in the real world as they are in ideal conditions. Every bluetooth device, wall, and wireless obstruction adds noise to the values. They’re reliable within 0-10m, but vary too closely to one another past 10 meters, and can still give false readings within 10 meters. This meant that they weren’t reliable enough for triangulating 3 bluetooth modules. They can be used for a basic increasing/decreasing polarity of distance (i.e. “are we closer to the goal, or further away?”), but the reliable range of values requires a course length of at least 10 meters to vary across. Our test course was half a meter.

Wifi signal strength seemed like another alternative for distance finding, but suffered from the even worse issues of wireless interference and unreliability.

Pozyx, a Kickstarter company, promised an accurate indoor bluetooth positional tracking system. With anchors around the walls, their trilateration system promises centimeter accuracy, but they weren’t releasing their product until December 2015.

ShenzhenBatAlt2

The last bluetooth locating option to explore was an external beacon. We used this tutorial (link) and its sample script to try to set up a pair of beacons that could communicate with the bluetooth module on the robot. The script added a level of complexity by letting each module have a fluid, changeable role (master or slave), so the protocol could be reused for the Tamagotchi project. Detecting roles and issuing AT commands dynamically, at potentially different baud rates, proved too unreliable, so we moved on to pre-setting each module with an assigned role. Later iterations will look at better modules that allow direct setting of master/slave roles.

Final Notes


On the day of the presentation, our robot did not quite finish the second course, even though it had done so successfully during past tests. At that time, we started to wonder whether the cause was the PID control logic, the batteries, or the surface it drove on. We had not been able to test it thoroughly with extreme cases and in different environments. In fact, we did quite a few drive tests with black tape on a light wooden table and with a white poster on a dark wooden floor and a light wooden table, but not on the uneven carpet. We had experienced low batteries before, but not interference with the bluetooth signal (for the visualization).

Overall, I still think we covered a good range of conditions and made reasonable design choices on control logic, materials, and parts. Some improvements I would consider include further tweaking of the PID constants, reducing the lag caused by the ultrasonic sensor, synchronizing the speed of the two motors programmatically, and finding a more effective power source.

Credits


ShenzhenBatCrew
Photo taken by Hammadullah Syed on November 5th 2015

These names will be written in the history of autonomous robotics forever.

  • Alex Rice-Khouri
  • Davidson Zheng
  • Marcelo Luft
  • Michael Carnevale

PROJECT GOLF: I.R. Sensor Autonomous Arduino Car

Introduction

The assignment proposed was to create an autonomous artifact which transports a ping pong ball from point A to point B through 2 different courses, following specific directions.

To achieve this goal, infrared receiver/emitter technology is used to actuate an arduino based car. The car is the receiver of IR signals, which are emitted by beacons strategically placed on the proposed course, navigating the vehicle to its target.


Objectives:

  • Create an autonomous object to transport a ping pong ball to a specific target.
  • The vehicle should self-orient to an endpoint on 3 proposed courses.
  • Using IR technology, the autonomous car should be able to receive a specific signal from a beacon, move towards it, stop, do a 360º scan to find the next beacon signal, and move towards that one, following the given track.
  • When the vehicle reaches the end of the course, it should be able to deliver the ping pong ball at the end point.

Components Used  

IRarduino-MotorServo

In order to create an autonomous vehicle that receives specific IR signals from fixed beacons and moves towards them accordingly, we required the following components:

Primary Arduino (Riri)

– Arduino Uno
– IR Receiver 

Secondary Arduino (Rob)

– Arduino
– Solderless breadboard
– H-bridge
– DC Motors 5V (2x)
– Wheels to fasten onto DC motors (2x)
– Servo for middle wheel
– Wheel for middle wheel servo
– Custom bracket for middle wheel
– 5V regulator for servo middle wheel
– Servo for trapdoor containing ping pong ball.

See Bill of Materials here

BeaconEnd

 IR_emitter

– Arduino Duemilanove
– Breadboard
– IR LED
– Resistor for IR LED (330 ohm)
– Ultrasonic Sensor
– Baltic fir plywood box

Beacon1 (can be repeated ad infinitum)

– Arduino Duemilanove
– Breadboard
– IR LED
– Resistor for IR LED (330 ohm)
– Ultrasonic Sensor
– Baltic fir plywood box

See Bill of Materials here

 Runway Course

– 6 circuits of 3 or more LEDs
– 100k resistors

 Chassis

– Laser cut wood (Baltic Birch Plywood)
– Balsa wood


 

Dividing the workload

In the beginning, we divided the creation of our soon-to-be autonomous, self-driving robot into separate tasks of movement and sensing:

Egill drew initial sketches for a dummy chassis to test out ideas of what components we might include.

initial-sketch-build-1

Egill created the working engine using 2 DC-motor-powered wheels at the sides of the car. A third, servo-powered wheel was added later at the front of the vehicle to turn and steer the car through the courses.

Initially we decided to create a line-following robot using an IR array.

At an early stage, an ultrasonic sensor was used to prevent the vehicle from colliding with obstacles. That way, the servo guiding the front wheel would turn to the left until the ultrasonic sensor found no obstacle, then the servo would turn back to the right and the car would drive straight forward.

 

Transporting the ping pong ball

At one stage, a 12V fan was suggested to create a vacuum that would suck up the ball at the start point and drop it at the end point. The idea was discarded because that fan required a lot of power and other fans were not strong enough.

Finally, a servo powered trap door was used to carry the ball inside the vehicle’s chassis and release it at the end of the course.

Getting everything together

Testing and assembly included tweaking of the motor functions, changing Beacon IR i.d.s, conceptualizing a theme, and troubleshooting after assembling the separate parts.

After a few changes were suggested, we began working on a final prototype. Initial beacons were created, each of which comprised an IR emitter and an arduino, eventually used in conjunction with ultrasonic sensors, all housed in a wooden box.

IMG_9694

At three separate points throughout the process, we gave ourselves until the end of the day before giving up on using IR LED emitters/receivers. If we had stuck with one of our other initial plans, whether line following or obstacle avoiding, we might have had a perfectly autonomous, self-guiding robot on the day of the presentation.


Movement

For agility purposes, we decided to go with an H-bridge IC chip for the DC motors. That way we could control the turning direction of the motors, giving us a range of movement options: forward, backward, and spinning clockwise/counter-clockwise on the spot (a sketch of this kind of control is shown below).
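As a rough illustration of that kind of H-bridge control, the sketch below drives each motor through two direction pins; the pin assignments and timings are assumptions rather than our actual wiring.

```cpp
// Sketch of driving two DC motors through an H-bridge (e.g. an L293-style chip)
// so each motor can run forward or backward. Pin assignments are assumptions.
const int LEFT_IN1 = 2, LEFT_IN2 = 3;    // direction inputs for the left motor
const int RIGHT_IN1 = 4, RIGHT_IN2 = 7;  // direction inputs for the right motor

void setMotor(int in1, int in2, int dir) {
  // dir: 1 = forward, -1 = backward, 0 = stop
  digitalWrite(in1, dir > 0 ? HIGH : LOW);
  digitalWrite(in2, dir < 0 ? HIGH : LOW);
}

void setup() {
  pinMode(LEFT_IN1, OUTPUT);  pinMode(LEFT_IN2, OUTPUT);
  pinMode(RIGHT_IN1, OUTPUT); pinMode(RIGHT_IN2, OUTPUT);
}

void loop() {
  setMotor(LEFT_IN1, LEFT_IN2, 1);    // both forward: drive straight
  setMotor(RIGHT_IN1, RIGHT_IN2, 1);
  delay(2000);
  setMotor(LEFT_IN1, LEFT_IN2, 1);    // opposite directions: spin on the spot
  setMotor(RIGHT_IN1, RIGHT_IN2, -1);
  delay(1000);
}
```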

IMG_9575

Early on, the robot was going to be a tripod where a middle wheel or a ball would essentially serve only a stability function. This static middle wheel was later replaced with a servo and a custom bracket holding a third wheel.

Initial tests with the DC motors and servo had power issues, so we divided the power sources into two sections based on our robot’s vision (sensors) and movement (actuators). A micro servo was used to swing open a trapdoor at the bottom of the chassis to release the ball.


Voiceshield

To make things more interesting and fun, we thought of including a voiceshield, an arduino shield that allows you to record and play back up to 4 minutes of audio, to have audio samples played at certain parts of the course and have the robot narrate its own behaviour.


The two headed beast – Introducing a 2nd arduino

After programming and testing a voiceshield, we assembled it onto our robot’s arduino, but an unexpected error occurred. The voiceshield interfered with the H-bridge so that only one of the DC motors was receiving either enough power or logic to run. Since the voiceshield depends on using digital pins 2-5, we moved the H-bridge to other pins, but we then ran out of digital pins.

A separate arduino was used for “doing the talking” with the voiceshield, while the primary arduino would do everything else – including sending signals over to the additional arduino via the Wire.h library, telling it to play samples at certain touch points.
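A minimal sketch of the Wire.h side of this, assuming a hypothetical I2C address of 8 for the secondary arduino, could look like the following; the command codes are placeholders.

```cpp
// Sketch of the primary arduino (Riri) sending a one-byte command to the
// secondary arduino (Rob) over I2C with Wire.h. The slave address and the
// meaning of the command byte are assumptions for illustration.
#include <Wire.h>

const int ROB_ADDRESS = 8; // hypothetical I2C address of the secondary arduino

void sendCommand(byte x) {
  Wire.beginTransmission(ROB_ADDRESS);
  Wire.write(x);            // e.g. 0 = spin/search, 1 = drive forward, 2 = drop ball
  Wire.endTransmission();
}

void setup() {
  Wire.begin();             // join the bus as master
}

void loop() {
  sendCommand(1);           // placeholder: would depend on the IR signal received
  delay(100);
}
```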


More issues

With the two motors and servo running smoothly together, and an IR receiver getting programmed signals from different IR LEDs, testing was done on how the sensing parts of the robot affected the moving parts.

We sadly realized, at a really bad time, that the IR receiver was interfering with the DC motors, and that the voiceshield interfered with the DC motors as well.

 Prioritizing

Since the voiceshield was making other communications lag, and switching places did not help either, we sadly decided to drop the voiceshield in favour of the ultrasonic sensor and the IR on the additional arduino.

While we had figured out how to send a signal between the two arduinos with Wire.h, we had little luck in sending multiple signals, i.e. one for the received IR signal and another for the distance sensed via the ultrasonic sensor.

We then had the idea: why not just put the ultrasonic sensors on the IR LED beacons themselves? Each beacon would send out a certain signal to begin with, and when the robot came within a certain distance, the IR LED would send out a different signal, at which point the robot would stop and drop the ball.


Behaviour Relationship

Course 1

  1. The IR receiver on Riri (the primary arduino) looks for BeaconEnd’s signal 1. While it is not receiving that, Riri sends a default x=0 over to Rob (the secondary arduino), so Rob spins his wheels in opposite directions with the middle wheel turned sideways.
  2. When the IR receiver on Riri gets BeaconEnd’s signal 1, Riri sends x=1 to Rob, which makes Rob go forward by spinning both wheels forwards and rotating the middle wheel to a forward position.
  3. When BeaconEnd senses a distance of less than 15cm, it sends out signal 2. Riri receives it and writes x=2 to Rob, who now stops for 3 seconds and turns the trap door 90°, effectively dropping the ball. After 3 seconds, Rob turns sideways, goes forwards and then stops, signaling he’s happy and done for the day. (A sketch of how Rob might act on these x values follows below.)
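To show how the x values described in these courses might be acted on, here is a minimal sketch of the secondary arduino’s side, with the I2C address and the motor/servo routines reduced to hypothetical stubs.

```cpp
// Sketch of how the secondary arduino (Rob) could act on the x values above.
// The I2C address, pins, and timings are illustrative; the actual motor and
// servo routines are reduced to stubs.
#include <Wire.h>

volatile byte x = 0; // last command received from Riri

void receiveEvent(int howMany) {
  while (Wire.available()) {
    x = Wire.read();
  }
}

void spinInPlace()       { /* wheels in opposite directions, middle wheel sideways */ }
void driveForward()      { /* both wheels forward, middle wheel straight */ }
void dropBallAndFinish() { /* stop 3 s, open trap door 90 degrees, final wiggle */ }

void setup() {
  Wire.begin(8);                 // hypothetical slave address
  Wire.onReceive(receiveEvent);
}

void loop() {
  switch (x) {
    case 0:
    case 4: spinInPlace();       break; // searching for a beacon signal
    case 1:
    case 3: driveForward();      break; // beacon found: head towards it
    case 2: dropBallAndFinish(); break; // within 15 cm of BeaconEnd
  }
}
```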

 Course 2

  1. The IR receiver on Riri looks for Beacon1’s signal 3. While it is not receiving that, Riri sends a default x=0 over to Rob, so Rob spins his wheels in opposite directions with the middle wheel turned sideways.
  2. When the IR receiver on Riri gets Beacon1’s signal 3, Riri sends x=3 to Rob, which makes Rob go forward by spinning both wheels forwards and turning the middle wheel into a forward position.
  3. When Beacon1 senses a distance of less than 15cm, it sends out signal 4. Riri receives it and writes x=4 to Rob, who now spins his wheels in opposite directions with the middle wheel turned sideways.
  4. This keeps going until Riri gets BeaconEnd’s signal 1. Riri then sends x=1 to Rob, which makes Rob go forward by spinning both wheels forwards and turning the middle wheel into a forward position.
  5. When BeaconEnd senses a distance of less than 15cm, it sends out signal 2. Riri receives it and writes x=2 to Rob, who now stops for 3 seconds and turns the trap door 90°, effectively dropping the ball. After 3 seconds, Rob turns sideways, goes forwards and then stops, signaling he’s happy and done for the day.

Course 3

In theory, we would simply connect two additional beacons, place them at the appropriate places on the course, and the robot would do its thing. Unfortunately, we ran into problems that prevented us from arriving at this solution in real life. We are still not 100% certain that there is a way to make this system work without spending months on it.


Final thoughts

Even though it didn’t work as well as we had hoped, we advanced our knowledge of arduino and our understanding of robotic constitution. We learned to cut our losses sooner, and not feel bad about going to plan B when plan A is a relentless impediment.

 

 

RedBot – Autonomous Robot Vehicle

RedBot

As a Graduate Research Project at OCAD U, we were challenged to make an autonomous robot using the Arduino Uno microcontroller. We built RedBot as an introductory idea for Red Bull events that could set up a new adventure with these little autonomous monsters.


TEAM

Nimrah Syed
Ling Ding
Jason Tseng
Marcus Gordon



THE CHALLENGE

To create an autonomous device that follows a preset path to reach its destination carrying a ping pong ball. The challenge included three different courses listed below:

Course A

img58

Course B

img62

Course C

img66

No details for ‘Course C’ were released until the presentation day. We were given 30 minutes to figure out the third course after the criteria were given. In the end, it looked a little like this:

img103

STARTING IDEAS

LEGO LIFT

Create a new path above a lift. When the ping pong ball is put on the path, the first lift goes up so that the ball can arrive at point B automatically. When the ball arrives at the position of the second lift, that lift goes down in order to use gravity to move the ball. Following the course requirements, this would create the specific path needed to move the ball.

SPHERO

img79
Inspired by Sphero, we considered altering the ball by adding features to it. The main method for adding those features was Orb Basic, the Basic language of the Sphero. Features included an accelerometer, gyroscope, bluetooth, and, most importantly, the Sphero’s spherical nature. This was supposed to be the main engine behind a chariot that would carry the ping pong ball.

CONDUCTIVE PATH

img82 img84
The idea was to use a conductive material as the path for the ping pong ball. This was beneficial in that the fabric could serve as the layout for the course design itself.


STRATEGY

The RedBot modus operandi
The inspiration was to make an autonomous robotic car powered by Red Bull. Hence, the name RedBot!

Motivated by Red Bull events, a car was a natural thing to associate with the brand. RedBot, as much an energy drink as it was meant to be a fast car, was designed to detect a path and identify objects that helped it navigate its given course.

ELECTRONIC PARTS

(2) N-Channel MOSFETs
(2) 5V DC Motors
(1) Arduino Uno
(1) Arduino Prototype Shield v.5
(2) HC-SR04 Ultrasonic Sensors

PROTOTYPING LIFECYCLE

img86 img88

img91 img99

CIRCUIT DIAGRAM

RedBot Final - Sketch_bb

TECHNICAL OVERVIEW

Each course has its own program specifically designed for it. It is indeed possible to combine all three programs into one, but at the current state they are not. Note that all turns are performed with one wheel moving and the other stationary. Turn durations are static, set within the programs themselves. Autonomy is derived from the proximity sensors, which are used to detect when to perform the pre-timed turns. The exact timing depends on the desired angle and turning speed, which in turn depend on battery power, motor specifications, the balance of the device, wheel traction, etc.

Course A:

There are three movement modes used for course A.

Wall Correction Mode

When the on button is pushed, the RedBot first checks to see if there is a wall in front of it. If a wall is detected the RedBot turns right for a certain amount of time, enough so that the device no longer faces the wall. RedBot then switches to orientation mode. If RedBot does not detect a wall when it is turned on, RedBot switches to orientation mode immediately.

Orientation Mode

RedBot turns right in small steps until it detects a wall (the same wall used for wall avoidance mode). RedBot then turns left for a certain amount of time, enough so that the device is now aligned with the course. The device then switches to movement mode.

Movement Mode

RedBot moves forward in small steps until it detects a wall in front of it. RedBot then stops. The first wall is placed on the course at a fixed distance from the starting point to standardize RedBot’s alignment. Note that the wheel acting as the centre of the turn (the right wheel) must be placed at the exact centre of the starting point for the alignment standardization to work. The second wall is placed on the course at a fixed distance beyond the goal point, facing RedBot, so that the device will stop exactly at the goal when it detects the wall.
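A simplified sketch of these three modes as a small state machine is shown below; the thresholds, turn durations, and helper stubs are placeholders, and the actual program is linked underneath.

```cpp
// Simplified sketch of the three Course A modes as a small state machine.
// Pin numbers, thresholds, and turn durations are placeholders.
enum Mode { WALL_CORRECTION, ORIENTATION, MOVEMENT, DONE };
Mode mode = WALL_CORRECTION;

const int WALL_CM = 20; // hypothetical "wall detected" threshold

long readFrontDistanceCm() { return 100; } // placeholder for the HC-SR04 reading
void turnRight(int ms) { /* right wheel stopped, left wheel forward for ms */ }
void turnLeft(int ms)  { /* left wheel stopped, right wheel forward for ms */ }
void stepForward()     { /* both wheels forward briefly */ }
void stopMotors()      { /* both wheels off */ }

void setup() {}

void loop() {
  switch (mode) {
    case WALL_CORRECTION:
      if (readFrontDistanceCm() < WALL_CM) turnRight(800); // turn away from the wall
      mode = ORIENTATION;
      break;
    case ORIENTATION:
      if (readFrontDistanceCm() < WALL_CM) {
        turnLeft(600);        // pre-timed turn to align with the course
        mode = MOVEMENT;
      } else {
        turnRight(100);       // keep scanning in small steps
      }
      break;
    case MOVEMENT:
      if (readFrontDistanceCm() < WALL_CM) {
        stopMotors();         // second wall marks the goal
        mode = DONE;
      } else {
        stepForward();
      }
      break;
    case DONE:
      break;
  }
}
```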

[Arduino Code for Course A]

Course B:

When RedBot is turned on it moves forward in steps until it detects the half-wall placed at the corner via its lower proximity sensor. RedBot then turns left for a specific amount of time, enough so that it now faces the goal point. The device continues to move forward until it detects the full wall placed a bit beyond the goal with both its proximity sensors. The device then stops.

Half-wall detection is the code that tells the device when to turn, while full-wall detection is the code that tells the device when to stop. Given this logic pattern, it is possible to have RedBot navigate its way around any course, so long as the turn durations are pre-declared within the program. For example, it is possible to set all turn durations to a standardized time needed for 45-degree turns, and use half walls to navigate the device around a course composed of turns whose angles are multiples of 45 degrees.

[Arduino Code for Course B]

Course C:

Due to time constraints the program for course C is a quick rig of the code for course B. The upper proximity sensor is disabled, and the full stop at the goal is sensed via a variable that acts as a counter for how many walls the device has encountered.

When RedBot is turned on it moves forward in steps until it detects the half-wall placed at the corner via its lower proximity sensor.  The device then turns left for a specific amount of time, enough so that it now faces the next wall. This bit of code is repeated for the next two angles. By now, the counter will have reached 3, meaning the device has so far encountered 3 walls. When it detects the last wall positioned beyond the goal point, the counter reaches 4, and upon reaching that count RedBot stops all actions.

[Arduino Code for Course C]

 

CONCLUSION

To conclude, RedBot has been able to achieve its goal of following a path from point A to point B, and bringing its ping pong ball to the finish line.  Although not as fast as we originally conceived, our Red Bull sponsored idea for an autonomous vehicle was a success.

Our minimalistic approach to RedBot’s design demonstrated how a crazy idea can eventually lead to a streamlined engine of simplicity.  With sensors limited only to sonar, code logic simplified to left/right instructions only, RedBot achieved greatness through the speed of thought.

img101

Autonomous Car

Introduction

IMG_1677

The Golf Project was a challenge to move a ping-pong ball from point A to point B by any means necessary. Each group had to develop a robotic system that would sense the course, its surroundings, and any obstacles, and then react accordingly. We were given course one and course two; however, course three would remain a secret until the day of the presentation. Below we have included images of the courses we had to navigate.

Process

To start figuring out how we were going to build a semi-autonomous car, we researched possible solutions that would allow the car to sense its surroundings. Below are some ideas we (Jordan and Manik) considered:

  1. Multiple sonar sensors that would wrap around the vehicle.
  2. Infra-red (IR) emitters lining the track with IR sensors on the car ensuring it doesn’t travel outside of the IR bounds.
  3. Program different routes into the car’s “navigation system” so that the car would know its route: e.g. “go straight for 1 meter, then make a 30 degree left turn, then go straight for 0.8 meters, then stop”.
  4. Employ an array of IR sensors and receivers to track lines on the floor and use a sonar sensor for initial orientation for starting and stopping.

When we learned that the first challenge was going to start with the vehicle oriented in an arbitrary direction away from the start direction, we removed possibility 3 as an option. We knew that we needed to program the car to move based on sensors rather than hand coding its route.

We decided to go with the sonar sensor and the QTR-8A IR (infrared) array.

Prototypes

When we decided on a strategy, a number of issues needed to be addressed:

  • We didn’t have extensive knowledge of motors before this project, so we started by prototyping the motor and navigation system for the vehicle.
  • The first step was confirming that we could turn the DC motors on and off.
  • We followed up by confirming that switching the direction of the current changed the direction of the motors.

To give us more control over the DC motors we added an L293 motor driver, also known as a dual H-Bridge microchip. This allowed us to control the speed of the DC motors and the direction the motors could spin.

The vehicle needed to make turns of various radii and speeds using PWM. To prototype vehicle behaviour with the motor driver, we engineered a set of motor controls that looped using timers to drive the motors in multiple directions. A potentiometer allowed adjustments of the motor speed based on the values it sent (see the sketch below). The motor motions we tested were stop, start, left, right, and reverse.
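A minimal sketch of this potentiometer-to-PWM control, with assumed pin numbers, might look like the following.

```cpp
// Sketch of controlling DC motor speed over the motor driver's enable pins
// with PWM, using a potentiometer reading to set the duty cycle.
// Pin assignments are assumptions.
const int POT_PIN = A0;
const int LEFT_EN = 9;    // PWM-capable enable pin for the left motor
const int RIGHT_EN = 10;  // PWM-capable enable pin for the right motor

void setup() {
  pinMode(LEFT_EN, OUTPUT);
  pinMode(RIGHT_EN, OUTPUT);
}

void loop() {
  int pot = analogRead(POT_PIN);         // 0-1023 from the potentiometer
  int speed = map(pot, 0, 1023, 0, 255); // scale to an 8-bit PWM duty cycle
  analogWrite(LEFT_EN, speed);
  analogWrite(RIGHT_EN, speed);
}
```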

Our next challenge was setting up the IR array, allowing  the vehicle to follow a line on the ground to its destination.

Once we had the basic navigation solved for the vehicle, we realized that we didn’t always want the car to start right away once we uploaded a new Arduino sketch or turned on the power. To give us a bit more control, we added a start/stop button to the car. This allowed us to calibrate the IR sensors and set the vehicle up for its tests, and it came in handy for preventing runaway robots.

Once we had the car following the set lines, we moved on to defining a solution that would allow the vehicle to orient itself towards the finish line. Initially, we considered boxing in the vehicle except for an exit door. For visual simplicity, Chris suggested we develop the vehicle so that it “looks” for a pillar parallel to the direction it needs to travel.

We also spent some time thinking about ways to add some flair to the vehicle. We looked at adding a sound system or some other sort of movement. One difficulty was picking something that wasn’t going to use too much power and could be done with only the two Arduino pins we had available.

We decided to add a fun tune with the Arduino.cc ToneMelody sketch, but ran into problems with the coding. When the melody “turned over”, the delay halted the car’s movement. We (Manik and Alison) tried replacing the “delay” in the melody with alternative code, without success. Unfortunately, we couldn’t resolve this issue before presentation time.
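One way the problem could have been avoided (a sketch we did not get to implement) is to step through the melody with millis() instead of delay(), so the driving code keeps running between notes. The notes, durations, and buzzer pin below are illustrative placeholders.

```cpp
// Sketch of a non-blocking melody: advance through the notes with millis()
// instead of delay(), so the motor code keeps running between notes.
// Notes, durations, and the buzzer pin are illustrative assumptions.
const int BUZZER_PIN = 8;

int melody[] = { 262, 294, 330, 349 };        // hypothetical note frequencies (Hz)
int noteDurations[] = { 250, 250, 250, 500 }; // note lengths in milliseconds

int currentNote = 0;
unsigned long noteStartedAt = 0;

void setup() {
  noteStartedAt = millis();
  tone(BUZZER_PIN, melody[0]);
}

void loop() {
  // Move to the next note when its time is up, without blocking.
  if (millis() - noteStartedAt >= (unsigned long)noteDurations[currentNote]) {
    currentNote = (currentNote + 1) % 4;      // "turn over" back to the first note
    noteStartedAt = millis();
    tone(BUZZER_PIN, melody[currentNote]);
  }

  // ...driving and sensing code would continue to run here every loop()...
}
```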

Parts

The parts used in manufacturing the car are as follows:

  • 1 Arduino Uno
  • 1 L293 Motor Driver
  • 1 QTR-8A IR array sensor
  • 2 DC motors
  • 2 wheels
  • 1 ball bearing caster for the front wheel
  • 1 ½ sized breadboard
  • 1 button switch
  • 1 sonar sensor
  • 1 220k resistor
  • 2 9V battery packs
  • cardboard for the body

Challenges

There was a fair amount of debugging of issues in getting the QTR-8A array to properly control wheel directions and speeds. We also needed to ensure enough power was sent to all of the components.

The solution was to separate the power for the Arduino and sensors and the two DC motors. To get this circuit to work we also had to make sure all of the parts had a common ground.

We also had to do some physical debugging for the orientation of the vehicle in the first challenge. Considerations included how thick the line used by the IR sensor needed to be, and at what turn radii the sensor was still able to continue following the line.

Courses

The courses that we were given are shown below. We were given them in actual size to be able to test and debug our cars before the final presentation.

IMG_1665

Course One.

IMG_1703

Course Two.

Testing

While testing, we saw the vehicle often worked but would occasionally deviate from the line. From debugging and analyzing the situations, we were fairly certain that the cause was inconsistent line colour. We had used markers to draw the lines, but the black wasn’t always as dark in some places, and the line edges were not 100% straight, which affected the sensors. To solve this we used black electrical tape for the line. After implementing this solution, the vehicle’s performance was more consistent.

We also came up with ways to try and correct any navigation errors by adding additional lines to the course. This kind of worked.

The interesting aspect of the error checking lines is that they are visual representations of issues that could be added back into the code. Developers add error checking into their code to help control the end product of the executed code. Essentially we have visual artifacts of these error checks on the course, which we believe is very fascinating.

Final product

With our current setup, the car successfully navigates and completes courses one and two. The difficulty came when we journeyed onto course three.

IMG_1678IMG_1692

 

Improvements

After going through the first two challenges, we were presented with the mystery course three. We had 45 minutes to come up with a solution that would allow our vehicle to navigate this challenge.

IMG_1705
The Infamous Mystery Course Three!

In this time, we realized that even though our car was semi-autonomous, we still had some hard-coded values and areas where we had made assumptions based on the known courses but hadn’t accounted for the third course. For instance, we hadn’t considered making multiple turns, or driving on and off a patterned or textured surface that could prevent the IR array sensor from working correctly.

In the 45 minutes we had to update our car, we were able to make some tweaks so the vehicle could begin handling more complicated courses. We did this by switching the purpose of the sonar sensor: once the car found its orientation, the sonar sensor was repurposed to look for pillars on the course, telling the car to make a required left turn.

Reflecting on how we could improve our vehicle, it’s clear we could improve our logic. Instead of trying to find a line and follow it, ensuring the car doesn’t leave that line, we could reconfigure the logic so the car goes straight until a sensor tells it to change course. This change offers greater flexibility across a variety of scenarios.

If we had a total of four sonar sensors, one on each side of the vehicle, we would be able to use each sensor for a specific task in the navigation process. We could have the vehicle drive straight as long as the front sonar sensor indicates it is safe to continue moving forward, and we could have the left and right sonars look for and detect physical signifiers of which direction the car should turn; once the turn is executed, the vehicle would continue moving forward. A sketch of this idea is shown below.
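A hypothetical sketch of that idea, with assumed pins and thresholds (and the rear sensor omitted for brevity), might look like this; it is an interpretation of the proposal above rather than code we ever built.

```cpp
// Hypothetical sketch of the multi-sonar idea: drive forward while the path
// ahead is clear, and use the left/right sensors to pick the turn direction.
// All pins, thresholds, and helper stubs are assumptions.
const int FRONT_TRIG = 2, FRONT_ECHO = 3;
const int LEFT_TRIG  = 4, LEFT_ECHO  = 5;
const int RIGHT_TRIG = 6, RIGHT_ECHO = 7;

const int CLEAR_CM = 25;  // front distance considered "safe to keep driving"
const int MARKER_CM = 15; // side distance that counts as a turn signifier

long readSonarCm(int trigPin, int echoPin) {
  digitalWrite(trigPin, LOW);  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH); delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  return pulseIn(echoPin, HIGH, 30000) / 58; // 0 means nothing heard in time
}

void driveForward() { /* both motors forward */ }
void turnLeft90()   { /* pre-timed left turn */ }
void turnRight90()  { /* pre-timed right turn */ }
void stopMotors()   { /* both motors off */ }

void setup() {
  pinMode(FRONT_TRIG, OUTPUT); pinMode(FRONT_ECHO, INPUT);
  pinMode(LEFT_TRIG,  OUTPUT); pinMode(LEFT_ECHO,  INPUT);
  pinMode(RIGHT_TRIG, OUTPUT); pinMode(RIGHT_ECHO, INPUT);
}

void loop() {
  long front = readSonarCm(FRONT_TRIG, FRONT_ECHO);
  long left  = readSonarCm(LEFT_TRIG,  LEFT_ECHO);
  long right = readSonarCm(RIGHT_TRIG, RIGHT_ECHO);

  if (front == 0 || front > CLEAR_CM)      driveForward(); // path ahead is clear
  else if (left > 0 && left < MARKER_CM)   turnLeft90();   // pillar on the left says "turn left"
  else if (right > 0 && right < MARKER_CM) turnRight90();
  else                                     stopMotors();   // blocked with no signifier: stop
}
```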

Schematic

This was done with Fritzing and is exported below.

autonomous-car_bb

Code

We have the code for our vehicle on Github under Creation & Computation: Autonomous Vehicle.

Project by: Jordan Shaw, Manik Perera Gunatilleke and Alison Bruce.