GOLF Project – Shenzhen Bat, The Line Following Autonomous Robot

 

[Image: shenzenbatcover]

Vision evolved

The Name


The bot is named after Shenzhen, the famous technology city in southern China. The city is known for hosting a great number of companies that manufacture electronic parts, including ones we used in our project, such as the DC motors and sensors. The robot also carries Bat in its name for a very simple reason: it uses ultrasonic sensors to detect walls and barriers, and it is mounted on a beautiful, shiny black chassis.

The Flow


Stage 0 (Start)

  • A student places the ping pong ball within the arms of the robot
  • A volunteer orients the robot in any direction at the starting position

Stage 1

  • Robot finds the correct direction to drive in
  • Robot drives until the track is identified

Stage 2

  • Robot drives following the line
  • Robot reaches / identifies destination or ending area

Stage 3 (End)

  • Robot stops driving
  • Robot releases the ping pong ball
  • Robot drives backwards slightly

The Design Details


[Image: shenzenbatcover2]

The robot is designed to follow a black line on a white surface.

Initial Route Finding

  • When the robot initializes, it uses an ultrasonic sensor to determine whether an obstacle is in front of it. If obstacles are detected, the robot spins rightwards until it finds an open route (see the sketch after this list).
  • After a route is found, the robot begins the calibration process.
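For illustration, here is a minimal sketch of what that startup scan can look like with an HC-SR04-style sensor; the pin numbers, the 50 cm threshold, and the spinRight()/stopMotors() helpers are our placeholders, not the project's actual values:

// Minimal sketch of the startup scan: spin right until no obstacle ahead.
const int TRIG_PIN = 12;             // assumed wiring
const int ECHO_PIN = 11;             // assumed wiring
const long CLEAR_CM = 50;            // assumed "open route" threshold

void spinRight();                    // hypothetical helper, defined elsewhere
void stopMotors();                   // hypothetical helper, defined elsewhere

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);      // 10 us trigger pulse
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);  // timeout caps blocking
  if (duration == 0) return 999;     // no echo within timeout: treat as open
  return duration / 58;              // microseconds to centimeters
}

void findOpenRoute() {
  while (readDistanceCm() < CLEAR_CM) {
    spinRight();                     // rotate rightwards a small step
    delay(50);
  }
  stopMotors();
}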

Line-following Algorithm

  • When the robot is oriented towards an open route, it begins to calibrate its line-detecting algorithm by reading black/white values from the six sensors while rotating left and right to get a range of samples.
  • After calibration is complete, the robot identifies the black line and starts moving forward.
  • The robot continuously calculates an “error” value, which represents the line’s position relative to the robot (i.e., whether the line lies to the robot’s left or right).
  • The robot uses the “error” value to determine how much power to apply to the left and right motors so that it always stays on, or relocates, the track. For instance, if the left and right motor powers are represented as a tuple, a value of (60, 0) makes the robot turn right: the left motor runs at 60 while the right motor stops. A sketch of this mapping follows the list.
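Here is a minimal proportional-steering sketch of the error-to-power mapping; the gain, base speed, and setMotors() helper are placeholder names, and the sign convention is assumed:

// Minimal sketch: map the line-position "error" to left/right motor power.
const int BASE_SPEED = 60;           // assumed cruising power (0-255 PWM)
const float KP = 0.5;                // assumed proportional gain

void setMotors(int leftPower, int rightPower);  // hypothetical driver helper

void steer(int error) {
  // Positive error: line is to the robot's right (sign convention assumed)
  int correction = (int)(KP * error);
  int leftPower  = constrain(BASE_SPEED + correction, 0, 255);
  int rightPower = constrain(BASE_SPEED - correction, 0, 255);
  setMotors(leftPower, rightPower);  // e.g. (60, 0) turns the robot right
}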

Goal Detection

  • If, during the line-following routine, the robot detects an obstacle in front of it using the ultrasonic sensor, it comes to a halt. The robot then lifts its arms to drop the cargo (i.e., the ping pong ball) and slowly drives backwards for a moment before stopping completely (sketched below).
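A minimal sketch of that finale routine might look like the following, assuming servo-driven arms and a signed-power setMotors() helper (angles and delays are placeholders):

// Minimal sketch of the end-of-course routine: stop, release, back up.
#include <Servo.h>

Servo leftArm, rightArm;                        // attached in setup()
void setMotors(int leftPower, int rightPower);  // hypothetical, signed power

void finale() {
  setMotors(0, 0);          // come to a halt
  leftArm.write(90);        // lift the arms to release the ball
  rightArm.write(90);       // placeholder angles
  delay(500);               // give the ball time to roll free
  setMotors(-40, -40);      // slow reverse for a moment
  delay(600);
  setMotors(0, 0);          // stop completely
}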

Schematic Design

The following diagram depicts the schematic of the robot and the components used. The second ultrasonic sensor is not connected, as it is only used when ball capturing is enabled.

[Image: The schematic diagram of Shenzhen Bat, drawn using Fritzing]

The Design Process


[Image: ShenzhenBatDesignProcessDiagram]

This diagram describes the five main areas we explored in building Shenzhen Bat: control, motors, sensors, chassis design, and the actual coding. It shows where we changed ideas, which components didn’t work out, and which components and functionality made it into the finished robot. More details on our work and thought process are introduced in the following sections.

The Code


We programmed using the Arduino IDE and Processing IDE, and managed versions using GitHub, creating multiple branches for different features. What we used for the demonstration was our “plan B”: a single ultrasonic sensor for wall detection, without ball capturing. Our “plan A” was to mount an ultrasonic sensor close to the ground so that it would detect the ball and trigger an arm movement to “capture” it before resuming driving.

The code references a variety of library examples as well as the 3pi Robot’s PID control, but the main logic was written by ourselves based on the requirements of the GOLF project. We tried to write modular code by putting each particular sequence of actions into a single helper function. We also have a DEBUG mode for printing useful information to the serial console, which really helped us troubleshoot and understand the states of the robot. A sketch of the pattern appears below.
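For example, the DEBUG mode can be as simple as a compile-time switch like the sketch below; the readLineError() helper is a hypothetical stand-in for our actual sensing code:

// Sketch of the DEBUG switch pattern (the real sketch printed more state).
#define DEBUG 1

#if DEBUG
  #define DBG(msg) Serial.println(msg)
#else
  #define DBG(msg)
#endif

int readLineError();   // hypothetical helper

void setup() {
#if DEBUG
  Serial.begin(9600);  // serial console only needed when debugging
#endif
}

void loop() {
  int error = readLineError();
  DBG(error);          // compiled away entirely when DEBUG is 0
}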

Code references are listed under each revision below.

Revisions


DC Motors + QTR-8A ANALOG LINE SENSOR ARRAY

Initially, we used the cheapest 5V 83RPM DC motor sets for the robot’s movement. The algorithm we used was taken from this tutorial, which uses the same sensor array and two continuous servo motors. We also used a DUAL MOTOR DRIVER to control the motors.

After we adjusted the code and connected the wires, the robot moved for the first time and followed a simple route. However, one of the sensor array’s readings kept peaking irregularly, causing the robot to go off the track.

Another roadblock we weren’t aware of was that our two DC motors were running at different speeds, even with the same voltage applied equally to both. Our potential solutions to this problem were to switch from the QTR sensor to IR sensors, manually calculate and apply a voltage offset to each motor, or modify the sensing/driving algorithm.

Code sample used: http://diyhacking.com/projects/DIY_LineFollower.ino

DC Motors + 5 IR Sensors Rev1

We made our own “sensor array” using five IR sensors. We got more stable readings from the sensors, but the robot was still unable to manage sharp turns and tended to overshoot. Another problem was the heavy consumption of jumper wires and resistors: we used a full breadboard to hook up all the wires. We eventually decided to revert to the more compact QTR sensor (see below).

DC Motors + 5 IR Sensors Rev2

We found a better algorithm in another tutorial. The algorithm seemed to handle very sharp turns at high speed using some custom error-correction mechanisms. More specifically, it converted the sensor readings to a binary representation, then determined the relative power to each motor by looking up a self-defined mapping between binary values and error-adjustment values (sketched below). After we incorporated the new algorithm into our robot, it tended to swerve off course to the right. At that point it was clear that the speeds of the two motors were different when given the same power, and the algorithm itself was not enough to compensate. We suspected that either there wasn’t enough power for the motors or the chassis was too heavy and imbalanced for them. These hypotheses led to the next iteration.

Code sample used: http://42bots.com/competitions/arduino-line-following-code-video/
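To illustrate the binary-pattern idea, here is a rough sketch; the pins, threshold, and mapping table are ours for illustration and do not reproduce the tutorial’s exact values:

// Sketch of the binary-pattern idea: five IR readings become a 5-bit value,
// which a self-defined lookup maps to an error-adjustment term.
const int SENSOR_PIN[5] = {A0, A1, A2, A3, A4};  // assumed wiring, left to right
const int THRESHOLD = 512;                       // assumed black/white cutoff

int readPattern() {
  int pattern = 0;
  for (int i = 0; i < 5; i++) {
    int onLine = analogRead(SENSOR_PIN[i]) > THRESHOLD ? 1 : 0;
    pattern = (pattern << 1) | onLine;   // 1 = sensor sees the black line
  }
  return pattern;                        // e.g. 0b00100 = centered
}

int patternToError(int pattern) {
  switch (pattern) {
    case 0b00100: return  0;   // centered on the line
    case 0b00110: return  1;   // drifted left, line toward the right
    case 0b00010: return  2;
    case 0b00001: return  3;   // line far right: turn hard right
    case 0b01100: return -1;   // mirrored cases for the left side
    case 0b01000: return -2;
    case 0b10000: return -3;
    default:      return  0;   // ambiguous reading: hold course (simplified)
  }
}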

METAL GEAR MOTOR – 6V 100RPM + 5 IR Sensors

We bought a new set of motors with higher RPM and greater torque. Since the motors could handle more weight, we laser cut an acrylic chassis, updated the design, and relocated all components onto it. The results were, however, disappointing, and the robot was still unable to reliably follow the line. We discovered that (1) since the motors were drawing power from the Arduino, they did not get enough voltage, and (2) the algorithm was still not robust enough. We then discovered existing libraries for QTR sensor arrays which used what is referred to as a PID control algorithm. We looked into different implementations of PID control, such as the Arduino PID library, the AutoTune library, this tutorial, and eventually the most useful one, which the 3pi Robot uses. At this point, we decided to switch back to the QTR sensor array with a new implementation of PID control.

METAL GEAR MOTOR – 6V 100RPM + QTR-8A ANALOG LINE SENSOR ARRAY

We made a large improvement to the mechanical design of our robot by creating an array sensor holder in the front and adding a front wheel for smoother turns. We drew many insights on motor selection, battery selection, chassis material, and the PID algorithm from this guide.

With the adoption of the libraries, the sensor array produces an “error” value that can be used for the PID computation (see the sketch below). Trial after trial, tweak after tweak, the robot followed the line and made good turns for the first time! With this breakthrough, we felt comfortable working on track visualization, wall detection, bluetooth positioning, ball capture/release mechanisms, and a refined mechanical design.
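A condensed sketch of that loop, using the older QTRSensorsAnalog API that the library examples show (pins, gains, and the setMotors() helper are placeholders, not our tuned values):

// Sketch of the PID loop around the QTR library's readLine() position.
#include <QTRSensors.h>

QTRSensorsAnalog qtra((unsigned char[]) {A0, A1, A2, A3, A4, A5}, 6);
unsigned int sensorValues[6];

const int CENTER = 2500;        // readLine() spans 0..5000 for six sensors
const int BASE_SPEED = 80;      // placeholder
float Kp = 0.05, Kd = 0.5;      // placeholder gains
int lastError = 0;

void setMotors(int leftPower, int rightPower);  // hypothetical driver helper

void setup() {
  // The robot is rotated left/right here so calibrate() samples both extremes.
  for (int i = 0; i < 200; i++) {
    qtra.calibrate();
    delay(10);
  }
}

void loop() {
  unsigned int position = qtra.readLine(sensorValues);
  int error = (int)position - CENTER;  // positive: line toward the right side
  int correction = (int)(Kp * error + Kd * (error - lastError));
  lastError = error;
  setMotors(BASE_SPEED + correction, BASE_SPEED - correction);
}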

LINE FOLLOWING + WALL / BALL DETECTION

The first working model used obstacle avoidance with ultrasonic sensors for initial direction correction as well as endpoint detection. However, there was a problem with differentiating between the wall and the ball. Our attempted solution was to place two ultrasonic sensors one above the other: the top sensor would detect walls and obstacles, while the bottom sensor would detect the ball and trigger a temporary stop for the arms to be lifted and lowered.

In our control logic we enabled the sensors at different times so that the robot captures the ball while it drives and releases the ball at the endpoint. One issue we ran into was the lag each loop incurred from the two ultrasonic sensors, which interfered with the timing of the PID control (one possible mitigation is sketched below). To make sure it wasn’t a hardware failure, we tested the ultrasonic sight range for distance, the vertical sensing height of the cone, and the tilt angle of the wall (as the ultrasonic pulse could have been bouncing back from the ground ahead).

[Image: TestingDistance2]
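One way to tame that lag is to cap the echo wait with pulseIn()’s timeout and ping only every few loop iterations; the sketch below illustrates the idea rather than our exact fix:

// Possible mitigation (illustrative): bound the echo wait and ping less often.
const unsigned long ECHO_TIMEOUT_US = 6000;  // ~1 m max range, caps blocking time

long readDistanceCm();   // as in the earlier ultrasonic sketch, with this timeout
void followLine();       // hypothetical PID line-following step

unsigned int loopCount = 0;
long lastDistanceCm = 999;

void loop() {
  if (loopCount++ % 5 == 0) {                  // ping on every 5th iteration only
    long d = readDistanceCm();
    if (d > 0 && d < 999) lastDistanceCm = d;  // keep the old value on timeout
  }
  followLine();                                // the PID step runs every iteration
}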

The robot occasionally succeeded, with both sensors enabled, in driving from start to finish and correctly releasing the ball. But due to the low success rate, we chose to take the bottom sensor out. This became our “Plan B” in the sense that the ball would not be captured by the robot but rather set within its arms by the user. We also tried to develop bluetooth-to-bluetooth communication in order to point the second ultrasonic sensor at the robot itself from an external beacon.

Another issue we had with using two ultrasonic sensors was the lack of digital pins on the Arduino. We briefly considered using a shift register for pin extension, but the problem was easily fixed by tying each sensor’s ECHO and TRIGGER pins together on a single pin (sketched below).
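A sketch of the one-pin hookup: the same Arduino pin drives the trigger pulse and then listens for the echo.

// One-pin ultrasonic read: TRIG and ECHO tied together on a single pin.
long readDistanceOnePin(int pin) {
  pinMode(pin, OUTPUT);          // drive the trigger pulse
  digitalWrite(pin, LOW);
  delayMicroseconds(2);
  digitalWrite(pin, HIGH);       // 10 us trigger pulse
  delayMicroseconds(10);
  digitalWrite(pin, LOW);
  pinMode(pin, INPUT);           // then listen on the same pin for the echo
  long duration = pulseIn(pin, HIGH, 30000UL);
  return duration / 58;          // microseconds to centimeters
}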

LINE FOLLOWING + BALL CAPTURE/RELEASE 

Our initial idea was to create a launcher to shoot the ball at the target/destination. One way to do this would be to bring two DC motors close to each other and have them rotate in opposite directions; if the ping pong ball made contact with the two spinning wheels, it would be shot forward or sucked in. This method, however, adds too much weight to the robot and is inefficient, costing the robot more power and digital pins.

We also attempted to capture the ball using fans to pull it toward the robot. We discovered that by overlapping multiple fans we increased their ability to pull in or push away objects. We chose not to pursue this method, however, due to a lack of precision and concerns about reliability. We settled on what we considered the most straightforward method and used two servo motors to create a moving frame, or “arms”, for capturing the ball (sketched below). This mechanism required the fewest resources, was robust, and was a good mechanical design exercise. The robot proved to have no problems carrying a ball with the arms.
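A minimal sketch of the arm mechanism, with placeholder pins and angles:

// Arm control sketch: two hobby servos form a frame that opens and closes.
#include <Servo.h>

Servo leftArm, rightArm;

void setup() {
  leftArm.attach(9);    // assumed pins
  rightArm.attach(10);
}

void closeArms() {      // hold the ball
  leftArm.write(30);    // placeholder angles, mirrored per side
  rightArm.write(150);
}

void openArms() {       // release the ball
  leftArm.write(90);
  rightArm.write(90);
}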

LINE FOLLOWING + VISUALIZATION WITH PROCESSING 

Once we achieved proof of concept with the PID motor control and successful sensing functions, we decided to begin research and development into visualizing the robot’s movement path and behaviour using Processing and serial communication. This part of the project made sense considering the next project in the course would focus on serial communication and Processing. The Processing code is custom designed and developed.

Getting serial communication working was a bit of a hurdle, considering we had never worked with it before. Getting the serial information from Arduino into Processing took some work, but there are good tutorials for beginners online.

We have uploaded the Processing and Arduino code onto GitHub. It has currently been set up as a simulation so anyone can run it and see it (the serial communication code has been commented out so people can refer to it).  Every line of the Processing code has been commented for easy use and understandability. The most important functions we learned through this exercise in Processing were the translate() function (which moves a drawn object), the rotate() function (which rotates an object about its origin point based on specific angle values), and the pushMatrix/popMatrix() functions (which control which objects are affected by outside operations like the translate() and rotate() functions).

The visualization basically worked by getting commands from Arduino which were used to update the Processing sketch. The basic commands were “drive left”, “drive right”, “drive straight”, “calibration mode”, and “finale mode”. Depending on the command, the visualization would draw the car moving left, right, or straight, or show text on the screen saying “Calibration!” or “Release the ball!”. A downloaded image of a world map was also loaded and pasted onto the center of the Processing screen. THE SIMULATION WILL NOT WORK UNLESS YOU DOWNLOAD THE IMAGE. OTHERWISE, COMMENT OUT ANY LINES THAT REFERENCE “image” OR “img”.
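On the Arduino side, such commands can be sent as single characters per state change; the codes below are illustrative rather than a verbatim excerpt from our repository:

// Sketch of the Arduino-side sender: one command byte per state.
void reportState(int error, bool calibrating, bool finale) {
  if (calibrating)     Serial.write('C');  // "calibration mode"
  else if (finale)     Serial.write('F');  // "finale mode"
  else if (error < 0)  Serial.write('L');  // "drive left"
  else if (error > 0)  Serial.write('R');  // "drive right"
  else                 Serial.write('S');  // "drive straight"
}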

We decided to send the serial data from Arduino to Processing using bluetooth rather than USB. Some tips regarding this process: (1) you can’t upload new sketches to the Arduino over USB while the bluetooth module is plugged in, and (2) the TX and RX communication pins on the bluetooth module must be connected to the RX and TX pins on the Arduino, respectively.

If you want to send multiple values to Processing from Arduino, you have to do a simple workaround: Arduino sends data one byte at a time, and Processing must fill an array one byte at a time before one iteration of communication is done (sketched below). A useful tutorial for beginners can be found here (https://www.youtube.com/watch?v=BnjMIPOn8IQ).
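On the Arduino side, the workaround can be as simple as separating values with commas and ending each packet with a newline, which Processing can buffer until the terminator arrives; the telemetry fields below are illustrative:

// Sketch of a multi-value packet: comma-separated values, newline-terminated.
void sendTelemetry(int error, int leftPower, int rightPower) {
  Serial.print(error);
  Serial.print(',');
  Serial.print(leftPower);
  Serial.print(',');
  Serial.println(rightPower);   // '\n' marks the end of one packet
}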

ROTATION SPEED WITH SPEED ENCODER

As described above, our motors rotated at different speeds despite identical voltages being applied to each. We initially thought we could solve this by offsetting the voltage sent to each motor in our code, and to precisely determine the rotation speed of each motor in relation to input voltage, we built a speedometer. We ended up not needing this solution, as the PID algorithm compensated for the offset. More details on the rotary/speedometer device can be found in Michael Carnevale’s Daily Devices project in the course records.

ALTERNATIVE PLAN – BLUETOOTH DISTANCE FINDING AND BEACONS 

When we first managed to get the robot to follow the line and realized we needed error correction, we started exploring the possibility of wireless navigation. We looked into GPS modules but couldn’t use them effectively indoors. We then explored bluetooth RSSI values, which are typically used to express signal strength on cellphones and laptops. Since RSSI varies with the distance from one bluetooth module to another, it looked like a viable option for locating the robot without relying on sensor vision. The idea was to position three modules in a triangle around the car, and a fourth on the car, to imitate how satellites triangulate outdoor GPS units.

Exploring RSSI, we found that the values returned are never as linear in the real world as they are under ideal conditions. Every bluetooth device, wall, and wireless obstruction adds noise. The values are reliable within 0-10 m, but vary too little from one another past 10 meters, and can still give false readings within 10 meters. This meant they weren’t reliable enough for triangulating three bluetooth modules. They can be used for a basic increasing/decreasing polarity of distance (i.e., “are we closer to the goal, or further away?”), but the reliable range of values requires a course length of at least 10 meters to vary across. Our test course was half a meter.

Wifi signal strength seemed like another alternative for distance finding, but suffered from even worse wireless interference and unreliability.

Pozyx, a Kickstarter company, promised an accurate indoor bluetooth positional tracking system. With anchors around the walls, their trilateration system promises centimeter accuracy, but they weren’t releasing their product until December 2015.

[Image: ShenzhenBatAlt2]

The last bluetooth locating option we explored was an external beacon. We used this tutorial (link) and its sample script to try to set up a pair of beacons that could communicate with the bluetooth module on the robot. The script added a level of complexity by letting each module take a fluid, changeable role (master or slave), so the protocol could be reused for the Tamagotchi project. Detecting roles and issuing AT commands dynamically, at potentially different baud rates, proved too unreliable, so we moved on to pre-setting each module with an assigned role. Later iterations will look at better modules that allow direct setting of master/slave roles.

Final Notes


On the day of presentation, our robot did not quite finish the second course, even though it had done so successfully in past tests. At the time, we started to wonder whether it was caused by the PID control logic, the batteries, or the surface it drove on. We had not been able to test thoroughly with extreme cases and in different environments. In fact, we did quite a few drive tests with black tape on a light wooden table, and with white poster board on a dark wooden floor and a light wooden table, but not on the uneven carpet. We had experienced low batteries before, but not interference with the bluetooth signal (for the visualization).

Overall, I still think we covered a good range of conditions and made reasonable design choices on control logic, materials, and parts. Some improvements I would consider include further tweaking of the PID constants, reducing the lag caused by the ultrasonic sensor, synchronizing the speeds of the two motors programmatically, and finding a more effective power source.

Credits


[Image: ShenzhenBatCrew]
Photo taken by Hammadullah Syed on November 5th, 2015

These names will be written in the history of autonomous robotics forever.

  • Alex Rice-Khouri
  • Davidson Zheng
  • Marcelo Luft
  • Michael Carnevale
