
Weezy Wonder

Weezy Wonder is a Stevie Wonder fan, right down to the dark glasses and his mushy, song-loving heart. He waves you over, falls chronically in love when you come near (as indicated by his blinking heart) and insists on singing you a song while you hold his hand.

I wanted to keep my tamagotchi design simple but able to evoke amusement or joy in whoever interacts with it. Hence Weezy’s romantic crooning action.

Making Weezy: The Body & Computation

I bought a ‘rocker’ hamster toy and removed some of its old wiring and parts from the inside to make room for this version of the toy. The new wiring included:

  • Blinking-heart sonar
  • Hand-wave servo
  • Light-sensor trigger for the hand-holding action
  • MP3 shield to play songs

 

The initial idea was to use a pulse sensor, as it was suggested that it would be much more interesting to have Weezy sing a song based on someone’s heartbeat. This was abandoned because the readings were unreliable and would not always trigger the music as they should.

Pulse sensor try-out

The handwave that stopped working a day before the presentation:

 

All the wiring was connected to an Arduino board stacked with the SparkFun VS1053B MP3 shield and a breadboard, and inserted into various parts of Weezy’s body.

Weezy’s emotional cues and reactions are based on the following computation (a rough sketch of the trigger logic follows the list):

  • Servo code for a 180-degree hand-wave motion.
  • Sonar code that makes his heart blink when anyone comes within 450 cm.
  • Light-sensor code that triggers a song on the MP3 shield when his hand is covered or ‘held’.
  • The MP3 shield sample code, modified to play a random song when the light level falls.
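Below is a minimal sketch of how these pieces can fit together, assuming a photoresistor divider on A0, an HC-SR04-style sonar, a heart LED on a digital pin, and a placeholder playRandomTrack() standing in for the SparkFun VS1053B library call. It is an illustration of the trigger logic only, not the code in the repository linked below.

```cpp
// Minimal sketch of Weezy's trigger logic (illustration only, not the repo code).
// Assumes: photoresistor divider on A0, HC-SR04-style sonar on pins 7/8,
// heart LED on pin 6. playRandomTrack() is a placeholder for whatever
// MP3-shield call the real sketch uses.
#include <Arduino.h>

const int LIGHT_PIN = A0;
const int TRIG_PIN  = 7;
const int ECHO_PIN  = 8;
const int HEART_LED = 6;
const int LIGHT_THRESHOLD = 300;   // below this, the hand is "held" (assumed value)
const long HEART_RANGE_CM = 450;   // blink the heart when someone is within 450 cm

void playRandomTrack() {
  // Placeholder: the real project calls the SparkFun VS1053B shield library here.
}

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);   // ~30 ms timeout
  return duration / 58;                             // microseconds -> cm
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(HEART_LED, OUTPUT);
}

void loop() {
  // Blink the heart when someone is within range.
  long distance = readDistanceCm();
  if (distance > 0 && distance < HEART_RANGE_CM) {
    digitalWrite(HEART_LED, millis() / 500 % 2);    // simple blink
  } else {
    digitalWrite(HEART_LED, LOW);
  }

  // "Hand holding": a covered light sensor drops below the threshold.
  if (analogRead(LIGHT_PIN) < LIGHT_THRESHOLD) {
    playRandomTrack();
    delay(2000);   // crude debounce so one hold doesn't retrigger immediately
  }
}
```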

GitHub link to code: https://github.com/manikpg/Weezy-Sings-Your-Heart-Song

 

Challenges

Despite testing multiple code variations, the song trigger would not fire more than once unless the sketch was re-uploaded to the Arduino each time. Two sketches were written using the sample code library provided with the shield, but these resulted in no song playing at all, so they were abandoned.

In the days leading up to the gallery presentation of Weezy, I tried a few backup plans that were still based on using the MP3 shield. I wanted to somehow get Weezy to play random songs based on a sensor. Since my code was so faulty, I resorted to trying a tested sketch in which a motion sensor triggers song playback. While the sensor worked, the song would not play, so that was also abandoned. The shield seemed very sensitive to most other code additions.

I also tried to have just one song play instead of a number of random songs, so that it would still play each time someone held Weezy’s hand without needing to be re-uploaded from the Arduino application each time. Removing the randomness still did not help: playback would only work once per upload, so the piece was laptop-dependent to the end. Even so, that code, re-uploaded each time, was the only workable version I could get to perform close to the concept of playing a song on each hand-hold.

 

It was baffling how the slightest change to related code stopped the music from playing. On the day before the show, the hand wave had to be scrapped because the songs would not play if I included the servo code for hand waving, and the sonar stopped working no matter what I tried, so his heart stayed ‘on’ all the time.


Marty

By Andrew Hicks


Concept: First Thoughts 
When first introduced to the tamagotchi project, the challenge of eliciting an emotional response through computational means guided me to make the computational elements as human as possible. When thinking about the parts of the human body that visually express the most emotion, I landed on eyebrows. This stemmed from my brief involvement with studying (more so admiring) traditional character animation in the past and how simple angled lines can convey so much.

Concept: Theme
Over the last year, I have been very fixated on the definition of “value” and what it means not only to myself, but to others. What creates value with or without money? What do we get out of value, and how long does the value of something last, or how quickly can value depreciate? When do people start caring or stop caring about the value of something, be that with the use of time, money or emotion? I circled around these thoughts for my final tamagotchi presentation.

I knew I wanted to involve currency of some sort, but was not set on using actual money. After expressing my focus for my tamagotchi project to my colleague, Leon Lu, he quickly guided me in the direction of using time as the value for my project. Although using time was a great idea, I still wanted someone’s time to be valued or validated by more than expression being the final transaction. I wanted emotion to be manifested by some tangible means, and a transaction to be the (figurative) period at the end of the sentence when interacting and connecting with my tamagotchi pen pal. As a result, I used pennies as the final trade-off for someone’s time.

In the end, I wanted the tamagotchi to do the following (a rough sketch of this state logic follows the list):
1) Upon entry, spectators would notice the sad-looking tamagotchi.

2) Upon facing the tamagotchi (within 5 to 25cm), the tamagotchi would slowly shift mood from sad to a happier state by slowly raising its eyebrows over 20 seconds.

3) If the user spends time with the tamagotchi for over 20 seconds, the tamagotchi would then provide a penny to the spectator. After the spectator leaves, the tamagotchi would default back to a sad-looking state.

4) However, if the spectator leaves the tamagotchi before 20 seconds, the tamagotchi would then become angry for 5 seconds, and then default to a state of being sad.

Prototype
I started building a simple prototype using some white foam-core, two 180 degree servo motors, a sonar sensor, Arduino with breadboard and two popsicle sticks as eyebrows. 

From here I was able to establish basic emotions such as anger, sadness and contentment using a few lines of code.

Initial Code
I knew early on that the timing of emotion would have to be controlled in order for spectators to understand the process of emotion in my project. As a result, a servo library that allows speed control of 180-degree servo motors was adopted for my project. In the library calls you can see three arguments that control the target angle, the speed, and a boolean wait flag, e.g. (45, 20, false). The speed of the motors is controlled with values between 1 and 175.
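For illustration, a call of the kind described above might look like this, assuming the library in question is VarSpeedServo (which matches the (angle, speed, wait) signature); the pin number and angles are made up rather than Marty’s actual values.

```cpp
// Minimal VarSpeedServo example: write(angle, speed, wait)
// Pin number and angles are illustrative, not Marty's actual values.
#include <VarSpeedServo.h>

VarSpeedServo brow;

void setup() {
  brow.attach(9);             // eyebrow servo on pin 9 (assumed)
  brow.write(45, 20, false);  // move to 45 degrees at speed 20, don't block
}

void loop() {
  brow.write(135, 20, true);  // slow raise, wait until the move completes
  brow.write(45, 100, true);  // faster drop back down
  delay(1000);
}
```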


Fabrication
Having tested my eyebrow prototype and basic code setup, I began fabricating the coin dispenser for my final piece. I was lucky to find a plastic spool, which I believe was used for wire, that had the same circumference as an American penny.


Penny Dispenser
Referencing the shafts I created for our research group assignment, I fabricated a shaft that would push pennies off the plastic spool chassis one by one. In order for this to work, a lot of time was spent on allowing the proper space for only one penny to be dispensed. Coincidentally, the popsicle sticks I used had almost the same depth as a penny. Along with some glue, the space created was enough for one penny to be ejected from the spool at a time. I got very lucky. I glued the servo and shaft into a slot I made (using a Dremel tool) in the plastic base of the spool. All worked well, except for getting each penny to drop from the spool. I had to provide a front-end lift for each penny on the spool in order for the dispensing arm to hit it exactly on its edge. To do so, I glued on some wire that I could adjust to the exact height needed for each penny to be pushed out and dropped.

Face
The face of my tamagotchi project was created using half-inch foamcore. Using Adobe Illustrator, I drew out the face over six 8.5×11 artboards to be printed and then transferred onto the foamcore. Doing this provided a solid plan as to how and where the back-end and front-end components should meet, as precision had already become a big learning curve for me going forward with this project.


Using the maker lab, I cut the foamcore down using an X-Acto knife. I was hoping to use a bandsaw, but the foamcore was not dense enough to do so. My concern was getting a perfect-looking circle for the face of my tamagotchi, but I was surprised at how just an X-Acto knife and fine-grain sandpaper can smooth out a circle when using the foamcore material.

I cut out the eyebrows and moustache by hand, as well as a mouthpiece that I did not have time to implement fully on the face. The mouthpiece was considered in order to provide a happier-looking face/expression when the tamagotchi was meant to be overly happy with someone’s prolonged presence.

Presentation
Closer to the final presentation, my plan to code the full set of emotional states and user-interaction variables became a challenge. As a result, I coded a plan-B state that would have the tamagotchi provide a penny for every two seconds a spectator spent with it, while emoting an angry face as each penny was provided.

I placed my piece on a pillar, facing the entrance of the gallery. This was intentional, so that users could see the sad state of emotion the piece was expressing from a distance.

I used an iPhone power adapter to provide a constant 5V power supply to the Arduino board on the back of the piece. Oddly enough, the power supply was the last thing I considered when presenting the piece, given the attention I provided to everything else on the project.

Conclusion
Overall, I was very pleased with the outcome of the project. I believe I set an attainable challenge for myself that was enjoyable, yet still pushed me without becoming frustrating. Going forward, I would add the mouth servo motor and spend more time with the coding component of the project to gain a better understanding of how to code states.

Fritzing Diagram:


Code:
https://docs.google.com/document/d/1d0y-OmjS3nz2YWyBzdpO7WhLzdrY4CTKLhjEPiSS-VM/pub

SPOT & DOT – Tamagotchi Project

SPOT & DOT

General Summary

For my Tamagotchi project I created two robots that each take sensory input to guide their output behaviour. SPOT is a web camera mounted on a robotic arm that controls its movement along three degrees of freedom (i.e., 3 servo motors). This version of SPOT is a re-iteration of a research project completed by Egil Runnar, Ling Ding, Tuesday, and myself, where we had to create a life-like moving robot inspired by a robotic installation from the 70s named The Senster (see image below). DOT, on the other hand, is a robot consisting of three concentric rings attached to each other by hinges and servo motors, thus allowing for three degrees of rotational movement. DOT’s movement along these three axes is pseudo-random, leading to a sense that DOT is exploring its immediate environment and potential combinations of movements and orientations. DOT has one sensory input, a photoresistor sensitive to the amount of light at its surface. The amount of light DOT senses is mapped and used to control the speed of DOT’s rotation, where more light leads to faster movement and less light leads to slower movement.

The Senster

SPOT


Summary

Since this iteration of SPOT is based on a previous group project I will not go into too much depth about the inspirational background and underlying concepts, but the core features, updated fabrication and computed behaviours deserve mention here.

Behaviour

SPOT’s behaviour can be summarized as three potential behaviour modes: random movements during an inactive phase, face tracking, and motion tracking. SPOT’s behaviour is based on his camera input. SPOT is interactive in that he can follow your face as you move around, you can distract him with physical motion in front of his camera, or you can simply watch his well-orchestrated random movements. SPOT’s camera feed goes straight into a MacBook Pro via USB, where the visual information is processed using Processing and a series of computer-vision libraries.

Random Movements: When no faces or movements are detected, SPOT’s three servo motors are given random movement commands (varying in final position and speed) and move in burst intervals. In this mode, SPOT can take on a number of random movements and positions, often leading the viewer to anthropomorphize his actions. Each of these movements is executed in three steps – a fast initial movement, followed by a medium-speed movement, and then a fast movement into the concluding position. Combined, these three movements give the impression of organic behaviour, since animal movements follow a similar pattern in nature, adding to the sense that SPOT is a living and intentional object.
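A rough Arduino-side illustration of that fast-medium-fast burst pattern is sketched below. The actual project drives the servos from Processing over serial, so this is only a standalone approximation; it assumes the VarSpeedServo library and made-up pin numbers.

```cpp
// Illustration of SPOT's fast-medium-fast random bursts (not the project code,
// which commands the servos from Processing over serial). Pins are assumed.
#include <VarSpeedServo.h>

VarSpeedServo base, shoulder, head;

void setup() {
  randomSeed(analogRead(A5));   // unconnected pin as a noise source
  base.attach(9);
  shoulder.attach(10);
  head.attach(11);
}

// Move one joint to a random target in three phases: fast, medium, fast.
void burstMove(VarSpeedServo &joint) {
  int target = random(20, 160);                  // final position
  int midway = (joint.read() + target) / 2;      // halfway point
  joint.write(midway - 10, 150, true);           // fast initial movement
  joint.write(midway + 10, 40, true);            // slower middle portion
  joint.write(target, 150, true);                // fast finish into position
}

void loop() {
  burstMove(base);
  burstMove(shoulder);
  burstMove(head);
  delay(random(1000, 4000));                     // pause between bursts
}
```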

Face Tracking: When SPOT detects a face in his visual field he moves his head up or down, or his base left and right to center the face in his camera input. If the face is far into the camera’s periphery, he will execute a fast corrective movement, otherwise smaller adjustment movements are made.

Motion Tracking: SPOT’s final behavioural mode is motion tracking, which uses an algorithm that calculates the degree of pixel change from one instance to the next. This behaviour was new compared to SPOT’s original iteration. If enough change is registered relative to a threshold value, it is interpreted as movement. If motion is detected along either the left/right axis or the up/down axis relative to SPOT’s centre of vision, he makes a compensatory movement to bring the moving object closer to his centre of vision. One issue with motion detection is that motion would naturally be detected during SPOT’s own random movements, so motion detection was temporarily disabled until his movements were completed, otherwise he would get locked in a never-ending loop.

Parts

  • 3 servo motors (2 TowerPro servos + 1 mini-Servo)
  • laser cut pieces of Baltic Birch plywood (1/8th of an inch thick)
  • 1 generic webcam compatible with OSX
  • various nuts, bolts, washers, and mini-clamps
  • Arduino Uno + USB cable + jumper wires + breadboard
  • 5V DC Power Supply
  • Computer (here a 2013 Macbook Pro)

Fabrication

The plywood pieces composing the arms and base structure were designed and laser-cut from Baltic Birch plywood. The CNC file used to cut the pieces incorporated the designs for both SPOT and DOT in one run to save money and time. The CNC file was created in Adobe Illustrator, as shown in Figure 1. The logic of the CNC laser-cut file is that the lines are cut in a set order: blue lines first, green lines second, and finally the red lines.

Figure 1: The CNC laser-cut file for SPOT and DOT

The servos and camera were fastened to the plywood body and the parts were put together. The servos were locked into the CNC slots and held in place via a nut, bolt, and a piece of metal used as a clamp. The camera is held onto the arm via a nut and bolt. None of the parts of SPOT were glued together; it is held together entirely via clamps or nuts and bolts.

Software

Some aspects of SPOT’s programming were somewhat complex, particularly refitting the motion algorithm to suit the present purposes, and the overall control flow: camera input goes to Processing, which controls the Arduino (and moves the servos), which then sends feedback back to Processing to gate the camera input. The face tracking algorithm was borrowed from an OpenCV library, while the motion tracking was adapted from an algorithm provided by the course instructors.

I adapted the motion detection algorithm to not only distinguish between movement at the left vs. right of the screen, but also to determine whether there is motion near the top or bottom of the screen. Motion was not registered in the centre of the screen, only in the periphery.
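A simplified version of that idea, written here in plain C++ rather than the actual Processing sketch, compares two grayscale frames, counts changed pixels in each peripheral band, and ignores the centre; the thresholds are illustrative.

```cpp
// Simplified illustration (plain C++) of peripheral motion detection:
// count changed pixels in the left/right/top/bottom bands of the frame,
// ignoring the centre. The real version lives in the Processing sketch.
#include <cstdint>
#include <cstdlib>
#include <vector>

enum Region { NONE, LEFT, RIGHT, TOP, BOTTOM };

Region detectMotion(const std::vector<uint8_t>& prev,
                    const std::vector<uint8_t>& curr,
                    int w, int h,
                    int pixelDelta = 30, int countThreshold = 500) {
  int counts[5] = {0, 0, 0, 0, 0};
  for (int y = 0; y < h; ++y) {
    for (int x = 0; x < w; ++x) {
      int i = y * w + x;
      if (std::abs(int(curr[i]) - int(prev[i])) < pixelDelta) continue;
      // Only the outer third of the frame on each side counts as "peripheral".
      if      (x < w / 3)     counts[LEFT]++;
      else if (x > 2 * w / 3) counts[RIGHT]++;
      if      (y < h / 3)     counts[TOP]++;
      else if (y > 2 * h / 3) counts[BOTTOM]++;
    }
  }
  // Report the region with the most change, if it clears the threshold.
  Region best = NONE;
  for (int r = LEFT; r <= BOTTOM; ++r)
    if (counts[r] > countThreshold && counts[r] > counts[best]) best = Region(r);
  return best;
}
```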

Processing controlled the Arduino through the serial port, and the Arduino would then send data back to Processing through the same USB serial connection. The Arduino sent data back because SPOT needed a way to suppress motion detection while moving, or else he would be stuck in an infinite loop. This is where the control flow started to become complicated: managing face tracking and motion tracking while making the robot behave as smoothly as possible.
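The Arduino side of that handshake might look roughly like the sketch below, assuming Processing sends a servo index and a target angle and expects a single 'done' byte back; the real message format is not documented here.

```cpp
// Rough sketch of the Arduino side of the Processing<->Arduino handshake.
// Assumes Processing sends two bytes (servo index, target angle) and expects
// a 'D' byte back when the move is done; the real message format may differ.
#include <Servo.h>

Servo joints[3];

void setup() {
  Serial.begin(9600);
  joints[0].attach(9);
  joints[1].attach(10);
  joints[2].attach(11);
}

void loop() {
  if (Serial.available() >= 2) {
    int which = Serial.read();          // which servo to move
    int angle = Serial.read();          // target angle 0-180
    if (which >= 0 && which < 3) {
      joints[which].write(constrain(angle, 0, 180));
      delay(300);                       // crude stand-in for "movement finished"
      Serial.write('D');                // tell Processing to re-enable motion detection
    }
  }
}
```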

DOT


Summary

DOT is a relatively large (about 2 feet tall) interactive installation with 3 concentric rings positioned within each other, affording 3 axes of physical rotation about a static central position. The three concentric rings are rotated using servo motors via independent commands. Two of the rings rotate about the y-axis (leftward and rightward) while one ring rotates about the x-axis (upward and downward). When all rings move simultaneously, it presents a mesmerizing combination of motions that provokes one to reflect on physical movement itself. DOT intentionally invokes a relation to astronomical devices, or to the rotations of the human eye. When DOT moves about, it can even give a sense of intentionality, as one can relate to some of the movements it independently makes and some of the positions it finds itself in, and in some ways it ominously reminds one of biological motion. The speed of DOT’s motion is influenced via its one photoresistor, where more light leads to more agitated and faster movements.

Inspiration


The concept for DOT came about as I randomly stumbled onto a website focused on mechanical movement (http://507movements.com/mm_356.html). One of the mechanical devices presented on the website was Bohnenberger’s Apparatus (http://physics.kenyon.edu/EarlyApparatus/Mechanics/Bohnenbergers_Apparatus/Bohnenbergers_Apparatus.html), the device that inspired the gyroscope. This got me thinking about rotational movements in general, for example how the eye moves about within its socket. I felt that this movement pattern was unique and worth exploring, so I made some preliminary designs. I got feedback from the shop instructor, who told me that the best and sturdiest materials to use would be either metal or plywood. Working with metal would have been an exciting project, but with the looming deadline I surmised that laser-cutting plywood would give me the most satisfactory and reliable results.

When the device was complete and finally working, I could not believe how interesting the movements were to watch. The Bohnenberger apparatus was initially conceived as an astronomical device used to study rotational movement, but the movement of DOT as it behaved on its own was surprisingly organic. As the rings slowly moved into new orientations relative to each other, it became easier and easier to anthropomorphize the result. Sometimes it would move smoothly, and sometimes it would move into what appeared to be strained and dramatic-looking positions. This final product will give me an adaptable platform for studying mechanical movement via microcontroller for the foreseeable future. Future projects may include giving DOT more sensors to guide its own movements and reduce the reliance on randomness.

Behaviour

DOT’s behaviour was relatively less complex in comparison to SPOT’s multiple behavioural modes, but DOT’s behaviour is in many ways more mesmerizing. DOT’s three concentric rings rotate independently (except the outermost ring) and randomly update their positions relative to their current positions. Each ring moves to a new position by rotating up to 10 degrees clockwise or counterclockwise from its current position (the value is determined randomly for each servo). New movement commands are sent to DOT’s three servos simultaneously, and are not updated until all servos have completed their movements. The speed at which DOT moves is determined at any given time by how much light is detected via its photoresistor.

When any of DOT’s rings reaches the maximum rotation of the servo in either direction (i.e., zero or 180 degrees), DOT goes through a routine where that ring then rotates to the opposite end of rotation with 30 degrees of slack to prevent a series of redundant movements. While DOT is not as interactive as SPOT, which will track your face and movements in real time, DOT’s behaviour is more independent and fascinating to watch for its own sake, but one can indeed control the relative speed of DOT’s movements. In further iterations, DOT’s central circle could be fitted with sensors to guide its behaviour, leading to a fascinating potential series of intentional rotational movements.
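A hedged sketch of that update rule for a single ring is shown below, using the VarSpeedServo library named in the Software section; the pin, speed, and end-stop handling are illustrative values rather than the project's own.

```cpp
// Sketch of DOT's update rule for one ring (illustrative values, not the project file).
// Each move shifts the ring up to 10 degrees either way; hitting an end stop
// bounces it back toward the opposite end with 30 degrees of slack.
#include <VarSpeedServo.h>

VarSpeedServo ring;
int pos = 90;

void setup() {
  randomSeed(analogRead(A0));
  ring.attach(9);
  ring.write(pos, 30, true);
}

void loop() {
  pos += random(-10, 11);                 // up to 10 degrees CW or CCW
  pos = constrain(pos, 0, 180);
  if (pos <= 0)   pos = 180 - 30;         // end stop reached: jump toward the far end...
  if (pos >= 180) pos = 0 + 30;           // ...leaving 30 degrees of slack
  ring.write(pos, 30, true);              // 'true' = wait until the move completes
}
```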

Parts

  • 3 servo motors (1 TowerPro + 2 mini-Servos)
  • laser cut pieces of Baltic Birch plywood (1/8th of an inch thick)
  • 5V DC Power Supply
  • Photoresistor
  • Arduino Mega + jumper wires + copper wire + breadboard
  • Nuts, bolts, washers, spacers
  • Rubber caps

Fabrication

The body of DOT is also composed of laser-cut plywood. The main challenge of DOT was the fabrication; specifically, how to have multiple concentric rings fixed within each other, all turning on their own axes via servo motors. The solution was to cut two copies of the rings and fasten them together via nuts, bolts, washers, and spacers, and to fit the servo motors within them. Axles were attached to the servo motors, and an axle was attached to the opposite side as well. The axles were made of long bolts and fitted loosely into holes in the spacers between the next inner ring. This allowed the pieces to rotate smoothly, or in our case, to be turned via servo motor.


The wires for powering and controlling the servo motors were fitted through the space between the ring copies alongside the spacers, and fed along each axle until they reached the base, where they were connected to the microcontroller. Similar to SPOT, there was no gluing involved in this project. The servos and axle joints were set in place purely by pressure, where tightening or loosening the adjacent ring copies (held apart via spacers) would allow one to fix or release the servo motors. The arms of the servo motors were connected to the axles via circular clamps or rubber caps, which held the arm of each servo to its associated axle.

Software

Compared to SPOT, DOT’s software was relatively simple. Unlike SPOT, DOT’s program could be housed entirely on the Arduino. Using the varspeedservo.h library, which allows you to control a servo’s position as well as its speed, each servo was given independent commands. Photoresistor levels were mapped to servo speed values ranging from very slow to a medium pace. Using a boolean as the third argument to the write function, it was told to wait until the movement was complete before issuing new commands. This had the effect of waiting until each servo completed its rotation before new commands were submitted.
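A minimal sketch of that mapping might look like the following, assuming the photoresistor divider is on A0 and using illustrative pins and speed bounds; it is not the project file itself.

```cpp
// Minimal sketch of DOT's light-to-speed mapping (pin and speed bounds assumed).
#include <VarSpeedServo.h>

VarSpeedServo ringX, ringY1, ringY2;

void setup() {
  randomSeed(analogRead(A1));
  ringX.attach(9);
  ringY1.attach(10);
  ringY2.attach(11);
}

void loop() {
  int light = analogRead(A0);               // photoresistor divider
  int speed = map(light, 0, 1023, 5, 60);   // dark = very slow, bright = medium pace
  int a = random(0, 181), b = random(0, 181), c = random(0, 181);
  ringX.write(a, speed, false);             // start all three moves together
  ringY1.write(b, speed, false);
  ringY2.write(c, speed, false);
  ringX.wait(); ringY1.wait(); ringY2.wait();  // block until every ring has finished
}
```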


Like or Dislike by Leon Lu


 

Introduction

I wanted to explore the irreverent nature of people through an inanimate object which people could still connect with. I wanted it to be strange and grungy looking, but something you might still consider cute. The idea was to get people to connect with it and try to get its attention. As people, we are easily distracted and change our minds without reason. That’s what I wanted to explore through my project. I called him ‘Gunther’.

The Build


I simply used servo motors to control his motion, with an ultrasonic distance sensor as the input that triggers it. The challenge was creating smooth, balanced motion that simulated life.

I also used wool, various forms of material, blue foam, Neopixel RGB LEDs and wood to construct the body and the structure.

Code

https://gist.github.com/exilente/682e4e0ccaadc49f56f0

(Note: I used a random number to create a sense of mystery in Gunther’s actions; based on the number generated, he would either come out and meet you or be afraid and run back in. A rough sketch of this idea follows.)
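The sketch below is a tiny illustration of that idea, not the gist itself; readDistanceCm() and the two movement routines are placeholders, and the distance threshold is assumed.

```cpp
// Tiny illustration of Gunther's random decision (not the actual gist code).
// readDistanceCm() and the two movement routines are placeholders.
#include <Arduino.h>

long readDistanceCm() { return 30; }   // stub for the ultrasonic reading
void comeOutAndGreet() {}              // stub: servo sequence to come out
void hideInside()      {}              // stub: servo sequence to run back in

void setup() {
  randomSeed(analogRead(A0));
}

void loop() {
  if (readDistanceCm() < 50) {         // someone is close (threshold assumed)
    if (random(0, 2) == 0) {           // coin flip: brave or afraid
      comeOutAndGreet();
    } else {
      hideInside();
    }
    delay(3000);                       // settle before reacting again
  }
}
```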

Challenges/Advancements

I had initially planned to use a web camera to detect faces through Processing, so the robot would not only react to one person but to multiple people; say, if more than one face was detected, it would hide because it would be scared of multiple people approaching it at the same time. I ran into trouble with the communication between Processing and Arduino and decided to go with something simpler in the end. If given the opportunity to redo this project, I would attempt to make computer vision work in a more seamless way, thus adding more character to Gunther.

Furgie and Furgurt: Virtual Cuddling Pets

Overview


Meet the Furgs – Furgie and Furgurt!  The Furgs love to cuddle, and that is their main purpose – to allow partners to cuddle virtually.  By hugging one Furg, you create vibrations within your partner’s Furg.  When both partners hug their Furgs at the same time, the Furgs vibrate with greater intensity and purr.  In this way, hugging your new furry friend allows you to feel the vibrations of your loved one – from anywhere in the world.

Watch the video:

 

Structure

The Furgs are made with the following:

Sensors: Bend sensors are used to enable the detection of hugging.  If a sensor is bent past a pre-determined threshold, a hug gesture is detected and the actuators are triggered.

Actuators: Multiple vibration motors and a buzzer motor are sewn into the inner layers.  When Furg A is hugged, the first vibration motor of Furg A and Furg B vibrate, and the buzzer starts making a purring sound.  When Furg A and Furg B are hugged at the same time, all vibration motors on both Furgs vibrate – the Furgs vibrate with greater intensity.

Boards: Each Furg runs off of its own Arduino board and its own Lilypad board.  The reason that two boards are included in each Furg is to provide many pins for future functionality (described below).  The Arduino and Lilypad boards communicate across a Software Serial channel.

Connectivity: The Furg Arduino boards are linked via a Software Serial channel.  This was done for demonstration purposes only – future versions will include WIFI chips, so that two people, located in different locations, can hug via the Furgs.

Note that Bluetooth modules were used initially, but they proved to be unreliable, often dropping pairings and/or running with inconsistent timing.  Sometimes the Furg communications would transmit almost instantly, while at other times they would take several seconds.  As a result, Bluetooth was dropped.
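A simplified sketch of one Furg's hug detection and partner link is shown below. The bend-sensor pin, SoftwareSerial pins, threshold, and one-byte 'H' message are all illustrative assumptions rather than the project's actual values.

```cpp
// Simplified sketch of one Furg's hug detection and partner link.
// Bend-sensor pin, SoftwareSerial pins, threshold and the one-byte 'H'
// message are all illustrative, not the project's actual values.
#include <SoftwareSerial.h>

SoftwareSerial partner(10, 11);        // RX, TX link to the other Furg
const int BEND_PIN = A0;
const int HUG_THRESHOLD = 600;         // bend reading above this counts as a hug
const int MOTOR_PIN = 5;               // first vibration motor

void setup() {
  partner.begin(9600);
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  bool huggedHere = analogRead(BEND_PIN) > HUG_THRESHOLD;
  bool huggedThere = false;

  if (huggedHere) partner.write('H');                        // tell the other Furg
  if (partner.available() && partner.read() == 'H') huggedThere = true;

  // Vibrate when either side is hugged; both at once would drive extra motors.
  digitalWrite(MOTOR_PIN, (huggedHere || huggedThere) ? HIGH : LOW);
  delay(50);
}
```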

Materials: The furgs are built around an egg shaped stuffed animal core.  Their custom outer layer is made of very soft, shaggy fake fur.  The ears are made of another type of fake fur that resembles cheetah fur, sewn together with the shaggy fur.  Fabrication of their shape was the most time consuming process over all others.  (Sewing is not one of my top skills, but it was great fun giving the Furgs their playful form).

 

Core Functionality

 

 

Code

Code for the project is available here: https://github.com/jamesessex/TheFurgs

 

Future Functionality

Although I believe that the ability to hug and cuddle virtually is awesome, I see a lot more potential in the Furgs and will be building out more features.  In addition to cuddling, multiple gestures will be added that enable virtual play fighting.

There will be no rules built into the pets, only a rich set of gestures.  Users can then make their own games out of the gestures.  For instance, ear tugging and belly poking will be enabled.  Imagine a girlfriend poking the belly of her boyfriend.  The boyfriend can poke back, or tug an ear, or send a playful sound or playful gesture (by selecting one from an LCD screen that will be added to the Furg bellies).  Users can create their own playful rituals around the Furgs, and can change and evolve those rituals as they please.

An initial set of features to add include:

  • Rich sound set
  • Head petting
  • Belly rubbing & poking
  • Nose touching & nose rubbing
  • Ear tugging
  • Jiggle and bounce
  • LCD screen – displays animations like heart bursts, and provides a menu that enables sending of messages, sounds, and animations


Tamagotchi – Halloween Dog Ghost

My Tamagotchi project is literally a mechanical animal toy: a dog ghost who responds to people around him and reacts when people try to pat him.


It may seem strange, but the source of my inspiration is the Tamagotchi itself. I owned a toy like this when I was a little kid, and I enjoyed the little digital pet since I couldn’t own a real pet. That’s why I wanted to create something similar to a toy animal, but with a funny twist to it.


The design of my Tamagotchi includes two main parts: the sonar sensor and light sensor, which detect activity around the toy; and the servo on top of the doll, which makes the doll vibrate and turn around as programmed.

Design In Process

My original design was quite different from the final one. The original design includes:

-a sonar sensor array that can detect people approaching from three directions

-an MP3 shield that plays sound effects when needed

-a color sensor that can mimic the color of people’s clothes with an RGB LED

-a piezo element that detects tapping

 

Unfortunately, most of these were discarded for the following reasons:

-the MP3 shield somehow slowed down the whole program, which resulted in the sonar sensor not being able to detect approaches fast enough

-the color sensor can only read colors very close to it, which makes it impossible for it to function as an “eye” observing color from a relatively far distance

-the piezo element detects tapping over a large area rather than within a specific spot, which makes the idea of “tapping the right spot” insignificant

-the sonar array actually worked fine as expected; however, the idea of the doll “seeing” people approach and turning away from them always seemed to be missing some piece

 

Eventually it occurred to me that rather than waiting for the doll to turn back and face the audience again, it would be a lot more interesting and funny if the doll actually invited people to tap it, and even better if the audience had to chase the patting spot, since the doll keeps turning away and trying to evade their hands. The invitation to interact was the key to turning the doll into a robot.


Diagram

The code itself is fairly long, so it might be better if I just explain the logic behind it here (a rough sketch follows the list):

1) the sonar sensor keeps checking whether anything is approaching, while the servo constantly vibrates as if the ghost dog is slightly shaking

2) if anything gets into the sonar’s detection range, the servo turns in a random direction

3) if the object (the audience’s hand, in this case) follows the sensor, the servo turns in another direction

4) if the light sensor is tapped and cannot detect light anymore, the green LED lights up, the servo turns back to its original position, and the sonar sensor is turned off

5) after 3 seconds, the servo and sonar sensor come back online
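A compressed sketch of that logic is given below, with assumed pins and thresholds rather than the values used in the actual build.

```cpp
// Compressed sketch of the ghost-dog logic above (pins and thresholds assumed).
#include <Servo.h>

Servo neck;
const int TRIG = 7, ECHO = 8, LIGHT = A0, GREEN_LED = 6;
const int HOME_ANGLE = 90;
bool sonarEnabled = true;

long readDistanceCm() {
  digitalWrite(TRIG, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG, LOW);
  return pulseIn(ECHO, HIGH, 30000) / 58;
}

void setup() {
  pinMode(TRIG, OUTPUT); pinMode(ECHO, INPUT); pinMode(GREEN_LED, OUTPUT);
  neck.attach(9);
  neck.write(HOME_ANGLE);
  randomSeed(analogRead(A1));
}

void loop() {
  // 1) Idle shake: small vibration around the current position.
  int pos = neck.read();
  neck.write(pos + 2); delay(60); neck.write(pos - 2); delay(60);

  // 2-3) Anything entering sonar range makes the servo dodge to a random direction.
  if (sonarEnabled) {
    long d = readDistanceCm();
    if (d > 0 && d < 40) neck.write(random(0, 181));
  }

  // 4-5) Covering the light sensor ("patting") lights the LED, recentres the dog,
  // and takes the sonar offline for 3 seconds.
  if (analogRead(LIGHT) < 200) {
    digitalWrite(GREEN_LED, HIGH);
    neck.write(HOME_ANGLE);
    sonarEnabled = false;
    delay(3000);
    sonarEnabled = true;
    digitalWrite(GREEN_LED, LOW);
  }
}
```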

Conclusion

Although the design of this project was altered a few times in the process, the final result functions quite close to what I originally wanted to create: a machine pet that reacts to people’s behavior, just as a traditional Tamagotchi does. From the feedback I received during the exhibition, I believe the audience did enjoy playing with it. I had a lot of fun trying to figure out the best way to achieve the result I wanted, and the final result is satisfying. There are two things I would like to change, though:

1) I hope to find a way to make the MP3 shield work. It would be even funnier if the ghost dog could actually talk while moving.

2) the fishing line only emerged as a problem after testing was almost done. The robot was designed to hang from the ceiling by a fishing line, with the sonar sensor placed so that the audience’s hands would be detected if they reached for the “pat spot” from the same level. However, after the fishing line accident (which ruined my first prototype), I switched to a clamp and mounted the doll between two chairs. As a result, it ended up positioned a lot lower than I expected: people had to crouch to be detected by the sonar sensor, and if they tried to tap the doll from above, the servo would not turn at all.

In my future projects, I will make sure more testing is done before the final deadline to avoid these kinds of problems. Overall, the project was enjoyable and educational. Thank you!

Tamagotchi Project – Amaze, an autonomous turtle robot


Video to be uploaded…

Overview

Amaze is an autonomous robot with four robotic legs, mimicking the movement of a turtle. When it detects an obstacle, it gets scared and moves away from it. But there is a chance it becomes bold and dashes into the obstacle. An open-ended maze sits at the center of the top of its body. With its shaky movement, a ball moves around within the maze.

Emotion and Movement Patterns

  • Calm, moves forward/left/right randomly
  • Cautious, looks around to the left and to the right after each movement
  • Panic, simultaneous movement of fore limbs and rear limbs at a high speed
  • Scared, slowly makes a left or right U turn and moves away at a high speed
  • Bold, slowly makes a left or right U turn but turns around half way and dashes into obstacle

(Refer to Implementation -> Algorithm for more information)

Design

Behavior / Emotion

The turtle robot reacts to obstacles and shows fear, cautiousness, or calmness through a sequence of movements with variable directions and speed.
My original idea was to trigger certain behaviors of the turtle robot when the maze is being solved: if the “egg” of the turtle gets lost in the maze, the turtle moves furiously, but when the “egg” is returned to the “ending point” of the maze, the turtle calms down and moves slowly again. In that scheme, however, the emotional feedback is based on the maze instead of the external environment or humans, so I decided to use distance sensors as input from the surroundings to allow more interactivity.

Kinematic / Kinetic Mechanism

To mimic four-legged locomotion, I initially experimented with four micro servo motors with extensions. It didn’t move effectively because the forward sweep of the servo arms counters the backward sweep to some extent. I decided to create four limbs with two parts and one joint each, where each lower limb is limited to swing within approximately 90 degrees, like human limbs. I also researched turtle locomotion patterns and decided to go with what is called hatchling terrestrial locomotion (diagonally opposite limbs move together), used by most sea turtles (Lutz and Musick); a rough gait sketch follows the reference below. This pattern works well with the limb design. There are alternative gaits such as a crawling type or eight-legged spider-like locomotion. One possibility is to extend the servo arms with turtle flippers to enable swimming.

(Lutz, P. L., Musick, J. A., & Wyneken, J. (Eds.). (2002). The biology of sea turtles (Vol. 2). CRC press. Retrieved from http://www.science.fau.edu/biology/faculty/Wyneken/DOC050817-012.pdf)
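The following is a rough sketch of that diagonal-pair gait using the VarSpeedServo library listed under Implementation; the pins, angles, and speeds are illustrative only.

```cpp
// Rough sketch of the diagonal-pair (hatchling) gait: front-left + rear-right
// sweep together, then front-right + rear-left. Pins, angles and speeds are
// illustrative, not the project's values.
#include <VarSpeedServo.h>

VarSpeedServo frontLeft, frontRight, rearLeft, rearRight;

void sweepPair(VarSpeedServo &a, VarSpeedServo &b, int fwd, int back, int speed) {
  a.write(fwd, speed, false);     // both limbs of the pair start together
  b.write(fwd, speed, true);      // block until this pair's power stroke finishes
  a.write(back, speed, false);    // recovery stroke
  b.write(back, speed, true);
}

void setup() {
  frontLeft.attach(4);
  frontRight.attach(5);
  rearLeft.attach(6);
  rearRight.attach(7);
}

void loop() {
  sweepPair(frontLeft, rearRight, 135, 45, 40);   // diagonal pair 1
  sweepPair(frontRight, rearLeft, 135, 45, 40);   // diagonal pair 2
}
```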

Appearance / Material

In order to mimic the appearance of a turtle, I focused on mimicking animal limbs and the shell of the turtle. I intended to build everything out of bamboo, which may suggest natural life forms, but since bamboo is hard to cut and sand, I went with wood for the limbs and base.

Interestingly, I found a plastic food platter that has the perfect shape and size for the base of a turtle robot. Without it, I might have handcrafted the base out of wood or bamboo.

For the maze, I tried heating bamboo strips in order to create curvy maze walls, then found it more effective to heat acrylic strips and bend them manually.

For the shell, I initially wanted to make a dome with a turtle shell texture. Since the texture might block the view of the maze inside, I decided to go with a clear plastic dome.

Implementation

Hardware:
Arduino Mega 2560
Mini Breadboard x 2
Toggle Switch x 1
9g Micro Servo x 4
Ultrasonic Sensor
5V extended battery for smartphone


Code:
https://github.com/Minsheng/tamagotchiturtlebot/tree/master/turtlemotion

Algorithm:
The following types of emotions/movement patterns are implemented,

  • Calm, default pattern in each cycle
  • Cautious, triggered after fear level reaches a threshold (currently 3)
  • Panic, triggered if distance from obstacle < 10cm
  • Scared, distance from obstacle < 40cm, probability of event set to 70% chance
  • Bold, distance from obstacle < 40cm, probability of event set to 30% chance

fearLevel: a variable that keeps track of the number of times Panic or Scared mode is triggered. If it reaches a threshold, Cautious mode is activated and the variable is reset.

becomeCautious: if set to true, the turtle robot moves to the left and to the right sequentially after a random move. It is reset after being cautious for a certain number of cycles in the main loop.

To make the turtle react more slowly, the previous and current distance sensor values are both checked whenever an obstacle is detected. Only when both values break their thresholds does the emotion pattern change.
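A condensed sketch of this selection logic follows; the movement routines are placeholders, and only the thresholds and probabilities given above are taken from the write-up.

```cpp
// Condensed sketch of the emotion-selection logic (movement routines are placeholders).
#include <Arduino.h>

int fearLevel = 0;
int cautiousCycles = 0;
bool becomeCautious = false;
long prevDist = 999, currDist = 999;

long readDistanceCm() { return 999; }   // stub for the NewPing reading
void moveRandomly()   {}                // stub: Calm pattern (forward/left/right)
void lookLeftRight()  {}                // stub: Cautious pattern
void panicMove()      {}                // stub: fast simultaneous limb movement
void uTurnAway()      {}                // stub: Scared pattern
void uTurnThenDash()  {}                // stub: Bold pattern

void setup() { randomSeed(analogRead(A0)); }

void loop() {
  prevDist = currDist;
  currDist = readDistanceCm();

  // React only when both the previous and current readings break the threshold,
  // which makes the turtle respond a little more slowly (and less jittery).
  if (prevDist < 10 && currDist < 10) {
    panicMove();
    fearLevel++;
  } else if (prevDist < 40 && currDist < 40) {
    if (random(100) < 70) uTurnAway();       // 70% chance: Scared
    else                  uTurnThenDash();   // 30% chance: Bold
    fearLevel++;
  } else {
    moveRandomly();                          // Calm is the default pattern
  }

  if (fearLevel >= 3) {                      // threshold from the write-up
    becomeCautious = true;
    cautiousCycles = 0;
    fearLevel = 0;
  }
  if (becomeCautious) {
    lookLeftRight();                         // look left and right after each move
    if (++cautiousCycles >= 5) becomeCautious = false;  // cycle count assumed
  }
}
```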

Library used:
NewPing (http://playground.arduino.cc/Code/NewPing)
VarSpeedServo (https://github.com/netlabtoolkit/VarSpeedServo)

Challenges / Resolution

  • achieving smooth movement/added two wheels at the bottom of the base and adjusted limbs’ length
  • synchronizing multiple servo motors with variable speed/used VarSpeedServo library functions; always wait for the slowest servo to complete before setting the next positions for the others
  • failed to implement backward movement/let the robot make a U-turn instead
  • unable to upload programs to the Arduino Nano at some point even after re-burning the bootloader/replaced it with an Arduino Mega 2560
  • working with two or more ultrasonic sensors/used just one to detect the front
  • determining where to showcase the robot/let it go and pray no one steps on it


Box Robot

My goal was to build something that people could play and interact with, and whatever it turned out to be should react to people’s interactions.

The project started with the idea of building a mood box: the inside of the box would show its mood, and whenever someone interacted with it, its mood would change. From that, I added the eyes, using an LED matrix; the eyes should be consistent with the mood coming from inside the box. To represent the mad mode, some white LEDs blink in a thunder pattern inside the box and the eyes become angry.


 

The mad mode starts whenever the box is put down by itself on the table; it doesn’t like to be alone, it likes to be held. To detect when it is on the table or floor, it uses a photosensor on the bottom.

The second mode is the neutral mode, which starts when someone holds the box. During the neutral mode it is possible to move the box to make the eyes move: using an accelerometer, the eyes follow the inclination of the box, and two vibration motors inside the box are activated when the box is tilted to the right or left.
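A minimal sketch of these two modes is shown below, assuming a photosensor on A0, one analog accelerometer axis on A1, and placeholder pins; the LED matrix eye drawing is left as a stub.

```cpp
// Minimal sketch of the box's two modes (pins and thresholds assumed; the
// LED-matrix eye drawing is left as a placeholder).
#include <Arduino.h>

const int PHOTO_PIN   = A0;   // photosensor on the bottom of the box
const int TILT_PIN    = A1;   // one accelerometer axis (analog output assumed)
const int THUNDER_LED = 6;    // white LEDs inside the box
const int VIB_LEFT    = 4;
const int VIB_RIGHT   = 5;

void drawEyes(int direction, bool angry) {}   // placeholder for the LED-matrix code

void setup() {
  pinMode(THUNDER_LED, OUTPUT);
  pinMode(VIB_LEFT, OUTPUT);
  pinMode(VIB_RIGHT, OUTPUT);
}

void loop() {
  bool onTable = analogRead(PHOTO_PIN) < 100;   // bottom sensor covered = on a surface

  if (onTable) {
    // Mad mode: thunder-pattern blinking and angry eyes.
    digitalWrite(THUNDER_LED, millis() / 100 % 3 == 0);
    drawEyes(0, true);
    digitalWrite(VIB_LEFT, LOW);
    digitalWrite(VIB_RIGHT, LOW);
  } else {
    // Neutral mode: eyes follow the tilt, vibration motors fire on left/right tilt.
    digitalWrite(THUNDER_LED, LOW);
    int tilt = analogRead(TILT_PIN) - 512;      // centred around zero
    drawEyes(tilt, false);
    digitalWrite(VIB_LEFT,  tilt < -100 ? HIGH : LOW);
    digitalWrite(VIB_RIGHT, tilt >  100 ? HIGH : LOW);
  }
  delay(20);
}
```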

The case was made out of laser-cut acrylic; two openings on the sides, as well as the ‘brain’ on the top, are meant to let the light from inside shine out.