
Mooduino
A device that lets you set your boundaries of comfort.

Project Members: Shikhar Juyal, Jingpo Li, Naomi Shah

Project description
Mooduino is a personal space signalling device that uses emojis to communicate to those around the user how the user feels about their personal space at any given moment. In an increasingly multicultural world, where the sense of personal space differs from culture to culture, Mooduino acknowledges a user's need for comfortable personal space and gives them control over the proximity they would like to keep between themselves and others, the range of emotions they would like to represent, and the context in which they would use the device.

Intended context and users
All issues of personal space are significant, regardless of culture, geography and time. The Mooduino can be adapted for users across demographics and cultures.

Team dynamics
For this project, our team worked in two phases. In the first phase, the team ideated on the concept and developed the code together. In the second phase, we each worked on the realisation of our products individually, because each of us perceived and interpreted space in a very personal way. This led to three different Mooduino products: a wearable, a desk companion and a stuffed animal, each incorporating the hardware in a different manner. This also demonstrates the versatility and adaptability of the product.


 

Ideation:
Phase 1:
The idea emerged from a rapid brainstorming session. We started with a 3-minute round of sketching out and introducing as many ideas as we could on post-its, then discussed them at the end of the round. Each of us generated approximately 5-6 ideas within the first three minutes. The next 3-minute round involved building upon and modifying some of the ideas presented in the first round, while also introducing new ones sparked by the previous discussion. We repeated the process of discussing our ideas and setting aside those that really resonated with each of us individually. The third round was a variation of the second, with the same cycle of building upon, modifying or introducing new ideas.


 

In this exercise, we focused on generating a quantity of ideas over quality, to get our imagination flowing and arrive at our idea faster. After discussing all of our ideas, we shifted our focus to quality. We shortlisted the strongest ideas and the ones that interested us the most. Each member then argued the pros and cons of every idea on the table in terms of its context, production and relevance to the brief. This way, each member contributed to a collective development of ideas, making our concept stronger and more well-rounded. It was important to us that each member feel ownership over the idea we settled on.

Here are some of the ideas that were shortlisted after our brainstorming session:

A tamagotchi of our avatars
A glove for different hand gestures such as fist pump, high-five and handshake.
A device that glows brighter every time you are in proximity
A face whose expressions change depending on our actions.
A device that expresses your need for personal space

 

Why personal space?


The idea that we unanimously agreed upon was a device that expresses our need for personal space. All three of us come from parts of the world with a different understanding of personal space, owing to close-knit communities, crowded cities and overpopulation. The transition from India and China to Canada also changed the way we thought about our boundaries of proximity and comfort. Each of us valued our personal space differently, owing to our cultures, our gender and our past experiences. We concluded that a device built around these varying perspectives on personal space would be a relevant project.

Development:
At this point, we worked on defining the characteristics of our product, and the hardware required to achieve it.

We worked with three main components:
  • Ultrasonic sensors
  • LED matrix display (8×8)
  • Buttons


These were then utilised to achieve the following defining characteristics:

  • Set own boundaries:
    The Mooduino device should allow a user to acknowledge and express their own personal space, and set a proximity limit depending on the distance they find comfortable while interacting with others.
  • Hardware: Ultrasonic sensor
    The ultrasonic sensor measures the proximity between the user's body and the person they are interacting with. A proximity threshold is set in the code by each individual, depending on their level of comfort.
  • Express oneself:
    The need to guard one's personal space, whether physical or emotional, is integral to one's well-being.
    The Mooduino should allow users to send subtle but clear visual signals that let people know when they are too close for comfort, but also indicate when they are welcome and even needed. Emojis are fairly binary, in the sense that they display either happy or sad, angry or joyful, and we wanted a method of communication that was as direct as possible, leaving little room for interpretation. Emojis are also a widely acknowledged system of communication, changing the meaning of our daily text through their application, so we decided to experiment with them to test whether the signalling was clear and concise. By using emojis to visually communicate how one feels, the Mooduino attempts to spell out boundaries in circumstances where body language can be misinterpreted. These emojis can be customized to match a user's personality and the range of moods they are inclined to feel during interactions with others.


  • Hardware: LED matrix and buttons
    The LED matrix allows for visualization of the emojis, while the button allows for switching between the different moods suited to the different contexts, people and environments a user might be dealing with.
  • Versatile, portable and modular hardware:
    The Mooduino device can be modified to fit into any hardware or clothing product, and hence be used under a variety of circumstances. We iterated through a series of rough concepts, including various wearables, toys and accessories that could incorporate our project.


This is demonstrated by the three different prototypes we created to illustrate the versatility of the product, as well as the personal nature of an artefact that is meant to communicate your emotions and moods.

 

Mooduino Wearable:


The Mooduino can be clipped onto one's person while navigating a public environment that might be crowded, busy or extroverted. In such a context, a user might have difficulty setting boundaries around their need for personal space. The device signals to those around the user when they might have violated the user's comfort zone, allowing them to take a step back. Similarly, the device can also indicate that a user is open to an interaction. The Mooduino can accompany a bag, a pouch or a purse that a user might carry.

 

Mooduino Desk companion:


The desk companion is effective if you work in an open cubicle and need to express an encroachment on your personal space. In this case, the desk companion protects your stationery and lets others know that borrowing without asking is not a welcome interaction. It also has a customizable shell that complements the changing moods of the device.

Mooduino Stuffed Animal:


The Mooduino stuffed toys are designed mainly for children aged 5 to 10, who are still learning to acknowledge and communicate "ownership" and "property". The toy helps them establish and communicate boundaries: it teaches them what they do not have ownership over, and also communicates their boundaries of personal belonging to other children.

Code and circuit design

Github link – https://github.com/jli115/Project-2

We started our process by getting oriented with the different components we intended to use, and then tested them out with code. The intention at this point was simply to test how each component worked individually, and which variables we needed to be aware of while using it.

We started with the LED matrix attached directly to our breadboard, using male-to-male jumper cables to make the circuit. Unfortunately, the circuit was too chaotic to be feasible because of the number of wires it required. We minimized the circuit by using an LED matrix driver, which cleaned up the breadboard and made the wires more manageable. This setup required only one external 10 kΩ resistor to set the segment current for the LEDs.


Next, we experimented with the HC-SR04 ultrasonic sensor, first testing the readings on the serial monitor, then testing it with a single LED bulb, and then with multiple LED bulbs. Once we felt confident with the sensor, we used it in conjunction with the LED matrix and the Ultrasonic.h library. It took several rounds of testing to see how the output changed as we modified the maximum distance in the code, and we eventually switched over to the NewPing.h library in the interest of more stable readings.
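As a rough illustration, the reading loop we ended up with looked something like the following sketch. The pin numbers and the maximum distance here are placeholders rather than our exact wiring; NewPing's ping_cm() returns 0 when no echo comes back within the maximum distance.

    #include <NewPing.h>

    #define TRIG_PIN 12        // placeholder wiring; our actual pins differed
    #define ECHO_PIN 11
    #define MAX_DISTANCE 80    // cm; the boundary we tuned by trial and error

    NewPing sonar(TRIG_PIN, ECHO_PIN, MAX_DISTANCE);

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      unsigned int cm = sonar.ping_cm();  // 0 means nothing detected in range
      Serial.println(cm);
      delay(50);
    }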


 

[Fritzing circuit diagram]

Each of us created a series of emojis that we felt best suited our personalities and the contexts we wanted to use them in. We created these as arrays of bytes in binary form, with the help of an LED matrix editor. We also integrated a button to switch between modes of emotion.

We programmed our emojis to switch between two scenarios: one where no presence was detected, and another where presence was detected in front of the sensor. For example, the emoji was sad if no presence was detected, but would switch to happy when presence appeared within the proximity threshold we set in the code. Unfortunately, the emojis working in conjunction with the sensor were largely unstable, with the visuals on the LED matrix constantly fluctuating back and forth even when nothing was in front of the sensor.
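A simplified sketch of this pattern is below, assuming a MAX7219-style driver addressed with the LedControl library. The pins and bitmaps are illustrative, the mode-switching button is omitted for brevity, and each of us used a different emoji set.

    #include <LedControl.h>
    #include <NewPing.h>

    LedControl lc = LedControl(12, 11, 10, 1);  // DIN, CLK, CS (assumed pins)
    NewPing sonar(7, 6, 100);                   // trigger, echo, max cm

    // emojis drawn in the LED matrix editor, stored as arrays of byte rows
    byte happy[8] = {B00111100, B01000010, B10100101, B10000001,
                     B10100101, B10011001, B01000010, B00111100};
    byte sad[8]   = {B00111100, B01000010, B10100101, B10000001,
                     B10011001, B10100101, B01000010, B00111100};

    const int comfortZone = 60;  // cm; each user sets their own boundary

    void drawEmoji(byte emoji[]) {
      for (int row = 0; row < 8; row++) lc.setRow(0, row, emoji[row]);
    }

    void setup() {
      lc.shutdown(0, false);     // wake the driver from power-saving mode
      lc.setIntensity(0, 8);
      lc.clearDisplay(0);
    }

    void loop() {
      unsigned int cm = sonar.ping_cm();
      bool presence = (cm > 0 && cm < comfortZone);
      drawEmoji(presence ? happy : sad);  // sad when alone, happy when near
      delay(100);
    }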

The challenge of stabilizing the visuals on the LED matrix occupied much of our time as we changed variables in the code through trial and error to see what effect they had. We manipulated the maximum distance several times, running tests to see which proximity yielded the most stable results. Readings seemed most stable when we kept the maximum distance between 50 and 80; however, this meant an uncomfortable level of proximity between one person's body and another's.


Eventually, Nick helped us add an extra variable that filters out the null values returned by the sensor. This simple line of code completely stabilized the readings that had troubled us for several days.
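In essence, the fix guards the reading in the way this hedged snippet suggests (NewPing reports a missed echo as 0, which reads as a null value):

    // inside loop(): accept only non-zero echoes
    unsigned int reading = sonar.ping_cm();
    if (reading > 0) {
      distance = reading;  // keep the last good value; ignore nulls
    }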

Finally, we minimized the circuit, making it compact enough to fit into our individual shells. This involved soldering the button into the circuit, reducing wire lengths, using a smaller breadboard and arranging all the components snugly together.

What we wanted to do:
We spent some time experimenting with sound, but decided to forgo it, with the rationale that we wanted our signalling of personal space to be subtle rather than alarming; we believed the visual should be indication enough that the user is in need of personal space.

We also wanted to experiment with an RGB matrix, attributing colours to certain groups of emotions. We believe the signalling would have more clarity when it fit the mental models of green as "go" or "positive" to suggest that an approach is welcome, and red as "stop", "warning" or "danger" to suggest that an approach would be an encroachment. Blue would be used for a more neutral range of emotions.

We debated the idea of using an LCD display to show a quantitative approach to understanding distance, by indicating what the measured distance of comfort was for the user. This was quickly discarded as it did not seem to add any utility to the design.


 

Desk Companion (Shikhar Juyal)-


Context
The Mooduino desk companion is a personalized desk accessory that also doubles as a stationery stand. This desk buddy has various mood settings you can alter with a simple button press to align with your own mood. Each setting has two emojis, depending on how close someone is to you (or your stuff), and the Mooduino does the job of sending a message like "do not disturb" when you are in work mode, or "come talk" when you are up for a conversation. The desk companion has a fully customizable Lego body that you can change along with the different mood settings, adding a playful element to the design.

Process of building:
The idea of a desk companion for customizing your personal space added a strong aesthetic bent to the Mooduino while further augmenting its utility. A shared working space can oscillate between being social and isolating, and I feel it is important to communicate to my colleagues what headspace I am in at the moment. It also allows me to add flair to my workspace and showcase my personality and influences by further customising the Mooduino.


 

I iterated through a few versions of the design, exploring its look and feel as well as how the LED display could be accessorized within it.

Fabrication:
I planned on 3D printing the device, but after 3D modelling a basic idea, I realized there was not enough time and resources to get it printed. I decided to fabricate the device by hand instead. I explored different materials like wood, cardboard and ivory sheet, and even considered using plastic, but integrating the circuit with these materials seemed like a hard job.
Using Legos (or Blocktech, to be more precise) as building blocks turned out to work well, as it added to the existing idea of the desk companion and made the design fully customizable.


Materials used:
Arduino micro
Jumper wires (male to male, female to male)
Button
LED matrix display with driver
Ultrasonic sensor
USB cable
Blocktech or Legos (3 different colors)
Portable charger

Challenges
Integrating the circuit with the Lego (Blocktech) blocks: making a Lego toy is easy, but integrating a circuit into it was challenging. Placement of the blocks had to be strategic so that the sensor, matrix display and button could fit into the holes. The button also needed a sturdy backing so that it holds up well when pressed.
Closed circuit: after integrating the circuit with the blocks, I realized it formed a closed shell, so modifying connections was also a challenge.

 

Mooduino Wearable (Naomi Shah)-


Context
As an introvert who values my personal space, I find great utility in a device like the Mooduino wearable. I would like to communicate my current mood to people around me who belong to different cultural contexts, with their varying senses of approachability and personal space.

Fabrication-
The wearable went through a process of low-fidelity prototyping, first integrated into an existing bag to test its core functionality. This idea was discarded in order to make the device more portable. The second iteration involved wearing it strapped to the arm, and was tested using a phone case. After a couple of rounds of user testing with different people, I realised that the ultrasonic sensor has a rather narrow field of detection, and that the best results would come from wearing it across the chest. After this conclusion, I attempted to make a wearable that could be removed and reworn easily, much like the phone case. From my options of a necklace, a belt and a clip-on, I chose to create a low-fidelity clip-on box that merged easily with my belt bag, allowing the emojis to stand out further.


The circuit was then minimized to be compact, such that it would fit into a small box that could be clipped onto my daily belt bag. As the base container is very small and unassuming, it can be housed in different cases and bags to blend in, or accessorize my outfit for the day. I painted the housing matte black to match my belt bag and fastened it with heavy-duty clips.

Materials used
Cardboard box
Stationery clip
1 Arduino Micro
Wires
1 Button
8×8 LED matrix and driver
1 Proximity sensor
USB cable

Challenges
1. Minimizing the circuit to fit into the box
We used a smaller breadboard, positioned the proximity sensor and LED matrix first, and made the wires as short as possible.

2. Creating an aesthetically pleasing, modifiable and reasonably weatherproof housing.

 

Stuffed Animal (Jingpo Li)-

The Mooduino stuffed toy is a special toy for children that shows different expressions based on different scenarios, using emojis as another way for children to communicate. The toy has various mood settings you can alter by pressing a simple button on the toy's back, depending on your mood. There is a range of emojis, including happy, sad, excited and bored.

Process of building
The process of building was very minimal. After we reduced the hardware to a decently small parcel, it was fairly simple to integrate it into an existing stuffed toy. This supported our hypothesis about the modularity and easy customizability of the Mooduino.


Materials used
1 Soft toy
2 headropes
1 Clip
1 Arduino Micro
Bunches of wires
1 Button
1 LED matrix
1 Proximity sensor
USB cable

Challenges
1. Integrating the Mooduino into the stuffed toy without diminishing the aesthetic appeal or playability of the toy.

 

Reflection

 

The act of physically building the prototype was extremely insightful and taught us a great deal about the components we used. Working in a group was fruitful because peer exchange helped us deconstruct the code, problem-solve among ourselves when we faced difficulty, and act as sounding boards for each other while brainstorming.

 

Our group worked in the unconventional structure of building the code together but splitting up to work on the fabrication of our individual products. Creating individual products meant that we could not mass-produce three identical products between us, but it also meant that we had the flexibility to define what personal space meant to each of us individually. This provided a great balance for expressing ourselves. We were initially skeptical of our approach and weren't sure if we were on the right track, especially because our concept revolved around integrating the Mooduino into existing products rather than fabricating a completely new one.

 

While this project is a minimum viable product at the moment, the intention for the future is to refine and perfect the product into a cultural artifact that is commonplace, widely understood and ubiquitous enough for people to recognise it on other people's bodies or belongings, and to respect the signalling as a sign for maintaining boundaries.

References:

 

Project Library

Personal Space Defense System - https://www.instructables.com/id/Personal-Space-Defense-System/

Tamaguino update with huge OLED - https://create.arduino.cc/projecthub/Alojz/tamaguino-update-with-huge-oled-b897d6?ref=tag&ref_id=microcontroller&offset=26

LED Matrix editor - https://xantorohara.github.io/led-matrix-editor/#00423c005a3c6600|0000423c005a3c66

Programming 8×8 LED Matrix - https://create.arduino.cc/projecthub/SAnwandter1/programming-8×8-led-matrix-23475a?ref=tag&ref_id=art&offset=9

Controlling an LED Matrix with Arduino Uno (face with 5 LED matrixes) - https://www.hackster.io/igorF2/controlling-an-led-matrix-with-arduino-uno-0a9e94

Arduino HC-04 and 8×8 Matrix MAX7219 (get close and the Arduino is happy; go away and he gets sad) - https://www.hackster.io/vinicius-lopes/arduino-hc-04-and-8×8-matrix-max7219-deab57

Arduino tutorial about an 8×8 LED Matrix (how to find the pins) -

 

Audio-Visual Theremin

Experiment 2: Multiperson

Amreen Ashraf, Mazin Chabayta and Josh McKenna

Project Description

The Audio-Visual Theremin is a unique device that gathers users together through the premise of sound creation and music. The device invites users to modulate sound and light with one another by moving the object through space. Through the different velocities and frequencies of sound they generate, users are able to form harmonies and melodies with one another.

Introduction

For this experiment our brief required that we develop custom hardware devices to create and facilitate new modes of interaction amongst ourselves.

We arrived at our idea after discussing in greater detail some aspects and forms of our ideal type of communication. From that discussion we unanimously decided for our object to work around the elements of nonverbal communication: more specifically, interaction in the form of dance (body movement) and sound.

Originally, our idea stemmed from each of our individual desires to create an instrument as an object. Musical instruments are unique devices that can create interaction amongst users in the form of harmonies and melodies. Through further exploration we discovered a variety of interesting MIDI controllers, as well as proximity instruments like the theremin. Our idea began as a form of synthesizer that would generate continuous sine waves, with the ability to modulate them based on the position of the object over a perpendicular space.


Recognizing the scope of our initial proposal, we revised it by considering which elements were absolutely necessary for our intended user experience to occur. Our hypothesis was that if users are given an instrument in hand, one that plays sound automatically, they would engage in dance and experiment in music creation with one another: a utopian outcome, but desired nonetheless.

After experimenting with our available input and output sensors we felt as if we could create an instrument based around the theremin, while incorporating the ability to affect light, color and rhythm as a basis for communication.

User Experience

With the concept taking shape, and after deciding that our object would generate sound via proximity sensors, similar to how a theremin detects motion and distance to generate sound, we began to hypothesize how our instrument would result in a meaningful experience for users.

Our design focused on creating an organic, shared musical experience amongst participating users. We first imagined the user holding the object perpendicular to the floor. The primary input of the system would be the distance between the object and the floor; this value would generate tones relative to the distance the sensor detected. When the object is raised, a correspondingly higher tone is played; when the object is lowered, a comparatively lower tone is emitted. We hypothesized that this interaction of lowering and raising the instrument could become a form of dance, encouraging users to continue emitting sounds. This is a fundamental difference between our Audio-Visual Theremin and a traditional one, which is static and positioned in place.

Following the initial brainstorm, we decided collectively that the object's form would be a critical part of the user experience. Ultimately, we wanted the movement of the instrument to generate the variety of sound and contribute to the overall shared experience amongst users. We imagined our object as something easily transportable, something that could be carried to lively events or concerts. With the support of the Maker Lab we developed the form of the device into a polyhedron, the first made of cardboard and then eventually wood. Originally we intended for the instrument to be spherical, but landed on this geometry as we wanted differentiation for our string of LED lights.

With this concept, we still felt that the user would be limited in manipulating sound with control and intent. To ensure that our experiment would not result in merely organized noise, we decided to incorporate a potentiometer that would act as an interface for the user to adjust the pitch and frequency of the sound, as well as the colour and frequency displayed by the LED lights. The lights act as a mirror of the sounds emitted from the device.

As demonstrated by the video below, one observation we had is that, without proper instruction, the inexperienced user's interaction is more curious and introspective than explicit and musical. We all noted after our prototype was tested that we could have made the intended experience more implicit, rather than focusing primarily on the premise of an organic dance experience.

Design and Fabrication

Influenced by Rafael Lozano-Hemmer's Sphere Packing, as well as a variety of spherical MIDI controllers, we initially decided on a sphere as the shape for our device. We quickly realized the many difficult challenges we would face in fabricating a sphere, so we decided to change to a more manageable design that could hold all our components. We explored different options and considered moving to a cube design to simplify the fabrication process; however, we realized that a cube is not dance-friendly and doesn't sit nicely between a user's hands, so we went back to the drawing board to finalize the right shape for our device.


Eventually, we decided to merge the two previous ideas and come up with a new shape that functions as a middle ground between a sphere and a cube. Inspired by polyhedrons, our final shape was created with a hexagonal top and bottom, which provided a reliable base, and 24 triangles of two different sizes, which completed a semi-spherical shape.

We created our first prototype early using recycled cardboard, which allowed us to quickly predict the challenges and opportunities we would face in terms of physical constraints. It also allowed us to settle many decisions ahead of time, leaving more time to focus on the coding and electronics. With our second prototype, also cardboard, we were able to finalize the size, the manner of opening and closing the device, and the process of fabrication.

Since sound is a main function of our device, choosing a sound-friendly material for the fabrication of our object was an important factor. For acoustic sound, wooden enclosures are widely used due to their ability to amplify and add depth to sound. Hence, our third prototype was made of wood, which added aesthetics and sturdiness to our object. Moreover, using a biodegradable material reflects our commitment and responsibility to promote eco-friendly design.


Working with the Maker Lab manager (Reza), we were able to salvage all the materials required for building this object and plan the fabrication process efficiently. During our first meeting with Reza, we planned for the fabrication process to take about 8-9 hours, which was acceptable to us. However, the fabrication was held up by a laser cutter malfunction, which meant that all 26 wooden pieces had to be cut, sanded, assembled and glued manually, requiring a significant amount of time we did not have. So, we decided to create one wooden prototype manually and explore different options for the other two.


Eventually, we went with foam board, which added great value because the shape can easily be shared, printed, cut and folded by anyone at home. This opens a new door for the project to be widely shared via an open-source channel as an easy homemade DIY project. Combined with a simple step-by-step process and a few easily sourced components, this package allows anyone, no matter how basic their coding and building skills, to create their own version of the theremin device.


In total, the fabrication process took about 13 to 14 hours to complete, which is reasonable. We overcame the laser cutter malfunction by building our object from foam board rather than wood.

Coding

Code on Github

We faced a number of coding challenges; the amount of coding issues and barriers we faced correlated with the complexity and scope of the project. By reducing the input sensors to just an ultrasonic sensor and a potentiometer affecting light and sound, we simplified the process and were able to deliver a working instrument as our device.

Our initial test for controlling light with the potentiometer is demonstrated in the example below:

In this example, we did not have a solution for acting separately on each LED light without occupying a large number of pins (2 through 11). Although the idea demonstrated our initial intent to give the user control over light in correlation with sound, it was not feasible for this project, given that we would be limited in the number of lights and unable to coordinate additional input sensors.
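The approach amounted to something like the sketch below: ten LEDs on pins 2 through 11, with the potentiometer setting how many light up. The pin choices and mapping are illustrative, and the sketch makes the scaling problem plain, since every additional light costs another pin.

    const int potPin = A0;
    const int firstLedPin = 2;   // LEDs wired to pins 2 through 11
    const int ledCount = 10;

    void setup() {
      for (int i = 0; i < ledCount; i++) pinMode(firstLedPin + i, OUTPUT);
    }

    void loop() {
      // potentiometer position decides how many LEDs are lit
      int level = map(analogRead(potPin), 0, 1023, 0, ledCount);
      for (int i = 0; i < ledCount; i++) {
        digitalWrite(firstLedPin + i, i < level ? HIGH : LOW);
      }
    }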


 

We decided to delegate the following functions to our device: raising and lowering the object increases and decreases the pitch of the emitted sound, respectively, while adjusting the potentiometer increases both the frequency at which the sound is emitted and the frequency at which the LED lights cycle through the WS2811 strip (addressable via the FastLED.h library).

After recognizing that we would not be able to incorporate an analog synth into our Arduino board (see the section titled Process), we included a simple fix by Brett Hagman: the toneMelody reference library known as pitches.h. This small file defines tones relative to frequencies to be played out of the small 0.5 W speaker.


Although numerous notes were included, we ultimately settled on an array of 10 notes, mapped to the values reported by the ultrasonic sensor (between 0 and 60).


We defined a minimum and maximum range, outside of which no sounds would be played.

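Put together, the tone mapping behaved roughly like this sketch. The specific notes, pins and minimum range here are assumptions for illustration; the 0-60 sensor range and the 10-note array follow the description above.

    #include <NewPing.h>
    #include "pitches.h"        // note frequencies from the toneMelody example

    #define TRIG_PIN 12         // assumed wiring
    #define ECHO_PIN 11
    #define MAX_DISTANCE 60     // upper end of the sensor's 0-60 range

    NewPing sonar(TRIG_PIN, ECHO_PIN, MAX_DISTANCE);
    const int speakerPin = 8;   // 0.5 W speaker (assumed pin)
    const int minRange = 5;     // assumed lower cutoff

    // ten notes spread across the sensor's range (our actual notes may differ)
    int notes[10] = {NOTE_C4, NOTE_D4, NOTE_E4, NOTE_F4, NOTE_G4,
                     NOTE_A4, NOTE_B4, NOTE_C5, NOTE_D5, NOTE_E5};

    void setup() {}

    void loop() {
      int cm = sonar.ping_cm();
      if (cm >= minRange && cm <= MAX_DISTANCE) {
        // raising the object -> larger distance -> higher note
        tone(speakerPin, notes[map(cm, minRange, MAX_DISTANCE, 0, 9)]);
      } else {
        noTone(speakerPin);     // silent outside the defined range
      }
      delay(50);
    }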

 

Our LEDs, a WS2811 strip, were addressable via the FastLED.h library. We adjusted the brightness and hue by feeding the analogRead value from the potentiometer into the setBrightness call and our mappedHue mapping.

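A minimal sketch of that idea, assuming a single data pin and strip length (both placeholders), might look like this:

    #include <FastLED.h>

    #define LED_PIN 6           // assumed data pin for the WS2811 strip
    #define NUM_LEDS 30         // assumed strip length
    CRGB leds[NUM_LEDS];

    const int potPin = A0;

    void setup() {
      FastLED.addLeds<WS2811, LED_PIN, GRB>(leds, NUM_LEDS);
    }

    void loop() {
      int pot = analogRead(potPin);
      uint8_t mappedHue = map(pot, 0, 1023, 0, 255);      // pot -> hue
      FastLED.setBrightness(map(pot, 0, 1023, 20, 255));  // pot -> brightness
      fill_solid(leds, NUM_LEDS, CHSV(mappedHue, 255, 255));
      FastLED.show();
    }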

 

Electronics

Originally the design intended to include a variety of input sensors. Here are the components we finalized for the Audio-Visual Theremin:

  • Arduino board
  • Ultrasonic sensor
  • Potentiometer
  • WS2811 LED strip
  • 0.5 W speaker

[Fritzing circuit diagram]

Process

Day 1

On the day of the launch of Project 2, we met briefly to discuss our project. We decided to take a couple of days for research and to fully understand the demands of the project; this included testing out our Adafruit Feather board and, later, the Arduino. During this research phase we considered interaction as a whole, not just what could be facilitated by our microcontroller.

Following our initial breakout meeting, we each came prepared with some ideas and research based on our previous discussion.


With the parameters and constraints in mind we conducted our first ideation session.

During our first ideation round we came up with the idea of a disco ball which moves with you. We named it "dance dance tap tap" (working title). We were strongly drawn to the spherical shape of a disco ball and liked the idea of reimagining an iconic dance-floor item. Our first ideation was crucial because it revealed that we wanted to work with ambient sounds and the idea of moving sound to facilitate interaction.

Day 2 and 3


Following several informal discussions on a spherical shape, we rapidly prototyped a hexagonal shape with LED points set at angles to the shape. The shape itself was intriguing and provided us with an interesting basis for fabrication to work with and think around. We had a brief discussion on the form and how it would affect the following:

Sound

  • We decided to use wood as the base for our fabrication. At this point of the project, we wanted to put the speaker inside the shape and use the wood to amplify the sound.

Scalability

  • We did a rough measurement of the circuit board to build a larger-scale version of the project, again using cardboard to give us a rough idea of the shape.

Usability

  • With the shape in mind and our initial idea of sound as an output, we decided to scale it up, not only to fit the circuit board but also to see how users would react to it and use it. The questions that arose at this point in our project were the following:

3 Questions

  1. How would the users hold the shape?
  2. What is the intended use of the prototype? Is it the ability to manipulate sound? Or does that include the ability to manipulate lights as well?
  3. What components are to be used?

At this stage of the project we listed the components we had to play with and designed the capabilities we wanted around the shape. We decided to use the accelerometer to manipulate the sound, the photoresistor to manipulate the LED output, and the sonar as a back-up.

Day 4

During our third meeting, and after consulting with Nick during our class on Friday, we came up with a plan to focus on all the moving parts.


We also took this day to experiment with generating analog sounds. Josh attended a workshop provided by InterAccess, an artist collective and co-op working space. We decided there was value in Josh attending, as the workshop was based on creating an analog synth by replicating the Atari Punk Console.

Although the workshop was successful in achieving the sound and modulating effects we wanted the user to have control over, the scope was out of reach to be able to incorporate it into an Arduino project with our limited expertise.


Day 5

With our new shape scaled to fit our circuit, and a reiteration of our prototype, we tested our idea with all the desired input sensors over the weekend. We decided to drop the accelerometer, as we ran into issues with calibration. We focused our attention on getting a few things right, such as the controls for the proximity sensor, a knob for the lights, and a photoresistor to dim the light.


Day 6

During our fifth meeting we decided to focus primarily on our fabrication and electronics. Mazin took charge of designing our final prototype in wood, while Josh and Amreen spent their time working out the electrical components and coding set-ups.

Link to Video for Proximity Sensor Test #1

Day 7 and 8

On these days we decided to drop the photoresistor and focus on the ultrasonic sensor and potentiometer as controls for the sound and LED outputs. At this point, we recognized that we were becoming disoriented by the number of inputs we were trying to coordinate with our outputs of light and sound. After meeting with Nick and Kate, who suggested looking at a theremin, we got a better understanding of the user experience of our device.


At this point we had spent a lot of time constructing the wooden shape. This first prototype helped us understand that, due to time constraints, we would have to change the material for our next two prototypes. We decided to use architectural foam board to build them.

Day 9

On the last day, we focused on assembling the final two prototypes.


Final Presentation


Link to Video for User Test #1

Link to Video for User Test #2

Link to Video for User test #3

Reflection

Experiment two taught us many things about bringing an idea into physical form. Exploring materials presents an opportunity to learn about their different characteristics and functionalities. During the process of brainstorming, we realized the importance of shape when it comes to human interaction: a cube is treated and carried differently than a sphere because it does not fit as well between a person's hands. A sphere, on the other hand, presents its own series of fabrication challenges, which might not be obvious from 2D drawings.

One of the main lessons learnt from this experiment was the importance of a realistic plan for fabrication. Due to many unpredictable factors, fabrication may present challenges that are best identified and dealt with ahead of time rather than later. For us, having a backup plan allowed us to be ready for unpredictable challenges.

Thinking about user experience in a physical object is not as straightforward as in 2D design. A hand-held object's interactivity needs to be considered as a whole, from all 360 degrees. The placement of the sonar sensor, the speaker, the potentiometer and the LEDs had to be considered carefully, after a series of tests and experiments. We were able to figure that out early because we could test multiple prototypes quickly.

Exploring new ideas is sometimes unwelcome, especially when time is tight. However, exploring new ideas sometimes presents opportunities that would not be discovered otherwise. Having to resort to our backup plan of foam board, we discovered a new use for our object: an easy homemade DIY synth design that anyone with an internet connection can replicate at home.

Reflecting upon our process, we realized that we started with an extremely ambitious plan, especially regarding the number of incorporated sensors. We started off wanting to use the accelerometer and photoresistor to control lights and sound; throughout the process we learned the importance of simplifying our inputs and outputs.


References

  1. Hc-sr04 Ultrasonic Distance Sensor with Arduino. (2017, July 29). Retrieved October 29, 2018, from https://dronebotworkshop.com/hc-sr04-ultrasonic-distance-sensor-arduino/
  2. Igoe, T. (2010, January 21). Play a Melody using the tone() function. Retrieved October 29, 2018, from https://www.arduino.cc/en/Tutorial/toneMelody
  3. Joyce, C. (2015, November 27). Ultrasonic Theremin. Retrieved from https://www.instructables.com/id/Ultrasonic-Theremin/
  4. Rosa, K. D. (2015, December 5). Simple_Arduino_Instrument_Basic_Ultrasonic_Theremin.ino. Retrieved from https://github.com/kevd1337/SimpleArduinoInstruments/blob/master/
  5. Wal, P. J. (2015, February 28). Ultrasonic Theremin. Retrieved from https://paweljw.github.io/2015/02/ultrasonic-theremin/
  6. Garcia, D. (2017, August 16). FastLED. Retrieved October 29, 2018, from https://github.com/FastLED/FastLED/wiki/Basic-usage
  7. Garcia, D. (2016, March 31). FastLED : Using Multiple Controllers. Retrieved from https://github.com/FastLED/FastLED/wiki/Multiple-Controller-Examples

Tiny Trotters

 


 

 

 

 

A digital spin on an old-fashioned toy. Push toys are meant to help children and encourage them to walk more by offering a fun interaction. Tiny Trotters is an interactive push toy with a light-up pixel indicator in the wheel. When the toy is around others it becomes a game instilling togetherness. Like a stoplight for walking at night, Tiny Trotters can be used in unison: when the toys are together they glow green; if they haven't connected for a short period of time they turn yellow, then red, indicating that the children should head back toward each other. With its red indication and bright LED, it can also be considered a safety feature for children who wander off.


 

https://github.com/aliciablakey/Pin-wheel.git

 

REFERENCES

http://www.seeeklab.com/en/portfolio-item/
https://m.youtube.com/watch?v=RKBUGA2s9JU https://www.teamlab.art/w/resonatingspheres-shi-mogamo/ http://www.cinimodstudio.com/experiential/projects/dj-light#videoFull https://www.arduino.cc/en/Tutorial/MasterReader. https://www.youtube.com/watch?v=t3cXZKBO4cw https://www.instructables.com/id/Arduino-Photore-sistor-LED-onoff/http://www.cinimodstudio.com/experiential/projects/dj-light#videoFull https://arduinomylifeup.com/arduino-light-sensor/ https://www.youtube.com/watch?v=CPUXxuyd9xw http://www.electronicwings.com/arduino/ir-communi-cation-using-arduino-uno https://www.instructables.com/id/Infra-Red-Obsta-cle-Detection/

 

 

 

 

 

Attentive Motions

Members: Olivia Prior, Georgina Yeboah and April De Zen
GitHub Link: https://github.com/alusiu/experiment-2-cnc


Figure 1.1: Georgina, Olivia, and April holding the finished Attentive Motions prototype
Figure 1.2: The assembly of the Attentive Motions prototype
Figure 1.3 : Attentive Motions turned on, outputting red light as a signal to be moved

Project description and overview

Attentive Motions (AM) is a device that prompts users for consistent play and motion. The device gives users feedback through visual and audio outputs that relate to how they are interacting with the object. If the device is left alone, it flashes red and outputs an alarm-like tone as a request to be moved. When the device is in motion, it outputs playful chirping noises and the lights flash bright, playful colours. There is a sense of play when engaging with the device which makes users want to continue the interaction.

Intended context and users

Attentive Motions is intended to initiate play with all ages. It is a simple device with signifiers that evoke playfulness regardless of age, gender or any other social boundaries.

Video of Attentive Motions

Production materials for each prototype

  • 1x Micro Arduino
  • 1x Adafruit BNO 055 orientation sensor
  • 2x 4.7 kΩ resistors (I2C pull-ups)
  • 1x 16 ohm speaker
  • 1x NeoPixel (20 lights per strip)
  • 1x Portable Power Bar Source
  • Protoboard
  • Wires
  • Hamster ball
  • Acrylic insert for mounting components

Ideation

In our initial ideation stage, we discussed common themes we wanted represented in our work: "critters", LED lights, responsiveness through immediate feedback to user interaction, and mapping pathways. We came up with three ideas to start. The first was the "sneaky sneaker", a device that would attach to your shoe and track the level of sound/noise you make as you walk. In this idea, a vibration or light would indicate when you were being too loud. The second idea was a "library assistant", a necklace that could sense the volume of your voice and indicate through vibrations if you were speaking too loudly in the library. Our team also thought this might represent how being quiet is preferable in certain social environments and pose the question of "why?". The last idea we thought through was based on how light can flow and diffuse through plexiglass.

The idea was to create objects that could sense passersby and light up sequentially to create a pathway in the dark for users. From here we discussed how to transfer light, and the different shapes our idea could take, one of them being a sphere. This third idea was the initial catalyst for our prototype, Attentive Motions.


Figure 2.1 (upper left): Team discussing potential ideas
Figure 2.2 (upper middle): April sketching ideas
Figure 2.3 (lower left): Olivia and Reza sketching designs for the casing of the prototype
Figure 2.4 (right): Initial sketches of our pathway idea

Our team consulted Reza, the Maker Lab technician, and he recommended that we look for pre-made plastic objects, such as cubes or spheres, to save time on creating them. We took his advice and perused shops in Chinatown for pre-made objects that would fit our criteria.

Decision

After disbanding and meeting again, we narrowed our ideas down to two projects: a sphere in continuous movement, and an interactive lit pathway. We were happy to run with either approach, but after some consideration we thought that three devices would not be enough to convey the possibilities of an interactive lit pathway system. From here we decided to pursue the idea of a sphere that always has to be in motion. This idea met our creative goals of using light and sound as feedback, and it could evoke a critter through the anthropomorphized form created by movement and user interaction. For this, our intended input was a gyroscope sensor that would detect whether the object was in motion, and our outputs would be sound, light and maybe vibration in response to the user putting the sphere in motion.


Figure 3.1 (upper left): Team assembling initial inputs and outputs
Figure 3.2 (upper right): Close up of speaker output assembled on a breadboard
Figure 3.3 (lower left): Team attempting to get the IMU sensor connected
Figure 3.4 (lower right): Initial design of Attentive Motions as a sphere

First Steps

Proof of concept

Our first step was to ensure that our inputs and outputs would work with our "always-in-motion sphere" idea. We first tested our speaker output with success, but struggled to get the IMU sensor to work with the Arduino Micro. After some guidance from professors, we realized that our microcontroller needed 4.7 kΩ resistors as pull-ups on the IMU sensor's I2C lines. After obtaining these resistors, and after much struggle, the IMU sensor worked.
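For reference, reading the BNO055 with Adafruit's library follows the pattern below; in our experience with the Arduino Micro, begin() only succeeded once the 4.7 kΩ pull-ups were in place on the SDA and SCL lines.

    #include <Wire.h>
    #include <Adafruit_Sensor.h>
    #include <Adafruit_BNO055.h>

    Adafruit_BNO055 bno = Adafruit_BNO055(55);   // default sensor ID

    void setup() {
      Serial.begin(9600);
      if (!bno.begin()) {   // fails if the I2C lines lack pull-up resistors
        Serial.println("No BNO055 detected, check wiring!");
        while (1);
      }
    }

    void loop() {
      sensors_event_t event;
      bno.getEvent(&event);                      // orientation as Euler angles
      Serial.print(event.orientation.x); Serial.print(" ");
      Serial.print(event.orientation.y); Serial.print(" ");
      Serial.println(event.orientation.z);
      delay(100);
    }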

Object design

After more discussion, we considered possible ways to enclose the device. Our initial design ideas involved getting various kids' play balls in different sizes and attaching the device to the outside surface. To emphasize the notion of a critter, we discussed placing fur on the outside of the surface and the device to elevate the fiction of the project. Some of the spheres we started to think about were beach balls and styrofoam balls.

Material list

Part of our object design conversation was the discussion of what other material we would need to source to produce our project. After sorting through our kit we decided we needed:

  • Louder lightweight speakers
  • Larger LED lights or NeoPixel lights
  • Possibly a switch
  • Power source
  • Spheres
  • Female headers

We also decided on a budget for our project, which settled at $25-30 per prototype. This would include the casing, power source, and any other parts we needed to produce our idea.

At Reza's suggestion, we went supply shopping for ready-made objects. Nothing we found was within our price range for the project; large hollow styrofoam balls were $30.00 each, which left no budget for other supplies. We decided to go to the dollar store to scope out spheres there, and purchased a bouncy ball to start testing our prototype.


Figure 4.1 (upper left): The team shopping for supplies
Figure 4.2 (upper right): Dollar store bouncy balls
Figure 4.3 (lower left): Hollow styrofoam sphere halves
Figure 4.4 (lower right): Team testing proof of concept with the dollar store bouncy ball

Product Journey Map

To better utilize the team's time, we created a map of the interactions and feedback for the prototype. To keep the device in motion, a person needs to move it. The team thought through what signifiers would need to be present for a person to understand the need to engage with the device. The idea was to make the sphere output a loud noise when still; this noise would indicate that immediate attention was needed. Beyond the noise, the team wanted to add other forms of feedback. To bring in visual cues, red flashing lights were added. Since the balls can be picked up, touch sensory feedback through vibration might also be added.


Figure 5.1: Process map

Sketches of Product Casing

Before making decisions on which materials to buy for the device casing, we drew out some sketches. The initial idea was to use a ready-made ball and adapt it for the device. Accessibility of the device was important, so cutting the ball open and sealing the device inside would not work. Instead, wiring the outside of the ball and creating a case for the Arduino on the bottom seemed like the best solution. The idea was to cover the whole ball in fabric to hide all the imperfections. Upon some testing, this oval shape did not work well.


Figure 6.1 (upper left): First assessment of what we would need to solder
Figure 6.2 (center): Connecting the speaker to the breadboard
Figure 6.3 (right): Sketch of the sphere with casing for the components
Figure 6.4 (lower left): IMU sensor connected to breadboard and Micro Arduino

Programming

Our initial programming challenge was determining when the sphere was in motion. After a consultation with Nick, we decided to compare the previous velocity sample to the current one. From there, we mapped the tone to the velocity: if the ball's velocity kept increasing, the tone would decrease, and if the ball was moving slower, the tone would get louder.


Figure 7.1 (upper left): Team discussing with Nick on how to calculate velocity and sample rates
Figure 7.2 (center left): Breadboard with IMU sensor and speaker attached onto the bouncy ball
Figure 7.3 (right): Chalkboard snapshot of the velocity calculations
Figure 7.4 (lower left): Georgina testing out speaker with mapped tone

IMU Sensor

Once we were able to get data from the IMU sensor, we needed to set up thresholds so the device could easily switch into another state. After some testing we decided on the following threshold matrix, sketched in code below:

1) Sphere not in motion: velocity reads 0
2) Sphere slowing: velocity less than 7
3) Sphere increasing velocity: velocity greater than 7
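A sketch of how those thresholds might drive the tone follows. readVelocity() is a hypothetical stand-in for the sampling described above (comparing the previous velocity sample to the current one), and the pin and pitch values are assumptions.

    const int speakerPin = 9;        // assumed speaker pin
    float previousVelocity = 0;

    // hypothetical helper: derives velocity from successive IMU samples;
    // defined elsewhere in the real sketch
    float readVelocity();

    void loop() {
      float velocity = readVelocity();

      if (velocity == 0) {
        tone(speakerPin, 1000);      // state 1: still, alarm tone asks for motion
      } else if (velocity < 7) {
        tone(speakerPin, 1200);      // state 2: slowing
      } else {
        // state 3: speeding up; pitch falls as velocity rises
        tone(speakerPin, map(constrain((int)velocity, 7, 20), 7, 20, 1000, 400));
      }

      previousVelocity = velocity;
      delay(100);
    }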


Figure 8: Initial circuit diagram made with Fritzing

Connecting the Speaker to Gyroscope

Once we had a metric for the device to recognize each state, we had to control the sound output. We were having a hard time getting a smooth rate or average velocity for the speaker. We mapped the tone to the velocity, but found through testing that our thresholds were not consistently providing results. Our testing was very detail-oriented: we would adjust the thresholds to determine how to map the tone between different values. This was not the most time-effective way of testing the correlation between tone and velocity, but it was the most accurate way of finding the thresholds.

Device Casing

We went as a team to see Reza, looking for ideas on how to construct a sphere, as we were worried our ball would not roll properly with the housing on the outside. Reza had the idea of finding a globe; he had one in the Maker Lab, but it was reserved for another project. We borrowed it for testing in hopes of finding three globes in Chinatown. We went searching for a globe with no luck, but we were able to use the borrowed globe to test how the device responds to being rolled.


Figure 9.1 (left): Globe Reza gave us in the maker lab
Figure 9.2 (center): Breadboard attached the globe with sensors and outputs
Figure 9.3 (right): Olivia assessing the data being returned from the IMU sensor when moved

We did find clear bowls that we thought we could melt and mould together. We brought them to the plastic lab to discuss options with the technician. When he saw the bowls, he did not think we could achieve a plastic casing without a mould. There was no time to make a mould, but he suggested picking up some hamster balls as an alternative. Taking his advice, the team went to a pet store to pick one up and see if it would work.

This turned out to be a great solution for our device's sphere. The hamster ball solved all of our design problems: it could be opened easily so that we could service our prototypes, and its clear plastic allowed for a nice diffusion of the light through the device. We initially picked up one hamster ball to test and, upon success, picked up two more with confidence.

LED outputs

After fine-tuning the mapping of the speaker tones to the velocity, our next step was to incorporate LED lighting as feedback for the sphere. We purchased addressable "NeoPixel" LED lights so that we could dynamically change individual pixels to create a "glittering" effect when the ball was moving; one of our teammates also had previous experience working with them. Because we had already defined our states (still, in motion, and slowing) when determining the tone, it was simple to make the LED light states correlate with the velocity of the sphere. Our defined states for the lights, sketched in code after the list, were:

  • Still - the lights would blink red
  • In motion - the RGB values of the LED lights were mapped to the XYZ values of the IMU sensor (R-X, G-Y, B-Z), dynamically representing the direction and motion of the sphere
  • Slowing - if the device was slowing, we gradually shifted the dynamically changing colours toward red by adding 50 to the R value
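A sketch of those three light states, using the Adafruit_NeoPixel library with an assumed pin, could look like the following; x, y and z stand for the IMU's Euler angles, and state for the motion state determined earlier.

    #include <Adafruit_NeoPixel.h>

    #define PIXEL_PIN 6       // assumed data pin
    #define PIXEL_COUNT 20    // 20 lights per strip, as listed above
    Adafruit_NeoPixel strip(PIXEL_COUNT, PIXEL_PIN, NEO_GRB + NEO_KHZ800);

    void setup() {
      strip.begin();
    }

    // state: 0 = still, 1 = slowing, 2 = in motion
    void showState(int state, float x, float y, float z) {
      // scale the IMU's 0-360 degree axes into 0-255 colour channels
      int r = map((int)x, 0, 360, 0, 255);
      int g = map((int)y, 0, 360, 0, 255);
      int b = map((int)z, 0, 360, 0, 255);

      if (state == 0) {               // still: blink red
        r = 255; g = 0; b = 0;
      } else if (state == 1) {        // slowing: push the colours toward red
        r = min(r + 50, 255);
      }                               // in motion: R-X, G-Y, B-Z as mapped

      for (int i = 0; i < PIXEL_COUNT; i++) strip.setPixelColor(i, r, g, b);
      strip.show();
    }

    void loop() {}   // showState() would be called from the main sampling loop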


Figure 10.1 (left): Team talking with Reza about how to attach our components in the hamster balls
Figure 10.2 (upper middle left): Our first attempt of securing the middle insert in hamster ball
Figure 10.3 (middle right): Georgina and Olivia testing the NeoPixels
Figure 10.4 (right): Hamster ball with the components placed inside
Figure 10.5 (lower middle left): Two different sizes of hamster balls at the pet store

Vibration Motor

Adding the vibration motor to the main code was not too difficult. Unfortunately, its wires were very flimsy, and there was concern that they would break easily when users played with the device. We tried soldering the motors to make the connections sturdier, but it was not an easy task. The interaction had also evolved from picking up the device to rolling and kicking it on the floor. With that in mind, the vibration motors were left out of the final prototype and placed under "next steps" for the project.

Final Prep of devices

The Fritzing diagram was finalized before we committed the circuit to the protoboards. Upon evaluating the remaining tasks, the best use of time was to divide and conquer: two team members began soldering the final devices using the Fritzing diagram, while the third met with Reza to shape acrylic inserts to secure the device to the centre of each hamster ball. This made the final assembly quite efficient. Upon testing the final circuits, only two of the three prototypes were working; after review, there was no clear reason why the third device was not working.


Figure 11.1 (left): Photo of finished circuit alone
Figure 11.2 (right): Photo of three finished circuits being mounted on acrylic inserts

Final circuit diagram


Figure 12.1: Final Fritzing diagram of Attentive Motions prototype

Reflections

Upon reflection, the team is very happy with the way the final prototype turned out. There are many ways in which this idea could be scaled up as a final product. Each prototype plays the same tones, future versions have the potential to incorporate multiple tones to create depth in sound or even music. The vibration motor could also be added back in if the interaction moves back to holding the sphere. The vibration motor could also be used to emphasize the “critter” like qualities of the device. For instance, if the device is still for a certain amount of time, it could try and “shake” itself through vibration in an attempt to get attention and start moving again.

The work during the development stages was always very organized and strategic. Once the vision was established, the team experimented together until satisfied with the final product. Each member was supportive of the others and very good at leveraging strengths and learning in order to reach the final goal.

During the final critique, we received great feedback on the prototype. Some classmates did not see a clear distinction between the ‘irritated’ state and the ‘happy’ state, but they did enjoy kicking the ball back and forth to each other. A next step would be more user testing to see whether the interaction is preferred without the ‘irritated’ state that the team originally set out to achieve.

References

M. (2016, March 26). 9 amazing projects where Arduino & Art meet! Retrieved October 29, 2018, from http://arduinoarts.com/2014/05/9-amazing-projects-where-arduino-art-meet/

(n.d.). Retrieved October 29, 2018, from https://www.arduino.cc/en/Tutorial/TonePitchFollower?from=Tutorial.Tone2

Hughes, M. (2017, March 22). Capturing IMU Data with a BNO055 Absolute Orientation Sensor. Retrieved October 29, 2018, from https://www.allaboutcircuits.com/projects/bosch-absolute-orientation-sensor-bno055/

Adafruit NeoPixel Überguide. (n.d.). Retrieved October 29, 2018, from https://learn.adafruit.com/adafruit-neopixel-uberguide/arduino-library-installation

Lloyd, P. (2015, October 13). Make an LED Light Strip AHRS with Arduino and MPU-6050. Retrieved October 29, 2018, from https://www.allaboutcircuits.com/projects/make-an-led-light-strip-ahrs-with-arduino-and-mpu-6050/

Emoji-Bot Mk.3

aka Emotion State Machine

by Veda, Ladan, and Tyson

Presentation Slides (visibility limited to OCAD U)

GitHub

The Emotion State Machine is the third iteration of our assistive communication device: an emotion display activated by capacitive touch input. The device is intended to facilitate verbal and emotional communication for limited-mobility users and non-verbal interpersonal situations.

 

The Concept

One of the main areas of commonality among our project suggestions was our interest in using Arduino technology as a communication tool between individuals. Some of the critical topics we wanted to explore included:

  • Accessibility / assistive / adaptive technologies: devices for people with disabilities, including the process used in selecting, locating, and using them.
  • Affective computing: the study and development of systems and devices that can recognize, interpret, process, and simulate human affects (emotions).
  • State machines: a programming technique that allows a device to operate in one of a set number of stable conditions depending on its previous condition and on the present values of its inputs.

We envisioned that the device would be used by low-mobility users or those who may have difficulty communicating verbally. The device is intended to facilitate more straightforward and direct emotional communication that clears up confusion and starts a dialogue between user and receiver, the touch input enabling a more natural interaction than conventional push-button triggers.

concept

The Machine

Development of the project was split between coding, acquiring parts, developing the input device, and manufacturing the output display in order to modularize work between individuals. Ladan and Veda worked together to realize the input device while Tyson took on the work of fabricating the output display devices.

The coding process for the input started with the basics. The input device was originally designed with flex-sensor-triggered input in mind, but the idea was dropped to keep down costs. We took the tutorial on lighting an LED with a button and continued to add layers of complexity: we first started with multiple lights, then moved on to multiple buttons, and when Nick gave us advice about capacitive touch we were able to reuse the code we had written for the multiple buttons, just adding more variables and if/else statements. The touch sensor would be labelled with the emotions that the user is feeling and would start a dialogue with the individual on the receiving end of the affect signal. We used a ready-made soap dish to contain the touchable surfaces and matching symbols corresponding to the individual faces represented in our final prototype.
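A minimal sketch of that buttons-to-LEDs pattern, generalized with arrays instead of a long chain of if/else statements; the pin numbers are illustrative, and the touch breakout is assumed to pull one digital output LOW per touched pad:

    // Five touch pads, five emoji LEDs: light the LED that matches the pad
    const int touchPins[5] = {2, 3, 4, 5, 6};
    const int ledPins[5]   = {8, 9, 10, 11, 12};

    void setup() {
      for (int i = 0; i < 5; i++) {
        pinMode(touchPins[i], INPUT_PULLUP);
        pinMode(ledPins[i], OUTPUT);
      }
    }

    void loop() {
      for (int i = 0; i < 5; i++) {
        // A LOW reading means the pad is touched
        digitalWrite(ledPins[i], digitalRead(touchPins[i]) == LOW ? HIGH : LOW);
      }
    }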

The final version of the project came as a result of several technical limitations we encountered while developing our relatively ambitious first prototype. We initially intended to create an articulated ‘Emoji-Bot’ with a 3d-printed chassis and a poseable arm that could demonstrate arm and hand gestures. We chose to pursue the modelling and 3d printing of the body early in the project’s development, as the extrusion process was expected to take at least 3 days for all three robots. Joints in the arm would be rotated by servo action, controlling the shoulder rotation and position as well as the elbow rotation. The servos were to be connected via tensioned fishing wire, and the Arduino and circuitry for the output were to be housed within the droid’s body. The head was intended to feature five different expressions that could be readily adjusted through full-rotation servo motion, with separate LED lights embedded within the head. The design of the robot was created in Rhinoceros over the course of a day, but the process of attempting to print a single iteration took three days and ultimately failed due to the limited availability of printing resources at the school, the lack of purchasable material for the in-house 3d printers near the school, and the high overhead cost and time required to print the project elsewhere in the city.

With two days remaining to prepare a working project, we scaled back to the design of our second prototype, which translated the emoji display from a 3D to a 2D fixture and reduced the articulated joints to one poseable hand. Instead of 3d printing, we opted to laser-cut the elements of the casing; in terms of time, preparing the laser-cut components took about 1/5th of the time invested in preparing the 3d components. This gave us the opportunity to quickly cut, assemble and test the project over the course of a day, but it wasn’t until the day before submission that we discovered the electrical design was insufficient to properly power and operate all the connected devices (caused by a large amperage load). Had we tested our circuitry prototype earlier, we might have avoided this issue entirely.

This led us to the design of our final prototype, which maintained the emotion display component but forwent the servo action. Rather than displaying emotion selections via servo rotation, the emojis are displayed simultaneously and highlighted via coloured LEDs. The intensity of the touch input directly influences the brightness of several of the LEDs by means of pulse width modulation.
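A minimal sketch of that PWM idea, assuming the touch intensity arrives as an analog value on A0; the pins are illustrative, not the project’s actual wiring:

    const int touchPin = A0;
    const int ledPin = 9; // must be a PWM-capable pin

    void setup() {
      pinMode(ledPin, OUTPUT);
    }

    void loop() {
      int touch = analogRead(touchPin);             // 0-1023 touch intensity
      int brightness = map(touch, 0, 1023, 0, 255); // rescale to PWM range
      analogWrite(ledPin, brightness);              // duty cycle sets brightness
    }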

img_6353

Daily Project Schedule

October 16th, 2018

Our first meeting was to introduce ourselves and share the interests, skill sets and expertise that would be beneficial to the execution of the project. The following was discussed.

Ladan – Brought strategic and design expertise to the table, experience with organizational skills, graphic design skills and ability to ideate well and think critically.

Tyson – Technical & Strategic expertise. Brought experience with conceptualization, 3D printing, laser cutting and other methods of fabrication. He also has experience with computer programming, and has worked with Arduino programming before on passion projects.

Veda – Brought experience with design thinking, project planning, graphic design and ability to work well with physical handmade fabrication.

Once we discussed our strengths and weaknesses, we took the opportunity to reflect upon what we had learnt in class that day and how to pursue our interests in the technology of Arduino. We charted out a game plan that would tackle multiple parts of the project.

 

  • The first was a basic agreement for a quick solo crash course in Arduino basics.
  • Second, Tyson was kind enough to help us understand what was in our kits and how we could potentially use each of the parts for our project.
  • Third was a discussion on, and appreciation of the project brief from a conceptual standpoint, so we could conduct some independent research in the areas of social interactions and communication and brainstorm at the following meeting.

We concluded the meeting with a way forward: the first step was to come back more equipped to work with Arduino, and the second was to ideate on potential concepts for our multi-person project.

 

October 18th, 2018

Once we had covered all the groundwork discussed at the first meeting, we came back to the second meeting with a whole bunch of ideas to share with each other! We first discussed the ideas that we had brought with us, and then took a deep dive into a verbal brainstorm to see if anything new came our way. Some of the great and not-so-great concepts that we discussed were as follows:

  • Sneeze detector: A microphone input that reacts only to the sound of a sneeze and creates an interesting output.
  • Open lace detector: A device that alerts the wearer if their laces get untied.
  • Simon says game: A sensor that tracks body movements of the user based on commands from the computer.
  • Obstacle Course: An invisible obstacle course with a few sensors to detect height, movement and proximity – based on commands from a wearable.
  • Motion tracker: A simple wearable light that changes colour depending on the motion of the wearer. It can respond to speed, height and frequency.
  • Digital Twister: A wearable that directs a group of users on a Twister board, with a carefully coordinated list of movements.
  • Dance Dance Revolution: An LED screen wearable that displays arrows to guide the user’s movements. Users dance in a sequence to a song directed by the device.
  • Morse code game: A way for users to travel back in time. Using the push buttons and an LED display, users can decode a message prescribed by us to win a prize!
  • Fitness tester: A wearable that can track your movements on certain exercises, and display your level of fitness.
  • Sumo Robots: Controlling robots with gestures, each fighting against another to push outside of a ring
  • Microbots in a Maze: Using a single servo to control a v-shaped bot through a maze, similar to scientific tests on mice. Collaborative puzzle environments.

20181018_183727

An interesting idea that came up after the verbal brainstorm was an Emotion Detector, with an LED display wearable. While similar to our final concept, this was the first instance of the idea.

 

 

wearables

After looking at the wearables, we were not convinced by the idea. We wanted to elevate it a bit further and began thinking of our final concept, which had a more unique output compared to a simple LED screen. We decided to pursue the idea of a 3D-printed robot with coloured eyes and moving arms to convey five key emotions that humans feel.

Once we agreed upon the idea, we decided to allocate work roles for execution and decided to touch base daily to give each other updates on the progress for the day.

Veda & Ladan were assigned Input circuitry and design for the input panel.

Tyson was in charge of manufacturing the 3d body. Due to the lengthy amount of time required to print the objects, work began the following weekend with the intention of having the parts printed over the course of the week.

image-3-copy

October 20th, 2018

Since Veda and Ladan were both fairly new to Arduino, the process of circuitry began with the very basics. Our first order of business was to understand basic LED outputs as tried in class with Kate. We first experimented with a single LED and then multiple LEDs, along with the code for each, and implemented the blinking mechanism to understand how it works.

Meanwhile, Tyson designed the printable model for the Makerlab Lulzbot printers in Rhinoceros 5. He reviewed online information regarding belt drives and speculated moving the limbs via rubber bands, but ultimately chose to use fishing line in tension to operate the components.

image-6-copy

image-9

emojibot

 

October 22nd, 2018

While in class on Monday, we had the opportunity to discuss our idea with our professor Nick, and collectively come up with a game plan for the following week to complete the prototype.

Minutes from the session:

  • Our first task should be to create a low-fidelity prototype that functions the same way as our final project – which is fundamentally a button input to an LED output. He wanted us to try this with three buttons attached to three LED outputs respectively.
  • He suggested that we use a capacitive touch sensor to trigger the inputs as opposed to just push buttons, as that can make any surface conductive and would be an interesting twist on the input mechanism.
  • He wanted us to conquer the prototype before going out to purchase the sensors and solder them – which was great insight to begin work with.
  • He helped us get to the guts of the hardware. We had developed an idea for a state machine, and everything that we needed to focus on revolved around our experience with it.
  • He proposed adding a layer of complexity to the inputs by giving some thought to the intuitive movements of fingers associated with emotions, such as speed and frequency for anger, intensity for sadness, and other such instances.

 

After our session with Nick, we focussed on creating a prototype with the buttons and LED outputs, and it was a success. The code was fairly simple in this case, essentially an extension of the basic button example to three inputs and three outputs.

 

img_6307 img_6308

 

Once we were successful with our prototype, we made a trip to Creatron to purchase the 5-key capacitive touch sensor made by Adafruit for Arduino.

The first task at hand with the sensor was to solder it. This was not easy for us: with no prior experience in soldering, we had a few failed attempts and one burnt board before we got it right. The biggest challenge with the touch sensor was realizing that the field around each input tip was very sensitive, so the soldering had to be extremely precise in order to avoid overlap between two different input points.

During this period, Tyson worked alongside Reza, the Makerlab technician, to get the 3d-printable parts onto the printer. Multiple colours of material were used due to the limited availability of in-stock material for the printers. As the first parts were printed, several revisions to the 3d model were applied to ensure that the device could be easily assembled and accessed by hand and accommodate the hardware. The complexity and thickness of the model were also reduced in order to improve print times and reduce material costs. During wait times, Tyson also developed the working code for the first prototype: each input button would operate a different servo, with one dedicated to changing the emotion state. The remaining buttons would adjust the servo angle. Whenever a button was released, it would reverse the direction of movement, allowing for dynamic arm expressions.
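An illustrative sketch of that control scheme for a single arm joint, with assumed pin numbers; the actual prototype drove several servos, including one dedicated to the emotion state:

    #include <Servo.h>

    const int armButtonPin = 2;
    Servo armServo;

    int armAngle = 90;
    int direction = 1;      // +1 or -1; flipped each time the button is released
    int lastReading = HIGH;

    void setup() {
      pinMode(armButtonPin, INPUT_PULLUP);
      armServo.attach(9);
      armServo.write(armAngle);
    }

    void loop() {
      int reading = digitalRead(armButtonPin);
      if (reading == LOW) {
        // While the button is held, sweep the arm in the current direction
        armAngle = constrain(armAngle + direction, 0, 180);
        armServo.write(armAngle);
      } else if (lastReading == LOW) {
        direction = -direction; // reverse on release for dynamic gestures
      }
      lastReading = reading;
      delay(15);
    }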

image-15 image-14 20181022_212649

img_627520181022_112740 20181022_161700

October 23rd, 2018

 

Once we had the soldering in place, we focused on getting the input for the touch sensors right. We attached the sensors to one LED at first, to understand how it responds and works.

With research and advice from our cohort, we figured out that the input for the sensor requires a 4.7MΩ resistor to work. We experimented with one input key and one LED light.

Figuring out the code was challenging at this point, as there were very limited resources available online for this particular sensor, and we were unsure if it needed a specific library or input command to function well. Combining the tricky circuitry with the code, we finally got the flow right for a single input key and an LED light.

We also proceeded to complete soldering the extra touch sensors, so as to be able to create the final circuitry for the three devices in sync with each other.

 

img_6284

img_6285

At this stage, we realized that printing 3 copies of the robot was an ambitious task, as available material for the printers was minimal. Tyson went out to buy material locally; on the recommendation of the Makerlab technician, we acquired filament from Creatron that we later learned was incompatible with the working printers at the graduate school (incorrect filament size). Several online sources, including the Lulzbot user manual and forums, suggested that it may be possible to print 1.75 mm material on the Lulzbot Taz, but in practice the material only printed adequately for the first 15 minutes of operation.

20181023_184539

This roadblock resulted in the design of our second prototype. Inspired by paper colour wheel selectors, the second prototype translated the 3d head of the robot into a 2d wheel, maintaining the functionality of the servo operation but reducing the action to a 2-dimensional plane. Tyson prepared laser cut files for matboard overnight in order to accelerate the manufacturing process.

image-16 image-17

October 24th, 2018

Once we had a single flow working, we focussed on multiplying this flow into five inputs and five outputs. Later in the day we had our consultation session with Nick and Kate, both of whom advised us to try out the servo motor with the touch sensor now that we had the basic mechanics in place. And that was the next piece of the puzzle for us.

We also spent some time in Chinatown sourcing the wooden platform and copper rings that we had conceptualised for the fabrication of the input box. We did some research and confirmed that copper is a great conductor of electricity, and we thought the rings would be an interesting way of driving inputs from users.

image

20181024_140804

Tyson and Ladan spent the evening working with the laser-cut materials prepared in the morning to assemble the second prototype, pulling together the circuitry in the process. The behaviour was bizarre: the capacitive sensor would fluctuate between values and the servos would only periodically respond to the capacitive sensor input. We would later find out that the cause was an excessive amperage load: we needed to supply power to the components separately from the Arduino’s supply.

screenshot_20181026-134910_gallery end

October 26th, 2018

On this day, after trying tirelessly to get the servo motor to work with the touch sensor, we decided to call it quits on the servo. We shifted our focus to a leaner version of the original concept of the emotional state machine, which was the conductive input with the copper rings, along with the LED output of the five lights. We decided to create an intricate installation with some of the existing parts from our previous prototype.

We had the advantage of a perfectly working circuit by this point, which was replicated across the 3 boards. We wanted to minimise the soldering and restricted it to the LEDs and resistors.

img_6337

Tyson drew up a vector of the installation along with a 3d render, which perfectly housed the breadboard and power outlets for convenience. We had these laser cut at the Maker Lab with help from Reza.

  img_6341

Thereafter we focussed on assembly and installation of the final piece. We began with soldering the LED lights, and then putting all the pieces of hardware and craft together.

img_6344

20181026_113016 20181026_113019 20181026_113022

 

Final input details: Although we tried implementing the copper coupling pieces we intended to use for the input device, they were difficult to connect with the box and hold in place. Instead, we opted for a simpler version of the input.

 

img_6373

Materials used: Wooden soap dish, foam board base, printed emotions and copper tape.

Method: We used the holes in the soap dish to route the wiring from the output box to the input interface surface. We then made holes in the foam board for the wires to pass through, stuck the foam board to the soap dish, and pasted the emotions onto the input points. The last step was pasting copper tape along the tip of each wire for a conductive input.

 

Final Output details:

Materials used: Foam board, screwdriver, vellum, black PVC sheet, masking tape and electric tape.

Method: First we took the existing discs of the faces that we had from our last prototype and pasted the vellum in the areas that we wanted the light to penetrate through.

Then we focused on attaching the wires of the LEDs to the breadboard securely. Thereafter we stuck the LEDs on the centre of each eye to maximise the light output. Once we had all the eyes in place, we assembled the basic structure of the installation and inserted the breadboard. We held the emotion disc in place with a screwdriver. We kept testing along the way to ensure that power was supplied to all the lights without compromise. Thereafter we connected the input wires to the input port through the hole we created for it at the back of the board, and lastly closed the box with black PVC paper on top to conceal the circuitry.

20181028_141020 20181028_141053

Final Circuit:

The final circuit consisted of five input keys from the touch sensor connected to the Arduino at PWM pins, with a 4.7MΩ resistor on each input wire to regulate the flow of current.

 

On the output front, power wires from the LED lights connected to the analog side of the Arduino, with a 330 ohm resistor for each light to regulate the flow of current.

 

circuitry-2

 

Overall Reflections

Veda:

This project was extremely challenging for me, and in retrospect I feel like we should have opted for a simpler outcome, but it was a rich learning experience in terms of project management, time management, role allocation and resource budgeting. I still commend the team and the effort we made in trying something ambitious. In future, I would plan leaner and allocate more time to debugging both hardware and software.

Ladan:

The project had a high learning curve for me, in terms of working with the Arduino as well as the industrial design and fabrication aspects. I have mostly worked as a digital designer and most of my projects were on digital platforms. Learning from both Veda and Tyson was helpful, as they are both experienced in areas that I wasn’t. We had an ambitious project from the start, which made us feel we were up against time from the beginning. Stronger project management and group cohesion would have made the planning and execution easier, as would troubleshooting and testing separate parts earlier, and a more direct and cohesive concept. Moving forward, I will take the skills I learned in the project (fabrication, Arduino) to support a strong conceptual idea.

Tyson:

The biggest issue with our project scope was that we went into the project expecting to assemble the pieces as we went, instead of testing out the individual elements that would make the project successful. Had we tested the servos and the capacitive touch sensor working in tandem early in the project’s development, we likely would have succeeded in producing our second prototype. We also made several group administrative errors, such as poorly coordinated scheduling, communication, and not properly reviewing the project outline early in the project’s timeline. While projects may be envisioned whole, they need to be built up as individual working components to avoid wasting time. Moving forward, I hope to elicit a more collaborative engagement with future teammates and spend more time focusing on learning before doing.

 

Still holding out on getting that robot printed out, though! 🙂

 

Works Cited:

  1. An Emotion Robot For Long-distance Lovers
    Zoe Romano – https://blog.arduino.cc/2014/05/05/an-emotion-robot-for-long-distance-lovers/
  2. Moodbox Makes You Play with Emotions For Perfect Ambience
    Arduino Team – https://blog.arduino.cc/2016/03/29/moodbox-makes-you-play-with-emotions/
  3. The Interactive Veil Expressing Emotions with Lilypad
    Zoe Romano – https://blog.arduino.cc/2013/09/12/the-interactive-veil-expressing-emotions-with-lilypad/
  4. 21 Arduino Modules You Can Buy For Less Than $2
    https://randomnerdtutorials.com/21-arduino-modules-you-can-buy-for-less-than-2/
  5. Arduino Forum – Index
    http://forum.arduino.cc/
  6. Arduino Uploading Error Code?
    jim_reed -codewizard58 -chris101 -Bibi – http://discuss.littlebits.cc/t/arduino-uploading-error-code/21875
  7. Tree Of Life (arduino Capacitive Touch Sensor Driving Servo Motor)
    Instructables – https://www.instructables.com/id/Tree-of-Life/
  8. Interactive Projects
    https://idl.cornell.edu/projects/
  9. Affective Computing: From Laughter to IEEE
    Rosalind Picard – IEEE Transactions on Affective Computing – 2010
  10. Data As Art
    https://www.cis.cornell.edu/data-art
  11. Robotic Arms
    https://www.robotshop.com/ca/en/robotic-arms.html
  12. Belt Drives: Types, Advantages, Disadvantages
    https://me-mechanicalengineering.com/belt-drives/
  13. Assistive Devices For People with Hearing, Voice, Speech, or Language Disorders
    https://www.nidcd.nih.gov/health/assistive-devices-people-hearing-voice-speech-or-language-disorders
  14. Gently Used Marketplace
    https://canasstech.com/collections/gently-used-equipment-marketplace
  15. Lulzbot Taz 6
    Lulzbot – https://www.lulzbot.com/store/printers/lulzbot-taz-6

Panic Mode

Panic Mode
By Omid Ettehadi, Lauren Connell-Whitney and Tabitha Fisher

Overview:

“Panic Mode” is a multi-person experiment that gives a physical and sensory form to the experience of human anxiety and introversion. Starting from the question “What happens when you go into panic mode?” we built a wearable Arduino device that measures the proximity of the people around you. When someone gets too close the plastic collar puffs up like the neck of a frightened animal and a mood indicator flips from a calming blue colour to an aggressive red.

The object itself comes in two parts that work together. An Arduino Micro board and its components are housed inside a circular box that is worn as a necklace draped along the chest. A mood indicator is painted on the bottom half of the front panel with a colour that ranges from blue (calm) to purple (in between state) to red (panic mode). A white arrow points to the current state, determined by data picked up by an ultrasonic sensor placed above to measure proximity. The second wearable is a plastic collar connected to two fans, one for inflation and another for deflation, and hidden inside is a string of red LED lights.

When worn, the ultrasonic sensor gauges the distance between the user and other people in the room. A safe distance is determined within the code, and when someone crosses that threshold the mood indicator shifts closer to red. In this scenario the threatening figure is given a visual warning that they are invading the user’s space and has the opportunity to either step away and neutralize the situation or continue their advance. If they choose to press forward, both the mood indicator and the string of LEDs flip to red and the plastic collar puffs up in an act of defence. To restore calm, the threatening figure will need to back away to a safe distance, at which point the deflation fan will kick in, the red LEDs will switch off and the mood indicator will turn back to blue.

Within the code the measurement of safe distance resets after each moment of full panic. Human moods can be unpredictable and sometimes we must approach with caution. In Panic Mode the responsibility to regulate emotion falls on both the wearer and the person initiating the distress. In order to restore calm both people will have to empathize with each other’s position and negotiate a comfortable distance that works for everyone.

Process:

We began ideating by talking about all of the things we as a group were interested in, and came up with several ideas, all quite different but generally relating to how technology changes a person’s initial assumptions about output.

We talked about signalling and how technology can be used as an agent for human emotion, what this meant for the viewer and how this changes the interaction. Some initial ideas for making included an emoji container that could be used as a playful way to pass notes in class. It is interesting how much overlap there was in class with emoji use. It seems that many of us are examining how communication happens and why this particular mode has become the popular way of sending small thoughts to each other.

line3

That being said, we decided not to pursue the emoji device. We began talking about how a single human reaction could be read positively or negatively, how the reading was in the eye of the beholder, and how an interaction has two sides to it. Personal space was also something we began examining: how the concept differs for all people depending on several factors; culture, relationship, comfort, mood. We had all done an exercise in another class that sparked a conversation about the comfort of touching each other and the levels of personal space that we each had. This was the catalyst for the Panic Mode object. In some initial ideas we had spoken about using animal sounds as an alert, which led us to talking about how different animals display discomfort or aggression, which then led us to inflatables.

New rabbit hole: inflatables. We all loved this idea! Though the creation of an inflatable depends entirely on airflow, so began a deep dive into all the fans of the non-Amazon world that were available to us for under $15. We naively assumed that the fan would be the least of our worries.

However, this was not the case. We tried a variety of fans around that price point; in the end, all of them turned out to be unable to spin in both directions. This was an issue, since we needed inflation and deflation as mechanisms to display calm and agitation. More intensive research produced no better idea than to use two fans (for the price point we had initially budgeted). Beyond the initial research, we found that no one working with inflatables had actually found a cheap way to produce a product that self-inflated.

img_3198

img_3203

Next we began talking about the object to hold the mood indicator: a container to house the circuit board, servo motor and ultrasonic sensor. The idea of a pre-made container was enticing… of course. So we took a trip to Chinatown for the wonders of pre-made things that could be repurposed. The container we settled on could be opened and closed easily and fit the PCB and all of our mechanisms nicely. It was a prefab bug collector for kids. We only had to drill holes for the ultrasonic sensor and the servo motor, so a trip to the Maker Lab was in order. We began hand-drilling holes in stages because the hard plastic was very brittle. In speaking with Reza, we found that the drill press would solve our problems. The drill press was a dream, and we are all looking forward to using more of the capabilities of the Maker Lab at a later point.

line2

Next we took to fashioning a more geometric shape for the inflatable. We began by ironing together 2 pieces of plastic as a test, then drew and shaped a paper pattern to make the final inflatable. Overall, the experiment did not turn out as planned but was a good test run for further development. We definitely learned how this type of plastic shapes itself when inflated.

line1

At the end we put all our findings together and came up with these final results:

line4

Production Materials:

Github Project Link : https://github.com/Omid-Ettehadi/PersonalSpace

One thing we wanted to get around was the quantification of emotions found in most digital designs. We wanted our emotional indicator to be as close to reality as possible. In order to do that, we thought about having 2 indicators: a binary one, showing only angry or happy, and another that could provide more stages within our happy or angry states. To check how each component related to our project worked and how we could use it, we tested each individually and then tried to combine them. The servo motor played an essential role in our project, so to find out its capability we tested it first. We programmed it to start from 15 degrees and move up by 30 degrees, showing a different angle at different times.

servo-motor_bb
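A minimal sketch of that servo test; the servo pin and hold time are assumptions:

    #include <Servo.h>

    Servo moodServo;

    void setup() {
      moodServo.attach(9);
    }

    void loop() {
      // Step through 15, 45, 75, 105, 135 and 165 degrees
      for (int angle = 15; angle <= 165; angle += 30) {
        moodServo.write(angle);
        delay(1000); // hold each position for a second
      }
    }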

We then tested the ultrasonic sensor. We needed a sensor to get a reading on the proximity of objects, and as we all had an ultrasonic sensor in our kits, we checked it. We programmed it to report the distance of objects from the sensor in both centimetres and inches.

ultrasonic-sensor_bb
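A sketch of that distance test, assuming an HC-SR04-style sensor with trigger on pin 7 and echo on pin 8 (the pins are assumptions):

    const int trigPin = 7;
    const int echoPin = 8;

    void setup() {
      Serial.begin(9600);
      pinMode(trigPin, OUTPUT);
      pinMode(echoPin, INPUT);
    }

    void loop() {
      // Fire a 10-microsecond trigger pulse
      digitalWrite(trigPin, LOW);
      delayMicroseconds(2);
      digitalWrite(trigPin, HIGH);
      delayMicroseconds(10);
      digitalWrite(trigPin, LOW);

      // Convert the echo time (microseconds) to distance
      long duration = pulseIn(echoPin, HIGH);
      float cm = duration / 58.0;      // round trip at the speed of sound
      float inches = duration / 148.0;

      Serial.print(cm);
      Serial.print(" cm, ");
      Serial.print(inches);
      Serial.println(" in");
      delay(250);
    }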

We then connected the two circuits together and programmed them so that, depending on the distance of the closest object, the servo moved to one of 6 angles. The program also printed the angle and distance to the serial monitor. One thing we found was how jerky the movements of the servo were: dividing the range into only 6 states reduced the number of levels of emotion the servo could show, so we decided to find a way to make the servo move through every single angle.
motor-sensor-combination_bb

We then started playing with light as a binary indicator of emotion, which, added to our previous steps, could give us a much bigger range of emotion to play with. We added an RGB LED to the program so that if an object got closer than the safe distance, the LED would turn red and the safe distance would increase, and if an object moved farther away than the safe distance, the LED would turn green and the safe distance would decrease. The servo motor was also set to show the safe distance. But the LED was not bright enough to be very visible, and it didn’t get the feeling across. If we had wanted to use a brighter LED, the device would have been much more expensive to build.

rgb_bb

At this stage we were sure that we wanted to work with fans as a binary indicator for emotional states, but we were still looking for a fan that could do the job. We first worked on the algorithm, still using the LED instead of the fan. Initially we kept track of the state using a variable: if an object was farther than the safe distance, we added to the variable; if not, we subtracted from it. The value was then mapped onto the six states, and based on each state the servo would move to a specific angle and the LED would turn a specific colour. A sketch of this approach appears below.

trial-algorithm_bb
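A hedged sketch of that state-tracking algorithm; the thresholds, pins and step sizes are illustrative rather than the project’s actual values:

    #include <Servo.h>

    Servo moodServo;
    const int trigPin = 7, echoPin = 8;
    const int safeDistanceCm = 60;
    int mood = 500; // running value: 0 (agitated) to 1000 (calm)

    long readDistanceCm() {
      digitalWrite(trigPin, LOW);  delayMicroseconds(2);
      digitalWrite(trigPin, HIGH); delayMicroseconds(10);
      digitalWrite(trigPin, LOW);
      return pulseIn(echoPin, HIGH) / 58; // microseconds to centimetres
    }

    void setup() {
      pinMode(trigPin, OUTPUT);
      pinMode(echoPin, INPUT);
      moodServo.attach(9);
    }

    void loop() {
      if (readDistanceCm() > safeDistanceCm) {
        mood = min(mood + 10, 1000); // object is clear: calm down
      } else {
        mood = max(mood - 10, 0);    // object too close: get agitated
      }
      int state = map(mood, 0, 1000, 0, 5);       // collapse to six discrete states
      moodServo.write(map(state, 0, 5, 15, 165)); // servo angle shows the state
      // an RGB LED colour per state would be set here
      delay(100);
    }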

It was very difficult to find a strong enough fan at our price point in the time we had. All the fans we were able to find had a circuit built into them, making them work in only one direction, and we needed something that could do both the inflation and the deflation. One option was an air pump, but the cheapest one was 30 dollars (more than our budget), so we decided to stick with fans. Fans can’t handle pressure the way air pumps can, but they do move air into a bag. To get around the need for a fan that moves air both ways, we tried stacking two fans on top of one another, but because of how fans are designed, they only allow air to flow easily through them in one direction, so this greatly reduced the power of the second fan. We decided instead to put a fan at each end of the bag. This made the design more symmetric, but also heavier. The fans we found were not very strong, so we needed to give them extra time to fully inflate or deflate the bag. The servo was set to keep track of how long an object had been in the clear or inside the personal space of the user, and the air bag was used as a physical indicator, so that when someone crossed the barrier it would light up and fill with air. The idea was to have a visual display of the user being choked by the inflatable, which in turn would cause the person on the other end of the interaction to move away out of sympathy.

The board in production in its many stages of iteration:

line5

Product List:

components-list

Schematic:

bb-view schematic

 

Related Works:

Our initial interaction with the idea of inflation came from several sources, one of which was Kate Hartman’s work on the Inflatable Heart: an external organ that you can inflate or deflate to show and communicate your emotional state.

fv03zhnfmygvds3-large

In the early stages of development we looked to Rafael Lozano-Hemmer’s work Vicious Circular Breathing as an exploration of breath, anxiety and social discomfort. The crinkling effect of the paper bags in this project demonstrates the power of sound when working with physical materials.

Vicious Circular Breathing

In order to get a better idea of how to use inflation in our design and how to automate the process, we looked at Hövding, the inflatable helmet. This is something cyclists wear instead of a traditional helmet. If they get into an accident, a helmet made of air immediately inflates around their head, protecting it from hitting the ground.

3315r_2

Another project we looked at was the RE-Inflatable Vest by Ziyun Q, which uses a micro water pump to fill the vest with air. It was designed for office workers who constantly hunch over their laptops. The vest fills up with air every 20 minutes, forcing the user to fix their posture in order to let the air out of the bags.

fm5hq1ti4sccscu-large

Reflections:

Our main takeaway from this project was an understanding of the limitations of physical materials. In theory everything can work but sometimes components fail to operate as expected. With a limited budget and a quick turnaround we were unable to find the ideal fan for our inflatable so we settled on what was available, which did not provide the desired outcome. We also found that elements of the project worked separately while connected to the computer but failed to perform as needed once they were powered by the battery pack. Now we understand why it’s so important to get the components into the casing early to allow for more testing. Overall we feel that the project was a success because it taught us about the frustrations of working within the physical world and how a contingency plan is not just a luxury but a necessity.

The concept itself is solid and one of the benefits of working with real materials is they can become a source of inspiration. We were able to further refine our concept by exploring the form in 3 dimensions. For example, when placing our hardware into the circular casing we noticed that the configuration of the ultrasonic sensor in relation to the mood indicator gave a face-like appearance to our front panel. Later, when inflating the plastic bag around our necks, we noticed how uncomfortable it made us feel and realized how the sensory aspect to the project could relate directly to the core concept. The Panic Mode device is uncomfortable for the viewer but also for the person who wears it, making this work a multi-faceted exploration of both the experience and the effects of social anxiety in our society.

During the critique our project sparked a great discussion about personal space and how we experience anxiety in a public setting. The class felt that the device might serve the practical function of providing visual cues for a nonverbal person in distress such as someone experiencing a medical emergency or a psychologically triggering incident. We all agreed that it would be interesting to see how multiple devices would react to one another – perhaps a singular panic incident would cause a ripple effect throughout the group like a startled flock of birds. It was suggested that we consider adding sensors to other parts of the body since people are approached from many angles, not just the front. In the end it seemed as though Panic Mode was well received by our classmates and many were able to relate directly to the themes of our experiment.

References:

Antfarm. “Inflatocookbook.” Inflatocookbook, inflatocookbook.kadist.org/.

Cottrell, Claire. “A Beginner’s Guide to Inflatable Architecture.” Flavorwire, 6 July 2012, flavorwire.com/306518/a-beginners-guide-to-inflatable-architecture.

CrimethInc. “Inflatables.” Inflatables | Destructables, 14 June 2011, 12:00am, destructables.org/node/53.

Hartman, Kate. “The Art of Wearable Communication.” TED: Ideas Worth Spreading, ted.com/talks/kate_hartman_the_art_of_wearable_communication.

Hartman, Kate. “How to Make an Inflatable Heart.” Instructables.com, Instructables, 9 Nov. 2017, instructables.com/id/How-to-Make-an-Inflatable-Heart/.

Kraft, Caleb. “Learn Plastic Welding with Giant Inflatable Tentacles | Make:” Make: DIY Projects and Ideas for Makers, Make: Projects, 22 Oct. 2015, makezine.com/projects/learn-plastic-welding-giant-inflatable-tentacles/.

Lozano-Hemmer , Rafael. “Vicious Circular Breathing.” Rafael Lozano-Hemmer – Project “Vicious Circular Breathing”, www.lozano-hemmer.com/vicious_circular_breathing.php.

Q, Ziyun. “RE-Inflatable Vest.” Instructables.com, Instructables, 11 Oct. 2017, www.instructables.com/id/RE-Inflatable-Vest/.

“Test: Hövding’s Airbag 8X Safer than Traditional Bicycle Helmet! This Is How It Works.” Hövding, hovding.com/how-hovding-works/.

EmojiBall

The Emoji Ball

Creation & Computation : Experiment 2 (Multiperson)

by Carisa Antariksa, Nick Alexander, and Maria Yala

EmojiBall is a multiplayer physical ball game where the game pieces exert influence on the players. The components of the game are three EmojiBalls, three hoops, and an environment that affects the game’s outcome. The game is intended for a minimum of three players, and there is no necessary maximum number of players; however, each player must be assigned a hoop and an EmojiBall. The balls are reactive and have “moods” which will change based on how they are manipulated. Balls can be shaken, rolled, thrown, or otherwise handled in any way as the player decides. When a player scores with a ball in a “positive” mood they win a point, while scoring with a ball in a “negative” mood leads to an opponent losing points. The game is intended to be played in any space, regardless of size or surface. The distinct features of whatever space the game is played in will influence the available options players have with the balls, resulting in a unique game scenario every time.

Emojiball explanations

The Project:

good-boi

Process

We began by brainstorming ideas; each team member suggested the kinds of experiments that interested them and presented to the group why. We considered whether it would be a game or more of an art installation, and included ideas on what inputs and outputs the product would need.

Brainstorming

We then proceeded to come up with various products stemming from the set of ideas.

Initial ideas

After the brainstorming session, we were interested in two ideas: what would become EmojiBall (a ball game that would let players manipulate the mood of a ball and score points when they made the ball happy) and a wearable glove that would allow someone to send signals and secret messages to another person.

Sketches

We ended up choosing EmojiBall as it was the idea we were most excited about: it combined elements from each of our initial ideas – mood (Maria), emojis (Carisa), and ball games (Nick). EmojiBall would also allow us to explore a larger combination of inputs and outputs. Additionally, it would provide the most interaction for the players that wasn’t completely tied to the device, as it incorporated a spatial component whereby the environment affected the balls’ state. We planned on using light, sound, and motion sensors for inputs, and vibration motors, LED lights, and speakers as outputs.

To further the collective concept we had another brainstorming session in which we went over the project and defined what would be needed to accomplish our task. During this session we also began to discuss possible game mechanics and form of the ball.

Our initial idea for EmojiBalls was this – Emojiball: a game where you must manage the mood of three balls in order to score points. The game encourages players to use tactics that usually don’t come into play in game spaces – such as manipulating the room’s brightness, shouting, and even cuddling the game pieces.

Final Brainstorm

Once we had a general idea of what our project was, we decided to break tasks into chunks so that each of us would explore a different working part of the project and teach it to the rest of the group. We ended up deciding to use a light sensor and a motion sensor as our inputs, and (green, yellow, red) LEDs and a speaker as our outputs.

Lights & Points

We decided that the ball would have 3 main mood states for our minimum viable product: Happy, Calm and Angry. Each mood state would have a corresponding colour – Happy (green LED), Calm (yellow LED) and Angry (red LED). Once we connected all the lights to the breadboard, we began by testing cycling through the three moods.

First, we tested the light sensor, setting up our breadboard so that when the light was bright (a reading of 500 and above) the green LEDs would turn on, indicating a happy ball, and when the room was dim (a reading under 500) the red LEDs would turn on, indicating an angry ball.

Fritzing

code     Lights

Once we got the light sensor switching the ball’s mood depending on the intensity of light, we wrote code that would alternate lighting each of the LEDs, mimicking the idea that the ball’s mood would change over time, i.e. it would move from calm towards happy, back to calm, and then down to angry in a looping fashion.

Lights

To affect the ball’s mood changes, we created a global variable, moodpoints. We decided that the ball’s moodpoints would max out at 1000 and bottom out at 0. Between 1000 and 750 moodpoints the ball would be happy; between 749 and 250, calm; and between 249 and 0, angry.

Sketchbook

Input from the light sensor was then used to affect the moodpoints: if the player moves the ball towards light they score 10 moodpoints, and if they move it to a darker environment they lose 10 moodpoints. A function – checkLight() – is called once per loop, where the light reading determines whether the player gains or loses moodpoints: under 500, 10 points are lost; over 500, 10 points are gained. Then, depending on the value of the moodpoints, a function is called, i.e. getHappy(), getCalm(), or getAngry(). Each of these functions turns on the corresponding light and plays an accompanying melody.
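A condensed sketch of that loop, following the thresholds described above; the light pin is an assumption and the mood functions are stubbed:

    const int lightPin = A0;
    int moodpoints = 500;

    void getHappy() { /* green LED on, happy melody */ }
    void getCalm()  { /* yellow LED on, calm melody */ }
    void getAngry() { /* red LED on, angry melody */ }

    void checkLight() {
      if (analogRead(lightPin) >= 500) {
        moodpoints = min(moodpoints + 10, 1000); // bright: gain moodpoints
      } else {
        moodpoints = max(moodpoints - 10, 0);    // dim: lose moodpoints
      }
    }

    void setup() {
      // LED and speaker pins would be configured here
    }

    void loop() {
      checkLight();
      if (moodpoints >= 750)      getHappy();
      else if (moodpoints >= 250) getCalm();
      else                        getAngry();
      delay(50);
    }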

Orientation Sensor

After assembling the Adafruit BNO055 and downloading the libraries and driver, we installed and tested the sensorapi example supplied by Adafruit at their website. We experimented with the hardware and software to get a sense of what kind of readings it could supply and how they might be used for our purposes. We worked with the sensor for some time before being informed by our instructors that it required a pair of 4.7KΩ pullup resistors in order to be used safely – as it happened, we had been having trouble with our computers consistently detecting ports while the sensor was enabled, and installing the pullup resistors solved this issue.

Our instructors also supplied us with demo code for the sensor which we experimented with but eschewed at that time as we were already comfortable with the code supplied by Adafruit.

Our next hurdle was determining how best to put the raw data returned by the orientation sensor into use. The sensor can return multiple types of data but we looked at it primarily for data it could return as a gyroscope and accelerometer. We noted that the sensor could return temperature and considered exploring temperature as an input to our device and as a game mechanic, but ultimately shelved it as being out of scope at this time.

In considering the application of the sensory data we considered the intended “personality” of the ball. We wanted a ball that likes to be moved gently, and gets angry if it is kicked, hit or thrown. For our purposes measuring acceleration seemed like the better choice over measuring rate of rotation. Acceleration could serve as a catch-all for fast movement, especially if the ball was tossed or otherwise moved in a way that prevented rotation. We noted that measuring acceleration would only return good data over very short periods – if the ball was, for example, thrown a great distance, it would accelerate at the beginning of the throw and at the end, but the period of time in the middle of the throw would likely not have enough of a rate of change in velocity to trigger our code. However, considering that we were working on a short timetable and demonstrating on a small scale, most interactions with the ball involving its velocity were likely to be brief – thus returning beneficial data nearly the entire time the ball would be in motion.

We instituted a threshold in the code of ±2 m/s² on any axis for triggering a change in moodpoints. The threshold itself is a variable for quick adjustment during testing. Using the if(millis()-lastRead>=sampleRate) pattern suggested by Nick Puckett, we set the ball to test its acceleration twice per second. It was our intention that this low sample rate would discourage players from simply shaking the ball to anger it, and instead encourage them to throw or roll the ball. As discussed below, this did not pan out in testing.
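A minimal sketch of that sampling-and-threshold pattern, using the BNO055’s linear-acceleration vector; the moodpoints bookkeeping is stubbed out:

    #include <Adafruit_Sensor.h>
    #include <Adafruit_BNO055.h>

    Adafruit_BNO055 bno = Adafruit_BNO055(55);
    unsigned long lastRead = 0;
    const unsigned long sampleRate = 500; // ms: sample twice per second
    const float threshold = 2.0;          // m/s^2 on any axis

    void setup() {
      bno.begin();
    }

    void loop() {
      if (millis() - lastRead >= sampleRate) {
        lastRead = millis();
        imu::Vector<3> accel = bno.getVector(Adafruit_BNO055::VECTOR_LINEARACCEL);
        if (abs(accel.x()) > threshold || abs(accel.y()) > threshold ||
            abs(accel.z()) > threshold) {
          // rough handling detected: moodpoints would be reduced here
        }
      }
    }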

Melodies

Aside from the LED output, we decided that using the 8 ohm speaker to convey different tunes for happy, calm and angry would be essential in enhancing the experience. Moods are quickly associated with sound, and the strong connection between the two would play a large part in creating more effective game mechanics. It would also bring these objects to “life”, since sound is a strong signal of “feeling a certain way”, personifying the ball as something that expresses its feelings out loud.

To start with, identifying how to read the notes in the pitches.h library from the toneMelody example helped in creating custom melodies. There was also a difference in volume to note between the lower notes and the higher ones: the much lower notes sound much “quieter” than the higher ones. To fully utilise the potential of the default library, we researched different musical scales, from major to minor, and quickly realized that this library uses the major scale with sharps. Once we found middle C (NOTE_C4), playing around with the notes for happy and calm was easier to do.

note_identify
Initially, we had a blue LED placed in the circuit for a “sad” mood and a sad tune was intended, but it was later scrapped. The choice to implement only happy, calm and angry was arbitrary and other moods could be implemented in future iterations of the game. 

There were a lot of instances tested with the toneMelody examples, such as:

  1. Altering the note durations between 2-16 and 1500-2000 overall to either speed up or prolong a note in the melody sequence,
  2. Experimenting with the delay() within the tune itself, by either placing in a value or calling a ‘pauseBetweenNotes’ action,
  3. Placing a ‘0’ between the notes (e.g. in the calm tune the sequence became “NOTE_C4, 0, NOTE_C4, NOTE_G4, 0, NOTE_G4, NOTE_A4, 0, NOTE_A4, NOTE_G4”) to pause briefly within the melody and create a more musical tune.

These were then verified and uploaded to a simple button-and-speaker circuit to repeatedly test the tunes and confirm that they were suitable for the happy, calm and angry outputs. Each would activate as the colour of the LED changed to the respective colour.

img_4999

Testing different combinations of note sequences.

For each respective mood, we decided to go with a standard beeping alarm for angry as opposed to a custom one, a moderate tune for the calm mood, and a custom tune for the happy sound. The high-pitched alarm sound carried a sense of urgency that could affect player motivation more than a merely robotic, angry melody would. The calm tune was a familiar sound that stopped at just the right time to build anticipation of the mood change the player is after. The happy tune was a custom sound with a higher pitch at the end of the sequence, communicating an “I’m happy!” feeling (this is well reflected in the main video above).

notes_final

Final note sequences used in the code.

These are then placed into the respective if statements that are triggered once the moodpoints reach a certain threshold for happy or angry. By defining the makeASound() function, each melody is called based on the mood changes. A minor problem we encountered in embedding this code was making sure to define the total number of notes in each sequence so they all played within the for() loop: within “for (int thisNote = 0; thisNote < x; thisNote++)”, the x needs to be replaced with the right number of notes for the specific mood, otherwise the output comes out as a stifled sound. In the final code, the melodies synced well with the LED output, which allowed for a more “living” EmojiBall to be used in the game.
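A minimal sketch of that pattern, based on the Arduino toneMelody example (it assumes the example’s pitches.h file sits next to the sketch); the note durations are illustrative, with the calm sequence quoted earlier:

    #include "pitches.h"

    const int speakerPin = 8;

    // 0 acts as a rest between notes
    int calmMelody[] = {NOTE_C4, 0, NOTE_C4, NOTE_G4, 0, NOTE_G4,
                        NOTE_A4, 0, NOTE_A4, NOTE_G4};
    int noteDurations[] = {4, 8, 4, 4, 8, 4, 4, 8, 4, 2};
    const int calmNoteCount = 10; // the "x" in the for loop must match

    void playCalm() {
      for (int thisNote = 0; thisNote < calmNoteCount; thisNote++) {
        int noteDuration = 1000 / noteDurations[thisNote];
        if (calmMelody[thisNote] != 0) {
          tone(speakerPin, calmMelody[thisNote], noteDuration);
        }
        delay(noteDuration * 1.30); // brief pause between notes
        noTone(speakerPin);
      }
    }

    void setup() {
      playCalm(); // play once; in the game this is called on mood change
    }

    void loop() {}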

Initial Build

final-emojiball-board_bb

img_20181025_094206

First complete circuit board. This one includes a voltage regulator (lower left) while subsequent builds did not.

The physical build of the balls was an exercise in trial and error. We felt it was important to solder our boards due to the physically demanding way they would necessarily be treated. None of us had soldered before and the first board, after being built and tested on a breadboard, took the better part of two days. After becoming familiar with the materials and apparatuses, making mistakes and undoing them, the first board was complete.

mvimg_20181024_141442

Early exploration of the final circuitry

Power management was an immediate issue. We explored several options for power before consulting with a peer, Omid Ettehadi, who gave us excellent advice and pointed us toward a simple 12V battery. He explained that the Arduino contains a built-in regulator to handle incoming voltage. He also suggested building a voltage regulator into our circuitry to regulate the voltage and take some strain off the chip – Omid warned us that it might overheat without an external regulator.

img_20181025_094206
12V power source with voltage regulator

We built one prototype with the regulator and two without. While early testing showed no practical difference between the circuits, it eventually became clear that the balls without regulators drained battery power at a much higher rate than the one with it.

img_20181025_165117img_20181024_165428

Whatever we used as a case had to be tough enough to withstand rough play but supportive enough to keep our components safe. We considered hamster balls for the housing early on but were dissuaded by their expense, their lack of versatility, and advice from Reza Safaei, our fabrication guru. After discussing with Reza we settled on foam for our case, which could be easily shaped and hollowed to accommodate our components and prevent them from being jostled or moved out of place. An added advantage of foam is that it is light, and therefore doesn’t affect the ball’s portability or the player’s ease of use.

As a final touch, we covered our balls in a soft felt material, which gave them a pleasant tactile feel and encouraged players to be gentle with them. This had the added benefit of interacting well with the wood floor of our demonstration surface, as the juxtaposition of hard vs soft objects in the play space and amplification of the balls’ sounds by the wood made for a pleasant experience that we did not explicitly design for.

img_20181026_145555

The three EmojiBalls

 

pasted-image-0
Bill of materials per unit

User Testing

Our user testing suffered as a result of our inexperience with fabrication. Our team’s familiarity with code allowed us to build a game engine with parameters that could be easily tweaked, and we planned, through testing, to adjust those parameters to get a sense of the reactivity and personality of the balls. However, we took longer than anticipated to complete a testable prototype and did not have enough time to test the product or the game to our satisfaction. Early iterative testing of the code using breadboards instead of soldered components pointed us in the right direction: we adjusted our thresholds and parameters to give the ball a sense of agency within the game and deliver immediate feedback to players. Moodpoint gain and loss from interaction was increased greatly from the values we had initially set, and the balls instantly felt more dynamic.

Testing the game itself was another matter. Our intention was to keep the stated rules of the game simple and brief, so that interactions between players would flow naturally from the behavior of the balls rather than from arbitrary rules. We found that players did not intrinsically understand the consequences of interacting with the balls in certain ways and had to have the parameters of the mood-shifting mechanic explained to them. At that point, they tended to discover a single action – shaking the ball or holding it under light – that most reliably caused the mood shift they wanted. We hoped that more adjustment or the addition of extra sensors might mitigate this, but because we had prioritized fabrication over user testing, we were not able to tune the ball and the rules of the game to the extent we had hoped.
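
For concreteness, here is a hedged sketch of how such single-action triggers might be detected, assuming a BNO055 accelerometer and a photoresistor on an analog pin; the pin numbers and thresholds are assumptions, not values from our build.

#include <Wire.h>
#include <Adafruit_Sensor.h>
#include <Adafruit_BNO055.h>

Adafruit_BNO055 bno = Adafruit_BNO055(55);   // BNO055 over I2C
const int LIGHT_PIN = A0;                    // photoresistor divider (assumed wiring)
const float SHAKE_THRESHOLD = 12.0;          // m/s^2; a resting ball reads ~9.8 (gravity)
const int LIGHT_THRESHOLD = 700;             // raw ADC value, 0-1023

bool isShaken() {
  // Total acceleration spikes well above gravity when the ball is shaken.
  imu::Vector<3> accel = bno.getVector(Adafruit_BNO055::VECTOR_ACCELEROMETER);
  return accel.magnitude() > SHAKE_THRESHOLD;
}

bool isInBrightLight() {
  return analogRead(LIGHT_PIN) > LIGHT_THRESHOLD;
}

void setup() {
  bno.begin();
}

void loop() {
  // A player who finds either trigger can repeat it at will: the tuning
  // problem described above.
  if (isShaken()) { /* lower the mood */ }
  if (isInBrightLight()) { /* raise the mood */ }
  delay(100);
}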

Product Design + Next Steps

The goal of the EmojiBall project was to create game pieces that felt as if they had emotions and exerted their will on the players of the game.

The actual rules of the game were secondary to the interactions with the ball. We considered adding rules such as “no touching the ball you last scored with” to prevent continuous scoring, but we decided against new rules in order to keep players’ minds clear and to encourage interactions arising organically from play. We kept the rule set broad to ensure players did not feel constrained and to encourage them to try unusual things.

The orientation sensor has capabilities we are not yet using that could be explored in further iterations. Beyond letting us tweak gameplay through the effects of physical interaction with the ball – by measuring its current g-force or rate of rotation – the sensor can also measure ambient temperature. Affecting the temperature of the ball or the game space might make for an interesting game mechanic.
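
As a sketch of those unused readings, the Adafruit_BNO055 library we link under Resources exposes both temperature and rotation rate; how the values would feed into gameplay is left open.

#include <Wire.h>
#include <Adafruit_Sensor.h>
#include <Adafruit_BNO055.h>

Adafruit_BNO055 bno = Adafruit_BNO055(55);

void setup() {
  Serial.begin(9600);
  if (!bno.begin()) {
    Serial.println("BNO055 not detected");
    while (1);
  }
}

void loop() {
  int8_t temp = bno.getTemp();   // sensor temperature in degrees C
  imu::Vector<3> gyro = bno.getVector(Adafruit_BNO055::VECTOR_GYROSCOPE);
  Serial.print("Temp: ");
  Serial.print(temp);
  Serial.print(" C, rotation rate: ");
  Serial.println(gyro.magnitude());   // combined rotation rate across all axes
  delay(500);
}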

Foam was chosen for its lightness, its ease of use as a prototyping material, and its protective qualities. Early in testing, affixing the two foam halves together with dowels was adequate, but the rougher play we saw as the due date drew closer forced us to find a more secure way to close the balls; we settled on tape in the interest of time.

Layering felt on the surface of the foam also improves the mobility of these objects in the game, and a felt ball further emphasizes the concept of a ball of emotion, making it easier to realize the idea of a sentient, “feeling” object. This change in tactile quality shapes the interactions between player and object, and can bring about a sense of care for a ball that you are about to throw or roll into the hoop. With the code and core components now complete, another round of exploration and design is warranted to settle on the right material and closure system for the ball itself. Perhaps the ball could be made of two spherical halves with grooves that fit into each other, or with one half screwing over the other, locking the components in place.

To refine our existing prototypes, we would add voltage regulators to conserve battery life on the two prototypes that lack them. We would also continue to explore options for casings and ball sizes – how might the gameplay change if the balls bounced like basketballs or were as heavy as bowling balls? How might it change if the balls’ texture were further altered?

We also see a likely expansion of the project in testing more sensors. This is not necessarily a priority, but experimenting with different kinds of inputs (e.g. a microphone to allow coaxing the ball into different moods) might refine the feeling of the ball as a discrete organism. Additionally, we would add RFID tags to the balls and readers to the goals, releasing players from the burden of tracking who scored or lost points, where, and by how much; players could then focus on enjoying the game and interacting with each other (see the sketch below). The balls could also be networked to react to stimuli outside the players’ control – for example, as suggested during the critique, they could change moods according to the weather.
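
One plausible shape for that goal-side reader, assuming the common MFRC522 RFID module and its Arduino library; the part choice, wiring, and scoring hook are assumptions, since we never built this.

#include <SPI.h>
#include <MFRC522.h>

const int SS_PIN = 10;   // assumed wiring
const int RST_PIN = 9;
MFRC522 reader(SS_PIN, RST_PIN);

void setup() {
  Serial.begin(9600);
  SPI.begin();
  reader.PCD_Init();
}

void loop() {
  // When a tagged ball passes through the goal, report its UID so the
  // game can credit (or debit) the right player automatically.
  if (reader.PICC_IsNewCardPresent() && reader.PICC_ReadCardSerial()) {
    Serial.print("Ball scored, tag UID:");
    for (byte i = 0; i < reader.uid.size; i++) {
      Serial.print(' ');
      Serial.print(reader.uid.uidByte[i], HEX);
    }
    Serial.println();
    reader.PICC_HaltA();   // stop talking to this tag until it reappears
  }
}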

img_4924

Resources

Critique Presentation Slides

EmojiBall Code on GitHub

Adafruit Orientation Sensor code on GitHub. Retrieved from https://github.com/adafruit/Adafruit_BNO055

Adafruit BNO055 Absolute Orientation Sensor. (n.d.). Retrieved from https://learn.adafruit.com/adafruit-bno055-absolute-orientation-sensor/arduino-code

Paul, E. (2018, June 19). Solfege: Why Do Re Mi Isn’t Just Child’s Play. Retrieved from https://www.musical-u.com/learn/solfege-do-re-mi-isnt-childs-play/

Friends-of-Fritzing Foundation. Fritzing. Retrieved from http://fritzing.org/ 

Circuit diagrams created with the Fritzing tool.

Play a Melody using the tone() function. (n.d.). Retrieved from https://www.arduino.cc/en/Tutorial/ToneMelody

All photographs taken October 2018.
