Author Archive

Sound Synthesis

Project by: April De Zen, Veda Adnani and Omid Ettehadi
GitHub Link: https://github.com/Omid-Ettehadi/Sound-Synthesis

screen-shot-2018-12-10-at-12-20-19-pm

Music Credit: Anish Sood @anishsood

Contributors: Olivia Prior and Georgina Yeboah
A special thanks to Olivia and Georgina for letting us leverage the code from Experiment 2, Attentive Motions. Without the hard work contributed by both these ladies, the musical spheres would not have been finished in time; we are truly grateful.
screen-shot-2018-12-10-at-12-01-27-pm

Figure 1.1: Left, Final display of Sound Synthesis
Figure 1.2: Center, Sound Synthesis Team
Figure 1.3: Right, Special thanks to Attentive Motions Team

Project overview
Sound Synthesis is an interactive light and music display that allows anyone passing by to become the party DJ. There are three touch points in the system. The first is the ‘DJ console’, made up of children’s blocks; each block controls a different sound stem, triggered by placing the block on the console. The other two touch points are wireless clear spheres containing LEDs and a gyroscope, each of which triggers another sound stem when the sphere is moved. These interactions not only activate sound and lighting but also invoke a sense of play among all ages.

Intended context
The team’s intent was simple: bring music and life to a gallery show using items common in children’s play. Relinquishing control over the music and ambience at a public event seems crazy, but this trio was screwy enough to give it a try. The goal was to build musical confidence among the crowd and allow them to ‘play’ without the threat of failure. For a moment, anyone is capable of contributing to the mood of the party, regardless of their musical experience.

screen-shot-2018-12-10-at-12-31-28-pm

Figure 2.1: Left, Final display of Sound Synthesis
Figure 2.2: Veda showcasing the capabilities of each musical sphere
Figure 2.3: Veda showcasing the capabilities of the DJ console
Figure 2.4: Center display in action

Product video

Production Materials

screen-shot-2018-12-10-at-9-53-43-pm

Ideation
The team brainstormed different ways to combine older projects together to create a playful music experience for those visiting the end-of-semester show. The ideation process started off quite ambitiously, attempting to match the same footprint as another project called ‘The sound cave’.
screen-shot-2018-12-10-at-11-31-10-am

Figure 3.1: Left, Initial drawing of floor layout
Figure 3.2: Center, Initial drawing of DJ console, sphere and proposed fabrication of center display
Figure 3.3: Right, Initial drawing of additional touch points for more interactions (if time allowed)

The sound cave had five stations hooked up to its center unit, with a different interaction at each station. The original plan was to use the display tower from Omid’s Urchestra project as our center display, with a few alterations. The first station would involve a kid’s puzzle taken from Veda’s Kid’s Puzzler project; the interaction would remain the same, using pull-up resistors and copper tape to create a button. The next station would have the clear spheres from the Attentive Motions project, and that interaction would also remain the same, using the gyroscope to sense motion and send a signal to the main unit. The next three stations would be brand new, and this is where our ambitions got the best of us. After further group discussion, it was decided to add only one more station to the project. The new station would involve a version of a touch sensor which would require a wearable to ground the circuit, see Figure 3.3.

screen-shot-2018-12-10-at-11-42-21-am

Figure 4.1: Left, For a detailed understanding of the LED tower: Urchestra
Figure 4.2: Center, For a detailed understanding of the block puzzle: The Kid’s Puzzler
Figure 4.3: Right, For a detailed understanding of the clear spheres: Attentive Motion

Journey Map

screen-shot-2018-12-10-at-4-39-48-pm

Figure 5.1: Top, The first ambitious version of the Journey Map
Figure 5.2: Bottom, A more realistic and achievable Journey Map

Scheduling
As a team, we came up with a schedule. Early on we wanted to make sure we were being realistic with the amount of work we were taking on, especially since there were many other final projects in other classes. We arrived at this schedule, which needed to be shifted from time to time but overall we were able to stick to it and achieve a final product we are all very proud of (with enough sleep).

screen-shot-2018-12-10-at-4-45-46-pm

Figure 6.1: Team workback schedule

Programming
One of the benefits of revisiting previous projects is that most of the hard work has already been done. The first thing we needed to do was see what data we could get from each of them and assess what else needed to be added or altered.

screen-shot-2018-12-10-at-1-17-55-pm

Figure 7.1: Left, Changing the Arduino Micro to a Feather ESP32; Center, Circuitry for the DJ console and spheres; Right, Installing the circuitry into the base of the box
Figure 7.2: Center, Moving the circuits from breadboards into prototyping boards
Figure 7.3: Right, Adding LEDs to the puzzle

The DJ Console (Blocks) The puzzle used six switches, completed with the help of copper tape underneath the shapes. It also had a single LED to indicate when any of the shapes was placed in its correct position. Each shape corresponded to a specific sound that was then played through the P5 file.

We wanted to stick to the same principle, with a straightforward addition. To provide instantaneous feedback to users upon any change, instead of having only one LED we placed six of them, indicating how many blocks were active at any time. The system still used an Arduino Micro that sent the switch data over a serial connection to the P5 file. The data was then sent to PubNub so that the display system could use it.
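The console’s data flow can be sketched as follows. This is a minimal illustration with hypothetical names and message shape, not the project’s exact code: it assumes the Arduino sends one character per switch over serial (e.g. "101001"), which the p5 sketch parses into the message published to PubNub.

```javascript
// Parse a serial line into an array of booleans, one per block.
// (Hypothetical helper; the real serial format may differ.)
function parseSwitchLine(line) {
  return line.trim().split('').map(function (c) { return c === '1'; });
}

// Build the message for the display page: which sound stems are
// active, plus a count used to light the six feedback LEDs.
function blockStatesToMessage(states) {
  const activeStems = [];
  states.forEach(function (on, i) { if (on) activeStems.push(i); });
  return { activeStems: activeStems, count: activeStems.length };
}

// In the real sketch this object would be handed to pubnub.publish().
const consoleMessage = blockStatesToMessage(parseSwitchLine('101001\n'));
```

With blocks 1, 3 and 6 placed, the message lists stems 0, 2 and 5 and a count of 3, matching the six-LED feedback described above.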

The Sphere The ball used an Arduino Micro, an Adafruit orientation sensor, an LED strip and a small speaker. It used to make noise whenever it was too stable, asking people to move it. We no longer wanted the device to play any sounds; we only wanted it to send the orientation data to PubNub. To do that, we got rid of the speaker and swapped the Arduino Micro for a Feather ESP32 board. The board read the data from the orientation sensor and sent it to PubNub. To provide real-time feedback to the user, the LED strip would light up whenever the ball was shaken.
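The shake check itself can be sketched like this. The threshold value and field names are our assumptions for illustration; the real firmware reads the Adafruit orientation sensor on the Feather ESP32. The idea is to compare successive readings and flag a shake when the total change is large enough, which both lights the LED strip and triggers the sphere’s accent sound.

```javascript
// Total change (in degrees) across axes that counts as a shake.
// This value is an assumption, tuned by hand in practice.
const SHAKE_THRESHOLD = 15;

// Compare two consecutive orientation readings {x, y, z}.
function isShaken(prev, curr) {
  const delta =
    Math.abs(curr.x - prev.x) +
    Math.abs(curr.y - prev.y) +
    Math.abs(curr.z - prev.z);
  return delta > SHAKE_THRESHOLD;
}
```

A resting ball produces tiny deltas and stays quiet; a shake produces a large delta and fires the feedback.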

The Center Display The display used an Arduino Micro, an LED strip and nine switches made of copper tape. The biggest issue with this design was the need for copper tape under shoes to complete the circuit. So, we got rid of the tapes and used the design only as a display. We added two extra LED strips to make the experience much better.
The P5 sketch read the data sent from the two balls and the puzzle and, based on their configuration, played the tracks associated with them. The data was then sent to the Arduino Micro over the serial connection to control the three LED strips. The primary LED strip was tied to the puzzle: if any of the keys were placed, the strip would flash a green colour every 2 seconds; otherwise it would flash a white light. The other two LED strips were each tied to a specific ball and would flash the same colour as the ball that was shaken.
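The flash-colour rules above reduce to two small decisions, sketched here with function names of our own choosing (the project’s code is organized differently):

```javascript
// Strip tied to the puzzle: green if any block is placed, white otherwise.
function puzzleStripColour(anyBlockPlaced) {
  return anyBlockPlaced ? 'green' : 'white';
}

// Strips tied to the spheres: mirror the ball's colour only while
// that ball has just been shaken; otherwise stay dark.
function sphereStripColour(shaken, ballColour) {
  return shaken ? ballColour : 'off';
}
```

Keeping the rules this simple made it easy for visitors to connect each interaction to its light response.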

screen-shot-2018-12-10-at-2-05-16-pm

Figure 8: The team testing the units

Sound and Design
The sound was the most critical piece of the experience for us. Since none of us had worked with music before, we were most concerned about how the experience would come alive without a high-quality sound output. Instead of making any guesses, we turned to Adam Tindale, who has been working with sound for the last three decades.

Our meeting was extremely productive, and the most important lesson from it was the difference between creating a musical experience and a musical instrument. To create a musical instrument, you have to have a very deep understanding of the instrument, how it works and what sounds it can produce; the audience for such pieces is usually musicians with similar knowledge. We found a relevant case study that illustrated this, and we knew that this was not the experience we wanted to create.

screen-shot-2018-12-10-at-2-19-45-pm

Figure 9.1: Left, Cave of sounds, a musical experience
Figure 9.2: Right, Color cord – a technological musical instrument

We wanted to design an experience that made it easy to play with music and could empower users of all experience levels to create music of their own. Learning a musical instrument is a difficult task that requires countless hours of disciplined practice, so we asked: how might we do the opposite and create something inclusive, easy to use, and engaging at the same time? We needed a total of eight sounds: six for the DJ console (puzzle blocks) that set the main track, and two accent sounds, one per sphere, triggered upon shaking.

We began our search online, looking for royalty-free sounds. We even tried working with Ableton and GarageBand to see if any sounds would work together to create a synchronized soundtrack. But nothing available online was good enough, and since none of us had prior sound-making experience, we turned to our friends to collaborate with us on this.

Anish Sood is a renowned DJ, songwriter and music producer based in Goa, India. The genres he focuses on are EDM, house, techno and electro house, which felt like the right fit for our experience. We did a call together and briefed him in detail about the project. We wanted a track that was upbeat yet soothing, and not monotonous to listen to. We took inspiration from the artist Kygo to describe the kind of sounds we wanted to produce. We also shared with Anish many pictures and videos of the parts of the experience and our vision for it. He was extremely receptive and put together a beautiful track for us within 24 hours of our call. He created six sounds for the DJ console, divided between bass sounds and overlapping instrumental and vocal sounds. He also sent us the master track so we knew what it would all sound like when it came together.

Playlist for the DJ Console:
https://soundcloud.com/user667414258/sets/sound-synthesis-stem-set/s-b34Rx

For the spheres, we wanted to find sounds that accentuated the base track from the console well. After a mini-brainstorm we settled on a tambourine and a gong.

Playlist for the Spheres:
https://soundcloud.com/user667414258/sets/sound-synthesis-stem-set-sphere-sounds/s-Zlnq0


Fabrication
Our fabrication process was smooth and streamlined. The following steps were part of the process:

The DJ Console (Blocks) We already had the base for the DJ console in place from Experiment 3. This included the puzzle itself, a base box and a single LED to indicate whether the device had been activated. To convert the design from a kids’ toy into something more mature, we decided to spray paint its colourful keys in a simple black and white scheme. We also had to add five more holes for the additional feedback LEDs, and one for the connecting cable. While presenting, we used a plinth that housed the laptop underneath.

screen-shot-2018-12-10-at-2-09-35-pm

Figure 10.1: Left, drilling holes for LED lights into box
Figure 10.2: Center, Adding circuitry into box
Figure 10.3: Right, Spray painting shapes for box

The Sphere The fabrication process for the spheres was already done in Experiment 2. The only things that needed to change were the circuit and the addition of a battery holder for the LED strips so that they could run for more than 3 hours.

The Center Display We decided to stick with the same object that was made for Experiment 3. The only thing that needed to be changed was to remove the extra Ultrasonic sensors from the box. We added a base to the design so that we could glue down the three cylinders that were to hold the three LED Strips. We also added a back panel to the design so that the LED Strips would be invisible when the device was off.

screen-shot-2018-12-10-at-1-43-49-pm

Figure 11.1: Left, adding more LEDs to original circuit created for the Kid’s Puzzler project
Figure 11.2: Center left, rewiring new and improved DJ console
Figure 11.3: Right, April making alterations and rewiring to the original display unit used in the Urchestra project

screen-shot-2018-12-10-at-1-08-30-pm

Figure 12.1: Left, Final project layout
Figure 12.2: Center, Fine tuning the blocks and sphere
Figure 12.3: Right, Fine tuning the center display

Final Fritzing Diagrams

screen-shot-2018-12-10-at-10-30-54-pm

Figure 13: The final circuit for the hamster balls

screen-shot-2018-12-10-at-10-30-29-pm

Figure 14: The final circuit for the Blocks (DJ Console)

screen-shot-2018-12-10-at-10-30-43-pm

Figure 15: The final circuit for the center display

Presentation & Show

screen-shot-2018-12-10-at-1-50-16-pm

Figure 16.1: Left, Final floor plan of Sound Synthesis
Figure 16.2: Right, Instructional signs placed on plinth under each interactive device

For the final show, we wanted to make sure the connection between the three pieces was clear and that users knew what each piece did. To do that, a clean installation of the work was crucial. We placed all the objects in a corner where the center display could be seen from each of the stations. We used plinths of the same height and printed short instructions for each piece to make sure users were clear on their role in the experience. We also printed matching ID cards and wore black and white, to look like a team at the exhibit.

One issue we had to deal with was refreshing the web browser for our display unit every now and then, as the large quantity of data sent to it made it crash if left open for a long time. We made sure that at least one person was at the station at all times so that nothing went wrong.

We received very positive feedback on the project. People were very interested in how easy it was for them to act as a DJ and play with the sounds without having to worry about the pace of each track or how to synchronize them. Kids especially enjoyed the experience because they were familiar with the puzzle and the games, and they really liked being in charge of what was playing. Others enjoyed the unusual interface for the music: they liked how simple it was to control, how little work they had to do to get good sounds out of the system, and how instantaneous the feedback was. One suggested improvement was to add more tracks and give users the ability to choose which track went with each piece.

Reflection

As a team, we really hit our stride with this project. Since we all enjoyed working together so much during project 4, we thought we would go out with a bang together in project 5. The three of us each brought something different to the table, and we found ways to utilize each team member’s strengths. Omid not only spearheaded the coding but is also extremely patient, and he slowed down his process so we could all understand the code of each device and troubleshoot any errors. Veda is extremely detailed in her design approach: it’s not enough for a design to look good, she makes sure it is functional and user friendly in every detail. April brought her professional experience with meticulous project management, scoping and planning, graphic design and human-centred thinking. Her skill set with fabrication and printing methods was also a blessing.

One of the most important lessons for us was to scope realistically, and leave a safety margin for debugging and troubleshooting. We also made sure to give ourselves enough time to iron out all the details for the actual presentation and setup.

After all the hard work, we were able to achieve something that works beyond the level of a basic prototype. Hamster balls were dropped and the system crashed, but everything was back up and running without anyone at the party noticing. We are extremely proud of the final product and still can’t believe how well it turned out. If this project were ever scaled up, it would require more stable software and possibly custom microcontrollers, but for a two-week student project, we are very proud.

screen-shot-2018-12-10-at-2-49-25-pm

Figure 17.1: Left, April and Veda rocking out at the final show
Figure 17.2: Right, Veda continues to rock, While Omid makes sure everything is under control

References
(n.d.). Retrieved from http://www.picaroon.eu/tangibleorchestra.html
Cave of Sounds. (n.d.). Retrieved from http://caveofsounds.com/
Romano, Z. (2014, May 22). A tangible orchestra one can walk through and play with others. Retrieved from https://blog.arduino.cc/2014/05/22/a-tangible-orchestra-one-can-walk-through-and-play-with-others/
Schoen, M. (n.d.). Color Chord. Retrieved from https://schoenmatthew.com/color-chord
Tangible Orchestra – Walking through the music. (2014, June 03). Retrieved from https://www.mediaarchitecture.org/tangible-orchestra/


(manufactured) realities

Project by: April De Zen, Veda Adnani and Omid Ettehadi
GitHub Link: https://github.com/Omid-Ettehadi/Manufactured-Realities

ezgif-4-61972ea3b4f0

Figure 1.1: Introduction screen, featured on projection screens and mobile devices

Project overview
“Falsehood flies, and the Truth comes limping after it,” Jonathan Swift
(manufactured) realities is a project created to step outside the status quo and truly evaluate whether our beliefs are based on facts. To do this, the team selected six news stories: three true stories, and three that were released and later retracted or debunked by credible news organizations. The increase in conflicting information has fuelled much discussion. ‘Our inability to parse truth from fiction on the Internet is, of course, more than an academic matter. The scourge of “fake news” and its many cousins–from clickbait to “deep fakes” (realistic-looking videos showing events that never happened)–have experts fearful for the future of democracy.’ (Steinmetz, Time Magazine, 2018) The six articles are presented one by one, and after each, participants are given a chance to vote on whether they believe it or challenge it. Once all votes are in, the servo device shows the results of the poll while the projector reveals whether the story is true or fraudulent. Once all six questions are answered, the exercise ends on a results page showing the overall accuracy of the group as well as the accuracy on each question.

screen-shot-2018-11-24-at-8-17-50-pm

Figure 2.1: Veda, Omid and April on presentation day
Figure 2.2: (manufactured) realities on display, projection in back and servo device in front

Intended context
How we receive information is more complex than ever before. It used to be as simple as picking up a book or a newspaper to catch up on the news, current events or specialized educational materials. These media sources are held to rigorous ethical standards, and if they ever breach this code of conduct, a retraction must be printed and released to the public. The more retractions, the less credible the publication becomes. Nice and simple. In 2018, this simplicity has been turned on its head, all thanks to the internet. Nowadays, we are constantly overwhelmed with information: some of it playful and useless, some educational and enlightening, but some streams of information are created only to conflict and confuse the public. With all of this content being released hourly to various public channels, there is more emphasis on releasing information first and less concern about releasing accurate information. There has been a shift from reading credible sources by publishers to consuming information from our favourite ‘content creators’. These new content creators are not bound by any rigorous code of conduct and simply publish what they believe to be true. They also share articles with their subscribers and/or followers, further amplifying a story without knowing (or caring) whether it is in fact credible. ‘A false story is much more likely to go viral than a real story.’ (Meyer, The Atlantic, 2018) Media awareness is a long-standing issue; it is very easy for the person with the microphone to sway a crowd in their favour. The time we live in now goes far beyond that; we simply do not know what to believe anymore.

Product video

Production materials
Our aim in this project was to use a combination of hardware and software to create a seamless, straightforward and evocative experience that sparks conversation. The following materials were used for this project:

  • 1x Arduino Micro
  • 1x Laptop
  • 1x USB cord
  • 1x Strip of NeoPixel Lights
  • 2x Servos
  • 1x Breadboard
  • 1x Plywood
  • 1x Parchment paper
  • 1x Projector

Ideation
During the ideation phase, the team came up with many exciting options. The focus of each of our ideas was to create something that is extremely relevant and pertinent.

Idea 1: Constructing communication systems in a collapsed society
Communication devices can be made of scraps, discarded plastic debris and e-waste
This can be a way to communicate levels of remaining natural resources
‘Citizen scientists can take advantage of this unfortunate by-product of “throwaway culture” by harvesting the sensor technology that is often found in e-waste.’ (link)
Our team went to watch the Anthropocene movie for inspiration

Idea 2: A better way to communicate coffee needs
Texting and Slack are not a sufficient way to collect the coffee orders of a large group
Create an ‘if this then that’ type app, with your regular order saved and ready
When someone asks the app if anyone needs coffee, instantly they receive the orders

Idea 3: Broken telephone game
Using the sensors, we already have on our phones
Create a game to pass messages from phone to phone
Somehow creating a way to scramble the messages

Idea 4: Digital version of a classic board game
Pictionary, a digital version that can be played in tandem, speed rounds?

Idea 5: Think piece, ‘Challenging assumptions and perceptions.’
There is currently no way to validate content and communications on the internet
Create a survey for everyone to do at the same time, generate live results on screen
Use this to gauge perceptions or bias

Once we listed out all the ideas, we gave ourselves a day to think through what we felt most excited to do. We returned the next day and unanimously agreed to proceed with our think piece, ‘Challenging assumptions and perceptions.’ We were also mindful of the potential scalability of this experience. While the prototype itself was built for a small group of people, the intent was to set the foundation for a product that could easily scale to a more extensive experience and audience in the future.

Process Map
Once the idea was finalized, the next step was to flesh out all the details, including the flow of the experience. We began by creating user flow diagrams, breaking down the hardware, software and API instances and how each of them was interconnected. It was vital to iterate on the different pieces of the puzzle and see how they fit together.

cnc_exp4-2

Figure 3.1: User flow diagram

Wireframes
Once the flow was set, we focused on information architecture across all the devices. We used Adobe Illustrator to create wireframes with placeholder content. This helped us visualize the skeleton for the experience. We decided to use the projector as the centrepiece and the mobile phones as a balloting device.

Projector Experience Wireframes
The Projector experience would hold the critical question screens, the response screens and the final result screen to conclude the experience.

newswireframe-1

newswireframe-2newswireframe-3

newswireframe-4

Figure 4.1: Projector Wireframes

Mobile Experience Wireframes
Mobile devices around the room function purely as ballots, and the projector takes center stage as soon as the voting process ends. The team put much consideration into the flow of the participant’s attention: since there would be three interfaces in play, we made sure to include as much visual feedback as possible so participants knew where to look and when.


phonewf1

phonewf2

Figure 5.1: Mobile Phone Wireframes

Finding the news stories
The team took the selection of stories very seriously and took the time needed to research and find surprising news that shook the world when it was released. We remembered stories that had stood out for us in the past and looked for current pressing issues that were making news. We also divided the stories into true reports and false ones. For this project, we felt it was important not to make up false stories but instead to find stories that were released as true before being retracted later; this was crucial to the overall project objective. The team checked multiple sources and created an initial database of 15 stories before shortlisting six, in a random order of fake and true, and thereafter began the UI design process.

screen-shot-2018-11-25-at-2-31-34-pm

Figure 6.1: April and Omid searching for news stories
Figure 6.2: Veda and Omid searching for news stories

User Interface Design
We began the interface design process with the introduction screen. We didn’t want to create something static, so we went with a moving background. For the identity design, we wanted to create something striking and beautiful at the same time, and we used Adobe Illustrator and Photoshop for all the designs. Another difficult problem was coordinating the three different interfaces; as with the wireframes, we included as much visual feedback as possible so participants knew where to look and when.

ezgif-4-61972ea3b4f0

Figure 7: Introduction screen, displayed on projector and mobile devices

The team thought it essential to add a disclaimer screen to ensure the exercise was well received. While we tried to be as mindful as possible when picking the stories, we knew it was equally important to respect the sentiments of our cohort and faculty. We then shifted focus to the news article in question.

screen-shot-2018-11-25-at-11-29-47-pm

Figure 8.1: Top right, Screen which displays the article
Figure 8.2: Top left, Screen which displays whether article is true or not
Figure 8.3: Bottom right, Screen which displays disclaimer
Figure 8.4: Bottom left, Screen which displays final results and accuracy percentage

screen-shot-2018-11-25-at-2-45-19-pm

Figure 9.1: First draft of the User Interface
Figure 9.2: Veda hard at work designing two versions of UI, one for projector and one for mobile

ui-mobile

Figure 10.1: Right, Mobile Screen UI for ballots
Figure 10.2: Left, Mobile Screen, feedback to allow a user to know they have completed the vote

Programming
Controlling the flow of the experience was a high priority. To do this, we decided to create three different pages: a page to display the news articles and the answers on the projection screen; an admin page, giving a ‘moderator’ a button to control which page is shown; and a user page, which acts as a ballot for every person involved in the experience. We knew how important it was to choose articles related to today’s world, on topics people are very opinionated about. To leave more time to find the right questions, we decided to start with a simple structure for the program.

Connection to PubNub
As a first step, we connected the three pages through PubNub and tested the communication between them. The admin page sends data to PubNub commanding which page should be shown on the other two pages. The user page receives data from the admin page and transmits data to the display page with each user’s vote. The display page receives data from both the admin and user pages to display the number of votes. Once everything was working, we added all the articles to the display page and tested the program to make sure everything ran correctly. We then added a final page to show the results of the survey and allow users to reflect on the experience they just had.
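The display page’s bookkeeping can be sketched as below. Channel and field names are our assumptions for illustration, not the project’s exact code: the admin page publishes which page to show, user pages publish individual votes, and the display page tallies votes for the current article, resetting whenever the admin advances.

```javascript
// Fresh state for the display page.
function makeState() {
  return { page: 0, tally: { believe: 0, challenge: 0 } };
}

// Called from the PubNub message listener with each incoming message.
function handleMessage(state, msg) {
  if (msg.page !== undefined) {
    state.page = msg.page;                      // admin advanced the page
    state.tally = { believe: 0, challenge: 0 }; // reset for the new article
  } else if (msg.vote === 'believe' || msg.vote === 'challenge') {
    state.tally[msg.vote] += 1;                 // a user page voted
  }
  return state;
}
```

Routing everything through one listener kept the three pages loosely coupled: each page only needed to know the message shapes, not who sent them.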

screen-shot-2018-11-25-at-2-28-19-pm

Figure 11.1: Connection to PubNub and testing
Figure 11.2: Veda and Omid working on UI and Coding
Figure 11.3: Testing of the final infrastructure

Servos and NeoPixel hardware
After creating the basic coding structure and testing it by sending messages from each page to the others, we added communication with the Arduino Micro over a serial connection. Initially, we wanted to use the Feather board and connect it to PubNub directly, but because of the difficulties we had connecting our boards to OCAD U’s WiFi, we decided to stick with the Arduino Micros we had already gotten to know well. We tested the communication by sending the board servo angles based on the votes received in each category. The original idea was for the lights to transition through three phases: a standby state with white lights, a polling state using a colour library from Adafruit to show a rainbow of colours, and green lights once the poll was complete. Unfortunately, the code for the pixel lights conflicted with the servo code, so we had to replace the beautiful colour-library option with a solid RGB colour; blue matched the final designs nicely. The final use of the pixel lights included only one state: flashing on and off with a 10-second delay.
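The vote-to-angle mapping sent over serial can be sketched as a single proportional rule. The 0 to 180 degree servo range and the function name here are our assumptions; the real mapping may differ:

```javascript
// Map one option's vote count to a servo angle in [0, 180],
// proportional to that option's share of the total vote.
function votesToAngle(votesForOption, totalVotes) {
  if (totalVotes === 0) return 0; // no votes yet: rest position
  return Math.round((votesForOption / totalVotes) * 180);
}
```

For example, an even 5-5 split would point the servo to the 90-degree midpoint, making close polls visually obvious.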

Adding Sound
We wanted users to stay focused on answering, so we decided to add audio recordings of each news item to keep a good pace and allow participants to take in the content with ease. Adding the sound to the code wasn’t difficult once it was recorded and edited.

screen-shot-2018-11-25-at-2-41-08-pm

Figure 12.1: Setting up the circuit for the servos and NeoPixel lights
Figure 12.2: Testing the servos before completing fabrication

circuit_bb

Figure 13: Final Circuit Diagram

As a final step to make sure the experience was smooth, we tested each component of the program and ran many trials to make sure everything worked correctly.

screen-shot-2018-11-24-at-8-38-11-pm

Figure 14.1: Running final tests before presentation begins, ballot and article screen working correctly
Figure 14.2: Servo device and article results screen working correctly

Fabrication
Although it wasn’t necessary for this project to have any hardware, we all wanted to add something tangible for two reasons. First, to utilize our student access to a maker lab and learn more about how to use the equipment. Second, we wanted this experience to only happen in person and not be a simple online survey that disappears from your mind the moment it is completed. Our team had an idea nailed down quite early, and we were eager to get the fabrication underway as soon as possible, knowing that other teams will be using the Maker Lab. The goal was to finish the fabrication process in the first week. We initially wanted to 3D print a casing for the servo motors since we knew the laser cutting machine was down. Using both Illustrator and Autodesk Fusion 360, we created an STL file that could be read by the printer. This was a big learning curve since no one on the team had ever used the 3D software before. On Wednesday we met with Reza, we were advised to wait for the maintenance of the laser cutting machine to be completed since the execution of our design would be better using that device. Base on Reza’s advice, we went back to the original illustrator file that could be used by the laser cutting software. Waiting for the laser cut machine to be fixed did through us of our schedule, but we were able to pull it all together.

screen-shot-2018-11-24-at-8-23-02-pm

Figure 15.1: 3D View of the design, front view
Figure 15.2: 3D View of the design, back view

The first attempt was cut on cardboard to check the dimensions of the design and the quality of the cut patterns. In this process, we realized how small the lines in the background pattern were once laser cut. Some of the lines broke as soon as they were touched. Some of this was due to the ripples in the cardboard. To make sure this would not happen in our final product, once again we went back to the design and increased the thicknesses of the problem areas.

screen-shot-2018-11-24-at-8-27-56-pm

Figure 16.1: Laser cutter in action
Figure 16.2: Prototyping on cardboard, pattern was too intricate and needed to be reworked slightly
Figure 16.3: Prototyping on cardboard, front of the design

We decided to go with a thin layer of plywood. Reza was concerned that some pieces would jump out during the cutting process and either damage the machine or the design, so he set the depth of the laser incisions not to cut completely through. Since there was a natural curve to the piece of plywood, some pieces came out easily but other parts needed to be cut out later with an X-Acto knife.

screen-shot-2018-11-24-at-8-31-46-pm

Figure 17.1: Assembling final wooden casing that will house the servo device
Figure 17.2: Cutting out the patterns on the wood body
Figure 17.3: Cutting out the patterns on the wood body, back view

For the final project, we decided to add an LED Strip to the design so that we could highlight the moments when the users had to look at the servos. To hide all the electronics in the design and further infuse the light, we added a layer of parchment paper behind the patterns.

screen-shot-2018-11-24-at-8-35-41-pm

Figure 18.1: Final circuit prototype, back view
Figure 18.2: Adding parchment paper to hide the circuit and diffuse the light more effectively
Figure 18.3: Fabrication of final product

Presentation & Critique
For the critique, we wanted to make sure that everything would go smoothly, so we started early in the day to leave enough time to test the design after connecting everything in the Gallery. Once connected, we ran through a few tests and triple-checked that the servo was working. When the computer was connected to the projector, the display would not go full screen in Firefox, so to improve the presentation we switched to another browser. Unfortunately, the serial connection was left idle and disconnected before the actual presentation, causing the servos not to move at all.
The feedback we received was positive. The topic was relevant, and many shared similar concerns. One concern was that the mobile device continued showing the timer after the vote was cast. Everyone was shocked to see the final results page display the overall accuracy, which was quite low at 41%. Two articles in particular were quite convincing; although both were untrue, most people believed them. Having done much research on fake news, we expected people to accept false ideas that fell into their own confirmation bias.

Reflection
Upon reflection, there are a number of minor tweaks we would make to this project based on the flow of the first presentation to a large group of people. First, the sound coming from the computer was not loud enough, and many participants strained to either hear or quickly read what was on the screen; a wireless speaker is required. Second, we designed the experience assuming little interaction from our team would be needed during the polling. Working off this assumption, when we noticed a silence in the room or a look of confusion from a participant, we realized we needed to be better prepared to guide people through. Third, after the vote was cast, we moved on to reveal the truth behind the story; the issue was that too much content was provided, and we don't think anyone read what was on the screen. This is a flaw in the UX that can be fixed with a bit of editing and altering the hierarchy of the content. Fourth, and possibly most important, we did not put enough thought into the interface used by the 'moderator.' This interface did not show the timer the participants saw on their screens, so the moderator was never completely sure when to switch to the next article. Also, if or when the servo device decides not to work, it would be a bonus for the moderator's interface to show the voting results so they could at least be delivered to the participants verbally. The team learned a great deal about effectively delivering a message to a group of people using multiple interfaces, about communication feedback, and about the importance of presence upon delivery.

References
Steinmetz, K. (2018, August 09). How Your Brain Tricks You Into Believing Fake News. Retrieved November 26, 2018, from http://time.com/5362183/the-real-fake-news-crisis/

Meyer, R. (2018, March 12). The Grim Conclusions of the Largest-Ever Study of Fake News. Retrieved November 26, 2018, from https://www.theatlantic.com/technology/archive/2018/03/largest-study-ever-fake-news-mit-twitter/555104/

English, J. (2016, November 08). Believe It Or Not, This Is Our Very Own River Yamuna. Retrieved November 26, 2018, from http://english.jagran.com/nation-believe-it-or-not-this-is-our-very-own-river-yamuna-72099

“The Office” Women’s Appreciation. (n.d.). Retrieved November 26, 2018, from https://www.imdb.com/title/tt1020711/characters/nm0136797

McLaughlin, E. C. (2017, April 26). Suspect OKs Amazon to hand over Echo recordings in murder case. Retrieved November 26, 2018, from https://www.cnn.com/2017/03/07/tech/amazon-echo-alexa-bentonville-arkansas-murder-case/index.html

Gilbert, D. (2018, November 20). A teenage girl in South Sudan was auctioned off on Facebook. Retrieved November 26, 2018, from https://news.vice.com/en_us/article/8xpqy3/a-teenage-girl-in-south-sudan-was-auctioned-off-on-facebook

The truth behind ‘Fake fingers being used for orchestrating a voting fraud’ rumour. (2018, September 30). Retrieved November 26, 2018, from https://www.opindia.com/2017/02/the-truth-behind-fake-fingers-being-used-for-orchestrating-a-voting-fraud-rumour/

Sherman, C. (2018, November 21). Why the women suing Dartmouth over sexual harassment are no fans of Betsy DeVos. Retrieved November 26, 2018, from https://news.vice.com/en_us/article/d3b3dz/why-the-women-suing-dartmouth-over-sexual-harassment-are-no-fans-of-betsy-devos

Switching Off

Project by: April De Zen
GitHub Link: https://github.com/aprilde/CNC—Experiment-3-ADZ

screen-shot-2018-11-09-at-11-58-47-pm

Figure 1.1: Woman's head in vice, Image source: Medium
Figure 1.2: Switching off device, running content

Project overview
We live in a world where content is king. Social platforms and other public channels have amplified our voices, but in order to stay on top we must generate more and more and more. The same rule applies to marketing departments, ad agencies and good ol' Fox News, and the effect is a constant war over our attention. The message of this project is simple: we all know and understand how overwhelming content can be, but this was meant to go one step further. The device held a question: 'How much can you handle before you switch off?' When attempting to switch off, you would quickly notice that the messaging doesn't stop, it only changes to 'entertainment' content, the kind that is easy to take in and numbs the mind: memes, social media, Bored Panda. This device doesn't give you a real option to 'switch off' but instead provokes questions about which 'channel' we would unconsciously find ourselves gravitating to: the 'News' channel, the 'Advertising' channel or the 'Meme' channel. Are we so over-stimulated that we choose to bombard ourselves with content that numbs our minds? Is this the new 'switching off'? Is this how we calm our minds?

Intended context
This project is meant to be a think piece. It holds up what we think should calm a mind and compares it to how we are presently calming our minds. It's not uncommon to pick up a smartphone to quickly check a text message and suddenly lose an hour checking every other social app on the phone. Many social media sites allow a user to tailor their content to personal preferences, allowing for a smooth and stress-free experience. Next time you find yourself lost in a social media rabbit hole, stop and ask yourself how you feel afterwards. Do you feel calm or relaxed? Do you feel overwhelmed and annoyed? Or maybe the feeling you're experiencing is numbness.

Product video

Production materials for each prototype

  • 1x Micro Arduino
  • 1x Laptop
  • 1x USB cord
  • 4x buttons
  • 1x Toggle
  • 1x box

Ideation
The initial idea was very different from the final concept. It started out as a tool that let a user create art using programmed buttons, sent through Arduino to P5. Since I'm not stellar at coding, I wanted to start simple and, if and when I got that working, add another layer. It was a good strategy, and I quickly found myself adding in a light sensor. The light sensor allowed for two modes: when bright, the colours would be warm, and when dark, the colours would be cool. Guess what, I got it to work!
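The two-mode idea boils down to a single threshold comparison. Sketched here in C++ for clarity (the project itself ran through Arduino and p5.js; the function name and the threshold value are illustrative, not the original code):

```cpp
// Sketch of the light-sensor mode logic. Bright ambient light selects
// the warm palette, darkness selects the cool one.
enum Palette { WARM, COOL };

// lightValue: raw analog reading (0-1023 on an Arduino Micro).
// threshold: hypothetical cut-off separating "bright" from "dark".
Palette paletteFor(int lightValue, int threshold = 512) {
    return (lightValue >= threshold) ? WARM : COOL;
}
```

In practice the threshold would be tuned to the room's actual lighting rather than fixed at the midpoint.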

Through many attempts to tweak the imagery and speed of the design, the experience still felt sporadic and overwhelming. This concept was supposed to be an enjoyable experience, and I was struggling to get there. Instead of desperately trying to make the experience something it's not, I decided to think about what other concepts felt just as chaotic. This led me to the final idea: the daily content overload we endure. Once the new imagery was added, the idea evolved into 'switching off' and what that would look like.

screen-shot-2018-11-10-at-12-06-57-am

Figure 2.1: Nathan testing the product
Figure 2.2: Nathan and Ethan creating art with the product
Figure 2.3: Screenshot of output and debugging

Learning Process and Programming
I tried to keep it simple: I was going to do one thing at a time and see where I landed at the end of two weeks. Stage one was getting something to show up on the screen in a randomized order. Once that small win was achieved, I figured out how to swap the ellipse for an image. I continued with this method throughout the whole experiment, and not only did I learn a lot, I also started to see the logic behind debugging. I was layering in my code section by section, and each time it broke I was able to understand why and look up how to fix it. Also, the error messages provided by my web browser were starting to make sense to me, which was a new and fantastic development.

screen-shot-2018-11-09-at-8-19-36-pm

Figure 3.1: Adding ellipses from Arduino to P5
Figure 3.2: Converting ellipses to images

Light sensor
The light sensor was a bit tricky since my port kept disconnecting. I was able to upload the code onto the Arduino and get a reading without issue, but when it came to P5 it was hard to troubleshoot why I was seeing glitches in the graphics. Everything seemed right, but it wasn't working. It turns out the Serial Control was shutting down too quickly to work; once I rebooted my computer, the Serial Control issues magically resolved and everything started working.

screen-shot-2018-11-10-at-5-55-48-pm

Figure 4.1: Fritzing diagram with light sensor

Changing light sensor to a toggle
Once the final idea changed, I did attempt to use the light sensor to show transitions, but it did not make sense in the new context. I opted to use a toggle instead to simulate switching a channel. The code for this was really simple, and I did not have any issues with it. The snag was that the toggle in my kit wasn't just off and on; it had a third position, so I had to slightly adapt my idea again to accommodate it. I added advertising as another layer of content.
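A three-position (on-off-on) toggle can be read as three content channels with two input pins, one per outer throw. A minimal sketch of that decision logic (the pin behaviour and channel names are assumptions, not the project's actual code):

```cpp
// Sketch of reading an on-off-on toggle as three content channels.
// Each outer throw pulls one input pin LOW; the centre position
// leaves both pins HIGH (assuming internal pull-ups are enabled).
enum Channel { NEWS, ADVERTISING, MEMES };

Channel channelFor(bool pinALow, bool pinBLow) {
    if (pinALow) return NEWS;
    if (pinBLow) return MEMES;
    return ADVERTISING;  // centre position: neither pin pulled low
}
```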

Sound
Adding sound proved to be really tricky for me. Since the device was allowed to be connected to a computer for this experiment, I opted to use my laptop speakers instead of adding a lower-quality speaker to my circuit. I watched a few P5 tutorials by The Coding Train, but the method shown stopped working once I added in buttons and a toggle. I used a sound memory variable to allow each button to change sounds once the toggle was in a different position. This worked perfectly on one button, but as soon as I tried adding the other buttons the code broke. I'm still not completely sure why.

Fabrication
For the device casing, I wanted it to be in a sturdy case. I went up to the maker lab and saw that Reza was swarmed by other students so I thought I would try the dollar store. There was a huge variety of gift boxes and there was one size that fit a breadboard perfectly. Looking through the craft section of the store I also came across some silver paper which would add another clever layer to the box. Since I would be asking people to question their reality, having a mirror type surface would only add to the effect.
Once the box was ready, I was not happy with the locking buttons I had purchased. Having to click once to start the flow of images and click again to stop the images felt like the wrong interaction for this project. I quickly made a trip to a local electronics store and picked up 4 new buttons. I am glad I did because it made the sound element more fluid.

screen-shot-2018-11-10-at-12-13-57-am

Figure 5.1: Silver paper used to cover pre-made box
Figure 5.2: Fabrication in progress
Figure 5.3: Cutting holes to account for buttons
Figure 5.4: Fabrication of final product
Figure 5.5: Fabrication from behind

Final circuit diagram

screen-shot-2018-11-10-at-5-58-27-pm

Figure 6.1: Fritzing diagram of final prototype

User Testing
Throughout the process, I had three curious little beings eager to assist in testing the product. They did have fun with the painting idea, but the 'success baby' meme with a silly sound effect blew some minds. Very quickly user testing turned into durability testing, and luckily the hardware was able to take it.

Reflection
Overall, this was a great learning experience. It aided in building confidence and independent learning. Although spending the majority of the time experimenting with code and learning to solve problems was beneficial, I do wish I had scheduled more time for fabrication. Including an exploration of the different fabrication tools at our disposal would have been ideal. Unfortunately this was not considered in the project timeline, but I look forward to remedying that in the next project.

screen-shot-2018-11-10-at-12-20-15-am

Figure 7.1: Equipment durability test in progress
Figure 7.2: Continuation of equipment durability testing

References
The Coding Train. (2016, June 17). 17.5: Adding Sound Effects – p5.js Sound Tutorial. Retrieved from https://www.youtube.com/watch?v=40Me1-yAtTc&t=75s

Serial Output From Arduino. (2018, November 09). Retrieved from https://vimeo.com/237203208

Experiment 2: Arduino Micro Connection Diagrams. (2018, October). Retrieved from https://canvas.ocadu.ca/courses/27262/pages/experiment-2-arduino-micro-connection-diagrams

Experiment 3: JSON Data on the Serial Port. (2018, October). Retrieved from https://canvas.ocadu.ca/courses/27262/pages/experiment-3-json-data-on-the-serial-port

Lab: Intro to Asynchronous Serial Communications. (n.d.). Retrieved from https://itp.nyu.edu/physcomp/lab-intro-to-serial-communications/

Attentive Motions

Members: Olivia Prior, Georgina Yeboah and April De Zen
GitHub Link: https://github.com/alusiu/experiment-2-cnc

screen-shot-2018-10-27-at-9-51-03-am

Figure 1.1: Georgina, Olivia, and April holding the finished Attentive Motions prototype
Figure 1.2: The assembly of the Attentive Motions prototype
Figure 1.3 : Attentive Motions turned on, outputting red light as a signal to be moved

Project description and overview

Attentive Motions (AM) is a device that prompts users for consistent play and motion. The device gives the user feedback through visual and audio outputs, and this feedback reflects how the user is interacting with the object. If the device is left alone, it flashes red and outputs an alarm-like tone as a request to be moved. When the device is in motion, it outputs playful chirping noises and the lights flash bright, playful colours. There is a sense of play in engaging with the device that makes users want to continue the interaction.

Intended context and users

Attentive Motions is intended to initiate play across all ages. It is a simple device with signifiers that evoke playfulness regardless of age, gender or any other social boundary.

Video of Attentive Motions

Production materials for each prototype

  • 1x Micro Arduino
  • 1x Adafruit BNO 055 orientation sensor
  • 2x 4.7 kΩ resistors
  • 1x 16 ohm speaker
  • 1x NeoPixel (20 lights per strip)
  • 1x Portable Power Bar Source
  • Protoboard
  • Wires
  • Hamster ball
  • Acrylic insert for mounting components

Ideation

In our initial ideation stage, we discussed common themes we wanted represented within our work: 'critters,' LED lights, responsiveness through immediate feedback to user interaction, and mapping pathways. We came up with three ideas to start. The first was the 'sneaky sneaker,' a device that would attach to your shoe and track the level of sound you make as you walk; a vibration or light would indicate when you were being too loud. The second idea was a 'library assistant,' a necklace that could sense the volume of your voice and indicate through vibrations if you were speaking too loudly in the library. Our team also thought this might represent how being quiet is preferable in certain social environments and pose the question of 'why?'. The last idea we thought through was based on how light can flow and diffuse through plexiglass.

Our idea was to create objects that could sense passersby and light up sequentially to create a pathway in the dark. From here we discussed how to transfer light and the different shapes our idea could take, one of them being a sphere. This third idea was the initial catalyst for our prototype "Attentive Motions."

screen-shot-2018-10-20-at-5-38-03-pm

Figure 2.1 (upper left): Team discussing potential ideas
Figure 2.2 (upper middle): April sketching ideas
Figure 2.3 (lower left): Olivia and Reza sketching designs for the casing of the prototype
Figure 2.4 (right): Initial sketches of our pathway idea

Our team consulted Reza, the maker lab technician, and he recommended that we look for pre-made plastic objects, such as cubes, or sphere to save time on creating them. We took his advice and perused shops in Chinatown for pre-made objects that would fit our criteria.

Decision

After disbanding and meeting again, we narrowed our ideas down to two projects: a sphere in continuous movement, and an interactive lit pathway. We were happy to run with either approach, but after some consideration we thought that three devices would not be enough to convey the possibilities of an interactive lit pathway system. From there we decided to pursue the idea of a sphere that always had to be in motion. This idea met our creative goals of using light and sound as feedback, and the anthropomorphized form created by the movement and user interaction could evoke our 'critter' theme. Our intended input would be a gyroscope sensor that would detect whether the object was in motion, and our outputs would be sound, light and possibly vibration in response to the user putting the sphere in motion.

screen-shot-2018-10-20-at-5-44-52-pm

Figure 3.1 (upper left): Team assembling initial inputs and outputs
Figure 3.2 (upper right): Close up of speaker output assembled on a breadboard
Figure 3.3 (lower left): Team attempting to get the IMU sensor connected
Figure 3.4 (lower right): Initial design of Attentive Motions as a sphere

First Steps

Proof of concept

Our first step was to ensure that our inputs and outputs would work with our "always-in-motion sphere" idea. We first tested our speaker output with success, but struggled to get the IMU sensor to work with the Arduino Micro. After some guidance from professors, we realized that our microcontroller needed 4.7 kΩ resistors as pull-ups for the IMU sensor. After obtaining the resistors, and with much struggle, the IMU sensor worked.

Object design

After more discussion, we considered possible ways to enclose the device. Our initial design idea was to get kids' play balls in various sizes and attach the device to the outside surface. To emphasize the notion of a critter, we discussed placing fur on the outside of the surface and the device to elevate the fiction of the project. Some of the spheres we started to think about were beach balls and styrofoam balls.

Material list

Out of our object design conversation came the discussion of what other materials we would need to source to produce our project. After sorting through our kits we decided we needed:

  • Louder lightweight speakers
  • Larger LED lights or NeoPixel lights
  • Possibly a switch
  • Power source
  • Spheres
  • Female headers

We also decided on a budget for the project, which settled at $25-30 per prototype. This would include the casing, power source, and any other parts we needed to produce our idea.

As per Reza's suggestion, we went supply shopping for ready-made objects. Nothing we found was within our price range; large hollow styrofoam balls were $30.00 each, which left no budget for other supplies. We decided to scope out spheres at the dollar store and purchased a bouncy ball so we could start testing our prototype.

screen-shot-2018-10-20-at-5-58-40-pm

Figure 4.1 (upper left): The team shopping for supplies
Figure 4.2 (upper right): Dollar store bouncy balls
Figure 4.3 (lower left): Hollow styrofoam sphere halves
Figure 4.4 (lower right): Team testing proof of concept with the dollar store bouncy ball

Product Journey Map

To better utilize the team's time, we created a map of the interactions and feedback for the prototype. To keep the sphere in motion, a person needs to move it, so the team thought through what signifiers would need to be present for a person to understand that the device wants engagement. The idea was to make the sphere output a loud noise when still; this noise would indicate that immediate attention was needed. Beyond the noise, the team wanted other forms of feedback, so flashing red lights were added as a visual cue. Since the balls can be picked up, tactile feedback through vibration might also be added.

screen-shot-2018-10-20-at-3-45-29-pm

Figure 5.1: Process map

Sketches of Product Casing

Before deciding which materials to buy for the device casing, we drew out some sketches. The initial idea was to use a ready-made ball and adapt it for the device. Because accessibility of the device was important, cutting open the ball and sealing the device inside would not work. Instead, wiring the outside of the ball and creating a case for the Arduino on the bottom seemed like the best solution. The idea was to cover the whole ball in fabric to hide the imperfections. Upon some testing, this oval shape did not work well.

screen-shot-2018-10-20-at-7-08-14-pm

Figure 6.1 (upper left): First assessment of what we would need to solder
Figure 6.2 (center): Connecting the speaker to the breadboard
Figure 6.3 (right): Sketch of the sphere with casing for the components
Figure 6.4 (lower left): IMU sensor connected to breadboard and Micro Arduino

Programming

Our initial programming challenge was determining when the sphere was in motion. After a consultation with Nick, we decided to keep the previous velocity sample and compare it to the current one. From there, we mapped the tone to the velocity: if the ball was continuously increasing in velocity, the tone would decrease, and if it was moving more slowly, the tone would get louder.
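The sampling idea can be sketched as two small functions: one comparing successive velocity samples, and one mapping velocity to pitch so that faster motion lowers the tone and slower motion raises it. The constants here are illustrative, not the project's actual values:

```cpp
// Detect motion by comparing the previous velocity sample to the
// current one, as suggested in the consultation.
bool isSpeedingUp(float previousV, float currentV) {
    return currentV > previousV;
}

// Linear map from velocity to tone frequency in Hz, clamped at the
// ends: v = 0 maps to 1000 Hz, v = vMax maps to 200 Hz. (Range and
// frequencies are assumptions for the sketch.)
float toneFor(float v, float vMax = 20.0f) {
    if (v < 0.0f) v = 0.0f;
    if (v > vMax) v = vMax;
    return 1000.0f - (v / vMax) * 800.0f;
}
```

On the Arduino itself, the result of `toneFor` would be fed to the speaker each loop iteration.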

screen-shot-2018-10-24-at-2-37-13-pm

Figure 7.1 (upper left): Team discussing with Nick how to calculate velocity and sample rates
Figure 7.2 (center left): Breadboard with IMU sensor and speaker attached onto the bouncy ball
Figure 7.3 (right): Chalkboard snapshot of the velocity calculations
Figure 7.4 (lower left): Georgina testing out speaker with mapped tone

IMU Sensor

Once we were able to get data from the IMU sensor, we needed to define thresholds for when the device should switch from one state to another. After some testing, we decided our threshold matrix would be the following:

1) Sphere not in motion: velocity reads 0
2) Sphere slowing: velocity less than 7
3) Sphere increasing velocity: velocity greater than 7
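The threshold matrix above can be expressed as a simple state classifier. The value 7 comes from our testing as described; the names are illustrative:

```cpp
// Classify the sphere's state from the current velocity reading,
// following the threshold matrix: 0 = still, below 7 = slowing,
// 7 or above = speeding up.
enum SphereState { STILL, SLOWING, SPEEDING_UP };

SphereState classify(float velocity) {
    if (velocity == 0.0f) return STILL;
    return (velocity < 7.0f) ? SLOWING : SPEEDING_UP;
}
```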

screen-shot-2018-10-27-at-11-14-40-am

Figure 8: Initial circuit diagram

Connecting the Speaker to Gyroscope

Once we had a metric for the device to recognize each state, we had to control the sound output. We were having a hard time getting a smooth rate or average velocity with the speaker. We mapped the tone to the velocity, but found through testing that our thresholds were not providing consistent results. Our testing was very detail-oriented: we would switch the thresholds to determine how to map the tone between different values. This was not the most time-effective way of testing the correlation between tone and velocity, but it was the most accurate way of finding the thresholds.

Device Casing

We went as a team to see Reza, looking for ideas on how to construct a sphere, as we were worried our ball would not roll properly with the housing on the outside. Reza had the idea to find a globe; he had one in the maker lab, but it was reserved for another project. We borrowed it for testing in hopes we could find three globes in Chinatown. We went searching for globes with no luck, but we were able to use the borrowed globe to test how the device responds to being rolled.

screen-shot-2018-10-27-at-9-59-21-am

Figure 9.1 (left): Globe Reza gave us in the maker lab
Figure 9.2 (center): Breadboard attached the globe with sensors and outputs
Figure 9.3 (right): Olivia assessing the data being returned from the IMU sensor when moved

We did find clear bowls that we thought we could melt and mould together. We brought them to the plastic lab to discuss options with the technician. When he saw the bowls, he did not think we could achieve a plastic casing without a mould. There was no time to make a mould, but he suggested picking up some hamster balls as an alternative. Taking his advice, the team went to a pet store to pick one up and see if it would work.

This turned out to be a great solution for a sphere for our device. The hamster ball provided solutions to all of our design problems. The ball could be opened easily so that as a team we could service our prototypes, and the clear plastic of the hamster balls allowed for a nice diffusion of the light through the device. We initially picked up one hamster ball to test, and upon success picked up two more with confidence.

LED outputs

After fiddling with the mapping of the speaker tones to the velocity, our next step was to incorporate LED lighting as feedback for the sphere. We purchased addressable "NeoPixel" LED lights so that we could dynamically change individual pixels to create a "glittering" effect when the ball was moving; one of our teammates also had previous experience working with them. Because we had already defined our states (still, in motion, and slowing) when determining the tone, it was simple to make the LED light states correlate with the velocity of the sphere. Our defined states for the lights were:

  • Still – the lights would blink red
  • In motion – the RGB values of the LED lights were mapped to the XYZ values of the IMU sensor (R-X, G-Y, B-Z). This dynamically would represent the direction and motion of the sphere
  • Slowing – if the device was slowing, we would gradually shift the dynamically changing colours toward red by adding '50' to the R value
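The light states above can be sketched as a small mapping function. The axis range and scaling are assumptions, but the R-X/G-Y/B-Z mapping and the +50 red shift for the slowing state follow the description:

```cpp
#include <algorithm>

struct Rgb { int r, g, b; };

// Map one orientation axis reading (assumed range -180..180 degrees)
// onto a 0-255 colour channel.
int channelFromAxis(float axis) {
    int c = static_cast<int>((axis + 180.0f) / 360.0f * 255.0f);
    return std::min(255, std::max(0, c));
}

// In-motion state: XYZ drives RGB. Slowing state: push the colour
// toward red by adding 50 to R, clamped at 255, per the description.
Rgb colourFor(float x, float y, float z, bool slowing) {
    Rgb c = {channelFromAxis(x), channelFromAxis(y), channelFromAxis(z)};
    if (slowing) c.r = std::min(255, c.r + 50);
    return c;
}
```

The still state (blinking red) would simply bypass this mapping and write a fixed red to the strip.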

screen-shot-2018-10-27-at-10-02-25-am

Figure 10.1 (left): Team talking with Reza about how to attach our components in the hamster balls
Figure 10.2 (upper middle left): Our first attempt of securing the middle insert in hamster ball
Figure 10.3 (middle right): Georgina and Olivia testing the NeoPixels
Figure 10.4 (right): Hamster ball with the components placed inside
Figure 10.5 (lower middle left): Two different sizes of hamster balls at the pet store

Vibration Motor

Adding the vibration motor to the main code was not too difficult. Unfortunately, the wires were very flimsy, and there was concern that they would easily break when users were playing with the device. We tried soldering the motors to make the connection sturdier, but it was not an easy task. The interaction had also evolved from picking up the device to rolling and kicking it on the floor. With that in mind, the vibration motors were left out of the final prototype and noted as a "next step" for the project.

Final Prep of devices

The Fritzing diagram was finalized before committing the circuits to the protoboards. Evaluating the remaining tasks, the best use of time was to divide and conquer: two team members began soldering the final devices using the Fritzing diagram while the other met with Reza to shape some acrylic inserts to secure the device to the centre of each hamster ball. This made the final assembly quite efficient. Upon testing the final circuits, only two of the three prototypes were working; after review, there was no clear reason why the third device failed.

screen-shot-2018-10-27-at-10-58-38-am

Figure 11.1 (left): Photo of finished circuit alone
Figure 11.2 (right): Photo of three finished circuits being mounted on acrylic inserts

Final circuit diagram

screen-shot-2018-10-27-at-11-14-17-am

Figure 12.1: Final Fritzing diagram of Attentive Motions prototype

Reflections

Upon reflection, the team is very happy with the way the final prototype turned out. There are many ways in which this idea could be scaled up as a final product. Each prototype plays the same tones, future versions have the potential to incorporate multiple tones to create depth in sound or even music. The vibration motor could also be added back in if the interaction moves back to holding the sphere. The vibration motor could also be used to emphasize the “critter” like qualities of the device. For instance, if the device is still for a certain amount of time, it could try and “shake” itself through vibration in an attempt to get attention and start moving again.

The work during the development stages was always very organized and strategic. Once the vision was established, the team experimented together until satisfied with the final product. Each member was supportive of the others and very good at leveraging strengths and learning in order to reach the final goal.

During the final critique, there was great feedback on the prototype. Some classmates did not see a clear distinction between the ‘irritated’ state and the ‘happy’ state but they did enjoy kicking it back and forth to each other. Next steps would be more user testing to see if the interaction is preferred without the ‘irritated’ state that the team originally set out to achieve.

References

M. (2016, March 26). 9 amazing projects where Arduino & Art meet! Retrieved October 29, 2018, from http://arduinoarts.com/2014/05/9-amazing-projects-where-arduino-art-meet/

Tone Pitch Follower. (n.d.). Retrieved October 29, 2018, from https://www.arduino.cc/en/Tutorial/TonePitchFollower?from=Tutorial.Tone2

Hughes, M. (2017, March 22). Capturing IMU Data with a BNO055 Absolute Orientation Sensor. Retrieved October 29, 2018, from https://www.allaboutcircuits.com/projects/bosch-absolute-orientation-sensor-bno055/

Adafruit NeoPixel Überguide. (n.d.). Retrieved October 29, 2018, from https://learn.adafruit.com/adafruit-neopixel-uberguide/arduino-library-installation

Lloyd, P. (2015, October 13). Make an LED Light Strip AHRS with Arduino and MPU-6050. Retrieved October 29, 2018, from https://www.allaboutcircuits.com/projects/make-an-led-light-strip-ahrs-with-arduino-and-mpu-6050/
