Absolutely on Music

by Lilian Leung

Project Description

Absolutely on Music is an interactive prototype of a potential living space. The space is composed of a sensor-activated light that turns on when a participant sits on the chair, and a copy of Haruki Murakami and Seiji Ozawa’s book Absolutely on Music, which plays the music the author and conductor discuss in each chapter of the book.

This experiment expands upon my personal exploration of tangible interfaces, as well as further research into slow tech and the use of Zero UI (invisible user interfaces). The exploration is meant to re-evaluate our relationship with technology as something that can augment everyday inanimate objects, rather than create alternative screen-based experiences centred around a hand-held device. The audio played beneath the chair corresponds to each chapter of the book, which is divided into six individual conversations, each centred around a different topic and part of Ozawa’s career. Five tracks are used, as the sixth chapter does not discuss a musical piece in detail. The auditory feedback playing the music featured in the book creates a multi-sensory experience and broadens the book’s audience: rather than being only for music experts who need no musical reference, it becomes accessible to anyone looking to enjoy a light read.

Project Process

November 21 – 22 (Proposal Day)

Early research pointed to using an Arduino UNO instead of an Arduino Nano so I’d be able to use an MP3 shield to play my audio rather than depending on Processing. For early exploration, I looked into using a combination of flex sensors and pressure sensors on the binding of the book and on the front and back covers to detect when the book was being picked up. This layout was inspired by Yining Shi’s Book Remote (2015), which mapped the Jurassic Park novel to the movie.

After my proposal meeting, I switched from flex sensors to copper tape as switches to get more reliable readings. From there I decided on the modes of the experiment and how the book and chair should behave when not being used.

Modes

Idle Mode: Lamp – Dim Light; Book – Orchestra Tuning
Active Mode: Lamp – Full Brightness; Book – Play Chapter Audio
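The idle/active behaviour above can be sketched minimally as follows. This is an assumption-heavy sketch, not the exact build: it assumes the seat pressure sensor is read as an analog voltage divider on A0 and the table light is a PWM-dimmable strip on D10, with made-up threshold and brightness values.

```cpp
// Hypothetical sketch of the idle/active lamp behaviour described above.
// Assumes the seat pressure sensor is read on A0 as a voltage divider and
// the table light is driven from PWM pin D10; threshold values are illustrative.
const int SEAT_SENSOR_PIN = A0;
const int LAMP_PIN = 10;

const int SEAT_THRESHOLD = 300;  // analog reading that counts as "someone is seated"
const int DIM_BRIGHTNESS = 40;   // idle mode
const int FULL_BRIGHTNESS = 255; // active mode

void setup() {
  pinMode(LAMP_PIN, OUTPUT);
}

void loop() {
  int pressure = analogRead(SEAT_SENSOR_PIN);
  if (pressure > SEAT_THRESHOLD) {
    analogWrite(LAMP_PIN, FULL_BRIGHTNESS); // participant seated: full brightness
  } else {
    analogWrite(LAMP_PIN, DIM_BRIGHTNESS);  // idle: dim light
  }
}
```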

 

Having purchased the MP3 shield, I started formatting the MicroSD card with the tracks for each chapter, naming them ‘track00X’ so they would be read correctly by the Arduino and shield. From the shield diagrams, I would only be able to use analog pins 0–5 and digital pins 0–1, 3–5, and D10. From this, I laid out the switches for the chapters on D0–1 and D3–5, and kept D10 for the lamp and seat sensor input and output.

artboard-1

November 23
To create a more natural reading space, I went to purchase a chair and cushion from IKEA. I tried to pick a more laid-back chair so that participants would be interested in sitting down, rather than repurposing a studio chair. The supporting beam in the back of the chair allowed a space to safely and discreetly place all my wiring where it wouldn’t be seen.

For the lamp design, I had initially intended to create a free-standing lamp, but after some thought I decided to incorporate it into the side table so there would be less clutter in the space. I wanted the side table to be minimal and able to discreetly hide the light and all the wiring involved.

sidetableinspo

November 25

To conserve time and storage space, all the audio clips for each chapter were trimmed to a maximum of seven minutes rather than the full one-to-two-hour performances. I tested the MP3 shield using external speakers instead of just headphones to check the sound quality.

November 26-27

Early iterations of the code for the Arduino and MP3 shield weren’t working, as tracks refused to play with the if/else statements. The revisions I made with the help of Nick Puckett were adding a statement to always play the default tuning audio (track 7) and simply changing the track number on each switch, rather than playing and stopping each track as it played.
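Below is a minimal sketch of that revised logic. It assumes the mpflaga vs1053_SdFat library listed in the bibliography (its playTrack(), stopTrack() and setVolume() calls) and the pin allocation described earlier; the pin modes, volume and exact behaviour are illustrative rather than the code actually used.

```cpp
// Hypothetical sketch of the track-switching logic described above.
// Assumes the mpflaga vs1053_SdFat library and 'track00X.mp3' file naming;
// chapter switches on D0-D1 and D3-D5 per the pin layout in the post.
#include <SPI.h>
#include <SdFat.h>
#include <vs1053_SdFat.h>

SdFat sd;
vs1053 MP3player;

const uint8_t chapterPins[5] = {0, 1, 3, 4, 5}; // copper-tape switches
const uint8_t TUNING_TRACK = 7;                 // idle orchestra tuning (track007.mp3)
uint8_t currentTrack = 0;                       // nothing playing yet

void setup() {
  for (uint8_t i = 0; i < 5; i++) pinMode(chapterPins[i], INPUT); // pull resistors per circuit
  sd.begin();
  MP3player.begin();
  MP3player.setVolume(10, 10); // illustrative volume
}

void loop() {
  // Default to the tuning audio; a closed chapter switch overrides it.
  uint8_t requestedTrack = TUNING_TRACK;
  for (uint8_t i = 0; i < 5; i++) {
    if (digitalRead(chapterPins[i]) == HIGH) {
      requestedTrack = i + 1; // track001.mp3 ... track005.mp3
      break;
    }
  }
  // Only change tracks when the reader moves to a different chapter,
  // rather than stopping and restarting a track on every pass of the loop.
  if (requestedTrack != currentTrack) {
    MP3player.stopTrack();
    MP3player.playTrack(requestedTrack);
    currentTrack = requestedTrack;
  }
}
```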

In the early production stage of the side table, I cut a set of 7.5” by 10” sheets of ¼-inch plywood, one with a 4.5”-diameter circle in the centre and one with a 5”-diameter circle, to house the LED strip for the light. A hole was drilled in the bottom so the wiring could be hidden away. To diffuse the light from the LEDs, a frosted acrylic sheet was cut to securely hold the lighting in place. I chose to have the LED light on the underside of the table so the light would be more discreet and readers wouldn’t have a bright light shining directly up at their faces while reading.

artboard-1-copy-5

artboard-1-copy-4

Once the wiring was complete, I soldered everything onto a protoboard to hold it securely. I used screw terminals for the book switches, the chair pressure sensor and the side table light so I could easily transport the work between locations and troubleshoot wiring problems. From there I mounted a small board to the back supporting beam of the chair so the protoboard and Arduino could safely sit inside.

November 28 – 29

To finish the side table, I glued together four sheets of ½-inch plywood to make the legs stronger. For the wiring, I routed a channel into the inside face of one of the ½-inch sheets so the wiring could be hidden entirely inside the table leg and exit discreetly at its base.

artboard-1-copy

With the side table complete, I brought all the items together to see how the space would look as a whole.

artboard-1-copy-9

December 1

To solve the last few problems I was having with the Arduino and getting the chapter switches working, I updated the code from a single if/else statement to an if statement followed by an “else if” for each of the remaining chapters. Another issue was difficulty uploading my code, as I’d frequently get the following error in the serial monitor:

AVRDUDE: STK500_GETSYNC<> ATTEMPT 10 OF 10 NOT IN SYNC

From an online forum, a user mentioned this upload error may be caused by something being wired into pin 0 (RX); unplugging that pin during upload solved the issue. Another problem was the consistency of the switches turning on and off: participants may hold the book at different angles and might not apply enough pressure for the switches to read properly as HIGH or LOW. Originally, each chapter’s condition specified the state of every switch, with only that chapter’s switch set HIGH. Because of the inconsistent pressure, I removed the states that proved unreliable.

Ch. 1 Switch Ch. 2 Switch Ch. 3 Switch Ch. 4 Switch Ch. 5 Switch
HIGH LOW
LOW HIGH LOW LOW
LOW HIGH HIGH LOW
LOW LOW HIGH

From there, the final test was adding the speakers back in with the finished chair and table to make sure the speakers would fit comfortably underneath the chair.

 

Project Context

Absolutely on Music explores the use of audio and tactile sensors to create a more immersive experience around inanimate objects in the home, rather than creating an augmented screen-based experience. The experiment is based on the philosophy of slow tech, countering our drive to develop tools that work ever more efficiently and allow us to do more, faster (Slow Living LDN, 2018). The set-up is meant to re-evaluate our experience of technology and the potential of creating a multi-sensory and accessible home.

This work is an example of Zero UI, which involves interacting with a device or application without a touchscreen or graphical interface. Zero UI technology lets individuals communicate with devices through natural means such as voice, movement and glances (Inkbotdesign, 2019). Most Zero UI devices are related to the Internet of Things and are connected to a larger network. For this experiment, I wanted to create a multi-sensory experience that requires no networked communication or quantified data gathering, allowing participants a more immersed and mindful experience with an inanimate object.

I chose Absolutely on Music by Murakami and Ozawa deliberately because it references music that readers may be unfamiliar with, and searching for that music can interrupt the reading experience rather than keeping it seamless. Playing the music in place makes the content accessible to a broader spectrum of readers.

A book was chosen as the object because of the ongoing discussion of reading from a digital screen versus a physical copy on paper. A physical book is a “dumb” object, allowing a slower, more leisurely experience free of distractions compared to reading on a digital device.

The book used is Absolutely on Music, a series of six conversations between the Japanese author Haruki Murakami and the Japanese conductor Seiji Ozawa. Classical music, like fine art, can feel difficult to access, and responses to it are deeply personal. Interest declines when individuals distrust their own reactions, perceiving classical music as belonging to more sophisticated listeners, as discussed in a New York Times op-ed (Hoffman, 2018). By playing the actual audio through speakers below the chair rather than requiring headphones, anyone can follow along with the book without prior musical knowledge of the works described.

Within the book, the author and conductor discuss both of their careers, from key performances in Ozawa’s career to Murakami’s passion for music; musical pieces are deeply integral to Murakami’s novels, from The Wind-Up Bird Chronicle opening with Rossini’s The Thieving Magpie to a Haydn concerto within the pages of Kafka on the Shore (2002).

To elevate the sensory experience of the book, a set of switches was placed within the first five chapters (conversations). The audio described in each chapter plays via the switch situated within that chapter, providing context to the works Murakami and Ozawa are discussing.

Table of Contents of Absolutely on Music

1st Conversation – Mostly on the Beethoven 3rd piano concerto
Interlude 1 – On Manic Record Collectors

2nd Conversation – Brahms at Carnegie Hall
Interlude 2 – The relationship of writing music

3rd Conversation – What happened in the 1960s
Interlude 3 – Eugene Ormandy’s Baton

4th Conversation – On the music of Gustav Mahler
Interlude 4 – From the Chicago Blues to Shin’ichi Mori

5th Conversation – The Joys of Opera
6th Conversation – “There’s no single way to teach, you make it up as you go along”

Based on the contents of the book, I pulled the main musical piece the two spoke about in each chapter into a tracklist for the interactive book.

Timing (Chapter) – Tracklist
Idle Mode – Orchestral tuning audio
Chapter 1 – Glenn Gould’s 1962 Brahms Piano Concerto in C Minor
Chapter 2 – Seiji Ozawa: Beethoven’s 9th Symphony
Chapter 3 – Seiji Ozawa: The Rite of Spring (Igor Stravinsky)
Chapter 4 – Seiji Ozawa: The Titan / Resurrection (Gustav Mahler)
Chapter 5 – Dieskau, Scotto, Bergonzi: Rigoletto
Chapter 6 – (No audio; no single musical piece is the focus)

chairmock

2019-12-04-04-23-41-3

artboard-1-copy-15

artboard-1-copy-14

Project Video

Github Code

You can view the GitHub repository here.

Circuit Diagram

exp5-diagram

*Within the actual wiring, the button switches are two pieces of copper foil placed on opposite pages, acting as the switches.

*The schematic shows a SparkFun VS1053, though I used a Geeetech VS1053. The available pins are laid out slightly differently: where the SparkFun version in the diagram shows D3 and D4 in use, those pins are free on the Geeetech MP3 shield.

Exhibition Reflection

For the Digital Futures Open Show, my piece was exhibited at the entrance of the Experience Media Lab. I set up the space with some plants and an additional light as props to make the area more comfortable. The space was quieter than the Graduate Gallery, which suited the piece and allowed participants to sit down and experience it one at a time without too much background noise. Rather than leaving the light seat-activated, I kept the table light on so participants could see the reading space clearly.

artboard-1-copy-12

Reflecting on the participants’ experience, I noticed people were initially hesitant to sit down on the chair, unsure whether they were supposed to touch it, perhaps because the space I created didn’t look like an art piece. I felt the piece was successful in that it read as a natural reading space, and I don’t mind the confusion, as the chair and book were designed in the context of a home rather than as an exhibition piece.

There was some static from the speaker. I also noticed that participants may have expected a much faster response from the book when the music changed; since many orchestral pieces have a naturally slow build-up, some participants flipped through the pages to hear the music change faster, or couldn’t hear the musical piece at all.

While the book audio was designed for a single reader who listens while reading rather than flipping through the pages, in hindsight I’d probably revise the audio to begin in the middle of each musical piece for exhibition display, so that participants could grasp the concept faster.

Some helpful feedback on how to improve the piece and learn more about invisible interfaces was to read Enchanted Objects: Design, Human Desire, and the Internet of Things by David Rose. Other feedback suggested exploring a Maxuino, which has more audio capability and support for Ableton Live, in case I wanted more control over my audio files and audio quality than the MP3 shield allows.

 

Bibliography

Arduino Library vs1053 for SdFat. (n.d.). Retrieved November 29, 2019, from https://mpflaga.github.io/Arduino_Library-vs1053_for_SdFat/.

Hoffman, M. (2018, April 19). A Note to the Classically Insecure. The New York Times. Retrieved from https://www.nytimes.com/2018/04/18/opinion/classical-music-insecurity.html.

Inkbotdesign. (2019, August 13). Zero UI: The End Of Screen-based Interfaces And What It Means For Businesses. Retrieved from https://inkbotdesign.com/zero-ui/.

Kwon, R. O. (2016, November 24). Absolutely on Music by Haruki Murakami review – in conversation with Seiji Ozawa. Retrieved from https://www.theguardian.com/books/2016/nov/24/absolutely-on-music-haruki-murakami-review-seiji-ozawa. 

Slow Living LDN. (2019, May 25). Embracing Digital Detox and Slow Tech. Retrieved from https://www.slowlivingldn.com/lifestyle/slow-tech-digital-detox/.

Murakami, H., & Ozawa, S. (2017). Absolutely on Music: Conversations with Seiji Ozawa. London: Vintage.

Shi, Y. (2015, February 7). Book Remote. Retrieved from https://www.youtube.com/watch?v=M1WrbADjfmM&feature=emb_title. 

Tench, B. (2019, February 11). Some Reflections on Slow Technology. Retrieved November 29, 2019, from https://www.becktench.com/blog/2019/2/11/some-reflections-on-slow-technology.

Experiment 5 Proposal

Zero UI (Working Title)

For Experiment 5, I’d like to expand on tangible interfaces and explore the use of Zero UI (invisible user interfaces), with technology fully incorporated into a room (or household) through pressure-sensitive furniture and sensors with auditory feedback, elevating ordinary objects (a book) to create a more immersive experience instead of depending on screen-based experiences. This experiment is an exploration in creating a multi-sensory reading experience with audio catered to the book’s contents.

The experiment would involve a pressure-sensor chair that lights up a nearby lamp when the participant sits down. The pressure sensor may be installed directly on the chair or hidden within a cushion or lining. The participant can pick up the book, read or flip through it, and hear the music referred to in the book playing from a speaker hidden away (possibly below or behind the chair). The audio would be mapped depending on what section of the book the participant is on.

screenshot-2019-11-19-at-8-26-20-pm

The book I’d like to use is still undecided, but it would be one with many musical references, such as Haruki Murakami’s The Wind-Up Bird Chronicle, which begins with the protagonist listening to Rossini’s The Thieving Magpie and refers to many other classical works. Another possibility could be J.R.R. Tolkien’s The Hobbit, with Howard Shore’s music from the film franchise playing instead.

Project Main Components and Parts

  1. Arduino Nano
  2. Flex Sensor
  3. Pressure Sensor
  4. MP3 Shield (?)
  5. External Speakers
  6. Lightbulb and Wiring (Lamp)

Additional Components and Parts

  1. Chair (Supporting Prop)
  2. Fabric/Cushion (To Hide/Place Sensor)
  3. Book (Prop)
  4. Mat Rug (Prop To Hide Cables)

Workback Schedule

Fri, Nov 22 –  Proposal Presentation
Sat, Nov 23 –  Coding + Gathering Digital Assets + Building Lo-Fi Breadboard Prototype
Sun, Nov 24 – Coding + Gathering Digital Assets + Building Lo-Fi Breadboard Prototype
Mon, Nov 25 –  Coding + Creatron for Final Components
Tues, Nov 26 –  Presenting Progress of Lo-Fi Breadboard Prototype + Revisions
Wed, Nov 27 – Prop Purchasing
Thurs, Nov 28 – Laser Cutting Components and Coding
Friday, Nov 29 – Troubleshooting / Bug Fixes
Sat, Nov 30 – Troubleshooting / Bug Fixes
Sun, Dec 1 –  Troubleshooting / Bug Fixes
Mon, Dec 2 – Troubleshooting / Bug Fixes
Tues, Dec 3 – Final Critique
Wed, Dec 4 – Open Show

Physical Installation

I’d ideally like to place the set-up in the corner of a room with dimmer lighting, so the light from the lamp is more visible when it turns on. Supporting objects within the set-up would be the chair, with the sensor attached, that participants can sit on.

screenshot-2019-11-19-at-8-26-37-pm

screenshot-2019-11-19-at-8-26-45-pm

Resource List

  1. Chair and Side table
  2. Will need extension cords for power
  3. External speakers

Info for Open Show

Preferably displayed in the Grad Gallery room. I will just need an electrical outlet nearby or extension cord. We will need to book external speakers from AV.

Experiment 3: Blue Moon

Project Title: Blue Moon
Names of group of members: Lilian Leung

Project Description

Blue Moon is a reactive room light that detects signs of anxiety through hand gestures and encourages participants to take time away from their screens and practice mindfulness. The gold mirror emits a blue glow when a participant is detected clenching their fist; to achieve a warm light, the participant’s right hand needs to be unclenched. The second switch is activated by pressing the right hand over the left, which causes the music to start playing. The joined-hands switch keeps the participant focused and relaxed and stops them from attempting to use a mobile device or computer. The project environment is based in a bedroom, for times before rest or when trying to relax or meditate. The screen projection is intended for the ceiling, as the viewer should be lying down with their hands together.


Project Process

October 23 – 24
Beginning my initial prototype for the experiment, I mapped two potentiometers to two LEDs. By creating a small-scale prototype, I could gradually upgrade each section of the project to larger outputs, such as swapping the small LEDs for an LED strip and replacing the potentiometers with flex sensors.
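A minimal sketch of that first prototype is below; the pin assignments (potentiometers on A0/A1, LEDs on PWM pins 5 and 6) are illustrative, not taken from the original circuit.

```cpp
// Hypothetical first prototype: two potentiometers each mapped to an LED's brightness.
// Pin choices are illustrative.
const int POT_PINS[2] = {A0, A1};
const int LED_PINS[2] = {5, 6}; // PWM-capable pins

void setup() {
  for (int i = 0; i < 2; i++) pinMode(LED_PINS[i], OUTPUT);
}

void loop() {
  for (int i = 0; i < 2; i++) {
    int reading = analogRead(POT_PINS[i]);          // 0-1023
    int brightness = map(reading, 0, 1023, 0, 255); // scale to PWM range
    analogWrite(LED_PINS[i], brightness);
  }
}
```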

img_20191024_131639

October 25
Using Shiffman’s example of creating rain ripples, I was having difficulty controlling multiple graphic elements on the screen, as the ripples affected all the visible pixels. Exploring OpenProcessing, I found a simpler sketch by N.Kato (n.d.) that I could build from, which generates ellipses based on the frame rate. Rather than having the animation begin abruptly, I added an intro screen, triggered by a mouse click, that moves on to the main reactive animation.

Using my breadboard prototype with the potentiometers and LEDs, I updated the switches from potentiometers to pressure sensors made from aluminum foil and velostat. After adjusting the mapping of the values to reduce the sensor noise, I was able to map the pressure sensors for two hands:

Switch 1 (Left Hand) – Increases the rate of rain
Switch 2 (Right Hand) – Controls the audio volume

* I used the Mouse.h library during initial exploration to trigger the intro screen from a physical switch, but I started having trouble controlling my laptop, so I temporarily disabled it.

October 26
To upgrade my breadboard prototype’s LED, I purchased a neon blue LED strip. I initially followed Make’s Weekend Projects – Android-Arduino LED Strip Lights (2013) guide to review how I could connect my LED strip (requiring 9V) to my 3.3V Arduino Nano. One problem I didn’t expect was that the video used an RGB LED strip, which has a different circuit, while mine was a single colour.

RGB LED strip – 12V, R, G, B
Single-colour LED strip – 12V input, 12V output

I went to Creatron and picked up the neon LED strip, a TIP31 transistor, and an external 9-volt power source and power jack. Rewiring the breadboard prototype proved difficult, as I had to use a separate rail of the breadboard solely for 9V and learn how to hook up the TIP31 with my digital pins to send values to the LED strip.

One realization from the online references, which used different types of transistors, was that the three pins (base, collector, emitter) are laid out in a different order depending on the type used.

Pin 1 – Base (signal, via resistor)
Pin 2 – Collector (negative end of the LED strip)
Pin 3 – Emitter (ground)

The diagram reference that ended up being the most useful was this:

8c37ae69e775698cb60b99db1dcc86ea

Figure 1. Sound Activated Light Circuit, n.d.

After properly hooking up the LED strip to my Arduino and mapping the potentiometer to the strip’s brightness, I began rewiring the potentiometer input to use a flex sensor. To test the sensor, I first used two copper coins and velostat.
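A minimal sketch of this stage follows, assuming the velostat sensor is read as a voltage divider on A0 and the strip is driven through the TIP31 from PWM pin 9; the pin numbers and the “clenched fist” threshold are illustrative rather than the original values.

```cpp
// Hypothetical sketch: a velostat pressure sensor (voltage divider on A0) drives
// the 9V LED strip through a TIP31 transistor whose base sits behind PWM pin 9.
const int SENSOR_PIN = A0;
const int STRIP_PIN = 9;          // PWM pin -> resistor -> TIP31 base
const int CLENCH_THRESHOLD = 600; // reading above this counts as a clenched fist

void setup() {
  pinMode(STRIP_PIN, OUTPUT);
  pinMode(LED_BUILTIN, OUTPUT);
}

void loop() {
  int pressure = analogRead(SENSOR_PIN);

  // Map the sensor reading to strip brightness through the transistor.
  int brightness = map(pressure, 0, 1023, 0, 255);
  analogWrite(STRIP_PIN, brightness);

  // A simple clench check, as used later to trigger the blue light.
  bool clenched = pressure > CLENCH_THRESHOLD;
  digitalWrite(LED_BUILTIN, clenched ? HIGH : LOW); // debug indicator
}
```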

 

lighttest

sensortypes

From there I made two types of pressure sensors. The first, using aluminum foil and velostat as a flex sensor, could be hidden inside a glove to detect pressure along my index finger, from the first to the second knuckle, when I clenched my fist. Another consideration when attaching the flex sensor inside the glove was making sure it stayed attached when I clenched my fist, as the sensor would pull forward, following my finger joints, and easily detach the sensor cables.

pressuresensortest

October 27-29
Expanding the physical Arduino component of the project, I still needed to laser cut acrylic to make the circular ring my LED strips would wrap around. I’d never laser cut before, but with some help from Neo I was able to lay out my two files: the first, a 17” acrylic circle for the front; the second, five 11.5” rings of 0.25” board that would house my Arduino and cables while giving the LED strips a form to wrap around and creating enough space between the wall and the light.

Additional updates were made to the prototype, such as adding the second, warm neon LED strip to use as the main light, and replacing all the sensor wiring with stranded wire instead of solid-core wire so the cables were more flexible and comfortable when participants put on the glove sensors.

The wooden ring was then sanded, with holes drilled into the side for the power cables to connect to, as well as for the LED strips’ stranded wire to pass from the exterior of the ring into the interior where the Arduino sits. Once all the wiring was finished, I moved the components from the breadboard onto a protoboard to have everything securely in place. I also needed terminal blocks so I could repurpose my LED strips instead of soldering their wires directly into the protoboard.

protoboardtest

glovetest

Once the physical pieces were cut, I went back to redesigning the intro page to create more visual texture rather than just solid-coloured graphics. Within Processing, I added a liquid ripple effect from an OpenProcessing sketch by oggy (n.d.) and adjusted the code to map the ripples to the hand sensors instead of mouse coordinates.

The last issue to solve was how to make the intro screen disappear when my hands were together in a relaxed position. Because multiple audio files were being played, I had issues with audio looping when calling each page as a function and using booleans. In the end, I laid out the sketch using sensor triggers and if statements. Possibly due to the number of assets loaded within Processing, I struggled to get the animation to load consistently, as assets would go missing without any alterations to the code. In the end, I removed the rippling image effect.


Project Context

With the days getting darker and the nights getting longer, seasonal affective disorder is around the corner. My current room doesn’t have a window or an installed light, making the space usually dark with little natural light coming in. This, combined with struggling with insomnia and an inability to relax at night, led to this project.

roomexample

One finding for treating seasonal affective disorder is the use of light therapy. This treatment usually involves artificial sunlight from a UV bulb or finding time to get natural sunlight. Terman (2018) suggests that treatment of 20–60 minutes a day can lead to improved mood over the course of four weeks.

With a bad habit of multitasking, I find it difficult to concentrate and simply make time to rest without the urge to be productive. This is compounded by the fact that using social media can lead to feelings of depression and loneliness (Mammoser, 2018) through anxiety, fear of missing out and constant ‘upward social comparisons,’ depending on how frequently one checks their accounts.

To force myself to make time to relax away from my digital devices, the experiment’s main functions were, first, to detect signs of anxiety and, second, to force myself to stay still, present within my space and immersed in a brief moment of meditation. Navarro (2010), writing for Psychology Today, describes nonverbal signs of stress such as rubbing the hands together or clenching as a form of “self-massaging” or pacifying the self. To force myself to stay still and present, I decided my second trigger would be having both hands together in a relaxed position.

handgestures

Aesthetically, I chose a circular form, symbolic of both a sun and a moon, as it would be the main light source in my room. With the alternating lights, the circle appears both like a sun and like an eclipse when the neon blue light is on. The visual style was inspired by Olafur Eliasson’s The Weather Project (2003) and the artist’s elemental style, using mirrors and light to create the sun and sky within the Tate Modern’s Turbine Hall. The installation explored how weather shapes a city, and how the city itself becomes a filter for how locals experience fundamental encounters with nature.

olafur_eliasson_weather_project_02

The neon light is emitted when my right hand is clenched, limiting the light in the room and prompting me to unclench my fist. The Processing visuals are supported by white noise of wind and rain as a buffer against my surroundings, as white noise is beneficial for cutting out background sounds by providing stimulation to the brain without being overly exciting (Green, 2019).

handgestures

The audio from the Processing sketch is also mapped to my right hand, playing quieter when my fist is clenched and louder when my hand is open, prompting deeper immersion into the meditation. When I bring my hands together, right over left in a relaxed position, the sketch dissolves the intro screen to fill the ceiling with imagery of falling rain and louder audio. Within a few moments a selected song plays; for my sketch I used one of my favourite songs, Wide Open by The Chemical Brothers (2016).

handgestures2

During my exploration phase, I tried triggering Spotify or YouTube to play, but because that would jump outside the Processing program and bring me back to viewing social media and the internet, I opted to have the audio play within the sketch.

Additional Functionality for Future Iterations

  1. Connecting the Processing Sketch to a possible Spotify API that could be controlled with physical sensors. 
  2. Connecting to a weather API and having the images and audio switch depending on the weather.
  3. Adding additional hand gesture sensors to act as a tangible audio player.

screenshot-2019-11-01-at-11-14-57-pm

img_20191101_182827_ceiling

img_20191102_011141-copy

dsc_0429-light

dsc_0429-2_light

Project Code Viewable on GitHub
https://github.com/lilian-leung/experiment3

Project Video


Citation and References

Sound Activated Light Circuit. (n.d.). Retrieved from https://1.bp.blogspot.com/-2s2iNAOFnxo/U3ohyK-AjJI/AAAAAAAAADg/gAmeJBi-bT8/s1600/8C37AE69E775698CB60B99DB1DCC86EA.jpg

Ali, Z., & Zahid. (2019, July 13). Introduction to TIP31. Retrieved from https://www.theengineeringprojects.com/2019/07/introduction-to-tip31.html.

Green, E. (2019, April 3). What Is White Noise And Can It Help You Sleep? Retrieved October 31, 2019, from https://www.nosleeplessnights.com/what-is-white-noise-whats-all-the-fuss-about/.

Make: (2013, August 23) Weekend Projects – Android-Arduino LED Strip Lights. Retrieved from https://www.youtube.com/watch?v=Hn9KfJQWqgI

Mammoser, G. (2018, December 14). Social Media Increases Depression and Loneliness. Retrieved October 31, 2019, from https://www.healthline.com/health-news/social-media-use-increases-depression-and-loneliness.

Navarro, J. (2010, January 20). Body Language of the Hands. Retrieved October 31, 2019, from https://www.psychologytoday.com/us/blog/spycatcher/201001/body-language-the-hands.

N.Kato (n.d.) rain. Retrieved from https://www.openprocessing.org/sketch/776644

oggy (n.d.) Liquid Image. Retrieved from https://www.openprocessing.org/sketch/186820

Processing Foundation. (n.d.). SoundFile::play() \ Language (API) \ Processing 3 . Retrieved October 31, 2019, from https://processing.org/reference/libraries/sound/SoundFile_play_.html.

Studio Olafur Eliasson (Photographer). (2003). The Weather Project. [Installation]. Retrieved from https://www.tate.org.uk/sites/default/files/styles/width-720/public/images/olafur_eliasson_weather_project_02.jpg

Tate. (n.d.). About the installation: understanding the project. Retrieved November 2, 2019, from https://www.tate.org.uk/whats-on/tate-modern/exhibition/unilever-series/unilever-series-olafur-eliasson-weather-project-0-0.

Tate. (n.d.). Olafur Eliasson the Weather Project: about the installation. Retrieved November 2, 2019, from https://www.tate.org.uk/whats-on/tate-modern/exhibition/unilever-series/unilever-series-olafur-eliasson-weather-project-0

Terman, M. (2018, December). Seasonal affective disorder. Retrieved October 31, 2019, from https://www-accessscience-com.ocadu.idm.oclc.org/content/900001.

The Coding Train. (2018, May 7). Coding Challenge #102: 2D Water Ripple. Retrieved from https://www.youtube.com/watch?v=BZUdGqeOD0w&t=630s

Experiment 2 – Pet Me (If You Can)

Project Title
Pet Me (If You Can)

Team Members
Jignesh Gharat, Neo Chen, and Lilian Leung

Project Description
Our project explores creating a creature character that can surprise viewers with interactivity, using two distance sensors. The experiment is an example of the living effect, giving a machine a life of its own and using different modes of operation to give the creature distinct emotions.

The creature was created with two Arduinos, three servos, a row of LEDs and two distance sensors. It sits on a pedestal and moves of its own accord, surprising viewers who come near by closing its mouth while its eyes become erratic.

Project Video

You can access the code for the experiment here
https://github.com/lilian-leung/experiment2

Project Context

Our goal was to create a creature using servos and sensors. We explored the ongoing question “Why do we want our machines to appear alive?” raised by Simon Penny, a new media artist and theorist. Caroline Seck Langill, in The Living Effect: Autonomous Behaviour in Early Electronic Media Art (2013), argues that we create lifelike characteristics to elicit a response from the audience suggestive of a fellow life-form, achieving a living effect: we do not attempt to re-create life but rather to “make things have a life of their own.”

Our original intention was to create a Halloween-themed creature, or a security-like box that would guard a valuable item such as jewellery, or serve an everyday purpose such as guarding cookies in a cookie-jar-like form.

Langill (2013) proposes three characteristics of the living effect: first, an adherence to behaviour rather than resemblance; second, the effect of a whole body in space with abilities and attributes; and third, the potential for flaws, accidents and technical instabilities, as imperfections allow one to acknowledge the living effect within a synthetic organism.

We began prototyping with the oscillation of two servos to create the eyes, placing post-it notes over the servos as pupils to tune their movement to a natural speed, using easing from Nick’s animationTools Arduino library.

In Robotics facial expression of anger in collaborative human–robot interaction (2019), Reyes, Meza and Pineda describe how expressive robotic systems improve feedback and engagement with viewers; emotions such as anger created the most effective response in participants. Using the minimal facial expressions possible with the components available, we tried to replicate a human-like expression as an indicator of the modes of operation the creature could react with.

From there we incorporated the main body (the box) of the creature and began exploring ways to have the box open. Our initial thought was to have some sort of lever outside and above the box that would pull the lid open with thread or fishing wire.

 exp2_wip-img1

We also explored having the servo on the side of the box, but were concerned the motor wouldn’t be able to push the lid open across its full width from one side. In the end, we placed the servo inside the box, at the back centre, where it could push the lid open with the assistance of a curved arm reaching the lid. We then tested to find the right angle and range for the servo, so it wouldn’t push itself out of its spot inside the box or open the box too wide.

Servo Testing Gif

Before laser cutting our final shapes, we tested each component separately on the breadboard to make sure the circuit functioned before soldering. From there we built out the new facial features, using the opening box and laser cutting a tongue-like shape that we lit up with red LEDs. We laser cut the pupil and iris to attach to the servos, as well as a small enclosure to hide the actuators. All the cables are looped inside the box and tucked in the back to keep them tidy when the creature opens its mouth.

exp2_wip-img5

exp2_wip-img3

The creature has three modes of operation (sketched in code after this list):

  1. The eyes oscillate from 0 to 180 degrees slowly within the two meter “safety zone” away from viewers at a speed of 0.1. Within this safety range, the servo controlling the mouth also props the mouth open as it deems the area “safe”. During this time, the LEDs within the tongue piece are lit up.
  2. Within the middle zone, the creature becomes “conscious” of viewers and the speed increases to 0.2. The increased speed of the eyes signify hesitation or caution with the creature.
  3. When viewers come in to the “danger zone” within approximately one meter from the object, the speed of the eyes increases to 0.8 and the mouth shuts closed.
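A minimal sketch of this mode logic is below, assuming an HC-SR04-style ultrasonic sensor read with pulseIn() and a generic easing step instead of the animationTools library actually used; pins, servo angles and the exact zone thresholds are illustrative.

```cpp
// Hypothetical sketch of the three modes of operation described above.
// Assumes an HC-SR04-style ultrasonic sensor; the original project used
// its own sensors and Nick Puckett's animationTools easing library.
#include <Servo.h>

const int TRIG_PIN = 7, ECHO_PIN = 8;
const int MOUTH_PIN = 9, EYE_PIN = 10, TONGUE_LED_PIN = 6;

Servo mouthServo, eyeServo;
float eyeAngle = 0, eyeDirection = 1;

long readDistanceCm() {
  // Standard HC-SR04 trigger/echo pulse measurement.
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  return pulseIn(ECHO_PIN, HIGH) / 58; // microseconds -> approximate centimetres
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(TONGUE_LED_PIN, OUTPUT);
  mouthServo.attach(MOUTH_PIN);
  eyeServo.attach(EYE_PIN);
}

void loop() {
  long distance = readDistanceCm();
  float eyeSpeed;

  if (distance > 200) {        // safety zone: slow eyes, mouth propped open, tongue lit
    eyeSpeed = 0.1;
    mouthServo.write(90);
    digitalWrite(TONGUE_LED_PIN, HIGH);
  } else if (distance > 100) { // middle zone: the creature becomes "conscious"
    eyeSpeed = 0.2;
    mouthServo.write(90);
    digitalWrite(TONGUE_LED_PIN, HIGH);
  } else {                     // danger zone: fast eyes, mouth shuts
    eyeSpeed = 0.8;
    mouthServo.write(0);
    digitalWrite(TONGUE_LED_PIN, LOW);
  }

  // Oscillate the eyes between 0 and 180 degrees at the current mode's speed.
  eyeAngle += eyeDirection * eyeSpeed;
  if (eyeAngle >= 180 || eyeAngle <= 0) eyeDirection = -eyeDirection;
  eyeServo.write((int)eyeAngle);
  delay(5);
}
```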

To avoid overloading one of the Arduinos and to make sure the electrical circuit was consistent, we divided the sensor controlling the two servos for the eyes from the sensor that controls the LEDs and servo motor that controls the mouth opening. 

One of our challenges was the noise generated by the sensors, which caused some modes of operation to fluctuate, the mouth opening and immediately dropping even when viewers were a safe distance beyond the threshold. We adjusted the ranges to make the middle zone shorter, so the noise in the sensor reads more like a laughing motion by the creature.

After our presentation, we got feedback on how we could better incorporate the sensors into the piece so that it could be more mobile and more easily placed in different situations.

exp2_wip-img6

Our solution was to attach the creature to a pedestal with the sensors hidden below the surface, so it stands upright as if it were an exhibition piece. The creature takes on a personality of its own: its eyes oscillate as if patrolling the surrounding area, and it closes its mouth and looks downward in a more humble composure when viewers approach.

1111

2222

6666

5555-copy

1010

Sources

Arduino. (2017, January 13). How to Use an Ultrasonic Sensor with Arduino [With Code Examples]. Retrieved from https://www.maxbotix.com/Arduino-Ultrasonic-Sensors-085/.

Circuit Digest. (2018, April 25). Controlling Multiple Servo Motors with Arduino. Retrieved from https://www.youtube.com/watch?time_continue=9&v=AEWS33uEwzA

Langill, Caroline. (2013). “The Living Effect: Autonomous Behavior in Early Electronic Media Art.” Relive: Media Art Histories. Cambridge, MA: MIT Press. pp. 257-274.

Programming Electronic Academy. (2019, July 2). Arduino Sketch with millis() instead of delay(). Retrieved from https://programmingelectronics.com/arduino-sketch-with-millis-instead-of-delay/.

Puckett, N. (n.d.). npuckett/arduinoAnimation. Retrieved from https://github.com/npuckett/arduinoAnimation.

Reyes, M. E., Meza, I. V., & Pineda, L. A. (2019). Robotics facial expression of anger in collaborative human–robot interaction. International Journal of Advanced Robotic Systems, 16(1), 172988141881797. doi: 10.1177/1729881418817972

Experiment 1 – Camp Fire

Project Title: Campfire
Team Members: Nilam Sari (No. 3180775) and Lilian Leung (No. 3180767)

Project Description:

Our experiment explored how we could create a multi-screen experience that speaks to the value of ‘unplugging’ and having a conscious, present discussion with our classmates, using the symbolism of the campfire.

From the beginning, we were both interested in creating an experience that would bring people together, offer a sort of digital detox and encourage deeper face-to-face conversation. We wanted to play on the current trend of digital minimalism and the hygge lifestyle, focused on simpler living and deeper relationships.

bookexamples

While our project would only provide about a ten-minute reprieve from our connected lives, we wanted to draw attention to the fact that, even in a digitally-led program, face-to-face conversation and interaction is just as important for improving our ability to empathize with one another.

Visual inspiration was taken from the aesthetic of the campfire, as well as the abstract shapes used in many meditation and mental health apps such as Pause, developed by ustwo.

screenshot-2019-09-19-at-4-39-22-pm

ustwo, Pause: Interactive Meditation (2015)

How it Works:

The sketch is laid out with three main components: the red and orange gradient background, the fire crackling audio, and the transparent visuals of fire. 

On load, the .mp3 audio file plays with a slow fade-in of the red and orange gradient background. The looped audio file’s volume depends on mic input, so more discussion from the participating group amplifies the volume. The fire graphics at the bottom adjust in size with the volume of the mic input, creating a flickering effect similar to a real campfire.

To lower the volume and fade the fire, users can shake their devices: the acceleration on the x-axis lowers the volume and reduces the tint of the images to 0. This motion is meant to recreate shaking out a lit match.

Development of the Experiment

September 16, 2019

Our initial thought for visually representing the campfire was to recreate an actual fire. However, since we intended to have all the phones laid flat on a surface, we realized that fire is seen from a vertical perspective while the phones would lie horizontally, so we went with a more abstract approach using gradients.

The colours chosen were taken from the natural palette of fire, though we also explored a sense of contrast through the gradients.

Gradient Studies

Righini, E. (2017) Gradient Studies

Originally we tried working with a gradient built in RGB, but while digging into controlling the gradient and switching values, Lilian wasn’t yet comfortable working with multiple values once we needed them to change based on audio-level input.

Instead we began developing a set of gradients to use as transparent .png files. This gave us more control over how they looked and allowed the gradients to be more dynamic and easier to manipulate.

assetsamples

Initial testing of the .png gradients as a proof of concept worked: we managed to get the gradient image to grow using the mic audio-in event.

While Lilian was working on the gradients of the fire, Nilam was figuring out how to add the microphone input and make the gradient correspond to the volume of the input. One of her solutions was mapping.

The louder the input volume, the higher the red value gets and the redder the screen becomes. This way we could change the background to a raster image and, instead of lowering the RGB values to 0 to create black, change its opacity to 0 to reveal the darker gradient image behind it.

Nilam made edits on Lilian’s version of the experiment and integrated the microphone input and mapping into the interface she had already developed.

September 19, 2019

Our Challenges

We were still trying to figure out why mic and audio input and output worked on our laptops but not on our phones. The translation of mic input into the size of the fire seemed laggy, even after down-saving our images.

On our mobile devices, the deviceShake function seemed to be working; while laggy on Firefox, playing the sketch on Chrome gave better, more responsive results. Another issue was that once we started changing the transition of the tint in our sketch, the deviceShake would sometimes stop working entirely.

We wanted a smoother, less abrupt transition from the microphone input, so we tried to find a delay-like function. We couldn’t find anything, so we decided to try using if statements instead of mapping.

We found from our Google searches that a bug may have stopped certain p5.js functions like deviceShaken from working after the iOS update this past summer: while laggy, it still worked on Lilian’s Android Pixel 3 phone, but it never worked on Nilam’s iPhone.

Audio Output

Browser – Nilam (iPhone 6) – Lilian (Pixel 3)
Chrome – No – Yes
Firefox – No – Yes
Safari – No – N/A


deviceShaken Function

Browser – Nilam (iPhone 6) – Lilian (Pixel 3)
Chrome – No – Yes
Firefox – No – No
Safari – No – N/A


Additionally, Lilian started working on further explorations like mobile rotation and acceleration to finesse the functionality of the experiment. She also began exploring how we could incorporate noise values to recreate organic movement. We were inspired by these examples using Perlin noise.

scrns12

To add the new noise graphic, we used the createGraphics() and clear() functions to create an invisible canvas on top of the gradient where the bezier curves can leave trails, so they look like a flame. It clears itself and repeats the process after 600 frames to reduce loading problems in the sketch.

September 21, 2019

After reviewing our code, we realized some of the audio issues came from Chrome’s privacy restriction on auto-playing audio. Our mic problem was because we had placed the code within the setup() function, so it only ran once; once we moved it to draw(), the audio worked much better.

September 23, 2019 – The Phone Stand

physicalprototype

After getting feedback on our prototype, we started creating a stand to hold everyone’s phones during the presentation. We laid out two rows of stands, the outer circle holding 12 phones and the inner circle holding 8, as we explored how to better recreate the ‘fire’ for our multi-screen presentation.

scrns13

We started by sketching the layout for the phone stand, sizing the slots to the widest phone in our class. We then went to the Maker Lab, drilled into the circular foam and chiselled out the middle sections to create indents the phones could sit in.

physicalprototype-copy

The next step was to apply a finish to the foam. We covered it with black matte spray paint. The foam deteriorated a little from the aerosol of the spray paint, which we had foreseen, but after a test coat it didn’t seem to damage the structure much, so we decided to proceed.

September 26, 2019 – deviceShaken to accelerationX

screenshot-2019-09-26-at-6-03-00-pm

Finding that the mobile deviceShake event wasn’t working, Lilian created a new sketch testing the opacity and audio level using accelerationX as the new variable. The goal was to test whether changes in acceleration caused the audio volume to decrease and the images to fade out. accelerationX seemed to provide more consistent results and was added into the main Experiment 1 sketch.

User Flow

This experiment is primarily conversation-led. Set-up by the facilitators involves creating a wide open space for everyone to sit and dimming or turning off the lights to recreate a night scene. Users are then asked to load the p5 experiment and join together in a circular formation in the room.

Users should allow the fire animation to load and place their phones into the circular phone stand. The joined phones coming together recreate the digital campfire. Facilitators can then speak to the importance of coming together and face-to-face conversation.

The session can run as long as needed. When the session is finished, users can shake their phones to dim the fire and lower the volume of the fire crackling audio.

However, the spray paint did have an impact on the foam around the slots: it melted the foam and the paint wouldn’t dry. For future reference we could have applied gesso before the spray paint, but we had to improvise for this iteration, so we used paper towels.

The Presentation

img_7854_edited

Photo Credit: Nick Puckett

doc2

doc1

doc3

experiment1campfire_iniphone

The code for our P5 Experiment can be viewed on GitHub
https://github.com/nilampwns/experiment1

Project Context

The project’s concept came from our mutual interest in creating a multi-screen experience that would bring classmates together in an exercise, rather than just being an experiment using p5 events. After brainstorming a couple of ideas within our limited personal experience with programming, we came up with an idea about ‘unplugging’ and giving full attention to the people around us without the distraction of devices, except that it is facilitated by screens.

We wanted the experience to be about ‘unplugging’, recognizing (even within a digital media program) that time away from screens is just as beneficial and an opportunity for self-reflection. While technology allows us to extend ourselves into virtual space, there are also many consequences for our real-life relationships and physical composure.

In the Fast Company article What Really Happens To Your Brain And Body During A Digital Detox (2015), experts explain that our connectedness to our digital devices alters our ability to empathize, read each other’s emotions and maintain eye contact in real-life interactions.

After our presentation, we looked at Sherry Turkle’s work in Reclaiming Conversation: The Power of Talk in the Digital Age (2016). Turkle describes face-to-face conversation as the most humanizing method of communication, one that allows us to develop a capacity for empathy. People use their phones for the illusion of companionship when real-life relationships feel lacking, and our connectedness online leads us to discount the potential for empathy and intimacy in face-to-face conversation.

We chose a campfire as the visual inspiration for our p5 sketch because of the casual ritual it represents today, providing both warmth and comfort while people connect with nature. Fire is pervasive across human history, but in the present context we use it as a symbol of voluntarily disconnecting from technology and giving oneself the opportunity to nurture relationships with nature and with those close to us.

Expanding on the campfire into the ceremonial practice of the bonfire, fire has been used across history to bring individuals together for a common goal, whether celebrations or folklore customs.

Rather than working with the literal visual depiction of fire, we chose to take visual cues from mobile meditation apps. 

We don’t believe our experiment provides all these benefits, but we wanted to use it as a reminder that, while we’re in a digitally-led program, face-to-face interaction is just as important, and to offer each of our classmates a moment of self-reflection and the opportunity to consider what we would like to offer one another and to create.

We think our presentation helped our classmates take a break from the hecticness of constantly looking at multiple screens while working on our Experiment 1 projects. One piece of feedback we got was that the presentation would have been more successful towards the end of class, when everybody had spent more time looking at screens.

If we were to develop the experiment further, we could explore using the phone’s camera input to dim the fire based on eye contact, encouraging users to look away from their screens when in conversation together. Another improvement could be the ability to blow on the phone to dim the fire, which would require ranges of mic input to distinguish between conversation and blowing on the phone.

Citations

2D and 3D Perlin noise. (n.d.). Retrieved September 21, 2019, from https://genekogan.com/code/p5js-perlin-noise/.

Righini, Evgeniya. “Gradient Studies.” Behance, 2017, www.behance.net/gallery/51830921/Gradient-Studies.

Turkle, S. (2016). Reclaiming conversation: The power of talk in a digital age. NY, NY: Penguin Books.

ustwo. (2015) Pause: Interactive Meditation, apps.apple.com/ca/app/pause-interactive-meditation/id991764216.