Experiment 1 – Echosystem


Group: Masha, Liam, Arsalan

Code: https://github.com/lclrke/Echosystem

Project description

This installation involves 20+ screens and participants, which together create a network through sound. Incoming sound is measured by each device, and this data is used to influence the visual and auditory aspects of the installation. Sound data is used as a variable within functions to affect the size and shape of the visuals. Audio synthesis in p5.js creates sound that responds to participants' input, and the features of the oscillators are likewise determined by data from the external audio input.
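A minimal sketch of this listen-draw-reply loop, assuming simple sine synthesis (the variable names and ranges here are illustrative, not taken from the actual repo):

```javascript
// Mic level drives both the drawing and an oscillator, so each
// device listens, draws, and answers with sound of its own.
let mic, osc;

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start();
  osc = new p5.Oscillator('sine');
  osc.amp(0); // start silent
  osc.start();
}

function draw() {
  background(0);
  const level = mic.getLevel(); // 0..1 amplitude of incoming sound
  // Visual response: louder input draws more, longer segments.
  const segments = floor(map(level, 0, 0.3, 4, 60, true));
  stroke(255);
  for (let i = 0; i < segments; i++) {
    const y = map(i, 0, segments, 0, height);
    line(0, y, width * min(level * 10, 1), y);
  }
  // Audible response: input level steers the oscillator's pitch and volume.
  osc.freq(map(level, 0, 0.3, 220, 880, true));
  osc.amp(constrain(level * 2, 0, 0.4), 0.1);
}
```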

While the network depends on our participation, the devices concurrently relay messages to one another through audio. After we start the “conversation” there is a cascading effect as the screens interact with each other through sound, creating a bidirectional communication network via analog transmission.

Visually, every factor on the screen is affected by participant and device interactions. We created a voice-synchronized procession of lines, colour and sound that highlights and explores sound as a drawn experience. The installation changes continuously: the incoming audio data influences how each segment is drawn in terms of shape, number of lines and scale. This contrasts with a drawing or painting, which is largely fixed in time, and creates an opportunity to draw with voice and sound. Through interaction, participants are able to affect the majority of the piece, bridging installation and performance art.


Week 1

The aim of our early experiments was to create connections between participants through the devices rather than an external dialogue. We started by brainstorming various ideas and identified two directions:

  1. A game or play, which would involve and entertain participants;
  2. An audio/visual installation based on interaction between participants and devices.

First, we planned to create something funny and entertaining and sketched some ideas for the first direction.

OCAD Thesis Generator: Participants would generate a random nonsensical thesis and subsequently have to defend it.


Prototype: https://editor.p5js.org/liamclrke/present/9fBGEz9CH

Racing: Similar to slot car racing, you have to hum to stay within a certain speed in order to not crash. Too quiet and you’ll lose.

Inspiration: https://joeym.sgedu.site/ATK302/p5/skate_speed/index.html

Design Against Humanity: Screens used as cards. Each screen is a random object when pressed. Have to come up with the product after. Ex. “linen” & “waterspout” → so what does this do?

Panda Daycare: Pandas are set to cry at random intervals. Have to shake/interact with them to make them not cry.

Sketch: https://editor.p5js.org/liamclrke/present/MpxLmb1jQ


Week 2

After further exploring P5.js, we decided we were more interested in creating an interactive installation rather than a game.  

Raw notes/ideas for installation:

Wave Machine: An array of screens would form an ocean. Using amplitude measurements from incoming sound, the ocean would get rougher depending on the level of noise. Moving across the array of screens while making noise would create a wave.

Free-form Installation: Participants activate random sounds, images and videos with touch and voice. Images include words in different languages, bright videos and gradients, and various sounds. (This idea was developed into the final version of the experiment.)

Week 3 

We agreed to work on the art installation, in which sounds, images and videos are affected by participant interaction. An installation project seemed more attractive and closer to our interests than a game, and we figured we could combine our skills to create a stronger project and function as a cohesive team.

That week we produced graphic sketches and short videos, and chose the sounds we wanted to use in the project:



At this step, we took inspiration from James Turrell and his work with light and gradients.

Week 4

Uploading too many images, sounds and videos made the code run slowly on devices with less processing power. We changed the concept to a single visual sketch and used p5.js audio synthesis.

We were looking for a modular shape that expressed the sound in an interesting way, apart from a directly representative waveform. We started with complicated gradients that overtaxed the processors of mobile phones, so we dialed down certain variables in the draw function. Line segment density was a factor of amplitude multiplied by a variable, which we lowered until the image could be processed without latency.
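In outline, the fix looked something like the following (the multiplier value and drawing style are illustrative assumptions, not the project's actual code):

```javascript
// Line segment density = amplitude x a tunable constant; lowering
// DENSITY was the knob that let weaker phones keep up.
let mic;
const DENSITY = 120; // started much higher; reduced until latency disappeared

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start();
}

function draw() {
  background(10);
  // Each frame draws only as many segments as the sound level allows.
  const count = ceil(mic.getLevel() * DENSITY);
  stroke(200, 120, 255);
  for (let i = 0; i < count; i++) {
    line(random(width), random(height), random(width), random(height));
  }
}
```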

The final image is a linear abstraction, drawn through external and internal sound.

Project Concept:

The project was inspired by several artworks.

Voice Array by Rafael Lozano-Hemmer: When a participant speaks into an intercom, the audio is recorded and the resulting waveform is represented in a light array. As more speech is recorded, the waveforms are pushed along the horizontal array, playing back the 288 previous recordings. When a recording reaches the end, it is released as a solo clip. This inspired us to use audio as a way to sync devices without a network connection.


Paul Sharits: Kate Mondloch’s Screens: Viewing Media Installation Art was used for research, and within it we discovered Paul Sharits’ work with screens. Sharits was known for avant-garde filmmaking, often presented across multiple screens and accompanied by experimental audio. We took this concept and reformatted it into an interactive design.


Manfred Mohr: Mohr is a pioneer of digital art, using algorithms to create complex structures and shapes. His visual simplicity, driven by more complex underlying theory, was a creative driver for the first iteration of Echosystem.


Challenges and solutions:

  1. The first challenge was lag caused by loading multiple video, sound and image files, which overloaded processors, especially on phones. We therefore switched to p5.sound synthesis and creative coding to draw the image.
  2. The first sketches were based only on touch, which did not create a strong enough interaction between participants. The solution was to add voice and sound, which affect the characteristics (amplitude and pitch) of the oscillators.
  3. In previous ideas it was difficult to affect videos and images (scaling and filters), so we created a simplified image in p5.js consisting of lines of different colours. This allowed the number of lines drawn to be driven by audio input data.
  4. In the beginning, to organize the physical space, we planned to build a round stand for the devices. This would create a circle and bring participants together around the installation. However, the varying sizes and weights of the devices complicated the build.

  5. Another idea was to hang the screens from the ceiling, but the construction was too heavy. Without the right equipment, we simplified these concepts and placed the screens on flat horizontal surfaces, so the number and size of devices was not limited.

  6. The synthesizer built in p5.js led to a number of challenges. The audible low and high ends of a tablet differed greatly from those of a phone, so certain frequencies sounded unpleasant depending on the device’s speaker. Through trial and error, we narrowed the pitch range that could be modulated by audio input for maximum clarity across devices. There was also the issue of a continuous feedback loop, so the oscillator’s amplitude had to be calibrated in a similar fashion; devices placed too close together would feed back continuously. Finally, we added a low-pass filter as a fail-safe to control the sound, since the presentation setup would be less controlled than our tests. A sketch of these calibrations follows below.
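A hedged sketch of that calibration (the exact ranges and cutoff frequency are illustrative; ours were found by trial and error):

```javascript
// Oscillator routed through a low-pass fail-safe, with pitch and
// amplitude constrained to the ranges that behaved across devices.
let mic, osc, lowPass;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start();
  lowPass = new p5.LowPass();
  lowPass.freq(1500);   // fail-safe: tame harsh highs on small speakers
  osc = new p5.Oscillator('triangle');
  osc.disconnect();     // detach from master output...
  osc.connect(lowPass); // ...and route through the filter instead
  osc.amp(0);
  osc.start();
}

function draw() {
  const level = mic.getLevel();
  // Narrowed pitch range that sounded acceptable on phones and tablets.
  osc.freq(map(level, 0, 0.2, 300, 700, true));
  // Capped amplitude keeps nearby devices from feeding back into each other.
  osc.amp(min(level * 1.5, 0.3), 0.1);
}
```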


Although we managed to involve 20 screens and our groupmates in the process of creating sounds and images, the design of the presentation’s logistics could have been more concrete. With preparation and a set placement of screens, the project is highly scalable, far beyond 20 screens and participants.

The first question we asked when we received the assignment was whether we could overcome sync issues while keeping the devices off a network. Through the use of responsive sound we created an analog network, resulting in a visual installation that blurs the line between participant and artist.


  1. Early Abstractions (1947–1956), Pt. 3. https://www.youtube.com/watch?v=RrZxw1Jb9vA
  2. Mondloch, Kate. Screens: Viewing Media Installation Art. University of Minnesota Press, 2010.
  3. Shiffman, Daniel. 17.9: Sound Visualization: Graphing Amplitude – p5.js Sound Tutorial. https://youtu.be/jEwAMgcCgOA
  4. Sketches made in Processing by Takawo. https://www.openprocessing.org/sketch/451569
  5. Rafael Lozano-Hemmer, various works. http://www.lozano-hemmer.com/projects.php
  6. United Visual Artists – Volume. https://www.uva.co.uk/features/volume
  7. Carsten Nicolai – unidisplay. https://collabcubed.com/2012/10/16/carsten-nicolai-unidisplay/
  8. James Turrell. http://jamesturrell.com/



Experiment 1 – Campfire

Project Title: Campfire
Team Members: Nilam Sari (No. 3180775) and Lilian Leung (No. 3180767)

Project Description:

Our experiment explored how we could create a multi-screen experience that speaks to the value of ‘unplugging’ and having a conscious, present discussion with our classmates, using the symbolism of the campfire.

From the beginning, we were both interested in creating an experience that would bring people together, offer a sort of digital detox, and encourage deeper face-to-face conversation. We wanted to draw on the current trend of digital minimalism and the Hygge lifestyle, focused on simpler living and deeper relationships.


While our project would only provide about a ten-minute reprieve from our connected lives, we wanted to draw attention, even within a digitally-led program, to the fact that face-to-face conversation and interaction are just as important for improving our ability to empathize with one another.

Visual inspiration came from the aesthetic of the campfire, as well as the abstract shapes used in many meditation and mental health apps, such as Pause, developed by ustwo.


ustwo, Pause: Interactive Meditation (2015)

How it Works:

The sketch is laid out with three main components: the red and orange gradient background, the fire crackling audio, and the transparent visuals of fire. 

On load, the .mp3 audio file plays with a slow fade-in of the red and orange gradient background. The looped audio file’s volume depends on mic input, so more discussion from the participating group raises the volume. The fire graphics at the bottom adjust in size based on the mic input volume, creating a flickering effect similar to a real campfire.

To lower the volume and fade the fire, users can shake their devices: the acceleration along the x-axis lowers the volume and decreases the tint of the images to 0. This motion recreates shaking out a lit match.
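A condensed sketch of this behaviour (the file names and thresholds are illustrative, not the project’s actual assets):

```javascript
// Mic level drives crackle volume and flame size; shaking the device
// (x-axis acceleration) dims the fire like shaking out a lit match.
let crackle, fireImg, mic;
let fireAlpha = 255;

function preload() {
  crackle = loadSound('crackle.mp3'); // looping fire audio (assumed filename)
  fireImg = loadImage('flame.png');   // transparent flame graphic (assumed)
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start();
  crackle.loop();
}

function draw() {
  background(180, 60, 20); // stands in for the gradient background
  const level = mic.getLevel();
  // More conversation -> louder crackle and a bigger flame.
  crackle.setVolume(map(level, 0, 0.3, 0.2, 1, true));
  const size = map(level, 0, 0.3, height * 0.3, height * 0.8, true);
  if (abs(accelerationX) > 20) {
    fireAlpha = max(fireAlpha - 10, 0); // shake: fade toward dark
  }
  tint(255, fireAlpha);
  image(fireImg, width / 2 - size / 2, height - size, size, size);
}
```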

Development of the Experiment

September 16, 2019

Our initial thought for visually representing the campfire was to recreate an actual fire. However, since we intended to lay all the phones flat on a surface, we realized that fire is seen from a vertical perspective while the phones would lie horizontally, so we took a more abstract approach using gradients.

The colours were taken from the natural palette of fire, though we also explored a sense of contrast through the gradients.

Gradient Studies

Righini, E. (2017) Gradient Studies

Originally we tried building a gradient in RGB, but while digging into controlling the gradient and switching values, Lilian wasn’t yet comfortable working with multiple values once they needed to change based on audio input levels.

Instead, we began developing a set of gradients as transparent PNG files, which gave us more control over their appearance and made the gradients more dynamic and easier to manipulate.


Initial testing of the PNG gradients worked as a proof of concept: we managed to get the gradient image to grow using mic input from AudioIn.

While Lilian was working on the gradients of the fire, Nilam was figuring out how to add the microphone input and make the gradient correspond to the volume of the mic input. One of her solutions was mapping.

The louder the input volume, the higher the red value and the redder the screen became. This approach let us change the background to a raster image: instead of lowering RGB values to 0 to create black, we lowered the image’s opacity to 0 to reveal the darker gradient image behind it.
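A small sketch of this mapping approach (the asset names are assumed):

```javascript
// The mic level sets the bright gradient's opacity, so the darker
// gradient beneath shows through as the room quiets down.
let mic, brightGradient, darkGradient;

function preload() {
  brightGradient = loadImage('gradient-bright.png'); // assumed filenames
  darkGradient = loadImage('gradient-dark.png');
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start();
}

function draw() {
  const level = mic.getLevel();
  image(darkGradient, 0, 0, width, height);
  // map() turns the 0..1 mic level into a 0..255 opacity.
  tint(255, map(level, 0, 0.3, 0, 255, true));
  image(brightGradient, 0, 0, width, height);
  noTint();
}
```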

Nilam then edited Lilian’s experimental version and integrated the microphone input and mapping into the interface she had already developed.

September 19, 2019

Our Challenges

We were still trying to figure out why mic and audio input and output worked on our laptops but not on our phones. Translating mic input into the size of the fire seemed laggy, even after down-saving our images.

On our mobile devices, the deviceShake function seemed to work: while laggy on Firefox, running the sketch in Chrome gave better, more responsive results. Another issue was that once we started changing the tint transition in our sketch, deviceShake would sometimes stop working entirely.

We wanted a less abrupt, smoother transition from the microphone input, so we looked for a delay-like function. We couldn’t find one, so we decided to try using if statements instead of mapping (see the sketch below).
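A sketch of the if-statement approach (the thresholds are assumed, and the lerp() easing is a common smoothing technique added here for illustration, not necessarily what we shipped):

```javascript
// Step the target value with if statements rather than mapping the
// raw mic level, then ease toward it so the glow doesn't jump.
let mic;
let current = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start();
}

function draw() {
  background(0);
  const level = mic.getLevel();
  let target;
  if (level > 0.2) {
    target = 255;    // loud conversation: full brightness
  } else if (level > 0.05) {
    target = 150;    // quiet talk: mid glow
  } else {
    target = 40;     // near-silence: ember
  }
  current = lerp(current, target, 0.05); // ease rather than jump
  noStroke();
  fill(255, 100, 0, current);
  ellipse(width / 2, height / 2, 300);
}
```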

Our Google searches suggested a possible bug that stopped certain p5.js functions, such as deviceShaken, from working after that summer’s iOS update: while laggy, it still worked on Lilian’s Pixel 3, but it never worked on Nilam’s iPhone.

Audio Output

          Nilam (iPhone 6)   Lilian (Pixel 3)
Chrome    No                 Yes
Firefox   No                 Yes
Safari    No                 N/A

deviceShaken Function

          Nilam (iPhone 6)   Lilian (Pixel 3)
Chrome    No                 Yes
Firefox   No                 No
Safari    No                 N/A

Additionally, Lilian began exploring mobile rotation and acceleration to finesse the functionality of the experiment, as well as how we could incorporate noise values to recreate organic movement. We were inspired by these examples using Perlin noise.


To add the new noise graphic, we used the createGraphics() and clear() functions to create an invisible canvas on top of the gradient that lets the bezier curves trail so they look like a flame. It clears itself and repeats the process every 600 frames to reduce performance problems in the sketch.
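A sketch of this overlay-canvas trick (the curve coordinates and colours are illustrative):

```javascript
// Bezier curves accumulate on a transparent createGraphics() layer so
// they trail like flames; clear() wipes the layer every 600 frames.
let pg;

function setup() {
  createCanvas(windowWidth, windowHeight);
  pg = createGraphics(width, height); // transparent layer above the gradient
  pg.noFill();
}

function draw() {
  background(200, 80, 20); // stands in for the gradient background
  // Perlin noise makes the flame wander organically.
  const x = width / 2 + noise(frameCount * 0.01) * 200 - 100;
  pg.stroke(255, 180, 0, 60);
  pg.bezier(x, height, x - 50, height * 0.7, x + 50, height * 0.4, x, height * 0.2);
  if (frameCount % 600 === 0) {
    pg.clear(); // restart the trail to avoid slowdowns
  }
  image(pg, 0, 0);
}
```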

September 21, 2019

After reviewing our code, we realized some of our audio issues were caused by Chrome’s privacy restriction disabling auto-playing audio. Our mic problem arose because we had placed the code inside setup(), where it ran only once; once we moved it into draw(), the audio worked much better.
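In outline, the fix looked like this (simplified): unlock audio on a user gesture to satisfy Chrome’s autoplay policy, and poll the mic inside draw() instead of reading it once in setup().

```javascript
let mic;

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start();
}

function mousePressed() {
  // Chrome blocks audio until a user gesture resumes the audio context.
  userStartAudio();
}

function draw() {
  background(0);
  const level = mic.getLevel(); // read every frame, not once in setup()
  noStroke();
  fill(255, 120, 0);
  ellipse(width / 2, height / 2, 50 + level * 500);
}
```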

September 23, 2019 – The Phone Stand


After getting feedback on our prototype, we started creating a stand in which we could place everyone’s phones during the presentation. We laid out two rows of stands, the outer circle holding 12 phones and the inner circle holding 8, as we explored how to better recreate the ‘fire’ in our multi-screen presentation.


We started by sketching the layout for the phone stand, sizing the slots to the widest phone in our class. We then went to the Maker Lab, drilled into the circular foam, and chiselled out the middle sections to create indents the phones could sit in.


The next step was to finish the foam, which we covered with black matte spray paint. The foam deteriorated a little from the aerosol of the spray paint, which we had foreseen, but after a test coat it didn’t seem to damage the structure much, so we proceeded.

September 26, 2019 – deviceShaken to accelerationX


Finding that the mobile deviceShake event wasn’t working, Lilian created a new sketch testing opacity and audio level with accelerationX as the new variable. The goal was to test whether changes in acceleration would cause the audio volume to decrease and the images to fade out. accelerationX gave more consistent results and was added into the main Experiment 1 sketch.
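A reconstruction of that accelerationX test (the threshold and asset names are assumed):

```javascript
// Track the *change* in x-axis acceleration each frame instead of
// relying on the flaky deviceShaken event; fade tint and volume together.
let fireSound, fireImg;
let imgAlpha = 255;
let prevAccel = 0;

function preload() {
  fireSound = loadSound('crackle.mp3'); // assumed filename
  fireImg = loadImage('flame.png');     // assumed filename
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  fireSound.loop();
}

function draw() {
  background(0);
  const delta = abs(accelerationX - prevAccel);
  prevAccel = accelerationX;
  if (delta > 10) {
    imgAlpha = max(imgAlpha - 8, 0);     // fade the fire out
    fireSound.setVolume(imgAlpha / 255); // and the audio with it
  }
  tint(255, imgAlpha);
  image(fireImg, 0, 0, width, height);
}
```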

User Flow

This experiment is primarily conversation-led. Facilitators set up by creating a wide open space for everyone to sit and by dimming or turning off the lights to recreate a night scene. Users then load the p5 experiment and join together in a circular formation in the room.

Users should let the fire animation load and place their phones into the circular phone stand; the joined phones together recreate the digital campfire. Facilitators can then speak to the importance of coming together for face-to-face conversation.

The session can run as long as needed. When the session is finished, users can shake their phones to dim the fire and lower the volume of the fire crackling audio.

However, the paint did have an impact on the foam around the slots: it melted the foam, and the paint wouldn’t dry. For future reference, we could have applied gesso before the spray paint, but this time we improvised with paper towels.

The Presentation


Photo Credit: Nick Puckett





The code for our p5 experiment can be viewed on GitHub.

Project Context

The project’s concept came from our mutual interest in creating a multi-screen experience that would bring classmates together in an exercise, rather than just an experiment using p5 events. After brainstorming ideas within our limited personal experience with programming, we arrived at the idea of ‘unplugging’ and giving full attention to the people around us without the distraction of devices, except that here it is facilitated by screens.

We wanted the experience to be about ‘unplugging’, recognizing (even within a digital media program) that time away from screens is just as beneficial and an opportunity for self-reflection. While technology allows us to extend ourselves into virtual space, it also carries consequences for our real-life relationships and physical composure.

In the Fast Company article What Really Happens to Your Brain and Body During a Digital Detox (2015), experts explain that our connectedness to digital devices alters our ability to empathize, read each other’s emotions, and maintain eye contact in real-life interactions.

After our presentation, we looked at Sherry Turkle’s work in Reclaiming Conversation: The Power of Talk in a Digital Age (2016). Turkle describes face-to-face conversation as the most humanizing form of communication, one that allows us to develop a capacity for empathy. People use their phones for the illusion of companionship when real-life relationships feel lacking, and our connectedness online leads us to discount the potential for empathy and intimacy in face-to-face conversation.

We chose a campfire as the visual inspiration for our p5 sketch because it remains a casual ritual that provides warmth and comfort while people connect with nature. Fire is pervasive across human history, but in the present context we use it as a symbol of voluntarily disconnecting from technology and giving oneself the opportunity to nurture relationships with nature and with those close to us.

Expanding from the campfire to the ceremonial practice of the bonfire, fire has been used throughout history to bring individuals together for a common goal, whether for celebrations or folklore customs.

Rather than working with the literal visual depiction of fire, we chose to take visual cues from mobile meditation apps. 

We don’t believe our experiment provides all of these benefits, but we wanted to use it as a reminder that, even in a digitally-led program, face-to-face interaction is just as important. By giving each of our classmates a moment of self-reflection, we created a unique opportunity to evaluate what we would like to offer one another and to create together.

We think our presentation helped our classmates take a break from the hecticness of constantly looking at multiple screens while working on their Experiment 1 projects. One piece of feedback we received was that the presentation would have been more successful toward the end of class, when everybody had spent more time looking at screens.

If we were to develop the experiment further, we could explore using the phone’s camera input to dim the fire based on eye contact, encouraging users to look away from their screens while in conversation. A further improvement could be the ability to blow on the phone to dim the fire, which would require calibrating mic input ranges to distinguish conversation from blowing.


2D and 3D Perlin noise. (n.d.). Retrieved September 21, 2019, from https://genekogan.com/code/p5js-perlin-noise/

Righini, E. (2017). Gradient Studies. Behance. www.behance.net/gallery/51830921/Gradient-Studies

Turkle, S. (2016). Reclaiming Conversation: The Power of Talk in a Digital Age. New York, NY: Penguin Books.

ustwo. (2015). Pause: Interactive Meditation. apps.apple.com/ca/app/pause-interactive-meditation/id991764216

What Really Happens to Your Brain and Body During a Digital Detox. (2015). Fast Company.

Experiment 1: Digital Interactive Sound Bath


Our project is a digitized version of the experience of a sound bath. The objective was the same: to explore the ancient stress-relieving and sound-healing practice. However, we sought to achieve this using laptops and phones, which are often associated with being the cause of stress and anxiety. Our experiment made use of motion detection, WEBGL animation, and sound detection and emission.

Table of Contents

1.0 Requirements
2.0 Planning & Context
3.0 Implementation
3.1 Software & Elements
3.1.1 Libraries & Code Design
3.1.2 Sound Files
3.2 Hardware
4.0 Reflections
5.0 Photos
6.0 References

1.0 Requirements

The goal of this experiment is to create an interactive experience expandable to 20 screens.

2.0 Planning & Context



Stress affects many of us. The constant hustle of work deadlines, fast-paced city life, and overachievement can push you to the edge, and most people could benefit from self-care, meditation, relaxation, and a pause from busy life. Enter sound baths.

Sound baths use music for healing and relaxation; a sound bath is defined as an immersion in sound frequency that cleans the soul (McDonough). From Tibetan singing bowls to Aboriginal didgeridoos (Dellert), music has been used therapeutically for centuries. The ancient Greeks used sound vibrations to aid digestion, treat mental disturbance and induce sleep, and Aristotle’s De Anima describes how flute music can purify the soul.

Since the late 19th century, researchers have focused on the correlation between sound and healing. These studies have found that music can lower blood pressure, decrease pulse rate and assist the parasympathetic nervous system.

So, essentially a sound bath is a meditation class that aims to guide you into a deep meditative state while you are enveloped in ambient sounds.


Sound baths use repetitive notes at different frequencies to help bring your focus away from your thoughts. These sounds are generally created with crystal bowls, cymbals and gongs. As in a yoga session, the instructor creates the flow of the session. Each instrument produces a different frequency that vibrates in your body and helps guide you to a meditative and restorative state. Some people believe bowls made from certain types of crystals and gems can channel different restorative properties.

As described above, our project digitizes this experience using the laptops and phones that are so often the cause of stress and anxiety, allowing those experiencing it a moment to pause, reflect and reconnect with their inner soul.


The concept of our experiment was to let the user interact with four primary zones in order to experience them:

  •     Zone A – Wild Forest – Green
  •     Zone B – Ocean Escape – Blue
  •     Zone C – Zen Mode – Purple
  •     Zone D – Elements of life – Pink
  •     Projections – a) Visually soothing abstract graphics. b) Life quotes
Zone Maps

We carefully separated the different experiences, based on their soothing qualities, into the four corners of the space. Zone A consisted of motion-sensitive sounds of rain, chirping and crickets, along with a motion-sensitive green zonal colour. Similarly, Zone B consisted of motion-sensitive seascape sounds such as ocean waves and seagulls, with blue ambient lighting. Zone C, the zen zone, had meditation tunes as well as flute and bell melodies triggered by people passing by, with a pink touch to its ambient lighting. The final zone, D, represented elemental sounds such as rain, fire and earth, also triggered by motion; however, we ultimately opted for silence within that zone, providing a brief audio escape. The colours were drawn together with colour-cycling lamps near the floor.

The experience also included projections: eye-pleasing visualizations projected onto the ceiling. These projections were volume-sensitive, so as the audience interacted, the visualizations became brighter and more prominent. In keeping with the digital sound bath theme, we also projected quotations about life to instill faith and provide inspiration to users who read them.

Once all these elements came together, the space became a digital sound bath where users could come and relax their minds. The experience took place in a dark space: only upon detecting motion would the room light up with different colours and play different ambient sounds. The result was a soothing, mind-relaxing experience for the audience.

3.0 Implementation

3.1 Software & Elements

3.1.1 Libraries & Code Design

For the zones, the Vida library was used for motion detection. The emitted light is a simple rectangle that slowly fades after motion is detected, and the volume of the audio files mimics this behaviour (approximated in the sketch below).
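Since the actual project relies on the Vida library (whose API we don’t reproduce here), this is a library-free approximation using simple frame differencing on the webcam; the thresholds are illustrative:

```javascript
// Compare consecutive low-res camera frames; enough changed pixels
// counts as motion, which relights a slowly fading zone rectangle.
let capture, prevFrame;
let glow = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
  capture = createCapture(VIDEO);
  capture.size(160, 120); // low resolution keeps the diff cheap
  capture.hide();
}

function draw() {
  background(0);
  capture.loadPixels();
  if (prevFrame && capture.pixels.length === prevFrame.length) {
    let changed = 0;
    for (let i = 0; i < capture.pixels.length; i += 4) {
      if (abs(capture.pixels[i] - prevFrame[i]) > 40) changed++;
    }
    if (changed > 300) glow = 255; // enough motion: light the zone
  }
  prevFrame = capture.pixels.slice();
  glow = max(glow - 2, 0);   // slow fade when motion stops
  noStroke();
  fill(120, 255, 120, glow); // zone colour (green, as in Zone A)
  rect(0, 0, width, height);
}
```

The audio for each zone can fade with the same glow value, for example zoneSound.setVolume(glow / 255).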

WEBGL was used to generate the calming projection: a slowly rotating cosine-and-sine plot animated in 3D using spheres. It was sound-activated and glowed brighter as the sound level increased.
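A minimal sketch of that projection (grid size, speeds and colours are illustrative assumptions):

```javascript
// Spheres sit on an animated cos*sin surface in WEBGL; the mic level
// brightens them so the projection glows with the room's sound.
let mic;

function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);
  mic = new p5.AudioIn();
  mic.start();
  noStroke();
}

function draw() {
  background(0);
  const glow = map(mic.getLevel(), 0, 0.3, 60, 255, true);
  rotateX(-0.4);
  rotateY(frameCount * 0.003); // slow rotation
  fill(180, 140, 255, glow);   // louder room, brighter spheres
  for (let x = -5; x <= 5; x++) {
    for (let z = -5; z <= 5; z++) {
      push();
      // Height comes from an animated cosine * sine plot.
      const y = 60 * cos(x * 0.6 + frameCount * 0.02) * sin(z * 0.6);
      translate(x * 40, y, z * 40);
      sphere(6);
      pop();
    }
  }
}
```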

The life quotes used an array and a set interval to redraw new quotations, roughly as sketched below.
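In outline (the quotes here are placeholders, not the ones we projected):

```javascript
// An array of quotations plus setInterval() cycles which line is drawn.
const quotes = [
  'Breathe in, breathe out.',
  'Be here now.',
  'Let the sound carry the thought away.',
];
let current = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
  textAlign(CENTER, CENTER);
  textSize(32);
  // Advance to the next quote every ten seconds.
  setInterval(() => { current = (current + 1) % quotes.length; }, 10000);
}

function draw() {
  background(0);
  fill(255);
  text(quotes[current], width / 2, height / 2);
}
```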

The code is designed to be centralised: although 14 unique programs run at once, they share base code where possible. For efficient setup, a home page was created with buttons launching each program.

3.1.2 Sound Files

For sounds, the following were tested, but only those in bold were implemented, as they formed the most audibly pleasing combination. These high-quality sounds were purchased and licensed for use.

pink-zen(D): gentle-wind.wav, ambience.wav, bells.wav

purple-elements(C): wind.wav, rain-storm.wav, thunder.wav

blue-ocean(B): humpback-whales.wav, sea-waves.wav, california-gull.wav

green-forest(A): thrush-bird.wav, robin.wav, forest-leaves.wav, cricket.wav

3.2 Hardware
Some of the hardware used: plain dim LED lamp, glass jar, wax paper.

We used round table lamps on the floor with remote-controlled LEDs that cycled through the rainbow. Two plain dim LED lamps were used for safety in dark areas. Two projectors were used: one to project the life sayings onto the screen, and another to project the soothing animation onto the ceiling. Glass jars wrapped in decorated wax paper held the phones as they lit up. The wax paper was chosen to coincide with each zone’s theme, and the glass jars were tall enough to hide most of each screen, providing a soft glow, yet short enough to keep the camera exposed, since it is used for motion detection. An iPad at the entrance provided context for the space. The space was decorated to simulate a sound bath.

4.0 Reflections

When approaching this topic, our group set out to explore a solution where participants would not have to physically touch their phones, but instead walk away from them as part of an experience that helps relax themselves and others. As they walked meditatively around the space, their motion acted as the trigger for the light and soundscape. We noticed some participants becoming enveloped in the experience, lying down as one would in a traditional sound bath to absorb it with their senses. Others, entranced by the lights and affirmations, were curious about what different pleasing sounds and colours could be produced. Due to the amount of hardware and the number of programs involved, a lot of setup was required before the room could be entered. An additional complication is that this kind of setup keeps the phones in the creators’ hands, rather than being something attendees bring with them into the experience.

The room we initially requested was RHA 318, a smaller and more intimate space that would have allowed more interaction between the lights by placing them closer together, and a better layout for the projections. That room had recently gone out of service, and in the larger room, RHA 511, some of that interaction was diluted, as pointed out in the post-presentation discussion.

Additionally, despite our mentioning that participants only needed to walk around to trigger the sound, many who were unfamiliar with the concept of a sound bath still tried to manipulate the devices or their holders, or to use sound to drive the effects. This is likely due to memories of previous tactile experiments, where manipulating the elements within the experiment produced positive results.

5.0 Photos

6.0 References