Experiment 1 – Echosystem


Group: Masha, Liam, Arsalan

Code: https://github.com/lclrke/Echosystem

Project description

This installation involves more than 20 screens and the participants around them, which together form a network through sound. Each device measures incoming sound, and that data influences both the visual and auditory aspects of the installation. The sound data is used as a variable within the drawing functions to affect the size and shape of the visuals, while audio synthesis in p5.js generates sound that responds to participants' input. The parameters of the oscillators are likewise determined by data from the external audio input.
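A minimal sketch of this coupling, assuming p5.sound's AudioIn and Oscillator classes (the mapping ranges and variable names here are illustrative, not our exact code):

```javascript
// Illustrative sketch: incoming sound level drives an oscillator's pitch and volume.
let mic, osc;

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();          // external audio input (device microphone)
  mic.start();
  osc = new p5.Oscillator('sine'); // internal sound source
  osc.amp(0);                      // start silent
  osc.start();
}

function draw() {
  background(0);
  let level = mic.getLevel();                  // 0..1 amplitude of incoming sound
  osc.freq(map(level, 0, 0.3, 220, 660), 0.1); // louder input -> higher pitch (ranges are assumptions)
  osc.amp(constrain(level * 2, 0, 0.3), 0.1);  // respond in volume, capped to limit feedback
}

function mousePressed() {
  userStartAudio(); // mobile browsers require a user gesture before audio can start
}
```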

While the network depends on our participation, the devices also relay messages to each other through audio. After we start the “conversation”, a cascading effect emerges as the screens interact through sound, creating a bidirectional communication network over analog (acoustic) transmission.

Visually, every element on the screen is affected by participant and device interactions. We created a voice-synchronized procession of lines, color and sound that explores sound as a drawn experience. The installation changes continuously: the incoming audio data determines how each segment is drawn in terms of shape, number of lines and scale. This contrasts with a drawing or painting, which is largely fixed in time, and creates an opportunity to draw with voice and sound. Through interaction, participants can affect most of the piece, bridging installation and performance art.
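A hedged sketch of the drawing side, assuming the same p5.AudioIn level measurement; the specific mappings from amplitude to line count, scale and color are illustrative:

```javascript
// Illustrative drawing loop: amplitude changes how many segments are drawn and how large they are.
let mic;

function setup() {
  createCanvas(windowWidth, windowHeight);
  colorMode(HSB, 360, 100, 100);
  mic = new p5.AudioIn();
  mic.start();
}

function draw() {
  background(0, 0, 10);
  let level = mic.getLevel();                                        // 0..1 incoming amplitude
  let numLines = int(constrain(map(level, 0, 0.3, 5, 120), 5, 120)); // louder -> more segments
  let len = map(level, 0, 0.3, 20, height / 2);                      // louder -> longer segments
  for (let i = 0; i < numLines; i++) {
    let x = map(i, 0, numLines, 0, width);
    stroke((frameCount + i * 3) % 360, 80, 90); // color drifts over time and across segments
    line(x, height / 2 - len, x, height / 2 + len);
  }
}

function mousePressed() {
  userStartAudio();
}
```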

Process:

Week 1

The aim of our early experiments was to create connections between participants through the devices rather than through an external dialogue. We started by brainstorming various ideas and settled on two possible directions:

  1. A game or a play that would involve and entertain participants;
  2. An audio/visual installation based on interaction between participants and devices.

First, we planned to create something funny and entertaining and sketched some ideas for the first direction.

OCAD Thesis Generator: Participants would generate a random nonsensical thesis and subsequently have to defend it.


Prototype: https://editor.p5js.org/liamclrke/present/9fBGEz9CH

Racing: Similar to slot car racing, you have to hum to keep your speed within a certain range so you don’t crash. Too quiet and you’ll lose.

Inspiration: https://joeym.sgedu.site/ATK302/p5/skate_speed/index.html

Design Against Humanity: Screens used as cards. Each screen reveals a random object when pressed, and players then have to pitch a product that combines them. Ex. “linen” & “waterspout” → so what does this do?

Panda Daycare: Pandas are set to cry at random intervals; you have to shake or otherwise interact with them to stop the crying.

Sketch: https://editor.p5js.org/liamclrke/present/MpxLmb1jQ

 

Week 2

After further exploring p5.js, we decided we were more interested in creating an interactive installation than a game.

Raw notes/ideas for installation:

Wave Machine: An array of screens would form an ocean. Using amplitude measurements from incoming sound, the ocean would get rougher depending on the noise level. Moving along the array of screens while making noise would create a wave.

Free-form Installation: Participants activate random sounds, images and videos with touch and voice. The images include words in different languages, bright videos and gradients, paired with various sounds. (This idea was developed into the final version of the experiment.)

Week 3 

We agreed to work on the art installation, involving sounds, images and videos affected by participant interaction. An installation seemed more attractive and closer to our interests than a game, and we figured we could combine our skills and function as a cohesive team to create a stronger project.

That week we produced graphic sketches and short videos, and chose the sounds we wanted to use in the project:

[Sketch images: greetings in different languages, e.g. “privet” and a Hindi “hello” artboard]

At this step, we took inspiration from James Turrell and his work with light and gradients.

Week 4

Uploading too many images, sounds and videos made the code run slowly on devices with less processing power, so we changed the concept to a single visual sketch and used p5.js audio synthesis.

We were looking for a modular shape that expressed the sound in an interesting way, beyond a directly representative waveform. We started with complicated gradients that overtaxed the processors of mobile phones, so we dialed down certain variables in the draw function. Line-segment density was the amplitude multiplied by a constant, which we lowered until the image could be drawn without latency.
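A sketch of that tuning knob; DENSITY is a hypothetical constant we would lower until phones could keep up, and the helper name is illustrative:

```javascript
// Line-segment density as a factor of amplitude times a constant (values are illustrative).
const DENSITY = 150; // lowering this number reduces per-frame work on slower phones

function drawSegments(level) {
  // level is the 0..1 amplitude from mic.getLevel()
  let numSegments = constrain(int(level * DENSITY), 1, DENSITY);
  for (let i = 0; i < numSegments; i++) {
    let x = map(i, 0, numSegments, 0, width);
    line(x, 0, x, height);
  }
}
```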

The final image is a linear abstraction, drawn through external and internal sound.

Project Concept:

The project was inspired by several artworks.

Voice Array by Rafael Lozano-Hemmer: When a participant speaks into an intercom, the audio is recorded and the resulting waveform is shown in a light array. As more speech is recorded, earlier waveforms are pushed along the horizontal array, which holds the 288 previous recordings. When a recording reaches the end, it is released as a solo clip. This inspired us to use audio as a way to sync devices without a network connection.


Paul Sharits: We used Kate Mondloch’s Screens: Viewing Media Installation Art for research, and within it we discovered Paul Sharits’ work with screens. Sharits was known for avant-garde filmmaking, often presented across multiple screens and accompanied by experimental audio. We took this concept and reformatted it into an interactive design.


Manfred Mohr: Mohr is a pioneer of digital art who uses algorithms to create complex structures and shapes. Visual simplicity driven by more complex underlying theory was a creative driver for the first iteration of Echosystem.


Challenges and solutions:

  1. The first challenge was lag caused by overloading processors with multiple video, sound and image files. These files slowed down the code, especially on phones, so we decided to use p5.sound synthesis and creative coding to draw the image instead.
  2. The first sketches were based only on touch, which did not create a strong enough interaction between participants, so the solution was to add voice and sound, which affect the characteristics (amplitude and pitch) of the oscillators.
  3. In previous ideas, it was difficult to manipulate videos and images (scaling and filters), so we created a simplified image in p5.js consisting of lines of different colors. This allowed the number of lines drawn to be driven by the audio input data.
  4. To organize the physical space, we initially planned to build a round stand for the devices. This would create a circle and bring participants together around the installation. However, the varying sizes and weights of the devices complicated this.

  5. Another idea was to hang the screens from the ceiling, but the construction was too heavy. Without the right equipment, we simplified these concepts and placed the screens on flat horizontal surfaces, so the number and size of devices was not limited.

  6. The synthesizer built in p5.js led to a number of challenges. The audible low and high ends of a tablet differed greatly from those of a phone, so certain frequencies sounded unpleasant depending on the device’s speaker. Through trial and error, we narrowed the pitch range that could be modulated by the audio input for maximum clarity across devices. There was also an issue of continuous feedback loops, so the oscillator’s amplitude had to be calibrated in a similar fashion; devices placed too close together would otherwise feed back continuously. Finally, we added a low-pass filter as a fail-safe to control the sound, since the presentation setup would be less controlled than our tests (see the sketch after this list).
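A hedged sketch of that calibration, assuming p5.sound’s Oscillator and LowPass classes; the cutoff frequency, pitch band and amplitude cap are illustrative values, not the ones we settled on:

```javascript
// Illustrative calibration: clamp the modulated pitch, cap the output level,
// and route the oscillator through a low-pass filter as a fail-safe.
let mic, osc, lowPass;

function setup() {
  mic = new p5.AudioIn();
  mic.start();
  lowPass = new p5.LowPass();
  lowPass.freq(1200);               // cutoff chosen by ear; value here is an assumption
  osc = new p5.Oscillator('sine');
  osc.disconnect();                 // detach from the master output...
  osc.connect(lowPass);             // ...and send it through the filter instead
  osc.amp(0);
  osc.start();
}

function draw() {
  let level = mic.getLevel();
  // Keep the pitch inside a band that sounds acceptable on both phone and tablet speakers.
  let pitch = constrain(map(level, 0, 0.3, 220, 660), 220, 660);
  osc.freq(pitch, 0.1);
  // Cap the oscillator's volume so nearby devices don't drive each other into runaway feedback.
  osc.amp(constrain(level * 1.5, 0, 0.25), 0.1);
}

function mousePressed() {
  userStartAudio();
}
```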

Reflection:

Although we managed to involve 20 screens and our groupmates in the process of creating sounds and images, the logistics of the presentation could have been more concrete. With preparation and a set placement of screens, the project is highly scalable, well beyond 20 screens and participants.

The first question we asked when the assignment was given was whether we could overcome sync issues while keeping the devices off a network. Through responsive sound we created an analog network, resulting in a visual installation that blurs the line between participant and artist.

References:

  1. Early Abstractions (1947–1956), Pt. 3. https://www.youtube.com/watch?v=RrZxw1Jb9vA
  2. Mondloch, Kate. Screens: Viewing Media Installation Art. University of Minnesota Press, 2010.
  3. Shiffman, Daniel. 17.9: Sound Visualization: Graphing Amplitude – p5.js Sound Tutorial. https://youtu.be/jEwAMgcCgOA
  4. Sketches made in Processing by Takawo. https://www.openprocessing.org/sketch/451569
  5. Rafael Lozano-Hemmer, various works. http://www.lozano-hemmer.com/projects.php
  6. United Visual Artists – Volume. https://www.uva.co.uk/features/volume
  7. Carsten Nicolai – unidisplay. https://collabcubed.com/2012/10/16/carsten-nicolai-unidisplay/
  8. James Turrell. http://jamesturrell.com/

 

 
