Antimaterial


By Jessie, Liam and Masha

Project Description: 

Antimaterial is an interactive installation that explores abiogenesis and kinesis to articulate a relationship between the human and nonhuman worlds. Just as we are able to affect our surroundings, the ability of magnetic materials to move matter indicated the presence of life to pre-Socratic philosophers. The rocks beneath our feet were not only essential catalysts for life; microbial life, in turn, helped give birth to the minerals we know and depend on today. In Antimaterial, these concepts are explored as primitive life emerges within the topography of the material in response to human proximity, demonstrating a connection between animate and inanimate matter.

Project Context:

Drawn to the freedom and possibility of exploration, we all agreed from the start of the project to use a creative material to build a tangible, experimental interactive art piece. Since some team members had previous experience using water in their personal projects, we first considered water: it is a very versatile medium thanks to its free-flowing form. Among our initial ideas were using a solenoid to tap the water's surface and create ripples, or placing speakers in the water to visualize sound waves. However, because the form of water is difficult to control, we worried the final piece would end up looking like a jacuzzi, which led us to refine our ideas further.

After several brainstorming sessions, we came up with the idea of mixing water with a magnetic material to magnetize it, which would also give us more control over its form. During our research we found quite a few interesting projects that guided us in new directions. Sachiko Kodama is a Japanese artist who has dedicated the majority of her practice to installations built from ferrofluid and kinetic sculpture. By combining ferrofluid, which can be a messy medium, with kinetic sculpture, which is inherently neatly structured, her works create order and harmony amid chaos and come off as intriguing and energetic.


Sachiko Kodama Studio Website

Inspired by this harmony generated by juxtaposing structure and chaos, we wished to work with ferrofluid in this project too. We purchased a bag of iron(III) oxide and started playing with it, mixing it with different types of solvents including motor oil and washing-up liquid, while researching how it has been incorporated into other artists' work.

 

 

One use of iron(III) oxide that drew our attention was to first mix it with oil and then encase the oily mixture in a container filled with water. For example, the project Ferrolic by Zelf Koelman exploits the immiscibility of oil and water: magnets behind the display move the ferrofluid freely through the water, producing different patterns in different modes of operation.

 

We wanted the magnetic material to generate different patterns as visitors interact with it. The mechanism behind the project SnOil inspired us to place an array of electromagnets behind the magnetic material, with patterns generated by activating electromagnets at specific positions during visitors' interactions.

 

However, when we talked to Kate and Nick during the proposal meeting, they shared their concerns about whether we could manage an array of more than ten electromagnets within the two-week production period, as well as about the cleanliness of the piece on display, since ferrofluid tends to get messy. Having weighed these essential suggestions, we decided to abandon the ferrofluid and use only iron(III) oxide powder, which takes on a fur-like texture when magnetized and still creates an aesthetically intriguing look. For example, the following project uses furry pom poms as part of an interactive piece to produce a very engaging experience.

 

We were also inspired by the mechanism behind this pom pom piece: it uses servos to convert rotary motion into linear motion, which we later used in our project to simplify the movement of the magnets.

 

Process:

This project went through a series of experiments, evaluations and adaptations. Having decided to explore a creative material for this project, we chose to experiment with magnets. We first tried making electromagnets by wrapping conductive wire around a metal core, hoping to create patterns in the magnetic material by switching an array of electromagnets on and off; the patterns would change in response to each activation.

 

However, the electromagnets were not strong enough and generated a lot of heat when activated for longer periods. We worried they would pose a safety issue during the open show because of overheating, and that the visual effect would be underwhelming because of the weak magnetism. Eventually we decided to use permanent magnets instead. This change forced us to rethink the behaviour of the magnets: they cannot be turned on and off, and are always magnetic.

Therefore, instead of activating and deactivating magnets, we decided to change their position, moving them along a slider actuated by servo motors.

It turned out the micro servo motors we had were too weak: one slider weighs around 6 kg, and the micro servo could only move loads up to 1.5 kg. We tried different solutions. First, we swapped the micro servo for a more powerful servo, the FS5106R, rated for up to 6 kg. Second, we mounted the slider vertically instead of horizontally to use gravity to our advantage, attaching the slider to the servo with a piece of fishing wire.

 

 

 

However, the slider was still too heavy, especially with all of its weight now hanging from the servo motor, and the fishing wire became less durable the longer the servo ran. Finally we landed on the idea of laying the slider horizontally and using gears to achieve linear movement of the magnets through the rotation of the servos.


 

Even then the mechanism was inadequate: the full-rotation servo did not have enough torque to move the gear plus the slider. We had to change to an even more powerful motor, which led us to stepper motors. We also favoured steppers over servos because they are simpler to code: position is specified by counting steps rather than by timing rotation.

 

With the underlying mechanism working, it was time to put it all together and begin production. We prototyped the piece in 3D modelling software to make sure everything would come together in reality.


We started building the box with the help and guidance of a friend who builds professionally.

 

Finally, this is what our piece looks like.


 

Code and Circuit:

https://github.com/jessiez0810/exp5/blob/master/nptest_2_with_sensor.ino


Reflection:

During the exhibition, the piece succeeded in creating a sense of mystery, and many visitors to the open show wondered what it was and how we did it. During the critique session, Nick suggested placing gloves beside the piece so people would know they could touch it, maximizing the level of interaction. Having discussed it within our group, however, we agreed the gloves might tamper with the eeriness the piece gave off and wouldn't fit its underlying tone. Gloves would also create a barrier between people and the texture of the powder, which is an essential part of the interaction: finding out what this alien material is through every means of exploration.

To solve this issue, we kept paper towels with us: after people touched the piece, we handed them towels to wipe the powder off. If they didn't ask us, we said nothing about touching. As expected, many people were tempted to touch the magnetic powder on display, and we let them once they asked. Interestingly, the seemingly dirty powder didn't hinder people's curiosity, and many dug their hands into it, which we took as validation that the piece achieved the sense of mystery it was after. This project took a great deal of time at every stage of the process, but it was truly rewarding to see it come together in the end after many trials and errors.


References:

AccelStepper Class Reference. (n.d.). Retrieved December 8, 2019, from https://www.airspayce.com/mikem/arduino/AccelStepper/classAccelStepper.html.

Brownlee, J. (2018, July 10). MIT Invents A Shapeshifting Display You Can Reach Through And Touch. Retrieved December 9, 2019, from https://www.fastcompany.com/3021522/mit-invents-a-shapeshifting-display-you-can-reach-through-and-touch.

Ferrolic by Zelf Koelman. (n.d.). Retrieved December 9, 2019, from http://www.ferrolic.com/.

Kodama, S. (2018). Protrude, Flow. Retrieved December 8, 2019, from http://sachikokodama.com/works/.

PomPom Mirror by Daniel Rozin. (n.d.). Retrieved December 9, 2019, from https://bitforms.art/archives/rozin-2015/pompom-mirror.

SnOil – A Physical 3D Display Based on Ferrofluid. (2009, September 20). Retrieved December 9, 2019, from https://www.youtube.com/watch?v=dXuWGthXKL8.

State Machine. (n.d.). Retrieved December 8, 2019, from http://www.thebox.myzen.co.uk/Tutorial/State_Machine.html.

 

 

 

Final assignment: Une Sculpture cinétique

By Jessie, Liam, Masha

Project Title: 

Une Sculpture cinétique (“A Kinetic Sculpture”; working title)

 

Project description:

This project will use kinetic sculptures to display patterns and movements usually found in nature, such as a bird flying or a flower blooming. Changes in the pattern will be controlled by visitors' interactions with the sculptures. The project's intention is to build a sense of connection between humans and nature and to reflect on our relationship with it.

gif-2

Parts / materials / technology list:

  • Arduino Nano / Uno
  • Servos (quantity to be decided)
  • Digital fabrication, including laser-cut wood/plastic/acrylic
  • Threads (fishing line)
  • 3 laser sensors

 

Work plan:

22–26 November – Designing patterns and going through a few stages of prototyping

27–28 November – Coding and debugging

29 November – Test-assembling parts to see if the mechanism works

30 November – Laser-cutting the final product and putting it all together

1–2 December – Final testing and debugging

 

Physical installation details:

OCAD Graduate Onsite Gallery 

 

Resource list:

A display table measuring 2 m × 1 m

An extension power cord

 

Mood Drop

By Jessie, Liam and Masha

Project description:

Mood Drop is an interactive communication application that connects individuals in different physical environments across distance through visual elements and sound on digital devices. It allows people to express and transmit their moods and emotions to others by generating melody and imagery from the interaction between users.

Mood Drop enables multi-dimensional communication, since melody naturally carries mood and emotion. It distinguishes itself from ordinary long-distance communication methods such as texting by allowing people to express their emotions in abstract ways.

Furthermore, Mood Drop embodies elements of nature and time, which often influence people's emotions. By feeding real-time environmental data such as temperature, wind speed and cloudiness into variables within the code, it sets the underlying emotional tone of a physical environment. As people interact in a virtual environment that closely reflects aspects of the physical one, a sense of the telepresence of other people in one's physical environment is created.

Code: https://github.com/lclrke/mooddrop
Process: 

Modes and Roles of Communication

Having just learned networking, we brainstormed different modes of communication. Rather than every user having the same role, we hoped to explore varying the roles played in a unified interaction. Perhaps some people only send data and some only receive it, and we could use tags to filter which data each client receives on the channel.

One idea we considered was a unidirectional chain in which each person receives data from one person and sends data to another.


 

However, we didn't pursue this idea further because we couldn't justify the choice beyond it being interesting. We eventually settled on creating a virtual community where everyone is a member and contributes equally.
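The mechanics of that shared-channel model are small enough to sketch. Below is a minimal p5.js example of the everyone-publishes, everyone-receives design, written against the PubNub JavaScript SDK the project used for networking; the keys, channel name and message fields are placeholders, not the project's actual values.

```javascript
// A minimal sketch of the shared-channel model: every client both sends
// and receives on the same channel. Keys and channel name are placeholders.
const pubnub = new PubNub({
  publishKey: 'pub-c-xxxxxxxx',   // placeholder keys
  subscribeKey: 'sub-c-xxxxxxxx',
  uuid: 'mooddrop-user-' + Math.floor(Math.random() * 10000),
});
const CHANNEL = 'mooddrop'; // hypothetical channel name

function setup() {
  createCanvas(400, 400);
  background(0);
  // Everyone receives every drop: same role, same contribution.
  pubnub.addListener({
    message: (event) => {
      const d = event.message;
      noStroke();
      fill(255);
      circle(d.x * width, d.y * height, d.size);
    },
  });
  pubnub.subscribe({ channels: [CHANNEL] });
}

function mousePressed() {
  // Everyone also sends: a click publishes a drop to all participants,
  // including ourselves (PubNub echoes our own messages back).
  pubnub.publish({
    channel: CHANNEL,
    message: { x: mouseX / width, y: mouseY / height, size: 40 },
  });
}
```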

Ideation:

Once we had settled on the idea that everyone has the same role, and had figured out PubNub, we started brainstorming. We were all interested in creating an interactive piece involving visuals and sound, so we explored the p5.js libraries for inspiration. The Vida library by Pawel Janicki gave us the idea of affecting the sound with motion detected by a webcam, but this would not work because we could not run video chat through PubNub (hence, no interaction).

Another thought was to recreate the Rorschach test: users would watch a changing abstract image on the screen and share what they saw with each other by typing.

Finally we came up with the idea of an application that lets users express their mood across distance. Using visuals and sounds, participants co-create musical compositions while far away from each other. We found an existing sketch, which became the foundation of the project, where users could affect the sound by interacting with shapes using the mouse.

Next we built a scale using notes from a chord, with frequencies spaced so that the size of the shape generated by clicking affects the mood of the transmitted sound. The lower part of the scale contains the chord's minor root notes, while the top part focuses on its higher tones: the larger the circle, the more likely it is to play the lower minor roots of the chord. The final sound was simplified to one p5.js oscillator with a short attack and sustain to give it percussive characteristics.
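A compressed sketch of that mapping, assuming p5.js with the p5.sound library; the chord frequencies below are stand-ins, not the project's exact scale.

```javascript
// Size-to-pitch mapping: larger circles favour the lower roots of the chord.
let osc, env;

// Notes of an A minor chord spread over two octaves (Hz), low to high.
const chord = [110.0, 130.81, 164.81, 220.0, 261.63, 329.63, 440.0];

function setup() {
  createCanvas(400, 400);
  background(240);
  osc = new p5.Oscillator('sine');
  osc.amp(0);  // silent until the envelope fires
  osc.start();
  env = new p5.Envelope();
  env.setADSR(0.01, 0.1, 0.0, 0.1); // short attack, no sustain: percussive
  env.setRange(0.5, 0);
}

function mousePressed() {
  userStartAudio(); // browsers unlock audio only after a gesture
  // Elsewhere in the project the circle grows while the mouse is held;
  // a random radius stands in for that here.
  const r = random(10, 100);
  // Bigger radius maps to a lower index, i.e. a lower minor root.
  const idx = floor(map(r, 10, 100, chord.length - 1, 0));
  osc.freq(chord[idx]);
  env.play(osc);
  circle(mouseX, mouseY, r * 2);
}
```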

Working on visuals

As we started working on the visual components we tried p5.js's 3D (WEBGL) renderer. We were looking for a design with a strong, clean sense of interaction when shapes connected in digital space. We also imagined the sound as a 3D object that could exist in multiple dimensions and directions, and experimented with many shapes, colors and textures.

Simplifying shapes and palette:

An important moment occurred when we were all interacting with the same page independently from home. While working on small code details, we soon found ourselves playing with each other in an unplanned session, which created an exciting moment of connection. We pivoted away from maximal visuals and sound after this to focus on that feeling, which we thought was important to emphasize. While working on the project beside each other, we had wondered why being in separate rooms mattered for demonstrating the piece; this moment of spontaneous connection through the p5.js editor window made us understand the idea of telepresence and focus on what we now saw as the heart of the project.

We decided to return to a simple black-and-white draft featuring the changing size of a basic ellipse. The newer visuals did not clearly show the parameters of the interaction, as the relationship between shapes on screen was not as clear as with a basic circle.

By adding too many aesthetic details, we felt we were predefining aspects that should define mood for the user. We found black and white the better choice of palette, as we wanted to keep the mood ambiguous and open to user interpretation.

 

 

Project Context:

The aim was to create a connection between two different environments, and we looked to transfer something more than video and text.


Place by Reddit (2017)

This experiment involved an online canvas of 1000×1000 pixel squares, located at a subreddit called /r/place, which registered users could edit by changing the color of a single pixel chosen from a 16-colour palette. After placing a pixel, a timer prevented the user from placing another for a period varying from 5 to 20 minutes.

The process of multiple people in different places co-creating one piece appealed to us, so we too designed something that enables people to feel a connection to each other. To push the idea further, we decided to make the visuals and sounds work in harmony as a coherent piece as people interact. The interactions between people are represented in the virtual space by animations of the visual elements they create and by sound on a digital device.

 


 Unnumbered Sparks: Interactive Public Space by Janet Echelman and Aaron Koblin (2014).

The sculpture, a net-like canvas 745 feet long suspended between downtown buildings, was created by artist Janet Echelman. Aaron Koblin, Creative Director of Google's Data Arts Team, created the digital component, which allowed visitors to collaboratively project abstract shapes and colors onto the sculpture using a mobile app. We borrowed the app's simplicity and abstract shapes for our interface, to make the process of interaction and co-creation more visible and understandable.

Telematic Dreaming by Paul Sermon (1993)

This project connects two spaces by projecting one directly on top of the other. Sermon's choice of two separate beds as the physical spaces raises interesting questions: it provokes a sense of discomfort when two strangers are juxtaposed in an intimate space, even though they are not really in the same physical space. The boundary between virtual and physical space becomes blurred by this play with space and intimacy.

Inspired by this idea of blurring the boundary between two spaces, we thought we could use environmental data from the physical space, visualized and represented in the virtual space on screen, which in turn exists in a physical space. In this way, not only is each user connected to their own environment; the people interacting with them are also connected to that environment, because the virtual space they share is closely tied to data from the physical one. As users interact, the line between virtual and physical blurs, and the two intertwined spaces generate an interesting sense of presence in both.

We eventually decided to feed a live Toronto weather API into our existing interaction elements. We used temperature, wind speed, humidity and cloudiness to affect the speed of the animation and the pitch and tone of the notes. For example, at midday the animation and notes run at a faster pace than in the morning because the temperature has risen, which also aligns with people's energy levels, mental states, and potentially their emotions and moods.
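A sketch of how such data can be wired in, assuming the OpenWeatherMap current-weather endpoint (one common choice; the project's exact API and mappings may have differed). The API key is a placeholder.

```javascript
// Live weather drives animation speed and a base pitch for the notes.
let animSpeed = 1;
let basePitch = 220; // would feed the oscillator's frequency

function setup() {
  createCanvas(400, 400);
  const url = 'https://api.openweathermap.org/data/2.5/weather' +
              '?q=Toronto&units=metric&appid=YOUR_API_KEY';
  loadJSON(url, gotWeather);
}

function gotWeather(data) {
  // Warmer, windier days speed the animation up...
  animSpeed = map(data.main.temp + data.wind.speed, -10, 40, 0.5, 2, true);
  // ...while heavier cloud cover pulls the base pitch down.
  basePitch = map(data.clouds.all, 0, 100, 330, 165, true);
}

function draw() {
  background(0, 20);
  fill(255);
  const x = width / 2 + sin(frameCount * 0.02 * animSpeed) * 120;
  circle(x, height / 2, 40);
  text(round(basePitch) + ' Hz base pitch', 10, 20);
}
```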

References:

Mohr, M. (2012). Cubic Limit, 1973–1974. Retrieved November 17, 2019, from https://www.youtube.com/watch?v=j4M28FEJFF8

OpenProcessing. (n.d.). Retrieved November 18, 2019, from https://www.openprocessing.org/sketch/742076.

Puckett, N., & Hartman, K. (2019, November 17). DigitalFuturesOCADU/CC19. Retrieved from https://github.com/DigitalFuturesOCADU/CC19/tree/master/Experiment4

Place (Reddit). (2019, November 4). Retrieved November 18, 2019, from https://en.wikipedia.org/wiki/Place_(Reddit)

Postscapes. (2019, November 1). IoT Art: Networked Art. Retrieved November 18, 2019, from https://www.postscapes.com/networked-art/

Sermon, P. (2019, February 23). Telematic Dreaming (1993). Retrieved November 18, 2019, from https://vimeo.com/44862244.

Shiffman, D. (n.d.). 10.5: Working with APIs in Javascript – p5.js Tutorial. Retrieved November 17, 2019, from https://www.youtube.com/watch?v=ecT42O6I_WI&t=208s

 

 

 

PLAY


By Masha Shirokova

CODE: https://github.com/MariaShirokova/Experiment3

PROJECT DESCRIPTION:

My idea was to create a multi-sensory device that lets users explore sense-crossing and engages at least three senses.

Play is a (musical? visual? tangible?) instrument with a multisensory interface: users play sounds and create their own sound-and-visual compositions on screen by interacting with tactile sensors. Each sound draws an animation over the background when played. Play lets users make a whole “orchestra” out of pom poms, glasses of water and other non-musical objects, turning a palette into a rhythmic sequencer.

For now, the device consists of three tactile objects – a glass of water, a foldable paper button and a pom-pom button – that control three modes of visuals on the screen and three sounds. In the future I would like to expand the number of objects and make the visual part more elaborate.

The device offers many performance possibilities. It can also be used for educational purposes, giving kids and adults a chance to interact with music in new and different ways.

PROJECT CONTEXT:

Hearing smells or seeing sounds are examples of synesthesia – one of my main research interests. This experiment is my first attempt to create a multi-sensory object that helps users understand how tightly the senses are crossed and connected. In the case of Play, pushing or touching DIY buttons triggers sound and colorful visual animation.

The aesthetic expression of synesthesia traces back to the paintings of Wassily Kandinsky and Piet Mondrian. It continued in the note drawings of Cornelius Cardew, who literally drew his music onto notation schemes. Sometimes these were quite identifiable notes, but their duration and relative volume were left for the performer to determine. The culmination of this approach was his book Treatise, comprising 193 pages of lines, symbols, and various geometric or abstract shapes that largely eschew conventional musical notation. The simple grid of the board and screen interface was inspired by Mondrian's geometric abstractions, classical notation schemes and the short films of Oskar Fischinger. The screen grid is affected by the sound, turning into a sound wave that changes with the volume (amplitude) of the sound.

Wassily Kandinsky was capable of “hearing” colors, which is why he composed his famous “symphony pictures” to be melodically pleasing, combining color, hearing, touch, and smell. Experimenting with perceiving the senses differently through a device can therefore be a valuable exercise for developing imagination and creativity. In his compositions, circles, arcs and other geometric shapes seem to be moving, so I likewise used simple animated shapes and bright colors to keep a connection with this artist who experienced synesthesia.

 

Composition 8 by Wassily Kandinsky

Composition London by Piet Mondrian

Drawn notes from “Treatise” by Cornelius Cardew

Working on the sound was a new experience for me. I picked three different sounds: a rapid drum sound and two xylophone sounds. The Russian band SBPCH (Samoye Bolshoe Prostoe Chislo) plays electronic music based on simple but pleasant sounds of water, rain, glass or ping pong balls, and I wanted to reach for the same effect in choosing my sounds. This is how I “hear” collapsing and growing shapes.

As for the tactile part, my goal was to make the tangible experience as diverse as I could, so I included a soft pom-pom button, a paper button and a glass of water. Users experience something soft and colourful, something dry and solid, and something liquid – a place where contrasts of touch, sound and visuals mix together.

I first saw the possibility of adding water to the circuit in a video by Adafruit Industries. Then I realized they use different boards that rely on capacitive touch, so I started looking for other ways to use water as a sensor. I added salt, and it worked!

 

PROCESS:

In the first week, I brainstormed some initial ideas for the project:

  • Shadow play
  • Use of bubble wrap
  • Game based on the principle of Minesweeper Online Game
  • Multi-sensory device

I decided on the last one, as it represents my research interest and will hopefully feed into my thesis.

The code from the first class, provided by Kate, became the foundation for my project. I replaced the potentiometers with DIY sensors and added more detail to the Processing code (sound and visuals).


Circuit from the first class

Visual interface:


Interface sketches

For the grid, I used a sound wave (the same method we used in Echosystem for Experiment 1), affected by the amplitude of the sound.

1. 3D rotating cubes for the starting screen, using the P3D library and rotation.

 

2. The first sensor activates the yellow square (its Y position is mapped to the sensor value) and the “play more” text.


 

 

3. The second sensor activates a static composition of a star and rectangles.

 

 

4. The third sensor activates text and a circle whose fill color is randomized – fill(0, random(0,240), 255) – and whose Y position is also mapped to the sensor value. It additionally activates three more ellipses whose sizes change with frameCount at different rates, so they look like a water surface. The third sensor is also responsible for the sound wave. (See the sketch below.)
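The display code for Play was written in Processing; the p5.js rendition below (the language used for the other sketches in this post) illustrates step 4. The variable `sensorValue` is a stand-in for the analog reading arriving from the Arduino.

```javascript
// Step 4: randomized-fill circle mapped to the sensor, plus three
// frameCount-driven ellipses that ripple like a water surface.
let sensorValue = 512; // hypothetical 0–1023 analog reading

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(255);
  const y = map(sensorValue, 0, 1023, height, 0);

  // Circle with a randomized green channel, Y mapped to the sensor.
  noStroke();
  fill(0, random(0, 240), 255);
  circle(width / 2, y, 60);

  // Three ellipses sized by frameCount at different rates.
  noFill();
  stroke(0, 120, 255);
  for (const k of [1.0, 0.6, 0.3]) {
    const d = 60 + (frameCount * k) % 120;
    ellipse(width / 2, y, d, d * 0.6);
  }
}
```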

 

 



Two sensors are being activated

 

Figuring out the sound:

Wiring the potentiometers to the Arduino and writing the code for the three DIY sensors was simple. Working with multiple sounds, however, was a bit challenging. I looked at the sound libraries available for Processing and found the Sound library and the Minim library. With two sounds it was convenient to use both, since I could stop and play sound files from the two libraries independently. But when I added a third sound, it did not play. So instead of pausing sounds I changed their volume, using only the Sound library (see the sketch below).
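The project used Processing's Sound library; the p5.js sketch below expresses the same workaround with placeholder file names: all three files loop continuously, and a “sensor” (a key here) only raises or lowers volume, since pausing and resuming broke once a third sound was added.

```javascript
// Volume-toggling workaround: never pause, just mute and unmute.
let drum, xylo1, xylo2;

function preload() {
  drum = loadSound('drum.wav');   // placeholder file names
  xylo1 = loadSound('xylo1.wav');
  xylo2 = loadSound('xylo2.wav');
}

function setup() {
  createCanvas(400, 400);
  for (const s of [drum, xylo1, xylo2]) {
    s.setVolume(0); // start silent...
    s.loop();       // ...but keep every file running
  }
}

function mousePressed() {
  userStartAudio(); // unlock audio after the first gesture
}

function keyPressed() {
  // Keys 1–3 stand in for the three DIY sensors.
  if (key === '1') drum.setVolume(1);
  if (key === '2') xylo1.setVolume(1);
  if (key === '3') xylo2.setVolume(1);
}

function keyReleased() {
  if (key === '1') drum.setVolume(0);
  if (key === '2') xylo1.setVolume(0);
  if (key === '3') xylo2.setVolume(0);
}
```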

Testing sounds

 

Combining sound+image

3 sounds

DIY sensors:

I was excited to work with different materials and give users very different experiences. Inspired by a performance in which the players used only fruit to make music, I initially wanted to use a lemon as one of the sensors. However, there were two “not enoughs”: not enough voltage, and the lemon was not conductive enough. I switched to squishy circuits and tested play-dough, but it was also unreliable, even with two different resistor values – the play-dough sensor only worked as an on/off switch. I therefore settled on two buttons (paper and pom pom), which still offer different interactions and contrasting materials. I also still wanted to use a glass of water as a sensor. Although I did not manage to make it respond to a user's touch, I did make it work as a simple switch (the sensor reads its maximum value when both clips are in the glass). Salt helped a lot here by making the water more conductive.

I preferred copper tape to foil as the conductive material, as it was less flexible and more stable.

  • Lemon
  • Play-dough
  • Water
  • Fluffy Button & Paper Button


Assembling and making:

Since I see this device as an open structure to which extra tangible objects can be added, I decided to keep the structure of the model exposed to the public too, and did not hide the wires connecting the button and the glass of water.

The tactile part consisted of a laser-cut board, the pom-pom button (fluffy balls glued to a card carrying one copper piece covered with Velostat, while the other copper piece was glued to the board), and the paper button (the sensor was hidden inside, so pressing it let the two parts of the sensor connect).

For the visual interface, I laser-cut the board and a box in which I could hide the breadboard.




REFLECTIONS:

The best part of this experiment is that my classmates were genuinely interested in interacting with the device and enjoyed creating beats and melodies.

A few days after the presentation, I am looking at the project and wondering why I did not manage to make it more complex. I know I should have edited and tuned the sounds so the resulting melody would be better.

In general, I enjoyed working on this project, as I got to play with my favorite materials: sound, image and tactile objects. However, because I set out to use all three senses, I did not have enough time to work on the quality of the sound. My next step is to improve the Processing code by adding sounds and enriching the visual part.

Another plan is to match the sound, visual and tactile parts to real data gathered from people who experience synesthesia. This phenomenon can be inspiring even for those who do not experience mixed senses: the vocabulary of synesthesia is a rich source of inspiration, since its color and music associations are poetic and metaphorical. Perhaps users will develop their very own vocabulary of vision to experience art more fully, and hopefully a future version of PLAY can help expand our sensory experience.

REFERENCES

16 Pineapples – Teplo. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=SimccVMCpv4.

Adafruit Capacitive Touch HAT for Raspberry Pi – Mini Kit – MPR121. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=Wk76UPRAVxI&list=PL5CF99E37E829C85B&index=130&t=0s.

An Optical Poem (1938) by Oskar Fischinger. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=_kTbt07DZZA.

Chen, P. A. (2016, November 15). How to add background music in Processing 3.0? Retrieved November 5, 2019, from https://poanchen.github.io/blog/2016/11/15/how-to-add-background-music-in-processing-3.0.

“Early Abstractions” (1946-57), Pt. 3 by Oskar Fischinger. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=RrZxw1Jb9vA.

Nelzya Skazat’ Koroche by SBP4. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=XIictPv-5MI.

Puckett, N., & Hartman, K. (2018, November 2). DigitalFuturesOCADU/CC18. Retrieved from https://github.com/DigitalFuturesOCADU/CC18/tree/master/Experiment%203

Swinging. (n.d.). Retrieved November 5, 2019, from https://works.jonobr1.com/Swinging.

Visual music. (2019, September 19). Retrieved November 5, 2019, from https://en.wikipedia.org/wiki/Visual_music.

Experiment 1 – Echosystem


Group: Masha, Liam, Arsalan

Code: https://github.com/lclrke/Echosystem

Project description

This installation involves 20+ screens and the participants who create a network through sound. Incoming sound is measured by the devices, and this data influences the visual and auditory aspects of the installation: sound data is used as a variable within functions that set the size and shape of the visuals. Audio synthesis in p5.js creates sound that responds to participants' input, and the features of the oscillators are likewise determined by data from the external audio input.

While the network depends on our participation, the devices concurrently relay messages to each other through audio. Once we start the “conversation” there is a cascading effect as the screens interact through sound, creating a two-way communication network via analog (acoustic) transmission.

Visually, every factor on the screen is affected by participant and device interactions. We created a voice-synchronized procession of lines, color and sound that highlights and explores sound as a drawn experience. The installation changes continuously: the incoming audio data influences how each segment is drawn in terms of shape, number of lines and scale. This contrasts with a drawing or painting that is largely fixed in time, and creates an opportunity to draw with voice and sound. Through interaction, participants are able to affect the majority of the piece, bridging installation and performance art.

Process:

Week 1

The aim of our early experiments was to create connections between participants through the devices rather than through an external dialogue. We started by brainstorming various ideas and identified two directions:

  1. A game or play that would involve and entertain participants;
  2. An audio/visual installation based on interaction between participants and devices.

First, we planned to create something funny and entertaining and sketched some ideas for the first direction.

OCAD Thesis Generator: Participants would generate a random nonsensical thesis and subsequently have to defend it.


Prototype: https://editor.p5js.org/liamclrke/present/9fBGEz9CH

Racing: Similar to slot car racing, you have to hum to stay within a certain speed in order to not crash. Too quiet and you’ll lose.

Inspiration: https://joeym.sgedu.site/ATK302/p5/skate_speed/index.html

Design Against Humanity: Screens used as cards. Each screen is a random object when pressed. Have to come up with the product after. Ex. “linen” & “waterspout” → so what does this do?

Panda Daycare: Pandas are set to cry at random intervals. Have to shake/interact with them to make them not cry.

Sketch: https://editor.p5js.org/liamclrke/present/MpxLmb1jQ

 

Week 2

After further exploring P5.js, we decided we were more interested in creating an interactive installation rather than a game.  

Raw notes/ideas for installation:

Wave Machine: An array of screens would form an ocean. Using amplitude measurements from incoming sound, the ocean would get rougher depending on the level of noise; moving along the array while making noise would create a wave.

Free form Installation: Participants  activate random sounds, images and video with touch and voice. Images include words in different languages, bright videos and gradients and various sounds. (this idea was developed into the final version of the experiment)

Week 3 

We agreed to work on an art installation involving sounds, images and videos affected by participant interaction. An installation seemed more attractive and closer to our interests than a game, and we figured we could combine our skills to create a stronger project and function as a cohesive team.

That week we produced graphic sketches, short videos and chose sounds we would want to use in the project:



At this step, we took inspiration from James Turrell and his work with light and gradients.

Week 4

Uploading too many images, sounds and videos made the code run slowly on devices with less processing power, so we changed the concept to a single visual sketch and used p5.js audio synthesis.

We were looking for a modular shape that expressed the sound in an interesting way, beyond a directly representative waveform. We started with complicated gradients that overtaxed the processors of mobile phones, so we dialed down certain variables in the draw function: line-segment density was the amplitude multiplied by a constant, which we lowered until the image could be drawn without latency.
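In outline, the density control worked like this (a minimal sketch, assuming p5.sound's AudioIn and Amplitude; the multiplier and offsets are illustrative, not the hand-tuned values we shipped):

```javascript
// Amplitude-driven line density: louder input draws more segments.
let mic, amp;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start();
  amp = new p5.Amplitude();
  amp.setInput(mic);
}

function draw() {
  background(255);
  const level = amp.getLevel(); // roughly 0.0–1.0
  // Line density scales with amplitude; lowering the multiplier is what
  // kept weaker phones drawing without latency.
  const density = 5 + floor(level * 80);
  stroke(0);
  for (let i = 0; i < density; i++) {
    const y = map(i, 0, density, 0, height);
    line(0, y, width, y + level * 100);
  }
}
```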

The final image is a linear abstraction, drawn through external and internal sound.

Project Concept:

The project was inspired by multiple art works.

Voice Array by Rafael Lozano-Hemmer: When a participant speaks into an intercom, the audio is recorded and the resulting waveform is represented in a light array. As more speech is recorded, the waveforms are pushed down the horizontal array, which plays back the 288 previous recordings. When a recording reaches the end, it is released as a solo clip. This inspired us to use audio as a way to sync devices without a network connection.


Paul Sharits: Kate Mondloch's Screens: Viewing Media Installation Art, which we used for research, introduced us to Paul Sharits' work with screens. Sharits was known for avant-garde filmmaking, often presented across multiple screens and accompanied by experimental audio. We took this concept and reformatted it into an interactive design.


Manfred Mohr: Mohr is a pioneer of digital art who uses algorithms to create complex structures and shapes. His visual simplicity, driven by more complex underlying theory, was a creative driver for the first iteration of Echosystem.


Challenges and solutions:

  1. The first challenge was lag caused by overloading processors with multiple video, sound and image files. These files slowed down the code, especially on phones, so we switched to p5.sound synthesis and creative coding to draw the image.
  2. The first sketches were based only on touch, which did not create a strong enough interaction between participants; the solution was to add voice and sound, which affect the characteristics (amplitude and pitch) of the oscillators.
  3. In earlier ideas it was difficult to manipulate videos and images (scaling and filters), so we created a simplified image in p5.js consisting of lines of different colors. This allowed the number of lines drawn to be driven by audio input data.
  4. To organize the physical space, we initially planned to build a round stand for the devices, creating a circle that would bring participants together around the installation. However, the different sizes and weights of the devices complicated things.

5. Another idea was to hang the screens from the ceiling, but the construction was too heavy. Without the right equipment, we simplified these concepts and placed the screens on flat horizontal surfaces, so the number and size of devices was not limited.

6. The synthesizer built in p5.js led to a number of challenges. The audible low and high ends of a tablet differed greatly from those of a phone, and certain frequencies sounded unpleasant depending on the device's speaker. Through trial and error, we narrowed the pitch range that could be modulated by audio input for maximum clarity across devices. There was also the risk of a continuous feedback loop, so the oscillator's amplitude had to be calibrated in a similar fashion; devices had to sit within a certain distance range or feedback would run away. Finally, we added a low-pass filter as a fail-safe to control the sound, since the presentation setup would be less controlled than our tests. (See the sketch below.)
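A sketch of those safeguards, assuming p5.sound; the cutoff frequency, pitch band and amplitude cap below are illustrative, not our calibrated values.

```javascript
// Feedback safeguards: clamp the modulated pitch to a narrow band,
// cap the oscillator's amplitude, and route output through a low-pass.
let mic, amp, osc, filter;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start();
  amp = new p5.Amplitude();
  amp.setInput(mic);

  filter = new p5.LowPass();
  filter.freq(1200); // fail-safe: tame harsh highs on small speakers

  osc = new p5.Oscillator('triangle');
  osc.disconnect();    // detach from the master output...
  osc.connect(filter); // ...and route through the low-pass instead
  osc.start();
}

function mousePressed() {
  userStartAudio(); // browsers unlock audio only after a gesture
}

function draw() {
  const level = amp.getLevel();
  // Keep the modulated pitch inside a band that sounded acceptable
  // on every device we tested.
  osc.freq(map(level, 0, 0.3, 220, 440, true));
  // Cap the oscillator's amplitude so nearby devices do not drive
  // each other into runaway feedback.
  osc.amp(constrain(level * 2, 0, 0.4), 0.05);
}
```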

Reflection:

Although we managed to involve 20 screens and our groupmates in the process of creating sounds and images, the design of the presentation logistics could have been more concrete. With preparation and set placement of screens, the project scales well beyond 20 screens and participants.

The first question we asked upon assignment was whether we could overcome sync issues while keeping the devices off a network. Through the use of responsive sound we created an analog network of sound, resulting in a visual installation blurring the lines between participant and artist.

References:

  1. Early Abstractions (1947–1956), Pt. 3. https://www.youtube.com/watch?v=RrZxw1Jb9vA
  2. Mondloch, Kate. Screens: Viewing Media Installation Art. University of Minnesota Press, 2010.
  3. Shiffman, Daniel. 17.9: Sound Visualization: Graphing Amplitude – P5.js Sound Tutorial. https://youtu.be/jEwAMgcCgOA
  4. Sketches made in Processing by Takawo. https://www.openprocessing.org/sketch/451569
  5. Rafael Lozano-Hemmer – Various works. http://www.lozano-hemmer.com/projects.php
  6. United Visual Artists – Volume. https://www.uva.co.uk/features/volume
  7. Carsten Nicolai – Unidisplay. https://collabcubed.com/2012/10/16/carsten-nicolai-unidisplay/
  8. James Turrell. http://jamesturrell.com/