
Zen Water

However important appears
Your worldly experience,
It is but a drop of water in a deep ravine.
– Tokusan

 

1. Introduction

Having an idea for the second Creation and Computation project was easier than it was for the first one. I like to spend time near rivers and lakes, so I thought I could probably translate those experiences into an interactive project. By the time the Water Project was assigned, I happened to be reading about granular synthesis and realized I could use the grain concept as a metaphor for the way I try to perceive reality. Water is not something that exists by itself; it is just the interaction of hundreds and hundreds of smaller particles that are constantly moving.
Since grains are a major part of this project, the fundamental question is “what is a grain”? Opie (1999) writes:

Like a nut, which has two basic elements, the shell, and the flesh contained within, a grain has an envelope which contains the actual sonic content. These two parts or parameters: contents and envelope make up an entire sonic grain. The sonic grain itself has a very short duration and as a single entity would seem very insignificant, but once the grain becomes part of a granular population, these two parameters make a big difference to the sound. Let us begin by looking at the contents.

Roads (2002):

A grain of sound lasts a short time, approaching the minimum perceivable event time for duration, frequency, and amplitude discrimination (Whitfield 1978; Meyer-Eppler 1959; Winckel 1967). Individual grains with a duration less than about 2 ms (corresponding to fundamental frequencies > 500 Hz) sound like clicks. However one can still change the waveform and frequency of grains and so vary the tone color of the click. When hundreds of short-duration grains fill a cloud texture, minor variations in grain duration cause strong effects in the spectrum of the cloud mass. Hence even very short grains can be useful musically.

Short grains withhold the impression of pitch. At 5 ms it is vague, becoming clearer by 25 ms. The longer the grain, the more surely the ear can hear its pitch.

And Ross (2001):

Granular Synthesis or Granulation is a flexible method for creating animated sonic textures. Sounds produced by granular synthesis have an organic quality sometimes reminiscent of sounds heard in nature: the sound of a babbling brook, or leaves rustling in a tree.

As can be seen from these three definitions, the sound produced by water is essentially nature's own granular synthesis. The same idea can be adapted to visual content: instead of reducing sound into grains, video or images are reduced to grains. In this case a grain could be a tiny slice of time in a video, or a single pixel or small group of pixels. Joshua Batty, who ventured into creating an audio-visual granular system, adds that:

Visual granular synthesis follows a similar deconstruction process by which visuals are broken down into micro segments, manipulated and rearranged to form new visual experiences. Applying the process of granular synthesis to both audio and visual material reduces the material to their smallest perceivable properties.

Another influence for this project is the conceptual work One and Three Chairs by Joseph Kosuth.

One and Three Chairs

In this piece Kosuth plays with semiotics and shows three ways of representing the same object. Zen Water also tries to explore the idea of water from different perspectives, by deconstructing images and sounds of water and rebuilding them in ways that resemble water but are not water at all. Hundreds of grains of water sound played together sound to human ears almost exactly like a conventional recording of water, but ultimately they are not the same. And having hundreds of images of water playing at the same time may mimic the way water behaves in nature, but it is not a precise representation of water; it is an Icon. An Icon is never a precise representation of the Sign it represents, and this is the ground on which this project is built.

2. Project Description

Zen Water consists of two main elements: the granular water synthesizer and the granular visual projection. Supporting the audio portion are sounds of water drips that reinforce the idea that water is formed by many particles interacting with each other. Sounds of seagulls and an ambient sound were also added: the first because of my appreciation of the sound those birds make, and the second to add some atmosphere and musical quality to the project. To support the visual part, images are projected onto a sandbox that is meant to take the audience to the places in nature where water is found; water is usually near sand or rocks. Sand was also chosen as a material because it is made of thousands of grains. The objective of the project is to lead the audience to meditate on the underlying interactions that make up everything in the world, similar to how Zen philosophy describes it.

3. Code

ChucK Granular Synthesizer

The first program written for the project was the granular synthesizer in ChucK. The program works as follows:

Seven audio files are loaded into the program. A function turns one of the files into a grain of sound with random values for the playback start position and speed. An envelope fades the grain in and out (this is important for eliminating issues when the file starts playing, such as the unwanted pop that naturally occurs at certain audio frequencies). Each grain can range from 40 to 400 milliseconds, a length that is longer than what a grain should have according to the reviewed literature, but the results achieved with the longer durations were closer to what I had in mind.

The main part of the program, where the function is called, determines which of the seven files will be turned into a grain. Of the seven files, three are heavily processed and intended to sound like wind, while the other four are regular water sounds. The probability of the program choosing one of the four water sounds is significantly higher than that of choosing one of the three processed ones. Each time the program runs, a number between 5 and 100 is chosen; this number determines how many grains will be played, and the time between grain triggers varies between .005 and .08 seconds.

The program runs every time it receives an OSC message containing a MIDI note. For best results, only notes in the range 44 to 51 should be used, because this range controls the amount of reverb mixed into the grain sound. I am not aware of a mapping function that maps a value from one range of numbers to another, so a normalization equation was used instead:

0 + (oe.getInt() - 44) * (.6 - 0) / (51 - 44) => float revVal;
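
To make the description above more concrete, here is a minimal sketch of how such a grain function and main loop could look in ChucK. This is a reconstruction based on the description, not the original code: the file names, envelope times, probability threshold and fixed reverb value are all assumptions, and for simplicity each grain reloads its file rather than keeping the buffers loaded.

// sketch of one grain: file names, envelope times and values are assumed
fun void grain( string filename, float revMix )
{
    SndBuf buf => ADSR env => NRev rev => dac;
    filename => buf.read;
    revMix => rev.mix;

    // random playback speed and start position inside the file
    Math.random2f( 0.5, 2.0 ) => buf.rate;
    Math.random2( 0, buf.samples() - 1 ) => buf.pos;

    // grain length between 40 and 400 ms, faded in and out by the envelope
    Math.random2f( 40, 400 )::ms => dur grainLen;
    env.set( 5::ms, 0::ms, 1.0, 10::ms );
    env.keyOn();
    grainLen - 10::ms => now;
    env.keyOff();
    10::ms => now;
}

// main loop: each grain runs as its own shred so grains can overlap
Math.random2( 5, 100 ) => int numGrains;
for( 0 => int i; i < numGrains; i++ )
{
    // water samples are chosen more often than the processed "wind" ones
    "" => string file;
    if( Math.random2f( 0, 1 ) < 0.8 )
        "water" + Std.itoa( Math.random2( 1, 4 ) ) + ".wav" => file;  // hypothetical file names
    else
        "wind" + Std.itoa( Math.random2( 1, 3 ) ) + ".wav" => file;   // hypothetical file names

    // revVal computed from the incoming MIDI note would be passed here
    spork ~ grain( me.dir() + file, 0.3 );
    Math.random2f( .005, .08 )::second => now;
}
1::second => now;  // give the last grains time to finish

In the actual project this loop is wrapped in the OSC handler, so a new cloud of grains is triggered each time a MIDI note arrives and revVal sets the reverb mix.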

Processing Granular Lake

The algorithm used to generate the grains of video, as mentioned in the Experiment 1 documentation, came from code found on p5art. The Processing program loads a video of a lake and creates grains of video that are controlled by OSC messages. There are 8 possible OSC messages the program can receive, one for each knob on the MIDI controller, and they control grain size and colour. My initial idea was to design a process that generated the grains of video in a way similar to how the grains of sound were generated in ChucK, but since I found code that produced the same end result, I decided to use it and focus on other experiments.

Max/MSP Patch

MaxPatch

The Max/MSP patch is where all the incoming data is received and interpreted. It sends MIDI note numbers to ChucK and the 8 knob values, which range from 0 to 127, to Processing. The patch also plays 10 different audio samples: 8 of them are water drips, 1 is seagull sounds, and the last one is an ambient sound played using a synthesizer in Ableton Live. All 10 samples can be mixed live using the 8 MIDI controller knobs. The samples start playing when the program receives MIDI note 44. MIDI note 51 makes all samples play in reverse (note 51 is also the one that makes ChucK play the grains with the highest amount of reverb).

4. Links to Experiments

For the eight experiments we were required to conduct, a few revolved around procedural image generation and audio-reactive animation. Although those experiments did not end up as part of the final project, they are all topics I want to explore further.
Below are links to all 8 blog posts describing each experiment.

4.1. Recording and Processing Audio and Video
4.2. Painting with a Strange Attractor Algorithm
4.3. The Proper Way to Create a 2D Plot in Processing
4.4. FFT in Processing
4.5. Granular Synthesis in ChucK
4.6. MIDI and OSC
4.7. Polyrhythm in ChucK
4.8. Processing P3D and FFT

5. Project Context

Context

The context in which the Zen Water project would be displayed is not too dissimilar to the Graduate Gallery where it was presented. It would stand in a white room reminiscent of the spaces where mid-20th-century minimal art was displayed. Computers and cables would be hidden so they do not distract the audience. The project is also inspired by Zen aesthetics, and that should be reflected in the context in which it is displayed.

6. Further Development

Throughout the design process, the initial concept changed slightly from a minimal piece of audio-video projection into a live performance controlled by a MIDI keyboard. I would like to go back a few steps and transform the installation into a self-contained piece that serves for contemplation or meditation and does not need someone controlling the audio and video.

Project 2 Experiment 8 – Processing P3D and FFT

I will start the last experiment description with a burst of honesty: by the time I did the seventh experiment, the second project for the Creation and Computation class was finished. So any experiment done after that point wouldn’t affect the outcome of the main project in any way. Because of that, the eighth experiment was something I did just to explore some of my personal creative interests. At first I wanted to try fluid dynamics in Processing. The diewald_fluid library for Processing gives interesting results but I was having too much trouble trying to create my own code using it, and given how complicated the mathematics behind fluid dynamics are, I decided it would be better to try something else.
After spending some time on the Processing website, I found a short example showing how to create a sphere, a camera, and a light in Processing: in other words, a very simple 3D scene. So I decided to revisit the fourth experiment and use FFT to animate the parameters of the 3D scene. In the example video, the object size and camera position are animated based on audio analysis. Lights were not used because I thought removing the objects' shading and keeping only the wireframes looked interesting. This short experiment opened up new possibilities for exploring audio and video interaction. I would like to try animating something such as a displacement map on a more complex 3D object and see the results. This could also be done in a 3D package such as Cinema 4D, which offers options to use audio tracks to animate parameters, but that would not offer real-time results. Another plan I have for further exploring 3D and audio interaction is the software TouchDesigner. I have seen a few projects around the web that showed that software's capability for handling 3D object manipulation in real time, and that is something I will spend time attempting to achieve in the near future.

Project 2 Experiment 7 – Polyrhythm in ChucK

While preparing audio clips of water dripping in a random way in Ableton Live for the second project, I had the idea of trying to create a program that would handle polyrhythms in ChucK. From the start it was an experiment that was not meant to be used in the presentation version of the project, because I was happy with the random-sounding clips made in Ableton.
ChucK's easy-to-program concurrency seemed like a relatively straightforward way to create polyrhythms. My idea was to have a function that, each time it was called, would run in parallel with the other calls of itself, each call handling different note timings. For the different timings, tuplets were stored in arrays passed to the function. Another array stored the gain for each note, in order to control which beat should be stressed and to have more control over the feel of the note sequence.
The function ended up looking like this:

fun void sequencer( int beatArray[], float gainArray[], float duration, SndBuf sampleName )
{
    while( true )
    {
        for( 0 => int i; i < beatArray.cap(); i++ )
        {
            if( beatArray[i] == 1 )
            {
                gainArray[i] => sampleName.gain;
                0 => sampleName.pos;
            }
            duration::second => now;
        }
    }
}

Very simple, but it worked as intended. The samples I used to test the program were recorded from a metal bucket filled with water; I had had them on my computer for some time but had not used them until writing this ChucK code. For the example video, all instances of the function start at the same time; further rhythmic complexity can be added by making them not share a starting point, so that they only line up every certain number of measures.
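
As an illustration of how the function can be used, here is a minimal sketch that sporks two instances in a 3-against-4 relationship; the sample file names and the patterns are hypothetical, not the ones from the video.

// hypothetical samples and patterns, for illustration only
SndBuf bucket1 => dac;
SndBuf bucket2 => dac;
me.dir() + "bucket-hit-1.wav" => bucket1.read;  // assumed file name
me.dir() + "bucket-hit-2.wav" => bucket2.read;  // assumed file name

// 3 evenly spaced hits against 4, both loops lasting 1.2 seconds
[ 1, 0, 1, 0, 1, 0 ] @=> int triplet[];
[ 1, 0, 1, 0, 1, 0, 1, 0 ] @=> int quad[];
[ 1.0, 0.0, 0.6, 0.0, 0.8, 0.0 ] @=> float tripletGain[];
[ 1.0, 0.0, 0.5, 0.0, 0.7, 0.0, 0.5, 0.0 ] @=> float quadGain[];

// each call runs as its own shred, in parallel
spork ~ sequencer( triplet, tripletGain, 0.2, bucket1 );
spork ~ sequencer( quad, quadGain, 0.15, bucket2 );

// keep the parent shred alive while the sequencers play
while( true ) 1::second => now;

Advancing time by a fraction of a measure before one of the spork calls would make the two patterns drift against each other, so they only line up every few bars.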

Project 2 Experiment 6 – MIDI and OSC

The sixth experiment revolved around making programs talk to each other. For the second Creation and Computation project I wanted to use a MIDI keyboard controller (an Akai MPK Mini) to send messages to several applications at the same time. I tried connecting the controller to the computer and opening two applications to receive the incoming MIDI messages, but only one of them was able to receive them. As far as I know, the only way to have multiple applications receiving MIDI messages from a single controller is to have a main program that relays those messages via a communication protocol.
After looking online for a communication protocol I could use, OSC seemed like the protocol of choice for music and MIDI applications. Max/MSP, with its visual programming environment, was chosen as the main program from which the MIDI messages would be distributed to the other applications. Getting the MIDI data was pretty straightforward: only midiin and ctlin objects were needed to receive the MIDI note numbers and velocity and the values from the 8 knobs. Sending that data to the other applications via OSC was trickier. I tried to do what was suggested in the documentation and in online forum posts but was having some difficulty until Hector showed me the correct way to write the OSC message. It was then possible to establish OSC communication from Max/MSP to Processing. I had to write a series of if statements in both Max and Processing to separate the values coming from each of the 8 knobs. Max has objects for handling lists, which would help in this situation, but I have not used them yet and felt a series of ifs would be easier at this point. Receiving OSC messages in ChucK involved a similar process. ChucK has native support for OSC and did not need a library as Processing did. The final result has Max sending the 8 knob values, ranging from 0 to 127, to Processing and the MIDI note numbers to ChucK.
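
For reference, the ChucK side of this link can be as small as the sketch below, which uses the OscRecv/OscEvent API that the oe.getInt() call in the project code suggests; the port number and address pattern here are assumptions, since the post does not list the ones actually used.

// minimal ChucK OSC receiver; port and address pattern are assumed
OscRecv recv;
6449 => recv.port;
recv.listen();

// ask for messages at an assumed "/midinote" address carrying one integer
recv.event( "/midinote, i" ) @=> OscEvent oe;

while( true )
{
    oe => now;                  // wait until a message arrives
    while( oe.nextMsg() != 0 )
    {
        oe.getInt() => int note;
        <<< "received MIDI note:", note >>>;
        // here the note would trigger the grain cloud and set the reverb
    }
}

On the Max side, the corresponding messages are typically sent with a udpsend object pointed at the same host and port.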

Project 2 Experiment 5 – Granular Synthesis in ChucK

Experiment 5 is an important one for my project because of its relevance to the idea I want to explore. It was done in ChucK, a “programming language for real-time sound synthesis and music creation” (from the language website). A granular synthesizer could be built in many different programming languages, and also in Max/MSP, but ChucK's native and easy-to-use concurrency was something I wanted to try. Each grain of sound is an individual ChucK shred (a process), so the grains can overlap. I believe this method is a close rendition of how granular synthesis should work according to the articles analysing and discussing this synthesis technique (this will be further discussed in the project blog post). Each grain of sound is randomly chosen from six different water audio samples. Three of these samples are ‘normal’ water sounds and the other three are processed water sounds. The probability of the program choosing the normal samples is greater than that of choosing the processed ones. This was meant to add a degree of randomness to the audio texture, and the processed sounds, when heard as grains, resemble the sound of wind.
In the first section of the video, each grain's length can range from 40 milliseconds to almost half a second. The second section has shorter grains – their length is closer to what the literature suggests a grain should have – and I kept adding instances of the granular synthesizer until the program crashed.

Project 2 Experiment 4 – FFT in Processing

The fourth experiment is something I had been interested in trying for some time. Before attempting to use a fast Fourier transform algorithm in either Processing or Max/MSP, I tried to understand how it works, but had little success because of my lack of mathematical knowledge. Understanding the concepts behind the FFT and using a pre-built class in Processing are two very different things.
A fast Fourier transform can be done in Processing using either the Minim or the Beads library. I looked at the example code for both and Minim seemed easier to implement. I had problems when I first tried to run the example code from the Minim library; the problem came from the examples using P3D as the renderer, and updating my graphics driver made everything work fine.
I used Minim's FFT to control the a, b, c, and d variables and the colour of the strange attractor from the previous experiment. It resulted in random images produced by the combination of the variable inputs, and I was happy with the audio-visual interaction produced.
For a second experiment with the FFT, I took code previously written for Body-Centric Technologies and, instead of controlling the x and y size of an ellipse with an accelerometer, used the FFT. Colour is also controlled by audio.

Project 2 Experiment 3 – The Proper Way to Create a 2D Plot in Processing

The second experiment left me puzzled. Why did the code work in R but not in Processing? The solution came in the form of an idea while I was taking a shower the next day (I mention this mundane detail only to remind myself how the mind makes better connections and associations when it is not busy trying to find a solution to a problem). A few days before attempting to write that code in Processing, I had been looking at the libraries available for Processing. One of them, grafica, is meant for creating 2D plots; at the time I thought I would probably not use it, so I just threw that information to the back of my mind to be retrieved later.
With the grafica library, the code finally worked. I still cannot really understand why it did not work before, though. I played with different values for a, b, c, and d and got many different results. All of them were animated by adding a small amount of randomness. The changes in the values have to be really small, otherwise the image might break.

Project 2 Experiment 2 – Painting with a Strange Attractor Algorithm

The second experiment began when Gary and I were browsing the internet and stumbled upon a website showing the two lines of code required to create the formula for a strange attractor.

xnew=sin(y*b)+c*sin(x*b)
ynew=sin(x*a)+d*sin(y*a)

We tried to make the code work in Processing, but the result was strange images that were definitely not similar to the ones shown on the website.
I spent some time on Google looking for more information on that particular strange attractor and found a link that explained how to create it using the R language, a programming language used for statistics. The code provided on the website worked fine in R, and I played with the values for a while, trying to make interesting-looking images.
I then switched back to Processing but still could not make the code work. Before giving up on the strange attractor formula, I added a few lines of code that created an ellipse, used the results of the formula to position it on the x and y coordinates, and used the same data to colour the ellipses. The result had a minimal, painterly look to it. I also swapped the ellipses for squares and then lines.
