Zen Water

However important appears
Your worldly experience,
It is but a drop of water in a deep ravine.
– Tokusan


1. Introduction

Having an idea for the second creation and computation project was easier than the first one. I like to spend time near rivers and lakes, so I thought I could probably translate those experiences into an interactive project. By the time the Water Project was assigned, I happened to be reading about granular synthesis and realized I could use the grain concept as a metaphor for the way I try to perceive reality. Water is not something that exists by itself; it is just the interaction of hundreds and hundreds of smaller particles that are constantly moving.
Since grains are a major part of this project, the fundamental question is “what is a grain”? Opie (1999) writes:

Like a nut, which has two basic elements, the shell, and the flesh contained within, a grain has an envelope which contains the actual sonic content. These two parts or parameters: contents and envelope make up an entire sonic grain. The sonic grain itself has a very short duration and as a single entity would seem very insignificant, but once the grain becomes part of a granular population, these two parameters make a big difference to the sound. Let us begin by looking at the contents.

Roads (2002):

A grain of sound lasts a short time, approaching the minimum perceivable event time for duration, frequency, and amplitude discrimination (Whitfield 1978; Meyer-Eppler 1959; Winckel 1967). Individual grains with a duration less than about 2 ms (corresponding to fundamental frequencies > 500 Hz) sound like clicks. However one can still change the waveform and frequency of grains and so vary the tone color of the click. When hundreds of short-duration grains fill a cloud texture, minor variations in grain duration cause strong effects in the spectrum of the cloud mass. Hence even very short grains can be useful. Short grains withhold the impression of pitch. At 5 ms it is vague, becoming clearer by 25 ms. The longer the grain, the more surely the ear can hear its pitch.

And Ross (2001):

Granular Synthesis or Granulation is a flexible method for creating animated sonic textures. Sounds produced by granular synthesis have an organic quality sometimes reminiscent of sounds heard in nature: the sound of a babbling brook, or leaves rustling in a tree.

As can be seen from these three definitions, the sound produced by water is essentially the way Nature uses granular synthesis to generate sound. The same idea can be adapted to visual content, where instead of reducing sound into grains, video or images are reduced to grains. In this case, the grain could be either a tiny slice of time in a video, a single pixel, or a small group of pixels. Joshua Batty, who ventured into creating an audio-visual granular system, adds that:

Visual granular synthesis follows a similar deconstruction process by which visuals are broken down into micro segments, manipulated and rearranged to form new visual experiences. Applying the process of granular synthesis to both audio and visual material reduces the material to their smallest perceivable properties.

Another influence for this project is the conceptual work One and Three Chairs by Joseph Kosuth.

One and Three Chairs

In this piece Kosuth plays with semiotics and shows three ways of representing the same object. Zen Water likewise tries to explore the idea of water from different perspectives, by deconstructing images and sounds of water and rebuilding them in ways that resemble water but are not water at all. Hundreds of grains of water sounds played together sound to human ears almost exactly like a conventional water recording, but ultimately they are not the same. And having hundreds of images of water playing at the same time may mimic the way water dynamics occur in nature, but it is not a precise representation of water as an Icon. An Icon is never a precise representation of the Sign it represents, and this is the ground on which this project is built.

2. Project Description

Zen Water consists of two main elements: the granular water synthesizer and the visual granular projection. Supporting the audio portion are sounds of water drips that reinforce the idea that water is formed by many particles interacting with each other. Sounds of seagulls and an ambient sound were also added: the first because of my appreciation of the sound those birds make, and the second to add some ambience and musical quality to the project. To support the visual part, images are projected onto a sandbox intended to transport the audience to the places in nature where water is found; water is always near sand or rocks. Sand was also chosen as a material because it is made of thousands of grains. The objective of the project is to lead the audience to meditate on the underlying interactions that make up everything in the world, similar to how Zen philosophy describes it.

3. Code

Chuck Granular Synthesizer

The first program written for the project was the granular synthesizer in ChucK. The program works as follows:
Seven audio files are loaded into the program. A function turns one of the files into a grain of sound with random values for the playback start position and speed. An envelope fades the grain in and out; this step is important for eliminating issues when the file starts playing, such as the unwanted pop that naturally occurs at certain audio frequencies. Each grain can range from 40 to 400 milliseconds, a length larger than a grain should have according to the reviewed literature, but the results achieved with the longer durations were closer to what I had in mind.
The main part of the program, where the function is called, determines which of the seven files will be turned into a grain. Of the seven files, three are heavily processed and intended to sound like wind, while the other four are regular water sounds. The probability of the program choosing one of the four water sounds is significantly higher than that of choosing one of the three processed ones. Each time the program is run, a number between 5 and 100 is chosen; this number determines how many grains will be played, and the time between grain triggers varies between .005 and .08 seconds. The program runs every time it receives an OSC message containing a MIDI note. For best results, only notes in the range 44 to 51 should be used, because this range controls the amount of reverb mixed into the grain sound. Since I am not aware of a built-in function for mapping one range of numbers onto another, a linear normalization equation was used:

0 + (oe.getInt() - 44) * (.6 - 0) / (51 - 44) => float revVal;
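The scheduling logic described above can be sketched outside of ChucK as well. The following Python sketch is only an illustration of the algorithm, not the actual code: the file names and the 85% water-versus-wind weighting are assumptions, since the exact probability is not stated in the project files.

```python
import random

# Hypothetical file names standing in for the seven audio files.
WATER_FILES = ["water1.wav", "water2.wav", "water3.wav", "water4.wav"]
WIND_FILES = ["wind1.wav", "wind2.wav", "wind3.wav"]

def reverb_amount(note):
    """Linearly map MIDI notes 44-51 onto a reverb mix of 0.0-0.6,
    mirroring the ChucK normalization equation."""
    return 0 + (note - 44) * (0.6 - 0) / (51 - 44)

def schedule_grains(rng=random):
    """Return a list of (file, duration_ms, gap_s) tuples describing
    one run of the synthesizer triggered by an incoming MIDI note."""
    grains = []
    n_grains = rng.randint(5, 100)        # 5 to 100 grains per run
    for _ in range(n_grains):
        # Water sounds are chosen far more often than the wind-like
        # processed sounds (the 0.85 weighting is an assumption).
        if rng.random() < 0.85:
            source = rng.choice(WATER_FILES)
        else:
            source = rng.choice(WIND_FILES)
        duration = rng.uniform(40, 400)   # grain length in milliseconds
        gap = rng.uniform(0.005, 0.08)    # seconds until the next trigger
        grains.append((source, duration, gap))
    return grains
```

The linear mapping in `reverb_amount` is the general range-normalization formula: subtract the input minimum, scale by the ratio of the output span to the input span, then add the output minimum.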

Processing Granular Lake

The algorithm used to generate the grains of video, as mentioned in the Experiment 1 documentation, came from code found on p5art. The Processing program loads a video of a lake and creates grains of video that are controlled by OSC messages. There are 8 possible OSC messages the program can receive, one for each knob on the MIDI controller, and they control the grain size and colour. My initial idea was to design a process that generated the grains of video the same way the grains of sound were generated in ChucK, but since I found code that produced the same end result, I decided to use it and focus on other experiments.
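The idea of a video grain, a small patch of pixels copied from a frame and recoloured by the knobs, can be sketched as follows. This is a minimal Python illustration with hypothetical names, not the p5art code itself; a frame is modelled as a 2D list of (r, g, b) tuples.

```python
import random

def grab_grain(frame, grain_size, rng=random):
    """Copy a random square patch of pixels (a 'video grain')
    from a frame stored as a 2D list of (r, g, b) tuples."""
    h, w = len(frame), len(frame[0])
    x = rng.randint(0, w - grain_size)
    y = rng.randint(0, h - grain_size)
    return [row[x:x + grain_size] for row in frame[y:y + grain_size]]

def tint_grain(grain, tint):
    """Shift each colour channel by a knob-controlled offset
    (dr, dg, db), clamping every channel to the 0-255 range."""
    dr, dg, db = tint
    return [[(min(255, max(0, r + dr)),
              min(255, max(0, g + dg)),
              min(255, max(0, b + db))) for (r, g, b) in row]
            for row in grain]
```

Drawing many such tinted patches per frame, each at a random position, gives the shimmering, granular surface the installation projects onto the sand.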

Max/MSP Patch


The Max/MSP patch is where all the incoming data is received and interpreted. It sends MIDI note numbers to ChucK and the 8 knob values, ranging from 0 to 127, to Processing. The patch also plays 10 different audio samples: 8 of them are water drips, 1 is seagull sounds, and the last is an ambient sound played using a synthesizer in Ableton Live. All 10 samples can be mixed live using the 8 MIDI controller knobs. The samples start playing when the program receives MIDI note 44. MIDI note 51 makes all samples play in reverse (note 51 is also the one that makes ChucK play the grains with the highest amount of reverb).
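The patch's routing logic can be summarized in a few lines. The sketch below is a hypothetical Python rendering of the behaviour described above (the function names and state dictionary are mine, not part of the Max patch):

```python
def route_midi(note, state):
    """Note 44 starts sample playback; note 51 flips all samples
    into reverse, mirroring the Max/MSP patch's note routing."""
    if note == 44:
        state["playing"] = True
    elif note == 51:
        state["reversed"] = True
    return state

def knob_to_gain(value):
    """Scale a 0-127 MIDI knob value to a 0.0-1.0 mixer gain."""
    return value / 127.0
```

Each of the 8 knobs would feed `knob_to_gain` for one of the sample channels, which is how the 10 samples are mixed live during a performance.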

4. Links to Experiments

For the eight experiments we were required to conduct, a few revolved around procedural image generation and audio-reactive animation. Although those experiments did not end up as part of the final project, they are all topics that I want to explore further.
Below are links to all 8 blog posts describing each experiment.

4.1. Recording and Processing Audio and Video
4.2. Painting with a Strange Attractor Algorithm
4.3. The Proper Way to Create a 2D Plot in Processing
4.4. FFT in Processing
4.5. Granular Synthesis in ChucK
4.6. MIDI and OSC
4.7. Polyrhythm in ChucK
4.8. Processing P3D and FFT

5. Project Context


The context where the Zen Water project would be displayed isn't too dissimilar to the Graduate Gallery where it was presented. It would stand in a white room reminiscent of the spaces where mid-20th-century minimal art was displayed. Computers and cables would be hidden so they don't distract the audience. The project is also inspired by Zen aesthetics, and that should be reflected in the context in which the project is displayed.

6. Further Development

Throughout the design process, the initial concept changed slightly from a minimal piece of audio-video projection into a live performance controlled by a MIDI keyboard. I would like to go back a few steps and transform the installation into a self-contained piece that serves contemplation or meditation purposes and does not need someone controlling the audio and video.