Tell

Tell is an interactive audiovisual installation on the transience of all things. It surrounds the viewer with slowly dissolving waves, visually connected to the sound of the viewer’s voice.

Participants speak into a corner, flooding it with waves of moving light. These images shimmer with colour and intensity unique to the tonal qualities of the participant’s speech, so each contribution’s appearance belongs entirely to the person who made it. The phrase then repeats endlessly, each echo reshaping it into something different but still derived from the original, until the waves have transformed the contribution into something entirely new. This is the legacy of the participant’s voice.

Tell seeks to engage individual reflection on temporality and mortality. Tell offers no stance of its own, instead seeking to prompt and facilitate the participant’s voice.
Link to video

Technical Documentation

This piece has the following requirements:

  • Microphone
  • Projector
  • Mirror
  • Adjustable tablet stand
  • Current-model laptop

To set this piece up:

  1. Point a projector at a corner of the room.
  2. Place a mirror in the same corner facing the beam. Using an adjustable tablet stand, tweak the pitch of the mirror until the projector image appears on the walls and ceiling.
  3. Place a plinth near the projector, but outside of the beam. Set a microphone on top of it.
  4. Connect the microphone and projector to a laptop computer. Run the sketch.

The piece was made using Processing 3 with the Minim library. The resources for the sketch are available on GitHub.
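In outline, the sketch wires Minim’s audio input and output into an ordinary Processing draw loop. The skeleton below is a minimal illustration of that wiring, not the code in the repository:

    import ddf.minim.*;
    import ddf.minim.ugens.*;

    Minim minim;
    AudioInput mic;       // the microphone on the plinth
    AudioOutput speakers;

    void setup() {
      fullScreen(P2D);
      minim = new Minim(this);
      mic = minim.getLineIn(Minim.MONO);
      speakers = minim.getLineOut(Minim.MONO);
    }

    void draw() {
      background(0);
      // Placeholder visual: the projected image reacts to the microphone level.
      float level = mic.mix.level();
      noStroke();
      fill(255);
      ellipse(width / 2, height / 2, level * width, level * width);
    }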

Context

Tell inhabits the long-established tradition of interactive installation art.

This piece was inspired by the work of Bill Viola, who frequently explores transience and mortality in his video installations. Viola’s use of water as a metaphor for death, seen in The Messenger and He Weeps for You, guided my use of water as a metaphor for impermanence. (1) Tree of Life was a model for my work. In that piece, participants proceed down a narrow corridor, approaching a projection of a tree. As they get closer, the tree sheds its leaves until it is bare. (2) That piece neatly encapsulated a long-standing interest I’ve had in using guided interactions as a way to communicate an idea.

Many of Finnbogi Petursson’s works use water caustics as a way to visualize sound. In Petursson’s Circle, a strong light shines through a clear bowl of water. A speaker beneath the bowl plays very low frequencies at high amplitudes, rippling the water in such a way that the light shining through it reveals the waves of the sound. Circle’s use of water caustics directly influenced my own: it visualized waves, a metaphor for impermanence, as concentrations of light.

The vehicle of decay in this piece is a process of iterative destruction, formally known as generation loss. My first encounter with this practice came through Jacob Kirkegaard’s AION, exhibited as part of MoMA’s Soundings exhibition. For that piece, Kirkegaard recorded and re-recorded the room tone of various abandoned structures near the Chernobyl nuclear reactor, then layered the results together. It was through that project that I encountered Alvin Lucier’s I Am Sitting in a Room. Lucier’s work has inspired a variety of direct tributes, including a series that uses YouTube’s compression algorithm as the destructive vehicle.

References

  1. Elmarsafy, Ziad. “Adapting Sufism to Video Art: Bill Viola and the Sacred.” Alif: Journal of Comparative Poetics, no. 28 (2008): 127-49. http://www.jstor.org/stable/27929798. Accessed December 9, 2017.
  2. Nawrocki, Dennis Alan. “Bill Viola: Intimations of Mortality.” Bulletin of the Detroit Institute of Arts 74, no. 1/2 (2000): 44-56. http://www.jstor.org/stable/41504963. Accessed December 9, 2017.

Process journal

Finding a platform

When I began this project, I was presented with a question: should I manipulate audio with Processing, or get creative and look for an external solution? I needed something that was:

  1. Easy enough to learn in three weeks
  2. Reliable, flexible and inexpensive
  3. Able to communicate with the visual side of the work

After looking at analog solutions, Max and openFrameworks, I settled on the Minim library for Processing. This provided me with a free, Java-based library, ready to interact with other parts of the work.

Working with curves and lines

My first take on the water caustics drew on the work I’d done for Wanderlines. In that previous project, I stacked faint lines of varying length between wandering points, approximating the play of light seen underneath waves. Adapting it to this project, I made it so the points radiated from the center, and deleted themselves on reaching the edge of the screen. Getting this method to look good needed hundreds of points with extremely thin lines, making for an unacceptable performance trade-off.
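The sketch below is a rough, simplified reconstruction of that first line-based approach, not the code I actually used: points drift out from the centre, are removed at the edges, and faint lines connect neighbouring points.

    ArrayList<PVector> points = new ArrayList<PVector>();

    void setup() {
      size(800, 800, P2D);
      noSmooth();
    }

    void draw() {
      background(0);
      // A few new points appear at the centre each frame.
      for (int i = 0; i < 3; i++) {
        points.add(new PVector(width / 2, height / 2));
      }
      stroke(255, 20);   // extremely thin, faint lines
      strokeWeight(1);
      for (int i = points.size() - 1; i >= 0; i--) {
        PVector p = points.get(i);
        // Drift outward from the centre, with a little wander.
        PVector dir = PVector.sub(p, new PVector(width / 2, height / 2));
        dir.setMag(1.5);
        p.add(dir).add(random(-1, 1), random(-1, 1));
        // Points delete themselves on reaching the edge of the screen.
        if (p.x < 0 || p.x > width || p.y < 0 || p.y > height) {
          points.remove(i);
          continue;
        }
        // Connect each point to its neighbour in the list.
        if (i > 0) {
          PVector q = points.get(i - 1);
          line(p.x, p.y, q.x, q.y);
        }
      }
    }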

At first, I made small changes, like changing the renderer to P2D and adding noSmooth. These moves made for significant but insufficient performance gains. It became clear that drawing lines between hundreds of points was the main obstacle, so I started looking for ways to reduce the number of points needed to produce a satisfying effect. I modified Wanderlines to draw curves through each point’s nearest and second-nearest neighbours. The intent was to create organic shapes with fewer points, and to better emulate the net-like surfaces of real water caustics. This brought the same problems as before, along with an ugly tendency for curves to snap from point to point. To cover more space with fewer points, I made the strokeWeight of each curve expand as its constituent points moved further from the center of the canvas. This was based on Haru’s suggestion to draw the participant’s eye towards the edges of the scene.
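The distance-based stroke weight worked roughly like the fragment below; the nearest-neighbour search is omitted, and the weight range is illustrative:

    // Curves get thicker as their points move away from the centre of the canvas,
    // drawing the eye towards the edges of the scene.
    void drawCurveFor(PVector p, PVector nearest, PVector secondNearest) {
      float distFromCentre = dist(p.x, p.y, width / 2, height / 2);
      float maxDist = dist(0, 0, width / 2, height / 2);
      strokeWeight(map(distFromCentre, 0, maxDist, 0.5, 4));
      noFill();
      curve(secondNearest.x, secondNearest.y,
            p.x, p.y,
            nearest.x, nearest.y,
            secondNearest.x, secondNearest.y);
    }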

On November 30, Adam introduced me to PGraphics, Processing’s off-screen graphics buffer, which let me consolidate drawing into a single composited image. This made for a big performance boost, bringing the sketch closer to where it needed to be. My next thought was to emphasize the net-like appearance of water caustics using Processing’s quad primitive, accessed through the PShape class. I did this knowing that primitives are generally faster than arbitrary shapes. In my case, the result was spindly and boring, but it ran at a steady 40 fps.
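In outline, the off-screen approach looked something like this sketch; the quad coordinates and fill values are placeholders:

    ArrayList<PVector> points = new ArrayList<PVector>();
    PGraphics pg;
    PShape cell;

    void setup() {
      size(800, 800, P2D);
      pg = createGraphics(width, height, P2D);
      // One reusable quad primitive, shared by every point.
      cell = createShape(QUAD, -4, -2, 4, -3, 3, 4, -3, 3);
      cell.setFill(color(255, 40));
      cell.setStroke(false);
    }

    void draw() {
      // Draw everything into the off-screen buffer, then composite it once.
      pg.beginDraw();
      pg.background(0);
      for (PVector p : points) {
        pg.shape(cell, p.x, p.y);
      }
      pg.endDraw();
      image(pg, 0, 0);
    }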

As I worked out the water caustics, my audio plans hit a snag.

Minim’s AudioSample class loops arbitrary recordings, either from a file or through the createSample() method. The class is simple, it exposes a lot of useful values, and it can interface directly with Minim’s AudioInput class. AudioSample is also completely incompatible with Minim’s UGens, meaning you can’t apply any effects to it. Up until this point, I had been using AudioInput to catch microphone input, then funneling it into an AudioSample object using createSample(). Hitting a dead end with effects, I had to switch to the equivalents in Minim’s ugens package. That part of the library is largely self-contained, which meant reworking my entire audio workflow.

Fleshing out the audio

I started by switching out AudioInput for the UGens-compatible LiveInput class. The audio that class produced was too choppy to be usable, so I looked for an alternative that could integrate AudioInput with the UGens. With some tinkering, I worked out a way to continuously grab float[]s from AudioInput’s mix and append them to an ArrayList of floats. I used an ArrayList because I wanted a flexible recording length. To convert that data into a sample, the sketch would copy the ArrayList into a MultiChannelBuffer, which could then be fed into a Sampler.
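Condensed down to the audio path, the recording flow looks like the sketch below. The names are illustrative, and the mouse stands in for the installation’s actual trigger; pulling samples once per draw() frame doesn’t line up exactly with the audio callback, which is likely part of the distortion mentioned below.

    import ddf.minim.*;
    import ddf.minim.ugens.*;

    Minim minim;
    AudioInput mic;
    AudioOutput out;
    ArrayList<Float> recording = new ArrayList<Float>();
    MultiChannelBuffer buffer;
    Sampler sampler;

    void setup() {
      size(400, 400);
      minim = new Minim(this);
      mic = minim.getLineIn(Minim.MONO);
      out = minim.getLineOut(Minim.MONO);
    }

    void draw() {
      // While "recording", append the current input buffer to the list.
      if (mousePressed) {
        for (int i = 0; i < mic.mix.size(); i++) {
          recording.add(mic.mix.get(i));
        }
      }
    }

    void mouseReleased() {
      // Copy the accumulated samples into a mono MultiChannelBuffer...
      float[] samples = new float[recording.size()];
      for (int i = 0; i < samples.length; i++) {
        samples[i] = recording.get(i);
      }
      buffer = new MultiChannelBuffer(samples.length, 1);
      buffer.setChannel(0, samples);
      // ...then hand it to a Sampler for playback.
      sampler = new Sampler(buffer, mic.sampleRate(), 1);
      sampler.patch(out);
      sampler.trigger();
      recording.clear();
    }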

This solution worked nicely, and had the welcome side-effect of distorting the input audio. Having finished the recording functionality, I started working on generation loss. Sampling and replaying the audio output was simpler than expected. I simply had the sketch overwrite the contents of the MultiChannelBuffer with audio from AudioOutput’s mix. Sending that information back to the Sampler made for a minor roadblock, because Sampler doesn’t have any end-of-sample event. That makes it a closed system, leaving no room for me to hook in a new sample buffer with each iteration. To get around this, I wrote my own looping function (loopRecording()) that transferred the contents of the MultiChannelBuffer to the Sampler every time it reached the end of its playback. I also had the function push points from the center of the canvas each loop, basing the rate of spread on the AudioInput mix level.
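The fragment below is a condensed reconstruction of that loop, continuing the names from the recording sketch above (out, mic, buffer, sampler). Timing the loop with millis() and rebuilding the Sampler each pass are assumptions on my part here, and spawnParticles() is a hypothetical stand-in for the visual side.

    ArrayList<Float> nextGeneration = new ArrayList<Float>();
    int loopStart;      // millis() when the current pass was triggered
    float loopLength;   // length of one pass, in milliseconds

    void draw() {
      if (sampler == null) return;   // nothing recorded yet
      // Continuously capture what the listener is currently hearing.
      for (int i = 0; i < out.mix.size(); i++) {
        nextGeneration.add(out.mix.get(i));
      }
      // Sampler has no end-of-sample event, so the end of playback is estimated.
      if (millis() - loopStart >= loopLength) {
        loopRecording();
      }
    }

    void loopRecording() {
      loopLength = buffer.getBufferSize() / out.sampleRate() * 1000;
      // Overwrite the sample buffer with the re-recorded output.
      float[] samples = new float[buffer.getBufferSize()];
      int frames = min(nextGeneration.size(), samples.length);
      for (int i = 0; i < frames; i++) {
        samples[i] = nextGeneration.get(i);
      }
      buffer.setChannel(0, samples);
      nextGeneration.clear();
      // Hand the updated buffer back to the Sampler; rebuilding it is one way
      // to do the transfer described above.
      sampler.unpatch(out);
      sampler = new Sampler(buffer, out.sampleRate(), 1);
      sampler.patch(out);
      sampler.trigger();
      loopStart = millis();
      // Push a burst of points from the centre, scaled by the input level.
      spawnParticles(mic.mix.level());   // hypothetical helper
    }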

As a final step, I added a Delay effect to the Sampler, then patched that to AudioOutput. The bass became obnoxiously loud over time, so I added a high pass using the MoogFilter class.
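In terms of the UGen graph, the chain amounts to a couple of lines replacing the plain sampler.patch(out) from the sketch above; the parameter values here are guesses rather than the ones used in the installation:

    Delay echo = new Delay(0.6, 0.5, true, true);                       // up to 0.6 s of feeding-back echo
    MoogFilter highPass = new MoogFilter(200, 0.1, MoogFilter.Type.HP); // trims the accumulating bass
    sampler.patch(echo).patch(highPass).patch(out);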

Wrapping up the visuals

Having reached a good point with the audio, I returned my attention to the visual side of things. Two things had become clear to me: first, that generating water caustics from scratch wasn’t working; and second, that I was really building a convoluted particle system. Until this point, I’d avoided traditional particle rendering techniques because I thought they would look fake. In retrospect, I should have challenged that assumption much sooner, because it was completely wrong.

On December 2, I removed the nearest-neighbour code and swapped out the points for sprites. When I saw the results, I knew I was moving in a good direction.

512 sprites referencing one PImage made for ~60 fps, and it looked very close to my design intentions. With Minim commented out, I could comfortably have 1100 active particles on display at 24fps. And all of this with very little adjustment to my original code.

I tweaked the code further by changing the PGraphics object’s blendMode to ADD. This gave me further performance gains, because ADD blends pixels together with a much simpler algorithm than BLEND. I also made the sprites rotate slightly. This involved using Processing’s pushMatrix, popMatrix, translate and rotate methods, which are simple to write but carry some overhead. Seeing how much performance I’d already bought myself, I was willing to make that tradeoff.
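Putting those pieces together, the sprite pass looks roughly like the fragment below; the Particle fields and the sprite image are illustrative stand-ins for the ones in the sketch.

    PImage spriteImg;   // one soft glow image, loaded once and shared by every particle

    void drawParticles(PGraphics pg, ArrayList<Particle> particles) {
      pg.beginDraw();
      pg.background(0);
      pg.blendMode(ADD);        // overlapping sprites sum their brightness
      pg.imageMode(CENTER);
      for (Particle pt : particles) {
        pg.pushMatrix();
        pg.translate(pt.x, pt.y);
        pg.rotate(pt.angle);    // the slight per-sprite rotation
        pg.tint(pt.r, pt.g, pt.b);
        pg.image(spriteImg, 0, 0);
        pg.popMatrix();
      }
      pg.endDraw();
    }

    class Particle {
      float x, y, angle;   // position and rotation
      float r, g, b;       // tint, set from the FFT described below
    }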

The last feature I implemented was tinting based on the frequencies of the input. Getting this to work meant applying a Fourier transform, which represents audio as a graph of intensity across frequencies. In digital signal processing, Fourier transforms are performed using an algorithm called the Fast Fourier Transform (FFT). Minim’s FFT class made this fairly trivial to implement.

Translating it into my project meant fixing a buffer size for the AudioOutput object, then matching it for the FFT object. And that was it — within ten minutes, I was performing Fourier transforms. From there, I divided the resulting graph into three parts for the red, green and blue channels. I had the sketch tally up the intensities within those frequency ranges, and then translate the resulting sums to the intensities of each colour channel. For each iteration, I stored those colours as properties of whatever particles were about to be spawned. This fulfilled the frequency-based tinting I had planned for.
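Sketched out, the tinting step amounts to the fragment below, continuing the out object from the audio sketches. Which band feeds which colour channel, and the scaling factor, are illustrative choices rather than the values used in the piece.

    import ddf.minim.analysis.FFT;

    FFT fft;   // created in setup() as: fft = new FFT(out.bufferSize(), out.sampleRate());

    color tintFromSpectrum() {
      fft.forward(out.mix);
      float[] sums = new float[3];
      int bandsPerChannel = fft.specSize() / 3;
      // Tally the intensities in each third of the spectrum.
      for (int i = 0; i < fft.specSize(); i++) {
        int channel = min(i / bandsPerChannel, 2);
        sums[channel] += fft.getBand(i);
      }
      float gain = 4.0;   // placeholder scaling
      return color(min(sums[0] * gain, 255),
                   min(sums[1] * gain, 255),
                   min(sums[2] * gain, 255));
    }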
