
Research Project: Volume by United Visual Artists

United Visual Artists (UVA) was founded in 2003 in London, UK by Matthew Clark, Chris Bird, and Ash Nehru. The three originally came together to create lighting and visuals for a concert by the London-based band Massive Attack. Since then, they have showcased their work through many exhibitions and galleries, and have been featured in multiple publications, winning award after award for their creative approach to combining architecture, live performance, installation, sculpture, and technology.

UVA has shown work in Russia, the UK, Australia, Hong Kong, Paris, and other locations across the globe. Apart from working with lasers, radars, and scanners, UVA also lectures internationally, touring different universities; in September 2011 they even reached Toronto to speak to students at both Ryerson University and OCAD University.


As described on UVA’s site:

“UVA’s large-scale installation Volume first appeared in the garden of London’s V&A museum in 2006 and has since traveled as far as Hong Kong, Taiwan, St. Petersburg and Melbourne.

It consists of a field of 48 luminous, sound-emitting columns that respond to movement. Visitors weave a path through the sculpture, creating their own unique journey in light and music.

The result of a collaboration with Massive Attack, Volume won the D&AD Yellow Pencil in 2007 for Outstanding Achievement in the Digital Installation category.”



The inspiration for Volume was twofold: the PlayStation brief, which was to engage people in an emotional experience for the launch of the PS3 in the UK, and Monolith, an installation UVA displayed in the John Madejski Garden on onedotzero's Transvision night. Monolith emits soothing colours and calming sounds when no one is near. As people approach, the colours become harsher and the sounds louder, forcing people to step back to find their comfort zone. Monolith wasn't entirely successful from an interaction point of view: it drew more people than anticipated, so it spent too much time in 'overload' mode. But it did 'work' in that it created a powerful aura and transformed the space.


3 Interaction Layers


Model Overview


Technical Overview

To create Volume, UVA would have used their proprietary software and hardware, D3. D3 allows artists to control many different pieces of hardware and tooling for installations, performances, and other visuals. In the case of Volume, D3 controls 48 LED towers, infrared cameras, and 48 individual speakers, one located in each LED tower.

The D3 software offers Real Time Simulation, Projector Simulation, Sequencing, Content Mapping, Playback, Configuring Output, d3Net, Show Integration, and Backup & Support. To run the software efficiently, D3 also offers purpose-built hardware.



As mentioned above, D3 performs real-time simulations of the artwork. Here's a screenshot of one of the available simulations for Volume. You can see a timeline of the different events and interactions as well as a digital representation of each of the 48 LED towers.


For motion tracking of the experience we suspect they are using some sort of IR aerial grid system, similar to this illustration found on the Panasonic website. This method would allow participants' locations to be tracked and monitored relatively simply, while keeping costs down by minimizing the number of IR cameras required in the installation.
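As a rough sketch of how such a grid could resolve a position (the actual Panasonic system and UVA's tracking pipeline are not documented here, so this layout is an assumption): two perpendicular rows of IR beams report which beams a body interrupts, and the centroid of the broken beams on each axis gives a grid coordinate.

```cpp
#include <vector>

// Estimate a participant's position along one axis from a row of IR
// beam-break flags (true = beam interrupted). Returns the centroid
// index of the interrupted beams, or -1 if no beam is broken.
int beamCentroid(const std::vector<bool>& beams) {
    int sum = 0, count = 0;
    for (int i = 0; i < (int)beams.size(); ++i) {
        if (beams[i]) { sum += i; ++count; }
    }
    return count == 0 ? -1 : sum / count;
}
```

Running `beamCentroid` once for the X row and once for the Y row yields an (x, y) grid cell; a body interrupting beams 2 through 4, for instance, resolves to index 3.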


The audio for Volume was written by Massive Attack, given UVA's existing relationship with the band. The sound artist Simon Hendry has also worked on additional effects for multiple iterations of the installation. The connection between the D3 software and hardware and the installation's audio is made over MIDI (Musical Instrument Digital Interface), linking to the music production software Logic Pro.
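How D3 maps installation events onto Logic Pro is not public, but the MIDI messages themselves are standardized. A Note On, for example, is three bytes: a status byte (0x90 plus the channel), a note number, and a velocity. A minimal sketch of building one:

```cpp
#include <array>
#include <cstdint>

// Build a standard 3-byte MIDI Note On message.
// channel: 0-15; note and velocity: 0-127.
std::array<uint8_t, 3> noteOn(uint8_t channel, uint8_t note, uint8_t velocity) {
    return { static_cast<uint8_t>(0x90 | (channel & 0x0F)),
             static_cast<uint8_t>(note & 0x7F),
             static_cast<uint8_t>(velocity & 0x7F) };
}
```

Hypothetically, a tracking event such as "visitor entered column 12's zone" could be encoded as a Note On for note 12 with velocity derived from proximity, which Logic Pro would then use to trigger the corresponding sample.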


To prototype the Volume installation we went through multiple iterations. In the first iteration we tried an 8×8 RGB LED matrix with a MAX7219 LED driver chip.


We got it up and running but were unable to control individual LEDs or display unique colours. So instead we switched to the 74HC595 chip, which bit-shifts the control output from the Arduino, allowing three pins to control 8 LEDs per chip. The chips can also be daisy-chained, with each additional chip controlling another 8 LEDs.


There is a lot of wiring, but it is a good way to communicate with multiple LEDs while still keeping some pins on the Arduino open for other sensors.
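To illustrate the daisy-chaining, here is a small simulation of two chained 74HC595s (a sketch of the chip's behaviour, not our actual Arduino code): each clock pulse pushes one bit into the first chip, and the first chip's overflowing bit feeds the second, so the byte intended for the chip farthest from the Arduino must be shifted out first.

```cpp
#include <cstdint>
#include <vector>

// Simulate a chain of 74HC595 shift registers. regs[0] is the chip
// nearest the microcontroller; each chip's serial output feeds the next.
struct ChainSim {
    std::vector<uint8_t> regs;
    explicit ChainSim(int chips) : regs(chips, 0) {}

    // One clock pulse: every register shifts left by one, taking its
    // new low bit from the previous chip's old high bit.
    void clockBit(int bit) {
        for (int i = (int)regs.size() - 1; i > 0; --i)
            regs[i] = (uint8_t)((regs[i] << 1) | (regs[i - 1] >> 7));
        regs[0] = (uint8_t)((regs[0] << 1) | (bit & 1));
    }

    // Shift a whole byte in, most significant bit first (the common
    // order when using Arduino's shiftOut with MSBFIRST).
    void shiftByteMSBFirst(uint8_t b) {
        for (int i = 7; i >= 0; --i) clockBit((b >> i) & 1);
    }
};
```

Shifting 0xAA and then 0x0F leaves 0xAA in the far chip and 0x0F in the near one; with real hardware you would then pulse the latch pin so both registers present their contents on the output pins at once.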

For user motion and location detection we are using a sonar sensor. For the audio we are using the Arduino library Mozzi, with a small speaker connected directly to the Arduino.
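Assuming an HC-SR04-style sonar module (the exact sensor isn't named above), the width of the echo pulse converts to distance using the speed of sound; the pulse covers the round trip, so the one-way distance is half. The distance-to-pitch mapping below is likewise hypothetical, just to show how a sonar reading could drive a Mozzi oscillator frequency.

```cpp
// Convert an HC-SR04-style echo pulse width (microseconds) to
// distance in cm. Sound travels ~0.0343 cm/us at room temperature;
// the echo covers the distance twice (out and back), so halve it.
float echoToCm(unsigned long echoUs) {
    return echoUs * 0.0343f / 2.0f;
}

// Hypothetical mapping from distance to an audio frequency:
// closer visitors produce higher pitches, clamped to a fixed range.
int distanceToHz(float cm) {
    if (cm < 5.0f)   cm = 5.0f;
    if (cm > 200.0f) cm = 200.0f;
    // linear map: 5 cm -> 880 Hz, 200 cm -> 110 Hz
    return (int)(880.0f - (cm - 5.0f) * (880.0f - 110.0f) / 195.0f);
}
```

On the Arduino, `echoUs` would come from timing the sensor's echo pin, and the resulting frequency would be fed to a Mozzi oscillator each control cycle.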




Prototype Video

Final Video

GitHub Source Files

GitHub source for the master Arduino that runs the LEDs and sensor.

GitHub source for the Arduino that runs the Mozzi instance.