Project 2 – Final Documentation

Although I presented this final project on Wednesday, I still view it as a work in progress.
Like most of my projects, programming the system takes a long time to set up, and working out the interaction based on feedback from test users also takes a while, since I need to reformulate my ideas in response to their suggestions. I did have a friend test the work before Wednesday's presentation to get preliminary feedback. One issue I still need to work out is how the user adds to the build-up of the system.

For every gesture within the space, a counter (a kind of capacitor) is increased. If no movement is tracked, the counter slowly decreases over time. The idea is that while a user is contributing to the overall soundscape through bodily gestures, they are also contributing to the ebb and flow of the system. When the counter reaches its maximum, the system stops generating sounds and a larger wave (sound) crashes, bringing the system back to a minimal state. After having a friend interact with the system, they expressed confusion about this part of the piece. Since I want to keep this element, I need to think about how to rework it to make it less confusing.
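In simplified form, the logic is something like this (a rough Processing-style sketch of the idea rather than the actual implementation; the values and names are placeholders for the real tuning):

float charge = 0;              // the "capacitor" that gestures fill up
final float MAX_CHARGE = 100;  // placeholder maximum

void updateCharge(boolean movementDetected) {
  if (movementDetected) {
    charge += 1.0;             // every tracked gesture adds to the build-up
  } else {
    charge -= 0.05;            // no movement: the counter slowly drains
  }
  charge = constrain(charge, 0, MAX_CHARGE);

  if (charge >= MAX_CHARGE) {
    triggerWaveCrash();        // the larger wave crashes...
    charge = 0;                // ...and the system returns to a minimal state
  }
}

void triggerWaveCrash() {
  // in the actual piece this is where SuperCollider is told (via OSC)
  // to stop the generated sounds and play the wave crash
}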

Based on the feedback from the group during the critique, several suggestions were to integrate the skeleton tracking (seen on the laptop) into the piece. I think I may have failed to express that the purpose of this installation is to be centred entirely on audio. A person is meant to understand how their body movements contribute to the overall evolution of the soundscape. Adding a visual element to the installation would take away from its purpose. I also believe that working entirely with audio is difficult. As some of the class comments pointed out, most people have a better visual language and can understand visual gestures more easily than audio gestures.

One of my intentions is not to create a direct one-to-one mapping between a gesture and the audio being generated. The system has its own internal mechanisms running, so when a person enters the space, their movements only add to the way the system functions. I think this element is lost on most people. Instead of abandoning my approach, I just need to think through these ideas more and test the system with a variety of people. I don't want to bend this project just so that it remains within the current computing paradigm. I am trying to experiment and move past the typical idea that bodily gestures create an audio/visual event. While this approach is how we currently interact with computing technologies (a button approach, i.e. pressing a button triggers an event), it is not the only way in which we can interact with systems. Systems in general (societal, economic, etc.) rely on their own internal mechanisms, so any external input only really adds to the system but doesn't really change the way the system progresses. These ideas are some of what I am trying to think through with this installation.

Here is a short video showing the work in progress:

The code for this project is in three separate files.

The first is a Processing sketch that tracks a person in the space. Skeleton tracking finds a person when they enter the space between the speakers, and events are sent to SuperCollider via OpenSoundControl. Here is the code.
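To give a sense of its structure, the OSC-sending side looks roughly like this (a simplified sketch using the oscP5 library; the message address is a placeholder and the skeleton-tracking details are left out):

import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress supercollider;

void setup() {
  osc = new OscP5(this, 12000);                        // local listening port
  supercollider = new NetAddress("127.0.0.1", 57120);  // sclang's default port
}

void draw() {
  // skeleton tracking runs here; when a tracked joint moves,
  // an event is sent across, e.g. sendGesture(jointSpeed);
}

void sendGesture(float amount) {
  OscMessage msg = new OscMessage("/gesture");  // address is a placeholder
  msg.add(amount);
  osc.send(msg, supercollider);
}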

The second file consists of the synth definitions in SuperCollider. These definitions act like base classes that define what type of sounds will be generated. Variables from Processing are passed into the synths to create differing sounds based on the same sonic structure. Here is the code.
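A synth definition of this kind looks roughly like the following (a simplified sketch; the name, parameters, and sound are placeholders rather than the definitions in the actual file):

SynthDef(\drop, { |out = 0, freq = 440, amp = 0.2, dur = 1|
    var env, sig;
    env = EnvGen.kr(Env.perc(0.01, dur), doneAction: 2);
    sig = SinOsc.ar(freq) * env * amp;
    Out.ar(out, Pan2.ar(sig, Rand(-1.0, 1.0)));
}).add;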

The third file consists of the Tasks and OpenSoundControl listeners that trigger events in SuperCollider. The Tasks are what actually generate the synths over time, and the OpenSoundControl listeners wait for OSC messages sent from Processing and then trigger the various Tasks. Here is the code.
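The general pattern is something like this (again a simplified sketch; the OSC addresses, synth names, and timing values are placeholders):

(
var gestureTask = Task({
    loop {
        // spawn a synth from one of the definitions (placeholder name \drop)
        Synth(\drop, [\freq, exprand(200, 1200), \dur, rrand(0.5, 2.0)]);
        rrand(0.2, 1.0).wait;
    }
});

// wait for gesture messages from Processing and start the Task
OSCdef(\gestureListener, { |msg|
    // msg[1] holds the value sent from the Processing sketch
    if(gestureTask.isPlaying.not) { gestureTask.play };
}, '/gesture');

// when the counter maxes out, Processing sends a crash message
OSCdef(\crashListener, { |msg|
    gestureTask.stop;      // the system stops generating sounds
    Synth(\crash);         // placeholder for the larger wave sound
}, '/crash');
)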
