Project 3: bodyForTheTrees

by Hector Centeno, Jessica Kee, Sachiko Murakami, and Adam Owen

Project Description

bodyForTheTrees (bFTT) uses technological magic to create a dance experience that entertains and fascinates an audience. In this uncanny visualized interaction between the body and the digital, a performer moves her body and her actions are reflected on a multi-screen projection in which nonhuman figures move as she moves, while haunting sounds are also (optionally) triggered by her actions. The system is controlled by sensors on a wearable system worn by the dancer, which send data via a wireless transceiver to a computer network that processes the data and projects a visualization onto a surface. This interactive performance piece has strong visual impact; possible uses include integration into live, large-scale musical performances, street performances, theatre, and dance.

Technology

Hardware

List of components and materials used:

XBee transceiver (2)
Arduino Pro Mini (1)
Laptop computer (2)
Data projector (4)
Breadboard
Small plastic box
Li-Ion battery
Resistor (10k, for the flex sensor)
Hookup wire (several feet)
Voltage booster (to boost battery voltage from 3.7 V to 5 V)
IR distance sensor (2)
Flex sensor (1)
IMU sensor (1)
NeoPixel LED ring (2)
Black polyester and Velcro straps (for gloves)
Black cotton/lycra blend fabric (for knee sleeve)
Headband (for IMU)
Sewing machine

The hardware consists of a wireless sensor system attached to the body of a performer and a receiver attached to a computer. The wireless communication is done using XBee transceivers.

On the performer, four sensors are hard-wired and attached to wearables: two IR distance sensors (SHARP GP2Y0A21YK) on the gloves, one flex sensor (Spectrasymbol 4.5”) on the right knee in a sleeve, and one IMU (SparkFun LSM9DS0) on the top of the head, attached by a headband. All the power and data wires for the sensors are secured to the body of the performer and end at a small plastic enclosure attached at the waist. This enclosure houses a microcontroller (Arduino Pro Mini 3.3 V), one wireless transceiver (Digi XBee Series 1) and a 3.7 V, 2000 mAh LiPo battery. Two LED rings (Adafruit NeoPixel 12x WS2812 5050 RGB) are also attached to the back of both hands as a visual element. The colour and brightness of the rings are set by the computer system (via XBee transmission). The two LED rings and the IR distance sensors require 5 V, so a battery power booster (SparkFun) was also included as part of the circuit and as a connection point for the battery.

bFTT glove - palm; bFTT glove - back

The glove with IR sensor on the palm and NeoPixel ring on the back.

bFTT knee sleeve - outer; bFTT knee sleeve - inner; bFTT knee sleeve - with sensor

The knee sleeve, from top: Outer, inner, and inner with sensor visible.

bFTT wearable housing - front; bFTT wearable housing - rear; bFTT wearable housing - inner

The housing for the Arduino, battery, and XBee. Attached to the performer’s waist. From top: Front, back, and inner views.

Two computers are used as part of the system, each one displaying through two data projectors. On the first computer, an XBee wireless transceiver is attached via USB using an XBee Explorer board. This computer is connected to the second via an Ethernet cable to share the sensor values received from the performer's system (see Software below for details).

Circuit Diagram

bFTT circuit diagram revised

System Architecture

bodyForTheTrees - System Architecture

Software

Code available on GitHub.

The Arduino Pro Mini microcontroller on the performer runs a simple firmware that reads the sensor values and prints them to the serial output (XBee) in raw form, except for the IMU sensor. In that case, the gyro, accelerometer and magnetometer values are integrated using the fusion algorithm from the Adafruit library made specifically for this device. The result of the integration is the yaw, pitch and roll in degrees. The data is sent packed as a message with each value separated by the number sign “#” and ending with a line feed.
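To make the packet format concrete: a message might look like “512#487#301#12.5#-3.2#178.0” followed by a line feed (the field order, baud rate, and variable names below are illustrative assumptions, not the project’s exact protocol). A minimal Processing-side sketch for receiving and unpacking such packets could look like this:

// Hedged sketch: read "#"-separated packets from the XBee serial port
// and split them into floats. Port index, baud rate and field order are
// assumptions for illustration only.
import processing.serial.*;

Serial xbee;
float[] sensors = new float[6];   // e.g. IR left, IR right, flex, roll, pitch, yaw (assumed order)

void setup() {
  // the XBee Explorer appears as a regular serial port; index 0 is an assumption
  xbee = new Serial(this, Serial.list()[0], 57600);
  xbee.bufferUntil('\n');         // fire serialEvent() once per packet
}

void serialEvent(Serial port) {
  String raw = port.readStringUntil('\n');
  if (raw == null) return;
  String[] parts = split(trim(raw), '#');
  for (int i = 0; i < min(parts.length, sensors.length); i++) {
    sensors[i] = float(parts[i]); // non-numeric fields become NaN and can be ignored
  }
}

void draw() {
  // sensors[] now holds the latest raw values, ready for smoothing
}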

The software system running on the computers uses Processing for generating the visuals, and Csound for generating the sound.

The Processing sketch places 25 trees randomly along a 3D scene rendered using the P3D rendering engine. The trees are built using a recursive algorithm (based on Advanced Tree Generator by James Noeckel) that uses triangular shapes for the leaves textured with a bitmap image. The scene has a solid colour background and the 3D lighting is done through ambient light and three directional lights of different colours. The view is moved using the OCDCamera library. A Tree class was made to contain all the logic and animation.

Each tree is independently animated using Perlin noise to create a more natural feel. The whole tree sways back and forth and each leaf is also animated by slowly changing some of the vertices by an offset. The camera view also moves back and forth using Perlin noise to add to the feeling of liveliness.
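As a rough illustration of these two ideas (a minimal sketch, not the project code), the example below grows a small recursive tree with triangular leaves and sways it with Perlin noise; the branching angles, recursion depth, and noise scales are placeholder values of the kind the sensor data later modulates.

// Minimal recursive-tree + Perlin-noise sway sketch. Angles, depths and
// scales are placeholders; bFTT's trees are 3D (P3D), textured, and far
// more elaborate.
float angleA = radians(25);   // in bFTT, angles like these are driven by the IR sensors
float angleB = radians(35);
float t = 0;                  // noise "time"

void setup() {
  size(600, 600);
}

void draw() {
  background(15, 35, 25);
  stroke(220);
  translate(width / 2, height);
  // whole-tree sway from Perlin noise: smooth, non-repeating drift
  rotate(map(noise(t), 0, 1, -0.1, 0.1));
  branch(120);
  t += 0.01;                  // advance the noise "time" slowly each frame
}

void branch(float len) {
  line(0, 0, 0, -len);
  translate(0, -len);
  if (len > 8) {
    pushMatrix();
    rotate(angleA + map(noise(t + len), 0, 1, -0.05, 0.05));  // per-branch jitter
    branch(len * 0.7);
    popMatrix();
    pushMatrix();
    rotate(-angleB);
    branch(len * 0.7);
    popMatrix();
  } else {
    // a triangular "leaf"; the real sketch textures these with a bitmap
    fill(70, 170, 90);
    noStroke();
    triangle(0, 0, -5, -12, 5, -12);
    stroke(220);
  }
}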

The sensor data acquired serially through the XBee is parsed in the Processing sketch and smoothed using an adaptation of the averaging algorithm found on the Arduino website. The smoothing of the polar data from the IMU was done using the angular difference between each reading (instead of smoothing the angle data directly) in order to avoid the values sliding from 359 back to 0 degrees at that crossing point. Each data stream is smoothed at different levels to achieve optimal response.
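Below is a hedged sketch of both smoothing ideas; the window size, the 0.1 smoothing factor, and the function names are assumptions rather than the project’s actual values.

// Running average in the spirit of the Arduino "Smoothing" tutorial,
// plus heading smoothing on angular differences so that a reading
// crossing from 359 back to 0 degrees is treated as a small step.
int N = 10;                      // assumed window size
float[] window = new float[N];
int idx = 0;
float total = 0;

float smoothAverage(float reading) {
  total -= window[idx];          // drop the oldest sample
  window[idx] = reading;         // store the newest
  total += reading;
  idx = (idx + 1) % N;
  return total / N;
}

float smoothedHeading = 0;       // degrees, 0..360

float smoothHeading(float newHeading) {
  float diff = newHeading - smoothedHeading;
  while (diff > 180)  diff -= 360;   // wrap the difference into (-180, 180]
  while (diff <= -180) diff += 360;
  // low-pass the *difference*, then re-wrap the result into 0..360
  smoothedHeading = (smoothedHeading + 0.1 * diff + 360) % 360;
  return smoothedHeading;
}

void setup() {
  // quick demonstration with made-up readings
  for (int i = 0; i < 12; i++) {
    println(smoothAverage(random(300, 320)), smoothHeading((350 + i * 4) % 360));
  }
}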

The values from the IR sensors are used to change the shape of the trees by modifying two of the angles of rotation in the recursive algorithm. The values from the flex sensor are used as triggers to modify the trees in two ways: when the performer flexes her leg for a short time, the leaves of the trees explode into the air and then slowly come back together; when she holds the flex for three seconds, the whole forest morphs into an abstract shape through elongation of the leaf vertices.
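One way to implement the short-flex versus three-second-hold distinction is sketched below; the threshold value, the choice to fire the explosion when a short flex is released, and the function names are assumptions for illustration.

// Distinguish a short knee flex (explode the leaves) from a ~3 second
// hold (morph the forest). Values and behaviour are assumed, not the
// project's exact logic.
int FLEX_THRESHOLD = 600;   // raw reading above which the knee counts as flexed
int flexStart = -1;         // millis() when the current flex began, -1 if not flexed
boolean morphed = false;    // so the morph fires only once per hold

void updateFlex(int flexValue) {
  boolean flexed = flexValue > FLEX_THRESHOLD;
  if (flexed && flexStart < 0) {
    flexStart = millis();                          // flex just started
  } else if (flexed && !morphed && millis() - flexStart > 3000) {
    morphForest();                                 // long hold: morph the whole forest
    morphed = true;
  } else if (!flexed && flexStart >= 0) {
    if (!morphed && millis() - flexStart < 3000) {
      explodeLeaves();                             // short flex released: scatter the leaves
    }
    flexStart = -1;                                // reset for the next flex
    morphed = false;
  }
}

void explodeLeaves() { println("explode"); }       // placeholders for the real animations
void morphForest()   { println("morph"); }

void setup() { }
void draw()  { /* updateFlex(latestFlexReading) would be called here */ }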

The data from the IMU sensor is used to rotate the whole scene around the Y (vertical) axis, and the same data changes the background colour by interpolating between green and blue (the lerpColor function). The current colour value is also transmitted to the performer’s system so the hand LEDs match it. The pitch angle rotates the scene around the X axis, with the rotation angle limited so the camera only shows the top of the forest and never beneath it.
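A minimal sketch of this mapping is below, assuming the yaw angle drives the Y rotation; the green/blue colour end-points, the pitch limits, and the demo animation of the angles are placeholders.

// Map IMU angles onto scene rotation and background colour.
// The colour end-points, the -60..0 degree pitch clamp, and the fake
// yaw/pitch animation below are assumptions for illustration.
float yaw, pitch;   // degrees, normally taken from the smoothed IMU data

void setup() {
  size(600, 600, P3D);
}

void draw() {
  // demo only: animate the angles instead of reading real sensor data
  yaw = (frameCount * 0.5) % 360;
  pitch = -30 + 30 * sin(radians(frameCount));

  // interpolate the background between green and blue as the head turns
  float amt = map(yaw, 0, 360, 0, 1);
  color bg = lerpColor(color(30, 120, 60), color(30, 60, 140), amt);
  background(bg);
  // this same colour value is the one sent back over the XBee for the LED rings

  translate(width / 2, height * 0.8, -200);
  rotateY(radians(yaw));                          // spin the whole forest with the head
  rotateX(radians(constrain(pitch, -60, 0)));     // tilt, but never look under the forest
  box(150);                                       // stand-in for the forest geometry
}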

All this sensor data is sent to the same sketch running on the second computer via OSC messages, and locally to the Csound instance running in the background. The Csound instance runs on only one computer, and the Processing sketch is the same on both, except that on the slave computer the variable “isMaster” must be set to false.
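The master/slave hand-off could be structured roughly as below, assuming the oscP5 library; the address pattern, port, IP address, and message layout are illustrative assumptions.

// Master relays parsed sensor values to the slave machine as OSC messages;
// the slave receives them in oscEvent() and drives an identical scene.
import oscP5.*;
import netP5.*;

boolean isMaster = true;            // set to false on the second computer
OscP5 osc;
NetAddress slaveAddress;            // where the master sends its messages
float irLeft, irRight, flex, yaw;   // example sensor fields (assumed layout)

void setup() {
  osc = new OscP5(this, 12000);                        // listen on port 12000
  slaveAddress = new NetAddress("192.168.0.2", 12000); // second computer's IP (assumed)
}

void draw() {
  if (isMaster) {
    // on the master, these values come from the XBee serial parsing/smoothing
    OscMessage m = new OscMessage("/bftt/sensors");
    m.add(irLeft); m.add(irRight); m.add(flex); m.add(yaw);
    osc.send(m, slaveAddress);
  }
  // ...both machines then render the forest from the same values...
}

void oscEvent(OscMessage m) {
  if (!isMaster && m.checkAddrPattern("/bftt/sensors")) {
    irLeft  = m.get(0).floatValue();
    irRight = m.get(1).floatValue();
    flex    = m.get(2).floatValue();
    yaw     = m.get(3).floatValue();
  }
}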

The generative sounds are produced by Csound using a combination of synthesized sounds (vibraphone and wood blocks) and sound textures made by phase vocoder processing of short, single note instrumental sounds (a glass harmonica, a drum and tamtam). The sensor data received from the Processing sketch is used to synchronize the sound events and processes with the movements of the performer and the visuals.

Notes on process


From top left: Sewing the gloves, forming the circuit; textile production centre; the wearable circuit in progress; Hector soldering, soldering; testing XBee range; circuit board prep; testing the Processing sketch communication; Sachi wears the wearables; an early iteration of the sketch; Adam contemplates the projection problem; last-minute connectivity problems; Jessica gets hooked in for rehearsal.

This project involved several sub-projects: coding the Processing sketch, creating the wearable system (both textiles and the circuit), network communication, projection, and the performance.

Coding

We began with a Processing sketch of a mouse-position-responsive recursive tree that we found on OpenProcessing.org. We found the complexity of the movement and the responsiveness compelling and thought it would translate well to the movements of a performer. We altered the code and added triangle “leaves”, 3D ‘camera’ panning, and other functionality as described in the Software section above.

Camera panning
We paired the IMU with the camera-pan function. At first we used PeasyCam, but it proved to have too many limitations on rotation handling when integrated with the IMU input. OCDCam suited our purposes much better because it allowed greater control over rotation.

Calibrating Sensor Input
Another challenge was getting meaningful visualizations from the sensor input. We needed to negotiate between the movements in Processing and the movements of the body. We didn’t want the reflection to be exact (body != tree), but there needed to be enough of a causal connection that the audience would understand that the two were connected. We also needed to calibrate the sketch so that visually appealing dance movements would be meaningful on-screen. For example, at first the sketch only responded to very slow, slight movements with little visual impact – more of a tai-chi exercise than an attention-grabbing dance. However, amping up the visualization’s reactivity to sensor data created a visual frenzy that quickly overwhelmed the viewer. We hope we found a good balance between the two. The analogy we found useful was to describe the sensor system as an instrument that the performer ‘plays’.

Wearable System

When designing our wearable system, we had four objectives in mind: sensor functionality, durability, integration with the natural movements of the body, and aesthetics.

Sensor Functionality
The infrared distance sensors for the hands controlled the tree movement. The constraint with these sensors was that the hands needed to be close to a surface off which the sensor could ‘bounce’ – the performer’s body, a wall, etc.

The flex sensor was placed on the front of the knee to trigger special visual functions. The challenge with the flex sensor was its fragility: during testing, the base of the sensor threatened to crack, so we taped a piece of board to the base to protect it.

bFTT - circuit in progress

The wearable system components in progress.

During testing we found that the raw sensor data produced very jerky movement in the tree and camera. We smoothed (averaged) the data to produce more fluid motion in the sketch. However, too much smoothing resulted in lag; we had to find a smoothing level that balanced the two extremes of too-jerky and too-laggy.

A Durable Wiring System + Hub
For robustness and durability, we chose colourful hookup wire to connect the sensors to the Arduino and battery, housed in a small box that clips onto a belt. Conductive thread may have been an option, but because we did not have time to learn how to fabricate with this delicate material, we chose safe, reliable hookup wire. Future iterations may use lighter, more subtle wearable options. We used a Li-Ion battery for its thinness and capacity, and an Arduino Pro Mini for its small size. There were no issues with the Pro Mini.

bFTT - circuit close up

Close-up of the housed hub. 

One challenge we faced with the wearable circuit was regulating the voltage for the needs of the different sensors. When the battery was at full charge, the IMU (a 3.3 V device) was receiving too much voltage. We resolved this problem by rewiring the system so that the Arduino regulated the voltage, with a power booster supplying the sensors that required 5 V.

Another challenge we faced was maintaining connectivity while the wearable was on the dancer. Several minutes before our performance, a connection came loose. We managed to fix the connectivity problem and the performance went smoothly.

bFTT - circuit board

Which connection came loose? We may never know.

Integration with the movement of the body
NeoPixels – The rings serve to strengthen the relationship between the performer and the sketch via the connected colour, as well as to draw attention to the performer in a dark environment. If we had more resources, we would have added more RGB LEDs to the performer (perhaps an entire outfit!) so the colour connection would have been more apparent.

Textiles – None of us had any textile fabrication experience. We used scrap material to fashion the gloves and knee sleeve, creating a futuristic design with shiny black synthetic material, a sewing machine, and imagination. Velcro straps allow for easy removal of the gloves. Translating the pattern to accurate sizing was a challenge, and it took many prototypes before we got it right. The knee sleeve was fashioned from stretchy cotton/lycra from an old tank top. We sewed the flex sensor in because it kept slipping out whenever the knee was bent. At first we used a sparkly metal headband for the IMU sensor, but found that the metal interfered with the sensor data; we switched to a stretchy cotton/lycra blend for the headband and the sensor worked much better.

Aesthetics
We attached the exposed wires to the dancer using white vinyl tape, as we wanted to create a raw, futuristic look. Future versions might employ more elegant means (or even fabric/KT tape) as the tape we used didn’t adhere to the body very well.

Network Communication

There were two communication issues involved in the project: XBee-to-XBee and computer-to-computer.

XBee network
We had originally thought we would project the visualization onto a window and have the performer on the street, which presented issues with XBee line-of-sight and range. When we revised our setup to be indoors, the range problems we had been encountering were resolved.

Computer-to-computer
In order to project onto four surfaces, we needed two computers running the same Processing sketch and processing the same sensor information. We had several options, including:

A) Two XBee receivers attached to two computers receiving data simultaneously
B) One XBee receiver attached to a computer that would then pass the sensor data to the second computer using an Ethernet cable

We chose option B because it offered the option of expanding the system without needing extra XBees.

At first we tried to simply pass a video capture of the Processing sketch to the second computer via Max/MSP. However, the frame rate was 30 fps, and the sluggish, pixelated rendering on the second computer was unacceptable. We then switched to sending the data as OSC messages using a Processing OSC library, which proved nearly seamless.

Projection

Originally we had thought we would project the four screens onto the windows of 205 Richmond Street as a way of transforming the city into an entertainment space. However, at the last minute we were unable to secure the windows required (which are located in the staff lounge), and the Duncan Street-facing windows of the Grad Gallery didn’t suit our needs. We moved the project indoors, at first into the Grad Gallery, but space limitations were an issue there. At the last minute we found a suitable classroom, which ended up being a perfect venue for this project. We used four projectors across a long wall for the greatest visual impact. We lamented not having four short-throw projectors to minimize shadow interference from the dancer and to gain maximal image size within a limited space; however, the thrown shadow ended up being a visually interesting part of the performance. Two projectors were hooked up to each computer. We would also have liked a continuous sketch running across multiple projectors, but we encountered problems both with the computers not handling the extended sketch well and with the four projectors all having different resolutions. We believe the effect of staggering the projected images was an effective compromise.

bFTT performance - projections 2

The first and third projections were from Computer 1; the second and fourth projections were from Computer 2.

Performance

On presentation night, we had two dance performances: one that demonstrated the integrated generative sounds, and one in which we used a popular song (“Heart’s a Mess” by Gotye). We wanted to showcase the versatility of the system and how it could be adapted both for avant-garde, improvisational dance and for mainstream performance such as might be found at a large-venue concert.

bFTT performance - space

Our venue was an OCAD classroom. With a bit of chair-arranging and lighting, it transformed from this into a fine performance space.

For the first performance, the dancer worked intuitively with sound and movement to generate interesting sounds and visualizations. For the second, we worked with a choreographer to create movement that balanced meaningful interaction with the visualization against contemporary dance.

Case Studies

Battlegrounds Live Gaming Experience

bftt - case study 1
A Toronto startup, Battlegrounds Live Gaming Experience is an update on laser tag – an amusement technology that hasn’t seen any advancement in nearly twenty years. Drawing influence from first-person shooter video games, Battlegrounds uses emergent digital technologies to create a realistic combat experience.

Technology: Arduino, XBee, 3D-Printing

Interaction: Many to Many

Battlegrounds is capable of supporting up to 16 users simultaneously in team-based or individual play. 3D-printed guns containing an Arduino processor and an XBee transmitter are paired with XBee-wired vests to send and receive infrared “shots”, registering hits and transmitting that data to a central CPU that controls the game and logs ancillary transmitted information such as ammunition and health levels, accuracy, and points.

The controlling CPU sends data back to users to apply game parameters (the gun stops working when ammunition is out, the vest powers down when health is depleted, etc.).

Narrative: Nonlinear

Because this is a competitive, multi-user experience, the narrative of each round depends on the specific parameters of the game mode and the individual actions of the players. Narrative is assignable through any number of modes such as Capture the Flag, Team Deathmatch, or “everyone for themselves” Single Deathmatch.

Relevance to bFTT: Although this case study seems far from bFTT, both start from wearable systems employing Arduinos and XBees.

Toy Story Midway Mania!

bftt - case study 2

Toy Story Midway Mania! is an interactive ride at three Disney theme parks (Walt Disney World, Disneyland, and Tokyo Disney Resort) that allows riders to board a train running through a circuit of midway-themed worlds, interacting with a train-mounted cannon.

Technology: Wireless Programmable Logic Controllers operating in real time; ProfiNET Industrial Ethernet; 150 Windows XP PCs; Digital Projection

Interaction: One to One to Many

Each train car is outfitted with four pairs of outward-facing seats and four mounted, aimable cannons. Riders aim these cannons at projected targets throughout the ride, while train position and speed, and cannon pitch, yaw, and trigger information, are relayed to a central controller. The central controller analyses this information to track the vehicle and register cannon hits or misses, which are then relayed out to more than 150 projection controllers to display the changing environments resulting from the players’ accuracy and progress.

Narrative: Linear

The narrative of the amusement is linear in that it progresses along a track, through pre-designed animations and experiences. Players interact with these experiences in a consecutive fashion before being returned to their point of origin.

Relevance to bFTT: ‘Virtual’ interactions with the environment that affect the state of a projected image.

Disney Imagineering Drone Patent Applications (in progress)

bftt - case study 3

Aerial Display System with Floating Pixels

Aerial Display System with Floating Projection Screens

Aerial Display System with Marionettes Articulated and Supported by Airborne Device

Disney’s Imagineers have recently submitted three patent applications for UAV-based amusements that, while as yet unrealized, demonstrate the capabilities of wirelessly synchronized technologies in public art and performance.

The first patent describes a system of UAVs hovering in static relation to one another, with synchronized video screens acting as floating pixels to display a larger image. The second patent describes a different system in which a series of UAVs suspends a screen onto which a projection is cast. Finally, the third patent describes a marionette system in which synchronized UAVs, attached to articulation points on a large puppet, coordinate the movement of that puppet.

Technology: UAV, Projection, Wireless networking

Interaction: One to many

For all three patents, the interaction is based around synchronized movement, with no user involvement. All movement is pre-designed and coordinated by a central controller.

Narrative: Simple

Since there is no user involvement in the amusement, this application is more performance art than an interactive piece. Everything is pre-written, though there is a great deal of customization potential. It could more accurately be thought of as a new technique than as a specific project, and thus the technology could be used in larger, more interactive projects.

Relevance to bFTT: the use of wireless networking to coordinate visually stunning, larger-than-life, uncanny entertainment.

GPS Adventure Band

bftt - case study 4

Morgan’s Wonderland, a 25-acre amusement park in Texas geared towards children with special needs, their friends, and their families, has unique demands, not only from an amusement perspective but within its basic infrastructure. Technological solutions addressing some of the most pressing anxieties carried by parents of children with special needs are built into the park’s structure from the ground up. RFID bracelets, worn by all guests entering the park, not only track all guests physically but are also encoded with important medical and behavioural information that helps park staff best serve guests.

In 2012, a contest to name the system resulted in the name “GPS Adventure Band”, despite GPS not being used.

Technologies: Wireless networking, RFID

Interaction: Many to One

Hundreds of guests’ information is relayed to one central server, with physical access points around the park for user information retrieval. Information is also accessible by park staff on mobile devices. The bracelet is also used to coordinate the emailing of specific photos to assigned email accounts, and acts as a souvenir.

Narrative: Debatably none.

This is not an amusement, but under a liberal definition of narrative, the access to information and peace of mind that families gain through this interaction affects their own personal narratives as experiencers of the park as a whole.

Relevance to bFTT: The tracking of multiple users via RFID bracelets might be a method for expanding interaction in bFTT. A simple sensor might be embedded in each bracelet as well, and each bracelet linked to the movement of an object (such as a tree in bFTT). Groups of people could thus move the forest together, playing with individual and group movements.

Project Context

This project focused on creating eye-catching visuals that produce surprise and delight through an uncanny interaction with technology. As such, the topic most relevant to this project is live performance involving digital technology.

Dixon (2007) argues that the turn of the 21st century marked an historic period for digital performance in which its zenith was ‘unarguably’ reached in terms of the amount of activity and the significance of that activity – prior to a general downturn of activity and interest. Yet in the Summer 2014 issue of the Canadian Theatre Review devoted to digital performance, the editors assert that digital technologies continue to shape theatre practices, in particular by redefining theatre and moving it outside of traditional theatre spaces (Kuling & Levin, 2014). The issue details, among other topics, theatre experiments with social media (Levin, Rose, Taylor & Wheeler); game-space theatre (Filewood; Kuling; Love & Hayter); and the possibilities of new media performances for Aboriginal artists (Nagam, Swanson, L’Hirondelle, & Monkman). This was heartening to learn: in the years since the period Dixon names as the ‘zenith’ of digital performance, so many tools have emerged that make technology affordable and accessible to artists (such as Arduino and Processing) that it seemed unlikely theatre artists would totally lose interest in the digital. A brief review of the contents of the latest International Journal of Performance Arts and Digital Media, and the inclusion of papers on technologically mediated performance at the 2014 ACM Conference on Human Factors in Computing Systems (CHI), ACM’s 2013 Creativity and Cognition conference, and other notable conferences, suggests that relevant theatre, dance, and performance projects are still being created today.

Other relevant projects, organizations, and initiatives include the degree programs, labs, and performance organizations listed in the references below.

If digital performance is still relevant in Canada, perhaps the methods of our project would be relevant to creators of such projects.

REFERENCES

Arizona State University. (n.d.). MFA in Dance. Retrieved December 9, 2014 from https://filmdancetheatre.asu.edu/degree-programs/dance-degree/master-fine-arts-dance

Barkhuus, L., Engstrom, A. and Zoric, G. (2014). Watching the footwork: second screen interaction at a dance and music performance.  CHI ’14 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. p. 1305-1314. ACM: New York.

Battlegrounds (Ed.). (n.d.). Battlegrounds web site. Retrieved December 9, 2014, from http://battlegrounds.net/

Brunel University London. (n.d.). Contemporary Performance Making MA. Retrieved December 9, 2014 from http://www.brunel.ac.uk/courses/postgraduate/contemporary-performance-making-ma.

Digital Futures in Dance (2013). Retrieved December 9, 2014, from http://www.thebasement.uk.com/whats-on/digital-futures-in-dance/

Dixon, S. (2007). Digital Performance: A History of New Media in Theater, Dance, Performance Art, and Installation. Cambridge: MIT Press.

Greenfield, D. (2008, October 10). Automation and Ethernet Combine for 3D Disney Attraction. Retrieved December 9, 2014, from http://www.controleng.com/search/search-single-display/automation-and-ethernet-combine-for-3d-disney-attraction/d4dff85b6d.html

The Gertrude Stein Repertory Theatre (n.d.). Retrieved December 9, 2014, from http://www.gertstein.org/index.html

International Journal of Performance Arts & Digital Media, 9(2). (2013). Retrieved December 9, 2014, from http://www.ingentaconnect.com/content/routledg/padm/2013/00000009/00000002;jsessionid=blrhai8go429d.victoria

Kuling, P., & Levin, L. (Eds.). (2014). Canadian Theatre Review, 159. Retrieved December 9, 2014, from http://utpjournals.metapress.com/content/121522/

Maranan, D.S., Schiphorst, T., Bartram, L. and Hwang, A. (2013). Expressing technological metaphors in dance using structural illusion from embodied motion. C&C ’13 Proceedings of the 9th ACM Conference on Creativity & Cognition. p. 165-174.  ACM: New York.

Morgan’s Wonderland (Ed.). (n.d.). GPS Adventure Band. Retrieved December 9, 2014, from http://www.morganswonderland.com/attractions/gps-adventure-band

Rhizome. (2013). Prosthetic Knowledge Picks: Dance and Technology. Rhizome (online). Retrieved December 9, 2014 from http://rhizome.org/editorial/2013/jul/31/prosthetic-knowledge-picks-dance-and-technology/

Noeckel, James. (2013). Advanced Tree Generator. Retrieved December 9, 2014 from http://www.openprocessing.org/sketch/8941

University of the Arts London (n.d.). MA Digital Theatre. Retrieved December 9, 2014, from http://www.arts.ac.uk/wimbledon/courses/postgraduate/ma-digital-theatre/

University of California, Irvine. (2012). Experimental Media Performance Lab. Retrieved December 9, 2014 from http://dance.arts.uci.edu/experimental-media-performance-lab-xmpl

University of Colorado. (n.d.). The PhD in Intermedia Art, Writing, and Performance. Retrieved December 9, 2014 from http://www.colorado.edu/cmci/academics/phd-intermedia-art-writing-and-performance

University of Michigan. (n.d.). Department of Performance Arts Technology. Retrieved December 9, 2014 from http://www.music.umich.edu/departments/pat/index.php

University of Salford. (n.d.). Digital Performance MA. Retrieved December 9, 2014 from http://www.salford.ac.uk/pgt-courses/digital-performance

York University. (n.d.). Faculty of Fine Arts Job Posting. Retrieved December 9, 2014 from http://webapps.yorku.ca/academichiringviewer/specialads/FFA_CRC_Digital_Performance1.pdf

Zara, C. (2014, August 27). Disney Drone Army Over Magic Kingdom: Patents Describe Aerial Displays For Theme Parks. Retrieved December 9, 2014, from http://www.ibtimes.com/disney-drone-army-over-magic-kingdom-patents-describe-aerial-displays-theme-parks-1671604