CrowdSource

There are countless technologies at work at a typical rock concert. Lights illuminate the stage, speakers amplify the instruments and microphones, and huge screens give far-away concertgoers a close-up view of the stage. All of these technologies expand the capabilities of the performer by extending the area of their influence. In the same way, CrowdSource aims to expand the capabilities of audience members by amplifying their typical behaviours and providing a clear communication channel from audience member to performer. CrowdSource is an audience prosthetic.

CrowdSource_Logo

Background

Earlier projects have attempted to increase the influence of audience members at a concert, with varying degrees of effectiveness. In the 1990s, for example, “world-funk ensemble” D’CuCKOO introduced their “Midiball” – a five-foot-wide, helium-filled balloon containing wireless sensors that triggered MIDI notes when audience members bounced it around. This let the crowd “jam” with the band, but only those lucky enough to come within striking distance of the ball. Today, a company called Audience Entertainment creates “interactive group gaming” experiences in which audiences control a game by leaning left and right; the system uses computer vision to estimate the crowd’s average lean. While this engages the whole audience at once, it has technical limitations, and there is no direct connection to each audience member. In 2012, British rock band Coldplay created an impressive light show by giving each of their fans an LED-embedded wristband. These so-called Xylobands are wirelessly controlled and lit up in sync as the band played. Giving every single audience member a device created a captivating effect, but the devices did not let the crowd truly interact with the performance. My project, CrowdSource, is an attempt to address these shortcomings and truly expand the abilities of audience members.

Process

At the beginning of the project, I knew that I needed to capture the behaviour of every audience member and collect it on one computer. Fortunately, I had been playing around with Wii controllers and the Max visual programming language over the summer. This hardware/software combination seemed like it would fit my needs perfectly: Wii controllers contain various motion sensors and can communicate via Bluetooth, and Max makes it easy to represent a lot of data in many different ways. I acquired about fifteen Wii controllers and started experimenting.

Wiimotes

My first goal was to see how many Wii controllers I could get talking to my computer at once. To pair each controller with my computer, I used a program called OSCulator, which displays all of the data sent from each controller and routes it into Max:

OSCulator
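For anyone who would rather follow along in code than in Max: OSCulator forwards each controller’s data as OSC messages over UDP, so the same stream can be inspected with a few lines of Python. The sketch below uses the python-osc library; the host, port, and message addresses are assumptions that depend on how OSCulator is configured, not values from my setup.

```python
# A minimal sketch: print every OSC message that OSCulator forwards,
# so you can see what a single controller actually sends.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer


def print_message(address, *args):
    """Dump each incoming OSC message (address plus its values)."""
    print(address, args)


dispatcher = Dispatcher()
dispatcher.set_default_handler(print_message)  # catch everything while exploring

# OSCulator would be pointed at this host/port (assumed values).
server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
server.serve_forever()
```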

I began syncing multiple Wii controllers and ran into problems once more than seven were connected, so this prototype was capped at seven audience members. From here, I created a Max program (a “patcher”) to test displaying data from these seven controllers. This first patcher simply illuminates a button when a user presses the Wii controller’s “A” button:

Seven_Wiimotes
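In Python terms, that first patcher boils down to keeping one on/off flag per controller and updating it whenever an “A” button message arrives. A rough sketch, again with assumed OSC addresses of the form /wii/&lt;n&gt;/button/A carrying 1.0 on press and 0.0 on release:

```python
# Track the "A" button of seven controllers and redraw a simple text display.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

NUM_CONTROLLERS = 7
a_pressed = [False] * NUM_CONTROLLERS  # one "button light" per audience member


def on_a_button(address, value):
    # Pull the controller index out of the address, e.g. "/wii/3/button/A" -> 3.
    index = int(address.split("/")[2]) - 1
    a_pressed[index] = bool(value)
    print(" ".join("[X]" if pressed else "[ ]" for pressed in a_pressed))


dispatcher = Dispatcher()
for n in range(1, NUM_CONTROLLERS + 1):
    dispatcher.map(f"/wii/{n}/button/A", on_a_button)

BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```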

After running into no problems with this program, I started to look at the motion data coming from the controllers. The pitch, roll, and yaw values, along with the x, y, z, and overall acceleration of the controller, are sent continuously over Bluetooth. I wanted to capture typical audience behaviour, so I listed some common motions that a concertgoer might make: clapping their hands, swaying their arms, giving a thumbs up or down, and doing “the wave.” I then identified the data streams that correlated with each of these movements. Finally, I created a Max patcher that indicated when each of these actions was being performed:

OneWiimote_Audience_1 OneWiimote_Audience_2
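To give a sense of what “identifying the data streams” means in practice, here is a rough Python sketch of the same mappings. The thresholds are assumptions that would need tuning against real controllers; the actual detection lives in the Max patcher.

```python
# How each behaviour maps onto the motion streams (illustrative thresholds only).
import time


def is_clap(overall_accel, threshold=1.8):
    """A clap shows up as a brief spike in the overall acceleration value."""
    return overall_accel > threshold


def thumbs_direction(pitch):
    """Holding the controller thumb-style, pitch separates up from down."""
    if pitch > 0.6:
        return "up"
    if pitch < -0.6:
        return "down"
    return None


class SwayDetector:
    """Swaying arms produce a slow, regular oscillation in roll, so count
    zero-crossings of the roll value over the last few seconds."""

    def __init__(self, window_seconds=3.0):
        self.window = window_seconds
        self.crossings = []          # timestamps of recent zero-crossings
        self.last_roll = 0.0

    def update(self, roll, now=None):
        now = time.time() if now is None else now
        if self.last_roll * roll < 0:                # sign change = crossing
            self.crossings.append(now)
        self.last_roll = roll
        self.crossings = [t for t in self.crossings if now - t < self.window]
        return len(self.crossings) >= 2              # at least one full sway


# "The wave" is a group-level pattern: each controller's pitch rises and falls
# in turn, so it is detected across controllers rather than per controller.
```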

With this behaving as intended, my next goal was to develop a patcher that would pull data from all seven Wii controllers and display it simultaneously. First, to keep the program manageable and modular, I moved some of my previous patchers into “sub-patchers.” This made it easy to process the data streams from all seven controllers and watch each one individually for the established behaviours. The following images show the final Max patcher and an example of each detection mode in action:

Final_1
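Before walking through the modes: in the Python analogy from the sketches above, a sub-patcher corresponds roughly to one small object per controller that keeps its own readings and detections. The names and thresholds below are hypothetical, not taken from the patcher.

```python
# One monitor per controller, analogous to one sub-patcher per controller.
class ControllerMonitor:
    """Tracks the latest readings and detections for a single Wii controller."""

    CLAP_THRESHOLD = 1.8   # assumed spike level in overall acceleration

    def __init__(self, index):
        self.index = index
        self.pitch = 0.0
        self.roll = 0.0
        self.clapping = False
        self.thumbs = None             # "up", "down", or None

    def update(self, pitch, roll, overall_accel):
        self.pitch = pitch
        self.roll = roll
        self.clapping = overall_accel > self.CLAP_THRESHOLD
        if pitch > 0.6:
            self.thumbs = "up"
        elif pitch < -0.6:
            self.thumbs = "down"
        else:
            self.thumbs = None


# One monitor per audience member, each fed by its controller's OSC handlers.
monitors = [ControllerMonitor(i) for i in range(7)]
```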

Mode 1: Thumbs up/down. This simply displays how many audience members are giving the thumbs up or down sign.

Final_2

Mode 2: The wave. This displays the pitch of each Wii controller. When audience members do the classic “wave,” the visualization displays a similar wave-like motion.

Final_3

Mode 3: Clap #1. The red button in the center represents the clapping of the “leader” – most likely the performer. If audience members follow the performer and clap to the rhythm, all buttons will light up simultaneously.

Final_4
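In code, the Mode 3 comparison could look something like the sketch below: a follower’s clap counts as “in sync” when it lands within a short window of the leader’s most recent clap. The 150 ms window is an assumption, not a value from the patcher.

```python
# Compare audience clap times against the leader's beat.
class ClapSync:
    WINDOW = 0.15  # seconds; assumed tolerance around the leader's clap

    def __init__(self):
        self.last_leader_clap = None

    def leader_clapped(self, timestamp):
        self.last_leader_clap = timestamp

    def follower_in_sync(self, timestamp):
        """True if this audience clap is close enough to the leader's beat."""
        if self.last_leader_clap is None:
            return False
        return abs(timestamp - self.last_leader_clap) <= self.WINDOW
```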

Mode 4: Clap #2. This mode visualizes overall clapping activity over time. The yellow bars grow as more audience members applaud, and as they applaud more enthusiastically. This could be used to track audience excitement over the course of an entire performance.

Final_5
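One simple way to drive these bars is a leaky integrator per controller: it jumps on every detected clap and slowly decays in between. A sketch, with an assumed decay rate rather than whatever smoothing the patcher actually uses:

```python
# Per-controller clapping "energy" that rises on claps and decays over time.
class ClapEnergy:
    def __init__(self, decay_per_second=0.5):
        self.level = 0.0
        self.decay = decay_per_second

    def on_clap(self):
        self.level += 1.0                       # each detected clap adds energy

    def tick(self, dt):
        """Call regularly; returns the value that drives the bar height."""
        self.level = max(0.0, self.level - self.decay * dt)
        return self.level


# Summing all seven levels could give a rough excitement curve for the whole show.
```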

Mode 5: Sway. As audience members swing their arms back and forth above their heads, an icon visualizes their movement. This mode also has the option of identifying a “leader” if the goal is to mimic the movement of the performer.

Final_6
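The optional leader comparison in Mode 5 could be as simple as checking whether each follower’s arm is swinging in the same direction as the leader’s at any given moment. This is a sketch of one possible rule, not necessarily how the patcher does it:

```python
# Compare each follower's swing direction against the leader's.
def roll_direction(previous_roll, current_roll, dead_zone=0.05):
    """-1 for swinging one way, +1 for the other, 0 for roughly still."""
    delta = current_roll - previous_roll
    if delta > dead_zone:
        return 1
    if delta < -dead_zone:
        return -1
    return 0


def mimics_leader(follower_dir, leader_dir):
    """A follower mirrors the leader when both arms swing the same way."""
    return leader_dir != 0 and follower_dir == leader_dir
```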

At a typical large concert, the voice of the audience is muffled: it is overpowered by sound and lights, and it often comes from far away from the stage. CrowdSource amplifies the message of each audience member and delivers it directly to the performer – a brand new audience prosthetic.
