Category: Project 3

Ask Zoltar

Elliott Fienberg / Jay Irizawa / Tatiana Jennings / Monica Virtue
Project 3: City As Amusement
Creation & Computation
Digital Futures / OCAD University


Ask Zoltar is an outdoor interactive installation inspired by fortune-teller-booth arcade attractions. The traditional fortune teller booth is a self-operating machine containing an automaton: a non-electronic moving machine in the form of a human or an animal, with a range of head and hand movements, sound, and a simple interactive element such as a button.

The first mention of automata (from Greek αὐτόματον, automaton, “acting of one’s own will”) can be traced to Greek mythology. The idea of a mechanical human has been popular throughout the centuries, and many intricate and accomplished examples were created as amusements, tools, toys, religious idols, or prototypes for demonstrating basic scientific principles (Sharkey, 2007).


Case Studies:

We explored the following case studies to help ground our fortune telling project in time and space:

Helvetia Park – 2009

Location: Aarau, Bellinzona, MEN (Museum of Ethnography Neuchâtel)
Design: Anna Jones, Patrick Burnier, Raphaël von Allmen
Direction: Grégoire Mayor, Marc-Olivier Gonseth, Yann Laville


Helvetia Park is an interactive immersive exhibition at the Museum of Ethnography Neuchâtel (MEN) in Switzerland. The exhibit consists of 11 stalls inspired by fairground attractions, including games, a freak show, a merry-go-round and a shooting gallery. It examines points of contact and friction between different understandings of culture in Switzerland today and comments on the clash between elitism and popular amusement.


Helvetia Park is a playful artistic intervention that satirizes museums and allows visitors to upend the concepts of value and the preciousness of art.





Night Lights – 2010

Location: Auckland Ferry Building (Auckland, New Zealand)
Design: Yes Yes No (New York City, U.S.A.)


Source: LSN Global Insights

For this installation, Yes Yes No teamed up with the Auckland Ferry Building to create an immersive experience that encouraged participation and movement from visitors, thereby turning the city into an amusement park.

A series of bright white panels was set up in front of the building to cast shadows, which were then projected five storeys above the participants. This magnified their bodies to seven times their actual height, turning each participant into something much larger.


Source: IDN World

The project not only explores the relationship of person to object (the building), but also person to person through the building, as people are encouraged to dance together.

This is a great example of how to turn the city into a playscape. It helps to (literally) shed light on the architecture of the neighbourhood, but the most interesting feature is that it isn’t that enticing until people get involved and interact with it. The bright panels not only make it possible to create shadows, but also help draw attention to the fact that you can participate.

Fun House – Nuit Blanche 2012

Location: Bay Street, Toronto Ontario
Design: Commissioned Feature Artist Thom Sokoloski

Fun House is an outdoor interactive installation conceived by artist Thom Sokoloski. Inspired by the camp and horror of the carnivals of his youth, Sokoloski amplifies the banal into a sublime pop-culture experience through projected footage from old horror films, flashes of multicoloured lights, auditory screams, and choreographed performances by post-apocalyptic carnies dragged through the mud. The contrasting areas of spectacle and experience create a cohesive yet jarring whole, not unlike the layered experiences one encounters in an entertainment venue saturated with multisensory stimuli.


The main points to consider, drawn from the experience of the event:

Anticipation: Attractors such as large entry-ways, flashing lights, sounds and human performance draw a crowd and entice them to come in. The promise of a changing experience awaits beyond the curtain.


Spectator to Participant: An immersive environment sets the tone, and elicits a response from the audience to actively engage and become a willing participant in the event.


Emotive responses: Cultural and subcultural signifiers elicit emotive responses, prompting memories in the form of pop-culture references, music and imagery. Visual cues such as fonts, colour, lighting, iconic objects, wardrobe and performance, along with other sensory elements, can leverage nostalgia, connecting with the audience in a familiar way.


Finale: The pace and choreography of the experience culminate in a grand moment, signifying the end of the experience and a return to ‘normalcy’ (sometimes expedited through the egress experience).


Wonder Mountain’s Guardian – 2014

Location: Canada’s Wonderland (Toronto, Ontario)
Design: Triotech (Montreal, Quebec)

Wonder Mountain’s Guardian is a 4-D amusement ride that premiered at Canada’s Wonderland in May 2014. Designed by Triotech Studios of Montreal, the ride combines a traditional roller coaster with an immersive interactive component that utilizes custom-built projection technology. Triotech, a manufacturer of multisensory attractions for the amusement industry, developed the 3D animations, projections, special effects and sound specifically for the new ride.


Source: InPark Magazine

Guardian winds its way around the theme park’s iconic Wonder Mountain on 325 metres of track before travelling inside to five different layers of an “underground world.” It culminates in an interactive dragon’s lair built into the top of the mountain.

The new coaster is designed to appeal to gamers who enjoy a strong storytelling experience: the quest culminates as riders enter a dragon’s lair for an ultimate battle with Lord Ormaar. The ride is built for a group play experience, with the chance for individual scoring. Riders can test their gaming skills to compete against each other, with individuals able to land a coveted spot in the Wonder Mountain’s Guardian Hall of Fame.


Source: Vaughan Citizen

An accurate targeting system allows users to fire lasers from hand-held guns at real-time graphics displayed on the world’s longest interactive screen, at 200 metres long and four metres high.

In addition to the technology, the designers have overlaid the 3D experience with 4D elements, such as simulated wind and movement.

Disneyland – 2014

Location: Anaheim, California

Disney – the innovator of the amusement park concept (Gilmore and Pine 2007: 57) – has introduced RFID technology to track large quantities of data and observe consumer behaviour and interaction with the programmed attractions (Business Week 2014).


The intent of this addition to the operations infrastructure is two-fold. First, mass patterns in large volumes of consumer data can be quantified in micro-elements to determine the efficiency of elements such as line-ups, circulation flow, optimization of point-of-purchase sales, and the duration of captive audiences. Second, the data can be analyzed to forecast trends in the amusement park market and extend consumer satisfaction beyond the event: such algorithmic trends can be tailored to offer a unique experience beyond the physical touch-point of the environment, reaching into the psyche of the returning customer.

On one hand, RFID tracking maximizes ROI by attending to customer satisfaction, giving the park the means to respond more quickly to customers by anticipating their needs. Disneyland attendees are willing participants in the event process, paying for an experience seemingly made for their sole enjoyment. On the other hand, the capability of isolating individuals anytime, anywhere becomes a means of control, which for the most part is readily accepted by the park attendees-turned-participants. Meanwhile, information gathered beyond the context of the park turns into personalized email notifications, promotional material, downloaded apps, post-experience retail consumption, and a deeper-rooted affinity for the intangible experience economy, recognized in the psyche of consumers through the iconic status of animated figures across all media.

RFID may not have invented theme-park psychology, but it is the latest tool in the history of feedback-loop marketing, from large-scale amusement parks to small-scale retail spaces (iBeacon, Wired 2014), and is becoming commonly accepted as a pervasive yet quiet form of acquiescence to private-information collection in the finite space of the “public.” In a relevant but complex discussion to be unpacked another time, Rosalyn Deutsche makes a case for reclaiming public space from the territorializing entities formed by markets and policies, citing architect and theorist Michael Sorkin: “…in the new ‘public’ spaces of the theme park or the shopping mall, speech itself is restricted: there are no demonstrations in Disneyland. The effort to reclaim the city is the struggle of democracy itself.” (Deutsche 1996: 283)

The Fortune-Telling experience is but a small scenario exemplifying an exchange between the citizen and her collection of private data as currency, in return for prophetic enlightenment in the medium of entertainment.


At the beginning of our creation process we explored various ideas inspired by the project’s theme, “City As Amusement.” Living in a big city is a stressful and often grinding experience. The everyday mad rush along the streets to and from work doesn’t invite a playful mood or give an opportunity for an imaginative interlude. The streets fade into a blurred background and the passers-by dissolve into a faceless crowd. The introduction of an unexpected interactive element can break the everyday routine and transform unremarkable, mundane surroundings into a place of spontaneity and play.

The wireless-communication nature of the assignment presented us with interesting opportunities. We explored a few different directions and scales, from transmitting a message to the CN Tower and changing the colours of its night illumination system, to creating a two-person game on a playground.

One idea was a wooden cabinet with many small drawers, which you could interact with by writing a question on a piece of paper and depositing it in one of the drawers. The cabinet would respond by pushing out another drawer with a written answer in it. We liked the physical, surprising quality of the interaction, and the challenge of building an object in which the technology would not be apparent and the interaction would feel magical.


After a few iterations and research into street fortune telling and amusement park aesthetics, we came up with Ask Zoltar – a fortune telling booth. The name is borrowed from the 1988 American fantasy comedy film Big.

The final idea included an interactive kiosk embedded with sensors, lights, a small video screen, a receipt printer and a large video projection on the window above.

Ask Zoltar could be installed at any street corner. The booth is self-contained and can function independently for hours at a time, powered by a car battery. The projector could be installed on top of a car and aimed at a surface next to the booth.

Notes on the process:

We started by deciding what functions we wanted the box to perform, and then created a diagram of all the box states and proposed functions. We wanted the box to be responsive to the environment and interactive, so we chose to use a proximity sensor, a pressure sensor, a small receipt printer, two strings of LED lights, an XBee module, an EL panel (with shield and inverter) and a small Bluetooth speaker.

We established 5 box states:

  1. Idle. (The interface and screen are dark. The first loop, a black screen, is playing, and the LED lights blink every 20 seconds. A snoring sound comes out of the box. Zoltar is sleeping.)
  2. Sensing approach. (The proximity sensor detects a person standing in front of the box. The lights come alive and the hand EL panel begins to glow. The small screen switches to the second loop: sleeping Zoltar. You can see his eyes moving as he sleeps.)
  3. Hand press. (The viewer places his or her hand on the palm print, triggering the pressure sensor. The lights start blinking and the third loop starts playing. Zoltar wakes up, his eyes open, and the screen displays a countdown from 10 to 0.)
  4. Projection. (The projection on the window is initiated through the XBee, while the small screen continues to play the third loop, which now displays slowly moving graphics.)
  5. The fortune. (The projection stops and the small screen, still playing the third loop, comes alive and displays a sign: “Your fortune is being printed.” At this moment the receipt printer is triggered through the XBee and the receipt is printed. The small screen displays the word “Goodbye,” and the box returns to the idle state.)
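The five states can be read as a small state machine driven by sensor and timer events. As a rough illustration (the real logic lived in the Arduino sketch; the state and event names below are our own shorthand, not from the project code):

```python
# Hypothetical sketch of the five box states as a state machine; the actual
# installation implemented this logic on the Arduino, and all names here
# are illustrative.
IDLE, SENSING, HAND_PRESS, PROJECTION, FORTUNE = range(5)

TRANSITIONS = {
    (IDLE, "person_detected"): SENSING,          # proximity sensor fires
    (SENSING, "hand_pressed"): HAND_PRESS,       # pressure sensor fires
    (HAND_PRESS, "countdown_done"): PROJECTION,  # 10-to-0 countdown ends
    (PROJECTION, "video_done"): FORTUNE,         # projection loop finishes
    (FORTUNE, "receipt_printed"): IDLE,          # box goes back to sleep
}

def next_state(state, event):
    """Advance the box state; irrelevant events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# One full interaction, from sleeping Zoltar to the printed fortune:
s = IDLE
for ev in ["person_detected", "hand_pressed", "countdown_done",
           "video_done", "receipt_printed"]:
    s = next_state(s, ev)
```

Writing the transitions out as a table like this makes it easy to check that every state has exactly one way forward and that stray sensor readings are ignored.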


Because of the nature of the project, our main challenge was to make the installation interactive and feature-rich as well as reliable. In order to deal with multiple functions we had to create multiple timers running in parallel and use boolean variables as logical triggers. We created a number of small loops, which contained the states of different elements, inside the main loop. Although the sensors and functions were quite simple separately, running them all together proved complex, and we had to look for help organizing the code and figuring out the timers. Dimitri Levanoff helped us with the code, and we also asked our classmates and our TA, Ryan, for help when we were unable to figure out the logic. We drew a lot of flow charts, which was very helpful.
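The “multiple timers in parallel” pattern is the standard non-blocking idiom from Arduino’s millis(): every pass through the main loop polls each timer instead of sleeping. A hypothetical Python sketch of the same idea, with a boolean flag gating the sub-loop (timer values and names are illustrative, not from the project code):

```python
import time

class IntervalTimer:
    """Non-blocking timer in the Arduino millis() style: polled every pass
    through the main loop, it reports True once per elapsed interval."""

    def __init__(self, interval_s, now=None):
        self.interval = interval_s
        self.last = time.monotonic() if now is None else now

    def ready(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last >= self.interval:
            self.last = now  # re-arm for the next interval
            return True
        return False

# Boolean flags act as logical triggers gating each sub-loop:
blink_timer = IntervalTimer(20.0, now=0.0)  # idle LEDs blink every 20 s
zoltar_awake = False                        # flipped by the pressure sensor

def main_loop_pass(now):
    """One pass through the main loop: poll every timer, act on flags."""
    actions = []
    if not zoltar_awake and blink_timer.ready(now):
        actions.append("blink_leds")
    return actions
```

Because nothing blocks, many such timers can run side by side in one loop, which is what lets the lights, sensors, printer and XBee messages all stay responsive at once.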



After the code was written and most of our parts had arrived, Nick helped us organize the wiring. We used his diagram as a guide and checked against it until the very last moment.


Some parts of our code were still very unreliable at the time of the presentation, and we would like to continue working on the project until everything is figured out. The lights and the printer were the most problematic. We could not make the printer work reliably with the XBee, so at the very last moment we installed a small button to trigger the printer by hand for the presentation. Initially we wanted to use two laptops – a main laptop in the booth running the videos on the tablet, plus one inside with the projector running Processing. However, we ended up reversing the setup and using only one laptop, inside.


The booth contained an Arduino, the sensors, and the XBee. All of the video and sound ran on the laptop indoors and were triggered through the XBee, which is the better option if, in the future, we want to install the booth on the street by itself and control it from a distance. We also ended up using Max instead of Processing to run the video and sound.


Pressure sensor, proximity sensor and printer sketch:

zoltar sketch pressure-proximity printer_bb

XBee sketch:

Zoltar xbee sketch_bb

EL panel sketch:

board and shield



View the Arduino code on GitHub


Fabrication techniques:

Zoltar interface:

The initial concept of an interactive screen between the threshold of the outdoors and indoors began with a preliminary schematic of XBees sending data from a user interface triggered by motion, and a receiving XBee outputting an image projected onto a large surface.


Ideation sketches considered a feedback narrative as a linear sequence, starting from the user and leading to the projected image. The step thereafter depends on the user’s reaction to the captured sequence of his or her own actions.

One factor to note in this preliminary iteration was the absence of an object/interface, which we as a group identified as an important aspect of the experience. The object/interface anchors the concept from start to finish and is central to the topographic systems map; it grounds the user interaction with a tangible node of access connected to the visceral projected output of the story. The communal Ouija board, the séance table, the crystal ball: part of the mythos of the medium’s power is the ability to communicate through a portal from this world to the next.

Initial ideas drew upon the attraction of mystery (the oft-cited “black box”), and the spectacle of amusement attractions. Fun houses, lights, sounds, wardrobes, and the prize/token reward at the end of the journey comprise the complete experience before (anticipation), during (interaction), and after (the memento/echo take-away).


Barnum and Bailey’s commodification of the experience:


Where even the exit becomes an event…

Thus, the temporal aspect and timing of the sequence became central to the mapping of ideas.


In this network topographic map, the user has a point of interaction to activate the event. As each stage unfolds, the mechanics of the interface scale up exponentially, adding layers of visual information through reactive LEDs, text, auditory cues, animated video and finally a Deus ex machina projection finale.


The orchestration returns to the tangible element scaled down to a receipt-sized paper fortune at the point of the user interaction.



The box required many levels of engagement, in conjunction with ergonomic elements to consider. It was decided – as all tellers of fortune require – to have a palm-reading interface through which the user could activate the story. But first, the “black box” always has an entryway, visible from afar. The following sketch identifies how the user is captivated: by a ‘header’ signage piece (first conceived as a printed graphic, later etched), embedded LED lights, low-level audio cues, and a standing-height kiosk with full-colour graphics.


As the concept developed to include a tablet interface, a printer and a projection environment, the ergonomics and proxemics of the kiosk needed to be approximated for an environment yet to be determined. Though not entirely universally accessible due to its height and block structure, the measurements were based on the average standing height of the class, accommodating a range of statures.

Multiple venues were explored; in the end, an outdoor location at the corner of Duncan and Richmond Streets was chosen for the affordances of its projection surfaces (windows) and the proximity of the kiosk placement to oncoming pedestrian traffic.


Existing corner of Richmond and Duncan. Potential viewing and projection surfaces.

The base of the kiosk was a pre-fabricated MDF box finished to withstand inclement weather, with an interior structure sized to accommodate the largest piece of internal hardware: the heavy-duty car battery required to power the components. The top surface was set at an angle of 20 degrees so it would be readily visible and present a perceived affordance of a touch-activated interface.


Prototype kiosk sketches – base set at 16”x16”x37.5”, with an allowance of 5–6 inches of display top-mounted to the box. The foreground kiosk renders the palm interface as an angled protrusion with a box-constructed backing for graphics, housing internal lighting and a surface-mounted tablet/screen display. The background option simplified the design and eliminated extra material, creating an integral unified housing with one affixed back panel, uplit with LEDs. The latter was chosen as the prototype to fabricate once the tablet had been established as the interface for the intimate stage of relaying instructions to the user (a computer screen was a second option). The red rectangle represents a figure standing at 6’-0”; the centre of the tablet is to be affixed at 4’-8” – 5’-0”.


Once the surface-mounted interface was established, we discussed the rules of engagement for the user to follow, and how these would play out. Initially, a set of buttons was going to sit adjacent to the palm reader to help categorize the type of responses the user would receive (and to aid in the suspension of disbelief). However, two sets of interactions created visual and cognitive complexities, and could have sent confusing messages at different times. It was then established that a graphic of a hand icon would support the action of the hand placement, which in turn would set the rules of engagement running on the tablet (“Now that we have your attention…”).


Final working drawing details of the kiosk before disassembling the pieces into a 2D format to CNC laser cut the individual parts.


Prototype graphic (colour option 2) + Zoltar graphic, inspired by the movie Big.
Palm graphic applied to the CNC cut die for the palm interface.


First two templates – surface-mounted palm reader interface – white opaque acrylic ¼” thickness, includes printer slot, IR sensor slot, audio perforations, cable management and access portal, friction-fit sides and allowances for heat-bent curvature with pre-drilled holes for mounting plates.


Third template – clear acrylic ¼” backing with etched graphic illuminated by edge-lighting white LEDs.
Surface-mounted detachable tablet template secured with hardware. Hand template cut within 10” radius, post-mounted with graphic, pressure sensor, and EL luminescent programmed light film.
Light testing with edge-lit LEDs on etched vector-based graphics (Illustrator format).


Costuming & Props:

The costume and props for our fortune teller were curated from the collection of Tatiana Jennings, who runs a theatre company on Dupont Street in Toronto.



The casting for this project brought up some unexpected topics, including political correctness and cultural sensitivity. At the time of filming, the project was known as “Ask Swami,” and was to feature a moaning, overweight yogi. However, the actor who ended up playing our fortune teller was Caucasian, which led to some discomfort within the group. Some were concerned that a Caucasian actor playing someone who spoke Hindi might be construed by some members of the class as insensitive.

At this point, the footage had already been filmed, and re-shooting the video would take considerable time. It was then decided that instead of “Ask Swami,” the project could be changed to “Ask Zoltar.”

Aspect Ratio:

Videos with different aspect ratios needed to be created, including:

  • Several widescreen (16:9) videos to display on the Samsung tablet affixed to the fortune telling machine.
  • A pillar-boxed (4:3) video to create a digital projection in the square window of the Graduate Gallery.

The aspect ratio didn’t just affect the final video output, but also the way each shot was composed while filming. The widescreen videos needed imagery that filled the 16:9 frame, while the pillar-boxed videos needed to be framed so that the actor’s movements were confined to the centre of the screen.
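The arithmetic behind letter- and pillar-boxing is simple to sketch. This hypothetical Python helper (not part of the project toolchain) computes the scaled frame and bar sizes when one aspect ratio is displayed inside another:

```python
def fit_into(src_w, src_h, dst_w, dst_h):
    """Scale a source frame to fit inside a destination frame, preserving
    aspect ratio. Returns (scaled_w, scaled_h, bar_x, bar_y): bar_x > 0
    means pillar-box bars left/right, bar_y > 0 means letterbox bars
    top/bottom."""
    scale = min(dst_w / src_w, dst_h / src_h)
    w, h = round(src_w * scale), round(src_h * scale)
    return w, h, (dst_w - w) // 2, (dst_h - h) // 2

# A 4:3 clip shown on a 16:9 screen at 720p gets 160 px pillar bars:
print(fit_into(960, 720, 1280, 720))  # → (960, 720, 160, 0)
```

The same function also tells you how much of the frame the bars eat, which is why composition had to keep the action inside the centre region.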


The actor who had volunteered to play “Swami” was not fluent in Hindi, so it was decided on set that he could speak English and we would later dub in Hindi and include English subtitles. The actor then went on to dramatically recite lines from Shakespeare’s Richard III.

Later, as the project transitioned into “Ask Zoltar,” it was decided that our fortune teller should speak Romani (which is related to Hindi, and is the traditional language of the Roma people of Europe, also known as Gypsies). A YouTube search was then undertaken to find a clip of a man speaking Romani in an angry manner. Once the clip was found, it was downloaded and slowed to 65% speed to give it a deep tone with a menacing quality.

Sound effects were also needed, including the sound of an explosion, snoring and moaning. A nuclear explosion was chosen to go at the end of the last video on the tablet in order to announce Zoltar’s arrival on the overhead projection. Both the snoring and the moaning effects were pulled off of YouTube, with the moaning consisting of a “cat in heat” recording that was slowed down to 25% in order to match the tone with Zoltar’s voice.


The overall goal was to have the video on the Galaxy tablet and the large digital projection visually tied together through style and colour. We also wanted to give the footage a nostalgic feeling, similar to amusement games from the 1950s.


All of the footage was altered with a green filter to give it a nice contrast against the clear acrylic on which the tablet would be mounted, and to ensure it stood out as passersby viewed the large digital projection from the street. Both the tablet videos and the projection video were layered with an “old projector” filter to give them both an old, eerie quality.



The pacing of the videos was determined by a storyboard that was drawn out to create a theatrical effect. Timing was crucial, as the successive videos were being triggered by sensors and communication between XBees.


Final Video Outputs:

Video #1: Black Screen with Snoring

Video #2: Eyes Closed with Snoring

Video #3: Zoltar Awakes

Video #4: Digital Projection

Video & Sound:

For there to be feedback between the user’s actions and the machine, we needed a way for the Arduino to communicate with some sort of video playback software. Processing would have been a good choice, but as the deadline quickly approached, Max came to the rescue, offering a visual way of connecting the different inputs (the sensors) to the different outputs (the video and the receipt printer).




Full Max Patch.

Information from the sensors was written to the serial port by the Arduino and reached the serial port of the laptop indoors via two XBee modules. Max was instructed to watch the serial port for particular messages, such as “1” and “2”. The first key message, triggered by the proximity sensor, told Max to play the first video of just Zoltar’s eyes. The pressure sensor under the hand triggered the next video, which told the user to think of a question. From there, the “clocker” object started a countdown to trigger the final video on the projector.

Once the final video completed, it sent out a message that could be used to signal the receipt printer to print out the user’s fortune. But we had problems getting the message picked up on our very busy serial port, so a button was hooked up to prototype this feature instead.
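In Python, the routing that Max’s [route] object performed might look like the following sketch. The message numbers follow the description above, while the clip file names are hypothetical placeholders:

```python
# Hypothetical sketch of the Max [route] logic: map serial messages from the
# kiosk's XBee to playback actions. Clip names are illustrative only.
ROUTES = {
    "1": "eyes_closed.mov",     # proximity sensor: Zoltar's sleeping eyes
    "2": "think_question.mov",  # pressure sensor: "think of a question"
}

def route(message):
    """Return the clip to play for a serial message, or None to ignore it."""
    return ROUTES.get(message.strip())
```

Anything else arriving on the busy serial bus simply falls through, which is the same behaviour [route] gives for free in the patch.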

The serial object brings in data from the XBee on the kiosk.
The Route object sends the various messages on the serial bus (0, 1, 2) to trigger specific videos. The Clocker object starts a countdown for playing the final projection.
The three videos are played using one Jit.QT.Movie object reading different QuickTime files.
This is the projector video, which goes to its own Jit.Window for the second screen. A “loopnotify” message sends out a bang when the video is done, to signal the printer to start.

Playback devices:

All video and its accompanying sound was sent from the Max patch. The goal was to keep everything in a central place to avoid sync issues. The second screen from Max was sent to the indoor projector using a VGA cable, and the tablet video was sent by sharing the main screen over wifi. In order to minimize wifi problems, an open router was placed right by the window. Sound was sent to a portable speaker over Bluetooth, and worked surprisingly well through the window.


The wiring underneath the palm interface.


Max 6 seems to have trouble with some HD videos. As a result, the video clips were highly pixelated.

The app running on the Android tablet, iDisplay, worked marginally well indoors, but through the window the video was unusably choppy. Even with a day’s notice that this program would have problems handling video streams, it was difficult to find a replacement remote desktop application quickly enough.

Finally, most traditional operating systems are built around the concept of two screens. For this project we would have required three: one for the Max patch, one for the projector, and one for the tablet. This only became apparent just before launching the installation, so the main screen was shared to the tablet, which is far from ideal.


A still from the video is used to create brightly coloured graphics for the sides of the kiosk.
Testing the projector inside the Grad Gallery. Translucent white fabric was hung in the window to give the projection a surface to reflect off of.
Making last-minute adjustments before the palm graphic is applied.
Affixing the tablet and closing up the back of the kiosk are the last elements of the project to be finished.
Elliott guards the kiosk while we wait for the class to appear. Several passersby asked to have their fortunes read as we waited.

Final Presentation:

Zoltar appears in the night, within eyeshot of the CN Tower.
The tablet announces Zoltar’s imminent arrival.
The XBee cues the projection from inside the Grad Gallery building.
The group poses with Ask Zoltar after the class presentation.


Apple. (2014, November 8). IOS: Understanding iBeacon. Retrieved from

Black automaton. (n.d.). Fortune Teller Lady Machine. Retrieved December 14, 2014, from Tumblr:

Crystall, B. (2007, July 5). A programmable robot from 60 AD. New Scientist. Retrieved from

Dasilva, D. (2014, October 10) It May Be the ‘Now’ Economy, But Is Main Street Retail Ready for iBeacon? Retrieved from

Deutsche, R. (1996). Evictions: Art and Spatial Politics. Cambridge, Mass.: MIT Press

Gilmore, J. & Pine, J. (2007).  Authenticity: What Consumers Really Want, Boston: Harvard School of Business Press.

Groen, D. (2012, February 9). Who controls the CN Tower’s coloured lights? Retrieved from

Hess, R. (2014, January 7). Building a Working “Zoltar Speaks” Fortune Teller. Retrieved from

Kleiman, J. (2013, August 30). Cedar Fair Parks Take Guests Into the Mountain and Up to the Skies in 2014. Retrieved from

Koetsier, T. (2001). On the prehistory of programmable machines: musical automata, looms, calculators. Mechanism and Machine Theory, 36(5), 589–603. doi:10.1016/S0094-114X(01)00005-2

Lau, C. Research: Interactive installations. Retrieved from

MacDonald, B. (2014, February 24). Innovative roller coaster-dark ride combo heads to Canada’s Wonderland. Retrieved from

Martin-Robbins, A. (2014, May 23). Vaughan’s Canada’s Wonderland set to unveil interactive ride. Retrieved from

Palmeri, C. (2014, March 7) Disney Bets $1 Billion on Technology to Track Theme-Park Visitors. Retrieved from

Raphaël von Allmen Design & Scenography. (2009). Helvetia Park. Retrieved from

Safran, L. (1998). Heaven on Earth: Art and the Church in Byzantium. Pittsburgh: Penn State Press.

Sick of the Radio. (2011, April 13). ART:Helvetia Park, an installation in Switzerland. Retrieved from

Scotiabank Nuit Blanche Toronto. (2012). Funhouse, 2012. Retrieved from

Triotech Studios. (2014). Canada’s Wonderland. Retrieved from

Tribe — Xiangren Zheng (Gary), Harish Pillai, Yikai Zhang (Glen)

Tribe is an interactive projection and light installation. The project is divided into two parts. In the first part, audiences enjoy a two-minute projection and light show. In the second part, the show becomes more interactive: audiences can control the flashing patterns of the LED lights and the different video clips projected on the wall using a wireless joystick.


The concept of “Tribe” started from the idea of creating a new form of public entertainment. We noticed that many recreational activities happen in the city of Toronto every day; however, most of them are one-way communication, lacking interaction between the performers and the audience. If we want to turn Toronto into an amusement park, or into opportunities for game-like encounters, we need to add more interactive elements. In addition, observing the Canadian lifestyle, we found that many Canadians love socializing and nightlife; bars, restaurants and nightclubs are spaces they really enjoy. During our discussion, we therefore decided to create an interactive projection and light performance. Our hope was to turn the city of Toronto into a large nighttime amusement park in which everyone can be part of the show.


Step 1: Symbol

After determining the form of the show, we started to look for a theme for our project. We all believed that integrating technology and culture to create a new kind of art form would be very interesting. During the discussion, we remembered a book called Zero by the Japanese designer Matsuda Yukimasa. The book explores major sources of symbols, such as basic shapes, colours and numbers, and places them in the context of mythologies and religions, the human life cycle, people and culture, and symbol systems. We found these symbols simple, elegant and full of mystique. We therefore developed the idea of taking one symbol as our carrier and giving it new meaning by adding content through new media technologies.
Screen Shot 2014-12-10 at 17.10.16Screen Shot 2014-12-10 at 17.10.32

Finally, we decided to use a symbol formed by two overlapping equilateral triangles, a six-pointed star. We chose this shape because it has a sense of stability and mystery simultaneously, and it also looks beautiful and elegant.

2014-11-27 22.17.04

Step 2: Fabrication

The project we made this time is comparatively large: the final size is 8 feet by 8 feet. The hexagonal base is composed of four pieces of plywood, and the two equilateral triangles on top are made of ten 8-foot wood strips. All the cutting was done at the Maker Lab on the 7th floor; here we need to thank our instructor Reza, without whose help we could not have finished this part. We then carried all the materials downstairs and completed the remaining assembly in the studio. Finally, to achieve a better projection effect, we painted the structure white.

2014-12-04 16.45.27
2014-12-04 17.00.02
2014-12-05 22.44.15
2014-12-07 03.35.25
2014-12-07 05.08.34
Photo 2014-12-10, 6 48 03 PM

Step 3: Light system

This was the first time we had built a light system at such a scale. In Tribe, we used 23 metres of LED strip, cut into 24 segments according to the length of each side of the shape. Each segment was soldered to two 0.75 mm² wires and numbered according to the sequence of the sides. Finally, each strip was stuck on the visible edges of the mesh to outline the geometric shape of the structure.
2014-12-07 01.15.48
2014-12-07 01.15.38
2014-12-06 21.04.43
2014-12-07 03.33.53
2014-12-07 08.24.05
2014-12-08 02.32.38
On the hardware side, we used one 12 V 12 A power supply to power all the LED strips. The supply is connected to a 27-channel LED dimmer, which takes a DMX512 signal as input and converts it to PWM (pulse-width modulation) output. (DMX512 is a bus protocol used in theatres and live shows to send light-intensity information to the fixtures set up on stage.) To receive the DMX signal, the LED dimmer has to be connected to another device called a DMX512 interface, which allows a computer to send or receive DMX512; this is the bridge that lets the computer output light-intensity information to the LED dimmer. The standard connector for DMX is the 5-pin XLR, but since these are less common than the 3-pin XLR used for audio, we also needed a 5-pin XLR adaptor.
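The channel layout the dimmer reads can be modelled in a few lines of code. This is a minimal sketch of a DMX512 universe (512 one-byte channels); the helper names are illustrative, and the actual serial transport to the interface is omitted:

```python
# Minimal model of a DMX512 universe: 512 channels, one byte each.
# A real interface (e.g. an Art-Net node) wraps this frame in its
# own transport protocol; only the channel addressing is shown here.

DMX_CHANNELS = 512

def make_universe():
    """Return a fresh universe with all channels off (0)."""
    return bytearray(DMX_CHANNELS)

def set_channel(universe, channel, intensity):
    """Set a 1-indexed DMX channel (1-512) to an intensity 0-255."""
    if not 1 <= channel <= DMX_CHANNELS:
        raise ValueError("DMX channels are numbered 1-512")
    if not 0 <= intensity <= 255:
        raise ValueError("intensity must fit in one byte (0-255)")
    universe[channel - 1] = intensity

# Example: drive the first three dimmer channels (one per LED segment).
u = make_universe()
set_channel(u, 1, 255)  # full brightness
set_channel(u, 2, 128)  # half brightness
set_channel(u, 3, 0)    # off
```

With 24 LED segments, each dimmer output simply maps to one channel of this array.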

Photo 2014-12-10, 8 03 17 PM

Photo 2014-12-10, 8 03 01 PM

Photo 2014-12-10, 8 02 51 PM

Photo 2014-12-10, 8 04 48 PM

Photo 2014-12-10, 8 05 00 PM

On the software side, we used MadMapper to read the pixel information from videos or pictures. MadMapper then sends DMX information, derived from those pixel values, to the DMX ODE. In addition, MadMapper supports Syphon, which creates many new possibilities: Processing has a library that sends its output directly into MadMapper. In this way, we can create a wide range of interactive light and projection works by integrating these two powerful tools.

The entire process runs like this: light intensity is determined by video fed to the software, sent through Art-Net over Ethernet or USB to a DMX interface, and then to the LED dimmer. The dimmer takes the intensity values addressed to it and dims the LED strip connected to the corresponding output by fast-switching the voltage provided by the power supply.

Step 4: Interaction

In this project, we initially wanted to use a Leap Motion as our interaction tool, but after talking with Kate and Nick we learned that we had to use a radio connection as the main method of interacting with our content. After that talk, we developed several ideas, such as making a pair of gloves with accelerometers in them, or sending MIDI signals wirelessly to let the visuals interact with the sound. However, these interactions were either not natural enough or not interesting enough. Reading the project’s requirements again and again, we finally focused on the word “game-like”. We thought it would be really fun to use a joystick to control our lights and projections. Based on this idea, we wrote a Processing sketch that reads the signal sent from the Arduino and switches between different videos based on the data. Processing then feeds the video to MadMapper through Syphon, and MadMapper controls both the projections and the lights according to the video it receives. In this way, audiences can interact with our project wirelessly, and the interaction feels natural and engaging.
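The video-switching step can be sketched on its own. This is a simplified stand-alone model (in Python rather than Processing) with hypothetical clip names; the real sketch read the joystick value from the Arduino over serial and handed the chosen clip to MadMapper via Syphon:

```python
# Simplified model of the joystick-to-video mapping described above.
# Clip names are placeholders, not the files used in the show.

VIDEOS = {0: "idle_loop.mov", 1: "pattern_a.mov", 2: "pattern_b.mov"}

def select_video(joystick_value, current):
    """Return the clip to play for an incoming joystick value.
    Unknown values keep the current clip (ignoring serial noise)."""
    return VIDEOS.get(joystick_value, current)

clip = VIDEOS[0]
for reading in [1, 1, 7, 2]:   # 7 simulates a garbled serial byte
    clip = select_video(reading, clip)
# clip is now "pattern_b.mov"
```

Keeping the current clip on unknown values is a cheap guard against the occasional corrupted byte on a wireless serial link.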

2014-12-02 23.31.25

Step 5: Animation

We started the animation with less than two weeks left, so we knew a detailed plan was required. Knowing what kind of video would eventually be needed, we decided to do the frame animation for the changing LED patterns first. Since we were not using NeoPixels, which can be programmed to light up according to the shape, we had to divide the pattern the audience would see on presentation day into multiple segments and animate each segment’s transparency frame by frame; that transparency is reflected in the voltage applied to each LED strip, synchronized to the beats of the music.

Everything depends on the tempo and beats of the song: the more exactly an animation lands on a beat, the more mesmerizing the final effect. The pattern displayed on the polygonal object should also represent the musical note being played. It took us four days to complete the light video.
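The frame-counting behind this beat sync can be sketched as follows; the tempo and frame rate here are illustrative, not the values of the actual track:

```python
# Sketch of the beat-to-frame timing math, assuming a fixed tempo
# and frame rate (both values are illustrative).

BPM = 120          # beats per minute of the track
FPS = 24           # animation frame rate

def beat_frames(num_beats):
    """Frame index on which each beat lands."""
    frames_per_beat = FPS * 60 / BPM   # 12 frames per beat here
    return [round(b * frames_per_beat) for b in range(num_beats)]

# The first four beats land on frames 0, 12, 24 and 36; keyframing
# segment transparency on those frames keeps the light pattern
# locked to the music.
print(beat_frames(4))
```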


Light video screenshot

2014-11-28 18.41.36
(Since Gary is the animator in our group, here are the confessions of an animator.)
As for the video we needed to project, honestly speaking I didn’t really have a concept at all, but that was fine: the final presentation would be more like an installation in a club than a conceptual artwork that people would critique. As long as the motion was intense and made sense with the music, the final delivery would be enough to impress the audience.

However, there was a serious problem I overlooked until I was done with the light video. I had been too ambitious about making the whole 1’44” of animation from the very beginning, and didn’t really consider how much work that would be, given that the unit of measurement here is not seconds but frames. Immediately after finishing the light-pattern video, I realized it would be mission impossible to do all the animation by myself before the deadline. Using some free VJ clips, along with 2D and 3D work I had done before, was the only solution.

Even though the amount of animation I had to do personally was reduced, the job wasn’t easy. The shape we had to project onto wasn’t regular, so masking was inevitable, and it had to be done frame by frame against the reference light video. Also, as a perfectionist, making every single frame unified in art style while using material in different styles was tricky. I had to modify or redo some resources to make the result logically coherent, so it didn’t feel awkward to watch.

On the software side, I mainly used After Effects to create the videos for both the projections and the light control, along with Cinema 4D for some of the 3D animation, whose files were then imported into AE for refinement.

During this project I benefited a lot, whether in software technique, creative construction, or group collaboration. It was a project that combined everything together, not just what we learned in C&C but other creative techniques as well, worked out in a very short time.





The final result:


Screen Shot 2014-12-11 at 03.01.46

Case Study 1:

4D virtual reality theme park

This was our earliest case study, from when we were thinking hard about how we could make an interactive amusement park. It got us considering whether we could build some sort of infrastructure for an interactive attraction. Along the way, we began thinking about feasibility in terms of time, cost and construction.

Our animator was inspired to use Cinema 4D to make 3D projections, but found it too large a feat to accomplish in a short time frame. This video was part of the inspiration process that got us thinking about what we could do.

Case Study 2:


In this video, you can see symmetrical projection mapping in which the visual designer uses a scaffold of sorts that serves as a grounding for the mapping. The designer uses MadMapper to shape a cube-like projection, which gives the illusion of a 3-D structure once the projection mapping is made.

Case Study 3:

Enigmatica Mars

Another good reference on projection mapping that plays with illusions of spatial perception and 3-D space. During our case-study period, we hadn’t yet given much thought to how we could use XBee in our projection mapping project.

Case Study 4:


This video impressed us with its use of symmetry and projections on the interior of a building structure. At this phase of our case study, we had begun constructing our wooden star shape. We felt we might be able to do something similar with our own project, but on a smaller scale.

Project 3: City as Amusement Park – Simon Says Lanterns (Tegan Power & Rida Rabbani)

Project Proposal:

Project Video:

Project Description:

Our project is an interactive two-player game installation using XBee communication. Inspired by Simon Says, it has been altered to be two-player and to transform an existing space into an installation for game play. Lanterns of different sizes and colours communicate with each other, and players attempt to match the colour sequence the other player has input. A set of three small lanterns (blue, green, red) have large arcade buttons that player one presses six times in any order. This input corresponds to the large lanterns hanging on the other side of the room: once player one has entered the sequence, the corresponding LED string colours blink in order on player two’s side. Player two must then match the sequence, tapping the large lanterns to activate the tilt switch in each. If player two matches the sequence correctly, all six lanterns on both ends blink in congratulations; if not, a buzzer sounds. This game could transform any space, indoor or outdoor, and could be packaged and sold as an “install-it-yourself” party game.


Transmit (small lanterns) code:

Receive (large lanterns) code:

Writing the code was by far the most challenging aspect of this project and took several days to evolve and become functional. We first tested Arduino-to-Arduino XBee communication by having buttons on one end control LEDs on the other. We did this by assigning each button pin a number (1, 2, 3) that would serial-print to the LED side; the LED side would read the incoming number and light up the corresponding LED (1 = blue, 2 = green, 3 = red). Once this was set up, we focused on the receiving end. (We wanted the player one and player two roles to be interchangeable, but once we realized the difficulty of the code we had taken on, we decided to stick to one-way communication that simply resets at the end of a turn.) The first thing we did on the receiving end was create variables for player one’s and player two’s sequence inputs. Player one has six button presses, so the code looks for presses and fills the six “YourHold” slots. We then added booleans to indicate when each turn was over. Once YourTurnIsOver is true, the input sequence is digitally written to the corresponding LED strings in the large lanterns for player two to memorize. Player two then fills the six “MyHold” slots, again via button presses 1, 2 or 3. Once player two has finished, MyTurnIsOver is true, and the YourHold and MyHold values are compared. Each of the six comparisons is nested inside the last, because if any value fails to match there is no need to check the others. If all six match, all LedPin outputs blink; if they do not, the buzzer on pin 13 goes HIGH. The system then resets for the next match.
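The turn-taking and comparison logic above can be modelled compactly. The real code is an Arduino sketch; this stand-alone Python sketch keeps the YourHold/MyHold naming from the write-up and mirrors the nested, stop-at-first-mismatch comparison:

```python
# Simplified model of the Simon Says sequence logic. The real version
# is an Arduino sketch; names follow the write-up above.

SEQUENCE_LENGTH = 6

def record_presses(presses):
    """Fill the six hold slots from incoming button numbers (1-3)."""
    hold = [p for p in presses if p in (1, 2, 3)][:SEQUENCE_LENGTH]
    turn_over = len(hold) == SEQUENCE_LENGTH
    return hold, turn_over

def sequences_match(your_hold, my_hold):
    """Compare slot by slot, stopping at the first mismatch, as the
    nested if-checks in the sketch do."""
    for yours, mine in zip(your_hold, my_hold):
        if yours != mine:
            return False   # buzzer on pin 13 goes HIGH
    return True            # all six lanterns blink

your_hold, _ = record_presses([1, 2, 2, 3, 1, 3])
print(sequences_match(your_hold, [1, 2, 2, 3, 1, 3]))  # match
print(sequences_match(your_hold, [1, 2, 3, 3, 1, 3]))  # mismatch
```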






Case Studies:

Case Study 1-Intel World Interactive Amusement Park

Intel came to Inhance needing an environment in which to explain their Intelligent System Frameworks in an educational and enjoyable way.

Interaction: It features an animated theme park that allows up to thirty people to engage with the wall, bringing up floating windows of content amidst the rides, roller coasters and people moving about the park.

Technology: The result is the Intel Amusement Park Experience, an interactive multitouch application displayed on a 3×2 multi-screen LCD wall. It integrates with a Social Media Photo Booth app that lets attendees take photos superimposing their faces on a roller coaster ride. The photos can be sent to Facebook, Twitter, the Intel® World Wall and their email.

Narrative: The wall brings all of Intel’s products into one environment to show the connectivity across the entire park. The goal was to deliver the same emotion one experiences in an amusement park, drawing attendees to the wall to touch it and learn. The result was constant excitement on people’s faces and large clusters of people touching the wall. Created primarily for trade shows, it appeared at Embedded World, Mobile World Congress, Design West and Cornell Cup, and was highly successful.

Case Study 2-Lagoon Amusement Park

Amusement parks are all about speed. Whether it’s riding a massive roller coaster or plummeting 70 feet inside a tubular water slide, guests want to go fast.

Interaction: Lagoon is now able to satisfy the needs of its employees and guests with the updated card-printing technology, bringing the park back up to its desired speed.

Narrative: Now that Lagoon Amusement Park has established its current system, computer stations at the gates can track Season Passport access information and provide valuable marketing data. “We’re trying to increase our per-person usage through promotions such as our Season Passport Holder coupon books,” Young said. This allows the park to operate at full capacity all day long, letting guests get their season passports quickly and in a fun way.

Case Study 3- XD Dark Ride

Set in the iconic Filmpark Babelsberg just outside Berlin, this full turnkey project was the first installation of the XD Dark Ride in the world.

Interaction: XD Dark Ride is an immersive and interactive theater with a 24-seat capacity, allowing many people to ride at once.

Technology: Adding interactivity to state-of-the-art immersive special effects, it has revolutionized the world of ride simulation by combining video game technology with 3D animated movies.

Narrative: The first XD Dark Ride theater project in Europe, it is a conversion of a pre-existing spherical structure into a one-of-a-kind interactive dome integrating the world’s largest interactive screen (16 m wide).
Case Study 4- Wizarding World of Harry Potter
The latest installment of The Wizarding World of Harry Potter is scheduled to open this summer in Orlando’s Universal Studios theme park. The new attraction features London and the magic-packed Diagon Alley.
Interaction: Guests will not only be able to enter the arches of the Leicester Square facade but will be immersed in a bustling wizarding hub within a Muggle city, where towering buildings are slightly askew, with steep staircases and jagged edges galore.
Technology: In the real-life version, visitors will be in awe of the marble lobby and cavernous passageways. They’ll take off from here on a multi-sensory thrill ride through the vaults. And the dragon that will perch atop the bank building (reminiscent of when it escapes from the bank in the series) really does blow a giant ball of fire quite frequently. The thrill ride requires visitors to don 3D Glasses and features 360-degree themed sets, intense 4K animations, and 3D projection systems for complete immersion.
Narrative: Guests around the world were impressed by the immersive experience Universal created and the meticulous attention to detail used to bring the Harry Potter stories to life. The central Harry Potter theme is realized through a real-life version of the story’s world.

Photos and Diagrams:


photo 1photo 2


Soldering the arcade button to longer leads. Installing the button into the small lanterns by wrapping wire around it and the metal piece in the centre of the lantern. LED string is fit into the small lanterns and affixed to the sides to keep it in. Long leads come out the bottom for later connection to the Arduino.

10841221_10152630338748477_483116723_n-2photo 3photo 5

All three small lanterns are affixed to the black player one board. Construction paper covers with button holes are attached to the top to hide electronics inside each lantern.

photo 4-2photo (22)


Large lanterns are also filled with the corresponding coloured LED string, affixed to the edges. Each tilt sensor is soldered to long leads and affixed to the bottom of the metal lantern structure. The tilt sensor had to be placed at a very specific angle so that player two’s tap would fully close the switch. Long leads are soldered to the other ends of the tilt switches and LED string for connection to the Arduino.

photo 1-2photo 2-2


Final setup: Hanging large lanterns for player two, board mounted small lanterns for player one.


photo 2-3photo 1-3




  • 2 Arduinos
  • 2 XBees
  • 6 lanterns
  • 3 tilt sensors
  • 6 sets of LED string
  • 3 arcade buttons
  • 2 9 V batteries

Circuit Diagram:

small lanterns breadboard prototype: buttons are replaced with large coloured arcade buttons installed in small lanterns. LEDs are replaced with red, green and blue LED string inside small lanterns.


large lanterns breadboard prototype: buttons are replaced with tilt sensors. LEDs are replaced with red, green and blue LED string inside large lanterns.






Notes on process:

We started off by thinking of different ideas generated by the theme of the project. With the theme being an amusement park, we wanted something that involved people visually and engaged them to join in or interact with the installation. Initially we wanted to create an environment that could be experienced from both inside and outside. However, once we started working with our Simon Says idea, it really didn’t matter where the lanterns were placed, as long as they could establish communication.

Then we had to decide whether we wanted a one-player game, dedicated to a single person interacting with the Simon Says lanterns, or two players playing against each other while the rest of the audience enjoyed the lanterns creating patterns and lighting up.

After settling on the two-player game installation, we had to work with different materials. At first we were thinking of using balloons, but we decided on lights and lanterns with the XBees inside them, as we could not find balloons with enough cavity space. When we proposed the idea, we were also advised that larger lanterns and materials would make a bigger impact.

The code to run our game was the more complicated part. With a lot of help from Ryan, we got code that stored arrays and chunks of sequences; the point where we got stuck was getting the buttons to respond to the sequence of lights.

At the same time, we managed the materials and sensors and how they respond to one another. On the day of the final presentation we were still experimenting with the stability of the materials, as well as with the code. It was more complicated than we thought: although the code was able to store the sequence, the XBee communication was lost along the line, and when we did get them to communicate, one of the buttons kept sending faulty data; at some point our simple on/off button became a sensor detecting movement near it. By the time we got the circuit to work as a Simon Says game, with Ryan’s help, it was too late to hook it up to the tilt sensors and lights on the larger lanterns, which, despite our last-minute attempts, would not receive any data unless connected directly to the Arduino.

Project Context:

Although the Simon Says Arduino demo was a very simple demonstration that did not use the XBees, it gave us an idea of how to send information back and forth, and let us test the LEDs and match them using the buttons. The next step was to translate this into our more complicated wireless use of the Simon Says technique between the lanterns, making it more interactive.

These case studies helped us explore not only the potential of real time technology but how experiential and interactive attractions, sets and props add to the touch and feel of the environment. Provoking senses and working with the familiarities and surprises for the audience makes them curious and interested in the space and attractions.

Digital Bell Tower


Screen Shot 2014-12-10 at 12.30.53 PM


1. Project Context

For this project we wanted to re-imagine a bell tower for the digital age. Bell towers are a relic from a period in history when community was very location-based. At their inception, bell towers kept a community on the same rhythm through notifications of the time. Today, in a society where everyone has the time easily available on their phone, a bell tower that measures time is no longer of value in the way it once was. Community, too, is no longer strictly location-based: for example, the OCAD University community has several campus buildings, and for certain programs (such as Inclusive Design) classes are also available online.

Screen Shot 2014-12-07 at 6.23.44 PM

For our redesign of what a bell tower could be, we wanted to create an object that would allow for interaction between people in the physical space, as well as people in the community who are not present. Instead of measuring the time, our bell tower measures the mood of people within that community, and broadcasts it to all. Overall, instead of a bell tower measuring the tempo of time passing, our bell tower measures the tempo and mood of the people within it.

Screen Shot 2014-12-07 at 6.33.00 PM


2. Project Description



The major part of this installation is the bell itself, which was built around the shape of a balloon. Several layers of paper were added to its surface using the papier-mâché method; the next stage was to wait for it to dry completely and to spray the whole object metallic silver. Tinfoil tape was ultimately applied to the formed bell to smooth the surface.

After being hung from the ceiling by chains through two groups of hooks, the bell was able to swing. The purpose of the project is to receive the current mood of #Toronto citizens on Twitter, happy 🙂 or sad 🙁, and visualize it through different colours as well as bell sounds.

There are two pods on the ground with LED strips inside, which are triggered when the bell swings above them. Once triggered, the pods respond by lighting up bright white. According to the mood type from the tweets, the EL wires wrapped around the pods either turn pink, representing a happy mood, or blue, standing for a sad mood. Simultaneously, the sound changes during the swing of the bell based on the tweet mood as well.

Four XBees are divided into two groups, each with a sender and a receiver, to achieve the expected communication.

3. Process


The first step in our project was making the digital bell. To do so, we knew we needed a hollow interior in order to put our accelerometer and speaker inside. We purchased a large balloon and used the papier-mâché technique with latex paint. The latex made the shell much harder and more durable than regular papier-mâché; we did this so that viewers could knock the bell around without worrying about damage. At the same time, the bell needed to be light so it would not pull on the ceiling. Overall, the structure we created was very durable. To make it look like a bell, we spray-painted it at 100 McCaul to give the structure a silver sheen. Unfortunately, the spray paint flaked off, so we decided to use tinfoil tape to create the same effect. This ended up looking better and created a smoother surface on the bell.



In terms of electronics on the bell, we placed a wireless Bluetooth speaker inside the hollow cavity and used large screws to bolt it in place. At the top of the structure we added a light and the accelerometer, with the XBee, to control the sound.


For the pods, we used two plastic lamps, ripped out their insides, and put an LED strip and an IR proximity sensor in each. Then we went to the metal shop, drilled holes in two metal bowls for the wires, and put the pods inside the bowls for a cleaner look.


Originally we wanted the EL wire to run across the bell. Unfortunately, when plugged into a battery instead of the wall, the light became much weaker; with a higher battery voltage, the EL driver board began to burn. Faced with this dilemma (one voltage too high, the other too low), we decided to put the EL wires around the pods instead.


To broadcast the Digital Bell Tower we created a website. On the website is a quick description of the bell tower, buttons to influence it, and a live stream connected to the webcam facing the bell tower. We created the website from a Tumblr template, added the live streaming and Twitter buttons with some HTML/CSS edits, and then purchased the domain and set it up. To stream we used the service Ustream, which worked very well and was easy to use.

4. Code

The code for the Digital Bell Tower project consists of a few individual programs. A Max/MSP patch generates the bell sound; two Python scripts read Twitter data and send OSC messages to Max/MSP and to one of the Arduinos; and three Arduino sketches handle the hardware: one reads the accelerometer data, which is sent to the computer via XBee, one reads distance data and triggers an LED strip every time the bell swings over one of the two sensors, and one controls the EL wires and receives serial messages from Python via XBee.

The Arduino handling the accelerometer has a very simple sketch and won’t be discussed in this section.

const int xPin = A5;  // accelerometer X axis
const int yPin = A4;  // accelerometer Y axis

void setup() { Serial.begin(9600); }  // serial goes out over the XBee

void loop() {  // stream the X,Y readings as a comma-separated pair
  Serial.print(analogRead(xPin)); Serial.print(",");
  Serial.println(analogRead(yPin));
  delay(50);
}

4.1 Python

Even though the GitHub link points to a single Python script, the actual project used two very similar scripts. This may have increased the project’s complexity, but it simplified the configuration of the XBees. The script runs a Twitter search and gets the most recent “Toronto :)” and “Toronto :(” tweets. It then compares the times each was posted and returns 0 or 1 depending on the result. Lastly, it sends that value to Max/MSP via an OSC message, and to the Arduino by writing to the serial port. The script run on the Max/MSP computer performed an additional search and, through the same process, sends another OSC message, this time with three possible values: 0, 1 and 2.
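The comparison step the scripts perform can be sketched as follows, with the Twitter search and the OSC/serial output stubbed out; the timestamps and the helper name are illustrative:

```python
# Sketch of the mood-comparison step. The real scripts pulled live
# search results; here the two tweet times are hard-coded examples.

from datetime import datetime

def mood_value(happy_time, sad_time):
    """Return 0 if the latest "Toronto :)" tweet is newer than the
    latest "Toronto :(" tweet, 1 otherwise (matching the happy/sad
    modes in the Max/MSP patch)."""
    return 0 if happy_time > sad_time else 1

happy = datetime(2014, 12, 10, 20, 15)   # latest "Toronto :)" tweet
sad = datetime(2014, 12, 10, 20, 3)      # latest "Toronto :(" tweet
mode = mood_value(happy, sad)
# mode == 0, so the patch plays the happy tweet sound
```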

4.2 Max/MSP


The Max/MSP patch is where all the sounds are generated. It is divided into six colour-coded sections to make it easier to read, and each section performs a specific task.

The first, red section receives the accelerometer data from the Arduino and parses it. The accelerometer only reads and sends X and Y acceleration data. This section is also where the data can be calibrated, by changing the values on the two “scale” objects.

The yellow section below receives OSC messages from Python. There are two incoming messages: the first can be 0 or 1, and the second can be 0, 1 or 2. These values control the type of sound that will be played: mode determines which section of the Max/MSP patch generates the sound, and posMode changes the sound within one of the modes (sad tweet mode).

The second of the upper sections, the green one, calculates the acceleration between incoming data points. The initial idea was to find a way to calculate the difference between two subsequent incoming values, and ‘timloyd’ from the Cycling ’74 forum presented a simple solution using just a couple of objects. This acceleration data is later used to trigger the bell sound when the patch is running in Mode 0 (happy tweet mode).

The third section mixes between four buffered bell sounds based on the X and Y values coming from the accelerometer, using an algebraic expression posted by Dave Haupt on a forum. The audio technique is very similar to vector synthesis, but instead of mixing waveforms as a vector synthesizer normally would, the patch mixes between four bell sounds of the same length. All four sounds were Tibetan bowl samples, edited in Ableton Live for this project. While the patch runs in sad tweet mode, this section mixes between four shorter segments of the buffered sounds, producing a sound that resembles a traditional vector synthesizer. Vector synthesizers are traditionally controlled with a joystick that mixes the waveforms based on its X and Y position; the Digital Bell Tower builds on this idea and gathers X and Y data from an accelerometer instead, so the bell itself can be understood as the joystick.
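A four-way vector mix of this kind typically uses the standard bilinear form, in which the four gains always sum to 1. The patch’s exact expression is not reproduced here; this is a sketch of the general technique:

```python
# Sketch of a four-way vector mix: each corner of the X/Y plane
# holds one bell sound, and the four gains always sum to 1.
# The exact expression used in the Max/MSP patch may differ.

def vector_gains(x, y):
    """x and y are normalized accelerometer values in [0, 1].
    Returns gains for the four corner sounds (A, B, C, D)."""
    a = (1 - x) * (1 - y)
    b = x * (1 - y)
    c = (1 - x) * y
    d = x * y
    return a, b, c, d

gains = vector_gains(0.5, 0.5)   # bell at rest, centred
# each gain is 0.25, and the four always sum to 1.0
```

Pushing the bell fully toward one corner silences the other three sounds, which is exactly the joystick behaviour of a classic vector synthesizer.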

The fourth section is the happy tweet mode sound generator. As already mentioned, it is activated by incoming OSC messages from the Python script. If the message is 0, the “ggate” objects open the audio signal path, and all four bell sounds are triggered every time the acceleration value rises above the threshold. Each sound’s gain is then multiplied by a value from 0 to 1 based on the vector algebraic expression; the sum of the four gains is always 1.

The fifth and last upper section is the sad tweet sound generator. It works in a very similar way to the one described in the previous paragraph. The main difference is that instead of using a “groove~” object to play the buffered sound from beginning to end, a “wave~” object plays just a section of the sound. The section’s start and end positions are determined by the three possible values of posMode, the second OSC message coming from Python. This mode is activated when the mode value equals 1; that value controls the “phasor~” object.

4.3 Pods

4.4 EL wires

5. Sketches

5.1 Accelerometer

Screen Shot 2014-12-10 at 6.47.22 PM

5.2 Pods – LED Strip and Proximity Sensors

Screen Shot 2014-12-10 at 7.48.06 PM

5.3 EL Wire & Twitter



6. Final Shots

Screen Shot 2014-12-10 at 8.36.58 PM

Screen Shot 2014-12-10 at 8.37.37 PM

Screen Shot 2014-12-10 at 8.38.42 PM

Screen Shot 2014-12-10 at 8.39.53 PM

Screen Shot 2014-12-10 at 8.40.23 PM


7. Feedback

Feedback from teachers & students (future aims):

– more variation within the physical push/accelerometer (make it more obvious when it is on, and when it is being pushed)

– make it more about the people in the space rather than those on Twitter

– move EL wire to the bell (original plan, but work out the kinks)

8. Case Studies

8.1 – Public Face II


Public Face II is an installation created by artists Julius von Bismarck, Benjamin Maus and Richard Wilhelmer and displayed in the city of Lindau, Germany. The installation consists of a giant smiley built on top of a lighthouse.

A camera mounted on top of the lighthouse, facing down towards the crowd at ground level, uses computer vision algorithms to analyse people’s faces, and the smiley reacts to the emotion data in real time.

Public Face II takes emotion data from a group of people’s faces and displays it in a very straightforward manner: a monumental smiley. The Digital Bell Tower aims to display emotion data not iconically, but by turning the data into symbols. (Link1, Link2)

8.2 – D-Tower

Screen Shot 2014-12-10 at 7.05.53 PM


D-Tower is a threefold project: it consists of an architectural tower, a questionnaire and a website, and all three elements work together. The inhabitants of Doetinchem, the Dutch city that commissioned the piece, answer a questionnaire that records the overall emotions of the city. Happiness, love, fear and hate are the emotions being measured. The results of the questionnaire are displayed on the website, and the tower also reflects Doetinchem’s mood.

Our project likewise aims to react to a city’s overall feeling: colours and lights installed on the bell and the installation will change following the mood of Toronto (link1, link2, link3).

8.3 – Aleph of Emotions


Aleph of Emotions visualizes worldwide emotions. Twitter is the source of the collected emotions; the data was collected over 35 days in 2012 using openFrameworks. The D.I.Y. physical object consists of an Arduino and a tablet, and it is built to resemble a photography camera. The user points the object in any direction and the tablet displays the emotion information for that particular region.

The Digital Bell Tower will similarly use Twitter to collect emotion data. Geography will be restricted to Toronto, however, rather than sourcing worldwide data as Aleph of Emotions does. Data will be collected in real time, and the project will also respond to Twitter data in real time, as opposed to collecting data during a specific time period and displaying it afterwards (link).

8.4 – Syndyn Artistic Sports Game

Screen Shot 2014-12-10 at 6.59.07 PM


This is an indoor badminton game created and named after Syndyn. The speeder is lit with a red LED, while each player’s racquet is fitted with electroluminescent wire (and sensors) to reflect motion through light effects. The players turn physical movement into an audiovisual performance while playing. With signals transmitted from the sports equipment to computers, the game is performed through sound and light effects.

Similar to this installation, our project will also have a swinging bell, fitted with an accelerometer to detect its motion, XBees as the sender and receiver of data, and electroluminescent wires to display changes of state. In terms of presentation, visual and sound effects are expected to emerge during the motion. All these functions will be triggered when the bell swings and interacts with two objects on the ground.

In order to create a visual memory of the badminton game, the EL wires on the racquets are also wrapped around the players’ wrists, while a video camera records the motion of the players lit up by the wires. In addition, the speeder is lit with a red LED so the camera can also capture its moving track and form an image like this.

An iPod touch is used at the beginning of the game for the players to choose the theme colour and other visualizations of the game. By contrast, our plan is to let the users hashtag their mood before sending a tweet. We have two mood types, happy and sad, which are triggered by tweets and represented via pink and blue EL wires, along with objects with white LEDs on the ground (link).

“LightGarden” by Chris, Hart, Phuong & Tarik











A serene environment in which OCAD students are invited to unwind and get inspired by creating generative artwork with meditative motion control.

The final Creation & Computation project (a group project) required transforming a space into an “amusement park” using hardware, software, and XBee radio transceivers.

We immediately knew that we wanted to create an immersive visual and experiential project — an interactive space which would benefit OCAD students, inspire them, and help them to unwind and meditate.

The Experience










Image credit: Yushan Ji (Cynthia)

We chose to present our project at the Graduate Gallery as the space has high ceilings, large unobscured empty walls and parquet flooring. We wanted the participants to feel immersed in the experience from the moment they entered our space. A branded plinth with a bowl of pulsing and glowing rock pebbles greets the participants and invites them to pick up one of the two “seed” controllers resting on the pebbles. These wireless charcoal coloured controllers, or “seeds”, have glowing LEDs inside them, and the colours of these lights attract the attention of participants from across the space.

Two short-throw projectors display the experience seamlessly onto a corner wall. By moving the seeds around, a participant can manipulate a brush cursor and draw on the display. The resulting drawing is generated by algorithms that create symmetric, mandala-like images. To enhance the kaleidoscope-like visual style, the projection is split between two walls with the point of symmetry centered on the corner, creating an illusion of three-dimensional depth and enhancing immersion.

By tilting the seed controller up, down, left, and right, the participant shifts the position of their brush cursor. Holding the right button draws patterns based on the active brush. Clicking the left button changes the selected brush. Each brush is linked to a pre-determined colour, and the colour of the brush is indicated by the LED light in the seed as well as by the on-screen cursor. Holding down both buttons for 3 seconds resets that participant’s drawing but does not affect the other user’s.

To complement and enhance the meditative drawing experience, ambient music plays throughout, and wind chime sounds are generated as the player uses the brushes. Bean bags were also available in the space to give participants the option of experiencing LightGarden while standing or sitting.

The visual style of the projections was inspired by:

  • Mandalas
  • Kaleidoscopes
  • Zen Gardens
  • Fractal patterns
  • Light Painting
  • Natural symmetries (trees, flowers, butterflies, jellyfish)









Image credit: Yushan Ji (Cynthia)

Interactivity Relationships

LightGarden is an interactive piece that incorporates various relationships between:

  • Person to Object: exhibited in the interaction between the player and the “seed” controller.
  • Object to Person: the visual feedback (i.e. the on-screen cursor responds predictably, through changes in its location and appearance, whenever the player tilts the controller or clicks a button), as well as auditory feedback (i.e. the wind chime sound fades in when the draw button is clicked), lets users know that they are in control of the drawing.
  • Object to Object: our initial plan was to use the Received Signal Strength Indicator to illustrate the relationship between the controller and the anchor (e.g. the shorter the distance between the anchor and the “seed” controller, the faster the pulsing light on the anchor goes).
  • Person to Person: since there are two “seed” controllers, two players can use their individual controllers to collaboratively produce generative art with different brushes and colours.

The Setup

  • 2 short-throw projectors
  • 2 controllers, each with an Arduino Fio, XBee, accelerometer, two momentary switches, one RGB LED, and a lithium polymer battery.
  • 1 anchor point with an Arduino Uno, a central receiver XBee, and RGB LED
  • An Arduino Uno with Neo Pixel strip

System Diagram















The Software

Processing was used to receive user input and generate the brush effects. Two kinds of symmetry were modeled in the program: bilateral symmetry across the x-axis, and radial symmetry ranging from two to nine points. In addition to using different colors and drawing methods, each brush uses different kinds of symmetry to ensure that each one feels significantly different.

Each controller is assigned two brushes which it can switch between with the toggle button. A class was written for the brushes that kept track of its own drawing and overlay layers and was able to handle all of the necessary symmetry generation. Each implemented brush then extended that class and overrode the default drawing method. There was also an extension of the default brush class that allowed for smoothing, which was used by the sand brush.
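As a rough illustration of the symmetry generation the brush class handles (names and structure here are ours, not the project’s actual class), each cursor position can be expanded into its radial and bilateral copies like this:

```python
import math

def symmetry_points(x, y, folds, bilateral, cx, cy):
    """Expand one brush position into its symmetric copies: `folds`
    rotations around the centre (cx, cy), each optionally reflected
    across the horizontal axis. A sketch of the technique described
    above, not the project's implementation."""
    points = []
    for i in range(folds):
        a = 2 * math.pi * i / folds
        dx, dy = x - cx, y - cy
        rx = cx + dx * math.cos(a) - dy * math.sin(a)
        ry = cy + dx * math.sin(a) + dy * math.cos(a)
        points.append((rx, ry))
        if bilateral:
            points.append((rx, 2 * cy - ry))  # mirror across the x-axis
    return points

# Sixfold radial symmetry plus bilateral reflection gives the twelve
# cursor positions mentioned for the ripple brush below:
print(len(symmetry_points(100, 40, 6, True, 0, 0)))  # → 12
```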

One major downside discovered late in development was that the P2D rendering engine won’t actually make use of the graphics card unless drawing is done in the main draw loop of the Processing sketch. Most graphics work in the sketch is first rendered off-screen, then manipulated and combined to create the final layer, so as a result the graphics card was not utilized as effectively as it could have been.

Here is a listing of the four brushes implemented for the demonstration:











1. Ripple Brush

This brush uses a cyan color, and fades to white as it approaches the center of the screen. It uses both bilateral symmetry and a sixfold radial symmetry, which makes it well suited for flower-like patterns. It draws a radial burst of dots around each of the twelve cursor positions (six radial points reflected across the x-axis), and continually shifts their position to orient them toward the center of the screen. With smoothing effects applied, this creates multiple overlapping lines which interweave to create complex patterns.

2. Converge Brush

This brush uses a dark indigo color, and draws lines which converge toward the center of the drawing. It has bilateral symmetry and eightfold radial symmetry. As the lines approach the edge of the screen, a noise effect is applied to them, creating a textured effect. Because all lines converge, it creates a feeling of motion and draws the viewer toward the center of the image.

3. Sand Brush

This brush uses a vibrant turquoise color, and like the ripple brush fades to white as it nears the center of the image. It draws a number of particles around the brush position; the size, number, and spread of these particles increases as the brush approaches the outer edge, creating a scatter effect. This brush uses sevenfold radial symmetry, but does not have bilateral symmetry applied, which allows it to draw spiral patterns which the other brushes cannot make.

4. Silk Brush

This brush uses a purple color and has the most complex drawing algorithm of the brushes. It generates nine quadratic curves originating from the position the stroke was started to the current brush position. The effect is like strands of thread pulled up from a canvas. The brush has bilateral symmetry but only threefold radial symmetry so that the pattern is not overwhelming. Because it creates such complex designs, it is well suited for creating subtle backgrounds behind the other brushes.

The Controllers and Receiver










Image credit: Yushan Ji (Cynthia) and Tarik El-Khateeb

Seed Controller – Physical Design

When considering the design and functionality of our controllers, we began with a couple of goals determined by very real limitations of our intended hardware, most notably the XBee transceiver and the 3-axis accelerometer. We knew we needed accelerometer data for our visuals, and in order to have reliable, consistent data, the base orientation of the accelerometer needed to be fairly standardized. Furthermore, the XBee transceiver’s signal strength drops severely when the line of sight is blocked, either by hands or by other physical objects. Taking this into consideration, we designed a controller that would suggest the correct way of being held. The single affordance we used to do this was an RGB LED that illuminates and signifies what we wanted to be the “front” of the controller.










Image credit: Tarik El-Khateeb and Phuong Vu.

Initially we had hoped to create a 3D-printed, custom-shaped controller (by amending a ready-made 3D model); however, after some experimentation and prototyping, we quickly came to the conclusion that this was not the right solution given the time constraints associated with the project. In the end, we decided to go with found objects that we could customize to suit our needs. A plastic soap dish became the unlikely candidate, and after some modifications we found it to be perfect for our requirements.

To further suggest controller orientation, we installed two momentary push-buttons that act as familiar prompts for how to hold the controller. This prevents the user from aiming it with just one hand. These buttons also engage the drawing functions of the software and allow for customization of the visuals.

The interaction model was as follows:

  1. Right Button Pressed/Held Down – Draw pixels
  2. Left Button Pressed momentarily – Change draw mode
  3. Left and Right Buttons held down simultaneously for 1.5 seconds – clears that user’s canvas.










Image credit:  Tarik El-Khateeb

Seed Controller – Electronics

We decided early on to use the XBee transceivers as our wireless medium to enable cordless control of our graphical system. A natural fit when working with XBees is the Arduino Fio, a lightweight, 3.3 V microcontroller that would fit into our enclosures. Using an Arduino meant that we could add an accelerometer, RGB LED, two buttons and the XBee without worrying about a shortage of I/O pins, as is the case when using an XBee alone. By programming the Fio to poll the momentary buttons, we could account for the duration of each button press. This allowed some basic on-device processing of data before sending instructions wirelessly, helping reduce unnecessary transmission. Certain commands like “clear” and “change mode” were handled by the controllers themselves, significantly increasing the reliability of these functions.
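A sketch of that on-device polling logic (pin handling omitted; the state machine below is our illustration, not the actual firmware):

```python
# Timing each press lets the controller distinguish a held draw
# button, a momentary mode press, and the two-button "clear" hold,
# so only meaningful commands reach the XBee.

HOLD_CLEAR_S = 1.5  # both buttons held this long clears the canvas

class ButtonPoller:
    def __init__(self):
        # time at which each button went down, or None if it is up
        self.pressed_at = {"left": None, "right": None}

    def poll(self, left_down, right_down, now):
        """Return the command to transmit, or None."""
        for name, down in (("left", left_down), ("right", right_down)):
            if down and self.pressed_at[name] is None:
                self.pressed_at[name] = now
            elif not down:
                self.pressed_at[name] = None
        left, right = self.pressed_at["left"], self.pressed_at["right"]
        if left is not None and right is not None:
            if now - max(left, right) >= HOLD_CLEAR_S:
                return "clear"
            return None
        if right is not None:
            return "draw"
        if left is not None:
            return "mode"
        return None

p = ButtonPoller()
print(p.poll(True, True, 0.0))  # just pressed: no command yet → None
print(p.poll(True, True, 1.6))  # held past the threshold → clear
```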

In the initial period of development, we had hoped to use the Xbee-Arduino API, as certain features seemed very appealing to us. But as the experimenting began, it became clear that, even though it was an API, there were still several low-level details that significantly complicated the learning process and interfered with our development. We made a strategic decision to cut our losses with the API and instead use the more straightforward, yet significantly less reliable, method of broadcasting serial data directly and parsing it on the other end in Processing, after the wireless receiver relays it. Here is an example of the data being transmitted by each controller:
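The actual packet format is not shown in this post, so the layout below is purely hypothetical; it only illustrates the broadcast-then-parse approach, with a delimited line per controller parsed on the Processing side:

```python
def parse_packet(line):
    """Parse one controller packet. The comma-separated layout here
    ('id,x,y,draw,mode') is a hypothetical stand-in, not the
    project's real wire format."""
    ctrl_id, x, y, draw, mode = line.strip().split(",")
    return {
        "id": int(ctrl_id),
        "x": float(x), "y": float(y),              # accelerometer tilt
        "draw": draw == "1", "mode": mode == "1",  # button states
    }

print(parse_packet("1,0.42,-0.13,1,0\n"))
```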




LightGardenControllerSchematic LightGardenControllerBreadBoard





Circuit diagrams for the Seed Controllers.
Wireless Receiver

In order to receive the wireless commands from both of our controllers, we decided to create an illuminated receiver unit. The unit comprises an Arduino Uno, an RGB LED and an XBee; it acts as a simple relay, forwarding the serial data received via the XBee to the USB port of the computer for the Processing sketch to parse. We used the SoftwareSerial library to emulate a second serial port on the Uno so we could transmit the data as fast as it was being received. In terms of design, instead of hiding the device we decided to feature it prominently in the view of the user: a pulsing white LED indicates that it serves a functional purpose, and our hope was for it to remind users that wireless transmission is occurring, something we take for granted nowadays.

LightGarden_Reciever_Schematic LightGarden_Reciever_BreadBoard





Circuit diagrams for the Wireless Receiver.


Branding strategy:

The LightGarden logo is a mix of two fonts from the same typeface family: SangBleu serif and sans serif. The intentional mix of serif and sans serif fonts is a reference to the mix and variety of effects, colours and brushes that are featured in the projections.

The icon consists of outlines of four seeds in motion symbolizing the four cardinal directions, four members of the group as well as the four main colours used in the visual projection.


Image credit: Tarik El-Khateeb

Colour strategy:

Purple, Turquoise, Cyan and Indigo are the colours chosen for the brushes. The rationale behind using cold colours instead of warm colours is that the cold hues have a calming effect as they are visual triggers associated with water and the sky.

Purple reflects imagination, Turquoise is related to health and well-being, Cyan represents peace and tranquility and Indigo stimulates productivity.


Sound plays a major role in our project. It is an indispensable element; without it the experience cannot be whole. Because the main theme of our project is to create a meditative environment, it was important to choose a meditative type of sound: one that enhances, rather than distracts from, the visual experience. We needed a sound that was organic, could be looped, and yet would not become boring to the participants over time.

In order to fulfill all of the aforementioned requirements, we decided to go with ambient music, an atmospheric, mood-inducing musical genre. The song “Hibernation” (Sync24, 2005) by Sync24 was selected as the background music. Using Adobe Audition (Adobe, 2014), we cut out the intro/outro parts of the song and beatmatched the ending and the beginning of the edited track so that it can be seamlessly looped.










Image credit: Screen captures from Adobe Audition

Sound was also used as a means of giving auditory feedback to the user of our “seed” controller: whenever a player clicks the draw button, a sound is played to signal that the action of drawing is being carried out. For this purpose, we employed the sound of wind chimes, known for inducing an atmospheric sensation, as used in Ambient Mixer (Ambient Mixer, 2014). In our application, the ambient song plays repeatedly in the background, whereas the wind chime sound fades in and out every time the player clicks and releases the draw button, allowing the wind chimes to fuse organically into the ambient music. To do so, we utilized Beads, a Processing library for handling real-time audio (Beads project, 2014). The Beads library contains several features for playing audio files and for generating timed transitions of an audio signal, i.e., sequences of changes in amplitude. When the draw button is clicked, the amplitude of the wind chime signal increases; conversely, when the draw button is released, the amplitude decreases.
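The fade logic reduces to a per-step gain ramp. Beads expresses this with timed envelope segments on a Gain object; the sketch below (with an assumed step size) only shows the underlying behaviour:

```python
def envelope(value, target, step=0.05):
    """Move a gain one step toward its target, giving the linear
    fade-in/fade-out behaviour described above. The step size is an
    assumption for illustration."""
    if value < target:
        return min(value + step, target)
    return max(value - step, target)

gain = 0.0
for _ in range(10):            # draw button held: chimes fade in
    gain = envelope(gain, 1.0)
print(round(gain, 2))          # → 0.5
```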


Case Studies

One: Pirates of the Caribbean: Battle for Buccaneer Gold

Pirates of the Caribbean: Battle for Buccaneer Gold is a virtual reality ride and interaction game at DisneyQuest, an “indoor interactive theme park”, located in Downtown Disney at the Walt Disney World Resort in Florida. (Wikipedia. 2014)

The attraction is 5 minutes long and follows a linear storyline in which Jolly Roger the Ghost Pirate appears on screen and tells the participants that their pirate ship is seeking treasure and that they can win this treasure by sinking other ships and taking their loot. The ship sails through different islands and comes across many ships to battle. 4:30 minutes into the ride, the pirate ghost re-appears and informs the players that they have to battle him and his army of skeletons in order to be able to keep any treasure they won by battling the ships. Once all the ghosts and skeletons have been defeated the final score appears on the screen.

The attraction can be experienced by up to five participants. One individual steers the pirate ship using a realistic helm, navigating a detail-rich, computer-generated 3D virtual ocean with islands and ships. Up to four players control cannons to destroy other ships; the cannons use wireless technology to “shoot” virtual cannonballs on the screen.

The attraction uses wrap-around 3D screens, 3D surround sound, and a motion-platform ship that fully engages the participants and makes them feel like real pirates on a real ship. (Shochet. 2001)


Two: Universal Studios Transformers: The Ride 3D

Transformers: The Ride 3D (Universal Studios, 2011) is a 3D indoor amusement ride situated in Universal Studios Hollywood, Universal Studios Florida and Universal Studios Singapore. The ride is an exemplary case study of how a thrill ride, when combined with visual, auditory and physical simulation technologies, can create an experience so immersive that it blurs the borderline between fiction and reality.

The setup of this attraction consists of a vehicle mounted on a motion platform that runs along a 610-metre track. Each vehicle can carry up to 12 riders, who, throughout the ride, are exposed to different kinds of effects, including motion, wind (hot air and air blasts), water spray, fog, vibration, and 18-metre-high 3D projections that show various Transformers characters (Wikipedia, 2014). Along the ride, participants have a chance to “fight along side with Optimus and protect the AllSpark from Decepticons over four stories tall” (Universal Studios, 2011).


Three: Nintendo Amiibo

The Nintendo Amiibo platform is a combination of gaming consoles and physical artifacts, which take the form of well-known Nintendo characters in figurine form (Wikipedia, 2014). The platform is one of many following the same trend: small NFC (near-field communication) equipped devices that, when paired with a console, add additional features to that console or game. NFC is a technology built up from RFID (Radio Frequency Identification), and most smartphones are now equipped with it (AmiiboToys, 2014).

The Amiibos have a small memory capacity (only 1–4 KB) that allows certain games to store data on the figurine itself (AmiiboToys, 2014). One example of this is the newly released Super Smash Bros game for the Wii U: the figurines “contain” NPCs (non-playable characters) that match the appearance of the character. These characters improve their abilities based on your own playing habits, and apparently they become quite hard to beat! (IGN, 2014)

The interesting aspect of the Amiibo line, and others like it, is the interaction between the digital representation of the character and the physical figurine itself. By using NFC, the experience seems almost magical, something that a physical connection would most likely ruin. There is a relationship between the player and the object, but also between the player and the on-screen character, especially when said character is aggravating the player because its skills are improving. The transparency of the technology helps dissolve the boundaries between the physical object and the fully animated character.


Four: Disney MagicBand

The fourth case study focuses not on an attraction in an amusement park but on a new one-billion-dollar wearable technology that has been introduced in the Walt Disney parks: the MagicBand. (Wikipedia. 2014)

The MagicBand is a waterproof plastic wristband that contains a short range RFID chip as well as bluetooth technology. They come in adult and child sizes and store information on them. The wearer can use them as their hotel room key, park ticket, special fast pass tickets, photo-passes as well as a payment method for food, beverages and merchandise. (Ada. 2014)

The MagicBand also contains a 2.4 GHz transmitter for longer-range wireless communication; the system can track the band’s location within the parks and link on-ride photos and videos to guests’ photo-pass accounts.

Thomas Staggs, Chairman of Walt Disney Theme Parks and Resorts, says that the band in the future might enable characters inside the park to address kids by their name. “The more that their visit can seem personalized, the better. If, by virtue of the MagicBand, the princess knows the kid’s name is Suzy… the experience becomes more personalized,” says Staggs. (Panzarino. 2013)


References & Project Context

3D printing:



Adobe. 2014. Adobe Audition. Retrieved from

Ambient Mixer. (2014). Wind chimes wide stereo. Retrieved from

Beads project. 2014. Beads library. Retrieved from

Sync24. “Hibernation.” Chillogram., 22 December 2005. Web. 01 Dec. 2014. Retrieved from


Case Study 1:

Disney Quest – Explore Zone. Retrived from

Shochet, J and Banker T. 2001. GDC 2001: Interactive Theme Park Rides. Retrived from
Wikipedia. 2014. Disney Quest. Retrived from


Case Study 2:

Inside the Magic. 2012. Transformers: The Ride 3D ride & queue experience at Universal Studios Hollywood. Retrived from

Universal Studios. 2011. Transformers: The Ride 3D. Retrieved from
Wikipedia. 2014. Transformers: The Ride. Retrieved from


Case Study 3:

AmiiboToys, (2014) Inside Amiibo: A technical look at Nintendo’s new figures. Retrieved from

IGN, (2014). E3 2014: Nintendo’s Amiibo Toy Project Revealed – IGN. Retrieved from [Accessed 10 Dec. 2014].

Wikipedia. (2014). Amiibo. Retrieved from


Case Study 4:

Ada. 2014. Making the Band – MagicBand Teardown and More. Retrieved from

Panzarino, M. 2013. Disney gets into wearable tech with the MagicBand. Retrieved from

Wikipedia. 2014. MyMagic+. Retrieved from


Project Inspirations / Context:

Project 3: bodyForTheTrees

by Hector Centeno, Jessica Kee, Sachiko Murakami, and Adam Owen

Project Description

bodyForTheTrees (bFTT) uses technological magic to create a dance experience to entertain and fascinate an audience. In this uncanny visualized interaction between the body and the digital, a performer moves her body and her actions are reflected on a multi-screen projection in which nonhuman figures move as she moves, while haunting sounds are also (optionally) triggered by her actions. The system is controlled by sensors on a wearable system on the dancer, which send data via a wireless transceiver to a computer network, which processes the data and then projects a visualization onto a surface. This interactive performance piece has strong visual impact; possible uses may include integration into live, large-scale musical performances, street performances, theatre, and dance.



List of components and materials used:

XBee transceiver (2)
Arduino Pro Mini (1)
Laptop computer (2)
Data projector (4)
Small plastic box
Li-Ion battery
Resistor (10k, for the flexometer)
Hookup wire (several feet)
Voltage booster (to boost battery voltage from 3.7 to 5V)
IR distance sensors (2)
Flex sensor (1)
IMU Sensor (1)
NeoPixel LED rings (2)
Black polyester and Velcro straps (for gloves)
Black cotton/lycra blend fabric (for knee sleeve)
Headband (for IMU)
Sewing machine

The system consists of a wireless sensor system attached to the body of a performer and a receiver attached to a computer. The wireless communication is done using XBee transceivers.

On the performer, four sensors are hard-wired and attached to wearables: two IR distance sensors (Sharp GP2Y0A21YK) on the gloves, one flex sensor (Spectrasymbol 4.5″) on the right knee in a sleeve, and one IMU (SparkFun LSM9DS0) on the top of the head, attached by a headband. All the power and data wires for the sensors are secured to the body of the performer and end at a small plastic enclosure attached at the waist. This enclosure houses a microcontroller (Arduino Pro Mini 3.3 V), one wireless transceiver (Digi XBee Series 1) and a 3.7 V, 2000 mAh LiPo battery. Two LED rings (Adafruit NeoPixel 12x WS2812 5050 RGB) are also attached to the back of both hands as a visual element. The colour and brightness of the rings are set by the computer system (via XBee transmission). The two LED rings and the IR distance sensors require 5 V, so a battery power booster (SparkFun) was also included as part of the circuit and as a connection point for the battery.

bFTT glove - palm / bFTT glove - back

The glove with IR sensor on the palm and NeoPixel ring on the back.

bFTT knee sleeve - outer / bFTT knee sleeve - inner / bFTT knee sleeve - with sensor

The knee sleeve, from top: Outer, inner, and inner with sensor visible.

bFTT wearable housing - front / bFTT wearable housing - rear / bFTT wearable housing - inner

The housing for the Arduino, battery, and XBee. Attached to the performer’s waist. From top: Front, back, and inner views.

Two computers are used as part of the system, each one displaying through two data projectors. On the first computer an XBee wireless transceiver is attached via USB using an XBee Explorer board. This computer is connected to the second via an ethernet cable to share the sensor values received from the performer system (see software below for details).

Circuit Diagram

bFTT circuit diagram revised

System Architecture

bodyForTheTrees - System Architecture


Code available on GitHub.

The Arduino Pro Mini microprocessor on the performer runs a simple firmware that reads the sensor values and prints them to the serial output (XBee) in raw form, except for the IMU sensor. In that case, the gyro, accelerometer and magnetometer values are integrated using the Adafruit algorithm that is part of the library made specifically for this device; the result of the integration is the yaw, pitch and roll in degrees. The data is sent packed as a message, with the values separated by the number sign “#” and ending with a line feed.
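On the receiving side, a message framed this way can be split back into numbers. A minimal sketch (the field order of two IR sensors, flex, then yaw/pitch/roll is assumed from the sensor list, not taken from the actual firmware):

```python
def parse_sensor_message(msg):
    """Split one wearable message, with values separated by "#" and
    terminated by a line feed, into floats. Field order (ir1, ir2,
    flex, yaw, pitch, roll) is an assumption for illustration."""
    return [float(v) for v in msg.strip().split("#")]

print(parse_sensor_message("312#287#1.0#182.5#-12.0#3.4\n"))
# → [312.0, 287.0, 1.0, 182.5, -12.0, 3.4]
```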

The software system running on the computers uses Processing for generating the visuals, and Csound for generating the sound.

The Processing sketch places 25 trees randomly along a 3D scene rendered using the P3D rendering engine. The trees are built using a recursive algorithm (based on Advanced Tree Generator by James Noeckel) that uses triangular shapes for the leaves textured with a bitmap image. The scene has a solid colour background and the 3D lighting is done through ambient light and three directional lights of different colours. The view is moved using the OCDCamera library. A Tree class was made to contain all the logic and animation.

Each tree is independently animated using Perlin noise to create a more natural feel. The whole tree sways back and forth and each leaf is also animated by slowly changing some of the vertices by an offset. The camera view also moves back and forth using Perlin noise to add to the feeling of liveliness.

The sensor data acquired serially through the XBee is parsed in the Processing sketch and smoothed using an adaptation of the averaging algorithm found on the Arduino website. The smoothing of the polar data from the IMU was done using the angular difference between each reading (instead of smoothing the angle data directly) in order to avoid the values sliding from 359 back to 0 degrees at that crossing point. Each data stream is smoothed at different levels to achieve optimal response.
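The wrap-around trick described above amounts to smoothing the signed angular difference rather than the raw angle; a minimal sketch:

```python
def angle_diff(new, old):
    """Smallest signed difference between two headings in degrees,
    so a jump from 359 to 1 reads as +2 rather than -358."""
    return (new - old + 180) % 360 - 180

def smooth_angle(current, reading, factor=0.2):
    """Step the smoothed angle a fraction of the angular difference
    toward the new reading, instead of averaging raw angles across
    the 0/360 seam (a sketch of the approach described above)."""
    return (current + factor * angle_diff(reading, current)) % 360

print(angle_diff(1, 359))         # → 2
print(smooth_angle(359, 1, 0.5))  # → 0.0
```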

The values from the IR sensors are used to change the shape of the trees by modifying two of the angles of rotation in the recursive algorithm. The values from the flex sensor are used as triggers to modify the shape of the trees in two ways: when the performer flexes her leg for a short time, the leaves of the trees explode into the air to then slowly come back together; when the performer flexes for 3 seconds, the whole forest morphs into an abstract shape by elongation of the leaf vertices.
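The two flex behaviours (a brief flex versus a sustained 3-second flex) amount to a threshold plus a hold timer. A sketch of that trigger logic in Python; the threshold value and event names are invented for illustration:

```python
HOLD_SECONDS = 3.0     # hold time for the forest-morph trigger
FLEX_THRESHOLD = 500   # hypothetical sensor value meaning "leg flexed"

class FlexTrigger:
    """Distinguish a brief flex (leaf explosion) from a sustained
    3-second flex (morph the whole forest), as described above."""
    def __init__(self):
        self.flex_start = None
        self.morphed = False

    def update(self, value, now):
        """Feed each reading with a timestamp; returns an event or None."""
        if value >= FLEX_THRESHOLD:
            if self.flex_start is None:
                self.flex_start = now
                return "explode_leaves"   # a short flex fires immediately
            if not self.morphed and now - self.flex_start >= HOLD_SECONDS:
                self.morphed = True
                return "morph_forest"     # sustained flex crosses the hold time
        else:
            self.flex_start = None
            self.morphed = False
        return None

t = FlexTrigger()
print(t.update(600, 0.0))  # explode_leaves
print(t.update(600, 1.0))  # None
print(t.update(600, 3.2))  # morph_forest
```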

The data from the IMU sensor is used to rotate the whole scene around the Y (vertical) axis while the same data is used to change the background colour by interpolating between green and blue (lerpColor function). The current colour value is also transmitted to the performer’s system to make the hand LEDs match it. The pitch angle is used to rotate the scene around the X axis, limiting the rotation angle so the camera only shows the top part of the forest and not beneath it.
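Processing's lerpColor blends two colours by a 0–1 factor; the same idea in plain Python. The endpoint colour values here are assumptions chosen to match the green-to-blue description, not the sketch's actual values:

```python
def lerp_color(c1, c2, t):
    """Linearly interpolate between two RGB colours, as Processing's
    lerpColor does; t is clamped to the 0..1 range."""
    t = max(0.0, min(1.0, t))
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

GREEN = (30, 160, 60)   # assumed endpoint colours
BLUE = (40, 70, 200)

def background_for_yaw(yaw_degrees):
    """Map the IMU yaw (0..360) onto the green-to-blue blend."""
    return lerp_color(GREEN, BLUE, (yaw_degrees % 360.0) / 360.0)

print(background_for_yaw(0))    # (30, 160, 60)
print(background_for_yaw(180))  # (35, 115, 130)
```

Sending the interpolated colour value back to the performer's system, as the project does, means the hand LEDs and the background stay in step without any separate colour logic on the wearable.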

All this sensor data is sent to the same sketch running on the second computer via OSC messages and locally to the Csound instance running in the background. The Csound instance runs only on one computer and the Processing sketch is the same for both with the exception that in the slave computer the variable “isMaster” must be set to false.
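The master/slave forwarding can be sketched as follows. The project sent OSC messages; this Python illustration uses a plain UDP datagram over the loopback interface instead, and the port number and payload format are invented for the example:

```python
import socket

IS_MASTER = True                   # set False on the slave machine, as in the sketch
SLAVE_ADDR = ("127.0.0.1", 9000)   # hypothetical address of the second computer

def forward_sensor_data(values, sock):
    """On the master, re-send each parsed sensor frame to the slave so
    both machines drive identical visuals from identical data."""
    if IS_MASTER:
        payload = ",".join(str(v) for v in values).encode()
        sock.sendto(payload, SLAVE_ADDR)

# Loopback demonstration: a 'slave' socket receives what the master sends.
slave = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
slave.bind(SLAVE_ADDR)
master = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
forward_sensor_data([181.5, -12.0, 3.2], master)
data, _ = slave.recvfrom(1024)
print(data.decode())  # 181.5,-12.0,3.2
master.close()
```

Forwarding the parsed data, rather than a rendered video stream, is what keeps the second machine's output full-resolution: each computer does its own rendering from the same numbers.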

The generative sounds are produced by Csound using a combination of synthesized sounds (vibraphone and wood blocks) and sound textures made by phase vocoder processing of short, single note instrumental sounds (a glass harmonica, a drum and tamtam). The sensor data received from the Processing sketch is used to synchronize the sound events and processes with the movements of the performer and the visuals.

Notes on process

From top left: sewing the gloves; forming the circuit; textile production centre; the wearable circuit in progress; Hector soldering; testing XBee range; circuit board prep; testing the Processing sketch communication; Sachi wears the wearables; an early iteration of the sketch; Adam contemplates the projection problem; last-minute connectivity problems; Jessica gets hooked in for rehearsal.

For this project, several sub-projects were involved: coding the Processing sketch, creating the wearable system (both textiles and the circuit), network communication, projection, and the performance.


We began with a Processing sketch of a mouse position-responsive recursive tree that we found online (James Noeckel's Advanced Tree Generator). We found the complexity of the movement and the responsiveness compelling and thought it would translate well to the movements of a performer. We altered the code and added triangle "leaves", 3D 'camera' panning, and other functionality as described in the software section above.

Camera panning
We paired the IMU with the camera-pan function. At first we used PeasyCam, but it proved to have too many limitations on rotation handling when integrated with the IMU input. OCDCam suited our purposes much better because it allowed greater control over rotation.

Calibrating Sensor Input
Another challenge was getting meaningful visualizations from the sensor input. We needed to negotiate the relationship between the movements in the Processing sketch and the movements of the body. We didn't want the reflection to be exact (body != tree), but there needed to be enough of a causal connection that the audience would understand that the two were linked. We also needed to calibrate the sketch so that visually appealing dance movements would be meaningful on-screen. At first, for example, the sketch responded only to very slow, slight movements with little visual impact, more of a tai-chi exercise than an attention-grabbing dance. Amping up the visualization's reactivity to sensor data, however, created a visual frenzy that quickly overwhelmed the viewer. We hope we found a good balance between the two. The analogy we found useful was to describe the sensor system as an instrument that the performer 'plays'.

Wearable System

When designing our wearable system, we had four objectives in mind: sensor functionality, durability, integration with the natural movements of the body, and aesthetics.

Sensor Functionality
The infrared distance sensors for the hands controlled the tree movement. The constraint with these sensors was that the hands needed to be close to a surface off of which the sensor could ‘bounce’ – the performer’s body, a wall, etc.

The flex sensor was placed on the front of the knee to trigger special visual functions. The challenge with the flex sensor was its fragility. During testing the base of the sensor was threatening to crack. We taped a piece of board to the base of the sensor to protect it.


The wearable system components in progress.

During testing we found that the raw sensor data produced very jerky movement in the tree and camera. We smoothed (averaged) the data to produce more fluid motion in the sketch. Too much smoothing, however, resulted in lag; we had to find the smoothing sweet spot between too-jerky and too-laggy.

A Durable Wiring System + Hub
For robustness and durability, we chose colourful hookup wire to connect the sensors to the Arduino and battery, housed in a small box that can clip onto a belt. Conductive thread may have been an option, but because we didn't have time to learn how to fabricate with this delicate material, we chose safe, reliable hookup wire. Future iterations may use lighter and more subtle wearable options. We used a Li-Ion battery for its thinness and capacity, and an Arduino Pro Mini for its small size. There were no issues with the Pro Mini.


Close-up of the housed hub. 

One challenge we faced with the wearable circuit was regulating the voltage for the needs of the different sensors. When the battery was at full charge, the IMU (a 3.3V device) was receiving too much voltage. We resolved this by rewiring the system so that the Arduino regulated the voltage, and then using a power booster for the sensors that required 5V.

Another challenge we faced was maintaining connectivity while the wearable was on the dancer. Several minutes before our performance, a connection came loose. We managed to fix the connectivity problem and the performance went smoothly.


Which connection came loose? We may never know.

Integration with the movement of the body
NeoPixels – The rings serve to strengthen the relationship between the performer and the sketch via the connected colour, as well as draw attention to the performer in a dark environment. If we had more resources, we would have added more RGB LEDs to the performer (perhaps an entire outfit!) so the colour connection would have been more apparent.

Textiles – None of us had any textile fabrication experience. We used scrap material to fashion the gloves and knee sleeve. We created a futuristic design using shiny black synthetic material, a sewing machine, and imagination. Velcro straps allow for easy removal of the gloves. Translating pattern to accurate sizing was a challenge we encountered, and there were many prototypes before we got it right. The knee sleeve was fashioned from stretchy cotton/lycra from an old tank top. We sewed in the flex sensor as it kept slipping out whenever the knee was bent. At first, we used a sparkly metal headband for the IMU sensor, but found that the metal interfered with the sensor data. We switched to a stretchy cotton/lycra blend for the headband and found the sensor worked much better.

We attached the exposed wires to the dancer using white vinyl tape, as we wanted to create a raw, futuristic look. Future versions might employ more elegant means (or even fabric/KT tape) as the tape we used didn’t adhere to the body very well.

Network Communication

There were two communication issues involved in the project: XBee-to-XBee and computer-to-computer.

XBee network
We had originally thought we would project the visualization onto a window and have the performer on the street, which presented issues with XBee line-of-sight and range. When we revised our setup to be indoors, the range problem we were encountering was resolved.

In order to project onto four surfaces, we needed two computers running the same Processing sketch and processing the same sensor information. We had several options, including:

A) Two XBee receivers attached to two computers receiving data simultaneously
B)  One XBee receiver attached to a computer that would then pass the sensor data to the second computer using an Ethernet cable.

We chose option B because it offered the option of expanding the system without needing extra XBees.

At first we tried simply passing a video capture of the Processing sketch to the second computer via Max/MSP. However, the frame rate topped out at 30 fps, and the sluggish, pixelated rendering on the second computer was unacceptable. We then switched to the OSC library for Processing to send the data as OSC messages, which proved nearly seamless.


Projection

Originally, we had planned to project the four screens onto the windows of 205 Richmond Street as a way of transforming the city into an entertainment space. At the last minute, however, we were unable to secure the required windows (located in the staff lounge), and the Duncan Street-facing windows of the Grad Gallery didn't suit our needs. We moved the project indoors, first into the Grad Gallery, where space limitations were an issue, and finally into a classroom that ended up being a perfect venue.

We used four projectors across a long wall for the greatest visual impact, two hooked up to each computer. We would have liked four short-throw projectors, to minimize the dancer's shadow interference and gain maximal image size within a limited space; the thrown shadow, however, ended up being a visually interesting part of the performance. We also would have liked one continuous sketch running across multiple projectors, but the computers did not handle the extended sketch well, and the four projectors all had different resolutions. We believe the staggered projected images were an effective compromise.


The first and third projections were from Computer 1; the second and fourth projections were from Computer 2.


On presentation night, we had two dance performances: one that demonstrated the integrated generative sounds, and one in which we used a popular song ("Heart's a Mess" by Gotye). We wanted to showcase the versatility of this system, and how it could be adapted for both avant-garde, improvisational dance and mainstream performance such as might be found at a large-venue concert.


Our venue was an OCAD classroom. With a bit of chair-arranging and lighting, it transformed from this into a fine performance space.

For the first performance, the dancer worked intuitively with sound and movement to generate interesting sounds and visualizations. For the second, we worked with a choreographer to create movement that balanced meaningful interaction with the visualization against the demands of contemporary dance.

Case Studies

Battlegrounds Live Gaming Experience

A Toronto startup, Battlegrounds Live Gaming Experience is an update on laser tag – an amusement technology that hasn’t seen any advancement in nearly twenty years. Drawing influence from first-person shooter video games, Battlegrounds uses emergent digital technologies to create a realistic combat experience.

Technology: Arduino, XBee, 3D-Printing

Interaction: Many to Many

Battlegrounds can support up to 16 users simultaneously in team-based or individual play. 3D-printed guns containing an Arduino and an XBee transmitter are paired with XBee-wired vests to send and receive infrared "shots". Hits are registered and transmitted to a central CPU controlling the game, which also logs ancillary information such as ammunition and health levels, accuracy, and points.

The controlling CPU sends back data to users to apply game parameters (gun stops working when ammo is out, vest powers down when health is depleted, etc).

Narrative: Nonlinear

Because this is a competitive, multi-user experience, the narrative of each round is subjective to the specific parameters of the game mode and individual actions of the players. Narrative is assignable through any number of modes such as Capture the Flag, Team Deathmatch or “Everyone for themselves” Single Deathmatch.

Relevance to bFFT: although this case study seems far from bFFT, both start from wearable systems employing Arduinos and XBees.

Toy Story Midway Mania!


Toy Story Midway Mania! is an interactive ride at three Disney theme parks (Walt Disney World, Disneyland, and Tokyo Disney Resort) that allows riders to board a train running throughout a circuit of midway-themed worlds, interacting with a train-mounted cannon.

Technology: Wireless Programmable Logic Controllers operating in real time; ProfiNET Industrial Ethernet; 150 Windows XP PCs; Digital Projection

Interaction: One to One to Many

Each train car is outfitted with four pairs of outward-facing seats and four mounted, aimable cannons. Riders aim these cannons at projected targets throughout the ride, while the train's position and speed and each cannon's pitch, yaw, and trigger information are relayed to a central controller. The controller analyses this information to track the vehicle and register cannon hits or misses, which are then relayed to more than 150 projection controllers to display the changing environments resulting from the players' accuracy and progress.

Narrative: Linear

The narrative of the amusement is linear in that it progresses along a track, through pre-designed animations and experiences. Players interact with these experiences in a consecutive fashion before being returned to their point of origin.

Relevance to bFFT: ‘Virtual’ interactions with the environment that affect the state of a projected image.

Disney Imagineering Drone Patent Applications (in progress)


Aerial Display System with Floating Pixels

Aerial Display System with Floating Projection Screens

Aerial Display System with Marionettes Articulated and Supported by Airborne Device

Disney’s Imagineers have recently submitted three patent applications for UAV-contingent amusement applications that, while as yet unrealized, demonstrate the capabilities of wireless synchronized technologies in public art and performance.

The first patent demonstrates a system of UAVs hovering in static relation to one another, with synchronized video screens acting as floating pixels to display a larger image. The second patent describes a different system where a series of UAVs suspend a screen onto which a projection is cast. Finally, the third patent describes a marionette system where synchronized UAVs attached to articulation points on a large puppet coordinate the movement of that puppet.

Technology: UAV, Projection, Wireless networking

Interaction: One to many

For all three patents, the interaction is based around synchronized movement, with no user involvement. All movement is pre-designed and coordinated by a central controller.

Narrative: Simple

Since there is no user involvement in the amusement, this application is more performance art than an interactive piece. Everything is pre-written, though there is a great deal of customization potential. This could be more accurately thought of as a new technique than a specific project, and thus, the applications of this technology could be used in larger, more interactive projects.

Relevance to bFFT: the use of wireless networking to coordinate visually stunning, larger-than-life, uncanny entertainment.

GPS Adventure Band


Morgan's Wonderland, a 25-acre amusement park in Texas geared towards children with special needs, their friends, and their families, has unique demands, not only from an amusement perspective but within its basic infrastructure. Technological solutions addressing some of the most pressing anxieties of parents of children with special needs are built into the park's structure from the ground up. RFID bracelets, worn by all guests entering the park, not only track guests physically but are also encoded with important medical and behavioural information so park staff can best serve guests.

In 2012, a contest to name the system resulted in the name “GPS Adventure Band”, despite GPS not being used.

Technologies: Wireless networking, RFID

Interaction: Many to One

Hundreds of guests’ information is relayed to one central server, with physical access points around the park for user information retrieval. Information is also accessible by park staff on mobile devices. The bracelet is also used to coordinate the emailing of specific photos to assigned email accounts, and acts as a souvenir.

Narrative: Debatably none.

This is not an amusement, but depending on how liberally one defines narrative, the access to information and peace of mind families gain through this interaction affects their own personal narratives as experiencers of the park as a whole.

Relevance to bFFT: tracking multiple users via RFID bracelets might be a method for expanding interaction in bFFT. A simple sensor might be embedded in each bracelet, with each bracelet linked to the movement of an object (such as a tree). Groups of people could thus move the forest together, playing with individual and group movements.

Project Context

This project’s focus was on creating eye-catching visuals that create surprise and delight from an uncanny interaction with technology. As such, the topic most relevant to this project seems to be live performance involving digital technology.

Dixon (2007) argues that the turn of the 21st century marked an historic period for digital performance, whose zenith was 'unarguably' reached in terms of the amount of activity and its significance, prior to a general downturn of activity and interest. Yet in the Summer 2014 issue of the Canadian Theatre Review devoted to digital performance, the editors assert that digital technologies continue to shape theatre practices, in particular by redefining theatre through moving it outside of traditional theatre spaces (Kuling & Levin, 2014). The issue details, among other topics, theatre experiments with social media (Levin, Rose, Taylor & Wheeler); game-space theatre (Filewood; Kuling; Love & Hayter); and the possibilities of new media performance for Aboriginal artists (Nagam, Swanson, L'Hirondelle, & Monkman). This was heartening to learn: the years since what Dixon names the 'zenith' of digital performance have seen so many tools emerge that make technology affordable and accessible to artists (such as Arduino and Processing) that it seemed unlikely theatre artists would totally lose interest in the digital. A brief review of the contents of the latest International Journal of Performance Arts and Digital Media, and the inclusion of papers on technologically mediated performance at the 2014 ACM Conference on Human Factors in Computing Systems (CHI), ACM's 2013 Creativity & Cognition conference, and other notable conferences, suggests that relevant theatre, dance, and performance projects are still being created today.

Other relevant projects, organizations, and initiatives appear in the reference list below. If digital performance is still thriving in Canada, perhaps the methods of our project would be relevant to the creators of such work.


Arizona State University. (n.d.). MFA in Dance. Retrieved December 9, 2014 from

Barkhuus, L., Engstrom, A. and Zoric, G. (2014). Watching the footwork: second screen interaction at a dance and music performance.  CHI ’14 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. p. 1305-1314. ACM: New York.

Battlegrounds (Ed.). (n.d.). Battlegrounds web site. Retrieved December 9, 2014, from

Brunel University London. (n.d.). Contemporary Performance Making MA. Retrieved December 9, 2014 from

Digital Futures in Dance (2013). Retrieved December 9, 2014, from

Dixon, S. (2007). Digital Performance: A History of New Media in Theater, Dance, Performance Art, and Installation. Cambridge: MIT Press.

Greenfield, D. (2008, October 10). Automation and Ethernet Combine for 3D Disney Attraction. Retrieved December 9, 2014, from

The Gertrude Stein Repertory Theatre (n.d.). Retrieved December 9, 2014, from

International Journal of Performance Arts & Digital Media, 9(2). (2013). Retrieved December 9, 2014, from

Kuling, P., & Levin, L. (Eds.). (2014). Canadian Theatre Review, 159. Retrieved December 9, 2014, from

Maranan, D.S., Schiphorst, T., Bartram, L. and Hwang, A. (2013). Expressing technological metaphors in dance using structural illusion from embodied motion. C&C ’13 Proceedings of the 9th ACM Conference on Creativity & Cognition. p. 165-174.  ACM: New York.

Morgan’s Wonderland (Ed.). (n.d.). GPS Adventure Band. Retrieved December 9, 2014, from

Noeckel, James. (2013). Advanced Tree Generator. Retrieved December 9, 2014 from

Rhizome. (2013). Prosthetic Knowledge Picks: Dance and Technology. Rhizome (online). Retrieved December 9, 2014 from

University of the Arts London (n.d.). MA Digital Theatre. Retrieved December 9, 2014, from

University of California, Irvine. (2012). Experimental Media Performance Lab. Retrieved December 9, 2014 from

University of Colorado. (n.d.). The PhD in Intermedia Art, Writing, and Performance. Retrieved December 9, 2014 from

University of Michigan. (n.d.). Department of Performance Arts Technology. Retrieved December 9, 2014 from

University of Salford. (n.d.). Digital Performance MA. Retrieved December 9, 2014 from

York University. (n.d.). Faculty of Fine Arts Job Posting. Retrieved December 9, 2014 from

Zara, C. (2014, August 27). Disney Drone Army Over Magic Kingdom: Patents Describe Aerial Displays For Theme Parks. Retrieved December 9, 2014, from

The City of Love – Wearables and More. Chen & Frank & Mehnaz


Ideation & Brainstorming                                                                                                                           

When we decided to create a wearable device for an amusement-park environment, our first thoughts turned to human emotion and human contact. Since Toronto is a city of immigrants, our first idea was to connect people with their loved ones on a different level: mapping a camera view of a person onto an object, with matching installations on both sides of the world for two people communicating through computers. When we found out that XBees cannot travel that kind of distance, we reshaped the idea around a short range while adding more love to it.

The next step was developing the idea of HUGGING in a meaningful manner, with a narrative behind it.



The installation takes its place on a Toronto street: a structure of white acrylic boxes representing the cityscape. Two projectors are positioned to project onto all four sides of the sculpture. Using mapping software, the projected images depict the cities known as the most romantic in the world, alongside an image of Toronto.

One block carries a flashing banner asking: "Can Toronto Be The City of Love?"

One person wearing a shirt with a sewn-in wearable device (XBee, LilyPad and conductive material) welcomes visitors, who wear a conductive shirt or necklace of their own. When a visitor hugs the person with the device, they close the circuit, and an identifying coloured LED lights up on the device-wearer's shirt. This colour represents the first person to hug. At the same time, bubbles of the same colour start floating up the structure's surface, representing it filling with love.

When a second person hugs, a new coloured LED lights up and a new colour starts floating up the structure. As more people hug, the structure collects more colours, representing more love. For people without the shirts, such as passers-by, we designed a necklace so they could share the experience. The shirts and necklaces could serve as a marketing feature or support a cause; as a result of public contribution, they could be key items to be sold or given away.


Necklace with added conductive fabric: we also used sheet copper to connect the two pieces of the necklace, giving it the flex needed to reach and close the circuit on the shirts.

3D Printing:

Before settling on the cityscape idea, we thought about making a small 3D-printed model of the actual sculpted piece.


Rhino 3D model representing HUG


Rhino 3D Sketch


Four views of Rhino 3D model



The City of Love is made of:

3 wearable shirts

LilyPad Arduino 328 Main Board

LilyPad XBee

2 XBee devices

JST LiPo battery connector

Arduino board

Conductive fabric

Conductive thread





How did we make it work?

We decided it would be better if the person wore the device, so we chose the LilyPad Arduino 328 Main Board, an XBee, and a LilyPad XBee to send and receive the wireless signal.

As this was our first time working with XBees, we needed to experiment with them in several configurations:

1. XBee–XBee: two XBees send and receive signals from each other.

2. XBee–Arduino: we used an XBee on a breadboard through the Arduino to send and receive signals to another XBee. We ran into a problem: we couldn't upload code to the Arduino while the XBee was connected, so we had to disconnect the two pins (TX and RX) before the Arduino could receive the code.

3. XBee–LilyPad Arduino & LilyPad XBee.



The first problem was that we couldn't send a signal from the Arduino to the XBee, though we could receive signals from the XBee to the Arduino. We had assumed the RX pin on the LilyPad should connect to the same pin on the LilyPad XBee, but after some discussion (which made him think a lot too), our TA Ryan suggested that the problem might be the wiring: the RX pin on the LilyPad should connect to the TX pin on the LilyPad XBee, and the TX pin to the RX pin. We tried it, and it worked. Another question was that the LilyPad Arduino 328 Main Board has no connector for a battery; it has six holes, and we weren't quite sure what they were for. After studying the board's schematic on SparkFun, we found the power hole and the ground. We then bought a JST LiPo battery connector and soldered it onto the LilyPad.

At the beginning of the project, we wanted to use a pressure sensor to trigger the signal for the interaction. After discussion, we came to think of the trigger not as a button but as an initiator, so it could be anything magical, like conductive fabric. At this point the project started to be more fun. We used conductive fabric as the buttons: one fabric circle connected to power and two others connected to ground. The ground-connected fabric was also linked to pins 7 and 8, with a resistor between the pins and the fabric.


When the fabric circle wired to pin 7 or pin 8 touches the fabric attached to power, the Arduino registers the signal and sends it to Processing to control the animation. We set up two colours, blue and yellow. We sewed the circuit onto a black microfibre shirt with conductive thread, creating the "base" shirt, and sewed two larger pieces of conductive fabric onto a blue shirt and a yellow shirt. When the person wearing the blue shirt hugs the person wearing the base shirt, the conductive fabric on the blue shirt covers the two fabric circles on the base shirt, closing the circuit, and the Arduino recognizes the HUG.

We sewed two LEDs, wired to pins 10 and 13, onto the base shirt to indicate whether the HUG registered. The sewing was hard work: it was very challenging to attach the LEDs to a synthetic material with no grip, and since the threads couldn't overlap, each connection had to run in a different direction.



After this step we started testing and found connection problems. For example, when the wearers' heights differed, the conductive areas on the coloured shirts didn't meet the circles, so we added more fabric to the yellow shirt for more coverage. With more testing we found the area of the body most likely to make contact when two people hug, and sewed the larger conductive fabric there.






In terms of the Arduino code, two main functions achieve the interaction between human hugging and the computers: one communicates with the Processing program, and the other detects signals from the different digital pins.

The principle of detecting different identities is the same as the simple example of lighting an LED by pressing a button. On our t-shirt, the conductive fabric acts as the button: opening or closing the circuit produces a LOW or HIGH state that the Arduino's digital pins can easily detect. Because each identity is wired to a different pin, reading the pins tells us which identity triggered the hug.
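The pin-reading idea can be sketched in Python (the real implementation is Arduino code; pins 7 and 8 and the shirt colours are as described above, while the function names are ours):

```python
# Map each digital input pin to the hugger identity it represents:
# pins 7 and 8, one per coloured shirt.
PIN_IDENTITY = {7: "blue", 8: "yellow"}

def detect_hugs(pin_states):
    """Given {pin: True/False} readings (True = circuit closed, i.e. the
    Arduino reads HIGH), return the identities currently hugging."""
    return [PIN_IDENTITY[pin] for pin, high in sorted(pin_states.items())
            if high and pin in PIN_IDENTITY]

print(detect_hugs({7: True, 8: False}))  # ['blue']
print(detect_hugs({7: True, 8: True}))   # ['blue', 'yellow']
```

One pin per identity keeps the protocol trivial; adding a third shirt colour would just mean adding another pin and another fabric circle.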


Arduino code

To achieve the collective love-bubble effect, we use communication between Processing and Arduino. Processing receives distinct values for each digital pin, so it too can distinguish identities. Dots of a single colour become active once a circuit has been completed; the rising effect of the increasingly colourful dots is achieved by decreasing their y-axis values, with a random factor added to vary the speed and keep the motion dynamic. Processing also plays music once it detects the first closed circuit.
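The rising motion described above (decreasing y, randomized speed) can be sketched like this. This is an illustrative Python version, not the Processing sketch itself; the canvas height and speed range are assumptions:

```python
import random

HEIGHT = 480   # assumed canvas height; bubbles start at the bottom

class LoveBubble:
    """A single rising dot: y decreases each frame at a randomized speed,
    matching the effect described for the Processing sketch."""
    def __init__(self, x, color):
        self.x = x
        self.y = HEIGHT
        self.color = color
        self.speed = random.uniform(1.0, 3.0)  # random factor varies the speed

    def update(self):
        self.y -= self.speed   # decreasing y moves the dot upward
        return self.y > 0      # False once the bubble floats off the top

bubble = LoveBubble(x=100, color="blue")
for _ in range(10):
    bubble.update()
print(bubble.y < HEIGHT)  # True: the bubble has risen
```

Giving each bubble its own speed is what makes a crowd of identically coloured dots read as a lively swarm rather than a rigid column.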


Processing code – Collective love effect

Collective love effect (test version without XBee)

To fit the projection to the surface of the structure, we used VPT to adjust the interface. To create the sense of a city, we simulated a neon-light effect on our slogan by applying random colours to the text.

Neon light effect

Creating the structure:

We envisioned this as a street installation in downtown Toronto: a large base structure of acrylic blocks covering roughly a 10′ × 10′ area and standing about 10 feet high, with images projected on four or more surfaces. A full realization could go further, for example with different interactive images on each surface in addition to the city pictures and the code projection.


Mapping with VPT





Bill of Materials

Assembly List

Label Part Type Properties
LED1 Blue LED type single color; color Blue; polarity common cathode
LED2 Blue LED type single color; color Blue; polarity common cathode
Part1 Lilypad Arduino Board type Lilypad Arduino
Part2 LilyPad XBee type Wireless; protocol XBee
R3 220Ω Resistor bands 4; pin spacing 400 mil; tolerance ±5%; package THT; resistance 220Ω
R4 220Ω Resistor bands 4; pin spacing 400 mil; tolerance ±5%; package THT; resistance 220Ω
U1 LIPO-1000mAh variant 1000mAh; package lipo-1000

Shopping List

Amount Part Type Properties
2 Blue LED type single color; color Blue; polarity common cathode
1 Lilypad Arduino Board type Lilypad Arduino
1 LilyPad XBee type Wireless; protocol XBee
2 220Ω Resistor bands 4; pin spacing 400 mil; tolerance ±5%; package THT; resistance 220Ω
1 LIPO-1000mAh variant 1000mAh; package lipo-1000



Case Studies: 

1- Pillow Fight Club Study

  • Interaction: person to person, person to object

When two people have a pillow fight, the interaction sends a signal that triggers the display of images.

  • Technology: sensing and display

They used the same technology as our project (XBees, Arduino, and Processing), creating images as the result of a wireless interaction.

  • Narrative: simple

It is a simple narrative which is already a life experience for many people.


2- Super Hero Communicating Cuffs

The Superhero Communicator Cuffs enable brave souls to call on their partners in a time of need. This tutorial demonstrates how to send and receive wireless signals without microcontrollers or programming. You will learn how to configure XBee radios, build a basic soft circuit, and work with conductive thread and conductive fabric.

How it works: each pair of cuffs has an electronic switch made of conductive fabric. When the wrists are crossed, a wireless signal is transmitted that activates the LED on your partner’s set of cuffs, signalling that you need Super Hero assistance! Since you’ll be making two pairs of communicator cuffs, this tutorial is great to do with a friend!
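The microcontroller-free part relies on the XBee radios' built-in I/O line passing, where an input pin on one radio is mirrored to an output pin on another. A configuration along these lines (typed in a terminal after entering command mode with `+++`) would set that up on Series 1 XBees; the specific values here (PAN ID, addresses, sample rate) are illustrative guesses, not the tutorial's settings, so check them against the XBee documentation.

```text
# Sender (the cuff with the conductive-fabric switch on pin D0)
ATID 3137   # shared PAN ID
ATMY 1      # my 16-bit address
ATDL 2      # partner's address
ATD0 3      # pin D0 configured as a digital input
ATIR 14     # sample the input every 0x14 = 20 ms
ATWR        # save settings

# Receiver (the cuff with the LED on pin D0)
ATID 3137
ATMY 2
ATDL 1
ATD0 4      # pin D0 as digital output, default low
ATIA FFFF   # act on I/O samples from any sender
ATWR
```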

  • Interaction: person to person, person to object
  • Technology: sensing and wireless communication
  • Narrative: simple


3- SMILE – Interactive Lights

SMILE was originally created for an all-night outdoor installation at Toronto Nuit Blanche in the historic Fort York park. Each cube is outfitted with a high-brightness RGB LED and an SLA battery, and is wirelessly programmable. Additionally, the cubes can form a mesh network, communicating with each other or receiving commands from a central computer.

  • Interaction: person to object, object to person
  • Technology: wireless communication
  • Narrative: simple


4- Dream Jammies

Icon: Chizuko Horman
Embroidery: Melody Litwin

Dream Jammies are pajamas that are aware of your body in several ways. They know whether you are standing or lying down, tossing or lying quietly. Dream Jammies also know your body temperature. This information is relayed to your partner’s iPhone and expressed on their screen in color, changing in real time.

As you lie down to sleep, the screen fades from green to blue, with the shade of blue reflecting your body temperature. As you roll around, the screen flickers red. By shaking the iPhone, your partner can reach out, causing the chest of your pajamas to vibrate: not pleasant while you sleep, but a perfect alarm clock. Not only can you keep in touch while living on opposite sides of the world, Dream Jammies also offer insight into how you sleep by capturing data as you snooze.







Hartman, K. (2014). Make: Wearable Electronics. Toronto, Canada.

Social Body Lab, OCAD

Super Hero Communicating Cuffs

Moment Factory

Floating City




We created a “Floating City”, a typical element of a fantasy world. It echoes familiar amusement park themes, in which everything is otherworldly, like a dream or a magical place. A floating city is a piece of fantasy that normally appears only in novels and movies; in this installation we used physical computing, sensors, and code to create an interactive version of one. We tried to change the normal function of a balloon by making it hover in the air like a magical city, turning the whole room into a fantasy world.

While brainstorming, we kept in mind the emotions visitors might experience while playing: anticipation, excitement, anxiousness, and happiness, among others. In this project we focused on creating an atmosphere of happiness, so that when people interact with our installation they feel a joy similar to being in an amusement park.




We used the balloon as the main visual of our installation because of its typical association with happiness. Balloons most often appear at happy events: parties, amusement parks, and so on.


  • Balloon
  • Helium
  • Paper
  • 3 bright LEDs
  • Plastic structure to hold the motors
  • 2 motors
  • 2 helicopter blades
  • 1 RGB LED ring
  • 1 ping (ultrasonic) sensor
  • 2 XBees
  • 2 Arduinos



Physical theory

To make the balloon hover in mid-air, a specific counterweight, matched to the weight of the balloon itself, must be added to balance the buoyant force of the helium. Once the weight is balanced, the two propellers can move the balloon forward or turn it around.
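The balance can be estimated with a simple buoyancy calculation. This is a back-of-the-envelope sketch, not the project's code: the density values are typical room-condition figures, and the balloon is idealized as a sphere.

```cpp
#include <cmath>

// Approximate densities at room conditions, in grams per litre (assumed).
const double RHO_AIR    = 1.20;
const double RHO_HELIUM = 0.17;
const double PI = 3.14159265358979;

// Net lift in grams for a spherical helium balloon of a given diameter (cm)
// carrying a payload (g). Positive: rises; negative: sinks; near zero: hovers.
double netLiftGrams(double diameterCm, double payloadGrams) {
    double radiusDm = diameterCm / 20.0;                          // radius in dm
    double volumeL  = (4.0 / 3.0) * PI * std::pow(radiusDm, 3.0); // 1 dm^3 = 1 L
    return volumeL * (RHO_AIR - RHO_HELIUM) - payloadGrams;
}
```

Under these assumptions, a roughly 3-foot (about 91 cm) balloon yields on the order of 400 g of gross lift, which is consistent with our experience that such a balloon could carry only about 300 g of structure.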



The Game

Players use two buttons to control the movement of the balloon. Each button triggers one of the propellers, turning the balloon either left or right; pressed together, the buttons drive the balloon forward. To win, the players must bring the balloon close to the LED ring mounted on the wall, which is paired with an ultrasonic sensor that measures distance. Once the balloon is close enough to the target, the LED ring changes from blue to cycling RGB colors.
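The control and win-condition logic can be sketched like this. The names, the enum, and the 30 cm threshold are our own illustrative choices, not values from the installation's Arduino sketches.

```cpp
// Possible drive states for the two propellers.
enum Command { IDLE, TURN_LEFT, TURN_RIGHT, FORWARD };

// One button alone spins one propeller, turning the balloon;
// both buttons together drive it forward.
Command driveCommand(bool leftPressed, bool rightPressed) {
    if (leftPressed && rightPressed) return FORWARD;
    if (leftPressed)  return TURN_LEFT;
    if (rightPressed) return TURN_RIGHT;
    return IDLE;
}

// The ultrasonic (ping) sensor on the wall reports distance in cm;
// inside the threshold the LED ring switches from blue to cycling RGB.
bool targetReached(int distanceCm, int thresholdCm) {
    return distanceCm > 0 && distanceCm <= thresholdCm;
}
```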

Function and Code (2 days)















Notes on Process

Brainstorming (Week 1)

We had a very long and intensive brainstorming session to explore ideas and inspirations. Since the theme of this project is about amusement parks, we started with one question: How do people feel while they are in the theme park?

We listed every emotion a person might feel, and from these emotions we transformed each feeling into an act. From these acts we created concepts, and later focused on developing the best of them into an interactive installation. Since we had so many ideas, choosing the one with the best potential for development was not easy. We finally settled on happiness, mesmerizing wonder, and fantasy as the main objectives of this project: FLOATING CITY.










Physical Theory and Material Preparation (7 days)

Because the physical theory is the most essential part of this installation, we did a lot of research on buoyancy and air flow, studied many similar projects, and ran our own experiments. To make the helium balloon hover in the air, the payload must very precisely equal the buoyant force. This is hard to test because the balance depends not only on weight but also on the motors' power and mass and the shape of the propellers; together they all affect the balance of the balloon.


The First Experiment

In the first experiment we used a 3-foot balloon, which can lift only about 300 g. We had two problems: one with the weight and one with the structure. We tried to mount the motors to the balloon but failed, because the structure was unbalanced and the fans were not strong enough to move it. For this experiment we made two prototypes of the structure and tried two motors. At 4 a.m. the balloon burst during testing. The whole process was risky because the balloon is very fragile and keeps losing helium over time, which creates a serious balance problem.


The Second Experiment

Even though we were planning to use a 4-foot balloon for the final installation, we went with a 3-foot balloon again because it costs less, and at this stage we were not sure whether the design would work. We bought motors in different sizes, but the challenge was always finding a motor that was both light and powerful enough. We tried different sizes of computer fans and even tested drone blades, but they all failed. We finally found the right type of motor and the right kind of blades (helicopter blades).


The Final Project

We got the 4-foot balloon on Friday night because we didn’t want to risk losing helium, which would affect the weight of the balloon. Since we had a late start, the last two days were very critical.

However, by that time the structure for the propellers was ready, so at this stage we started to think more about the look of the balloon and the dynamics of the game. We had many ideas to explore but not enough time; here are some of the ideas we experimented with.



Choice of motor




Structure for the motors
Model 3


Model 4
Final Model






Attempt to make the encoder sensor steer the balloon


We explored many ideas for extending the concept of the balloon. Before the floating city, we thought about representing the dynamics of the human face using paper, since it is the lightest material we could think of. We wanted to hide the balloon and cover it with three floating faces, each with a different personality that could be seen only through interaction. Then we considered the beauty of origami and how light can reveal the details of folded paper, but that also required far too much weight. So we moved on to the floating city. At first we planned to cover the whole balloon with three different cities, but again we had a limited weight budget.

When we got the balloon and started testing the concepts, we found it was impossible to cover the whole balloon because of the weight balance. Finally we decided to build only three groups of abstract buildings, with super-bright LEDs inside, so they looked like buildings at night with every room lit. We also wanted to take the concept further and cover the ceiling of the room, but because of time and cost we decided not to.





Final Project














Case Studies

1.RC Car


Interaction: person to object, object to object

Technology: radio control, mechanisms

Narrative: simple

The RC car is a classic toy that can be played with both indoors and outdoors. Using radio control, players steer the car without touching any wires. This toy inspired many of our ideas, and there are two main directions in which to build on it. The first is to change the object itself: in our installation we changed the car into a balloon, which is more challenging to control. The second is the design of the controller: we added a simple game to give the movement a purpose, helping players immerse themselves in the experience. In terms of interaction, more complex and challenging games could be designed along these lines, using sensors (for example, ultrasonic, pressure, or motion sensors) to let players interact with each other, adding object-to-object and even person-to-person interaction.

To transform the whole room into a fantasy world, we developed this example by changing the object into a balloon in the shape of a floating city, and we added a simple game to turn it into an installation where people feel happy and mesmerized.


2.Bernoulli Floating Ball


Interaction: object to object

Technology: mechanisms, physics

Narrative: simple

As the name suggests, it uses a tube and a fan to create a moving stream of air that keeps a light ball suspended in mid-air, demonstrating one of the predictions of Bernoulli’s equation.

The principle behind this installation is that, because the air in the jet is moving, the pressure there is lower than in the surrounding still air, so the ball stays trapped in the stream.
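For reference, the prediction at work is Bernoulli's principle: along a streamline of steady, incompressible flow,

```latex
p + \tfrac{1}{2}\rho v^2 + \rho g h = \text{constant}
```

so where the air moves faster (larger $v$), its static pressure $p$ is lower than in the surrounding still air, and a ball drifting out of the jet is pushed back toward it.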

Building on this principle, the installation could be made more interactive, with each fan controlled separately. The link between this installation and our project is the concept of floating: it gave us the inspiration, which we then developed into our own floating balloon.


3.Floating Water Sport Park


Interaction: person to object

Technology: physics

Narrative: simple

The floating playground is a great place to relax and have fun on summer days. This floating water sport park is made by Wibit Sports, a German watersports company. The playground has many attractions, for example a high jump, a swing, a bridge, a balance beam, and so on. It transforms the surface of the water into a playground.


4.Cyclone Arcade Game


Interaction: person to object

Technology: sensor

Narrative: linear

This arcade game detects the position of a moving light. Pressing the button stops the light; if it stops in the middle of the circle, the player gets a high score. The principle is similar to our project: the result is judged by distance, that is, by how close the player gets to the target light.



5.Drone

Interaction: person to object, object to object

Technology: sensor, physics, mechanism

Narrative: nonlinear

Drones are one of our sources of inspiration. A drone can be built with an Arduino and a combination of sensors that receive data from the controller and let it move on its own. The theory and mechanism are similar to our project: propellers lift the weight and control the direction. It is a typical example of a wireless device closely tied to the Arduino, and this type of drone can be used in many ways because of its small scale and dexterity.






















