Author Archives: David Oppenheim

Strange Networks (David Oppenheim)

Prototype for a network of analog and digital flames

This prototype combined several networks to explore the relationship between analog and digital objects and the use of fire as an interface element.

I followed an iterative approach to research, ideation, design and development, outlined in the documentation that follows. 

1. Context Research

This piece was inspired by (and ended up re-mixing) Randall Okita’s short film, “No Contract.”[1]


Photo credit: screenshot, “No Contract” (Randall Okita)

An excerpted portion of the film’s synopsis describes how it was made:

“All flames in the video are authentic and were recorded live, rather than created digitally. A single performer was lit on fire in front of an audience and performed a 30-foot wire-assisted jump while burning.”

I was fascinated by the imagery of the two bodies, engulfed in flames, moving towards each other, but I was also captivated by the scenes in which we watch an audience watching the performer get lit on fire. I wanted to change the relationship between the audience and the imagery in my remix of Randall’s video.

Design inspiration for this project was also drawn from “Lightweeds” (2005) by Simon Heijdens.[2]

Photo credit: Simon Heijdens

Photo credit: Simon Heijdens

As Heijdens says:

“Lightweeds is an ecosystem of living digital plants that overgrow the man-made space, moving and growing directly according to outdoor weather patterns.”

Weather sensors bring data from the outside world into the museum and Heijdens’ digital plants respond in real-time. They also respond to people passing by.

Lightweeds, like other data-driven installations that bring the outside world inside, provided conceptual inspiration for including oceanic (sea-level rise) data as an input to the interactive experience.


2. Conceptualization

When I watched “No Contract” for the first time several years ago, I sat mesmerized by the scene where two burning bodies seemed to hurtle through space in slow motion, moving towards each other but never colliding. I knew that Okita (a collaborator of mine) had wanted to explore themes of urgency and isolation in his film, but that’s about all I knew of his thematic intentions.

Design Considerations

For this remix I wanted to play with the idea of a group of people watching this poetic imagery but situate the audience in a different relationship with the material by implicating them in a network of interactions.

While I was thinking of acts of self-immolation in the face of climate change while making this piece,[3] I wanted the audience to reach those themes on their own, preferring the risk that they wouldn’t make those connections (or make other ones) to being too literal or communicating my point-of-view too loudly through didactic design.

Screenshot of a portion of my design notebook

Screenshot of a portion of my design notebook

I eventually settled on a core interaction: lighting a match to light a candle. The intention was that a simple interaction with analog materials would connect the audience more intensely to the screen, where bodies on fire were moving through space towards each other and, at the moment of lighting the match, towards the participant.

I also wanted to bring the outside world in, and chose to contrast the fire with water, specifically the world’s rising oceans. More on that below.

My main research questions were:

What does it feel like to light a match, then a candle, and then use the flame as an interface element to control this durational video (an excerpt from Okita’s No Contract)? How can a series of small interactions with analog objects (wood, fire) in the physical world accumulate and ultimately engender a feeling of connection with an abstract, two-dimensional digital object (time-based media)?

Technologies and Materials Used

I chose an Arduino microcontroller to detect the act of lighting a match and to measure the change in light values as the participant lit candles and blew them out. The sensor values were sent over the network via OSC into TouchDesigner (TD), where I manipulated the video and sound. The intention was to bring the livestream of the ocean into TD as well; however, I came up against some limitations, described below.
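To make the Arduino→TouchDesigner link concrete: an OSC message for a single sensor reading is just a null-padded address string, a type-tag string, and a big-endian float. The sketch below packs such a message in Python; the `/light` address and the reading value are illustrative assumptions, not the project's actual naming.

```python
import struct

def osc_message(address: str, value: float) -> bytes:
    """Pack a single-float OSC message: padded address, ",f" type tag,
    then a big-endian float32. "/light" is a hypothetical address."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary
        b += b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# e.g. one light-sensor reading, as it would travel over UDP to TD's OSC In
msg = osc_message("/light", 512.0)
```

In practice the microcontroller side would build the same bytes (e.g. with an Arduino OSC library) and TouchDesigner's OSC In would decode them; this sketch only shows the wire format.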

3. Prototype presentation and documentation

Location: Room 510 at 205 Richmond St. West (OCAD U)

Installation dimensions: Approximately 5’ wide by 10’ deep

Number of participants: Single or Multiple

Hardware: 1 x laptop, 1 x Arduino Nano 33 IoT, 1 x light sensor, 1 x breadboard, 1 x micro USB cable, 1 x 9V battery, 2 x external monitors

Software/language: Arduino IDE, TouchDesigner, Python

Installation Design:

I repurposed a box from a previous group project (Emotive Objects) and used it to hide the electronics and provide a place for the sensor to be surrounded by candles. I covered the cutouts that had originally been positioned in opposite corners and fashioned a new cutout in the centre of the box.

The sensor was extended from the breadboard, threaded through the cutout and taped down so that it sat as unobtrusively as possible.

A 9V battery provided external power to the microcontroller, allowing the entire prototyped circuit to stand alone (separate from the laptop) and talk to TouchDesigner running on the laptop.


Job type: laser cut

Box size: 12”x 12”

Material: Baltic Birch

Material Thickness: 6mm

Network Diagram and User Experience Description

The following section outlines the prototype user experience that was demonstrated during the December 6th critique.

The network was as follows:


Network diagram

The 5’ x 10’ space was set up with a table covered in black fabric as the central point of interest. On one end of the table were a monitor and six unlit candles; on the other, the black box with the embedded light sensor, surrounded by six candles. Some loose matches and a matchbox were placed next to the box, alongside a champagne glass to hold the discarded matchsticks. One of the candles was lit and a few burnt matches were floating in the champagne glass (water was added so that the burnt matches would make a sound when discarded into the glass).

Opposite the table, on the periphery of the installation space, was a smaller table with an iPad and Bluetooth speaker. A livestream of the ocean was displayed on the tablet with the sound coming through the external Bluetooth speaker.

The opening state of the installation was designed so that the video of the bodies on fire was glitching but would shift into a smoother playback state as candles were lit. As more candles were lit, the playback speed increased up to 1.5 times normal. If the candles were blown out, the video would slow down and eventually stop. If light values in the room fell below the opening state, the video would (in theory) have moved into a reverse playback state.
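The mapping described above can be sketched as a single function from light level to playback rate. This is a minimal illustration, not the actual TouchDesigner network: the baseline and full-brightness values, the linear ramp, and the fixed −1.0 reverse rate are all assumptions.

```python
def playback_rate(light: float, baseline: float, full: float) -> float:
    """Map a light-sensor reading to a video playback rate.

    Below the opening-state baseline the video reverses; at baseline it
    stalls (rate 0, the 'glitching' opening state); between baseline and
    'full' (all candles lit) the rate ramps linearly up to 1.5x forward.
    All thresholds here are illustrative.
    """
    if light < baseline:
        return -1.0                      # reverse playback (in theory)
    span = max(full - baseline, 1e-9)    # guard against a zero-width range
    t = min((light - baseline) / span, 1.0)
    return 1.5 * t                       # 0 .. 1.5x forward
```

In TD this logic could live in a CHOP expression or a small Python callback driven by the OSC input; the point is only that one monotonic mapping covers all four states (reverse, stopped, smooth, accelerated).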


User flow diagram




Video documentation of user experience (YouTube)

Link to code on GitHub (TouchDesigner file, Arduino code)

If I were to take this prototype forward, the livestream of the ocean would be projected onto the outside of the installation and respond to the audience milling about outside the core space, perhaps creeping up the installation’s exterior walls, the sound of the ocean getting louder as time passed and more people gathered. For this prototype it simply formed an ambient layer that could not be interacted with. Attempts to bring the stream into TouchDesigner were unsuccessful: I was able to bring a YouTube stream in, but unable to prevent the stream being interrupted by commercials. Initial research suggests that a public RTSP video stream might be the best way to proceed going forward.

4. Next Steps

This version 1 prototype showed me that there is something to the interaction of lighting a match and using the flame as an interface element. The first thing I would do if I were to take this forward, however, is several rounds of user testing, to get a better sense of the sensations, emotions and thoughts that the installation in its prototype form was engendering in participants. I would start by asking people to talk aloud as they moved through the piece, and then follow up with a series of open-ended questions. I would compare their perceptions with my intentions and tweak the design until I found the sweet spot where interactions evoked affective responses (autonomic, pre-subjective, visceral and preceding emotional states, or as Anable writes, “…forces that inform our emotional states”[4]) and some (but not all) participants thought about what it means to perform an act of protest such as self-immolation, and what it means to live in a world where they let others perform the act on their behalf.





[3] For example, Wynn Bruce, who set himself on fire outside the United States Supreme Court in April 2022.

[4] Aubrey Anable, Playing with Feelings: Video Games and Affect (Minneapolis: University of Minnesota Press, 2018), xvii.



Sketch #5 – David Oppenheim

Unity + Arduino

Moving Unity blocks with Arduino

Moving Unity blocks with Arduino

For this sketch I wanted to start getting more comfortable with Unity and Arduino. Working from a blog post by Erika Agostinelli, I created a simple Unity scene and was partially successful in getting serial data into the scene to control the movement of a block.

The following video shows the Arduino sketch with both buttons outputting correctly to pins 2 and 8:

Video (YouTube)

This video shows the block moving right (but failing to move left):

Video (YouTube)

A fair amount of debugging failed to solve the issue, and so I have a cube that moves right but not left.
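For debugging a one-direction-works bug like this, it can help to isolate the command-parsing step from Unity entirely. The sketch below is a hypothetical version of that logic in Python (not the actual Unity/C# code): it assumes a made-up protocol where the Arduino prints 'R' for the right button and 'L' for the left, and shows one classic culprit, the trailing newline that `Serial.println` appends, which can make a second string comparison silently fail.

```python
def direction(cmd: str) -> int:
    """Translate one serial line into a horizontal step for the block.

    Hypothetical protocol: 'R' = right button, 'L' = left button.
    Serial.println sends "R\r\n", so the line must be stripped before
    comparison -- forgetting this is one common way a branch like
    cmd == "L" never matches even though the data looks right in logs.
    """
    cmd = cmd.strip()   # drop the trailing "\r\n"
    if cmd == "R":
        return 1
    if cmd == "L":
        return -1
    return 0            # unknown command: don't move
```

Testing the parser in isolation like this (feeding it exactly the bytes the Arduino sends) narrows the search to either the serial layer or the Unity movement code.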

Code, including Arduino sketch and Unity project (GitHub)

*I decided to add a tower of blocks to see what it felt like to topple them with my simple controller. I’m not sure whether it was connected to this change, but I subsequently got an error from Unity saying the port was not open, and I was unable to resolve the issue.

Sketch #3: Physi-digital Ball Drop, part 2 — David Oppenheim

For this sketch I wanted to go back to my first sketch and add state-machine logic so I could decouple the virtual ball from the tracked object. I also wanted to add poseNet functionality alongside the existing COCO model functionality, to play with two different types of interaction in the same sketch: holding a physical ball, and interacting with just your hands.

Video (YouTube)

Code (on p5.js)

Sketch #1: Physi-digital Ball Drop — David Oppenheim

For this first sketch I wanted to explore ml5.js and COCO to play around with the concept of merging an analog object with its digital counterpart. I was partially successful.

Intended user experience: 1) when the user throws a physical tennis ball in front of their camera, a virtual ball appears and follows the physical ball as it is thrown up and down. When the physical and virtual balls (tracking together) reach a certain height, a “floor” is drawn; and 2) the virtual ball takes on a life of its own, i.e. it continues to be drawn to the screen, now moving independently from the physical tennis ball, and drops from its height onto the “floor,” bouncing until it comes to a stop.

Results: I was able to get COCO to recognize the analog object (physical tennis ball) and then draw a virtual ball to the screen at the position of the physical tennis ball, drawing a “floor” (rectangle) when the balls reached a certain height. I was unable to decouple the virtual ball from the tracked object and achieve the second part of the intended user experience.
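The missing second part is essentially a two-state machine: follow the tracked ball until it crosses the height threshold, then detach and simulate gravity with damped bounces. Here is a minimal Python sketch of that logic (the original sketch was in p5.js; coordinates, gravity and damping constants are illustrative, with y increasing downward as in screen space).

```python
# States for the virtual ball: follow the tracked tennis ball until it
# rises past a height threshold, then detach and fall under simple physics.
TRACKING, FREE = "tracking", "free"

class VirtualBall:
    def __init__(self, floor_y=400.0, release_y=100.0):
        self.state = TRACKING
        self.x, self.y = 0.0, floor_y
        self.vy = 0.0
        self.floor_y = floor_y      # y of the drawn "floor" (y grows downward)
        self.release_y = release_y  # height threshold that triggers release

    def update(self, tracked_x=None, tracked_y=None, gravity=0.8, damping=0.6):
        if self.state == TRACKING:
            # Follow the detected tennis ball (e.g. a COCO detection box centre).
            self.x, self.y = tracked_x, tracked_y
            if self.y <= self.release_y:   # high enough on screen: detach
                self.state = FREE
                self.vy = 0.0
        else:
            # Free fall with damped bounces off the floor.
            self.vy += gravity
            self.y += self.vy
            if self.y >= self.floor_y:
                self.y = self.floor_y
                self.vy = -self.vy * damping
                if abs(self.vy) < gravity:  # too slow to bounce: come to rest
                    self.vy = 0.0
```

Called once per draw frame, this decouples drawing from tracking: in the TRACKING state the detector drives the position, and in the FREE state the detector's output is simply ignored.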

Video of the working components (part 1 of the user experience) (YouTube) 

Code (on p5.js)