Digi-Cart 1.0

Experiment 3 – Katlin Walsh

Project Description 

While interactive media content displayed within galleries has been updated within the last 5-10 years, presentation formats for tradeshows have not. Digi-Cart brings an adaptive presentation style to the classic concept of a tool cart. Robust building materials and pegboard construction allow corporations to adapt their layout and presentation style to reflect their current corporate event.

Digi-Cart features a basic controller layout which can be overlaid with a company's vinyl poster cutout to create an interactive presentation that can be facilitated by an expert or self-guided. Corporations are encouraged to update their digital materials and create animated graphics to capture audience attention.


Skull Touch

S K U L L  T O U C H – An interactive skull


Skull Touch
An interactive skull that reacts to people's touch and responds with spooky sounds at different amplitudes and frequencies.

Kate Hartman & Nick Puckett


This project explores different states of touch, such as no touch, one-finger touch, two-finger touch, and grab. Capacitive sensing makes it possible to detect these different states.
When I think of the term "tangible interface", the first thing that comes to mind is a tactile interface that anyone can feel, touch, and interact with. Why the scary theme? It was Halloween time, hence the spooky skull.

Github link: https://github.com/Rajat1380/SkullTouch


I started with the idea of using a pressure sensor and a touch sensor, but then realized that only two states could be obtained, and I was not satisfied with that. So I started looking for ways to get more outputs.
I learned about capacitive touch sensing, through which any surface can become a sensor. After watching this video by DisneyResearchHub, I realized this was what I had been trying to do initially. Disney developed its own hardware to detect different touches, and the information is not open to the public. So I started looking for alternatives and found StudioNAND/tact-hardware, which provides all the information needed to build the capacitive sensor. I have always wanted to work with audio and control it with different input methods, and this project gave me the push to go for it. As the input device, I chose a skull, since the project was happening around Halloween.

The Process

I started with the circuit setup and code to get capacitance values for the different touches. The parts list provided by StudioNAND for a low-budget capacitive sensor is given below.

1× 1N4148 diode
1× 10 mH coil

1× 100 pF capacitor
1× 10 nF capacitor

1× 3.3 kΩ resistor
1× 10 kΩ resistor
1× 1 MΩ resistor

I got all the components except the 10 mH inductor; I used a 3.3 mH inductor instead of the recommended one and proceeded with the circuit setup.
I had to install the TACT library for both Arduino and Processing to run the code.

Arduino Circuits



Prototype 1



The output was very distinctive between no touch and a one-finger touch, but not between one-finger and two-finger touches. I figured this out very late in my project. It was happening because of the 3.3 mH inductor: the range I was getting was too narrow for distinctive touches. I tried to wind a 10 mH inductor myself with a hollow cylinder and copper wire, but I was not able to get near 10 mH, so I proceeded with the current code.

Since I was planning to control audio with the skull as an input device through different modes of touch, I jumped directly into Processing to couple the capacitance value with the amplitude and frequency of a scary sound, giving the user a spooky experience.
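A minimal sketch of that coupling, bucketing the raw capacitance reading into touch states and mapping each state to an amplitude. The threshold and amplitude values here are illustrative placeholders, not the ones used in the project (which came from watching the TACT sensor's actual readings):

```cpp
// Classify a raw capacitance reading into one of the four touch states.
// Thresholds are illustrative, not the project's calibrated values.
enum Touch { NO_TOUCH, ONE_FINGER, TWO_FINGER, GRAB };

Touch classifyTouch(long capacitance) {
    if (capacitance < 100) return NO_TOUCH;    // near-baseline reading
    if (capacitance < 300) return ONE_FINGER;
    if (capacitance < 600) return TWO_FINGER;
    return GRAB;                               // full-hand contact
}

// Playback amplitude (0.0 - 1.0) for the spooky sound, one level per state.
float amplitudeFor(Touch t) {
    switch (t) {
        case NO_TOUCH:   return 0.0f;
        case ONE_FINGER: return 0.4f;
        case TWO_FINGER: return 0.7f;
        default:         return 1.0f;
    }
}
```

In the actual project the readings were narrower than hoped because of the 3.3 mH inductor, which is why the one-finger and two-finger bands overlapped.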

Final Setup


Challenges & Learnings

  1. Libraries, and how and where to install them, were a hard lesson in this project. The TACT library was created in 2014 and is no longer supported on newer Arduino microcontrollers, so I had to use an Arduino Uno.
  2. I was getting noise in the capacitance output even with no touch. I tweeted at the developer of the TACT library and got a reply from him. He pointed me to controlling sound via microphone input instead. I still find this hard to understand; someday I will.
  3. Sourcing components was hard, as I was not able to get all of them locally. I tried to craft the inductor myself and it was not successful. It is part of learning, and it is difficult to accept dead ends.


StudioNAND (2014) tact-hardware. Accessed 28 Oct. 2019. [https://github.com/StudioNAND/tact-hardware]

Tore Knudsen (2018) Sound Classifier tool. Accessed 29 Oct. 2019. [http://www.toreknudsen.dk/journal/]

Tore Knudsen (2018) Capacitive sensing + Machine Learning. Accessed 29 Oct. 2019. [http://www.toreknudsen.dk/journal/]

Tore Knudsen (2017) SoundClassifier. Accessed 29 Oct. 2019. [https://github.com/torekndsn/SoundClassifier]

Tore Knudsen (2017) Sound classification with Processing and Wekinator. Accessed 29 Oct. 2019. [https://vimeo.com/276021078]

DisneyResearchHub (2012) Botanicus Interacticus. Accessed 30 Oct. 2019. [https://www.youtube.com/watch?v=mhasvJW9Nyc&t=49s]




Musical Instrument Without Touch

Jignesh Gharat

Project Description:

Hover Beat is an interactive musical instrument, played without touch or physical contact, installed in a controlled environment with a constant light source. The project aims to explore interactions with a strong conceptual and aesthetic relationship between the physical interface and the events that happen in the form of audio output.


A project’s potential radius of interaction is usually determined by technical factors, be it simply the length of a mouse cord or the need for proximity to a monitor used as a touch screen, the angle of a camera observing the recipient, or the range of a sensor. However, the radius of interaction is often not visible from the outset—especially in works that operate with wireless sensor technology.

In this project, the attempt is not to mark out the radius of interaction or spatial boundaries at all, so that it can be experienced only through interaction.

Explorations & Process:

I started exploring different sensors that could be used for controlling sound: a sound detection sensor, a flex sensor, capacitive DIY sensors using aluminum foil, and pressure sensors, finally settling on a light sensor to make the interaction invisible. The user doesn't see or understand how the instrument actually works, which opens up many possibilities to interact with the object; they explore and learn while interacting.


Flex Sensor | Arduino Uno or Arduino Nano


Light sensor LDR | Arduino Uno | Resistor


Using a glass bowl to calibrate the base sensor reading in the DF 601 studio environment at night.

How does the sound actually change?

The data coming from the Arduino and the LDR is used in Processing to control the playback speed and amplitude of the sound. A steady reading, taken under the constant light level detected by the LDR, is used as a benchmark.

Libraries in Processing:
import processing.sound.*; (for simple sound playback)
import processing.serial.*; (the Processing serial library)


I started experimenting with 8-bit sounds, vocals, and instruments. The sounds changed their playback rate and amplitude based on the amount of light received by the LDR sensor. To minimize noise and improve clarity in the ever-changing sounds, the best option was to work with clean beats, so I decided to work with instrumental music. The interactions mostly used hands, so I researched musical instruments that are played with the hands and make distinct, clear beats. The Indian classical instrument tabla, a membranophone percussion instrument, was my inspiration for the form and the interaction gestures.


The form is inspired by the shape of the tabla. A semi-circular glass bowl defines the boundary, a starting point for measuring readings, and limits the interaction to a radius around the LDR sensor. The transparent glass actually confuses users and makes them curious about how it works. The goal was to come up with a minimal, simple, and elegant product that is intuitive and responsive in real time.


Exhibition Setup:

The installation works only in a controlled environment where the light quality is constant, because a base reading is calibrated and used as a benchmark to change the sound.


Experiment Observations:

The affordances were clear enough. Because of the sound playing, users got a clue that the object had something to do with touching or tapping, but on interacting they quickly found out that it is hovering at different heights over the object that manipulates the tabla sounds. People tried playing with the instrument. People with some technical knowledge of sensors were more creative, as they figured out that it is light that controls the sound.


Github – https://github.com/jigneshgharat/HoverBeat


The experiment has laid a foundation to develop the project further into a home music system that reacts to light quality and sets the musical mood; for example, if you dim the lights, the music player switches to an ambient, peaceful soundtrack. A new musical instrument can be made just by using DIY sensors at very low cost, with new and interesting interactions.




Touch: Graphite User Interface

Liam Clarke


Touch is an Arduino and Processing project that creates a simple computer with a touch interface.

The initial goal was to find ways to combine Processing and Arduino in a single interactive medium. The project is a touch screen built using conductive paint, glue, and wires. A 10×5 grid was cut into an acrylic sheet, which acts as the conductive circuit within the screen. While the initial dimensions are simple, a much more complex grid can be built on top of the current version. An Arduino Uno and the CapacitiveSensor library are used to register touches via the grid. The data is then sent to Processing, which performs the visual actions that create the illusion of an operating system. The screen is created by projection onto the acrylic panel, where icons and features are mapped to the grid using MadMapper.

Touch screen technologies were researched to develop ways to combine Processing and Arduino. The most attractive was an infrared touch frame around glass; the hardware for this method, a contact image sensor, can be sourced from an average household printer. While this would be a visually clean method of sensing touch, a capacitive grid was chosen for simplicity and because of time constraints. In capacitive touch, sensors have an electrical current running through them, and touching the screen causes a voltage change. This sends a signal to a receiver, and the touch is registered in software.

The screen was made from an acrylic panel. A grid was cut into the panel, and the grooves were used to embed conductive material. Small holes were drilled into the panel to serve as touch points. These touch points were filled with conductive paint and protected with conductive glue. Tests were done to find the smallest groove and the least conductive paint that could still trigger a signal on the send pin.


The frame of the computer is sourced from a broken microwave, gutted of its internal components. Different screens and casing shapes were tested; this setup and size were selected as the most adaptable for future expansions of the machine's function. A large size helps improve screen resolution when back-projecting onto translucent material.


The code was created using Paul Badger's CapacitiveSensor library. Each sensor connects to two digital pins on the Arduino: the send pin connects through a 1 MΩ resistor to the receive pin, and the sensor itself is connected between the resistor and the receive pin. Multiple sensors can share the same send pin, which helps when scaling up the number of functions on an Arduino.
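With one sensor per touch point read in a fixed order, the sensor index can be converted into a (row, column) cell on the 10×5 grid so Processing knows which projected icon was hit. A minimal sketch of that lookup, where the grid size is from the post but the row-major ordering and threshold value are my own assumptions:

```cpp
// Map a sensor index (read order) to its cell on the 10x5 grid.
// Row-major ordering is an assumption for illustration.
const int COLS = 10;   // 10 columns x 5 rows of touch points

int rowOf(int sensorIndex) { return sensorIndex / COLS; }
int colOf(int sensorIndex) { return sensorIndex % COLS; }

// A touch registers when the capacitive reading rises above a
// calibrated threshold (the value passed in is illustrative).
bool isTouched(long reading, long threshold) {
    return reading > threshold;
}
```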


The Processing side of the software is based on images reacting to the touch input. Images were mapped onto the grid using the Syphon Processing library and sent through a projector via MadMapper. MadMapper facilitates organizing the layout, removing the need for precise calibration of image locations within Processing.

The style direction was based on the original Macintosh, for its simplicity and colours. The point of basing the design on a real operating system was to add to the illusion of a fully functional computer. The current build features an audio player with play and pause buttons, a simulated paint program, and a return-to-home function.




Pjrc.com. (2019). CapacitiveSensor Arduino Library with Teensy, for Capacitive Touch and Proximity Sensing. [online] Available at: https://www.pjrc.com/teensy/td_libs_CapacitiveSensor.html.

 GitHub. (2019). CapacitiveSensor. [online] Available at: https://github.com/PaulStoffregen/CapacitiveSensor.

Baxter, L. (1996). Capacitive sensors. John Wiley & Sons, p.138.





The Compass


Priya Bandodkar





“The Compass” is an experiment that uses interactions with a faux compass in physical space to navigate a virtual 3D environment. The exploration leverages the affordances of a compass, such as direction, navigation, and travel, to intuitively steer in a virtual realm. It closely emulates a VR experience, sans the VR glasses.

Participants stand facing the screen and rotate the handheld compass to traverse an on-screen environment in directions mimicking the movement of the physical interface. Participants can turn the device, or even choose to turn around with the device, for a subtle variation of the experience. This movement rotates a 3D sphere on the screen, creating the illusion of moving through the virtual space itself. The surface of the sphere is mapped with the texture of a panoramic landscape: a stylised city scene of crossroads, populated with characters and vehicles, to complement the navigational theme of the experiment.

As a variation on the original concept, I embedded characters in the scene that participants need to search for, tapping into the ‘discovery’ affordance of the compass to create a puzzle-game experience.


As a 3D digital artist, I have always been interested in exploring the possibilities of interactive 3D experiences in my practice. The introduction to Processing opened doors to the ‘interactive’ part of this. I was then keen on playing with the freedom and limitations of incorporating the third dimension into Processing and studying the outcomes.

One of my future interests lies in painting 3D art in VR and making the interactions as tangible as possible. I am greatly inspired by the work of Elizabeth Edwards, an artist who paints in VR using Tilt Brush and Quill, creating astonishing 3D art and environments in the medium. I was particularly fascinated by her art piece in VR called ‘Spaceship’ which was 3D painted using Tilt Brush. I posed myself with a challenge of emulating this virtual experience, and controlling it with a physical interface which was more intuitive than using a mouse.

It had to be a physical interactive object that helps you look around, mimicking a circular motion. Drawing parallels in the real world, I realised the compass has been one of the most archaic yet intuitive interfaces for finding direction and navigating real space. Hence, I decided to leverage its strong affordance to complement the visual experience of my project. While building the interactivity, I realised how easy and effortless it became to comprehend and relate the virtual environment to one's own body and space from the very first prototype, all the more because the object is physically controlled with an intuitive handheld device in real space.


Studying the gyroscope and sending its data to Processing filled a crucial piece of the puzzle. It let me use the orientation information, with a few simple yet very useful lines of code, to bring in the anticipated interaction to a T.


I studied VR art installations such as ‘Datum Explorer’, which creates a digital wilderness from a real landscape and culminates in non-linear storytelling with elusive animals. This elicited the idea of incorporating identifiable characters in my 3D environment to add an element of discovery to the experience. I looked at games based on similar concepts, such as ‘Where's Waldo?’, to calibrate the complexity of this puzzle-game idea. I used six characters from the Simpsons family and embedded them with a glitching graphic effect, hinting that they did not quite belong in the scene and hence needed to be spotted.


To leverage the affordance of the compass, it was important to make it compact enough to fit in the hand and be rotated. I achieved this by nesting the microcontroller on a mini-breadboard within the fabricated wooden compass. I stuck to the archaic look of a compass to keep its design relatable for participants. While incorporating the puzzle-game aspect, I realised the design of the compass could be customised to hold clues related to the game, but I decided to let that go in this version, as the six-character puzzle was simple and straightforward enough for participants to solve.



To conclude, the interaction with the compass in the physical world to control a virtual 3D environment came about intuitively for participants and was successful. Some interesting interactions came up during the demo: participants turned around with the compass held in hand, and some placed the compass near the screen and rotated along with the entire screen, an emulation of ‘dancing with the screen’. The experience was compared closely to VR, but without the VR glasses, making it more personal and tangible.




This is an exploration in continuum that I would like to build on using the following:

  • Layering the sphere with 3D elements, image planes in the foreground to create a depth in the environment.
  • Using image arrays that appear or disappear based on the movement of the physical interface.
  • Adding intricacies and complexities to the puzzle game by including navigation clues on the physical interface.


Edwards, Elizabeth. “Tilt Brush – Spaceship Scene – 3D Model by Elizabeth Edwards (@Lizedwards).” Sketchfab, Elizabeth Edwards, 1 Jan. 1967, https://sketchfab.com/3d-models/tilt-brush-spaceship-scene-ea0e39195ef94c9b809e88bc18cf2025.

“Datum Explorer.” Universal Assembly Unit, Wired UK, https://universalassemblyunit.com/work/datum-explorer.

“Interactive VR Art Installation Datum Explorer | WIRED.” YouTube, WIRED UK, https://www.youtube.com/watch?v=G7BaupNmfQU.

Ada, Lady. “Adafruit BNO055 Absolute Orientation Sensor.” Adafruit Learning System, https://learn.adafruit.com/adafruit-bno055-absolute-orientation-sensor/processing-test.

Strickland, Jonathan. “How Virtual Reality Works.” HowStuffWorks, HowStuffWorks, 29 June 2007, https://electronics.howstuffworks.com/gadgets/other-gadgets/virtual-reality.htm.

“14 Facts About Where’s Waldo?” 14 Facts About ‘Where’s Waldo?’ | Mental Floss, 20 Jan. 2017, https://www.mentalfloss.com/article/90967/14-facts-about-wheres-waldo.






Commuting Fun


Project Name
Commuting Fun by Jessie Zheng

Project Description
Commuting Fun is an interactive installation aiming to bring some fun to mundane day-to-day commuting and to take away the stress, anxiety, and even anger people may experience during rush hour. Aesthetically interesting visual patterns are projected onto the interior of the vehicle and can be altered through passengers' actions on the train. Expected actions in a vehicle, such as sitting on a seat, grabbing the handles, or tapping on a ticket machine, generate unexpected changes in the visuals. Commuting Fun offers new possibilities to start people's day with a fresh and relaxed mind. The project aims to use as much space as possible for exploration and interaction, encouraging passengers to move around the vehicle and get active.


Video of Interaction

Ideation Process
I had been wanting to make an installation for this class for a very long time, and I finally could for this project. I looked up how different interactive museums worldwide incorporate different types of interaction. A lot of them use graphics as decorative elements in the interactive environment, with certain parts of the graphics reacting to physical interactions from participants. Based on the same idea, I wanted to create decorative design patterns in Processing that get projected onto all surfaces of the installation space, as if they were wallpaper.

To add more interesting and tangible pieces to my installation, I came up with ideas such as adding balloons and exercise balls, because their shapes echo the polka-dot pattern on the walls, adding to the aesthetic appeal. For example, one of the Nuit Blanche exhibitions I visited used balloons in the installation space, with lights around them to amplify the visual appeal. They could also serve as switches in the circuit, triggering changes in the projected polka dots when people pull on the balloons or sit on and move the exercise balls. The whole space should serve as a visually pleasing play space. Moreover, I thought about how participants could become part of the exhibition: if they all wore white, their clothes could be a canvas for the projected polka dots as well. However, considering the limited timeframe for figuring out the logistics of wiring balloons and exercise balls into the circuit, I changed direction and implemented something that doesn't move much, making a stable and secure circuit less of a challenge.


Nuit Blanche Exhibition That Inspired Me

One day on my commute to school, I noticed that people interact with lots of objects in daily life without paying much attention to them. Commuting alone involves many such interactions: tapping a card on a machine to get on a streetcar, pressing the stop button to get off, grabbing onto handles to stay stable. These objects have been carefully and systematically designed and placed in the vehicle to ensure its functionality, and people are so used to them functioning in a certain way that they barely notice them. What if something unexpected happened when people interacted with a vehicle the way they normally do? Would they behave differently? Eventually I decided to recreate part of a subway car for my installation and have objects such as seats, handles, and buttons act as switches that change the way the graphics behave. The graphics are then projected back onto the subway, becoming an integral part of the space, so that the digital and physical spaces become a whole.

Project Context
Inspired by the article Tangible Bits: Beyond Pixels by Hiroshi Ishii, who prefers the multi-layered interactions of TUIs over one-dimensional GUI interaction between users and a digital screen, this project takes on the challenge of exploring the interactive relationship between the physical and virtual worlds. An installation became the inevitable choice as the project went further into ideation, out of an intention not to confine participants' physical interactions to the space of a single object, but to spread multiple objects around the installation space for participants to walk around and discover.
Eventually the final decision was made to recreate a TTC transit vehicle as the interactive space because of the number of tangible things to work with: stop buttons and wires, POP machines, seats, and so on. People can interact with things in the installation space without any instructions or guidance, as most people have predefined ideas of how to interact with the objects in transit vehicles from day-to-day commuting, which minimizes possible confusion about the interactive experience. As the objects are spread out across the vehicle, another level of interaction is added, enabling broader movements of participants' bodies.
An effort was made to keep the physical and virtual worlds inseparable. Having decided to use objects in the vehicle as switches to control the graphic interface, the next challenge was integrating it into the physical world organically. Coming from a Chinese background, I have seen themed subway trains in China, the interiors of which are covered with decorative designs during certain festivals. Using the graphic interface as a decorative design projected onto the train became my focus: while people interact with different objects on the train, the design of the vehicle's interior changes based on those interactions.


A Themed Subway Train in Ningbo, China

However, technical limitations meant I was unable to map and project the decorative pattern onto all surfaces of the recreated vehicle, which led me to think of other projection methods. In China, often for commercial purposes, a series of pictures is put up in the tunnels outside the train; while the train moves at speed, the windows become a frame for the animated commercials outside. For this project, the window area is used in a similar way to display the visual elements, which simplifies the projection process yet still lets the visuals be an integral part of the installation.

Drawn to the minimalist style of the polka-dot installations of Japanese artist Yayoi Kusama, whose exhibition Infinity Mirrors I saw at the AGO, I chose polka dots as the visual element of the decorative design. It strikes me how something so simple can still be extremely visually stimulating; sometimes simplicity speaks more to the audience than a compilation of elements. Yayoi Kusama's obsession with polka dots stems from a mental disorder caused by her difficult relationship with her mother as a child. She was encouraged by a therapist to draw and paint as an outlet for her oppressive feelings. The statement her artworks make, through carefully chosen colors and deliberately arranged compositions, is truly powerful for viewers such as myself. I chose a playful and vibrant color palette for the dots and background, because I believe it could soothe people's anxiety during rush hour.

Work in Progress
I wrote the code first using potentiometers, then replaced them with pieces of velostat sandwiched in aluminum foil, making sure the code still worked.
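A velostat pad in a voltage divider reads higher on the analog pin when pressed, so each seat sensor can be treated as a switch. A minimal sketch of that logic, with a little hysteresis so the graphics don't flicker when a seated person shifts their weight; both thresholds are illustrative, not calibrated values from the project:

```cpp
// One velostat seat sensor treated as an on/off switch with hysteresis.
// Thresholds are illustrative placeholders for calibrated values.
struct SeatSwitch {
    bool occupied = false;
    void update(int analogReading) {
        if (!occupied && analogReading > 600) {
            occupied = true;            // someone sat down
        } else if (occupied && analogReading < 400) {
            occupied = false;           // they stood up
        }
        // Readings between 400 and 600 keep the previous state,
        // so small pressure changes don't toggle the visuals.
    }
};
```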


Once I decided to recreate the subway space, I jotted down a list of things found in a Toronto subway car. I printed out posters and recorded the ambient sound of a subway on my way to school.
The most challenging thing was making the chairs in our studio resemble subway seats. I found red fabric at Michaels with a similar look and feel to the fabric on a subway seat, and a silvery metallic adhesive film, which I pasted and sewed onto the red fabric. This process was extremely time-consuming, as I had no prior sewing experience; it took me almost two days to finish all three seat covers.


I also used cardboard boxes to recreate the yellow handles of the TTC subway. I initially intended to use them as switches too, but I couldn't find a good way to secure them in place while people pulled on them, so they ended up as props that add to the look of the recreated subway space.
After taping all three seat covers onto the chairs, I put the sensors under the covers and connected them to the circuit. I had people sit on the chairs to see if this created the desired changes in the graphics, and made adjustments accordingly.


However, after a few tests, the aluminum foil started to crumble and break from people sitting on it repeatedly. This led me to seek an alternative conductive material that endures stretching a bit better; conductive fabric became the enhanced alternative.
Finally, the chairs were ready to go.

Final Look


Final Look Of The Installation


First Aid Box To Hide The Arduino


Polka Dots Projected Back On The Train


Handles To Grab On To

artjouer (2018) A Visit to TeamLab Planets Tokyo: Amazing Interactive Art. Available at: https://youtu.be/G6EtM1r0Eko (Accessed: November 4, 2019).

BRANDS OF THE WORLD (2018) Toronto Transit Commission. Available at: https://www.brandsoftheworld.com/logo/toronto-transit-commission (Accessed: November 4, 2019).

British Council Arts (2015) What is digital art? Available at: https://youtu.be/2RWop0Gln24 (Accessed: November 4, 2019).

CHINADAILY (2015) Another themed subway train runs in Ningbo. Available at: http://www.chinadaily.com.cn/m/ningbo/2015-04/22/content_20503269.htm (Accessed: November 4, 2019).

Grief, A. (2015) What Toronto’s highways would look like as a TTC map. Available at: https://www.blogto.com/city/2015/09/what_torontos_highways_would_look_like_as_a_ttc_map/ (Accessed: November 4, 2019).

Ishii, H. (2019) Tangible Bits: Beyond Pixels. Massachusetts: MIT Media Laboratory. Available at: https://zhenbai.io/wp-content/uploads/2018/08/4.-Tangible-Bits-Beyond-Pixels.pdf (Accessed: November 4, 2019).

O’Neil, L. (2019) The TTC is putting fare evaders on blast in a new ad campaign. Available at: https://www.blogto.com/city/2019/05/ttc-now-putting-fare-evaders-blast/ (Accessed: November 4, 2019).

Sinha, V. (2018) Yayoi Kusama: Her world of polka dots. Available at: https://www.thejakartapost.com/life/2018/09/06/yayoi-kusama-her-world-of-polka-dots.html (Accessed: November 4, 2019).

Tate (2012) Yayoi Kusama – Obsessed with Polka Dots | Tate. Available at: https://www.youtube.com/watch?v=rRZR3nsiIeA (Accessed: November 4, 2019).



By Masha Shirokova

CODE: https://github.com/MariaShirokova/Experiment3


My idea was to create a multi-sensory device that allows users to explore sense-crossing and experience at least three senses.

Play is a (musical? visual? tangible?) instrument with a multisensory interface: users can play sounds and create their own sound and visual compositions on the screen by interacting with tactile sensors. Each sound presents a visual animation over the background when played. Play lets users make a whole “orchestra” out of pom poms, glasses of water, and other non-musical objects, and turn a palette into a rhythmic sequencer.

For now, the device consists of three tactile objects: a glass of water, a foldable paper button, and a pompom button, which control three modes of visuals on the screen and three sounds. In the future, I would like to expand the number of objects and make the visual part more complex.

The device offers users many performance possibilities. It can also be used for educational purposes, giving kids and adults a chance to interact with music in new and different ways.


Hearing smells or seeing sounds are examples of synesthesia, one of my main research interests. This experiment is my first attempt to create a multi-sensory object that helps users understand how tightly the senses are crossed and connected. In the case of Play, pushing or touching the DIY buttons triggers sound and a colorful visual animation.

The aesthetic expression of synesthesia has a history going back to the paintings of Wassily Kandinsky and Piet Mondrian. It continued in the note drawings of Cornelius Cardew, who literally drew his music onto notation schemes. Sometimes these were quite identifiable notes, but their duration and relative volume were to be determined by the performer. The culmination of this approach was his book “Treatise”, comprising 193 pages of lines, symbols, and various geometric or abstract shapes that largely eschew conventional musical notation. The simple grid of the board and the screen interface were inspired by Mondrian's geometric abstract works, the classical notation scheme, and the short films of Oskar Fischinger. The screen grid is affected by the sound, turning into a waveform that changes depending on the volume (amplitude) of the sound.
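The grid-to-waveform behaviour can be sketched as a small displacement function: each point along a grid line is pushed vertically by a sine whose height follows the sound's amplitude, so the line flattens out in silence and swells with volume. The sine shape and the scale factor are illustrative assumptions, not the exact formula from the sketch:

```cpp
#include <cmath>

// Vertical position of a grid line at horizontal position x, displaced
// by a sine wave whose height tracks the sound's amplitude (0.0 - 1.0).
// The 50-pixel scale and the sine shape are illustrative assumptions.
float waveY(float baselineY, float x, float amplitude, float frequency) {
    return baselineY + amplitude * 50.0f * std::sin(frequency * x);
}
```

With amplitude 0 the function returns the baseline, i.e. the grid line stays straight until a button triggers a sound.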

Wassily Kandinsky was capable of “hearing” colors, which is why he composed his famous “symphony pictures” to be melodically pleasing. He combined four senses: color, hearing, touch, and smell. Experimenting with perceiving senses differently through the device can therefore be a valuable exercise for developing imagination and creativity. In his compositions, circles, arcs and other geometrical shapes seem to be moving, so I also used simple animated shapes and bright colors to keep a connection with the artist who experienced synesthesia.


Composition 8 by Wassily Kandinsky

Composition London by Piet Mondrian

Drawn notes from “Treatise” by Cornelius Cardew

Working on the sound part was a new experience for me, so I picked 3 different sounds: a rapid drum sound and 2 xylophone sounds. There is a Russian band, SBPCH (Samoye Bolshoe Prostoe Chislo), that plays electronic music based on simple but pleasant sounds of water, rain, glass or ping pong. I wanted to reach the same effect when picking my sounds. This is how I “hear” collapsing and growing shapes.

As for the tactile part, my goal was to make the tangible experience as diverse as I could, so I included a soft pom pom button, a paper button and a glass of water. Users experience something soft and colourful, something dry and solid, and something liquid – this is where the contrast of touch, sounds and visuals mixes together.

First, I saw the possibility of adding water to the circuit in a video by Adafruit Industries. Then I realized that they use different boards, which rely on capacitive touch. So I started looking for other methods of using water as a sensor. I added salt and it worked!



First week, I started with brainstorming some initial ideas for the project:

  • Shadow play
  • Use of bubble wrap
  • Game based on the principle of Minesweeper Online Game
  • Multi-sensory device

I decided to do the last one, as it represents my research interest and, hopefully, will be helpful for my thesis.

The first class codes, provided by Kate, became the foundation for my project. I replaced the potentiometers with DIY sensors and added more detail to the Processing code (sound and visuals).


Circuit from the first class

Visual interface:


Interface sketches

For the grid, I used a soundwave (the same method we used in Echosystem for Experiment 1), which was affected by the amplitude of the sound.

  1. 3D rotating cubes for the starting screen, using the P3D renderer and rotation.


2. The first sensor activates the yellow square (its Y position was mapped to the sensor value) and the “play more” text.




3. The second sensor activates a static composition of a star and rectangles.



4. The third sensor activates text and a circle (the fill color was randomized: fill(0, random(0,240), 255), and its Y position was also mapped to the sensor value). Three more ellipses were also activated; their sizes changed with frameCount in different proportions, so they looked like a water surface. The third sensor is also responsible for the sound wave.
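The mappings described above boil down to Processing's map() call: a sensor reading rescaled into a screen position, plus a randomized fill. A minimal sketch of that logic in plain Java (the 0–1023 sensor range and the 480-pixel screen height are my assumptions, not values from the sketch):

```java
public class SensorMapping {
    // Re-implementation of Processing's map(): rescale value from [inMin, inMax] to [outMin, outMax].
    static float map(float value, float inMin, float inMax, float outMin, float outMax) {
        return outMin + (outMax - outMin) * ((value - inMin) / (inMax - inMin));
    }

    public static void main(String[] args) {
        int sensorValue = 512; // assumed mid-range 10-bit analog reading
        // The square's (or circle's) Y position follows the sensor value.
        float squareY = map(sensorValue, 0, 1023, 0, 480);
        System.out.println("square Y = " + squareY);

        // Randomized fill, as in fill(0, random(0, 240), 255):
        float green = (float) (Math.random() * 240);
        System.out.println("fill(0, " + green + ", 255)");
    }
}
```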




Two sensors are being activated


Figuring out the sound:

Wiring the potentiometers to the Arduino and writing the code for the 3 DIY sensors was simple. However, working with multiple sounds was a bit challenging. First, I looked at sound libraries I could use in Processing and found the Sound library and the Minim library. With 2 sounds it was comfortable to use both, as it was possible to stop and play sound files from the 2 different libraries. However, when I added the third sound, it did not play. So, instead of pausing sounds, I changed their volume and used only the Sound library.
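The volume workaround can be sketched as a tiny state holder (a sketch only: in the actual Processing sketch each SoundFile would loop continuously and its amp() would be set from a gain like these – the class name and structure here are my own):

```java
public class SoundMixer {
    // One gain value per looping sound file. All files keep playing;
    // a sensor "activates" its sound by raising the gain instead of calling play()/pause().
    private final float[] gains;

    public SoundMixer(int soundCount) {
        gains = new float[soundCount]; // all start silent (0.0)
    }

    // Called when a sensor crosses its threshold.
    public void setActive(int soundIndex, boolean active) {
        gains[soundIndex] = active ? 1.0f : 0.0f;
    }

    public float gain(int soundIndex) {
        return gains[soundIndex];
    }

    public static void main(String[] args) {
        SoundMixer mixer = new SoundMixer(3);
        mixer.setActive(1, true); // e.g. second sensor pressed
        System.out.println("gains: " + mixer.gain(0) + ", " + mixer.gain(1) + ", " + mixer.gain(2));
    }
}
```

In the Processing draw() loop, each frame would then call something like `file[i].amp(mixer.gain(i))`, so no file ever needs to be stopped and restarted.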

Testing sounds


Combining sound+image

3 sounds

DIY sensors:

I was excited to work with different materials and provide users with very different experiences. In the beginning, inspired by a performance where players used only fruit to make music, I wanted to use a lemon as one of the sensors. However, there were 2 “not enough”s – not enough voltage, or the lemon was not conductive enough. So I switched to squishy circuits and tested play-dough. It also was not very reliable, even though I tried 2 kinds of resistors: the play-dough sensor only worked as an on/off switch. Therefore, I came up with the idea of 2 buttons (paper and pom pom), which would still allow users to experience different interactions and contrasting materials. Furthermore, I still wanted to use a glass of water as a sensor. Although I did not manage to activate this sensor with a user’s touch, I did make it work as a simple switch (the sensor reaches its max value when both clips are in the glass). In this case, salt helped me a lot by making the water more conductive.
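The salted glass behaving as a simple switch can be sketched as a threshold test on the analog reading (the 10-bit range and the cutoff value are assumptions of mine):

```java
public class WaterSwitch {
    // The salted glass of water behaves as a binary switch: the analog reading
    // only approaches its maximum when both clips are submerged.
    static final int MAX_READING = 1023; // 10-bit Arduino ADC ceiling (assumed)
    static final int THRESHOLD = 900;    // assumed cutoff near the max

    static boolean isClosed(int analogReading) {
        return analogReading >= THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(isClosed(1010)); // both clips in the water
        System.out.println(isClosed(120));  // open circuit
    }
}
```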

I preferred copper to aluminium foil as a conductive material, as it was less flexible and more stable.

  • Lemon
  • Play-dough

  • Water

  • Fluffy Button & Paper Button


Assembling and making:

As I see this device as an open structure for adding extra tangible objects, I decided to keep the model’s structure exposed to the public as well, and did not hide the wires connected to the button and to the glass of water.

The tactile part consisted of a laser-cut board, the pom-pom button (fluffy balls were simply glued to a card with a copper piece covered with velostat, while the other copper piece was glued to the board), and the paper button (the sensor part was hidden inside, so pressing it allowed the 2 parts of the sensor to connect with each other).

For the visual interface, I laser-cut the board and a box where I could hide the breadboard.





The best part of this experiment is that my classmates were actually interested in interacting with the device and enjoyed the process of creating beats and melody.

A few days after the presentation, I am looking at the project and wondering why I did not manage to make it more complex. I know I should have set up and edited the sounds so that the resulting melody would be better.

In general, I enjoyed working on this project, as I could actually play with my favorite materials: sound, image and tactile materials. However, as I set out to use all 3 senses, I did not have enough time to work on the quality of the sound. My further plan is to improve the Processing code by adding sounds and making the visual part more complex.

Another plan is to match the sound, visual and tactile parts to real data received from people who experience synesthesia. I believe this phenomenon can be very inspiring for other users. Even for those who do not experience mixed senses, the synesthesia vocabulary can serve as a source of inspiration, since color and music associations are very poetic and metaphorical. Perhaps users shall produce their very own vocabulary of vision to be able to experience art fully, and hopefully the future PLAY device can be useful in expanding our sense experience.


16 Pineapples – Teplo. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=SimccVMCpv4.

Adafruit Capacitive Touch HAT for Raspberry Pi – Mini Kit – MPR121. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=Wk76UPRAVxI&list=PL5CF99E37E829C85B&index=130&t=0s.

An Optical Poem (1938) by Oskar Fischinger. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=_kTbt07DZZA.

Chen, P. A. (2016, November 15). How to add background music in Processing 3.0? Retrieved November 5, 2019, from https://poanchen.github.io/blog/2016/11/15/how-to-add-background-music-in-processing-3.0.

“Early Abstractions” (1946-57), Pt. 3 by Oskar Fischinger. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=RrZxw1Jb9vA.

Nelzya Skazat’ Koroche by SBP4. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=XIictPv-5MI.

Puckett, N., & Hartman, K. (2018, November 2). DigitalFuturesOCADU/CC18. Retrieved from https://github.com/DigitalFuturesOCADU/CC18/tree/master/Experiment%203

Swinging. (n.d.). Retrieved November 5, 2019, from https://works.jonobr1.com/Swinging.

Visual music. (2019, September 19). Retrieved November 5, 2019, from https://en.wikipedia.org/wiki/Visual_music.

Live VJ Performance Show

Project Title: Live VJ Performance Show

-An interactive live performance that explored audio visualization

By Jun Li – Individual Project



This project was a live performance show based on the concept of ‘audio visualization’ – an 11-minute show for the audience. It was also my first attempt at being a VJ. All 8 of the different effects were generated from, and interacted with, the music input in real time. The show was built on Arduino controllers and TouchDesigner, with the video output projected in the background. The purpose of this experiment was to create a very simple user interface with different switches and sliders to manipulate the effects.

This dynamic experience allows every participant to become a ‘Visual Jockey’: they can operate and change each parameter of the audio, resulting in energetic graphics in the background.

Keywords:  VJ, Live Performance Show, Audio Visualization, Interaction.


The goal of this experiment was to create a tangible or tactile interface for a screen-based interaction using Arduino and Processing. I came from a very similar undergraduate program called ‘Technoetic Art’ and had a lot of experience working with this software beforehand, and I am always eager to challenge myself at a higher technical level and create fascinating projects. After discussing my idea with Kate and Nick, I received their permission. I took the technical logic of Processing and applied it to TouchDesigner, retaining the knowledge of serial communication across the 2 programs.


Music visualization refers to a popular form of communication that combines audio and visuals – with vision at its core, music as the carrier, and various new media technologies used to interpret musical content through pictures and images. It provides an intuitive visual presentation technique for understanding, analyzing and comparing the expressiveness of the internal and external structures of musical art forms.

Vision and hearing are the most important channels through which human beings perceive the outside world; they are the most natural and most common of human behaviors and are irreplaceable in cognition. “Watching” and “listening” are the most natural, direct and important means of recognizing the outside world through the audiovisual senses. In contemporary society, sound and video, hearing and vision converge on shared aesthetic trends and dominate the aesthetic form of mass culture. Vision provides many conveniences for seeing and understanding musical works and music culture, and people increasingly rely on visual forms to understand audio content. The applications of music visualization are very wide: live music, exhibition sites and other music visualization systems, combined with special images and customized immersive content, can give people a strong visual impact while they enjoy the music.

Process & Techniques

Research question

First, I explored what kinds of audio parameters can affect the visual part and how I could utilize them to manipulate the shape or transformation of the visual design. Usually, audio visualization is based on the high, mid and low frequencies, the beat and the volume of the music. So one of the most important techniques is how to get these parameters from the audio itself and how to convert them into audio visualization.

Arduino Implementation

Most of my project was built in TouchDesigner. The Arduino code was not difficult: it simply sent the data of one switch and 8 sliders to TD over serial communication for manipulating the visual effects.
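That serial traffic can be sketched as one comma-separated ASCII line per frame – the switch value followed by the 8 slider values. This is a plain-Java sketch of both directions of that (assumed) packet layout; on the real setup the Arduino prints the line and TouchDesigner's Serial DAT does the splitting:

```java
import java.util.Arrays;

public class SerialPacket {
    // Build the line the Arduino side would print, e.g. "1,512,0,1023,...".
    static String encode(int switchState, int[] sliders) {
        StringBuilder sb = new StringBuilder().append(switchState);
        for (int s : sliders) sb.append(',').append(s);
        return sb.toString();
    }

    // Split the line back into integers: values[0] = switch, values[1..8] = sliders.
    static int[] decode(String line) {
        String[] parts = line.trim().split(",");
        int[] values = new int[parts.length];
        for (int i = 0; i < parts.length; i++) values[i] = Integer.parseInt(parts[i]);
        return values;
    }

    public static void main(String[] args) {
        String line = encode(1, new int[]{512, 0, 1023, 300, 760, 88, 400, 999});
        System.out.println(line);
        System.out.println(Arrays.toString(decode(line)));
    }
}
```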

TouchDesigner Implementation

  1. How to generate visual effects in real time based on the different parameters of the audio?
  2. How to utilize the data coming from the Arduino to affect the visual effects already being generated in real time?
  3. How to arrange and switch between different visual effects, and what are the key points of a VJ performance?
  4. How to optimize the program to avoid affecting the real-time output?
  5. How to output the whole project (both the visual and sound parts) to large displays (2 screen monitors, 1 projection) and 2 speakers, so the VJ can monitor and change the effects in real time while showing them to the audience?

For the first and second questions, I created 8 effects utilizing different parameters of the audio.


1.1 – Audio analyzing (high, mid, low frequency)


1.2 – Utilize the data from Arduino to manipulate visual effects





1.3 – Opening introduction (particle writing system)



1.4 – 8 different visual effects (Initial state & Altered state, Same below)













For the third question, I used the data from a slider and added an additional black layer to switch and transition to the next effect. At the same time, I noticed that many VJs add a strobe effect when the music reaches its climax. So there was another additional layer based on the low frequency of the audio: it increased the strobing of the video, which enhanced the live atmosphere.
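The two layers can be sketched as small functions (a sketch only: the 0–1 ranges, the strobe threshold and the frame-parity gating are my assumptions, not the actual TouchDesigner network):

```java
public class TransitionLayer {
    // A slider (0..1) drives the opacity of the black layer used to fade between effects.
    static float blackOpacity(float slider) {
        return clamp(slider);
    }

    // Strobe layer: flashes on alternate frames, but only while the
    // low-frequency energy of the audio is high (i.e. at the music's climax).
    static boolean strobeOn(float lowFreqLevel, int frameCount) {
        return lowFreqLevel > 0.8f && frameCount % 2 == 0; // threshold is assumed
    }

    static float clamp(float v) { return Math.max(0f, Math.min(1f, v)); }

    public static void main(String[] args) {
        System.out.println("fade opacity = " + blackOpacity(0.6f));
        System.out.println("strobe on? " + strobeOn(0.9f, 4));
    }
}
```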


1.5 – The interface of switching 8 effects


1.6 – Different function in layers

For the fourth question: at first I could functionally use one slider’s data to control all 8 visual effects. However, I realized it was a huge challenge for my laptop to process this on the GPU and CPU at the same time. So I separated the data into 8 sliders to control each effect and tested it carefully. Eventually it worked, and I did optimize my program successfully.


1.7 – The data flow in TouchDesigner


1.8 – Arduino interface

For the last question, I explored TouchDesigner’s output functions. They are powerful, allowing creators to build a very convenient interface to supervise the whole process and output it to any screen and speaker.


1.9 – Output to display

Challenge & Learning

  1. The deadline for this project was very short: we had to submit in 9 days, including ideation, coding, testing, debugging and final setup. On top of that, I used a more difficult software, TouchDesigner. Although I had worked with TD before, I was not as familiar with it as with Processing, and I did not have much project experience with it, so I barely got any help or reference from my classmates. It was truly individual work and quite challenging for me, which also made it exciting.
  2. Setting the goal of a live VJ performance show meant that too few visual effects would not be acceptable or powerful enough, visually or aurally. I am a perfectionist, so I had to create a show at least 10 minutes long – an extra requirement and pressure for me. To achieve that, I kept testing and creating different visual effects, and eventually chose the 8 strongest of the 26 effects I had created in TD.
     2.1 – Reference library I created
  3. Processing the data and rendering these effects in real time was a very heavy job for my GPU and CPU, and it challenged the performance of my computer. I came close to ruining my GPU and, unfortunately, did lose my 3.5mm audio output in the end. After coding, I had to work hard to optimize the program to get a better, stable 60 FPS output, because my computer – especially the GPU – was not powerful enough. Eventually, there were still some frame drops during the live performance.
  4. To my satisfaction, the project was successfully made and the response from others was very positive. I achieved the goal I set at the beginning – compared with the initial goal, NOTHING had to be changed. I gained a lot of project experience and techniques in TD, including audio analysis, visual design, data integration, manipulation and analysis, project optimization, output and management.
  5. The next step is to keep improving and optimizing my program and to create a simpler operation interface that lets users manipulate it easily.

2.2- VJ setup in TouchDesigner

Code Link:



Today, technology-based new media art offers a way forward. These forms of new media art are often the forerunners of future art, with artists working in the area of “art and technology” to create collaborative artworks. Moreover, in an era of such great technological transition, ways of life have changed accordingly, and it has become necessary to grasp the essence of art anew. The cross-border nature of music visualization art is very obvious: it involves music art, visual art and the artistic integration of the two.

When this project was exhibited, I was the VJ and presented it myself, so I was able to personally observe the interaction and experience of the participants. To my satisfaction, I achieved my goal of becoming a VJ, delivering a show that challenged me incredibly.




  1. vjfader (2017). VJ Set – Sikdope live @ Mad House – ShenZhen China.Available at: https://www.youtube.com/watch?v=uG1GrD7VQOs&t=902s.
  2. Transmission Festival (2017). Armin van Buuren & Vini Vici ft. Hilight Tribe – Great Spirit (Live at Transmission Prague 2016). Available at: https://www.youtube.com/watch?v=0ohuSUNHePA 
  3. Ragan, M. (2015). THP 494 & 598 | Simple Live Set Up | TouchDesigner.Available at: https://www.youtube.com/watch?v=O-CyWhN4ivI
  4. Ragan, M. (2015). THP 494 & 598 | Large Display Arrangement | TouchDesigner. Available at: https://www.youtube.com/watch?time_continue=1&v=RVqNjJfE9Lg 
  5. Ragan, M. (2015). Advanced Instancing | Pixel Mapping Geometry | TouchDesigner.  Available at: https://matthewragan.com/2015/08/18/advanced-instancing-pixel-mapping-geometry-touchdesigner/
  6. Ragan, M. (2013). The Feedback TOP | TouchDesigner.  Available at: https://matthewragan.com/2013/06/16/the-feedback-top-touchdesigner/

Tête-à-Tête <3

Rittika Basu

Project Description: ‘Tête-à-Tête’ <3 is a private dating platform. The term ‘tête-à-tête’ (French origin) refers to a secret one-to-one conversation between two people. The communication is encrypted through the use of ‘Cupid Cryptic Codes’, which are transmitted by playing the mini piano keys. The objective is to have an ‘ongoing secret chat’ while being camouflaged as a piano player. There are ten piano keys in total, which generate the Solfège/Solfa – an educational technique for teaching musical notes and sight-reading skills and familiarizing beginners with lyrical patterns. For example, ‘A’ is denoted by one red blink and generates the musical note ‘Do’. As on an actual piano, the first eight mini piano keys produce Do, Re, Mi, Fa, So, La, Ti & Do consecutively. The last 2 keys produce ‘beep’ sounds denoting the sets of odd and even numbers. The code syntax incorporates alphabets, numbers and several punctuation marks, in addition to a few emojis.






Attaching alligator wires to the mini piano keys’ DIY buttons
Wiring the Arduino with the Mini Piano strings

Final Images:


Interactive Video:

Question: Guess what Neo is encrypting?

Answer: coffee?

Project Context:


I started gathering ideas after observing the Mud Tub, Massage Me and Giant Joystick. One day, I came across an article on ‘secret dating’ and its reasons. Foremost, it is a common practice in the LGBT community, as non-heterosexual forms of expressing love are taboo in several regions of the world. In many Asian and African countries, disclosure of such relationships might end in tragic incidents where the partners are jailed, penalised, killed by relatives or chemically castrated. One reason could be that unconventional relationships are perceived as an act of humiliation and shame in society. Thus, homosexual lovers are often forced to pursue relationships in secrecy.

Being from India (one of the most populous countries), I have witnessed how dating, kissing or any form of displaying affection is publicly scowled at, whereas molestation, on the other hand, is often blatantly ignored. Instead of teaching children lessons about communal friendship and healthy relationships, parents wrongly depict romantic intimacy as vile and inappropriate for youngsters. For example, during my 12th grade, my best friend’s mother told her that having a boyfriend is indecent and that girls who date boys will always perform poorly in academics. This was because her parents considered ‘teenage love’ a distraction and feared that their daughter might engage in pre-marital coitus.

Developing a new language – Cupid Codes:

Contemplating the notion of secret dating, the idea of a secret system of communication struck me during the next week. I commenced reading about cryptic messaging and cipher networks, which had interested me since childhood. Inspired by ‘Morse Code’ and ‘Tap Code’, I attempted to be an amateur cryptanalyst and created my own set of cryptic codes using LED blinks.


Ideation – Creating a ‘Cryptic Messaging System’

It was a struggle in the beginning, as I had to conceive how to implement this transmission via serial communication within a limited set of 12 digital pins. In the fullness of time, after innumerable trials and errors, I came up with the ‘Cupid Codes’ and devised a systematic table for remembering the new language. These sets of multi-coloured blinks on the Processing screen help to transmit messages between two lovers secretly, while the people around them assume that the participants are actually engaged in playing the mini piano, because the codes are conveyed through the playing (audio and visual output) of the piano keys.
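A code table like this can be sketched as a simple lookup from characters to blink sequences. Only the ‘A’ → one-red-blink pair comes from the description above; the other entries and the colour-initial notation (R = red, O = orange) are placeholders of mine:

```java
import java.util.HashMap;
import java.util.Map;

public class CupidCodes {
    // A (hypothetical) fragment of the Cupid Code table: each character maps
    // to a sequence of coloured blinks, written here as colour initials.
    static final Map<Character, String> CODE = new HashMap<>();
    static {
        CODE.put('A', "R");  // one red blink, played as the note "Do"
        CODE.put('B', "RR"); // placeholder entry for illustration
        CODE.put('C', "RO"); // placeholder entry for illustration
    }

    // Translate a message into the blink sequences to be performed on the keys.
    static String encode(String message) {
        StringBuilder out = new StringBuilder();
        for (char c : message.toUpperCase().toCharArray()) {
            String blinks = CODE.get(c);
            if (blinks != null) out.append(blinks).append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(encode("ab")); // blink sequence for a two-letter message
    }
}
```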

Tap Code

Creation of a new language – Cupid Code (usage of multi-coloured blinks with different timings and syntaxes)


Rapid Prototyping:

Utilising the knowledge of DIY switches from Kate’s class 12, I tried making mini piano keys wrapped in aluminium foil to test the conductivity of electricity. However, this prototype’s form turned out to be very juvenile. Hence, I reserved it as a kid’s version because of its bright colours and playful mechanism.


After referring to videos of DIY mini pianos on YouTube and instructional images from Pinterest, I began implementing the gathered knowledge to develop my own little piano. Creating the piano keys as DIY switches was intensive and tedious. Afterwards, I soldered the jumper wires and resistors to the copper strips attached to the piano keys. The idea was that pressing a piano key completes the circuit and emits musical audio. There are 10 piano keys, colour-coded in red, orange, yellow, blue & green. The 2 green keys are for odd and even numbers respectively, while the other eight keys represent alphabets, emojis, phrases and punctuation in different combinations. The 10 keys emit 10 different musical notes, i.e. the Solfège or Solfeggio (a.k.a. Sol-fa, Solfa, Solfeo), which follows as Do, Re, Mi, Fa, So, La, Ti, Do, plus the beep sounds.
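The key-to-note mapping can be sketched as a lookup over the closed circuits (a sketch only; the boolean array stands in for the Arduino’s per-key digitalRead() results):

```java
public class PianoKeys {
    // The 10 keys map to solfège notes; the last two produce beeps for odd/even numbers.
    static final String[] NOTES = {
        "Do", "Re", "Mi", "Fa", "So", "La", "Ti", "Do", "Beep(odd)", "Beep(even)"
    };

    // Pressing a key closes its circuit; return the notes for all pressed keys.
    static String notesFor(boolean[] pressed) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < pressed.length && i < NOTES.length; i++) {
            if (pressed[i]) out.append(NOTES[i]).append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        boolean[] pressed = new boolean[10];
        pressed[0] = true; // first key held
        pressed[9] = true; // "even numbers" beep key held
        System.out.println(notesFor(pressed));
    }
}
```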

Access from GitHub (Codes + Audio + Image + Typeface) :

The coding is simple and derived from the files shared on Kate and Nick’s GitHub page, titled ‘Experiment_3_-_Arduino_Processing_ASCII_Digital_values.ino’ (Arduino file) and ‘Experiment_3__Arduino_Processing_ASCII.pde’ (Processing file).

In Arduino: Arduino Code for Experiment 3: Tête-à-Tête <3

In Processing: Processing Code for Experiment 3: Tête-à-Tête <3

Supporting Files: Audio, background image and typeface


I replaced aluminium with copper not only because Cu is a better conductor of electricity, but also because its strips are harder than Al foil. The cryptic communication language was developed from colour blinks that appear on the screen with different timings, to be used and exchanged between the participants or partners for flirting, messaging and calling each other. Every piano key generates a different sound, making the messaging activity seem like the playing of musical notes. Since the instrument functions like a real piano, non-participants around the partners will take them for piano players, while they can happily date in peace and privacy.


Research + Coding Tutorials
  1. Kremer, B. (2019). Best Codes. from https://www.instructables.com/id/Best-Codes/
  2. Hartman, K. (2019). Exp3_Lab1_ArduinotoProcessing_ASCII_3DigitalValues/. Lecture, OCAD University. https://github.com/DigitalFuturesOCADU/CC19/tree/master/Experiment3/Exp3_Lab1_ArduinotoProcessing_ASCII_3DigitalValues
  3. curtis’s channel. (2016). processing: playing and using sound files [Video]. Retrieved from https://www.youtube.com/watch?v=DJJCci3kXe0
  4. Engel, M. (2014). Adding and using fonts in processing [Video]. Retrieved from https://www.youtube.com/watch?v=QmRbb-_d_vI
  5. Blum, J. (2011). Tutorial 06 for Arduino: Serial Communication and Processing [Video]. Retrieved from https://www.youtube.com/watch?v=g0pSfyXOXj
  6. Rudder, C. (2014). Seven secrets of dating from the experts at OkCupid. Retrieved, from https://www.theguardian.com/lifeandstyle/2014/sep/28/seven-secrets-of-dating-from-the-experts-at-okcupid
  7. Elford, E. (2018). HuffPost is now a part of Verizon Media. Retrieved from https://www.huffpost.com/entry/mom-secret-lesbian-relationship_n_5aa143e9e4b0d4f5b66e2b35
  8. Rodgers & Hammerstein. (1965). “Do-Re-Mi” – The Sound of Music [Video]. Retrieved from https://www.youtube.com/watch?v=drnBMAEA3AM
  9. Jaz_the_MAN_2. (2015). Do, re, mi, fa, so, la, ti, do – DO stretched.wav [WAV file]. Retrieved from https://freesound.org/people/Jaz_the_MAN_2/sounds/316899/
  10. Jaz_the_MAN_2. (2015). Do, re, mi, fa, so, la, ti, do – RE stretched.wav [WAV file]. Retrieved from https://freesound.org/people/Jaz_the_MAN_2/sounds/316909/
  11. Jaz_the_MAN_2. (2015). Do, re, mi, fa, so, la, ti, do – MI.wav [WAV file]. Retrieved from https://freesound.org/people/Jaz_the_MAN_2/sounds/316909/
  12. Jaz_the_MAN_2. (2015). Do, re, mi, fa, so, la, ti, do – FA stretched.wav [WAV file]. Retrieved from https://freesound.org/people/Jaz_the_MAN_2/sounds/316905/
  13. Katy. (2007). Solfege – So.wav [WAV file]. Retrieved from https://freesound.org/people/digifishmusic/sounds/44935/
  14. Jaz_the_MAN_2. (2015). LA.wav [WAV file]. Retrieved from https://freesound.org/people/Jaz_the_MAN_2/sounds/316902/
  15. Katy. (2007). Solfege – Ti.wav [WAV file]. Retrieved from https://freesound.org/people/digifishmusic/sounds/44936/
  16. austin1234575. (2014). Beep 1 sec [WAV file]. Retrieved from https://freesound.org/people/austin1234575/sounds/213795/
  17. cheesepuff. (2010). a soothing music.mp3 [MP3 file]. Retrieved from https://freesound.org/people/cheesepuff/sounds/110215/






by Nadine Valcin



The trumpet is a fitting instrument as the starting point for an installation about the world’s most infamous Twitter user. Trumpet combines a display of live tweets tagged with @realDonaldTrump with a trumpet that delivers real audio clips from the American president. The piece is meant to be installed at room scale and provide a real-life experience of the social media echo chambers that so many of us confine ourselves to.

The piece constantly emits a low static sound, signalling the distant chatter that is always present on Twitter. A steady stream of tweets from random users, but always tagged with the president’s handle, are displayed on the screen and give a portrait of the many divergent opinions about the current state of the presidency.

Visitors can manipulate a trumpet that triggers audio. A sample of the Call to the Post trumpet melody played at the start of horse races can be heard when the trumpet is picked up. The three trumpet valves, when activated, in turn play short clips (verbal equivalent of tweets) from the president himself. Metaphorically, Trump is in dialogue with the tweets being displayed on the screen in the enclosed ecosystem. The repeated clips create a real live sonic echo chamber physically recreating what happens virtually online.


My initial ideas were centered on the fabrication of a virtual version of a real object: a virtual bubble blower that would create bubble patterns on a screen, and a virtual kaleidoscope. I then flipped that idea and moved to using a common object as a controller, giving it a new life by hacking it in some way to give it novel functionalities. Those functionalities would have to be close to the original use of the object, yet surprising in some way. The ideal object would have a strong tactile quality. Musical instruments soon came to mind.

They are designed to be constantly handled, have iconic shapes and are generally well made, featuring natural materials such as metal and wood.

Image from Cihuapapalutzin

In parallel, I developed the idea of using data in the piece. I had recently attended the Toronto Biennial of Art and was fascinated by Fernando Palma Rodriguez’s piece Cihuapapalutzin that integrated 104 robotic monarch butterflies in various states of motion. They were built to respond to seismic frequencies in Mexico. Every day, a new data file is sent from that country to Toronto and uploaded to control the movement of the butterflies. The piece is meant to bring attention to the plight of the unique species that migrates between the two countries. The artwork led me to see the potential for using data visualisation to make impactful statements about the world.

Image from Just Landed

I then made the connection to an example we had seen in class. Just Landed by Jer Thorp shows real time air travel patterns of Twitter users through a live map. The Canadian artist, now based in New York, used Processing, Twitter and MetaCarta to extract longitude & latitude information from a query on Twitter data to create this work.

Image from Listen and Repeat

Another inspiration was Listen and Repeat by American artist Rachel Knoll, a piece featuring a modified megaphone installed in a forest that used text-to-speech software to enunciate tweets labeled with the hashtag “nobody listens”.

As I wanted to make a project closer to my politically engaged artistic practice, Twitter seemed a promising way to obtain live data that could then be presented on a screen. Of course, that immediately brought to mind one of the most prolific and definitely the most infamous Twitter users: Donald Trump. The trumpet then seemed a fitting controller, both semantically and in its nature as a brash and bold instrument.


Step 1: Getting the Twitter data

Determining how to get the Twitter data required quite a bit of research. I found the Twitter 4J library for Processing and downloaded it, but still needed more information on how to use it. I happened upon a tutorial on British company Coda Sign’s blog about Searching Twitter for Tweets. It gave an outline of the necessary steps along with the code. I then created a Twitter developer account and got the required keys to use their API in order to access the data.

Once I had access to the Twitter API, I adjusted the parameters in the code from the Coda Sign website, modifying it to suit my needs. I set up a search for “@realDonaldTrump”, not knowing how much data it would yield and was pleasantly surprised when it resulted in a steady stream of Tweets.

Step 2: Programming the interaction

With that code running in Processing, I set it up to receive data from the Arduino. I programmed three switches, one for each valve of the trumpet, and used Nick’s code to send the gyroscope and accelerometer data to Processing in order to determine which readings were the most pertinent and what the threshold should be for each parameter. The idea was that the gyroscope data would trigger sounds when the trumpet was moved, while the three trumpet valves would manipulate the tweets on screen with various effects on the font of the text.
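The Arduino-to-Processing link described above comes down to parsing one line of comma-separated serial values per frame. A minimal Java sketch of that parsing step, assuming a hypothetical line format of three valve states followed by three gyroscope readings (the format and names are my assumption, not the project’s actual code):

```java
import java.util.Arrays;

// Hypothetical serial line format: three valve switches (0/1) followed by
// gyroscope x,y,z — e.g. "1,0,0,0.02,-1.43,0.88"
public class SerialParser {
    public static boolean[] valves = new boolean[3];
    public static float[] gyro = new float[3];

    // Split one serial line into switch states and gyro readings.
    public static void parseLine(String line) {
        String[] parts = line.trim().split(",");
        for (int i = 0; i < 3; i++) {
            valves[i] = parts[i].equals("1");
        }
        for (int i = 0; i < 3; i++) {
            gyro[i] = Float.parseFloat(parts[i + 3]);
        }
    }

    public static void main(String[] args) {
        parseLine("1,0,0,0.02,-1.43,0.88");
        System.out.println(Arrays.toString(valves)); // [true, false, false]
        System.out.println(gyro[1]);                 // -1.43
    }
}
```

In Processing, the same parsing would sit inside serialEvent(), with the Arduino printing one such line per loop.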

I soon hit a snag: at first it seemed that Processing wasn’t getting any information from the Arduino. Looking at the code, I noticed several delay commands at various points. I remembered Nick’s warning about how problematic the delay command is and realized that this, unfortunately, was a prime example of it.

I knew the solution was to program the intervals using the millis() function. I spent a day and a half attempting a fix but failed, and needed Kate Hartman’s assistance to solve the issue. I had also discovered that the Twitter API would disconnect me if I ran the program for too long. I had to test in fits and starts, sometimes finding myself unable to get any Twitter data for close to an hour.
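The millis() fix amounts to comparing elapsed time on every pass of the loop instead of blocking in delay(). A small Java illustration of the pattern (the time source is passed in so the logic is self-contained; the 500 ms interval is arbitrary):

```java
// Non-blocking interval timing, the millis() pattern from Arduino/Processing:
// instead of delay(500), check on every loop pass whether 500 ms have elapsed
// since the last firing, and only then act.
public class IntervalTimer {
    long previous = 0;
    final long interval;

    IntervalTimer(long intervalMs) { this.interval = intervalMs; }

    // Returns true once per elapsed interval, then rearms itself.
    boolean ready(long nowMs) {
        if (nowMs - previous >= interval) {
            previous = nowMs;
            return true;
        }
        return false;
    }
}
```

On the Arduino side the same shape appears as `if (millis() - previousMillis >= interval) { ... }`, which lets serial reads and writes keep running between firings.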

I attempted to program some visual effects on the tweets that would be triggered by the valves, but had difficulty isolating a single tweet: each effect carried over to all subsequent tweets. Also, given that the controller was a musical instrument, sound felt like a better-suited output than visuals. At first, I loaded cheers and boos from a crowd that users could trigger in reaction to what was on screen, but I finally settled on some Trump clips, as it seemed natural to use his very distinctive voice. It was fitting both because he takes to Twitter to make official declarations and because of the horn’s long history as an instrument announcing the arrival of royalty and other VIPs.

As the clock was ticking, I decided to work on the trumpet and return to working on the interaction when the controller was functional.

Step 3: Hacking the trumpet

Trumpet partly disassembled

I was fortunate to have someone lend me a trumpet. I disassembled all the parts to see if I could make a switch that would be activated by the piston valves. I soon discovered that the angle from the slides to the piston valves is close to 90 degrees, and given the small aperture connecting the two, running a switch through it would be nearly impossible.

Trumpet parts
Trumpet valve and piston
Trumpet top of valve assembly without piston

The solution I found was taking apart each valve piston while keeping the top of the valve, and replacing the piston with a piece of cut Styrofoam. The wires could then come out through the bottom casing caps and connect to the Arduino.



I soldered wires to the three switches and then carefully wrapped the joints in electrical tape.

Arduino wiring


A cardboard box was chosen to house a small breadboard. Holes were made so that the bottoms of the valves could be threaded through, and the lid of the box could be secured to the trumpet using the bottom casing caps. Cardboard kept the instrument light and as close as possible to its normal weight and balance.

Finished trumpet/controller

Step 4: Programming the interaction, part 2

The acceleration on the Y axis was chosen as the trigger for the trumpet sound. But given the imbalance in the trumpet’s weight, it tended to trigger a rapid succession of the sound before stopping, and raising the threshold didn’t help. With little time left, I programmed the valves/switches to trigger short Trump clips. I would have loved to accompany them with a visual distortion, but the clock ran out before I could find something appropriate and satisfactory.
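In hindsight, the rapid retriggering is better tamed with a cooldown than with a higher threshold: once the Y acceleration crosses the threshold and the sound fires, further crossings are ignored for a fixed window. A Java sketch of that gate, with placeholder threshold and cooldown values (not the project’s actual code):

```java
// Threshold trigger with a cooldown window: after one firing, further
// crossings are ignored until cooldownMs has elapsed. Values are illustrative.
public class MotionTrigger {
    final float threshold;
    final long cooldownMs;
    long lastFired;

    MotionTrigger(float threshold, long cooldownMs) {
        this.threshold = threshold;
        this.cooldownMs = cooldownMs;
        this.lastFired = -cooldownMs; // allow the very first crossing to fire
    }

    // Call once per frame with the latest Y acceleration and current time.
    boolean fire(float accelY, long nowMs) {
        if (Math.abs(accelY) >= threshold && nowMs - lastFired >= cooldownMs) {
            lastFired = nowMs;
            return true;
        }
        return false;
    }
}
```

A cooldown of roughly the clip’s length would let the trumpet sound play out fully before the motion could trigger it again.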


My ideation process is slow, and that was definitely a hindrance in this project. I attempted something more complex than I had originally anticipated, and the bugs I encountered along the way made it really difficult. One of the things I struggle with when coding is knowing when to persevere and when to stop. I spent numerous hours debugging at the expense of sleep and, in hindsight, it wasn’t useful. It also feels like the end result isn’t representative of the time I spent on the project.

I do think though that the idea has some potential and given the opportunity would revisit it to make it a more compelling experience. Modifications I would make include:

  • Adding a number of Trump audio clips and randomizing which one each valve triggers
  • Building a sturdier box to house the Arduino so that the trumpet could rest on it, and considering attaching it to some kind of stand that would partly control its movements
  • Adding video or a series of changing photographs as a background to the tweets on the screen, and making them react to the triggering of the valves
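The first modification above is straightforward to sketch: hold several clip file names per valve and pick one at random on each press. The file names below are placeholders, not actual project assets:

```java
import java.util.Random;

// Picks a random audio clip for a given valve (0-2).
// Clip file names are placeholders, not the project's actual assets.
public class ClipPicker {
    static final String[][] CLIPS = {
        {"trump_a1.wav", "trump_a2.wav"},
        {"trump_b1.wav", "trump_b2.wav"},
        {"trump_c1.wav", "trump_c2.wav"},
    };
    final Random rng;

    ClipPicker(long seed) { rng = new Random(seed); }

    String pick(int valve) {
        String[] options = CLIPS[valve];
        return options[rng.nextInt(options.length)];
    }
}
```

In Processing, the returned file name would be handed to a sound library’s player on each valve press.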

Link to code on Github:  https://github.com/nvalcin/CCassignment3


Knoll, Rachel. “Listen and Repeat.” Rachel Knoll – Minneapolis Filmmaker, rachelknoll.com/portfolio/listen-and-repeat. Accessed October 31, 2019.

Thorp, Jer. “Just Landed: Processing, Twitter, MetaCarta & Hidden Data.” blprnt.blg, May 11, 2009. blog.blprnt.com/blog/blprnt/just-landed-processing-twitter-metacarta-hidden-data. Accessed October 25, 2019.

“Fernando Palma Rodríguez at 259 Lake Shore Blvd E.” Toronto Biennial of Art. torontobiennial.org/work/fernando-palma-rodriguez-at-259-lake-shore/. Accessed October 24, 2019.

“Processing and Twitter.” CodaSign, October 1, 2014. www.codasign.com/processing-and-twitter/. Accessed October 24, 2019.

“Trumpet Parts.” All Musical Instruments, 2019. www.simbaproducts.com/parts/drawings/TR200_parts_list.jpg. Accessed November 2, 2019.