Digi-Cart 1.0

Experiment 3 – Katlin Walsh

Project Description 

While interactive media content displayed within galleries has been updated within the last 5-10 years, presentation formats for tradeshows have not. Digi-Cart brings an adaptive presentation style to the classic concept of a tool cart. Robust building materials and pegboard construction allow corporations to adapt their layout and presentation style to reflect their current corporate event. 

Digi-Cart features a basic controller layout which can be overlaid with a company’s vinyl poster cutout to create an interactive presentation that can be facilitated by an expert or self-guided. Corporations are encouraged to update their digital materials & create animated graphics to capture audience attention.


Experiment 3: Toil & Trouble

1.0 Requirements
The requirement for this project was the inclusion of a tangible or tactile interface for a screen based interaction, with a strong conceptual and aesthetic relationship between the physical and the events happening on the screen.

2.0 Planning & Context

For this project I explored four input methods of physical interaction that would affect an onscreen display generated using Processing.js:
– A push button highly integrated into the design and structure of the piece
– Tactile switch using copper wire to open and close a circuit
– Magnetism and Hall Effect Sensors to return a digital signal when in the presence of magnetism
– Elapsed time within a state

For the screen output in Processing.js, I demonstrated:
– Animation – using sprites & redrawn still images
– Video
– Sound
– Images
– Text
– Drawing Simple Shapes

The Processing output is presented via a link to an iPad on the front of the piece, ideally hiding away the interface with the computer. In initial testing, an iPad Pro was used with SideCar to make the connection to Processing seem magical. Unfortunately, the iPad Pros in the AV Room have been out of commission for the past few weeks, so the tethering had to be done via USB and Duet Display. Wireless Duet Display connectivity was possible, but proved unreliable in testing. The computer was hidden under the table.

This project was built during Halloween, and features the process of casting a spell to create a dragon (my Chinese Zodiac sign and favourite mythical creature). The altar is switched on, a grimoire opened, and ingredients added one by one to the cauldron; after the last item is added, the cauldron boils furiously and settles. The spell is then cast over the cauldron using a hex bag, and a dragon emerges. Each physical movement by the caster triggers a switch.

It is meant to be a playful piece, performed by a single person to an audience as entertainment. The process can then be repeated by observers interested in how it works, what the switches are, or in the animations on screen.

This project plays with hidden switches, the aim being to make the magic seem magical and not reveal exactly where the switches are or how they work. There is a bit of mystery behind what is triggering the changes, and few actual buttons are pressed or manipulated.

I initially made prototypes using velostat, light sensors, electromagnets, reed switches, and servos, but found the velostat to be unreliable for my purposes, the electromagnets too weak to lift items, the reed switches too fragile when manipulated, the light sensors fickle (even when averaging the current light on each start-up), and the servos not powerful enough to achieve the desired effect. (The iterations can be found in the code.)

Planning Details
> Sensors and actuators
(Digital) Push button -> push-down button; turns on heat, turns on the array of orange lights underneath, and turns levitation on/off
(Digital) Push button -> ingredients reset button
(Analog) Tactile button x 4 -> each ingredient completes a circuit; tap a few times before removing
(Digital) Magnetic switch -> hex bag with magnets to trigger the incantation
(Digital) Lights -> triggered at various stages

> Casting modes
-1 (Pre-activation)
0 (Activated)
1 (Ingredient 1)
2 (Ingredient 2)
3 (Ingredient 3)
4 (Ingredient 4)
5 (Stew Ready)
6 (Spell)
7 (Dragon)
8 (Off)

Note: Although two Arduinos are pictured, only one was used. The other was kept in to allow for rapid switching in the event of failure, which happened shortly before the presentation. Ultimately the Mega was used, as the “Uno” (actually a knockoff Arduino) began to fail unexpectedly. I found out afterwards that its not being a genuine Arduino was the cause of the intermittent failures in this experiment, and also in previous experiments where we relied on it. It was too late for the other experiments, but just in time for this one.

3.0 Implementation


Cauldron with ingredients, on stove and “wood” (actually small strips of acrylic).

3.1 Software & Elements

3.1.1 Libraries & Code Design

Code: https://github.com/jevi-me/CC19-EXP-3

As this was a solo experiment, I took the opportunity to explore the software design capabilities within Arduino. I wrote and adapted various classes for use within the code, and tried coding in a more object-oriented way, purely for experimental purposes. I started off with quite a number of borrowed and written classes and libraries (over 10 at one point), but as features were cut, repurposed, or shifted, the number of imported libraries dropped to 3. The experience was very fulfilling, however, and exposed me to a different way of communicating with the device.

Arduino Loop Method:
A) Wait for Activation button to be triggered. If the stove isn’t on, nothing works.
– turns on stove lights, turns on altar lights
– change bool
– set state
– send state
B) Check for a change in pressure on the ingredient bases (& if activated)
– change lights
– change bool
– set state
– send state.
C) If all ingredients are added, stew is ready after boiling for a few seconds
– change lights
– change bool
– set state
– send state
D) Hall switch triggered (& if stewready)
– change lights
– change bool
– set state
– send state
X) Sync the info with Processing periodically
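The loop above amounts to a linear state machine over the casting modes listed in the planning section. A minimal sketch of the transition logic (the function and argument names are my own, not taken from the project's code, and the timed transitions are collapsed into single steps):

```cpp
// Casting modes, mirroring the list in the planning section.
enum Mode {
  PRE_ACTIVATION = -1,
  ACTIVATED = 0,
  INGREDIENT_1 = 1,
  INGREDIENT_2 = 2,
  INGREDIENT_3 = 3,
  INGREDIENT_4 = 4,
  STEW_READY = 5,
  SPELL = 6,
  DRAGON = 7,
  OFF = 8
};

// Advance through the spell: nothing works until the stove is on,
// ingredients advance the state one at a time, and the hex bag
// (hall switch) only counts once the stew is ready.
Mode nextMode(Mode current, bool stoveOn, int ingredientsAdded, bool hallTriggered) {
  if (!stoveOn) return PRE_ACTIVATION;               // A) stove off: nothing works
  if (current == PRE_ACTIVATION) return ACTIVATED;   // A) activation button
  if (current >= ACTIVATED && current < INGREDIENT_4 &&
      ingredientsAdded > current)
    return static_cast<Mode>(current + 1);           // B) next ingredient
  if (current == INGREDIENT_4) return STEW_READY;    // C) after boiling delay
  if (current == STEW_READY && hallTriggered) return SPELL;  // D) hex bag
  if (current == SPELL) return DRAGON;               // after the incantation
  return current;
}
```

On each transition the real sketch would also change the lights, update the flags, and send the new state to Processing, as the loop outline describes.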

On the Processing side, I was eager to explore as many of the available output features as possible. I developed methods that allow for animation using custom-made and premade sprites or still images running at different frame rates with loop/no-loop variables, videos at different locations triggered to start based on input from Arduino, and sound that started, looped, changed volume, and stopped based on the state of the system. I also used text and the drawing functions of Processing. As with Arduino, the aim was to explore as much of what Processing.js had to offer.

In both Arduino and Processing, I explored the use of fail-safe methods that allow the intertwined systems to fail gracefully and return helpful feedback to the user. Though not a requirement of this project, it was, again, an exploration of the capabilities. Examples include checking the state of the serial port, and the status of the USB tether between Arduino and Processing. In both cases, the system fails gracefully and provides helpful hints on screen and/or in the console to guide correction.

3.1.2 Sound Files
Various sounds were used throughout:
– Background music played before the grimoire is opened
– Cauldron boiling sound — there are two types depending on the reaction in the cauldron taking place, and they increase with intensity as the ingredients are added
– “Double Double Toil and Trouble” from Shakespeare is read during the spell casting
– Background music played when the dragon is successfully created

3.1.3 Videos
Four videos are used in this project. The videos were placed at different locations on the screen when triggered, acting as design flourishes. They were sourced from an Instagram video pack, then scaled, cleaned up, and formatted appropriately for use within Processing. The videos serve as the lowest layer of the elements on the screen.

3.1.4 Animation – Sprites
The bunny animation in the lower right corner reacts to the state of the cauldron: each new action performed by the caster, or in some cases a timed reaction, changes the animation of the bunny. It can sleep, run, sit, or rise and lower itself to different levels in response to the caster. The animation uses carefully named and numbered sprites and a customised method to cycle through them, drawing each to the screen and looping back to the beginning of the list if the animation cycles within the state. Since the frame rate that Processing draws at is fixed, to control the frame rate of the animations and save on the number of sprites used, I wrote a frame skipper that allows a number of Processing frames to pass without the animation being redrawn.
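The frame skipper described above can be sketched as a small counter that only advances the sprite index every few drawn frames (a hypothetical reconstruction, not the project's actual code):

```cpp
// Frame skipper: advance the sprite index only every `skip`-th drawn frame,
// so an animation can run slower than the sketch's fixed frame rate without
// needing more sprite images.
struct FrameSkipper {
  int skip;            // drawn frames per sprite frame
  int numSprites;      // sprites in the cycle
  int frameCount = 0;  // frames seen since the last sprite change
  int spriteIndex = 0; // current sprite

  // Call once per draw(); returns the sprite to show this frame.
  int update() {
    if (++frameCount >= skip) {
      frameCount = 0;
      spriteIndex = (spriteIndex + 1) % numSprites;  // loop back to the start
    }
    return spriteIndex;
  }
};
```

With `skip = 3`, each sprite is drawn for three Processing frames before the cycle moves on, so a 12-sprite cycle lasts 36 frames.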

The cauldron sprite images were made by taking screenshots of a 3D image of three similarly designed cauldrons, as I rotated them around at different angles. As with the bunny animation, they were carefully named and numbered to allow the cauldron to spin, and different versions of the cauldron were drawn based on the stage of the spell.

3.1.5 Animation – Still Images
The grimoire can be open or closed, and is animated to mimic levitation using a simple oscillating redraw on the y-axis.

The dragon uses the same oscillation on the y-axis to mimic flight, and only appears once all the previous steps have been completed. This information is received from the Arduino, which controls the logic.

3.1.6 Images
Representations of the physical ingredients appear on screen when they are added to the cauldron, and are listed all together at the beginning. The position and visibility of the images are tied to the physical interaction taking place.

3.1.7 Text and Drawing
I used text in various places to indicate the state of the spell and explain what is happening. There is also a small indicator on the bottom left that shows what stage the spell is in.

3.2 Hardware
Copper tape and alligator clips make up the main tactile interaction. The circuit is completed when the ingredient is in its place in the storage basin of the piece. On its removal, the switch is “released”, as the circuit is opened. This open circuit triggers actions processed by the Arduino and sent to Processing.

Another switch detects a magnetic field. A circle of “diamonds” marks the spot where the hex bag is to be placed. The hex bag contains 4 neodymium magnets which create a magnetic field that is registered by a Hall detector just under the wood.

The other types of switches are ordinary push buttons, soldered and positioned flush to the surface of the piece so they blend into the design, and time-based “switches” which trigger after a predetermined time has elapsed, e.g. during boiling and during incantations.
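Reading an ingredient switch reliably usually calls for a little debouncing, since copper-tape contacts are noisy. A hypothetical sketch of the open-circuit detection (the names and the debounce window are my assumptions, not the project's code):

```cpp
// With the input pulled up, a closed copper-tape circuit reads LOW and
// lifting the ingredient opens the circuit (HIGH). A small debounce window
// filters contact noise before the removal is reported.
struct IngredientSwitch {
  bool lastStable = false;  // true = ingredient removed (circuit open)
  int counter = 0;
  static const int DEBOUNCE_READS = 4;  // consecutive readings required

  // Feed one raw reading per loop(); returns true only on the edge where
  // the ingredient is confirmed removed.
  bool justRemoved(bool circuitOpen) {
    if (circuitOpen == lastStable) { counter = 0; return false; }
    if (++counter >= DEBOUNCE_READS) {
      counter = 0;
      lastStable = circuitOpen;
      return lastStable;  // report only the open (removed) transition
    }
    return false;
  }
};
```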

Other things of note in the build are the ingredients: some real and natural, like tulip bulbs and spices, and others symbolic, like skeletons and eyes. Ingredient list: eyes, pumpkin, magic, winged bats, tulip bulb, mushrooms, skeleton of a snake (the word ‘dragon’ comes from ‘draco’ and ‘drakon’, meaning ‘large serpent’ or ‘sea serpent’), orb, candle. These were purchased at Dollarama during the Halloween season.

Lights were used as a reference to what was happening on screen, mimicking a magical altar and a flame below the stove under the cauldron.

The altar was laser cut and featured steampunk gears and a skeleton cup which held the “dried blood of lizard”. The stove, also laser cut, used wall screws and brackets to add height, and had acrylic strips to mimic wood in the fire pit.

4.0 Reflections

The piece is meant to be a performance using sleight of hand and hidden switches, but it also encourages others to want to try it and make a dragon as well. This then requires revealing the switches, since it isn’t very obvious how things work. Another iteration of this could be a more DIY version with better labelling; that would make it a system for others to use, rather than the performance piece this was created as. In earlier testing, I received the comment that it could be cool if different ingredients and patterns yielded different creatures. That would require more complex code, but is possible as yet another iteration of this project.

5.0 Additional Photos

6.0 Literature References (MLA Cited)
Shakespeare, William, and A. R. Braunmuller. Macbeth. Cambridge University Press, 1997.
Double, Double Toil and Trouble – Shakespeare, http://www.potw.org/archive/potw283.html.
“DRAKON AITHIOPIKOS.” ETHIOPIAN DRAGON (Drakon Aithiopikos) – Giant Serpent of Greek & Roman Legend, https://www.theoi.com/Thaumasios/DrakonesAithiopikoi.html.

6.1 Additional Reference Links (Linked)
Envato Elements (elements.envato.com) – for some of the sounds, videos, and graphics.
Processing.js Guide (processingjs.org) – for part of animation code
Circuito.io (circuito.io) – for libraries. Licenses included in the source code.

Skull Touch

S K U L L  T O U C H – An interactive skull


An interactive skull that reacts to people’s touch and produces spooky sounds at different amplitudes and frequencies.

Kate Hartman & Nick Puckett


This project explores different states of touch, such as no-touch, one-finger touch, two-finger touch, and grab. Capacitive touch makes it possible to detect these different states.
When I think of the term “tangible interface”, the first thing that comes to mind is a tactile interface that anyone can feel, touch, and interact with. Why a scary theme? It was Halloween time, hence the spooky skull.

Github link: https://github.com/Rajat1380/SkullTouch


I started with the idea of using a pressure sensor and a touch sensor, then realized that only two states could be obtained, and I was not satisfied with that. So I started looking for ways to get more outputs.
I learned about capacitive touch sensing, through which any surface can become a sensor. After watching this video by DisneyResearchHub, I realized this was what I had been trying to do initially. Disney developed its own hardware to detect the different touches, and the information is not open to the public. I then started looking for alternatives and found StudioNAND/tact-hardware, which provides all the information regarding the capacitive sensor. I have always wanted to work with audio and ways to control it with different input methods, and this project gave me the push to go for it. As an input device, I chose the skull, as this project was happening around Halloween.

The Process

I started with the circuit setup and code to get capacitance values for the different touches. The parts list provided by StudioNAND to make a low-budget capacitive sensor is given below.

1× 1N4148 diode
1× 10 mH coil
1× 100 pF capacitor
1× 10 nF capacitor
1× 3.3 kΩ resistor
1× 10 kΩ resistor
1× 1 MΩ resistor

I got all the components except the 10 mH inductor; I used a 3.3 mH inductor instead of the recommended one and proceeded with the circuit setup.
I had to install the TACT library for Arduino and Processing to run the code.

Arduino Circuits



Prototype 1



The output was very distinctive between no-touch and one-finger touch, but not between one-finger and two-finger touch. I figured this out very late in my project. It was happening because of the 3.3 mH inductor: the range I was getting was too narrow to distinguish the touches. I tried to make a 10 mH inductor with a hollow cylinder and copper wire, but I was not able to get near 10 mH, so I proceeded with the current code.
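Distinguishing touch states comes down to carving the capacitance signal into separable ranges, which is exactly what the narrowed range from the 3.3 mH inductor made difficult. A hypothetical classifier with invented thresholds, for illustration only (the real readings and cutoffs would depend on the circuit):

```cpp
// Possible touch states for the skull.
enum TouchState { NO_TOUCH, ONE_FINGER, TWO_FINGER, GRAB };

// Classify a smoothed capacitance reading into a touch state using fixed
// thresholds. The thresholds are invented for illustration; with the
// 3.3 mH coil the real ranges overlapped, which is why one- and two-finger
// touches were hard to separate.
TouchState classifyTouch(int reading) {
  if (reading < 100) return NO_TOUCH;
  if (reading < 300) return ONE_FINGER;
  if (reading < 600) return TWO_FINGER;
  return GRAB;
}
```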

As I was planning to control audio with the skull as an input device through different modes of touch, I jumped directly into Processing to couple the capacitance value with the amplitude and frequency of a scary sound, to give a spooky experience to the user.

Final Setup


Challenges & Learnings

  1. Libraries, and how and where to install them, were a hard lesson in this project. The TACT library was created in 2014 and there is no support for newer Arduino microcontrollers, so I had to use an Arduino UNO.
  2. I was getting noise in the capacitance output even for no-touch. I tweeted the developer of the TACT library and got a reply from him. He gave me access to a tool for controlling sound via microphone input. I still find this hard to understand; someday I will.
  3. Getting components was hard, as I was not able to get all of them. I tried to craft the inductor myself, and it was not successful. It’s part of learning, and it’s difficult to accept the dead ends.


StudioNAND, 2014 tact-hardware
28th Oct, 2019 [https://github.com/StudioNAND/tact-hardware]

Tore Knudsen, 2018 Sound Classifier tool
29th Oct, 2019 [http://www.toreknudsen.dk/journal/]

Tore Knudsen, 2018 Capacitive sensing + Machine Learning
29th Oct, 2019 [http://www.toreknudsen.dk/journal/]

Tore Knudsen, 2017 SoundClassifier
29th Oct, 2019 [https://github.com/torekndsn/SoundClassifier]

Tore Knudsen, 2017 Sound classification with Processing and Wekinator
29th Oct, 2019 [https://vimeo.com/276021078]

DisneyResearchHub, 2012 Botanicus Interacticus
30th Oct, 2019 [https://www.youtube.com/watch?v=mhasvJW9Nyc&t=49s]




Musical Instrument Without Touch

Jignesh Gharat

Project Description:

Hover Beat is an interactive musical instrument, played without touch or physical contact, installed in a controlled environment with a constant light source. The project aims to explore interactions with a strong conceptual and aesthetic relationship between the physical interface and the events that happen in the form of audio output.


A project’s potential radius of interaction is usually determined by technical factors, be it simply the length of a mouse cord or the need for proximity to a monitor used as a touch screen, the angle of a camera observing the recipient, or the range of a sensor. However, the radius of interaction is often not visible from the outset—especially in works that operate with wireless sensor technology.

In this project, the attempt is not to mark out the radius of interaction or spatial boundaries at all, so that they can be experienced only through interaction.

Explorations & Process:

I started exploring different sensors that could be used to control sound: a sound detection sensor, a flex sensor, capacitive DIY sensors using aluminum foil, and pressure sensors, finally ending up with a light sensor to make the interaction invisible. The user doesn’t see or understand how the instrument actually works, which opens many possibilities to interact with the object; they explore and learn while interacting.


Flex Sensor | Arduino Uno or Arduino Nano


Light sensor LDR | Arduino Uno | Resistor


Using a glass bowl to calibrate the base sensor reading in the DF 601 studio environment at night.

How does the sound actually change?

The data coming from the Arduino and the LDR is used in Processing to control the playback speed and amplitude of the sound. A steady reading, taken from the constant light level detected by the LDR, is used as a benchmark.
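The mapping from LDR reading to sound parameters can be sketched as a linear remap against the benchmark, in the style of Processing's map() function (the output ranges and the direction of the effect are my assumptions, not taken from the project code):

```cpp
// Linear remap, equivalent to Processing's map() function.
float mapRange(float v, float inLo, float inHi, float outLo, float outHi) {
  return outLo + (v - inLo) * (outHi - outLo) / (inHi - inLo);
}

// `benchmark` is the steady LDR reading under the room's constant light.
// Hovering a hand over the bowl lowers the reading; here that is assumed
// to speed up and quiet down the sound (illustrative ranges).
float playbackRate(float ldr, float benchmark) {
  return mapRange(ldr, 0.0f, benchmark, 2.0f, 1.0f);  // darker = faster
}
float amplitude(float ldr, float benchmark) {
  return mapRange(ldr, 0.0f, benchmark, 0.2f, 1.0f);  // darker = quieter
}
```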

Libraries in Processing: import processing.sound.*; ( For SimplePlayBack )
import processing.serial.*; ( import the Processing serial library )


I started experimenting with 8-bit sounds, vocals, and instruments. The sounds changed their playback rate and amplitude based on the amount of light received by the LDR sensor. To minimize noise and improve clarity in the ever-changing sounds, the best option was to work with clean beats, so I decided to work with instrumental music. The interactions mostly used hands, so I did some research on musical instruments that are played with the hands and make distinct, clear beats. The Indian classical instrument tabla, a membranophone percussion instrument, was my inspiration for the form and interaction gestures.


The form is inspired by the shape of the tabla. A semi-circular glass bowl defines the boundary, or a start point, for measuring readings and limits the interaction to a radius, as it uses the LDR sensor. The transparent material of the glass confuses users and makes them curious about how it works. The goal was to come up with a minimal, simple, and elegant product that is intuitive and responsive in real time.


Exhibition Setup:

The installation works only in a controlled environment where the light quality is constant, because a base reading is calibrated and used as a benchmark to change the sound.


Experiment Observations:

The affordances were clear enough. Because of the sound playing, users got a clue that the object had something to do with touching or tapping, but on interacting they quickly found that it is hovering at different heights over the object that manipulates the tabla sounds. People tried playing with the instrument. People with some technical knowledge of sensors were more creative, as they figured out it is the light that controls the sound.


Github – https://github.com/jigneshgharat/HoverBeat


The experiment has laid a foundation to develop the project further into a home music system that reacts to light quality and sets the musical mood; for example, if you dim the lights, the music player switches to an ambient, peaceful soundtrack. A new musical instrument can be made just by using DIY sensors at very low cost, with new and interesting interactions.




Touch: Graphite User Interface

Liam Clarke


Touch is an Arduino and Processing project that creates a simple computer with a touch interface.

The initial goal was to find ways to combine Processing and Arduino in a single interactive medium. The project is a touch screen using conductive paint, glue, and wires. A 10×5 grid was cut into an acrylic sheet, which acts as the conductive circuit within the screen. While the initial dimensions are simple, a much more complex grid can be built upon the current version. An Arduino Uno and the CapacitiveSensor library are used to register touches via the grid. The data is then sent to Processing, which performs the visual actions that create the illusion of an operating system. The screen is created by projecting onto the acrylic panel, where icons and features are mapped to the grid using MadMapper.
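Each registered touch has to be resolved to a cell on the 10×5 grid so the right icon can react. Assuming the sensors are indexed row-major (an assumption; the actual wiring isn't documented here), the mapping is a simple division and modulo:

```cpp
// A cell on the touch grid.
struct Cell {
  int col;
  int row;
};

// Map a sensor index to its cell on the 10x5 grid, assuming the sensors
// are numbered row by row, left to right (a hypothetical wiring order).
Cell cellForSensor(int index, int cols = 10) {
  return Cell{index % cols, index / cols};
}
```

Processing would then look up whichever icon MadMapper has placed at that cell and trigger its action.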

Touch screen technologies were researched to develop ways to combine Processing and Arduino. The most attractive was an infrared touch frame around glass. The hardware for this method, a contact image sensor, can be sourced from an average household printer. While this would be a visually clean method of sensing touch, a capacitive grid was chosen for simplicity and due to time constraints. In capacitive touch, sensors have an electrical current running through them, and touching the screen causes a voltage change. This sends a signal to a receiver, and the touch is registered in software.

The screen was made with an acrylic panel. A grid was cut into the panel, and the grooves were used to embed conductive material. Small holes were drilled into the panel to serve as touch points. These touch points were filled with conductive paint and protected with conductive glue. Tests were done to find the smallest groove and amount of conductive paint that could still trigger a signal on the send pin.


The frame of the computer is sourced from a broken microwave, gutted of its internal components. Different screens and casing shapes were tested; this setup and size were selected as the most adaptable for future expansions of the machine’s function. A large size helps improve screen resolution while back-projecting onto translucent material.


The code was created using Paul Badger’s CapacitiveSensor library. Each sensor connects to two digital pins on the Arduino: the send pin connects to a 1 MΩ resistor, which connects to the receive pin, and the sensor is connected between the receive pin and the resistor. Multiple sensors can share the same send pin, which helps when scaling the number of functions on one Arduino.


The Processing side of the software is based on reactive images to the touch input. Images were mapped out to the grid using the Syphon Processing library through a projector via MadMapper. MadMapper facilitates organizing the layout, removing the need for precise calibration within Processing in regards to image location.

The style direction was based on the original Macintosh for its simplicity and colours. The point of basing the design on a real operating system was to add to the illusion of a fully functional computer. The current build features an audio player with play and pause buttons, a simulated paint program, and a return-to-home function.




Pjrc.com. (2019). CapacitiveSensor Arduino Library with Teensy, for Capacitive Touch and Proximity Sensing. [online] Available at: https://www.pjrc.com/teensy/td_libs_CapacitiveSensor.html.

 GitHub. (2019). CapacitiveSensor. [online] Available at: https://github.com/PaulStoffregen/CapacitiveSensor.

Baxter, L. (1996). Capacitive sensors. John Wiley & Sons, p.138.





The Compass


Priya Bandodkar





“The Compass” is an experiment that uses interactions with a faux compass in the physical space to navigate in a virtual 3D environment. The exploration leverages affordances of a compass such as direction, navigation, travel to intuitively steer in a virtual realm. It closely emulates a VR experience sans the VR glasses.

Participants position themselves facing the screen and rotate the handheld compass to traverse an on-screen environment in directions mimicking the movement of the physical interface. Participants can turn the device, or even choose to turn around with the device for a subtle variation of the experience. This movement rotates the 3D sphere on the screen, creating an illusion of moving through the virtual space itself. The surface of the 3D sphere is mapped with a texture of a panoramic landscape: a stylised city scene of crossroads, bundled with characters and vehicles to complement the navigational theme of the experiment.

As a variation on the original concept, I embedded characters in the scene that participants need to search for, thus tapping into the ‘discovery’ affordance of the compass and creating a puzzle-game experience.


As a 3D digital artist, I have always been interested in exploring the possibilities of interactive 3D experiences in my practice. The introduction to Processing opened doors to the ‘interactive’ part of this. I was keen on playing with the freedom and limitations of incorporating the third dimension into Processing and studying the outcomes.

One of my future interests lies in painting 3D art in VR and making the interactions as tangible as possible. I am greatly inspired by the work of Elizabeth Edwards, an artist who paints in VR using Tilt Brush and Quill, creating astonishing 3D art and environments in the medium. I was particularly fascinated by her VR piece ‘Spaceship’, which was 3D painted using Tilt Brush. I set myself the challenge of emulating this virtual experience and controlling it with a physical interface more intuitive than a mouse.

It had to be a physical interactive object that helps you look around, mimicking a circular motion. Drawing parallels to the real world, I realised the compass has been one of the most archaic yet intuitive interfaces for finding directions and navigating real space, so I decided to leverage its strong affordance to complement the visual experience of my project. While building the interactivity, I realised how easy and effortless it became to comprehend and relate the virtual environment to your own body and space from the very first prototype, even more so because of physically controlling an intuitive handheld object in real space.


Studying the gyroscope and sending its data to Processing filled a crucial piece of the puzzle. It let me use the orientation information, with a few simple yet very useful lines of code, to bring in the anticipated interaction to a T.


I studied VR art installations such as ‘Datum Explorer’, which creates a digital wilderness from a real landscape and culminates in non-linear storytelling using elusive animals. This elicited the idea of incorporating identifiable characters in my 3D environment to add an element of discovery to the experience. I looked up games based on similar concepts, such as ‘Where’s Waldo?’, to calibrate the complexity of this puzzle-game idea. I used six characters from the Simpsons family and embedded them with a glitching graphic effect, hinting that they did not exactly belong in the scene and hence needed to be spotted.


To leverage the affordance of the compass, it was important to make it compact enough to fit in the hand and be rotated. I achieved this by nesting the microcontroller on a mini-breadboard within the fabricated wooden compass. I stuck to the archaic look for the compass to keep its design relatable for participants. While incorporating the puzzle-game aspect, I realised the design of the compass could be customised to hold clues related to the game, but I decided to let that go in this version, as the six-character puzzle was simple and straightforward enough for participants to solve.



To conclude, interaction with the compass in the physical world to control a virtual 3D environment came about intuitively for participants, and was successful. Some interesting interactions came up during the demo: participants decided to turn around with the compass held in hand, and some placed the compass near the screen and rotated with the entire screen to experience an emulation of ‘dancing with the screen’. The experience was also compared closely to VR, but without the VR glasses, making it more personal and tangible.




This is an exploration in continuum that I would like to build on using the following:

  • Layering the sphere with 3D elements, image planes in the foreground to create a depth in the environment.
  • Using image arrays that appear or disappear based on the movement of the physical interface.
  • Adding intricacies and complexities to the puzzle game by including navigation clues on the physical interface.


Edwards, Elizabeth. “Tilt Brush – Spaceship Scene – 3D Model by Elizabeth Edwards (@Lizedwards).” Sketchfab, Elizabeth Edwards, 1 Jan. 1967, https://sketchfab.com/3d-models/tilt-brush-spaceship-scene-ea0e39195ef94c9b809e88bc18cf2025.

“Datum Explorer.” Universal Assembly Unit, Wired UK, https://universalassemblyunit.com/work/datum-explorer.

“Interactive VR Art Installation Datum Explorer | WIRED.” YouTube, WIRED UK, https://www.youtube.com/watch?v=G7BaupNmfQU.

Ada, Lady. “Adafruit BNO055 Absolute Orientation Sensor.” Adafruit Learning System, https://learn.adafruit.com/adafruit-bno055-absolute-orientation-sensor/processing-test.

Strickland, Jonathan. “How Virtual Reality Works.” HowStuffWorks, HowStuffWorks, 29 June 2007, https://electronics.howstuffworks.com/gadgets/other-gadgets/virtual-reality.htm.

“14 Facts About Where’s Waldo?” 14 Facts About ‘Where’s Waldo?’ | Mental Floss, 20 Jan. 2017, https://www.mentalfloss.com/article/90967/14-facts-about-wheres-waldo.






Commuting Fun


Project Name
Commuting Fun by Jessie Zheng

Project Description
Commuting Fun is an interactive installation aiming to bring some fun to mundane day-to-day commuting and to take away the stress, anxiety, and even anger people may experience during rush hour. Aesthetically interesting visual patterns are projected onto the interior of the vehicle and can be altered through passengers’ actions on the train. Expected actions in a vehicle, such as sitting on a seat, grabbing the handles, and tapping on a ticket machine, generate unexpected changes in the visuals. Commuting Fun provides new possibilities to start people’s day with a fresh and relaxed mind. The project aims to utilize as much space for exploration and interaction as possible, encouraging passengers to move around within the vehicle and get active.


Video of Interaction

Ideation Process
I had been wanting to make an installation for this class for a long time, and this project finally gave me the chance. I looked up how interactive museums around the world incorporate different types of interactions. Many of them use graphics as decorative elements in the interactive environment, with parts of the graphics reacting to participants' physical interactions. Based on the same idea, I wanted to create decorative design patterns in Processing and project them onto all surfaces of the installation space, as if they were wallpaper.

To add more interesting and tangible pieces to my installation, I came up with ideas such as adding balloons and exercise balls, because their shapes echo the polka-dot pattern on the walls and could add to the aesthetic appeal. For example, one of the Nuit Blanche exhibitions I visited used balloons in the installation space, with lights around them to amplify the visual appeal. The balloons and balls could also serve as switches in the circuit, triggering changes in the projected polka dots when people pulled on the balloons or sat on and moved the exercise balls. The whole space would serve as a visually pleasing play space. I also thought about how participants could become part of the exhibition: if they all wore white, their clothes could become a canvas for the projected polka dots as well. However, given the limited timeframe for working out the logistics of wiring balloons and exercise balls into the circuit, I changed direction and implemented something that doesn't move much, making a stable and secure circuit less of a challenge.


Nuit Blanche Exhibition That Inspired Me

One day on my commute to school, I noticed that people interact with a lot of objects in daily life without paying much attention to them. Commuting alone involves many such interactions: tapping a machine to get on a streetcar, pressing the stop button to get off, grabbing handles to stay stable. These objects have been carefully and systematically designed and placed in a vehicle to ensure its functionality, and people are so used to them functioning in a certain way that they barely notice them. What if something unexpected happened when people interacted with a vehicle the way they normally do? Would they behave differently? Eventually I decided to recreate part of a subway car for my installation, with objects such as seats, handles and buttons acting as switches that change the way the graphics behave. The graphics are then projected back onto the subway set as an integral part of the space, so that the digital and physical spaces become a whole.

Project Context
Inspired by the article Tangible Bits: Beyond Pixels by Hiroshi Ishii, who favours the multiple layers of interaction in tangible user interfaces (TUIs) over the one-dimensional GUI interaction between users and a digital screen, this project takes on the challenge of exploring the interactive relationship between the physical and virtual worlds. An installation became the inevitable choice as the ideation process went on, because I did not want to confine participants' physical interactions to a single object, but to place multiple objects within the installation space for participants to walk around and discover.
Eventually I decided to recreate a TTC transit vehicle as the interactive space because of the number of tangible things to work with: stop buttons and wires, POP machines, seats, and so on. People can interact with things in the installation space without instructions or guidance, since most people have predefined ideas of how to interact with the objects in transit vehicles from their day-to-day commuting experience, which minimizes possible confusion. As the objects are spread out across the vehicle, they add another level of interaction that encourages broader movements of participants' bodies.
An effort has been made to keep the physical and virtual worlds inseparable. Having decided to use objects on the vehicle as switches controlling the graphics, the next challenge was how to integrate the graphics into the physical world organically. Coming from a Chinese background, I have seen themed subway trains in China, whose interiors are covered with decorative designs during certain festivals. Using the graphic interface as a decorative design projected onto the train became my focus: as people interact with different objects on the train, the design of the vehicle's interior changes accordingly.


A Themed Subway Train in Ningbo, China

However, because of technical limitations I was unable to map and project the decorative pattern onto all surfaces of the recreated vehicle, which led me to think of other projection methods. In China, for commercial purposes, a series of pictures is often put up in the tunnels outside the train; while the train moves at speed, the windows act as a frame for the animated commercials outside. For this project, the window area is used in a similar way to display the visual elements, which simplifies the projection setup while still letting the visuals be an integral part of the installation.

Drawn to the minimalist style of the polka-dot installations in the Japanese artist Yayoi Kusama's Infinity Mirrors exhibition at the AGO, I chose polka dots as the visual element of the decorative design. It strikes me how something so simple can still be extremely visually stimulating; sometimes simplicity speaks more to the audience than a compilation of elements. Yayoi Kusama's obsession with polka dots stems from a mental disorder rooted in her difficult childhood relationship with her mother; a therapist encouraged her to draw and paint as an outlet for her oppressive feelings. The statements her artworks make through carefully chosen colours and deliberately arranged compositions are truly powerful to viewers such as myself. I chose a playful and vibrant colour palette for the dots and background because I believe it could soothe people's anxiety during rush hour.

Work in Progress
I wrote the code using potentiometers at first, then replaced them with velostat pieces backed with aluminum foil, to make sure the code worked.
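The swap described above (potentiometers first, velostat pads later) works because both produce an analog reading that only needs normalizing and thresholding. Below is a minimal logic sketch in Python; the project itself used Arduino and Processing, and the 10-bit range and the 0.5 threshold here are illustrative assumptions.

```python
def normalize(raw, lo=0, hi=1023):
    """Map a raw 10-bit analog reading to 0.0-1.0, clamped."""
    value = (raw - lo) / (hi - lo)
    return max(0.0, min(1.0, value))

def is_pressed(raw, threshold=0.5):
    """A velostat pad under a seat cover reads a higher value when sat on;
    the threshold that counts as 'pressed' is an assumption."""
    return normalize(raw) >= threshold
```

Because the thresholding lives in software, swapping the potentiometer for a velostat pad only changes the wiring, not the logic.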


Once I decided to recreate the subway space, I jotted down a list of things that can be found in a Toronto subway car. I printed out posters and recorded the ambient sound in a subway on my way to school.
The most challenging part was making the chairs in our studio resemble subway seats. I found red fabric at Michael's with a similar look and feel to the fabric on a subway seat, and a silvery metallic adhesive film which I pasted and sewed onto the red fabric. This process was extremely time-consuming as I had no prior sewing experience; it took me almost two days to finish all three seat covers.


I also used cardboard boxes to recreate the yellow handles on the TTC subway. I initially intended to use them as switches as well, but couldn't find a good way to secure them in place while people pulled on them, so they became props that add to the look of the recreated subway space.
After taping all three seat covers onto the chairs, I put sensors under the covers and connected them to the circuit. I had people sit on the chairs to see if this created the desired changes in the graphics, and made adjustments accordingly.


However, after a few tests the aluminum foil started to crumble and break from people sitting on it repeatedly. This led me to seek an alternative conductive material that endures stretching a bit better: conductive fabric became the enhanced alternative.
Finally, the chairs were ready to go.

Final Look


Final Look Of The Installation


First Aid Box To Hide The Arduino


Polka Dots Projected Back On The Train


Handles To Grab On To

artjouer (2018) A Visit to TeamLab Planets Tokyo: Amazing Interactive Art. Available at: https://youtu.be/G6EtM1r0Eko (Accessed: November 4, 2019).

BRANDS OF THE WORLD (2018) Toronto Transit Commission. Available at: https://www.brandsoftheworld.com/logo/toronto-transit-commission (Accessed: November 4, 2019).

British Council Arts (2015) What is digital art? Available at: https://youtu.be/2RWop0Gln24 (Accessed: November 4, 2019).

CHINADAILY (2015) Another themed subway train runs in Ningbo. Available at: http://www.chinadaily.com.cn/m/ningbo/2015-04/22/content_20503269.htm (Accessed: November 4, 2019).

Grief, A. (2015) What Toronto’s highways would look like as a TTC map. Available at: https://www.blogto.com/city/2015/09/what_torontos_highways_would_look_like_as_a_ttc_map/ (Accessed: November 4, 2019).

Ishii, H. (2019) Tangible Bits: Beyond Pixels. Massachusetts: MIT Media Laboratory. Available at: https://zhenbai.io/wp-content/uploads/2018/08/4.-Tangible-Bits-Beyond-Pixels.pdf (Accessed: November 4, 2019).

O’Neil, L. (2019) The TTC is putting fare evaders on blast in a new ad campaign. Available at: https://www.blogto.com/city/2019/05/ttc-now-putting-fare-evaders-blast/ (Accessed: November 4, 2019).

Sinha, V. (2018) Yayoi Kusama: Her world of polka dots. Available at: https://www.thejakartapost.com/life/2018/09/06/yayoi-kusama-her-world-of-polka-dots.html (Accessed: November 4, 2019).

Tate (2012) Yayoi Kusama – Obsessed with Polka Dots | Tate. Available at: https://www.youtube.com/watch?v=rRZR3nsiIeA (Accessed: November 4, 2019).



By Masha Shirokova

CODE: https://github.com/MariaShirokova/Experiment3


My idea was to create a multi-sensory device which allows users to explore sense-crossing and experience at least three senses.

Play is a (musical? visual? tangible?) instrument with a multisensory interface: users can play sounds and create their own sound and visual compositions on the screen by interacting with tactile sensors. Each sound adds a visual animation over the background when played. Play enables users to make a whole “orchestra” out of pom poms, glasses of water and other non-musical objects, turning a palette into a rhythmic sequencer.

For now, the device consists of three tactile objects: a glass of water, a foldable paper button and a pompom button, which control three modes of visuals on the screen and three sounds. Further on, I would like to expand the number of objects and make the visual part more complex.

The device provides users with many performance possibilities. It can also be used for educational purposes, giving kids and adults a chance to interact with music in new and different ways.


Hearing smells or seeing sounds are examples of synesthesia – one of my main research interests. This experiment is my first attempt to create a multi-sensory object that helps users understand how tightly the senses are crossed and connected. In the case of Play, pushing or touching DIY buttons triggers sound and a colorful visual animation.

The aesthetic expression of synesthesia has a history reaching back to the paintings of Wassily Kandinsky and Piet Mondrian. It continued in the notation drawings of Cornelius Cardew, who literally drew his music onto notation schemes. Sometimes these were quite identifiable notes, but their duration and relative volume were to be determined by the performer. The culmination of this approach was his book “Treatise”, comprising 193 pages of lines, symbols, and various geometric or abstract shapes that largely eschew conventional musical notation. The simple grid of the board and screen interface was inspired by Mondrian's geometric abstract works, classical notation schemes and the short films of Oskar Fischinger. The screen grid is affected by the sound and turns into a sound wave that changes with the volume (amplitude) of the sound.

Wassily Kandinsky was capable of “hearing” colors, which is why he composed his famous “symphony pictures” to be melodically pleasing, combining sight, hearing, touch, and smell. Experimenting with perceiving the senses differently through such a device can therefore be a valuable exercise for developing imagination and creativity. In his compositions, circles, arcs and other geometric shapes seem to be moving, so I also used simple animated shapes and bright colors to keep a connection with this artist who experienced synesthesia.


Composition 8 by Wassily Kandinsky

Composition London by Piet Mondrian

Drawn notes from “Treatise” by Cornelius Cardew

Working on the sound part was a new experience for me, so I picked three different sounds: a rapid drum sound and two xylophone sounds. There is a Russian band, SBPCH (Samoye Bolshoe Prostoe Chislo), that plays electronic music based on simple but pleasant sounds of water, rain, glass or ping pong. I wanted to achieve the same effect with my choice of sounds – this is how I “hear” collapsing and growing shapes.

As for the tactile part, my goal was to make the tangible experience as diverse as I could, so I included a soft pom pom button, a paper button and a glass of water. Users experience something soft and colourful, something dry and solid, and something liquid – this is where the contrasts of touch, sound and visuals mix together.

I first saw the possibility of adding water to the circuit in a video by Adafruit Industries. Then I realized that they use different boards, based on capacitive touch, so I started looking for other methods of using water as a sensor. I added salt and it worked!



First week, I started with brainstorming some initial ideas for the project:

  • Shadow play
  • Use of bubble wrap
  • Game based on the principle of Minesweeper Online Game
  • Multi-sensory device

I decided to do the last one, as it represents my research interest and, hopefully, will be helpful for my thesis.

The code from the first class, provided by Kate, became the foundation for my project. I replaced the potentiometers with DIY sensors and added more detail to the Processing code (sound and visuals).


Circuit from the first class

Visual interface:


Interface sketches

For the grid, I used a soundwave (the same method we used in Echosystem for Experiment 1), which was affected by the amplitude of the sound.
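The amplitude-driven grid line can be sketched as follows. This is an illustrative Python reconstruction of the idea, not the original Processing code; the width, baseline, point count and wave depth are made-up values.

```python
import math

def wave_points(amplitude, width=400, baseline=200, n=40, freq=2.0):
    """Return (x, y) points for a horizontal grid line displaced into a
    sine wave whose height scales with the current amplitude (0.0-1.0).
    With amplitude 0 the line stays flat at the baseline."""
    points = []
    for i in range(n):
        x = i * width / (n - 1)
        phase = freq * 2 * math.pi * i / (n - 1)
        y = baseline + amplitude * 50 * math.sin(phase)
        points.append((x, y))
    return points
```

In the sketch the amplitude would come from the sound library's analyzer each frame, so the grid line visibly ripples with the music.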

  1. 3D rotating cubes for the starting screen, using the P3D renderer and rotation.


2. The first sensor activates the yellow square (its Y position was mapped to the sensor value) and the “play more” text.




3. The second sensor activates a static composition of a star and rectangles.



4. The third sensor activates text and a circle (its fill color was randomized: fill(0, random(0, 240), 255), and its Y position was also mapped to the sensor value). It also activates three more ellipses whose sizes change with frameCount at different rates, so they look like a water surface. The third sensor is also responsible for the sound wave.
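Two techniques recur in the list above: mapping a sensor value to a Y position, and oscillating ellipse sizes with frameCount. They can be sketched like this in Python, with stand-ins for Processing's map() and frameCount; all the constants are illustrative, not the sketch's actual values.

```python
import math

def processing_map(value, in_lo, in_hi, out_lo, out_hi):
    """Re-implementation of Processing's map(): linearly rescale a value
    from one range to another (used here for sensor value -> Y position)."""
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def ripple_size(frame_count, base=40, speed=0.05, depth=10):
    """Ellipse diameter oscillating with frameCount, like a water surface.
    Using different speed/depth per ellipse makes the ripples interleave."""
    return base + depth * math.sin(frame_count * speed)
```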




Two sensors are being activated


Figuring out the sound:

Wiring the potentiometers to the Arduino and writing the code for the three DIY sensors was simple. Working with multiple sounds, however, was a bit challenging. I first looked at the sound libraries available for Processing and found the Sound library and the Minim library. With two sounds it was convenient to use both, since sound files from the two libraries could be stopped and played independently. But when I added the third sound, it did not play. So instead of pausing sounds I changed their volume, using only the Sound library.
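The workaround described above can be modeled as a tiny mixer: all three files keep looping, and "playing" a sound simply means raising its volume while the others stay muted. The Python sketch below only illustrates the idea; it is not the Processing Sound library's API, and the sound names are placeholders.

```python
class SoundMixer:
    """Volume-based switching: sounds are never stopped, only muted,
    which avoids the stop/play conflict hit with three sound files."""

    def __init__(self, names):
        # every sound starts looping silently
        self.volumes = {name: 0.0 for name in names}

    def trigger(self, name):
        self.volumes[name] = 1.0  # make this sound audible

    def release(self, name):
        self.volumes[name] = 0.0  # mute it, but keep it looping
```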

Testing sounds


Combining sound+image

3 sounds

DIY sensors:

I was excited to work with different materials and provide users with very different experiences. In the beginning, inspired by a performance where the players used only fruit to make music, I wanted to use a lemon as one of the sensors. However, there were two “not enough”s – not enough voltage, and the lemon was not conductive enough. So I switched to squishy circuits and tested play-dough. It also was not very reliable, even though I tried two kinds of resistors; the play-dough sensor only worked as an on/off switch. I therefore came up with the idea of two buttons (paper and pom pom), which still offer contrasting materials and interactions. Furthermore, I still wanted to use a glass of water as a sensor. Although I did not manage to activate this sensor with the user's touch, I did make it work as a simple switch (the sensor reaches its maximum value when both clips are in the glass). Salt helped me a lot here by making the water more conductive.
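The salted water glass therefore behaves as a digital switch rather than an analog sensor: the reading only approaches its maximum when both clips are submerged. That detection is a one-liner; the 10-bit maximum and tolerance below are assumptions, not measured values.

```python
def water_switch(raw, max_value=1023, tolerance=50):
    """Treat the water-glass sensor as on/off: 'on' only when the
    analog reading is within tolerance of its maximum (both clips in
    the salted water)."""
    return raw >= max_value - tolerance
```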

I preferred copper tape to aluminum foil as the conductive material, as it was less flexible and more stable.

  • Lemon
  • Play-dough

  • Water

  • Fluffy Button & Paper Button


Assembling and making:

As I see this device as an open structure to which extra tangible objects can be added, I decided to keep the model's structure exposed to the public as well, and did not hide the wires connecting the button and the glass of water.

The tactile part consisted of a laser-cut board, the pom-pom button (fluffy balls were simply glued to a card with a copper piece covered with velostat, while the other copper piece was glued to the board), and the paper button (the sensor part was hidden inside, so pressing it allowed the two parts of the sensor to connect).

For the visual interface I laser-cut the board and a box where I could hide the breadboard.




The best part of this experiment is that my classmates were genuinely interested in interacting with the device and enjoyed the process of creating beats and melodies.

A few days after the presentation, I look at the project and wonder why I did not manage to make it more complex. I know I should have set up and edited the sounds so that the resulting melody would be better.

In general, I enjoyed working on this project, as I could actually play with my favorite materials: sound, image and tactile materials. However, since I set out to use all three senses, I did not have enough time to work on the quality of the sound. My further plan is to improve the Processing code by adding sounds and making the visual part more complex.

Another plan is to match the sound, visual and tactile parts to real data gathered from people who experience synesthesia. I believe this phenomenon can be very inspiring for other users. Even for those who do not experience mixed senses, the synesthesia vocabulary can serve as a source of inspiration, since colour and music associations are very poetic and metaphorical. Perhaps users shall develop their very own vocabulary of vision to experience art fully, and hopefully a future version of Play can help expand our sense experience.


16 Pineapples – Teplo. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=SimccVMCpv4.

Adafruit Capacitive Touch HAT for Raspberry Pi – Mini Kit – MPR121. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=Wk76UPRAVxI&list=PL5CF99E37E829C85B&index=130&t=0s.

An Optical Poem (1938) by Oskar Fischinger. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=_kTbt07DZZA.

Chen, P. A. (2016, November 15). How to add background music in Processing 3.0? Retrieved November 5, 2019, from https://poanchen.github.io/blog/2016/11/15/how-to-add-background-music-in-processing-3.0.

“Early Abstractions” (1946-57), Pt. 3 by Oskar Fischinger. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=RrZxw1Jb9vA.

Nelzya Skazat’ Koroche by SBP4. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=XIictPv-5MI.

Puckett, N., & Hartman, K. (2018, November 2). DigitalFuturesOCADU/CC18. Retrieved from https://github.com/DigitalFuturesOCADU/CC18/tree/master/Experiment%203

Swinging. (n.d.). Retrieved November 5, 2019, from https://works.jonobr1.com/Swinging.

Visual music. (2019, September 19). Retrieved November 5, 2019, from https://en.wikipedia.org/wiki/Visual_music.

Live VJ Performance Show

Project Title: Live VJ Performance Show

-An interactive live performance that explored audio visualization

By Jun Li – Individual Project



This project was a live performance show based on the concept of audio visualization: an 11-minute-long show for the audience, and my first attempt at being a VJ. All 8 of the different effects were generated and interacted with the music input in real time. It was built with Arduino controllers and TouchDesigner, with the video output projected onto the background. The purpose of this experiment was to create a very simple user interface with switches and sliders to manipulate the effects.

This dynamic experience allows every participant to become a ‘Visual Jockey’: they can operate and change each parameter of the audio, creating the energetic graphics in the background.

Keywords:  VJ, Live Performance Show, Audio Visualization, Interaction.


The goal of this experiment was to create a tangible or tactile interface for a screen-based interaction using Arduino and Processing. Since I came from a very similar undergraduate program called ‘Technoetic Art’ and had a lot of prior experience with this software, I am always eager to challenge myself technically and create fascinating projects. After discussing my idea with Kate and Nick, I received their permission to take the technical logic of Processing and apply it to TouchDesigner, thus retaining the knowledge of serial communication in both programs.


Music visualization refers to a popular form of communication that combines audio and visuals, with vision at its core and music as the carrier, using various new media technologies to interpret musical content through pictures and images. It provides an intuitive visual presentation technique for understanding, analyzing and comparing the expressiveness of the internal and external structures of musical art forms.

Vision and hearing are the most important channels through which we perceive the outside world; “watching” and “listening” are the most natural, direct and important means of recognizing it. In contemporary society, sound and image, hearing and vision, converge on shared aesthetic trends and dominate the aesthetic forms of mass culture. Vision makes it much easier for people to see and understand musical works and music culture, and people increasingly rely on visual forms to understand audio content. The applications of music visualization are very wide: live music, exhibition sites and other settings combine special imagery with customized immersive content to give people a strong visual impact while they enjoy the music.

Process & Techniques

Research question

First, I explored what parameters of audio can affect the visuals and how I could use them to manipulate the shape or transformation of the visual design. Audio visualization is usually based on the high, mid and low frequencies, the beat and the volume of the music, so one of the most important techniques is extracting these data from the audio itself and converting them into visuals.
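A common way to get those band parameters is to split an FFT magnitude spectrum into low, mid and high regions and track each region's average energy; TouchDesigner's audio-analysis operators work along these lines, though the sketch below is a generic Python illustration and the band cut points are arbitrary assumptions.

```python
def band_levels(spectrum, low_cut=0.1, mid_cut=0.5):
    """Split an FFT magnitude spectrum into low/mid/high bands (cut
    points given as fractions of the spectrum length) and return the
    average energy of each band, e.g. to drive three visual parameters."""
    n = len(spectrum)
    low = spectrum[: int(n * low_cut)]
    mid = spectrum[int(n * low_cut): int(n * mid_cut)]
    high = spectrum[int(n * mid_cut):]
    avg = lambda band: sum(band) / len(band) if band else 0.0
    return avg(low), avg(mid), avg(high)
```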

Arduino Implementation

Most of the project was built in TouchDesigner. The Arduino code was not difficult: it just sent the data of one switch and 8 sliders to TD over serial communication for manipulating the visual effects.

TouchDesigner Implementation

  1. How to generate visual effects in real time based on different parameters of the audio?
  2. How to use the data coming from Arduino to affect the visual effects generated in real time?
  3. How to arrange and switch between different visual effects, and what is key and special to a VJ performance?
  4. How to optimize the program to avoid affecting the real-time output?
  5. How to output the whole project (both visuals and sound) to large displays (2 monitors, 1 projection) and 2 speakers, so the VJ can monitor and change the effects in real time while showing them to the audience?

For the first and second questions, I created 8 effects utilizing different parameters of the audio.


1.1 – Audio analyzing (high, mid, low frequency)


1.2 – Utilize the data from Arduino to manipulate visual effects





1.3 – Opening introduction (particle writing system)



1.4 – 8 different visual effects (Initial state & Altered state, Same below)













For the third question, I used the data from the slider and added an additional black layer for switching and transitioning to the next effect. I also found that many VJs add a strobe effect when the music reaches its climax, so there was another additional layer driven by the low frequencies of the audio: it increased the strobing of the video, which enhanced the live atmosphere.
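The two extra layers can be modeled as a per-pixel composite: a slider-driven fade to black for transitions, plus a strobe whose intensity follows the bass energy. The Python sketch below illustrates the idea only; it is not the TouchDesigner network itself, and all ranges (0.0-1.0) and the even-frame flicker are assumptions.

```python
def composite(effect_pixel, fade, strobe_level, frame_count):
    """Combine one effect pixel (0-1) with a transition fade layer and
    a bass-driven strobe layer.

    fade:         transition slider, 0 = effect fully visible, 1 = black.
    strobe_level: current low-frequency energy, 0-1.
    """
    # fade toward black as the transition slider rises
    value = effect_pixel * (1.0 - fade)
    # strobe: on even frames, boost brightness by the bass energy
    if frame_count % 2 == 0:
        value = min(1.0, value + strobe_level)
    return value
```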


1.5 – The interface of switching 8 effects


1.6 – Different function in layers

For the fourth question: in the beginning I could, functionally, use a single slider's data to control all 8 visual effects. However, I realized this was a huge load for my laptop's GPU and CPU to process at once. So I separated the data into 8 sliders, one per effect, and tested carefully. Eventually it worked and the program was successfully optimized.


1.7 – The data flow in TouchDesigner


1.8 – Arduino interface

For the last question, I explored TD's output functions: they are powerful, letting creators build a very convenient interface to supervise the whole process and output it to any screen and speaker.


1.9 – Output to display

Challenge & Learning

  1. The timeline for this project was very short: we had to submit in 9 days, including ideation, coding, testing, debugging and the final setup. On top of that, I used a more difficult software, TouchDesigner. Although I had worked with TD before, I was not as familiar with it as with Processing and had little project experience, so I could barely get help or references from my classmates. It was truly individual work and quite challenging, which also made it exciting.
  2. Setting the goal of a live VJ performance show meant that too few visual effects would not be acceptable or powerful enough, visually or aurally. I am a perfectionist, so I had to create a show at least 10 minutes long, which added extra pressure. To achieve that, I kept testing and creating different visual effects, and eventually chose the 8 strongest of the 26 effects I had created in TD.
     2.1 – Reference library I created
  3. Processing the data and running these effects in real time was a very heavy job for my GPU and CPU and challenged my computer's performance; I came close to ruining my GPU and, unfortunately, did lose my 3.5mm audio output in the end. After coding, I had to work hard to optimize the program for a better, stable 60 FPS output, because my computer, especially the GPU, was not powerful enough. Even so, there were still some frame drops during the live performance.
  4. To my satisfaction, the project was successfully completed and the response from others was great. I achieved the goal I set at the beginning; compared with the initial plan, NOTHING had to be changed. I gained a lot of project experience and techniques in TD, including audio analysis, visual design, data integration, manipulation and analysis, program optimization, output and management.
  5. My next step is to keep improving and optimizing the program and to create a simpler interface that users can manipulate easily.

2.2- VJ setup in TouchDesigner

Code Link:



Today we use technology-based new media art as a way forward; these forms are often the forerunners of future art. Artists working at the intersection of art and technology create collaborative artworks. Moreover, in an era of such major technological transition, ways of life have changed accordingly, and it has become necessary to grasp the essence of art at a fundamental level. The cross-disciplinary nature of music visualization is obvious: it involves musical art, visual art, and the integration of the two.

When the project was shown at the exhibition, I was the VJ and presented it myself, which let me personally observe participants' interactions and experiences. To my satisfaction, I achieved my goal of becoming a VJ, delivering a show that challenged me incredibly.




  1. vjfader (2017). VJ Set – Sikdope live @ Mad House – ShenZhen China.Available at: https://www.youtube.com/watch?v=uG1GrD7VQOs&t=902s.
  2. Transmission Festival (2017). Armin van Buuren & Vini Vici ft. Hilight Tribe – Great Spirit (Live at Transmission Prague 2016). Available at: https://www.youtube.com/watch?v=0ohuSUNHePA 
  3. Ragan, M. (2015). THP 494 & 598 | Simple Live Set Up | TouchDesigner.Available at: https://www.youtube.com/watch?v=O-CyWhN4ivI
  4. Ragan, M. (2015). THP 494 & 598 | Large Display Arrangement | TouchDesigner. Available at: https://www.youtube.com/watch?time_continue=1&v=RVqNjJfE9Lg 
  5. Ragan, M. (2015). Advanced Instancing | Pixel Mapping Geometry | TouchDesigner.  Available at: https://matthewragan.com/2015/08/18/advanced-instancing-pixel-mapping-geometry-touchdesigner/
  6. Ragan, M. (2013). The Feedback TOP | TouchDesigner.  Available at: https://matthewragan.com/2013/06/16/the-feedback-top-touchdesigner/

Tête-à-Tête <3

Rittika Basu

Project Description: ‘Tête-à-Tête’ <3 is a private dating platform. The term ‘tête-à-tête’ (French in origin) refers to a secret one-to-one conversation between two people. Communication is encrypted with ‘Cupid Cryptic Codes’ transmitted by playing the keys of a mini piano; the objective is to carry on a secret chat while camouflaged as a piano player. There are ten piano keys in total, which generate the solfège (solfa) – an educational technique for teaching musical notes and sight-reading and familiarizing beginners with lyrical patterns. For example, ‘A’ is denoted by one red blink and generates the musical note ‘Do’. As on an actual piano, the first eight mini piano keys produce Do, Re, Mi, Fa, So, La, Ti and Do consecutively. The last two keys produce ‘beep’ sounds denoting the sets of odd and even numbers. The code syntax incorporates letters, numbers and several punctuation marks, in addition to a few emojis.
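The key-to-note scheme described above can be summarized in a small lookup, sketched here in Python. Only the overall structure (eight solfège keys plus two beep keys for odd and even numbers) comes from the description; the index ordering and the beep labels are placeholders.

```python
# First eight keys play the solfège scale, ending back on Do.
SOLFEGE = ["Do", "Re", "Mi", "Fa", "So", "La", "Ti", "Do"]

def key_to_note(key_index):
    """Map one of the ten mini-piano keys (0-9) to its sound."""
    if 0 <= key_index < 8:
        return SOLFEGE[key_index]
    if key_index == 8:
        return "beep-odd"   # stands for the set of odd numbers
    if key_index == 9:
        return "beep-even"  # stands for the set of even numbers
    raise ValueError("the mini piano has only 10 keys")
```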






Attaching Alligator wires to the mini piano keys DIY buttons
Wiring the Arduino with the Mini Piano strings

Final Images:


Interactive Video:

Question: Guess what Neo is encrypting?

Answer: coffee?

Project Context:


I began gathering ideas after studying the Mud Tub, Massage Me and Giant Joystick. One day, I came across an article on ‘secret dating’ and the reasons behind it. Foremost, it is a common practice within the LGBT community, as non-heterosexual expressions of love remain taboo in several regions of the world. In many Asian and African countries, disclosure of such relationships can end in tragedy, with partners jailed, penalised, killed by relatives or chemically castrated. One reason could be that unconventional relationships are perceived as an act of humiliation and shame in society. Thus, homosexual lovers are often forced to pursue relationships in secrecy.

Being from India (one of the world's most populous countries), I have witnessed how dating, kissing or any public display of affection is frowned upon, whereas molestation, on the other hand, is often blatantly ignored. Instead of teaching children about communal friendship and healthy relationships, parents wrongly depict romantic intimacy as vile and inappropriate for youngsters. For example, during my 12th grade, my best friend’s mother told her that having a boyfriend is indecent and that girls who date boys will always perform poorly in academia. This was because her parents considered ‘teenage love’ a distraction and feared that their daughter might engage in pre-marital coitus.

Developing a new language – Cupid Codes:

Contemplating the notion of secret dating, the idea of a secret system of communication struck me the following week. I began reading about cryptic messaging and cipher networks, which had interested me since childhood. Inspired by ‘Morse Code’ and ‘Tap Code’, I tried my hand as an amateur cryptanalyst and created my own set of cryptic codes using LED blinks.


Ideation – Creating a ‘Cryptic Messaging System’

It was a struggle in the beginning, as I had to work out how to implement this transmission via serial communication within a limited set of 12 digital pins. In the fullness of time, after innumerable trials and errors, I came up with the ‘Cupid Codes’ and strategised a systematic table for remembering the new language. These sets of multi-coloured blinks on the Processing screen transmit messages between two lovers secretly, while the people around them assume the participants are simply engaged in playing the mini piano, because the codes are conveyed through the playing (the audio and visual output) of the piano keys.
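The encoding idea above can be sketched in a few lines. The table below is invented for illustration (the actual Cupid Code table is my own and much larger); only the ‘A’ = one red blink rule comes from the project description. Shown in Python for brevity rather than Processing:

```python
# Hypothetical sketch of the Cupid Code lookup: each character maps to a
# sequence of (colour, blink_count) pairs shown on the Processing screen.
# Only the 'A' entry reflects the real table; the rest are placeholders.
CUPID_CODES = {
    "A": [("red", 1)],                   # 'A' = one red blink (plays 'Do')
    "B": [("red", 2)],                   # invented example entry
    "C": [("orange", 1)],                # invented example entry
}

def encode(message):
    """Translate a message into a flat list of (colour, blinks) steps."""
    steps = []
    for ch in message:
        steps.extend(CUPID_CODES.get(ch.upper(), []))  # skip unknown chars
    return steps

print(encode("abc"))  # [('red', 1), ('red', 2), ('orange', 1)]
```

Each step would then be rendered as that many blinks of the given colour, paced by the blink timings that distinguish one symbol from the next.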

Tap Code

Creation of a new language – Cupid Code (usage of multi-coloured blinks with different timings and syntaxes)


Rapid Prototyping:

Utilising the knowledge of DIY switches from Kate’s class, I first wrapped the mini piano keys in aluminium foil to test their electrical conductivity. However, this prototype turned out looking very childish, so I set it aside as a kids’ version on account of its bright colours and playful mechanism.


After referring to videos of DIY mini pianos on YouTube and instructional images from Pinterest, I began applying the gathered knowledge to develop my own little piano. Creating the piano keys as DIY switches was intensive and tedious. Afterwards, I soldered the jumper wires and resistors to the copper strips attached to the piano keys. The idea is that when a piano key is pressed, it completes the circuit and emits a musical sound. There are 10 piano keys, colour-coded in red, orange, yellow, blue & green. The two green keys stand for the odd and even numbers respectively, while the remaining eight keys represent letters, emojis, phrases and punctuation in different combinations. The 10 keys emit 10 different sounds: the Solfège or Solfeggio (a.k.a. Sol-fa, Solfa, Solfeo) notes Do, Re, Mi, Fa, So, La, Ti, Do, plus the Beep sounds.
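The key-to-sound layout described above can be summarised as a small lookup, written here as a Python sketch (in the actual piece this mapping lives in the Processing sketch, and the sound names stand in for the freesound.org clips listed in the references):

```python
# Keys 0-7 play the solfège scale; keys 8 and 9 are the two green keys,
# reserved for the odd and even number sets and playing a beep instead.
SOLFEGE = ["Do", "Re", "Mi", "Fa", "So", "La", "Ti", "Do"]

def sound_for_key(key):
    """Return the sound a pressed key should trigger (10 keys total)."""
    if not 0 <= key <= 9:
        raise ValueError("only 10 piano keys exist")
    if key < 8:
        return SOLFEGE[key]          # coloured letter/emoji/punctuation keys
    return "Beep (odd)" if key == 8 else "Beep (even)"

print(sound_for_key(0))  # Do
print(sound_for_key(9))  # Beep (even)
```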

Access from GitHub (Codes + Audio + Image + Typeface) :

The coding is simple and derived from the files shared on Kate and Nick’s GitHub page, titled ‘Experiment_3_-_Arduino_Processing_ASCII_Digital_values.ino’ (Arduino file) and ‘Experiment_3__Arduino_Processing_ASCII.pde’ (Processing file).
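As the file names suggest, the lab code sends the Arduino's digital pin states to Processing as one ASCII line of comma-separated values per frame. A minimal sketch of the receiving side, shown in Python for brevity (the exact line format in the course files may differ):

```python
# Assumed wire format: one ASCII line per frame, e.g. "0,1,0,0,0,0,0,0,1,0",
# one value per piano key. Processing's serialEvent() would do the same work.
def parse_line(line):
    """Turn '0,1,0,...' into a list of booleans, one per piano key."""
    return [v.strip() == "1" for v in line.strip().split(",")]

def new_presses(prev, curr):
    """Indices of keys that just went from released to pressed."""
    return [i for i, (p, c) in enumerate(zip(prev, curr)) if c and not p]

prev = parse_line("0,0,0,0,0,0,0,0,0,0")
curr = parse_line("0,1,0,0,0,0,0,0,1,0")
print(new_presses(prev, curr))  # [1, 8] -- these keys trigger sound + blinks
```

Comparing against the previous frame keeps a held-down key from re-triggering its note and blink pattern every frame.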

In Arduino: Arduino Code for Experiment 3: Tête-à-Tête <3

In Processing: Processing Code for Experiment 3: Tête-à-Tête <3

Supporting Files: Audio, background image and typeface


I replaced the aluminium with copper, both because Cu is a better conductor of electricity and because its strips are harder than Al foil. The cryptic communication language consists of colour blinks that appear on the screen with different timings and syntaxes, to be exchanged between the participants or partners for flirting, messaging and calling each other. Every piano key generates a different sound, making the messaging activity seem like the playing of musical notes. Since the instrument functions like a real piano, non-participants around the two partners will take them for piano players, while they can happily date in peace and privacy.


Research + Coding Tutorials
  1. Kremer, B. (2019). Best Codes. from https://www.instructables.com/id/Best-Codes/
  2. Hartman, K. (2019). Exp3_Lab1_ArduinotoProcessing_ASCII_3DigitalValues/. Lecture, OCAD University. https://github.com/DigitalFuturesOCADU/CC19/tree/master/Experiment3/Exp3_Lab1_ArduinotoProcessing_ASCII_3DigitalValues
  3. curtis’s channel. (2016). processing: playing and using sound files [Video]. Retrieved from https://www.youtube.com/watch?v=DJJCci3kXe0
  4. Engel, M. (2014). Adding and using fonts in processing [Video]. Retrieved from https://www.youtube.com/watch?v=QmRbb-_d_vI
  5. Blum, J. (2011). Tutorial 06 for Arduino: Serial Communication and Processing [Video]. Retrieved from https://www.youtube.com/watch?v=g0pSfyXOXj
  6. Rudder, C. (2014). Seven secrets of dating from the experts at OkCupid. Retrieved from https://www.theguardian.com/lifeandstyle/2014/sep/28/seven-secrets-of-dating-from-the-experts-at-okcupid
  7. Elford, E. (2018). [Article on a mother's reaction to her daughter's secret lesbian relationship]. HuffPost. Retrieved from https://www.huffpost.com/entry/mom-secret-lesbian-relationship_n_5aa143e9e4b0d4f5b66e2b35
Sound & Media Sources
  1. Rodgers & Hammerstein. (1965). “Do-Re-Mi” – THE SOUND OF MUSIC [Video]. Retrieved from https://www.youtube.com/watch?v=drnBMAEA3AM
  2. Jaz_the_MAN_2. (2015). Do, re, mi, fa, so, la, ti, do – DO stretched.wav [WAV file]. Retrieved from https://freesound.org/people/Jaz_the_MAN_2/sounds/316899/
  3. Jaz_the_MAN_2. (2015). Do, re, mi, fa, so, la, ti, do – RE stretched.wav [WAV file]. Retrieved from https://freesound.org/people/Jaz_the_MAN_2/sounds/316909/
  4. Jaz_the_MAN_2. (2015). Do, re, mi, fa, so, la, ti, do – MI.wav [WAV file]. Retrieved from https://freesound.org/people/Jaz_the_MAN_2/sounds/316909/
  5. Jaz_the_MAN_2. (2015). Do, re, mi, fa, so, la, ti, do – FA stretched.wav [WAV file]. Retrieved from https://freesound.org/people/Jaz_the_MAN_2/sounds/316905/
  6. Katy (2007). Solfege – So.wav [WAV file]. Retrieved from https://freesound.org/people/digifishmusic/sounds/44935/
  7. Jaz_the_MAN_2. (2015). LA.wav [WAV file]. Retrieved from https://freesound.org/people/Jaz_the_MAN_2/sounds/316902/
  8. Katy (2007). Solfege – Ti.wav [WAV file]. Retrieved from https://freesound.org/people/digifishmusic/sounds/44936/
  9. austin1234575 (2014). Beep 1 sec [Audio file]. Retrieved from https://freesound.org/people/austin1234575/sounds/213795/
  10. cheesepuff (2010). a soothing music.mp3 [MP3 file]. Retrieved from https://freesound.org/people/cheesepuff/sounds/110215/