Much of my work deals with memory and forgotten histories. I am constantly searching for new ways to portray the invisible past that haunts the present. (Un)seen is a video installation about presence/absence that uses proxemics to trigger video and sound. It recreates a ghostly presence appearing on a screen whose voice constantly beckons the viewer to get closer, but whose image recedes into the frame as the viewer tries to engage with it. Ultimately, the viewer is invited to touch the cloth it is projected on, but if they do, the ghost completely disappears from view, leaving an empty black screen.
With permission, I will be using unused footage from a previous project, composed of closeups of a Black woman on a black background, and will be recording and mixing a new soundtrack.
Parts / materials / technology list
Distance sensors (2 or 3): HC-SR04 or KS102 (1 cm–8 m) ultrasonic sensors
King size bedsheet, hanging from rod
2 speakers (or 4?)
22.11.19-24.11.19 Edit 3 video loops
24.11.19-25.11.19 Write ghost dialogue and research sound
26.11.19-27.11.19 Record and edit sound
22.11.19-27.11.19 Program distance sensors and interaction
27.11.19 Mount bedsheet on rod
28.11.19-02.12.19 Testing and debugging
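The "program distance sensors and interaction" step above could take roughly this shape. This is a minimal sketch in plain Java rather than the installation's actual Processing/Arduino code, and the distance threshold is a placeholder, not a measured value:

```java
// Sketch of the proxemic logic: the ghost beckons when no one is near,
// recedes as a viewer approaches, and vanishes when the cloth is touched.
// The 150 cm threshold is a placeholder; the real value would be tuned on site.
public class GhostState {
    public static final String IDLE = "idle";         // no one nearby: ghost beckons
    public static final String RECEDING = "receding"; // viewer approaching: image pulls back
    public static final String VANISHED = "vanished"; // cloth touched: empty black screen

    // Map a front-sensor distance (cm) and a rear-sensor touch flag to a state.
    public static String stateFor(double distanceCm, boolean screenTouched) {
        if (screenTouched) return VANISHED;
        if (distanceCm < 150.0) return RECEDING;
        return IDLE;
    }

    // HC-SR04-style conversion: echo pulse width in microseconds to centimetres.
    // Sound travels ~0.0343 cm/us; divide by 2 for the out-and-back trip.
    public static double pulseToCm(double echoMicros) {
        return echoMicros * 0.0343 / 2.0;
    }

    public static void main(String[] args) {
        System.out.println(stateFor(pulseToCm(11662), false)); // ~200 cm -> idle
        System.out.println(stateFor(pulseToCm(2915), false));  // ~50 cm -> receding
        System.out.println(stateFor(50, true));                // touch -> vanished
    }
}
```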
Physical installation details
The ideal space for the installation would be Room 118 on the ground floor.
With permission, I will be using footage shot for another project, composed of closeups of a Black woman on a black background. Ideally, the image would be projected from the rear onto the sheet. This would require a dark space with enough room behind and in front of the sheet. The mount for the sheet will be kept deliberately light. Metal wire can be used to hang the rod holding the sheet from the ceiling, but this would potentially require discreet hooks screwed or otherwise attached to the ceiling.
Set-up Option A
Set-up Option B
Hand drill and other tools TBA
2 (or 4?) speakers
2 pedestals for the sensors (?)
Cables for the speakers
Power bar and electrical extension cords
– Can I have 4 speakers and have them play different sounds in pairs? I.e. the speakers behind the screen wouldn’t play the same sound as the speakers in front of the screen
– Do I actually need 3 distance sensors – one behind the screen for the functions triggered by people touching the screen, and two mounted (possibly on pedestals) slightly in front of the screen at each side?
– Is it possible to hang things from the ceiling?
– Would a motion sensor also be useful to activate the installation when someone comes into the room?
Trumpet uses a fitting instrument as the starting point for an installation about the world’s most infamous Twitter user. It combines a display of live tweets tagged with @realDonaldTrump and a trumpet that delivers real audio clips from the American president. The piece is meant to be installed at room scale and to provide a real-life experience of the social media echo chambers that so many of us confine ourselves to.
The piece constantly emits a low static sound, signalling the distant chatter that is always present on Twitter. A steady stream of tweets from random users, but always tagged with the president’s handle, are displayed on the screen and give a portrait of the many divergent opinions about the current state of the presidency.
Visitors can manipulate a trumpet that triggers audio. A sample of the Call to the Post melody played at the start of horse races can be heard when the trumpet is picked up. The three trumpet valves, when pressed, each play short clips (the verbal equivalent of tweets) from the president himself. Metaphorically, Trump is in dialogue with the tweets being displayed on the screen in this enclosed ecosystem. The repeated clips create a real, live sonic echo chamber, physically recreating what happens virtually online.
My initial ideas were centered on the fabrication of a virtual version of a real object: a virtual bubble blower that would create bubble patterns on a screen, and a virtual kaleidoscope. I then flipped that idea and moved to using a common object as a controller, giving it a new life by hacking it in some way to give it novel functionalities. Those functionalities would have to be close to the original use of the object yet be surprising in some way. The ideal object would have a strong tactile quality. Musical instruments soon came to mind. They are designed to be constantly handled, have iconic shapes and are generally well-made, featuring natural materials such as metal and wood.
In parallel, I developed the idea of using data in the piece. I had recently attended the Toronto Biennial of Art and was fascinated by Fernando Palma Rodriguez’s piece Cihuapapalutzin that integrated 104 robotic monarch butterflies in various states of motion. They were built to respond to seismic frequencies in Mexico. Every day, a new data file is sent from that country to Toronto and uploaded to control the movement of the butterflies. The piece is meant to bring attention to the plight of the unique species that migrates between the two countries. The artwork led me to see the potential for using data visualisation to make impactful statements about the world.
I then made the connection to an example we had seen in class. Just Landed by Jer Thorp shows real-time air travel patterns of Twitter users on a live map. The Canadian artist, now based in New York, used Processing, Twitter and MetaCarta to extract longitude and latitude information from a query on Twitter data to create this work.
Another inspiration was Listen and Repeat by American artist Rachel Knoll, a piece featuring a modified megaphone installed in a forest that used text-to-speech software to enunciate tweets labeled with the hashtag “nobody listens”.
As I wanted to make a project closer to my artistic practice, which is politically engaged, Twitter seemed a promising way to obtain live data that could then be presented on a screen. Of course, that immediately brought to mind one of the most prolific and definitely the most infamous of Twitter users: Donald Trump. The trumpet then seemed a fitting controller, both semantically and in its nature as a brash and bold instrument.
Step 1: Getting the Twitter data
Determining how to get the Twitter data required quite a bit of research. I found the Twitter4J library for Processing and downloaded it, but still needed more information on how to use it. I happened upon a tutorial on British company Coda Sign’s blog about searching Twitter for tweets. It gave an outline of the necessary steps along with the code. I then created a Twitter developer account and got the keys required to use their API in order to access the data.
Once I had access to the Twitter API, I adjusted the parameters in the code from the Coda Sign website, modifying it to suit my needs. I set up a search for “@realDonaldTrump”, not knowing how much data it would yield and was pleasantly surprised when it resulted in a steady stream of Tweets.
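The bookkeeping around that steady stream of tweets can be sketched as follows. This is my own illustrative Java class, not the project's Processing code, and the Twitter4J network call itself is stubbed out (in the sketch it would be fed from a `twitter.search(new Query("@realDonaldTrump"))` result):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the feed logic: remember the highest tweet id seen so far to
// skip duplicates between polls, and keep a scrolling on-screen buffer.
public class TweetFeed {
    private long sinceId = 0;                    // highest tweet id seen so far
    private final List<String> buffer = new ArrayList<>();
    private final int maxOnScreen;

    public TweetFeed(int maxOnScreen) {
        this.maxOnScreen = maxOnScreen;
    }

    // Called once per incoming tweet, oldest first.
    public void ingest(long id, String text) {
        if (id <= sinceId) return;               // already shown: skip duplicate
        sinceId = id;
        buffer.add(text);
        if (buffer.size() > maxOnScreen) {
            buffer.remove(0);                    // scroll the oldest tweet off screen
        }
    }

    public List<String> onScreen() {
        return buffer;
    }
}
```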
Step 2: Programming the interaction
Now that the code was running in Processing, I set it up to receive data from the Arduino. I programmed 3 switches, one for each valve of the trumpet, and also used Nick’s code to send the gyroscope and accelerometer data to Processing in order to determine which data was the most pertinent and what the thresholds should be for each parameter. The idea was that the gyroscope data would trigger sounds when the trumpet was moved, and the 3 trumpet valves would manipulate the tweets on the screen with various effects on the font of the text.
I soon hit a snag: at first, it seemed like Processing wasn’t getting any information from the Arduino. Looking at the code, I noticed that there were several delay commands at various points. I remembered Nick’s warning about how problematic the delay command is and realized that this, unfortunately, was a great example of it.
I knew the solution was to program the intervals using the millis() function. I spent a day and a half attempting to find a solution but failed and required Kate Hartman’s assistance to solve the issue. I had also discovered that the Twitter API would disconnect me if I ran the program for too long. I had to test in fits and starts and often found myself unable to get any Twitter data, sometimes for close to an hour.
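The millis() pattern that replaces delay() looks roughly like this. I've written it as a small Java class with the clock value passed in so the logic can be exercised directly; on the Arduino the clock source would simply be millis():

```java
// Non-blocking interval timing: instead of delay(), which freezes the whole
// loop (and starves serial reads), compare the current clock against the
// time of the last firing on every pass through loop().
public class IntervalTimer {
    private final long intervalMs;
    private long lastFire;

    public IntervalTimer(long intervalMs, long now) {
        this.intervalMs = intervalMs;
        this.lastFire = now;
    }

    // Returns true once per elapsed interval, never blocking in between.
    public boolean ready(long now) {
        if (now - lastFire >= intervalMs) {
            lastFire = now;
            return true;
        }
        return false;
    }
}
```

Between the `ready()` checks the loop stays free to read serial data, which is exactly what the delay() calls were preventing.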
I attempted to program some effects to visually manipulate the tweets, triggered by the activation of the valves. I had difficulty affecting only one tweet, as the effects would apply to all subsequent tweets. Also, given that the controller was a musical instrument, sound felt like a better-suited effect than a visual one. At first, I loaded cheers and boos from a crowd that users could trigger in reaction to what was on screen, but I finally settled on some Trump clips, as it seemed natural to feature his very distinctive voice. It was suitable both because he takes to Twitter to make official declarations and because of the horn’s long history as an instrument announcing the arrival of royalty and other VIPs.
As the clock was ticking, I decided to work on the trumpet and return to working on the interaction when the controller was functional.
Step 3: Hacking the trumpet
I was fortunate to have someone lend me a trumpet. I disassembled all the parts to see if I could make a switch that would be activated by the piston valves. I soon discovered that the angle from the slides to the piston valves is close to 90 degrees, and given the small aperture connecting the two, running wires through would be nearly impossible.
The solution I found was taking apart each valve piston while keeping the top of the valve, and replacing the piston with a piece of cut Styrofoam. The wires could then come out of the bottom casing caps and connect to the Arduino.
I soldered wires to 3 switches and then carefully wrapped the joints in electrical tape.
A cardboard box was chosen to house a small breadboard. Holes were made so that the bottoms of the valves could be threaded through, and the lid of the box could be secured to the trumpet using the bottom casing caps. Cardboard was chosen in order to keep the instrument light and as close as possible to its normal weight and balance.
Step 4: Programming the interaction, part 2
The acceleration in the Y axis was chosen as the trigger for the trumpet sound. But given the imbalance in the trumpet’s weight, it tended to trigger the sound in rapid succession before stopping. Raising the threshold didn’t help. With little time left, I then programmed the valves/switches to trigger some short Trump clips. I would have loved to accompany them with a visual distortion, but the clock ran out before I could find something appropriate and satisfactory.
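One fix for that rapid retriggering, beyond raising the threshold, would be a refractory window so that a single swing of the trumpet fires the sound once. This is a hypothetical Java sketch of that idea, with placeholder threshold and cooldown values rather than measured ones:

```java
// Threshold plus cooldown: a trigger only fires if the Y-axis acceleration
// exceeds the threshold AND enough time has passed since the last firing.
// Both constants below are placeholders, not values tuned on the trumpet.
public class TriggerGate {
    private final double threshold;
    private final long cooldownMs;
    private long lastTrigger = Long.MIN_VALUE / 2; // "long ago", overflow-safe

    public TriggerGate(double threshold, long cooldownMs) {
        this.threshold = threshold;
        this.cooldownMs = cooldownMs;
    }

    // accelY: Y-axis accelerometer reading; now: current time in ms.
    public boolean fire(double accelY, long now) {
        if (Math.abs(accelY) < threshold) return false;
        if (now - lastTrigger < cooldownMs) return false; // still cooling down
        lastTrigger = now;
        return true;
    }
}
```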
My ideation process is slow and was definitely a hindrance in this project. I attempted to do something more complex than I had originally anticipated and the bugs I encountered along the way made it really difficult. One of the things that I struggle with when coding is not knowing when to persevere and when to stop. I spent numerous hours trying to debug at the expense of sleep and in hindsight, it wasn’t useful. It also feels like the end result isn’t representative of the time I spent on the project.
I do think though that the idea has some potential and given the opportunity would revisit it to make it a more compelling experience. Modifications I would make include:
Adding more Trump audio clips and randomizing their triggering by the valves
Building a sturdier box to house the Arduino so that the trumpet could rest on it, and possibly attaching it to some kind of stand that would somewhat control its movements
Having video or a series of changing photographs as a background to the tweets on the screen, and making them react to the triggering of the valves.
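The first of those modifications, randomized clips per valve, could be sketched like this. The class and its shape are my own illustration, not code from the project; the seed is fixed here only so the behaviour is repeatable:

```java
import java.util.Random;

// Each valve would own a ClipPicker over its pool of audio clips;
// re-drawing on a repeat keeps the same clip from playing twice in a row.
public class ClipPicker {
    private final int poolSize;
    private final Random rng;
    private int last = -1; // index of the clip played last, -1 before any play

    public ClipPicker(int poolSize, long seed) {
        this.poolSize = poolSize;
        this.rng = new Random(seed);
    }

    // Pick the index of the next clip to play, never the same one twice in a row.
    public int next() {
        if (poolSize == 1) return 0;
        int pick;
        do {
            pick = rng.nextInt(poolSize);
        } while (pick == last);
        last = pick;
        return pick;
    }
}
```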
Link to code on Github: https://github.com/nvalcin/CCassignment3