Trumpet

by Nadine Valcin

img_0670-valves

DESCRIPTION

Trumpet is a fitting instrument as the starting point for an installation about the world’s most infamous Twitter user. It combines a display of live tweets tagged with @realDonaldTrump with a trumpet that delivers real audio clips from the American president. The piece is meant to be installed at room scale and provide a real-life experience of the social media echo chambers that so many of us confine ourselves to.

The piece constantly emits a low static sound, signalling the distant chatter that is always present on Twitter. A steady stream of tweets from random users, but always tagged with the president’s handle, are displayed on the screen and give a portrait of the many divergent opinions about the current state of the presidency.

Visitors can manipulate a trumpet that triggers audio. A sample of the Call to the Post trumpet melody played at the start of horse races can be heard when the trumpet is picked up. The three trumpet valves, when activated, in turn play short clips (the verbal equivalent of tweets) from the president himself. Metaphorically, Trump is in dialogue with the tweets being displayed on the screen in this enclosed ecosystem. The repeated clips create a live sonic echo chamber, physically recreating what happens virtually online.


CONTEXT

My initial ideas were centered on the fabrication of a virtual version of a real object: a virtual bubble blower that would create bubble patterns on a screen, and a virtual kaleidoscope. I then flipped that idea and moved to using a common object as a controller, giving it a new life and hacking it in some way to give it novel functionalities. Those functionalities would have to be close to the object's original use yet surprising in some way. The ideal object would have a strong tactile quality. Musical instruments soon came to mind. They are designed to be constantly handled, have iconic shapes and are generally well made, featuring natural materials such as metal and wood.

tba_fernandopalmarodriguez_07-1-2496x1096
Image from Cihuapapalutzin

In parallel, I developed the idea of using data in the piece. I had recently attended the Toronto Biennial of Art and was fascinated by Fernando Palma Rodriguez’s piece Cihuapapalutzin that integrated 104 robotic monarch butterflies in various states of motion. They were built to respond to seismic frequencies in Mexico. Every day, a new data file is sent from that country to Toronto and uploaded to control the movement of the butterflies. The piece is meant to bring attention to the plight of the unique species that migrates between the two countries. The artwork led me to see the potential for using data visualisation to make impactful statements about the world.

just_landed_jerthorp
Image from Just Landed

I then made the connection to an example we had seen in class. Just Landed by Jer Thorp shows real-time air travel patterns of Twitter users on a live map. The Canadian artist, now based in New York, used Processing, Twitter and MetaCarta to extract longitude and latitude information from a query on Twitter data to create this work.

rachel_knoll_listenandrepeat
Image from Listen and Repeat

Another inspiration was Listen and Repeat by American artist Rachel Knoll, a piece featuring a modified megaphone installed in a forest that used text-to-speech software to enunciate tweets labeled with the hashtag “nobody listens”.

As I wanted to make a project closer to my politically engaged artistic practice, Twitter seemed a promising way to obtain live data that could then be presented on a screen. Of course, that immediately brought to mind one of the most prolific and definitely the most infamous Twitter users: Donald Trump. The trumpet then seemed a fitting controller, both semantically and because of its nature as a brash, bold instrument.

PROCESS

Step 1: Getting the Twitter data

Determining how to get the Twitter data required quite a bit of research. I found the Twitter4J library for Processing and downloaded it, but still needed more information on how to use it. I happened upon a tutorial on British company Coda Sign's blog about searching Twitter for tweets. It gave an outline of the necessary steps along with the code. I then created a Twitter developer account and got the required keys to use their API in order to access the data.

Once I had access to the Twitter API, I adjusted the parameters in the code from the Coda Sign website, modifying it to suit my needs. I set up a search for “@realDonaldTrump”, not knowing how much data it would yield, and was pleasantly surprised when it resulted in a steady stream of tweets.
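For reference, a minimal sketch of that kind of query with the Twitter4J library (assuming the library jar is added to the Processing sketch and the four OAuth keys are filled in; all key strings below are placeholders, and this is not the Coda Sign code itself):

import java.util.List;
import twitter4j.*;
import twitter4j.conf.ConfigurationBuilder;

Twitter twitter;

void setup() {
  size(800, 600);
  // keys from the Twitter developer account (placeholders)
  ConfigurationBuilder cb = new ConfigurationBuilder();
  cb.setOAuthConsumerKey("CONSUMER_KEY");
  cb.setOAuthConsumerSecret("CONSUMER_SECRET");
  cb.setOAuthAccessToken("ACCESS_TOKEN");
  cb.setOAuthAccessTokenSecret("ACCESS_TOKEN_SECRET");
  twitter = new TwitterFactory(cb.build()).getInstance();
}

void draw() {
  background(0);
  // poll the search API occasionally, not every frame, to stay under the rate limit
  if (frameCount % (60 * 30) == 1) {   // roughly every 30 seconds at 60 fps
    try {
      Query query = new Query("@realDonaldTrump");
      query.setCount(10);
      QueryResult result = twitter.search(query);
      List<Status> tweets = result.getTweets();
      for (Status s : tweets) {
        println(s.getUser().getScreenName() + ": " + s.getText());
      }
    } catch (TwitterException e) {
      println("Search failed: " + e.getMessage());
    }
  }
}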

Step 2: Programming the interaction

Now that the code was running in Processing, I set up the code to get data from the Arduino. I programmed 3 switches, one for each valve of the trumpet, and also used Nick's code to send the gyroscope and accelerometer data to Processing in order to determine which data was most pertinent and what the thresholds should be for each parameter. The idea was that the gyroscope data would trigger some sounds when the trumpet was moved, and the 3 trumpet valves would manipulate the tweets on the screen with various effects on the font of the text.

I soon hit a snag: at first it seemed like Processing wasn't getting any information from the Arduino. Looking at the code, I noticed that there were several delay commands at various points. I remembered Nick's warning about how problematic the delay command is and realized that this, unfortunately, was a great example of why.

I knew the solution was to program the intervals using the millis function. I spent a day and a half attempting to find a solution but failed, and required Kate Hartman's assistance to solve the issue. I had also discovered that the Twitter API would disconnect me if I ran the program for too long. I had to test in fits and starts and was sometimes unable to get any Twitter data for close to an hour.
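For reference, the basic non-blocking pattern on the Arduino side looks something like this (a minimal sketch, not the project's actual code; the pin number and interval are placeholders):

// Instead of delay(), remember when an event last happened and compare against millis().
const int valvePin = 2;             // one of the valve switches (assumption)
const unsigned long interval = 50;  // how often to send data, in ms

unsigned long lastSend = 0;

void setup() {
  pinMode(valvePin, INPUT_PULLUP);
  Serial.begin(9600);
}

void loop() {
  // loop() keeps running freely; nothing blocks the serial traffic
  if (millis() - lastSend >= interval) {
    lastSend = millis();
    int valveState = digitalRead(valvePin);
    Serial.println(valveState);     // Processing reads this on the other end
  }
}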

I attempted to program some effects to visually manipulate the tweets, triggered by the activation of the valves. I had difficulty affecting only one tweet, as the effects would carry over to all subsequent tweets. Also, given that the controller was a musical instrument, sound felt better suited as an effect than a visual one. At first, I loaded cheers and boos from a crowd that users could trigger in reaction to what was on screen, but finally settled on some Trump clips, as it seemed natural to use his very distinctive voice. It was suitable both because he takes to Twitter to make official declarations and because of the horn's long history as an instrument announcing the arrival of royalty and other VIPs.

As the clock was ticking, I decided to work on the trumpet and return to working on the interaction when the controller was functional.

Step 3: Hacking the trumpet

img_0646
Trumpet partly disassembled

I was fortunate to have someone lend me a trumpet. I disassembled all the parts to see if I could make a switch that would be activated by the piston valves. I soon discovered that the angle from the slides to the piston valves is close to 90 degrees and, given the small aperture connecting the two, this would be nearly impossible.

trumpet-parts
Trumpet parts
img_0663
Trumpet valve and piston
img_0679
Trumpet top of valve assembly without piston

The solution I found was taking apart the valve piston while keeping the top of the valve and replacing the piston with a piece of cut Styrofoam. The wires could then come out the bottom casing caps and connect to the Arduino.

img_0685

 

I soldered wires to 3 switches and then carefully wrapped the joints in electrical tape.

img_0687
Arduino wiring

 

A cardboard box was chosen to house a small breadboard. Holes were made so that the bottoms of the valves could be threaded through, and the lid of the box could be secured to the trumpet using the bottom casing caps. Cardboard was chosen to keep the instrument light and as close as possible to its normal weight and balance.

img_0695
Finished trumpet/controller

Step 4: Programming the interaction part 2

The acceleration in the Y axis was chosen as the trigger for the trumpet sound. But given the imbalance in the trumpet's weight, it tended to trigger the trumpet sound in rapid succession before stopping, and raising the threshold didn't help. With little time left, I programmed the valves/switches to trigger some short Trump clips. I would have loved to accompany them with a visual distortion, but the clock ran out before I could find something appropriate and satisfactory.
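A cooldown is one way that rapid re-triggering could be tamed; the Processing sketch below is my own workaround rather than the code used here, and accelY, the threshold, the cooldown and the clip filename are all assumptions:

import processing.sound.*;

SoundFile trumpetClip;
float accelY = 0;               // latest Y acceleration, updated from the Arduino's serial data in the full sketch
float threshold = 1.5;          // tune to the real sensor range (placeholder)
int cooldownMs = 2000;          // minimum gap between triggers
int lastTrigger = -10000;

void setup() {
  size(200, 200);
  trumpetClip = new SoundFile(this, "call_to_the_post.mp3");  // placeholder filename
}

void draw() {
  if (accelY > threshold && millis() - lastTrigger > cooldownMs) {
    trumpetClip.play();
    lastTrigger = millis();     // the clip cannot fire again until the cooldown passes
  }
}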

Reflections

My ideation process is slow and was definitely a hindrance in this project. I attempted to do something more complex than I had originally anticipated, and the bugs I encountered along the way made it really difficult. One of the things I struggle with when coding is not knowing when to persevere and when to stop. I spent numerous hours trying to debug at the expense of sleep and, in hindsight, it wasn't useful. It also feels like the end result isn't representative of the time I spent on the project.

I do think though that the idea has some potential and given the opportunity would revisit it to make it a more compelling experience. Modifications I would make include:

  • Adding more Trump audio clips and randomizing which one each valve triggers
  • Building a sturdier box to house the Arduino so that the trumpet could rest on it, and considering attaching it to some kind of stand that would constrain its movements
  • Adding video or a series of changing photographs as a background to the tweets on the screen and making them react to the triggering of the valves.

Link to code on Github:  https://github.com/nvalcin/CCassignment3

References

Knoll, Rachel. “Listen and Repeat.” Rachel Knoll – Minneapolis Filmmaker, rachelknoll.com/portfolio/listen-and-repeat. Accessed October 31, 2019.

Thorp, Jer. “Just Landed: Processing, Twitter, MetaCarta & Hidden Data.” Blprnt.blg, May 11, 2009. blog.blprnt.com/blog/blprnt/just-landed-processing-twitter-metacarta-hidden-data. Accessed October 25, 2019.

“Fernando Palma Rodríguez at 259 Lake Shore Blvd E”. Toronto Biennial of Art. torontobiennial.org/work/fernando-palma-rodriguez-at-259-lake-shore/. Accessed October 24, 2019.

“Processing and Twitter”. CodaSign, October 1, 2014. www.codasign.com/processing-and-twitter/. Accessed October 24, 2019.

“Trumpet Parts”. All Musical Instruments, 2019. www.simbaproducts.com/parts/drawings/TR200_parts_list.jpg. Accessed November 2, 2019.

Experiment 3: Weave Your Time

screen-shot-2019-11-04-at-22-20-06

Project Title: weave your time

Name of Group Member: Neo Nuo Chen

Project Description: People in modern society are under a lot of stress lately; they can't focus on one task because there are so many distractions. This project is meant to help them calm down and take some time to focus on one thing, something simple that grants major satisfaction once finished. Based on my previous professional background, I decided that a relaxing activity would be weaving. It is an ancient technique that has been around for over xxx years. The participant is asked to place their cup onto the coaster that I laser cut earlier. The pressure sensor underneath the small rug that I wove detects the weight of the liquid and then triggers a reaction on the screen: the intro page disappears and the background color changes.

 

Project Process

October 22

Since this project was a solo project, I wanted to somehow incorporate what I've learned in the past, so I started to think of ways of interaction, and the idea of using a loom came to mind. The aesthetic is very old-fashioned, but mixing it with laser-cut transparent acrylic would be interesting. I also wanted to wrap aluminum foil onto the loom and the head of the shuttle, so that whenever the head touched the edges of the loom (which were wrapped with aluminum foil), it would trigger something. But the more I thought about it, the more I realized it was not possible, since the cable would have to run along the shuttle and would end up woven into the piece.

I then decided to focus on the pressure sensor first, to see what it could be used for. Because it was a pressure sensor, the reading would change based on how much pressure I put on it; it would be interesting to control that range and have different ranges lead to different reactions.

October 24

I started experimenting with the pressure sensor by using the code Kate and Nick shared with us on Github.

img_2659-2 img_2658-2img_2656-2

 

October 25

I personally really like the sound of rain, as well as the visuals of it, which is why I wanted to show both on the screen. From Daniel Shiffman's coding challenges I found out that there are a lot of different effects you can create using Processing, and the purple rain he made was exactly what I was looking for. So I worked with that code and changed a few parts so that the rain, instead of being only purple, constantly changes color throughout.
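I don't have the exact change in front of me, but a minimal sketch of one way to cycle the rain color is below, using HSB color mode so the hue drifts over time (Shiffman's original uses a fixed purple fill); the single falling line here stands in for his full array of drops:

float hue = 0;

void setup() {
  size(640, 360);
  colorMode(HSB, 360, 100, 100);
}

void draw() {
  background(230, 40, 20);          // dark bluish backdrop
  hue = (hue + 0.5) % 360;          // slowly drift through the spectrum
  stroke(hue, 80, 100);
  strokeWeight(3);
  float y = (frameCount * 6) % height;   // one falling drop as a stand-in
  line(width/2, y, width/2, y + 20);
}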

screen-shot-2019-11-04-at-19-49-16screen-shot-2019-10-28-at-14-10-52

I also went to laser cut a loom with 1/4 inch wood for an early test.

img_2003 img_2004

And I wove a small sample with the loom.

img_2168 img_2169

October 28

Had my pieces laser cut with transparent acrylic to create the post-modern aesthetic that I was looking for.

img_2237 img_2256

October 29

Ever since I had the idea of having people experience my work, I got a bunch of plastic cups and ran tests to see the values. The sensor was not as stable as I imagined, especially when the cup is small; it was hard to pin down exact values for different amounts of liquid. So I decided to show an intro image that is taken off once the cup is filled with any amount of liquid and comes back when the cup is empty. I think this works well as a reminder that the cup is empty and in need of a refill. It also works when multiple guests are around: they can read the intro image and follow the instructions individually.
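A minimal sketch of that intro-image logic, assuming sensorValue is the pressure reading coming in from the Arduino and emptyThreshold is a hand-tuned cutoff (the names, threshold and filename are placeholders):

PImage intro;
int sensorValue = 0;          // updated from the Arduino over serial in the full sketch
int emptyThreshold = 50;      // below this the cup is treated as empty (assumption)

void setup() {
  size(800, 600);
  intro = loadImage("intro.png");      // placeholder filename
}

void draw() {
  background(30, 40, 80);
  // ... rain animation drawn here ...
  if (sensorValue < emptyThreshold) {
    image(intro, 0, 0, width, height); // show the instructions until a filled cup is placed
  }
}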

October 30

Continued working on some more weaving so that I could give my breadboard and cables a disguise :)

img_2378 img_2400 img_2416 img_2477

img_2580 img_2581 img_2585

 

Code Repository:

https://github.com/NeonChip/NeoC/tree/Experiment3

Reference:

5.2: If, Else If, Else – Processing Tutorial https://www.youtube.com/watch?v=mVq7Ms01RjA

How to add background music to Processing 3.0? https://poanchen.github.io/blog/2016/11/15/how-to-add-background-music-in-processing-3.0

Coding Challenge #4: Purple Rain in Processing https://www.youtube.com/watch?v=KkyIDI6rQJI

Las Arañas Spinning and Weaving Guild https://www.lasaranas.org/

Medium rain drips tapping on buckets and other surfaces, Bali, Indonesia https://www.zapsplat.com/?s=Medium+rain+drips+tapping+on+bucket+&post_type=music&sound-effect-category-id=

Interaction Installation On Women’s Violence

front

This interactive installation explores one way to create awareness of domestic violence against women, a cause observed each October and November through campaigns such as “Shine the Light”, by using aesthetics to draw visitors in. The experiment captures the audience's attention through an immersive experience: visitors are asked to step onto a pair of bare shoes, prompting the question “how does it feel to be in their shoes?”, and the screen then displays stories of real women who have undergone domestic violence and ways visitors can support them. In this installation, a pressure sensor detects the visitor stepping onto the bare shoes and triggers the display of images on the screen. The purpose of the installation is to engage visitors with the domestic violence faced by women, a subject that is not common in interactive public art. The use of shoes came out of reviewing various installations on women's issues.

 thumbnail_img20191101120901

This could be applied on a street to trigger responses from pedestrians with the following layout:

untitled-1

 

Project Context – Aesthetic and Conceptual

Shine the Light on Woman Abuse is a real campaign by the Abused Women's Centre in London, Ontario. There are many other campaigns running in October and November that address the same issue. This subject is close to me, as a lot of awareness work is still needed to change how society responds, and I wanted to approach it in an immersive way.

context

Keeping this campaign in mind, I began browsing how public art is being used as a medium to address similar issues. One project that really inspired me is titled “Embodying Culture: Interactive Installation on Women's Rights”, which uses projection mapping on a historical painting fed with Twitter data on the issue.

context-1

The above paper inspired me to use Twitter stories on a slider, but I was still missing a more aesthetic representation of this complex subject. Looking around for one, I was particularly intrigued by the use of shoes in the Yanköşe project, in which 440 pairs of women's shoes were hung on a building wall as public art.

context-2

 

Visual

The imagery and color palette for the posters were designed using the Shine the Light campaign guidelines, as follows.

front

second

first

The hardware used to build the installation is as follows:

  1. Shoe shapes fabricated in the maker lab
  2. FSR sensor
  3. A rug
  4. Arduino Micro

thumbnail_img20191031110406

thumbnail_img20191101152140

thumbnail_img20191031104117

thumbnail_img20191030235652

thumbnail_img20191031103400

For the software, the following two tools were used:

  1. Arduino: connects to the FSR pressure sensor and reads the analog values; any value above zero means a person is stepping on the shoes
  2. Processing: the values are sent from the Arduino over serial, and an if statement loads the images shown above (a minimal sketch of this follows)
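A minimal sketch of the Processing side, assuming the Arduino prints one analog reading per line over serial; the port index and image filenames are placeholders:

import processing.serial.*;

Serial port;
PImage idleImage, storyImage;
int reading = 0;

void setup() {
  size(1080, 720);
  idleImage  = loadImage("front.png");    // poster shown before anyone steps on the shoes (placeholder)
  storyImage = loadImage("story.png");    // story shown while the FSR is pressed (placeholder)
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = trim(p.readStringUntil('\n'));
  if (line != null) reading = int(line);
}

void draw() {
  if (reading > 0) {
    image(storyImage, 0, 0, width, height);  // someone is standing on the shoes
  } else {
    image(idleImage, 0, 0, width, height);
  }
}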

As a result, the project came together as shown below:

whatsapp-image-2019-11-04-at-9-05-30-pm

thumbnail_img20191101120853

whatsapp-image-2019-11-04-at-9-07-14-pm

 

Reference Links:

  • https://www.lawc.on.ca/shine-light-woman-abuse-campaign/
  • https://firstmonday.org/ojs/index.php/fm/article/view/5897/4418

GitHub:

  • https://github.com/arsalan-akhtar-digtal/experiment/blob/master/Arduino
  • https://github.com/arsalan-akhtar-digtal/experiment/blob/master/processing

 

Generative Mandala

Project Title
Generative Mandala (Assignment 3)

Group Member
Sananda Dutta (3180782)

Project Description
In this experiment the task was to create a tangible or tactile interface for a screen-based interaction. A strong conceptual and aesthetic relationship was to be developed between the physical interface and the events that happened on the screen using Arduino and Processing.

This experiment was my dig at generative visualization in which you can interact with and generate visuals of your choice. Geometry, its diversity and the possibilities of geometric variation have always fascinated me, so I decided to make user-specific visualizations. I identified values that could be treated as variables and assigned them to manual controls to create geometric patterns.

images

Image: The Nature of Code

The experiment is a simple yet complicated demonstration of how a basic geometric shape can take on different characteristics once repeated copies of the shape are added and altered at the same time. I played around with the number of vertices, the R, G and B values, and a reset function for the repeating geometry. The challenge of connecting the Arduino to Processing and making them communicate, with inputs coming from the Arduino and the output depicted in Processing, was something that worked in favour of this project.

Visuals and Images of the Process
Trying my hand at generative geometry with 2 components as variables – the red value and the number of vertices of the shape. The circuit looks something like this:
10

rotationalsymmetrydrawingfritzing_389ez16rzh

Image Ref: Project by eschulzpsd on Rotational Symmetry (Similar to the initial work in progress phase of  2 potentiometers)

After finalizing the 4 variables that alter the visualization, I tried to give the setup a cleaner look for presentation purposes by keeping things minimal and easy to understand, as shown below. Here, four hand-sized potentiometers (10k) and a momentary button are attached to the box, inside which there is space for the breadboard and Arduino board.
11

After making the initial setup, fluctuating R, G and B values along with fluctuating readings for the number of vertices meant the first couple of visuals looked like the one below. These were worked on, with proper soldering, solid connections and mapping, to get decently stable values.
12

After properly mapping the analog input values from the potentiometers to R, G and B in the range (0, 255) and mapping the number of vertices to (0, 10), the results looked better. Below are some visuals of the various digital outputs, followed by a sketch of that mapping.

frame-1

frame-2

frame-3
Images above: The 4 potentiometer values correspond to the number of vertices of the shape and the red, green and blue values. The button resets the visualization.
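A sketch of that mapping, assuming the four raw analog values arrive in the 0-1023 range; the variable names, the 3-to-10 clamp on the vertex count and the fixed radius are my own placeholders, not the project's exact code:

int potVertices = 512, potR = 800, potG = 300, potB = 600;  // raw 0-1023 readings (placeholders)

void setup() {
  size(600, 600);
}

void draw() {
  background(0);
  // clamp the vertex count to a drawable range (assumption: 3 to 10 sides)
  int sides = int(map(potVertices, 0, 1023, 3, 10));
  float r = map(potR, 0, 1023, 0, 255);
  float g = map(potG, 0, 1023, 0, 255);
  float b = map(potB, 0, 1023, 0, 255);

  stroke(r, g, b);
  noFill();
  // regular polygon centred on the canvas
  beginShape();
  for (int i = 0; i < sides; i++) {
    float a = TWO_PI * i / sides;
    vertex(width/2 + 200 * cos(a), height/2 + 200 * sin(a));
  }
  endShape(CLOSE);
}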

 

Links
For Video
Work in Progress: https://vimeo.com/371047180
Final Screen test with code: https://vimeo.com/371043434
Final Setup Video: https://vimeo.com/371032587

For Code (Github)
Arduino & Processing: https://github.com/sanandadutta/genvisualization

Project Context
I have always been fascinated by art that can be generated through code. The combination of visuals with code can create mind-blowing visualizations. This project gave me an opportunity to push myself to actually live that aspect and be one of those creators. Being a music lover from childhood, I have always admired the graphic visualizations that alter with the pitch, beats, bass, tempo and frequency of the music. When these aspects of music are mapped to the moving parts of an algorithm, the result is nothing but a treat to watch. The use of colour, motion, waves, lines, stroke thickness and so on can create something visually wild. Below are some explorations that gave me a good idea of which direction to choose for the project.

gart
Image: Math Rose(s) by Richard Bourne (https://www.openprocessing.org/sketch/776939)

the-deep-by-richard-bourne
Image: The Deep by Richard Bourne (https://www.openprocessing.org/sketch/783306)

wobbly-swarm-by-kevin
Image: Wobbly Swarm by Kevin (https://www.openprocessing.org/sketch/780849)

The works of amazing visual artists who have played with code through math, particles, fractals, lines, arrays, geometry, branching and visualization have been my inspirational pillars in taking this project ahead. Diving deep into the various factors that can be treated as variables and mapping them to motion or potentiometer values gave a sense of closure to this desire to step into the field of generative art.

Mapping the (0, 1023) values of the 10k potentiometers to the number of vertices and the R, G and B values was the first step in this exploration. I also confined the visualization to the screen boundary so that the overlapping effect could create wonders. Coding orderly gaps in the geometric formations that showed the transition between shapes, from line to triangle to square up to decagon, was a learning experience. Some major functions this project relied on were map(), millis() and random(), along with generative algorithms for the radii, which were tied to the outward-inward motion.

Browsing individual and group projects on makershed.com and the Arduino Project Hub helped me get a sense of the possibilities that could be explored. Using the analog inputs for the potentiometers and a digital pin for the momentary button, then importing those values into Processing, was made simpler with Kate Hartman and Nick Puckett's code. Learning to import analog and digital input values and map them to the relevant controls produced the exploratory visuals.

frame-4
Image: Live demo to the audience about the relation of the knobs to the visual

Since this project was a demonstration, I purposely did not label the knobs on the setup, so that the audience could play with them and figure out what each one stood for. Seeing the audience interact with the geometric visualizations and relate the knob values to what they saw was in itself an experience. I am glad to have explored this aspect of coding. As for taking it further, I would like to create a fading trace path for the shapes and explore mapping them to external noise/sound. I would also like to look into creating more variations of generative art that respond to different stimuli introduced into their environment.

Links and References
eschulzpsd. (2018, Dec 23). Rotational Symmetry Drawing. Retrieved from Project Hub: https://create.arduino.cc/projecthub/eschulzpsd/rotational-symmetry-drawing-613503?ref=tag&ref_id=processing&offset=8

III, F. M. (2016, Nov 30). Convert Scientific Data into Synthesized Music. Retrieved from Make community: https://makezine.com/projects/synthesized-music-data/

openprocessing.org. (n.d.). Retrieved from Open Processing: https://www.openprocessing.org/

Hartman, K., & Puckett, N. (2019, Oct). Exp3_Lab2_ArduinotoProcessing_ASCII_3AnalogValues. Retrieved from GitHub: https://github.com/DigitalFuturesOCADU/CC19/tree/master/Experiment3/Exp3_Lab2_ArduinotoProcessing_ASCII_3AnalogValues

An Interface to Interact with Persian Calligraphy

By Arshia Sobhan

This experiment is an exploration of designing an interface to interact with Persian calligraphy. On a deeper level, I tried to find some possible answers to this question: what is a meaningful interaction with calligraphy? Inspired by the works of several artists, along with my personal experience of practicing Persian calligraphy for more than 10 years, I wanted to add more possibilities for interacting with this art form. The output of this experiment was a prototype with simple modes of interaction to test the viability of the idea.

Context

Traditionally, Persian calligraphy has been mostly used statically. Once created by the artist, the artwork is not meant to be changed. Either on paper, on tiles of buildings or carved on stone, the result remains static. Even when the traditional standards of calligraphy are manipulated by modern artists, the artifact is usually solid in form and shape after being created.

I have been inspired by works of artists that had a new approach to calligraphy, usually distorting shapes while preserving the core visual aspects of the calligraphy.

"Heech" by Parviz Tanavoli Photo credit: tanavoli.com
“Heech” by Parviz Tanavoli
Photo credit: tanavoli.com
Calligraphy by Mohammad Bozorgi Photo credit: wsimag.com
Calligraphy by Mohammad Bozorgi
Photo credit: wsimag.com
Calligraphy by Mohammad Bozorgi Photo credit: magpie.ae
Calligraphy by Mohammad Bozorgi
Photo credit: magpie.ae

I was also inspired by the works of Janet Echelman, who creates building-sized dynamic sculptures that respond to environmental forces including wind, water, and sunlight. Her use of large pieces of mesh combined with projection creates wonderful 3D objects in space.

Photo credit: echelman.com

The project “Machine Hallucinations” by Refik Anadol was another source of inspiration that led to the idea of morphing calligraphy: displaying a cross-section of an invisible 3D object in the space, in which two pieces of calligraphy morph into each other.

Work Process

Medium and Installation

Very soon I had the idea of back projection on a hanging piece of fabric. I found it suitable in the context of calligraphy for three main reasons:

  • Freedom of Movement: I found this aspect relevant because of my own experience with the calligraphy. The reed used in Persian calligraphy moves freely on the paper, often very hard to control and very sensitive.
  • Direct Touch: Back projection makes it possible for the users to directly touch what they see on the fabric, without any shadows.
  • Optical Distortions: Movements of the fabric create optical distortions that make the calligraphy more dynamic without losing its identity.

Initially, I ran some tests on a 1m x 1m piece of light grey fabric, but for the final prototype I selected a larger piece of white fabric for a more immersive experience. However, the final setup was also limited by other factors, such as the specifications of the projector (luminance, short-throw ability and resolution). I tried to keep the human scale factor in mind when designing the final setup.

installation-sketch

hanging-fabric

Visuals

My initial idea for the visuals projected on the fabric was a morphing between two pieces of calligraphy. I used two works that I had created earlier based on two masterpieces by Mirza Gholamreza Esfahani (1830-1886). These two, along with another, were used in one of my other projects for the Digital Fabrication course, where I was exploring the concept of dynamic layering in Persian calligraphy.

My recent project for Digital Fabrication course, exploring dynamic layering in Persian calligraphy

Using several morphing tools, including Adobe Illustrator, I couldn't achieve a desirable result: these programmes were not able to maintain the characteristics of the calligraphy in the intermediate stages.

The morphing of two calligraphy pieces using Adobe Illustrator

Consequently, I changed the visual idea to match both the gradual change idea and the properties of the medium.

3f3d7u

After creating the SVG animation, all the frames were exported into a PNG sequence consisting of 241 final images. These images were later used as an array in Processing.

In the next step, after switching to two sensors instead of one, three more layers were added to this array. The purpose of those layers was to give users feedback when interacting with different parts of the interface. However, with only two sensors, this feedback was limited to differentiating left and right interactions.

Hardware

In the first version, I started working with one ultrasonic sensor (MaxBotix MB1000, LV-MaxSonar-EZ0) to measure the distance to the centre of the fabric and map it onto the index of the image array.

The issue with this sensor was its one-inch resolution. It resulted in jumps of around 12 steps in the image array, which was not satisfactory. I tried to scale the data from the distance sensor to increase the resolution (since I didn't need the sensor's whole range), but I still couldn't reduce the jumps to fewer than 8 steps, and the interaction was not smooth enough.
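One common way to soften such jumps, shown below purely as a sketch of an alternative rather than what was finally done, is to low-pass filter (lerp) the raw reading before mapping it onto the 241-frame sequence; the working-range values are placeholders:

float smoothed = 0;
int rawDistance = 60;   // latest ultrasonic reading, updated from serial (placeholder value)

int frameIndex() {
  // lerp toward the raw value each frame; 0.1 sets how aggressively it smooths
  smoothed = lerp(smoothed, rawDistance, 0.1);
  // 20-140 is a placeholder working range, mapped onto the 241-image sequence
  return int(constrain(map(smoothed, 20, 140, 0, 240), 0, 240));
}

void draw() {
  println(frameIndex());   // watch the smoothed index settle instead of jumping
}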

Distance data from LV-MaxSonar-EZ0 after calibration

For the second version, I used two VL53L0X laser distance sensors with a resolution of 1 mm. Although the datasheet claims a 2 m range, the range I could achieve was only 1.2 m. However, this range was enough for my setup.

Distance data from VL53L0X laser distance sensor with 1mm resolution
VL53L0X laser distance sensor in the final setup

Coding

Initially, I had an issue reading data from the two VL53L0X laser distance sensors. The library provided for the sensor included an example of reading data from two sensors, but the connections to the Arduino were not shown. This issue was resolved shortly, and I was able to read both sensors and send the data to Processing using the AP_Sync library.

I also needed to calibrate the data with each setup. For this purpose, I designed the code to be easily calibrated. My variables are as follows:

DL: data from the left sensor
DR: data from the right sensor
D: the average of DL and DR
D0: the distance of the hanging fabric in rest to the sensors (using D as the reference)
deltaD: the range of fabric movement (pulling and pushing) from D0 in both directions

With each setup, the only variables that need to be redefined are D0 and deltaD. In Processing, these data control different visual elements, such as the index of the image array. The x position of the gradient mask is controlled by the difference between DL and DR, with an additional speed factor that changes the sensitivity of the movement. A minimal sketch of this logic follows.
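This sketch follows the variable names above, but the numbers, the speed factor and the serial plumbing are my own assumptions rather than the project's code:

PImage[] imgs = new PImage[241];   // the PNG sequence, loaded in setup() in the real sketch
float DL = 600, DR = 600;          // left / right sensor readings in mm, updated from the Arduino
float D0 = 600;                    // fabric distance at rest, re-measured per installation
float deltaD = 300;                // usable push/pull range around D0
float speedFactor = 2.0;           // sensitivity of the left-right mask movement

int morphIndex() {
  float D = (DL + DR) / 2.0;
  // the push/pull range maps onto the 241-frame morphing sequence
  return int(constrain(map(D, D0 - deltaD, D0 + deltaD, 0, imgs.length - 1),
                       0, imgs.length - 1));
}

float maskX() {
  // the left/right difference shifts the gradient mask horizontally
  return width / 2 + (DL - DR) * speedFactor;
}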

Code Repository:
https://github.com/arshsob/Experiment3

References

-https://www.echelman.com/

-http://refikanadol.com/

-https://www.tanavoli.com/about/themes/heech/

-http://islamicartsmagazine.com/magazine/view/the_next_generation_contemporary_iranian_calligraphy/

 

Experiment 3: Block[code]

Block[code] is an interactive experience that engages the user in altering/modifying on-screen visuals using tangible physical blocks. The visuals were created using Processing, in an attempt to explore The Nature of Code approach to particle motion for creative coding.

Project by
Manisha Laroia

Mentors
Kate Hartman & Nick Puckett

Description
The experiment was designed to create a tangible interaction, i.e. play with the rectangular blocks, their selection and their arrangement, that would in turn alter the visual output, i.e. the organisation and motion of the rectangles on the screen. I conceptualised the project taking inspiration from physical coding, specifically Google's Project Bloks, which uses the connection and joining order of physical blocks to generate a code output. The idea was to use physical blocks, rectangular tangible shapes, to influence the motion and appearance of the rectangles on the screen, from random rectangles to coloured strips of rectangles travelling at a fixed velocity to all the elements on the screen accelerating, giving users the experience of creating visual patterns.

img_20191104_180701-01

Inspiration
Project Bloks is a research collaboration between Google, Paulo Blikstein (Stanford University) and IDEO with the goal of creating an open hardware platform that researchers, developers and designers can use to build physical coding experiences. It is a system that developers can customise, reconfigure and rearrange to create all kinds of different tangible programming experiences. See the detailed project here.

projectbloks-580x358

Gene Sequencing Data
The visuals were largely inspired by gene or DNA sequencing data from my brief stint in the world of biotechnology. I used to love the vertical motion and the layering effect the sequencing data would create in the visual output, and I wanted to generate that using particle motion and code. I also wanted to tie together the commonality between genetic code and computer code and bring it out in the visual experience.

gene-sequencing-data
DNA sequencing. Image sourced from NIST.gov.

Mosaic Brush sketch on openprocessing.org by inseeing generates random pixels on the screen and uses mouseDragged and keyPressed functions for pixel fill and visual reset. The project can be viewed here.

pixel-project

The Process
I started the project by making class objects and writing code for simpler visuals like fractal trees and single-particle motion. Taking single-particle motion as a reference, I experimented with location, velocity and acceleration to create a running stream of rectangle particles. I wanted the rectangles to leave a tail or trace as they moved vertically down the screen, so I played with changing opacity with distance and with calling the background only in the setup function to get a stream or trace behind the moving rectangle particle [1].

In the next iterations I created a class for these rectangle particles and gave it move, update and system-velocity functions based on their location on the screen. Once I was able to create the desired effect in a single particle stream, I created multiple streams of particles with different colours and parameters for the multiple-stream effect.
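A stripped-down version of that kind of particle class, with my own names and numbers rather than the project's exact code; the translucent background in draw() is what leaves the fading trace:

class RectParticle {
  PVector location, velocity;
  color c;

  RectParticle(float x, color c_) {
    location = new PVector(x, 0);
    velocity = new PVector(0, random(2, 6));
    c = c_;
  }

  void update() {
    location.add(velocity);
    if (location.y > height) location.y = 0;   // wrap back to the top of the stream
  }

  void display() {
    noStroke();
    fill(c);
    rect(location.x, location.y, 10, 20);
  }
}

ArrayList<RectParticle> stream = new ArrayList<RectParticle>();

void setup() {
  size(400, 800);
  for (int i = 0; i < 30; i++) {
    stream.add(new RectParticle(random(width), color(0, 255, 180)));
  }
}

void draw() {
  // a translucent rectangle instead of a full clear leaves a fading trace
  noStroke();
  fill(0, 20);
  rect(0, 0, width, height);
  for (RectParticle p : stream) {
    p.update();
    p.display();
  }
}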

img_20191031_124817-01

img_20191031_171129-01

img_20191104_163708-01-01

In the basic model of a Tangible User Interface, the interface between people and digital information requires two key components: input and output, or control and representation. Controls enable users to manipulate the information, while representations are perceived with the human senses [2]. Coding is an on-screen experience, and I wanted the participant to use the physical tangible blocks as an interface that influences and builds the visuals on the screen. The tangible blocks served as the controls for manipulating the information, and its representation was displayed as changing visuals on the screen.

the-tui

setup-examples

Choice of Aesthetics
The narrative tying physical code to biological code was the early inspiration I wanted to build the experiment around. The visuals were in particular inspired by gene sequencing visuals, of rectangular pixels running vertically in a stream. The tangible blocks were chosen to be rectangular too, with coloured stripes marked on them to relate each one to a coloured stream on the screen. The vertical screen in the setup was used to amplify the effect of the visuals moving vertically. The colours for the bands were selected based on the fluorescent colours commonly seen in the gene sequencing inspiration images, due to the use of fluorescent dyes.

mode-markings

img_20191105_114412-01

img_20191105_114451-01

Challenges & Learnings
(i) One of the key challenges in the experiment was to make a seamless tangible interface, such that the wired setup doesn't interfere with the user interaction. Since it was an Arduino-based setup, getting rid of the wires was not possible, but they could have been hidden in a more discreet physical setup.
(ii) Ensuring the visuals came out with the desired effect was also a challenge, as I was programming particle systems for the first time. I managed this by creating a single particle with the right parameters and then applying it to more elements in the visual.
(iii) Given more time, I would have created more functions like the accelerate function that could alter the visuals, such as slowing the frame rate, reducing the width or changing the shape itself.
(iv) The experiment was exploratory in terms of the possibilities of this technology and software, and it left room for discussion about what it could be rather than being conclusive. Questions that came up in the presentation were: How big do you imagine the vertical screen? How do you see these tangibles becoming more playful and seamless?

img_20191105_001118

Github Link for the Project code

Arduino Circuits
The circuit for this setup was fairly simple, with a pull-up resistor circuit and DIY switches using aluminium foil.
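For reference, the reading logic for switches like these looks roughly like the Arduino sketch below; it assumes the Arduino's internal pull-ups and pins 2-4, which are placeholders, while the project used its own pull-up resistor circuit with foil contacts:

const int switchPins[3] = {2, 3, 4};

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 3; i++) {
    pinMode(switchPins[i], INPUT_PULLUP);  // reads HIGH until the foil contacts touch
  }
}

void loop() {
  // send the three states as one comma-separated line for Processing to parse
  for (int i = 0; i < 3; i++) {
    Serial.print(digitalRead(switchPins[i]) == LOW ? 1 : 0);  // 1 = block placed
    if (i < 2) Serial.print(",");
  }
  Serial.println();
  delay(50);
}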

arduino-circuit-1

arduino-circuit-2

References
[1] Shiffman, Daniel. The Nature of Code. California, 2012. Digital.

[2] Hiroshi Ishii. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international conference on Tangible and embedded interaction (TEI ’08). ACM, New York, NY, USA, xv-xxv. DOI=http://dx.doi.org/10.1145/1347390.1347392

Experiment 3: Blue Moon

Project Title: Blue Moon
Names of group of members: Lilian Leung

Project Description

Blue Moon is a reactive room light that detects signs of anxiety through hand gestures and encourages the participant to take time away from their screen and practice mindfulness. The gold mirror emits a blue glow when the participant is detected clenching their fist. To achieve a warm light, the participant's right hand needs to be unclenched. The second switch is activated by pressing the right hand over the left, which causes the music to start playing. The joined-hands switch keeps the participant focused and relaxed and stops them from reaching for a mobile device or computer. The project is set within a bedroom, for times before rest or when trying to relax or meditate. The screen projection is intended for the ceiling, as the viewer should be lying down with their hands together.


Project Process

October 23 – 24
For my initial prototype, I began by mapping two potentiometers to two LEDs. By creating a small-scale prototype I could slowly upgrade each section of the project to larger outputs, such as swapping the small LEDs for an LED strip and replacing the potentiometers with flex sensors.

img_20191024_131639

October 25
Using Shiffman's example of creating rain ripples, I had difficulty controlling multiple graphic elements on the screen, as the ripples affected all the visible pixels. Exploring OpenProcessing, I found a simpler sketch by N.Kato (n.d.) that I could build from, with ellipses generated based on frame rate that I could control. Rather than having the animation begin abruptly, I added an intro screen, triggered by mouse click, before moving on to the main reactive animation.

Using my current breadboard prototype with the potentiometers and LED, I swapped the potentiometer switch for a pressure sensor made of aluminum foil and velostat. After adjusting the mapping of the values to reduce the sensor noise, I was able to map the pressure sensors for two hands:

Switch 1: Left Hand Increase Rate of Rain
Switch 2: Right Hand Control Audio Volume

* I used the Mouse.h library during initial exploration to trigger the intro screen and connected it to a physical switch, though I started having trouble controlling my laptop, so I temporarily disabled it.

October 26
To upgrade my breadboard prototype's LED, I purchased a neon blue LED strip. I initially followed Make's Weekend Projects – Android-Arduino LED Strip Lights (2013) guide to see how I could connect my LED strip (which requires 9V) to my 3.3V Arduino Nano. One problem I didn't expect with Make's video was that they used an RGB LED strip, which has a different circuit, while mine was a single colour.

RGB LED strip: 12V, R, G, B
My LED strip: 12V input, 12V output

I went to Creatron and picked up the neon LED strip, a TIP31 transistor, and an external 9-volt power source and power jack. Rewiring the breadboard prototype proved difficult, as I had to dedicate a separate rail of the breadboard solely to 9V and learn how to hook up the TIP31 transistor to my digital pins to send values to the LED strip.

One realization from online references using different types of transistors was that the 3 pins (base, collector, emitter) are laid out in a different order depending on the part used.

Pin 1 Base (Info + Resistor)
Pin 2 Collector (Negative end of the LED strip)
Pin 3 Emitter (Ground)

The diagram reference that ended up being the most useful was this:

8c37ae69e775698cb60b99db1dcc86ea

Figure 1. Sound Activated Light Circuit, n.d.

After hooking the LED strip up to my Arduino properly and mapping the potentiometer to the LED strip's brightness, I began rewiring the circuit to use a flex sensor instead of the potentiometer. To test the sensor, I first used two copper coins and velostat.
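A minimal dimming sketch along those lines, assuming the TIP31's base is driven from a PWM pin through a resistor and the sensor sits in a voltage divider on A0 (the pin numbers and sensor range are placeholders, not the project's values):

const int ledPin = 9;      // PWM pin -> resistor -> TIP31 base
const int sensorPin = A0;  // velostat/flex sensor voltage divider

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int raw = analogRead(sensorPin);                 // 0-1023
  int brightness = map(raw, 200, 900, 0, 255);     // usable range found by testing (placeholder)
  brightness = constrain(brightness, 0, 255);
  analogWrite(ledPin, brightness);                 // the TIP31 switches the 9 V strip
  delay(20);
}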

 

lighttest

sensortypes

From there I made two types of pressure sensors. The first used aluminum foil and velostat as a flex sensor that I could hide inside a glove to detect pressure when clenching my fist, with pressure along my index finger from the first to the second knuckle. Another factor when attaching the flex sensor inside the glove was making sure the sensor stayed attached when I clenched my fist, as the sensor would pull forward, follow my finger joints and easily detach the sensor cables.

pressuresensortest

October 27-29
Expanding the physical Arduino component of the project, I still needed to laser-cut acrylic to make the circular ring that my LED strips would wrap around. I had never laser cut before, but with some help from Neo I was able to lay out my two files: a 17” acrylic circle for the front, and five 11.5” rings cut from 0.25” board that would house my Arduino and cables, give the LED strips a form to wrap around, and create enough space between the wall and the light.

Additional updates were made to the prototype, such as adding the second, warm neon LED to use as the main light, and replacing the solid-core wiring for my sensors with stranded wire so the cables were more flexible and comfortable when participants put on the glove sensors.

The wooden ring was then sanded, with holes drilled in from the side for the power cables and for the LED strips' stranded wire to pass from the exterior of the ring into the interior where the Arduino sits. Once all the wiring was finished, I moved the components from my breadboard onto a protoboard to have everything securely in place. I also needed terminal blocks so that I could reuse my LED strips later instead of soldering their wires into the protoboard.

protoboardtest

glovetest

Once the physical pieces were cut, I went back to redesigning the intro page to create more visual texture rather than just using solid coloured graphics. Within Processing, I added a liquid ripple effect from an OpenProcessing sketch by oggy (n.d.) and adjusted the code to map the ripples to the hand sensors instead of mouse coordinates.

The last issue to solve was how to get the intro screen to disappear when my hands were together in a relaxed position. Because of the multiple audio files being played, I had issues with audio looping when calling each page as a function and using booleans. In the end, I laid out the sketch using sensor triggers and if statements. Possibly due to the number of assets loaded within Processing, I struggled to get the animation to load consistently, as assets would go missing without any alterations to the code. In the end, I removed the rippling image effect.


Project Context

With the days getting darker and the nights getting longer, seasonal affective disorder is around the corner. My current room doesn't have a window or an installed light, making the space usually dark with little natural light coming in. This, combined with struggling with insomnia and an inability to relax at night, led to creating this project.

roomexample

One finding for treating seasonal affective disorder is the use of light therapy. This treatment usually involves the use of artificial sunlight with a UV bulb or finding time to get natural sunlight. Terman (2018) suggests that treatment ranging from 20-60 minutes a day can lead to an improved mood over the course of four weeks.  

With a bad habit of multitasking, I find it difficult to concentrate and simply make time to rest without the urge to be productive. This proved to be a problem, as using social media can also lead to feelings of depression and loneliness (Mammoser, 2018) due to anxiety, fear of missing out and constant ‘upward social comparisons’, depending on how frequently one checks their accounts.

To force myself to make time to relax, removed from my digital devices, the experiment's main functions were, first, to detect signs of anxiety and, second, to make me stay still, present within my space and immersed in a brief moment of meditation. Navarro (2010) in Psychology Today describes nonverbal signs of stress such as rubbing the hands together or clenching as a form of “self massaging” or pacifying the self. To force myself to stay still and present, I decided my second trigger would be having both hands together in a relaxed position.

handgestures

Aesthetically, I chose a circular form, symbolic of both the sun and the moon, as it would be the main light source in my room. With the alternating lights, the circle appears like a sun, and like an eclipse when the neon blue light is on. The visual style was inspired by Olafur Eliasson's The Weather Project (2003), the artist's elemental style and his use of mirrors and lights to create the sun and sky within the Tate Modern's Turbine Hall. The Weather Project explored how weather shapes a city, and how the city itself becomes a filter for how locals experience fundamental encounters with nature.

olafur_eliasson_weather_project_02

The neon light is emitted when my right hand is clenched, limiting the light in the room and prompting me to unclench my fist. The Processing visuals are supported by the white noise of wind and rain as a distraction from my surroundings, since white noise is beneficial for cutting out background noise by providing stimulation to the brain without being overly exciting (Green, 2019).

handgestures

The audio from the Processing sketch is also mapped to my right hand, with the audio lower when my fist is clenched and louder when my hand is open, prompting deeper immersion into the meditation. When I bring my hands together, right over left, in a relaxed position, the sketch dissolves the intro screen to fill the ceiling with the imagery of falling rain and louder audio. Within a few moments a selected song plays; for my Processing sketch I used one of my favourite songs, Wide Open by the Chemical Brothers (2016).

handgestures2

During my exploration phase, I tried to trigger Spotify or YouTube to play, but because it would jump outside the Processing program and bring me back to social media and the internet, I opted to have the audio play within the sketch.

Additional Functionality for Future Iterations

  1. Connecting the Processing Sketch to a possible Spotify API that could be controlled with physical sensors. 
  2. Connecting to a weather API and having the images and audio switch depending on the weather.
  3. Adding additional hand gesture sensors to act as a tangible audio player.

screenshot-2019-11-01-at-11-14-57-pm

img_20191101_182827_ceiling

img_20191102_011141-copy

dsc_0429-light

dsc_0429-2_light

Project Code Viewable on GitHub
https://github.com/lilian-leung/experiment3

Project Video


Citation and References

Sound Activated Light Circuit. (n.d.). Retrieved from https://1.bp.blogspot.com/-2s2iNAOFnxo/U3ohyK-AjJI/AAAAAAAAADg/gAmeJBi-bT8/s1600/8C37AE69E775698CB60B99DB1DCC86EA.jpg

Ali, Z., & Zahid. (2019, July 13). Introduction to TIP31. Retrieved from https://www.theengineeringprojects.com/2019/07/introduction-to-tip31.html.

Green, E. (2019, April 3). What Is White Noise And Can It Help You Sleep? Retrieved October 31, 2019, from https://www.nosleeplessnights.com/what-is-white-noise-whats-all-the-fuss-about/.

Make: (2013, August 23) Weekend Projects – Android-Arduino LED Strip Lights. Retrieved from https://www.youtube.com/watch?v=Hn9KfJQWqgI

Mammoser, G. (2018, December 14). Social Media Increases Depression and Loneliness. Retrieved October 31, 2019, from https://www.healthline.com/health-news/social-media-use-increases-depression-and-loneliness.

Nevarro, J. (2010, January 20.). Body Language of the Hands. Retrieved October 31, 2019, from https://www.psychologytoday.com/us/blog/spycatcher/201001/body-language-the-hands.

N.Kato (n.d.) rain. Retrieved from https://www.openprocessing.org/sketch/776644

oggy (n.d.) Liquid Image. Retrieved from https://www.openprocessing.org/sketch/186820

Processing Foundation. (n.d.). SoundFile::play() \ Language (API) \ Processing 3 . Retrieved October 31, 2019, from https://processing.org/reference/libraries/sound/SoundFile_play_.html.

Studio Olafur Eliasson (Photographer). (2003). The Weather Project. [Installation]. Retrieved from https://www.tate.org.uk/sites/default/files/styles/width-720/public/images/olafur_eliasson_weather_project_02.jpg

Tate. (n.d.). About the installation: understanding the project. Retrieved November 2, 2019, from https://www.tate.org.uk/whats-on/tate-modern/exhibition/unilever-series/unilever-series-olafur-eliasson-weather-project-0-0.

Tate. (n.d.). Olafur Eliasson the Weather Project: about the installation. Retrieved November 2, 2019, from https://www.tate.org.uk/whats-on/tate-modern/exhibition/unilever-series/unilever-series-olafur-eliasson-weather-project-0

Terman, M. (2018, December). Seasonal affective disorder. Retrieved October 31, 2019, from https://www-accessscience-com.ocadu.idm.oclc.org/content/900001.

The Coding Train (2018, May 7) Coding Challenge #102: 2D Water Ripple. Retrieved from
https://www.youtube.com/watch?v=BZUdGqeOD0w&t=630s

I Might be Just a Text on Your Screen

Project Title: I Might be Just a Text on Your Screen
Names of Group Members: Nilam Sari

Project Description:

This experiment is a tool to help someone who is experiencing a panic attack. “I Might be Just a Text on Your Screen” walks the interactor through a panic attack with a series of texts and an interactive, human-hand-shaped piece of hardware to hold on to.

Work in progress:

The first step I took to figure out this project was to compose the text that would appear on the screen. This text is based on the personal guide I wrote to myself to get through my own panic attacks.

text1

The reason that I included the part where it says “I might be just a text on your screen, with a hand made out of wires” is because I don’t want this to be a tool that pretends to be something else that it is not. It is in fact just a piece of technology that I, a human, happen to create to help people who are going through this terrible experience I’ve had.

The next step was to figure out how to display the text. That was a learning curve for me because I had never worked with Strings in Processing before. I learned how to display text from Processing's online reference. It was fine until I ran into the problem of how to make the text appear letter by letter, as if it were being typed out on the computer.

At first I thought I had to separate the text character by character, so I watched a tutorial on Dan Shiffman's channel and ended up with this

text2

But making the characters appear one by one meant I had to use millis(), so I did that and ended up with this

text3

But it didn't work the way I wanted it to. Instead of displaying the characters one by one, the program just froze up for a couple of seconds and then displayed the characters all at once. So I went through more forums, found a simpler way to do it without using millis(), and incorporated it into my file.
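I don't know which forum solution was used, but the common frame-based approach looks like this in Processing (the message text and timing are placeholders): reveal one more character every few frames instead of timing with millis().

String message = "Hey. Take a deep breath.";   // placeholder text
int visibleChars = 0;

void setup() {
  size(800, 400);
  textSize(24);
}

void draw() {
  background(0);
  // every 3rd frame, reveal one more character until the whole line is shown
  if (frameCount % 3 == 0 && visibleChars < message.length()) {
    visibleChars++;
  }
  text(message.substring(0, visibleChars), 50, height/2);
}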

After I got that done, the next step was to build the physical component. I wanted it to act as an extension of the computer that lets the interactor navigate through the text. I bought a 75-cent pair of gloves and stuffed one. Then it was time to work on the velostat. I tested it out with a fading LED, mapping the brightness to the amount of pressure on the velostat.

img_8073

I followed the step-by-step on Canvas and the velostat testing worked fine: the light goes brighter when more pressure is put on the velostat and dimmer when there's less. I'm using the same setup, but instead of mapping, I just use multiple thresholds between 0 and 1023 so the program knows when the sensor is pressed at different pressures.

I slipped the velostat into the glove, so when you squeeze the ‘computer's hand’, it activates the velostat. I went to Creatron to get a heat pad to put inside the glove, to mimic body heat. It's powered by the Arduino's 5V port.

pic1

The next step was to figure out how to go through the text page by page. I had trouble figuring this out, so I asked Nick about it. He suggested creating an int pageNumber and incrementing it at the end of every message. I added a timer with millis() to create a couple of seconds of buffer before it changes pages. It worked wonderfully.
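A minimal sketch of that page logic with the millis() buffer; the messages, threshold and timings below are placeholders, not the project's actual values:

String[] pages = {
  "Hey. You're okay.",             // placeholder text
  "Try to slow your breathing.",
  "Squeeze the hand whenever you need to."
};
int pageNumber = 0;
int lastFlip = 0;
int flipDelay = 2000;        // ms buffer before another page change can happen
int squeezeThreshold = 600;  // velostat reading that counts as a squeeze (assumption)
int sensorValue = 0;         // updated from the Arduino over serial in the full sketch

void setup() {
  size(800, 400);
  textSize(20);
}

void draw() {
  background(0);
  text(pages[pageNumber], 50, height/2);

  if (sensorValue > squeezeThreshold
      && millis() - lastFlip > flipDelay
      && pageNumber < pages.length - 1) {
    pageNumber++;            // advance to the next message
    lastFlip = millis();
  }
}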

There were a couple of hiccups here and there while I was programming, but taking a break from it and going back into it helped me with solving most of the problems I didn’t mention above.

After everything was set, I put together the wires and soldered them onto a small protoboard.

pic2

img_8103

Link to code on Github: https://github.com/nilampwns/experiment3

Documentation:

document1

pic3

project3diagram2

Project Context:

This project's idea came from my own experience dealing with panic attacks. Panic attack symptoms include physical sensations such as breathlessness, dizziness, nausea, sweating, trembling and palpitations, as well as cognitive symptoms like fear of dying and going crazy. Some people who suffer from panic disorder can experience this more than four times a month. Medication and therapy can help treat panic disorder, but not everybody has access to those things. This project is not a tool to replace medical treatment of panic disorder; however, it can be a helpful tool that walks one through a panic attack when no one else is around to assist, because it can get really scary to deal with this on your own.

When I used to suffer from constant panic attacks, I kept a piece of paper in my wallet with instructions on how to get through a panic attack on my own. These instructions are messages from myself to myself in the middle of a panic attack, and they inspired the text that appears on the screen. I thought that if a piece of paper could help me get through my own panic attacks, then an interactive piece would be a step up from that.

Technology has been used to help people with mental health issues, especially on smartphones. Smartphone apps provide useful functions that can be integrated into conventional treatments (Luxton et al., 2011). There are already apps out there that help people with their anxieties, such as Headspace (2012), which helps people meditate, and MoodPath (2016), an app that helps people keep track of their depressive episodes.

pic4

(Left, Headspace (2011); Right, MoodPath(2016))

However, I don't want this tool to appear as something that it is not. I don't want this project to pretend that it understands what the interactor is going through. In the end, this is just a string of code that appears on your screen, along with a physical interactive piece made of wires.

This reminds me of a point Caroline Langill made in regard to Norman White's piece: “… an artwork that heightens our awareness of ourselves and our behaviours by emulating a living thing rather than being one.” The piece performs empathy and offers companionship without knowing that that is what it is doing. So if interactors feel they are being empathized with, is it real empathy being offered by this project? Is it real empathy the interactor is feeling, or merely the illusion of empathy from a machine? Sherry Turkle asked this question in her book “Reclaiming Conversation”, raising the concern of technology replacing actual human contact. For my project, I don't want this to be something that replaces treatment or help from other people and society, but rather a tool to close a gap left by human failing: mental health resources not being widely available for those who need them.

Reference

Langill, Caroline S.. “The Living Effect: Autonomous Behavior in Early Electronic Media Art”. Media Art Histories. MIT Press, 2013.

Luxton, David D.; McCann, Russell A.; Bush, Nigel E.; Mishkind, Matthew C.; and Reger, Greg M.. “mHealth for Mental Health: Integrating Smartphone Technology in Behavioral Healthcare”. Professional Psychology: Research and Practice. 2011, Vol. 42, No. 6, 505–512.

Turkle, Sherry. “Reclaiming Conversation: The Power of Talk in a Digital Age”. New York: Penguin Press, 2015.