Trumpet

by Nadine Valcin


DESCRIPTION

The trumpet is a fitting instrument as the starting point for an installation about the world’s most infamous Twitter user. The piece combines a display of live tweets tagged with @realDonaldTrump and a trumpet that delivers real audio clips from the American president. It is meant to be installed at room scale, providing a real-life experience of the social media echo chambers that so many of us confine ourselves to.

The piece constantly emits a low static sound, signalling the distant chatter that is always present on Twitter. A steady stream of tweets from random users, all tagged with the president’s handle, is displayed on the screen, giving a portrait of the many divergent opinions about the current state of the presidency.

Visitors can manipulate a trumpet that triggers audio. A sample of the Call to the Post melody played at the start of horse races can be heard when the trumpet is picked up. The three trumpet valves, when pressed, play short clips (the verbal equivalent of tweets) from the president himself. Metaphorically, Trump is in dialogue with the tweets displayed on the screen in this enclosed ecosystem. The repeated clips create a live sonic echo chamber, physically recreating what happens virtually online.


CONTEXT

My initial ideas were centered on the fabrication of a virtual version of a real object: a virtual bubble blower that would create bubble patterns on a screen, and a virtual kaleidoscope. I then flipped that idea and moved to using a common object as a controller, giving it a new life by hacking it in some way to give it novel functionalities. Those functionalities would have to be close to the object’s original use, yet surprising in some way. The ideal object would have a strong tactile quality. Musical instruments soon came to mind. They are designed to be constantly handled, have iconic shapes, and are generally well made from natural materials such as metal and wood.

Image from Cihuapapalutzin

In parallel, I developed the idea of using data in the piece. I had recently attended the Toronto Biennial of Art and was fascinated by Fernando Palma Rodríguez’s piece Cihuapapalutzin, which integrates 104 robotic monarch butterflies in various states of motion. They were built to respond to seismic frequencies in Mexico: every day, a new data file is sent from that country to Toronto and uploaded to control the movement of the butterflies. The piece is meant to bring attention to the plight of the unique species that migrates between the two countries. The artwork led me to see the potential of data visualisation for making impactful statements about the world.

Image from Just Landed

I then made the connection to an example we had seen in class. Just Landed by Jer Thorp shows real-time air travel patterns of Twitter users on a live map. The Canadian artist, now based in New York, used Processing, Twitter and MetaCarta to extract longitude and latitude information from a query on Twitter data to create the work.

Image from Listen and Repeat

Another inspiration was Listen and Repeat by American artist Rachel Knoll, a piece featuring a modified megaphone installed in a forest that used text-to-speech software to enunciate tweets labeled with the hashtag “nobody listens”.

As I wanted to make a project closer to my politically engaged artistic practice, Twitter seemed a promising way to obtain live data that could then be presented on a screen. Of course, that immediately brought to mind one of the most prolific and definitely the most infamous of Twitter users: Donald Trump. The trumpet then seemed a fitting controller, both semantically and in its nature as a brash, bold instrument.

PROCESS

Step 1: Getting the Twitter data

Determining how to get the Twitter data required quite a bit of research. I found the Twitter4J library for Processing and downloaded it, but still needed more information on how to use it. I happened upon a tutorial on the British company CodaSign’s blog about searching Twitter for tweets. It outlined the necessary steps along with the code. I then created a Twitter developer account and got the keys required to access the data through their API.

Once I had access to the Twitter API, I adjusted the parameters in the code from the CodaSign website, modifying it to suit my needs. I set up a search for “@realDonaldTrump”, not knowing how much data it would yield, and was pleasantly surprised when it resulted in a steady stream of tweets.
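The search itself comes down to a handful of Twitter4J calls. Here is a minimal sketch of the approach (assuming the Twitter4J Processing library is installed; the keys, query count and text layout are placeholders, not the project’s actual code):

```java
import twitter4j.*;
import twitter4j.conf.*;

ArrayList<String> tweets = new ArrayList<String>();

void setup() {
  size(800, 600);
  textSize(16);
  // OAuth credentials from the Twitter developer account (placeholders)
  ConfigurationBuilder cb = new ConfigurationBuilder();
  cb.setOAuthConsumerKey("CONSUMER_KEY");
  cb.setOAuthConsumerSecret("CONSUMER_SECRET");
  cb.setOAuthAccessToken("ACCESS_TOKEN");
  cb.setOAuthAccessTokenSecret("ACCESS_SECRET");
  Twitter twitter = new TwitterFactory(cb.build()).getInstance();
  try {
    Query query = new Query("@realDonaldTrump");  // the search used in the piece
    query.setCount(20);                           // tweets per request
    QueryResult result = twitter.search(query);
    for (Status status : result.getTweets()) {
      tweets.add(status.getText());
    }
  } catch (TwitterException e) {
    println("Search failed: " + e.getMessage());
  }
}

void draw() {
  background(0);
  fill(255);
  // show the first few results, each wrapped inside a text box
  for (int i = 0; i < tweets.size() && i < 8; i++) {
    text(tweets.get(i), 20, 20 + i * 70, width - 40, 65);
  }
}
```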

Step 2: Programming the interaction

Now that the code was running in Processing, I set it up to get data from the Arduino. I programmed three switches, one for each valve of the trumpet, and also used Nick’s code to send the gyroscope and accelerometer data to Processing, in order to determine which data was most pertinent and what the thresholds should be for each parameter. The idea was that the gyroscope data would trigger sounds when the trumpet was moved, while the three trumpet valves would manipulate the tweets on the screen with various effects on the text’s font.

I soon hit a snag: at first it seemed like Processing wasn’t getting any information from the Arduino. Looking at the code, I noticed several delay commands at various points. I remembered Nick’s warning about the delay command being problematic and realized that this, unfortunately, was a great example of it.

I knew the solution was to program the intervals using the millis() function. I spent a day and a half attempting a fix but failed, and needed Kate Hartman’s assistance to solve the issue. I had also discovered that the Twitter API would disconnect me if I ran the program for too long, so I had to test in fits and starts, sometimes finding myself unable to get any Twitter data for close to an hour.
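For reference, the non-blocking pattern looks something like this on the Arduino side (a sketch of the technique rather than the project’s code; the helper function and interval are assumptions):

```cpp
// Keep loop() responsive by checking elapsed time instead of calling delay().
unsigned long lastSend = 0;
const unsigned long INTERVAL = 50;  // ms between serial updates

void setup() {
  Serial.begin(9600);
  pinMode(2, INPUT_PULLUP);  // e.g. one valve switch
}

void loop() {
  if (millis() - lastSend >= INTERVAL) {
    lastSend = millis();
    readAndSendSensors();
  }
  // loop() never blocks, so serial traffic and other inputs stay responsive
}

// Hypothetical stand-in for the sketch's actual serial output
void readAndSendSensors() {
  int valve1 = digitalRead(2);
  Serial.println(valve1);  // the real sketch would send all switches + IMU values
}
```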

I attempted to program some effects to visually manipulate the tweets, triggered by the activation of the valves. I had difficulty affecting only one tweet, as the effects would apply to all subsequent tweets. Also, given that the controller was a musical instrument, sound felt better suited than a visual effect. At first, I loaded cheers and boos from a crowd that users could trigger in reaction to what was on screen, but I finally settled on Trump clips, as it seemed natural to feature his very distinctive voice. It was suitable both because he takes to Twitter to make official declarations and because of the horn’s long history as an instrument announcing the arrival of royalty and other VIPs.

As the clock was ticking, I decided to work on the trumpet and return to working on the interaction when the controller was functional.

Step 3: Hacking the trumpet

Trumpet partly disassembled

I was fortunate to have someone lend me a trumpet. I disassembled all the parts to see if I could make a switch that would be activated by the piston valves. I soon discovered that the angle from the slides to the piston valves is close to 90 degrees, and given the small aperture connecting the two, a switch there would be nearly impossible.

Trumpet parts
Trumpet valve and piston
Trumpet top of valve assembly without piston

The solution I found was taking apart the valve piston while keeping the top of the valve, and replacing the piston with a piece of cut Styrofoam. The wires could then come out of the bottom casing caps and connect to the Arduino.


I soldered wires to 3 switches and then carefully wrapped the joints in electrical tape.

Arduino wiring

A cardboard box was chosen to house a small breadboard. Holes were made so that the bottoms of the valves could be threaded through, and the lid of the box could be secured to the trumpet using the bottom casing caps. Cardboard was chosen to keep the instrument light, as close as possible to its normal weight and balance.

Finished trumpet/controller

Step 4: Programming the interaction, part 2

The acceleration on the Y axis was chosen as the trigger for the trumpet sound. But given the imbalance in the trumpet’s weight, it tended to trigger the sound in rapid succession before stopping, and raising the threshold didn’t help. With little time left, I programmed the valves/switches to trigger short Trump clips. I would have loved to accompany them with a visual distortion, but the clock ran out before I could find something appropriate and satisfactory.
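One fix worth trying in a future iteration is gating the trigger with a cooldown rather than a higher threshold, so a single crossing plays the sound once. A minimal Processing sketch of the idea (the clip filename, threshold and timing are assumptions to tune):

```java
import processing.sound.*;

SoundFile callToPost;        // the Call to the Post clip (placeholder filename below)
float accelY = 0;            // would be updated from the Arduino's serial stream
int lastTrigger = 0;
final int COOLDOWN = 2000;   // ms before the fanfare may fire again
final float THRESHOLD = 1.5; // tune against the trumpet's resting noise

void setup() {
  size(200, 200);
  callToPost = new SoundFile(this, "call_to_post.mp3");
}

void draw() {
  if (accelY > THRESHOLD && millis() - lastTrigger > COOLDOWN) {
    callToPost.play();
    lastTrigger = millis();  // one crossing = one fanfare, however the trumpet wobbles
  }
}
```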

Reflections

My ideation process is slow, and that was definitely a hindrance in this project. I attempted something more complex than I had originally anticipated, and the bugs I encountered along the way made it really difficult. One of the things I struggle with when coding is knowing when to persevere and when to stop. I spent numerous hours trying to debug at the expense of sleep and, in hindsight, it wasn’t useful. It also feels like the end result isn’t representative of the time I spent on the project.

I do think, though, that the idea has some potential, and given the opportunity I would revisit it to make it a more compelling experience. Modifications I would make include:

  • Adding a number of Trump audio clips and randomizing their triggering by the valves
  • Building a sturdier box to house the Arduino, so that the trumpet could rest on it, and contemplating some kind of stand to constrain its movements
  • Having video or a series of changing photographs as a background to the tweets on screen, reacting to the triggering of the valves

Link to code on Github:  https://github.com/nvalcin/CCassignment3

References

Knoll, Rachel. “Listen and Repeat.” Rachel Knoll – Minneapolis Filmmaker, rachelknoll.com/portfolio/listen-and-repeat. Accessed October 31, 2019.

Thorp, Jer. “Just Landed: Processing, Twitter, MetaCarta & Hidden Data.” blprnt.blg, May 11, 2009, blog.blprnt.com/blog/blprnt/just-landed-processing-twitter-metacarta-hidden-data. Accessed October 25, 2019.

“Fernando Palma Rodríguez at 259 Lake Shore Blvd E.” Toronto Biennial of Art, torontobiennial.org/work/fernando-palma-rodriguez-at-259-lake-shore/. Accessed October 24, 2019.

“Processing and Twitter.” CodaSign, October 1, 2014, www.codasign.com/processing-and-twitter/. Accessed October 24, 2019.

“Trumpet Parts.” All Musical Instruments, 2019, www.simbaproducts.com/parts/drawings/TR200_parts_list.jpg. Accessed November 2, 2019.

Experiment 3: Weave Your Time


Project Title: weave your time

Name of Group Member: Neo Nuo Chen

Project Description: People in modern society are under a lot of stress; with so many distractions, they can’t focus on one task. This project is meant to help them calm down and take some time to focus on one thing, something simple that grants major satisfaction once finished. Based on my previous professional background, I decided that a relaxing activity would be weaving, an ancient technique that has been around for over xxx years. The participant is asked to place their cup onto the coaster that I laser cut earlier. The pressure sensor underneath the small rug that I wove detects the weight of the liquid and triggers the reaction on the screen: the intro page disappears and the background colour changes.

 

Project Process

October 22

Since this project was a solo project, I wanted to somehow incorporate what I’ve learned in the past, so I started to think of modes of interaction, and the use of a loom came to mind: the aesthetic is very old-fashioned, but mixing it with laser-cut transparent acrylic would be interesting. I also wanted to wrap aluminum foil onto the loom and the head of the shuttle, so that whenever the head touched the edges of the loom (wrapped with aluminum foil), it would trigger something. But the more I thought about it, the more I realized it was not possible, since the cable would have to trail along with the shuttle and be woven into the piece.

I then decided to focus on the pressure sensor first, to see what it could be used for. Because it is a pressure sensor, the reading alters based on how much pressure is applied, so it would be interesting to somehow control the range so that different pressures lead to different reactions.

October 24

I started experimenting with the pressure sensor by using the code Kate and Nick shared with us on Github.
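A minimal Arduino sketch in the spirit of that starting point (not the shared class code; the pin and timing are assumptions):

```cpp
// Read a velostat/FSR pressure sensor on A0 and stream the value to Processing.
const int SENSOR_PIN = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int pressure = analogRead(SENSOR_PIN);  // 0-1023, rises as weight is added
  Serial.println(pressure);
  delay(20);                              // ~50 readings per second is plenty for a cup
}
```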


October 25

I personally really like the sound of rain, as well as the look of it, hence why I wanted to show both on the screen. I found out that there are a lot of different effects you can create in Processing from Daniel Shiffman’s coding challenges, and the Purple Rain challenge was exactly what I was looking for. So I worked with that code and changed a few parts so that the rain, instead of being only purple, constantly changes colour throughout.
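The gist of that change, stripped right down (this is not Shiffman’s code, just the colour-cycling idea in HSB mode):

```java
int numDrops = 100;
float[] x = new float[numDrops];
float[] y = new float[numDrops];
float[] speed = new float[numDrops];

void setup() {
  size(640, 360);
  colorMode(HSB, 360, 100, 100);
  for (int i = 0; i < numDrops; i++) {
    x[i] = random(width);
    y[i] = random(-200, 0);
    speed[i] = random(4, 10);
  }
}

void draw() {
  background(230, 40, 15);
  float hue = (frameCount * 0.5) % 360;  // the colour drifts a little every frame
  stroke(hue, 80, 100);
  strokeWeight(2);
  for (int i = 0; i < numDrops; i++) {
    line(x[i], y[i], x[i], y[i] + 12);   // a falling streak
    y[i] += speed[i];
    if (y[i] > height) {                 // recycle the drop at the top
      y[i] = random(-100, 0);
      x[i] = random(width);
    }
  }
}
```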


I also laser cut a loom from 1/4-inch wood for an early test.


And I wove a small sample on the loom.


October 28

Had my pieces laser cut with transparent acrylic to create the post-modern aesthetic that I was looking for.


October 29

Since I wanted people to experience the work, I got a bunch of plastic cups and ran tests to see the values. The sensor was not as stable as I imagined, especially with a small cup; it was hard to pin down exact values for different amounts of liquid. So I decided to show an intro image that is taken off once the cup holds any amount of liquid, and comes back when the cup goes empty. I think this works well as a reminder that a cup is empty and in need of a refill. It also works well when multiple guests are around: they can read the intro image and follow the instructions individually.
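A sketch of that threshold logic in Processing (the sensor value is assumed to arrive over serial; the filename and cut-off are placeholders):

```java
PImage intro;
int sensorValue = 0;        // updated from serial in the real sketch
final int THRESHOLD = 200;  // anything above = a cup with liquid on the coaster

void setup() {
  size(800, 600);
  intro = loadImage("intro.png");  // placeholder filename
}

void draw() {
  background(30);
  // ...the rain animation would draw here...
  if (sensorValue < THRESHOLD) {
    image(intro, 0, 0, width, height);  // cup empty or absent: show the intro/reminder
  }
}
```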

October 30

Continued working on some more weaving so that I could give my breadboard and cables a disguise. :)


 

Code Repository:

https://github.com/NeonChip/NeoC/tree/Experiment3

Reference:

5.2: If, Else If, Else – Processing Tutorial https://www.youtube.com/watch?v=mVq7Ms01RjA

How to add background music to Processing 3.0? https://poanchen.github.io/blog/2016/11/15/how-to-add-background-music-in-processing-3.0

Coding Challenge #4: Purple Rain in Processing https://www.youtube.com/watch?v=KkyIDI6rQJI

Las Arañas Spinning and Weaving Guild https://www.lasaranas.org/

Medium rain drips tapping on buckets and other surfaces, Bali, Indonesia https://www.zapsplat.com/?s=Medium+rain+drips+tapping+on+bucket+&post_type=music&sound-effect-category-id=

Interactive Installation on Violence Against Women


This interactive installation explores one way of raising awareness of domestic violence against women, a cause observed every October and November through campaigns such as “Shine the Light”, using aesthetics to draw visitors in. The experiment captures the audience’s attention through an immersive experience: visitors are asked to step onto a pair of bare shoes, prompting the question “how does it feel to be in their shoes”, which triggers the display of stories of real women who have undergone domestic violence and of ways visitors can support them. In this installation, the visitor’s movement on the bare shoes is detected by a pressure sensor, which results in images being displayed on the screen. The purpose is to engage visitors with the domestic violence faced by women, a subject not so common in interactive public art. The use of shoes was an outcome of reviewing various installations on women’s issues.


This could be applied on a street to trigger responses from pedestrians with the following layout:

Image: Proposed street layout

 

Project Context – Aesthetic and Conceptual

Shine The Light on Women Abuse is a real campaign of the London, Ontario, Abused Women’s Centre, and many other campaigns run in October and November to address the same issue. The subject is close to me: it calls for much more awareness work, and installations can engage society in an immersive way.


Keeping this campaign in mind, I began browsing how public art is used as a medium to address similar issues. One project that really inspired me is titled “Embodying Culture: Interactive Installation On Women’s Rights”, which uses projection mapping on a historical painting, fed with Twitter data on the issue.


The above paper inspired me to use Twitter stories on a slider, but I was still missing a more aesthetic representation of this complex subject. Looking around for one, I was particularly intrigued by the use of shoes in the Yanköşe project, which hung 440 pairs of women’s shoes as public art.


 

Visual

Imagery and a colour palette for the posters were designed using the Shine the Light campaign guidelines, as follows.

Images: The three poster designs

The hardware used to build the installation is as follows:

  1. Shoe forms, made in the Maker Lab
  2. FSR pressure sensor
  3. A rug
  4. Arduino Micro


For the software, the following two tools were utilized:

  1. Arduino: connects the FSR pressure sensor and reads the analog values; any value above zero means a person is stepping on the shoes
  2. Processing: the values are sent from Arduino over serial, and an if statement loads the image corresponding to the visuals shown above (a sketch of this logic follows below)
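A minimal Processing sketch of that logic (the filenames, serial port index and baud rate are assumptions):

```java
import processing.serial.*;

Serial port;
PImage idlePoster, storyPoster;
int fsrValue = 0;

void setup() {
  size(1024, 768);
  idlePoster  = loadImage("front.png");  // placeholder filenames
  storyPoster = loadImage("story.png");
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  if (fsrValue > 0) {
    image(storyPoster, 0, 0, width, height);  // someone is standing on the shoes
  } else {
    image(idlePoster, 0, 0, width, height);
  }
}

void serialEvent(Serial p) {
  String line = trim(p.readStringUntil('\n'));
  if (line != null && line.length() > 0) {
    fsrValue = int(line);  // one reading per line from the Arduino
  }
}
```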

As a result, we were able to complete the project as shown below:

Images: The completed installation

 

Reference Links:

  • https://www.lawc.on.ca/shine-light-woman-abuse-campaign/
  • https://firstmonday.org/ojs/index.php/fm/article/view/5897/4418

GitHub:

  • https://github.com/arsalan-akhtar-digtal/experiment/blob/master/Arduino
  • https://github.com/arsalan-akhtar-digtal/experiment/blob/master/processing

 

Generative Mandala

Project Title
Generative Mandala (Assignment 3)

Group Member
Sananda Dutta (3180782)

Project Description
In this experiment the task was to create a tangible or tactile interface for a screen-based interaction. A strong conceptual and aesthetic relationship was to be developed between the physical interface and the events that happened on the screen using Arduino and Processing.

This experiment was my dig at generative visualization, wherein you can interact and generate visuals of your choice. Geometry, its diversity and the possibilities of geometric variation have always been of great interest to me, so I decided to make user-specific visualizations, taking values that could serve as variables and assigning them to manual controls to create geometric patterns.

Image: The Nature of Code

The experiment is a simple yet intricate act: a simple geometric shape attains different characteristics once we add repetitive components of the shape and alter them at the same time. I played around with the number of vertices, the R, G and B values, and the reset function of the repetitive geometry. The challenge of connecting Arduino to Processing and making them communicate, so that inputs from the Arduino are depicted as output in Processing, worked in this project’s favour.

Visuals and Images of the Process
Trying my hand at generative geometry with two components as variables, the Red value and the number of vertices of the shape. The circuit looks something like this:
Image: Breadboard circuit with two potentiometers

Image Ref: Project by eschulzpsd on Rotational Symmetry (similar to the initial work-in-progress phase with two potentiometers)

After finalizing the four variables that can alter the visualization, I gave the setup a cleaner look for presentation purposes, keeping things minimal and easy to understand, as shown below. Here, the four handy-sized potentiometers (10k) and a momentary button are attached to a box that houses the breadboard and Arduino board.
Image: The box with the four potentiometers and button

After the initial setup, fluctuating values for R, G and B, along with fluctuating readings for the number of vertices, made the first couple of visuals look like the one below. These were worked on with proper soldering and connections, and mapped to get decently stable values.
Image: Early output with fluctuating values

After properly mapping the analog input values from the potentiometers to R, G and B over (0,255), and the number of vertices to (0,10), the results looked better. Below are some visuals of the various digital outputs.
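A sketch of that mapping in Processing (the four raw readings are assumed to arrive from the Arduino over serial; the polygon helper is an illustration, not the project’s code):

```java
int potR, potG, potB, potV;  // raw 0-1023 readings, updated from serial

void setup() {
  size(800, 800);
  background(0);
}

void draw() {
  float r = map(potR, 0, 1023, 0, 255);
  float g = map(potG, 0, 1023, 0, 255);
  float b = map(potB, 0, 1023, 0, 255);
  int vertices = int(map(potV, 0, 1023, 0, 10));  // 0-10, as described above

  stroke(r, g, b);
  noFill();
  if (vertices >= 2) {
    polygon(width / 2, height / 2, 150, vertices);  // 2 = line ... 10 = decagon
  }
}

// Draw a regular n-gon centred at (cx, cy).
void polygon(float cx, float cy, float radius, int n) {
  beginShape();
  for (int i = 0; i < n; i++) {
    float a = TWO_PI * i / n - HALF_PI;
    vertex(cx + cos(a) * radius, cy + sin(a) * radius);
  }
  endShape(CLOSE);
}
```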

Images above: The four potentiometer values correspond to the number of vertices of the shape and the Red, Green and Blue values; the last button resets the visualization.

 

Links
For Video
Work in Progress: https://vimeo.com/371047180
Final Screen test with code: https://vimeo.com/371043434
Final Setup Video: https://vimeo.com/371032587

For Code (Github)
Arduino & Processing: https://github.com/sanandadutta/genvisualization

Project Context
I have always been fascinated by art generated through code. The combination of visuals with code can create mind-blowing visualizations. This project gave me an opportunity to push myself to live that aspect and be one of those creators. Being a music lover from childhood, I have always admired the graphic visualizations that alter with the pitch, beats, bass, tempo and frequency of the music. When these aspects of music are mapped to the moving parts of an algorithm, the result is nothing but a treat to watch. The use of colours, motion, waves, lines, stroke weight, etc., can create something visually crazy. Below are some explorations that gave me a good idea of which direction to choose for the project.

Image: Math Rose(s) by Richard Bourne (https://www.openprocessing.org/sketch/776939)

Image: The Deep by Richard Bourne (https://www.openprocessing.org/sketch/783306)

Image: Wobbly Swarm by Kevin (https://www.openprocessing.org/sketch/780849)

The works of visual artists who have played with code in terms of math, particles, fractals, lines, arrays, geometry, branching and visualization have been my inspirational pillars for taking this project ahead. Deep diving into the various factors that can be treated as variables, and then mapping them to motion or potentiometer values, gave a sense of closure to this desire to step into the field of generative art.

Mapping the 10k potentiometers’ variable values of (0,1023) to the number of vertices and the R, G and B values was the first step in this exploration. I also made the visualization boundary finite in terms of the screen boundary, so that the overlapping effect could create wonders. Coding the transitions between shapes (line, triangle, square…decagon) with orderly gaps in the geometry formations was a learning experience. Some major functions the project relied on were map(), millis() and random(), along with a generative algorithm for the radii tied to the outward-inward motion.

Reviewing the projects of individuals and groups on makershed.com and the Arduino Project Hub came in handy for getting the hang of the initial requirements and an idea of the possibilities that could be explored. Using the analog input values for the potentiometers and a digital pin for the momentary button, and then importing these into Processing, was made simpler with Kate Hartman and Nick Puckett’s code. Learning to import the analog and digital input values and map them to the relevant potentiometers gave the exploratory visuals.

Image: Live demo to the audience about the relation of the knobs to the visual

Since this project was a demonstration, I purposely did not label the knobs on the setup, so that the audience could play with them and figure out what each one stood for. Seeing the audience interact with the geometric visualizations and relate the knobs’ values to the visuals was in itself an experience. I am glad to have explored this aspect of coding. To take this further, I would like to create a fading trace path for the shapes and explore mapping them to external noise/sound. I would also like to look into creating more variations of generative art in response to different stimuli introduced into their environment.

Links and References
eschulzpsd. (2018, Dec 23). Rotational Symmetry Drawing. Retrieved from Project Hub: https://create.arduino.cc/projecthub/eschulzpsd/rotational-symmetry-drawing-613503?ref=tag&ref_id=processing&offset=8

III, F. M. (2016, Nov 30). Convert Scientific Data into Synthesized Music. Retrieved from Make community: https://makezine.com/projects/synthesized-music-data/

openprocessing.org. (n.d.). Retrieved from Open Processing: https://www.openprocessing.org/

Hartman, K., & Puckett, N. (2019, Oct). Exp3_Lab2_ArduinotoProcessing_ASCII_3AnalogValues. Retrieved from GitHub: https://github.com/DigitalFuturesOCADU/CC19/tree/master/Experiment3/Exp3_Lab2_ArduinotoProcessing_ASCII_3AnalogValues

An Interface to Interact with Persian Calligraphy

By Arshia Sobhan

This experiment is an exploration of designing an interface to interact with Persian calligraphy. On a deeper level, I tried to find possible answers to this question: what is a meaningful interaction with calligraphy? Inspired by the works of several artists, along with my personal experience of practicing Persian calligraphy for more than 10 years, I wanted to add more possibilities for interacting with this art form. The output of the experiment was a prototype with simple modes of interaction to test the viability of the idea.

Context

Traditionally, Persian calligraphy has been mostly used statically. Once created by the artist, the artwork is not meant to be changed. Either on paper, on tiles of buildings or carved on stone, the result remains static. Even when the traditional standards of calligraphy are manipulated by modern artists, the artifact is usually solid in form and shape after being created.

I have been inspired by works of artists that had a new approach to calligraphy, usually distorting shapes while preserving the core visual aspects of the calligraphy.

"Heech" by Parviz Tanavoli Photo credit: tanavoli.com
“Heech” by Parviz Tanavoli
Photo credit: tanavoli.com
Calligraphy by Mohammad Bozorgi Photo credit: wsimag.com
Calligraphy by Mohammad Bozorgi
Photo credit: wsimag.com
Calligraphy by Mohammad Bozorgi Photo credit: magpie.ae
Calligraphy by Mohammad Bozorgi
Photo credit: magpie.ae

I was also inspired by the works of Janet Echelman, who creates building-sized dynamic sculptures that respond to environmental forces including wind, water and sunlight. Large pieces of mesh combined with projection create wonderful 3D objects in space.

Photo credit: echelman.com

The project “Machine Hallucinations” by Refik Anadol was another source of inspiration, one that led to the idea of morphing calligraphy: displaying an intersection of an invisible 3D object in space, in which two pieces of calligraphy morph into each other.

Work Process

Medium and Installation

Very soon I had the idea of back projection on a hanging piece of fabric. I found it suitable in the context of calligraphy for three main reasons:

  • Freedom of Movement: I found this aspect relevant because of my own experience with calligraphy. The reed used in Persian calligraphy moves freely on the paper and is often very hard to control and very sensitive.
  • Direct Touch: Back projection makes it possible for users to directly touch what they see on the fabric, without casting shadows.
  • Optical Distortions: Movements of the fabric create optical distortions that make the calligraphy more dynamic without losing its identity.

Initially, I ran some tests on a 1m x 1m piece of light grey fabric, but for the final prototype I selected a larger piece of white fabric for a more immersive experience. The final setup was also limited by other factors, such as the projector’s specifications (luminance, short-throw ability and resolution). I tried to keep the human scale factor in mind when designing the final setup.

Visuals

My initial idea for the visuals projected on the fabric was a morphing between two pieces of calligraphy. I used two works I had created earlier, based on two masterpieces by Mirza Gholamreza Esfahani (1830-1886). These two, along with another, had been used in one of my other projects for the Digital Fabrication course, where I explored the concept of dynamic layering in Persian calligraphy.

My recent project for the Digital Fabrication course, exploring dynamic layering in Persian calligraphy

Using several morphing tools, including Adobe Illustrator, I couldn’t achieve a desirable result, because these programs were not able to maintain the characteristics of the calligraphy in the intermediate stages.

The morphing of two calligraphy pieces using Adobe Illustrator

Consequently, I changed the visual idea to match both the gradual change idea and the properties of the medium.


After creating the SVG animation, all the frames were exported as a PNG sequence of 241 images. These images were later used as an array in Processing.

In the next step, after switching to two sensors instead of one, three more layers were added to this array. The purpose of those layers was to give users feedback when interacting with different parts of the interface. With only two sensors, however, this feedback was limited to differentiating left from right interactions.

Hardware

In the first version, I started working with one ultrasonic sensor (MaxBotix MB1000, LV-MaxSonar-EZ0) to measure the distance to the centre of the fabric and map it onto the index of the image array.

The issue with this sensor was its one-inch resolution, which produced jumps of around 12 steps in the image array; the result was not satisfactory. I tried scaling the sensor data to increase the resolution (since I didn’t need the sensor’s full range), but I still couldn’t reduce the jumps to fewer than 8 steps. The interaction was not smooth enough.

Distance data from LV-MaxSonar-EZ0 after calibration

For the second version, I used two VL53L0X laser distance sensors with a resolution of 1 mm. Although the datasheet claims a 2 m range, I could only achieve about 1.2 m. That range, however, was enough for my setup.

Distance data from the VL53L0X laser distance sensor (1 mm resolution)
VL53L0X laser distance sensor in the final setup

Coding

Initially, I had an issue reading data from the two VL53L0X laser distance sensors. The library provided for the sensor included an example of reading from two sensors, but the connections to the Arduino were not shown. The issue was resolved shortly, and I was able to read and send data from both sensors to Processing using the AP_Sync library.

I also needed to calibrate the data with each setup. For this purpose, I designed the code to be easily calibrated. My variables are as follows:

DL: data from the left sensor
DR: data from the right sensor
D: the average of DL and DR
D0: the distance of the hanging fabric in rest to the sensors (using D as the reference)
deltaD: the range of fabric movement (pulling and pushing) from D0 in both directions

With each setup, the only variables that need to be redefined are D0 and deltaD. These values control different visual elements in Processing, such as the index of the image array. The x position of the gradient mask is controlled by the difference between DL and DR, with an additional speed factor that can change the sensitivity of the movement.
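Put together, the frame lookup might look like this in Processing (a sketch using the variable names above; the filenames and calibration values are placeholders):

```java
final int NUM_FRAMES = 241;          // the exported PNG sequence
PImage[] frames = new PImage[NUM_FRAMES];
float D = 500;                       // averaged distance from the two sensors (via serial)
float D0 = 500;                      // rest distance of the fabric, set per setup
float deltaD = 150;                  // push/pull range around D0, set per setup

void setup() {
  size(1280, 800);
  for (int i = 0; i < NUM_FRAMES; i++) {
    frames[i] = loadImage("frame_" + nf(i, 3) + ".png");  // placeholder names
  }
}

void draw() {
  // map the fabric's push/pull range onto the animation frames
  int index = int(map(D, D0 - deltaD, D0 + deltaD, 0, NUM_FRAMES - 1));
  index = constrain(index, 0, NUM_FRAMES - 1);
  image(frames[index], 0, 0, width, height);
}
```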

Code Repository:
https://github.com/arshsob/Experiment3

References

https://www.echelman.com/

http://refikanadol.com/

https://www.tanavoli.com/about/themes/heech/

http://islamicartsmagazine.com/magazine/view/the_next_generation_contemporary_iranian_calligraphy/

 

Experiment 3: Blue Moon

Project Title: Blue Moon
Names of group of members: Lilian Leung

Project Description

Blue Moon is a reactive room light that detects signs of anxiety through hand gestures and encourages participants to take time away from their screens and practice mindfulness. The gold mirror emits a blue glow when the participant is detected clenching their fist. To achieve a warm light, the participant’s right hand needs to be unclenched. The second switch is activated by pressing the right hand over the left, which starts the music. The joint-hand switch keeps the participant focused and relaxed and stops them from reaching for a mobile device or computer. The project environment is a bedroom, for the time before rest or when trying to relax or meditate. The screen projection is intended for the ceiling, as the viewer should be lying down with their hands together.


Project Process

October 23 – 24
Beginning my initial prototype for the experiment, I mapped two potentiometers to two LEDs. By creating a small-scale prototype, I could gradually upgrade each section of the project to larger outputs, such as swapping the small LEDs for an LED strip and replacing the potentiometers with flex sensors.


October 25
Using Shiffman’s example of rain ripples, I had difficulty controlling multiple graphic elements on the screen, as the ripples affected all the visible pixels. Exploring OpenProcessing, I found a simpler sample by N. Kato (n.d.) that I could build from, with ellipses generated based on frame rate, which I could control. Rather than having the animation begin abruptly, I added an intro screen, triggered by mouse click, to move into the main reactive animation.

Using my breadboard prototype with the potentiometers and LED, I updated the switch from a potentiometer to a pressure sensor made of aluminum foil and velostat. Adjusting the mapping of the values to reduce the sensor noise, I was able to map the pressure sensors for two hands:

Switch 1: Left Hand Increase Rate of Rain
Switch 2: Right Hand Control Audio Volume
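A stripped-down sketch of those two mappings in Processing (the sensor values are assumed to arrive over serial; the filename and the drop graphic are stand-ins):

```java
import processing.sound.*;

SoundFile rainSound;
int leftHand = 0, rightHand = 0;  // raw 0-1023 sensor values from serial

void setup() {
  size(800, 600);
  rainSound = new SoundFile(this, "rain.wav");  // placeholder filename
  rainSound.loop();
}

void draw() {
  background(0);
  fill(200, 220, 255);
  int dropsPerFrame = int(map(leftHand, 0, 1023, 1, 12));  // left hand: rain rate
  for (int i = 0; i < dropsPerFrame; i++) {
    ellipse(random(width), random(height), 4, 8);          // stand-in for the rain graphic
  }
  rainSound.amp(map(rightHand, 0, 1023, 0.0, 1.0));        // right hand: volume
}
```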

* I used the Mouse.h library during initial exploration to trigger the intro screen and connected it to a physical switch, but I started having trouble controlling my laptop, so I temporarily disabled it.

October 26
To begin upgrading my breadboard prototype’s LED, I purchased a neon blue LED strip. I initially followed Make’s Weekend Projects – Android-Arduino LED Strip Lights (2013) guide to review how to connect my LED strip (requiring 9V) to my 3.3V Arduino Nano. One problem I didn’t expect was that Make’s video used an RGB LED strip, which has a different circuit, while mine was a single colour:

RGB LED strip: 12V, R, G, B
Single-colour LED strip: 12V input, 12V output

I went to Creatron and picked up the neon LED strip, a TIP31 transistor, an external 9-volt power source and a power jack. Rewiring the breadboard prototype proved difficult, as I had to dedicate a separate rail of the breadboard to 9V and learn how to hook up the TIP31 so my digital pins could send values to the LED strip.

One realization from the online references was that different types of transistors lay out the three pins (base, collector, emitter) in different orders. For the TIP31:

Pin 1: Base (signal, via a resistor)
Pin 2: Collector (negative end of the LED strip)
Pin 3: Emitter (ground)

The diagram reference that ended up being the most useful was this:

Figure 1. Sound Activated Light Circuit, n.d.
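The Arduino side of that circuit reduces to a few lines (a sketch along the lines of Figure 1; the pin choices are assumptions):

```cpp
// Dim a single-colour 9V LED strip through a TIP31 driven from a PWM pin.
const int STRIP_PIN = 9;    // PWM pin -> resistor -> TIP31 base
const int SENSOR_PIN = A0;  // flex/pressure sensor voltage divider

void setup() {
  pinMode(STRIP_PIN, OUTPUT);
}

void loop() {
  int raw = analogRead(SENSOR_PIN);            // 0-1023
  int brightness = map(raw, 0, 1023, 0, 255);  // PWM duty cycle
  analogWrite(STRIP_PIN, brightness);          // the TIP31 switches the strip's 9V side
  delay(10);
}
```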

After properly hooking up the LED strip to my Arduino and mapping the potentiometer to the strip’s brightness, I began rewiring the potentiometer input to use a flex sensor. To test the sensor, I first used two copper coins and velostat.

 


From there I made two types of pressure sensors. The first used aluminum foil and velostat as a flex sensor that I could hide inside a glove to detect pressure when clenching my fist, registering along my index finger from the first and second knuckle. Another factor when attaching the flex sensor inside the glove was making sure it stayed attached when I clenched my fist, as the sensor would pull forward, following my finger joints, and easily detach the sensor cables.

pressuresensortest

October 27-29
Expanding the physical Arduino component of the project, I still needed to laser cut acrylic to make the circular ring that my LED strips would wrap around. I had never laser cut before, but with some help from Neo I was able to lay out my two files: the first, a 17” acrylic circle for the front; the second, five 11.5” rings cut from 0.25” board that would house my Arduino and cables while giving the LED strips a form to wrap around, creating enough space between the wall and the light.

Additional updates were made to the prototype, such as adding the second, warm neon LED to use as the main light, and rewiring all the sensors with stranded wire instead of solid-core wire so the cables were more flexible and comfortable when participants put on the glove sensors.

The wooden ring was then sanded, with holes drilled in from the side for the power cables and for the LED strips’ stranded wire to pass from the exterior of the ring into the interior where the Arduino is placed. Once the wiring was finished, I moved all the components from my breadboard onto a protoboard to hold everything securely in place. I also used terminal blocks so that I could repurpose my LED strips instead of soldering their wires into the protoboard.


Once the physical pieces were cut, I went back to redesigning the intro page to create more visual texture rather than just solid-coloured graphics. Within Processing, I added a liquid ripple effect from an OpenProcessing sketch by oggy (n.d.) and adjusted the code to map the ripples to the hand sensors instead of mouse coordinates.

The last issue to solve was how to get the intro screen to disappear when my hands were together in a relaxed position. Because multiple audio files were being played, I had issues with audio looping when calling each page as a function and using booleans. In the end, I laid out the sketch using sensor triggers and if statements. Possibly due to the number of assets loaded in Processing, I struggled to get the animation to load consistently, as assets would go missing without any alterations to the code. In the end, I removed the rippling image effect.


Project Context

With the days getting darker and the nights getting longer, seasonal affective disorder is around the corner. My current room has no window and no installed light, leaving the space dark with little natural light coming in. This, combined with struggles with insomnia and an inability to relax at night, led to this project.


One finding for treating seasonal affective disorder is the use of light therapy. This treatment usually involves the use of artificial sunlight with a UV bulb or finding time to get natural sunlight. Terman (2018) suggests that treatment ranging from 20-60 minutes a day can lead to an improved mood over the course of four weeks.  

With a bad habit of multitasking, I find it difficult to concentrate and simply make time to rest without the urge to be productive. This is compounded by the fact that social media use can lead to feelings of depression and loneliness (Mammoser, 2018), driven by anxiety, fear of missing out and constant ‘upward social comparisons’, depending on how frequently one checks their accounts.

To force myself to make time to relax, removed from my digital devices, the experiment’s main functions were, first, to detect signs of anxiety and, second, to keep me still, present within my space and immersed in a brief moment of meditation. Navarro (2010), writing for Psychology Today, describes nonverbal signs of stress such as rubbing the hands together or clenching them as a form of “self massaging”, or pacifying the self. To keep myself still and present, I decided my second trigger would be having both hands together in a relaxed position.


Aesthetically, I chose a circular form, symbolic of both sun and moon, as it would be the main light source in my room. With the alternating lights, the circle appears both as a sun and as an eclipse when the neon blue light is on. The visual style was inspired by Olafur Eliasson’s The Weather Project (2003) and the artist’s elemental style, his use of mirrors and lights to create the sun and sky within the Tate Modern’s Turbine Hall. The installation explored how weather shapes a city, and how the city itself becomes a filter for how locals experience fundamental encounters with nature.


The neon light is emitted when my right hand is clenched, limiting the light in the room and prompting me to unclench my fist. The Processing visuals are supported by the white noise of wind and rain as a buffer against my surroundings, since white noise helps cut out background sounds by stimulating the brain without over-exciting it (Green, 2019).


The audio from the Processing sketch is also mapped to my right hand: lower when my fist is clenched and louder when my hand is open, prompting deeper immersion into the meditation. When I bring my hands together, right over left in a relaxed position, the sketch dissolves the intro screen to fill the ceiling with imagery of falling rain and louder audio. Within a few moments a selected song plays; for my sketch I used one of my favourite songs, Wide Open by the Chemical Brothers (2016).


During my exploration phase, I tried to trigger Spotify or YouTube to play, but because that would switch focus away from the Processing sketch and bring me back to social media and the internet, I opted to have the audio play within the sketch.

Additional Functionality for Future Iterations

  1. Connecting the Processing Sketch to a possible Spotify API that could be controlled with physical sensors. 
  2. Connecting to a weather API and having the images and audio switch depending on the weather.
  3. Adding additional hand gesture sensors to act as a tangible audio player.


Project Code Viewable on GitHub
https://github.com/lilian-leung/experiment3

Project Video


Citation and References

Sound Activated Light Circuit. (n.d.). Retrieved from https://1.bp.blogspot.com/-2s2iNAOFnxo/U3ohyK-AjJI/AAAAAAAAADg/gAmeJBi-bT8/s1600/8C37AE69E775698CB60B99DB1DCC86EA.jpg

Ali, Z., & Zahid. (2019, July 13). Introduction to TIP31. Retrieved from https://www.theengineeringprojects.com/2019/07/introduction-to-tip31.html.

Green, E. (2019, April 3). What Is White Noise And Can It Help You Sleep? Retrieved October 31, 2019, from https://www.nosleeplessnights.com/what-is-white-noise-whats-all-the-fuss-about/.

Make: (2013, August 23) Weekend Projects – Android-Arduino LED Strip Lights. Retrieved from https://www.youtube.com/watch?v=Hn9KfJQWqgI

Mammoser, G. (2018, December 14). Social Media Increases Depression and Loneliness. Retrieved October 31, 2019, from https://www.healthline.com/health-news/social-media-use-increases-depression-and-loneliness.

Navarro, J. (2010, January 20). Body Language of the Hands. Retrieved October 31, 2019, from https://www.psychologytoday.com/us/blog/spycatcher/201001/body-language-the-hands.

N.Kato (n.d.) rain. Retrieved from https://www.openprocessing.org/sketch/776644

oggy (n.d.) Liquid Image. Retrieved from https://www.openprocessing.org/sketch/186820

Processing Foundation. (n.d.). SoundFile::play() \ Language (API) \ Processing 3 . Retrieved October 31, 2019, from https://processing.org/reference/libraries/sound/SoundFile_play_.html.

Studio Olafur Eliasson (Photographer). (2003). The Weather Project. [Installation]. Retrieved from https://www.tate.org.uk/sites/default/files/styles/width-720/public/images/olafur_eliasson_weather_project_02.jpg

Tate. (n.d.). About the installation: understanding the project. Retrieved November 2, 2019, from https://www.tate.org.uk/whats-on/tate-modern/exhibition/unilever-series/unilever-series-olafur-eliasson-weather-project-0-0.

Tate. (n.d.). Olafur Eliasson the Weather Project: about the installation. Retrieved November 2, 2019, from https://www.tate.org.uk/whats-on/tate-modern/exhibition/unilever-series/unilever-series-olafur-eliasson-weather-project-0

Terman, M. (2018, December). Seasonal affective disorder. Retrieved October 31, 2019, from https://www-accessscience-com.ocadu.idm.oclc.org/content/900001.

The Coding Train (2018, May 7) Coding Challenge #102: 2D Water Ripple. Retrieved from
https://www.youtube.com/watch?v=BZUdGqeOD0w&t=630s

I Might be Just a Text on Your Screen

Project Title: I Might be Just a Text on Your Screen
Names of Group Members: Nilam Sari

Project Description:

This experiment is a tool to help someone who is experiencing a panic attack. “I Might be Just a Text on Your Screen” walks the interactor through their panic attack(s) via a series of texts and an interactive human-hand-shaped piece of hardware to hold on to.

Work in progress:

The first step I took in figuring out this project was to compose the text that would appear on the screen. It is based on a personal guide I wrote to myself for getting through my own panic attacks.

Image: The composed text

The reason I included the part that says “I might be just a text on your screen, with a hand made out of wires” is that I don’t want this to be a tool that pretends to be something it is not. It is, in fact, just a piece of technology that I, a human, happened to create to help people going through this terrible experience I’ve had myself.

The next step was to figure out how to display the text. That was a learning curve for me, because I had never worked with Strings in Processing before. I learned how to display text from Processing’s online reference. It was fine until I ran into the problem of making the text appear letter by letter, as if it were being typed out on the computer.

At first I thought I had to separate the text character by character, so I watched a tutorial on Dan Shiffman’s channel and ended up with this:

Image: Splitting the text character by character

But making the characters appear one by one meant I had to use millis(), so I did that and ended up with this:

Image: Timing the characters with millis()

But it didn’t work the way I wanted it to. Instead of displaying the characters one by one, the program just froze for a couple of seconds, then displayed them all at once. So I went through more forums, found a simpler way to do it without millis(), and incorporated it into my sketch.
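The simpler approach amounts to revealing one more character every few frames inside draw(), so nothing ever blocks. A sketch of the technique (with placeholder text, not the project’s exact code):

```java
String message = "You are safe. This feeling will pass.";  // placeholder text
int visibleChars = 0;
int framesPerChar = 3;  // typing speed: one new character every 3 frames

void setup() {
  size(800, 400);
  textSize(24);
  fill(255);
}

void draw() {
  background(0);
  if (frameCount % framesPerChar == 0 && visibleChars < message.length()) {
    visibleChars++;  // reveal one more character; draw() never freezes
  }
  text(message.substring(0, visibleChars), 40, 60, width - 80, height - 120);
}
```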

After I got that done, the next step was to build the physical component. I wanted it to act as an extension of the computer that lets the interactor navigate through the text. I bought a 75-cent pair of gloves and stuffed one. Then it was time to work on the velostat: I tested it with a fading LED, mapping the light to the amount of pressure on the velostat.


I followed the step-by-step on Canvas, and the velostat testing worked fine: the light gets brighter with more pressure on the velostat and dimmer with less. I used the same tool in the project, but instead of mapping, I used multiple thresholds between 0 and 1023 so the program knows when the sensor is pressed at different pressures.
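On the Arduino, that threshold approach looks something like this (the cut-off values are assumptions to tune against the actual glove):

```cpp
// Report discrete squeeze levels instead of a continuous mapping.
const int VELOSTAT_PIN = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int v = analogRead(VELOSTAT_PIN);  // 0-1023, higher = harder squeeze
  int level;
  if      (v > 800) level = 3;       // hard squeeze
  else if (v > 400) level = 2;       // medium
  else if (v > 150) level = 1;       // light touch
  else              level = 0;       // no contact
  Serial.println(level);             // Processing reacts to these levels
  delay(50);
}
```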

I slipped the velostat into the glove, so squeezing the ‘computer’s hand’ activates the sensor. I also went to Creatron to get a heat pad to put inside the glove, to mimic body heat; it is powered by the Arduino’s 5V pin.


The next step was to figure out how to move through the text page by page. I had trouble with this, so I asked Nick about it. He suggested creating an int pageNumber and incrementing it at the end of every message. I added a timer with millis() to create a couple of seconds of buffer before the page changes. It worked wonderfully.
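A sketch of that paging logic (the messages are placeholders; squeezeLevel is assumed to come from the Arduino over serial):

```java
String[] pages = {
  "I might be just a text on your screen...",  // placeholder pages
  "Let's breathe together.",
  "You are going to be okay."
};
int pageNumber = 0;
int lastTurn = 0;
final int BUFFER = 2000;  // ms before another page turn is allowed
int squeezeLevel = 0;     // updated from the Arduino over serial

void setup() {
  size(800, 400);
  textSize(24);
}

void draw() {
  background(0);
  fill(255);
  text(pages[pageNumber], 40, 60, width - 80, height - 120);
  if (squeezeLevel > 0 && millis() - lastTurn > BUFFER && pageNumber < pages.length - 1) {
    pageNumber++;          // advance at the end of the message
    lastTurn = millis();   // restart the buffer
  }
}
```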

There were a couple of hiccups here and there while I was programming, but taking a break from it and going back into it helped me with solving most of the problems I didn’t mention above.

After everything was set, I put the wires together and soldered them onto a small protoboard.


Link to code on Github: https://github.com/nilampwns/experiment3

Documentation:

Images: Documentation photos and wiring diagram

Project Context:

This project’s idea came from my own experience dealing with panic attacks. Panic attack symptoms include physical sensations such as breathlessness, dizziness, nausea, sweating, trembling and palpitations, as well as cognitive symptoms like a fear of dying or going crazy. Some people who suffer from Panic Disorder experience this more than four times a month. Medication and therapy can help treat Panic Disorder, but not everybody has access to them. This project is not a tool to replace medical treatment; rather, it can be a tool that walks someone through a panic attack when no one else is around to assist them, because it can get really scary to deal with this on your own.

When I used to suffer from constant panic attacks, I kept a piece of paper in my wallet with instructions on how to get through a panic attack on my own. Those instructions were messages from myself to the version of me having a panic attack, and they inspired the text that appears on the screen. I thought, if a piece of paper could help me through my own panic attacks, then an interactive piece would be a step up from that.

Technology has been used to help people with mental health issues, especially on smartphones. Smartphone apps provide useful functions that can be integrated into conventional treatments (Luxton et al., 2011). There are already apps that help people with their anxieties, such as Headspace (2012), which guides meditation, and MoodPath (2016), an app that helps people keep track of their depressive episodes.

(Left: Headspace; Right: MoodPath)

However, I don’t want this tool to appear as something it is not. I don’t want this project to pretend it understands what the interactor is going through. In the end, it is just a string of code that appears on your screen, along with a physical interactive piece made of wires.

This reminds me of a point Caroline Langill made in regard to Norman White’s piece: “… an artwork that heightens our awareness of ourselves and our behaviours by emulating a living thing rather than being one.” The piece performs empathy and offers companionship without knowing that that is what it is doing. So if the interactors feel they are being empathized with, is the empathy offered by this project real, or a mere illusion of empathy from a machine? Sherry Turkle asked this question in her book Reclaiming Conversation, raising the concern of technology replacing actual human contact. I don’t want this project to be something that replaces treatment or help from other people and society, but rather a tool that closes a gap of our own making: mental health resources not being widely available to those who need them.

Reference

Langill, Caroline S.. “The Living Effect: Autonomous Behavior in Early Electronic Media Art”. Media Art Histories. MIT Press, 2013.

Luxton, David D.; McCann, Russell A.; Bush, Nigel E.; Mishkind, Matthew C.; and Reger, Greg M.. “mHealth for Mental Health: Integrating Smartphone Technology in Behavioral Healthcare”. Professional Psychology: Research and Practice. 2011, Vol. 42, No. 6, 505–512.

Turkle, Sherry. “Reclaiming Conversation: The Power of Talk in a Digital Age”. New York: Penguin Press, 2015.

A Circle of Lights!

 

Image: Stills from our experiment at different phases of the project

 

GROUP MEMBERS

Catherine Reyto | Masha Shirokova | Sananda Dutta 

 

PROJECT DESCRIPTION 

A Circle of Lights is a kinetic sculpture based on the concept of a children’s mobile that draws users into a colourful play of reflections and shadows. It consists of transparent acrylic geometric shapes intended to reflect the LEDs and project them onto the surrounding area (walls, floor, ceiling).

Interaction with the mobile happens through distance, as the people surrounding it move in and out of the proximity range of the sensors. The three sensors are all connected to the servo, which in turn spins the whole sculpture. Through fluctuations in distance, participants affect the speed of rotation and activate the lights (the closer they get, the faster it spins!).
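A sketch of that behaviour on the Arduino (assuming HC-SR04-style ultrasonic sensors; the pins, thresholds and speed mapping are all assumptions):

```cpp
#include <Servo.h>

Servo motor;                   // continuous-rotation servo spinning the mobile
const int TRIG[] = {2, 4, 6};  // ultrasonic trigger pins (assumed wiring)
const int ECHO[] = {3, 5, 7};  // ultrasonic echo pins
const int LED[]  = {9, 10, 11};
const int RANGE_CM = 100;      // proximity threshold

void setup() {
  motor.attach(8);
  for (int i = 0; i < 3; i++) {
    pinMode(LED[i], OUTPUT);
    pinMode(TRIG[i], OUTPUT);
    pinMode(ECHO[i], INPUT);
  }
}

long readCm(int trig, int echo) {
  digitalWrite(trig, LOW);  delayMicroseconds(2);
  digitalWrite(trig, HIGH); delayMicroseconds(10);
  digitalWrite(trig, LOW);
  long us = pulseIn(echo, HIGH, 30000);     // time out if nothing is in range
  return us == 0 ? RANGE_CM + 1 : us / 58;  // microseconds -> centimetres
}

void loop() {
  long nearest = RANGE_CM + 1;
  for (int i = 0; i < 3; i++) {
    long d = readCm(TRIG[i], ECHO[i]);
    digitalWrite(LED[i], d < RANGE_CM ? HIGH : LOW);  // light that section's LEDs
    if (d < nearest) nearest = d;
  }
  // the closer the visitor, the faster the spin (90 = stopped for a continuous servo)
  int speed = map(constrain((int) nearest, 5, RANGE_CM), 5, RANGE_CM, 180, 95);
  motor.write(speed);
}
```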

 

PROJECT CONTEXT

”It’s just beautiful, that’s all. It can make you very emotional if you understand it. Of course, if it had some meaning it would be easier to understand, but it’s too late for that.”  – Marcel Duchamp

We were inspired by the mobile works of Alexander Calder, Alexander Rodchenko and Edgar Orlaineta. From our Digital Fabrication class, we also took inspiration from a recently completed project by our classmate Arsh, who had laser-cut the elegant shapes of Arabic script into three sheets of acrylic, then layered the sheets to demonstrate the beautiful reflections they make in contact with direct or diffused light.

Image: Black Gamma by Alexander Calder

 

Images: Hanging Spatial Construction No.11 and Oval Hanging Construction No.12 by Alexander Rodchenko


Image: Solar Do (It Yourself) Nothing Toy, After Charles Eames by Edgar Orlaineta

 

Images: Artwork by our classmate Arsh (Arshia Sobhan Sarbandi), from his assignment for the Digital Fabrication course

What characterizes all types of mobiles is that they rely on balance and movement to achieve their artistic effect. They are composed of a number of elements, often abstract shapes, interconnected with wires, strings, metal rods or similar objects, whatever serves best in maintaining constant movement while suspended. They are a form of kinetic sculpture because, unlike traditional sculptures, they do not remain static but are literally mobile, set in motion by air currents, the slight touch of an infant or, as in our case, a small motor.

Through the sequential attachment of additional objects, the structure as a whole consists of many balanced parts joined by lengths of fishing line, and its individual elements can spin when prompted by the servo’s propulsion or by direct contact. Because of the shapes’ weight (we deliberately opted for a material with some density, rather than, say, card stock), gravity helps naturalize their animated movement through space with a bit of bounce and retraction.

While classic mobiles are manipulated by air and space, our idea was to upgrade the concept with Arduino and servo functions so users could actually interact with the objects. Depending on their location, participants could activate one, two or all three sections of LEDs and alter the speed of the servo.

 

PROCESS

Wary of time constraints but eager to get our hands dirty with electricity, we set out to combine basic but, as novices, daunting Arduino functions (a sensor-reactive servo and LEDs). Drawing on our combined experience as graphic designers, we planned to incorporate this circuitry in a way that could trigger interactive movement of an object composed of simple but colourful shapes.

Initial ideas – Before starting on the ideation process, we worked on building primitive circuits to better understand the principles of working with Arduino. Having no prior experience, we felt we needed to get our bearings in order to set a benchmark for what might actually be feasible to build in a short time-frame. Once we’d gained some familiarity, we attempted to combine various modes together to see how many sensors we could use at once. Our process with Arduino depended heavily on the learnings from our latest classes, which covered getting LEDs to blink in relation to a sensor’s threshold limits, servos working with a timer, and multiple LEDs blinking at alternate times.

From a product standpoint, all of our initial ideas involved the creation of an interactive art piece combining the sensors and LEDs. We discussed assembling a circuit of lights that could be elegantly diffused behind a thin-papered painting. We explored the concept of a movement-responsive garden, where the LEDs could be arranged in the pattern of flower petals; these would light up once proximity to an object was detected by the sensor. We then attempted to transfer this concept onto a cubic mesh as well, wherein sensors placed at the vertices of the cube would detect an object in their radar and light up particular sections of the cube. These ideas were explored when we had misunderstood the description of the assignment: we thought we were restricted to using either LEDs or servos, not both working together. After this detail was clarified, we felt an increased freedom to incorporate the servo motor as well. We opted to increase the level of challenge by combining a servo motor with synchronized LEDs. As a group, we also wanted to mix both platforms in order to gain experience in how these applications work together.

 

MATERIALS AND ASSEMBLY

Tools

  • Laser-cutting
  • Soldering

Material

  • Acrylic sheets
  • Fishing tackle (line, swivels)

We began by laser-cutting the circular base that would support the suspended objects beneath (also laser-cut acrylic) and carry the breadboard circuitry on its surface. We also laser-cut patterns from fluorescent acrylic sheets for an added dimension of light reflection. These shapes were attached with fishing line (chosen for both its transparency and its strength), which was in turn fastened by swivels around the rim of the acrylic disk we’d designed to hold the breadboard and circuitry (a structure we soon nicknamed ‘the bomb’ for its resemblance to one).

 


Images (L-R, from top to bottom): live scenes from the laser cutting lab; final cut-outs for the hanging elements of the mobile; the servo base with points for attaching the breadboard base; the breadboard base with LEDs arranged on it.

We then designed a smaller disk (the yellow circle in the above left image) to sit at the very top of the mobile, firmly attached to the servo’s propeller piece. The triads of holes are for threading the topmost strings of the mobile; the hole at the centre is for the screw that attaches to the motor head.

To achieve the goal of a mobile with interactive light, we opted to solder our LEDs into three separate parallel circuits, assigning each section to one of our three sensors. Each circuit consists of 4 LEDs and is assigned its own pin on the Arduino. The desired outcome is that a circuit lights up once its corresponding sensor detects a disturbance within the threshold limits of its radar.
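A minimal sketch of that arrangement follows; the pin numbers and raw threshold are placeholders rather than the values in our final code (linked below), and the pulse-width-to-distance scale depends on the sensor model:

    const int ledPins[3]    = {9, 10, 11};  // one pin per soldered LED group (A, B, C)
    const int sensorPins[3] = {5, 6, 7};    // one pulse-width pin per distance sensor

    void setup() {
      for (int i = 0; i < 3; i++) {
        pinMode(ledPins[i], OUTPUT);
        pinMode(sensorPins[i], INPUT);
      }
    }

    void loop() {
      for (int i = 0; i < 3; i++) {
        // Raw pulse width grows with distance; 0 means the read timed out.
        long reading = pulseIn(sensorPins[i], HIGH, 30000UL);
        // Light the group when its sensor sees something inside the threshold.
        digitalWrite(ledPins[i], (reading > 0 && reading < 1500) ? HIGH : LOW);
      }
    }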


High hopes: Planning the animation of the LED circuits

 

CODING

Main 

  1. Parallel Circuit
  2. Servo+Sensor
  3. Servo Sensor
  4. Servos Multiple LEDs

The project being a fairly open experiment, we were able to explore freely with physical materials, LEDs, sensors and motors. But freedom came with a price: it quickly became quite challenging to carve a clear path through what seemed like a constant stream of possibilities of what we could do (or where things could go wrong). Wary of our limited time-frame, we conceded to create something that we hoped would be reasonably possible to code and assemble given our limitations of time and knowledge. But we kept coming back to the idea of a proximity mobile (noun: a decorative structure that is suspended so as to turn freely in the air), as seen hanging over the cribs of babies to lull them to sleep. As mentioned above, ours would have the added feature of physical interaction. Though there were complexities in the concept, our strategy was to challenge ourselves to see if we could pull it off in time; even if we failed, we would still have a beautiful piece. It would be made up of various cut-out shapes that carried some small degree of weight, suspended by unobtrusive strings, with the additional feature of a parallel circuit of harmonized LEDs. The idea was that the light animation would come into effect in response to movement within the threshold limits of the sensors’ radar. Once participants moved within range, each of the three sensors would initiate the servo’s rotation (180 degrees in either direction).

 

Coding Syntax used

Libraries used – <Servo.h> and <animationTools.h>

float

Serial.begin()

pulseIn()

if() and else()

oscillate() (reference taken from https://github.com/npuckett/arduinoAnimation)
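We won’t reproduce the library’s oscillate() signature from memory, but the behaviour we were after can be sketched by hand: map the sensor reading to a sweep period, then drive the servo with a sine wave so that closer visitors make the mobile spin faster. The pins and ranges below are illustrative assumptions only:

    #include <Servo.h>

    Servo spinner;
    const int servoPin  = 3;
    const int sensorPin = 5;   // placeholder pulse-width pin for one distance sensor

    void setup() {
      spinner.attach(servoPin);
      pinMode(sensorPin, INPUT);
    }

    void loop() {
      long reading = pulseIn(sensorPin, HIGH, 30000UL);   // 0 means the read timed out
      if (reading == 0) reading = 5000;                   // no echo: treat as far away
      // Closer object -> shorter sweep period -> faster motion.
      long period = map(constrain(reading, 500, 5000), 500, 5000, 800, 4000);
      float phase = (millis() % period) / (float)period;  // 0..1 over one sweep
      spinner.write(90 + 90 * sin(phase * TWO_PI));       // oscillate 0-180 degrees
    }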

 

Final Code

https://github.com/sanandadutta/Circle-of-life.git

 

Images: The parallel LED circuit as a diagram, and the soldered end result shown below it.

As the images above show, three sections (A, B and C) were formed, each made of 4 LEDs wired in parallel amongst themselves, giving three separate parallel circuits. Each section is assigned its own sensor pin, and each sensor covers roughly a 120-degree range within which it responds to any physical disturbance.


Diagram A: Sensor mapping to the LEDs (Sensor 1 assigned to Section A, Sensor 2 assigned to Section B, Sensor 3 assigned to Section C)

 


Diagram B: This diagram is a front view of the entire setup. The servo motor is fixed to a small base which in turn supports the larger base hosting the breadboard circuitry (including the Arduino). The servo’s motion triggers a three-tier rotation: starting from the top, it pulls the breadboard base into an offset rotation, then tows the strings of the mobile pieces into a 180-degree rotation.

 

TESTING

  • sensor with LED
  • sensor with servo
  • sensor with LED and servo
  • parallel circuit of LEDs

Link: https://vimeo.com/367269220

Sensor threshold 1, 2 then 3, with LED circuit  https://vimeo.com/367269685

Once we could read data from the sensor input, use it to turn on the LEDs, and had established an output response from the servo, it was time to start doubling (actually, tripling) up. We set up our prototype (so far consisting of an 8” diameter disk cut from foam-core) in a quiet, disruption-free room to test the range of our sensors together. Nick’s tip about taping distances to the floor came in very handy for this part, especially when testing where the threshold limit of each sensor overlapped with the next.


Image: Making contact – Configuring the sensors and their respective LED circuits

 

 

Testing the structural design

It was at this point in the testing phase that we started making iterations on our prototype in terms of material design and overall physical mechanics. An 8” diameter disk meant a very small area and thus too much overlap between the sensor thresholds. This discovery led to a makeshift upgrade in the form of a piece of cardboard cut to an 11” diameter, complete with a very precisely measured place-mark at the centre for the servo (all three of us tumbled down a rabbit hole where somehow the accuracy of this place-mark was of utmost importance).

The mechanics of our material design were becoming complicated. We had managed to get our prototype in motion by fastening the servo to the underside, harnessing it with layers of electrical tape. Our device was now taking inputs from all three sensors (arranged on the large disk in an equilateral triangle), which activated the three LED bulbs serving as stand-ins for our soon-to-come parallel circuits, as well as the servo (thanks to the oscillation function from the Animation Tools library). But we anticipated a few issues with suspending decorative pieces from the base. For one thing, we would need a material sturdier than cardboard to support the weight, but the bigger concern was the jerky movement of the servo. Because of the servo’s 180° limit, we were concerned that the back-and-forth rather than circular motion of the suspended objects might look awkward.

We wondered if we could increase the range of motion of the dangling pieces by means of an offset caused by gravity. To test this, we added a second tier to our prototype: a small circular base where the servo would sit, which would in turn suspend the second, larger circle. The decorative pieces would hang from that larger disk. Putting our pooled knowledge of physics (limited to street smarts and common sense) to use, we guessed that the speed of motion would decrease with each level of suspension (from the first tier down to the lowest-ranking decorative pieces), but that the range of movement might appear to increase thanks to the pendulum effect. After testing this addition to our prototype showed positive results, we set out to design both the large and small circular bases in Adobe Illustrator, complete with hole placements for the threading, sensors and servo motor. We wouldn’t know whether the two-tiered system would actually work as we hoped until we attached the decorative pieces.

 


First sketches of the two-tier design

We also had to figure out a way of suspending the larger disk from the smaller, top-tier disk. We did so with fishing tackle, threading it in and out of laser-cut groupings of holes instead of cutting individual strands of fishing line, to keep the length adjustable. However, the drawback to this load-bearing translucent string is that trying to keep it in order is like herding cats, and that unyielding lack of control negated any flexibility in the system. Hindsight led us to conclude that we should have limited its use to serving as a measurement tool: once it had established the best distance between the two disks, we could have cut the strands accordingly and replaced them with individual strands of wire. As a last-ditch effort to streamline the design, we opted to shorten the distance between the disks, and though this did help avoid tangling, there was an oversight: the lack of slack on the suspension lines made our would-be meditative mobile look rather spastic in presentation.

Images: “Get a grip!” – Maneuvering the unwieldy fishing line through the suspension holes


Links for testing:
https://vimeo.com/367268855

Final Link: https://vimeo.com/367905506


 

REFLECTION

As perhaps others in the class can attest, Experiment 2 was in many ways an exploration of constraints. With the mid-term pile-up of assignments and presentations in our respective classes, our initial challenge was figuring out when we could even find time to meet as a group. We were also learning as we went, and our ideation process depended on the material presented in upcoming class lectures. We would watch the videos from class, attempt to replicate what had been demonstrated, repeat the motions ourselves, then try to put these findings to use in creative ways. It was a little exasperating, but as a result we learned a valuable lesson about being independently resourceful.

While from the outset we all agreed that our project would need to be feasible to produce by the rapidly approaching deadline, it wasn’t easy to quell our shared enthusiasm for working with LEDs, motors and sensors for the first time. All three of us coming from graphic design backgrounds, we were simultaneously excited and distracted by colour and light sequencing. The mobile had been but one of many fanciful ideas. We wound up choosing it over the others because, although the complexity of the product design risked falling outside the scope of feasibility, it was hard to resist the challenge of making a moving artwork installation that responded to people as they approached it. Had we had more time together to sketch out a road map in the form of a detailed storyboard of how the design would be assembled, we might have gained much from researching solutions to our pain-points instead of stumbling blindly into them like boobytraps throughout the process.


Burning the LED circuits at both ends: final stages of assembly.  

Ultimately the main conflict was that there was a lot of new information being absorbed with too little time to move through cycles of practical ideation. Instead, we brainstormed what might work, then just rolled up our sleeves and hoped for the best. We crossed bridges when we got to them, like how to suspend the servo – which wound up being held up by a clamp, like pincers on its poor plastic temples, fastened to a bar of LED ambient lights on the ceiling. Another hurdle was how to extend the number of LEDs in the parallel circuit while still getting them to work in conjunction with the servo rotation. We never did manage to resolve this, but we were at least finally able to pinpoint the issue: the demand on the small servo to not just carry the weight of the entire three-layer assembly of acrylic objects but also spin (read: thrash) them was drawing a lot of current. There was simply not enough power to go around (pun intended) to light the 12 LEDs of the circuit while the motor spun. Had we a little more time, we’d have opted to swap the servo for a larger motor, but we had already run out the clock on that part of production.

A truly satisfying and memorable moment for us was when we succeeded in getting all three sensors responding at once, in conjunction with our three LEDs.  This was after testing the threshold limits of each and tweaking adjustments in the code for several hours. It was late in the evening when we saw all of this coming together on the spinning disk for the first time, and we had a group hug while looking on proudly at our achievement.  That did a lot to double up our drive in the remaining days. We’d proven to ourselves that we were actually capable of pulling off something that only a few days prior had seemed absurdly over our heads. None of us having any prior experience working with Arduino, electricity or much understanding of code fundamentals, it felt good to come that far in a short amount of time while working on something artistic and original.

Ultimately, our experiment was fatally flawed on two counts: 1) we should have resolved how to suspend the servo before getting started. We likely would have foreseen the issue had we mapped out the design in a storyboard, which would have been invaluable either in finding a viable solution or, failing that, in leading us to eschew the concept altogether; and 2) there were simply too many moving parts to work through in too little time. The added layer of mechanical engineering that came with the material assembly meant a lot of questions about physics that we were unable to answer, partly because we barely knew how to ask the right questions, but mostly because we literally had our hands too full with learning to code for circuits of sensors, lights and motors.

That being said, all of our hard work did result in a beautifully decorative piece that, in spite of the jerky motion, did seem to captivate our classmates on presentation day in the way we had hoped. Even at the iterative stage our device had reached by then, its potential peeked through. Had there been more natural light in the room, the group might have been treated to a myriad of overlapping, colourful reflections on the floor and surrounding walls. But with the lights off, we were able to envision what a few LEDs among the spinning translucent decals could do to achieve a similar effect, emitting fractals of reflections across the ceiling that moved around as participants walked in and out of the threshold area below. In this respect, we felt we had achieved something more significant than a working product. We had attained a strong benchmark of iteration, one that opens doors for future designs for all three of us, working together or separately in our artistic design practice.

 

REFERENCES

  1. Culkin, J., & Hagan, E. (2017). Learn Electronics with Arduino: An Illustrated Beginner’s Guide to Physical Computing. Retrieved from https://ebookcentral.proquest.com
  2. Digital Futures’ GitHub by Nick Puckett – https://github.com/DigitalFuturesOCADU/CC19/tree/master/Experiment2
  3. Significance of Mobile Art – What is Art Mobile – https://www.widewalls.ch/mobile-art-mobiles-kinetic-art/

The Red Field

Project Title: The Red Field
Project by: Arshia Sobhan, Jessie Zheng, Nilam Sari, Priya Bandodkar

Project Description

Our project is an experimentation with light sequences. The piece is meant to be hung on a wall and gets activated when a viewer walks past it. The light sequences change based on the interaction the viewer has with the piece. We used mirror board and reflective clear acrylic sheet to create infinite reflections for a more immersive illumination.

Ideation Process

The idea for the project went through a series of evolutions. At the start, we jotted down ideas that could potentially be built upon and/or combined. We tried to expand the interaction experience of the users as much as possible, even with the limited number and categories of tools available to us.


Eventually, we came to an agreement to build something simpler within the limited timeframe, yet experimental and aesthetically pleasing, so we could still practice our design skills as well as familiarize ourselves with physical electronics basics and the Arduino programming language. Inspired by an LED light cube video on YouTube, we brainstormed ideas for a variation of it that would incorporate users’ physical interactions with the lights and sensors as an essential part of the project. To make sure the project was mostly finished before the day of the presentation, we made a schedule to keep us on track, since we only had about 5 days to make the project.

Project Schedule


Based on the distance sensor input and the LED output options, we explored possible combinations of how they could relate to each other. Initially, we hoped to use 3 distance sensors so that each sensor would control a separate attribute of the LED lights, for example brightness, blink rate or blink pattern.

The idea behind our project was to collaboratively control the display of the light in the box in the same way DJs mix their music. Based on this idea, we created a light panel and a controller as the main part of the piece.

Work in progress

Day One (Monday, 7 Oct)

We established 4 modes: idle mode, crazy mode, meditating mode and individual mode. To generate more interesting behaviour patterns for the LEDs, we soldered 4 groups (red, blue, purple and green) of LEDs together, leaving one group (8 LEDs, marked in yellow) in the centre unsoldered in an attempt to have individuality in the midst of unity in the LED patterns. To further increase the aesthetic appeal, we decided to put an infinity mirror behind the LED lights so that the lighting effects would be enhanced and amplified, as if there were infinite blinking and fading LEDs.

panel-light-pattern

Day Two (Tuesday, 8 Oct)

We divided the coding into 4 different parts, with Arsh, Priya and Nilam each designing one of the modes of lighting patterns for the four groups of soldered LEDs, while Jessie designed a separate set of behaviours for the unsoldered group of LEDs.

We regrouped a few times to adapt our code for maximum clarity in users’ interactions with the sensor. Using the same thresholds became important, since we were working individually on our own code and combining it all at the end. We tested different sensor values to arrive at the final threshold numbers.

Day Three (Wednesday, 9 Oct)

In order to hide the distracting wires on the back of the LED lights, we designed and laser-cut a box to encase the LED light panel as well as the wires at the back. We also designed a pyramid with the 3 sensors placed at the centre of each side, for users to interact with to control the lighting behaviours and patterns. However, we realized that having 3 sensors would significantly slow the execution of the code. Eventually, we decided to use only 1 sensor for this project and map different physical ranges to different behaviours for the LED lights.

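The single-sensor approach can be sketched as one distance reading divided into bands that dispatch to the different modes; the pin number, band boundaries and mode behaviours below are placeholders rather than our final values (the real code is linked further down):

    const int sensorPin = 7;          // MaxBotix pulse-width output (placeholder pin)

    void setup() {
      pinMode(sensorPin, INPUT);
    }

    void loop() {
      long d = pulseIn(sensorPin, HIGH, 30000UL);  // raw pulse width; 0 = timeout

      if (d == 0 || d > 4000)  idleMode();         // nobody in range
      else if (d > 2500)       meditatingMode();   // far band
      else if (d > 1000)       crazyMode();        // middle band
      else                     individualMode();   // close band
    }

    // Stubs standing in for the four lighting behaviours described above.
    void idleMode()       { /* slow ambient fades */ }
    void meditatingMode() { /* gentle synchronized pulsing */ }
    void crazyMode()      { /* fast, energetic blinking */ }
    void individualMode() { /* centre group takes over */ }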

With our code close to finished, we started soldering the 4 groups of lights together so we could test the code on the LED light panel and see whether the light patterns worked well together, since they had been written by separate people. We soldered the lights in parallel rather than in series so that if one of the lights burned out, it wouldn’t affect the other LEDs soldered into the same group.


To achieve the effect of the infinity mirror, we got reflective acrylic from the plastic shop in the main building. We used this mirror-like acrylic for the base layer of the LEDs, and used clear transparent acrylic coated with a reflective layer as the cover for the box. We experienced some struggles while trying to coat the cover acrylic, as air bubbles got trapped between the acrylic and the coating. However, it still looks good with all the physical elements combined.

 

Day Four (Thursday, 10 Oct)

On the final day before the presentation, we combined and finalized our code together. Problems occurred as we tried to do so: Arsh and Priya’s code couldn’t work together, for reasons we initially couldn’t figure out. Having consulted Nick, we learned that a given pin can either be assigned digitalWrite() or analogWrite(), but not both at the same time. We adjusted our code accordingly to solve this issue.

With Arsh, Priya and Nilam finished with their code, Jessie had trouble making the 8 unsoldered individual LEDs blink one after another, each with a different blink rate set in an LED array. However, the 4 groups of LEDs already blinked and faded in a coherent and unified manner with Arsh, Priya and Nilam’s code. We decided to let Jessie continue working on her code: if she worked it out before the presentation, we would have more interesting light patterns; if she couldn’t, the LED panel worked well as it was and she could continue after the presentation.
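The kind of staggered, non-blocking blinking described here can be sketched with one timestamp and one interval per LED; the pins and rates below are placeholder assumptions, not Jessie’s actual values:

    const int numLeds = 8;
    const int pins[numLeds] = {2, 3, 4, 5, 6, 8, 12, 13};  // placeholder pins
    unsigned long intervals[numLeds] = {200, 300, 400, 500, 600, 700, 800, 900};
    unsigned long lastToggle[numLeds];
    bool state[numLeds];

    void setup() {
      for (int i = 0; i < numLeds; i++) pinMode(pins[i], OUTPUT);
    }

    void loop() {
      unsigned long now = millis();
      for (int i = 0; i < numLeds; i++) {
        if (now - lastToggle[i] >= intervals[i]) {  // each LED keeps its own clock
          lastToggle[i] = now;
          state[i] = !state[i];
          digitalWrite(pins[i], state[i] ? HIGH : LOW);
        }
      }
    }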

Day Five (Friday, 11 Oct)

Jessie eventually got the 8 individual LEDs to behave the way she wanted. Unfortunately, there wasn’t enough time to assemble the lights together before the presentation, so we presented the piece as it was. During the presentation, Nick offered some insight into the psychology of human behaviour and possible interactions with our LED panel. He encouraged us to use this to our advantage by discarding the sensor pyramid completely and hiding the distance sensor somewhere in the main body of the LED panel, for example. Users would then get closer to it in order to find out what triggers the lighting behaviours, and have a more intimate and physical experience with the panel.

Project Adaptation After Presentation 

After the presentation, we received important feedback: with a separate controller, the physical distance between the piece and the controller might impede natural interactions, because the controller limits the physical space participants can play and experiment with, basically only allowing users to wave their hands around it like a zombie. Over the break, we made changes to the display and concept of our project.

The new piece is meant to be hung on a wall and only gets activated when a viewer walks past it. In the new version of our project, the idle state of the wall piece is completely dark. It won’t show any reaction until someone walks past and activates the piece. Once the piece is activated and has received attention from the viewer, the light sequences on the wall piece change depending on different ways of interacting.

This new concept plays on proxemics, and tries to minimize or even eliminate the space between viewers and the collaborative aspect of the piece. We thought that with this new concept, more focus would be placed on human relationships with the space around them.

Video of interaction

Link to Code

https://github.com/arshsob/Experiment2

Documentation

portfolio-image-fhd

led-board-fritzing

 

Technical Challenges

Due to our very limited experience with Arduino and coding, we faced several technical challenges on our way.

The first issue occurred when we were trying to control the brightness and the blink rate of LEDs at the same time. We learned that we can’t use analogWrite() and digitalWrite() on the same LED set simultaneously. This issue was resolved by adding a little more code and changing all digitalWrite() calls to analogWrite().
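In practice the fix looks like the pattern below: once a pin is being faded with analogWrite(), its on/off states are expressed as duty cycles instead of digitalWrite() levels (the pin number is a placeholder):

    const int groupPin = 9;  // placeholder PWM pin for one LED group

    void setup() {
      pinMode(groupPin, OUTPUT);
    }

    void loop() {
      // We had been mixing digitalWrite(groupPin, HIGH/LOW) with analogWrite()
      // on the same pin; keeping everything on analogWrite() removes the conflict.
      analogWrite(groupPin, 255);  // fully on (replaces digitalWrite HIGH)
      delay(500);
      analogWrite(groupPin, 0);    // fully off (replaces digitalWrite LOW)
      delay(500);
    }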

The second issue happened when we connected the LED board to the Arduino. At the test stage, the data coming from the distance sensor was reasonably smooth when there were only 4 LEDs, each connected to an output pin. After connecting the LED board, the distance data fluctuated wildly, making it impossible to interact with the piece. This fluctuation was the result of electrical noise caused by the many wires connected to the board.


As suggested by MaxBotix, we added two components to our board to filter the noise: a 100-ohm resistor and a 100 µF capacitor.


Adding these components stabilized the distance data significantly and resolved the issue.

Finally, to amplify the brightness of LEDs, we used a transistor for each LED group. Otherwise, all the LEDs were too dim to demonstrate the fade effect relevant to distance changes.

After modifying the idea in response to the presentation feedback, the effect displayed when someone passes the box was another challenge, considering that it was supposed to happen only once after a sudden distance change was detected by the sensor. Using a variable to store the time of the sudden change and several conditions over the duration of the fade-in/fade-out effect, the issue was resolved. However, there seems to be some kind of conflict among the conditions, causing minor flickers during the effect.

Several attempts to use a sine function, creating an angle from the time elapsed since the sudden change and limiting it between 0 and PI, failed due to the unnatural (and uncontrolled) behaviour of the output.
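For reference, here is one way the timed sine envelope could be structured (a sketch under assumed pin numbers and thresholds, not our working code): record the trigger time, convert elapsed time to a phase clamped to [0, PI], and use its sine as the brightness envelope.

    const int ledPin = 9, sensorPin = 7;  // placeholder pins
    const unsigned long effectMs = 1500;  // duration of the one-shot fade
    unsigned long triggeredAt = 0;
    long lastDistance = 0;
    bool active = false;

    void setup() {
      pinMode(ledPin, OUTPUT);
      pinMode(sensorPin, INPUT);
      lastDistance = pulseIn(sensorPin, HIGH, 30000UL);  // seed the baseline reading
    }

    void loop() {
      long d = pulseIn(sensorPin, HIGH, 30000UL);
      if (!active && abs(d - lastDistance) > 300) {      // sudden change: someone passed
        active = true;
        triggeredAt = millis();
      }
      lastDistance = d;

      if (active) {
        unsigned long elapsed = millis() - triggeredAt;
        if (elapsed >= effectMs) {
          active = false;
          analogWrite(ledPin, 0);                        // effect finished, go dark
        } else {
          float phase = PI * elapsed / (float)effectMs;  // 0..PI across the effect
          analogWrite(ledPin, (int)(sin(phase) * 255));  // fade in, then out, once
        }
      }
    }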

Project Context

The work of Harry Le (an 8x8x8 Arduino LED cube project) and The DIY Crab (a DIY infinity mirror coffee table), both on YouTube, gave us the inspiration for this project.

Philippe Hamon said, within the context of architecture, “Every building, once completed, concretizes a sort of social and natural proxemics”. This applies to the existence of most objects, including artworks. Interactive artwork, in particular, adds a new element to the relationship between the artwork and its viewers. Our work, The Red Field, is meant to prompt passers-by to pay more attention to the objects around them.

People are more likely to interact with objects that react to them. In its idle mode, The Red Field mimics a still mirror until the sensor picks up on a motion. Once the sensor detects a person passing by (using the change in distance), the wall piece plays a short light sequence: a random blinking effect with a pleasant fall-off, subtly creating a notion of “I just saw you pass by”. At the same time, the quick blinking light sequence draws the attention of the passer-by, creating a sense of curiosity.

Once the piece grabs one viewer’s attention, it draws other people’s attention as well. One of our goals is to get people to interact with the piece collaboratively, creating a sensual co-existence. People adjust the distances between each other based on their social activities, but sometimes those distances are also used to raise defense mechanisms when others intrude on their space (Hall, 1969). The size of the piece requires participants to share a relatively small space, encouraging them to move into each other’s personal spaces. We encourage people to get close to each other while interacting with our work. However, we are also interested to see how participants who don’t know each other well behave in close proximity when they are all drawn to the same object.

Through physical interactions with the piece, participants gain aesthetic pleasure and gratification from the lighting patterns their actions trigger. After adapting the piece, we encased the sensor together with the LED panel so it wouldn’t be easily seen. The idea is for participants, driven by curiosity, to freely experiment with the piece and try to figure out the mechanism behind it. As Costello and Edmonds (2007) put it in their study of play and interactive art, stimulating “playful audience behavior might be a way of achieving a deep level of audience engagement.” We build on this concept in our interactive piece to obtain engagement and entertainment. Participants will eventually adapt to the ways the LEDs behave, and gain a sense of gratification from understanding how it works. This kind of reward system keeps them invested in the experience throughout their interactions. Furthermore, with this acquired knowledge, participants can go on to use the piece for more advanced performances, such as making the LEDs react cohesively to the sounds of music.

References

Costello, B. and Edmonds, E. “A Study in Play, Pleasure and Interaction Design”. ACM, New York, 2007. https://dl-acm-org.ocadu.idm.oclc.org/citation.cfm?id=1314168

Le, Harry. “8x8x8 LED CUBE WITH ARDUINO UNO”. YouTube. https://youtu.be/T5Aq7cRc-mU. Accessed October 18, 2019.

The DIY Crab. “DIY Infinity Mirror Coffee Table”. YouTube. https://youtu.be/OasbgnLOuPI. Accessed October 18, 2019.

Hamon, Philippe. Expositions: Literature and Architecture in Nineteenth-Century France. Trans. Katia Sainson-Frank and Lisa Maguire. Berkeley: U of California P, 19.

Hall, E.T. The Hidden Dimension. New York: Anchor Books, 1969.


Experiment 2: Forget Me Not

Exploration in Arduino & Proxemics.
An interactive plant that senses the presence of people nearby and alters its behaviour according to their proximity.

Team
Manisha Laroia, Nadine Valcin & Rajat Kumar

Mentors
Kate Hartman & Nick Puckett

img_20191016_200058_1-01-01

Description
We started ideating on this Proxemics project with the intent of creating an experience of delight or surprise for people interacting with our artefact from varying proximities. We explored everyday objects, notably those you would find on a desk (books, lamps, plants), and how they could be activated with servos and LED lights, with those activities transformed by proximity data from a distance sensor. We wanted the effect to defy the behaviour normally expected of the object, and to denote some form of refusal to engage with the user when the user came too close. In that way we were anthropomorphizing the objects and giving them a form of agency.

We explored the idea of a book, a plant or a lamp that would move in unexpected ways. The size of the objects and the limitations of the servo in terms of strength and range of motion posed some challenges. We also wanted the object to look realistic enough not to immediately draw attention to itself or look suspicious, which would help build up to the moment of surprise. We finally settled on an artificial plant that, in its idle state, sways at a slow pace creating a sense of its presence, but shows altered behaviour whenever people come within the threshold and near proximity of it.

img_20191021_150554-01

Inspiration
Don Norman, in his book The Design of Everyday Things, talks about design being concerned with how things work, how they are controlled, and the nature of the interaction between people and technology. When done well, the results are brilliant, pleasurable products; when done badly, the products are unusable, leading to great frustration and irritation. Or they might be usable, but force us to behave the way the product wishes rather than as we wish. He adds that experience is critical, for it determines how fondly people remember their interactions. When we interact with a product, we need to figure out how to work it: this means discovering what it does, how it works, and what operations are possible (Norman).

An essential part of this interaction is the affordance an object portrays and the feedback it returns for a usage action extended by the user. Altering the expected discoverability, affordances and signifiers makes the experience stranger and more surprising. With the rise of ubiquitous computing, and more and more products around us turning into smart objects, it is interesting to see how people’s behaviour will change with changed affordances and feedback from everyday objects in their environment, speculating on behaviours and creating discursive experiences. Making an object not behave as it should alters the basic conceptual model of its usage and creates an element of surprise in the experience. We felt that if we could alter these affordances and feedback in an everyday object based on proximity, it could add an element of surprise and open a conversation about the anthropomorphizing of objects.

The following art installation projects that all use Arduino boards to control a number of servos provided inspiration for our project:

surfacex

Surface X by Picaroon, an installation with 35 open umbrellas that close when approached by humans. See details here.

servosmit

In X Cube, by Amman-based design firm Uraiqat Architects, consists of 4 faces of 3 m x 3 m, each formed by 34 triangular mirrors (each individually controlled by its own servo). All mirrors are in constant motion, perpetually changing the reflection users see of themselves. See details here.

dont-look-at-me

Elisa Fabris Valenti’s Don’t Look at Me, I’m Shy is an interactive installation in which the felt flowers of a mural turn away from visitors in the gallery when they come into close proximity. See details here.

robots_dune-raby

Dunne & Raby’s Technological Dreams Series: No.1, Robots, 2007 is a series of objects that are meant to spark a discussion about how we’d like our robots to relate to us: subservient, intimate, dependent, equal? See details here.

The Process
After exploring the various options, we settled on creating a plant that would become agitated as people approached. We also wanted to add another element of surprise by having a butterfly emerge from behind the plant when people came very close, along with LEDs that would serve as signifiers alongside the movement of the plant.

Prototype 1
We started the process by attaching a cardboard piece to the servo motor and taping two wire stems with a plastic flower vertically onto it, to test the motor activity. We wrote the code for the Arduino and used the sensor, motor and plant prototype to test the different motions we desired for the different threshold distances.

img_20191009_141204-01

img_20191009_152458-01

The proxemics theory developed by anthropologist Edward Hall examines how individuals interpret spatial relationships. He defined 4 distances: intimate (0 to 0.5 m), personal (0.5 to 1 m), social (1 to 4 m) and public (4 m or more). The sensors posed some difficulty in terms of getting clean data, especially in the intimate and personal distance ranges. We decided on 3 ranges: the intimate and personal ranges combined into one of less than a meter (< 1000 mm), a social range between 1000 and 3000 mm, and a public range beyond 3000 mm.

The plant has an idle state in the public range, where it gently sways under yellow LEDs; an activated state, where the yellow lights blink and the movement is more noticeable; and an agitated state at less than a meter, where its motion is rapid and jerky and red lights blink quickly. Once we had configured the threshold distances at which the motors could produce the desired motion, we moved to a refined version of the prototype.
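A condensed sketch of this three-state behaviour, using the thresholds above; the pins, sway parameters and blink rates are placeholder assumptions rather than our project’s actual values (the real code is linked below):

    #include <Servo.h>

    Servo plant;
    const int yellowPin = 9, redPin = 10, sensorPin = 7;  // placeholder pins

    void setup() {
      plant.attach(3);
      pinMode(yellowPin, OUTPUT);
      pinMode(redPin, OUTPUT);
      pinMode(sensorPin, INPUT);
    }

    // Swing the servo around centre with a given amplitude (degrees) and period (ms).
    void sway(int amplitude, int period) {
      float phase = (millis() % period) / (float)period;
      plant.write(90 + amplitude * sin(phase * TWO_PI));
    }

    void loop() {
      long mm = pulseIn(sensorPin, HIGH, 30000UL);  // distance reading (0 = timeout)

      if (mm == 0 || mm > 3000) {                   // public range: idle
        digitalWrite(redPin, LOW);
        digitalWrite(yellowPin, HIGH);              // steady yellow
        sway(20, 2000);                             // gentle, slow sway
      } else if (mm > 1000) {                       // social range: activated
        digitalWrite(redPin, LOW);
        digitalWrite(yellowPin, (millis() / 400) % 2);  // blinking yellow
        sway(45, 1000);                             // more noticeable movement
      } else {                                      // intimate/personal: agitated
        digitalWrite(yellowPin, LOW);
        digitalWrite(redPin, (millis() / 150) % 2); // rapid red blinking
        sway(80, 300);                              // rapid, jerky motion
      }
    }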

Prototype 2
We made a wooden box using the digital fabrication lab and purchased the elements to make the plant foliage and flowers. The plant elements were created using wire stems attached to a wooden base secured to the servos. The plant was built using a felt placemat (bought from Dollarama) cut into the desired leaf-like shapes and attached to the wire stems. Once we confined the setup to a wooden box, like a pot holding a plant, a new challenge arose in terms of space constraints: each time the plant moved, the artificial foliage would hit the side walls of the box, interrupting the free motion of the motor. We had to continuously trim the plant and ensure the weight was concentrated in the centre to maintain a constant torque.

img_20191010_173559-01

img_20191009_153035-01

img_20191010_131929-01

The butterfly that we had wanted to integrate was attached to a different servo with a wire, but we never managed to get the desired effect, as we wanted the rigging of the insect to be invisible so that its appearance would elicit surprise. We therefore abandoned that idea, but would like to revisit it given more time.

butterfly-setup

At this stage our circuit prototyping board, the sensors and the LEDs were not fully integrated into a single setup. The next step was to combine all of this in one unit, discreetly hiding the electronics and having a single cord power the setup.

img_20191011_135618-01

led-setup

Final Setup
The final setup was designed such that the small plant box was placed within a larger plant box that housed all the wires, the circuits and the sensors. As we were using individual LEDs, the connected LEDs could not fit in the plant box without hampering the motion of the plant, so they were integrated into the larger outer box, with artificial foliage hiding the circuits.

img_20191016_162701-01

img_20191016_200307-01-01

Context-aware computing relates to this: some kind of context-aware sensing method [1] provides devices with knowledge about the situation around them, so they can infer where they are in terms of social action and act accordingly. Proxemic theories describe many different factors and variables that people use to perceive and adjust their spatial relationships with others, and the same could be used to shape people’s relations to devices.

img_20191016_200145-01

img_20191016_200300-01

img_20191016_200253-01

 

Revealing interaction possibilities: We achieved this by giving the plant an idle-state slow sway motion. When a person entered the sensing proximity of the plant, the yellow LEDs would light up as if inviting the person in.

Reacting to the presence and approach of people: As the person entered the Threshold 1 circle of proximity, the yellow LEDs would blink and the plant would start rotating, as if scanning its context to detect the individual who had entered its proximity radius.

From awareness to interaction: As the person continued to walk closer, curious to touch or see the plant up close, the movement of the plant would get faster. If the person then entered the Threshold 2 distance, the red LEDs would light up and the plant would move violently, indicating a reluctance toward close interaction.

Spatial visualizations of ubicomp environments: Three threshold distances were defined in the code to create discrete distance zones for different interactions, similar to how people create boundaries around themselves through body language and behaviour.

img_20191016_200242-01


Challenges & Learnings

  • Tuning the sensor data was a key aspect of the project, as it let us define the proximity circles. To get more stable values, we would let the sensor run for some time with no obstacle in its field until the readings stabilized, and only then connect the motor; otherwise the motor would take the erratic values and produce random motions instead of the programmed ones. (A simple smoothing approach is sketched after this list.)
  • Another challenge was discovering the most suitable sensor positions and placement of the setup in the room with respect to the audience that would see and interact with it. It required us to keep testing in different contexts and with varying numbers of people in proximity.
  • Apart from the challenges with the sensors, we encountered other software and hardware interfacing issues. Programming the red and yellow LEDs (4 of each colour) presented a challenge in terms of changing from one set to the other. They were initially programmed using arrays, but getting the yellow lights to shut off once the red lights were triggered proved difficult, and the lights had to be programmed individually to get the effect we desired. In a second phase, we simplified things by soldering all the lights of the same colour in parallel and running them from one pin on the Arduino.
  • The different levels of motion of the plant were achieved by a long process of trial and error. The agitated state provided an extra challenge in terms of vibrations. The rapid movements of the plant produced vibrations that would impact the box that contains it while also dislodging the lights attached to the container holding the plant.
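For the first point above, a running average is one simple way to tame erratic readings; this is a sketch under assumed names (smoothedDistance() is our own illustrative helper, not from the project code):

    const int windowSize = 8;
    long samples[windowSize];
    int sampleIndex = 0;

    // Returns the average of the last windowSize readings from the sensor,
    // smoothing out the occasional erratic value before it reaches the motor.
    long smoothedDistance(int sensorPin) {
      samples[sampleIndex] = pulseIn(sensorPin, HIGH, 30000UL);
      sampleIndex = (sampleIndex + 1) % windowSize;
      long sum = 0;
      for (int i = 0; i < windowSize; i++) sum += samples[i];
      return sum / windowSize;
    }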

Github Link for the Project code

Arduino Circuits
We used two Arduinos: one to control the servo motor with the plant, and the other to control the LEDs.

motor-circuit

led-circuit

References
[1] Marquardt, N. and Greenberg, S. Informing the Design of Proxemic Interactions. Research Report 2011100618, University of Calgary, Department of Computer Science, 2011.
[2] Norman, Don. The Design of Everyday Things. New York: Basic Books, 2013. Print.