Experiment 3: Toil & Trouble

1.0 Requirements
The requirement for this project was the inclusion of a tangible or tactile interface for a screen-based interaction, with a strong conceptual and aesthetic relationship between the physical interface and the events happening on the screen.

2.0 Planning & Context

For this experiment I explored four input methods of physical interaction that would affect an onscreen display generated using Processing.js:
– A push button highly integrated into the design and structure of the piece
– Tactile switch using copper wire to open and close a circuit
– Magnetism and Hall Effect Sensors to return a digital signal when in the presence of magnetism
– Elapsed time within a state

For the screen output in Processing.js, I demonstrated:
– Animation – using sprites & redrawn still images
– Video
– Sound
– Images
– Text
– Drawing Simple Shapes

The Processing output is presented via a link to an iPad on the front of the piece, ideally hiding away the interface with the computer. In initial testing, an iPad Pro was used and Sidecar was utilised to make the connection to Processing seem magical. Unfortunately, the iPad Pros in the AV Room have been out of commission for the past few weeks, so the tethering had to be done via a USB cable and Duet Display. Wireless Duet Display connectivity was possible, but proved unreliable in testing. The computer was hidden under the table.

This project was built during Halloween, and depicts the casting of a spell to create a dragon (my Chinese zodiac sign and favourite mythical creature). The altar is switched on, a grimoire opened, and ingredients are added one by one to the cauldron; after the last item is added, the cauldron boils furiously and settles. The spell is then cast over the cauldron using a hex bag, and a dragon emerges. Each physical movement by the caster triggers a switch.

It is meant to be a playful piece, performed by a single person for an audience as entertainment. The process can then be repeated by observers interested in how it works, what the switches are, or in the animations on screen.

This project plays with hidden switches; the aim is to make the magic seem magical and not reveal exactly where the switches are or how they work. There is a bit of mystery behind what is triggering the changes, and few actual buttons are pressed or manipulated.

I initially made prototypes using velostat, light sensors, electromagnets, reed switches, and servos, but found the velostat too unreliable for my purposes, the electromagnets too weak to lift items, the reed switches too fragile when manipulated, the light sensors fickle (even when averaging the ambient light on each start-up), and the servos not powerful enough to achieve the desired effect. (The iterations can be found in the code.)

Planning Details
> Sensors and actuators
(Digital) Push button -> push-down button: turns on heat, turns on array of orange lights underneath, and turns levitation on/off
(Digital) Push button -> ingredients reset button
(Analog) Tactile button x 4 -> ingredients complete circuit. tap a few times before removing.
(Digital) Magnetic Switch -> hex bag with magnets to trigger incantation.
(Digital) Lights -> triggered at various stages

> Casting modes
-1 (Pre-activation)
0 (Activated)
1 (Ingredient 1)
2 (Ingredient 2)
3 (Ingredient 3)
4 (Ingredient 4)
5 (Stew Ready)
6 (Spell)
7 (Dragon)
8 (Off)
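Expressed as they might appear in the Arduino code (a sketch only; the identifiers in the repo may differ):

```cpp
// The casting modes above as a C-style enum (illustrative names).
enum CastingMode {
  PRE_ACTIVATION = -1,
  ACTIVATED,      // 0
  INGREDIENT_1,   // 1
  INGREDIENT_2,   // 2
  INGREDIENT_3,   // 3
  INGREDIENT_4,   // 4
  STEW_READY,     // 5
  SPELL,          // 6
  DRAGON,         // 7
  OFF             // 8
};
```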

Note: Although two Arduinos are pictured, only one was used. The other was kept in place to allow for rapid switching in the event of failure, which happened shortly before the presentation. Ultimately the Mega was used, as the “Uno” (actually a knockoff Arduino) began to fail unexpectedly. I found out afterwards that its not being a genuine Arduino was the cause of the intermittent failures in this experiment, and also in previous experiments that relied on it. The discovery came too late for the other experiments, but just in time for this one.

3.0 Implementation

Image: Cauldron with ingredients, on stove and “wood” (actually small strips of acrylic).

3.1 Software & Elements

3.1.1 Libraries & Code Design

Code: https://github.com/jevi-me/CC19-EXP-3

Arduino
As this was a solo experiment, I took the opportunity to explore the software design capabilities within Arduino. I wrote and adapted various classes for use within the code, and tried coding in a more object-oriented way, purely for experimental purposes. I started off with quite a number of borrowed and written classes and libraries (over 10 at one point), but as features were cut, repurposed or shifted, the number of imported libraries dropped to 3. The experience was very fulfilling, however, and exposed me to a different way of communicating with the device.

Arduino Loop Method:
A) Wait for Activation button to be triggered. If the stove isn’t on, nothing works.
– turns on stove lights, turns on altar lights
– change bool
– set state
– send state
B) Check for a change in pressure on the ingredient bases (& if activated)
– change lights
– change bool
– set state
– send state.
C) If all ingredients are added, stew is ready after boiling for a few seconds
– change lights
– change bool
– set state
– send state
D) Hall switch triggered (& if stew ready)
– change lights
– change bool
– set state
– send state
X) Sync the info with Processing periodically
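A minimal sketch of the loop described above. The pin numbers, the boil delay, and the use of INPUT_PULLUP are assumptions; the sketch in the repo tracks more inputs (reset button, lights) than shown here.

```cpp
const int ACTIVATE_PIN = 2;                   // stove push button
const int HALL_PIN = 3;                       // hall-effect sensor
const int INGREDIENT_PINS[4] = {4, 5, 6, 7};  // copper-tape circuits

int state = -1;  // -1 = pre-activation (see casting modes)
bool activated = false;
bool ingredientAdded[4] = {false, false, false, false};

void sendState() {
  Serial.println(state);  // X) Processing syncs by reading this value
}

void setup() {
  Serial.begin(9600);
  pinMode(ACTIVATE_PIN, INPUT_PULLUP);
  pinMode(HALL_PIN, INPUT_PULLUP);
  for (int i = 0; i < 4; i++) pinMode(INGREDIENT_PINS[i], INPUT_PULLUP);
}

void loop() {
  // A) If the stove isn't on, nothing works.
  if (!activated && digitalRead(ACTIVATE_PIN) == LOW) {
    activated = true;
    state = 0;
    sendState();
  }
  if (!activated) return;

  // B) Lifting an ingredient opens its copper-tape circuit (reads HIGH).
  for (int i = 0; i < 4; i++) {
    if (!ingredientAdded[i] && digitalRead(INGREDIENT_PINS[i]) == HIGH) {
      ingredientAdded[i] = true;
      state = i + 1;
      sendState();
    }
  }

  // C) All ingredients in: the stew is ready after boiling a few seconds.
  if (state == 4) {
    delay(3000);
    state = 5;
    sendState();
  }

  // D) The hex bag's magnets pull the hall sensor low, casting the spell.
  if (state == 5 && digitalRead(HALL_PIN) == LOW) {
    state = 6;
    sendState();
  }
}
```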

Processing
On the Processing side, I was eager to explore as many of the available output features as possible. I developed methods that allow for animation using custom-made and premade sprites or still images running at different frame rates with loop/no-loop variables; videos placed at different locations and triggered to start based on input from Arduino; and sound that started, looped, changed volume, and stopped based on the state of the system. I also used text and the drawing functions of Processing. As with Arduino, the aim was to explore as much of what Processing.js had to offer.

Additional
In both Arduino and Processing, I explored fail-safe methods that allow the intertwined systems to fail gracefully and return helpful feedback to the user. Though not a requirement of this project, this was again an exploration of the capabilities. One example checks the state of the serial port, and another the status of the USB tether between Arduino and Processing. In both cases, the system fails gracefully and provides helpful hints on screen and/or in the console to guide correction.

3.1.2 Sound Files
Various sounds were used throughout:
– Background music played before the grimoire is opened
– Cauldron boiling sound — there are two types depending on the reaction taking place in the cauldron, and they increase in intensity as the ingredients are added
– “Double Double Toil and Trouble” from Shakespeare is read during the spell casting
– Background music played when the dragon is successfully created

3.1.3 Videos
Four videos are used in this project. The videos were placed at different locations on the screen when triggered, acting as design flourishes. They were sourced from an Instagram video pack, then scaled, cleaned up, and formatted appropriately for use within Processing. The videos serve as the lowest layer of the elements on the screen.

3.1.4 Animation – Sprites
Bunny
The bunny animation on the lower right corner reacts to the state that the cauldron is in — each new action performed by the caster, or in some cases a timed reaction, changes the animation of the bunny. It can sleep, run, sit, rise up or lower itself to different levels in response to the caster. The animation uses carefully named and numbered sprites and a customised method to cycle through the sprites, drawing them to the screen and looping around to the beginning of the list if the animation cycles within the state. Since the frame rate that Processing draws at is fixed, to control the frame rate of the animations and save on the number of sprites used, I wrote a frame skipper that allows a number of Processing frames to be skipped without redrawing the animation.
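A minimal sketch of the frame-skipper idea, with illustrative names (the repo's implementation may differ): the animation advances its sprite index only once every `skip` draw calls, and loops back to the first sprite.

```cpp
// Processing calls draw() at a fixed rate; this counter decouples the
// animation's frame rate from it.
struct FrameSkipper {
  int skip;        // draw() calls per animation frame
  int frameCount;  // number of sprites in the cycle
  int counter = 0;
  int frame = 0;

  FrameSkipper(int skip, int frameCount) : skip(skip), frameCount(frameCount) {}

  // Call once per draw(); returns the sprite index to render this frame.
  int next() {
    if (++counter >= skip) {
      counter = 0;
      frame = (frame + 1) % frameCount;  // loop back to the start
    }
    return frame;
  }
};
```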

Cauldron
The cauldron sprite images were made by taking screenshots of a 3D image of three similarly designed cauldrons, as I rotated them around at different angles. As with the bunny animation, they were carefully named and numbered to allow the cauldron to spin, and different versions of the cauldron were drawn based on the stage of the spell.

3.1.5 Animation – Still Images
Grimoire
The grimoire can be open or closed, and is animated to mimic levitation using simple oscillating redrawing on the y-axis.

Dragon
The dragon utilises the same “oscillating on the y-axis” to mimic flight, and only appears if all the previous steps have been completed. This information is received from Arduino which controls the logic.
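A minimal sketch of that oscillating redraw, with illustrative names and values: each frame, the image is drawn at a y position following a sine wave.

```cpp
#include <math.h>

const float TWO_PI_F = 6.2831853f;

// Bobs between baseY - amplitude and baseY + amplitude, completing one
// cycle every `period` seconds, to mimic levitation or flight.
float levitateY(float baseY, float amplitude, float seconds, float period) {
  return baseY + amplitude * sinf(TWO_PI_F * seconds / period);
}
```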

3.1.6 Images
Representations of the physical ingredients appear on screen when they are added to the cauldron, and are listed all together at the beginning. The position and visibility of the images are tied to the physical interaction taking place.

3.1.7 Text and Drawing
I used text in various places to indicate the state of the spell and explain what is happening. There is also a small indicator on the bottom left that shows what stage the spell is in.

3.2 Hardware
Copper tape and alligator clips make up the main tactile interaction. The circuit is completed when the ingredient is in its place in the storage basin of the piece. On its removal, the switch is “released”, as the circuit is opened. This open circuit triggers the actions processed by Arduino and sent to Processing.

Another switch used is the detection of a magnetic field. A circle of “diamonds” marks the spot where a hex bag is to be placed. The hex bag contains 4 neodymium magnets which create a magnetic field that is registered by a hall detector placed just under the wood.

The other types of switches are ordinary push buttons which were soldered and positioned flush to the surface of the piece allowing them to blend into the design, and time based “switches” which trigger after a predetermined time has elapsed e.g. during boiling and during incantations.

Other things of note in the build are the ingredients used: some real and natural, like tulip bulbs and spices, and others symbolic, like skeletons and eyes. Ingredient list: eyes, pumpkin, magic, winged bats, tulip bulb, mushrooms, skeleton of a snake (the word ‘dragon’ comes from ‘draco’ and ‘drakon’, meaning ‘large serpent’ or ‘sea serpent’), orb, candle. These were purchased at Dollarama during the Halloween season.

Lights were used as a reference of what was happening on screen, mimicking a magical altar, and a flame below the stove under the cauldron.

The altar was laser cut and featured steampunk gears and a skeleton cup which held the “dried blood of lizard”. The stove, also laser cut, used wall screws and brackets to add height, and had acrylic strips to mimic wood in the fire pit.

4.0 Reflections

The piece is meant to be a performance using sleight of hand and hidden switches, but it also encourages others to want to try it and make a dragon as well. This then requires revealing the switches, since it isn’t very obvious how things work. Another iteration of this could be a more DIY version with better labelling; that would make the system usable by others, rather than the performance piece this was created to be. In earlier testing, I received the comment that it could be cool if different ingredients and patterns yielded different creatures. That would require more complex code, but is possible as yet another iteration of this project.

5.0 Additional Photos

6.0 Literature References (MLA Cited)
Shakespeare, William, and A. R. Braunmuller. Macbeth. Cambridge University Press, 1997.
Double, Double Toil and Trouble – Shakespeare, http://www.potw.org/archive/potw283.html.
“DRAKON AITHIOPIKOS.” ETHIOPIAN DRAGON (Drakon Aithiopikos) – Giant Serpent of Greek & Roman Legend, https://www.theoi.com/Thaumasios/DrakonesAithiopikoi.html.

6.1 Additional Reference Links (Linked)
Envato Elements (elements.envato.com) – for some of the sounds, videos, and graphics.
Processing.js Guide (processingjs.org) – for part of animation code
Circuito.io (circuito.io) – for libraries. Licenses included in the source code.

Experiment 5: Eternal Forms

Names of the Group Members:

Catherine Reyto, Jun Li, Rittika Basu, Sananda Dutta, Jevonne Peters

Project Description:

“Eternal Forms” is an interactive artwork incorporating geometric elements in motion. The construction of the elements is highly precise in order to generate an optical illusion of constant collision. The illusion is a result of linear patterns overlapping in motion between geometric forms: the foreground square is firmly stabilised while the background circle rotates constantly, and the display lights change their chromatic values as participants interact from varying proximities.

The artwork takes inspiration from various light and form installation projects by Nonotak, an artist duo consisting of Illustrator Noemi Schipfer and architect-musician Takami Nakamoto. Nonotak works with sound, light and patterns achieved with repeating geometric forms. The installation work aims to immerse the viewer by enveloping them in the space with dreamlike kinetic visuals. The duo is also known for embedding custom-built technology in their installations, as well as conventional technology to achieve desired effects in unconventional ways.

Visuals:

Final Images:

 

Circuit Diagrams:

https://www.circuito.io/app?components=512,11021

Project Context

Initial Proposal

Originally we had the intention of continuing our explorations with RGB displays. Four out of five of the group members had come a long way while working together on Experiment 4 (Eggsistential Workspace), only to have our communicating displays malfunction on account of the unexpected fragility of the pressure sensors. We had hoped to pick up where we had left off, by disassembling our previous RGB displays and revamping the project into an elaborate interactive installation for the Open Show. We designed a four-panel display, each one showcasing a pattern of birds from our respective countries (Canada, Saint Lucia, India and China). The birds would be laser-cut and lit by effect patterns with the RGBs. After many hours of strategizing, we found we were facing too many challenges in the RGB code that, given our time constraints, became overly risky. For example, we intended to isolate specific lights within the RGB strips, thereby designating the neighbouring lights on the string to be turned off. Once we broke down how complex this would prove to be (each message sent to an LED involves sending messages to all preceding LEDs in the string), it became clear that the desired codebase was out of scope. We returned to the drawing board and began restrategizing a plan that could work within the constraints of our busy schedules, deadlines and combined skills. Having five people in the group meant a lot of conflicting ideas, making it tricky to move out of the brainstorming process and into prototype iteration. But we were all interested in kinetic sculptures, and the more examples we came across, the more potential we saw in devising one of our own. It seemed like an effective way of keeping us equally involved in the strategy as well as the code. Having minimal experience working with gears (only Jun Li had used them previously), we were intrigued by the challenge of constructing them. We came across the example below and began to deconstruct it, replacing the hand-spun propulsion with a motor and controlling the speed and direction by means of proximity sensors.
Video:
https://www.youtube.com/watch?v=–O9eyKIubY

Though we aimed to keep the design as simple as possible, we weren’t able to gauge the complexities of the assembly until we really started to dig into the design. We thought a pulley system could be built, where a mechanism surrounding the motor could trigger motion in another part of the structure by way of gears. We were mesmerized by the rhythmic patterns we came across, in particular the work of Nonotak studio, which primarily works with light and sound installations. Taking inspiration from their pieces, we decided to create visual illusions based on the concept of pattern overlap. We also planned to make use of light and a distance sensor to make the piece an interactive light display.

https://www.nonotak.com/_MASK

Tools and Software

  • Distance sensor
  • RGB lights
  • Servo motor (360)
  • Nano and Uno board
  • Acrylic sheets – black and diffused
  • ¼" and ⅛" Baltic Birch sheets
  • Laser cutting techniques
  • Illustrator

Ideation

Our previous ideas seemed complicated in terms of implementation, so we sat down for a second round of brainstorming on possible outcomes within the given time frame and resources. We began by browsing existing projects: the kinetic installations of Nonotak Studio, ‘The Twister Star Huge’ by Lyman Whitaker, ‘Namibia’ by Uli Aschenborn, and Spunwheel’s award-winning sculptures made from circular grids. We then proceeded with the creation of our own circular grid: a system constructed from several interlinking ellipses running across a common circumference of the centre ellipse. This grid served as the base for our subsequent designs.

We also drew inspiration from Félix González-Torres, an American visual artist from Cuba who created minimal installations from common objects like lightbulb strings, clocks, paper, photographs, printed texts and hard candies. He was a member of ‘Group Material’, a New York-based artist organisation formed to promote collaborative projects around cultural activism and community education.

Process

Several constructions and geometrical formations were explored. We studied how to create an optical illusion with forms in motion. We tried to simplify the curvatures into straight lines, since we had no sense of the feasibility and reliability of complicated junctions. Thus one simple circle and one simple square were included.

As you can see in the above diagrams, a layout was created to give us an idea of the entire frame, its size, the materials to be used, and the complications or hindrances that might arise along the way.

After the finalization of the entire setup, we came up with a list of the different layers that would be encased in an open wooden box (20" by 20"). The list is as follows, from top to bottom:

  1. Square black acrylic sheet with laser-cut patterns – This will be the front view (covering the open wooden box) and will remain stationary – size 20" x 20"
  2. Circular black acrylic sheet with laser-cut patterns – This will be in motion, as the centre will be connected to the 360 servo motor – size 18.5" x 18.5"
  3. Diffused white acrylic sheet with a cut outline in the centre to fix the base of the servo motor
  4. RGB lights + Nano and Uno board – These are stuck to the base of the wooden box
  5. A small wooden strip with a distance sensor holding area to be attached in front of the installation – This will change the pattern lighting based on distance

Image: The 2 layers of forms that were laser cut to include in our final setup.

Image: The RGB bulbs set up to create an even distribution of light across the 20”x 20” board.

Image: After the setup was done, above are a couple of effects created using lights and the motion of the overlapping layers.

Prototyping

We created numerous miniature samples of our design and overlapped them. We experimented with black and white by playing with the following arrangements:

  • White Square rotating on the White Square (Stabilised)
  • Black Square rotating on the White Square (Stabilised)
  • Black Square rotating on the Black Square (Stabilised)
  • White Square rotating on the Black Square (Stabilised)
  • White circle rotating on the white square (Stabilised)

Coding

Motor — the motor is set to run slowly counterclockwise at the optimum speed to give the desired interplay with the geometry. It’s important to get the speed exactly right or the lines will not show the desired effect.
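A sketch of the motor setup (the pin and pulse width are assumptions, and which direction a 360° servo turns below 1500 µs depends on the servo): small offsets from the 1500 µs stop point give the slow, steady rotation the illusion needs.

```cpp
#include <Servo.h>

Servo rotor;

void setup() {
  rotor.attach(9);                // signal pin for the 360° servo
  rotor.writeMicroseconds(1480);  // just below stop: slow rotation
}

void loop() {
  // Speed stays constant; tune the pulse width until the lines align.
}
```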

Lights — the distance sensor reads in a value and includes it in the running average of the distance (the last 10 readings); it then maps that average to a value used for the brightness of the lights and the speed of the effects. The closer, the brighter, but the slower the effects. The distance also determines which light effect is shown: when very close, it breathes; a little further away, it blinks quickly; and at the standard distance it paints the colour to the background. Each effect adds to the illusion.
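A sketch of that light logic (the pins, thresholds, and HC-SR04-style rangefinder are assumptions; the effect functions are placeholders): keep a running average of the last 10 readings, map it to brightness, and pick an effect by proximity band.

```cpp
#include <Adafruit_NeoPixel.h>

const int TRIG_PIN = 7, ECHO_PIN = 8, LED_PIN = 6, LED_COUNT = 50;
const int N = 10;  // window size for the running average

Adafruit_NeoPixel strip(LED_COUNT, LED_PIN, NEO_GRB + NEO_KHZ800);
long readings[N];
long total = 0;
int idx = 0;

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  return pulseIn(ECHO_PIN, HIGH, 30000) / 58;  // ~58 µs per cm
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  strip.begin();
  for (int i = 0; i < N; i++) readings[i] = 100;  // seed the average
  total = 100L * N;
}

void loop() {
  // Running average of the last N readings.
  total -= readings[idx];
  readings[idx] = readDistanceCm();
  total += readings[idx];
  idx = (idx + 1) % N;
  long avg = total / N;

  // The closer, the brighter (and, in the full code, the slower the effect).
  int brightness = map(constrain(avg, 10, 200), 10, 200, 255, 40);
  strip.setBrightness(brightness);

  // Proximity band picks the effect (hypothetical helpers):
  // if (avg < 30) breathe(); else if (avg < 80) blinkFast(); else paint();
  strip.show();
  delay(50);
}
```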

Github: https://github.com/jevi-me/CC19-EXP-5

Final Project & Exhibition Reflection

For the exhibition, we were given an entire room to display the piece. We projected a video of the manufacturing onto one wall, and on the opposite wall we solved the concern of the empty space by projecting artwork appropriate for the display: a generative mandala formation of various altering forms (coded by Sananda in her individual project). That work allows participants to create their own patterns with varying colours via manual alterations with potentiometers. We also had some calming tunes that played along with the laser-cutting video being projected.

Many in attendance commented that they couldn’t pull their eyes away from the piece, and that it was meditative, mesmerising and calming. We also received three offers to purchase the installation. One participant analysed the piece, praising the use of colour, lines, geometry and interaction that made it aesthetically pleasing, and we noticed many leaving and returning with their friends to have them experience the illusion themselves and interact with the distance sensor with great delight. Overall, the experience of light and subtle motion in a dark room created some beautiful visual illusions, and that became the highlight of our experiment.

References

  1. SCHIPFER, NOEMI, and TAKAMI NAKAMOTO. “MASKS, Kinetic Object Series”. Nonotak.Com, 2014, https://www.nonotak.com/_MASKS.
  2. Kvande, Ryan. “Sculpture 40″ – Award Winning Optical Illusion Sculptures”. Spunwheel, 2019, https://www.spunwheel.com/40-kinetic-sculptures1.html.
  3. Whitaker, Lyman. “Whitaker Studio”. Whitakerstudio.Com, 2019, https://www.whitakerstudio.com/.

Experiment 4: Influence Over Distance

Eggsistential Workspace

Course: Creation & Computation
Digital Futures, 2019

Jun Li, Jevonne Peters, Catherine Reyto, Rittika Basu, Arsalan Akhtar

GitHub: https://github.com/jevi-me/CC19-EXP-4 

Project Description

Eggsistential Workspace is a somatosensorial platform intended to communicate motivation and companionship, through transitional chromatic values, between two participants in distant spaces. This personalised bonding experience enables participants to convey their activity in their individual workspaces, and more specifically at their laptop or desktop PC. With the motion of their wrists rising and falling in the act of typing, pressure sensors (nestled in a mousepad made from a balaclava and light stuffing) activate the RGB patterns of the corresponding participant’s set of lights. The faster they type, the greater the activity of the lights; the slower they type, the more their inertia is echoed by the decreased activity of the light patterns. We have all experienced that feeling of being strapped to one’s desk under the pressure of a deadline, as well as the lack of community when working alone is a frequent occurrence. We thought it made for a fun, expressive but non-intrusive way of keeping one another company while working at home or in a solitary-feeling workspace.

One example of telepresence that inspired our ideation process was the project titled The Trace [6] from El Rastro. In this installation, two people in remote rooms share a common space with the help of light visuals and sound, triggered by a sensor when an individual enters a room. This results in them occupying the exact same position in the space.
In addition, another great use of ultraviolet light for a telepresence art installation can be seen in the project titled “Miscible” [7]. This work by Manuel Chantre and Mathieu Le Sourd used sensors, light and principles of chemistry to mix two liquids homogeneously while participants were in remote locations. In this performance, participants in remote locations are expected to mix the liquid with UV lights in a way that blends them perfectly; each UV light mixes to create a perfect blend of liquid and colour.

Ideation & Process

In our first brainstorming session, we agreed from collective past experience that it would be wise to keep the idea simple. The very first topic we discussed was the language of colour, and how hues are interpreted differently in different countries. But we struggled to find a tangible means of working with colour translation, given the complexity of networking. We brainstormed several ideas, explored online examples and struggled to procure elements from previous projects. One of the dismissed proposals involved creating a winter scene (via graphics in Processing) wherein participants could collectively monitor parameters of day-to-night transition (changing the background colour), intensify the snow-storm via wind (rain/snow particle code), wavering opacity, amplify audio effects etc.

Through the iterations of our project, our common interests aligned and a concept began to take shape. We were inspired by the concept of ‘telepresence’ from Kate’s class, especially the ‘LovMeLovU’ project by Yiyi Shao (“Shao”) [1]. We were all drawn to the idea of remote interaction between two individuals, in two separate spaces, by means of two displays of light. We settled on an output of 50 RGB LEDs per set, for 2 rooms (bitluni’s lab) [2]. It did mean upping the ante in a big way, but we had two group members with relevant experience in our corner. It gave them a chance to fulfil some of the objectives from a previous experiment, and gave the rest of us the opportunity to learn about working with RGBs. We also recognized that there was a gap between the coding capabilities of some group members compared to others, and it meant a lot to us to all have a hand in writing code in some capacity. Since Arduino had by now become a fairly comfortable language, this further emphasized the desire to add an extra layer to the project requirements, in that we could all work, code and test together, learning from one another along the way.

Because of the visual appeal and chromatic range offered by RGBs, we were determined from the outset to incorporate them in the output design. We were really taken by the idea of being able to illuminate a friend’s room from a remote location, and it was important that the display interaction feel emotive and intuitive. At first, we imagined this action taking place with hugs and squeezes (by way of a stuffed toy or pillow), sensor-driven to create an effective response in the corresponding display. A light squeeze of a pillow could light up a friend’s bedroom on a dreary and perhaps lonely evening, feeling like a hug and a small gift of an uplifting atmosphere. A hard squeeze, by contrast, might generate a bright and panicked effect in my friend’s room, letting them know I’m feeling anxious.

Knowing that we had our work cut out for us, we made a list of benchmarks on a timeline. We laboured away for 9 hours on Saturday, learning how PubNub worked, and by early evening we were sending switch values through PubNub to Arduino and finally to Processing. That was the big hurdle we had been unsure about, and thanks to Jevi’s skills and clear communication, we were able to build a clear path that everyone could understand and work with. The achievement gave us confidence, and we set about storyboarding an ideal setup: two modes, one using Arduino and another using TouchDesigner. We were very interested in trying out both systems: Arduino because we all knew our way around it a bit, and TouchDesigner for the added benefit of effects, as well as what Jun Li could show us. By the next session, though, when testing the LEDs across the network, we encountered a major issue before even getting that far. To our surprise, the Nano could only power a small portion of each RGB strip (about 5 to 10 of the 50 LEDs). This wasn’t enough for a significant display. We were able to resolve some, but not all, of the issue by using Megas instead.

As the days passed, the stuffed toy or pillow took on various forms, eventually landing on wrist-warmers. We moved in this direction partly because we were apprehensive about the surprise challenges the pressure sensors might present. It seemed logical that the enclosure be easily accessible, and that the sensors have as much contact with the point of pressure as possible. With wrist-warmers, we could control the variable resistance by gripping and relaxing our hands. It felt like a very natural use, and appropriate for the (very sudden) change in season. The fact that wrist-warmers are less common and expected than gloves was a bonus. We eventually settled our design on an ergonomic support pad for typing. There was a more refined simplicity in this concept. No coding language (in the form of squeeze strength or number of squeezes to communicate) was needed; in fact, no thought on the user’s end was required at all. Instead, they would carry on as they normally would, typing away at their work.

It took some time to work out the display for the lights. We had decided on using the two south-facing adjacent walls in the DF studio, modelled as bedroom settings. We became invested in the notion of adjustable displays (and it still seems like a cool idea), where individual strips could be attached at hinges while supporting the LED strip. We envisioned participants configuring the display themselves and hanging it from hooks we’d fashioned in the window sill. Ultimately this plan proved unfeasible and we set it aside as a divergence. We settled on the hexagon-shaped mounts because they made practical sense: they were good housing for the narrowly-spaced LED strips and were less time-consuming to produce. The hexagons also addressed a major issue we had meanwhile run into with the LEDs: even with the use of Megas, we could only power the full strip if the LEDs were lit at 50% brightness, max. Having the lights configured in a cluster meant optimizing the light effect.
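That brightness cap can be sketched like this (pin and LED count assumed; a minimal sketch, not the project's actual code): capping the global brightness keeps the current draw of the full 50-LED strip within what the board could actually supply.

```cpp
#include <Adafruit_NeoPixel.h>

Adafruit_NeoPixel strip(50, 6, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();
  strip.setBrightness(127);  // ~50% cap so all 50 LEDs can stay lit
  strip.fill(strip.Color(255, 255, 255));
  strip.show();
}

void loop() {}
```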
We had opted early on to use ping pong balls as diffusing shells for the bulbs, and had wisely ordered them just in time from Amazon. We went through a lengthy process of making holes in 100 balls, then fastening them to the hexagon rigs.
Meanwhile, we had devised a system for the sensors, fastening them snugly into sewn-felt pouches with velcro openings for easy removal. These were designed to fit in the openings of the wrist-warmers, but after running into some unexpected complications with this system, we placed them instead inside mousepads improvised from balaclavas.

Software Implementation


Arduino + Processing + Pubnub Implementation
For the default implementation, we used two channels to communicate between the rooms: each room published and listened to separate channels corresponding to its room. We had issues with port detection of the Arduino on certain computers, but the root cause was never determined. Once the port had connected and maintained a stable connection, the communication from Arduino to Processing, to PubNub and back could be made.

TD + Arduino Implementation
In this experiment, we brought more possibilities to the project through Jun Li’s experience, challenging ourselves to achieve the same effect with different techniques. In this setup, we used the TCP/IP internet protocol instead of PubNub to send the same data to control the LED lights. After testing with Processing, we found the colours didn’t appear as designed, and we tried and debugged many ways to fix the colour problem. We suspected a hardware issue, and after researching, we realized the model of LED lights was slightly different from the one used in Jevi, Li and Arsalan’s Experiment 2: the red and green channels were switched. After editing the data, the effect worked the same as with PubNub. All the interactive effects and settings were performed in TouchDesigner and sent through TCP/IP to Arduino in real time.
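One way to correct such a red/green swap on the Arduino side, sketched under the assumption that raw colour values are being relayed (the project instead edited the data in TouchDesigner):

```cpp
#include <Adafruit_NeoPixel.h>

Adafruit_NeoPixel strip(50, 6, NEO_GRB + NEO_KHZ800);

// Swap red and green before writing, for strips whose colour order
// differs from what the sender assumes.
void setPixelSwapped(uint16_t i, uint8_t r, uint8_t g, uint8_t b) {
  strip.setPixelColor(i, g, r, b);
}
```

Declaring the strip with the other colour-order constant (NEO_RGB instead of NEO_GRB) lets the library do the same reordering itself.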

Arduino + Processing + Pubnub + TD Implementation
Because of the powerful functions of TouchDesigner, which can easily perform and design a lot of effects, we also tried bringing TD into the PubNub pipeline to meet the project requirement. The workflow became more complicated and difficult in this implementation: we tried two Arduinos on each side, with two serial communications happening on the same side, one for receiving the data from Processing coming from the other side, and another for sending the received data to TD and back to the LED lights. Theoretically and technically it can be done; however, we found it difficult to send and receive data among so many pieces of software, and in the end we didn’t have enough time to achieve the same effect as the two approaches above.

Reflections

It seemed clear from our first brain-storming session that we were going to work well together as a group. We had a diverse set of skills that we were eager to pool together to come up with something especially creative. The challenge inspired us and we were ready to put in the work.

The early success we’d achieved after putting in long hours on the weekend might have given us a false sense of confidence. It put us ahead of the game and made us feel that, since we’d overcome the biggest obstacle (networking from Arduino to Processing to PubNub, end-to-end), the rest would be easy enough to achieve. We set into production of the materials while devising a library of messages that users could communicate to one another by means of pulses on the pressure sensors and lighting. What we failed to foresee was assembly issues with the sensors. We took it for granted that they were made at least a little durably (seeing as they are commonly used in wearables projects), but that turned out to be far from the case. In spite of protecting the soldering with heat-shrunk tubing, encasing the wires with cardboard backing, and harnessing them as securely as possible in the hand-sewn pouches, we went through one sensor after another, the plastic ends shredding with the slightest movement. We didn’t yet know we could repair them on our own (and have yet to try, although it was investigated after we broke the collective bank), resulting in trip after trip to Criterion throughout that snow-filled week. After our early successes, and especially once we saw the extent to which we could actually communicate messages via lighting effects (we had even devised our own list of messages), it was disheartening to have the project fall apart each time we tested, on account of the extreme fragility of the sensors. It was a major factor in pivoting the concept from wrist-warmers to a mousepad (which involved no movement of the sensor), a decision that unfortunately took place too late in the game and didn’t allow us sufficient time for proper testing with the rest of our setup.

 

Image: List of messages derived from a library of Arduino lighting effects.

Hindsight is always 20/20, and there is no way in this case we could have anticipated this problem unless we had researched “How fragile are variable resistor pressure sensors when placed in clothing?”. But we will know to be that specific about troubleshooting and testing well beforehand next time around.
It was overwhelmingly frustrating to have our demo fall apart to the extent it did right before the presentations. We’d had such a strong start, and working on this project had been an invigorating, devoted process for our group. Overall it was a great experience, one that involved a great deal of learning, collaboration and creativity. In spite of falling a little short in the demo, we had achieved some pretty amazing results along the way.

 

References

  1. Shao, Yiyi. “Lovmelovu”. Yiyishao.Org, 2018, https://yiyishao.org/LovMeLovU.html. Accessed 8 Nov 2019.
  2. bitluni’s lab. DIY Ping Pong LED Wall V2.0. 2019, https://www.youtube.com/watch?v=xFh8uiw7UiY. Accessed 8 Nov 2019.
  3. Hartman, Kate, and Nicholas Puckett. “Exp3_Lab2_Arduinotoprocessing_ASCII_3Analogvalues”. 2019.
  4. Hartman, Kate, and Nicholas Puckett. November 5 Videos. 2019, https://canvas.ocadu.ca/courses/30331/pages/november-5-videos. Accessed 8 Nov 2019.
  5. Kac, E (1994). Teleporting An Unknown State. Article. Retrieved from: http://www.ekac.org/teleporting_%20an_unknown_state.html
  6. Rastro, E. (1995). The Trace. Retrieved from: http://www.lozano-hemmer.com/artworks/the_trace.php
  7. Chantre, M. (2014). Miscible. Retrieved from: http://www.manuelchantre.com/miscible/

Experiment 2: Proxemics Study/Interactive Infinity Mirror

Interactive Infinity Mirror

An interactive & LED-Light project that explored ‘Proxemics’
By Arsalan Akhtar, Jevonne Peters, Jun Li

Proxemics – the study of human use of space and the effects that population density has on behaviour, communication, and social interaction.

Abstract

This experiment is an interactive LED-light project exploring and critiquing the concept of proxemics: the study of human use of space and the effects that population density has on behaviour, communication, and social interaction. In our interpretation, we attempt to give a visual representation of various reactions to the “personal space” that humans create around them, in the form of various interactions of light, and to represent the idea of an ideal level of social interaction amongst multiple parties.

The purpose of this study is to visually demonstrate the effect people can have on each other through the use of different colour effects. The goal is to deconstruct the relationship between behaviours and colour, and reshape it into a new form. Throughout this process, the parties control the effects based on the states they are in.

Keywords: Colour, Behaviour, Communications, Visualization, Proxemics, Interaction

Repo: https://github.com/jevi-me/CC19-EXP-2


Table of Contents

1.0 Requirements
2.0 Planning & Context
3.0 Implementation
3.1 Hardware
3.2 Software 
3.2.1 Arduino-Only Implementation
3.2.2 TouchDesigner + Arduino Implementation
4.0 Reflections
5.0 Photos
6.0 References


1.0 Requirements

The goal of this experiment is to creatively use a microcontroller, up to 3 rangefinders, and actuators (any number of individual LEDs, or up to 4 servos) to create a minimum of 3 distinct behaviours in response to environmental conditions.

2.0 Planning & Context

Image: Sketches and brainstorming from our planning phase.

 

In the planning phase, concepts of anxiety, one’s “personal bubble/space”, and ideal desired interactions were examined. We determined that the most effective way to illustrate these changes would be with light, and incorporated the interaction of the primary colours of light (R, G, B) to illustrate this. We used the three distance sensors to capture the locations of three participants within the space of the installation.

We completed planning the final design and concept, and began purchasing materials locally and abroad on October 4th.

2.1 Distances

Four ranges were plotted for each of the three sensors that read in the distances. This is reflective of the personal preferences of the three parties:

  • Too close: this is a distance considered to be too close for comfort. This can vary from person to person, but for our study, we fixed this distance.
  • Comfort zone: the general region of comfort, where one isn’t out of touch, and not too close.
  • Out of touch: so far that interaction is not possible.
  • Ideal: the preferred level or region of interaction.

 

Image: Diagram of our distance ranges for ‘Green’, ‘Red’ and ‘Blue’.

In our code, these were called states:

Out of touch, also called idle -> 0
Comfort zone -> 1
Ideal Zone -> 2
Too close -> 3
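A minimal sketch of that bucketing (the centimetre thresholds are assumptions, as is the nesting of the zones by distance; the real values were tuned per sensor):

```cpp
// Bucket a rangefinder reading (cm) into the four states above.
int stateForDistance(long cm) {
  if (cm > 200) return 0;  // Out of touch / idle
  if (cm > 120) return 1;  // Comfort zone
  if (cm > 60)  return 2;  // Ideal zone
  return 3;                // Too close
}
```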

2.2 Interactions

Based on the state of the system, different behaviours were manifested. For example, if the red and blue are in their respective comfort zones, the two zones meshed to form their additive colour, which is magenta. The table below shows the various interactions that were planned for both implementations.

Interactions, by state:

  • Out of Touch/Idle (0): All parties: colour wipe. Some parties: off (black static). One party: off (black static).
  • Comfort (1): All parties: Sparkle Glow. Some parties: blink in the secondary colour blend. One party: blackout chase of the respective colour.
  • Ideal Zone (2): Rainbow, whether all, some, or one party.
  • Too Close (3): All parties: blink white; if parties remain in the zone, blink orange, and if they stay longer, rapidly blink red.* Some parties or one party: blink in the respective colour.

* The time-based interaction was added from a suggestion from Nick.

The colour wipe when all parties are idle represents a state of receptiveness from everyone. No boundaries are being pushed, but no positive interactions are being made either. The colours cycle through without interacting with each other, forgetting the previous state. If one or more sides are idle and the other side(s) are in different state(s), the idle side goes ‘off’. The ‘off’ state indicates that it is not currently participating in the interaction that is happening; it is ‘Out of Touch’.

Once the comfort zone is entered by a side, that side performs a ‘blackout chase’ of the colour it represents, i.e. the red side will have a red ‘blackout chase’ effect. When two sides enter the comfort zone, their secondary colour is shown. This represents the potential for ideal communication between the parties. Three in the comfort zone results in a Sparkle Glow, a combination of red, green and blue.

The rainbow, a common symbol of happiness and bridging, is used when one or more sides are in the ideal state. When within the installation, the desire is to remain in that state, and hope others can also find their ideal. Once everyone has it, the rainbow effect runs in sync, simulating an ideal flow of information and ideas.

There is, of course, the potential for one to feel overwhelmed by a presence, and be ‘too close’ for comfort. When this happens, the corresponding side(s) flash their colour repeatedly as a warning. If all sides are experiencing this, the additive colour (white) flashes for all sides. If this warning is ignored, the colour changes from white to orange, then to ‘danger’ red, speeding up as the warning continues to be ignored.
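A minimal sketch of that escalation (colours as hex RGB values; the timing thresholds are assumptions):

```cpp
#include <stdint.h>

// Pick the warning colour by how long the 'too close' warning has been
// ignored; in the installation the flash rate speeds up alongside.
uint32_t warningColor(unsigned long ignoredMs) {
  if (ignoredMs < 3000) return 0xFFFFFF;  // white
  if (ignoredMs < 8000) return 0xFF8000;  // orange
  return 0xFF0000;                        // 'danger' red
}
```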

3.0 Implementation

3.1 Hardware

3.1.1 LEDs

In this project, we were required to use individual LEDs. After considering the desired final effect of the project, we quickly abandoned the use of domed single colour LEDs, and decided to use WS2812 Neopixel LEDs. These have a full range of colours and can be individually addressed. This decision came with the challenge of soldering all the individual LEDs. Nonetheless, we were at an advantage as Jun Li had previous experience soldering, and was determined to take on the ambitious task.

As a first step, we measured the box and performed calculations, deducing that 23 LEDs were to be placed on each side, amounting to a total of 92 LEDs in all. Next was to create the LED strips to be placed inside the box. This involved first adding solder to the 6 connections on each LED, cutting the wires to the measured lengths, and soldering on each of the connections. A total of 1104 soldered connections were made by our team, a figure that does not include the several failures that occurred during the process. It was very important to ensure that each connection was sound and functioned impeccably. This was definitely a tough challenge for the team, and we relied heavily on the expertise of Jun Li as guidance, who was aptly nicknamed ‘Solder King’. The entire soldering process took approximately 30 hours over the course of a few days.

3.1.2 Body

The body of the installation was a shadow box spray painted black. A hole was cut on the side to feed the wires from the LEDs to the Arduino. A mirror was placed at the base, and the LEDs around the inside of the frame. To cover the frame, we cut an acrylic sheet to size, and added a layer of reflective film to it. Arsalan and Jun Li used their knowledge of fabrication and workshopping to make precise cuts and measurements for the holes and the acrylic. Adding the film was a group effort as a smooth and reflective surface was key to creating the desired effect.

Finally, the three rangefinders were hidden under the lip of the shadow box, and the hidden wires fed to the breadboard at the back of the installation. The front and back of the installation were controlled by the single rangefinder located at the front, and the sides were controlled by their attached rangefinders.


3.2 Software

3.2.1 Arduino-Only Implementation

The Arduino-only implementation was built on the Adafruit_NeoPixel and WS2812FX libraries. Several other libraries were tested, including FastLED, Neo Patterns, and NeoPixel Painter, but ultimately WS2812FX was selected for its simplicity and wide range of built-in effects. The loop of the code ran the service function of the WS2812FX library, and then the distance readings of the rangefinders were taken. If the difference between the current and last measured distance was above the noise threshold, the new state of the section(s) that rangefinder controlled was determined, and the function to display that light effect was activated.
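A minimal sketch of that loop (the pins, LED count, and noise threshold are assumptions; the service()/setMode() calls are WS2812FX's actual API):

```cpp
#include <WS2812FX.h>

const int LED_COUNT = 92, LED_PIN = 6;
const int TRIG_PIN = 7, ECHO_PIN = 8;  // one of the three rangefinders
const long NOISE_CM = 5;               // assumed noise threshold

WS2812FX fx = WS2812FX(LED_COUNT, LED_PIN, NEO_GRB + NEO_KHZ800);
long lastDistance = 0;

long readRangefinderCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  return pulseIn(ECHO_PIN, HIGH, 30000) / 58;  // HC-SR04: ~58 µs per cm
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  fx.init();
  fx.setBrightness(100);
  fx.setMode(FX_MODE_COLOR_WIPE);  // idle state: colour wipe
  fx.start();
}

void loop() {
  fx.service();  // keeps the current effect animating
  long d = readRangefinderCm();
  if (abs(d - lastDistance) > NOISE_CM) {
    lastDistance = d;
    // Determine the new state for the section this rangefinder controls
    // and switch effects, e.g. fx.setMode(FX_MODE_RAINBOW_CYCLE);
  }
}
```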

3.2.2 TouchDesigner Implementation

The effects were replicated using a TouchDesigner + Arduino implementation. In this setup, Arduino was used as a communication tool and bridge between TouchDesigner and the WS2812FX LEDs. All the interactive effects and settings were performed in TouchDesigner, and sent to the LED strip through Arduino in real-time.

4.0 Reflections

Implementation Explorations and Outcomes

The light effects in the TouchDesigner + Arduino implementation proved easier to create, as TouchDesigner is node-based and artist-friendly software. The transitions were smoother and more visually appealing. However, this came with several drawbacks: neither the Arduino Nano nor the Uno was powerful enough to provide stable performance, as the two-way real-time communication required the bridge (the Arduino) to send and receive a large amount of data per second.

There were two possible solutions to this problem: (1) increasing the processing power by using an Arduino Mega, and (2) lowering the frequency of data transfer which would affect the real-time interaction and cause a delay. These two solutions were combined, and the parameters adjusted optimally to give the best performance, and mimic the effect in the Arduino-Only implementation.

The Arduino-only implementation had similar issues with both processing and electrical power. To solve this, the brightness was lowered, and the rangefinder values were read in only every 5 seconds, as the readings were found to be the bottleneck in the code. This unfortunately made the installation less real-time, but the benefit was an increase in reliability and performance.
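Continuing the sketch above, the 5-second throttle can look like this (updateStatesFromRangefinders() is a hypothetical helper standing in for the three-sensor read):

```cpp
unsigned long lastRead = 0;
const unsigned long READ_INTERVAL_MS = 5000;

void loop() {
  fx.service();  // LED effects keep animating on every pass
  if (millis() - lastRead >= READ_INTERVAL_MS) {
    lastRead = millis();
    updateStatesFromRangefinders();  // blocking pulseIn() calls happen here
  }
}
```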

The only major obstacles during the hardware build were the time required to complete it and the skill required to ensure that the connections were true. Although buying an LED strip would achieve the same effect and solve both of these obstacles, it would have violated the limitations of this experiment.

Concluding Thoughts

This artistic experiment was meant to allow participants to critically consider ideas of personal space in the context of proxemic behaviour. In a related study, “Proxemic Behavior: A Study of Extrusion”, cultural group and sex were held constant while the interviewer moved from the original comfortable distance established by the subject. In all cases within that study, the subject re-established a new comfortable distance, which in our study we called the “ideal” zone. The study surmised that this new state of comfort was a compromise between the one originally chosen and the distance assumed by the interviewer. In our case, a retreat from the ideal resulted in a warning signal, and a new state of comfort was not sought.

Technology-based new media art is one of the forerunners of future art, allowing the creation of collaborative and interactive artworks. This experiment serves as an example.

5.0 Photos

6.0 References

  1. https://www.tandfonline.com/doi/abs/10.1080/00224545.1991.9924653
  2. https://www.youtube.com/watch?v=sAPGw0SD1DE
  3. https://www.youtube.com/watch?v=b2bvWArORSc

Experiment 1: Digital Interactive Sound Bath

Abstract

Our project is a digitized version of the experience of a sound bath. The objective was the same: to explore the ancient stress-relieving and sound-healing practice. However, we sought to achieve this using laptops and phones, which are often associated with being the cause of stress and anxiety. Our experiment made use of motion detection, WEBGL animation, and sound detection and emission.

Repo: https://github.com/jevi-me/CC19-EXP-1

Demo: https://jevi.app/CC19-EXP-1/


Table of Contents

1.0 Requirements
2.0 Planning & Context
3.0 Implementation
3.1 Software & Elements
3.1.1 Libraries & Code Design
3.1.2 Sound Files
3.2 Hardware
4.0 Reflections
5.0 Photos
6.0 References


1.0 Requirements

The goal of this experiment is to create an interactive experience expandable to 20 screens.

2.0 Planning & Context

 

Image: Schedule.

Stress is something that affects many of us. The constant hustle and bustle of work deadlines, fast-paced city life, and overachievement may push you to the edge, and most could benefit from self-care, meditation, relaxation and a pause from busy life. Enter sound baths.

Sound baths use music for healing and relaxation. A sound bath is defined as an immersion in sound frequency that cleans the soul (McDonough). From Tibetan singing bowls to Aboriginal didgeridoos (Dellert), music has long been used for therapeutic purposes. The ancient Greeks used sound vibrations to aid digestion, treat mental disturbance and induce sleep, and Aristotle’s ‘De Anima’ describes how flute music can purify the soul.

Since the late 19th century, researchers have focused on the correlation between sound and healing. These studies found that music could lower blood pressure, decrease pulse rate and assist the parasympathetic nervous system.

So, essentially a sound bath is a meditation class that aims to guide you into a deep meditative state while you are enveloped in ambient sounds.

Image: Brainstorming.

Sound baths use repetitive notes at different frequencies to help bring your focus away from your thoughts. These sounds are generally created with crystal bowls, cymbals and gongs. Similar to a yoga session, the instructor of a sound bath creates the flow of a sound bath. Each instrument creates a different frequency that vibrates in your body and helps guide you to the meditative and restorative state. Some people believe bowls made from certain types of crystals and gems can channel different restorative properties.

Our project is a digitized version of the experience of a sound bath. The objective was the same – to explore the ancient stress-relieving and sound healing practice. However we sought to achieve this using laptops and phones, which are often associated with being the cause of stress and anxiety . We allowed those experiencing it a moment to pause, reflect and reconnect with their inner soul.

Image: Requirements.

The concept of our experiment was to let the user interact with 4 primary zones in order to experience them:

  •     Zone A – Wild Forest – Green
  •     Zone B – Ocean Escape – Blue
  •     Zone C – Zen Mode – Pink
  •     Zone D – Elements of life – Purple
  •     Projections – a) Visually soothing abstract graphics. b) Life quotes
Image: Zone Maps.

We carefully segregated the different experiences into the four corners of the space. Zone A consisted of motion-sensitive sounds of rain, chirping and crickets, along with the motion-sensitive zonal colour of green. Similarly, Zone B consisted of motion-sensitive seascape sounds like ocean waves and seagulls, along with ambient lighting in blue tones. Zone C, being the zen zone, had meditation tunes as well as flute and bell melodies triggered by people passing by, and ambient lighting with a pink touch. The final zone, D, represented elemental sounds such as rain, fire and earth, which would be triggered by motion; however, we ultimately opted for silence within that zone, providing a brief audio escape. The colours were drawn together with the use of colour-cycling lamps near the floor.

The experience also consisted of projections: eye-pleasing visualizations projected onto the ceiling. These projections were volume sensitive, so based on the interacting audience, the visualizations would become brighter and more prominent. To go along with the theme of a digital sound bath, we also projected quotations about life which would instill faith and provide inspiration to the users reading them.

Once all these elements came together, the space became a digital sound bath wherein users could come and relax their minds. The experience took place in a dark space where only upon detecting motion would the room light up with different colours and project different ambient sounds. The result was a soothing and mind-relaxing experience for the audience.

3.0 Implementation

3.1 Software & Elements

3.1.1 Libraries & Code Design

For the zones, the library Vida was used for motion detection. The light emitted was a simple rectangle that slowly fades when motion is detected. The volume of the audio files mimics this as well.

WEBGL was used to generate the calming projection which was a slowly rotating cosine and sine animated plot in 3D suing spheres. It was sound activated and glowed brighter when the level increased.

The life quotes used an array and a set interval to redraw new quotations.

The code is designed to be centralised, so although there are 14 unique programs running, they share the base code where possible. For efficiency of set up, a home page was created with buttons for each program.

3.1.2 Sound Files

For sounds, the following were tested, but only the bold were implemented as they were the most audibly pleasing combination. These high quality sounds were purchased and licensed for use.

pink-zen(C): gentle-wind.wav, ambience.wav, bells.wav

purple-elements(D): wind.wav, rain-storm.wav, thunder.wav

blue-ocean(B): humpback-whales.wav, sea-waves.wav, california-gull.wav

green-forest(A): thrush-bird.wav, robin.wav, forest-leaves.wav, cricket.wav

3.2 Hardware
Some of the hardware used (plain dim LED lamp, glass jar, wax paper).

We used round table lamps on the floor with remote-controlled LEDs that cycled through the rainbow. Two plain dim LED lamps were used for safety in dark areas. Two projectors were used: one to project the life sayings onto the screen, and another to project the soothing animation onto the ceiling. Glass jars wrapped with decorated wax paper held the phones as they lit up. The wax paper was chosen to coincide with each of the zone themes, and the glass jars were tall enough to hide the majority of the screen and provide a soft glow, yet short enough to keep the camera exposed for motion detection. An iPad was used at the entrance to provide context for the space. The space was decorated to simulate a sound bath.

4.0 Reflections

When approaching this topic, our group set out to explore a solution where participants would not have to physically touch their phones, but instead have them as part of an experience they walk away from, while the phones aid in relaxing themselves and others. While meditatively walking around the space, their motion acted as the trigger for the light and soundscape. We noticed some participants becoming enveloped in the experience, lying down as one would in a traditional sound bath to absorb the experience with their senses. Others, entranced by the lights and affirmations, were curious about what different pleasing sounds and colours could be produced. Due to the amount of hardware and the number of programs involved, a lot of setup was required before the room could be entered. An additional complication is that this type of setup is one where the phones are accessible to the creators, and not something the attendees bring with them into the experience.

The room initially requested was RHA 318, a smaller and more intimate space that would have allowed for more interaction between the lights by having them closer together, and a better layout for the projections. That room had recently gone out of service, and with the larger room, RHA 511, some of that interaction was diluted, as pointed out in the post-discussion.

Additionally, despite being told that they only needed to walk around to trigger the sound, many participants unfamiliar with the concept of a sound bath still tried to manipulate the devices or holders, or to use sound to trigger the effects. This is likely due to memories of the previous tactile experiments, where manipulation of the elements within the experiment produced positive results.

5.0 Photos

6.0 References

https://www.allure.com/story/sound-bath-meditation-benefits

https://www.elitedaily.com/p/what-is-a-sound-bath-5-thing-to-know-before-you-bathe-in-the-sound-2975477

https://articles.aplus.com/wtf-is-it-and-should-you-try-it/what-are-sound-baths-benefits?no_monetization=true

https://www.washingtonpost.com/lifestyle/wellness/tune-in-and-chill-out-what-are-sound-baths-and-why-you-should-try-one/2017/05/02/e74c697c-2b7c-11e7-a616-d7c8a68c1a66_story.html