Digi-Cart 2.0

Experiment 5: Katlin Walsh

Project Description 

While interactive media content displayed within galleries has been updated within the last 5-10 years, presentation formats for tradeshows have not. Digi-Cart brings an adaptive presentation style to the classic concept of a tool cart. Robust building materials and pegboard construction allow corporations to adapt their layout and presentation style to reflect their current corporate event.

Digi-Cart features a basic controller layout which can be overlaid with a company’s vinyl poster cutout to create an interactive presentation that can be facilitated by an expert or self-guided. Corporations are encouraged to update their digital materials and create animated graphics to capture audience attention.

OCULAR

ocular-copy

Ocular
Animated Robotics – Interactive Installation

By Jignesh Gharat

Project Description

An animated robotic object whose life-like spectrum of motion brings emotive behaviour into the physical dimension: an object whose movement has living qualities such as sentiment, emotion, and awareness, revealing a complex inner state, expressions, and behavioural patterns.

He is excited to peek outside the box and explore his surroundings, but as soon as he sees a person nearby he panics and hides back in the box, as if he is shy or feels unsafe. He doesn’t like attention but enjoys staring at others.

Design Process & Aesthetics

It is an attempt to create an artificial personality. I wanted to work on an installation that encourages participation rather than spectatorship, so I wanted a physical installation that lets people experience familiar objects and interactions in refreshingly humorous ways.

The exploration started with surveying objects and living organisms that could be part of the installation, where I could implement curious interactions and moments of surprise and unpredictability. Gophers, crabs, snails, and robots were a few of the candidates. I finally settled on the periscope as an object that can have behaviour and emotions, and a perfect object to play hide and seek with visitors.

What is Periscope?
An apparatus consisting of a tube attached to a set of mirrors or prisms, by which an observer (typically in a submerged submarine or behind a high obstacle) can see things that are otherwise out of sight. I started thinking of ideas, object behaviors as well as setup of the installation to come up with engaging and playful interactions.

Step 1 – Designing the bot

The model was developed in Rhino, keeping in mind the moving parts: the rotating head and the body, which is lifted from the bottom.

ocular-copy-81

ocular-copy-5

Step 2 – The mechanism.

Testing the stepper motor (28BYJ-48 5V 4-phase DC gear stepper motor + ULN2003 driver board, for the head) and the servos (SG90 micro servo for the head and 3001 HB analog servo for the arm). The stepper motor’s RPM was too low to give the bot the required quality and reflexes, so I decided to go with the servo motor, which had better torque.

ocular-copy-4

The 3D-printed body was too heavy for the analog servo motor to lift, so I finally decided to model the body in paper to reduce the weight and the load on the motor. Surface development was done in AutoCAD, producing three to four different options based on the mechanism and aesthetics, of which I decided to work on the design shown in the image below. The arm was laser cut in acrylic, and two options were made to reduce the friction between the paper and the lifting arm’s contact surface.

ocular-copy-6

 

How does it Work?

A micro servo motor controls the oscillation of the head. A distance sensor at the front of the wooden box controls motor 2. When a visitor comes within the distance sensor’s set threshold, motor 2 pulls the lever arm down and motor 1 (the head) stops rotating.
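The hide-and-seek behaviour above boils down to a simple threshold rule. Here is a minimal plain-C++ sketch of that logic; the 40 cm threshold is an assumption for illustration, not the exhibited value:

```cpp
#include <utility>

// Hypothetical sketch of Ocular's control logic (the threshold value
// is an assumption). Returns {headSpinning, armLifted} for a given
// distance reading in centimetres.
std::pair<bool, bool> ocularState(int distanceCm, int thresholdCm = 40) {
    bool visitorNear = distanceCm < thresholdCm;
    // When a visitor is in range, the arm drops (the bot hides) and the
    // head stops oscillating; otherwise the head keeps scanning around.
    bool headSpinning = !visitorNear;
    bool armLifted = !visitorNear;
    return {headSpinning, armLifted};
}
```

In the actual installation the two outputs drive the two motors directly, so the whole personality emerges from this one comparison.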

ocular-copy-3

The installation is minimal and simple, with just two materials, wood and paper, which give the piece a clean finish. The head looks like a lens, with a black sheet and white border, as if the sensors that control the oscillation were installed inside the head. In further explorations and practice I will make sure the sensors are not directly visible to visitors, as this sometimes leads to visitors interacting with the sensor rather than actually experiencing the installation.

ocular_jigneshgharat_01-jpg ocular_jigneshgharat_02

 

Code :  https://github.com/jigneshgharat/Ocular

 

Project Context

Opto-Isolator (2007, Golan Levin with Greg Baltus), an interactive eye robot, inverts the condition of spectatorship by exploring the questions: “What if artworks could know how we were looking at them? And, given this knowledge, how might they respond to us?” The sculpture presents a solitary mechatronic blinking eye, at human scale, which responds to the gaze of visitors with a variety of psychosocial eye-contact behaviors that are at once familiar and unnerving.

The Illusion of Life in Robotic Characters – Principles of Animation Applied to Robot Animation. Robot animation is a new form of animation with significant impact on human-robot interaction; the accompanying video relates to and extends a paper the authors published at the ACM/IEEE Human-Robot Interaction Conference.

Exhibition Reflection

Observations made and recorded at the Open Show (Graduate Gallery and Centre for Emerging Artists & Designers) provided me with good insights and learnings about the changes needed to develop the concept further. It was fun to see people interacting and playing with a nonliving object as if it had emotions and feelings, and to watch the way Ocular reacted to viewers’ actions. The interaction was meant to happen only from the front, but because the wall card was placed at the side of the installation on a high plinth, people tended to read it and start interacting from the side, so they did not experience the installation as intended; most of them, however, figured out how it worked as they read the wall card.
Some interesting comments from the visitors:

  • The installation is making everyone smile.
  • I have 3 cats and all 3 cats will go crazy if they see this.
  • One visitor said all the guys react the same way Ocular did, so she didn’t want to go near him.
  • What are the next steps?
  • What was the inspiration?
  • How does this work?
  • Why does he ignore me?

A visitor (Steventon, Mike) referred to Norman White’s interactive robotic project, The Helpless Robot.

Observations and visitors reactions

ocular-copy-7


Interactive Canvas – Bring It To Life

interactive-canvas

 

Project Overview

 

Today there is intense competition, and advertisers are continuously looking for new interactive and ambient mediums for advertising. This project explores a more traditional yet interactive approach, using an ordinary canvas to bring three abstract icons to life upon touch, with the idea of how a healing hand could protect wildlife such as a butterfly and a tree. Touch isn’t limited to flat, shiny, and expensive digital screens; it can be brought to any tactile surface, as in this project. The use of conductive ink and animation lets visitors experience and feel the dotted paper and watch flat objects change into animation. Moreover, such an advertising medium is economical and could be placed in almost any venue.

This object is part of my exploration of conductive materials and how they could be adapted into advertising mediums that tell interactive stories.

thumbnail_img20191204200635

58231504-0ed3-4d5c-b18f-324d33eafd72

thumbnail_img20191204165833

Ideation:

My idea behind this project was to explore ways of bringing art pieces to life in interactive and experiential ways. Although there are many ways to do this today, such as VR and AR, I was keen to discover something more traditional and familiar. Thus, for this project, I chose to explore conductive ink and how capacitive sensors could make a tactile surface interactive.

As a result, I came across a variety of work within this field. This included work from London’s Retail Design Expo [1], which used conductive ink to draw flat icons for storytelling. Another interesting concept, an interactive wallpaper, was introduced by High-Low Tech [4]; here they added light and communication within the canvas. Similarly, the Dublin Interactive Wall [2] explored conductive ink with advanced animation. In addition, conductive ink has also been used in combination with holograms and AR, such as the work from Raonsquare [3], an agency in Seoul.

 

Design Concept

To bring commonly known symbols and icons to life, I chose a hand, a butterfly, and a tree. The story is about how energy from a hand can trigger life in the butterfly and the tree. It is inspired by the BBC Earth documentary series, where human intervention is shown impacting wildlife.

3ed0b590-a5d6-4e99-835a-2d0cbfc5f54c

5193d092-cead-4c68-891a-ee1b5b8a6116

f76cd67c-6779-4d99-b4a7-e75bf3fcce24

fd7cf8b6-9e95-495b-9b1b-3c743a40a1fc

Animation

To trigger an animation for each of the three objects, I made a straight path animation using three colors for each animation.

capture3 capture4 capture-2

 

Conductive Ink

I used a 12 ml pot of conductive ink to trace my sketch onto the canvas and let it dry for an hour, as the ink is a better conductor when fully dry.

 

4e5ca389-542b-4582-b1e6-63d007b56fe0

72eff6f8-4d9f-43f5-b198-d6338b5d31f1

 

Software

Arduino

I downloaded the capacitive sensor library for Arduino to begin testing the code. I started by putting some conductive ink on a sample paper to check the values on the console and gauge what values are produced upon touch.

I used three 1 MΩ resistors placed between the receive pin and the send pin (pin 4), which allowed the capacitive sensing to work. I learned that when the conductive ink was touched, values jumped above 1000.
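The touch decision itself is just a threshold on the sensor reading; averaging a few samples makes it less jumpy. A minimal plain-C++ sketch of that logic (the smoothing window is my own assumption; the 1000 threshold is from the testing above):

```cpp
#include <vector>

// Sketch of the touch-detection logic: average a handful of capacitive
// readings and compare against the threshold observed during testing.
bool isTouched(const std::vector<long>& readings, long threshold = 1000) {
    if (readings.empty()) return false;
    long sum = 0;
    for (long r : readings) sum += r;   // smooth out single-sample spikes
    return (sum / static_cast<long>(readings.size())) > threshold;
}
```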

thumbnail_20191209_165143

thumbnail_20191209_165153

Processing

Using the serial data from Arduino, I then wrote ‘if’/‘else’ statements (with ‘and’ conditions) in Processing to run a movie for each sensor when its value rises above 1000. A visitor presses and releases any inked area to play its animation.
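The selection logic can be re-sketched in plain C++ as follows (the actual sketch is in Processing; the icon ordering here is an assumption for illustration):

```cpp
// Each of the three sensors triggers its own animation when its value
// exceeds 1000. Returns the movie index to play (0 = hand, 1 = butterfly,
// 2 = tree, an assumed ordering), or -1 when nothing is touched.
int movieToPlay(long hand, long butterfly, long tree, long threshold = 1000) {
    if (hand > threshold) return 0;
    if (butterfly > threshold) return 1;
    if (tree > threshold) return 2;
    return -1;   // no sensor above threshold: keep showing the still canvas
}
```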

8403e561-2e7b-4347-a0df-2424e0c55144

 

Production and Bringing Everything Together

On the flip side of the canvas, I attached all the wires which made electronics visible.

 

58231504-0ed3-4d5c-b18f-324d33eafd72

cf7a1639-5c33-4dd6-8ee4-68c41fdfb417

935c0e62-ab5c-4369-8d0e-c3bfbf7a0179

Installation

thumbnail_img20191204170350

 

The USB extender helped hide the wires. However, it would have helped further to use an HDMI extender and an overhead projector, so the projector could be hidden and the laptop moved to a different room.

 

Reflection:

The canvas received appreciation: art is usually not to be touched, but accessing art through a direct tap on the canvas brought new ideas to life. A few artists who interacted with the canvas seemed delighted by the idea of using conductive ink on their own canvases.

Moreover, during the critique various thinking points were suggested which helped improve the project. These included making the narrative stronger, mounting the canvas at a greater height, hiding most of the wires, and exploring rear projection.

Supplies

  1. 18”x24” Canvas
  2. Conductive Ink
  3. Arduino Micro
  4. Pico Projector
  5. Jumper Wires
  6. 3x Resistor 1.0M Ohm

 

Large-Scale Adaptation

This technique could be used at conferences as a backdrop to engage visitors, telling stories about products, in interactive exhibitions, and more.

 

References

 

  1. Ayres, C. (2015). How Dalziel and Pow Realized This Awesome Interactive Touch Wall. Retrieved from https://www.core77.com/posts/35697/How-Dalziel-and-Pow-Realized-This-Awesome-Interactive-Touch-Wall

  2. Studio, L. (March 2019). Dublin Interactive Mural. Retrieved from https://lightscape.ie/dublin-interactive-mural-interactive-projection-mapping-motion-design

  3. Raonsquare. (January 2019). 365 Safe Town (Korea Youth Safety Experience Center). Retrieved from http://raonsquare.com/

  4. High-Low Tech Group. Living Wall. Retrieved from http://highlowtech.org/?p=27

 

 

GitHub

https://github.com/arsalan-akhtar-digtal/Experiment5

 

Invisible ‘Kelk’

By
Arshia Sobhan Sarbandi

 

Invisible ‘Kelk’* is an interactive installation inspired by the transformation of Persian script, from calligraphy to digital typefaces that almost all the current publications are based on. This project is a visual extension of my previous project, an exploration to design an interface to interact with Persian calligraphy.

*’Kelk’ is the Persian word for calligraphy reed.

From Calligraphy to Typography

Reza Abedini is one of the most famous Iranian graphic designers, whose main source of inspiration is visual arts and Persian calligraphy. What is original about his work is that he was the first to recognize the creative potential of Persian calligraphy and to transform it into contemporary graphic typography.[1][4] In an interview, Abedini says: “After many years, because we have repeatedly seen and got used to typography in newspapers, it has become the reference of readability, and not calligraphy anymore. And now, everything written in that form is considered readable, which is one hundred percent wrong in my opinion.”[2]

 

Left: Vaqaye-Ettefaghiye, the second Iranian newspaper published in 1851, written in Nastaliq script (image: Wikipedia)
Right: Hamshahri, one of the current newspapers in Iran, using glyph-based typography (image: hamshahrionline.ir)

 

Abedini argues that we have lost the potential of Persian calligraphy as a result of adapting to a technology that was not created for Persian script: movable type.[3] Below, you can see Abedini’s original words written in calligraphy (Nastaliq, the prominent style in Persian calligraphy) and in typography, using one of the most common typefaces currently found in books and newspapers:

Despite the obvious visual difference between these two, the important fact is that both of these writings are completely readable for an Iranian person.

Designing the Visuals

Similar to my previous project, there are two different things happening when pushing or pulling the hanging piece of fabric – the canvas – from its state of balance. All the words from the short paragraph of what Abedini says about calligraphy and typography are scattered on the canvas in a random pattern that refreshes every minute. You see the same words on both sides, in calligraphy and typography.
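The scatter-and-refresh behaviour can be thought of as a deterministic layout function: reseeding once a minute produces a fresh pattern, while the same seed reproduces identical positions, which is what keeps the two faces of the canvas in sync. This is an illustrative plain-C++ sketch, not the project’s actual code; the simple LCG random generator, the unit canvas, and the fixed tilt angle are all my assumptions:

```cpp
#include <cstdint>
#include <vector>

struct WordPos { float x, y, angle; };

// Generate a reproducible random layout for `wordCount` words.
// Same seed -> same layout; a new seed each minute -> a new pattern.
std::vector<WordPos> scatterWords(int wordCount, uint32_t seed,
                                  float width = 1.0f, float height = 1.0f) {
    std::vector<WordPos> layout;
    uint32_t state = seed;
    auto next = [&state]() {                  // minimal LCG, range [0, 1)
        state = state * 1664525u + 1013904223u;
        return (state >> 8) / 16777216.0f;
    };
    for (int i = 0; i < wordCount; ++i)
        layout.push_back({next() * width, next() * height,
                          -0.4f});            // constant calligraphic tilt
    return layout;
}
```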

The composition of the calligraphic words is inspired by an important style in Nastaliq script called Siah-Mashgh. This style is usually defined by tilted words written on multiple baselines all over the canvas.

 

Calligraphy pieces by Mirza Gholamreza Isfahani in Siah-Mashgh style

 

What I found really interesting is that although the words are randomly positioned in my project, and the random pattern changes every minute, the result retains its visual identity in terms of calligraphic style. The words are in harmony in all the different random positions, which, in my opinion, is the result of the great visual potential of Persian calligraphy.

 

Sample screenshots of how calligraphy words appear in different random patterns

invisible-kelk-2

 

Technical Details

Almost all the technical details are the same as the previous project, except that I used a slightly larger fabric in the exhibition and a cleaner physical setup. A larger piece of hanging fabric essentially results in slower movements of the fabric in the resting position and during the interaction, which I found more suitable regarding the overall experience I expected.

Exhibition Reflection

From observing and talking with people interacting with the project, two major points emerged. The first obvious fact was that, when confronting the hanging fabric, many people hesitated to physically interact with it; even after being told that it was OK to touch the fabric, they touched it very gently until they were assured they were supposed to ‘push’ it. Visitors were also much less likely to pull the fabric than to push it. However, after getting comfortable with the interaction, they usually spent several minutes with the piece and found it pleasing.

One possible solution to the hesitation could be installing the piece where people have to push the fabric to pass through it. Another could be adding some airflow in the environment that makes the fabric move slightly in both directions from its resting state, providing a clue that moving the fabric in the two directions produces different visual feedback.

Code Repository

https://github.com/arshsob/Experiment5

References

1. https://www.mashallahnews.com/reza-abedini-persian-calligraphy/
2. http://honargardi.com/portfolio/%D8%B1%D8%B6%D8%A7-%D8%B9%D8%A7%D8%A8%D8%AF%DB%8C%D9%86%DB%8C/
3. https://www.youtube.com/watch?v=XfI3grQDVyY
4. https://kabk.github.io/go-theses-18-saber-javanmard-gain-loss/thoughtsChapters.html

Antimaterial

img_8987

By Jessie, Liam and Masha

Project Description: 

Antimaterial is an interactive installation that explores abiogenesis and kinesis to articulate a relationship between the human and nonhuman worlds. Just as we are able to affect our surroundings, the ability of magnetic materials to move matter indicated the presence of life to pre-Socratic philosophers. Not only were the rocks beneath our feet essential catalysts for life, but microbial life helped give birth to the minerals we know and depend on today. In Antimaterial, these concepts are explored as primitive life emerges within the topography of the material in response to human proximity, demonstrating a connection between animate and inanimate matter.

Project Context:

Drawn to the freedom and possibility of exploration, we all agreed from the start of the project to use a creative material to make a tangible, experimental interactive art piece. Given some team members’ previous experience using water in their personal projects, we thought of water as a very versatile medium due to its free-flowing form. Using a solenoid to tap the water and create ripples, or using speakers in water to visualize sound waves, were some of our many initial ideas. However, since water is a difficult medium to control the form of, we were worried the final piece would end up looking like a jacuzzi, which led us to fine-tune our ideas further.

After several brainstorming sessions, we came up with the idea of mixing water with a magnetic material to magnetize it, which would also give us more control over its form. During our research, we found quite a few interesting projects that guided us in new directions. Sachiko Kodama is a Japanese artist who has dedicated the majority of her art practice to installations using ferrofluid and kinetic sculpture. By combining ferrofluid, which can be a messy medium at times, with kinetic sculpture, which is inherently neatly structured, her works create order and harmony amongst chaos and come across as intriguing as well as energetic.

j4

Sachiko Kodama Studio Website

Inspired by this harmony generated by juxtaposing structure and chaos through ferrofluid and kinetic sculpture, we wished to demonstrate it with ferrofluid in our own practice for this project. We purchased a bag of iron(III) oxide and started playing with it, mixing it with different types of solvents including motor oil and washing-up liquid, while researching how it has been incorporated into other artists’ work.

 

 

One use of iron(III) oxide that drew our attention was to first mix it with oil and then encase the oily mixture in a container filled with water. For example, the project Ferrolic by Zelf Koelman exploits the immiscibility of oil and water to create versatile movement of ferrofluid within water, using magnets behind the display to create different patterns under different modes of operation.

 

We wanted the magnetic material to generate different patterns when visitors interact with it. The mechanism behind the project SnOil inspired us to place an array of electromagnets behind the magnetic material, with different patterns generated by activating electromagnets at certain positions during visitors’ interactions.

 

However, when we talked to Kate and Nick during the proposal meeting, they shared their concerns about whether we could manage an array of more than 10 electromagnets within the two-week production period, as well as about the cleanliness of the piece on display, since ferrofluid tends to get messy. Having re-evaluated these essential suggestions, we decided to abandon the ferrofluid idea and use only iron(III) oxide powder, as it generates a fur-like texture when magnetized and still creates a very aesthetically intriguing look. For example, the following project uses furry pom-poms as part of the interactive piece to produce a very engaging experience.

 

We were also inspired by the mechanism behind this pom-pom piece, which uses servos and turns circular movement into linear movement, something we later used in our project to simplify the movement of the magnets.

 

Process:

This project underwent a series of experiments, evaluations, and adaptations. Having decided to explore a creative material for this project, we chose to experiment with magnets. We first made electromagnets by wrapping conductive wire around a core, hoping to create patterns in the magnetic material by activating an array of electromagnets; the patterns would change in reaction to the electromagnets switching on and off.

 

However, the magnetism of an electromagnet of this kind is not strong enough, and it generates a lot of heat when activated for a longer period. We worried that they would cause safety issues during the open show due to overheating, and that the visual effect might not be desirable because of the weak magnetism. Eventually, we decided to use permanent magnets instead. This change led us to rethink the magnets’ behaviour, since they cannot be turned on and off and are always magnetic.

Therefore, instead of activating and deactivating magnets, we decided to change their position through movement, actuated by servo motors driving a slider.

It turned out the micro servo motors we had were too weak: one slider weighs around 6 kg, and a micro servo can only move up to about 1.5 kg. We tried different solutions. First, we changed the micro servo to a more powerful servo, the FS5106R, which can handle up to 6 kg. Second, we mounted the slider vertically instead of horizontally, using gravity to our advantage, and attached the slider to the servo with a piece of fishing wire.

 

 

 

However, the slider was still too heavy, especially now that all the weight rested on the servo motor. The fishing wire also became less durable after the servo ran for a longer period. Finally we landed on the idea of laying the slider horizontally and using gears to achieve the linear movement of the magnets through the rotation of the servos.

c1660601_main-1_1 lasercut-01

 

Having done that, the mechanism still seemed inadequate, because the full-rotation servo did not have enough torque to move the gear plus the slider. We again had to change to an even more powerful motor, which led us to stepper motors. We also favoured stepper motors over servos because they are simpler to code, counting steps instead of time.
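The step-counting advantage can be sketched as a small calculation: with a pinion gear on the stepper driving the slider, linear travel maps directly to a step count, with no timing involved. This is an illustrative plain-C++ sketch; the motor resolution and pinion size in the test are assumptions, not our exact hardware:

```cpp
#include <cmath>

// Convert a desired linear travel of the slider into a stepper step count.
// One full revolution of a pinion of diameter d moves the slider pi * d mm.
long stepsForTravel(double travelMm, double pinionDiameterMm, int stepsPerRev) {
    const double kPi = 3.14159265358979323846;
    double mmPerRev = kPi * pinionDiameterMm;   // slider travel per full turn
    return std::lround(travelMm / mmPerRev * stepsPerRev);
}
```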

 

With all the underlying mechanisms working, it was time to put everything together and begin production. We prototyped the piece in 3D software and made sure it would all come together in reality.

screen-shot-2019-11-25-at-4-28-38-pm

We started building the box with the help and guidance of a friend who is a professional builder.

 

Finally, this is what our piece looks like.

j1

 

Code and Circuit:

https://github.com/jessiez0810/exp5/blob/master/nptest_2_with_sensor.ino

screen-shot-2019-12-08-at-2-06-57-pm

Reflection:

During the exhibition, the piece successfully created a sense of mystery and made many visitors to the open show wonder what it was and how we did it. During the critique session, Nick suggested placing gloves beside the piece so people would know they could touch it, maximizing the level of interaction. However, after discussing it within our group, we agreed the gloves might interfere with the eeriness the piece gave off and wouldn’t fit its underlying tone. The gloves would also create a barrier to feeling the texture of the powder, which is an essential part of the experience: finding out what this alien material is through every means of exploration.

To solve this issue, we decided to keep paper towels with us; after people touched the piece, we would hand them paper towels to wipe the powder off. If they didn’t ask us, however, we wouldn’t say anything about touching. As expected, many people were tempted to touch the magnetic powder on display, and we allowed them to if they asked. Interestingly, the seemingly dirty powder didn’t hinder people’s curiosity, and a lot of people dug their hands into the powder, which seems to validate the sense of mystery the piece tries to achieve. This project took a lot of time at every stage of the process, but it was truly rewarding to see it come together in the end after many trials and errors.

j2 k3

References:

AccelStepper Class Reference. (n.d.). Retrieved December 8, 2019, from https://www.airspayce.com/mikem/arduino/AccelStepper/classAccelStepper.html.

Brownlee, J. (2018, July 10). MIT Invents A Shapeshifting Display You Can Reach Through And Touch. Retrieved December 9, 2019, from https://www.fastcompany.com/3021522/mit-invents-a-shapeshifting-display-you-can-reach-through-and-touch.

Ferrolic by Zelf Koelman. (n.d.). Retrieved December 9, 2019, from http://www.ferrolic.com/.

Kodama, S. Protrude, Flow (2018). Retrieved December 8, 2019, from http://sachikokodama.com/works/.

PomPom Mirror by Daniel Rozin. (n.d.). Retrieved December 9, 2019, from https://bitforms.art/archives/rozin-2015/pompom-mirror.

SnOil – A Physical 3D Display Based on Ferrofluid. (2009, September 20). Retrieved December 9, 2019, from https://www.youtube.com/watch?v=dXuWGthXKL8.

State Machine. (n.d.). Retrieved December 8, 2019, from http://www.thebox.myzen.co.uk/Tutorial/State_Machine.html.

 

 

 

Leave a window open for me

img_4037

Project title: Leave a window open for me

By: Neo NuoChen 

Project description:

This piece is meant to create a space which expands infinitely within a box. It is a reflection of myself and my personal experience of fighting insomnia and depression. It welcomes the audience to take a peek inside the story and share the feelings through the “window”.

Visuals (photographs & screenshots):

  • Work-in-progress: img_3756 img_3843 img_4032 img_4053 img_3821
  • Final images:

 img_4257

img_4037

dsc00075

dsc00072

Edited video of the interaction: https://vimeo.com/378123696

 

Code: https://github.com/NeonChip/NeoC/blob/master/exp5

Circuit diagram: https://www.circuito.io/app?components=512,10190,11021,763365

Please note that the servo input is supposed to go to pin 9 and the LED panel input should go to pin 12. There was no breadboard being used for my project, and I was using a 16×16 LED matrix panel.
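For anyone rebuilding the LED side: 16×16 addressable panels like mine are often wired in a serpentine (zigzag) order, so a small helper is needed to map (x, y) coordinates to a pixel index. This is a hedged sketch under that common-wiring assumption; my specific panel’s wiring may differ:

```cpp
// Map an (x, y) position on a 16x16 panel to a strip index, assuming
// serpentine wiring: even rows run left-to-right, odd rows right-to-left.
int xyToIndex(int x, int y, int width = 16) {
    return (y % 2 == 0) ? y * width + x
                        : y * width + (width - 1 - x);
}
```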

Project Context

I wanted to create this personal space within a box for the public because it is my story and I’m willing to share it with everyone. It is designed to be viewed through the “window”, one person at a time, so that the audience can have a more intimate and immersive experience.

This is a story about me going through insomnia and depression for several months when I was in New York. When I couldn’t sleep, the window was one of the things that I stared at the most, and I kept thinking about all the things that I shouldn’t have done and could have done. The stress was so real that it constantly haunted me in my head, the slightest light or noise became utterly unbearable. I could feel the bed spinning like a disc, or was it me spinning in my head? It didn’t matter day or night, they were the same, the thoughts kept me awake, the music that played was drowning me.

But I don’t want to drag the audience into a bad mood; that is not my intention. Ever since I stepped out of that situation, I’ve been proud of myself for doing so, and I want people to know that it is not undefeatable and they are not alone.

The initial idea of building an expanding space within a box was to see how strong the contrast between outside and inside could be; the element of surprise is a twist I always want to incorporate in my designs. I’ve been to multiple Yayoi Kusama exhibitions, and each time I’m amazed by how the mirrors form these rooms. I was also fortunate to see Samara Golden’s work, The Meat Grinder’s Iron Clothes, at the Whitney Museum of American Art in New York: a multilevel installation built with mirrors to expand the space around where the audience stands. Looking up and down while standing in between, the feeling was oddly satisfying; it left me lost in thoughts of existential crisis.

screen-shot-2019-12-08-at-18-32-27

screen-shot-2019-12-08-at-18-33-25

I was also inspired by Louise Bourgeois’ Untitled (Hologram Suite), 1998-2014. I like the way she picked different items in different images to represent different stories when creating this series. That helped me make my decision when I was debating whether to build a whole room full of furniture or go with one specific item, and I agree that going with fewer items expresses the feeling of loneliness more.

screen-shot-2019-12-08-at-18-32-48

“The holographic image is created by laser beams that record the light field reflected from an object, burning it onto a plate of glass. The image is scaled at a one-to-one correspondence with the original sculptural material that was created by Bourgeois. These elements reoccur in many of Bourgeois installations and are related to her interest in physical and emotional isolation and sexuality.”

As for the servo: at first I used the one in our tool kit and followed a tutorial to make it rotate 360 degrees, but the result was not what I expected, because it simply turned into a motor that no longer took input, and the speed was too fast. I re-soldered everything back, and the servo could accept code again, but the spin was still not ideal and produced a lot of noise and shaking. So I borrowed a servo that could rotate 360 degrees without being manually altered from Li’s team, which helped me survive this.

The choice of LED colors aimed to create harmony between the panel and the spinning bed; both seem to form a peaceful and harmless space, yet this also emphasizes the fact that it is a space of absolutely no sleep.

Exhibition Reflection

The open show was a blast, and it was great to see everyone’s work shine. It was interesting to receive feedback from the audience, especially when I asked them to take a look inside before knowing the story behind it. Most people described it as “beautiful and amazing”, which was great, because, as I said, dragging people into a bad mood was never my intention, and people are free to feel whatever they feel. After reading or hearing the background, though, they would say, “yes, I can totally see that”. It made me wonder: did they actually see that because they had felt the same thing in the past, or did they match the visual with the story? Either way, empathy was created and connected the audience with the piece, and I was more than happy about that.

Towards the end, a girl came to look at my work and told me she knew exactly what I was going through because she had been through the same thing. We sat on the staircase and talked about the experience, and about how good it felt to talk about it. This made me realize that more people than I expected are facing a similar issue. They should know that they are not alone; we are in the same boat, and we will always be each other’s support.

I picked a location that was fairly isolated and unnoticeable, which somehow suited my concept, but not many people went into the darker room I shared with three other works. It would have been nice if the audience had known about the room and paid us a visit :)

Reference:

Samara Golden. The Meat Grinder’s Iron Clothes, 2017. Whitney Biennial. The Whitney Museum of American Art, New York https://samaragolden.com/section/450598-THE-MEAT-GRINDER-S-IRON-CLOTHES-2017-WHITNEY-BIENNIAL-WHITNEY-MUSUEM-OF-AMERICAN-ART.html

Heather James Fine Art. Artsy.net. https://www.artsy.net/artwork/louise-bourgeois-untitled-hologram-suite

Yayoi Kusama, Infinity Mirrored Room – The Souls of Millions of Light Years Away, 2013. https://ago.ca/exhibitions/kusama

Unseen

(Un)seen

Nadine Valcin

unseen_still1

(Un)seen is a video installation about presence/absence that relies on proxemics to trigger three different processed video loops. It creates a ghostly presence projected on a screen; the image recedes as the viewer gets closer, even as the ghost constantly tries to engage the viewer through its voice.

As visitors enter the room, they see a barely distinguishable extreme closeup of the ghost’s eyes. As they get closer to the screen, the ghost remains unattainable, visible through progressively wider shots. The last loop plays when visitors are in close proximity to the screen. At that distance, the array of squares and circles layered over the video to give it texture becomes very distinct, making the image more abstract. The rhythm of the images also changes, as short glimpses of the ghost are seen through progressively longer black sequences.

The video image is treated live by a custom filter created in Processing to give it a dreamy and painterly look.

Context

In terms of content, my recent work and upcoming thesis project deal with memory, erasure and haunting. I am interested in how unacknowledged ghosts from the past haunt the present. As Avery Gordon remarks:

“Haunting is a constituent element of modern social life. It is neither premodern superstition nor individual psychosis; it is a generalizable social phenomenon of great import. To study social life one must confront the ghostly aspects of it. This confrontation requires (or produces) a fundamental change in the way we know and make knowledge, in our mode of production.” (2008, p. 7)

This project was a way for me to investigate, through image and sound, how a ghostly presence could be evoked. I also wanted to explore how technology could assist me in doing so in an interactive form that differed from the linear media production I normally engage with. The video material for (Un)seen comes from an installation piece entitled Emergence that I produced in 2017. I thought the images were strong and minimalist and provided a good canvas for experimentation.

(Un)seen is heavily inspired by the work of Processing co-creator Casey Reas and his exploration of generative art.  I have been interested in his work as it explores the way in which computing can create new images and manipulate existing ones in ways that are not possible in the analog realm. Over the years, Reas has used various custom-built software to manipulate video and photographic images.

reas_transference_1

Transference, Source: Casey Reas (http://reas.com/transference/)

Transference (2018) is a video that uses frames from Ingmar Bergman’s black and white film Persona (1966). It deliberately distorts the faces represented, rendering them unidentifiable and reflecting on contemporary questions around identity and digital media.

reas_samarra_1

Samarra, Source: Casey Reas (http://reas.com/samarra/)

He applies a similar image treatment in the music video Samarra (2016) and in Even the Greatest Stars Discover Themselves in the Looking Glass, An Allegory of the Cave for Three People (2014), an experience in which three audience members interact, mediated through cameras and projected images. In that piece, Reas once again looks at identity through a technological lens, against the backdrop of surveillance.

13129374695_9895c96e99_o13129674664_eea72112b1_o

Even the Greatest Stars Discover Themselves in the Looking Glass, An Allegory of the Cave for Three People, Source: Casey Reas (http://reas.com/cave/)
reas2-670

KNBC, Source: Casey Reas (http://reas.com/knbc/)

In KNBC (2015), Reas pushes his experimentation further, manipulating images to a level of abstraction where they become unrecognizable in the finished product, breaking their visual link to the original source material. The recorded television footage and accompanying sound are processed into a colourful, pixelated generative collage.

01-min_jfj07dvflm

Surface X, Source: Arduino Project Hub (https://create.arduino.cc/projecthub/Picaroon/surface-x-811e8c)

From the group project Forget Me Not (assignment 2), I retained the idea of working with an Arduino and a distance sensor, this time to control the video on the screen. I wanted to create a meaningful interaction between the image and the distance that separated it from visitors.

The interactive art installation Surface X by Picaroon was cited in assignment 2 and is still relevant to this project because of its use of proxemics: the umbrellas close as visitors approach, revealing the underlying metal structure and mechanism. The creators saw the activation of the umbrellas as a metaphor for the way we constantly perfect and control our digital personas, and how they collide with reality upon closer inspection, in the moments where all our cracks and flaws are revealed.

surfacex02

Surface X, Source: Arduino Project Hub (https://create.arduino.cc/projecthub/Picaroon/surface-x-811e8c)

In (Un)seen, proxemics are used differently: to signify the refusal of the ghost to visually engage with the visitor, or perhaps to signal that its presence is not quite what it seems.

Process

emergence_4

Still from unprocessed original footage

I started by going through my original footage, selecting all the takes from one of the four participants in the shoot for my installation Emergence. I chose this woman because she had the most evocative facial expressions and dramatic poses. I then created three video loops between 30 and 60 seconds in duration. The first loop comprises extreme closeups focused around the eyes, where the character’s entire face isn’t seen. The second loop consists of closeups where her entire face is visible. The third loop features shots that are a bit wider but shorter in duration, with a significant amount of black between them.

 

unseen_still4

(Un)seen – loop 1

unseen_still2

(Un)seen – loop 2

unseen_still9

(Un)seen – loop 3

I originally thought of manipulating the video image through Adobe After Effects, but I encountered a Coding Train video by Shiffman that showed the potential of extracting the colour of pixels from a photograph (much like it is possible to do in Photoshop) to program filters that would change the appearance of images. It seemed interesting, but I didn’t know if it would be possible to apply those same features to a moving image, given the processing capacity necessary to play and render live video.
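The pixel-sampling idea from that tutorial can be sketched as follows. This is a hypothetical illustration of the technique, not the actual (Un)seen filter; the cell size, threshold, and shape choices are made-up values.

```javascript
// Hypothetical sketch of a pixel-driven filter: walk the image in a
// coarse grid, sample brightness, and plan a shape for each cell.
// gray: flat array of 0-255 brightness values for a w x h image.
function samplePlan(gray, w, h, cell, threshold) {
  const plan = [];
  for (let y = 0; y < h; y += cell) {
    for (let x = 0; x < w; x += cell) {
      const b = gray[y * w + x]; // brightness at the cell's corner
      if (b > threshold) {
        // brighter areas get a circle whose size follows brightness
        plan.push({ x, y, shape: 'circle', size: (b / 255) * cell });
      } else {
        plan.push({ x, y, shape: 'square', size: cell });
      }
    }
  }
  return plan;
}
```

In a Processing or p5.js sketch, the draw loop would then render each planned shape over the video frame, which is where the texture of circles and squares comes from.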

Some of the original footage was intentionally shot with a very shallow depth of field, leaving parts of the shots out of focus depending on the movement of the subject being filmed. As I started to experiment with textures, I found that images that were slightly out of focus helped blur the outlines of the circles and squares that were part of the video filter. I used the Gaussian blur function in Premiere Pro to get the desired texture. It was a trial-and-error process, manipulating the footage in Premiere Pro, then in Processing, through several iterations.

 

screen-shot-2019-12-08-at-15-04-58

Left: Original source footage, right: blurred footage

unseen_still10

Same footage rendered through Processing

unseen_still8

Left: Original source footage, right: blurred footage

unseen_still5

Same footage rendered through Processing

 

I recorded the soundtrack, then edited and mixed it. It consists of a loop of a woman’s heavy breathing, over which a selection of 13 different short clips plays randomly.

unseen_still13

The clips are mostly questions that demonstrate the ghost’s desire to engage with the visitor, but also at times challenges them. Examples include: Who are you? Where do you come from? Can you set me free? Do you want to hear my story?

 

unseen-circuit-diagram

The technical set-up was rather simple. The Arduino Nano was used to read the distance data from the LV-MaxSonar EZ ultrasonic distance sensor. The video for the idle state (loop 1) loaded automatically and two different thresholds were set to trigger the playback of loops 2 and 3.
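A minimal sketch of that two-threshold logic is below; the distance values are hypothetical stand-ins for the thresholds that were actually calibrated to the space.

```javascript
// Hypothetical thresholds in centimetres; the real values were
// recalibrated for each exhibition space.
const NEAR = 60, MID = 180;

// Map a sensor reading to one of the three video loops.
function loopForDistance(cm) {
  if (cm <= NEAR) return 3; // visitor close to the screen
  if (cm <= MID) return 2;  // mid-range
  return 1;                 // idle state
}
```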

Challenges

The distance sensor gave wildly different readings depending on the space and had to be patiently recalibrated several times. Despite the Arduino being set to send readings to Processing at 1500 ms intervals, readings hovering around the thresholds between video loops caused rapid flickering between the different loops. One might say that the system itself was haunted.
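One common remedy for this kind of boundary flicker, noted here as a possible fix rather than what the installation used, is hysteresis: a reading must cross a threshold by a margin before the loop switches, so noise near the boundary cannot toggle the state.

```javascript
// Hysteresis sketch with hypothetical thresholds (cm). The selector
// remembers the current loop and only switches when a reading moves a
// full margin past a threshold.
function makeLoopSelector(near, mid, margin) {
  let current = 1; // start in the idle loop (loop 1)
  return function update(cm) {
    if (current !== 3 && cm <= near - margin) current = 3;
    else if (current !== 1 && cm >= mid + margin) current = 1;
    else if (current === 1 && cm <= mid - margin) current = 2;
    else if (current === 3 && cm >= near + margin) current = 2;
    return current;
  };
}
```

With near = 60 and mid = 180, a reading bouncing between 55 and 65 would no longer flip loops on every frame.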

The ventilation in the classrooms at OCAD also proved challenging: despite playing at full volume on speakers, the soundtrack wasn’t fully audible except at very close range. The original intent was to have a 360-degree soundscape, via a p5.js library, to heighten the immersion and feeling of presence. Unfortunately, I could not find an equivalent for Processing.

unseen_still3

unseen_still11

unseen_still15

screen-shot-2019-12-09-at-01-03-00

Closeup of image as seen up close, projected on a screen

The Exhibition   

The exhibition was a wonderful opportunity to get members of the OCAD community and of the general public to engage with the work. The fact that (Un)seen was in a separate room was at once an advantage and an inconvenience. Some people missed the piece because they concentrated on the main spaces, but those who ventured into the room focused their entire attention on the installation. Being the sole creator of the piece left me with all the duties of engaging with visitors and didn’t allow me to visit my colleagues’ pieces, especially those by undergraduates and second-year graduate students that I had not yet seen. I met and spoke with Digital Futures faculty I hadn’t yet encountered, as well as staff and students from other departments. It was a useful and engaging exchange that should happen more often, as it created a real sense of community.

People were eager to engage with the piece and the feedback was overwhelmingly positive. Visitors understood the concept and enjoyed the experience. Because of the issues with the distance sensor, they had to be instructed not to move too quickly and to take the time to pause, to minimize false triggers. The only drawback to the room was the extremely noisy ventilation: despite the sound playing at maximum volume on the room speakers, the soundtrack and clips were barely audible. The fact that the door was open to entice people into the space only added to the din. It would also have been nice to have a totally dark space to present in, but I ended up switching spaces with some of my colleagues to accommodate their project.

 

CODE: https://github.com/nvalcin/Unseen

 

BIBLIOGRAPHY

Correia, Nico. “Bridging the gap between art and code” in UCLA Newsroom, April 25, 2016 http://newsroom.ucla.edu/stories/bridging-the-gap-between-art-and-code. Accessed on December 6, 2019.

Gordon, Avery F. (2008). Ghostly Matters, Haunting and the Sociological Imagination. Minneapolis: University of Minnesota Press.

Picaroon (2018), Surface X in Arduino Project Hub. https://create.arduino.cc/projecthub/Picaroon/surface-x-811e8c. Accessed on December 6, 2019.

Reas, Casey (2019). Artist’s website. http://reas.com/ Accessed on December 6, 2019.

Rosenthal, Emerson, “Casey Reas’ Newest Art Is A Coded, Projected ‘Allegory Of The Cave’” in Vice Magazine, March 14, 2014. https://www.vice.com/en_us/article/mgpawn/casey-reas-newest-art-is-a-coded-projected-allegory-of-the-cave-for-thedigital-age  Accessed on December 6, 2019.

 

Experiment 5 – OK to Touch!?

Project Title
OK to Touch!?

Team members
Priya Bandodkar & Manisha Laroia

Mentors
Kate Hartman & Nick Puckett

Project Description
Code | Computer Vision | p5js
OK to Touch!? is an interactive experience to bring into the spotlight the inconspicuous tracking technology and make it visible to the users through interactions with everyday objects. The concept uses experience to convey how users’ private data could be tracked, without their consent, in the digital future.

A variety of popular scripts are invisibly embedded in many web pages, harvesting a snapshot of your computer’s configuration to build a digital fingerprint that can be used to track you across the web, even if you clear your cookies. It is only a matter of time before these tracking technologies take over the ‘Internet of Things’ we are starting to surround ourselves with. From billboards to books, and from cars to coffeemakers, physical computing and smart devices are becoming more ubiquitous than we can fathom. As users of smart devices, the very first click or touch on a smart object signs us up to be tracked online and to become another data point in the web of ‘device fingerprinting’, with no conspicuous privacy policies and no apparent warnings.

With this interactive experience, the designers are attempting to ask the question:
How might we create awareness about the invisible data tracking methods as the ‘Internet of Things’ expands into our everyday objects of use?

Background
In mid-2018, when our inboxes were full of updated-privacy-policy emails, it was not a chance event that all companies decided to update their policies at the same time, but an after-effect of the enforcement of the GDPR. The General Data Protection Regulation (EU) 2016/679 (GDPR) is a regulation in EU law on data protection and privacy for all individual citizens of the European Union (EU) and the European Economic Area (EEA); it also addresses the transfer of personal data outside the EU and EEA. The consequences of data breaches for political, marketing and technological practices are evident in the Facebook–Cambridge Analytica data scandal, the Aadhaar login breach, and the Google Plus breaches that exposed the data of first 500,000, then 52.5 million people, to name a few.

datasecurity-paper
News articles about recent data scandals. Image source: https://kipuemr.com/security-compliance/security-statement/

When the topic of data privacy comes up in discussion circles, some get agitated about their freedom being affected, some take the fifth, and some say that ‘we have nothing to hide.’ Data privacy is not about hiding but about being ethical. A lot of the data shared across the web is used by select corporations to make profits at the cost of the individual’s digital labour; that is why no free software is really free, but comes at the cost of your labour in using it and allowing the data it generates to be used. Most people tend not to know what is running in the background of the webpages they hop onto, or of the voice interactions they have with their devices, and if they don’t see it, they don’t believe it. With more and more conversation happening around data privacy and ethical design, we believed it would help if we could make this invisible background data transmission visible to the user and initiate a discourse.

exp-5-proposal-question

Inspiration

immaterials

The Touch Project
The designers of the Touch project— which explores near-field communication (NFC), or close-range wireless connections between devices—set out to make the immaterial visible, specifically one such technology, radio-frequency identification (RFID), currently used for financial transactions, transportation, and tracking anything from live animals to library books. “Many aspects of RFID interaction are fundamentally invisible,” explains Timo Arnall. “As users we experience two objects communicating through the ‘magic’ of radio waves.” Using an RFID tag (a label containing a microchip and an antenna) equipped with an LED probe that lights up whenever it senses an RFID reader, the designers recorded the interaction between reader and tag over time and created a map of the space in which they engaged. Jack Schulze notes that alongside the new materials used in contemporary design products, “service layers, video, animation, subscription models, customization, interface, software, behaviors, places, radio, data, APIs (application programming interfaces) and connectivity are amongst the immaterials.”
See the detailed project here.

digital-fingerprint

This is Your Digital Fingerprint
Because data is the lifeblood for developing the systems of the future, companies are continuously working to ensure they can harvest data from every aspect of our lives. As you read this, companies are actively developing new code and technologies that seek to exploit our data at the physical level. Good examples of this include the quantified self movement (or “lifelogging”) and the Internet of Things. These initiatives expand data collection beyond our web activity and into our physical lives by creating a network of connected appliances and devices, which, if current circumstances persist, probably have their own trackable fingerprints. From these initiatives, Ben Tarnoff of Logic Magazine concludes that “because any moment may be valuable, every moment must be made into data. This is the logical conclusion of our current trajectory: the total enclosure of reality by capital.” More data, more profit, more exploitation, less privacy. See the detailed article here.

paper-phone_special project

Paper Phone
Paper Phone is an experimental app, developed by the London-based studio Special Projects as part of Google’s digital wellbeing experiments, which helps you take a little break from your digital world by printing a personal booklet of the information you’ll need that day. Printed versions of the functions you use the most, such as contacts, calendars and maps, let you get things done in a calmer way and help you concentrate on the things that matter most. See the detailed project here.

irl-podcast

IRL: Online Life is Real Life
Our online life is real life. We walk, talk, work, LOL and even love on the Internet – but we don’t always treat it like real life. Host Manoush Zomorodi explores this disconnect with stories from the wilds of the Web, and gets to the bottom of online issues that affect us all. Whether it’s privacy breaches, closed platforms, hacking, fake news, or cyberbullying, we the people have the power to change the course of the Internet, keeping it ethical, safe, weird, and wonderful for everyone. IRL is an original podcast from Firefox, the tech company that believes privacy isn’t a policy – it’s a right. Hear the podcast here.

These sources helped define the question we were asking and inspired us to show the connection between the physical and the digital: to make the invisible visible and tangible.

The Process

The interactive experience grew out of the ‘How Might We’ question we raised after our research on data privacy, and we began sketching out the details of the interaction:

  • Which interactions we wanted: touch, sound, voice, or tapping into user behaviour
  • What tangible objects we should use: daily objects, a new product designed with affordances to interact with, or digital products like mobile phones and laptops
  • Which programming platform to use, and
  • What the setup and user experience would be

ideation-comp_

While proposing the project, we intended to make tangible interactions using Arduino boards embedded in desk objects, with Processing creating visuals that would illustrate the data tracking. We wanted the interactions to be seamless and the setup to look as normal, intuitive and inconspicuous as possible, reflecting the hidden, creepy nature of data tracking techniques. Here is the initial setup we had planned to design:

installation

Interestingly, in our early proposal discussion, we raised the concern of having too many wires in the display if we used Arduino, and our mentors proposed we look at ml5, a machine learning library that works with p5.js and can recognize objects using computer vision. We tried the YOLO model in ml5 and iterated on the code, trying to recognize objects like remotes, mobile phones, pens and books. The challenges with this approach were creating the visuals we wanted to accompany each recognized object, tracking multiple interactions, and overlaying the captured video with the computer vision output. It was very exciting to use this library because we no longer depended on hardware interactions: we could build a setup with no wires and no visible digital interactions, a mundane scene that could then deliver the surprise of the tracking visuals, reinforcing the concept.

track-remote-with-rectangle

ml5-recognize-remote
Using YOLO ml5 library to track and recognize a remote.
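ml5’s YOLO model returns detections as an array of labeled boxes with confidence scores. A sketch of the kind of filtering step such a setup might apply is below; the label list and confidence cutoff are illustrative, not the values from our sketch.

```javascript
// Hypothetical post-processing of ml5 YOLO detections: keep only the
// desk objects we care about, above a confidence cutoff. Each
// detection carries at least a label and a confidence score.
const DESK_OBJECTS = ['remote', 'cell phone', 'book'];

function keepDeskObjects(detections, minConfidence) {
  return detections.filter(
    (d) => d.confidence >= minConfidence && DESK_OBJECTS.includes(d.label)
  );
}
```

In the p5.js sketch, the surviving detections would then drive the overlay visuals for each recognized object.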

data-points-map

motion-tracking_code
Using the openCV library for p5js and processing.

While using the ml5 library we also came across the openCV libraries that work with Processing and p5.js, and iterated with the pixel-change (frame difference) function. We created overlay visuals on the video capture, and also without the video capture, thus creating a data-tracking map of sorts. Eventually we used the optical flow library example and built the final visual output on it. For input we used a webcam and ran the captured video feed through p5.js.
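The frame-difference idea behind those examples can be sketched as a pure function over two grayscale frames; this illustrates the technique, not our exact p5.js code.

```javascript
// Minimal frame-difference sketch: compare two grayscale frames and
// collect the coordinates where the brightness change exceeds a
// threshold -- the raw material for a data-tracking map.
function changedPixels(prev, curr, w, threshold) {
  const points = [];
  for (let i = 0; i < curr.length; i++) {
    if (Math.abs(curr[i] - prev[i]) > threshold) {
      points.push({ x: i % w, y: Math.floor(i / w) });
    }
  }
  return points;
}
```

Plotting the returned points without drawing the video itself gives exactly the kind of motion map shown above.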

Challenges & Learnings:
Our biggest learning came from prototyping and creating setups for user testing, to understand the nuances of crafting an engaging experience.
The final setup was to have a webcam on top, tracking any interaction with the products on the table; the input video feed would then be processed to give a digital output of data-tracking visualizations. For the output we tried various combinations: using a projector to throw visuals as the user interacted with the objects, using an LCD display to overlay the visuals on the video feed, or using a screen display in the background to form a map of the data points collected through the interaction.

Top projection was something we felt would be a very interesting output method, as we would be able to throw a projection onto the products as the user interacted with them, creating the visual layer of awareness we wanted. Unfortunately, each time we used top projection, the computer vision code picked up a lot of visual noise: the projection itself was added to the video capture as input, generating a feedback loop of unnecessary extra projections that broke the discreet experience we wanted to create. Projections looked best in dark spaces, but darkness would compromise the effectiveness of the webcam, and computer vision was the backbone of the project. Eventually we used an LCD screen with a top-mounted webcam.

proposal-image-1
Test with the video overlay that looked like an infographic
process-2
Testing the top projections. This projection method generated a lot of visual noise for the webcam and had to be dropped.
tracking
Showing the data point tracking without the video capture.
process-1
Setting up the top webcam and hiding it within a paper lamp for the final setup.

Choice of Aesthetics:
The final video feed with the data-tracking visuals looked more like an infographic, subtler than the strange, surveillance-like experience we wanted to create. So we decided to add a video filter as an additional visual layer on the video capture, to show that the feed had undergone some processing and was being watched or tracked. The video was displayed on a large screen placed adjacent to a mundane desk with typical desk objects: books, lamps, plants, stationery, stamps, a cup and blocks.

setup

Having a bare webcam during the critique made the kind of interaction too evident to users, so, learning from that, we hid the webcam in a paper lamp in the final setup. This added another cryptic layer to the interaction, reinforcing the concept.

setup-in-use

These objects were chosen and displayed so as to create a desk workspace where people could come, sit, and start interacting with the objects through the affordances created. Affordances were created using semi-opened books, bookmarks inside books, an open notepad with stamps and ink pads, a small semi-opened wooden box, a half-filled cup of tea with a tea bag, a wooden block, stationery, and a magnifying glass, all hinting at a simple desk that could well be a smart desk, tracking each move of the user and transmitting data without consent on every interaction with the objects. The webcam was hung over the table and discreetly covered by a paper lamp to add to the everyday-ness of the desk setup.

Each time a user interacted with the setup, the webcam would track the motion and the changes in the pixel field and generate data-capturing visuals, sparking in the user the sense that something strange was happening and making them question whether it was OK to Touch!?

user1

Workplan:

  • 23rd – 25th November: Material procurement and quick prototyping to select the final objects
  • 26th – 28th November: Writing the code and digital fabrication
  • 28th – 30th November: Testing and bug-fixing
  • 1st – 2nd December: Installation and final touches
  • 3rd – 4th December: Presentation

portfolio-image-2

portfolio-image-1

The Project code is available on Github here.

__________________
References

Briz, Nick. “This Is Your Digital Fingerprint.” Internet Citizen, 26 July 2018, www.blog.mozilla.org/internetcitizen/2018/07/26/this-is-your-digital-fingerprint/.

Chen, Brian X. “’Fingerprinting’ to Track Us Online Is on the Rise. Here’s What to Do.” The New York Times, The New York Times, 3 July 2019, www.nytimes.com/2019/07/03/technology/personaltech/fingerprinting-track-devices-what-to-do.html.

Groote, Tim. “Triangles Camera.” OpenProcessing, www.openprocessing.org/sketch/479114

Grothaus, Michael. “How our data got hacked, scandalized, and abused in 2018”. FastCompany. 13 December 2018. www.fastcompany.com/90272858/how-our-data-got-hacked-scandalized-and-abused-in-2018

Hall, Rachel. Chapter 7, Terror and the Female Grotesque: Introducing Full-Body Scanners to the U.S. Airports pp. 127-149 In Eds. Rachel E. Dubrofsky and Shoshana Amielle Maynet, Feminist Surveillance Studies. Durham: Duke University Press, 2015.

Khan, Arif. “Data as Labor” Singularity Net. Medium, 19 November 2018
blog.singularitynet.io/data-as-labour-cfed2e2dc0d4

Szymielewicz, Katarzyna, and Bill Budington. “The GDPR and Browser Fingerprinting: How It Changes the Game for the Sneakiest Web Trackers.” Electronic Frontier Foundation, 21 June 2018, www.eff.org/deeplinks/2018/06/gdpr-and-browser-fingerprinting-how-it-changes-game-sneakiest-web-trackers.

Antonelli, Paola. “Talk to Me: Immaterials: Ghost in the Field.” MoMA, www.moma.org/interactives/exhibitions/2011/talktome/objects/145463/.

Shiffman, Daniel. “Computer Vision: Motion Detection – Processing Tutorial” The Coding Train. Youtube. 6 July 2016. www.youtube.com/watch?v=QLHMtE5XsMs

 

Experiment 5: Eternal Forms

Names of the Group Members:

Catherine Reyto, Jun Li, Rittika Basu, Sananda Dutta, Jevonne Peters

Project Description:

“Eternal Forms” is an interactive artwork incorporating geometric elements in motion. The construction of the elements is highly precise in order to generate an optical illusion of constant collision. The illusion results from linear patterns overlapping in motion between geometric forms: the foreground square is held firmly in place while the background circle rotates constantly, and the display lights change their chromatic values as participants interact from varying proximities.

The artwork takes inspiration from various light and form installation projects by Nonotak, an artist duo consisting of Illustrator Noemi Schipfer and architect-musician Takami Nakamoto. Nonotak works with sound, light and patterns achieved with repeating geometric forms. The installation work aims to immerse the viewer by enveloping them in the space with dreamlike kinetic visuals. The duo is also known for embedding custom-built technology in their installations, as well as conventional technology to achieve desired effects in unconventional ways.

Visuals:

Final Images:

 

Circuit Diagrams:

https://www.circuito.io/app?components=512,11021

Project Context

Initial Proposal

Originally we intended to continue our explorations with RGB displays. Four of the five group members had come a long way working together on Experiment 4 (Eggsistential Workspace), only to have our communicating displays malfunction on account of the unexpected fragility of the pressure sensors. We had hoped to pick up where we had left off by disassembling our previous RGB displays and revamping the project into an elaborate interactive installation for the Open Show. We designed a four-panel display, each panel showcasing a pattern of birds from our respective countries (Canada, Saint Lucia, India and China). The birds would be laser-cut and lit by effect patterns with the RGBs. After many hours of strategizing, we found we were facing too many challenges in the RGB code that, given our time constraints, made the plan overly risky. For example, we intended to isolate specific lights within the RGB strips, designating the neighbouring lights on the string to be turned off. Once we broke down how complex this would prove to be (each message sent to an LED involves sending messages to all preceding LEDs in the string), it became clear that the desired codebase was out of scope. We returned to the drawing board and began restrategizing a plan that could work within the constraints of our busy schedules, deadlines and combined skills. Having five people in the group meant a lot of conflicting ideas, making it tricky to move out of the brainstorming process and into prototype iteration. But we were all interested in kinetic sculptures, and the more examples we came across, the more potential we saw in devising one of our own. It seemed like an effective way of keeping us equally involved in the strategy as well as the code. Having minimal experience working with gears (only Jun Li had used them previously), we were intrigued by the challenge of constructing them.
We came across this example and began to deconstruct it, replacing the hand-spun propulsion with a motor and controlling the speed and direction by means of proximity sensors (video):
https://www.youtube.com/watch?v=–O9eyKIubY

Though we aimed to keep the design as simple as possible, we weren’t able to gauge the complexities of the assembly until we really started to dig into the design. We thought a pulley system could be built, where a mechanism surrounding the motor could trigger motion in another part of the structure by way of gears. We were mesmerized by the rhythmic patterns we came across, in particular the work of Nonotak Studio, who primarily work with light and sound installations. Taking inspiration from their pieces, we decided to create visual illusions based on the concept of pattern overlap. We also planned to make use of light and a distance sensor to turn the piece into an interactive light display.

https://www.nonotak.com/_MASK

Tools and Software

  • Distance sensor
  • RGB lights
  • Servo motor (360)
  • Nano and Uno board
  • Acrylic sheets – black and diffused
  • ¼″ and ⅛″ Baltic Birch sheets
  • Laser cutting techniques
  • Illustrator

Ideation

Our previous ideas seemed complicated to implement. Hence, we sat for a second round of brainstorming on possible outcomes within the given time frame and resources. We began by browsing existing projects: the kinetic installations of NONOTAK Studio, ‘The Twister Star Huge’ by Lyman Whitaker, ‘Namibia’ by Uli Aschenborn, and Spunwheel’s award-winning sculptures made from circular grids. We then proceeded with the creation of our own circular grid, constructed from several interlinking ellipses running across a common circumference of the centre ellipse. This grid served as the base for our subsequent designs.

We drew inspiration from Félix González-Torres, a Cuban-born American visual artist who created minimal installations from common objects like lightbulb strings, clocks, paper, photographs, printed texts and hard candies. He was a member of Group Material, a New York-based artist organisation formed to promote collaborative projects concerned with cultural activism and community education.

Process

Several constructions and geometrical formations were explored. We studied how to create an optical illusion with forms in motion. We simplified the curvatures into straight lines, since we were unsure of the feasibility and reliability of complicated junctions; thus one simple circle and one simple square were included.

As you can see in the above diagrams, a layout was created to give us an idea of the entire frame, its size, the materials to be used, as well as the complications or hindrances that might arise along the way.

After the finalization of the entire setup, we came up with a list of the different layers that would be encased in an open wooden box (20″ by 20″). The list is as follows, from top to bottom:

  1. Square black acrylic sheet with laser-cut patterns – This will be the front view (covering the open wooden box) and will remain stationary – size 20″ by 20″
  2. Circular black acrylic sheet with laser-cut patterns – This will be in motion, as the centre will be connected to the 360 servo motor – size 18.5″ by 18.5″
  3. Diffused white acrylic sheet with a cut outline in the centre to fix the base of the servo motor
  4. RGB lights + Nano and Uno board – These are stuck to the base of the wooden box
  5. A small wooden strip with a distance sensor holding area to be attached in front of the installation – This will change the pattern lighting based on distance

Image: The 2 layers of forms that were laser cut to include in our final setup.

Image: The RGB bulbs set up to create an even distribution of light across the 20”x 20” board.

Image: After the setup was done, above are a couple of effects created using lights and the motion of the overlapping layers.

Prototyping

We created numerous samples of our design in miniatures and overlapped them. We experimented with black and white colours by playing with the following arrangements-

  • White square rotating on a white square (stabilised)
  • Black square rotating on a white square (stabilised)
  • Black square rotating on a black square (stabilised)
  • White square rotating on a black square (stabilised)
  • White circle rotating on a white square (stabilised)

Coding

Motor — the motor is set to run slowly counterclockwise at the optimum speed to give the desired interplay with the geometry. It’s important to get the speed exactly right, or the lines will not show the desired effect.
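Our 360 servo is a continuous-rotation type, which is commanded by pulse width rather than angle. As a rough sketch (the direction convention and pulse endpoints are assumptions, not our exact calibration, and this helper is illustrative rather than our actual sketch code), the speed-to-pulse mapping looks like this:

```cpp
#include <cmath>

// Map a signed speed (-1.0 .. +1.0, positive taken as counterclockwise here)
// to the pulse width in microseconds expected by a continuous-rotation (360)
// servo. 1500 us is the stop point; small offsets give slow rotation.
// Exact endpoints and direction vary by servo model.
int speedToPulseMicros(double speed) {
    if (speed > 1.0) speed = 1.0;    // clamp to the valid range
    if (speed < -1.0) speed = -1.0;
    return static_cast<int>(std::lround(1500.0 + speed * 500.0));
}
```

On the Arduino this value would be passed to `Servo::writeMicroseconds()`; finding the exact speed that produces the illusion took trial and error.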

Lights — the distance sensor reads in a value and includes it in the running average of the distance (the last 10 readings). It then maps that distance to a value used for both the brightness of the lights and the speed of the effects: the closer the viewer, the brighter the lights but the slower the effects. The distance also determines which light effect is shown. When very close, the light breathes; a little further away, it blinks quickly; and at the standard distance it paints the colour onto the background. Each effect adds to the illusion.
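The smoothing and mapping described above can be sketched in plain C++ (the 10-reading window matches the description; the distance range and brightness curve are assumptions, not our calibrated values):

```cpp
#include <cstddef>
#include <deque>
#include <numeric>

// Running average over the last N distance readings, as described above.
class RunningAverage {
public:
    explicit RunningAverage(std::size_t window) : window_(window) {}
    double add(double reading) {
        readings_.push_back(reading);
        if (readings_.size() > window_) readings_.pop_front();
        return std::accumulate(readings_.begin(), readings_.end(), 0.0) /
               readings_.size();
    }
private:
    std::size_t window_;
    std::deque<double> readings_;
};

// Map a smoothed distance (cm) to an LED brightness of 0-255: closer is
// brighter. The 10-200 cm working range is an assumed example.
int distanceToBrightness(double cm) {
    if (cm < 10.0) cm = 10.0;
    if (cm > 200.0) cm = 200.0;
    return static_cast<int>(255.0 * (200.0 - cm) / 190.0);
}
```

The same smoothed distance can then select the breathing, blinking or painting effect via simple thresholds.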

Github: https://github.com/jevi-me/CC19-EXP-5

Final Project & Exhibition Reflection

For the exhibition, we were given an entire room to display the piece. We projected a video of the manufacturing process onto one wall, and on the opposite wall we solved the concern of the empty space by projecting artwork appropriate to the display: a generative mandala formation of various altering forms (coded by Sananda in her individual project). The work allows participants to create their own patterns with varying colours via manual alterations with potentiometers. We also had some calming tunes playing along with the laser-cutting video being projected.

Many in attendance commented that they couldn’t pull their eyes away from the piece, and that it was meditative, mesmerising and calming. We also received three offers to purchase the installation. One participant analysed the piece, praising the use of colour, lines, geometry and interaction that made it very aesthetically pleasing, and we noticed many leaving and returning with their friends to have them experience the illusion themselves and interact with the distance sensor with great delight. Overall, the experience of light and subtle motion in a dark room created some beautiful visual illusions, and that became the highlight of our experiment.

References

  1. Schipfer, Noemi, and Takami Nakamoto. “MASKS, Kinetic Object Series”. Nonotak.com, 2014, https://www.nonotak.com/_MASKS.
  2. Kvande, Ryan. “Sculpture 40″ – Award Winning Optical Illusion Sculptures”. Spunwheel, 2019, https://www.spunwheel.com/40-kinetic-sculptures1.html.
  3. Whitaker, Lyman. “Whitaker Studio”. Whitakerstudio.com, 2019, https://www.whitakerstudio.com/.

Absolutely on Music

by Lilian Leung

Project Description

Absolutely on Music is an interactive prototype of a potential living space. The space is composed of a sensor activated light that turns on when a participant sits on the chair and a copy of Haruki Murakami and Seiji Ozawa’s book Absolutely on Music, which plays the music the author and conductor talk about in each chapter of the book.

This experiment expands upon my personal exploration in tangible interfaces, as well as further research into slow-tech and the use of Zero UI (invisible user interfaces). This exploration is meant to re-evaluate our relationship with technology as being able to augment everyday inanimate objects rather than creating alternative screen-based experiences centered around a hand-held device. The audio played beneath the chair corresponds to each chapter of the book, which is divided into six individual conversations, each centered on a different topic and part of Ozawa’s career. Only five tracks are played, as the sixth chapter has no musical piece discussed in detail. The auditory feedback playing the music featured in the book creates a multi-sensory experience, and broadens the audience of the book from solely music experts who need no musical reference to anyone looking to enjoy a light read.

Project Process

November 21 – 22 (Proposal Day)

Early research pointed to using an Arduino UNO instead of an Arduino Nano so I’d be able to use an MP3 shield to play my audio rather than depending on Processing. For early exploration, I looked into using a combination of flex sensors and pressure sensors on the binding of the book and on the front and back covers to detect when the book was being picked up. This layout was inspired by Yining Shi (2015), who mapped the Jurassic Park novel to the movie.

After my proposal meeting, I switched from flex sensors to copper tape as switches to produce more reliable data. From there I decided on the modes of the experiment and how the book and chair should behave when not in use.

Modes

Idle Mode: Lamp – Dim Light; Book – Orchestra Tuning
Active Mode: Lamp – Full Brightness; Book – Play Audio

 

Having purchased the MP3 shield, I started formatting the MicroSD card with the appropriate tracks for each chapter, named ‘track00X’ so they could be read correctly by the Arduino and shield. From the shield diagrams, I would only be able to use analog pins 0–5 and digital pins 0–1, 3–5, and D10. From this, I laid out the switches for the chapters on D0–1 and D3–5, and kept D10 for the lamp and its sensor input and output.
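The ‘track00X’ convention can be sketched as a small helper (hypothetical, shown in plain C++ for illustration rather than taken from the Arduino sketch; the `.mp3` extension is an assumption):

```cpp
#include <cstdio>
#include <string>

// Build the zero-padded filename the MP3 shield library looks up on the
// MicroSD card, following the "track00X" naming convention described above.
std::string trackFilename(int trackNumber) {
    char buf[16];
    std::snprintf(buf, sizeof(buf), "track%03d.mp3", trackNumber);
    return std::string(buf);
}
```

So chapter 1’s audio lives at `track001.mp3` and the idle tuning audio at `track007.mp3`.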

artboard-1

November 23
To create a more natural reading space, I went to purchase a chair and cushion from IKEA. I tried to pick a more laid back chair so that participants would be interested in sitting down rather than repurposing a studio chair. The supporting beam in the back of the chair allowed for a space to safely and discreetly place all my wiring that wouldn’t be seen. 

For the lamp design, I had initially intended to create a free-standing lamp, but after some thought I decided to incorporate it into the side table so there would be less clutter in the space. I wanted the side table to be minimal and able to discreetly hide the light and all the wiring involved.

sidetableinspo

November 25

To conserve time and memory on the Arduino, the audio clips for each chapter were trimmed to a maximum of 7 minutes rather than playing the full one- to two-hour performances. I tested the MP3 shield using external speakers instead of just headphones to check the sound quality.

November 26-27

Early iterations of the code for the Arduino and MP3 shield weren’t working, as tracks refused to play with the if/else statements. Revisions I made with the help of Nick Puckett were adding a statement to always play the default tuning audio (track 7), and simply changing the track number on each switch rather than playing and stopping each track as it played.
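The revised logic can be sketched roughly as follows (plain C++ for illustration; the switch indexing is an assumption, not the exact pin mapping or repository code, but the track numbers follow the description above):

```cpp
#include <array>

// Sketch of the revised selection logic: rather than playing and stopping
// tracks per switch, we always resolve a single "current track" number.
// Track 7 is the idle orchestra-tuning audio; tracks 1-5 map to chapters.
int selectTrack(const std::array<bool, 5>& chapterSwitch) {
    for (int i = 0; i < 5; ++i) {
        if (chapterSwitch[i]) return i + 1;  // first closed switch wins
    }
    return 7;  // no chapter switch closed: play the default tuning track
}
```

In the loop, the shield is only told to change tracks when this number differs from the one already playing, which avoids the constant play/stop calls that were failing.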

In the early production stage of the side table, I cut a set of 7.5” by 10” sheets of ¼-inch plywood, one with a 4.5” diameter circle in the centre and one with a 5” diameter circle, to house the LED strip for the light. A hole was drilled in the bottom to allow the wiring to be hidden away. To diffuse the light from the LED, a frosted acrylic sheet was cut and used to securely hold in the lighting. I chose to have the LED light on the underside of the table so that the light would be more discreet and readers wouldn’t have a bright light shining directly up at their faces while reading.

artboard-1-copy-5

artboard-1-copy-4

Once the wiring was complete, I soldered the wiring onto a protoboard to securely hold everything. I used screw terminals for the wiring for the book switches, the chair pressure sensor and the side table light to be able to transport my work easily between locations and to easily troubleshoot wiring problems. From there I mounted a small board I made to the back supporting beam of the chair so the protoboard and Arduino could safely be placed inside.

November 28 – 29

To finish the side table, I glued four sheets of ½-inch plywood together to make the legs stronger. For the wiring, I routed one side of the ½-inch plywood from the inside so that the wiring could be hidden entirely inside the legs of the table and come out of the leg discreetly.

artboard-1-copy

With the side table complete, I brought all the items together to see how the space would look all together.

artboard-1-copy-9

December 1

To solve the last few problems I was having with the Arduino and getting the chapters working, I updated the code from a single if/else statement to an if statement followed by an “else if” for each remaining chapter. Another issue was the difficulty uploading my code, as I’d frequently get the following error in the serial monitor:

AVRDUDE: STK500_GETSYNC<> ATTEMPT 10 OF 10 NOT IN SYNC

I managed to solve this upload error on the Arduino Uno. On an online forum, a user mentioned it may be caused by something being wired into pin 0 (RX); unplugging this pin during upload solved the issue. Another issue was the consistency of the switches turning on and off, as participants may hold the book at different angles and might not apply enough pressure for the switches to properly read HIGH and LOW. Originally each chapter was formatted with all switch states specified, with only the one switch indicated as HIGH; due to the inconsistency in pressure, I removed certain states that were unreliable.

Ch. 1 Switch Ch. 2 Switch Ch. 3 Switch Ch. 4 Switch Ch. 5 Switch
HIGH LOW
LOW HIGH LOW LOW
LOW HIGH HIGH LOW
LOW LOW HIGH
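One way to smooth out the inconsistent copper-tape readings (a common debounce pattern, sketched here as an illustration rather than the code actually used; the threshold of 3 samples is an assumption) is to accept a state change only after several identical consecutive samples:

```cpp
// Tolerant switch reader: only accept a HIGH/LOW change after it has been
// observed on several consecutive samples, smoothing over the inconsistent
// contact of the copper-tape page switches.
class DebouncedSwitch {
public:
    bool update(bool rawReading) {
        if (rawReading == candidate_) {
            // Same reading again: promote it once seen 3 times in a row.
            if (++count_ >= 3 && rawReading != stable_) stable_ = rawReading;
        } else {
            candidate_ = rawReading;  // start counting the new reading
            count_ = 1;
        }
        return stable_;
    }
private:
    bool stable_ = false;
    bool candidate_ = false;
    int count_ = 0;
};
```

A brief loss of contact while a participant shifts the book then no longer registers as a chapter change.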

From there, the final test was adding the speakers again with the finished chair and table to make sure the speakers would fit comfortably underneath the chair.

 

Project Context

Absolutely on Music explores the use of audio and tactile sensors to create a more immersive experience for inanimate objects in the home, rather than creating an augmented screen-based experience. This experiment is based on the philosophy of slow-tech, countering our need to develop tools that work more efficiently to allow us to do more, faster (Slow Living LDN, 2018). The set-up space is meant to re-evaluate our experience of technology and the potential of creating a multi-sensory and accessible home.

This work is an example of Zero UI, which involves interacting with a device or application without the use of a touchscreen or visual graphic interface. Zero UI technology allows individuals to communicate with devices through natural means of communication such as voice, movements and glances (Inkbotdesign, 2019). Most Zero UI devices are related to the Internet of Things and are interconnected with a larger network. For this experiment, I wanted to explore creating a multi-sensory experience that requires no networked communication or quantified data gathering, allowing participants a more immersed and mindful experience with an inanimate object.

I chose Absolutely on Music by Murakami and Ozawa purposely for its references to auditory content that readers may be unfamiliar with, and for how searching for said music can interrupt the reading experience rather than keeping it seamless. Playing the music in place makes the content accessible to a broader spectrum of readers.

A book was chosen as the object because of the ongoing discussion around reading from a digital screen versus a physical copy on paper. A physical book is a “dumb” object and allows a slower, more leisurely experience free of distractions compared to reading on a digital device.

The book used is Absolutely on Music, a series of six conversations between the Japanese author Haruki Murakami and the Japanese conductor Seiji Ozawa. Classical music, like fine art, is generally difficult to access and deeply personal. Interest declines as individuals distrust their own reactions, perceiving classical music as belonging to more sophisticated listeners, as mentioned in a New York Times op-ed (Hoffman, 2018). By playing the actual audio through speakers below the chair, anyone can follow along with the book without any prior musical knowledge of the works described.

Within the book, the author and conductor discuss both of their careers, from key performances in Ozawa’s career to Murakami’s passion for music; musical pieces are deeply integral to all his works, from The Wind-Up Bird Chronicle, which opens with Rossini’s The Thieving Magpie, to the Haydn concerto within the pages of Kafka on the Shore (2002).

To elevate the sensory experience of the book, a set of switches were placed within the first five chapters (conversations) of the book. The audio described in each chapter is played with the use of a switch situated within each chapter of the book to provide context to the works Murakami and Ozawa are discussing.

Table of Contents of Absolutely on Music

1st Conversation – Mostly on the Beethoven Third Piano Concerto
Interlude 1 – On Manic Record Collectors

2nd Conversation – Brahms at Carnegie Hall
Interlude 2 – The Relationship of Writing to Music

3rd Conversation – What Happened in the 1960s
Interlude 3 – Eugene Ormandy’s Baton

4th Conversation – On the Music of Gustav Mahler
Interlude 4 – From the Chicago Blues to Shin’ichi Mori

5th Conversation – The Joys of Opera
6th Conversation – “There’s no single way to teach; you make it up as you go along”

Based on the contents of the book, I pulled the main musical piece the two individuals spoke about into a tracklist that I would use for the interactive book. 

Timing (Chapter) – Tracklist
Idle Mode – Orchestral tuning audio
Chapter 1 – Glenn Gould’s 1962 Brahms Piano Concerto in C Minor
Chapter 2 – Seiji Ozawa’s Beethoven’s 9th Symphony
Chapter 3 – Seiji Ozawa’s Rite of Spring (by Igor Stravinsky)
Chapter 4 – Seiji Ozawa’s The Titan / Resurrection (by Gustav Mahler)
Chapter 5 – Dieskau, Scotto and Bergonzi’s Rigoletto
Chapter 6 – (No audio; no single musical piece is focused on)

chairmock

2019-12-04-04-23-41-3

artboard-1-copy-15

artboard-1-copy-14

Project Video

Github Code

You can view the GitHub repository here

Circuit Diagram

exp5-diagram

*Within the actual wiring, the button switches are two pieces of copper foil placed on opposite pages, acting as the switches.

*The schematic shows a SparkFun VS1053, though I used a Geeetech VS1053; the available pins are laid out slightly differently. Whereas the SparkFun version in the diagram shows D3 and D4 being used, those pins are available on the Geeetech MP3 shield.

Exhibition Reflection

For the Digital Futures Open Show, my piece was exhibited at the entrance of the Experience Media Lab. I set up the space with some plants and an additional light as props to make the area more comfortable. The space was quieter than the Graduate Gallery, which worked out for the piece and allowed participants to sit down and experience it one at a time without too much noise in the background. Rather than having it seat-pressure activated, I kept the table light on so that participants could see the reading space clearly.

artboard-1-copy-12

My reflection on the experience comes from the participant side: I noticed people were initially hesitant to sit down on the chair, unsure whether they were supposed to touch it, or because the space I created didn’t look like an art piece. I felt the piece was successful because it felt like a natural reading space, and I don’t mind the confusion, as the chair and book were designed in the context of a home rather than an exhibition piece.

There was some static from the speaker. I also noticed that participants may have expected a much faster response from the book when the music changed: as many orchestral pieces have a naturally slow build-up, some participants flipped through the pages to make the music change faster, or couldn’t hear the musical piece.

While the book audio was designed for a single reader who can listen while reading rather than flipping through the pages, in hindsight I’d probably revise the audio to begin in the middle of each musical piece when in an exhibition display, so that participants could understand the concept faster.

Some helpful feedback I got on how to improve the piece and learn more about invisible interfaces was to read Enchanted Objects: Design, Human Desire, and the Internet of Things by David L. Rose. Other feedback suggested exploring a Maxuino, which has more audio capabilities and support for Ableton Live, in case I wanted more control over my audio files and audio quality than the MP3 shield provides.

 

Bibliography

Arduino Library vs1053 for SdFat. (n.d.). Retrieved November 29, 2019, from https://mpflaga.github.io/Arduino_Library-vs1053_for_SdFat/.

Hoffman, M. (2018, April 19). A Note to the Classically Insecure. Retrieved from https://www.nytimes.com/2018/04/18/opinion/classical-music-insecurity.html.

Inkbotdesign. (2019, August 13). Zero UI: The End Of Screen-based Interfaces And What It Means For Businesses. Retrieved from https://inkbotdesign.com/zero-ui/.

Kwon, R. O. (2016, November 24). Absolutely on Music by Haruki Murakami review – in conversation with Seiji Ozawa. Retrieved from https://www.theguardian.com/books/2016/nov/24/absolutely-on-music-haruki-murakami-review-seiji-ozawa. 

Slow Living LDN. (2019, May 25). Embracing Digital Detox and Slow Tech. Retrieved from https://www.slowlivingldn.com/lifestyle/slow-tech-digital-detox/.

Murakami, H., & Ozawa, S. (2017). Absolutely on music conversations with Seiji Ozawa. London: Vintage. 

Shi, Y. (2015, February 7). Book Remote. Retrieved from https://www.youtube.com/watch?v=M1WrbADjfmM&feature=emb_title. 

Tench, B. (2019, February 11). Some Reflections on Slow Technology. Retrieved November 29, 2019, from https://www.becktench.com/blog/2019/2/11/some-reflections-on-slow-technology.