Digi-Cart 3.0

Experiment 4 – Katlin Walsh

Project Description

While interactive media content displayed within galleries has been updated within the last 5-10 years, presentation formats for tradeshows have not. Digi-Cart brings an adaptive presentation style to the classic concept of a tool cart. Robust building materials and pegboard construction allow corporations to adapt their layout and presentation style to reflect their current corporate event.

Digi-Cart features a basic controller layout which can be overlaid with a company’s vinyl poster cutout to create an interactive presentation that can be facilitated by an expert or self-guided. Corporations are encouraged to update their digital materials and create animated graphics to capture audience attention.


Digi-Cart 2.0

Experiment 5: Katlin Walsh

Project Description 

While interactive media content displayed within galleries has been updated within the last 5-10 years, presentation formats for tradeshows have not. Digi-Cart brings an adaptive presentation style to the classic concept of a tool cart. Robust building materials and pegboard construction allow corporations to adapt their layout and presentation style to reflect their current corporate event.

Digi-Cart features a basic controller layout which can be overlaid with a company’s vinyl poster cutout to create an interactive presentation that can be facilitated by an expert or self-guided. Corporations are encouraged to update their digital materials and create animated graphics to capture audience attention.

Digi-Cart 1.0

Experiment 3 – Katlin Walsh

Project Description 

While interactive media content displayed within galleries has been updated within the last 5-10 years, presentation formats for tradeshows have not. Digi-Cart brings an adaptive presentation style to the classic concept of a tool cart. Robust building materials and pegboard construction allow corporations to adapt their layout and presentation style to reflect their current corporate event.

Digi-Cart features a basic controller layout which can be overlaid with a company’s vinyl poster cutout to create an interactive presentation that can be facilitated by an expert or self-guided. Corporations are encouraged to update their digital materials and create animated graphics to capture audience attention.


Experiment 3: Toil & Trouble

1.0 Requirements
The requirement for this project was the inclusion of a tangible or tactile interface for a screen based interaction, with a strong conceptual and aesthetic relationship between the physical and the events happening on the screen.

2.0 Planning & Context

For this project, I explored four input methods of physical interaction that would affect an on-screen display generated using Processing.js:
– A push button highly integrated into the design and structure of the piece
– A tactile switch using copper wire to open and close a circuit
– Magnetism and Hall effect sensors to return a digital signal in the presence of a magnetic field
– Elapsed time within a state

For the screen output in Processing.js, I demonstrated:
– Animation – using sprites & redrawn still images
– Video
– Sound
– Images
– Text
– Drawing Simple Shapes

The Processing output is presented via a link to an iPad on the front of the piece, ideally hiding away the interface with the computer. In initial testing, an iPad Pro was used with SideCar to make the connection to Processing seem magical. Unfortunately, the iPad Pros in the AV Room have been out of commission for the past few weeks, so the tethering had to be done via USB and Duet Display. Wireless Duet Display connectivity was possible but proved unreliable in testing. The computer was hidden under the table.

This project was built during Halloween, and features the process of casting a spell to create a dragon (my Chinese Zodiac and favourite mythical creature). The altar is switched on, a grimoire is opened, and ingredients are added one by one to the cauldron. After the last item is added, the cauldron boils furiously and settles. The spell is then cast over the cauldron using a hex bag, and a dragon emerges. Each physical movement by the caster triggers a switch.

It is meant to be a playful piece, performed by a single person to an audience as entertainment. The process can then be repeated by observers interested in how it works, what the switches are, or in the animations on screen.

This project plays with hidden switches; the aim is to make the magic seem magical and not reveal exactly where the switches are or how they work. There is a bit of mystery behind what triggers the changes, and few actual buttons are pressed or manipulated.

I initially made prototypes using velostat, light sensors, electromagnets, reed switches, and servos, but found the velostat to be unreliable for my purposes, the electromagnets too weak to lift items, the reed switches too fragile when manipulated, the light sensors fickle (even when averaging the ambient light on each start-up), and the servos not powerful enough to achieve the desired effect. (The iterations can be found in the code.)

Planning Details
> Sensors and actuators
(Digital) Push button -> push down button. turns on heat. turns on array of orange lights underneath, and turn on/off levitation
(Digital) Push button -> ingredients reset button
(Analog) Tactile button x 4 -> ingredients complete circuit. tap a few times before removing.
(Digital) Magnetic Switch -> hex bag with magnets to trigger incantation.
(Digital) Lights -> triggered at various stages

> Casting modes
-1 Pre-activation
0 (Activated)
1 (Ingredient 1)
2 (Ingredient 2)
3 (Ingredient 3)
4 (Ingredient 4)
5 (Stew Ready)
6 (Spell)
7 (Dragon)
8 (Off)

Note: Although two Arduinos are pictured, only one was used. The other was kept in place to allow for rapid switching in the event of failure, which happened shortly before the presentation. Ultimately the Mega was used, as the “Uno” (actually a knockoff Arduino) began to fail unexpectedly. I found out afterwards that its not being a genuine Arduino was the cause of the intermittent failures in this experiment, and in previous experiments that relied on it. The discovery came too late for the other experiments, but just in time for this one.

3.0 Implementation


Cauldron with ingredients, on stove and “wood” (actually small strips of acrylic).

3.1 Software & Elements

3.1.1 Libraries & Code Design

Code: https://github.com/jevi-me/CC19-EXP-3

As this was a solo experiment, I took the opportunity to explore the software design capabilities within Arduino. I wrote and adapted various classes for use within the code and tried coding in a more object-oriented way, purely for experimental purposes. I started off with quite a number of borrowed and written classes and libraries (over 10 at one point), but as features were cut, repurposed, or shifted, the number of imported libraries dropped to 3. The experience was very fulfilling, however, and exposed me to a different way of communicating with the device.

Arduino Loop Method:
A) Wait for Activation button to be triggered. If the stove isn’t on, nothing works.
– turns on stove lights, turns on altar lights
– change bool
– set state
– send state
B) Check for a change in pressure on the ingredient bases (& if activated)
– change lights
– change bool
– set state
– send state.
C) If all ingredients are added, stew is ready after boiling for a few seconds
– change lights
– change bool
– set state
– send state
D) Hall switch triggered (& if stewready)
– change lights
– change bool
– set state
– send state
X) Sync the info with Processing periodically
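Sketched in plain C++, the loop’s state logic looks roughly like this (an illustrative reconstruction, not the actual project code; the `CastState` and `Altar` names are my own):

```cpp
#include <cassert>

// Casting states mirroring the planning notes (-1 pre-activation .. 8 off).
enum CastState {
    PRE_ACTIVATION = -1, ACTIVATED = 0,
    INGREDIENT_1, INGREDIENT_2, INGREDIENT_3, INGREDIENT_4,
    STEW_READY, SPELL, DRAGON, OFF
};

struct Altar {
    CastState state = PRE_ACTIVATION;
    int ingredientsAdded = 0;

    // A) Activation button: if the stove isn't on, nothing works.
    void pressActivate() {
        if (state == PRE_ACTIVATION) state = ACTIVATED;
    }
    // B) An ingredient circuit changed (ingredient removed from its base).
    void addIngredient() {
        if (state >= ACTIVATED && state < STEW_READY && ingredientsAdded < 4) {
            ++ingredientsAdded;
            state = static_cast<CastState>(ACTIVATED + ingredientsAdded);
        }
    }
    // C) After the fourth ingredient boils for a few seconds, the stew is ready.
    void boilingFinished() {
        if (state == INGREDIENT_4) state = STEW_READY;
    }
    // D) Hall switch: the hex bag only works once the stew is ready.
    void hexBagPlaced() {
        if (state == STEW_READY) state = SPELL;
    }
    void incantationFinished() {
        if (state == SPELL) state = DRAGON;
    }
};
```

Each transition would also change the lights and send the new state over serial to Processing, which step X syncs periodically.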

On the Processing side, I was eager to explore as many of the available output features as possible. I developed methods that allow for animation using custom-made and premade sprites or still images running at different frame rates with loop/no-loop variables, videos at different locations triggered to start based on input from Arduino, and sound that starts, loops, changes volume, and stops based on the state of the system. I also used text and the drawing functions of Processing. As with Arduino, the aim was to explore as much of what Processing.js had to offer.

In both Arduino and Processing, I explored fail-safe methods that allow the intertwined systems to fail gracefully and return helpful feedback to the user. Though not a requirement of this project, it was again an exploration of the capabilities. One example checks the state of the serial port; another checks the status of the USB tether between Arduino and Processing. In both cases, the system fails gracefully and provides helpful hints on screen and/or in the console to guide the user toward a fix.

3.1.2 Sound Files
Various sounds were used throughout:
– Background music played before the grimoire is opened
– Cauldron boiling sound — there are two types depending on the reaction in the cauldron taking place, and they increase with intensity as the ingredients are added
– “Double Double Toil and Trouble” from Shakespeare is read during the spell casting
– Background music played when the dragon is successfully created

3.1.3 Videos
Four videos are used in this project. The videos were placed at different locations on the screen when triggered, to act as design flourishes. They were sourced from an Instagram video pack, then scaled, cleaned up, and formatted appropriately for use within Processing. The videos serve as the lowest layer of the elements on the screen.

3.1.4 Animation – Sprites
The bunny animation on the lower right corner reacts to the state that the cauldron is in — each new action performed by the caster, or in some cases a timed reaction, changes the animation of the bunny. It can sleep, run, sit, rise up, or lower itself to different levels in response to the caster. The animation uses carefully named and numbered sprites and a customised method to cycle through the sprites, drawing them to the screen and looping around to the beginning of the list if the animation cycles within the state. Since the framerate that Processing draws at is fixed, to control the framerate of the animations and save on the number of sprites used, I wrote a frame skipper that allows a number of Processing frames to be skipped without redrawing the animation.
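The frame-skipper idea can be sketched in a few lines of plain C++ (the `FrameSkipper` name and fields are illustrative; the actual implementation is in the linked repository):

```cpp
#include <cassert>

// Advances a sprite index only every `skip` draw calls, so an animation
// can run slower than Processing's fixed draw framerate and reuse sprites.
struct FrameSkipper {
    int skip;        // number of draw frames per sprite frame
    int spriteCount; // total sprites in the cycle
    bool loop;       // wrap to the start of the list when finished?
    int counter = 0;
    int sprite = 0;

    FrameSkipper(int skip, int spriteCount, bool loop = true)
        : skip(skip), spriteCount(spriteCount), loop(loop) {}

    // Call once per draw(); returns the sprite index to render this frame.
    int next() {
        if (++counter >= skip) {
            counter = 0;
            if (sprite + 1 < spriteCount) ++sprite;
            else if (loop) sprite = 0; // loop around to the beginning
        }
        return sprite;
    }
};
```

With `skip = 3`, each sprite is held for three Processing frames before the next one is drawn.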

The cauldron sprite images were made by taking screenshots of a 3D image of three similarly designed cauldrons, as I rotated them around at different angles. As with the bunny animation, they were carefully named and numbered to allow the cauldron to spin, and different versions of the cauldron were drawn based on the stage of the spell.

3.1.5 Animation – Still Images
The grimoire can be open or closed, and is animated to mimic levitation using simple oscillating redrawing on the y-axis.

The dragon utilises the same “oscillating on the y-axis” to mimic flight, and only appears if all the previous steps have been completed. This information is received from Arduino which controls the logic.
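The oscillating redraw can be expressed as a single function (a sketch; `levitationOffsetY` and its parameters are illustrative names, with `frame` standing in for Processing’s frame count):

```cpp
#include <cassert>
#include <cmath>

// Levitation/flight effect: redraw the image at a y-offset that oscillates
// over time. amplitude is in pixels, period in draw frames.
float levitationOffsetY(int frame, float amplitude, float period) {
    const float TWO_PI_F = 6.2831853f;
    return amplitude * std::sin(TWO_PI_F * frame / period);
}
```

Adding this offset to the grimoire’s or dragon’s base y-position each frame produces the gentle bobbing motion.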

3.1.6 Images
Representations of the physical images appear on screen when they are added to the cauldron, and are listed all together at the beginning. The position, and visibility of the images are tied to the physical interaction taking place.

3.1.7 Text and Drawing
I used text in various places to indicate the state of the spell and explain what is happening. There is also a small indicator on the bottom left that shows which stage the spell is in.

3.2 Hardware
Copper tape and alligator clips make up the main tactile interaction. The circuit is completed when the ingredient is in its place in the storage basin of the piece. On its removal, the switch is “released”, as the circuit is opened. This open circuit triggers actions processed by Arduino and sent to Processing.

Another switch is the detection of a magnetic field. A circle of “diamonds” marks the spot where a hex bag is to be placed. The hex bag contains 4 neodymium magnets which create a magnetic field that is registered by a Hall effect sensor placed just under the wood.

The other types of switches are ordinary push buttons, soldered and positioned flush with the surface of the piece so they blend into the design, and time-based “switches” which trigger after a predetermined time has elapsed, e.g. during boiling and during incantations.
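A time-based “switch” of this kind can be sketched as follows (an illustrative reconstruction; on Arduino, `now` would come from `millis()`, passed in here so the logic can be tested off-device):

```cpp
#include <cassert>

// Fires exactly once after a set delay from when it is armed, e.g. the
// "boiling for a few seconds" step before the stew is ready.
struct TimedSwitch {
    unsigned long delayMs;
    unsigned long armedAt = 0;
    bool armed = false;
    bool fired = false;

    explicit TimedSwitch(unsigned long delayMs) : delayMs(delayMs) {}

    void arm(unsigned long now) { armed = true; fired = false; armedAt = now; }

    // Call every loop(); returns true exactly once, when the delay elapses.
    bool update(unsigned long now) {
        if (armed && !fired && now - armedAt >= delayMs) {
            fired = true;
            return true;
        }
        return false;
    }
};
```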

Other things of note in the build are the ingredients: some real and natural, like tulip bulbs and spices, and others symbolic, like skeletons and eyes. Ingredient list: eyes, pumpkin, magic, winged bats, tulip bulb, mushrooms, skeleton of a snake (the word ‘dragon’ comes from ‘draco’ and ‘drakon’, meaning ‘large serpent’ or ‘sea serpent’), orb, candle. These were purchased at Dollarama during the Halloween season.

Lights were used as a reference to what was happening on screen, mimicking a magical altar and a flame below the stove under the cauldron.

The altar was laser cut and featured steampunk gears and a skeleton cup which held the “dried blood of lizard”. The stove, also laser cut, used wall screws and brackets to add height and had acrylic strips to mimic wood in the fire pit.

4.0 Reflections

The piece is meant to be a performance using sleight of hand and hidden switches, but it also encourages others to try it and make a dragon as well. This then requires revealing the switches, since it isn’t very obvious how things work. Another iteration could be a more DIY version with better labelling, which would make the system something for others to use rather than the performance piece this was created as. In earlier testing, I received the comment that it could be cool if different ingredients and patterns yielded different creatures. That would require more complex code, but is possible as yet another iteration of this project.

5.0 Additional Photos

6.0 Literature References (MLA Cited)
Shakespeare, William, and A. R. Braunmuller. Macbeth. Cambridge University Press, 1997.
Double, Double Toil and Trouble – Shakespeare, http://www.potw.org/archive/potw283.html.
“DRAKON AITHIOPIKOS.” ETHIOPIAN DRAGON (Drakon Aithiopikos) – Giant Serpent of Greek & Roman Legend, https://www.theoi.com/Thaumasios/DrakonesAithiopikoi.html.

6.1 Additional Reference Links (Linked)
Envato Elements (elements.envato.com) – for some of the sounds, videos, and graphics.
Processing.js Guide (processingjs.org) – for part of animation code
Circuito.io (circuit.io) – for libraries. Licenses included in the source code.



Animated Robotics – Interactive Installation

By Jignesh Gharat

Project Description

An animated robotic motion with a life-like spectrum, bringing emotive behaviour into the physical dimension: an object whose motion has living qualities such as sentiments, emotions, and awareness, revealing a complex inner state, expressions, and behavioural patterns.

He is excited to peek outside the box and explore the surroundings, but as soon as he sees a person nearby he panics and hides back in the box, as if he is shy or feels unsafe. He doesn’t like attention but enjoys staring at others.

Design Process & Aesthetics

It’s an attempt to create an artificial personality. I wanted to work on an installation that encourages participation rather than spectatorship, so I wanted a physical installation that lets people experience familiar objects and interactions in refreshingly humorous ways.

The exploration started with objects and living organisms that could be used as part of the installation, where I could implement curious interactions and a moment of surprise and unpredictability. Gophers, crabs, snails, and robots were a few of the candidates. I finally settled on the periscope as an object that could have behaviour and emotions, and a perfect object to play hide and seek with the visitors.

What is a Periscope?
An apparatus consisting of a tube attached to a set of mirrors or prisms, by which an observer (typically in a submerged submarine or behind a high obstacle) can see things that are otherwise out of sight. I started thinking of ideas, object behaviours, and the setup of the installation to come up with engaging and playful interactions.

Step 1 – Designing the bot

The model was developed in Rhino, keeping in mind the moving parts: the rotating head and the spine, which is lifted from the bottom.



Step 2 – The mechanism.

Testing the stepper motor (28BYJ-48 5V 4-phase DC gear stepper motor + ULN2003 driver board – head) and servos (SG90 micro servo – head, and 3001 HB analog servo – arm). The stepper motor’s RPM was too low to give the bot the desired quality and reflex, so I went with the servo motor, which had better torque.


The 3D-printed body was too heavy for the analog servo motor to lift, so I finally decided to model the body in paper to reduce the weight and load on the motor. Surface development was done in AutoCAD, producing 3 to 4 different options based on the mechanism and aesthetics, of which I decided to work on the design shown in the image below. The arm was laser cut in acrylic, and two options were made to reduce the friction between the paper and the lifting arm’s contact surface.



How does it Work?

A micro motor controls the oscillation of the head. A distance sensor at the front of the wooden base controls motor 2. When a visitor comes within the distance sensor’s set threshold, motor 2 pulls the lever arm down and motor 1 (the head) stops rotating.
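That hide/peek logic can be sketched in plain C++ (the `PeekController` name and the threshold values are illustrative; using two thresholds adds hysteresis so the bot doesn’t jitter when a visitor hovers at the edge of range):

```cpp
#include <cassert>

// When a visitor comes within range, the head motor stops and the lever
// arm pulls the body down; when they back away, the bot peeks out again.
struct PeekController {
    float hideBelowCm; // visitor closer than this -> hide
    float peekAboveCm; // visitor farther than this -> peek back out
    bool hidden = false;

    PeekController(float hideBelowCm, float peekAboveCm)
        : hideBelowCm(hideBelowCm), peekAboveCm(peekAboveCm) {}

    // Feed each distance reading; returns true while the bot should hide.
    bool update(float distanceCm) {
        if (!hidden && distanceCm < hideBelowCm) hidden = true;
        else if (hidden && distanceCm > peekAboveCm) hidden = false;
        return hidden;
    }
};
```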


The installation is minimal and simple, with just two materials, wood and paper, which give a clean finish to the piece. The head is like a focus, with a black sheet and white border, as if the sensors controlling the oscillation were installed inside the head. In my further explorations and practice I will make sure the sensors are not directly visible to visitors, as this sometimes leads to visitors interacting with the sensor rather than actually experiencing the installation.



Code :  https://github.com/jigneshgharat/Ocular


Project Context

Opto-Isolator (2007, Golan Levin with Greg Baltus) is an interactive eye robot that inverts the condition of spectatorship by exploring the questions: “What if artworks could know how we were looking at them? And, given this knowledge, how might they respond to us?” The sculpture presents a solitary mechatronic blinking eye, at human scale, which responds to the gaze of visitors with a variety of psychosocial eye-contact behaviors that are at once familiar and unnerving.

The Illusion of Life in Robotic Characters – Principles of Animation Applied to Robot Animation: robot animation is a new form of animation with a significant impact on human-robot interaction. The video relates to and extends a paper the authors published at the ACM/IEEE Human-Robot Interaction Conference.

Exhibition Reflection

Observations were made and recorded at the Open Show at the Graduate Gallery and the Centre for Emerging Artists & Designers, which provided me with good insights and learnings for changes to be made in developing the concept further. It was fun to see people interacting and playing with a nonliving object as if it had emotions and feelings, and to see the way Ocular reacted to viewers’ actions. The interaction was meant to happen only from the front, but because the wall card was placed at the side of the installation on a high plinth, people tended to read it and start interacting from the side, and did not really experience the installation as expected; most of them, though, figured out how it worked as they read the wall card.
Some interesting comments from the visitors:

  • The installation is making everyone smile.
  • I have 3 cats and all 3 cats will go crazy if they see this.
  • One visitor said all the guys react the same way Ocular did, so she doesn’t want to go near him.
  • What are the next steps?
  • What was the inspiration?
  • How does this work?
  • Why does he ignore me?

A visitor (Steventon, Mike) referred to Norman White’s interactive robotic project, The Helpless Robot.

Observations and visitors reactions



Interactive Canvas – Bring It To Life



Project Overview


Today there is so much competition, and advertisers are continuously looking for new interactive and ambient mediums for advertising. This project explores a more traditional yet interactive approach, using an ordinary canvas to bring three abstract icons to life upon touch, with the idea that a healing hand could protect wildlife such as a butterfly and a tree. The use of touch isn’t limited to flat, shiny, and expensive digital screens; it can be brought to any tactile surface, as in this project. The use of conductive ink and animations allows visitors to experience and feel the dotted paper anywhere and watch flat objects change into animation. Moreover, such an advertising medium is economical and could be placed in any vicinity.

This object is part of my exploration of conductive materials and how they could be adapted to advertising mediums that tell interactive stories.





My idea behind this project was to explore ways of bringing art pieces to life in interactive and experiential ways. Although there are many ways to do this today, such as VR and AR, I was keen to discover something more traditional and familiar. Thus, for this project, I explored conductive ink and how capacitive sensors could allow a tactile surface to be interactive.

As a result, I came across a variety of work within this field. This included work from London’s Retail Design Expo [1], which used conductive ink to draw flat icons for storytelling. Another interesting concept, interactive wallpaper, was introduced by High-Low Tech [4]; here they added light and communication within the canvas. Similarly, the Dublin Interactive Wall [2] explored conductive ink with advanced animation. In addition, conductive ink has also been used in combination with holograms and AR, such as the work from Raonsquare [3], an agency in Seoul.


Design Concept

To bring most commonly known symbols and icons to life, I chose a symbol of a hand, a butterfly and a tree. The story is about how energy from a hand can trigger life into the butterfly and tree. This is inspired from the documentary series of BBC Earth where intervention of humans is impacting wild life.






To trigger an animation for each of the three objects, I made a straight path animation using three colors for each animation.

capture3 capture4 capture-2


Conductive Ink

I used a 12 ml pot of conductive ink to trace my sketch onto the canvas and let it dry for an hour, as it’s a better conductor when fully dry.







I downloaded the capacitive sensor library for Arduino to begin testing the code. I started by putting some conductive ink on a sample paper to check the values in the console and gauge which values are produced upon touch.

I used three 1 MΩ resistors placed between the receive and send pin (pin 4), which allowed the capacitive sensing to work. I learned that when the conductive ink was touched, values jumped above 1000.




Using serial monitor data from Arduino, I then wrote if/else statements to run a movie for each sensor when its value rises above 1000. A visitor presses and releases any of the inked areas to play its animation.
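That press-and-release behaviour can be sketched as follows (an illustrative reconstruction assuming the ~1000 threshold observed in the serial monitor; `TouchTrigger` is a hypothetical name, with one instance per inked sensor):

```cpp
#include <cassert>

// Press-and-release trigger for one conductive-ink sensor: the animation
// plays when the reading rises above the threshold and the visitor lets go.
struct TouchTrigger {
    long threshold;
    bool touching = false;

    explicit TouchTrigger(long threshold = 1000) : threshold(threshold) {}

    // Feed raw capacitive readings; returns true once per press-release.
    bool update(long reading) {
        if (!touching && reading > threshold) {
            touching = true;  // finger down
        } else if (touching && reading <= threshold) {
            touching = false; // finger lifted: fire the animation
            return true;
        }
        return false;
    }
};
```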



Production and Bringing Everything Together

On the flip side of the canvas, I attached all the wires, which left the electronics visible.








The USB extender supported the idea of hiding the wires. However, it would have helped further to use an HDMI extender and an overhead projector, hiding the projector and moving the laptop into a different room.



The canvas received appreciation, as art is usually not to be touched; the interactivity of accessing art through a direct tap on the canvas brought new ideas to life. A few artists who interacted with the canvas seemed delighted by the idea of using conductive ink on their own canvases.

Moreover, during the critique various thinking points were suggested which helped strengthen the project. These included making the narrative stronger, placing the canvas at a greater height, hiding most of the wires, and exploring rear projection.


  1. 18”x24” Canvas
  2. Conductive Ink
  3. Arduino Micro
  4. Pico Projector
  5. Jumper Wires
  6. 3x Resistor 1.0M Ohm


Large Scale Adaptation

This technique could be used at conferences as a backdrop to engage visitors by telling product stories, in interactive exhibitions, and in many other settings.




  1. Ayres, C. (2015). How Dalziel and Pow Realized This Awesome Interactive Touch Wall. Retrieved from URL:
  2. Studio, L. (March 2019). Dublin Interactive Mural. Retrieved from URL:
  3. (January 2019). 365 Safe Town (Korea Youth Safety Experience Center). Retrieved from URL: http://raonsquare.com/
  4. High-Low Tech Group. Living Wall. Retrieved from URL: http://highlowtech.org/?p=27






Invisible ‘Kelk’

Arshia Sobhan Sarbandi


Invisible ‘Kelk’* is an interactive installation inspired by the transformation of Persian script, from calligraphy to digital typefaces that almost all the current publications are based on. This project is a visual extension of my previous project, an exploration to design an interface to interact with Persian calligraphy.

*’Kelk’ is the Persian word for calligraphy reed.

From Calligraphy to Typography

Reza Abedini is one of the most famous Iranian graphic designers, whose main source of inspiration is visual arts and Persian calligraphy. What is original about his work is that he was the first to recognize the creative potential of Persian calligraphy and to transform it into contemporary graphic typography.[1][4] In an interview, Abedini says: “After many years, because we have repeatedly seen and got used to typography in newspapers, it has become the reference of readability, and not calligraphy anymore. And now, everything written in that form is considered readable, which is one hundred percent wrong in my opinion.”[2]


Left: Vaqaye-Ettefaghiye, the second Iranian newspaper published in 1851, written in Nastaliq script (image: Wikipedia)
Right: Hamshahri, one of the current newspapers in Iran, using glyph-based typography (image: hamshahrionline.ir)


Abedini argues that we have lost the potentials of Persian calligraphy as a result of adapting to a technology that was not created for Persian script – movable type.[3] ‌Below, you can see Abedini’s original words written in calligraphy (Nastaliq, the prominent style in Persian calligraphy) and typography using one of the most common typefaces currently used in books and newspapers:

Despite the obvious visual difference between these two, the important fact is that both of these writings are completely readable for an Iranian person.

Designing the Visuals

Similar to my previous project, there are two different things happening when pushing or pulling the hanging piece of fabric – the canvas – from its state of balance. All the words from the short paragraph of what Abedini says about calligraphy and typography are scattered on the canvas in a random pattern that refreshes every minute. You see the same words on both sides, in calligraphy and typography.
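The per-minute random scatter can be sketched like this (an illustrative reconstruction in C++, not the project code; seeding the generator with the minute index is one way to keep the layout stable within a minute and refresh it when the minute changes):

```cpp
#include <cassert>
#include <random>
#include <vector>

struct Point { float x, y; };

// Scatter `wordCount` words at random positions on a w-by-h canvas; the
// same minuteIndex always reproduces the same layout.
std::vector<Point> scatterWords(int wordCount, float w, float h,
                                unsigned minuteIndex) {
    std::mt19937 rng(minuteIndex); // reseed once per minute
    std::uniform_real_distribution<float> dx(0.0f, w), dy(0.0f, h);
    std::vector<Point> pts;
    for (int i = 0; i < wordCount; ++i) {
        float x = dx(rng);
        float y = dy(rng);
        pts.push_back({x, y});
    }
    return pts;
}
```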

The composition of calligraphic words is inspired by an important style in Nastaliq script called Siah-Mashgh. This style is usually defined by tilted words written on multiple baselines all over the canvas.


Calligraphy pieces by Mirza Gholamreza Isfahani in Siah-Mashgh style


What I found really interesting is that although the words are randomly positioned in my project, and the random pattern changes every minute, the result retains its visual identity in terms of calligraphic style. The words are in harmony in all the different random positions, which, in my opinion, is a result of the great visual potential of Persian calligraphy.


Sample screenshots of how calligraphy words appear in different random patterns



Technical Details

Almost all the technical details are the same as in the previous project, except that I used a slightly larger fabric in the exhibition and a cleaner physical setup. A larger piece of hanging fabric results in slower movements of the fabric at rest and during interaction, which I found more suitable for the overall experience I wanted.

Exhibition Reflection

Two major points emerged from observing and talking with the people interacting with the project. The first was that when confronting the hanging fabric, many people hesitated to physically interact with it; even after being told it was OK to touch the fabric, they touched it very gently unless assured they were supposed to ‘push’ it. People were also much less likely to pull the fabric than to push it. However, after getting comfortable with the interaction, they usually spent several minutes with the piece and found it pleasing.

One possible solution (to resolve the hesitation to interact with the fabric) could be installing it where people have to push the fabric to pass through it. Another could be some airflow in the environment that causes the fabric to move slightly in both directions from its resting state, providing a clue that moving the fabric in the two directions results in different visual feedback.

Code Repository



1- https://www.mashallahnews.com/reza-abedini-persian-calligraphy/



Antimaterial

By Jessie, Liam and Masha

Project Description: 

Antimaterial is an interactive installation that explores abiogenesis and kinesis to articulate a relationship between the human and nonhuman world. Just as we are able to affect our surroundings, the ability of magnetic materials to move matter indicated the presence of life to pre-Socratic philosophers. The rocks beneath our feet were not only essential catalysts for life; microbial life also helped give birth to the minerals we know and depend on today. In Antimaterial, these concepts are explored as primitive life emerges within the topography of the material in response to human proximity, demonstrating a connection between animate and inanimate matter.

Project Context:

Drawn to the freedom and possibility of exploration, from the start of the project we all agreed to use creative materials to make a tangible, experimental interactive art piece. With some team members’ previous experience using water in their personal projects, we thought of using water, as it’s a very versatile medium due to its free-flowing form. Using tools such as a solenoid to tap on water to create ripples, or speakers in water to visualize sound waves, were some of our many initial ideas. However, with water being a difficult medium to control the form of, we were worried the final piece would end up looking like a jacuzzi, which led us to fine-tune our ideas further.

After several sessions of brainstorming, we came up with the idea of mixing water with a magnetic material to magnetize it, which would also give us more control over its form. During our research, we found quite a few interesting projects that guided us in new directions of exploration. Sachiko Kodama is a Japanese artist who has dedicated the majority of her art practice to installations using ferrofluid and kinetic sculpture. By combining ferrofluid, which can be a messy medium at times, with kinetic sculpture, which is inherently neatly structured, her works create order and harmony amongst chaos and come off as intriguing as well as energetic.


Sachiko Kodama Studio Website

Inspired by this harmony generated by the juxtaposition of structure and chaos, we wished to demonstrate it with ferrofluid in our own practice for this project. We purchased a bag of iron(III) oxide and started experimenting, mixing it with different types of solvents including motor oil and washing-up liquid, while researching how the material has been incorporated into other artists' work.



One use of iron(III) oxide that drew our attention was mixing it with oil and then encasing the oily mixture in a container filled with water. For example, the project Ferrolic by Zelf Koelman exploits the immiscibility of oil and water to create versatile movement of ferrofluid within water, using magnets behind the display to draw different patterns in different modes of operation.


We wanted the magnetic material to generate different patterns when visitors interacted with it. The mechanism behind the project SnOil inspired us to place an array of electromagnets behind the magnetic material, with patterns generated by activating electromagnets at particular positions during visitors' interactions.


However, when we talked to Kate and Nick during the proposal meeting, they shared concerns about whether we could manage an array of more than ten electromagnets within a two-week production period, as well as about the cleanliness of the piece on display, since ferrofluid tends to get messy. After weighing these essential suggestions, we decided to abandon the idea of making ferrofluid and to use only iron(III) oxide powder, which produces a fur-like texture when magnetized and still creates a visually intriguing look. For example, the following project uses furry pom-poms as part of the interaction to produce a very engaging experience.


We were also inspired by the mechanism behind this pom-pom piece: it uses servos to turn circular movement into linear movement, which we later used in our project to simplify the movement of the magnets.



This project has undergone a series of experiments, evaluations and adaptations. Having decided to explore creative materials, we chose magnets to experiment with. We first made electromagnets by wrapping conductive wire around a coil, hoping to create patterns in the magnetic material by activating an array of electromagnets; the patterns would change in response to the electromagnets switching on and off.


However, the electromagnets' magnetism was not strong enough, and they generated a lot of heat when activated for longer periods. We worried they would cause safety issues at the open show because of overheating, and that the visual effect might not be desirable given their weak magnetism. Eventually we decided to use permanent magnets instead. This change forced us to rethink the magnets' behaviour, since they cannot be turned on and off and are always magnetic.

Therefore, rather than activating and deactivating magnets, we decided to change their position through movement, actuated by servo motors driving a slider.

It turned out the micro servo motors we had were too weak: one slider weighs around 6 kg, while the servo could only move loads up to 1.5 kg. We tried several solutions. First, we swapped the micro servo for a more powerful FS5106R servo rated for loads up to 6 kg. Second, we mounted the slider vertically instead of horizontally to use gravity to our advantage, attaching the slider to the servo with a piece of fishing wire.




However, the slider was still too heavy, especially with all of its weight now resting on the servo motor. The fishing wire also became less durable after the servo had run for a longer period. Finally we landed on the idea of laying the slider horizontally and using gears to achieve linear movement of the magnets through the rotation of the servos.



Even so, the mechanism still seemed inadequate: the full-rotation servo did not have enough torque to move the gear plus the slider. We again had to change to an even more powerful motor, which led us to stepper motors. We also favoured stepper motors over servos because they are simpler to code: they count discrete steps rather than relying on timing.
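To illustrate why step counting simplifies the code, here is a minimal sketch (not our actual firmware) of the gear-driven conversion: given a typical 200-step-per-revolution stepper and a hypothetical pinion radius, the desired linear travel of the slider maps directly to a whole number of steps.

```cpp
#include <cmath>

// Convert a desired linear slider travel into stepper motor steps,
// assuming a pinion gear driving the slider.
// stepsPerRev: full steps per motor revolution (200 is typical for 1.8° steppers)
// gearRadiusMm: effective pinion radius in millimetres (illustrative value)
long travelToSteps(double travelMm, int stepsPerRev, double gearRadiusMm) {
    const double kPi = 3.14159265358979323846;
    const double mmPerRev = 2.0 * kPi * gearRadiusMm; // linear travel per full rotation
    const double mmPerStep = mmPerRev / stepsPerRev;  // linear travel per step
    return std::lround(travelMm / mmPerStep);
}
```

With a 10 mm pinion, one full revolution (about 62.8 mm of travel) comes out to exactly 200 steps; timing never enters the calculation, which is what made the stepper easier to program than a timed servo sweep.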


With all the underlying mechanisms working, it was time to put everything together and begin production. We prototyped the piece in 3D modelling software to make sure it would all come together in reality.


We started building the box with the help and guidance of a friend who is a professional builder.


Finally, this is what our piece looks like.



Code and Circuit:




During the exhibition, the piece succeeded in creating a sense of mystery, making many visitors to the open show wonder what it was and how we did it. During the critique session, Nick suggested placing gloves beside the piece so people would know they could touch it, maximizing the level of interaction. However, after discussing it within our group, we agreed that gloves might undermine the eeriness the piece gave off and wouldn't fit its underlying tone. Gloves would also create a barrier between people and the texture of the powder, which is an essential part of the interaction: finding out what this alien material is through every means of exploration.

To address this, we kept paper towels with us: after people touched the piece, we handed them towels to wipe the powder off, but unless they asked, we said nothing about touching. As expected, many people were tempted to touch the magnetic powder on display, and we allowed them to if they asked. Interestingly, the seemingly dirty powder didn't hinder people's curiosity, and many dug their hands into it, which validated the sense of mystery the piece set out to achieve. This project took a great deal of time at every stage, but it was truly rewarding to see it come together after many trials and errors.



AccelStepper Class Reference. (n.d.). Retrieved December 8, 2019, from https://www.airspayce.com/mikem/arduino/AccelStepper/classAccelStepper.html.

Brownlee, J. (2018, July 10). MIT Invents A Shapeshifting Display You Can Reach Through And Touch. Retrieved December 9, 2019, from https://www.fastcompany.com/3021522/mit-invents-a-shapeshifting-display-you-can-reach-through-and-touch.

Ferrolic by Zelf Koelman. (n.d.). Retrieved December 9, 2019, from http://www.ferrolic.com/.

Kodama, S. Protrude, Flow (2018). Retrieved December 8, 2019, from http://sachikokodama.com/works/.

PomPom Mirror by Daniel Rozin. (n.d.). Retrieved December 9, 2019, from https://bitforms.art/archives/rozin-2015/pompom-mirror.

SnOil – A Physical 3D Display Based on Ferrofluid. (2009, September 20). Retrieved December 9, 2019, from https://www.youtube.com/watch?v=dXuWGthXKL8.

State Machine. (n.d.). Retrieved December 8, 2019, from http://www.thebox.myzen.co.uk/Tutorial/State_Machine.html.




Leave a window open for me


Project title: Leave a window open for me

By: Neo NuoChen 

Project Description

This piece is meant to create a space which expands infinitely within a box. It is a reflection of myself and my personal experience of fighting insomnia and depression. It welcomes the audience to take a peek inside the story and share the feelings through the “window”.






Edited video of the interaction: https://vimeo.com/378123696



Circuit diagram: https://www.circuito.io/app?components=512,10190,11021,763365

Please note that the servo input should go to pin 9 and the LED panel input to pin 12. No breadboard was used in my project, and I used a 16×16 LED matrix panel.

Project Context

I wanted to create this personal space within a box for the public because it is my story and I am willing to share it with everyone. It is designed to be viewed through the “window”, one person at a time, so that the audience can have a more intimate and immersive experience.

This is a story about my going through insomnia and depression for several months while I was in New York. When I couldn't sleep, the window was one of the things I stared at most, and I kept thinking about all the things I shouldn't have done and could have done. The stress was so real that it constantly haunted me; the slightest light or noise became utterly unbearable. I could feel the bed spinning like a disc, or was it me spinning in my head? Day or night didn't matter; they were the same. The thoughts kept me awake, and the music that played was drowning me.

But I don't want to drag the audience into a bad mood; that was never my intention. Ever since I stepped out of that situation, I have been proud of myself for doing so, and I want people to know that it is not undefeatable and that they are not alone.

The initial idea of building an expanding space within a box was to see how strong the contrast between outside and inside could be; the element of surprise is a twist I always want to incorporate into my designs. I have been to multiple Yayoi Kusama exhibitions, and each time I am amazed by how mirrors form those rooms. I was fortunate to see Samara Golden's work The Meat Grinder's Iron Clothes at the Whitney Museum of American Art in New York, a multilevel installation built with mirrors to expand the space around where the audience stands. Looking up and down while standing in between, the feeling was oddly satisfying; it left me lost in thoughts of an existential crisis.



I was also inspired by Louise Bourgeois' Untitled (Hologram Suite), 1998–2014. I like the way she picked different items in different images to represent different stories when creating this series. That helped me make my decision when I was debating whether to build a whole room full of furniture or go with one specific item, and I agree that fewer items express the feeling of loneliness more strongly.


“The holographic image is created by laser beams that record the light field reflected from an object, burning it onto a plate of glass. The image is scaled at a one-to-one correspondence with the original sculptural material that was created by Bourgeois. These elements reoccur in many of Bourgeois installations and are related to her interest in physical and emotional isolation and sexuality.”

For the servo, I first tried the one in our tool kit and followed a tutorial to modify it for continuous 360° rotation, but the result was not what I expected: it simply became a motor that no longer took input, and the speed was too fast. I re-soldered everything back, and the servo could receive code again, but the rotation was still not ideal and produced a lot of noise and shaking. In the end I borrowed a servo that could rotate 360° without manual alteration from Li's team, which helped me survive this.

The choice of LED colors aimed to create harmony between the panel and the spinning bed: both seem to form a peaceful, harmless space, yet it also emphasizes that this is a space of absolutely no sleep.

Exhibition Reflection

The open show was a blast, and I was glad to see everyone's work shine. It was interesting to receive feedback from the audience, especially when I asked them to take a look inside first without knowing the story behind it. Most people described it as “beautiful and amazing”, which was great because, as I said, dragging people into a bad mood was never my intention, and people are free to feel whatever they feel. After reading or hearing the background, though, they would say, “yes, I can totally see that”. It made me wonder: did they actually see that because they had felt the same thing in the past, or did they match the visual to the story? Either way, empathy was created and connected the audience with the piece, and I was more than happy about that.

Towards the end, a girl came to look at my work and told me she knew exactly what I had been through because she had gone through the same thing before. We sat on the staircase and talked about the experience and how good it felt to talk about it. This made me realize that more people than I expected are facing similar issues. They should know that they are not alone; we are in the same boat, and we will always be each other's support.

I picked a location that was fairly isolated and unnoticeable, which somehow suited my concept, but not many people went into the darker room where my piece sat along with three other works. It would have been nice if more of the audience had known about the room and paid us a visit. :)


Samara Golden. The Meat Grinder’s Iron Clothes, 2017. Whitney Biennial. The Whitney Museum of American Art, New York https://samaragolden.com/section/450598-THE-MEAT-GRINDER-S-IRON-CLOTHES-2017-WHITNEY-BIENNIAL-WHITNEY-MUSUEM-OF-AMERICAN-ART.html

Heather James Fine Art. Artsy.net. https://www.artsy.net/artwork/louise-bourgeois-untitled-hologram-suite

Yayoi Kusama, Infinity Mirrored Room – The Souls of Millions of Light Years Away, 2013. https://ago.ca/exhibitions/kusama



Nadine Valcin


(Un)seen is a video installation about presence/absence that relies on proxemics to trigger three different processed video loops. It creates a ghostly presence projected on a screen whose image recedes as the viewer gets closer, even as it constantly tries to engage the viewer through its voice.

As visitors enter the room, they see a barely distinguishable extreme closeup of the ghost's eyes. As they get closer to the screen, the ghost remains unattainable, visible through progressively wider shots. The last loop plays when visitors are in close proximity to the screen. At that distance, the array of squares and circles layered over the video to give it texture becomes very distinct, making the image more abstract. The rhythm of the images also changes, as short glimpses of the ghost are seen through progressively longer black sequences.

The video image is treated live by a custom filter created in Processing to give it a dreamy and painterly look.


In terms of content, my recent work and upcoming thesis project deal with memory, erasure and haunting. I am interested in how unacknowledged ghosts from the past haunt the present. As Avery Gordon remarks:

“Haunting is a constituent element of modern social life. It is neither premodern superstition nor individual psychosis; it is a generalizable social phenomenon of great import. To study social life one must confront the ghostly aspects of it. This confrontation requires (or produces) a fundamental change in the way we know and make knowledge, in our mode of production.” (2008, p. 7)

This project was a way for me to investigate, through image and sound, how a ghostly presence could be evoked. I also wanted to explore how technology could help me do so in an interactive form that differed from the linear media production I normally engage in. The video material for (Un)seen comes from an installation piece entitled Emergence that I produced in 2017; I thought the images were strong and minimalist and provided a good canvas for experimentation.

(Un)seen is heavily inspired by the work of Processing co-creator Casey Reas and his exploration of generative art.  I have been interested in his work as it explores the way in which computing can create new images and manipulate existing ones in ways that are not possible in the analog realm. Over the years, Reas has used various custom-built software to manipulate video and photographic images.


Transference, Source: Casey Reas (http://reas.com/transference/)

Transference (2018) is a video that uses frames from Ingmar Bergman's black-and-white film Persona (1966). It deliberately distorts the faces represented, rendering them unidentifiable and reflecting on contemporary questions around identity and digital media.


Samarra, Source: Casey Reas (http://reas.com/samarra/)

He applies a similar image treatment in the music video Samarra (2016) and in Even the Greatest Stars Discover Themselves in the Looking Glass, An Allegory of the Cave for Three People (2014), an experience in which three audience members interact, mediated through cameras and projected images. In that piece, Reas once again looks at identity through a technological lens against the backdrop of surveillance.


Even the Greatest Stars Discover Themselves in the Looking Glass, An Allegory of the Cave for Three People, Source: Casey Reas (http://reas.com/cave/)

KNBC, Source: Casey Reas (http://reas.com/knbc/)

In KNBC (2015), Reas pushes his experimentation further, manipulating images to a level of abstraction where they become unrecognizable in the finished product, breaking their visual link to the original source material. The recorded television footage and accompanying sound are processed into a colourful, pixelated generative collage.


Surface X, Source: Arduino Project Hub (https://create.arduino.cc/projecthub/Picaroon/surface-x-811e8c)

From the group project Forget Me Not (assignment 2), I retained the idea of working with an Arduino Uno and a distance sensor, this time to control the video on the screen. I wanted to create a meaningful interaction between the image and the distance separating it from visitors.

The interactive art installation Surface X by Picaroon, cited in assignment 2, remains relevant to this project because of its use of proxemics to trigger the closing of the umbrellas, revealing the underlying metal structure and mechanism when visitors approach. The creators saw the activation of the umbrellas as a metaphor for the way we constantly perfect and control our digital personas, and for how these collide with reality upon closer inspection, in the moments when all our cracks and flaws are revealed.


Surface X, Source: Arduino Project Hub (https://create.arduino.cc/projecthub/Picaroon/surface-x-811e8c)

In (Un)seen, the proxemics are used differently: to signify the refusal of the ghost to visually engage with the visitor, or perhaps to signal that its presence is not quite what it seems.



Still from unprocessed original footage

I started by going through my original footage, selecting all the takes from one of the four participants in the shoot for my installation Emergence. I chose this woman because she had the most evocative facial expressions and dramatic poses. I then created three video loops between 30 and 60 seconds in duration. The first loop is comprised of extreme closeups focused around the eyes, in which the character's entire face is never seen. The second loop consists of closeups in which her entire face is visible. The third loop features slightly wider shots, but their duration is shorter and there is a significant amount of black between them.



(Un)seen – loop 1


(Un)seen – loop 2


(Un)seen – loop 3

I originally thought of manipulating the video image in Adobe After Effects, but I encountered a Coding Train video by Shiffman showing the potential of extracting the colour of pixels from a photograph (much as is possible in Photoshop) to program filters that change the appearance of images. It seemed interesting, but I didn't know whether those same features could be applied to a moving image, given the processing capacity needed to play and render live video.
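The core of this pixel-sampling approach can be sketched outside Processing as well. The following is a hypothetical C++ illustration (not my actual filter): it downsamples a grayscale frame into cells and maps each cell's average brightness to a shape radius, the basic logic behind a circles-and-squares texture overlay.

```cpp
#include <cstddef>
#include <vector>

// For each cellSize x cellSize block of a grayscale frame, compute the average
// brightness (0-255) and map it to a shape radius in [0, cellSize/2].
// A renderer would then draw a circle or square of that radius at the cell centre.
std::vector<double> brightnessToRadii(const std::vector<unsigned char>& gray,
                                      std::size_t width, std::size_t height,
                                      std::size_t cellSize) {
    std::vector<double> radii;
    for (std::size_t y = 0; y + cellSize <= height; y += cellSize) {
        for (std::size_t x = 0; x + cellSize <= width; x += cellSize) {
            double sum = 0.0;
            for (std::size_t dy = 0; dy < cellSize; ++dy)
                for (std::size_t dx = 0; dx < cellSize; ++dx)
                    sum += gray[(y + dy) * width + (x + dx)];
            const double avg = sum / (cellSize * cellSize);
            radii.push_back(avg / 255.0 * (cellSize / 2.0)); // brighter → larger shape
        }
    }
    return radii;
}
```

In Processing, the same idea reads the `pixels[]` array of each video frame per draw call, which is why rendering cost grows with resolution and cell density.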

Some of the original footage was intentionally shot with a very shallow depth of field, leaving parts of the shots out of focus depending on the movement of the subject. As I started to experiment with textures, I found that slightly out-of-focus images helped blur the outlines of the circles and squares that were part of the video filter. I used the Gaussian blur function in Premiere Pro to get the desired texture. It was a trial-and-error process, manipulating the footage in Premiere Pro and then in Processing through several iterations.



Left: Original source footage, right: blurred footage


Same footage rendered through Processing


Left: Original source footage, right: blurred footage


Same footage rendered through Processing


I recorded the soundtrack, then edited and mixed it. It consists of a loop of a woman's heavy breathing, over which a selection of 13 short clips plays randomly.


The clips are mostly questions that demonstrate the ghost's desire to engage with the visitor, but at times also challenge them. Examples include: Who are you? Where do you come from? Can you set me free? Do you want to hear my story?



The technical set-up was rather simple. An Arduino Nano read the distance data from an LV-MaxSonar EZ ultrasonic distance sensor. The video for the idle state (loop 1) loaded automatically, and two thresholds were set to trigger playback of loops 2 and 3.
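The mapping from reading to loop is just a pair of comparisons. As a minimal sketch (the threshold values here are illustrative, not the installation's actual calibration):

```cpp
// Map a distance reading (cm) to a video loop index: loop 3 up close,
// loop 2 at mid range, loop 1 (idle) when far. nearCm and midCm are
// illustrative thresholds, not the installation's actual calibration.
int loopForDistance(int distanceCm, int nearCm = 60, int midCm = 180) {
    if (distanceCm <= nearCm) return 3;  // visitor close to the screen
    if (distanceCm <= midCm)  return 2;  // visitor mid-room
    return 1;                            // idle state
}
```

Processing would then switch which movie object is playing whenever the returned index changes.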


The distance sensor gave wildly different readings depending on the space and had to be patiently recalibrated several times. Despite the Arduino being set to send readings to Processing at 1500 ms intervals, readings hovering near the thresholds between video loops caused rapid flickering between loops. One might say the system itself was haunted.
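One common remedy for this kind of threshold flicker (not implemented in the piece, but a plausible fix) is hysteresis: only commit to a new loop once the reading has moved past the boundary by a margin. A sketch with illustrative thresholds:

```cpp
#include <cstdlib>

// Hysteresis around each threshold: switch to a new loop only when the
// reading clears the boundary by at least `margin` cm. Threshold values
// are illustrative, not the installation's actual calibration.
class LoopSelector {
public:
    LoopSelector(int nearCm, int midCm, int margin)
        : nearCm_(nearCm), midCm_(midCm), margin_(margin), current_(1) {}

    int update(int distanceCm) {
        const int target = rawLoop(distanceCm);
        if (target != current_) {
            // Only commit once the reading clears the boundary by `margin`.
            const int boundary = boundaryBetween(current_, target);
            if (std::abs(distanceCm - boundary) >= margin_) current_ = target;
        }
        return current_;
    }

private:
    int rawLoop(int d) const {
        if (d <= nearCm_) return 3;  // close range
        if (d <= midCm_)  return 2;  // mid range
        return 1;                    // idle
    }
    int boundaryBetween(int a, int b) const {
        // The near threshold separates loop 3 from the others; otherwise mid.
        return (a == 3 || b == 3) ? nearCm_ : midCm_;
    }
    int nearCm_, midCm_, margin_, current_;
};
```

A reading that dithers a few centimetres around a threshold then keeps the current loop instead of toggling on every update.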

The ventilation in the classrooms at OCAD also proved challenging: despite playing at full volume on speakers, the soundtrack was not fully audible except at very close range. The original intent was to create a 360-degree soundscape with a p5.js library to heighten the immersion and feeling of presence; unfortunately, I could not find an equivalent for Processing.





Closeup of image as seen up close, projected on a screen

The Exhibition   

The exhibition was a wonderful opportunity to get members of the OCAD community and the general public to engage with the work. The fact that (Un)seen was in a separate room was at once an advantage and an inconvenience: some people missed the piece because they concentrated on the main spaces, but those who ventured into the room focused their entire attention on the installation. Being the sole creator of the piece left me with all the duties of engaging with visitors and didn't allow me to visit my colleagues' pieces, especially those by undergraduates or second-year graduate students that I had not yet seen. I met and spoke with Digital Futures faculty I hadn't yet encountered, as well as staff and students from other departments. It was a useful and engaging exchange that should happen more often, as it created a real sense of community.

People were eager to engage with the piece and the feedback was overwhelmingly positive. Visitors understood the concept and enjoyed the experience. Because of the issues with the distance sensor, they had to be instructed not to move too quickly and to pause to minimize false triggers. The only drawback to the room was the extremely noisy ventilation: despite the sound playing at maximum volume on the room speakers, the soundtrack and clips were barely audible. The open door, meant to entice people into the space, only added to the din. A totally dark space would also have been nice, but I ended up switching spaces with some of my colleagues to accommodate their project.


CODE: https://github.com/nvalcin/Unseen



Correia, Nico. “Bridging the gap between art and code” in UCLA Newsroom, April 25, 2016 http://newsroom.ucla.edu/stories/bridging-the-gap-between-art-and-code. Accessed on December 6, 2019.

Gordon, Avery F. (2008). Ghostly Matters, Haunting and the Sociological Imagination. Minneapolis: University of Minnesota Press.

Picaroon (2018), Surface X in Arduino Project Hub. https://create.arduino.cc/projecthub/Picaroon/surface-x-811e8c. Accessed on December 6, 2019.

Reas, Casey (2019). Artist's website. http://reas.com/. Accessed on December 6, 2019.

Rosenthal, Emerson, “Casey Reas’ Newest Art Is A Coded, Projected ‘Allegory Of The Cave’” in Vice Magazine, March 14, 2014. https://www.vice.com/en_us/article/mgpawn/casey-reas-newest-art-is-a-coded-projected-allegory-of-the-cave-for-thedigital-age  Accessed on December 6, 2019.