Stories We Tell


Stories We Tell, by Naomi Shah

Experiment 3: Creation and Computation
Github link:




Project Description:

Stories are structured causalities. Strings of words form pages of sentences, recalling a past, painting a thought or envisioning a future. ‘Stories We Tell’ is an exploration of such microfiction narratives.


As you turn the knob, sentences change, new meaning emerges and new stories are created.

The combinations and permutations users select provide insight into their experiences, thoughts, moods and even worldviews as they string together seemingly disparate sentences to create coherence.





Project Context:

As a consumer of non-fiction books, cinema and graphic novels, I have always been fascinated with exploring the ways in which newer stories emerge. Since the beginning of Creation and Computation, I have wanted to experiment with the migration of videos and stories to more interactive spaces, using participatory methods to engage with audiences and devise ways in which they can become active users. This is my first attempt at generating stories through code.


I took inspiration in prose and storytelling from Terribly Tiny Tales, a crowd-sourced micro-fiction storytelling platform. Based out of India, the platform caters to the shrinking attention span of readers by limiting its stories to just 2000 characters. These succinct, often poetic microfiction narratives drew me in when I was younger; their ambiguity invited engagement and interpretation. This was the starting point of inspiration for my project, exploring how I might adapt the format into the ‘This and That’ brief for Experiment 3.


More Inspiration:

A source of inspiration was a game called ‘Pitchstorm’, which I recently purchased through Kickstarter. This party game puts players in the position of unprepared writers pitching movie ideas to the world’s worst executives. Through combinations and permutations of the 164 randomly drawn character and plot cards, players must create the premise for a movie. As a player, I enjoyed the random dealing of character and plot cards that created the possibility of diverse stories, throwing players into a frenzy as they try to concoct larger worlds from the minimal sentences on the cards. I had the chance to play this game while building this project, which gave me some insights into the next steps beyond the MVP.


I came across plenty of resources that let users create stories using prompts, guidelines or random generators. Another inspiration was The TVTropes Story Generator, a website that gives users a set of elements to use as inspiration for building their stories. Random sentences are assigned to elements of the narrative such as Setting, Plot, Narrative Device, Hero, Villain, Character as Device and Characterization Device. Hitting the refresh button generates another set of random sentences.


Similarly, the Amazing Story Generator is a book that lets users mix and match three different elements (setting, character and plot) to create unique story ideas.


Hardware Used:

1 Breadboard

1 Arduino Micro

1 USB cable

4 10k Potentiometers

4 differently coloured knobs

M/M Jumper Wires


Other Materials Used:

1 Cardboard Box

2 Rubber Bands


Software Used:


Arduino IDE










Considering this was my first solo coding assignment, I was intimidated by the task. However, it gave me the opportunity to assess everything I had learnt (and had not learnt) and apply it to this project. I found I needed to revise basic concepts such as arrays, booleans and map functions, with only myself as a resource, since I was building the project away from the OCAD environment and back in my native environment. Because I am in the sleepy coastal town of Goa, resources are hard to find, and the challenges of building the project here were different. I stumbled a lot, but through trial and error, I built my project.


My project was technically simple. I wanted to use 4 potentiometers, each assigned to a different category of sentences. Each potentiometer would then be used to ‘scroll’ between the sentences in its category. I tackled the Arduino and p5.js components separately, and then combined the two using the p5.serialcontrol app.
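The Arduino-to-p5.js handoff can be sketched in a few lines. As an illustration (not the project's actual code), assume the Arduino prints the four potentiometer readings as one comma-separated serial line; on the p5.js side, that line only needs to be split back into numbers:

```javascript
// Hypothetical helper for the p5.js side of the serial link.
// Assumes the Arduino prints the four potentiometer readings as a single
// comma-separated line, e.g. "512,88,1023,0".
function parsePotLine(line) {
  // Split the line on commas and convert each reading to a number (0-1023).
  return line.trim().split(',').map(Number);
}
```

In the running sketch, a serial 'data' callback would feed each incoming line through a function like this before the values are mapped to sentences.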


I started by first writing down the sentences for each category, which I felt were open-ended enough to complement any combination of sentences to form random stories.  



Location:

That night in the kitchen,

It was a bright day in the garden

The walls of the fortress were high

Emotion:

She felt scared by what stood before her

She felt peace in that moment

She had a lump in her throat

Action:

She took a knife and shoved it into his stomach

And ran her fingers through her hair

She pet the back of her iguana

Dialogue:

‘I am not going to let you get the better of me’ she yells.

‘It’s a new beginning’, she said.

‘Don’t let me down’ she whispered.


Wiring the potentiometers to the Arduino and extending the code from one potentiometer to four was simple. However, every time I ran the code, the values from 0 to 1023 would not show up in the serial monitor; instead it displayed a series of special characters. It took me a while to realise that the problem occurred every time I charged my laptop, owing to an earthing problem in my home.





For p5.js, I started by doing several Coding Train tutorials to understand how to piece together my project. At first, I created a local JSON file, intending to increase the number of sentences for each category at a later stage. However, I eventually added the sentences into global arrays for each of the four categories: Location, Emotion, Action and Dialogue. I chose three sentences per category and mapped each sentence to a range: for example, sentence one covers 0-300, sentence two covers 300-600, and so on up to 1023, the top of the analog range the potentiometer returns to the Arduino.
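The range mapping described above can be sketched as a small helper. This is a hypothetical reconstruction (the function and variable names are mine), using an even three-way split of the 0-1023 range rather than the exact 0-300/300-600 boundaries; the array holds the Location sentences listed earlier:

```javascript
// One of the four global category arrays (Location).
const locationSentences = [
  'That night in the kitchen,',
  'It was a bright day in the garden',
  'The walls of the fortress were high'
];

// Map a raw potentiometer reading (0-1023) onto one of the sentences by
// dividing the analog range into equal bands, one per sentence.
function sentenceFor(potValue, sentences) {
  const bandWidth = 1024 / sentences.length;  // ~341 per band for 3 sentences
  const index = Math.min(Math.floor(potValue / bandWidth), sentences.length - 1);
  return sentences[index];
}
```

Each of the four knobs would call a function like this against its own category array on every new reading.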


I created a slider to scroll through each category, using booleans to make sure I was calling the data correctly. Once this worked, I set up the port and the serial connection between the Arduino and p5.js. This created some complications, and ‘p5.SerialPort is not a constructor’ was an error the programs would throw from time to time. One by one, I scanned every line and fixed the errors, big and small, until they had all vanished. I was almost sure my code would not work, but when it eventually did I was ecstatic.




I then spent some time designing the page with some basic HTML and CSS, giving it a clean and minimal aesthetic to package it to completion.




I re-purposed a packaging box to become the controller for ‘Stories We Tell’. I was lucky to have some carpenters at my disposal who assisted me in using the power tools to drill 4 holes into the cardboard. Through these holes emerged the 4 coloured knobs that the user would turn to control various aspects of the microfiction narrative. The four knobs were purchased from Creatron before my departure from Toronto. The knobs and text on the screen are designed to be colour-coordinated, allowing users to quickly distinguish which knobs correspond to which elements on the screen.















Future Possibilities

An obvious next step for this project would be to populate each story category (Location, Action, Emotion, Dialogue) with several more sentences, ideally hundreds to encourage endless possibilities for generating unique stories.


While building this project, I had several ideas that developed along the way, all revolving around the core principle of scrolling through random, unconnected and disparate sentences to string together something that exhibits clarity and coherence. In this case, the sum of all the individual storytelling elements is greater than its parts, generating stories that demonstrate a user’s experience, worldview, thoughts, or mood. Listed below are some potential future directions, with variations made and layers added to this project, that I would like to explore further:


Brainstorming tool for filmmakers and writers

Ideal for filmmakers or writers, ‘Stories We Tell’ could be a tool to jump-start a brainstorming session for fresh short stories, novels, scripts, screenplays, or improv sessions. Added layers of complexity to this project could involve users being encouraged to either convert their microfiction narratives into more long-form narratives, or perhaps generate a pitch for their microfiction narratives. Other variations of this could involve different kinds of elements within stories such as genre, characters, conflict, resolution, etc.


Stories created by children, for other children

As children increasingly take to the digital world for purposes ranging from entertainment to education, this could be a tool to encourage cognitive abilities and creativity among children by allowing them to create their own stories. The tool could be tweaked to become more visually heavy, relying much more on illustrations or images to appeal to a younger demographic. Children could create their own stories, save them, share them with their friends and read stories created by others, all on the same platform.


Design research tool using storytelling for researchers

If the sentences and the categories that contain them were modified to suit a specific context, this could serve as a design research tool, where story making and story building are used by researchers, clients and other participants to make sense of complex, interconnected situations. Narratives generated by simply rotating the knobs and choosing seemingly disparate sentences to build into a story can help shed light on the way people perceive themselves and their environment. This could also be useful prior to survey development, allowing researchers to gauge the emotions and issues surrounding a situation and then explore them further through one-on-one interviews.




3.4: Boolean Variables – p5.js Tutorial. (2015, September 10). Retrieved from

7.3: Arrays of Objects – p5.js Tutorial. (2017, October 10). Retrieved from

Long story short. (n.d.). Retrieved from

Pitchstorm. (n.d.). Retrieved from

Story Generator. (n.d.). Retrieved from

Storytelling. (n.d.). Retrieved from

The Coding Train. (2015, October 30). 10.3: What is JSON? Part II – p5.js Tutorial. Retrieved from


Call Mom: The Redial Edition

By Tabitha and Jingpo

Our final setup on the exhibition day.

Project Description:

Call Mom is an Arduino-based networking project that uses a light sensor to determine when mom’s reading lamp is switched on and sends you a notification that she’s in relaxation mode, ready to hear from you. The device is small enough to fit inside a decorative object such as a book or a box, allowing it to blend seamlessly with her bedroom decor. Powered by a simple battery pack, it’s a low-maintenance, internet-connected device that sits by her bedside. This new iteration includes a smartphone-friendly interface that allows the user to call or message mom directly as well as monitor her sleeping patterns and phone call statistics. In the gallery setting, the project is presented as a set of bedside vignettes built from found objects, which allow the viewer to imagine the world of three unique moms as they wind down for the night.

Github Link: 


This second iteration of the ‘Call Mom’ project focuses primarily on the unique needs of people living in different time zones who would like to easily communicate by phone with their mothers back home. Drawing from Jingpo’s personal experience, we expanded upon our initial idea and built a web interface based on feedback from some of our international classmates. Our previous project had a simple function that worked very well: getting email alerts. This time, we built upon that to better meet the needs of people who live far from home.
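The core sensing logic behind those alerts can be sketched as a threshold plus an edge detector, so that a notification fires only when the lamp goes from off to on rather than on every reading. This is an illustrative reconstruction in JavaScript, not the project's actual code, and the threshold value is an assumption:

```javascript
// Assumed analog light level separating "lamp off" from "lamp on".
const THRESHOLD = 600;
let lampWasOn = false;  // remembered state from the previous reading

// Check one light-sensor reading; call notify() only on an off->on change.
function checkLamp(lightLevel, notify) {
  const lampIsOn = lightLevel > THRESHOLD;
  if (lampIsOn && !lampWasOn) {
    notify('Mom turned her reading lamp on');
  }
  lampWasOn = lampIsOn;
}
```

Debouncing the transition this way keeps the device from flooding the user with alerts while the lamp simply stays on.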

Another component of this project is the fabrication and in this second go-around we needed it to work within a gallery setting. In the previous version we presented the device inside an old book and placed it on a bedside table to help the viewer connect to the concept while imagining their own mom tucked in bed. For this version we expanded upon the idea based on a suggestion from Kate to add more moms!

Website Development: Jingpo

We went through a process of defining the functionality of the web product: why a website? What features should it have? What makes it different from existing services, such as Google Weather?

We then interviewed several international students in our class, hoping to get some inspiration from them.


We interviewed 4 international students in our class: Jingpo, Norbert, Naomi and Carisa. There are 3 key findings:

  1. They are all very busy at school and live in different time zones from their parents. They all think receiving a reminder would be very helpful.
  2. Sometimes they don’t know what to talk about with their moms.
  3. Even when they don’t call their mom, they still think about her: “What is she doing right now?”, “What is the weather in my hometown, colder or warmer?”

Information Flow:

To consolidate the insights we found in research and potential-user interviews and decide the final functionality of this website, we used FreeMind to make a mind map.




We used Adobe XD to design the webpage.


We also used Photoshop and Illustrator to create the homepage illustrations of the lamp switched on and off.




1. Temperature sensor:

We wanted to know the temperature of the room. Tabitha’s mom doesn’t turn on the heat during the winter, so Tabitha would like to know the room temperature in her mom’s house and make sure her mom is taking good care of herself. We spent a lot of time working on the temperature sensor, but the code I wrote didn’t work with the sensor I had.


2. API

We found an API we could use, a free open-source weather service. At first we didn’t know how to trace and track the other data we wanted. Then we found a YouTube tutorial online and solved the problem, but the code only worked locally; it was blocked by the OCADU server. It worked once I started a new HTML file and added JSONP to the code.


3. Touch Screen

The code only worked on a laptop, not on the touch screen. We checked the p5.js library and tried different syntaxes, such as “mousePressed” and “touchStarted”, but none of them worked. So we decided to create a navigation bar at the top of each webpage.

Fabrication: Tabitha

When expanding upon our initial version of Call Mom it was important that the new mom characters were specific enough to feel real but not so specific that it felt like you were looking at the bedside table of someone else’s mom. The hardest part of this was deciding what to do with the portraits. In the first version I used an old family portrait of my mom and my aunt mainly because of the aesthetic. Since it’s an old photo from the 70s it has this timeless quality that fit in with the rest of the objects I had selected. However, once you start adding multiple vignettes patterns begin to emerge and I did not want every setup to look like it belonged to a white baby boomer mom. It felt cheap to just google photos of people of other ethnicities and plop those images into the frames. The wrong choice would distract from the main purpose of the fabrication which was to communicate how we are all connected by the shared experience of having mothers. These moms had to feel real.

In sorting out what to do about this I got onto thinking about family vacation photos and whether I could play with the idea of the absentee child by going into photoshop and erasing or obscuring the kids within the pictures, as if their presence within the family was a faded memory. However I thought that might be pushing our project too far into the realm of conceptual art which didn’t seem to align with Jingpo’s vision. So I dug through my old photo albums and found an old black and white photo of a man whose face was obscured and a small photo of a friend of mine from public school. Both of these images did not draw too much attention to themselves which is why I think they served their intended purpose of representing family photos while still allowing the viewer to project themselves onto the project. I still really like the idea of picture frames representing children who aren’t present, but I think it would work best if I used backdrops that are obviously missing the subject such as empty school portraits or a birthday celebration with no birthday boy/girl.

Gathering materials. Instead of asking 'what do moms like?' I asked myself 'what does this mom like?'

Next it was time to build the thing! My home is a strange place with lots of crazy objects so there were many items to choose from for the fabrication of two new mom vignettes. The trick was to establish a direction. To do this first I thought of two more Mom characters then I imagined what sort of bedside table they might have. So I made the following list:

Setup 1: (Original Mom)

  • Networked object: Book with sensor and arduino
  • Power supply
  • Cup of tea
  • Additional book props
  • Family photo
  • Vintage Table
  • Vintage Lamp
  • Light sensor
  • Funky carpet

Setup 2: (Humble Mom)

  • Networked object: heirloom jewelry box? Tupperware sewing kit? Kleenex box?
  • Power supply
  • Water in a budweiser cup
  • National enquirer magazines
  • Family photo
  • Old TV tray table
  • Crumpled tissues
  • Basket of knitting
  • Fuzzy bathroom carpet

Setup 3: (Posh/Modern Mom)

  • Networked object: orange Hermes box?
  • Power supply
  • Crystal decanter of rum
  • Family photos
  • Fancy lamp
  • Silk sleeping mask
  • Incense
  • Brass gate leg table
  • Persian rug
Trying out different table setups by grabbing things from around my apartment.

Next I thought of what tables I could use for each of the characters. I settled on a brass gate leg table for the ‘modern mom’ and an old battered tv tray for ‘humble mom’. This process was very improvisational, but I was working from a clear structure that would evolve as I found new objects. At one point I realized that slippers could be a nice addition so we could really feel mom’s presence – like how we kick off our slippers before climbing into bed. So I started with moccasins and thought of the other types of ‘slippers’ humble mom and posh mom might have. When it came to books humble mom had a few books from the library and posh mom had an expensive monocle city guide and a book on Persian cooking. Humble mom was reading a copy of the Toronto Star whereas posh mom had a subscription to the Margaret Atwood-backed local newspaper The West End Phoenix. It helped to focus on specific people from my life when curating these objects which added to the realism. Nothing was placed at random – I’ve got a rationale for every crazy object in this project! The one thing I wish I could have brought was a huge carpet but I don’t drive so I limited myself to things I could take with me in a taxi.

Preparing for the final presentation.

To prepare for the final presentation I had a ‘mum’s chicken’ pie for lunch.

Since we were duplicating our original project, the circuit and sensor remained the same. Jingpo and I borrowed some Feathers from our classmates, stuck them on new breadboards and plugged them into battery packs. We decided to choose something other than books this time around, so our two new sensors were housed inside a Kleenex box and a big wooden box for simplicity. Unfortunately our battery pack was too large for us to use the orange cat teapot as a connected object! Next time…

Getting things together in the experimental media space.


Mom's nightcap.

This mom is practical with a touch of whimsy.

On the day of our show we set up in the experimental media space and waited for people to arrive. It ended up working perfectly because the main room was quite loud with all the other projects, whereas this room was quiet enough for visitors to fully engage with our project. We marked the lamp switches with colourful green tape to indicate the interaction and mounted an iPad against the wall so viewers could make the connection between the phone messages and the on/off lamp graphics. Due to the simple nature of our project it was easy to convey the idea in a few short phrases, and the crowd seemed to understand within seconds.

Tabitha had some concerns that it would be harder to make an emotional connection with the audience this time around since, unlike with experiment 4, she would not be literally calling her mom on the phone as part of the presentation. But in the end the idea resonated with more force than we had expected; visitors appeared to project feelings about their own mothers and families onto the project. Much of the reaction was positive, and when we described the concept there were smiles and laughs of recognition, especially amongst those who were far away from home. But there were some moments of genuine tension as well, where Tabitha was grilled over our decision to focus on mothers. It was interesting to see how our project became a launching point for discussion around families, mothers and technology.

The hidden sensors were a point of contention for several visitors.

Here are a few snippets of the reactions from our exhibition:

  • An international student was beaming as he described how the slippers we had placed next to the end tables made him want to kick off his own shoes and transport back home to be with his family.
  • An older man was quite bothered by the statistical aspect of our project where we could track our mother’s sleeping pattern and temperature in the room. He spent a few minutes arguing with Tabitha about the age of her mother and whether she would need such features. To Tabitha it seemed as though he was projecting his own feelings about aging onto the project.
  • One younger man was intrigued by the way we had hidden the devices in objects which led to a rich discussion about privacy. He said that on the surface it seems as though we are spying, but the irony is that the sensor is so simple and does not produce much data at all and is relatively harmless. We give more information voluntarily through our social media accounts, yet a simple light sensor can be perceived as a violation through the act of being hidden. Tie that in with the fact that we’re ‘spying on our mothers’ and you’ve got a very interesting dynamic.
  • Everyone got excited when we demonstrated how the lamp and the images on the iPad were connected through the on/off switch. One person said it was “as satisfying as popping bubble wrap.”
  • One guy was bothered by our choice to focus on mothers and wanted to know why fathers were left out since “not all mothers are that great.” Interestingly, his feelings seemed to be diffused when Jingpo described how we were inspired by her situation as an international student.
  • Jingpo asked her mom whether she would want this device in her home and she said absolutely no, but Tabitha’s mom gave a resounding yes!


We presented some case studies to the class to show how our project relates to the work done by other creators.

The immersive theatre experience Sleep No More communicates a clear sense of place.

Inspiration #1 (Tabitha): Immersive Theatre Performance: Punchdrunk’s Sleep No More

Relevant Qualities

In Sleep No More, participants are encouraged to explore an old hotel by navigating through an intricately curated immersive environment. A strange moment for me on this project was realizing that much of what makes Call Mom work relates directly to the location design/layout course that I teach at Sheridan. The main difference is that here we work with physical objects whereas I’m used to just drawing them in 2D, though both use specificity of design to express character. Here are some elements of Sleep No More that were guiding principles for the fabrication.

  • Well-researched design details
  • Communicates a sense of time and place
  • Encourages participants to move through the space in unique ways while exploring its secrets
  • Builds a narrative world through found objects

Fabrication: Next Steps

These are the key takeaways from Sleep No More and how they relate to our project.

  • Build 2 new Mom characters to expand upon this world
  • Play within the same object categories, just change their characteristics
  • Draw people into the narrative with specificity
  • Emphasis on the universality of this experience
A diagram explaining how the Realtime Temperature Sensor works.

Inspiration #2 (Jingpo): Realtime Temperature Sensor

It is a simple project that uses a temperature sensor to monitor temperature and stream the data to a live-updating dashboard, in realtime, anywhere in the world. The temperature sensor measures the ambient temperature and publishes the data to a channel via the PubNub Data Network. A web browser that subscribes to this channel displays the data stream on a live visualization dashboard.

Sensors sitting anywhere in the world can collect data; all they need is power and an internet connection. The technology behind this is a public JavaScript API. Sensors such as temperature and light sensors generate raw data, and we wanted to build a website to display that data visually.

This is how the Realtime Temperature Sensor works:

  1. The temperature sensor measures the ambient temperature.
  2. The sensor’s microcontroller connects to Wi-Fi.
  3. The PubNub code enables us to publish the temperature in real time, as a data stream, to anyone subscribing to the same channel.
  4. Through the PubNub Developer Console, we can receive this stream of information from as many sensors as we like in real time.
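Step 3 above can be sketched in a few lines. To keep the example self-contained, `publish` is passed in as a stand-in for the PubNub client's publish call; the channel name and message shape are assumptions, not taken from the original project:

```javascript
// Hypothetical sketch of publishing one temperature reading to a channel.
// In real code, `publish` would be the PubNub client's publish method,
// called as publish({ channel, message }).
function publishReading(publish, temperature) {
  publish({
    channel: 'temperature-stream',  // channel name is an assumption
    message: {
      temperature,                  // the sensor reading
      timestamp: Date.now()         // when it was taken
    }
  });
}
```

Any browser subscribed to the same channel would then receive each reading as it arrives and can feed it into the dashboard visualization.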
The Smart Lamp is a wirelessly connected device that closely relates to our project.

Inspiration #3 (Jingpo): Smart Lamp with IKEA Lampan, Sonoff and

The Smart Lamp is a domotic lamp that can be controlled with a smartphone, or even using the Google Assistant.

The idea consists of equipping a cheap IKEA Lampan with a Sonoff Basic controller, which includes a Wi-Fi-controlled relay and a fairly discreet enclosure. We then program it to integrate with the platform and control its status with different connected tools such as an app, a web console, or even IFTTT triggers that can be extended with Google Home or thousands of other platforms.

A diagram describing how the Smart Lamp works.

A diagram describing how the Smart Lamp works.

How it works:

Replace the lamp’s existing switch mechanism with the Sonoff Basic, which will act as the new switch.

Open the Arduino IDE and load the “SonoffBasic” example code.

The Sonoff creates a Wi-Fi network that can be accessed to configure your home Wi-Fi and service connection credentials.

Sign up and integrate the project with the platform.

IFTTT is a platform that allows you to configure several triggers. We integrate the Google Assistant app on our smartphone to create a trigger for the action (the THIS part) and a Webhooks request to send it to



Inspiration #1


Inspiration #2


Inspiration #3


Flowers of Life

 By Mazin Chabayta & Naomi Shah


Project Description

The Flower of Life is an artistic investigation and meditation into the concept of endless growth in a densely overpopulated world.  

This kinetic installation comprises seven pairs of disks equipped with fourteen servo motors. The seven pairs are designed to represent different geographic regions: one disk of each pair depicts death rates while the other represents birth rates for that region. The two disks spin in opposite directions to create a contrast and evoke the cycles of life and death. The disks are juxtaposed in front of one another to create an optical illusion of a flower pulsating with life. The speed of each pair of disks varies depending on the birth and death rates of the region it represents, and the centre disk represents the world as a whole. Together, the constant rotation of the disks on the sculpture evokes the continuous cycle of life and death, without the user being able to slow it down or stop it. The sculpture stands almost 7 feet tall and creates a loud buzzing sound; it feels alive.






This project used an Arduino Mega, 14 360° continuous-rotation servos and a 5V AC/DC power converter. The rotation of the disks was based on birth and death rate data for six different regions (North America, South America, Africa, Asia, Europe and Australia), hardcoded directly into the Arduino, as well as for the world as a whole, represented in the central disk. The values are taken from the CIA World Factbook, which lists countries by their most current birth and death rates. To get values for the continents, we added up the birth and death rates of all countries in each continent and used the results as servo rotation values on the Arduino Mega. This kept our data relevant to the audience while allowing us to fine-tune those values to achieve a visually pleasing outcome.
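As an illustration of how a rate could be turned into a servo command, the sketch below maps a birth or death rate onto a continuous-rotation servo value, where 90 stops the servo and 0 or 180 is full speed in either direction. This is a hypothetical mapping, since the actual Arduino values were hand-tuned; the function and parameter names are mine:

```javascript
// Map a region's birth or death rate (per 1,000 people) to a
// continuous-rotation servo command: 90 = stopped, 180 = full speed
// clockwise, 0 = full speed counter-clockwise.
function servoSpeed(ratePer1000, maxRate, clockwise) {
  const fraction = Math.min(ratePer1000 / maxRate, 1);  // 0..1 of top speed
  const offset = Math.round(fraction * 90);             // 0..90 away from stop
  return clockwise ? 90 + offset : 90 - offset;
}
```

Each pair would use the birth rate for one spin direction and the death rate for the other, so regions with higher rates visibly pulse faster.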

For this project, we decided to dive entirely into the creation of an abstract art installation that would provide a meditative and poetic journey for our viewers. The challenge was to push the boundaries in two aspects: the realization of an object that ‘lived’ and ‘spoke’ for itself without requiring any interaction from participants, and secondly the craftsmanship of this object, to make it as immersive, engaging and advanced as possible. Our previous projects required an interactive relationship between the objects and their viewers, making them participants in the experience. This experiment, however, allowed us to delve into the creation of an object where we as creators self-imposed the restriction of eliminating any interaction from audiences that would change the state of our object. After much deliberation, we arrived at this decision because life and death are ultimately inevitable forces of nature over which we as human beings have no control.

This project is meant to be an extension of Experiment 4, which was also titled the Flower Of Life. Drawing on feedback from Experiment 4, our approach was to work with multiple pairs of the Flower Of Life to allow a contrast between different data sets to emerge. Moving away from Experiment 4’s data visualization of populations across the world, Experiment 5 instead focused on creating a visual experience reflective of growth and the circle of life.


References and Inspiration:

We categorically researched installations that worked with the concept of multiples, and especially servos to understand the broad scope and range of what we could achieve. The case studies helped significantly in broadening our understanding of how different artists approached installation art, the concepts and themes explored behind these objects, the kinds of materials used and the varying degrees of interactivity among the installations. The ones listed below were all inspirational and instrumental in discourse around what was aspirational and what was possible within our team.

Kinetic Sculpture in Thornton Tomasetti’s office by Jonatan Schumacher, who was commissioned to make a sculpture illustrating the themes of education, collaboration and innovation at the San Francisco office of Thornton Tomasetti. The sculpture, which is suspended from the ceiling, features 48 servos with lightweight carbon fiber rods that cantilever outward.

The movement of the servos is controlled via Arduinos and sensors that cause the sculpture to respond to people walking through the space below. This was built over a few months, starting with a ¼ scale functional prototype in New York where they tested the electronics, motors, sensors and wireless communication.

We were intrigued by this project, which communicated complex themes through minimalist movement and shapes. Most of all, though, we were inspired by the process of first creating a scaled-down prototype of the final installation, which allowed for testing of the electronics and motors. We decided to treat Experiment 4 similarly, reassessing the use of materials and electronics and how they might be improved for a more ambitious version. The exhibition design of this installation led us to consider new ways the installation could be displayed: mounting it on a wall, suspending it from the ceiling, or mounting it on a pedestal.

Putting The Pieces Back Together Again by Ralf Baecker is an artistic investigation into complex systems, self-organisation and scientific methodology. It was made using 1250 stepper motors arranged on a 2D grid, with acrylic strips mounted on the motors moving in different directions. The kinetic installation is meant to represent emergent constellations and behaviours, as the acrylic strips sometimes intersect and reverse directions.

Once again, the articulation of an abstract concept through minimalist shapes, movement and sensors made this artistic exploration an inspiring one, and we ideated other ways we could communicate the concept of ever-evolving growth through minimalist symbols. This concept could only have been communicated through multiple acrylic strips that self-organise and influence one another. We were then certain that the next step from Experiment 4 would require multiple revolving disks to allow for a more comparative and universal experience of growth. We decided to use birth and death rates across continents, which would give a manageable number of pairs for the installation.

The Metaphase Sound Machine is an object with 6 rotating disks, arranged in a cluster similar to the Flower of Life we intended to construct. Essentially, it is an audio and kinetic installation in which sound is synthesized from feedback produced by microphones and speakers on the rotating discs. At the center of the installation sits a Geiger-Müller counter that detects ionizing radiation in the surrounding area; the intervals between detected particles influence the rotation velocity of each disk.

This installation shared with The Flower of Life the feature of revolving disks whose movement is influenced by gathered data. However, for our installation, instead of collecting data from the environment, we deliberated between data extracted from an API and data hardcoded into the Arduino to influence the speed of the disks. Furthermore, this installation was instrumental in making us think about the ‘animated’ character of our own: with the tentacle-like effect of its cables coupled with its audio output, the Metaphase Sound Machine gives the impression of having a life of its own. This inspired us to consider a similar experience using LEDs connecting the discs, lighting up erratically to give the effect of life pulsating between the revolving disks. We were also inspired by the LED light setup of the Audio-Visual Cortex exhibited at the Audioversum museum in Innsbruck, Austria. However, we decided to prioritise the functionality of the 14 servos we would have to use, and then assess the use of LEDs if time permitted.


Production Materials Used:

Technologies Used:

Arduino Mega


14x 360 continuous rotation servos

AC/DC power converter (5V)



Materials Used:

Birch Plywood



Fabrication (process)

Building the sculpture was a challenging task requiring a high level of accuracy and craftsmanship. The shape we chose is symmetrical, so balance was vital for the sculpture’s stability and aesthetics. At first we considered different methods of rotating the disks, such as gears and belts, but that would have meant sourcing bigger and more expensive servos. We also considered giving the disks geared edges so they could drive one another, but eventually we decided to take on the 14-servo challenge, which was important for us to overcome and made more sense for our concept, since the disks needed to move at different speeds depending on the values coded into them.

Test Video

Before starting fabrication, we were encouraged to test connecting all 14 servos to an Arduino Mega and a power supply to make sure the power and the board could handle the load. After a few tests, and one fried servo, we confirmed that we had the right components to run all 14 servos smoothly and safely. We then moved on to the next stage of fabrication knowing that the power and connections were sufficient.
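The check underlying this test is simple arithmetic: the supply has to cover the worst-case draw of all 14 servos running at once. A rough sketch of that budget, where the per-servo figure is an assumed typical value for small hobby servos rather than one we measured:

```cpp
// Rough current-budget check: total draw = servo count x per-servo draw.
// perServoAmps is an assumption for illustration; small hobby servos
// typically draw a few hundred mA under load, and more at stall.
double totalCurrentAmps(int servoCount, double perServoAmps) {
    return servoCount * perServoAmps;
}
// e.g. totalCurrentAmps(14, 0.5) suggests the 5V converter should be
// rated for several amps of headroom.
```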


After finalizing a strategy for the build, we drew the plans for the sculpture and finalized the sizes and proportions of the different pieces. We also had to keep in mind the size and weight of each disk, since the servos had limited torque and we had to stay within that limit. It was also much easier and quicker for us to use ¼” plywood for the disks, since it let us use the laser cutting machine without worrying about the thickness of the wood causing any delays in the cutting process.


Working in the Maker’s Lab was extremely beneficial since we have a great asset there: Reza. During our first meeting with Reza to discuss our plan, he immediately spotted several ways for us to save time while building. He recommended making the base of the sculpture a triangle to maximize stability while minimizing the amount of material used. In the end we completed the structure with almost perfect symmetry, and made the sculpture collapsible so it would be easier to disassemble.



Once the structure and its base were built and standing, we started the wiring stage. With 14 servos, we had 42 wires to route around the structure from top to bottom, adding up to more than 100 ft of wire overall. The wires ran from the top servo down, with every seven servos’ wires going down one of the legs to power/ground bus terminals from breadboards; both breadboards were then wired into the 5V converter, which also powers the Arduino Mega inside the control box. Unfortunately, we underestimated the time this would take, and by the time we reached the bottom of the structure we had run out of time and had to hastily connect and hide the cables from view. Because of this misfortune, however, we fully understood how complex, time-consuming and important the wiring stage is for making connections secure and troubleshooting accessible.



We decided to finish the structure with a light wood finish to keep the organic look, and gave the disks higher contrast to support the optical illusion: the birth-rate disks were in darker birch and the death-rate disks were black.



Once we realized the complexity of our fabrication process, we decided to simplify our code significantly: we identified the specific behaviours we wanted from the structure and found the simplest way to achieve them. We already had our JSON country list from Experiment 4, with the birth and death rates for all the countries around the world, so it was easy to generate the data for the regions from that list. After getting the right values for the regions, we experimented with and tweaked those values until they represented the data in a visually pleasing way. Our code is essentially a simple servo sketch that defines the pin each servo is connected to, and the speed and direction each servo should rotate. We assigned two servos to each region, one for the birth rate and one for the death rate of that continent, then coded their values in opposite directions so the paired disks rotated in opposite directions and at different speeds.
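The mapping at the heart of that sketch can be illustrated as a plain function. This is a simplified illustration rather than our exact Arduino code: with continuous-rotation servos, Servo.write(90) is roughly ‘stop’, and values toward 0 or 180 spin faster in opposite directions; the scale values below are tuning assumptions.

```cpp
#include <algorithm>
#include <cmath>

// Map a demographic rate (per 1000 people) to a command for a
// continuous-rotation servo. 90 is roughly "stop"; offsets toward
// 0 or 180 set direction and speed. maxRate and maxOffset are
// illustrative tuning values, not the project's actual numbers.
int rateToServoCommand(double ratePerThousand, bool clockwise,
                       double maxRate = 50.0) {
    const int stopValue = 90;
    const int maxOffset = 30;  // cap speed so the disks stay readable
    double norm = std::min(ratePerThousand / maxRate, 1.0);
    int offset = static_cast<int>(std::round(norm * maxOffset));
    return clockwise ? stopValue + offset : stopValue - offset;
}
```

A birth-rate disk and its paired death-rate disk would simply call this with opposite `clockwise` flags, giving the counter-rotation described above.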




The first and most important feedback was about the wires. Since one of our disks didn’t work at all and two worked only intermittently, we believe the cause was the wiring; so the first piece of feedback is to rewire the structure in a more secure way. Other feedback was to add some sort of screen telling the audience what they are looking at, orienting them on the type of data being shown and which region it belongs to.

Going forward, we believe we should invest time in adding lighting features to the structure; this would enhance the visual experience for our audience and push the sculpture to a higher technical standard. We are also considering adding a control box that would give users the ability to manipulate what they are seeing, for example choosing different kinds of information based on their interests or background. This would make our project appealing to a larger audience.



Overall, Experiment 5 was a challenging journey that opened our eyes to new perspectives in kinetics. We were able to utilize our modest coding skills and capitalize on our fabrication and strong concept. Our final installation also received warm feedback, especially from older visitors. In retrospect, it would have been very beneficial to have more time to fine-tune the system and make sure all elements were working properly. Community support, at school and on the internet, is always a great asset. We really benefitted from the “buffet” of electronics we learned about in class, especially learning how to tell good-quality parts from cheap ones.




Putting The Pieces Back Together Again: The Order of Chaos (Ralf Baecker)

The Metaphase Sound Machine

Thornton Tomasetti’s Kinetic Sculpture (Jonatan Schumacher)

Audio-Visual Cortex

David C. Roy

River Styx (2018)


by Shikhar Juayl, Tyson Moll, and Georgina Yeboah

Figure 1

River Styx being presented at the “Current and Flow” end of the semester grad show at OCADU’s Grad Gallery on December 7th, 2018.

A virtual kayaking experience in the mythical world of the River Styx. Navigate the rivers of fire, hate, and forgetfulness using our handmade kayak controller, steering through rubble, rock and ruins. Discover jovial spirits, ancient gods and the arms of drowning souls across the waters between the worlds of the living and the dead.

The project environment was built in Unity using free 3D assets and characters created from volumetric footage. We used the Xbox One Kinect and software called Depthkit in the game:play lab at OCAD U to produce the mysterious animated figures in the project. The vessel is operated with Tyson’s Arduino-based kayak controller, newly revised with 3D-printed parts and more comfortable controls.


*The scripts and 3D objects used in the project are available via Github, but due to the size of several assets, the Unity project is not included.

Figure 4. Sketched diagram of the paddle controller circuit.



Monday, Nov 26

When we first convened as a group, we discussed the possibility of taking the existing experience that Tyson created for his third Creation & Computation experiment and porting it over to Unity to take advantage of the engine’s capacity for cutting-edge experiences. As Unity is more widely used in the game development industry as well as for the purpose of simulations, we thought it would make for an excellent opportunity to explore and develop for new technologies that we had access to at OCAD U such as Virtual Reality and Volumetric Video capture. We also thought it would be exciting to be able to use Arduino-based controllers in a game-development project; a cursory web search revealed to us that Uniduino, a Unity plugin, was made for this purpose.

We also wanted to explore the idea of incorporating a narrative element to the environment as well as consider the potential of re-adapting the control concept of the paddle for a brand new experience. River Styx was the first thing to come to mind, which married the water-sport concept with a mythological theme that could be flexibly adjusted to our needs. Georgina had also worked on a paper airplane flight simulator for her third C&C experiment which inspired us to look at alternative avenues for creating and exploring a virtual space, including gliding. We agreed to reconvene after exploring these ideas in sketches and research.

Tuesday, Nov 27

We came up with several exciting ideas for alternative methods of controlling our ‘craft’ but eventually came full circle and settled on improving the existing paddle controller. The glider, while fun in concept, left several questions about how to comfortably hold and control the device without strain. Our first ideas imagined it with a sail. We then considered abstracting the controller to remove extraneous hardware elements; VR controllers, for example, look very different from the objects they represent in VR, which makes them adaptable to various experiences and easier to wield. As we continued to explore these ideas, it occurred to us that the most effective use of our time would be to improve an already tried-and-true device and save ourselves the two or three days it would take to properly develop an alternative. Having researched the River Styx lore and mythos further, we were also very excited to explore the concept with the paddle controller and resolved to approach our project accordingly.


Wednesday, Nov 28

We visited the game:play lab at 230 Richmond Street for guidance in creating volumetric videos with the Kinect. Second-year Digital Futures student Max Lander was kind enough to guide us and give pointers about using volumetric videos in Unity. Later that day, we wrote a serial port connection script to begin integrating Tyson’s old paddle code into Unity.

Once that was completed, we started looking into water assets for our environment using mp4 videos. It turned out the quality was not what we were going for, so we instead integrated water assets from the standard Unity packages and began building our scenes.

For the paddle, with the aid of a caliper we measured the elements of the original paddle controller and remodelled them in Rhinoceros for 3D printing. Although the prospect of using an authentic paddle appealed to us, we kept the existing PVC piping and wood dowel design to avoid spending time searching for just the right paddle and redesigning the attached elements. To improve communication between the ultrasonic sensor and the controller, the splash guards from the original kayak paddle controller were properly affixed to the dowel, as was the paddle. The ultrasonic sensor essentially uses sonar to determine distance, so it was important that the splash guards sit perpendicular to its signal to ensure the sound was properly reflected back. Likewise, we created a more permanent connection between the paddle headboards and the dowel, and a neatly enclosed casing for the Arduino and sensors.

Printing the parts took about five days, as not all printers were accessible and several parts had to be redesigned to fit the available printer beds and material in the Makerlab. We also found that the roll of material we had purchased from Creatron for the project consistently caused printing errors compared to others, which cost us significant time troubleshooting and adjusting the printers.


Thursday, Nov 29

This was our first session finding Unity assets and integrating them in the Unity editor. We used a couple of references to help shape and build the worlds we wanted to create, and managed to find a few workable assets from the start, such as our boat. As we continued to add assets to our environment, we noticed that some were heavier than others and caused a lot of lag when we ran the game, so we decided to use more low-poly 3D models. Once we were satisfied with an environment, we attached a first-person controller (FPS) from Unity’s standard assets to the boat and began navigating the world we had created. We wanted to experience what exploring these rivers would feel like from this view, and later replace the FPS controller with our customizable Arduino paddle.

Figure x. Shikhar working on River Styx environment.


Friday, Nov 30

Hoping that it would simplify our lives, we purchased Uniduino, a premade Unity Store plugin. This turned out not to be the case, as its interface and documentation seemed to imply that we would need to program our Arduino through Unity instead of working with our pre-existing code developed in the Arduino IDE and its serial output. We ended up resolving this with the help of a tutorial by Alan Zucconi; we transmitted a string of comma-separated numbers associated with the variables that are used to operate the paddle and split them with string-handling functions in a C# script.
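The protocol itself is simple: the Arduino prints one line of comma-separated readings per update, and the receiving script splits the line on commas and parses each field. Our receiving script was C#; here is the same split logic sketched in C++ for illustration (field order and meaning are set by whatever the Arduino sketch prints):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split one line of serial input, e.g. "12.5,-3.0,98.4", into floats.
// Each field corresponds to one sensor variable (gyroscope angles,
// paddle distance, etc.) in the order the Arduino transmits them.
std::vector<float> parseSerialLine(const std::string& line) {
    std::vector<float> values;
    std::stringstream ss(line);
    std::string field;
    while (std::getline(ss, field, ',')) {
        values.push_back(std::stof(field));
    }
    return values;
}
```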

After some initial troubleshooting, we managed to get the gyroscope and ultrasonic sensor integrated with Unity by applying rotation and movement to an onscreen cube. The only caveat was that there was a perceptible, growing lag, which we decided to resolve on a later date.


Saturday, Dec 1st

As our River Styx environment grew, we continued to discuss adding the other rivers of the Greek mythological underworld, such as the River of Pain, River of Forgetfulness, River of Fire and River of Wailing. We then brainstormed a map layout connecting these multiple rivers and what reflective element each river should have. Our discussions expanded to game design versus an exploratory experience, and we considered what aspects might make it more of a game. However, foreseeing how little time we had to develop and finalize our project without overcomplicating things, we decided to keep it an exploratory experience.


Monday, Dec 3rd

As we continued developing the other scenes, we came across assets to help distinguish our otherwise similar river environments: lava assets for the River of Fire and fog for the River of Forgetfulness. With all these assets and the possible addition of volumetric videos, we needed a powerful computer to run our Unity project and reduce lag while working on it. We considered asking our professors for one, but the only machine capable of handling our needs was in the DF studio, where we could not install additional software or the drivers needed to resolve serial port issues without administrative permissions. To avoid these bottlenecks, we decided to use Tyson’s personal PC tower to continue work on the project, and later to run the installation at our upcoming grad show.

We also converted the kayak controller code from JavaScript to C# for use in the Unity game engine, initially in an uncalibrated state. The first movement we saw in Unity’s play window was noticeably slow, but it showed that our translation of the code worked. For convenience, the variables we needed for calibrating the device were declared ‘public’ in our code; this let us edit them manually from the Inspector window in Unity without running the risk of adjusting a ‘private’ variable in error.

Tuesday, Dec 4th

We reconvened in the game:play lab to capture volumetric videos with the Xbox One Kinect and Depthkit and import them into Unity. Depthkit comes with several features for manipulating the captured Kinect data, including a slider for cutting out objects beyond a given distance, along with other undesirable artifacts. To use the captures as looping animations, we tried to keep our recordings in sync with a ‘neutral’ state determined at the start, so the footage would not jump significantly between the first and last frames. Since the Kinect and Depthkit render the captured information as a video file, we also had to be mindful of recording times and the number of objects we wanted to include, to reduce the performance impact.

Some of the animations we captured included hands, exaggerated faces, ‘statues’ of god-like figures and silly dances. We frequently took advantage of the clipping area to isolate particular limbs in frame. In one instance, we created a four-armed creature using two subjects: one in frame, the other hidden behind in cropped space, contributing a second set of arms.


Wednesday, Dec 5th

At this stage we had three official scenes created, and the paddle’s parts were ready to assemble after going through the laser cutting machine. We wrote a teleport script that would let the user teleport from one cave entrance to the next in each scene, but decided against including it: we wanted users to explore without feeling goal-driven to get from one place to another. Instead, we decided to act as facilitators and transport them whenever they wished, adding a key press that teleports the user from one scene to the next.
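The facilitator teleport reduces to cycling a scene index on a key press, with Unity’s scene manager then loading that index. The cycling logic itself, sketched in C++ (names are illustrative, not from our actual script):

```cpp
// Advance to the next scene, wrapping from the last river back to
// the first. A key-press handler would call this and then ask the
// engine to load the returned scene index.
int nextSceneIndex(int current, int sceneCount) {
    return (current + 1) % sceneCount;
}
```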

We had plenty of fun using the Zero Days Look for our Depthkit captures, which was created for the VR film of the same name. It let us manipulate the default photographic appearance and incorporate colour, lines, points and shapes into the volumetric renditions. The more we worked with it, the more familiar we became with its interface and with how our adjustments would look in-game, as not all features of the plugin rendered directly in Unity’s world view window during editing.



Thursday Dec 6th – Friday, Dec 7th

Prior to showcasing our project, we moved all of our Unity assets and code to Tyson’s personal PC tower and continued our work from there. We integrated the volumetric videos into Unity and play-tested the environment to get a feel for how comfortable it was to navigate with the paddle. The kayak’s motion felt a bit slow for public demonstration, so we tweaked the speed increment, friction, and maximum speed until it felt fluid.

Reception for the project was overall positive. Interestingly, children picked up the controls with relative ease: since the ultrasonic sensor targeted an area larger than their hands, they could grip the paddle wherever they desired. Their ease could also be attributed to a lack of preconceptions about how the device works; one of the most experienced paddlers seemed to have the most difficulty operating it.



Project Context

TASC: Combining Virtual Reality with Tangible and Embodied Interactions to Support Spatial Cognition by Jack Shen-Kuen Chang, Georgina Yeboah, Alison Doucette, Paul Clifton, Michael Nitsche, Timothy Welsh and Ali Mazalek.


Tangibles for Augmenting Spatial Cognition, or T.A.S.C. for short, is an ongoing grant project conducted at the Synaesthetic Media Lab in Toronto, Ontario. Led by Dr. Ali Mazalek, the team investigates spatial abilities such as perspective taking and wayfinding, and creates tangible counterparts to complement the spatial ability being assessed in VR environments. The goal is to explore the effects that tangibles within VR spaces have on participants’ spatial cognitive abilities through physical practice, going beyond 2D spatial testing.

Our project relates to the idea of a customisable controller whose purpose complements the situation it is used in. In the T.A.S.C. project, for example, the user manipulates tangible blocks to solve a multi-perspective-taking puzzle. In our River Styx environment, the paddle complements its surroundings while also increasing embodiment in virtual space. We also designed the paddle to behave like an actual paddle: if it dips low enough to either the left or right, the kayak rotates in that direction while also moving forward.
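That dip-to-steer behaviour can be sketched as a small rule; the threshold and gain values here are illustrative tuning assumptions, not the numbers from our calibrated controller:

```cpp
#include <cmath>

struct KayakInput {
    float turnRate;      // degrees per second; positive = turn right
    float forwardSpeed;  // forward units per second
};

// If the paddle's roll angle passes a threshold, the kayak turns
// toward that side while also gaining forward speed; otherwise it
// coasts. All constants are illustrative tuning values.
KayakInput dipToSteer(float rollDegrees) {
    const float threshold = 20.0f;   // dip needed before steering engages
    const float turnGain = 0.5f;
    const float paddleSpeed = 1.0f;
    KayakInput out{0.0f, 0.0f};
    if (std::fabs(rollDegrees) > threshold) {
        out.turnRate = turnGain * rollDegrees;
        out.forwardSpeed = paddleSpeed;  // dipping also propels the kayak
    }
    return out;
}
```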

T.A.S.C. and River Styx both explore the physicality and mechanics of a tool and integrate its use into the environment it is used in. We hope to later integrate VR into River Styx to deepen this immersive paddling experience.



The Night Journey by Bill Viola and the USC Game Innovation Lab in Los Angeles.

The Night Journey is an experimental art game that uses both game and video techniques to tell the story of an individual’s journey towards enlightenment. With no clear paths or objectives, and underpinnings from historical philosophical writings, the game focuses its core narrative on creating a personal, sublime experience for the individual participant. Actions taken are reflected in its world.

The techniques incorporated from video footage and the narrative premise of the game gave us inspiration for how we might tackle the scenic objectives for our project and interpret the paths we wanted players to take in River Styx.



Cloudy and Tangled Thoughts


By Olivia Prior, Amreen Ashraf, and Nick Alexander

“Cloudy and Tangled Thoughts is an interactive piece which uses conductive fabric to explore the movement of light and space. Participants are invited to sit down and explore. Relax on a comfortable blanket and watch the clouds drift by. An array of irregular objects catch and refract the light, gently moving in relation to your position on the blanket, creating a sense of serenity.”

Audience enjoying final exhibit.






Cloudy and Tangled Thoughts evokes the experience of lying on a blanket, gazing at the sky, watching patterns form and dissipate in the leaves, wind, and clouds.

It consists of a blanket made from traditional and conductive textiles and a lattice of hanging geometric chimes. When participants lie or press on the blanket, lights and servo motors hidden among the chimes activate, causing them to swirl and tinkle. When more people lie on the blanket the pattern of lights and motion becomes more intricate in turn.
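A hedged sketch of how the blanket readings could drive that intricacy: count how many of the quilt’s pressure sensors are currently pressed and scale the light and servo pattern from there. The threshold (and whether pressing raises or lowers the reading) depends on how the velostat divider is wired; the values here are assumptions.

```cpp
#include <vector>

// Count pressed sensors: each velostat pad produces an analog
// reading that crosses a threshold when someone lies on it. The
// count then scales how intricate the light/servo pattern becomes.
int activeSensorCount(const std::vector<int>& readings, int threshold = 512) {
    int active = 0;
    for (int reading : readings) {
        if (reading > threshold) ++active;  // this pad is pressed
    }
    return active;
}
```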

We succeeded in fabricating all the necessary technology, creating the code, assembling it, and proving the concept. However, the result did not live up to our vision. The team believes that the idea is strong and the tech is viable, and we will return to this project in order to develop it to a point where it meets our expectations.



Cloudy and Tangled Thoughts started with a feeling. The team wanted to create a screenless installation that evoked a feeling of peace and wonder. We wanted to use technology to bring people together with a magical experience, using technology in a way that was unfamiliar to the average user. We envisioned the experience of lying on a blanket watching the clouds make shapes. It was important to us that we not create something simple like an on-off switch, a mechanism most people understand intrinsically, but instead create a relationship between sensors and output that generated a sense of wonder.

The prompt for the project from the Creation & Computation class was Refine & Combine. We were to return to a previous project and expand on it. While the concept we came up with was not directly related to a previous project any of us had done, we felt confident that our previous work with code, servos, lights, sensors, and fabrication put us at the same place developmentally as we would be if this had been a prior project.



The process began with a discussion of the kind of work we wanted to create, along with the kind of skills, technology, and existing projects we wanted to carry forward. When we settled on the concept above we began to brainstorm ways to realize it.

 We knew from the beginning that we wanted to work with a blanket and conductive fabric, but we debated over what form the apparatus hanging above, which needed to interact with the conductive fabric, would take. Since we had begun by imagining looking up at clouds, we researched installations and works of art utilizing cloud imagery and looked for inspiration there.

Cloud inspiration

Cloud inspiration

We decided that a series of geometric shapes would complement the organic, flowing nature of the blanket below. We envisioned multicoloured plexiglass, laser-cut into geometric shapes, hanging like wind chimes, diffusing light from above as they drifted and tinkled.

We submitted a proposal and consulted our professors, Kate Hartman and Nick Puckett, on how best to proceed. Kate provisioned us with 16 sq ft of conductive fabric, velostat, and a sewing machine to experiment with. Nick suggested that, rather than use heavy and difficult-to-cut plexiglass, we look into vellum as our cloud material, as it is light and keeps its shape after being folded.

Experimentation with vellum taught us a lot. We liked the way light moved through it and its versatility, and after trying several forms we settled on a triangular prism for our cloud-chime objects. However, we did not like the look of the vellum and wanted something more uniform and robust. We settled on a thin plastic and consulted John Diessel in the Plastics Lab. John suggested that, since we planned to manufacture many identical objects from lightweight plastic, vacuum forming was the best process for us. We built a form out of wood with the help of Reza Safaei in the Maker Lab and returned to the Plastics Lab to begin making what we affectionately came to refer to as “the boys”, all of which is discussed in detail below.


Hanging apparatus

Our conductive quilt was designed to control and move elements using servos. At first we went for simple geometrical shapes and simple constructions. 

cloud shape ideation

We had started out wanting to control 12 unique shapes, as we had built 12 sensors into our quilt. We became very focused on the quilt, to the point that we had used up much of our allotted time before turning our attention to the servos and hanging apparatus. Nick had suggested vellum, a material which is easy to control because of its light weight. The problem we encountered was that whenever we folded the vellum into the shape we wanted, it would become brittle and break at the folds. We were also growing frustrated with how to control each shape with the servos and how to mount the servos on the ceiling. We considered laser-cutting a mount to hold the servos together, but decided against it, feeling it was too late in the process to begin something we were unfamiliar with. At this point the team was becoming unsure about using servos to control the shapes at all. Our teammate Olivia suggested buying some fans and a relay, so that the quilt would start a fan based on where the participants sat; the fan would then blow the shapes around. We did a rapid prototype, using the vellum to construct simple modular shapes, and hung them up to see the effect. We all agreed that the simple shapes, combined with the lights, would look great.

Prototype with vellum and lighting

We liked the shapes and the effect they created, but we were still unsure about the fans: the relays we looked into were expensive, and we were not sure whether we would need to write new code for them. The relays also came with a large constraint: only one attached object per relay could be powered at a time. This, combined with the cost, led us to shelve the idea of using fans.

We made a trip to the Plastics Lab at 100 McCaul to consult on our simple modular shapes. John Diessel suggested we use lightweight acrylic and the vacuum forming machine, and that we fabricate a mold to use with it. The largest size the machine could handle was 12×12 inches.

We went back to 205 and visited Reza at the Maker Lab on the 7th floor. He understood exactly what we were looking for and helped us construct a form from our vellum prototype.

We took the form back to the Plastics Lab, where we were instructed on how to use the machine to vacuum form our shapes. Each sheet gave us 8 shapes, which meant we could produce a lot of shapes quickly. We bought ten 12×12 sheets, five translucent and five white.


Vacuum form in action


It took roughly 5 hours to construct and cut the shapes. Because the Maker Lab and the Plastics Lab were both closed at night, we had to cut the shapes by hand using scissors. This took a long time and was physically taxing on the team.


After we had finished molding our plastic shapes, we were still unsure how to hang them from the ceiling grid of the Experimental Media Lab. We decided to use aircraft cable both to hang the apparatus and to clip the shapes to it. We made a trip to Canadian Tire in the morning to buy crimpers for the aircraft cable. Unable to find the right grid to hang, we made an exploratory supply run and came across a barbecue grill. Excited by the image of three circular mounts hanging in a staggered manner, we decided to buy three barbecue grills.


At the point we bought the barbecue grills, we had only 80 molded shapes, which, cut in half, gave us about 160 hanging pieces. We soon realized that would not be enough for three grills, which meant another quick run to the plastics lab.

This is where we, as a team, should have scaled down rather than up. Building the hanging apparatus consumed a lot of time and energy. In retrospect it would have been better to go with a few large, simple shapes rather than so many small ones. The small shapes made sense in the moment and looked good when hung, but they took a long time to construct.


Blanket Controller

Meanwhile we had also been fabricating the blanket. We used documentation from “Intro to Textile Game Controllers Workshop” run by Kate Hartman to fabricate analog sensors from the conductive fabric she gave us.

We built several small sensors to test, including one we sewed into a “sandwich” with regular fabric above and below in order to approximate the effect of the sensor when sewed into the blanket.


The test sensors worked well, and we felt we were ready to scale up and begin fabricating full-size sensors. We laid out a large sheet of paper in order to mark and measure out the approximate size of the blanket.


We decided a size of 4 feet by 4 feet was ideal, as it was large enough for two to lie comfortably while not being too large to manage. We debated for some time on the best way to lay out and orient the sensors, with pitches ranging from as few as four sensors arranged in quadrants to dozens arranged in small triangles.

Blanket and sensor ideas 1

sensor ideation

We settled on the final version, pictured below. It allowed us to have each sensor's points of contact on the edge of the blanket, meaning we would not need to run wiring through the blanket proper. It was, we felt, a manageable number of sensors, but enough to give us plenty of options for interactions in the final code. We also felt it was aesthetically pleasing, and thus an excellent blend of form and function. We selected classic “picnic” fabric in red, blue, yellow, and white gingham to give the device the affordance of a homemade picnic blanket.


We plotted the sensor placement at 3-inch intervals, allowing 3 inches of velostat width per sensor, with conductive fabric cut slightly thinner than the velostat. We ironed the conductive fabric to strips of red-checked cloth, attached the velostat with dabs of hot glue, and folded the two sides together. They were kept in place with a few more dabs of hot glue until they could be sewn together permanently. We took pains to avoid puncturing the conductive fabric, sewing along the outside of the velostat. We left the ends of the conductive fabric trailing out of pockets at either end of the sensor to allow for easy connection later.

Building full scale sensor: Measuring out pattern for cutting fabric and Velostat.

Building full scale sensor: Measuring Velostat for cutting.

Building full scale sensor: Pattern tracing onto Velostat.

Building full scale sensor: Velostat laid out onto our 4×4 ft model.

Below: the process for making the sensors.


After attaching two 3-inch-wide lengths of velostat, block out a length of cloth slightly wider.


Cut out the cloth.


Cut out a second length, the same size as the first.


Place and iron lengths of iron-on adhesive.


Iron the conductive fabric to the iron-on adhesive.


Lay the velostat over the conductive fabric.


Use small dabs of hot glue to keep the velostat secure on both sides.

Not pictured: sew both halves together

As we completed each sensor, we tested it to ensure it was viable. When all the sensors were sewn and tested, we cut a swatch of blue checked cloth at 4.5×4.5 feet to be the base of the blanket. We measured out and placed our sensors where we wanted them to be, then pinned them in place.

We conceived of and experimented with a power bus of conductive fabric along two sides of the blanket, to reduce the amount of wiring we would have to attach. We liked this idea as it made use of the blanket’s form to inform the function of the installation. However, we discovered that this layout diminished the voltage too much to get reliable sensor readings, and we shelved the idea of the power bus. In retrospect, this should have been a warning sign to us that the power we were supplying was insufficient for our purposes.

One by one we sewed a hem in between the sensors. This fixed them in place on the blanket base, covered up the ratty ends of the sensors, and had the added benefit of making the blanket look softer and more inviting to sit on. We intended to fold the extra fabric at the edge of the blanket over, making a hem and a channel for wiring, keeping the blanket looking nice and minimizing obvious electrical attachment points.

Fabric getting ready to be attached to the sensor

Ironing fabric adhesive

Laying out conductive fabric onto adhesive

Ironing sensor onto fabric

Stitching fabric and sensor

Unfortunately, our trusty sewing machine hit a snag late in production (the housing for the lower bobbin was pushed out of alignment, jamming the machine). While apparently not an uncommon problem, online diagnostics recommended taking the machine in for service rather than attempting a fix as laypeople. Without enough time to get the machine fixed or exchanged before the deadline, this was as far as we would get with our blanket. Luckily, all the sensors were secured by this time, and subsequent stitching would have been purely aesthetic.



Our main concern was how to code the blanket to create an interesting relationship between the laid-out sensors and the servo motors above, and we were curious how users would explore the interaction between the two separate components. We contemplated a one-to-one relationship (one servo motor for every sensor). We also considered a rippling effect among the servo motors: when one servo was activated, a chain of the surrounding servos would also move.

It was also important to us that the clouds above reflected the participant’s position beneath the hanging apparatus. We thought this was interesting because it made the piece a reflection of the interaction itself.

The design of our quilt gave us the aesthetic of “quadrants”. We decided that we could determine the user’s position based on the sum of the values from each quadrant. From there we mapped out all of the inputs and outputs that needed to have a relationship.


Before we scaled up, we wanted to test the textile analog sensors as inputs controlling a servo motor and LED light strips. We determined the threshold of both sensors when some pressure was placed on them, then used that data to decide when the motors and LED lights should be activated. This was a great initial proof of concept, and we decided to proceed with this base code.

Our next step was to think about how to create a more interesting connection between the user activating the sensors and the motors and LED lights responding. We did not want the quilt to simply become a switch. As a solution, we created cases for each quadrant. Each quadrant would take the sum of its sensor inputs; the sum indicated roughly how many sensors were being activated in that quadrant. The cases were as follows:


Maximum: most likely all of the sensors are being activated

  • Trigger all of the associated servos
  • Trigger all of the associated LED lights at full brightness


Medium: most likely two of the sensors are being activated with great pressure

  • Trigger two (or one) of the associated servos
  • Trigger two (or one) of the associated LED lights at full brightness


Minimum: most likely the sensors are being lightly activated

  • Randomly choose one servo to go on and off each time this case is triggered
  • Choose the corresponding LED light to go on and off


Resting: All of the servos and LED lights are off


Setting up for the critique

As we set up for our critique, it became apparent we had scaled up too much to implement our code as written. While assembling, we decided to scale our inputs down to three sensors. Kate suggested that rather than isolating the interaction to one quadrant, we divide all of the sensors into three “super sensors”. Our quilt pattern naturally allowed us three rings of sensors: one on the outside, one in the middle, and one on the inside. We connected our quilt according to this diagram:



Another thing that became apparent was that the hanging apparatus, due to its circular shape, was hard to mount and hang in a balanced manner. We had run out of aircraft cable (which had proved extremely difficult to work with), so we used twine to get the shape mounted. Another difficulty was wiring the apparatus to the floor: we did not have wires long enough to reach our breadboard. We attempted to use long individual wires, but that was impractical. Kate and Nick lent us long modular wiring, which significantly helped with the hanging process. We also learned that Kate is a master of knots; her wizardry helped us hang the apparatus quickly and safely.




Diagram only shows what was connected for critique

  • 1 x Arduino Mega
  • 3 x textile sensors
  • 3 x 50 ohm resistors
  • 3 x LED light strips [6 pixels each]
  • 3 x Micro Servo Motors



We had lofty expectations for this project which the completed version did not meet. This was not for lack of effort on any aspect of the build; we felt, in the end, that the two weeks we were allotted was simply not enough time for the team to build, test, and iterate on the design enough to reach the state of completion we had envisioned.

In the end, we did have an interactive experience in which the quilt activated LED lights and gently moving servos above. We also incorporated a projection behind the piece to elevate the sense of being out in nature.



This project taught us many things about working with unfamiliar materials and pursuing lofty goals in a short time frame. Some core reflections we have taken away are below.

We encountered many challenges we did not foresee or appreciate during the planning phase. These included:

  • The amount of time required to fabricate objects of the size and complexity we envisioned
  • The difficulty and time required in learning to effectively use new tools
  • Power management with sensors that we had created from scratch
  • Effectively scaling from a working prototype to a full-sized installation
  • Accounting for the “unknown unknowns” that crop up in projects

Were we to take on a similar project in future, we would:

  • Focus on one core interaction – for example, we would focus on only the blanket or the hanging apparatus
  • Do careful math when fabricating rather than making estimates
  • Start with fewer/smaller materials and scale up
  • Make purchases of materials in small amounts to prototype with

In terms of the use of textiles, we came across a couple of discoveries:

  • Our sensors only worked consistently when the ground and the positive were clipped to the opposite ends of the fabric. We experimented with having the two ends of the circuit clipped close together, which – while somewhat effective – was unreliable for our purpose.
  • When all of the twelve sensors were divided and clipped together to make three “super” sensors, we had to lower the resistor values significantly to get any viable reading to use with our code.
  • Physically small sensors gave more reliable readings than large sensors at the same voltage.
  • It is possible to use conductive fabric as a “power bus” to power multiple sensors – though at our scale, this diminished the voltage to an amount where they were not usable for our purpose.

Next steps to take when we return to this project include:

  • Test the sensors with higher power and/or using multiple power sources
  • Test multiple variations of circuitry running through the blanket
  • Design, from scratch, an apparatus for hanging the clouds, with the same focus we had as we designed the blanket
  • Explore wireless communication with the hanging apparatus
  • Reconsider the form of the “above” apparatus
    • For example, explore projection of a generative image rather than a physical apparatus



Kate Hartman & Yiyi Shao. Intro to Textile Game Controllers. Workshop held at Dames Making Games at Toronto Media Arts Centre on November 14, 2018

A special thank you to Nick Puckett whose advice on fabrication was invaluable, and who went out of his way to help the project get set up in time for its show.

A special thank you to Kate Hartman for her donation of material and tools, for going out of her way to help the project get set up in time for its show, and for the infectious enthusiasm that kept us going.

Sound Synthesis

Project by: April De Zen, Veda Adnani and Omid Ettehadi
GitHub Link:


Music Credit: Anish Sood @anishsood

Contributors: Olivia Prior and Georgina Yeboah
A special thanks to Olivia and Georgina for letting us leverage the code from Experiment 2, Attentive Motions. Without the hard work contributed by both these ladies the musical spheres would not have been finished in time; we are truly grateful.

Figure 1.1: Left, Final display of Sound Synthesis
Figure 1.2: Center, Sound Synthesis Team
Figure 1.3: Right, Special thanks to Attentive Motions Team

Project overview
Sound Synthesis is an interactive light and music display that allows anyone passing by to become the party DJ. There are 3 touch points to this system. The first is the ‘DJ console’, which is made up of children’s blocks; each block controls a different sound stem, triggered by placing the block on the console. The next two touch points are wireless clear spheres, each containing LEDs and a gyroscope, which trigger another sound stem when the sphere is moved. These interactions not only activate sound and lighting but also invoke a sense of play across all ages.

Intended context
The team’s intent was simple: bring music and life to a gallery show using items common in child’s play. Relinquishing control over the music and ambience at a public event seems crazy, but this trio was screwy enough to give it a try. The goal was to build musical confidence among the crowd and allow them to ‘play’ without the threat of failure. For a moment, anyone is capable of contributing to the mood of the party, regardless of their musical experience.


Figure 2.1: Left, Final display of Sound Synthesis
Figure 2.2: Veda showcasing the capabilities of each musical sphere
Figure 2.3: Veda showcasing the capabilities of the DJ console
Figure 2.4: Center display in action

Product video

Production Materials


The team brainstormed different ways to combine older projects into a playful music experience for those visiting the end-of-semester show. The ideation process started off quite ambitious, attempting to match the footprint of another project called ‘The Sound Cave’.

Figure 3.1: Left, Initial drawing of floor layout
Figure 3.2: Center, Initial drawing of DJ console, sphere and proposed fabrication of center display
Figure 3.3: Right, Initial drawing of additional touch points for more interactions (if time allowed)

The Sound Cave had five stations hooked up to their center unit, with a different interaction at each station. The original plan was to use the display tower from Omid’s Urchestra project as our center display, with a few alterations. The first station would involve a kid’s puzzle taken from Veda’s Kid’s Puzzler project; the interaction would remain the same, using pull-up resistors and copper tape to create a button. The next station would have the clear spheres from the Attentive Motions project; that interaction would also remain the same, using the gyroscope to sense motion and send a signal to the main unit. The next 3 units would be brand new, and this is where our ambitions got the best of us. After further group discussion, it was decided to add only one more station to the project. The new station would involve a version of a touch sensor that required a wearable to ground the circuit, see figure 3.3.


Figure 4.1: Left, For a detailed understanding of the LED tower : Urchestra
Figure 4.2: Center, For a detailed understanding of the block puzzle : The Kid’s Puzzler
Figure 4.3: Right, For a detailed understanding of the clear spheres : Attentive Motion

Journey Map


Figure 5.1: Top, The first ambitious version of the Journey Map
Figure 5.2: Bottom, A more realistic and achievable Journey Map

As a team, we came up with a schedule. Early on we wanted to make sure we were being realistic about the amount of work we were taking on, especially since there were many other final projects in other classes. The schedule needed to shift from time to time, but overall we were able to stick to it and achieve a final product we are all very proud of (with enough sleep).


Figure 6.1: Team workback schedule

One of the benefits of revisiting previous projects is that most of the hard work has already been done. The first thing we needed to do was see what data we could get from each of them and assess what else needed to be added or altered.


Figure 7.1: Left, Changing the Arduino Micro to Feather ESP32, Center Circuitry for DJ Console and Spheres. Right, Installing the Circuitry into the base of the box.
Figure 7.2: Center, Moving the circuits from breadboards into prototyping boards
Figure 7.3: Right, Adding LEDs to the puzzle

The DJ Console (Blocks) The puzzle used six switches, with copper tape underneath the shapes to complete the circuit. It also had a single LED to indicate when any of the shapes was placed in its right position. Each shape corresponded to a specific sound that was then played through the P5 file.

We wanted to stick to the same principle, with a straightforward addition. To provide instantaneous feedback on any changes that were made, instead of having only one LED we placed six of them, indicating how many blocks were active at any time. The system still used an Arduino Micro that sent the switch data over a serial connection to the P5 file. The data was then sent to PubNub so that the display system could use it.

The Sphere The ball used an Arduino Micro, an Adafruit orientation sensor, an LED strip and a small speaker. It used to make noise whenever it sat still, asking people to move it. We no longer wanted the device to play any sounds; we only wanted it to send the orientation data to PubNub. To do that, we got rid of the speaker and swapped the Arduino Micro for a Feather ESP32 board, which read the data from the orientation sensor and sent it to PubNub. To provide real-time feedback to the user, the LED strip would light up whenever the ball was shaken.

The Center Display The display used an Arduino Micro, an LED strip and nine switches made of copper tape. The biggest issue with this design was the need for copper tape under shoes to complete the circuit, so we got rid of the tape and used the piece purely as a display. We added two extra LED strips to make the experience much better.
The P5 sketch read the data sent from the two balls and the puzzle and, based on their configuration, played the associated track. The data was then sent to the Arduino Micro over the serial connection to control the 3 LED strips. The primary LED strip was tied to the puzzle: if any of the keys were placed, the strip would flash green every 2 seconds; otherwise it flashed white. The other 2 LED strips were each tied to a specific ball, flashing the same colour as the ball that was shaken.


Figure 8: The team testings of the units

Sound and Design
The sound was the most critical piece of the experience for us. Since none of us had worked with music before, we were most concerned about how the experience would come alive without high quality sound output. Instead of making any guesses, we turned to Adam Tindale, who has been working with sound for the last three decades.

Our meeting was extremely productive, and the most important lesson from it was the difference between creating a musical experience and a musical instrument. To create a musical instrument you have to have a very deep understanding of the instrument: how it works and what sounds it can produce. The audience for such pieces is usually musicians with similar knowledge. We found a relevant case study that confirmed this, and we knew this was not the experience we wanted to create.


Figure 9.1: Left, Cave of sounds, a musical experience
Figure 9.2: Right, Color Chord – a technological musical instrument

We wanted to design an experience that made it easy to play with music, and could empower users of all experience levels to create music of their own. Learning to use a musical instrument is a difficult task requiring countless hours of disciplined practice, so we asked how we might do the opposite: create something that is inclusive, easy to use, and engaging at the same time. We needed a total of 8 sounds: six for the DJ console (puzzle blocks) that set the main track, and 2 accent sounds, one for each sphere, triggered on shaking.

We began our search online, looking for royalty-free sounds. We even tried working with Ableton and GarageBand to see if any sounds would work together to create a synchronized soundtrack. But nothing available online was good enough, and since none of us had prior sound-making experience, we turned to our friends to collaborate with us.

Anish Sood is a renowned DJ, songwriter and music producer based in Goa, India. He focuses on EDM, House, Techno and Electro House, which felt like the right fit for our experience. We did a call together and briefed him in detail on the project. We wanted a track that was upbeat yet soothing, and not monotonous to listen to. We took inspiration from the artist Kygo to describe the kind of sound we wanted to produce. We also shared with Anish many pictures and videos of parts of the experience and our vision for it. He was extremely receptive and put together a beautiful track for us within 24 hours of our call. He created six sounds for the DJ console, divided between base sounds and overlapping instrumental and vocal sounds. He also sent us the master track so we knew what it would all sound like when it came together.

Playlist for the DJ Console:

For the spheres we wanted sounds that accentuated the base track from the console well. After a mini-brainstorm we shortlisted a tambourine and a gong for the spheres.

Playlist for the Spheres:


Our fabrication process was smooth and streamlined. The following steps were part of the process:

The DJ Console (Blocks) We already had the base for the DJ console in place from Experiment 3. This included the puzzle itself, a base box, and a single LED to indicate whether the device had been activated. To convert the design from a kid’s toy into something more mature, we decided to spray paint its colourful keys in a simple black and white design. We also had to add five more holes for the additional feedback LEDs, and one for the connecting cable. While presenting, we used a plinth that housed the laptop underneath.


Figure 10.1: Left, drilling holes for LED lights into box
Figure 10.2: Center, Adding circuitry into box
Figure 10.3: Right, Spray painting shapes for box

The Sphere The fabrication of the spheres was already done in Experiment 2. The only things that needed to change were the circuit and the addition of a battery holder for the LED strips, so that they could run for more than 3 hours.

The Center Display We decided to stick with the same object made for Experiment 3. The only change needed was to remove the extra ultrasonic sensors from the box. We added a base to the design so that we could glue down the three cylinders holding the three LED strips, and a back panel so that the LED strips would be invisible when the device was off.


Figure 11.1: Left, adding more LEDs to original circuit created for the Kid’s Puzzler project
Figure 11.2: Center left, rewiring new and improved DJ console
Figure 11.3: Right, April making alterations and rewiring to the original display unit used in the Urchestra project


Figure 12.1: Left, Final project layout
Figure 12.2: Center, Fine tuning the blocks and sphere
Figure 12.3: Right, Fine tuning the center display

Final Fritzing Diagrams


Figure 13: The final circuit for the hamster balls


Figure 14: The final circuit for the Blocks (DJ Console)


Figure 15: The final circuit for the center display

Presentation & Show


Figure 16.1: Left, Final floor plan of Sound Synthesis
Figure 16.2: Right, Instructional signs placed on plinth under each interactive device

For the final show, we wanted to make sure the connection between the three pieces was clear and that users knew what each piece did. To do that, a clean installation of the work was crucial. We placed all the objects in a corner where the center display could be seen from each of the stations. We used plinths of the same height and printed short instructions on what to do with each piece, to make sure users were clear on their role in the experience. We also printed matching ID cards and wore black and white, to look like a team at the exhibit.

An issue we had to deal with was making sure the web browser for our display unit was refreshed every now and then, as the large quantity of data sent to it made it crash if it was left open for a long time. We made sure that at least one person was at the station at all times so nothing went wrong.

We received very positive feedback on the project. People were very interested in how easy it was to act as a DJ and play with the sounds without having to worry about the pace of each track or how to synchronize them. Kids especially enjoyed the experience because they were used to puzzles and games, and they really liked being in charge of what was played. Others enjoyed the unusual interface for the music: they liked how simple it was to control, how little work they had to do to get good sounds out of the system, and how instantaneous the feedback was. One suggested improvement was to add more tracks and give users the ability to choose which track belonged to each piece.


As a team, we really hit our stride with this project. Since we all enjoyed working together so much during project 4, we thought we would go out with a bang together in project 5. The three of us each brought something different to the table, and we found ways to utilize each team member’s strengths. Omid not only spearheaded the coding but is also extremely patient, and he slowed down his process so we could all understand the code for each device and troubleshoot any errors. Veda is extremely detailed in her design approach: it’s not enough for something to just look good, she makes sure each design is functional and user friendly, in every detail. April brought her professional experience with meticulous project management, scoping and planning, graphic design and human-centred thinking. Her skill set with fabrication and printing methods was also a blessing.

One of the most important lessons for us was to scope realistically, and leave a safety margin for debugging and troubleshooting. We also made sure to give ourselves enough time to iron out all the details for the actual presentation and setup.

After all the hard work we were able to achieve something that works beyond the level of a basic prototype. Hamster balls were dropped and the system crashed, but everything was back up and running without anyone at the party noticing. We are extremely proud of the final product and still can’t believe how well it turned out. If this project were ever to be scaled up it would require more stable software and possibly custom microcontrollers, but for a two-week student project, we are very proud.


Figure 17.1: Left, April and Veda rocking out at the final show
Figure 17.2: Right, Veda continues to rock, While Omid makes sure everything is under control

Cave of Sounds. (n.d.). Retrieved from
Romano, Z. (2014, May 22). A tangible orchestra one can walk through and play with others. Retrieved from
Schoen, M. (n.d.). Color Chord. Retrieved from
Tangible Orchestra – Walking through the music. (2014, June 03). Retrieved from



Tinker Box


Tinker Boxes are physical manipulatives designed for digital interaction, based on the concept of MiMs, or “Montessori-inspired Manipulatives” (Zuckerman, Arida, & Resnick, 2005). The boxes are low-fidelity devices used to bridge physical interaction with the digital world. They are aimed at children from 5 to 7 years old but can be extended to any age, depending on the frontend software designed to fit the interaction. This iteration of the software looks at using it as a scaffolding tool to teach kids how to recreate or understand the making of a physical object, Lego toys in this instance. The plan is to extend it to build other educational games and interactions that explore concrete and abstract concepts.



What can I tinker with next? The goal of Experiment 5 was to take a project from the past weeks and add to it in some meaningful way. This could be anything, but “anything” is a very large canvas given 1.2 weeks from concept to completion. I would like to say it was all clear from the start, but it was murky what this next step would look like. I thought about what I liked about the old project and what I did not, and the biggest pain point was quite clear: the potentiometer broke the interaction by dictating how far the kids could make the characters chase each other before having to reverse and go backward to continue the interaction.

This whole experiment would be about figuring out how to use the rotary encoder. The initial idea was to rework the first version of the box and add the rotary encoder to it, but this would mean reverse-engineering the hardware to fit in the new part. It was not really worth it, as the first version worked quite well as a proof of concept, and I wanted to keep it that way.

I then decided to use the RE [rotary encoder] to create a new kind of interaction, but also to examine critically what it was I was building. I used the case study assignment to dive deeper into what the interaction meant and how I could position the work in a meaningful way, drawing on the paper “Extending Tangible Interfaces for Education: Digital Montessori-inspired Manipulatives.”


So what could I make interactive and meaningful to children? This was the question I kept asking myself. It’s easy to build an interaction, but making it meaningful and pedagogical is where the challenge lies. I looked at my kids for inspiration, where I usually start; they are learning through play every single day, but we seem to miss it.

I’m jumping ahead a bit, because before I could even imagine what kind of interaction I wanted, I needed to get the rotary encoder working and sending data. This may seem like a no-brainer for a coder, but for a person new to the coding world of p5 and Arduino it was a critical first step; without it, no dice!

The base code for the project was a mix of the class code from Nick and Kate and code from Atkinson. I got the encoder sending a signal and then used the JSON library to parse the data so I could read it in p5.js. In retrospect, this is not the best way to do it: since I need to map different variables based on the length of the sprite animation being controlled, the better approach is to send a large range from the Arduino and then map that range down to what is needed for each individual interaction. This is a bit technical, but if you do venture into using my code it is something to keep in mind when modifying it to your needs.
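To illustrate the mapping approach described above, here is a minimal sketch (with hypothetical names and message format, not the project’s actual code) of how the p5.js side might parse a JSON message from the Arduino and map the raw encoder range down to a sprite frame index:

```javascript
// The Arduino sends the raw encoder count as JSON over serial,
// e.g. {"encoder": 742, "button": 0}; p5.js parses it and maps the
// large raw range down to whatever each interaction needs.

// Map a value from one range to another (same behavior as p5's map()).
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) * (outMax - outMin)) / (inMax - inMin);
}

// Called for each line arriving from the serial port.
function handleSerialLine(line, spriteFrameCount) {
  const data = JSON.parse(line); // e.g. {"encoder": 742, "button": 0}
  // Assume the Arduino reports 0..1023; map down to a frame index.
  const frame = Math.floor(
    mapRange(data.encoder, 0, 1023, 0, spriteFrameCount - 1)
  );
  return { frame: frame, buttonPressed: data.button === 1 };
}
```

Keeping the mapping on the p5 side means each sprite animation can pick its own output range without reflashing the Arduino.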


OK, now that I had the encoder and the button working, I assembled all of the hardware even before I could get the software working. Why would I do that? Basically because time was running out, and once you start writing software, debugging and refining is a rabbit hole you can fall down till the cows come home; I might never have gotten to finish the physical hardware. This has happened on other projects, where the software took precedence and the hardware ended up being presented on a proto-board because there was no time left for refinement and fabrication.


The circuit is pretty simple, as you can see in the Fritzing diagram below. It uses:

  • 1 Rotary encoder
  • 1 button
  • 1 Arduino Micro Original

That’s it; the circuit is very clean, so I could get to the main task of creating the interaction.


Once the circuit was done, I built the housing and soldered all the components onto the PCB.



I now looked at all the possible interactions I could create, using this rotary dial.

The basic idea is turning a value up or down; you can then map this value to anything you like. In my case, I decided to use sprite animations.

Coming back to observing my son play with Lego: he would iterate and create new things, cars, safes, vending machines; the list was exhaustive. He would follow along with YouTube videos or just iterate, then share these creations with us at home and take them to school to show his friends. The thing is, people could only see the completed work, not the process of getting there or even the individual parts that made the whole. This sparked an idea based on other stop-motion projects I had seen. With my son’s permission, I broke apart his creations brick by brick and shot them using an iPhone and a tripod, then used the photos to create sprite sheets controlled by the Tinker Box’s rotary encoder. It took a bit of time to figure out how sprite sheets worked and what was possible, but it worked, and the end result was satisfying. I then used the button to switch sprites and show another animation, so the user could scroll through the different creations I had animated.
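As a rough sketch of how a sprite sheet can be scrubbed by the encoder (the frame layout and sizes here are illustrative assumptions, not the project’s actual values), the frame index selects a source rectangle on the sheet:

```javascript
// Given a sheet of equally sized frames laid out in rows, compute the
// source rectangle for a particular frame index.
function frameRect(frameIndex, frameWidth, frameHeight, framesPerRow) {
  const col = frameIndex % framesPerRow;
  const row = Math.floor(frameIndex / framesPerRow);
  return {
    sx: col * frameWidth,   // source x on the sheet
    sy: row * frameHeight,  // source y on the sheet
    sw: frameWidth,
    sh: frameHeight,
  };
}

// In p5's draw(), something like the following would blit the current
// frame (sheet is a loaded spritesheet image, currentFrame comes from
// the encoder):
//   const r = frameRect(currentFrame, 240, 240, 8);
//   image(sheet, 0, 0, r.sw, r.sh, r.sx, r.sy, r.sw, r.sh);
```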


The interaction was automatic: no instructions were needed. People turned the knob and clicked the button; I had built on their past experience of what buttons and knobs do. It was now just a matter of changing the software to create a pedagogical experience for the child.

Some ideas I came up with based on this interaction are:

  • Simple Machines: the box could be turned on its side and the knob replaced with a crank, lending itself to simple machines like cranes, fishing poles, ratchets, pulleys, etc.
  • The process of folding and unfolding has numerous pedagogical uses, not least the simple wonder of seeing what is inside something: the layers of the earth’s crust, making planets move, or rotating objects on a different axis.
  • This makes the MiM very versatile yet simple in its interaction; the triangle completes when the software fits the user and the interaction.


Some of the feedback was that I should use this as an educational tool for product assembly, such as building IKEA furniture, and pitch it to the company to create stop-motion videos showing the different steps.

There was also interest in seeing how two of these devices could change the interaction if they controlled different aspects of the same Object/Interaction.


I would like to explore this prototype further and build more Tinker Boxes that network together or are even wireless. I had an early idea of building a wireless interaction, but Nick said it might introduce delay because of relying on a server like PubNub, so I will look into whether there is any way to interface directly with the Mac/PC without third-party software.


Zuckerman, O., Arida, S., & Resnick, M. (2005). Extending tangible interfaces for education.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems – CHI 05. doi:10.1145/1054972.1055093

Atkinson, M. (n.d.). MultiWingSpan. Retrieved from

GitHub code can be found here:


Experiment 5: Refine/Combine/Unwind




Exploring an Art & Graphic Design movement through computational means.

GitHub Link

Team Members
Carisa Antariksa, Joshua McKenna, Ladan Siad




Project Description

//generative(systems); is an investigation into Constructivism, an art and graphic design movement from the 1920s, through generative form. By referencing elements popularized by that movement, this project explores the automation of design processes on the basis of variance, and the discourse around the move towards engineered design systems. How do we as designers create distinct works when we are in a time of algorithmic design? Can our work still be dynamic, compelling and emotive? //generative(systems); examines the process of algorithmic automation and how it will affect viewers’ connection to the aesthetic experience.

This experiment expands upon the Generative Poster project presented in the Creation & Computation course’s Experiment 3, This and That. The original concept involved generating multiple iterations of a single design through computational means, where the intent was for a user to have a unique copy of a poster according to a predetermined design system.


Tune by beatpick

How it works

Our code executed the following instructions:

  1. Randomly select a background color from a pre-made array of colors.
  2. Randomly select elements (triangles, circles, rectangles and other quadrilaterals).
  3. Randomly assign a color independently to each element.
  4. Randomly determine the number of elements to be placed on the page.
  5. Randomly determine the composition, placement and scale of each element.
  6. For each element selected, build an accompanying string value of the code drawn on the p5 sketch.
  7. Send the string values to a separate browser page in the order they appear on the graphic sketch.
  8. Save the composition as a .jpg.
  9. Print the code from the browser to a physical paper artifact.
  10. Have the generative graphic sketch wait until the print sketch has finished printing.
  11. Restart the sketch.

This then repeated throughout the exhibition, conveying the design decisions the computer made based on the algorithm set in the code.
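The random-selection steps (1–5) above can be sketched roughly as follows; the palette, shape names and ranges here are illustrative assumptions, not the project’s actual values:

```javascript
// Hypothetical Constructivist-ish palette and element set.
const PALETTE = ["#D7402B", "#1A1A1A", "#E8D8B9", "#2B4E72"];
const ELEMENTS = ["triangle", "circle", "rectangle", "quadrilateral"];

// Pick a random item from an array.
function randomChoice(arr) {
  return arr[Math.floor(Math.random() * arr.length)];
}

// Build one poster composition: background, then 3..8 randomly chosen,
// colored, placed and scaled elements.
function generateComposition(canvasWidth, canvasHeight) {
  const background = randomChoice(PALETTE);        // step 1
  const count = 3 + Math.floor(Math.random() * 6); // step 4: 3..8 elements
  const elements = [];
  for (let i = 0; i < count; i++) {
    elements.push({
      shape: randomChoice(ELEMENTS),               // step 2
      color: randomChoice(PALETTE),                // step 3
      x: Math.random() * canvasWidth,              // step 5: placement
      y: Math.random() * canvasHeight,
      scale: 0.25 + Math.random() * 0.75,          // step 5: scale
    });
  }
  return { background, elements };
}
```

A p5 sketch would then walk the `elements` array and issue the matching `triangle()`, `ellipse()`, `rect()` or `quad()` calls.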

Project Context

We wanted to use this opportunity to apply the tools we had learned in the course to dive into the history of graphic systems. The idea of generative computation was very interesting to us as a tool for designers in this age, where creating iterations through an efficient algorithm can open up more possibilities and more frequent “happy accidents.” We first had to decide which graphic design movement we wanted to reference as the basis of our system. Initially we proposed three historical graphic movements as possible selections for our project: the Memphis movement from the late 70s, the Bauhaus movement from the early 30s and the Russian Constructivism movement from the mid 20s. We soon narrowed our selection down to two, Constructivism and possibly Bauhaus, given that one movement preceded and informed the other.


From left to right, original posters from the Constructivism, Bauhaus, Memphis Movements

Constructivism was a movement with origins in 1920s Russia, where it was primarily an art and architectural movement. It rejected the concept of art for ‘art’s sake,’ which catered to the traditional bourgeois class of society, and instead focused on using art and design as a political tool. Within this period, Kazimir Malevich coined the term “construction art,” and his “Suprematism” became a prominent characteristic of the movement’s visual language. The style was also widely influenced by the work of artist turned designer Aleksander Rodchenko, who experimented with these forms and combined them with photomontages.


Russian Suprematist Paintings by Kazimir Malevich

In essence, the movement’s visual language involved a free combination of basic geometric forms, such as circles, squares, lines and rectangles, within a limited range of colors. The application of this visual language ranged from product packaging to logos, posters, book covers and advertisements. Despite the movement’s unfortunate end under the rule of Stalin, its legacy lives on through succeeding movements, particularly Bauhaus and International Type Design. Its principles are still widely used as parameters in contemporary design, especially within flat design for digital media.

In our research, we also selected 3 case studies relevant to the scope of this project iteration, ranging across applications in modern architecture, branding and graphic design.

  1. Project Discover explores the capability of software to produce an endless slew of architectural designs that satisfy specific criteria within a given project in terms of cost, constructibility or performance dynamics, at a pace and level of productivity that would be impossible for human beings to match. Their hope is that they go beyond basic automation to create an expanded role for the human designer and a more dynamic and collaborative interaction between computer design software and human designers in the future. This study helps contextualize our intentions with the project.
  2. “Randomatizm” is an exploration into the Suprematism movement of early-1900s Russia. The term Suprematism refers to an abstract art based upon “the supremacy of pure artistic feeling” rather than on the visual depiction of objects. To date, the online gallery presents 16 works by famous figures of Suprematism, and visitors to the online exposition have generated more than 80,000 random files. We found the variance in juxtapositions between each generated composition to be a good foundation for our research in expanding upon this experiment.
  3. “Oi” is a flexible brand identity created by the renowned Wolff Olins for the Brazilian telecommunications company. The idea behind the “logo generator,” as the application is called, is for the logo to move, wobble, and respond to customers in a playful and interactive way within the company’s mobile and web apps. In application, the identity mixes all of its identity elements — techie-looking typeface, icons, and blobs — in various ways to keep the identity flexible yet cohesive.

We also owe most of our references to and how the author explores algorithmic and generative design in different design applications.

Process Journal

Sketches of Initial Concept

Sketches of Initial Concept

There were many opportunities to continue different projects that involved physical computing with p5, but the interest that brought us together was in completing a browser-based project. In a group meeting with several others in the cohort, a proposal to revisit the Generative Poster project was brought forward, and the three of us agreed to push that idea forward. It would also further our learning in coding with p5. Along with that exploration, we wanted to see how the project could contribute to the discourse around algorithmic design and its potential to affect us as designers in the future. All three of us have a substantial background in visual communication and graphic design, which motivated us to proceed with this concept.

In writing our proposal for Kate and Nick, we spent time on defining the project and the scope of what can be achievable in the minimum viable product. We also discussed and shared projects that inspired the visual styles that we were interested in conveying. From that basis, we then decided on the design movements we could emulate using generative design. We discussed aspects from existing projects that we liked and saw how they linked to movements that have been applied in contemporary design, such as International Type Design, Memphis and Bauhaus.

Following the proposal, we noted the feedback given to us in a meeting with Kate and Nick the next day. Their key advice was to treat the projection as a digital prototype on the wall. We were also advised to consider carefully how we exhibited the canvas for our poster(s) to achieve the greatest impact; the scaling of the projection was key. What would the template for the posters be? Would it be projected on a wall, or onto poster paper?

There were also opportunities to think about how the space could be used to introduce unexpected outputs, such as exposing the transparency of machine “thinking” through the console log. The presentation could go in many directions, from conveying the computer “thinking” through the generative aspect of the code to introducing minor interactivity with visitors of the installation through human input.

MVP of envisioned Installation


Converging into Main Idea

Our criteria for selecting a graphic design movement depended on the positioning and placement of the elements on the canvas (whether the elements were abstract shapes, typography or simple forms). We noted that if a grid system was present, it would be a helpful parameter and boundary for elements to appear within as part of our generative poster. We studied each movement carefully to determine whether a system was evident that could be reinterpreted and generated independently of the reference paintings. Additionally, we wanted to ensure that whichever movement we selected, the elements could be drawn easily in p5.js; there also needed to be a distinct color palette and theme that could be referenced.

Given the scale of a single generative system we decided collectively as a group that we would only attempt to make a single system based on one movement as opposed to 3 independent systems (one for each previously mentioned movement, Memphis, Bauhaus and Constructivism).

We began our process by attempting to recreate some of the abstract forms in the Memphis movement, but soon realized that it would be a challenge to do so in p5.js as the shapes involved a lot of masking of circular patterns and use of the bezier() shape.


Contemporary Memphis elements that we wanted to draw in p5

We also explored shapes within the Bauhaus movement, looking at more complex shape code, such as different start and stop angles for arc(), and created a document that identified the common aspects of the movement.

90-degree arcs and semicircles


Elements identified for the Bauhaus iteration of the project

We then opted to start with what we agreed was the easiest of the three movements to build a generative system for: the Russian Constructivism movement. After studying several of Kazimir Malevich’s Suprematist paintings from the 1920s, we recognized some recurring shapes and forms: circles, rectangles, triangles (equilateral, isosceles and scalene) and quadrilaterals, all elements that could be made fairly easily with p5’s drawing functions. We observed the common compositions in his work and applied them to the code in terms of how each element appeared on the screen. By referencing the color palette of this graphic design movement and using these paintings as a guideline, we began to build out the code that would later generate random compositions from the above-mentioned elements.


Elements identified for the Constructivism iteration of the project

Coding Process and Challenges


Throughout the process, we assigned tasks to each other to ease the workflow. There were many aspects of the installation that we wanted to implement in the MVP. We began by executing the sketches on 3 canvases that continuously generate in one browser, which may have complicated the process more than it should have. It required the use of namespacing, a resource we found useful in executing this.


We soon scrapped this option, as it would pose many challenges in the code; we would have had to spend too much time maintaining the sketch and drawing within each canvas.

The main challenge we faced while coding our system was making it behave human-like; we wanted it to convey the idea of the system making its own design decisions (i.e. selecting a triangle or a circle as the first drawn element). It was not as simple as we thought to code a machine to act this way. In our first iteration of this project, the code generated the entire poster at once before moving on to a new sketch. It took some time to program a timer into the draw function of the system so that each element would be drawn consecutively, one after another. Along with this problem, we also had to program an additional timer that communicated via PubNub between the two sketches, so that the print sketch could pause the generative graphic sketch once a poster was completed. In the end, this was solved by setting the frameRate and checking the frameCount for each element inside the draw() function.
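A simplified version of the frameCount gating described above might look like this (the per-element timing is a hypothetical value, not the project’s actual one):

```javascript
// Decide how many elements should be visible at a given frame, assuming
// one new element is revealed every `framesPerElement` frames.
function elementsVisibleAt(frameCount, framesPerElement, totalElements) {
  return Math.min(totalElements, Math.floor(frameCount / framesPerElement));
}

// Inside p5's draw(), at a fixed frameRate, this gates the drawing so the
// poster appears to be "drawn" step by step:
//   const n = elementsVisibleAt(frameCount, 30, elements.length);
//   for (let i = 0; i < n; i++) drawElement(elements[i]);
```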

After assigning each randomly generated element's drawing code to a numbered string (e.g. var myString3 = "// Circle\nnoStroke();\nfill(" + circleColor + ");\ntranslate(" + translate2X + ", " + translate2Y + ");\nellipse(0, 0, " + radius2 + ", " + radius2 + ");" instead of placing it straight into console.log()), we were finally able to publish what was drawn on the canvas through PubNub to another browser. Some overlap in the sent messages remained, but this was quickly fixed by defining the position of each text block based on the input, as seen in the print.js file.
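As a simplified illustration of that string-building step (with assumed parameter names), each element's drawing instructions can be reassembled into a code string that the print sketch then receives and displays:

```javascript
// Rebuild the p5 drawing instructions for a circle as a plain string,
// so the "print" sketch can show the code the generator just executed.
function circleCodeString(circleColor, tx, ty, radius) {
  return (
    "// Circle\n" +
    "noStroke();\n" +
    "fill(" + circleColor + ");\n" +
    "translate(" + tx + ", " + ty + ");\n" +
    "ellipse(0, 0, " + radius + ", " + radius + ");"
  );
}
```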

Another task we had to accomplish was printing each poster as it was generated. We wanted printing to work without opening the dialog box: just receive the information and execute the function. The term that kept coming up in our research was “silent printing,” or background printing. We achieved this by changing a browser preference that disables the dialog pop-up. We used Firefox because the instructions were easily accessible: in the about:config page we created New > Boolean, entered the preference name print.always_print_silent, clicked OK, and set the boolean to true. This made printing happen automatically without opening the dialog box. After that, triggering a print was as easy as calling the print() browser function in the code, which prints the contents of the current window. This process is well reflected in the final video.

Aside from printing the console log, we also wanted a database of all the posters the code generated through draw. We managed to link it to a Dropbox folder through the save() command.

Presentation and Exhibition

For the installation, we had a clear idea of how to arrange all the elements in our corner of the Grad Gallery. Given another day, we could have scaled the installation up to two generative systems, Constructivism and Bauhaus, next to each other, with the informative posters for each placed in the center to create a conversation between the two movements and algorithmic systems. However, the Constructivism iteration was the MVP we had hoped for, and we were collectively very satisfied with the outcome. We focused the projection on the corner in portrait orientation by rotating the projector, and placed the printer next to the screen that showed the saved images of the generated posters. The title of the project was also projected to convey the overall concept of the experiment. Over 1200 posters were created, and they are available to view and save through a Dropbox here.

There was an overall positive response to the installation. Many visitors commended the idea of generative design and commented on how it could even reach the level of teaching an AI, albeit a simple version of one. The final exhibit demonstrated to our audience a process that involved both the designer, as the creator of the tool and algorithm, and the computer, generating the sketch based on the parameters set by the designer. It was also an ode to the history of how these systems began to be applied in ways that could create social and political impact. Furthermore, this process could provide further commentary on how a machine might understand a cultural movement such as Constructivism; however, it is important to note that the machine is still only as smart as the person who makes it.


Special Thanks to Adam Tindale for advising us through some p5 challenges and Omid Ettehadi for guiding us through Pubnub.


Algorithm-Driven Design? How AI is Changing Design. (n.d.). Retrieved from

Autodesk Research. (2017, February 23). Project Discover: An application of generative design for architectural space planning. Retrieved from

Bill. (2008, August 22). “Silent” Printing in a Web Application. Retrieved from

Creative Bloq. (2013, October 11). The easy guide to design movements: Constructivism. Retrieved from

Flask, D. (2015). Constructivism : Design Is History. Retrieved from

Google. (n.d.). Google Fonts. Retrieved from

Hebert, E. (n.d.). Namespacing / Multiple Canvases in P5JS. Retrieved from

Howe, M. (2017, April 5). The Promise of Generative Design. Retrieved from

Miller, M. (2016, April 19). The Ultimate Responsive Logo Reacts To The Sound Of Your Voice. Retrieved from

Moskovsky, A., & Day, Z. (2018). Randomatizm. Retrieved from

Ning, Y. (2017). p5PlayGround. Retrieved from

Tate. (2015). Suprematism – Art Term | Tate. Retrieved from

UnderConsideration. (2016, April 4). New Logo and Identity for Oi by Wolff Olins and Futurebrand. Retrieved from

Grow You A Great Big Web Of Wires


Grow You a Big Web of Wires is a simple project built from copper tape, an Arduino Micro, lots of LED lights and a mess of wires. I wanted to explore what our homes and indoor spaces might look and feel like with the artifice of nature. This project was part sculpture, part installation and part futures imagining.

The leaves of the ‘plants’ are made of conductive copper tape and activate string LEDs when touched together. They are beautiful, but not alive, nor do they clean the air. The leaves look like leaves, the wires look like roots, but at best they are a facsimile. The plants are meant to spark conversations about nature’s place in, and contribution to, our indoor lives.

“This project came to light from a love of plants; I found myself in a contemplation of what our homes might look like with more artificial and less natural. The forms I created are plant like, but are missing that chlorophyllic energy of something alive, though there is electrical energy flowing through each. These little plants are imaginings of a halfway point of the uncanny natural valley.” – Grow you a big web of wires 2018

Idea Process

In my initial thinking about this project I wanted to explore the gestures and communication design of plants and trees in nature.

I began pondering these questions to spark my ideation:

  • How do we think about communication methods of nature?
  • How can we use these notions to improve the way we as humans communicate?
  • How does the medium of physical technology change how we interact with these creations?
  • What are these things that plants talk about? Imagine a conversation between a pair of plants.
Idea sketches

Idea sketches

Material Process

I began by imagining how these plants would look. I knew that I wanted to recreate the way traditional house plants look in our homes. I went to source some wire at the hardware store and came back with 8 meters of galvanized steel wire.

Lots of experimentation with leaf shape and engraving veins into the copper tape. This shape was one of the successful shapes I was happy with


The galvanized steel was pliable and easy to cut, but was VERY messy!


I began manufacturing many leaves for the tree form. It reminded me of a beautiful fall day, but not quite.


I began sculpting with the galvanized wire and initially had an idea of a tree shaped from a thousand wires all twisted together; however, it proved more difficult than I had hoped. It could certainly happen, but would take much more time than I had. So I settled on creating a tree form out of some copper pipe I had lying around, then wound the steel wire around and through the pipe to create a base sketch of a tree, taking cues from the plants in my house to structure the stems and leaf patterns.


In creating this tree form I had a good contemplation session about how different metal is compared to plant life.

After building the form of the tree, I began another plant. In working with the copper tape while making the leaves, I began to get a sense of the best way to use this material. I keep some Sansevieria plants in my home in the places where there is not a lot of light. They are an extremely hardy plant with beautiful sculptural leaves; they don’t need to be watered often and can live through almost anything, so they seemed like an excellent candidate to recreate. Using big, long pieces of copper tape and solid-core wire, I began to form the plant by making leaves of different lengths and anchoring them in the plant pot with a base made out of a plastic lid.


My first Sanseveria leaf and my first proof of concept using a copper tape switch and a simple pullup resistor connection.

The final Snake plant


Code and Circuit Process

The circuit for this project was graciously offered to me by Veda Adnani from her project The Kid’s Puzzler. It was quite a simple setup utilizing the digital pins on the Arduino Micro. It was easy to create the proof of concept; however, there was much testing to be done with the copper tape once it was formed into the leaves and attached to the LEDs. There were a lot of real-world differences once the circuit was connected: sometimes the copper didn’t make a connection, sometimes the LED wires didn’t connect in the breadboard. It ended up taking a lot of troubleshooting and patience to come up with a setup that would work every time.

Circuit Diagram


Github Code

Final Presentation

The second part of this project was the actual installation of the plants. I had decided on displaying this work in the hallway between the Experimental Media Room and the main Graduate Gallery, this is a transitional space that could be peaceful and allow the viewers to have a quiet moment in the dark to reflect on these plants. The one issue I had been wrestling over all week were the walls, which were covered in a wheat pasted repeat pattern created by Inbal Newman. I had huge plans of covering the wall in large paper, or a projection, or even hanging a long white curtain in front of the work. But through the process of making the wire plant I realized that Inbal’s art and my creations had a good synergy, that complemented each other, so I began to wonder if the works could be incorporated together. The final display was directly in front of the piece and it did work well.

The title card in front of Inbal's wheat paste wall.

Title and Description Cards

The whole scene lit up and ready to glow.

The installation was quiet and contemplative: the two plants were placed on plinths with the title and info cards next to them, and a tangle of string lights ran everywhere. There was a lot of positive feedback. I was happy with the installation as a first iteration, but much like the first iteration of Grow You a Jungle, I want to go bigger! I am envisioning this project in a room with a multitude of copper plants, perhaps set up as a kitchen or bedroom. In the future I would like to join the two projects together, using real plants as a switch for sound and the copper plants as a switch for light, creating a cycle of dependency between the plants and humans.

Detail shots of both plants


One of the first projects I researched while thinking about this project was Botanicus Interacticus, a multi-faceted work funded by the Disney Research lab.

Botanicus Interacticus is “a technology for designing highly expressive interactive plants, both living and artificial. Driven by the rapid fusion of computing and living spaces, we take interaction from computing devices and places it in the physical world using livings plants as an interactive medium.” (Sato 2012)

Botanicus Interacticus

This project uses the electrical currents in plants to enable a person to create music by touching a plant. They also created artificial plants that responded to touch. It is a look at how we can program interactivity into the world around us using the electricity that is inherent in it. I was interested in understanding how our gestures could be examined and used to reveal new ways of connecting to nature, and this project was a big influence on my thinking.



Sonnengarten is an interactive light installation that reveals the relationship between plants and light. When a user presses their hand against the plant installation, "for a short time the plant is symbolically deprived of its energy of live" (Sonnengarten 2015), so the light in the installation changes. This project had me thinking about how the lack of light indoors can affect a plant's growth, and how its survival is reliant on the person taking care of it. The cycle of dependency came to mind, and I began to think about how to connect the ideas from Grow You A Jungle to this new project.

Final Thoughts

This project was a lot more challenging than I initially expected, mostly due to working with the copper tape. It was an exercise in learning your material and pushing it to the limits of its use. In working with the circuit and doing all the troubleshooting, I came to a stronger understanding of how to fix connection issues: even something as small as a solder joint needs to be checked when tracking down a problem.

Works Cited

Cross the Dragon – An Interactive Educational Exhibit


Project Name : Cross the Dragon

Team Members: Norbert Zhao, Alicia Blakely, and Maria Yala


Cross the Dragon is an interactive art installation that explores economic changes in developing countries and the use of digital media to create open communication and increase awareness of economic investment from global powers in developing countries. The main inputs in the piece are a word-find game on a touch interface and an interactive mat. When a word belonging to one of the four fields (Transport, Energy, Real Estate, or Finance) is found, a video is projected onto a touch-responsive mat. Through the touch-sensitive mat one can initiate another video in response to the found word. The interactive mat plays video through projection mapping. In order to interact with the mat again, one has to find another word. We have left the information in the videos open to interpretation so as to keep it unbiased and to build a gateway to communication through art and digital gaming practices.

What we wanted to accomplish:

The idea of this interactive installation was not to impress preconceived notions onto the educational information provided. The installation is designed to encourage a positive thought process through touch, infographic video, and play. Through this interface we can conceptualize and promote discussion of information that is not highly publicized, widely accessible, or generally discussed in Canada.

Ideation & Inspiration:


This project was inspired by a story shared by a member of our cohort. She described how Chinese companies are building a new artificial island off the beach in downtown Colombo, her hometown, and are planning to turn it into Sri Lanka's new economic hub. At the same time, in the southern port of Hambantota, the Sri Lankan government borrowed more than $1 billion from China for a strategic deep-water port; when it couldn't repay the money, it signed an agreement entrusting the management of the port to a Chinese national company for 99 years.

For us, such news was undoubtedly new and shocking. With China's economic growth and increasing voice in international affairs, especially after the Belt and Road Initiative was launched in 2013, China began to carry out a variety of large investment projects around the world, especially in the developing countries of Asia and Africa, where Chinese investment in infrastructure projects has surged. We also discovered a series of reports from The New York Times, How China Has Become a Superpower, which contains detailed data about China's investment in other countries and project details.

This project therefore focused on the controversy around the topic: some people think these investments have helped local economic development, while others see them as neo-colonialism. From the beginning of concept development we knew this topic would have an awareness aspect. It was important to portray a topic that has a profound effect on the social and cultural lives and identities of people across the globe, and a heterogeneous subject in the sense that it stems into other socioeconomic conditions. After discussion and data research, we decided to focus on China's growing influence, especially its economic influence in Africa.

Finally, we decided to explore this topic through interactive design. We came up with the idea of creating a mini-exhibition through which visitors could explore the story behind the topic by interacting with a game. When visitors first come into contact with the exhibition they have no detailed information about it, but after a series of game interactions, detailed information about the exhibition theme is presented in the form of intuitive visual design. The resulting self-exploration process gives visitors a deeper impression of the topic.


These three interactive projects were chosen because of how they combine an element of play and the need for discovery in an exhibition setting. They engage the audience both physically and mentally, which is something we aim to do with our own project.

Case Study 1 – Interactive Word Games

An interactive crossword puzzle made for the National Museum in Warsaw for their “Anything Goes” exhibit that was curated by children. It was created by Robert Mordzon, a .NET Developer/Electronic Designer, and took 7 days to construct.


Case Study 2: Projection Mapping & Touch interactions

We were interested in projection mapping and explored a number of projects that used projection mapping with board games to create interactive surfaces that combined visuals and sounds with touch interactions.


Case Study 3: Interactive Museum Exhibits

ArtLens Exhibition is an experimental gallery that puts you – the viewer – into conversation with masterpieces of art, encouraging engagement on a personal and emotional level. The exhibit features a collection of 20 masterworks of art that rotate every 18 months to provide new, fresh experiences for repeat visitors. The art selection and barrier-free digital interactives inspire you to approach the museum's collection with greater curiosity, confidence, and understanding. Each artwork in ArtLens Exhibition has two corresponding games in different themes, allowing you to dive deeper into understanding the object. ArtLens Exhibition opened to the public at the Solstice Party in June 2017.



We combined two of our projects – FindWithFriend and Songbeats & Heartbeats for our final project. The aspects of the two projects we were drawn to are the interactions. We wanted to create an educational exhibition that has a gamified component to it and encourages discovery almost like the Please Touch Museum.


We combined the touch interactions from the wordsearch & interactive mat.


P5, Arduino, PubNub, Serial Connection



Team brainstorming the user flow and interactions


Refined brainstorm diagram showing user flow, nodes, and interactions

How it works:

The piece works like a relay race, where one interaction on an iPad triggers a video projection onto an interactive mat. When a sensor on the mat is touched, it triggers a different projection showing the audience more data and information.

The audience is presented with a wordsearch game in a p5 sketch (SKETCH A) with the four keywords "Transport", "Energy", "Real estate", and "Financial", representing the industries in which China has made huge investments. Once a word is found, e.g. "Transport", a message is published to PubNub and received by a second p5 sketch (SKETCH B), which plays a projection about transport projects. When the audience touches the mat, the sensor value (ON/OFF) is sent via an Arduino/p5 serial connection to SKETCH B, which stops the transport projection and displays more information about China's transport projects in different African countries.

Step 1: Sketch A – Wordfind game

The viewer's initial interaction with the "Cross the Dragon" exhibit begins with the word-find game, created using p5.js. The gameboard is created using nested arrays that form the word-find matrix. Each tile in the board is created from a Tile class with the following attributes: x,y co-ordinates; RGB color values; a color string description based on the RGB values; a size for its width and height; booleans inPlay, isLocked, and isWhite; and a tile category that indicates whether the tile is for Transport, Finance, Real Estate, or Energy.

To create the gameboard, three arrays were used: one containing the letters for each tile; another containing 1's and 0's indicating whether a tile was in play (tiles containing letters of the words to be found were marked with 1's, decoy tiles with 0's); and a last array indicating the tile categories with a letter, i.e. T, F, R, E, and O for the decoy tiles. The matrix was created by iterating over the arrays using nested for loops.
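The nested-loop construction described above can be sketched roughly as follows. This is a 3×3 excerpt rather than the real 11×11 board, and the array contents, tile fields, and the makeBoard name are illustrative rather than the project's actual code:

```javascript
// Three parallel arrays drive the board: letters, in-play flags, categories.
// 1 = tile holds a letter of a hidden word, 0 = decoy; "O" marks decoy tiles.
const letters = [["T", "R", "O"], ["R", "E", "X"], ["A", "A", "L"]];
const inPlay = [[1, 1, 0], [1, 1, 0], [1, 1, 1]];
const categories = [["T", "R", "O"], ["T", "R", "O"], ["T", "R", "R"]];

function makeBoard(letters, inPlay, categories, tileSize) {
  const board = [];
  for (let row = 0; row < letters.length; row++) {
    board.push([]);
    for (let col = 0; col < letters[row].length; col++) {
      board[row].push({
        x: col * tileSize, // pixel position of the tile
        y: row * tileSize,
        letter: letters[row][col],
        inPlay: inPlay[row][col] === 1, // decoy tiles are 0
        isWhite: true, // all tiles start white
        isLocked: false,
        category: categories[row][col], // T, F, R, E, or O
        size: tileSize,
      });
    }
  }
  return board;
}

const board = makeBoard(letters, inPlay, categories, 50);
```

In the real sketch each entry would be a Tile instance with RGB values and a draw method; the nested-loop shape is the same.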


The arrays used to create the game board tile matrix of clickable square tiles


Generating the 11×11 game board and testing tile sizes

Once the tile sizes were determined, we focused on how the viewer would select the words for the four industries. The original Find With Friends game catered to multiple players, identifying each with a unique color. Here, however, there is only one input point, an iPad, so we decided to have just two colors on the game board: red to indicate a correct tile and grey to indicate a decoy tile. When the p5 sketch is initiated, all tiles are generated as white and marked with the booleans inPlay and isWhite. When a tile is clicked and its inPlay value is true, it turns red. If its inPlay value is false, it turns grey.
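A minimal sketch of this two-color click rule (the function name and the RGB-triple color representation are assumptions, not the project's code):

```javascript
// Single-input click rule: a correct (inPlay) tile turns red,
// a decoy tile turns grey. Colors are RGB triples.
function clickTile(tile) {
  tile.isWhite = false; // tiles start white when the sketch is initiated
  tile.color = tile.inPlay ? [255, 0, 0] : [128, 128, 128];
  return tile;
}
```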


Testing that inPlay tiles turn red when clicked

The image below shows testing of the discover button. When a word is found and the discover button is clicked, a search function loops through the gameboard tiles, counting the tiles that are inPlay and have turned red; a tally of the clicked tiles is recorded in four variables, one for each industry. There are 9 Transport tiles, 6 Energy tiles, 10 Real Estate tiles, and 7 Finance tiles. Once the loop is complete, a checkIndustries() function checks the tally. If all the tiles in a category have been found, the function sets a global variable currIndustry to the found industry and then calls a function to pass that industry to PubNub. When an in-play tile is found and clicked, it is locked so that the next time the discover button is clicked the tile is not counted again.
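The tally-and-lock logic might look something like this sketch. The per-industry tile counts come from the text above; the variable and function names other than checkIndustries and currIndustry are assumptions, and the PubNub publish is stubbed out as a plain callback:

```javascript
// Tiles needed per industry, as described above.
const TILES_NEEDED = { Transport: 9, Energy: 6, RealEstate: 10, Finance: 7 };
const CATEGORY_NAMES = { T: "Transport", E: "Energy", R: "RealEstate", F: "Finance" };

// One tally variable per industry; these persist across discover clicks.
const tally = { Transport: 0, Energy: 0, RealEstate: 0, Finance: 0 };
let currIndustry = null;

function discover(board, publish) {
  for (const row of board) {
    for (const tile of row) {
      // Count only correct tiles that were clicked (turned red) and not yet locked.
      if (tile.inPlay && !tile.isWhite && !tile.isLocked) {
        tally[CATEGORY_NAMES[tile.category]]++;
        tile.isLocked = true; // not recounted on the next discover click
      }
    }
  }
  checkIndustries(publish);
}

function checkIndustries(publish) {
  for (const industry of Object.keys(TILES_NEEDED)) {
    if (tally[industry] === TILES_NEEDED[industry]) {
      currIndustry = industry;
      publish({ industry }); // in the project this message goes to PubNub
    }
  }
}
```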


Testing that inPlay tiles are registered when found and that already found tiles are not recounted for the message sent to PubNub.

Step 2: Sketch B – Projection Sketch – Part 1

When the sketch initializes, a logo animation video, vid0, plays on the screen and a state variable, initialized as 0, is set to 1 in readiness for the next state, which will play video 1, a general information video on the found industry.

When the second p5 sketch receives a message from PubNub, it uses the string in the message body that indicates the current industry to determine which video to play. The videos are loaded in the sketch's preload function and played in the body of the HTML page crossthedragon.html. During testing we discovered that we had to hide the videos using CSS and show them only when we wanted to play them, re-hiding them afterwards, because otherwise they would all be drawn onto the screen overlapping each other. When the sketch loads, videos are added to two arrays: one holding the initial videos and another holding the secondary videos that provide additional information. The positions in both arrays are the same for each industry: Transport at index 0, Energy at 1, Real Estate at 2, and Finance at 3.

Once a message is received, a function setupProjections(theIndustry) is called. The function takes the current industry from the PubNub message as an argument and uses it to determine which video should be played, setting the global vid1 and vid2 by using the industry to pick the videos from the two arrays; e.g. if Transport was found, vid1 = videos1[0] and vid2 = videos2[0].

A function makeProjectionsFirstVid() is called. This function stops the initial “Cross the Dragon” animation from playing and hides it, then hides vid2 and plays vid1. It then updates a global variable state to 2 in readiness for the second in-depth informational video.

Note: vid0 only plays when state is 0, vid1 only plays when state is 1, and vid2 only plays when state is 2.
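The flow in this step can be modeled as a small three-state machine. The video names, the playing variable, and the index mapping below are illustrative; the real sketch shows and hides HTML5 video elements rather than strings:

```javascript
// Parallel video arrays: Transport at 0, Energy at 1, Real Estate at 2, Finance at 3.
const INDUSTRY_INDEX = { Transport: 0, Energy: 1, RealEstate: 2, Finance: 3 };
const videos1 = ["v1-transport", "v1-energy", "v1-realestate", "v1-finance"];
const videos2 = ["v2-transport", "v2-energy", "v2-realestate", "v2-finance"];

let state = 1; // vid0 (the logo loop) has started, so we are ready for vid1
let vid1 = null, vid2 = null;
let playing = "vid0";

// Called when a PubNub message arrives with the found industry.
function setupProjections(theIndustry) {
  const i = INDUSTRY_INDEX[theIndustry];
  vid1 = videos1[i];
  vid2 = videos2[i];
  makeProjectionsFirstVid();
}

function makeProjectionsFirstVid() {
  if (state !== 1) return; // vid1 only plays when state is 1
  playing = vid1;          // stop and hide the logo animation, show vid1
  state = 2;               // ready for the in-depth video
}

// Called when the mat sensor fires over serial.
function makeProjectionsSecVid() {
  if (state !== 2) return; // ignore mat touches before a word is found
  playing = vid2;          // hide vid1, loop vid2
}
```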

Step 2: Sketch B – Projection Sketch – Part 2 Arduino over serial connection

The second in-depth video is triggered whenever a signal is sent over a serial connection from the Arduino when the viewer interacts with the touch-sensitive mat. Readings from the 3 sensors are sent over a serial connection to the p5 sketch. During testing we determined that using a higher threshold for the sensors produced a desirable effect: it reduced the number of messages sent over the serial connection, speeding up the p5 sketch and reducing system crashes. We set the code up so that messages were only sent when the total sensor value recorded was greater than 1000. The message was encoded in JSON format. The p5 sketch parses the message and uses the sensor indicator value, either 0 or 1, to determine whether to turn on the second video: 0 means OFF and the video is not triggered; 1 means ON and the video is triggered. The makeProjectionsSecVid() function triggers the start of the video. If the state is 2, vid1 is stopped and hidden, and vid2 is shown and played on a loop. An isv2Playing boolean is set to true and is used to determine whether to restart the video, preventing it from jumping through videos if one is already playing.
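The thresholding and parsing described above can be sketched like this. Both sides are modeled in JavaScript for illustration (the actual sender is Arduino code), and the exact JSON field name is an assumption:

```javascript
const THRESHOLD = 1000; // total sensor reading needed before a message is sent

// Sender side: sum the three FSR readings and only emit a message
// when the total clears the threshold, cutting down serial traffic.
function encodeReading(readings) {
  const total = readings.reduce((a, b) => a + b, 0);
  if (total <= THRESHOLD) return null; // suppressed, nothing sent
  return JSON.stringify({ sensor: 1 }); // 1 = ON
}

// p5 side: parse the serial message; 1 means ON (trigger vid2), 0 means OFF.
function shouldTriggerVideo(message) {
  if (message === null) return false;
  return JSON.parse(message).sensor === 1;
}
```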

Electronic Development 

While choosing materials I decided to use a force-sensitive resistor with a round, 0.5″-diameter sensing area. This FSR varies its resistance depending on how much pressure is applied to the sensing area: the harder the force, the lower the resistance. When no pressure is applied, its resistance is larger than 1MΩ. This FSR can sense applied force anywhere in the range of 100g-10kg. To make running power along the board easier, I used an AC-to-DC converter that supplied 3V and 5V power along both sides of the breadboard. Since the FSR sensors are plastic, some of the connections came loose due to travel, and one of the challenges was having to replace the sensors a few times. When this occurred I would follow up with quick testing through the serial monitor in Arduino to make sure all sensors were active. To save time I soldered a few extra sensors to wires so the old ones could be switched out easily if they became damaged.
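As a back-of-envelope check on the sensor behavior, here is how one FSR channel responds if each sensor sits in a voltage divider with one of the 10k resistors from the parts list. The divider wiring and the 3.3V supply of the Feather ESP32 are assumptions about this build, not documented details:

```javascript
const VCC = 3.3;       // Feather ESP32 logic level, assumed supply
const R_FIXED = 10000; // 10k resistor from the parts list, tied to ground

// Voltage read across the fixed resistor for a given FSR resistance.
// Unpressed (> 1 MOhm) reads near 0 V; pressed hard (a few kOhm) reads near VCC.
function dividerVoltage(rFsr) {
  return (VCC * R_FIXED) / (rFsr + R_FIXED);
}
```

Under these assumptions the analog readings stay near zero until the mat is actually pressed, which is consistent with the greater-than-1000 total threshold used in the serial code.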


Materials for the Interactive Mat Projection

  • Breadboard
  • Jumper cables
  • Flex, Force, & Load Sensor x 3
  • YwRobot Power Supply
  • Adafruit Feather ESP32
  • Wire
  • 4×6 piece of canvas material
  • Optoma Projector
  • 6 x 10k resistors

Video Creation Process

Information was extracted for the four most representative investment fields from a database of investment relationships between China and Africa: transport, energy, real estate, and finance. Transport and real estate are very typical, because the two famous parts of China's infrastructure investment in Africa are railway and stadium construction. Energy is also an important part of China's global investment. The finance field corresponds to the most controversial part of China's investment: when the recipient country cannot repay a huge loan, it must exchange other interests. Sri Lanka's port is a typical example.

Initially, we wanted to present the investment data in the four fields through infographics, but after discussion we decided that video is a more visual and attractive way to present it. So we made two videos for each field. When visitors find the correct word in a field, they are shown the general situation of China's investment in the world and in Africa in that field (video 1), including data, locations, times, and so on. When visitors touch the mat, the projector plays a more detailed video about the field (video 2), with details of specific projects.

For video 1, we used Final Cut to animate infographics produced in Adobe Illustrator, and added representative project images from the field in the latter half of the video, so that visitors gain a general understanding of the field.

For video 2, we used Photoshop and Final Cut to edit representative project images from the field, then added keywords about each project to the images, so that visitors can get a clear and intuitive understanding of these projects.

The Presentation

The project was exhibited in a gallery setting in the OCAD U Graduate Gallery space. Below are some images from the final presentation night.


Setting up the installation


People interacting with the Cross the Dragon installation

Reflection and Feedback

Many members of the public who interacted with the Cross the Dragon exhibit were impressed by the interactions and appreciated the educational qualities of the project. Many people stuck around to talk about the topics brought up by the videos, asking to know more about the projects, where the information came from, and how the videos were made. Others were more interested in the interaction alone, but most participants engaged in open-ended dialogue without being prompted. Overall feedback was positive. People seemed really interested in changing the informational video after finding a word in the puzzle. Some participants suggested slowing down the videos so that they could read all the information in the text.

For future iterations of this project, we would like to explore projection mapping further to make the interactive mat more engaging. We noticed that once people found out they could touch the mat, they tended to keep touching and exploring it. We had spoken about including audio and text with animation earlier in our brainstorming, and we believe sensitive areas on the mat would be a good way to create more of these interactions. It was also suggested that we project the videos onto a wall as well, so that people around the room would be included in the experience without having to be physically at the exhibition station.


Code Link on Github – Cross The Dragon

P5 Code Links:

Hiding & Showing HTML5 Video – Creative Coding

Creating a Video array – Processing Forum

HTML5 Video Features – HTML5 Video Features

Hiding & Showing video – Reddit JQuery

Reference Links:










