Digi-Cart 3.0

Experiment 4 – Katlin Walsh

Project Description

While interactive media content displayed within galleries has been updated within the last 5-10 years, presentation formats for tradeshows have not. Digi-Cart brings an adaptive presentation style to the classic concept of a tool cart. Robust building materials and pegboard construction allow corporations to adapt their layout and presentation style to reflect their current corporate event.

Digi-Cart features a basic controller layout which can be overlaid with a company's vinyl poster cutout to create an interactive presentation that can be facilitated by an expert or self-guided. Corporations are encouraged to update their digital materials and create animated graphics to capture audience attention.

Continue reading “Digi-Cart 3.0”

Mood Drop

By Jessie, Liam and Masha

Project description:

Mood Drop is an interactive communication application that creates a connection between individuals in different physical environments, over distance, through the interaction of visual elements and sound on digital devices. It allows people to express and transmit their moods and emotions to others by generating melody and images based on the interactions between users.

Mood Drop enables multi-dimensional communication, since melody naturally carries mood and emotion. It distinguishes itself from ordinary day-to-day remote communication methods such as texting by allowing people to express their emotions in abstract ways.

Furthermore, Mood Drop embodies elements of nature and time, which often influence people's emotions. By feeding in real-time environment data such as temperature, wind speed and cloudiness of a place, which affect variables within the code, it sets the underlying emotional tone of a physical environment. As people interact in a virtual environment that closely reflects aspects of the physical environment, a sense of the telepresence of other people in one's physical environment is created.

Code: https://github.com/lclrke/mooddrop

Modes and Roles of Communication

After learning about networking, we tried to come up with different modes of communication. Rather than every user having the same role, we hoped to explore varying the roles that could be played in a unified interaction. Perhaps some people only send data and some only receive it, and we could use tags to filter the data received on the channel.
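Tag-based filtering of this kind can be sketched as plain message logic layered on top of the publish/subscribe channel. Here `role` and `shouldAccept()` are our own hypothetical names, not part of the PubNub API; with PubNub, the message object would simply be the payload handed to publish().

```javascript
// Hypothetical sketch: tag each message with the sender's role so that
// receivers can filter what they act on.
function makeMessage(role, payload) {
  return { role, payload };
}

// A client only acts on messages whose tag matches the roles it listens for.
function shouldAccept(message, listenRoles) {
  return listenRoles.includes(message.role);
}

// A receive-only "display" client would accept sender messages
// and ignore messages published by other displays.
const msg = makeMessage("sender", { x: 10, y: 20 });
console.log(shouldAccept(msg, ["sender"]));  // true
console.log(shouldAccept(msg, ["display"])); // false
```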

One idea we considered was a unidirectional communication method where each person receives data from one person and sends data to another.



However, we didn’t pursue this idea further because we couldn’t justify this choice with a valid reason behind it apart from it’s an interesting idea. We eventually settled on the idea of creating a virtual community where everyone is a member and can have the same contribution. 


Once we settled on the idea that everyone has the same role and figured out PubNub, we started brainstorming. We were all interested in creating an interactive piece involving visuals and sound, so we explored the p5.js libraries for inspiration. The Vida library by Pawel Janicki gave us the idea of affecting the sound with motion detected by the webcam. This would not work, however, because it was impossible to do video chat through PubNub (hence, no interaction).

Another thought was to recreate the Rorschach test: users would see a changing abstract image on screen and could share their thoughts on what they saw by typing to each other.

Finally, we came up with the idea of creating an application that would allow users to express their mood across distance. Using visuals and sounds, participants would be able to co-create musical compositions while far away from each other. We found code that became the foundation for the project, in which users could affect the sound by interacting with shapes using their mouse.

Next we built a scale using notes from a chord, where frequencies were spaced so that the size of the shape generated by clicking would affect the mood of the transmitted sound. The lower part of the chord contains notes related to minor frequencies, while the top part focuses on major frequencies. The larger the circle, the more likely it is to play the lower minor roots of the chord. The final sound was simplified to one p5.js oscillator with a short attack and sustain to give it percussive characteristics.
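A minimal sketch of this size-to-pitch mapping, assuming an A minor chord spread over two octaves; the note values and size range here are illustrative, not the project's actual scale (that lives in the linked repo):

```javascript
// Notes of an A minor chord over two octaves, in Hz: A2 C3 E3 A3 C4 E4.
// Larger circles favour the lower, darker notes; smaller circles the higher ones.
const chordNotes = [110.0, 130.81, 164.81, 220.0, 261.63, 329.63];

// Map a circle's diameter (assumed 10-200 px) to an index into the scale,
// inverted so that a larger size picks a lower note.
function sizeToFrequency(diameter, minSize = 10, maxSize = 200) {
  const clamped = Math.min(Math.max(diameter, minSize), maxSize);
  const t = (clamped - minSize) / (maxSize - minSize); // 0..1
  const index = Math.round((1 - t) * (chordNotes.length - 1));
  return chordNotes[index];
}

console.log(sizeToFrequency(200)); // 110 (largest circle -> lowest root)
console.log(sizeToFrequency(10));  // 329.63 (smallest -> highest note)
```

In the sketch itself, the returned frequency would be handed to the p5.sound oscillator, e.g. `osc.freq(sizeToFrequency(d))`, with a short envelope giving the percussive attack.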

Working on visuals

As we started working on the visual components of the piece, we decided to try the 3D library in p5.js. We were looking for a design that would give a strong and clean sense of interaction when the shapes connected in digital space. We also imagined the sound as a 3D object that can exist in multiple dimensions and move in many directions. We experimented with many shapes, colors and textures.

Simplifying shapes and palette:

An important moment occurred when we were all interacting with the same page independently from home. While working on small code details, we soon found ourselves playing with each other in an unplanned session, which created an exciting moment of connection. We pivoted away from maximal visuals and sound after this to focus on this feeling, which we thought was important to emphasize. While working on the project beside each other, we had wondered why being in separate rooms was important to demonstrating the piece. This moment of spontaneous connection through our p5.js editor window made us understand the idea of telepresence and focus on what we then thought was most important to the project.

We decided to return to a simple black and white draft featuring the changing size of a basic ellipse. The newer visuals did not clearly show the parameters of the interaction, as the relationship between shapes on screen was not as clear as with a basic circle.

By adding too many aesthetic details, we felt we were predefining aspects that could define the mood for a user. We found black and white was the better palette choice, as we wanted to keep the mood ambiguous and open to user interpretation.



Project Context:

The aim was to create a connection between two different environments, and we looked to transfer something more than video and text.


Place by Reddit (2017)

This experiment involved an online canvas of 1000×1000 pixel squares, located at a subreddit called /r/place, which registered users could edit by changing the color of a single pixel chosen from a 16-colour palette. After placing a pixel, a timer prevented the user from placing another for a period varying from 5 to 20 minutes.

The process of co-creating one piece by multiple people from different places appealed to us, so we also designed something that enables people to feel a connection to each other. To push the idea further, we decided to create something where visuals and sounds work in harmony as a coherent piece when people interact with each other. The interactions between people are represented in the virtual space by the animated interactions of the visual elements they create and by sound on a digital device.



 Unnumbered Sparks: Interactive Public Space by Janet Echelman and Aaron Koblin (2014).

The sculpture, a net-like canvas 745 feet long suspended between downtown buildings, was created by artist Janet Echelman. Aaron Koblin, Creative Director of Google's Data Arts Team, created the digital component, which allowed visitors to collaboratively project abstract shapes and colors onto the sculpture using a mobile app. We applied the simplicity and abstract shapes of this mobile app to our interface to make the process of interaction and co-creation more visible and understandable.

Telematic Dreaming by Paul Sermon (1993)

This project connects two spaces by projecting one space directly on top of another space. The fact that Sermon chose 2 separate beds as the physical space raises interesting questions. It provokes a sense of discomfort when two strangers are juxtaposed into an intimate space even if they are not really in the same physical space. The boundary between the virtual space and physical space becomes blurred because of this interesting play with space and intimacy.

Inspired by this idea of blurring the boundary between two spaces, we thought we could use external environmental data from the physical space, visualized and represented in a virtual space on screen. The virtual space is displayed on a screen which itself exists in a physical space. In this case, not only is the user connected to their own environment; other people interacting with the user are also connected to that environment through the virtual environment, which is closely associated with data from the physical space. It blurs the line between the virtual and the physical as the two spaces become intertwined, generating an interesting sense of presence within both the virtual and the physical space as users interact with each other.

We eventually decided to add the Toronto live-update weather API to mingle with our existing interaction elements. We used temperature, wind speed, humidity and cloudiness to affect the speed of the animation and the pitch and tone of the musical notes. For example, during midday, the animation and music play at a faster speed than in the morning as the temperature rises, which also aligns with people's energy level and mental state, and potentially their emotions and mood.
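A sketch of how weather readings might be folded into the interaction parameters; the ranges and weights below are illustrative assumptions, not the project's actual values (in p5.js the data itself could be fetched with loadJSON from a weather API, per the Shiffman tutorial in the references):

```javascript
// Map live weather readings to animation/sound parameters.
// Ranges and weights are illustrative assumptions.
function weatherToParams({ temp, windSpeed, clouds }) {
  // temp in degrees C, windSpeed in m/s, clouds in % cover.
  // Warmer temperature -> faster animation: 0.5x at -10C up to 2x at 30C.
  const t = Math.min(Math.max((temp + 10) / 40, 0), 1);
  const speed = 0.5 + 1.5 * t;
  // Wind raises the base pitch; cloud cover lowers it.
  const pitchOffset = windSpeed * 2 - clouds * 0.2;
  return { speed, pitchOffset };
}

console.log(weatherToParams({ temp: 30, windSpeed: 5, clouds: 50 }).speed); // 2
```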


Mohr, M. (2012). Manfred Mohr, Cubic Limit, 1973-1974. Retrieved November 17, 2019, from https://www.youtube.com/watch?v=j4M28FEJFF8

OpenProcessing. (n.d.). Retrieved November 18, 2019, from https://www.openprocessing.org/sketch/742076.

Puckett, N., & Hartman, K. (2018, November 17). DigitalFuturesOCADU/CC18. Retrieved from https://github.com/DigitalFuturesOCADU/CC19/tree/master/Experiment4

Place (Reddit). (2019, November 4). Retrieved November 18, 2019, from https://en.wikipedia.org/wiki/Place_(Reddit)

Postscapes. (2019, November 1). IoT Art: Networked Art. Retrieved November 18, 2019, from https://www.postscapes.com/networked-art/

Sermon, P. (2019, February 23). Telematic Dreaming (1993). Retrieved November 18, 2019, from https://vimeo.com/44862244.

Shiffman, D. (n.d.). 10.5: Working with APIs in Javascript – p5.js Tutorial. Retrieved November 17, 2019, from https://www.youtube.com/watch?v=ecT42O6I_WI&t=208s




Experiment 4 – You Are Not Alone!

You Are Not Alone! is a networking experiment designed to create an experience of digital presence and interaction in a shared online space with illustrated on-screen windows based on the metaphor of ‘each window as a window into the other world’.
An experiment with p5.js and networking using PubNub, a global real-time data stream network.

Sananda Dutta, Manisha Laroia, Arshia Sobhan, Rajat Kumar

Kate Hartman & Nick Puckett

Starting with the brief for the experiment, i.e. online networking across different physical locations, the two key aspects of networking that struck us were:
– the knowledge of each other's presence, i.e. someone else is also present here;
– the knowledge of common shared information, i.e. someone else is also seeing what I am seeing;
and how crucial these are to creating the networking experience.
You Are Not Alone! is a networking experiment designed to create an experience of digital presence and interaction in a shared online space. The community in the shared space can light up windows with multiple clicks and have their ghost avatar travel through the windows that others light up. With prolonged inactivity in a pixel area, the lit windows slowly get darker and turn off. The visual metaphor was taken from a neighbourhood building, where people know of each other's presence when they see lit windows and of absence when they see unlit windows. In that space, people don't communicate directly but know of each other's presence. The individual avatar moving through this space creates an interaction within the present community. The windows are metaphors for windows into other people's worlds as they enter the same space.



Paper Planes
The concept of this cross-location virtual presence experiment was to create and fold your own paper plane, stamp it with your location, and “throw” it back into the world where it can be caught by someone on the other side of the world. Paper Planes started as a simple thought – “What if you could throw a paper plane from one screen to another?”
See the detailed project here.


Dear Data
Dear Data is a year-long, analog data drawing project by Giorgia Lupi and Stefanie Posavec, two award-winning information designers living on different sides of the Atlantic. Each week, and for a year, they collected and measured a particular type of data about their lives, used this data to make a drawing on a postcard-sized sheet of paper, and then dropped the postcard in an English “postbox” (Stefanie) or an American “mailbox” (Giorgia)! By collecting and hand drawing their personal data and sending it to each other in the form of postcards, they became friends. See the detailed project here.

The Process
We started the project with a brainstorm building on the early idea of networked collaborative creation. We imagined an experience where each person in the network would draw a doodle and, through networking, add it to a collaborative running animation. With an early prototype we realized the idea had technical challenges, and moved to another idea based on shared frames and online presence to co-create a visual. A great amount of the delight in the Paper Planes networked experience comes from the pleasant animation and from knowing other people are present there and carrying out activities.




We wanted to replicate the same experience in our experiment. We brainstormed several ideas, like drawing windows for each user and having text added to them, windows appearing as people join in, and sharing common information like the number of trees or windows around you and representing this data visually to create a shared visual. We all liked the idea of windows lighting up with presence and built around it: light on meaning present, light off meaning absence or inactivity.
We created the networked digital space in two stages:
[1] One with the windows lighting up on clicking and disappearing on not clicking, and the personal ghost/avatar travelling through the windows
[2] The other with the shared cursor where each user in the networked space could see each others avatar and move through the shared lit windows.




We built the code in stages: first the basic light-on, light-off colour change on click, followed by the array of windows, then the networked transfer of user mouse clicks and positions, and finally the avatars for the users. Our biggest challenge was networking the avatar so it would appear on click after the window was lit, and having everyone present in the networked space.


The shared space loads with a black screen. Windows light up as one clicks on the screen; each window gets brighter with each click, reaching its maximum brightness at three clicks. Each user in the space can see other windows light up and so knows of the presence of other users. Each person also sees the windows in the same grid alignment as the rest of the community. Once a window is lit, users can see their avatar in it and can then navigate their avatar across the screen through the lit windows with their cursor movement.
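The click-to-brighten and fade-with-inactivity behaviour can be sketched as a small class; the step sizes and fade rate are illustrative, not the project's actual values:

```javascript
// Each click brightens a window in steps, reaching full brightness at
// three clicks; prolonged inactivity fades it back to dark.
class LitWindow {
  constructor() {
    this.clicks = 0;
    this.brightness = 0; // 0 (dark) to 255 (fully lit)
  }
  click() {
    if (this.clicks < 3) {
      this.clicks += 1;
      this.brightness = Math.round((this.clicks / 3) * 255);
    }
  }
  // Called every frame: unattended windows slowly dim and turn off.
  fade(amount = 1) {
    this.brightness = Math.max(this.brightness - amount, 0);
    if (this.brightness === 0) this.clicks = 0;
  }
  isLit() {
    return this.brightness > 0;
  }
}

const w = new LitWindow();
w.click();
w.click();
w.click();
console.log(w.brightness); // 255
```

In the p5.js sketch, each networked click message would call click() on the window whose grid cell contains the click, and draw() would call fade() on every window.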


In version 1 of the experience, users cannot see each other's avatars moving but can see windows light up as others interact with the screen. The avatar was chosen to be a silhouette so that it would visually appear in the lit windows against the black backdrop, and the eyes popping in would create an element of surprise. In version 2, a window appears as a user enters the digital space, and each user has an avatar that can travel through the others' windows. When the avatar is in the user's own window it is black, and the ghost colour is lighter when it is in another user's window. Unfortunately, we were unable to show both interactions in the same code program with the network messages for cursor click and cursor position at the same time.
The presence was indicated with a window, a signifier of the window into the other person's world, like a global building or neighbourhood. The delight comes from knowing people are there in the same space as you, at the same time, and that there is something in common.


Choice of Aesthetics
The page loads to a black screen. We tapped into users' tendency to click randomly on a blank screen to find out what comes up, so we made the windows light up on each click, getting brighter with each click. Turning the light of a window on and off shows presence and also taps into the clickable-fidget appeal, so that visitors to the digital space linger and interact with each other.
Another feature was to allow each user to see the other user’s cursor as it navigates on the screen through the lit windows. We kept the visuals to a minimum with simple rectangles, and black color for dark windows and yellow for bright windows. While coding we had to define the click area that would register the clicks on the network to light up the windows.
The pixel area defined varied from screen to screen and would sometimes register clicks for one rectangle, sometimes for two or three. We wanted to light one window at a time, but we let the erratic click behaviour be, as it added a visual randomness to the windows lighting up.

Challenges & Learnings

  1. Networking and sending messages for more than one parameter, such as mouse clicks and mouse positions for the shared cursor, was challenging, and we could not have the click and the mouse-hover functions sent at the same time. Thus, we could get the click-to-light-up code to work but were unable to share the cursor information, so users could not see each other's avatars on the network.




  2. Another challenge was being able to have the avatar appear in the window on click after the window is lit. We were able to get the avatar to appear in each window in the array and to have a common avatar appear on click in one window. The challenge was in managing the clicks and the image position at the same time. We eventually chose to write shared-cursor networking code, but that interfered with the click functionality to dim and light the windows with increasing opacity.
  3. Our inexperience with code and networking was also a challenge, but we made visual decisions to improve the user experience, like choosing a black icon that looked like a silhouette on the black background, so we did not have to create it on click but it would be visible when a window was lit.
  4. If we had time, we would surely work on integrating the two codes to create one networked space where the windows follow the grid, all users see the same visual output, the windows light up in increasing opacity, and all users can see all the avatars floating through the windows.

Github Link for the Project code is here.

[1] Shiffman, Daniel. The Nature of Code. California, 2012. Digital.

[2] Shiffman, Daniel. The Coding Train. 7.8: Objects and Images – p5.js Tutorial. 2015. Digital.

[3] Active Theory. Google I/O 2016: Paper Planes. 2016. Digital.

Silent Signals

Silent Signals
Breaking the Language Barrier with Technology

Priya Bandodkar, Jignesh Gharat, Nadine Valcin




‘Silent Signals’ is an experiment that aims to break down language barriers between users across locations by enabling them to send and receive text messages in the language of the intended recipient(s) using simple gestures. The gesture detection framework is based on poseNet, and the experiment uses PubNub to send and receive the messages. It is intended as a seamless interaction in which users' bodies become controllers and triggers for the messages. It does away with the keyboard as an input and takes communication into the physical realm, engaging humans in embodied interactions. It can involve multiple users, irrespective of the spatial distance between them.


When we first started thinking about communication, we realized that between the three of us, we had three different languages: Priya and Jignesh's native Hindi, Nadine's native French, and English, which all three shared as a second language. We imagined teams collaborating on projects across international borders, isolated seniors who may only speak one language, and globetrotting millennials who forge connections throughout the world. How could we enable them to connect across language barriers?

Our first idea was to build a translation tool that would allow people to text one another seamlessly in two different languages. This would involve the use of a translation API such as Cloud Translation by Google (https://cloud.google.com/translate/) that has the advantage of automatic language detection through artificial intelligence. 

We then thought that it would be more natural and enjoyable for each user to be able to speak their preferred language directly, without the intermediary of text. That would require a speech-to-text API and a text-to-speech API. The newly released Web Speech API (https://wicg.github.io/speech-api/) would fit the bill, as would the Microsoft Skype Translator API (https://www.skype.com/en/features/skype-translator/), which has the added benefit of direct speech-to-speech translation in some languages; unfortunately that functionality is not available for Hindi.



As we discovered that there are several translation apps already on the market, we decided to push the concept one step further, enabling communication without the use of speech, and started looking into visual communication.

The Emoji


Source: Emojipedia (https://blog.emojipedia.org/ios-9-1-includes-new-emojis/)

Derived from the Japanese terminology for “picture character”, the emoji has grown exponentially in popularity since its online launch in 2010. More than 6 billion emojis are exchanged every day and 90% of regular emoji users rated emoji messages as more meaningful than simple texting (Evans, 2018). They have become part of our vocabulary as they proliferate and are able to express at times relatively complex emotions and ideas with one icon.

Sign Language

Sign languages allow the hearing impaired to communicate. We also use our hands to express concepts and emotions. Every culture has a set of codified hand gestures that have specific meanings. 


American Sign Language

Source: Carleton University (https://carleton.ca/slals/modern-languages/american-sign-language/)

Culturally-Coded Gestures

Source: Social Mettle (https://socialmettle.com/hand-gestures-in-different-cultures)


We also simultaneously started thinking about how we can use technology, as the three of us shared a desire to make our interactions more intuitive and natural. 

“Today’s technology can sometimes feel like it’s out of sync with our senses as we peer at small screens, flick and pinch fingers across smooth surfaces, and read tweets “written” by programmer-created bots. These new technologies can increasingly make us feel disembodied.”

Paul R. Daugherty, Olof Schybergson and H. James Wilson
Harvard Business Review

This preoccupation is by no means original. Gestural control technology is being developed for many different applications, especially as part of interfaces with smart technology. In the Internet of Things, it serves to make interactions with devices easy and intuitive, having them react to natural human movements. Google’s Project Soli, for example, uses hand gestures to control different functions on a smart watch. 


Some of the challenges in implementing this approach are that there is currently no standard format for body-to-machine gestures and that gestures and their meanings vary from country to country. For example, while the thumbs-up gesture pictured above has a positive connotation in North America, it has a vulgar connotation in West Africa and the Middle East.



The original concept was a video chat that would include visuals or text (in the user’s language), triggered by gestures of the chat participants. We spent several days attempting to use different tools to achieve that result before Nick Puckett informed us that what we were trying to achieve was nearly impossible via PubNub. This left us with the rather unsatisfactory option of the user only being able to see themselves on screen. We nevertheless forged ahead with a modified concept that had these parameters:

  • Using the body and gestures for simple online communications
  • Creating a series of gestures with codified meanings for simple expressions that can be translated in 3 different languages



poseNet Skeleton

Source: ml5.js (https://ml5js.org/reference/api-PoseNet/)

We leveraged the poseNet library, a machine learning model that allows for real-time human pose estimation. It tracks 17 keypoints on the body using the webcam and creates a skeleton that corresponds to human movements. Using the keypoint information tracked by poseNet, we were able to define the relationship of different body parts to one another, use their relative distances, and translate that into code.
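As an illustration of this relative-distance approach, a gesture such as "both hands up" can be detected by comparing keypoint y-positions. The isHandsUp() helper and the 0.5 score threshold are our own illustrative choices, not part of the poseNet API; poseNet does report keypoints as objects with part, position and score fields:

```javascript
// Detect a "hands up" gesture: both wrists above the nose.
function isHandsUp(keypoints, minScore = 0.5) {
  const get = (part) =>
    keypoints.find((k) => k.part === part && k.score >= minScore);
  const nose = get("nose");
  const leftWrist = get("leftWrist");
  const rightWrist = get("rightWrist");
  if (!nose || !leftWrist || !rightWrist) return false; // not tracked reliably
  // In screen coordinates, a smaller y means higher up.
  return (
    leftWrist.position.y < nose.position.y &&
    rightWrist.position.y < nose.position.y
  );
}

const pose = [
  { part: "nose", position: { x: 100, y: 200 }, score: 0.9 },
  { part: "leftWrist", position: { x: 60, y: 120 }, score: 0.8 },
  { part: "rightWrist", position: { x: 140, y: 110 }, score: 0.8 },
];
console.log(isHandsUp(pose)); // true
```

Checking the tracking score first is what lets a gesture fail safe when lighting or clothing makes the keypoints unreliable, which was one of our main problems in practice.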


poseNet tracking nodes

Source: Medium (https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5)


As we continued to develop the code, we soon realised that poseNet tracking seemed rather unstable and at times finicky, as it was purely based on the pixel-information it received from the camera. The output fluctuated as it was based on several factors such as the lighting, contrast of clothing, background and the user’s distance from the screen. Consequently, it meant that the gesture would not always be captured if these external factors weren’t acting favourably. Dark clothing and skin seemed to be particularly problematic.

We originally had 10 gestures coded, but the challenge of integrating them all was that they sometimes interfered or overlapped with the parameters of one another. To avoid this, we developed 5 in the prototype. We had to be mindful of using parameters that were precise enough to not overlap with other gestures, yet broad enough to take into account the fact that different body types and people would perform these gestures in slightly different ways.

Since there are very limited resources dealing with p5.js and PubNub together, we had difficulty finding code examples to help us resolve some of the coding problems we encountered. Most notable among these was publishing the graphic messages we designed (instead of text) so that they would be superimposed on the recipient's interface. We thus only managed to display graphics on the sender's interface and send text messages to the recipient.




  • Participants expressed that it was a unique and satisfying experience to engage in this form of embodied interaction using gestures.
  • The users were appreciative of the fact that we developed our own set of gestures to communicate instead of being confined to existing sign languages.


We would like to complete the experience by publishing image messages to recipients with corresponding translations rather than have the text interface.


Oliveira, Joana. “Emoji, the New Global Language?” In Open Mind https://www.bbvaopenmind.com/en/technology/digital-world/emoji-the-new-global-language/. Accessed online, November 14, 2019

Evans, Vyvyan. Emoji Code: the Linguistics behind Smiley Faces and Scaredy Cats. Picador, 2018. 

https://us.macmillan.com/excerpt?isbn=9781250129062. Excerpt accessed online, November 15, 2019

Daugherty, Paul R., Olof Schybergson, and H. James Wilson. "Gestures Will Be the Interface for the Internet of Things." In Harvard Business Review, 8 July 2015, https://hbr.org/2015/07/gestures-will-be-the-interface-for-the-internet-of-things. Accessed online November 12, 2019

Oved, Dan. “Real-time Human Pose Estimation in the Browser with TensorFlow.js” in Medium. 2018.

https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5. Accessed online November 10, 2019.

Fire Pit

Project Title: Fire Pit
Names of group of members: Nilam Sari, Neo Chen, and Lilian Leung

Project Description

This experiment is a continuation of our first project, "Campfire", from Experiment 1 by Nilam and Lilian. The experiment expands upon the topic of conversation, moving from the importance of face-to-face communication over the use of mobile devices to exploring the power of anonymity and distance, with a platform that allows participants to type their negative feelings into a text box and literally throw their negative emotions into a virtual fire.

Project Progress

November 8

The team worked together to update the imagery of the flame, exploring different shapes and beginning to style the text box and images. One of the references we pulled was aferriss's (2018) Particle Flame Fountain. Taking their source code, we revised the structure of the fire to connect it with user presence from PubNub.


(Source: https://editor.p5js.org/aferriss/sketches/SyTRx_bof)

Then we implemented it into our interface.


November 9

The team added additional effects, such as a new fire-crackle sound, to emphasize throwing a message into the fire when users send one. We attempted to create an array that would cycle through a set of prompts or questions for participants to answer. Rather than having the message stay on screen and abruptly disappear, we also added a timed fade to the text so that users can see their message slowly burn.

November 10

We worked on the sound effect for when messages are being “thrown” into the fire, and managed to combine two files into one for the sound that we want. 

Changed the way to submit the text by using your finger to swipe up rather than pressing the button. Along with the new fire crackle sound, the fire temporarily grows bigger every time it receives an input. The input text box resets itself after every message is sent.

We tried to add an array of text for questions/prompts, but haven’t been able to display the selection of the questions randomly. When random() is used, the following error message shows up: 

“p5.js says: text() was expecting String|Object|Array|Number|Boolean for parameter #0 (zero-based index), received an empty variable instead. If not intentional, this is often a problem with scope: [https://p5js.org/examples/data-variable-scope.html] at about:srcdoc:188:3. [http://p5js.org/reference/#p5/text]”

random() now works by calling random(question) on the array directly rather than generating an index with let qIndex = random(0, 3);. Now a random question appears every time the program opens, and it is randomized again every time the user submits an input.
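The underlying issue is that p5's random(0, 3) returns a float such as 2.37, and using a float as an array index yields undefined, which is what text() complained about; random(question) picks an element directly. A plain-JavaScript equivalent of the working approach (the prompt strings here are illustrative, not the project's actual list):

```javascript
// Illustrative prompts, not the project's actual list.
const questions = [
  "What weighed on you today?",
  "What would you like to let go of?",
  "What are you avoiding?",
];

// Plain-JS equivalent of p5's random(array):
function pickPrompt(list) {
  // Math.floor produces a valid integer index, unlike the raw float.
  return list[Math.floor(Math.random() * list.length)];
}

console.log(questions.includes(pickPrompt(questions))); // true
```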

Nov 11:

The sound of a log of wood being thrown into the fire was added to the program. Every time a new input is sent, the sound plays. We also changed the CSS of the input box to match our general aesthetic. We then added a prompt to tell people how to submit their message instead of using 'enter'.


Nov 12

We are trying to change the fire's colour when the text input is sent and to add a dissolve effect for the text.


What we have so far:

  1. The more people that join the conversation, the bigger the fire becomes
  2. The text fades out after a couple of seconds, leaving no trace or history
  3. The fire changes colour and plays a sound when a new input is thrown into it
  4. Swiping up sends the message to the server and into the fire
  5. A prompt is displayed just above the text input
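The timed fade in point 2 reduces to a small alpha ramp. A minimal sketch, assuming a three-second fade (the duration and names are our illustration, not the project’s actual values):

```javascript
// Alpha ramps linearly from 255 down to 0 over fadeMillis after sending,
// then clamps at 0 so the text stays invisible.
function messageAlpha(elapsedMillis, fadeMillis = 3000) {
  const remaining = 1 - elapsedMillis / fadeMillis;
  return Math.max(0, Math.round(255 * remaining));
}

// In the p5.js draw() loop, roughly:
// fill(255, messageAlpha(millis() - sentAt));
// text(lastMessage, width / 2, height / 2);
```
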

To communicate our concept, we felt that producing a short film for our documentation would be the most effective approach. To do this while recording content individually, we created a moodboard of the visual style we wanted for our film.


Our final step was to put the film together as both presentation and documentation.

Project Context

This experiment is a continuation of our first project, “Campfire”, from Experiment 1 by Nilam and Lilian. The original concept was a multi-screen experience focused on coming together and the value of being present for a face-to-face conversation. Participants were asked to place their phones on a foam pedestal that lit all the screens to recreate a digital campfire.


Switching the concept, we explored distance and presence online, and the practice of “speaking into the void” when sharing content online; for example, on Twitter or Tumblr, users can post their thoughts without any expectation of a response. People’s personal posts range from vague to incredibly revealing, and serve both as a method of venting and as a way of working through one’s own thoughts.

In her book Alone Together: Why We Expect More from Technology and Less from Each Other (2011), Sherry Turkle writes that technology can be seductive because of what it offers to human vulnerabilities: digital connections and networks provide the illusion of companionship without the demands of friendship. Our platform is best experienced on a mobile device because of the intimacy of a personal handheld device.


Our idea was to have participants land on our main screen with a small fire. The size of the fire depends on the number of participants logged on, though the exact count is hidden to preserve anonymity: participants cannot tell precisely how many users are online, but the size of the flame makes them aware of a body of people. Turkle (2011) writes that anonymity, compared to face-to-face confession, offers an absence of criticism and evaluation. Participants can take their thoughts and throw them into the fire (by swiping or dragging the mouse) as both a cathartic and purposeful gesture.


Participants see their message on screen after submission and watch it burn in the flame. When they swipe or drag, a wood-block and fire-crackle sound plays as the message is sent, metaphorically feeding the flame. The colour of the fire changes on send as well.

Other participants on the platform can temporarily see the submitted message on their screens; the change in the fire both signals that an interaction has happened and encourages others to submit the thoughts troubling them to burn as well. Rowe, in Write it out: how journaling can drastically improve your physical, emotional, and mental health (2012), describes how journaling and writing out one’s emotions has been shown to reduce stress by allowing people to open up about their feelings.


Once the message is sent and burnt, no trace of it remains anywhere: no history is stored in the program or in PubNub. It is as if the thoughts the user wrote and threw into the digital fire become digital ashes. This is both symbolic and literal in terms of leaving no digital footprint. While PubNub allows developers to record users’ IP addresses and information, we chose not to record any of it.
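With the PubNub JavaScript SDK, opting out of message persistence comes down to publishing with storeInHistory set to false. A hedged sketch of how such a publish call could be assembled (the channel name and message shape are hypothetical, not the project’s actual configuration):

```javascript
// Build the publish options; storeInHistory: false asks PubNub not to
// keep the message in channel history, so burnt messages leave no trace.
function buildBurnMessage(text) {
  return {
    channel: "campfire",        // hypothetical channel name
    message: { text: text },    // hypothetical message shape
    storeInHistory: false       // no persistence: the message burns away
  };
}

// In the sketch, after the swipe gesture is recognised:
// pubnub.publish(buildBurnMessage(inputText), (status) => { /* ... */ });
```
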

While not a direct inspiration, this work harks back to PostSecret, a community mail project created by Frank Warren in 2005, in which people anonymously mailed in confessions and secrets that were then hosted online.




Project Video https://vimeo.com/373640297

Project Code on Github https://github.com/nilampwns/experiment4


References

aferriss. (2018, April). Particle Flame Fountain. Retrieved from https://editor.p5js.org/aferriss/sketches/SyTRx_bof

Rowe, S. (2012, March-April). Write it out: how journaling can drastically improve your physical, emotional, and mental health. Vibrant Life, 28(2), 16+. Retrieved from https://link.gale.com/apps/doc/A282427977/AONE?u=toro37158&sid=AONE&xid=9d14a49b

Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books. Retrieved from https://ebookcentral.proquest.com/lib/oculocad-ebooks/detail.action?docID=684281

Yezi, D. (2010, May 3). Maybe You Need a Tree Hole Too. Retrieved from https://anadviceaday.wordpress.com/2010/05/03/maybe-you-need-a-tree-hole-too/