Experiment 5 Proposal – OK to Touch!?

Project Title
OK to Touch!?

Team members
Priya Bandodkar & Manisha Laroia

Concept
OK to Touch!? is an interactive experience that makes a hidden tracking technology, device fingerprinting, visible to users through interactions with everyday desk objects. The concept uses tangible interactions with physical objects to convey how users’ private data could be tracked without their consent in the near future.

Project Description
Peering into the digital future, OK to Touch!? is an interactive experience that makes device fingerprinting, a hidden tracking technology currently used in browsers and apps and soon moving to smart devices, visible to users. Through simple interactions with everyday desk objects, we will record the physical touch-points IRL, tag them and project them onto a screen in the backdrop, creating a visual of the fingerprinting data we unknowingly share with each interaction we make.

Physical Installation
The setup is a tabletop display of desk objects with a large screen or projection wall in the backdrop. As users interact with the objects, a visual representation of the fingerprinting data is added to the backdrop, building a visual map and signalling to users that a simple touch or click on a smart object can be tracked and used to profile them. It is an attempt to raise awareness of the privacy concerns associated with the Internet in today’s knowledge economy.

installation

Setup space:
A 4½ feet x 2½ feet tabletop, set along the wall. The wall serves as the projection screen, or a large TV screen could be used.

Parts & Materials
Desk objects will be selected from the following list of items (subject to prototyping results): a table from the DF studio and a chair, a book, an ink stamp, an iPad or mobile phone, headphones, an alarm clock, a calculator, a box. These will either be sourced in their original form or fabricated as indicated below:

Sourced in original form: book, iPad or mobile phone, headphones, calculator
Fabricated: alarm clock (laser-cut wood), box (laser-cut wood)

Technology
Software and hardware to be used:
p5.js, Processing and Arduino, switches, digitally fabricated objects, and DIY switches.

Object: Book with a bookmark or pen set to a page
Interaction: Open & close
Sensor (input): DIY switch
Output: A graphic circle or fingerprint icon is projected on the wall behind, or fades in on the screen behind the display, with supporting fingerprinting text, and stays there the whole time. //particle.system
[Graphic/Icon] navigator.userAgent
“Book/5.0 (OS NT 10.0; v64; rv:70.0) User/20177101;Toronto; IP 10.4.3.46”

Object: Ink stamp with a paper next to it with a few stamp patterns on it
Interaction: Stamp on paper
Sensor (input): Velostat/pressure sensor
Output: [Graphic/Icon] navigator.userAgent
“Stamp/5.0 (OS NT 10.0; v64; rv:70.0) User/20177101;Toronto; IP 10.4.3.46”

Object: Mobile phone or iPad with text and icons to click (Privacy Policy)
Interaction: Multitouch
Sensor (input): p5.js multitouch to note the touch on screen
Output: [Graphic/Icon] navigator.userAgent
“Phone/5.0 (OS NT 10.0; v64; rv:70.0) User/20177101;Toronto; IP 10.4.3.46”

Object: Headphones hung on a metal stand
Interaction: Remove them from the stand to disconnect the switch and listen to a note on fingerprinting/music
Sensor (input): DIY switch
Output: [Graphic/Icon] navigator.userAgent
“Headphones/5.0 (OS NT 10.0; v64; rv:70.0) User/20177101;Toronto; IP 10.4.3.46”

Object: Alarm clock
Interaction: Open & close
Sensor (input): DIY switch
Output: [Graphic/Icon] navigator.userAgent
“Alarm Clock/5.0 (OS NT 10.0; v64; rv:70.0) User/20177101;Toronto; IP 10.4.3.46”

Object: Calculator with a page displaying two numbers to be added
Interaction: Press calculator buttons
Sensor (input): Velostat/pressure sensor
Output: [Graphic/Icon] navigator.userAgent
“Calculator/5.0 (OS NT 10.0; v64; rv:70.0) User/20177101;Toronto; IP 10.4.3.46”
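A minimal p5.js sketch of the intended output behaviour, assuming each object simply reports a tag name when its switch or sensor is triggered (simulated here with key presses; in the installation the events would come from the Arduino). The object names and spoofed user-agent strings are illustrative placeholders, not final code:

```javascript
// p5.js sketch: fade in a "fingerprint tag" on the backdrop each time
// an object is touched. Sensor events are simulated with key presses;
// in the installation they would arrive from the Arduino switches/sensors.
let tags = [];
const objects = { b: "Book", s: "Stamp", h: "Headphones", c: "Calculator" };

function setup() {
  createCanvas(windowWidth, windowHeight);
  textFont("monospace");
}

function draw() {
  background(0);
  for (const t of tags) {
    t.alpha = min(t.alpha + 3, 255);          // fade in, then stay visible
    noFill();
    stroke(255, t.alpha);
    ellipse(t.x, t.y, 60, 60);                // placeholder fingerprint icon
    noStroke();
    fill(255, t.alpha);
    textSize(12);
    text(t.label, t.x - 30, t.y + 45);
  }
}

function keyPressed() {
  const name = objects[key];
  if (!name) return;
  tags.push({
    x: random(80, width - 80),
    y: random(80, height - 80),
    alpha: 0,
    // spoofed user-agent-style string, mirroring navigator.userAgent
    label: name + "/5.0 (OS NT 10.0) User/20177101; Toronto",
  });
}
```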

 

Workplan

Dates – Activities
23rd November – 25th November: Material procurement and quick prototyping to select the final 4 objects
26th November – 28th November: Writing the code and digital fabrication
28th November – 30th November: Testing and bug-fixing
1st December – 2nd December: Installation and final touches
3rd December – 4th December: Presentation

References
Briz, Nick. “This Is Your Digital Fingerprint.” Internet Citizen, 26 July 2018, https://blog.mozilla.org/internetcitizen/2018/07/26/this-is-your-digital-fingerprint/.

Szymielewicz, Katarzyna, and Bill Budington. “The GDPR and Browser Fingerprinting: How It Changes the Game for the Sneakiest Web Trackers.” Electronic Frontier Foundation, 21 June 2018, https://www.eff.org/deeplinks/2018/06/gdpr-and-browser-fingerprinting-how-it-changes-game-sneakiest-web-trackers.

“Talk to Me: Immaterials: Ghost in the Field.” MoMA, https://www.moma.org/interactives/exhibitions/2011/talktome/objects/145463/.

Chen, Brian X. “’Fingerprinting’ to Track Us Online Is on the Rise. Here’s What to Do.” The New York Times, 3 July 2019, https://www.nytimes.com/2019/07/03/technology/personaltech/fingerprinting-track-devices-what-to-do.html.

 

Experiment 5 – proposal

(Un)seen
by Nadine Valcin

emergence-4

Project description
Much of my work deals with memory and forgotten histories. I am constantly searching for new ways to portray the invisible past that haunts the present. (Un)seen is a video installation about presence/absence that uses proxemics to trigger video and sound. It recreates a ghostly presence on a screen: a figure whose voice constantly beckons the viewer to come closer, but whose image recedes into the frame as the viewer tries to engage with it. Ultimately, the viewer is invited to touch the cloth it is projected on, but if they do, the ghost completely disappears from view, leaving an empty black screen.
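A minimal p5.js sketch of the intended interaction logic, assuming the ultrasonic readings arrive as a distance in centimetres (simulated here with mouseY; in the installation an Arduino Nano would stream the HC-SR04 values over serial): as the viewer approaches, the figure shrinks and fades, and below a touch threshold it disappears entirely.

```javascript
// Interaction sketch: map viewer distance to the ghost's size and opacity.
// distanceCm is simulated with the mouse; in the installation it would be
// read from the ultrasonic sensors via an Arduino over serial.
function setup() {
  createCanvas(640, 480);
}

function draw() {
  background(0);
  const distanceCm = map(mouseY, 0, height, 20, 400); // stand-in for sensor data
  if (distanceCm < 30) return; // touch threshold: ghost vanishes, screen stays black
  const s = map(distanceCm, 30, 400, 0.2, 1, true);      // closer -> smaller
  const alpha = map(distanceCm, 30, 400, 40, 255, true); // closer -> fainter
  noStroke();
  fill(230, alpha);
  ellipse(width / 2, height / 2, 200 * s, 320 * s);      // placeholder for the video figure
}
```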

With permission, I will be using unused footage from a previous project, comprising closeups of a Black woman on a black background, and will be recording and mixing a new soundtrack.

Parts / materials / technology list
MacBook Pro
Arduino Nano
Distance sensors (2 or 3) HC-SR04 or KS102 1cm-8M Ultrasonic Sensors
King size bedsheet, hanging from rod
Projector
2 speakers (or 4?)

Work plan
22.11.19-24.11.19     Edit 3 video loops
24.11.19-25.11.19     Write ghost dialogue and research sound
26.11.19-27.11.19    Record and edit sound
22.11.19-27.11.19     Program distance sensors and interaction
27.11.19                       Mount bedsheet on rod
28.11.19-02.12.19    Testing and de-bugging
03.12.19-04.12.19    Presentation

Physical installation details
The ideal space for the installation would be Room 118 on the ground floor.
With permission, I will be using footage shot for another project, comprising closeups of a Black woman on a black background. The ideal would be for the image to be projected from the rear onto the sheet. This would require a dark space and enough room behind and in front of the sheet. The mount for the sheet will be kept deliberately light. Metal wire can be used to hang the rod holding the sheet from the ceiling, but this would potentially require discreet hooks screwed or attached to the ceiling.

Set-up Option A

unseen-setup-a1

Set-up Option B

unseen-setup-option-b1

Resource list
Hand drill and other tools TBA
Ladder
Projector
2 (or 4?) speakers
2 pedestals for the sensors (?)
Cables for the speakers
Power bar and electrical extension cords
Table

Questions
– Can I have 4 speakers and have them play different sounds in pairs? I.e., the speakers behind the screen wouldn’t play the same sound as the speakers in front of the screen
– Do I actually need 3 distance sensors – 1 behind the screen for the functions triggered by people touching the screen and two mounted (possibly on pedestals) slightly in front of the screen at each side?
– Is it possible to hang things from the ceiling?
– Would a motion sensor also be useful to activate the installation when someone comes into the room?

Experiment 5 Proposal

Zero UI (Working Title)

For Experiment 5, I’d like to expand on tangible interfaces and explore Zero UI (invisible user interfaces): technology fully incorporated within a room (or household) through pressure-sensitive furniture and sensors with auditory feedback, elevating regular objects (a book) into a more immersive experience instead of depending on screen-based ones. This experiment is an exploration in creating a multi-sensory reading experience with content catered to the book’s contents.

The experiment would involve a pressure-sensing chair that lights up a nearby lamp when the participant sits down. The pressure sensor may be installed directly on the chair or hidden within a cushion or lining. The participant can pick up the book and read or flip through it, hearing the music referred to in the book play from a hidden speaker (possibly below or behind the chair). The audio would be mapped to whichever section of the book the participant is on.
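A minimal p5.js sketch of that mapping logic, assuming the pressure and flex readings arrive as values in the 0–1023 range (simulated here with sliders; in the installation the Arduino would stream them over serial) and that the thresholds, section boundaries and track names are placeholders to be tuned:

```javascript
// Mapping sketch: pressure reading -> lamp on/off, flex reading -> which
// section of the book (and therefore which track) is "active".
// Sensor values are simulated with sliders; in the installation they would
// come from the Arduino over serial.
let pressureSlider, flexSlider;
const sections = ["Opening – Thieving Magpie", "Middle", "Ending"]; // placeholder tracks

function setup() {
  createCanvas(400, 200);
  pressureSlider = createSlider(0, 1023, 0);
  flexSlider = createSlider(0, 1023, 0);
}

function draw() {
  background(30);
  const seated = pressureSlider.value() > 600;       // threshold is a guess to tune
  const section = floor(map(flexSlider.value(), 0, 1023, 0, sections.length));
  const idx = constrain(section, 0, sections.length - 1);

  fill(seated ? color(255, 220, 120) : 80);          // "lamp" indicator
  ellipse(80, 100, 60, 60);

  fill(255);
  text(seated ? "Lamp ON – playing: " + sections[idx] : "Lamp OFF", 140, 105);
}
```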

screenshot-2019-11-19-at-8-26-20-pm

The book I’d like to use is still undecided, but it should be one with many musical references, such as Haruki Murakami’s The Wind-Up Bird Chronicle, which begins with the protagonist listening to Rossini’s The Thieving Magpie and refers to many other classical musicians. Another possibility is J.R.R. Tolkien’s The Hobbit, with the movie franchise’s music by Howard Shore playing instead.

Project Main Components and Parts

  1. Arduino Nano
  2. Flex Sensor
  3. Pressure Sensor
  4. MP3 Shield (?)
  5. External Speakers
  6. Lightbulb and Wiring (Lamp)

Additional Components and Parts

  1. Chair (Supporting Prop)
  2. Fabric/Cushion (To Hide/Place Sensor)
  3. Book (Prop)
  4. Mat Rug (Prop To Hide Cables)

Workback Schedule

Fri, Nov 22 –  Proposal Presentation
Sat, Nov 23 –  Coding + Gathering Digital Assets + Building Lo-Fi Breadboard Prototype
Sun, Nov 24 – Coding + Gathering Digital Assets + Building Lo-Fi Breadboard Prototype
Mon, Nov 25 –  Coding + Creatron for Final Components
Tues, Nov 26 –  Presenting Progress of Lo-Fi Breadboard Prototype + Revisions
Wed, Nov 27 – Prop Purchasing
Thurs, Nov 28 – Laser Cutting Components and Coding
Friday, Nov 29 – Troubleshooting / Bug Fixes
Sat, Nov 30 – Troubleshooting / Bug Fixes
Sun, Dec 1 –  Troubleshooting / Bug Fixes
Mon, Dec 2 – Troubleshooting / Bug Fixes
Tues, Dec 3 – Final Critique
Wed, Dec 4 – Open Show

Physical Installation

Ideally, I’d like to place the setup in the corner of a room with dimmer lighting so the light from the lamp is more visible when it turns on. Supporting objects within the setup would be the chair participants can sit on, with the sensor attached.

screenshot-2019-11-19-at-8-26-37-pm

screenshot-2019-11-19-at-8-26-45-pm

Resource List

  1. Chair and Side table
  2. Will need extension cords for power
  3. External speakers

Info for Open Show

Preferably displayed in the Grad Gallery room. I will just need an electrical outlet nearby or extension cord. We will need to book external speakers from AV.

Project Proposal: What is my purpose?

Project Title: What is my purpose?

Work by: Nilam Sari

Project Description: 

This project will be a small part of my thesis project: a 5 x 5 x 5 in wooden cube with a microcontroller inside that controls an arm to repeatedly stab the cube with a chisel, slowly chipping away its body. The purpose of this project is to evoke an emotional reaction from its viewers.

Parts / Materials / Technology list: 

  • Wooden case (hard maple)
  • 3D printed arm parts
  • Chisel
  • Arduino Uno
  • 2 Servo motors

Work Plan:

experiment-5-timeline

Physical Installation Details:

The work will be battery powered, autonomously moving, non-interactive, sitting on top of a pedestal. There will be an on and off switch.

Resource List:

One narrow pedestal to hold a 5 x 5 x 5 in cube at display height (around hip level).

Information for Open Show:

Would like to borrow the pedestal from the graduate studies. Work can be shown anywhere in the gallery or the 7th floor space.

Mood Drop

By Jessie, Liam and Masha

Project description:

Mood Drop is an interactive communication application which creates a connection between individuals in different physical environments over distance through interaction of visual elements and sound on digital devices. It allows people to express and transmit their mood and emotions to others by generating melody and images based on interaction between users.

Mood Drop enables multi-dimensional communication since melody naturally carries mood and emotion. It distinguishes itself from the ordinary day-to-day over-the-distance communication methods such as texting, with its ability to allow people to express their emotions in abstract ways. 

Furthermore, Mood Drop embodies elements of nature and time, which often play a factor in people’s emotions. By feeding in real-time environmental data such as the temperature, wind speed and cloudiness of a place, which affect variables within the code, it sets the underlying emotional tone of a physical environment. As people interact in a virtual environment that closely reflects aspects of the physical environment, a sense of the telepresence of other people in one’s physical environment is created.

Code: https://github.com/lclrke/mooddrop
Process: 

Modes and Roles of Communication

Having learned networking, we tried to come up with ideas of different modes of communication. Rather than every user having the same role, we hoped to explore varying the roles that could be played in a unified interaction. Perhaps some people only send data and some people only receive it, and we could use tags to filter the data to receive on the channel.

One idea we considered was a  unidirectional communication method where each person receives data from one person and sends data to another person.

img_8254 img_8255

 

However, we didn’t pursue this idea further because we couldn’t justify the choice with a valid reason beyond it being an interesting idea. We eventually settled on the idea of creating a virtual community where everyone is a member and can make the same contribution.

Ideation:

Once we settled on the idea that everyone has the same role and figured out PubNub, we started brainstorming. We were all interested in creating an interactive piece involving both visuals and sound, so we explored p5.js libraries for inspiration. The Vida library by Pawel Janicki gave us the idea of affecting sound with motion detected by a web camera. This would not work, however, because it was impossible to do video chat through PubNub (hence, no interaction).

Another thought was to recreate the Rorschach test: users would see a changing abstract image on the screen and could share with each other what they saw by typing it.

Finally, we came up with the idea of an application that would allow users to express their mood over distance. Using visuals and sounds, participants would be able to co-create musical compositions while far away from each other. We found a sketch, which became the foundation for the project, in which users could affect the sound by interacting with shapes using their mouse.

Next, we built a scale using notes from a chord, with frequencies spaced so that the size of the shape generated by clicking would affect the mood of the transmitted sound. The lower part of the scale contains the minor-leaning notes of the chord, while the top part holds the higher notes; the larger the circle, the more likely it is to play the lower minor roots of the chord. The final sound was simplified to one p5.js oscillator with a short attack and sustain to give it percussive characteristics.
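A minimal sketch of that mapping with p5.sound, using an illustrative set of chord tones (the actual scale and envelope values used in the project may differ): a larger circle favours the lower notes.

```javascript
// p5.js + p5.sound: map circle size to a note from a chord-based scale.
// Larger circles favour the lower (minor-root) frequencies.
let osc, env;
const notes = [110, 130.81, 164.81, 220, 261.63, 329.63]; // A-minor chord tones (illustrative)

function setup() {
  createCanvas(400, 400);
  background(0);
  noFill();
  stroke(255);
  osc = new p5.Oscillator("sine");
  env = new p5.Envelope();
  env.setADSR(0.01, 0.1, 0.2, 0.3); // short attack/sustain -> percussive feel
  env.setRange(0.5, 0);
  osc.amp(0);
  osc.start();
}

function mousePressed() {
  userStartAudio();                          // unlock audio on first interaction
  const radius = random(20, 200);            // in the app this comes from the interaction
  const idx = floor(map(radius, 20, 200, notes.length - 1, 0)); // bigger -> lower note
  osc.freq(notes[constrain(idx, 0, notes.length - 1)]);
  env.play(osc);
  ellipse(mouseX, mouseY, radius * 2);
}
```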

Working on visuals

As we started working on the visual components of the piece, we decided to try the 3D (WEBGL) mode in p5.js. We were looking for a design that would have a strong and clean sense of interaction when the shapes connected in digital space. We also imagined the sound as a 3D object, one that can exist in multiple dimensions and have many directions. We experimented with many shapes, colors and textures.

Simplifying shapes and palette:

An important moment occurred when we were all interacting with the same page independently from home. While working on small code details, we soon found ourselves playing with each other in an unplanned session, which created an exciting moment of connection. We pivoted away from maximal visuals and sound after this to focus on this feeling, as we thought it was important to emphasize. While working on the project beside each other, we had wondered why being in separate rooms was important for demonstrating the piece. This moment of spontaneous connection through our p5.js editor window made us understand the idea of telepresence and focus on what we then thought was important to the project.

We decided to return to a simple black-and-white draft featuring the changing size of a basic ellipse. The newer visuals did not clearly show the parameters of the interaction, as the relationships between shapes on screen were not as clear as with a basic circle.

By adding too many aesthetic details, we felt we were predefining aspects that could determine mood for a user. We found black and white was the better choice of palette, as we wanted to keep the mood ambiguous and open to user interpretation.
screen-shot-2019-11-17-at-1-35-54-pm screen-shot-2019-11-17-at-1-39-16-pm screen-shot-2019-11-12-at-3-20-09-pm

 

 

Project Context:

The aim was to create a connection between two different environments, and we looked to transfer something more than video and text.

placereddit

Place by Reddit (2017)

This experiment involved an online canvas of 1000×1000 pixel squares, located at a subreddit called /r/place, which registered users could edit by changing the color of a single pixel from a 16-colour palette. After each pixel was placed, a timer prevented the user from placing any pixels for a period of time varying from 5 to 20 minutes.

The process of co-creating one piece with multiple people from different places appealed to us; thus we also designed something that enables people to feel a connection to each other. To push this idea further, we decided to create something where visuals and sounds work in harmony as a coherent piece when people interact with each other. The interactions between people are represented in the virtual space by the animation of the visual elements they create, and by sound, on a digital device.

 

unnamed-5-40 unnamed-7-40

 Unnumbered Sparks: Interactive Public Space by Janet Echelman and Aaron Koblin (2014).

The sculpture, a net-like canvas 745 feet long and suspended between downtown buildings, was created by artist Janet Echelman. Aaron Koblin, Creative Director of Google’s Data Arts Team, created the digital component, which allowed visitors to collaboratively project abstract shapes and colors onto the sculpture using a mobile app. We applied the simplicity and abstract shapes of this mobile app to our interface in order to make the process of interaction and co-creation more visible and understandable.

Telematic Dreaming by Paul Sermon (1993)

This project connects two spaces by projecting one space directly on top of another space. The fact that Sermon chose 2 separate beds as the physical space raises interesting questions. It provokes a sense of discomfort when two strangers are juxtaposed into an intimate space even if they are not really in the same physical space. The boundary between the virtual space and physical space becomes blurred because of this interesting play with space and intimacy.

Inspired by this idea of blurring the boundary between two spaces, we thought we could use external environmental data from the physical space, visualized and represented in the virtual space on screen. The virtual space is displayed on a screen that itself exists in a physical space. In this way, not only is each user connected to their own environment, but the people interacting with them are also connected to that environment, since they interact within a virtual environment closely tied to data from the physical space. It blurs the line between the virtual and the physical as the two spaces become intertwined, generating an interesting sense of presence in both as users interact with each other.

We eventually decided to add a live Toronto weather API to mingle with our existing interaction elements. We used temperature, wind speed, humidity and cloudiness to affect the speed of the animation and the pitch and tone of the notes. For example, at midday the animation and sound run at a faster speed than in the morning as the temperature rises, which aligns with people’s energy levels and mental state, and potentially their emotions and mood.
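A minimal sketch of how the weather data might drive those variables, assuming an OpenWeatherMap-style JSON response; the URL, API key and mapping ranges are placeholders rather than the project’s actual values:

```javascript
// Fetch current weather for Toronto and map it onto animation/sound variables.
// URL and API key are placeholders; the response shape assumes an
// OpenWeatherMap-style JSON ({ main: { temp }, wind: { speed }, clouds: { all } }).
let animSpeed = 1;
let basePitch = 220;

function setup() {
  createCanvas(400, 400);
  const url =
    "https://api.openweathermap.org/data/2.5/weather?q=Toronto&units=metric&appid=YOUR_KEY";
  loadJSON(url, gotWeather, (err) => console.log("weather request failed", err));
}

function gotWeather(weather) {
  // warmer and windier -> faster animation; cloudier -> lower pitch
  animSpeed = map(weather.main.temp, -10, 30, 0.5, 2, true) +
              map(weather.wind.speed, 0, 15, 0, 0.5, true);
  basePitch = map(weather.clouds.all, 0, 100, 330, 180, true);
}

function draw() {
  background(0);
  fill(255);
  // placeholder animation: a circle whose motion speed reflects the weather
  const x = width / 2 + sin(frameCount * 0.02 * animSpeed) * 120;
  ellipse(x, height / 2, 40, 40);
  text("animSpeed: " + nf(animSpeed, 1, 2) + "  basePitch: " + nf(basePitch, 1, 0), 10, 20);
}
```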

References:

Manfred , M. (2012). Manfred Mohr, Cubic Limit, 1973-1974. Retrieved November 17, 2019, from https://www.youtube.com/watch?v=j4M28FEJFF8

OpenProcessing. (n.d.). Retrieved November 18, 2019, from https://www.openprocessing.org/sketch/742076.

Puckett, N., & Hartman, K. (2018, November 17). DigitalFuturesOCADU/CC18. Retrieved from https://github.com/DigitalFuturesOCADU/CC19/tree/master/Experiment4

Place (Reddit). (2019, November 4). Retrieved November 18, 2019, from https://en.wikipedia.org/wiki/Place_(Reddit)

Postscapes. (2019, November 1). IoT Art: Networked Art. Retrieved November 18, 2019, from https://www.postscapes.com/networked-art/

Sermon, P. (2019, February 23). Telematic Dreaming (1993). Retrieved November 18, 2019, from https://vimeo.com/44862244.

Shiffman, D. (n.d.). 10.5: Working with APIs in Javascript – p5.js Tutorial. Retrieved November 17, 2019, from https://www.youtube.com/watch?v=ecT42O6I_WI&t=208s

 

 

 

Experiment 4: Influence Over Distance

Eggsistential Workspace

Course: Creation & Computation
Digital Futures, 2019

Jun Li, Jevonne Peters, Catherine Reyto, Rittika Basu, Arsalan Akhtar

GitHub: https://github.com/jevi-me/CC19-EXP-4 

Project Description

Eggsistential workspace is a somatosensorial platform intended to communicate motivation and companionship between two participants communicating from distant spaces through transitional chromatic values. This personalised bonding experience enables participants to convey their activity while in their individual workspaces, and more specifically, their laptop or desktop PC. With the motion of their wrists rising and falling while in the act of typing, pressure sensors (nestled in a mousepad made from a balaclava and light stuffing) activate the RGB patterns of the corresponding participant’s set of lights. The faster they type, the greater the activity of the lights, and the slower they type, the more their inertia is echoed by the decreased activity of the light patterns. We have all experienced that feeling of being strapped to one’s desk under the pressure of a deadline, as well as the lack of community if working alone is a frequent occurrence. We thought it made for a fun, expressive but non-intrusive way of keeping one another company while working at home or in a solitary-feeling workspace.

One example of telepresence that inspired our ideation process was The Trace (El Rastro) by Rafael Lozano-Hemmer [6]. In this installation, two people in remote rooms share a common space with the help of light visuals and sound, triggered by sensors when an individual enters a room, resulting in the two occupying the exact same position in the space.
In addition, another great use of ultraviolet light for a telepresence art installation can be seen in the project “Miscible” [7]. This work by Manuel Chantre and Mathieu Le Sourd used sensors, light and principles of chemistry to mix two liquids homogeneously while participants were in remote locations. In this performance, participants in remote locations are expected to mix the liquids using UV lights in a way that blends them perfectly, each UV light contributing to a perfect blend of liquid and colour.

Ideation & Process

In our first brainstorming session, we agreed from collective past experience that it would be wise to keep the idea simple. The very first topic we discussed was the language of colour, and how hues are interpreted differently in different countries. But we struggled to find a tangible means of working with colour translation, given the complexity of networking. We brainstormed several ideas, explored online examples and struggled to procure elements from previous projects. One of the dismissed proposals involved creating a winter scene (via graphics in Processing) wherein participants could collectively monitor parameters of day-to-night transition (changing the background colour), intensify the snow-storm via wind (rain/snow particle code), wavering opacity, amplify audio effects etc.

Through the iterations of our project, our common interests aligned and a concept began to take shape. We were inspired by the concept of ‘Telepresence’ from Kate’s class, especially the ‘LovMeLovU’ project by Yiyi Shao. (“Shao”) [1]. We were all drawn to the idea of remote interaction between two individuals, in two separate spaces, by means of two displays of light. We finalised on an output of 50 RGB LEDs set for 2 rooms. (bitluni’s lab) [2] It did mean upping the ante in a big way, but we had two group members with experience in our corner. It meant they had a chance to fulfil some of the objectives from a previous experiment and gave the rest of us the opportunity to learn about working with RGBs. We also recognized that there was a gap between the code capabilities of some group members compared to others, and it meant a lot to us to all have a hand in writing code in some capacity. Since Arduino has by now become a fairly comfortable language, it further emphasized the desire to add an extra layer to the project requirements, in that we could all work, code and test together, learning from one another along the way.

Because of the visual appeal and chromatic range offered by RGBs, we were determined from the outset to incorporate them in the output design. We were really taken by the idea of being able to illuminate a friend’s room from a remote location, and it was important that the display interaction feel emotive and intuitive. At first, we imagined this action taking place with hugs and squeezes (by way of a stuffed toy or pillow), sensor-driven to create an effective response in the corresponding display. A light squeeze of a pillow could light up a friend’s bedroom on a dreary and perhaps lonely evening, feeling like a hug and a small gift of an uplifting atmosphere. A hard squeeze, by contrast, might generate a bright and panicked effect in my friend’s room, letting them know I’m feeling anxious.

Knowing that we had our work cut out for us, we made a list of benchmarks on a timeline. We had laboured away for 9 hours on Saturday, learning how PubNub worked and by early evening, sending switch values through PubNub to Arduino and finally to Processing. That was the big hurdle we had been unsure about, but thanks to Jevi’s skills and clear communication, we were able to build a clear path that everyone was able to understand and work with. The achievement gave us confidence, and we set about storyboarding an ideal setup: two modes, one using Arduino and another using Touch Designer. We were very interested in trying out both systems; Arduino because we all knew our way around a bit, and Touch Designer for the added benefit of effects as well as what Jun Li could show us. By the next session though, when testing the LEDs out across the network, we encountered a major issue before even getting that far. To our surprise, the Nano could only power a small portion (about 5 to 10) of the RGB strips (out of 50 per strip). This wasn’t enough for a significant display. We were able to resolve some but not all of the issue by using Megas instead.

As the days passed, the stuffed toy or pillow took on various forms… eventually landing on wrist-warmers. We moved in this direction because, for one thing, we were apprehensive about the surprise challenges that the pressure sensors might present. It seemed logical that the enclosure be easily accessible, and that the sensors have as much contact with the point of pressure as possible. With wrist-warmers, we could control the variable resistance by gripping and relaxing our hands. It felt like a very natural use and appropriate for the (very sudden) change in season. The fact that wrist-warmers are less common / expected than gloves was a bonus. We eventually settled our design on an ergonomic support pad for typing. There was a more refined simplicity in this concept. No coding language (in the form of squeeze strength or the number of squeezes to communicate) was needed; in fact, no thought on the user’s end was required at all. Instead, they would carry on as they normally would, typing away at their work.

It took some time to work out the display for the lights. We had decided on using the two south-facing adjacent walls in the DF studio, modelled as bedroom settings. We became invested in the notion of adjustable displays (and it still seems like a cool idea), where individual strips could be attached at hinges while supporting the LED strip. We envisioned participants configuring the display themselves and hanging it from hooks we’d fashioned in the window sill. Ultimately this plan proved unfeasible and we set it aside as a divergence. We settled on the hexagon-shaped mounts because they made practical sense: they were good housing for the narrowly spaced LED strips and were less time-consuming to produce. The hexagons also helped with a major issue we had run into with the LEDs: even with the use of Megas, we could only power the full strip if the LEDs were lit at 50% brightness, max. Having the lights configured in a cluster thus meant optimizing the light effect.
We had opted early on for ping pong balls as diffusing shells for the bulbs, and had wisely ordered them in just enough time from Amazon. We went through a lengthy process of making holes in 100 balls, then fastening them to the hexagon rigs.
Meanwhile, we had devised a system for the sensors, fastening them snugly into sewn felt pouches complete with velcro openings for easy removal. These were designed to fit in the openings of the wrist-warmers, but after running into some unexpected complications with this system, we placed them instead inside mousepads improvised from balaclavas.

SOFTWARE IMPLEMENTATION

image13

Arduino + Processing + PubNub Implementation
For the default implementation, we used two channels to communicate between the rooms: each room published and listened on separate channels corresponding to the rooms. We had issues with the port detection of the Arduino on certain computers, but the root cause was never determined. Once the port had connected and maintained a stable connection, the communication from Arduino to Processing, to PubNub and back could be made.
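A minimal sketch of that two-channel pattern using the PubNub JavaScript SDK, with placeholder keys and channel names; the project itself routed the values through Processing, so this only illustrates the channel logic:

```javascript
// Two-channel room setup: each room publishes its own sensor values and
// listens to the other room's channel. Keys and channel names are placeholders.
const pubnub = new PubNub({
  publishKey: "pub-key-here",
  subscribeKey: "sub-key-here",
  uuid: "room-A",
});

const MY_CHANNEL = "roomA";      // this room publishes here
const OTHER_CHANNEL = "roomB";   // and listens to the other room here

pubnub.addListener({
  message: (event) => {
    // pressure value from the other room drives this room's light activity
    const pressure = event.message.pressure;
    console.log("remote pressure:", pressure);
  },
});

pubnub.subscribe({ channels: [OTHER_CHANNEL] });

// called whenever a new local sensor reading arrives (e.g. from serial)
function sendReading(pressure) {
  pubnub.publish({ channel: MY_CHANNEL, message: { pressure: pressure } });
}
```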

TD + Arduino Implementation
In this implementation, we brought more possibilities to the project through Jun Li’s experience and challenged ourselves to achieve the same effect with different techniques. In this setup, we used the TCP/IP internet protocol instead of PubNub to send the same data to control the LED lights. After testing with Processing, we found the colours didn’t appear as designed, and we tried many ways to debug the colour problem, suspecting a hardware issue. After some research, we realized the LED strips purchased for Experiment 4 were a slightly different model from the WS2811 strips used in Jevi, Li and Arsalan’s Experiment 2: the red and green channels were swapped. After reordering the colour data, the effect worked the same as with PubNub. All interactive effects and settings were performed in TouchDesigner and sent through TCP/IP to the Arduino in real time.

Arduino + Processing + PubNub + TD Implementation
Because TouchDesigner can easily perform and design a lot of effects, we also tried bringing TD into the PubNub workflow to meet the project requirement. The workflow became more complicated and difficult in this implementation. We tried using two Arduinos on each side, with two serial connections on the same side: one for receiving the data from Processing coming from the other side, and another for sending the received data to TD and back out to the LED lights. Theoretically and technically it can be done; however, we found it difficult to send and receive data across so many pieces of software, and in the end we did not have enough time to achieve the same effect as the two approaches above.

Reflections

It seemed clear from our first brain-storming session that we were going to work well together as a group. We had a diverse set of skills that we were eager to pool together to come up with something especially creative. The challenge inspired us and we were ready to put in the work.

The early success we’d achieved after putting in long hours on the weekend might have given us a false sense of confidence. It put us ahead of the game and made us feel that since we’d overcome the biggest obstacle (networking from Arduino to Processing to PubNub, end-to-end), the rest would be easy enough to achieve. We set into production of the materials while devising a library of messages that users could communicate to one another by means of pulses on the pressure sensors and lighting. What we failed to foresee were assembly issues with the sensors. We took it for granted that they were made at least a little durably (seeing as they are commonly used in wearables projects), but that turned out to be far from the case. In spite of protecting the soldering with heat-shrunk tubing, encasing the wires with cardboard backing, and harnessing them as securely as possible in the hand-sewn pouches, we went through one sensor after another, the plastic ends shredding with the slightest movement. We didn’t yet know we could repair them on our own (and have yet to try, although it was investigated after we broke the collective bank), resulting in trip after trip to Criterion throughout that snow-filled week. After our early successes, and especially once we saw the extent to which we could actually communicate messages via lighting effects (we had even devised our own list of messages), it was disheartening to have the project fall apart each time we tested on account of the extreme fragility of the sensors. It was a major factor in pivoting the concept from wrist-warmers to a mousepad (which involved no movement of the sensor), a decision that unfortunately took place too late in the game and didn’t allow us sufficient time for proper testing with the rest of our setup.

 

image30
List of messages derived from a library of Arduino lighting effects

Hindsight is always 20/20, and there is no way in this case we could have anticipated this problem unless we had researched “How fragile are variable resistor pressure sensors when placed in clothing?”. But we will know to be that specific about troubleshooting and testing well beforehand next time around.
It was overwhelmingly frustrating to have our demo fall apart to the extent it did right before the presentations. We’d had such a strong start, and working on this project had been an invigorating, devoted process for our group. Overall it was a great experience, one that involved a great deal of learning, collaboration and creativity. In spite of falling a little short in the demo, we had achieved some pretty amazing results along the way.

 

References

  1. Shao, Yiyi. “Lovmelovu”. Yiyishao.Org, 2018, https://yiyishao.org/LovMeLovU.html. Accessed 8 Nov 2019.
  2. bitluni’s lab. DIY Ping Pong LED Wall V2.0. 2019, https://www.youtube.com/watch?v=xFh8uiw7UiY. Accessed 8 Nov 2019.
  3. Hartman, Kate, and Nicholas Puckett. “Exp3_Lab2_Arduinotoprocessing_ASCII_3Analogvalues”. 2019.
  4. Hartman, Kate, and Nicholas Puckett. November 5 Videos. 2019, https://canvas.ocadu.ca/courses/30331/pages/november-5-videos. Accessed 8 Nov 2019.
  5. Kac, E (1994). Teleporting An Unknown State. Article. Retrieved from: http://www.ekac.org/teleporting_%20an_unknown_state.html
  6. Lozano-Hemmer, R. (1995). The Trace (El Rastro). Retrieved from: http://www.lozano-hemmer.com/artworks/the_trace.php
  7. Chantre, M. (2014). Miscible. Retrieved from: http://www.manuelchantre.com/miscible/

Experiment 4 – You Are Not Alone!

You Are Not Alone! is a networking experiment designed to create an experience of digital presence and interaction in a shared online space with illustrated on-screen windows based on the metaphor of ‘each window as a window into the other world’.
An experiment with p5.js and networking using PubNub, a global real-time data stream network.

Team
Sananda Dutta, Manisha Laroia, Arshia Sobhan, Rajat Kumar

Mentors
Kate Hartman & Nick Puckett

Description
Starting with the brief for the experiment, i.e. online networking across different physical locations, the two key aspects of networking that struck us were:
– the knowledge of each other’s presence, i.e. someone else is also present here;
– the knowledge of common shared information, i.e. someone else is also seeing what I am seeing;
and how crucial these are to creating the networking experience.
You Are Not Alone! is a networking experiment designed to create an experience of digital presence and interaction in a shared online space. The community in the shared space can light up windows with multiple clicks and have their ghost avatars travel through the windows that others light up. With prolonged inactivity in a pixel area, the lit windows slowly get darker and turn off. The visual metaphor was taken from a neighbourhood building: people know of each other’s presence when they see lit windows, and of their absence when they see unlit windows. In that space, people don’t communicate directly but know of each other’s presence. The individual avatar moving through this space was meant to create an interaction within the present community. The windows were metaphors for windows into other people’s worlds as they entered the same space.

Inspiration

paper-plane

Paper Planes
The concept of this cross-location virtual presence experiment was to create and fold your own paper plane, stamp it with your location, and “throw” it back into the world where it can be caught by someone on the other side of the world. Paper Planes started as a simple thought – “What if you could throw a paper plane from one screen to another?”
See the detailed project here.

10

Dear Data
Dear Data is a year-long, analog data drawing project by Giorgia Lupi and Stefanie Posavec, two award-winning information designers living on different sides of the Atlantic. Each week, and for a year, they collected and measured a particular type of data about their lives, used this data to make a drawing on a postcard-sized sheet of paper, and then dropped the postcard in an English “postbox” (Stefanie) or an American “mailbox” (Giorgia)! By collecting and hand drawing their personal data and sending it to each other in the form of postcards, they became friends. See the detailed project here.

The Process
We started the project with a brainstorm building on the early idea of networked collaborative creation. We imagined an experience where each person in the network would draw a doodle and, through networking, add it to a collaborative running animation. With an early prototype we realized the idea had technical challenges and moved to another idea based on shared frames and online presence to co-create a visual. A great amount of the delight in the Paper Planes networked experience comes from the pleasant animation and from knowing other people are present there, carrying out activities.

bg

brainstorm-2_11nov2019

brainstorm-11nov2019

We wanted to replicate the same experience in our experiment. We brainstormed several ideas, like drawing windows for each user and having text added to them, windows appearing as people join in, or sharing common information like the number of trees or windows around you and representing this data visually to create a shared visual. We all liked the idea of windows lighting up with presence and built around it: light on meaning present, light off meaning absence or inactivity.
We created the networked digital space in two stages:
[1] One with the windows lighting up on clicking and disappearing when not clicked, and the personal ghost/avatar travelling through the windows
[2] The other with the shared cursor, where each user in the networked space could see each other’s avatars and move through the shared lit windows.

2019-11-13

2019-11-13-2

2019-11-13-3

We built the code in stages: first the basic light-on/light-off colour change on click, followed by the array of windows, then the networked transfer of user mouse clicks and positions, and finally the avatars for each user. Our biggest challenge was networking: making the avatar appear on click after the window is lit, and having everyone share the same networked space.
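A minimal p5.js + PubNub sketch of the window-grid logic, with placeholder keys and an illustrative grid size and brightness steps: each click publishes the window index, and every connected client (including the sender, via the echoed message) brightens the same window.

```javascript
// Window grid: clicks brighten a window in 3 steps and are shared over PubNub
// so every client sees the same windows light up. Keys are placeholders.
const COLS = 6, ROWS = 4;
let brightness = [];           // 0..3 per window
let pubnub;

function setup() {
  createCanvas(600, 400);
  brightness = new Array(COLS * ROWS).fill(0);
  pubnub = new PubNub({
    publishKey: "pub-key",
    subscribeKey: "sub-key",
    uuid: "user-" + floor(random(9999)),
  });
  pubnub.addListener({ message: (e) => lightWindow(e.message.index) });
  pubnub.subscribe({ channels: ["windows"] });
}

function draw() {
  background(0);
  const w = width / COLS, h = height / ROWS;
  for (let i = 0; i < brightness.length; i++) {
    const x = (i % COLS) * w, y = floor(i / COLS) * h;
    stroke(40);
    fill(255, 220, 0, (brightness[i] / 3) * 255);   // brighter with each click
    rect(x + 5, y + 5, w - 10, h - 10);
  }
}

function mousePressed() {
  // hit detection: which window was clicked
  const index = floor(mouseX / (width / COLS)) + floor(mouseY / (height / ROWS)) * COLS;
  // publish only; this client receives its own message back, which lights the window
  pubnub.publish({ channel: "windows", message: { index: index } });
}

function lightWindow(index) {
  if (index >= 0 && index < brightness.length) {
    brightness[index] = min(brightness[index] + 1, 3);
  }
}
```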

2019-11-14-1

The shared space loads with a black screen. The visual has windows that light up as one clicks on the screen, each window getting brighter with each click and reaching its maximum brightness at three clicks. Each user in the space can see other windows light up and know of the presence of other users. Each person also sees the windows in the same grid alignment as the rest of the community. Once a window is lit, users can see their avatar in it and can then navigate their avatar across the screen through lit windows, following their cursor movement.

boolean-window-ghost

In version 1 of the experience, users cannot see each other’s avatars moving but can see windows light up as others interact with the screen. The avatar was chosen to be a silhouette so that it would visually appear in the lit windows against the black backdrop, and the eyes popping in would create an element of surprise. In version 2, a window appears as a user enters the digital space, and each user has an avatar that can travel through the others’ windows. When the avatar is in the user’s own window it is black, and the ghost colour is lighter when it is in another user’s window. Unfortunately, we were unable to show both interactions in the same code program, with the network messages for cursor clicks and cursor position at the same time.
Presence was indicated with a window, a signifier of the window into the other person’s world, like a global building or neighbourhood. The delight comes from knowing people are there in the same space as you, at the same time, and that there is something in common.

screen-shot-2019-11-14-at-9-33-50-pm

Choice of Aesthetics
The page loads to a black screen. We tapped into the user’s instinct to click randomly on a blank screen to find out what comes up, so we make the windows light up on each click, getting brighter with each click. Turning a window’s light on and off shows presence and also taps into the clickable-fidget impulse, so that visitors to the digital space hang around and interact with each other.
Another feature was to allow each user to see the other users’ cursors as they navigate the screen through the lit windows. We kept the visuals to a minimum, with simple rectangles, black for dark windows and yellow for bright windows. While coding, we had to define the click area that would register clicks on the network to light up the windows.
The defined pixel area varied from screen to screen and would sometimes register clicks for one rectangle, sometimes for two or three. We wanted to light one window at a time, but we let the erratic click behaviour be, as it added a visual randomness to the windows lighting up.

Challenges & Learnings

  1. Networking and sending messages for more than one parameter, like the shared cursor, mouse clicks and mouse positions, was challenging, and we could not send the click and the mouse-hover data at the same time. Thus, we got the click-to-light-up code working but were unable to share the cursor information, so users could not see each other’s avatars on the network (one possible approach is sketched after this list).

screenshot-53

2019-11-14-3

 

  2. Another challenge was having the avatar appear in the window on a click after the window is lit up. We were able to get the avatar to appear in each window in the array and to have a common avatar appear on click in one window. The challenge was managing the clicks and the image position at the same time. We eventually chose to write shared-cursor networking code, but that interfered with the click functionality that dims and lights the windows with increasing opacity.
  3. Our inexperience with code and networking was also a challenge, but we made visual decisions to make the user experience better, like choosing a black-coloured icon that looked like a silhouette on the black background, so we did not have to create it on click but it would be visible when a window was lit.
  4. If we had time, we would surely work on integrating the two codes to create one networked space where the windows follow the grid, all users see the same visual output, the windows light up with increasing opacity, and everyone can see all the avatars floating around through the windows.
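One possible way to address the first challenge, sketched here by reusing the pubnub instance, grid constants and lightWindow() helper from the sketch above (and replacing its message listener and mousePressed): bundle both parameters into a single message with a type field so clicks and cursor positions travel on one channel. This is an illustration, not the project’s actual code.

```javascript
// One channel, one message format: a "type" field distinguishes clicks from
// cursor positions, so both can be shared without separate sends colliding.
const remoteAvatars = {};          // publisher uuid -> {x, y}

pubnub.addListener({
  message: (e) => {
    const m = e.message;
    if (m.type === "click") lightWindow(m.index);
    else if (m.type === "cursor") remoteAvatars[e.publisher] = { x: m.x, y: m.y };
  },
});

function mousePressed() {
  const index = floor(mouseX / (width / COLS)) + floor(mouseY / (height / ROWS)) * COLS;
  pubnub.publish({ channel: "windows", message: { type: "click", index: index } });
}

function mouseMoved() {
  // in practice this should be throttled to a few messages per second
  pubnub.publish({ channel: "windows", message: { type: "cursor", x: mouseX, y: mouseY } });
}

function drawAvatars() {
  // call from draw(): render each remote user's ghost at their current position
  for (const id in remoteAvatars) {
    fill(230, 180);
    ellipse(remoteAvatars[id].x, remoteAvatars[id].y, 20, 30);
  }
}
```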

Github Link for the Project code is here.

References
[1] Shiffman, Daniel. The Nature of Code. California, 2012. Digital.

[2] Shiffman, Daniel. The Coding Train. 7.8: Objects and Images – p5.js Tutorial. 2015. Digital.

[3] Active Theory. Google I/O 2016: Paper Planes. 2016. Digital.
https://medium.com/active-theory/paper-planes-6b0008c56c17

Silent Signals

PROJECT TITLE, SUBTITLE
Silent Signals
Breaking the Language Barrier with Technology

TEAM MEMBERS
Priya Bandodkar, Jignesh Gharat, Nadine Valcin

PORTFOLIO IMAGES

portfolio-02 portfolio-03 portfolio-04portfolio-05

PROJECT DESCRIPTION

‘Silent Signals’ is an experiment that aims to break down language barriers between users across locations by enabling them to send and receive text messages in the language of the intended recipient(s) using simple gestures. The gesture-detection framework is based on PoseNet, and the experiment uses PubNub to send and receive the messages. It is intended to be a seamless interaction in which users’ bodies become the controllers and triggers for the messages. It does away with the keyboard as an input and takes communication into the physical realm, engaging humans in embodied interactions. It can involve multiple users, irrespective of the spatial distance between them.

PROJECT CONTEXT

When we first started thinking about communication, we realized that between the three of us, we had three different languages: Priya and Jignesh’s native Hindi, Nadine’s native French, and English, which all three shared as a second language. We imagined teams collaborating on projects across international borders, isolated seniors who may only speak one language, and globetrotting millennials who forge connections throughout the world. How could we enable them to connect across language barriers?

Our first idea was to build a translation tool that would allow people to text one another seamlessly in two different languages. This would involve the use of a translation API such as Cloud Translation by Google (https://cloud.google.com/translate/) that has the advantage of automatic language detection through artificial intelligence. 

We then thought that it would be more natural and enjoyable for each user to be able to speak their preferred language directly, without the intermediary of text. That would require a speech-to-text API and a text-to-speech API. The newly released Web Speech API (https://wicg.github.io/speech-api/) would fit the bill, as would the Microsoft Skype Translator API (https://www.skype.com/en/features/skype-translator/), which has the added benefit of direct speech-to-speech translation in some languages; unfortunately, that functionality is not available for Hindi.

Language A                                                                Language B

p1

As we discovered that there are several translation apps already on the market, we decided to push the concept one step further, enabling communication without the use of speech, and started looking into visual communication.

The Emoji

pic2

Source: Emojipedia (https://blog.emojipedia.org/ios-9-1-includes-new-emojis/)

Derived from the Japanese terminology for “picture character”, the emoji has grown exponentially in popularity since its online launch in 2010. More than 6 billion emojis are exchanged every day and 90% of regular emoji users rated emoji messages as more meaningful than simple texting (Evans, 2018). They have become part of our vocabulary as they proliferate and are able to express at times relatively complex emotions and ideas with one icon.

Sign Language

Sign languages allow the hearing impaired to communicate. We also use our hands to express concepts and emotions. Every culture has a set of codified hand gestures that have specific meanings. 

pic3

American Sign Language

Source: Carleton University (https://carleton.ca/slals/modern-languages/american-sign-language/)

Culturally-Coded Gestures

Source: Social Mettle (https://socialmettle.com/hand-gestures-in-different-cultures)

MOTIVATION

We also simultaneously started thinking about how we can use technology, as the three of us shared a desire to make our interactions more intuitive and natural. 

“Today’s technology can sometimes feel like it’s out of sync with our senses as we peer at small screens, flick and pinch fingers across smooth surfaces, and read tweets “written” by programmer-created bots. These new technologies can increasingly make us feel disembodied.”

Paul R. Daugherty, Olof Schybergson and H. James Wilson
Harvard Business Review

This preoccupation is by no means original. Gestural control technology is being developed for many different applications, especially as part of interfaces with smart technology. In the Internet of Things, it serves to make interactions with devices easy and intuitive, having them react to natural human movements. Google’s Project Soli, for example, uses hand gestures to control different functions on a smart watch. 

ADAPTATION CHALLENGES

Some of the challenges in implementing this approach are that there is currently no standard format for body-to-machine gestures and that gestures and their meanings vary from country to country. For example, while the thumbs-up gesture pictured above has a positive connotation in the North American context, it has a vulgar connotation in West Africa and the Middle East.

CONCEPT

pic4

The original concept was a video chat that would include visuals or text (in the user’s language), triggered by gestures of the chat participants. We spent several days attempting to use different tools to achieve that result before Nick Puckett informed us that what we were trying to achieve was nearly impossible via PubNub. This left us with the rather unsatisfactory option of the user only being able to see themselves on screen. We nevertheless forged ahead with a modified concept that had these parameters:

  • Using the body and gestures for simple online communications
  • Creating a series of gestures with codified meanings for simple expressions that can be translated in 3 different languages

TECHNICAL ASPECT

gif1

poseNet Skeleton

Source: ml5.js (https://ml5js.org/reference/api-PoseNet/)

We leveraged the poseNet library, which is a machine learning model that allows for Real-Time Human Pose Estimation. It tracks 17 nodes on the body using the webcam and creates a skeleton that corresponds to human movements. By using the node information tracked by poseNet, we were able to define the relationship of different body parts to one another, use their relative distances and translate that into code.
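A minimal ml5.js PoseNet sketch of that approach, using an illustrative “hands up” gesture (both wrists above the nose); the confidence threshold and the displayed message are placeholders, not the gestures defined in the project:

```javascript
// Gesture detection from PoseNet keypoints: compare the relative positions
// of body parts (here: both wrists above the nose = "hands up").
let video, poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  const poseNet = ml5.poseNet(video, () => console.log("model ready"));
  poseNet.on("pose", (results) => (poses = results));
}

function draw() {
  image(video, 0, 0, width, height);
  if (poses.length === 0) return;
  const p = poses[0].pose;
  // keypoints carry a confidence score; ignore weak detections
  if (p.leftWrist.confidence > 0.3 && p.rightWrist.confidence > 0.3 && p.nose.confidence > 0.3) {
    const handsUp = p.leftWrist.y < p.nose.y && p.rightWrist.y < p.nose.y;
    if (handsUp) {
      fill(255);
      textSize(32);
      text("Hello! / Bonjour! / नमस्ते", 20, 40);  // message in the recipient's language
      // here the sketch would also publish the message via PubNub
    }
  }
}
```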

pic5

poseNet tracking nodes

Source: Medium (https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5)

TECHNICAL LIMITATIONS

As we continued to develop the code, we soon realised that poseNet tracking seemed rather unstable and at times finicky, as it was purely based on the pixel-information it received from the camera. The output fluctuated as it was based on several factors such as the lighting, contrast of clothing, background and the user’s distance from the screen. Consequently, it meant that the gesture would not always be captured if these external factors weren’t acting favourably. Dark clothing and skin seemed to be particularly problematic.

We originally had 10 gestures coded, but the challenge of integrating them all was that they sometimes interfered or overlapped with the parameters of one another. To avoid this, we developed 5 in the prototype. We had to be mindful of using parameters that were precise enough to not overlap with other gestures, yet broad enough to take into account the fact that different body types and people would perform these gestures in slightly different ways.

Since there are very limited resources dealing with p5.js and PubNub, we had difficulty finding code examples to help us resolve some of the coding problems we encountered. Most notable among these was managing to publish the graphic messages we designed (instead of text) so that they would be superimposed on the recipient’s interface. We thus only managed to display graphics on the sender’s interface and send text messages to the recipient.

CODE ON GITHUB

https://github.com/jigneshgharat/Silent-Signals

OUTCOME

  • Participants expressed that it was a unique and satisfying experience to engage in this form of embodied interaction using gestures.
  • The users appreciated the fact that we developed our own set of gestures to communicate instead of confining ourselves to existing sign languages.

NEXT STEPS

We would like to complete the experience by publishing image messages to recipients with corresponding translations rather than have the text interface.

REFERENCES

Oliveira, Joana. “Emoji, the New Global Language?” In Open Mind https://www.bbvaopenmind.com/en/technology/digital-world/emoji-the-new-global-language/. Accessed online, November 14, 2019

Evans, Vyvyan. Emoji Code: the Linguistics behind Smiley Faces and Scaredy Cats. Picador, 2018. 

https://us.macmillan.com/excerpt?isbn=9781250129062. Excerpt accessed online, November 15, 2019

Daugherty, Paul R., Olof Schybergson, and H. James Wilson. “Gestures Will Be the Interface for the Internet of Things.” In Harvard Business Review, 8 July 2015, https://hbr.org/2015/07/gestures-will-be-the-interface-for-the-internet-of-things. Accessed online November 12, 2019

Oved, Dan. “Real-time Human Pose Estimation in the Browser with TensorFlow.js” in Medium. 2018.

https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5. Accessed online November 10, 2019.

Fire Pit

Project Title: Fire Pit
Names of group of members: Nilam Sari, Neo Chen, and Lilian Leung

Project Description

This experiment is a continuation of our first project “Campfire” from Experiment 1 by Nilam and Lilian. The experiment expands upon the topic of conversation, moving from the importance of face-to-face communication over the use of mobile devices to exploring the power of anonymity and distance, with a platform that allows participants to type their negative feelings into a text box and literally throw their negative emotions into a virtual fire.

Project Progress

November 8

The team worked together to update the imagery of the flame, exploring different shapes and beginning to style the text box and images. One of the references we pulled was aferriss’s (2018) Particle Flame Fountain. Taking their source code, we revised the structure of the fire to connect it with user presence from PubNub.

1

(Source: https://editor.p5js.org/aferriss/sketches/SyTRx_bof)

Then we implemented it into our interface.

2

November 9

The team added additional effects, such as a new fire-crackle sound, to emphasize throwing a message into the fire when users send one. We attempted to create an array that would cycle through a set of prompts or questions for participants to answer. Rather than having the message stay on screen and abruptly disappear, we also added a timed fade to the text so that users can see their message slowly burn.

November 10

We worked on the sound effect for when messages are being “thrown” into the fire, and managed to combine two audio files into the single sound we wanted.

We changed the way text is submitted: users now swipe up with a finger rather than pressing a button. Along with the new fire-crackle sound, the fire temporarily grows bigger every time it receives an input, and the text box resets itself after every message is sent.
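A hedged sketch of the swipe-up submit: the idea is to record where a touch begins and, if it ends noticeably higher, send the message and clear the box. The helper name and the 60-pixel threshold are hypothetical, not taken from the project code.

```javascript
let inputBox;
let touchStartY = null;

function setup() {
  createCanvas(windowWidth, windowHeight);
  inputBox = createInput('');          // stands in for the styled text box
}

function touchStarted() {
  touchStartY = mouseY;                // p5 mirrors the first touch into mouseX/mouseY
  return false;                        // prevent default scrolling on mobile
}

function touchEnded() {
  if (touchStartY !== null && touchStartY - mouseY > 60) {   // swiped up ~60px
    const msg = inputBox.value();
    if (msg.length > 0) {
      sendToFire(msg);                 // hypothetical helper: publish, crackle, grow fire
      inputBox.value('');              // text box resets after every message
    }
  }
  touchStartY = null;
}

function sendToFire(msg) {
  console.log('thrown into the fire:', msg);   // stand-in for the real publish call
}
```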

We tried to add an array of questions/prompts, but at first could not display a randomly selected question. When random() was used, the following error message showed up:

“p5.js says: text() was expecting String|Object|Array|Number|Boolean for parameter #0 (zero-based index), received an empty variable instead. If not intentional, this is often a problem with scope: [https://p5js.org/examples/data-variable-scope.html] at about:srcdoc:188:3. [http://p5js.org/reference/#p5/text]”

random() now works by calling random(question); on the array itself rather than generating an index with let qIndex = random(0,3);. Now a random question appears every time the program is opened, and it is re-randomized every time the user submits an input.
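The fix in miniature (the prompt wording and array name below are placeholders): random(array) hands back one element, whereas random(0, 3) returns a float that is neither a valid index nor a string for text().

```javascript
const questions = [
  "What's weighing on you today?",
  "What would you like to let go of?",
  "What made you angry this week?"
];

let currentPrompt;

function pickPrompt() {
  // Broken: let qIndex = random(0, 3);  -> a float like 1.73; questions[qIndex]
  // is undefined, so text() receives an empty variable.
  // Working: pass the whole array and get back one string.
  currentPrompt = random(questions);
}

function setup() {
  createCanvas(400, 200);
  pickPrompt();            // a random prompt each time the sketch opens
}

function draw() {
  background(20);
  fill(255);
  text(currentPrompt, 20, 40);
}

// Calling pickPrompt() again inside the submit handler re-randomizes the prompt
// every time the user sends a message.
```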

November 11

We added the sound of a log of wood being thrown into the fire; every time a new input is sent, the sound plays. We also changed the CSS of the input box to match our general aesthetic, and added a prompt telling people how to submit their message instead of using ‘enter’.

3

November 12

We worked on changing the fire’s color when text input is sent and on adding a dissolve effect for the text.

4

What we have so far:

  1. The more people that join the conversation, the bigger the fire becomes
  2. The text fades out after a couple of seconds, leaving no trace of history
  3. The fire changes color and plays a sound when new input is thrown into it (sketched below)
  4. Swipe up to send messages to the server and into the fire
  5. Prompts are shown just above the text input
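The receiving side of points 2–4 might look roughly like the sketch below. It assumes an already-configured PubNub instance, a p5.sound file loaded in preload(), and the addMessage() fade helper sketched earlier, all under placeholder names.

```javascript
let flameBoost = 0;    // temporary growth added to the flame
let flameTint;         // current flame colour
let crackleSound;      // assumed to be loaded in preload() with loadSound()

function setup() {
  createCanvas(400, 600);
  flameTint = color(255, 120, 0);          // resting flame colour
  pubnub.addListener({
    message: function (event) {
      addMessage(event.message.text);      // text begins its timed fade
      flameBoost = 1.5;                    // fire temporarily grows bigger
      flameTint = color(140, 80, 255);     // colour shifts on new input
      if (crackleSound) crackleSound.play();
    }
  });
}

function draw() {
  flameBoost = lerp(flameBoost, 0, 0.02);                      // settle back to normal
  flameTint = lerpColor(flameTint, color(255, 120, 0), 0.02);  // drift back to orange
  // ...draw the flame using flameBoost and flameTint...
}
```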

To communicate our concept, we felt that producing a short film for our documentation would be most effective. To do this while recording content individually, we created a moodboard of the visual style we wanted to use for the film.

5

Our final step was to put the film together as both presentation and documentation.

Project Context

As noted above, this experiment continues “Campfire,” Nilam and Lilian’s project from Experiment 1. The original concept for that piece was a multi-screen experience focused on coming together and on the value of being present and having a face-to-face conversation. Participants were asked to place their phones on a foam pedestal that lit up all the screens to recreate a digital campfire.

img_7854_edited doc1

In switching the concept, we explored distance and presence online and the practice of speaking into the void when sharing content, for example on Twitter or Tumblr, where users can post their thoughts without expecting a response. Personal posts range from vague to incredibly revealing, and serve both as a way of venting and as a way of working through one’s own thoughts.

In her book Alone Together: Why We Expect More from Technology and Less from Each Other (2011), Sherry Turkle writes that technology is seductive because of what it offers human vulnerabilities: digital connections and networks provide the illusion of companionship without the demands of friendship. The platform is best used on a mobile device because of the intimacy of a personal handheld device.

6

Our idea was to have participants land on our main screen with a small fire. The size of the fire depends on the number of participants logged on, though the number of individuals is hidden to maintain anonymity. Participants can’t tell exactly how many users are online, but the size of the flame makes them aware there is a body of people present. Turkle (2011) writes about anonymity, compared to face-to-face confession, as offering an absence of criticism and evaluation. Participants can take their thoughts and throw them into the fire (by swiping or dragging the mouse) as both a cathartic and purposeful gesture.

6

Participants see their message on screen after submission and watch it burn in the flame. When they swipe or drag the screen, a wood-block and fire-crackle sound plays as the message is sent, metaphorically feeding the flame. The colour of the fire also changes on send.

Other participants on the platform see the submitted message temporarily on their screens; the change in the fire both signals that an interaction has happened and encourages others to submit and burn the thoughts troubling them as well. In “Write It Out: How Journaling Can Drastically Improve Your Physical, Emotional, and Mental Health” (2012), Rowe describes how journaling and writing out one’s emotions has been shown to reduce stress by allowing people to open up about their feelings.

7

Once the message is sent and burnt, no trace of it remains anywhere: no history is stored in the program or in PubNub. It is as if the thoughts the user wrote and threw into the digital fire become digital ashes. This is both symbolic and literal in terms of leaving no digital footprint. While PubNub allows developers to record users’ IP addresses and other information, we chose not to record any of it.
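One way to keep that promise in code, sketched under the assumption that messages are published with PubNub’s storeInHistory option turned off; we are not asserting this is the project’s exact configuration, and the channel and variable names are placeholders.

```javascript
// Publish so that PubNub retains nothing retrievable later via history(),
// and keep no local log of messages either.
pubnub.publish({
  channel: 'firepit',                 // placeholder channel name
  message: { text: userMessage },     // the thought being thrown into the fire
  storeInHistory: false               // nothing is saved on the server side
});
```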

Although not a direct inspiration, this work harks back to PostSecret, a community mail project created by Frank Warren in 2005 that allowed people to mail in confessions and secrets anonymously, which were then hosted online.

 7

fireresponse

 

Project Video https://vimeo.com/373640297

Project Code on Github https://github.com/nilampwns/experiment4

References:

aferriss. (2018, April). Particle Flame Fountain. Retrieved from https://editor.p5js.org/aferriss/sketches/SyTRx_bof. 

Audio Joiner, https://audio-joiner.com/ (online tool used to combine the sound files).

Rowe, S. (2012, March-April). Write it out: how journaling can drastically improve your physical, emotional, and mental health. Vibrant Life, 28(2), 16+. Retrieved from https://link.gale.com/apps/doc/A282427977/AONE?u=toro37158&sid=AONE&xid=9d14a49b

Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books, 2011. ProQuest Ebook Central, https://ebookcentral.proquest.com/lib/oculocad-ebooks/detail.action?docID=684281.

Yezi, Denise. “Maybe You Need a Tree Hole Too.” May 3, 2010, https://anadviceaday.wordpress.com/2010/05/03/maybe-you-need-a-tree-hole-too/

Experiment 3: Block[code]

Block[code] is an interactive experience that engages the user in altering and modifying on-screen visuals using tangible physical blocks. The visuals were created in Processing, exploring The Nature of Code approach to particle motion for creative coding.

Project by
Manisha Laroia

Mentors
Kate Hartman & Nick Puckett

Description
The experiment was designed around a tangible interaction: playing with the rectangular blocks, their selection and their arrangement, in turn alters the visual output, i.e. the organisation and motion of the rectangles on the screen. I conceptualised the project taking inspiration from physical coding, specifically Google’s Project Bloks, which uses the connection and joining order of physical blocks to generate a code output. The idea was to use physical blocks, rectangular tangible shapes, to influence the motion and appearance of the rectangles on the screen, from random rectangles, to coloured strips of rectangles travelling at a fixed velocity, to all the elements on the screen accelerating, giving users the experience of creating visual patterns.

img_20191104_180701-01

Inspiration
Project Bloks is a research collaboration between Google, Paulo Blikstein (Stanford University) and IDEO with the goal of creating an open hardware platform that researchers, developers and designers can use to build physical coding experiences. It is a system that developers can customise, reconfigure and rearrange to create all kinds of different tangible programming experiences. See the detailed project here.

projectbloks-580x358

Gene Sequencing Data
The visuals were largely inspired by gene (DNA) sequencing data from my brief stint in the world of biotechnology. I used to love the vertical motion and layering effect the sequencing data would create in the visual output, and I wanted to generate that using particle motion and code. I also wanted to draw out the commonality between genetic code and computer code and bring it into the visual experience.

gene-sequencing-data
DNA sequencing. Image sourced from NIST.gov.

The Mosaic Brush sketch on openprocessing.org by inseeing generates random pixels on the screen and uses the mouseDragged and keyPressed functions for pixel fill and visual reset. The project can be viewed here.

pixel-project

The Process
I started the project by making class objects and writing code for simpler visuals like fractal trees and single-particle motion. Taking the single particle as a reference, I experimented with location, velocity and acceleration to create a running stream of rectangle particles. I wanted the rectangles to leave a tail or trace as they moved vertically down the screen, for which I played with changing opacity over distance and with calling the background only in the setup function so as to get a trail behind each moving rectangle particle [1].
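The trail technique in miniature, shown here in p5.js for consistency with the rest of this documentation (the project itself used Processing, where background(), fill() and rect() behave analogously):

```javascript
let y = 0;

function setup() {
  createCanvas(300, 600);
  background(0);   // clear once here only; draw() never erases the frame
}

function draw() {
  // Because the background is not redrawn, each frame's rectangle stays on
  // screen, leaving a continuous trace behind the moving particle.
  // (A low-opacity background(0, 20) here instead would give a fading tail.)
  fill(0, 255, 180, 120);   // partial opacity so overlaps build up a stream
  noStroke();
  rect(width / 2 - 5, y, 10, 30);
  y = (y + 4) % height;     // constant downward velocity, wrapping at the bottom
}
```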

In the next iterations I created a class for these rectangle particles and gave it move and update functions, along with system velocity functions based on each particle’s location on the screen. Once I was able to create the desired effect in a single particle stream, I created multiple streams of particles with different colours and parameters for the multiple-stream effect.
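A condensed p5.js sketch of that structure, with placeholder colours and sizes; the commented applyForce() line hints at how a physical block could toggle the acceleration mode.

```javascript
// Each RectParticle keeps location, velocity and acceleration; a stream is
// just many particles sharing a colour and an x position.
class RectParticle {
  constructor(x, col) {
    this.pos = createVector(x, random(-height, 0));
    this.vel = createVector(0, random(2, 4));
    this.acc = createVector(0, 0);
    this.col = col;
  }
  applyForce(f) { this.acc.add(f); }            // e.g. the accelerate mode
  update() {
    this.vel.add(this.acc);
    this.pos.add(this.vel);
    this.acc.mult(0);
    if (this.pos.y > height) this.pos.y = -30;  // recycle at the top
  }
  show() {
    fill(this.col);
    noStroke();
    rect(this.pos.x, this.pos.y, 12, 28);
  }
}

let streams = [];

function setup() {
  createCanvas(400, 700);
  const colours = ['#39ff14', '#ff2079', '#00e5ff'];   // fluorescent, like dye colours
  for (let s = 0; s < colours.length; s++) {
    for (let i = 0; i < 20; i++) {
      streams.push(new RectParticle(60 + s * 120, colours[s]));
    }
  }
}

function draw() {
  background(0, 30);   // translucent wash leaves a short trail behind each stream
  for (const p of streams) {
    // a placed block could, for example, switch on an extra downward force:
    // p.applyForce(createVector(0, 0.05));
    p.update();
    p.show();
  }
}
```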

img_20191031_124817-01

img_20191031_171129-01

img_20191104_163708-01-01

In the basic model of a Tangible User Interface, the interface between people and digital information requires two key components: input and output, or control and representation. Controls enable users to manipulate the information, while representations are perceived with the human senses [2]. Coding is an on-screen experience, and I wanted participants to be able to use the physical tangible blocks as an interface to influence and build the visuals on the screen. The tangible blocks served as the controls for manipulating the information, and its representation was displayed as changing visuals on the screen.

the-tui

setup-examples

Choice of Aesthetics
The narrative tying physical code to biological code was the early inspiration I wanted to build the experiment around. The visuals were in particular inspired by gene-sequencing visuals of rectangular pixels running vertically in a stream. The tangible blocks were chosen to be rectangular too, with coloured stripes marked on them to relate each one to a coloured stream on the screen. A vertical screen was used in the setup to amplify the effect of the visuals moving vertically. The colours for the bands were selected based on the fluorescent colours commonly seen in the gene-sequencing inspiration images, which result from the use of fluorescent dyes.

mode-markings

img_20191105_114412-01

img_20191105_114451-01

Challenges & Learnings
(i) One of the key challenges in the experiment was to make a seamless tangible interface, such that the wired setup doesn’t interfere with the user interaction. Since it was an Arduino-based setup, getting rid of the wires was not possible, but they could have been hidden in a more discreet physical setup.
(ii) Ensuring the visuals matched the desired effect was also a challenge, as I was programming with particle systems for the first time. I managed this by creating a single particle with the right parameters and then applying it to more elements in the visual.
(iii) Given more time, I would have created more functions, like the accelerate function, that could alter the visuals by slowing the frame rate, reducing the width, or changing the shape itself.
(iv) The experiment was more exploratory, probing the possibilities of this technology and software, and left room for discussion about what it could become rather than being conclusive. Questions that came up in the presentation included: How big do you imagine the vertical screen? How do you see these tangibles becoming more playful and seamless?

img_20191105_001118

Github Link for the Project code

Arduino Circuits
The circuit for this setup was fairly simple, with a pull-up resistor circuit and DIY switches using aluminium foil.

arduino-circuit-1

arduino-circuit-2

References
[1] Shiffman, Daniel. The Nature of Code. California, 2012. Digital.

[2] Hiroshi Ishii. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international conference on Tangible and embedded interaction (TEI ’08). ACM, New York, NY, USA, xv-xxv. DOI=http://dx.doi.org/10.1145/1347390.1347392