Experiment 5 Proposal – OK to Touch!?

Project Title
OK to Touch !?

Team members
Priya Bandodkar & Manisha Laroia

Concept
OK to Touch!? is an interactive experience that makes a hidden tracking technology, fingerprinting, visible to users through interactions with everyday desk objects. The concept uses tangible interactions with physical objects to convey how users’ private data could be tracked without their consent in the near future.

Project Description
Peering into the digital future, OK to Touch!? is an interactive experience that makes the hidden tracking technology of device fingerprinting, currently used in browsers and apps and soon moving to smart devices, visible to users. Through simple interactions with everyday desk objects, we will record the physical touch-points IRL, tag them, and project them on a screen in the backdrop to create an apparent visual of the digital fingerprinting data that we as users unknowingly share with each interaction we make.

Physical Installation
The setup is a tabletop display of desk objects with a large screen or projection wall in the backdrop. As users interact with the objects, a visual representation of the fingerprinting data is added to the backdrop, building a visual map and signalling to users that a simple touch or click on a smart object can be tracked and used to profile them. It is an attempt to raise awareness of the privacy concerns associated with the Internet in today’s knowledge economy.


Setup space:
A 4½ feet x 2½ feet tabletop, set along the wall. The wall serves as the projection screen, or a large TV screen could be used.

Parts & Materials
Desk objects will be selected from the following list of items (subject to prototyping results): a table from the DF studio and a chair, a book, an ink stamp, an iPad or mobile phone, headphones, an alarm clock, a calculator, and a box. These will either be sourced in their original form or fabricated, as indicated below:

Sourced in original form: book, iPad or mobile phone, headphones, calculator
Fabricated: alarm clock (laser-cut wood), box (laser-cut wood)

Technology
Software and hardware to be used:
p5.js, Processing, Arduino, switches, digitally fabricated objects, and DIY switches.

Each object maps an interaction and a sensor input to an on-screen output:

Book, with a bookmark or pen set to a page
Interaction: open & close | Sensor (input): DIY switch
Output: a graphic circle or fingerprint icon is projected on the wall behind, or fades in on the screen behind the display, with supporting fingerprinting text, and stays there the whole time. //particle.system
navigator.userAgent
“Book/5.0 (OS NT 10.0; v64; rv:70.0) User/20177101;Toronto; IP 10.4.3.46”

Ink stamp, with a paper next to it bearing a few stamp patterns
Interaction: stamp on paper | Sensor (input): Velostat/pressure sensor
Output: [Graphic/Icon]
navigator.userAgent
“Stamp/5.0 (OS NT 10.0; v64; rv:70.0) User/20177101;Toronto; IP 10.4.3.46”

Mobile phone or iPad, with text and icons to click (a privacy policy)
Interaction: multitouch | Sensor (input): p5.js multitouch to register touches on the screen
Output: [Graphic/Icon]
navigator.userAgent
“Phone/5.0 (OS NT 10.0; v64; rv:70.0) User/20177101;Toronto; IP 10.4.3.46”

Headphones, hung on a metal stand
Interaction: removing them from the stand disconnects the switch and plays a note on fingerprinting/music | Sensor (input): DIY switch
Output: [Graphic/Icon]
navigator.userAgent
“Headphones/5.0 (OS NT 10.0; v64; rv:70.0) User/20177101;Toronto; IP 10.4.3.46”

Alarm clock
Interaction: open & close | Sensor (input): DIY switch
Output: [Graphic/Icon]
navigator.userAgent
“Alarm Clock/5.0 (OS NT 10.0; v64; rv:70.0) User/20177101;Toronto; IP 10.4.3.46”

Calculator, with a page displaying two numbers to be added
Interaction: press the calculator buttons | Sensor (input): Velostat/pressure sensor
Output: [Graphic/Icon]
navigator.userAgent
“Calculator/5.0 (OS NT 10.0; v64; rv:70.0) User/20177101;Toronto; IP 10.4.3.46”
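The userAgent-style output strings above all follow one template, so on the display side they could be assembled by a small helper; a minimal sketch in plain JavaScript (the `buildFingerprint` helper and its default field values are illustrative placeholders standing in for the p5.js display code, not real tracking data):

```javascript
// Sketch: compose the fingerprint line shown when an object is touched.
// Field defaults mirror the sample strings above; all are illustrative.
function buildFingerprint(objectName, { os = "OS NT 10.0", version = "v64",
                                        rev = "rv:70.0", user = "User/20177101",
                                        city = "Toronto", ip = "10.4.3.46" } = {}) {
  return `${objectName}/5.0 (${os}; ${version}; ${rev}) ${user};${city}; IP ${ip}`;
}

// Each physical interaction would call this before drawing the icon:
const line = buildFingerprint("Book");
```

In the installation, the object name would come from whichever sensor fired, so one template serves all six objects.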

 

Workplan

23rd November – 25th November: Material procurement and quick prototyping to select the final four objects
26th November – 28th November: Writing the code and digital fabrication
28th November – 30th November: Testing and bug-fixing
1st December – 2nd December: Installation and final touches
3rd December – 4th December: Presentation

References
Briz, Nick. “This Is Your Digital Fingerprint.” Internet Citizen, 26 July 2018, https://blog.mozilla.org/internetcitizen/2018/07/26/this-is-your-digital-fingerprint/.

Szymielewicz, Katarzyna, and Bill Budington. “The GDPR and Browser Fingerprinting: How It Changes the Game for the Sneakiest Web Trackers.” Electronic Frontier Foundation, 21 June 2018, https://www.eff.org/deeplinks/2018/06/gdpr-and-browser-fingerprinting-how-it-changes-game-sneakiest-web-trackers.

“Talk to Me: Immaterials: Ghost in the Field.” MoMA, https://www.moma.org/interactives/exhibitions/2011/talktome/objects/145463/.

Chen, Brian X. “’Fingerprinting’ to Track Us Online Is on the Rise. Here’s What to Do.” The New York Times, The New York Times, 3 July 2019, https://www.nytimes.com/2019/07/03/technology/personaltech/fingerprinting-track-devices-what-to-do.html.

 

Experiment 4 – You Are Not Alone!

You Are Not Alone! is a networking experiment designed to create an experience of digital presence and interaction in a shared online space, with illustrated on-screen windows based on the metaphor of ‘each window as a window into another world’.
An experiment with p5.js and networking using PubNub, a global real-time data stream network.

Team
Sananda Dutta, Manisha Laroia, Arshia Sobhan & Rajat Kumar

Mentors
Kate Hartman & Nick Puckett

Description
Starting with the brief for the experiment, i.e. online networking across different physical locations, the two key aspects of networking that struck us were:
– the knowledge of each other’s presence, i.e. someone else is also present here;
– the knowledge of common shared information, i.e. someone else is also seeing what I am seeing;
and how crucial these are to creating the networking experience.
You Are Not Alone! is a networking experiment designed to create an experience of digital presence and interaction in a shared online space. The community in the shared space can light up windows with multiple clicks and have their ghost avatar travel through the windows that others light up. With prolonged inactivity in a pixel area, the lit windows slowly get darker and turn off. The visual metaphor was taken from a neighbourhood building: people know of each other’s presence when they see lit windows and of absence when they see unlit windows. In that space, people don’t communicate directly but know of each other’s presence. The individual avatar moving through this space was meant to create an interaction within the present community. The windows were metaphors for windows into other people’s worlds as they entered the same space.

Inspiration


Paper Planes
The concept of this cross-location virtual presence experiment was to create and fold your own paper plane, stamp it with your location, and “throw” it back into the world where it can be caught by someone on the other side of the world. Paper Planes started as a simple thought – “What if you could throw a paper plane from one screen to another?”
See the detailed project here.


Dear Data
Dear Data is a year-long, analog data drawing project by Giorgia Lupi and Stefanie Posavec, two award-winning information designers living on different sides of the Atlantic. Each week, and for a year, they collected and measured a particular type of data about their lives, used this data to make a drawing on a postcard-sized sheet of paper, and then dropped the postcard in an English “postbox” (Stefanie) or an American “mailbox” (Giorgia)! By collecting and hand drawing their personal data and sending it to each other in the form of postcards, they became friends. See the detailed project here.

The Process
We started the project with a brainstorm, building on the early idea of networked collaborative creation. We imagined an experience where each person in the network would draw a doodle and, through networking, add it to a collaborative running animation. With an early prototype we realized the idea had technical challenges, and moved to another idea based on shared frames and online presence to co-create a visual. A great amount of the delight in the Paper Planes networked experience comes from the pleasant animation and from knowing that other people are present there and carrying out activities.


We wanted to replicate the same experience in our experiment. We brainstormed several ideas, like drawing windows for each user and having text added to them, windows appearing as people join in, or sharing common information (like the number of trees or windows around you) and representing this data visually to create a shared visual. We all liked the idea of windows lighting up with presence and built around it: light on meaning presence, light off meaning absence or inactivity.
We created the networked digital space in two stages:
[1] One with the windows lighting up on clicking and fading when not clicked, with the personal ghost/avatar travelling through the windows
[2] The other with the shared cursor, where each user in the networked space could see each other’s avatars and move through the shared lit windows.


We built the code in stages: first the basic light-on/light-off colour change on click, followed by the array of windows, then the networked transfer of user mouse clicks and positions, and finally the avatars for the users. Our biggest challenge was the networking: making the avatar appear on click after the window is lit, and having everyone share the same networked space.
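The window-brightness model behind those stages can be sketched in plain JavaScript, separate from the p5.js draw loop and the PubNub messaging (the names and structure here are ours, not the exact project code; the three-click cap and inactivity dimming follow the behaviour described in this write-up):

```javascript
// Sketch of the window-brightness model: each click brightens a window,
// capping at full brightness after three clicks; inactivity dims it again.
const MAX_CLICKS = 3;

function makeWindowGrid(count) {
  return Array.from({ length: count }, () => ({ clicks: 0 }));
}

function clickWindow(grid, i) {
  grid[i].clicks = Math.min(grid[i].clicks + 1, MAX_CLICKS);
}

function dimAll(grid) {            // called on a timer to model inactivity
  for (const w of grid) w.clicks = Math.max(w.clicks - 1, 0);
}

function brightness(w) {           // 0..255 alpha for the yellow fill
  return Math.round((w.clicks / MAX_CLICKS) * 255);
}
```

In the networked version, `clickWindow` would be driven both by local clicks and by click messages arriving from other users, so everyone sees the same grid state.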


The shared space loads with a black screen. Windows light up as one clicks on the screen; each window gets brighter with each click, reaching maximum brightness at three clicks. Each user in the space can see other windows light up and so knows of the presence of other users. Each person also sees the windows in the same grid alignment as the rest of the community. Once a window is lit, users can see their avatar in it and can then navigate their avatar across the screen through lit windows, following their cursor movement.


In version 1 of the experience, users cannot see each other’s avatars moving but can see windows light up as others interact with the screen. The avatar was chosen to be a silhouette so that it would visually appear in the lit windows against the black backdrop, and the eyes popping in would create an element of surprise. In version 2, a window appears as a user enters the digital space, and each user has an avatar that can travel through the others’ windows. When the avatar is in the user’s own window it is black; the ghost colour is lighter when it is in another user’s window. Unfortunately, we were unable to show both interactions in the same program, with network messages for cursor clicks and cursor positions at the same time.
Presence was indicated with a window, a signifier of the window into another person’s world, like a global building or neighbourhood. The delight comes from knowing that people are in the same space as you, at the same time, and that there is something in common.


Choice of Aesthetics
The page loads to a black screen. We tapped into users’ behaviour of clicking randomly on a blank screen to find out what comes up, so we made the windows light up on each click, getting brighter with every click. Turning a window’s light on and off shows presence and also taps into the fidget-clicking impulse, so that visitors to the digital space linger and interact with each other.
Another feature was to allow each user to see the other users’ cursors as they navigate through the lit windows. We kept the visuals to a minimum: simple rectangles, black for dark windows and yellow for bright ones. While coding, we had to define the click area that would register clicks on the network to light up the windows.
The pixel area defined varied from screen to screen, and a single click would sometimes light one, two or three rectangles. We wanted to light one window at a time, but we let the erratic click behaviour be, as it added a visual randomness to the windows lighting up.
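One common way to make a click light exactly one window, independent of screen size, is to snap the cursor position to a grid cell; a minimal sketch (the grid dimensions and function name are illustrative, not the ones we used):

```javascript
// Sketch: convert a mouse position into a single grid-cell index, so a
// click maps to exactly one window regardless of the screen's pixel size.
function windowIndexAt(x, y, screenW, screenH, cols, rows) {
  const col = Math.min(Math.floor(x / (screenW / cols)), cols - 1);
  const row = Math.min(Math.floor(y / (screenH / rows)), rows - 1);
  return row * cols + col; // index into a flat array of windows
}
```

Because the cell size is derived from the actual screen dimensions, two users with different screens still agree on which window was clicked when the index is sent over the network.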

Challenges & Learnings

  1. Networking and sending messages for more than one parameter (the shared cursor clicks and the mouse positions) was challenging, and we could not send the click and the mouse-hover messages at the same time. Thus we got the click-to-light-up code working but were unable to share the cursor information, so users could not see each other’s avatars on the network.


 

  2. Another challenge was having the avatar appear in the window on the click after the window lit up. We were able to get an avatar to appear in each window of the array, and a common avatar to appear on click in one window; the difficulty was managing the clicks and the image position at the same time. We eventually wrote a shared-cursor networking version, but it interfered with the click functionality that dims and lights the windows with increasing opacity.
  3. Our inexperience with code and networking was also a challenge, but we made visual decisions to improve the user experience, like choosing a black icon that looked like a silhouette on the black background, so we did not have to create it on click; it would simply become visible when a window was lit.
  4. Given more time, we would work on integrating the two programs into one networked space where the windows follow the grid, all users see the same visual output, the windows light up with increasing opacity, and everyone can see all the avatars floating through the windows.

Github Link for the Project code is here.

References
[1] Shiffman, Daniel. The Nature of Code. California, 2012. Digital.

[2] Shiffman, Daniel. The Coding Train. 7.8: Objects and Images – p5.js Tutorial. 2015. Digital.

[3] Active Theory. Google I/O 2016: Paper Planes. 2016. Digital.
https://medium.com/active-theory/paper-planes-6b0008c56c17

Experiment 3: Block[code]

Block[code] is an interactive experience that engages the user in altering/modifying on-screen visuals using tangible physical blocks. The visuals were created in Processing, in an attempt to explore The Nature of Code methodology of particle motion for creative coding.

Project by
Manisha Laroia

Mentors
Kate Hartman & Nick Puckett

Description
The experiment was designed around a tangible interaction, i.e. the play with the rectangular blocks, their selection and their arrangement, which would in turn alter the visual output, i.e. the organisation and the motion of the rectangles on the screen. I conceptualised the project taking inspiration from physical coding, in particular Google’s Project Bloks, which uses the connection and joining order of physical blocks to generate a code output. The idea was to use physical blocks, i.e. rectangular tangible shapes, to influence the motion and appearance of the rectangles on the screen, from random rectangles, to coloured strips of rectangles travelling at a fixed velocity, to all the elements on the screen accelerating, giving users the experience of creating visual patterns.


Inspiration
Project Bloks is a research collaboration between Google, Paulo Blikstein (Stanford University) and IDEO with the goal of creating an open hardware platform that researchers, developers and designers can use to build physical coding experiences. It is a system that developers can customise, reconfigure and rearrange to create all kinds of different tangible programming experiences. See the detailed project here.


Gene Sequencing Data
The visuals were largely inspired by gene or DNA sequencing data, from my brief stint in the world of biotechnology. I used to love the vertical motion and the layering effect the sequencing data would create in the visual output, and I wanted to generate that using particle motion and code. I was also inspired to draw out the commonality between genetic code and computer code and bring it into the visual experience.

DNA sequencing. Image sourced from NIST.gov.

Mosaic Brush, a sketch on openprocessing.org by inseeing, generates random pixels on the screen and uses the mouseDragged and keyPressed functions for pixel fill and visual reset. The project can be viewed here.


The Process
I started the project by making class objects and writing code for simpler visuals like fractal trees and single-particle motion. Taking reference from single-particle motion, I experimented with location, velocity and acceleration to create a running stream of rectangle particles. I wanted the rectangles to leave a tail or trace as they moved vertically down the screen, for which I played with opacity changing with distance, and with calling the background in the setup function rather than the draw loop so as to get a stream or trace of the moving rectangle particle [1].

In the next iterations I created a class of these rectangle particles and gave it a move function, an update function, and system velocity functions based on location on the screen. Once I was able to create the desired effect in a single particle stream, I created multiple streams of particles with different colours and different parameters for the multi-stream effect.
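The particle logic described above can be sketched outside Processing as well; a minimal JavaScript version of one rectangle particle (the class and field names are ours, standing in for the Processing class):

```javascript
// Sketch of one rectangle particle in a vertical stream: position changes
// by velocity, velocity by acceleration, and the particle re-enters at the
// top when it leaves the bottom of the screen.
class RectParticle {
  constructor(x, y, vy = 2, ay = 0) {
    this.x = x; this.y = y;   // position
    this.vy = vy;             // vertical velocity
    this.ay = ay;             // vertical acceleration
  }
  update(screenH) {
    this.vy += this.ay;              // accelerate
    this.y += this.vy;               // move
    if (this.y > screenH) this.y = 0; // wrap to the top of the stream
  }
}
```

A stream is then just an array of these, updated every frame, with colour and opacity handled in the draw code.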


The basic model of a Tangible User Interface is that the interface between people and digital information requires two key components: input and output, or control and representation. Controls enable users to manipulate the information, while representations are perceived with the human senses [2]. Coding is an on-screen experience, and I wanted the participant to be able to use the physical tangible blocks as an interface to influence the visuals on the screen and to build them. The tangible blocks served as the controls to manipulate the information, and its representation was displayed as changing visuals on the screen.


Choice of Aesthetics
The narrative tying physical code to biological code was the early inspiration I wanted to build the experiment around. The visuals were inspired in particular by gene sequencing visuals of rectangular pixels running vertically in a stream. The tangible blocks were chosen to be rectangular too, with coloured stripes marked on them to relate each one to a coloured stream on the screen. The vertical screen in the setup amplified the effect of the visuals moving vertically. The colours for the bands were selected based on the fluorescent colours commonly seen in the gene sequencing inspiration images, due to the use of fluorescent dyes.


Challenges & Learnings
(i) One of the key challenges in the experiment was to make a seamless tangible interface, such that the wired setup doesn’t interfere with the user interaction. Since it was an Arduino-based setup, getting rid of the wires was not possible, but they could have been hidden in a more discreet physical setup.
(ii) Ensuring the visuals came out with the desired effect was also a challenge, as I was programming with particle systems for the first time. I managed this by creating a single particle with the parameters and then applying it to more elements in the visual.
(iii) Given more time, I would have created more functions, like the accelerate function, that could alter the visuals by slowing the frame rate, reducing the width or changing the shape itself.
(iv) The experiment was exploratory in terms of the possibilities of this technology and software, and left room for discussion about what it could be rather than being conclusive. Questions that came up in the presentation were: How big do you imagine the vertical screen? How do you see these tangibles being more playful and seamless?


Github Link for the Project code

Arduino Circuits
The circuit for this setup was fairly simple: a pull-up resistor circuit and DIY switches made with aluminium foil.


References
[1] Shiffman, Daniel. The Nature of Code. California, 2012. Digital.

[2] Hiroshi Ishii. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international conference on Tangible and embedded interaction (TEI ’08). ACM, New York, NY, USA, xv-xxv. DOI=http://dx.doi.org/10.1145/1347390.1347392

Experiment 2: Forget Me Not

Exploration in Arduino & Proxemics.
An interactive plant that senses the presence of people in its proximity and alters its behaviour according to their distance from it.

Team
Manisha Laroia, Nadine Valcin & Rajat Kumar

Mentors
Kate Hartman & Nick Puckett


Description
We started ideating about the project on proxemics with the intent of creating an experience of delight or surprise for the people who interacted with our artefact from varying proximities. We started exploring everyday objects, notably those you would find on a desk (books, lamps, plants), and how they could be activated with servos and LED lights, with those activities transformed by proximity data from a distance sensor. We wanted the effect to defy the behaviour normally expected from the object, and to denote some form of refusal to engage when the user came too close. In that way it anthropomorphized the objects and gave them a form of agency.

We explored the idea of a book, a plant or a lamp that would move in unexpected ways. The size of the objects and the limitations of the servo in terms of strength and range of motion posed some challenges. We also wanted the object to look realistic enough not to immediately draw attention to itself or look suspicious, which would help build up to the moment of surprise. We finally narrowed it down to an artificial plant that in its idle state sways at a slow pace, creating a sense of its presence, but shows altered behaviour whenever people come within threshold and near proximity of it.


Inspiration
Don Norman, in his book The Design of Everyday Things, talks about design being concerned with how things work, how they are controlled, and the nature of the interaction between people and technology. When done well, the results are brilliant, pleasurable products. When done badly, the products are unusable, leading to great frustration and irritation. Or they might be usable, but force us to behave the way the product wishes rather than as we wish. He adds that experience is critical, for it determines how fondly people remember their interactions. When we interact with a product, we need to figure out how to work it. This means discovering what it does, how it works, and what operations are possible (Norman).

An essential part of this interaction is the affordance an object portrays and the feedback it returns for a usage action by the user. Altering the expected discoverability, affordances and signifiers makes the experience stranger and surprising. With the rise of ubiquitous computing, and more and more products around us turning into smart objects, it is interesting to see how people’s behaviour will change with changed affordances and feedback from everyday objects in their environment, speculating on behaviours and creating discursive experiences. Making an object not behave as it should alters the basic conceptual model of usage and creates an element of surprise in the experience. We felt that if we could alter these affordances and feedback in an everyday object based on proximity, it could add an element of surprise and open a conversation about the anthropomorphizing of objects.

The following art installation projects that all use Arduino boards to control a number of servos provided inspiration for our project:


Surface X by Picaroon, an installation with 35 open umbrellas that close when approached by humans. See details here.


In X Cube, by Amman-based design firm Uraiqat Architects, consists of 4 faces of 3 m x 3 m, each formed by 34 triangular mirrors (individually controlled by their own servo). All mirrors are in constant motion, perpetually changing the reflection users see of themselves. See details here.


Elisa Fabris Valenti’s Don’t Look at Me, I’m Shy is an interactive installation where the felt flowers in a mural turn away from the visitors in the gallery when they come in close proximity. See details here.


Dunne & Raby’s Technological Dreams Series: No.1, Robots, 2007 is a series of objects that are meant to spark a discussion about how we’d like our robots to relate to us: subservient, intimate, dependent, equal? See details here.

The Process
After exploring the various options, we settled on creating a plant that would become agitated as people approached. We also wanted to add another element of surprise by having a butterfly emerge from behind the plant when people came very close. We also had LEDs that would serve as signifiers along with the movement of the plant.

Prototype 1
We started the process by attaching a cardboard piece to the servo motor and taping two wire stems with a plastic flower vertically to it, to test the motor activity. We wrote the code for the Arduino and used the sensor, motor and plant prototype to test the different motions we desired at different threshold distances.


The proxemics theory developed by anthropologist Edward Hall examines how individuals interpret spatial relationships. He defined four distances: intimate (0 to 0.5 m), personal (0.5 to 1 m), social (1 to 4 m) and public (4 m or more). The sensors posed some difficulty in getting clean data, especially in the intimate and personal distance ranges. We settled on three ranges: combining intimate and personal into one range of less than a metre (< 1000 mm), a social range of 1000–3000 mm, and a public range beyond 3000 mm.

The plant has an idle state at more than 4 metres, where it gently sways under yellow LEDs; an activated state, where the yellow lights blink and the movement is more noticeable; and an agitated state at less than a metre, where its motion is rapid and jerky and the red lights blink quickly. Once we had configured the threshold distances for which the motors could give the desired motion, we moved to a refined version of the prototype.
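The mapping from a distance reading to the plant's state is a simple threshold check; a sketch of that logic (in JavaScript for illustration, though the actual logic ran on the Arduino; the function name is ours, and the thresholds follow the ranges given above):

```javascript
// Sketch: classify a distance reading (in mm) into the plant's three states.
// Thresholds follow the ranges above: < 1000 mm agitated, 1000–3000 mm
// activated, beyond that idle.
function plantState(distanceMm) {
  if (distanceMm < 1000) return "agitated";  // rapid, jerky motion; red LEDs blink
  if (distanceMm < 3000) return "activated"; // noticeable motion; yellow LEDs blink
  return "idle";                             // gentle sway; steady yellow LEDs
}
```

Keeping the zones in one function like this makes it easy to retune the thresholds as sensor testing demands.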

Prototype 2
We made a wooden box using the digital fabrication lab and purchased the elements to make the plant foliage and flowers. The plant elements were created from wire stems attached to a wooden base secured to the servos. The foliage was built from a felt placemat (bought from Dollarama) cut into leaf-like shapes and attached to the wire stems. Once we confined the setup in a wooden box, like a pot holding a plant, a new challenge arose in terms of space constraints. Each time the plant moved, the artificial foliage would hit the side walls of the box, interrupting the free motion of the motor. We had to continuously trim the plant and ensure the weight was concentrated in the centre to maintain a constant torque.


The butterfly that we had wanted to integrate was attached to a different servo with a wire, but we never managed to get the desired effect as we wanted the rigging of the insect to be invisible for its appearance to elicit surprise. We therefore abandoned that idea but would like to revisit it given more time.


At this stage our circuit prototyping board, the sensors and the LEDs were not fully integrated into a single setup. The next step was to combine everything into a single unit, discreetly hiding the electronics and having a single cord power the setup.


Final Setup
The final setup was designed such that the small plant box was placed within a larger plant box that housed all the wires, circuits and sensors. As we were using individual LEDs, the connected LEDs could not fit in the small plant box, where they would hamper the motion of the plant, and were instead integrated into the larger outer box, with artificial foliage hiding the circuits.


Context-aware computing relates to this: some kind of context-aware sensing method [1] provides devices with knowledge about the situation around them, so they can infer where they are in terms of social action and act accordingly. Proxemic theories describe many different factors and variables that people use to perceive and adjust their spatial relationships with others, and the same could be used to iterate on people’s relations to devices.


 

Revealing interaction possibilities: We achieved this by giving the plant an idle-state slow sway. When a person entered the sensing proximity of the plant, the yellow LEDs would light up, as if inviting the person.

Reacting to the presence and approach of people: As the person entered the Threshold 1 circle of proximity, the yellow LEDs would blink and the plant would start rotating, as if scanning its context to detect the individual who had entered its proximity radius.

From awareness to interaction: As the person continued to walk closer, curious to touch or see the plant up close, the movement of the plant would get faster. If the person entered the Threshold 2 distance, the red LEDs would light up and the plant would move violently, indicating a reluctance to close interaction.

Spatial visualizations of ubicomp environments: Three threshold distances were defined in the code to offer the discrete distance zones for different interactions, similar to how people create boundaries around themselves through body language and behaviour.



Challenges & Learnings

  • Tuning the sensor data was a key aspect of the project, allowing us to define the proximity circles. To get more stable values we would let the sensor run for some time, ensuring no obstacle was in its field, until the values stabilized, and only then connect the motor; otherwise the motor would take the erratic values and produce random motions instead of the programmed ones.
  • Another challenge was discovering the most suitable sensor positions and placement of the setup in the room with respect to the audience that would see and interact with it. It required us to keep testing in different contexts and with varying numbers of people in proximity.
  • Apart from the challenges with the sensors, we encountered other software and hardware interfacing issues. Programming the red and yellow LEDs (four of each colour) presented a challenge in switching from one set to the other. They were initially programmed using arrays, but getting the yellow lights to shut off once the red lights were triggered proved difficult, and the lights had to be programmed individually to get the effect we desired. In a second phase, we simplified things by soldering all the lights of the same colour in parallel and running them from one pin on the Arduino.
  • The different levels of motion of the plant were achieved through a long process of trial and error. The agitated state posed an extra challenge in terms of vibrations: the rapid movements of the plant produced vibrations that would impact the box containing it while also dislodging the lights attached to the container holding the plant.
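A common alternative to waiting for the raw values to stabilise is to smooth the readings in software before they reach the motor; a hypothetical sketch of that approach (the helper name and window size are ours, shown in JavaScript for illustration rather than Arduino code):

```javascript
// Sketch: smooth noisy distance readings with a fixed-size moving average,
// so a single spurious value doesn't jerk the servo.
function makeSmoother(size) {
  const buf = [];
  return (reading) => {
    buf.push(reading);
    if (buf.length > size) buf.shift();       // keep only the last `size` readings
    return buf.reduce((a, b) => a + b, 0) / buf.length;
  };
}
```

Each raw sensor value is passed through the smoother, and the averaged result drives the threshold logic, trading a little responsiveness for much steadier motion.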

GitHub link for the project code

Arduino Circuits
We used two Arduinos, one to control the servo motor that moved the plant and the other to control the LEDs.

motor-circuit

led-circuit

References
[1] Marquardt, N. and S. Greenberg. Informing the Design of Proxemic Interactions. Research Report 2011100618, University of Calgary, Department of Computer Science, 2011.
[2] Norman, Don. The Design of Everyday Things. New York: Basic Books, 2013. Print.

Experiment 1: Wake them up!

Wake them up! is an interactive experience with a family of Sleepy Monsters displayed across multiple screens that wake up through pre-programmed, randomly assorted mobile user-interactions.

Team
Manisha Laroia & Rittika Basu

Project Mentors
Kate Hartman & Nick Puckett

Description
Wake them up! is an interactive experience with a family of Sleepy Monsters displayed across multiple screens that wake up through pre-programmed, randomly assorted mobile user-interactions. The experience consisted of many virtual ‘Sleepy Monsters’, and the participant’s task was to ‘Wake them up’ by interacting with them. The experiment was an attempt to assign personalities and emotions to smartphones and create delight through the interactions.

THE MULTISCREEN EXPERIMENT EXPERIENCE
The participants were organized into four groups, each assigned a QR code. They had to scan it, wake up the monster, keep it awake and move to the next table to wake up the next one. Eventually they would have woken up all four monsters and collected them all.

For the multiscreen aspect of the experience, we created four Sleepy Monster applications, each with its own colour, hint and wake-up gesture. Each Sleepy Monster was programmed to pick a colour from a predefined array in its setup, so that when the code was loaded onto a mobile phone, each of the 20 screens would show a differently coloured monster. For each monster we added an indicative response, a pre-programmed reaction to a particular gesture, to tell the user whether or not that gesture was the one that worked for this Monster so they could try a different one. Participants tried various smartphone interactions, which involved speaking to, shaking, running with and screen-tapping the phones, and the monsters responded differently to different inputs. There were four versions of the monster for mobile devices, and one was created for the laptop as a bonus.
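The per-screen colour pick described above is a one-liner in any language; here is a plain-JavaScript sketch (the project used p5.js, and the palette below is an illustrative assumption, not the project's actual colours):

```javascript
// Each monster picks one colour from a predefined array when the sketch
// loads, so twenty phones end up showing differently coloured monsters.
// The palette is an illustrative assumption.
const PALETTE = ['#e74c3c', '#8e44ad', '#27ae60', '#f39c12', '#2980b9'];

function pickMonsterColour(palette = PALETTE) {
  return palette[Math.floor(Math.random() * palette.length)];
}
```

In p5.js the same pick would typically happen once inside `setup()`, so the colour stays fixed for the lifetime of that screen's monster.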

Sleepy Monster 1
Response: Angry face with changing red shades of the background
Wake up gesture: Rotation in the X-axis

Sleepy Monster 2
Response: Eyes open a bit when touch detected
Wake up gesture: 4 finger Multitouch

Sleepy Monster 3
Response: Noo#! text displays on Touch
Wake up gesture: Tap in a specific pixel area (top left corner)

Sleepy Monster 4
Response: zzz text displays on Touch
Wake up gesture: Acceleration in X-axis causes eyes to open

*Sleepy Monster 5
We also created a web application as an experiment with keyboard input, using it to interact with the virtual Sleepy Monster. Pressing the ‘O’ key would wake up the monster.
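Across all five monsters, the wake-up logic amounts to a simple gesture dispatch: each monster wakes only to its own gesture and gives its indicative response to anything else. A minimal plain-JavaScript sketch (the gesture names and the 'eyes open!' string are illustrative assumptions; the indicative responses follow the list above):

```javascript
// Each monster wakes only to its own gesture; any other gesture triggers
// its indicative response. Gesture identifiers are illustrative.
const MONSTERS = {
  monster1: { wakeGesture: 'rotateX',     indicative: 'angry face, red background' },
  monster2: { wakeGesture: 'multitouch4', indicative: 'eyes open a bit' },
  monster3: { wakeGesture: 'tapTopLeft',  indicative: 'Noo#!' },
  monster4: { wakeGesture: 'accelX',      indicative: 'zzz' },
  monster5: { wakeGesture: 'keyO',        indicative: 'snores on' },
};

function react(monsterId, gesture) {
  const monster = MONSTERS[monsterId];
  return gesture === monster.wakeGesture
    ? { awake: true,  response: 'eyes open!' }
    : { awake: false, response: monster.indicative };
}
```

A table like this keeps the per-monster differences in data, so one shared sketch can serve all the screens.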

The four Sleepy Monsters and their inbuilt interactions:

The participant experience:

GitHub link for the codes

single-phone-interaction

20190927_124509-1

img_20190927_151122

laptop-app

Project Context
WHAT’S THE DEAL WITH THESE MONSTERS?
Moving to grad school thousands of miles away from home started off with excitement, but also with unexpectedly irregular sleep patterns. Many of the international students could be found napping on the softest green couch in the studio, sipping cups of coffee like a magic potion and hoping for it to work! Amongst them were two sleepy heads, us (Manisha and Rittika), perpetually yawning and trying to wrap our heads around p5.

The idea stemmed from us joking about creating a wall of phones, each displaying a yawning monster, and seeing its effect on the viewer. Building on that, we thought of vertical gardens with animals sleeping in them that awaken with different user interactions, or having twenty phones sleeping and each user figuring out how to wake their phone up. Eventually, we narrowed it down to twenty Sleepy Monsters, with each participant trying out different interactions with their phone to Wake them up!

sketches2

THE CONCEPT
The way phones are integrated into our lives today, they are not just mere devices but more like individual electronic beings that we wake up, talk to, play with and can’t live without. No wonder we feel we’ve lost part of ourselves when we forget to bring our smartphone along (Suri, 2013). We wanted the user to interact with their Sleepy Monster (on the phone) and experience its emotions: getting angry if woken up, saying NO NO if tapped, refusing to wake up unless the user had discovered the one gesture that would cause it to open its eyes. Adding a personality to their personal device was an attempt to humanize it. The experience was meant to create a moment of delight once the user was able to wake up the Sleepy Monster, and to instill the excitement of having a fun virtual creature in their pocket to play with or collect. The ‘wake up the monster and collect it’ element of the experience was inspired by the cat-collecting easter egg game on Android Nougat and the Pokémon Go mania for collecting virtual Pokémon.

inspiration1

By assigning personalities to the Monsters and having users interact with them, it was interesting to see the different ways the users tried to wake them.

From shouting WAKE UP! at their phones, to poking the virtual eyes, to vigorously shaking them, it was interesting to see users employ methods they would usually use with people.

The next steps with these Sleepy Monsters could be a playful application to collect them, morning alarms, or maybe do-not-disturb (DND) features for device screens.

THE PROCESS
Day 1: We used the ‘Create your Portrait’ exercise as a starting point to build our understanding of coding. Both of us had limited knowledge of programming, so we decided to spend the first few days actively trying our hand at p5 programming, exploring different functions, the possibilities of the process and the underlying logic. Key resources for this stage were The Coding Train YouTube videos by Daniel Shiffman and the book Make: Getting Started with p5.js by Lauren McCarthy.

sketches1

Day 3: Concept brainstorming led us to questions about the various activities we could implement and what functions were possible. We spent the next few days exploring different interactivity and writing short sketches based on the Reference section of the p5.js website. Some early concepts revolved around a fitness challenge, music-integrated experiences, picture puzzles, math puzzle games, and digital versions of conventional games like tic-tac-toe, catch-ball or ludo.

0001

Day 6: We did a second brainstorm, now with a clearer picture of the possibilities within the project scope. A lot of our early ideas tended towards networking, but through this brainstorm we looked at ways to replace the networking aspects with actual people-to-people interactions. Once we had narrowed down the virtual Sleepy Monster concept, we started defining the possible interactions we could build for the mobile interface.

sketches3

Day 8: We sketched out the Monster faces for the visual interface and prototyped them using p5. In parallel, we programmed the interactions as individual sketches to try out each of them: acceleration mapped to eye-opening, rotation mapped to eye-opening, multitouch mapped to eye-opening, audio playback, and random colour selection on setup.

Day 10: The next step involved combining the interactions into one final code, where each interaction would execute as per the conditions defined in the combined code. This stage involved a lot of trial and error, as we would write the code and then run it on different smartphones with varying operating systems and browsers.

Day 10-15: A large portion of our efforts in this leg of the project was focussed on bug fixing and preparing elements (presentation, QR codes for scanning, steps for the demo and documentation of the experience) for the final demo, simplifying the experience to fit everything in the allotted time of 7 minutes per team.

CHALLENGES
Getting the applications to work in different browsers and on different operating systems was an unforeseen challenge we faced during trials of the codes, and the same problem popped up during the project demo. On Android the applications worked best in the Firefox browser, and on iOS they worked best in Chrome.
Seamlessly coordinating the experience for 20 people was another challenge; we did not anticipate the chaos and irregularity that come with multiple people interacting with multiple screens.
Another issue came up with audio playback. We had incorporated a snoring sound for the Sleepy Monster, played in the background when the application loaded. The sound playback worked well in Firefox on Android devices but didn’t run in Chrome or on iOS devices; on iOS, the application stopped running, with a Loading… message appearing each time.

PROJECT SPECIFIC LEARNINGS
  • Defining absolute values for acceleration and rotation sensor data
  • Random background colour changes on each setup of the code
  • Executing multiple smartphone interactions like acceleration, rotation, touch, multitouch, device shake and pixel-area-defined touches
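The first learning, taking absolute values of signed sensor readings, can be sketched in a few lines; the threshold below is an illustrative assumption, not the value used in the project:

```javascript
// Readings such as accelerationX are signed, so Math.abs() lets one
// threshold catch a shake in either direction along the axis.
// WAKE_THRESHOLD is an illustrative assumption.
const WAKE_THRESHOLD = 15; // device units of acceleration along X

function shouldWake(accelerationX) {
  return Math.abs(accelerationX) > WAKE_THRESHOLD;
}
```

Without the absolute value, a shake in the negative direction would never cross a positive-only threshold.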

Meet the Sleepy Monsters by scanning the QR codes
qr-code_master


References

    1. “Naoto Fukasawa & Jane Fulton Suri On Smartphones As Social Cues, Soup As A Metaphor For Design, The Downside Of 3D Printing And More”. Core77, 2013, https://www.core77.com/posts/25052/Naoto-Fukasawa-n-Jane-Fulton-Suri-on-Smartphones-as-Social-Cues-Soup-as-a-Metaphor-for-Design-the-Downside-of-3D-Printing-and-More.
    2. McCarthy, Lauren et al. Getting Started With p5.js. Maker Media, Inc., 2015, pp. 1-183. https://ebookcentral.proquest.com/lib/oculocad-ebooks/reader.action?docID=4333728
    3. Henry, Alan. “How To Play Google’s Secret Neko Atsume-Style Easter Egg In Android Nougat”. Lifehacker.Com, 2016, https://lifehacker.com/how-to-play-googles-secret-neko-atsume-style-easter-egg-1786123017
    4. Pokémon Go. Niantic, Nintendo and The Pokémon Company, 2016.
    5. “Thoughtless Acts?”. Ideo.Com, 2005, https://www.ideo.com/post/thoughtless-acts
    6. Rosini, Niccolo et al. ““Personality-Friendly” Objects: A New Paradigm For Human-Machine Interaction”. IARIA, ACHI 2016 : The Ninth International Conference On Advances In Computer-Human Interactions, 2016.
    7. Wang, Tiffine, and Freddy Dopfel. “Personality Of Things – Techcrunch”. Techcrunch, 2019, https://techcrunch.com/2019/07/13/personality-of-things/
    8. The Coding Train. 3.3: Events (Mousepressed, Keypressed) – Processing Tutorial. 2015, https://www.youtube.com/watch?v=UvSjtiW-RH8
    9. The Coding Train. 7.1: What Is An Array? – p5.js Tutorial. 2015, https://www.youtube.com/watch?v=VIQoUghHSxU
    10. The Coding Train. 2.3: JavaScript Objects – p5.js Tutorial. 2015, https://www.youtube.com/watch?v=-e5h4IGKZRY
    11. The Coding Train. 5.1: Function Basics – p5.js Tutorial. 2015, https://www.youtube.com/watch?v=wRHAitGzBrg
    12. The Coding Train. p5.js Random Array Requests (whatToEat). 2016, https://www.youtube.com/watch?v=iCXBNKC6Wjw
    13. “Learn | P5.Js”. P5js.Org, 2019, https://p5js.org/learn/interactivity.html
    14. Puckett, Nick. “Phone Scale”. 2019. https://editor.p5js.org/npuckett/sketches/frf9F_BBA