Experiment 5 – OK to Touch!?

Project Title
OK to Touch!?

Team members
Priya Bandodkar & Manisha Laroia

Mentors
Kate Hartman & Nick Puckett

Project Description
Code | Computer Vision | p5js
OK to Touch!? is an interactive experience that brings inconspicuous tracking technology into the spotlight and makes it visible to users through interactions with everyday objects. The concept uses experience to convey how users’ private data could be tracked, without their consent, in the digital future.

A variety of popular scripts are invisibly embedded on many web pages, harvesting a snapshot of your computer’s configuration to build a digital fingerprint that can be used to track you across the web, even if you clear your cookies. It is only a matter of time before these tracking technologies take over the ‘Internet of Things’ we are starting to surround ourselves with. From billboards to books and from cars to coffeemakers, physical computing and smart devices are becoming more ubiquitous than we can fathom. As users of smart devices, the very first click or touch on a smart object signs us up to be tracked online and become another data point in the web of ‘device fingerprinting’, with no conspicuous privacy policies and no apparent warnings.
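To make the fingerprinting idea concrete, here is a minimal sketch, in plain browser JavaScript, of the kind of configuration snapshot such a script might collect. It is an illustration only, not code from any particular tracker; real fingerprinting scripts combine many more signals and typically hash them into a single identifier.

```javascript
// Illustration only: a rough sketch of browser signals a fingerprinting
// script might read. Real trackers add canvas, font and audio signals
// and hash the combination into a single identifier.
function collectFingerprintSignals() {
  return {
    userAgent: navigator.userAgent,                      // browser + OS string
    language: navigator.language,                        // preferred language
    platform: navigator.platform,                        // OS platform hint
    screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
    timezoneOffset: new Date().getTimezoneOffset(),      // minutes from UTC
    cookiesEnabled: navigator.cookieEnabled,
    hardwareConcurrency: navigator.hardwareConcurrency   // CPU core count
  };
}

// These signals stay fairly stable across visits, so together they can act
// as an identifier even after cookies are cleared.
console.log(collectFingerprintSignals());
```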

With this interactive experience, the designers are attempting to ask the question:
How might we create awareness about the invisible data tracking methods as the ‘Internet of Things’ expands into our everyday objects of use?

Background
In mid-2018, when our inboxes were full of ‘updated privacy policy’ emails, it was not a chance event that all companies decided to update their policies at the same time, but an after-effect of the enforcement of the GDPR, the General Data Protection Regulation. The General Data Protection Regulation (EU) 2016/679 (GDPR) is a regulation in EU law on data protection and privacy for all individual citizens of the European Union (EU) and the European Economic Area (EEA). It also addresses the transfer of personal data outside the EU and EEA. The effects of data breaches on political, marketing and technological practices are evident in the Facebook–Cambridge Analytica data scandal, the Aadhaar login breach, and Google Plus exposing the data of 500,000 people, then 52.5 million, to name a few.

datasecurity-paper
News articles about recent data scandals. Image source: https://kipuemr.com/security-compliance/security-statement/

When the topic of data privacy is brought up in discussion circles, some get agitated about their freedom being affected, some take the fifth, and some say that ‘we have nothing to hide.’ Data privacy is not about hiding but about being ethical. A lot of the data that is shared across the web is used by select corporations to make profits at the cost of the individual’s digital labour; that is why no free software is truly free, it all comes at the cost of your labour of using it and allowing the data it generates to be used. Most people tend not to know what is running in the background of the webpages they hop onto or of the voice interactions they have with their devices, and if they don’t see it, they don’t believe it. With more and more conversation happening around data privacy and ethical design, we believed it would help if we could make this invisible background data transmission visible to the user and initiate a discourse.

exp-5-proposal-question

Inspiration

immaterials

The Touch Project
The designers of the Touch project— which explores near-field communication (NFC), or close-range wireless connections between devices—set out to make the immaterial visible, specifically one such technology, radio-frequency identification (RFID), currently used for financial transactions, transportation, and tracking anything from live animals to library books. “Many aspects of RFID interaction are fundamentally invisible,” explains Timo Arnall. “As users we experience two objects communicating through the ‘magic’ of radio waves.” Using an RFID tag (a label containing a microchip and an antenna) equipped with an LED probe that lights up whenever it senses an RFID reader, the designers recorded the interaction between reader and tag over time and created a map of the space in which they engaged. Jack Schulze notes that alongside the new materials used in contemporary design products, “service layers, video, animation, subscription models, customization, interface, software, behaviors, places, radio, data, APIs (application programming interfaces) and connectivity are amongst the immaterials.”
See the detailed project here.

digital-fingerprint

This is Your Digital Fingerprint
Because data is the lifeblood for developing the systems of the future, companies are continuously working to ensure they can harvest data from every aspect of our lives. As you read this, companies are actively developing new code and technologies that seek to exploit our data at the physical level. Good examples of this include the quantified self movement (or “lifelogging”) and the Internet of Things. These initiatives expand data collection beyond our web activity and into our physical lives by creating a network of connected appliances and devices, which, if current circumstances persist, probably have their own trackable fingerprints. From these initiatives, Ben Tarnoff of Logic Magazine concludes that “because any moment may be valuable, every moment must be made into data. This is the logical conclusion of our current trajectory: the total enclosure of reality by capital.” More data, more profit, more exploitation, less privacy. See the detailed article here.

paper-phone_special project

Paper Phone
Paper Phone is an experimental app, developed by London-based studio Special Projects as part of Google’s digital wellbeing experiments, which helps you take a little break from your digital world by printing a personal booklet of the information you’ll need that day. Printed versions of the functions you use the most, such as contacts, calendars and maps, let you get things done in a calmer way and help you concentrate on the things that matter most. See the detailed project here.

irl-podcast

IRL: Online Life is Real Life
Our online life is real life. We walk, talk, work, LOL and even love on the Internet – but we don’t always treat it like real life. Host Manoush Zomorodi explores this disconnect with stories from the wilds of the Web, and gets to the bottom of online issues that affect us all. Whether it’s privacy breaches, closed platforms, hacking, fake news, or cyber bullying, we the people have the power to change the course of the Internet, keeping it ethical, safe, weird, and wonderful for everyone. IRL is an original podcast from Firefox, the tech company that believes privacy isn’t a policy. It’s a right. Hear the podcast here.

These sources helped define the question we were asking and inspired us to show the connection between the physical and the digital, to make the invisible visible and tangible.

The Process

The interactive experience was inspired by the ‘How Might We’ question we raised after our research on data privacy, and we began sketching out the details of the interaction:

  • Which interactions we wanted: touch, sound, voice, or tapping into user behaviour
  • What tangible objects we should use: daily objects, a new product designed with affordances to interact with, or digital products like mobile phones and laptops
  • Which programming platform to use, and
  • How the setup and user experience would work.

ideation-comp_

While proposing the project we intended to make tangible interactions using Arduino embedded in desk objects, with Processing used alongside it to create visuals that would illustrate the data tracking. We wanted the interactions to be seamless and the setup to look normal, intuitive and inconspicuous, reflecting the hidden, creepy nature of data-tracking techniques. Here is the initial setup we had planned to design:

installation

Interestingly, in our early proposal discussion we raised the concern of having too many wires in the display if we used Arduino, and our mentors proposed we look at the ml5 library with p5.js, a machine learning library that works with p5.js to recognize objects using computer vision. We tried the YOLO model in ml5 and iterated with the code, trying to recognize objects like remotes, mobile phones, pens, or books. The challenge with this particular code was in creating the visuals we wanted to accompany each recognized object, tracking multiple interactions, and overlaying visuals on the video being captured for computer vision. It was very exciting for us to use this library, as we did not have to depend on hardware interactions: we could use a setup with no wires and no visible digital interactions, and create a mundane scene which could then bring in the surprise of the tracking visuals, aiding the concept.
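For reference, a minimal p5.js sketch of the kind of ml5 YOLO loop we iterated on looks roughly like the following. This is a simplified reconstruction, not the final project code; it assumes the p5.js and ml5.js libraries are loaded on the page and a webcam is available.

```javascript
// Simplified sketch of ml5 YOLO object detection over a webcam feed.
// Not the final project code; assumes p5.js and ml5.js are loaded.
let video;
let yolo;
let objects = []; // latest detections: { label, confidence, x, y, w, h }

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  // start detecting once the model has loaded
  yolo = ml5.YOLO(video, () => yolo.detect(gotResults));
}

function gotResults(err, results) {
  if (!err) objects = results;
  yolo.detect(gotResults); // keep detecting on subsequent frames
}

function draw() {
  image(video, 0, 0, width, height);
  for (const obj of objects) {
    // ml5 YOLO returns normalized (0-1) bounding-box coordinates
    noFill();
    stroke(255, 0, 0);
    rect(obj.x * width, obj.y * height, obj.w * width, obj.h * height);
    noStroke();
    fill(255, 0, 0);
    text(`${obj.label} (${nf(obj.confidence, 0, 2)})`, obj.x * width, obj.y * height - 5);
  }
}
```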

track-remote-with-rectangle

ml5-recognize-remote
Using the ml5 YOLO model to track and recognize a remote.

data-points-map

motion-tracking_code
Using the openCV library for p5js and processing.

In using the ml5 library we also came across the openCV libraries that work with Processing and p5.js, and iterated with their pixel-change, or frame-difference, functions. We created overlay visuals on the video capture, and also versions without the video shown, creating a data-tracking map of sorts. Eventually we used the optical flow library example and built the final visual output on it. For input we used a webcam and ran the captured video feed through p5.js.
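The frame-difference idea can be sketched in a few lines of p5.js. The snippet below is a simplified illustration, not the project code (which built on the optical flow library example): it compares each webcam frame with the previous one and leaves a bright trace wherever pixels changed, accumulating a rough tracking map.

```javascript
// Minimal p5.js frame-difference sketch: marks pixels that changed between
// consecutive webcam frames, accumulating a rough motion/tracking map.
let video;
let prevFrame;
const threshold = 40; // per-channel change needed to count as motion

function setup() {
  createCanvas(320, 240);
  pixelDensity(1); // keep canvas and video pixel arrays the same size
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  prevFrame = createImage(width, height);
  background(0);
}

function draw() {
  video.loadPixels();
  prevFrame.loadPixels();
  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    const diff =
      abs(video.pixels[i] - prevFrame.pixels[i]) +
      abs(video.pixels[i + 1] - prevFrame.pixels[i + 1]) +
      abs(video.pixels[i + 2] - prevFrame.pixels[i + 2]);
    if (diff > threshold * 3) {
      // leave a bright trace where motion happened; traces accumulate
      pixels[i] = 255;
      pixels[i + 1] = 204;
      pixels[i + 2] = 0;
      pixels[i + 3] = 255;
    }
  }
  updatePixels();
  prevFrame.copy(video, 0, 0, width, height, 0, 0, width, height);
}
```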

Challenges & Learnings:
Our biggest learning was in the process of prototyping and creating setups to user-test and understand the nuances of creating an engaging experience.
The final setup was to have a webcam on top to track any interaction that happens with the products on the table; the input video feed would be processed to give a digital output of data-tracking visualizations. For the output we tried various combinations: using a projector to throw visuals as the user interacted with the objects, using an LCD display to overlay the visuals on the video feed, or using a screen display in the background to form a map of the data points collected through the interaction.

The top projection was something we felt would be a very interesting output method, as we would be able to throw a projection onto the products as the user interacted with them, creating the visual layer of awareness we wanted. Unfortunately, each time we used top projection the computer vision code picked up a lot of visual noise: each projection was added to the video capture as input, generating a feedback loop of projections that were unnecessary for the discreet experience we wanted to create. Projections looked best in dark spaces, but that would compromise the effectiveness of the webcam, and computer vision was the backbone of the project. Eventually we used an LCD screen and a top-mounted webcam.

proposal-image-1
Test with the video overlay that looked like an infographic
process-2
Testing the top projections. This projection method generated a lot of visual noise for the webcam and had to be dropped.
tracking
Showing the data point tracking without the video capture.
process-1
Setting up the top webcam and hiding it within a paper lamp for the final setup.

Choice of Aesthetics:
The final video feed with the data-tracking visuals looked more like an infographic, subtler than the strange, surveillance-like experience we wanted to create. So we decided to use a video filter as an additional visual layer on the capture, to show that it had undergone some processing and was being watched or tracked. The video was displayed on a large screen placed adjacent to a mundane desk with typical desk objects like books, lamps, plants, stationery, stamps, a cup and blocks.
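As an example of the kind of filter layering we mean, a p5.js built-in filter can be applied on top of the capture so the feed visibly reads as ‘processed’. This is a hedged sketch, not the exact filter used in the installation.

```javascript
// Sketch of giving the webcam feed a processed, "being watched" look
// using built-in p5.js filters; the project's actual filter differed.
let video;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
}

function draw() {
  image(video, 0, 0, width, height);
  filter(POSTERIZE, 4); // crush the colour range so the feed looks machine-read
  filter(GRAY);         // drop to greyscale for a surveillance feel
}
```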

setup

Having a bare webcam during the critique made the kind of interaction obvious to users; learning from that, we hid the webcam inside a paper lamp in the final setup. This added another cryptic layer to the interaction, reinforcing the concept.

setup-in-use

These objects were chosen and displayed so as to create a desk workspace where people could come, sit and start interacting with the objects through the affordances created. Affordances were created using semi-opened books, bookmarks inside books, an open notepad with stamps and ink pads, a small semi-opened wooden box, a half-filled cup of tea with a tea bag, a wooden block, stationery, and a magnifying glass, all to hint at a simple desk that could just as easily be a smart desk tracking each move of the user and transmitting data without consent on every interaction. The webcam was hung over the table and discreetly covered by a paper lamp to add to the everyday-ness of the desk setup.

Each time a user interacted with the setup, the webcam would track the motion and the changes in the pixel field and generate data-capturing visuals, indicating that something strange was happening and making the user question whether it was OK to Touch!?

user1

Workplan:

Dates | Activities
23–25 November | Material procurement and quick prototyping to select the final objects
26–28 November | Writing the code and digital fabrication
28–30 November | Testing and bug-fixing
1–2 December | Installation and final touches
3–4 December | Presentation

portfolio-image-2

portfolio-image-1

The Project code is available on Github here.

__________________
References

Briz, Nick. “This Is Your Digital Fingerprint.” Internet Citizen, 26 July 2018, www.blog.mozilla.org/internetcitizen/2018/07/26/this-is-your-digital-fingerprint/.

Chen, Brian X. “’Fingerprinting’ to Track Us Online Is on the Rise. Here’s What to Do.” The New York Times, The New York Times, 3 July 2019, www.nytimes.com/2019/07/03/technology/personaltech/fingerprinting-track-devices-what-to-do.html.

Groote, Tim. “Triangles Camera.” OpenProcessing, www.openprocessing.org/sketch/479114

Grothaus, Michael. “How our data got hacked, scandalized, and abused in 2018”. FastCompany. 13 December 2018. www.fastcompany.com/90272858/how-our-data-got-hacked-scandalized-and-abused-in-2018

Hall, Rachel. “Terror and the Female Grotesque: Introducing Full-Body Scanners to U.S. Airports.” In Rachel E. Dubrofsky and Shoshana Amielle Magnet (eds.), Feminist Surveillance Studies, pp. 127-149. Durham: Duke University Press, 2015.

Khan, Arif. “Data as Labor.” SingularityNet, Medium, 19 November 2018, blog.singularitynet.io/data-as-labour-cfed2e2dc0d4.

Szymielewicz, Katarzyna, and Bill Budington. “The GDPR and Browser Fingerprinting: How It Changes the Game for the Sneakiest Web Trackers.” Electronic Frontier Foundation, 21 June 2018, www.eff.org/deeplinks/2018/06/gdpr-and-browser-fingerprinting-how-it-changes-game-sneakiest-web-trackers.

Antonelli, Paola. “Talk to Me: Immaterials: Ghost in the Field.” MoMA, www.moma.org/interactives/exhibitions/2011/talktome/objects/145463/.

Shiffman, Daniel. “Computer Vision: Motion Detection – Processing Tutorial” The Coding Train. Youtube. 6 July 2016. www.youtube.com/watch?v=QLHMtE5XsMs

 

Experiment 4 – You Are Not Alone!

You Are Not Alone! is a networking experiment designed to create an experience of digital presence and interaction in a shared online space with illustrated on-screen windows based on the metaphor of ‘each window as a window into the other world’.
An experiment with p5.js & networking using PubNub, a global real-time data stream network.

Team
Sananda Dutta, Manisha Laroia, Arshia Sobhan, Rajat Kumar

Mentors
Kate Hartman & Nick Puckett

Description
Starting with the brief for the experiment, i.e. online networking across different physical locations, the two key aspects of networking that struck us were:
– the knowledge of each other’s presence, i.e. someone else is also present here;
– the knowledge of common shared information, i.e. someone else is also seeing what I am seeing;
and how crucial these are to creating the networking experience.
You Are Not Alone! is a networking experiment designed to create an experience of digital presence and interaction in a shared online space. The community in the shared space can light up windows with multiple clicks and have their ghost avatar travel through the windows that others light up. With prolonged inactivity in a pixel area, the lit windows slowly get darker and turn off. The visual metaphor was taken from a neighbourhood building, and how people know of each other’s presence when they see lit windows and of their absence when they see unlit windows. In that space, people don’t communicate directly but know of each other’s presence. The individual avatar moving through this space was meant to create an interaction within the present community. The windows were metaphors for windows into other people’s worlds as they entered the same space.

Inspiration

paper-plane

Paper Planes
The concept of this cross-location virtual presence experiment was to create and fold your own paper plane, stamp it with your location, and “throw” it back into the world where it can be caught by someone on the other side of the world. Paper Planes started as a simple thought – “What if you could throw a paper plane from one screen to another?”
See the detailed project here.

10

Dear Data
Dear Data is a year-long, analog data drawing project by Giorgia Lupi and Stefanie Posavec, two award-winning information designers living on different sides of the Atlantic. Each week, and for a year, they collected and measured a particular type of data about their lives, used this data to make a drawing on a postcard-sized sheet of paper, and then dropped the postcard in an English “postbox” (Stefanie) or an American “mailbox” (Giorgia)! By collecting and hand drawing their personal data and sending it to each other in the form of postcards, they became friends. See the detailed project here.

The Process
We started the project with a brainstorm, building on the early idea of networked collaborative creation. We imagined an experience where each person in the network would draw a doodle and, through networking, add it to a collaborative running animation. With an early prototype we realized the idea had technical challenges, and we moved to another idea based on shared frames and online presence to co-create a visual. A great amount of the delight in the Paper Planes networked experience comes from the pleasant animation and from knowing other people are present there, carrying out activities.

bg

brainstorm-2_11nov2019

brainstorm-11nov2019

We wanted to replicate the same experience in our experiment. We brainstormed several ideas: drawing windows for each user and having text added to them, windows appearing as people join in, and sharing common information like the number of trees or windows around you and representing this data visually to create a shared visual. We all liked the idea of windows lighting up with presence and built around it: light on meaning presence, light off meaning absence or inactivity.
We created the networked digital space in two stages:
[1] One with the windows lighting up on clicking and going dark when not clicked, and the personal ghost/avatar travelling through the windows.
[2] The other with the shared cursor, where each user in the networked space could see each other’s avatars and move through the shared lit windows.

2019-11-13

2019-11-13-2

2019-11-13-3

We built the code in stages: first the basic light-on/light-off colour change on click, followed by the array of windows, then the networked transfer of user mouse clicks and positions, and finally the avatars for the users. Our biggest challenge was the networking: making the avatar appear on click after the window is lit, and having everyone share the same networked space.
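A minimal sketch of the click-message networking is shown below. It assumes PubNub’s JavaScript SDK with placeholder keys, a hypothetical channel name and a simple 6 × 4 grid; it is not the project’s actual channel, message format or layout.

```javascript
// Minimal p5.js + PubNub sketch: publish each click and brighten the matching
// window for everyone on the channel. Keys, channel and grid are placeholders.
const pubnub = new PubNub({
  publishKey: 'pub-c-xxxx',   // placeholder
  subscribeKey: 'sub-c-xxxx', // placeholder
  uuid: 'window-user-' + Math.floor(Math.random() * 10000)
});

const CHANNEL = 'you-are-not-alone'; // hypothetical channel name
const COLS = 6;
const ROWS = 4;
let brightness = {}; // windowIndex -> number of clicks (capped at 3)

function setup() {
  createCanvas(600, 400);
  pubnub.addListener({
    message: (event) => {
      const idx = event.message.windowIndex;
      brightness[idx] = min((brightness[idx] || 0) + 1, 3);
    }
  });
  pubnub.subscribe({ channels: [CHANNEL] });
}

function mousePressed() {
  // every click is published; everyone's listener (including ours) updates the grid
  const idx = floor(mouseX / 100) + floor(mouseY / 100) * COLS;
  pubnub.publish({ channel: CHANNEL, message: { windowIndex: idx } });
}

function draw() {
  background(0);
  stroke(60);
  for (let i = 0; i < COLS * ROWS; i++) {
    const level = brightness[i] || 0;
    fill(255, 204, 0, level * 85); // brighter with each click, maxing out at three
    rect((i % COLS) * 100, floor(i / COLS) * 100, 100, 100);
  }
}
```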

2019-11-14-1

The shared space loads with a black screen. The visual has windows that light up as one clicks on the screen; each window gets brighter with each click, reaching its maximum brightness at three clicks. Each user in the space can see other windows light up and so knows of the presence of other users. Each person also sees the windows in the same grid alignment as the rest of the community. Once a window is lit, users can see their avatar in it and can then navigate their avatar across the screen through lit windows, following their cursor movement.

boolean-window-ghost

In version 1 of the experience, users cannot see each other’s avatars moving, but they can see windows light up as others interact with the screen. The avatar was chosen to be a silhouette so that it would visually appear in the lit windows against the black backdrop, and the eyes popping in would create an element of surprise. In version 2, a window appears as a user enters the digital space, and each user has an avatar that can travel through the others’ windows. The avatar is black when it is in the user’s own window, and the ghost colour is lighter when it is in another user’s window. Unfortunately, we were unable to show both interactions in the same code program, with the network messages for cursor clicks and cursor positions sent at the same time.
Presence was indicated with a window, a signifier of the window into the other person’s world, like a global building or neighbourhood. The delight comes from knowing people are there in the same space as you, at the same time, and that there is something in common.

screen-shot-2019-11-14-at-9-33-50-pm

Choice of Aesthetics
The page loads to a black screen. We tapped into the user’s tendency to click randomly on a blank screen to find out what comes up, so we made the windows light up on each click, getting brighter with every click. Turning the light of a window on and off to show presence also taps into the clickable-fidgeter urge, so that visitors to the digital space hang around and interact with each other.
Another feature was to allow each user to see the other users’ cursors as they navigate the screen through the lit windows. We kept the visuals to a minimum, with simple rectangles, black for dark windows and yellow for bright windows. While coding, we had to define the click area that would register clicks on the network to light up the windows.
The pixel area defined varied from screen to screen and would sometimes register a click for one rectangle, sometimes for two or three. We wanted to light one window at a time, but we let the erratic click behaviour be, as it added a visual randomness to the windows lighting up.
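One way to make the click area consistent across screen sizes (a suggestion in hindsight, not what we shipped) is to derive the window index from the click position as a fraction of the canvas, so every screen maps clicks onto the same grid:

```javascript
// Hypothetical helper: convert a click into a grid-cell index that is
// independent of the actual pixel size of the canvas.
const COLS = 6;
const ROWS = 4;

function windowIndexAt(x, y, canvasW, canvasH) {
  const col = constrain(floor((x / canvasW) * COLS), 0, COLS - 1);
  const row = constrain(floor((y / canvasH) * ROWS), 0, ROWS - 1);
  return row * COLS + col; // same index on every screen size
}

// Usage inside a p5 sketch:
// function mousePressed() {
//   const idx = windowIndexAt(mouseX, mouseY, width, height);
//   // ...publish idx over the network...
// }
```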

Challenges & Learnings

  1. Networking and sending messages for more than one parameter, like the shared cursor, mouse clicks and mouse positions, was challenging, and we could not have the click and the mouse-hover functions sent at the same time. Thus we got the click-to-light-up behaviour working, but we were unable to share the cursor information, so users could not see each other’s avatars on the network.

screenshot-53

2019-11-14-3

 

  2. Another challenge was in having the avatar appear in the window on the click after the window is lit up. We were able to get the avatar to appear in each window in the array and to have a common avatar appear on click in one window. The challenge was in managing the clicks and the image position at the same time. We eventually chose to make a shared-cursor networking code, but that interfered with the click functionality to dim and light the windows with increasing opacity.
  3. Our inexperience with code and networking was also a challenge, but we made visual decisions to make the user experience better, like choosing a black-coloured icon that looked like a silhouette on the black background, so we did not have to create it on click but it would be visible when a window was lit.
  4. If we had time, we would surely work on integrating the two codes to create one networked space where the windows followed the grid, all users saw the same visual output, the windows lit up with increasing opacity, and all users could see all the avatars floating around through the windows.

Github Link for the Project code is here.

References
[1] Shiffman, Daniel. The Nature of Code. California, 2012. Digital.

[2] Shiffman, Daniel. The Coding Train. 7.8: Objects and Images – p5.js Tutorial. 2015. Digital.

[3] Active Theory. Google I/O 2016: Paper Planes. 2016. Digital.
https://medium.com/active-theory/paper-planes-6b0008c56c17

Experiment 3: Block[code]

Block[code] is an interactive experience that engages the user in altering/modifying on-screen visuals using tangible physical blocks. The visuals were created using Processing, in an attempt to explore The Nature of Code approach to particle motion for creative coding.

Project by
Manisha Laroia

Mentors
Kate Hartman & Nick Puckett

Description
The experiment was designed to create a tangible interaction, i.e. play with the rectangular blocks, their selection and their arrangement, that would in turn alter the visual output, i.e. the organisation and the motion of the rectangles on the screen. I conceptualised the project taking inspiration from physical coding, specifically Google’s Project Bloks, which uses the connection and the order of joining of physical blocks to generate a code output. The idea was to use physical blocks, i.e. rectangular tangible shapes, to influence the motion and appearance of the rectangles on the screen, from random rectangles, to coloured strips of rectangles travelling at a fixed velocity, to all the elements on the screen accelerating, giving users the experience of creating visual patterns.

img_20191104_180701-01

Inspiration
Project Bloks is a research collaboration between Google, Paulo Blikstein (Stanford University) and IDEO with the goal of creating an open hardware platform that researchers, developers and designers can use to build physical coding experiences. It is a system that developers can customise, reconfigure and rearrange to create all kinds of different tangible programming experiences. See the detailed project here.

projectbloks-580x358

Gene Sequencing Data
The visuals were largely inspired by gene or DNA sequencing data, from my brief stint in the world of biotechnology. I used to love the vertical motion and the layering effect the sequencing data would create in the visual output, and I wanted to generate that using particle motion and code. I was also inspired to tie together the commonality between genetic code and computer code and bring it out in the visual experience.

gene-sequencing-data
DNA sequencing. Image sourced from NIST.gov.

Mosaic Brush, a sketch on openprocessing.org by inseeing, generates random pixels on the screen and uses the mouseDragged and keyPressed functions for pixel fill and visual reset. The project can be viewed here.

pixel-project

The Process
I started the project by first making class objects and writing code for simpler visuals like fractal trees and single-particle motion. Taking reference from single-particle motion, I experimented with location, velocity and acceleration to create a running stream of rectangle particles. I wanted the rectangles to leave a tail or trace as they moved vertically down the screen, for which I played with opacity changing with distance, and with calling background only in the setup function so as to get a stream or trace of the moving rectangle particle [1].

In the next iterations I created a class of these rectangle particles and gave it a move function, an update function and system velocity functions based on its location on the screen. Once I was able to create the desired effect in a single particle stream, I created multiple streams of particles with different colours and different parameters for the multiple-stream effect.
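Although the project itself was written in Processing, the rectangle-stream idea can be sketched in p5.js as follows; this is a simplified illustration of the approach described above, not the project code.

```javascript
// Simplified p5.js illustration of the vertical rectangle-particle streams.
// The project's own sketches were written in Processing.
class RectParticle {
  constructor(x, col) {
    this.x = x;
    this.y = random(-height, 0);
    this.vel = random(1, 3); // fixed downward velocity per particle
    this.col = col;
  }
  update() {
    this.y += this.vel;
    if (this.y > height) this.y = -20; // recycle at the top for a continuous stream
  }
  show() {
    noStroke();
    fill(this.col);
    rect(this.x, this.y, 12, 20);
  }
}

let particles = [];

function setup() {
  createCanvas(400, 600);
  // fluorescent palette echoing the gene-sequencing inspiration
  const colours = ['#39ff14', '#ff2079', '#00e5ff', '#fff200'];
  for (let s = 0; s < colours.length; s++) {
    for (let i = 0; i < 15; i++) {
      particles.push(new RectParticle(40 + s * 100, color(colours[s])));
    }
  }
}

function draw() {
  // a translucent background leaves a fading trail; calling background()
  // only in setup() would leave permanent traces instead
  background(0, 40);
  for (const p of particles) {
    p.update();
    p.show();
  }
}
```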

img_20191031_124817-01

img_20191031_171129-01

img_20191104_163708-01-01

The basic model of a Tangible User Interface is that the interface between people and digital information requires two key components: input and output, or control and representation. Controls enable users to manipulate the information, while representations are perceived with the human senses [2]. Coding is an on-screen experience, and I wanted to use the visual output as a way for the participant to use the physical tangible blocks as an interface to influence and build the visuals on the screen. The tangible blocks served as the controls to manipulate the information, and its representation was displayed as changing visuals on the screen.

the-tui

setup-examples

Choice of Aesthetics
The narrative tying physical code to biological code was the early inspiration I wanted to build the experiment around. The visuals were in particular inspired by gene sequencing visuals, of rectangular pixels running vertically in a stream. The tangible blocks were chosen to be rectangular too, with coloured stripes marked on them to relate each one to a coloured stream on the screen. The vertical screen in the setup was used to amplify the effect of the visuals moving vertically. The colours for the bands were selected based on the fluorescent colours commonly seen in the gene sequencing inspiration images, due to the use of fluorescent dyes.

mode-markings

img_20191105_114412-01

img_20191105_114451-01

Challenges & Learnings
(i) One of the key challenges in the experiment was to make a seamless tangible interface, such that the wired setup doesn’t interfere with the user interaction. Since it was an Arduino-based setup, getting rid of the wires was not a possibility, but they could have been hidden in a more discreet physical setup.
(ii) Ensuring the visuals came out with the desired effect was also a challenge, as I was programming with particle systems for the first time. I managed this by creating a single particle with the parameters and then applying it to more elements in the visual.
(iii) Given more time, I would have created more functions, like the accelerate function, that could alter the visuals by slowing the frame rate, reducing the width or changing the shape itself.
(iv) The experiment was more exploratory in terms of the possibilities of this technology and software, and left room for discussions around what it could be rather than being conclusive. Questions that came up in the presentation were: How big do you imagine the vertical screen? How do you see these tangibles being more playful and seamless?

img_20191105_001118

Github Link for the Project code

Arduino Circuits
The circuit for this setup was fairly simple, with a pull-up resistor circuit and DIY switches using aluminium foil.

arduino-circuit-1

arduino-circuit-2

References
[1] Shiffman, Daniel. The Nature of Code. California, 2012. Digital.

[2] Hiroshi Ishii. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international conference on Tangible and embedded interaction (TEI ’08). ACM, New York, NY, USA, xv-xxv. DOI=http://dx.doi.org/10.1145/1347390.1347392

Experiment 2: Forget Me Not

Exploration in Arduino & Proxemics.
An interactive plant that senses the presence of people in its proximity and alters its behaviour according to how close they are.

Team
Manisha Laroia, Nadine Valcin & Rajat Kumar

Mentors
Kate Hartman & Nick Puckett

img_20191016_200058_1-01-01

Description
We started ideating about the project on Proxemics with the intent of creating an experience of delight or surprise for the people who interacted with our artefact from varying proximities. We started exploring everyday objects, notably those you would find on a desk – books, lamps, plants and how they could be activated with servos and LED lights and those activities transformed with proximity data from a distance sensor. We wanted the desired effect to defy the normal behaviour expected from the object and that it would denote some form of refusal to engage with the user, when the user came too close. In that way it was anthropomorphizing the objects and giving them a form of agency.

We explored the idea of a book, a plant or a lamp that would move in unexpected ways. The size of the objects and the limitations of the servos in terms of strength and range of motion posed some challenges. We also wanted the object to look realistic enough not to immediately draw attention to itself or look suspicious, which would help build up to the moment of surprise. We finally narrowed it down to an artificial plant that, in its ideal state, sways at a slow pace creating a sense of its presence, but shows altered behaviour whenever people come within its threshold distances or near proximity.

img_20191021_150554-01

Inspiration
Don Norman in his book, The Design of Everyday Things, talks about design being concerned with how things work, how they are controlled, and the nature of the interaction between people and technology. When done well, the results are brilliant, pleasurable products. When done badly, the products are unusable, leading to great frustration and irritation. Or they might be usable, but force us to behave the way the product wishes rather than as we wish. He adds to it that experience is critical, for it determines how fondly people remember their interactions. When we interact with a product, we need to figure out how to work it. This means discovering what it does, how it works, and what operations are possible (Norman).

An essential part of this interaction is the affordance an object portrays and the feedback it returns for a usage action extended by the user. Altering the expected discoverability affordances and signifiers would result in making the experience stranger and surprising. With the rise of ubiquitous computing and more and more products around us turning into smart objects it is interesting to see how people’s behaviour will change with changed affordances and feedback from everyday objects in their environment, speculating behaviours and creating discursive experiences. Making an object not behave like it should alters the basic conceptual model of usage and creates an element of surprise in the experience. We felt that if we could alter these affordances and feedback in an everyday object based on proximity, it could add an element of surprise and open conversation for anthropomorphizing of objects.

The following art installation projects that all use Arduino boards to control a number of servos provided inspiration for our project:

surfacex

Surface X by Picaroon, an installation with 35 open umbrellas that close when approached by humans. See details here.

servosmit

In X Cube, by Amman-based design firm Uraiqat Architects, consists of 4 faces of 3 m x 3 m each, formed by 34 triangular mirrors (individually controlled by their own servo). All mirrors are in constant motion, perpetually changing the reflection users see of themselves. See details here.

dont-look-at-me

Elisa Fabris Valenti’s Don’t Look at Me, I’m Shy is an interactive installation where the felt flowers in a mural turn away from visitors in the gallery when they come in close proximity. See details here.

robots_dune-raby

Dunne & Raby’s Technological Dreams Series: No.1, Robots, 2007 is a series of objects that are meant to spark a discussion about how we’d like our robots to relate to us: subservient, intimate, dependent, equal? See details here.

The Process
After exploring the various options, we settled on creating a plant that would become agitated as people approached. We also wanted to add another element of surprise by having a butterfly emerge from behind the plant when people came very close. We also had LEDs that would serve as signifiers along with the movement of the plant.

Prototype 1
We started the process by attaching a cardboard piece to the servo motor and taping two wire stems with a plastic flower vertically onto it, to test the motor activity. We wrote the code for the Arduino and used the sensor, the motor and the plant prototype to test the different motions we desired for the different threshold distances.

img_20191009_141204-01

img_20191009_152458-01

The proxemics theory developed by anthropologist Edward Hall examines how individuals interpret spatial relationships. He defined 4 distances: intimate (0 to 0.5 m), personal (0.5 to 1 m), social (1 to 4 m) and public (4 m or more). The sensors posed some difficulty in terms of getting clean data, especially in the intimate and personal distance ranges. We decided on 3 ranges: we combined the intimate and personal ranges into one of less than a metre (< 1000 mm), kept the social range between 1000 and 3000 mm, and treated anything beyond 3000 mm as the public range.

The plant has an idle state at more than 4 metres, where it gently sways under yellow LEDs; an activated state, where the yellow lights blink and the movement is more noticeable; and an agitated state at less than a metre, where its motion is rapid and jerky with red lights blinking quickly. Once we had configured the threshold distances for which the motors could give the desired motion, we moved to a refined version of the prototype.
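The resulting state logic can be summarized in a small sketch. It is written here in JavaScript purely for illustration (the project ran this logic in the Arduino sketch), and the distances follow the three ranges described above.

```javascript
// Illustration only: the project implemented this mapping on the Arduino.
// Distance values are in millimetres, following the ranges above.
function plantState(distanceMm) {
  if (distanceMm < 1000) {
    return { state: 'agitated', leds: 'red, blinking fast', motion: 'rapid and jerky' };
  } else if (distanceMm < 3000) {
    return { state: 'activated', leds: 'yellow, blinking', motion: 'noticeable sway' };
  }
  return { state: 'idle', leds: 'yellow, steady', motion: 'gentle sway' };
}

console.log(plantState(500));  // agitated
console.log(plantState(2000)); // activated
console.log(plantState(4500)); // idle
```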

Prototype 2
We made a wooden box using the digital fabrication lab and purchased the elements to make the plant foliage and flowers. The plant elements were created using wire stems attached to a wooden base secured to the servos. The plant was built using a felt placemat (bought from Dollarama) which was cut into the desired leaf-like shapes and attached to the wire stems. Once we confined the setup to a wooden box, like a pot holding a plant, a new challenge arose in terms of space constraints. Each time the plant moved, the artificial foliage would hit the side walls of the box, interrupting the free motion of the motor. We had to continuously trim the plant and ensure the weight was concentrated in the centre to maintain a constant torque.

img_20191010_173559-01

img_20191009_153035-01

img_20191010_131929-01

The butterfly that we had wanted to integrate was attached to a different servo with a wire, but we never managed to get the desired effect, as we wanted the rigging of the insect to be invisible so that its appearance would elicit surprise. We therefore abandoned that idea but would like to revisit it given more time.

butterfly-setup

At this stage our circuit prototyping board, the sensors and the LEDs were not fully integrated into a single setup. The next step was to combine all of this into one unit, discreetly hiding the electronics and having a single cord power the setup.

img_20191011_135618-01

led-setup

Final Setup
The final setup was designed such that the small plant box was placed within a larger plant box that housed all the wires, the circuits and the sensors. As we were using individual LEDs, the connected LEDs could not fit in the plant box without hampering the motion of the plant, so they were integrated into the larger outer box, with artificial foliage hiding the circuits.

img_20191016_162701-01

img_20191016_200307-01-01

Context-aware computing relates to this: some kind of context-aware sensing method [1] provides devices with knowledge about the situation around them, so that they can infer where they are in terms of social action and act accordingly. The proxemic theories describe many different factors and variables that people use to perceive and adjust their spatial relationships with others, and the same could be used to iterate on people’s relationships with devices.

img_20191016_200145-01

img_20191016_200300-01

img_20191016_200253-01

 

Revealing Interaction Possibilities: We achieved this by giving the plant a slow swaying motion in its idle state. If a person entered the sensing proximity of the plant, the yellow LEDs would light up as if inviting the person.

Reacting to the presence and approach of people: As the person entered the Threshold 1 circle of proximity the yellow LEDs would blink and the plant would start rotating as if scanning its context to detect the individual who entered in its proximity radius.

From awareness to interaction: As the person continues to walk closer, curious to touch or see the plant up close, the movement of the plant would get faster. Eventually, if the person entered the Threshold 2 distance, the red LEDs would light up and the plant would move violently, indicating a reluctance to close interaction.

Spatial visualizations of ubicomp environments: Three threshold distances were defined in the code to offer the discrete distance zones for different interactions, similar to how people create these boundaries around them through their body language and behaviour.

img_20191016_200242-01


Challenges & Learnings

  • Tuning the sensor data was a key aspect of the project, so that we could use it to define the proximity circles. In order to get more stable values we would let the sensor run for some time, ensuring no obstacle was in its field, until we received stable values, and then connect the motor to it; otherwise the motor would take the erratic values and produce random motions instead of the ones programmed.
  • Another challenge was discovering the most suitable sensor positions and placement of the setup in the room with respect to the audience that would see and interact with it. It required us to keep testing in different contexts and with varying number of people in proximity.
  • Apart from the challenges with the sensors, we encountered other software and hardware interfacing issues. Programming the red and yellow LEDs (4 of each colour) presented a challenge in terms of changing from one set to the other. They were initially programmed using arrays, but getting the yellow lights to shut off once the red lights were triggered proved to be difficult, and the lights had to be programmed individually to get the effect we desired. In a second phase, we simplified things by soldering all the lights of the same colour in parallel and running them from one pin on the Arduino.
  • The different levels of motion of the plant were achieved by a long process of trial and error. The agitated state provided an extra challenge in terms of vibrations. The rapid movements of the plant produced vibrations that would impact the box that contains it while also dislodging the lights attached to the container holding the plant.

Github Link for the Project code

Arduino Circuits
We used two arduinos, one to control the servo motor with plant and the other to control the LEDs.

motor-circuit

led-circuit

References
[1] Marquardt, N and S.  Greenberg,  Informing the Design of Proxemic Interactions. Research Report 2011100618, University of Calgary, Department of Computer Science, 2011
[2] Norman, Don. The Design of Everyday Things. New York: Basic Books, 2013. Print.

Experiment 1: Wake them up!

Wake them up! is an interactive experience with a family of Sleepy Monsters displayed across multiple screens, that wake up with pre-programmed, randomly assorted mobile user-interactions.

Team
Manisha Laroia & Rittika Basu

Project Mentors
Kate Hartman & Nick Puckett

Description
Wake them up! is an interactive experience with a family of Sleepy Monsters displayed across multiple screens, that wake up with pre-programmed, randomly assorted mobile user-interactions. The experience consisted of many virtual ‘Sleepy Monsters’ and the participant’s task was to ‘Wake them up’ by interacting with them. The experiment was an attempt to assign personalities and emotions to smartphones and create delight through the interactions.

THE MULTISCREEN EXPERIMENT EXPERIENCE
The participants were organized into four groups, each assigned a QR code. They had to scan it, wake up the monster, keep it awake and move to the next table to wake up the next monster. Eventually they would have woken up all four monsters and collected them all.

For the multiscreen aspect of the experience, we created four Sleepy Monster applications, each with its own colour, hint and wake-up gesture. Each Sleepy Monster was programmed to pick a colour from a predefined array in setup, so that when the code was loaded onto a mobile phone, each of the 20 screens would show a differently coloured monster. For each case we added an indicative response, a pre-programmed reaction of the application to a particular gesture, to tell the user whether or not this was the gesture that worked for their monster and whether they should try a different one. Participants were to try various smartphone interactions, which involved speaking to, shaking, running with and screen-tapping the phone. The monsters responded differently to different inputs. There were four versions of the monster for mobile devices, and one was created for the laptop as a bonus; a simplified sketch of these gesture checks appears after the monster list below.

Sleepy Monster 1
Response: Angry face with changing red shades of the background
Wake up gesture: Rotation in the X-axis

Sleepy Monster 2
Response: Eyes open a bit when touch detected
Wake up gesture: 4 finger Multitouch

Sleepy Monster 3
Response: Noo#! text displays on Touch
Wake up gesture: Tap in a specific pixel area (top left corner)

Sleepy Monster 4
Response: zzz text displays on Touch
Wake up gesture: Acceleration in X-axis causes eyes to open

*Sleepy Monster 5
We also created a web application as an attempt to experiment with keyboard input as a way to interact with the virtual Sleepy Monster. Pressing the ‘O’ key would wake up the monster.
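The sketch below gives a simplified, hedged picture of a few of these wake-up checks in p5.js; it is not the project code and assumes a mobile browser where motion and orientation access has been granted.

```javascript
// Simplified p5.js sketch of a few wake-up checks like those listed above.
// Not the project code; assumes a mobile browser with motion access granted.
let awake = false;
let bg;

function setup() {
  createCanvas(windowWidth, windowHeight);
  angleMode(DEGREES); // so rotationX reads in degrees
  const palette = ['#8e44ad', '#16a085', '#c0392b', '#2980b9'];
  bg = color(random(palette)); // each load picks a different monster colour
}

function draw() {
  background(bg);
  // Monster 1-style check: waking on rotation about the X axis
  if (abs(rotationX) > 45) awake = true;
  // closed eyes while asleep, wide open once a gesture succeeds
  fill(255);
  ellipse(width / 2 - 40, height / 2, 60, awake ? 60 : 8);
  ellipse(width / 2 + 40, height / 2, 60, awake ? 60 : 8);
}

// Monster 2-style check: four-finger multitouch
function touchStarted() {
  if (touches.length >= 4) awake = true;
  return false; // prevent default browser touch gestures
}

// Monster 4-style check: a hard shake of the device
function deviceShaken() {
  awake = true;
}
```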

The four Sleepy Monsters and their built-in interactions:

The participant experience:

Github link for the codes

single-phone-interaction

20190927_124509-1

img_20190927_151122

laptop-app

Project Context
WHAT’S THE DEAL WITH THESE MONSTERS?
Moving to grad school thousands of miles away from home started off with excitement, but also with unexpected, irregular sleep patterns. Many of the international students were found napping on the softest green couch in the studio, sipping cups of coffee like a magic potion and hoping for it to work! Amongst them were us two sleepyheads (Manisha and Rittika), perpetually yawning, trying to wrap our heads around p5.

The idea stemmed from us joking about creating a wall of phones, each displaying a yawning monster, and seeing its effect on the viewer. Building on that, we thought of vertical gardens with animals sleeping in them that awaken with different user interactions, or having twenty phones sleeping with each user figuring out how to wake their phone up. Eventually, we narrowed it down to twenty Sleeping Monsters, with each participant trying out different interactions with their phone to Wake them up!

sketches2

THE CONCEPT
The way phones are integrated into our lives today, they are not just mere devices but more like individual electronic beings that we wake up, talk to, play with and can’t live without. No wonder we feel we’ve lost part of ourselves when we forget to bring our smartphone along (Suri, 2013). We wanted the user to interact with their Sleepy Monster (on the phone) and experience the emotions of the monster: getting angry if woken up, saying NO NO if tapped, refusing to wake up unless they had discovered the one gesture that would cause it to open its eyes, adding a personality to their personal device in an attempt to humanize it. The experience was meant to create a moment of delight once the user was able to wake up the Sleepy Monster, and to instill the excitement of now having a fun virtual creature in their pocket to play with or collect. The ‘wake up the monster and collect it’ element of the experience was inspired by the cat-collecting Easter egg game on Android Nougat and the Pokémon Go mania for collecting virtual Pokémon.

inspiration1

By assigning personalities to the Monsters and having users interact with them, it was interesting to see the different ways the users tried to wake them.

From shouting WAKE UP! at their phones, to poking the virtual eyes, to vigorously shaking them, it was interesting to see users employ methods they would usually use on people.

The next steps with these Sleepy Monsters could be a playful application to collect them, morning alarms, or maybe a do-not-disturb (DND) feature for device screens.

THE PROCESS
Day 1: We used the ‘Create your Portrait’ exercise as a starting point to build our understanding of coding. Both of us had limited knowledge of programming and we decided to use the first few days to actively try our hand at p5 programming, trying to understand different functions, the possibilities of the process and understanding the logic. Key resources for this stage were The Coding Train youtube videos by Daniel Shiffman and the book Make: Getting Started with p5.js by Lauren McCarthy.

sketches1

Day 3: Concept brainstorming led us to questions about the various activities we could implement and what functions were possible. We spent the next few days exploring different interactivity and writing short pieces of code based on the Reference section of the p5js.org website. Some early concepts revolved around creating a fitness challenge, music-integrated experiences, picture puzzles, math puzzle games, or digital versions of conventional games like tic-tac-toe, catch-ball or ludo.

0001

Day 6: We did a second brainstorm, now with a clearer picture of the possibilities within the project scope. A lot of our early ideas tended towards networking, but through this brainstorm we looked at ways in which we could replace the networking aspects with actual person-to-person interactions. Once we had narrowed down the virtual Sleepy Monster concept, we started defining the possible interactions we could build for the mobile interface.

sketches3

Day 8: We sketched out the monster faces for the visual interface and prototyped them using p5. In parallel, we programmed the interactions as individual pieces of code to try out each of them: acceleration mapped to eye-opening, rotation mapped to eye-opening, multitouch mapped to eye-opening, audio playback, and random colour selection on setup.

Day 10: The next step involved combining the interactions into one final code, where the interactions would execute as per conditions defined in the combined code. This stage involved a lot of trial and error, as we would write the code and then run it on different smartphones with varying operating systems and browsers.

Day 10-15: A large portion of our efforts in this leg of the project was focussed on bug fixing and preparing elements (presentation, QR codes for scanning, steps for the demo and documentation of the experience) for the final demo, simplifying the experience to fit everything into the allotted time of 7 minutes per team.

CHALLENGES
Getting the applications to work in different browsers and on different operating systems was an unforeseen challenge we faced during trials of the code. The same problem popped up during the project demo. For Android it worked best in the Firefox browser, and for iOS it worked best in Chrome.
Another challenge was seamlessly coordinating the experience for 20 people: we did not anticipate the chaos or irregularity that comes with multiple people interacting with multiple screens.
Another issue came up with audio playback. We had incorporated a snoring sound for the Sleepy Monster to play in the background when the application loaded. The sound playback worked well in Firefox on Android devices but didn’t run in Chrome or on iOS devices. On the iOS device, the application stopped running, with a Loading… message appearing each time.

PROJECT SPECIFIC LEARNINGS
  • Defining absolute values for acceleration and rotation sensor data
  • Random background color changes on each setup of the code
  • Executing multiple smartphone interactions like acceleration, rotation, touch, multitouch, device-shaken and pixel-area-defined touches

Meet the Sleepy Monsters by scanning the QR codes
qr-code_master


References

    1. “Naoto Fukasawa & Jane Fulton Suri On Smartphones As Social Cues, Soup As A Metaphor For Design, The Downside Of 3D Printing And More”. Core77, 2013, https://www.core77.com/posts/25052/Naoto-Fukasawa-n-Jane-Fulton-Suri-on-Smartphones-as-Social-Cues-Soup-as-a-Metaphor-for-Design-the-Downside-of-3D-Printing-and-More.
    2. McCarthy, Lauren et al. Getting Started With P5.Js., Maker Media, Inc., 2015, pp. 1-183 https://ebookcentral.proquest.com/lib/oculocad-ebooks/reader.action?docID=4333728
    3. Henry, Alan. “How To Play Google’s Secret Neko Atsume-Style Easter Egg In Android Nougat”. Lifehacker.Com, 2016, https://lifehacker.com/how-to-play-googles-secret-neko-atsume-style-easter-egg-1786123017
    4.  Pokémon Go. Niantic, Nintendo And The Pokémon Company, 2016.
    5. “Thoughtless Acts?”. Ideo.Com, 2005, https://www.ideo.com/post/thoughtless-acts
    6. Rosini, Niccolo et al. ““Personality-Friendly” Objects: A New Paradigm For Human-Machine Interaction”. IARIA, ACHI 2016 : The Ninth International Conference On Advances In Computer-Human Interactions, 2016.
    7. Wang, Tiffine, and Freddy Dopfel. “Personality Of Things – Techcrunch”. Techcrunch, 2019, https://techcrunch.com/2019/07/13/personality-of-things/
    8. Coding Train. 3.3: Events (Mousepressed, Keypressed) – Processing Tutorial. 2015,https://www.youtube.com/watch?v=UvSjtiW-RH8
    9. The Coding Train. 7.1: What Is An Array? – P5.Js Tutorial. 2015,  https://www.youtube.com/watch?v=VIQoUghHSxU
    10. The Coding Train. 2.3: JavaScript Objects – p5.js Tutorial. 2015, https://www.youtube.com/watch?v=-e5h4IGKZRY
    11. The Coding Train. 5.1: Function Basics – p5.js Tutorial. 2015, https://www.youtube.com/watch?v=wRHAitGzBrg
    12.  The Coding Train. p5.js Random Array Requests (whatToEat). 2016,https://www.youtube.com/watch?v=iCXBNKC6Wjw
    13. “Learn | P5.Js”. P5js.Org, 2019, https://p5js.org/learn/interactivity.html
    14. Puckett, Nick. “Phone Scale”. 2019. https://editor.p5js.org/npuckett/sketches/frf9F_BBA