Common Objects Used Uncommonly: Merging screen space and physical space with analog objects as inputs

By Shipra Balasubramani, David Oppenheim, Tamika Yamamoto

[Image: demo_3]

Our research and design focus was on extending the screen into physical space and vice versa. We wanted to incorporate the body into the user experience and work with unconventional forms of analog input. We followed an iterative approach to research, ideation and design. 

 

1. Related Works Research 

The following projects served as our main references:

“Cleaning Windows” on EyeToy

[Image: EyeToy “Cleaning Windows” gameplay]

Photo credit: David :3 on YouTube

 

The EyeToy (2003) is a color webcam accessory for the PlayStation 2. It uses computer vision and gesture recognition to let players interact with video games through motion. Cleaning Windows is one of a series of mini-games in EyeToy: Play in which players “clean” windows with their body as fast as possible to earn high scores.

It served as inspiration because, although its mechanics are simple, the interactive experience ignites a sense of play and delight that we wanted to tap into for our own project.

 

 

Draw Me Close

[Image: Draw Me Close]

Photo credit: NFB/National Theatre

 

Draw Me Close, by Jordan Tannahill, the NFB and the National Theatre, blurs the worlds of live performance, virtual reality and animation to create a vivid memoir about the relationship between a mother and her son. The individual immersive experience allows the audience member to take the part of the protagonist inside a live, animated world. This project served as a reference because of how it mapped analog objects such as a crayon/pen, a sketchbook and furniture into the virtual space, making the audience’s experience of the virtual world more tangible.

 

The Treachery of Sanctuary

[Image: The Treachery of Sanctuary]

Photo credit: Chris Milk

The Treachery of Sanctuary, by Chris Milk, ‘is the story of birth, death, and transfiguration that uses projections of the participants’ own bodies to unlock a new artistic language’. Digitally captured shadows of visitors are reprojected onto this large-scale interactive work. A shallow reflecting pool keeps the user at a distance from the screen, creating a dramatic effect and a seamless user experience. The experience was built with openFrameworks, a Microsoft Kinect and Unity3D.

The idea of particles forming the body and then disintegrating into smaller objects that re-form as birds inspired us to create a body/human form out of unconventional particles/objects and to track the movement of the user.

 

2. Conceptualization

 

[Image: docu-blackboard]

 

Our main research questions revolved around collapsing the distance between the digital space of the screen and the physical space inhabited by the user, and whether doing so might contribute to increased feelings of presence and affect. As part of that exploration we were interested in embodied interactions with analog objects connected to the screen. We wanted to prompt the user to associate their physical body and space with the digital, and to engender a sense of surprise and delight. We discussed various references that provided inspiration for our design approach.

We wondered about getting rid of expected forms of input, for example the mouse, the Xbox or PS5 controller, or even the Wii Remote and Joy-Con controllers that Nintendo popularized (by now, all of them extensions of our body that we no longer think about), and instead incorporating common objects that carry different associations. Our “Cleaning Windows” reference (and others) led us from bubbles and using the body as a controller to a ball as both input and extension of the body.

 

[Image: docu-blackboard-bubbles]

 

We worked on individual code sketches based on our research and ideation phase. These led into our design and development phase, described in the next section. 

[Images: individual code sketches]

 

Design Considerations & Technical Description

Our larger vision is a geometric installation with many surfaces that afford projection, filled with multiple everyday objects. We would play with the affordances and conventional associations of each object to create micro-narratives stemming from users’ own histories and relationships with those objects.

 


 

For this initial prototype we focused on an interaction with a single object, although we did work in a surprise second object to test the logic of our programmatic approach (a state machine) and of the object recognition library.

We focused on designing a space that would not require instructions, relying instead on the affordances of the physical design: it was important that the installation feel alive when the user first entered (the screen displayed the webcam image of the user alongside a grid of moving tennis balls) and that the analog object and its position afford interaction (we chose a tennis ball and positioned it on a lit pedestal).

We used p5.js in conjunction with ml5.js, relying on the PoseNet machine learning model for body tracking and on the COCO-SSD object detection model (trained on the COCO dataset) for recognizing the analog object.
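As a rough illustration of how these pieces fit together, the minimal p5.js sketch below wires a webcam to PoseNet and the COCO-SSD object detector. It is a simplified sketch rather than our prototype code, and it assumes the ml5.js 0.x API (`ml5.poseNet()` and `ml5.objectDetector('cocossd')`) that we worked with at the time.

```javascript
// Minimal sketch: webcam + PoseNet (body keypoints) + COCO-SSD (everyday-object detection).
// Assumes p5.js and ml5.js 0.x are loaded via script tags.
let video;
let objectDetector;
let poses = [];       // latest PoseNet results
let detections = [];  // latest COCO-SSD results

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // PoseNet emits a 'pose' event whenever new poses are estimated from the video.
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (results) => { poses = results; });

  // The COCO-SSD detector recognizes classes such as 'sports ball' and 'donut'.
  objectDetector = ml5.objectDetector('cocossd', detectLoop);
}

function detectLoop() {
  objectDetector.detect(video, (err, results) => {
    if (!err) detections = results;
    detectLoop(); // keep detecting, frame after frame
  });
}

function draw() {
  image(video, 0, 0, width, height); // show the live webcam image

  // Mark each confident body keypoint.
  noStroke();
  fill(255, 0, 0);
  for (const { pose } of poses) {
    for (const kp of pose.keypoints) {
      if (kp.score > 0.3) circle(kp.position.x, kp.position.y, 10);
    }
  }

  // Outline and label each detected object.
  for (const d of detections) {
    stroke(0, 255, 0);
    noFill();
    rect(d.x, d.y, d.width, d.height);
    noStroke();
    fill(0, 255, 0);
    text(`${d.label} ${nf(d.confidence, 1, 2)}`, d.x, d.y - 5);
  }
}
```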

We segmented our v1 prototype concept into features and created code sketches for each (object recognition and the state machine, the digital object using GIFs, and the GIFs’ interactions with the skeleton), then integrated our separate code bases for testing and debugging.

Final prototype v1 code: https://editor.p5js.org/tamikayamamoto/sketches/FKTt5dcfM

Fullscreen v1 code: https://editor.p5js.org/tamikayamamoto/full/FKTt5dcfM
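To give a sense of the state-machine logic, here is a simplified, self-contained sketch rather than the prototype code linked above. In the prototype the transitions are driven by COCO-SSD detections of the physical tennis ball; in this sketch the mouse button stands in for “the ball has been picked up” so it runs on its own.

```javascript
// Simplified state machine for the v1 interaction loop:
//   GRID        idle grid of tennis balls, physical ball resting on its pedestal
//   BALL_PERSON user holds the ball, digital balls form the user's body
//   FALLING     ball released, digital balls fall; grid returns after five seconds
const GRID = 'grid';
const BALL_PERSON = 'ballPerson';
const FALLING = 'falling';

let state = GRID;
let fallStartedAt = 0;

function setup() {
  createCanvas(640, 480);
  textAlign(CENTER, CENTER);
  textSize(20);
}

function draw() {
  background(20);
  const ballHeld = mouseIsPressed; // stand-in for the object-detection check

  // Transitions between states
  if (state === GRID && ballHeld) {
    state = BALL_PERSON;                      // grid deconstructs into the Ball Person
  } else if (state === BALL_PERSON && !ballHeld) {
    state = FALLING;                          // ball dropped or returned to the pedestal
    fallStartedAt = millis();
  } else if (state === FALLING && millis() - fallStartedAt > 5000) {
    state = GRID;                             // after five seconds the grid reappears
  }

  // Per-state rendering (placeholder text; the prototype draws GIFs and PoseNet keypoints)
  fill(255);
  if (state === GRID) text('grid of tennis balls', width / 2, height / 2);
  else if (state === BALL_PERSON) text('Ball Person mirrors the user', width / 2, height / 2);
  else text('tennis balls fall to the ground', width / 2, height / 2);
}
```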

 

3. Presentation & Documentation

 


 

Location: Room 510 at 205 Richmond St. West (OCAD U)

Installation dimensions: 3m x 3m 

Number of participants: Single user

Hardware: Laptop, short throw projector with speaker, external webcam, tripod, 2 x spotlights

Software: p5.js web editor, ml5.js (PoseNet and COCO-SSD object detection)

Screen: Projection on blackboard 

Installation Design: During our initial ideation process we were drawn toward creating a large-scale installation. We wanted to work with a 1:1 scale projection, breaking away from the conventional screens we use in our day-to-day lives. We used a short-throw projector to project onto a black screen (blackboard), which reduced the distance between the screen and the projector and allowed us to create a compact installation.

While testing our initial work-in-progress prototype we were able to visualize and test the placement of the projector, screen, webcam and laptop. In doing so we ran into the challenges of working with computer vision: inadequate lighting of the analog object created inaccuracies in the object detector’s recognition of it. With that in mind, we added two external light sources: the first lighting the object, and the second acting as a bounce light that provided ambient light for the installation and helped avoid sharp shadows cast by the user, which would have confused PoseNet.

The layout of the installation was finalized through trial and error. The intent was to enhance the experience by creating an immersive space for the user, leaving enough room for movement within the installation while keeping the focus on the interactions and the projected digital space.

 

[Image: installation setup]

 

User Experience Description

The following section outlines the user experience of our v1 prototype as demonstrated during the October 20th critique, starting with a user flow diagram and continuing through a series of annotated GIFs.

 

[Image: user flow diagram]

1. User enters the playspace and sees a grid of digital tennis balls on screen, a webcam capture that mirrors their movement, and a physical tennis ball displayed on a tripod in front of them.

[GIF: ball-enter]

2. User picks up the tennis ball. On screen, the ball grid deconstructs and forms the body of the user; the background fades to black and a parallax background appears. (A simplified code sketch of this transition appears after this list.)

[GIF: docu_display]

3. As the user moves, the Ball Person on screen mirrors their movement. A parallax background that moves with the user suggests a 3D environment that spans both the digital screen and the physical playspace.

[GIF: docu_ball-pickup]

 

[GIF: docu_ball-person]

4. User releases the physical tennis ball, either dropping it to the ground or returning it to the pedestal (tripod). On screen, the Ball Person deconstructs and tennis balls fall to the ground.

[GIF: docu_ball-drop]

5. End: after five seconds, the ball grid appears on screen once more.

[GIF: docu_ball-grid-up]

*We partially prototyped a second object (donut) but chose not to include it as part of the overall user experience and demoed it separately instead.

[GIF: docu_donut-person]
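Below is the simplified code sketch of the grid-to-body transition referenced in step 2. It is an illustration only: each digital ball eases from its grid slot toward a target point, random points stand in for the PoseNet keypoints the prototype uses, and a mouse click stands in for picking up or releasing the physical ball.

```javascript
// Illustration of the "grid deconstructs and forms the body" transition (step 2).
// In the prototype the targets are updated every frame from PoseNet keypoints;
// here they are fixed random points so the sketch runs on its own.
let balls = [];
let forming = false; // toggled in place of picking up / releasing the physical ball

function setup() {
  createCanvas(640, 480);
  noStroke();
  // Build the idle grid of digital tennis balls.
  for (let y = 40; y < height; y += 60) {
    for (let x = 40; x < width; x += 60) {
      balls.push({
        home: createVector(x, y),                            // grid slot
        target: createVector(random(width), random(height)), // stand-in for a keypoint
        pos: createVector(x, y),
      });
    }
  }
}

function draw() {
  background(20);
  fill(180, 220, 60); // tennis-ball green
  for (const b of balls) {
    const goal = forming ? b.target : b.home; // keypoint when forming, grid slot otherwise
    b.pos.lerp(goal, 0.08);                   // ease toward the goal
    circle(b.pos.x, b.pos.y, 24);
  }
}

function mousePressed() {
  forming = !forming;
}
```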

User Experience demo video (with sound):

https://youtube.com/shorts/zSFykZKXWsk?feature=share

User testing recordings (with sound): 

https://youtube.com/shorts/dw8qskS8Qr0?feature=share

4. Feedback and Next Steps

Feedback was obtained from the critique and not as part of formal user testing sessions. 

The critique began with three volunteers who tested the installation before receiving any context from us as designers. We observed their sessions and took note of their body language and utterances. During the discussion that followed our verbal presentation, we asked for the testers’ observations. Additionally, we received feedback from individuals who had observed the three testers. Finally, a few more users tried the installation toward the end of the critique.

Our main takeaways from the session were:

  • Overall response to the experience was positive; users didn’t require instructions to move through the intended experience (pick up the tennis ball, play with it, and move their body); 
  • Users seemed to enjoy the key moment of picking up the ball and watching themselves transform into a ‘tennis ball person’, moving around in that form and then breaking apart (by dropping the ball or placing it back on the pedestal); 
  • One user required prompting to pick up the ball; 
  • There was an acceptable moment of tension when users didn’t quite know what they were allowed (or supposed) to do with the tennis ball; however, all users quickly began to bounce it, throw it, or put it back on the pedestal; 
  • Users did seem to want more complex behavior from the system, for example, to see one of the digital tennis balls follow their analog tennis ball when they threw it in the air or against the wall (a sketch of one possible approach follows this list); 
  • Our demonstration of a second object (donut) seemed to be well received, as was the larger vision of having multiple everyday objects available to play with.
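On the point about the digital ball following the physical one, one possible approach for a next iteration (a hedged sketch only, not something we built for v1) would be to use the center of the detected ‘sports ball’ bounding box as a moving target that a digital ball eases toward. It again assumes the ml5.js 0.x object detector API.

```javascript
// Hypothetical next-step sketch: one digital ball chases the detected physical ball.
// Assumes ml5.js 0.x, where ml5.objectDetector('cocossd') can report a 'sports ball'.
let video;
let objectDetector;
let ballTarget = null; // latest center of the detected physical ball
let digitalBall;       // position of the on-screen ball that follows it

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  digitalBall = createVector(width / 2, height / 2);
  objectDetector = ml5.objectDetector('cocossd', detectLoop);
}

function detectLoop() {
  objectDetector.detect(video, (err, results) => {
    if (!err) {
      const ball = results.find((r) => r.label === 'sports ball');
      if (ball) {
        ballTarget = createVector(ball.x + ball.width / 2, ball.y + ball.height / 2);
      }
    }
    detectLoop(); // keep detecting
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (ballTarget) digitalBall.lerp(ballTarget, 0.15); // ease toward the physical ball
  noStroke();
  fill(180, 220, 60);
  circle(digitalBall.x, digitalBall.y, 30);
}
```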

Should we decide to further develop the project, we would begin by conducting formal user testing of this v1 prototype and then dive back into further research and ideation as part of a larger iterative design and build process.

 

Bibliography

“EyeToy.” Wikipedia, Wikimedia Foundation, 28 May 2022, https://en.wikipedia.org/wiki/EyeToy.

“Soarman Cleaning Windows on EyeToy.” YouTube, uploaded by David :3, 09 Mar. 2014, https://youtu.be/NZs1WfFVAPs.

Milk, Chris. “The Treachery of Sanctuary.” 2012, http://milk.co/treachery.html.

Tannahill, Jordan, the NFB, and the National Theatre. “Draw Me Close.” 2017/2021, press kit, https://mediaspace.nfb.ca/epk/draw-me-close/.