Author Archives: Tamika Yamamoto

Sketch 4 – Tamika Yamamoto


As part of an exploration for our project, this sketch uses a pulley-and-weight system driven by a continuous-rotation servo to hide and reveal an object. The spool had to turn several rotations to lift and release the weights effectively, so a continuous servo was needed.

The action is triggered by light-dependent resistor (LDR) sensors to give the impression that the object is hiding from human presence.
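The trigger logic can be sketched roughly as follows. This is an illustrative sketch in plain JavaScript, not the actual microcontroller code; the threshold, the servo command values, and the assumption that lower LDR readings indicate a person blocking the light are all ours.

```javascript
// Hypothetical sketch of the LDR-to-servo trigger logic.
// Assumption: a lower light reading means a person is present.

const THRESHOLD = 300; // hypothetical light level marking "presence"
const LIFT = 180;      // continuous servo: spin one way to wind the spool
const RELEASE = 0;     // spin the other way to unwind and reveal
const STOP = 90;       // neutral: no rotation

// Decide what the servo should do given the current light reading
// and whether the object is currently hidden.
function servoCommand(lightLevel, isHidden) {
  const presenceDetected = lightLevel < THRESHOLD;
  if (presenceDetected && !isHidden) return LIFT;    // hide the object
  if (!presenceDetected && isHidden) return RELEASE; // reveal it again
  return STOP;                                       // hold current state
}
```

Once the spool has turned enough rotations in either direction, the loop would flip `isHidden` and the servo returns to `STOP`.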


Link to code and simulation:

Common Objects Used Uncommonly: Merging screen space and physical space with analog objects as inputs

By Shipra Balasubramani, David Oppenheim, Tamika Yamamoto


Our research and design focus was on extending the screen into physical space and vice versa. We wanted to incorporate the body into the user experience and work with unconventional forms of analog input. We followed an iterative approach to research, ideation and design. 


1. Related Works Research 

The following projects served as our main references:

“Cleaning Windows” on EyeToy


Photo credit: David :3 on YouTube


The EyeToy (2003) is a color webcam accessory for the PlayStation 2. It uses computer vision and gesture recognition to let players interact with video games through motion. Cleaning Windows is one of a series of mini-games in EyeToy: Play, in which players “clean” windows with their bodies as fast as possible to earn high scores. 

It served as inspiration for our project because although the mechanics were simple, the interactive experience ignited a sense of play and delight that we wanted to tap into for our project.



Draw Me Close


Photo credit: NFB/National Theatre


Draw Me Close, by Jordan Tannahill, the NFB and the National Theatre, blurs the worlds of live performance, virtual reality and animation to create a vivid memoir about the relationship between a mother and her son. The individual immersive experience allows the audience member to take the part of the protagonist inside a live, animated world. This project served as a reference because of how it mapped analog objects such as a crayon/pen, sketchbook and furniture into the virtual space, making the virtual experience more tangible for the audience. 


The Treachery of Sanctuary


Photo credit: Chris Milk

The Treachery of Sanctuary, by Chris Milk, ‘is the story of birth, death, and transfiguration that uses projections of the participants’ own bodies to unlock a new artistic language’. Digitally captured shadows of the visitor are reprojected onto this large-scale interactive installation. A shallow reflecting pool keeps the user at a distance from the screen, creating a dramatic effect and a seamless user experience. The technologies used to create this experience are openFrameworks, Microsoft Kinect and Unity3D. 

The idea of particles forming the body and then disintegrating into smaller objects that reform as birds inspired us to create a body/human form out of unconventional particles/objects and to track the movement of the user. 


2. Conceptualization




Our main research questions revolve around the idea of collapsing the distance between the digital space of the screen and the physical space inhabited by the user and whether that might contribute to increased feelings of presence and affect. As part of that exploration we were interested in embodied interactions with analog objects that connected to the screen. We wanted to prompt the user to associate their physical body and space with the digital and engender a sense of surprise and delight. We discussed various references that provided inspiration for our design approach.

We wondered about getting rid of expected forms of input: the mouse, the Xbox or PS5 controller, or even the Joy-Con controllers that Nintendo popularized with the Wii. By now, all of them are extensions of our body that we no longer think about. We wanted instead to incorporate common objects that carry different associations. Our “Cleaning Windows” reference (and others) led us from bubbles, to using the body as a controller, to a ball as both input and extension of the body. 




We worked on individual code sketches based on our research and ideation phase. These led into our design and development phase, described in the next section. 



Design Considerations & Technical Description

Our larger vision would be a geometric installation with lots of affordances for projecting onto and filled with multiple everyday objects. We would play with the affordances and conventional associations of each object to create micro narratives that stemmed from the user’s own histories and relationships to those objects.




For this initial prototype we focused on an interaction with one object only, although we did work in a surprise second object to test the logic of our programmatic approach (state machines) and object recognition library.

We focused on designing a space that would not require instructions, relying instead on the affordances of the physical design: it was important that the installation feel alive when the user first entered (the screen’s camera displayed the user’s image and a grid of moving tennis balls) and that the analog object and its position afford interaction (we chose a tennis ball and positioned it on a lit pedestal). 

We used p5.js in conjunction with ml5.js, the PoseNet machine-learning model, and the COCO-SSD object detector. 
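The skeleton-driven part of the piece can be sketched as follows: each confident PoseNet keypoint becomes one tennis ball of the “Ball Person.” This is a minimal illustration in plain JavaScript, not our actual sketch code; it assumes the keypoint shape ml5.js PoseNet returns (`{ part, score, position: { x, y } }`), and the confidence threshold is a value of our choosing.

```javascript
// Minimal sketch: turn a PoseNet pose into draw positions for tennis balls.
// Keypoints below a confidence threshold are skipped so jittery,
// half-detected joints don't spawn stray balls.

const MIN_CONFIDENCE = 0.5; // hypothetical cutoff

function ballPositions(keypoints, minConfidence = MIN_CONFIDENCE) {
  return keypoints
    .filter((kp) => kp.score >= minConfidence)
    .map((kp) => ({ part: kp.part, x: kp.position.x, y: kp.position.y }));
}
```

In the p5.js draw loop, each returned position would then become one tennis-ball image or ellipse drawn at `(x, y)`.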

We segmented our v1 prototype concept into features and created code sketches for each (object recognition and state machine, digital object using GIFs, and the GIFs’ interactions with the skeleton), then integrated our separate codebases for testing and debugging.  
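The object-recognition feature that feeds the state machine can be sketched like this. It is an illustrative version, not our production code; it assumes the detection shape ml5.js’s COCO-SSD detector returns (`{ label, confidence, x, y, width, height }`), and the pedestal region is a hand-tuned rectangle of our invention.

```javascript
// Sketch of the pickup test: the ball counts as "on the pedestal" when a
// confident "sports ball" detection overlaps a fixed region of the frame.

const PEDESTAL = { x: 260, y: 300, width: 120, height: 120 }; // hypothetical

// Axis-aligned rectangle overlap test.
function overlaps(a, b) {
  return (
    a.x < b.x + b.width && b.x < a.x + a.width &&
    a.y < b.y + b.height && b.y < a.y + a.height
  );
}

function ballOnPedestal(detections, minConfidence = 0.6) {
  return detections.some(
    (d) =>
      d.label === "sports ball" &&
      d.confidence >= minConfidence &&
      overlaps(d, PEDESTAL)
  );
}
```

The state machine then transitions when `ballOnPedestal` flips from true to false (pickup) or back (return).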

Final prototype v1 code:

Fullscreen v1 code:


3. Presentation & Documentation




Location: Room 510 at 205 Richmond St. West (OCAD U)

Installation dimensions: 3m x 3m 

Number of participants: Single user

Hardware: Laptop, short throw projector with speaker, external webcam, tripod, 2 x spotlights

Software: p5.js web editor, ml5.js (PoseNet and COCO-SSD)  

Screen: Projection on blackboard 

Installation Design: During our initial ideation process we were drawn toward creating a large-scale installation. We wanted to work with a 1:1-scale projection, breaking away from the conventional screens of our day-to-day lives. We used a short-throw projector to project onto a black screen (blackboard), which reduced the distance between the screen and the projector and allowed us to create a compact installation. 

While testing our initial work-in-progress prototype we were able to visualize and test the placement of the projector, screen, webcam and laptop. In doing so we discovered the challenges of working with computer vision: inadequate lighting of the analog object caused inaccuracies in COCO-SSD’s detection of the object. With that in mind, we added two external light sources: the first lighting the object, and the second acting as a bounce light that provided ambient light for the installation and helped avoid the sharp shadows cast by the user, which would have confused PoseNet.

The layout of the installation was finalized through trial and error. The intent was to enhance the experience by creating an immersive space for the user, leaving enough room for movement within the installation while keeping the focus on the interactions and the projected digital space. 




User Experience Description

The following section outlines the user experience of the v1 prototype we demonstrated during the October 20th critique, starting with a user flow diagram and then a series of annotated GIFs. 



1. User enters the playspace and sees a grid of digital tennis balls on screen, a webcam capture that mirrors their movement, and a physical tennis ball displayed on a tripod in front of them.


2. User picks up the tennis ball. On screen, the ball grid deconstructs and forms the body of the user. Background fades to black and a parallaxing background appears.


3. As the User moves, the Ball Person on screen mirrors their movement. A parallax background that moves with the user suggests a sense of moving in a 3D environment that includes both the digital screen and the physical playspace.




4. User releases the physical tennis ball, either dropping it to the ground or returning it to the pedestal (tripod). On screen, the Ball Person deconstructs and tennis balls fall to the ground.


5. End: after five seconds, the ball grid appears on screen once more.
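The five steps above can be sketched as a simple state machine. The state names and the class below are ours, for illustration only; `update()` would be called once per frame with whether the physical ball is currently held (from object detection) and the current time in milliseconds.

```javascript
// The user flow as a state machine:
// GRID -> (pickup) -> BALL_PERSON -> (release) -> DECONSTRUCT -> (5 s) -> GRID

const RESET_DELAY_MS = 5000; // step 5: grid returns after five seconds

class InstallationState {
  constructor() {
    this.state = "GRID"; // step 1: grid of digital tennis balls on screen
    this.droppedAt = null;
  }

  update(ballHeld, nowMs) {
    switch (this.state) {
      case "GRID":
        if (ballHeld) this.state = "BALL_PERSON"; // steps 2-3: grid forms the body
        break;
      case "BALL_PERSON":
        if (!ballHeld) {                          // step 4: ball released
          this.state = "DECONSTRUCT";
          this.droppedAt = nowMs;
        }
        break;
      case "DECONSTRUCT":
        if (nowMs - this.droppedAt >= RESET_DELAY_MS) {
          this.state = "GRID";                    // step 5: grid reappears
        }
        break;
    }
    return this.state;
  }
}
```

Keeping the transitions in one place like this is what let us bolt on the surprise second object without rewriting the rendering code.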


*We partially prototyped a second object (donut) but chose not to include it as part of the overall user experience and demoed it separately instead.


User Experience demo video (with sound):

User testing recordings (with sound):

4. Feedback and Next Steps

Feedback was obtained from the critique and not as part of formal user testing sessions. 

The critique began with three volunteers who tested the installation before receiving any context from us as designers. We observed their sessions and took note of their body language and utterances. During the discussion that followed our verbal presentation, we asked for the testers’ observations. Additionally, we received feedback from individuals who had observed the three testers. Finally, a few more users tried the installation toward the end of the critique. 

Our main takeaways from the session were:

  • Overall response to the experience was positive; users didn’t require instructions to move through the intended experience (pick up the tennis ball and play around with it and move their body); 
  • Users seemed to enjoy the key moment of picking up the ball and watching themselves transform into a ‘tennis ball person’, moving around as that form of tennis balls and then breaking apart (by dropping or placing the ball back on the pedestal); 
  • One user required prompting to pick up the ball; 
  • There was an acceptable moment of tension when users didn’t quite know what they were allowed (or supposed) to do with the tennis ball; however, all users quickly began to bounce it, throw it, or put it back on the pedestal; 
  • Users did seem to want more complex behavior from the system, for example, to see one of the digital tennis balls follow their analog tennis ball when they threw it in the air or against the wall; 
  • Our demonstration of a second object (donut) seemed to be well-received, as was the larger vision of having multiple everyday objects available to play with.

Should we decide to further develop the project, we would begin by conducting formal user testing of this v1 prototype and then dive back into further research and ideation as part of a larger iterative design and build process.



References

“EyeToy.” Wikipedia, Wikimedia Foundation, 28 May 2022,

“Soarman Cleaning Windows on EyeToy.” YouTube, uploaded by David :3, 09 Mar. 2014,

“The Treachery of Sanctuary” (2012), website:

“Draw Me Close” (2017, 2021), press kit:






Sketch 2 – Painting – Tamika Yamamoto




For this sketch I made a simple painting program that lets you change the size and color of your paintbrush. A circle in the bottom left of the canvas previews the current brush color and size. You can clear the canvas by pressing “C” and save/download your artwork by pressing the down arrow.
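The keyboard controls can be sketched as a small key-to-action map. This is an illustrative version in plain JavaScript; the function and the `brush` object are named by us and are not the actual sketch’s code.

```javascript
// Hypothetical brush state; in p5.js the bottom-left preview circle would
// be redrawn from it each frame, e.g. fill(brush.color); ellipse(...).
const brush = { size: 20, color: "#ff0000" };

// Map a pressed key to the action described above.
function keyAction(key) {
  if (key === "c" || key === "C") return "clear"; // wipe the canvas
  if (key === "ArrowDown") return "save";         // download the artwork
  return "none";
}
```

In p5.js this would live inside `keyPressed()`, with `"clear"` calling `background()` and `"save"` calling `saveCanvas()`.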


Sketch 1 – NoseBoop – Tamika Yamamoto


For this sketch, I wanted to explore two things. First, a “Start Screen” that, when triggered, initializes the interactive experience. Second, using an array of GIFs, how to isolate objects in the array so that events happen to one moving object at a time: in this case, booping bubbles one by one with your nose.
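The one-at-a-time isolation can be sketched like this: loop over the bubble array and pop only the first unpopped bubble the nose currently touches. This is an illustrative sketch, not the original code; the hit radius and object shapes are assumptions.

```javascript
// Pop at most one bubble per frame: the first unpopped bubble within
// BOOP_RADIUS of the nose position.

const BOOP_RADIUS = 40; // hypothetical hit distance in pixels

function boopOne(bubbles, nose) {
  for (const b of bubbles) {
    if (b.popped) continue; // already booped; leave it alone
    const d = Math.hypot(b.x - nose.x, b.y - nose.y);
    if (d < BOOP_RADIUS) {
      b.popped = true;      // only this bubble reacts this frame
      return b;
    }
  }
  return null;              // nothing within reach
}
```

Because the loop returns on the first hit, overlapping bubbles pop one by one across successive frames rather than all at once.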