Category Archives: Screen Space

An Interactive Experience on Bioluminescence by

Sunidhi Naik

Mona Safari

Wentian Zhu


  1. Scene 1:
  2. Scene 2:
  3. Scene 3:




Links to the Scientific Research articles:

  1. The Facts of Deep-Sea Bioluminescence
  2. Why Are Fireflies Disappearing?
  3. 8 Brilliant Bioluminescent Animals
  4. Bioluminescence vs Biofluorescence

Links to the Artistic Inspirations:

  1. Jellyfish Village by Andreas Rocha
  2. Swamp Dwellings by Tarmo Juhola
  3. Birch Forest by Marcel Lenoir
  4. Bottle Bass by Leo Vann
  5. Magic Fingertips by Roelof Knol

Links to the related Projects:
Digital Graffiti Wall – Stencils Feature by Tangible Interaction

Nightrise by Moment Factory









  1. The Ocean Portal Team. “Bioluminescence.” Smithsonian Ocean, 18 Dec. 2018.

  2. Vartan, S. “Why Are Fireflies Disappearing?” Treehugger, 2022. Accessed 24 Oct. 2022.

  3. Glow Effects: “Easiest Glow Effect in p5.Js (2 Lines of Code).” YouTube, 4 Dec. 2021.

  4. PoseNet: Shiffman, Daniel. “Ml5.Js Pose Estimation with PoseNet.” YouTube, 9 Jan. 2020.

  5. Badore, Margaret. “8 Brilliant Bioluminescent Animals.” Treehugger, 4 Apr. 2022.

  6. Science World Report. “Bioluminescence vs Biofluorescence: The Science of Glowing Seashores, Fluorescent Frogs, Sharks, Turtles and Parrots.” Science World Report, 20 Mar. 2017.

  7. Hand Tracking: “P5.Js Web Editor.” Accessed 25 Oct. 2022.

  8. “Instagram.” Instagram. Accessed 25 Oct. 2022.

  9. Wigington, Patti. “The Magic & Folklore of Fireflies.” Learn Religions, 3 July 2014.

  10. Hakai Magazine. “The Secret History of Bioluminescence.” Hakai Magazine. Accessed 25 Oct. 2022.


Virtuālis Puppet

Synopsis: Creation of a puppet storytelling environment in screen spaces through the use of hand tracking and hand motion controls for staging short stories for an audience.

Cast: As Venus: Mufaro Mukoki; As Jupiter: Ricardo ‘Ricky’ Quiza Suárez

ACT 1 – OVERTURE; The referents

Puppet Narrator. Article: Hand gesture-based interactive puppetry system to assist storytelling for children.






Figure 1: Both images illustrate different Puppet Narrator features. (Left) The implementation of the system architecture, mainly composed of three parts: input, motion control, and output. (Right) Basic gesture control: an example of using a combination of gestures to steer and manipulate the puppet. (a) Stretch. (b) Grip.

Authors: Hui Liang is a Marie Curie senior research fellow at the National Centre for Computer Animation (NCCA), Bournemouth University, UK, and an associate professor at the Communication University of China; Dr. Ismail K. Kazmi is a Senior Lecturer in Games Programming/Development at Teesside University, where he teaches a wide range of courses in Computer Science and Game Development; Peifeng Jiao is a lecturer at the basic school of the Southern Medical University of China; Jian J. Zhang is Professor of Computer Graphics at the NCCA, where he leads the National Research Centre; and Jian Chang is a Professor and active scientist in computer animation with over 15 years of research experience at the NCCA.

This article was a pivotal source of information and referential research for our project. With this system, the authors intended to develop narrative ability in a virtual story world. Depth motion sensing and hand-gesture control technology were used to implement user-friendly interaction. A key driver for developing the Puppet Narrator was ‘how digital techniques have been used to assist narrative and storytelling, especially in many pedagogical practices; with the rapid development of HCI techniques, saturated with digital media in their daily lives, young children, demands more interactive learning methods and meaningful immersive learning experiences’ (Liang, Hui, et al 517).

Its abstract proposes a hand gesture-based puppetry storytelling system for young children: players intuitively use hand gestures to manipulate virtual puppets to perform a story and interact with different items in the virtual environment to assist narration. Analyzing the data collected in this article helped us scope and give form to our screen space exercise. This data includes how interaction through the system architecture happens (input, motion control, output), what kinds of hand gestures can be used, what skills are trained in users (narrative ability, cognitive skills, and motor coordination), what technologies were used in its realization, and what was accomplished through the practical application of a prototype of the Puppet Narrator system with a selected target audience.

Handsfree.js. Library: Online library with multiple projects about its potentiality.







Figure 2: Both images illustrate different Handsfree.js projects. (Left) Laser pointers but with your finger (Right) Hand pointers and scrolling text. More examples at

Author: Oz Ramos, a.k.a. Metamoar / Midifungi, a generative artist exploring compositions with p5.js, an fxhash artist, and someone who has been exploring gesture-driven creative coding for about 4 years.

Handsfree.js started back in 2018, while the author was homeless, as a way to help a friend at the shelter they inhabited recover from a stroke by navigating the web with face gestures. Over time, through the support and encouragement of many people, the project grew to become much more than that, expanding into a full-grown library capable of both face and hand tracking. For hands, it differs from ML5’s handpose library in that it can detect multiple hands (up to four), a key aspect for integration into our project.

As we began code-sketching prototypes of our project, we encountered one issue: the libraries we were working with only detected one hand. The Handsfree.js library became key in overcoming that limitation, but it offered more than just that. The online repositories of Oz Ramos, the author, documenting his and others’ explorative uses of Handsfree.js, were invaluable; both inspiration- and code-wise, these examples helped us better understand how hands can interact with and influence the screen space. While we manually explored ways to adapt the library to our screen space, we also incorporated plugins, such as pre-established hand gestures (pinching fingers), to trigger functions within the screen space.
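The pinch gesture mentioned above can be sketched as a simple distance check between two hand landmarks. The helper below is our own simplification for illustration, not the actual Handsfree.js plugin API; it assumes normalized `{x, y}` landmarks in the MediaPipe Hands layout (which Handsfree.js wraps), where index 4 is the thumb tip and index 8 is the index fingertip.

```javascript
// Hypothetical pinch check: a pinch registers when the thumb tip and the
// index fingertip come within a small normalized distance of each other.
const THUMB_TIP = 4;  // MediaPipe hand-landmark indices
const INDEX_TIP = 8;

function isPinching(landmarks, threshold = 0.05) {
  const a = landmarks[THUMB_TIP];
  const b = landmarks[INDEX_TIP];
  const dist = Math.hypot(a.x - b.x, a.y - b.y);
  return dist < threshold;
}

// Example: a hand whose thumb and index tips are almost touching
const hand = [];
for (let i = 0; i < 21; i++) hand.push({ x: 0.5, y: 0.5 });
hand[THUMB_TIP] = { x: 0.40, y: 0.40 };
hand[INDEX_TIP] = { x: 0.41, y: 0.41 };
console.log(isPinching(hand)); // → true
```

The threshold value would need tuning per camera and user; Handsfree.js itself ships a pinch plugin that handles this more robustly.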

ACT 1 – ARIA; The idea

Marionettes have been a part of many world cultures for thousands of years and have played a unique role in storytelling. Although puppetry was never reserved solely for the entertainment of children, our project focused on using it to assist storytelling for children. Puppetry continues to be a respected art form; however, the new generation, born into technology, is not much fascinated by the physical instruments of play around them. They have spent the better part of their lives glued to screens. It is important for educators to devise more interactive learning methods and meaningful immersive learning experiences (Liang, Hui, et al 517).

 In our project, we attempted to answer the following questions:

  • How can human-computer interaction assist in learning and communication?
  • What are new ways of engaging in education to facilitate learning?
  • How does HCI assist/improve engagement?


Our project uses Handsfree.js, which is easy to access through a browser. Handsfree.js uses hand tracking to position the cursor on the screen and incorporates hand gestures, creating an intuitive program. As a result, the user is able to use simple hand gestures to control the marionette puppet on the screen. To control the puppet, fingers are mapped to specific strings of the puppet. This results in a puppet that can move anywhere on the screen and whose arms make puppet-like movements. Other functions of the marionette puppet/s, such as mouth movements, are triggered by other hand gestures, like pinching the fingers.
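The finger-to-string mapping can be sketched roughly as follows. This is our own simplification for illustration (not the exact project code): each tracked fingertip drives one string anchor, and the corresponding puppet joint hangs a fixed string length below it.

```javascript
// Minimal sketch of fingertip-to-puppet-string mapping. Coordinates are in
// screen pixels; STRING_LENGTH is the assumed distance between a fingertip
// (the string anchor) and the puppet joint it controls.
const STRING_LENGTH = 120;

// fingertips: { [jointName]: {x, y} }
function puppetJoints(fingertips) {
  const joints = {};
  for (const [name, tip] of Object.entries(fingertips)) {
    // Each joint hangs straight down from its fingertip
    joints[name] = { x: tip.x, y: tip.y + STRING_LENGTH };
  }
  return joints;
}

const joints = puppetJoints({
  head:     { x: 300, y: 100 },
  leftArm:  { x: 250, y: 140 },
  rightArm: { x: 350, y: 140 },
});
console.log(joints.head); // → { x: 300, y: 220 }
```

In the real sketch the joints would also be smoothed and constrained (arms cannot rise above the head, for instance), but the core idea is this direct fingertip-to-joint mapping.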


Figure 3: A scene is composed of 1 to 2 puppets and interactive stage props. Hands are used to control puppet movement via fingertips, and gestures trigger different interactions.

The project provides a range of ready-made scenes, characters and props for educators to choose from to assist them in telling their chosen stories. If more than one scene is chosen, pressing the arrow keys on the keyboard will help them navigate from one scene to the next.
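The arrow-key navigation described above amounts to a small scene state machine. In p5.js this logic would live in `keyPressed()`; the sketch below (our simplification) pulls it out as a plain function so the clamping behaviour stands alone.

```javascript
// Scene switching: right arrow advances, left arrow goes back,
// clamped to the first and last scene.
const LEFT_ARROW = 37;   // p5.js key codes
const RIGHT_ARROW = 39;

function nextScene(current, total, keyCode) {
  if (keyCode === RIGHT_ARROW) return Math.min(current + 1, total - 1);
  if (keyCode === LEFT_ARROW) return Math.max(current - 1, 0);
  return current; // any other key leaves the scene unchanged
}

let scene = 0;
scene = nextScene(scene, 3, RIGHT_ARROW); // scene 1
scene = nextScene(scene, 3, RIGHT_ARROW); // scene 2
scene = nextScene(scene, 3, RIGHT_ARROW); // stays at 2 (last scene)
console.log(scene); // → 2
```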


Figure 4: The storytelling unfolds through multiple scenes, controlled by the arrow keys. For implementation, we chose Aesop’s fable, The Cat-Maiden.


Figure 5: The interaction in the system occurs in a dialectic way; the puppeteer(s) controls action through screen space, reflected upon another screen space. Props can be used to expand upon the space where the action happens, mimicking puppeteer stands.

The final product is presented on a television-sized screen, and physical props, such as curtains and boxes, can be added around the screen space to emulate a traditional puppet show. An essential requirement is that the program be user-friendly and require minimal effort from the facilitator, which means the facilitators can be the children themselves, with supervised assistance. This makes it easy to adapt to a variety of different stories. The facilitator still needs to stand behind the main screen, much like in a typical puppet show; however, they have the advantage of a smaller screen to coordinate their story, which means they also get to see their narrative as it is experienced by others.


Figure 6: Final version of the coded system while being implemented.

ACT 3 – FINALE; The documentation











Figure 7: Body of work showing the evolution of the project.

For our presentation, we made use of a 70-inch TV connected to a laptop via HDMI. We stood and presented our puppet show in front of the laptop, and the output was shown on both the laptop and the television. As accessibility was an important consideration, we situated the puppet show in a small room to amplify the audio as much as possible without requiring additional hardware. The presentation was accessed through the Chrome browser. The puppet show was presented to an audience who sat and/or stood in front of the large-screen television. In preparation, we stood in front of the laptop to test the best distance to stand from it in order to move the puppets more easily (2.5 inches) and to prevent the software from picking up additional hands that would disrupt the performance.

Upon reflection, we ought to have created a physical structure around the large-screen TV to emulate the stage of a puppet show and create a more immersive experience. Our main challenges and limitations stemmed from our own limited experience with coding. We experimented with various code sketches to simulate hand movements and map our puppet strings to human fingers via the web camera.


Figure 8: One of the usability tests of different ways of applying hand detection on the screen, and using it as means of moving objects on the screen space.


Figure 9: Mock-up practices and usability test of the system. The interaction happens on a screen that the puppeteers use to conduct the show, while the puppet show is output to an external, bigger screen.

You can watch our presentation here:

The final code utilized in the presentation (Edit):

The final code utilized in the presentation (FullScreen):


Some notable sketches we created can be found on the following links:


To view more of our documentation visit this link.



Aesop. “The Cat-Maiden.” Aesop’s Fables. Lit2Go Edition. 1867. Web. Accessed 24 Oct. 2022.

Canada Museum of History. Qualities of a Puppet, Canada Museum of History, 2022.

Fling, Helen. Marionettes: How to Make and Work Them. Dover Publications, 1973.

Flower, Cedric, and Alan Jon Fortney. Puppets: Methods and Materials. Davis Publications, 1993.

Liang, Hui, et al. “Hand Gesture-Based Interactive Puppetry System to Assist Storytelling for Children.” The Visual Computer, vol. 33, no. 4, 2016, pp. 517–531.

Mediapipe Solutions. “MediaPipe Hands.” Mediapipe, 2022,

Ramos, Oz. “Handsfree.js Intro (Spring 2019).” Vimeo, Oz Ramos, 24 Oct. 2022,

Ramos, Oz. “Hands-Free Libraries, Tools, and Starter Kits.” Handsfree.js, 17 May 2022,

Ramos, Oz. “MIDIBlocks/Handsfree: Quickly Integrate Face, Hand, and/or Pose Tracking to Your Frontend Projects in a Snap ✨👌.” GitHub, MidiBlocks, 2022,

Victory Infotech. “Indian Wedding Bride In Choli And Groom Kurta Pajama With Koti Couple Traditional Outfits Doodle Art PNG Picture.” Pngtree, 11 July 2022.

Paddle Player – Zaheen, Anusha, Prathistha


Artwork 1: A simple quick sketch made by Andre Burnier. The concept revolves around using the mouse click to make the ball interact with the various lines.

Name of the Artist – André Burnier

He is a graphic designer and coder based in Brazil. He completed his master’s in graphic design at AKV | St. Joost in Breda, Netherlands.

André focuses on actively researching the interaction between graphic design and programming. His expertise includes generative design, custom-made designer bots, as well as more traditional graphic design – logos, visual identities, and editorial work.

We were inspired by the simple graphics and movement of the ball to make a fun interaction with it, maybe something similar to an interaction-based game, because we wanted the user experience to be a very fun one. 

In order to manipulate the overall experience, we studied the collisions of the balls. 

This also led us to search for other interactive games with balls and different objects and we came across the brick breaker game. 


Searching further for interactive games, we came across this large-scale game created by the Moment Factory. 

Artwork 2: HOTSPOT – An Interactive Sprint With Moving Targets

In the game, players must catch moving hotspots. The gameplay focuses on agility and speed. Physical obstacles are optional.

1 runner at a time – 3 minutes

This game is 100% customizable and adapts to all types of environments.

Moment Factory is a multimedia studio with a full range of production expertise under one roof. Their team combines specializations in video, lighting, architecture, sound, and special effects to create remarkable experiences. With its headquarters based in Montreal, the studio also has offices in Paris, New York, Tokyo, and Singapore. They have created more than 500 unique shows and destinations. 



This inspired us to combine the world of a virtual game with physical space and also to explore real-time tracking and projection mapping. Virtual games can be integrated anywhere and tailored to any audience, making them ideal for public display. 

Incorporating virtual gaming with physical activity – it is no less than a workout – can also be used to relieve stress or to warm up at work.

As games are often very fun, we were encouraged to create something which is simple but also engaging and interactive.


Breakout was among the first video games to be created, back in 1976, by one of the oldest video game developers, Atari. It came out as a modified version of Atari’s first game, Pong, and was referred to as “Pong turned on its side”. A concept created by Nolan Bushnell and Steve Bristow, it uses a paddle near the bottom of the screen to bounce a ball towards layers of bricks at the top of the screen. When the ball hits the bricks, the bricks that are touched are eliminated. The goal of the player is to eliminate all the bricks on the screen.


Following its success as an arcade game, there were a series of Breakout games released by Atari in the following years, as well as numerous similar games by other developers. Breakout also inspired Steve Wozniak, its original engineer along with Steve Jobs, to create Brick Out in BASIC for the first Apple ][ computer – the first software version of the original hardware creation. Where the original was a black and white arcade machine game that used cellophane strips on the screen to colour the bricks, this new version used both colour and sound to amplify the user’s experience playing the game.

For our project, we analysed the three primary elements of the game – the bricks, the paddle and the ball – to see if we could manipulate these elements to create an interesting experience for the player, considering the increased possibilities that computer vision offered us now, compared to the simplicity of the original game. We found that the engagement offered simply by bouncing the ball off the paddle was very high, especially when the paddle was controlled by an unexpected body part like a nose or a wrist, and players found that experience itself to be fairly rewarding. So we decided to continue with an exploration of the possibilities of interaction between the player and the screen, while maintaining the simplicity of the gameplay.


Some of the questions that arose during this process of exploration, helping us formulate our final concept were:

  1. How does the scale of the screen space impact the player’s ability to continuously engage with the game?
  2. What is the value of the body in creating an engaging experience with a game that is traditionally played using computer hardware?
  3. Is it possible to draw the player into the screen space without physically placing them within the screen?
  4. How can analog tools like a piece of chalk and a blackboard amplify the user’s experience of the game?

In answering these questions, we found that the best user experience for our game came from being able to present it in a life sized scale where the player is able to focus solely on the functionality of their body as the paddle and where the primary and only goal is to hit the ball. With no scorekeeping or ending, it functions as a way for the player to let go of any mental boundaries placed on the movement of their body and allows them to “enter” the screen space completely, almost forgetting about the presence of their body in the physical space.

To increase the effect of this experience, we decided to create the game with a black background using a brightly coloured ball and paddle that both changed colour based on the success of the player. This was achieved using p5.js for the design, PoseNet by ml5.js for the computer vision aspect, and Collide2D for the gameplay. This was then projected onto a blackboard, stretching the limits of the screen beyond what the projector could offer and minimising the player’s awareness of the existence of a separate screen at all. Placing written guides on the screen, indicating which two parts of the body created the paddle, helped the player move more easily and offered a good level of amusement as well, by highlighting the absurdity of the placement of the paddle and how it would move. Ensuring the player could not see their actual body reflected on the screen in any way was the key to minimising how conscious they were of their physical presence.
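The paddle hit test and colour change can be sketched as below. This is a simplified stand-in for the actual sketch: the circle-rectangle check mirrors what the p5.collide2D library's `collideRectCircle` provides, the paddle is the rectangle spanned between two tracked body keypoints, and the palette and `onHit` helper are our own illustrative names.

```javascript
// Circle-rectangle collision, in the style of p5.collide2D's
// collideRectCircle(rx, ry, rw, rh, cx, cy, diameter).
function collideRectCircle(rx, ry, rw, rh, cx, cy, diameter) {
  // Closest point on the rectangle to the circle's centre
  const nearestX = Math.max(rx, Math.min(cx, rx + rw));
  const nearestY = Math.max(ry, Math.min(cy, ry + rh));
  const dx = cx - nearestX;
  const dy = cy - nearestY;
  return dx * dx + dy * dy <= (diameter / 2) ** 2;
}

// Illustrative bright palette; ball and paddle share the current colour.
const PALETTE = ['#ff4d6d', '#4dd2ff', '#ffe14d', '#7bff4d'];

function onHit(state) {
  // Cycle to the next colour each time the player succeeds
  return { ...state, colorIndex: (state.colorIndex + 1) % PALETTE.length };
}

let state = { colorIndex: 0 };
// Paddle at (100, 400), 120x20; ball centred at (150, 410), diameter 30
if (collideRectCircle(100, 400, 120, 20, 150, 410, 30)) {
  state = onHit(state);
}
console.log(PALETTE[state.colorIndex]); // → '#4dd2ff'
```

In the running sketch the paddle rectangle would be recomputed each frame from the two PoseNet keypoints before this check.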

Explorations and User Testing Link:


Demonstration Link:










The ideal state of our installation would be projected onto two screens, wherein the player would have a bigger space to run around and play with the game, providing a more wholesome experience.



Editable Code

Fullscreen Experience


“Fun Solutions to Transform Any Space into Immersive Playground.” Moment Factory, 2020,


Çimen, E. Gökçe. “A Brief History of Brick Breaker Video Games.” Hero Concept, 9 Sept. 2018. Accessed 4 Oct. 2022.

Wozniak, Steve. “How Steve Wozniak Wrote Basic for the Original Apple from Scratch.” Gizmodo, 1 May 2014,


Native vs. Invasive Particles

Firaas Khan
Rim Armouch



For some time, artists have been exploring the different ways they can raise awareness around certain issues related to the environment and let people interact with nature through the medium of digital technologies.

Firaas being interested in physics and I in ecology, we decided to create an interactive installation that lets the spectator understand the way invasive species destroy and alter freshwater environments. It also sheds light on how human activities can contribute to the spreading of invasive species.

According to the Government of Canada, invasive species such as viruses, bacteria, parasites, and other micro-organisms, once introduced into a new environment outside of their natural range, can grow quickly because they have no natural predators there. As a result, they can outcompete and harm native species, and can even alter habitats to make them inhospitable for native species. This is especially concerning for species at risk.

Therefore, we ask ourselves how can we raise awareness around environmental issues through technology?

This installation was created using an interconnected screen and webcam. It was designed in the p5.js web editor, and the interaction in this experiment works by detecting a human nose through the webcam; once a nose is detected, an invasive species is brought into the environment.
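The spawning rule can be sketched as follows. This is our own simplification for illustration: PoseNet-style keypoints include a `nose` entry with a confidence score, and when that score clears a threshold, one invasive particle is added at the nose's position. The `spawnInvasive` helper and the particle shape are assumptions, not the exact project code.

```javascript
// Add one invasive particle when a nose keypoint is detected with
// sufficient confidence (PoseNet-style keypoint objects assumed).
function spawnInvasive(particles, keypoints, minScore = 0.6) {
  const nose = keypoints.find((k) => k.part === 'nose');
  if (nose && nose.score >= minScore) {
    particles.push({ kind: 'invasive', x: nose.position.x, y: nose.position.y });
  }
  return particles;
}

const particles = [{ kind: 'native', x: 50, y: 80 }];
spawnInvasive(particles, [
  { part: 'nose', score: 0.9, position: { x: 200, y: 120 } },
]);
console.log(particles.length); // → 2
```

In the real sketch this would run on each PoseNet callback, likely rate-limited so a single detected viewer does not flood the environment with invasive particles.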



Phases 01–03 of the installation




Animated GIF of our setup



Ideally, we would want our installation to be exhibited at a science fair or science museum, which is why we created small notes that describe what each particle is, and how it behaves in a freshwater environment.



























Two projects that particularly inspired us are done by Thijs Biersteker:


The first one is Plastic Reflectic, an interactive kinetic installation that lets the spectator reflect on the growing plastic problem in our oceans.



The second one is Shaded Seas, which shows that the power to change the overuse of plastic is in our own hands. The work sets out to keep creating awareness of the plastic problem in our oceans.

While neither project necessarily uses a screen or a webcam, both follow the same kind of interaction that we’re seeking in our project and shed light on similar issues. Additionally, Firaas and I thought that similar effects could be achieved using p5.js, PoseNet, and machine learning.

Our project was developed over several weeks, and while we weren’t able to achieve everything we wanted with the coding, we did go through an interesting research-through-design process that helped us decide how to design the particles, how to design the human–screen interaction, the environment we would like to create, and the way we want to present our installation.







Sketch made based on images gathered from our particles research

A big part of our sketching process was related to the abstraction and the materiality of the particles. Do we want to abstract them? Make them realistic? How can we represent the fact that they are alive? How do they behave? What kind of properties do we want to give them?

Another part that we were also concerned with, was the environment in which they live.










Then, we decided to create a group of native particles floating in a freshwater environment, which would later be invaded by another type of species introduced through human interaction with the screen.





We chose to go with an abstract and simple version of the process, making it accessible to a wide range of people. Since it’s a freshwater environment, we decided to display our experiment lying horizontally on a surface. Given how screens with built-in webcams are easily accessible, we decided to use a tablet for the user to interact with, making our project simple to install and portable.

We would like to consider our project as version 1 which can be enhanced over time. We already had a few ideas on how we can enhance the project:

+ Create an entire ecosystem and introduce more than 1 type of native particle that responds differently to the introduction of new particles

+ Elaborate on the idea of cycle and lifespan

+ Work on the visual approach. What happens if we add more details that could, later on, be used for additional behaviors?

+ Given that the iPad wasn’t processing our code efficiently, we did hear feedback related to the use of a projector and game engine, opening up the possibility to experiment with the shape of the screen

+ Elaborate more on the existing interactions, and add behaviors to the particles making the user journey even more interesting. Many users felt inclined to touch the screen, which could potentially lead to another interaction or be a limitation considering that we focused on webcam-screen communication


Additional properties of the native particles


The next step for us would be to keep inviting users to test our initial prototype to keep note of how people would like to interact with such a project. In addition, we would dig deeper into the different technologies that can display our project taking into consideration the feedback provided by our instructors and cohort.



Canada, Environment and Climate Change. “Government of Canada.” / Gouvernement du Canada, April 19, 2021.

“Thijs Biersteker — Plastic Reflectic.” Thijs Biersteker, 2016.

“Thijs Biersteker — Symbiosia.” Thijs Biersteker, 2020.

Shiffman, Daniel, Shannon Fry, and Zannah Marsh. 2012. The Nature of Code. United States: D. Shiffman.





Project 1: Screen Space – Victoria, Maryam, Gavin

Section 1: Related Works Research

A related screen space project is one called Minion Fun by Atharva Patil. This project uses poseNet to track the movements of the face to produce minion sounds on different sections of the screen. Atharva Patil is a product designer who is currently leading design at Atlas AI building geospatial demand intelligence tools. 

Picture of the work:

This work relates to our project and research because it uses the face to create a fun interaction with sound. We used this project as our main inspiration, but instead of using the face to produce a fun sound, we used the face to produce a picture of a specific emotion (happy, sad, angry, etc.) on top of a face mask. This project helped us understand how poseNet could be used just for the face and not the entire body.

Section 2: Conceptualization

In the wake of the COVID-19 pandemic, wearing a face mask has become a norm – it is a necessary precaution and, in some countries, still enforceable by law. A face mask typically covers half of a person’s face, obscuring the nose and mouth. This can often make it harder to read another individual’s facial expression. We used this fact as a springboard for our project.

Initially, we had to brainstorm what the face mask may block in terms of everyday life. As well as the aforementioned difficulty in reading facial expressions, we also noted that when using facial recognition software (such as Apple’s Face ID), the system often struggles to identify the individual. Therefore, we wanted to put the mask at the forefront of our project – to reconfigure it as a tool for effective communication. 


Some questions that arose from this initial brainstorm included: how do we ensure that the camera can recognize the face mask as opposed to a fully-exposed human face? How can we approach this in a creative manner? And ultimately, what do we want to express through the face mask?

We ran through a series of ideas – such as attempting to change the colour of the user’s mask itself – until we arrived at the concept of showcasing different facial expressions/emotions directly on the user’s mask, depending on their position within the screen space. The screen is broken down into four sections, each one representing a different emotion – happy, sad, tired, and angry. Corresponding music plays as the user enters each section.
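The four-section logic can be sketched as a simple quadrant lookup: the user's tracked position selects an emotion, which drives both the mask overlay and the music. The function below is our illustrative simplification, and the assignment of emotions to quadrants is our assumption, not necessarily the layout used in the presentation.

```javascript
// Map a position on the canvas to one of four emotion sections.
// Quadrant order (an assumption for illustration):
// top-left: happy, top-right: sad, bottom-left: tired, bottom-right: angry.
const EMOTIONS = [
  ['happy', 'sad'],
  ['tired', 'angry'],
];

function emotionAt(x, y, width, height) {
  const col = x < width / 2 ? 0 : 1;
  const row = y < height / 2 ? 0 : 1;
  return EMOTIONS[row][col];
}

console.log(emotionAt(100, 100, 640, 480)); // → 'happy'
console.log(emotionAt(500, 400, 640, 480)); // → 'angry'
```

In the sketch itself, `x` and `y` would come from a PoseNet face keypoint each frame, and a change of quadrant would swap the mask image and the looping track.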

In the final iteration, we used a webcam and ran the code directly from the p5.js online editor. In a more ideal situation, we envision this working as a video filter for a social media platform. With more time, we probably wouldn’t have used the “illustrated mouth” images shown in the presentation. A potential replacement would be actual human mouths, which could create a somewhat uncanny-valley effect that might express our main idea more clearly: a direct response to the “Hi, how are you?” text displayed on the screen. We weren’t able to finalize this aspect for the presentation because of the difficulty of showcasing sadness, tiredness or anger through the mouth alone. With more time, we might have been able to create more abstract and experimental depictions of emotions.


Videos of Project 1:


Patil, A. (2019, January 7). Motion Music. Medium. Retrieved October 24, 2022.

Common Objects Used Uncommonly: Merging screen space and physical space with analog objects as inputs

By Shipra Balasubramani, David Oppenheim, Tamika Yamamoto


Our research and design focus was on extending the screen into physical space and vice versa. We wanted to incorporate the body into the user experience and work with unconventional forms of analog input. We followed an iterative approach to research, ideation and design. 


1. Related Works Research 

The following projects served as our main references:

”Cleaning Windows” on EyeToy


Photo credit: David :3 from Youtube


The EyeToy (2003) is a colour webcam accessory designed to be used with the PlayStation 2. It uses computer vision and gesture recognition to allow players to interact with video games using motion. The Cleaning Windows game is part of a series of mini-games in EyeToy: Play that has players “clean” windows with their body as fast as possible to earn high scores.

It served as inspiration for our project because although the mechanics were simple, the interactive experience ignited a sense of play and delight that we wanted to tap into for our project.



Draw Me Close


Photo credit: NFB/National Theatre


Draw Me Close, by Jordan Tannahill, the NFB and the National Theatre, blurs the worlds of live performance, virtual reality and animation to create a vivid memoir about the relationship between a mother and her son. The individual immersive experience allows the audience member to take the part of the protagonist inside a live, animated world. This project served as a reference because of how it mapped analog objects such as a crayon/pen, sketchbook and furniture into the virtual space to make the virtual experience more tangible for the audience.


The Treachery of Sanctuary


Photo credit: Chris Milk

The Treachery of Sanctuary, by Chris Milk, ‘is the story of birth, death, and transfiguration that uses projections of the participants’ own bodies to unlock a new artistic language’. Digitally captured shadows of the visitors are reprojected onto this large-scale interactive installation. A shallow reflecting pool keeps the user at a distance from the screen, creating a dramatic effect and a seamless user experience. The technologies used to create this experience are openFrameworks, Microsoft Kinect, and Unity3D.

The idea of particles forming the body and then disintegrating into smaller objects that re-form as birds inspired us to build a body/human form out of unconventional particles/objects and to track the movement of the user.


2. Conceptualization




Our main research questions revolve around the idea of collapsing the distance between the digital space of the screen and the physical space inhabited by the user and whether that might contribute to increased feelings of presence and affect. As part of that exploration we were interested in embodied interactions with analog objects that connected to the screen. We wanted to prompt the user to associate their physical body and space with the digital and engender a sense of surprise and delight. We discussed various references that provided inspiration for our design approach.

We wondered about getting rid of expected forms of input, for example the mouse, the Xbox or PS5 controller, or even the Joy-Con controllers (descendants of the motion controls Nintendo popularized with the Wii); by now, all of them are extensions of our body that we no longer think about. Instead, we would incorporate common objects that carry different associations. Our “Cleaning Windows” reference (and others) led us from bubbles, to using the body as a controller, to a ball as both input and extension of the body.




We worked on individual code sketches based on our research and ideation phase. These led into our design and development phase, described in the next section. 



Design Considerations & Technical Description

Our larger vision is a geometric installation offering many surfaces for projection and filled with multiple everyday objects. We would play with the affordances and conventional associations of each object to create micro-narratives stemming from the user’s own histories and relationships with those objects.




For this initial prototype we focused on an interaction with one object only, although we did work in a surprise second object to test the logic of our programmatic approach (a state machine) and our object recognition library.

We focused on designing a space that would not require instructions, relying instead on the affordances of the physical design. It was important that the installation feel alive when the user first entered (the screen displayed the camera’s image of the user and a grid of moving tennis balls) and that the analog object and its position afford interaction (we chose a tennis ball and positioned it on a lit pedestal).

We used p5.js in conjunction with ml5.js, the PoseNet machine learning model, and an object detection model trained on the COCO dataset.

We segmented our v1 prototype concept into features and created code sketches for each (object recognition and the state machine, digital objects using GIFs, and the GIFs’ interactions with the skeleton), then integrated our separate code bases for testing and debugging.
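The state machine at the heart of the prototype can be sketched as a small pure function. This is a hypothetical reconstruction, not our shipped code: the state names are ours, and in the full sketch the `ballDetected` flag would come from ml5.js’s object detector reporting a “sports ball” at rest on the pedestal or floor.

```javascript
// Hypothetical sketch of the prototype's state machine (names illustrative).
// ballDetected: true when the object detector sees the tennis ball at rest;
// elapsedMs: time spent in the current state.
function nextState(state, ballDetected, elapsedMs) {
  switch (state) {
    case 'GRID':        // idle grid of digital tennis balls
      return ballDetected ? 'GRID' : 'BALL_PERSON';    // ball picked up
    case 'BALL_PERSON': // balls form the user's skeleton
      return ballDetected ? 'FALLING' : 'BALL_PERSON'; // ball released
    case 'FALLING':     // balls drop to the ground
      return elapsedMs >= 5000 ? 'GRID' : 'FALLING';   // grid re-forms after 5 s
    default:
      return 'GRID';
  }
}
```

In the p5.js sketch, draw() would call this every frame with the latest detection result and render the grid, the ball person, or the falling balls accordingly.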

Final prototype v1 code:

Fullscreen v1 code:


3. Presentation & Documentation




Location: Room 510 at 205 Richmond St. West (OCAD U)

Installation dimensions: 3m x 3m 

Number of participants: Single user

Hardware: Laptop, short throw projector with speaker, external webcam, tripod, 2 x spotlights

Software: p5.js web editor, ml5.js (PoseNet and COCO)

Screen: Projection on blackboard 

Installation Design: During our initial ideation process we were drawn towards creating a large-scale installation. We wanted to work with a 1:1 scale projection, breaking away from the conventional screens of our day-to-day lives. We used a short throw projector to project onto a black screen (blackboard), which reduced the distance between screen and projector and allowed us to create a compact installation.

While testing our initial work-in-progress prototype we were able to visualize and test the placement of the projector, screen, webcam, and laptop. In doing so we realized the challenges of working with computer vision: inadequate lighting of the analog object created inaccuracies in the model’s detection of it. With that in mind, we added two external light sources: the first lighting the object, the second acting as a bounce light that provided ambient light for the installation and helped avoid the sharp shadows cast by the user, which would have confused PoseNet.

The layout of the installation was finalized through trial and error. The intent was to enhance the experience by creating an immersive space for the user, leaving enough room for movement within the installation while keeping the focus on the interactions and the projected digital space.




User Experience Description

The following section outlines the user experience of the v1 prototype we demonstrated during the October 20th critique, starting with a user flow diagram and then a series of annotated GIFs.



1. User enters the playspace and sees a grid of digital tennis balls on screen, a webcam capture that mirrors their movement, and a physical tennis ball displayed on a tripod in front of them.


2. User picks up the tennis ball. On screen, the ball grid deconstructs and forms the body of the user. Background fades to black and a parallaxing background appears.


3. As the User moves, the Ball Person on screen mirrors their movement. A parallax background that moves with the user suggests a sense of moving in a 3D environment that includes both the digital screen and the physical playspace.




4. User releases the physical tennis ball, either dropping it to the ground or returning it to the pedestal (tripod). On screen, the Ball Person deconstructs and tennis balls fall to the ground.


5. End: after five seconds, the ball grid appears on screen once more.
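The parallax effect in step 3 could be computed with a helper along these lines (our illustrative sketch, with hypothetical names, not the project’s actual code): each background layer shifts opposite to the user’s horizontal position, with far layers moving less than near ones.

```javascript
// Hypothetical parallax helper. userX: the tracked keypoint's horizontal
// position normalized to 0..1; depths: 0 (far) to 1 (near); maxShift: the
// largest pixel offset, applied to the nearest layer.
function parallaxOffsets(userX, depths, maxShift) {
  const centered = userX - 0.5; // zero shift when the user stands centered
  return depths.map(depth => -centered * depth * maxShift);
}
```

In draw(), each layer image would be drawn at its offset, so near layers sweep faster than far ones as the user walks across the playspace, suggesting depth.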


*We partially prototyped a second object (donut) but chose not to include it as part of the overall user experience and demoed it separately instead.


User Experience demo video (with sound):

User testing recordings (with sound):

4. Feedback and Next Steps

Feedback was gathered during the critique rather than through formal user testing sessions.

The critique began with three volunteers who tested the installation before receiving any context from us as designers. We observed their sessions and took note of their body language and utterances. During the discussion that followed our verbal presentation, we asked for the testers’ observations. Additionally, we received feedback from individuals who had observed the three testers. Finally, a few more users tried the installation toward the end of the critique.

Our main takeaways from the session were:

  • Overall response to the experience was positive; users didn’t require instructions to move through the intended experience (pick up the tennis ball and play around with it and move their body); 
  • Users seemed to enjoy the key moment of picking up the ball, watching themselves transform into a ‘tennis ball person’, moving around in that form, and then breaking apart (by dropping the ball or placing it back on the pedestal); 
  • One user required prompting to pick up the ball; 
  • There was an acceptable moment of tension when users didn’t quite know what they were allowed (or supposed) to do with the tennis ball; however, all users quickly began to bounce it, throw it, or put it back on the pedestal; 
  • Users did seem to want more complex behavior from the system, for example, to see one of the digital tennis balls follow their analog tennis ball when they threw it in the air or against the wall; 
  • Our demonstration of a second object (donut) seemed to be well-received, as was the larger vision of having multiple everyday objects available to play with.

Should we decide to further develop the project, we would begin by conducting formal user testing of this v1 prototype and then dive back into further research and ideation as part of a larger iterative design and build process.



“EyeToy.” Wikipedia, Wikimedia Foundation, 28 May 2022,

“Soarman Cleaning Windows on EyeToy.” YouTube, uploaded by David :3, 09 Mar. 2014,

“The Treachery of Sanctuary” (2012), website:

“Draw Me Close” (2017, 2021), press kit:






The Digital Pet | Screen Space Experiment



October 2022

By: Divyanka Sadaphule, Nicky Guo, and Taylor Patterson


Context Research  

Related Work

  • For this experiment we wanted to focus on the exploration of colour and emotions.
  • Two artists we researched for inspiration are James Turrell and Jónsi. Both engage the senses to evoke emotions and feelings through their installations.


James Turrell – Ganzfeld


This installation explores space and light, focusing on the effects of light on people’s moods rather than light serving merely as illumination. James Turrell comes from a scientific background, where he studied the ‘Ganzfeld effect’. In the Ganzfeld effect, “your brain is starved of visual stimulation and fills in the blanks on its own. This changes your perception and causes unusual visual and auditory patterns” (Healthline, 2020). This is a perfect fit for our project, as we aim to link colour to emotion.




Jónsi – Hrafntinna (Obsidian)

Located in the Art Gallery of Ontario, this installation stood out the most with its use of sound to create deep vibrations. Set in a room with little to no light, it is a “sixteen-channel sound installation, chandelier, speakers, subwoofers, carpet, fossilised amber scent” (Art Gallery of Ontario, n.d.). Other senses are engaged as well, through scent and low lighting. The idea is to ‘evoke the sensation of being inside a volcano’. To push our project even further, we want to incorporate the feeling of sound through vibrations.


The Impact of Colour Psychology on Emotions in Child Development  

Colours play an essential role in child development. Colour is a form of energy, with wavelength and frequency, and colour psychology’s impact on a child’s learning abilities and behaviour is a much-researched subject (Olesen, 2016).

Studies demonstrate the benefits of colours for brain development, creativity, productivity, and learning; colours help connect neural pathways in the brain. In one study, children completing pegboard tests solved them much faster when wearing goggles of their favourite colour (Co, n.d.).

The reaction to colour temperature is another matter: warm colours can calm certain students but excite others, and cool colours might likewise stimulate one child and relax another. Research has also shown that colours can improve learning ability and memory; one study concluded that red and blue are the best colours for enhancing children’s cognitive skills and brain function (Renk Etkisi, 2017).


Historical Aspects  

Several ancient cultures, including the Egyptians and Chinese, practiced chromotherapy, or the use of colours to heal. Chromotherapy is sometimes referred to as light therapy or colorology.

Chromotherapy & Phototherapy in Ancient Egypt: The ancient Egyptians used colour as a healing technique in many aspects of their lives, and colours were also associated with gods. With a strong focus on worshiping the sun, they believed that shining rays of light through coloured crystals could penetrate the body and act as a treatment for ailments.

Chromotherapy in Ancient China: Chinese culture connects colours with health. It has always been keen on the connection of body, mind, earth, and spirit, which shows in the holistic Traditional Chinese Medicine (TCM) techniques that have transcended generations. It is believed that the colours you attract reflect alignments or imbalances with the cosmos and surrounding energy.

(JACUZZI Saunas – Clearlight Infrared SaunasTM, 2018)


Conceptual Framework

The initial idea behind the digital pet was to help children learn and understand emotions with the help of colours. This can also be beneficial for students with learning disabilities and ADHD, who often experience distorted colour discrimination. Different colours can simulate emotions ranging from comfort and warmth to hostility and anger, which can help children understand their emotional states.

  • We created an interactive platform for people to explore two different states of emotion through sound, technology and colours.
  • Our digital pet represents two states of emotion, happy and angry. We chose colours and sounds to match each of these emotions, enhancing the emotional qualities of our digital pet.


Research Question

  • How do we map emotion to colour and sound through technology and interaction, while formulating a playful experience where users engage their visual and auditory senses?


Technical description & design considerations

Ideal Project Location

Ideally, the digital pet would live in a room set up with a projector, speakers, and a webcam, with the pet projected onto a wall. The colour of the octopus would reflect on all four walls of the room, and a song would play. Colour, sound, and touch would work together to produce an immersive art installation.


While the demonstration shown in the images below utilises only two of the three proposed emotions, happy and angry, the aim is to give users the opportunity to experience all three emotions/positions, with the actions associated with each described below. The idea is an interaction-based end goal where our character is reactive. The sequence of events is shown below:

Position 1: Static/Sad

The character floats and looks sad, but is ready to welcome oncoming guests. Positioned in the middle of the screen, it appears white on a black background, with a rain cloud, rain, thunder, and lightning on all four walls.

Emotion 1: Happy

Twenty seconds after someone enters the room, the digital pet becomes happy and turns yellow. The background changes to sunshine and a bed of floating flowers. The user is prompted to wave their hands up and down to be ‘happy with the octopus’. Waving the arms changes the digital pet’s colour to a yellow-orange gradient, and the pet bounces up and down on the grid with a smile on its face.

Emotion 2: Angry

After the user waves their arms five times, the octopus experiences a new emotion: anger. When the character is angry, the background changes into clouds of smoke, the character turns red, its head grows, and the room vibrates with rage. The character has a frown and angry brows at this point.
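The sequence above can be expressed as a small emotion function (a hypothetical sketch, not the project’s actual code; the thresholds follow the timings described above, and the names and colour values are ours). In a p5.js sketch, the chosen colour would feed fill() for the octopus.

```javascript
// Hypothetical emotion logic for the pet, following the sequence above.
// secondsInRoom: time since a visitor entered; waveCount: counted arm waves.
function petEmotion(secondsInRoom, waveCount) {
  if (secondsInRoom < 20) return 'sad'; // white pet, rain and thunder
  if (waveCount < 5) return 'happy';    // yellow-orange pet, sunshine
  return 'angry';                       // red pet, smoke, vibration
}

// Illustrative [r, g, b] colour per emotion, for p5.js fill()
const EMOTION_COLOURS = {
  sad: [255, 255, 255],
  happy: [255, 200, 0],
  angry: [255, 0, 0],
};
```

The wave count itself could be derived from PoseNet wrist keypoints crossing above and below the shoulders.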

Emotion 1 (Happy)                                






Emotion 2 (Angry)



Video of working code



D, William. “Ganzfeld – a Light and Space Exhibiton by James Turrell.” Design Is This, 27 Oct. 2015,



Healthline. (2020). Ganzfeld Effect: Sensory Deprivation Hallucinations. [online] Available at:

Art Gallery of Ontario. (n.d.). Jónsi: Hrafntinna (Obsidian). [online] Available at: [Accessed 24 Oct. 2022].

Co, P. (n.d.). How To Improve Your Child’s Mood With Colors. [online] Parent Co. Available at:

Renk Etkisi (2017). Renk Etkisi | The Effect of Color | The Effects of Colors on Children. [online] Available at:

Olesen, J. (2016). Color Psychology: Child Behavior And Learning Through Colors. [online] Available at:

JACUZZI Saunas – Clearlight Infrared SaunasTM. (2018). The Ancient History of Color Light Therapy | Jacuzzi® Saunas Blog. [online] Available at:




Crowded project

Full documentation PDF link: crowded-screenspace

The pandemic changed how we interact with spaces with regard to cameras, and how we feel about being continuously monitored in public. During the pandemic, a great deal of data and footage was collected through cameras, heat maps, and virus tests tracking everyone’s movements and health. Survival and health took centre stage over concerns about the continuous breaches of privacy and data collection in public spaces. People accepted being monitored for everyone’s safety, yet being monitored has never been a comfortable feeling.

  • In this project, we simulate what it would be like to track people’s microbes and their related health. Doing so gives the audience the opportunity to experience the feeling of being seen and observed, and to play a role in the environment by interacting with the particles.
  • Our experience has three stages, and we broke them down, each with its own implementation needs. The screens are placed in an L shape so that participants do not block each other’s view or break the experience for others.

In stage 1, the participant enters the room and must be scanned for microbes.

In stage 2, the participants move further into the room, view themselves on the other screen, and interact with the particles they are creating.

In stage 3, one participant is infected with an “angry” particle, indicated by a looming sound from a Bluetooth speaker at a distance.
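The looming sound in stage 3 could be driven by a simple distance-to-volume mapping. This is our illustrative sketch under assumed details, not the project’s actual code; the class and method names are hypothetical.

```javascript
// Hypothetical stage-3 sketch: an "angry" microbe particle whose looming
// sound grows louder as it approaches a participant.
class MicrobeParticle {
  constructor(x, y, angry = false) {
    this.x = x;
    this.y = y;
    this.angry = angry; // angry particles turn the soundscape menacing
  }
  // Map distance to a 0..1 volume: 1 when touching, 0 beyond maxDist pixels
  loomingVolume(personX, personY, maxDist) {
    const d = Math.hypot(this.x - personX, this.y - personY);
    return Math.max(0, 1 - d / maxDist);
  }
}
```

Each frame, the returned level could set the gain of the looping sound played through the Bluetooth speaker.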

Code link :

Video link of the project :


Group: Tyler, Yueming, Purvi