Project 1: Screen Space – Victoria, Maryam, Gavin

Section 1: Related Works Research

A related screen space project is Minion Fun by Atharva Patil. This project uses PoseNet to track the movement of the face, producing minion sounds in different sections of the screen. Atharva Patil is a product designer currently leading design at Atlas AI, where he builds geospatial demand intelligence tools.

Picture of the work:
[Image: screenshot of Minion Fun]

This work relates to our project and research because it uses the face to create a playful interaction with sound. We used it as our main inspiration, but instead of using the face to produce a sound, we used the face to produce a picture of a specific emotion (happy, sad, angry, etc.) on top of a face mask. The project helped us understand how PoseNet could be used to track just the face rather than the entire body.
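To make that face-only tracking concrete, here is a minimal sketch of the approach, assuming the ml5.js PoseNet wrapper commonly paired with p5.js; the variable names are ours, for illustration only. PoseNet reports whole-body keypoints, but nothing stops a sketch from reading only the facial ones:

```javascript
// Minimal face-only PoseNet sketch (assumes p5.js + ml5.js are loaded).
let video;
let nose = null;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // PoseNet estimates whole-body keypoints, but we only keep the nose,
  // which is enough to locate the face within the screen space.
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (poses) => {
    if (poses.length > 0) {
      nose = poses[0].pose.nose; // { x, y, confidence }
    }
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (nose) {
    noStroke();
    fill(255, 0, 0);
    circle(nose.x, nose.y, 20); // marker at the tracked face position
  }
}
```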

Section 2: Conceptualization

In the wake of the COVID-19 pandemic, wearing a face mask has become the norm – it is a necessary precaution and, in some countries, still enforceable by law. A face mask typically covers half of a person's face, obscuring the nose and mouth, which often makes it harder to read another individual's facial expression. We used this fact as a springboard for our project.

Initially, we brainstormed what a face mask may block in terms of everyday life. Beyond the aforementioned difficulty in reading facial expressions, we noted that facial recognition software (such as Apple's Face ID) often struggles to identify a masked individual. We therefore wanted to put the mask at the forefront of our project – to reconfigure it as a tool for effective communication.


Some questions arose from this initial brainstorm: how do we ensure that the camera can recognize the face mask, rather than only a fully exposed human face? How can we approach this in a creative manner? And ultimately, what do we want to express through the face mask?

We ran through a series of ideas – such as attempting to change the colour of the user's mask itself – until we arrived at the concept of showcasing different facial expressions/emotions directly on the user's mask, depending on their position within the screen space. The screen is divided into four sections, each representing a different emotion – happy, sad, tired, and angry – and corresponding music plays as the user enters each section.
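A sketch of how this quadrant logic might look in p5.js follows; the asset file names and the `quadrantFor` and `updateEmotion` helpers are illustrative assumptions, not our exact code:

```javascript
// Hypothetical quadrant-to-emotion mapping (asset names are illustrative).
// Requires the p5.sound library for loadSound().
let emotions = [];
let current = -1; // index of the active section; -1 before the first pose

function preload() {
  for (const name of ['happy', 'sad', 'tired', 'angry']) {
    emotions.push({
      img: loadImage(name + '.png'),
      sound: loadSound(name + '.mp3'),
    });
  }
}

// 0 = top-left (happy), 1 = top-right (sad),
// 2 = bottom-left (tired), 3 = bottom-right (angry)
function quadrantFor(x, y) {
  return (y < height / 2 ? 0 : 2) + (x < width / 2 ? 0 : 1);
}

// Called from draw() with the latest tracked nose position.
function updateEmotion(nose) {
  const q = quadrantFor(nose.x, nose.y);
  if (q !== current) {
    // Entering a new section: stop the old track, loop the new one.
    if (current >= 0) emotions[current].sound.stop();
    emotions[q].sound.loop();
    current = q;
  }
  // Draw the emotion image just below the nose, roughly over the mask.
  imageMode(CENTER);
  image(emotions[q].img, nose.x, nose.y + 40, 120, 80);
}
```

Comparing the new quadrant against `current` means the audio only switches when the user crosses into a different section, so the track doesn't restart on every frame.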

In the final iteration, we used a webcam and ran the code directly from the p5.js online editor. Ideally, we envision this working as a video filter for a social media platform. With more time, we would have replaced the illustrated mouth images used in the presentation; a potential replacement would be photographs of actual human mouths, which could create an uncanny valley effect that expresses our main idea more clearly – a direct response to the "Hi, how are you?" text displayed on the screen. We weren't able to finalize this aspect for the presentation because sadness, tiredness, and anger are difficult to convey through the mouth alone. With more time, we might also have created more abstract or experimental depictions of emotions.

[Images: screenshots of the project in action]

Videos of Project 1:
https://youtube.com/shorts/SR3K_EHgCjw
https://youtu.be/DG0jxVODSEg


BIBLIOGRAPHY:
Patil, A. (2019, January 7). Motion Music. Medium. Retrieved October 24, 2022, from https://medium.com/disintegration-anxiety-1/icm-final-project-53b624770bb6