Author Archives: Gavin Tao

Sketch 5 – Gavin Tao


For this sketch, I was testing how to make Processing recognize a series of .png files and display them in sequential order, so that they play back as an animated image.

I utilized an imageCount variable paired with a consistent filename format, so Processing can build each frame’s filename from its index rather than having every file typed into the code by hand. This is especially helpful if you have a lot of images.
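A minimal sketch of that pattern, written here in p5.js style (the frame count, filename pattern, and helper names are assumptions for illustration, not the actual project code):

```javascript
// Assumed filename pattern: frame_0000.png ... frame_0011.png
const imageCount = 12; // total number of frames (assumed)
let frames = [];
let currentFrame = 0;

// Zero-pad an index to a fixed width, like Processing's nf(i, 4)
function nf(num, digits) {
  return String(num).padStart(digits, "0");
}

function preload() {
  // Build every filename from the index instead of typing each one
  for (let i = 0; i < imageCount; i++) {
    frames.push(loadImage("frame_" + nf(i, 4) + ".png"));
  }
}

function draw() {
  image(frames[currentFrame], 0, 0);
  currentFrame = (currentFrame + 1) % imageCount; // wrap back to frame 0
}
```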

I also used mousePressed, so that the lines “grow” while the mouse button is held down and stop growing when it is released.
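The hold-to-grow behaviour can be sketched like this (in p5.js the equivalent variable is mouseIsPressed; the growth rate and cap are assumptions):

```javascript
let lineLength = 0;
const maxLength = 400; // assumed cap so the line stays on screen

// Grow only while the button is held; freeze the length when released.
function updateLength(isPressed, current) {
  return isPressed ? Math.min(current + 2, maxLength) : current;
}

function draw() {
  lineLength = updateLength(mouseIsPressed, lineLength);
  line(width / 2, height, width / 2, height - lineLength);
}
```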

GitHub repository for the code:

– This example from the Processing website helped me figure out the imageCount variable.
– Animation created by me.

Sketch 4 – Gavin Tao


Communication between Arduino and p5.js.

I loaded an animation into p5.js and mapped each individual frame to the reading from the Arduino’s potentiometer. This gives a tactile feeling of spinning Cloud’s sword. Because I didn’t set up any kind of “reset” function, the animation works both clockwise and counter-clockwise:
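The frame mapping can be sketched as follows (the frame count and the 0–1023 potentiometer range are assumptions; p5.js’s built-in map() would work equally well):

```javascript
const totalFrames = 24; // assumed number of frames in the gif

// Map a raw potentiometer reading (0-1023) onto a frame index.
// Flooring plus a clamp keeps the index valid at both extremes.
function potToFrame(potValue) {
  const idx = Math.floor((potValue / 1024) * totalFrames);
  return Math.min(idx, totalFrames - 1);
}

function draw() {
  // frames[] holds the animation; potValue arrives over serial
  image(frames[potToFrame(potValue)], 0, 0);
}
```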


Initially, before loading the images, I tested a very simple circuit that dims an LED with the potentiometer. I kept the LED in the final build because it lets me confirm that the Arduino side of things is still working whenever I get any kind of error.

p5.js can’t read the Arduino’s serial data directly, so you must run an external program called p5.SerialControl, which opens the port and relays the data for p5.js to read:
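A sketch of the p5 side, assuming the p5.serialport library is loaded and p5.SerialControl is running; the port name and the line-based parsing are assumptions:

```javascript
let serial;
let potValue = 0;

function setup() {
  serial = new p5.SerialPort(); // talks to the p5.SerialControl app
  serial.open("/dev/tty.usbmodem1101"); // assumed port name
  serial.on("data", serialEvent);
}

// Parse one newline-terminated reading from the Arduino,
// clamped to the potentiometer's 0-1023 range.
function parseReading(line) {
  const n = parseInt(line.trim(), 10);
  return Number.isNaN(n) ? null : Math.min(Math.max(n, 0), 1023);
}

function serialEvent() {
  const parsed = parseReading(serial.readLine());
  if (parsed !== null) potValue = parsed;
}
```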

Serial Communication with Arduino and p5.js:
Serial input to p5:
Cloud Strife gif:

My GitHub repository containing the p5.js code and Arduino code:

Fritzing diagram:

Project 1: Screen Space – Victoria, Maryam, Gavin

Section 1: Related Works Research

A related screen-space project is Minion Fun by Atharva Patil. It uses poseNet to track the movements of the face and produce minion sounds in different sections of the screen. Atharva Patil is a product designer currently leading design at Atlas AI, building geospatial demand-intelligence tools.

Picture of the work:

This work relates to our project and research because it uses the face to create a fun interaction with sound. We used this project as our main inspiration, but instead of using the face to produce a fun sound, we used the face to produce a picture of a specific emotion (happy, sad, angry, etc.) on top of a face mask. This project helped us understand how poseNet could be used just for the face and not the entire body.
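Using poseNet for just the face boils down to reading only the face keypoints from each detected pose. A sketch of that idea, assuming ml5’s poseNet output format (the video setup and callback wiring are omitted):

```javascript
let nosePos = { x: 0, y: 0 };

// Pull one named face keypoint (e.g. "nose", "leftEye") out of a
// poseNet pose object, whose keypoints look like
// { part, score, position: { x, y } }.
function facePoint(pose, part) {
  const kp = pose.keypoints.find((k) => k.part === part);
  return kp ? { x: kp.position.x, y: kp.position.y } : { x: 0, y: 0 };
}

// Callback for poseNet's "pose" event: keep only the face data.
function gotPoses(results) {
  if (results.length > 0) {
    nosePos = facePoint(results[0].pose, "nose");
  }
}
```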

Section 2: Conceptualization

In the wake of the COVID-19 pandemic, wearing a face mask has become the norm – it is a necessary precaution and, in some countries, still enforceable by law. A face mask typically covers half of a person’s face, obscuring the nose and mouth, which can make it harder to read another individual’s facial expression. We used this fact as a springboard for our project.

We began by brainstorming what a face mask blocks in everyday life. Beyond the aforementioned difficulty in reading facial expressions, we noted that facial-recognition software (such as Apple’s Face ID) often struggles to identify a masked individual. We therefore wanted to put the mask at the forefront of our project – to reconfigure it as a tool for effective communication.


Some questions that arose from this initial brainstorm included: how do we ensure that the camera can recognize the face mask as opposed to a fully-exposed human face? How can we approach this in a creative manner? And ultimately, what do we want to express through the face mask?

We ran through a series of ideas – such as attempting to change the colour of the user’s mask itself – until we arrived at the concept of showcasing different facial expressions/emotions directly on the user’s mask, depending on their position within the screen space. The screen is broken down into four sections, each one representing a different emotion – happy, sad, tired, and angry. Corresponding music plays as the user enters each section.
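The four-section lookup can be sketched like this (the section-to-emotion ordering here is an assumption for illustration):

```javascript
const emotions = ["happy", "sad", "tired", "angry"]; // assumed ordering

// Which emotion section a tracked point (e.g. the nose) falls in,
// for a canvas of width w and height h split into 2x2 quadrants:
// top-left, top-right, bottom-left, bottom-right.
function emotionAt(x, y, w, h) {
  const col = x < w / 2 ? 0 : 1;
  const row = y < h / 2 ? 0 : 1;
  return emotions[row * 2 + col];
}
```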

In the final iteration, we used a webcam and ran the code directly from the p5.js online editor. Ideally, we envision this working as a video filter for a social media platform. With more time, we probably wouldn’t have used the “illustrated mouth” images from the presentation. A potential replacement would be actual human mouths, which could create a somewhat uncanny-valley effect that expresses our main idea more clearly: a direct response to the “Hi, how are you?” text displayed on the screen. We weren’t able to finalize this aspect for the presentation, because it is difficult to convey sadness, tiredness, or anger through the mouth alone. With more time, we might have created more abstract, experimental depictions of each emotion.


Videos of Project 1:


Patil, A. (2019, January 7). Motion Music. Medium. Retrieved October 24, 2022, from