Category Archives: Code Sketches

Sketch 1 – Winter Welcoming – Mona Safari

This is my first real experience with p5.js, and I have to say, I didn’t expect it to be this fun!
I wanted to experiment with a few things, especially with randomized objects, which in this case are the snowflakes, their positions, and their falling motion. The sketch keeps an array of numbers, one per x-coordinate of the canvas, so the array index is a flake’s x value and the element is its y value. Each frame it draws a point at each (index, value) position, adds a random amount to each element so the flake drifts downward, and resets the element once it passes the bottom.
I also added a mouseDragged() function to speed up the accumulation of snowflakes at the bottom, so over time it looks like snow is filling the canvas: the bottom fills up with snowflake dots, creating a fun interaction between the code and the user.
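The one-array-per-column technique described above can be sketched roughly like this. This is a minimal reconstruction, not the original code; the speeds and the drag boost are illustrative assumptions:

```javascript
// Minimal p5.js sketch of the snowfall technique described above.
// One y-position per x-coordinate: the array index is the flake's x.
let flakeY = [];

function setup() {
  createCanvas(400, 400);
  for (let x = 0; x < width; x++) {
    flakeY[x] = random(height); // start each flake at a random height
  }
}

// Pure helper: advance a flake downward and reset it past the bottom.
function updateFlakeY(y, speed, canvasHeight) {
  const next = y + speed;
  return next > canvasHeight ? 0 : next;
}

function draw() {
  background(20, 30, 60);
  stroke(255);
  for (let x = 0; x < width; x++) {
    point(x, flakeY[x]);
    flakeY[x] = updateFlakeY(flakeY[x], random(1, 4), height);
  }
}

// Dragging the mouse moves the flakes faster, so the bottom fills sooner.
function mouseDragged() {
  for (let x = 0; x < width; x++) {
    flakeY[x] = updateFlakeY(flakeY[x], random(5, 15), height);
  }
}
```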

Sketch 1 - Winter Welcoming

Fullscreen Experience:

Editable Version:

Sketch #1: Physi-digital Ball Drop — David Oppenheim

For this first sketch I wanted to explore ml5.js and COCO to play around with the concept of merging an analog object with its digital counterpart. I was partially successful.

Intended user experience: 1) When the user throws a physical tennis ball in front of their camera, a virtual ball appears and follows their physical tennis ball as they throw it up and down in front of the camera. When their physical and virtual ball (tracking together) reaches a certain height, a “floor” is drawn and 2) the virtual ball takes on a life of its own – i.e., it continues to draw to the screen, now moving independently from the physical tennis ball, and drops from its height onto the “floor,” bouncing until it comes to a stop.

Results: I was able to get COCO to recognize the analog object (physical tennis ball) and then draw a virtual ball to the screen at the position of the physical tennis ball, drawing a “floor” (rectangle) when the balls reached a certain height. I was unable to decouple the virtual ball from the tracked object and achieve the second part of the intended user experience.
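The unfinished second part — a virtual ball that decouples from the tracked object and falls with gravity and a damped bounce — could be sketched like this. This is my own illustration, not David’s code; the ml5 calls follow the documented COCO-SSD objectDetector API, while the floor height, release height, and physics constants are assumptions:

```javascript
// Sketch of the intended two-phase behaviour: track a tennis ball with
// ml5's COCO-SSD object detector, then let a virtual ball fall on its own.
let video, detector;
let ball = { x: 0, y: 0, vy: 0, free: false }; // free = decoupled from tracking
const FLOOR_Y = 380;   // assumed "floor" height
const RELEASE_Y = 100; // assumed trigger height for decoupling

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  detector = ml5.objectDetector('cocossd', () => detector.detect(video, gotResults));
}

function gotResults(err, results) {
  if (!err && !ball.free) {
    const hit = results.find(r => r.label === 'sports ball'); // COCO class
    if (hit) {
      ball.x = hit.x + hit.width / 2;
      ball.y = hit.y + hit.height / 2;
      if (ball.y < RELEASE_Y) ball.free = true; // phase 2: a life of its own
    }
  }
  detector.detect(video, gotResults);
}

// Pure physics step: apply gravity, then a damped bounce off the floor.
function stepBall(y, vy, gravity, floorY, damping) {
  vy += gravity;
  y += vy;
  if (y > floorY) { y = floorY; vy = -vy * damping; }
  return [y, vy];
}

function draw() {
  image(video, 0, 0);
  if (ball.free) [ball.y, ball.vy] = stepBall(ball.y, ball.vy, 0.5, FLOOR_Y, 0.8);
  rect(0, FLOOR_Y, width, 4); // the "floor"
  circle(ball.x, ball.y, 40); // the virtual ball
}
```

The damping factor below 1 is what makes each bounce lower than the last, so the ball eventually comes to a stop.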

Video of the working components (part 1 of the user experience) (YouTube) 

Code (on P5.js)


Sketch 1 – Shipra



Sketch 1 : Pop the Bubble

For this sketch, I wanted to experiment with the mouse-over function and with adding sound to the action. In this case, I used GIFs to create the bubble, and once the mouse hovers over the GIF, the bubble pops with a sound effect.
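A hover-to-pop interaction like this can be sketched as follows. This is a minimal reconstruction, not the original code; the asset names bubble.gif, pop.gif, and pop.mp3 are placeholders, and the hit-test uses a simple distance check:

```javascript
// Minimal p5.js sketch of a hover-to-pop bubble with a sound effect.
// bubble.gif, pop.gif and pop.mp3 are placeholder asset names.
let bubbleImg, popImg, popSound;
let popped = false;
const bubble = { x: 200, y: 200, r: 60 };

function preload() {
  bubbleImg = loadImage('bubble.gif');
  popImg = loadImage('pop.gif');
  popSound = loadSound('pop.mp3'); // requires the p5.sound library
}

// Pure helper: is the mouse inside the bubble's radius?
function isOver(mx, my, bx, by, r) {
  return Math.hypot(mx - bx, my - by) < r;
}

function setup() {
  createCanvas(400, 400);
  imageMode(CENTER);
}

function draw() {
  background(220);
  if (!popped && isOver(mouseX, mouseY, bubble.x, bubble.y, bubble.r)) {
    popped = true;
    popSound.play(); // sound effect on pop
  }
  image(popped ? popImg : bubbleImg, bubble.x, bubble.y);
}
```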

To play this game, make sure to keep the volume up and use Safari for the fullscreen mode.

Have fun!

Editable Code :

FullScreen Link :

Sketch 1 – Anusha Menon


I created this sketch with the help of an example from the p5.js website by Keith Peters, titled ‘Bouncy Bubbles’.

I modified the bubbles to scatter and speed up when the mouse runs over them, and the colours are randomised to create a fun visual effect.
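The mouse-over scatter could be added along these lines. This is a hedged sketch of the modification, not Anusha’s actual code; the boost factor and bubble counts are assumptions:

```javascript
// Sketch of the modification: bubbles scatter and speed up on mouse-over,
// with randomised colours, in the spirit of the 'Bouncy Bubbles' example.
let balls = [];

function setup() {
  createCanvas(400, 400);
  for (let i = 0; i < 12; i++) {
    balls.push({
      x: random(width), y: random(height),
      vx: random(-2, 2), vy: random(-2, 2),
      r: random(15, 35),
      col: [random(255), random(255), random(255)], // randomised colour
    });
  }
}

// Pure helper: if the mouse is inside the ball, boost its velocity
// away from the mouse so it scatters; otherwise leave it unchanged.
function scatterVelocity(vx, vy, bx, by, r, mx, my, boost) {
  const d = Math.hypot(mx - bx, my - by);
  if (d >= r || d === 0) return [vx, vy];
  return [vx + ((bx - mx) / d) * boost, vy + ((by - my) / d) * boost];
}

function draw() {
  background(30);
  noStroke();
  for (const b of balls) {
    [b.vx, b.vy] = scatterVelocity(b.vx, b.vy, b.x, b.y, b.r, mouseX, mouseY, 3);
    b.x += b.vx; b.y += b.vy;
    if (b.x < 0 || b.x > width) b.vx *= -1;  // bounce off the walls
    if (b.y < 0 || b.y > height) b.vy *= -1;
    fill(...b.col);
    circle(b.x, b.y, b.r * 2);
  }
}
```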

Here’s the fullscreen experience:

Here’s the editable code:




This code creates virus particles to simulate the COVID-19 virus. I used PoseNet to track the position of the nose with the webcam, and I wrote a class and an array to draw viruses floating in the air. Then I made the nose attract the viruses, so the interaction resembles the process of a person being infected with COVID-19.
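The nose-attracts-virus behaviour can be sketched like this. This is my reconstruction under assumptions: the ml5 PoseNet calls follow its documented API, while the Virus class details and the attraction strength are illustrative:

```javascript
// Sketch of the technique: PoseNet tracks the nose, and floating
// "virus" particles drift toward it.
let video, poseNet;
let nose = { x: 320, y: 240 };
let viruses = [];

// Pure helper: nudge a particle's velocity toward a target point.
function attractStep(vx, vy, px, py, tx, ty, strength) {
  const d = Math.hypot(tx - px, ty - py);
  if (d === 0) return [vx, vy];
  return [vx + ((tx - px) / d) * strength, vy + ((ty - py) / d) * strength];
}

class Virus {
  constructor() {
    this.x = random(width); this.y = random(height);
    this.vx = random(-1, 1); this.vy = random(-1, 1);
  }
  update() {
    [this.vx, this.vy] = attractStep(this.vx, this.vy, this.x, this.y, nose.x, nose.y, 0.2);
    this.x += this.vx; this.y += this.vy;
  }
  show() { circle(this.x, this.y, 20); }
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', poses => {
    if (poses.length > 0) nose = poses[0].pose.nose; // keypoint with x, y
  });
  for (let i = 0; i < 30; i++) viruses.push(new Virus());
}

function draw() {
  image(video, 0, 0);
  for (const v of viruses) { v.update(); v.show(); }
}
```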

Code Link: p5.js Web Editor | Nosy honeysuckle


  2. examples | p5.js

Sketch 1 – Head Controller – Tyler


Full Screen:


This sketch has two parts:

  1. Cropping only the head of one person on the camera (using PoseNet nose and eye locations). Once the x and y position of the nose is taken and the distance between the eyes is measured, the image get() function is able to capture only the desired pixels.
    – This was inspired by Rafael Lozano-Hemmer’s Zoom Pavilion. Getting the position and size of the head could be used for interaction that deals with the perspective of the audience (localizing where they are). In our screen space project we are considering using machine learning to identify whether people are wearing masks, so this method will be important for isolating parts of faces.
  2. Measuring head twist and tilt. Given the nose position relative to the eyes, the twist amount is a basic map calculation. The tilt works, but not well: it compares the nose-to-eye-line distance with the eye spacing. I think it would be better to compare the nose y position with the ear position. I show this data by moving the cropped head, so that you are always looking at yourself, but it looks like the person is simply translating their head. A better effect may be found for this input.
    – This turns your head into an (ineffective) mouse, but it could be used to estimate where on a big wall someone is looking.
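The two measurements above can be sketched roughly as follows. This is my reconstruction, not Tyler’s code; the crop padding and the twist-driven offset are assumptions:

```javascript
// Sketch of the head crop and twist estimate from PoseNet keypoints.
let video, poseNet;
let head = null;

// Pure helper: twist estimate in [-1, 1] from the nose's position
// between the two eyes (0 = facing forward).
function headTwist(noseX, leftEyeX, rightEyeX) {
  const mid = (leftEyeX + rightEyeX) / 2;
  const half = Math.abs(rightEyeX - leftEyeX) / 2;
  if (half === 0) return 0;
  return Math.max(-1, Math.min(1, (noseX - mid) / half));
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', poses => {
    if (poses.length === 0) return;
    const p = poses[0].pose;
    const eyeDist = Math.abs(p.rightEye.x - p.leftEye.x);
    const size = eyeDist * 3; // assumed padding around the face
    head = {
      img: video.get(p.nose.x - size / 2, p.nose.y - size / 2, size, size),
      twist: headTwist(p.nose.x, p.leftEye.x, p.rightEye.x),
    };
  });
}

function draw() {
  background(0);
  if (head) {
    // Move the crop with the twist, so you are always looking at yourself.
    image(head.img, width / 2 + head.twist * 100, height / 2);
  }
}
```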

Resources used:

Sketch 1 – Night’s Owl – Maryam Dehghani


In this sketch, I tried to make the mouse look like an owl’s eye: when the mouse is clicked, the background image turns black and the owl’s face appears, so we can hear its sound.
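The click interaction can be sketched like this. This is a minimal reconstruction, not the original code; day.jpg, owl.png, and owl.mp3 are placeholder asset names:

```javascript
// Minimal p5.js sketch of the interaction: clicking turns the scene
// to night, shows the owl, and plays its sound.
// day.jpg, owl.png and owl.mp3 are placeholder asset names.
let dayImg, owlImg, owlSound;
let night = false;

function preload() {
  dayImg = loadImage('day.jpg');
  owlImg = loadImage('owl.png');
  owlSound = loadSound('owl.mp3'); // requires the p5.sound library
}

function setup() {
  createCanvas(400, 400);
  noCursor(); // the circle below stands in for the owl-eye cursor
}

function draw() {
  if (night) {
    background(0);
    image(owlImg, 100, 100, 200, 200);
  } else {
    image(dayImg, 0, 0, width, height);
  }
  fill(255, 200, 0);
  circle(mouseX, mouseY, 24); // mouse drawn as an owl's eye
}

function mousePressed() {
  night = !night;
  if (night && owlSound) owlSound.play(); // guard: sound may not be loaded yet
}
```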
Edit Link:

Here is the code I used: