This is my first real experience with p5.js, and I have to say, I didn’t expect it to be this much fun!
I wanted to experiment with a few things, especially with randomizing coded objects: in this case the snowflakes, their positions, and their falling. The code works on an array of numbers, where each array index is an x position on the canvas and each element stores that flake’s y position. Every frame, it draws a point at each (index, value) pair, adds a random amount to each element so the flake falls, and resets an element once its flake leaves the canvas.
Also, adding the “mouse dragged” function was meant to speed up how quickly the snowflakes accumulate at the bottom, so that over time it looks like snow is filling the canvas: the bottom fills up with snowflake dots, which creates a fun interaction between the code and the user.
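The array technique described above can be sketched roughly like this (a minimal, hypothetical version; the function name and fall speed are my own, not from the original sketch):

```javascript
// One array entry per x column; the value is that flake's y position.
// Advance each flake by a random amount; reset it to the top (y = 0)
// once it falls past the bottom of the canvas.
function stepFlakes(flakes, canvasHeight) {
  return flakes.map(y => {
    const next = y + Math.random() * 3;      // random fall speed
    return next > canvasHeight ? 0 : next;   // wrap back to the top
  });
}

// In p5.js this would be driven from draw(), roughly:
// function draw() {
//   flakes = stepFlakes(flakes, height);
//   flakes.forEach((y, x) => point(x, y));  // x = array index
// }
```

The mouseDragged() speed-up would just run the same update extra times per frame while dragging.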
I wanted to practice what I have managed to pick up so far, which is still very basic. My bear appears friendly on the outside, but once you click on it, it turns evil.
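A minimal sketch of that click interaction: a single boolean, flipped in p5’s mousePressed(), decides which bear face to draw. The function and variable names here are my own, not from the original sketch.

```javascript
let evil = false;

// Flip the bear's mood: friendly <-> evil.
function toggleMood(state) {
  return !state;
}

// p5.js glue:
// function mousePressed() { evil = toggleMood(evil); }
// function draw() { evil ? drawEvilBear() : drawFriendlyBear(); }
```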
Get the experience here:
For this first sketch I wanted to explore ml5.js and the COCO object-detection model to play around with the concept of merging an analog object with its digital counterpart. I was partially successful.
Intended user experience: 1) when the user throws a physical tennis ball in front of their camera, a virtual ball appears and follows the physical ball as it moves up and down in front of the camera. When the physical and virtual ball (tracking together) reach a certain height, a “floor” is drawn, and 2) the virtual ball takes on a life of its own: it continues to draw to the screen, now moving independently of the physical tennis ball, and drops from its height onto the “floor,” bouncing until it comes to a stop.
Results: I was able to get COCO to recognize the analog object (physical tennis ball) and then draw a virtual ball to the screen at the position of the physical tennis ball, drawing a “floor” (rectangle) when the balls reached a certain height. I was unable to decouple the virtual ball from the tracked object and achieve the second part of the intended user experience.
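The working part (part 1) could be sketched like this. It assumes ml5’s COCO-SSD detector, whose callback receives detections shaped like `{label, confidence, x, y, width, height}` and labels tennis balls as `"sports ball"` (the COCO class name); the trigger threshold and function name are my own illustrative choices.

```javascript
const FLOOR_TRIGGER_Y = 100; // ball centre above this y => draw the floor
                             // (smaller y is higher on the canvas)

// detections: array of {label, confidence, x, y, width, height}
function trackBall(detections) {
  const ball = detections.find(d => d.label === 'sports ball');
  if (!ball) return null;
  const cx = ball.x + ball.width / 2;   // centre of the bounding box
  const cy = ball.y + ball.height / 2;
  return { x: cx, y: cy, drawFloor: cy < FLOOR_TRIGGER_Y };
}

// p5.js glue, roughly:
// const state = trackBall(latestDetections);
// if (state) { ellipse(state.x, state.y, 40); if (state.drawFloor) rect(0, height - 20, width, 20); }
```

Decoupling for part 2 would mean, once `drawFloor` fires, switching the virtual ball to its own velocity/gravity state instead of reading `state.x/state.y` each frame.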
Video of the working components (part 1 of the user experience) (YouTube)
Code (on P5.js)
For this sketch, I wanted to learn more about arrays.
I went through this tutorial: https://happycoding.io/tutorials/p5js/arrays
and made orange squares fall, using an array
Link to code: https://editor.p5js.org/victoriag/sketches/9iU8XtbuC
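The falling-squares idea from the Happy Coding arrays tutorial boils down to one y position per square, advanced each frame. A minimal sketch (speed, sizes, and the wrap-around reset are illustrative, not taken from the linked code):

```javascript
// Move every square down by `speed`, wrapping back to the top
// once it passes the bottom of the canvas.
function fallSquares(ys, speed, canvasHeight) {
  return ys.map(y => (y + speed) % canvasHeight);
}

// p5.js usage:
// let ys = [0, 100, 200];
// function draw() {
//   background(255);
//   ys = fallSquares(ys, 2, height);
//   fill('orange');
//   ys.forEach((y, i) => square(i * 50, y, 20));
// }
```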
Sketch 1 : Pop the Bubble
For this sketch, I wanted to experiment with the mouseOver function & adding sound to the action. In this case, I used GIFs to create the bubble, and once the mouse hovers over the GIF, the bubble pops with a sound effect.
To play this game, make sure to keep the volume up & use Safari for the fullscreen mode.
Editable Code : https://editor.p5js.org/Shipra_b/sketches/hj8ajIOyP
FullScreen Link : https://editor.p5js.org/Shipra_b/full/hj8ajIOyP
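The hover check behind a bubble pop is a circular hit test against the mouse position. A minimal version (the function name and radius handling are mine; the original uses a GIF element plus a p5.sound effect on pop):

```javascript
// True when the mouse (mx, my) is inside a bubble centred at
// (bx, by) with radius r. Compares squared distances to avoid sqrt.
function isOverBubble(mx, my, bx, by, r) {
  const dx = mx - bx, dy = my - by;
  return dx * dx + dy * dy < r * r;
}

// In p5.js, draw() would test isOverBubble(mouseX, mouseY, ...) each
// frame and, on the first hover, swap in the popped GIF and play the sound.
```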
I created this sketch with the help of an example from the p5.js website by Keith Peters, titled ‘Bouncy Bubbles’.
I modified the bubbles to scatter and speed up when the mouse runs over them, and the colours are randomised to create a fun visual effect.
Here’s the fullscreen experience: https://editor.p5js.org/anushamenon/full/FtS_ClXNHu
Here’s the editable code: https://editor.p5js.org/anushamenon/sketches/FtS_ClXNHu
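The scatter-on-hover tweak could look something like this: when the mouse is inside a bubble, give it a velocity kick away from the mouse. This is a rough sketch with my own names and kick factor, not the actual modified Bouncy Bubbles code.

```javascript
// bubble: {x, y, r, vx, vy}. If the mouse is inside the bubble,
// accelerate it away from the mouse, proportional to the offset.
function scatterOnHover(bubble, mx, my) {
  const dx = bubble.x - mx, dy = bubble.y - my;
  if (dx * dx + dy * dy < bubble.r * bubble.r) {
    bubble.vx += dx * 0.1; // push away horizontally
    bubble.vy += dy * 0.1; // and vertically
  }
  return bubble;
}

// p5.js glue: bubbles.forEach(b => scatterOnHover(b, mouseX, mouseY));
```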
I’m hoping eventually to be able to replace my face with a whole new image in p5, so this is a logical step toward that. I’m using face-api to track my face and project it onto a new video layer; the stroke function is used to draw the outer points of my face.
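A small piece of that projection step, hedged: assuming the face tracker hands back landmark points in pixel coordinates as `{x, y}` objects (exact property names vary between face-api versions), the points usually need mirroring so the webcam behaves like a mirror. The function name is mine.

```javascript
// Flip landmark points horizontally across a video of the given width,
// so the drawn face outline matches what a mirror would show.
function mirrorPoints(points, videoWidth) {
  return points.map(p => ({ x: videoWidth - p.x, y: p.y }));
}

// p5.js glue:
// stroke(255);
// mirrorPoints(landmarks, width).forEach(p => point(p.x, p.y));
```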
This sketch draws floating viruses to simulate the COVID-19 virus. I used PoseNet to track the position of the nose with the webcam, and I wrote a class and an array to draw the viruses floating in the air. Then I made the nose attract the viruses; the interaction is similar to the process of a person catching COVID-19.
Code Link: p5.js Web Editor | Nosy honeysuckle (p5js.org)
- examples | p5.js (p5js.org)
Full Screen: https://editor.p5js.org/tbeattyOCAD/full/7cYwInKpR
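The nose-attraction step described above could be sketched like this: each virus drifts a fraction of the way toward the tracked nose every frame. The pull strength and function name are my assumptions, not the original values.

```javascript
// Move a virus a fraction (`pull`) of the remaining distance toward
// the nose each frame, so viruses home in smoothly.
function attract(virus, nose, pull = 0.05) {
  return {
    x: virus.x + (nose.x - virus.x) * pull,
    y: virus.y + (nose.y - virus.y) * pull
  };
}

// With ml5's PoseNet, the nose keypoint is exposed as pose.nose
// ({x, y, confidence}), so draw() would do roughly:
// viruses = viruses.map(v => attract(v, pose.nose));
```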
This sketch has two parts:
- Cropping only the head of one person on the camera (using the PoseNet nose and eye locations). Once the x and y position of the nose is taken and the distance between the eyes is measured, the get() function can capture only the desired pixels.
– This was inspired by Rafael Lozano-Hemmer‘s Zoom Pavilion. Getting the position and size of the head could be used for interaction that deals with the perspective of the audience (localizing where they are). In our screen space project we are considering using machine learning to identify whether people are wearing masks, so this method will be important for isolating parts of faces.
- Measuring head twist and tilt. Given the nose position relative to the eyes, the twist amount is a basic map() calculation. The tilt works, but not well: it compares the nose-to-eye-line distance with the eye spacing. I think it would be better to compare the nose’s y position with the ear position. I show this data by moving the cropped head, so that you are always looking at yourself, but it looks like the person is simply translating their head. A better effect could be found for this input.
– This turns your head into an (ineffective) mouse. But it could be used to estimate where on a big wall someone is looking.
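The twist estimate described above amounts to mapping where the nose sits between the eyes onto a -1..1 range (p5’s map() call, inlined here so the function is self-contained). The exact output range in the original is my guess.

```javascript
// Twist estimate from three x coordinates: 0 means facing forward,
// -1/+1 means the nose has reached the left/right eye's x position.
function headTwist(noseX, leftEyeX, rightEyeX) {
  const mid = (leftEyeX + rightEyeX) / 2;
  const halfSpan = (rightEyeX - leftEyeX) / 2;
  return (noseX - mid) / halfSpan;
}

// The crop itself would then be something like:
// const head = video.get(noseX - eyeDist, noseY - eyeDist, eyeDist * 2, eyeDist * 2);
```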
In this sketch, I tried to make the mouse cursor look like an owl’s eye. When the mouse is clicked, the background image turns black and the owl’s face appears, so we can hear its sound.
Edit Link: https://editor.p5js.org/dehghani.maryam69/sketches/qYT-oOy7g
Here is the code I used: