Buzz

Ubiquitous Computing Experiment 6

P5.js, Tensorflow.js & PoseNet

Alicia Blakey

Github

 

Project

This project combines PoseNet, a machine learning model that estimates the location of specific keypoints on the body, with p5.js graphics painted over a live video feed. PoseNet works by tracking points on the face and body and assigning each estimate a confidence score for how likely the match is. Ml5.js is a library that provides a friendly, well-documented API for running machine learning algorithms in the browser, and it is built on top of Tensorflow.js, a javascript library for training and running machine learning models in the browser and node.js.
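To give a rough idea of what that output looks like: each detected pose carries a list of keypoints, and every keypoint has a position and a confidence score between 0 and 1. Below is a minimal sketch of reading those values, adapted loosely from the ml5 examples (the 0.2 threshold is just an illustrative cutoff, not a value from this project):

// Draw a dot for every keypoint the model is reasonably confident about
function drawKeypoints() {
  for (let i = 0; i < poses.length; i++) {
    const pose = poses[i].pose;
    for (let j = 0; j < pose.keypoints.length; j++) {
      const keypoint = pose.keypoints[j];
      if (keypoint.score > 0.2) {
        fill(255, 0, 0);
        noStroke();
        ellipse(keypoint.position.x, keypoint.position.y, 10, 10);
      }
    }
  }
}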

 

Concept

To get more familiar with Ml5 I decided to start with PoseNet, so I could understand this part of machine learning before combining it with other Ml5 functions later. Learning from the example code provided by Nick Puckett, I decided to explore interaction through hand movement tracked with PoseNet. The keypoint confidence score isn’t very reliable for the hands themselves, but it tracks the wrists well enough. The intention was to create a simple interaction that encourages movement while you are at the computer. Now that it’s summer and I consistently see people swatting away the little mosquitos that have come out this year, I thought this would be a fun way to get moving through a simple augmented-reality-style simulation built with ml5.js and PoseNet.
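Because only the wrists track reliably, the interaction really comes down to pulling the two wrist keypoints out of each pose and ignoring anything below a confidence threshold. Here is a minimal sketch of that filtering (the getWrists name and the 0.3 threshold are my own, not part of the project code):

// Collect the positions of confidently detected wrists from all poses
function getWrists() {
  const wrists = [];
  for (let i = 0; i < poses.length; i++) {
    const keypoints = poses[i].pose.keypoints;
    for (const keypoint of keypoints) {
      if ((keypoint.part === 'leftWrist' || keypoint.part === 'rightWrist') &&
          keypoint.score > 0.3) {
        wrists.push(keypoint.position); // {x, y} in video coordinates
      }
    }
  }
  return wrists;
}

Each returned position can then be compared against a mosquito’s position on the canvas to decide whether it has been swatted.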


Process

After setting up a local server in my terminal, my first step was to get the video working and call ml5.poseNet() with two arguments, video and modelReady. I was able to set this up using the examples from the ml5js.org website. I then coded in the modelReady function, referred to as a callback. In this project I also started to define my variables a little differently and learned why using let is sometimes better. Defining a global variable with let is preferable to var because let is block-scoped and prevents accidental redeclaration, while still letting you re-assign the variable when needed. While using the examples from the Ml5 website I tried to imitate their style, which is why I decided to change the way I defined variables in this instance.

let video;
let poseNet;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  // PoseNet takes two arguments: the video and the modelReady callback
  poseNet = ml5.poseNet(video, modelReady);
  // Store new poses as they arrive; the debugger statement pauses here
  // so the data can be inspected in the browser's Sources panel
  poseNet.on('pose', function(results) {
    poses = results;
    debugger;
  });
  video.hide();
}
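The modelReady callback passed in above isn’t shown in the snippet; following the ml5 examples, a minimal version just confirms in the console that the model has loaded:

function modelReady() {
  // Runs once PoseNet has finished loading its weights
  console.log('Model ready');
}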

 

After getting PoseNet to work, and learning from Daniel Shiffman’s Ml5 video, I went to the p5.js website to look for pre-existing graphics and initially dropped in the forces and particle system example to see how it worked with the code. Then I made a simple gif and looked up the preload function in p5.js so the image could be loaded before setup. After testing, I wanted to see what the confidence values looked like so I could understand how the libraries were sharing data. By going into the Sources panel of the browser I was able to see the data coming from the poses.
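As a rough sketch of how those pieces fit together, the gif can be loaded in preload() and then drawn over the video at a tracked point inside draw(). The filename mosquito.gif is a stand-in rather than the actual project asset, and this reuses the getWrists() helper sketched in the Concept section:

let mosquito;

function preload() {
  // Fetch the gif before setup() so it is ready when the sketch starts
  mosquito = loadImage('mosquito.gif');
}

function draw() {
  // Draw the video feed onto the canvas
  image(video, 0, 0, width, height);
  // Draw the gif on top of the video at each confidently detected wrist
  for (const wrist of getWrists()) {
    image(mosquito, wrist.x - 25, wrist.y - 25, 50, 50);
  }
}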

 

Conclusions

Thinking of future iterations, now that I have a basis with Ml5 and PoseNet, I think an interesting adaptation would be to use PoseNet to maneuver graphics that interact with each other. With a combination of interactive graphics and other images called up through poses in the browser, I could add to the existing project in more intricate ways. I could see a program like this used as a motivator to get up from your desk and create interactions in front of your computer rather than remaining sedentary. It would also be nice to use it as a prompt tied to fitness tracking data when you have been in one position for too long.

References

https://p5js.org/examples/

https://ml5js.org

https://www.youtube.com/watch?v=jmznx0Q1fPO
