[Screen capture of the final project]

PoseNet Generative Art

Project Description:

The overall objective of this project was to capture movement data in order to see which gestures users repeat while using the experience. PoseNet Generative Art is a machine learning project built on PoseNet, a model for real-time human pose estimation that tracks the position of a user's skeleton. PoseNet was used through ml5.js, a JavaScript library that makes machine learning algorithms and models accessible in the web browser. The web application works by tracking a user's movement, which triggers a specific set of geometric shapes on a p5.js canvas to respond by producing abstract art based on predetermined tracking points.
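As a rough sketch of how these pieces fit together (this assumes ml5.js 0.x and its PoseNet API, and is not the project's exact code), a minimal p5.js sketch that wires PoseNet to a webcam feed and drives a shape from a tracked keypoint might look like this:

```javascript
let video;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  // Load PoseNet through ml5 and store each batch of pose results
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (results) => { poses = results; });
}

function draw() {
  image(video, 0, 0, width, height);
  if (poses.length > 0) {
    // ml5 exposes named keypoints such as pose.nose with x/y coordinates
    const nose = poses[0].pose.nose;
    fill(255, 0, 150);
    noStroke();
    circle(nose.x, nose.y, 40);
  }
}
```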

Link to GitHub Repo:

https://github.com/aljumaine/Machine-Teaching-Learning_Assignment-2

Link to p5.js Sketch:

https://editor.p5js.org/aljumaine/sketches/dw6-Vkwen

Selected Example Description:

Prior to completing this documentation, I selected my first iteration of the project to expand on. That first iteration focused on getting ml5.js to work with PoseNet and p5.js: I wanted to capture data about a user's poses and use PoseNet's tracking data to move geometric shapes on a p5.js canvas.
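For example, one simple way to let tracking data move a shape (a hypothetical mapping, not the project's actual one) is to tie a wrist keypoint to a shape's position and size:

```javascript
// Assumes `poses` is filled by poseNet.on('pose', ...) as in the sketch above.
function drawPoseShape() {
  if (poses.length === 0) return;
  const wrist = poses[0].pose.rightWrist; // named keypoint from ml5's PoseNet
  if (wrist.confidence > 0.2) {
    // Raising the wrist grows the square; the 10-80 range is arbitrary
    const size = map(wrist.y, height, 0, 10, 80);
    fill(0, 150, 255);
    noStroke();
    square(wrist.x, wrist.y, size);
  }
}
```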

Inspiration:

For this project, I was inspired by the complex and beautiful work produced by collectives and artists such as teamLab (A Forest Where Gods Live, Planets) and Studio Joanie Lemercier (dome projections). Both use projection mapping, interactive generative art and, in some cases, machine learning, and both inspired me to find a way to incorporate a form of play and movement into machine learning technologies and position tracking.

How the idea was expanded:

Originally, I had wanted to create an experience in which the geometric shapes were displayed as an overlay on top of a live video feed, so that the end-user would see the shapes over their body as they moved during the interaction. I decided it would be best to display two screens instead, to make the interaction clear to the end-user. The first screen, or canvas, would display a live feed of the user in front of a laptop screen, and the second would display the shapes moving around during the interaction. This was a departure from how I originally wanted to communicate the interaction to the end-user.

What was successful:

I was able to successfully get ml5.js and PoseNet to track the positions of a user's body using live video from a laptop's web camera. The second part of my project focused on setting up a second p5.js canvas beside the live feed. This second canvas displays the geometric shapes the user can control on the left side, while the live video feed is displayed on the right side.
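A minimal sketch of this two-panel layout (here assuming a single wide canvas split in half, rather than two separate canvases) could place the shapes on the left and the live feed on the right:

```javascript
let video;
let poses = [];

function setup() {
  createCanvas(1280, 480); // left half: shapes, right half: live feed
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  const poseNet = ml5.poseNet(video);
  poseNet.on('pose', (results) => { poses = results; });
}

function draw() {
  background(20);
  image(video, 640, 0, 640, 480); // live feed on the right side
  if (poses.length > 0) {
    // Pose coordinates live in the 640x480 video space, which lines up
    // with the left-hand panel, so the shape mirrors the user's motion
    const nose = poses[0].pose.nose;
    fill(120, 220, 180);
    noStroke();
    circle(nose.x, nose.y, 50);
  }
}
```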

What was not successful:

Unfortunately, I was not successful in incorporating a very complex generative art sketch I had made in p5.js. I had wanted the complex designs within that artwork to be produced by a user's movement, but I had to scale the idea back in order to produce something that worked well. I wish my final project could have been much more random or generative in its appearance; it was challenging to correctly incorporate complex code that would look interesting to the end-user. I was also not successful in adding more variety to the shapes and background colors displayed on the art canvas. Finally, I had a challenge finding a way to record the poses that were used the most in order to represent that data with a visualization.
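One possible direction for that last piece (a sketch only; the gesture categories and thresholds here are made up, and this is not something the project implements) would be to classify a coarse gesture on each frame and keep a running tally that a visualization could draw from:

```javascript
// Hypothetical gesture categories; names and thresholds are placeholders.
const gestureCounts = { handsUp: 0, handsDown: 0, other: 0 };

function tallyGesture(pose) {
  const lw = pose.leftWrist;
  const rw = pose.rightWrist;
  const nose = pose.nose;
  if (lw.confidence < 0.2 || rw.confidence < 0.2) {
    gestureCounts.other += 1;
  } else if (lw.y < nose.y && rw.y < nose.y) {
    gestureCounts.handsUp += 1; // both wrists above the nose
  } else if (lw.y > nose.y && rw.y > nose.y) {
    gestureCounts.handsDown += 1; // both wrists below the nose
  } else {
    gestureCounts.other += 1;
  }
}

// The tallies could then feed a simple bar-chart visualization:
function drawGestureBars() {
  const labels = Object.keys(gestureCounts);
  const total = labels.reduce((sum, k) => sum + gestureCounts[k], 0) || 1;
  labels.forEach((label, i) => {
    fill(100, 200, 255);
    rect(0, i * 30, (gestureCounts[label] / total) * width, 25);
    fill(255);
    text(label, 5, i * 30 + 17);
  });
}
```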

What could be improved with more time:

If time were not a constraint for this project, I would have worked on getting my complex p5.js sketch to work as intended. I would also have incorporated more than one art canvas, so that there would be not two canvases displayed but four, arranged in a square format: the first canvas would display the end-user, and the second, third and fourth canvases would display very different generative p5.js sketches, each generating art individually based on the user's movement. I also wanted to find a way to collect and compile the most common poses in order to display them in some form of data visualization in a p5.js canvas sketch, recording and displaying data about the poses users make most frequently.
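As a rough sketch of how that four-panel layout might be driven from a single pose (the panel contents below are trivial placeholders, not the generative sketches I had in mind):

```javascript
let video;
let poses = [];

function setup() {
  createCanvas(1280, 960); // a 2x2 grid of 640x480 panels
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  const poseNet = ml5.poseNet(video);
  poseNet.on('pose', (results) => { poses = results; });
}

function draw() {
  background(0);
  image(video, 0, 0, 640, 480); // top-left panel: the end-user
  if (poses.length === 0) return;
  const nose = poses[0].pose.nose;

  // Each remaining panel interprets the same tracked point differently
  push();
  translate(640, 0); // top-right panel
  noStroke();
  fill(255, 0, 100);
  circle(nose.x, nose.y, 40);
  pop();

  push();
  translate(0, 480); // bottom-left panel
  fill(0, 255, 150);
  square(nose.x, nose.y, map(nose.y, 0, 480, 10, 60));
  pop();

  push();
  translate(640, 480); // bottom-right panel
  stroke(255);
  line(0, nose.y, 640, nose.y);
  pop();
}
```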

UI / Interface & Visuals:

Please find below a conceptual design of the user interface I would have wanted to incorporate for this project. Note that this UI reflects my original idea. I’d like to use these tools to build a more fleshed-out experience that is complex and highly interactive, and I’d love to include three sketches that can be controlled by the user's movement in real time using ml5.js, PoseNet and p5.js.

Next steps:

I’d like to continue learning how to use ml5.js, PoseNet and p5.js in order to create an accessible conversation about how machine learning and position tracking work. There are not many resources beyond ml5.js that provide an entry-level understanding of how to experiment with these technologies as an artist or enthusiast. From this project I learned a lot about machine learning, which I had previously thought to be completely inaccessible based on an earlier attempt with a more complex platform. I’d also love to build some experiments using all of the above-noted technologies, with better-documented code, so that other users who want to try out simple applications have an easier, more accessible entry point that is not daunting or intimidating.
