I expanded upon the PoseNet idea of tracking the human figure. My version instead tracks the human face in real time and overlays virtual add-ons that manipulate the person's appearance, creating something similar to an AR filter.
The inspiration for this work came from my fascination with real-time body tracking, which I found interesting in the PoseNet project; however, I wanted to build mine to be more fun and interactive for the user to play around with.
Using a pre-trained PoseNet model, I was able to pick out which parts of the body I wanted the algorithm to focus on and manipulate. Through this I learnt how to import 2D images into the canvas and how to map them onto body parts. It was also my first time using video, so it was interesting to work with a live feed in real time.
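The mapping step can be sketched with a small helper. This is a hypothetical illustration, not the project's actual code: it assumes PoseNet-style keypoints (as returned by ml5.js, where each keypoint has a `part` name like `'leftEye'` and a `position`), and the scale factor of 2.5 is an invented value of the kind that would be tuned by eye.

```javascript
// Hypothetical helper: given PoseNet-style keypoints, work out where to
// draw a glasses image so it sits over the eyes. The keypoint names
// ('leftEye', 'rightEye') follow ml5.js PoseNet output; the 2.5 scale
// factor is illustrative and would be tuned by hand.
function glassesPlacement(keypoints) {
  const find = (part) => keypoints.find((k) => k.part === part).position;
  const left = find('leftEye');
  const right = find('rightEye');
  const eyeDist = Math.hypot(right.x - left.x, right.y - left.y);
  return {
    x: (left.x + right.x) / 2, // centre the image between the eyes
    y: (left.y + right.y) / 2,
    w: eyeDist * 2.5,          // width scales with the face's apparent size
  };
}
```

Because the width is derived from the distance between the eyes, the overlay grows and shrinks as the person moves towards or away from the camera.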
There were also challenges here, such as lining up the added objects so they matched the face exactly, but through experimentation and some maths I was able to get it right. Layering the different objects so they didn't clash was also a challenge; fortunately, I was able to tackle this by resizing the images and using the 'lerp' function to move the objects smoothly around the screen. In all, I achieved my main goal and am proud of the outcome.
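The smoothing idea can be shown in miniature. This is a sketch rather than the project's own code: each frame, the drawn position moves a fraction of the way towards the newly detected position instead of jumping straight to it, which hides the jitter in the tracking. The `lerp` here matches p5.js's built-in `lerp()`, and the 0.2 smoothing amount is an illustrative value.

```javascript
// Linear interpolation, as in p5.js's lerp(a, b, t).
function lerp(a, b, t) {
  return a + (b - a) * t;
}

let drawnX = 0;
let drawnY = 0; // position the overlay is actually drawn at

// Called once per frame with the latest detected keypoint position.
// amount (0..1) controls how quickly the overlay catches up: small
// values give smoother but laggier motion. 0.2 is an illustrative guess.
function smoothTowards(targetX, targetY, amount = 0.2) {
  drawnX = lerp(drawnX, targetX, amount);
  drawnY = lerp(drawnY, targetY, amount);
  return { x: drawnX, y: drawnY };
}
```

Calling `smoothTowards` every frame makes the overlay glide towards the face rather than snapping, which is what gives the filter its smooth transitioning motion.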
I would love to expand this idea to the point where I can track and manipulate the look of the entire body for different types of people, allowing them to see model representations of themselves in different looks, e.g. a different shirt, hairstyle, etc. I also wanted to layer the video so that I could add some 3-dimensional objects into it to create depth. As an interface, this could be a good way for people to virtually try on different clothes and looks without physically trying them on in the real world.