Real-time Human Body Tracking Explorations
Interacting/Creating with your face: Painting with my nose.
Track your nose and use it to draw a picture.
Questions I’m trying to answer:
In thinking of this idea, I wondered: when a person is using only their nose as a controller, how can they start and stop the drawing action so that it doesn’t end up being one continuous line drawing all the time?
Solution 1: Maybe I could track the distance of the nose from the webcam, so that whenever the person leans forward or backward the drawing action would turn on or off.
Solution 2: Another idea would be to use a different part of the body to toggle the drawing action.
Creatability, though it makes working with PoseNet much simpler, is restrictive: in simplifying the process, it only allows you to track one body point at a time. I will be returning to the PoseNet with ml5 tutorial, which does the same thing but with less abstraction.
One thing I like about Creatability is that, since a lot of the code is abstracted, you can refer to parts using strings such as ‘nose’ or ‘rightEye’ instead of having to remember indexes in the pose array, as with PoseNet. The library would work well in instances where you only need to track a single part of the body.
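With plain PoseNet through ml5, you can get the same by-name convenience back with a small helper. This is a sketch under my assumptions about the shape of the pose object ml5’s poseNet returns (a `keypoints` array where each entry has a `part` string, a `position`, and a confidence `score`); `getPart` and `minScore` are names I made up for illustration.

```javascript
// Look up a PoseNet keypoint by its part name ('nose', 'rightEye', ...)
// instead of remembering its index in the keypoints array.
function getPart(pose, partName, minScore = 0.2) {
  const kp = pose.keypoints.find((k) => k.part === partName);
  // Drop low-confidence detections so noisy frames don't jitter the sketch
  return kp && kp.score >= minScore ? kp.position : null;
}
```

In the sketch’s pose callback this would read something like `const nose = getPart(poses[0].pose, 'nose');`, returning `null` when the part wasn’t confidently detected.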
Trying to use leaning back and forth to change the background of an image. Below are the results of my explorations. To determine the distance from the screen, I calculated the distance between my eye and my nose: when a face is closer to the screen, the distance between the two is larger, and it is smaller when the person leans back. I then hid the video and tested leaning back and forth to change the background of my sketch, as seen in the green and red. I set a range whereby, when the distance was less than 25 or greater than 70, an isDrawing boolean value was set to false.
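The lean test above can be sketched as a pair of small functions. The 25/70 thresholds are the values I arrived at by testing and will vary with camera and face size; inside a p5 sketch the `euclidean` helper would just be p5’s `dist()`, but a plain version is included so the logic stands on its own.

```javascript
// Euclidean distance between two keypoint positions (p5's dist() equivalent)
function euclidean(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Drawing is on only while the eye-to-nose distance sits inside the range:
// leaning back shrinks the distance, leaning in past the far threshold
// grows it, and either extreme pauses the drawing.
function isDrawing(nose, eye, min = 25, max = 70) {
  const d = euclidean(nose, eye);
  return d >= min && d <= max;
}
```

In the draw loop this boolean would gate whether the current nose position gets painted, and could also drive the green/red background swap.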
I was not able to create the drawing-with-my-nose effect because, for some reason, the drawing action is flipped so that there is a mirroring effect, which makes it confusing when trying to think and draw. Additionally, I couldn’t figure out how to make a continuous line instead of dots. When tracking the mouse I would have been able to use the mouseX, mouseY and pmouseX, pmouseY positions, but I wasn’t able to do that when tracking my nose. I believe this would be possible if I created an array that stored all the points where my nose had been, then used the draw function to draw those points instead. I decided to continue exploring other things. Below is a screenshot from my attempts at drawing with my nose.
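Both problems have a fairly small sketch-level fix: flipping the x coordinate against the canvas width undoes the mirroring, and remembering the previous frame’s nose position gives the nose its own pmouseX/pmouseY so each frame draws a short segment instead of a dot. This is a minimal sketch of that idea; `mirrorX`, `noseSegment`, `pNose`, and `canvasWidth` are names I’m inventing here, with `canvasWidth` standing in for p5’s `width`.

```javascript
// Flip x so the canvas behaves like a mirror: moving right draws right
function mirrorX(x, canvasWidth) {
  return canvasWidth - x;
}

let pNose = null; // previous nose position, the nose's own pmouseX/pmouseY

// Each frame, connect the previous nose position to the current one.
// Returns null on the first frame, when there is nothing to connect yet.
function noseSegment(nose, canvasWidth) {
  const current = { x: mirrorX(nose.x, canvasWidth), y: nose.y };
  const segment = pNose ? { from: pNose, to: current } : null;
  pNose = current;
  return segment;
}
```

In the p5 draw loop this would become `const s = noseSegment(nose, width); if (s) line(s.from.x, s.from.y, s.to.x, s.to.y);`, which also makes the array-of-past-points approach unnecessary unless you want to redraw the whole trail.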
Observations: Using leaning as an interaction turned out not to be so great. It was hard to control the reaction to the movements because you had to stay still and lean at the same time. Additionally, it just didn’t feel natural, especially for drawing. Perhaps a different body part would have been better, although I feel this just adds more complexity to the interaction.
Reading through past projects and works:
Text Rain (1999) by Camille Utterback is an art installation where one’s body is tracked and used to interact with digital text. The piece uses color, black or white, to detect where the body is and then animates the text accordingly. I think it would be interesting to see if I can achieve something similar with the body-tracking technologies we have today. It might be harder to achieve with PoseNet, as it doesn’t give you the region of the body, only skeleton points; perhaps it would work with the Kinect. The animation below is an example from the PoseNet Sketchbook that tracks a point on the body as a person dances, then draws animated text along that path. I will look into this further in our interaction explorations for weeks 5 & 6.
Technology as a tool vs medium?
In my explorations this week I keep coming across artists and technologists who are asking “what can the technology do for me” and the idea of this body-tracking tech as a tool to convey meaning, e.g. “We quickly discovered that PoseNet was only interesting to Bill if it helped him convey meaning. The tech wasn’t an end in itself, it was only useful to him as a tool for artistic expression.” I came across the same sentiment on Utterback’s website, and it is something I will be reading up on more this week. Utterback links to this chapter as a reference for her Text Rain project on her site: The Tool Model: Augmenting the Expressive Power of the Hand – Inventing the Medium – Principles of Interaction Design as a Cultural Practise. “A lesson from the design of the Google experiment was that the technology itself shouldn’t be the star of the show…push the boundaries of digital interaction design beyond the current standard of clicks, presses, and taps” – Maya Man