Experiment 1 – Body As Controller

Experiment 1: Body as Controller in 3 parts

Nooshin Mohtashami

The goal of this experiment is twofold:

  1. To explore the possibility of creating interactive virtual body art inspired by my related-works research, and
  2. To use the computer’s built-in webcam and body movement, instead of the attached mouse or keyboard, to initiate interactions with the computer program. Specifically, to create two ways to perform the action CLICK and two ways to perform the action SCROLL.

The tools and libraries used in this experiment are p5.js and ml5.js, which provide immediate access to pre-trained machine-learning models through a web browser (and much more).

The goals were reached in 3 separate parts listed below.

Part 1: Moving Sparkles

p5.js Present: https://preview.p5js.org/nooshin/present/QoWPCen-3

p5.js Editor:  https://editor.p5js.org/nooshin/sketches/QoWPCen-3

This is a simple sketch to show how controlled movement (scroll) can be implemented using a body pose.


Fig 1 – Moving Sparkles


Fig 2 – lifting shoulder to move the sparkles

In this sketch, lifting your shoulder toward your ear moves the sparkles on the screen towards the lifted shoulder’s side (please note that the camera view on the screen is reversed). That is, lifting the left shoulder toward your ear moves the sparkles to the left of the screen, and lifting the right shoulder moves them to the right.

This is not a very sophisticated program: the sparkles can run “out” of the screen, and if the user starts by lifting the right shoulder first, the sparkles disappear and the user can get confused. However, this was one of my first experiments, and I learned a lot about the basics of working with the p5.js and ml5.js libraries!
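
To give an idea of how this works, here is a minimal sketch of the shoulder-lift “scroll”, assuming ml5.js’s classic PoseNet API (ml5.poseNet) and its named keypoints; the 60-pixel lift threshold and the drift speed are illustrative values, not the exact numbers from Moving Sparkles.

let video;
let pose;                 // latest detected body pose
let sparkleX;             // horizontal position of the sparkle cluster

function setup() {
  createCanvas(640, 480);
  sparkleX = width / 2;
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  const poseNet = ml5.poseNet(video);            // load the pre-trained model
  poseNet.on('pose', results => {
    if (results.length > 0) pose = results[0].pose;
  });
}

function draw() {
  image(video, 0, 0, width, height);

  if (pose) {
    // vertical gap between each shoulder and its ear; it shrinks when that shoulder is lifted
    const leftGap  = pose.leftShoulder.y  - pose.leftEar.y;
    const rightGap = pose.rightShoulder.y - pose.rightEar.y;

    // a clearly lifted shoulder acts as a "scroll" command
    if (leftGap < 60)  sparkleX -= 3;
    if (rightGap < 60) sparkleX += 3;
    sparkleX = constrain(sparkleX, 0, width);    // keep the sparkles on screen
  }

  // stand-in for the sparkles: a small cluster of random points
  stroke(255, 255, 0);
  strokeWeight(4);
  for (let i = 0; i < 20; i++) {
    point(sparkleX + random(-30, 30), height / 2 + random(-30, 30));
  }
}

Whether “left shoulder” ends up moving the sparkles toward the user’s left or the camera’s left depends on whether the video is mirrored, which is why the sketch above notes that the camera view on screen is reversed.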


Part 2: Bowie’s Bolt

p5.js Present: https://preview.p5js.org/nooshin/present/izwLGOi_N

p5.js Editor: https://editor.p5js.org/nooshin/sketches/izwLGOi_N

This is a sketch to show how state change (click) can be implemented using a body pose.


Fig 3 – Bowie’s Bolt


Fig 4 – wrist to eye to change the virtual make up colour

In this sketch, there are two pre-defined “virtual make-ups” drawn on the user’s face using p5.js’s vertex() function. The user can change the colour of the virtual make-up by touching the eye with the wrist, i.e. “clicking” it.
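
A minimal sketch of how such a pose-based “click” could work, assuming the same PoseNet setup as in the Part 1 sketch above; the 50-pixel touch radius and the two colours are illustrative assumptions, not the values from Bowie’s Bolt.

let colourIndex = 0;
let touching = false;                        // debounce: one touch = one "click"
const colours = ['#e63946', '#457b9d'];      // hypothetical make-up colours

function checkEyeClick() {
  if (!pose) return;
  // distance between the left wrist and the left eye keypoints
  const d = dist(pose.leftWrist.x, pose.leftWrist.y,
                 pose.leftEye.x,   pose.leftEye.y);

  if (d < 50 && !touching) {
    // the wrist has just reached the eye: treat it as one click and change colour
    colourIndex = (colourIndex + 1) % colours.length;
    touching = true;
  } else if (d >= 50) {
    // the wrist has moved away again: re-arm the click
    touching = false;
  }
}

The touching flag matters because the pose is reported every frame; without it, a single touch would cycle through the colours many times per second.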

One important learning from this experiment concerns the size of the drawings relative to the user’s face on screen. When I was programming this, I created the shape of the virtual make-up based on my distance from the webcam at the time, and although the virtual make-up moves on the screen with the model, it currently does not resize itself to stay proportional to the face. This is because I used absolute offsets from the x,y coordinates of the body parts to create the shape. For example, to start the drawing I used:
vertex(eyeL.x, eyeL.y-100);
vertex(eyeL.x-80, eyeL.y-100);
This draws a line from the left eye’s x-position to the left eye’s (x-80)-position, 100 pixels above the eye, while keeping the y-position constant. If the user moves further from the camera, the face looks smaller but the 80 and 100 remain constant, so the virtual make-up takes over the entire face (Fig 5). In a future version I would use dynamic values to draw the shapes, for example by calculating the distance between the ears to find the width of the face on the screen and using that as a reference for where to place the virtual make-up (a sketch of this idea follows Fig 5).

Fig 5 – the size of the virtual make up is not dynamic in the current sketch
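
Here is a minimal sketch of that dynamic-sizing idea, again assuming the PoseNet keypoints used above; the 0.5 and 0.6 scale factors are illustrative guesses for where the make-up corners should sit relative to the face width.

function drawMakeup() {
  if (!pose) return;
  const eyeL = pose.leftEye;

  // face width on screen: the distance between the two ears
  const faceWidth = dist(pose.leftEar.x,  pose.leftEar.y,
                         pose.rightEar.x, pose.rightEar.y);

  // offsets expressed as fractions of the face width instead of fixed pixels,
  // so the shape shrinks and grows as the user moves closer to or away from the camera
  const dx = faceWidth * 0.5;   // replaces the fixed 80
  const dy = faceWidth * 0.6;   // replaces the fixed 100

  fill(colours[colourIndex]);
  noStroke();
  beginShape();
  vertex(eyeL.x,      eyeL.y - dy);
  vertex(eyeL.x - dx, eyeL.y - dy);
  // ...the remaining vertices of the make-up shape, scaled the same way
  endShape(CLOSE);
}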


Part 3: Finger Painting

p5.js Present: https://preview.p5js.org/nooshin/present/tLrxxrEjb

p5.js Editor: https://editor.p5js.org/nooshin/sketches/tLrxxrEjb



Fig 6 – Finger painting on the screen



This is a 2-in-1 sketch where both movement (scroll) and state change (click) are implemented. I started with a useful video from Steve’s Makerspace on how to create finger painting on the screen, and extended it to include random colours and clearing the screen by bringing the thumb and index finger together. It was also important to create a state of “painting” vs. “not painting”, so that the screen doesn’t get cleared unintentionally if the thumb and index finger accidentally come close together while the user is painting on the screen (see the sketch after the instructions below).

Instructions are as follows:

  • First, raise your hand ✋ for the program to recognize it. A white pointer will appear on your index finger, meaning it’s ready.
  • To start painting: point your index finger 👆. The white pointer will turn red. You are now ready to paint on the screen.
  • To stop painting: open your hand and show all fingers ✋. You can move your hand to a different location, point your index finger, and paint more.
  • To clear the screen: stop painting first ✋, then touch index + thumb together 👌.
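
The core of the painting/not-painting state and the pinch-to-clear “click” could look something like the minimal sketch below. It assumes ml5.js’s handpose model, which reports 21 hand landmarks per frame (thumb tip = 4, index tip = 8, index middle joint = 6, middle-finger tip = 12, middle-finger middle joint = 10); the pinch threshold and the “tip above its middle joint means the finger is extended” heuristic are illustrative assumptions rather than the exact logic of the original sketch.

let video;
let hands = [];
let painting = false;
let strokeColour;

function setup() {
  createCanvas(640, 480);
  background(0);
  strokeColour = color(random(255), random(255), random(255));
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  const handpose = ml5.handpose(video);               // load the hand model
  handpose.on('predict', results => { hands = results; });
}

function draw() {
  if (hands.length === 0) return;
  const lm = hands[0].landmarks;                      // [x, y, z] per landmark
  const indexTip = lm[8],   indexPip = lm[6];
  const middleTip = lm[12], middlePip = lm[10];
  const thumbTip = lm[4];

  // heuristic: a finger counts as "extended" when its tip is above its middle joint
  const indexUp  = indexTip[1]  < indexPip[1];
  const middleUp = middleTip[1] < middlePip[1];

  // pointing (index up, middle folded) starts painting; an open hand stops it
  if (indexUp && !middleUp) painting = true;
  if (indexUp && middleUp)  painting = false;

  if (painting) {
    // paint at the index fingertip; the canvas is never cleared between frames
    stroke(strokeColour);
    strokeWeight(8);
    point(indexTip[0], indexTip[1]);
  } else {
    // pinch (thumb + index together) clears the screen, but only while not
    // painting, so an accidental pinch mid-stroke does nothing
    const pinch = dist(thumbTip[0], thumbTip[1], indexTip[0], indexTip[1]);
    if (pinch < 30) {
      background(0);
      strokeColour = color(random(255), random(255), random(255));
    }
  }
}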



Summary & Learnings 

I learned a lot from this experiment, including that the ml5.js models seem to prefer:

  • Bright and clear rooms: body parts and movements were recognized much faster and more accurately in bright rooms and against solid-coloured backgrounds than in dark rooms or with lots of colours or objects in the background.
  • Light clothing: wearing light-coloured clothing worked better than dark-coloured clothing for body-movement recognition.
  • No jewelry: wearing a bracelet or watch on the wrist seemed to cause delays and confusion in recognizing hands/wrists and their location.
  • Slow movements: moving slowly made it easier for the computer to recognize the body and its movements (or maybe this is a programmer’s issue!).

As I started writing this summary, I realized I could extend the project to allow the user to draw their own “virtual make-up” on the screen using their finger (Part 3) and, when done, have the program tie the drawn shape to the user’s facial coordinates. Then, using Part 2, the user could change the colour of their virtual make-up. Definitely a fun future project 🙂