Explorations #2


Weeks 3 and 4 were dedicated to running short experiments and continued research into projects that combine the body and computer vision in interesting ways.

I have divided the blog post into:

  1. Mini-experiments
  2. Research

1. Mini-Experiments

For the mini-experiments I started out by simply sketching out body points using PoseNet and the ml5.js library.
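Sketching out the body points generally means drawing only the keypoints the model is reasonably sure about. As a minimal sketch of that filtering step (the helper name and the 0.5 cutoff are my own choices, not from the original code), assuming PoseNet-style keypoints that carry a part name, a score between 0 and 1, and a position:

```javascript
// Hypothetical helper for a PoseNet sketch: keep only the keypoints the
// model is confident about before drawing them.
function confidentKeypoints(pose, minScore = 0.5) {
  return pose.keypoints.filter((kp) => kp.score >= minScore);
}

// In the p5.js draw loop one might then draw each surviving point:
// for (const kp of confidentKeypoints(pose)) {
//   ellipse(kp.position.x, kp.position.y, 10, 10);
// }

// Quick check with a mock pose object:
const mockPose = {
  keypoints: [
    { part: "nose", score: 0.9, position: { x: 100, y: 50 } },
    { part: "leftEye", score: 0.2, position: { x: 90, y: 40 } },
  ],
};
console.log(confidentKeypoints(mockPose).map((kp) => kp.part)); // ["nose"]
```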



I referenced an example sketch from the PoseNet library to draw a skeleton first. The idea of the sketch was to understand how the points connect, as a basis for further exploration. I want to see how I could make two points collide in order to make something happen. This “something” could take the form of a sound or a visual.
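The collision idea can be sketched as a distance check between two keypoints: if they come closer than some threshold, the “something” fires. This is an assumption of how it could work, not code from the example sketch; the 30-pixel threshold is a guess to tune.

```javascript
// Two PoseNet-style keypoints "collide" when the distance between their
// positions falls under a threshold (in pixels).
function keypointsCollide(a, b, threshold = 30) {
  const d = Math.hypot(
    a.position.x - b.position.x,
    a.position.y - b.position.y
  );
  return d < threshold;
}

// In the draw loop, a collision could then trigger a sound or a visual, e.g.:
// if (keypointsCollide(leftWrist, rightWrist)) { background(255, 0, 0); }

// Mock keypoints close together:
const leftWrist = { position: { x: 100, y: 100 } };
const rightWrist = { position: { x: 110, y: 105 } };
console.log(keypointsCollide(leftWrist, rightWrist)); // true
```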

I continued the exploration by focusing on facial tracking, as I am drawn to the idea of facial recognition.


These snippets of code were to help with understanding all the trackable points.
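Among PoseNet's trackable points, five are facial: nose, leftEye, rightEye, leftEar, rightEar. A small helper (the name is my own) could pull just those out of a pose for face-focused sketches:

```javascript
// PoseNet's five facial parts.
const FACE_PARTS = ["nose", "leftEye", "rightEye", "leftEar", "rightEar"];

// Filter a pose down to just its facial keypoints.
function faceKeypoints(pose) {
  return pose.keypoints.filter((kp) => FACE_PARTS.includes(kp.part));
}

// Mock pose with one facial point and one body point:
const facePose = {
  keypoints: [
    { part: "nose", score: 0.9, position: { x: 160, y: 80 } },
    { part: "leftShoulder", score: 0.8, position: { x: 120, y: 150 } },
  ],
};
console.log(faceKeypoints(facePose).map((kp) => kp.part)); // ["nose"]
```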

Another experiment I tried was BodyPix. This is a model which only tracks the body and masks out anything that is not the body. I wanted to see if, instead of a dull black background, I could use a function to load an image which would act as the background. Since the sketch is real-time and always moving, the code draws out the color pixels on every frame.


The code was pretty good at recognizing the body; however, it wasn’t steady enough. It kept tracking the painting behind me as part of the body as well.


The code for this was:

let video;
let bodypix;
let segmentation;

// Segmentation options (outputStride and segmentationThreshold are
// ml5 bodyPix options)
const options = {
  outputStride: 16,
  segmentationThreshold: 0.3,
};

function setup() {
  createCanvas(320, 240);

  // load up your video
  video = createCapture(VIDEO);
  video.size(width, height);
  // video.hide(); // Hide the video element, and just show the canvas
  bodypix = ml5.bodyPix(video, modelReady);
}

function modelReady() {
  bodypix.segment(gotResults, options);
}

function gotResults(err, result) {
  if (err) {
    console.error(err);
    return;
  }
  segmentation = result;

  // Draw the video, then paint the background mask over it
  image(video, 0, 0, width, height);
  image(segmentation.maskBackground, 0, 0, width, height);

  // Keep segmenting, frame after frame
  bodypix.segment(gotResults, options);
}

Using an image as a background did not work, nor did colors other than black. This is something I would like to come back to in the coming weeks for further exploration.
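One direction to revisit: instead of painting BodyPix’s background mask over the video, composite per pixel — keep the video’s pixel wherever the mask says “person”, and take the background image’s pixel everywhere else. A minimal sketch of that logic over flat RGBA pixel arrays (the kind p5’s pixels[] exposes); the function name and the boolean-mask input are my own simplifications:

```javascript
// Composite video over a background image using a per-pixel person mask.
// personMask[i] is true where pixel i belongs to the person.
function compositeWithBackground(videoPixels, bgPixels, personMask) {
  const out = new Uint8ClampedArray(videoPixels.length);
  for (let i = 0; i < personMask.length; i++) {
    const src = personMask[i] ? videoPixels : bgPixels;
    for (let c = 0; c < 4; c++) {
      out[i * 4 + c] = src[i * 4 + c]; // copy R, G, B, A
    }
  }
  return out;
}

// Two pixels: the first is "person" (kept from the video), the second is
// background (taken from the background image).
const videoPx = new Uint8ClampedArray([10, 10, 10, 255, 20, 20, 20, 255]);
const bgPx = new Uint8ClampedArray([200, 0, 0, 255, 0, 200, 0, 255]);
const composited = compositeWithBackground(videoPx, bgPx, [true, false]);
console.log(composited);
```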


2. Further Research

Body, Movement, Language: AI Sketches with Bill T. Jones

This experiment was conducted with TensorFlow and built by the Google creative team and famous choreographer Bill T. Jones. It was remarkable in the sense that it used real-time pose tracking and speech recognition as a way to track and move words in real time. This opens up possibilities for storytelling through this medium.



Space-Time Correlations Focused in Film Objects and Interactive Video 

I wanted to see how I could also understand these mini-experiments through a theoretical lens. I really wanted to understand the idea of time and space through this medium. I also became interested in the applications of these technologies and the future of cinema. Will the future of cinema change due to our experimentation with these new technologies?

A number of contemporary and more recent art projects have transformed film material into interactive virtual spaces, in order to break through the traditional linear quality of the moving image and the perception of time, and at the same time to represent, or to visualise, the spatial aspects of time. In the times of resampling, the concentration on a relatively old picture medium and its transformation into a space-time phenomenon open to interactive experience does not seem surprising. The results of these experimental works exploring and shifting the parameters of the linear film are often oddly abstract and quite expressive in their formal composition, and consciously elude simple legibility.
