Muybridge Jump Final Project


This project started as an investigation into re-imagining early cinematic devices, and into the early uses of moving images as entertainment in the context of digital media. It examines how the simplicity of those devices generated wonder and enthusiasm, whereas in our era of over-saturation such enthusiasm is much more difficult to achieve. I drew an idea map that shows the process of defining what could be done technically, and how that might lend itself to the theme.

Research into early cinematic devices helped identify a specific technology and point of reference: Eadweard Muybridge’s early high-speed photography. One of Muybridge’s early experiments set out to determine whether a galloping horse’s feet ever left the ground all at once. I immediately recognized a connection with a continued fascination with this very same idea: people are fascinated by capturing themselves in mid-air in photographs (as evidenced on countless blogs). Creating a Muybridge-based image matrix of non-gravitational imagery of participants would achieve not only an interactive experience, but also a point of comparison to what audiences of early cinema may have experienced. At a very basic level, it might demonstrate that simple technologies and physical participation are underestimated in a large proportion of entertainment.


The original outline for the project was:

-display images in a Muybridge-based grid. My first reference was this Processing code, which handled live capture display only and did not allow for buffering or saving frames.

-set the length of the capture and loop

-capture to a buffer or folder, triggered by a sensor

-indicate the beginning of capture with sound or light

-buffer replays a mixture of captured videos

In terms of the Processing grid, my reference example only dealt with live capture, so it took some research into the possibilities of capturing, saving, and displaying images. I wanted to modify the grid idea with a captureEvent or serialEvent that saved images to a folder, and allowed random playback from a large pool of images.
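The random-playback idea can be sketched independently of the camera code. Below is a minimal Java sketch of one possible approach, assuming frames are numbered into a pool and playback starts at a random offset, then steps through with modulo wraparound as the grid does; the class and method names here are illustrative, not from the final sketch.

```java
import java.util.Random;

public class RandomPool {
    // Pick a random starting offset into a pool of poolSize saved frames,
    // then step through the pool with modulo wraparound, so playback can
    // begin anywhere and still loop over the whole pool.
    static int[] playbackOrder(int poolSize, int frames, long seed) {
        Random rng = new Random(seed);
        int start = rng.nextInt(poolSize);        // random entry point
        int[] order = new int[frames];
        for (int i = 0; i < frames; i++) {
            order[i] = (start + i) % poolSize;    // wrap at the pool's end
        }
        return order;
    }

    public static void main(String[] args) {
        // 100-frame pool, play 5 frames from a random start
        for (int idx : playbackOrder(100, 5, 42L)) {
            System.out.println(idx);
        }
    }
}
```

In a Processing sketch the returned indices would be used to look up images in the PImage array before drawing them into the grid.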

Initially I wanted to capture only images of participants in mid-air, and have non-gravitational moments from many participants play back randomly in a grid. This appeared to be the best illustration of my concept. As I explored the possibilities further, I also considered capturing the full sequence of frames of a jump, from ground to mid-air to landing. Working out how to isolate the beginning of the jump from the participant’s presence onscreen, and how to isolate their vertical movement with sensors, was a trial-and-error process. I conducted a series of tests using Processing motion-detection code, handheld accelerometers, foot-triggered pressure sensors, and suspended tilt sensors that jumpers would activate mid-air. This will require more testing, as the sensors also need to work in tandem with the Processing sketch to capture properly.

Final Jump Project 

This iteration of my project allows participants to capture the complete sequence of their jump (or other movement) and experience their image in the Muybridge matrix immediately. A simple digital push button starts the countdown and the capture of up to 100 frames, or roughly three seconds, to the data folder. In an installation context, an initial Jump Movie would be playing and would instigate similar interactions. I’m not yet sure of the best way to isolate the non-gravitational moment, so for this phase a very direct instruction set was the best way to go. The existing jump images act as a bit of a directive, and the push button and text countdown lead the participant into usage.

Seeing the interactions in my class presentation, I now find there’s more to explore in this version of the idea. I question whether a pool of images of people only in mid-air would hold the same degree of user interest; it might have a very different outcome. Depending on the participants’ movement, the grid offered a lot to explore in terms of the perception of movement and the tracking of eye movement. The video from my class presentation is evidence that users will inevitably experiment and use the framework to explore different uses and ideas; it was a great opportunity to see how people engage with it and which aspects of the interaction are most interesting.

Future Steps

I’d like to make the button a handheld wireless object that lends itself to jumping. I’d also like to try isolating the jump movement itself, to see how the interaction changes. This may be best achieved in combination with motion detection or background subtraction written into the Processing code.
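The background-subtraction idea above could be sketched as a simple frame-differencing routine. The Java sketch below shows the core comparison on grayscale pixel arrays, assuming a stored background frame and a tunable threshold (both assumptions, to be adjusted in the actual sketch); detecting the jump would then mean watching how the changed region moves between frames.

```java
public class FrameDiff {
    // Count how many pixels differ between a stored background frame and
    // the current frame by more than a threshold. Pixels are grayscale
    // values 0-255; the threshold is an assumption to be tuned by testing.
    static int changedPixels(int[] background, int[] current, int threshold) {
        int count = 0;
        for (int i = 0; i < background.length; i++) {
            if (Math.abs(current[i] - background[i]) > threshold) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[] background = {10, 10, 10, 10};   // empty scene
        int[] current    = {10, 200, 10, 200}; // participant enters frame
        System.out.println(changedPixels(background, current, 50)); // prints 2
    }
}
```

In Processing, the same loop would run over the `pixels[]` arrays of the background image and the live `Capture` frame, using `brightness()` to get the grayscale values.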


Digital Push Button


Arduino code

void setup() {
  pinMode(2, INPUT);    // push button on digital pin 2
  Serial.begin(9600);   // match the baud rate in the Processing sketch
}

void loop() {
  int sensorValue = digitalRead(2);   // 1 when pressed, 0 otherwise
  Serial.write(sensorValue);          // send the reading to Processing
  delay(10);
}

Processing code

//MUYBRIDGE JUMP – Creation and Computation Final Project
//By Maureen Grant
//Code Assistance by Jim Ruxton, Dan Fraser, and Dustin Freeman
//based on code from Learning Processing by Daniel Shiffman

import processing.serial.*;

Serial myPort;        // The serial port
Capture myCapture;    // The video capture

// Declare Variables
int countDown = 0;
int pictureCount=30;
int mode = 0;
int maxImages = 100; // Total # of images in jump folder
int imageIndex = 0; // Initial image to be displayed is the first

// Declare an array of images, and the grid
PImage[] images = new PImage[maxImages];
ImageGrid grid;

int jumpThreshold;  // if on, go to mode 2, saves captured images to the image array
boolean jumpCapture;
int imageCount;

void setup() {
  size(640, 480, P2D);
  myCapture = new Capture(this, 640, 480); // for version using iSight
  // myCapture = new Capture(this, 640, 480, "IIDC FireWire Video", 30); // for external camera

  String portName = Serial.list()[0];  // initializing serial port
  myPort = new Serial(this, portName, 9600);

  refreshImages();  // load any previously saved frames into the grid
}

void refreshImages() {    // load saved frames for playback in a 10 x 10 grid
  for (int i = 0; i < images.length; i++) {
    images[i] = loadImage("jump/jump" + nf(i, 3) + ".jpg"); // jump folder instead of data, numbered filenames
  }
  grid = new ImageGrid(images, 10, 10);
}

void draw() {

  background(0);   // Modes contributed by Dustin Freeman.

  if (mode == 0) {   // Mode 0 = play images
    // increment image index by one each cycle;
    // use modulo "%" to return to 0 once the end of the array is reached
    imageIndex = (imageIndex + 1) % images.length;
    grid.draw(imageIndex);
  }

  if (mode == 1) {    // Mode 1 = startCapture = capture images and save in jump folder according to naming convention
    if (pictureCount < maxImages) {
      image(myCapture, 0, 0);
      saveFrame("jump/jump" + nf(pictureCount, 3) + ".jpg");
      pictureCount++;
    }
    else {
      refreshImages();  // reload the pool so the new frames appear in the grid
      mode = 0;
    }
  }

  if (mode == 2) {  // Mode 2 is the countdown, followed by start capture
    image(myCapture, 0, 0);
    PFont font;
    font = loadFont("Arial-Black-100.vlw"); // need to have file in data folder
    textFont(font);
    fill(255, 0, 0);
    text(countDown/10 + 1, width/2, height/2);  // modify speed and length of countdown with countDown/##
    countDown--;
    if (countDown == 0) {
      startCapture();
    }
  }
}

class ImageGrid {
int xSize;
int ySize;
int xTiles;
int yTiles;
int xTile;
int yTile;
int xImagePos;
int yImagePos;
int offset;
PImage[] images;
int imageCount;

ImageGrid(PImage[] tempImages, int tempXtiles, int tempYtiles) {
  images = tempImages;
  xTiles = tempXtiles;
  yTiles = tempYtiles;
}

void draw(int startOffset) {
  xSize = 64;
  ySize = 48;

  imageCount = 0;
  for (yTile = 0; yTile < yTiles; yTile++) {
    for (xTile = 0; xTile < xTiles; xTile++) {
      xImagePos = xTile * xSize;
      yImagePos = yTile * ySize;
      offset = (startOffset + imageCount) % images.length;
      image(images[offset], xImagePos, yImagePos, 64, 48);
      imageCount++;
    }
  }
}
}  // end of ImageGrid class

void captureEvent(Capture myCapture) {;   // read the next frame from the camera
}

void mousePressed() {
  countDown = 48;
  mode = 2;
}

void startCapture() {
  mode = 1;
}

void serialEvent(Serial myPort) {  // simple digital button
  int inByte =;
  jumpThreshold = inByte;
  if (jumpThreshold >= 1) {    // if the button is pressed, Processing receives a 1, otherwise 0; if equal to or greater than 1, go to mode 2 (countdown, then startCapture)
    countDown = 72;
    mode = 2;
  }
}
