FRAME IT UP
By Finlay Braithwaite and Tommy Ting
Frame It Up is an interactive screen-based game best played with 10+ people. The game requires players to carry their laptops and physically walk around the room. Frame It Up is a choreography-generating game influenced by Twister.
Using your laptop, open the URL in Google Chrome.
Read the instructions.
Click ‘PLAY’ to enter the game.
You are presented with a name and gestural prompt.
Use your camera to find the person and ask them to perform the prompt.
Click anywhere to take a picture.
Pictures are saved onto your laptop.
After capturing a prompt, a new prompt appears.
Take pictures of the person with the new prompt.
Repeat until the one-minute timer runs out.
Every minute, on the minute, all players are provided with a new name and prompt.
Goals:
- To negotiate with other players in the room to capture an image of a person performing a gestural prompt.
- To generate random acts of choreography and dance movement that highlight humans’ relationship with technology.
Day 01 [2017.10.16]: Experiment 2 Introductions
We came up with a few different ideas on our first day. We were interested in using the camera function, but inherent in camera technology are questions of ethics, and more specifically privacy. We wanted to use the camera in a critical way that would open up discussions around ethics.
- “No Pervert!” Using the camera, the screen directs you to point it at someone in order to “see what lies underneath”, but once you line it up with a body, it generates a message saying “Why would you ever want to do that?”
- “Conversation Helper” Your mobile device will connect you with another user, then it prompts you with some conversation topics
- “Colour Matcher” Using the mobile device’s gyroscope, you have to rotate your phone to the right x, y, and z coordinates to match the colour of the text to the colour of the background of the canvas.
- “Shake It Up” Shake your phone to generate a prompt naming another player in the room; once you locate them, shake again to generate a body-part prompt, then take a picture.
After coming up with a few different ideas, we decided to go with Shake It Up. We were interested in the human movement this game would generate, and it touches on the themes we were both interested in exploring with this experiment: physical interaction with digital technology, and movement and dance.
Day 02 [2017.10.17]: Coding
The first major hurdle was getting the video camera to work in a consistent and predictable way. The number of possible device types, makes, and models made this a daunting task. We were fairly determined to use smartphones and tap into their cameras as the technical underpinning for our project, but we ran into some basic hurdles getting video to work even in a rudimentary fashion. Chrome, for example, demands that a page be served over https:// before it will grant camera access, for security and privacy reasons. This means code has to be uploaded frequently to such a server during development and testing. Dreamweaver became our go-to editor, as it facilitates automatic SFTP sync on save. It also has built-in GitHub integration, which is a dream come true.
As the working title suggests, getting the shake input to work would be imperative to our development. However, our early testing led us to conclude that it would not be an effective way to move through a serial sequence of interactions, as unintentional double shakes and phantom shakes were difficult to avoid in code. This investigation was illuminating: it demonstrated that our user flow had too many stages and device interactions in its sequence. We felt this took away from the experience, as the device became the focus of the experience rather than a catalyst. We played with the idea of cycling the random person and body-part prompts on a timer instead of relying on interaction. It would also be a great moment if this timer were set to a common clock on all devices, so that new prompts were generated for all players simultaneously.
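The common-clock idea can be sketched in a few lines of plain JavaScript (the names here are our own invention, not the shipped game code): if every device derives the current prompt purely from wall-clock time, all players with the same prompt list flip to a new prompt on the minute, with no network synchronization at all.

```javascript
// Sketch of a common-clock prompt cycle (hypothetical names, not the
// shipped code). Each device computes the prompt index from the number of
// whole minutes since the epoch, so devices whose clocks agree show the
// same prompt and change prompts simultaneously.
const PROMPT_INTERVAL_MS = 60 * 1000; // a new prompt every minute

function promptIndex(nowMs, promptCount) {
  // Whole minutes elapsed, wrapped around the prompt list.
  return Math.floor(nowMs / PROMPT_INTERVAL_MS) % promptCount;
}

// Example: any device calling this within the same minute gets the same index.
const prompts = ['head nod', 'smile', 'right peace sign', 'left middle finger'];
const current = prompts[promptIndex(Date.now(), prompts.length)];
```

The appeal of this design is that it needs no server: the shared state is the clock itself.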
Day 03 [2017.10.18]: Back to the Drawing Board
One immediate concern we had was capturing pictures of someone’s body part without their consent. Although it would call attention to the problems of privacy, we thought this was too simplistic and literal. We went back to the whiteboard to brainstorm new ideas.
We came up with a few different ideas for new prompts. One was to use colours, moods, and feelings; this would be more abstract and would give players the choice to interpret the prompt however they want, though it is still not consensual.
Next was to use an RGB or grayscale value: the player has to find the matching colour on their person’s body using the camera. This would make our project more “game-like”, but we didn’t know how to use the camera to calculate colour values. Moreover, it still didn’t solve our consent issue.
Lastly, we came up with a list of gestures such as a head nod, smile, right-hand shake, left middle finger, and right peace sign. This immediately solves our consent problem, since you have to ask your person to perform the task. It also creates more of a negotiation between you and the other players. Finally, it would add a much richer dimension to our initial interest, which was to use this game to create random acts of dance and choreography.
Day 04 [2017.10.19-23]: Coding (Cameras, Mobiles to Laptops)
Eureka! We were starting to make real progress on the video front. Kate Hartman had suggested that we ‘time box’ this problem, giving up on it if we didn’t get the results we needed in a specified amount of time. The biggest challenge we overcame was specifying which of a mobile device’s cameras was used. The p5.js video capture allows for constraints compliant with the W3C specification, which includes language for requesting different camera types. The type we were interested in was ‘environment’, the non-selfie, outward-facing camera on the back of a phone. Finding the correct syntax to connect this constraint to p5.js was elusive and frustrating, but eventually my Android phone took a brave step and faced the world. With this victory, we began working with the video image and integrating it into our code. To accommodate variable screen and camera resolutions, we created a display system that would respond to four possibilities:
- Camera resolution width narrower than horizontal display.
- Camera resolution width wider than horizontal display.
- Camera resolution width narrower than vertical display.
- Camera resolution width wider than vertical display.
With these four scenarios, our video placement would respond to the parameters and crop and place itself accordingly.
In this meticulous process we encountered a bug in the p5.js reference. With the function ‘image(img, dx, dy, dWidth, dHeight, sx, sy, [sWidth], [sHeight])’ you can crop an image and place it into your canvas, possibly resizing it in the process. However, in working with this code it appears that the destination coordinates (d) and the source coordinates (s) are reversed from the documentation. We will investigate further and let p5.js know if this is indeed the case.
This code was important, as we wanted to crop our video instead of resizing it: a clean ⅓-height band of video centered in the middle of the screen, resizing smoothly and adapting to variable screen and camera resolutions. We felt a crop would give us a natural zoom that would enhance the image-finding aspect of the game and would also lower the CPU overhead of live video resizing.
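The crop-to-band math above can be sketched as a single pure function (our reconstruction for illustration, not the exact shipped code): pick the smallest source rectangle, centered in the camera frame, whose aspect ratio matches a band that is the full display width and one third of the display height.

```javascript
// Sketch of the band-crop calculation (hypothetical names). A 'cover'-style
// scale — the larger of the two width/height ratios — guarantees the camera
// frame fills the band, which collapses the four narrower/wider scenarios
// into one rule. Cropping the source rather than shrinking the whole frame
// gives the "natural zoom" described above.
function bandCrop(camW, camH, dispW, dispH) {
  const bandW = dispW;     // band spans the full display width
  const bandH = dispH / 3; // band is one third of the display height

  const scale = Math.max(bandW / camW, bandH / camH);
  const sw = bandW / scale; // source width, never larger than camW
  const sh = bandH / scale; // source height, never larger than camH

  return {
    sx: (camW - sw) / 2, // centered horizontally in the camera frame
    sy: (camH - sh) / 2, // centered vertically in the camera frame
    sw,
    sh,
  };
}
```

The returned rectangle is what would be handed to the source arguments of p5.js’s image() call.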
Tommy’s phone won’t open %#%^#@^. Try as we might, our code worked well on Android but not on iPhone, particularly Tommy’s iPhone. With the time box in shambles and our project in jeopardy, everything was on the table, including revisiting other ideas or generating new ones. Realizing that the majority of portable devices available to us were made by Apple, we swallowed our pride and began developing for laptops. Unfortunately, we didn’t have the ability or time to figure out a way to include both Android phones and laptops, so we went with laptops only.
Despite our worst fears, the laptops were great, and added some new dimensions to the game. People could see themselves being captured and adjusted their position and pose to assist in play. This interactive feedback element would not be possible with a phone’s ‘environment’ camera.
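The camera selection described above boils down to a single constraints object. This is a sketch of the W3C-style constraint we mean (p5.js forwards such objects to getUserMedia); the exact object in our sketch may have differed in detail.

```javascript
// Sketch of a W3C MediaTrackConstraints object requesting the rear
// ('environment') camera. Using 'ideal' rather than 'exact' lets devices
// with only a front camera — like the laptops we ended up on — fall back
// gracefully instead of failing outright.
const captureConstraints = {
  video: {
    facingMode: { ideal: 'environment' },
  },
  audio: false,
};
```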
Day 05 [2017.10.24]: Playtesting
The playtest was extremely revealing and gave us a lot of insight into how to quickly resolve some immediate issues. We noticed three main issues:
- Our sketch did not work consistently on iOS; some phones worked, but most didn’t.
- People were upset by not having a specific end goal; namely, they were confused about what to do after they framed the person up with the corresponding body part.
- The one-minute timer was too long, since finding the person and the body part was quick and easy.
It also confirmed what we had hoped for:
- During the scuffle to locate the person and the body part, a dance emerged amongst the players.
- People had to negotiate with each other in order to find their body part.
Day 06 [2017.10.26]: Refinement in Code and Game Concept
On our last day, we refined the game visual interface from small details such as font size and stroke shade to adding a photo capture feature.
The last major coding hurdle turned out to be fairly easy. Neither of us had made an app with multiple states or scenes; our code to this point relied on one loop for the entire experience. We needed a splash page to introduce and explain the game. We could have done it as a separate HTML launch page, but we wanted to try doing it in a single p5.js sketch. To start, Tommy created the launch page in one sketch while I finished the details of the main code. By using a simple ‘if’ statement tied to a button on the splash page, we were able to have users move cleanly from one state to the next. Huzzah!
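The splash-to-game switch amounts to a single state variable gating the draw loop. A minimal sketch, with hypothetical names standing in for our actual functions:

```javascript
// Sketch of a two-state splash/game flow (hypothetical names). One variable
// decides which scene is rendered; the splash page's button handler flips
// it, which is all the 'if' statement described above amounts to.
let gameState = 'splash';

function startGame() {
  // Wired to the 'PLAY' button on the splash page.
  gameState = 'playing';
}

function currentScene() {
  // In a real p5.js sketch this branch lives inside draw().
  if (gameState === 'splash') {
    return 'drawSplashPage';
  }
  return 'drawGame';
}
```

Because draw() runs every frame, changing the variable is enough: the very next frame renders the other scene.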
The coding details of this project were fun mini-challenges. We attempted to make everything proportional to the display, such as the text and the video size and placement. A fun example: the button size is tied proportionally to the font size, which is tied proportionally to the overall number of pixels in the canvas.

Another fun detail was randomness. The colours are all randomly generated, giving the game a fun look that’s different every time. However, in our tests, users complained that the text often blended into the background and became difficult to read. We set some rules to enforce that the randomly generated colours have a specified minimum difference in hue; changing the p5.js colour mode to a hue-based system instead of RGB made colour picking of this nature possible.

Making the sounds random was a larger challenge than anticipated. Generating a random hue is one thing, but randomly selecting from a pool of sound clips is another. With sound, we wanted to generate a fun and chaotic reinforcement of the experience, with each device emitting sounds unique from the next. To achieve this, each device loads ten random sounds from a pool of fifty-one; at each sound cue, the code randomly selects one of these ten files for playback. Loading all fifty-one sounds would have increased the loading time and made the experience fairly buggy, considering there’s already a live video input in play. This seemed like a good compromise.
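Both randomness rules can be sketched as small pure functions (our reconstruction for illustration; the shipped sketch used p5.js’s own random() and colour mode): picking two hues guaranteed to differ by a minimum amount on the 360° hue wheel, and sampling ten sound files without replacement from the full pool.

```javascript
// Sketch of the minimum-hue-difference rule (hypothetical names). The text
// hue is offset from the background hue by at least minDiff degrees in
// either direction, wrapping around the 360-degree wheel.
function pickHues(minDiff, rand = Math.random) {
  const bg = rand() * 360;
  const offset = minDiff + rand() * (360 - 2 * minDiff);
  return { bg, text: (bg + offset) % 360 };
}

// Shortest distance between two hues on the wheel.
function hueDistance(a, b) {
  const d = Math.abs(a - b) % 360;
  return Math.min(d, 360 - d);
}

// Sketch of the per-device sound pool: sample n filenames without
// replacement, so each device loads its own small, distinct set.
function sampleSounds(pool, n, rand = Math.random) {
  const copy = pool.slice();
  const picks = [];
  while (picks.length < n && copy.length > 0) {
    const i = Math.floor(rand() * copy.length);
    picks.push(copy.splice(i, 1)[0]);
  }
  return picks;
}
```

At each sound cue the game would then pick uniformly from the ten loaded files, keeping load times down while every device sounds different.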
Playing with sounds
Finally, we changed the focusArray from a list of body parts to a list of gestures, including positive, neutral, and negative gestures. We decided this would be more interesting, as people would have to negotiate even more with the other players in order to capture their photograph. The gesture acts as a consensual prompt, which mitigates the privacy issues. We also decided to keep the one-minute timer instead of speeding it up as our playtest suggested: within the minute, the player must capture as many different gestural prompts as possible. Lastly, using gestural prompts instead of body parts creates more of a dance, which fits our conceptual framework of contemporary dance.
While doing some initial research for the project, we were immediately drawn to the relationships between kinaesthetics, human bodies, dance and choreography and camera technology.
Although Frame It Up is a game, we were more interested in the choreographic outcomes of playing it. We found that the game was able to generate random acts of choreography, which materialized our interest in humans’ relationship with technology in the form of dance. We deliberately used the camera as the main device connecting the players, as the act of taking someone’s picture is inherently violent (Sontag 1977), and we wanted to explore this violence through dance and play. We were informed by Jane Desmond’s idea that how we move, and how one moves in relation to others, comes from a place of desire (Desmond, p. 6), and by gestus, a theatre technique created by director Bertolt Brecht that understands gestures as integral to the human character, its wishes and desires (Baley 2004). We wanted to investigate how we move with each other and amongst each other when our violent technological devices have become both embedded within and extended out of how we express desire.
Although it wasn’t our original idea, carrying the laptops around intensely highlighted our increasingly posthuman bodies. The soundtracks we used were all compiled from laugh tracks, which call attention to humanity’s happiness, playfulness, and desire, but also its violence and brutality. We also looked to the works of choreographer Pina Bausch, whose work highlights the violence of men and the suffering and oppression of women to an incredibly uncomfortable degree. Her work “forces her audiences to confront discomfort: they are painful to look at but impossible to turn away from” (Avadanei, p. 123). Using Susan Sontag’s understanding of the camera as a weapon, dance theory, and Pina Bausch’s work, our goal is for Frame It Up to be both a playful game and a tool to generate choreography that explores the relationships between privacy, human desire, and technology.
Avadanei, Naomi J. “Pina Bausch: An unspoken explorations of the human experience.” Women & Performance: a journal of feminist theory, vol. 24, no. 1, 7 May 2014, pp. 123–127, doi:10.1080/0740770X.2014.894289.
Baley, Shannon. “Death and Desire, Apocalypse and Utopia: Feminist Gestus and the Utopian Performative in the Plays of Naomi Wallace.” Modern Drama, vol. 47, no. 2, Summer 2004, pp. 237–249, doi:10.1353/mdr.2004.0018.
Desmond, Jane, editor. Dancing Desires. The University of Wisconsin Press, 2001.
Sontag, Susan. On Photography. Picador, 1977.