Category: Experiment 1

Virtual Funhouse

Project Name: Virtual Funhouse


Project Members: April De Zen and Josh McKenna

Project Description:

Our idea was to create a virtual representation of a circus funhouse of mirrors. Participants would move past each computer screen to see a different distortion or warping of their image in real time. We worked through about 12 different effects and rotated through them across the screens. We also wanted to create some signage at the start of the funhouse to give it a bit more flair, so we used four iPads to add funhouse graphics to the wall.

Project Links:



/* — — — */


Process Journal


Day 1:

Today we spent our time together in the ideation phase of our design process. We decided collectively to bring forward ideas that fit the project brief: using 20 different screens to create a cohesive interactive experience.

Recognizing that our programming knowledge was limited, we wanted a project scoped so that we could learn new coding skills while staying within reach of our current expertise.

Some of our ideas:

  • A Virtual Funhouse: using the webcams from laptops as input and warping the live images via p5.
  • The Telephone Game: a digital rendition of the classic message/party game, using captchas and input from users via the 20 interfaces.
  • A live paint from 1 input interface to 20 outputs, with unique variations of the original for each different output screen.
  • A Virtual Treasure Hunt: a clue-based treasure hunt that would include the gyroscope feature in mobile devices.

We decided to develop each of the proposed ideas outside of the allotted classroom time to give us better context on each concept’s demands, so that we knew what would be required to take any of them from concept to prototype.


Day 2:

Goal today: Finalize our concept.

After exploring and building out some of the ideas mentioned above, we came to the following conclusions.

The Telephone Game would require storing user input in a database in order for each interface to work in sequence. We decided together that this was outside of the project’s scope.

The Live Paint idea also required a database, which wasn’t within our current skill set.


The Virtual Treasure Hunt game would require AR integration, as well as geolocation-specific gyroscope prompts. Ultimately, we felt again that this was outside of the project’s scope.

Given that our learning goals fit best with the Virtual Funhouse, we ultimately decided to move forward with that idea. Essentially, we would need to create unique image-warping effects from a live webcam feed. Although, admittedly, neither of us had the programming foundation to immediately produce the concept, we both wanted to learn how to manipulate live imaging. After briefly searching available online resources and consulting with Nick, we felt this concept was achievable within the two-week deadline.


Day 3:

Now that we’ve settled into the idea of the Virtual Funhouse we wanted to plan out how the concept would be realized in terms of setup and functionality.

We intended for the user to have no physical interaction with the devices, as we wanted the devices to act alone as independent “mirrors”. Since proposing the idea, we knew that the setup would involve laptops and their webcams rather than mobile phones. We wanted to treat each laptop as its own unique funhouse mirror. We hypothesized that the final presentation would involve up to 16 laptops positioned next to each other, with their webcams facing the back wall of the room. Each one of these laptops would be equipped with a different effect relative to the webcam’s input, altogether combining 16 unique effects.

In addition to the unique funhouse mirrors, we wanted to utilize the four iPads we had as a group. We decided that an introductory display (composed of the four iPads) would be a suitable way to welcome users to the beginning of the funhouse.

To introduce an extra level of interactivity, we proposed adding a photobooth function to the funhouse. Each laptop would include a timer or button linked to a screen capture, and the captured photos would be forwarded to the last laptop of the funhouse so that users could see some of their favourite moments from the experience.


Day 4:

Today we decided to attempt to build out some of the more complex functionality of the project. After building the framework for the funhouse, we deemed the photobooth feature a functional piece outside the core concept.


Today– let’s try to make it happen. Initial setup for screen capture.

Syncing the ‘snap’ button with image capture.


Code snippet for ‘snap’ and image placement.
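The snippet itself isn’t reproduced here, but its placement logic can be sketched in plain JavaScript (names and thumbnail dimensions are ours; in the p5 sketch the frame would come from the webcam capture): each ‘snap’ stores the current frame and computes where its thumbnail lands in a grid.

```javascript
// Minimal model of the 'snap' logic: each press stores the current
// frame and computes where the thumbnail lands in a grid layout.
// (Illustrative names and sizes, not the original code.)
const THUMB_W = 160;
const THUMB_H = 120;
const PER_ROW = 4;

const snapshots = [];

function snap(frame) {
  const index = snapshots.length;
  const placement = {
    x: (index % PER_ROW) * THUMB_W,          // column position
    y: Math.floor(index / PER_ROW) * THUMB_H, // wrap to next row
  };
  snapshots.push({ frame, placement });
  return placement;
}
```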


Prototype of the photobooth.


The next step of the functionality would be forwarding the captured images from each respective laptop into an array on a separate browser page.

After some research we realized that Node.js and the Express npm package would have to be incorporated into our project. From our understanding, this would allow us to send each image to the server side and forward it back to a new client page.

We recognized that these added elements might be a challenge outside the scope of this project, so at this point we decided against including any photo-capture functionality. We made peace with this, given that participants would already have cameras on their mobile phones; including that functionality ourselves would have increased complexity without adding real value.

Maybe next time.



Day 5 and 6

Today, we began testing some of the mirror effects. To achieve the Funhouse distorted mirror effect, we are going to attempt several techniques.


We hypothesized three techniques:

  1. Using WebGL, apply the video feed as a live texture onto a 3D object.
  2. Using imported 3D .obj files, likewise apply the video feed as a live texture onto the model.
  3. Use various effects referenced from online examples.





In this example, the 3D shape moves according to sine and cosine functions, which adjust each respective vertex.
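That vertex adjustment can be modelled as a pair of offset functions (a minimal sketch in plain JavaScript; the constants are illustrative, and in the p5 sketch this would run per vertex inside draw()):

```javascript
// Displace each (x, y) vertex by sine/cosine waves driven by a
// time value t, producing the rippling funhouse-mirror motion.
// (Illustrative constants; the real sketch tuned these by eye.)
const AMPLITUDE = 20;   // maximum pixel offset
const FREQUENCY = 0.05; // wave density across the surface

function warpVertex(x, y, t) {
  return {
    x: x + AMPLITUDE * Math.sin(FREQUENCY * y + t),
    y: y + AMPLITUDE * Math.cos(FREQUENCY * x + t),
  };
}
```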


Unfortunately, mapping live textures onto 3D .obj shapes resulted in the whole webcam video being mapped onto each individual vertex, which did not produce the cohesive effect we were aiming for.

However, we were able to successfully create various other effects!


Day 7 and 8

Considering that we were not able to get the .obj files to display the webcam feed how we wanted, we decided to introduce a new effect.

Inspired by clmtrackr by Audun M. Øygard, we decided that incorporating face tracking into our experience could make for an interesting effect.

Integrating the clmtrackr.js library with p5, we were able to achieve some interesting funhouse-themed effects.

Using the point-reference system included in the clmtrackr.js documentation, we positioned .png files onto the points for the nose and forehead, as illustrated below.
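In code terms, the positioning amounts to centring a .png on one of the tracked points returned by clmtrackr’s getCurrentPosition(), which yields an array of [x, y] pairs. A minimal sketch; the point index and vertical offset here are illustrative, not clmtrackr’s documented numbering:

```javascript
// Compute the top-left corner at which to draw an overlay image so
// that it is centred on a tracked face point. offsetY shifts the
// image vertically (e.g. negative to sit above the point for a
// forehead decoration). Indices/offsets are illustrative.
function overlayAt(positions, pointIndex, imgW, imgH, offsetY = 0) {
  const [x, y] = positions[pointIndex];
  return { x: x - imgW / 2, y: y - imgH / 2 + offsetY };
}
```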




In preparation for the presentation, we put together the intro display set to welcome users before they enter the virtual funhouse!


Coding and testing:

Once we had nailed down all the effects for the funhouse, we tried to merge them into one JavaScript file so that we could access every effect from a single link. The idea was to cycle through the effects randomly every 5-10 seconds. We ran into a few problems when we attempted this.


  1. Some of the effects either stopped working altogether or became blurry. See the example above of the inverted colour acting strangely once we added the timer loop.
  2. During the presentation, I also noticed that the cycling of effects stopped completely and remained on one effect. It seemed to be glitching, but it was still working.
  3. The face recognition wasn’t very accurate, but a lot of work went into figuring it out, so we thought it was worthwhile leaving it in. It made for an interesting observation watching everyone chase the clown nose on the screen.
  4. All the 3D effects worked really well, but since we were having trouble with the timer on the other effects, we decided to leave them as they were. During setup we just had to click a few extra links to get everything running, but overall it worked.
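For reference, the random cycling described above can be sketched in plain JavaScript (effect names and the no-repeat guard are ours; in the p5 sketch the elapsed time would come from millis() inside draw()):

```javascript
// Pick a random interval between 5 and 10 seconds; when it elapses,
// switch to a different randomly chosen effect.
const EFFECTS = ["invert", "pixelate", "wave", "mirror"];

function randomInterval(min = 5000, max = 10000) {
  // milliseconds until the next effect switch
  return min + Math.random() * (max - min);
}

function nextEffect(current) {
  // Avoid showing the same effect twice in a row.
  let next = current;
  while (next === current && EFFECTS.length > 1) {
    next = EFFECTS[Math.floor(Math.random() * EFFECTS.length)];
  }
  return next;
}
```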


Examples of effects:




Class presentation:


We had to think through the flow of the screens: how did we want everyone to walk through the funhouse? We also noticed that the classroom is full of white walls, which makes for a really boring backdrop.

To make this a great experience for the people walking through, we lined up all the tables and pushed the laptop screens as close together as possible. This simulated a wall and gave users structure and direction for how to move through the experience.

To further immerse users in our experience, we created digital signage with iPads and our project name to add the visual cue of an entrance. The only other worry we had was that participants would start clicking while interacting with the funhouse; we wanted to avoid them clicking back and seeing a list of links. The main idea was that they would look and move to interact; they didn’t need to touch. We had the idea to lay sheets of paper over each keyboard to prevent clicking, and it worked! To liven the place up with some punches of colour, we taped brightly coloured shapes to the wall so that participants could see them in the background.

Overall, everyone walked through the funhouse and interacted with each screen in silly ways, as we expected. The one thing we didn’t expect was seeing participants snap photos of themselves on the screens. We had investigated saving the images and sending them to each participant, but it looks like they figured out a way to do that on their own.


Presentation Videos:


Reflections and learnings:

There were so many new ideas we looked into; some were achievable in a two-week time frame and some were not. I think the biggest takeaway was learning what p5 could do easily and what it couldn’t. There were some things we really thought we could pull off, but then something unforeseen (to us, at least) would pop up and put an end to what we were working on.

Another key takeaway was embracing simple ideas and bringing them to life. I think we got caught up in making it cool and pushing the boundaries. Upon reflection, the point was to establish the boundaries and see what we could do with a small idea. I think we did achieve what we set out to do, but we could have done it with a lot less stress!


Project Context:

P5.js reference page:

11.1: Capture and Live Video

18.7: Loading OBJ Model – WebGL and p5.js Tutorial

Instagram Face Filters:


Free stock circus image:





Experiment 1

Ladan & Mazin

Project Description

Match is a multiscreen symbol-matching game built using p5 in which each participant is given a piece of a symbol that has one or more matching pieces. To find their match, players have to share their screen and also find the correct orientation of the symbol to connect with other participants. Participants are asked to use their mobile phones in familiar and not-so-familiar ways: to look up, connect with others, and ask questions. There are natural matches, but we found that some players, when unable to find their match, connected different parts of symbols to create new symbols. Ultimately, Match is about getting people out of the screens of their phones and the silos those screens create, and connecting with other people collaboratively to reach similar goals.

The symbols are mostly globally recognized and carry impactful meanings, for example the symbols for peace or balance. When players recognize their half, they can start walking around, interacting with other players, asking about their symbols, and perhaps teaching them some information about their own. This kind of interaction is likely to lead to important conversations about the origins and meanings of these symbols. So for us, the symbols were an essential part of the experience and had to be picked accordingly.




Process Journal

From day one, Ladan and I wanted to create an interactive game. Because we both care deeply about the human aspect of interaction, we wanted our game to extend beyond the screens and encourage participants to talk to each other and walk around.

Our first idea was a musical chairs game in which users would have to use both their phones and their bodies to participate. However, we analyzed the process and the end result and decided to go with something a little different that would teach important coding skills. Also, since we both come from design backgrounds, we wanted to use that background to create compelling visuals that would attract the user’s attention and maintain it throughout the game.

Once we agreed on the idea of a puzzle rather than musical chairs, we started thinking about the type of imagery we wanted to use. At first, we wanted to use one big image split into 20 screens, but that quickly started looking complicated and a little difficult for users to solve. So we decided to use several images to keep the game more interesting and the visuals more impactful.

We decided to use a “universe” look for our background, symbolizing the space between all of us, which users needed to walk through and explore until they found their match.

In the end, we created 20 different index pages with 20 different sketches because we wanted to keep the links indistinguishable and the players anonymous: each player picked a piece of paper with no logo or name on it, only a QR code. Each QR code led to a different page with a different symbol. That was important to us because it added an element of surprise and a shift from a physical, textural object like paper into the intangible, smooth digital world.

The project changed throughout the process, but our aim of a simple, neat solution remained. However, due to time constraints, we had to drop a couple of features and prioritize which ones to keep. In retrospect, we realized during the demonstration that adding some sound would have added important value to the final presentation.

Notes for Experiment 1


  • For mobile, all you need is to upload your p5 script to a server and open that script on your phone. That’s it.
  • Make 20 different sketches and each person can use a different link to a different sketch
  • Focus on the look and feel of the page rather than complicating the functionality
  • Add gradient/rotation/colours with the accelerometer
  • Think of the center piece/person, what kind of movement and placement they will experience
  • Think of the image, how complex or how simple,  will drive the experience of the players
     Notes from our first meeting with Nick.
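One of the notes above, driving gradient/rotation with the accelerometer, boils down to a linear mapping from a sensor reading to an angle. A minimal sketch in plain JavaScript (the mapping mirrors p5’s map(); in the sketch the reading would come from p5’s rotationX/rotationY globals):

```javascript
// Linearly map a value from one range to another, the same mapping
// p5's map() performs.
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Map a device tilt of -90..90 degrees onto a full rotation in
// radians for the on-screen symbol. (Ranges are illustrative.)
function tiltToRotation(tilt) {
  return mapRange(tilt, -90, 90, -Math.PI, Math.PI);
}
```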

Main Coding Challenges


We wanted some form of interaction with the piece that incorporated how folks would move with their phones while searching. Kate gave us advice about the gyroscope/accelerometer: it measures the movement of the phone, which would make the symbol move, and players would also have to reorient the symbol to the correct position once they found their match.

Challenges and Issues:

Our biggest challenges in this project revolved around code issues. Mazin and I both have graphic design backgrounds and were new to p5.js. We had slight issues at first with trying to have things act separately on the stars as compared to our image/blob item.

We also had issues with making one part of the code do one thing, such as making the stars move the way stars move, along with the star arrays and background visuals (as mentioned before). After research, we found code examples that depicted the universe-type theme moving in the way we wanted. The problem ended up being that everything moved in the same way; we had to figure out how to make the symbol move separately from the universe.

What helped us with that was a coding help meeting in which Kate and Nick told us about push() and pop(), which contain the transformations applied between them so that the gyroscope/accelerometer affected the symbol piece only.
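The behaviour of push() and pop() can be modelled with a simple state stack (plain JavaScript; a single rotation value stands in for p5’s full transform state):

```javascript
// push() saves the current drawing state; pop() restores it, so any
// changes made in between affect only the drawing done inside the
// push()/pop() pair.
const state = { rotation: 0 };
const stack = [];

function push() { stack.push({ ...state }); } // save a copy
function pop() { Object.assign(state, stack.pop()); } // restore

// Rotate only the symbol; the stars drawn afterwards are unaffected.
push();
state.rotation = 0.5; // accelerometer-driven rotation for the symbol
const symbolRotation = state.rotation;
pop();
const starsRotation = state.rotation; // back to 0 for the universe
```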

We also found time to be a major challenge. We had wanted to add a start page, a button, a logo, and music, but when we started coding these features we ran into a lot of blank pages when making changes, and figuring out why the code wasn’t working ate up a lot of our time. We decided that simply dropping players into the universe that is the game still worked.


Project Context

The project was inspired by the traditional matching game in which all the cards are face down and you have to pick the matching pairs. However, we wanted to add a level of human interaction that that game didn’t reach, so we edited the game concept to fit those criteria, and Match is what we came up with. It is simple, but we believe using those specific symbols is a catalyst for meaningful conversation between people trying to find their symbol’s match.

Match also draws influence, both visual and practical, from meditation games that have simple aesthetics as well as simple functionality, allowing people to focus on the singular task at hand. In games like PAUSE, you touch the screen and a blob starts to follow your finger. All you need to do is follow the prompts and calmly move your finger across the screen. The slow, gentle motion grows the blob, and you’re prompted to grow it for as long as you can, perhaps even while closing your eyes. It sounds easy, but it’s not as simple as you’d think: if you go too fast (which you will) you’ll be told to slow down, but if you go too slow, you’ll be told to speed up. This simplicity, without overwhelming complexity, is what we wanted to draw on.





Sketches, Designs, Videos and Photographs








Shadow Play

A Multiscreen Project
by Olivia Prior and Lauren Connell-Whitney

// Description //

Shadow Play is an interactive web application that self-directs visitors through an improvised shadow puppet show on the go. The application uses the participant’s built-in phone flashlight to create a theatre scene wherever they may be.


Shadow Play randomly generates wait times, performing times, performance cues, and shadow puppet actions every round for each participant. The randomly generated wait and performing times stagger participants so that not everyone is acting out their shadow puppet at once, nor is anyone performing the entire time. The randomly generated cues are a combination of an adverb, a verb, and a .gif image of a shadow puppet demonstration. Participants are invited to interpret these cues in whichever way they prefer, and are also encouraged to interact with the other performers near them.

Shadow Play also invites audience interaction, depending on the cue. Some of the shadow puppet demonstrations require both of the performer’s hands at once, so the performer will need to call upon an audience member either to create the demonstrated puppet with them or to hold their phone and direct the light at their hands for them.

// How it works //

  • Participants turn on the flashlight on their phone upon entering the application.
  • After confirming their flashlight has been turned on, the user is redirected to an instructions page.
  • Upon clicking “Let’s play”, each participant is given a random wait time between 20-60 seconds.
  • When the timer is up, a shadow puppet demonstration in the form of a gif, an adverb and a verb, and a new random timer between 20-45 seconds appear on the screen.
  • Participants act out the shadow puppet in a collaborative group setting for the designated time.
  • Once the timer is up, the participant’s screen returns to the previous timer screen with a new wait time between 20-60 seconds.
  • If the user is ever confused about the instructions, they can return to the instructions page by clicking the “i” in the lower right corner.
  • The process repeats until the user ends the game by clicking the “x” button in the lower left corner.
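The randomness in the steps above can be sketched in plain JavaScript (the word lists and names here are ours, not the app’s real data, which lived in separate JavaScript files as arrays):

```javascript
// One round of Shadow Play: a random wait, a random performing time,
// and a random cue built from an adverb, a verb, and a puppet gif.
const ADVERBS = ["slowly", "absentmindedly", "boldly"];
const VERBS = ["build", "dance", "hide"];
const PUPPETS = ["bird.gif", "bunny.gif", "snail.gif"];

function randomItem(list) {
  return list[Math.floor(Math.random() * list.length)];
}

function newRound() {
  return {
    waitMs: (20 + Math.random() * 40) * 1000,    // 20-60 s wait
    performMs: (20 + Math.random() * 25) * 1000, // 20-45 s performance
    cue: `${randomItem(ADVERBS)} ${randomItem(VERBS)}`,
    puppet: randomItem(PUPPETS),
  };
}
```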

// Code //

// Process Journal //

Initial Ideas Day 1

We initially began talking about general project ideas that appealed to us, places that one might encounter lots of screens and ways we use multiple screens.

We discussed and shared common interests and goals that we wanted to incorporate into the design of this project. Through documented conversations, the common interests were to design a product that would create a self-directed narrative while using the application, laughter and physical interaction amongst participants, moving the users around the physical space when using the application, and incorporating randomly generated data so that each user’s experience was unique and open for their own interpretation.

More Initial Ideas Day 1

More ramblings of the things that sparked creativity and ideas that we may want to shape our project around.

// Ideation //

The concept for Shadow Play arose from discussing all of the phone features one has direct control over. The list from the discussion included: compass, camera, microphone, texts, flashlight, browser, and accelerometer. After listing possible ways to use each of these features, we decided to pursue a product that relies on the flashlight, due to the simplicity of users being able to control this particular feature themselves (rather than writing a program that relies solely on JavaScript to prompt a phone feature, such as the compass). Something that was important to us in these discussions was creating an interaction between the user, the screen, and the real world.

// Proof of concept //

We tested a quick proof of concept by casting a shadow puppet onto the wall, and decided to pursue the concept of an interactive puppet show directed through a web application. The decision was made after seeing the simplicity of the action and how complex the resulting shadow could be depending on the placement of the flashlight (farther away you can create bigger shadows, the shadow is affected by how many lights are around, etc.). The placement of the person and the light naturally encouraged movement and interaction, which were on our list of desired project goals. Something that was exciting about this was the possibility for storytelling and childlike imagination.

// Other possible ideas //

A big decision that was frequently discussed during our design process was how to direct a narrative in our product. Initially we decided upon creating a very explicit narrative experience. Our idea was to generate two shadow puppet demonstrations with a verb. The idea, though similar to our current iteration, was framed as a game rather than an improvised theatre application. The user would create the first shadow puppet on the screen and act out the verb on the corresponding shadow puppet beneath it. The user would have 15 seconds to complete this action before the demonstrations and the verb switched to new options; completing the action successfully within the 15 seconds earned a point. We decided against this idea because we were not interested in making a points-based game. It would detract from the fun nature of acting out the puppets. As well, the game concept detracted from the act of shadow puppetry itself: the shadow puppets were not required for the interaction, rather they were supplemental.

Our idea evolved from this game, though. We decided that using one shadow puppet demonstration, a pairing of an adverb and a verb, and a random time constraint allowed users to participate in a changing environment. Giving everyone a random time for waiting and for performing the characters ensured that the theatre was always changing.

// Workflow Diagram //


Our very simple MVP in the bottom right of this note took us from the initial stages of ideation to the final show, an approach that I (LCW) found to be one of the most enlightening and useful of the whole project.

The workflow for the project was discussed immediately to determine the scope, tasks, and process of the project, and to determine what the minimum viable product (MVP) would be to achieve our desired goal of an interactive shadow puppet application. The workflow determined the next steps: prioritization of tasks, discovering which pages would need to be wireframed, technical requirements, and technical organization.

// Tasks //

Our week of work planned out in moveable sticky notes, another valuable workflow tool that Olivia taught me (LCW)


From the workflow diagram we assessed all the tasks and functionality our product would need. We wrote each task on its own post-it note and drew out a calendar of the upcoming week. On the calendar we placed the post-it notes on the dates we needed to start each task. Post-it notes allowed us to move tasks around and physically play with our schedule to determine which work needed to be done first, and in which order, to allow for the most efficient product completion. We placed our initials on the tasks that each of us would lead.

// Technical planning //

Workflow planning notes.

From the workflow diagram, we assessed the functions that were required for the project. We decided to use p5.js as our main library because it provided us with the functionality to create and style everything in a responsive canvas. The assessed functions were incorporated into the post-it note tasks placed on the calendar. We set up a Git Repository and created a development branch for push/pull requests.


Our final function diagram that enabled us to see how we needed to proceed with building the project

// Wireframes //

We used two types of wireframes: low and high quality. The low-quality wireframes were sketched out to let us start the technical side of the project. As we developed the application, we simultaneously started working on the high-quality wireframes. Our process was to get all application functionality working first, and then to style the product once all of our elements were being rendered onto the page. Once both the high-quality wireframes and the functionality were complete, we applied the wireframes to the product.

Drawn Wireframes

These initial drawings enabled us to move forward with coding while leaving the design and styling until after the bones of the project had been written.

The low-quality wireframes were simply sketched out so that we could iterate on the product workflow and think freely through drawing. The high-quality wireframes were created using Adobe XD, which allowed us to style a theme and a UI that was easy for our users to understand. The theme was inspired by old Charlie Chaplin movies, to place an emphasis on theatre and narrative.

High Quality Wireframes

This was the final design. We decided to use the simplest design solutions to direct the user: a classic silent-film font, “Windsor Condensed”, and a yellowish tint reminiscent of low light.

// Technical implementation //

Our project required data for the randomly generated aspects of our product. To store the data for the verbs, adverbs, and images, we created individual JavaScript files that held the information in arrays. We loaded these files in index.html and used them as global variables that we called in our sketch.js file.

We also required videos or gifs for our shadow puppet demonstration. We recorded and edited the images ourselves to have full control over our application content.

Puppet demonstration gifs: bird, bunny, scrappy pup, hungry kermit, bossy goose, lil snail.

// The Play //

// Discoveries //

Much of the building of this project was smooth(ish) once we had settled on our idea and laid out our workflow and wireframe plans. However, the most interesting part of the whole process, and where the project came alive, was the demonstration: watching users interact with the tool made both the joys of the idea and the interaction flaws apparent. One particular comment made me wonder whether our UI was efficient enough: while playing, one user couldn’t figure out why the image on screen wasn’t moving with their own hands, when in fact it was only an image and not a camera feed of what was going on in front of the phone. This comment, along with a suggestion to pair the project with augmented reality, was particularly enlightening as to the possibilities of this project’s future.

Some users took issue with the countdown timer page and suggested adding a directive comment while the timer is running. This was another comment that made us think a little more deeply about the user experience of our design, and it was valuable in rethinking the whole flow of the play: each moment must be directed and clear, even if the user is meant and encouraged to use their own imagination and creativity to play.

This is a good example of a timer that is too long! Who wants to wait 52 seconds when everyone else is having fun?


Another discovery that became clear while watching the group play was that the adverb/verb combinations, while funny, did not always logically direct the user to perform the action with their shadow puppet. Some word combinations were too vague or simply impossible to complete with the puppet. How would one “absentmindedly build” with a shark?

The timer was another discovery from user testing: there may be a more ideal set of times for group play depending on the number of players. The times we had set for a group of 20 could be shortened for a group of 4, suggesting the possibility of game-play settings for different group sizes.

// Challenges //

Some of the challenges we faced initially were things that seemed so simple:

Why could we not get our viewport to be responsive? The answer was discovered later and turned out to be an oversight that could be classed as a typo: we had forgotten to include our .css file in the index page. Small trials like this are wonderful learning experiences; never forget to think simply and start from the beginning.

Another challenge came at the beginning of the process: actually nailing down an idea and sticking to it. We went back and forth on game-play ideas and whether they were the right direction for this project. Would our players understand what to do? Is it a game? Do we tell people where to go? How to act? Do we get them each to act out a part in a story? These are all valid questions, but what we learned is that simple planning is best. We spent quite a bit of time adding complexities that didn’t add enough to the play experience to matter.

// Project Context //

These two projects were inspiration for our final product. The first, “Guten Touch” by Multitouch Barcelona, demonstrated a physical interaction with screens: painting the screens promotes physical movement as participants actively paint with the given paintbrush. This project inspired our goal of getting people physically moving in the space, and the use of darkness to encourage a sense of unrestrained play.

The second project, #MIMMI, was inspiration for the collaborative side of our product. #MIMMI takes Twitter data from a city to create a display that responds to the mood of the tweets. Together the data creates a narrative of the city and encourages group participation in the greater display. Our project reflects #MIMMI by offering a collaborative canvas that creates an ever-changing real-time narrative. The participation of the users is what creates the display, similar to #MIMMI.

// References //

Get random item from JavaScript array. Resourced from:
How Can We Merge Our Digital and Physical Communities? Resourced from:

MIMMI Comes to Minneapolis Convention Center Plaza. Resourced from:

#MIMMI. Resourced from:

Shadow Puppets. Resourced from:

Create Image. Resourced from:

Countdown Timer. Resourced from:

P5.js Reference Library. Resourced from:

p5.js loadFont function? Resourced from:

Armengol Altayó, Daniel and Multitouch Barcelona, directors. Guten Touch. Vimeo, 19 Feb. 2009,

“Barcelona 2008 – The Exhibition.” Red Bull Music Academy,

Hu, Ray. “Talk to Me 2011: ‘Hi, a Real Human Interface’ by Multitouch Barcelona.” Core77, 20 Oct. 2011,

Multitouch Barcelona, director. Natural Paint. Vimeo, 14 Nov. 2008,

Multitouch Barcelona, director. Sabadell. Vimeo, 2 July 2012,

“Multitouch Barcelona.” IdN World,

“Multitouch Barcelona.” Vimeo,

Vilar de Paz, Xavier, and Marvin Milanese. “MULTITOUCH BARCELONA. HOW ‘HEAT’ TECHNOLOGY.” Digicult,

Vilar, Xavi. “Guten Touch.” Xavi Vilar,


Exquisite Sketches ✿

Experiment 1:

Exquisite Sketches

Carisa P. Antariksa, Georgina Yeboah

Exquisite Sketches is a collaborative piece that involves digital sketching and assembly. Users are tasked with drawing a specific body part prompted on their canvas with their smartphones and assembling their drawn sketch with others. The concept takes something that can be intricately created using a variety of materials and turns it into something as simple as drawing with your finger on your smartphone browser for a fun activity.

Exquisite Sketches was created using p5.js and HTML pages. Each canvas prompt had either a normal pen brush or its own unique brush stroke.

Github Link  | Webspace Master Page

Canvas Prompts and their Brushes




Normal Pen Brush per Canvas:




Project Context

This project was inspired by the “Exquisite Corpse” art term, meaning a “collaborative drawing approach first used by surrealist artists to create bizarre and intuitive drawings” (Tate), a technique invented in the 1920s. Nowadays, it can be adapted to many uses, whether learning activities for children or a recreational game. The goal of the activity is to have participants experience a “surprise reveal” at the end, when all the unique parts are assembled.

This art can be implemented in many ways through different mediums. There are projects that apply this technique through relief printing, using either linoleum tile pieces or woodblocks. In this implementation, each artwork is cut into pieces and then combined with parts from other artworks. They are often aligned seamlessly to give the impression that they connect to create a whole new corpse.

A 3 part example:



A 4 part example:


This technique has also been adapted into many casual games. Body parts are often swapped for different combinations of phrases, or a mixture of the two, and there can be archives of the resulting variations.

Process Journal

Date: Monday, Sept 24th 2018

In our first meeting after the groups were formed, we listed possible ideas we could implement on 20 screens. Our thought process began in two directions: an interactive activity, or an installation that people could contribute to. Our list of ideas is as follows:

  1. Music (using p5.sound library) – Tapping different screens to create a tune, encourage movement
  2. Weather – Creating “electronic rain.” Users can interact with the screen, the elements are moveable as you touch them
  3. Sending information across computers – Drawing on one computer and having that show up on the other? Users collaborating to complete an abstract drawing. Or a bouncing ellipse? The idea is to keep the ball “floating” as it moves forward.
  4. Creating a chain reaction with elements on the screen – Inspired by dominoes.

From there, we liked the idea of drawing, so we developed an initial concept for an interactive experience that involved sketching with different brush strokes. The first option would be to have a user control one mouse on one device (laptop or smartphone) that could drive various brush strokes on the other devices, each starting from a different position on its screen. The screens would be styled in a grid format. The user would effectively be playing and drawing with multiple brush strokes at the same time through one device.

The second option would be to have all the computers communicate with each other and allow a collaboration to happen. If one user were to draw on their screen, it would appear on the other person’s screen, and so on.

Date: Thursday Sept 27th 2018 – OSCP5, Sockets and Nodes

We wanted to create an interactive installation with 20 screens. Georgina proposed we use a library she had used before to communicate between sketches on different devices. She started looking into how to get OSCP5.js working but ran into some socket errors where variables weren’t being reached since the code was having trouble locating the file.

We found YouTube tutorials by Daniel Shiffman (‘Coding Rainbow’) on how sockets work in p5.js. We are currently watching and learning from these videos to see if they can help us achieve what we want. They introduce nodes and sockets and show how we can send and receive messages across computers through a server. Daniel Shiffman’s tutorials on websockets and p5.js represented this well:
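The server side in those tutorials boils down to a small relay: every client that draws emits its position, and the server rebroadcasts it to everyone else. This is a sketch in the style of the Coding Train videos, not our own working code; it assumes `express` and `socket.io` are installed via npm, and the `'mouse'` event name is the one used in the tutorials.

```javascript
// Sketch of a relay server in the style of the websocket tutorials.
// Wrapped in a function so starting it is explicit.
function startRelayServer(port = 3000) {
  const express = require('express');
  const app = express();
  app.use(express.static('public')); // serves the p5 sketch files
  const server = app.listen(port);

  const io = require('socket.io')(server);
  io.sockets.on('connection', (socket) => {
    // Rebroadcast each client's drawing data to every other client.
    socket.on('mouse', (data) => socket.broadcast.emit('mouse', data));
  });
  return server;
}
```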


Date: Friday, Sept 28 2018

At this point, we were still trying our best to push for using node.js in our project. We used half the class to get the web socket working and tried to replicate it on Carisa’s laptop. After some failures, we went to Nick for advice, and he told us not to pursue it and to focus our efforts on something more feasible. Regardless of the shift, we still felt that creating a project that involved drawing was something we wanted to achieve. Along with these complications, Georgina proposed a brush she had created, and Carisa tried to translate the code from Processing to the p5 library.

Date: Monday Oct 1st 2018

The presentation of all the Case Studies we contributed to was very thorough and showed great potential for expanding our ideas. Georgina pitched the idea of using unique brushes after her fun stroke sketches, made of triangles and ellipses, were working in p5. Afterwards, she started to rewind and return to what had made the node.js process fail. It turned out there was a hyperlink issue, so she instead stated the entire directory path.


Now that the JavaScript was being reached and the server was connected to the code, we could do some really cool stuff with this interactive brush stroke I made!


We both experimented with different points where the brush can start to see how they can create an interesting composition on other screens, given the connection works.


The next step is sending this drawing function across other devices and having different brush strokes going on. Hopefully it doesn’t take as long as simply connecting the JavaScript files did. (The transition from Processing to p5 has been really tricky, but we’re getting there, slowly but surely…)

Date: Tuesday, Oct 2nd, 2018 – Wednesday, Oct 3rd, 2018

Unfortunately, further tweaking did not solve any of the major issues we had before, and we were right back where we started. The HTML file could be opened locally, but the JavaScript files could not: the library was still not being found, and JavaScript files from the project could not be opened locally due to security permissions in Google Chrome and other browsers. We decided to abandon node.js altogether and rethink our approach to developing our sketch project.

Soon, we had produced a back-up plan that involved a “complete the drawing” concept, pulling a case study from this forum: Complete the Drawing. Collaboration can be formed by drawing from the prompt, which can result in interesting possibilities.

To refine our new plan, ideas were laid out on a mind-map.


We both then narrowed down our ideas to a viable product we could create for Friday. With our brushes and the code we had written, we found that applying the exquisite corpse idea would work well with the project brief. To fulfill the brief requirements, we separated the “corpse” into four parts: the Head, Chest, Torso and Legs. This led us to write the code for the body parts in different HTML files for smartphones, so that each group could create an abstract body with all four phones. People from other groups can then mix and match their prompt drawings for some exciting mashups!

We brainstormed ways to make the prompts clearer and wanted to see if an existing drawn prompt on the canvas would allow for a more effective experience.


We then realized that having a drawn prompt might not be necessary: it might push the participants in a direction that limits their imagination, considering the time constraints. To allow the process to flow smoothly, we decided on text prompts that would let everyone draw freely.


After some user testing, a problem arose in writing the code for smartphones. Before we could finalize the HTML pages, we had to figure out a way to keep the canvas fixed in the web browser on both Android and iPhone. Without this detail coded in, it would be difficult for participants to draw properly in the short period of time they are given during the presentation.

Date: Thursday, Oct 4th, 2018 and Friday, Oct 5th, 2018

Carisa spent the day figuring out how to fix the moving-browser problem on smartphones. The problem was that if a mousePressed() function was called alongside windowResized(), the page would still move in browsers on the iPhone, though there was no problem on Android. After some time and a little advice, she figured out that simply placing the same code from draw() and mousePressed() into touchMoved() stopped the unnecessary movement in the Safari browser! We were relieved to fix this crucial detail, as it allows for ease of drawing with our fingers on the phone screen.
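A minimal sketch of the fix: our actual change was moving the drawing code into touchMoved(), and returning false from that handler is p5's documented way to suppress the browser's default touch behaviour (the scrolling that kept moving the page). The stroke styling here is a generic placeholder, not our final brush.

```javascript
// Drawing happens in touchMoved() rather than draw()/mousePressed().
function setup() {
  createCanvas(windowWidth, windowHeight);
  background(255);
}

function touchMoved() {
  stroke(0);
  strokeWeight(4);
  line(pmouseX, pmouseY, mouseX, mouseY);
  return false; // prevents default scrolling/rubber-banding on mobile Safari
}
```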


After fixing the browser problem, we spent the rest of the time trying to enhance the drawing experience. We tried to figure out how to let the stroke change colour as it is drawn across the canvas, develop more unique brushes, and perhaps add a toggle button to change between different pen brushes.


The attempts led to unsatisfying results, perhaps due to limitations in our understanding of how to write the code. Figuring out how the button could be implemented on a smartphone was difficult, as what allowed the button to be pushed was a mousePressed() function and not a touch function. Carisa tried a variety of ways to connect these DOM elements (sliders, buttons) to alternating strokes that could be drawn on the canvas, but they could not be referred to by the p5 library when called with stroke(). For example, ‘bline’ and ‘rline’ variables were made to call upon the strokes, but they did not work once tested in the web browser.


Alternating the code between touchMoved() and draw() did not work either so the idea was scrapped.


We realized later on that this might be resolved by using a shape as a toggle instead, which would require Boolean statements. We could not continue experimenting with this possibility due to time constraints, so instead we decided to use the additional brushes Georgina created for the “fun” aspect of the activity. Some groups would use a “funky” brush and the others a normal pen-stroke brush on different canvases. The aim of these differences was to reflect the “exquisite” aspect of the sketches and to invite possibilities as to how people would react to using an unconventional brush.
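The shape-as-toggle idea could look something like this minimal sketch, assuming a 50×50 px corner region acts as the “button” and using two made-up brush styles; none of this is from our shipped code.

```javascript
let funky = false; // the Boolean that the toggle flips

function touchStarted() {
  // Tapping the top-left corner region switches brushes.
  if (mouseX < 50 && mouseY < 50) funky = !funky;
}

function touchMoved() {
  if (funky) {
    stroke(random(255), random(255), random(255));
    ellipse(mouseX, mouseY, 12, 12);        // "funky" ellipse brush
  } else {
    stroke(0);
    line(pmouseX, pmouseY, mouseX, mouseY); // normal pen
  }
  return false; // keep the canvas fixed on mobile browsers
}
```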

Activity Implementation:    


The class was divided into 5 groups of 4 people, each person holding their smartphone. They would then lock the orientation of the screen and access the canvases we had hosted on the webspace. 3 groups wanted to use the funky brushes, which left the other 2 using the regular pen brush. Each person was assigned one part of the corpse composition, starting with the head, followed by the chest, torso and, lastly, the legs. Each group lined up in a row in front of a central table area, where they took turns drawing the parts within a set amount of time. We planned for each person to spend around 1–2 minutes on one part so that they could refine it to whatever extent they wished. The next person, with the next body part, could then gather inspiration from the person before them and continue the formation of their “exquisite sketch.” Once they completed the activity, each team could see their group results. From there, users could combine their own body-part creation with those of other groups, creating a variety of possible “exquisite sketches.”



Presentation Reflection

We found that there were many possible iterations of the activity. It can be done differently each time with game mechanics such as:

  • Providing prompts versus no prompts

Some concerns, such as the need to have different HTML files for each body part, were brought up (Were they necessary? Did we need to see the prompts?), which allowed us to reflect on the code we created. Was it out of necessity, or something that completed the concept? We concluded that having the basic information shown on the canvas was vital for beginners to the concept of “Exquisite Sketches,” and could open up other possibilities for more knowledgeable participants.

  • Not needing all participants to go in order, but allowing them to draw whatever body part they please and see where the results take them.

This is an observation that we considered, knowing how spontaneous the experience could be once performed in class. To make the activity slightly more structured, however, we decided to have the participants go in order so they could understand the process.

  • Having the option to change brushes on the canvas

This would have been a great thing to implement. Having more tools to use would allow a change in the mechanics of time spent drawing and the craft of the sketch.

  • Instead of already seeing the result of the first body part, perhaps conceal it slightly so that the group will come to a more exciting sketch at the end.

This was a valid observation that could tie into another iteration of the game, changing the rules to make it a separate experience each time.

The game is very versatile in terms of the direction it could go. Each time it is played, whether in a party context or just a fun ice breaker, it can produce distinctive results. There is not just one way to play it, making it such an adaptable, entertaining project.

Learning Reflection

Through this project, we were able to recognize our own processes in reacting to the brief. Having to create an experience across 20 screens overwhelmed us, which led to us over-thinking the concept rather than realizing it. As a result, we prioritized the functionality and presentation of a minimum viable product. The technical realization was limited by our own skills in writing code and by how quickly we came to understand the p5 library. We both noticed the learning curve we experienced, such as the transition from Processing to p5 and the process of slowly understanding why JavaScript code is written a certain way, what it can represent and what it can result in. There was also a lot of learning in understanding our own smartphone devices, complete with their abilities and constraints. We were successful in this regard, being able to identify which code functions worked better than others. Hopefully, there will be a better opportunity in the future to learn to code for responsive screens across all types of resolutions and devices.


“Chat.” Socket.IO, 30 July 2018,

Yeboah, Georgina. “Georgina’s OCADU Blog.” Typothoughtography GRPH2A04FW1103 RSS, WordPress , 2018,

The Coding Train. “12.1: Introduction to Node – WebSockets and P5.js Tutorial” Online video clip. YouTube, YouTube, 13 Apr. 2016. Web. 24 Sep 2018.

The Coding Train. “12.2: Using Express with Node – WebSockets and P5.js Tutorial” Online video clip. YouTube, YouTube, 13 Apr. 2016. Web. 24 Sep 2018.

The Coding Train. “12.3: Connecting Client to Server with – WebSockets and P5.js Tutorial” Online video clip. YouTube, YouTube, 13 Apr. 2016. Web. 27 Sep 2018.

The Coding Train. “12.4: Shared Drawing Canvas – WebSockets and P5.js Tutorial” Online video clip. YouTube, YouTube, 13 Apr. 2016. Web. 27 Sep 2018.

Tom. “Challenge: Complete The Drawing.” Bored Panda, Web. 3 Oct 2018.

Gotthardt, Alexxa. “Explaining Exquisite Corpse, the Surrealist Drawing Game That Just Won’t Die.” 11 Artworks, Bio & Shows on Artsy, Artsy, 4 Aug. 2018, Web. 3 Oct 2018.

Tate. “Cadavre Exquis (Exquisite Corpse) – Art Term.” Tate, Tate, Web. 3 Oct 2018.

“Relief Prints.” Carisa Antariksa, Web. 4 Oct 2018.

“INSIGHTS.” MSLK, Oct 2018.




Experiment 1: DANCY
By Norbert and Tabitha

Dancy is an interactive mobile site that lets you create a spontaneous dance party with friends. Using p5, it allows the user to select from multiple MIDI tracks to play back a unique dance mix – but the songs will only play when the phone is shaken. If you want to hear the music, you’ve got to dance!

GitHub Link: 


The idea for Dancy came about after a night of hanging out with our fellow classmates in Kensington, when we realized that they love to dance! So we asked ourselves whether we could create a dance party experience using multiple phones. First we determined that the most important part of a dance party is the music. Each phone would represent an instrument, and when all the phones came together they would form a band.


First Wireframes

After our discussion we created separate wireframes to see if we were both on the same page about the project. We realized that the project would also need a menu where you can select the instruments.

Norbert shared a video of an art project called “Butterfly Room” to help explain his vision for the symbols. We decided that the symbols should look cute and fun, so we also referenced the dust creatures from the animated film “Spirited Away”.


Early Brainstorming – Exploring Jazz Music/Garageband

Next, we asked ourselves what type of music would be playing. We knew we wanted it to sound good, and with the limited timeframe we could not record our own music tracks, so we experimented with GarageBand. We tried importing jazz standards into GarageBand and found that they were very complicated: if the tempo wasn’t perfect the music wouldn’t sound good. So we experimented with the Apple Loops provided in the GarageBand library and pulled simple tracks with a clear beat.


Garageband audio files

In class we had a conversation with Nick that helped clarify our idea. He suggested we explore the phone’s shake function as a way of playing our music. Kate had provided a document with a few p5 projects that we tried out in class, and we found inspiration in the “shake to change colour” example. Nick said that we could use the phone’s accelerometer to change the tempo of the music; different gestures could achieve different results as well. So we dug into more p5 functions to see if there was any code that could help us.
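The shake detection itself is built into p5. A hedged sketch of the idea, in the spirit of the “shake to change colour” example: setShakeThreshold() and deviceShaken() are real p5 functions, while the threshold value and the flash effect are our own placeholders, not the code we shipped.

```javascript
let shakes = 0; // how many shakes we've registered so far

function setup() {
  createCanvas(windowWidth, windowHeight);
  setShakeThreshold(30); // accelerometer sensitivity (p5's default is 30)
}

// Called by p5 whenever the device's acceleration exceeds the threshold.
function deviceShaken() {
  shakes += 1;
  // Flash the screen so the shaker gets immediate visual feedback.
  background(random(255), random(255), random(255));
}
```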


Animated character poses for Dancy that were later scrapped.

There was a period when the program had more complex animation. We wanted the characters on the home screen to wiggle, and when a user shook their phone on the music page the character would squish into the side of the screen. We also wanted the accelerometer to speed up the music’s playback when shaken, so that users would have to sync their gestures with their neighbours to achieve the same tempo. Ultimately we didn’t figure out these functions in time for the deadline and dropped them from the project to focus on Dancy’s core functionality. Nick reminded us that users would not be looking at their phones while shaking anyhow, so we feel it was a good decision to invest our time in other things.


Working on the character animation inside a music page. 

A big milestone in the coding came when we discovered how to move a white box to either side of the screen. Later we would change the square to a colourful PNG character and make it play music when you shake your phone. For two people without any programming foundation, implementing such an idea is not an easy task. So once we determined the structure of the project, we began learning how to implement the various functions, working our way from the most basic lessons of The Coding Train on YouTube through to the function references in the p5 libraries. Many times the code was technically correct, but it just didn’t work. We finally solved our major problems by consulting teachers and classmates. For example, playing music, which seems like a simple function, took two days of attempts and still could not be achieved. But when Norbert asked Nick, he found that a line of code was missing and that the file was being opened the wrong way. Through this process, we feel that the joy of writing code comes from trying, failing and finally achieving.
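We never wrote down exactly which line was missing, but a classic pitfall with p5 audio (and a plausible version of our bug) is calling loadSound() outside preload(), or opening index.html straight from the file system instead of serving it over http. A working pattern looks like this; the filename is a placeholder and the p5.sound add-on library must be included in the page:

```javascript
let song; // a p5.SoundFile; requires the p5.sound add-on

function preload() {
  // Loading here guarantees the file is ready before setup()/draw() run.
  song = loadSound('assets/track1.mp3'); // placeholder filename
}

function mousePressed() {
  // Browsers require a user gesture before audio is allowed to start.
  if (!song.isPlaying()) song.play();
}
```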

First movement of our white box

To be honest, before this project we were both a little afraid of coding, but in the process of completing it we gradually fell in love with it. When we needed to implement a feature, we went to the libraries to look up the code and found many things we did not understand, such as abbreviations and logic, which hindered our understanding. Then we would go to YouTube to search for videos explaining these elements, and bit by bit we could break down a whole piece of code to fully understand its usage and logic. Then we could combine our own understanding and requirements to write code suitable for Dancy. The whole learning process can be frustrating at the beginning, because trying is often met with failure, but when you calm down, learn a little at a time and finally achieve even a small bit of progress, the feeling is very exciting.


Final Wireframe with shake function


On the day of our presentation we asked each of our classmates to select a colour from the 6 buttons on our homepage so that the music choices were evenly distributed. Tabitha ran through every button with the group so they understood that the music is played using the shake function. This was also an opportunity to isolate each sound before we played everything all at once. When it was time to combine all the tracks, the class got into it immediately, and we noticed that the physicality of the shake gesture helped everyone get into the mood for a party. One of the students turned out the lights and – unprompted – people started to turn on their phone flashlights. The result was genuinely fun, and we were surprised at how good the music sounded.


Classmates are instructed on how to use Dancy

During the group critique we got a lot of useful feedback. People wondered whether it would be more fun to let the group explore the sounds on their own rather than assigning people to groups. That way the music could shift and change as people discovered different sounds. Another suggestion was to link Dancy to a playlist, so that as more people played a particular track that music would spike, almost like a battle of the bands. Everyone agreed that Dancy’s simplicity was also its strength, and that if we were to develop the project further we should double down on the program’s core functionality to make it even better.


Dance party begins!

Besides what was demonstrated in class, Dancy has many possibilities for play and group engagement. For example, under the guidance of a music teacher students could use the program to explore different instruments in a group setting and create unique combinations according to different styles. This way students can learn about a variety of musical styles using an approachable hands-on approach without having to master various instruments.

As mentioned in class, Dancy has another meaning for today’s isolationist society. At parties, many people are busy looking at their phone screens while real communication and interaction are lacking. With Dancy, there is a direct correlation between the number of participants and the scale of the music and party atmosphere that’s created: ultimately, it’s more fun if everyone participates. As a result, people can no longer stare at their phones, because their bodies must always be in motion to generate the sound. No more texting in a dark corner while playing Candy Crush. They must shake their bodies and engage in face-to-face communication with their friends.


We found a lot of inspiration in Norbert’s case study of Yuri Suzuki’s sound installation Sharevari, which relates to Dancy through the common element of music and Suzuki’s dedication to user experience. He creates a tool that is easily accessible to all groups, so they can learn music simply by waving their hands. This helped us continue to develop the idea of Dancy. We wanted to create a small game that is easy to use without extra guidance or professional music knowledge. Anyone can pick up a phone with a friend and create an exciting musical experience through interesting interaction and exploration.


Yuri Suzuki’s sound installation Sharevari

As described earlier, we wanted the characters for Dancy to be simple and cute. Here we were both inspired by the dust characters from the animated film Spirited Away. We settled on a round shape with eyes because we had intended to animate the character bouncing off the side of the screen, and rubber balls are easy to animate. In the end we didn’t get to that part, but it still worked well because the characters were uncomplicated and easy to see on screen. We chose bright colours to create a more playful party aesthetic.


Creating the character buttons

In Conclusion:

If we were to continue working on Dancy, we would match each piece of music to a specific gesture. That way the user could explore the sounds without having to go back into the menu, and could jump to different tracks on the fly. This was a function we were not able to create in time, but overall the project was still a success. If we were to run another group test with a new prototype, it would be interesting to take the suggestion to eliminate the teaching element when introducing the game. We’d like to see what would happen if the users were able to explore the program on their own with minimal prompts. How would that affect the interaction? It would also be cool to include a flashlight function, as that was such a large part of creating the party atmosphere for Dancy.


Project website:




Code reference:

Project reference:

Collabcreature – collaborative mural

Project Title:  Collabcreature

Team: Alicia & Jingpo

Website URL:

Smartphone:



Project Description:

Collabcreature is an interactive, screen-based digital painting game that lets people use their images and creativity to produce a collective artwork. It is a collaborative interaction that can be played by as few as 3 people or as many as 30 at a time.


Collaboration is all about teamwork. We wanted to experiment with an interaction that not only brings people together but initiates cognitive sharing. We were very interested in the amalgamation of the ways people draw and think differently. Even though we all conceptualize in our own individual way, the whole of our differences can create something truly unique and beautiful.

Our research was conducted around collaborative murals and the role they play in bringing communities closer together; for instance, when a neighborhood shares a wall mural and paints together. Other scenarios included designers getting a preview of how other people in their group work and sharing an experience, with the potential benefit of instilling initial cohesion before starting an important project. We were very interested in creating something that effortlessly brought people together in a positive scenario.


Using your smartphone or computer, open the URL in Google Chrome.

  1. Read the homepage instructions.
  2. Get your canvas.
  3. Fill the canvas within the constraints you are given.
  4. Wait until the other players finish their drawings.
  5. Put 3–20 phones side by side to see the big picture.
  6. If playing remotely with other collaborators, upload your image and check the mural online.

Input: Touch screen drawing tool with different brushes for texture and creativity.

Output: A digital collective mural, and the ability to see a singular entity transformed into a bigger piece of art.
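One tile of the input/output loop could be sketched in p5 like this: touch draws the tile, and a key press exports it as a PNG for uploading to the shared mural. saveCanvas() is a real p5 call; the brush style, key choice and filename are placeholders, not our production code.

```javascript
function setup() {
  createCanvas(windowWidth, windowHeight);
  background(255);
}

function touchMoved() {
  stroke(30);
  strokeWeight(3);
  line(pmouseX, pmouseY, mouseX, mouseY);
  return false; // keep the page from scrolling while drawing
}

function keyPressed() {
  // Export the finished tile so it can be uploaded to the mural.
  if (key === 's') saveCanvas('my-mural-tile', 'png');
}
```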




Stage 1 Brainstorming:

• Drawing improvisation game

• Broken telephone voice mobile game

• Make an animated mural together

• Game maze


Stage 2  Concept Development

As a group, we decided to carry forward the exploration of animation in a collaborative mural. We hypothesized about how the interaction would work and what would be the most streamlined process for creating a visual that participants would understand effortlessly. The idea of turning it into a light show with animation after the individual canvases were completed was also something we were very interested in. Due to time constraints, however, we knew we would not be able to create an animation that operated cohesively across all screens, and concluded that this was out of the scope of the project. For the broken-telephone concept, we discussed getting an automated voice message through text, then delivering the game instructions by text message. The next idea was to delve into a motion-sensing maze that could move with the user. To conclude this stage of the process, we made an executive decision to prototype a collaborative mural that could turn into any creature, facilitating sharing and understanding through an application. After our initial exploration, we also felt this was the project we could build within the given time constraints.

Process journal Day1: Research & Preparation

Goal: to create an interactive experience for multiple screens.
Inspiration: We got the idea of creating murals using digital screens. Above is one of our references: people drawing a digital painting with the LambdaVision tiled display at EVL.

Inspiration for lining up multiple smartphones on a desk:
Pinch is a wonderful multi-screen interactive application. People place the screens next to each other (in any alignment) and ‘pinch’ them together, and the application links them to work as a single large display. Please refer to the above reference.
(Japan Expo 2014 • Multi-screen Interactive Application)

Collabcreature process, day 2


Process journal Day 2: 

Day 2: Sharing findings and brainstorming ideation. Exploring canvas size and developing constraints, and planning a canvas that can easily be matched up for a collaborative mural across 2 or more phones. Challenges thus far: making the drawing board and tools in p5.js, and uploading individual drawings and merging them into one big screen with PHP.

Process journal Day 3:
Sketching multiple configurations to further plan the canvas. Storyboarding and conceptualizing the interaction.


Process journal Day 4:
Wireframing and coding: designing, prototyping, and coding. During this process we considered how to bring the canvases together, and the various ways we could push separate pages to each individual user. We researched sorting code that would push random files through to our web page. We also worked through how many users we could support, how many canvases we would have to make individually, and how to make the experience responsive for both mobile and web.


Process Journal Day 5, 6, 7

Putting images into the canvas and continuing to build the draw functions. Troubleshooting the sorting script and researching PHP further. At this point we were trying to decide whether to continue with the sorting script and PHP. We tried to backtrack and resize the canvases for mobile, but while testing we had a hard time using them on the phone. In the final phase before the presentation we could not get the gallery page to work with the PHP sorting script, but we could still create collaborative art without saving and uploading the final canvas.
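The sorting-script idea, which hands each visitor one of the individual canvas pages, was attempted in PHP; as a hedged sketch of the same idea, a client-side JavaScript version might look like this (the page filenames are hypothetical placeholders):

```javascript
// Client-side sketch of the "sorting script" idea: hand each visitor
// one of the individual canvas pages. Filenames are hypothetical.
const canvasPages = [
  "canvas01.html", "canvas02.html", "canvas03.html",
  "canvas04.html", "canvas05.html", "canvas06.html",
];

// Pure helper: map a number in [0, 1) onto a page, so the choice
// can be tested deterministically.
function pickPage(pages, r) {
  return pages[Math.floor(r * pages.length)];
}

// In the browser, redirect the visitor to their assigned canvas:
// window.location.href = pickPage(canvasPages, Math.random());
```

A server-side version would additionally let the gallery page collect the finished drawings, which is where the PHP work came in.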


Stage 3  Game Testing and Critique


  • During the exhibit, classmates looked around and were very curious about each other's progress.
  • Interesting use of the canvas: some people drew inside the lines, while others wanted to expand beyond them.
  • It was interesting to see how the players used the guidelines on the canvas.
  • There was an element of surprise as people stepped away to see how the whole mural looked after everyone's collaboration.

Takeaways for future iterations

  • Improve functionality of canvas
  • Add animations for further interaction
  • Investigate ways to expand the library of canvases, from different creatures to no guidelines at all.
  • Build a better function for sorting pages, to incorporate further collaboration and more interesting possibilities.



Drawing Tool:

Upload Image:

(Collective painting session on a giant touchscreen)

(Japan Expo 2014 • Multi-screen Interactive Application)








Tapjack!: Image Association Puzzle Game

Group Members: Naomi Shah, Shikhar Juyal, & Veda Adnani

Project Name: Tapjack!

Project URL: (Mobile first site)

Github Repository:



Image: Tapjack game visualisation


The Concept:

Imagine seeing a dozen images flash by in a fraction of a second. Would you be able to identify any of them?

In this interactive experiment, a game of 'Tapjack' tests your image association skills. Participants are broken up into two teams of 10 each. A common, large master image is displayed, while each team uses 10 wall-mounted mobile screens. As you look at the mobile screens on the wall, you see different images flashing through them, some associated with the master image and some purely random. The task is a race to the finish: participants must identify, under a timer, all the flashing images associated with the master image to complete the experiment.


How it works:

  • The participants are broken up into two teams of 10 each. Each team is assigned a console consisting of 10 wall-mounted mobile screens, one for each participant. A common screen displays a 'master image', which kicks off the challenge.
  • A buzzer goes off, and a number of images, some random and some 'not so random', start flashing at 13–80 millisecond intervals.
  • Similar to a relay race, each participant takes a turn to run up to the console, quickly identify one image that corresponds to the master image, and tap that image's screen to freeze it. All this needs to be processed in a flash. 1 down, 9 more to go! As soon as you are done, you run back to your team and another team member runs up to the console.
  • The winner is whichever team completes all 10 screens correctly in the shorter time.
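For context on the flash timing: in p5.js the interval between images is set indirectly through the frame rate, where interval (ms) = 1000 / fps. A small helper makes the 13–80 ms range above concrete (this is our illustration, not the project's code):

```javascript
// Flash-timing helpers: p5.js sets the interval between images via
// frameRate(), where interval (ms) = 1000 / frames per second.
function fpsToIntervalMs(fps) {
  return 1000 / fps;
}

function intervalMsToFps(ms) {
  return 1000 / ms;
}

// e.g. an 80 ms flash needs frameRate(12.5), and a 13 ms flash
// needs a frame rate of roughly 77 fps.
```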


The Experience:





Project Documentation:


In this stage, we identified our strengths and articulated our research interests. Each of us had a vastly different skill set that we wanted to utilise, and spent some time describing how we individually interpreted the assignment.

Our research process spanned the internet, and a number of brainstorms. We then did some preliminary research on external case studies with multiple screens. We also studied the Digital Futures blog to gain inspiration from past cohorts and to better understand the flow of the project.



Image: Brainstorming together


Ideas: Participatory Narrative Interactions

Option 1: Building on theories explored in our possible futures classes, we brainstormed on creating a narrative interaction inspired by the themes of Anthropocene, Feminist Futurism or Afrofuturism. We wanted to create a participatory narrative by providing every participant an option of several storyboards/panels which would allow them to determine how the narrative would unfold towards its end. Participants would have to come to an agreement about the structure of the beginning, middle and end, and create the ‘hero’s journey’ for themselves.

Option 2: Building on the previous idea and inspired by traditional murder mysteries, we considered a non-traditional approach to mystery storytelling. Instead of focusing on how to solve a murder, we decided to focus on the psychological, emotional and social condition triggers behind an individual committing a murder.

Based on the turbulent life of one protagonist that led them to commit a murder, we decided to unfold the narrative by providing participants with randomly assigned narrative panels on every screen. Participants would then be required to agree on which circumstances in the protagonist's life led to the murder, and then take the narrative into their own hands by changing those circumstances completely. They could restructure the narrative and its conclusion by choosing from a provided repository of other life circumstances that would change the course of the narrative outcome.


Ideas: Gamified Interactions

Option 1: Guess Who – People would wear screens depicting a persona or character. Unable to see their own character, they could approach someone else and ask them questions about themselves. Inspired by the board game Guess Who?

Option 2: Whack the Duck – Inspired by the classic arcade game in which the player whacks ducks that pop out of holes in random order and quantity, we wanted to recreate the experience across 20 phones, with each phone holding one hole.

Option 3: Evolution of Man – Inspired by discussions around the evolution of our race in our Possible Futures class, we wanted to recreate the pattern of the evolution of man. We envisioned a walking man, across the 20 screens laid out. The reason we didn’t go ahead with this one was because it was not interactive enough and we knew we didn’t want our experience to be passive.

Option 4: Dosha test – We explored the possibility of coming up with an idea using our heritage. We wanted to create an interactive quiz experience that gave users a breakdown of their Ayurvedic mind and body types (doshas) – to enhance their well-being. These are based on the five elements in Ayurveda. With a hectic schedule, we thought this was a good way of getting our cohort to start taking care of themselves by knowing themselves better.

Option 5: Memory Game – The classic card memory game that we all grew up with was something we wanted to explore, but we didn't think the idea was unique enough, so we did not go ahead with it.

Option 6: Life on Mars / Galaxy Installation – We envisioned a walk-through observatory, more of an educational experience around a galaxy. We wanted to build an installation with hanging phones, but realized this was too ambitious for the time we had.



Image: Sketches for the Evolution of man idea and the Galaxy Installation


Inspiration for Tapjack!:

A major source of inspiration for us was the card game 'Slapjack'. It is a casual card game with one simple rule: cards keep unfolding in front of you one after another, and when you see a Jack, you slap it!

We wanted to translate this simple yet fun interaction into the digital world by incorporating it across 20 screens and creating a delightful experience. We also decided to use a relay-race (pass-the-baton) format to facilitate the game and divide people into two competing teams.

In the beginning, our ideas manifested mainly at the intersection of narratives and gamification, owing to our specific areas of interests. Over time, in the interest of simplifying our interactivity to make it achievable in the timeline that we had, we gravitated more towards games that already had a familiarity in the minds of the participants.


Image: Tapjack original concept sketch


Out of all the outcomes from the ideation process, we decided to build Tapjack for the following reasons:

  • It was a game with a quick outcome and a reward system, while also adapting the same mental models as games people have already played before. We wanted to build on the nostalgia people felt playing the old Push2Win arcade games. These mechanisms would require pushing buttons at a precise moment to win a prize.
  • We wanted to combine both these unique and nostalgic offline experiences, and recreate them using technology and the 20 screens as our medium. We felt that this concept worked well to fit into the brief of 20 screens working at once.
  • We included a relay aspect, which is usually associated with the exhilaration of children’s games which we thought would make the experience more dynamic and competitive.


Image: Push2Win Arcade Games – A nostalgic game experience.



These are the core elements that we decided to include. We broke our tasks up on the basis of high and low priority. The high priority tasks were what we needed to create an MVP. The low priority tasks would extend to refining and fine-tuning the experience of playing the game. These goals were assessed regularly on the basis of our progress.


High Priority Tasks:

  • Adaptive screen image loop: We needed a bank of images, some supporting images and some pieces of the master image, to be put into a single loop at a speed that was neither too fast nor too slow. Over time, we decided to include only one piece of the main image among a collection of random images, assuming that players would not have time to keep referencing the main image to find its pieces under a timer.
  • Tap Responsiveness on Images: We required the tap response to be fairly accurate so that if images are looping really fast, a participant has a sense of gratification of tapping the right image at the right time.
  • Audio-visual feedback for a right or wrong image: This was extremely important to give feedback about the participants action, and also allow for more active participation of players. We realised that sound was as important as visuals to provide negative and positive affirmations to players to keep them engaged.
  • Linking phones: We wanted all 20 phones to work in tandem with each other, and randomly generate images from a bank of supporting and primary image pieces for every screen.
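The adaptive image loop described above, with a single master-image piece mixed into a bank of random supporting images, could be assembled roughly like this (the filenames and shuffle are our own sketch, not the project's code):

```javascript
// Build the image loop for one screen: one master-puzzle piece
// shuffled into a bank of dummy images. Filenames are hypothetical.
const dummyImages = ["dummy1.png", "dummy2.png", "dummy3.png", "dummy4.png"];

// Fisher-Yates shuffle; takes a random-source argument so the
// result can be tested deterministically.
function shuffled(arr, rand) {
  const a = arr.slice();
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// One correct piece hidden among the dummies for a given screen.
function buildLoop(masterPiece, dummies, rand) {
  return shuffled(dummies.concat([masterPiece]), rand);
}

// A p5 draw() loop would then cycle through the result, e.g.:
// image(loopImages[frameCount % loopImages.length], 0, 0, width, height);
```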


Low priority Tasks:

  • Timer on screen: We wanted to assign individual timers to each team so that at the end participants could have a comparison of which team finished the game first, further fostering a competitive spirit.
  • Makey-makey start button: In the spirit of nostalgia, we thought we could create a push button like old arcade games which would start the loop on all the phones at once. We considered using makey-makeys to create such a button.
  • A progress bar: We thought a progress bar for each team would be a nice touch to keep participants invested in other participants getting the right or wrong answer while they awaited their turn.



Images: Storyboards for Tapjack that describe the interactions, experience and the space.






Image: Low fidelity digital visualisation of Tapjack


Picking the Images for the Interface:

Part of our design process was to mutually understand our aesthetic sensibilities as a team. Initially, we thought about creating our own illustrations and artwork for the project. However, due to limited time we decided to look for existing imagery. M.C. Escher's lithograph print Relativity immediately grabbed our attention because of the three gravity sources that make it so alluring. It was complex but not chaotic, easily identifiable and yet mesmerising.

We tried to find supporting images that used similar visual tricks but were still distinguishable from the visual treatment of the master.



Image: Supporting Images



Image: Master Puzzle


Image: Interface support


Coding Process:

Once we had our user interface and its key elements in place, we began our coding process as a team. We spent some time planning and dividing the load for the code, and got straight to it thereafter. Throughout the process we had to be mindful of the challenge of creating one seamless experience across 20 phones.

  • Step 1: Adaptive image: Our first coding task was to create an image in a window that was responsive to multiple mobile phone sizes. This was key, since our puzzle pieces had to be edge to edge. We did this using the window-resizing function available in p5.
  • Step 2: Loop: Our next task was to create an algorithm for a looping sequence of images, which would form the skeleton of our game interaction. We worked with both the image-sequencing code in p5 and the Load Animate code from the library. The sequencing code worked better for us when it came to keeping the interface responsive across screens and making arrays for the images, so that's what we went ahead with. Once the base of the code was clear, we began experimenting with the frame rate function to control the speed of our sequences. We made sure the speed was neither so slow that identifying and tapping would be too easy, nor so fast that deciphering the images would be impossible. We were still unclear at this stage about the multiple pieces and how the code would come together, but for the time being we moved our focus to feedback mechanisms.


  • Step 3: Tap-stop and feedback with sound: With a responsive sequence in place, we began to focus our energy on feedback for the user. We knew that the feedback had to be both audio and visual. Our first priority was to let the user stop on the right image, and for the loop to continue if the image tapped was incorrect. Using a series of nested 'if' statements, we checked whether the image tapped by a participant was a master image or a dummy image, and provided corresponding feedback. For sound feedback we used royalty-free sounds and referenced the p5 sound library. Thereafter we implemented visual feedback using images we designed ourselves to enhance the gaming experience.


Video: Testing out feedback!

  • Step 4: Image arrays: With our feedback in place, we conducted a round of user testing (videos are available below), then shifted focus to the image arrays for all the pieces. Initially we had hoped to keep all 10 arrays for the 10 puzzle pieces within one p5 file. We tried using if statements to separate the arrays so they could be called upon at the right time, but this proved to be a challenge. So we separated each loop into its own file with a corresponding HTML page to support it. In the long run, this seemed like a more feasible solution for ensuring each array was called accurately.
  • Step 5: Creating the separate JavaScript and HTML files: We established a clear strategy as a team to go ahead with the separate-page approach for each piece of the puzzle. We created 10 files in p5 along with 10 corresponding HTML files to support them, then created a master index file with links to each of the 10 HTML pages.
  • Step 6: Creating the homepage: Once the skeleton for the homepage was created, we focused on its interface design. We created a custom homepage to introduce users to the project and let them navigate to the puzzle pieces, using HTML, CSS and p5.


  • Step 7: Fine-tuning: Lastly, we fine-tuned the whole look and feel of the homepage. Our priority was to make it mobile-first, since that was the focus of the experience. We added a favicon, fine-tuned the design and interactions for buttons, and made it responsive to changing window sizes.
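Steps 1–3 above can be sketched together in a single p5.js file. This is a simplified stand-in for the project's code: asset names are placeholders, and the correct-piece check condenses the nested if statements described in Step 3:

```javascript
// Simplified Tapjack core: responsive canvas, looping image
// sequence, and tap-to-stop feedback. Asset names are placeholders.
let images = [];        // filled in preload(); one entry is the master piece
let masterIndex = 2;    // position of the correct piece (illustrative)
let current = 0;        // index of the image currently on screen
let stopped = false;    // true once the right image has been tapped

function preload() {
  // for (let i = 0; i < 10; i++) images.push(loadImage("img" + i + ".png"));
  // yesSound = loadSound("yes.mp3"); noSound = loadSound("no.mp3");
}

function setup() {
  createCanvas(windowWidth, windowHeight); // edge to edge on any phone
  frameRate(15);                           // controls the flash interval
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight); // Step 1: stay responsive
}

function draw() {
  if (!stopped) {
    current = frameCount % images.length;  // Step 2: loop the sequence
  }
  image(images[current], 0, 0, width, height);
}

// Pure helper standing in for the nested if statements of Step 3.
function feedbackFor(tappedIndex, master) {
  return tappedIndex === master ? "yes" : "no";
}

function touchStarted() {
  // Step 3: freeze on the right image, keep looping on a wrong one.
  if (feedbackFor(current, masterIndex) === "yes") {
    stopped = true;  // yesSound.play();
  } else {
    // noSound.play();
  }
  return false; // suppress default browser touch behaviour
}
```

The 10 per-piece pages described in Steps 4–5 would each be a copy of a sketch like this with a different master piece.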


User Testing:

Once we had our feedback code in place, we conducted a detailed round of user research to help us better understand the interface.


Material and Fabrication Process:

As a team, we wanted to create a piece of work that was both interactive and engaging, and beautiful to look at. A vital part of creating that experience was to have the right installation that held the 20 devices together both securely and aesthetically. We approached Reza at the Maker Lab to guide us through this process.

The fabrication was something we handled simultaneously with the coding process. We conducted the exercise over two sessions at the Maker Lab, which are highlighted below:

All three of our team members were present throughout the fabrication process, since it was an integral part of the project. Naomi was in charge of documentation, which is the reason for her absence from the pictures – she was taking all of them!


Session 1 (Monday October 1, 2018):

Our first session was a consultation with Reza, during which we discussed our concept in detail and listed potential materials we could use for the different parts of the installation. Thereafter we began experimenting with different thicknesses of plastics (0.03mm, 0.1mm, Ziploc, pouches etc.) to understand the ergonomics of touch response through a surface, and different methods of fastening and securing the phones. We even considered materials like mesh or wooden slots before deciding on plastic. We worked with staples and wood, and tried a few different phone sizes to see what was working and what wasn't. Reza left us to think about the final materials we would like to use, and how we would fasten the installation to the wall securely. At the end of the first session we had some material decisions still to make, and a test wood board with a phone slot, which we used for our user research. We tested first among our team and then with our classmates.



Sourcing trip:

After our first meeting with Reza and successful user research, we set out to find the right plastic and foam boards to bring our vision to life. We visited multiple stores and acquired two different types of plastic pouches, two different thicknesses of foam board, and mounting tape, which we took to the Maker Lab for our second session.


Session 2 (Wednesday, October 3 2018):

Our second session with Reza at the Maker Lab focussed on production. We spent a short time discussing our game plan and then got straight to work. We tested the plastic we had purchased and picked a final one. We assembled our master puzzle boards with spray glue first, then cut the wood for the phone-holding section and fastened it. Thereafter we placed and stapled our plastic pouches to secure them, and finally covered the pins with matte black tape to make it aesthetically pleasing.

materials-2 materials-3


Images: Sessions at the Maker Lab


Final Presentation

We borrowed 20 mobile phones from within the cohort to mount on the fabricated boards we had created for our experiment. There were two boards with 10 plastic pouches each, holding the phones securely in place. The master image was hung between the two boards.

With the collaboration and cooperation of our cohort group, we ensured that all the phones had the following settings:

  • Screen lock deactivated to allow for the loop to continue and for people to access it without trouble during their turn at the board.

  • Brightness at maximum

  • Volume at maximum.

The group divided themselves into two teams and came up one by one to play their turn at the console. The game was played twice by the participants.


  • During the second round of playing the game, the players were more familiar with the rhythm of the image loop and hence played with a little bit more strategy. They were more careful about when they tapped the screen as opposed to the first round when it was still unfamiliar.

  • In the first round, people often tapped the screen continuously till they happened to land on the correct image. While we expected people to be more methodical over identifying the image as observed during our user testing phase, the high-pressure environment of a timer and a team rooting to win brought out other behaviors.

  • The sound effects of 'Yes' and 'No' really enhanced the experience of identifying an image as right or wrong. When players got it right, they displayed excitement and energy. Some people played multiple screens, despite there being one screen per participant.


  • Participants observed that they would have liked a larger sense of reward for identifying an image correctly, such as the image changing from black and white to colour upon selection. Others mentioned they would have liked some colour distinction and variety among the images in general.

  • Some participants mentioned that they would have liked to find the pieces of the image puzzle separately, and then attempt to bring them together to form the complete image.

  • Other participants said they enjoyed the current game mechanics, with the wall-mounted screens fostering a collaborative effort, a sense of investment in the game and a competitive spirit.

  • The visual design and interface of the Tapjack! home page was appreciated as refined and aesthetic.



Challenges Faced:

  • First-time coders: During the early stages of the experiment, we ideated a range of concepts inclusive of our different areas of interest. However, as we started breaking the code down into smaller steps, we could assess how much complexity it would require. As first-time coders attempting a project on the foundations of HTML, CSS and p5.js in a very limited time frame, we decided to keep our concept achievable yet engaging, while prioritising the technical functionality of our concept. We spent time watching tutorials together and discussing different approaches and solutions to our technical difficulties. We often fell back on Adam Tindale for help with troubleshooting.
  • Time constraint: We were unable to find the time to refine and clean up the project, and in fact had to remove some of the interactivity we had planned, prioritising the tasks that were most crucial. For example, we wanted to add a progress scale for each team, so they would have a single visual representation of who was winning the race. Another example: the tap response for the images is not entirely accurate, making the tapping a game partly of luck. In two weeks we could only produce an MVP.
  • Android and iOS compatibility issues: We found inconsistencies in how the code functioned across iOS and Android phones when we user-tested with our peers. For example, we realised that mousePressed does not work on iOS while it works perfectly on Android. When we used touchStarted instead, it worked fine on iPhones, but we had considerable trouble getting the code to work on Androids. We reworked our code several times for this, and tested every phone one by one to assess the looping, sound effects, and pausing on the correct pictures. Finally, we approached Adam Tindale, who helped us understand what to rewrite in the code and the logic behind it.
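One common way to work around the inconsistency described above is to route both mousePressed and touchStarted through a single handler and ignore the duplicate event some mobile browsers fire for one tap. This is a general sketch, not the fix the team ultimately used:

```javascript
// Route both tap events through one handler and drop the duplicate
// event some mobile browsers fire for a single tap.
let lastTapMs = -Infinity;
const DUPLICATE_WINDOW_MS = 300; // taps closer than this count as one

// Pure helper: is this event a duplicate of the previous one?
function isDuplicateTap(nowMs, lastMs, windowMs) {
  return nowMs - lastMs < windowMs;
}

function handleTap(nowMs) {
  if (isDuplicateTap(nowMs, lastTapMs, DUPLICATE_WINDOW_MS)) {
    return false; // ignore the synthetic second event
  }
  lastTapMs = nowMs;
  // ...check the tapped image and play feedback here...
  return true;
}

function touchStarted() {  // fires first on touch devices (incl. iOS)
  handleTap(millis());
  return false;            // ask the browser not to fire mousePressed too
}

function mousePressed() {  // fires on desktop, and on some Androids
  handleTap(millis());
}
```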



  • The project gave us all the opportunity to get out of our comfort zones and code with a purpose. All three of us undertook specific tasks in the code, and realised our strengths and weaknesses in this specific area. Moreover, we coordinated well as a team while writing code, each of us building on the others' work through research, additional resources, and help from more proficient coders.
  • The Android and iOS compatibility was something we didn't consider a massive hurdle until the very end, when we couldn't solve the issue of the negative-affirmation image and sound repeating twice. We only discovered it when we user-tested on our peers' phones. It's something we intend to solve in order to better understand writing code for both iOS and Android.
  • We were unable to find the time to create a more sophisticated game, complete with a timer, a buzzer, and a scale showing each team's overall progress. While this wasn't missed during the presentation, if the game were to be played more than twice it would require a more sophisticated mechanism and experience design to make the engagement sustainable.
  • It was important for us to focus on functionality and achievability in this first experiment, and hence to scale down the conceptual ambition of the project. While we had some really interesting ideas, we would not have been able to pull them off in the timeframe provided.


Project References:


HTML5 Doctor,

“16.9: Array Functions: Sort() – Topics of JavaScript/ES6.” YouTube, 23 Feb. 2018,

Shiffman, Daniel. “9.12: Local Server, Text Editor, JavaScript Console – p5.Js Tutorial.” YouTube, 31 Mar. 2016,

“About.” HER STORY,

“Audioversum.” Innsbruck and Its Holiday Villages .:. Official Website!,

Chapman, Stephen. “Create More Concise Code by Nesting If/Else Statements in JavaScript.” ThoughtCo, ThoughtCo,

Chen, Pearl. “Built-in Browser Support for Responsive Images – HTML5 Rocks.” HTML5 Rocks – A Resource for Open Web HTML5 Developers,

“FRAMED Collection.” FRAMED Collection,

Processing. “Processing/p5.Js.” GitHub,

“Slapjack – Card Game Rules.” Bicycle Playing Cards,

Trafton, Anne. “In the Blink of an Eye.” MIT News, 16 Jan.

Humans Vs Zombies – Hack, Slash, Bang Bang!

An experiment with 20 screens and adventures in p5.js

– Omid Ettehadi and Frank Ferrao


Project Scope:
Humans Vs Zombies is a human-interaction game, based on the classic playground game Rock Paper Scissors, that uses smartphones. Initially our goal was to make an entirely digital experience, but the project evolved into a simpler version focused on using the smartphones as tools to enhance the user's experience of playing the game. The change was due to time and technology constraints, as well as a better understanding of the assignment after learning more about multiscreen projects and their capacity to create an interactive experience for their audience.



Working URL:

We have two versions of the code for the final project.

The chocolate eclair version was written mainly by Omid, with Frank contributing to user testing and iterating on the flow of the experience, mostly by removing features we had planned that were adding complexity to the usability.

Frank worked on a simpler version, with no dynamic switching of Humans and Zombies in the interface; there are two URLs with identical code, but the graphics and sound effects have been changed for each URL.

The other main difference is that in Frank's version you can keep shaking the device to bring up the next random card, whereas in the advanced version you have to reload the page to start a new game.

This has its advantages if you are playing solo, but because ours was a group experience we did not want people to keep playing solo games; we wanted them to engage in the group experience of winning as a team.


Design and UX:

There were two phases to our designs. In the initial phase we wanted to make a full-fledged app, and Frank went down the rabbit hole of getting a lot of art done for the project; he asked his son to make some zombie drawings that he could use in the game. The initial concept did not have humans in the mix. We were calling it **Self Actualized Zombies**, where your goal was to become a zombie first and then, as a group of 5, become a mega-zombie. The game would have people collect zombie body parts until they were a complete zombie, and then join body parts as a group to become an S.A.Z.

Planning with a complicated set of ideas and multiple UI screens:


Angry Zombie on the Right by Emily Age 7

We spoke with Nick after the first week of planning this style of game, and he steered us away from such a complex endeavour. We then went into our PHASE 2 planning, where we got rid of all the in-app scorekeeping, the multiple UI screens, and the final payout with tons of in-app messaging, which would have been hard to read and understand in the dark while playing the game anyway.

We tried hard to simplify the UX and UI. In doing this, we looked at several criteria for the design.

Character Design and UI Design:

Frank asked his 9-year-old to design the zombies for the game. We did not have any human characters at this point, and below is what he came up with, which Frank cleaned up to fit into the game.


We also made a sample UI for the interface, which Omid put into our first prototype of the game. It was based on the mock-up image below; we had features like an in-game economy, point collection, and a timer with 3 lives.


The initial concept also called for mixing and matching zombie parts across several screens; we experimented with how these would fit together for the final experience:


Game Genre:
A chance-based game, much like a casino-style game, where the luck of the draw determines what shows up. Games of chance reward the user with money, virtual money, or points; in our case it was the latter. We used an analogue scoreboard to keep track of who was leading.

An example of an analogue version of our game mechanic can be seen in this video:


Source: “Rock Paper Scissors Play Machine DIY from Cardboard.” YouTube, La Mamka Creative Club, 30 Oct. 2017,

The pacing of the game was real-time and turn-based: different people in the group took turns while the rest cheered them on.

Humans Vs Zombies the Experience:


We landed on a mix of a role-playing game and a digital component, the luck-based draw of the cards. We used music, lighting and props to enhance the experience and user engagement, since the app on its own can be mundane and glitchy: shaking a device can lead to unexpected results depending on processor speeds and device specifications.

The main question we kept asking ourselves: IS THIS FUN!? And will it keep people engaged?

We did not want to use the words ROCK, PAPER, SCISSORS on the cards. We went through different iterations of objects, but we always came back to interpretation issues, as nothing was clear enough to be identified as a rock, paper or scissors. We knew we wanted separate objects for Zombies and Humans, so we decided to use illustrations and symbols to represent the type of card you had drawn; the result was a card with R, P or S on it. We found during gameplay that it still took time to identify a winner, as our past experience with the game had to make sense of the current graphics and the presented outcome.

Here are some of the features of p5.js we used in the final digital prototype. We dare not call it a full-fledged app, as it could do with a lot of refinement to become well polished.

– The p5.play library to make animated sprites
– The deviceShaken and random functions to give users the resulting card
– The sound library to trigger sounds on shake
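A minimal sketch of how those pieces can fit together; asset names and the card faces here are placeholders, not the project's actual files:

```javascript
let cardSound;
let cards = ['R', 'P', 'S']; // placeholder card faces
let drawn = null;

function preload() {
  cardSound = loadSound('assets/shake.mp3'); // placeholder path
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  setShakeThreshold(30); // tune shake sensitivity per device
}

// Pure helper: map a random value in [0, 1) to a card index.
function cardIndexFor(r, nCards) {
  return Math.floor(r * nCards);
}

// p5 calls this when the accelerometer registers a shake.
function deviceShaken() {
  drawn = cards[cardIndexFor(Math.random(), cards.length)];
  cardSound.play(); // silent if the iOS mute switch is on
}

function draw() {
  background(20);
  if (drawn) {
    textAlign(CENTER, CENTER);
    textSize(96);
    fill(255);
    text(drawn, width / 2, height / 2);
  }
}
```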

We found out that on iOS devices the silent switch disables browser sounds even if the phone is at full volume. This was a last-minute discovery, and it caused a lot of frustration for other projects that depended on sound.

This was the final loop of the game:

1. Choose a Character -> 2. Click to start -> 3. Ready to shake -> 4. The result of the shake.

Step 2 was very confusing, as players started shaking the device right after they clicked the character card, expecting the result to show up.


Process Journal:

Day One (Game/User Experience Design):

The first day was used to write down the design of the experience and see where the tool could be of help. After much discussion, we realized that the best thing to do was to keep the design simple and use it mainly as a random generator. The process started with the creation of the initial frame. We knew we needed two groups for the game, so we made two simple buttons that let users choose which team they were playing. After selecting a button, the user is shown different tools based on their character.


Day Two (Game Functionality):

Our goal for our second day was to make sure the website worked completely and was fully functional. We developed a random generator to choose a card at random for players. We used the deviceShaken event so that, after a team has been chosen, the player can shake their mobile phone and a card is chosen at random and displayed on the screen.


Day Three (Re-design of the User Experience):

After knowing that the design was fully functional, we focused on making the interface more like a game. We replaced the pictures with animations so that when the device is shaken, the animation is played. The animation shows a card rotating around and then revealing the card that was chosen at random. We added sound for each card and for each button as feedback to keep the experience more entertaining.


Day Four (Final Touches):

We changed the structure of the initial buttons to two cards, where now the users could select their team by choosing a card. This way, the experience seems much more like a game. In addition, we added a delay for the audio files so that no matter the phone that is being used, the audio file is played after the card has been revealed.



Coding Challenges:

  1. Difficulty dealing with animation

We faced two problems when we decided to work with animations. First was the speed of the animations, which differed between devices based on processing power and frame rate. To work around this, we added a delay to the audio file so that it would play after the card had been revealed. The second problem was the size of the animation. We had no control over it directly; the only control we found was to change the size of the pictures in the animation, so we chose dimensions that would suit most mobile devices.

  2. Audio file not playing on all devices

Not all phones or browsers supported our audio file format. To fix this, we provided the audio in multiple formats so that most browsers could play one of them.
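p5's soundFormats() function is built for exactly this: declare the extensions you have exported and call loadSound() without one, and p5 requests a format the browser supports. A small sketch (file paths are placeholders):

```javascript
let cardFlip;

// Pure helper: the set of files p5 can fall back across,
// given a base path and the declared formats.
function candidateFiles(base, formats) {
  return formats.map((ext) => base + '.' + ext);
}

function preload() {
  // Declare every format we exported the sample in.
  soundFormats('mp3', 'ogg');
  // No extension here: p5 resolves one the browser can decode,
  // i.e. one of candidateFiles('assets/card', ['mp3', 'ogg']).
  cardFlip = loadSound('assets/card');
}
```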

Some of the Feedback we got:

  • There should have been more player choice, as in the classic game of ROCK PAPER SCISSORS where you choose the hand you play.
  • The game could be used as a party game for kids.
  • The build-up of the game environment was appreciated and well received.
  • There could have been more group interaction, as most players got to play only once.

We have tried to respond to some of this feedback in the post above.




Overcoming IOS HTML5 Audio Limitations. Retrieved October 8, 2018,

Pedercini, P. Retrieved October 8, 2018,

Reyes, A. (2017, March 29). p5.play Game Development – Parts 1 and 2: Setting Up. Retrieved October 8, 2018,

Shiffman, D. (2018, July 30). Coding Challenge #111: Animated Sprites. Retrieved October 8, 2018,


Game On!


Storybook: The Princess & The Dragon


Maria Yala & Nick Alexander

Storybook: The Princess and the Dragon

Storybook is a collaborative storytelling game built using p5, in which each player assists in generating a random page of a storybook. Storybook asks players to tell the tale of a Princess and her dragon. Players require a mobile phone and internet access to create a story page. Each phone will generate one page of the story in this multi-player game. To create a page, each player picks one image from a random set of four images and a caption from a random set of four. Story pages can then be arranged in multiple ways to create different tales about the Princess and her dragon. How the players arrange their pages influences the way the story is read.

Link: (Optimized for mobile at this time)

Git repository:


Brainstorming & ideation document

Our first discussion was one of scope and simplicity. The most interesting projects presented to us as examples were ones that used phones as mechanisms to facilitate a face-to-face personal interaction, rather than the core experience. We brainstormed and explored several ideas, and Storybook was the idea that we both felt was the most fun, versatile, and achievable.

First wireframe of the project

We sketched out a rough wireframe of the screen-by-screen user flow. Even at this early point we were considering experience design, and included a “blank” screen as a sort of pause in the flow. Once all the players were ready, we thought, the screen could be tapped to reveal the final page chosen and thus preserve the surprise.

We were also discussing how to best generate random images, display them in a quadrant, and store them for retrieval at the end. We discussed whether having multiple canvases on the screen would be helpful in this. This wireframe captures our decision to go with a single large canvas – the black square represents the single canvas, drawing all the artwork.


First working version of the screen progression

This is the first complete prototype of the user progression. We generated multiple screens by creating a “screenState” variable and writing if-statements that drew screens based on the current number assigned to screenState. Every choice the user made increased the variable by one. So if screenState was 0, the user was seeing the first screen; if screenState was 1, the second screen, and so on.
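In outline, the progression logic looked something like this (the drawing helpers here are hypothetical stand-ins for our screen code):

```javascript
// A single counter drives which screen is drawn.
let screenState = 0;

function draw() {
  if (screenState === 0) {
    drawTitleScreen();         // hypothetical helper
  } else if (screenState === 1) {
    drawImageChoiceScreen();   // hypothetical helper
  } else if (screenState === 2) {
    drawCaptionChoiceScreen(); // hypothetical helper
  } else {
    drawFinalPageScreen();     // hypothetical helper
  }
}

// Each confirmed choice advances the flow by one screen.
function advanceScreen() {
  screenState += 1;
  return screenState;
}
```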



Story images and caption generation and randomization

Once a player clicks start to begin the game, they are presented with a random set of 4 images to choose from and a random set of four captions. To draw these, we first had to determine how to place the images on the screen. We decided to split the canvas into four quadrants. Using (x,y) coordinates and an array of 12 images, we generated 4 random integers based on the length of the array and used these integers as indices to randomly choose images. The images were then drawn at the (x,y) coordinates (0, 0), (width/2, 0), (0, height/2) and (width/2, height/2). Once we determined that the randomization was working correctly and that the test images were showing up in the correct quadrants, we moved the images again to center them inside the quadrants.
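A sketch of that randomization and quadrant layout (illustrative, not the project's exact code):

```javascript
// Pick `count` distinct random indices from a pool of `poolSize` images.
function pickDistinctIndices(count, poolSize) {
  const chosen = [];
  while (chosen.length < count) {
    const i = Math.floor(Math.random() * poolSize);
    if (!chosen.includes(i)) chosen.push(i);
  }
  return chosen;
}

// Top-left corner of quadrant q (0..3) on a w x h canvas,
// in the order (0,0), (w/2,0), (0,h/2), (w/2,h/2).
function quadrantOrigin(q, w, h) {
  const x = (q % 2) * (w / 2);
  const y = q < 2 ? 0 : h / 2;
  return [x, y];
}

// Draw one random image into each quadrant.
function drawChoices(images) {
  const picks = pickDistinctIndices(4, images.length);
  for (let q = 0; q < 4; q++) {
    const [x, y] = quadrantOrigin(q, width, height);
    image(images[picks[q]], x, y, width / 2, height / 2);
  }
}
```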

To save the image and caption clicked, we created global booleans, i.e. q1Clicked, q2Clicked, q3Clicked, q4Clicked, which we set to true whenever a click was registered in a particular quadrant. Using if statements and our storyStage variable, which indicated which stage of the game the player was in, we were able to determine which image and caption to save, as each image and caption was placed in a particular quadrant. Once the storyStage advanced, the global variables were reset to false.

At this point we merged the Capture & Randomization script with the Screen Progression script to create storybook.js. This choice ultimately led to some problems, as we had duplicated work. We had different methods of tracking clicks, and the overlap of the two click-tracking methods caused an issue with iPhone that we only uncovered in the last days of the project. The Screen Progression script used the mouseClicked function, which isn’t compatible with iPhone. We solved this problem by changing mouseClicked to mouseReleased, which is compatible, and using “return false;” to override any browser presets.
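The fix can be sketched like so; handleTap and the stage bookkeeping are stand-ins for our actual logic:

```javascript
// mouseClicked does not fire reliably in mobile Safari;
// mouseReleased works on both desktop and iPhone.
function mouseReleased() {
  handleTap(mouseX, mouseY);
  return false; // override default browser touch behaviour
}

// Pure helper: map a tap to the quadrant it landed in (0..3).
function quadrantAt(x, y, w, h) {
  const col = x < w / 2 ? 0 : 1;
  const row = y < h / 2 ? 0 : 1;
  return row * 2 + col;
}

function handleTap(x, y) {
  const q = quadrantAt(x, y, width, height);
  // ...here we would set q1Clicked..q4Clicked for quadrant q
  // and advance storyStage once a choice is confirmed.
}
```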

Once we determined that the new storybook.js script was working correctly we tested the game on our own mobile phones. Below are screenshots from the test.





Princess character study

Both to save time and to evoke a cartoon children’s book aesthetic we made the decision to hand draw our artwork. The Princess needed to be expressive, with enough character of her own to make her feel unique, while being able to fit in with any caption. She is inspired by Allie Brosh’s Hyperbole and a Half, whose sassy and expressive cartoon heroine is a simple messy design, and the ultra-flexible (and ultra-disposable) stickmen of (not safe for work!) Cyanide and Happiness.


Dragon character study

The Dragon needed to be simple in order to match the Princess and also be easy to draw consistently multiple times. Having him be more snake than lizard was an early choice that never wavered. He is inspired by the dragon from the classic Paper Bag Princess by Munsch and Martchenko, and Adventure Time’s postmodern takes on fantasy creatures.


Final character designs

The Princess and the Dragon were chosen for their versatility. The trope of the dragon-stealing princess is common in myth and children’s stories, and the assumptions players have about it inform their preconceptions of the experience and the choices they make. Both characters are expressive but simple enough that any player choice of caption can be reflected in the matched artwork.


Layout for hand-drawing artwork and early pencils


Inking rough pencils

The 20-Screen moment:


Project Context

The project was inspired largely by illustrated children’s stories and collaborative storytelling games, as well as the common story elements of dragons and princesses.


The Paper Bag Princess

Familiarity with children’s stories (in particular we thought of The Paper Bag Princess by Robert Munsch and Michael Martchenko when creating The Princess and the Dragon) will likely help Storybook players find meaning in it. The clarity and earnest nature of stories directed at young children provides a framework for players familiar with them to accept the necessary simplicity of the semi-randomly generated pages that Storybook players create, and will perhaps have primed them to look for depth and meaning in Storybook’s brief sentences and simple pictures.

We also considered contemporary collaborative storytelling games. Many games exist in which players each make a choice from options they control (such as Joking Hazard, Cards Against Humanity, and Dungeons & Dragons) and combine their choices to make a narrative from scratch. This is a proven format – consider that the format of Cards Against Humanity has inspired an entire genre of social card game – and it was not a large leap to apply it to the Storybook project.


Joking Hazard/Cyanide and Happiness

Storybook also draws influence, both visual and practical, from sequential art – webcomics in particular. Visually it is inspired by the clear, attractive, and comedic styles of Hyperbole and a Half  and Cyanide and Happiness. (The game Joking Hazard is the brainchild and spinoff of Cyanide and Happiness, so the connection is strong).  These are both successful webcomics with wildly different tones, whose artwork styles are simple but expressive stick-people. Scott McCloud, comics theorist, explains that the more detailed a cartoon character the less a reader will see themselves in it. (Consider the ubiquity of the smiley-face or emoji, and ask why they rarely have much detail.) Storybook follows suit and uses simple figures with large, simple features so players will see themselves in and imprint upon the characters, and thus read meaning in a story where none exists. Furthermore, as Storybook is meant to be played on a flat surface and with images arranged sequentially, it could be considered a sort of collaborative electronic comic strip – a webcomic that doesn’t exist on the web.


McCloud on simple cartoon faces

Where Storybook differs from its inspirations is that it asks players to collaborate on a single narrative without stated beginning or end, and invites players to consider what arrangement does to a story. It is not in itself a game with a goal. There is no beginning, end, or stated size for the story, nor is there a necessary number of players (though one player with one phone might not enjoy it all that much). Additionally, it is not explicitly necessary that the story be read sequentially left-to-right, as English-language printed material usually is. The nature of the medium that Storybook exists within means that players can arrange and read their stories however they wish. Storybook allows players to explore what happens to a story when it is arranged in unconventional ways. What if you played Storybook like a crossword? What if you read it top to bottom? What if you built a grid and looked for the stories that appeared inside? Storybook’s versatility and simplicity allow players to explore the nature of storytelling and discover meaning beyond the designers’ intentions.


Brosh, Allie. Hyperbole and a Half,

DenBleyker, McElfatrick, and Wilson. Cyanide & Happiness,

McCloud, Scott. Understanding Comics. Harper Perennial, 1994.

Munsch, Robert N., and Michael Martchenko. The Paper Bag Princess. Annick Press, 2018.

Temkin et al. Cards Against Humanity,

Ward, Pendleton. Adventure Time, Cartoon Network, 5 Apr. 2010.

OctoSWISH++: A Sampler for Mobile Devices



by Tyson Moll, Amreen Ashraf

A miniature sequencer with an audiovisual display for desktop and mobile devices, designed to be used in tandem with multiple screen instances.



September 24 & 25: GPS with LFSR-seeded random instruments

Going into the project, we decided we wanted to create some sort of ‘orchestra’ of sounds from multiple devices, with users able to contribute together towards a combined musical experience. Our original concept was to use GPS data as a seed to randomly generate unique musical sound samples. Users would be able to access and control an instrument based on their position inside a room. The following day, we tested this out and decided to drop the concept due to the lack of sensitivity we achieved tracking GPS indoors; our devices were only capable of discerning between distances of at least 10 metres.



September 26 & 27: Back to the drawing board.

We picked up again the next day and pondered what to pursue next. During our ideation phase we realized that both of us were interested in game creation in general, which led us to look closely at board and card games. We were particularly invested in the possibility of developing some sort of collaboration or team-building experience. One idea we considered was using the phones as playing cards, particularly fascinated by the fact that an electronic ‘card’ didn’t need to have static properties; perhaps the face value could change or its rules be tweaked. We also considered investigating murder mystery frameworks, where said cards could act as secret identities and tools to develop the game’s narrative. This prompted us to consider visiting a board game café to research simple playing card models and draw upon ideas from titles such as Fluxx, Coup and Uno. After sleeping on our ideas overnight, we became concerned that they would be too complex to instantiate and test in a group environment. As much appeal as game design inherently has, we concluded that investing too much time into inventing rules and mechanisms for such a project was not within an achievable scope.

Still intrigued by the idea of using the p5.js Sound library, we revisited our interest in developing a musical experience and pondered the possibility of creating a beat maker. Participants would be able to record and create their own compositions on the fly. This also had the added benefit of giving users more control of the kind of sounds they could use as their ‘instrument’. In a sense, it seemed that we were back where we had started!



September 28th: The first sketch of our soundboard!

Right before class started we casually talked about music and our sequencer. We felt there needed to be some element to tie the 20 individual sequencer screens together. We chatted about our previous concert experiences and overall enthusiasm for Jon Hopkins, whom Tyson intended to see live that evening. Concert musicians notably accompany their performances with some sort of striking audio-visual element; we drew inspiration from one of Jon Hopkins’ music videos, noting its effectiveness in conveying artistic intention and sonic properties. This seemed like the critical element our project was missing, and we concluded that adding such a component would not only bring the project together cohesively for the 20-screen experience, but help modularize our work so that we could code our project elements independently without worrying about merge conflicts. We reconvened after class and briefly researched how we could accomplish the audiovisual component. By using the mic functionality of mobile devices, we determined that our sequencer could simultaneously analyze sounds emanating from neighboring devices. This effectively opened the doors for us to combine the sequenced sounds into a multi-sensory experience.


Prototyping: September 29th to October 4th

Our project was created almost entirely in Javascript using the p5.js series of libraries. Amreen Ashraf contributed the audiovisual experience design and recorded the experience, documenting our process along the way whilst Tyson Moll developed the sampler board, some audio-visual adjustments, and overall feature integration.

During the development of the Audio-Visual component, Amreen referenced a tutorial by the Coding Train, whose influence remains in the final product. The basic concept was to create a distinct line waveform capturing the Audio-Out feed from the device, with an ambient, circular form to represent received microphone feedback. The ellipse’s proportions were tied to the current amplitude of the sound, while the line traced a short-term amplitude history, redrawn every frame from values collected in an array. We had difficulties tracking down a means of sourcing the audio-out data stream; both visualizations ultimately used the microphone for input. In order to present each device in a unique manner, we later integrated colour randomization so that on initialization each device would exhibit a different audio-visual colour scheme.
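A condensed sketch of that visualization, assuming p5.sound's AudioIn and Amplitude objects (colour randomization omitted), after the Coding Train amplitude tutorial:

```javascript
let mic, amplitude;
let history = [];

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start();
  amplitude = new p5.Amplitude();
  amplitude.setInput(mic);
}

// Pure helper: append a level, keeping the history bounded
// so the waveform spans one screen width.
function pushLevel(hist, level, maxLen) {
  hist.push(level);
  if (hist.length > maxLen) hist.shift();
  return hist;
}

function draw() {
  background(0);
  const level = amplitude.getLevel();
  pushLevel(history, level, width);

  // Ambient circle follows the current mic level.
  const d = map(level, 0, 1, 50, width);
  noFill();
  stroke(255);
  ellipse(width / 2, height / 2, d, d);

  // Line waveform from the stored history.
  beginShape();
  for (let i = 0; i < history.length; i++) {
    vertex(i, height / 2 - history[i] * height);
  }
  endShape();
}
```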


The sound board interface was implemented as a series of custom objects with vector graphics. Each button was based on a prototype object with built-in variables and functions covering its position relative to an ideally-proportioned canvas, whether the button was pushed, what graphics to display, and what effects had been applied to the button itself. The initial layout was prepared in Illustrator for the relatively common 16:9 ratio, at 1920 x 1080 pixels. By comparing this ratio to the window’s width and height, we were able to maintain the proportions of all buttons by calling their respective ‘display()’ command and handling the windowResized() p5.js event. That said, it would have been useful to integrate a script to detect device orientation for situations where the screen’s width was less than its height.

To integrate both ‘modes’ of functionality into the device (Sequencer Board and Audio-Visual), we used a variable to track whichever mode was enabled and separated the draw and input events for each screen with ‘if’ statements. The slider, audio-visual button, and drop-down menu were implemented with the aid of the p5 DOM library. Since our project was not particularly reliant on CSS, we manipulated styles and properties for these HTML objects directly in Javascript rather than create another file.

We used the touchStarted() event to capture touch input, which surprisingly overrode the ability to click the non-DOM button objects on desktop. The opposite effect occurred when we left mouseClicked() as the handler. On reflection, it would have been a good issue to resolve to ensure barrier-free desktop compatibility.
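The proportional scaling can be sketched as follows; the board object is hypothetical (our real buttons computed their own positions from the base layout):

```javascript
// The layout was authored at 1920 x 1080; scale it to fit
// whatever window we actually get, preserving aspect ratio.
const BASE_W = 1920;
const BASE_H = 1080;

// Pure helper: uniform scale factor for the current window.
function layoutScale(winW, winH) {
  return Math.min(winW / BASE_W, winH / BASE_H);
}

let mode = 'sequencer'; // or 'audiovisual'

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}

function draw() {
  const s = layoutScale(windowWidth, windowHeight);
  if (mode === 'sequencer') {
    push();
    scale(s); // buttons draw at their BASE_* coordinates
    // board.display(); // hypothetical button-object collection
    pop();
  } else {
    // drawAudioVisual(); // the other screen mode
  }
}
```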

Our microphone button captured sound using the p5 sound library’s AudioIn and SoundRecorder modules. The file captured by the recorder was then available for processing and playback. To sequence the playback as a stem, we created an array of buttons; since these buttons inherited properties from their prototype, we only needed loops to instantiate playback, display, and property tweaks. We added several methods of manipulating recorded sounds from the basic p5.js library (volume and pitch), as well as the capacity to modify the playback speed of sample board compositions. We also implemented manual customization of individual nodes on the sample board, with moderate success: due to some quirks in the manner in which p5.js / Javascript modifies pitch before playback, the feature worked as intended only under peculiar conditions. When playback speed was slow, a single node’s pitch adjustment would only activate with the addition of a preceding punched-in node; when playback speed was fast, pitch adjustments worked as intended.
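The recording flow, roughly, using p5.sound's recorder; the function names and the stepInterval helper are illustrative, not lifted from our code:

```javascript
let mic, recorder, sample;

function setup() {
  mic = new p5.AudioIn();
  mic.start();
  recorder = new p5.SoundRecorder();
  recorder.setInput(mic);
  sample = new p5.SoundFile(); // empty SoundFile to record into
}

// record() stops by itself after the given duration (seconds).
function startRecording(seconds) {
  recorder.record(sample, seconds);
}

// Pure helper: milliseconds between sequencer steps at a tempo.
function stepInterval(bpm, stepsPerBeat) {
  return 60000 / (bpm * stepsPerBeat);
}

// Play the sample at a given rate; rate() doubles as a crude
// pitch / playback-speed control.
function playStep(rate) {
  sample.rate(rate);
  sample.play();
}
```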


Critique and Presentation: October 5th


The overall result effectively became a collaborative stem-mixing experience. After users had the opportunity to build their stems, we collected the mobile devices in the centre of the room and dimmed the lights. The cacophony and visual experience were mesmerizing, easily likened to an abstract drone installation. We are pondering the possibility of a modified experience in a gallery context, maintaining user contribution / input and casting stems via projection screening. We received positive feedback for successfully creating a captivating and engaging experience from start to finish. The multiple stages of self-guided interaction with the device, from creating the sample to bringing it together as a performative piece, were applauded. However, it was also noted that the experience could have benefited from being less nuanced given the time constraints. We made an effort to provide a prompt explanation of the project and offer troubleshooting, but especially given the time limit and the challenges we encountered getting several devices up and running, we consider this feedback quite understandable.

During our 10-minute in-class demonstration we encountered several compatibility issues on iPhone devices with microphone functionality, sometimes caused by browser selection, security preferences and other nebulous issues. After troubleshooting, our success rate exceeded 75% of devices tested, and despite initial frustrations the combined audio-visual experience was spectacular to observe as a whole. Given additional time, our core focus would be to increase general compatibility across devices, use a more readily functional sound library, and enrich the audio-visual experience. We have also pondered transitioning the project towards a gallery experience by creating a projection displaying user contributions through server technology.



Audio Samplers

MIDI controllers and sample board devices are common tools in a DJ’s kit. Our project heavily simplifies the process to its essentials. With the inclusion of digital audio workspaces and interfaces, modern DJs have an impressive arsenal of tools to mix, mash and perform electronic compositions and recordings.


Bicycle Built for 2000 by Aaron Koblin and Daniel Massey

This project uses an audio recorder built in Processing to collect sounds from people around the world. People from 71 countries participated, adding their voices via a web browser; these were then synthesized into the song “Daisy Bell”, which was written in 1892 and was the first song performed with computer speech synthesis (popularized in 1962). Although our project is significantly more open-ended, the synthesis of human-made sounds into music felt particularly inspiring.


Max Cooper Concert (Amreen) / Daniel Avery & Jon Hopkins Concert (Tyson)

Attending a Max Cooper concert is like diving head-first into an explosion of sound and visuals. For his 2017 album release Emergence, Cooper collaborated across disciplines, including with coders and data visualizers, to create a unique audiovisual album that delves deep into the synthesis of sound and emotion with lush visual accompaniment.

Similarly, Daniel Avery accompanied his slow-building, rhythmic DJ set with detailed abstract visuals of mountainsides and foreign atmospheres pulsating to the steady beat from his kit. With so little performance involved in the music-making (compared to more instrumental genres), the visuals provided a very rich experience and insight into the moods the artist wanted to convey to his audience. Jon Hopkins’ set followed closely, leading with tracks from his acclaimed Singularity. In particular, the track “Everything Connected” and its accompanying visuals held our captivation: as the track slowly built momentum, a purple heart on the screen quivered and moved to the sound.

We hoped to bring this sort of multi-sensory experience to our project on a more intimate scale, allowing participants to, in a sense, control the heartbeat to our collective dissonance.

Reference :
Shiffman, Daniel. 17.9: Sound Visualization: Graphing Amplitude – P5.js Sound Tutorial

