Author Archive

River Styx (2018)

River Styx

by Shikhar Juayl, Tyson Moll, and Georgina Yeboah

Figure 1. River Styx being presented at the “Current and Flow” end-of-semester grad show at OCAD U’s Grad Gallery on December 7th, 2018.

A virtual kayaking experience in the mythical world of the River Styx. Navigate the rivers of fire, hate, and forgetfulness using our handmade kayak controller, steering through rubble, rock, and ruins. Discover jovial spirits, ancient gods, and the arms of drowning souls across the waters between the worlds of the living and the dead.

The project environment was built in Unity using free 3D assets, along with characters created from volumetric footage. We used the Xbox One Kinect and a piece of software called Depthkit in the game:play lab at OCAD U to produce the mysterious animated figures in the project. The vessel is operated with Tyson’s Arduino-based kayak controller, newly revised with 3D-printed parts and more comfortable controls.


*The scripts and 3D objects used in the project are available via GitHub, but due to the size of several assets, the Unity project is not included.

Figure 4. Sketched circuit diagram for the paddle controller.


Monday, Nov 26

When we first convened as a group, we discussed the possibility of taking the existing experience that Tyson created for his third Creation & Computation experiment and porting it over to Unity to take advantage of the engine’s capacity for cutting-edge experiences. As Unity is more widely used in the game development industry as well as for the purpose of simulations, we thought it would make for an excellent opportunity to explore and develop for new technologies that we had access to at OCAD U such as Virtual Reality and Volumetric Video capture. We also thought it would be exciting to be able to use Arduino-based controllers in a game-development project; a cursory web search revealed to us that Uniduino, a Unity plugin, was made for this purpose.

We also wanted to explore the idea of incorporating a narrative element to the environment as well as consider the potential of re-adapting the control concept of the paddle for a brand new experience. River Styx was the first thing to come to mind, which married the water-sport concept with a mythological theme that could be flexibly adjusted to our needs. Georgina had also worked on a paper airplane flight simulator for her third C&C experiment which inspired us to look at alternative avenues for creating and exploring a virtual space, including gliding. We agreed to reconvene after exploring these ideas in sketches and research.

Tuesday, Nov 27

We came up with several exciting ideas for alternative methods of controlling our ‘craft’ but eventually came full circle and settled on improving the existing paddle controller. The glider, while fun in concept, left several questions about how to comfortably hold and control the device without strain. Our first ideas imagined the device with a sail. We then considered abstracting the concept of the controller in order to remove extraneous hardware elements: VR controllers, for example, look very different from the objects they represent in VR, which makes them adaptable to various experiences and easier to handle. As we continued to explore these ideas, it occurred to us that the most effective use of our time would be to improve an already tried-and-true device and save ourselves the two or three days it would take to properly develop an alternative. Having further researched the River Styx lore and mythos, we were excited to explore that concept with the paddle controller and resolved to approach our project accordingly.


Wednesday, Nov 28

We visited the game:play lab at 230 Richmond Street for guidance in creating volumetric videos with the Kinect. Second-year Digital Futures student Max Lander was kind enough to guide us and give us pointers about using volumetric videos in Unity. Later that day, we wrote a serial-port connection script to start integrating Tyson’s old paddle code into Unity.

Once that was completed, we started looking into adding bodies of water to our environment using mp4 videos. It turned out the quality was not what we were going for, so we switched to the water assets from Unity’s standard packages and began building our scenes.

In terms of the paddle, with the aid of a caliper we dimensioned the elements of the original paddle controller and remodelled them in Rhinoceros for 3D printing. Although the prospect of using an authentic paddle appealed to us, we chose to keep the existing PVC piping and wood dowel design in order to reduce the time spent searching for just the right paddle and redesigning the attached elements. To improve communication between the ultrasonic sensor and the controller, the splash guards from the original kayak paddle controller were properly affixed to the dowel, as was the paddle. The ultrasonic sensor essentially uses sonar to determine distance, so it was important that the splash guards sat perpendicular to its signal so that the sound was properly reflected back. Likewise, we created a more permanent connection between the paddle headboards and the dowel, and a neatly enclosed casing for the Arduino and sensors.
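As a rough sketch of the timing math behind the sensor (written in JavaScript for consistency with the rest of this post; the actual firmware ran on the Arduino, and the exact sensor module is an assumption here, HC-SR04-style):

```javascript
// Convert an ultrasonic sensor's echo time into a distance.
// The sensor emits a ping and times the echo; sound travels roughly
// 0.0343 cm per microsecond, and the pulse covers the distance twice
// (out to the splash guard and back).
function echoToDistanceCm(echoMicros) {
  const SPEED_OF_SOUND_CM_PER_US = 0.0343;
  return (echoMicros * SPEED_OF_SOUND_CM_PER_US) / 2;
}

// A splash guard about 20 cm away reflects the ping after ~1166 µs.
const d = echoToDistanceCm(1166); // ≈ 20 cm
```

This is also why the perpendicular splash guards mattered: an angled surface scatters the ping, and the echo never makes it back for the timing above to work with.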

The process of printing the parts took about five days, as not all printers were accessible and several parts needed to be redesigned to fit on the available printer beds and material in the Makerlab. We also found that the roll of material we had purchased from Creatron consistently caused printing errors compared to others, which cost us significant time troubleshooting and adjusting the printers.


Thursday, Nov 29

This was our first session finding Unity assets and integrating them into the Unity editor. We used a couple of references to help shape and build the worlds we wanted to create, and managed to find a few assets we could work with from the start, such as our boat. As we added more assets to our environment we noticed that some were heavier than others and caused significant lag when we ran the game, so we decided to use lower-poly 3D models. Once we were satisfied with an environment, we added a first-person controller from Unity’s standard assets to the boat and began to navigate the world we had created. We wanted to experience what it would be like to explore these rivers from this view, and to later replace the first-person controller with our custom Arduino paddle.

Figure x. Shikhar working on the River Styx environment.

Friday, Nov 30

Hoping it would simplify our lives, we purchased Uniduino, a premade plugin from the Unity Asset Store. This turned out not to be the case: its interface and documentation seemed to imply that we would need to program our Arduino through Unity rather than work with our pre-existing code developed in the Arduino IDE and its serial output. We ended up resolving this with the help of a tutorial by Alan Zucconi: we transmitted a string of comma-separated numbers for the variables that operate the paddle and split them with string-handling functions in a C# script.
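The splitting step can be sketched as follows. The project’s version was a C# script inside Unity; this JavaScript sketch uses made-up field names (pitch, roll, paddleDistance) purely to show the idea of one comma-separated line per update:

```javascript
// Parse one line of serial output, e.g. "12.5,-3.2,140", into named
// numbers. Field names here are illustrative, not the project's actual
// variable names.
function parseSerialLine(line) {
  const [pitch, roll, paddleDistance] = line.trim().split(",").map(Number);
  return { pitch, roll, paddleDistance };
}

const reading = parseSerialLine("12.5,-3.2,140");
// reading.pitch === 12.5, reading.roll === -3.2, reading.paddleDistance === 140
```

The C# equivalent uses `string.Split(',')` and `float.Parse` on each piece, which is essentially what the Zucconi tutorial walks through.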

After some initial troubleshooting, we managed to get the gyroscope and ultrasonic sensor integrated with Unity by applying rotation and movement to an onscreen cube. The only caveat was that there was a perceptible, growing lag, which we decided to resolve on a later date.


Saturday, Dec 1st

As our environment for River Styx grew, we continued to discuss adding the other rivers of the Greek mythological underworld: the River of Pain, the River of Forgetfulness, the River of Fire, and the River of Wailing. We then started to brainstorm a map layout for these multiple rivers and what reflective element each river should have. Our discussions expanded into game design versus an exploratory experience: we considered how we could implement certain mechanics to make it more of a game and less exploratory. However, foreseeing how little time we had to develop and finalize our project, we decided to keep it an exploratory experience rather than overcomplicate things for ourselves.


Monday, Dec 3rd

Continuing to develop the other scenes, we came across assets to help distinguish our otherwise similar river environments: lava assets for the River of Fire, and fog for the River of Forgetfulness. With all these assets and the possible addition of volumetric videos, we decided we needed a powerful computer to run our Unity project and reduce lag while running and working on it. We considered asking our professors for one, but the only machine capable of handling our needs was in the DF studio, where we could not upload additional software or install drivers to resolve serial-port issues without administrative permissions. To avoid these bottlenecks we decided to use Tyson’s personal PC tower to continue work on the project, and later for the installation at our upcoming grad show.

We also converted the kayak controller code from JavaScript to C# for use in the Unity game engine, initially in an uncalibrated state. The first movement we saw in Unity’s play window was noticeably slow, but it indicated that our translation of the code worked. For convenience, the variables we would need to calibrate the device were declared ‘public’ in our code. This allowed us to edit them manually from the Inspector window in Unity without running the risk of adjusting a ‘private’ variable in error.

Tuesday, Dec 4th

We reconvened in the game:play lab to capture volumetric videos with the Xbox One Kinect and Depthkit and import them into Unity. Depthkit comes with several features for manipulating captured data from the Kinect camera, including a slider for cutting out objects farther than a particular distance, as well as undesirable artifacts. To use the captures as looping animations, we tried to keep our recordings in sync with a ‘neutral’ pose determined at the start, to avoid having the footage jump significantly between the first and last frames. Given that the Kinect and Depthkit render the captured information as a video file, we also needed to be mindful of recording times and the number of objects we wanted to include, in order to reduce the performance impact.

Some of the animations we captured included hands, exaggerated faces, ‘statues’ of god-like figures, and silly dances. We frequently took advantage of the clipping area to isolate particular limbs in frame. In one instance, we created a four-armed creature using two subjects: one in frame, and the other hidden behind them in cropped space, contributing a second set of arms.


Wednesday, Dec 5th

At this stage we had three official scenes created, and the paddle’s parts were ready to be assembled after going through the laser cutter. We began writing teleport code that would let the user teleport from one cave entrance to the next in each scene, but decided against it: we wanted the user to explore without feeling goal-driven to get from one place to another. Instead, we decided to act as facilitators and transport them whenever they wanted, adding a key press that teleports the user from one scene to the next.
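The cycling logic behind that key press amounts to a modulo step. A minimal sketch (the scene names here are hypothetical stand-ins, not the project’s actual scene names):

```javascript
// Cycle through the scene list, wrapping back to the first scene
// after the last one. In Unity this index would feed a scene-load call
// inside a key-press handler.
const scenes = ["RiverOfHate", "RiverOfFire", "RiverOfForgetfulness"];

function nextSceneIndex(current, count) {
  return (current + 1) % count;
}

let current = 0;
current = nextSceneIndex(current, scenes.length); // now pointing at scenes[1]
```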

We had plenty of fun using the Zero Days Look for our Depthkit captures, which was created for the VR film of the same name. It allowed us to move beyond the default photographic appearance and incorporate colour, lines, points, and shapes into the volumetric renditions. The more we worked with it, the more familiar we became with its interface and with how our adjustments would look in-game, as not all features of the plugin were rendered directly in Unity’s scene view during editing.



Thursday Dec 6th – Friday, Dec 7th

Prior to showcasing our project, we moved all of our Unity assets and code to Tyson’s personal PC tower and continued our work from there. We integrated the volumetric videos into Unity and play-tested the environment to get a feel for how comfortable it was to navigate with the paddle. We felt the kayak’s motion was a bit slow for public demonstration, so we tweaked the speed increment, friction, and maximum speed until it felt fluid.

Reception for the project was positive overall. Interestingly, children were able to pick up the controls with relative ease: since the area the ultrasonic sensor targeted sat above their hands, they could grip the paddle wherever they desired. This could also be attributed to a lack of preconceptions about how the device works; one of the most experienced paddlers seemed to have the most difficulty operating it.



Project Context

TASC: Combining Virtual Reality with Tangible and Embodied Interactions to Support Spatial Cognition by Jack Shen-Kuen Chang, Georgina Yeboah, Alison Doucette, Paul Clifton, Michael Nitsche, Timothy Welsh, and Ali Mazalek.


Tangibles for Augmenting Spatial Cognition (T.A.S.C.) is an ongoing grant project conducted at the Synaesthetic Media Lab in Toronto, Ontario. Led by Dr. Ali Mazalek, the team investigates spatial abilities such as perspective-taking and wayfinding, and creates tangible counterparts to complement the spatial ability being assessed in VR environments. The aim is to explore the effects that tangibles within VR spaces have on participants’ spatial cognitive abilities through physical practice, going beyond 2D spatial testing.

Our project relates to the idea of a customizable controller whose purpose complements the situation it is involved in. For example, in the T.A.S.C. project the user controls tangible blocks to solve a multi-perspective-taking puzzle. In our River Styx environment, the paddle complements its surroundings while also increasing embodiment in the virtual space. We also designed the paddle to behave like an actual paddle: if it dips low enough to either the left or right, the kayak rotates in that direction while also moving forward.
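That steering rule can be sketched as a small function. The threshold and output values here are illustrative, not the project’s calibrated numbers:

```javascript
// If the paddle rolls past a threshold to either side, the kayak turns
// that way while still moving forward; otherwise it just moves forward.
function paddleSteering(rollDegrees, threshold = 25) {
  let turn = 0;
  if (rollDegrees > threshold) turn = 1;        // dipped right: turn right
  else if (rollDegrees < -threshold) turn = -1; // dipped left: turn left
  return { forward: 1, turn };
}

paddleSteering(40);  // turning right while moving forward
paddleSteering(-40); // turning left while moving forward
```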

T.A.S.C. and River Styx both explore the physicality and mechanics of a tool and integrate its use into the environment it is used in. We hope to later integrate VR into River Styx to deepen the immersive experience of paddling through such environments.



The Night Journey by  Bill Viola and the USC Game Innovation Lab in Los Angeles.

The Night Journey is an experimental art game that uses both game and video techniques to tell the story of an individual’s journey toward enlightenment. With no clear paths or objectives, and underpinned by historical philosophical writings, the game focuses its core narrative on creating a personal, sublime experience for the individual participant. Actions the player takes are reflected in its world.

The techniques incorporated from video footage and the narrative premise of the game inspired how we tackled the scenic objectives of our project and interpreted the paths we wanted players to take in River Styx.



ABSTRACT (2018) by Georgina Yeboah

ABSTRACT (2018): An Interactive Digital Painting Experience by Georgina Yeboah

(Figures 1-3. New Media Artist Georgina Yeboah’s silhouette immersed in the colours of ABSTRACT. Georgina Yeboah. (2018). Showcased at OCADU’s Experimental Media Gallery.)

ABSTRACT’s Input Canvas:

ABSTRACT’s Online Canvas:

GitHub Link:

Project Description: 

ABSTRACT (2018) is an interactive digital painting collective that collects simple, ordinary strokes from users’ mobile devices and, in real time, translates them into lively, vibrant strokes projected on a wall. The installation was projected onto the wall of the Experimental Media room at OCAD U on November 23rd, 2018. ABSTRACT’s public canvas is also accessible online, so participants and viewers alike can engage with and be immersed in the wonders of ABSTRACT anytime, anywhere.

The idea of ABSTRACT was to express and celebrate the importance of user presence and engagement in a public space, starting from a private or enclosed medium such as a mobile device. Since people tend to be encased in the digital world of their phones, at times closing themselves off in their own bubbles, it felt important to acknowledge how significant their presence is outside that space and what users have to offer the world simply by existing. The users make ABSTRACT exist.

Here’s the latest documented video of ABSTRACT below:


Process Journal:

Nov 15th, 2018: Brainstorming  Process

(Figures 4-6. Initial stages of brainstorming on Nov 15th.)

Ever since experiment 1, I’ve wanted to do something involving strokes. I was also interested in creating a digital fingerprint that could be left behind by anyone who interacted with my piece, and I kept envisioning something abstract yet anonymous for a user’s online input. Trying out different ways of picturing what I wanted, I started thinking about translating strokes into different ones as an output, at first just between canvases on my laptop. I wanted to go further by outputting more complex brush strokes from the simple, ordinary ones I drew on my phone: a simple stroke could output a squiggly one in return, or a drawn straight line could appear diagonally on screen. I kept playing with this idea until I decided to manipulate just the colour of the strokes’ output for the time being.

Nov 19th 2018: Playing with strokes in P5.JS and PubNub

Using PubNub’s service to relay p5.js messages, I started to play with the idea of colours and strokes. I experimented with a couple of outputs, and even thought about projecting the same traced strokes on the digital canvas with other characteristics, but later felt the traced strokes would hinder the ambiguity I was aiming for. I also noticed that I was outputting the same randomization of colours and strokes on both mobile and desktop, which was not what I wanted.

Nov 21st, 2018: Understanding Publishing and Subscribing with PubNub


Figure 9. Kate Hartman’s diagram on Publishing and Subscribing with PubNub.

After a discussion with my professors, I realized that all I needed to do to distinguish the characteristics of the strokes I input from those I later output was to create another JavaScript file that would publish only the variables I sent from my ellipse calls:

Figure 10. Drawn primitive shapes and their incoming variables, delivered from the other JavaScript file under the function touchMoved();
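The publish side can be sketched roughly as below, assuming the PubNub JavaScript SDK; the channel name and message fields are hypothetical stand-ins for the variables described above, and the subscribing sketch redraws each message as a vibrant stroke:

```javascript
// Bundle the stroke variables into one message object. Keeping this as
// a plain function makes the payload shape explicit and easy to test.
function buildStrokeMessage(x, y, size) {
  return { x: x, y: y, size: size };
}

// In the input sketch's touchMoved(), the message would be published:
//
//   pubnub.publish({
//     channel: "abstract_strokes",            // hypothetical channel name
//     message: buildStrokeMessage(mouseX, mouseY, 10)
//   });
//
// The output sketch subscribes to the same channel and draws its own
// vibrant ellipse at (message.x, message.y).
```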

Nov 22nd and 23rd 2018: Final Touches and Critique Day

On the eve of the critique I managed to create two distinguishable strokes: ordinary simple strokes on one HTML page with its own JS file, and vibrant stroke outputs on the other. The connection was successful. I decided to add triangles to the vibrant strokes and play with the opacity to give the brush stroke more character. I later tested it with another user, and we both enjoyed how fun and fluid the interaction was.

Figure 11. User testing with another participant.

Figure 12. Simple white strokes creating vibrant strokes on the digital canvas of ABSTRACT.

Here are some stills with their related strokes:

Figure 15. Output of vibrant strokes from multiple users’ input.

Overall, the critique was an overwhelming success. When the installation was projected in the public space, users engaged and interacted with the strokes they displayed on the wall. Some got up and even took pictures as strokes danced around them and their silhouettes. It was a true celebration of user presence and engagement.

Figure 16. A participant getting a picture taken in front of ABSTRACT’s digital canvas.


Figure 17. Experimental Media room where ABSTRACT was installed.


Figure 18. Georgina Yeboah standing in front of her installation ABSTRACT in the Experimental Media room at OCADU.

Related References

One of my biggest inspirations for interactive installations that invite user presence and engagement, like ABSTRACT, has always been the work of Camille Utterback. Her commissioned piece Abundance (2007) tracked the movements and interactions of passersby in San Jose’s plaza, creating projections of colours and traces across the building. Much of Utterback’s work uses spatial movement and user presence to express a reflection of the life interacting and existing in the work’s space.


Multiuser drawing. (n.d.). Retrieved from

Kuiphoff, J. (n.d.). Pointillism with PubNub. Retrieved on November 21, 2018 from

Npucket and katehartman. (2018, November 26). CC18 / experiment 4 / p5 / pubnub / 05_commonCanvas_dots/. GitHub. Retrieved from

Utterback, C. (2007). Abundance. Retrieved from

PittsburghCorning. (2011, April 8). Camille Utterback – Abundance. Retrieved from

First Flight (An Interactive Paper Airplane Experience)

Experiment 3:

By: Georgina Yeboah

Here’s the Github link:



Figure 1.”First Flight. (An Interactive Paper Airplane Experience, 2018)” Photo taken at OCADU Grad Gallery.

First Flight (FF) (2018) is an interactive tangible experience in which users tilt a physical paper airplane to control the orientation of an on-screen sky, making it appear as though they are flying, while attempting to fly through as many virtual hoops as they can.

Figure 2. First Flight demo at OCAD U Grad Gallery, 2018.

Figure 3. First Flight demo at OCAD U Grad Gallery, 2018.

Video Link:

The Tech:

The installation includes:

  • x1 Arduino Micro
  • x1 BNO055 orientation sensor
  • x1 Breadboard
  • x1 Laptop
  • A couple of wires
  • Female headers
  • 5 long wires (going from the breadboard to the BNO055)
  • A paper airplane

Process Journal:

Thursday Nov 1st, 2018: Brainstorming to a settled idea.

Concept: Exploring Embodiment with Tangibles Using a Large Monitor or Screen. 

I thought about a variety of ideas leading up to the airplane interaction:

  1. Using a physical umbrella as an on/off switch to change the state of a projected animation: if the umbrella was closed it would be sunny, but if it were open the projection would show an animation of rain.
  2. Picking up objects to detect a change in distance (possibly using an ultrasonic sensor). Different animations could be triggered using objects; for example, picking up sunglasses from a platform would trigger a summer beach-scene projection.
  3. Using wind/breath as an input to trigger movement of virtual objects, though I was unsure of where or how to get a sensor for it.
  4. Using a potentiometer to create a clock that triggers certain animations to represent the time of day. A physical ferris wheel controlling a virtual one and causing some sort of animation was also among my earliest ideas.
Figure 4. First initial ideas of embodiment.


Figure 5. Considering virtual counterparts of the airplane or not.

Monday Nov 5th, 2018:

Explored and played with shapes in 3D space using the WEBGL mode in p5.js. I learned a lot about WEBGL and the properties of its Z axis.

Figure 6. Screenshot of Airplane.js code.

I looked at the camera properties and reviewed the syntax from the “Processing P3D” document by Daniel Shiffman. I planned to set the CSS background gradient, and later to attach the orientation sensor so that it would control the camera instead of my mouse.

Figure 7. Camera syntax in WEBGL, controlling the movement of the camera with mouseX and mouseY.
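A minimal sketch of that mouse-driven camera, with the orbit math pulled into a plain helper so it is visible; the radius and the exact mouse-to-angle mapping are illustrative choices, not necessarily the project’s values:

```javascript
// mouseX orbits the camera eye around the scene centre; mouseY raises
// and lowers it. Returns the eye position for p5's camera() call.
function orbitEye(mouseX, mouseY, width, height, radius) {
  const angle = (mouseX / width) * Math.PI * 2;
  return {
    x: radius * Math.sin(angle),
    y: (mouseY / height - 0.5) * radius,
    z: radius * Math.cos(angle),
  };
}

// In the p5.js sketch's draw():
//   const eye = orbitEye(mouseX, mouseY, width, height, 400);
//   camera(eye.x, eye.y, eye.z,  0, 0, 0,  0, 1, 0);
```

Swapping the mouse values for the orientation sensor’s readings is then a matter of feeding its angles into the same helper.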


Figure 8. First Flight’s interface using WEBGL.

Tuesday Nov 6th, 2018.

I had planned to add cloud textures to the sky but never found the time to do so. I did manage to add my gradient background using CSS, though.

I also planned to add obstacles to make collecting hoops challenging, but didn’t include them due to time constraints and prioritization; I thought they were best suited to future work.

Thursday, Nov 8th, 2018.

On the eve of the critique, I successfully soldered long wires to the female header that would attach to the BNO055 orientation sensor. The sensor sat nicely on top of the paper airplane’s nose, covered with extra paper. On the other end, the wires connected to a breadboard where the Arduino Micro sat.

Figure 9. BNO055 orientation sensor sitting nicely on top of the paper airplane.

References and Inspirations:

I wanted to play with the idea of embodiment. Since I’ve worked with VR systems in conjunction with tangible objects for a while, I wanted to revisit those kinds of design ideas, but with a screen instead of immersive VR. A monitor big enough to carry the engagement seemed a simple enough way to explore this idea of play with a paper airplane.

I looked online for inspiring graphics to help me start building my world. I wanted this to be a form of play so I wanted the world I’d fly through to be as playful and dynamically engaging as possible while flying.


Active Theory created Paper Planes, a web application for the Google I/O event back in 2016 (Active Theory). It was an interactive web-based activity where guests at the event could send and receive digital airplanes from their phones by gesturing a throw toward a larger monitor. Digital paper airplanes could be thrown and received across 150 countries (Active Theory). The gesture of creating and throwing in order to engage with a larger whole through a monitor inspired my project’s playful gestures of play and interactivity.

Figure 10. Active Theory. (2016). Paper Planes’ online web-based installation.

The CodePad:

This website features a lot of programmed graphics and interactive web elements. I happened to come across this WEBGL page by chance and was inspired by the shapes and gradients of the world it created.

(Figure 11. Meyer, Chris. (n.d.) “WebGL Gradient”. Retrieved from


P5.Js Reference with WEBGL:

I found that the torus (the donut) was part of WEBGL’s geometry primitives, and along with the cone I thought it would be an interesting shape to play and style with. The torus would wind up becoming my array of hoops for the airplane to fly through.



Figure 12. p5.js. (n.d.) “Geometries”. Retrieved from
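One way the hoop array might be laid out, sketched with a plain helper that spaces the tori along the z-axis; the count and spacing are made-up values, not taken from the project:

```javascript
// Generate hoop positions spaced along the negative z-axis, so the
// airplane flies "into" the screen through them.
function makeHoops(count, spacing) {
  const hoops = [];
  for (let i = 0; i < count; i++) {
    hoops.push({ x: 0, y: 0, z: -spacing * (i + 1) });
  }
  return hoops;
}

// In the p5.js sketch's draw(), each hoop becomes a torus:
//   for (const h of makeHoops(5, 300)) {
//     push();
//     translate(h.x, h.y, h.z);
//     torus(60, 10); // ring radius 60, tube radius 10
//     pop();
//   }
```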

Future work:

Currently, the project has many iterations and features I would like to add or expand on. I would like to finalize the environment and create a scoring system so that the user collects points when they pass through a hoop: the more hoops you fly through, the more points you get. Changing the environment’s gradient background after a period of time is another feature I would like to work on. I believe there is a lot of potential for First Flight to eventually become a fully playful and satisfying experience with a paper airplane.


3D Models. CGTrader. (2011-2018). Similar free VR / AR / low-poly 3D models. Retrieved from

Active Theory. (n.d.). Paper Planes. Retrieved from

Dunn, James. (2018). Getting Started with WebGL in P5. Retrieved on Nov 12th, 2018 from

McCarthy, Lauren. (2018). Geometries. p5.js Examples. Retrieved from

Meyer, Chris. (2018). WebGL Gradient. Codepad. Retrieved from

Paper Planes. (n.d.). Retrieved from

Shiffman, Daniel. (n.d.). P3D. Retrieved from

W3Schools. (1999-2018). CSS Gradients. Retrieved from


Exquisite Sketches ✿

Experiment 1:

Exquisite Sketches

Carisa P. Antariksa, Georgina Yeboah

Exquisite Sketches is a collaborative piece that involves digital sketching and assembly. Users are tasked with drawing a specific body part prompted on their canvas using their smartphones, and then assembling their sketch with others’. The concept takes something that can be intricately created with a variety of materials and turns it into something as simple as drawing with your finger in your smartphone browser, for a fun activity.

Exquisite Sketches was created using p5.js and HTML pages. Each canvas prompt had either a normal pen brush or its own unique brush stroke.
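A sketch of how such brushes might look in p5.js: the “unique” brush here varies stroke weight with touch speed, which is an invented example characteristic, not necessarily one of the project’s actual brushes:

```javascript
// Map drawing speed to a stroke weight, clamped to a maximum, so fast
// swipes produce thicker marks than slow, careful ones.
function weightFor(speed, base = 2, max = 12) {
  return Math.min(base + speed * 0.5, max);
}

// In the p5.js sketch, each canvas's brush lives in touchMoved():
//   function touchMoved() {
//     const speed = dist(pmouseX, pmouseY, mouseX, mouseY);
//     strokeWeight(weightFor(speed));   // normal pen: strokeWeight(2)
//     line(pmouseX, pmouseY, mouseX, mouseY);
//     return false; // prevent the page from scrolling on mobile
//   }
```

Returning `false` from `touchMoved()` is what keeps the browser from treating the drawing gesture as a scroll on smartphones.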

Github Link  | Webspace Master Page

Canvas Prompts and their Brushes




Normal Pen Brush per Canvas:




Project Context

This project was inspired by the “Exquisite Corpse”, a “collaborative drawing approach first used by surrealist artists to create bizarre and intuitive drawings” (Tate), a technique invented in the 1920s. Nowadays it can be adapted to many uses, whether learning activities for children or a recreational game. The goal of the activity is to give participants a “surprise reveal” at the end, when all the unique parts are assembled.

This art can be implemented through many different mediums. Some projects apply the technique to relief prints, using either linoleum tile pieces or woodblocks. In this implementation, each artwork is cut into pieces and then combined with parts from other artworks. The pieces are often aligned seamlessly to give the impression that they connect to form a whole new corpse.

A 3 part example:



A 4 part example:


This technique has also been adapted into many casual games. Body parts are often swapped for combinations of phrases, or a mixture of the two, and archives can be kept of the resulting variations.

Process Journal

Date: Monday, Sept 24th 2018

In our first meeting after the groups were formed, we listed possible ideas we could implement on 20 screens. Our thought process began with two directions: an interactive activity or an installation that people can contribute to. Our list of ideas is as follows:

  1. Music (using p5.sound library) – Tapping different screens to create a tune, encourage movement
  2. Weather – Creating “electronic rain.” Users can interact with the screen, the elements are moveable as you touch them
  3. Sending information across computers – Drawing on one computer and having that show up on the other? Users collaborating to complete an abstract drawing. Or a bouncing ellipse? The idea is to keep the ball “floating” as it moves forward.
  4. Creating a chain reaction with elements on the screen – Inspired by dominoes.

From there, we liked the idea of drawing, so we developed an initial idea for an interactive experience that involved sketching with different brush strokes. The first option was to have a user control one mouse on one device (laptop or smartphone) that could control various brush strokes on the other devices, each starting from a different position on the screen. The screens would be styled in a grid format. The user would thus be playing and drawing with multiple brush strokes simultaneously through one device.

The second option was to have all the computers communicate with each other and allow a collaboration to happen: if one user drew on their screen, it would appear on the other users’ screens, and so on.

Date: Thursday Sept 27th 2018 – OSCP5, Sockets and Nodes

We wanted to create an interactive installation with 20 screens. Georgina proposed we use a library she had used before to communicate between sketches on different devices. She started looking into how to get OSCP5.js working but ran into socket errors: variables weren’t being reached because the code was having trouble locating the file.

We found YouTube tutorials by Daniel Shiffman (‘Coding Rainbow’) on how sockets work in P5.js, and watched them to see if they could help us achieve what we wanted. They introduce nodes and sockets and show how to send and receive messages across computers through a server. Daniel Shiffman’s WebSockets and P5.js tutorial videos represented this well:
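As a rough illustration of the pattern those tutorials cover (not our final code), the client side of a shared drawing canvas might look like the sketch below. The server address and the 'mouse' event name are assumptions, and a relay server (e.g. node with socket.io rebroadcasting each message) is presumed to exist.

```javascript
// Client side of a shared drawing canvas in the spirit of the Coding
// Train tutorials. Assumes socket.io is loaded via a <script> tag and a
// node server at the (hypothetical) address below rebroadcasts every
// 'mouse' message to the other connected clients.
let socket;

function setup() {
  createCanvas(400, 400);
  background(255);
  socket = io.connect('http://localhost:3000'); // assumed server address
  socket.on('mouse', newDrawing);               // strokes from others
}

// Package a point for transmission.
function makeMessage(x, y) {
  return { x: x, y: y };
}

function mouseDragged() {
  socket.emit('mouse', makeMessage(mouseX, mouseY)); // send to server
  noStroke();
  fill(0);
  ellipse(mouseX, mouseY, 8, 8); // draw our own stroke locally
}

// Render a point received from another client.
function newDrawing(data) {
  noStroke();
  fill(180);
  ellipse(data.x, data.y, 8, 8);
}
```

Each client draws its own strokes in black and everyone else’s in grey, so you can see who contributed what on a shared canvas.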


Date: Friday, Sept 28 2018

At this point, we were still trying our best to push for using node.js in our project. We used half the class to make the web socket work and tried to replicate it on Carisa’s laptop. After some failures, we went to Nick for advice, and he told us not to pursue it and to focus our efforts on something more feasible. Despite the shift, we still felt that creating a project that involved drawing was something we wanted to achieve. Alongside these complications, Georgina proposed a brush she had created, and Carisa tried to translate its code from Processing to the P5 library.
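For a sense of what that translation involves, here is a minimal, hypothetical example (not Georgina’s actual brush): Processing’s typed Java syntax becomes plain JavaScript functions, while most drawing calls keep their names.

```javascript
// Processing (Java) version of a trivial sketch:
//
//   void setup() {
//     size(400, 400);
//   }
//   void draw() {
//     ellipse(mouseX, mouseY, 10, 10);
//   }
//
// p5.js equivalent: type declarations disappear, size() becomes
// createCanvas(), and the drawing calls stay the same.
function setup() {
  createCanvas(400, 400);
}

function draw() {
  ellipse(mouseX, mouseY, 10, 10);
}
```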

Date: Monday Oct 1st 2018

The presentation of all the Case Studies we contributed to was very thorough and had great potential for expanding our ideas. Georgina pitched the idea of using unique brushes after her fun stroke sketches, made of triangles and ellipses, were working in P5. Afterwards, she went back to figure out why the node.js process hadn’t worked. It turned out to be a hyperlink issue, so she stated the entire directory path instead.
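We can sketch what a “funky” brush built from triangles and ellipses might look like; this is a guess in the spirit described, and every size, offset and colour here is invented.

```javascript
// A guess at a "funky" brush made of triangles and ellipses; all
// sizes, offsets and colours are illustrative, not the real brush.
function funkyBrush(x, y) {
  const p = trianglePoints(x + random(-8, 8), y + random(-8, 8), random(3, 9));
  noStroke();
  fill(random(255), random(255), random(255), 120);
  triangle(p[0][0], p[0][1], p[1][0], p[1][1], p[2][0], p[2][1]);
  ellipse(x + random(-8, 8), y + random(-8, 8), random(2, 10), random(2, 10));
}

// Vertices of an equilateral triangle centred on (x, y) with radius r.
function trianglePoints(x, y, r) {
  const pts = [];
  for (let i = 0; i < 3; i++) {
    const a = -Math.PI / 2 + i * (2 * Math.PI / 3);
    pts.push([x + r * Math.cos(a), y + r * Math.sin(a)]);
  }
  return pts;
}

// Called for every finger or mouse drag in p5; returning false
// prevents the browser's default touch behaviour (scrolling).
function touchMoved() {
  funkyBrush(mouseX, mouseY);
  return false;
}
```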


Now that the Javascript was being reached and the server was connected to the code we could do some really cool stuff with this interactive brush stroke I made!


We both experimented with different starting points for the brush to see how they could create an interesting composition on other screens, provided the connection worked.


The next step is sending this drawing function across other devices, with different brush strokes running on each. Hopefully it doesn’t take as long as simply connecting the JavaScript files did. (The transition from Processing to P5 has been really tricky, but we’re getting there, slowly but surely…)

Date: Tuesday, Oct 2nd, 2018 – Wednesday, Oct 3rd, 2018

Unfortunately, further tweaking did not solve any of the major issues we had before, and we were right back where we started. The HTML file could be opened locally, but the JavaScript files could not: the library was still not being found, and the project’s JavaScript files could not be opened locally due to security permissions in Google Chrome and other browsers. We decided to abandon node.js altogether and rethink our approach to the sketch project.

Soon we had produced a back-up plan that involved a “complete the drawing” concept, pulling from a case study on this forum: Complete the Drawing. Collaboration can be formed by drawing from the prompt, which can result in interesting possibilities.

To refine our new plan, ideas were laid out on a mind-map.


We then narrowed our ideas down to a viable product we could create for Friday. With our brushes and the code we had written, we found that applying the exquisite corpse idea would work well with the project brief. To fulfill the brief’s requirements, we separated the “corpse” into four parts: the Head, Chest, Torso and Legs. This led us to write the code for the body parts in separate HTML files for smartphones, so that a group of four phones could create one abstract body. People from other groups can then mix and match their prompt drawings for some exciting mashups!

We brainstormed ways to make the prompts clearer and wanted to see if an existing drawn prompt on the canvas would allow for a more effective experience.


We then realized that having a drawn prompt might not be necessary; it might steer participants in a direction that limits their imagination, considering the time constraints. To allow the process to flow smoothly, we decided on text prompts that would let everyone draw freely.


After some user testing, a problem arose in writing the code for smartphones. Before we could finalize the HTML pages, we had to figure out a way to keep the canvas fixed in the web browser on both Android and iPhone. Without this detail coded in, it would be difficult for participants to draw properly in the short period of time given during the presentation.

Date: Thursday, Oct 4th, 2018 and Friday, Oct 5th, 2018

Carisa spent the day figuring out how to fix the moving-browser problem on smartphones. If the drawing code was called from mousePressed() alongside windowResized(), the browser would still move on the iPhone, though there was no problem on Android. After some time and a little advice, she figured out that simply moving the same code from draw() and mousePressed() into touchMoved() stopped the unnecessary movement in the Safari browser! We were relieved to fix this crucial detail, as it allowed for ease of drawing with our fingers on the phone screen.
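The fix can be boiled down to a small sketch: doing the drawing inside touchMoved() and returning false asks p5 to prevent the browser’s default touch behaviour (scrolling), which keeps the canvas still on mobile Safari. The stroke settings below are illustrative, not our exact code.

```javascript
// Sketch of the fix described above (brush settings are illustrative).
// Drawing happens in touchMoved() rather than draw()/mousePressed(),
// and returning false cancels the browser's default touch behaviour,
// which is what stops Safari from scrolling the page mid-stroke.
function setup() {
  createCanvas(windowWidth, windowHeight);
  background(255);
}

function windowResized() {
  // keep the canvas matched to the viewport
  resizeCanvas(windowWidth, windowHeight);
}

function touchMoved() {
  stroke(0);
  strokeWeight(4);
  line(pmouseX, pmouseY, mouseX, mouseY);
  return false; // prevent default scrolling/zooming on mobile browsers
}
```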


After fixing the browser problem, we spent the rest of the day trying to enhance the drawing experience. We tried to find a way to let the stroke change color as it is drawn across the canvas, to develop more unique brushes, and perhaps to add a toggle button for switching between pen brushes.


These attempts led to unsatisfying results, perhaps due to limitations in our understanding of how to write the code. Figuring out how such a button could be implemented on the smartphone was difficult, since what allowed the button to be pushed was a mousePressed() function rather than a touch function. Carisa tried a variety of ways to connect DOM elements (sliders, buttons) to alternating strokes drawn on the canvas, but they could not be referenced by the P5 library when called with stroke(). For example, ‘bline’ and ‘rline’ variables were made to call upon the strokes, but they did not work once tested in the web browser.


Alternating the code between touchMoved() and draw() did not work either so the idea was scrapped.


We realized later on that this might be resolved by using a shape as a toggle instead, which required the use of Boolean statements. We could not continue experimenting with this possibility due to time constraints, so instead we decided to use the additional brushes Georgina created for the “fun” aspect of the activity. Some groups would use a “funky” brush and the others a normal pen-stroke brush on different canvases. The aim of these differences was to reflect the “exquisite” aspect of the sketches and to invite possibilities as to how people would react to using an unconventional brush.
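A sketch of how that shape-as-toggle idea with Booleans could have worked; the toggle bounds and both brushes here are made up for illustration.

```javascript
// Hypothetical shape-as-toggle: tapping inside a small square flips a
// Boolean between the pen brush and a "funky" brush, avoiding DOM
// buttons entirely. Bounds and brush styles are invented.
let useFunky = false;
const TOGGLE = { x: 10, y: 10, w: 40, h: 40 };

// Pure hit test, kept separate so the logic is easy to check.
function insideToggle(px, py) {
  return px >= TOGGLE.x && px <= TOGGLE.x + TOGGLE.w &&
         py >= TOGGLE.y && py <= TOGGLE.y + TOGGLE.h;
}

function touchStarted() {
  if (insideToggle(mouseX, mouseY)) {
    useFunky = !useFunky;   // flip brushes
    return false;           // don't also start a stroke under the button
  }
}

function touchMoved() {
  if (useFunky) {
    noStroke();
    fill(random(255), 0, random(255));
    ellipse(mouseX, mouseY, random(4, 12), random(4, 12));
  } else {
    stroke(0);
    line(pmouseX, pmouseY, mouseX, mouseY);
  }
  return false; // keep the browser from scrolling
}
```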

Activity Implementation:    


The class was divided into 5 groups of 4 people, each person holding their smartphone. They locked the orientation of their screens and accessed the canvases we hosted on the webspace. Three groups wanted to use the funky brushes, which left the other two using the regular pen brush. Each person was assigned one part of the corpse composition, starting with the head, followed by the chest, torso and, lastly, the legs. Each group lined up in a row in front of a central table area, where they took turns drawing the parts within a set amount of time. The plan was for each person to spend around 1–2 minutes on one part so that they could refine it to whatever extent they wished. The next person with the next body part could then gather inspiration from the person before them and continue the formation of their “exquisite sketch.” Once the activity was complete, each team could see their group results. From there, users could combine their own body-part creations with those of other groups, creating a variety of possible “exquisite sketches.”



Presentation Reflection

We found that there were many possible iterations of the activity. It could be done differently each time by varying the game mechanics, such as:

  • Providing prompts versus no prompts

Some concerns, such as the need for different HTML files for each body part, were brought up (Were they necessary? Did we need to see the prompts?), which allowed us to reflect on the code we created. Was it a necessity, or something that completed the concept? We concluded that having the basic information shown on the canvas was vital for newcomers to the concept of “Exquisite Sketches,” and could open other possibilities for more knowledgeable participants.

  • Not needing all participants to go in order, but allowing them to draw whatever body part they please and see where the results take them.

This was an observation we considered, knowing how spontaneous the experience could be once performed in class. To make the activity slightly more structured, however, we decided to have the participants go in order to understand the process.

  • Having the option to change brushes on the canvas

This would have been a great thing to implement. Having more tools to use would allow a change in the mechanics of time spent drawing and the craft of the sketch.

  • Instead of already seeing the result of the first body part, perhaps conceal it slightly so that the group comes to a more exciting sketch at the end.

This was a valid observation that could tie into another iteration of the game, one that changes the rules to make each play-through a separate experience.

The game is very versatile in terms of the directions it could go. Each time it is played, whether in a party context or as a fun icebreaker, it can produce distinctive results. There is not just one way to play it, which makes it an adaptable, entertaining project.

Learning Reflection

Through this project, we came to recognize our own processes in reacting to the brief. Facing the ordeal of creating an experience across 20 screens overwhelmed us, which led to over-thinking the concept rather than realizing it. As a result, we prioritized the functionality and presentation of a minimum viable product. The technical realization was limited by our own skills in writing code and by how gradually we came to understand the P5 library. We both noticed the learning curve we experienced, such as the transition from Processing to P5 and the process of slowly understanding why JavaScript code is written a certain way and what it can produce. There was also a lot of learning in understanding our own smartphone devices, with their abilities and constraints; we succeeded in this regard by identifying which code functions worked better than others. Hopefully, there will be a better opportunity in the future to learn to code for responsive screens across all types of resolutions and devices.


“Chat.” Socket.IO, 30 July 2018,

Yeboah, Georgina. “Georgina’s OCADU Blog.” Typothoughtography GRPH2A04FW1103 RSS, WordPress, 2018,

The Coding Train. “12.1: Introduction to Node – WebSockets and P5.js Tutorial” Online video clip. YouTube, YouTube, 13 Apr. 2016. Web. 24 Sep 2018.

The Coding Train. “12.2: Using Express with Node – WebSockets and P5.js Tutorial” Online video clip. YouTube, YouTube, 13 Apr. 2016. Web. 24 Sep 2018.

The Coding Train. “12.3: Connecting Client to Server with – WebSockets and P5.js Tutorial” Online video clip. YouTube, YouTube, 13 Apr. 2016. Web. 27 Sep 2018.

The Coding Train. “12.4: Shared Drawing Canvas – WebSockets and P5.js Tutorial” Online video clip. YouTube, YouTube, 13 Apr. 2016. Web. 27 Sep 2018.

Tom. “Challenge: Complete The Drawing.” Bored Panda, Web. 3 Oct 2018.

Gotthardt, Alexxa. “Explaining Exquisite Corpse, the Surrealist Drawing Game That Just Won’t Die.” 11 Artworks, Bio & Shows on Artsy, Artsy, 4 Aug. 2018, Web. 3 Oct 2018.

Tate. “Cadavre Exquis (Exquisite Corpse) – Art Term.” Tate, Tate, Web. 3 Oct 2018.

“Relief Prints.” Carisa Antariksa, Web. 4 Oct 2018.

“INSIGHTS.” MSLK, Oct 2018.


