Author Archive

ABSTRACT (2018): An Interactive Digital Painting Experience by Georgina Yeboah

(Figures 1-3. New media artist Georgina Yeboah’s silhouette immersed in the colours of ABSTRACT (2018). Showcased at OCADU’s Experimental Media Gallery.)

ABSTRACT’s Input Canvas: https://webspace.ocad.ca/~3170683/index.html

ABSTRACT’s Online Canvas: https://webspace.ocad.ca/~3170683/ABSTRACT.html

GitHub Link: https://github.com/Georgiedear/Experiment-4-cnc

Project Description: 

ABSTRACT (2018) is an interactive digital painting collective that collects simple, ordinary strokes from users’ mobile devices and, in real time, translates them into lively, vibrant strokes projected on a wall. The installation was projected onto the wall of the Experimental Media room at OCADU on November 23rd, 2018. ABSTRACT’s public canvas is also accessible online, so participants and viewers alike can engage with and be immersed in the wonders of ABSTRACT anytime, anywhere.

The idea behind ABSTRACT was to express and celebrate the importance of user presence and engagement in a public space, starting from a private, enclosed medium such as a mobile device. Since people tend to be encased in the digital world of their phones, closing themselves off in their own bubbles at times, it felt important to acknowledge how significant their presence is outside of that space and what users have to offer the world simply by existing. The users make ABSTRACT exist.

Here’s the latest documented video of ABSTRACT:


https://vimeo.com/302788614

Process Journal:

Nov 15th, 2018: Brainstorming Process

(Figures 4-6. Initial stages of brainstorming on Nov 15th.)

Ever since Experiment 1, I’ve wanted to do something involving strokes. I was also interested in creating a digital fingerprint that could be left behind by anyone who interacted with my piece. I kept envisioning something abstract yet anonymous for a user’s input online. While trying out different ways of picturing what I wanted to do, I started thinking about translating strokes into different ones as an output, at first just between canvases on my laptop. I wanted to go even further by outputting more complex brush strokes from the simple, ordinary ones I drew on my phone: a simple stroke could output a squiggly one in return, or a drawing of a straight line could appear diagonally on screen. I kept playing with this idea until I decided to just manipulate the colour of the strokes’ output for the time being.

Nov 19th, 2018: Playing with strokes in p5.js and PubNub

Using PubNub’s server to pass messages between p5.js sketches, I started to play with the idea of colours and strokes. I experimented with a couple of outputs and even considered projecting the same traced strokes onto the digital canvas with other characteristics, but later felt the traced strokes would hinder the ambiguity I was aiming for. I also noticed that I was outputting the same randomization of colours and strokes both on mobile and on the desktop, which was not what I wanted.
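For reference, here is a minimal sketch of the input side of this pattern, assuming the PubNub v4 JavaScript SDK; the keys and channel name are placeholders, not the project’s actual values.

```javascript
// Input canvas: draw plain white strokes and publish their coordinates.
const pubnub = new PubNub({
  publishKey: 'pub-c-xxxx',   // placeholder keys
  subscribeKey: 'sub-c-xxxx',
  uuid: 'abstract-input',     // any unique client id
});

function setup() {
  createCanvas(windowWidth, windowHeight);
  background(0);
}

// On mobile, touchMoved() fires as a finger drags across the canvas.
function touchMoved() {
  stroke(255);
  ellipse(mouseX, mouseY, 10, 10); // the simple stroke the user sees

  // Publish only the raw coordinates; the output canvas decides how to draw them.
  pubnub.publish({
    channel: 'abstract-strokes', // placeholder channel name
    message: { x: mouseX, y: mouseY },
  });
  return false; // keep the browser from scrolling while drawing
}
```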

Nov 21st, 2018: Understanding Publishing and Subscribing with PubNub


Figure 9. Kate Hartman’s diagram on Publishing and Subscribing with PubNub.

After a discussion with my professors, I realized that all I needed to do to distinguish the characteristics of the strokes I input from the ones later output was to create another JavaScript file that would only publish the variables I sent from my ellipse calls:

Figure 10. Drawn primitive shapes and their incoming variables being delivered from the other JavaScript file under the function touchMoved().

Nov 22nd and 23rd, 2018: Final Touches and Critique Day

On the eve of the critique, I managed to create two distinguishable strokes: ordinary simple strokes on one HTML page with its own JS file, and vibrant stroke outputs on the other. The connection was successful. I decided to add triangles to the vibrant strokes and play around with the opacity to give the brush stroke more character. I later tested it with another user, and we both enjoyed how fun and fluid the interaction was.
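For reference, a minimal sketch of the output side, subscribing to the same placeholder channel as above; the colours, opacity, and triangle placement here are illustrative guesses rather than the project’s exact values.

```javascript
// Output canvas: re-render each incoming point as a vibrant stroke.
const pubnub = new PubNub({
  publishKey: 'pub-c-xxxx',
  subscribeKey: 'sub-c-xxxx',
  uuid: 'abstract-output',
});

function setup() {
  createCanvas(windowWidth, windowHeight);
  background(0);

  pubnub.subscribe({ channels: ['abstract-strokes'] });
  pubnub.addListener({
    message: (event) => {
      const { x, y } = event.message;
      noStroke();
      // Semi-transparent random colour gives the stroke more character.
      fill(random(255), random(255), random(255), 120);
      ellipse(x, y, 20, 20);
      triangle(x, y - 15, x - 12, y + 10, x + 12, y + 10);
    },
  });
}
```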

Figure 11. User testing with another participant.

Figure 12. Simple white strokes creating vibrant strokes on the digital canvas of ABSTRACT.

Here are some stills with their related strokes:

Figure 13. Output of vibrant strokes from multiple users’ input.

Overall, the critique was a success. When the installation was projected in a public space, users engaged and interacted with the strokes they displayed on the wall. Some got up and even took pictures as strokes danced around them and their silhouettes. It was a true celebration of user presence and engagement.

Figure 14. A participant getting a picture taken in front of ABSTRACT’s digital canvas.


Figure 15. Experimental Media room where ABSTRACT was installed.


Figure 16. Georgina Yeboah standing in front of her installation ABSTRACT in the Experimental Media room at OCADU.

Related References

One of my biggest inspirations for interactive installations that require user presence and engagement, like ABSTRACT, has always been the work of Camille Utterback. Her commissioned piece Abundance (2007) tracked the movements and interactions of passersby in San Jose’s plaza, creating interesting projections of colours and traces across the building. Much of Utterback’s work uses spatial movement and user presence to express a reflection of the life interacting and existing in the work’s space.

References

Multiuser drawing. (n.d.). Retrieved from http://coursescript.com/notes/interactivecomputing/animation/pubnub/

Kuiphoff, J. (n.d.). Pointillism with PubNub. Retrieved on November 21, 2018 from http://coursescript.com/notes/interactivecomputing/pubnub/

npuckett and katehartman. (2018, November 26). CC18 / Experiment 4 / P5 / pubnub / 05_commonCanvas_dots. GitHub. Retrieved from https://github.com/DigitalFuturesOCADU/CC18/tree/master/Experiment%204/P5/pubnub/05_commonCanvas_dots

Utterback, C. (2007). Abundance. Retrieved from http://camilleutterback.com/projects/abundance/

PittsburghCorning. (2011, April 8). Camille Utterback – Abundance. Retrieved from https://www.youtube.com/watch?v=xgRFUsVVb84

First Flight (An Interactive Paper Airplane Experience)

Experiment 3:

By: Georgina Yeboah

Here’s the Github link: https://github.com/Georgiedear/CNCExperiment3

 

Figure 1. “First Flight” (An Interactive Paper Airplane Experience, 2018). Photo taken at OCADU Grad Gallery.

First Flight (FF) (2018) is an interactive, tangible experience in which users steer a physical paper airplane to control the orientation of an on-screen sky, so that they appear to be flying with the screen while attempting to fly through as many virtual hoops as they can.

Figure 2. “First Flight Demo at OCADU Grad Gallery.” 2018.

 

Figure 3. First Flight demo at OCADU Grad Gallery (2018).

Video Link: https://vimeo.com/300453454

The Tech:

The installation includes:

  • x1 Arduino Micro
  • x1 BNO055 Orientation Sensor
  • x1 Breadboard
  • x1 Laptop
  • A Couple of Wires
  • Female Headers
  • 5 Long Wires (going from the breadboard to the BNO055)
  • A Paper Airplane

Process Journal:

Thursday Nov 1st, 2018: Brainstorming to a settled idea.

Concept: Exploring Embodiment with Tangibles Using a Large Monitor or Screen. 

I thought about a variety of ideas leading up to the airplane interaction:

  1. Using a physical umbrella as an on/off switch to change the state of a projected animation. If the umbrella was closed, it would be sunny; if it were open, the projection would show an animation of rain.
  2. Picking up objects to detect a change in distance (possibly using an ultrasonic sensor). I could trigger different animations using objects. (For example, picking up sunglasses from a platform would trigger a summer beach scene projection.)
  3. I also thought about using wind/breath as an input to trigger movement of virtual objects, but was unsure of where or how to get the sensor for it.
  4. I later thought about using a potentiometer to create a clock that triggers certain animations to represent the time of day. A physical ferris wheel that would control a virtual one and cause some sort of animation was also among my earliest ideas.

Figure 4. First initial ideas of embodiment.

 

Figure 5. Considering virtual counterparts of the airplane or not.

Monday Nov 5th, 2018:

I explored and played with shapes in 3D space using the WEBGL mode in p5.js. I learned a lot about WEBGL and the properties of its z-axis.

Figure 6. Screenshot of Airplane.js code.

I looked at the camera properties and reviewed the syntax from Daniel Shiffman’s “Processing P3D” tutorial. I planned to set the CSS background’s gradient and later attach the orientation sensor to control the camera instead of my mouse.

Figure 7. Camera syntax in WEBGL: controlling the movement of the camera with mouseX and mouseY.
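The gist of that camera code, reconstructed from the p5.js camera() reference rather than copied from the project, looks something like this (the ranges and the placeholder torus are mine):

```javascript
function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);
}

function draw() {
  background(200, 225, 255);
  // Map the mouse position to a camera position orbiting the origin.
  const camX = map(mouseX, 0, width, -400, 400);
  const camY = map(mouseY, 0, height, -400, 400);
  camera(camX, camY, 600, // eye position
         0, 0, 0,         // point the camera looks at
         0, 1, 0);        // "up" direction
  torus(80, 20);          // placeholder hoop at the origin
}
```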

 

Fig x. Georgina Yeboah. 2018. "First Flight's Interface using WEBGL."

Figure 8. First Flight’s Interface using WEBGL.

Tuesday Nov 6th, 2018.

I had planned to add cloud textures for the sky but never found the time to do so. I did manage to add my gradient background using CSS, though.
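A gradient along those lines can also be applied from the sketch itself; this is just one way to do it, and the colour stops are placeholders, not the project’s palette.

```javascript
function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);
  // With no background() call, the canvas stays transparent,
  // so the CSS gradient on the page shows through as the sky.
  document.body.style.background = 'linear-gradient(#2b5876, #ffd1dc)';
}
```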

I also planned to add obstacles to make reaching the hoops challenging, but I didn’t include them due to time constraints and prioritization; I thought they would be best suited for future work.

Thursday Nov 8th, 2018.

On the eve of the critique, I successfully soldered long wires to the female headers that would attach to the BNO055 orientation sensor. The sensor sat nicely on top of the paper airplane’s head, covered with extra paper. At the other end, the sensor connected to a breadboard on which the Arduino Micro sat.

Figure 9. BNO055 orientation sensor sits nicely on top of the paper airplane.
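A hedged sketch of how the sensor data could reach the sketch and steer the scene, assuming the Arduino prints comma-separated heading/roll/pitch angles over serial and the p5.serialport library (with its companion server app) relays them to the browser; the port name, message format, and mappings are all assumptions, not the project’s actual code.

```javascript
let serial;
let heading = 0, roll = 0, pitch = 0;

function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);
  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbmodem1411'); // placeholder port name
  serial.on('data', serialEvent);
}

function serialEvent() {
  const line = serial.readLine(); // e.g. "123.4,-5.6,7.8"
  const parts = line.trim().split(',');
  if (parts.length === 3) {
    [heading, roll, pitch] = parts.map(Number);
  }
}

function draw() {
  background(200, 225, 255);
  // Tilt the whole scene opposite to the plane's orientation so the
  // sky appears to move with the paper airplane.
  rotateZ(radians(-roll));
  rotateX(radians(-pitch));
  rotateY(radians(-heading));
  torus(80, 20); // placeholder hoop
}
```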

References and Inspirations:

I wanted to play with the idea of embodiment. Since I’ve worked with VR systems in conjunction with tangible objects for a while, I wanted to revisit those kinds of design ideas, but with a screen instead of immersive VR. A monitor big enough to carry the engagement seemed a simple enough way to explore this idea of play with a paper airplane.

I looked online for inspiring graphics to help me start building my world. I wanted this to be a form of play so I wanted the world I’d fly through to be as playful and dynamically engaging as possible while flying.

Paper Planes:

Paper Planes, by Active Theory, was a web application created for the Google I/O event back in 2016 (Active Theory). It was an interactive web-based activity where guests at the event could send and receive digital airplanes from their phones by gesturing a throw toward a larger monitor. Digital paper airplanes could be thrown and received across 150 countries (Active Theory). This gesture of creating and throwing in order to engage with a larger whole through a monitor inspired my project’s playful gesture of play and interactivity.

Figure 10. Active Theory. (2016). Paper Planes’ online web-based installation.

The CodePad:

This website features a lot of programmed graphics and interactive web elements. I happened to come across this WEBGL page by chance and was inspired by the shapes and gradients of the world it created.

Figure 11. Meyer, Chris. (n.d.). “WebGL Gradient”. Retrieved from https://codepad.co/snippet/xC6SberG

 

P5.Js Reference with WEBGL:

I found that the torus (the donut) was part of WEBGL’s geometry set, and next to the cone, I thought it would be an interesting shape to play and style with. The torus would wind up becoming my array of hoops for the airplane to fly through.

 


Figure 12. p5.js. (n.d.). “Geometries”. Retrieved from https://p5js.org/examples/3d-geometries.html
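A minimal sketch of how an array of torus hoops might be laid out along the z-axis, in the spirit of that example; the counts and positions are invented.

```javascript
const hoops = [];

function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);
  for (let i = 0; i < 10; i++) {
    hoops.push({
      x: random(-200, 200),
      y: random(-120, 120),
      z: -300 * (i + 1), // space the hoops out ahead of the camera
    });
  }
}

function draw() {
  background(200, 225, 255);
  noFill();
  stroke(255, 180, 0);
  for (const h of hoops) {
    push();
    translate(h.x, h.y, h.z);
    torus(60, 8); // the hoop the plane flies through
    pop();
  }
}
```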

Future work:

Currently, the project has many iterations and features I would like to add or expand on. I would like to finalize the environment and create a scoring system so that the user collects points when they fly through a hoop: the more hoops you pass through, the more points you get. Changing the environment’s gradient background after a period of time is another feature I would like to work on. I believe there is a lot of potential for First Flight to eventually become a fully playful and satisfying experience with a paper airplane.

References 

3D Models. Cgtrader. (2011-2018). Similar free VR / AR / Low Poly 3D Models. Retrieved from https://www.cgtrader.com/free-3d-models/exterior/landscape/low-poly-clouds-pack-1

ActiveTheory. (n.d.). Paper Planes. Retrieved from https://activetheory.net/work/paper-planes

Dunn, James. (2018). Getting Started with WebGL in P5. Retrieved on Nov 12th, 2018 from https://github.com/processing/p5.js/wiki/Getting-started-with-WebGL-in-p5

McCarthy, Lauren. (2018). Geometries. p5.js Examples. Retrieved from https://p5js.org/examples/3d-geometries.html

Meyer, Chris. (2018). WebGL Gradient. Codepad. Retrieved from https://codepad.co/snippet/xC6SberG

Paper Planes. (n.d.). Retrieved from https://paperplanes.world/

Shiffman, Daniel. (n.d.). P3D. Retrieved from https://processing.org/tutorials/p3d/

W3Schools. (1999-2018). CSS Gradients. Retrieved from https://www.w3schools.com/css/css3_gradients.asp

 

Exquisite Sketches ✿

Experiment 1:

Exquisite Sketches

Carisa P. Antariksa, Georgina Yeboah

Exquisite Sketches is a collaborative piece that involves digital sketching and assembly. Users are tasked with drawing a specific body part prompted on their canvas with their smartphones and assembling their drawn sketch with others’. The concept is to take something that can be intricately created using a variety of materials and turn it into something as simple as drawing with your finger on your smartphone browser, for a fun activity.


Exquisite Sketches was created using p5.js and HTML pages. Each canvas prompt had either a normal pen brush or its own unique brush stroke.

GitHub Link | Webspace Master Page

Canvas Prompts and their Brushes

(Screenshots of the four canvas prompts and their unique brushes.)

Normal Pen Brush per Canvas:

(Screenshots of the normal pen brush on each canvas.)

Project Context

This project was inspired by the “Exquisite Corpse”, an art term for a “collaborative drawing approach first used by surrealist artists to create bizarre and intuitive drawings” (Tate), a technique invented in the 1920s. Nowadays, it can be adapted to many uses, whether learning activities for children or recreational games. The goal of the activity is to have participants experience a “surprise reveal” at the end, when all the unique parts are assembled.

This art can be implemented in many ways through different mediums. There are projects that use this technique through relief print, with either linoleum tile pieces or woodblocks. In these implementations, each artwork is cut into pieces and then combined with parts from other artworks. They are often aligned seamlessly to give the impression that they connect to form a whole new corpse.

A 3 part example:

(Image: a three-part relief-print combination.)

 

A 4 part example:

(Image: a four-part relief-print combination.)

This technique has also been adapted into many casual games. The parts drawn are often body parts, sometimes phrases, and often a mixture of the two, and archives of the resulting variations can accumulate.

Process Journal

Date: Monday, Sept 24th 2018

In our first meeting after the groups were formed, we listed possible ideas we could implement on 20 screens. Our thought process began with two directions: an interactive activity or an installation that people could contribute to. Our list of ideas is as follows:

  1. Music (using p5.sound library) – Tapping different screens to create a tune, encourage movement
  2. Weather – Creating “electronic rain.” Users can interact with the screen, the elements are moveable as you touch them
  3. Sending information across computers – Drawing on one computer and having that show up on the other? Users collaborating to complete an abstract drawing. Or a bouncing ellipse? The idea is to keep the ball “floating” as it moves forward.
  4. Creating a chain reaction with elements on the screen – Inspired by dominoes.

From there, we liked the idea of drawing, so we developed an initial concept for an interactive experience that involved sketching with different brush strokes. The first option would be to have a user control one mouse on one device (laptop or smartphone) that could control various brush strokes on the other devices, each starting from a different position on the screen. The screens would be styled in a grid format. Simultaneously, the user would be playing and drawing with multiple brush strokes at the same time through one device.

The second option would be to have all the computers communicate with each other and allow a collaboration to happen. If one user were to draw on their screen, it would appear on the other person’s screen, and so on.

Date: Thursday, Sept 27th, 2018 – oscP5, Sockets and Nodes

We wanted to create an interactive installation with 20 screens. Georgina proposed we use a library she had used before to communicate between sketches on different devices. She started looking into how to get oscP5 working with p5.js but ran into some socket errors where variables weren’t being reached, since the code was having trouble locating the file.

We found YouTube tutorials by Daniel Shiffman (The Coding Train) on how sockets work in p5.js. We watched and learned from these videos to see if they could help us achieve what we wanted. They introduce node and sockets and how to send and receive messages across computers through a server. Daniel Shiffman’s tutorials on WebSockets and p5.js represented this well:

(https://www.youtube.com/watch?v=bjULmG8fqc8&t=1s)

Date: Friday, Sept 28 2018

At this point, we were still trying our best to push for using node.js in our project. We used half the class trying to make the WebSocket work, and tried to replicate it on Carisa’s laptop. After some failures, we went to Nick for advice, and he told us not to pursue it and to focus our efforts on something more feasible. Regardless of the shift, we still felt that creating a project that involved drawing was something we wanted to achieve. Along with these complications, Georgina proposed a brush that she had created, and Carisa tried to translate the code from Processing to the p5 library.

Date: Monday Oct 1st 2018

The presentation of all the case studies we contributed to was very thorough and showed great potential for expanding our ideas. Georgina pitched the idea of using unique brushes after her fun stroke sketches, made of triangles and ellipses, started working in p5. Afterwards, she went back over what had made the node.js process fail. It turned out there was a hyperlink issue, so she instead stated the entire directory path.


Now that the JavaScript was being reached and the server was connected to the code, we could do some really cool stuff with this interactive brush stroke I made!


We both experimented with different starting points for the brush, to see how they could create an interesting composition on other screens, given a working connection.
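A hedged reconstruction of what a brush made of triangles and ellipses might look like in p5; the sizes, colours, and jitter are invented for illustration.

```javascript
function setup() {
  createCanvas(windowWidth, windowHeight);
  background(255);
}

function touchMoved() {
  noStroke();
  // Semi-transparent random colour for a playful, layered look.
  fill(random(255), random(255), random(255), 150);
  ellipse(mouseX, mouseY, random(5, 15));
  // Scatter a small triangle near the touch point for extra texture.
  const ox = random(-10, 10);
  const oy = random(-10, 10);
  triangle(mouseX + ox, mouseY + oy - 8,
           mouseX + ox - 7, mouseY + oy + 6,
           mouseX + ox + 7, mouseY + oy + 6);
  return false; // keep the mobile browser from scrolling while drawing
}
```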


The next step is sending this drawing function across other devices and having different brush strokes going on. Hopefully it doesn’t take as long as simply connecting the JavaScript files did. (The transition from Processing to p5 has been really tricky, but we’re getting there, slowly but surely…)

Date: Tuesday, Oct 2nd, 2018 – Wednesday, Oct 3rd, 2018

Unfortunately, further tweaking did not solve any of the major issues we had before, and we were right back where we started. The HTML file could be opened locally, but the JavaScript files could not. The Socket.io library was still not being found, and the project’s JavaScript files could not be opened locally due to security permissions in Google Chrome and other browsers. We decided to abandon node.js and Socket.io altogether and rethink our approach to developing our sketch project.

Soon, we had produced a back-up plan that involved a “complete the drawing” concept, pulling from a case study on this forum: Complete the Drawing. Collaboration can be formed by drawing from the prompt, which can result in interesting possibilities.

To refine our new plan, ideas were laid out on a mind-map.


We both then narrowed our ideas down to a viable product we could create for Friday. With our brushes and the code that we had written, we found that applying them to the exquisite corpse idea would work well with the project brief. To fulfill the brief’s requirements, we separated the “corpse” into four parts: the head, chest, torso, and legs. This led us to write the code for the body parts in different HTML files for smartphones, so that four phones per group could create an abstract body. People from other groups can mix and match their prompt drawings for some exciting mashups!

We brainstormed ways to make the prompts clearer and wanted to see if an existing drawn prompt on the canvas would allow for a more effective experience.


We then realized that having a drawn prompt might not be necessary; it might influence the participants in a direction that limits their imagination, considering the time constraints. To allow the process to flow smoothly, we decided on text prompts that would allow everyone to draw freely.


After some user testing, a problem arose in writing the code for smartphones. Before we could finalize the HTML pages, we had to figure out a way to keep the canvas fixed in the web browser on both Android and iPhone. Without coding this detail in, it would be difficult for participants to draw properly in the short period of time they are given in the presentation.

Date: Thursday, Oct 4th, 2018 and Friday, Oct 5th, 2018

Carisa spent the day figuring out how to fix the moving-browser problem on smartphones. The problem was that if a mousePressed() function was called together with windowResized(), the page would still move in browsers on the iPhone, though there was no problem on Android. After some time and a little advice, she figured out that simply placing the same code from draw() and mousePressed() into touchMoved() stopped the unnecessary movement in the Safari browser! We were relieved to fix this crucial detail of the experience, as it allows for ease of drawing with our fingers on the phone screen.
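The fix boils down to drawing inside touchMoved() and returning false so the browser doesn’t treat the drag as a scroll; here is a minimal sketch of that pattern (the stroke weight is a placeholder):

```javascript
function setup() {
  createCanvas(windowWidth, windowHeight);
  background(255);
}

function touchMoved() {
  stroke(0);
  strokeWeight(4);
  line(pmouseX, pmouseY, mouseX, mouseY); // connect successive touch positions
  return false; // suppress the default touch behaviour (page scrolling)
}
```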


After fixing the browser problem, we spent the rest of the day trying to figure out ways to enhance the drawing experience. We tried to allow the stroke to change colour as it is drawn across the canvas, to develop more unique brushes, and perhaps to add a toggle button to switch between different pen brushes.


The attempts led to unsatisfying results, perhaps due to limitations in our understanding of how to write the code. Figuring out how this button could be implemented on the smartphone was difficult, as what allows the button to be pushed is a mousePressed() function and not a touch function. Carisa tried a variety of ways to connect these DOM elements (sliders, buttons) to alternating strokes drawn on the canvas, but they could not be referred to by the p5 library when called with stroke(). For example, ‘bline’ and ‘rline’ variables were made to call upon the strokes, but they did not work once tested in the web browser.


Alternating the code between touchMoved() and draw() did not work either, so the idea was scrapped.


We realized later on that this might be resolved by using a shape as a toggle instead, which would require Boolean statements. We could not continue experimenting with this possibility due to time constraints, so instead we decided to use the additional brushes Georgina created for the “fun” aspect of the activity. Some groups would use a “funky” brush and the others a normal pen-stroke brush on different canvases. The aim of these differences was to reflect the “exquisite” aspect of the sketches and invite reactions to using an unconventional brush.
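A hedged sketch of that shape-as-toggle idea: tapping a square in the corner flips a Boolean that switches between brushes. The toggle region, the two brushes, and the styling are invented for illustration.

```javascript
let funky = false; // which brush is active

function setup() {
  createCanvas(windowWidth, windowHeight);
  background(255);
}

function touchStarted() {
  // Treat the top-left 50x50 square as the toggle button.
  if (mouseX < 50 && mouseY < 50) {
    funky = !funky;
  }
  return false;
}

function touchMoved() {
  if (funky) {
    noStroke();
    fill(random(255), random(255), random(255), 150);
    ellipse(mouseX, mouseY, random(5, 15));
  } else {
    stroke(0);
    strokeWeight(3);
    line(pmouseX, pmouseY, mouseX, mouseY);
  }
  return false;
}

function draw() {
  // Keep the toggle button visible on top of the accumulating strokes.
  stroke(0);
  fill(funky ? 200 : 255);
  rect(0, 0, 50, 50);
}
```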

Activity Implementation:

The class was divided into 5 groups of 4 people, where everyone held their own smartphone. They would lock the orientation of the screen and then access the canvases we hosted on the webspace. 3 groups wanted to use the funky brushes, which led to the other 2 using the regular pen brush. Each person was then assigned one part of the corpse composition, starting with the head, followed by the chest, torso, and lastly, the legs. Each group lined up in a row in front of a central table area where they took turns drawing the parts within a set amount of time. We planned for each person to spend around 1-2 minutes on one part so that they could refine it to whatever extent they wished. The next person, with the next body part, could then gather inspiration from the person before them and continue the formation of their “exquisite sketch.” Once they had completed this activity, each team could see their group results. From there, users could combine their own body-part creation with other groups’, creating a variety of possible “exquisite sketches.”


 

Presentation Reflection

We found that there were many possible iterations of the activity’s implementation. It can be done differently each time with game mechanics, such as:

  • Providing prompts versus no prompts

Some concerns, such as the need to have different HTML files for each body part, were brought up (Were they necessary? Did we need to see the prompts?), which allowed us to reflect on the code that we created. Was it out of necessity, or something that could complete the concept? We concluded that having the basic information shown on the canvas was vital for beginners to the concept of “Exquisite Sketches” and could transform into other possibilities for more knowledgeable participants.

  • Not needing all participants to go in order, but allowing them to draw whatever body part they please and see where the results take them.

This was an observation that we considered, knowing how spontaneous the experience could be once performed in class. To make the activity slightly more structured, however, we decided on having the participants go in order to understand the process.

  • Having the option to change brushes on the canvas

This would have been a great thing to implement. Having more tools to use would allow a change in the mechanics of time spent drawing and the craft of the sketch.

  • Instead of already seeing the result of the first body part, perhaps conceal it slightly so that the group will come to a more exciting sketch at the end.

This was a valid observation that could tie into another iteration of the game, changing the rules to make it a separate experience each time.

The game is very versatile in terms of the direction it could go. Each time it is played, whether in a party context or just a fun ice breaker, it can produce distinctive results. There is not just one way to play it, making it such an adaptable, entertaining project.

Learning Reflection

Through this project, we were able to recognize our own processes in reacting to the brief. Having to face the ordeal of creating an experience within 20 screens overwhelmed us, which led to us over-thinking the concept rather than realizing it. As a result, we prioritized the functionality and presentation of a minimum viable product. The technical realization was limited by our own skills in writing code and by how gradually we came to understand the p5 library. We both recognized the learning curve we experienced, such as the transition from Processing to p5 and the process of slowly understanding why JavaScript code is written a certain way, what it can represent, and what it can result in. There was also a lot of learning in understanding our own smartphone devices, complete with their abilities and constraints. We were successful in this regard after being able to identify which code functions worked better than others. Hopefully, there will be a better opportunity in the future to learn to code for responsive screens across all types of resolutions and devices.


References

“Chat.” Socket.IO, 30 July 2018, socket.io/get-started/chat/.

Yeboah, Georgina. “Georgina’s OCADU Blog.” WordPress, 2018, http://blog.ocad.ca/wordpress/3170683/.

The Coding Train. “12.1: Introduction to Node – WebSockets and P5.js Tutorial” Online video clip. YouTube, YouTube, 13 Apr. 2016. Web. 24 Sep 2018.

The Coding Train. “12.2: Using Express with Node – WebSockets and P5.js Tutorial” Online video clip. YouTube, YouTube, 13 Apr. 2016. Web. 24 Sep 2018.

The Coding Train. “12.3: Connecting Client to Server with Socket.io – WebSockets and P5.js Tutorial” Online video clip. YouTube, YouTube, 13 Apr. 2016. Web. 27 Sep 2018.

The Coding Train. “12.4: Shared Drawing Canvas – WebSockets and P5.js Tutorial” Online video clip. YouTube, YouTube, 13 Apr. 2016. Web. 27 Sep 2018.

Tom. “Challenge: Complete The Drawing.” Bored Panda, www.boredpanda.com/challenge-complete-the-drawing/. Web. 3 Oct 2018.

Gotthardt, Alexxa. “Explaining Exquisite Corpse, the Surrealist Drawing Game That Just Won’t Die.” 11 Artworks, Bio & Shows on Artsy, Artsy, 4 Aug. 2018, www.artsy.net/article/artsy-editorial-explaining-exquisite-corpse-surrealist-drawing-game-die. Web. 3 Oct 2018.

Tate. “Cadavre Exquis (Exquisite Corpse) – Art Term.” Tate, Tate, www.tate.org.uk/art/art-terms/c/cadavre-exquis-exquisite-corpse. Web. 3 Oct 2018.

“Relief Prints.” Carisa Antariksa, cantariksa.com/Relief-Prints. Web. 4 Oct 2018.

“INSIGHTS.” MSLK, mslk.com/reactions/exquisite-corpse-aka-the-drawing-game/. Web. 4 Oct 2018.

 

 
