
Mess.net

by Tyson Moll, Joshua McKenna, Nicholas Alexander

Audio by Astrolope


GitHub

Overview:
Mess.net is a participatory online installation inviting users to join in on a collaborative work of art from anywhere in the world. Visitors to the website are assigned a paint colour and treated to a view of their canvas: a rapidly spinning disc. With the click of a mouse, the user sees their colour of paint dropped onto the disc and watches the mess it makes as it splatters off the surface. As projects are completed, users can log in to the gallery and enjoy their handiwork.

 

How It Works:

Mess.net receives painting commands from wireless controllers and transmits the information to a device that creates spin art.

The mechanical aspect drops paint onto a spinning wheel using gravity and solenoid valves. The valves are controlled by an Arduino microcontroller and held above the spinning wheel by a custom-built shelving unit. Paint is fed into the solenoid valves via plastic tubing sealed with waterproof tape. The spinning wheel is attached to an industrial motor from a salvaged Sears fan head, which can be operated at three different speeds. Several cameras are also fixed to the shelving unit to provide video footage of the action.

The digital aspect of the project is split across four nodes: the Arduino code, the Controller code, the Client code, and the Twitch stream. The Arduino code receives commands from the Controller and operates the solenoids as instructed. The Controller code (written with p5.js) receives instructions via PubNub from any number of Clients and forwards them to the Arduino over USB serial. The Client code, programmed with the jQuery library, gives participants the ability to send “Paint” commands to the Controller through PubNub, with selection options for different colours as well as a live stream of the spinning wheel. To livestream the process, we used OBS (Open Broadcaster Software) and three webcams to share a video stream on Twitch.
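As a rough sketch of the Controller node’s shape (the channel name, keys, serial port, and single-character colour codes below are placeholders rather than the project’s actual values):

```javascript
// Controller sketch (p5.js, with the p5.serialport and PubNub v4 libraries).
// Listens for paint commands on a PubNub channel and forwards each one
// to the Arduino as a single byte over USB serial.
const pubnub = new PubNub({
  publishKey: "pub-key-here",   // placeholder keys
  subscribeKey: "sub-key-here",
});

let serial; // USB serial link to the Arduino

function setup() {
  noCanvas();
  serial = new p5.SerialPort();
  serial.open("/dev/tty.usbmodem1411"); // port name varies by machine

  pubnub.addListener({
    message: (m) => {
      // Expect a message like { colour: "R" } from a Client;
      // forward the one-character colour code to the Arduino.
      serial.write(m.message.colour);
    },
  });
  pubnub.subscribe({ channels: ["messnet"] });
}
```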


 

Process Journal:
This project began with a desire to explore the possibilities afforded by the project’s connective technologies (PubNub and JavaScript libraries). We brainstormed a series of concepts and tools we would be excited to work with and looked for overlap, in order to land on a solid project idea we could all get behind.

From there we knew that we wanted to create an activity that would be fun to join with others from anywhere in the world, one that might even be improved by the participants not being in proximity. We examined similarly themed projects, including Studio Puckey’s donottouch.org and Studio Moniker’s Puff Up Club, taking elements we liked and spinning them into something new. What was fun about these projects? What made them unique to their medium?


We were particularly inspired by the two projects above: their sense of collaboration on a tangibly satisfying act, and their sense of play. We seriously considered several projects (a remote bubble-wrap-popping machine, a model that users would slowly flood drip by drip, and a power sander that users would guide click by click through common household objects) while continuing to rapidly generate other design ideas that fit our constraints.

Early design ideations: a device that would slowly scrape away layers from a scratch-board surface, a remote-controlled saw, a power sander slowly destroying a shoe, and a cut-your-own-tree holiday diorama

We settled on something with a fun, positive message, but kept the sense of mess and chaos. The basis for what would become Mess.net was in place.

We adjourned to do research and begin to gather materials. We conceived of the project as having three core pillars: the physical build of the apparatus, the coding, and the web-side interface.

We began the build of the apparatus by considering its requirements: it would have to be sturdy enough to hold up a considerable amount of liquid and hold the Arduino components steady and dry.

We considered and discarded several designs before landing on the final version with some help from the incomparable Reza Safaei.


Late design ideations. Here you can see the designs that would inform the final apparatus, as well as some explorations of how to realize the spinning plate.

We built the skeleton of the apparatus by assembling a tall box frame, spaced widely enough to let us reach inside and access whatever parts we placed there fairly easily. Knowing that we would want at least two levels of shelving for paint and valve control, we drilled multiple guide holes along each pillar; this way we could make adjustments quickly and easily.

We had been debating how best to realize the spinning-wheel portion of the apparatus (we considered a pottery wheel, a belt and gear, and a drill-and-friction-driven system, among others) when we found a discarded but functional Sears fan from the 1970s. We removed its cage and fan blades, then made a wooden plate and affixed it to the central hub.

The fans of the era appear to have been built with impressive torque; we had hoped that the fan’s adjustable speeds might afford us interesting opportunities in adjusting the speed of the paint, but it was so powerful that we settled on keeping it at the first setting. We spent some time exploring the possibility of adding a dimmer to the fan, but eventually shelved it as being out of scope.

The Arduino component of the project presented new challenges for us, as this was the first project most of us had encountered that required careful power management.

We chose solenoid valves as the best machinery for our purposes, having judged that the code required to control them (a simple HIGH/LOW binary command to open and close the valves) would be simple to drive over PubNub. The solenoids required 12 volts to function, far more than the Arduino Micro could supply, so we looked into managing multiple power sources. This led to the inclusion of diodes to protect the circuitry and transistors to act as switches for the solenoids. Ultimately the Arduino component proved to be among the simplest aspects of the build: once we had the circuitry working for one valve it needed only to be repeated exactly for the other two, and we were correct in judging that a simple HIGH/LOW command would effectively manage the valve. Our first iteration of the circuitry became our final iteration, and when troubleshooting we only ever needed to check connections.

Fritzing diagram of the solenoid driver circuit

We selected plastic tubing of the same diameter as our solenoid valves. The tubing held its shape strongly; we used a heat gun to straighten it out, after which it screwed tightly into the valves. Only a small amount of waterproof sealing tape was needed to make the connection from valve to tubing watertight. We had a tougher time connecting the tubing to the 2L pop bottles we had chosen as paint receptacles for their simplicity and availability: unlike the valves, the mouths of the bottles were slightly too wide to connect easily to the tubing. We sealed the connection between tube and bottle with a combination of duct tape to hold the tubing tight and sealing tape to keep it watertight.

The process of coding the communication protocol for the device was relatively straightforward; we used example code provided to the class as a backbone for the PubNub communication interface and for serial communication between a computer and the Arduino. The only message that needed to travel between the devices was one denoting the colour of paint to drop. To ensure that paint was actually dispensed from the solenoid, we held the dispense signal high for a short delay. The only other features coded independently of the web interface were two buttons on the Arduino for manually triggering the solenoids and a live display of incoming PubNub messages, two time-saving features for troubleshooting the devices.
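Put together, the Arduino side can be as small as the sketch below; the pin numbers, single-character colour codes, and pulse length are placeholder assumptions, and the debug buttons are omitted:

```cpp
// Hypothetical reconstruction of the valve-control sketch.
const int RED_VALVE = 3;    // transistor gate pins (assumed)
const int BLUE_VALVE = 5;
const int YELLOW_VALVE = 6;
const unsigned long PULSE_MS = 100; // hold time so paint actually dispenses (assumed)

void setup() {
  Serial.begin(9600);
  pinMode(RED_VALVE, OUTPUT);
  pinMode(BLUE_VALVE, OUTPUT);
  pinMode(YELLOW_VALVE, OUTPUT);
}

// Open a valve briefly: the transistor switches the 12 V supply to the solenoid.
void pulse(int pin) {
  digitalWrite(pin, HIGH);
  delay(PULSE_MS);
  digitalWrite(pin, LOW);
}

void loop() {
  if (Serial.available() > 0) {
    char c = Serial.read();
    if (c == 'R') pulse(RED_VALVE);
    else if (c == 'B') pulse(BLUE_VALVE);
    else if (c == 'Y') pulse(YELLOW_VALVE);
  }
}
```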


For the minimum viable product, we wanted the interface to let the user select between paint colours and paint with either red, blue, or yellow. As a group we felt that some visual feedback was needed to show the user that they had painted with their selected colour. Originally we proposed an animation that would float above the paint button each time the user clicked: either a paint drop in the selected colour, or a “+1” indicator. Because of time constraints we opted for a counter in the top right corner showing the total number of paint drops from all users combined.


With another revision of this project we would include a user-specific counter or visual interface element, so that the person pressing the paint button knows exactly how much they personally are contributing to the artwork. We would also replace the “Project 4” text in the bottom left corner with an HTML element that updates with each plate painted.

We developed graphical assets in Illustrator based on a primary-colour theme of red, yellow, and blue. Viewers could click to adjust their colour and press a magic paint button to deliver a command to our machine. Central to the web interface was a live stream of the wheel in motion. We had intended the video feed to be instantaneous, but we encountered a delay of roughly eight seconds in the streaming process, which we believe is a result of the computer-to-computer transmission. These interface behaviours were all programmed in jQuery.
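The core of that client behaviour amounts to something like the snippet below; the element IDs, keys, channel name, and message shape are assumptions, kept consistent with the Controller sketch above:

```javascript
// Hypothetical core of the Client interface (jQuery + PubNub v4).
const pubnub = new PubNub({
  publishKey: "pub-key-here",   // placeholder keys
  subscribeKey: "sub-key-here",
});

let selectedColour = "R"; // default to red

// Colour swatch buttons carry a data-colour attribute of "R", "B", or "Y".
$(".colour-swatch").on("click", function () {
  selectedColour = $(this).data("colour");
  $(".colour-swatch").removeClass("active");
  $(this).addClass("active");
});

// The "magic paint button" publishes one paint command per click.
$("#paint-button").on("click", function () {
  pubnub.publish({
    channel: "messnet",
    message: { colour: selectedColour },
  });
});
```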

During development of the web interface we were interested in including a particle system to visually display the colours being submitted live. We discovered that jQuery and p5.js canvases seem to conflict with one another in terms of interactability; although solutions were available to us (e.g. adjusting layering, or converting the jQuery elements to p5.js), we were short on time and decided to render the live feed of participants’ paint drops as text instead.

 

The solenoids available to us were rated only for use with water, so we were concerned about damaging them with thicker paint. Local art professional Jordana Heney advised us that the best options for our purposes would be watercolour or watered-down tempera paint. The expense of watercolour precluded its use, so we went with tempera. The option of vegetable-based inks and dyes was later brought to our attention; we haven’t had the chance to experiment with them, but would like to in future.

We experimented with paint thicknesses to get a sense of what moved well through the solenoid, left a good colourful impression, and could be consistently reproduced. We settled on a ratio of approximately 1 part paint to 1 part water, give or take depending on brand, as the best for our purposes. Just slightly thicker than water, but not so thick as to cause the valves to malfunction, this was the ratio we stuck with for the rest of the project.

Apart from some minor rewrites to the code and UI tweaks, once every pillar was connected the apparatus worked perfectly. We tested several varieties of paper before settling on paper plates as our paint surface, as their porous surface and shape were a good fit for our consistency of paint and size of our spinner.

After our crit we returned to the apparatus to create multiple variations on the painted plates, in order to better capture the different results our apparatus generated.


 

 

Project Context:

Spin art was the driving concept behind the machine’s functionality. Although interaction between the device and the wheel is presently minimal, we took great inspiration from the techniques employed in developing such artworks. Toronto artist Callen Schaub, who creates gallery-standard works this way, is an excellent example of the practice.

Mess.net aligns with fun, exploratory, tongue-in-cheek internet installation art like donottouch.org and Puff Up Club. The intended experience is to share an out-of-the-ordinary action with people, see what others did, and consider your own action in that light.

It also exists within the same sphere as participatory art installations such as The Obliteration Room by Yayoi Kusama, where guests are given circular stickers to place anywhere in a room, and Bliss by Urs Fischer and Katy Perry, where participants can slowly destroy the artwork to reveal new colours while adding colours of their own. The creators have laid out a framework, but it is the participants who define the actual final visual state of the artwork. The act of participating is the experience of the art; the final outcome is, perhaps, irrelevant.

 

Next Steps:

Based on feedback and testing, we would expand this project by experimenting with different inks and receptacle media. Paper plates were something of a stopgap, as was tempera paint; both were choices made out of necessity with time and budget in mind. Given more time, we would experiment with multiple media and generate a larger volume of work.

Once work is generated we would like to explore arranging it. Seeing many instances of the works juxtaposed might reveal interesting patterns, and playing with the arrangement would be as involved a project as their creation.

We would also like to improve the speed of interactions, add more valves and colours, automate paint reloads, and industrialize the entire process so it can be left unsupervised for long periods of time while still generating artwork.

Gallery:


 

Resources: 

Bliss (n.d.). Retrieved from http://ursfischer.com/images/439609

Controlling A Solenoid Valve With Arduino. (n.d.). Retrieved from https://www.bc-robotics.com/tutorials/controlling-a-solenoid-valve-with-arduino/

Schaub, Callen. (n.d.). Callen Schaub. Retrieved from https://callenschaub.com/

Studio Puckey. (n.d.). Do Not Touch. Retrieved from https://puckey.studio/projects/do-not-touch

Studio Moniker. (n.d.). Puff Up Club. Retrieved from https://studiomoniker.com/projects/puff-up-club

THE OBLITERATION ROOM. (n.d.). Retrieved from https://play.qagoma.qld.gov.au/looknowseeforever/works/obliteration_room/

 

 

MindPainter


Overview

MindPainter reads a user’s brainwaves and uses them as the foundation for a painting. Each painting is unique to the user and the moment the readings were collected.


It can also be used as a tool for meditation, with the inclusion of a meditation mode that measures the user’s mental relaxation and adjusts the image accordingly.

The shape of the image is determined by a mathematical function called the Rhodonea Curve.

r = a cos(kθ)

The Rhodonea Curve as described by Wikipedia

The Rhodonea Curve can be applied to draw shapes often described as “roses”, as well as patterns like those made with the toy Spirograph.


MindPainter expresses the Rhodonea Curve as code in p5.js and Processing. By adjusting variables in the equation, the code allows users to create mind-bending patterns.

The colours of the elements in the drawing are determined by assigning the values from sensor readings to RGB parameters.


The basic code in p5 and Processing for generating Rhodonea Curves.
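A minimal p5.js version of the basic rose-curve program looks roughly like this, with k and j as the adjustable parameters (the scale and step size here are arbitrary choices):

```javascript
// A p5.js rose: r = 150 * cos((k / j) * theta), plotted point by point.
let k = 5; // numerator parameter (mapped from a potentiometer or sensor reading)
let j = 4; // denominator parameter; k/j sets the petal pattern

function setup() {
  createCanvas(400, 400);
  noFill();
}

function draw() {
  background(255);
  stroke(0);
  translate(width / 2, height / 2);
  beginShape();
  // A rational rose closes after j full revolutions when k/j is in lowest terms.
  for (let theta = 0; theta < TWO_PI * j; theta += 0.02) {
    const r = 150 * cos((k / j) * theta);
    vertex(r * cos(theta), r * sin(theta));
  }
  endShape(CLOSE);
}
```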


Potentiometer inputs for k and j in the code above created this image. 

The Mindwave hardware returns a value for the following readings:

Delta
Theta
Low Alpha
High Alpha
Low Beta
High Beta
Low Gamma
Mid Gamma

Though some claim that brainwave bands have specific significance, it is not possible to draw meaningful conclusions about how these readings relate to thoughts or mental states with the MindWave: the hardware actually reads electrical activity on the surface of the skin and extrapolates brainwave activity from that data (see the MindWave Mobile user guide). However, users of MindPainter have reported seeing relationships between their thoughts and the paintings it generates; in one test, for example, a user reported an angry thought that produced a red, jagged picture which spoke of anger to them.

The Mindwave also reports two proprietary values that are not direct brainwave readings but are computed by the hardware:

Attention
Meditation

In testing it was determined that these values tend to be more reactive to user intention than the brainwaves.

The default mode of the MindPainter software uses all the values in order to create a series of notably differing images.

The secondary mode, Meditation Mode (accessible by pressing the ‘m’ key at any time), uses the brainwave readings only for cosmetic elements such as colour. The drawing of the shape itself is handled by the Attention and Meditation parameters alone, meaning the image, though always a starburst, can be clearly related to the mental actions of the wearer. The larger the starburst, the deeper the user’s meditative state.
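As an illustration only (not MindPainter’s actual code), that mapping might look like this in Processing, assuming the two eSense values arrive as integers from 0 to 100:

```java
// Illustrative mapping of the eSense values onto the starburst;
// the ranges and visual mapping here are assumptions.
int attention = 40, meditation = 70; // would be updated from the headset

void setup() {
  size(400, 400);
  stroke(0);
}

void draw() {
  background(255);
  float radius = map(meditation, 0, 100, 20, width / 2.0); // deeper meditation, bigger burst
  int rays = int(map(attention, 0, 100, 8, 48));           // attention adds rays
  translate(width / 2, height / 2);
  for (int i = 0; i < rays; i++) {
    rotate(TWO_PI / rays);
    line(0, 0, radius, 0);
  }
}
```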

Concept


The initial concept for this project was a participatory meditation. One user would wear the headgear and sit in meditation while their brainwaves were visualized on a screen for others. In this way their meditation could become a group activity and an act of creation as well as introspection, as others could view and participate in it.

Another iteration of the early concept was as a meditation assistance tool. The user would sit in meditation, and if their attention wandered an Arduino would cause a solenoid to strike a chime, bringing the user’s focus back.

During development the concept moved away from meditation and toward the more accessible artwork-generation program it became.

Process

The development of MindPainter was a triumph of research more than anything else. With the understanding that all sensors simply return numbers I wanted to explore interesting and unique sensors and the possibilities they could provide. I took some time to consider the sensors available to me and remembered that there were sensors that could take brainwave readings!

Before committing wholeheartedly to the idea of using brain sensors I experimented with the sensors available to me. A fast and simple way to express sensor data is visually, so I approached the problem as an art project. Since sensors return numbers, how might I express those numbers in an interesting visual way?

Some research into painting by algorithm uncovered the Rhodonea Curve, and a video from our friend at the Coding Train provided the basic code that would become the core engine of the painting tool. I rigged up an Arduino with an array of sensors and experimented with it to see what kind of images I could generate.

Fritzing diagram of the Arduino sensor rig

The code for replicating the MindPainter imagery via sensor and Arduino can be found on GitHub

The Arduino proved the concept: sensor readings could be used as inputs to generate complicated, visually compelling, reactive images.

At this point, secure in the notion that I could reliably generate interesting images with constant sensor data, I went looking for an affordable brain sensor. The least expensive option available on short notice was the NeuroSky Mindwave.


NeuroSky has been making brainwave sensors for some time, and there is a fair bit of documentation available for research and hacks. However, the hacking documentation I could find was for previous models of the NeuroSky hardware. Nevertheless, I wanted to experiment with the hardware.

Documentation I found early on suggested that the sensor readings could be accessed via Arduino by soldering wiring directly into the TX and Ground pins of the Mindwave’s chip. Despite having a newer model than the documentation called for I wanted to try it and see if it would work.


I took apart the Mindwave, made the necessary solders, uploaded the Arduino code for reading the sensor data, and tried to get the connection between Arduino and Mindwave working.


While the code compiled and loaded to the Arduino, there was no transmission of data between the devices. I did more digging and confirmed that the documentation I was working from had been written for an earlier model that transmitted its data via USB. The model I had was the Mindwave Mobile, which used Bluetooth. Having never worked with Bluetooth before, I had no idea how to progress. Clearly this was going to be more complicated than soldering a few wires.

I got hold of an HC-05 Bluetooth module and did a few experiments. I tried to set up the Arduino as a Bluetooth receiver, to bypass the dongle the Mindwave connects with by default. I tried to set up the Arduino as a transmitter, to replace the Bluetooth inside the headset. I even managed to find documentation that was, seemingly, exactly what I was looking for: connecting an HC-05 to the Mindwave Mobile. However, due to my own unfamiliarity with the hardware, I could not get the Mindwave and the Arduino to interface. In the interest of completing the assignment on time, I was ready to shelve the hardware interface and produce something interesting using only Arduino-based sensors and the painting function.

As a last-ditch effort to get something meaningful out of the brain sensor I sat down with Nick Puckett who advised me that I was trying to take too many steps. He explained to me that since a computer is orders of magnitude more powerful than an Arduino I’d have a far easier time if I looked for a method to connect the headset to the computer directly and skip the Arduino altogether. Furthermore, he said, it was very likely that someone had already written the code for connecting the Mindwave to a computer, and that I just had to find and adjust it to suit my purposes.

The hunt was back on. I went back to digging. I wasn’t able to find anything that attached the Mindwave to p5, the language we had been working in up until now, but I did find several options that used Processing (which is similar to p5). Not being constrained to using p5, I decided to eschew it in favour of Processing, since that was where most of the work of interfacing the headset with the computer had been done.

Finally, after several attempts to implement and understand the code I had found, I came across a post on the Processing forums. A user had had a problem similar to mine (they needed code to parse the information coming through the port the MindWave was using) and ultimately they solved it, not with a large custom library, but with only a few lines of code.
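A condensed sketch of that approach is below; the byte codes follow NeuroSky’s published ThinkGear stream format, while the port selection, baud rate, and the unchecked checksum are simplifications of this illustration:

```java
import processing.serial.*;

Serial port;
int attention, meditation; // eSense values, 0-100

void setup() {
  // Port selection and baud rate are machine-specific assumptions.
  port = new Serial(this, Serial.list()[0], 57600);
}

// Block until the next byte arrives, then return it (0-255).
int nextByte() {
  while (port.available() < 1) delay(1);
  return port.read();
}

void draw() {
  // A ThinkGear packet: 0xAA 0xAA sync, payload length, payload, checksum.
  if (nextByte() != 0xAA || nextByte() != 0xAA) return; // resync on garbage
  int len = nextByte();
  if (len > 169) return; // not a valid payload length
  int[] payload = new int[len];
  for (int i = 0; i < len; i++) payload[i] = nextByte();
  nextByte(); // checksum byte (not verified in this sketch)

  for (int i = 0; i < len; i++) {
    int code = payload[i];
    if (code == 0x04) attention = payload[++i];       // attention value
    else if (code == 0x05) meditation = payload[++i]; // meditation value
    else if (code >= 0x80 && i + 1 < len) i += payload[i + 1] + 1; // skip multi-byte rows (e.g. raw EEG)
    else i++;                                         // skip other single-byte rows
  }
  println("attention " + attention + " / meditation " + meditation);
}
```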

This was the breakthrough I needed. This was a simple solution that allowed me to quickly comprehend the work necessary to translate the sensor readings to useful data. The code, when implemented in Processing, worked. For the first time I was able to connect the Mindwave to software besides the apps it arrived with. I ported over the code from p5, rewrote it as required for Processing, and connected the sensor. Below are the first images I generated using my brainwave data:

The first test images generated in Processing

From here it was a matter of experimentation. I quickly realized that the more complicated the drawn object, the slower the program would run. This, coupled with the slow update rate of the hardware, led to the decision to make the final MindPainter a sort of slideshow, showing a new image every two seconds based on the most recently available data. I experimented with placing the sensor data in various parts of the draw equation until I settled on an arrangement that reliably generated interesting shapes without annihilating the frame rate.

This experimentation gave way to some quality-of-life features, implemented both to make my experimentation easier and to give an end user something they might want. First I implemented a note displayed at the top of the screen when there was a connection issue, as I was often unsure whether the shape I was seeing was due to a bad hardware connection or to my own low brain activity. Next I added a keystroke (‘n’ in the current iteration of the software) that displays the current brain readings, as I (and future users, I expected) wanted to see what kind of readings were generating the pictures.

Finally, since I still wanted to explore the software as a meditation tool from the earliest concepts, I implemented the meditation mode. One arrangement of the sensor readings had not generated the kind of psychedelic shapes I was looking for, but it had generated a consistent, highly reactive starburst shape based on the Attention and Meditation values. Staring at the centre of the starburst had a calming effect on me, and it seemed that as the readings rose, so did the size of the shape. I wrote a function to swap between this arrangement and the default one.


Images painted by the final software

Next Steps

If I expand this project (and I plan to) I would like to implement a user interface that allows wearers to swap the parameters they are tracking on the fly. With ten sensor readings and dozens of possible places to put them in the equation, I would love to give a user the freedom to experiment with where the readings go and what they do. In this way a user would gain a far finer degree of control over the kind of images they generate than they currently have, and perhaps allow for an increased sense of ownership and artistic pride in the images they create.

I envision this project as an installation where the images generated are projected on a sphere or the walls all around the subject, and where sensor readings also cause chimes to be struck or music to be played. I envision a complete audio-visual sensory experience generated by the user’s brain.

I would also like to optimize the software so it runs smoothly, and even go so far as to implement animation so the image is constantly in motion.

Context

As the ability to measure brainwaves has become accessible, it has been folded into the broader trend of tracking biometrics. Companies such as NeuroSky and Muse offer products to measure and track brain activity (as well as some ancillary apps, such as games). These companies are relatively innocuous, promising nothing more than the ability to track your own numbers, like a FitBit for the brain. Other products, like the NeuroProgrammer 2 referenced in this article, use similar technology and flowery language to back up spurious claims of medical benefits, possibly to the detriment of their customers.

In making MindPainter I hope to fall into the former camp. I make no claims about the benefits of the program. By generating works of art rather than graphs or spreadsheets of numbers, I hope that a user of MindPainter might develop a relationship with, and a sense of ownership and pride in, their brain readings. If the goal of Muse, NeuroSky, and mindfulness apps such as Calm and Headspace is to get people thinking about and familiar with the activity of their minds, then MindPainter is a tool in the same tradition.

Special Thanks

Built with the NeuroSky MindWave Mobile. This project was inspired by the work of Arturo Vidich, Sofy Yuditskaya, and Eric Mika of Frontier Nerds and the Mental Block project: http://www.frontiernerds.com/brain-hack

This project would not exist if not for some excellent advice from Nick Puckett.

While I didn’t use his code, some insights from the work of Eric Blue pointed me in the right direction.

Code from user precociousmouse from the Processing forums provided the breakthrough I needed. Thank you, whoever you are.

References:

Brainwave Entrainment and Marketing Pseudoscience. (2008, May 29). Retrieved from https://theness.com/neurologicablog/index.php/brainwave-entrainment-and-marketing-pseudoscience/

The Coding Train (2017, February 08). Retrieved November 12, 2018, from https://www.youtube.com/watch?v=f5QBExMNB1I

Experience Calm. (n.d.). Retrieved from https://www.calm.com/

Frontier Nerds: An ITP Blog. (n.d.). Retrieved from http://www.frontiernerds.com/brain-hack

Meditation Made Easy. (n.d.). Muse. Retrieved from https://choosemuse.com/

Meditation and Mindfulness Made Simple. (n.d.). Retrieved from https://www.headspace.com/

Mindwave Mobile 2 (n.d.). Retrieved from https://store.neurosky.com/pages/mindwave

MindWave Mobile: User Guide [PDF]. (2015, August 5). NeuroSky Inc. Retrieved from http://download.neurosky.com/support_page_files/MindWaveMobile/docs/mindwave_mobile_user_guide.pdf

What are Brainwaves? (n.d.). Retrieved from https://brainworksneurotherapy.com/what-are-brainwaves

Project GitHub: https://github.com/npyalex/mindPaint

More photos: Album on Google Drive

Storybook: The Princess & The Dragon


Maria Yala & Nick Alexander


Storybook is a collaborative storytelling game built using p5, in which each player helps generate a random page of a storybook. Storybook asks players to tell the tale of a Princess and her dragon. Each player needs a mobile phone and internet access; in this multi-player game, each phone generates one page of the story. To create a page, a player picks one image from a random set of four and a caption from another random set of four. Story pages can then be arranged in multiple ways to create different tales about the Princess and her dragon; how the players arrange their pages influences the way the story is read.

Link: https://webspace.ocad.ca/~3174283/ (Optimized for mobile at this time)

Git repository: https://github.com/npyalex/C-C_Storybook


Brainstorming & ideation document

Our first discussion was one of scope and simplicity. The most interesting projects presented to us as examples were ones that used phones as mechanisms to facilitate a face-to-face personal interaction, rather than as the core experience. We brainstormed and explored several ideas, and Storybook was the one we both felt was the most fun, versatile, and achievable.

First wireframe of the project

We sketched out a rough wireframe of the screen-by-screen user flow. Even at this early point we were considering experience design, and included a “blank” screen as a sort of pause in the flow. Once all the players were ready, we thought, the screen could be tapped to reveal the final page chosen and thus preserve the surprise.

We also discussed how best to generate random images, display them in quadrants, and store them for retrieval at the end, including whether having multiple canvases on the screen would help. This wireframe captures our decision to go with a single large canvas; the black square represents the single canvas, which draws all the artwork.


First working version of the screen progression

This is the first complete prototype of the user progression. We generated multiple screens by creating a “screenState” variable and writing if-statements that drew screens based on the number currently assigned to screenState. Every choice the user made increased the variable by one: if screenState was 0, the user saw the first screen; if it was 1, the second screen; and so on.
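In miniature, the pattern looked something like this (the screen contents here are stand-ins for the real drawing code):

```javascript
// A minimal version of the screenState pattern described above.
let screenState = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
  textAlign(CENTER, CENTER);
  textSize(24);
}

function draw() {
  background(255);
  if (screenState === 0) {
    text("Tap to start", width / 2, height / 2);     // first screen
  } else if (screenState === 1) {
    text("Choose an image", width / 2, height / 2);  // second screen
  } else {
    text("Choose a caption", width / 2, height / 2); // and so on
  }
}

function mouseReleased() {
  screenState++; // every choice advances the flow by one screen
  return false;
}
```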

 


Story images and caption generation and randomization

Once a player clicks Start to begin the game, they are presented with a random set of four images to choose from and a random set of four captions. To draw these, we first had to determine how to place the images on the screen. We decided to split the canvas into four quadrants. Using (x, y) coordinates and an array of 12 images, we generated 4 random integers based on the length of the array and used them as indices to randomly choose images. The images were then drawn at the (x, y) coordinates (0, 0), (width/2, 0), (0, height/2), and (width/2, height/2). Once we determined that the randomization was working correctly and that the test images were showing up in the correct quadrants, we moved the images again to center them inside the quadrants.
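Below is a self-contained sketch of that quadrant logic; numbered placeholder tiles stand in for the twelve story images, and drawing four distinct indices is an assumption of this illustration:

```javascript
// A sketch of the quadrant randomization described above.
let images = [];
let choices = [];

function setup() {
  createCanvas(400, 400);
  for (let i = 0; i < 12; i++) {
    const g = createGraphics(200, 200); // placeholder "story image"
    g.background(200 + i * 4);
    g.textAlign(CENTER, CENTER);
    g.textSize(48);
    g.text(i, 100, 100);
    images.push(g);
  }
  // four distinct random indices into the 12-image array
  while (choices.length < 4) {
    const r = floor(random(images.length));
    if (!choices.includes(r)) choices.push(r);
  }
}

function draw() {
  // one image per quadrant: (0,0), (width/2,0), (0,height/2), (width/2,height/2)
  const pos = [[0, 0], [width / 2, 0], [0, height / 2], [width / 2, height / 2]];
  for (let i = 0; i < 4; i++) {
    image(images[choices[i]], pos[i][0], pos[i][1], width / 2, height / 2);
  }
}
```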

To save the image and caption clicked, we created global booleans (q1Clicked, q2Clicked, q3Clicked, q4Clicked) that we set to true whenever a click was registered in a particular quadrant. Using if-statements and our storyStage variable, which indicated which stage of the game the player was in, we could determine which image and caption to save, as each image and caption was placed in a particular quadrant. Once the storyStage advanced, the global variables were reset to false.

At this point we merged the capture-and-randomization script with the screen-progression script to create storybook.js. This choice ultimately led to some problems, as we had duplicated work: we had two different methods of tracking clicks, and their overlap caused an issue on iPhone that we only uncovered in the last days of the project. The screen-progression script used the mouseClicked function, which isn’t compatible with iPhone. We solved this problem by changing mouseClicked to mouseReleased, which is compatible, and using “return false;” to override any browser presets.
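As a small sketch of the fix, using the variable names from the write-up (the quadrant hit-testing here is an assumed implementation):

```javascript
// Hypothetical sketch of the click-saving fix described above.
let q1Clicked = false, q2Clicked = false, q3Clicked = false, q4Clicked = false;
let storyStage = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
}

// mouseClicked doesn't fire on iPhone; mouseReleased does.
function mouseReleased() {
  const left = mouseX < width / 2;
  const top = mouseY < height / 2;
  if (top && left) q1Clicked = true;
  else if (top) q2Clicked = true;
  else if (left) q3Clicked = true;
  else q4Clicked = true;
  storyStage++; // advancing the stage is where the booleans get reset again
  return false; // overrides the browser's default touch behaviour
}
```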

Once we determined that the new storybook.js script was working correctly, we tested the game on our own mobile phones.


 


Princess character study

Both to save time and to evoke a cartoon children’s-book aesthetic, we made the decision to hand-draw our artwork. The Princess needed to be expressive, with enough character of her own to feel unique while fitting any caption. She is inspired by Allie Brosh’s Hyperbole and a Half, whose sassy, expressive cartoon heroine is a simple, messy design, and by the ultra-flexible (and ultra-disposable) stickmen of the (not safe for work!) webcomic Cyanide and Happiness.


Dragon character study

The Dragon needed to be simple in order to match the Princess and also be easy to draw consistently multiple times. Having him be more snake than lizard was an early choice that never wavered. He is inspired by the dragon from the classic Paper Bag Princess by Munsch and Martchenko, and Adventure Time’s postmodern takes on fantasy creatures.


Final character designs

The Princess and the Dragon were chosen for their versatility. The trope of the dragon-stealing princess is common in myth and children’s stories, and the assumptions players have about it inform their preconceptions of the experience and the choices they make. Both characters are expressive but simple enough that any player choice of caption can be reflected in the matched artwork.


Layout for hand-drawing artwork and early pencils


Inking rough pencils

The 20-Screen moment:


https://drive.google.com/file/d/14S6m9T4wN55uvqR0ASgO_w656fI0FcAh/view?usp=sharing

https://drive.google.com/open?id=11_RLQ6hWze052pbPv2o9sjyVLCtlB7v8

Project Context

The project was inspired largely by illustrated children’s stories and collaborative storytelling games, as well as the common story elements of dragons and princesses.


The Paper Bag Princess

Familiarity with children’s stories will likely help Storybook players find meaning in it (in particular, we thought of The Paper Bag Princess by Robert Munsch and Michael Martchenko when creating The Princess and the Dragon). The clarity and earnest nature of stories directed at young children provide a framework that lets players accept the necessary simplicity of the semi-randomly generated pages they create, and perhaps primes them to look for depth and meaning in Storybook’s brief sentences and simple pictures.

We also considered contemporary collaborative storytelling games. Many games exist in which players each make a choice from options they control (such as Joking Hazard, Cards Against Humanity, and Dungeons & Dragons) and combine their choices to make a narrative from scratch. This is a proven format (consider that Cards Against Humanity has inspired an entire genre of social card game), and it was not a large leap to apply it to the Storybook project.


Joking Hazard/Cyanide and Happiness

Storybook also draws influence, both visual and practical, from sequential art – webcomics in particular. Visually it is inspired by the clear, attractive, and comedic styles of Hyperbole and a Half and Cyanide and Happiness. (The game Joking Hazard is the brainchild and spinoff of Cyanide and Happiness, so the connection is strong.) These are both successful webcomics with wildly different tones, whose artwork styles are simple but expressive stick-people. Scott McCloud, the comics theorist, explains that the more detailed a cartoon character is, the less a reader will see themselves in it. (Consider the ubiquity of the smiley-face or emoji, and ask why they rarely have much detail.) Storybook follows suit and uses simple figures with large, simple features so players will see themselves in and imprint upon the characters, and thus read meaning into a story where none exists. Furthermore, as Storybook is meant to be played on a flat surface with images arranged sequentially, it could be considered a sort of collaborative electronic comic strip: a webcomic that doesn’t exist on the web.


McCloud on simple cartoon faces

Where Storybook differs from its inspirations is that it asks players to collaborate on a single narrative without a stated beginning or end, and invites them to consider what arrangement does to a story. It is not in itself a game with a goal. There is no beginning, end, or stated size for the story, nor a required number of players (though one player with one phone might not enjoy it all that much). Additionally, it is not necessary that the story be read sequentially left to right, as English-language printed material usually is. The nature of the medium Storybook exists within means that players can arrange and read their stories however they wish, and explore what happens to a story when it is arranged in unconventional ways. What if you played Storybook like a crossword? What if you read it top to bottom? What if you built a grid and looked for the stories that appeared inside? Storybook’s versatility and simplicity allow players to explore the nature of storytelling and discover meaning beyond the designers’ intentions.

References

Brosh, Allie. Hyperbole and a Half, hyperboleandahalf.blogspot.com/.

DenBleyker, McElfatrick, and Wilson. Cyanide & Happiness (Explosm.net), explosm.net/.

McCloud, Scott. Understanding Comics. Harper Perennial, 1994.

Munsch, Robert N., and Michael Martchenko. The Paper Bag Princess. Annick Press, 2018.

Temkin, et al. Cards Against Humanity, cardsagainsthumanity.com/.

Ward, Pendleton. Adventure Time, Cartoon Network, 5 Apr. 2010.
