Synchrobots

By: Amreen Ashraf, Lauren Connell-Whitney, and Olivia Prior

Overview

Partner robots

Figure 1: Our two robots preparing themselves for their first demo in front of a large group of people.

Synchrobots are randomly choreographed machines that attempt to move in synch with each other. The machines receive commands to start and stop their wheels, as well as the length of the pause between movements. Even though the two machines receive the same commands, differences in weight, shape, and environment can cause them to fall out of synch. The machines execute the commands as they travel across the room, sometimes bumping into each other or into walls. The randomness of the movements creates a perceived intelligence, though there is none. When a machine nearly hits a wall but avoids it at the last second, the move can read as very intentional, but it is the human watching who creates these stories. These randomized, in-synch movements give the machines a sense of life and make viewers delight in following what they will do next.

Code

https://github.com/alusiu/experiment-4/

Idea Process 


Figure 2: We had a huge range of ideas from the beginning. We did manage to keep the idea of duality in our final project.

We began ideating by talking about how we could affect machines using outside data, for instance, the number of people passing through the OCAD doors, or the wind at any given moment in Antarctica. We began developing an idea to make bots that would rove around and crash into things. Their construction would be two pieces of steel that could crash together and make a thunderous noise. However, constructing something made to bump into things with force seemed like a tall order, and possibly not one we wanted to tackle just yet.


Figure 3: Quick sketches of how we could install our “thunder bots”.

Next we had an idea to create bots that would find each other in the dark; our Marco Polo bots. This was the idea that we began moving forward on: three bots that would seek and find another “leader” bot, then once it was found, another bot would become the leader. This idea led to a thought about migration and how birds pass around the leadership role so that the whole mechanism can stay strong.


Figure 4: One of our initial process sketches for our Marco Polo bots.


Figure 5: Our workflow written out for our Marco Polo bots.


Figure 6: Our “pseudo-code” hand written out for Marco Polo bots.

Creation Process 


Figure 7: Our prefab bots were simple to put together but made of very cheap parts. One of the DC motor gears ended up breaking inside the casing, so we performed emergency surgery using gears from a 180-degree servo. It was nice to take things apart and see how everything worked.

We began with a trip to Creatron, where the array of wheels and servo motors was vast and costly. So we ended up bringing home a lovely red chassis kit that came with two rear wheels that attached to two DC motors and a stabilizing front wheel. It was a great place to start and getting the wheels working was proof that our little bot could move if we wanted it to.

Video 1: A clip showing the robot moving with both DC motors connected to the feather micro-controller. 

Coding, Fabricating, & Development

Networking Platform & Iterating on Ideas

Our first step in development was to find a resource for networking our robots that was not PubNub. Unfortunately, the PubNub API was not able to publish data from an Arduino – users were only able to subscribe to data. Since our initial idea prior to Synchrobots was for three robots to talk to each other by sending coordinates to the others, we needed a platform that would allow us to both send and receive data. After some research, we decided to pursue adafruit.io to network our robots.

Adafruit.io allowed us to easily send and receive data through different feeds. The primary issue adafruit.io presented was that an account could only send and receive data up to 30 times a minute. This meant we could not continuously send or receive data the way we could have with PubNub.

This presented some issues for our initial idea: we wanted our robots to be talking with each other continuously. Because we were developing for three robots, each one could only send and receive data ten times a minute. We discussed it amongst ourselves and decided to develop an idea that required two robots that were not continuously sending and receiving data. As well, we decided that if we changed our idea we would not scrap everything and start from the beginning; we would be able to re-use most of the code and thinking from our previous handwritten pseudo-code.
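
To make the feed model concrete, here is a minimal, illustrative sketch of pushing a value to and reading it back from an adafruit.io feed via its REST API. Our actual implementation runs on the ESP32 feathers using the Adafruit IO Arduino library, so treat this JavaScript version as a sketch only; the username, key, and feed name are placeholders.

// Illustrative only: adafruit.io's feed model via the REST API.
// Our robots use the Adafruit IO Arduino library on the feathers instead.
// AIO_USERNAME, AIO_KEY, and the "robot-times" feed are placeholders.
const AIO_USERNAME = "your-username";
const AIO_KEY = "your-aio-key";
const BASE = `https://io.adafruit.com/api/v2/${AIO_USERNAME}/feeds/robot-times/data`;

// Publish one value (each request counts against the 30-per-minute limit).
async function sendRandomPause() {
  const pauseMs = Math.floor(Math.random() * 4000) + 1000; // 1-5 second pause
  await fetch(BASE, {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-AIO-Key": AIO_KEY },
    body: JSON.stringify({ value: String(pauseMs) }),
  });
}

// Read the most recent value, as each robot does before its next move.
async function readLatestPause() {
  const res = await fetch(`${BASE}/last`, { headers: { "X-AIO-Key": AIO_KEY } });
  return Number((await res.json()).value);
}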

Additionally, at this time we had purchased our prefabricated platforms for the robots. The movement of these bots was neither linear nor consistent. We decided that our initial idea of “Marco Polo bots” would not reflect well with these movements, and decided to pursue an idea that would allow the robots to move more freely.

This brought us to the idea of Synchrobots: two robots that would receive the same data set, and execute at the same time. We were interested in how the robots would respond to the same data, and how the presentation of two robots would bring up connotations of partners, a couple dancing, or friends moving around.

At the start of development, we found that adafruit.io has a feature that allowed us to view a visualization of the data being sent in real time. This was a very useful tool for our development since it let us see what values were being sent by each feed while the robots were moving, without requiring our microcontrollers to be connected to our machines for serial logs.


Figure 8: Adafruit dashboard showing a data visualization of the randomly generated times for our robots.

Issues with DC motors and Servos

Our prefabricated platforms came with two DC motors and wheels each. When purchasing, we were certain that the motors in the casings were servos and were surprised to find they were DC motors in servo casings. The DC motors would have worked well, but they did not have a data pin. This posed problems for us, as we needed to be able to command both wheels independently through the microcontroller. As well, the motors were very cheaply made and one of the gears broke fairly early into development. We took the motor apart and attempted to fix the gear using the same part from a servo we were not using.


Figure 9: We attempted to fix the gear of our DC motor.

Additionally, one of our feathers stopped working on us while attempting to control the DC motors; we suspect this was a power-management issue that we did not know could happen. After a day’s worth of research, a feather passing away, and attempts at controlling the DC motors with our feather microcontrollers, we decided to get servo motors instead.


Figure 10: One of our feather microcontrollers stopped working during development.

Unfortunately, this did not solve all of our problems. The servos were installed in a way that they were “flipped”. When executing code that would turn them on and off, the wheels would turn in opposite directions. The next issue to tackle was finding out how to rotate the servos in the same way. We finally found a solution that starts the wheels in opposite positions: one starts at 360 degrees and goes to 0, and the other goes from 0 degrees to 360.

Video 2: A clip featuring both of our wheels spinning in the same way, and pausing in between times sent to the robots. 

Set up

Our setup for the microcontrollers was to have three feathers in total: two robots with feathers that would receive the data, and one feather connected to wifi sending the data. We designed the process as follows:


Figure 11: Synchrobots process diagram.

We wanted to use randomly generated times so that the robots would be anthropomorphized as viewers watched them. Random times reduce the chance that viewers engaging with the robots for a long time could pick up on a pattern. We were also attracted to this idea because, throughout the project, we had been talking about the character that machines take on through real-life obstacles and variances, which became apparent in our little bots as we constructed them: one wheel was slightly wobbly, one robot was heavier because of a different battery pack – all small details that would make the real-life outcome slightly different between the two.


Figure 12: Deep in thought with the code and circuit construction with the servos.

Next we began soldering the circuit, a simple one at that. Something we also had in our plan initially was to add blinking LEDs to the robots and possibly a robotic chatter from a speaker, as if they were talking to one another while they were roving. Near the end of the process we realized that it might have been too much for this one project and decided to keep it for a future iteration.


Figure 13: Soldering the circuit; picky, fun, detail work.

Fabrication 

Once we had our components soldered, we discussed the casing for the microcontrollers. Our debate was how to keep the robots looking industrial while also adding features that elevate the anthropomorphic qualities we wanted. We thought that an entire casing that hid the wires and the microcontrollers would create too much of a connotation of critters, while leaving the bare-bones hardware exposed would look unintentional.

We had some spare copper tubing and LED fairy lights around the studio. We experimented to see how adding some more industrial material onto the robots would change the experience of the machines roving around the room. We placed the copper tubing in the front with LED lights and found that it resembled an eye. This was the perfect middle ground between robot and critter. To complement the front, we cased some of the wires in copper tubing at the back.


Figure 14: Assembling the robots with the copper tubing as casing for the wires, and as an “eye” in the front

We had two types of LED lights, one “cool” strip and one “warm” strip. To create small differences between the two robots, we decided to have one robot house the cool strip and the other house the warm strip.


Figure 15: Experimenting with the LED lights and copper tubing on the front of the robot

Final

Hardware

  • 3 X Adafruit feather ESP32
  • 2 X Feetech FT-MC-001 Kits (chassis)
  • 4 X Servo Motors
  • 2 X Protoboards
  • 2 X Rechargeable battery packs


    Figure 16: Synchrobots ready and waiting.

GO TEAM!

Figure 17: Team of Amreen, Olivia and Lauren holding both finished Synchrobots.

Project Context

A work that provided a larger context for this piece is Lichtsuchende: Exploring the Emergence of a Cybernetic Society by David Murray-Rust and Rocio von Jungenfeld, a project that created a community of robotic beings that interacted through light. The project examined how we as humans can design for other beings, namely machines. It is a beautiful project that went through many iterations of how the robots reacted in the space, learning from their behaviours and group patterns.


Figure 18: Lichtsuchende: the robot community in action

Final Thoughts

Demoing our Synchrobots was a success. They seemed as if they were dancing at some points, they roved around separately, and they even crashed into people and walls. It was a wonderful display of humans and machines interacting. People were delighted by them; some didn’t know how to react. It was a similar experience to watching people interact with a baby: some people were overly cautious and timid about the interactions, while others actively and playfully interacted with the bots.

We received overall positive feedback and great advice on how we could carry our project forward. After the demonstration of our bots, Kate (Hartman) suggested that we could mount a GoPro camera on the wall and have it observe the bots as they move about a room until the battery ran out. This is something we might like to pursue to track the patterns of our bots through time and space.

As we saw from the reactions of our classmates, the movement and the meeting of the bots was a cause for delight. Another suggested direction was to look at LEGO Mindstorms, small bots by LEGO that pair LEGO hardware with preinstalled software. Nick Puckett suggested dissecting electric toothbrushes for their vibration motors, which could lead to creating small “dumb bots”. There were more suggestions to attach a pen or a marker to the bots as they moved around the room, as a look into bot art. This idea of letting the bot create work through movement was interesting, but we had looked at a similar project while researching and decided against it because there were already many such projects; we wanted a free-flowing movement of the bots without a purpose attached to it. The feedback we will implement next, if we do take this project forward, is thinking about interactions between these machines. During this project we explored the concept of interaction using LED strips: we got the LEDs working, but we had not coded how the LEDs would react when the bots interacted. This would be the most crucial point in the further development of the project.

References

  • Šabanović, Selma, and Wan-Ling Chang. “Socializing Robots: Constructing Robotic Sociality in the Design and Use of the Assistive Robot PARO.” Ai & Society, vol. 31, no. 4, 2015, pp. 537–551., doi:10.1007/s00146-015-0636-1.
  • Murray-Rust, Dave, and Rocio von Jungenfeld. “Thinking through Robotic Imaginaries.” RTD Conference, figshare, 20 Mar. 2017. Online. Internet. 26 Nov. 2018. Available: https://figshare.com/articles/Thinking_through_robotic_imaginaries/4746973/1.
  • DeVito, James. “Bluefruit LE Feather Robot Rover.” Memory Architectures | Memories of an Arduino | Adafruit Learning System, 2016, learn.adafruit.com/bluefruit-feather-robot.
  • Gagnon, Kevin. “Control Servo Power with a Transistor.” Arduino Project Hub, 2016, create.arduino.cc/projecthub/GadgetsToGrow/control-servo-power-with-a-transistor-3adce3.
  • McComb, Gordon. “Ways to Move Your Robot.” Servo Magazine, 2014, www.servomagazine.com/magazine/article/May2014_McComb.
  • PanosA6. “Start-Stop Dc Motor Control With Arduino.” Instructables.com, Instructables, 21 Sept. 2017, www.instructables.com/id/Start-Stop-Dc-Motor-Control-With-Arduino/.
  • Schwartz, M. “Build an ESP8266 Mobile Robot.” Memory Architectures | Memories of an Arduino | Adafruit Learning System, 2016, learn.adafruit.com/build-an-esp8266-mobile-robot/configuring-the-robot.


Mess.net

by Tyson Moll, Joshua McKenna, Nicholas Alexander

Audio by Astrolope


GitHub

Overview:
Mess.net is a participatory online installation inviting users to join in on a collaborative work of art, from anywhere in the world. Visitors to the web site are assigned a paint colour and treated to a view of their canvas: a rapidly spinning disc. With the click of a mouse the user will see their colour of paint dropped on the disc and be treated to the mess it makes as it splatters off the surface. As projects are completed users can log in to the gallery and enjoy their handiwork.

 

How It Works:

Mess.net receives painting commands from wireless controllers and transmits the information to a device that creates spin art.

The mechanical aspect drops paint on a spinning wheel using gravity and solenoid valves. The valves are controlled by the Arduino microcontroller, held above the spinning wheel using a custom-built shelving unit. Paint is fed into the solenoid valves via plastic tubing and water sealant tape. The spinning wheel is attached to an industrial motor from a salvaged Sears fan head, which can be operated with three different speed settings. Several cameras are also fixed to the shelving unit to provide video footage of the action.

The digital aspect of the project is split across four nodes: the Arduino code, the Controller code, the Client code, and Twitch streaming. The Arduino code receives commands from the Controller and operates the solenoids as instructed. The Controller code (written with p5.js) receives instructions via PubNub from any number of Clients and forwards the information via USB serial communication to the Arduino. Programmed with the jQuery library, the Client code gives participants the ability to send “Paint” commands to the Controller through PubNub, with selection options for different colours as well as a live stream of the spinning wheel. To livestream the process, we used a program called OBS (Open Broadcaster Software) and three webcams to share a video stream online on Twitch.
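
As a rough sketch of the Client side, a jQuery click handler can publish a “paint” message over PubNub. The channel name, message fields, and keys below are placeholders rather than the names used in our repository.

// Illustrative client-side sketch: publish a paint command over PubNub when
// the paint button is clicked. Channel, message fields, and keys are placeholders.
const pubnub = new PubNub({
  publishKey: "pub-c-xxxxxxxx",
  subscribeKey: "sub-c-xxxxxxxx",
  uuid: "messnet-client",
});

let selectedColour = "red"; // set by the colour-selection buttons

$(".colour-button").on("click", function () {
  selectedColour = $(this).data("colour"); // e.g. "red", "yellow", "blue"
});

$("#paint-button").on("click", function () {
  pubnub.publish({
    channel: "messnet",
    message: { type: "paint", colour: selectedColour },
  });
});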


 

Process Journal:
This project began with a desire to explore the possibilities afforded by the required connective technology (PubNub and JavaScript libraries). We brainstormed a series of concepts and tools we would be excited to work with and looked for overlap, in order to land on a solid project idea we could all be excited about.

From there we knew that we wanted to explore creating an activity that would be fun to join in with others on, from anywhere in the world, that might be improved by not being in proximity. We examined similarly-themed projects including Studio Puckey’s donottouch.org and Studio Moniker’s Puff Up Club, taking elements we liked and spinning them into something new. What was fun about these projects? What made them unique to their medium?


We were inspired by the sense of collaboration on a tangibly satisfying act and the sense of play from the two projects above in particular. We seriously considered several projects (a remote bubble-wrap-popping machine, a model that users would slowly flood drip-by-drip, and a power-sander that users would guide click-by-click through common household objects) while continuing to rapidly-generate other design ideas that fit our constraints.

Early design ideations: a device that would slowly scrape away layers from a scratch-board surface, a remote-controlled saw, a power-sander slowly destroying a shoe, and a cut-your-own-tree holiday diorama

We settled on something with a fun, positive message, but kept the sense of mess and chaos. The basis for what would become Mess.net was in place.

We adjourned to do research and begin to gather materials. We conceived of the project as having three core pillars: the physical build of the apparatus, the coding, and the web-side interface.

We began the build of the apparatus by considering its requirements: it would have to be sturdy enough to hold up a considerable amount of liquid and hold the Arduino components steady and dry.

We considered and discarded several designs before landing on the final version with some help from the incomparable Reza Safaei.


Late design ideations. Here you can see the designs that would inform the final apparatus, as well as some explorations of how to realize the spinning plate.

We built the skeleton of the apparatus by assembling a tall box frame, spaced widely enough to allow us to reach inside and access whatever parts we placed there fairly easily. Knowing that we would want at least two levels of shelving for paint and valve control we drilled multiple guide holes along each pillar – this way we could make adjustments quickly and easily.

We had been debating how best to realize the spinning-wheel portion of the apparatus (we considered a pottery wheel, a belt-and-gear, and a drill-and-friction-driven system among others) when we found a discarded and functional Sears fan from the 1970s. We removed its cage and its fan blades, then made and affixed a wooden plate to its central bit.

The fans of the era appear to have been built with impressive torque; we had hoped that the fan’s adjustable speeds might afford us interesting opportunities in adjusting the speed of the paint, but it was so powerful that we settled on keeping it at the first setting. We spent some time exploring the possibility of adding a dimmer to the fan, but eventually shelved it as being out of scope.

The Arduino component of the project presented new challenges for us, as this was the first project most of us had encountered that required careful power management.

We chose solenoid valves as the best machinery for our purposes, having judged that the code required to control them (a simple HIGH/LOW binary command to open/close the valves) would be simple to send over PubNub. The solenoids required 12 volts to function, far more than the Arduino Micro could supply, so we looked into managing multiple power sources. This led to the inclusion of diodes to protect the circuitry and transistors to function as the switches for the solenoids. Ultimately the Arduino component proved to be among the simplest aspects of the build: once we had the circuitry working for one valve it only needed to be repeated exactly for the other two, and we were correct in judging that a simple HIGH/LOW command would effectively manage the valve. Our first iteration of the circuitry became our final iteration, and when troubleshooting we only ever needed to check connections.


We selected plastic tubing of the same diameter as our solenoid valves. The tubing held its shape strongly – we used a heat gun to straighten it out, at which point it screwed tightly into the valves. It required only a small amount of waterproof sealing tape to make the connection from valve to tubing watertight. We had a tougher time connecting the tubing to the 2L pop bottles we had chosen as paint receptacles for their ease and simplicity. The mouth of the bottles was slightly too wide to connect to the tubing as easily as the valves had. We managed to get the connection between tube and bottle sealed by using a combination of duct tape to hold the tubing tight and sealing tape to keep it watertight.

The process of coding the communication protocol for the device was relatively straightforward; we used example code provided to the class as a backbone for the PubNub communication interface and serial communication between a computer and the Arduino. The only message that we needed to send across the devices was a means of denoting the colour of paint to drop. In order to ensure that paint was dispensed from the solenoid, we implemented a short delay in the duration of the dispense signal. The only other features coded independently of the web interface were two Arduino buttons that could debug the solenoids and a live display of incoming messages from PubNub, two time-saving features for troubleshooting the devices.
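
On the Controller side, the idea can be sketched as follows: listen for incoming paint messages from PubNub and forward a one-byte colour code to the Arduino over serial, which then opens the matching valve for the short dispense duration. The p5.serialport library, port name, and colour codes here are assumptions for illustration, not our exact protocol.

// Illustrative controller sketch (p5.js): forward PubNub paint commands to
// the Arduino as a one-byte colour code. Assumes the p5.serialport library
// and a `pubnub` instance configured as in the client sketch above.
let serial;

function setup() {
  noCanvas();
  serial = new p5.SerialPort();
  serial.open("/dev/tty.usbmodem1411"); // replace with the Arduino's port

  pubnub.addListener({
    message: function (event) {
      if (event.message.type === "paint") {
        const codes = { red: "R", yellow: "Y", blue: "B" }; // hypothetical codes
        serial.write(codes[event.message.colour]); // Arduino opens the matching valve
      }
    },
  });
  pubnub.subscribe({ channels: ["messnet"] });
}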


For the minimum viable product, we wanted the interface to allow the user to select between paint colours and to be able to paint with either red, blue or yellow. As a group, we felt that some sort of visual feedback was needed in the interface to demonstrate to the user that they had painted with their selected paint colour. Originally we proposed an animation that would float above the paint button each time the user clicked. We suggested either a paint drop specific to the colour selected, or a “+1” as an indicator that the user clicked the paint button. Because of time constraints we opted for a counter in the top right corner that would show the total number of paint drops from all users combined.


With another revision on this project we would include a user-specific counter or visual interface element so that the person interacting with the paint button knows exactly how much they are contributing to the artwork, themselves. Additionally we would have an HTML element in the bottom left corner replacing the “Project 4” with text that would update with each plate that was painted.

We developed graphical assets in Illustrator based on a primary colour theme of red, yellow and blue. Viewers would be able to click to adjust their colour and press a magic paint button to deliver a command to our machine. Central to the web interface was a live stream of the wheel in motion. We intended to have an instantaneous video feed of the project, but we encountered an approximately 8 second delay in the streaming process that we believe is a result of the data transmission speed from computer to computer. These assets were all programmed in jQuery.

During development of the web interface we were interested in including a particle system to visually display the colours being submitted live. We discovered that jQuery and p5.js’ canvases seem to conflict with one another in terms of interactability; although there were solutions available to us to remedy the error (e.g. adjusting layering or converting the jQuery elements to p5.js) we were short on time and decided to textually render the live feed of paint drops committed by participants.

 

The solenoids available to us were only rated for use with water, so we were concerned about damaging them if we used paint of a higher consistency. After consulting with local art professional Jordana Heney, we were told that the best options for our purposes would be watercolour or watered-down tempera paint. The expense of watercolour paint precluded its use, so we went with tempera. Later the option of vegetable-based inks and dyes was brought to our attention, which we hadn’t had the chance to experiment with, but would like to in future.

We experimented with paint thicknesses to get a sense of what moved well through the solenoid, left a good colourful impression, and could be consistently reproduced. We settled on a ratio of approximately 1 part paint to 1 part water, give-or-take depending on brand, as being the best for our purposes. Just slightly thicker than water but not so thick as to cause the valves to malfunction, this was the ratio we stuck with through the rest of the project.

Apart from some minor rewrites to the code and UI tweaks, once every pillar was connected the apparatus worked perfectly. We tested several varieties of paper before settling on paper plates as our paint surface, as their porous surface and shape were a good fit for our consistency of paint and the size of our spinner.

After our crit we returned to the apparatus to create multiple variations on the painted plates, in order to better capture the different results our apparatus generated.


 

 

Project Context:

Spin Art was the driving concept behind the machine’s functionality. Although interaction between the device and the wheel is presently minimal, we took great inspiration from the techniques employed in the process of developing such artworks. Callen Schaub, who works out of Toronto creating gallery-standard works, is an excellent example of the practice.

Mess.net aligns with fun, exploratory, tongue-in-cheek internet installation art like donottouch.org and Puff Up Club. The intended experience is to share an out-of-the-ordinary action with people, see what others did, and consider your own action in that light.

It also exists within the same sphere as participatory art installations such as The Obliteration Room by Yayoi Kusama, where guests are given circular stickers to place anywhere in a room, and Bliss by Urs Fischer and Katy Perry, where participants can slowly destroy the artwork to reveal new colours while adding colours of their own. The creators have laid out a framework, but it is the participants who define the actual final visual state of the artwork. The act of participating is the experience of the art – the final outcome is, perhaps, irrelevant.

 

Next Steps:

Based on feedback and testing we would expand this project by experimenting with different inks and receptacle media. Paper plates were something of a stopgap, as was tempera paint; they were choices we made out of necessity keeping time and budget in mind. Given the time we would experiment with multiple media and generate a large volume of work.

Once work is generated we would like to explore arranging it. Seeing many instances of the works juxtaposed might reveal interesting patterns, and playing with the arrangement would be as involved a project as their creation.

We would also like to improve the speed of interactions, add more valves and colours, automate paint reloads, and industrialize the entire process so it can be left unsupervised for long periods of time while still generating artwork.

Gallery:


 

Resources: 

Bliss (n.d.). Retrieved from http://ursfischer.com/images/439609

Controlling A Solenoid Valve With Arduino. (n.d.). Retrieved from https://www.bc-robotics.com/tutorials/controlling-a-solenoid-valve-with-arduino/

Schaub, Callen. (n.d.). Callen Schaub. Retrieved from https://callenschaub.com/

Studio Puckey. (n.d.). Do Not Touch. Retrieved from https://puckey.studio/projects/do-not-touch

Studio Moniker. (n.d.). Puff Up Club. Retrieved from https://studiomoniker.com/projects/puff-up-club

THE OBLITERATION ROOM. (n.d.). Retrieved from https://play.qagoma.qld.gov.au/looknowseeforever/works/obliteration_room/

 

 

FindWithFriends

By Maria Yala

Creation & Computation – Experiment 4 / Networking


FindWithFriends is a collaborative web-app game built using p5.js and PubNub, a data stream network used to send and receive messages between players’ devices. When the game starts, each player is presented with an identical game board made up of a matrix of clickable tiles and a list of words to find; each player is also assigned a random color to differentiate them from other players. Words are arranged on the board in varying directions – forwards, backwards, upwards, downwards, and diagonally. Players can either play collaboratively or competitively. Every time a player clicks on a tile, the tile’s color changes to the player’s assigned color. Once a player finds a word, they can lock their tiles to prevent other players from stealing tiles that they have found. If a player clicks again on one of their own unlocked tiles, it will turn white again. When the ‘lock tiles’ button is clicked, the player’s score is calculated and drawn on the screen.

Background

For this project, I wanted to learn more about creating custom classes in JavaScript as I didn’t have much experience with them in JavaScript before. Additionally, I was drawn to the idea of working visually with objects in a matrix when we went over some of the examples in class. I wanted to challenge myself to learn more about custom classes and nested for-loops.

Ideation & Inspiration

My main inspiration was to initially create something to do with collaborative storytelling, however, since I was thinking of working with grids/matrices, I ended up choosing word games particularly crosswords and word find puzzles. In the end, I settled on the idea of a collaborative word find game where each player was identified by a different color. This was inspired mainly by a word find game in a zine I had created and the game WordsWithFriends.


Step 1 – The Tile Class

I created a custom class to represent the tile objects of the game board. In the beginning I tried both circular and square tiles using the ellipse() and rect() functions. Each tile had the following attributes: x and y coordinates and size dimensions. The Tile class also had a display function that, when called, would draw the tile at the object’s x and y position.
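
As a sketch, the early Tile class looked something like the following (attribute names beyond x, y and size are my own shorthand):

// Sketch of the early Tile class: a position, a size, and a display() method.
class Tile {
  constructor(x, y, size) {
    this.x = x;       // x position on the canvas
    this.y = y;       // y position on the canvas
    this.size = size; // width/height of the tile
  }

  display() {
    fill(255);
    rect(this.x, this.y, this.size, this.size); // or ellipse() for circular tiles
  }
}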

Step 2 – The Matrix

I began by testing out the idea – creating a 3×3 matrix of square objects on the screen. This was done using a nested for-loop that, upon each inner iteration, would create a new tile, passing the x and y positions from the loop to a mapping function to generate a position on the screen. The matrix was restricted to a size of 600 by 600 pixels, and these were the dimensions used to map coordinates.
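
A sketch of that nested loop, assuming p5.js’s map() is the mapping function and using illustrative variable names:

// Sketch of building the board: map row/column indices into a 600 x 600 region.
let tiles = [];
const gridSize = 3; // later 15 for the final board

function buildBoard() {
  tiles = [];
  const tileSize = 600 / gridSize;
  for (let row = 0; row < gridSize; row++) {
    for (let col = 0; col < gridSize; col++) {
      const x = map(col, 0, gridSize, 0, 600);
      const y = map(row, 0, gridSize, 0, 600);
      tiles.push(new Tile(x, y, tileSize));
    }
  }
}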

An image showing testing of a 3×3, 15×15, and 30×30 matrix on a screen portion of 600×600 pixels.


Step 3 – The Letters

I then created a second custom class to overlay letters over each tile. The Letter objects were also created using a nested for-loop iterating over an array of letters. Each Letter object had the following attributes: a letter, and x and y coordinates. The Letter class had one method, a display function that, when called, draws the letter on the corresponding tile. Below is an example of a 3×3 sample array that was iterated over to generate a nested 3×3 array of Letter objects.

var test = ['A','P','J','X','E','I','C','O','W'];
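
The Letter class and the loop that builds it can be sketched like this (row-major indexing into the flat array above is an assumption):

// Sketch of the Letter class and the loop overlaying letters on the 3x3 board.
class Letter {
  constructor(letter, x, y) {
    this.letter = letter;
    this.x = x;
    this.y = y;
  }

  display() {
    fill(0);
    textAlign(CENTER, CENTER);
    text(this.letter, this.x, this.y);
  }
}

let letters = [];
function buildLetters() {
  const tileSize = 600 / 3;
  for (let row = 0; row < 3; row++) {
    for (let col = 0; col < 3; col++) {
      const x = map(col, 0, 3, 0, 600) + tileSize / 2; // centre of the tile
      const y = map(row, 0, 3, 0, 600) + tileSize / 2;
      letters.push(new Letter(test[row * 3 + col], x, y));
    }
  }
}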

An image showing the letters overlaid onto a 3×3 game board


Step 4 – Clicking on tiles

Upon success with drawing the tiles and letters onto the canvas, I began to work on interacting with the objects on the screen so that a tile would change color when clicked. I began by creating a variable pcolor to hold the player’s random color assignment, which is generated when the page is loaded. Using the mousePressed() function, I got the x and y position of the mouse when the user clicked and then passed it to a new method in the Tile class, clickCheck(). This function used the x and y coordinates of the player’s click and the x and y coordinates of the tile, calculating the distance between the two to determine whether the player had clicked within the tile’s radius. If the click was within the radius, the color of the tile would change from white to the player’s color. Here I also updated the Tile class, adding the clickCheck() function and color attributes, i.e. r, g, and b for RGB color mode. The nested for-loop that created the array of Tile objects was then updated to initially create tiles as white. Initially I was using mouseClicked() but changed it to mousePressed() because during testing I found that it worked on a laptop but not on the iPad.
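
The updated Tile class and click handling can be sketched as follows; the field and parameter names are illustrative and may differ from the repository:

// Sketch of the updated Tile class: tiles start white, and clickCheck() uses
// the distance between the click and the tile to decide whether to paint it.
class Tile {
  constructor(x, y, size) {
    this.x = x;
    this.y = y;
    this.size = size;
    this.r = 255; this.g = 255; this.b = 255; // tiles are created white
  }

  display() {
    fill(this.r, this.g, this.b);
    rect(this.x, this.y, this.size, this.size);
  }

  clickCheck(mx, my, r, g, b) {
    if (dist(mx, my, this.x, this.y) < this.size / 2) {
      this.r = r; this.g = g; this.b = b; // paint with the clicking player's colour
    }
  }
}

function mousePressed() {
  // pr, pg, pb: this player's randomly assigned colour (pcolor)
  for (const t of tiles) t.clickCheck(mouseX, mouseY, pr, pg, pb);
}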

An image showing testing of clicking on tiles to change their color, with color randomly updating upon page refresh


Step 5 – Adding another player

Once basic game functionality was working for one player, I began to integrate PubNub so as to allow for multi-player functionality. I updated the mousePressed() function to publish a message to PubNub, and upon receipt of the message back in the readIncoming() function, clickCheck() would be called. The message passed to and from PubNub carried information about the mouse x and y coordinates and the player’s color. These were then passed to the clickCheck() function, which would update the tiles accordingly.
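
A sketch of that flow: the click is published rather than applied locally, and every player (including the sender) applies it when it arrives back from PubNub. The channel and message field names are guesses.

// Sketch of the multiplayer flow: publish the click, apply it on receipt.
function mousePressed() {
  pubnub.publish({
    channel: "findwithfriends",
    message: { x: mouseX, y: mouseY, r: pr, g: pg, b: pb },
  });
}

function readIncoming(event) {
  const m = event.message;
  for (const t of tiles) {
    t.clickCheck(m.x, m.y, m.r, m.g, m.b); // every player applies the same click
  }
}

// Wiring: pubnub.addListener({ message: readIncoming });
//         pubnub.subscribe({ channels: ["findwithfriends"] });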

An image of a game screen showing a 2 player game where each player is a different color


Step 6 – Testing Tile Size When Matrix Size Increases

I changed the test array of letters (as shown below), switching from the 3×3 grid to a 7×7 grid, to begin testing how the grid would look with words placed in different directions on the board, and tested adjusting the size of the tiles so that the background would be covered.

Figuring out correct size for the tiles on a new, and larger game board


Testing the game using a game board with circular tiles and 3 players


I ended up removing the circular tiles as square tiles were more visually pleasing because they didn’t overlap with the other tiles.

Step 7 – The “Steal tiles” feature

Here I updated the Tile class, adding 3 new attributes: ‘c’, a string to hold the color id of the tile (e.g. white would be 255255255); ‘isLocked’, a boolean to check whether a tile is locked or not; and ‘isWhite’, a boolean to check whether a tile is white or colored. I used the ‘isWhite’ variable to detect clicks on tiles; a tile is created as white, and when it is clicked, its color is changed and this variable is set to false. When a user clicks again on a tile that has already been clicked, I compare the tile’s ‘isWhite’ value and its current color id to determine whether it is being stolen or the click is simply an undo. If it is an undo click, the color reverts to white; if not, another player is stealing the tile. I had trouble implementing the undo click because I was calling the clickCheck() function twice, i.e. in my mousePressed() and again in my readIncoming() functions. This caused the color of the tile to change from the player’s color to white and then remain white. I solved this by removing the call to clickCheck() in the mousePressed() listener.
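
The undo-versus-steal decision can be sketched like this, following the description above; the helper name and colour id strings are hypothetical:

// Sketch of the undo-vs-steal logic. 'c' holds a colour id string such as
// "255255255" for white; playerC is the clicking player's colour id.
function resolveClick(tile, playerC, r, g, b) {
  if (tile.isLocked) return;               // locked tiles cannot change hands
  if (tile.isWhite || tile.c !== playerC) {
    // a white tile, or a tile owned by another player: take (or steal) it
    tile.r = r; tile.g = g; tile.b = b;
    tile.c = playerC;
    tile.isWhite = false;
  } else {
    // clicking your own tile again is an undo: revert to white
    tile.r = 255; tile.g = 255; tile.b = 255;
    tile.c = "255255255";
    tile.isWhite = true;
  }
}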

An image showing testing the “undo click” feature and “steal tiles” feature

Step 8 – A “Lock tiles” feature

I used the ‘isLocked’ boolean to prevent a tile from being stolen by another player. I also added a button on the screen that, when clicked, would lock all the tiles that had the same color as the player. To do this I created 3 new methods: lock(), a function to pass the player color to the tile’s lock function; updateLock(p), a function to update the other players’ screens, locking all tiles belonging to a particular color; and pubLock(), a function to publish a message indicating a lock has occurred. I also added a new boolean, ‘lockPressed’, that would be used to determine what kind of message was being sent, i.e. a normal message or a lock message. If the lock id in the message was 0 then the message was a normal message; if it was 1 then it was a lock message, and the readIncoming() function would call updateLock(p) for the player who initiated the lock.
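
Putting the pieces together, the lock flow can be sketched as below; the message fields, channel name, and colour ids are assumptions, and readIncoming() here extends the earlier sketch to branch on the lock id:

// Sketch of the lock flow: lock id 0 means a normal click, 1 means a lock.
// myColourId is assumed to hold this player's colour id string.
function lockTiles() {
  for (const t of tiles) {
    if (t.c === myColourId) t.isLocked = true; // lock all of my tiles locally
  }
  pubnub.publish({
    channel: "findwithfriends",
    message: { lock: 1, c: myColourId },       // tell the other players
  });
}

function readIncoming(event) {
  const m = event.message;
  if (m.lock === 1) {
    updateLock(m.c);                            // lock that player's tiles
  } else {
    for (const t of tiles) t.clickCheck(m.x, m.y, m.r, m.g, m.b);
  }
}

function updateLock(colourId) {
  for (const t of tiles) {
    if (t.c === colourId) t.isLocked = true;
  }
}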

Step 9 – Final 15×15 matrix & Game Text (Hints)

I settled on a 15×15 grid for the final game and using a word find puzzle from the Huffington Post, I created a new array to hold the matrix. Text hints were drawn to the side of the game board with a ‘lock tiles’ button underneath. When choosing the theme for the puzzle, I wanted to pick a topic that was universal and a little controversial so I ended up in the realm of politics and Donald Trump with the “Who has Trump offended?” puzzle from Huffington Post. The 15×15 grid size was chosen as it was the best size that allowed legibility and precise clicks on a tablet and laptop using a fixed portion of the canvas.

“Who Has Trump Offended?” puzzle from The Huffington Post


The final 15×15 game board matrix array


The final 15×15 game board matrix


Step 10 – Adding a Score

I updated the code, adding a points array that would be mapped to tiles the same way that the letters were. The points system I created was that tiles in words positioned forwards or downwards were worth 1 point each. Tiles in words going backwards were worth 2 points each. Tiles in words positioned diagonally were worth 3 points each. Tiles in the hidden word were worth bonus points of 2 points each. I added a global score variable to calculate a player’s score based on locked tiles. Tiles that were not part of the words were each assigned 0 points. Calculation was triggered when the lock button was clicked. This score was then drawn to the screen in large font and colored in the player’s color.
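
The tally that runs when the lock button is clicked can be sketched as follows, assuming a points array laid out the same way as the letters (0 for filler letters, 1 to 3 for word letters, plus the hidden-word bonus described above):

// Sketch of the score calculation over the player's locked tiles.
function calculateScore() {
  let score = 0;
  for (let row = 0; row < 15; row++) {
    for (let col = 0; col < 15; col++) {
      const t = tiles[row * 15 + col];
      if (t.isLocked && t.c === myColourId) {
        score += points[row * 15 + col]; // points mapped to tiles like the letters
      }
    }
  }
  return score; // drawn on screen in large font, in the player's colour
}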

The points matrix for the final 15×15 game board


Testing the score calculation – PRESS is 5 points, HISTORIANS is 10 points


Testing the game

Video link of test from the image below – Testing the game with 2 players playing on iPad & Laptop


End result of the test between two players


Presentation (Setup & the experience )

For the presentation, I decided to use iPads instead of a combination of iPads and laptops; this was done mostly to ensure mobility, as I didn’t want the players to be tied to one place. Although I had provided 5 iPads, I anticipated that people might form teams when playing and would pick up the iPads or move about with them. I did use one laptop, hooked up to a projector, so that onlookers would not be left out of the FindWithFriends experience. Projecting the game board on the wall ended up being beneficial to the presentation, heightening the game experience as people watched the tiles change color. Ultimately, the game experience changed when players realized they could steal each other’s tiles; once words had been found, they proceeded to try and see who could get the most colored tiles on the screens. Below are some video and images from the presentation.



Video link to the presentation

Feedback & Future Plans

Some future adjustments to the game I would like to make would be to add the other players’ scores onto the individual screens so as to heighten the competitive aspect. I would also like to explore turning this into an installation piece. This is inspired by feedback from the cohort and the experience of how the game turned from a simple word find into a different game when the players were notified that they could “steal” each other’s tiles. It shifted from collaborative play to competitive play when I informed them of the lock button’s functionality. Comments were also given about the choice of theme – Trump & politics. While playing, some players would spontaneously shout out things like “I found humanity” or “I’m stealing immigrants”. The words being found have the potential to make people uncomfortable, and I would like to explore this further by perhaps playing with different contexts. It was also suggested that, since the game board was projected onto the wall, a new interaction could be to have players pick tiles directly on the wall. This is also something that I would like to explore in the future.

Links:

Code on github – FindWithFriends

Reference Links

https://github.com/DigitalFuturesOCADU/CC18/blob/master/Experiment%204/P5/pubnub/06A_LOCAL_ONLYcommonCanvas_animSpeed_inertia/sketch.js
https://randomcolor.lllllllllllllllll.com/
http://nikolay.rocks/2015-10-29-rainbows-generator-in-javascript
https://p5js.org/examples/color-color-variables.html
https://stackoverflow.com/questions/42101752/styling-buttons-in-javascript

https://p5js.org/reference/#/p5.Element/parent

 

 

 

 

 

 

 

(manufactured) realities

Project by: April De Zen, Veda Adnani and Omid Ettehadi
GitHub Link: https://github.com/Omid-Ettehadi/Manufactured-Realities


Figure 1.1: Introduction screen, featured on projection screens and mobile devices

Project overview
“Falsehood flies, and the Truth comes limping after it,” Jonathan Swift
(manufactured) realities is a project created to step out of the status quo and truly evaluate whether our beliefs are based on facts. In order to do this, the team selected six news stories, three true stories, and three stories that were released and later on retracted or debunked by credible news organizations. The increase in conflicting information has fuelled much discussion. ‘Our inability to parse truth from fiction on the Internet is, of course, more than an academic matter. The scourge of “fake news” and its many cousins–from clickbait to “deep fakes” (realistic-looking videos showing events that never happened)–have experts fearful for the future of democracy.’ (Steinmetz, Time Magazine, 2018) The six articles are presented, and after each, the participants will be given a chance to submit a vote on whether they believe it or challenge it. Once all votes are in, the servo device will show the results of the poll as the projector reveals whether the story is true or fraudulent. Once all six questions are answered, we end the exercise on a result page which will show the overall accuracy of the group and also the accuracy of each question.


Figure 2.1: Veda, Omid and April on presentation day
Figure 2.2: (manufactured) realities on display, projection in back and servo device in front

Intended context
How we receive information is more complex than ever before. It used to be as simple as picking up a book or a newspaper to update yourself on the news, current events or specialized educational materials. These media sources are held to rigorous ethical standards, and if they ever breach this code of conduct, a retraction must be printed and released to the public. The more retractions, the less credible the publication becomes. Nice and simple. In 2018, this simplicity has been turned on its head, all thanks to the internet. Nowadays, we are constantly overwhelmed with information: some of it playful and useless, some educational and enlightening, but some streams of information are created only to conflict with and confuse the public. With all of this content being released hourly to various public channels, there is more emphasis on releasing the information first and less concern about releasing accurate information. There has been a shift from reading credible sources by publishers to consuming information from our favourite ‘content creators’. These new creators of content are not bound by any rigorous code of conduct and simply publish what they believe to be true. They also share articles with their subscribers and/or followers, further amplifying the story without knowing (or caring) if it is in fact credible. ‘A false story is much more likely to go viral than a real story.’ (Meyer, The Atlantic, 2018) Media awareness is a long-standing issue; it is very easy for the person with the microphone to sway a crowd in their favour. The time we live in now goes far beyond that; we simply do not know what to believe anymore.

Product video

Production materials
Our aim in this project was to use a combination of hardware and software to create a seamless and straightforward evocative experience to spark conversation. The following materials were used for this project:

  • 1x Arduino Micro
  • 1x Laptop
  • 1x USB cord
  • 1x Strip of NeoPixel Lights
  • 2x Servos
  • 1x Breadboard
  • 1x Plywood
  • 1x Parchment paper
  • 1x Projector

Ideation
During the ideation phase, the team came up with many exciting options. The focus of each of our ideas was to create something that is extremely relevant and pertinent.

Idea 1: Constructing communication systems in a collapsed society
Communication devices can be made of scraps, discarded plastic debris and e-waste
This can be a way to communicate levels of remaining natural resources
‘Citizen scientists can take advantage of this unfortunate by-product of “throwaway culture” by harvesting the sensor technology that is often found in e-waste.’ (link)
Our team went to watch the Anthropocene movie for inspiration

Idea 2: A better way to communicate coffee needs
Texting and slack is not a sufficient way to get the coffee needs of a large group
Create an ‘if this then that’ type app, with your regular order saved and ready
When someone asks the app if anyone needs coffee, instantly they receive the orders

Idea 3: Broken telephone game
Using the sensors, we already have on our phones
Create a game to pass messages from phone to phone
Somehow creating a way to scramble the messages

Idea 4: Digital version of a classic board game
Pictionary, a digital version that can be played in tandem, speed rounds?

Idea 6: Think piece, ‘Challenging assumptions and perceptions.’
There is currently no way to validate content and communications on the internet
Create a survey for everyone to do at the same time, generate live results on screen
Use this to gauge perceptions or bias

Once we listed out all the ideas, we gave each other a day to think through what we felt most excited to do. We returned the next day and unanimously agreed to proceed with our think piece, ‘Challenging assumptions and perceptions.’ We were also mindful about the potential scalability of this experience. While the prototype itself was built for a small group of people, the intent was to set the foundation for a product that could easily scale to a more extensive experience and audience in the future.

Process Map
Once the idea was finalized, the next step was to flesh out all the details, including the flow of the experience. We began the process by creating the user flow diagrams. We broke down the hardware, software, and API instances, and how each of them is interconnected. It was vital to iterate the different pieces of the puzzle and see how they fit together.


Figure 3.1: User flow diagram

Wireframes
Once the flow was set, we focused on information architecture across all the devices. We used Adobe Illustrator to create wireframes with placeholder content. This helped us visualize the skeleton for the experience. We decided to use the projector as the centrepiece and the mobile phones as a balloting device.

Projector Experience Wireframes
The Projector experience would hold the critical question screens, the response screens and the final result screen to conclude the experience.


Figure 4.1: Projector Wireframes

Mobile Experience Wireframes
Mobile devices around the room will function purely as ballots, and the projector will take center stage as soon as the voting process ends. The team put much consideration into the flow of the participant’s attention. Since there would be three interfaces in play, we made sure to include as much visual feedback as possible to make sure participants knew where to look and when.

 


Figure 5.1: Mobile Phone Wireframes

Finding the news stories
The team took the selection of stories very seriously and took the time needed to research and find surprising news that shook the world when it was released. We remembered stories that had stood out for us in the past, and looked for current pressing issues that were creating news. We also divided the stories into true reports and false ones. For this project, we felt it was important not to make up false stories, but instead find stories that were released as true before being retracted later. This was crucial for the overall project objective. The team checked multiple sources and created a database of 15 stories initially, before shortlisting six in a random order of fake and true stories, and thereafter began the UI design process.


Figure 6.1: April and Omid searching for news stories
Figure 6.2: Veda and Omid searching for news stories

User Interface Design
We began the interface design process with the introduction screen. We didn’t want to create something static, so we went with a moving background. For the identity design, we wanted to create something striking and beautiful at the same time. We used Adobe Illustrator and Photoshop for all the designs. Another difficult problem we were facing was the use of three different interfaces; as noted above, we made sure to include as much visual feedback as possible so that participants knew where to look and when.


Figure 7: Introduction screen, displayed on projector and mobile devices

The team thought it would be essential to add a disclaimer screen to ensure that the exercise was well received. While we tried to be as mindful as possible while picking the stories, we knew that it was equally important to respect our cohort’s and faculty’s sentiments. Then we shifted focus to the news article in question.


Figure 8.1: Top right, Screen which displays the article
Figure 8.2: Top left, Screen which displays whether article is true or not
Figure 8.3: Bottom right, Screen which displays disclaimer
Figure 8.4: Bottom left, Screen which displays final results and accuracy percentage


Figure 9.1: First draft of the User Interface
Figure 9.2: Veda hard at work designing two versions of UI, one for projector and one for mobile


Figure 10.1: Right, Mobile Screen UI for ballots
Figure 10.2: Left, Mobile Screen, feedback to allow a user to know they have completed the vote

Programming
Controlling the flow of the experience was a high priority. To do this, we decided to create three different pages: a display page to show the news articles and the answers on a projection screen; an admin page, giving a button to a ‘moderator’ who keeps track of which page is shown; and a user page, which acts as a ballot for every person involved in the experience. We knew how important it was to choose appropriate articles that are related to today’s world and topics that people are very opinionated about. To have more time to find the right questions, we decided to start with a simple structure for the program.

Connection to PubNub
For our first step, we created the connection between the three pages and PubNub and tested the communication between them. The admin page sends data to PubNub commanding which page should be shown on the other two pages. The user page receives data from the admin page and transmits data to the display page regarding the user’s votes. The display page receives data from the admin and the user pages to display the number of votes. Once everything was working, we added all of the articles to the display page and tested the program to make sure everything went correctly. We then added a final page to show the results of the survey and allow the users to reflect on the experience that they had just had.
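
A rough sketch of that message flow is shown below; the channel name and message fields are placeholders rather than the names used in our code, and each page is assumed to have its own configured PubNub instance.

// Admin page: the moderator advances the experience to a given page.
function showPage(pageIndex) {
  pubnub.publish({ channel: "realities", message: { type: "page", page: pageIndex } });
}

// User (ballot) page: follow the admin's page changes and publish votes.
let currentPage = 0;
pubnub.addListener({
  message: (event) => {
    if (event.message.type === "page") currentPage = event.message.page;
  },
});
function castVote(believe) {
  pubnub.publish({ channel: "realities", message: { type: "vote", believe: believe } });
}

// Display page: count the ballots for the current article.
let believeCount = 0;
let challengeCount = 0;
pubnub.addListener({
  message: (event) => {
    const m = event.message;
    if (m.type === "vote") {
      if (m.believe) believeCount++;
      else challengeCount++;
    }
  },
});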


Figure 11.1: Connection to PubNub and testing
Figure 11.2: Veda and Omid working on UI and Coding
Figure 11.3: Testing of the final infrastructure

Servos and NeoPixel hardware
After creating the basic coding structure and testing it by sending messages from each page to the others, we added communication to the Arduino Micro over a serial connection. Initially, we wanted to use the Feather board and connect it to PubNub, but because of the difficulties we had connecting our boards to OCAD U’s WiFi, we decided to stick with the Arduino Micros that we had already gotten to know well. We tested the communication by sending the board servo angles based on the votes received in each category. The original idea was for the lights to transition through three phases: a standby state with white lights, a polling state using a colour library from Adafruit to show a rainbow of colours, and green lights once the poll was complete. Unfortunately, the code for the pixel lights fought with the servo code, so we had to replace the beautiful colour-library option and opt for a solid RGB colour; blue matched nicely with the final designs. The final use of the pixel lights included only one state: flashing on and off with a 10 second delay for each.
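
The display page’s serial handoff could look like the following sketch: map the vote split to servo angles and write them to the Arduino Micro, which parses the values and drives the servos and NeoPixels. The p5.serialport library, the port name, and the "angle1,angle2" message format are assumptions for illustration.

// Illustrative sketch: send servo angles derived from the vote counts
// to the Arduino Micro over serial (p5.serialport assumed).
let serial;

function setup() {
  noCanvas();
  serial = new p5.SerialPort();
  serial.open("/dev/tty.usbmodem1421"); // replace with the Micro's port
}

function sendPollResult(believeCount, challengeCount) {
  const total = believeCount + challengeCount;
  if (total === 0) return;
  // one servo shows the "believe" share, the other the "challenge" share
  const believeAngle = round(map(believeCount / total, 0, 1, 0, 180));
  const challengeAngle = 180 - believeAngle;
  serial.write(believeAngle + "," + challengeAngle + "\n");
}
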
Adding Sound
We wanted the users to be focused on answering, and we decided to add audio recordings of each news item to keep a good pace and allow the participants to ingest the content with ease. Adding the sound to the code wasn’t difficult, once it was recorded and edited.


Figure 12.1: Setting up the circuit for the servos and NeoPixel lights
Figure 12.2: Testing the servos before completing fabrication

circuit_bb

Figure 13: Final Circuit Diagram

As a final step to make sure the experience was smooth, we tested each component of the program and ran many trials to make sure everything worked correctly.

screen-shot-2018-11-24-at-8-38-11-pm

Figure 14.1: Running final tests before presentation begins, ballot and article screen working correctly
Figure 14.2: Servo device and article results screen working correctly

Fabrication
Although it wasn't necessary for this project to have any hardware, we all wanted to add something tangible for two reasons. First, to use our student access to the Maker Lab and learn more about its equipment. Second, we wanted this experience to only happen in person and not be a simple online survey that disappears from your mind the moment it is completed. Our team had an idea nailed down quite early, and we were eager to get the fabrication underway as soon as possible, knowing that other teams would be using the Maker Lab. The goal was to finish the fabrication process in the first week. We initially wanted to 3D print a casing for the servo motors since we knew the laser cutting machine was down. Using both Illustrator and Autodesk Fusion 360, we created an STL file that could be read by the printer. This was a big learning curve since no one on the team had ever used 3D software before. When we met with Reza on Wednesday, we were advised to wait for the maintenance of the laser cutting machine to be completed, since the execution of our design would be better on that device. Based on Reza's advice, we went back to the original Illustrator file, which could be used by the laser cutting software. Waiting for the laser cutter to be fixed did throw us off our schedule, but we were able to pull it all together.

screen-shot-2018-11-24-at-8-23-02-pm

Figure 15.1: 3D View of the design, front view
Figure 15.2: 3D View of the design, back view

The first attempt was cut on cardboard to check the dimensions of the design and the quality of the cut patterns. In this process, we realized how small the lines in the background pattern were once laser cut. Some of the lines broke as soon as they were touched. Some of this was due to the ripples in the cardboard. To make sure this would not happen in our final product, once again we went back to the design and increased the thicknesses of the problem areas.

screen-shot-2018-11-24-at-8-27-56-pm

Figure 16.1: Laser cutter in action
Figure 16.2: Prototyping on cardboard, pattern was too intricate and needed to be reworked slightly
Figure 16.3: Prototyping on cardboard, front of the design

We decided to go with a thin layer of plywood. Reza was concerned that some pieces would jump out during the cutting process and hurt either the machine or the design, so he set the depth of the laser incisions to not cut completely through. Since there was a natural curve to the piece of plywood, some pieces came out easily but other parts needed to be cut out later with an X-Acto knife.

screen-shot-2018-11-24-at-8-31-46-pm

Figure 17.1: Assembling final wooden casing that will house the servo device
Figure 17.2: Cutting out the patterns on the wood body
Figure 17.3: Cutting out the patterns on the wood body, back view

For the final project, we decided to add an LED Strip to the design so that we could highlight the moments when the users had to look at the servos. To hide all the electronics in the design and further infuse the light, we added a layer of parchment paper behind the patterns.

screen-shot-2018-11-24-at-8-35-41-pm

Figure 18.1: Final circuit prototype, back view
Figure 18.2: Adding parchment paper to hide the circuit and diffuse the light more effectively
Figure 18.3: Fabrication of final product

Presentation & Critique
For the critique, we wanted to make sure that everything would go smoothly, so we started early in the day and left ourselves enough time to test the design after connecting everything in the gallery. Once connected, we ran through a few tests and triple-checked that the servo was working. Once the computer was connected to the projector, the display would not go to full screen in Firefox, so to improve the presentation we switched to another browser. Unfortunately, the serial connection was left idle and disconnected before the actual presentation, causing the servos not to move at all.
The feedback we received was positive. The topic was relevant, and many shared similar concerns. One issue raised was that the mobile device continued showing the timer after the vote was cast. There was shock from everyone on seeing the final results page display the overall accuracy, which was quite low, sitting at a 41% rating. Two articles in particular were quite convincing even though they were both untrue, and most people believed them. Having done a lot of research on fake news, we expected people to accept false ideas that fell into their own confirmation biases.

Reflection
Upon reflection, there are a lot of minor tweaks we would make to this project based on the flow of the first presentation to a large group of people. First, the sound coming from the computer was not loud enough, and many participants were straining to either hear or quickly read what was on the screen; a wireless speaker is required. Second, we tried to design the experience so that little interaction from our team would be needed during the polling. Working off this assumption, when we noticed silence in the room or a look of confusion from a participant, we realized we needed to be more prepared to guide people through. Third, after the vote was cast we moved on to reveal the truth behind the story; the issue we noticed was that too much content was provided, and we don't think anyone read what was on the screen. This is a flaw in the UX that can be fixed with a bit of editing and by altering the hierarchy of the content. Fourth, and possibly most important, we did not put enough thought into the interface used by the 'moderator.' This interface did not show the timer the participants saw on their screens, so the moderator wasn't completely sure when to switch to the next article. Also, if or when the servo device decides not to work, it would be a bonus for the moderator's interface to show the voting results so they could at least be delivered to the participants verbally. The team learned a great deal about effectively delivering a message to a group of people using multiple interfaces, about communication feedback, and about the importance of presence upon delivery.

References
Steinmetz, K. (2018, August 09). How Your Brain Tricks You Into Believing Fake News. Retrieved November 26, 2018, from http://time.com/5362183/the-real-fake-news-crisis/

Meyer, R. (2018, March 12). The Grim Conclusions of the Largest-Ever Study of Fake News. Retrieved November 26, 2018, from https://www.theatlantic.com/technology/archive/2018/03/largest-study-ever-fake-news-mit-twitter/555104/

English, J. (2016, November 08). Believe It Or Not, This Is Our Very Own River Yamuna. Retrieved November 26, 2018, from http://english.jagran.com/nation-believe-it-or-not-this-is-our-very-own-river-yamuna-72099

“The Office” Women’s Appreciation. (n.d.). Retrieved November 26, 2018, from https://www.imdb.com/title/tt1020711/characters/nm0136797

McLaughlin, E. C. (2017, April 26). Suspect OKs Amazon to hand over Echo recordings in murder case. Retrieved November 26, 2018, from https://www.cnn.com/2017/03/07/tech/amazon-echo-alexa-bentonville-arkansas-murder-case/index.html

Gilbert, D. (2018, November 20). A teenage girl in South Sudan was auctioned off on Facebook. Retrieved November 26, 2018, from https://news.vice.com/en_us/article/8xpqy3/a-teenage-girl-in-south-sudan-was-auctioned-off-on-facebook

The truth behind ‘Fake fingers being used for orchestrating a voting fraud’ rumour. (2018, September 30). Retrieved November 26, 2018, from https://www.opindia.com/2017/02/the-truth-behind-fake-fingers-being-used-for-orchestrating-a-voting-fraud-rumour/

Sherman, C. (2018, November 21). Why the women suing Dartmouth over sexual harassment are no fans of Betsy DeVos. Retrieved November 26, 2018, from https://news.vice.com/en_us/article/d3b3dz/why-the-women-suing-dartmouth-over-sexual-harassment-are-no-fans-of-betsy-devos

ABSTRACT (2018) by Georgina Yeboah

ABSTRACT (2018): An Interactive Digital Painting Experience by Georgina Yeboah

(Figures 1-3. New Media Artist Georgina Yeboah’s silhouette immersed in the colours of ABSTRACT. Georgina Yeboah. (2018). Showcased at OCADU’s Experimental Media Gallery.)

ABSTRACT’s Input Canvas: https://webspace.ocad.ca/~3170683/index.html

ABSTRACT’s Online Canvas: https://webspace.ocad.ca/~3170683/ABSTRACT.html

GitHub Link: https://github.com/Georgiedear/Experiment-4-cnc

Project Description: 

ABSTRACT (2018) is an interactive digital painting collective that tracks and collects simple, ordinary strokes from users' mobile devices and, in real time, translates them into lively, vibrant strokes projected on a wall. The installation was projected onto the wall of the Experimental Media room at OCADU on November 23rd, 2018. ABSTRACT's public canvas is also accessible online, so participants and viewers alike can engage and be immersed in the wonders of ABSTRACT anytime, anywhere.

The idea of ABSTRACT was to express and celebrate the importance of user presence and engagement in a public space from a private or enclosed medium such as a mobile device. Since people tend to be encased in their digital world through their phones, closing themselves off in their own bubbles at times, it was important to acknowledge how significant their presence is outside of that space and what users have to offer to the world simply by existing. The users make ABSTRACT exist.

Here’s the latest documented video of ABSTRACT below:

screen-shot-2018-11-26-at-9-22-16-am

https://vimeo.com/302788614

Process Journal:

Nov 15th, 2018: Brainstorming  Process

(Figures 4-6. Initial stages of brainstorming on Nov 15th.)

Ever since experiment 1, I've wanted to do something involving strokes. I was also interested in creating a digital fingerprint that could be left behind by anyone who interacted with my piece. I kept envisioning something abstract yet anonymous for a user's online input. Trying out different ways of picturing what I wanted to do, I started by thinking about translating strokes into different ones as an output, at first just between canvases on my laptop. I wanted to go even further by outputting more complex brush strokes from the simple, ordinary ones I drew on my phone. A simple stroke could output a squiggly one in return, or a drawing of a straight line could appear diagonally on screen. I kept playing with this idea until I decided to just manipulate the colour of the strokes' output for the time being.

Nov 19th 2018: Playing with strokes in P5.JS and PubNub

Using PubNub's server to pass messages between P5.js sketches, I started to play with the idea of colours and strokes. I experimented with a couple of outputs and even thought about having the same traced strokes projected on the digital canvas too, with other characteristics, but later felt the traced strokes would hinder the ambiguity I was aiming for. I also noticed that I was outputting the same randomization of colours and strokes both on mobile and on the desktop, which was not what I wanted.

Nov 21st,2018: Understanding Publishing and Subscribing with PubNub

img_1767

Figure 9. Kate Hartman’s diagram on Publishing and Subscribing with PubNub.

After a discussion with my professors, I realized that all I needed to do to distinguish the characteristics of the strokes I inputted from those I outputted was to create another JavaScript file that would only publish the variables I sent from my ellipse calls:

Figure 10. Drawn primitive shapes and their incoming variables being delivered from other javascript file under the function touchMoved();
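A minimal sketch of that split is shown below: the input page publishes the touch position from touchMoved(), and the canvas page subscribes and draws its own vibrant version of the stroke. It assumes a PubNub client named pubnub has already been created with the project keys; the channel name and stroke styling are assumptions for illustration, not the exact project code.

```javascript
const CHANNEL = "abstract-strokes"; // assumed channel name

// Input page (mobile): publish the touch position while the finger moves.
function touchMoved() {
  pubnub.publish({
    channel: CHANNEL,
    message: { x: mouseX / width, y: mouseY / height } // normalized so the two canvases can differ in size
  });
  ellipse(mouseX, mouseY, 10, 10); // simple white stroke on the phone
  return false;                    // prevent the page from scrolling
}

// Canvas page (projection): subscribe and draw a vibrant stroke instead.
pubnub.addListener({
  message: function (event) {
    const p = event.message;
    noStroke();
    fill(random(255), random(255), random(255), 120); // random colour, low opacity
    ellipse(p.x * width, p.y * height, random(20, 60));
    triangle(p.x * width, p.y * height,
             p.x * width + random(-40, 40), p.y * height + random(-40, 40),
             p.x * width + random(-40, 40), p.y * height + random(-40, 40));
  }
});
pubnub.subscribe({ channels: [CHANNEL] });
```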

Nov 22nd and 23rd 2018: Final Touches and Critique Day

On the eve of the critique I managed to create two distinguishable strokes: ordinary simple strokes on one HTML page with its own JS file, and vibrant stroke outputs on the other. The connection was successful. I decided to add triangles to the vibrant strokes and play around with the opacity to give the brush stroke more character. I later tested it along with another user, and we both enjoyed how fun and fluid the interaction was.

Figure 11. User testing with another participant.

Figure 12. Simple white strokes creating vibrant strokes on the digital canvas of Abstract.

Here are some stills with their related strokes:

Figure 15. Output of Vibrant strokes from multiple users’ input.

Overall, the critique was a success with a positive outcome. When the installation was projected in a public space, users engaged and interacted with the strokes they drew as they appeared on the wall. Some got up and even took pictures as strokes danced around them and their silhouettes. It was a true celebration of user presence and engagement.

Figure 16. A participant getting a picture taken in front of ABSTRACT’s digital canvas.

img_2842

Figure 17. Experimental Media room where ABSTRACT was installed.

img_2861-2

Figure 18. Georgina Yeboah standing in front of her installation ABSTRACT in the Experimental Media room at OCADU.

Related References

One of my biggest inspirations for interactive installations that require user presence and engagement, like ABSTRACT, has always been the work of Camille Utterback. Her commissioned work Abundance (2007) tracked the movements and interactions of passers-by on the streets of San Jose's plaza, creating interesting projections of colours and traces across the building. Much of Utterback's work uses spatial movement and user presence to express a reflection of the life interacting and existing in the work's space.

References

Multiuser drawing.(n.d). Retrieved from http://coursescript.com/notes/interactivecomputing/animation/pubnub/

Kuiphoff, J. (n.d). Pointillism with Pubnub. Retrieved on November 21 2018 from http://coursescript.com/notes/interactivecomputing/pubnub/

Npucket and katehartman. (2018, November 26). CC18 / experiment 4 / p5 / pubnub / 05_commonCanvas_dots/ . Github. Retrieved from https://github.com/DigitalFuturesOCADU/CC18/tree/master/Experiment%204/P5/pubnub/05_commonCanvas_dots

Utterback, C. (2007). Abundance. Retrieved from  http://camilleutterback.com/projects/abundance/

PittsburghCorning. (2011, April 8). Camille Utterback – Abundance. Retrieved from https://www.youtube.com/watch?v=xgRFUsVVb84

The ‘Call Mom’ Project

By Frank, Jingpo, Tabitha

Project Description:

Mom misses you. She wants to know why you never call. The ideal moment to call Mom would be that fleeting period of time before bed where she’s snuggled under the blankets with a hot tea and a good book, just about to drift off to sleep. But of course she hasn’t told you this, you’re just expected to know through a nonexistent form of parent-child telekinesis.

Call Mom is an Arduino-based networking project that uses a light sensor to determine when her reading lamp is switched on and sends you a notification that she's in relaxation mode, ready to hear from you. The device is housed inside a vintage book that blends in seamlessly with her bedroom decor. Powered by a simple battery pack, it's a low-maintenance, internet-connected device that sits inconspicuously by her bedside. Some moms will want to know how it works and others won't care in the least, but it's a universal truth that moms just want to hear your voice and know that they haven't been forgotten amidst the chaos of your busy life.

Github Link: https://github.com/imaginere/Experiment-4

Ideation:

The first iteration of our project was a simple device that allowed parents to send their young children messages while they were at school, kind of like a kid-friendly pager. As we continued to develop the idea, we discussed how children have trouble perceiving time in the same way adults do, so parents and teachers could program reminders with friendly icons to mark key moments throughout the day. Based on Frank's initial sketch, we decided that the object should resemble a wooden children's toy with an LCD screen on the front and a simple button on the top for the child to confirm that they had received the message.

Frank’s first sketch for the kid-friendly messaging device.

We quickly ran into trouble when we realized that the Feather and our LCD screen were not compatible with PubNub. After discussing the situation with Kate, we determined that it was best to pivot towards a new idea. In the brainstorming session we explored other ways of marking the passage of time, but many of these ideas felt like watered-down versions of Google Calendar. So we went back to the initial concept – networking. What does it mean to network? Why do we seek connection? Rather than think it through intellectually, we distilled our project down to the universal feeling of separation through distance. Longing to be with the person you care about the most. Late night calls to your loved one, though miles apart, still knowing that you're both looking out the window at the same night sky. I just want to know that you're thinking of me.

While discussing new directions Frank made drawings on the blackboard to help solidify our ideas.

We continued to explore the idea of a remote and networked self. Tabitha explained how she keeps her favourite travel destinations on her phone's weather app to help her imagine that she's somewhere else. On a rainy Toronto day she can see the current weather in Paris and concoct an escapist fantasy of the adventures she would be having if she were there instead. Jingpo told us that she does something similar to help her imagine what her mom is doing overseas. She described the experience of never knowing the best time to call her mom, who lives far away in another time zone; the best time to call is usually right before bed. Frank mentioned hearing about a project where the artist created a networked sensor that would alert him when his mother was seated in her favourite chair. It was these elements combined that caused the "ah-ha!" moment – a light sensor that could send you a message when Mom's bedside lamp turns on and she has settled down for the night with a cup of tea and a good book. Now we had a project to address the question "What's the best time to call Mom?"

Coding Process: (Frank)

Hardware & Coding the Device

This is the hardware we used to achieve this project:

– Adafruit Feather ESP8266

– 7mm photoresistor

– 1K Resistor

The Fritzing diagram for the circuit:

booklight

As you can see, the circuit is super simple. We focused primarily on the functionality of what we were trying to achieve.

What do the parts do?

The photoresistor is a simple device that measures the amount of light it is receiving. It sends this constantly changing number to an analog pin on the Arduino (analog pins are labelled with the letter A before the number; in this case we used A3, since A1 and A2 cannot be used while WiFi is enabled on the ESP8266, which disables those two pins).

When the Arduino is on (powered up), the loop() function constantly monitors the sensor reading, and when it goes above a certain number it triggers an event that sends a message to IFTTT (an internet service that provides hooks into various notification protocols). The IFTTT service, which in our case uses Webhooks, sends an email and a notification to the user ID set in the Arduino code.

We can easily extend this notification to up to 12 different email addresses if we use the Gmail action through IFTTT. We did not use this during our presentation or in the code, as it was unreliable at times and was causing our code to get glitchy.

The code also makes sure we receive only one notification until the lamp is turned off; if the lamp is turned on again, it will send another notification.

A feature we would like to add in the next iteration is time of day, so that the notification is bypassed if it is not between 8 pm and 12 am. This would also allow the device to be made very power efficient if we used a switch to power down the WiFi when it is not needed, which in turn would let the device be made very small with different hardware and a better power-management circuit.

Trouble Shooting the Code:

The code, although straightforward, needed some mental gymnastics to get sorted out. These were the main challenges facing us.

1. How can we reliably get the notification to trigger? We were given PubNub (www.pubnub.com) as an internet protocol to use, which is essentially a messenger service; we would have to get the Arduino to send a message to PubNub and have that, in turn, send us a notification. This was easier said than done, as PubNub has a lot of APIs that connect to it and could do this, but setting them up on the server side and figuring out the API documentation in a matter of a few days was very technical. We are not inherently coders, and most of these technologies are introduced to us at the start of the week alongside two or three other projects we have on the go. Given that limitation, we looked for a simpler solution that could meet the wireless communication needs of this project. IFTTT was the answer: there were good YouTube videos demonstrating how to set up the applet in IFTTT and use the API key. The one I referred to was: [https://www.youtube.com/watch?v=znFMNzT_Gms&t=107s]

2. The second problem we faced was that the trigger would keep going off constantly as soon as the light was turned on. This is a simple fix in retrospect, but at the time it was a head scratcher for people who don't come from an intensive coding background. The solution was a conditional boolean statement that bypasses the trigger once the light has been turned on and the flag is set to true (see the sketch after this list).

3. The final piece of the puzzle was getting the notification to trigger reliably and finding a good median light reading that would not trigger in normal lighting conditions. We would also have liked to use the Gmail notification, which would notify multiple people (siblings) at the same time, but this proved unreliable because of the IFTTT service. It still works and can be used in the code, but it might skip a few of the times the lamp is turned on because IFTTT has an issue with this applet.
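The latch described in point 2 is language-agnostic. Our device code is an Arduino sketch, but the same logic can be sketched in a few lines of JavaScript; the threshold value and the notify function here are placeholders, not our actual firmware.

```javascript
// Illustrative latch: fire one notification per lamp-on event.
// (Our actual device runs this logic in an Arduino sketch; this is just the pattern.)
const THRESHOLD = 600;   // placeholder light reading that counts as "lamp on"
let lampOn = false;      // remembers whether we already notified for this lamp-on

function checkLight(reading) {
  if (reading > THRESHOLD && !lampOn) {
    lampOn = true;       // latch: don't notify again until the lamp goes off
    sendNotification();  // e.g. hit the IFTTT Webhooks URL
  } else if (reading < THRESHOLD && lampOn) {
    lampOn = false;      // lamp turned off; re-arm for the next evening
  }
}

function sendNotification() {
  console.log("Lamp is on – time to call Mom!");
}
```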

The Internet of Things

We looked at the Internet of Things for inspiration to build this project, which we envision becoming a standalone product in the future. The electronics are embedded into a real book, creating a sense of continuity through a nostalgic artifact we are all used to having on a bedside table. We would like to add a practical use to the book in the future, which could be a way of recording all the times we did call our mom because of this little book. It also blends into our daily lives, silently performing what some would consider a mundane task, but it links us across continents, giving us the tangible feeling of knowing our loved ones are getting ready to call it a night.

We could extend this project by adding an internet log of all the times the lamp was switched on and off, which would show our parents' sleep times, but we question the usefulness of this and whether they would want that kind of monitoring. It could also be used for parents in aged-care homes: a gentle nudge on our phones just letting us know the rhythm of their lives.

Where the wild things are

We are taken into the world of the imagination, where we find a thread of connection from a simple SMS, sent not by our moms but by a machine silently placed beside their bedside tables, reminding us that they matter. We live in a world of over-communication from all sides; we may receive a message every day from loved ones forwarding jokes or making casual remarks, but there is something magical about a voice and the time we spend reflecting on the time and space they might be in.

A sketch I did thinking about our concept over the weekend.

A sketch I did thinking about our concept over the weekend.

Fabrication: (Tabitha)

We decided early on that it was important for our project to feel nostalgic. Working from the idea of a mom’s bedside table we began to think of objects that could house a light sensor, Arduino and battery pack that wouldn’t feel out of place. We settled on a book because it would be large enough to house the components and could be sent through the mail to wherever Mom lives.

Items from my home

Items from my home including the book corner.

I looked around my apartment for objects that fit within the vision we had for the project. My family is really into antiques and my husband works for the library so I had no shortage of materials to choose from in our home. In selecting these objects I tried to create a vignette with universal appeal. Even though the items come from my family it was important that I wasn’t recreating the bedside table of my own mom. To connect emotionally with our audience they needed specific details but also the freedom to project their own Mom onto these objects. So in our case that meant a cup of tea, an assortment of books, a lamp and an old family portrait.

Building the book

Building the book required several hours of gluing and snipping.

After selecting a musty copy of Heidi from the never-to-be-read section of our bookshelf I headed to the Maker Lab where Reza shook his head and said there were no shortcuts in this case, each page had to be cut and glued by hand! At over 250 pages this had me wishing I had chosen a smaller book… but thankful that I hadn’t picked something ridiculous like Ulysses. The light sensor was placed on the front cover of the book and two holes were drilled to connect the wires to the Arduino. At the end I attached velcro to the front cover so the components would stay in place.

The final setup at the Grad Gallery.

Before the presentation we set up the table with the light-sensitive book as well as props to help build the world of this fictional mom character. There was a cup of tea – brown rice tea, low in caffeine since it’s just before bed! She keeps by her bedside a copy of Heloise’s Kitchen Hints as well as  Machiavelli’s The Prince and we leave you to decide which of the two books has the greater influence! After it was all set up we did one more test to make sure we were receiving messages from the Feather to our phones.

Future Development of a User Interface: (Jingpo)

We decided to keep the main function and the user interface simple this time. For future development of this project, we are thinking of a web interface that provides users with other useful ancillary functions. After the class critique last Friday, we found one interesting insight: sometimes it's very difficult to call our parents when we haven't been in contact for quite a while.

We got a very good reaction from our international classmates about this project. The emotional response that happened while demonstrating the project exceeded our expectations. Many of them said they would buy the device if it were for sale.

Follow-up functions for products:

Possibilities for a calling interface.

1. Web page: When you receive an email or text message alert, you could click through to an external webpage.
The whole webpage would be created in p5.js. The image on the page changes when the light is turned on and off, and users can visually check multiple pieces of data, such as the local weather, current temperature, humidity and air quality, and the date and time. We found it is possible to request the weather for the city or town Mom lives in by passing its name as a parameter in the API call.
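As a rough sketch of that idea, a weather service such as OpenWeatherMap can be queried with a city name. The endpoint shown is real, but the city, the API key, and how we would display the result are assumptions for this future feature.

```javascript
// Fetch the current weather for Mom's city and log it for the p5 page to display.
// API key and city are placeholders; this is a sketch of a future feature.
const API_KEY = "YOUR_OPENWEATHERMAP_KEY";
const CITY = "Toronto"; // placeholder city name

function fetchMomWeather() {
  fetch(`https://api.openweathermap.org/data/2.5/weather?q=${CITY}&units=metric&appid=${API_KEY}`)
    .then((response) => response.json())
    .then((data) => {
      const temperature = data.main.temp;        // current temperature in °C
      const humidity = data.main.humidity;       // relative humidity in %
      const description = data.weather[0].description;
      console.log(`${CITY}: ${temperature}°C, ${humidity}% humidity, ${description}`);
    });
}
```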

2. Sensors:
If possible, we can add a temperature sensor to this device, so users not only know their local temperature but also the real temperature at Mom's home.

3. Click-to-call links:
In most cases an international call would be very expensive, so people usually choose to call their moms online. It would be great to create click-to-call links for mobile browsers, so they can call their mom directly through the link without downloading or opening a video chat application such as FaceTime or Skype. We found some meta tags and URI schemes that call the Skype or FaceTime applications from a website.

4. Data generator:
Hopefully we can access the user's personal data and provide some useful statistics, such as "What is the average time she goes to sleep?", "When was the last time you called her?", or "How long did she read yesterday?". We care about our parents' health and want to know whether they go to bed on time even though we don't call them every day.

5. Chat topics:
We are very interested in the insight that sometimes it feels difficult to call our parents. You miss her voice and want to call, but something holds you back; after you struggle a bit, it turns out you choose to text instead. If possible, we could randomly suggest some topics that you can talk about with your mom.

A tracker that documents mom’s routine.

Class Critique and Conclusions:

Presenting our project to the class.

We had discussed doing a role play scenario where one of us would act as the mom but that didn’t seem like the right direction. As we were setting up Tabitha had the thought of calling her real mom during the presentation – coincidentally they had just been talking on the phone that morning. So Tabitha sent a synopsis of the presentation as well as some photos of the setup and told her mom to be herself and say whatever she’d normally say before bed. She was very excited to be asked to participate in the project!

Sending secret messages to my mom in class.

The feedback we received was complimentary, but the most striking thing was the emotional response that happened while demonstrating the project with Tabitha's mom. We were able to see the heart of the project reflected in the faces of our classmates. There was a sense of understanding of why this simple device matters and how it can make a big difference in a very small way.

We were asked to think about practical aspects like future iterations of the device. Suggestions included shrinking it down to bookmark size, repurposing it for different family members, and networking it with messaging services to develop a calling interface. The gallery project could be expanded by creating multiple character vignettes using the bedside table theme. However, no technology can fully address the question "Now that she's on the phone, what do I say to her?" Sometimes it's very difficult to relate to our parents as they are disconnected from our day-to-day. But perhaps that's beside the point. Moms are resilient, and all they want is a brief acknowledgement that they are loved through the simple act of saying goodnight.

References/Context: The following are some items, blogs and resources that inspired us and helped develop our project.

screen-shot-2018-11-24-at-10-46-19-am

https://learn.adafruit.com/wifi-weather-station-with-tft-display/software

This weather station was the initial inspiration for our parent-child communication device. The plan was to use this LCD screen; however, we changed the direction of our project.

http://blog.ocad.ca/wordpress/digf6037-fw201602-01/category/experiment-3/

As we were still exploring the calendar idea we found this project from a previous digital futures class.

https://www.hackster.io/createchweb/displaying-an-image-on-a-lcd-tft-screen-with-arduino-uno-acaf48

This was useful in trying to troubleshoot the LCD screen and Arduino connections.

https://www.youtube.com/watch?v=znFMNzT_Gms&t=107s

This is the video I referred to for help with setting up IFTTT; it uses block code to set up the function.

http://easycoding.tn/tuniot/demos/code/

This is the block code editor for the ESP8266. It makes troubleshooting code a little easier if you can't follow normal syntax, and it also generates the C++ code: if you build your logic in the block editor, you can copy and paste the resulting code into your Arduino sketch.

600x600bf

https://en.wikipedia.org/wiki/WireTap_(radio_program)

Tabitha – The inspiration to call my mom came from years spent listening to Jonathan Goldstein interview his family on CBC’s Wiretap as well as a general interest in ‘ordinary people’ as performers. Three years of Second City training has taught me the power of unscripted acting for its spontaneity and truthfulness, but I especially love it when untrained actors are brought onstage. All it takes is a short briefing about the premise and away they go. That’s when the magic happens.

 

First Flight (An Interactive Paper Airplane Experience.)

Experiment 3:

By: Georgina Yeboah

Here’s the Github link: https://github.com/Georgiedear/CNCExperiment3

 

First Flight. (An Interactive Paper Airplane Experience. 2018)

Figure 1.”First Flight. (An Interactive Paper Airplane Experience, 2018)” Photo taken at OCADU Grad Gallery.

First Flight (FF) (2018) is an interactive tangible experience in which users hold a physical paper airplane to control the orientation of the sky on screen, so that they appear to be flying, while attempting to fly through as many virtual hoops as they can.

Figure 2. “First Flight Demo at OCADU Grad Gallery.” 2018

 

Figure 3.  First Flight Demo at OCADU Grad Gallery ( 2018).

Video Link: https://vimeo.com/300453454

The Tech:

The installation includes:

  • x1 Arduino Micro
  • x1 Bono 55 Orientation Sensor
  • x1 Breadboard
  • x1 Laptop
  • A Couple of Wires
  • Female Headers
  • 5 Long Wires (going from the breadboard to the BNO055)
  • A Paper Airplane

Process Journal:

Thursday Nov 1st, 2018: Brainstorming to a settled idea.

Concept: Exploring Embodiment with Tangibles Using a Large Monitor or Screen. 

I thought about a variety of ideas leading up to the airplane interaction:

  1. Using a physical umbrella as an on or off switch to change the state of a projected animation. If the umbrella was closed it would be sunny. However if it were open the projection would show an animation of rain.
  2. Picking up objects to detect a change in distance (possibly using an ultrasonic sensor.) I could prompt different animations to trigger using objects. (For example; picking up sunglasses from a platform would trigger a beach scene projection in the summer.)
  3. I also thought about using wind/breath as an input to trigger movement of virtual objects, but was unsure of where or how to get the sensor for it.
  4. I later thought about using the potentiometer and creating a clock that triggers certain animations to represent the time of day. A physical ferris wheel that would control a virtual one and cause some sort of animation was also among my earliest ideas.

Figure 4. First initial ideas of embodiment.

 

Figure 5. Considering virtual counterparts of airplane or not.

Monday Nov 5th, 2018:

I explored and played with shapes in 3D space using the WEBGL feature in P5.js. I learned a lot about WEBGL and its Z-axis properties.

Figure 6. Screenshot of Airplane.Js code.

I looked at the camera properties and reviewed the syntax from the "Processing P3D" document by Daniel Shiffman. The plan was to set the CSS background gradient and later attach the orientation sensor to control the camera instead of my mouse.

Figure 7. Camera syntax in WEBGL. Controls the movement of the camera with mouseX and MouseY.
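A minimal sketch of that camera control is shown below; it uses p5's WEBGL camera() with the mouse, which is what the orientation sensor later replaced. The eye height and distances are assumed values for illustration, not the project's exact numbers.

```javascript
// p5.js WEBGL sketch: steer the camera with the mouse (later replaced by the BNO055).
function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);
}

function draw() {
  background(180, 210, 255); // stand-in for the CSS sky gradient

  // Map the mouse to an eye position; camera(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ)
  const eyeX = map(mouseX, 0, width, -400, 400);
  const eyeY = map(mouseY, 0, height, -200, 200);
  camera(eyeX, eyeY, 600, 0, 0, 0, 0, 1, 0);

  normalMaterial();
  torus(80, 12); // one hoop at the origin, just to have something to look at
}
```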

 

Figure 8. First Flight’s Interface using WEBGL.

Tuesday Nov 6th, 2018.

I had planned to add cloud textures for the sky but never found the time to do so. I did manage to add my gradient background though using CSS. 

I also planned to add obstacles to make reaching the hoops challenging, but I didn't include them due to time constraints and prioritization, and thought they would be best suited for future work.

Tuesday Nov 8th, 2018.

The eve before the critique, I had successfully soldered long wires to the female header that would be attached to the BNO055 orientation sensor. The sensor would sit nicely on top of the paper airplane's head, covered with extra paper. On the other end, the wires would connect to a breadboard on which the Arduino Micro sat.

Figure 9. The BNO055 orientation sensor sits nicely on top of the paper airplane.

References and Inspirations:

I wanted to play with the idea of embodiment. Since I've worked with VR systems in conjunction with tangible objects for a while, I wanted to revisit those kinds of design ideas, but instead of immersive VR I wanted to use a screen. A monitor big enough to carry the engagement seemed simple enough for exploring this idea of play with a paper airplane.

I looked online for inspiring graphics to help me start building my world. I wanted this to be a form of play so I wanted the world I’d fly through to be as playful and dynamically engaging as possible while flying.

PaperPlanes:

Paper Planes by Active Theory is a web application created for the Google I/O event back in 2016 (Active Theory). It was an interactive, web-based activity where guests at the event could send and receive digital airplanes from their phones by gesturing a throw toward a larger monitor. Digital paper airplanes could be thrown and received across 150 countries (Active Theory). The gesture of creating and throwing in order to engage with a larger whole through a monitor inspired me to explore my own project's playful gesture and interactivity.

Figure. 10. Active Theory. (2016). Paper Plane’s online web based installation .

The CodePad:

This website features a lot of programmed graphics and interactive web elements. I happened to come across this WEBGL page by chance and was inspired by the shapes and gradients of the world it created.

Figure 11. Meyer, Chris. (n.d.) "WebGL Gradient". Retrieved from https://codepad.co/snippet/xC6SberG

 

P5.Js Reference with WEBGL:

I found that the Torus (the donut) was part of WEBGL, and along with the Cone, I thought it would be an interesting shape to play and style with. The Torus would wind up becoming my array of hoops for the airplane to fly through.
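A small sketch of how a hoop array might be laid out with torus() is below; the positions, sizes, and count are assumptions for illustration, not the project's exact values.

```javascript
// Lay out a line of hoops (tori) receding into the scene, in p5.js WEBGL mode.
const hoops = [];

function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);
  for (let i = 0; i < 8; i++) {
    hoops.push({ x: random(-200, 200), y: random(-120, 120), z: -300 * i });
  }
}

function draw() {
  background(180, 210, 255);
  noStroke();
  fill(255, 200, 0);
  for (const h of hoops) {
    push();
    translate(h.x, h.y, h.z);
    torus(60, 8); // hoop radius and tube thickness
    pop();
  }
}
```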

 

screen-shot-2018-11-12-at-11-56-41-pm

Figure 12. P5.Js. (n.d) “Geometries” Retrieved from https://p5js.org/examples/3d-geometries.html

Future work:

Currently, there are many iterations and features I would like to add or expand on. I would like to finalize the environment and create a scoring system so that the user collects points when they go through a hoop: the more hoops you fly through, the more points you get. Changing the gradient background of the environment after a period of time is another feature I would like to work on. I believe there is a lot of potential for First Flight to eventually become a fully playful and satisfying experience with a paper airplane.

References 

3D Models. Cgtrader. (2011-2018). Similar free VR / AR / Low Poly 3D Models. Retrieved from

https://www.cgtrader.com/free-3d-models/exterior/landscape/low-poly-clouds-pack-1

ActiveTheory. (nd). Paperplanes. Retrieved from https://activetheory.net/work/paper-planes 

Dunn, James. (2018). Getting Started with WebGL in P5. Retrieved on Nov 12th 2018 from   

https://github.com/processing/p5.js/wiki/Getting-started-with-WebGL-in-p5

McCarthy, Lauren. (2018). Geometries. P5.js Examples. Retrieved from https://p5js.org/examples/3d-geometries.html

Meyer, Chris.(2018). WebGl Gradient. Codepad. Retrieved from https://codepad.co/snippet/xC6SberG

Paperplanes. (n.d). Retrieved from https://paperplanes.world/

Shiffman, Daniel. (n.d). P3D. Retrieved from https://processing.org/tutorials/p3d/

W3Schools.(1999-2018). CSS Gradients. Retrieved from https://www.w3schools.com/css/css3_gradients.asp

 

Generative Poster


Project by: 
Josh McKenna

cc3poster

GitHub

Browser Experience Here.

Introduction

Experiment 3, This & That, was introduced to us as an opportunity to work individually and explore a concept involving communication between Arduino and P5. The idea for my experiment originated from an experience I had earlier in the semester, when I attended the Advertising & Design Club of Canada's annual design talk, which this year featured multiple graphic design studios from San Francisco (see Figure 1). At the end of the presentation I bought one of three posters they made, each having small differences from the others. It was the first time I remember being able to buy a poster that was the same in terms of its system and graphical elements, but which also had some variability between each design. Inspired by this experience, I recognized an opportunity to use generative design as a way to produce variability within graphic design artefacts. I felt that it could add value or incentive for the attendee of an expo or event to bring home a personalized version of a poster that expands the graphical identity of that event. This project experiments with that very idea and allows the user to explore the identity of a preset graphical system expressed through various compositions, powered by an Arduino controller.

design-sf-website-post

Figure 1: The Advertising & Design Club of Canada’s Design San Francisco event poster for the 2018 event.

Recognizing that the variability demonstrated within my generative posters would be part of a larger system, I decided to begin my ideation by revisiting my favourite graphic design text, Josef Müller-Brockmann's Grid Systems book. From there I continued looking at work from the Bauhaus and eventually at more contemporary works by the studio Sulki & Min. It was through examining Sulki & Min's archived projects that I came across a body of work that I felt could be expanded upon within the time parameters and scope of this project (see Figure 2).

sulkimin

Figure 2: Perspecta 36: Juxtapositions by Sulki & Min, 2014

Hypothesis

Through an Arduino powered controller, users will be able to modulate and induce variability into poster design via generative computing.

Materials

The project's hardware components were fairly simple. Altogether, the electrical components used in this experiment included an Arduino Micro board, a potentiometer, and two concave push buttons. See the Fritzing diagram below for the full schematic (Figure 3).

Figure 3: Fritzing diagram of electrical components and Arduino

Because of the constraints of this project and my limited skillset in fabrication, I decided to focus the majority of the project on developing the browser experience. When it came to constructing an actual controller to house the electronics, a simple cardboard jewelry box was sourced (see Figure 4). The controller itself includes the aforementioned potentiometer, a blue push button, and a black push button.

The most important aspect of the physical casing was simply whether it worked or not. Compared to the ideation and execution of the browser aspect of this project, minimal time was spent planning the physical form of the Arduino controller.

56373039641__7d26bfda-8767-46f9-8c53-57fbc5f76509

img_0247

Figure 4: The Arduino Controller component as part of the Generative Poster Experiment (Top). Inside the physical container (Bottom).

Methods

The approach I decided to move forward with was simple: I first had to determine and define the limits of the system's variability. Keeping a strong reference to Juxtapositions by Sulki & Min (Figure 2), the first rule of the graphic system was that all of the circles in each new sketch would sit along divisional lines on the x-axis of the canvas. I originally divided the canvas's width into quarters, but landed on ninths, as I felt that the widescreen of a browser worked best with that ratio. The code therefore places the x position of each generated circle at a randomly selected multiple of 1/9 of the browser window's width. From there, its y position is randomized within the height of the canvas. Originally the concept intended for the user to be able to change the fraction by which the canvas's width is divided using a potentiometer, but the functionality was eventually scrapped because of scope issues (although this feature can be reintroduced manually by uncommenting code in the sketch.js file).

The second rule of the system was that a large circle would appear in one of four quadrants each time the sketch is redrawn. This circle acts as the poster’s primary element and because of its dominance in the composition, I decided to give the user the ability to manipulate its sizing from large to small through a potentiometer linked to the Arduino controller (See Figure 5). This functionality was also the easiest to map to a potentiometer.

Finally, the third rule of the composition was that an equivalent number of medium and very small circles would be drawn compared with a larger proportion of small circles. The ratio of medium and very small circles to small circles was experimented with, but finally a 4:1 ratio (M+VS : S) was decided upon. This ratio was not editable by the user when interacting with the Arduino controller.

Originally I wanted the Arduino portion of the project to also control the rate at which each set of circle sizes grows over the course of the sketch. However, this proved to be outside the scope of this project, as I was not able to find a way to incorporate the functionality from both technical and aesthetic viewpoints.

To give a sense of pacing and movement to the otherwise static original reference, I felt that all of the generated circles should have specific growth rates as they expand to fill the canvas.
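The three rules can be summarized in a short p5.js sketch like the one below. It is a simplified illustration of the system described above; the counts, growth rates, and colours are assumptions, not the exact values used in the project.

```javascript
// Simplified generative-poster system: circles snap to ninths of the width,
// one large circle dominates a quadrant, and every circle grows slowly over time.
let circles = [];

function setup() {
  createCanvas(windowWidth, windowHeight);
  makePoster();
}

function makePoster() {
  circles = [];
  // Rule 2: one large circle in a random quadrant.
  const qx = random([0.25, 0.75]) * width;
  const qy = random([0.25, 0.75]) * height;
  circles.push({ x: qx, y: qy, r: 150, growth: 0.05 });
  // Rules 1 & 3: smaller circles along multiples of 1/9 of the width.
  for (let i = 0; i < 40; i++) {
    const col = floor(random(1, 9));              // which ninth line to sit on
    const size = random([4, 10, 10, 10, 10, 25]); // mostly small, some very small/medium
    circles.push({ x: (col / 9) * width, y: random(height), r: size, growth: 0.02 });
  }
}

function draw() {
  background(255);
  noStroke();
  fill(0);
  for (const c of circles) {
    ellipse(c.x, c.y, c.r * 2);
    c.r += c.growth; // each circle slowly expands to fill the canvas
  }
}

function keyPressed() {
  if (key === "r") makePoster(); // stand-in for the blue push button's redraw
}
```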

img_0257

Figure 5: Ideation of the Generative Poster project

The variability of the posters would only be recognized if the user redrew the sketch, so a refresh/redraw function was incorporated into the Arduino controller through a blue concave push button. By refreshing, the user is able to cycle through randomly generated posters and decide which composition suits them best. Finally, the print screen/save image function was assigned to the other push button.
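On the browser side, the two buttons only need to trigger a redraw and a save. A hedged sketch of that handling is below, assuming the Arduino sends single-character codes over serial via the p5.serialport library; the characters and port name are placeholders, and makePoster() refers to the sketch above.

```javascript
// Map incoming serial lines from the controller to redraw / save actions.
// Assumes p5.serialport; "R" and "S" are placeholder codes sent by the Arduino.
let serial;

function setup() {
  createCanvas(windowWidth, windowHeight);
  serial = new p5.SerialPort();
  serial.open("/dev/tty.usbmodem1411"); // placeholder port name
  serial.on("data", onSerialData);
}

function onSerialData() {
  const incoming = serial.readLine().trim();
  if (incoming === "R") {
    makePoster();                           // regenerate the composition (see the sketch above)
  } else if (incoming === "S") {
    saveCanvas("generative-poster", "png"); // p5's built-in save-to-image
  }
}
```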

Results

I believe that this project was executed to the standard I set at the beginning. To my excitement, during the critique I was able to see some of the different posters people made based on the algorithm and system I laid out for my project (see Figures 6 and 7).

this-that-poster-1

Figure 6: Example of Generative Poster

The idea is for the user to sense when a composition forms into something that visually resonates with them; they can then choose to save it. During the critique of this project, the user's selected composition was printed onto paper, framing that moment in time, so that the user could have their own physical copy of the experience.

photocollage

Figure 7: Randomly generated posters during Experiment 3 Critique

References

The Coding Train. (2017, January 9). Coding Challenge #50.1: Animated Circle Packing – Part 1. Retrieved from https://www.youtube.com/watch?v=QHEQuoIKgNE&feature=youtu.be

Hertzen, N. V. (2016). html2canvas – Screenshots with JavaScript. Retrieved from https://html2canvas.hertzen.com/

NYU ITP. (2015, October 4). Lab: Serial Input to P5.js – ITP Physical Computing. Retrieved from https://itp.nyu.edu/physcomp/labs/labs-serial-communication/lab-serial-input-to-the-p5-js-ide/

Puckett, N., & Hartman, K. (2018, November 2). DigitalFuturesOCADU/CC18. Retrieved from https://github.com/DigitalFuturesOCADU/CC18/tree/master/Experiment%203

StackOverflow. (2011, March). How to randomize (shuffle) a JavaScript array? Retrieved from https://stackoverflow.com/questions/2450954/how-to-randomize-shuffle-a-javascript-array

Sulki & Min. (2014, March). Archived Designs. Retrieved from http://www.sulki-min.com/wp/wp-content/uploads/2014/02/P36_1.png

Voice Kaleidoscope

 

screen-shot-2018-11-12-at-10-53-43-pm

 

 

screen-shot-2018-11-12-at-10-45-30-pm

 

Overview

Voice Kaleidoscope takes voice patterns from the computer's microphone and outputs them onto a circular LED matrix to make colours and patterns. It was created as a tool for pattern thinkers on the autism spectrum who have trouble interpreting facial expressions.

img_2764

Concept

Voice Kaleidoscope was created as a tool to help communicate emotion through patterns and colours. Facial emotion perception is significantly affected in autism spectrum disorder (ASD), yet little is known about how individuals with ASD misinterpret facial expressions in ways that make it difficult for them to accurately recognize emotion in faces. ASD is a severe neurodevelopmental disorder characterized by significant impairments in social interaction, in verbal and non-verbal communication, and by repetitive/restricted behaviours. Individuals with ASD also experience significant cognitive impairments in social and non-social information processing. By taking vocal expression and representing it as a pattern, this device can act as a communication tool.

 

 

kaledoscopebackgroundillustration12

 

There are many ways that voice can be turned into patterns. I was curious about the fluctuations in voice and emotion, and what was interesting was seeing sound waves translated into frequency. I wanted to see what these patterns would look like and how they could help me conceptualize the design of my own project. Through a ham radio club I found someone who was willing to talk to me about sound frequency and the beautiful patterns of sound seen through an oscilloscope.

 

screen-shot-2018-11-12-at-11-35-56-pm

Ideation

Early in the process I was pretty secure in my concept. Having a friend with a family member who relates more to colours and patterns, I always wondered why there wasn't a tool to facilitate the interpretation of human emotions for people who deal with these barriers. It was also very important for me to get out of my comfort zone in regards to coding. I wanted to embark on a journey of learning even if I was afraid of not sticking with what I already knew I could execute. I knew that sending output from P5.js to the Arduino would be much more challenging than the input infrastructure I had gotten comfortable with, but I was adamant that this also be a journey of taking chances and true exploration. This project was about communication and growth.

img_2717

While researching aspects of pattern thinking and ASD tools in classrooms, my project went through an initial metamorphosis. At first I thought of this design as a larger light matrix with literal kaleidoscope features; further into the thought process I decided this communication tool should be more compact, easy to fit into a backpack or most carrying mechanisms. Earlier versions also had construction plans for a complex face with cut-out shapes.

 

screen-shot-2018-11-12-at-11-46-38-pm

 

Process

I started with the code right away, since I knew my biggest hurdle would be getting P5.js working with the Arduino. I started to think about the architecture of the project. My first step was to think through the flow of how voice would move through P5.js and into the Arduino, and what that code would look like.

Initially I had to decide how the microphone was going to be incorporated into the design. I explored adding a microphone to the breadboard versus using the microphone in the computer. At this stage in the process I got started on the serial control application right away, and there were many issues with the application crashing. The first step was to design the voice interface in p5.js, which was a difficult task: I wanted to incorporate the same number of LEDs into the design without it being overcomplicated and messy. While designing the interface I began testing the microphone's interaction with p5.js. I was trying to capture the animation of the voice flickering in the p5.js sketch and started to look up code variations for turning the built-in microphone on.
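Turning the built-in microphone on and reading its level in p5.js takes only a few lines with the p5.sound library. The sketch below is a hedged illustration of that step; the scaling of the level and the decision to send it over serial as a single byte are assumptions, not the project's exact code.

```javascript
// Read the built-in microphone with p5.sound and send its loudness over serial.
// Assumes the p5.serialport library and its serial-control app; port name is a placeholder.
let mic;
let serial;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start();                 // ask the browser for microphone access
  serial = new p5.SerialPort();
  serial.open("/dev/tty.usbmodem1411");
}

function draw() {
  background(0);
  const level = mic.getLevel();            // 0.0 (silence) to ~1.0 (loud)
  const size = map(level, 0, 1, 10, 400);  // flicker an ellipse with the voice
  fill(255);
  ellipse(width / 2, height / 2, size);
  serial.write(Math.round(map(level, 0, 1, 0, 255))); // one byte of loudness for the Arduino
}
```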

After this was set up and working, I moved back to the JSON and serial control app. There were still connection issues. In the first stage of coding, the console could not stay on an open port, so I kept testing variations of turning the serial control on and getting it to stay on a specific port. I discovered the port kept changing frequently; reinstalling the app fixed the issue temporarily.

fritzing-4-boards

Putting together the board and LED lights:

For the LED matrix I decided to use three WS2812B LED pixel rings. For the initial testing of the rings, and while deciding how to power and lay out my breadboard, I kept the rings separate.

screen-shot-2018-11-12-at-10-28-59-am

I had to figure out how to daisy-chain the rings so that a single data-in/data-out wire led to the Arduino. While powering up the lights I discovered that an outside power source of 5 volts wasn't enough. I did some online sleuthing and found that using a 12- or 9-volt power source and running it through a DC-to-DC power converter would be better for my LEDs.

screen-shot-2018-11-12-at-11-55-28-pm

Coding the Arduino:

During this process I had to decide what the light patterns were going to look like. I went through many colour variations and patterns and decided to use a chase pattern, with colour variations for loudness. How loud or soft the voice was determined how many times the light travelled around the rings. I had to test variations of brightness: even with the 9-volt power source the LEDs were draining the power quickly and flickering. The rings proved to be very different operationally than the strips.

Finalizing the board and testing:

Once the lights and board were operational, I dove into testing the p5 files with the Arduino. There were many calibrations between the p5 sketch and the Arduino. At first I could see that the port was open but was not sure whether there was communication in the browser console. Since I couldn't use the serial monitor in the Arduino IDE to see anything, I initially had a hard time discerning whether the Arduino was connecting. I could see numbers running in the console and an open port, but was still not able to get an open connection. I went back to researching what notifications I should see in the console if the Arduino is connected. I found the connection notification but still could not get it running after going over the code. Finally, after a reboot, my microphone and p5.js files were connecting with the Arduino and I could see my voice patterns in the matrix.

Presentation

This experiment brought my learning experience to a whole new level of JSON and serial communication. I learned the ins and outs of not just input but output as well. Even though there were many connection issues, working through these problems made me a better coder and builder. Getting feedback about expanding on a much-needed communication tool, and seeing how these ideas could improve people's lives, encouraged me to keep following this line of thought and to continue exploring ways of assisting people through technology.

Added notes on future expansion for this project:

  • To make this device in different sizes: smaller versions for wearables, or larger ones for environments such as presentations.

  • Incorporating a study on voice patterns and light, and how these relate to autism and pattern-oriented thinkers.

  • To expand the p5.js interface and the overall design to reflect any findings from that study.

References

Article on Autism

https://www.forbes.com/sites/quora/2017/07/05/research-shows-three-distinct-thought-styles-in-people-with-autism/#3102323a221e

P5.js to Arduino

https://gist.github.com/dongphilyoo/1b6255eb2fb49f17c7a2ce3fd7d31377

Serial Call Response

https://www.arduino.cc/en/Tutorial/SerialCallResponse

Article on Autism and Emotions

http://theconversation.com/people-with-autism-dont-lack-emotions-but-often-have-difficulty-identifying-them-25225

Paper on Emotions and Pattern Research

https://pdfs.semanticscholar.org/7e7e/d9bbf56ac238451a7488389731f58dc7a715.pdf

References p5

https://p5js.org/reference/

 


EducationTool


Experiment 3: This+That

 

RGB-Geometry Control Tool

 

By Mazin Chabayta

 

 

Project Description

My project is made of 3 potentiometers, 3 sliders, and a strip of LEDs, all connected to an Arduino Micro, with an external power supply (3 V × 3) for the LEDs. It functions as a basic graphic tool that gives the user control over the color of the background and the geometric shape in the foreground.

It is mainly designed as an educational tool for children with autism; however, as I was building it, I identified many other interesting variations and applications.

 

Code

github.com/mazchab/CnC—Experiment-3—Arduino-p5

 

Video

https://youtu.be/wBgNxcPWego

 

Concept

 

For this experiment, we were tasked to create an interaction between physical components (Arduino Micro) and a digital interactive canvas (p5).

 

Coming from a very tactile, hands-on background, working with coding and digital builds was a challenge. However, I am interested in working with basic visual elements and creating interesting artistic pieces and interactions with them. So, my goal was to enter this p5 challenge from a direction I am interested in and enjoy working with.

 

Once I identified what I wanted to learn and explore during this experiment, I started considering a concept that would resonate with me and speak to something I really care about. Coming from a background in which I was exposed to several cases of autism, I decided to venture into bringing together the worlds of technology and education for special needs. Through my research I discovered that there have been many positive results from this merger: "Despite exciting preliminary results, the use of ICT remains limited. Many of the existing ICTs have limited capabilities and performance in actual interactive conditions" (Boucenna et al., 2014). In the abstract, the authors agree that more effort needs to be invested in utilising technology for educating children with autism.

 

Once I identified the challenges ahead of me, I started thinking of a function for this project and eventually landed on this idea: an educational tool and toy for children with special needs, specifically autism, that is beneficial, entertaining and attractive. I have also always believed in DIY concepts and in creating solutions that people can download and replicate easily at home.

 

Since I am not a professional in the field, I did not want to rely solely on my personal experience dealing with children with special needs. So, I started researching available tools and theories about what has been observed to work best. I came across an article called "Teaching Tips for Children and Adults with Autism" by Dr. Temple Grandin, who has personal experience of growing up with and overcoming autism. In her article, Dr. Grandin provides valuable insights into how the minds of autistic children work and what tools and techniques work best. Grandin suggests: "Use concrete visual methods to teach number concepts. My parents gave me a math toy which helped me to learn numbers. It consisted of a set of blocks which had a different length and a different color for the numbers one through ten" (Grandin, 2002). As a graphic designer, I am trained to establish a visual line of communication with my audience, so this presented itself as an opportunity, and a challenge, to find the most appropriate visual language to communicate with a child with autism. In addition, Grandin also believes that "Many individuals with autism have difficulty using a computer mouse. Try a roller ball (or tracking ball) pointing device that has a separate button for clicking. Autistics with motor control problems in their hands find it very difficult to hold the mouse still during clicking" (Grandin, 2002). Based on those key insights from Dr. Grandin, I was able to gain an understanding of what the tool should physically look like and the type of interactivity I would want the user to have with it.

 

Observing existing toys designed for children, I decided to use basic geometry and colors as the line of interaction between the device (the user) and the screen. Since I wanted to keep the affordances of the device simple, I compared the input devices available to us and decided that potentiometers and sliders would be appropriate options. In the end, the primary graphic elements I decided to use were the 3 RGB values, plus the size, rotation, and number of angles of the shape. Eventually, I added a strip of LEDs in order to reflect the colour on the screen in a physical component, which I felt was necessary for the child to see the interaction happen in real time.

 

Design & Build

In the beginning I wanted to create a small four-sided pyramid with the interaction knobs spread across its sides: one geometry knob on each of three sides and the 3 RGB knobs on the fourth. However, this direction quickly started showing issues. My main concern was that it is not suitable for children with special needs, especially autistic children, because the controls are spread out around the device, which can be confusing. Since it was not clear which knob does what, I decided to put all of the interactions on one surface.

Figures: Initial design sketches.

Once I started sketching the new design, I noticed that having 6 similar potentiometers could get confusing, and since RGB levels are usually shown as sliders on screen, I decided that the RGB potentiometers should actually be sliders. So, after securing all the input components I needed to build my device, I started connecting cables and ensuring that all my inputs functioned properly and, once soldered, maintained a secure connection.

 

Figures: First breadboard prototype and Fritzing sketch.

 

Video of initial testing

 

 

Working with physical prototypes is always very beneficial for me, because I quickly get a sense of the feel of the device, and more importantly, the size and space this device gives me. So, after working with my initial prototype for some time, I realized that I needed a bigger prototype in order to house all components and maintain accessibility to the inside of the device. At this point, I had a strong idea of the measurements and the shape of the final prototype, which gave me more time to focus on the functionality and the coding side of the process.

 

 


 

For my second and final prototype, I repurposed a small cardboard box, which had the opening/closing mechanism I was looking for, so I could access the inside easily without disturbing my connections.

 


 

 

Code

Since I am still trying to wrap my head around p5, I wanted to identify a personal strength in p5 and a personal challenge. My strength when it comes to this project would be the geometry, and my challenge was the ‘map’ function and then communicating it from Arduino to p5.

 

Looking through the examples on p5, I found several inspirations for how I could add interactions. What stood out the most was the regular polygon shape in p5, which lets the user change the size, the rotation, and the number of sides of the polygon, essentially allowing the shape to be changed completely. I saw a great opportunity in this, since it is directly related to my concept, and I quickly decided to use this function as my main foreground element. The interaction was based on the values of the three potentiometers, mapped in Arduino from the 0–1023 analog range down to 0–100 for the number of polygon sides, 0–100 for rotation, and 0–800 for size.
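The polygon helper below is a small sketch of that foreground element, adapted from the regular-polygon example in the p5 reference; the parameter names are mine, and in the actual project the sides, rotation, and size values arrive over serial from the Arduino.

// Regular polygon helper (adapted from the p5 reference example).
// npoints, rotation and radius are fed by the mapped potentiometer values.
function polygon(x, y, radius, npoints, rotation) {
  let angle = TWO_PI / npoints;
  push();
  translate(x, y);
  rotate(rotation);
  beginShape();
  for (let a = 0; a < TWO_PI; a += angle) {
    vertex(radius * cos(a), radius * sin(a));
  }
  endShape(CLOSE);
  pop();
}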


Figures: Screenshots of the p5 and Arduino code.

 

After that, I mapped the values of the three sliders from the 0–1023 range down to 0–255 and assigned each to be an R, G, or B value. This combination gave the user full control over the visual elements on the page while maintaining the threshold of simplicity in the visuals that is necessary for my concept as an educational tool for children with autism.
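Putting the two mappings together, the draw loop ends up looking roughly like the fragment below, which uses the polygon() helper above. The six globals are placeholders for the values read from the Arduino each frame (see the serial sketch that follows).

// r, g, b come from the sliders; nsides, rot and shapeSize from the potentiometers.
let r = 0, g = 0, b = 0;
let nsides = 3, rot = 0, shapeSize = 400;

function draw() {
  background(r, g, b);          // slider values drive the background colour
  noStroke();
  fill(255);
  polygon(width / 2, height / 2, shapeSize / 2, max(nsides, 3), radians(rot));
}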

 

Once I had my mapped values in the ranges necessary for meaningful interaction, the rest was simple application of what we learnt in class. I assigned the necessary port, established a connection between Arduino and p5, and started seeing a feed of data flowing from the physical components onto the screen, in real-time, which was quite satisfying.
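Concretely, the port assignment and data handling looked something like the fragment below, which fills in the globals used in the draw loop above. It assumes the p5.serialport library and the serial control app; the port name and the comma-separated message format (values already mapped on the Arduino side) are assumptions for illustration, not the exact code from my repository.

let serial;
let portName = '/dev/cu.usbmodem1421';   // placeholder: the Arduino's port

function setup() {
  createCanvas(800, 800);
  serial = new p5.SerialPort();
  serial.on('data', gotData);
  serial.open(portName);
}

function gotData() {
  let line = serial.readLine();           // e.g. "120,45,200,6,50,400"
  if (!line) return;
  let vals = split(trim(line), ',');
  if (vals.length === 6) {
    r = Number(vals[0]);                  // red
    g = Number(vals[1]);                  // green
    b = Number(vals[2]);                  // blue
    nsides = Number(vals[3]);             // polygon sides
    rot = Number(vals[4]);                // rotation
    shapeSize = Number(vals[5]);          // size
  }
}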

 

This process was very educational for me because it was the first time I was able to create, however simple, a controller that lets the user interact with the computer in an unconventional way, which was eye-opening. It inspires me to push those boundaries and experiment with more interactions to achieve different results.

 

 

Going Forward

 

Even though this concept was created for children with autism, while working on it I discovered more applications for it. For example, since I have a background in tattoo art, I can see how a device like this could be used as a quick tool to generate complex patterns from a simple geometric motif. A tool like this would be a great addition for a tattoo artist, since it maintains the hands-on feel of tattoo design rather than relying on a complex computer program, which might take away from the authenticity of the experience. Of course, the functionality would have to be changed to reflect the requirements of a tattoo artist; since colour might not be of importance to the artist, the sliders could instead apply the mathematical equations used to generate complex patterns in place of the RGB values. I am interested in pursuing this option and will probably build different variations of this device one day.

Figure: Tattoo pattern examples.

 
