EAT ME!

• project title 

EAT ME!

• names of group members 

Neo Chen & Rajat Kumar

• project description 

The game we created was inspired by the classic game SNAKE, which we believe most people have played before. We wanted to bring this game into a more physical, interactive form. It is a game that requires all players to use their phones, and the number of players should preferably be even, since each round is played in pairs. (When a player is singled out, they are given extra points to balance out the game.)

Instructions:

Shake the device to generate your random number, and don’t let anyone else see it 🙂

You will compete in pairs; show your number to your opponent.

The person with the bigger number wins.

The person who loses has to follow the winner for the rest of the game (forming the snake).

But that doesn’t mean your game is over.

Once you join the winner, your number goes to the winner.

In the next round, the winner competes with a total that includes the loser’s number.

The same applies for the rest of the rounds; a player who is singled out gets extra points towards their total to balance out the game.

The last person standing wins the game!

Extra: The final winner will have a long line of “losers” following behind, forming the shape of the SNAKE. The winner is asked to take a selfie with everyone else standing behind.

• visuals (photographs & screenshots)


We found some existing code online for face tracking and filters in p5.js that we wanted to incorporate, but unfortunately it only worked in the desktop browser. When we tried it on a phone, the image lagged and the filter would not show up. This was not what we were looking for, so we decided to give up on it as part of our game.


• edited video of the 20-screen experience (1 minute or less)

• code: https://github.com/Rajat1380/CODE_OCAD/tree/master/EXPERIMENT_1

 

• Project Context 

Our first intention was to make a mini ice-breaker game, since all of us had just joined the program and barely knew each other. We did some research on social games, looking for a good one to transform into our own, but we could not agree on one that best represented our idea. One day the thought just popped into my mind: why don’t we have everyone play the snake game that most of us have played before, but this time give it a twist so that players have more chances to be interactively involved?

After finalizing the concept of our game, we broke development down into the minimum viable product of code needed to make the game function. We first prioritized the random number generation and started building. The preload function came to our rescue: initially it worked fine with 4-5 images to pick from randomly, but it failed with 21 images. We spent most of our time figuring this out until Jun Lee came to us and pointed the error out. The main problem was with the for loop; it ran before all the images had loaded, so we saw nothing on the screen. We ended up preloading all the images manually. The images then showed on the desktop but not on mobile; the problem was that we were previewing in present mode, and it was solved by previewing in fullscreen mode. We also had to fight with Adobe XD for 3 hours to get the images in PNG format.
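
For reference, a minimal sketch of the loading pattern that eventually worked is below. The filenames are placeholders; the key point is that every loadImage() call sits inside preload(), whether the calls are written out one by one (as described above) or generated in a loop, so that setup() and draw() only run once all 21 images are ready.

```javascript
// Minimal sketch of the preload fix (filenames are placeholders).
let numberImages = [];
let myNumber;

function preload() {
  // Every loadImage() inside preload() is counted by p5, so setup()
  // does not run until all 21 images have finished loading.
  for (let i = 1; i <= 21; i++) {
    numberImages.push(loadImage('assets/number' + i + '.png'));
  }
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  myNumber = floor(random(numberImages.length)); // index of the image to show
}

function draw() {
  background(255);
  image(numberImages[myNumber], 0, 0, width, height);
}
```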


We originally wanted the game to be a combination of the snake game and a bluffing game, letting players bluff about the number they randomly draw, which would also make it more challenging, like a psychological game. But that would lengthen each round, and we were having trouble with how to get people into pairs (we later decided to use handwritten cards to help with this). Not being able to use a network between phones was no doubt a huge issue, because we wanted to lower the chance of people drawing the same number. To solve this, we decided that paired players who draw the same number simply redraw. (Although the chance of two people getting the same number and also being paired together is pretty low.)

We implemented the random number generation with the device-shake ability. With the MVP of our game ready, we tested the gameplay with the numbers. We then playtested with a larger number of players and found one flaw: doing the math on the pairing, after the 3rd and 4th rounds one player gets left out. We needed to balance the gameplay in order to maintain the fun aspect of the game.
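
A rough sketch of the shake-to-draw interaction is below, assuming each player’s draw is simply an integer from 1 to 20; the threshold value is an assumption, not the one used in the final game.

```javascript
// Shake the phone to draw a random number (sketch, not the final game code).
let myNumber = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
  setShakeThreshold(30); // how hard the phone must be shaken (assumed value)
  textAlign(CENTER, CENTER);
  textSize(96);
}

// p5 calls deviceShaken() when the phone's acceleration passes the threshold
function deviceShaken() {
  myNumber = floor(random(1, 21)); // random integer from 1 to 20
}

function draw() {
  background(240);
  fill(0);
  text(myNumber > 0 ? myNumber : 'Shake!', width / 2, height / 2);
}
```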


After some math, we decided to give 15 points in the 3rd round and 25 in the 4th round to the player who got left out.


Then we decided to develop some more functionality. We implemented a button, as we were thinking about the bluffing aspect of the game. Surprisingly, the button showed on the desktop but not on the mobile phone. We searched StackOverflow and found one clue suggesting there was some OpenGL-related code we had to write in the HTML. We tried but failed. All we wanted from the button was to hide the number and show it when required. Since that did not work out, we shifted our focus to changing the background of the number to make it less readable. For the end of the game, we thought of creating a selfie function with the face tracking filter mentioned above, but since it only worked on the desktop, lagged on phones and never showed the filter, we decided to give up on it as part of our game.
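
For comparison, this is roughly how a show/hide toggle is usually written with p5’s createButton(); it is an illustrative sketch of the intended behaviour, not the code that failed on mobile, and the labels are made up.

```javascript
// Sketch of the intended hide/show behaviour using p5's DOM button.
let myNumber;
let hidden = true;

function setup() {
  createCanvas(windowWidth, windowHeight);
  myNumber = floor(random(1, 21));
  textAlign(CENTER, CENTER);
  textSize(96);

  const toggleBtn = createButton('Show / hide');
  toggleBtn.position(20, 20);
  toggleBtn.mousePressed(() => {
    hidden = !hidden; // flip visibility on every press
  });
}

function draw() {
  background(240);
  fill(0);
  text(hidden ? '?' : myNumber, width / 2, height / 2);
}
```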

The major issue we had during the whole process was bringing what we wrote in the p5.js web editor onto mobile devices. We found that a lot of functions we wanted to put into our game only worked on the computer, which limited the final result we were able to present.

• Presentation


(The snake formation)

• References

Face Tracking and Filter https://editor.p5js.org/jeeyeonr/sketches/Hyy5W3GiQ

Input and Button https://p5js.org/examples/dom-input-and-button.html

Create Button https://p5js.org/reference/#/p5/createButton

Device Shaken https://p5js.org/reference/#/p5/deviceShaken

Set Shaken Threshold https://p5js.org/reference/#/p5/setShakeThreshold

Snake (Video Game Genre) https://en.wikipedia.org/wiki/Snake_(video_game_genre)

Bluff (Poker) https://en.wikipedia.org/wiki/Bluff_(poker)

 

 

Forest Escape!

Group 8: Katlin Walsh & Jessie Zheng

Project Description

“In order to build trust and friendship in each other as we get into our digital futures program, the university has ordered us on a mandatory team building exercise in the woods. 

What Kate and Nick thought was going to be a traditional hike in the woods with some trust falls quickly turns into something unexpected. After walking for hours we finally admit to ourselves that we’re lost, but not to worry! As a digital futures cohort, we all pull out our phones and try to find a map to get home.

Alas, with no GPS signal, the only thing that seems to be working is a strange webpage, directing you to the location of a paper map. You have to work together to make sure you’re not looking for a clue that’s already been found. And be careful, as soon as you lift your finger, the clue will disappear.”

Forest Escape! is a game that builds teamwork and communication skills within a group. Not only does it engage players within a physical meeting space, but it also encourages them to interact within a virtual space through their phones. By combining traditional geocaching, escape-the-room, and team-building exercises, Forest Escape! aims to get players to think critically about how to use their individual devices effectively as a team. The game is purposefully designed so that any number of players can participate, prompting groups to create a strategy in order to work together and complete the challenge.


random (ANIMALS, MONSTERS & HUMANS)

PROJECT TITLE, SUBTITLE
random (ANIMALS, MONSTERS & HUMANS)
A fun grouping game that entails random (shuffling, counting, clustering).

TEAM MEMBERS
Priya Bandodkar & Jun Li

PORTFOLIO IMAGES


PROJECT DESCRIPTION

Random (ANIMALS, MONSTERS & HUMANS) is a smartphone-based interactive physical game experience. The game facilitates both person-to-person and person-to-phone interactions. It can be played with 20 players and up, in an open space.

It is a character-based game containing a variety of single and mixed sets of personas, which are either animals, monsters or humans. On hitting start, the code shuffles, randomises and assigns a persona or a mixed set to each player. This is completely random, based on when the participant chooses to stop the shuffle, thus leading to a dynamic range of character-sets on the floor. The host then announces a pairing condition between these animals, monsters and humans (for instance, 1 human, 2 animals, 2 monsters need to form a team). The players now have to group up with other participants in a way that their cluster meets the announced pairing condition. They then advance to the next round. This continues until we have a winner(s).
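
A simplified sketch of the shuffle mechanic is shown below, assuming the character images are pre-loaded into an array (the filenames and count are placeholders); tapping the screen stops the shuffle on one persona.

```javascript
// Tap-to-stop character shuffle (simplified sketch; filenames are placeholders).
let characters = [];
let current = 0;
let shuffling = true;

function preload() {
  for (let i = 1; i <= 18; i++) {
    characters.push(loadImage('characters/char' + i + '.png'));
  }
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  imageMode(CENTER);
}

function draw() {
  background(255);
  if (shuffling) {
    current = floor(random(characters.length)); // flip through the personas
  }
  image(characters[current], width / 2, height / 2);
}

function touchStarted() {
  shuffling = !shuffling; // tap to stop on a persona; tap again to reshuffle
  return false;           // prevent default browser touch behaviour
}
```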

It was interesting to see how the game playfully turned out to be an integrity test, as some players sneaked in multiple shuffles to find a matching character, and thus survive longer in the game.

PLAY THE GAME


PROJECT CONTEXT

Ideation

We explored and brainstormed to generate concepts that could weave in what we thought was the essence of this experiment: creating one connected experience with 20 screens without networking. We also wanted to build an engagement that allowed users to have fun. Some of the ideas that came to mind were creating a shopping aisle or gallery of unusual products, using smartphones as interactive displays, and building a world map installation, with phones as different regions/cities around the world interactively depicting their landscapes and interesting facts. Although these concepts had quite a bit of creative scope and aligned with the initial brief, we realised they lacked the element of fun. We did a second round of ideas, drawing inspiration from ice-breaker games like Musical Chairs.[1] And that’s when we came across the ‘Group in the numbers of…’ game by Michael Hartley.[2] It requires each player to pair up with other participants so that together they total the number announced by the host. It was a simple concept, but it had the potential to serve as a footing that we could build on.

Concept Development

Taking this concept further, we wondered what would happen if each player was given a dynamic, random number for each round. Visualising it in the digital context, the smartphone was the ideal choice for introducing this twist.

Prototyping

Looking up code resources, we realised that the p5.js math reference functions could help generate a dynamic number with each click. Here is a process video of testing this out:

Furthermore, to lend an organic feel to the game, we brought characters into play instead of numbers. We also sub-categorised the characters into families of animals, humans and monsters (earlier, aliens) with a view to introducing an additional layer of challenge. We further considered adding mixed sets of characters to shake up the combination possibilities between participants, as drawn below. Here is a process video of testing the functionality with images:


Game Flow Visualisation

We mapped the flow of the game early on for two reasons: to gauge the scope and plan for milestones, and to foresee possible challenges. It was succinctly visualised as below:

  1. The game starts with 20 individual participants in an open area with 1 phone each.
  2. The game is established using an animation with the game title and a button to proceed and play.
  3. The rules of the game are explained to participants verbally and/or through a rule page within the game.
  4. The participants trigger the shuffle on their respective phones, either with a phone shake or by tapping the screen.
  5. Pictures or animated GIF loops of characters start shuffling in a random order.
  6. Players tap the screen to stop the shuffle at one character.
  7. The facilitator picks a pairing condition between different characters based on numbers (from a bowl of paper chits, which may need to be improvised depending on the players left) and announces it.
  8. Players have to pair up into groups that meet the announced criteria. Players have 30 seconds for this.
  9. Players quickly pair up to form teams.
  10. The ones left out are eliminated.
  11. We continue with more rounds (3-4 rounds) until we have a winner(s).

Code

We built on the random() functionality tested during prototyping. While developing the code with GIFs, we ran into a roadblock using random GIF loops in place of the images. The GIFs did not work on either browser we tried (Safari and Chrome). Looking up references online about animated GIFs (see Other Online Resources), we encountered a way to play a single GIF, but the same logic did not cater to loading 18+ GIFs. We also realised that a repository of GIFs would take a toll on the loading time of the game due to the relatively high file sizes. To overcome this predicament, we decided to introduce color into our images. To complement the idea further, we added code that picked a different background color from a selected palette each time. This multiplied the image possibilities and made the visuals less repetitive.
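
A short sketch of the palette idea follows; the hex values and filename are placeholders. Passing an array to random() returns one of its elements, so each page load pairs the character with a different background.

```javascript
// Random background color from a 4-color palette (placeholder values).
let palette = ['#F2C14E', '#F78154', '#5FAD56', '#4D9078'];
let bgColor;
let character;

function preload() {
  character = loadImage('characters/monster1.png'); // placeholder filename
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  bgColor = random(palette); // random() with an array returns one element
  imageMode(CENTER);
}

function draw() {
  background(bgColor);
  image(character, width / 2, height / 2);
}
```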

Visual Aesthetics

We were initially exploring graphic styles that would lend well to creating GIF loops. The purpose of including animated characters was essentially because they would add so much more life to the screen. We had narrowed down to the pixel art and the doodle style.

We went ahead with the doodle style (right) because it had more scope to bring out different character personalities using facial expressions. Below are the character designs and GIF loop:

Since we were unable to proceed with GIFs midway through the development phase, we decided to instead uplift the visual aesthetics by adding colors from a 4-color palette.


The idea of randomising the background color in code led to an even more diverse image repository for the game.

GAME TESTING AND CRITIQUE

We launched the ‘random (ANIMALS, MONSTERS & HUMANS)’ game at the Creation and Computation Experiment 1 Critique on 27 September, 2019. The video features highlights from the round of this game played by our classmates:

Observations during gameplay

  • Players that got eliminated in the early rounds were keen on being a part of the game in some other way.
  • It was a playful integrity test as some players sneaked in multiple shuffles to find a matching image to stay longer in the game.
  • Participants adapted to the game fairly quickly, and were able to pair up in the initial 10 seconds or so.
  • There was a glitch in viewing the gameplay on the Android interface leading to cropped images.
  • Everyone seemed to have enjoyed it. There was a request for more rounds in order to decide a single winner.

LEARNINGS

  • Building interactivity in p5.js using random images with an optimal loading time.
  • Working with the constraints of screen size on different platforms by adding the background color in the code.
  • For Priya: Embracing the steep learning curve that came with coding for the first time, and developing physical computing skills in a span of 20 days.
  • For Lee: Applying p5.js to create a working prototype of the idea and learning design skills from Priya.

CONCLUSION

  • Including GIF loops would have made the game experience even more interesting and dynamic.
  • We would be able to restrict players from tapping multiple shuffles (read, cheating) in one round by adding the ongoing round number functionality on the screen.
  • Rotation and device-shaken functionalities would not be suitable for this game, as players need to run and group up quickly.
  • The game itself was successful, as the players enjoyed and wanted to play more rounds. We thus achieved the objective we had in mind.

TAKEAWAYS FOR FUTURE ITERATIONS

  • Including the ongoing ‘number of round’ functionality.
  • Finding a way to involve players that get eliminated in the initial rounds.
  • Working on including GIFs.
  • Adding sound effects to make the game more interesting and playful.
  • Making it more versatile to work on different operating systems and browsers.

CODE

https://editor.p5js.org/leelijun961118/present/PBjABRx9C6  (developed for smartphone platform)

GITHUB

https://github.com/LLLeeee/Creation-Computation-Project1/tree/master

REFERENCES

[1] “How to Play Musical Chairs.” wikiHow, 29 Mar. 2019. https://www.wikihow.com/Play-Musical-Chairs.

[2] Hartley, Michael. “Game of Getting Into Groups of Given Numbers | Dr Mike’s Math Games for Kids.” Dr-mikes-math-games-for-kids.com, 2019. http://www.dr-mikes-math-games-for-kids.com/groups-of-given-numbers.html.

OTHER ONLINE RESOURCES

Random functionality: https://p5js.org/reference/#group-Math

Phone functionality: Touches: https://p5js.org/reference/#/p5/touches

Using GIFs in p5.js:

Discussion: https://github.com/processing/p5.js/issues/3380

Library: https://github.com/wenheLI/p5.gif/

Example: https://editor.p5js.org/kjhollen/sketches/S1bVzeF8Z

p5.js to GIF: https://www.youtube.com/watch?v=doGFUaw_2yI

Array: https://www.youtube.com/watch?v=VIQoUghHSxU&list=PLRqwX-V7Uu6Zy51Q-x9tMWIv9cueOFTFA&index=27

Loading Animation: https://www.youtube.com/watch?v=UWgDKtvnjIU

Eggxercise

 


Eggxercise                                                     

 (Creation & Computation DIGF-6037-001)

Team: Jignesh Gharat & Nadine Valcin
Mentors: Kate Hartman & Nick Puckett


Project description

Eggxercise is a hybrid digital and physical version of the traditional egg-and-spoon race where participants balance an egg on a spoon while racing to a finish line. It replaces the typical raw egg used in the classic version with a digital egg on a mobile screen. If the egg touches the side of the screen, it breaks, displaying the contents of a cracked egg with the message “Game over”. 

Because of space and time constraints, a relay race format was used for the in-class demonstration. The participants were divided into 3 teams whose members had to go through a simple obstacle course made out of a row of chairs. When they finished their leg, they had to pass the phone to the next member of their team without breaking the egg. If at any moment the egg broke, the participant holding the phone had to reload the game and wait for the 5-second countdown to expire before resuming the race.

Presentation Day

Project context

We both shared a strong desire to explore human-computer interaction (HCI) by creating an experience with an interface that forced participants to use their bodies in order to complete a task. That physical interaction had to be sustained (unlike a momentary touch on the screen or click of a mouse) and had to be different from the many daily interactions people have with their smart devices such as reading, writing, tapping, and scrolling. In other words, we were searching for an experience that would momentarily disrupt the way people use their phones in a surprising way that simultaneously made them more aware of their body movements. 

We also wanted to produce something that was engaging in the simple manner of childhood games that elicit a sense of abandon and joy. It had to have an intuitive interface and an immediacy that didn’t require complex explanations or a high level of skill, but it simultaneously had to provide enough of a challenge as to require a high level of engagement. As Eva Hornecker remarks:

“We are most happy when we feel we perform an activity skillfully […] Tangible and embodied interaction can thus be a mindful activity that builds upon the innate intelligence of human bodies.” (23)


poseNet() Library (ML)

We explored the different ways in which the body could be used as a controller, mainly through sound and movement. PoseNet, a machine learning model that allows for real-time human pose estimation (https://storage.googleapis.com/tfjs-models/demos/posenet/camera.html), offered the possibility of interacting with a live video image of a person’s face. We envisioned an experience that would attach a virtual object to a person’s nose and allow the person to move that object along a virtual path on a computer screen. This led to the idea of a mouse following its nose to find a piece of cheese. It would force users to move their entire upper body in unusual ways in order to complete the task.

We then moved on to the idea of controlling an object on a mobile device through voice and gestures. Building on our desire to make something inspired by childhood games, we decided to transpose the egg-and-spoon game. We didn’t want a traditional touch interaction, so we used the accelerometer and gyroscope data from the phone to sense tilting, rotation, and acceleration and control the movements of a virtual egg on a mobile phone. This allowed for immediate and unmediated feedback to the user, who could quickly gauge the acceptable range of motion required not to break the egg. This can be seen as an application of a direct manipulation interface (Hutchins, 315), where the object represented, in this case the virtual egg, behaves in a similar fashion to a real egg placed on a moving flat surface. The interface also feels more direct because the user’s intention to balance the egg, as expressed by their hand movements, produces the expected results following the normal rules of physics.
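
A stripped-down sketch of this tilt interaction is included below; the movement multiplier and egg size are assumptions, but the structure (device rotation nudges the egg, touching an edge ends the game) follows the behaviour described above.

```javascript
// Tilt-controlled egg (simplified sketch; constants are assumptions).
let eggX, eggY;
let gameOver = false;

function setup() {
  createCanvas(windowWidth, windowHeight);
  eggX = width / 2;
  eggY = height / 2;
}

function draw() {
  if (gameOver) {
    background(200, 30, 30);
    fill(255);
    textAlign(CENTER, CENTER);
    textSize(48);
    text('Game over', width / 2, height / 2);
    return;
  }

  background(230);
  // rotationX / rotationY report the device tilt (in degrees) on mobile
  eggX += rotationY * 0.3;
  eggY += rotationX * 0.3;

  fill(255);
  ellipse(eggX, eggY, 60, 80); // the egg

  // the egg breaks if it reaches any side of the screen
  if (eggX < 30 || eggX > width - 30 || eggY < 40 || eggY > height - 40) {
    gameOver = true;
  }
}
```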

Eggxercise aims to investigate a natural user interface (NUI) where the interaction with the smart device is direct, tangible and consistent with our natural behaviours. Within that paradigm, it aligns with the multi-disciplinary field of tangible embodied interaction (TEI), which explores the implications and rich potential of interacting with computational objects within the physical world, and with the projects and experiments of MIT’s Tangible Media Group, led by Hiroshi Ishii, which continuously searches for new ways “to seamlessly couple the dual worlds of bits and atoms.” (tangible.media.mit.edu)

Mobile phone game play demo

Our project integrates a virtual object on a physical device that responds to movement in the physical world in a realistic way. In that way, it is related to the controllers that are used in gaming devices such as the Nintendo Wii and the Sony Playstation. It also has the embodied interaction of the Xbox Kinect while maintaining a connection to a real-world object.


Game court

Played between 3 teams of 6 participants on a 10 m relay track. The layout is illustrated in the image below.

Game Court


The code for our P5 Experiment can be viewed on GitHub:

https://github.com/nvalcin/Eggxercise/blob/master/Final%20code


Technical issues

The sound for Eggxercise launched automatically on Android devices. We spent a lot of time trying to get the sound to work on the iPhone, only to accidentally discover that users had to touch the screen to activate the sound on those devices (a minimal sketch of this workaround follows the list below).

Sound output:

  • Android device – browser used: Firefox.
  • iPhone – browser used: Safari; sound only starts after the first touch.
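
Below is a small sketch of that workaround, assuming p5.sound is loaded and with a placeholder file name: calling userStartAudio() from a touch handler resumes the audio context that iOS keeps suspended until a user gesture.

```javascript
// Starting audio after the first touch (sketch; file name is a placeholder).
let henSound;

function preload() {
  henSound = loadSound('hen.mp3');
}

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function touchStarted() {
  userStartAudio(); // resumes the audio context on iOS / Safari
  if (!henSound.isPlaying()) {
    henSound.loop(); // background sound starts on the first touch
  }
}
```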

Next steps
  • A background with lerp color to have the screen turn red as the egg approached the edges.
  • More levels, increasing the speed of the egg, adding a second egg or obstacles on the screen. 
  • A timer indicating how long the users had managed to balance the egg

Observations / User test
  • Participants used different browsers on phones with different operating systems and specifications, and all the phones loaded the code from p5.js over Wi-Fi. It was difficult to start the game at the same time because loading times varied.
  • Participants with larger screens had an advantage over others.
  • Adding obstacles on the path made the game more challenging and fun.
  • The background sound (Hen.mp3) did enhance the experience as the motion of the egg changed the speed and the amplitude of the sound.
  • Participants were having a good time using the app themselves as well as watching others play.

Learnings
  • Getting started with creative coding and understanding the basic workflow of p5.js and JavaScript.
  • Integrating graphics with code.
  • Making the screen adaptive to any screen size (laptops & mobile phones).
  • Exploring various interaction and interface patterns using code, e.g. touch, voice, motion tracking, swipe and shake.

References

Hornecker, Eva, “The Role of Physicality in Tangible and Embodied Interactions”,  Interactions, March-April 2011, pp.19-23.

Hutchins, Edwin L., James D. Hollan and Donald A. Norman, “Direct Manipulation Interfaces”, Human-Computer Interaction, vol. 1, 1985, pp. 311-338.

Tangible Media Group, tangible.media.mit.edu/vision/. Accessed September 28, 2019.

poseNet() library: https://ml5js.org/reference/api-PoseNet/

Amplitude Modulation: https://p5js.org/examples/sound-amplitude-modulation.html

Frequency Modulation: https://p5js.org/examples/sound-frequency-modulation.html

The Coding Train: https://www.youtube.com/channel/UCvjgXvBlbQiydffZU7m1_aw

Hen Sound Effect: https://www.youtube.com/watch?v=7ogWsIYJyGE

Road Hawgs

Creation and Computation | Experiment 1
By: Catherine Reyto & Arshia Sobhan

Project Description

Road Hawgs is a two-team game played using phones as physical pieces on the game field. Each team tries to get past obstacles on the field to reach its own finish line, while trying to prevent the other team from succeeding at the same time. In this game, phone displays are used as multi-dimensional game pieces, giving players access to all the tools they need in one place.

The core concept of this game is group game-play using phones, devices that are often blamed for isolating people in social contexts, even in a regular multi-player computer game where players stare at their screens without any physical interaction. This game combines the togetherness of traditional board games with the new possibilities that a mobile phone screen can provide as a game piece.

Project Context

We were both interested in the idea of people using their phones as tactile objects to physically connect with one another, but it took us a few iterations before we landed on the roadblock game.  

Group game-play is reminiscent of childhood when structured social activities were routine and commonplace. We had this in mind – the act of people gathered around say, a puzzle or a board game – in considering how we would approach this experiment. We were interested in exploring what the experience of group games feels like: being lost in focus but with occasional bursts of bickering, cheering or laughter, competitive impulses, and most of all a strong sense of ‘togetherness’. We imagined a group leaned in shoulder-to-shoulder, far removed from the isolating tendencies that are typically associated with personal-use digital screens of any sort.  We started thinking about the environments that tend to go hand-in-hand with group game-play: living-room floors, basement rec rooms, cottages, and eventually got to the idea of camping. This, in turn, led us to our first iteration of an image-matching concept. The idea involved the whole group ‘assembling’ a tent, (an image of one that is, with all 20 phone screens making up the canvas), working together to build it out piece-by-piece, not far off from the real-life experience of threading the poles hand-over-hand to set up a tent in the woods. And just like real camping, when the work is done and it’s time to kick back and enjoy the view (or at least, the fire), we were thinking of setting up all the laptop screens to mimic the experience by an emitted glow or panoramic image. But we were dissuaded by the technical challenges that the level of orchestration involved. It entailed either networking, which was ruled out, or a level of programming beyond our three weeks with p5.


We were still attached to the collaborative puzzle work of image-matching and spent the next session brainstorming. We were both drawn to visual patterns and the power of code to alter complex graphics into drastically new mosaic designs with just a few taps (or clicks). We really liked the idea of working with Islamic tile patterns, both on account of their captivating beauty and because, like code, the designs are grounded in mathematical principles. But as Jessie and Katlin discussed with their scavenger hunt map, we anticipated that the variety of screen sizes would be too disruptive to the visual rhythm.

Photo Credit: www.sigd.org
Illustrating the visual interference caused by the framing around various phone screen displays

We also became increasingly aware of an overarching issue beyond screen-size interference.  For the class to interact with the screens towards a common goal, we both felt a challenge was needed. Not simply for the sake of competition or upping the ante, but rather to continue with our ideas about group game-play. We wanted to see our classmates working together, sharing frustrations and accomplishments as they competed in large groups.    

Figuring out our challenge led us through several iterations of wheel-spinning and creative frustration. We kept falling short of the target with concepts that were visually stimulating but too easily achieved, to the point of risking complacency. We frequently turned to the work of artist and designer Purin Phanichphant for inspiration, eventually coming across the artwork that led us to the idea of matching pieces. Phanichphant’s Optical Maze Generator allowed us to make that final connection, though at first only in an abstract sense. As soon as we saw the maze and how it worked, it was unanimously agreed that we’d build our idea from it. We tapped around on the screen, rotating the squares of a grid of shape patterns, and began to visualize the idea of positioning parts of vertical and horizontal pieces.

Phanichphant’s Optical Maze Generator


Designing the Game

A vision came to mind from the old version of the PC game SimCity (Will Wright, 1989). The game involves strategically building a metropolis on an allotted budget in order to grow the population and, in turn, increase that budget to continue expansion and growth. One of the greatest satisfactions came from laying down pieces of road pavement, because it signified enough profit to invest in infrastructure.

SimCity, 1989 Video Game (Photo Credit: imgur.com)

We cut up several sheets of paper into phone screen-sized portions, then plotted out a system of match points (mid-width, mid-height, and all corners) for each screen. The shapes could be combined in many ways by simply rotating the sheets of paper to match connection points from one road piece to another. The strategic positioning of the road pieces was devised with the building blocks of Tetris (designed by Alexey Pajitnov) in mind: minimal variation (we used four shapes), relying greatly on rotation for combining point-to-point shape connections.


To create a bit of context and make the game more interesting, we threw in a few literal roadblocks, in the form of a river, a train passing and construction/road work. Each ‘blocker’ presented its own challenge: rivers need bridges to cross, as do trains, unless you choose to simply wait for the train to pass (miss a turn), and construction sites limit or alter your route. We added ‘relief’ pieces to the mix: a bridge for crossing the river and train tracks, and a ‘bribe’ to override the construction.  

With our pieces laid out, we felt good about having everything we needed to make a game that was simple yet clever enough that we could imagine it actually being played outside the classroom, by kids and adults alike. We just needed to strategize the game rules, and we quickly learned that there is nothing simple about that. Game design is a puzzle in itself, or a story with a definitive beginning, middle and end that needs a delicate balance of pain and gain points. We wanted to focus on the collective experience of the whole class but keep the element of competition a priority. To solve this, we divided the class into two large teams that would be pitted against one another on the same road. The tricky part came in trying to decide their common goal. Was it to gain more distance (finish line), or more points (flags)? We also had to keep the demo time in mind, which meant forgoing the luxury of a first-round trial run. A complex set of rules could make for a far more interesting game, but we were always aware of having to keep it at a basic level. We also added the factor of luck to the game by having a dice toss determine the number of moves each team has in a turn.

The title of the game has a double meaning. According to Wikipedia, a road hog is a motorist who drives recklessly or inconsiderately, making it difficult for others to proceed safely or at a normal speed. Since the goal of the game is to be the first team to reach the finish line, the players will be placing pieces haphazardly, their strategy in selection curtailed by the pressure of the group (like the round-robin scenario in table tennis). Because both teams are ‘building’ the same road, they are detrimentally dependent on one another to win, thus making a play on the term ‘hog’, as in to hoard to oneself. That concept was inspired by the billiards game “9-Ball”, which takes a non-linear approach to winning the game (rather than a cumulative tally of points).

Final Version of the Game

After discussing different scenarios, we finalized these rules for the game:

  • Two teams are differentiated with two colours (team pink and team green)
  • The teams build the same road in turns
  • The number of moves in each turn is determined by a dice toss
  • Each team has its own respective finish line (placed side by side).
  • There are some obstacles in the field that prevent teams from going straight
  • Each team can use as many blockers as they want to deter the other team
  • When a team is blocked, they need to use a relief tool to get past the block
  • Teams have to physically use their phones in the game. They rotate them when finding the desired direction of a road piece, and they stack one phone on top of another when using blockers and reliefs (ie. bridge over the river).

Blockers:

  • Dynamite: Destroys the last three moves (road pieces)
  • River: Blocks the road (“bridge” is needed to pass)
  • Train: Blocks the road (“bridge” is needed to pass)
  • Construction: limits the directions to continue (“bribe” can be used to pass through in any direction)

Reliefs:

  • Bridge: to pass the river and the train
  • Bribe: To pass through construction in any desired direction


Using p5 and Technical Challenges

As far as p5 was concerned, in spite of our limited knowledge, we were pretty good at communicating approach strategies. We caught one another if an idea seemed out of scope, and Arsh really stepped up when it came to tackling challenges like adding a swipe mechanism.  The swipe was the fundamental feature needed for easy, intuitive game-play, as well as a great solution to simplifying our navigation. We aimed to keep every aspect of the game as minimal as possible because we anticipated the loss of time from explaining game rules in the demo.  

After finalizing the tools, we designed a simple navigation system using tap and swipe. Players have three separate tabs, for road pieces, blockers and reliefs, that can be accessed by swiping left and right respectively. In each tab, they can then tap to toggle through subsets (e.g. road shapes, blocker types). Although tapping was quite easily achieved using the “touchStarted” event and variables to loop the toggle, the swipe function was not very straightforward. After some searching and testing, we finally used an example from Shiffman incorporating hammer.js. It enables swipes in all four directions and worked properly on iOS and Android in all browsers. We only needed the left and right swipes to give access to blockers and reliefs, with road pieces being the default toolset on the home screen.
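
A condensed version of that swipe setup is below. The Hammer calls follow Shiffman’s example; the tab names match this game, but the handler bodies are illustrative assumptions rather than the project’s actual code.

```javascript
// Swipe left/right to switch toolsets, based on Shiffman's hammer.js example.
let tab = 'roads'; // current toolset: 'roads', 'blockers' or 'reliefs'

function setup() {
  createCanvas(windowWidth, windowHeight);
  // listen for swipes anywhere on the page, in all four directions
  const hammer = new Hammer(document.body, { preventDefault: true });
  hammer.get('swipe').set({ direction: Hammer.DIRECTION_ALL });
  hammer.on('swipe', swiped);
}

function swiped(event) {
  if (event.direction === Hammer.DIRECTION_LEFT) {
    tab = 'blockers'; // swipe left for blockers
  } else if (event.direction === Hammer.DIRECTION_RIGHT) {
    tab = 'reliefs';  // swipe right for reliefs
  }
}

function touchStarted() {
  // tapping toggles through the subsets of the current tab (omitted here)
  return false;
}
```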

The dice toss was also executed in p5, using the shake gesture to mimic a real-life dice toss. The only factor in need of some tweaking was the shake threshold (setShakeThreshold()). After a bit of ‘road’ testing, we finally settled on a threshold of 40. But for presentation’s sake, yes – Nick had a good point, a real die would have sufficed.
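
As a sketch, the shake-to-roll die looks roughly like this, using the threshold value mentioned above:

```javascript
// Shake-to-roll die (sketch using the threshold value described above).
let roll = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
  setShakeThreshold(40); // the value settled on after 'road' testing
  textAlign(CENTER, CENTER);
}

// called by p5 when the device's acceleration exceeds the threshold
function deviceShaken() {
  roll = floor(random(1, 7)); // integer from 1 to 6
}

function draw() {
  background(255);
  fill(0);
  textSize(roll > 0 ? 160 : 40);
  text(roll > 0 ? roll : 'Shake to roll', width / 2, height / 2);
}
```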

We felt a little restrained by our limited skill-level. There were plenty of cute extras we had to rule out, like small animals scurrying across the road pieces for idle animation. We were both eager to challenge ourselves with p5, but the time restrictions added a precarious element to our codebase. It was and still is apparent that we could do with some refactoring, as doing so would lead to a free playground for adding and experimenting. Because we were sharing the codebase, and some of the code had been pulled from elsewhere (the foundation of the Swipe feature was courtesy of Shiffman), that sometimes led to hesitation about tampering with one another’s code. But we also worked really well at overcoming issues in the code when we were able to sit down together to work through them.

Presentation Reflection

Presenting first meant it was really difficult to gauge how to make the best use of our time. Right off the bat, it was apparent that we had been too detailed in our projector tutorial of the game rules. In hindsight, it would have been more efficient to lead our classmates straight to the QR codes so there would be ample time for everyone to figure out the game in a hands-on trial run.

It was a painful oversight that we hadn’t thought to load the QR code for the die onto one of our own phones before we started the game. We didn’t want to interrupt the flow of the lineups, as we were wary about how much time we had left. This led to the call-out of ‘fake’ die rolls, the sort of on-the-spot thinking that happens in a worked-up presentation.

In spite of what became a bit of a chaotic moment, it was really satisfying to see the game successfully play out.  We had anticipated that long start where the road needed to grow close enough to the finish line before the real fun of the game kicked in. In our own test-runs of the game, limited to just two phones, we were still able to see that we needed dispersed pain/pressure points in order to overcome that issue.  We resolved that being master game-designers might take a few more iterations yet. But in the meantime, we had achieved what we had set out to do. We got to watch our classmates compete and cheer and laugh as they used their phones like blocks from a classic board game.


 

Code

https://github.com/cat8dog/roadHawgs

References

Shiffman. (n.d.). hammer.js swipe. Retrieved from p5.js Web Editor: https://editor.p5js.org/projects/HyEDRsPel

Hammer.js. (n.d.). Retrieved from Swipe Recognizer: http://hammerjs.github.io/recognizer-swipe/

Pajitnov, A. (1984, June 6). Tetris Analysis. Retrieved from http://cmugameresearchlibrary.pbworks.com/w/page/3984534/Tetris%20Analysis

Phanichphant, P. (n.d.). Experiments with P5.js. Retrieved from http://purin.co/Experiments-with-P5-js

Phanichphant, P. (n.d.). Optical Maze. Retrieved from http://p5js.site44.com/019/index.html

THE MARRIAGE OF DIGITAL AND ANALOG: THE FUTURE OF GAMING? (2016, November 11). Retrieved from https://cmon.com/news/the-marriage-of-digital-and-analog-the-future-of-gaming

Luke Stern, S. W. (2015). Game of Phones. Retrieved from https://boardgamegeek.com/boardgame/179701/game-phones

Wright, Will. (February 2, 1989) SimCity. DOS, Maxis

Experiment 1: Wake them up!


Team
Manisha Laroia & Rittika Basu

Project Mentors
Kate Hartman & Nick Puckett

Description
Wake them up! is an interactive experience with a family of Sleepy Monsters displayed across multiple screens, that wake up with pre-programmed, randomly assorted mobile user-interactions. The experience consisted of many virtual ‘Sleepy Monsters’ and the participant’s task was to ‘Wake them up’ by interacting with them. The experiment was an attempt to assign personalities and emotions to smartphones and create delight through the interactions.

THE MULTISCREEN EXPERIMENT EXPERIENCE
The participants were organized into four groups and assigned with a QR code each. They had to scan, wake up the monster, keep it awake and move to the next table to wake up the next monster. Eventually they would have woken up all four monsters and collected them all.

For the multiscreen aspect of the experience, we created four Sleepy Monster applications, each with its unique color, hint, and wake-up gesture. Each Sleepy Monster was programmed to pick a color from a predefined array of colors in setup, so that when the code was loaded onto a mobile phone, each of the 20 screens would have a differently coloured monster. For each case, we added an indicative response: a pre-programmed response of the application to a particular gesture, informing the user whether or not that gesture works for this Monster and whether they must try a different one. Participants were to try various smartphone interactions, which involved speaking to, shaking, running with and tapping the screen, etc. The monsters responded differently to different inputs. There were four versions of the monster for mobile devices, and one was created for the laptop as a bonus.

Sleepy Monster 1
Response: Angry face with changing red shades of the background
Wake up gesture: Rotation in the X-axis

Sleepy Monster 2
Response: Eyes open a bit when touch detected
Wake up gesture: 4 finger Multitouch

Sleepy Monster 3
Response: Noo#! text displays on Touch
Wake up gesture: Tap in a specific pixel area (top left corner)

Sleepy Monster 4
Response: zzz text displays on Touch
Wake up gesture: Acceleration in X-axis causes eyes to open

*Sleepy Monster 5
We also created a web application as an attempt to experiment with keyboard input and use it to interact with the virtual Sleepy Monster. Pressing the ‘O’ key would wake up the monster.
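
As an illustration of how such a gesture check can be written, here is a minimal fragment for the four-finger wake-up (Sleepy Monster 2); the colors and the eye drawing are placeholders, not the project’s artwork.

```javascript
// Four-finger multitouch wakes the monster (illustrative fragment).
let awake = false;

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function draw() {
  background(awake ? '#9ED8DB' : '#2E294E'); // placeholder colors

  // touches holds one entry per finger currently on the screen
  if (touches.length >= 4) {
    awake = true;
  }

  // closed eyes are thin ellipses; open eyes are round
  fill(255);
  ellipse(width * 0.35, height * 0.4, 80, awake ? 80 : 10);
  ellipse(width * 0.65, height * 0.4, 80, awake ? 80 : 10);
}

function touchStarted() {
  return false; // stop default gestures like scrolling and zooming
}
```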

The four Sleepy Monsters and the interactions inbuilt:

The participant experience:

Github link for the codes


Project Context
WHAT’S THE DEAL WITH THESE MONSTERS?
Moving to grad school thousands of miles away from home started off with excitement, but also with unexpectedly irregular sleep patterns. Many of the international students could be found napping on the softest green couch in the studio, sipping cups of coffee like a magic potion and hoping for it to work! Amongst them were two sleepy heads, us (Manisha and Rittika), perpetually yawning and trying to wrap our heads around p5.

The idea stemmed from us joking about creating a wall of phones, each displaying a yawning monster, and seeing its effect on the viewer. Building on that, we thought of vertical gardens with animals sleeping in them that awaken with different user interactions, or having twenty phones sleeping and each user figuring out how to wake their phone up. Eventually, we narrowed it down to creating twenty Sleepy Monsters, where each participant must try out different interactions with their phone to wake them up!


THE CONCEPT
The way phones are integrated into our lives today, they are not just mere devices but more like individual electronic beings that we wake up, talk to, play with and can’t live without. No wonder we feel we’ve lost part of ourselves when we forget to bring our smartphone along (Suri, 2013). We wanted the user to interact with their Sleepy Monster (on the phone) and experience the monster’s emotions: getting angry if woken up, or saying NO NO if tapped, refusing to wake up unless they had discovered the one gesture that would cause it to open its eyes, adding a personality to their personal device in an attempt to humanize it. The experience was meant to create a moment of delight once the user was able to wake up the Sleepy Monster, and instill the excitement of now having a fun virtual creature in their pocket to play with or collect. The ‘wake up the monster and collect it’ element of the experience was inspired by the cat-collecting Easter egg game in Android Nougat and the Pokémon GO mania for collecting virtual Pokémon.


By assigning personalities to the Monsters and having users interact with them, it was interesting to see the different ways the users tried to wake them.

From shouting WAKE UP! at their phones and poking the virtual eyes to vigorously shaking them, it was interesting to see users employ methods they would usually use on people.

The next steps with these Sleepy Monsters could be a playful application to collect them, or morning alarms or maybe do-not-disturb(DND) features for device screens.

THE PROCESS
Day 1: We used the ‘Create your Portrait’ exercise as a starting point to build our understanding of coding. Both of us had limited knowledge of programming, and we decided to use the first few days to actively try our hand at p5 programming, trying to understand different functions, the possibilities of the process and the underlying logic. Key resources for this stage were The Coding Train YouTube videos by Daniel Shiffman and the book Make: Getting Started with p5.js by Lauren McCarthy.


Day 3: Concept brainstorming led us to questions about the various activities we could implement and what functions were possible. We spent the next few days exploring different interactivity and writing short pieces of code based on the Reference section of the p5.js website. Some early concepts revolved around creating a fitness challenge, music-integrated experiences, picture puzzles, math puzzle games, or digital versions of conventional games like tic-tac-toe, catch-ball or ludo.


Day 6: We did a second brainstorm, now with a clearer picture of the possibilities within the project scope. A lot of our early ideas tended towards networking, but through this brainstorm we looked at ways in which we could replace the networking aspects with actual people-to-people interactions. Once we had the virtual Sleepy Monster concept narrowed down, we started defining the possible interactions we could build for the mobile interface.


Day 8: We sketched out the Monster faces for the visual interface and prototyped them using p5. In parallel, we programmed the interactions as individual sketches to try each of them out: acceleration mapped to eye-opening, rotation mapped to eye-opening, multitouch mapped to eye-opening, audio playback, and random color selection on setup.

Day 10: The next step involved combining the interactions into one final code, where the interactions would execute as per the conditions defined in the combined code. This stage involved a lot of trial and error, as we would write the code and then run it on different smartphones with varying operating systems and browsers.

Days 10-15: A large portion of our efforts in this leg of the project was focused on bug fixing and preparing elements (presentation, QR codes for scanning, steps for the demo & documenting the experience) for the final demo, simplifying the experience to fit everything into the allotted time of 7 minutes per team.

CHALLENGES
Getting the applications to work in different browsers and on different operating systems was an unforeseen challenge we faced during trials for the codes. The same problem popped up even during the project demo. For Android, it worked best in Firefox browsers, and for iOS, it worked best in Chrome browsers.
Seamlessly coordinating the experience for 20 people was another challenge; we did not anticipate the chaos or the irregularity that comes with multiple people interacting with multiple screens.
Another issue came up with audio playback. We had incorporated a snoring sound for the Sleepy Monster to play in the background when the application loaded. The sound playback was working well on Firefox browsers in Android devices but didn’t run on Chrome browsers or iOS devices. In the iOS device, the application stopped running, with a Loading… message appearing each time.

PROJECT SPECIFIC LEARNINGS
Defining absolute values for acceleration and rotation sensor data
Changing the background color randomly on each setup of the code
Executing multiple smartphone interactions such as acceleration, rotation, touch, multitouch, device shake and pixel-area-defined touches

Meet the Sleepy Monsters by scanning the QR codes


References

    1. “Naoto Fukasawa & Jane Fulton Suri On Smartphones As Social Cues, Soup As A Metaphor For Design, The Downside Of 3D Printing And More”. Core77, 2013, https://www.core77.com/posts/25052/Naoto-Fukasawa-n-Jane-Fulton-Suri-on-Smartphones-as-Social-Cues-Soup-as-a-Metaphor-for-Design-the-Downside-of-3D-Printing-and-More.
    2. McCarthy, Lauren et al. Getting Started With P5.Js., Maker Media, Inc., 2015, pp. 1-183 https://ebookcentral.proquest.com/lib/oculocad-ebooks/reader.action?docID=4333728
    3. Henry, Alan. “How To Play Google’s Secret Neko Atsume-Style Easter Egg In Android Nougat”. Lifehacker.Com, 2016, https://lifehacker.com/how-to-play-googles-secret-neko-atsume-style-easter-egg-1786123017
    4.  Pokémon Go. Niantic, Nintendo And The Pokémon Company, 2016.
    5. “Thoughtless Acts?”. Ideo.Com, 2005, https://www.ideo.com/post/thoughtless-acts
    6. Rosini, Niccolo et al. ““Personality-Friendly” Objects: A New Paradigm For Human-Machine Interaction”. IARIA, ACHI 2016 : The Ninth International Conference On Advances In Computer-Human Interactions, 2016.
    7. Wang, Tiffine, and Freddy Dopfel. “Personality Of Things – Techcrunch”. Techcrunch, 2019, https://techcrunch.com/2019/07/13/personality-of-things/
    8. Coding Train. 3.3: Events (Mousepressed, Keypressed) – Processing Tutorial. 2015,https://www.youtube.com/watch?v=UvSjtiW-RH8
    9. The Coding Train. 7.1: What Is An Array? – P5.Js Tutorial. 2015,  https://www.youtube.com/watch?v=VIQoUghHSxU
    10. The Coding Train. 2.3: JavaScript Objects – p5.js Tutorial. 2015, https://www.youtube.com/watch?v=-e5h4IGKZRY
    11. The Coding Train. 5.1: Function Basics – p5.js Tutorial. 2015, https://www.youtube.com/watch?v=wRHAitGzBrg
    12.  The Coding Train. p5.js Random Array Requests (whatToEat). 2016,https://www.youtube.com/watch?v=iCXBNKC6Wjw
    13. “Learn | P5.Js”. P5js.Org, 2019, https://p5js.org/learn/interactivity.html
    14. Puckett, Nick. “Phone Scale”. 2019. https://editor.p5js.org/npuckett/sketches/frf9F_BBA

Experiment 1 – Echosystem


Group: Masha, Liam, Arsalan

Code: https://github.com/lclrke/Echosystem

Project description

This installation involves 20+ screens and participants that create a network through sound. Incoming sound is measured by the devices, and this data is used to influence the visual and auditory aspects of the installation. Sound data is used as a variable within functions to affect the size and shape of the visuals. Audio synthesis in p5.js is used to create sound responsive to the participants’ input. The features of the oscillators are also determined by the data from the external audio input.

While the network depends on our participation, the devices concurrently relay messages through audio data. After we start the “conversation”, there is a cascading effect as the screens interact with each other through sound, creating a two-way communication network via analog transmissions.

Visually, every factor on the screen is affected by participant and device interactions. We created a voice synchronized procession of lines, color and sound that highlight and explore the sound as a drawn experience. The installation is continuously changing  at each moment. The incoming audio data influences how each segment is drawn in terms of shapes, number of lines and scale.  This is in contrast to a drawing or painting that is largely fixed in time and creates an opportunity to draw with voice and sound. Through interaction, the participants are able to affect the majority of the piece, bridging installation and performance art. 

Process:

Week 1

The aim of our early experiments was to create connections between participants through the devices rather than an external dialogue. We started by brainstorming various ideas and figured there were two directions:

  1. A game or play, which would involve and entertain participants;
  2. An audio/visual installation based on interaction between participants and devices.

First, we planned to create something funny and entertaining and sketched some ideas for the first direction.

OCAD Thesis Generator: Participants would generate a random nonsensical thesis and subsequently have to defend it.


Prototype: https://editor.p5js.org/liamclrke/present/9fBGEz9CH

Racing: Similar to slot car racing, you have to hum to stay within a certain speed in order to not crash. Too quiet and you’ll lose.

Inspiration: https://joeym.sgedu.site/ATK302/p5/skate_speed/index.html

Design Against Humanity: Screens used as cards. Each screen is a random object when pressed. Have to come up with the product after. Ex. “linen” & “waterspout” → so what does this do?

Panda Daycare: Pandas are set to cry at random intervals. Have to shake/interact with them to make them not cry.

Sketch: https://editor.p5js.org/liamclrke/present/MpxLmb1jQ

 

Week 2

After further exploring P5.js, we decided we were more interested in creating an interactive installation rather than a game.  

Raw notes/ideas for installation:

Wave Machine: An array of screens would form an ocean. Using amplitude measurement from incoming sound, the ocean would get rougher depending on the level of noise. Moving across an array of screens making noise would create a wave

Free form Installation: Participants  activate random sounds, images and video with touch and voice. Images include words in different languages, bright videos and gradients and various sounds. (this idea was developed into the final version of the experiment)

Week 3 

We agreed to work on an art installation involving sounds, images and videos affected by participant interaction. An installation project seemed more attractive and closer to our interests than a game. We figured we could combine our skills to create a stronger project and function as a cohesive team.

That week we produced graphic sketches, short videos and chose sounds we would want to use in the project:


At this step, we took inspiration from James Turrell and his work with light and gradients.

Week 4

Uploading too many images, sounds and videos made the code run slow on devices with smaller processing power. We changed the concept to one visual sketch and used p5.js audio synthesis.

We were looking for a modular shape which expressed the sound in an interesting way, apart from a directly representative waveform. We started with complicated gradients which overtaxed the processors of mobile phones, so we dialed down certain variables in the draw function. Line segment density was a factor of amplitude multiplied by a variable, which we lowered until the image could be processed without latency.
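
A reduced sketch of that amplitude-to-density mapping is below, assuming p5.sound is available; the multiplier and colors are placeholders chosen for illustration.

```javascript
// Microphone level drives the number and skew of line segments (sketch).
let mic, amplitude;

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start(); // asks for microphone permission
  amplitude = new p5.Amplitude();
  amplitude.setInput(mic);
  stroke(30, 90, 180);
}

function draw() {
  background(250);
  const level = amplitude.getLevel();      // roughly 0.0 (silence) to 1.0 (loud)
  const numLines = floor(level * 200) + 1; // louder input draws more lines
  for (let i = 0; i < numLines; i++) {
    const y = map(i, 0, numLines, 0, height);
    line(0, y, width, y + level * 100);    // level also skews each segment
  }
}

function touchStarted() {
  userStartAudio(); // required on mobile before the microphone can start
}
```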

The final image is a linear abstraction, drawn through external and internal sound.

Project Concept:

The project was inspired by multiple art works.

Voice Array by Rafael Lozano-Hemmer: When a participant speaks into an intercom, the audio is recorded and the resulting waveform is represented in a light array. As more speech is recorded, the waveforms are pushed down the horizontal array, playing back 288 previous recordings. When a recording reaches the end, it is released as a solo clip. This inspired us to use audio as a way to sync devices without a network connection.

unnamed-2

Paul Sharits: Screens: Viewing Media Installation Art by Kate Mondloch was used for research, and within it we discovered Paul Sharits’ work with screens. Sharits was known for avant-garde filmmaking, often presented across multiple screens and accompanied by experimental audio. We took this concept and reformatted it into an interactive design.

unnamed-2 unnamed

Manfred Mohr: Manfred Mohr is a pioneer of digital art who uses algorithms to create complex structures and shapes. His visual simplicity, driven by more complex underlying theory, was a creative driver for the first iteration of Echosystem.

1

Challenges and solutions:

  1. The first challenge was lag caused by overloading processors with multiple video, sound and image files. These files slowed down the code, especially on the phone. Therefore, we decided to use P5.sound synthesis and creative coding to draw the image.
  2. The first sketches were based only on touch, which did not create a strong enough interaction between participants, so the solution was to add voice and sound, which affect the characteristics (amplitude and pitch) of the oscillators.
  3. In previous ideas, it was difficult to affect videos and images (scaling and filters), so we created a simplified image in P5.js consisting of lines of different colors. This step allowed us to affect the number of lines drawn by audio input data.
  4. In the beginning, to organize the physical space, we planned to build a round stand for devices. This would create a circle and bring participants together around the installation. However, the different sizes and weights of devices complicated this.

photo_2019-09-30_21-34-48

  5. Another idea was to hang screens from the ceiling, but the construction would have been too heavy. Without the right equipment, we simplified these concepts and used flat horizontal surfaces to place the screens, so the number and size of devices was not limited.

  6. The synthesizer built in P5.js led to a number of challenges. The audible low and high ends of a tablet differed greatly from a phone, resulting in certain frequencies sounding unpleasant depending on the device’s speaker. Through trial and error, we narrowed the pitch range that could be modulated by audio input for maximum clarity across multiple devices. There was also an issue of a continuous feedback loop, so the oscillator’s amplitude had to be calibrated in a similar fashion; the devices had to be kept within a certain distance range or feedback would build up. Finally, we added a low-pass filter as a fail-safe to control the sound, since the presentation setup would be less controlled than our tests. A sketch of this calibration appears below.
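The following is a hedged approximation of that calibration: mic level is mapped into a deliberately narrow pitch range, the oscillator gain is kept low to limit feedback, and a low-pass filter sits on the output as a fail-safe. The specific frequencies and gains are illustrative, not the values we presented with.

let mic, amp, osc, filter;

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start();
  amp = new p5.Amplitude();
  amp.setInput(mic);

  filter = new p5.LowPass();
  filter.freq(1200);            // tame harsh highs on small speakers (placeholder cutoff)

  osc = new p5.Oscillator('sine');
  osc.disconnect();             // route through the filter instead of straight to the output
  osc.connect(filter);
  osc.amp(0.2);                 // low gain to limit the feedback loop
  osc.start();
}

function touchStarted() {
  userStartAudio();             // unlock audio on mobile browsers
}

function draw() {
  background(0);
  const level = amp.getLevel();
  // Narrowed pitch range, found by trial and error across phones and tablets
  const pitch = map(level, 0, 0.3, 220, 660, true);
  osc.freq(pitch, 0.1);         // smooth the change over 0.1 s
}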

Reflection:

Although we managed to involve 20 screens and our groupmates in the process of creating sounds and images, the design of the presentation’s logistics could have been more concrete. With preparation and set placement of screens, the project is highly scalable, well beyond 20 screens and participants.

The first question we asked upon assignment was whether we could overcome sync issues while keeping the devices off a network. Through the use of responsive sound we created an analog network of sound, resulting in a visual installation blurring the lines between participant and artist.

References:

  1. Early Abstractions (1947–1956), Pt. 3. https://www.youtube.com/watch?v=RrZxw1Jb9vA
  2. Mondloch, Kate. Screens: Viewing Media Installation Art. University of Minnesota Press, 2010.
  3. Shiffman, Daniel. 17.9: Sound Visualization: Graphing Amplitude – p5.js Sound Tutorial. https://youtu.be/jEwAMgcCgOA
  4. Sketches made in Processing by Takawo. https://www.openprocessing.org/sketch/451569
  5. Rafael Lozano-Hemmer – Various works. http://www.lozano-hemmer.com/projects.php
  6. United Visual Artists – Volume. https://www.uva.co.uk/features/volume
  7. Carsten Nicolai – unidisplay. https://collabcubed.com/2012/10/16/carsten-nicolai-unidisplay/
  8. James Turrell. http://jamesturrell.com/

 

 

Experiment 1 – Campfire

Project Title: Campfire
Team Members: Nilam Sari (No. 3180775) and Lilian Leung (No. 3180767)

Project Description:

Our experiment was an exploration of how we could create a multi-screen experience that would speak to the value of ‘unplugging’ and having a conscious, present discussion with our classmates, using the symbolism of the campfire.

From the beginning, we were both interested in creating an experience that would bring people together and be able to have a sort of digital detox and engage in deeper face-to-face conversation. We wanted to play along with the current trend of digital minimalism and the Hygge lifestyle focused on simpler living and creating deeper relationships.

bookexamples

While our project would only provide about a 10-minute reprieve from our connected lives, we wanted to bring attention to the fact that, even in a digitally-led program, face-to-face conversation and interaction is just as important for improving our ability to empathize with one another.

Visual inspiration was taken from the visual aesthetic of the campfire as well as the use of abstract shapes used for many meditative and mental health apps such as Pause, developed by ustwo.

screenshot-2019-09-19-at-4-39-22-pm

ustwo, Pause: Interactive Meditation (2015)

How it Works:

The sketch is laid out with three main components: the red and orange gradient background, the fire crackling audio, and the transparent visuals of fire. 

On load, the .mp3 file of the audio plays with a slow fade-in of the red and orange gradient background. The looped audio file’s volume depends on mic input, so more discussion from the participating group amplifies the volume. The visual fire graphics at the bottom adjust in size based on the volume of the mic input, creating a flickering effect similar to a real campfire.

To lower the volume and fade the fire, users can shake their devices: the acceleration along the x-axis causes the volume to lower and the tint of the images to decrease to 0. This motion recreates shaking out a lit match.
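A minimal sketch of the mic-driven part of this behaviour, assuming placeholder asset names (fire.mp3, flame.png) rather than our actual files:

let crackle, mic, amp, flameImg;

function preload() {
  crackle = loadSound('fire.mp3');   // hypothetical audio file
  flameImg = loadImage('flame.png'); // hypothetical fire graphic
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start();
  amp = new p5.Amplitude();
  amp.setInput(mic);
}

function touchStarted() {
  userStartAudio();                          // audio needs a tap on mobile
  if (!crackle.isPlaying()) crackle.loop();
}

function draw() {
  background(30, 0, 0);
  const level = amp.getLevel();
  crackle.setVolume(map(level, 0, 0.3, 0.2, 1, true)); // louder room = louder fire
  const size = map(level, 0, 0.3, 100, 300, true);     // flame flickers with the room
  imageMode(CENTER);
  image(flameImg, width / 2, height * 0.8, size, size);
}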

Development of the Experiment

September 16, 2019

Our initial thought about visually representing the campfire was to recreate an actual fire. However, since we intended to have all the phones laid flat on a surface, we realized that fire is normally seen from a vertical perspective while the phones would lie horizontally, so we went with a more abstract approach using gradients instead.

The colours chosen were taken from the natural palette of fire, though we also explored a sense of contrast through the gradients.

Gradient Studies

Righini, E. (2017) Gradient Studies

Originally we tried working with a gradient built in RGB, but while digging into control of the gradient and switching values, Lilian wasn’t yet comfortable working with multiple values once we needed them to change based on audio level input.

Instead, we began developing a set of gradients we could use as transparent .png files. This gave us more control over how they looked and made the gradients more dynamic and easier to manipulate.

assetsamples

Initial testing of the .png gradients worked as a proof of concept: we managed to get the gradient image to grow using the mic AudioIn event.

While Lilian was working on the gradients of the fire, Nilam was trying to figure out how to add the microphone input and make the gradient correspond to the volume of the mic input. One of her solutions was to use mapping.

The louder the input volume, the higher the red value gets and the redder the screen becomes. This also let us change the background to a raster image: instead of lowering the RGB values to 0 to create black, we lowered the image’s opacity to 0 to reveal the darker gradient image behind it.
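A stripped-down version of that mapping might look like the following; the input range and colour values are illustrative only.

let mic;

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start();
}

function touchStarted() {
  userStartAudio(); // mic needs a tap on mobile browsers
}

function draw() {
  const vol = mic.getLevel();                         // roughly 0.0 to 1.0
  const redValue = map(vol, 0, 0.3, 60, 255, true);
  background(redValue, 30, 0);                        // louder = redder
  // Raster version: tint(255, map(vol, 0, 0.3, 0, 255, true)) before drawing the image
}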

Nilam made edits on Lilian’s version of experimentation and integrated the microphone input and mapping part into the interface she already developed.

September 19, 2019

Our Challenges

We were still trying to figure out why mic and audio input and output were working on our laptops but not on our phones. The translation of mic input into an increase in the size of the fire seemed laggy, even after attempting to down-save our images.

On our mobile devices, the deviceShake function seemed to be working; while laggy on Firefox, playing the sketch on Chrome provided better, more responsive results. Another issue was that once we started changing the transition of the tint for our sketch, the deviceShake would sometimes stop working entirely.

We wanted a less abrupt, smoother transition from the microphone input, so we tried to figure out whether there were functions like delay. We couldn’t find anything, so we decided to try using an if statement instead of mapping.

We found out from our Google searches that there may be a bug that stopped certain p5.js functions like deviceShaken from working after the iOS update this past summer: while laggy, the function still worked on Lilian’s Android Pixel 3, but it never worked on Nilam’s iPhone.

Audio Output

            Nilam (iPhone 6)    Lilian (Pixel 3)
Chrome      No                  Yes
Firefox     No                  Yes
Safari      No                  N/A


deviceShaken Function

            Nilam (iPhone 6)    Lilian (Pixel 3)
Chrome      No                  Yes
Firefox     No                  No
Safari      No                  N/A


Additionally, Lilian started working on further explorations like mobile rotation and acceleration to finesse the functionality of the experiment. She also began exploring how we could incorporate noise values to recreate organic movement. We were inspired by these examples using Perlin noise.

scrns12

To add the new noise graphic, we used the createGraphics() and clear() functions to create an invisible canvas on top of the gradient where the bezier curve leaves trails, so it looks like a flame. It clears itself and repeats the process after a 600-frame count to reduce loading problems in the sketch.
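A rough sketch of that trail layer, with illustrative noise and colour values:

let trails;

function setup() {
  createCanvas(windowWidth, windowHeight);
  trails = createGraphics(width, height); // offscreen buffer that holds the trails
  trails.noFill();
  trails.stroke(255, 150, 0, 40);
}

function draw() {
  background(60, 10, 0); // stand-in for the gradient background
  const t = frameCount * 0.01;
  // Perlin-noise-driven control points so the trail wanders like a flame
  const x = noise(t) * width;
  const y = noise(t + 100) * height;
  trails.bezier(width / 2, height, x, y, x + 40, y - 40, width / 2, height / 2);
  image(trails, 0, 0);                          // draw the buffer over the background
  if (frameCount % 600 === 0) trails.clear();   // reset every 600 frames to keep performance up
}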

September 21, 2019

After reviewing our code, we realized some of our audio issues were caused by Chrome’s privacy restriction that disables auto-playing audio. Our mic problem was because we had placed the code within function setup(), where it only ran once; once we moved it into function draw(), the audio worked better.
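In sketch form, the fix looked roughly like this (the asset name is a placeholder): audio is unlocked on a user gesture, and the mic level is read every frame in draw() rather than once in setup().

let mic, crackle;

function preload() {
  crackle = loadSound('fire.mp3'); // hypothetical file
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
}

function touchStarted() {
  userStartAudio();                          // satisfies Chrome's autoplay policy
  mic.start();
  if (!crackle.isPlaying()) crackle.loop();
}

function draw() {
  background(0);
  const vol = mic.getLevel();                // read every frame, not once in setup()
  fill(255, 120, 0);
  ellipse(width / 2, height / 2, 50 + vol * 400);
}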

September 23, 2019 – The Phone Stand

physicalprototype

After getting feedback on our prototype, we started creating a stand in which we could place everyone’s phones during the presentation. We laid out two rows of stands, the outer circle holding 12 phones and the inner circle holding 8, as we explored how we could better recreate the ‘fire’ in our multi-screen presentation.

scrns13

We started by sketching the layout for the phone stand. The size is based on the widest phone in our class. We then went to the Maker Lab, drilled into the circular foam and chiselled out the middle sections to create an indent that the phones could sit within.

physicalprototype-copy

The next step was to apply a finish to the foam. We used black matte spray paint to cover it. The foam deteriorated a little from the aerosol of the spray paint, which we foresaw, but after a test coat it didn’t seem to damage the structure much, so we decided to proceed.

September 26, 2019 – deviceShaken to accelerationX

screenshot-2019-09-26-at-6-03-00-pm

Finding that the mobile deviceShake event wasn’t working, Lilian created a new sketch testing the opacity and audio level using accelerationX as the new variable. The goal was to test whether changes in acceleration would cause the audio volume to decrease and the images to fade out. accelerationX seemed to provide more consistent results and was added into the main Experiment 1 sketch.
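A minimal sketch of the accelerationX approach; the shake threshold, fade rate and asset name are placeholders.

let crackle;
let fadeAmount = 255; // full brightness / volume to start

function preload() {
  crackle = loadSound('fire.mp3'); // hypothetical file
}

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function touchStarted() {
  userStartAudio();
  if (!crackle.isPlaying()) crackle.loop();
}

function draw() {
  background(0);
  if (abs(accelerationX) > 20) {             // sideways shake detected (threshold from testing)
    fadeAmount = max(fadeAmount - 5, 0);
  }
  crackle.setVolume(fadeAmount / 255);       // volume fades with the flame
  fill(255, 120, 0, fadeAmount);
  ellipse(width / 2, height / 2, 200);
}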

User Flow

This experiment is primarily conversation-led. Set-up by the facilitators involves creating a wide open space for everyone to sit and dimming or turning off the lights to recreate a night scene. Users are then asked to load the P5 experiment and join together in a circular formation in the room.

Users should allow the fire animation to load and place their phone into the circular phone stand. The joined phones come together to recreate the digital campfire. Facilitators can then speak to the importance of coming together and of face-to-face conversation.

The session can run as long as needed. When the session is finished, users can shake their phones to dim the fire and lower the volume of the fire crackling audio.

One note on the stand: the spray paint did have an impact on the foam around the slots. The paint melted the foam and wouldn’t dry. For future reference, we could have used gesso before the spray paint, but we had to improvise this time and used paper towels.

The Presentation

img_7854_edited

Photo Credit: Nick Puckett

doc2

doc1

doc3

experiment1campfire_iniphone

The code for our P5 Experiment can be viewed on GitHub
https://github.com/nilampwns/experiment1

Project Context

The project’s concept came from our mutual interest in creating a multi-screen experience that would cause classmates to come together in an exercise rather than just an experiment using P5 events. After brainstorming a couple of ideas and possibilities within our limited personal experience with programming, we came up with an idea about ‘unplugging’ and giving our full attention to the people around us without the distraction of devices, except that the exercise itself is facilitated by screens.

We wanted the experience to be about ‘unplugging’ to recognize the value (even within a digital media program) that time away from screens is just as beneficial and an opportunity for self-reflection. While technology allows us to extend ourselves within the virtual space, there are also many consequences to our real life relationships and physical composure.

As described in the Fast Company article What Really Happens To Your Brain And Body During A Digital Detox (2015), experts explain that our connectedness to our digital devices alters our ability to empathize, read each other’s emotions and maintain eye contact in real-life interactions.

After our presentation, we looked at Sherry Turkle’s work in Reclaiming Conversation: The Power of Talk in a Digital Age (2016). Turkle describes face-to-face conversation as the most humanizing method of communication, one that allows us to develop a capacity for empathy. People use their phones for the illusion of companionship when real-life relationships may feel lacking, and our connectedness online leads us to discredit the potential for empathy and intimacy in face-to-face conversation.

We chose a campfire as the visual inspiration for our P5 sketch because of the casual ritual it represents today, providing both warmth and comfort while people connect with nature. Fire is pervasive across human history, but within the present context we use it as a symbol of voluntarily disconnecting from technology and giving ourselves the opportunity to nurture our relationships with nature and those close to us.

Expanding upon the campfire into the ceremonial practice of the bonfire, fire has been used throughout history as a way to bring individuals together for a common goal, whether celebrations or folklore customs.

Rather than working with the literal visual depiction of fire, we chose to take visual cues from mobile meditation apps. 

We don’t believe our experiment will provide all of these benefits, but we wanted to use it as a reminder that, even within a digitally-led program, face-to-face interaction is just as important. By providing each of our classmates a moment of self-reflection, we gave ourselves the opportunity to evaluate what we would like to offer one another and to create.

We think our presentation helped our classmates take a break from the hecticness of constantly looking at multiple screens while working on their Experiment 1 projects. One piece of feedback we received was that the presentation would have been more successful had we presented towards the end of the class, when everybody had spent more time looking at screens.

If we were to develop the experiment further, we could explore using the phone’s camera input to dim the fire based on eye contact, encouraging users to look away from their screens when in conversation together. A further improvement could be the ability to blow on the phone to dim the fire, which would require ranges of mic input to distinguish between conversation and blowing on the phone.

Citations

2D and 3D Perlin noise. (n.d.). Retrieved September 21, 2019, from https://genekogan.com/code/p5js-perlin-noise/.

Righini, Evgeniya. “Gradient Studies.” Behance, 2017, www.behance.net/gallery/51830921/Gradient-Studies.

Turkle, S. (2016). Reclaiming conversation: The power of talk in a digital age. NY, NY: Penguin Books.

ustwo. (2015) Pause: Interactive Meditation, apps.apple.com/ca/app/pause-interactive-meditation/id991764216.

Experiment 1: Digital Interactive Sound Bath

Abstract

Our project is a digitized version of the experience of a sound bath. The objective was the same – to explore the ancient stress-relieving and sound healing practice. However we sought to achieve this using laptops and phones, which are often associated with being the cause of stress and anxiety. Our experiment made use of motion detection, WEBGL animation, and sound detection and emission.


Table of Contents

1.0 Requirements
2.0 Planning & Context
3.0 Implementation
3.1 Software & Elements
3.1.1 Libraries & Code Design
3.1.2 Sound Files
3.2 Hardware
4.0 Reflections
5.0 Photos
6.0 References


1.0 Requirements

The goal of this experiment is to create an interactive experience expandable to 20 screens.

2.0 Planning & Context

 

schedule
Schedule

Stress is something that affects many. The constant hustle-bustle of work deadlines, fast-paced city life, and overachievement may push you to the edge, and most of us could benefit from self-care, meditation, relaxation and a pause from busy life. Enter sound baths.

Sound baths use music for healing and relaxation. A sound bath is defined as an immersion in sound frequencies that cleanses the soul (McDonough). From Tibetan singing bowls to Aboriginal didgeridoos (Dellert), music has been used therapeutically for centuries. The ancient Greeks used sound vibrations to aid digestion, treat mental disturbance and induce sleep, and Aristotle’s ‘De Anima’ describes how flute music can purify the soul.

Since the late 19th century, researchers have focused on the correlation between sound and healing. These studies suggest that music can lower blood pressure, decrease pulse rate and assist the parasympathetic nervous system.

So, essentially a sound bath is a meditation class that aims to guide you into a deep meditative state while you are enveloped in ambient sounds.

brainstorm
Brainstorming

Sound baths use repetitive notes at different frequencies to help bring your focus away from your thoughts. These sounds are generally created with crystal bowls, cymbals and gongs. Similar to a yoga session, the instructor creates the flow of the sound bath. Each instrument creates a different frequency that vibrates in your body and helps guide you to a meditative and restorative state. Some people believe bowls made from certain types of crystals and gems can channel different restorative properties.

Our project is a digitized version of the experience of a sound bath. The objective was the same – to explore the ancient stress-relieving and sound healing practice. However we sought to achieve this using laptops and phones, which are often associated with being the cause of stress and anxiety. We allowed those experiencing it a moment to pause, reflect and reconnect with their inner soul.

requirements-list
Requirements

The concept of our experiment was to let the user interact with 4 primary zones in order to experience them:

  •     Zone A – Wild Forest – Green
  •     Zone B – Ocean Escape – Blue
  •     Zone C – Zen Mode – Purple
  •     Zone D – Elements of life – Pink
  •     Projections – a) Visually soothing abstract graphics. b) Life quotes
zone-maps
Zone Maps

We carefully segregated the different experiences into the four corners of the space based on their soothing qualities. Zone A consisted of motion-sensitive sounds of rain, chirping and crickets along with a motion-sensitive zonal colour of green. Similarly, Zone B consisted of motion-sensitive seascape sounds like ocean waves and seagulls along with ambient lighting in blue tones. Zone C, the zen zone, had meditation tunes as well as flute and bell melodies triggered by people passing by, along with an ambient pink lighting touch. The final zone, D, was meant to represent elemental sounds such as rain, fire and earth triggered by motion; however, we ultimately opted for silence within that zone, providing a brief audio escape. The colours were drawn together with colour-cycling lamps near the floor.

The experience also included projections: eye-pleasing visualizations projected onto the ceiling. These projections were volume-sensitive, so based on the interacting audience, the visualizations would become brighter and more prominent. To go along with the theme of a digital sound bath, we also projected quotations about life intended to instill faith and inspire the users who read them.

Once all these elements came together, the space became a digital sound bath wherein users could come and relax their minds. The experience took place in a dark space where, only upon detecting motion, would the room light up with different colours and play different ambient sounds. The result was a soothing and relaxing experience for the audience.

3.0 Implementation

3.1 Software & Elements

3.1.1 Libraries & Code Design

For the zones, the Vida library was used for motion detection. The light emitted was a simple rectangle that slowly fades when motion is detected, and the volume of the audio files mimics this as well.
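We used the Vida library itself, but as a stand-in the sketch below shows the same idea with simple frame differencing: when enough pixels change between frames, the zone rectangle lights up and then slowly fades, and the audio volume can follow the same value. The threshold and colour are illustrative.

let cam, prev;
let glow = 0;

function setup() {
  createCanvas(320, 240);
  cam = createCapture(VIDEO);
  cam.size(320, 240);
  cam.hide();
  prev = createImage(320, 240); // previous frame for differencing
}

function draw() {
  background(0);
  cam.loadPixels();
  prev.loadPixels();
  let changed = 0;
  for (let i = 0; i < cam.pixels.length; i += 4) {
    if (abs(cam.pixels[i] - prev.pixels[i]) > 40) changed++; // compare red channels
  }
  prev.copy(cam, 0, 0, 320, 240, 0, 0, 320, 240);            // remember this frame
  if (changed > 500) glow = 255;      // motion detected: light the zone
  glow = max(glow - 3, 0);            // then slowly fade
  fill(180, 100, 255, glow);          // zone colour (placeholder)
  rect(0, 0, width, height);
  // e.g. zoneSound.setVolume(glow / 255);  // audio volume mimics the fade
}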

WEBGL was used to generate the calming projection: a slowly rotating cosine and sine plot animated in 3D using spheres. It was sound-activated and glowed brighter as the sound level increased.
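A hedged sketch of that projection: spheres placed on a slowly rotating sine/cosine surface in WEBGL, glowing brighter with the mic level. Grid spacing, counts and colours are illustrative.

let mic;

function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);
  mic = new p5.AudioIn();
  mic.start(); // on mobile, a tap (userStartAudio) may be needed before the mic runs
  noStroke();
}

function draw() {
  background(0);
  const level = mic.getLevel();
  const glow = map(level, 0, 0.3, 60, 255, true); // brighter as the room gets louder
  rotateY(frameCount * 0.005);                    // slow rotation
  fill(glow, glow, 255);
  for (let x = -200; x <= 200; x += 50) {
    for (let z = -200; z <= 200; z += 50) {
      const y = 60 * sin(x * 0.02) * cos(z * 0.02 + frameCount * 0.02);
      push();
      translate(x, y, z);
      sphere(8);
      pop();
    }
  }
}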

The life quotes used an array and setInterval() to redraw new quotations, as outlined below.
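In outline, that mechanism is just an array index advanced on a timer; the quotes below are placeholders.

let quotes = ['Breathe.', 'Be present.', 'Let it pass.']; // placeholder sayings
let current = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
  textAlign(CENTER, CENTER);
  textSize(32);
  fill(255);
  setInterval(() => {
    current = (current + 1) % quotes.length; // advance to a new quote every 10 seconds
  }, 10000);
}

function draw() {
  background(0);
  text(quotes[current], width / 2, height / 2);
}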

The code is designed to be centralised, so although there are 14 unique programs running, they share the base code where possible. For efficiency of set up, a home page was created with buttons for each program.

3.1.2 Sound Files

For sounds, the following were tested, but only the bold were implemented as they were the most audibly pleasing combination. These high quality sounds were purchased and licensed for use.

pink-zen(D): gentle-wind.wav, ambience.wav, bells.wav

purple-elements(C): wind.wav, rain-storm.wav, thunder.wav

blue-ocean(B): humpback-whales.wav, sea-waves.wav, california-gull.wav

green-forest(A): thrush-bird.wav, robin.wav, forest-leaves.wav, cricket.wav

3.2 Hardware
Some of the hardware used. (Plain dim LED lamp, glass jar, wax paper)

We used round table lamps on the floor with remote-controlled LEDs that cycled through the rainbow. Two plain dim LED lamps were used for safety in dark areas. Two projectors were used: one to project the life sayings onto the screen and another to project the soothing animation onto the ceiling. Glass jars wrapped with decorated wax paper held the phones as they lit up. The wax paper was chosen to coincide with each of the zone themes, and the glass jars were tall enough to hide most of the screen and provide a soft glow, yet short enough to keep the camera exposed, since it is used for motion detection. An iPad was used at the entrance to provide context for the space. The space was decorated to simulate a sound bath.

4.0 Reflections

When approaching this topic, our group set out to explore a solution where participants would not have to physically touch their phones, but instead have them as part of an experience they walk away from, while the phones aid in relaxing themselves and others. While meditatively walking around the space, their motion acts as the trigger for the light and soundscape. We noticed some participants becoming enveloped in the experience and lying down, as one would in a traditional sound bath, to absorb the experience with their senses. Others, entranced by the lights and affirmations, were curious about what different pleasing sounds and colours could be produced. Due to the amount of hardware and the number of programs involved, a lot of set-up was required before the room could be entered. An additional complication is that this type of set-up requires the phones to be accessible to the creators, rather than something the attendees bring with them into the experience.

The room initially requested was RHA 318, a smaller and more intimate space that would have allowed more interaction between the lights by having them closer together, and a better layout for the projections. The room had recently gone out of service, and in the larger room, RHA 511, some of that interaction was diluted, as pointed out in the post-discussion.

Additionally, despite mentioning that participants only needed to walk around to trigger the sound, many who were unfamiliar with the concept of a sound bath still tried to manipulate the devices or holders, or to use sound to trigger the effects. This is likely due to memories of previous tactile experiments, where manipulating the elements within the experiment produced positive results.

5.0 Photos

6.0 References

https://www.allure.com/story/sound-bath-meditation-benefits

https://www.elitedaily.com/p/what-is-a-sound-bath-5-thing-to-know-before-you-bathe-in-the-sound-2975477

https://articles.aplus.com/wtf-is-it-and-should-you-try-it/what-are-sound-baths-benefits?no_monetization=true

https://www.washingtonpost.com/lifestyle/wellness/tune-in-and-chill-out-what-are-sound-baths-and-why-you-should-try-one/2017/05/02/e74c697c-2b7c-11e7-a616-d7c8a68c1a66_story.html