Frame It Up!

FRAME IT UP

By Finlay Braithwaite and Tommy Ting

https://webspace.ocad.ca/~3164558/FrameItUp/

Frame It Up is an interactive screen-based game best played with 10+ people. The game requires players to carry their laptops and physically walk around the room. Frame It Up is a choreography-generating game influenced by Twister.

 

Instructions

Using your laptop, open the URL in Google Chrome.

Read the instructions.

Click ‘PLAY’ to enter the game.

You are presented with a name and gestural prompt.

Use your camera to find the person and ask them to perform the prompt.

Click anywhere to take a picture.

Pictures are saved onto your laptop.

After capturing a prompt, a new prompt appears.

Take pictures of the person with the new prompt.

Repeat until the one-minute timer runs out.

Every minute, on the minute, all players are provided with a new name and prompt.

 

Objectives

  1. To negotiate with other players in the room to capture an image of a person and gestural prompt.
  2. To generate random acts of choreography and dance movements that highlight humans’ relationship with technology.

 

P5.js Code

https://github.com/braithw8/OCAD_webspace/blob/master/FrameItUp/sketch.js

 

Supporting Visuals

Presentation Day

Screenshots

frameitup_documentation01

frame-it-up-kate_s-squat

frame-it-up-emilia_s-right-clench-fist

frame-it-up-finlay_s-right-knee-raise

 

Process Journal

Day 01 [2017.10.16]: Experiment 2 Introductions

We came up with a few different ideas on our first day. We were interested in using the camera function, but inherent in camera technology is a conversation about ethics, and more specifically privacy. We wanted to use the camera in a critical way that would open up discussions around ethics.

  1. “No Pervert!” Using the camera, the screen directs you to point it at someone in order to “see what lies underneath”, but once you line it up with a body, it generates a message saying “Why would you ever want to do that?”
  2. “Conversation Helper” Your mobile device connects you with another user, then prompts you with some conversation topics.
  3. “Colour Matcher” Using the mobile device’s gyroscope, you have to rotate your phone to the right x, y, and z coordinates to match the colour of the text to the colour of the canvas background.
  4. “Shake It Up” Using your phone again, shake it to generate a prompt to find another player in the room; once located, shake it again to generate a prompt for a body part, then take a picture.

After coming up with a few different ideas, we decided to go with Shake It Up. We were interested in the human movement this game would generate. It touches on the things we were both interested in exploring with this experiment: physical interaction with digital technology, and movement and dance.

Day 02 [2017.10.17]: Coding

The first major hurdle was to get the video camera to work in a consistent and predictable way. The number of different possible device types, makes, and models made this a daunting task. We were fairly determined to use smartphones and tap into their cameras as the technical underpinning for our project. We ran into some basic hurdles getting video to work even in a rudimentary fashion. Chrome, for example, demands that an https:// server be used if the camera is to be engaged, for security/privacy reasons. This means that code has to be uploaded frequently to such a server for developing and testing purposes. Dreamweaver became our go-to editor as it facilitates automatic SFTP sync on save. It also has built-in GitHub integration, which is a dream come true.

new-mockup-1

As the working title suggests, getting the shake input code to work seemed imperative to our development. However, our early testing led us to conclude that it would not be an effective way to move through a serial sequence of interactions, as unintentional double shakes and phantom shakes were difficult to avoid in code. This investigation was illuminating in that it demonstrated that our user flow had too many stages and device interactions in the sequence. We felt that this took away from the experience, as the device became the focus rather than a catalyst. We played with the idea of having the random person and body-part prompts cycle on a timer, not relying on interaction. It would also be a great moment if this timer were set to a common clock on all devices, so that new prompts were generated for all players simultaneously.
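Here is a minimal sketch of that common-clock idea, using the wall-clock minute as a shared random seed; the prompt list and styling are illustrative, not the project's actual code.

```javascript
// Every device derives its prompt from the current wall-clock minute,
// so all screens flip together (give or take clock drift).
let prompts = ['right knee raise', 'squat', 'peace sign', 'head nod'];

function setup() {
  createCanvas(windowWidth, windowHeight);
  textAlign(CENTER, CENTER);
  textSize(32);
}

function draw() {
  background(20);
  randomSeed(minute());            // same seed on every device -> same "random" prompt
  const prompt = random(prompts);  // random(array) returns one element
  const remaining = 60 - second(); // countdown to the next prompt change
  fill(255);
  text(prompt + '\n' + remaining + 's', width / 2, height / 2);
}
```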

Day 03 [2017.10.18]: Back to the Drawing Board

One immediate concern we had was capturing pictures of someone’s body part without their consent. Although it would call attention to the problems of privacy, we thought this was too simplistic and literal. We went back to the whiteboard to brainstorm new ideas.

img_1861

We came up with a few different ideas for new prompts. One was to use colours, moods, and feelings; this would be more abstract and would give the player the choice to interpret the prompt however they wanted, though it still wouldn't be consensual.

Next was to use an RGB or grayscale value; the player would have to find a matching colour on their person's body with their camera. This would make our project more “game-like”, but we didn't know how to use the camera to calculate colour values. Moreover, it still didn't solve our consent issue.

Lastly, we came up with a list of gestures such as a head nod, smile, right hand shake, left middle finger, and right peace sign. This immediately solves our consent problem, since you have to ask your person to perform the gesture. It also creates more of a negotiation between you and the other players. Finally, it adds a much richer dimension to our initial interest, which was to use this game to create random acts of dance and choreography.

Day 04 [2017.10.19-23]: Coding (Cameras, Mobiles to Laptops)

Eureka! We were starting to make real progress on the video front. Kate Hartman had suggested that we ‘time box’ this problem, giving up on it if we didn't get the results we needed in a specified amount of time. The biggest challenge we overcame was specifying which of a mobile device's cameras to use. The p5.js video capture allows for constraints compliant with the W3C specification, which includes language to call for different camera types. The type we were interested in was ‘environment’, the non-selfie, outward-facing camera on the back of a phone. Finding the correct syntax to connect this constraint to p5.js was elusive and frustrating, but eventually my Android phone took a brave step and faced the world (a sketch of this constraint appears after the list below). With this victory, we began working with the video image and integrating it into our code. To accommodate variable screen and camera resolutions, we created a display system that would respond to four possibilities:

  • Camera resolution width narrower than horizontal display.
  • Camera resolution width wider than horizontal display.
  • Camera resolution width narrower than vertical display.
  • Camera resolution width wider than vertical display.

With these four scenarios, our video placement would respond to the parameters and crop and place itself accordingly.
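Going back to the camera constraint: below is a hedged sketch of requesting the rear camera through p5.js createCapture() with W3C-style constraints. Constraint support varies by browser and device, and this mirrors the approach described above rather than the project's exact code.

```javascript
let capture;

function setup() {
  createCanvas(windowWidth, windowHeight);
  const constraints = {
    video: {
      facingMode: { ideal: 'environment' } // ask for the rear camera, fall back if unavailable
    },
    audio: false
  };
  capture = createCapture(constraints); // p5.js accepts a constraints object here
  capture.hide();                       // we draw the frames ourselves
}

function draw() {
  background(0);
  image(capture, 0, 0, width, height);
}
```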

In this meticulous process we encountered a possible bug in the p5.js reference. With the following function, ‘image(img, dx, dy, dWidth, dHeight, sx, sy, [sWidth], [sHeight])’, you can crop an image and place it into your canvas, possibly resizing it in the process. However, in working with this code it appears that the destination coordinates (d) and the source coordinates (s) are reversed from the documentation. We will investigate further and let the p5.js team know if this is indeed the case.

This code was important because we wanted to crop our video instead of resizing it. We wanted a clean ⅓-height band of video centred in the middle of the screen, resizing smoothly and adapting to variations in screen and camera resolution. We felt a crop would give us a natural zoom that would enhance the image-finding aspect of the game and would also lower the CPU overhead of live video resizing.
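As an illustration of that crop (not the project's exact code), here is a helper that takes a centred one-third-height band from the video using the nine-argument image() call; if the destination/source ordering behaves reversed, as we observed, swapping the two groups of arguments is the workaround.

```javascript
function drawVideoBand(video) {
  const bandH = height / 3;            // destination: a 1/3-height strip of the canvas
  const destY = (height - bandH) / 2;  // centred vertically

  const srcH = video.height / 3;       // source: a matching strip from the middle of the frame
  const srcY = (video.height - srcH) / 2;

  image(video,
        0, destY, width, bandH,        // where the band lands on the canvas
        0, srcY, video.width, srcH);   // which pixels we take from the video
}
```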

%#$&%#&!

Tommy’s phone won’t open %#%^#@^. Try as we might, our code worked well on Android but not on iPhone, particularly Tommy’s iPhone. With the time box in shambles and our project in jeopardy, everything was on the table, including revisiting other ideas or generating new ones. Realizing that the majority of portable devices available to us were made by Apple, we swallowed our pride and began developing for laptops. Unfortunately, we didn’t have the ability or time to figure out a way to include both Android phones and laptops, so we went with laptops only.

Despite our worst fears, the laptops were great, and they added some new dimensions to the game. People could see themselves being captured and could adjust their position and pose to assist in play. This interactive feedback element would not be possible with a phone’s ‘environment’ camera.

Day 05 [2017.10.24]: Playtesting

The playtest was extremely revealing and gave us a lot of insight into how to quickly resolve some immediate issues. We noticed three main issues:

  • Our sketch did not work consistently on iOS; some phones worked, most didn't.
  • People were upset by not having a specific end goal; namely, they were confused about what to do after they framed the person up with the corresponding body part.
  • The one-minute timer was too long, since it was easy and simple to find the person and the body part.

It also confirmed what we had hoped for:

  • The scuffle to locate the person and the body part resulted in a dance amongst players.
  • People had to negotiate with each other in order to find their body part.

Play Test

Day 06 [2017.10.26]: Refinement in Code and Game Concept

On our last day, we refined the game's visual interface, from small details such as font size and stroke shade to adding a photo-capture feature.

The last major coding hurdle turned out to be fairly easy. Neither of us had made an app with multiple states or scenes; our code to this point relied on one loop for the entire experience. We needed to make a splash page to introduce and explain the game. We could have done it as a separate HTML launch page, but we wanted to try doing it in a single p5.js sketch. To start, Tommy created the launch page in one sketch and I finished the details on the main code. By using a simple ‘if’ statement tied to a button on the splash page, we were able to have users move cleanly from one state to the next. Huzzah!
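A minimal sketch of that single-sketch state switch, with names of our own choosing rather than the project's actual variables:

```javascript
let playing = false;
let playButton;

function setup() {
  createCanvas(windowWidth, windowHeight);
  playButton = createButton('PLAY');
  playButton.position(width / 2 - 40, height / 2);
  playButton.mousePressed(() => {
    playing = true;      // the 'if' flag that moves us out of the splash state
    playButton.hide();
  });
}

function draw() {
  if (!playing) {
    background(30);      // splash state: title and instructions
    fill(255);
    textAlign(CENTER);
    text('Frame It Up — read the rules, then press PLAY', width / 2, height / 3);
  } else {
    background(0);       // game state: prompts, video, capture, timer go here
  }
}
```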

The coding details of this project were fun mini-challenges. We attempted to make everything proportional to the display, such as text size and video size and placement. A fun example is that the button size is tied proportionally to the font size, which is tied proportionally to the overall number of pixels in the canvas. Another fun detail was randomness. The colours are all randomly generated, giving the game a fun look that's different every time. However, in our tests, users complained that the text often blended into the background and became difficult to read. We set some rules to enforce that the randomly generated colours have a specified minimum difference in hue. Changing the p5.js colour mode to a hue-based system instead of RGB made colour picking of this nature possible.

Making the sounds random was a larger challenge than anticipated. Generating a random hue is one thing, but randomly selecting from a pool of sound clips is another. With sound, we wanted to generate a fun and chaotic reinforcement of the experience, with each device emitting sounds unique from the next. To achieve this, each device loads ten random sounds from a pool of fifty-one. At each sound cue, the code randomly selects one of these ten files for playback. Loading all fifty-one sounds would have increased the loading time and made the experience fairly buggy, considering there's already a live video input in play. This seemed like a good compromise.
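A sketch of those two details, with our own thresholds and a hypothetical file-naming scheme (the project's actual values may differ); it assumes the p5.sound library is loaded.

```javascript
let textHue, bgHue;
let sounds = [];

function preload() {
  // load ten random files out of a pool of fifty-one, rather than all of them
  const picked = [];
  while (picked.length < 10) {
    const n = Math.floor(Math.random() * 51);
    if (!picked.includes(n)) picked.push(n);
  }
  for (const n of picked) {
    sounds.push(loadSound('sounds/laugh' + n + '.mp3')); // hypothetical paths
  }
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  colorMode(HSB, 360, 100, 100);
  bgHue = random(360);
  // re-roll until the text hue is at least 90 degrees away around the colour wheel
  do {
    textHue = random(360);
  } while (min(abs(textHue - bgHue), 360 - abs(textHue - bgHue)) < 90);
}

function mousePressed() {
  random(sounds).play(); // pick one of the ten preloaded clips at each cue
}

function draw() {
  background(bgHue, 60, 90);
  fill(textHue, 80, 90);
  textSize(width * 0.05);           // proportional to the canvas, as described above
  text('prompt text here', width * 0.1, height / 2);
}
```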

Playing with sounds

Finally, we changed the focusArray from a list of body parts to a list of gestures, which included positive, neutral, and negative gestures. We decided this would be more interesting, as people would have to negotiate even more with other players in order to capture their photograph. It acts as a consensual prompt, which mitigates the issues of privacy. We also decided to keep the one-minute timer instead of speeding it up (as our playtest had suggested). Within the minute, the person must capture as many different gestural prompts as possible. Lastly, using gestural prompts instead of finding body parts creates more of a dance, which was more fitting to our conceptual framework of contemporary dance.

 

Project Context

While doing some initial research for the project, we were immediately drawn to the relationships between kinaesthetics, human bodies, dance and choreography and camera technology.

Although Frame It Up is a game, we were more interested in the choreographic outcomes of playing it. We found that the game was able to generate random acts of choreography, which materialized our interest in humans' relationship with technology in the form of dance. We deliberately decided to use the camera as the main device connecting the players, as the act of taking someone's picture is inherently violent (Sontag 1977), and we wanted to explore this violence through dance and play. We were informed by Jane Desmond's idea that how we move, and how one moves in relation to others, comes from a place of desire (Desmond, p. 6), and by gestus, a theatre technique created by director Bertolt Brecht that understands gestures as an integral part of the human character, its wishes and desires (Baley, 2004). We wanted to investigate how we move with each other, and amongst each other, when our violent technological devices have become both embedded within and extended out to how we express desire.

Although it wasn't part of our original idea, carrying around the laptops intensely highlighted our increasingly posthuman bodies. The soundtracks we used were all compiled from laugh tracks, which call attention to human happiness, playfulness, and desire, but also to human violence and brutality. We also looked to the works of choreographer Pina Bausch. Bausch's work highlights the violence of men and the suffering and oppression of women to an incredibly uncomfortable degree. Her work “forces her audiences to confront discomfort: they are painful to look at but impossible to turn away from” (Avadanei, p. 123). Using Susan Sontag's understanding of the camera as a weapon, dance theory, and Pina Bausch's work, our goal is for Frame It Up to be both a playful game and a tool to generate choreography that explores the relationship between privacy, human desire, and technology.

 

Bibliography

Avadanei, Naomi J. “Pina Bausch: An unspoken exploration of the human experience.” Women & Performance: a journal of feminist theory, vol. 24, no. 1, 7 May 2014, pp. 123–127, doi:10.1080/0740770X.2014.894289.

Baley, Shannon. “Death and Desire, Apocalypse and Utopia: Feminist Gestus and the Utopian Performative in the Plays of Naomi Wallace.” Modern Drama, vol. 47, no. 2, Summer 2004, pp. 237–249, doi:10.1353/mdr.2004.0018.

Desmond, Jane, editor. Dancing Desires. The University of Wisconsin Press, 2001.

Sontag, Susan. On Photography. Picador, 1977.

Bablebop

Project title: Bablebop

game-screen-1

Team:
Roxanne Baril-Bédard
Dikla Sinai
Emilia Mason

Project description:
Bablebop is a smartphone and human interaction game in which the players must elect who will be the next ruler of the planet. The game randomly assigns a character to each player; based on their character's personality, players vote and try to convince the other players to give them a crown. Each crown counts as one vote. The player with the most crowns wins the game and becomes the ruler of Bablebop.

Instructions:
1. Wear the phone in a pouch around your neck.
2. Press the “Start to play” button.
3. Read about your character. You must act according to your character's personality.
4. Start campaigning: you have 4 minutes to convince everyone you talk to to give you a crown. You can team up with other members of your species to take the win as a group.
5. These are elections and you have to vote. Give a crown to those you think deserve it.
6. Give dirt to those who don't deserve to rule Bablebop.

Input: Tapping Device Screen
Output: Changing screens, button blinking

Characters:
Blobs: The Blobs can see it all and use what they see to compliment everyone. You will have to use your compliment power to win this election. Make everyone feel extra good to get as many crowns as possible. Don't fall for tricks and compliments: give crowns to those who you think deserve them and dirt to the ones you think are lying.

Blarks: The Blarks are very manipulative and charming. You will need to convince everyone that you deserve to win this election. Get as many crowns as possible. Lie if you have to. Don't fall for tricks and compliments: give crowns to those who you think deserve them and dirt to the ones you think are lying.

Blims: The Blims are very smart. You can read between the lines and know not to trust anyone. Don't fall for tricks and compliments: give crowns to those who you think deserve them and dirt to the ones you think are lying.

Game story:
22851432_10155839844936306_1495889050_o

 

Code:
https://github.com/metanymie/bablebop

Problems faced

-It was hard to find a way to display a full-size image, as width and height alone did not fill the window. Ultimately, we settled for having slightly squished images in every browser by using windowWidth and windowHeight, and sized the buttons as fractions of the window's width and height so they wouldn't be distorted.

Consequently, it works only on windows that are taller than they are wide.
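A rough sketch of that sizing approach, assuming asset names and button positions of our own choosing:

```javascript
let bg;

function preload() {
  bg = loadImage('assets/game-screen.png'); // placeholder path
}

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);  // keep filling the window on rotate/resize
}

function draw() {
  image(bg, 0, 0, width, height);           // accepts some squishing, as noted above

  // buttons expressed as fractions of the window so they scale with any screen
  const btnW = width * 0.4;
  const btnH = height * 0.1;
  rect(width * 0.05, height * 0.8, btnW, btnH);  // crown button
  rect(width * 0.55, height * 0.8, btnW, btnH);  // dirt button
}
```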

Another hurdle was understanding how timers work: how to make a round last a certain amount of time, and how to keep track of the time that has passed since a change in state.
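One way to do this in p5.js, sketched with names of our own choosing: record millis() when a state begins and compare against it every frame.

```javascript
let state = 'campaign';
let stateStartedAt = 0;
const ROUND_LENGTH = 4 * 60 * 1000; // four minutes, in milliseconds

function setup() {
  createCanvas(windowWidth, windowHeight);
  stateStartedAt = millis();        // remember when the current state began
}

function draw() {
  const elapsed = millis() - stateStartedAt;
  if (state === 'campaign' && elapsed > ROUND_LENGTH) {
    state = 'results';              // round over: move on and restart the clock
    stateStartedAt = millis();
  }
}
```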

We still have a bug related to browsers and operating systems. On Android in Chrome, the game skips one screen because it reads a second tap. We tried to fix it by adding a timer so a certain amount of time would have to pass before a second tap could register, but that completely broke it for iOS, so right now Android still skips a screen. We chose to go with the code that allowed iOS to work best, since most people in the class have iPhones.
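The guard we tried looks roughly like the sketch below; the threshold value and the advanceScreen() helper are placeholders, and a window that is too long is exactly what broke the flow on iOS.

```javascript
let lastTapAt = 0;
let screenIndex = 0;

function advanceScreen() {
  screenIndex++;               // placeholder for moving to the next screen
}

function touchStarted() {
  if (millis() - lastTapAt < 300) {
    return false;              // treat it as a ghost/double tap and ignore it
  }
  lastTapAt = millis();
  advanceScreen();
  return false;                // also blocks the browser's default touch handling
}
```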

-The code would be simpler if we understood how to make objects and for loops. As of now, it is almost 500 lines.

Another challenge was animating the buttons and giving them a sense of feedback. We managed to draw ellipses on top of the button to give a pressed look, but we also tried to have the buttons pulsate in the last quarter of the round and could not figure out how to play the animation (successively drawing the image bigger and smaller) slowly enough to be visible to the human eye.
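A hedged sketch of one way to slow the pulse down: scale the button with a sine of frameCount times a small factor (at 60 fps, 0.05 gives roughly one pulse every two seconds). This helper would be called from draw(); the names are ours.

```javascript
function drawPulsingButton(img, x, y, baseW, baseH) {
  const s = 1 + 0.1 * sin(frameCount * 0.05); // scale oscillates between 0.9 and 1.1
  const w = baseW * s;
  const h = baseH * s;
  image(img, x - w / 2, y - h / 2, w, h);     // keep the button centred while it pulses
}
```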

Finally, optimizing for mobile so the game wouldn't be resource hungry was a challenge too, because the images we were loading were so big that sometimes they just wouldn't load. We also implemented state loops so the images would not redraw themselves every frame.
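The "state loop" idea, sketched with a hypothetical screens lookup of preloaded images: the heavy background is only drawn when the state actually changes.

```javascript
let currentState = 'start';
let drawnState = null;
let screens = {};                  // hypothetical: state name -> preloaded p5.Image

function draw() {
  if (drawnState !== currentState) {
    background(0);
    if (screens[currentState]) {
      image(screens[currentState], 0, 0, width, height);
    }
    drawnState = currentState;     // remember so we don't redraw every frame
  }
  // lightweight per-frame work (timers, button feedback) can still run here
}
```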

Design Files:

22851319_10159468874355057_1592211101_o

screen-shot-2017-10-29-at-11-21-33-pm

screen-shot-2017-10-29-at-11-23-33-pm

screen-shot-2017-10-27-at-12-45-07-pm

Photographs and Videos:

Bablebop 1
https://www.youtube.com/watch?v=-uq6cvdcQeY&t=310s

Bablebop 2
https://vimeo.com/240372836

Bablebop 3
https://vimeo.com/240380977

screen-shot-2017-10-29-at-11-30-22-pm
screen-shot-2017-10-29-at-11-31-17-pm
screen-shot-2017-10-29-at-11-32-17-pm
img_9125

Journal:

October 16th
Dikla and Emilia are paired to work together on this assignment.
During their first meet-up they decide their project will consist of dancing lessons using 20 screens. Ten of the screens will give directions on how to move the upper part of your body, and the other ten screens will give instructions on how to move the lower part of your body.

October 19th
Dikla and Emilia have a video call and decide to change the main topic of the project.
After reading some news from Nicaragua, we decided to focus on Sexual Abuse and Rape.

October 20th
Roxanne joined our project and we discussed what we could make regarding our new topic.

We started our meeting brainstorming and agreed on the next points:

Main topic: Sexual Abuse and Rape.
Possible points of the game: How easy it is to destroy someone; predators exist everywhere
Abuser – Survivor: Asymmetrical games

The Just World Hypothesis

Possible inputs: Camera picking up movement.
Characters: Predators: 2, Prey: 2, Bystanders: 2
Predators silence bystanders and prey give trust tokens.
We were trying to find a way to have the game's procedural rhetoric make a point about the social dynamics surrounding sexual abuse. We explored the idea that bystanders, who would make up to 80% of the players, would have to take a side.

References mentioned during the meeting.
screen-shot-2017-10-26-at-3-01-32-pm

screen-shot-2017-10-26-at-2-59-35-pm

screen-shot-2017-10-26-at-2-59-22-pm

 

October 22nd
We changed the game dynamic to Pac-Man's logic and tried to define the mechanics of the game.
At this point we wanted the characters to be able to give powers to each other.

Define Steps:
Step 1: Code randomizes what character each screen gets.
Step 2: Once every screen has a character user must tap for instructions.

Reference:
screen-shot-2017-10-26-at-3-40-59-pm

October 23rd
Roxanne started working on Code
Dikla started illustrations
Emilia began description of characters, story, and instructions

We discussed making a special background animation to set the mood of our game: the background would go from night to day and day to night to let the users know the time. We would be using a projector for this. We were thinking of having night and day phases because they would relate to the different characters and the times when each could use their specific powers.

We also created a design guideline to define color scheme and type of illustration.
screen-shot-2017-10-27-at-12-43-23-pm

At some point, we discussed the idea of using the phone's camera as an input and having each player take a selfie and use that photo as the image of their character. For aesthetic reasons, we chose to design the three characters instead.

Presentation of the game to Nick during class time:
As we were explaining, we realized we needed to simplify our idea.

Suggestions from Nick:
-Simplify
-Make a storyboard, wireframe or user flow to get a better idea of our game
-More interactions on the screen than in person.
-Divide our information and instructions in Public info and Private info for the users.

October 24th
Our game idea went through another iteration. We figured that because of the time constraints we needed to simplify the gameplay loop a lot, given that we would only have 5 minutes or so to play.
We managed to define the game logic and the steps.

23030693_10155776548899437_818215451_o

22879118_10159468428200057_2061618194_o22850418_10159468428190057_1340428421_o
22850307_10159468428210057_173971432_o22879322_10159468428215057_1336796356_o

Steps:
Step 1: Log into Website
Step 2: Grab your phone and put it in the ziplock bag
Step 3: Press the button to randomize which character you get. Your identity is a secret. (Button in the middle: “Press here to get your character”)
Step 4: Screen gives card with secret character. Image and name of the character “Tap for more info”
Step 5: Tap gets you a new card with character’s information.
There is 1 card per character:
-Blobs
-Blarks
-Blims
“Instructions: Wear the phone around your neck and wait to hear the instructions from the game managers.”

Step 6: Give the instructions in person:
During the next few minutes, your job is to convince others to give you crowns. You will use your character’s personality to do so.
All of you can vote; you can give either crowns or dirt, so use your judgment wisely when giving a crown or dirt.

Step 7: The screen shows two buttons for players to press. We decided to use Blarks, Blobs, Blims, and Bablebop as names to avoid privileging any particular language. English is not our first language, and we decided to incorporate this experience of having trouble pronouncing words.

During this day we also discussed how the users would behave. We had doubts about whether they would want to play or not, which is why we decided to give users the ability to give crowns and also dirt. We thought having two options would be an incentive for users to engage. We talked about the personalities of our classmates and figured they would enjoy giving and getting crowns, but would also laugh when giving or receiving dirt.

October 26th:
The code needed debugging. Roxanne spent a good amount of time making sure the game worked on iOS and Android. We were facing some difficulties with Android devices.

The timer was set in the code and tested on different phones.

We also developed the characters' information and instructions. Iterating on the story was very helpful for making sure the instructions made sense.

This day we also made the “phone necklaces”. We tested different bag sizes and discussed the length for the string.

Why we used ziplock bags and strings to make “phone necklaces”:
-Allows players to use their hands while trying to convince other players to give them crowns.
-Allows players to make their vote secret.
-Made the interactions more fun and personal.

October 27th
This day we decided the amount of time the users would have to convince other players to give them crowns (4 minutes).

Roxanne tested different ways to make sure the two buttons were working and were giving feedback that they were being pressed.

Observations during gameplay:
-Players were pressing the “Start to Play” button before it was time. They were eager to play and weren't engaging with the character personalities much. Similarly, the players didn't really register or interact with other members of their species.
-Some players didn’t want to put their phones in the “phone necklaces”.
-Some players were giving crowns and dirt.
-Some players were cheating and giving themselves crowns (you can see this happening in the videos).
-One player’s device didn’t run the game.
-Most of the players were laughing very loudly.
-Groups were organically made, one person would try to convince and others would vote.
-Some players did not make any groups and were mingling with the other players.
-Some players felt uncomfortable with the “phone necklaces”.

Takeaways for future iterations:
-Provide clear instructions in the game for players to read, and then also explain them IRL.
-Find a way for players not to cheat.
-“Phone necklaces” should be in the shape of the mobile device, some phones were placed horizontally in the ziplock bags and it made it hard for the players to vote.
-The string for the “phone necklaces” should be shorter for some players and longer for others. Some players were very short, and taller players found it difficult to vote on their devices.
-More animation and screen candy would be nice.
-Maybe integrate rules for team wins, having players of each species collect crowns together, in order to have more team play and less of a free-for-all.

Project Context:

Video games of the Oppressed:
http://www.electronicbookreview.com/thread/firstperson/Boalian
Just-world Hypothesis:
https://psychcentral.com/encyclopedia/just-world-hypothesis/
Werewolf Card Game: https://www.playwerewolf.co/rules/
Secret Hitler Game: http://secrethitler.com/
Spent Game: http://playspent.org/
Pacman: http://www.freepacman.org/welcome.php

 

 

Phoneless Xmas!

Put Your Phone Away & Have a Merry Christmas!
by  Jad Rabbaa & Yiyi Shao

Pictures and Videos of the final product:
img_9282  img_9281

Christmas has always been the time of year when friends and families get together. Nevertheless, with the new generation's continuous addiction to mobile phones, people forget that they are surrounded by others even when they deliberately go to events such as Christmas dinners. They tend to get distracted immediately.

The only solution is to get away from phone screens, and this Christmas tree is the right solution and excuse.

Inspired by the new, unconventional style of modern Christmas trees, and by the tradition of bringing family members together to collaborate on the Christmas decorations as a social activity, this project was born.


HOW IS IT USED?
1-The host installs the tree at home and sends the link to their guests.
2-Guests browse to the link provided, and every phone displays a different colour of Christmas ornament on its screen. Guests then place their phones in one of the clear sockets on the tree.
3-Interaction: Christmas ornaments on the screen swing back and forth with the rotation of the phone. On tap they also play different Christmas sounds and “Merry Christmas” wishes in several languages of the world.
4-Outcome: As long as the phones are put away, guests are forced to be social and engage with everyone, taking Christmas dinners back to how they used to be.


INSPIRATION:
1-In the 16th century, Christmas trees were real green pines. Later, people started using artificial trees made from plastic. Recently, the fashion has shifted to unconventional materials such as crystals, glass, light bulbs, and sometimes even weird stuff such as cups or bottles.
2-Some cultures (https://www.whychristmas.com/cultures/argentina.shtml) throw a pre-Christmas party so all family members gather to help decorate the tree.
3-People becoming antisocial during family dinners and parties.

Inspirations links:
https://theophane.co.uk/2009/12/09/mobile-mobile-an-interactive-installation/
https://www.psfk.com/2012/12/artwork-mimics-data-mining.html
https://www.engadget.com/2013/04/04/design-studio-creates-modular-multiple-ipad-kiosk-arrays-functi/
https://creators.vice.com/en_us/article/z4qqx9/a-video-installation-highlights-how-the-west-is-running-out-of-water


CODE:
You can find the final code for our project here.


JOURNAL:
* Day 1: 18 October 2017:
We started brainstorming for the big idea of our project and thought about many ideas:

screen-shot-2017-10-30-at-1-24-06-am
We mainly wanted to use the multiple screens (2D planes) as layers to create a 3D effect. We thought about making one landscape with 3-4 different planes (layers) where the first layer would be trees that interact with the movement of the phones whether horizontal or vertical.

The idea was a little limited by the angle of view, so we thought about the universe and creating a 3D effect of stars and constellations with the Milky Way in the background.

We were also interested in using one mobile phone as a speaker (playing a song or music) to influence other devices, and in visualizing how close or far it is by using the sound volume and frequency as input.

This idea made us think of simulating instrument sounds with taps and using the 20 screens as a symphony, but with a little more research we found that this idea had been explored last year. We then thought about something else that combines the visual effects with the sound effects we were interested in: a cylindrical art installation.

We did some research and found the Tibetan prayer wheel to be a good example to get inspired by. https://www.youtube.com/watch?v=IRlwaH4jNh8

We started sketching the structure as shown in the sketch above.

* Day 2: Friday 20 october 2017:
We started sketching the structure of the prayer wheel.
We decided the visuals would be inspired by Tibetan culture, and we thought of the lotus shape.
We also researched online code examples for the installation to simulate the bloop sound.
We then decided we could create the illusion of rotation when all the phones are aligned by letting the graphic move from right to left to give a spinning effect, and that we would use real Tibetan characters on the wheel if we went with this project.

We spoke to Kate about it and we decided to go forward.

* Day 3: Monday 23 October 2017:
We talked about the materials needed and drew the new sketch for the project display as an installation. The main problems we took into consideration are as follows:

  • How do we hang 20 smartphones on a structure so that our audience can interact with them?
  • How do we avoid screens smashing into each other when swinging?
  • What material is suitable for showing both the visualization and the sound? Most importantly, the material should hold the phones very well, and it should not take us a very long time to set everything up on Friday.
  • How can we use the example code, in particular, to generate sound the way a musical instrument does?

After discussion, we both agreed that clear plastic bags were the only choice, because once we attached them to the installation, the only thing people would need to do is put their smartphones into the bags. That way we wouldn't need to tie every single smartphone and hang them up individually on Friday.

screen-shot-2017-10-30-at-1-24-19-am
sketch 2 

We talked to Nick about our idea; the suggestions he gave us are as follows:

  • Instead of hanging 3 phones vertically on one cylinder, we can divide the cylinder into three parts, so each phone can be attached to its own level.
  • The p5.js sound library is worth referencing if we want to make an instrument-like sound.

* Day 4: Tuesday 24 October 2017:

screen-shot-2017-10-30-at-1-24-27-am

When we sketched out the new structure with 4 different pieces, we found that its final appearance looked very similar to a Christmas tree. As it's nearly Christmas time and we miss our families already, we decided to alter the concept from a prayer wheel to a Christmas tree. A digital prayer wheel is a very interesting idea, but we would need to do deeper research into Tibetan culture and Buddhism to deliver the final piece in a condition appropriate for our audience to understand. If we had more time, we would go further in this direction.


Sketch 3

Sketching the measurements of the structure and meeting with Reza to talk about execution.

screen-shot-2017-10-30-at-1-24-45-am
Sketch 4
 

screen-shot-2017-10-30-at-1-24-53-am
Sketch 5

Working on example code written by someone else is very difficult; the one we found contained complicated arguments, which confused us.

Sound research:

  1. How the example code works
    • The example code uses acceleration events and touch on a mobile device. Only while a touch is active is the acceleration data mapped into notes, and as the device moves, the accelerometer senses different values.
    • Note.js is the other JavaScript file included in the project folder; it uses the Web Audio API to make a synth. In the code, ‘createOscillator’ is the function that generates sound, and the waveform is set to sawtooth.
    • In Note.js, filtering, volume, and pitch are the variables that change the character of the sound.
  2. How sound works in the p5.js sound library:
    • Vibrations make sound in the physical environment; the same goes for code, where sounds are generated by math functions in the form of waves. There are four different waveforms in the digital environment: sine, square, triangle, and sawtooth. Amplitude is the distance between the top and bottom of the wave, which controls the volume of the sound. Period is the length of one wave cycle, which controls the pitch of the sound (1/period = frequency, measured in Hz). As we were trying to improve the rather plain sound in Bloop Dance and make it bell-like, a sine wave is gentler and closer to our goal.

screen-shot-2017-10-30-at-1-25-03-am
Image source from WikiPedia

  • By wrapping a wave in an ADSR envelope, with the right set of values it eventually sounds like a musical instrument. A means Attack time, D means Decay time, S means Sustain level, R means Release time.

screen-shot-2017-10-30-at-1-25-10-am
Image source from wikipedia

  • Tone.js is another library that can create music in the browser; it helps schedule synths and effects built on top of the Web Audio API.

This part is too complicated and time-consuming. I believe there must be a way to make the Bloop Dance example sound better after spending more time figuring out the Web Audio API, Tone.js, and the p5.js sound library. It would be super cool to generate sound directly from the acceleration data without using an extra sound file.
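As a hedged sketch of that unfinished synthesis route (values tuned by guesswork, not our final numbers), a sine oscillator shaped by an ADSR envelope in p5.sound could map the device's tilt to pitch:

```javascript
let osc, env;

function setup() {
  createCanvas(windowWidth, windowHeight);
  osc = new p5.Oscillator('sine');   // sine is the gentlest, most bell-like choice
  env = new p5.Envelope();
  env.setADSR(0.01, 0.3, 0.1, 0.5);  // fast attack, quick decay, short tail
  env.setRange(0.8, 0);              // attack level, release level
  osc.amp(0);                        // silent until the envelope drives it
  osc.start();
}

function touchStarted() {
  // map the front-to-back tilt (rotationX, in degrees) to a pitch range
  const freq = map(rotationX, -90, 90, 220, 880);
  osc.freq(freq);
  env.play(osc);                     // fire one bell-like "bloop"
  return false;
}
```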

However, we only had 2 days left for this project. Considering time management and deliverability, and since the final sound output would not differ much between synthesis and loading a sound file, we decided to shift to an easier solution and write the code from scratch.

* Day 5 : Wednesday 25 October 2017:
Problem proposed to Kate and Nick:
1- use of p5.min.js is included in p5.js
2-How to use tone.js
Meeting with Reza to execute the structure: we went to Reza to finalize the structure and install the clear cases.

Video of Building installation with Reza:


* Day 6: Thursday 26 October 2017:
After meeting with Kate and Nick yesterday, the visualization finally appeared on the screen with Nick's help. However, the visualization behaved wrongly: the ball shook unsteadily and remained in the same position on the screen. Only when the device was shaken very hard would the ball start to move.

We knew the value must be wrong, so we tried to find a way to make a console window (like Arduino's) to print the value out on the screen. With Feng's help, we finally found the problem: the difference between acceleration and orientation.

Acceleration:
screen-shot-2017-10-30-at-1-27-04-am

 

 

screen-shot-2017-10-30-at-1-27-17-am

Image resource: http://blog.contus.com/how-to-measure-acceleration-in-smartphones-using-accelerometer/

Orientation:
screen-shot-2017-10-30-at-1-27-26-am

The accelerometer provides XYZ coordinate values, which indicate the direction and magnitude of the device's acceleration (including gravity, g = 9.81 m/s²). These values are very small, and we had been mapping accelerationX between -90 and 90 to set the position of the ball; after checking back with the p5.js reference, we found that the orientation data is the one that represents degrees and falls in the right range.

<<< One main problem solved! >>>

The other problem we faced was mapping the orientation degrees to fit our purpose of playing sound. The phone hangs vertically on the tree, and we wanted the first sound people hear when tapping to be the original one; the pitch then changes with the angle as the devices swing back and forth. This time as well, we used the console window to find the right angle and value, did some maths, and voilà! Another problem solved!

– Concerning the colours of the balls, we wanted them to be randomly selected so classmates would each have a different colour on their phones and the installation would look like a regular Christmas tree. We started by randomizing the RGB values, but the problem was that we sometimes got dark colours that didn't contrast enough with the green background, so we decided to create an array of 9 set bright colours (check the picture below), defined as #26FF00, #0BF9F9, and so on.
The ball was originally just a round 2D shape, and we wanted to leave it digital-looking, but to add more depth we had to add light and dark sides and some shiny dots on top (as shown in the picture below).
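A tiny sketch of that choice: pick from a fixed array of bright colours instead of fully random RGB. Only the first two hex values come from our notes; the rest are placeholders, as is the background green.

```javascript
const brightColours = ['#26FF00', '#0BF9F9', '#FF9900', '#FF00CC', '#FFF000'];
let ballColour;

function setup() {
  createCanvas(windowWidth, windowHeight);
  ballColour = random(brightColours); // each phone gets one colour per page load
}

function draw() {
  background('#0B5B24');              // placeholder tree green
  noStroke();
  fill(ballColour);
  ellipse(width / 2, height / 2, width * 0.5);
}
```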

The challenge was to move the whole ball as one entity; the Y position of the ball moves based on a percentage of the screen height that reflects the rotation of the phone.
Through testing and some maths (a lot of maths), we succeeded in finding the right equation for all five ellipses to move in a proportional way.
screen-shot-2017-10-30-at-1-27-34-am

Logic: the vertical position of the ball is 0 if the rotation is -90, and it is the screen's height if the rotation is +90.
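That mapping can be written as a single p5.js map() call; the helper name is ours.

```javascript
function ballY() {
  // rotationX runs from -90 to +90 degrees; the ball runs from the top of
  // the canvas (0) to the bottom (height) in proportion
  return map(rotationX, -90, 90, 0, height);
}
```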

– We followed the same logic for the sound's speed and pitch. For some reason, at a rotation of 0 degrees the sound seemed way slower than it should have been, and we needed the phone's sound to be normal when it is in a vertical position (which is the normal position when hung on the tree).

After some trial and error, we altered the minimum and maximum values of the orientation: instead of 0 to 180, we added 30 degrees, so 30 to 210, and the sound seemed fine when the phone was vertical.
When we tilt the phone in one direction the sound becomes faster and the pitch gets higher, and vice versa.
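A hedged sketch of that mapping: p5.SoundFile's rate() changes speed and pitch together, and the input range is shifted by 30 degrees so a vertically hung phone lands near a normal playback rate. The function name, tilt variable, and endpoint values are our placeholders.

```javascript
function updatePlaybackRate(greeting, tiltDegrees) {
  // map the shifted 30..210 range (instead of 0..180) onto a playback rate
  const rate = map(tiltDegrees, 30, 210, 0.5, 1.5);
  greeting.rate(rate); // rate() changes speed and pitch at the same time
}
```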

– We came up with a new idea: record different voices saying “Merry Christmas” in different languages. Firstly, we couldn't find a good sound library, and we couldn't guarantee that 20 phones playing together wouldn't be noisy. Secondly, while we were working in the study room looking for a jingle-bells song, Mudit was curious and asked why we were looking for a Christmas song sung in English. Very good question! Christmas is an international festival, so why not just record wishes in different languages, as we have many international students here? In the end, we collected 20 different languages in total!

screen-shot-2017-10-30-at-1-27-48-am

– Finalizing the structure with more decoration.

 

 

Evaluation
We believe this project has lots of potential in the future. Whether it is seen as interactive installation art or a commercial product, it questions the relationship between humans and social media and our infamous addiction to continuously checking our phones.
Beyond this project, we envision a more developed tool, taking it further by adding a function to record people's voices saying wishes and upload them to a cloud service, so that each time people tap and interact with a Christmas ball on the tree they hear different wishes from everyone in the family, or even everyone in the world (with a different button to set regions). People could also customize their own Christmas ball by choosing the colour or shape of the ornament, or maybe even taking a picture of something they like when setting up, before placing the phone in its socket and hanging it on the tree.

___________________________________________

Example code:
http://alpha.editor.p5js.org/projects/BJxoCbdxx
https://p5js.org/examples/mobile-shake-ball-bounce.html
https://github.com/whichlight/mobile-art-intro

Research:
http://opensoundcontrol.org
http://blog.contus.com/how-to-measure-acceleration-in-smartphones-using-accelerometer/
https://developer.tizen.org/community/tip-tech/advanced-web-audio-api-usage
https://www.w3.org/TR/webaudio/#the-biquadfilternode-interface
https://www.html5rocks.com/en/tutorials/webaudio/intro/
https://tonejs.github.io/docs/r11/Instrument
https://musiclab.chromeexperiments.com/
http://flockingjs.org

 

“What Do We Have Here?” by Quinn and Ramona

Title: “What Do We Have Here?”

wdwhh

Project Members: Quinn Rockliff and Ramona Caprariu

Project Description:

The parameter for this assignment was to use 20 screens. Using this as a platform, we set out to create a game that would be an educational experience for us as well as for the users. Since neither of us had prior experience with p5.js, we wanted the resulting game to stress the importance of getting to know the vocabulary and relationships behind the simple interactions possible with this coding language, as that is what we found most important to our process.

The game “What Do We Have Here?” was developed from all of our brainstorming and trials.

Development Journal:

DAY ONE

We began our journey in class on Monday trying to brainstorm different possibilities. We discussed the concept of coworking spaces, shared desk space, and all their implications. We found it intriguing to enter a space where you could somehow see the space as it was used by the previous person, and for that to somehow become a method of developing a bond or intimacy. So our initial idea developed into creating a ‘desk top’ from phones that could sense the imprints of all the objects placed on top of them and then, using that information, translate them into patterns and colours. We took the next couple of days to mull over exactly how we would get the phones to use haptics to ‘sense’ objects, not fingers.

 

DAY TWO

We came together and decided against the initial idea, seeing as we are both new to coding and wanted to stay within the realm of feasibility. All of our focus on how we would teach ourselves the language of p5.js led us to think about creating a game. Then emerged a theme we kept returning to: camp games and childhood games. Naturally, as we kept coming back to the 20-screen requirement, we decided to play off a game that uses a similar number of ‘playing cards’: “Guess Who!”. Our idea spun off this popular game by having a one-on-one layout, with 10 phones/webpages for each player. Instead of just displaying faces like the original game, we agreed that each screen should have an effect that could be described in terms we are learning through this p5.js process. We created a list of interactions.

We split up the list so that we would each be in charge of different pages and then during our game play, have everybody assigned to access one of the 10 pages (in 2 different sets) and then that would create the ‘game board’.

 

DAY THREE

We went to Michael's to gather materials for our game board. We knew that learning how to code all ten webpages was a priority, but even more so we needed to understand how the game was going to be played. If all the phones are lying flat, the user will not be able to conceal which phone they have selected. We decided to use foam and cut slots for the ‘average’ phone size; this way phones could be slotted in quickly as well as turned around when eliminated from the game.

Note to selves: do not try to cut foam like this again. It is messy, makes a snowy mess, and is not very easy to be precise with. You will end up covering it in sparkly gold paper and fancy purple tape.

After 2 hours of fighting with an exacto knife and foam we had the outlines of our board!

img_4794

img_4797

img_4799

DAY FOUR

We met in between classes and reviewed some of our issues with the code. Some of the common issues we faced were:

How do we make all of these webpages look related?

We decided to pick a colour scheme and a shape scheme. A 400×400 ellipse would be placed in the middle of the screen when possible. This would create an identifying relationship between all of the screens, as well as make the game more difficult since they all seemingly look alike!

Secondly, we picked a colour scheme which we would input into the code whenever we could to add to the effect.

screen-shot-2017-10-24-at-6-07-45-pm

screen-shot-2017-10-24-at-6-07-24-pm

screen-shot-2017-10-24-at-6-07-11-pm

This decision, while not related to the technical code, really brought all of our webpages together. It created a final design that had intention and looked good!

How do we stop the webpage from dragging with our finger?

Ramona did some research and found a beautiful little line of code. Lives were changed.

screen-shot-2017-10-29-at-8-00-33-pm
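The exact line in the screenshot isn't legible in this copy; a common fix in a p5.js sketch is to return false from the touch handlers, which prevents the browser's default scroll and drag behaviour on mobile.

```javascript
function touchMoved() {
  return false; // stop the page from panning while dragging a finger on the canvas
}
```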

We both continued to work diligently on our code and worked out the kinks along the way with the help of Coding Rainbow, Google, and the p5.js examples online.

We uploaded all of our code with Cyberduck and prepared for our presentation by creating stricter rules and printing them onto cards to distribute to players.

  1. LERP

https://webspace.ocad.ca/~3164561/1

  2. TOUCH TO EXPAND

https://webspace.ocad.ca/~3164561/2

  3. GRADIENT

https://webspace.ocad.ca/~3164573/3/

  4. TOUCH TO MOVE

https://webspace.ocad.ca/~3164561/4/

  5. TAP TO CHANGE COLOUR

https://webspace.ocad.ca/~3164573/5/

  6. BALL BOUNCE

https://webspace.ocad.ca/~3164573/6/

  7. SNAKE – arm

https://webspace.ocad.ca/~3164573/7

  8. NOTHING

https://webspace.ocad.ca/~3164561/8

  9. XY IMAGE

https://webspace.ocad.ca/~3164573/9/

  10. FLOAT BOUNCE

https://webspace.ocad.ca/~3164561/10

instructions

DAY FIVE

We presented our game in class for a critique. Although we prepared to the best of our ability, there was some chaos getting everyone's phone loaded up with the webpages and slotted into the game board. This was anticipated but still took up a lot of our presenting time. We were able to run two quick-fire rounds of our game, which went well and exemplified the interactiveness and playfulness of the game. Some players expressed confusion with some of the screens, saying that they were not sure exactly what they were doing, or couldn't remember what the x-y coordinates affected; these were all questions and concerns we hoped would arise in order to spark conversation and inquiries into the relationship we have with content placed in front of us. In the critique we discussed future iterations and potential applications of the game.

Documentation:

img_4823

img_

Video Documentation

Code:

You can find the code for all ten of our webpages here: https://github.com/ramonacaprariu/cnc2017

Conclusion:

Experimenting in the final class was a valuable experience in helping us ascertain how our intentions with this project played out. We got the chance to observe how all the participants individually chose to interact with their screens, and how natural it was to explore the different ways in which these interactions are possible.

Context:

For this project we looked to the classic structure of the game “Guess Who”. This game uses the same characters, each with defining features, on the two sides of the board. We wanted to educate the class and ourselves by using the classic examples on p5.js (https://p5js.org/examples/). We also wanted to think about how people interact with screens: what are our first instincts? Do we swipe, tap, shake? How have the apps we use, and the interfaces of our screens, determined our movements and relationships to interactive design? When there are no words, how do we interpret the information we are provided with? And ultimately, how can we communicate this with others?

References:

https://architizer.com/projects/plane-white

http://architizer.com/projects/microsoft-briefing-center/

http://devcenter.metavision.com/design/spatial-interface-design-principles-touch-to-see

http://alpha.editor.p5js.org/LuqianChen/sketches/rJsuy4F-e

 

 

Written in the Stars

starmap2

By Kylie Caraway and Emma Brito

Written in the Stars operates like a digital puzzle that requires teamwork between 20 participants and their phone screens in order to view the entire night sky. It begins with a physical printed map of the sky that shows the constellations' names but is devoid of their images. To see a constellation, participants must go online on their phone, click on a link for a specific constellation, and then raise and tilt their phone slightly, as though they are viewing the sky through it. Once the phone is tilted to a specific degree, the image of the constellation appears. In order to see all 20 constellations at once, 20 people must participate to piece the map and its proper constellations together.

The fact that each screen only displays one constellation at a time is an important feature. When used alone, a screen offers only a small fragment of the visible night sky. This means that the screens, and the people holding them, are reliant on interaction with others in order to complete the puzzle and the entire image of the night sky.

Github Code https://github.com/kyliedcaraway/Written_in_the_Stars

Constellations

  1. Andromeda
  2. Aquarius
  3. Aries
  4. Cancer 
  5. Capricorn 
  6. Cassiopeia 
  7. Centaurus 
  8. Draco 
  9. Gemini 
  10. Leo
  11. Libra
  12. Orion
  13. Pegasus 
  14. Pisces 
  15. Sagittarius
  16. Scorpio 
  17. Taurus
  18. Ursa Major
  19. Ursa Minor
  20. Virgo

Process Journal

pasted-image-0

Brainstorming:

When we first received the assignment, we quickly decided on using stars and constellations as the focus of the project. This backdrop could utilize simple shapes in complex ways, which we found to be both doable and effective in p5.js.

Initially we liked the idea of having all of the constellations in a single 3-dimensional space, so that as a device turned, the skyscape would change as well. We liked the idea of people having their own experience and perspective within the same space. (We later realized that this would rule out interaction between participants, therefore eliminating the need for 20 phones in a particular space.)

Beginnings/Trial and Error:

We found a p5.js example called “orbit” that created a 3-dimensional space and would allow us to hang shapes within it. When used on a laptop, the canvas would orient to the mouse as it was dragged, yet it would snap back to the original view when the mouse button was released. This made it difficult to create a realistic night atmosphere. We decided we would instead use phones and devices with an internal compass, so that the change in position was registered based on the rotation of the device. Laptops were ruled out as a result.

Unfortunately, we also quickly found it difficult to manipulate the orbit code. We couldn't randomize the spheres within the code in order to mimic a starry sky, and it was difficult to pinpoint new shapes where we wanted them to go. 2D planes were also very difficult to place in the 3D view. The 3-dimensional space itself was limited in size on our phones, which would make it impossible to include all 20 of our constellations.

orbit-1

The New Plan:

We scrapped 3D orbit after we realized it wasn’t going to work well for us, and instead decided on a 2D iteration of the night sky, as Kate suggested. We decided to give each user one constellation, as a piece of the larger puzzle of the universe surrounding us. Using p5.js, we would create a 2D landscape, a constellation, and an interaction comprised of tilting the phone to create an interactive experience that relies on the participation and interaction of 20 users.

Atmosphere / Arrays:

array

At first, we searched for code or examples of astronomical atmospheres that created linkages between the stars as you clicked. (This idea can be visualized through particles.js.) Unfortunately, we could not get the particles.js library and code to work within our canvas. There were issues in the JavaScript console between pieces of code within the particles library, which were too daunting to attempt to problem-solve. Next, we looked at parallax effects using arrays. These seemed to work best on laptops, but would not translate well onto a phone without a mouse-hover function. They also felt more appropriate for a video game (such as Asteroids) than for an observational experience. Finally, we found a star-array code that did not rely on interaction or extra libraries. This began to serve as our basis, creating an atmospheric background to surround our constellations. We changed portions of the code, because the stars were too slow and not visible on our phones. We adjusted the frames per second, the ellipses' colours and sizes, as well as the orbit's location.
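A toy version of that kind of star-array background, with frameRate, ellipse size, and colour as the knobs we ended up adjusting; this is not the code we borrowed, just its general shape.

```javascript
let stars = [];

function setup() {
  createCanvas(windowWidth, windowHeight);
  frameRate(30);                 // one of the values we tuned for our phones
  for (let i = 0; i < 200; i++) {
    stars.push({ x: random(width), y: random(height), r: random(1, 3) });
  }
}

function draw() {
  background(10, 10, 40);        // dark night-sky blue
  noStroke();
  fill(255);
  for (const s of stars) {
    ellipse(s.x, s.y, s.r);      // a single size argument draws a circle
  }
}
```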

Interaction:

Kylie incorporated gyroscope-measurement code into our project so that the motion of tilting the phone would result in the appearance of a constellation, in order to mimic the act of looking up to stargaze. The video of the constellation would then play on loop until the phone was lowered and no longer tilted. We only focused on the variable “beta” on our phones, which measures how much the phone is tilted on the X-axis. At first, we told the program to only draw the constellation when beta was greater than 120. While this angle is more aligned with how users actually look up into the sky, we realized it would create problems with our map on a flat wall. We changed the code to draw the constellation when beta is greater than 80, so people could view the constellations as they hold their phones against the map on the wall.
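A minimal sketch of that tilt gate, assuming a placeholder image path; p5.js exposes the device-orientation beta value as rotationX, so the threshold check can be written directly in draw().

```javascript
let constellation;

function preload() {
  constellation = loadImage('constellations/orion.jpg'); // placeholder path
}

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function draw() {
  background(0);           // the star-array background would be drawn here
  if (rotationX > 80) {    // phone raised/tilted past the threshold
    image(constellation, 0, 0, width, height);
  }
}
```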

Constellations:

Once we got a grasp on the new kind of code we would be pursuing, we started making the constellations themselves. We chose to include the 12 astrology signs, as well as 8 of the better-known constellations. Astrology signs were important to us because they are commonly a subject that sparks conversation and interaction between users. We decided to create the constellations as simple animations in After Effects, and then place the video to play over the array. Each constellation would follow the same basic format with a few variations (colour, effect, shapes) in order to keep them stylistically aligned without becoming repetitive. We also toyed with the idea of animating the myths behind the constellations, which would only be seen if users pressed the constellation, but we decided against this for 3 reasons: 1) users pressing an image that is tilted above their heads would result in an uncomfortable, non-intuitive experience; 2) this feat of ultimately 40 animations was beyond our scope and could not be accomplished in our timeframe; 3) getting one video to work was proving impossible, and two videos relying on multiple, sequential interactions was asking for disaster. Although we tried numerous ways to get the videos/GIFs to work, our code did not want to embed the frames within the canvas. In the end, we used 1920 x 1080 JPEG photos, so we could keep the quality of the design without large file sizes.

Challenges and Issues:

Our biggest challenges in this project revolved around code issues. We had issues at first with WebGL and 3D scenes. Pieces were difficult to move around, the code was very sensitive to any changes, the orbit control did not move the way we visualized it would on a phone, and the 3D space it constructed felt too confined for our project. Because of these issues, we opted for a 2D space instead.

We also had issues with star arrays and background visuals (as mentioned before). After trial and error, we researched different arrays that depicted constellations, and ultimately found one that was easy to understand, implement and edit, and fit nicely within our project.

Our largest obstacles were displaying video files and GIFs of our animated constellations. Our first signs of trouble came when we rendered the constellations. We wanted to keep their alpha channel so users could view the stars behind them, but the video files were huge (200 MB or more), and both Atom and Sublime crashed when we used them as assets in the code. We then tried to turn the videos into PNG sequences. Atom and Sublime didn’t like this either, because our animations were anywhere from 5 to 10 seconds long (looking back on it now, I believe this could be why we were unable to play the videos or sequences). We downloaded the P5 Play library and attempted to run the PNG sequence, but the animation would never load. We finally decided that we had to scrap the alpha channel and plan for a background colour behind the constellations. This realization forced us to change the location of the code so that a black box would not appear on top of our array. Ultimately, we had to draw the star array after the constellation, so that the canvas background and constellation background would blend seamlessly.

We also tried GIFs, without success. The GIFs would not load as videos. They would either hold the first frame (creating a still image), draw in a strange spot on the screen that was not within the canvas at all, or not draw at all (the most common scenario). We attempted to download a GIF p5.js library, but there were issues with the library, and the GIF would never play. We also attempted to use the P5 DOM library and element code to run the GIFs. By using “create image” rather than “load image”, the GIFs would finally appear… except they were drawing over the array, regardless of the location of the code, and their location would change based on the device. In the end, the GIFs never operated how we wanted them to.

Going back to video, we attempted to make smaller videos that would load more easily onto the phones. Unfortunately, the videos would either crash the site, never load, load only the first frame, or load outside the canvas with a play button that opened the video in a new tab. After multiple attempts to implement the code, from the p5.js book and website to other tutorials online, we could never get the video to load in our code. After meeting with Kate and Nick on Wednesday, we attempted to take apart the pieces of the code. Even after separating the various portions, we could not get our videos to load within a canvas on either iPhone or Android phones. In the end, we decided to use images rather than video. The images loaded quickly, they were placed in the right location, and they were reliable, working on both types of phones.

mp4-attempt

Code Issue Examples:

In this example, we attempted to load an MP4 file. While this code would respond to the gyroscope and place the video in the proper location, it would only show the first frame of the video.

video-test-1
In this example, we used the P5 GIF library in an attempt to load a GIF. The GIF would not display in the proper location, it would not play, and it would not respond to the gyroscope code. This was our least successful attempt, as nothing worked properly.

gif-video-test-2
In this example, we used p5 Element code. This was our closest success story. The GIF would respond to the gyroscope code, it would play, and it was centered for iOS. Unfortunately, it also drew over our array, creating a black box around the GIF, even though the code was beneath the array. Additionally, when we attempted this on an Android phone, the GIF would not center and created issues with the canvas fitting the phone screen size.

code-deconstruct-1

Our attempts to deconstruct code. Removing all other code, we tried to load an MP4 video by itself. No success. I assume this is the result of our video file sizes or the video length.

Map:

In our first iteration, our map included both the constellations’ names and diagrams. We were relying on the map to help users place their constellation in the larger image, but we realized that the devices would be redundant if the information was already on the map, so we removed the constellation graphics to create a game element. Removing the constellations leaves the user with an incomplete visual without the assistance of their devices.

Final Iteration:

final-working

Regardless of the iterations this project went through, we are very happy with the final incarnation of Written in the Stars. It differs from our original plan in that an image is presented rather than a video, but the other features are present. The gyroscope prompts the image to appear, while the array is a constant. We incorporated the physical map in order to encourage interaction between people and devices; after all, stargazing has long been a social activity, with stories and mythologies associated with each constellation. We also provided information on zodiac signs so participants unfamiliar with astrology could learn their sign, as well as horoscopes as a fun read and conversation piece to connect with our interactive installation. As each participant has their own constellation, they can join with others to create a full atmosphere of the night sky.

Vimeo link here : https://vimeo.com/240378307

Sketches, Designs, and Photographs

beginningsketches

Sketch of our initial brainstorming ideas. While we scrapped the 3D atmosphere and 2D game, we implemented our original user experience, complete with animation, gyroscope, and our revised sky map.

img_0351
We considered incorporating written information, such as a constellation’s history, science, or mythology, but we decided it clogged up the screen and detracted from the overall image of the night sky.

star-colors
This depicts our colour palette, as well as the aesthetic style we implemented in our project. We strove for clean and simple lines, ellipses and stars, with limited colour options, in order to remain cohesive, yet have enough variety to be visually appealing.

screenshot-8

This is a process image of the creation of the Cancer constellation in After Effects. This is one of the smaller of the 20 constellations included in Written in the Stars.

img_1763

Gemini was our test constellation within the code. This is an image of the visual that appears after the gyroscope is activated. It was with this image that we first realized the video wasn’t playing, which launched a series of trial-and-error attempts to make the animation play.

constellationssmaller
Graphic of all of our constellations

22850128_10155006568523148_298448323_o

First design of our map. Kylie made the mistake of making it in Photoshop at 72 by 48 inches with a resolution of 300 pixels per inch. The file was huge, wouldn’t save, and kept crashing. She finally was able to save it as a PDF. It was 1.64 GB, and the print shop she sent it to would not accept a file that large. She then recreated it in Illustrator. While she couldn’t get the faint nebulae texture she had used in Photoshop, Illustrator was overall the preferable tool: the map could be resized to any desired dimensions, the print shop preferred Illustrator files, and the file size was less than 1 MB. Lesson learned: use Illustrator for large prints. Another lesson learned: don’t wait until 2 days before the project is due to get your map printed. Print shops love to charge somewhere around a 500% markup for a rush order…

img_2331

Our poster rolled out for the first time!

img_9112

Final Presentation Day!

Written in the Stars in action:

The presentation was successful. The coding and images worked properly and we got the desired reaction from the class. We also turned out the lights for added effect.

img_2188
img_4889
img_9570

Context

Our project can be contextualized through a couple of different avenues. As was touched on previously, it was important to us to include the astrological constellations because of the personal connection and sense of ownership people feel for their sign. We need look no further than the fact that horoscopes are a staple in nearly every newspaper. A stir was even caused over this in 2011, when astronomers noted that the moon’s gravitational pull had shifted the Earth’s axis, resulting in different astrology signs for each month. After public commotion, NASA had to put out a statement reminding the community that astrology is, in fact, not science.

Stars, and the night sky in general, have been a popular subject throughout history in various forms of art, and later in media. From their initial use by the Babylonians as a storytelling technique, to their representation in artwork such as Salvador Dali’s illustrations of the signs and Vincent van Gogh’s iconic “The Starry Night,” to astronomy’s current popularity as both a marketing and social tool, there is no question regarding the human affinity for the stars. Due to the ubiquity of astrology and the love of stargazing, this project is relatable to a wide audience. We wanted to capitalize on the social aspect of this activity as well, and the 20-screen requirement allowed us to do this.

While Written in the Stars currently serves as an installation that encourages communication and interaction, it could be further developed as an educational tool to teach astronomy. This project could additionally be used as a data visualization tool. Both NASA’s Kepler Space Telescope, which monitors astronomical phenomena, and the European Space Agency’s Gaia telescope, which has produced a revolutionary catalog of the structure of stars in the Milky Way Galaxy, serve as models for this project.

In the end, Written in the Stars has the potential to be used for discussions about physical sciences as well as social sciences, as a digital puzzle that can be used for entertainment, group participation, and to illustrate “the unique cognitive-emotional link that makes us the intelligent creatures we are” as we sort through pieces of “randomness” and “information” in order to create a full, comprehensive picture of our surroundings (Mutalik).

References and Influences

It’s impossible to talk about our Written in the Stars project without mentioning the Sky Map (https://play.google.com/store/apps/details?id=com.google.android.stardroid&hl=en) app. It serves as both inspiration and aspiration for this project. While our project differs from Sky Map in its focus on people working together versus an individual experience, Sky Map is thorough and places all the constellations within one space. We would like to move this project forward to include geolocation of the constellations, as Sky Map has implemented effectively throughout its app.

When we searched online for images of constellations and maps, we noticed that there were extreme variations of constellation forms, number of stars, and location of constellations. We decided to use a reputable source, National Geographic, (https://www.nationalgeographic-maps.com/media/catalog/product/cache/7/image/8ecabcfb697832bc77ac7e2547ded39f/x/n/xng195712a_90.jpg) as our resource for constellation formations, locations, and our map iteration.

Kelsey Oseid’s book, What We See in the Stars: An Illustrated Tour of the Night Sky (https://www.penguinrandomhouse.com/books/553191/what-we-see-in-the-stars-by-kelsey-oseid/9780399579530/) provided inspiration for visual aesthetic, as well as information about constellations. Although we were unable to get animations to run in this prototype, Oseid’s book will continue to be a great reference through the development of this project into an interactive storytelling tool about constellations, astrology, and the science behind our universe.

Astrology.com (www.astrology.com) provided us with the horoscopes and dates for each of the zodiac signs. During the installation, we handed out slips of paper with the constellation, dates of the zodiac, our website link, and their horoscope. This provided an extra entertaining detail to get participants engaged with their constellations before the installation began.

This was the initial code for the array we used. We altered both the size of the ellipses and the colour of the background to better suit our phone screens. Other code was gradually simplified, altered and added, from changes in frame rates, to position and flow of the orbit, to the number of stars.

This is where we received the code for the Gyroscope/ Accelerometer. We used this code to measure the phone’s position as we moved and tilted the phone. We realized we would only be using the Beta variable, so we removed the Alpha and Gamma. We then deleted the rectangle and code that showed the values of each axis. In the end, we ran a simple “if” statement, so that when the Beta variable was above 80, the code would draw the constellation.

We also referenced p5js.org (http://p5js.org) and Make: Getting Started with p5.js for multiple portions of our code.

Bibliography:

Alessio, Devin. “The Cocktail You Should Be Drinking Based on Your Zodiac Sign.” Elle Decor, 21 Dec. 2016, www.elledecor.com/life-culture/food-drink/g2889/the-cocktail-you-should-be-drinking-based-on-your-zodiac-sign/. Accessed 25 Oct. 2017.

“Daily Horoscopes.” Astrology.com, www.astrology.com. Accessed 26 Oct. 2017.

Darley, James. “A Map of the Heavens.” National Geographic, Dec. 1957, www.nationalgeographic-maps.com/media/catalog/product/cache/7/image/8ecabcfb697832bc77ac7e2547ded39f/x/n/xng195712a_90.jpg. Accessed 25 Oct. 2017. Map.

Garreau, Vincent. Particles.js. www.vincentgarreau.com/particles.js/. Accessed 20 Oct. 2017.

Guarino, Ben. “Chaos in the Zodiac: Some Virgos Are Leos Now (But NASA Couldn’t Care Less).” The Washington Post, 26 Sept. 2016, www.washingtonpost.com/news/morning-mix/wp/2016/09/26/chaos-in-the-zodiac-some-virgos-are-leos-now-but-nasa-couldnt-care-less/?utm_term=.967880f3ac65. Accessed 25 Oct. 2017.

Johnson, Michele, editor. “What Does Kepler Have Its Eye On?” NASA, 31 Aug. 2017, www.nasa.gov/image-feature/what-does-kepler-have-its-eye-on. Accessed 25 Oct. 2017.

Kuiphoff, John. “Gyroscope with P5js.” Coursescript, 2017, www.coursescript.com/notes/interactivecomputing/mobile/gyroscope/. Accessed 25 Oct. 2017.

Max. “[p5.js] Starfield.” Codepen, 9 Oct. 2016, www.codepen.io/maxpowa/pen/VKXmrW. Accessed 25 Oct. 2017.

McCarthy, Lauren, editor. P5.js. p5js.org/. Accessed 25 Oct. 2017.

McCarthy, Lauren, et al. Make: Getting Started with P5.js. San Francisco, Maker Media, 2016.

Mutalik, Pradeep. “Can Information Rise from Randomness?” Quanta Magazine, 7 July 2015, www.quantamagazine.org/information-from-randomness-puzzle-20150707/. Accessed 25 Oct. 2017.

Oseid, Kelsey. What We See in the Stars: An Illustrated Tour of the Night Sky. Ten Speed Press, 2017.

Popova, Maria. “Salvador Dali Illustrates the Twelve Signs of the Zodiac.” Brain Pickings, 19 Aug. 2013, www.brainpickings.org/2013/08/19/salvador-dali-signs-of-the-zodiac-1967/. Accessed 25 Oct. 2017.

Sky Map. Android and iPhone app, Mobius Entertainment, 2016.

Wolchover, Natalie. “From Gaia, a Twinkling Treasure Trove.” Quanta Magazine, 14 Sept. 2016, www.quantamagazine.org/gaia-telescopes-first-data-set-released-20160914/. Accessed 25 Oct. 2017.

“Lord of the Dance” (Feng Yuan and Dave Foster – Creation & Computation – Exp. 2)

Process Journal (Lord of the Dance)

Feng Yuan and Dave Foster

DIGF-6037-001 – Creation and Computation (Kate Hartman and Nicholas Puckett)

Brief Project Description:

Link 20 (or more) screens (input device type optional – output device type optional – PC, Mac, phone, etc.) such that it produces an interactive experience or display for 1 or more users.

Tuesday, Oct. 17th:

We met in the 6th floor DF lab space at 12:00.

Project Design Discussion:

As a beginning, we discussed several ideas for the project (short descriptions and discussion results are noted below the tentative titles):

  • “Proximity Alarm” (your friends are close)
    • Your screen (phone, tablet or computer) gives you an alert when any of 20 specified people or their device comes close.
      • Several problems here (not insurmountable – probably – but complex). For one, how do we establish the “trigger” (Bluetooth signal? TCP/IP address? etc.)? Also, how do we reliably (and preferably simply) code for this?
  • “Proximity Alert” (any Bluetooth device)
    • Your screen (phone, tablet or computer) gives you an alert with a “radar screen” display of the direction and proximity of any Bluetooth device within a set radius. The idea being to warn us if a camera-equipped device might be nearby.
      • As above with “trigger” etc. Also does not really address the “20 screen” portion of the proposed project.
  • Join the Choir
    • Each screen in the group of 20 (phone, tablet or computer) is given a “voice” in the choir (bass, tenor, alto, soprano, etc.) at random upon “logging in” to an established website. Once the trigger number for the site is reached (20, per project specifications), the “choir” begins to sing (we were unable to pick one tune for this).
    • The trigger is simpler (just a counter on the site), but issues of timing would (we speculated) be problematic. We could not see a way to reliably code around this question.
  • “Lord of the Dance” (Mah Na Mah Na)
    • Based on the Muppet sketch “Mah Na Mah Na” (https://www.youtube.com/watch?v=8N_tupPBtWQ&t=41s). Each screen becomes a separate, numbered site (from 1 to 20), all linked to a central node or site. All 20 screens would receive and display the 2 “singers” and the base “tune” from that central node. Site 1 would (initially) also receive the “little furry guy” or “dancer” who contributes the “mahna mahna” for a verse or two. At that point the “dancer” begins to “riff” on the tune for a bar or two and the “singers” on that screen react “disapprovingly”, to which he responds by “jumping” to a random screen in the array with the allowed “Mah Na Mah Na”, and the song and dance continue.
      • We believe this is do-able and decided to go with this idea.

Research/Build Work:

Dave:

  • Produced a basic mock-up sketch of the project using Balsamiq (see below)

screen-shot-2017-10-27-at-11-10-22-am

Feng:

  • Began research of the required code at the https://p5js.org/reference/ site.
  • Began basic coding.
    • Some discussion (no hard conclusions reached) about what the “look” of the characters should be given the attempted “simplicity” of the coding desired by both participants.
    • Achieved the beginnings of the “dance” on at least one screen.

Friday Oct. 20:

Dave:

Showed Kate the Balsamiq mockup and described the idea to check acceptability within project parameters (seems to be OK and codeable – if we can make it work)

Monday Oct. 23:

Discussion:

Feng had some concerns regarding separation of sound track for singers and furry-guy.  It would be simpler to code if the furry-guy is separate from the singers.  Several files will have to be created for variation and to separate the “riff” and “reaction” animations.  It was decided that Dave would work on the sound file(s) while Feng coded the movement(s).  Next meeting scheduled for Tuesday, Oct. 24 @ noon.

Research/Build Work:

Dave:

  • Downloaded WavePad (audio editor) for work with an MP3 file of “Mah Na Mah Na”
  • Began separation of “singer” and “furry-guy” tracks

Feng:

  • Began coding for drawing characters for use in routine

Tuesday, Oct. 24:

Discussion:

As a result of Feng’s consultation with Nick, we decided that:

  • There was too much of the “server” paradigm in our original idea
  • There was (possibly) too little user-user/screen-screen interaction

As a consequence, we discussed a couple of ways in which to more closely conform to the project guidelines:

  • We discussed making the “dancer’s” movements contingent upon a mouse-click or enter key from each screen.
    • Could not come up with a way to time this to the “Mah Na Mah Na” tune
    • Could not decide exactly how to trigger the user response.
  • The above triggered a thought from Feng — what about a variant of “Whack-a-Mole”? (illustration from Balsamiq mockup below)

screen-shot-2017-10-27-at-11-10-55-am

    • Advantages:
      • No need to start from scratch for basic idea
      • We could keep the “Mah Na Mah Na” tune and background “singers” intact (no requirement to separate the character’s sound tracks)
      • Simplifies “stimulus/response” or “user interaction” portion of assignment.
      • Simplifies the programming and selection of the “dancer” character.

Research/Build Work:

Dave:

  • Used Photoshop to remove extraneous background from the picture (see below) and passed it to Feng

screen-shot-2017-10-27-at-11-13-11-am

Feng:

  • Began coding of the characters and the “Whack-a-Mole” game
  • Learning P5.JS from The Coding Train and P5.gif.js

Inspiration for the background design: https://d2v9y0dukr6mq2.cloudfront.net/video/thumbnail/dHnxL5V/space-hyperspace-travel-through-starfield-nebula_4jksaojb__F0000.png

Wednesday, Oct. 25:

Meeting with Nick and Kate re: concerns with project:

  • Nick reiterated his objection to the server based portion of the concept
    • Recommended some simplification of the concept for “Whack-a-Mole” format
      • 4 “states” required
        • “Mole absent”
        • “Mole up”
        • “Mole hit”
        • “Mole missed”
  • Both Kate and Nick recommended not being “married to” the whole Muppets theme.

Feng:

  • Worked on the visual design of the game and drew the characters.

Thursday, Oct. 26:

Discussion:

Feng has simplified and coded the characters (illustrations below) allowing for the “states” recommended in Wednesday’s meeting.  Asked to have the tune “chopped” into manageable portions.  Discussion/decision as to which type of device(s) to use as well as “array” required to play the game.  iPads (X9) and iPhones (X11) chosen as best option given the game format.  Discussion as to whether 1 screen for 20 players or 20 screens for 1 player fits project parameters as well as game format.  Single player with 20 screens selected as best format.

screen-shot-2017-10-27-at-11-13-30-am

screen-shot-2017-10-27-at-11-13-45-am

Research/Build Work:

Feng:

  • Continued work on code
  • Worked on the characters’ animation.

Dave:

  • Chopped the tune into acceptable portions (“Mah Na Mah Na”, “Riffs” and 4 different “boop de de de” segments) and E-mailed WAV files to Feng.
  • Worked on Balsamiq mockup of array with input from Feng (illustration below):

screen-shot-2017-10-27-at-11-14-00-am

Second idea (3 tables in a “U”) chosen as best for single-player format.

Friday, Oct. 27 – presentation day:

Met in DF lab at 10:30 to finalize. Decided to use Mac screens, as the code appears to work better there than on other devices.

Code available at :  https://github.com/fengfengfengfeng/Creation-Computation-Exp02-LordofDance

Images from Presentation are below:

exp2_images-15 exp2_images-14 exp2_images-13 exp2_images-12 exp2_images-11 exp2_images-10 exp2_images-9 exp2_images-8 exp2_images-7 exp2_images-6 exp2_images-5 exp2_images-4 exp2_images-3 exp2_images-2 exp2_images-1

Bibliography:

    Antiboredom. P5.gif.js. https://github.com/antiboredom/p5.gif.js/tree/master

“Best 25+ Animal Muppet Ideas on Pinterest | Drum Kits, Drummers and Rudimental Meaning.” Pinterest. N.p., n.d. Web. 26 Oct. 2017.

“Hyperspace Image.” Google Images. N.p., n.d. Web. 26 Oct. 2017.

“Language Settings.” P5.js | Reference. N.p., n.d. Web. 26 Oct. 2017.

“Mahna Mahna » Free MP3 Songs Download.” Free MP3 Songs Download – EMP3d.co. N.p., n.d. Web. 26 Oct. 2017.

The Coding Train on YouTube. P5.js Sound Tutorial. https://www.youtube.com/watch?v=Pn1g1wjxl_0&list=PLRqwX-V7Uu6aFcVjlDAkkGIixw70s7jpW

Umiliani, Piero. “Mah-na-mah-na.” Mah-na-mah-na. Parlophone, n.d. MP3.

VHTrayanov. “Muppet Show – Mahna Mahna…m HD 720p Bacco… Original!” YouTube. YouTube, 02 Oct. 2010. Web. 26 Oct. 2017.

The Apples Game

Members: Roxanne Henry, Margot Hunter

Description

Oh what a missed pun opportunity! The name of the game is to find your pair; why oh why didn’t I name this Find Your Pear?

The game loosely revolved around the idea of the memory card game. My original concept was to use the phones as cards: the phones would be laid face down in a grid, and players would have to find matches the same way the card game is played. However, I didn’t think it got people as involved with one another as I would have liked; it could still very well be considered a single-player game. So I started thinking of ways of involving each person and their own personal devices. The idea came to me that if each person was a card, they would have the agency to go find their partners themselves. This way, they would have to interact with one another and each other’s devices in order to determine whether they were a pair or not.

Development Journal

Day 1

The plan moving forward, then, was to have 20 different apple slices that matched up into 10 pairs. I was adamant about randomizing the distribution process, but also about making the game fair and making sure everyone would have a partner. I knew it would be impossible to do without some centralized list that stayed up to date with the client-side allocation of apples.

I started looking into using server-side controls. Originally I had a file with a list of the apple slice names, and the client-side code would look into the file to find out which apples were available to pick from. However, I needed a way for the client-side code to confirm with the file once it used a specific apple. It is unfortunately impossible for client-side code to write directly into server-side files, for obvious security reasons.

graph1

graph2

So, I investigated the possibility of using server-side scripting to do the writing. It took me a while to figure out which server-side scripting languages had been installed on the webspaces, but I soon discovered they supported both Node.js and PHP. I had slightly more experience using PHP, so I started with that. Unfortunately, there seemed to be a security problem, and I didn’t want to spend too long debugging it. I know from experience that when it comes to permissions, that kind of error can come from any level of the security protocols. I took one go at asking IT for write permissions to the servers, and when that fell through, I immediately started looking for another option. I didn’t want to waste too much time on things if I wasn’t certain I could make them work.

graph3

Day 2

Moving on, I looked into external API-enabled database solutions; a brief consultation with Nick had reminded me of their existence. It didn’t take too long to find one that was free to use. I signed up, created my database, and started getting to know the API.

graph4

To my surprise, it was fairly simple to set up my code, using only p5, to communicate through the API to the database. I hadn’t expected a library dedicated to drawing and animation to have a powerful selection of HTTP methods, but I was pleasantly proven wrong. The biggest challenge with using this API was making sure I had set up my CORS-enabled API key properly. It took a few rounds of reading the examples and the API documentation, plus brute-force testing, to figure out the happy combination I needed to access the database. Turns out, a terminal slash in the default URL means something pretty specific to the API, and it was throwing off all my results. It’s always the small things.

Soon enough, I had an infrastructure built that would randomly select an apple from a list of available apples. There was still a small risk of duplicates being attributed: the round trip of attributing a random apple client-side and then updating the database was slow enough that a second person could fetch the SAME list of apples as the first person, meaning their randomly attributed apple could, in theory, be the same. It severely limited the chances of this, though, and that was good enough for the requirements of the project. It would have been impossible to guarantee total randomness and fairness without the random apple being selected and immediately updated by the server itself, but I didn’t have the time to learn whether the database service I was using even had that possibility.
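For illustration, here is a rough sketch of that select-then-update round trip using p5’s httpDo(); the endpoint URL, the x-apikey header, and the ‘taken’/‘name’ fields are placeholders standing in for the real restdb.io setup, not the project’s actual code:

```javascript
// Rough sketch of the select-then-update pattern (placeholder endpoint and fields).
const API_URL = 'https://example-db.restdb.io/rest/apples'; // hypothetical endpoint
const API_KEY = 'YOUR-CORS-ENABLED-KEY';                     // hypothetical key

let myApple = null;

function setup() {
  noCanvas();
  // 1) GET the full list of apples
  httpDo(API_URL, { method: 'GET', headers: { 'x-apikey': API_KEY } }, (response) => {
    const records = typeof response === 'string' ? JSON.parse(response) : response;
    const available = records.filter((r) => !r.taken); // apples nobody has claimed yet
    if (available.length === 0) {
      console.log('Out of apples!');
      return;
    }
    // 2) pick one at random on the client
    myApple = random(available);
    // 3) mark it as taken. Because this round trip is not atomic, two players
    //    reading the list at nearly the same moment could still draw the same apple.
    httpDo(API_URL + '/' + myApple._id, {
      method: 'PUT',
      headers: { 'x-apikey': API_KEY, 'Content-Type': 'application/json' },
      body: JSON.stringify({ taken: true })
    }, () => console.log('Claimed apple: ' + myApple.name));
  });
}
```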

Day 3

I finally had access to apple drawings that I could test loading images with. Loading the images based on the incoming image file name worked out quickly and easily; I wasn’t sure why I thought that would be difficult, but it really wasn’t. Something that did vex me momentarily and without explanation, however, was that using displayWidth and displayHeight gave me tiny apples on mobile, though it displayed correctly on PC. I found that using windowWidth and windowHeight worked better, but in reverse. This was fine, since the game is easier to play on mobile overall.
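A minimal sketch of the sizing approach that ended up working (illustrative only, not the project’s actual code): size the canvas from the window rather than the display, and resize it when the window changes.

```javascript
// Minimal window-based sizing sketch.
function setup() {
  createCanvas(windowWidth, windowHeight);
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}

function draw() {
  background(240);
  // the apple image would be drawn here, scaled relative to width/height
}
```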

The hardest part I wasn’t expecting came later when I found that the API’s GET method was a little slow. I decided to create a custom loading animation, in order to entertain the player while they waited. A simple thing, only 15 frames long, of the game’s eponymous apple being eaten, and then exploding back into a full apple. I thought the loadImage() function would load a gif as easily as a png. I was wrong in the most obnoxious way: it loaded the gif, alright, but only the first frame. RIP.

I started looking for solutions. After about 30-45 minutes of research, the first one I found was a suggestion about using loadImg() instead. This worked, except not in the way I expected, and certainly not the way I wanted. It created an HTML img element outside the canvas, without transparency, and without p5 control over it. This was not a good solution.

Day 4

I moved on to find the p5.gif library, which worked wonderfully during testing. It was simple to use and worked the same way as loadImage(), except one would use loadAnimation(). The honeymoon phase wore off fast, though, when I realized it doesn’t work on mobile. Ugh!

Next, I found p5.play, which, thankfully, worked really well on all the devices I tested. It required a few extra lines of code, but it was worth it to get the loading GIF to finally work.
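A minimal sketch of the p5.play approach (assuming the p5.play library is included after p5.js; the frame filenames are hypothetical, not the project’s assets):

```javascript
// Minimal p5.play loading animation sketch (filenames are placeholders).
let loadingAnim;

function preload() {
  // loads apple_0001.png ... apple_0015.png as one animation
  loadingAnim = loadAnimation('apple_0001.png', 'apple_0015.png');
}

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function draw() {
  background(255);
  // draws the current frame each time draw() runs, so it plays while we wait on the API
  animation(loadingAnim, width / 2, height / 2);
}
```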

Finally, I added an image that would appear when the database was out of usable apples to inform the user of the system’s status. I think this is a really important piece of information to share with the user.

Day 5 – Morning of the presentation

The game only works on my phone for some reason. Everyone else is getting security problems. Nick and Kate show up and I share my concerns. They suggest I host it elsewhere. Of course. I feel like if I had hosted it elsewhere from the start, I would have avoided a lot of the server problems I had experienced on the first few days. Alas. I have about 10 minutes to change the hosted location. I quickly set up a new git repository, and throw my code there. Everything loads except p5.play. I have no idea why. I made several rapid-fire changes in my rush to get this working. I think that maybe the solution was to add “.js” at the end of the include in my index file. Originally, it had been working no problem without it, but I suspect github’s hosting has some stricter rules about that. I guess I’ll never know! It ended up working in the end and people seemed to enjoy it, so, all’s well that ends well.

apple

suspense

Code

The code is available through, and also hosted on, GitHub.

Video (thanks to Tommy for filming!)

Available at Vimeo.

Project context and bibliography

So the project was originally going to be a card game, and then it became a human card game. It’s a bit difficult for me to frame it as “a game which aimed to connect people physically instead of through technology” because I don’t personally believe the two have to be mutually exclusive. Sure, in the game’s case it could have been real, physical cards instead, but why not use the phone, which everyone already has? It makes for impromptu games, no planning ahead required; the technology allows us to access it any time we want, without worrying about having to bring the card game with us. Of course, several modifications would have to be made to allow for this. Custom game lobbies for groups of players and true-random assignment of the apple halves would need to be implemented, for starters. Another good modification would be to change from apples to pears. Gotta be punny. But ultimately, I do not think this project’s aim was to bring people to interact outside of technology, but instead to embrace its possibilities and make use of them in a context where people just want an impromptu icebreaker game.



Antiboredom. “Antiboredom/p5.gif.js.” GitHub. December 20, 2016. Accessed October 24, 2017. https://github.com/antiboredom/p5.gif.js/tree/master.

 

“Apples-for-the-teacher-gift-bushelbasket.jpg.” Digital image. Two Sisters Crafting. Accessed October 24, 2017. http://www.twosisterscrafting.com/wp-content/uploads/2015/11/apples-for-the-teacher-gift-bushelbasket.jpg.

 

Pedercini, Paolo. “P5.play – a game library for p5.js.” P5.play – a game library for p5.js. Accessed October 24, 2017. http://p5play.molleindustria.org/.

 

Brig. “Processing 2.x and 3.x Forum.” Processing 2.0 Forum. Accessed October 22, 2017. https://forum.processing.org/two/discussion/12736/animated-gif-support-in-p5.

 

“Reference.” P5.js | reference. Accessed October 2017. https://p5js.org/reference/.

 

Restdb.io, The Team at. “Plug and Play database service.” Restdb.io. May 31, 2016. Accessed October 17, 2017. https://restdb.io/.

Experiment #2 – RGBelieve it!

Group Members: Roxolyana Shepko-Hamilton & Kristy Boyce

Project Title: RGBelieve it !

Description: RGBelieve it! is an interactive, multiscreen colour-tracking experience. It works as a game or a stand-alone art installation, depending on the user’s needs/mood. Using computer vision and webcams, RGBelieve it! tracks colour and creates a variety of “stunning” shape-based, on-screen experiences.

For our project, we wanted to get more familiar with capturing and tracking motion via a computer’s webcam.

We focused our research around the following questions:

How do we use multiple computers to track motion and show its effect on each screen? Is it possible to have a motion sensor on each computer connected to a central website? How would the website automate the output? Can we make motion-based art? What about a self-portrait done via video captured from a webcam?

Sketches & Brainstorming:

  • Create art with your body
  • Lay all of the phones down and use as a giant interface that you can interact with
  • Phones are all touch-based: motion tracking? Gestural? Drawing input, swipes, etc. Could we make the phone vibrate?
  • Computer screens – light sensitivity, motion tracking, speakers in computers

We decided to try and create a multi-screen, motion activated art installation; when you moved, the visualization on the screens would move, grow and ideally draw.

So we researched people who had created similar work and watched tutorials on painting with pixels, edge detection, etc. We got the webcam up and running fairly quickly, but found filters like “pixellate” a little too basic. Though it would look cool to have 20 different screens with different looks (heatmap, pixelated, edge detection, etc.), relying on the filter function felt too basic in terms of adding our own touch and really creating something new.

One of our first ideas was to have a wall of vintage-looking TVs, like one might have seen in a 1960s department store; in this case, of course, the “TVs” would be PNG files layered over our webcam video feed.

Experimentation:

Edge detection with a png tv border
Motion tracking with webcam is a comparison of one frame to the next, shifting of the color

 

TV wall sketch

 

 

 

 

Links to some of our first early research:
https://www.npmjs.com/package/gifshot

http://diffcam.com/

https://medium.com/little-miss-robot/motion-tracking-in-the-browser-6a4f48b9ba29

To open webcam in browser

https://davidwalsh.name/browser-camera

Brightness Mirror – p5.js Tutorial. This video looks at how to create an abstract mirror in a p5.js canvas based on the brightness values of the pixels from a live video feed:
https://www.youtube.com/watch?v=rNqaw8LT2ZU

 

 

Experimentation Cont’d:

We tried to create our motion/colour tracked output in a variety of ways:

  • Edge detection
  • Pixellate
  • Particle field (examples we found were way over our heads but that didn’t stop us from spending a few days tinkering with it!)
  • A faux heat mapping effect
  • Optical Flow

Video of particle effect responding to magenta on the webcam

Particle 1
Particle 2
Particle 3
Experimenting with webcam on the iPad

 

Got particle effects working, got fullscreen (window width, height, etc.), broke the particle effect, fixed it. The original particle code was a different version than the JavaScript file we had downloaded. One technical issue was how to hide the webcam window while still drawing the data from it. To solve this, we tried the following:

  • Getting rid of the window entirely (didn’t work)
  • Hiding it through display:none (didn’t work)
  • Minimizing the window’s size so it wouldn’t appear on screen (didn’t work)

We eventually realized the #video and #canvas elements had to be on the screen in order for the colour tracking to work. Solution: change the opacity of #video and #canvas so they wouldn’t appear on screen but still existed there invisibly!
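A small sketch of that fix, assuming the video and canvas elements use the ids #video and #canvas (the ids are assumptions, not our exact markup):

```javascript
// Keep the elements in the page (tracking.js still needs the video frames),
// but make them invisible with opacity instead of display:none.
window.addEventListener('load', () => {
  document.getElementById('video').style.opacity = 0;  // still decoding frames, just not visible
  document.getElementById('canvas').style.opacity = 0;
});
```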

Once we got the code working locally and basically appearing in the layout we wanted, today was mainly an exploration day. We messed around with the colour of the circles, the shape/size, how many in a cluster, etc. One of the biggest issues we found was not being able to figure out how to change the specific colour that the code tracks. What if we wanted different colours to be tracked? Ideally, we would come up with at least 4 or so different colours that the website(s) would track and load up different folders onto webspace. We’d have 4 almost identical websites, which would open up the opportunity for us to create a sort of matching game: find which site responds to your colour!

Our sense is that the relevant code is in the tracking.min.js file. There is colour-tracking wording associated with ‘Magenta’, but there are no RGB or hex values, just the word ‘Magenta.’
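For future reference, the tracking.js documentation does describe a hook for adding colours beyond the built-in magenta, cyan and yellow: tracking.ColorTracker.registerColor() takes a name and a predicate over RGB values. A rough sketch (the threshold values here are guesses, not tested ones):

```javascript
// Registering a custom colour with tracking.js (illustrative thresholds).
tracking.ColorTracker.registerColor('green', function(r, g, b) {
  return r < 50 && g > 200 && b < 50; // "is this pixel green enough?"
});

// ...which can then be tracked like the built-in colours:
var greenTracker = new tracking.ColorTracker(['green']);
```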

Another potential extension of this project would be to create/affect sound along with the particles, or at least call up some appropriate sound effect to go along with the interaction!

 

After our first meeting with Kate, she suggested that what we were trying to do was A. Computer Vision and B. HARD.

We then looked at Kyle McDonald’s work and additional tutorials from Coding Rainbow that focused more on painting with pixels and Computer Vision:

https://kylemcdonald.github.io/cv-examples/

Nick provided a simpler sample for creating a colour track and response system

Challenges:

  • A lot of the resources we found were in Processing as opposed to p5.js.
  • We found a working demo of a particle effect, but because it was so complex, even editing it was a steep learning curve and we had trouble really making it our own. It also turned out to be a time sink, since in the end Nick was able to point us towards a much simpler baseline as a (re)starting point.
  • Our limited understanding of JavaScript held us back in terms of knowing how to show our effects without showing the captured video, and how to layer visuals (PNGs vs. video, etc.).

With our new, dumbed-down approach we started again. Keeping the idea of capturing video via the webcam, our new approach involved drawing simple shapes in response to motion. We also started to consider more analog ways of interacting with RGBelieve it!, like a game involving sticky notes.

One of our game layout sketches

 

Sticky notes to be placed on each user/player
Sana in action

 

Success!

 

 

Colour tracking via the webcam and tracking.js: the camera detects the colour, then draws a rectangle. The size and x, y coordinates change based on the motion captured via the webcam. Basically, you can put a magenta sticky note on your forehead and dance like a maniac in front of the webcam, and a shape will be drawn and “dance” with you. If you thought it was hip to be square before, now it’s righteous to be a rectangle!
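A simplified sketch of that tracking loop, assuming the tracking.js script and a video element with id "video" are already in the page (this is illustrative, not our exact code):

```javascript
// tracking.js colour tracking + p5 rectangle drawing (simplified).
let detections = [];

function setup() {
  createCanvas(640, 480);
  const tracker = new tracking.ColorTracker(['magenta']);
  tracker.on('track', (event) => {
    detections = event.data; // each entry has x, y, width, height
  });
  tracking.track('#video', tracker, { camera: true }); // tracking.js starts the webcam itself
}

function draw() {
  background(0);
  noFill();
  stroke(255, 0, 255);
  for (const d of detections) {
    rect(d.x, d.y, d.width, d.height); // the rectangle "dances" with the sticky note
  }
}
```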
 

Shapes move along with tracked blue ellipses.

 

img_3364
 

Working from some examples, Kristy was able to get more complicated shapes with their own motion to show up on screen when magenta was detected by the webcam, but couldn’t get any further tracking operable

 

Shapes move along with tracked blue object
Sana was able to integrate the code from the complicated shapes that had their own separate movement to move along with our simple tracked blue object

Videos of Drawing/Tracking & Motion:

Working on making the “start” page responsive
Brainstorming a name

Game Logistics Setup:

To prepare for the in-class gameplay and to save time, we assigned each participant a team and a URL to load onto their laptops in advance. We listed each URL on our Digital Futures Facebook page and came early to load the browsers manually to ensure timely setup for the presentation.

These are our groups (as chosen by this random list organizer)

(LIST GROUPS AND NAMES ON HOMEPAGE)

Final name and logo

 

 

Welcome to RGBelieve It!.

These are your teams:

MAGENTA TEAM

Tommy

Prof. Kate

Karo

Feng

Dave

Yiyi

Ramona

Emilia

 

CYAN TEAM

Roxanne H.

Max

Prof. Nick

Sean

Quinn

Chris

Finlay

 

YELLOW TEAM

Kylie

Savaya

Jad

Emma

Margot

Roxanne B.

Dikla

 

Please locate the sticky note pads that correspond to your team’s colour.

 

Each team member will place 2 sticky notes onto themselves; this can be on their shoulder, knee, etc. Choose the sticky note location wisely, as once you have placed the 2 sticky notes on your body, the locations are final!

 

You will also carry extra sticky notes, but keep these in a pocket, or concealed in some way!

 

Once everyone has placed their sticky notes on themselves, the game will begin when everyone hits START. Make sure to allow RGBelieve It! to access your webcam!

 

Walk by each screen to see what happens. If the screen reacts in some way, that means you have found a screen that matches your team colour. Mark the screen with a sticky note of your team colour.

 

The first team to locate 6 or more screens that react to their colour wins!

URLs:

1=magenta

2=cyan

3=yellow

 

https://webspace.ocad.ca/~3164381/1a/

https://webspace.ocad.ca/~3164381/1b/

https://webspace.ocad.ca/~3164381/1c/

https://webspace.ocad.ca/~3164381/1d/

 

https://webspace.ocad.ca/~3164381/2a/

https://webspace.ocad.ca/~3164381/2b/

https://webspace.ocad.ca/~3164381/2c/

https://webspace.ocad.ca/~3164381/2d/

 

https://webspace.ocad.ca/~3164381/3a/

https://webspace.ocad.ca/~3164381/3b/

https://webspace.ocad.ca/~3164381/3c/

https://webspace.ocad.ca/~3164381/3d/

 

URL list randomized and up to 20 (made sure to have at least 6 of each color):

 

https://webspace.ocad.ca/~3164381/2d/ savaya

https://webspace.ocad.ca/~3164381/1b/ feng

https://webspace.ocad.ca/~3164381/3c/ sana

https://webspace.ocad.ca/~3164381/2b/ kristy

https://webspace.ocad.ca/~3164381/1c/ kylie

https://webspace.ocad.ca/~3164381/2c/ roxanne

https://webspace.ocad.ca/~3164381/2b/ quinn

https://webspace.ocad.ca/~3164381/3d/ ramona

https://webspace.ocad.ca/~3164381/3b/ emilia

https://webspace.ocad.ca/~3164381/2a/ chris

https://webspace.ocad.ca/~3164381/1d/ max

https://webspace.ocad.ca/~3164381/2d/ jad

https://webspace.ocad.ca/~3164381/3d/ emma

https://webspace.ocad.ca/~3164381/1a/ sean

https://webspace.ocad.ca/~3164381/1b/ tommy

https://webspace.ocad.ca/~3164381/3a/ roxanne b.

https://webspace.ocad.ca/~3164381/2d/ dikla

https://webspace.ocad.ca/~3164381/3a/ karo

In Class Demo:

img_9587 img_6544

Chaos ensues

Code:

https://github.com/sanaalla/RGBelieveIt_Experiment2

Context:

We see RGBelieve it! as the first phase of research for a much larger project involving motion capture and art creation via gesture. Much of our inspiration, in terms of possibilities, came from artists working with motion capture and computer vision. Caitlin Sikora’s interactive web application Self Portrait (2015) “…uses motion capture data and Markov models to generate new movement data in real time.”

We really connected with the way her animations responded to the user and the way the user data influenced the output. One of her other projects, I need-le you, baby, used p5.js and webcam pixel data, which made us think that many great interactions were possible using these same technologies. And though that’s true, Sikora’s knowledge and ability in this area are leaps and bounds ahead of ours. But from an inspiration perspective, her work was invaluable.

 

Moving forward, we see a large-screen art installation being the likely final output, wherein the user stands in front of a screen larger than themselves and their gestures are captured by a camera, then rendered in real time on screen in some artistic visual output. Whether the motion is captured by placing magenta mittens and knee pads, for example, on the user, or via other motion capture techniques, will require more thought and research.

Bibliography:

McCarthy, L., Reas, C., & Fry, B. (2015). Make: Getting Started with p5.js. (1st ed.). San Francisco, CA: Maker Media

P5.Js. (n.d.). References. Retrieved October 17, 2017, from https://p5js.org/reference/

 

http://caitlinsikora.com/interactivity.html

Camera and Video Control with HTML5

Motion Detection with JavaScript

https://www.npmjs.com/package/gifshot

http://diffcam.com/

https://motiondetection.be/

https://github.com/lonekorean/diff-cam-scratchpad

https://trackingjs.com/docs.html#trackers

https://github.com/ACassells/processing.js.SimpleWebCamInteraction

www.youtube.com/watch?v=DW3AR9PFY84


https://kylemcdonald.github.io/cv-examples/

http://www.jamesalliban.com/#projects

https://github.com/eduardolundgren/tracking.js/issues/175

CONVOY 2017 – Assignment 2 – Multiscreens

GROUP

Chris Luginbuhl

Sean Harkin


PROJECT TITLE

CONVOY 2117


PROJECT DESCRIPTION

Players must work as part of a team to complete a relay-race game. Each team (10 players) stands in a line opposite the other team. The objective of the game is to complete the mazes before the opposing team.

 

STORY

The year is 2117 and the world has been ravaged by nuclear war, climate change, and general terrible dystopian tragedies, etc. You and your comrades’ home has been destroyed and you must seek new shelter. News has traveled of a safe haven to the North – but wait! You have also received word that your bitter rivals are seeking refuge too. You must arrive before them to ensure your survival.

You must board your trusty Travasphere™ and race across the barren wasteland. Each player takes 1 leg of the journey in turn, making sure to collect the fuel source – the rare and valuable WD-50 lubricant – in each level before reaching the end. Only once you have arrived at the end of your portion of the journey can the next player begin theirs.

Good luck.


CODE

Github repository for Convoy 2117

(note – the folder /js/ace is not needed but is hard to delete from git!)

Github repository for “shake to play” mobile percussion with GUI interface is here. Try it at webspace.ocad.ca/~3165783/Sound-mobile


SKETCHES

sketch-3

Figure 1 – Drum battle concept

sketch-2

Figure 2 – Sound wall concept

 


DESIGN FILES

pacmens

Figure 3 – Video capture + animation overlay testing

screenshot-oct-26-2017-6-39-23-pm-1

Figure 4 – First working prototype for tilt-ball maze

maze-ver-1-sheet1

Figure 5 – Maze layout 1

maze-ver-2-sheet1

Figure 6 – Maze layout 2

maze-ver-3-sheet1

Figure 7 – Maze layout 3

img_0254

Figure 8 – Working prototype for instrumental phone concept

 


PHOTOGRAPHS

Introducing Convoy 2117:https://www.dropbox.com/sh/jawq8nqn362ywjh/AAD4pnTUXm1TPYUtc_mlLT1ra?dl=0&preview=2017-10-27+14.29.41.jpg

More photo and video


VIDEO

Instructions:

https://www.dropbox.com/sh/jawq8nqn362ywjh/AAD4pnTUXm1TPYUtc_mlLT1ra?dl=0&preview=2017-10-27+14.30.05.mp4

Battle royale:

https://www.dropbox.com/sh/jawq8nqn362ywjh/AAD4pnTUXm1TPYUtc_mlLT1ra?dl=0&preview=2017-10-27+14.35.25.mp4


PROCESS JOURNAL

We generated a bunch of concepts and spent some time experimenting with them. Here are a few:

Fractal painting

Each mobile phone screen becomes a canvas: use a recursive/fractal algorithm to break up a line to form a mountain range, meadow or water.  Each student controls the landscape they create – mountains, water, meadows, starry sky and colour. They then physically arrange the phones to form a landscape “painting”, with each phone providing one tile of the mosaic.
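A rough sketch of the recursive idea (midpoint displacement) in p5.js; this is purely illustrative, since the concept was never built:

```javascript
// Midpoint-displacement "mountain range" sketch (illustrative only).
let ridge = [];

function setup() {
  createCanvas(windowWidth, windowHeight);
  noLoop();
  // start with a flat line across the screen, then recursively roughen it
  ridge = subdivide([
    { x: 0, y: height * 0.6 },
    { x: width, y: height * 0.6 }
  ], 7, height * 0.25);
}

// Split every segment at its midpoint and nudge the midpoint up or down;
// halve the nudge each level so the detail gets finer as it gets smaller.
function subdivide(points, levels, displacement) {
  if (levels === 0) return points;
  const next = [];
  for (let i = 0; i < points.length - 1; i++) {
    const a = points[i];
    const b = points[i + 1];
    next.push(a, {
      x: (a.x + b.x) / 2,
      y: (a.y + b.y) / 2 + random(-displacement, displacement)
    });
  }
  next.push(points[points.length - 1]);
  return subdivide(next, levels - 1, displacement / 2);
}

function draw() {
  background(20, 30, 60); // night sky
  fill(60, 70, 80);       // mountain
  noStroke();
  beginShape();
  vertex(0, height);
  for (const p of ridge) vertex(p.x, p.y);
  vertex(width, height);
  endShape(CLOSE);
}
```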

Inside a party box

Phones line the inside of a space (possibly a cardboard box that you wear, covering your head). They create a surround sound and light show that becomes your personal party in a box.

Lighting umbrella

Ever stumbled into a patch of bad lighting, just when photographers approach or you see the person you’re trying most to impress? By lining the inside of an umbrella with mobile screens and controlling the intensity & colour of light on each with touch (or clicks and whistles!), the person holding the umbrella carries a patch of photogenic lighting with them wherever they go.

Zoetrope

This concept was to take the sophisticated power of a mobile phone back to the earliest days of motion pictures. Mobile screens are arranged inside a traditional spinning drum-and-slit zoetrope. Rotating the drum and peering through any slit, the observer sees the static images on each screen combine to form a moving image. By making use of each phone’s orientation sensor, the images can cycle on each screen so that a different “movie” is shown depending on which slit you peer through.

Interactive Drum machine

We experimented with this quite a bit. Our initial concept was to create a sort of interactive drum machine with 20 different inputs. We explored multiple concepts: 1 person using 20 devices; 20 people using 1 device each; 2 people using 10 devices each in a drum battle (Fig. 1). Although we liked this last concept, the team felt that, to work effectively, it would require networking, which is outside the brief of this project.

We began by exploring video capture as a source of input. The idea was to have the users interact with an object on screen to activate the sound effect mapped to each device. The devices would be arranged in a manner that would require the user to dance around in front of the setup in order to create an improvised musical performance (Fig. 2). To test this concept, we used examples from the p5.js Examples page to combine video capture with example code from Make: Getting Started with p5.js (Fig. 3).
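A stripped-down sketch of that first test (illustrative only, adapted in spirit from the p5.js capture example rather than our actual code): a live webcam feed with an animated shape drawn over it.

```javascript
// Webcam feed with a simple animated overlay.
let capture;

function setup() {
  createCanvas(640, 480);
  capture = createCapture(VIDEO);
  capture.size(640, 480);
  capture.hide(); // hide the default <video> element; we draw the feed ourselves
}

function draw() {
  image(capture, 0, 0, width, height); // webcam feed as background
  // animated overlay: a circle sweeping back and forth across the feed
  const x = map(sin(frameCount * 0.05), -1, 1, 50, width - 50);
  noStroke();
  fill(255, 204, 0, 180);
  ellipse(x, height / 2, 80);
}
```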

The functions of capturing video from the laptop’s webcam, animating shapes on top of the video feed and creating a randomised sound board from a library were all relatively straightforward. However, when we began trying to find methods to use the video as an input, things became difficult. We found a number of examples online for using different kinds of video capture as input within the p5 script (colour tracking, symbol tracking, movement tracking, converting video to pixels, etc.); however, since neither the camera nor the client was designed for this purpose, the reliability of these functions was questionable to say the least. The team decided that, due to the time constraints of the project, a new direction should be taken.

We returned to one of our initial ideas: 20 people using 1 device each. This had earlier been discarded as too simplistic, so we decided to explore the space. We looked into different sound libraries and sliders that would enable users to choose their own sound board and play music together; however, this still didn’t feel like enough. We wanted to create both a co-operative and competitive experience for users. This is how the project became about designing a game.

We researched some uses of p5 for games and found the p5.play library which would help us create sprites and check for overlap and collisions.

We used one of the simple example programs from p5.play (http://p5play.molleindustria.org/examples/index.html?fileName=collisions2.js) to understand how to use sprite groups, collisions and collection with this library. In the end, only a few lines of the example code were left in our program.

Unlike the example, we wanted to use device tilt as the input technique rather than keyboard or mouse. After some experimentation, we created a tilt-controlled sprite. This formed the basis of our game design (Fig. 4).
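A minimal sketch of a tilt-controlled sprite, assuming p5.play is loaded alongside p5.js (the scaling factors are placeholders, not the game’s actual tuning):

```javascript
// Tilt-controlled sprite sketch (p5.play + p5's device rotation variables).
let ball;

function setup() {
  createCanvas(windowWidth, windowHeight);
  ball = createSprite(width / 2, height / 2, 30, 30);
}

function draw() {
  background(30);
  // rotationX / rotationY are p5's device-tilt values (in degrees);
  // tilting the phone nudges the ball's velocity in that direction
  ball.velocity.x += rotationY * 0.05;
  ball.velocity.y += rotationX * 0.05;
  drawSprites();
}
```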

Initial experiments worked well: we could tilt the device to move a sprite until it collided with another, stationary sprite. We designed our first map layout (Fig. 5), built it in the code and tested our concept. In order to add variety to the gameplay, we also added 2 more mazes, which are randomly assigned to players when they open the game (Fig. 6 + 7).

We initially planned to implement a maze creation algorithm, and wrote the code to be scalable to any size screen, and any number of cells. Even the simplest maze algorithms are not super easy to implement, however, and we realized we didn’t have time.

Most of our testing at this point was focused on removing bugs and fine tuning the physical parameters of the ball. After testing with a few classmates, we understood quickly that this game ran very slowly on Android phones.

We were able to add algorithms to limit the speed of the ball (which also helps prevent the ball from “teleporting” through walls). We also added “dish” – a tendency for the ball to roll towards the centre of the screen. Without this bias, small random fluctuations in the gyroscope caused the ball to jitter and roll around randomly even when the device was laid flat on a table.
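An illustrative version of those two tweaks in plain p5 (no p5.play), with guessed constants rather than the values we shipped:

```javascript
// Speed cap + "dish" bias toward the centre of the screen (illustrative constants).
const MAX_SPEED = 8;
const DISH = 0.002; // strength of the pull toward the centre

let pos, vel;

function setup() {
  createCanvas(windowWidth, windowHeight);
  pos = createVector(width / 2, height / 2);
  vel = createVector(0, 0);
}

function draw() {
  background(30);

  // tilt drives acceleration
  vel.x += rotationY * 0.05;
  vel.y += rotationX * 0.05;

  // "dish": bias velocity back toward the centre of the canvas
  vel.x += (width / 2 - pos.x) * DISH;
  vel.y += (height / 2 - pos.y) * DISH;

  // speed limit so the ball can't jump ("teleport") past a wall in one frame
  vel.limit(MAX_SPEED);

  pos.add(vel);
  fill(255);
  noStroke();
  ellipse(pos.x, pos.y, 30);
}
```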

In testing we found that adding friction and a speed limit to the ball caused it to run slower on older iPhones, so we removed the speed limiting code and any other unnecessary processing that could slow the animation of the ball.

In order to add more depth to the game play, we added an artifact that each player in the team must collect in their map before they can finish the map. This fitted well into the narrative we had created for the game. The original concept for this collectible was that each team member would have to decide whether to “risk” going out of their way for the artifact to gain points.

We added sounds for start, artifact collection and end, as well as nerve-wracking techno. The reason for the music was so that people on opposing teams could gauge the progress of their opponents.  

We tested the final version on multiple devices (makes and models) in order to gauge reliability. We found that older phones seem to have issues with frame rate; however, newer models (specifically iPhones) seem to work well.


PROJECT CONTEXT

Credit:

Music by JGaudio – ‘Aspiration to Success’ (https://www.melodyloops.com/composers/jgaudio/)

Splash screen modified from stock image: http://www.gettyimages.co.uk/detail/photo/futuristic-cityscape-maze-royalty-free-image/613319698

The story was inspired by the dystopian futurescapes of classic ’80s films such as the Mad Max franchise, along with our own bleak perspectives on the future based on political, socio-economic and environmental issues that exist today.

Another game that inspired our careful choice of music to fit our setting is Pac-Mondrian (2002), a take on Namco’s classic Pac-Man (1980) that blends Mondrian’s signature graphic painting with the boogie-woogie jazz Piet Mondrian loved. I couldn’t find a playable version of Pac-Mondrian, but you can see a screenshot here: http://www.ianaleksanderadams.com/blog/2009/my-new-favorite-browser-game-pac-mondrian-by-prize-budget-for-boys/

Pac-Mondrian showed how a maze game could be completely transformed by a small change to its graphics along with the right musical soundtrack.

These games can be traced all the way back to the first patented ball-maze game, “Pigs in Clover” by Charles Martin Crandall in 1889 (http://www.museumofplay.org/online-collections/3/49/107.4148). This concept has been adapted and recreated countless times (everything from Super Monkey Ball to every third free puzzle game on the Android Play Store); what makes our game unique is the physical, cooperative aspect of the gameplay. We believe this interaction adds another level of enjoyment to the game that cannot be found in a single-player experience.

https://p5js.org/reference/

http://p5play.molleindustria.org/examples/index.html?fileName=collisions2.js

McCarthy, Lauren, Casey Reas, and Ben Fry. Getting Started with P5.js: Making Interactive Graphics in JavaScript and Processing. Maker Media, 2015. Print.

These texts were referenced when writing the code, although the majority of the code was written by the team specifically for the project.