Train Simulator Book++ Project

Madelaine Fischer-Bernhut, Ola Soszynski, Vijaei Posarajah

Book: Trains: Classics of Transportation


Group Work Breakdown:

Ola – PubNub and p5.js sketches and implementation (coal shovel and whistle for mobile)

Madelaine – research into and final path creation (with BG Curves and dynamic speed implementation), final game build and scripting in unity with PubNub, basic 3D models of train, terrain (map), station, and collision detection of train and station

Vijaei – research into path creation, project context research, documentation compiling, shovel acquisition, unique asset design (trees and diversely coloured train stations, which unfortunately could not be implemented in the presented build)

 

Project Description:

Our project aimed to enhance the experience of reading about classical 19th-century trains by giving participants a simulated, team-based experience of operating a steam engine. The train simulator runs on a laptop that displays a train travelling a route with train stations to stop at. Participants operate the train using a phone mounted on a toy shovel: a shaking or shovelling motion “shovels coal” and moves the in-game train forward. Another participant uses a whistle as a button to supply the coal shovellers with more coal. The main goal is to move the train along the track and complete its route of train stations, which requires the participants to work together as a team. Our original inspiration was a train-simulator game designed around cooperative play, with multiple devices used to operate the train, but with a steam engine and the devices associated with its operation in place of a modern train. Our final project was a scaled-down yet functioning version of that original concept.

The original concept involved tasks such as the following (a rough sketch of how these rules might interact is given after the list):

-Checking statistics (speed, supplies, engine)

-Shoveling coal into the steam engine, depleting supply, increasing speed

-Communicating with a station for supplies, increasing supplies

-Releasing steam, needs to be done or engine overheats

-Conducting the train, depleting speed

-Slowing down is required to receive supplies

-Going too fast will cause a collision, ending the simulation

-Running out of supplies results in the train stopping, ending the simulation

-Forgetting to release steam results in the engine overheating, ending the simulation
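
As an illustration only, here is a minimal sketch of how these original rules could fit together in code. This is not code from the actual build; the variable names, thresholds, and numbers are invented for the example.

// Illustrative only: a minimal state-update loop for the original rule set.
// All names and threshold values here are invented for this sketch.
let state = {
  speed: 0,      // current train speed
  coal: 10,      // coal supply on board
  steam: 0,      // steam pressure; too much means overheating
  running: true,
};

function shovelCoal() {
  if (!state.running || state.coal <= 0) return;
  state.coal -= 1;   // shoveling depletes supply...
  state.speed += 5;  // ...and increases speed
  state.steam += 2;  // ...and builds steam pressure
}

function releaseSteam() {
  state.steam = Math.max(0, state.steam - 5); // must be done or the engine overheats
}

function requestSupplies(nearStation) {
  // slowing down is required to receive supplies
  if (nearStation && state.speed < 10) state.coal += 5;
}

function tick(nearStation) {
  if (!state.running) return;
  state.speed = Math.max(0, state.speed - 1);        // conducting the train depletes speed
  if (state.steam > 20) end("engine overheated");    // forgot to release steam
  if (state.coal <= 0 && state.speed === 0) end("ran out of supplies");
  if (nearStation && state.speed > 30) end("collision: entered the station too fast");
}

function end(reason) {
  state.running = false;
  console.log("Simulation over: " + reason);
}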

 

Code:

P5 Buttons and Controllers: https://github.com/olasosz/olasosz.github.io

Train Simulator Unity game:  

https://github.com/Vijaei/Train-Simulator-unity-files-updated

 

 

 

Displays:

Computer display – output of the players’ actions (the world and train representation in Unity). In the presented build we used one display, but in the future we can imagine having multiple displays to represent different views from the train. In Unity, multiple cameras pointing in different directions were added to support a multi-display experience; displays would be placed in a formation similar to an actual train.

Mobile display – input for player actions. Two were used for the presentation, although this could be increased so that multiple players are in charge of shovelling and “blowing” the whistle to restock coal. At least one coal shoveller and one whistle blower are needed to play the game.

 

Programs Used:

Our project used Unity to create the main train simulator game, which involved assets such as the train referenced from our book, train stations, railway signs, trees, and terrain. Within Unity we used a pathway system for the railway track as well as systems for acceleration and collision. We used p5.js for the steam whistle button interface and for the coal shovel controller used to accelerate the in-game train. To have the mobile devices communicate with each other and with the game, we used PubNub to establish a network. The steam whistle button communicates with the coal shovel by sending coal to the shovel to be used; in turn, the shovel communicates with the game, accelerating the train forward when coal is shovelled and letting it stop when it is not. Ideally our system would allow one steam whistle user, one train simulator game display, and multiple users shovelling coal.
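
As a rough illustration of how a p5.js controller can publish shovel events over PubNub, here is a minimal sketch. It is not the actual controller code from the repository: the keys, the “speed” channel name, and the message format are placeholders, and it assumes the PubNub JavaScript SDK is loaded alongside p5.js.

// Minimal p5.js sketch: detect a shaking/shovelling motion on the phone and
// publish a speed boost to PubNub. Keys, channel names, and values are placeholders.
let pubnub;

function setup() {
  createCanvas(windowWidth, windowHeight);
  setShakeThreshold(30); // how hard the phone must be shaken to count as a shovel
  pubnub = new PubNub({
    publishKey: "pub-c-xxxx",   // placeholder keys
    subscribeKey: "sub-c-xxxx",
    uuid: "shovel-phone",
  });
}

// p5.js calls this automatically when the device is shaken
function deviceShaken() {
  pubnub.publish(
    { channel: "speed", message: { boost: 1 } }, // one shovel of coal
    (status) => {
      if (status.error) console.log("publish failed", status);
    }
  );
}

function draw() {
  background(50);
  fill(255);
  textAlign(CENTER, CENTER);
  text("Shake the shovel to feed the engine!", width / 2, height / 2);
}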

 

Project Context:

For real-world references we looked at simple versions of existing train simulator games found online, such as Train Simulator 3D (http://www.crazygames.com/game/train-simulator-3d), which uses simple acceleration and braking sliders to operate the train. We also took note of the optional fixed camera views provided by the game as a feature to implement in our own game. For the cooperative multiplayer aspect of our game we looked at available and relevant games in the current market, such as Spaceteam (https://itunes.apple.com/us/app/spaceteam/id570510529?mt=8) by Henry Smith, which simulates a spaceship bridge by commanding multiple players to press buttons and switches according to sequences provided by the game. Here players rely on other players to follow through on their sequences so they can contribute as a team and progress in the game as a whole (https://www.youtube.com/watch?v=ymwSbxUDtTw). Where Spaceteam relies on multiple individual devices, we used Artemis Spaceship Bridge Simulator (https://itunes.apple.com/us/app/artemis-spaceship-bridge-simulator/id578372500?mt=8) by Incandescent Workshop LLC as a reference for how our game would work: a single shared game world with multiple devices used as controllers to play a single-sequence game. Here each participating player is in charge of a single aspect of operating the spaceship and, rather than being told by the game what to do, they have to react to the game world as a team to survive encounters (https://www.youtube.com/watch?v=V9Q2X32hZNk). We also used “Game Mechanics for Cooperative Games” (https://fenix.tecnico.ulisboa.pt/downloadFile/395138343981/artigo.pdf) by Jose Bernardo Rocha, Samuel Mascarenhas, and Rui Prada to understand the variety of cooperative game mechanics that can be implemented in our cooperative simulator game.

 

Sketches:

Group meeting sketches used to flesh out our concepts for a simulator-type game

 

Process:


Trying the pathing asset by Surge. Madelaine was unable to implement dynamic speed values using this component.


The final asset used for pathing was the spline creator BG Curves.


Madelaine’s attempt at creating a bezier curve system from scratch.


Unity game development process including modeling game assets, train paths, and terrain


Development process for the steam whistle button, coal shovel controller, and PubNub network

 

Issues:

The biggest issue we had to overcome was getting the different devices to communicate with each other. All of the devices were able to publish information, but the mobile devices were unable to communicate with Unity. After a lot of troubleshooting and help (thank you, Nick) we realized that PubNub was getting confused by all the different data being published to one channel. Creating a separate channel for each data type solved the issue (i.e. a separate channel for the coal values, distinct from the speed channel).
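
A hedged sketch of the fix on the p5.js side, assuming a pubnub object configured as in the earlier sketch (the channel names and payloads here are illustrative, not necessarily the ones used in the build): each data type gets its own channel, so the Unity listener only subscribes to the values it needs.

// Illustrative: publish each data type on its own channel instead of mixing
// everything on one channel. Channel names and payloads are placeholders.
function publishCoal(amount) {
  pubnub.publish({ channel: "coal", message: { coal: amount } });
}

function publishSpeed(boost) {
  pubnub.publish({ channel: "speed", message: { boost: boost } });
}

// The Unity build (via the PubNub Unity SDK) can then subscribe to "speed"
// without having to filter out coal messages, and vice versa.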

 

Demonstration:


https://www.youtube.com/watch?v=s_VhflOroYA&feature=youtu.be

 

Experiment 2 final presentation Google slide link

Link: Google slide document.

In the slide presentations, each student presents their own rule and its expression in p5.js, Processing desktop, or a non-programming-based form.

Students also select a rule from one of their fellow artists, from the current class or the previous class, with at least one expression for each.

Experiment 4 – Script redux

Nik Szafranek

I ended up trying to push the code further for Experiment 2. The code still doesn’t work, but I’m getting somewhere. I was particularly drawn to this as I’ve always been fascinated by writing and linguistics. I really enjoyed the challenge despite not getting it working in the end, and I feel like I’ve become more comfortable with code. I hope to explore it further.

Here is the link to the code:

https://github.com/nszafranek/Script

I have gotten it to recognize chunks of text and split them into component characters, but I’m still struggling with concatenating and displaying them.
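
As a minimal illustration of that split-and-recombine step, here is a generic p5.js sketch. It is not the code from the linked repository; the sample string and layout values are invented.

// Generic p5.js sketch: split a chunk of text into its component characters,
// then concatenate and display them again. Not taken from the linked repo.
let source = "script redux";
let characters = [];

function setup() {
  createCanvas(400, 200);
  characters = source.split(""); // break the text into component characters
  noLoop();
}

function draw() {
  background(240);
  fill(0);
  textSize(24);
  // draw each character separately...
  for (let i = 0; i < characters.length; i++) {
    text(characters[i], 20 + i * 20, 80);
  }
  // ...then concatenate them back into one string and display it
  let recombined = characters.join("");
  text(recombined, 20, 140);
}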

Jin Zhang (3161758)
Siyue Liang (3165618)

Documentation: Sound Interactive Installation


Project Description
For this project, our group decided to experiment with capacitive sensing. The CapacitiveSensor library for Arduino turns any conductive material into a sensor that senses proximity. Our original idea was to make an installation with metal wire that people could bring their hands close to in order to control the amplitude and speed of the audio. We didn’t achieve this in the end because the sensor didn’t work the way we expected: we ended up having to touch the metal with our hands to make the values change, which was not what we aimed for.
Inspiration & Related Works
In the beginning, we were inspired by the wire loop game, in which the player uses a loop wand to traverse a metal wire maze without touching the wire. The cool thing about this well-known game is the metal wire itself: it can be bent into whatever shape you want while playing the role of a sensor. Based on this game, we thought the idea of using metal wire as a sensor to control sound and visual effects would be super cool. (Here is the link to a simple wire loop game that we found online: https://www.instructables.com/id/Wire-Loop-Game-Tutorial/)

We did some research online and found lots of cool metal artwork. We wanted to build metal sculptures and connect each of them to a different piano note.


Building Process

  • Code For Arduino

#include <CapacitiveSensor.h>

/*
* https://forum.arduino.cc/index.php?topic=188022.0
* CapitiveSense Library Demo Sketch
* Paul Badger 2008
* Uses a high value resistor e.g. 10M between send pin and receive pin
* Resistor affects sensitivity; experiment with values, 50K - 50M. Larger resistor values yield larger sensor values.
* Receive pin is the sensor pin - try different amounts of foil/metal on this pin
*/
CapacitiveSensor cs_4_2 = CapacitiveSensor(4,2); // 10M resistor between pins 4 & 2, pin 2 is sensor pin, add a wire and or foil if desired
/*
CapacitiveSensor cs_8_7 = CapacitiveSensor(8,7);
CapacitiveSensor cs_7_6 = CapacitiveSensor(7,6);
CapacitiveSensor cs_9_8 = CapacitiveSensor(9,8);
CapacitiveSensor cs_11_10 = CapacitiveSensor(11,10);
CapacitiveSensor cs_13_12 = CapacitiveSensor(13,12);
*/

void setup()
{
cs_4_2.set_CS_AutocaL_Millis(0xFFFFFFFF); // turn off autocalibrate on channel 1 - just as an example
/* cs_8_7.set_CS_AutocaL_Millis(0xFFFFFFFF);
cs_7_6.set_CS_AutocaL_Millis(0xFFFFFFFF);
cs_9_8.set_CS_AutocaL_Millis(0xFFFFFFFF);
cs_11_10.set_CS_AutocaL_Millis(0xFFFFFFFF);
cs_13_12.set_CS_AutocaL_Millis(0xFFFFFFFF); */
Serial.begin(9600);
}

void loop()
{
long start = millis();
long total1 = cs_4_2.capacitiveSensor(50);
/* long total2 = cs_8_7.capacitiveSensor(50);
long total3 = cs_7_6.capacitiveSensor(50);
long total4 = cs_9_8.capacitiveSensor(50);
long total5 = cs_11_10.capacitiveSensor(50);
long total6 = cs_13_12.capacitiveSensor(50); */

// tab character for debug window spacing

Serial.println(total1);// print sensor output 1
/* Serial.println(total2);
Serial.println(total3);
Serial.println(total4);
Serial.println(total5);
Serial.println(total6); */
Serial.println("\t");

delay(30); // arbitrary delay to limit data to serial port
}

  • Code For Processing

import processing.serial.*;
import processing.sound.*;

Serial myPort;

int sensor1;

// Declare the processing sound variables
SoundFile sound;
Amplitude rms;

// Declare a scaling factor
float scale=5;

// Declare a smooth factor
float smooth_factor=0.25;

// Used for smoothing
float sum;

public void setup() {
fullScreen(P3D);
//size(800,800,P3D);

String portName = "/dev/cu.usbmodem141301";

myPort = new Serial(this, portName, 9600);
myPort.bufferUntil('\n');

//Load and play a soundfile and loop it
sound = new SoundFile(this, "sound2.mp3");
sound.loop();

// Create and patch the rms tracker
rms = new Amplitude(this);
rms.input(sound);

}

public void draw() {

// Set background color, noStroke and fill color
background(0);

//println("sensor1: " + sensor1);

sound.rate(map(sensor1*0.05, 0, width, 1, 4.0));
//sound.amp(map(mouseY, 0, width, 0.2, 1.0));

// smooth the rms data by smoothing factor
sum += (rms.analyze() - sum) * smooth_factor;

// rms.analyze() returns a value between 0 and 1. It's
// scaled to height/2 and then multiplied by a scale factor
float rms_scaled=sum*(height/2)*scale;
noFill();
if (sensor1 > 1000){
stroke(random(0,255),random(0,255),random(0,255));}
else {
stroke(255);
}

strokeWeight(5);
translate(width/2,height/2);
rotateX(-PI/6);
//if (sensor1 > 1000){
//rotateY(PI/3 + sensor1 * PI);}

rotateY(frameCount*0.04);
for (int size = 10; size < 400; size += 50){
box(size,size, rms_scaled);
}

}
void serialEvent(Serial myPort) {
//yPos = myPort.read();//like serial.write in arduino
// read the serial buffer:
String myString = myPort.readStringUntil('\n');

if (myString != null) {
// println(myString);
myString = trim(myString);

// split the string at the commas
// and convert the sections into integers:
int sensors[] = int(split(myString, ','));
for (int sensorNum = 0; sensorNum <=0; sensorNum++) {
print("Sensor " + sensorNum + ": " + sensors[sensorNum] + "\t");
}
println();
if (sensors[0] != 0){
sensor1 = sensors[0];
}

/* sensor2 = sensors[1];
sensor3 = sensors[2]; */

}
}


Features & Goals
Our goal was to make a sound interactive installation where the speed and amplitude are controlled by the value of the capacitive sensor.

In terms of the code, we were planning on using 4 to 6 capacitive sensors for the installation. However, after wiring and connecting all 6 sensors, they all stopped working for some reason. So we took them out one at a time and checked whether the circuit worked, but it didn’t work until there was only one sensor left in the circuit. We also wanted to have the sensor control some LEDs apart from the visuals, but when we added the output pin for the LED, the serial port would stop writing values to Processing, which was really frustrating to see.

In terms of the physical build, when we noticed that the metal wire wasn’t reacting to proximity, we tried replacing it with other materials, since we weren’t sure whether the conductivity of the metal was affecting the effectiveness of the code. We put conductive paint on paper and used it as the capacitive sensor, but the same thing happened, so we switched back to using metal wire.
We spent a lot of time considering the form of the installation. We thought it would be better if the visuals and the installation somehow related to each other, hence the music notes. A cube shape was the other option, but we eventually decided to go with the notes.

References

http://playground.arduino.cc/Main/CapacitiveSensor?from=Main.CapSense

https://processing.org/reference/libraries/sound/SoundFile.html

AerForge

The Team

Salisa Jatweerapong, 3161327

Melissa Roberts, 3161139

Mahnoor Shahid, 3162358

Samantha Sylvester, 3165592

 

About AerForge

AerForge is a project about change, and how the different spaces in which we create change the creation itself. The entity with which the user interacts transitions through realms and dimensions of creation, exploring and connecting the environments we learned in class.

To begin with, the AerForge experience is intangible. The user draws an imaginary line into the air in front of them, and out of thin air comes a visual. Projected on a screen in front of the user is the line they began to draw, and as the user continues to move their hand through nothing, they create something. Thin columns appear on the projection, their height matching the height of the user’s line. With a wave, the user rotates the projected image and out of lines and columns emerges a form. Though empty-handed, the user is able to create and engage with a virtual object. The user places their hands out in front of them, palms up as if asking for something. This gesture cues the virtual object to be downloaded and sent to a 3D printer. The final transformation brings AerForge into the physical world, and into the hands of the user.


Contact

 


 

Experiment 4: Final Report

Contact

Isaak Shingray-Jedig
DIGF 2004-001
December 7, 2018

Inspiration

The inspiration for this project came from past experience in the Digital Futures program in the form of a past project.  Accio, the culminating assignment from first year that Roshe Leynes and I worked on, introduced me to computer vision.  The project we developed allowed people to interact relatively easily and naturally with an onscreen render, in large part thanks to the amazing potential of computer vision (in that case through the use of the Kinect).  With Experiment 4 I thought to explore the uses of the Kinect further through a drawing experience that would use a large piece of conductive or resistive fabric as a makeshift paper or medium, in combination with a Kinect for tracking the tip of a finger.  In practice, the Kinect 1 wasn’t well suited to fine movement detection, so instead I looked to video-analysis computer vision and found my answer: colour tracking.  Colour tracking allowed for high levels of accuracy without any of the general depth noise that the Kinect was prone to producing, so I settled on it as my method for computer vision.

 

Experience

Users sit in front of a computer running the program.  They place a small fabric ring on the index finger of their dominant hand.  They hold the space bar to use the color tracking calibrator and see the webcam feed, where they click on the colored band attached to the ring on their finger.  They then release space and begin to draw in the same way that one would finger paint or draw in sand.

 


Features and Components

The components used in Contact were:

  • USB webcam
  • Boom arm desk lamp
  • Large piece  of conductive fabric for desk
  • Small conductive fabric ring for finger with a patch of red electrical tape
  • Wires
  • Arduino

The physical construction of Contact was relatively simple but had to be set up in a very particular way, because the webcam had to have a clear line of sight on the entire pad of fabric, which in turn had to be very brightly lit.  In combination with the code, these components came together to produce a responsive touch drawing experience.  The most difficult part by far of the development of Contact was the implementation of reliable color tracking.  A few hurdles in that process were understanding how to think of colours in a space relative to X, Y, and Z, as well as understanding that every single pixel on screen had to be checked each frame.  In terms of rendering, when I tried to use an array to store line values, the system became very slow and unresponsive.  This is due to the heavy load that checking every pixel puts on the computer every frame.  For this reason, I put the background in the setup function and made 2 arrays 2 places long so that the program would draw one line at a time without erasing the previous lines.
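
A minimal sketch of the per-pixel colour-distance idea, written here in p5.js for illustration. It is not the project’s actual code; the target colour and marker size are placeholders, and a real calibrator would set the target by clicking on the webcam feed.

// Illustrative p5.js sketch: find the pixel closest to a target colour each
// frame by checking every pixel of the webcam feed. Not the project's code.
let video;
let target = { r: 200, g: 30, b: 30 }; // placeholder: the red band's colour

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();

  let bestDist = Infinity;
  let bestX = 0;
  let bestY = 0;

  // Treat (r, g, b) like (x, y, z) coordinates and measure the distance to the
  // target colour for every single pixel, every frame.
  for (let y = 0; y < video.height; y++) {
    for (let x = 0; x < video.width; x++) {
      let i = (y * video.width + x) * 4;
      let r = video.pixels[i];
      let g = video.pixels[i + 1];
      let b = video.pixels[i + 2];
      let d = dist(r, g, b, target.r, target.g, target.b);
      if (d < bestDist) {
        bestDist = d;
        bestX = x;
        bestY = y;
      }
    }
  }

  // Mark the best match; a drawing app would use (bestX, bestY) as the pen tip.
  noFill();
  stroke(255, 255, 0);
  ellipse(bestX, bestY, 20, 20);
}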

 


Goals

The goal of the project was to create a responsive and reliable, but above all else natural, drawing experience.  I think that in many ways Contact succeeded, especially in terms of usability and the responsive and reliable operation of the program through computer vision.  In terms of a natural experience, the physical setup of using a camera attached to the back end of a lamp seemed to produce a relatively natural feeling, but after having received feedback from the professors and seeing my peers interact with it, I think that the gestural nature of the experience should be further explored.  I will look to expand upon what exactly makes the experience unique from other forms of touch technology and play on its natural strengths as a system.  The most immediate area for experimentation, to me, is to move the experience away from a seated position and into some form of more physical experience.  After having written the word sand in the experience section, I would also like to explore sand in a similar format.

 


https://github.com/isaakshingray/experiment4

 

Contextual works

Drawing using Kinect V2

https://pterneas.com/2016/03/15/kinect-drawing/

 

Computer vision for Artists and Designers

http://www.flong.com/texts/essays/essay_cvad/

 

The pixel array

 

 

 

The Soundscape Experience

Experiment 4: Final Report

Francisco Samayoa, Tyra D’Costa, Shiloh Light-Barnes
DIGF 2004-001
December 5, 2018



project details

  1. User starts at any location on the map
  2. They put on the headphones
  3. Depending on the position of the head, the headphones will play a particular sound.
  4. If a particular sound is playing then the screen will display an associated image or video
  5. The user must experience all the audio and visual data in order to try and make a connection between the media and a place that exists in the real world before their opponent.

project inspiration

Our project seeks to encompass elements of sound, sensory perception, and nature in an installation-based artwork. Our process began by discussing the types of sensors we wanted to explore; eventually we decided on the MPU 9250 accelerometer and gyroscope. Our proposed idea is to integrate the sensor into headphones, which can then change the sound and visual experiences of the user. As the user navigates the room and moves their head about, the world in which they are digitally immersed enhances the real world. Essentially, it is a 4D sound experience. If we had more time we would add visual elements such as 360 video or a 3D Unity-based environment.


Background Work

In the end, after scouring the web for solutions to library-related issues and calibrating the code in order to receive accurate data readings, we were able to get the MPU sensor working with Arduino so that we receive X, Y, and Z coordinates. Additionally, we were able to take these coordinates and map them to a camera in Unity, so that the sensor’s orientation changes the perspective of the viewer. This meant we had the pitch, yaw, and roll functioning for the gyroscope component. For the purpose of this experiment we disabled roll, since the user wouldn’t be rolling their head per se. However, there were a few “blind spots” the camera couldn’t pick up, such as the 180-degree mark. The interface was fully functional for the most part; the main issue was that data overload kept causing Unity to freeze, and our solution was to reset the Arduino.

Links To Contextual Work

How to play multiple audio files

Research on Arduino + Gyroscope

Good Link explaining how IMU sensor works

Tutorial for IMU and Unity set up

How to set up MPU and Arduino Library

Unity + Arduino Library


Goals & Features

  1. Engaging UX Design: We want to solidify the conceptual ideas of the soundscape experience to create an immersive UI/UX design. We want this to include elements of storytelling and gamification.
  2. Sound-Sensory Experience: Moving forward we will have to test that the sensor can allow smooth transition between sound files. The media assets will consist of ethereal soundtracks, data visualizations (natural algorithms and patterns), and nature related images.
  3. Integrate Wearables: Also, we needed to design a way to integrate the sensors into the wearable technology (the headphones).
    The MPU unit was housed in a sewn leather pouch. There was velcro attached underneath in order to stick to the top of the headphones. This way, the wires were completely out of sight since they hung from above. For debugging purposes we wanted the MPU unit to be detachable from the headphones. In the end, we were successful.
  4. Discuss concepts of nature, ecosystems and natural algorithms: Lastly, we want to think about how these concepts can work together to create a narrative and game play element.
    Using a blindfold we acquired, we were able to gamify the experience. With the blindfold on, the user would have to guess which environment they were placed in. We would randomly select 1 of 9 unique ecosystems, including 2 songs created by Francisco and Shiloh. These include a war scene, temple, beach, rainforest, and city intersection.

Pictures & videos

screen-shot-2018-11-26-at-6-19-54-pm

img_4926img_4925

46854703_644056072656401_7774253524338081792_n 47573880_349446675631454_7184122986348150784_n 47571987_341695319963113_3360251171074736128_n 47380658_359783828089595_3941993869863813120_n

20181204_125016

 

 

The Light

 

Kiana Romeo 3159835 || Dimitra Grovestine 3165616

 

Inspiration 

When it came to our project, everything we did was based on our original inspiration, and our vision never wavered from it. There was a very specific message we wanted to get across, and in the end the concept of the piece trumped the need for actual execution. We were inspired overall by the idea of dying and “going into the light”; something grim and undesirable becoming something beautiful and inviting. Our project aimed to give people the experience of dying and going into the afterlife without it actually happening. We also aimed to let people interpret the afterlife as they saw fit, so as not to make any assumptions. Using visuals and audio associated with this type of scenario, we immersed people in a heavenly world in which they could briefly escape life on earth and ascend into something higher.

 

Contextual material


https://www.youtube.com/watch?v=axSxCo_uMoI&frags=pl%2Cwn – Don’t go into the White Light (philosophical video)

This video goes over philosophical and religious reasons why one should not go into the light when they die. It was an interesting video to watch, as for the most part people believe that the white light is a good thing, meaning you are going up into heaven and towards God. But if this happy light were a trap, going towards it would be a bad thing. This is why, during the presentation of the project, we decided to push people towards the light and then pull them out quickly, in order to give them a chance to decide how they felt about the light.

 https://www.youtube.com/watch?v=hOVdjxtnsH8 – choir of angels singing (audio used in project)

Church choirs can produce some of the most beautiful sounds and music, and after researching our concept, this music was incredibly inspiring. It expanded our concept: we wanted to create a fully immersive experience, and while strong visuals and lighting could definitely help create this environment, sound is very important too, which is why we found it necessary to find just the right sounds to use in our project.

https://www.youtube.com/watch?v=lWqHRLjNZbE&frags=pl%2Cwn – Man details what it was like going to heaven after death

Although the credibility of this video was questionable, the idea of heaven in a religious and philosophical sense is all guesses at what it’s really like. Therefore, having this individual’s take on it was important, as it covered some of the beliefs we wanted to be represented in our video.

 

Features of the project

Overall, we had set out to create a heavenly atmosphere. We believe that we took steps towards creating this; however, we feel that our installation required more elements to bring the piece together. Obviously, we would have loved for all of our elements to work together cohesively, and after receiving feedback we think we would have added additional elements to the piece. We would have considered an alternate shape of projection versus the classic rectangular projection, which looks plain and slightly out of place; a unique shape would feel like it belongs more in the piece. We would also consider adding smoke and mirrors to create an ambiguous and more interesting space. Not being able to see, and not necessarily knowing what you’re looking at, would help create and add interest to the piece.

 

Proximity sensors:


The proximity/distance sensors were meant to control all the elements of the installation. We planned to use two sensors in the exhibit: one would control the LED light strips, which would illuminate as a visitor got closer, and would also control the whisper sound effect in the room, while the other would control the brightness of the cloud visuals at the front of the room as well as the volume of the angelic singing. Unfortunately for us, the quality of the sensors wasn’t the best, and they did not work as effectively as we had wanted.
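
As a rough illustration of the intended mapping, here is a minimal p5.js sketch in which a simulated distance value (driven by the mouse) stands in for the sensor reading; it is not the project’s code, and the sound file name and value ranges are placeholders. In the installation the distance would come from the proximity sensor, for example over serial.

// Illustrative only: map a distance reading to sound volume and visual
// brightness. Assumes the p5.sound library; the asset name is a placeholder.
let choir;

function preload() {
  choir = loadSound("angelic-choir.mp3"); // placeholder asset name
}

function setup() {
  createCanvas(640, 480);
  choir.loop();
}

function draw() {
  // Simulated distance: far away at the bottom of the canvas, close at the top.
  let distanceCm = map(mouseY, height, 0, 200, 10);

  // A closer visitor means louder singing and brighter "clouds".
  let volume = map(distanceCm, 200, 10, 0.0, 1.0, true);
  let brightness = map(distanceCm, 200, 10, 20, 255, true);

  choir.setVolume(volume);
  background(brightness);

  fill(0);
  text("simulated distance: " + nf(distanceCm, 1, 1) + " cm", 20, 20);
}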

The room:


Upon finalizing our concept, we decided we wanted to construct our installation in a critique room, where it would be dark enough to have a great effect. Unfortunately, though, we could only book a meeting room, which was not as dark and was very large. We made it work, but a smaller space may have been a better option.

The lights:


 

 

 

 

 

 

 

It was a real challenge getting the lights to work, even without the sensors. We ultimately fixed it by using a battery pack after realizing the lights needed a 12V power source instead of 5V. This was a crucial part of our process and took a whole class to figure out, but once we did it was more or less smooth sailing.


 

Code

atelierfinal (Clouds and visuals code)

atelierfinal (Lights and sound code)

 

Experiment 4 – Snow Day

Brian Nguyen – 3160984

Andrew Ng-Lun – 3164714

Michael Shefer – 3155884

Rosh Leynes – 3164714

Snow Day


Inspiration

Experiment 4 went through many developments and alterations compared to our initial stage of the assignment. Essentially, our inspiration came from the concept of how movement can manipulate an environment. With the use of matter.js, ml5.js, and the PoseNet model, we set out to create an interactive installation that tracks an individual’s body movement and builds it into a skeleton capable of interacting with the environment within P5. The environment is set to mimic a snow day: particles gradually drop to the bottom of the canvas, and the individual is able to interact with their physics through movement of the arms. The purpose is to provide an experience of playing in the snow via P5. Additionally, the installation promotes interactivity with others, as it is capable of registering more than one individual on the canvas and allowing all participants to interact with the environment.


Related Work

The inspiration for our concept stemmed from an article that introduced us to PoseNet and described its capabilities in depth. With a basic understanding of it and its implementation in P5, we then continued to explore and develop the idea of physical body interactivity by looking at particle systems on CodePen for inspiration before looking into various other libraries. Additionally, some of our group members had previously worked with the webcam and its capability to manipulate particles in P5 via a webcam feed; this previous knowledge allowed us to jump-start our concept development.

Background Related Work

https://github.com/NDzz/Atelier/tree/master/Experiment-3?fbclid=IwAR3O6Nm8dLJ1ZMWGYfHoAZdNMrf8qYHPqX-nz5xDunLjfR5xTTWmfsNbfHM

 

Goals for the Project

20181129_100344

Our first goal was to implement PoseNet in P5 to register a body against a background particle system on the canvas. This was achieved, as pictured above. The basic points of the head, shoulders, and limbs were registered and constructed into a skeleton; furthermore, it managed to capture more than one person. From there we continued to refine the presentation of the project by altering the particles, canvas, and skeleton.

20181204_091052

With PoseNet and our particle system working well in P5, our next goal was to actually implement the interactivity. While this goal was achieved by presentation day, we did encounter difficulty when attempting to implement it. With the body movement tracked and represented as a skeleton in P5, we added squares at the points of the hands that follow the movement of the arms and interact with the falling snow via physics upon touching it. The boxes weren’t always responsive, especially when they had to follow the movement of multiple people. Additionally, we experimented with which shape would be able to manipulate the snow better and ultimately settled on squares.
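
Here is a minimal sketch of that idea in p5.js with ml5’s PoseNet. It is an illustrative reduction, not the group’s actual code: it pushes simple hand-rolled particles instead of using matter.js physics, and the sizes and thresholds are invented.

// Illustrative p5.js + ml5 sketch: draw squares at each tracked wrist and
// push falling "snow" particles away from them. Not the project's code.
let video;
let poses = [];
let flakes = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();

  // ml5 PoseNet: the callback fires once the model has loaded
  let poseNet = ml5.poseNet(video, () => console.log("PoseNet ready"));
  poseNet.on("pose", (results) => { poses = results; }); // supports multiple people

  for (let i = 0; i < 200; i++) {
    flakes.push({ x: random(width), y: random(height), vx: 0 });
  }
}

function draw() {
  background(20);

  // Collect wrist keypoints from every detected person
  let hands = [];
  for (let p of poses) {
    for (let k of p.pose.keypoints) {
      if ((k.part === "leftWrist" || k.part === "rightWrist") && k.score > 0.3) {
        hands.push(k.position);
      }
    }
  }

  // Draw a square at each hand
  fill(120, 180, 255);
  noStroke();
  for (let h of hands) rect(h.x - 20, h.y - 20, 40, 40);

  // Update and draw the snow; flakes near a hand get pushed sideways
  fill(255);
  for (let f of flakes) {
    f.y += 1.5;
    for (let h of hands) {
      if (dist(f.x, f.y, h.x, h.y) < 40) f.vx += (f.x - h.x) * 0.05;
    }
    f.x += f.vx;
    f.vx *= 0.9;
    if (f.y > height) { f.y = 0; f.x = random(width); f.vx = 0; }
    ellipse(f.x, f.y, 4, 4);
  }
}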

Our final goal came from an issue that we encountered during critique day: in order to register the subject effectively, the body had to be well lit. We managed to achieve this by installing light stands to illuminate the subject. We experimented with different ways of eliminating shadows and with the angles at which the light fell onto the subject. In the end we used two LED studio lights installed alongside the webcam, facing a white backdrop, in order to capture the subject’s movement effectively.

 

Code with References and Comments

https://github.com/notbrian/Atelier-Snowday?fbclid=IwAR0XYzXsnVVGsWvfuWzT_TsOGNYARYvhxFyJ-71HK2yL5dtW4R3JV-jWAPs

Working Demo

https://notbrian.github.io/Atelier-Snowday/?fbclid=IwAR2pplMkpbB7mnTTWu5xq63prgu13r2Syy7KClFAADPmijTKe4BUcy-8As0