Experiment 4 – Snow Day

Brian Nguyen – 3160984

Andrew Ng-Lun – 3164714

Michael Shefer – 3155884

Rosh Leynes – 3163231

Snow Day

20181204_102722

Inspiration

Experiment 4 went through many developments and alterations compared to the initial stage of the assignment. Essentially, our inspiration came from the concept of how movement could manipulate an environment. Using Matter.js, ml5.js, and the PoseNet model, we set out to create an interactive installation that tracks an individual's body movement and builds it into a skeleton capable of interacting with an environment rendered in P5. The environment is set to mimic a snow day: particles gradually drop to the bottom of the canvas, and the individual can interact with their physics through movements of the arms. The purpose is to provide the experience of playing in the snow via P5. Additionally, the installation promotes interactivity with others, as it can register more than one individual on the canvas and allow all participants to interact with the environment.
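
For context, a minimal sketch of the pose-tracking half of that pipeline is shown below, assuming p5.js and ml5.js are loaded via script tags; the variable names are illustrative and this is not the installation's actual code, which is linked further down.

let video;
let poses = []; // latest PoseNet results, one entry per person in frame

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // ml5 wraps PoseNet; the callback fires once the model has loaded
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  // Every new estimate replaces the stored poses array
  poseNet.on('pose', results => { poses = results; });
}

function draw() {
  background(0);
  // Draw a dot on every confidently detected keypoint (nose, shoulders, wrists, ...)
  noStroke();
  fill(255);
  for (const person of poses) {
    for (const kp of person.pose.keypoints) {
      if (kp.score > 0.2) ellipse(kp.position.x, kp.position.y, 10);
    }
  }
}

Because poses holds one entry per detected person, the same loop is what lets several participants interact at once.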

20181129_111339

Related Work

The inspiration for our concept stemmed from an article that introduced us to PoseNet and described its capabilities in depth. After gaining a basic understanding of it and implementing it in P5, we continued to explore and develop the idea of physical body interactivity, looking at particle systems on CodePen for inspiration before investigating other libraries. Additionally, some of our group members had previously worked with the webcam and its capability to manipulate particles in P5 via a webcam feed; this previous knowledge allowed us to jump-start our concept development.

Background Related Work

https://github.com/NDzz/Atelier/tree/master/Experiment-3

 

Goals for the Project

20181129_100344

Our first goal was to implement PoseNet in P5 to register a body over a background particle system on the canvas. This was achieved as pictured above. The basic points of the head, shoulders, and limbs were registered and constructed into a skeleton; furthermore, it managed to capture more than one person. From there we continued to refine the presentation of the project by altering the particles, canvas, and skeleton.
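
As a rough illustration of how the skeleton in the photo can be drawn for every person PoseNet reports, the helper below assumes the poses array filled by the listener in the earlier sketch; drawSkeletons is a hypothetical name and would be called from draw().

// Connect the keypoint pairs that ml5 reports as a skeleton, for every person
function drawSkeletons() {
  stroke(255);
  strokeWeight(2);
  for (const person of poses) {
    for (const [partA, partB] of person.skeleton) {
      line(partA.position.x, partA.position.y,
           partB.position.x, partB.position.y);
    }
  }
}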

20181204_091052

With PoseNet and our particle system working well in P5, our next goal was to implement the interactivity itself. While this goal was achieved by presentation day, we encountered difficulty along the way. With the body movement tracked and represented as a skeleton in P5, we added squares at the points of the hands that would follow the movement of the arms and interact with the falling snow through physics upon touching it. The boxes weren't always responsive, especially when they had to follow the movement of multiple people. We also experimented with which shape would manipulate the snow best and ultimately settled on squares.
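
Below is a hedged sketch of how the hand squares and falling snow can be wired together with Matter.js; the sizes, spawn rate, and the use of the mouse as a stand-in for the wrist keypoints are assumptions rather than the project's final values.

const { Engine, World, Bodies, Body } = Matter;
let engine;
let snow = [];
let leftHand, rightHand;

function setup() {
  createCanvas(640, 480);
  engine = Engine.create();

  // The hand squares are static bodies that we simply reposition every frame
  leftHand = Bodies.rectangle(200, 240, 40, 40, { isStatic: true });
  rightHand = Bodies.rectangle(440, 240, 40, 40, { isStatic: true });
  World.add(engine.world, [leftHand, rightHand]);
}

function draw() {
  background(0);

  // Spawn a snowflake every few frames and let gravity pull it down
  if (frameCount % 5 === 0) {
    const flake = Bodies.circle(random(width), -10, 5, { restitution: 0.3 });
    snow.push(flake);
    World.add(engine.world, flake);
  }

  // In the installation these follow the wrist keypoints; the mouse stands in here
  Body.setPosition(rightHand, { x: mouseX, y: mouseY });
  Body.setPosition(leftHand, { x: width - mouseX, y: mouseY });

  Engine.update(engine);

  // Render the snow and the hand squares
  noStroke();
  fill(255);
  for (const flake of snow) ellipse(flake.position.x, flake.position.y, 10);
  rectMode(CENTER);
  rect(leftHand.position.x, leftHand.position.y, 40, 40);
  rect(rightHand.position.x, rightHand.position.y, 40, 40);

  // Remove flakes that have fallen past the bottom of the canvas
  for (const f of snow) {
    if (f.position.y > height + 20) World.remove(engine.world, f);
  }
  snow = snow.filter(f => f.position.y <= height + 20);
}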

Our final goal came from an issue that we encountered during critique day. In order to register the subject effectively, the body had to be well lit. We achieved this by installing light stands to illuminate the subject, and we experimented with different ways of eliminating shadows and with the angles at which the light fell on the subject. In the end, we used two LED studio lights installed alongside the webcam, aimed at a white backdrop, to capture the subject's movement effectively.

 

Code with References and Comments

https://github.com/notbrian/Atelier-Snowday

Working Demo

https://notbrian.github.io/Atelier-Snowday/

Experiment 4 – Progress Report

Brian Nguyen – 3160984

Andrew Ng-Lun – 3164714

Michael Shefer – 3155884

Rosh Leynes – 3163231

Soup of Stars

Inspiration

The inspiration for the project developed as we looked into the potential of our original concept. We started off with the idea of movement and of implementing it with analog sensors as a kind of ball game, where users would attempt to keep the ball up with body parts that had sensors attached. After reviewing several pieces, we decided to develop the concept entirely around a webcam because we wanted the body to be the whole subject of the concept.

Relevant Links for Inspiration

https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5

https://ml5js.org/docs/posenet-webcam

Context

The project utilizes a webcam along with PoseNet and P5. With PoseNet, a skeleton is constructed from the subject registered via the webcam. Within P5, a particle system intended to resemble stars is drawn in the background. While still focusing on movement, the particle system reacts to the movement of the skeleton (mostly the limbs). As the arms move across the canvas, the particles swirl and twist, following the movement of the skeleton, which also appears on the canvas. Additionally, more than one individual can be registered as a skeleton as long as they are in proper view of the webcam. The intent is to provide a sense of interactivity where the individual has an impact on the environment and can alter it the way they see fit.
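
A rough sketch of the kind of swirling reaction we are aiming for is shown below, written in plain P5 with the mouse standing in for a wrist keypoint; the radius and force constants are placeholder assumptions.

let particles = [];

function setup() {
  createCanvas(640, 480);
  for (let i = 0; i < 300; i++) {
    particles.push({
      pos: createVector(random(width), random(height)),
      vel: p5.Vector.random2D().mult(0.5)
    });
  }
}

function draw() {
  background(0, 40); // translucent background leaves star trails
  const wrist = createVector(mouseX, mouseY); // stand-in for a PoseNet wrist keypoint
  stroke(255);
  strokeWeight(2);
  for (const p of particles) {
    const toWrist = p5.Vector.sub(wrist, p.pos);
    if (toWrist.mag() < 120) {
      // Rotate the pull by 90 degrees so particles swirl around the wrist
      p.vel.add(toWrist.copy().rotate(HALF_PI).setMag(0.15));
    }
    p.vel.limit(3);
    p.pos.add(p.vel);
    // Wrap around the canvas edges
    p.pos.x = (p.pos.x + width) % width;
    p.pos.y = (p.pos.y + height) % height;
    point(p.pos.x, p.pos.y);
  }
}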

screen-shot-2018-11-27-at-11-01-33-am

Pictured above is the skeleton from the PoseNet demo that will be controlling the particle system. The movement of the limbs will be crucial in altering the environment. There are some issues where limbs aren't recognized at times, especially when they are close to the body.

20181127_091845

Pictured above is the implementation of the PoseNet library with P5.

20181127_092035

Previous Materials/Experiments

For Experiment 3, we used the webcam with P5 to construct an image out of particles. We managed to manipulate the particles with a sensor whose values were combined with the webcam feed. For this experiment, we are still using familiar elements such as the particle system in P5 and the webcam feed projection, but we are altering the concept and their relation to one another.
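
As a reminder of that technique, the sketch below rebuilds the webcam image out of a grid of boxes whose size follows pixel brightness; it is a simplified stand-in for the Experiment 3 code linked later, not that code itself.

let video;
const step = 10; // spacing of the box grid on the canvas

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(64, 48); // low-resolution sampling grid
  video.hide();
  noStroke();
}

function draw() {
  background(0);
  video.loadPixels();
  for (let y = 0; y < video.height; y++) {
    for (let x = 0; x < video.width; x++) {
      const i = (y * video.width + x) * 4;
      const bright = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
      const s = map(bright, 0, 255, 1, step); // brighter pixels become bigger boxes
      fill(bright);
      rect(x * step, y * step, s, s);
    }
  }
}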

Experiment 3 Final Prototype – Cam Gesture

Michael Shefer – 3155884

Andrew Ng-Lun – 3164714

Rosh Leynes – 3163231

Cam Gesture

Project Description

For the project we set out to create a physical interface where manipulations displayed on a screen would occur depending on the user's physical movements. The user wears two gloves fitted with stretch-sensing fabrics that read values emitted by movement. These values are then transferred to a display reading a webcam feed and processed into various manipulations, such as increasing the quantity of boxes/pixels, increasing and decreasing the size of the boxes/pixels, and manipulating the intensity of the stroke. At first we intended for the final sensor to add multiple filters to the screen, but trouble with the code forced us to adapt. The screen aspect uses P5, which reads values from the Arduino and our three analog sensors.
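
A hedged sketch of that value pipeline is shown below. It assumes the p5.serialport library and its companion serial server, that the Arduino prints the three sensor readings as a single comma-separated line, and a machine-specific port name; none of these details are taken from our actual code, which is linked at the end of the post.

let serial;
let sensors = [0, 0, 0]; // latest readings from the three stretch sensors

function setup() {
  createCanvas(640, 480);
  // p5.serialport talks to the Arduino through its local serial server
  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbmodem1411'); // port name is machine-specific
  serial.on('data', gotData);
}

function gotData() {
  const line = serial.readLine(); // expecting "v1,v2,v3\n" from the Arduino
  if (!line) return;
  const vals = line.trim().split(',').map(Number);
  if (vals.length === 3 && vals.every(v => !isNaN(v))) sensors = vals;
}

function draw() {
  background(0);
  // Map each sensor to one manipulation: grid density, box size, stroke intensity
  const cols = floor(map(sensors[0], 0, 1023, 10, 60));
  const size = map(sensors[1], 0, 1023, 2, 20);
  strokeWeight(map(sensors[2], 0, 1023, 0.5, 4));
  stroke(255);
  fill(120);
  const spacing = width / cols;
  for (let x = 0; x < width; x += spacing) {
    for (let y = 0; y < height; y += spacing) {
      rect(x, y, size, size);
    }
  }
}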

The project went through various alterations compared to its initial stage. We set out to connect the Arduino with TouchDesigner, as we intended to construct a matrix of pixels for an animation, and then for a real-time webcam feed, that could easily be manipulated with the hands. The concept was that as your hands opened up, the pixels would expand, giving the illusion of control via the physical interface. The initial idea had to be quickly altered as we encountered various challenges getting the values from the sensors and the Arduino into TouchDesigner. It was there that we switched to P5, which was more familiar to us.

As for materials, we used two gloves, one black and one white, for a simple presentation that would blend with the dark gray sensors and counter the multicolored alligator clips, along with three stretch-sensing fabrics connected to the fingertips, because we wanted to emphasize various combinations of hand movement.

Pictured below is the initial stage of our project, where we constructed a matrix of cubes to build an animation.

20181113_110011

20181113_104008

This later progressed into using the matrix of cubes to construct an image from the webcam feed. The farthest we got at this stage was having brief values read within TouchDesigner, but it was not able to create a consistent image of the webcam feed.

20181113_131821

Pictured above is the first build of our glove. Initially, all the sensors were scattered across the fingertips of one glove, but we decided to make two, as over time it became difficult to manipulate certain functions.

20181113_125418

This was the first attempt at reconstructing our TouchDesigner concept in P5.

20181115_110928

Pictured above is the final build of the two gloves used for the critique.

Provided below are videos of the first prototype glove working with the P5 code:

https://drive.google.com/file/d/1VB5bvoKFE_A9Cye1EFV6X5dcEQyWHXO9/view?usp=sharing

https://drive.google.com/file/d/1cQFlUNgzErpail8aY0hJIDDKHp0MO3oQ/view?usp=sharing

https://drive.google.com/file/d/1PVzw5WnK9ABVWVlx3PQaTlcyjAMcEINa/view?usp=sharing

https://drive.google.com/file/d/1NY4Zog-s1ACMlxkXvsuDUniqPl_C3X_z/view?usp=sharing

Project Context

When given the project, we immediately wanted to utilize body movement that would have a relationship with the display. While looking for inspiration, we came across a company called Leap Motion, which specializes in VR and, more specifically, in sensing hand and finger motion. From their portfolio we decided to implement the idea of having various finger sensors perform different functions.

Code with comments and references

https://github.com/NDzz/Atelier/tree/master/Experiment-3

https://github.com/jesuscwalks/experiment3

 

Experiment 1 Final Prototype – Michael Shefer, Andrew Ng-Lun

Text-To-Speech

Michael Shefer (3155884), Andrew Ng-Lun (3164714)

For our concept, we wanted to tackle the possibility of text-to-speech through the representation of a synthetic being speaking to the audience. We drew influence from futurists who perceive AI as a possible threat. To represent this, we decided to visualize a face with unsettling features that would speak in a monotone, similar to previous fantasy representations of a personified AI. Essentially, the prototype runs like this: the user inputs anything from numbers to words and sentences into the text box, and after pressing the enter key, the face speaks through animation. For the face, eye, and mouth movement we used the p5.play library to visualize the AI, and we used the p5.speech library for the audio aspect and the text-to-speech.
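
A minimal sketch of the text-to-speech half is shown below, using p5.speech only (the p5.play face animation is omitted); the pitch and rate values are assumptions chosen to approximate the monotone delivery.

let speech;
let textBox;

function setup() {
  noCanvas(); // the face drawn with p5.play is left out of this sketch
  speech = new p5.Speech();
  speech.setPitch(0.8); // slightly low, flat delivery
  speech.setRate(0.9);

  textBox = createInput('');
  textBox.attribute('placeholder', 'Type something for the AI to say');
}

function keyPressed() {
  // Speak whatever is in the text box when the enter key is pressed
  if (keyCode === ENTER && textBox.value().length > 0) {
    speech.speak(textBox.value());
    textBox.value('');
  }
}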

The project itself went through many phases and alterations. A text-to-speech program wasn't our initial starting point. We originally started off with the concept of a series of animated faces reacting to the tune of music: if the music was mellow, the face would look different than for a song that is upbeat. We had to scrap this concept after encountering difficulties with the microphone, as it is limited to picking up on specific frequencies and tunes.

Rationale of the Programming Language

Our group decided to build our project with p5.js, a JavaScript library we were introduced to on the first day of class. Since then, we have found that p5 is very flexible and excels at animating objects. Our idea for the final project was based on the five experiment assignments, in which we discovered the p5 library and the vast range of features it unlocks for the canvas. Therefore, we decided to use its add-ons to animate an AI interface. Our code is based on two major add-ons, p5.play.js and p5.speech.js.

20180925_114131

https://photos.app.goo.gl/4qK5wGrkzB3EwpZ76

The video and image above represent where we first started with our concept. We had two rough animations to represent the emotions that we were going to have react to different music frequencies.

20180927_102043

Above is the final image of our prototype, with the visualized AI and a text box for the audience to input a statement.

Code on GitHub [references included]

https://github.com/NDzz/Final_Assignemnt-AI