Re-Do / Un-Do / Over-Do – Starfish Generator

Digital Ocean

Michael Shefer – 3155884

Andrew Ng-Lun – 3164714

Brian Nguyen – 3160984


20190409_173726

Description

For the final assignment of Atelier 2, our group set out to revisit one of our very first assignments. Although we weren't all initially in the same group, everyone enjoyed the approach to the starfish concept and saw potential in expanding on it. When tackling ideas on how to expand the initial project, the group settled on interactivity among individuals. We explored ideas where individuals would interact with the starfish, the environment, and other people. Ultimately, we went back to the initial concept of generating a starfish and worked on creating an interactive aspect where individuals can create a personalized starfish and add it to an archive of other people's creations. The project operates as follows: on one screen, the individual can alter physical properties of the starfish such as the number of legs, the length of the legs, the thickness of the body, and the colour. Along with that, users can search for any image they want, and it is texture-mapped onto the starfish. Once the creation is done, the user sends the starfish to a second screen that holds all of the previously generated starfish.


Process

When approaching the project, we prioritized expanding on previous attempts, concepts, and limitations, and wanted to rebuild the project as a new experience. At one point we considered completely removing the starfish aspect and focusing purely on generation and interactivity. Ultimately, we continued with the generator and focused on implementing previous suggestions.

20190326_102351

The first starfish generator prototype used PubNub to link the various screens together, but for this attempt we used Firebase to archive all the user-generated starfish, which eventually appear in our aquarium. The different screens, such as the generator itself and the final aquarium display, operate as separate sites hosted on GitHub. One of the limitations of the previous starfish generator was the lack of customization, since users could only select from a small array of options. To expand on this, we gave the user the ability to search for an image to use as the starfish's texture. Using Google Images (along with WebGL), the first result is selected and then mapped over the starfish as a texture. Additionally, we opted to use sliders instead of a fixed set of options, giving users a wider range of values and therefore more variation and freedom with their creations.
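A minimal sketch of how this customization flow could look in p5.js (WEBGL mode). The drawStarfish() helper, the slider ranges, and the way the chosen image URL arrives are illustrative assumptions rather than the project's actual code:

```javascript
let legSlider, lengthSlider, thicknessSlider;
let starTexture; // the image chosen by the user, mapped onto the starfish

function setup() {
  createCanvas(600, 600, WEBGL);
  textureMode(NORMAL); // use 0..1 UV coordinates below
  legSlider = createSlider(4, 12, 5, 1);         // number of legs
  lengthSlider = createSlider(50, 200, 120, 1);  // leg length (outer radius)
  thicknessSlider = createSlider(10, 80, 40, 1); // thickness (inner radius)
}

// Called once an image URL has been picked (e.g. the first search result)
function setStarTexture(url) {
  loadImage(url, img => { starTexture = img; });
}

function draw() {
  background(20);
  noStroke();
  if (starTexture) texture(starTexture); // map the searched image over the shape
  drawStarfish(legSlider.value(), lengthSlider.value(), thicknessSlider.value());
}

// Build the starfish from alternating outer/inner radii
function drawStarfish(legs, outerR, innerR) {
  beginShape(TRIANGLE_FAN);
  vertex(0, 0, 0, 0.5, 0.5); // centre vertex with its UV coordinate
  for (let i = 0; i <= legs * 2; i++) {
    const a = (TWO_PI / (legs * 2)) * i;
    const r = i % 2 === 0 ? outerR : innerR;
    vertex(r * cos(a), r * sin(a), 0, 0.5 + 0.5 * cos(a), 0.5 + 0.5 * sin(a));
  }
  endShape(CLOSE);
}
```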

First iteration of the starfish being sent to the aquarium

20190409_144750

With the customization built, we also worked on adding life to our 2D objects by animating the starfish. Once archived and added to the aquarium, each starfish travels at random, but with noise added on top to mimic the organic movement of a real starfish. Although we did experiment with applying other functions, such as sine-based motion, we settled on noise simply because of how we constructed the object. Other explorations saw us attempting to use Bézier curves to construct the starfish, but we ran into similar complications because of how we built the inner and outer radius of the starfish.
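A rough sketch of that drifting behaviour in p5.js: each archived starfish wanders with Perlin noise so the motion reads as organic rather than jittery. The class fields and step sizes here are assumptions for illustration:

```javascript
class DriftingStarfish {
  constructor(x, y) {
    this.pos = createVector(x, y);
    this.tx = random(1000); // independent noise offsets per starfish
    this.ty = random(1000);
  }

  update() {
    // noise() returns 0..1, so re-centre it around 0 for a gentle drift
    this.pos.x += map(noise(this.tx), 0, 1, -1, 1);
    this.pos.y += map(noise(this.ty), 0, 1, -1, 1);
    this.tx += 0.01; // small steps keep the motion smooth
    this.ty += 0.01;
    // keep the starfish inside the aquarium
    this.pos.x = constrain(this.pos.x, 0, width);
    this.pos.y = constrain(this.pos.y, 0, height);
  }
}
```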

20190402_105837

Finally, the background was created in Photoshop, and since we were using WebGL we also had to map the PNG of the aquarium onto a plane to serve as our canvas background.
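As a sketch of what that workaround might look like in p5.js (the file name is assumed), the aquarium PNG is textured onto a large plane pushed back along the z-axis so everything else draws in front of it:

```javascript
let aquariumBg;

function preload() {
  aquariumBg = loadImage('aquarium.png'); // background exported from Photoshop
}

function drawBackground() {
  push();
  translate(0, 0, -500);        // push the plane behind the starfish
  noStroke();
  texture(aquariumBg);
  plane(width * 2, height * 2); // oversized so it fills the view
  pop();
}
```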

Final Prototype Build

The aquarium populated with people's creations


Explorations and Challenges

Although the final build resembled what we had initially drafted on paper, we weren't able to incorporate all of our ideas due to either limitations or complications.

We wanted to emphasize interactivity with the generated starfish by allowing the user to name their starfish or, similar to the previous prototype build, giving the starfish a scientific name. Then, once in the aquarium, users would be able to hover over any starfish and see its designated name. Unfortunately, since WebGL was such an integral part of the program, it limited our ability to render text.

Collapsing City – Immersive Sound and Augmented Spaces

Collapsing City – An AR Experience

Michael Shefer – 3155884

Andrew Ng-Lun – 3164714

Brian Nguyen – 3160984

Rosh Leynes – 3163231


Description

For our immersive sound and augmented spaces project, we set out to create a narrative space built with sound and supported by visuals, all experienced within an AR environment. A set of four rooms was modeled and built in Unity, each varying in concept and layout. All the rooms are connected to each other, allowing for linear and seamless storytelling. Essentially, with a phone, the audience maneuvers through the four different rooms and listens to the sounds of each room to understand the narrative. As the user approaches certain objects in the AR space, a proximity sound plays to further tell the story. The narrative of the space follows a city collapsing due to global tensions. The initial room is a regular day in the city, accompanied by music, children playing, and basic city chatter. The scene connects to a room with a television that cuts from a game of football to an emergency broadcast of nuclear war. The next scene is a dense city environment with a ruined building, air-raid sirens, and fire. The final scene is the city in complete ruin, with buildings decimated, rubble scattered, and the ambient howl of the wind.

screen-shot-2019-03-19-at-3-02-20-pm

 


Process

When developing the concept, we knew that we wanted to tell the story through different rooms and environments, each accompanied by a variation of sounds, so we first organized the rooms with a rough sketch.

20190312_092030

20190307_111529

Originally, we wanted to set the whole scene on a plane that the audience could look over, lean closer to for details, and watch the narrative change across. We then decided that having the audience go through each room individually, experiencing it as if they were in the environment, would yield a stronger reaction and connection to the narrative. When creating the rooms separately, we initially had the idea of having the audience teleport to each room after stepping through a door frame or entrance to the scene. We scrapped that idea because our intent was to have the audience experience the change in the environment, so we bridged all the individual rooms together to create a seamless experience. As we were all working on one Unity project and making frequent changes, we used Unity Collaborate, which allowed us to import each other's changes.

20190314_095210

20190312_123125

Since sound was a main component of this experiment, we worked with triggers and ambient sounds. On top of the general ambience that establishes each scene's environment, we included triggers that would aid in telling the narrative. For instance, approaching the television plays a soccer game cutting into an emergency broadcast. Additionally, approaching rubble triggers the sound of explosions to convey what had happened. Although we had visuals and 3D models to create our environments, the sound was crucial to telling the story.

20190314_091242


Challenges

We experienced several challenges throughout the development of our experiment, which were eventually resolved, but two stood out: the sound triggers and the scale of our approach.

For sound, we wanted each scene to vary in atmosphere, so we used triggers to achieve this. The triggers require something physical to pass through them in order to play the sound. So when the camera walks into the room with the television, something has to interact with the trigger to activate the audio. We worked around this by attaching a cube to the camera and adding a Rigidbody component to it so that it would be able to interact with the triggers.

screen-shot-2019-03-19-at-3-03-18-pm

The largest challenge we encountered was how we approached the experiment. We really enjoyed the idea of having variety, especially across scenes, to tell a narrative, so we focused most of our development on building these scenes and rooms, accompanied by sound, particles, and an assortment of sourced 3D models. Throughout development we had to regularly scale down our scene sizes and limit particles and lighting so the build would run effectively on phones. In the end, the build ran with lag, and users weren't able to make it through the first scene due to its sheer scale.

 

Fútbol – Narrative Space

Fútbol – An Interactive Experience

20190226_091424

Michael Shefer – 3155884

Andrew Ng-Lun – 3164714

Brian Nguyen – 3160984

Francisco Samayoa – 3165936


Description

For our narrative space, we set out to create the environment of a soccer game using physical, visual, and audio installations. The installation was built as follows: three pylon stations are spread across the ground, each accompanied by digital sensors, and projected on top of the installation is a mapping of a field with visuals and animations. All the sound and visuals are controlled by the digital sensors. As the player passes each pylon station, they press on a switch, which triggers an animation from the projector to lead the player onward as well as a variety of announcer recordings. This continues until the player scores a goal at the end of the installation. On top of all of this, we also played a backdrop of crowd-cheering audio samples through speakers. The purpose of this concept was to enthrall participants in an intense soccer experience where their actions and movements would draw a reaction from the crowd and the announcers. The bright animated visuals and loud, dynamic audio were intended to fill the participant with emotion.


Process

A lot of work had to be done to create our narrative space, so we broke it down into three portions: audio, visual, and physical. For audio, we headed to the recording studio because we wanted to use as much self-recorded material as we could. Mainly, we wanted to record commentary that would reflect the actions of the player, whether they scored a goal, missed a goal, or passed through the pylons. For this we brainstormed and created a brief script to follow.

20190207_105021

After going through our recorded audio from the studio, we decided that we wanted a variety of announcers for the narrative space so that participants would not get tired of hearing the same voice over and over. Additionally, we discovered that improvising yielded more genuine commentary.

20190207_103304

After every member of the group finished their recordings, we brought the audio files into FL Studio, where we were able to alter them to resemble the sound of stadium commentary. Additionally, since we wanted to create a pool of commentary that could be called upon at random, we had to cut our long studio recordings into short phrases, which we then organized in Maxuino.

20190207_104927

For the physical portion of the installation, we created a simple digital switch out of cardboard and tinfoil that acts as a pressure sensor. Two cardboard sheets with tinfoil and conductive thread serve as the main triggers for the audio and visuals.

20190223_142349

Originally, the tinfoil spanned most of the cardboard sheet, but a trial run showed that the switches were constantly being completed when we didn't intend them to be, so we altered the size and placement of the tinfoil and added more space and resistance between the sheets.

20190214_114857

Since the placement of the tinfoil was centred, we added pylons on each side of the switch, forcing the players to go over the cardboard and press onto the centre in order to yield data. Additionally, we had to make some changes to the goal post that contained the IR sensor. The IR sensor struggled to detect the ball, no matter its speed, when it went past the two posts we set up. If we increased the sensitivity, the IR sensor would occasionally pick up the movement of the player instead. We quickly worked around this by placing a cone in the centre of the two posts that the player had to shoot for. With this, the IR sensor easily picked up the movement of the cone being struck and in turn triggered the celebratory goal animations and audio.

53026283_250034592543824_4346468991875678208_n

The visual aspect was created entirely in Photoshop, along with sourced GIFs. The animations consisted of the arrows that led the player through the pylons and to the goal, the accompanying sourced GIFs, and the flashing celebratory goal animations. The installation's phases were split across four videos. We constructed the presentation by projecting the field and its animations and placing the cardboard buttons and goal post accordingly. We encountered issues with the visuals responding to the triggers: although data was being read and transferred to trigger the animations, there was an occasional delay between video transitions because the files were too large, but manually triggering the transitions worked well.

Video of Final Product

53034607_324093878241515_8547277024060768256_n

Pictured above is the Maxuino patch used for organizing all the switches and their interactivity with the various commentary and visuals.

 

Documentation and Process Work

Stadium Field with various stages and animations

Audio Files of the various commentary along with sampled crowd audio

Additional documentation

Experiment 4 – Snow Day

Brian Nguyen – 3160984

Andrew Ng-Lun – 3164714

Michael Shefer – 3155884

Rosh Leynes – 3163231

Snow Day

20181204_102722

Inspiration

Experiment 4 went through many developments and alterations compared to the initial stage of the assignment. Essentially, our inspiration came from the concept of how movement can manipulate an environment. With the use of matter.js, ml5.js, and the PoseNet model, we set out to create an interactive installation that tracks an individual's body movement and builds it into a skeleton capable of interacting with the environment within P5. The environment is set to mimic a snow day: particles gradually drop to the bottom of the canvas, and the individual can interact with their physics through movement of the arms. The purpose is to provide the experience of playing in the snow via P5. Additionally, the installation promotes interactivity with others, as it is capable of registering more than one individual on the canvas and allowing all participants to interact with the environment.

20181129_111339

Related Work

The inspiration for our concept stemmed from an article that introduced us to PoseNet and described its capabilities in more depth. With a basic understanding of it and its implementation in P5, we then continued to explore and develop the idea of physical body interactivity by looking at particle-system examples on CodePen before looking into other libraries. Additionally, some of our group members had previously worked with the webcam and its capability to manipulate particles in P5 via a webcam feed; this previous knowledge allowed us to jump-start our concept development.

Background Related Work

https://github.com/NDzz/Atelier/tree/master/Experiment-3?fbclid=IwAR3O6Nm8dLJ1ZMWGYfHoAZdNMrf8qYHPqX-nz5xDunLjfR5xTTWmfsNbfHM

 

Goals for the Project

20181129_100344

Our first goal was to implement PoseNet in P5 to register a body against a background particle system on the canvas. This was achieved, as pictured above. The basic points of the head, shoulders, and limbs were registered and constructed into a skeleton; furthermore, it managed to capture more than one person. From there we continued to refine the presentation of the project by altering the particles, canvas, and skeleton.

20181204_091052

With PoseNet and our particle system working well in P5, our next goal was to actually implement the interactivity. While this goal was achieved by presentation day, we did encounter difficulty implementing it. With the body movement tracked and represented as a skeleton in P5, we added squares at the positions of the hands that follow the movement of the arms and interact with the falling snow via physics upon contact. The boxes weren't always responsive, especially when they had to follow the movement of multiple people. Additionally, we experimented with which shape could manipulate the snow best and ultimately settled on squares.
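A condensed sketch of that interaction, assuming ml5's PoseNet and matter.js as described above; the box size, confidence threshold, and snow-spawning details are illustrative assumptions rather than the project's exact code:

```javascript
let video, poses = [];
let engine, handBoxes = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();

  // PoseNet reports keypoints (wrists, elbows, etc.) for every person in frame
  const poseNet = ml5.poseNet(video);
  poseNet.on('pose', results => { poses = results; });

  engine = Matter.Engine.create(); // matter.js world holding the snow particles
}

function draw() {
  image(video, 0, 0, width, height);
  Matter.Engine.update(engine);

  // drop a new snow particle every few frames
  if (frameCount % 10 === 0) {
    Matter.World.add(engine.world, Matter.Bodies.circle(random(width), -10, 4));
  }

  // collect wrist positions from every detected person
  const wrists = [];
  for (const p of poses) {
    for (const k of p.pose.keypoints) {
      if ((k.part === 'leftWrist' || k.part === 'rightWrist') && k.score > 0.3) {
        wrists.push(k.position);
      }
    }
  }

  // one static box per wrist; moving it pushes the snow bodies around
  while (handBoxes.length < wrists.length) {
    const box = Matter.Bodies.rectangle(0, 0, 60, 60, { isStatic: true });
    Matter.World.add(engine.world, box);
    handBoxes.push(box);
  }
  wrists.forEach((w, i) => Matter.Body.setPosition(handBoxes[i], { x: w.x, y: w.y }));

  // draw the snow
  noStroke();
  fill(255);
  for (const body of Matter.Composite.allBodies(engine.world)) {
    if (body.circleRadius) circle(body.position.x, body.position.y, body.circleRadius * 2);
  }
}
```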

Our final goal came from an issue that we encountered during critique day. In order to register the subject effectively, the body had to be well lit. We managed to achieve this by installing light stands to illuminate the subject. We experimented with different ways of eliminating shadows and with the angles at which the light fell onto the subject. In the end, we used two LED studio lights installed alongside the webcam, pointed toward a white backdrop, in order to capture the subject's movement effectively.

 

Code with References and Comments

https://github.com/notbrian/Atelier-Snowday?fbclid=IwAR0XYzXsnVVGsWvfuWzT_TsOGNYARYvhxFyJ-71HK2yL5dtW4R3JV-jWAPs

Working Demo

https://notbrian.github.io/Atelier-Snowday/?fbclid=IwAR2pplMkpbB7mnTTWu5xq63prgu13r2Syy7KClFAADPmijTKe4BUcy-8As0

Experiment 4 – Progress Report

Brian Nguyen – 3160984

Andrew Ng-Lun – 3164714

Michael Shefer – 3155884

Rosh Leynes – 3163231

Soup of Stars

Inspiration

The inspiration for the project developed as we looked into the potential of our original concept. We started off with the idea of movement implemented with analog sensors, as a ball game where users would attempt to keep the ball up using body parts that had sensors attached. After reviewing several pieces, we decided to develop the concept entirely around a webcam because we wanted the body to be the sole subject of the concept.

Relevant Links for Inspiration

https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5

https://ml5js.org/docs/posenet-webcam?fbclid=IwAR2pg6qdmZfbi0Gxi3ohxtP9tcXUpokaYj6triiHtw6giJ9vTbYVyM1LNWI

Context

The project utilizes a webcam along with PoseNet and P5. With the PoseNet library, a skeleton is constructed based on the subject registered via the webcam. Within P5, a particle system intended to resemble stars is drawn in the background. While still focusing on movement, the particle system reacts to the movement of the skeleton (mostly the limbs). As the arms move across the canvas, the particle system swirls and twists, following the movement of the skeleton. The skeleton of the subject also appears on the canvas. Additionally, more than one individual can be registered as a skeleton, as long as they are in proper view of the webcam. The intent is to provide a sense of interactivity where the individual has an impact on the environment and can alter it as they see fit.
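As a minimal sketch of the swirl behaviour described above (the 100 px radius and force scale are assumptions), each star particle gets a small tangential push around a nearby tracked wrist, so arm movement twists the field:

```javascript
class Star {
  constructor() {
    this.pos = createVector(random(width), random(height));
    this.vel = createVector(0, 0);
  }

  // hand is a p5.Vector holding a wrist position from the PoseNet skeleton
  swirlAround(hand) {
    const toHand = p5.Vector.sub(hand, this.pos);
    const d = toHand.mag();
    if (d > 1 && d < 100) {
      // rotate the attraction vector 90° to get an orbit-like swirl
      const tangent = createVector(-toHand.y, toHand.x).setMag(0.3);
      this.vel.add(tangent);
    }
  }

  update() {
    this.vel.mult(0.95); // damping so the stars settle when no one moves
    this.pos.add(this.vel);
  }

  show() {
    stroke(255);
    point(this.pos.x, this.pos.y);
  }
}
```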

screen-shot-2018-11-27-at-11-01-33-am

Pictured above is the skeleton, using the PoseNet demo, that will be controlling the particle system. The movement of the limbs will be crucial in altering the environment. There are some issues where limbs aren't recognized at times, especially when they are close to the body.

20181127_091845

Pictured above is the implementation of the PoseNet library with P5

20181127_092035

Previous Materials/Experiments

For experiment 3, we’ve used the webcam with P5 to construct an image using particles. We managed to manipulate the particles with a sensor that then combined with the webcam feed. For this experimentation, we’re still using familiar elements such as the particle system in P5 and the webcam feed projection but altering its concept and their relation to one another.

Experiment 3 Final Prototype – Cam Gesture

Michael Shefer – 3155884

Andrew Ng-Lun – 3164714

Rosh Leynes – 3163231

Cam Gesture

Project Description

For this project, we set out to create a physical interface where the user's physical movements produce manipulations displayed on a screen. The user wears two gloves fitted with stretch-sensing fabrics that read values produced by movement. These values are then transferred to a display showing a webcam feed and processed into various manipulations, such as increasing the quantity of boxes/pixels, increasing or decreasing the size of the boxes/pixels, and manipulating the intensity of the stroke. At first we intended for the final sensor to add multiple filters to the screen, but trouble with the code forced us to adapt. The screen aspect uses P5, which reads values from the Arduino and our three analog sensors.
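A hedged sketch of that value pipeline, assuming the Arduino prints the three sensor readings as a comma-separated line and the page uses the p5.serialport add-on (which requires its local serial server); the port name and mapping ranges are assumptions, not the project's actual setup:

```javascript
let serial;
let sensors = [0, 0, 0]; // [box quantity, box size, stroke intensity]

function setup() {
  createCanvas(640, 480);
  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbmodem14101'); // hypothetical port name
  serial.on('data', serialEvent);
}

function serialEvent() {
  const line = serial.readLine().trim();
  if (line.length > 0) {
    sensors = line.split(',').map(Number); // e.g. "512,300,871"
  }
}

function draw() {
  background(0);
  const count   = floor(map(sensors[0], 0, 1023, 5, 40)); // quantity of boxes
  const size    = map(sensors[1], 0, 1023, 4, 40);        // box/pixel size
  const strokeW = map(sensors[2], 0, 1023, 0.5, 4);       // stroke intensity
  strokeWeight(strokeW);
  // ...draw the webcam-driven grid of boxes here using count and size
}
```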

The project went through various alterations compared to its initial stage. We set out to connect the Arduino to TouchDesigner, as we intended to construct a matrix of pixels for an animation, and later a real-time webcam feed, that could be easily manipulated with the hands. The concept was that as your hands opened up, the pixels would expand, giving the illusion of control via the physical interface. The initial idea had to be quickly altered as we encountered various challenges getting the values from the sensors and Arduino into TouchDesigner. That is when we switched to P5, which was more familiar to us.

As for materials, we used two gloves, one black and one white, for a simple presentation that would blend with the dark gray sensors and offset the multicoloured alligator clips, along with three stretch-sensing fabric sensors attached to the fingertips, because we wanted to emphasize various combinations of hand movement.

The initial stage of our project where we constructed a matrix of cubes to build an animation.

20181113_110011

20181113_104008

This later progressed into using the matrix of cubes to construct an image from the webcam feed. The farthest we got in this stage was having brief values read within TouchDesigner, but it wasn't able to create a consistent image of the webcam feed.

20181113_131821

Pictured above is the first build of our glove. Initially, all the sensors were scattered across the fingertips of one glove, but we decided to make two, as over time it became difficult to manipulate certain functions.

20181113_125418

This was the first attempt at reconstructing our TouchDesigner concept with P5.

20181115_110928

Pictured above is the final build of the two gloves used for the critique.

Provided below are videos of the first prototype glove working with the P5 code:

https://drive.google.com/file/d/1VB5bvoKFE_A9Cye1EFV6X5dcEQyWHXO9/view?usp=sharing

https://drive.google.com/file/d/1cQFlUNgzErpail8aY0hJIDDKHp0MO3oQ/view?usp=sharing

https://drive.google.com/file/d/1PVzw5WnK9ABVWVlx3PQaTlcyjAMcEINa/view?usp=sharing

https://drive.google.com/file/d/1NY4Zog-s1ACMlxkXvsuDUniqPl_C3X_z/view?usp=sharing

Project Context

When given the project, we immediately wanted to utilize body movement that would have a relationship with the display. While looking for inspiration, we came across a company called Leap Motion, which specializes in VR and, more specifically, in tracking hand and finger motions as sensor input. From their portfolio, we decided to implement the idea of having various finger sensors performing different functions.

Code with comments and references

https://github.com/NDzz/Atelier/tree/master/Experiment-3?fbclid=IwAR3O6Nm8dLJ1ZMWGYfHoAZdNMrf8qYHPqX-nz5xDunLjfR5xTTWmfsNbfHM

https://github.com/jesuscwalks/experiment3?fbclid=IwAR357iwqUCAjnUOe3US_vSQIvfToqX51yMsmZMubwS-RNf6bCblsQj7RSDs

 

Experiment 1 Final Prototype – Michael Shefer, Andrew Ng-Lun

Text-To-Speech

Michael Shefer (3155884) Andrew Ng-Lun (3164714)

For our concept, we wanted to tackle the possibility of text-to-speech through the representation of a synthetic being speaking to the audience. We drew influence from futurists who perceive AI as a possible threat. To represent this, we decided to visualize a face with unsettling features that would speak in a monotone, similar to previous fantasy representations of a personified AI. Essentially, the prototype runs like this: the user inputs anything from numbers to words and sentences into the text box, and after pressing the Enter key, the face speaks through animation. For the face, eye, and mouth movement, we used the p5.play library to visualize the AI, and we used the p5.speech library for the audio aspect and the text-to-speech.

The project itself went through many phases and alterations. Text-to-speech wasn't our initial starting point. We originally started off with the concept of a series of animated faces reacting to the tune of music: if the music was mellow, the face would look different than it would for an upbeat song. We had to scrap this concept after encountering difficulties with the microphone, as it was limited to picking up specific frequencies and tunes.

Rationale of the Programming Language

Our group decided to use the well-known p5.js library for our project, since we were introduced to it on the first day of class. Since then, we have found that p5 is very flexible and excels at animating objects. Our idea for the final project was based on the five experiment assignments, where we discovered the p5 library and the vast range of features it unlocks for the canvas. Therefore, we decided to use its add-ons to animate an AI interface. Our code is based on two major add-ons, p5.play.js and p5.speech.js.
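A minimal sketch of the text-to-speech flow using the p5.speech add-on; the speech rate and the animateMouth() hook into the p5.play face are assumptions for illustration:

```javascript
let voice, inputBox;

function setup() {
  createCanvas(600, 400);
  voice = new p5.Speech();  // speech synthesis from p5.speech.js
  voice.setRate(0.8);       // slower, flatter delivery for the monotone AI
  inputBox = createInput('');
}

function keyPressed() {
  if (keyCode === ENTER && inputBox.value().length > 0) {
    voice.speak(inputBox.value()); // read the typed statement aloud
    // animateMouth();             // hypothetical hook into the p5.play face animation
    inputBox.value('');
  }
}
```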

20180925_114131

https://photos.app.goo.gl/4qK5wGrkzB3EwpZ76

The video and image above represent where we first started with our concept. We had two rough animations representing the emotions that were going to react to different music frequencies.

20180927_102043

Above is the final image of our prototype with the visualized AI and text box for the audience to input a statement.

Code on GitHub [references included]

https://github.com/NDzz/Final_Assignemnt-AI