Arduino Basketball: Arcade Basketball

Francisco Samayoa, Isaak Shingray, Donato Liotino, Shiloh Light-Barnes

Kate Hartman

DIGF-2004-001

November 15, 2018


Project Description

Our proposed sensing method is pressure-sensing fabric plus digital input buttons. We will be using pressure-sensing fabric, conductive fabric, conductive thread, wires, and electrical tape. For props we will use a basketball and a basketball net. In terms of sensor construction, the shot-success sensor will be a broken circuit woven into the mesh of the netting that is completed when the ball passes through the net. The backboard sensor will be constructed of pressure-sensitive material in order to provide an analog signal. Finally, the foot-position switches will be incomplete circuits that are completed when stepped on by the player. The backboard and foot switches are both analog inputs, while the mesh is digital.

In the end we had to glue conductive fabric onto the basketball because the fabric already on the ball was insufficient to complete the circuit. The mesh had to be made tighter in order for the ball to be sensed by the conductive thread. The foot switches were initially digital, but we made a conscious decision to change them to analog: rather than having players wear aluminum foil on their feet, they simply have to step on the switches.
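As a rough illustration of how the screen side might read this mix of digital and analog sensors, the sketch below assumes the Arduino prints one comma-separated line per loop (net, backboard, near foot, far foot) and that the p5.serialport library and its serial server are in use; the port name and value order are made up, not taken from our actual code.

// Hypothetical sketch: read "net,backboard,footNear,footFar" lines from the Arduino.
// Assumes p5.serialport.js is loaded and the p5.serialcontrol server is running.
let serial;
let netClosed = 0;         // digital: 1 while the ball closes the circuit in the mesh
let backboardPressure = 0; // analog: 0-1023 from the pressure-sensing fabric
let footNear = 0;          // analog: 2-point foot switch
let footFar = 0;           // analog: 3-point foot switch

function setup() {
  createCanvas(400, 200);
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem14101'); // hypothetical port name
  serial.on('data', gotData);
}

function gotData() {
  const line = serial.readLine().trim();
  if (!line) return;
  const values = line.split(',').map(Number);
  if (values.length === 4) {
    [netClosed, backboardPressure, footNear, footFar] = values;
  }
}

function draw() {
  background(0);
  fill(255);
  text('net: ' + netClosed + '  backboard: ' + backboardPressure, 10, 50);
  text('footNear: ' + footNear + '  footFar: ' + footFar, 10, 80);
}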

On the screen there will be a scoreboard that coincides with the points scored. There will also be a one-minute timer, during which the player has to score as many points as possible. The score for each throw is calculated based on whether or not the basketball passes through the hoop, the power with which it hits the backboard, and which foot switch the player is standing on (which gives their distance). This simulates the actual basketball experience, with 2-point and 3-point lines. When the ball hits the pressure-sensing fabric on the backboard with enough power, a 1.5x multiplier is applied to the basket scored. If we had more time, we would add a power score based on the amount of pressure the backboard senses.
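The sketch below outlines that scoring and timer logic in p5.js. It is a simplified sketch only: the basketScored() helper, the threshold value, and the simulated key press are placeholders for illustration, not our calibrated code.

// Scoring rules: 2 or 3 points per basket depending on which foot switch is
// pressed, a 1.5x multiplier for a hard backboard hit, and a 60-second round.
let score = 0;
let timeLeft = 60;                 // seconds remaining in the round
const BACKBOARD_THRESHOLD = 600;   // placeholder analog value for a "hard" hit

function setup() {
  createCanvas(640, 360);
  textAlign(CENTER, CENTER);
  setInterval(() => { if (timeLeft > 0) timeLeft--; }, 1000);
}

// Called whenever the net circuit reports a made basket.
function basketScored(onThreePointSwitch, backboardPressure) {
  if (timeLeft <= 0) return;                     // round is over
  let points = onThreePointSwitch ? 3 : 2;       // distance from the foot switches
  if (backboardPressure > BACKBOARD_THRESHOLD) { // hard bank-shot bonus
    points *= 1.5;
  }
  score += points;
}

function draw() {
  background(20);
  fill(255);
  textSize(48);
  text('Score: ' + score, width / 2, height / 2 - 30);
  textSize(24);
  text('Time left: ' + timeLeft + 's', width / 2, height / 2 + 30);
}

// Example: simulate a hard bank shot from the 3-point switch with a key press.
function keyPressed() {
  basketScored(true, 700);
}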

Our vision for the presentation consists of attaching the net to the whiteboard and setting up the foot switches on the ground. The scoreboard will be displayed using a projector. The floor sensors will be placed in a semi-circle in front of the net, at two different distances. Look for this product at your local arcade, coming soon!

Project Code

https://github.com/avongarde/Atelier/tree/master/Assignment%203


Photos + Videos
November 13

Photos: the ball, the breadboard, and the net.

November 15


Project Context

Demortier, Sylvain. “Sigfox-Connected Basketball Hoop.” Arduino Project Hub, 4 Sept. 2017, create.arduino.cc/projecthub/sylvain-demortier/sigfox-connected-basketball-hoop-47091c?ref=tag&ref_id=basketball&offset=0.

This project helped guide our aesthetic for the final product. In the picture of his project, the hoop hangs from the wall with the Arduino tucked behind the backboard. This would be ideal because the board wouldn't get damaged or be in the way of play. Looking closely, you can also see the positive and negative wires (in his case an IR receiver and emitter) on the sides of the net, indicating that the ball triggers the score when passing through the hoop. This is the approach we opted for as well.

Instructables. “Arduino Basketball Pop-a-Shot: Upgrayedd.” Instructables.com, Instructables, 10 Oct. 2017, www.instructables.com/id/Arduino-Basketball-Pop-a-Shot-Upgrayedd/.

Another Arduino-based basketball game, and a visually impressive one; the creator even placed the scoreboard on the backboard itself! Visually, this was a project we wanted to imitate as well. However, this one uses a distance sensor to count the buckets. While we decided to use pressure-sensing fabric instead, we did like the idea of a digital scoreboard, so we referenced this example's scoreboard approach but built ours in p5.js rather than with a quad alphanumeric display.

Experiment 3 Final Prototype – Cam Gesture

Michael Shefer – 3155884

Andrew Ng-Lun – 3164714

Rosh Leynes – 3163231

Cam Gesture

Project Description

For this project we set out to create a physical interface where manipulations displayed on a screen would occur depending on the user's physical movements. The user wears two gloves fitted with stretch-sensing fabric that reads values produced by movement. These values are then sent to a display showing a webcam feed and processed into various manipulations, such as increasing the quantity of boxes/pixels, increasing and decreasing the size of the boxes/pixels, and changing the intensity of the stroke. At first we intended the final sensor to add multiple filters to the screen, but trouble with the code forced us to adapt. The screen side uses p5.js, which reads values from the Arduino and our three analog sensors.
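A stripped-down sketch of that screen-side effect is below, with mouseX standing in for one of the stretch-sensor values that would normally arrive from the Arduino; the grid resolution and box sizes are arbitrary.

// Simplified webcam "pixel box" effect. mouseX stands in for a glove sensor value.
let cam;

function setup() {
  createCanvas(640, 480);
  cam = createCapture(VIDEO);
  cam.size(80, 60);   // low-resolution sample of the webcam feed
  cam.hide();         // draw it ourselves instead of as an HTML element
  noStroke();
}

function draw() {
  background(0);
  cam.loadPixels();
  if (cam.pixels.length === 0) return;   // camera not ready yet
  // Sensor stand-in: box size grows as the "glove" stretches (mouseX here).
  const boxSize = map(mouseX, 0, width, 4, 24);
  const stepX = width / cam.width;
  const stepY = height / cam.height;
  for (let y = 0; y < cam.height; y++) {
    for (let x = 0; x < cam.width; x++) {
      const i = (y * cam.width + x) * 4;
      fill(cam.pixels[i], cam.pixels[i + 1], cam.pixels[i + 2]);
      rect(x * stepX, y * stepY, boxSize, boxSize);
    }
  }
}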

The project went through various alterations compared to its initial stage. We set out to connect the Arduino with TouchDesigner, as we intended to construct a matrix of pixels for an animation, and later a real-time webcam feed, that could be easily manipulated with the hands. The concept was that as your hands opened up, the pixels would expand, giving the illusion of control via the physical interface. The initial idea had to be quickly altered as we encountered various challenges getting the values from the sensors and Arduino into TouchDesigner. It was there that we switched to p5.js, which was more familiar to us.

As for materials, we used two gloves, one black and one white, for a simple presentation that blends with the dark gray sensors and counters the multicoloured alligator clips. Three pieces of stretch-sensing fabric were connected to the fingertips because we wanted to emphasize various combinations of hand movements.

The initial stage of our project where we constructed a matrix of cubes to build an animation.


This later progressed into using the matrix of cubes to construct an image from the webcam feed. The farthest we got at this stage was having brief values read within TouchDesigner, but we weren't able to create a consistent image of the webcam feed.

Pictured above is the first build of our glove. Initially all the sensors were scattered across the fingertips of one glove, but we decided to make two, as over time it became difficult to manipulate certain functions.

This was the first attempt at reconstructing our TouchDesigner concept with p5.js.

Pictured above is the final build of the two gloves used for the critique.

Provided below are videos of the first prototype glove working with the p5.js code:

https://drive.google.com/file/d/1VB5bvoKFE_A9Cye1EFV6X5dcEQyWHXO9/view?usp=sharing

https://drive.google.com/file/d/1cQFlUNgzErpail8aY0hJIDDKHp0MO3oQ/view?usp=sharing

https://drive.google.com/file/d/1PVzw5WnK9ABVWVlx3PQaTlcyjAMcEINa/view?usp=sharing

https://drive.google.com/file/d/1NY4Zog-s1ACMlxkXvsuDUniqPl_C3X_z/view?usp=sharing

Project Context

When given the project, we immediately wanted to utilize body movement that would have a relationship with the display. While looking for inspiration we came across a company called Leap Motion, which specializes in VR and, more specifically, in tracking hand and finger motions as sensor input. From their portfolio we took the idea of having various finger sensors perform different functions.

Code with comments and references

https://github.com/NDzz/Atelier/tree/master/Experiment-3?fbclid=IwAR3O6Nm8dLJ1ZMWGYfHoAZdNMrf8qYHPqX-nz5xDunLjfR5xTTWmfsNbfHM

https://github.com/jesuscwalks/experiment3?fbclid=IwAR357iwqUCAjnUOe3US_vSQIvfToqX51yMsmZMubwS-RNf6bCblsQj7RSDs

 

Hot Hands

Team Members

Melissa Roberts 3161139

Samantha Sylvester 3165592

Project Description

We created a pair of gloves that warm up when you make a fist, press your hands together, or hold somebody’s hand. The main components are conductive pressure sensing fabric, conducting warming fabric, hand-sewn fleece lining, and a store-bought outer layer. The Eeontex Pressure Sensing Fabric is our sensor, which triggers a current to run through the Thermionyx Non-Woven Warming Fabric.

Continue reading “Hot Hands”

Experiment 1: Ripples Playground – Brian Nguyen

Ripples Playground

Brian Nguyen, 3160984

Re-posting this on the correct blog.

Code: https://github.com/notbrian/Atelier-Final-Prototype

Live Demo: https://notbrian.github.io/Atelier-Final-Prototype/index.html

The Ripples Playground is a fun interactive sketch meant to mesmerize the user by generating colorful, aesthetically pleasing, expanding 'ripples' on the screen from the user's mouse input. Each ripple's speed and color are randomized within a range. There is also a second variant, a background ripple, created on right click; it is opaque with a fill of either white or black. I added this because it gives a kind of wiping-the-screen look, and it looks pretty trippy.

Alongside this, each ripple generates its own unique oscillation frequency based on its speed, giving the ripples a slight resemblance to sound or radio waves.
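A minimal sketch of the idea, assuming the p5.sound library for the oscillators; the ranges, colours, and right-click handling are simplified stand-ins, not the project's actual values.

// Left click spawns a colourful expanding ring with its own oscillator pitched
// by its speed; right click spawns an opaque "background ripple".
let ripples = [];

function setup() {
  createCanvas(windowWidth, windowHeight);
  document.oncontextmenu = () => false; // keep right click for background ripples
}

class Ripple {
  constructor(x, y, isBackground) {
    this.x = x;
    this.y = y;
    this.r = 0;
    this.speed = random(2, 8);
    this.isBackground = isBackground;
    this.col = isBackground ? color(random([0, 255])) : color(random(255), random(255), random(255));
    this.osc = new p5.Oscillator();
    this.osc.setType('sine');
    this.osc.freq(map(this.speed, 2, 8, 110, 880)); // faster ripple = higher pitch
    this.osc.amp(0.05);
    this.osc.start();
  }
  show() {
    this.r += this.speed;
    if (this.isBackground) {
      noStroke();
      fill(this.col);
    } else {
      noFill();
      stroke(this.col);
      strokeWeight(4);
    }
    circle(this.x, this.y, this.r * 2);
  }
  finished() {
    const done = this.r > max(width, height);
    if (done) this.osc.stop();  // silence the oscillator once the ripple leaves the screen
    return done;
  }
}

function mousePressed() {
  ripples.push(new Ripple(mouseX, mouseY, mouseButton === RIGHT));
}

function draw() {
  background(10);
  for (const r of ripples) r.show();
  ripples = ripples.filter(r => !r.finished());
}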

I tried to add a radio-button switcher for the oscillation type, but since I only got the idea at the last minute while experimenting with the types, I couldn't get it fully implemented or cleaned up. You can view the sine and triangle buttons at the bottom of the page.

I chose p5.js for this project because it makes the sketch accessible on many devices without requiring users to download any software, and it runs smoothly on most of them. For example, if I had used Processing for this, I would need to run it in the Processing editor. p5.js also lets me build UI using HTML elements.

The context and idea for this project came from wanting to experiment with and expand on my Sketch_1, which drew colorful ellipses on the page. After I got them to ripple and expand across the page, I thought they looked a bit like sound/radio waves and experimented with the oscillator.

Experiment 1: Clap Powered Fireworks – Kiana Romeo

Kiana Romeo 3159835

Clap Powered Fireworks!

Description 

For this project, I wanted to create something that would interact with sound, but I did not want to make the same generic DJ-style sound-reactive system that has been seen so many times before. Instead, I wanted whatever was happening on the screen to be controlled by the user, with the sound the user makes directly correlating to the resulting animation. I wanted the project to focus on loud sounds, so I started brainstorming things that a clap could imitate. Instead of the action causing the sound, I wanted the sound to cause the action!

At first, I debated whether lightning would be a better option: an environment in which random lightning strikes would be created when the user clapped their hands, mimicking the sound of lightning. But as a simulation of nature it would have been too predictable, so I chose to make fireworks instead. This way I could change the colour, size, velocity, and shape freely, since fireworks are man-made and do not need to look a certain way in order to be recognizable.
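A bare-bones sketch of the clap trigger, assuming p5.sound's microphone input; the threshold value and the simple particle burst are stand-ins for the full firework system.

// When the mic level jumps above a threshold, spawn a burst of particles.
// The threshold is a guess and would need tuning for a real room.
let mic;
let particles = [];
const CLAP_THRESHOLD = 0.3;

function setup() {
  createCanvas(windowWidth, windowHeight);
  strokeWeight(3);
  mic = new p5.AudioIn();
  mic.start();
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio input starts
}

function explode(x, y) {
  const col = random(255);
  for (let i = 0; i < 80; i++) {
    const a = random(TWO_PI);
    const s = random(1, 5);
    particles.push({ x: x, y: y, vx: cos(a) * s, vy: sin(a) * s, life: 255, col: col });
  }
}

function draw() {
  background(0, 40);          // translucent background leaves firework trails
  if (mic.getLevel() > CLAP_THRESHOLD) {
    explode(random(width), random(height * 0.6));
  }
  particles = particles.filter(p => p.life > 0);
  for (const p of particles) {
    p.x += p.vx;
    p.y += p.vy;
    p.vy += 0.05;             // gravity
    p.life -= 4;
    stroke(p.col, p.life);
    point(p.x, p.y);
  }
}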

Github link: Clap-powered-fireworks

Rationale

Although I could have programmed the sketch in either environment, I chose to work with p5.js instead of Processing because connecting and using the microphone to control the sketch was much more straightforward, and I still got the same desired effect. As well, linking multiple JavaScript files to one HTML file felt much easier than linking multiple files in Processing. I created the sketch in both programs (with help from Dan Shiffman's tutorials, of course!) and preferred the output of the p5.js version much more.


References:

https://www.youtube.com/watch?v=CKeyIbT3vXI&index=30&list=PLRqwX-V7Uu6ZiZxtDDRCi6uhfTH4FilpH (Coding Challenge #27: fireworks)

https://www.youtube.com/watch?v=aKiyCeIuwn4&t=679s&frags=pl%2Cwn (Coding Challenge #41: Clappy Bird)

https://www.youtube.com/watch?v=q2IDNkUws-A (17.8: Microphone Input – p5.js Sound Tutorial)

I relied heavily on these three videos to create my project. Many hours were spent simply watching the videos until the code made sense, and then typing out any code that was given to help with my project. Through watching these videos I learned a lot about particle systems, arrays and functions as well as using the p5 sound library to make interesting, interactive artwork.

Atelier: Animated Movie Experiment: Dimitra Grovestine

“Experiment 1” Assignment

Dimitra Grovestine

3165616

September 27, 2018

 

GitHub Link: https://github.com/dimitragrover/Ateliyay

Drive Link to Movie: https://drive.google.com/file/d/1OfjblX1HizLFLFF2SBLrmi8BLyrTs6CN/view?usp=sharing

Introduction

For the first experiment, we were told to choose something of interest to us and use the code we had learned to enhance our craft. A big part of my life consists of acting and the creation of productions, so I wanted to put together a complete project and decided to explore creating an animated movie.

A large topic in my computer theory class this semester has been discovering the difference between human thinking and computer thinking. One of the largest differences we have outlined is that humans carry emotion where computers do not. I wanted to test how various digital translations and movements could portray human-like emotions.

When choosing my video topic, I wanted to tackle an important subject, one that would be effective presented through basic geometric shapes. I ended up choosing the topic of bullying and the overall human emotion of feeling unwanted, or feeling like you don't fit in with others. The choice to use geometric shapes enhances the topic because there is no race, gender, or sexual orientation attached to circles; yet there is a colour difference between the two main characters of the film. This colour difference allows viewers of all ages to recognize that a difference exists between the two circles, without it being directly comparable to any single social difference between humans. I found that using a geometric shape with a colour difference made a socially difficult topic easier to understand. It also allows viewers to rethink their prejudices.

Humans tend to hold strong social opinions about other humans who are different from themselves. Yet when presented with examples of socially oppressed groups in other species, such as in the animal kingdom, those same humans often think the division is silly or doesn't make sense. All in all, my use of non-human forms was meant to help others rethink the social and physical judgements that they carry.

Storyboard Development

Song choice: Don’t Laugh at Me (by Mark Wills)

Scene 1


Scene 1 stayed fairly similar to my original thoughts. I decided to implement a basic horizontal transition to mimic the human act of entering a space.

Scenes 2, 3 and 4


Scene 2 never ended up occurring in the final cut. Scene 3's jiggling momentum was created using a random X position while moving up the Y axis at a constant rate. The jiggling is very subtle, and I think this is effective for mimicking nerves, because that human feeling is internal rather than an external quality seen intensely by others.

Scenes 5, 6 and 7


A for loop was used to increase the number of circles. To create the feeling of an explosion, the radius was changed, increasing and decreasing like a heart rate. Finally, the spinning was produced using cosine and sine along with height changes, making the circle appear to move into the distance.
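For reference, a tiny sketch of that pulse-and-spin technique; the constants below are arbitrary, not the values from the film.

// Pulsing radius ("heart rate") plus a sine/cosine orbit.
function setup() {
  createCanvas(720, 480);
  noStroke();
}

function draw() {
  background(0);
  const t = frameCount * 0.05;
  const r = 60 + sin(t * 2) * 25;        // radius grows and shrinks like a heartbeat
  const x = width / 2 + cos(t) * 150;    // circular motion from cosine and sine
  const y = height / 2 + sin(t) * 60;    // the changing height reads as depth
  fill(255, 0, 0);
  ellipse(x, y, r);
}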

Scenes 8 and 9


In the film, I established a form of communication between the circles through the ever-changing size of their radii. When the two shapes were in the process of morphing into one colour, I layered the two circles on top of each other so the combined colour could begin to form. I had them moving back and forth across the screen to mimic time passing. I also used a fade effect to allow the past scene to still be present for a small moment at the start of a new scene; this cinematic feature helped better portray the passing of time.
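That fade can be approximated with a translucent background, as in the small sketch below; the colours and motion are placeholders, not the scenes from the film.

// A low-alpha background only partially erases each frame, so the previous
// moment lingers briefly, like the fade described above.
let t = 0;

function setup() {
  createCanvas(720, 480);
  noStroke();
}

function draw() {
  background(0, 25);                      // low alpha: past frames fade out slowly
  t += 0.02;
  const x = width / 2 + sin(t) * 250;     // back-and-forth drift to suggest time passing
  fill(255, 0, 0, 150);                   // two layered circles beginning to blend
  ellipse(x, height / 2, 80);
  fill(255, 255, 255, 150);
  ellipse(x + 20, height / 2, 80);
}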

Scene 10


Once the circles finally blended colours, I used a for loop again to fill up the screen and conclude the film.

Canvas Exploration (How to Create a Movie on Canvas)

As I began exploring the creation of an animated film, I thought it would be interesting to see if I could get an entire movie to play on the canvas using timers. Going into the project, my last resort was to use Premiere Pro. My very first test with the timer involved playing back the first scene twice. This is when I realized that all of the timers start counting from the moment you hit refresh. When I play the exact scene back and trigger the second timer at the moment the first scene is done, it does some funny things: three quarters of the way through the first scene, the movement of the three circles speeds up, and when the second scene begins playing, the circles keep moving at the new, faster pace. On loop, the circles continue to move at that faster pace.

 

Timer Experiment Code:

let timer = 2;   // scene 1 starts once this countdown reaches 0
let timer1 = 5;  // scene 2 is meant to start once this one reaches 0
var Xplace = 0;

function setup() {
  createCanvas(720, 480);
}

function draw() {
  background(0);

  // Decrement the first timer every 20 frames.
  if (frameCount % 20 == 0 && timer > 0) {
    timer--;
  }
  if (timer == 0) {
    // Scene 1: the red family slides left across the canvas.
    Xplace = Xplace - 1;
    if (Xplace < 0) {
      Xplace = width;
    }
    // red baby character
    ellipse(Xplace - 85, 300, 65);
    fill(255, 0, 0);

    // red parents
    ellipse(Xplace - 30, 200, 100);
    fill(255, 0, 0);
    ellipse(Xplace - 140, 200, 100);
    fill(255, 0, 0);
  }

  // Second timer check (note: timer1 is never decremented here).
  if (frameCount % 20 == 0 && timer1 > 0) {
  }
  if (timer1 == 0) {
    // Scene 1 played back a second time.
    Xplace = Xplace - 1;
    if (Xplace < 0) {
      Xplace = width;
    }
    // red baby character
    ellipse(Xplace - 85, 300, 65);
    fill(255, 0, 0);

    // red parents
    ellipse(Xplace - 30, 200, 100);
    fill(255, 0, 0);
    ellipse(Xplace - 140, 200, 100);
    fill(255, 0, 0);
  }
}

 

To attempt to solve this problem, I tried adding an additional frameCount condition to the problem area. The only sweet spot I found was at a rate of 30, which gave me the same scene twice. But was my code now functioning properly, or was it just playing the results of the first timer on repeat? I decided to create a timer producing the second scene. Unfortunately, it presented no errors in the console but did not play the first scene followed by the second; rather, it produced the first scene on loop. There was also a visual error, a white flash at the beginning of scene one. I wasn't sure whether this white flash on the circles had something to do with the white circles in scene two or not.

 

Timer Experiment Code 2

let timer = 2;   // scene 1 countdown
let timer1 = 5;  // scene 2 countdown
var Xplace = 0;

function setup() {
  createCanvas(720, 480);
}

function draw() {
  background(0);

  // Decrement the first timer every 20 frames.
  if (frameCount % 20 == 0 && timer > 0) {
    timer--;
  }
  if (timer == 0) {
    // Scene 1: the red family slides left across the canvas.
    Xplace = Xplace - 1;
    if (Xplace < 0) {
      Xplace = width;
    }
    // red baby character
    ellipse(Xplace - 85, 300, 65);
    fill(255, 0, 0);

    // red parents
    ellipse(Xplace - 30, 200, 100);
    fill(255, 0, 0);
    ellipse(Xplace - 140, 200, 100);
    fill(255, 0, 0);
  }

  // Second timer check (note: timer1 is never decremented here).
  if (frameCount % 20 == 0 && timer1 > 0) {
  }
  if (timer1 == 0) {
    // Scene 2: the white family moves in the opposite direction.
    Xplace = Xplace + 1;
    if (Xplace < 0) {
      Xplace = width;
    }
    // white baby character
    ellipse(Xplace - 85, 300, 65);
    fill(255, 255, 255);

    // white parents
    ellipse(Xplace - 30, 200, 100);
    fill(255, 255, 255);
    ellipse(Xplace - 140, 200, 100);
    fill(255, 255, 255);
  }
}

 

Another concern of mine was the film lacking basic cinematic features that can be found in a movie editor, such as scene-to-scene transitions. I was also worried about getting exact timing with the chosen audio. The audio I chose was a very important aspect of the piece and was a big help in guiding the storyline. For that reason, I chose to use video editing software to enhance the final piece and story.
Final Thoughts:

Overall, I want to be very honest with these final thoughts. I believe I was successful in achieving what I set out to do; however, I feel that I missed the bar on this overall project, which I did not realize until I saw the explorations of my classmates. I spent a lot of time on this project and on presenting a story, and I was very set on creating something complete, within my comfort zone, and that I could visually see working. I do believe I succeeded at producing exactly that, and I did enhance my knowledge of animating and filmmaking on the canvas. However, after reviewing the works of others, I think I would have liked to try creating something a little less complete and less familiar to me. I may have misinterpreted the instructions and missed out on a greater learning opportunity. All in all, I think that itself was the learning opportunity for me. It will definitely change how I move forward with projects in this class and in my career.

Experiment 1 Final Prototype – Michael Shefer, Andrew Ng-Lun

Text-To-Speech

Michael Shefer (3155884) Andrew Ng-Lun (3164714)

For our concept, we wanted to tackle the possibility of text-to-speech through the representation of a synthetic being speaking to the audience. We drew influence from futurists who perceive AI as a possible threat. To represent this, we decided to visualize a face with unsettling features that speaks in a monotone, similar to previous fantasy representations of a personified AI. Essentially, the prototype runs like this: the user types anything from numbers to words and sentences into the text box, and after they press the enter key, the face speaks through animation. We used the p5.play library to visualize the face, eyes, and mouth movement, and the p5.speech library for the audio and the text-to-speech.
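A minimal sketch of that input-to-speech flow, assuming the p5.Speech object from p5.speech.js; the face is reduced to basic shapes, and the mouth timing is a rough guess based on text length rather than being synced to the audio.

// Type into the box, press Enter, and the face "speaks" the text.
let speech;
let input;
let speakingFrames = 0;

function setup() {
  createCanvas(400, 400);
  speech = new p5.Speech();            // from p5.speech.js
  input = createInput('');
  input.position(20, 420);
  rectMode(CENTER);
}

function keyPressed() {
  if (keyCode === ENTER) {
    const txt = input.value();
    if (txt.length > 0) {
      speech.speak(txt);
      speakingFrames = txt.length * 5; // crude estimate of speaking time
      input.value('');
    }
  }
}

function draw() {
  background(30);
  fill(200);                           // face
  ellipse(width / 2, height / 2, 300);
  fill(0);                             // eyes
  ellipse(width / 2 - 60, height / 2 - 40, 30);
  ellipse(width / 2 + 60, height / 2 - 40, 30);
  // Mouth opens and closes while "speaking".
  const mouthOpen = speakingFrames > 0 ? 20 + sin(frameCount * 0.5) * 15 : 5;
  rect(width / 2, height / 2 + 70, 100, mouthOpen);
  if (speakingFrames > 0) speakingFrames--;
}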

The project itself went through many phases and alterations; text-to-speech wasn't our initial starting point. We originally started with the concept of a series of animated faces reacting to the tune of music: if the music was mellow, the face would look different than it would for an upbeat song. We had to scrap this concept after encountering difficulties with the microphone, as it is limited to picking up specific frequencies.

Rationale of the Programming Language

Our group decided to use p5.js for our project since we were introduced to it on the first day of class. Since then, we have found that p5.js is very flexible and excels at animating objects. Our idea for the final project was based on the five experiment assignments, where we discovered the p5 library and the vast range of features it unlocks for the canvas. Therefore, we decided to use those add-ons to animate an AI interface. Our code is based on two major add-ons: p5.play.js and p5.speech.js.


https://photos.app.goo.gl/4qK5wGrkzB3EwpZ76

The video above shows where we first started with our concept: two rough animations representing the emotions that were going to react to different music frequencies.


Above is the final image of our prototype with the visualized AI and text box for the audience to input a statement.

Code on GitHub (references included):

https://github.com/NDzz/Final_Assignemnt-AI

Experiment 1: Cubical Vortex Effect – Jin Zhang & Siyue Liang

Experiment I: Cubical Vortex Effect

Jin Zhang(3161758) & Siyue Liang(3165618)

09.27.2018

Atelier I

Github Link:

https://github.com/haizaila/Experiment1


 

We were interested in the examples of sound-controlled animation shown in class, so we wanted to experiment more with it. Based on this idea, we tried to drive our 3D shapes with either an audio file or the computer microphone.

The reason we chose to work with HTML5 and JavaScript is that we had similar lessons last year, so we had already learned some of the basics and were comfortable working with them.

 

This is what we made in the very beginning.  Cubes and cones are displayed from far to near by adding the z-axis.  It creates a tunnel-like shape and looks very cool.


 

 

Our original thought was to input an audio file and make the animation react to the music. We tried using "loadSound" and "analyzer.getLevel" to turn the music into a value for the animation, but it didn't work because the audio file couldn't be loaded properly for some reason. So we went back to using "micLevel". However, because the microphone records every tiny acoustic signal it picks up, the animation wouldn't move as orderly; it is very easily disrupted by outside noise.

 

 

After that, we just played around with the code more and got some of these random effects. Some of them worked and some of them did not. Then we added a few if statements to make the shapes switch to cones when the mouse is pressed, just to make it more fun and interactive.


We tried to change the "rotate" code in order to make the cubes and cones rotate at different angles individually. However, they would only rotate at one angle as a whole, for reasons we couldn't figure out.

 

I really liked this one because it has a cool futuristic, space-like feel to it. We got this by setting the diameter of the cones equal to the mic value; when there is no sound, the cones become these rod-like shapes.


 

In our final version, "micLevel" controls the stroke colour, the size, and the rotation of the shapes, while mouseX and mouseY control their stroke and movement.
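A condensed sketch of that final mapping is below: boxes placed along the z-axis form the tunnel, the mic level drives size, stroke colour, and rotation, and the mouse shifts the whole group. The constants are approximations, not the originals.

// Mic-reactive tunnel of boxes (WEBGL). Click once so the browser allows the mic.
let mic;

function setup() {
  createCanvas(720, 480, WEBGL);
  mic = new p5.AudioIn();
  mic.start();
  noFill();
}

function mousePressed() {
  userStartAudio(); // browsers need a user gesture before audio input starts
}

function draw() {
  background(0);
  const level = mic.getLevel();            // roughly 0.0 - 1.0
  stroke(map(level, 0, 0.3, 50, 255), 100, 200);
  rotateY(frameCount * 0.01 + level * 5);  // louder input spins the tunnel faster
  for (let z = -800; z < 0; z += 100) {    // boxes from far to near along the z-axis
    push();
    translate(map(mouseX, 0, width, -100, 100),
              map(mouseY, 0, height, -100, 100),
              z);
    box(40 + level * 400);                 // size swells with the mic level
    pop();
  }
}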


 

It's a really interesting process to see the result change by playing with different elements. We had a lot of fun doing this project together, and we would definitely keep exploring sound-interactive art.

 

 

Reference/Inspiration

This sketch is from OpenProcessing; it inspired the idea of creating 3D effects and exploring the beauty of geometric shapes.

We referenced the part where a for loop is used to create shapes and continually translate them to different positions according to the mouse coordinates.

https://www.openprocessing.org/sketch/494306

Experiment 1: Interactive Audio Visualizer

Vijaei Posarajah, 3163608

 

Github link:  https://github.com/Vijaei/Experiment1-Interactive-Audio-Visualizer

(Issues running in Chrome; run the HTML file in Firefox or MS Edge.)

Programming Language: JavaScript(p5.js and sound.min.js)

For this project, I decided to expand upon the Atelier tutorials, focusing on the implementation of p5.js. My goal was to create an interactive audio visualizer where the user can upload their own tracks and interact with the visuals.

Project Description: 

The very beginning of the project focused on the in-class tutorial based around a circle visualizer, and on a circular graph tutorial I found on YouTube: https://www.youtube.com/watch?v=h_aTgOl9J5I&list=PLRqwX-V7Uu6aFcVjlDAkkGIixw70s7jpW&index=10

The final design was based on a tutorial by Yannis Yannakopoulos, which further explains the p5.sound library and the use of the FFT (Fast Fourier Transform) algorithm.

Creative Audio Visualizers


This eventually led to the current iteration of the Interactive Audio Visualizer, which is composed of three rings of dots of various sizes that follow the bass, mid, and treble of the audio track. Much like the circle visualizer, the three rings change size according to the level of bass, mid, or treble. On top of this, a fourth circle composed of lines responds to the bass and is interactive based on the user's mouse position on the canvas; the lines mimic a camera-shutter motion that rotates in place. Below the audio visualizer are play and pause buttons, along with an upload button that allows the user to upload their own track to be visualized. The design theme of the visualization is based on a spring-bloom concept, with matching colours and floral motifs.
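A reduced sketch of the three-ring idea using p5.sound's FFT energy bands; the upload button, the shutter circle, and the floral styling are left out, and 'track.mp3' is a placeholder path rather than the file used in the project.

// Each ring of dots scales with the energy of one frequency band.
let song, fft;

function preload() {
  song = loadSound('track.mp3');   // placeholder: swap in any audio file
}

function setup() {
  createCanvas(600, 600);
  fft = new p5.FFT();
  noStroke();
}

function mousePressed() {
  if (song.isPlaying()) song.pause();   // simple play/pause toggle
  else song.play();
}

function drawRing(baseRadius, energy, dotSize, col) {
  const r = baseRadius + map(energy, 0, 255, 0, 80); // ring grows with its band
  fill(col);
  for (let a = 0; a < TWO_PI; a += TWO_PI / 36) {
    ellipse(width / 2 + cos(a) * r, height / 2 + sin(a) * r, dotSize);
  }
}

function draw() {
  background(255, 240, 245);
  fft.analyze();                         // must run before getEnergy()
  drawRing(80, fft.getEnergy('bass'), 12, color(220, 80, 120));
  drawRing(140, fft.getEnergy('mid'), 8, color(120, 180, 120));
  drawRing(200, fft.getEnergy('treble'), 5, color(120, 120, 220));
}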

Sketch Documentation:


Code Documentation: 


References:

Creative Audio Visualizers by Yannis Yannakopoulos: https://tympanus.net/codrops/2018/03/06/creative-audio-visualizers/

7.10: Sound Visualization: Radial Graph – p5.js Sound Tutorial  https://www.youtube.com/watch?v=h_aTgOl9J5I&list=PLRqwX-V7Uu6aFcVjlDAkkGIixw70s7jpW&index=10

https://p5js.org/reference/#/libraries/p5.sound

 

 

 

 

Experiment 1 – Untitled – Salisa Jatuweerapong

PROJECT SUMMARY:

Untitled is a series of VR experiments of varying degrees of success, with the original aim of creating a VR music video and learning a VR workflow. While the former was not accomplished, the latter was somewhat achieved; I researched several types of VR workflows and semi-successfully implemented two on Google Cardboard: big bang (p5.vr) and AYUTTHAYA (Unity). big bang takes you to an indefinite point in surrealistic low-poly space, while the eponymous AYUTTHAYA places you in Thailand on a cloudy day.

PROCESS:

My initial idea was to create a VR music video for Halsey's Gasoline (Tripled Layered) (https://www.youtube.com/watch?v=fEk-9bOqvoc). Working with glowing, smoky, audio-reactive spheres (I probably would have created a particle system and then brought in the p5 sound libraries), I wanted to create one animated sphere, give it personality, then make it grow and rush at the viewer, crowding them in as the audio grows louder. This would then be reflected three times around the viewer, with each sphere following one layer of the audio track. At one point in the song, they'd all transform into snake women (I was going to model them in Blender, but now I'm inspired by Melissa's snakes to try something in p5). I also wanted to explore 3D sound (which would have tied in with the triple-layered audio), but did not have enough time for that. I feel like 3D sound is essential for VR spaces.

My second concept, born as my time dwindled, was a world where every object is substituted by its corresponding noun (for example, a sky, but instead of blue and clouds, just the word "sky" floating in the air) that would be read out to you once it entered your line of sight. This was a half-formed idea inspired by accessible, blind-friendly image descriptions on the internet; though rather than designing for the blind, I suppose it would be more to show how blind people "see" the world: through a disembodied computer voice telling them what the view is.

Before I could execute these concepts, however, I needed to just get a VR workflow HAPPENING. This ended up being rather difficult and took up the majority of my time.

I initially planned to use the p5.vr library (https://github.com/bmoren/p5.vr), but upon testing, it was incompatible with Android (VRorbitcontrol didn't work on the X axis and the display would not show up properly). I also had trouble hosting it on the Chrome web server and on webspace, but shelved that.

I started searching for other ways to code VR, or to turn a 3D environment into VR, and stumbled upon webVR. I researched it further and liked its concept, so I downloaded it and looked through how to create an app with webVR. I also read up on the webVR polyfill. Following this tutorial (https://developers.google.com/web/fundamentals/vr/getting-started-with-webvr/), I tried to integrate webVR into an existing p5.js WEBGL sketch I had. It didn't work due to an incompatibility between the webVR polyfill and p5.js.

While researching webVR I also found three.js, and really loved the project examples hosted on their site, especially this one (https://demos.littleworkshop.fr/track). Trying it out (after figuring out that I needed to disable some Chrome flags first) was what convinced me to give webVR a shot.

I downloaded three.js and was looking through some tutorials (https://www.sitepoint.com/bringing-vr-to-web-google-cardboard-three-js/) when Adam suggested I try Unity instead. After spending a few hours learning how to navigate Unity through the basic tutorials on their website, I followed this (slightly outdated) tutorial (https://medium.freecodecamp.org/how-to-make-a-360-vr-app-with-unity-51cbe41ad8f1) to make the VR. I also looked up shaders during this time. Making the VR work in Unity was actually pretty simple, though I had a LOT of trouble building and running the APK. I had a lot of issues following this tutorial (https://docs.unity3d.com/Manual/android-sdksetup.html); I'm still not sure what was going wrong. I tried the command-line version first, but it didn't work, so I downloaded Android Studio, and I had some issues with that too.

I have so many SDKs downloaded now.

At this point I was running out of time, so I switched back to p5.vr since it was supposed to work on iOS and I figured I could borrow an iPhone in class. Spoiler: it still wasn't working. I don't have an iPhone with me, so I wasn't able to investigate the issue further after class, but for some reason, even though it works fine on desktop, the mobile VR shows up with a large gap in the stereocanvas.

big bang notes:


The p5.vr library doesn't open a lot of doors for interaction in its VR environment, which disappointed me, as I value interaction a lot. I tried to counter that by positioning a directional light at your POV, adjusted towards whatever direction you were looking, and then placing planar materials that would disappear without proper lighting. This created a sort of pseudo-interaction where viewers had to work to see the plane.

I created a simple space environment with stars, then was inspired by walking code I'd seen in three.js demos, as well as Star Wars and space operas, to create a sort of warp-drive effect by translating the stars. While the feel I created reads as the stars moving rather than the viewer moving, I still thought it was sort of cool how they collected at a single point, and that inspired the idea of the big bang.

Finally, I reset the code every 50 seconds, because the big bang doesn't happen just once. There's probably a more contained, seamless way to do it than my goOut() code, but it worked.
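For reference, a rough, non-VR approximation of that warp-and-reset idea in plain p5.js WEBGL; the periodic reset below is a stand-in for the original goOut() code, not a copy of it.

// Stars are translated along the z-axis so they appear to rush past the viewer,
// and the whole field resets every 50 seconds.
let stars = [];
let startTime = 0;

function resetStars() {
  stars = [];
  for (let i = 0; i < 400; i++) {
    stars.push({ x: random(-width, width), y: random(-height, height), z: random(200, 2000) });
  }
  startTime = millis();
}

function setup() {
  createCanvas(720, 480, WEBGL);
  resetStars();
}

function draw() {
  background(0);
  stroke(255);
  strokeWeight(4);
  for (const s of stars) {
    s.z -= 15;                    // moving the stars, not the camera
    if (s.z < 1) s.z = 2000;      // recycle stars that pass the viewer
    push();
    translate(s.x, s.y, -s.z);
    point(0, 0, 0);
    pop();
  }
  if (millis() - startTime > 50000) resetStars(); // the "big bang" repeats
}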

I was also inspired by this year's Nuit Blanche theme of You Are Here, and by Daniel Iregui's response to it with their installation, Forward. The sped-up time in my VR work and the looping animation allude to the presence of time and how the future is always out of reach. That's also reflected by the half-present planar window, always too far ahead of you.

AYUTTHAYA notes:

“Wow, it feels like I’m actually there!” – anonymous OCAD student

I'm really, really fond of my home country, and that shows quite frequently in my work. Ayutthaya, dubbed (by me) Thailand's collection of mini Leaning Towers of Pisa, is one of Thailand's oldest historic sites. I visited this past summer (2018), and the sense of history in the air is palpable. I'm not sure this VR experience replicates that by any means, but it at least shows people that Ayutthaya exists.

Honestly, this was more of a test than anything; I'd need to revisit it and create some interaction or movement. I believe Unity is the right way to go for VR environments, and I would continue using it now that I've got it working. It has a lot of functionality, and I'd be able to easily place 3D objects in the environment. One thing I'm still having trouble with is that the video won't loop/play properly.

Video downloaded from here: https://vimeo.com/214401712

LINKS to FINAL PROJECTS:

big bang (mobile VR): https://webspace.ocad.ca/~3161327/p5.vr/examples/teapot_city/index.html

big bang (desktop VR (mouse control)): https://webspace.ocad.ca/~3161327/p5.vr/examples/teapot_city/index2.html

big bang files: https://github.com/salisajat/e1-big-bang *I’d accidentally worked straight inside a cloned repository of p5.vr and was unable to push that code or keep any of my commits when I remade the folder :/

AYUTTHAYA, Thailand (Unity): https://github.com/salisajat/VR-AYUTHAYA-TEST

code scraps that did not work (includes webVR + p5.js, webVR, early p5.vr experimentation): https://github.com/salisajat/E1-scraps

Additional process documentation: 

halsey concept
big bang process; webgl error?
[resolved] android sdk + java sdk PATH issue in unity
[resolved] persisting problem in building apk in unity
snippets of my google search history
their name is Car D. Board