Experiment 1: Mushrooms, Ferns and Grass

Francisco Samayoa, Isaak Shingray, Annie Zhang
Nick Puckett
Atelier II
January 28, 2019
Project Description

Our project – Mushrooms, Ferns and Grass – was inspired by the book of the same name. It is essentially an interactive painting with an ever-changing aesthetic. With the user’s input, the painting shifts, adds glitch effects, changes textures, and switches colours, all with an audio track playing in the background: Pass the Hours by Toronto’s own Mor Mor. It is supposed to invoke emotion, similar to that of a psychedelic trip. With each development, the experience feels like a descent down the rabbit hole. With a name like that, how could we not be inspired by Lewis Carroll’s Alice in Wonderland?

The project was created using p5.js and utilizes PubNub. One computer runs the sketch, while two other computers control the effects on screen, similar to an installation in a public space. One input translates the background image from left to right using the mouseX and mouseY coordinates; another changes the colour tint on screen using the arrow keys. The secret is in the arrangement of each function: since we do not clear the screen after every function, the result is an amalgamation of several functions all coalescing into one surreal painting. This was not intentional at first, but through careful experimentation we were able to birth a new creation, one better than our previous ideas.
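For anyone curious how that networking holds together, here is a hypothetical sketch of the display-side wiring, assuming the PubNub v4 JavaScript SDK; the keys, channel name, and image file are placeholders rather than our group’s actual values.

```javascript
// Display-side sketch (hypothetical): receives controller messages and layers
// the image without ever clearing the canvas.
let img;
let offsetX = 0;
let tintCol;

const pubnub = new PubNub({
  publishKey: 'pub-c-placeholder',   // placeholder key
  subscribeKey: 'sub-c-placeholder', // placeholder key
  uuid: 'display',
});

function preload() {
  img = loadImage('mushrooms.jpg'); // placeholder asset name
}

function setup() {
  createCanvas(800, 600);
  background(0); // drawn once; never cleared again
  tintCol = color(255);

  pubnub.addListener({
    message: (event) => {
      // One controller publishes mouse coordinates, the other arrow-key tints.
      if (event.message.x !== undefined) {
        offsetX = event.message.x;
      }
      if (event.message.tint !== undefined) {
        tintCol = color(event.message.tint);
      }
    },
  });
  pubnub.subscribe({ channels: ['painting'] });
}

function draw() {
  // No background() call here, so each frame layers over the last and the
  // shifted, tinted copies of the image pile up into the surreal result.
  tint(tintCol);
  image(img, offsetX - width / 2, 0);
}

// A controller sketch would publish small messages such as:
//   pubnub.publish({ channel: 'painting', message: { x: mouseX } });
```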


Rationale

At first we had our sights set on using Unity. It seemed like a long shot, but we felt the end justified the means. Had we used Unity, we would have had an immersive 3D environment that used the book to its fullest potential. Imagine being able to walk around and interact with different mushrooms and trees. However, we hit various walls due to our inexperience. Around the halfway mark, we decided to switch over to p5.js. It was better to use what we already knew, since we realized we had bitten off more than we could chew. While we couldn’t create a 3D environment, we approached it differently: we had a canvas on which to “paint a picture”, essentially. From there we delegated the tasks and merged our work into one surreal, interactive painting. In the end we simply experimented with different functions, and the result is something we’re proud of.


Photos & Video



Code

https://github.com/avongarde/Atelier-II/tree/master/Assignment%201


Project Context

Van Hemert, Kyle. “These Psychedelic Paintings Were Made Entirely From Code.” Wired, Conde Nast, 10 July 2018, www.wired.com/2013/10/psychedelic-digital-paintings-made-entirely-with-code/.

This project helped guide our aesthetic for the final product. Ferris’s paintings are in many ways similar to ours. His glitchy effect was achieved with a Perlin noise function, whereas we opted for more basic methods like offset functions and stacking objects in different orders. If we were to continue working on our project, we would incorporate a Perlin noise function for a more surreal effect.
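As a rough illustration of that next step, here is a minimal sketch of a Perlin-noise offset; the image file and magnitudes are placeholders, not code from our actual project.

```javascript
// Slice the image into horizontal strips and push each strip sideways by a
// noise() amount that drifts over time, producing a wavy, glitchy tearing.
let img;

function preload() {
  img = loadImage('painting.jpg'); // placeholder
}

function setup() {
  createCanvas(800, 600);
}

function draw() {
  for (let y = 0; y < height; y += 10) {
    const srcY = floor((y / height) * img.height);
    const srcH = ceil((10 / height) * img.height);
    const shift = (noise(y * 0.01, frameCount * 0.02) - 0.5) * 80;
    copy(img, 0, srcY, img.width, srcH, floor(shift), y, width, 10);
  }
}
```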

“Human/AI Collaboration.” Deep Dream Generator, deepdreamgenerator.com/.

We stumbled upon this website on our quest for inspiration: the infamous Deep Dream Generator, built around the DeepDream neural network that came out of Google. Google’s AI was notorious for creating artwork that sold for thousands of dollars. The realm of possibility for AI collaboration is immense. The website offers a variety of styles to tap into, like the Deep style or the more psychedelic Dream style. If we hadn’t proceeded with Unity or p5, we would have utilized the website’s tools to bring our vision to fruition. Based on our initial Alice in Wonderland theme, we would have experimented with the Dream style.

Experiment 1: Ripples Playground – Brian Nguyen

Ripples Playground

Brian Nguyen, 3160984

Re-posting this on the correct blog.

Code: https://github.com/notbrian/Atelier-Final-Prototype

Live Demo: https://notbrian.github.io/Atelier-Final-Prototype/index.html

The Ripples Playground is a fun interactive sketch meant to mesmerize the user by generating colorful, aesthetically pleasing, expanding ‘ripples’ on the screen from the user’s mouse input. Each ripple’s speed and color are randomized within a range. There is also a second variant, a background ripple, created on right click; it is opaque and filled with either white or black. I added this because it gives a kind of wiping-the-screen look, and it looked pretty trippy.

Alongside this, each ripple generates its own unique oscillation frequency based on its speed. This gives them a slight resemblance to sound or radio waves.
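Here is a rough sketch of how a ripple like this could be drawn; the class name, wave math, and ranges are my guesses, not necessarily what the linked repo does.

```javascript
// Each ripple expands at a random speed and wobbles at a frequency tied to
// that speed, so faster ripples look like higher-frequency waves.
class Ripple {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.r = 0;
    this.speed = random(2, 6);                       // randomized within a range
    this.freq = round(map(this.speed, 2, 6, 4, 12)); // oscillation tied to speed
    this.col = color(random(255), random(255), random(255));
  }

  update() {
    this.r += this.speed; // expand outward each frame
  }

  show() {
    noFill();
    stroke(this.col);
    beginShape();
    // Perturb the ring's radius with a sine term so it reads like a waveform.
    for (let a = 0; a < TWO_PI; a += 0.05) {
      const rr = this.r + 6 * sin(a * this.freq + frameCount * 0.1);
      vertex(this.x + rr * cos(a), this.y + rr * sin(a));
    }
    endShape(CLOSE);
  }
}

let ripples = [];

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function draw() {
  background(0);
  for (const rp of ripples) {
    rp.update();
    rp.show();
  }
  // Drop ripples that have expanded past the canvas.
  ripples = ripples.filter((rp) => rp.r < width);
}

function mousePressed() {
  ripples.push(new Ripple(mouseX, mouseY));
}
```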

I tried to add a radio-button switcher for the oscillation type, but since I only got the idea at the last minute while experimenting with the types, I couldn’t get it fully implemented or cleaned up. You can view the sine and triangle buttons at the bottom of the page.

I chose p5.js for this project because it makes the sketch accessible on lots of devices without requiring them to download any software, and it runs smoothly on many of them. For example, if I had used Processing I would need to run it in the Processing editor. It also allows me to build the UI using HTML elements.

The context and idea for this project came from wanting to experiment with and expand on my Sketch_1, which drew colorful ellipses on the page. After I got them to ripple and expand across the page, I thought they kind of looked like sound/radio waves and experimented with the oscillator.

Experiment 1: Clap Powered Fireworks – Kiana Romeo

Kiana Romeo 3159835

Clap Powered Fireworks!

Description 

For this project, I wanted to create something that would interact with sound, but I did not want to create the same generic DJ-type sound-reactive system that has been seen so many times before. Instead, I wanted whatever was going on on the screen to be controlled by the user, and the sound made by the user to directly correlate to the resulting animation. I wanted the project to focus on loud sounds, so I started brainstorming things that a clap would be able to imitate. Instead of the action causing the sound, I wanted the sound to cause the action!

At first, I debated whether making lightning would be a better option. With this, I would make an environment in which random lightning strikes were created when the user clapped their hands, thus mimicking the sound of lightning. But as a simulation of nature it would have been too predictable, so I chose to make fireworks instead. This way, I could change colour, size, velocity and shape, as fireworks are manmade and the way they look does not need to be a certain way in order to recognize what they are.

Github link: Clap-powered-fireworks

Rationale

Although I could have successfully programmed the sketch in either program, I chose to work with p5.js instead of Processing because connecting and using the microphone to control the sketch was much more straightforward, and I still got the same desired effect. As well, linking multiple JavaScript files to one HTML file felt much easier than linking multiple files in Processing. I created the sketch in both programs (with help from Dan Shiffman tutorials, of course!) and preferred the output of the p5 version much more.
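A minimal sketch of that clap trigger, assuming the p5.sound microphone input; the threshold value and the simple burst() helper below are stand-ins for the full Shiffman-style particle-system fireworks.

```javascript
// Listen to the microphone and launch a burst of sparks whenever the overall
// loudness spikes past a threshold (i.e. a clap).
let mic;
let sparks = [];

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start(); // the browser will ask for microphone permission
}

function draw() {
  background(0, 25);            // translucent background leaves trails
  const level = mic.getLevel(); // overall loudness, roughly 0.0 to 1.0
  if (level > 0.3) {            // a clap spikes well past quiet room noise
    burst(random(width), random(height / 2));
  }

  for (const s of sparks) {
    s.pos.add(s.vel);
    s.vel.y += 0.05; // a little gravity
    noStroke();
    fill(s.col);
    ellipse(s.pos.x, s.pos.y, 4);
  }
  sparks = sparks.filter((s) => s.pos.y < height);
}

// Spawn a ring of sparks at (x, y); a stand-in for a real firework class.
function burst(x, y) {
  const col = color(random(255), random(255), random(255));
  for (let i = 0; i < 40; i++) {
    sparks.push({
      pos: createVector(x, y),
      vel: p5.Vector.random2D().mult(random(1, 4)),
      col,
    });
  }
}
```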


References:

https://www.youtube.com/watch?v=CKeyIbT3vXI&index=30&list=PLRqwX-V7Uu6ZiZxtDDRCi6uhfTH4FilpH (Coding Challenge #27: fireworks)

https://www.youtube.com/watch?v=aKiyCeIuwn4&t=679s&frags=pl%2Cwn (Coding Challenge #41: Clappy Bird)

https://www.youtube.com/watch?v=q2IDNkUws-A (17.8: Microphone Input – p5.js Sound Tutorial)

I relied heavily on these three videos to create my project. Many hours were spent simply watching them until the code made sense, and then typing out any code that was given to help with my project. Through watching these videos I learned a lot about particle systems, arrays, and functions, as well as about using the p5 sound library to make interesting, interactive artwork.

Experiment 1 Final Prototype – Michael Shefer, Andrew Ng-Lun

Text-To-Speech

Michael Shefer (3155884), Andrew Ng-Lun (3164714)

For our concept, we wanted to tackle the possibility of text-to-speech through a representation of a synthetic being speaking to the audience. We drew influence from futurists who perceive AI as a possible threat. To represent this, we decided to visualize a face with unsettling features that speaks in monotone, similar to previous fantasy representations of a personified AI. Essentially, the prototype runs like this: the user inputs anything from numbers to words and sentences into the text box, and after pressing the enter key, the face speaks through animation. We used the p5.play library to visualize the AI’s face, eyes, and mouth movement, and the p5.speech library for the audio aspect and the text-to-speech.

The project itself went through many phases and alterations. Text-to-speech wasn’t our initial starting point. We started off with the concept of a series of animated faces reacting to the tune of music: if the music was mellow, the face would look different than it would for an upbeat song. We had to scrap this concept after encountering difficulties with the microphone, as it is limited to picking up on specific frequencies and tunes.

Rationale of the Programming Language

Our group decided to use the popular p5.js library for our project since we were introduced to it on the first day of class. Since then, we have found p5 to be very flexible and excellent at animating objects. Our idea for the final project was based on the five experiment assignments in which we discovered the p5 library and the vast range of features it unlocks for the canvas. Therefore, we decided to use those add-ons to animate an AI interface. Our code is based on two major add-ons, p5.play.js and p5.speech.js.
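A stripped-down sketch of the text-to-speech wiring, assuming the p5.speech add-on; the face animation itself (the p5.play sprites) is omitted here, and the element positions are arbitrary.

```javascript
// The audience types into a text box; pressing enter makes the "AI" speak
// the phrase aloud with p5.speech.
let voice;
let inputBox;

function setup() {
  createCanvas(600, 400);
  voice = new p5.Speech();    // p5.speech synthesizer
  inputBox = createInput(''); // text box the audience types into
  inputBox.position(20, 420);
}

function keyPressed() {
  if (keyCode === ENTER) {
    const phrase = inputBox.value();
    voice.speak(phrase); // read the input aloud in a flat, synthetic voice
    // This is also where the mouth sprite's animation would be triggered.
    inputBox.value('');
  }
}

function draw() {
  background(30);
  // Placeholder face; the actual prototype animates p5.play sprites here.
  ellipse(width / 2, height / 2, 200);
}
```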


https://photos.app.goo.gl/4qK5wGrkzB3EwpZ76

The video linked above shows where we first started with our concept. We had two rough animations to represent the emotions that were going to react to different music frequencies.


The final prototype presents the visualized AI alongside a text box for the audience to input a statement.

Code on GitHub (references included):

https://github.com/NDzz/Final_Assignemnt-AI

Experiment 1: Interactive Audio Visualizer

Vijaei Posarajah, 3163608

Github link:  https://github.com/Vijaei/Experiment1-Interactive-Audio-Visualizer

(Issues running in Chrome; run the HTML file in Firefox or Microsoft Edge.)

Programming Language: JavaScript (p5.js and sound.min.js)

For this project, I decided to expand upon the Atelier tutorials, focusing on the implementation of p5.js. My goal was to create an interactive audio visualizer where the user can upload their own tracks and interact with the visuals.

Project Description: 

The very beginning of the project focused on the in-class circle visualizer tutorial and a circular graph tutorial I found on YouTube: https://www.youtube.com/watch?v=h_aTgOl9J5I&list=PLRqwX-V7Uu6aFcVjlDAkkGIixw70s7jpW&index=10

What led to the final design was a tutorial by Yannis Yannakopoulos, in which he further explains the p5.sound library and the use of the FFT (Fast Fourier Transform) algorithm.

Creative Audio Visualizers


This eventually led to the current iteration of the Interactive Audio Visualizer, which is composed of three rings of dots of various sizes that respond to the bass, mid, and treble of the audio track. Much like the circle visualizer, the three rings change in size according to the level of bass, mid, or treble in the audio track. On top of this, a fourth circle composed of lines responds to the bass and is interactive based on the user’s mouse position on the canvas; the lines mimic a camera shutter that rotates in place. Below the audio visualizer are play and pause buttons, along with an upload button that allows users to upload their own track to be visualized. The design theme of the visualization is based on a spring bloom concept with regard to colours and floral motifs.
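A condensed sketch of the FFT analysis behind the rings; the ring geometry, colours, and the ‘track.mp3’ filename are placeholders rather than the actual design.

```javascript
// p5.FFT exposes named frequency bands (bass, mid, treble) that map nicely
// onto the three rings of dots.
let song;
let fft;

function preload() {
  song = loadSound('track.mp3'); // placeholder; users upload their own file
}

function setup() {
  createCanvas(600, 600);
  fft = new p5.FFT();
  song.play(); // some browsers require a user click before audio starts
}

function draw() {
  background(255, 244, 240);
  fft.analyze();
  const bass = fft.getEnergy('bass');     // each value is 0-255
  const mid = fft.getEnergy('mid');
  const treble = fft.getEnergy('treble');

  translate(width / 2, height / 2);
  drawRing(100 + bass * 0.5, 12);
  drawRing(180 + mid * 0.5, 8);
  drawRing(250 + treble * 0.5, 5);
}

// Draw a ring of dots whose radius swells with its frequency band.
function drawRing(radius, dotSize) {
  noStroke();
  fill(220, 120, 160);
  for (let a = 0; a < TWO_PI; a += TWO_PI / 60) {
    ellipse(radius * cos(a), radius * sin(a), dotSize);
  }
}
```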

Sketch Documentation:


Code Documentation: 


References:

Creative Audio Visualizers by Yannis Yannakopoulos: https://tympanus.net/codrops/2018/03/06/creative-audio-visualizers/

7.10: Sound Visualization: Radial Graph – p5.js Sound Tutorial  https://www.youtube.com/watch?v=h_aTgOl9J5I&list=PLRqwX-V7Uu6aFcVjlDAkkGIixw70s7jpW&index=10

https://p5js.org/reference/#/libraries/p5.sound


Experiment 1 – Untitled – Salisa Jatuweerapong

PROJECT SUMMARY:

Untitled is a series of VR experiments with varying degrees of success, with the original aim of creating a VR music video and learning a VR workflow. While the former was not accomplished, the latter was somewhat achieved; I researched several types of VR workflows and semi-successfully implemented two on Google Cardboard: big bang (p5.vr) and AYUTTHAYA (Unity). big bang takes you to an indefinite point in surrealistic low-poly space, while the eponymous AYUTTHAYA places you in a cloudy day in Thailand.

PROCESS:

My initial idea was to create a VR music video for Halsey’s Gasoline (Tripled Layered) (https://www.youtube.com/watch?v=fEk-9bOqvoc). Working with glowing, smoky, audio-reactive spheres (I probably would have created a particle system and then brought in the p5 sound libraries), I wanted to create one animated sphere, give it personality, then make it grow and rush at the viewer, crowding them in as the audio grows louder. This would then be reflected three times around the viewer, with each sphere following one layer of the audio track. At one point in the song, they’d all transform into snake women (I was going to model them in Blender, but now I’m inspired by Mellisa’s snakes to try something in p5). I also wanted to explore 3D sound (which would have tied into the triple-layered audio), but did not have enough time for that. I feel like 3D sound is essential for VR spaces.

My second concept, born as my time dwindled, was a world where every object is substituted by its corresponding noun (for example, a sky, but instead of blue and clouds, just the word “sky” floating in the air), which would be read out to you once it entered your line of sight. This was a half-formed idea inspired by accessible, blind-friendly image descriptions on the internet; though rather than designing for the blind, I suppose it would be more about showing how blind people “see” the world: through a disembodied computer voice telling them what the view is.

Before I could execute these concepts, however, I needed to just get a VR workflow HAPPENING. This ended up being rather difficult and took up the majority of my time.

I initially planned to use the p5.vr library (https://github.com/bmoren/p5.vr), but upon testing, it was incompatible with Android (VRorbitcontrol didn’t work on the X axis and the display would not show up properly). I also had trouble hosting it on the Chrome web server and webspace, but shelved that.

I started searching for other ways to code VR, or to turn a 3D environment into VR, and stumbled upon webVR. I researched it further and liked its concept, so I downloaded it and looked through how to create an app in webVR. I also read up on the webVR polyfill. Following this tutorial (https://developers.google.com/web/fundamentals/vr/getting-started-with-webvr/), I tried to integrate webVR into an existing p5.js WEBGL sketch I had. It didn’t work due to incompatibility between the webVR polyfill and p5.js.

When I was researching webVR I also found three.js, and really loved the project examples hosted on their site, especially this one (https://demos.littleworkshop.fr/track). Trying it out (after figuring out I needed to disable some Chrome flags first) was what convinced me to try out webVR.

I downloaded three.js and was looking through some tutorials on using it (https://www.sitepoint.com/bringing-vr-to-web-google-cardboard-three-js/) when Adam suggested I try Unity instead. After spending a few hours learning how to navigate Unity through the basic tutorials on their website, I followed this (slightly outdated) tutorial (https://medium.freecodecamp.org/how-to-make-a-360-vr-app-with-unity-51cbe41ad8f1) to make the VR. I also looked up shaders during this time. Making the VR work in Unity was actually pretty simple, though I had a LOT of trouble building and running the APK. I had a lot of issues following this tutorial (https://docs.unity3d.com/Manual/android-sdksetup.html), and I’m still not sure what was going wrong. I tried the command-line version first, but it didn’t work, so I downloaded Android Studio, but I had some issues with that too.

I have so many sdks downloaded now.

At this point I was running out of time, and I switched back to p5.vr since it was supposed to work on Apple devices and I figured I could borrow an iPhone in class. Spoiler: it still wasn’t working. I don’t have an iPhone with me, so I wasn’t able to investigate the issue further after class, but for some reason, even though it works fine on desktop, the mobile VR shows up with a large gap in the stereocanvas.

big bang notes:


The p5.vr library doesn’t open a lot of doors for interaction in its VR environment, something I was disappointed by, as I value interaction a lot. I tried to counter that by positioning a directional light at your POV that adjusts towards whatever direction you are looking, and then placing planar materials that disappear without proper lighting. This created a sort of pseudo-interaction where viewers had to work to see the plane.
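A minimal sketch of that “light follows your gaze” trick, rebuilt in plain p5.js WEBGL with the mouse standing in for head orientation (p5.vr’s head tracking is not reproduced here).

```javascript
// A directional light is pointed along a pretend view direction; a plane lit
// only by that light vanishes into the black background when you look away.
function setup() {
  createCanvas(600, 400, WEBGL);
}

function draw() {
  background(0);
  noStroke();

  // Derive a pretend view direction from the mouse.
  const yaw = map(mouseX, 0, width, -HALF_PI, HALF_PI);
  const dir = createVector(sin(yaw), 0, -cos(yaw));

  // Shine a directional light along that "gaze".
  directionalLight(255, 255, 255, dir.x, dir.y, dir.z);

  // With no other light sources, the plane renders black (invisible) unless
  // the gaze-aligned light actually hits its front face.
  push();
  translate(0, 0, -200);
  fill(180, 120, 255);
  plane(150, 150);
  pop();
}
```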

I created a simple space environment with stars, then was inspired by walking codes I’d seen on three.js, Star Wars, and space operas to create a sort of warp-drive effect by translating the stars. While the effect I created reads as the stars moving rather than the viewer moving, I still thought it was sort of cool how they collected at a single point, and that inspired the idea of the big bang.

Finally, I reset the code every 50 seconds, because the big bang doesn’t happen just once. There’s probably a more contained, seamless way to do it than my goOut() code, but it worked.
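A rough reimplementation of the star warp and timed reset in plain p5.js WEBGL; resetStars() is my own name here and only approximates the goOut() reset described above.

```javascript
// Stars are pulled toward a single point each frame, then the field is
// re-seeded roughly every 50 seconds so the "big bang" loops.
let stars = [];

function setup() {
  createCanvas(600, 400, WEBGL);
  resetStars();
}

function resetStars() {
  stars = [];
  for (let i = 0; i < 500; i++) {
    stars.push(createVector(random(-800, 800), random(-800, 800), random(-800, 800)));
  }
}

function draw() {
  background(0);
  stroke(255);
  strokeWeight(2);

  // Scale every star toward the origin so they collect at one point.
  for (const s of stars) {
    s.mult(0.99);
    point(s.x, s.y, s.z);
  }

  // Restart the cycle roughly every 50 seconds, echoing the looping big bang.
  if (millis() % 50000 < deltaTime) {
    resetStars();
  }
}
```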

I was also inspired by this year’s Nuit Blanche theme of You Are Here, and Daniel Iregui’s response to it with their installation, Forward. The sped-up time in my VR work and the looping animation allude to the presence of time and how the future is always out of reach. That’s also reflected by the half-present planar window, always too far ahead of you.

AYUTTHAYA notes:

“Wow, it feels like I’m actually there!” – anonymous OCAD student

I’m really, really fond of my home country, and that shows quite frequently in my work. Ayutthaya, dubbed (by me) Thailand’s collection of mini Leaning Towers of Pisa, is one of Thailand’s oldest historic sites. I visited this past summer (2018), and the sense of history in the air is palpable. I’m not sure this VR experience replicates that by any means, but it at least shows people that Ayutthaya exists.

Honestly, this was more of a test than anything; I’d need to revisit it and create some interaction or movement. I believe Unity is the right way to go for VR environments, and I would continue using it now that I’ve got it to work. It has a lot of functionality, and I’d be able to easily place 3D objects in the environment. One thing I’m having trouble with is that the video won’t loop/play properly.

Video downloaded from here: https://vimeo.com/214401712

LINKS to FINAL PROJECTS:

big bang (mobile VR): https://webspace.ocad.ca/~3161327/p5.vr/examples/teapot_city/index.html

big bang (desktop VR (mouse control)): https://webspace.ocad.ca/~3161327/p5.vr/examples/teapot_city/index2.html

big bang files: https://github.com/salisajat/e1-big-bang *I’d accidentally worked straight inside a cloned repository of p5.vr and was unable to push that code or keep any of my commits when I remade the folder :/

AYUTTHAYA, Thailand (Unity): https://github.com/salisajat/VR-AYUTHAYA-TEST

code scraps that did not work (includes webVR + p5.js, webVR, early p5.vr experimentation): https://github.com/salisajat/E1-scraps

Additional process documentation: 

halsey concept
big bang process; webgl error?
[resolved] android sdk + java sdk PATH issue in unity
[resolved] persisting problem in building apk in unity
snippets of my google search history
their name is Car D. Board

Experiment 1 Final Prototype | Shiloh Light-Barnes

Shiloh Light-Barnes – 3162824 – Adam Tindale – DIGF-2004-001

Track Tiles

Track Tiles is a one-week game jam project that I created for our class prototype assignment. I wrote the game in C# using the Unity3D game engine and designed the art assets in Illustrator CC. The game is an endless-runner-themed, single-tap mobile game that takes place over an infinitely long span of train tracks. Once the game begins, square tiles of track spawn in a predetermined path along the screen. However, the track pieces are rotated randomly, leaving it up to the player to keep the train from crashing. As the game progresses, the player must tap on track pieces to rotate them so that the train can continue to ride forward. As of now, players can collect stars along the way and watch their score increase. In the future, I hope to add additional track types, power-ups, more obstacles, and cosmetic items.

In terms of context, my assignment was very much a solo venture. The idea came to me while I was thinking of a game that is complex enough to keep the player stimulated yet simple enough that it can be played with a single tap. Track Tiles was also inspired by a previous (nameless) game that I have in development; some of that code was actually used directly, mainly the ninety-degree turn mechanic and the tap-detection code. However, the majority of Track Tiles was inspired by my own thoughts and ideas, and by online forums that helped me move past some difficult aspects of the project, all of which I have linked below.

I decided to work in Unity 3D over any other environment both because of my previous experience with the program and because of the type of project I had decided on. I knew a mid-tier 2D game could be made much more easily in Unity than in p5 or JavaScript, and I already understood the program’s interface and the C# programming language. This decision also opened up the possibility that I could actually publish the game to the App Store or Google Play if it ended up playing well.

Post Presentation Thoughts:

After showing the project to the class, first impressions were mostly positive. Many players found the game much too hard at the beginning and never reached a score they were proud of. However, almost everyone who played immediately hit the “retry” button. This was interesting because I’ve made and demoed games before but never seen this behavior so strongly. I believe the higher difficulty actually drew players in rather than repelling them. Interesting as that may be, I will be lowering the difficulty at the beginning of the game and then increasing train speed and adding double tracks later on. Overall, I’m very happy with how the project turned out, and I will surely continue to work on it in the future.

Link To Github Project:

https://github.com/ShilohLightOfficial/Track-Tiles-GITHUB

References:

https://answers.unity.com/questions/46918/reload-scene-when-dead.html

https://answers.unity.com/questions/29183/2d-camera-smooth-follow.html

https://answers.unity.com/questions/284054/lists-c.html

https://answers.unity.com/questions/1126621/best-way-to-detect-touch-on-a-gameobject.html


Atelier (Discovery): Colour Tracking Audio Visualizer

By Madelaine Fischer-Bernhut

Github: https://github.com/MaddieFish/Atelier–Discovery—Assignment-1

Programming Language: JavaScript(p5.js and tracking.js)

For this project, I decided to stick with p5.js, as we had been learning about its capabilities in Atelier over the past couple of weeks. JavaScript, in general, is the language I have the most experience in, and p5.js is the library I am most familiar with (I have used Processing a lot in the past). Still, I wanted to challenge myself by integrating another library alongside p5.js, so I decided to try adding a computer vision library to my project. I have always been fascinated with computer vision, so exploring it was something I was excited to do. I found and decided to use the computer vision library tracking.js. The library allows the user to colour track, motion track, face track, and more. I decided to focus on colour tracking.

Project Description:

The project I created is a colour-tracking audio visualizer. The program takes the pixel information from a webcam or built-in computer camera to track the presence of specific colours (magenta, cyan, and yellow). The tracking data (which includes x and y coordinates, the tracked colour, and the dimensions of the coloured object) is stored in an array. Within the script, I was able to use those values as parameters for the representative ellipses and sounds of the tracked colours. I used p5.js’s oscillation capabilities to create the sound output. The tracked cyan colour controls the oscillator’s frequency, the tracked yellow colour controls the amplitude, and the magenta colour adds a band-pass filter over the sound and controls the filter’s resonance and frequency.
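A trimmed-down sketch of wiring tracking.js into p5.sound this way; the element id, value ranges, and coordinate-to-parameter mappings below are illustrative choices, not the exact ones in the repo.

```javascript
// tracking.js reports rectangles for each tracked colour; those values drive
// an oscillator and a band-pass filter in p5.sound.
let osc;
let bandpass;
let detections = []; // latest rectangles reported by tracking.js

function setup() {
  createCanvas(640, 480);
  const cam = createCapture(VIDEO);
  cam.id('cam'); // give the <video> element an id tracking.js can find
  cam.size(640, 480);
  cam.hide();

  bandpass = new p5.BandPass();
  osc = new p5.Oscillator();
  osc.setType('sine');
  osc.disconnect();
  osc.connect(bandpass);
  osc.start(); // some browsers require a click before audio actually starts

  // Colour tracker for the three marker colours.
  const tracker = new tracking.ColorTracker(['magenta', 'cyan', 'yellow']);
  tracker.on('track', (event) => {
    detections = event.data; // each entry has x, y, width, height, color
  });
  tracking.track('#cam', tracker); // track the camera feed from createCapture
}

function draw() {
  background(240);
  noStroke();
  for (const d of detections) {
    fill(d.color);
    ellipse(d.x + d.width / 2, d.y + d.height / 2, d.width);
    if (d.color === 'cyan') osc.freq(map(d.y, 0, height, 800, 100));
    if (d.color === 'yellow') osc.amp(map(d.y, 0, height, 0.5, 0));
    if (d.color === 'magenta') {
      bandpass.freq(map(d.x, 0, width, 200, 2000));
      bandpass.res(map(d.y, 0, height, 20, 1));
    }
  }
}
```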

The coloured objects I used for tracking.

Note: In the future, I would like to create a glove of sorts with colourful fingers instead of individual objects. I think it would make it easier for the user to create more controlled and finessed sounds. For example, one would only have to hold a finger down if they did not want to track a particular colour, instead of having to completely put an object down while juggling the objects for the other colours.

My goal for this project was to create something that is fun to interact with. I wanted to go beyond simple mouse events, toward an interactivity that is more than just a gesture of your fingers. Because I decided to use colour tracking and computer vision, the user can interact with the program without even touching the computer. All they need is colour to control the sound and visuals of the project.

I have always been fascinated with installations and projects that use technology to track the movements of the human body to create generative works of art or immersive experiences. My use of colour tracking in this project is just a stripped-down way of implementing movement analysis within p5.js. Originally, I thought of using a Kinect, like many of the other projects I’ve seen, for its built-in motion-tracking abilities, but I decided against it. Instead of using the form of the human body, I wanted to use colour, because I felt it would be easier to implement with my present skills.


Experiment 1: Digital Rain

https://github.com/avongarde/Atelier/tree/master/Assignment%201

For this project, I wanted to emulate the digital rain from The Matrix. The Matrix is one of my favourite movies, and the iconography associated with it is the falling green code. The code is a way of representing the virtual activity inside the Matrix, a simulated world, on screen. For my interface, I used the p5.js library and recreated the same effect with my own spin on it. I wanted to show that the user could directly affect the code and its state. The falling green code is associated with uniformity and equilibrium; however, if the mouse veers off to the right of the screen, the code turns red and begins wandering off in many directions, symbolizing the code’s corruption. In conclusion, I used the idea of digital rain and turned it into a visual association of a programmer (the user) encountering code with or without errors. p5.js was perhaps the best choice for this project because of my previous knowledge of JavaScript and Processing; p5.js is essentially a sketchbook for your browser and is accessible to artists and designers.

My father and I are movie buffs, and The Matrix has stood the test of time as one of our shared picks. I’ve always linked the movie to computer programming, and now that I’m learning it in school, I wanted to explore the possibility of emulating something from it. When I began to learn p5, I instantly sought out ways to recreate the digital rain. I came across a tutorial on YouTube that showed exactly how to do it; actually, it was on The Coding Train, a channel I am quite familiar with. But before that, I wanted to create as much of it as I could on my own. I ended up creating the Symbol class myself, using the String fromCharCode() method, and displayed one symbol on the screen. From there I populated the screen with symbols using an array, with much dissatisfaction. After that, it became increasingly difficult, even with my knowledge and some help from a friend. I ended up referencing the tutorial, but most of the final code, including the code-corruption aspect, is original. One aspect I wish I had improved on was making the canvas full screen.
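A pared-down sketch of the Symbol idea described above; the corruption threshold and ranges here are illustrative rather than the project’s exact values.

```javascript
// Each Symbol falls down the canvas and periodically swaps its character via
// String.fromCharCode(); pushing the mouse to the right "corrupts" the rain.
class Symbol {
  constructor(x, y, speed) {
    this.x = x;
    this.y = y;
    this.speed = speed;
    this.value = '';
    this.setToRandomSymbol();
  }

  setToRandomSymbol() {
    // Katakana block (U+30A0 onward), like the film's digital rain.
    this.value = String.fromCharCode(0x30A0 + floor(random(96)));
  }

  rain() {
    this.y = this.y > height ? 0 : this.y + this.speed;
    // Corruption: when the mouse is far to the right, drift sideways too.
    if (mouseX > width * 0.75) {
      this.x += random(-2, 2);
    }
    if (frameCount % 10 === 0) this.setToRandomSymbol();
  }

  show() {
    // Green in the stable state, red once the code is "corrupted".
    fill(mouseX > width * 0.75 ? color(255, 0, 70) : color(0, 255, 70));
    text(this.value, this.x, this.y);
  }
}

let symbols = [];

function setup() {
  createCanvas(600, 400);
  textSize(16);
  for (let x = 0; x < width; x += 20) {
    symbols.push(new Symbol(x, random(-500, 0), random(3, 8)));
  }
}

function draw() {
  background(0, 150); // translucent background leaves falling trails
  for (const s of symbols) {
    s.rain();
    s.show();
  }
}
```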

https://www.youtube.com/watch?v=S1TQCi9axzg
https://www.w3schools.com/jsref/jsref_fromcharcode.asp


Figure 1. One symbol centered on the canvas


Figure 2. A stream of symbols centered on the canvas falling from above


Figure 3. Version 1.0: Green symbols populating the screen and falling from above


Figure 4abc. Version 2.5: MouseX is incorporated; affects the fill colour and symbols’ x-value
