
The Lake


A kayak paddle simulation examining kinetic motion with gyroscopic metrics in a raycast virtual space.

developed by Tyson Moll

JavaScript raycasting engine based on a demo by Bryan Ma
Game includes Flickering, a song contributed by Alex Metcalfe

GitHub

Ideation Phase – October 29th to November 2nd


The initial concept I wanted to explore for this experiment was the idea of a game controller with an unintuitive approach to engagement with digital content, ideally in a multiplayer context. I explored different forms a game controller could take, and the ways it could receive data, by sketching ideas out on paper, as well as techniques implemented in several genres of games that I considered exploring for the project:

Footage from Mario Party, released in 1998.

In “Paddle Battle”, a mini game from the original Mario Party for the Nintendo 64, players ‘co-operate’ to move a vehicle whilst simultaneously trying to inconvenience the opposing side of the vessel. This idea of simultaneous co-operation and villainy was very enticing to me, as was the concept of utilizing a paddle or stick instead of a typical control scheme.

Trailer for Starwhal, originally published in 2013.

In Starwhal, fluorescent space narwhals fight to stab each other in the heart with their tusks. Interestingly, the movement is accomplished by accelerating the presumably aquatic mammals and deflecting their movements off the level’s surfaces: a physics-controlled playground. This led me to explore the possibility of incorporating a physics engine written for JavaScript called matter.js and, by association, 3D objects in three.js. Having explored similar concepts in the past with GameMaker, the idea of creating some sort of game in such a vein felt exciting.

Video footage of Battletoads, 1991.

In the Hyper Tunnel segment of the Nintendo Entertainment System title Battletoads, 1 to 2 players must race at ridiculous speeds while dodging miscellaneous obstacles along the way. In terms of gameplay, this is accomplished by moving the player character between two vague ‘lanes’ on the surface of the level in addition to the airspace above. Another lane-based title that sparked my imagination was Excitebike, which featured the interesting mechanic of a temperature gauge: you can accelerate faster at the push of a button, but if the engine overheats you are forced to wait until it cools down before returning to the race. Pondering these ideas alongside indie title Helltour Cross Country and the item mechanics of Mario Kart, I decided to pursue this idea with my initial work.

Exploratory Phase – November 3rd to November 5th


The initial concept for the controller was to incorporate a strong thrusting forward and backward motion to drive the acceleration of a vehicle, with the ability to tilt the controller to change lanes and two buttons to perform actions in the game. The tilting motion, I imagined, would be driven by wrist movement, functioning akin to a motorcycle throttle via forward or planar rotation. In consultation with my peers, I revised the controller design so that the controller would tilt in the direction in which the player changed lanes.

Another of the struggles in designing this controller was measuring the distance that the handle of the controller traveled; would it be solved mechanically? I pondered the possibility of using gears and a rotation-based potentiometer, separately and in conjunction with each other. But reflecting on the functionality of the ultrasonic sensors used in several of my classmates’ previous experiments, I realized that the mechanical approach to collecting these metrics could be cheated by mounting the sensor to the handle of the device and directing it towards a flat surface in a controlled environment.

In the meantime, I wrote some pseudo-code and created a Photoshop mock-up to plan how I wanted the game to be organized. I separated game states between titles, transitions and gameplay and detailed how I would accomplish gameplay mechanics in code. This all ended up being scrapped after I started playing around with physical materials on November 6th.

Prototyping Phase – November 6th to November 8th

Starting the day fresh Monday morning, I made my trek to Home Hardware to purchase the physical materials necessary to produce the controller I envisioned with PVC piping. The hardware store had an assortment of pieces, some that fit snugly together and others that were simply incompatible. In selecting the pipe pieces, I had to consider how I could manage to fit electrical components inside as well as how snug the pieces were; looser pieces invited a rotation mechanism, whereas stiffer connections promised more permanent fixtures. Ultimately, I purchased enough piping to produce two ‘joysticks’, with legs to support the yoke within a box that the handle could slide through. I also purchased two sheets of corrugated plastic and two wooden dowels, envisioning some sort of salvage-based post-apocalyptic racing game with matching controllers.

I experimented by manipulating the parts I purchased to inform how the device might be operated, or whether a new approach would be more fitting or interesting with the pieces. Sliding the piping along the wooden dowel inspired me to investigate how this smooth motion could manifest as a controller. Some of the ideas that came to mind were a trombone, a rifle action, and a submarine lookout shaft. The experience of physically operating the pieces proved to be much more interesting than the process of imagining with sketches, as the ideas that emerged seemed grounded in the physical properties I had available to manipulate. One of the last ideas I played with was operating the dowel connected to the piping like a paddle. The smooth, intuitive nature of the movement created with the pieces was surprising, and the success of the motion inspired me to change course with my project scope towards producing a paddling controller.


Perhaps still grounded by my first concept for the experiment, I envisioned a transition from racing vehicles to kayaks cruising through mangroves. I contacted Alex Metcalfe about creating a soundtrack based on a mockup image I produced for the project. The track he provided, ‘Flickering’, helped inspire the mood and atmosphere of the project as it developed. Foliage would delineate the direction the player would be required to paddle, and an image of the kayak bow would adjust according to the motions received from the controller. Moving towards the left, for example, would cause the foliage to stretch horizontally across the canvas as if the kayak were moving towards it. As I developed a mockup for this concept, I grew concerned about the level of detail that would be required to render the perspective of the objects, as well as about forcing the player to follow a path ‘on rails’ instead of affording them the freedom of a 3D space.

Screenshot from DOOM, 1993 (Image Source)

At this stage I was inspired to look at DOOM’s approach to 3D. Published in 1993, DOOM was one of the first successful 3D games and a pioneer of the raycasting technique for generating 3D spaces (it was also less than 2.5MB in size). The size of 3D elements is determined by their distance from the player object, drawn as a series of vertical lines centred on the screen. Rays are based on the concept of the path traveled by light: the player object casts lines between itself and the nearest object to determine how close that object is, and draws lines on the screen accordingly. The closer the object, the taller the line. What results is a 3D perspective of the world through a fish-eye lens, corrected with the aid of trigonometry to better emulate human vision.
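To make that idea concrete, here is a minimal sketch (not DOOM’s or Ma’s actual code) of the column-drawing loop described above: one ray per vertical strip of the screen, with a cosine term removing the fish-eye distortion. The castRay() helper is assumed to return the distance to the nearest wall along a given angle; a sketch of it appears further below.

```javascript
// A simplified per-column renderer, assuming p5.js global mode.
// castRay(px, py, angle) is assumed to return { dist } for the nearest wall hit.
const FOV = Math.PI / 3;   // 60-degree field of view (an assumed value)

function drawColumns(player, screenW, screenH) {
  for (let x = 0; x < screenW; x++) {
    // Spread the rays evenly across the field of view.
    const rayAngle = player.angle - FOV / 2 + (x / screenW) * FOV;
    const hit = castRay(player.x, player.y, rayAngle);

    // Fish-eye correction: project the ray distance onto the view direction.
    const dist = hit.dist * Math.cos(rayAngle - player.angle);

    // Closer walls draw taller strips; height is inversely proportional to distance.
    const wallHeight = Math.min(screenH, screenH / dist);
    const top = (screenH - wallHeight) / 2;
    line(x, top, x, top + wallHeight);   // p5.js line(): one vertical strip per ray
  }
}
```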

Image Source (Hunter Loftis)

 


After studying several examples in JavaScript and C, I eventually came upon a texture-less demo created by Bryan Ma that brought the technique into the context of p5.js. Although several examples provided means of incorporating wall textures and sprite display, I didn’t find the time to explore these additional features in depth. Ma’s technique essentially takes a two-dimensional array of numbers (called a Map) and converts it into walls and voids, scaled and spaced appropriately for the 3D context. The values in the array determine where walls exist in the 3D environment as well as what type of material they are. The draw function then calls upon the raycasting method described above to create the 3D environment in a 2D context (without the use of WebGL, although the added benefit of hardware acceleration might have been advantageous). The demo was designed to be controlled with the arrow keys, leaving me the challenge of incorporating the serial input provided by Kate and Nick into the project, replacing the arrow key controls with paddle-operated controls, and adding aesthetic features.
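As a rough illustration (this is not Ma’s actual demo code), the map-and-ray idea might look something like the following: a 2D array where zero means open space and non-zero values mark walls of different materials, and a ray that steps through the grid until it lands in a non-zero cell.

```javascript
// A toy map: 0 = void/open water, non-zero = wall, with the value selecting a material.
const worldMap = [
  [1, 1, 1, 1, 1, 1],
  [1, 0, 0, 0, 2, 1],
  [1, 0, 1, 0, 0, 1],
  [1, 0, 0, 0, 0, 1],
  [1, 2, 0, 0, 0, 1],
  [1, 1, 1, 1, 1, 1],
];

// Step along a ray in small increments until a non-zero map cell is hit,
// returning the distance travelled and the wall type found.
// (A real engine would use a DDA grid traversal instead of fixed steps.)
function castRay(px, py, angle, maxDist = 20, step = 0.01) {
  for (let d = 0; d < maxDist; d += step) {
    const x = Math.floor(px + Math.cos(angle) * d);
    const y = Math.floor(py + Math.sin(angle) * d);
    const cell = (worldMap[y] && worldMap[y][x]) || 0;  // out of bounds counts as void
    if (cell !== 0) return { dist: d, wallType: cell };
  }
  return { dist: maxDist, wallType: 0 };  // nothing hit within range
}
```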

I manipulated Creative Commons licensed stock footage of water using Adobe Premiere and drew it into the scene behind the raycasted elements, from the horizon to the base of the window, in order to create the illusion of moving water. The initial output from Premiere caused considerable lag (and couldn’t be uploaded to GitHub due to its size), so I scaled it down to 480p. Interestingly, this created the strange visual illusion that the waves are moving up and down against the walls of the maze. I also created a white gradient overlay over the screen to contribute to the relaxing, exploratory vibe I envisioned for the project. I decided to scrap the idea of rotating an image of the kayak’s bow, since the movement of the vessel would be evident from the perspective on screen.


 

Having created a compelling environment for the project, I proceeded to finish building the physical device. The cavities inside the piping made it challenging to accommodate the potentiometers. Where would they be placed, given the shaft moving through the piping and the moving elements of the device? How would these sensitive electronics be protected from damage while the device was being manipulated? I was thankful that the ultrasonic sensor saved me the hassle of preparing a physical mechanism to determine the shaft’s length when paddling the device; I created ‘splash guards’ on both ends of the shaft to keep the user’s hands from interfering with the ultrasonic signal, which doubled as a mechanism to cleanly reflect the sonic signal back to the sensor. After testing various options for positioning the potentiometers, the idea of cheating the challenge of physically computing the paddle’s manipulation brought me to the concept of using a gyrometer to collect the necessary data. It was a sort of eureka moment when I realized that all the motions I hoped to record were essentially three-dimensional rotations from a central pivot point. I immediately began to test this alternative… the solution to all my problems! All but one, at least.


I had recently purchased several clone Arduino Nano units on eBay (drivers are here). I have worked with the clones in the past with Alex Metcalfe for a project, as the devices are very affordable and typically function identically to the authentic boards. With the multiplayer concept in mind, I developed a modification of the serial code provided by Kate and Nick to accept multiple serial inputs during the Ideation Phase. Unfortunately, the units are incompatible with the library code for the Adafruit gyrometer I used in the project. Along with concerns regarding performance with the raycasting engine, this led me to abandon the prospect of multiple controllers at this juncture and focus on developing a single working paddle with the authentic micro-controller. I still had yet to grapple with how I could divide the visible screen between two participants, not to mention the added challenge of building a second controller. Sometimes it’s best to focus on a completed project over a broad scope.

After solving the incompatibility issue, my gyrometer and ultrasonic sensor succeeded in sending the requisite data to p5.js (with the aid of serial monitor software provided to us for this experiment). I strapped the sensors to two adhesive mini breadboards on the central piping and created a box around the sensors and the Arduino Micro. I didn’t have enough of these mini breadboards to accommodate all three of the electrical components, so I soldered two headers to a circuit board to give the Arduino a temporary connection to the controller. Afterwards, I used the corrugated plastic, some sheet brass and some duct tape from Home Hardware to house the devices. Prototype aesthetics!


After collecting some measurements of the angle and distance readings I received from the device, I created a function to interpret the controller movements. When paddling a kayak, drawing the paddle against the water creates forward motion, but also rotates the vessel. The paddle can also be operated backwards, or held in the water while moving to slow down and make quick turns. To detect movement based on the gyrometer’s readings, I tracked whenever the X orientation of the device changed by more than a specific threshold, and whether the direction was a forward or backwards motion. I used the Y orientation to determine whether the device was tilted to the left or right (or in a neutral, center position), and I used the ultrasonic sensor’s shaft-length reading to determine whether the paddle was submerged in the water; without this reading, it would be impossible to tell whether the long end of the paddle was in the water or swinging around in midair. If the paddle was in the water and moving, then a paddle motion was applied. Similarly, if the paddle wasn’t moving but was in the water, the vessel would rotate and slow down (a ‘brake’ motion).
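The sketch below illustrates that interpretation logic with made-up names and thresholds; the real values came out of calibrating my own device, and the mapping of the ultrasonic reading to “dipped” is an assumption here rather than the project’s actual calibration.

```javascript
// A simplified gesture classifier for the paddle (placeholder thresholds).
const STROKE_THRESHOLD = 4;   // minimum change in X orientation to count as a stroke (assumed)
const TILT_THRESHOLD = 15;    // Y orientation beyond this counts as a left/right tilt (assumed)
const DIP_DISTANCE = 30;      // ultrasonic reading below this treated as "blade in the water" (assumed)

let prevRotX = 0;

function readPaddle(rotX, rotY, shaftDist) {
  const deltaX = rotX - prevRotX;   // how much the paddle swung since the last reading
  prevRotX = rotX;

  const inWater = shaftDist < DIP_DISTANCE;   // is the long end dipped?
  const side = rotY > TILT_THRESHOLD ? 'right'
             : rotY < -TILT_THRESHOLD ? 'left' : 'center';

  if (!inWater) return { action: 'none', side };   // swinging in midair: ignore

  if (Math.abs(deltaX) > STROKE_THRESHOLD) {
    // Moving while submerged: a paddle stroke, forward or backward.
    return { action: 'paddle', direction: deltaX > 0 ? 'forward' : 'backward', side };
  }
  // Submerged but not moving: drag the blade to brake and turn.
  return { action: 'brake', side };
}
```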

The next step was to create the effect of acceleration and friction; with consistent paddling, I wanted the vessel to speed up, yet slow down over time when left idle. I found the p5.js function lerp() to be of considerable help in accomplishing this: it takes two numerical values and determines a mid value based on a third argument between 0 and 1. I tracked the current speed and rotation of the boat as well as a target speed and rotation determined by the paddling actions recorded, adjusting the current values towards the target values using the lerp function. I also used the round() function to ignore excessive digits encapsulated by float values; the major benefit of rounding is the guarantee that values produced by math such as lerp eventually reach their target values instead of constantly approaching them (consider the mathematical concept of a ‘limit’). I then subtracted a ‘friction’ value from the speed to eventually drive it to zero.
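A minimal sketch of that easing-plus-friction idea, assuming p5.js and handling forward motion only for brevity; the constants are placeholders rather than my calibrated values.

```javascript
// Ease current speed/rotation toward targets, then apply friction.
let speed = 0, targetSpeed = 0;
let rotation = 0, targetRotation = 0;

const EASING = 0.05;     // third argument to lerp(): how quickly current values chase targets
const FRICTION = 0.002;  // subtracted each frame so the boat eventually coasts to a stop

function updateMotion() {
  // Rounding to a few decimals guarantees the values eventually *reach* the
  // target instead of approaching it forever.
  speed = round(lerp(speed, targetSpeed, EASING) * 1000) / 1000;
  rotation = round(lerp(rotation, targetRotation, EASING) * 1000) / 1000;

  // Friction decays both the current and target speed, so an idle paddle drifts to zero.
  speed = max(0, speed - FRICTION);
  targetSpeed = max(0, targetSpeed - FRICTION);
}
```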

My code and device, as one might expect, didn’t work together instantaneously. To reach the method above I went through a process of debugging, calibration and experimentation: the code needed to respond to the physical action of paddling without its values behaving bizarrely. During this process, I tracked the values received from my controller by drawing them on screen using the text() and nf() functions, which helped me understand why my code behaved the way it did. I also spent time understanding how and why the keyboard input method that Bryan Ma used in his demo worked, and how its respective variables were implemented in relation to the rest of the software. As expected of an accomplished programmer, the values that I needed to deal with were restricted to his input function and several variables instantiated in the setup() function and at the beginning of the .js file, making the process of adjusting the code to my needs very smooth.
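For reference, the on-screen readout amounted to something like the following sketch (variable names are placeholders); nf() formats each value to a fixed number of digits so the readout stays legible as the values change.

```javascript
// A small p5.js debug overlay for watching the controller values live.
function drawDebugReadout(rotX, rotY, shaftDist, speed) {
  fill(255);
  textSize(14);
  text('rotX:  ' + nf(rotX, 3, 2), 10, 20);       // 3 digits before the decimal, 2 after
  text('rotY:  ' + nf(rotY, 3, 2), 10, 40);
  text('dist:  ' + nf(shaftDist, 3, 1), 10, 60);
  text('speed: ' + nf(speed, 1, 3), 10, 80);
}
```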

Presentation and Reflections – November 9th

The device was presented in the graduate gallery with an overhead projector and a chair to seat the user. Users who had the opportunity to try out the device learned to operate it within two minutes, with some assistance. Keeping the ultrasonic sensor in line with the splash guards on the shaft of the paddle generally worked without issue provided that users were shown its importance; it is not an intuitive design feature. For that matter, neither was the concept of dipping the paddle: until elaborated upon, users would instinctively paddle the oar without ‘dipping it into the water’. This was cause for confusion for several users, but the motion may also have felt unnatural given that I have relatively long arms compared to the average person (the device was calibrated to my proportions). Perhaps mounting the device physically to the chair would have improved the device’s functionality between users. Overall, the presentation and operation of the device seemed to have the intended effect of simulating kayak motion for a virtual environment.

This project inspires intriguing commentary on the deficiencies of current-day virtual reality controllers. Try as we might to simulate reality, what many controllers lack is physical feedback that influences how we operate within a space. The kayak controller was successful in providing a physical apparatus that mimics reality, but without being able to actually see or feel water near their body, users did not find it apparent that moving the paddle through the water was what created forward motion. Using the controller in a headset virtual reality context might determine whether having water visually beside the participant in a simulated boat improves how easily users pick up the controls, although the question remains how haptic feedback plays into the creation of the kayak illusion.

Looking back at the time I spent reviewing the demo code, I feel that I learned a lot from observing the methods by which other programmers solve mathematical, technical and algorithmic challenges, and I hope to experiment more with other programmers’ code. We take so much for granted when we simply import libraries and use the functions given to us. It keeps the process of developing simple, but there is plenty to learn from the methods and functions embedded inside these code snippets.

Additional Project Context

see also: the video games listed in the Ideation Phase, and JavaScript libraries three.js, matter.js

Loftis, Hunter. (n.d.) A first-person engine in 265 lines. PlayfulJS. Retrieved from http://www.playfuljs.com/a-first-person-engine-in-265-lines/

Loftis’ first person engine first introduced me to the concept of applying raycasting to a JavaScript context, with image accompaniment that conveyed the ideas behind the algorithm fluidly. This was the first engine that I tried to implement into my project. Because the script was implemented directly into an HTML context as opposed to being included as a JavaScript file, I ran into several snags trying to convert it into a more easily manipulable p5.js context and decided to search around for a more convenient implementation.

Ma, Bryan. (March 31, 2016) demo for non-textured raycaster for p5.js. GitHub. Retrieved from https://gist.github.com/whoisbma/8fd99f3679d8246e74a22b20bfa606ee

This demo of a non-textured raycaster for p5.js is the raycasting engine that the game is built upon. Although several changes were made in terms of input, presentation and layout, it remains a core element of this experiment. Compared to Loftis’ version, this engine lacks imported textures but saved me significant time by being more or less ready for implementation out of the box. Bryan Ma is a new media developer / designer based in New York City; you can find his portfolio here.

Macke, Sebastian. (September 16, 2018). Voxel Space. GitHub. Retrieved from https://github.com/s-macke/VoxelSpace

On the topic of exciting web libraries and code, Voxel Space is an engine that converts a 2D image and corresponding depth map into a 3D environment that can be freely explored. It employs a clever technical concept first introduced in 1992 with Comanche, a PC title that produced rich 3D landscapes. For the time, this was a huge technical accomplishment given the limited memory and CPU affordances available.

 

River Styx (2018)

River Styx

by Shikhar Juayl, Tyson Moll, and Georgina Yeboah

Figure 1. River Styx being presented at the “Current and Flow” end-of-semester grad show at OCAD U’s Grad Gallery on December 7th, 2018.

A virtual kayaking experience in the mythical world of the River Styx. Navigate the rivers of fire, hate, and forgetfulness using our handmade kayak controller, steering through rubble, rock and ruins. Discover jovial spirits, ancient gods and the arms of drowning souls across the waters between the worlds of the living and the dead.

The project environment was built in Unity using free 3D assets and characters created from volumetric footage. We used the Xbox One Kinect and software called Depthkit in the game:play lab at OCAD U to produce the mysterious animated figures in the project. The vessel is operated with Tyson’s Arduino-based kayak controller, newly revised with 3D-printed parts and more comfortable control aspects.

GITHUB LINK

*The scripts and 3D objects used in the project are available via GitHub, but due to the size of several assets, the Unity project is not included.

Figure 4. Sketched circuit diagram for the paddle controller.

PROCESS JOURNAL

Monday, Nov 26

When we first convened as a group, we discussed the possibility of taking the existing experience that Tyson created for his third Creation & Computation experiment and porting it to Unity to take advantage of the engine’s capacity for cutting-edge experiences. As Unity is widely used in the game development industry as well as for simulations, we thought it would be an excellent opportunity to explore and develop for new technologies we had access to at OCAD U, such as virtual reality and volumetric video capture. We also thought it would be exciting to use Arduino-based controllers in a game-development project; a cursory web search revealed that Uniduino, a Unity plugin, was made for this purpose.

We also wanted to explore the idea of incorporating a narrative element to the environment as well as consider the potential of re-adapting the control concept of the paddle for a brand new experience. River Styx was the first thing to come to mind, which married the water-sport concept with a mythological theme that could be flexibly adjusted to our needs. Georgina had also worked on a paper airplane flight simulator for her third C&C experiment which inspired us to look at alternative avenues for creating and exploring a virtual space, including gliding. We agreed to reconvene after exploring these ideas in sketches and research.

Tuesday, Nov 27

We came up with several exciting ideas for alternative methods of controlling our ‘craft’ but eventually came full circle and settled on improving the existing paddle controller. The glider, while fun in concept, left several questions regarding how to comfortably hold and control the device without strain. Our first ideas for the device imagined it with a sail. We then considered abstracting the concept of this controller in order to remove extraneous hardware elements. VR controllers, for example, look very different from the objects they are supposed to represent in VR, which makes them adaptable to various experiences and easier to handle. As we continued to explore these ideas, it occurred to us that the most effective use of our time would be to improve an already tried-and-true device and save ourselves the two or three days it would take to properly develop an alternative. Having further researched the River Styx lore and mythos, we were also very excited to explore the concept with the paddle controller and resolved to approach our project accordingly.


Wednesday, Nov 28

We visited the game lab at 230 Richmond Street for guidance in creating volumetric videos with the Kinect. Second-year Digital Futures student Max Lander was kind enough to guide us and give pointers about using volumetric videos in Unity. Later that day, we made a serial port connection script to start integrating Tyson’s old paddle script into Unity.

Once that was completed, we started looking into adding bodies of water to our environment using mp4 videos. It turned out the quality was not what we were going for, so we later integrated water assets from the standard Unity packages and began building our scenes.

For the paddle, with the aid of a caliper we dimensioned the elements of the original paddle controller and remodelled them in Rhinoceros for 3D printing. Although the prospect of using an authentic paddle appealed to us, we chose to keep the existing PVC piping and wood dowel design in order to reduce the amount of time spent searching for just the right paddle and redesigning the attached elements. To improve communication between the ultrasonic sensor and the controller, the splash guards from the original kayak paddle controller were properly affixed to the dowel, as was the paddle. The ultrasonic sensor essentially uses sonar to determine distance, so it was important that the splash guards were perpendicular to its signal to ensure that the sound was properly reflected back. Likewise, we created a more permanent connection between the paddle headboards and the dowel, and a neatly enclosed casing for the Arduino and sensors.

The process of printing the materials took about five days, as not all printers were accessible and several parts needed to be redesigned to fit on the available printer beds, based on the material available in the Makerlab. We also found that the roll of material we had purchased from Creatron for the project consistently caused printing errors compared to others, which resulted in significant time wasted troubleshooting and adjusting the printers.


Thursday, Nov 29

This was our first session finding Unity assets and integrating them into the Unity editor. We used a couple of references to help shape and build the worlds we wanted to create, and managed to find a few assets we could work with from the start, such as our boat. As we continued to add assets to our environment we noticed that some were heavier than others and caused a lot of lag when we ran the game, so we later decided to use more low-poly 3D models. Once we were satisfied with a certain environment, we added a first-person (FPS) controller from Unity’s standard assets to the boat and began to navigate the world we had created. We wanted to experience what it would be like to explore these rivers from this view, and to later replace the FPS controller with our customizable Arduino paddle.

Figure x. Shikhar working on River Styx environment.


Friday, Nov 30

Hoping that it would simplify our lives, we purchased Uniduino, a premade Unity Store plugin. This turned out not to be the case, as its interface and documentation seemed to imply that we would need to program our Arduino through Unity instead of working with our pre-existing code developed in the Arduino IDE and its serial output. We ended up resolving this with the help of a tutorial by Alan Zucconi; we transmitted a string of comma-separated numbers associated with the variables that are used to operate the paddle and split them with string-handling functions in a C# script.

After some initial troubleshooting, we managed to get the gyroscope and ultrasonic sensor integrated with Unity by applying rotation and movement to an onscreen cube. The only caveat was a perceptible, growing lag, which we decided to resolve at a later date.

 

Saturday, Dec 1st

As our environment for River Styx grew, we continued to discuss adding the other rivers of the Greek mythological underworld, such as the River of Pain, the River of Forgetfulness, the River of Fire and the River of Wailing. We then started to brainstorm ideas for the map layout, creating a map for these multiple rivers and deciding what reflective element each river should have. Our discussions expanded to game design versus an exploratory experience; we wanted to see how we could implement certain aspects to make it more of a game and less exploratory. However, foreseeing how much time we had to develop and finalize the project without overcomplicating things for ourselves, we decided to keep it as an exploratory experience.

 

Monday, Dec 3rd

As we continued developing the other scenes, we came across assets to help distinguish our otherwise similar river environments. We were able to find lava assets for the River of Fire and create fog in the River of Forgetfulness. With all these assets and the possible addition of volumetric videos, we decided we needed a powerful computer to run our Unity project and reduce lag while running and working on it. We considered asking our professors for one, but the only machine capable of handling our needs was in the DF studio, where we couldn’t upload additional software or install drivers to resolve serial port issues without administrative permissions. To avoid these bottlenecks we decided to use Tyson’s personal PC tower to continue work on the project and, later, for the installation at our upcoming grad show.

We also converted the kayak controller code from JavaScript to C# for use in the Unity game engine, initially in an uncalibrated state. The first movement we saw in Unity’s play window was noticeably slow, but it indicated that our attempt to translate the code had worked. For our convenience, the variables we would need to access to calibrate the device were declared ‘public’ in our code. This allowed us to edit them manually from the Inspector window in Unity without running the risk of adjusting a ‘private’ variable in error.

Tuesday, Dec 4th

We reconvened in the game:play lab to capture volumetric videos with the Xbox One Kinect and Depthkit and import them into Unity. Depthkit comes with several features for manipulating data captured from the Kinect camera, including a slider for cutting out objects beyond a particular distance, as well as undesirable artifacts. In order to use the captures as looping animations, we tried to start and end our recordings in a ‘neutral’ pose determined at the outset, to avoid having the footage jump significantly between the first and last frames. Given that the Kinect and Depthkit render the captured information as a video file, we also needed to be mindful of recording times and the number of objects we wanted to include in the project in order to reduce the performance impact.

Some of the animations we captured included hands, exaggerated faces, ‘statues’ of god-like figures and silly dances. We frequently took advantage of the clipping area to isolate particular limbs in frame. In one instance, we were able to create a four-armed creature by using two subjects: one in frame and the other hidden behind them in the cropped space, contributing a second set of arms.


Wednesday, Dec 5th

At this stage we had three official scenes created, and the paddle’s parts were ready to be assembled after going through the laser cutter. We then began to write teleport code that would let the user jump from one cave entrance to the next in each scene, but decided not to include it: we wanted the user to explore without feeling goal-driven to get from one place to another. Instead, we decided to act as facilitators and transport them whenever they wanted, adding a key press that teleports the user from one scene to the next.

We had plenty of fun using the Zero Days Look for our Depthkit captures, which was created for the VR film of the same name. It allowed us to manipulate the default photographic appearance and incorporate colour, lines, points and shapes into the volumetric renditions. The more we worked with it, the more familiar we became with its interface and with how our adjustments would look in-game, as not all features of the plugin were rendered directly in Unity’s scene view while editing.


Thursday Dec 6th – Friday, Dec 7th

Prior to showcasing our project, we moved all of our Unity assets and code to Tyson’s personal PC tower and continued our work from there. We began integrating the volumetric videos into Unity and play-tested the environment to get a feel for how comfortable it was to navigate with the paddle. We felt that the kayak’s motion was a bit slow for public demonstration, so we tweaked the speed increment, friction, and maximum speed until it felt fluid.

Reception for the project was overall positive. Interestingly, children were able to pick up the controls with relative ease. Since the ultrasonic sensor targeted an area above where their hands reached, they were able to grip the paddle wherever they desired. This could also be attributed to a lack of preconceptions about how the device works; one of the most experienced paddlers seemed to have the most difficulty operating the device.


 

Project Context

TASC: Combining Virtual Reality with Tangible and Embodied Interactions to Support Spatial Cognition by Jack Shen-Kuen Chang, Georgina Yeboah, Alison Doucette, Paul Clifton, Michael Nitsche, Timothy Welsh and Ali Mazalek.

 

Tangibles for Augmenting Spatial Cognition, or T.A.S.C. for short, is an ongoing grant project conducted at the Synaesthetic Media Lab in Toronto, Ontario. Led by Dr. Ali Mazalek, the team looks into spatial abilities such as perspective taking and wayfinding and creates tangible counterparts to complement the spatial ability being assessed in VR environments. This is all done to explore the effects that tangibles within VR spaces have on participants’ spatial cognitive abilities through physical practice, going beyond the idea of 2D spatial testing.

Our project relates to the idea of integrating a customizable controller and using its purpose to complement the situation it’s involved in. For example, in the T.A.S.C. project the user controls tangible blocks to solve a multi-perspective-taking puzzle. In our River Styx environment, the paddle and its use complement the environment while also increasing embodiment in virtual spaces. We also designed the paddle to act like an actual paddle: if it dips low enough to either the left or right, the kayak rotates in that direction while also moving forward.

T.A.S.C. and River Styx both explore the physicality and mechanics of a tool and integrate its use into the environment it’s used in. We hope to later integrate VR into River Styx to deepen this immersive experience of paddling through such environments.

 


The Night Journey by Bill Viola and the USC Game Innovation Lab in Los Angeles.

The Night Journey is an experimental art game that uses both game and video techniques to tell the story of an individual’s journey towards enlightenment. With no clear paths or objectives, and with underpinnings from historical philosophical writings, the game focuses its core narrative on creating a personal, sublime experience for the individual participant. Actions taken are reflected in its world.

The techniques incorporated from video footage and the narrative premise of the game gave us inspiration for how we might tackle the scenic objectives for our project and interpret the paths we wanted players to take in River Styx.

References

  • https://docs.unity3d.com/ScriptReference/Rigidbody-isKinematic.html
  • https://forum.unity.com/threads/simple-collision-detection-blocking-movement.86712/
  • https://docs.depthkit.tv/docs/unity-plugin
  • https://www.unity3dtips.com/move-objects-in-unity/
  • http://www.uniduino.com/
  • https://www.arduino.cc/
  • https://www.zerodaysvr.com/

Cross the Dragon – An Interactive Educational Exhibit


Project Name : Cross the Dragon

Team Members: Norbert Zhao, Alicia Blakely, and Maria Yala

Summary:

Cross the Dragon is an interactive art installation that explores economic changes in developing countries and the use of digital media to create open communication and increase awareness of economic investment from global powers in developing countries. The main inputs in the piece are a word-find game on a touch interface and an interactive mat. When a word belonging to one of the four fields (Transport, Energy, Real Estate, or Finance) is found, a video is projected onto a touch-responsive mat. Through the touch-sensitive mat one can initiate another video in response to the found word. The interactive mat plays video through projection mapping. In order to interact with the mat again, one has to find another word. We have left the information in the videos open to interpretation so as to keep it unbiased and to build a gateway to communication through art and digital gaming practices.

What we wanted to accomplish:

With this interactive installation, the idea was not to impose preconceived notions about the educational information provided. The installation is designed to encourage a positive thought process through touch, infographic video and play. Through this interface we can conceptualize and promote discussion of information that is not highly publicized, widely accessible or generally discussed in Canada.

Ideation & Inspiration:

Ideation

This project was inspired by a story shared by a member of our cohort. She described how Chinese companies are building a new artificial island off the beach in downtown Colombo, her hometown, and are planning to turn it into Sri Lanka’s new economic hub. At the same time, in the southern port of Hambantota, the Sri Lankan government borrowed more than $1 billion from China for a strategic deep-water port but couldn’t repay the money, so it signed an agreement entrusting the management of the port to a Chinese national company for 99 years.

For us, such news was undoubtedly new and shocking. With China’s economic growth and increasing voice in international affairs, especially after the Belt and Road Initiative was launched in 2013, China began to carry out a variety of large investment projects around the world, particularly in developing countries in Asia and Africa, where Chinese investment in infrastructure projects has peaked. At the same time, we discovered a series of reports from the New York Times, How China Has Become a Superpower, which contains detailed data about China’s investment in other countries and project details.

This project therefore focused on the controversy surrounding this topic: some people think these investments have helped local economic development, while others see them as neo-colonialism. From the beginning of concept development we knew this topic would have an awareness aspect. It was important to portray a topic that has a profound effect on the social and cultural lives and identities of people across the globe, and that is heterogeneous in the sense that it stems into other socioeconomic conditions. After discussion and data research, we decided to focus on China’s growing influence, especially its economic influence, in Africa.

Finally, we decided to explore this topic through interactive design. We came up with the idea of creating a mini-exhibition through which visitors can explore the story behind this topic by interacting with a game. When visitors first come into contact with the exhibition, they do not have detailed information about it, but after a series of game interactions, detailed information about the exhibition theme is presented in the form of intuitive visual design. The resulting self-exploration process gives visitors a deeper impression of the topic.

Inspiration

These three interactive projects were chosen because of how they combine an element of play and the need for discovery in an exhibition setting. They engage the audience both physically and mentally, which is something we aim to do with our own project.

Case Study 1 – Interactive Word Games

An interactive crossword puzzle made for the National Museum in Warsaw for its “Anything Goes” exhibit, which was curated by children. It was created by Robert Mordzon, a .NET developer and electronic designer, and took seven days to construct.


Case Study 2: Projection Mapping & Touch interactions

We were interested in projection mapping and explored a number of projects that used projection mapping with board games to create interactive surfaces that combined visuals and sounds with touch interactions.


Case Study 3: Interactive Museum Exhibits

ArtLens Exhibition is an experimental gallery that puts you – the viewer – into conversation with masterpieces of art, encouraging engagement on a personal and emotional level. The exhibit features a collection of 20 masterworks of art that rotate every 18 months to provide new, fresh experiences for repeat visitors. The art selection and barrier-free digital interactives inspire you to approach the museum’s collection with greater curiosity, confidence, and understanding. Each artwork in ArtLens Exhibition has two corresponding games in different themes, allowing you to dive deeper into understanding the object. ArtLens Exhibition opened to the public at the Solstice Party in June 2017.


Technology:

For our final project we combined two of our previous projects: FindWithFriend and Songbeats & Heartbeats. The aspects of the two projects we were drawn to are the interactions. We wanted to create an educational exhibition that has a gamified component and encourages discovery, almost like the Please Touch Museum.

Interactions:

We combined the touch interactions from the wordsearch & interactive mat.

Components:

P5, Arduino, PubNub, Serial Connection

Brainstorm


Team brainstorming the user flow and interactions


Refined brainstorm diagram showing user flow, nodes, and interactions

How it works:

The piece works like a relay race: one interaction on an iPad triggers a video projection onto an interactive mat. When a sensor on the mat is touched, it triggers a different projection showing the audience more data and information.

The audience is presented with a wordsearch game in a P5 sketch (SKETCH A) with the four keywords “Transport”, “Energy”, “Real estate”, and “Financial”, representing the industries in which China has made huge investments. Once a word is found, e.g. “Transport”, a message is published to PubNub and received by a second P5 sketch (SKETCH B), which plays a projection about transport projects. When the audience touches the mat, the sensor value (ON/OFF) is sent via an Arduino/P5 serial connection to SKETCH B, which stops the Transport projection and displays more information about China’s transport projects in different African countries.

Step 1: Sketch A – Wordfind game

The viewer’s initial interaction with the “Cross the Dragon” exhibit begins in the word-find game, created using p5.js. The gameboard is built from nested arrays that create the word-find matrix. Each tile on the board is created from a Tile class with the following attributes: x, y coordinates; RGB color values; a color string description based on the RGB values; a size for its width and height; the booleans inPlay, isLocked, and isWhite; and a tile category that indicates whether the tile is for Transport, Finance, Real Estate, or Energy.

To create the gameboard, three arrays were used: one containing the letters for each tile; another containing values indicating whether a tile was in play or not, made up of 1’s and 0’s (tiles containing letters of the words to be found were marked with 1’s, decoy tiles with 0’s); and a last array indicating the tile categories with a letter, i.e. T, F, R, E, and O for the decoy tiles. The matrix was created by iterating over the arrays using nested for loops.
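As an illustration (a condensed 4×4 excerpt with arbitrary letters, rather than the project’s actual 11×11 arrays or full Tile class), the parallel arrays and nested loops might look something like this:

```javascript
// A stripped-down Tile: the real class also carries RGB colour values.
class Tile {
  constructor(x, y, size, letter, inPlay, category) {
    this.x = x; this.y = y; this.size = size;
    this.letter = letter;
    this.inPlay = inPlay;       // true if this tile belongs to one of the four words
    this.category = category;   // 'T', 'E', 'R', 'F', or 'O' for decoys
    this.isWhite = true;        // all tiles start unclicked/white
    this.isLocked = false;      // set once counted by the discover button
  }
}

const letters = [               // letters shown on each tile (arbitrary here)
  ['T', 'R', 'A', 'O'],
  ['E', 'N', 'E', 'R'],
  ['F', 'I', 'N', 'A'],
  ['O', 'G', 'Y', 'O'],
];
const inPlayFlags = [           // 1 = part of a hidden word, 0 = decoy tile
  [1, 1, 1, 0],
  [1, 1, 1, 1],
  [1, 1, 1, 1],
  [0, 1, 1, 0],
];
const categories = [            // T/E/R/F for the four industries, O for decoys
  ['T', 'T', 'T', 'O'],
  ['E', 'E', 'E', 'E'],
  ['F', 'F', 'F', 'F'],
  ['O', 'E', 'E', 'O'],
];

const tileSize = 60;
let board = [];

function buildBoard() {
  board = [];
  for (let row = 0; row < letters.length; row++) {
    board.push([]);
    for (let col = 0; col < letters[row].length; col++) {
      board[row].push(new Tile(
        col * tileSize, row * tileSize, tileSize,
        letters[row][col],
        inPlayFlags[row][col] === 1,
        categories[row][col]
      ));
    }
  }
}
```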


The arrays used to create the game board tile matrix of clickable square tiles


Generating the 11×11 game board and testing tile sizes

Once the tile sizes were determined, we focused on how the viewer would select the words for the four industries. The original Find With Friends game catered to multiple players, identifying each with a unique color. Here, however, there is only one input point, an iPad, so we decided to have just two colors show up on the game board: red to indicate a correct tile and grey to indicate a decoy tile. When the p5 sketch is initiated, all tiles are generated as white and marked with the booleans inPlay and isWhite. When a tile is clicked and its inPlay value is true, it turns red. If its inPlay value is false, it turns grey.


Testing that inPlay tiles turn red when clicked

The image below shows testing of the discover button. When a word is found and the discover button is clicked, a search function loops through the gameboard tiles, counting the tiles that are inPlay and have turned red; a tally of the clicked tiles is recorded in four variables, one for each industry. There are 9 Transport tiles, 6 Energy tiles, 10 Real Estate tiles, and 7 Finance tiles. Once the loop is complete, a checkIndustries() function checks the tallies. If all the tiles in a category have been found, the function sets a global variable currIndustry to the found industry and then calls a function to pass that industry to PubNub. When an in-play tile is clicked, it is locked so that the next time the discover button is clicked the tile is not counted again.
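A hedged sketch of that tally logic follows; the function and variable names are stand-ins rather than the project’s actual code, and publishIndustry() stands in for the PubNub publish call.

```javascript
// Per-industry tile counts taken from the description above.
const TILES_PER_INDUSTRY = { T: 9, E: 6, R: 10, F: 7 };
let found = { T: 0, E: 0, R: 0, F: 0 };
let published = { T: false, E: false, R: false, F: false };
let currIndustry = null;

function onDiscoverClicked(board) {
  for (const row of board) {
    for (const tile of row) {
      // Count only correct tiles that have turned red and weren't counted before.
      if (tile.inPlay && !tile.isWhite && !tile.isLocked) {
        found[tile.category]++;
        tile.isLocked = true;   // lock so the next click doesn't recount it
      }
    }
  }
  checkIndustries();
}

function checkIndustries() {
  for (const key of Object.keys(TILES_PER_INDUSTRY)) {
    if (!published[key] && found[key] === TILES_PER_INDUSTRY[key]) {
      published[key] = true;
      currIndustry = key;
      publishIndustry(key);     // stand-in for publishing the found industry to PubNub
    }
  }
}
```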


Testing that inPlay tiles are registered when found and that already found tiles are not recounted for the message sent to PubNub.

Step 2: Sketch B – Projection Sketch – Part 1

When the sketch initializes, a logo animation video, vid0, plays on the screen, and a state variable initialized as 0 is set to 1 in readiness for the next state, which will play video 1, a general information video on a found industry.

When the second p5 sketch receives a message from PubNub, it uses the string in the message body indicating the current industry to determine which video to play. The videos are loaded into the sketch in the preload function and played in the body of the HTML page crossthedragon.html. During testing we discovered that we had to hide the videos using CSS and show them only when we wanted to play them, re-hiding them afterwards, because otherwise they would all be drawn onto the screen overlapping each other. When the sketch is loaded, the videos are added to two arrays: one to hold the initial videos and another to hold the secondary videos that provide additional information. The positions in both arrays for each industry are Transport at index 0, Energy at 1, Real Estate at 2, and Finance at 3.

Once a message is received, a function setupProjections(theIndustry) is called. The function takes the current industry from the PubNub message as an argument and uses it to determine which video should be played, setting the values of the globals vid1 and vid2 by pulling the corresponding entries from the two video arrays; e.g. if Transport was found, vid1 = videos1[0] and vid2 = videos2[0].

A function makeProjectionsFirstVid() is then called. This function stops the initial “Cross the Dragon” animation and hides it, then hides vid2 and plays vid1. It then updates the global state variable to 2 in readiness for the second, in-depth informational video.

Note: vid0 only plays when state is 0, vid1 only plays when state is 1, and vid2 only plays when state is 2.
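Put together, the state flow might be sketched as below, assuming p5.js video elements; the function names follow the description above, but the bodies are an illustration rather than the project’s exact code, and the industry-to-index mapping uses placeholder keys.

```javascript
// Three-state video flow: 0 = logo animation, 1 = general video, 2 = in-depth video.
let state = 0;
let vid0, vid1, vid2;          // p5 video elements, loaded in preload()
let videos1 = [];              // general videos: [transport, energy, realEstate, finance]
let videos2 = [];              // in-depth videos, same order

const INDUSTRY_INDEX = { Transport: 0, Energy: 1, 'Real Estate': 2, Finance: 3 };

function setupProjections(theIndustry) {
  const i = INDUSTRY_INDEX[theIndustry];   // which pair of videos to use
  vid1 = videos1[i];
  vid2 = videos2[i];
  makeProjectionsFirstVid();
}

function makeProjectionsFirstVid() {
  vid0.stop(); vid0.hide();     // stop and hide the logo animation
  vid2.hide();                  // keep the in-depth video hidden for now
  vid1.show(); vid1.play();
  state = 2;                    // ready for the mat to trigger the second video
}

function makeProjectionsSecVid() {
  if (state !== 2) return;      // only react to the mat once a first video is up
  vid1.stop(); vid1.hide();
  vid2.show(); vid2.loop();
}
```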

Step 2: Sketch B – Projection Sketch – Part 2: Arduino over serial connection

The second in-depth video is triggered whenever a signal is sent over a serial connection from the Arduino when the viewer interacts with the touch-sensitive mat. Readings from the 3 sensors are sent over a serial connection to the p5 sketch. During testing we determined that using a higher threshold for the sensors produced the desirable effect of reducing the number of messages sent over the serial connection, speeding up the p5 sketch and reducing system crashes. We set the code up so that messages were only sent when the total sensor value recorded was greater than 1000. The message was encoded in JSON format. The p5 sketch parses the message and uses the sensor indicator value passed (either 0 or 1) to determine whether to turn on the second video: 0 means OFF and the video is not triggered; 1 means ON and the video is triggered. The makeProjectionsSecVid() function starts the video: if the state is 2, vid1 is stopped and hidden and vid2 is shown and played on a loop. An isv2Playing boolean is set to true and used to determine whether to restart the video, preventing it from jumping through videos if one is already playing.
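On the p5 side, the serial handling could be sketched roughly as follows, assuming the p5.serialport library; the port name and JSON field name are placeholders for whatever the Arduino actually sends.

```javascript
// Receive JSON lines from the Arduino over serial and trigger the second video.
let serial;

function setup() {
  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbserial-XXXX');   // placeholder port name
  serial.on('data', serialEvent);
}

function serialEvent() {
  const line = serial.readLine();
  if (!line) return;
  let data;
  try {
    data = JSON.parse(line);    // e.g. {"touch": 1} once the summed readings pass 1000
  } catch (e) {
    return;                     // ignore partial or garbled lines
  }
  if (data.touch === 1) {
    makeProjectionsSecVid();    // 1 means ON: trigger the in-depth video
  }                             // 0 means OFF: do nothing
}
```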

Electronic Development 

While choosing materials I decided to use a force-sensitive resistor with a round, 0.5″-diameter sensing area. This FSR varies its resistance depending on how much pressure is applied to the sensing area: the harder the force, the lower the resistance. When no pressure is applied, its resistance is larger than 1MΩ, and it can sense applied force anywhere in the range of 100g to 10kg. To make running power along the board easier I used an AC-to-DC converter that supplied 3V and 5V power along both sides of the breadboard. Since the FSR sensors are plastic, some of the connections came loose in transit, and one of the challenges was having to replace the sensors a few times. When this occurred I would follow up with quick testing through the serial monitor in Arduino to make sure all sensors were active. To save time I soldered a few extra sensors to wires so the old ones could be switched out easily if they became damaged.


Materials for the Interactive Mat Projection

  • Breadboard
  • Jumper cables
  • Flex, Force, & Load Sensor x 3
  • YwRobot Power Supply
  • Adafruit Feather ESP32
  • Wire
  • 4×6 piece of canvas material
  • Optoma Projector
  • 6 x 10k resistors

Video Creation Process

From a database of investment relationships between China and Africa, we extracted information for the four most representative investment fields: transport, energy, real estate and finance. Transport and real estate are very typical, because the two famous parts of China’s infrastructure investment in Africa are railway and stadium construction. In addition, energy is an important part of China’s global investment. The finance field corresponds to the most controversial part of China’s investment: when the recipient country cannot repay a huge loan, it needs to exchange other interests. Sri Lanka’s port is a typical example.

Initially, we wanted to present the investment data in the four fields through infographics, but after discussion we believed that video is a more visual and attractive way to present it, so we made two videos for each field. When visitors find the correct word in a field, they are shown the general situation of China’s activity in the world and in Africa in that field (video 1), including data, locations, timelines and so on. When visitors touch the mat, the projector plays a more detailed video about the field (video 2), such as details of specific projects.

For video 1, we used Final Cut to animate infographics produced in Adobe Illustrator, and added representative project images from the field in the latter half of the video, so that visitors gain a general understanding of the field.

For video 2, we used Photoshop and Final Cut to edit some representative project images from the field, and then added key words about each project to the images, so that visitors have a clear and intuitive understanding of these projects.

The Presentation

The project was exhibited in a gallery setting in the OCAD U Graduate Gallery space. Below are some images from the final presentation night.


Setting up the installation


People interacting with the Cross the Dragon installation

Reflection and Feedback

Many members of the public who interacted with the Cross the Dragon exhibit were impressed by the interactions and appreciated the educational qualities of the project. Many people stuck around to talk about the topics brought up by the videos, asking to know more about the projects, where the information came from and how the videos were made. Others were more interested in the interaction itself, but most participants engaged in open-ended dialogue without being prompted. Overall feedback was positive. People seemed to be really interested in changing the informational video after finding a word in the puzzle. Some participants suggested slowing down the videos so that they could actually read all the information in the text.

For future iterations of this project, we would like to explore projection mapping further so that we can make the interactive mat more engaging. We noticed that once people found out they could touch the mat, they tended to keep touching and exploring it. We had spoken about including audio and text with animation earlier in our brainstorming, and we believe that adding more sensitive areas on the mat to create more interactions would be a good way to include these. It was also suggested that we project the videos onto a wall as well, so that people around the room would be included in the experience without having to be physically at the exhibition station.

References

Code Link on Github – Cross The Dragon

P5 Code Links:

Hiding & Showing HTML5 Video – Creative Coding

Creating a Video array – Processing Forum

HTML5 Video Features – HTML5 Video Features

Hiding & Showing video – Reddit JQuery

Reference Links:

[1] https://learn.adafruit.com/force-sensitive-resistor-fsr/using-an-fsr

[2] http://osusume-energy.biz/20180227155758_arduino-force-sensor/

[3] https://gist.github.com/mjvo/f1f0a3fdfc16a3f9bbda4bba35e6be5b

[4] http://woraya.me/blog/fall-2016/pcomp/2016/10/19/my-sketch-serial-input-to-p5js-ide

[5] https://www.nytimes.com/interactive/2018/11/18/world/asia/world-built-by-china.html

[6] http://www.sais-cari.org/

[7] http://www.aei.org/china-global-investment-tracker/

 

 

 

Sound and Heart Beats – Interactive Mat

music beats & heart beats

Music Beats & Heart beats by Alicia Blakey

 

Music and Heart Beats is an interactive installation that allows users to wirelessly send sounds and interact with a digital record player. Through the installation, you can either send someone a sound beat or a heartbeat. Listening to certain music or the sound of a loved one's heartbeat has been shown to help improve mood and reduce anxiety.

If a user opens the application connected to the interactive record player, they can see when others are playing songs. The digital record player starts spinning when a user interacts with the corresponding app. LED lights at the pin of the record player indicate that music is being played, and the same interaction can also be triggered through the touch sensors.

This art installation also conceptualizes the experience of taking a moment to engage your senses of hearing and touch: a few minutes out of your day to have fun, feel good, and listen to sounds that are good for your body and mind.

 

img_1913

 

 

Ideation

Initially, I had a few variations of this idea that encompassed visuals of music vibrations and heartbeat blips. After the first iteration, the art and practice of putting on a record engaged me more with the act of listening. The visual aspect of watching a record play is captivating in itself; I always notice that after someone puts on a record they stay and watch it spin. There is something mesmerizing in the intrinsic components of this motion. I wanted to create an interaction that was more responsive with colour, light and sound, and by expanding on the cyclical nature of the turntable as a visual, the intent was to create an environment.

 

rough-draft-proj-4

heart-beat-to-song-beat

img_2780

 

Development

While choosing materials I decided to use a force sensitive resistor with a round, 0.5″ diameter sensing area. This FSR varies its resistance depending on how much pressure is applied to the sensing area: the harder the force, the lower the resistance. When no pressure is being applied, its resistance is larger than 1MΩ, and it can sense applied force anywhere in the range of 100g to 10kg. I also used a WS2812B NeoPixel strip enveloped in plastic tubing. The LED strip required 5V power while the Feather controller ran at 3.3V, so to make running power along the board easier I used the breadboard power supply module (the YwRobot listed below), which provides 3.3V and 5V rails along both sides of the breadboard.

 

 

img_2784

Coding 

When initializing the video, testing showed it worked better to have the video sequence sit over the controller by changing the z-index styling. My next step was to apply a mask style over the whole desktop page to prevent clicks from altering the p5 sketch. I styled controller.js to be in the same location on both desktop and mobile so it could share PubNub x/y click locations. The media.js file connects with controller.js for play and stop commands. One of the initial issues was a long loading time for the mobile client; the solution was to set a variable with inline JavaScript that stops the mobile client from running the onload audio function. The mobile and desktop sites only worked on Android, not on iPhone: PubNub would initiate on Android phones, but in the end I could not debug the iPhone issue. If the desktop HTML page was still loading its media.js while a mobile client was trying to communicate with it, the result was unexpected behaviour. A possible solution would be a callback on the desktop side that tells the mobile client when it has finished loading.
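A minimal sketch of that proposed ready signal, assuming the PubNub JavaScript SDK used for the messaging; the keys, channel name and message shape below are placeholders rather than the project's actual code:

```javascript
// Hypothetical ready signal between the desktop and mobile clients over PubNub.
const pubnub = new PubNub({
  publishKey: 'pub-key-here',   // placeholder keys
  subscribeKey: 'sub-key-here',
  uuid: 'desktop-client'
});

// Desktop: announce that media.js has finished loading.
window.addEventListener('load', () => {
  pubnub.publish({ channel: 'record-player', message: { type: 'desktopReady' } });
});

// Mobile: wait for the ready message before sending play/stop commands.
pubnub.subscribe({ channels: ['record-player'] });
pubnub.addListener({
  message: (event) => {
    if (event.message.type === 'desktopReady') {
      // safe to start publishing { type: 'play', x, y } messages now
    }
  }
});
```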

 

 

 

screen-shot-2018-11-28-at-3-38-34-pm

 

Materials

  • Breadboard
  • Jumper cables
  • Flex, Force, & Load Sensor x 3
  • YwRobot Power Supply
  • Adafruit Feather ESP32
  • Wire
  • 4×8 Canvas Material
  • Optoma Projector
  • 6 x 10k resistors
  • 3.2 ft plastic tubing

 

I decided to use a breadboard instead of a protoboard this time because the interactive touch-sensitive mat was large. In order for the prototype to remain mobile I needed to be able to disconnect the LEDs and the power converter; it was easier to roll up the mat this way and quickly reconnect everything. Since I was running over 60 LEDs, I used a 9V power supply running through the converter. I originally tested with the 3.7k resistors but found the sensors were not very responsive. I then replaced them with the 10k resistors and tested again; the mat's readings varied over a much greater range and were more accurate.
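A quick note on why the resistor value matters so much, assuming the standard voltage-divider hookup from the Adafruit FSR guide in the references (FSR between the supply and the analog pin, fixed resistor from the pin to ground): the analog pin reads Vout = Vcc × Rfixed / (Rfixed + R_FSR). With a 10k fixed resistor and the FSR swinging from over 1MΩ at rest down to a few kΩ or less under firm pressure, Vout sweeps across most of the 0–Vcc range; with a much smaller fixed resistor the output stays near zero until the press is very hard, which matches the unresponsive behaviour before the swap.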

 

The outcome of my project was interesting: people were really absorbed in just watching the video projected onto the interactive mat. Being able to control the LEDs was a secondary interaction that users seemed to enjoy, but simply watching the playback while listening to music seemed to induce a state of calm and happiness. The feedback and response to the installation was very positive, and it was noted that the projection was hypnotic in nature; the installation was designed to bring a state of calm and enjoyment. Although the LEDs were very responsive with the touch sensors, there was some flicker on the LEDs, which I think was due to the converter failing. I had purchased it used, but after using the YwRobot converter I would buy new for other projects. Other comments suggested adding another interaction into the p5.js sketch so that users could control the motion of the record in the video with the sensors. The overall reaction was very promising for this prototype. I'm extremely happy with the conclusion of this project; there was the definitive emotional reaction that it was designed for.

https://github.com/aliciablakey/SoundBeats2HeartBeats.git

 

screen-shot-2018-11-27-at-12-36-24-am

 

References

https://learn.adafruit.com/force-sensitive-resistor-fsr/using-an-fsr

http://osusume-energy.biz/20180227155758_arduino-force-sensor/

https://gist.github.com/mjvo/f1f0a3fdfc16a3f9bbda4bba35e6be5b

http://woraya.me/blog/fall-2016/pcomp/2016/10/19/my-sketch-serial-input-to-p5js-ide

 

 

 

 

 

 

 

Winter Is Here!

Experiment 3: Winter is Here

by Tabitha Fisher

Description

Winter is Here is a multimedia installation that fully embraces the inevitable. The project uses a p5 snowflake simulator in combination with an Arduino, which allows the user to “turn up the dial” on winter for a fully immersive experience.

Code

https://github.com/tabithadraws/winter

Process Journal

Step one: make a thing control another thing. A daunting task when you can barely understand your previous things. See, in the last two assignments I had the benefit of working with some very smart people. We balanced out each other's strengths and weaknesses. I can delegate, I can facilitate and I can ideate, but I'm not great at doing when I don't know what I'm doing. I am also not great at retaining information without having time to let it sink in. Which is a problem when that is the guiding philosophy of this particular course. To rephrase that Jenn Simmons quote, the only skill I know is how to identify what I don't know… and then what???

I began this project knowing that I had to keep it simple. My goal was to have something as soon as possible with the thought that at any moment this can be taken away. Basically, working from the general to the specific. It’s a philosophy I use while drawing/writing/animating – the idea that your work is never truly finished so you must have something to show at every stage of the process. It’s not meant to be as melodramatic as it sounds.

I knew I wanted to understand the code. In the previous group projects I understood the theory behind some of the code, but I couldn’t confidently explain how it works. So, rather than start with some lofty plan that I couldn’t possibly execute I decided to work from what was already working as a basis for my experiment. My goal was to create a project that I could slowly take the time to understand.

In class we were learning about JSON Protocol and how it allows the Arduino and P5 to talk with each other. There was an in-class example that used the mouse positioning on-screen to control two LEDs. After a few tries and a bit of help I was able to get it working, which was quite exciting. I also happened to have my potentiometer hooked up from a previous exercise, and it was at this point where I realized how I can have multiple devices on the same breadboard without having them mess each other up. There was also this serial port thing that was fairly new – in order for JSON to work we had to use this serial control software and add the serial port name into our code. I managed to get that working too. Knowing you can do something in theory is very different from actually doing it yourself.
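The p5 side of that pattern looks roughly like the sketch below, assuming the p5.serialport library with the serial control app running; the port name and the JSON key are placeholders, and the part of the in-class example that sends mouse positions back to the Arduino for the LEDs is left out.

```javascript
// Simplified sketch: read a JSON line like {"pot": 512} sent by the Arduino.
// Requires the p5.serialport library and the p5.serialcontrol app.
let serial;
let potValue = 0;
const portName = '/dev/tty.usbmodem14101'; // placeholder serial port name

function setup() {
  createCanvas(400, 200);
  serial = new p5.SerialPort();
  serial.open(portName);
  serial.on('data', gotData);
}

function gotData() {
  const line = serial.readLine(); // one newline-terminated JSON string
  if (!line) return;
  try {
    potValue = JSON.parse(line).pot; // 0–1023 from analogRead on the Arduino
  } catch (e) {
    // ignore partial lines
  }
}

function draw() {
  background(220);
  text('pot: ' + potValue, 20, 40);
}
```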

Fig. 1 – Two things on the thing and nothing exploded!

After this I thought since I’ve already got a potentiometer on here I should try to get it to control some stuff. In the p5 examples page there was a particle simulation for falling snowflakes. I ran this idea past Nick and we discussed possibilities for displaying the project. I thought it would be pretty lame to just have it on a laptop and shake it like a snow globe, so Nick suggested that I check out what’s available in the AV Rental room. That’s when I learned all about short throw and long throw projectors!

At this point, I had to remember how to get the P5 example to display on my computer. Basic, I know, but at this point we had tried out so many other new things in class that it was a struggle to remember the steps. Oh yeah… I had to make a new folder with an index.html document and copy the code into a javascript document. And open the index.html in Chrome. Baby steps, people…

Fig. 2 – Mockup of my dial that I used from a class example.

I got the snow working but I had no idea how to control it – or even begin trying to figure out that information. We had been given a document from the in-class assignment that I had already loaded into my Arduino but I wasn’t sure how to figure out the code. A few classmates had some ideas but I wasn’t able to fully follow along, which had me even more confused. One of them even got the snow to fall but I didn’t understand how they did it and there were way too many snowflakes, and I couldn’t fix that either. There were many challenges such as remembering the shortcut to access the console – not to mention remembering it was even called ‘console’ so I could google it and find out for myself. At this point my n00b brain was completely fried as I came to the realization that all my computer knowledge up until this point has relied solely on an understanding of a few select programs and outside of them I am a helpless baby kitten.

Fig 3. – Way too many snowflakes, my computer is dying! But at least the potentiometer is working!

When I spoke with Nick, he suggested that I put the Arduino away for now and spend the weekend trying to understand the code as well as the controls for the snowflake properties. Could I figure out how to change the number of snowflakes? How about the size? So I went home and adjusted some of the code on my own. To start, I knew I wanted to make the screen larger than a little rectangle, so I played around with the canvas size. Then I changed the colour to blue! Next, I wanted to try adding an image, but the usual method wasn't working.
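The kinds of tweaks described here look something like the sketch below: a simplified version of the p5.js snowflake example linked in the references, not the original code, with the canvas size, colour and numbers chosen just for illustration.

```javascript
// Simplified snowflake sketch with the tweaks described above.
let snowflakes = [];
let density = 2; // flakes added per frame; later this can come from the dial

function setup() {
  createCanvas(windowWidth, windowHeight); // bigger than the example's small rectangle
}

function draw() {
  background(20, 60, 140); // blue!
  for (let i = 0; i < density; i++) {
    snowflakes.push(new Snowflake());
  }
  for (let i = snowflakes.length - 1; i >= 0; i--) {
    snowflakes[i].update();
    snowflakes[i].display();
    if (snowflakes[i].y > height) snowflakes.splice(i, 1); // drop off-screen flakes
  }
}

class Snowflake {
  constructor() {
    this.x = random(width);
    this.y = -10;
    this.size = random(2, 6);
    this.speed = random(1, 3);
  }
  update() {
    this.y += this.speed;
    this.x += random(-1, 1); // a little sideways drift
  }
  display() {
    noStroke();
    fill(255);
    ellipse(this.x, this.y, this.size);
  }
}
```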

Fig 4. – Bigger canvas! Blue canvas!

During Explorations class I tried to add an image by putting one in my library and writing its filename into the code, but nothing was happening. I found an example on the p5.js site that described how to load and display an image by using the command line to create a simple server. It was not working as described, so I abandoned that option and took Kate's advice to host my sketch on the OCADU server for now. I just wanted to see if the image was working!
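For reference, the usual p5.js pattern for a background image is the one below; the filename is a placeholder, and the catch is that loadImage() only fetches the file when the sketch is served over HTTP (a local or remote server) rather than opened straight from the file system.

```javascript
// Load an image before setup() runs, then draw it every frame.
let bg;

function preload() {
  bg = loadImage('winter-scene.jpg'); // placeholder filename
}

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function draw() {
  image(bg, 0, 0, width, height); // stretch to fill the canvas
}
```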

Fig 5. – Oh yeah, remember cyberduck? What a quack… #dadjokes

After a bit of tinkering I got it to do the thing! Meaning, my image was on screen and my snow was falling. Super great. Except this wasn’t a permanent solution since Arduino can’t talk to my website over the OCAD server… le sigh.

Fig 6. – Webspace snow video

So I put aside the idea of working with the image on screen for the moment and focused on trying to get the Arduino to speak to the p5 sketch. Which had been working a few days ago. But when I sat down to recreate the interaction I couldn't remember what I needed to do. I knew I had to load something into the Arduino, but what? I had taken notes but suddenly they no longer made sense. Was that something located in my project folder? Weren't they supposed to be .ino files – and how come I wasn't seeing one of those? And how come when I tried to load my index.html page I got… these horrible grey streaks??

Fig 7. – The saddest snow

At this point I was very concerned that I had somehow ruined the whole thing. It was already Wednesday and the project was due on Friday. Going back to my work ethos that you should always be at some form of “done” for every stage of a project… at this point, it seemed as though I had no project at all. Dark times. I spent hours fussing with the code, trying to get it back to where it was a few days earlier but to no avail. I had a meeting with Nick and Kate that afternoon where I was hoping to discuss presentation possibilities but at that point I had nothing to show and no idea how to fix it. I really can’t recall a time where I felt more lost on a project.

Nick got me back on the right track by starting over with the original p5 example. It appears that I had just messed up something in the code, but within a few minutes he was able to get it going again. I was relieved but also pretty frustrated that I wasn’t able to figure it out on my own. He also pointed out the videos on Canvas that showed us how to make a server on our computers. That’s exactly what I needed to run the sketch with my images. Then it was time to head off to work for the evening, so any testing would have to wait until the following day.

Fig 8. – Building it up again

I was able to spend the next day applying these changes in the time between my other classes, and for the first time I was able to get through it all smoothly. I was simply retracing my steps, but this time I was understanding why I was doing them. When my Arduino failed to speak with my sketch I knew to re-select the correct serial port. I had a better understanding of the purpose of the index.html file vs the javascript file. I swapped my old background image for a new one. I knew where to go if I wanted to upload the .ino file to my Arduino. I felt as though I was controlling my project rather than allowing it to control me!
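Putting the pieces together, the dial-to-snow connection amounts to something like this hypothetical glue code, reusing the potValue and density names from the earlier sketches:

```javascript
// Map the 0–1023 dial reading to how many snowflakes spawn each frame.
function draw() {
  density = floor(map(potValue, 0, 1023, 0, 8)); // turn the dial, turn up the snow
  background(20, 60, 140);
  // ...snowflake spawning and drawing as in the earlier sketch...
}
```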

Fig 9. – Hey, things are working!

Fig. 10 – Thanks, coding train!

The best part was getting the image up onto the wall. By chance I had positioned the projector so that it was tilted towards the corner of the room and when I turned it on the image came to life. Because the image was a nature scene it created the feeling of a little immersive world. A simple one, but still. The projector took it from what is essentially a 90s-era screen saver to an actual installation project. I've never made such a thing before. Generally, I don't make ‘art’. I make assets that are part of a great big commercial venture (animated tv!). Or, if I make something for myself I do so using methods I already know (films! sketchbooks!). But nothing that I've made has ever been in a gallery. I have always wished I could do this, but modern forms of installation art have always seemed so mysterious. Nuit-blanche type stuff for fancy art folks. How do they come up with those ideas? What are the guiding principles behind this style of making?

Fig. 11. – My baby on the big screen!

I can draw stuff and I can draw it pretty well, but DF requires an entirely different set of skills. It would have never dawned on me to consider the effects of scale on a projected image unless it was a comparison between a film screened in a festival vs on a phone. I remembered back to the second project when Kate described the opportunities of working with physical materials. In terms of code the project may be technically simple but the way we use the physical environment can turn the project into something special. I knew I wanted to create this immersive snow environment but when I saw my classmates react so positively to the projection I thought it could be about revelling in the thing we dread… winter… with its dark days and icy sidewalks. My project could be about embracing the best sides of winter and all the things that make it special. Cozy scarves and hats. Hot drinks and chocolate chip cookies. A little holiday ‘muzak’ for ambiance. The comfort of the familiar. A metaphor for this particular journey of mine, perhaps?

Fig. 12 – Toques!

Presentation

On the morning of the presentation I gathered the last of my supplies and left time to set up and ensure that my project was running properly. One of my classmates had suggested using Incognito while working on the project to ensure that my changes updated properly in the browser. It also had the side benefit of being quite dark, which helped it blend in with the rest of my image while I was presenting. In a moment of brazen recklessness I decided to pull the yellow and blue LEDs from my breadboard moments before the start of class. They weren't exactly needed anymore, and I felt that I understood my setup enough to do it with confidence. Thankfully I was right. Then, glorious synchronicity. I learned that another classmate had brought an office-sized vessel of coffee to share which they generously donated to my cause – I was planning to pick up something similar for my installation. I had grabbed some hats and scarves at Black Market and found some tasty-looking cookies at Rabba to share. They need to be tasty or what's the point?

When it was time to present I cranked the Bublé tunes and found myself feeling… strangely exposed. My project was not nearly as sophisticated as the work of my classmates. It had taken me two weeks to use a preexisting snow simulation and make it work with a dial. What’s so special about that? Could you even call it an experiment? Well, for me it was and here’s why. Up until the moment I started at DF my value as an artist (and maybe as a person?) has been measured by my ability to draw. This has always been my currency. It’s at the very core of me, but in a way that’s very limiting. I came to this program to explore the unexplored and expose myself to methods of working that I know nothing about. Well, coding is one of those things.

Interestingly, during the critique Kate made the point that she wished I had used some of my drawings on this project. I agree that would have been great, but admittedly it hadn't crossed my mind because my last few months of schooling have been about exploring worlds beyond that person. It is very easy for me to dress up one of my projects with a nice drawing. I know that's not how she meant it, but I wanted to have a reason for using my drawings and I hadn't quite arrived there yet. In a way I think I needed that reset. Maybe it's silly to purposefully disengage from that part of myself but I was hoping that I'd benefit from the distance. Like going away on a very long holiday to somewhere completely new, only to return with a newfound appreciation for the familiar. Having gone through this process I now feel that I'm ready to reimagine what's possible.

Fig. 13 – Final Snow

 

Context

Underwater Aquarium – Windows 98

Once I started working with the P5 snow example I noticed how it gave off a 90's screensaver vibe. I really love that kitschy aesthetic. While I wasn't able to fully explore the options here because I was so caught up with managing the basics, I kept them in mind when selecting the image. I have fond memories of those fish. https://www.youtube.com/watch?v=5j5HA3Z8CZQ

Office party photo booth

These days it seems as though every office holiday party needs to be equipped with some kind of photo booth. My favourite part about it is the fanciness of the suits in contrast with the silliness of the props. When I think of an installation this is what comes to mind – probably because I’ve experienced more office holiday parties than immersive art projects. But there’s an earnestness to it all that I love very much.

 

Winter as a national tragedy

I am very interested in the way citizens of a large city collectively gripe about specific topics throughout the year. It seems as though everything is always the worst. Winter especially. Maybe it’s somehow cathartic for us to perform this strange ritual at the turn of each season?

 

References

Original snowflake simulation example:

https://p5js.org/examples/simulate-snowflakes.html

 

Instructions on loading images (not super useful, however):

https://p5js.org/examples/image-load-and-display-image.html

 

Making a local server for the P5 Sketch:

https://www.youtube.com/watch?v=UCHzlUiDD10

https://www.youtube.com/watch?v=F6tP3joL90Q

 

Tiny Trotters

 

screen-shot-2018-10-29-at-10-41-28-pm

 

 

 

 

A digital spin on an old-fashioned toy. Push toys are meant to help and encourage children to walk more by offering a fun interaction. Tiny Trotters is an interactive push toy with a light-up pixel indicator in the wheel. When the toy is around others it becomes a game that instils togetherness. Like a stop light for walking at night, Tiny Trotters are meant to be used in unison: when the toys are together they glow green, and if they haven't connected for a short period of time they turn yellow and then red to signal that it's time to go back. Because the red indication uses a bright LED when the toys veer away from each other, it can also be considered a safety feature for children who wander off.
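The togetherness logic can be sketched as a simple timer-based state machine; the thresholds below are made up, and the actual toy would run equivalent logic on its microcontroller.

```javascript
// Illustration of the togetherness timer (values are placeholders).
const YELLOW_AFTER_MS = 10000; // no contact for 10 s -> yellow
const RED_AFTER_MS = 20000;    // no contact for 20 s -> red

let lastContact = Date.now();  // updated whenever the toys detect each other

// Call this from whatever proximity check the toys use (IR, radio, etc.).
function toysConnected() {
  lastContact = Date.now();
}

// Decide the wheel colour from how long the toys have been apart.
function wheelColour(now = Date.now()) {
  const apart = now - lastContact;
  if (apart < YELLOW_AFTER_MS) return 'green';
  if (apart < RED_AFTER_MS) return 'yellow';
  return 'red';
}
```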

screen-shot-2018-10-29-at-10-43-49-pm

 

screen-shot-2018-10-29-at-10-45-18-pm

screen-shot-2018-10-29-at-10-46-43-pm

screen-shot-2018-10-29-at-10-48-10-pm

screen-shot-2018-10-29-at-10-49-59-pm

screen-shot-2018-10-29-at-10-51-31-pm

screen-shot-2018-10-29-at-10-52-25-pm

screen-shot-2018-10-29-at-10-53-49-pm

screen-shot-2018-10-29-at-10-55-19-pm

screen-shot-2018-10-29-at-10-57-03-pm

 

screen-shot-2018-10-29-at-11-15-27-pm

screen-shot-2018-10-29-at-10-58-30-pm

screen-shot-2018-10-29-at-11-17-31-pm

screen-shot-2018-10-29-at-11-21-00-pm

screen-shot-2018-10-29-at-11-02-01-pm

 

https://github.com/aliciablakey/Pin-wheel.git

 

REFERENCES

http://www.seeeklab.com/en/portfolio-item/
https://m.youtube.com/watch?v=RKBUGA2s9JU
https://www.teamlab.art/w/resonatingspheres-shimogamo/
http://www.cinimodstudio.com/experiential/projects/dj-light#videoFull
https://www.arduino.cc/en/Tutorial/MasterReader
https://www.youtube.com/watch?v=t3cXZKBO4cw
https://www.instructables.com/id/Arduino-Photoresistor-LED-onoff/
https://arduinomylifeup.com/arduino-light-sensor/
https://www.youtube.com/watch?v=CPUXxuyd9xw
http://www.electronicwings.com/arduino/ir-communication-using-arduino-uno
https://www.instructables.com/id/Infra-Red-Obstacle-Detection/

 

 

 

 

 
