
The Lake


A kayak paddle simulation examining kinetic motion with gyroscopic metrics in a raycast virtual space.

developed by Tyson Moll

JavaScript raycasting engine based on a demo by Bryan Ma
Game includes Flickering, a song contributed by Alex Metcalfe

GitHub

Ideation Phase: October 29th – November 2nd


The initial concept I wanted to explore for this experiment was the idea of a game controller with an unintuitive approach to engagement with digital content, ideally in a multiplayer context. I explored different forms that a game controller could take or receive data by sketching ideas out on paper, as well as techniques implemented in several genres of games that I considered exploring for the project:

Footage from Mario Party, released in 1998.

In “Paddle Battle”, a mini game from the original Mario Party for the Nintendo 64, players ‘co-operate’ to move a vehicle whilst simultaneously trying to inconvenience the opposing side of the vessel. This idea of simultaneous co-operation and villainy was very enticing to me, as was the concept of utilizing a paddle or stick instead of a typical control scheme.

Trailer for Starwhal, originally published in 2013.

In Starwhal, fluorescent space narwhals fight to stab each other in the heart with their tusks. Interestingly, the movement is accomplished by accelerating the presumably aquatic mammals and diverting their movements from the level’s surface: a physics-controlled playground. This led me to explore the possibility of incorporating a physics engine written for JavaScript called matter.js and, by association, 3D objects in three.js. Having explored similar concepts in the past with GameMaker, the idea of creating some sort of game in this vein felt exciting.

Video footage of Battletoads, 1991.

In the Hyper Tunnel segment of the Nintendo Entertainment System title Battletoads, 1 to 2 players must race at ridiculous speeds while dodging miscellaneous obstacles along the way. In terms of gameplay, this is accomplished by moving the player character between two vague ‘lanes’ on the surface of the level in addition to the airspace above. Another lane-based title that sparked my imagination was Excitebike, which had the interesting mechanic of featuring a temperature gauge: you can accelerate faster at the push of a button, but if the engine overheats you are forced to wait until it cools down before returning to the race. Pondering these ideas alongside indie title Helltour Cross Country and the item mechanics of Mario Kart, I decided to pursue this idea with my initial work.

Exploratory Phase – November 3rd to November 5th


The initial concept for the controller was to incorporate a strong forward-and-backward thrusting motion to drive the acceleration of a vehicle, with the ability to tilt the controller to change lanes and two buttons to perform actions in the game. The tilting motion, I imagined, would be driven by wrist movement, functioning like a motorcycle throttle through forward or planar rotation. In consultation with my peers, I revised the controller design so that the controller would tilt in the direction in which the player changed lanes.

Another struggle in designing this controller was measuring the distance the handle of the controller traveled; would it be solved mechanically? I pondered the possibility of using gears and a rotation-based potentiometer, separately and in conjunction with each other. But reflecting on the functionality of the ultrasonic sensors used in several of my classmates’ previous experiments, I realized that the mechanical approach to collecting these metrics could be cheated by mounting the sensor to the handle of the device and directing it towards a flat surface in a controlled environment.

In the meantime, I wrote some pseudo-code and created a Photoshop mock-up to plan how I wanted my game to be organized. I separated game states between titles, transitions and gameplay and detailed how I would accomplish gameplay mechanics in code. This all ended up being scrapped after I started playing around with physical materials on November 6th.

Prototyping Phase – November 6th to November 8th

Starting the day fresh Monday morning, I made my trek to Home Hardware to purchase the physical materials necessary to produce the controller I envisioned with PVC piping. The hardware store had an assortment of pieces, some that fit snugly together and others that were simply incompatible. In selecting the pipe pieces, I had to consider how I could manage to fit electrical components inside as well as how snug the pieces were; looser pieces invited a rotation mechanism, whereas stiffer connections promised more permanent fixtures. Ultimately, I purchased enough piping to produce two ‘joysticks’, with legs to support the yoke within a box that the handle could slide through. I also purchased two sheets of corrugated plastic and two wooden dowels, envisioning some sort of salvage-based post-apocalyptic racing game with matching controllers.

I experimented by manipulating the parts I purchased to inform how the device might be operated, or whether a new approach would be more fitting or interesting with the pieces. Sliding the piping along the wooden dowel inspired me to investigate how this smooth motion could manifest as a controller. Some of the ideas that came to mind were a trombone, a rifle action, and a submarine lookout shaft. The experience of physically operating the pieces proved to be much more interesting than the process of imagining with sketches, as the ideas generated seemed grounded in the physical properties I had available to manipulate. One of the last ideas I played with was operating the dowel connected to the piping like a paddle. The smooth, intuitive nature of the movement created with the pieces was surprising, and the success of the motion inspired me to change course with my project scope towards producing a paddling controller.


Perhaps still grounded by my first concept for the experiment, I envisioned a transition from racing vehicles to kayaks cruising through mangroves. I contacted Alex Metcalfe about creating a soundtrack based on a mockup image I produced for the project. The track he provided, ‘Flickering’, helped inspire the mood and atmosphere of the project as it developed. Foliage would delineate the direction the player would be required to paddle, and an image of the kayak bow would adjust according to the motions received from the controller. Moving towards the left, for example, would cause the foliage to stretch horizontally across the canvas as if the kayak were moving towards it. As I developed a mockup for this concept, I grew concerned about the level of detail that would be required to render the perspective of the objects, as well as about forcing the player to follow a path ‘on rails’ instead of affording them the freedom of a 3D space.

Screenshot from DOOM, 1993 (Image Source)

At this stage I was inspired to look at DOOM’s approach to 3D. Published in 1993, DOOM was one of the first commercially successful 3D games and helped popularize the fast pseudo-3D rendering techniques associated with raycasting (it was also less than 2.5 MB in size). The size of a 3D element is determined by its distance from the player object, drawn as a series of vertical lines centred on the middle of the screen. Rays are based on the concept of the path traveled by light: the player object casts lines between itself and the nearest object to determine how close that object is, and lines are drawn on the screen accordingly. The closer the object, the taller the line. What results is a 3D perspective of the world through a fish-eye lens, corrected with the aid of trigonometry to better emulate human vision.


After studying several examples in JavaScript and C, I eventually came upon a texture-less demo created by Bryan Ma that incorporated the technique into the context of p5.js. Although several examples provided means of incorporating wall textures and sprite display, I didn’t find the time to explore these additional features in depth. Ma’s technique essentially takes a two-dimensional array of numbers (called a Map) and converts them into walls and voids, scaled and spaced appropriately for the 3D context. The values in the array map are used to determine where walls exist in the 3D environment as well as what type of material they are. The draw function then uses the raycasting method described above to create the 3D environment in a 2D context (without the use of WebGL, although the added benefit of hardware acceleration might have been advantageous). The demo was designed to be controlled with the arrow keys, leaving me the challenge of incorporating the serial input provided by Kate and Nick into the project, replacing the arrow key controls with paddle-operated controls, and adding further aesthetic features.
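
To illustrate the core idea (this is not Ma’s actual code; the map values, constants and rotation are placeholders), a minimal untextured raycaster in p5.js might look something like this: each screen column casts one ray into a 2D map array, marches it forward until it hits a wall cell, and draws a vertical slice whose height shrinks with the fish-eye-corrected distance.

```javascript
// Minimal sketch of a grid-based raycaster in p5.js (illustrative values only)
const MAP = [            // 1 = wall, 0 = open water
  [1,1,1,1,1,1,1,1],
  [1,0,0,0,0,0,0,1],
  [1,0,1,0,0,1,0,1],
  [1,0,0,0,0,0,0,1],
  [1,1,1,1,1,1,1,1],
];
const FOV = Math.PI / 3;      // 60-degree field of view
let px = 2.5, py = 2.5;       // player position in map units
let pa = 0;                   // player heading in radians

function setup() {
  createCanvas(640, 360);
}

function draw() {
  background(30);
  const numRays = width;                 // one ray per screen column
  for (let col = 0; col < numRays; col++) {
    // Angle of this ray relative to the player's heading
    const rayAngle = pa - FOV / 2 + (col / numRays) * FOV;
    // March the ray forward in small steps until it hits a wall cell
    let dist = 0, hit = false;
    while (!hit && dist < 16) {
      dist += 0.02;
      const rx = px + Math.cos(rayAngle) * dist;
      const ry = py + Math.sin(rayAngle) * dist;
      if (MAP[Math.floor(ry)] && MAP[Math.floor(ry)][Math.floor(rx)] === 1) hit = true;
    }
    if (!hit) continue;
    // Correct the fish-eye effect: project the distance onto the view direction
    const corrected = dist * Math.cos(rayAngle - pa);
    // Nearer walls produce taller slices
    const sliceHeight = Math.min(height, height / corrected);
    stroke(255 - Math.min(200, corrected * 40));   // fade walls with distance
    line(col, height / 2 - sliceHeight / 2, col, height / 2 + sliceHeight / 2);
  }
  pa += 0.003;   // slow idle rotation so the effect is visible without input
}
```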

I manipulated Creative Commons licensed stock footage of water using Adobe Premiere and drew it into the scene behind the raycasted elements, from the horizon to the base of the window, in order to create the illusion of moving water. The initial output from Premiere caused considerable lag (and couldn’t be uploaded to GitHub due to its size) and was scaled down accordingly to 480p. Interestingly, this created the strange visual illusion that the waves are moving up and down against the walls of the maze. I also created a white gradient overlay over the screen to contribute to the relaxing, exploratory vibe I envisioned for the project. I decided to scrap the idea of rotating an image of the kayak’s bow, since the movement of the vessel would be evident by the perspective on screen.
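
As a rough sketch of that backdrop approach (the file path, overlay colour and layout values are placeholders, and the project used a gradient rather than the flat white wash shown here), the looping water footage can be drawn onto the canvas beneath the raycast geometry like so:

```javascript
// Water backdrop sketch: hypothetical asset path, assuming the p5.js DOM functions
let waterVideo;

function setup() {
  createCanvas(windowWidth, windowHeight);
  waterVideo = createVideo('assets/water-480p.mp4');  // placeholder file name
  waterVideo.volume(0);    // muted, so browsers allow it to autoplay
  waterVideo.loop();
  waterVideo.hide();       // hide the DOM element; we draw its frames onto the canvas
}

function draw() {
  background(200, 220, 255);
  // Draw the looping water footage from the horizon line down to the bottom
  // of the window, behind where the raycast walls are drawn
  image(waterVideo, 0, height / 2, width, height / 2);

  // ... raycast walls would be drawn here ...

  // Soft white wash over the whole scene (the project used a gradient overlay)
  noStroke();
  fill(255, 40);
  rect(0, 0, width, height);
}
```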


Having created a compelling environment for the project, I proceeded to finish creating the physical device. The cavities inside the piping made it challenging to accommodate the potentiometers. Where would they be placed, given the shaft moving through the piping and the moving elements of the device? How would these sensitive electronics be protected from damage while the device was being manipulated? I was thankful that the ultrasonic sensor saved me the hassle of preparing a physical mechanism to measure the shaft’s extension when paddling the device; I created ‘splash guards’ on both ends of the shaft to keep the user’s hands from interfering with the ultrasonic signal, which doubled as a mechanism to cleanly reflect the sonic signal back to the sensor. After testing various options for positioning the potentiometers, this idea of cheating the challenge of physically measuring the paddle’s manipulation brought me to the concept of using a gyrometer to capture the necessary data. It was a sort of eureka moment when I realized that all the motions I hoped to record were essentially three-dimensional rotations about a central pivot point. I immediately began to test this alternative… the solution to all my problems! All but one, at least.


I had recently purchased several clone Arduino Nano units on eBay (drivers are here). I have worked with the clones in the past with Alex Metcalfe on a project, as the devices are very affordable and typically function identically to the authentic device. With the multiplayer concept in mind, I developed a modification of the serial code provided by Kate and Nick to accept multiple serial inputs during the Ideation Phase. Unfortunately, the units are incompatible with the library code for the Adafruit gyrometer I used in the project. Along with concerns regarding performance with the raycasting engine, I decided at this juncture to abandon the prospect of multiple controllers and focus on developing a single working paddle with the authentic micro-controller. I had also yet to grapple with how I could divide the visible screen between two participants, on top of the added challenge of having to build a second controller. Sometimes it’s best to focus on a completed project over a broad scope.

After solving the incompatibility issue, my gyrometer and ultrasonic sensor succeeded in sending the requisite data to p5.js (with the aid of the serial monitor software provided to us for this experiment). I strapped the sensors to two adhesive mini breadboards on the central piping and created a box around the sensors and the Arduino Micro. I didn’t have enough of these mini breadboards to accommodate all three of the electrical components, so I soldered two headers to a circuit board to provide the Arduino with a temporary connection to the controller. Afterwards, I used the corrugated plastic, some sheet brass and some duct tape from Home Hardware to house the devices. Prototype aesthetics!


After collecting some measurements of the angle and distance readings I received from the device, I created a function to interpret the controller movements. When paddling a kayak, drawing the paddle against the water creates forward motion but also rotates the vessel. The paddle can also be operated backwards, or held in the water while moving to slow down and make quick turns. To detect movement based on the gyrometer’s readings, I tracked whenever the X orientation of the device changed by more than a specific threshold, and whether the direction was a forward or backward motion. I used the Y orientation to determine whether the device was tilted to the left or right (or in a neutral, centre position), and I used the ultrasonic sensor’s shaft-length reading to determine whether the paddle was submerged in the water; without this reading, it would be impossible to tell whether the long end of the paddle was in the water or swinging around in midair. If the paddle was in the water and moving, a paddle motion was applied. Similarly, if the paddle wasn’t moving but was in the water, the vessel would rotate and slow down (a ‘brake’ motion).
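
A minimal sketch of that interpretation logic might look like the following. The names, thresholds and the direction of the ultrasonic comparison are all assumptions for illustration, not the project’s actual values:

```javascript
// Hypothetical paddle-interpretation function (plain JavaScript, p5.js not required)
const SWING_THRESHOLD = 4;   // minimum change in X orientation (degrees) counted as a stroke
const SUBMERGE_DIST = 25;    // ultrasonic shaft reading (cm) treated as "blade in the water"
const TILT_THRESHOLD = 15;   // Y orientation (degrees) that counts as tilting left/right

let prevGyroX = 0;

// Called whenever a new set of readings arrives over serial
function interpretPaddle(gyroX, gyroY, shaftDist, boat) {
  const delta = gyroX - prevGyroX;
  prevGyroX = gyroX;

  const inWater = shaftDist > SUBMERGE_DIST;          // long shaft reading = blade dipped
  const swinging = Math.abs(delta) > SWING_THRESHOLD; // the X orientation is changing
  // Tilt direction tells us which side of the kayak the blade is on
  const side = gyroY > TILT_THRESHOLD ? 1 : (gyroY < -TILT_THRESHOLD ? -1 : 0);

  if (inWater && swinging && side !== 0) {
    // A stroke: forward or backward depending on the sign of the swing
    const direction = Math.sign(delta);
    boat.targetSpeed += direction * 0.5;
    boat.targetRotation += side * direction * 0.05;   // stroking on one side turns the boat
  } else if (inWater && !swinging) {
    // Blade held still in the water: brake and turn toward that side
    boat.targetSpeed *= 0.8;
    boat.targetRotation += side * 0.08;
  }
}
```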

The next step was to create the effect of acceleration and friction; with consistent paddling, I wanted the vessel to speed up, yet slow down over time when left idle. I found the p5.js function lerp() to be of considerable help in accomplishing this: it takes two numerical values and returns an intermediate value based on a third argument between 0 and 1. I tracked the current speed and rotation of the boat as well as a target speed and rotation determined by the paddling actions recorded, adjusting the current values towards the target values using the lerp function. I also used the round() function in order to discard excessive digits in float values; the major benefit of rounding is the guarantee that values produced by math such as lerp eventually reach their targets instead of constantly approaching them (consider the mathematical concept of a ‘limit’). I then applied a subtractive ‘friction’ value to the speed in order to eventually drive the value to zero.
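
Condensed into a sketch (variable names and constants are illustrative, not the project’s actual code), the easing, rounding and friction steps might read like this:

```javascript
// Illustrative easing/friction update, using p5.js's lerp() with hypothetical values
let speed = 0, targetSpeed = 0;
let rotation = 0, targetRotation = 0;
const EASE = 0.1;       // third argument to lerp(): how quickly current chases target
const FRICTION = 0.01;  // constant drag applied every frame

function updateMotion() {
  // Ease the current values toward the targets
  speed = lerp(speed, targetSpeed, EASE);
  rotation = lerp(rotation, targetRotation, EASE);

  // Round away floating-point dust so the values actually reach their targets
  // (the post uses p5's round(); scaling with Math.round achieves the same effect)
  speed = Math.round(speed * 1000) / 1000;
  rotation = Math.round(rotation * 1000) / 1000;

  // Friction slowly drags the target speed back to zero once paddling stops
  targetSpeed = Math.max(0, targetSpeed - FRICTION);
}
```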

My code and device, as one might expect, didn’t work together instantaneously. In order to reach the method above, I went through a process of debugging, calibration and experimentation. The code needed to respond to the physical action of paddling without its values behaving bizarrely. During this process, I tracked the values received from my controller by drawing them on screen using the text() and nf() functions, which helped me understand why my code behaved in the manner I observed on-screen. I also spent time understanding how and why the keyboard input method Bryan Ma used in his demo worked, and how its respective variables were implemented in relation to the rest of the software. As expected of an accomplished programmer, the values I needed to deal with were restricted to his input function and several variables instantiated in the setup() function and at the beginning of the .js file, making the process of adjusting the code to my needs very smooth.
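
An on-screen readout of that kind takes only a few lines of p5.js; something like the following (the variable names are placeholders):

```javascript
// Hypothetical debug overlay drawn each frame inside draw()
function drawDebugOverlay(gyroX, gyroY, shaftDist, speed) {
  fill(255);
  noStroke();
  textSize(14);
  // nf() pads numbers to a fixed digit count, which keeps the readout stable on screen
  text('gyro X: ' + nf(gyroX, 1, 2), 10, 20);
  text('gyro Y: ' + nf(gyroY, 1, 2), 10, 40);
  text('shaft:  ' + nf(shaftDist, 1, 1) + ' cm', 10, 60);
  text('speed:  ' + nf(speed, 1, 3), 10, 80);
}
```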

Presentation and Reflections – November 9th

The device was presented in the graduate gallery with an overhead projector and a chair to seat the user. Users who had the opportunity to try out the device learned to operate it within two minutes, with some assistance. Keeping the ultrasonic sensor in line with the splash pads on the shaft of the paddle generally worked without issue provided that users were shown its importance; it is not an intuitive design feature. For that matter, neither was the concept of dipping the paddle: until it was elaborated upon, users would instinctively swing the oar without ‘dipping it into the water’. This was a cause of confusion for several users, but may also have reflected an unnatural movement given that I have relatively long arms compared to the average person (the device was calibrated to my proportions). Perhaps mounting the device physically to the chair would have improved the device’s functionality between users. Overall, the presentation and operation of the device seemed to have the intended effect of simulating kayak motion for a virtual environment.

This project inspires intriguing commentary on the deficiencies of current-day virtual reality controllers. Try as we might to simulate reality, what many controllers lack is physical feedback that influences how we operate within a space. The kayak controller succeeded in providing a physical apparatus that mimics reality, but without users being able to see or feel water near their bodies, the idea that forward motion required moving the paddle through the water was not apparent. Using the controller in a headset virtual reality context might determine whether having water visually beside the participant in a simulated boat improves how easily users pick up on how to operate the controller, although the question remains as to how haptic feedback plays into the creation of the kayak illusion.

Looking back at the time I spent reviewing the demo code, I feel that I learned a lot from observing the methods by which other programmers solve mathematical, technical and algorithmic challenges, and I hope to experiment more with other programmers’ code. We take so much for granted when we simply import libraries and use the functions given to us. It keeps the process of developing simple, but there is plenty to learn from the methods and functions embedded inside these code snippets.

Additional Project Context

see also: the video games listed in the Ideation Phase, and JavaScript libraries three.js, matter.js

Loftis, Hunter. (n.d.) A first-person engine in 265 lines. PlayfulJS. Retrieved from http://www.playfuljs.com/a-first-person-engine-in-265-lines/

Loftis’ first person engine first introduced me to the concept of applying raycasting to a JavaScript context, with image accompaniment that conveyed the ideas behind the algorithm fluidly. This was the first engine that I tried to implement into my project. Because the script was implemented directly into an HTML context as opposed to being included as a JavaScript file, I ran into several snags trying to convert it into a more easily manipulable p5.js context and decided to search around for a more convenient implementation.

Ma, Bryan. (March 31, 2016) demo for non-textured raycaster for p5.js. GitHub. Retrieved from https://gist.github.com/whoisbma/8fd99f3679d8246e74a22b20bfa606ee

This demo of a non-textured raycaster for p5.js is the raycasting engine that the game is built upon. Although several changes were made in terms of input, presentation and layout, it remains a core element of this experiment. Compared to Loftis’ version, this engine lacks imported textures but saved me significant time by being more or less ready for implementation out of the box. Bryan Ma is a new media developer / designer based in New York City; you can find his portfolio here.

Macke, Sebastian. (September 16, 2018). Voxel Space. GitHub. Retrieved from https://github.com/s-macke/VoxelSpace

On the topic of exciting web libraries and code, Voxel Space is an engine that converts a 2D image and corresponding depth map into a 3D environment that can be freely explored. It employs a clever technical concept first introduced in 1992 with Comanche, a title that produced rich 3D landscapes. For the time, this was a huge technical accomplishment given the limited memory and CPU affordances available.

 

Emoji-Bot Mk.3

aka Emotion State Machine

by Veda, Ladan, and Tyson

Presentation Slides (visibility limited to OCAD U)

GitHub

The Emotion State Machine is the third iteration of our assistive communication device: an emotion display device activated by capacitive touch input. The device is intended to facilitate verbal and emotional communication for limited-mobility users and non-verbal interpersonal situations.

 

The Concept

One of the main areas of commonality among our project suggestions was our interest in using Arduino technology as a communication tool between individuals. Some of the critical topics we wanted to explore included:

Accessibility / assistive / adaptive technologies: devices for people with disabilities, including the process of selecting, locating, and using them.
Affective computing: the study and development of systems and devices that can recognize, interpret, process, and simulate human affects (emotions).
State Machines: a programming technique that allows a device to operate in one of a set number of stable conditions depending on its previous condition and on the present values of its inputs.

We envisioned that the device would be used by low-mobility users or those who may have difficulty communicating verbally. The device is intended to facilitate more straightforward and direct emotional communication that clears up confusion and starts a dialogue between user and receiver, with the touch input enabling more natural communication than conventional push-button triggers.


The Machine

Development of the project was split between coding, acquiring parts, developing the input device, and manufacturing the output display in order to modularize work between individuals. Ladan and Veda worked together to realize the input device while Tyson took on the work of fabricating the output display devices.

The coding for the input started with the basics. The input device was originally designed with flex-triggered input in mind, but the idea was dropped to keep down costs. We took the tutorial on lighting an LED with a button and continued to add layers of complexity: we first started with multiple lights, then moved on to multiple buttons, and then, when Nick gave us advice about capacitive touch, we were able to reuse the same code by adding more variables and if/else statements. The touch sensor would be labeled with the emotions the user is feeling and would start a dialogue with the individual on the receiving end of the affect signal. We used a ready-made soap dish to contain the touchable surfaces, with matching symbols corresponding to the individual faces represented in our final prototype.
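
The device itself was programmed in Arduino C, but the essential touch-to-emotion mapping is simple enough to sketch in JavaScript for consistency with the rest of this post (the emotion labels and single-selection behaviour here are illustrative, not the project’s actual code):

```javascript
// Illustrative only: a minimal emotion "state machine" where touching a pad
// selects which emotion's LED is lit. The real device runs equivalent
// if/else logic on an Arduino.
const EMOTIONS = ['happy', 'sad', 'angry', 'surprised', 'calm'];  // hypothetical labels
let ledStates = new Array(EMOTIONS.length).fill(false);

// Called with the index of whichever touch pad registered contact
function onTouch(padIndex) {
  for (let i = 0; i < ledStates.length; i++) {
    ledStates[i] = (i === padIndex);   // light only the LED matching the touched emotion
  }
  console.log('Now showing:', EMOTIONS[padIndex]);
}

onTouch(2);   // e.g. touching the third pad selects 'angry'
```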

The final version of the project came as a result of several technical limitations we encountered while developing our relatively ambitious first prototype. We initially intended to create an articulated ‘Emoji-Bot’, with a chassis of 3d printed material and a poseable arm that could demonstrate arm and hand gestures. We chose to pursue the modelling and 3d printing of the body early on in the project’s development as the extrusion process of printing the material was expected to take at least 3 days for all three robots. Joints in the arm would be rotated by servo action, controlling the shoulder rotation and position as well as the elbow rotation. The servos were to be connected via tensioned fishing wire and the Arduino and circuitry for the output was to be housed within the droid’s body. The head was intended to feature five different expressions that could be readily adjusted through full-rotation servo motion, with separate LED lights embedded within the head. The design of the robot was created in Rhinoceros over the course of a day but the process of attempting to print a single iteration took three days and ultimately failed due to limited availability of printing resources at the school, the lack of purchasable material for the in-house 3d printers near the school, and the high overhead cost and time required to print the project elsewhere in the city.

With two days remaining to prepare a working project, we scaled back to the design of our second prototype, which translated the emoji display from a 3D to a 2D fixture and reduced the articulated joints to one poseable hand. Instead of using 3D printing technology, we opted to laser-cut the elements of the casing; in terms of time, preparing the laser-cut components took about one fifth of the time invested in preparing the 3D components. This gave us the opportunity to quickly cut, assemble and test the project over the course of a day, but it wasn’t until the day prior to submission that we discovered that the electrical design of the project was insufficient to properly power and operate all the connected devices (due to a large current load). Had we tested our circuitry prototype earlier, we may have avoided this issue entirely.

This leads us to the design of our final prototype, which maintained the emotion display component but forwent servo action. Rather than display emotion selections via servo rotation, the emojis are instead displayed simultaneously, highlighted via coloured LEDs. The intensity of the touch input directly influences the brightness of several of the LEDs by means of pulse width modulation.


Daily Project Schedule

October 16th, 2018

Our first meeting was to introduce ourselves and the interests, skill sets and expertise that would be beneficial to the execution of the project. The following was discussed:

Ladan – Brought strategic and design expertise to the table, experience with organizational skills, graphic design skills and ability to ideate well and think critically.

Tyson – Technical & Strategic expertise. Brought experience with conceptualization, 3D printing, laser cutting and other methods of fabrication. He also has experience with computer programming, and has worked with Arduino programming before on passion projects.

Veda – Brought experience with design thinking, project planning, graphic design and ability to work well with physical handmade fabrication.

Once we discussed our strengths and weaknesses, we took the opportunity to reflect upon what we learnt in class that day and how to pursue our interests in the technology of Arduino. We charted out a game plan that would tackle multiple parts of the project.

 

  • The first was a basic agreement for a quick solo crash course in Arduino basics.
  • Second, Tyson was kind enough to help us understand what was in our kits and how we could potentially use each of the parts for our project.
  • Third was a discussion on, and appreciation of the project brief from a conceptual standpoint, so we could conduct some independent research in the areas of social interactions and communication and brainstorm at the following meeting.

We concluded the meeting with a way forward: the first step was to come back more equipped to work with Arduino, and the second was to ideate on potential concepts for our multi-person project.

 

October 18th, 2018

Once we had covered all the groundwork discussed at the first meeting, we came back to the second meeting with a whole bunch of ideas to share with each other! We first discussed the ideas we had brought with us, and then took a deep dive into a verbal brainstorm to see if anything new came our way. Some of the great and not-so-great concepts that we discussed were as follows:

  • Sneeze detector: A microphone input that reacts only to the sound of a sneeze and creates an interesting output.
  • Open lace detector: A device that alerts the wearer if their laces get untied.
  • Simon says game: A sensor that tracks body movements of the user based on commands from the computer.
  • Obstacle Course: An invisible obstacle course with a few sensors to detect height, movement and proximity – based on commands from a wearable.
  • Motion tracker: A simple wearable light that changes colour depending on the motion of the wearer. It can respond to speed, height and frequency.
  • Digital Twister: A wearable that directs a group of users on a Twister board, with a carefully coordinated list of movements.
  • Dance Dance Revolution: An LED screen wearable that displays arrows to guide the user’s movements. Users dance in a sequence to a song directed by the device.
  • Morse code game: A way for users to travel back in time. Using the push buttons and an LED display, users can decode a message prescribed by us to win a prize!
  • Fitness tester: A wearable that can track your movements on certain exercises, and display your level of fitness.
  • Sumo Robots: Controlling robots with gestures, each fighting against another to push outside of a ring
  • Microbots in a Maze: Using a single servo to control a v-shaped bot through a maze, similar to scientific tests on mice. Collaborative puzzle environments.


An interesting idea that came up after the verbal brainstorm was an Emotion Detector, with an LED display wearable. While similar to our final concept, this was the first instance of the idea.

 

 


After looking at the wearables, we were not convinced by the idea. We wanted to elevate it a bit further and began thinking of our final concept, which offered a more unique output than a simple LED screen. We decided to pursue the idea of using a 3D printed robot with coloured eyes and moving arms to convey five key emotions that humans feel.

Once we agreed upon the idea, we decided to allocate work roles for execution and decided to touch base daily to give each other updates on the progress for the day.

Veda & Ladan were assigned Input circuitry and design for the input panel.

Tyson was in charge of manufacturing the 3d body. Due to the lengthy amount of time required to print the objects, work began the following weekend with the intention of having the parts printed over the course of the week.


October 20th, 2018

Since Veda and Ladan were both fairly new to Arduino, the process of circuitry began with the very basics. Our first order of business was to understand basic LED outputs as tried in class with Kate. We first experimented with a single LED and then multiple LEDs, along with the corresponding code. We also tried implementing the blinking mechanism to understand its workings.

Meanwhile, Tyson designed the printable model for the Makerlab Lulzbot printers in Rhinoceros 5. He reviewed online information regarding belt drives and speculated moving the limbs via rubber bands, but ultimately chose to use fishing line in tension to operate the components.


 

October 22nd, 2018

While in class on Monday, we had the opportunity to discuss our idea with our professor Nick, and collectively come up with a game plan for the following week to complete the prototype.

Minutes from the session:

  • Our first task should be to create a low fidelity prototype that functions the same way as our final project – which is fundamentally a button input to an LED output. He wanted us to try this with three buttons attached to three LED outputs respectively.
  • He suggested that we use a capacitive touch sensor to trigger the inputs as opposed to just push buttons, as it can turn almost any conductive surface into an input and would be an interesting twist on the input mechanism
  • He wanted us to conquer the prototype before we go out and purchase the sensors and solder them – which was great insight to begin work with
  • He helped us get to the guts of the hardware. We had developed an idea for a state machine, and everything that we needed to focus on revolved around our experience with it.
  • He proposed adding a layer of complexity to the inputs by giving some thought to the intuitive movements of fingers associated with emotions. Like speed and frequency for anger, intensity for sadness and other such instances.

 

After our session with Nick, we focussed on creating a prototype with the buttons and LED outputs, and it was a success. The code was fairly simple in this case, as it was essentially the in-class button-and-LED example extended to three inputs and outputs.

 


 

Once we were successful with our prototype we made a trip to Creatron to purchase the 5 key capacitive touch sensor made by Adafruit for Arduino.

The first task at hand with the sensor was to solder it. This was not easy for us to do: with no prior experience in soldering, we had a few failed attempts and one burnt board before we got the soldering right. The biggest challenge with the touch sensor was realizing that the field around the actual input tip was very sensitive, hence the soldering had to be extremely precise in order to avoid overlap between two different input points.

During this period, Tyson worked alongside Reza, the Makerlab technician, to get the 3d printable parts on the printer. Multiple colours of materials were used due to the limited availability of in stock material for the printers. As the first parts were printed, several revisions to the 3d model were applied to ensure that the device could be easily assembled and accessed by hand and accommodate the hardware. The complexity and thickness of the model were also reduced in order to improve print times and reduce material costs. During wait times, Tyson also developed the working code for the first prototype: each input button would operate a different servo, with one dedicated to changing the emotion state. The remaining buttons would adjust the servo angle. Whenever a button was released it would reverse the direction of movement, allowing for dynamic arm expressions.


October 23rd, 2018

 

Once we had the soldering in place, we focused on getting the input for the touch sensors right. We attached the sensors to one LED at first, to understand how it responds and works.

With research and advice from our cohort, we figured out that the input for the sensor requires a 4.7 MΩ resistor to work. We experimented with one input key and one LED light.

Figuring out the code was challenging at this point, as there were very limited resources available online that catered to this particular sensor, and we were unsure whether it needed a specific library or input command to function well. Combining the tricky circuitry with the code, we finally got the flow right for a single input key and an LED light.

We also proceeded to complete soldering the extra touch sensors, so as to be able to create the final circuitry for the three devices in sync with each other.

 


At this stage, we realized that printing three copies of the robot was an ambitious task, as available material for the printers was minimal. Tyson went out to buy material from the local area; per the recommendation of the Makerlab technician, we acquired filament from Creatron that we later learned was incompatible with the working printers at the graduate school (incorrect filament size). Several online sources, including the Lulzbot user manual and forums, suggested that it may be possible to print 1.75 mm material on the Lulzbot Taz, but in practice the material only printed adequately for the first 15 minutes of operation.


This roadblock resulted in the design of our second prototype. Inspired by paper colour wheel selectors, the second prototype translated the 3d head of the robot into a 2d wheel, maintaining the functionality of the servo operation but reducing the action to a 2-dimensional plane. Tyson prepared laser cut files for matboard overnight in order to accelerate the manufacturing process.


October 24th, 2018

Once we had a single flow working, we focussed on multiplying this flow into five inputs and five outputs. Later in the day we had our consultation session with Nick and Kate, both of whom advised us to try out the servo motor with the touch sensor now that we had the basic mechanics in place. And that was the next piece of the puzzle for us.

We also spent some time in Chinatown sourcing the wooden platform and copper rings that we had conceptualised for the fabrication of the input box. We did some research and found that copper is a great conductor of electricity, and we thought the rings would be an interesting way of driving inputs from users.


Tyson and Ladan spent the evening working with the laser cut materials prepared in the morning to assemble the second prototype, pulling together the circuitry in the process. The behaviour was bizarre; the capacitive sensor would fluctuate between values and the servos would only periodically respond to the capacitive sensor input. We would later find out that the cause of this behaviour was an excessive current load: we needed to supply power to the components separately from the Arduino’s supply.


October 26th, 2018

On this day, after trying tirelessly to get the servo motor to work with the touch sensor, we decided to call it quits on the servo. We shifted our focus to a leaner version of the original concept of the emotional state machine, which was the conductive input with the copper rings, along with the LED output of the five lights. We decided to create an intricate installation with some of the existing parts from our previous prototype.

We had the advantage of a perfectly working circuit by this point, which was replicated across the three boards. We wanted to minimise the soldering and restricted it to the LEDs and resistors.


Tyson drew up a vector of the installation along with a 3d render, which perfectly housed the breadboard and power outlets for convenience. We had these laser cut at the Maker Lab with help from Reza.


Thereafter we focussed on assembly and installation of the final piece. We began with soldering the LED lights, and then putting all the pieces of hardware and craft together.


 

Final Input details: Although we tried implementing the copper coupling pieces we intended to use for the input device, they were difficult to connect with the box and hold in place. Instead, we opted for a simpler version of the input.

 


Materials used: Wooden soap dish, foam board base, printed emotions and copper tape.

Method: We used the holes in the soap dish to insert the wiring for the input from the output box to the input interface surface. We then made holes in the foam board for the wires to penetrate through. We then stuck the foam board to the soap dish and pasted the emotions onto the input points. The last step was pasting copper tape along the tip of the wire for a conductive input.

 

Final Output details:

Materials used: Foam board, screw driver, Vellum, Black PVC sheet, Masking tape and electric tape.

Method: First we took the existing discs of the faces that we had from our last prototype and pasted the vellum in the areas that we wanted the light to penetrate through.

Then we focused on getting the wires of the LEDs attached to the breadboard securely. Thereafter we stuck the LEDs on the centre of each eye to maximise the output of the lighting. Once we had all the eyes in place, we assembled the basic structure of the installation and inserted the breadboard. We held the emotion disc in place with a screwdriver. We kept testing along the way to ensure that power was supplied to all the lights without compromise. Thereafter we connected the input wires to the input port through the hole we created for it at the back of the board, and lastly closed the box with black PVC paper on top to conceal the circuitry.


Final Circuit:

The final circuit consisted of five input keys from the touch sensor connected to the Arduino at PWM pins. There was a 4.7 MΩ resistor on each input wire to regulate the flow of current.

 

On the output front, we had power wires from the LED lights connecting to the analog side of the Arduino controller, with a 330 Ω resistor for each light to regulate the flow of current.

 


 

Overall Reflections

Veda:

This project was extremely challenging for me, and in retrospect I feel like we should have opted for a simpler outcome, but it was a rich learning experience in terms of project management, time management, role allocation and resource budgeting. I still commend the team and the effort we made in trying something ambitious. In future, I would plan leaner and allocate more time to debugging for both hardware and software.

Ladan:

The project had a high learning curve for me, both in terms of working with the Arduino and in its industrial design and fabrication aspects. I have mostly worked as a digital designer and most of my projects have been on digital platforms. Learning from both Veda and Tyson was helpful, as they both are experienced in areas where I am not. We had an ambitious project from the start, which made us feel that we were against time from the beginning. Stronger project management and group cohesion would have made the project planning and execution easier, as would troubleshooting and testing separate parts earlier and settling on a more direct, cohesive concept. Moving forward, I will take the skills I learned in the project (fabrication, Arduino) to support a strong conceptual idea.

Tyson:

The biggest issue with our project scope was that we went into the project expecting to assemble the pieces as we went instead of testing out the individual elements of the project that would make it successful. Had we tested the servos and the capacitive touch sensor working in tandem early in the project’s development, we likely would have succeeded in producing our second prototype. We also made several group administrative errors, such as poorly coordinated scheduling, insufficient communication, and not properly reviewing the project outline early in the project’s timeline. While projects may be envisioned whole, they need to be built up as individual working components to avoid wasteful use of time. Moving forward, I hope to elicit a more collaborative engagement with future teammates and spend more time focusing on learning before doing.

 

Still holding out on getting that robot printed out, though! 🙂

 

Works Cited:

  1. An Emotion Robot For Long-distance Lovers
    Zoe Romano – https://blog.arduino.cc/2014/05/05/an-emotion-robot-for-long-distance-lovers/
  2. Moodbox Makes You Play with Emotions For Perfect Ambience
    Arduino Team – https://blog.arduino.cc/2016/03/29/moodbox-makes-you-play-with-emotions/
  3. The Interactive Veil Expressing Emotions with Lilypad
    Zoe Romano – https://blog.arduino.cc/2013/09/12/the-interactive-veil-expressing-emotions-with-lilypad/
  4. 21 Arduino Modules You Can Buy For Less Than $2
    https://randomnerdtutorials.com/21-arduino-modules-you-can-buy-for-less-than-2/
  5. Arduino Forum – Index
    http://forum.arduino.cc/
  6. Arduino Uploading Error Code?
    jim_reed -codewizard58 -chris101 -Bibi – http://discuss.littlebits.cc/t/arduino-uploading-error-code/21875
  7. Tree Of Life (arduino Capacitive Touch Sensor Driving Servo Motor)
    Instructables – https://www.instructables.com/id/Tree-of-Life/
  8. Interactive Projects
    https://idl.cornell.edu/projects/
  9. Affective Computing: From Laughter to IEEE
    Rosalind Picard – IEEE Transactions on Affective Computing – 2010
  10. Data As Art
    https://www.cis.cornell.edu/data-art
  11. Robotic Arms
    https://www.robotshop.com/ca/en/robotic-arms.html
  12. Belt Drives: Types, Advantages, Disadvantages
    https://me-mechanicalengineering.com/belt-drives/
  13. Assistive Devices For People with Hearing, Voice, Speech, or Language Disorders
    https://www.nidcd.nih.gov/health/assistive-devices-people-hearing-voice-speech-or-language-disorders
  14. Gently Used Marketplace
    https://canasstech.com/collections/gently-used-equipment-marketplace
  15. Lulzbot Taz 6
    Lulzbot – https://www.lulzbot.com/store/printers/lulzbot-taz-6

OctoSWISH++ — A Sampler for Mobile Devices


by Tyson Moll, Amreen Ashraf

A miniature sequencer with an audiovisual display for desktop and mobile devices, designed to be used in tandem with multiple screen instances.
GITHUB | WEBLINK

PROJECT JOURNAL


September 24 & 25: GPS with LFSR-seeded random instruments

Going into the project, we decided we wanted to create some sort of ‘orchestra’ of sounds from multiple different devices, with users having the ability to contribute together towards a combined musical experience. Our original concept was to use GPS data as a seed to randomly generate unique musical sound samples. Users would be able to access and control an instrument based on their position inside a room. The following day, we tested this out and decided to drop the concept due to the lack of sensitivity we achieved tracking GPS indoors; our devices could only discern position differences of roughly 10 metres or more.
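
For reference, the ‘LFSR-seeded’ idea amounted to feeding a position-derived seed into a simple pseudo-random generator so that each location produced a repeatable but distinct instrument. A rough sketch (the coordinates, seed formula and frequency mapping are all made up for illustration) could look like this:

```javascript
// Illustrative only: a 16-bit Fibonacci LFSR seeded from hypothetical GPS coordinates
function makeLFSR(seed) {
  let state = (seed & 0xFFFF) || 0xACE1;         // never allow an all-zero state
  return function next() {
    // taps at bits 16, 14, 13, 11 (polynomial x^16 + x^14 + x^13 + x^11 + 1)
    const bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1;
    state = (state >> 1) | (bit << 15);
    return state / 0xFFFF;                        // normalized to 0..1
  };
}

// Seed from coarse latitude/longitude so nearby users get distinct but repeatable instruments
const lat = 43.6532, lon = -79.3832;              // example coordinates (Toronto)
const rand = makeLFSR(Math.floor((lat * 1000 + lon * 1000) * 97));
const baseFreq = 110 + rand() * 440;              // e.g. pick an oscillator frequency
console.log(baseFreq);
```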

 


September 26 & 27: Back to the drawing board.

We picked up again the next day and pondered what to pursue next. During our ideation phase we decided that both of us were interested in game creation in general, which led us to look closely at board and card games. We were particularly invested in the possibility of developing some sort of collaboration or team-building experience. One idea we considered was using the phones as playing cards, as we were fascinated by the fact that an electronic ‘card’ didn’t need to have static properties; perhaps its face value could change or its rules could be tweaked. We also considered investigating murder mystery frameworks, where said cards could act as secret identities and tools to develop the game’s narrative. This prompted us to consider visiting a board game café to research simple playing card models and draw upon ideas from titles such as Fluxx, Coup and Uno. After sleeping on our ideas overnight, we became concerned that they would be too complex to instantiate and test in a group environment. As much appeal as game design inherently has, we concluded that investing too much time into inventing rules and mechanisms for such a project was not within an achievable scope.

Still intrigued by the idea of using the p5.js Sound library, we revisited our interest in developing a musical experience and pondered the possibility of creating a beat maker. Participants would be able to record and create their own compositions on the fly. This also had the added benefit of giving users more control of the kind of sounds they could use as their ‘instrument’. In a sense, it seemed that we were back where we had started!

 


September 28th: The first sketch of our soundboard!

Right before class started we casually talked about music and our sequencer. We felt that there needed to be some element to tie the 20 individual sequencer screens together. We chatted about our previous concert experiences and overall enthusiasm for Jon Hopkins, whom Tyson intended to see live that evening. Concert musicians notably accompany their performances with some sort of striking audio-visual element; we drew inspiration from one of Jon Hopkins’ music videos, noting its effectiveness in conveying artistic intention and sonic properties. This seemed like the critical element that our project was missing, and we concluded that adding such a component would not only bring the project together cohesively for the 20-screen experience, but also help modularize our work so that we could code our project elements independently without worrying about merge conflicts. We reconvened after class and briefly researched how we could accomplish the audiovisual component. By using the mic functionality of mobile devices, we determined that our sequencer could simultaneously analyze sounds emanating from neighboring devices. This effectively opened the doors for us to combine the sequenced sounds into a multi-sensory experience.


Prototyping: September 29th to October 4th

Our project was created almost entirely in JavaScript using the p5.js series of libraries. Amreen Ashraf contributed the audiovisual experience design and recorded the experience, documenting our process along the way, whilst Tyson Moll developed the sampler board, some audio-visual adjustments, and overall feature integration.

During the development of the audio-visual component, Amreen referenced a tutorial by the Coding Train, whose influence remains in the final product. The basic concept was to create a distinct line waveform capturing the audio-out feed from the device, with an ambient, circular form to represent received microphone feedback. The ellipse’s proportions were tied to the amplitude of the sound, while the line continuously displayed a short-term history of amplitude values collected in an array and redrawn each frame. We had difficulties tracking down a means of sourcing the audio-out data stream; both visualizations ultimately used the microphone for input. In order to present each device in a unique manner, we later integrated colour randomization so that on initialization each device would exhibit a different audio-visual colour scheme.
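
A condensed sketch of that visualization (using the p5.sound microphone input; the sizes, colours and history length are placeholder choices) might look like this:

```javascript
// Amplitude visualization sketch: requires the p5.sound library and a user gesture
// before browsers allow microphone access
let mic, amplitude;
let history = [];

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start();
  amplitude = new p5.Amplitude();
  amplitude.setInput(mic);
}

function draw() {
  background(20);
  const level = amplitude.getLevel();            // current loudness, 0.0 - 1.0
  history.push(level);
  if (history.length > width) history.shift();   // keep a screen-width of samples

  // Ambient circle driven by the current level
  noStroke();
  fill(180, 120, 255, 120);
  ellipse(width / 2, height / 2, 100 + level * 600);

  // Scrolling waveform of the recent amplitude history
  stroke(255);
  noFill();
  beginShape();
  for (let i = 0; i < history.length; i++) {
    vertex(i, height / 2 - history[i] * height * 0.4);
  }
  endShape();
}
```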

 

The sound board interface was implemented by creating a series of custom objects with vector graphics. Each button was based off of a prototype object with a series of built-in variables and functions concerning its position in relation to an ideally-proportioned canvas, whether the button was pushed, what graphics to display, and what effects may have been applied to the button itself. The initial layout was prepared in Illustrator for the relatively common 16:9 ratio, at 1920 x 1080 pixels in size. By comparing this ratio to the window’s width and height, we were effectively able to maintain the proportions of all buttons by calling their respective ‘display()’ command and using the windowResized() p5.js event. That being said, it would have been useful to integrate a script to detect device orientation for situations where the width of the screen was less than the height. In order to integrate both ‘modes’ of functionality into the device (Sequencer Board and Audio-Visual), we used a variable to track whichever mode was enabled and separated the draw and input events for each screen by means of ‘if’ statements. The slider, audio-visual button, and drop-down menu were implemented with the aid of the p5 DOM library. Since our project was not particularly reliant on CSS, we manipulated styles and properties for these HTML objects directly in JavaScript rather than create another file. We used the touchStarted() event to capture touch input, which surprisingly overrode the ability to click the non-DOM button objects on the desktop. The opposite effect occurred when we left the mouseClicked() event as the handler. On reflection, it would have been a good issue to resolve to ensure barrier-free desktop compatibility.
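
The proportional-layout and mode-switching ideas can be sketched as follows (the object fields, mode names and hit-testing here are simplified placeholders rather than the project’s actual button prototype):

```javascript
// Buttons are authored against a 1920x1080 reference canvas and rescaled to the window
const REF_W = 1920, REF_H = 1080;
let buttons = [];
let mode = 'sequencer';              // or 'audiovisual'

function setup() {
  createCanvas(windowWidth, windowHeight);
  // Positions/sizes below are in reference-canvas pixels (illustrative values)
  buttons.push({ rx: 100, ry: 100, rw: 200, rh: 200, on: false });
  buttons.push({ rx: 340, ry: 100, rw: 200, rh: 200, on: false });
}

function scaleX(v) { return v * width / REF_W; }
function scaleY(v) { return v * height / REF_H; }

function draw() {
  background(0);
  if (mode === 'sequencer') {
    for (const b of buttons) {
      fill(b.on ? 'orange' : 'grey');
      rect(scaleX(b.rx), scaleY(b.ry), scaleX(b.rw), scaleY(b.rh));
    }
  } else {
    // audio-visual mode would be drawn here
  }
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);   // buttons rescale automatically on the next draw()
}

function touchStarted() {
  for (const b of buttons) {
    if (mouseX > scaleX(b.rx) && mouseX < scaleX(b.rx + b.rw) &&
        mouseY > scaleY(b.ry) && mouseY < scaleY(b.ry + b.rh)) {
      b.on = !b.on;                          // toggle the touched node
    }
  }
  return false;   // prevent default browser touch behaviour
}
```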

Our microphone button captured sound using the p5 sound library’s AudioIn and SoundRecorder modules. The file captured by the recorder was then available for processing and playback. In order to sequence the playback as a stem, we created an array of buttons. Since these buttons inherited properties from their prototype, we only needed to loop over them in order to instantiate playback, display, and tweak properties. We added several methods of manipulating recorded sounds from the basic p5.js library (the volume and the pitch) as well as the capacity to modify the playback speed of the sample board compositions. We also implemented manual customization of individual nodes on the sample board, with moderate success: due to some quirks in the manner in which p5.js / JavaScript modifies pitch before playback, the device only worked as intended under peculiar conditions. When playback speed was slow, singular node pitch adjustments would only activate with the addition of a preceding punched-in node; when playback speed was fast, pitch adjustments would work as intended.
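
Reduced to its essentials, the record-then-sequence flow looks roughly like this (the key bindings, step count and clock interval are invented for the sketch; the real interface used the on-screen button objects described above):

```javascript
// Record-and-sequence sketch using p5.sound's AudioIn, SoundRecorder and SoundFile
let mic, recorder, sample;
let steps = new Array(8).fill(false);     // one row of on/off sequencer nodes
let currentStep = 0;
let playbackRate = 1;

function setup() {
  noCanvas();
  mic = new p5.AudioIn();
  mic.start();
  recorder = new p5.SoundRecorder();
  recorder.setInput(mic);
  sample = new p5.SoundFile();            // empty SoundFile the recorder writes into

  setInterval(tick, 250);                 // naive step clock (250 ms per node)
}

function keyPressed() {
  if (key === 'r') recorder.record(sample);   // start capturing the mic
  if (key === 's') recorder.stop();           // stop; `sample` is now playable
  if (key >= '1' && key <= '8') {
    steps[Number(key) - 1] = !steps[Number(key) - 1];   // toggle a sequencer node
  }
}

function tick() {
  if (steps[currentStep] && sample.isLoaded()) {
    sample.rate(playbackRate);            // global playback speed / pitch adjustment
    sample.play();
  }
  currentStep = (currentStep + 1) % steps.length;
}
```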

 

Critique and Presentation: October 5th

 

The overall result effectively became a collaborative stem-mixing experience. After users had the opportunity to build their stems, we collected the mobile devices in the centre of the room and dimmed the lights. The cacophony and visual experience were mesmerizing, easily likened to an abstract drone installation. We are pondering the possibility of a modified experience in a gallery context, maintaining user contribution and input while casting stems via projection screening. We received positive feedback for successfully creating a captivating and engaging experience from start to finish. The multiple stages of self-guided interaction with the device, from creating the sample to bringing it together as a performative piece, were applauded. However, it was also noted that the experience could have benefitted from being less nuanced given the time constraints. We made an effort to provide a prompt explanation of the project and offer troubleshooting, but especially given the time limit and the challenges we encountered getting several devices up and running, we consider this feedback quite understandable.

During our 10-minute in-class demonstration we saw several compatibility issues arise on iPhone devices with microphone functionality, sometimes caused by browser selection, security preferences and other nebulous issues. After troubleshooting, our success rate exceeded 75% of devices tested, and despite initial frustrations the combined audio-visual experience was spectacular to observe as a whole. Given additional time, our core focus would be to increase general compatibility across devices, use a more readily functional sound library, and enrich the audio-visual experience. We have also pondered transitioning the project towards a gallery experience by creating a projection displaying user contributions through server technology.

 

PROJECT CONTEXT

Audio Samplers

MIDI controllers and sample board devices are common tools in a DJ’s kit. Our project heavily simplifies the process to its essentials. With the inclusion of digital audio workspaces and interfaces, modern DJs have an impressive arsenal of tools to mix, mash and perform electronic compositions and recordings.

 

Bicycle Built for 2000 by Aaron Koblin and Daniel Massey

This project uses an audio recorder built in Processing to collect sounds from people around the world. People from 71 countries participated by adding their voices, which were collected via a web browser and then synthesized into the song “Daisy Bell”, written in 1892 and the first song performed with musical speech synthesis (popularized in 1962). Although our project is significantly more open-ended, the synthesis of human-made sounds into music felt particularly inspiring.

 

Max Cooper Concert (Amreen) / Daniel Avery & Jon Hopkins Concert (Tyson)

Attending a Max Cooper concert is like diving head first into an explosion of sound and visuals. For his 2017 album release Emergence, Cooper collaborated across disciplines with coders and data visualizers to create a unique audiovisual album that delves deep into the synthesis of sound and emotion with lush visual accompaniment.

Similarly, Daniel Avery accompanied his slow-building, rhythmic DJ set with detailed abstract visuals of mountainsides and foreign atmospheres pulsating to the steady beat from his kit. With so little performance involved in the music-making (compared to more instrumental genres), the visuals presented a very rich experience and insight into the moods the artist wanted to convey to his audience. Jon Hopkins’ set followed closely, leading with tracks from his acclaimed Singularity. In particular, the track “Everything Connected” and its accompanying visuals held our attention: as the track slowly built momentum, a purple heart visualized on the screen quivered and moved to the sound.

We hoped to bring this sort of multi-sensory experience to our project on a more intimate scale, allowing participants to, in a sense, control the heartbeat to our collective dissonance.

Reference :
Shiffman, Daniel. 17.9: Sound Visualization: Graphing Amplitude – P5.js Sound Tutorial
https://youtu.be/jEwAMgcCgOA

 
