Alternative Methods of Input: Synchronization of Music & Gameplay

a project by Denzel Arthur & Angela Zhang

Denzel Arthur: Unity Gameplay

For this assignment I wanted to continue exploring Unity, particularly the developer side of things. I wanted to connect my love for music and video games in order to give meaning to the back-end side of the project. Through this assignment I got to work with the Animator in Unity, scripting in C#, serial port communication between Arduino and Unity, music synchronization within the Unity application, and Ableton for creating and cutting sound clips.

 

The first part of this project was researching the concept to find out how I could replicate it. I immediately found resources from the developers of “140”, a music-based platformer that became the main inspiration for this project. As the game was developed in Unity, the majority of the information provided by the developers aided me in my quest. They had information about frames per second, beats per second, and how to match them in the Unity game engine. Although the code they added to the PDF was no longer active, the explanation itself was enough for me to create a prototype of the concept.
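
The core of that explanation is simple arithmetic: a track at a known BPM has a fixed number of seconds per beat, and gameplay events are scheduled against that clock rather than against frames. The sketch below is only a rough illustration of that math (not the 140 developers' Unity code), written as an Arduino-style blink-on-the-beat example to match the other code in this documentation; the 120 BPM is an assumed value.

// Beat-timing sketch: blink the built-in LED on every beat of an assumed 120 BPM track.
// This only illustrates the BPM-to-seconds math; it is not the 140 developers' code.
const float BPM = 120.0;
const unsigned long msPerBeat = (unsigned long)(60000.0 / BPM); // 500 ms per beat at 120 BPM
unsigned long lastBeat = 0;

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);
}

void loop() {
  // Instead of counting frames, compare elapsed time to the beat length,
  // so events stay locked to the music regardless of frame rate.
  if (millis() - lastBeat >= msPerBeat) {
    lastBeat += msPerBeat;                                  // advance by exactly one beat to avoid drift
    digitalWrite(LED_BUILTIN, !digitalRead(LED_BUILTIN));   // toggle on the beat
  }
}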

The second part of the project was setting up the Unity scene. This part of the process involved using basic Unity primitive objects to create a rough version of the game. The primitive objects were used to arbitrarily represent the player object, enemies, and environment. With these basic assets, I was able to implement collision systems, death and respawn animations, and other triggers like button events.

The third part of the process was the physical computing component. I was initially supposed to work with another classmate to create more elaborate buttons, so instead I built basic prototype buttons from parts found in the Arduino kit. These buttons became the main source of communication between the game and the player during the presentation. A very lackluster physical presentation, but that seems to be a trend in my work here in the Digital Futures program. Nonetheless, after the buttons were built I proceeded to connect the physical buttons, attached to the microcontroller, to the Unity engine. This proved more challenging than it needed to be due to poor documentation of the free methods of connection, but after purchasing the Uduino kit, the connection became a seamless process. This stage also included programming the buttons and adjusting the animations, mechanics, scripts, and audio files to get a prototype that was playable and had the right amount of difficulty.
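
On the Arduino side, the job is simply to report each button press over the serial port so Unity can react to it. Below is a plain-Serial sketch of that idea only; it is not the Uduino code we actually used, and the pin numbers and the characters sent are assumptions.

// Two buttons reported over serial as single characters (plain-Serial illustration, not Uduino).
const int jumpPin = 2;     // assumed wiring: button between pin and ground
const int actionPin = 3;

void setup() {
  Serial.begin(9600);
  pinMode(jumpPin, INPUT_PULLUP);    // pressed == LOW with the internal pull-up
  pinMode(actionPin, INPUT_PULLUP);
}

void loop() {
  if (digitalRead(jumpPin) == LOW)   Serial.println('J');  // Unity reads the port and maps 'J' to jump
  if (digitalRead(actionPin) == LOW) Serial.println('A');
  delay(20); // crude debounce / rate limit
}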

The final part of this process was creating a visually appealing product within the game engine by adjusting the virtual materials and shaders within Unity, and swapping out any assets for ones that fit the concept of the game. I still went with primitive shapes in order to achieve a simplistic aesthetic, but certain shapes were modified in order to make the different levels and enemy types seem more diverse.


 

Angela Zhang: Physical Interface

For Experiment 3, I wanted to use capacitive sensing to make a series of buttons that would control the gameplay in Unity for Denzel’s game. I started with a 9×9″ wooden board primed with gesso, with a ¾″ thick border so that there is a bit of material to protect the electro-galvanized nails that would be nailed in as solder points.

I did a digital painting in Procreate to plan out my design for the buttons.

conceptual drawing – digital drawing on iPad & Procreate

I ended up using the blue triangle design and splitting the yellow design in half to be two triangles that look like left and right arrows, which I intended to use as FRONT and BACK navigation; the blue triangle would be the JUMP button.

process – stencil for design, graphite on tracing paper
tracing paper stencil – shaded w 5B pencil on backside

I traced the design onto some tracing paper with graphite pencil.

On the opposite side of the tracing paper, I used a 5B graphite pencil to shade in the entire shape to make a transferable stencil.

Tracing with a pen with the shaded side down, I transferred the design onto the gesso board.

process – transferred design onto gesso-ed 9×9″ wooden board.

Once the design was transferred, I applied green painter’s tape around the edges so that when I applied the black conductive paint overtop, the edges would stay clean. I added three more rectangular buttons for additional functionality. Once I had transferred it, I used a hammer to drive electro-galvanized nails, about 1.5 cm long, into each of the ‘buttons’ [not really buttons yet, but empty spaces where the buttons should be]. Because the nails were so small, I used a thumb tack to make some of the registration holes for better accuracy.

process – back of gesso board with electro galvanized nails.

I then applied a generous coat of conductive paint by Bare Conductive mixed with a little bit of water, as the carbon-based paint is a little sticky and hard to work with; water is conductive, so this did not prove to be a problem. After I finished painting the designs with conductive paint, I sealed them with a layer of acrylic varnish spray to prevent the paint from rubbing off when touched. For some of the buttons, I planned to add another layer of acrylic paint to see if it was possible to activate the conductive paint with coloured paint overtop, to allow for more than just black designs as I had planned.

process – conductive paint applied and tape taken off
final button – conductive paint, acrylic varnish spray, regular acrylic paint, final coat of acrylic varnish
img_0700
Final Painting.
img_4594
back of board – set up

I painted the background with regular acrylic paint to make the board more aesthetically pleasing. With the final layer of acrylic paint and a final coat of varnish, I was ready to test my connections. Using a soldering iron, I soldered wires to each of the connections, then alligator-clipped each of these wires to an electrode on the Touch Board microcontroller.

The LED on the Touch Board lit up when I touched each button, so it was all working. The only thing I noticed was that the connections were very sensitive to touch, so if the wires in the back were touching one another it would sometimes trigger an electrode it was not supposed to. This can be solved with better cable management and by enclosing the microcontroller inside the back of the board if I want to make this a standalone project.

The original idea was to hook up the board to Unity so that it could replace the tactile buttons that Denzel described using in his half of the documentation. Using the Arduino IDE, I uploaded the following code to the Touch Board [screenshots do not show the code in its entirety]:

screenshot – Uduino code
screenshot – Uduino code [cont’d]
screenshot – Uduino code [cont’d]
The Uduino code (to bridge the serial connection between Unity and Arduino) uploaded successfully onto the Touch Board’s ATmega32U4 chip, the same chip used by the Arduino Leonardo. The problem with the connection, however, was that the conductive paint buttons used capacitive sensing logic rather than digital ON/OFF switch logic, and neither Denzel nor I were proficient enough in C# to change the Unity scripts so that the capacitive touch buttons could trigger movement in the game engine. I looked at a few tutorials on this and watched one about analog inputs to Unity, which used a potentiometer as an example. I wasn’t sure this was going to be what I needed in the scope of time I had, so I settled on another idea and decided to attempt the Unity game controller with Denzel at a later date when we both had the time to look at forums (lol).
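
One Arduino-side workaround we did not get to in time would have been to threshold the capacitive reading into switch-like on/off messages before they ever reach Unity. The sketch below only illustrates that idea; the pin, the threshold, and the readCapacitiveValue() stand-in are assumptions, not the Touch Board’s actual API.

// Turn an analog-ish capacitive reading into switch-like messages over serial (illustration only).
const int threshold = 600;   // assumed: tune by printing raw values while touching the paint
bool wasTouched = false;

int readCapacitiveValue() {
  // placeholder for however the raw touch value is obtained on the board in use
  return analogRead(A0);
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  bool touched = readCapacitiveValue() > threshold;
  if (touched != wasTouched) {          // only send on change, like a button press/release
    Serial.println(touched ? "DOWN" : "UP");
    wasTouched = touched;
  }
  delay(10);
}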

I changed the function of the conductive painting to be a MIDI keyboard, as the Touch Board is particularly well suited to being used as a MIDI/USB device. I uploaded this code to the Touch Board instead:

Arduino IDE – MIDI interface example sketch from Touch Board Examples Library
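
For reference, the general shape of sending a note from an ATmega32U4 board with the standard Arduino MIDIUSB library looks like the sketch below. This is an illustration and an assumption on my part, not the Bare Conductive example shown in the screenshots, where the notes are driven by the touch electrodes rather than a timer.

// Minimal MIDI note on/off over USB using the Arduino MIDIUSB library (illustration only).
#include <MIDIUSB.h>

const byte channel = 0;
const byte note = 60;      // middle C
const byte velocity = 100;

void noteOn(byte ch, byte pitch, byte vel) {
  midiEventPacket_t ev = {0x09, (byte)(0x90 | ch), pitch, vel};
  MidiUSB.sendMIDI(ev);
  MidiUSB.flush();
}

void noteOff(byte ch, byte pitch) {
  midiEventPacket_t ev = {0x08, (byte)(0x80 | ch), pitch, 0};
  MidiUSB.sendMIDI(ev);
  MidiUSB.flush();
}

void setup() {}

void loop() {
  // in the real sketch these calls would be driven by the touch electrodes
  noteOn(channel, note, velocity);
  delay(200);
  noteOff(channel, note);
  delay(800);
}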

I then used Ableton Live as a DAW to make my MIDI controller make sound. I changed the preferences in Ableton > Preferences > Audio Inputs > Touch Board, as well as the Output. I also turned on Track, Sync, and Remote so I could map any parameter within Ableton to my conductive painting, just like any regular MIDI keyboard. I used Omnisphere for a library of sounds I could play with my MIDI controller; because the capacitive button is analogue, I can map parameters like granular settings and pitch bends onto the buttons, as well as trigger tracks, pads in a drum rack or sampler, or any of the Live view channels in Ableton to trigger whole loops.

Omnisphere 2.0 VST in Ableton – sound library
Conductive painting inside of Ableton

Even though we did not successfully link Unity and the painting together, I still feel like I learned a lot from creating this unusual interface, and I will push this idea further in both Unity and Ableton; I want to use Max for Live to trigger even more parameters in physical space, eventually things like motors.

Physical Interface w Digital Feedback: MIDI Data using Unity/Ableton/MaxMSP with Soft Tactile Buttons [Analog and Digital, Capacitive Sensing]

created by Denzel Arthur & Angela Zhang

For this final assignment, we decided to continue working on what we explored in the previous project: controlling a 3D object in a game engine or some other digital environment with unorthodox physical input switches. We made a lot of good progress during the initial experiment, and because we have an affinity for music, we decided to keep pursuing it for this project, to solidify a foundation for how we create multimedia works in the future. The initial goal was to find a unique way of visualizing audio and having it be interactive, either in the progression of the music itself (sonic feedback) or in changes to the visual component that correspond to the music (visual feedback). This led us to experimenting with the Unity game engine, Ableton, and building physical buttons for an unusual combination of hard and soft user inputs.

img_7724
conductive thread

 

 

arduino micro
conductive ribbon

We wanted a variety of tactile experiences on our game-board, including both analog and digital inputs and outputs. Using methods explored in class, we used materials such as conductive thread, conductive fabric, velostat, and some insulating materials such as felt and foam to interface the buttons and the electronic elements. We also wanted to use some elements that would normally be used in regular electronic setups but not necessarily in a soft circuit, such as light cells, infrared sensors, and actual tactile buttons.

schematic for digital button
Angela’s schematic for digital button, [+] and [-] intertwined but not touching, in order to maximize the surface area that can be activated.
schematic for analogue button
Angela’s schematic for analogue button [top down view]
diagram of construction of velostat button
Angela’s schematic diagram: construction of velostat button [layers, component view]

mid production - analogue button, velostat, conductive thread, conductive fabric, felt, foam, embroidery floss.

Our first soft button was an analogue pressure-sensing button made of velostat between two pieces of conductive fabric, with three connecting lines of conductive thread sewn into each piece of conductive fabric on either side of the velostat in the middle. One side of conductive thread is positive, the other negative. These are sewn to the edge of the square that is cut, deemed to be the button, come off the square approximately 2 cm away from each other, and are eventually sewn into the blue felt that becomes the base of the button. The yellow foam and red felt were added for haptic feedback across a range of pressure; the idea was to allow for a wider range of pressure sensitivity from the velostat, as well as for aesthetic purposes. Without the added layers of material the button felt very flat and there did not seem to be a big margin of input for the user, especially as an analog input, which is meant to provide a range of numerical data that would then be used to control some element in Unity, as with the other components of the project.
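
Reading a velostat sandwich like this on an Arduino is straightforward if it is wired as a voltage divider. The sketch below assumes the button sits between 5V and A0 with a fixed resistor from A0 to ground; the pin and the 0–255 scaling are placeholders rather than our final values.

// Minimal analog read of the velostat pressure button, assuming a voltage-divider wiring.
const int pressurePin = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int pressure = analogRead(pressurePin);      // 0 (no press) up toward 1023 (hard press)
  int mapped = map(pressure, 0, 1023, 0, 255); // scale to a range Unity/Ableton can use
  Serial.println(mapped);
  delay(50);
}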

The completed pressure button on the board, with a light cell beside it. The two bits of silver thread on the button are conductive; one is positive and one is negative, detecting how much the velostat is being pressed.
The legs of the components are fed through the board and twisted flush to the back of the board so they can be sewn into the prototyping board, also on the back.

a tactile button, three 5mm neopixel LEDs, and the analog pressure button, with some conductive thread connections [front view]
LED connections sewn with conductive thread to the prototyping board [mid prod, back view]

The main idea was to use these buttons to control gameplay within Unity for the game that Denzel had programmed for Experiment 3. However, the Arduino Micro, as well as the Touch Board by Bare Conductive that Angela used in Experiment 3 to create the conductive painting [intended as input for Unity as well, but last used with Ableton], both have the ability to send MIDI data. We decided to switch it up and see if we could get a 3D object in Unity to work with Ableton and Max for Live (Max MSP’s integration with Ableton), making it respond in real time to a MIDI signal sent from one of our buttons or sensors. Unfortunately we did not have time to hook up all the different sensors and components, but there is potential to keep adding different analog and digital parameters to the board, and this is going to be an ongoing experiment for us to see how many different components we can combine.

For the connection of Unity -> Ableton -> Max:

screenshot: GitHub for The Conductor
screenshot-309
screenshot – YouTube video describing connection of Unity and Max using The Conductor
screenshot – Ableton 10 Remote Script OSC to MIDI Note forum
screenshot – we looked at some of the possibilities between Ableton and Max using Max for Live
screenshot-315
mapping audio parameters (OSC data) from Ableton to Max to visualize changes in audio through color mapping

The final product is a set of buttons that sends a MIDI signal to Ableton, which triggers or changes a sound. The track that the sound is on has a script that connects it to Max, which takes the MIDI data it receives and outputs a corresponding behaviour in the 3D object, changing its shape or colours relative to the change happening in the sonic component. In theory anything that sends a MIDI signal can work with this setup, so it works with soft circuit buttons, conductive paint, or any regular MIDI keyboard; any input device works as long as you can get it to communicate with MIDI data. We experimented with other MIDI controllers such as the OP-1 [by Teenage Engineering] as well as the conductive painting [using the Touch Board by Bare Conductive] from the previous experiment, which outputs MIDI.

img_4594
Works with Touch Board – which outputs MIDI


The Conductor – Initial setup inside Unity
Ableton - Live view; jit.window Max object with Ableton tracks (audio source)
Final set up – jit.window Max object, Max patches and MIDI tracks in Ableton Live View (audio source)

 

Yay!! Now you can be a VJ 😀

Shooting Game – Final Report

Maddie Fisher-Bernhut, Donato Liotino, Ola Soszynski

Code: https://github.com/ToxicDon/qyro-shooter-game-arduino

Our main inspiration for this experiment was arcade shooting games, where the player earns points by shooting various targets. However, we mostly wanted to work with some new sensors in Arduino. Specifically, we wanted to figure out the gyroscope/accelerometer and the Bluetooth modules. We also wanted to work with vibration motors for some game concepts we had in mind.
The final product

Originally, we planned on having the game be competitive, with two players facing each other to earn more points, or to defeat the other. We wanted a fun, wireless shooting game, and were inspired by similar games often seen in arcades. This would encourage friendly competition between players and generally create an enjoyable experience. The experience would be inspired by sci-fi movies and shows, such as Doctor Who, and fantasy, such as Harry Potter. The two players would fight each other, trying to defeat the opposition’s creature, also designed around their chosen theme.

For inspiration, we looked at how competition assists one’s performance, as well as whether any games like this had been made in p5 before. This led us to https://www.bigfishgames.com/online-games/8942/shooting-range/index.html for similarly made games, and a basic DOS game named Shooting Gallery (https://youtu.be/incPLdf712M). https://www.theglobeandmail.com/life/health-and-fitness/why-a-bit-of-healthy-competition-is-good-for-everyone/article8749934/ and https://www.psychologicalscience.org/news/minds-business/the-upside-of-rivalry-higher-motivation-better-performance.html helped us read into how competition helps, even though we did not end up making a competitive game.

Final wiring setup

Goals of the Experiment:

1. Use and learn Bluetooth for Arduino

2. Use and learn the gyroscope accelerometer

3.  Use vibration motor to detect invisible targets

    • To initially figure out how the motor worked, we used a tutorial (https://www.precisionmicrodrives.com/content/how-to-drive-a-vibration-motor-with-arduino-and-genuino/)
    • After using the tutorial, the code was mostly developed by testing different values and setting intensity according to distance, originally tested with a light sensor and then implemented into the game code (see the sketch after this list).
    • Sadly, we were unable to implement this aspect, due to clashing libraries in the code preventing the Arduino from acting as anything more than the movement input.

4. Create different movement paths for targets as one progresses

    • Coded in using sine and cosine waves
    • While we wanted to implement a highly difficult mode, perhaps with a basic AI which avoided the player, we did not have enough time or understanding to do so.
    • The presented version shifts the randomly moving target once hit.

5.  Have working gifs

6.  Be able to see and track player on the screen using colour detection

    • We wanted to implement Maddie’s code from experiment one, which involved colour tracking, to track the crosshairs of the players. However, this code runs much slower and faces many avoidable complications when trying to track two colours at once. Due to this we later cut the number of players down to one.
    • Later on, we realized the program runs far smoother using the gyroscope rather than the colour detection. So, we used the gyroscope for tracking, with the trigger being implemented through colour detection.
    • Another issue with the colour detection was that the LED was too small to properly activate the code, so we needed to diffuse the light and make a larger trackable object; we used a ping-pong ball. Due to availability, we only had access to orange ping-pong balls, which caused difficulties with the colour detection, triggering the game when the ball got too close. This could be fixed by using a white ping-pong ball instead. Our short-term fix was to distance the gun, in a relatively dark location.

7.  Have the game shoot by moving the gun in a specific way

    • We realized that by using movement to fire the gun, the targeting would be offset. So, we changed plans, having the trigger fire by pressing a button that lights the LED for the colour detection to activate, “firing” the gun and checking whether it hit the target. Furthermore, as a failsafe, we could program the button itself to test for collision if the colour detection decided not to work.

8.  Customize the controller

    • We worked with spray paint for the first time. It isn’t the cleanest or nicest looking, but it turned out exactly as we wanted given our current skills.
    • The addition of the ping-pong ball also assists with the aesthetic of the prop gun for the game, as we needed to keep school guidelines in mind as we worked.

 

 

9.  Create a working shooter game

    • Accomplished
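
Returning to goal 3, below is a minimal sketch of the distance-to-intensity mapping we prototyped, assuming an analog reading (we tested with a light sensor) on A0 and the motor driven through a transistor on PWM pin 9. The pins and the direct mapping are placeholders, and as noted above this never made it into the final game because of the library clash.

// Map a distance-style analog reading to vibration motor intensity (sketch only).
const int sensorPin = A0;
const int motorPin = 9;    // motor driven through a transistor on a PWM pin

void setup() {
  pinMode(motorPin, OUTPUT);
}

void loop() {
  int reading = analogRead(sensorPin);              // stands in for "how close the hidden target is"
  int intensity = map(reading, 0, 1023, 0, 255);    // closer -> stronger vibration
  analogWrite(motorPin, intensity);
  delay(20);
}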

Hunting Game: Progress Report

 

Jin Zhang (3161758) & Siyue Liang (3165618)

Inspiration

In our previous projects, we paid more attention to the visual effects aspect, and most of them were a bit weak in terms of interactivity. For this final project, we decided to explore physical interactions more. We want to build a shooting game using physical materials and make it realistic. Our idea is to create a shooting-themed game where the player uses a laser pointer or a toy gun to aim for the center of the targets.

 

Contextual Work

The duck hunting game:

https://www.youtube.com/watch?v=YyM6MmBM14w

 

A light sensor would be placed in the center of the target and a servo motor attached to the base of the target. When the player points the laser at the center, the target would fall flat. Some of the details still need to be refined and we are still in the testing phase.

 

Materials & Techniques

  • Light sensors/pressure sensors
  • Servo motors
  • Light pointers/laser light
  • Cardboard and aluminum foil

 

Since our idea for this project is quite different from what we did for previous projects, we might not be using the same materials or techniques as before. We are still thinking about what kind of sensor we should use as switches in this game.

 

Code & Wiring

 

#include <Servo.h>

Servo servo1;

int forcevalue = 0;   // reading from the sensor on A0
int pos1 = 90;        // servo position in degrees

void setup() {
  Serial.begin(9600);
  servo1.attach(9);   // servo at digital pin 9
  //servo1.write(0);  // initial point for servo
}

void loop() {
  forcevalue = analogRead(A0); // sensor attached to analog 0
  Serial.print("Sensor value = ");
  Serial.println(forcevalue);

  //int value = map(forcevalue, 0, 1023, 0, 255);

  if (forcevalue >= 100) {     // threshold check (was "=", an assignment, which always triggered)
    for (pos1 = 90; pos1 <= 180; pos1 += 30) {
      servo1.write(pos1);
      delay(15);               // give the servo time to reach each position
    }
  }
}


Work in Progress

wechatimg4015

wechatimg4017

The sensor works as a switch to control the servo motor and sound.

wechatimg4019

wechatimg4021

The value of the sensor would go from 0 and suddenly jump to 1023 for some reason. We were confused and figured that there might be something wrong with the wiring or the code. We are still testing it and trying to find a solution for this issue.

The Soundscape Experience


Tyra D’Costa | Shiloh Barnes | Francisco Samayoa

screen-shot-2018-11-26-at-6-19-54-pm

Project Inspiration

Our project seeks to encompass elements of sound, sensory perception, and nature in an installation-based artwork. Our process began by discussing the types of sensors we wanted to explore; eventually we decided on the MPU-9250 accelerometer and gyroscope. Our proposed idea is to integrate the sensor into headphones, which can then change the sound and visual experiences of the user. As the user navigates the room and moves their head about, the world in which they are digitally immersed will enhance the real world.

Goals

  1. Engaging UX design: We want to solidify the conceptual ideas of the soundscape experience to create an immersive UI/UX design. We want this to include elements of storytelling and gamification.
  2. Sound-sensory experience: Moving forward we will have to test that the sensor allows smooth transitions between sound files. The media assets will consist of ethereal soundtracks, data visualizations (natural algorithms and patterns), and nature-related images.
  3. Integrate wearables: We also need to design a way to integrate the sensors into the wearable technology (the headphones).
  4. Discuss concepts of nature, ecosystems and natural algorithms: Lastly, we want to think about how these concepts can work together to create a narrative and gameplay element.

Background Work

So far, we have been able to get the MPU sensor working with Arduino so that we receive X, Y, Z coordinates. This required scouring the web for solutions to library-related issues and calibrating the code in order to receive accurate data readings. Additionally, we were able to take these coordinates and map them to a camera in Unity, so that the sensor's orientation changes the perspective of the viewer.
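
For reference, a stripped-down sketch of the Arduino side is below: it reads the raw accelerometer values over I2C, assuming the sensor sits at the default address 0x68, and prints them as a CSV line for Unity to parse. Our working version used a full MPU-9250 library plus calibration, so treat this only as the general shape of the data pipeline.

// Read raw accelerometer values from the MPU over I2C and print them as CSV (illustration only).
#include <Wire.h>

const int MPU_ADDR = 0x68;   // default I2C address (assumed)

int16_t read16() {
  int high = Wire.read();
  int low  = Wire.read();
  return (int16_t)((high << 8) | low);
}

void setup() {
  Serial.begin(9600);
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);          // PWR_MGMT_1 register
  Wire.write(0);             // wake the sensor up
  Wire.endTransmission(true);
}

void loop() {
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);                       // first accelerometer register (ACCEL_XOUT_H)
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_ADDR, 6, true);    // X, Y, Z as three 16-bit values
  int16_t ax = read16();
  int16_t ay = read16();
  int16_t az = read16();
  Serial.print(ax); Serial.print(",");
  Serial.print(ay); Serial.print(",");
  Serial.println(az);                     // Unity reads this CSV line and maps it to the camera
  delay(50);
}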

Links To Contextual Work

How to play multiple audio files

Research on Arduino + Gyroscope

Good Link explaining how IMU sensor works

Tutorial for IMU and Unity set up

How to set up MPU and Arduino Library

Unity + Arduino Library

Project Details

  1. User starts at any location on the map
  2. They put on the headphones
  3. Depending on the position of the head, the headphones will play a particular sound.
  4. If a particular sound is playing then the screen will display an associated image or video
  5. The user must experience all the audio and visual data in order to try and make a connection between the media and a place that exists in the real world before their opponent. 

Sketches


Process


Experiment 4: Progress Report:

Air Printing: Drawing Physical Objects with Leap

 

Salisa Jatuweerapong, Sam Sylvester, Melissa Roberts, Mahnoor Shahid

Atelier I: Discovery 001

Kate Hartman, Adam Tindale, Haru Ji

2018-11-27

 

Inspiration

We started with an idea of drawing in the air and transmitting art onto the screen with the movements. At first, we thought of using an accelerometer or conductive paint proximity sensors. We didn’t want any sensors to be attached to the hand. Through research and feedback, we discovered the Leap Motion Controller and a project called “Air Matter”.

“Air Matter” is an interactive installation by Sofia Aronov. The installation takes a new approach to traditional pottery with the Leap Motion Controller: the viewer draws a 3D pot in the air, which is then 3D printed. An Arduino is also used with potentiometers to control aspects of the model.

 

Context

This project is an exploration of alternative interfaces and virtual and physical space.

We took the “Air Matter” installation as our main inspiration. Instead of drawing a vase, we decided to draw a sculpture made of thin rectangles. This idea was based on the disappearing sculptures by Julian Voss-Andreae, which, depending on the point of view, seem to disappear into thin air. Our project “conjures” physical objects from thin air, yet the physical objects it creates disappear back into thin air (conceptually; our final design isn’t printed thin enough for that to actually work). There’s something to be said about the transfer of objects from physical, to virtual, back to physical space, and their permanence and materiality in each layer.

Related interfaces include: webcam motion tracking, Kinect, and a variety of glove interfaces (Captoglove game controller, Mi.Mu). We chose to explore Leap as it seemed an exciting challenge; as well, we wanted to explore extremely non-invasive, non-physical interfaces (no gloves).

Other work that is being done on Leap includes Project Northstar, a new AR interface that aims to redefine the AR experience. Otherwise, the Leap team is focused on creating accurate hand tracking software to be used as a tool for any other projects.

Links to Contextual Work

Air Matter: https://www.sofiaaronov.com/air-matter

Julian Voss-Andreae Sculpture: https://www.youtube.com/watch?v=ukukcQftowk

Mi.Mu Gloves: https://mimugloves.com/

Northstar: https://developer.leapmotion.com/northstar

Images of the work in progress

Progress Timeline Checklist (link).

Thursday 22nd: 

Designing the visuals

11-23-04811-23-047

Friday 23rd:

Getting Started with Leap

received_305280590079371 received_334901954003267 received_347159419428689

Tried out the Leap and ran into some challenges with the different software available for download. The tutorials we found (see Research) seem to be for some versions of the software and not others.

Monday 26th:

Processing Sketch with mouseDragged

screen-shot-2018-11-25-at-4-11-45-pm screen-shot-2018-11-25-at-8-34-32-pm screen-shot-2018-11-26-at-1-02-43-pm

According to the sketch, the person would draw a squiggle with their finger as an outline for the sculpture. Thin rectangles should be placed at specific X positions to maintain a consistent gap between them. The height of the rectangles is determined by the Y position of the cursor or the finger of the person.

Processing Sketch with Leap

finger_painting_1003 finger_painting_1010

finger_painting_1102
Wrote code in Processing using the LeapMotion + Arduino Processing library. Used input from the Leap to draw a line. Boxes are drawn centered along the vertical middle of the screen; the height and depth of each box depend on the y position of the user's finger, and placement along the x-axis depends on the x position of the user's finger (a box is drawn if the x value is divisible by 50). The width of the box is constant. There is a bit of a lag between the line being drawn and the box being drawn, so the line has to be drawn slowly.
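
The placement rule itself boils down to a few lines. The sketch below is a language-agnostic illustration, written in C++ only for consistency with the other code in this documentation (it is not our actual Processing sketch); the 50-unit gap, canvas height, and downward-growing y-axis are assumptions.

// Illustration of the box-placement rule: a box is added when the finger's x lands on a
// 50-unit column, and its height follows the finger's y.
#include <cstdio>

const int gap = 50;            // spacing between rectangle columns (assumed)
const int canvasHeight = 400;  // assumed canvas height, y grows downward

// returns true if a box should be placed for this finger position, and fills in its size
bool placeBox(float fingerX, float fingerY, int &column, float &boxHeight) {
  if ((int)fingerX % gap != 0) return false;   // only draw on every 50th x position
  column = (int)fingerX / gap;
  boxHeight = canvasHeight - fingerY;          // higher finger -> taller box
  return true;
}

int main() {
  int col; float h;
  if (placeBox(150, 120, col, h))
    std::printf("box at column %d, height %.0f\n", col, h);
  return 0;
}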

JavaScript finger/hand code reference: https://developer-archive.leapmotion.com/documentation/javascript/api/Leap_Classes.html?proglang=javascript

Tuesday 26th:

Converted Processing Sketch to Javascript

finger_painting-1 finger_painting-2

There was no STL export for the Processing version we were using, so we had to switch to JavaScript. This was important since Melissa’s STL library code from Experiment 2 had proven to work.

In the JavaScript code, we used the following libraries:

  • Three.js (exports the STL file)
  • Leap.js (Leap Motion Controller library for JavaScript)
  • P5.js
  • Serial Port

Pictured above is the functional p5.js/leap.js code.

Implementing three.js library into the functional p5.js/leap.js code

screenshot-58

This involved getting rid of the p5 code, as the three libraries (three.js, p5, and leap) didn’t work well together. The biggest changes were changing how we created 3D shapes, creating a renderer to replace our canvas, setting up a scene (full of our shapes) to animate and render, and including an STL exporter, which will allow us to print the 3D object drawn on the screen.

The Leap coordinate system is quite different from the Three.js coordinate system, which meant the shapes we created displayed far larger than originally intended. However, the code technically works: the scene (airPrint) has our shapes in it, and they are being reproduced on the screen. Leap’s coordinate system uses millimeters as units, with the origin at the center of the top surface of the Leap.

Further steps involve possibly implementing additional controls with Leap.

Connected Arduino to USB C

screen-shot-2018-11-27-at-12-03-28-pm

Using WebUSB, we created a workflow where a physical button push acts as ‘enter’ on the keyboard.

This push button will download the STL file from the sketch, which can then be used to 3D print.

GitHub: https://github.com/SuckerPunchQueen/Atelier-Ex-4?fbclid=IwAR1wH5XmWsn4G-S9e3zezb8yrDMfmp56uRA7xzTVI80JTh3Wj-hnKFjrZ-w 

Previous Experiments

Melissa’s Nameblem provided a starting point for a generative code → .stl → 3D printing workflow. Melissa’s original project combined p5.js with three.js and exported a .stl file that she would have to manually fix in 3D Builder. While we had hoped to just reuse this code for Air Printing (it is a rather technical workflow), we are having issues interfacing Leap.js with p5.js. As well, we are hoping to automate the process.

Mahnoor’s work with capacitive sensing in Experiment 3 inspired our original interface for air sensing. Her umbrella had a proximity sensor created using conductive paint and the CapSense library, and we reasoned we could use two capacitive sensors on two different axes to take an x-position and y-position for a hand. This would not be as accurate as Leap, and since Melissa wanted to buy a Leap anyway, we opted to use that for our project.

We are using p5.js, which Adam introduced us to in Experiment 1, to draw our design.

Haru’s Endless Forms Most Beautiful, specifically the experiments based off William Latham’s work, was our launch point for the visual design. Originally, our code was a bastardized Tinkercad / building blocks game. We felt that we could do more visually, to elevate the project from a tool/workspace to an actual artwork. We looked at the rule-based work we explored in Haru’s unit for inspiration, since we were already restricted by rules as to what would practically be able to print (basic geometry, cubes, connected lines).

Experiment 4 – Progress Report

Brian Nguyen – 3160984

Andrew Ng-Lun – 3164714

Michael Shefer – 3155884

Rosh Leynes – 3164714

Soup of Stars

Inspiration

The inspiration for the project developed as we looked into the potential of our original concept. We started off with the idea of movement, implementing it with analog sensors as a ball game where users would attempt to keep a ball up using body parts with sensors attached. After reviewing several pieces, we decided to develop the concept entirely around a webcam, because we wanted the body to be the entire subject of the concept.

Relevant Links for Inspiration

https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5

https://ml5js.org/docs/posenet-webcam?fbclid=IwAR2pg6qdmZfbi0Gxi3ohxtP9tcXUpokaYj6triiHtw6giJ9vTbYVyM1LNWI

Context

The project utilizes a webcam along with PoseNet and P5. With the PoseNet library, a skeleton is constructed based off the subject registered via the webcam. Within P5, a particle system intended to resemble stars is drawn in the background. While still focusing on movement, the particle system reacts to the movement of the skeleton (mostly the limbs). As the arms move across the canvas, the particle systems swirl and twist, following the movement of the skeleton. The skeleton of the subject appears on the canvas. Additionally, more than one individual can be registered as a skeleton as long as they are in proper view of the webcam. The intent is to provide a sense of interactivity where the individual has an impact on the environment and can alter it the way they see fit.

screen-shot-2018-11-27-at-11-01-33-am

Pictured above is the skeleton, using the PoseNet demo, that will be controlling the particle system. The movement of the limbs will be crucial in altering the environment. There are some issues where some limbs aren’t recognized at times, especially when they are close to the body.

20181127_091845

Pictured above is the implementation of the Posenet library with P5

20181127_092035

Previous Materials/Experiments

For Experiment 3, we used the webcam with P5 to construct an image using particles. We managed to manipulate the particles with a sensor input that we then combined with the webcam feed. For this experiment, we’re still using familiar elements such as the particle system in P5 and the webcam feed projection, but altering their concept and their relation to one another.

Untitled (for now)

 

Dimitra Grovestine 3165616 | Kiana Romeo 3159835

Context

The concept of the afterlife has always been a controversial and highly debated topic. Many cultures and individuals disagree on what happens after death. Some believe in reincarnation, while others think the human soul leaves its shell and ascends into a higher place. Some even believe nothing happens at all. In our project we aim to encompass all these concepts, so that one will go through many stages of life after death. First, they will walk into darkness; then a walkway will slowly light up as they advance a bit farther, until they reach the end of the path, where a beautiful display of clouds will appear, signalling they have reached the end.

 

Inspiration

We were inspired by the idea of dying and “going into the light”; something grim and undesirable becoming something beautiful and inviting. But of course, no one wants to die in order to experience this. Our project aims to fabricate the concept of “Heaven” and the afterlife by recreating it in many ways. We want to give people a near death experience without them actually having to die, complete with visuals and sounds we believe would be present when in this scenario.

 

Concepts/ how it will work

In order to create this effect, we will be using proximity sensors to trigger events. The space will be dark, or inactive, when no person has entered the room. Whispers will be heard around the room as the other “souls” acknowledge your presence. The first proximity sensor will be connected to one laptop and will trigger the LED strips. These strips will gradually light up as you walk, lighting a path that represents you being welcomed into the afterlife. Upon reaching a certain point, the next proximity sensor, connected to a second device, will be activated; the heavenly clouds will gradually appear and angels will sing, marking the moment you reach your final destination. A concept we are taking from a previous experiment is the idea of using sensors to trigger different events. Although we did not create these sensors, we plan on formatting and designing a space that will make the sensors completely unnoticeable and use materials to make our space as aesthetically pleasing as possible.
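
As a rough sketch of the first trigger, the snippet below assumes an analog proximity sensor on A0 and an addressable LED strip on pin 6 driven with the Adafruit NeoPixel library; the threshold, pin numbers, strip length, and colour are placeholders rather than final choices.

// Proximity-triggered gradual lighting of an addressable LED strip (sketch only).
#include <Adafruit_NeoPixel.h>

const int sensorPin = A0;
const int stripPin = 6;
const int numPixels = 60;
const int threshold = 500;   // assumed: "someone has entered the space"

Adafruit_NeoPixel strip(numPixels, stripPin, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();
  strip.show();              // start with everything dark
}

void loop() {
  if (analogRead(sensorPin) > threshold) {
    // gradually light the path, one pixel at a time, as the person advances
    for (int i = 0; i < numPixels; i++) {
      strip.setPixelColor(i, strip.Color(255, 255, 220));  // warm white
      strip.show();
      delay(60);
    }
  }
}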


Contextual articles and videos:

http://time.com/68381/life-beyond-death-the-science-of-the-afterlife-2/

https://www.youtube.com/watch?v=axSxCo_uMoI&frags=pl%2Cwn

 

Progress Report: Shooting Gallery

Phantom Blasters (Working Title)

Maddie Fisher-Bernhut, Donato Liotino, Ola Soszynski

We were inspired by classic carnival and arcade games, namely shooting galleries. We also wanted to work with the gyroscope sensor and vibration motors. We really enjoy the aspect of friendly competition between two players and wanted to create something that celebrates that. Playing off of that, we wanted to incorporate magic and technological themes to show the competitive aspect in a literal sense. Whether this will carry through is currently undecided. We truly want to make a game people enjoy playing. Furthermore, we wanted to work off of typical shooting-gallery-type games, shown below in the first two links.

Checking collision for the mouse and target
Detecting mouse presses

Otherwise, we also wanted to incorporate vibration motors, by having the targets be invisible and only tracked by vibration, in a sonar-like way, where the vibration becomes more intense the closer the target is. By doing this, we will be working mostly with new things while incorporating what we already know. In our case, we also want to work off some of what Maddie did in assignment one. By doing this, we can track the players’ crosshairs with a colour-tracking camera. The coding ideas are a tad more simplified, so we can focus mostly on the new sensors and

assets in process

buzzers, allowing us to learn how they work. Another new aspect we are working with is loading gifs in p5.js, something Donato and Ola have both been unable to figure out thus far.


Research and context:

https://www.bigfishgames.com/online-games/8942/shooting-range/index.html

https://www.theglobeandmail.com/life/health-and-fitness/why-a-bit-of-healthy-competition-is-good-for-everyone/article8749934/

https://www.psychologicalscience.org/news/minds-business/the-upside-of-rivalry-higher-motivation-better-performance.html

Experiment 3 – Naruto Glove Game Controller

image24

Experiment 3: Material as Sensors

Naruto Glove Game Controller

Madelaine Fischer-Bernhut, 3161996

Salisa Jatuweerapong, 3161327

Brian Nguyen, 3160984

Atelier 1: Discovery, DIGF-2004-001

15/11/2018

Concept

A Naruto Primer:

From Wikipedia: “Naruto (ナルト) is a Japanese manga series written and illustrated by Masashi Kishimoto. It tells the story of Naruto Uzumaki, an adolescent ninja who searches for recognition from his peers and the village and also dreams of becoming the Hokage, the leader of his village.”

The Naruto franchise is one of the most internationally well-received and popular Japanese IPs, ranking as the third best-selling manga series in history. This pervasiveness of the IP gives us a solid launching point for our project; the majority of people should at least have heard of it, while for others it invokes deep nostalgia and childhood memories of mimicking hand jutsus in hopes of creating fireballs, and of wearing dorky headbands and cosplays for play-acting.

The main power system of Naruto revolves around chakra, a “life energy” that all ninjas can tap into and channel into a variety of attacks and powers (jutsus) by using hand seals. Hand seals (Hare, pictured on the left) are specific hand gestures, and different combinations of them create different types of attacks. There are 12 basic hand seals (based off the zodiac).

Since its initial release, there have been over 50 Naruto games, and the majority of them are fighting games that rely on jutsus alongside physical attacks. However, for the most part they are all played using generic controllers, i.e. button presses. The game we are using, Super Smash Flash 2 (pictured on the right), is a fan-made flash game made in homage to the Super Smash Bros series. In this game, the creator decided to add many popular characters from anime and gaming that are not typically found in the original series, for example Goku from the Dragon Ball series and Naruto from the Naruto franchise, which we are using for our project.

Our concept is to bridge the gap between the physical world and the digital world through motion gesture controls that put you in the shoes of the playable character. Instead of pressing a button to create a hand seal inside the game and form the jutsu, the player would be able to form the hand seal in the physical world and have it reflected by the character in the game world; instead of pressing arrow keys to move your character around, the player would be able to step in a direction and have the character on screen copy their movements. In the spirit of childhood nostalgia and avid fans, we are turning people into ninjas.

image31 image11

Group Member Roles

Madelaine: Creating the positive glove, Leaf Symbols, sensor placement plans, and forearm guards creation.

Salisa: Initial conception, creating the grounded glove, sensor placement plans, and movement mat design and creation.

Brian: Programming the Arduino Micro and finding a workable game.

All: Conceptual planning and circuit creation, documentation.

Initial Planning

Placement of sensors

image16 image14 image4

Image 1: Initial sensor/switch placement colour coded

Image 2: Updated sensor/switch placement colour coded

Image 3: Sensor/switch designing (shape, type of stitch, positioning, etc)

When we were planning the placement of the switches, we had to make sure that we would not accidentally trigger the wrong switch when making a hand sign. The first step was to choose only hand signs that did not have many of the same points touching. We decided on Boar, Dog, Ram, and Hare because their signs almost all made contact with at least one different point. At first, we were thinking of creating a switch by putting the ground and positive sides of the circuit on the same hand; the other hand would be unattached to any wires and would bridge the circuit (acting as an activator only). We decided against this idea because we realized that it would make more sense to have one of the gloves be the ground, to limit the clutter of too many wires. Because of this decision, the left hand contained the triggering switches connected to each gesture.

Hare and Ram are the only gestures that required an override of another sensor, the palm switch. We knew from the beginning that Ram had a point overlap, but for Hare it came to us when we were testing the performance of the gloves. Ram’s unique point of contact was the middle and index finger, and it shared the palm point with Boar. Hare also shared a palm point with Boar depending on the angle the player used in making the gesture, so we had to make sure its unique point, the crease of the pinky and ring finger, overrode the palm point.
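
In code, that override is just a priority check: a seal’s unique contact is tested before the shared palm contact. The sketch below only illustrates the idea; the pin numbers and the serial output are placeholders, not our actual wiring or the final keyboard code linked further down.

// Priority check so a seal's unique contact point wins over the shared palm contact.
const int palmPin = 2;    // Leaf Symbol palm contact (shared)
const int harePin = 3;    // pinky/ring-finger crease (unique to Hare)
const int ramPin = 4;     // middle + index finger contact (unique to Ram)

void setup() {
  pinMode(palmPin, INPUT_PULLUP);
  pinMode(harePin, INPUT_PULLUP);
  pinMode(ramPin, INPUT_PULLUP);
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(harePin) == LOW)      Serial.println("HARE"); // unique point overrides palm
  else if (digitalRead(ramPin) == LOW)  Serial.println("RAM");
  else if (digitalRead(palmPin) == LOW) Serial.println("BOAR"); // palm alone reads as Boar
  delay(20);
}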

Materials

Hardware: Arduino Micro. It has the ATmega32U4 chip which allows it to act as a keyboard when connected to a computer through USB. The full support of being able to program keyboard keys to the micro was extremely helpful in writing the code. Before this, we thought of alternative ways to simulate keyboard inputs such as a Python script or a Chrome extension listening on the serial port. As well, Donato suggested using a Makey Makey.

Gloves: A black knitted glove, black thread, silver (non conductive) thread, conductive paint, conductive fabric, and conductive thread.

Movement Mat: Velostat pressure sensitive material, aluminum foil, neoprene, and fabric glue.

Forearm Guards: Black poster paper, neoprene, electrical tape, black thread, and fabric glue.

Implementation

Gloves

While we originally debated sewing our own gloves (this way we could fully match the design and create the entire ground glove using conductive material), we ended up grabbing 99 cent gloves from Chinatown because of the time crunch. We chose knit gloves over leather ones for the price point and for the one-size-fits-all quality. These gloves stretched rather well and were able to fit grown men with larger hands.

The biggest issues we ran into were the shoddy construction of the gloves (we had to patch up at least 3 holes that just opened up) and the stretchiness of the knit, which made it impossible to iron on the conductive fabric as we originally planned. It was also difficult to sew evenly-placed stitches; the craftsmanship was a steep learning curve.

Outside

image32

image15

Image 1: Back of gloves      Image 2: Front of gloves

Inside

image22 image19

Image 1 (from left to right): Conductive fabric “wire” connections for the pinky and index finger crease, the middle finger, and the Backplate.

Image 2 (from left to right): Conductive fabric “wire” connection for the Leaf Symbol and the ground.

*the paint cracked by the end of the day, unfortunately, leaving just half a leaf

The sensors/switches were made with either conductive thread, to be as unobtrusive as possible, or conductive fabric, to create more intricate designs. Each sensor/switch’s positive side was connected to the alligator clips going to the Arduino Micro using a thin strip of fabric inside the glove. The reasons for this were:

  1. We wanted the gloves to remain as close to the original as possible. This did not include silver thread crisscrossing the black glove.
  2. If we used thread, the stitches on the outside of the glove may accidentally trigger the wrong hand gesture if the wrong sensor is touched by the ground.
  3. We did not want to use wires as they would be too rigid and would make putting on the gloves difficult.

We decided to stick with the black gloves and Leaf Symbol design because they are so iconic in the Naruto Universe. The black gloves with the metal plate on the back are a signature accessory for Naruto’s ninja mentor, Kakashi. The Leaf Symbol is the symbol of Naruto’s home village in the franchise. Both Kakashi’s gloves and the leaf symbol are extremely popular in Naruto themed merchandise and cosplay, so Kakashi’s gloves are usually designed with the Leaf Symbol (the plates in the canon universe are plain) and pretty much accepted as fan-canon.

The Leaf Symbol for the palm switch was a stylistic choice to keep with the theme. Many of our initial ideas were basic palm padding designs. We ended up scrapping the idea because we felt that knitted gloves usually do not have pads and that the silver of the fabric would be too different from the original black glove.

The biggest issue we had creating the sensors was the conductive thread being covered by the thick fabric of the knitted glove. Even after adding more thread to the spots with the sensors, some of the hand gestures had issues triggering (the Hare and the Ram), mostly due to larger hand compositions. Sewing the conductive fabric designs was also difficult. The material is extremely stretchy, so it was easy to make a placement mistake that would look normal when unworn and extremely off when worn, or vice versa. The conductive thread’s tendency to twist and knot was also the bane of our existence.

On a stylistic note, the conductive stitching on the seams was meant to create “invisible” sensors; however, the stitches sunk too far into the fabric if they were not bunched up, and were not so “invisible” if they were. We stuck with it on the ground glove, as it was needed to complete the circuit, but avoided it on the right glove as the look was not as polished as we imagined. If we were to use a different material for the glove (and get really good at sewing), stitching matching seams on both gloves would effectively hide the sensors in plain sight; using black conductive thread would help as well.

Hand Signals/Gestures:

image6 image5

Key: I – Move: Defense Barrier/Shield

Switch Sensor Location: Backplate of the right glove (positive), Leaf Symbol on the palm of the left glove (ground)

image28 image21

Key: O – Move: Special

Switch Sensor Location: Conductive thread seam in the crease between the pinky and ring finger on the right glove (positive) and between the thumb and pointer finger on the left (ground)

image12 image25

Key: P – Move: Basic Attack/Smash Attack

Switch Sensor Location: Leaf Symbol on the palm of the left (negative) and right (positive) gloves, conductive thread sensor around the middle finger on the right (positive) and left (ground) gloves

***For the longest time we thought this gesture was the Tiger sign. So please do not mind the slightly off finger positioning in documentation photos.

image13 image8

Key: U – Move: Grab    

Switch Sensor Location:  Leaf Symbol on the palm of left (ground) and right gloves (positive)

One thing we debated was making the hand seals work even in mirrored position (mostly for Dog), as many players mixed up their left and right hands. We ultimately decided not to, as it went against the spirit of the ninja way. As a hypothetical commercial decision, this controller is marketed towards fans, so it should not be an issue; in fact, making them work in reverse would likely draw complaints from hardcore fans.

Testing

image38 image1 image3 image9

Every time we finished an element of a switch (positive or ground), we tested to make sure current was able to run through the circuit by using an LED and a battery. The placement of the Leaf Symbols, the Backplate, and the sensors/switches was roughly figured out using tape on our hands, then on the glove (as well as a fabric pen later on).

Movement Foot Control Mat

image33 image39 image27 image29 image34

Image 1: initial concepts for floor mat

Image 2: WIP of floor mat

For the concepts, we conducted research by simply walking around and trying to figure out how to create a natural body sensor for moving. This was a scrolling 2D game, so a DDR-style floor mat did not make as much intuitive sense to us: should the up arrow be jump, or walking forward on the screen? Our project was aiming to connect the player to the onscreen character as much as possible, so a similar movement was key. The other key design focus was ease of reach: we wanted the player to not have to search or reach far for a sensor, and ideally always be able to press the sensor without moving their foot placement.

Based off the way the characters stood on the Smash screen (and in other Tekken-style games), we came up with the layout pictured on the lined paper: a ready-position half-crouch, with one foot in front of the other. The drawback with this was that it was not intuitive at first glance (though it would still be nice to playtest; it has potential, especially since we can use code to make it work), and most importantly, the sprites in this game could flip around on screen. So even when the sprite would be running left on the screen, the player would be pressing back with their left foot, and the focus became more on left-right than back-front.

Therefore, that idea was scrapped for a simpler left-right 2-sensor design. On top are some Naruto-themed designs, including ones based off significant clan symbols, weapon designs, and for one, a symbolically geographical representation of Naruto’s clan maps (there was a lot of overthinking).

Our biggest grievance with the 2-sensor designs was how… untethered, they felt. Both in position in relation to the other (they could easily be set up differently or kicked apart) as well as in relation to the digital game and other gear. It raised the question of where the player would stand, at rest or otherwise: a space in between the two could work, but would have a significant delay in reaction time, while a space behind the sensors could make the player feel removed. With that in mind, and the map idea, we decided to make a single mat with 2 sensors on either side. By marking a specific play area, we created a deeper sense of player immersion. Keeping with the colour scheme, red carpet fabric was glued to a sizeable neoprene mat. The tactile raise and feel of the carpet fabric helped players find the sensors with their feet. So the players would not have to lift their feet or leave the play area, the sensors are located under the balls of the feet only; players can rest on their heels and rock forth on the balls of their feet to activate the sensor. (This movement needs playtesting and research for strain in continued use and should be adapted accordingly for safe play.)

The three comma-like shapes in the middle form the Sharingan, another nod to the Naruto franchise. Currently it does not do anything, but there is the possibility of 1) turning it into a jump sensor (up), 2) turning it into a duck sensor (down), or 3) turning it into a special move where the player crouches down and presses their hand to it (it would likely be an analog sensor so any pressure works). Moreover, staying true to the franchise, there are seals and moves that require a ninja to press their hand onto the ground.

One critique of the floor mat design was that players intuitively inferred that rocking forward would move them forwards in the game, and rocking backwards would move them backwards. This, while a cool mechanic, runs into the same issue as the initial half-crouch concept. The main issue here, however, is that the design does not clearly convey to players that they should rock forwards to activate the sensor. This could possibly be changed by making the back heel an empty silhouette (we do want to mark where to stand regardless).

On the day of the presentation, we were unable to get the movement mat to work due to a circuit issue. We knew something was wrong when we realized that the pressure values were way too low with little difference between rest and active states. Thankfully, right before the class presentations, we were able to rearrange the circuit and make it work properly. We made the threshold 500, so the movement would be easier to control.

Forearm Guards:

image20 image18 image10

The forearm guards were our design fix both for organizing the wires, to allow greater ease of player movement and set-up, and for (somewhat) hiding the circuitry. We always knew it would be best to get the wires out of the way by bundling them along the arm with some sort of band, and after further research we were able to design a themed cover that hides the wires as well.

These guards are based on Kakashi’s Anbu uniform in the storyline (the gloves themselves are also based on Kakashi’s design). Many of the players mentioned that they felt cool and in character while wearing them, and that the guards helped with the immersion. The wires were organized, separated, and attached to the inside of the guard with electrical tape. Organizing the wires also made it easier to keep track of how the sensors were connected to the circuit.

To fasten the guard to the arm, two bands were created. Black poster paper was used to make the band permanently attached to the bottom of the guard near the wrist. The top band, on the other hand, was made from neoprene and could be placed anywhere to adjust to different-sized arms. Both bands were fastened using Velcro dots.

A second prototype would likely have the forearm guards made of a mixture of foamcore and neoprene, and properly painted and textured.

*The neoprene strips were too short, so multiple strips had to be sewn together. We also ran out of neoprene; otherwise we would have used it for all of the straps.

Circuit

image40

image41

*We taped down the breadboard so it wouldn’t get knocked over during demos.

Code

Link to code: https://gist.github.com/notbrian/f1bdf661cceba52939de42252c553623

The code runs through a few steps during every loop (a minimal sketch follows the list):

  1. Checks the state of every input pin, i.e. whether each switch circuit is closed or open.
  2. Loops through those pin states and presses the keys associated with the closed switches.
  3. Delays the program 10 ms (required for the game to register actions properly).
  4. Releases all the keys that were pressed.
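
Here is a minimal sketch of that loop, assuming two digital switch pins and the Arduino Keyboard library; the pin numbers and key choices are placeholders, not the actual mapping from the gist linked above.

```cpp
#include <Keyboard.h>

// Illustrative sketch of the loop above; pins and keys are placeholders,
// not the exact mapping from the linked gist. Needs a board with native
// USB HID support (e.g. Leonardo/Micro).
const int NUM_PINS = 2;
const int PINS[NUM_PINS] = {2, 3};       // switch inputs
const char KEYS[NUM_PINS] = {'a', 'd'};  // key tied to each switch
bool states[NUM_PINS];

void setup() {
  for (int i = 0; i < NUM_PINS; i++) {
    pinMode(PINS[i], INPUT_PULLUP);      // closed switch reads LOW
  }
  Keyboard.begin();
}

void loop() {
  // 1. Check whether each switch circuit is closed or open
  for (int i = 0; i < NUM_PINS; i++) {
    states[i] = (digitalRead(PINS[i]) == LOW);
  }
  // 2. Press the keys associated with the closed switches
  for (int i = 0; i < NUM_PINS; i++) {
    if (states[i]) {
      Keyboard.press(KEYS[i]);
    }
  }
  // 3. Hold for 10 ms so the game registers the input
  delay(10);
  // 4. Release everything before the next pass
  Keyboard.releaseAll();
}
```

Because every active key is pressed before the single 10 ms delay, combined inputs such as punch + a direction are held at the same time.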

When designing the code, one of the most important qualities we wanted to uphold was responsiveness. This is especially important in fighting games like Smash, where milliseconds count when performing moves and any added lag can produce behaviour the player doesn’t expect.

Originally the code had a delay attached to every individual key press and key release, which led to problems in play. Firstly, the player couldn’t use more than one key at once, which made the gameplay feel unresponsive. Secondly, the player couldn’t perform “smash attacks” (extra-powerful attacks triggered only by pressing punch + a direction at the same time) or “side specials” (attacks triggered by pressing special + a direction). These attacks are a crucial part of Smash Bros gameplay, so we rewrote the fundamental design of the code; the result ended up being much more compact and elegant than the early prototype.
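For contrast, here is a rough reconstruction of the per-key pattern the first prototype used (illustrative only, not the original code; it reuses the placeholder names from the sketch above). Because each key is pressed, held, and released on its own, a second switch closed at the same moment is never held together with the first.

```cpp
// Rough reconstruction of the first prototype's per-key handling
// (illustrative, not the original code).
for (int i = 0; i < NUM_PINS; i++) {
  if (digitalRead(PINS[i]) == LOW) {
    Keyboard.press(KEYS[i]);
    delay(10);                  // per-key delay blocks simultaneous inputs
    Keyboard.release(KEYS[i]);
  }
}
```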

Critique & Presentation

Video of a playtest

image24 image42 image37 image10 image26 image7 image2 image36 image30 image43

Overall, the reception and critiques were positive. One thing to note was that there were two main types of players: those who were familiar with Naruto, and those who had no clue what it was about. Somewhat disappointingly, this crowd largely consisted of the latter: they were enthusiastic about the experience and the idea, but lacked the nostalgia and excitement of the former. An explained idea, after all, can never replace the power of nostalgia. Someone called the Rasengan “the blue cloud!”, which we thought was funny.

One viewer brought up using this either in a multiplayer video game, or in a table-top game with mini-holograms of the characters (referencing Gatebox’s virtual assistant or those pyramid displays) that would replicate jutsus accordingly.

A player also suggested adding to the UI design of the game by having the symbols the player is making appear at the top of the screen (to keep track of combos and give immediate feedback). Many other games (especially in tutorial modes) already have this, and we agree it would be beneficial to either find one or code a separate layer on top of our emulator where that happens (sort of like the basketball scoreboard from Shilo’s group). This was originally brought up during our development, but with the small time frame it wasn’t able to be pushed out in time.

There was one isolated incident where a player had very large arms and we weren’t able to put the gloves on them properly, which was disappointing all around.

One observation we made while play-testing was players’ tendency, once they figured out where the triggers were, to skip the full hand seal and simply connect the triggers with minimal movement. This really isn’t a surprise: gamers consistently find loopholes in game systems to improve their reaction time with minimal energy (see: moving only the sensored limb in Just Dance, GPS spoofing in Pokemon Go, etc.). For that reason, it’s debatable how much of this is a design flaw and how much is simply unavoidable without more advanced sensing techniques.

Related works

Flightstick controls for D.Va on Overwatch

image35

This alternative control method follows the same vein of thought as ours: bringing an in-game mechanic or motion into the real world. In this case, flightsticks matching the ones D.Va uses in-game to pilot her mech are wired up as the controller for Overwatch.

One comment points out a flaw: “I know I’m being picky, but it always bugs me that the flight stick input doesn’t match what D.va is actually doing on screen.” This is also an issue we experienced in our work; the pre-programmed controls for Naruto in Super Smash Flash 2 are limited to punch, kick, grab, and block, not the full set of special jutsus. And we are unlikely to find a Naruto game without those basic moves, since they are essential to every fighting game. A solution short of coding a game ourselves would be to create an interface that also lets you punch, kick, and so on alongside the special jutsus (adding extra physical controls).

That being said, we would ideally find a more jutsu/combo-based game.

The CaptoGlove Game Controller

image23

This glove-based controller was a good source of inspiration. The glove uses bend sensors and an interesting implementation of haptic feedback to create an engaging game experience. On its Kickstarter page, one of the stretch goals was to add a programmable pressure sensor to the thumb of the glove; unfortunately, they did not reach that goal. It is remarkable that we learned to make pressure sensors in this class in parallel with a pressure-sensor implementation worth over $40,000 USD.

Although we also thought of creating bend sensors with the Velostat, we chose not to because we wanted players to have free movement of their hands to move from one hand sign to the next. The Naruto hand signs are far more complex gestures than those the CaptoGlove reads, and the CaptoGlove covers only one hand. We wanted to create a relationship between both of the player’s hands in order to control the game, so we chose to use switches: to complete a switch and trigger an action, both hands must be present and meet. It is extremely difficult to find a game controller, classic or glove-based, that requires both hands to make contact.

Nintendo Power Glove

image17

The Power Glove is a glove controller released in 1989 for the NES (Nintendo Entertainment System). The player makes hand and finger motions to control the game. Conductive ink in the fingers detects how flexed they are, and two ultrasonic speakers on the hand transmit a signal to three receivers placed around the television set; through triangulation, the system determines the roll and yaw of the hand. Unfortunately, when the glove came out it was a commercial failure due to its poor tracking and difficulty of use.

Unlike ours, the Power Glove uses wireless ultrasonic technology to transmit its data and also tracks the position and rotation of the hand, whereas our gloves use switches to translate actions into the game. Compared to the Power Glove, our implementation is much more responsive and intuitive.

Future Exploration

As discussed above, a different Naruto game could be explored (Shippuden was suggested). Different games altogether were also suggested in the critique.

The UI/tutorial-mode overlay discussed above would also be incredibly useful, improving the gameplay experience by clearly informing players which buttons they are pressing.

Exploring different materials for the build could also improve build quality, as would soldering wires instead of using alligator clips; that would make the visible parts of the wires under the guards more aesthetically acceptable.

The controller could also be developed into a more self-contained product by attaching the Arduino, with a battery power source, to the arm guard. It would then have to communicate with the computer wirelessly, either through a direct Bluetooth/Wi-Fi connection or via a second Arduino connected to the computer.