AerForge

The Team

Salisa Jatweerapong, 3161327

Melissa Roberts, 3161139

Mahnoor Shahid, 3162358

Samantha Sylvester, 3165592

 

About AerForge

AerForge is a project about change, and how the different spaces in which we create change the creation itself. The entity with which the user interacts transitions through realms and dimensions of creation, exploring and connecting the environments we learned in class.

To begin with, the AerForge experience is intangible. The user draws an imaginary line into the air in front of them, and out of thin air comes a visual. Projected on a screen in front of the user is the line they began to draw, and as the user continues to move their hand through nothing, they create something. Thin columns appear on the projection, their height matching the height of the user’s line. With a wave, the user rotates the projected image and out of lines and columns emerges a form. Though empty-handed, the user is able to create and engage with a virtual object. The user places their hands out in front of them, palms up as if asking for something. This gesture cues the virtual object to be downloaded and sent to a 3D printer. The final transformation brings AerForge into the physical world, and into the hands of the user.


The Soundscape Experience

Experiment 4: Final Report

Francisco Samayoa, Tyra D’Costa, Shiloh Light-Barnes
DIGF 2004-001
December 5, 2018



Project Details

  1. The user starts at any location on the map.
  2. They put on the headphones.
  3. Depending on the position of their head, the headphones play a particular sound.
  4. While a particular sound is playing, the screen displays an associated image or video.
  5. The user must experience all the audio and visual data to connect the media to a place that exists in the real world before their opponent does.

Project Inspiration

Our project seeks to encompass elements of sound, sensory perception, and nature in an installation-based artwork. Our process began by discussing the types of sensors we wanted to explore; eventually we decided on the MPU-9250 accelerometer and gyroscope. Our proposed idea is to integrate the sensor into headphones, which can then change the sound and visual experience of the user. As the user navigates the room and moves their head about, the world in which they are digitally immersed will enhance the real world. Essentially, it is a 4D sound experience. If we had more time we would add visual elements such as 360 video or a 3D Unity-based environment.


Background Work

In the end, we were able to get the MPU sensor working with Arduino so that we receive X, Y, and Z coordinates. This required scouring the web for solutions to library-related issues and calibrating the code in order to receive accurate data readings. Additionally, we were able to take these coordinates and map them to a camera in Unity, so that the sensor's orientation changes the perspective of the viewer. This meant we had pitch, yaw, and roll functioning for the gyroscope component. For the purpose of this experiment we disabled roll, since the user wouldn't be rolling their head per se. However, there were a few "blind spots" the camera couldn't pick up, such as the 180-degree mark. The interface was fully functional, for the most part. The main issue was that data overload kept causing Unity to freeze, so our solution was to reset the Arduino.
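As a rough illustration of the Arduino side of this pipeline (a minimal sketch, not our exact code: it assumes the MPU-9250 is wired over I2C at the default address 0x68 and reads the raw accelerometer and gyroscope registers directly with the Wire library), the orientation values can be printed as comma-separated serial lines for a Unity script to parse:

#include <Wire.h>
#include <math.h>

const int MPU_ADDR = 0x68;  // default I2C address of the MPU-9250

// Read a signed 16-bit value starting at the given register
int16_t read16(int reg) {
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(reg);
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_ADDR, 2, true);
  int16_t hi = Wire.read();
  int16_t lo = Wire.read();
  return (hi << 8) | lo;
}

void setup() {
  Serial.begin(115200);
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);  // wake the sensor up
  Wire.write(0x6B);                  // PWR_MGMT_1 register
  Wire.write(0);
  Wire.endTransmission(true);
}

void loop() {
  int16_t ax = read16(0x3B);  // raw accelerometer X (registers 0x3B-0x40)
  int16_t ay = read16(0x3D);
  int16_t az = read16(0x3F);
  int16_t gz = read16(0x47);  // raw gyroscope Z, the basis for yaw in the real project

  // Pitch and roll estimated from gravity; yaw needs gyro integration or the magnetometer
  float pitch = atan2(-(float)ax, sqrt((float)ay * ay + (float)az * az)) * 180.0 / PI;
  float roll  = atan2((float)ay, (float)az) * 180.0 / PI;

  Serial.print(pitch); Serial.print(",");
  Serial.print(roll);  Serial.print(",");
  Serial.println(gz);

  delay(20);  // ~50 Hz; throttling the stream also helps avoid the Unity freezes we saw
}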

Links To Contextual Work

How to play multiple audio files

Research on Arduino + Gyroscope

Good Link explaining how IMU sensor works

Tutorial for IMU and Unity set up

How to set up MPU and Arduino Library

Unity + Arduino Library


Goals & Features

  1. Engaging UX Design: We want to solidify the conceptual ideas of the soundscape experience to create an immersive UI/UX design. We want this to include elements of storytelling and gamification.
  2. Sound-Sensory Experience: Moving forward we will have to test that the sensor allows smooth transitions between sound files. The media assets will consist of ethereal soundtracks, data visualizations (natural algorithms and patterns), and nature-related images.
  3. Integrate Wearables: We also needed to design a way to integrate the sensors into the wearable technology (the headphones).
    The MPU unit was housed in a sewn leather pouch, with velcro attached underneath so it could stick to the top of the headphones. This way, the wires were completely out of sight since everything hung from above. For debugging purposes we wanted the MPU unit to be detachable from the headphones. In the end, we were successful.
  4. Discuss concepts of nature, ecosystems, and natural algorithms: Lastly, we wanted to think about how these concepts could work together to create a narrative and gameplay element.
    Using a blindfold we acquired, we were able to gamify the experience. With the blindfold on, the user would have to guess which environment they were placed in. We would randomly select 1 of 9 unique ecosystems, including 2 songs created by Francisco and Shiloh. These include a war scene, temple, beach, rain forest, and city intersection.

Pictures & Videos


 

 

The Light

 

Kiana Romeo, 3159835 || Dimitra Grovestine, 3165616

 

Inspiration 

When it came to our project, everything we did was based on our original inspiration, and our vision never wavered from it. There was a very specific message we wanted to get across, and in the end the concept of the piece mattered more than the actual execution. We were inspired by the idea of dying and “going into the light”; something grim and undesirable becoming something beautiful and inviting. Our project aimed to give people the experience of dying and going into the afterlife without it actually happening. We also aimed to let people interpret the afterlife as they saw fit, so as not to make any assumptions. Using visuals and audio associated with this type of scenario, we immersed people in a heavenly world in which they could briefly escape life on earth and ascend into something higher.

 

Contextual material


https://www.youtube.com/watch?v=axSxCo_uMoI&frags=pl%2Cwn – Don’t go into the White light (philosophical video)

This video goes over philosophical and religious reasons why one should not go into the light when they die. It was an interesting video to watch because, for the most part, people believe that the white light is a good thing, meaning you are going up into heaven and towards God. But if this happy light were a trap, going towards it would be a bad thing. This is why, during the presentation of the project, we decided to push people towards the light and then pull them out quickly, in order to give them a chance to decide what they felt about the light.

 https://www.youtube.com/watch?v=hOVdjxtnsH8 – choir of angels singing (audio used in project)

Church choirs can produce some of the most beautiful sounds and music, and after researching our concept, this music was incredibly inspiring. It expanded our concept: we wanted to create a fully immersive experience, and while strong visuals and lighting could definitely help create that environment, sound is just as important, which is why we found it necessary to find just the right audio to use in our project.

https://www.youtube.com/watch?v=lWqHRLjNZbE&frags=pl%2Cwn – Man details what it was like going to heaven after death

Although the credibility of this video is questionable, any idea of heaven in a religious or philosophical sense is a guess at what it’s really like. Therefore, having this individual’s take on it was important, as it covered some of the beliefs we wanted to include.

 

Features of the project

Overall, we set out to create a heavenly atmosphere. We believe that we took real steps towards creating this; however, we feel that our installation required more elements to bring the piece together. Obviously, we would have loved for all of our elements to work together cohesively, and after receiving feedback we would have added additional elements to the piece. We would have considered an alternate shape of projection, versus the classic rectangular projection, as a way to add a unique element that feels like it belongs in the piece, whereas the rectangular projection looks plain and slightly out of place. We would also consider adding smoke and mirrors to create a more ambiguous and interesting space. Not being able to see, and not necessarily knowing what you’re looking at, would help add interest to the piece.

 

Proximity sensors:


The proximity/distance sensors were meant to control all the elements of the installation. We planned to use two sensors in the exhibit: one would control the LED light strips, which would illuminate as a person got closer, as well as the whisper sound effect in the room, while the other would control the brightness of the cloud visuals at the front of the room and the volume of the angelic singing. Unfortunately for us, the quality of the sensors wasn’t the best and they did not work as well as we had wanted.
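As a sketch of the intended behaviour (the sensor type and pins here are assumptions rather than our final wiring), an HC-SR04-style distance sensor can be mapped so that an LED strip driven through a PWM pin gets brighter as someone approaches, with the same value sent over serial so the computer can scale the audio volume:

const int TRIG_PIN = 9;   // HC-SR04 trigger (assumed pin)
const int ECHO_PIN = 10;  // HC-SR04 echo (assumed pin)
const int LED_PIN  = 5;   // PWM pin driving the LED strip (e.g. through a MOSFET)

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  // Send a 10 microsecond ping and time the echo
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // time out after 30 ms
  long distanceCm = duration / 58;                 // rough conversion to centimetres
  if (duration == 0) distanceCm = 200;             // no echo: treat as far away

  // Closer = brighter: clamp to a 20-200 cm working range and invert
  long clamped = constrain(distanceCm, 20, 200);
  int brightness = map(clamped, 20, 200, 255, 0);
  analogWrite(LED_PIN, brightness);

  // The same value can drive the whisper/choir volume on the computer
  Serial.println(brightness);
  delay(50);
}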

The room:


Upon finalizing our concept, we decided we wanted to construct our installation in a critique room, where it would be dark enough to have a great effect. Unfortunately, we could only book a meeting room, which was not as dark and was very large. We made it work, but having a smaller space may have been a better option.

The lights:

It was a real challenge getting the lights to work even without the sensors. We ultimately fixed it by using a battery pack after realizing the lights needed a 12V power source instead of 5V. This was a crucial part of our process and took a whole class to figure out, but once we did it was more or less smooth sailing.


 

Code

atelierfinal (Clouds and visuals code)

atelierfinal (Lights and sound code)

 

Experiment 4 – Snow Day

Brian Nguyen – 3160984

Andrew Ng-Lun – 3164714

Michael Shefer – 3155884

Rosh Leynes – 3164714

Snow Day


Inspiration

Experiment 4 went through many developments and alterations compared to our initial plan for the assignment. Essentially, our inspiration came from the concept of how movement can manipulate an environment. With the use of matter.js, ml5.js, and the Posenet library, we set out to create an interactive installation that tracks an individual’s body movement and builds it into a skeleton capable of interacting with the environment within P5. The environment is set to mimic a snow day, where particles gradually drop to the bottom of the canvas and the individual is able to interact with their physics through movement of the arms. The purpose is to provide the experience of playing in the snow via P5. Additionally, the installation promotes interactivity with others, as it is capable of registering more than one individual on the canvas and allowing all participants to interact with the environment.


Related Work

The inspiration for our concept stemmed from an article that introduced us to Posenet and described its capabilities in depth. With a basic understanding of its implementation in P5, we continued to explore and develop the idea of physical body interactivity by looking at particle systems on CodePen for inspiration before looking into various other libraries. Additionally, some of our group members had previously worked with the webcam and its capability to manipulate particles in P5 via a webcam feed; this previous knowledge allowed us to jump-start our concept development.

Background Related Work

https://github.com/NDzz/Atelier/tree/master/Experiment-3?fbclid=IwAR3O6Nm8dLJ1ZMWGYfHoAZdNMrf8qYHPqX-nz5xDunLjfR5xTTWmfsNbfHM

 

Goals for the Project


Our first goal was to implement Posenet in P5 to register a body against a background particle system on the canvas. This was achieved, as pictured above. The basic points of the head, shoulders, and limbs were registered and constructed into a skeleton; furthermore, it managed to capture more than one person. From there we continued to refine the presentation of the project by altering the particles, canvas, and skeleton.


With Posenet and our particle system working well in P5, our next goal was to actually implement the interactivity. While this goal was achieved come presentation day, we did encounter difficulty when attempting to implement it. With the body movement tracked and represented as a skeleton in P5, we added squares at the points of the hands that would follow the movement of the arms and interact with the falling snow via physics upon touching it. The boxes weren’t always responsive, especially when they had to follow the movement of multiple people. Additionally, we experimented with which shape would be able to manipulate the snow better and ultimately settled on squares.

Our final goal came from an issue that we encountered during critique day. In order to register the subject effectively, the body had to be well lit. We managed to achieve this by installing light stands to illuminate the subject. We experimented with different ways of eliminating shadows and with the angles at which the light fell onto the subject. In the end we used two LED studio lights installed alongside the webcam, with a white backdrop, in order to capture the subject’s movement effectively.

 

Code with References and Comments

https://github.com/notbrian/Atelier-Snowday?fbclid=IwAR0XYzXsnVVGsWvfuWzT_TsOGNYARYvhxFyJ-71HK2yL5dtW4R3JV-jWAPs

Working Demo

https://notbrian.github.io/Atelier-Snowday/?fbclid=IwAR2pplMkpbB7mnTTWu5xq63prgu13r2Syy7KClFAADPmijTKe4BUcy-8As0

Physical Interface w Digital Feedback: MIDI Data using Unity/Ableton/MaxMSP with Soft Tactile Buttons [Analog and Digital, Cap Sensing]

created by Denzel Arthur & Angela Zhang

For this final assignment, we decided to continue working on what we explored in the previous project: controlling a 3D object in a game engine or some other digital environment with unorthodox physical input switches. We made a lot of good progress during the initial experiment, and because we have an affinity for music, we decided to continue pursuing it for this project, to solidify a foundation for how we create multimedia works in the future. The initial goal was to find a unique way of visualizing audio and have it be interactive, either in the progression of the music itself (sonic feedback) or in changes to the visual component that correspond to the music (visual feedback). This led us to experimenting with the Unity game engine, Ableton, and building physical buttons for an extended and unusual combination of hard and soft user inputs.

conductive thread

 

 

arduino micro
conductive ribbon

We wanted a variety of tactile experiences on our game-board, including both analog and digital inputs and outputs. Using methods explored in class, we used materials such as conductive thread, conductive fabric, velostat, and some insulating materials such as felt and foam to interface the buttons and the electronic elements. We also wanted to use some elements that would normally be used in regular electronic setups but not necessarily in a soft circuit, such as light cells, infrared sensors, and actual tactile buttons.

Angela’s schematic for digital button, [+] and [-] intertwined but not touching, in order to maximize the surface area that can be activated.
Angela’s schematic for analogue button [top down view]
Angela’s schematic diagram: construction of velostat button [layers, component view]

mid production - analogue button, velostat, conductive thread, conductive fabric, felt, foam, embroidery floss.

Our first soft button was an analogue pressure-sensing button made of velostat between two pieces of conductive fabric, with three connecting lines of conductive thread sewn into each piece of conductive fabric on either side of the velostat in the middle. One side of the conductive thread is positive, the other negative. These are sewn to the edge of the cut square that serves as the button, come off the square approximately 2 cm apart, and are eventually sewn into the blue felt that becomes the base of the button. The yellow foam and red felt are added for haptic feedback across a range of pressure; the idea was to allow for a wider range of pressure sensitivity from the velostat, as well as serving an aesthetic purpose. Without the added layers of material the button felt very flat, and there did not seem to be a big margin of input for the user, especially for an analog input, which is meant to provide a range of numerical data that would then be used to control some element in Unity, as with the other components of the project.
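Electrically, the finished button behaves like a pressure-dependent resistor. A minimal way to read it (assuming a voltage-divider wiring, with one conductive-fabric lead to 5V and the other to an analog pin with a fixed pull-down resistor to ground, which is not necessarily how our final board is wired) looks like this:

// Assumed wiring: one side of the velostat button to 5V,
// the other side to A0 and through a ~10k pull-down resistor to GND.
const int BUTTON_PIN = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(BUTTON_PIN);        // low with no pressure, rising toward 1023 when pressed hard
  int scaled = map(raw, 0, 1023, 0, 127);  // 0-127 range, convenient for MIDI or Unity parameters
  Serial.print(raw);
  Serial.print(" -> ");
  Serial.println(scaled);
  delay(20);
}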

The completed pressure button on the board, and a light cell beside it. Two bits of silver thread on the button are conductive; one is positive and one is negative, detecting how much the velostat is being pressed.
legs of the components are fed through the board, and twisted to be flush to the back of board so they can be sewn into the prototyping board, also on the back.

a tactile button, three 5mm neopixel LEDs, and the analog pressure button, with some conductive thread connections [front view]
LED connections sewn with conductive thread to the prototyping board [mid prod, back view]

The main idea was to use these buttons to control gameplay within Unity for a game that Denzel had programmed for Experiment 3. However, the Arduino Micro, as well as the Touch Board by Bare Conductive that Angela used in Experiment 3 to create the conductive painting [intended as input for Unity as well, but last used with Ableton], both have the ability to send MIDI data. We decided to switch it up and see if we could get a 3D object in Unity to work with Ableton and Max MSP’s patch for Ableton, Max for Live, making it respond in real time to a MIDI signal sent from one of our buttons or sensors. Unfortunately we did not have time to hook up all the different sensors and components, but there is potential to keep adding different analog and digital parameters to the board, and this is going to be an ongoing experiment for us to see how many different components we can combine.
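Since the Arduino Micro is a native-USB board, one way to send the MIDI messages described here is the MIDIUSB library; the pins, note number, and velocity mapping below are illustrative placeholders rather than our exact firmware:

#include <MIDIUSB.h>

const int PRESSURE_PIN = A0;  // analog velostat button (assumed pin)
const int TRIGGER_PIN  = 7;   // digital tactile button to ground (assumed pin)
bool notePlaying = false;

void noteOn(byte channel, byte pitch, byte velocity) {
  midiEventPacket_t packet = {0x09, (byte)(0x90 | channel), pitch, velocity};
  MidiUSB.sendMIDI(packet);
  MidiUSB.flush();
}

void noteOff(byte channel, byte pitch) {
  midiEventPacket_t packet = {0x08, (byte)(0x80 | channel), pitch, 0};
  MidiUSB.sendMIDI(packet);
  MidiUSB.flush();
}

void setup() {
  pinMode(TRIGGER_PIN, INPUT_PULLUP);  // pressed = LOW
}

void loop() {
  bool pressed = digitalRead(TRIGGER_PIN) == LOW;
  if (pressed && !notePlaying) {
    // Pressure on the soft button sets the velocity of the note Ableton receives
    byte velocity = map(analogRead(PRESSURE_PIN), 0, 1023, 1, 127);
    noteOn(0, 60, velocity);  // middle C on channel 1
    notePlaying = true;
  } else if (!pressed && notePlaying) {
    noteOff(0, 60);
    notePlaying = false;
  }
  delay(5);
}

With this, the board enumerates as a standard USB-MIDI device, so Ableton receives the notes like any controller and the Max for Live patch reacts to whatever the track plays.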

For the connection of Unity -> Ableton -> Max:

screenshot: GitHub for The Conductor
screenshot – YouTube video describing connection of Unity and Max using The Conductor
screenshot – Ableton 10 Remote Script OSC to MIDI Note forum
screenshot – we looked at some of the possibilities between Ableton and Max using Max for Live
mapping audio parameters (OSC data) from ableton to Max to visualize change in audio through color mapping

The final product is a set of buttons that send a MIDI signal to Ableton, which triggers or changes a sound. The track that the sound is on has a script that connects it to Max, which takes the MIDI data it receives and outputs a corresponding behaviour in the 3D object, changing its shape or colours relative to the change happening in the sonic component. In theory, anything that sends a MIDI signal can work with this setup, so it works with soft-circuit buttons, conductive paint, or any regular MIDI keyboard; any input device works as long as you can get it to communicate with MIDI data. We experimented with other MIDI devices such as the OP-1 [by Teenage Engineering] as well as the conductive painting [using the Touch Board by Bare Conductive] from the previous experiment, which outputs MIDI.

Works with Touch Board – which outputs MIDI


The Conductor – Initial setup inside Unity
Ableton - Live view; jit.window Max object with Ableton tracks (audio source)
Final set up – jit.window Max object, Max patches and MIDI tracks in Ableton Live View (audio source)

 

Yay!! Now you can be a VJ 😀

Shooting Game – Final Report

Maddie Fisher-Bernhut, Donato Liotino, Ola Soszynski

Code: https://github.com/ToxicDon/qyro-shooter-game-arduino

Our main inspiration for this experiment was arcade shooting games, where the player earns points by shooting various targets. However, we mostly wanted to work with some new sensors on the Arduino. Specifically, we were inspired to figure out the gyroscope/accelerometer and the Bluetooth connectors. We also wanted to work with vibration motors for some game concepts we had in mind.
The final product

Originally, we planned on having the game be competitive, with two players facing each other to earn more points, or to defeat the other. We wanted a fun, wireless shooting game, and were inspired by similar games often seen in arcades. This would encourage friendly competition between players and generally create an enjoyable experience. The experience would be inspired by sci-fi movies and shows, such as Doctor Who, and fantasy, such as Harry Potter. The two players would fight each other, trying to defeat the opposition’s creature, also themed around or inspired by their chosen theme.

For inspiration, we looked at how competition assists one’s performance, as well as whether any games like this had been made in p5 before. This led us to https://www.bigfishgames.com/online-games/8942/shooting-range/index.html for similar games, and to a basic DOS game named Shooting Gallery (https://youtu.be/incPLdf712M). https://www.theglobeandmail.com/life/health-and-fitness/why-a-bit-of-healthy-competition-is-good-for-everyone/article8749934/ and https://www.psychologicalscience.org/news/minds-business/the-upside-of-rivalry-higher-motivation-better-performance.html helped us read into how competition helps, even though we did not end up making a competitive game.

Final wiring setup

Goals of the Experiment:

1. Use and learn Bluetooth for Arduino

2. Use and learn the gyroscope accelerometer

3.  Use vibration motor to detect invisible targets

    • To initially figure out how the motor worked, we used a tutorial (https://www.precisionmicrodrives.com/content/how-to-drive-a-vibration-motor-with-arduino-and-genuino/)
    • After working through the tutorial, the code was mostly made by testing out different values, with intensity mapped according to distance; it was originally tested with a light sensor and then implemented into the game code (see the sketch after this list).
    • Sadly, we were unable to implement this aspect, due to clashing libraries in the code that prevented the Arduino from doing anything more than the movement tracking.

4. Create different movement paths for targets as one progresses

    • Coded in using sine and cosine waves
    • While we wanted to implement a highly difficult mode, perhaps with a basic AI which avoided the player, we did not have enough time or understanding to do so.
    • The presented version shifts the randomly moving target once hit.

5.  Have working gifs

6.  Be able to see and track player on the screen using colour detection

    • We wanted to implement Maddie’s code from experiment one, which involved colour tracking, to track the crosshairs of the players. However, this code runs much slower and faces many avoidable complications when trying to track two colours at once. Due to this we later cut the number of players down to one.
    • Later on, we realized the program runs far smoother using the gyroscope rather than colour detection. So, we used the gyroscope for tracking, with the trigger implemented through colour detection.
    • Another issue with the colour detection was that the LED was too small to properly activate the code, so we needed to diffuse the light and make a larger trackable object; for this we used a ping-pong ball. Due to availability, we could only access orange ping-pong balls, which caused difficulties with the colour detection, triggering the game when the ball was too close. This could be fixed by using a white ping-pong ball instead. Our short-term fix was to distance the gun, in a relatively dark location.

7.  Have the game shoot by moving the gun in a specific way

    • We realized that by using movement to fire the gun, the targeting would be offset. So, we changed plans: the player fires by pressing a button that lights the LED for the colour detection to pick up, “firing” the gun and checking whether it hit the target. Furthermore, as a failsafe, we could program the button to be what tests for collision, if the colour detection decides not to work.

8.  Customize the controller

    • We worked with spray paint for the first time. It isn’t the cleanest or nicest looking, but it turned out exactly as we wanted given our current skills.
    • The addition of the ping-pong ball also assists with the aesthetic of the prop gun for the game, as we needed to keep school guidelines in mind as we worked.

 

 

9.  Create a working shooter game

    • Accomplished
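The vibration-motor behaviour described under goal 3 comes down to mapping a distance-like reading to PWM intensity. Here is a sketch in the spirit of the tutorial we followed, with an assumed light sensor on A0 and the motor driven through a transistor on PWM pin 3; the thresholds are placeholder values of the kind we found by testing:

const int SENSOR_PIN = A0;  // light sensor standing in for distance to the invisible target
const int MOTOR_PIN  = 3;   // vibration motor driven through a transistor on a PWM pin

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  int reading = analogRead(SENSOR_PIN);           // 0-1023
  int intensity = map(reading, 0, 1023, 0, 255);  // stronger buzz as the reading rises
  if (intensity < 40) intensity = 0;              // dead zone so the motor stays silent when far away
  analogWrite(MOTOR_PIN, intensity);
  delay(30);
}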

Hunting Game: Progress Report

 

Jin Zhang (3161758) & Siyue Liang (3165618)

Inspiration

In our previous projects, we paid more attention to visual effects, and most of them were a bit weak in terms of interactivity. For this final project, we decided to explore physical interactions more. We want to build a shooting game out of physical materials and make it realistic. Our idea is a shooting-themed game where the player uses a laser pointer or a toy gun to aim for the center of the targets.

 

Contextual Work

The duck hunting game:

https://www.youtube.com/watch?v=YyM6MmBM14w

 

A light sensor is placed in the center of the target and a servo motor is attached to its base. When the player points the laser at the center, the target falls flat. Some of the details still need to be refined and we are still in the testing phase.

 

Materials & Techniques

  • Light sensors/pressure sensors
  • Servo motors
  • Light pointers/laser light
  • Cardboard and aluminum foil

 

Since our idea for this project is quite different from our previous projects, we might not be using the same materials or techniques as before. We are still deciding what kind of sensor to use as the switch in this game.

 

Code & Wiring

 

#include <Servo.h>

Servo servo1;

int forcevalue = 0;  // reading from the sensor on analog pin 0
int pos1 = 90;       // servo position in degrees

void setup() {
  Serial.begin(9600);
  servo1.attach(9);   // servo at digital pin 9
  //servo1.write(0);  // initial point for servo
}

void loop() {
  forcevalue = analogRead(A0);  // sensor attached to analog 0
  Serial.print("Sensor value = ");
  Serial.println(forcevalue);

  //int value = map(forcevalue, 0, 1023, 0, 255);

  if (forcevalue >= 100) {  // threshold for a hit
    for (pos1 = 90; pos1 <= 180; pos1 += 30) {
      servo1.write(pos1);   // knock the target down in steps
      delay(15);            // give the servo time to reach each position
    }
  }
}


Work in Progress


The sensor works as a switch to control the servo motor and sound.
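The sound half of that switch is not shown in the code above; one possible way to add it (assuming a piezo buzzer on pin 8, which is not part of our current wiring) is to fire a short tone whenever the sensor crosses the same threshold:

const int PIEZO_PIN  = 8;    // assumed piezo buzzer pin
const int SENSOR_PIN = A0;   // same sensor as in the code above
const int THRESHOLD  = 100;  // same hit threshold as the servo code

void setup() {
  pinMode(PIEZO_PIN, OUTPUT);
}

void loop() {
  if (analogRead(SENSOR_PIN) >= THRESHOLD) {
    tone(PIEZO_PIN, 880, 200);  // short 880 Hz beep when the target is hit
    delay(300);                 // let the beep finish before re-checking
  }
}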


The value of the sensor would jump from 0 straight to 1023 for some reason. We were confused and figured that there might be something wrong with the wiring or the code; a loose connection or a floating analog pin could cause readings like this. We are still testing it and trying to find a solution for this issue.

The Soundscape Experience


Tyra D’Costa | Shiloh Barnes | Francisco Samayoa


Project Inspiration

Our project seeks to encompass elements of sound, sensory perception, and nature in an installation-based artwork. Our process began by discussing the types of sensors we wanted to explore; eventually we decided on the MPU-9250 accelerometer and gyroscope. Our proposed idea is to integrate the sensor into headphones, which can then change the sound and visual experience of the user. As the user navigates the room and moves their head about, the world in which they are digitally immersed will enhance the real world.

Goals

  1. Engaging UX Design: We want to solidify the conceptual ideas of the soundscape experience to create an immersive UI/UX design. We want this to include elements of storytelling and gamification.
  2. Sound-Sensory Experience: Moving forward we will have to test that the sensor allows smooth transitions between sound files. The media assets will consist of ethereal soundtracks, data visualizations (natural algorithms and patterns), and nature-related images.
  3. Integrate wearables: We also need to design a way to integrate the sensors into the wearable technology (the headphones).
  4. Discuss concepts of nature, ecosystems and natural algorithms: Lastly, we want to think about how these concepts can work together to create a narrative and gameplay element.

Background Work

So far, we have been able to get the MPU sensor working with Arduino so that we receive X, Y, and Z coordinates. This required scouring the web for solutions to library-related issues and calibrating the code in order to receive accurate data readings. Additionally, we were able to take these coordinates and map them to a camera in Unity, so that the sensor’s orientation changes the perspective of the viewer.

Links To Contextual Work

How to play multiple audio files

Research on Arduino + Gyroscope

Good Link explaining how IMU sensor works

Tutorial for IMU and Unity set up

How to set up MPU and Arduino Library

Unity + Arduino Library

Project Details

  1. The user starts at any location on the map.
  2. They put on the headphones.
  3. Depending on the position of their head, the headphones play a particular sound.
  4. While a particular sound is playing, the screen displays an associated image or video.
  5. The user must experience all the audio and visual data to connect the media to a place that exists in the real world before their opponent does.

Sketches


Process


Air Printing: Drawing Physical Objects with Leap

Experiment 4: Progress Report:

Air Printing: Drawing Physical Objects with Leap

 

Salisa Jatuweerapong, Sam Sylvester, Melissa Roberts, Mahnoor Shahid

Atelier I: Discovery 001

Kate Hartman, Adam Tindale, Haru Ji

2018-11-27

 

Inspiration

We started with an idea of drawing in the air and transmitting art onto the screen with the movements. At first, we thought of using an accelerometer or conductive paint proximity sensors. We didn’t want any sensors to be attached to the hand. Through research and feedback, we discovered the Leap Motion Controller and a project called “Air Matter”.

“Air Matter” is an interactive installation by Sofia Aronov. The installation takes a new approach to traditional pottery with the Leap Motion Controller: the viewer draws a 3D pot in the air, which is then 3D printed. An Arduino is also used with potentiometers to control aspects of the model.

 

Context

This project is an exploration of alternative interfaces and virtual and physical space.

We took the “Air Matter” installation as our main inspiration. Instead of drawing a vase, we decided to draw a sculpture made of thin rectangles. This idea was based on the disappearing sculptures by Julian Voss-Andreae, which, depending on the point of view, seem to disappear into thin air. Our project “conjures” physical objects from thin air, yet the physical objects it creates disappear back into thin air (conceptually; our final design isn’t printed thin enough for that to actually work). There’s something to be said about the transfer of objects from physical, to virtual, back to physical space, and their permanence and materiality in each layer.

Related interfaces include: webcam motion tracking, Kinect, and a variety of glove interfaces (Captoglove game controller, Mi.Mu). We chose to explore Leap as it seemed an exciting challenge; as well, we wanted to explore extremely non-invasive, non-physical interfaces (no gloves).

Other work that is being done on Leap includes Project Northstar, a new AR interface that aims to redefine the AR experience. Otherwise, the Leap team is focused on creating accurate hand tracking software to be used as a tool for any other projects.

Links to Contextual Work

Air Matter: https://www.sofiaaronov.com/air-matter

Julian Voss-Andreae Sculpture: https://www.youtube.com/watch?v=ukukcQftowk

Mi.Mu Gloves: https://mimugloves.com/

Northstar: https://developer.leapmotion.com/northstar

Images of the work in progress

Progress Timeline Checklist (link).

Thursday 22nd: 

Designing the visuals


Friday 23rd:

Getting Started with Leap


We tried out the Leap and ran into some challenges with the different software available for download. The tutorials we found (see Research) seem to be written for some versions of the software and not others.

Monday 26th:

Processing Sketch with mouseDragged


According to the sketch, the person would draw a squiggle with their finger as an outline for the sculpture. Thin rectangles should be placed at specific X positions to maintain a consistent gap between them. The height of the rectangles is determined by the Y position of the cursor or the finger of the person.

Processing Sketch with Leap

Wrote code in Processing using the Leap Motion + Arduino Processing library, and used input from the Leap to draw a line. Boxes are drawn centered along the vertical middle of the screen; the height and depth of each box depend on the y position of the user’s finger, and its placement along the x-axis depends on the x position of the user’s finger (a box is drawn if the x value is divisible by 50). The width of the box is constant. There is a bit of a lag between the line being drawn and the box being drawn, so the line has to be drawn slowly.

JavaScript finger/hand code reference: https://developer-archive.leapmotion.com/documentation/javascript/api/Leap_Classes.html?proglang=javascript

Tuesday 26th:

Converted Processing Sketch to Javascript


There was no STL export for the Processing version we were using, so we had to switch to JavaScript. This was important since Melissa’s STL library code from Experiment 2 had already proven to work.

In the JavaScript code, we used the following libraries:

  • Three.js (export STL library)
  • Leap.js (Leap Motion Controller JavaScript library)
  • P5.js
  • Serial Port

Pictured above is the functional p5.js/leap.js code.

Implementing three.js library into the functional p5.js/leap.js code


This involved getting rid of p5 code, as the three libraries (three, p5, and leap) didn’t work well together. The biggest changes were changing how we created 3D shapes, creating a renderer to replace our canvas, setting up a scene (full of our shapes) to animate and render, and including an STL exporter, which will allow us to print the 3D object drawn on the screen.

The Leap coordinate system seemed to be very different from the Three.js coordinate system, which meant the shapes we had created displayed far larger than originally intended. However, the code technically works: the scene (airPrint) has our shapes in it, and they are being reproduced on the screen. Leap’s coordinate system uses millimeters as units, with the origin at the center of the top surface of the Leap.

Further steps involve possibly implementing additional controls with Leap.

Connected Arduino to USB C


Using WebUSB, we created a workflow where a physical button push acts as ‘enter’ on the keyboard.

This push button downloads the STL file from the sketch, which can then be used to 3D print.
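On the Arduino side, this only needs a debounced button that reports a press to the page. A minimal sketch (assuming a pushbutton between pin 2 and ground, and using plain serial here in place of the WebUSB transport we actually used) would look like:

const int BUTTON_PIN = 2;  // pushbutton between pin 2 and GND
bool lastState = HIGH;

void setup() {
  Serial.begin(9600);
  pinMode(BUTTON_PIN, INPUT_PULLUP);  // internal pull-up, so pressed = LOW
}

void loop() {
  bool state = digitalRead(BUTTON_PIN);
  if (state == LOW && lastState == HIGH) {
    Serial.write('\n');  // falling edge: tell the page to treat this as 'enter' and export the STL
    delay(50);           // crude debounce
  }
  lastState = state;
}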

GitHub: https://github.com/SuckerPunchQueen/Atelier-Ex-4?fbclid=IwAR1wH5XmWsn4G-S9e3zezb8yrDMfmp56uRA7xzTVI80JTh3Wj-hnKFjrZ-w 

Previous Experiments

Melissa’s Nameblem provided a starting point for a generative code → .stl → 3D printing workflow. Melissa’s original project combined p5.js with three.js and exported a .stl file that she had to manually fix in 3D Builder. While we had hoped to simply reuse this code for Air Printing (it is a rather technical workflow), we are having issues interfacing Leap.js with p5.js. As well, we are hoping to automate the process.

Mahnoor’s work with capacitive sensing in Experiment 3 inspired our original interface for air sensing. Her umbrella had a proximity sensor created using conductive paint and the CapSense library, and we reasoned we could use two capacitive sensors on two different axes to take an x-position and y-position for a hand. This would not be as accurate as Leap, and since Melissa wanted to buy a Leap anyway, we opted to use that for our project.
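For reference, the capacitive approach we considered would have looked roughly like this with the CapacitiveSensor library (the pin choices are hypothetical): two painted or foil strips, one per axis, each read as a proximity value that could be combined into a rough x/y estimate.

#include <CapacitiveSensor.h>

// Send pin 4 is shared; receive pins 2 and 6 go to the two conductive strips (one per axis)
CapacitiveSensor xSensor = CapacitiveSensor(4, 2);
CapacitiveSensor ySensor = CapacitiveSensor(4, 6);

void setup() {
  Serial.begin(9600);
}

void loop() {
  long xRaw = xSensor.capacitiveSensor(30);  // 30 samples per reading
  long yRaw = ySensor.capacitiveSensor(30);

  // Larger values mean the hand is closer to that strip; these would still need calibration
  Serial.print(xRaw);
  Serial.print(",");
  Serial.println(yRaw);
  delay(50);
}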

We are using p5.js, which Adam introduced us to in Experiment 1 to draw our design.

Haru’s Endless Forms Most Beautiful, specifically the experiments based off William Latham’s work, was our launch point for the visual design. Originally, our code was a bastardized Tinkercad / building blocks game. We felt that we could do more visually, to elevate the project from a tool/workspace to an actual artwork. We looked at the rule-based work we explored in Haru’s unit for inspiration, since we were already restricted by rules as to what would practically be able to print (basic geometry, cubes, connected lines).

Experiment 4 – Progress Report

Brian Nguyen – 3160984

Andrew Ng-Lun – 3164714

Michael Shefer – 3155884

Rosh Leynes – 3164714

Soup of Stars

Inspiration

The inspiration for the project developed as we looked into the potential of our original concept. We started off with the inspiration of movement, implementing it with analog sensors as a ball game where users would attempt to keep the ball up using body parts that had sensors attached. After reviewing several pieces, we decided to develop the concept entirely around a webcam, because we wanted the body to be the entire subject of the concept.

Relevant Links for Inspiration

https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5

https://ml5js.org/docs/posenet-webcam?fbclid=IwAR2pg6qdmZfbi0Gxi3ohxtP9tcXUpokaYj6triiHtw6giJ9vTbYVyM1LNWI

Context

The project utilizes a webcam along with Posenet and P5. With the Posenet library, a skeleton is constructed based on the subject registered via the webcam. Within P5, a particle system intended to resemble stars is drawn in the background. While still focusing on movement, the particle system reacts to the movement of the skeleton (mostly the limbs): as the arms move across the canvas, the particles swirl and twist, following the movement of the skeleton. The skeleton of the subject also appears on the canvas. Additionally, more than one individual can be registered as a skeleton as long as they are in proper view of the webcam. The intent is to provide a sense of interactivity where individuals have an impact on the environment and can alter it the way they see fit.


Pictured above is the skeleton using the Posenet demo that will be controlling the particle system. The movement of the limbs will be crucial in altering the environment. There are some issues where some limbs aren’t recognized at times especially when they are close to the body.


Pictured above is the implementation of the Posenet library with P5


Previous Materials/Experiments

For Experiment 3, we used the webcam with P5 to construct an image out of particles, and we managed to manipulate those particles with a sensor combined with the webcam feed. For this experiment, we are still using familiar elements such as the particle system in P5 and the webcam feed projection, but altering the concept and their relation to one another.