Air Printing: Drawing Physical Objects with Leap

Experiment 4: Progress Report


 

Salisa Jatuweerapong, Sam Sylvester, Melissa Roberts, Mahnoor Shahid

Atelier I: Discovery 001

Kate Hartman, Adam Tindale, Haru Ji

2018-11-27

 

Inspiration

We started with the idea of drawing in the air and transmitting the movements onto the screen as art. At first, we thought of using an accelerometer or conductive-paint proximity sensors, but we didn’t want any sensors attached to the hand. Through research and feedback, we discovered the Leap Motion Controller and a project called “Air Matter”.

“Air Matter” is an interactive installation by Sofia Aronov. The installation takes a new approach to traditional pottery using the Leap Motion Controller: the viewer draws a 3D pot in the air, which is then 3D printed. An Arduino with potentiometers is also used to control aspects of the model.

 

Context

This project is an exploration of alternative interfaces and of virtual and physical space.

We took the “Air Matter” installation as our main inspiration. Instead of drawing a vase, we decided to draw a sculpture made of thin rectangles. This idea was based on the disappearing sculptures by Julian Voss-Andreae, which, depending on the point of view, seem to disappear into thin air. Our project “conjures” physical objects from thin air, yet the physical objects it creates disappear back into thin air (conceptually; our final design isn’t printed thin enough for that to actually work). There’s something to be said about the transfer of objects from physical, to virtual, back to physical space, and their permanence and materiality in each layer.

Related interfaces include webcam motion tracking, the Kinect, and a variety of glove interfaces (CaptoGlove game controller, Mi.Mu). We chose to explore the Leap because it seemed an exciting challenge, and because we wanted an extremely non-invasive, non-physical interface (no gloves).

Other work being done with the Leap includes Project Northstar, a new AR interface that aims to redefine the AR experience. Otherwise, the Leap team is focused on creating accurate hand-tracking software to be used as a tool for other projects.

Links to Contextual Work

Air Matter: https://www.sofiaaronov.com/air-matter

Julian Voss-Andreae Sculpture: https://www.youtube.com/watch?v=ukukcQftowk

Mi.Mu Gloves: https://mimugloves.com/

Northstar: https://developer.leapmotion.com/northstar

Images of the work in progress

Progress Timeline Checklist (link).

Thursday 22nd: 

Designing the visuals


Friday 23rd:

Getting Started with Leap


Tried out the Leap and ran into some challenges with the different software available for download. The tutorials we found during research seem to apply to some versions of the software and not others.

Monday 26th:

Processing Sketch with mouseDragged


In this sketch, the user draws a squiggle with their finger as the outline of the sculpture. Thin rectangles are placed at fixed X positions to maintain a consistent gap between them, and the height of each rectangle is determined by the Y position of the cursor or the user’s finger.
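A minimal p5.js sketch of this logic (the actual sketch at this stage was written in Processing; the 50-pixel gap, canvas size, and height mapping here are placeholder assumptions):

let lastColumn = -1; // index of the last column that received a rectangle

function setup() {
  createCanvas(800, 400);
  background(255);
  rectMode(CENTER);
}

function mouseDragged() {
  // Trace the squiggle the user is drawing.
  stroke(0);
  line(pmouseX, pmouseY, mouseX, mouseY);

  // Drop a thin rectangle only when the cursor enters a new 50 px column,
  // so the gap between rectangles stays consistent.
  const column = floor(mouseX / 50);
  if (column !== lastColumn) {
    lastColumn = column;
    const h = abs(mouseY - height / 2) * 2; // height follows the cursor's Y position
    noStroke();
    fill(180);
    rect(column * 50 + 25, height / 2, 10, h); // thin, constant-width rectangle
  }
}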

Processing Sketch with Leap

Wrote code in Processing using the LeapMotion + Arduino Processing library and used input from the Leap to draw a line. Boxes are drawn centered along the vertical middle of the screen; the height and depth of each box depend on the y position of the user’s finger, and placement along the x-axis depends on the x position of the finger (a box is drawn if the x value is divisible by 50). The width of the box is constant. There is a bit of a lag between the line being drawn and the box appearing, so the line has to be drawn slowly.
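The same placement rule, rendered as a JavaScript sketch with leap.js (the code at this stage was actually Processing, and drawThinBox() is a hypothetical drawing helper):

Leap.loop(function (frame) {
  if (frame.pointables.length === 0) return;

  // tipPosition is [x, y, z] in millimeters, with the origin on the controller.
  const [x, y] = frame.pointables[0].tipPosition;

  // Only place a box when the finger's x value is divisible by 50.
  if (Math.floor(x) % 50 === 0) {
    const size = Math.abs(y);       // height and depth follow the finger's height
    drawThinBox(x, 0, size, size);  // centered on the vertical middle, constant width
  }
});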

JavaScript finger/hand code reference: https://developer-archive.leapmotion.com/documentation/javascript/api/Leap_Classes.html?proglang=javascript

Tuesday 27th:

Converted Processing Sketch to JavaScript

[screenshots: finger_painting-1, finger_painting-2]

There was no STL export option for the Processing version we were using, so we had to switch to JavaScript. This was important since Melissa’s STL library code from Experiment 2 had already proven to work.

In the JavaScript code, we used four libraries:

  • Three.js (3D rendering and STL export)
  • Leap.js (Leap Motion Controller library for JavaScript)
  • P5.js
  • Serial Port

Pictured above is the functional p5.js/leap.js code.

Implementing the three.js library into the functional p5.js/leap.js code


This involved getting rid of the p5 code, as the three libraries (three.js, p5.js, and leap.js) didn’t work well together. The biggest changes were changing how we create 3D shapes, creating a renderer to replace our canvas, setting up a scene (containing our shapes) to animate and render, and including an STL exporter, which will allow us to print the 3D object drawn on the screen.
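A minimal sketch of that structure, assuming the classic (non-module) three.js build with examples/js/exporters/STLExporter.js loaded; the scene name airPrint comes from the report, but sizes and materials here are placeholders:

// The renderer takes the place of the p5 canvas.
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// The scene holds every box the user draws.
const airPrint = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 2000);
camera.position.z = 400;

// Add one thin box; height and depth follow the finger position.
function addBox(x, size) {
  const geometry = new THREE.BoxGeometry(10, size, size);
  const box = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());
  box.position.set(x, 0, 0);
  airPrint.add(box);
}

// Animate and render the scene.
function animate() {
  requestAnimationFrame(animate);
  renderer.render(airPrint, camera);
}
animate();

// Export everything in the scene as an ASCII STL string for printing.
function exportSTL() {
  const exporter = new THREE.STLExporter();
  return exporter.parse(airPrint);
}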

The Leap coordinate system is quite different from the Three.js coordinate system, which meant the shapes we created displayed far larger than originally intended. However, the code technically works: the scene (airPrint) contains our shapes, and they are reproduced on the screen. The Leap’s coordinate system uses millimeters as its units, with the origin at the center of the controller’s top surface.
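One possible way to reconcile the two coordinate systems, sketched with leap.js’s interaction box rather than our final mapping (SCENE_WIDTH and SCENE_HEIGHT are assumed scene extents, and addBox() is the helper sketched above):

const SCENE_WIDTH = 400;
const SCENE_HEIGHT = 200;

Leap.loop(function (frame) {
  if (frame.pointables.length === 0) return;

  // Fingertip in millimeters, origin at the center of the Leap's top surface.
  const tip = frame.pointables[0].tipPosition;

  // Normalize into the interaction box (each axis 0..1), then scale to scene units.
  const n = frame.interactionBox.normalizePoint(tip, true);
  const x = (n[0] - 0.5) * SCENE_WIDTH; // center the x axis on the scene origin
  const y = n[1] * SCENE_HEIGHT;        // keep y positive, as the Leap reports it

  addBox(x, y);
});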

Further steps involve possibly implementing additional controls with Leap.

Connected Arduino to USB C


Using WebUSB, we created a workflow in which a physical button push acts as ‘enter’ on the keyboard.

Pressing this button downloads the STL file from the sketch, which can then be used to 3D print.
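A sketch of the browser side of that step, assuming the button press arrives as an ‘Enter’ keypress (the WebUSB/Arduino wiring is not shown, and the filename is arbitrary); it reuses the exportSTL() helper sketched earlier:

document.addEventListener('keydown', function (event) {
  if (event.key !== 'Enter') return;

  // Package the STL text and trigger a download the user can send to the printer.
  const blob = new Blob([exportSTL()], { type: 'model/stl' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'airPrint.stl';
  link.click();
  URL.revokeObjectURL(link.href);
});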

GitHub: https://github.com/SuckerPunchQueen/Atelier-Ex-4

Previous Experiments

Melissa’s Nameblem provided a starting point for a generative code → .stl → 3D printing workflow. Her original project combined p5.js with three.js and exported a .stl file that she had to manually fix in 3D Builder. While we had hoped to simply reuse this code for Air Printing (it is a rather technical workflow), we are having issues interfacing Leap.js with p5.js. We are also hoping to automate the process.

Mahnoor’s work with capacitive sensing in Experiment 3 inspired our original interface for air sensing. Her umbrella had a proximity sensor created using conductive paint and the CapSense library, and we reasoned we could use two capacitive sensors on two different axes to take an x-position and y-position for a hand. This would not be as accurate as the Leap, and since Melissa wanted to buy a Leap anyway, we opted to use that for our project.

We are using p5.js, which Adam introduced us to in Experiment 1, to draw our design.

Haru’s Endless Forms Most Beautiful, specifically the experiments based on William Latham’s work, was our launch point for the visual design. Originally, our code was a bastardized Tinkercad / building-blocks game. We felt that we could do more visually, to elevate the project from a tool/workspace to an actual artwork. We looked to the rule-based work we explored in Haru’s unit for inspiration, since we were already restricted by rules as to what would practically print (basic geometry, cubes, connected lines).

Experiment 1 – Untitled – Salisa Jatuweerapong

PROJECT SUMMARY:

Untitled is a series of VR experiments of varying degrees of success, with the original aim of creating a VR music video and learning a VR workflow. While the former was not accomplished, the latter was somewhat achieved; I researched several types of VR workflows and semi-successfully implemented two on Google Cardboard: big bang (p5.vr) and AYUTTHAYA (Unity). big bang takes you to an indefinite point in surrealistic low-poly space, while the eponymous AYUTTHAYA places you in Thailand on a cloudy day.

PROCESS:

My initial idea was to create a VR music video for Halsey’s Gasoline (Triple Layered) (https://www.youtube.com/watch?v=fEk-9bOqvoc). Working with glowing, smoky, audio-reactive spheres (I probably would have created a particle system and then brought in the p5 sound libraries), I wanted to create one animated sphere, giving it personality, then make it grow and rush at the viewer, crowding them in as the audio grows louder. This would then be reflected three times around the viewer, with each sphere following one layer of the audio track. At one point in the song, they would all transform into snake women (I was going to model them in Blender, but now I’m inspired by Melissa’s snakes to try something in p5). I also wanted to explore 3D sound (which would have tied into the triple-layered audio), but did not have enough time for that. I feel like 3D sound is essential for VR spaces.

My second concept, born as my time dwindled, was a world where all objects were substituted by their corresponding nouns (for example, a sky, but instead of blue and clouds, just the word “sky” floating in the air), which would be read out to you once they were in your line of sight. This was a half-formed idea inspired by accessible, blind-friendly image descriptions on the internet; though rather than designing for the blind, I suppose it would be more to show how blind people “see” the world, through a disembodied computer voice telling them what the view is.

Before I could execute these concepts, however, I needed to just get a VR workflow HAPPENING. This ended up being rather difficult and took up the majority of my time.

I initially planned to use the p5.vr library (https://github.com/bmoren/p5.vr), but upon testing, it was incompatible with Android (VRorbitcontrol didn’t work on the X axis and the display would not show up properly). I also had trouble hosting it on the Chrome web server and on webspace, but shelved that.

I started searching for other ways to code VR, or to turn a 3D environment into VR, and stumbled upon webVR. I researched it further and liked its concept, so I downloaded it and read through how to create an app in webVR. I also read up on the webVR polyfill. Following this tutorial (https://developers.google.com/web/fundamentals/vr/getting-started-with-webvr/), I tried to integrate webVR into an existing p5.js WEBGL sketch I had. It didn’t work due to an incompatibility between the webVR polyfill and p5.js.

While researching webVR, I also found three.js and really loved the project examples hosted on their site, especially this one (https://demos.littleworkshop.fr/track). Trying it out (after figuring out I needed to disable some Chrome flags first) was what convinced me to try out webVR.

I downloaded three.js and was looking through some tutorials on using it (https://www.sitepoint.com/bringing-vr-to-web-google-cardboard-three-js/) when Adam suggested I try Unity instead. After spending a few hours learning how to navigate Unity through the basic tutorials on their website, I followed this (slightly outdated) tutorial (https://medium.freecodecamp.org/how-to-make-a-360-vr-app-with-unity-51cbe41ad8f1) to make the VR. I also looked up shaders during this time. Making the VR work in Unity was actually pretty simple, though I had a LOT of trouble building and running the APK. I had a lot of issues following this tutorial (https://docs.unity3d.com/Manual/android-sdksetup.html), and I’m still not sure what was going wrong. I tried the command-line version first, but it didn’t work, so I downloaded Android Studio, but I had some issues with that too.

I have so many SDKs downloaded now.

At this point I was running out of time, so I switched back to p5.vr since it was supposed to work on Apple devices and I figured I could borrow an iPhone in class. Spoiler: it still wasn’t working. I don’t have an iPhone with me, so I wasn’t able to investigate the issue further after class, but for some reason, even though it works fine on desktop, the mobile VR shows up with a large gap in the stereocanvas.

big bang notes:

[screenshots: big-bang1, big-bang2]

The p5.vr library doesn’t open a lot of doors for interaction in its VR environment, which I was disappointed by, as I value interaction a lot. I tried to counter that by positioning a directional light at your POV, adjusted towards whatever direction you are looking, and then placing planar materials that would disappear without proper lighting. This created a sort of pseudo-interaction where viewers had to work to see the plane.

I created a simple space environment with stars, then was inspired by walking code I’d seen on three.js, Star Wars, and space operas to create a sort of warp-drive effect by translating the stars. While the effect I created reads as the stars moving rather than the viewer moving, I still thought it was sort of cool how they collected at a single point, and that inspired the idea of the big bang.

Finally, I reset the code every 50 secs, because the big bang doesn’t happen just once. There’s probably a more contained, seamless way to do it than my goOut() code, but it worked.
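A rough p5.js illustration of that effect (not the original big bang code): the stars translate toward the camera every frame, and the whole field resets periodically, standing in for the goOut() reset.

let stars = [];

function setup() {
  createCanvas(600, 400, WEBGL);
  resetStars();
}

function resetStars() {
  stars = [];
  for (let i = 0; i < 400; i++) {
    stars.push(createVector(random(-300, 300), random(-300, 300), random(-1200, 0)));
  }
}

function draw() {
  background(0);
  noStroke();
  fill(255);
  for (const s of stars) {
    s.z += 8;                 // translating the stars reads as a warp-drive effect
    if (s.z > 0) s.z = -1200; // recycle stars once they pass the viewer
    push();
    translate(s.x, s.y, s.z);
    box(2);                   // each star is a tiny box
    pop();
  }
  if (frameCount % (50 * 60) === 0) resetStars(); // reset roughly every 50 seconds at 60 fps
}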

I was also inspired by this year’s Nuit Blanche theme, You Are Here, and Daniel Iregui’s response to it with the installation Forward. The sped-up time in my VR work and the looping animation allude to the presence of time and how the future is always out of reach. That is also reflected by the half-present planar window, always too far ahead of you.

AYUTTHAYA notes:

“Wow, it feels like I’m actually there!” – anonymous OCAD student

I’m really, really fond of my home country, and that shows quite frequently in my work. Ayutthaya, dubbed (by me) Thailand’s collection of mini Leaning Towers of Pisa, is one of Thailand’s oldest historic sites. I visited this past summer (2018), and the sense of history in the air is palpable. I’m not sure this VR experience replicates that by any means, but it at least shows people that Ayutthaya exists.

Honestly, this was more of a test than anything; I’d need to revisit it and create some interaction or movement. I believe Unity is the right way to go for VR environments and would continue using it now that I’ve got it to work. It has a lot of functionality, and I’d be able to easily place 3D objects in the environment. One thing I’m having trouble with is that the video won’t loop/play properly.

Video downloaded from here: https://vimeo.com/214401712

LINKS to FINAL PROJECTS:

big bang (mobile VR): https://webspace.ocad.ca/~3161327/p5.vr/examples/teapot_city/index.html

big bang (desktop VR (mouse control)): https://webspace.ocad.ca/~3161327/p5.vr/examples/teapot_city/index2.html

big bang files: https://github.com/salisajat/e1-big-bang *I’d accidentally worked straight inside a cloned repository of p5.vr and was unable to push that code or keep any of my commits when I remade the folder :/

AYUTTHAYA, Thailand (Unity): https://github.com/salisajat/VR-AYUTHAYA-TEST

code scraps that did not work (includes webVR + p5.js, webVR, early p5.vr experimentation): https://github.com/salisajat/E1-scraps

Additional process documentation: 

halsey concept
big bang process; webgl error?

[resolved] android sdk + java sdk PATH issue in unity
[resolved] persisting problem in building apk in unity
snippets of my google search history
their name is Car D. Board