AerForge

[Image: AerForge logo]

The Team

Salisa Jatweerapong, 3161327

Melissa Roberts, 3161139

Mahnoor Shahid, 3162358

Samantha Sylvester, 3165592

 

About AerForge

AerForge is a project about change, and about how the different spaces in which we create change the creation itself. The entity with which the user interacts transitions through realms and dimensions of creation, exploring and connecting the environments we learned about in class.

To begin with, the AerForge experience is intangible. The user draws an imaginary line into the air in front of them, and out of thin air comes a visual. Projected on a screen in front of the user is the line they began to draw, and as the user continues to move their hand through nothing, they create something. Thin columns appear on the projection, their height matching the height of the user’s line. With a wave, the user rotates the projected image and out of lines and columns emerges a form. Though empty-handed, the user is able to create and engage with a virtual object. The user places their hands out in front of them, palms up as if asking for something. This gesture cues the virtual object to be downloaded and sent to a 3D printer. The final transformation brings AerForge into the physical world, and into the hands of the user.

 

Initial Goals & Outcome

  • Track hand movement (Leap Motion Controller)

Our first goal, tracking hand movement, was achieved. Because the Leap is such a new technology, there was very little documentation and there were very few tutorials to help us use it. The main challenge with the Leap was figuring out which software version to download. The Leap website did not clearly indicate the differences between versions on the download page, and until we found a page with a list of releases it wasn’t even clear how many different versions there were. We attempted to find a functioning software version through trial and error, installing, uninstalling, and reinstalling; each time, something else went wrong.


At the same time, we did research on the different versions, trying to get information on which systems and languages they were compatible with. As it turns out, there are four versions. There was no download link for V1; V2 was not recommended due to outdated gesture code and a lack of accuracy, though it is the only version compatible with Mac; and V4 only supported code written in C. That left V3, which supported both Java and JavaScript, though only V3.2.1 is compatible with Windows 10. The entire process of determining the correct software version to download took about 8 hours straight. Once we had the right version downloaded, tracking hand movement was relatively easy.

The Leap is pretty impressive and lots of fun to work with; however, in terms of accuracy it still needs improvement. We found that it was often unable to differentiate between the left and right hands, or between fingers. It could recognize positions, movement, and the number of fingers, but it wasn’t able to distinguish which finger it was tracking. For example, in the code we specified that the user could only draw with their right index finger; in practice, you could stick out any finger and it would still draw as if it were your index finger.


The Leap is able to track hands, fingers, joints, and different combinations of those. To create AerForge, we tracked the user’s right index finger and both hands. The right index fingertip was used as input for the circles that made up the line the user drew, and also as a variable for the size and position of the cuboids. The left-hand input controlled an orbit camera; since orbit around the x-axis was to be limited, the x-value of the left hand’s position changed the position of the camera, and thus the user’s view of the scene. The “save” and “reload” functions were triggered by gestures in which both hands were tracked: not only their position but also their orientation and whether they were open or in a fist. To save, the user holds both hands palms up; to reload and start over with a new design, the user makes a fist with both hands.
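The sketch below shows roughly how this input mapping can be wired up with LeapJS. It is a minimal illustration rather than our exact code; drawAt, orbitTo, saveModel, and resetScene are hypothetical handler names for the behaviours described above.

```javascript
// Minimal LeapJS sketch of the input mapping described above.
// drawAt, orbitTo, saveModel, and resetScene are hypothetical handlers.
Leap.loop(function (frame) {
  var indexTip = null;
  var palmsUp = 0;
  var fists = 0;

  frame.hands.forEach(function (hand) {
    if (hand.type === 'right') {
      indexTip = hand.indexFinger.tipPosition; // [x, y, z] in millimetres
    }
    if (hand.type === 'left') {
      orbitTo(hand.palmPosition[0]); // left-hand x-value drives the orbit camera
    }
    if (hand.palmNormal[1] > 0.8) palmsUp++; // palm facing roughly upward
    if (hand.grabStrength > 0.9) fists++;    // hand closed into a fist
  });

  if (indexTip) drawAt(indexTip[0], indexTip[1]); // fingertip draws the line
  if (palmsUp === 2) saveModel();  // "save" gesture: both palms up
  if (fists === 2) resetScene();   // "reload" gesture: both fists
});
```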

 

  • Draw a shape in p5


This goal was both achieved and not achieved. When combining the LeapJS library with p5, we were able to get input from the Leap, draw 2D shapes, and even draw 3D boxes. However, a crucial part of our project was downloading an STL file (a 3D model) and printing it. Neither p5 nor Processing (at least not the current version of Processing) is able to save an STL file, even when you’re drawing 3D shapes.
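For reference, fingertip drawing in p5 looks something like the following. This is a minimal sketch assuming the p5 and LeapJS libraries are loaded, not our production code, and the coordinate ranges are rough.

```javascript
// Minimal p5 + LeapJS sketch: draw an ellipse wherever the index
// fingertip of the first tracked hand moves.
var tip = null;

Leap.loop(function (frame) {
  if (frame.hands.length > 0) {
    tip = frame.hands[0].indexFinger.tipPosition; // [x, y, z] in millimetres
  }
});

function setup() {
  createCanvas(640, 480);
  background(255);
}

function draw() {
  if (tip) {
    // The Leap's x spans roughly -200..200 mm; y spans roughly 0..400 mm.
    var x = map(tip[0], -200, 200, 0, width);
    var y = map(tip[1], 0, 400, height, 0); // flipped: Leap y increases upward
    ellipse(x, y, 8, 8); // background() is never redrawn, so the line persists
  }
}
```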

 

  • Within p5, create a 3D version of that shape (ideally on Arduino button-press)

We did achieve our goal of creating 3D shapes, though not as we initially planned. Due to the lack of an STL exporter compatible with p5, as explained above, we worked with the three.js library instead. While the plan for p5 was to draw a 2D shape from the coordinates of a fingertip from the Leap and then call a function that extrudes that plane, we made some changes due to the switch to three.js, as well as for aesthetic purposes. We changed our design from an extruded custom 2D shape to a series of cuboids connected along a single perpendicular cuboid, with size and placement dependent on our input from the Leap. Because we were working in three.js, it was much simpler to create a 3D shape directly from the input, rather than create a 2D shape and extrude it later, especially since we now planned to create multiple shapes.


Despite going straight to 3D, we were able to retain some aspects from our original plan. We drew a series of ellipses to the screen at the user’s fingertip coordinates, which allowed the user to see what they were drawing and how that changed the AerForge model. We also positioned the camera (within the code) so that the user was looking at the profile of the model, giving the illusion of 2-dimensionality. The ability to orbit the scene with a gesture gave the user more freedom, in the sense that they were not limited to the 2D perspective, but could move away from the profile view.
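That profile view comes down to a camera placed on the z-axis and aimed at the origin. A sketch of such a setup, with purely illustrative numbers:

```javascript
// Hypothetical profile-view camera: sitting on the z-axis and looking at the
// origin, so the row of cuboids along the x-axis reads as a flat 2D drawing
// until the user orbits away from it.
var camera = new THREE.PerspectiveCamera(
  45,                                      // field of view in degrees
  window.innerWidth / window.innerHeight,  // aspect ratio
  0.1,                                     // near clipping plane
  2000                                     // far clipping plane
);
camera.position.set(0, 0, 500);
camera.lookAt(new THREE.Vector3(0, 0, 0));
```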


Another aspect of the original plan was to use an Arduino button to call some function, either to extrude the 2D shape into a 3D form or to save the STL. However, we decided to forgo the Arduino in favour of a gesture, as a gesture suited the project better, and the only reason to incorporate the Arduino was that its use had been suggested for Experiment 4.

 

  • Project p5 canvas so the user can see what they’re drawing


It was not the p5 canvas that we projected, but the rendered scene from three.js; essentially, though, both “canvas” and “scene” refer to a region of the screen where a visual expression of code is displayed.

 

  • Using Melissa’s STL code, save the object as an STL file


This goal was pretty simply achieved. Using Melissa’s code from Experiment 2, we were able to save our scene full of 3D geometry as an STL file.
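We won’t reproduce Melissa’s exact code here, but the general pattern, assuming three.js’s STLExporter from its examples folder is loaded, looks something like this:

```javascript
// Sketch of an STL save routine using three.js's STLExporter
// (examples/js/exporters/STLExporter.js). The filename is illustrative.
function saveSTL(scene) {
  var exporter = new THREE.STLExporter();
  var stlString = exporter.parse(scene); // serialize all geometry as ASCII STL

  // Trigger a browser download of the STL text.
  var blob = new Blob([stlString], { type: 'text/plain' });
  var link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'aerforge.stl';
  link.click();
}
```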

 

  • Print using a 3D printer.

Achieved. The issue with this step of the process was how to get from the downloaded STL to a g-code file sent to the printer. After being downloaded, the STL file has to be repaired (using 3D Builder), then sliced (using Cura), then sent to the printer. All of this has to be done by a person, and can’t easily be automated.


While doing test prints to make adjustments to the design and reduce print time, we also experimented with materials and methods of communicating with the printer. The standard method of sending g-code to the printer is via SD card, which can be tedious: the card has to be moved from the printer to the laptop and back again after every print to add a new file. An enticing method supported by the printer is printing over wifi using a web UI. The appeal of printing over wifi was not needing to move the SD card back and forth between devices, and having a (customizable) webpage through which commands and files can be sent to the printer. In order to connect the printer to wifi, we had to send the printer a g-code file containing the SSID and password, which is a little complicated on OCAD’s wifi, as each student signs in with their own student number and password. We tried a couple of combinations of SSIDs and passwords but were not able to connect the printer to OCAD wifi.

Influences

AerForge incorporated elements from our previous experiments but was also influenced by two external projects, Air Matter by Sofia Aronov and Vanishing Sculptures by Julian Voss-Andreae.

Samantha’s Experiment 1 (Scott Pilgrim vs. The World Animation) and Salisa’s Experiment 1 (Big Bang) combined virtual and visual contexts, using code to display a scene. Their differing outputs, video and VR, led us to explore the methods in which a visual display could be a part of the AerForge experience. We combined aspects of video and VR, specifically, movement and real-time, life-size interaction, to create a live projection.


Melissa’s Experiment 2 (Nameblem) was also based in p5 but created a physical output rather than a visual one. Her method of using specific variables from a text entry to manipulate the design of a 3D model allowed the creation of a custom 3D print; the customization came from the user inputting their name, so the user did not interact directly with the design. We expanded upon this method for AerForge’s 3D modelling. Though we maintained the framework of using user input as design variables, we made a number of changes to increase customization. With the integration of the real-time projection, we were able to incorporate real-time editing: the user was able to interact with and change their design, start over if they wanted to, and even decide when to save.


The type of input we used also factored into how interactive AerForge was. Mahnoor’s Experiment 3 (Rainy Weather Controller) created a visual display from physical input, which allowed for a great deal of interactivity. She used material sensors, manipulated by the user, to send a stream of inputs that altered the display. The constant user-generated input provided by the sensors is something we wanted to incorporate, but we also wanted the user to be independent of the sensor. Based on Mahnoor’s use of conductive paint, one option we had for AerForge was to paint two sheets of paper with conductive paint, one sheet to be stood up horizontally and the other placed vertically, representing the X and Y axes respectively. The viewer was to draw within the proximity range of each sensor, and each sheet’s inputs would then be used as X and Y coordinates for drawing geometry on the screen. The issues with this idea were the limited interaction space (the user would have to stay within the imaginary box created by the sheets of paper) and the cost of the conductive paint (especially considering it would be difficult to reuse). We chose to use the Leap Motion Controller instead because it allowed for more freedom of movement, and, as this was an exploratory project, it was more appealing to use a sensor we hadn’t worked with yet.

While our own experiments provided references for the mode and environment of creation, the aesthetics of the entity being created were more influenced by other projects we had researched.

Air Matter is an interactive installation that allows the participant to generate virtual vases using motions drawn from traditional pottery. Vanishing Sculptures are large-scale sculptural installations that warp the viewer’s perception of space by using thin planes to construct large forms that “disappear” at a full frontal angle.

[Images: Air Matter; Vanishing Sculptures]

Our original concept was something akin to a Lego or building-block builder, much like a simple kids’ 3D modelling software. We quickly backed away from that idea, wanting to avoid creating an overly technical project that was simply a low-budget reiteration of existing simple modelling software.

Instead, we decided the user input would need to be reprocessed to create a unique aesthetic. Inspired by the look of the Vanishing Sculptures, we created a set of rules using Leap input as variables, with the resulting expression being our model. We explored rule-based art in Experiment 2, and were particularly inspired by Sol LeWitt’s Wall Drawings, which are an example of early algorithmic art. This method creates a unique piece with every iteration.

Our rules:

(using the x-value and y-value from the Leap fingertip coordinates; see the sketch after this list)

  1. Each time the x-value is divisible by 25, draw a cuboid.
  2. Position the cuboid at the x-value on the x-axis (y = 0, z = 0).
  3. Multiply the corresponding y-value by 0.75 to determine the depth and height of the cuboid (maintaining a constant width).
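In three.js, those three rules reduce to a few lines. The sketch below is illustrative rather than our exact code: the constant width of 10 and the material are assumptions, and the fingertip x-value is rounded so the divisibility test can fire.

```javascript
// Illustrative three.js version of the three rules above.
function addCuboid(scene, tipX, tipY) {
  var x = Math.round(tipX);            // round so divisibility can be tested
  if (x % 25 !== 0) return;            // Rule 1: only where x is divisible by 25

  var size = tipY * 0.75;              // Rule 3: depth and height from y-value
  var geometry = new THREE.BoxGeometry(10, size, size); // constant width (assumed)
  var cuboid = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());

  cuboid.position.set(x, 0, 0);        // Rule 2: placed along the x-axis
  scene.add(cuboid);
}
```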

The output was intended to be visually similar to the Vanishing Sculptures, which give the illusion of vanishing into thin air depending on the angle from which you’re looking at the sculpture. While we can’t achieve the same illusion due to the printer’s limitations, the AerForge entity moves through states of being, creating a similar experience. The intersections of spaces and the impermanence of all matter (virtual, digital, and physical) are contextually rich, and we will continue to explore new ways of representing this experience in the future.

Something we also thought about during the design process was functionality and post-project relevancy. Based on our intended print time and design complexity, the size of our print was around 4 cm. At this size, the most effective way to display the model and motivate the user to customize it was to produce a key-chain as the final product. We made a quick modification to our design (the addition of a torus) and took a couple of seconds post-print to put the model on a key-ring before presenting it to the user.
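The key-ring loop amounts to one extra mesh. A hypothetical version of that addition, with size, placement, and material all illustrative:

```javascript
// Hypothetical key-ring loop: a torus attached at one end of the model so
// the printed piece can take a key-ring. All dimensions are illustrative.
var ringLoop = new THREE.Mesh(
  new THREE.TorusGeometry(6, 2, 12, 24), // radius, tube, radial/tubular segments
  new THREE.MeshNormalMaterial()
);
ringLoop.position.set(-15, 0, 0);        // just past the first cuboid
ringLoop.rotation.y = Math.PI / 2;       // orient the hole along the x-axis
scene.add(ringLoop);
```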

