Leanna Barwick
DIGF-2014-001-Atelier 1
Experiment 2: Machine Teaching
GitHub Project
Mr Spock's Pin: Project gateway
Code for the sketch
Code example expanded:
I built upon the poseNet example from the ML5 website and The Coding Train tutorial, and also incorporated the sound examples for mic input and threshold from the p5.js website.
How I built on the basic idea:
PoseNet: With the ML5.org poseNet sample code as a basis, I updated the code to recognize multiple poses; used an image related to my artistic concept to mark the various keypoints; added code to rotate the keypoint images; mirrored the webcam feed on the canvas; covered up the video feed with a background; added a mousePressed() function to assign random colours to the background; and inserted a function to resize the canvas when the browser window changes.
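A simplified sketch of that setup, assuming the p5.js and ml5 libraries are loaded on the page; the file name pin.png, the image size, and the confidence cutoff are placeholders rather than the project's exact values:

```javascript
// Simplified sketch of the poseNet setup described above.
let video, poseNet, pinImg, bgColor;
let poses = [];
let angle = 0;

function preload() {
  pinImg = loadImage('pin.png'); // placeholder asset name for the keypoint image
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  bgColor = color(0);
  video = createCapture(VIDEO);
  video.hide(); // the raw video feed is never drawn
  poseNet = ml5.poseNet(video, { detectionType: 'multiple' }, () => console.log('poseNet ready'));
  // every detected pose is kept, so multiple people are tracked at once
  poseNet.on('pose', results => { poses = results; });
  imageMode(CENTER);
}

function draw() {
  background(bgColor); // background covers the video feed
  // mirror the canvas so movement reads like a mirror
  translate(width, 0);
  scale(-1, 1);
  angle += 0.05;
  for (const p of poses) {
    for (const k of p.pose.keypoints) {
      if (k.score > 0.2) {        // placeholder confidence cutoff
        push();
        translate(k.position.x, k.position.y);
        rotate(angle);            // rotate the keypoint image
        image(pinImg, 0, 0, 30, 30);
        pop();
      }
    }
  }
}

function mousePressed() {
  bgColor = color(random(255), random(255), random(255));
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}
```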
AudioIn, Shapes, random combinations: I drew some shapes related to my artistic concept onto the canvas and, building on the p5.AudioIn examples, used the mic level to change the shape dimensions with sound input, so the social audio in the room is reflected visually.
I also included mic threshold code that draws ellipses when a specified volume level is exceeded, basing the size of those ellipses on the volume level.
These shapes are all assigned random looping fill colours, to continue playing with the concept of diverse combinations expressed through code.
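A simplified sketch of the sound-reactive drawing, building on the p5.AudioIn examples; the threshold value and shape sizes here are illustrative, not the project's exact numbers:

```javascript
// Sound-reactive shapes: mic level scales the shapes, a threshold triggers ellipses.
let mic;
const threshold = 0.1; // illustrative volume threshold

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start(); // some browsers require a user gesture (e.g. userStartAudio()) first
}

function draw() {
  background(20);
  const micLevel = mic.getLevel(); // roughly 0.0 - 1.0
  // random looping fill colour, a different combination every frame
  fill(random(255), random(255), random(255));
  // shape dimensions grow with the incoming sound level
  rect(width / 2 - 50, height / 2 - 50, 100 + micLevel * 300, 100 + micLevel * 300);
  // draw an ellipse only when the volume passes the threshold, sized by the level
  if (micLevel > threshold) {
    ellipse(random(width), random(height), micLevel * 200, micLevel * 200);
  }
}
```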
I created an entrance page to contextualize the project within my artistic scope.
What is working and not working:
Overall, the code used in the final project works when run from the p5.js editor and from Glitch, where the code is hosted, but it does not work when I try to run it locally.
I couldn’t get the "flipHorizontal: true" option from the ML5.org poseNet code to flip the video image and create the mirrored effect I wanted, so instead I added code that works directly with the canvas to accomplish the same effect.
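The workaround, isolated from the rest of the draw loop, is essentially the translate/scale flip used in the earlier sketch:

```javascript
// Mirror the canvas directly instead of relying on poseNet's flipHorizontal option.
function draw() {
  push();
  translate(width, 0); // move the origin to the right edge
  scale(-1, 1);        // flip horizontally so movement reads like a mirror
  // ...draw keypoint images and shapes here; x-coordinates are now mirrored
  pop();
}
```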
I attempted to incorporate PubNub but could not get it integrated with my code.
I also attempted to add more p5.sound code to use in conjunction with poseNet, using Perlin noise and controlling frequency modulation with body gestures, but could not get it integrated with my code either.
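A rough sketch of that idea, simplified to direct frequency control rather than full frequency modulation; the nose keypoint, the frequency range, and the noise amount are hypothetical choices, and the poseNet setup is omitted (assume poseNet.on('pose', gotPoses) is wired up as in the earlier sketch):

```javascript
// Map a pose keypoint to an oscillator's frequency, with Perlin noise adding drift.
let osc;
let noseY = 0;
let t = 0;

function setup() {
  createCanvas(640, 480);
  osc = new p5.Oscillator('sine');
  osc.amp(0.3);
  // poseNet setup omitted; assume gotPoses() is registered as the 'pose' callback
}

function mousePressed() {
  userStartAudio(); // browsers require a gesture before audio can start
  osc.start();
}

function gotPoses(results) {
  if (results.length > 0) {
    noseY = results[0].pose.nose.y;
  }
}

function draw() {
  background(0);
  t += 0.01;
  // higher nose position -> higher pitch; Perlin noise adds a gentle wobble
  const baseFreq = map(noseY, 0, height, 600, 100);
  const wobble = map(noise(t), 0, 1, -30, 30);
  osc.freq(baseFreq + wobble);
}
```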
How I can expand this code:
- A PubNub channel could be added to bring in more users to influence the visual representation of IDIC across virtual channels (see the sketch after this list)
- While the existing sound input code affects the visuals, I want more sound output and poseNet interaction in this piece
- Think more about how the experience could be presented in physical space and the different ways that participants could engage with the work (which is a reflection of the larger whole of their shared system); additional coding would reflect this
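A minimal sketch of how a PubNub channel could be wired in, assuming the PubNub JavaScript SDK is loaded; the channel name idic-visuals, the message shape, and the placeholder keys are hypothetical:

```javascript
// Share the background colour choice with everyone subscribed to the channel.
const pubnub = new PubNub({
  publishKey: 'pub-key-here',   // placeholder keys
  subscribeKey: 'sub-key-here',
  uuid: 'idic-client-' + Math.floor(Math.random() * 10000)
});

let remoteColor = null;

pubnub.addListener({
  message: (event) => {
    // remote participants' choices update the local visuals
    remoteColor = event.message.bgColor;
  }
});
pubnub.subscribe({ channels: ['idic-visuals'] });

function setup() {
  createCanvas(400, 400);
}

function draw() {
  if (remoteColor) {
    background(remoteColor[0], remoteColor[1], remoteColor[2]);
  }
}

function mousePressed() {
  const bgColor = [random(255), random(255), random(255)];
  // publish the local colour choice to everyone on the channel
  pubnub.publish({
    channel: 'idic-visuals',
    message: { bgColor: bgColor }
  });
}
```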
A summary of thoughts for how I would like to use these tools to develop my own interface:
Building on this iteration, I would like to create an interface for larger-scale real-time events (workshops, dance, parties, live performances) that could incorporate microcontrollers and capture multiple kinds of participant sensory data, such as spatial depth, crowd density, general movement, sound levels, and temperature, which could then influence outputs in the physical space, such as projected visuals, lighting, and audio. The group could train their own model to strengthen user control and incorporate more personal data, like facial recognition, emotional expression classification, and specific gestures or symbols, and decide how they want to see this correlated to physical outputs.
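A speculative sketch of what group-trained gesture recognition could look like with ml5's neuralNetwork; the label names, input format, and training settings are all hypothetical:

```javascript
// Train a small classifier on poseNet keypoints for gestures the group chooses.
let model;

function setup() {
  createCanvas(640, 480);
  model = ml5.neuralNetwork({
    inputs: 34,                    // x,y for 17 poseNet keypoints
    outputs: ['open', 'idic'],     // hypothetical gesture labels
    task: 'classification'
  });
}

// called while collecting examples of a gesture the group wants to teach
function addExample(pose, label) {
  const inputs = [];
  for (const k of pose.keypoints) {
    inputs.push(k.position.x, k.position.y);
  }
  model.addData(inputs, [label]);
}

function trainModel() {
  model.normalizeData();
  model.train({ epochs: 50 }, () => console.log('training finished'));
}

// classify a live pose and hand the label to whatever drives
// the projections, lighting, or audio in the space
function classifyPose(pose) {
  const inputs = [];
  for (const k of pose.keypoints) {
    inputs.push(k.position.x, k.position.y);
  }
  model.classify(inputs, (error, results) => {
    if (!error) {
      console.log(results[0].label, results[0].confidence);
    }
  });
}
```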
Data concerns that might impact these interfaces come into play when the group wants to train its own models: where any potentially sensitive data is stored, what geographic laws pertain to the physical location of the servers, who controls that data, and what privacy concerns might exist.