Buzz

Ubiquitous Computing Experiment 6

p5.js, TensorFlow.js & PoseNet

Alicia Blakey

GitHub

 

Project

This project combines PoseNet, a machine learning model that estimates the position of specific keypoints on the body, with p5.js graphics painted over a live video feed. PoseNet tracks parts of the face and body and assigns each keypoint estimate a confidence score. TensorFlow.js is a JavaScript library for running machine learning models in the browser and in Node.js; ml5.js builds on top of it and exposes those models through a simplified, friendly API.
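To make that concrete, here is a minimal sketch of how the pose results can be read back, following the structure used in the ml5.js PoseNet examples (the global poses array is the one filled in the setup code further below):

// Log every keypoint of every detected pose along with its confidence score
function logKeypoints() {
  for (let i = 0; i < poses.length; i++) {
    let pose = poses[i].pose;
    for (let j = 0; j < pose.keypoints.length; j++) {
      let keypoint = pose.keypoints[j];
      // Each keypoint has a body-part name, an x/y position, and a score between 0 and 1
      console.log(keypoint.part, keypoint.position.x, keypoint.position.y, keypoint.score);
    }
  }
}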

 

Concept

To familiarize myself with ml5.js I decided to start with PoseNet, so I could understand this part of the library before combining it with other ml5.js functions later. Learning from the example code provided by Nick Puckett, I decided to explore hand movement tracked through PoseNet. The keypoint confidence scores aren't reliable for the hands themselves, but the wrists are tracked well enough. The intention was to create a simple interaction that encourages movement while at the computer. Now that it's summer and I consistently see people swatting away the little mosquitoes that are out this year, it seemed like a fun way to get moving through a simple augmented-reality simulation built with ml5.js and PoseNet.
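As a minimal sketch of the kind of wrist check this interaction relies on (the 0.2 confidence threshold is only an illustrative value, and the keypoint structure follows the ml5.js PoseNet examples):

// Return the positions of any wrist keypoints that PoseNet is reasonably confident about
function wristPositions(pose) {
  return pose.keypoints
    .filter(k => (k.part === 'leftWrist' || k.part === 'rightWrist') && k.score > 0.2)
    .map(k => k.position); // each position is an object with x and y
}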


Process

After setting up a local server in my terminal, my first step was to get the video working and call ml5.poseNet with two arguments: the video and a modelReady callback. I was able to set this up using the examples from the ml5js.org website. I then coded the modelReady function, which is referred to as a callback. In this project I also started to define my variables a little differently and learned why using let is sometimes better: let is block-scoped, unlike var, and still lets you reassign the variable when needed. While using the examples from the ml5.js website I tried to imitate their style, which is why I changed the way I defined variables in this instance.

video = createCapture(VIDEO);
video.size(width, height);

// Create the PoseNet model, passing the video and the modelReady callback
poseNet = ml5.poseNet(video, modelReady);

// Listen for pose detections and store the results in the global poses array
poseNet.on('pose', function(results) {
  poses = results;
  debugger; // pauses here so the incoming pose data can be inspected in the browser's sources panel
});

// Hide the raw video element; the feed is drawn onto the canvas instead
video.hide();
} // end of setup()
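The modelReady callback referenced above is not part of the fragment; in the ml5.js examples it is simply a function that confirms the model has finished loading, along these lines:

function modelReady() {
  // Runs once PoseNet has loaded; a good place to confirm the model is ready
  console.log('Model Loaded');
}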

 

After getting PoseNet to work, and learning from Daniel Shiffman's ml5.js video, I went to the p5.js website to look for pre-existing graphics and initially dropped in the forces and particle system example to see how it worked with the code. Then I made a simple GIF and looked up the preload() function in p5.js so the image could be loaded before setup() runs. After testing, I wanted to see what the confidence values looked like so I could understand how the libraries were sharing data. By going into the sources panel of the browser I was able to see the data coming from the poses.
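Loading the image inside preload() guarantees it is ready before setup() and draw() run. A minimal sketch of that pattern, with mosquito.gif as a placeholder asset name:

let mosquitoImg;

function preload() {
  // preload() completes before setup(), so the image is available as soon as drawing starts
  mosquitoImg = loadImage('mosquito.gif');
}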

 

Conclusions

Thinking of future Iterations, now that I have a basis with Ml5 and PoseNet I think an interesting adaption would be to use PoseNet to maneuver graphics that interact with each other.  With a combination of interactive graphics and calling other images through poses on the browser, I could intricately add to the existing project. I could see a program like this used as a motivator to move from your desk to create interactions in front of your computer to not remain sedentary. It would be nice to be able to use it as a prompt to add to fitness tracking data when you have been in a solitary position for too long.

References

https://p5js.org/examples/

https://ml5js.org

https://www.youtube.com/watch?v=jmznx0Q1fPO

 
Button of Fun

 

 

Concept

Button of Fun is a p5.js sketch used as an interface with an IFTTT applet, meant to be a creative way to take a break and laugh. The p5.js sketch and the IFTTT applet are both meant to be used as buttons. You can interact with the happy face in the p5.js sketch from the IFTTT applet on your mobile phone. “The Smiley has travelled far from its early 1960s origins, changing like a constantly mutating virus: from early-70s fad to late-80s acid house culture, from millennial txt option and ubiquitous emoticon” (John Savage, 2009, The Guardian). Through Adafruit IO you can communicate from IFTTT to p5.js to create an interaction. Adafruit IO lets us transfer data and use libraries to interact with the server. The application portrays the relationship between devices and personal customization for an intended interaction.

Objective

The most intricate aspect of this project was writing statements to get the Adafruit applet, through Adafruit IO, to communicate with error-free code in the p5.js sketch. I wanted a better understanding of how to create effects in p5.js through IFTTT. Focusing on the connection between IFTTT, Adafruit IO, and p5.js, rather than a small aspect of p5.js and two applets, felt like it would give me a strong foundation for connecting simple applets through p5.js in the future. As this was my first experience working with Adafruit IO, I chose the Adafruit button applet because I wanted to become more practiced at incorporating Adafruit IO seamlessly and at moving data in and out through the platform to trigger interactions.

Software

  • jQuery – JavaScript library that can be used to simplify event handling.
  • Adafruit IO – data connection service.
  • p5.js – library for graphics and interactive design.
  • IFTTT – connects applets that work with your services, like Twitter, Facebook, or smart devices.

 

Process

The code I expanded on was provided by Nick Puckett and Kate Hartman to connect p5.js to Adafruit IO. Getting the value from the data feed to connect to p5.js initially triggered error after error. Through Adafruit IO you can push data to communicate with an IFTTT applet and design an interaction. IFTTT (If This Then That) creates a two-part system: if this occurs, then that will happen, and it works with services like Instagram and digital devices in your home such as Hue lights. Some of the Adafruit IO documentation was limited; I needed to incorporate jQuery and learned about including a promise. A promise is an object representing the eventual completion or failure of an asynchronous operation. Essentially, a promise is a returned object to which you attach callbacks, instead of passing callbacks into a function. When I passed callbacks into a function I kept getting errors.
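As a rough sketch of that promise pattern against the Adafruit IO HTTP API (the username, key, and feed name below are placeholders; the endpoint shape follows Adafruit's public REST documentation):

// Placeholder credentials and feed name; substitute your own Adafruit IO values
const AIO_USER = 'your-username';
const AIO_KEY = 'your-aio-key';
const FEED = 'fun-button';

// fetch() returns a promise; callbacks are attached with .then() rather than passed into the function
fetch(`https://io.adafruit.com/api/v2/${AIO_USER}/feeds/${FEED}/data/last`, {
  headers: { 'X-AIO-Key': AIO_KEY }
})
  .then(response => response.json())
  .then(data => {
    // data.value holds the most recent value pushed to the feed
    console.log('Latest feed value:', data.value);
  })
  .catch(err => console.error('Feed request failed:', err));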


 

I used and programmed two Adafruit applets. The first applet sends data to the fun button feed; the second sends an alert notification based on the data from p5.js. Whenever the fun button is pressed a certain number of times, an automated mobile notification is sent that says, “You are so fun for creating a fun button, a fun button is a button that is fun, fun, fun, fun.” The second applet triggers this text based on a data parameter: when that parameter is reached, I am sent a notification.
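On the p5.js side, pressing the happy face only needs to push a value to the feed so the applet's data condition can eventually be met. A minimal sketch of that direction of the connection, reusing the placeholder credentials from the snippet above:

// Send one button-press count to the feed; the second applet notifies me once its condition is reached
function sendButtonPress(count) {
  return fetch(`https://io.adafruit.com/api/v2/${AIO_USER}/feeds/${FEED}/data`, {
    method: 'POST',
    headers: { 'X-AIO-Key': AIO_KEY, 'Content-Type': 'application/json' },
    body: JSON.stringify({ value: count })
  });
}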

Conclusion

Working with IFTTT it’s interesting the different initializations that you can connect to each other. The trigger of actions and relationships is a longer exploratory process now that I have a basic understanding of how to connect everything. I think that there are some other fascinating amalgamations with this process such as networked automation. I’m really inspired by using this in future iterations of gathering data from simultaneous places and converting that into one data stream through IFTTT and P5.

button-of-fun.git

References

https://io.adafruit.com/


https://learn.adafruit.com/welcome-to-adafruit-io/overview


https://ifttt.com/

 

Data Stars

 

Overview:

With this exercise I explored different databases and various ways to incorporate data into a visual conceptualization. Originally working with Wolfram, I used the incoming string of data and manipulated the type of data being received as answers. Curious about other databases, I came across the HYG database. It holds background information on stars such as spectrum, brightness, position, and distance. What is particularly interesting is that HYG uses parsecs and gives you the distance from Earth of all the stars, even ones we can't see with the naked eye. I thought it would be interesting to portray the visual magnitude of the many stars we can't see and turn it into a data visualization. Using p5.js to receive the incoming data, I mapped the catalogue of stars onto a p5.js sketch.

 

Process:     

// If a read delay is set, wait before requesting the next chunk of the streamed catalogue;
// otherwise keep reading immediately. processText handles each chunk as it arrives.
if (readDelay) {
    setTimeout(() => {
        return reader.read().then(processText);
    }, readDelay);
} else {
    return reader.read().then(processText);
}

 

I used the above code to control the incoming data. During the first stages of testing I found that the data came in very slowly, at 42 KB, so to work with it effectively I decided to just download the data once, which initially gave me 121 data points to work with.

Now that I had figured out how to control the data, I had to convert that information into the stars I wanted to visualize. I accomplished this with line.split(','), which turns each line of the catalogue into an array of entries, one for every value between the commas.

When trying to figure out how to use the data, I took the ID at index 0, parsed it into a number, and saved it into the star object. I did the same for id, distance, and magnitude. Reading the raw data in the console, it was difficult to discern which columns meant what, so I went to look up more documentation. For example, the position of a star is actually three values, but I originally coded only one designation for it.
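A sketch of that parsing step is below. The column indices are illustrative only and need to be checked against the header row of the HYG CSV; the point is that each comma-separated value is parsed into a number and stored on a star object:

// Illustrative column positions; confirm them against the actual HYG header row
const ID_COL = 0, DIST_COL = 9, MAG_COL = 13, X_COL = 17, Y_COL = 18, Z_COL = 19;

function parseStar(line) {
  let values = line.split(','); // one entry per comma-separated column
  return {
    id: parseInt(values[ID_COL], 10),        // catalogue id
    distance: parseFloat(values[DIST_COL]),  // distance from Earth in parsecs
    magnitude: parseFloat(values[MAG_COL]),  // visual magnitude
    x: parseFloat(values[X_COL]),            // position is spread over three columns
    y: parseFloat(values[Y_COL]),
    z: parseFloat(values[Z_COL])
  };
}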

At this point I wanted to see if I could distinguish the names of the stars from the abbreviations in the index. My initial idea was to attach the scientific names, typed out, to the other characteristics listed in the index. To see if I could do this, I decided to try and find the Sun specifically in the data set so I had a comparison. That posed a really good question: with all this data, how do I plan to sift through it? I figured that since it is a data set of all the known stars, I should be able to find the Sun specifically. To sift through the information I was looking for, I took out the read delay. This kept triggering errors (an “Uncaught (in promise) TypeError” about reading a property of undefined). Next I searched the documentation for the HYG database, then went to the Astronomy Stack Exchange. Other people were having the same issue, with no solution in sight other than to use other databases that handle basic naming conventions better. I put that aspect aside for a possible future iteration.

 

Using a for loop over the star variables, I worked with the x, y, z values. I needed the simplest way of dealing with positions and found that the easiest approach is to project them onto one plane. You can then decide which plane to use; I used the x and z coordinates of each star and mapped them straight into the sketch.js file.

The star database seemed to have a lot of errors; I was getting all kinds of zeros that I didn't want to display. The next issue was that the data might exceed the canvas. My solution was to find the maximum of all the values I had. Using Lodash across all the data points, I created one object that tracked the biggest and smallest numbers. This gave me a better idea of how to map all the different stars onto the canvas. At first I was trying to work out, if I have an x value of -1000, how do I display it within the range of the canvas? I then realized that if the smallest value of a star is -1000, then that value should map back to 0.
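A sketch of that idea, assuming a global stars array of parsed star objects and Lodash loaded as _; minBy/maxBy find the extremes and p5.js's map() rescales each position into canvas coordinates:

// Track the smallest and largest x and z so every star can fit on the canvas
const bounds = {
  minX: _.minBy(stars, 'x').x,
  maxX: _.maxBy(stars, 'x').x,
  minZ: _.minBy(stars, 'z').z,
  maxZ: _.maxBy(stars, 'z').z
};

for (let star of stars) {
  // map() rescales each value, so the smallest x (say -1000) lands at 0 and the largest at width
  let px = map(star.x, bounds.minX, bounds.maxX, 0, width);
  let py = map(star.z, bounds.minZ, bounds.maxZ, 0, height);
  point(px, py);
}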

I accomplished this by adding a check in the code: if stars.length already exceeds 1000, stop adding stars. I then looked up star spectral types and went to the HYG database page to see what the spectral values are, so that I could map the spectral class of a star to RGB colours. I used this information to create a variable for spectral colours and a variable for spectral colour names.
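The colour values below are placeholders rather than the exact ones I used; the idea is simply to key an RGB value off the first letter of the spectral class (O, B, A, F, G, K, M), assuming each star object also carries its spectral string from the spect column:

// Approximate RGB colours per spectral class; the exact values are illustrative
const spectralColours = {
  O: [155, 176, 255], B: [170, 191, 255], A: [202, 215, 255],
  F: [248, 247, 255], G: [255, 244, 234], K: [255, 210, 161],
  M: [255, 204, 111]
};
const spectralColourNames = ['O', 'B', 'A', 'F', 'G', 'K', 'M'];

function colourForStar(star) {
  // star.spect looks like "G2V"; its first letter selects the colour
  let cls = (star.spect || '').charAt(0).toUpperCase();
  return spectralColours[cls] || [255, 255, 255]; // default to white if the class is missing or unknown
}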

https://vimeo.com/user92253960/review/317858895/0955e223d6

Conclusion

I learned a lot about more advanced debugging in this exercise. I would not have been able to make this project work without the forums, where people had worked out solutions to the same issues I was experiencing. Databases are not perfect entities, and when you use them it is imperative to recalibrate. Mapping in this exercise produced varied results; at first it was hard to wrap my head around having values that are meant to be three-dimensional and how to place that data in a two-dimensional space. Discovering orthographic projection (sometimes called orthogonal projection), a means of representing three-dimensional objects in two-dimensional space, is what helped me write this code effectively, and it was a valuable learning experience. I would like to iterate on this project at a later date and discover new ways to display this particular data set, expanding on the graphics I created and adding other elements like sound.

https://github.com/aliciablakey/star.git

References

https://github.com/astronexus/HYG-Database

https://astronomy.stackexchange.com

https://lodash.com

Colour Radio

 


 

Overview:

Colour Radio is a communications project that conceptualizes an interaction through sensors and analog radio communication. The project creates an interactive transition through the colours in your surroundings using a colour sensor. With NeoPixel LEDs we created a visual output that acts as an abstract representation of an environment. Our intention was to create a beautiful visual that represents analog communication through XBee components.

 

Ideation:

Phase 1 of brainstorming incorporated the Nudgeables: we explored the idea of creating a wearable built around an interaction that lets you express your state of mental health.

Phase 2 was discussing other forms of communication with the XBee radios. At this phase of discovery, we decided we wanted to delve further into an aspect of networking. We wanted to learn how to handshake between the XBees and formed our project around that mode of learning.


Phase 3 was when we decided we wanted to make a representation of radio communication. We discussed analog technology and older electrical components, like the vacuum tubes in old televisions. While contemplating the circuit and its components, we decided to use a sensor that could grasp aspects of the surroundings without obvious communication like words, letters, or noises. We discussed infrared and other distance sensors but decided to incorporate a colour sensor into the project and use LEDs as an output of its data.


Process:

We started the project by calibrating the colour sensor, which was fun for the group because it was our first time working with colour sensors. The sensor works by shining a white light at an object and then recording the reflected colour; it can also record the intensity of the reflection. Through red, green, and blue colour filters, the photodiode converts the amount of light to current, and a converter then turns that current into a voltage our Arduino can read. We weren't able to get good absolute RGB readings from the sensor. What we did get is what I would call relative RGB: the colour being sensed is clearly visible as a spike in one value, but the readings are not within the 0-255 range. So a red object might cause the red value to rise by 30, not to 255. This is clearly visible when connecting an RGB LED: it will change colour, but the change might not be obvious, and blue LEDs generally have a higher energy output, so blue can overpower the other channels.

While exploring the networking of XBees with sensors and the variations of input and output across various nodes, we initially had some issues transferring the RGB values. We would get some data readings to come through the XBees as output, but they would continually stop working.
Our goal was to establish a call-and-response handshake between two XBee radios; however, we had many issues executing this. We set up a transmitting XBee (A) connected to an Arduino that would send RGB values, and a receiving XBee (B) that would get those values and use them to light a NeoPixel LED strip. We kept getting strange values in the buffer on the receiving end, and data would sometimes be sent and sometimes not. We weren't able to get that configuration working, but we did get the handshake to work when the receiving XBee (B) was connected directly to a laptop without an Arduino. With this setup we were able to initiate the handshake call-and-response, whereby XBee (A) would send “hello” until we sent a character over the serial monitor, and it would then respond with the RGB reading from the colour sensor. The image below shows a screenshot of this (the values returned are shown twice because we were printing both Serial and Serial1 values).

In the end we modified our code so that XBee (A), transmitting the colour sensor values, and the NeoPixel strip ran on the same breadboard, so that the strip would change colours according to the colour sensor reading. The receiving XBee (B) was connected to Processing, and the values received were used to change the background of a sketch; we envisioned this as an additional representation of the colour sensor's readings.

Conclusion :

For a future iteration, we would create a mesh network with colour sensors and more intricate variations in the colour changes of the LEDs, such as initiating different patterns or perhaps blending colours depending on what the sensor picks up. Exploring communication with the XBees further and building on the foundation of this project, we would also like to add colour temperature sensors alongside the basic colour sensors.

 

References:

http://www.robotsforfun.com/webpages/colorsensor.html

How to use the Color Sensor with Arduino board (TCS3200 & TCS3210)

https://learn.adafruit.com/adafruit-neopixel-uberguide/arduino-library-use

 

XBee Exploration

 

 

Summary

In this experiment we utilized XBee radios: radio transceivers that can be incorporated into a mesh network. The experiment explores the XBees by initiating an interaction with an Arduino component, communicating 'L' or 'H' (low or high) characters to send and receive data and control pins on the Arduino. XBees can talk to each other once their channel and IDs are set. Incorporating tones into this project, I was curious to see how to program music and how it would coincide with sounds from other projects.

 

Process

Configuring the XBees through the software CoolTerm to confirm they were communicating was my first step. I did this by making sure ATID, ATMY, and ATDL were properly configured in CoolTerm.

ATID: the ID of the network both XBees are talking on.

ATMY: the ID that identifies the XBee you are using.

ATDL: the ID of the XBee you are transmitting to.

Using just an LED with an Arduino Micro and an XBee, I tested to make sure the 'L' and 'H' commands were working. Using the sample Arduino sketch Physical Pixel and uploading it to the controller, I was able to turn my LED on and off through the serial monitor. Then I connected my XBee and changed the Arduino code from 'Serial' to 'Serial1'. I could see my LED turning on and off, so I was ready to progress and attach new components to my breadboard.

While setting up my XBee on the breadboard, it's important to note that the XBee cannot run on 5V; it has to be connected to the 3.3V power pin on the controller.

Physical Pixel Code

const int ledPin = 13; // the pin that the LED is attached to
int incomingByte;      // a variable to read incoming serial data into

void setup() {
  // initialize serial communication:
  Serial.begin(9600);
  // initialize the LED pin as an output:
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // see if there's incoming serial data:
  if (Serial.available() > 0) {
    // read the oldest byte in the serial buffer:
    incomingByte = Serial.read();
    // if it's a capital H (ASCII 72), turn on the LED:
    if (incomingByte == 'H') {
      digitalWrite(ledPin, HIGH);
    }
    // if it's an L (ASCII 76) turn off the LED:
    if (incomingByte == 'L') {
      digitalWrite(ledPin, LOW);
    }
  }
}

 

Never having used a speaker with an Arduino before, I started to research how to use tones. From the Arduino website I found pitches.h, a file containing the pitch values for typical notes; for example, NOTE_C4 is middle C, NOTE_FS4 is F sharp, and so forth. So instead of writing the frequency in the tone() function, we just write the name of the note. This note table was originally written by Brett Hagman. I wanted to use a piezo speaker to play melodies; the Arduino drives it with square-wave signals to produce the notes. While testing with the piezo speaker, I learned that you can't use tone() while also using analogWrite() on pins 3 or 11, because the tone() function uses the same built-in timer that analogWrite() does for those pins.

 

 

With time being a factor, I realized composing my own music wasn't viable, but I did discover a lot of pre-existing note sequences. I decided to use music from the video game Super Mario Brothers. I tied the melody to the 'L' and 'H' commands sent through the XBee and added an LED that lights alongside the speaker. While testing this version of the breadboard, the 'L' command would only sometimes work; I think this has to do with the sequencing of the tones.

 

Conclusion

With this collaborative project, I learned about the different affordances of creating an abstract orchestra from different sounds with others. Testing alongside others and seeing their projects was a positive experience and provided a good foundation in analog radio communication. Experiencing each project as a collective with these components is something that could be expanded upon later with a larger network, now that we know how to test and troubleshoot with the XBees.

https://github.com/aliciablakey/speakerxbee.git

References

http://www.arduino.cc/en/Tutorial/PhysicalPixel

https://learn.adafruit.com/adafruit-arduino-lesson-10-making-sounds/playing-a-scale

Play simple melodies with an Arduino and a piezo buzzer

 

Explorations In Light Mapping

 

 

 


 

The Project:

This project is an interactive exploration of analog radio communication: multiple sensors communicate between XBee modules to create an ambient experience rendered in Processing. The intention is to create an expressive representation with formula-based shapes by integrating multiple nodes with light mapping. We chose concentric squares to conceptualize this perceptual field, with incoming light values arriving through the networked modules. The interaction is transmitted by XBee radio transceivers and interpreted by Processing, which creates a geometric array sequencing a colour experience that reflects art, light, data, and cooperative interaction, portraying the output of a network through a creative interface.

 

Explorations in light mapping ideation

Ideation:

As a group, our main goal was to create a collaborative ambient experience. Our ideas gravitated towards active and body-focused experiences. The initial idea we arrived at was to use temperature sensors to create a heat map of everyone's body temperature in the room. We tested the TMP36 temperature sensor to measure our body temperatures; the sensor can detect a wide range of temperatures, from -40 to +120 degrees Celsius, but we noticed that there wasn't much difference between our body temperatures, to the extent that the variation would barely be noticeable.

Our ideation continued forward using the inspiration of a map as our guidance. We discussed the possibilities of mapping different parts of a room through different inputs. The group consensus was to use photoresistors to map the light in the room. Mapping the light could be taken on in many different ways: we could place XBees in either active/inactive areas or place the radios in the various window sills to map out the light of the building. Eventually, we decided that we wanted an active participatory experience over an ambient passive project.

Creating the participatory experience was interesting, as everyone had distinct images of how the differing light values should be reflected on the screen. The first discussion was informed by our conversations about mapping: could the Processing screen be split up into a grid that reflects where each person is in the room? We decided that this idea was a bit too rigid and wasn't easily adaptable to a varying number of XBees. This discussion led to the realization that we wanted a project that was collaborative, generative, and not restricted by numbers.

We came to a consensus on displaying the data as concentric squares. The squares would be generated based on how many radios we had connected, and the fill values of the squares would be mapped to the values of the incoming light sensors. Each person participating would have a square that their radio controlled. We liked this idea because the concentric squares had connotations of optical illusions; it was playful, adaptable, and generative, so we went forward with it.

Parallel to this decision, our group was still considering the concept of a light map. We realized that the code for mapping the light to the concentric squares would be almost identical to the code for mapping light in the room, and decided that if we had time, we could create a floor map of the Digital Futures lounge in Processing and map the light in its different areas.

Our group decided that these two ideas were manageable, interesting, and could be played with in many different ways depending on how many radios were connected or how many people were actively participating at any given time. Both ideas had variables that could change and produce new data visualizations.

Exploration light mapping testing

 

Design + Testing:

Through experimentation, we designed three generative outputs that all use photoresistors to control light or colour values on the screen. The first is a set of concentric squares mapped to the analog values by changing each square's displayed grey scale (lower values are closer to black, higher values closer to white); the second is the same concentric squares, but each square is assigned a different hue that lightens or darkens to reflect the analog value of its individual sensor, rather than only black and white values. For the third experiment, we mapped out a floor plan of the Digital Futures lounge, where the team envisioned placing the XBee radios in different locations to get a “light map” that would change throughout the day.
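As a rough p5.js-style approximation of the grey-scale version (our actual sketch was written in Processing; sensorValues here is a hypothetical array holding the latest reading from each radio):

// Hypothetical latest photoresistor readings (0-1023), one per connected radio
let sensorValues = [300, 650, 120, 900];

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(0);
  rectMode(CENTER);
  noStroke();
  let step = width / (2 * sensorValues.length);
  for (let i = 0; i < sensorValues.length; i++) {
    // Outer squares are drawn first; each analog value maps to a grey level (low = dark, high = light)
    let grey = map(sensorValues[i], 0, 1023, 0, 255);
    let size = width - i * 2 * step;
    fill(grey);
    rect(width / 2, height / 2, size, size);
  }
}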

The responsive concentric square layout was designed to be collaborative. We designed the Processing side of the project to, in theory, take in as many radios as possible to contribute to the generative scene. A new radio could be added, and the squares would adjust in size to accommodate it. The concentric squares allow a group to playfully collaborate to create different illusions. Each operator of a photoresistor would be able to alter the values by changing the grey scale or the hue of the squares to create different compositions as a team. Something to note is that we did not label the squares according to the participants' individual radios.

This allows for investigation and communication amongst those participating in the group. Through our own testing, it is interesting to try and create different depth illusions by controlling the value of the photoresistor. Since the sensor is quite sensitive to any amount of light in a room, there was a little trouble in achieving a darker color for the low values, making the composition more often gray in situations where the light in the room is turned on. This would mean that the ideal setup for this experiment would be in a dark room, where the light values can be controlled by the individual through a flashlight on their smartphone or by fully closing the space around the sensor with our hands.

The intention of the floorplan design was to place the radios amongst the room and remove the active participation from the group. The light sensors would calibrate to the spaces that they were placed and would reflect changes or movement within the space. This would allow for an ambient light map of the space with the ability to change along with the different parts of the day, such as if the automatic lights turn on or off, if the sun sets, or if someone in the room blocks the light directly. This passive design has connotations of surveillance. The photoresistors act as light monitors of the space; if someone were to come in the middle of the night and turn on the lights the sensors would automatically react wirelessly to the sudden change of light. Overall, we did not have time to fully test this idea. As an early iteration of this experiment, we took the mapping of the values from Arduino to Processing from the code for the concentric squares. We also designed the floor plan on Processing and tested it similarly to how we tested the previous experiments. We hope to be able to test this one at some point to see the results.

The Hardware and Circuit

screen-shot-2019-01-22-at-11-40-59-am

The Software:

We gained some valuable insights through the process of testing and connecting the nodes and XBees to the network. Along the way we solved some communication issues between the XBees, such as removing the RX and TX connections before uploading code (otherwise the Arduino will give an error). If you find yourself in a scenario where your XBee just won't communicate, try resetting it to the default values.

Although many of us had worked with the photoresistor in previous projects, we learned that calibrating the sensor is important to ensure it does not report values that are negative or beyond the maximum set in the Arduino code. When creating the code for this project, our initial goal was a dynamic network in Processing: a process by which any number of radio nodes could be dynamically networked as they were detected. Due to time constraints we opted instead, for this iteration, to program a static array of radio nodes into Processing, one for each node present at the presentation. In the next iteration we intend to make the code dynamic, so that a potentially endless number of XBees can be automatically detected and added to the visualization.

Conclusion:

The demonstration of our light mapping project was highly successful. The outcome was a fun collaboration that sparked some game-like interaction, stemming from the curiosity of trying to discover which square of the concentric grid your XBee was controlling.

As a future iteration of this project, were it to be expanded upon, we wanted to explore a mesh-networked theremin. The theremin would create reflective colours, represented through values from distance sensors. This project taught us aspects of the Processing code and how a larger, more complex network might be developed, with ZigBee mesh networking as the next step; the exploration of a small network of radio communication proved successful. It also taught us many things about working with XBees: the XBees cannot themselves process the data they send or receive, but they can communicate with intelligent devices via the serial interface.

https://github.com/npyalex/Ubiquitous-Connectivity-Project.git

References:

https://www.digi.com/resources/documentation/digidocs/pdfs/90000976.pdf

http://www.ardumotive.com/how-to-use-xbee-modules-as-transmitter–receiver-en.html

https://core-electronics.com.au/tutorials/how-to-network-xbee-and-arduino.html