Do you think you can “Sing”?

Code: https://webspace.ocad.ca/~3170557/UbiquitousComputing/Week6/CanYouSing.rar

untitled-1

I started this experiment by going through all the examples available on the ML5 website. I found the webcam classification quite interesting, but also difficult to work with because of how sensitive it was to background objects. In addition, I had already had the chance to explore PoseNet in the Body-Centric course, so for this project I decided to explore something different. Moving away from webcams and pictures, I decided to experiment with the pitch detection example in ML5. Having already done some work in digital signal processing (DSP), I found it fascinating how quickly and accurately the software identified the musical note in the piano example. I wanted to modify the example to create a user interaction with the algorithm, using the data already available to give people a tool to practice their musical skills.

“Can You Sing?” is an expansion of the Piano example in which the user can select the note they want to mimic, so they can practice specific notes. The software indicates that the user has successfully mimicked the sound by highlighting the key in green. Only then is the user allowed to select another key and repeat the experience.

To do that, I had to create a way for the user to select the note they wanted to mimic. I divided the height of the keys in two: the top half looks for black keys and the bottom half looks for white keys. Every time a mouse button is pressed, the Y position of the mouse is checked. If it is in the bottom half of the piano shape, a white key is selected based on the X position of the mouse; if it is in the top half, a black key is selected based on the X position.

untitled-2
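As a rough illustration of that check, here is a minimal p5.js sketch of the mouse handling. The keyboard position, key sizes and the per-column black-key layout are assumptions for the example, not the actual values used in the project.

```javascript
// Hypothetical layout values, only for illustration.
const whiteNotes = ['C', 'D', 'E', 'F', 'G', 'A', 'B'];
const blackNotes = ['C#', 'D#', null, 'F#', 'G#', 'A#', null]; // null where no black key exists
const pianoX = 40, pianoY = 40;        // top-left corner of the drawn piano
const keyWidth = 60, keyHeight = 200;  // size of one white key
let selectedNote = null;

function mousePressed() {
  const x = mouseX - pianoX;
  const y = mouseY - pianoY;
  if (x < 0 || x >= whiteNotes.length * keyWidth || y < 0 || y >= keyHeight) return;

  const column = floor(x / keyWidth);
  if (y > keyHeight / 2) {
    selectedNote = whiteNotes[column];     // bottom half: white key
  } else if (blackNotes[column] !== null) {
    selectedNote = blackNotes[column];     // top half: black key, if that column has one
  }
}
```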

After the key is selected, the user’s voice is converted into a note and drawn on the screen. Only when the user’s input matches the selected note does the note change color to green, after which the program asks the user to select another note.

untitled-3
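The comparison itself could look roughly like the following, loosely based on the ML5 pitch-detection (CREPE) example; the variable names and the use of p5.sound’s freqToMidi() are my assumptions, not necessarily how the project does it.

```javascript
const noteNames = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];
let pitch;
let matched = false;   // when true, the selected key is drawn in green

function startPitch(stream, audioContext) {
  // Load the CREPE model, as in the ML5 pitch-detection example.
  pitch = ml5.pitchDetection('./model/', audioContext, stream, getPitch);
}

function getPitch() {
  pitch.getPitch((err, frequency) => {
    if (frequency) {
      // freqToMidi() is from p5.sound; modulo 12 drops the octave.
      const detected = noteNames[floor(freqToMidi(frequency)) % 12];
      matched = (detected === selectedNote);
    }
    getPitch();   // keep polling the model
  });
}
```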

Among the other ML5 examples, I also found the voice example very interesting, so I added speech to this program: every time the user matches the selected key, a voice says “Nicely Done!”. This was purely so that I could explore this feature as well.
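One simple way to produce that spoken feedback in the browser is the Web Speech API (p5.speech is another common option); this is just a sketch of the idea, not necessarily the call used in the project.

```javascript
// Speak the congratulation once the note is matched.
function congratulate() {
  const phrase = new SpeechSynthesisUtterance('Nicely Done!');
  window.speechSynthesis.speak(phrase);
}
```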

untitled-4

Setting up the example was very challenging. For some reason the sketch would not always receive data from the microphone, which I found irritating. I had to restart the browser every time I made a change to the code. I wasn’t able to figure out whether the problem was the libraries or the local server, but each time it took a few tries to get the application running.

But the most challenging part of the program was identifying the location of each note on the screen so that the user could choose their desired note by clicking on it on the piano shape. After a few tries I got a good understanding of how the keys were drawn and was able to use the same technique to identify which position is associated with which key. I did end up dividing the keys in half based on their Y position just to simplify the separation between the black and the white keys.

What I found useful was how the algorithm could detect the note independently of the octave. In addition, the speed at which it processed the data, and its accuracy, make it an ideal tool for musicians. It could simply be used as an online tuner for almost any musical instrument, which I find quite useful.

References:

ML5 – https://ml5js.org/

ML5 Examples – https://github.com/ml5js/ml5-examples

ML5 Pitch Detection Documentation – https://ml5js.org/docs/PitchDetection

SOS

1

Code: https://webspace.ocad.ca/~3170557/UbiquitousComputing/Week5/SOS.rar

For this project, I wanted to explore how communication between the Feather ESP32 and p5 can work using Adafruit IO. I wanted to learn more about the process and see if I could implement it in my other projects.

SOS is a simple device that allows users to contact Omid if they ever need him. The goal of the project is to notify Omid when anyone needs him while he is sitting at his desk, listening to music and working on his own projects, without disturbing him too much. The system involves two different points of interaction. One is the SOS device that Omid installs on his desk; it has an LED, a vibration motor and a button.

untitled-sketch-2_bb

The other point of interaction with the system is the website, which is a simple button that allows you to contact Omid. When the button is pressed, the LED on Omid’s device turns on together with a small vibration, indicating that someone is looking for him.

2
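A hedged sketch of what the website button does: publish a value to an Adafruit IO feed over the REST API. The username, AIO key and feed name here are placeholders, not the ones used in the project.

```javascript
const AIO_USER = 'your_username';   // placeholder
const AIO_KEY = 'your_aio_key';     // placeholder
const CALL_FEED = 'sos-call';       // feed the SOS device listens to (hypothetical name)

function callOmid() {
  fetch(`https://io.adafruit.com/api/v2/${AIO_USER}/feeds/${CALL_FEED}/data`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'X-AIO-Key': AIO_KEY },
    body: JSON.stringify({ value: '1' })   // the ESP32 turns the LED and motor on for "1"
  });
}
```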

When Omid notices the LED, he can press the button on his device, which then notifies the user that he is on his way.

3
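On the website side, noticing Omid’s reply can be as simple as polling a response feed for its latest value; again, the feed name and the polling interval are assumptions.

```javascript
const RESPONSE_FEED = 'sos-response';   // hypothetical feed the SOS device writes to

async function checkForReply() {
  const res = await fetch(
    `https://io.adafruit.com/api/v2/${AIO_USER}/feeds/${RESPONSE_FEED}/data/last`,
    { headers: { 'X-AIO-Key': AIO_KEY } }
  );
  const latest = await res.json();
  if (latest.value === '1') {
    // show "Omid is on his way" on the page
  }
}

setInterval(checkForReply, 2000);   // a modest rate helps stay under Adafruit IO's limits
```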

In addition, using a custom IFTTT applet, whenever a request is sent, a text message is also sent to Omid’s phone in case he is not sitting at his desk.

untitled

Setting up the Feather was not very challenging. I used the two examples provided by the Adafruit IO library, Digital In and Digital Out, and mixed them together to communicate both ways. I also ran the analog examples and used the block options in Adafruit IO, which was very fun to do. During my exploration, I noticed that sometimes, because of the message rate limit in Adafruit IO, messages were not sent from the SOS device, causing some minor errors in the program; this happened rarely, so it was not much of a problem. What I did find troublesome was the delay when dealing with IFTTT: in some cases the message took more than three minutes to reach my phone. I explored IFTTT, set up a number of different applets and tested them out, but the same problem was there. I also tried designing a circuit for my Ring doorbell, so that every time someone rang the bell an LED would go off, but the delay in the system made it really ineffective.

What I did find useful was how easy it was to log all the activity from my program to a Google Sheet. This is ideal when you want to track everything that happens in your program but don’t need the data live.

untitled1

References:

Adafruit IO API Docs – https://io.adafruit.com/api/docs/

Adafruit IO Basics Digital Input Example – https://learn.adafruit.com/adafruit-io-basics-digital-input/overview

Adafruit IO Basics Digital Output Example – https://learn.adafruit.com/adafruit-io-basics-digital-output

World Wide Info

final-app

Code: https://webspace.ocad.ca/~3170557/UbiquitousComputing/Week4/WorldWideInfo.rar

This project is all about my exploration of accessing the Wolfram APIs through p5. Starting with the Wolfram Spoken Results API, I wanted to extract as much information as I could. The search began with countries: I looked up their longitude and latitude so I could trace them out. To do that, I had to extract the longitude and latitude angles from the resulting text. I found that the angles always come after the words “are” and “by”, so I looked through every sentence, found those words and kept the word after each of them as the angle. I then converted the angles into canvas coordinates. After getting the data, I realized I only had access to a single central location for each country, so I decided to stick with a simple circle indicating each country’s position.
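The word-after-“are”/“by” trick looks roughly like this; the sample answer in the comment is an approximation, not an exact response from the Spoken Results API.

```javascript
// e.g. answer ≈ "The latitude and longitude of Canada are 61 degrees north by 98 degrees west"
function extractAngles(answer) {
  const words = answer.split(' ');
  const angles = [];
  for (let i = 0; i < words.length - 1; i++) {
    if (words[i] === 'are' || words[i] === 'by') {
      angles.push(words[i + 1]);   // keep the word right after "are" or "by"
    }
  }
  return angles;   // e.g. ['61', '98']
}
```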

map

However, this wasn’t enough: I wanted to find a way to get more information about the countries, so I tried asking multiple questions and showing all the results. It took a very long time for the API to return each of the answers. I also wanted access to more than just text, photos in particular, so I decided to look into Wolfram’s other APIs.

Looking at the Wolfram API website, I found that I could use the Full Results API. This API returns JSON or XML containing all the information you might need about a question or phrase, including links to images. To gain access to it, I had to make some changes to the PubNub function based on the documentation. The API lets you choose the output format you want, in addition to other filters you can apply to the answer.

After changing APIs, I realized that the Full Results API contains a lot of information, much more than I expected, so it takes a long time to get anything back from it. I decided to narrow down my questions by adding URL filters to the request: I filtered the results based on their pod ID so that I only received the data I needed. I looked through all the data the API returned for a country’s name and selected which parts I wanted to show. This gave me a fast way to access the data I needed.
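For illustration, here is the shape of a Full Results request with a pod filter, calling the API directly rather than through the PubNub function the project actually uses; the app ID and pod ID are placeholders.

```javascript
function buildQueryURL(country, podID) {
  const params = new URLSearchParams({
    appid: 'YOUR_APP_ID',     // placeholder
    input: country,
    includepodid: podID,      // only return the pod we actually want
    output: 'JSON'
  });
  return `http://api.wolframalpha.com/v2/query?${params.toString()}`;
}
```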

For my project, I decided to show a summary of information about a country whenever it was looked up. I wanted to start with the country flags and see if I could load the image URL returned by the API. I was able to retrieve the link, but when I tried to display the image, I got a “Cross-Origin” error. After a few failed attempts at requesting the image through PubNub and trying different things on the p5 side, I decided to ignore the image and focus back on the text data I had access to.

crossorigin

I decided to ask several different questions and show their results as they came in, rather than waiting for one large batch of data from the API. This made the experience much more interactive. I started by sending a request to the API with a specific pod ID and waiting for its response; after receiving it, I would send another request with a different pod ID.
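One way to chain the requests so each answer is drawn as soon as it arrives, reusing the hypothetical buildQueryURL() helper sketched above; the pod IDs listed are only examples.

```javascript
async function lookUpCountry(country) {
  const podIDs = ['Location', 'DemographicProperties', 'CapitalCity'];  // example IDs
  for (const podID of podIDs) {
    const res = await fetch(buildQueryURL(country, podID));
    const data = await res.json();
    showPod(data);   // hypothetical helper that draws this pod's result right away
  }
}
```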

I chose to request the countries’ coordinates, their demographic details, their capital city and their neighbouring countries. For the coordinates, I had to write a specific extraction step. The API returned a string with the coordinates, such as “35°N, 23°E”. I first split the string into an array at the comma, then broke each part down again at “°” to check for N, S, E and W, and finally mapped the resulting coordinates to a position on the canvas.
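The coordinate parsing, sketched with p5’s float() and map(), assuming a simple flat projection onto the whole canvas:

```javascript
// "35°N, 23°E"  ->  { x, y } on the canvas
function coordsToCanvas(str) {
  const parts = str.split(',');                        // ["35°N", " 23°E"]
  const [latValue, latDir] = parts[0].trim().split('°');
  const [lonValue, lonDir] = parts[1].trim().split('°');
  const lat = (latDir === 'S' ? -1 : 1) * float(latValue);
  const lon = (lonDir === 'W' ? -1 : 1) * float(lonValue);
  const x = map(lon, -180, 180, 0, width);             // longitude -> x
  const y = map(lat, 90, -90, 0, height);              // latitude -> y, north at the top
  return { x, y };
}
```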

For the rest of the requests, I found an approach that worked for all of them, giving me access to the results, which I then formatted differently for display.

untitled

Finally, I worked on the visuals of the application, in addition to checking the input for mistakes such as misspellings or names of things other than countries. I also added the Enter key, so that whenever it is pressed the word is searched through the API.
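The Enter key handling uses p5’s keyPressed() hook; searchCountry() and inputField stand in for whatever the sketch actually names its search function and text input.

```javascript
function keyPressed() {
  if (keyCode === ENTER) {
    searchCountry(inputField.value());   // hypothetical names for the search function and input box
  }
}
```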

untitled2

Next Step:

I would like to try zooming in on the country’s position on the map, and to gain access to other types of data such as images and videos.


Papillon

Project Name: Papillon

Project Members: Omid Ettehadi

Code: https://webspace.ocad.ca/~3170557/UbiquitousComputing/Week1/Papillon.rar

Process Journal:

The XBee Chat exercise was a fascinating opportunity to explore the XBee radios and to test their capabilities together with the CoolTerm software. Wireless communication in its simplest form can lead to a much simpler connection between the devices in a project.

While doing the exercise, I could not stop thinking about how this system could replace the complicated, internet-based systems I had used for my previous projects, and how much more reliable things would become with a much shorter lag time.

To make a metronome receiver device, I decided to start with a simple example first, to test the limits of the XBee and find out the range and the kind of packaging I could use in my project. I ran the “Physical Pixel” example and placed the device in different locations to see how different materials affected the range.

For my next step, I changed the Serial in the example to Serial1 and used another XBee with the CoolTerm software to send data to the device.

mvimg_20190113_132413

For this project, I wanted to play with something that I hadn’t had the chance to explore before. I decided to create a simple piano by playing a different frequency for each note and letting the metronome device set the tempo for the music.

To create the different frequencies, I used the “Melody” example: I turned the speaker on, waited for the period of the note, then turned the speaker off and waited for the same period again. I repeated this procedure 20 times so that the note was audible. For the music, I chose the soundtrack of the movie Papillon and wrote down the notes for the song in an array. Every time the device receives an H or an L, it plays the next note from the array.

To add more to the project, I added a servo motor mounted on a drum piece, so that after every four notes the drum makes a sound as well, keeping the beat of the music.

mvimg_20190114_162206mvimg_20190114_163832

mvimg_20190115_092041

Component List:

components-list

Final Schematic:

circuit_bb