Do you think you can “Sing”?

Code: https://webspace.ocad.ca/~3170557/UbiquitousComputing/Week6/CanYouSing.rar


I started this experiment by going through all the examples available on the ML5 website. I found the webcam classification quite interesting, but also difficult to work with because of how sensitive it was to background objects. In addition, I had already explored PoseNet in the Body-Centric course. So for this project, I decided to explore something different. Moving away from webcams and pictures, I experimented with the pitch detection example in ML5. Having already done work in digital signal processing (DSP), I found it fascinating how quickly and accurately the software identified the musical note in the piano example. I wanted to modify the example to create a user interaction with the algorithm, using the data already available, and turn it into a tool for practicing musical skills.

“Can You Sing?” is an expansion of the piano example in which the user selects the note they want to mimic, so they can practice specific notes. The software indicates when the user has successfully mimicked the sound by highlighting the key in green. Only then is the user allowed to select another key and repeat the experience.

To do that, I had to create a way for the user to select the note they wanted to mimic. I divided the height of the keys in two: the top half looks for black keys and the bottom half looks for white keys. Every time a mouse button is pressed, the Y position of the mouse is checked. If it is in the bottom half of the piano shape, a white key is selected based on the X position of the mouse; if it is in the top half, a black key is selected based on the X position.
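A minimal sketch of that hit-testing logic looks like the following. The layout constants and names (`pianoX`, `whiteKeyW`, and so on) are illustrative assumptions of mine, not the exact values from the code, which assumes seven equal-width white keys:

```js
// Hypothetical layout constants; the real sketch's geometry differs.
const WHITE_KEYS = ['C', 'D', 'E', 'F', 'G', 'A', 'B'];
const BLACK_KEYS = ['C#', 'D#', null, 'F#', 'G#', 'A#']; // no black key after E
const pianoX = 0, pianoY = 0, pianoW = 350, pianoH = 200;
const whiteKeyW = pianoW / WHITE_KEYS.length;

let selectedNote = null;

function mousePressed() {
  // Ignore clicks outside the piano shape.
  if (mouseX < pianoX || mouseX > pianoX + pianoW ||
      mouseY < pianoY || mouseY > pianoY + pianoH) return;

  const col = (mouseX - pianoX) / whiteKeyW; // position in white-key widths

  if (mouseY > pianoY + pianoH / 2) {
    // Bottom half: always a white key.
    selectedNote = WHITE_KEYS[Math.floor(col)];
  } else {
    // Top half: black keys straddle the boundaries between white keys.
    const nearest = Math.round(col);
    const onBlack = Math.abs(col - nearest) < 0.25; // black keys are narrower
    const black = onBlack ? BLACK_KEYS[nearest - 1] : null;
    selectedNote = black || WHITE_KEYS[Math.floor(col)];
  }
}
```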

After a key is selected, the user’s voice is converted into a note and drawn on the screen. Only if the user’s input matches the selected note does the note change color to green. After that, the program asks the user to select another note.
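In code terms, the loop looks roughly like this. Here `pitch` is the ml5 pitch-detection object from the example, `selectedNote` comes from the selection sketch above, and `freqToNote()` is a hypothetical helper (sketched near the end of this post) that reduces a frequency to a note name:

```js
let matched = false; // draw() uses this to paint the matched key green

function getPitch() {
  pitch.getPitch((err, frequency) => {
    if (!err && frequency && selectedNote) {
      const detected = freqToNote(frequency); // e.g. 'A#'
      if (detected === selectedNote) {
        matched = true;      // highlight the key in green
        selectedNote = null; // the user may now pick another key
      }
    }
    getPitch(); // keep polling the microphone
  });
}
```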

Among the other examples, I also found the voice output very interesting, so I added it to this program: every time the user matches the selected key, a voice says “Nicely Done!”. This was purely so that I could explore this feature as well.
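I can’t say for certain which library the original code used for this, but the browser’s built-in Web Speech API is enough for a minimal version, called at the point where `matched` is set to true:

```js
// Spoken feedback via the browser's built-in SpeechSynthesis API
// (an assumption; the original may use a different speech library).
function speakPraise() {
  const utterance = new SpeechSynthesisUtterance('Nicely Done!');
  window.speechSynthesis.speak(utterance);
}
```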

Setting up the example was very challenging. For some reason the software would not always receive data from the microphone, which I found irritating, and I had to restart the browser each time I made changes to the code. I wasn’t able to figure out whether the problem came from the libraries or just the local server, but it took a few tries each time to get the application running.
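For reference, the setup in the ml5 pitch-detection example looks like this. One likely culprit for the flaky microphone, though I never confirmed it, is that browsers suspend the AudioContext until a user gesture:

```js
let audioContext, mic, pitch;

function setup() {
  createCanvas(350, 400);
  audioContext = getAudioContext(); // p5.sound's shared AudioContext
  mic = new p5.AudioIn();
  mic.start(startPitch);            // ask for microphone access
}

function startPitch() {
  // './model/' points at the CREPE model files shipped with the example.
  pitch = ml5.pitchDetection('./model/', audioContext, mic.stream, modelLoaded);
}

function modelLoaded() {
  getPitch(); // start the polling loop shown earlier
}
```

If the suspended AudioContext is indeed the problem, calling p5.sound’s `userStartAudio()` inside the existing `mousePressed()` handler would be the usual fix.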

But the most challenging part of the program was identifying the location of each note on the screen so that the user could choose their desired note by clicking on it on the piano shape. After a few tries I got a good understanding of how the keys were drawn and used the same technique to determine which position is associated with which key. I did end up dividing the keys in half based on their Y position simply to separate the black keys from the white ones.

What I found useful was how the algorithm could detect the note independently of the octave. The speed at which it processed the data, together with its accuracy, makes it an ideal tool for musicians. It could simply be used as an online tuner for almost all musical instruments, which I find quite useful.
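The octave-independence falls out of the math: the model reports a frequency, and reducing it to a pitch class only takes a conversion to a MIDI note number followed by a modulo 12. This is the standard conversion, and a plausible sketch of the `freqToNote()` helper referenced earlier:

```js
const NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F',
               'F#', 'G', 'G#', 'A', 'A#', 'B'];

function freqToNote(frequency) {
  // MIDI 69 = A4 = 440 Hz; each semitone is a factor of 2^(1/12).
  const midi = Math.round(69 + 12 * Math.log2(frequency / 440));
  return NOTES[((midi % 12) + 12) % 12]; // mod 12 drops the octave
}

// freqToNote(440)   -> 'A'
// freqToNote(880)   -> 'A'  (one octave up, same note name)
// freqToNote(261.6) -> 'C'  (middle C)
```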

References:

ML5 – https://ml5js.org/

ML5 Examples – https://github.com/ml5js/ml5-examples

ML5 Pitch Detection Documentation – https://ml5js.org/docs/PitchDetection
