High Scores

This exercise was fairly simple, thankfully, as I didn’t have too much time for it, what with the thesis deadline on Friday.

The hardest part was coming up with a reason for the data to come from a webpage at all. When I think about Adafruit IO, it’s usually in relation to hardware and gathering passive sensor input, and my first instinct was to do something with that. However, I saw an applet on IFTTT which allowed you to save records into a Google Sheet, and this intrigued me. I came up with the idea of recording data into a leaderboard for a game.

This was trivial to set up. I didn’t want to reinvent the wheel, so I simply used p5’s snake game tutorial. Everything stayed mostly the same, except that I changed the controls to work with WASD instead of IJKL, and added a congratulatory message for high scores.

function checkGameStatus() {
  // The game is over when the snake's head leaves the canvas
  // or collides with its own body
  if (xCor[xCor.length - 1] > width ||
    xCor[xCor.length - 1] < 0 ||
    yCor[yCor.length - 1] > height ||
    yCor[yCor.length - 1] < 0 ||
    checkSnakeCollision()) {
    noLoop();
    // scoreElem reads 'Score = N', so the number starts at index 8
    var scoreVal = parseInt(scoreElem.html().substring(8));
    scoreElem.html('Game ended! Your score was: ' + scoreVal);
    if (scoreVal >= 10) {
      gratsElem = createDiv('Congrats! Your high score has been recorded!');
      gratsElem.position(20, 40);
      gratsElem.id('grats'); // id() is a method on p5.Element, not a property
      gratsElem.style('color', 'white');
    }
    sendData(scoreVal);
  }
}

By the way, this felt redundant AF, because IFTTT will check for the score’s validity too. This could be salvaged if IFTTT let you compare values to other values instead of forcing you to hard-code them, but whatever. Once the game ends, the sketch sends the score to Adafruit IO, which in turn sends its data to IFTTT. I have two applets recording data there.
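
The sendData() function isn’t shown above, but for reference, a minimal version could look something like this — the username, key, and feed name (‘snake-scores’) are all my own placeholders, not values from the actual project:

// Hypothetical sketch of sendData(): POST the final score to an Adafruit IO feed.
// AIO_USER, AIO_KEY, and the feed name 'snake-scores' are placeholders.
var AIO_USER = 'your-username';
var AIO_KEY = 'your-aio-key';

function sendData(scoreVal) {
  // Adafruit IO's REST API expects the key in an X-AIO-Key header
  fetch('https://io.adafruit.com/api/v2/' + AIO_USER + '/feeds/snake-scores/data', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-AIO-Key': AIO_KEY
    },
    body: JSON.stringify({ value: scoreVal })
  })
    .then(function (res) { return res.json(); })
    .then(function (data) { console.log('score recorded:', data); })
    .catch(function (err) { console.error('failed to send score:', err); });
}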

[Image: the two IFTTT applets]

The first is the Google Sheets applet I mentioned earlier. This one is a bit slow to start up, so I set up the second one to test with faster. The conditions are the same for both: when a player reaches the “high score” of 10, save the data to the sheet and send me a notification.

[Image: the high-score notification applet]

These work really well.

[Image: IFTTT notification screenshot]

[Image: scores recorded in the Google Sheet]

Future work:

Now that I have this data, it would make sense to add a leaderboard to the game page itself. I started looking into this, but the Google API for fetching data from Sheets is… A Lot. It will take more time and effort than I have to spare right now. It makes sense, though: that could be sensitive data, and there are a lot of layers of authentication going on. Anyway, this would involve setting up PubNub to call the API, and modifying the p5 code to make a call to PubNub, which would in turn call Google’s Sheets API and retrieve the scores sheet. Then, once the sheet’s data is received, it would have to be sorted from highest to lowest and displayed when the game ends. A rough sketch of what the PubNub side could look like is below.
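
Something along these lines could work for the PubNub function, assuming the sheet is readable with a plain API key (i.e., shared publicly) — the spreadsheet ID, range, and key below are placeholders, not real values:

const xhr = require('xhr');

export default (request) => {
  // Placeholders: a real spreadsheet ID, range, and API key would go here
  const sheetId = 'YOUR_SPREADSHEET_ID';
  const range = 'Sheet1!A:B';
  const apiKey = 'YOUR_GOOGLE_API_KEY';
  const apiUrl = 'https://sheets.googleapis.com/v4/spreadsheets/' +
    sheetId + '/values/' + encodeURIComponent(range) + '?key=' + apiKey;

  return xhr.fetch(apiUrl)
    .then((r) => {
      const body = JSON.parse(r.body || r);
      // body.values is an array of rows; sort by score, highest first
      const scores = (body.values || [])
        .map((row) => ({ name: row[0], score: parseInt(row[1], 10) }))
        .sort((a, b) => b.score - a.score);
      request.message.leaderboard = scores;
      return request.ok();
    })
    .catch((e) => {
      console.error(e);
      return request.ok();
    });
};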

LINK

CODE

Alexa-fying Wolfram Alpha

When it comes to asking questions, the most natural way for people, I think, is to say them out loud.

Speech recognition is really easy to set up in p5. There’s a neat library called p5.speech which has a speech synthesis component and a speech recognition component, letting you infer strings from speech as well as produce speech from strings.

The voice recognition part is trivial. Simply start listening, and wait for it to identify a sentence. Once it has, save the sentence to a variable to preserve it, and then send it off to Wolfram’s API.
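
In p5.speech terms, the recognition flow is roughly this — sendToWolfram() is a hypothetical stand-in for whatever publishes the question to PubNub:

// Minimal sketch of the recognition side, assuming p5.speech is loaded.
// sendToWolfram() is a stand-in name, not from the actual project code.
let myRec;
let question = '';

function setup() {
  noCanvas();
  myRec = new p5.SpeechRec('en-US', gotSpeech); // callback fires on each result
  myRec.start(); // start listening
}

function gotSpeech() {
  if (myRec.resultValue) {
    question = myRec.resultString; // preserve the recognized sentence
    sendToWolfram(question);
  }
}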

The speech synthesis presented more of a challenge. As of December 2018, Chrome has disabled speech synthesis without direct user input.


Of course, this complicates things, since I want the synth to automatically speak the response it receives from the API.

I have yet to solve this without making the user press a button to confirm playback.

UPDATE: changing the API type from “spoken” to “short answer” actually somehow solves this???? I have no idea why yet, but IRONICALLY, SPOKEN DOESN’T LIKE TO BE SPOKEN.
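
For reference, the playback side itself is tiny with p5.speech — this assumes the answer arrives in some gotAnswer() callback (my name, not the project’s):

// Minimal sketch of the synthesis side, assuming p5.speech is loaded.
let mySynth;

function setup() {
  mySynth = new p5.Speech(); // the synthesis half of p5.speech
}

// Hypothetical callback for when the Wolfram answer comes back over PubNub
function gotAnswer(answer) {
  // With the "spoken" API type, Chrome blocked this without direct user input
  mySynth.speak(answer);
}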

That being said, the speech synthesizer is… mmm. Bad with time. “Seven colon thirty-nine” is not a human-friendly way to give the hour!!


As for customizing an engine to be more specific about a field or topic, that one is an interesting case. One of the things I find especially frustrating about verbal communication with computers is that they give out the simplest answer possible, but sometimes you want to know the context. What assumptions did the machine take for granted that allowed it to pick that answer? In some of my test cases, I would ask for the temperature, but the location didn’t match my own. Going through the API, I found that location is an input parameter you can be explicit about. For the sake of time, I simply hard-coded the location to be Toronto, but in the future, it would be more useful for the user’s location to be identified by their IP address and then passed into the server-side code, locating the user wherever they might be. It would be worth looking into PubNub’s geolocation API.

However, this proved to be a bit of a frustrating roadblock. Though Wolfram Alpha’s API documentation suggests that the location query parameter should accept a string such as “Toronto”, the location never seemed to change. I know it wasn’t me failing to save the function properly, because I managed to change the units from Fahrenheit to Celsius no problem.


IT TURNS OUT. The conversational API differs from the base API: it doesn’t use the “location” parameter, but the “geolocation” parameter.

I hate APIs so much 🙃 (that’s a lie, I think APIs are really neat and they do wonderful things, but they rely so much on good documentation, and when there are conflicting sources, it causes so many headaches).

Ok so, it’s cool, it’s chill, it works. If you ask it questions now, it will assume Toronto’s where you are and try to answer accordingly.

ALSO. This speech-to-text recognizer has been a source of, ummm… INTERESTING HUMOUR all night, as it has been picking up on, and censoring, my err… colourful exclamations of frustration.

[Images: the recognizer censoring my colourful language]

PS: another small thing I find kind of annoying is that if you’re not careful with the language you use, the results might rest on the wrong assumptions. For example, “what is the weather” returns a definition of the word “weather” instead of the weather, but saying “what is the temperature” returns the expected results. It doesn’t appear that the spoken API accepts the “assumption” query parameter that the base API does, so working around it would require a lot of query interpretation code-side, and that can get really tricky really fast.
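
A crude version of that interpretation layer could just rewrite known problem phrasings before the query is sent off — a hypothetical sketch, not something the current code does:

// Hypothetical pre-processing: rewrite queries that Wolfram misreads
function interpretQuery(text) {
  const rewrites = [
    // "what is the weather" gets read as a definition request,
    // so steer it toward the phrasing that works
    [/\bwhat('s| is) the weather\b/i, 'what is the temperature']
  ];
  for (const [pattern, replacement] of rewrites) {
    if (pattern.test(text)) {
      return text.replace(pattern, replacement);
    }
  }
  return text;
}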


Video documentation of it working.

P5 CODE HERE.

PubNub code:

const xhr = require('xhr');
const query = require('codec/query_string');

export default (request) => {
  const appId = '4PJJL4-RQK8E84YR7';
  const spoken = 'http://www.wolframalpha.com/api/v1/result-json.jsp';

  const queryParams = {
    appid: appId,
    input: request.message.text,
    // the conversational API wants "geolocation", not "location"
    geolocation: '43.649404,-79.388785', // Toronto
    units: 'metric'
  };

  const apiUrl = spoken + '?' + query.stringify(queryParams);

  return xhr.fetch(apiUrl)
    .then((r) => {
      const body = JSON.parse(r.body || r);
      request.message.answer = body.result || body.error;
      return request.ok();
    })
    .catch((e) => {
      console.error(e);
      return request.ok();
    });
};

Hand Shaking

The most frustrating thing about working with XBee radios is how hard it is to actually see what’s going on between them. The code is fairly simple when you follow the tutorial available from the slides, but when a hiccup happens and you’re not sure where the bug is, it becomes a whole production to implement debug statements across the Arduino, the radio hooked up to the Arduino, and the Processing script. In the future, I should look into making the onboard LED blink upon transmission, as well as sending debug messages for Processing to print without counting them in its “received” data.

Another unexpected issue was that the whole thing didn’t work unless the server radio was plugged into port 5. I thought it might be the Processing script failing to pick up the active port, but even CoolTerm refused to connect to anything that wasn’t port 5. I have yet to identify the cause of this.

One of the things about this handshake method that I’m not fond of is that the order in which the programs start seems to matter. It shouldn’t. The radio hooked up to the Arduino should be sending nothing but the message to establish contact, which should be picked up by the radio hooked up to the Processing sketch; but for some reason, if the Arduino radio is plugged in first, the Processing radio never establishes contact with it. It doesn’t even try. I have a suspicion that it’s caused by the fact that the radio being plugged in at all is what Serial1.available() is looking for, so the Processing sketch is already five steps behind (i.e., the Arduino has “established contact” already) by the time Processing fires up.

Update: I was right, and I had actually forgotten to check for the request from Processing before I stopped sending hellos. The loop now makes sure to check for a question from the Processing radio, but it seems to stall after two or three questions. Will update with further debugging.

Code

Update II: It’s probably because the Processing radio only sends a request after it receives a packet from the Arduino radio. If that request gets lost during transfer, the Arduino radio will never know it was requested.

Further work: implement an easy debug script, please. This will save you so much time.

Further work II: implement a timer to send another request if no further packets are received from the Arduino radio after a set amount of time.

Process Journal #1

1. Chat

Worked mostly as expected. A loose USB cable caused minor problems. We also received possible interference from other radios communicating, as we would occasionally receive input neither of us had sent, despite having set each other as addressees.

2. Metronome

I wanted to explore working with a servo for this. Plugging the servo in and seeing the regular beat of its movement reminded me of the dull rhythm of working out. At first, I was just using the servo arm to push the cutout up and down, but I found I could give it more life by tying a string to the cutout’s wrist and attaching it to the servo arm. This way, as the servo arm moves away to push the cutout up, it pulls the cutout’s arm in. The string relaxes as the servo arm resets, and lets the cutout “relax” its own arm.

The metronome itself was easy enough to coordinate with, except that I was too far away to receive its signal the first time we attempted it. There is also an obvious delay between the transmitting radio and the reception of its data on the Arduino, noticeable when it takes a few beats for the speed to change after adjusting the potentiometer.