Working with APIs – Tag Cloud


Abstract:

In this experiment, we had to explore connecting to an API and making use of the data we received from it. I explored various APIs other than the one we worked with in class, Wolfram. The data we were working with was mainly text, and the challenge was to take the text received from different queries and turn it into something visual on the screen. I used p5.js for this experiment, but we were not limited to this framework. I had several ideas involving other data sets and frameworks and tried a few, but they were not real-time: I could not send a query and get back an interactive response the way I could with the PubNub and Wolfram setup, and the alternatives required hosting your own servers, which I did not have time to explore. I settled on using the data from Wolfram and creating a tag cloud that displays the words in different colours.

 

GitHub Link:

https://github.com/imaginere/UbiquitousComputing/tree/master/MyCode
Process:

I explored using Node.js, the desktop version of Processing, and plain HTML5. My main idea was to download stock data and convert it into a chart. This was possible in Processing after I downloaded a CSV file from https://www.alphavantage.co/, which gave me an API key and the stock data. I could accomplish this, but the data would not be real-time and there was no interactive loop to call the data with a query from the browser.

I used the code samples from https://github.com/feltron to explore working with the data set and visualizing it in Processing.

The Skillshare class was also very good for exploring how to visualize data I have in the form of JSON or CSV files.

The API:

I settled on using the API we set up in class, which had an interactive component and sent back text data in response to the queries we sent it through PubNub.

One of the ideas was to convert the colour names in the text string I received into their corresponding RGB values and show them on the screen as a palette. I thought this would be a relatively straightforward process, but as with anything, simple does not mean easy. For one, the text I was getting back was a string with many words, and I would have to turn it into an array and evaluate each word; then there was the problem that sometimes what I got back were things like "Light Blue", which my code needed to understand as a single colour and convert accordingly. I was able to split the string into an array and check for words, but I got stuck using a dictionary to map the words to RGB codes and then putting that back on the screen; the fact that an RGB value is a bracketed set of integers seemed to be the biggest problem. I scrapped that idea for this assignment as I need to brush up my coding to get past that challenge.
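For reference, this is a rough sketch of the dictionary approach I was reaching for. The colour names and RGB values here are only placeholders, and two-word names like "light blue" are checked before single words:

// Sketch of the colour-lookup idea (hypothetical names and values).
const colourMap = {
  "red": [255, 0, 0],
  "green": [0, 128, 0],
  "blue": [0, 0, 255],
  "light blue": [173, 216, 230]
};

function wordsToSwatches(answerText) {
  const words = answerText.toLowerCase().split(" ");
  const found = [];
  for (let i = 0; i < words.length; i++) {
    const pair = (words[i] + " " + (words[i + 1] || "")).trim();
    if (colourMap[pair]) {            // check two-word names first
      found.push(colourMap[pair]);
      i++;                            // skip the second word of the pair
    } else if (colourMap[words[i]]) {
      found.push(colourMap[words[i]]);
    }
  }
  return found;                       // array of [r, g, b] triples
}

function drawPalette(swatches) {
  noStroke();
  for (let i = 0; i < swatches.length; i++) {
    fill(swatches[i][0], swatches[i][1], swatches[i][2]);
    rect(i * 60, 0, 60, 60);          // one square per colour found
  }
}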

In the process I did find that I could display visuals based on the input I received, and I came to understand how an array can be broken down and how PubNub sends back a data string.

I used this to inform my next experiment. I made a new sketch and started from scratch, making use of the text I was receiving and trying to break it up into single words. For this I went down the rabbit hole of working with text in p5.js and explored a JavaScript add-on for p5 called RiTa:

https://rednoise.org/rita/

This JavaScript library is compatible with p5.js and provides sophisticated functions for working with text. I would like to say it was needed, but it was not; it's a good resource I now have for future projects, but I did not end up using it for this one.

With very few lines of code I was able to create a tag cloud from the text I received. This proved to be very satisfying and also very effective, as the communication was real-time and it always created a new visual pattern based on the query. I would like to add a bit more finesse to the final file, but for now I have a working prototype which takes the data received and does something interesting with it.
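The core of it really is only a few lines. A minimal version looks something like this (the canvas size, the PubNub hookup, and the way words are scaled are placeholders):

// Minimal p5.js tag cloud from a received text string.
let words = [];

function setup() {
  createCanvas(900, 600);
  noLoop();                 // only redraw when a new answer arrives
}

// called when PubNub hands back the Wolfram answer
function showAnswer(answerText) {
  words = answerText.split(" ");
  redraw();
}

function draw() {
  background(20);
  textAlign(CENTER, CENTER);
  for (let i = 0; i < words.length; i++) {
    fill(random(100, 255), random(100, 255), random(100, 255));
    textSize(random(14, 64));                       // could scale by word length instead
    text(words[i], random(width), random(height));  // scatter each word
  }
}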

This was the Result:


References:

https://p5js.org/reference/
https://www.alphavantage.co/ (Finance API)
https://github.com/feltron (Processing code for data visualization)
https://rednoise.org/rita/ (JavaScript library for working with text)
https://products.wolframalpha.com/api/

Code from Nick

DIY Siri

Ubiquitous Computing Process Journal #4
GitHub
Website Application
By Olivia Prior

DIY Siri web application screenshot

Concept

DIY Siri is a web application that uses a combination of the browser's built-in JavaScript speech functionality and the Wolfram Alpha API in an attempt to answer questions asked through voice and respond back through sound. The web application prompts the user to ask a question by speaking out loud and is given a spoken response in return through text-to-speech. The application is open to all questions being asked, so the user is able to view and listen to the responses given by anyone else using the application in parallel. This web application is a small demonstration of how the ubiquitous smart devices that occupy households can be made accessible for customization.

Objective

With this assignment, I was curious about how one actually constructs their own Siri, Alexa, or Google Home through readily available APIs online. In another class, I had explored the JavaScript speech-to-text functions through continuous listening. I found that the JavaScript functionality was clunky and would time out. When introduced to the subject of APIs in class, I thought it would be an interesting experiment to see 1) if using explicit commands such as start and stop buttons would lend itself to a smoother use of the JavaScript functions, and 2) whether it is really that simple to connect JavaScript to an "all knowing" API like Wolfram Alpha to craft a simple at-home assistant.

Tools & Software

– Text editor of your choice
– Wolfram Alpha API developer key
– PubNub account for connecting the web page to the Wolfram Alpha API
– jQuery

Process

The base code I used for the JavaScript follows closely this tutorial, which demonstrates the many different uses that can be applied. The tutorial shows the ability to start a recording, transcribe the voice to text as someone is speaking, pause/stop the recording, and save the recording to local notes. Once the recording is saved to the local notes, the user has the ability to "listen" to those notes through a browser-based text-to-speech function.

My first step was to isolate the steps of “I am talking and asking a question” to “I need to send this question off into the ether”.

I created a speech recognition object and two buttons: the first was a record button and the second was a stop. I made the button visibility toggle for UX purposes: the user can only press ask or stop.

Video demo of toggling the button 

The first button was actively listening through the JavaScript function “.start()” which is an available function from the speech recognition object.

Screenshot of instantiating the speech recognition object.

The second button had an onclick event that executed ".stop()". While listening, the recognition would transcribe what the user was saying and input it into a text area on the page. This worked surprisingly well. When I pressed stop, the microphone would turn off.
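Pulled together, the recognition object and the two toggled buttons look roughly like the sketch below. The element IDs are my own, jQuery is assumed to be loaded, and the recognition object is the standard (webkit-prefixed) Web Speech API:

// Sketch of the ask/stop toggle around a speech recognition object.
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.interimResults = true;

$('#ask').on('click', () => {
  recognition.start();          // begin listening
  $('#ask').hide();
  $('#stop').show();            // only one button is visible at a time
});

$('#stop').on('click', () => {
  recognition.stop();           // turn the microphone off
  $('#stop').hide();
  $('#ask').show();
});

recognition.onresult = (event) => {
  // write the transcript into the text area as the user speaks
  const transcript = Array.from(event.results)
    .map(r => r[0].transcript)
    .join('');
  $('#question').val(transcript);
};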

The next step was tying this into the Wolfram Alpha code we had used with p5.js in class. I took the code with my PubNub module that connected my account to the Wolfram Alpha Developer API, took the value of what I had transcribed into the text area on the page, and sent it as a message through PubNub. Just as if I had typed what I was saying, I received a response to my question from Wolfram Alpha. I outputted the message onto the web page.
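The round trip looks roughly like this. The keys, channel name, and element IDs below are placeholders, and the exact callback shape depends on the PubNub SDK version used in class:

// Rough shape of the PubNub round trip.
const pubnub = new PubNub({
  publishKey: 'pub-key-here',
  subscribeKey: 'sub-key-here'
});

function askWolfram() {
  const question = $('#question').val();            // the transcribed text
  pubnub.publish({
    channel: 'wolfram_channel',
    message: { text: question }
  });
}

pubnub.addListener({
  message: (event) => {
    const answer = event.message.answer;            // filled in by the PubNub function
    $('#answer').text(answer);                      // show it on the page
    speakAnswer(answer);                            // hand off to text-to-speech (next step)
  }
});
pubnub.subscribe({ channels: ['wolfram_channel'] });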

My next step was to connect the text-to-speech so that the answer could be played back audibly. I created a new "SpeechSynthesisUtterance" object, which is a standard JavaScript object. I passed the message from Wolfram Alpha into the .speak() function and the browser responded with the answer to my question.

Screenshot of the text-to-speech object being instantiated.
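A minimal version of that step looks something like this (the function name speakAnswer is my own; the rest is the standard Web Speech API):

// Having the browser read the answer back.
function speakAnswer(answerText) {
  const utterance = new SpeechSynthesisUtterance(answerText);
  utterance.lang = 'en-US';                 // optionally pick a language/voice
  window.speechSynthesis.speak(utterance);  // the browser speaks the Wolfram answer
}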

DIY Siri Demo

Challenges & Next Steps 

Upon testing this web application, I sent a link to my friend in a different province. I was fixing my CSS and all of a sudden heard my browser start talking. The way I had set up my PubNub server on the web application was that everyone who had access to the application had the ability to listen to whatever was being asked. Initially, I started to fix the issue. Upon reflection, I realized that the ability to listen to whatever anyone is asking brings up interesting connotations of security and surveillance, especially in an open source concept. I decided to keep this the same and to test it out in class and see what the reception would be from my classmates.

A next step I considered was to store previously asked questions so that the user could quickly press and re-ask them if needed, such as "what is the weather", etc. Once I realized that my web application was able to listen to anyone's question, anywhere, I decided that this was more of a listening tool than an asking application. If I were to enforce individual privacy on this application, I would consider storing the frequently asked questions in the local browser storage.

Since I was attracted to the idea of listening, I think I would make it more apparent that multiple people are on the application and asking questions. It makes it a much more collaborative experience and could be elevated to a more polished art piece. Currently, this application lies in the world between tool and commentary and needs refining touches on either end of that spectrum to make it a more complete experience. Until then, this is a simple, basic DIY Siri that allows you to ask questions through voice.

References and Resources

 

Speech to Text Tutorial 

Mozilla Speech to Text Documentation

AM to PM (Digital Postcards)

Description

AM to PM is a visualization of local time around the world, using PubNub and the Wolfram Spoken Results API to query the current local time of countries. Click on a country to see its local time visualized as a lit-up sky, with colors indicating the time of day.

Ideation

My first idea was to create a project under the theme of wildlife, potentially wildlife conservation; however, I was unable to find a suitable wildlife data API to query. I tried querying Wolfram Alpha, but the results returned by the Spoken API were very unpredictable and I wasn't able to anticipate what kind of response I would get. For example, when I asked "How big is a lion?" I got the response

The typical length of a lion is about 9.8 feet

however when I queried “How big is a buffalo?” I got the response

The area of Buffalo, New York is about 40.4 square miles

This showed me that the Spoken API would not be beneficial for my purpose. It returns conversational answers that are more suited to an app where voice is being used as a feature. I decided to limit my queries to countries and their local time using the query "local time [country name]", e.g. "local time Kenya" returns

The answer is 7:14:48 P.M. EAT, Tuesday, February 5, 2019

To work with this response, I split it at each " " and find the position of the string 'is'; once this string is found, I take the next index, i.e. response[i+1], to get the time (7:14:48) and response[i+2] to get the string 'A.M.' or 'P.M.'.

The time variable is then split at each ":", and I use the hour value and an A.M./P.M. check to determine what color sky to display. The sky color varies by hour: dark black-blue-purple skies for night time, purple-pink skies for sunrise, blue skies for mornings, yellow skies for midday, orange skies for evening, and purplish-orange skies for sunsets.
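A sketch of that parsing (variable names are mine):

// Turn "The answer is 7:14:48 P.M. EAT, Tuesday, February 5, 2019" into an hour value.
function parseLocalTime(answerText) {
  const response = answerText.split(" ");
  let time = "";
  let meridiem = "";
  for (let i = 0; i < response.length; i++) {
    if (response[i] === "is") {
      time = response[i + 1];               // "7:14:48"
      meridiem = response[i + 2];           // "P.M." or "A.M."
      break;
    }
  }
  const parts = time.split(":");
  let hour = parseInt(parts[0], 10);
  const isPM = meridiem.startsWith("P");
  if (isPM && hour < 12) hour += 12;        // convert to 24-hour for the sky lookup
  if (!isPM && hour === 12) hour = 0;
  return hour;                              // used to pick the sky color
}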

I would like to expand my project to query more climate data, perhaps using the Wolfram Full Results API, which I was not able to get working for this project. I didn't query this data with the Spoken API as I wasn't able to predict what response I would get for each country; hard-coding an algorithm like the one I used to extract the time from the "local time" query becomes much more difficult with my larger dataset of 244 countries due to the varied responses, e.g.:

“Current weather in Kenya” returns

The weather in Kenya is mostly clear, with light winds and a temperature of 73 degrees Fahrenheit

“Current weather in United States” returns

The weather in the United States ranges from fog to overcast, with light winds and a temperature of 20 degrees Fahrenheit

Code / Process – Challenges & Experience:

Initial idea was to work with wild animals – couldn’t find data

I wanted to map the mouse position to latitudes and longitudes, but I scrapped this idea because: 1. it would end up making too many API calls, and 2. I found that giving (lat, long) values to Wolfram didn't return the expected results. Wolfram just reinterpreted the value and can't return a country based on a latitude and longitude; one can't query which country is at a specific latitude.


I thought of using the getLocation() geolocation function to get GPS co-ordinates but decided against it, as this would tie me to the country linked to my IP address, and I wanted my sketch to change dynamically to show data on various countries. I ended up taking latitude/longitude values from the Google Developers dataset countries.csv, and using this data I translated the lat/long to x-y screen co-ordinates with the Mercator map projection function. Below is an example of how the co-ordinates translated to 2D space.

Animation of the co-ordinates translated to 2D space.

After determining that the translation function was working correctly, I created a Country class, generating new countries from a .csv file and saving them in an array when the sketch was loaded. Each Country object had the following attributes: lat, long, country code, country name, x co-ordinate, and y co-ordinate.

In my Country.translate() function, I put the code to translate from lat/long to 2D x-y co-ordinates using the Mercator projection. In Country.display(), I placed the code to draw a circle at a specific x,y co-ordinate.

Upon drawing the dots for the countries, I realized that they were skewed off the map, as the x,y values corresponded to a Cartesian plane. To fix this I used the map() function to remap the x and y values to screen coordinates.
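A rough sketch of the Country class with the Mercator translate() and the remapping in display(); the field names and the min/max arguments are my own:

// Country: lat/long -> Mercator x,y -> canvas position (p5.js).
class Country {
  constructor(name, code, lat, lon) {
    this.name = name;
    this.code = code;
    this.lat = lat;
    this.lon = lon;
    this.x = 0;
    this.y = 0;
  }

  // Mercator projection: lat/long -> unscaled x,y
  translate() {
    const lonRad = radians(this.lon);
    const latRad = radians(this.lat);
    this.x = lonRad;
    this.y = log(tan(PI / 4 + latRad / 2));
  }

  // remap the projected values onto the canvas before drawing
  display(minX, maxX, minY, maxY) {
    const sx = map(this.x, minX, maxX, 0, width);
    const sy = map(this.y, minY, maxY, height, 0); // flip so north is up
    ellipse(sx, sy, 6, 6);
  }
}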


Mapping the x-values to between 0 and screen width.


The final x,y values mapped to the screen’s width & height.

The next step was to make each country dot clickable, so that when a user clicked on a country dot, its country name would be passed to PubNub to query the Wolfram API.

screenshot-2019-02-04-210729

I then created a Country.clickCheck() function that is triggered on mouseClicked() to check which dot has been clicked (see image above). This is done by checking the mouseX, mouseY location against a designated area around the x,y coordinate of the country dot. Upon finding a country that has been clicked, the country name, which is saved in a global variable that is updated each time a user successfully clicks on a country, is passed to the PubNub Wolfram Alpha query.
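In sketch form, the click check works something like this. The hit radius and variable names are mine, I assume each Country stores its mapped screen position, and sendTheMessage stands in for whatever function publishes the query to PubNub:

// Check the mouse position against each country dot on click.
function mouseClicked() {
  const hitRadius = 6;
  for (let i = 0; i < countries.length; i++) {
    const c = countries[i];
    if (dist(mouseX, mouseY, c.screenX, c.screenY) < hitRadius) {
      clickedCountry = c.name;                 // global, updated on every successful click
      sendTheMessage("local time " + c.name);  // hand the query to the PubNub/Wolfram code
      break;
    }
  }
}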

I also updated my code so that when a country is clicked, it is highlighted by a black dot to make the visualization more meaningful.


Kenya's sky at 8:00:43 P.M. EAT on Tuesday, February 5, 2019

Aesthetic choices made:


To experiment with a new way of asking a question, once I had constrained my project to local time and countries, I wanted to avoid asking the user to type in a country. So I settled on a map visualization, taking advantage of the affordances of a map. I wanted to keep it abstract, so instead of country shapes I created Country objects that are visually represented as dots on a 2D map; unexpectedly, just by looking at the dots you can almost tell which continent is which. My goal was to keep it aesthetically pleasing but still informational. I am toying with the idea of printing the time values, but I think the colors are pretty easy to interpret.

Possible future expansions:

I'd like to explore adding more APIs to this project.

Link to code: here

References:

Natural Logarithm in p5 : here and here

Mercator Map projection: Lat,Long -> x,y screen co-ordinates: here

Working with / Reading from CSV files: here


Data Stars

 

Overview:

With this exercise I discovered different databases and various ways to incorporate data into a visual conceptualization. Originally, working with Wolfram, I used the incoming string of data and manipulated the type being received as answers. I was curious about other databases and came across the HYG database. It holds background information on stars such as spectrum, brightness, position, and distance. What is particularly interesting is that HYG uses parsecs and gives you the distance from the Earth of all the stars, even ones we can't see with the naked eye. I thought it would be interesting to portray the visual magnitude of the many stars we can't see and turn it into a data visualization. Using p5.js to receive the incoming data, I mapped the catalogue of stars onto a p5.js sketch.

 

Process:

if (readDelay) {
    setTimeout(() => {
        return reader.read().then(processText);
    }, readDelay);
} else {
    return reader.read().then(processText);
}

 

I utilized the above code to control the incoming data. During the first stages of testing I found that the data came in very slowly at 42 kb. To work with the data effectively, I decided to just download it once, which gave me 121 data points to work with initially.

Now that I had figured out how to control the data, I had to convert that information into the stars I wanted to visualize. I accomplished this by splitting each line with line.split(','), which turns each entry into an array where all the values between the commas become elements.

When trying to figure out how to use the data, I took the ID at index 0, parsed it into a number, and saved it into the star object. I did this for id, distance, and magnitude. Reading the raw data in the console, it was difficult to discern which values meant what, so I went to look up more documentation. For example, the position of the star is actually 3 values, but I originally ended up coding only 1 designation for it.
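As a sketch, parsing one line might look like this; the column indices are a guess and depend on which version of the HYG CSV is used:

// Turn one CSV line into a star object (column positions assumed).
function parseStar(line) {
  const values = line.split(",");       // each value between commas becomes an array entry
  return {
    id: parseInt(values[0], 10),        // ID sits at index 0
    dist: parseFloat(values[9]),        // distance in parsecs (index assumed)
    mag: parseFloat(values[13]),        // visual magnitude (index assumed)
    x: parseFloat(values[17]),          // cartesian position (indices assumed)
    y: parseFloat(values[18]),
    z: parseFloat(values[19])
  };
}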

At this point I wanted to see if I could distinguish the names of the stars from the abbreviations in the index. My initial idea was to attach the scientific names, typed out, to the other characteristics listed in the index. To see if I could do this I decided to try to find the Sun specifically in the data set so I had a comparison. That posed a really good question: with all this data, how do I plan to sift through it? I figured that since it is a data set of all the known stars, I should be able to find the Sun specifically. To sift through the information I was looking for, I took out the read delay. This kept triggering errors (Uncaught (in promise) TypeError: cannot read undefined). Next I turned to the documentation for the HYG database and then went to Astronomy Stack Exchange. Other people were having the same issue, with no solution in sight except to use other databases that are better for basic naming conventions. I put that aspect aside for a possible future iteration.

 

Using a for loop iterating over the star variable, I used the values x, y, z. I needed the simplest way of dealing with positions and discovered the best way to do that is just to map them onto one plane. You can then decide which plane to project to; using x,z for example, I took the x,z coordinates of the star and mapped them right into the sketch.js file.

The star database seemed to have a lot of errors; I was getting all kinds of zeros that I didn't want to display. The next issue was that the data might exceed the canvas. My solution was to find the maximum of all the values I had. Using Lodash over all the data points, I created one object that tracked the biggest and smallest numbers. This gave me a better idea of how to map all the different stars onto the canvas. At first I was trying to work out, if I have an x value of -1000, how do I display it within the range of the canvas? I then realized that if the smallest value of a star is -1000, then that should map back to 0.
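Roughly, with Lodash (the field names are mine):

// Track the bounds once, then remap every star onto the canvas.
const xs = stars.map(s => s.x);
const zs = stars.map(s => s.z);
const bounds = {
  minX: _.min(xs), maxX: _.max(xs),   // e.g. an x of -1000 maps back to 0
  minZ: _.min(zs), maxZ: _.max(zs)
};

function drawStar(star) {
  if (star.x === 0 && star.z === 0) return;   // skip the zero-valued entries
  const px = map(star.x, bounds.minX, bounds.maxX, 0, width);
  const py = map(star.z, bounds.minZ, bounds.maxZ, 0, height);
  ellipse(px, py, 2, 2);
}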

I was able to accomplish this by adding a check in the code (if stars.length already exceeds 1000, then stop). I then looked up star spectral types and went to the HYG database page to find out what the spectral values are, so that I could map the spectral value of a star to RGB colours. I used this information to create a variable for spectral colours and a variable for spectral colour names.
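The lookup was along these lines; the RGB values are rough approximations of the conventional spectral-class colours:

// Approximate spectral-class-to-colour lookup, keyed on the first letter of the spect field.
const spectralColours = {
  O: [155, 176, 255],   // blue
  B: [170, 191, 255],   // blue-white
  A: [202, 215, 255],   // white
  F: [248, 247, 255],   // yellow-white
  G: [255, 244, 234],   // yellow (Sun-like)
  K: [255, 210, 161],   // orange
  M: [255, 204, 111]    // red-orange
};

function starColour(spect) {
  const cls = (spect || "").charAt(0).toUpperCase();
  return spectralColours[cls] || [255, 255, 255];   // default to white if unknown
}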

https://vimeo.com/user92253960/review/317858895/0955e223d6

Conclusion

I learned a lot about more advanced debugging in this exercise. I would not have been able to make this project work without the forums, where people had worked out solutions to the same issues I was experiencing. Databases are not perfect entities, and when using them it is imperative to recalibrate. Mapping in this exercise produced varied results; at first it was hard to wrap my head around having values that are meant to be 3-dimensional and how to place that data in a 2-dimensional space. I discovered orthographic projection (sometimes called orthogonal projection), a means of representing three-dimensional objects in two-dimensional space. Using this process is what helped me create this code effectively, and it was a valuable learning experience. I would like to iterate on this project at a later date and discover new ways to display this particular data set, expanding on the graphics I created and adding other elements like sound.

https://github.com/aliciablakey/star.git

References

https://github.com/astronexus/HYG-Database

https://astronomy.stackexchange.com

https://lodash.com

Weather-any-city

Design Features

The app allows users to simply type or say a city name to get the current weather update for that city. A complete sentence also works.

Design Concept

End User -> Dialogflow -> OpenWeatherMapAPI (Heroku) -> Dialogflow -> End User

Tools&Materials

Dialogflow/Heroku/GitHub

Process

My starting point on this project was to research the differences between RESTful APIs and conversational APIs. Wolfram Alpha is definitely a powerful conversational API; however, with not much knowledge of p5, I was not confident I could come up with code that would send the question string automatically or display the answer in another way. Therefore my focus moved to how to keep the "conversation features" while adding some scope and making the user experience smoother.

Then I found Dialogflow. Many may still think of it as a chatbot tool where users write their own questions and answers and its AI feature learns from those inputs. That feature still exists; however, none of the answers in my app were input by me. They are generated by the attached API, which is enabled by the "webhook" feature.

The first step is to create an agent in Dialogflow. Every agent comes with 2 default intents which can answer basic questions such as "Hello?". After creating the new intent "Weather", I typed in several common sentences used to ask about the weather, such as "How's the weather in Toronto?", "Tell me about the current weather in Toronto.", "Is it cold in Toronto?", and just the city name "Toronto". The first three natural language sentences satisfy users who prefer to speak a full sentence so that it feels more like a dialogue; the city name satisfies users who type or would like to get their answers right away.

With the connection to an API, I did not have to put anything in the response section to teach Dialogflow how to answer a question; instead, I needed to find an API adequate to my design features. I wasted a lot of time with the Yahoo Weather API, which stopped working this January. However, once I found the OpenWeatherMap API, everything was solved.

I did not have to write my own code to deploy an application; there are existing resources on GitHub. I forked the useful parts of some of them and linked them to my app on Heroku, where I could generate a webhook URL to use in Dialogflow.
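I did not write the fulfillment code myself, but the general shape of a Dialogflow webhook that calls OpenWeatherMap looks something like the sketch below (an Express app; the parameter name geo-city and the response phrasing are assumptions based on the public Dialogflow and OpenWeatherMap documentation, not the forked code):

// Minimal Dialogflow fulfillment webhook calling OpenWeatherMap.
const express = require('express');
const fetch = require('node-fetch');
const app = express();
app.use(express.json());

const API_KEY = process.env.OPENWEATHER_KEY;      // set in Heroku config vars

app.post('/webhook', async (req, res) => {
  const city = req.body.queryResult.parameters['geo-city'];   // parameter name is an assumption
  const url = 'https://api.openweathermap.org/data/2.5/weather?q=' +
    encodeURIComponent(city) + '&units=metric&appid=' + API_KEY;
  const weather = await (await fetch(url)).json();
  const text = 'It is ' + weather.main.temp + ' degrees in ' + city +
    ' with ' + weather.weather[0].description + '.';
  res.json({ fulfillmentText: text });            // Dialogflow reads this back to the user
});

app.listen(process.env.PORT || 3000);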


After a few other checks, like enabling the webhook for this intent, the app was able to answer questions about the weather in any city.

Check out the video here:

https://vimeo.com/user53586243/review/315144500/e73a0e5dc9

 

World Wide Info

GitHub Link: https://github.com/Omid-Ettehadi/WorldWideInfo

This project is all about my exploration of accessing Wolfram APIs through p5. Starting with the Wolfram Spoken Results API, I wanted to extract as much information as I could. The search began around countries: I looked for their longitude and latitude positions to be able to trace them out. To do that, I had to extract the longitude and latitude angles from the resulting text. I found out the angles always come after the words "are" and "by", so I looked through every sentence, found those words, and kept the word after them as the angle. I then converted the angles into canvas coordinates by looking at the geographical element. After gaining the data, I realized that I only have access to a central location for each country, so I decided to stick with a simple circle indicating the countries' positions.
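The extraction itself can be as simple as the sketch below (function and variable names are mine):

// Pull the angles that follow "are" and "by" out of the spoken answer.
function extractAngles(answerText) {
  const words = answerText.split(" ");
  let latitude = null;
  let longitude = null;
  for (let i = 0; i < words.length - 1; i++) {
    if (words[i] === "are") latitude = parseFloat(words[i + 1]);   // word after "are"
    if (words[i] === "by") longitude = parseFloat(words[i + 1]);   // word after "by"
  }
  return { latitude, longitude };
}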


However, this wasn't enough. I wanted to find a way to get more information about the countries, so I tried asking multiple questions and showing all the results. It took a very long time for the API to return each of the answers. Besides, I also wanted access to more than just text; I wanted photos as well, so I decided to look into Wolfram's other APIs.

After looking at the Wolfram API website, I found out I could use the Full Results API. This API returns JSON or XML including all the information you might need on a question or phrase. It also returns links to images. In order to gain access to this API, I had to make some changes to the PubNub function based on the documentation. The API lets you choose the output format you want, in addition to other filters you can apply to your answer.

After changing the API, I realized that the Full Results API returns a lot of information, much more than I expected; therefore it takes a long time to get anything back from it. I decided to narrow down my questions by adding URL filters for the answer. I filtered the results based on their pod id so that I only received the data I needed. I looked through all the data the API returned for a country's name and selected which parts I wanted to show. This way I had a rapid way to access the data I needed.
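The change amounts to pointing the PubNub function at the v2/query endpoint and passing the filters as query parameters, roughly like this (passing the pod id through the message is my own convention; see the Full Results API documentation for the exact parameters):

// Sketch of the PubNub function change for the Full Results API.
const xhr = require('xhr');
const query = require('codec/query_string');

export default (request) => {
  const queryParams = {
    appid: 'YOUR-APP-ID',
    input: request.message.text,
    output: 'json',
    includepodid: request.message.podid   // e.g. only return the pod we asked for
  };
  const apiUrl = 'http://api.wolframalpha.com/v2/query?' + query.stringify(queryParams);

  return xhr.fetch(apiUrl)
    .then((r) => {
      const body = JSON.parse(r.body || r);
      request.message.answer = body.queryresult;   // pods live under queryresult.pods
      return request.ok();
    })
    .catch((e) => {
      console.error(e);
      return request.ok();
    });
};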

I decided that my project would show a summary of information about different countries when they were looked up. To begin with, I wanted to start with the countries' flags and see if I could gain access to the image URL I could receive from the API. I was able to retrieve the link, but when I requested the link to be shown, it would give me a "Cross-Origin" error. After a few failed attempts at requesting the image through PubNub and trying different things on the p5 side, I decided to ignore the image and focus back on the text data that I had access to.


I decided to ask different questions and show their results as they came in, rather than waiting for a large amount of data to be received from the API. This made the experience much more interactive. I started by sending a request to the API with a specific pod id and waiting for its response. After receiving the response, I would send another request for another search with a different pod id.

I chose to request the countries' coordinates, their demographic details, their capital city, and their neighbouring countries. For the coordinates, I had to write a specific extraction routine. The API returned a string with the coordinates, such as "35°N, 23°E". I had to convert the string into an array based on the comma, and then break each part down based on "°" to check for N, S, E, and W. After that I would map the resulting coordinates to a position on the canvas.
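A sketch of that parsing, assuming a p5 sketch where map(), width, and height are available (names are mine):

// "35°N, 23°E" -> a position on the canvas.
function parseCoordinates(coordText) {
  const parts = coordText.split(",");                  // ["35°N", " 23°E"]
  const latPiece = parts[0].trim().split("°");         // ["35", "N"]
  const lonPiece = parts[1].trim().split("°");         // ["23", "E"]
  let lat = parseFloat(latPiece[0]);
  let lon = parseFloat(lonPiece[0]);
  if (latPiece[1] === "S") lat = -lat;                 // south and west go negative
  if (lonPiece[1] === "W") lon = -lon;
  const x = map(lon, -180, 180, 0, width);
  const y = map(lat, 90, -90, 0, height);
  return { x, y };
}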

For the rest of the requests, I found an approach that worked for all of them, giving me access to the results, which I then formatted differently for display.


Finally, I worked on the visuals of the application, in addition to checking the input for mistakes such as wrong spelling or names of things other than countries. I also added the Enter key so that whenever it is pressed, the word is searched in the API.


Next Step:

I would like to attempt to zoom in on the position of the country on the map and also gain access to other types of data such as images and videos.


Computational APIs – Moon Phases, When?

“When will the moon look like that? The answer is…”

Website Experience
Github

1.

Concept

This web page was born from my burning desire to know more about the Moon and to use the Wolfram Alpha API to provide more information about it. I had initially wanted to dive into the largest moons in the solar system (of which the Earth's moon is the 5th largest) because I wanted to create an interactive version of a poster I have on my wall. Another option was to stay within the realm of outer space and create an information page about the Earth's moon, focused on coding a fact page (Moon Phase 101); but alas, dealing with the way answers are given through Wolfram led me to other options. As a result, I created a page that tells me when the moon phases are happening in real time. The web page became less about information on the actual phases themselves and more about answering "when will the moon look like that certain shape?"

My goal for this experiment was simply to learn how to send a query string to an API such as Wolfram's with a single click on an image and receive the feedback as text. I also wanted to pay attention to the overall look of the MVP page and allow for some refinement of the design.

2.

Process

I started with the example Nick showed in class for sending a query string to Wolfram through an input box. Since I was interested in creating a web page themed around the galaxy, I began by asking about the type of information that might be used. I asked questions about the different moons in the solar system and selected a few to send as strings to Wolfram Alpha.


 

I did not expect these answers, as they were specifically about size, when I wanted a random fact about the moons themselves. Once I changed from using "What" to just treating the input box like a Google search bar, it finally gave me a larger variety of answers. Using "tell me about" also generated a different answer, albeit a long one.

I realized that when working with Wolfram, you have to be clear about what you ask for. If you use words that refer to size, Wolfram will give you an answer relating to that. I assume that if I wanted to ask about characteristics of the moon, I would need to already know about that information to ask for it; Wolfram would not give me facts that I did not know how to ask for. This contradicts the purpose of the web page, as I wanted to use it to provide more facts rather than having to search for them on Google beforehand. Perhaps I wanted one single query that could provide different answers, but that was not possible in this case.


Knowing this, I proceeded to change the focus to the Earth's moon. What I found amusing was that when I sent a question with "What", I received an answer that was really a response to "When." It was only when I asked about the shape itself ("crescent") that I got information about the shape, which would serve the concept of "Moon Phase 101." However, this did not work for the other shapes, as Wolfram couldn't relate the input to the names of the other phases. "What is a waning", "What is a waxing", and "What is a full moon" would still give me the time at which the moon would be in that state during the month. I found this intriguing in terms of understanding the capabilities and limits of Wolfram Alpha. For the final page, I decided to build around answers that imply "when" the moon will be a certain shape, which allowed for a visual association between the shape and when that phase occurs during the month.


The p5.js aspect of this was quite simple: I created the illusion of space in the background with randomly scattered circles that change each time the page refreshes.
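Something along these lines (the counts and sizes are arbitrary):

// Randomly scattered circles as a starry background.
function drawStars() {
  background(10, 10, 30);
  noStroke();
  fill(255);
  for (let i = 0; i < 200; i++) {
    ellipse(random(width), random(height), random(1, 4));   // new scatter on every refresh
  }
}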


I then found a free resource online on freepik.com with the different moon phases and edited them accordingly.


The final challenges for this page were preloading these images and using the mouseClicked or touchStarted function to send the question string (sendTheMessage) through PubNub to Wolfram.

Resizing the vector images and dividing them by 2 and arranging them accordingly on the page
The vector images arranged according to different x and y positions on the webpage. Used a reference from a previous p5 coordinate example to pinpoint all the locations.

In approaching the clicking/touch mechanism, I tried to use classes within functions to call upon the range of the images, which, when activated, gives text feedback. I consulted Maria about this process and got a sense of it, but was not able to execute it. I believe that in future projects or applications where it is needed, I would be able to utilize classes and perhaps create a "cleaner" JavaScript file.


Since it did not work, I consulted Omid and was reminded that the easiest way to complete this simple action was just to set the range within an if statement. It was nice to be reminded that sometimes the simplest things truly are achieved through simple means.


Testing the clicking/touching of the moons and printing them on console

Each of the images sends the word for the phase itself, as shown above with "new moon." The other phases likewise send the name of the shape, such as "waning gibbous" or "waxing crescent", and in turn give feedback on when it will next happen within the span of a month.
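The if-range approach looks roughly like this; the pixel ranges are placeholders, and sendTheMessage is the helper from the class PubNub example:

// Each image region sends its phase name as the query.
function mouseClicked() {
  if (mouseX > 100 && mouseX < 180 && mouseY > 200 && mouseY < 280) {
    sendTheMessage("new moon");
  } else if (mouseX > 220 && mouseX < 300 && mouseY > 200 && mouseY < 280) {
    sendTheMessage("waxing crescent");
  } else if (mouseX > 340 && mouseX < 420 && mouseY > 200 && mouseY < 280) {
    sendTheMessage("waning gibbous");
  }
  // ...and so on for the remaining phases
}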

3.

Next Steps

To further enhance the page, I would find a way to prevent the responses from overlapping when the images of the phases are touched. The answer is generated in the middle of the page, but I would need to find a way so that every time an image is clicked, the text from the previous query disappears and is replaced with the new one. This would need experimentation with the styling of the text in the readIncoming function of the basic PubNub code, and perhaps separating the strings or placing a text background if possible. I would also like to explore other types of APIs to enhance the page in terms of information and usage. It would be a great opportunity for me to use this theme as practice for diving deeper into CSS and p5.js for future web browser projects.

References

“9.8: Random Circles with No Overlap – P5.js Tutorial.” YouTube, 15 Mar. 2016, youtu.be/XATr_jdh-44.

“Hand Drawn Polka Dot Phrases.” Freepik, www.freepik.com/free-vector/hand-drawn-polka-dot-phrases_1080663.htm.

“Moon Phase and Libration, 2019 Moon: NASA Science.” Moon: NASA Science, 19 Dec. 2018, moon.nasa.gov/resources/373/moon-phase-and-libration-2019/.

“P5.js | Reference.” P5.js | Home, p5js.org/reference/#/p5/textStyle.

“Top 10 Largest Moons In The Solar System.” THE LATEST SCIENCE SPACE HI-TECH NEWS, 29 Jan. 2018, forcetoknow.com/space/top-10-largest-moons-solar-system.html.

“Top 4 Keys to Understanding Moon Phases | EarthSky.org.” EarthSky – Earth, Space, Human World, Tonight, 25 Jan. 2019, earthsky.org/moon-phases/understandingmoonphases.

Alexa-fying Wolfram Alpha

When it comes to asking questions, the most natural thing for people, I think, is to say them out loud.

Speech recognition is really easy to set up in p5. There's a neat library called p5.speech which has a speech synthesis component and a speech recognition component, letting you infer strings from speech as well as produce speech from strings.

The voice recognition part is trivial. Simply start the recording and wait for it to identify a sentence. Once it has, save the sentence to a variable to preserve it, and then send it off to Wolfram's API.
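With p5.speech, the recognition side can be as small as the sketch below. The callback wiring follows the library's examples as I remember them, so it's worth double-checking against the p5.speech docs, and sendToWolfram() is a placeholder for whatever publishes the query:

// Minimal p5.speech recognition sketch.
let recognizer;
let question = "";

function setup() {
  createCanvas(400, 200);
  recognizer = new p5.SpeechRec('en-US', gotSpeech);
  recognizer.start();                      // begin listening
}

function gotSpeech() {
  if (recognizer.resultValue) {
    question = recognizer.resultString;    // preserve the identified sentence
    sendToWolfram(question);               // hand it off to the API call
  }
}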

The speech synthesis presented more of a challenge. As of December 2018, Chrome has disabled speech synthesis without direct user input.


Of course, this complicates things, since I want the synth to automatically speak the response it receives from the API.

I have yet to solve this, without making the user press a button to confirm playback.

UPDATE: changing the API type from "spoken" to "short answer" actually somehow solves this???? I have no idea why yet, but IRONICALLY, SPOKEN DOESN'T LIKE TO BE SPOKEN.

That being said, the speech synthesizer is… mmm. Bad with time. “seven colon thirty-nine” is not a human friendly way to give the hour!!


As for customizing an engine to be more specific about a field or topic, that one is an interesting case. One of the things I find especially frustrating about verbal communication with computers is that they give the simplest answer possible, but sometimes you want context. What assumptions did the machine take for granted that allowed it to pick such an answer? In some of my test cases, I would ask for the temperature, but the location didn't match my own. Going through the API, I found that location is an input parameter you can be explicit about. For the sake of time, I simply hard-coded the location to Toronto, but in the future it would be more useful for the user's location to be identified by their IP address and then passed into the server-side code in order to locate the user wherever they might be. It would be worth looking into PubNub's geolocation API.

However, this proved to be a bit of a frustrating roadblock. Though the Wolfram Alpha API documentation suggests that the location query parameter should accept a string such as 'Toronto', the location never seemed to change. I know it wasn't me failing to save the function properly, because I managed to change the units from Fahrenheit to Celsius no problem.


IT TURNS OUT. The conversational API varies from the base API and it doesn’t use the “location” parameter, but the “geolocation” parameter.

I hate APIs so much 🙃 (that's a lie, I think APIs are really neat and they do wonderful things, but they rely so much on good documentation, and when there are conflicting sources, it causes so many headaches.)

Ok so, it’s cool, it’s chill, it works. If you ask it questions now, it will assume Toronto’s where you are and try to answer accordingly.

ALSO. This speech-to-text recognizer has been a source of ummm… INTERESTING HUMOUR all night, as it has been picking up on, and censoring, my err… colourful exclamations of frustration.


PS: another small thing I find kind of annoying is that if you're not careful with the language you use, the results might make the wrong assumptions. For example, "what is the weather" returns a definition of the word "weather" instead of the weather, but "what is the temperature" returns the expected results. It doesn't appear that the spoken API accepts the "assumption" query parameter that the base API does. This would require a lot of query interpretation code-side, and that can get really tricky really fast.


video documentation of it working.

P5 CODE HERE.

PubNub code:

const xhr = require('xhr');
const query = require('codec/query_string');

export default (request) => {
  const appId = '4PJJL4-RQK8E84YR7';
  const spoken = 'http://www.wolframalpha.com/api/v1/result-json.jsp';

  const queryParams = {
    appid: appId,
    input: request.message.text,
    geolocation: '43.649404,-79.388785', // the conversational API wants "geolocation", not "location"
    units: 'metric'
  };

  const apiUrl = spoken + '?' + query.stringify(queryParams);

  return xhr.fetch(apiUrl)
    .then((r) => {
      const body = JSON.parse(r.body || r);
      // console.log(body)
      request.message.answer = body.result || body.error; // send the answer back through PubNub
      return request.ok();
    })
    .catch((e) => {
      console.error(e);
      return request.ok();
    });
};

Capturing the Weather

Tyson Moll

Pastebin Copy of C# Script for Unity Weather API

This week we were tasked with utilizing an API in order to ‘do something’. So I built a very simple weather app in Unity.

It took a little scrounging around to find practical examples of what I wanted to accomplish. I came across AtlasVR, a beautiful example created by Sebastian Scholl and Nate Daubert, as well as a script written out of boredom by a Unity forum user known as 'johnnydj'. Both seemed to have been written for slightly older versions of Unity based on their usage of the deprecated 'WWW' class, so with reference to the Unity documentation I modified the scripts to use the modern 'UnityWebRequest' class. I used OpenWeatherMap.org as the provider of the JSON-formatted weather data used in my Unity project, as denoted by the two sources: after registering on their website, anyone can make requests to their system at a rate of 60 calls per hour… not significant, but enough to get my application working. With testing, I was successfully able to parse the retrieved JSON information and use the data in Unity!

Being able to access these numbers and import other elements of OpenWeatherMap's variety of JSON properties with the script opened the opportunity to use them within Unity's game environment. I was able to adjust the color value of the directional light with the temperature values and increase the density of a particle fog using the humidity percentage. The capacity to use this information to model real-world environments seems powerful, and I'm glad that I now have access to this script for future reference when developing JSON-based API accessors / decoders. From my understanding, this method of storing information can also be used for saving games or sharing information across networks in an easily readable format. Perhaps it's not as efficient as binary, but it's incredibly legible.


Resources:

johnnydj. “Current Weather Script” Unity Forums. Forum Post. Retrieved from <https://forum.unity.com/threads/current-weather-script.242009/>

Scholl, Sebastian and Nate Daubert. “AtlasVR: Connecting to Web API’s in Unity for Weather Data” hackster.io. Journal. Retrieved from <https://www.hackster.io/team-dream/atlasvr-connecting-to-web-api-s-in-unity-for-weather-data-38a099>

Unity Documentation. “JSON Serialization” and “UnityWebRequest.Get”. Retrieved from <https://docs.unity3d.com/Manual/JSONSerialization.html> and <https://docs.unity3d.com/ScriptReference/Networking.UnityWebRequest.Get.html>


Art is the Answer


GitHub

Art is the Answer is an exploration of the way we interpret information. When a question is asked, Wolfram Alpha’s response is displayed as a procedurally generated artistic abstraction. Art is the Answer disrupts the succinct question/answer call/response of Wolfram Alpha’s usual use and asks the user to interpret what the answer to their question might be, and how the shapes displayed affect that interpretation.

Concept

I started, as I often do when I encounter new technology, by examining each component and breaking them down into their simplest terms in an attempt to develop a mental model. I took a close look at the demo code and copied it over into my sketch. Having not worked with PubNub in any depth previously (in the Networking assignment of Creation & Computation I spent most of the time working on physical builds), I knew I didn't want to get too complicated with regards to APIs, so I left PubNub and Wolfram Alpha alone after connecting them. Since we were working with p5, a tool designed originally for making art, I decided that I would break down the response from Wolfram Alpha into data that p5 could use for art's sake.

I was sure there must be a way to turn letters into numbers – after all, in code, everything is a number eventually. I did a little research into letter/number encoding. The first hit I got was for A1Z26 code, a simple cipher in which A=1 and Z=26. I doubted this would suit. Eventually I remembered ASCII and hexadecimal code, and while I was researching these I came across the function charCodeAt(). This function returns the UTF-16 code at the referenced location. A few tests showed that this would be perfect. It was time to get into arrays and for loops.

I had never worked with arrays in any depth before, nor with for loops outside of simple, limited mathematical situations. I knew that in order to get this to work I would have to store every letter of Wolfram Alpha's response in an array, and run over that array with a for loop so the values could be used to draw shapes.

Process Journal

docu1

Above is my first successful attempt at translating the letters of Wolfram Alpha's response into shapes on a p5 canvas. Printed in the console is the UTF-16 value of each letter. Those values are entered into an array which in turn is read on every iteration of a for loop, and the values are used as the fill, size, and coordinates of the shapes.

The problem was that since every value was identical within each loop iteration, I always got a printout of a series of greyscale shapes along a diagonal.

After consulting with some classmates, I adjusted the parameters of the shape so that they would not all be identical: they would use the character code of the letter at their step (p[i]) as well as the character codes of the letters one or two places ahead of their step (p[i+1], p[i+2]).


So now we had a variance in shape size and colour, but the numbers being returned from Wolfram Alpha were too small to make a pleasant scatter effect on the canvas: the x and y coordinates were all under 200 on a canvas that was 900 units wide. I decided that a few random numbers in the mix, serving as multipliers, would be useful to get that effect.

I played around with including the random number generator from Wolfram Alpha as well, but that proved to be a little too complicated to work in to the existing for loop. It also proved unnecessary: the random() function included in p5 was sufficient.

docu3

Once random numbers were included as multipliers for the numbers returned from Wolfram Alpha, I had the kind of results you see above. Rather than scattering the shapes around the canvas, the multipliers distributed them fairly evenly.

I set the random number to be between 0 and 10, and set translate() to move by this random amount each loop. Thus, the relative point of origin of every shape changed in a random direction by a random amount each loop. Finally I had the kind of imagery I had imagined.
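Put together, the final loop is roughly this (the multiplier range and shape sizes are arbitrary):

// UTF-16 codes drive fill, size and position; translate() drifts randomly each loop.
function drawAnswer(answerText) {
  background(255);
  const p = [];
  for (let i = 0; i < answerText.length; i++) {
    p.push(answerText.charCodeAt(i));            // UTF-16 code of each letter
  }
  for (let i = 0; i < p.length - 2; i++) {
    const r = random(0, 10);
    translate(r, random(0, 10));                 // origin drifts a random amount each loop
    fill(p[i], p[i + 1], p[i + 2]);              // neighbouring codes give the colour
    rect(p[i] * r, p[i + 1] * r, p[i + 2] / 2, p[i + 2] / 2);
  }
}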

docufinal

Next Steps

This project is mechanically similar to my mindPainter project, and it would be interesting to combine them in some way.

I would like to polish the art that this produces so the shapes it paints are not consistently primitives like rectangles and squares.

I would like to add a mechanism wherein a user can click or press a key to see the answer to their question in text, although this might undermine what is in my opinion the strongest aspect of the piece: that the user has no way of getting to their answer without interpreting the artwork.

References
Demo code by Nick Puckett & Kate Hartman

p5 Reference by Processing Foundation

MDN Web Tools by Mozilla

ASCII Table by asciitable.com

UTF-16 by fileformat.info