Intro to ML5: Exploring the StyleTransfer Example

Process Journal #6


Learning: ML5 & P5

Computer vision seems very interesting to me. I watched the four videos posted on Canvas and started this assignment by going through all the examples available on GitHub and the ML5 website. I just want to use a TensorFlow.js model to implement some functions on the web, so I don't need to learn TensorFlow.js systematically; I found I just need a ready-made model packaged as an NPM package, such as MobileNet (image classification), coco-ssd (object detection), PoseNet (human pose estimation), or Speech Commands (voice recognition). The NPM pages for these models have detailed code examples that you can copy. There are also a number of third-party packages of off-the-shelf models, such as ML5, which includes pix2pix, SketchRNN, and other fun models. We were asked to build upon one of the existing ML5 examples by changing the graphic or hardware input or output. I found the StyleTransfer example quite interesting, so I decided to work on it. I already had the chance to explore PoseNet in the Body-Centric course, so for this project I decided to explore something different. Still using the webcam and pictures, I decided to experiment with the style transfer example in ML5.


It is an expansion of the style transfer example in ml5: users select paintings by their favorite artists from a limited range of choices, and the selected painting changes the style of the real-time camera image, yielding a unique abstract painting video.


First, the position detector detects the movement of the object. When the object moves to the visual center of the camera system, the detector immediately sends a signal to the image acquisition part to trigger a pulse.

Then, according to a predetermined program and delay, the image acquisition section sends pulses to the camera and lighting system, and both the camera and the light source are turned on.

The camera then starts a new scan. It opens the exposure mechanism before starting a new frame scan, and the exposure time can be pre-set. The lighting source is turned on at the same time, and the lighting time should match the camera's exposure time.

At this point, the screen scanning and output officially begin. The image acquisition part obtains a digital image or video through A/D conversion and stores it in the memory of the processor or computer, and the processor then processes, analyzes, and recognizes the image.

StyleTransfer example:

Step 1: Allow StyleTransfer to use the camera


Step 2: Select the artwork


(Start and stop the transfer process)

Step 3: Check the newly synthesized video


In the example there is only one painting, and the composition of the video is too abstract. I wanted to give users more choices, so I tried several other paintings to see whether they produce different effects.



Now I have a big problem: there is no big difference in the color and composition of the video between the chrysanthemum painting and the abstract painting. I don't know what the cause is. Next, I tried an abstract painting in blue.



The difference is so small that I don't know what to do with the picture; the naked eye can only see very slight differences. But I'm still going to build the framework so users can choose different images.
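As a starting point for that framework, the painting choice can be a simple lookup from a name to the folder holding the matching pre-trained model. This is a minimal sketch; the painting names and model paths are assumptions, not the actual files shipped with the ml5 example:

```javascript
// Hypothetical mapping from the user's painting choice to the folder
// containing the corresponding pre-trained style-transfer model.
const styleModels = {
  wave: "models/wave",                   // assumed path
  chrysanthemum: "models/chrysanthemum", // assumed path
  blueAbstract: "models/blue-abstract",  // assumed path
};

// Return the model path for a given choice, falling back to a default style.
function styleModelPath(choice) {
  return styleModels[choice] || styleModels.wave;
}
```

In the sketch, the selected path would then be handed to `ml5.styleTransfer(styleModelPath(choice), video, modelLoaded)` in place of the hard-coded model folder.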


Users select the paintings of the artists they like as the material, and the selected painting changes the style of the real-time image, producing a unique abstract painting video.




Ubiquitous Computing Process Journal 4

By Jingpo Li


(CountrySize web application screenshot)


CountrySize is a web application that uses JavaScript and the Wolfram Alpha API to visually compare the size of any countries you ask about. The Wolfram Alpha API is open to all kinds of questions, but we were asked to narrow down the questions in this assignment. The user is then able to view the responses or answers in an interesting way. The image above is a small demonstration of how this web application works.


I was always curious about how Google Home, Siri, and Alexa work through readily available online APIs. In another class, I had explored APIs when building a website: I used an online weather API to find real-time temperature data. After doing research, I found there are two kinds of APIs, data APIs and question APIs. This time I had an opportunity to work with a question API, and I had a hard time finding the concept and deciding what to do, because we needed to narrow down the questions we ask and present the answers visually.


I am sure you know the biggest country in the world. I think you also know the top 5 biggest countries, but I doubt you know the top 15. Which country is bigger, and how much bigger? When a question is asked, Wolfram Alpha's response is displayed. People are bored by numbers and sentences, so instead of representing numbers, I represent the answer through aesthetic visuals. Instead of designing for affect-as-information, design for affect-as-interaction.

I got inspiration from an existing website called MapFight, which compares the size of any two geographical areas.


Our assignment is to explore a new way to ask questions and get the results using a specific type of data, so I changed the way the question is asked. Instead of asking or typing the whole question (What is the size of Canada?), the user just types the country name. MapFight can only compare the sizes of two countries; what I attempt to do is visually compare the sizes of as many countries as you want.
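Under that design, the sketch has to rebuild the full question itself before sending it to the API. A minimal sketch of that step, assuming the question template quoted above:

```javascript
// Build the full Wolfram Alpha question from just a country name.
function buildQuestion(country) {
  return "What is the size of " + country.trim() + "?";
}

buildQuestion("Canada"); // → "What is the size of Canada?"
```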


Instead of giving you the answer in text (The total area of Canada is about 3.86 million square miles), I wanted to use circles of different sizes to represent the sizes of the countries.
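For the circles to compare areas honestly, the circle's area rather than its diameter should be proportional to the country's area, which means the diameter scales with the square root. A minimal sketch; the scale factor is an arbitrary assumption chosen to fit a canvas:

```javascript
// Map a country's area to a circle diameter so that circle area is
// proportional to country area: diameter ∝ sqrt(area).
function circleDiameter(areaSquareMiles, scale) {
  return Math.sqrt(areaSquareMiles) * scale;
}

circleDiameter(4, 1);   // → 2
circleDiameter(100, 1); // → 10
```

In p5, the result would be passed straight to `ellipse(x, y, d, d)`.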


Tools & Software

– Brackets (text editor)
– Wolfram Alpha API
– PubNub (connecting to the Wolfram Alpha API)

Process book:

Data has gradually become a very important tool in the world today. Data is driving the world, and with so much of it being generated out there, it only makes sense to turn that data into information that can be read. A language like JavaScript is central to how a web page gets displayed.

The very first step is to have the question bar.



Since I need to get the number after “about” in the answer, I split the sentence.



After I got my Wolfram Alpha API developer key in class and set up a PubNub account to connect the web page to the Wolfram Alpha API, I started working on the p5 example code. Above is my first successful attempt at displaying Wolfram Alpha's split response on a p5 canvas: I split the response into an array so that in every loop the values are drawn separately, side by side.

In order to get some visualization, I need to extract the number out of the answer (text). I found that these answers all have the same format: “The total area of (country name) is about (certain number) square miles.” That is to say, I need the number right after “about”.

I used a forEach statement to find “about” in the array. The forEach() method executes a provided function once for each array element.
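Putting the split and the forEach together, the number can be read out as the word immediately after “about”. A minimal sketch of that parsing, assuming the answer format quoted above:

```javascript
// Extract the token that follows "about" in a Wolfram Alpha answer,
// e.g. "The total area of Thailand is about 513,120 square miles."
function numberAfterAbout(answer) {
  const words = answer.split(" ");
  let result = null;
  words.forEach(function (word, i) {
    if (word === "about" && i + 1 < words.length) {
      result = words[i + 1];
    }
  });
  return result; // still a string, e.g. "513,120" or "3.86"
}

numberAfterAbout("The total area of Thailand is about 513,120 square miles.");
// → "513,120"
```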



When I got my circles working, I decided to design the web page a little bit. I placed a world map on canvas and used random colour for circles.



I still cannot find a way to solve one problem. What my code does is extract the number after “about” and draw a circle based on that number. The thing is, the answers have different units, and the range of the numbers is too big.

For example:

“The total area of Canada is about 3.86 million square miles.”

“The total area of Thailand is about 513,120 square miles.”

So if I just used the number after “about”, Thailand would appear larger than Canada, and Canada's circle could not fit in the canvas.

I also tried to ask the question in a different way, such as “The size of Canada in million square miles”, but there is no answer for that question.
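Since the API will not answer in a fixed unit, another option is to normalize the answer on my side: parse the number after “about” and multiply it by one million when the next word is “million”. A minimal sketch, assuming only the two answer formats quoted above:

```javascript
// Convert a Wolfram Alpha area answer to a plain number of square miles,
// handling both "3.86 million square miles" and "513,120 square miles".
function areaInSquareMiles(answer) {
  const words = answer.split(" ");
  const i = words.indexOf("about");
  if (i === -1 || i + 1 >= words.length) return null;
  let value = parseFloat(words[i + 1].replace(/,/g, "")); // strip commas
  if (words[i + 2] === "million") value *= 1e6;           // scale unit word
  return value;
}
```

With both answers normalized to square miles, Canada's circle comes out larger than Thailand's, as expected.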


Process Journal 1- Jingpo

In this class we experimented with XBee devices and explored the possibility of complementing them with Arduino boards (Micro and Feather). First, we set up a chat between two computers using the XBees without Arduino; second, as an extra activity, we established a chat in which messages sent from one computer through an XBee were received by a second XBee connected to an Arduino. Then we tried to set up communication between two Arduinos.

XBee Chat Exercise:

We used the following components:
  • 2 Digi XBees.
  • 2 XBee Explorers.
  • 2 USB cables.
  • 2 computers.
  • 2 Arduino Micros.
  • 2 breadboards.
  • Several wires to interconnect the breadboards and Arduinos.

Step 1: Download the software (CoolTerm) and the driver (FTDI USB). The first step consists of achieving communication between two machines connected to the following setup:



Step 2: Change the configuration of the radios in command mode. In this second part we show how to display the information transmitted between two XBees on a screen connected to an Arduino. To do that, we need to understand how the XBee Explorer works:

These are the pins we must connect to the Arduino in order to display the information sent by the other XBee module. The input voltage must be 3.3 V instead of 5 V to avoid potential problems. The XBee Explorer is important not only for connecting the XBee to the computer, but also for seating it in the breadboard so connections can be made.

1) Attach the XBees to the SainSmart boards and plug them into the computers.

2) Open two CoolTerm sessions and click one of the Options buttons.

3) In the Options pop-up, select the COM port of either XBee, then go into the Terminal tab and select Local Echo (in order to see what is being transmitted). Then press OK. No additional changes are necessary for standard comms.

4) Repeat the above step for the other XBee in the other port. If at any point you do not see the XBee's port as an option, or if you plugged it in with the Options box open, press the Re-Scan Serial Ports button shown in the red box in the image above.

5) Now press Connect in each terminal; you can type in either one and it will transmit to the other. Press Disconnect when finished with the session to ensure the port is available for other programs to use.

6) We need to configure the XBees to enable communication. These radios are configured with AT commands, so we learnt some basic commands from the AT command set:

ATRE = factory reset – this will wipe any previous settings
ATMY = my address – the label on your radio
ATDL = destination low – the label on your partner's radio
ATID = PAN ID – the agreed-upon “channel” both radios use to communicate
ATWR = write to firmware – this makes sure the configuration is saved even when the radio loses power
ATCN = end command mode

Open CoolTerm:

Go to Options → Serial Port

Go to Terminal → Select Local Echo

Step 3: Let’s chat!




Once connectivity between the two XBee modules has been established, we can implement the part where we display the information on the computer screen.


Metronome Receiver Device:

Arduino Code:

Use “Serial” to test the code. Change “Serial” to “Serial1” when the Arduino is connected to the XBee, and send the “H” and “L” commands via the serial monitor.

The sample code is from the Arduino example Physical Pixel, an example of using the Arduino board to receive data from the computer.


Add a servo and change the Arduino code:






Connect the circuit as shown in the figure below:

  1. The servo motor has a female connector with three pins. The darkest (often black) one is usually the ground; connect this to the Arduino GND.
  2. Connect the power cable, which in almost all standards is red, to 5V on the Arduino.
  3. Connect the remaining line on the servo connector to a digital pin on the Arduino.


  • Servo red wire – 5V pin Arduino
  • Servo brown wire – Ground pin Arduino
  • Servo yellow wire – PWM pin Arduino


I want to have a fan connected to the servo, so I went to the dollar store and got this candy toy for 2 dollars, along with an unexpected free motor, a button, and a battery.



It was my first time using a servo, though we had C&C last semester. Servos usually have an arm that can turn 180 degrees: using the Arduino, we can tell the servo to go to a specified position and it will go there. I don't know how to make it work like a fan; I think I need to choose another type of servo (a continuous-rotation servo). The code I had turns the servo to 0 degrees, waits 1 second, turns it to 90, waits one more second, turns it to 180, and then goes back. I need to check whether I can make it rotate in one direction (not back and forth).