The Real-Time Virtual Sunglasses

My interest in ML5 is focused on real-time animated effects. Compared with professional software such as Adobe Character Animator, making real-time face animation with ML5 is simpler and more customizable. Though the result may not be as polished, it is a great choice for designers and coders producing visual work.

I found it easiest to just use the p5 web editor; the one extra step is that the ML5 script needs to be added to the HTML file in the p5 editor (it becomes the fourth “script” tag).

[Screenshot: index.html in the p5 editor]
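For reference, the head of index.html ends up with four script tags, something like this (the exact library versions will differ with your editor defaults):

```html
<!-- the p5 editor's default libraries -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.7.3/p5.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.7.3/addons/p5.dom.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.7.3/addons/p5.sound.min.js"></script>
<!-- the fourth script: ml5, added by hand -->
<script src="https://unpkg.com/ml5@latest/dist/ml5.min.js"></script>
```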

The model used is PoseNet. It performs real-time human pose estimation: it can track where, for example, my eyes, nose, and hands are, and visual work can then be built on those positions.

[Screenshot]

Then I set up the canvas and the draw function in the p5 editor; I used a gray filter to add more fun.

[Screenshot]

Next, I programmed PoseNet into my code. When everything is set up, we can see that ml5 recognizes “Object, Object, Object… (which should be my face)” from the webcam.

[Screenshot]
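This stage of the sketch might look something like the following; this is a minimal reconstruction, not my exact code, and assumes the ml5 PoseNet API as of early 2019:

```javascript
let video;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // load PoseNet on the webcam feed and listen for results
  const poseNet = ml5.poseNet(video, () => console.log('model ready'));
  poseNet.on('pose', results => {
    poses = results;
    console.log(poses); // prints as "Object, Object, ..." in the console
  });
}

function draw() {
  image(video, 0, 0, width, height);
  filter(GRAY); // the gray filter mentioned above
}
```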

After some research, I learned that PoseNet numbers its keypoints 0 to 16, from the nose down to the feet. The left eye and the right eye are 1 and 2.

[Screenshot]
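Reading the two eyes out of the first detected pose is then a matter of indexing keypoints 1 and 2 (a hedged sketch; the 50px size is illustrative):

```javascript
// keypoints run from 0 (nose) to 16 (right ankle);
// 1 is the left eye, 2 is the right eye
const pose = poses[0].pose;
const leftEye = pose.keypoints[1].position;  // {x, y}
const rightEye = pose.keypoints[2].position;
ellipse(leftEye.x, leftEye.y, 50, 50);
ellipse(rightEye.x, rightEye.y, 50, 50);
```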

The first try:

[Screenshot]

[GIF]

As the GIF shows, if I move out of the frame, the circles are not able to track back once I return.

The second try solved it by guarding the drawing code with a check that a pose exists: if (poses.length > 0).

[Screenshot]

[GIF]
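In sketch form, the guard looks something like this (again a reconstruction, not the exact code):

```javascript
function draw() {
  image(video, 0, 0, width, height);
  filter(GRAY);
  if (poses.length > 0) { // skip drawing until a pose is detected again
    const pose = poses[0].pose;
    ellipse(pose.keypoints[1].position.x, pose.keypoints[1].position.y, 50, 50);
    ellipse(pose.keypoints[2].position.x, pose.keypoints[2].position.y, 50, 50);
  }
}
```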

In fact, I could have called my project successful at this point; however, I wanted to make it more polished.

In the third try, I tested the lerp() function, and instead of a fixed size, the size of the ellipses is defined by the “distance” between the eyes, which allows the ellipses to become larger or smaller as I move forward and backward:

[Screenshot]

[GIF]

[GIF]
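The exact scaling is in the screenshot above; the idea can be sketched like this, with a hypothetical drawGlasses() helper (the 0.1 easing factor is illustrative):

```javascript
let eyeSize = 50; // smoothed ellipse diameter

function drawGlasses(pose) {
  const l = pose.keypoints[1].position;
  const r = pose.keypoints[2].position;
  const d = dist(l.x, l.y, r.x, r.y); // eye-to-eye distance in pixels
  eyeSize = lerp(eyeSize, d, 0.1);    // ease toward the target size
  ellipse(l.x, l.y, eyeSize, eyeSize);
  ellipse(r.x, r.y, eyeSize, eyeSize);
}
```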

 

References:

ml5.js PoseNet documentation: https://ml5js.org/docs/PoseNet

The Coding Train

 

ValenClick – Jingpo and April

1st Idea:

 

Code: https://github.com/jli115/GoogleHome

 

Design Concept:

 

This time we considered doing something fun with IFTTT. After researching existing projects online, we found it is possible to link Google Home to the “this” trigger through Google Assistant, and then, by linking the “that” action to IFTTT's Webhooks service, control the output by speaking directly to the Google Home.

 

The set-up of Google Home:

In “this”:

[Screenshot]

[Screenshot]

 

Webhook Configuration:

 

[Screenshot]

 

The URL should be the IP address plus index.php?state={{TextField}}.
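On the server side, the index.php named in that URL would read the state parameter and act on it. A minimal sketch of the PHP approach from the referenced project (the accepted values "on"/"off" are assumptions):

```php
<?php
// index.php: the IFTTT webhook calls this URL with ?state=...
$state = isset($_GET['state']) ? strtolower($_GET['state']) : '';
if ($state === 'on') {
    // forward the command to the device here
    echo 'light on';
} elseif ($state === 'off') {
    echo 'light off';
} else {
    echo 'unknown state';
}
?>
```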

We were inspired by the project “Google Home – Control DIY Devices”, which shows how to control multiple IoT devices from Google Home using PHP. However, since we were planning to use the Adafruit Feather as the output, the configuration would be a little different: we changed the “that” service to Adafruit and expected to control the result using the toggle block in Adafruit IO.

[Screenshot]

So far, the logic of the data flow we are planning is:

Google Home > Google Assistant > IFTTT > Webhook > PHP > Turn on/off the light

or

Google Home > Google Assistant > IFTTT > Adafruit Send Data > Adafruit > Arduino Feather Board > Turn on/off the light

 

Challenges:

  1. Arduino code

After going over the Adafruit IO feed and dashboard basics guides we learned in class, we all agreed that the most challenging part of this project would be getting the Arduino code working with the Adafruit platform. We found a resource in the Adafruit Learn section that covers the basics of Adafruit IO, showing how to turn an LED on and off from Adafruit IO using any modern web browser.

  2. Google Home

We failed to connect to the Google Home SSID through the school's Wi-Fi. Our guess is that Google Home cannot join a public Wi-Fi network.

Step 1: Adafruit IO Setup:

[Screenshot]

[Screenshot]

 

Step 2: Arduino Wiring

[Screenshot]

Step 3: Arduino Setup

Have the Adafruit IO Arduino library installed and open the Arduino example code.
[Screenshot]
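Condensed from the tutorial as we remember it, the Adafruit IO digital-output example is roughly the following (the pin number is an assumption; config.h holds the Wi-Fi and Adafruit IO credentials):

```cpp
#include "config.h"              // Adafruit IO + Wi-Fi credentials

#define LED_PIN 13               // pin driving the LED (assumption)

// the feed the dashboard toggle writes to
AdafruitIO_Feed *digital = io.feed("digital");

void setup() {
  pinMode(LED_PIN, OUTPUT);
  Serial.begin(115200);
  io.connect();                        // connect to Adafruit IO
  digital->onMessage(handleMessage);   // run handler on new feed values
  while (io.status() < AIO_CONNECTED) delay(500);
}

void loop() {
  io.run();                            // keep the connection alive and poll
}

// set the LED from the received feed value
void handleMessage(AdafruitIO_Data *data) {
  digitalWrite(LED_PIN, data->toPinLevel());
}
```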

We followed the tutorial step by step. When we compiled the code, it didn't work, so we kept adding the libraries indicated by the Arduino IDE.

[Screenshot]

 

 

2nd Idea:

Code: https://github.com/jli115/ValenClick

Design Concept:

Considering the timeframe of this homework, we had to change our minds and pick something easier to approach and more manageable. As Valentine's Day was around the corner, we thought of relating the project to it. We believe that for some people, telling someone they love them can be very hard, but breaking up with someone they no longer have feelings for can be even harder. Hence our project “ValenClick”: users can send their love, or not, to anyone with just one click… in a funny way.

[Screenshot]

The interface is super clear and simple: users just click the right or the left side of the screen to send one of two different emails to their receiver.

IFTTT configuration:

The centre of the image is at about x = 680px, which is the threshold for telling left clicks from right clicks.

[Screenshot]
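On the sketch side, the click handling might look like this (a hedged sketch: the event names and key are placeholders, not our actual applet names):

```javascript
// send the matching IFTTT webhook based on which half was clicked
function mousePressed() {
  const event = mouseX < 680 ? 'send_love' : 'send_breakup'; // names assumed
  httpGet(`https://maker.ifttt.com/trigger/${event}/with/key/YOUR_KEY`);
}
```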

Configure the left IFTTT applets:

[Screenshots]

Configure the right IFTTT applets:

[Screenshots]

Test:

[Screenshot]

References:

https://www.hackster.io/phpoc_man/google-home-control-diy-devices-3be448

https://learn.adafruit.com/adafruit-io-basics-digital-output/network-config

Weather-any-city

Design Features

Allow users to simply type or say a city name to get the current weather update for that city. A complete sentence also works.

Design Concept

End User -> Dialogflow -> OpenWeatherMap API (Heroku) -> Dialogflow -> End User

Tools & Materials

Dialogflow / Heroku / GitHub

Process

My starting point on this project was researching the differences between RESTful APIs and conversational APIs. WolframAlpha is certainly a powerful conversational API; however, with little knowledge of p5, I did not feel confident writing code that could send the question string automatically or display the answer in another way. My focus therefore moved to enabling the “conversation features” while narrowing the scope and making the user experience smoother.

Then I found Dialogflow. Many may still think of it as a chatbot tool where users write their own questions and answers, and its AI feature learns from those inputs. That feature still exists; however, none of the answers in my app were entered by me. They are generated by the attached API, which is enabled by the “webhook” feature.

The first step is to create an agent in Dialogflow; every agent comes with two default intents that can answer basic questions such as “Hello?”. After creating the new intent “Weather”, I typed in several common sentences used to ask about the weather, such as “How's the weather in Toronto?”, “Tell me about the current weather in Toronto.”, “Is it cold in Toronto?”, as well as just the city name, “Toronto”. The first three natural-language sentences satisfy users who prefer to speak a full sentence, so it feels more like a dialogue to them; the bare city name satisfies users who type or want their answer right away.

With a connection to an API, I did not have to put anything in the response section to teach Dialogflow how to answer a question; instead, I needed to find an API adequate to my design features. I wasted a lot of time on the Yahoo Weather API, which stopped working this January. However, once I found the OpenWeatherMap API, everything was solved.

I did not have to write my own code to deploy an application; there are existing resources on GitHub. I forked the useful parts of some of them and linked the result to my app on Heroku, which generates a webhook URL to use in Dialogflow.

[Screenshot: GitHub]

[Screenshot: Heroku]

[Screenshot: Dialogflow]
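The heart of such a fulfillment webhook is small. A hedged Node.js sketch of the idea, not the exact forked code (the route name, the geo-city parameter name, and metric units are assumptions):

```javascript
// Dialogflow fulfillment webhook: city in, weather sentence out
const express = require('express');
const https = require('https');
const app = express();
app.use(express.json());

const KEY = process.env.OWM_API_KEY; // OpenWeatherMap key (assumption)

app.post('/webhook', (req, res) => {
  const city = req.body.queryResult.parameters['geo-city'];
  const url = `https://api.openweathermap.org/data/2.5/weather?q=${city}&units=metric&appid=${KEY}`;
  https.get(url, r => {
    let body = '';
    r.on('data', chunk => (body += chunk));
    r.on('end', () => {
      const w = JSON.parse(body);
      // Dialogflow v2 reads the reply from fulfillmentText
      res.json({ fulfillmentText:
        `It is ${w.main.temp}°C with ${w.weather[0].description} in ${city}.` });
    });
  });
});

app.listen(process.env.PORT || 3000);
```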

After a few other checks, such as enabling the webhook in the intent, this app was able to answer questions about the weather in any city.

Check out the video here:

https://vimeo.com/user53586243/review/315144500/e73a0e5dc9

 

Week 3 Process Journal – Jingpo and April

Intelligent remote control system for the baby/child room.

 

Concept:
Based on our existing hardware, we designed a two-way transceiver device whose two units can be placed in the master bedroom and the baby room respectively. The idea is to help parents and infants/children communicate remotely from separate rooms. We made use of the characteristics of XBee and our existing electronic components to simulate the interaction between the parents' bedroom and the baby's room. To make this simple interactive device, we edited XBee commands and used Arduino boards as the power supply.

 

Ideation:

It is challenging for young children to turn off the light and then go to bed in a dark room by themselves; likewise, if they wake up at night needing their parents, they have no choice but to cry to wake them. Keeping this scenario in mind, we aimed to create a two-way transceiver system that is 1) easy for the baby/child to use, 2) able to comfort the baby/child, and 3) not too disturbing for the grown-ups. During the design process, we tried out different buttons/switches, buzzers/speakers, and lights. We finally decided that in the baby/child's room the input is a simple button: all the baby/child has to do is push it to sound the buzzer in the parents' room. The output is a light that can be adjusted to different brightnesses; controlled by the input in the parents' room (a potentiometer), it can light itself softly from dim to bright.

Design:

  • Parent bedroom:

Input: light controller (potentiometer)

Output: sound prompt (buzzer)

[Photo: parent-room circuit]

  • Baby room:

Input: call button (button)

Output: light (led)

[Photo: baby-room circuit]

Code:

Parent Room:

[Screenshot: parent-room code]

Baby Room:

[Screenshot: baby-room code]
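Since the Digi tutorials we used (see References) implement this with XBee I/O line passing rather than microcontroller code, the configuration likely comes down to a few 802.15.4 AT parameters. The outline below is illustrative, not our exact settings:

```
Potentiometer side (parent-room radio):
  ATD0 2      AD0 configured as an analog input (the potentiometer)
  ATIR 14     sample the inputs every 0x14 ms
  ATDL ...    destination address of the baby-room radio

LED side (baby-room radio):
  ATP0 2      PWM0 output enabled; mirrors the remote AD0 reading
  ATIU 1      allow received I/O samples to drive outputs
  ATIA FFFF   accept I/O samples from any sender

The button-to-buzzer direction works the same way over a digital line
(input pin configured as DI on one radio, output pin as DO on the other).
```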

Testing:

Day one:

  1. Connect two XBees.
  2. Test the code with one led and one potentiometer.

The first thing to do was to make sure the two XBees were connected, so we planned to start with the simplest parts, an LED and a potentiometer. We hoped the LED on one side would respond when the potentiometer on the other side was turned. It didn't work.

We checked the code and the circuits for both XBees and finally got the LED and potentiometer working. We saw the LED flicker as the potentiometer changed, which is to say the two radios could communicate. However, it did not achieve the expected result: the light's brightness changing along with the potentiometer.

Day two:

  1. Write the code for two distant controllers.
  2. Test the code with two LEDs, one button and one potentiometer.

We wrote code for the two transceivers: one for the parent room with one button and one LED, and one for the baby room with one potentiometer and one LED. The experiment was very successful.

[Photo]

  3. Test the code with one LED, one button, one potentiometer, and one speaker.

Since the last experiment was very successful, we exchanged an LED for the speaker. It didn't work very well; it only made some faint noises.

[Photo]

  4. Instead of using the speaker, we replaced it with a buzzer. We did it!!!

 

Video:

https://vimeo.com/user83802499/review/313924989/85921ef3e2

References:  

https://www.digi.com/blog/802-15-4-pwm-output-with-an-led/

https://www.digi.com/blog/802-15-4-analog-input-with-a-potentiometer/

 

 

Process Journal 1-April

  1. Design pattern 

[Diagram: design pattern]

Since Serial1 does not work with the Serial Monitor on the Arduino, “Serial” was used while programming to make sure the pattern works as expected. The expected result is that when “H” is entered, the blue LED (ledPin), green LED (green), and white LED (white) quickly blink in order and then all light up; when “L” is entered, all three LEDs turn off. (A sketch of this logic follows the link below.)

Please click here to view the pattern.

password: pattern
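A hedged reconstruction of that pattern (pin numbers are assumptions; during testing it reads from Serial, and step 3 below swaps in Serial1):

```cpp
const int ledPin = 9;   // blue LED
const int green  = 10;  // green LED
const int white  = 11;  // white LED

void blinkInOrder() {
  int pins[] = {ledPin, green, white};
  for (int i = 0; i < 3; i++) {        // quick blink, one LED at a time
    digitalWrite(pins[i], HIGH); delay(150);
    digitalWrite(pins[i], LOW);  delay(150);
  }
}

void setAll(int level) {
  digitalWrite(ledPin, level);
  digitalWrite(green, level);
  digitalWrite(white, level);
}

void setup() {
  pinMode(ledPin, OUTPUT); pinMode(green, OUTPUT); pinMode(white, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    char c = Serial.read();
    if (c == 'H') { blinkInOrder(); setAll(HIGH); } // blink, then all on
    if (c == 'L') setAll(LOW);                      // all off
  }
}
```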

 

  2. Configure the radio

[Screenshot: radio configuration]

To make sure the two radios can chat.

  3. Program Arduino

[Screenshot]

[Screenshot]

This time “Serial1” is used so that the Arduino receives commands from the radio. It is okay not to test it after uploading, since we already did so in step 1; if there is any problem, it should not be the code.

  4. Light!

After this step, we successfully lit my Arduino's LED using CoolTerm on Jing's laptop. We did not succeed on the first try; please see the analysis in the next section, “Problems & Findings”.

Problems & Findings

  1. The RX and GND should be plugged into the leftmost two holes of the breakout board, as the photo shows.

[Photo]

  2. Once a chat is successfully made, unplug the XBee directly from the radio pad and plug it onto the Arduino. There is no need to enter +++ again in CoolTerm.
  3. To make sure the command is delivered, use Send String instead of typing it directly.
  4. Just send “H” or “L”; don't send runs like “HHHHH” or “LLLL”.