
Cap’n Kirk’s Replicator

[Image: replicator-1]

I have been interested in exploring the intersection of two worlds, or rather, two worlds talking to each other. In this experiment I have tried to bring the world of Star Trek into mine. Being a Trekkie and an ardent follower of Star Trek: The Original Series, I wanted to transform my workspace so that I feel I am aboard the starship USS Enterprise whenever I come to my desk. A computer in Captain Kirk's quarters, called the Replicator, is capable of materialising anything one wants on the starship, and I wanted to bring that affordance into my object. I imagined a scenario where Captain Kirk asks, "Computer, I would like a cigarette please," and the replicator 3D-prints one for him. Similarly, the object I was trying to make would offer me a cigarette when I ask for one. Being intelligent, it would also add a layer of nudging me towards quitting smoking if it could: the object asks me, "Are you sure?" This may not seem like much, but if you really take a moment on that question, you may actually consider the suggestion and drop the urge to light one up. I smoke occasionally, but when code doesn't work I light one after the other, as though nicotine helps with the debugging. To help me quit smoking, I thought this was the perfect sensitive object to augment my environment in a playful way.

Calm Technology Principles Used

It should amplify the best of technology and the best of humanity. Like a kettle that you can switch on and forget about, and that leaves you free to pick it up and use it (or not) once it has finished heating, this dispenser aims to live in your environment in a similar mode.

Technology can communicate, but doesn't need to speak. The mode in which it shares information or its status with you ought to be secondary to the actual information. I was looking at using LEDs to light up a simple cutout of PU foam from behind so that the letters "Sure?" are visible and the message comes across visually. The user replies "Yes!" as confirmation, and the device hands them a cigarette (more precisely, it pops one out of the stack for you to pick up).

 

The Process

[Images: array concept sketch, computer form-factor reference]

The design started with a concept sketch of LEDs emulating a sonar array scanning the starfield (shown in the image below), housed inside something that looks like a computer from the starship, as shown in the image on the right. I wanted it to have the old-style duotronic sensor array sound heard on the bridge of the Enterprise, which has a three-second scanning cycle with a pinging sound. As that would have required a lot more components, including an SD card reader, the approach was dropped, but the LEDs were retained to show the prompt "Sure?" from the system to the user. I ran into a lot of USB cable issues, which I suspect were due to faulty grounding in my house: each time I had something working, the next moment it was not, so I ended up using two separate Arduinos. The first, an Arduino Uno, handles distance detection and lights the LEDs.

 

[Images: cigarette-dispense, replicator-2]

[Image: img_6728]

The second, a Nano 33 BLE Sense, uses a TinyML library to read "yes" or "no" from the user. I modified the micro_speech example to include the servo motor actuation inside the 'y' (yes) detection routine in the arduino_command_responder tab. The challenge was that even though global variables were defined in the main tab, they were not being picked up inside the arduino_command_responder tab; defining them again there and modifying the code to use the values directly in that tab was a learning step.
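Roughly, the change in the arduino_command_responder tab looked like the sketch below. The RespondToCommand signature is the one from the stock micro_speech example, and the servo pin, angles, and variable names are illustrative assumptions rather than my exact code.

```cpp
// arduino_command_responder tab: servo declared here because the global
// defined in the main tab was not visible in this tab.
// (This replaces the body of RespondToCommand in the micro_speech example,
// where the TensorFlow Lite Micro headers are already included.)
#include <Servo.h>

Servo dispenserServo;            // redeclared locally in this tab
const int kServoPin = 9;         // illustrative pin
bool servoAttached = false;

// Called by the micro_speech example each time a command is recognised.
void RespondToCommand(tflite::ErrorReporter* error_reporter,
                      int32_t current_time, const char* found_command,
                      uint8_t score, bool is_new_command) {
  if (is_new_command && found_command[0] == 'y') {   // "yes" detected
    if (!servoAttached) {
      dispenserServo.attach(kServoPin);
      servoAttached = true;
    }
    dispenserServo.write(90);    // push a cigarette out of the stack
    delay(500);
    dispenserServo.write(0);     // return to the rest position
  }
}
```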

I also found that the HC-SR04 ultrasonic range finder delivered better results than the proximity sensor built into the Nano 33 BLE Sense. Its 0–400 cm range is enough to detect when someone sits down in front of the work desk, at which point the system activates the LEDs.
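A minimal sketch of that distance-detection side on the Uno; the pin numbers and the 100 cm "seated at the desk" threshold are illustrative choices, not the exact values in the circuit shown below.

```cpp
// Arduino Uno: HC-SR04 distance detection that lights the "Sure?" LEDs
// when someone sits down at the desk. Pins and threshold are illustrative.
const int trigPin = 7;
const int echoPin = 6;
const int ledPin  = 13;            // LEDs behind the PU-foam cutout
const long deskThresholdCm = 100;  // assumed "seated at desk" distance

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(ledPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  // Trigger a 10 µs pulse and time the echo to estimate distance.
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  long durationUs = pulseIn(echoPin, HIGH, 30000);  // ~30 ms timeout
  long distanceCm = durationUs * 0.034 / 2;         // speed of sound, out and back

  // Light the prompt only when someone is within desk range.
  digitalWrite(ledPin, (distanceCm > 0 && distanceCm < deskThresholdCm) ? HIGH : LOW);

  Serial.println(distanceCm);
  delay(100);
}
```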

 

The Experience | How it Works | Arduino Code

 

The Circuit

[Image: replicator circuit diagram]

References

Star Trek Computer 1: https://hapgood.us/2015/02/20/people-have-the-star-trek-computer-backwards/

Star Trek Computer 2: https://memory-alpha.fandom.com/wiki/Computer_core?file=Engineering_computer_core_and_hatch.jpg

Can’t believe I am using ML!

I am still a newbie with p5.js. Last year, in my leisure time, I had tried to make a simple platform game with keypress actions to move a character around the screen. So when it came to applying an ML library for 'body as controller', there was more curiosity than fear of using a new library or attempting something new. It was delightfully easy to get started with ml5 and PoseNet. My only foray into AI/ML until now had been building a perceptron program while coding along with Daniel Shiffman's video.

I used code shared in the Coding Train videos and on learn.ml5js.org and modified it to try out new ways to click and scroll. I didn't want to set the benchmark too high for myself and fall into the trap of chasing it, so for the first exercise I thought up a simple experiment: floating bubbles (reused from the tutorial video) that can be clicked with body gestures. I am not yet touching the confidence scores, nor looking at fine-tuning the positions of the pose keypoints.

I learned about and tried the lerp() function, and liked how it lets us smooth the movement of pose keypoints by taking the midway point between the predicted and current positions, which makes the jumps a little less jittery. I also liked capturing click events using the dist() function from Dan's videos; it was strikingly effective, and I couldn't believe how quickly it solved the problem of checking whether both wrist keypoints are inside a bubble.
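The smoothing comes down to one lerp() call per coordinate. Below is a minimal p5.js + ml5 sketch of the idea; the variable names are mine, and the 0.5 blend factor is simply the midway point mentioned above.

```javascript
// p5.js + ml5 PoseNet: smooth the left-wrist keypoint with lerp()
// so it jumps less between frames.
let video, poseNet;
let smoothX = 0, smoothY = 0;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);   // load the PoseNet model
  poseNet.on('pose', gotPoses);   // listen for new pose estimates
}

function gotPoses(poses) {
  if (poses.length > 0) {
    const wrist = poses[0].pose.leftWrist;   // {x, y, confidence}
    smoothX = lerp(smoothX, wrist.x, 0.5);   // move halfway toward
    smoothY = lerp(smoothY, wrist.y, 0.5);   // the new prediction
  }
}

function draw() {
  image(video, 0, 0, width, height);
  fill(255, 0, 0);
  noStroke();
  circle(smoothX, smoothY, 24);   // smoothed wrist marker
}
```

A smaller factor (say 0.2) smooths more aggressively but makes the marker lag further behind the body.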

 

‘X’

[GIF: 'X' gesture demo]

The action is to cross your wrists to make an 'X' pose. Where the left-wrist and right-wrist keypoints intersect any of the floating bubbles, the translucent bubble turns white, indicating a CLICK. The code checks the distance between each wrist point and the bubble's centre and qualifies it as a CLICK event if both distances are within the bubble's radius.

So with only one wrist in place, the program doesn't consider it a CLICK event until the second wrist is also brought inside the area engulfed by the bubble.
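The check itself is just two dist() calls; a minimal sketch of the idea (the isClicked name and the bubble object are illustrative, not the exact code in the sketch linked below):

```javascript
// A bubble counts as CLICKED only when BOTH wrist keypoints are inside it.
// pose is the latest ml5 PoseNet result; bubble is {x, y, r}.
function isClicked(pose, bubble) {
  const left = pose.leftWrist;     // {x, y, confidence}
  const right = pose.rightWrist;
  const leftInside = dist(left.x, left.y, bubble.x, bubble.y) < bubble.r;
  const rightInside = dist(right.x, right.y, bubble.x, bubble.y) < bubble.r;
  return leftInside && rightInside;  // one wrist alone is not enough
}

// Example use inside draw():
// if (isClicked(poses[0].pose, bubble)) fill(255); else fill(255, 120);
```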

Present View –

https://preview.p5js.org/rewritablehere/present/gc2XD3M0s

Code View –

https://editor.p5js.org/rewritablehere/sketches/gc2XD3M0s

Video –

https://ocadu.techsmithrelay.com/au31

 

Swoop

[GIF: Swoop gesture demo]

The action is to rotate your elbow as if you are turning a wheel while holding one edge of it. The right-wrist keypoint defines an angle, read with the arctangent function and translated into a SCROLL; the SCROLL value can be mapped to the rotation angle described. I attempted to use both the wrist and elbow points for better accuracy but couldn't get that working in the code.

The gesture is similar to browsing a carousel, where you repeatedly rotate it to bring the next horse towards you. I got the chance to use push() and pop() here to draw the rectangle, and used translate() and rotate() for positioning and orienting it; translate() moves the origin to the centre of the screen, which made the angle detection possible. The challenge was working with the trigonometry and arriving at a clear understanding of whether degrees or radians should be passed as arguments.
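A minimal sketch of that angle-to-scroll idea; the scroll gain, the unwrapping of the angle jump, and the variable names are illustrative assumptions rather than the exact code in the sketch linked below.

```javascript
// Translate the right-wrist angle around the screen centre into a scroll value.
// Call updateScroll(poses[0].pose) from the PoseNet callback and
// drawIndicator() from draw().
let scrollValue = 0;
let prevAngle = 0;

function updateScroll(pose) {
  // Work relative to the centre of the canvas, like translate(width/2, height/2).
  const dx = pose.rightWrist.x - width / 2;
  const dy = pose.rightWrist.y - height / 2;
  const angle = atan2(dy, dx);       // radians in p5's default angleMode

  // How far the wrist rotated since last frame, mapped onto scroll pixels.
  let delta = angle - prevAngle;
  if (delta > PI) delta -= TWO_PI;   // unwrap the jump at -PI/PI
  if (delta < -PI) delta += TWO_PI;
  scrollValue += degrees(delta) * 2; // 2 px per degree, arbitrary gain

  prevAngle = angle;
}

function drawIndicator() {
  // Rotate a rectangle to show the current wheel angle.
  push();
  translate(width / 2, height / 2);
  rotate(prevAngle);
  rectMode(CENTER);
  rect(0, 0, 200, 20);
  pop();
}
```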

Present View –

https://preview.p5js.org/rewritablehere/present/kDjF5qOz5

Code View –

https://editor.p5js.org/rewritablehere/sketches/kDjF5qOz5

Video –

https://ocadu.techsmithrelay.com/ozfj

 

Just because you have ears 😛

[GIF: ear-tap gesture demo]

A simple gesture of tapping your ear to your shoulder to register a click was tried with the Blazeface model/detector from Google's TensorFlow library. A boolean flag is used to capture the click event: the eyes are shown on the "MOUSEDOWN" event and made to disappear on the "MOUSEUP" event.
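A minimal sketch of that boolean-flag idea in p5.js with the Blazeface detector (assuming the TensorFlow.js and blazeface libraries are loaded via script tags). Using the angle between the two eye landmarks as a stand-in for the ear-to-shoulder tap, and the 25-degree threshold, are my assumptions, not the exact logic in the sketch linked below.

```javascript
// p5.js + TensorFlow.js Blazeface: set a boolean "mouse down" flag when the
// head tilts far enough (ear towards shoulder) and draw eyes while it is set.
let video;
let model;                 // Blazeface model, loaded asynchronously
let mouseIsDown = false;   // boolean flag for the click state

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  blazeface.load().then(m => { model = m; });
}

async function detect() {
  if (!model) return;
  const predictions = await model.estimateFaces(video.elt, false);
  if (predictions.length > 0) {
    const [eyeA, eyeB] = predictions[0].landmarks;  // first two landmarks are the eyes
    // Head tilt = slope of the line between the two eyes, in degrees.
    const tilt = abs(degrees(atan((eyeB[1] - eyeA[1]) / (eyeB[0] - eyeA[0]))));
    mouseIsDown = tilt > 25;   // "ear tapped to shoulder" threshold (assumed)
  }
}

function draw() {
  detect();                    // fire-and-forget async detection
  image(video, 0, 0, width, height);
  if (mouseIsDown) {           // MOUSEDOWN: show the eyes
    fill(255);
    noStroke();
    circle(width / 2 - 40, height / 2, 40);
    circle(width / 2 + 40, height / 2, 40);
  }                            // MOUSEUP: draw nothing, eyes disappear
}
```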

Present View –

https://preview.p5js.org/rewritablehere/present/vx1XAY-ws

Code View –

https://editor.p5js.org/rewritablehere/sketches/vx1XAY-ws

Video –

https://ocadu.techsmithrelay.com/SLr2

 

Reflection:

This experiment has given me some confidence in using computer vision. I can now think about all the possibilities of creating simple experiments around body gestures and interactions that we embody on a regular basis, and about what is possible in using AI to classify them and make inferences. It made me think about using this to define commands for a digital assistant; I could simply signal to Alexa to change a song by winding my hand, or something along those lines.

 

References and code reused from:

ml5.js: https://ml5js.org/

The Coding Train: https://www.youtube.com/user/shiffman

Multiple hands detection for p5.js coders: https://www.youtube.com/watch?v=3yqANLRWGLo (accessed 8 Sept 2021).

Blazeface: https://arxiv.org/abs/1907.05047 
