
Let There Be Light


[Image: set1]

This project uses Bluetooth Low Energy (BLE) to connect an Arduino central, which carries a button, a potentiometer, a light sensor, a water sensor, and an LED, to an Arduino peripheral that controls a rotating lamp with a hanging rotating lattice. The aim is to explore the layers of central-peripheral control and use visual cues to imply functionality. The step-by-step interactions are as follows (a simplified sketch of the BLE link follows the list):

    1. The blue LED inside a lotus origami on the packaged central lights up as the BLE connection is established. The user sees it, notices a button embedded in the origami, and presses it. This lights up the blue LED in the identical lotus on the peripheral.

    2. As the user interacts with the peripheral, the light sensor detects a drop in brightness, and an LED in the koi origami lights up.

    3. The user rotates the koi origami, making the lattice spin.

    4. The user notices the water drop icons next to the water sensor and sprays water on its surface, increasing the brightness of the lamp.
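
The full code is linked under Code below; as an illustration of how step 1 could be wired up, here is a minimal sketch of the peripheral side using the ArduinoBLE library. The service and characteristic UUIDs, pin number, and device name are placeholders for this example, not the values used in the project.

    #include <ArduinoBLE.h>

    // Placeholder UUIDs, pin, and name, for illustration only.
    BLEService lampService("19B10000-E8F2-537E-4F6C-D104768A1214");
    BLEByteCharacteristic buttonChar("19B10001-E8F2-537E-4F6C-D104768A1214", BLERead | BLEWrite);
    const int lotusLedPin = 3;

    void setup() {
      pinMode(lotusLedPin, OUTPUT);
      if (!BLE.begin()) { while (true); }   // halt if the radio fails to start
      BLE.setLocalName("LampPeripheral");
      BLE.setAdvertisedService(lampService);
      lampService.addCharacteristic(buttonChar);
      BLE.addService(lampService);
      BLE.advertise();                      // the central's lotus LED turns on once it connects
    }

    void loop() {
      BLEDevice central = BLE.central();    // wait for the central to connect
      while (central && central.connected()) {
        if (buttonChar.written()) {         // the central wrote the button state
          digitalWrite(lotusLedPin, buttonChar.value() ? HIGH : LOW);
        }
      }
    }

On the central side, the matching step is to scan for this service, connect, and write the button state to the same characteristic whenever it changes; the light-sensor and water-sensor interactions in steps 2 and 4 can travel over additional characteristics in the same way.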

I can see this working as a playful lamp in a home setting. When two people interact with it, the multiple modes of control invite surprising discoveries and shared interaction.

Experience Video

How It Works Video

Final Project Images

[Images: set3, set2]

Development Images

[Images: img_4765, img_4778, img_4785]

Code

Circuit Diagram

[Peripheral circuit diagram: peripheral-diagram]

[Central circuit diagram: diagram-central]

Network Diagram

[Network diagram: network-diagram]

Experiment 2: Sensible Objects—Artificial Nature

Key project image

[Image: key-pic]

Project Description

Artificial Nature consists of origami house plants and LEDs mimicking fireflies that sense and respond to sound and proximity in their environment. This project aims to reflect on technology as a human companion in the form of familiar and personified objects. 

The leafy arrowhead plant responds to sound volume; the grassy spider plant and all the LEDs respond to proximity. There are three states for the plants and two for the LEDs. From state to state, the actuators answer stronger stimuli in the environment with faster and more noticeable movement.

In the idle state, there’s no sound or proximity trigger, and both plants, attached to servos, spin at a slow and consistent rate. The yellow LEDs arranged by the plants fade on and off at different “breathing” rates to mimic fireflies. 

In the first triggered state, the leafy arrowhead plant responds to low sounds by moving faster at a consistent rate. The grassy spider plant responds to a range of proximity values by moving to a defined position that depends on how close the user is to the sensor. The yellow LEDs respond by glowing brighter in unison as the user moves closer.

In the second triggered state, the leafy arrowhead plant responds to loud sounds by moving to random positions over a large range. The grassy spider plant responds to a proximity reading of less than 3 by moving to random positions over a small range.
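
The linked Arduino code below holds the actual implementation; as a rough illustration of how one loop could combine these mappings, here is a compressed sketch. It assumes positional servos, an analog sound sensor, and an analog proximity sensor, and all pins, thresholds, and ranges are invented for the example rather than taken from the project.

    #include <Servo.h>

    // Illustrative pins and thresholds only; the real values live in the linked code.
    const int soundPin = A0, proximityPin = A1, ledPin = 9;
    const int LOUD = 600, QUIET = 300, NEAR = 3, FAR = 60;
    Servo arrowhead, spider;

    void setup() {
      arrowhead.attach(5);
      spider.attach(6);
      pinMode(ledPin, OUTPUT);
    }

    void loop() {
      int sound = analogRead(soundPin);
      int proximity = map(analogRead(proximityPin), 0, 1023, FAR, 0);   // crude distance estimate

      // Arrowhead plant: slow idle sweep, faster sweep on low sounds, random jumps on loud sounds.
      if (sound > LOUD)       arrowhead.write(random(0, 180));
      else if (sound > QUIET) arrowhead.write((millis() / 10) % 180);
      else                    arrowhead.write((millis() / 40) % 180);

      // Spider plant: slow idle sweep when nobody is near, position mapped to proximity otherwise,
      // and small random jitter when the user is very close.
      if (proximity < NEAR)      spider.write(90 + random(-20, 20));
      else if (proximity < FAR)  spider.write(map(proximity, NEAR, FAR, 160, 20));
      else                       spider.write((millis() / 40) % 180);

      // Firefly LEDs: slow "breathing" fade when idle, brighter in unison as the user approaches.
      int breathing = 128 + 127 * sin(millis() / 1000.0);
      int glow = (proximity < FAR) ? map(constrain(proximity, NEAR, FAR), NEAR, FAR, 255, breathing)
                                   : breathing;
      analogWrite(ledPin, glow);

      delay(20);
    }

The different "breathing" rates in the idle state would come from giving each LED its own period or phase in that sine term.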

 

Calm Technology Principles

I engaged with 3 principles through this project:

I. Technology should require the smallest possible amount of attention:

The actuators are small and compact, and when triggered, their responses are unobtrusive.

III. Technology should make use of the periphery:

By having 3 states, the user can choose to engage or ignore Artificial Nature at any time, having the technology “move easily from the periphery of our attention to the center, and back”, without affecting its functionality or meaning.

VI. Technology should work even when it fails:

If the technology fails to respond to its sensors, it defaults back to the idle state, which requires no stimulus. Furthermore, if the technology breaks down entirely, the amiable physical form of the piece still remains to create meaning.

 

Experience Video

How It Works Video

Final Project Images

[Images: f4, f1, f3]

 

 

Development Images

[Images: p1, p2, p3, p4, p5]

 

Link to the Arduino code hosted on Github


 

Circuit Diagram

[Circuit diagram: e2-_circuit]

Experiment 1 – Body As Controller


Project Description

In this collection of four studies, I explored using computer vision and motion tracking to create interactive moments of clicks and scrolls. My goal was to create a sense of playfulness through engaging visuals and audio outputs. My overall success was in understanding how to translate mouse interactions into body movements, and in seeing the benefits of creating content with programming.

When users interact with elements on a screen using a mouse, they generally have four categories of interaction: hover, click, scroll, and drag. When the body becomes the controller, however, these interactions are translated into analyzing keypoint positions, both absolute and relative. This challenged me to switch to a different mindset: a "click" becomes a movement or position that either satisfies or fails to satisfy a threshold, and a "scroll" becomes a continual movement feeding an accumulating change. I also made progress learning p5.js across all three models. My main challenge was the limited range of body movements available for completing different interactions, and figuring out how to switch from one type of interaction to another.
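
The studies themselves are p5.js sketches linked in the list below, but the translation described above is language-agnostic: a "click" is a boolean test of a keypoint distance against a threshold, and a "scroll" is a value that keeps accumulating while a movement continues. As a minimal, generic sketch of those two ideas (the thresholds, gain, and simulated frame values are invented for illustration):

    #include <cstdio>

    // "Click": true whenever a tracked distance (e.g., thumb to index finger) drops below
    // a threshold; a small hysteresis gap keeps the state from flickering at the boundary.
    bool isClick(float distance, bool wasClicked, float pressAt = 20.0f, float releaseAt = 30.0f) {
        return wasClicked ? (distance < releaseAt) : (distance < pressAt);
    }

    // "Scroll": a continual movement feeds an accumulator instead of toggling a state.
    // Here the horizontal offset of a wrist from the screen center nudges an angle each frame.
    float accumulateScroll(float angle, float wristX, float centerX, float gain = 0.002f) {
        return angle + (wristX - centerX) * gain;
    }

    int main() {
        bool clicked = false;
        float angle = 0.0f;
        float frames[][2] = {{45, 200}, {18, 260}, {25, 320}, {40, 380}};   // simulated (distance, wristX) pairs
        for (auto& f : frames) {
            clicked = isClick(f[0], clicked);
            angle = accumulateScroll(angle, f[1], 320.0f);
            std::printf("clicked=%d angle=%.3f\n", clicked, angle);
        }
        return 0;
    }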

  • Click 1
    • click-1
    • present link
    • edit link
    •  click-1-emojis
    • demo video
    • This is the first one I completed. I used the Handpose TensorFlow model implemented in ML5 and tracked the distance between my thumb and index finger to realize a "click". When the distance crosses a certain threshold, an emoji appears. There's one emoji with a sound effect for each corner of the screen. My original plan was to make the emoji random, but I ran into the difficulty of random emojis flickering nonstop as the draw loop repeats.
  • Click 2
    • click-2
    • present link
    • edit link
    • click-2-particles
    • demo video link
    • I used the PoseNet model to track the left shoulder, the nose, and the distance between the eyes. I also used a particles JS file for the particle animation. The "click" here is realized by having the left shoulder enter the circled area. When that condition is satisfied, playful elements appear on the user's face on the screen and track the respective areas of their face. Background music was added to make the interaction more playful. The googly eyes are something I'd like to reflect on: since their position is relative to the center point between the eyes, it drifts off as the user tilts their head. This could be something to work on to improve playability.
  • Scroll 1
    • scroll-1
    • present link
    • edit link
    • scroll-1-rotate_1
    • demo video link
    • I used the PoseNet model to track both wrists' positions and the distance between them. The "scroll" is realized by two movements: the position of the right wrist controls the rotation direction and speed, and the distance between the two wrists determines the size. I explored using push and pop with rotation, which allowed for an interesting and unpredictable visual effect. The blue and white circles in the background are lines that also rotate responsively; added as an experiment, they ended up creating a pattern I didn't anticipate.
  • Scroll 2
    • scroll-2
    • present link (see the edit link if this link is not working)
    • edit link
    • scroll-2-amp-audio
    • demo video link
    • I used Clmtrackr to track the distance between the eyes and the midpoint between the eyes. The "scroll" is realized by two movements: the X position of the midpoint between the user's eyes determines which sound channel the audio comes out of, and the distance between the eyes determines the playback speed of the audio. I also mapped the volume of the audio to the size and position of the graphics to create a "dancing to the beats" effect. The audio is EDM, which works well with this kind of audio manipulation and with the aesthetics of the graphics.

 
