Author Archives: Tyler Beatty

Sketch 4 – Tyler

In this sketch, I use two light-dependent resistors (LDRs) to detect the position of a hand. The program starts in a seeking, or idle, state that sweeps the servo using Prof. P’s oscillate function. When a hand is present, the program calculates the difference between the two LDR readings and maps it to a servo position. Potentiometers are used to help tune the LDR signal.

An improvement would be to add a third, center LDR. Currently, a hand outside the two LDRs can give the same output as a hand between them. This could be handled in code, but using a center LDR to activate and deactivate the following behavior would be more intuitive for the user.
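A rough sketch of that gating, assuming the center LDR is wired to A2 and shares the 850 threshold used in the current code (both are assumptions, not part of the circuit as built):

// Hypothetical variant of checkActivity() with a third, center LDR.
// The following behavior only activates while the center sensor is covered,
// so a hand outside the two side LDRs is ignored.
int ldrCPin = A2;
int ldrCVal = 0;

bool checkActivity() {
  ldrCVal = analogRead(ldrCPin);
  // a hand over the center sensor darkens it and drops the reading below the threshold
  return ldrCVal < 850;
}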

Code and wiring: https://www.tinkercad.com/things/a9CMLTatuf8-incredible-elzing/editel?sharecode=Iu5JxgXhnUBG0rUMoKTby9VRhrycILI95jvDgDkqKr0

[screenshot]

Code:

#include <Servo.h>

// state machine: 0 = idle (sweeping), 1 = following a hand
int state = 0;
unsigned long stateTimer = 0;   // millis() deadline for dropping back to idle

int ldrAPin = A0;
int ldrBPin = A1;
int servoPin = 3;
int ldrAVal = 0;
int ldrBVal = 0;
int servoPos = 0;
Servo servo;

void setup()
{
  pinMode(ldrAPin, INPUT);
  pinMode(ldrBPin, INPUT);
  pinMode(LED_BUILTIN, OUTPUT);
  servo.attach(servoPin);
  Serial.begin(9600);
}

void loop()
{
  // track light sensors
  ldrAVal = analogRead(ldrAPin);
  ldrBVal = analogRead(ldrBPin);

  // map the difference between the two readings to a servo angle,
  // then smooth it so the servo doesn't jitter
  int newServoPos = map(ldrBVal - ldrAVal, -200, 200, 0, 180);
  servoPos = 0.1 * newServoPos + 0.9 * servoPos;

  Serial.print(ldrAVal);
  Serial.print(", ");
  Serial.print(ldrBVal);
  Serial.print(": ");
  Serial.println(servoPos);

  manageState();

  switch (state) {
    case 0:                        // idle state: sweep back and forth
      digitalWrite(LED_BUILTIN, HIGH);
      servo.write(oscillate(20, 160, servoPin, 120));
      break;
    case 1:                        // following state: track the hand
      digitalWrite(LED_BUILTIN, LOW);
      servo.write(servoPos);
      break;
  }
  delay(100);
}

void manageState() {
  switch (state) {
    case 0:                        // idle state
      if (checkActivity()) {
        state = 1;
        stateTimer = millis() + 2000;  // follow for at least 2 s
      }
      break;
    case 1:                        // following state
      if (millis() > stateTimer) {
        state = 0;                     // no activity for 2 s, go back to idle
      } else if (checkActivity()) {
        stateTimer = millis() + 2000;  // activity seen, extend the timer
      }
      break;
  }
}

// a hand over either sensor darkens it and drops the reading below the threshold
bool checkActivity() {
  if (ldrAVal < 850 || ldrBVal < 850) {
    return true;
  }
  return false;
}

// simple oscillation function
// use millis() to step across a sine wave as an oscillator
// map function converts to specified output range
// From Nick Puckett's animation library: https://github.com/npuckett/arduinoAnimation
int oscillate(int minVal, int maxVal, int offset, int speed)
{
  int oscillValue = map((sin(float((millis() + offset) * (float(speed) / 1000)) * PI / 180.0) * 1000), -1000, 1000, minVal, maxVal + 1);
  return oscillValue;
}

Sketch 3 – Tyler – Fake Scanner

This sketch tries to create an almost analog affordance indicating that you are being tracked. By shifting the pixels in certain areas of the image, I achieved a scanner-like effect. The PoseNet keypoints are then drawn on a separate graphics buffer so that only the desired area (above the scan line) is transferred to the canvas.
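A rough p5.js sketch of those two techniques (my reconstruction, not the original code; the ellipse drawn on the overlay stands in for the PoseNet keypoints):

// smear the pixels below the scan line, and draw tracking marks on a separate
// graphics buffer so only the region above the scan line reaches the canvas
let video;
let overlay;   // separate graphics buffer for the keypoint drawing
let scanY = 0; // current position of the scan line

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  overlay = createGraphics(width, height);
}

function draw() {
  image(video, 0, 0, width, height);

  // shift the pixels in a band below the scan line sideways for the smeared, scanned look
  for (let y = scanY; y < min(scanY + 40, height); y++) {
    const shift = int(random(-4, 4));
    copy(0, y, width, 1, shift, y, width, 1);
  }

  // draw the tracking marks on the separate buffer, then transfer only the
  // region above the scan line onto the canvas
  overlay.clear();
  overlay.noFill();
  overlay.stroke(255, 0, 0);
  overlay.ellipse(width / 2, height / 3, 20, 20);
  if (scanY > 0) {
    image(overlay, 0, 0, width, scanY, 0, 0, width, scanY);
  }

  // move the scan line down and wrap around
  stroke(0, 255, 0);
  line(0, scanY, width, scanY);
  scanY = (scanY + 2) % height;
}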

[image: fake scanner]

https://www.youtube.com/watch?v=ew0MpCQIWlk

https://editor.p5js.org/tbeattyOCAD/sketches/yUdtyMrEq

Sketch 2 – Tyler – Sound Drag

For this sketch, I wanted to play with two things: sounds and draggable elements. I used code from the Coding Train “Draggable Class” example to help manage the drag events. I used sounds in two ways: .loop() for the different sound clips, so each plays for as long as its draggable is on the screen (they are deleted if dragged down into the dash), and a “Shock” sound that plays when the more eerie sounds are chosen.
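A stripped-down p5.js sketch of the sound/draggable pairing (my reconstruction; the file name, the dashY line, and this minimal Draggable class are stand-ins, not the originals):

let clip;          // a p5.sound file
let draggable;
const dashY = 350; // anything dropped below this line is removed

function preload() {
  clip = loadSound('assets/eerie.mp3'); // placeholder file name
}

function setup() {
  createCanvas(400, 400);
  draggable = new Draggable(100, 100, 80, 80);
}

function draw() {
  background(220);
  line(0, dashY, width, dashY); // the dash
  if (draggable) {
    draggable.update();
    draggable.show();
    // dropping the element into the dash deletes it and stops its looping sound
    if (!draggable.dragging && draggable.y > dashY) {
      clip.stop();
      draggable = null;
    }
  }
}

function mousePressed() {
  if (draggable) {
    draggable.pressed(mouseX, mouseY);
    // start the loop on the first user gesture so the browser allows audio
    if (!clip.isPlaying()) clip.loop();
  }
}

function mouseReleased() {
  if (draggable) draggable.released();
}

// minimal stand-in for the Coding Train Draggable class
class Draggable {
  constructor(x, y, w, h) {
    this.x = x; this.y = y; this.w = w; this.h = h;
    this.dragging = false;
    this.offsetX = 0; this.offsetY = 0;
  }
  over(px, py) {
    return px > this.x && px < this.x + this.w && py > this.y && py < this.y + this.h;
  }
  pressed(px, py) {
    if (this.over(px, py)) {
      this.dragging = true;
      this.offsetX = this.x - px;
      this.offsetY = this.y - py;
    }
  }
  released() { this.dragging = false; }
  update() {
    if (this.dragging) {
      this.x = mouseX + this.offsetX;
      this.y = mouseY + this.offsetY;
    }
  }
  show() { rect(this.x, this.y, this.w, this.h); }
}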

https://editor.p5js.org/tbeattyOCAD/sketches/aCVCf2-wA

https://www.youtube.com/watch?v=NVj2P4e78AU

[image: sound drag]

 

Sketch 1 – Head Controller – Tyler

[image: head crop]

Full Screen: https://editor.p5js.org/tbeattyOCAD/full/7cYwInKpR

Editor: https://editor.p5js.org/tbeattyOCAD/sketches/7cYwInKpR

This sketch has two parts:

  1. Cropping only the head of one person on the camera (using the PoseNet nose and eye locations). Once the x and y position of the nose is known and the distance between the eyes is measured, the image.get() function can capture only the desired pixels (a rough sketch of this is below the list).
    – This was inspired by Rafael Lozano-Hemmer’s Zoom Pavilion. Getting the position and size of the head could be used for interaction that deals with the perspective of the audience (localizing where they are). In our screen space project we are considering using machine learning to detect whether or not people are wearing masks, and this method will be important for isolating parts of faces.
  2. Measuring head twist and tilt. Given the nose position relative to the eyes, the twist amount is a basic map() calculation. The tilt works, but not well: it compares the nose-to-eye-line distance with the eye spacing. I think it would be better to compare the nose y position with the ear positions. I show this data by moving the cropped head so that you are always looking at yourself, but it looks as if the person is simply translating their head. A better effect may be found for this input.
    – This turns your head into an (ineffective) mouse, but it could be used to estimate where on a big wall someone is looking.
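A rough p5.js sketch of the crop and twist ideas (my reconstruction; the box scale factors, variable names, and the twist offset are assumptions, not the original values; assumes ml5.js is loaded alongside p5.js):

let video;
let frame; // offscreen copy of the current video frame, so get() can crop from it
let pose;  // latest PoseNet pose

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  frame = createGraphics(640, 480);
  const poseNet = ml5.poseNet(video);
  poseNet.on('pose', results => {
    if (results.length > 0) pose = results[0].pose;
  });
}

function draw() {
  background(0);
  if (!pose) return;

  const nose = pose.nose;
  const eyeL = pose.leftEye;
  const eyeR = pose.rightEye;
  const eyeDist = dist(eyeL.x, eyeL.y, eyeR.x, eyeR.y);

  // 1. crop the head: a box around the nose, sized from the eye spacing
  frame.image(video, 0, 0, 640, 480);
  const boxW = eyeDist * 3; // assumed scale factors
  const boxH = eyeDist * 4;
  const head = frame.get(nose.x - boxW / 2, nose.y - boxH / 2, boxW, boxH);

  // 2. estimate twist: where the nose sits between the two eyes, mapped to -1..1
  const eyeMidX = (eyeL.x + eyeR.x) / 2;
  const twist = map(nose.x - eyeMidX, -eyeDist / 2, eyeDist / 2, -1, 1, true);

  // move the cropped head with the twist so you keep looking at yourself
  image(head, width / 2 - boxW / 2 + twist * 100, height / 2 - boxH / 2);
}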

Resources used: