Computer Vision & Graphics Explorations

Exploring PoseNet & ML5

header

Workshop insights:

During the workshop I was part of a group that explored PoseNet, which allows for real-time human pose estimation in the browser using the tensorflow.js library. Read more about it here. We were able to test PoseNet in the demo browser, and during our explorations I noticed that the program slowed down when using the multiple-pose capture feature. I also noticed that the skeleton drawn was pretty accurate regardless of how form-fitting or loose one’s clothing was. At the time we were not able to test the effect of different colors of clothing, as coincidentally all four of us had worn varying shades of gray. We attempted to download the Github repository found here, however we had a lot of trouble running the code; it requires a lot of dependencies and setup that we didn’t quite understand.

When I couldn’t get the demo working locally on my laptop, I tried following the Coding Train Hour of Code tutorial on using PoseNet, available here. In the tutorial Daniel Shiffman uses ml5.js and p5.js – ml5.js is a tensorflow.js wrapper that makes PoseNet and tensorflow.js more accessible for intermediate programmers or people who haven’t had much experience with tensorflow.js. The tutorial is, however, not suited to people who haven’t used p5.js before, although in the video Shiffman links to other videos for complete beginners.

Insights from the tutorial:

In this tutorial I learned:

What is ml5.js? A wrapper for tensorflow.js that makes machine learning more approachable for creative coders and artists. It is built on top of tensorflow.js, runs in the browser, and requires no installed dependencies apart from the regular p5.js libraries. Learn more here

NOTE: To use ml5.js you need to be running a local server. If you don’t have a localhost setup, you can test your code in the p5.js web editor – you’ll need to create an account.

You can create your own Instagram-like filters! The aim of the tutorial was to create a clown nose effect, where a red nose follows your nose on screen. Once you master this tutorial you can, in theory, create different effects like adding a pair of sunglasses. I also learned about the p5.js filter() function, which applies a filter to an image or video. I tested THRESHOLD, which converts each pixel to black or white depending on whether it falls below a given threshold, and GRAY, which converts the video to grayscale. Usage is filter(THRESHOLD) or filter(GRAY);
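
A minimal p5.js sketch showing filter() applied to a webcam feed (the 0.5 threshold value here is just an example to tweak):

let video;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  filter(GRAY);                // or filter(THRESHOLD, 0.5); for black/white pixels
}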

Pros & cons of using a pre-trained model vs. a custom model? When using a pre-trained model like PoseNet, a lot of the work has already been done for you. Creating a custom model is beneficial only if you are looking to capture a particular pose, e.g. if you want to train the machine on your own body – but to do this you will need tons of data. Think thousands or even hundreds of thousands of images, or 3D motion capture, to get it right. You could crowdsource the images, however you then have to think about issues of copyright and your own bias about who is in the images and where in the world they are. It is imperative to be ethical in your thinking and choices.

Another issue to keep in mind is the diversity of your source images, as a lack of it may cause problems down the line when it comes to recognizing different genders or races. Pre-trained models are not infallible either, and it is recommended that you test out models before you commit to them.

What are keypoints? These are 17 datapoints that PoseNet returns and they reference different locations in the body/skeleton of a pose. They are returned in an array where the indices 0 to 16 reference a particular part of the body as shown below:

Id Part
0 nose
1 leftEye
2 rightEye
3 leftEar
4 rightEar
5 leftShoulder
6 rightShoulder
7 leftElbow
8 rightElbow
9 leftWrist
10 rightWrist
11 leftHip
12 rightHip
13 leftKnee
14 rightKnee
15 leftAnkle
16 rightAnkle

The array also contains additional information for the pose, such as the confidence score and the x, y co-ordinates of each keypoint. These keypoints are important, as they are how you determine where to draw your filter or effect, e.g. the clown nose.

keypoints

source: TensorFlow here

keypoints_m

Some keypoint confidence scores recorded from the motion capture of the image above of me sitting down, printed to the console with the array expanded: 0.99 “leftEye”, 0.84 “rightEye”, 0.97 “leftEar”, 0.41 “rightEar”, 0.01 “leftShoulder”, 0.00 “rightShoulder” … 0.02 “leftHip”.

Once I determined that ml5 was working correctly, I drew the clown nose – a red ellipse drawn at the x and y co-ordinates of my nose. To do this I used the keypoint data at index 0 of the keypoints array, which corresponds to the nose. To access this data I first needed to access index 0 of the poses array, which holds all the detected poses; this gives me the latest pose. Once I have the latest pose, I use the following to update the global variables noseX and noseY:

noseX = poses[0].pose.keypoints[0].position.x

noseY = poses[0].pose.keypoints[0].position.y
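
For context, the surrounding sketch looks roughly like this – a minimal version of the tutorial’s setup, with callback names of my own choosing:

let video;
let poseNet;
let poses = [];
let noseX = 0;
let noseY = 0;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  // Load PoseNet and store the latest results whenever a pose is detected
  poseNet = ml5.poseNet(video, modelReady);
  poseNet.on('pose', (results) => { poses = results; });
}

function modelReady() {
  console.log('PoseNet model loaded');
}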

The result:

rednose_a

The nose-following sketch crashes when you go off screen! You need an if statement that checks that at least one pose has been detected before reading from the poses array; with the check in place, the nose simply stays at the last position where you were on screen instead of throwing an error.

rednose_a4

rednose_fix
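
A sketch of the guarded draw loop, assuming the setup shown earlier:

function draw() {
  image(video, 0, 0);
  // Only update the nose position when PoseNet has found at least one pose
  if (poses.length > 0) {
    noseX = poses[0].pose.keypoints[0].position.x;
    noseY = poses[0].pose.keypoints[0].position.y;
  }
  fill(255, 0, 0);
  noStroke();
  ellipse(noseX, noseY, 50, 50);
}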

The red nose is too bouncy! I noticed that the red nose was a little jumpy as it moved from position to position. To fix this, I used the lerp() function to smooth the values so that the nose doesn’t jump immediately to a new position. The amount to use in the lerp function depends on what looks good to you; I tried 0.2 at first but this felt too choppy, so I upped it to 0.5. Since I now knew how to track the nose, I tried tracking an additional keypoint, my left eye, which is at index 1.

rednose_lerp
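
The smoothed update looks something like this (inside the pose-detected check; 0.5 is the amount that worked for me):

function draw() {
  image(video, 0, 0);
  if (poses.length > 0) {
    const nose = poses[0].pose.keypoints[0].position;
    // Ease toward the new position instead of jumping straight to it
    noseX = lerp(noseX, nose.x, 0.5);
    noseY = lerp(noseY, nose.y, 0.5);
  }
  fill(255, 0, 0);
  noStroke();
  ellipse(noseX, noseY, 50, 50);
}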

The red nose is out of proportion! I learned that the pixel distance between keypoints is bigger when you are closer to the camera and smaller when you are further away, which caused the fixed-size nose to look really big when I was far away and really small when I was close. To fix this I needed to estimate the camera distance and draw the nose proportional to the distance between my eye and nose keypoints. This corrects the proportions so that up close the nose is big, and far away it shrinks in size.

rednose_c

Proportions are off

rednose_proportion

rednose_b

Fixed proportions
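
A sketch of the proportional sizing using p5’s dist() between the nose and left-eye keypoints. It assumes an extra global noseSize alongside noseX and noseY, and the 2× factor is a guess to tune by eye:

function draw() {
  image(video, 0, 0);
  if (poses.length > 0) {
    const nose = poses[0].pose.keypoints[0].position;
    const leftEye = poses[0].pose.keypoints[1].position;
    noseX = lerp(noseX, nose.x, 0.5);
    noseY = lerp(noseY, nose.y, 0.5);
    // The eye-to-nose pixel distance shrinks as you move away from the camera,
    // so scaling the ellipse by it keeps the nose in proportion
    noseSize = dist(nose.x, nose.y, leftEye.x, leftEye.y) * 2;
  }
  fill(255, 0, 0);
  noStroke();
  ellipse(noseX, noseY, noseSize, noseSize);
}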

It is possible to keep adding effects, e.g. I could create sunglasses or a hat to go with my red nose. I did not like this approach, however, as it works best only for selfies and not full-body poses: there are too many keypoints to keep track of when attempting to create a unique effect at each point, especially with the addition of lerping. To create an effect where there is no keypoint – for example, there is no keypoint for the top of your head – you can use the positions of the left and right eyes to estimate where a hat should go.

Video Classification Example

I was toying around with the idea of having the algorithm detect an image in a video, and explored video classification. It quickly dawned on me that this was a case for a custom model, as the pre-trained model seemed to work best only when generic objects were in view – at times it recognized my face as a basketball, my hand as a band-aid, my hair as an abaya, etc. I also noticed that if I brought objects closer to the camera, the detection was slightly better. Below are some of my findings using MobileNet video classification in p5.

mnet
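
My test followed the standard ml5 video-classification pattern, roughly like this minimal sketch (the on-canvas label drawing is my own addition):

let video;
let classifier;
let label = 'loading model...';

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  // Load MobileNet and start classifying the webcam feed once it is ready
  classifier = ml5.imageClassifier('MobileNet', video, classifyVideo);
}

function classifyVideo() {
  classifier.classify(gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  label = results[0].label;   // top guess, e.g. "basketball"
  classifyVideo();            // classify the next frame
}

function draw() {
  image(video, 0, 0);
  fill(255);
  textSize(24);
  text(label, 10, height - 10);
}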

Ideation & Exploring PoseNet with Webcam:

I wanted to leverage the power of PoseNet to track poses in music videos, but also to subvert its usage to create a trivia game that I called Name That Singer. The idea was to create a video that showed only the pose skeletons dancing, and a viewer would have to guess who the singer was based on the poses on the screen. I chose a viral video – Beyonce’s Single Ladies – that I assumed would be easy to figure out. I didn’t take into account how fast the dancers in the video were moving, and this made it hard to determine which song was playing when only the skeletons were showing on the screen.

For this part I decided not to use the lerping approach to create a unique effect, and instead used the ready-made functions from the ml5.js PoseNet-with-webcam example to capture the skeleton. These functions were beneficial in this case, as all my points and skeletons share the same aesthetic, so I was able to cut down on the coding needed. I followed the tutorial here, and instead of using the webcam I loaded my own videos.
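
Swapping the webcam for a video file looked roughly like this (the filename is a placeholder, and the keypoint/skeleton drawing follows the ml5 example code):

let video;
let poseNet;
let poses = [];

function setup() {
  createCanvas(640, 360);
  video = createVideo('dance-video.mp4', videoLoaded);   // placeholder filename
  video.size(width, height);
  video.hide();
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (results) => { poses = results; });
}

function videoLoaded() {
  video.loop();
  video.volume(0);
}

function draw() {
  background(0);   // show only the skeletons, not the video itself
  stroke(255);
  fill(255);
  for (const p of poses) {
    for (const k of p.pose.keypoints) {
      if (k.score > 0.2) ellipse(k.position.x, k.position.y, 8, 8);
    }
    for (const [partA, partB] of p.skeleton) {
      line(partA.position.x, partA.position.y, partB.position.x, partB.position.y);
    }
  }
}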

Below are some screenshots from my testing. I also tested the poses when filters such as threshold, invert, and blur were added to the video, and found that the tracking was still really good – even with cartoons.

findthatsinger

Artists/Creative Coding Projects:

Chris Sugrue – She is an artist and programmer working across the fields of interactive installations, audio-visual performances, and experimental interfaces. website

chrissugrue

source: Chris Sugrue

Delicate Boundaries – Light bugs crawl off a computer screen and onto human bodies as people touch the screen, exploring how our bodies would interact with the virtual world if the world in our digital devices could move into our physical world.

I liked this project [Delicate Boundaries] because it explores beyond the computer screen. It could be cool to do something like this with PoseNet, where instead of just mapping onto the screen, poses could be mapped onto the body.

References:

Real-Time Human Pose Estimation in the Browser with TensorFlow.js – here

PoseNet with webcam in ML5.js – here

Github code: here

Haptic Breathing Cues

mediationvibe

Haptic Breathing Cues is a meditation program that provides visual and gentle haptic feedback cues for a guided breathing meditation.

Haptics Workshop Insights:

  • Hello Vibe Motors

Blink – exploring with the Arduino Blink example code

When the unsecured vibration motor is placed on the body, it bounces on the skin and the resulting feeling is unpleasant and irritating.

When the vibration motor is pinched between two fingers, the sensation feels more pleasant and soothing.

Setting the delay for both LOW and HIGH to 100 creates a vibrating sensation that feels like a racing heartbeat when the body is pressing on the motor or when the motor is pinched between two fingers.

NOTE: When testing the vibration motor at a fast speed, ensure that the motor is secured tightly to a surface; if it is left to vibrate for too long, the wires detach from the motor.

Setting the delay after turning HIGH to 100 and the delay after turning LOW to 5000 still generates a heartbeat-like effect, but it is more subtle and feels like a light buzz.

screenshot-2019-02-27-231605

Fade – exploring with the Arduino Fade example code

When the motor is placed on the body, there is less bouncing on the finger compared to the Blink test. This creates a more pleasing sensation that feels like a tickle.

Pinching the motor between two fingers creates a soothing sensation, with the vibrations gradually fading in and out.

Changing the fadeAmount value to 15 and pinching the motor between two fingers created a sensation that felt like breathing.

I tried increasing the delay in the fade code, but the vibration went on too long and the prolonged buzzing made my finger feel weird.

To determine a good threshold, I reset fadeAmount to 5 and began by setting the upper limit of the brightness value in the if condition to 100, which generated a subtle effect. At 10 I couldn’t sense any vibration at all. At 50 the sensation was barely noticeable and felt like a bug walking on my skin, and at 60 there wasn’t much change. At 70 the sensation was the easiest to detect without having to concentrate too hard.

screenshot-2019-02-27-231829

  • Motor Arrays

For this part our group tested a grouping of three vibration motors using both modified Blink sample code and Fade sample code. We tested the motors on the skin and with the body pressing on the motors, pinching them between our fingers. With the Blink code we were able to create a sequence where the motors vibrated one after the other, causing a subtle sensation of directional movement.

Blink Vibe 3

  • Haptic Motor Drivers

For the haptic motor drivers I tested the basic and complex sample code, preferring to pinch the motor between my fingers to feel the sensations. With the basic code, I felt a tingling sensation that seemed to crawl up my arm as the program looped through the 117 possible effects. With the complex sample code, I tested a combination of effects and found the vibrations to be too subtle. Since I wanted to have the motor sandwiched in fabric/felt material, I decided not to use the haptic motor driver. I feel this driver is more beneficial when you want to create a single interaction; it didn’t work well for me when trying to mimic a breathing pattern or heartbeat effect.

I found that it was simple to use the blink and fade code with the vibration motor on my own to create my haptic breathing cues.

Haptic Breathing Cues

Ideation: I wanted to continue exploring self-care tools, and during testing in the haptics workshop I noticed that some of my favorite interactions were when I pinched the vibration motor between two fingers while running code that created a heartbeat-like sensation. This got me thinking about creating a handheld meditation tool that would incorporate both visual and haptic cues as feedback.

To create the breathing cues, I used a meditation formula where a person inhales for 4 seconds and then exhales for 4 seconds. This code was placed in the loop so that it plays continuously. Each time an inhale or exhale state starts, a signal is sent over serial to trigger the simultaneous visual cue that accompanies the haptic cue. While the person is inhaling, the vibration motor vibrates continuously, and it stops when they are meant to exhale.

Blink Vibe code

To create the visual cue, I used Processing to read the serial values and set states indicating whether the person is meant to be inhaling or exhaling. Below is a sample of my code for when a person is inhaling. When the condition is true, the size of the exhale circle is first reset to 600 and the font size to 50, so that I can maintain a smooth transition between states in the animation and create the circle’s growing and shrinking effect.

Processing Code

Processing code 2

When testing, I noticed that the transition between exhaling and inhaling was a little jarring, so I decided to add a state where the person holds their breath. I also wanted to try fading the vibrations in and out instead of having continuous buzzing or silence, so I modified my Arduino code to use analogWrite() in place of digitalWrite(), which allowed me to create a subtle fade effect. On the Processing side, I added a hold state to handle when the person is holding their breath.

HBC 1

HBC 2

hbc 3
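
For readers who can’t see the screenshots, here is a rough sketch of the visual-cue logic in p5.js-style JavaScript. The original is a Processing sketch driven by serial messages; here a timer stands in for the serial cues, and names like breathState are my own:

let breathState = 'inhale';   // 'inhale', 'hold', or 'exhale'
let circleSize = 100;

function setup() {
  createCanvas(800, 800);
  textSize(50);
  textAlign(CENTER, CENTER);
}

function draw() {
  background(20);

  // Cycle inhale -> hold -> exhale every 4 seconds as a stand-in for the serial cues
  const phase = floor(millis() / 4000) % 3;
  breathState = ['inhale', 'hold', 'exhale'][phase];

  if (breathState === 'inhale') {
    circleSize = min(circleSize + 3, 600);   // the circle grows while inhaling
  } else if (breathState === 'exhale') {
    circleSize = max(circleSize - 3, 100);   // and shrinks while exhaling
  }                                          // during hold the size stays put

  noStroke();
  fill(120, 180, 255);
  ellipse(width / 2, height / 2, circleSize, circleSize);
  fill(255);
  text(breathState, width / 2, height / 2);
}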

Components used:

  • Foam circles to press the vibration motor
  • 1 Vibration motor
  • Arduino Micro
  • Breadboard
  • Masking Tape
  • Alligator clips
  • Wires
  • Scissors

haptic-breathing

References:

While loop with millis() : here

Link to Github code – Arduino: here

Link to Github code – Processing: here

Activity Nudger

The concept:

The Activity Nudger light is a desktop companion that notifies a person working at their desk when they have been sitting for more than an hour, gently nudging them to get up for a few minutes. The device works with an on-off switch mechanism that controls the two states a person can be in, i.e. Stationary Mode and Mobile Mode. While in Stationary Mode the device lights up white with a yellow ring, and in Mobile Mode it glows ambiently in a greyscale color scheme, slowly changing from black to white.

The design / Worksheet:

img_20190214_104519_edit

  • Display info that is important but not critical:

The Activity Nudger shows the user time information in a visual manner. It informs them that they have been working for at least an hour and are due a break. The information is not critical: if the user feels they are not ready for a break yet, they can simply reset Stationary mode by pressing the button to toggle the state. Ignoring the device does not result in any critical side-effects.

  • Can move from the periphery to the focus of attention and back again.

When the device is in Stationary mode it sits in the periphery, as no animations are happening, allowing the user to focus on the task at hand. When it switches into Mobile mode, the ambient nudge gently notifies the user and becomes the focus of attention. With the click of a button, the user can switch back to Stationary mode, shifting the device back to the periphery.

  • Focus on the tangible; representations in the environment

It is also a small, portable device that sits atop one’s desk and blends into the study/work environment non-intrusively. When it is off, it still blends into the desk environment as it looks like a small book.

  • Provide subtle changes to reflect updates in information (should not be distracting)

The Activity Nudger stays white in Stationary mode; in Mobile mode it lerps through a greyscale color scheme that is not distracting to the eye, gradually changing from white to greys to black, with the yellow ring turning white as an additional indication of state.

  • Aesthetically pleasing and environmentally appropriate

The book theme fits into the office/study environment. Additionally, I would like the glass for the book to be frosted so that the ambient lights are even more muted and less distracting.

Low-fidelity Prototype:

Parts list:

  • Arduino Genuino Micro
  • 1 concave pushbutton
  • 1 single LED light
  • Jumper cables
  • Breadboard
  • 220 Ohms resistor
  • 10k Ohms resistor

screenshot-2019-02-14-004120

Circuit diagram created on the circuit.io web app.

Code:

Arduino code:

The Arduino code for the device monitors signals from the push button and passes data to a Processing sketch over serial communication. Using an if condition, I toggle between the Stationary & Mobile state whenever the button is pressed.

screenshot-2019-02-14-093150

screenshot-2019-02-14-093245

screenshot-2019-02-14-093305

Processing code:

The Processing code alerts the user to get active by creating an ambient nudge notification telling them that they have been stationary, or working on something, for at least an hour. During testing, time was sped up so that the state switched from Stationary to the Mobile nudge after one minute. When designing your own activity nudger, use whatever time values you like: I used 1000 ms to represent one second, 60 seconds for a minute, and 60 minutes for an hour, so the test check fired after 60,000 ms; for a real hour, change this to 3,600,000 ms in the millis() check.

When a message is received over serial in serialEvent(), the string is split into an array, and depending on the state (1 == Stationary, 2 == Mobile) the isMobile boolean is set. This is what the draw loop uses to determine which animation to draw.

In my draw loop I use the millis() function to run at set time intervals and change the background from black to grey in the Mobile state. In the Stationary state, the millis() check is used to figure out when to trigger the ambient nudge notification, i.e. when the user has been in the Stationary state for at least an hour (or, when testing, at least a minute).

screenshot-2019-02-14-094228

screenshot-2019-02-14-101110

screenshot-2019-02-14-094359

screenshot-2019-02-14-094423
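
A rough reconstruction of the timing logic in p5.js-style JavaScript. The real sketch is in Processing and gets its state from serialEvent(); here a key press stands in for the button message, and names like NUDGE_AFTER and stationarySince are my own:

let isMobile = false;              // false = Stationary, true = Mobile (nudging)
let stationarySince = 0;           // millis() value when Stationary mode last started
let shade = 255;                   // greyscale value used for the ambient nudge
const NUDGE_AFTER = 60 * 1000;     // 1 minute for testing; use 3600000 for a real hour

function setup() {
  createCanvas(400, 400);
}

function draw() {
  if (!isMobile) {
    background(255);                                   // Stationary: plain white
    if (millis() - stationarySince > NUDGE_AFTER) {
      isMobile = true;                                 // time is up: start the nudge
      shade = 255;
    }
  } else {
    shade = max(shade - 0.5, 0);                       // gradually step through the greys
    background(shade);
  }
}

function keyPressed() {
  // Stands in for the serial message sent when the physical button is pressed
  isMobile = false;
  stationarySince = millis();
}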

Insights:

When you need to run logic that happens over time, it is better and more efficient to use the millis() function. I had attempted to use a for loop, counting down from a specific number to determine what color to make the ambient background, however this didn’t work because the loop ran too fast and I couldn’t get a gradual change in colors; millis() gives a much cleaner solution.

Found a nice circuit diagram creation web app: Circuit.io

References:

Github link to code: here

Performing an action every x-seconds : here

Project Journal #3 – Soma Garden

Concept:

SOMA GARDEN is a digital garden powered by your bio data: the sweatier your palms, the more plants and flowers grow. Electro-dermal activity (EDA) is measured using DIY sensors and an Arduino – changes in a person’s skin sweat glands change the electrical resistance in the circuit, which triggers different levels of animation on the dome’s walls.

Ideation & reflecting:

I wanted to create a project that would treat biofeedback about sweaty palms in a positive way. For a lot of people sweaty palms are a cause of anxiety, as they are normally viewed negatively in society. My project prompts personal self-reflection: instead of presenting data to show whether the body is under stress, whether emotional or physical, it leverages the neutral nature of plants, taking away any negative connotations.

Transferring authority of interpreting biofeedback back to the individual:

The animations allow anyone interacting with the SOMA GARDEN to interpret the volume of plants and flowers growing in their own way, transferring the authority to interpret the bio data back to the user. As a developer, my role in the interpretation is minimal and doesn’t add any connotations, positive or negative, to the final result, because I focused on quantifying the sweatiness of one’s hands and creating different states for the dome: 0–20 (not very sweaty), 20–60 (slightly moist), 60–120 (sweaty), >120 (very sweaty).

How it works:

The dome responds to different states according to the sensor value ranges:

  • 0 (no interaction): the dome’s walls go white / clear
  • 0–20 (not very sweaty): grass blades sway in a breeze, at 1/4 of the dome’s height
  • 20–60 (slightly moist): grass blades sway, flowers grow to half the dome’s height
  • 60–120 (sweaty): grass blades sway, flowers grow to 3/4 of the dome’s height
  • >120 (very sweaty): the whole dome fills with flowers and plants
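
A rough p5.js-style sketch of this mapping (the function and state names are my own; in the real setup the sensor value arrives from the Arduino’s A0 pin over serial):

let sensorValue = 0;   // raw EDA reading, updated from serial in the real sketch

function setup() {
  createCanvas(800, 600);
}

function domeState(v) {
  if (v <= 0)  return 'clear';        // no interaction: white / clear walls
  if (v < 20)  return 'grass';        // not very sweaty: grass at 1/4 height
  if (v < 60)  return 'halfBloom';    // slightly moist: flowers to half the dome's height
  if (v < 120) return 'mostlyBloom';  // sweaty: flowers to 3/4 of the dome's height
  return 'fullBloom';                 // very sweaty: the whole dome fills with plants
}

function draw() {
  background(255);
  const state = domeState(sensorValue);
  // ...draw the grass/flower animation that corresponds to `state`...
  text(state, 20, 30);
}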

Aesthetic visuals instead of biofeedback as numbers, maintains ambiguity so that individual interprets visuals by themselves:

The idea to grow plants came from the notion that sweating means our bodies are losing water, that “water is life”, and that plants need water to grow.

Code:

screenshot-2019-02-07-163244

screenshot-2019-02-07-163130

Test run results:

screenshot-2019-02-07-164944 

To realize this low-fidelity prototype the following materials are needed:

  • Arduino Genuino Micro
  • Jumper cables
  • 2 Alligator clips
  • Aluminium foil
  • Tape or glue
  • Cardstock and writing implements
  • Scissor or cutting tool & cutting mat
  • 10K resistor
  • Breadboard

Circuit Diagram : source

DIY Polygraph

The grey wires in the Fritzing diagram represent the sensors. They are connected to analog pin A0 on the Arduino board.

To create your sensors, follow the instructions here and design the sensors according to your own idea. The article presents instructions for a sensor that wraps around one’s fingers. For my idea, I envisioned a pad with a palm cut-out that invites an individual to place their hand on it; the sensors would sit at the tips of two of the fingers. Below are images from my steps in creating the sensors.

 screenshot-2019-02-07-170032

screenshot-2019-02-07-170245

Design of a high-fidelity prototype:

Sketch:

somagardensketch

For a proof of concept I would like to work on creating this dome idea, but on a single screen or panel hanging on a wall, with the palm rest and sensors placed beside it. I would also like to explore using Google Glass so that the individual can still be immersed in the SOMA GARDEN. I believe having the individual enclosed gives them space to reflect in private on what they are seeing and what it might mean to them. This design also gives the individual space to determine what their sweaty palms may mean: if they are feeling anxious, the growing plants may distract them enough to calm down, and in terms of personal motivation, they can use the SOMA GARDEN as a space to meditate without external pressure.

screenshot-2019-02-07-162415

Dream design:

SOMA Garden high fidelity idea

Insights:

When coming up with threshold values, test the sensors at different times of the day to get a variety of bio data. Additionally, test with multiple individuals to get a more reliable sense of what kind of values you might get back.

Test different resistors to find one that suits your needs. Preferably choose a resistor that generates higher values, or allows for a wider range of values, on the serial monitor; this will help when creating thresholds for the sensors. For this project 330 ohm, 1.0K ohm, and 10K ohm resistors were tested, and the 10K resistor was chosen. Below are screenshots of readings from testing the resistors in the circuit.

resistors

References:

Galvanic Skin Response Powered by Arduino (note: the circuit diagram shown in this article is incorrect) : here

Detect Lies with Tin Foil, Wire and Arduino : here

Images:

Roof Glass Dome: here

Potted plant: here

Smart Charms

Smart Charms / A smart stress toy

Project Idea – I wanted to create a wearable stress toy that would distract the user whenever they were feeling anxious. It works by pressing or scrunching up the sensor in one’s hands. The idea was inspired by self-care and by art as a form of therapy / visual meditation.

Materials Testing Results

  1. Multimeter results

screenshot-2019-01-31-114705

  2. Arduino results

Arduino tests

Body-Centric Sensor Design

Ideation

I wanted to create a stress toy for self-care that would distract the wearer whenever they were feeling anxious. When I began ideating, my original idea was a plush sensor that could be squeezed and squashed. I then began to think about portability – having the sensor around my wrist without it being cumbersome. This led me to something in the vein of a key-chain that could be attached to or detached from various items of clothing. My ideal version of this sensor would be a charm bracelet, where each charm could be manipulated to produce a different effect, distracting the wearer almost like a visual meditation.

How it works.

When the sensor is crushed, pressed, or otherwise manipulated, the sensor values are recorded. These values are then mapped to a range between 0 and 10 to create a “stressLevel” variable, which is later used in a Processing sketch.

Stress Level

Mapping original sensor readings to a stress level value between 0 and 10

In Processing, the sensor values are used to create a digital painting of circles in a limited color range. The stressLevel value determines what color each circle will be: values closer to 10 are in the blues & violets and values closer to zero are in the reds & pinks. I chose this color scheme to further indicate state of mind – when the sensor isn’t being manipulated the painting is more blue and purple, indicating calm, and when more reds and oranges are present, this could indicate that the person is agitated or excited. The idea is that by concentrating on manipulating the sensor to create a digital painting, the person is distracted from their anxious feelings.
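
A rough p5.js-style sketch of the mapping and color choice (the original is a Processing sketch fed by serial; the exact hue values and the random circle positions here are my own stand-ins):

let sensorValue = 512;   // raw reading, roughly 0–1023, updated from serial in the real sketch

function setup() {
  createCanvas(600, 600);
  colorMode(HSB, 360, 100, 100);
  background(0, 0, 100);
  noStroke();
}

function draw() {
  // Map the raw reading onto a 0–10 stress level
  const stressLevel = map(sensorValue, 0, 1023, 0, 10);
  // Values near 0 land in the reds/pinks, values near 10 in the blues/violets
  const hue = map(stressLevel, 0, 10, 340, 250);
  fill(hue, 60, 90);
  ellipse(random(width), random(height), 40, 40);
}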

Variable Resistor Choice

For my variable resistor, I had initially intended to work with the EEonyx StatTex conductive fiber, but during testing with the Arduino I noticed that the range of sensor readings was very small, which would not work for interpreting how hard a person was clenching the sensor. I decided to use Velostat / Linqstat instead, as it provided a wide range of sensor values – roughly 0 to 1023 when tested on the Arduino with various resistors. This material was also chosen because it gives a good range of values when crushed, pinched, bent, or pressed.

Fixed Resistor Choice

I chose a 10K resistor because, in my materials testing, it gave me the greatest range of values when the fabric was subjected to various manipulations.

Sensor Materials

To construct my sensor I used the following materials & tools.

  • Neoprene
  • Sewing thread & Needles
  • 2 Alligator clips
  • Felt material
  • Conductive fabric
  • Velostat / Linqstat
  • Scissors
  • Iron (if no glue is available)
  • Cardstock (to create your outline / pattern)

Steps

smartcharmsgrid

Processing / Visualization results.

Initially I wanted to draw the circles in real time; however, the sensor readings were coming in too fast and this created a very frenetic painting that started giving me vertigo when I stared at it. It was distracting, but it seemed to increase my anxiety the longer I stared at the circles moving on the screen.

To fix this, I decided to create an array that would hold each circle as it was created, and the draw loop would then just pull objects from this array. This worked and also ensured that circles stayed on the screen, creating more of a painting than the illusion of my first explorations. I decided to stay on this route, as I had a lot of fun trying to create a painting.

I noticed that as the sketch ran longer, the array got bigger and this slowed down the real-time nature of the painting – the lag between squeezing the sensor and the corresponding circle showing up got noticeably larger. I tried changing frame rates, but this didn’t give me the effect I wanted. In the end, I modified my for loop so that I would only draw the last 20 readings. In the future, I think it would be better to optimize this further so that instead of saving all the readings into an array, I only keep a limited number, minimizing read times and improving the real-time response between sensor and screen. I didn’t have enough time to implement this.
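
The modified drawing loop looked something like this (the name circles is my own; each entry is assumed to hold the position, size, and hue worked out from a sensor reading when it arrived):

function draw() {
  // Only draw the 20 most recent circles so the sketch keeps up with the sensor
  const start = max(0, circles.length - 20);
  for (let i = start; i < circles.length; i++) {
    const c = circles[i];
    fill(c.hue, 60, 90);
    ellipse(c.x, c.y, c.size, c.size);
  }
}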

Digital Painting Results from testing

smartcharms

Next Steps

To improve on SmartCharms I would like to redesign the sensor itself, perhaps making it smaller and more discreet, or keeping it at its current size but experimenting with different housings so that I could get softer textures and more flexibility. I’d also like to see what the values would be if I used a thinner cover, i.e. removed the neoprene… would the resistance be higher when the sensor is crumpled?

I would also like to improve on the visualizations. I was thinking of creating a coloring book type feel where different shapes would be generated on the screen to form a mandala and different parts of it would be colored depending on the sensor reading from squeezing the sensor.

Link to code: https://github.com/007myala/wardragon/tree/master/smartcharms

Process Journal #1 – An E-Textile Safety Badge & Bracelet

E-Textile Concept:

Who: Women and vulnerable communities e.g. LGBTQ+, children

What: A discreet pin-able patch and a bracelet

When: Can be used in everyday situations – a women’s march, pride celebrations, walking down a dark alley.

Where: Pinned onto clothing or worn around the wrist

How: Sends a distress signal, perhaps with a GPS location, when the bracelet is pinched or the badge is pressed. Ideally, the signal would be sent to an application on a phone that would pass the message to a designated party, e.g. first responders or family and friends.

Technique chosen: In the beginning I decided to either knit or weave, as I wanted to create a push-button controller and a pinch controller. I settled on knitting as I felt that weaving might not work for the push button and may have been too stiff. However, in hindsight, if I were to continue with the idea of a safety bracelet, I would use the weaving technique to create the pinch controller.

Badge & Bracelet design:

Below is how I envision the push button and pinch mechanisms in a wearable version of the safety badge and safety bracelet.

img_20190123_215858_edit

Push Mechanism Design for Safety Badge

img_20190123_222204_edit

Pinch Mechanism Design for Safety Bracelet

Process: The Badge

E-Textile Components

Steps:

  1. Prepare your materials and cut the felt according to the size of badge you want; this design is for a square badge.
  2. Cast-on 10 stitches onto the knitting needle. Begin with your non-conductive piece.
  3. Knit in your preferred knitting style.
  4. Once you have a square piece, cast-off the stitches to complete the square. bc-3
  5. You should now have a non-conductive knitted square.
  6. Begin the next square by joining your conductive thread with the yarn to form one thread.
  7. Leave a “tail” of at least an inch in length and then cast-on 7 stitches.
  8. Knit in your desired style for 7 rows or until you have a square, then cast-off. bc-4
  9. Repeat steps 6-8 to create a second conductive square. Ensure that each square has a “tail”.
  10. Sandwich the non-conductive part between the conductive parts and use the e-textile testing tool to test that the push button mechanism works. Tip: Ensure that the stitches on the non-conductive part are loose.
  11. Sew one conductive square to one side of the felt piece.
  12. Sew the other conductive square onto the opposite side of the felt piece, ensuring that the tails of the pieces are on opposite sides. bc-5
  13. Pick one side of the felt and sew the non-conductive piece over one of the conductive pieces.
  14. Fold over the felt and test that the safety badge lights the LED on the e-textile tool.

bc-6

Process: The Bracelet

E-Textile Components

Steps:

  1. Cut two 1/2 inch wide strips of conductive fabric.
  2. Using an LED and your battery from the e-textile testing tool, designate a + and – side to the strips.
  3. Place the remaining LEDs onto the strips and test that they all light up
  4. String some conductive thread through a press stud and sew a running stitch into the positive side of the conductive strips. Make sure that the press stud fits over the positive side of the battery pack’s press stud. bc-7
  5. Begin sewing the “legs” of the LEDs onto the positive strip.
  6. Sew all 5 LEDs onto the strip and snip the conductive thread when you get to the end of the positive strip or after the last LED has been fastened to the strip.
  7. Sew down the LEDs on the negative strip and snip the conductive thread leaving a tail. bc-8
  8. To create the knitted strip for the pinch circuit begin knitting with a mix of yarn and conductive thread. Ensure that you leave about a 1 inch tail.
  9. Knit for about 7 rows and then continue knitting with only yarn. You can snip the conductive thread.
  10. Knit with only yarn for 10 rows then attach the conductive thread at the end of a row and continue knitting a new conductive part.
  11. Knit for 7 rows and then cast-off and snip the yarn leaving a tail of conductive thread. bc-9
  12. Attach the tail of the yarn-and-conductive-thread mix to the conductive thread tail left on the negative strip in step 7.
  13. Attach conductive tail end to another press stud that fits over the negative side of the battery pack.
  14. Complete the circuit by attaching the battery pack i.e attach the press studs.
  15. When you pinch the knitted part, the LEDs should light up. I realized that the blue and green LEDs would not light, as the 3V battery did not provide enough voltage for these 3.2V LEDs.
  16. I switched to red, yellow, and orange LEDs, which require 2.2V, and all of them lit up, as seen in the last picture. bc-10

The technique of working with the LEDs was inspired by this post.