Subverting Body Tracking

Veda Adnani, Nick Alexander, Amreen Ashraf

Our response includes a slide deck, linked here.

We examined the field of countering computer vision (with a focus on face detection), began to speculate on further developments, and considered research and design projects.

Introduction

For our research on computer vision, we used a top-down approach. We started out trying to understand what “computer vision” is and what its implications are. Computer vision is the name given to a family of technologies that allow a computer to see. Just as the human eye lets us piece together information through our visual understanding of the world, the camera serves as the eye of the computational device.

As of 2019, computer vision is all around us. Our smartphones, apps, social media, banks and other industries use computer vision every day to help humans carry out tasks with computational devices.

In-Class Activity:

We started out by doing the class activity, which was to research our topic. Some of the apps we looked at were those used commercially, like the newly acquired app “…” by L’Oréal. We also looked at “Faces in New Media Art,” a list by Kyle McDonald of artists using computer vision in new and novel ways. The list includes a section on intervention, which highlights artists using computer vision to counter tracking and thereby subvert these technologies.

Concepts:

We conducted a broad range of research to understand how industries and governments use face tracking not just to collect data but to classify humans, along with other potential uses of computer vision. Some initial concepts we jotted down were:

  1. Deepface: using facial recognition AI algorithms to alert or highlight when being detected.
  2. Blockchain: using blockchain technologies to scramble and save data on different databases for security.
  3. Physical: Using physical objects or clothing to misdirect.

 

One interesting thing we came across in this phase of the research was the way governments across the world are using computer vision. Privacy International is an NGO that does a lot of work on the legality of the ways in which computer vision is currently being implemented.

bodsub1

 

Instagram Face Filters:

Our first and most basic experiment was with Instagram face filters, to understand the extent to which they can be used to alter, modify or even transform the face. One of the most striking filters that we found is shown below. It is called “Face Patch” and it gradually eliminates all the features from the user’s face, leaving them with only a blank patch of skin and the outline of their head. We leave this finding open to your interpretation.

 

Beating Apple’s Facial Recognition

We tried deceiving Apple’s TrueDepth Face ID by using photographs; however, this did not work. What did work was using a mirror to reflect the face, which we found odd since a mirror is a flat surface and cannot convey depth. Yet it somehow managed to cheat the software and unlock the device.

 

ModiFace:

We experimented with ModiFace, an AR app that uses facial recognition to mock up different cosmetic products on the wearer’s face. A range of brands, like INGLOT, use this platform to advertise their products, but what caught our attention was the app’s ability to remove any scars and blemishes on the user’s face, even ones the user was unaware of. It also allowed the user to change their eye colour if they desired. This was quite disturbing, and a rude awakening to the lengths the beauty and cosmetics industry will go to promote vanity and unrealistic aesthetic perfection.

bodsub2

 

Free and Accessible Resources:

Accessible and free resources for body tracking are easy to find. Simple but capable face-tracker tools made by independent developers, like CLM Face Tracker or tracking.js, are available with a minimum of web searching. More robust face-tracking technology, such as that developed by Intel and Microsoft, is easily accessible to businesses. Body-tracking code such as PoseNet can also be found very easily.

For those who care to look, face and body tracking is widely available and can be adapted to a user’s purpose with no oversight.

 

Deceive v/s Defeat:

Through our research process, we came across two possible approaches to subverting face recognition products. The first was to “deceive” the intelligence into thinking that the user was someone else, and the second was to “defeat” the system by rendering the user unidentifiable using certain tactics. Our findings below cover both of these possibilities.

 

In light of the examples listed below, we see an emerging need for subversion. Our identities, faces and bodies are sacred and personal. But we are constantly being violated by multiple entities, and it is unfair to be subjected to this kind of surveillance unknowingly. Where does this impending lack of trust leave humankind?

 

Amazon Rekognition:

https://aws.amazon.com/rekognition/

screenshot-2019-03-07-at-8-54-04-am

Amazon claims “real-time face recognition across tens of millions of faces and detection of up to 100 faces in challenging crowded photos,” and was recently caught secretly licensing this facial recognition software to multiple state governments in the USA. With its real-time tracking and its ability to analyze several camera feeds in multiple cities simultaneously, this is a serious concern for privacy and consent under government surveillance.

screenshot-2019-03-08-at-7-10-48-pm

Butterfleye:

homx1mjij7pbmapf0zgv

Built on the same technology provided by Amazon Rekognition, Butterfleye is a commercial facial recognition device built to help businesses get to “know their customers better”. Every time a customer enters a business establishment like a coffee shop, salon or bank, the person serving the customer is immediately given a bank of data including the customer’s personal details, preferences and purchase history. The company claims it is a way for businesses to become more “efficient” and serve customers better, but where does this leave any possibility of privacy for the average human being?

 

SenseTime: Viper Surveillance System

screenshot-2019-03-07-at-9-23-07-am

SenseTime is a Chinese company that focuses on AI-based facial recognition systems. It is currently the most highly valued company of its kind in the world, at a valuation of 3 billion dollars. Its flagship product, the Viper surveillance system, detects faces in crowded areas and is used most heavily by the government. What is shocking is that the government uses this technology most in provinces with large Muslim populations to track “terrorist” activity. However, its stated reasons for doing so are far different.

1_wfaswvx_6zpvgrc0iwr_1a

Government claims across the globe:

Many governments are employing facial recognition software for various stated reasons. Some claim it is to find missing children; others claim it is to prevent and stop human trafficking. However, the actual uses are often far from the picture they project.

 

AI-Generated Human Faces

New Website Generates Fake Photos of People Using AI Technology

AI-assisted image editing is used in the creation of “deepfakes” (a portmanteau of “deep learning” and “fake”), which are high-quality superimpositions of faces onto bodies. Generative Adversarial Networks have also been used to generate high-quality human faces which, combined with face tracking technology, can be made to appear to speak in real time.

Video forensics can be used, or image metadata can be extracted and analyzed, to identify AI-generated faces and videos.

How does one evade these various entities?

Classifiers v/s Detectors:

bodysub3

One of the key distinctions in surveillance systems is the difference between classifiers and detectors. Classifiers categorize pre-determined objects and are commonly used in face surveillance systems such as Apple’s TrueDepth Face ID, which projects 30,000 infrared dots to identify faces. Detectors have to locate and identify objects themselves, i.e. create their own bounding boxes, and are used in areas like autonomous vehicles.

NSAF: Hyphen Labs

1_qz-bkdogvwzaxa8lvyogeq

Hyphen Labs is a multidisciplinary lab which focuses on using technological tools to empower women of colour. They use human-centred design and speculative design methodologies to prototype technologies. They have developed a project called NeuroSpeculative AfroFeminism (NSAF), which integrates computational technologies, virtual reality and neuroscience in the design of prototypes. HyperFace is one of these prototypes: a scarf printed with many face-like patterns to misdirect the computer vision used for data collection and profiling. It exploits the data points tracking software looks for, graphically designing a scarf dense with those points. It also uses certain colours which are not recognized by this software.

 

Glasses that confuse surveillance:

Researchers at Carnegie Mellon University have devised a pair of glasses that “perturb” or confuse facial recognition systems.

1_hvsqkmhmsxworomdk36pwg

 

Facial Camouflage that disturbs surveillance:

A team of researchers at Stanford University led by Dr. Jiajun Lu has devised facial camouflage patterns to confuse cameras. The patterns render the face unidentifiable across various angles, distances and lighting conditions. They are experimenting with “living tattoos” for the face to create long-term solutions to fight surveillance.

1_izwnp7o8ojv1ldnuu1v5ia

 

NIR LED Glasses, Caps or Burqa:

goggles

A low-cost and feasible way to avoid facial surveillance systems is to use near-infrared (NIR) LED lights. The lights are practically invisible to the naked human eye, and when designed well into a prototype they can go unnoticed, yet they successfully blind cameras. The first prototype was a pair of eyeglasses designed by Professor Isao Echizen of the National Institute of Informatics and Professor Seiichi Gohshi of Kogakuin University, and since then various prototypes ranging from caps to burqas have been made. The lights are inexpensive and available from SparkFun.

 

URME Mask:

selvaggio1

The URME mask is a $400 mask sold at cost by its creator, artist Leo Selvaggio, to help people evade surveillance. When worn, it is extremely realistic, and the only time a wearer can be detected is when the lack of lip movement is noticed.

Facial Weaponization Suite:

screenshot-2019-03-08-at-6-35-29-pm

Facial Weaponization Suite, by artist Zach Blas, is a series of modelled masks created in protest of the politics of facial surveillance. The masks are made in workshops from aggregated facial data of participants, producing forms that are unrecognizable to biometric facial surveillance systems.

Concepts

amreen

In addition to exploring existing forms of facial countermeasures (like CV Dazzle), we considered utilizing the technology against itself. We imagined a digital mask that would superimpose itself over any face detected in images taken by the device it was installed on, scrambling it and rendering it useless as facial data. We also considered bio-powered near-infrared LED stickers that could be placed subtly on the face and powered by body electricity.

Proximity Tap

1

GitHub: https://github.com/npyalex/ProximityTap/blob/master/ProximityTap.ino 

In considering the use of haptics I was very interested in a product that would place vibrating motors on the wearer’s clavicle. To my mind, the clavicle is a sensitive space that is receptive to tactile feedback, but positioned in a place on the body where a wearable device could be concealed – or put on proud display.

My first concept involved using a wearable set on the clavicle as a control-centre for a smartphone or other device. I enjoyed the notion of a wearer subtly manipulating a device by touching their collarbone, especially when contrasted with the socially-invasive procedures of controlling (for example) a Google Glass or a smart watch.

Upon consideration, this concept was out of scope and too far removed from the exploration of haptics this assignment asks for, so I rolled it back into what I considered to be a simpler concept: a proximity sensor that would, through subtle haptic feedback on the clavicle, communicate the direction and proximity of people moving about behind the wearer.

Exploring sensor options brought me to the SparkFun Gesturesense, which I borrowed from a colleague. I preferred this sensor to the UV proximity sensor in our kits. The Gesturesense could sense distance, orientation, and direction of movement, while the UV sensor could only sense distance. The UV sensor had a much better range than the Gesturesense, but I wanted to explore the new tech.

ZX Distance & Gesture Sensor - SparkFun | Mouser

 

I had some trouble getting the Gesturesense working with my Arduino Micro. It’s possible it was an issue with the pins serving different functions on the Micro than on the Uno (which is supported by the documentation), because when I switched to an Uno it worked without any trouble.

I mapped the sensor readings, which ranged from 240 (as a low reading) to 0 (max), to the simpler range of 0 (low) to 255 (high) so they could better interface with the vibration motors and the pulse width modulation from the Arduino’s pins. I assigned the Z-position reading, the distance from the sensor, to a variable called vibeStrength, which serves as the value for the pulse width modulation on each motor. The X-position reading activates one of three motors based on the relative position of whoever is activating the sensor: below 80, the left motor; between 81 and 160, the middle motor; 161 and above, the right motor.

Thus, the motors vibrate softly when the sensor subject is far away and strongly when they are close, and the motors that vibrate are based on the direction of whoever is behind.
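A condensed sketch of that logic follows (it is not the full project code linked above). The pin numbers are placeholder assumptions, the zone boundaries are the ones described above, and readX()/readZ() and the 0x10 I2C address come from SparkFun’s ZX Sensor library examples.

#include <Wire.h>
#include <ZX_Sensor.h>

ZX_Sensor zx_sensor = ZX_Sensor(0x10);   // default I2C address from SparkFun's examples

const int vibePinLeft  = 9;    // placeholder PWM pins for the three motors
const int vibePinMid   = 10;
const int vibePinRight = 11;

void setup() {
  Wire.begin();
  zx_sensor.init();
  pinMode(vibePinLeft, OUTPUT);
  pinMode(vibePinMid, OUTPUT);
  pinMode(vibePinRight, OUTPUT);
}

void loop() {
  if (zx_sensor.positionAvailable()) {
    int zPos = zx_sensor.readZ();                  // 240 = far, 0 = close
    int xPos = zx_sensor.readX();                  // left-to-right position
    int vibeStrength = map(zPos, 240, 0, 0, 255);  // far = soft, close = strong

    if (xPos < 80) {
      analogWrite(vibePinLeft, vibeStrength);      // subject is in the left zone
    }
    if (xPos >= 81 && xPos <= 160) {
      analogWrite(vibePinMid, vibeStrength);       // subject is in the middle zone
    }
    if (xPos >= 161) {
      analogWrite(vibePinRight, vibeStrength);     // subject is in the right zone
    }
  }
}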

A recurring issue with the sensor was that it did not return a zero or null reading when not sensing any data – instead, it would continue to return the last number it sensed until it returned something new. This meant the outputs would remain at the last reading, vibrating at that rate, until the sensor picked up something different. This often led to the motors vibrating when there was nothing to sense, giving a false positive.

To solve this, I wrote a function that would run a counter every frame the sensor was not returning a value. When the counter reached a threshold, it would turn off all outputs.
if (!zx_sensor.positionAvailable()) {
  nullReading++;            // count frames with no new reading
} else {
  nullReading = 0;          // readings have resumed, so restart the count
}
if (nullReading >= 500) {
  // the sensor has been silent long enough: shut off all three motors
  analogWrite(vibePinMid, 0);
  analogWrite(vibePinLeft, 0);
  analogWrite(vibePinRight, 0);
}

This caused the outputs to freeze at their last value if the object they were sensing moved into another area, giving false positives until none of the sensors read anything.

To solve this, I added an else statement to each of the statements that called the motor pins, setting their PWM strength to 0 if the sensor number returned put the subject in a different section.
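In code, that fix turns each zone check into an if/else (using the same placeholder names as the sketch above); for example, for the left motor:

if (xPos < 80) {
  analogWrite(vibePinLeft, vibeStrength);   // subject is in the left zone
} else {
  analogWrite(vibePinLeft, 0);              // subject has moved to another zone: stop this motor
}

The middle and right checks follow the same pattern with their own ranges and pins.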

Next Steps

For now, there is no overlap between the vibration motors. I would like it if, in future, there were a smooth transition between motors as whoever is being tracked moves, rather than having each motor operate independently as is the case now.

I would also like to 3d-print a casing for the sensor and microcontroller, and thread wiring through a sash, to complete the wearable.

Reference

https://learn.sparkfun.com/tutorials/zx-distance-and-gesture-sensor-smd-hookup-guide/all

Attendance

img_20190213_120554

GitHub: https://github.com/npyalex/OnCampus 

Attendance describes a speculative ambient body-centric design project in which the relative locations of a roster of people are loosely tracked. A family (or a cohort of grad students, perhaps) each have an entry on a fixture (which could be expressed multiple ways – for the purposes of this project I imagined working with shape-memory alloys) that highlights when those people are nearby.

Above is a rough sketch of the fixture realized with shape-memory alloys: lengths of wire twist into an approximation of  the person’s name when they are close, and unwind into nothingness when they are away. Below is the same concept rendered with LEDs on a flat clock-face.  This project was researched and coded with the theoretical understanding that it would be realized with lengths of shape-memory alloy wiring.

1-1

I’ve worked with If This Then That and Adafruit IO quite often recently and I’ve been enjoying it, so they were the first place my mind went to when considering how to realize this project.

I started by setting up feeds in Adafruit IO

docu2

and a pair of IFTTT applets
docu1

to track my location and interface with Adafruit IO. When I enter a radius around campus it sends “1” to the “arrived” feed, and when I leave the radius it sends “1” to the “gone” feed.

docu3

A little walking demonstrated that IFTTT and Adafruit IO were interfacing correctly: below you can see that the feeds successfully tracked the instances when I left for lunch and when I returned.

docu2

Without shape-memory alloys to play with, I had to get speculative with the code. I did some research and learned that SMAs require careful voltage control, with some trial and error depending on the size of the wire. I set up my code to use a transistor and pulse width modulation so that when it is eventually hooked up to an SMA I can find the ideal voltage for it.
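As a sketch of that setup: the SMA (or, for now, an LED standing in for it) sits on the output side of a transistor whose base is driven from a PWM pin, so the duty cycle can be tuned by trial and error. The pin number, duty value, and function names here are placeholders, not the project code.

const int smaPin = 5;    // placeholder PWM pin driving the transistor's base
int smaDuty = 0;         // 0-255 duty cycle, to be tuned for the actual SMA wire

void setup() {
  pinMode(smaPin, OUTPUT);
}

void formName() {
  analogWrite(smaPin, smaDuty);   // heat the wire: the name twists into shape
}

void relaxName() {
  analogWrite(smaPin, 0);         // let the wire cool: the name unwinds
}

void loop() {
  // In the full project these would be called from the Adafruit IO feed handlers.
}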

In the photo below the LED is in the place of the shape-memory alloy.

img_20190213_170550

docu5

docu4

It took me a fair bit of digging to figure out how to monitor multiple Adafruit feeds in one sketch, and I continue to have some trouble with the syntax of the functions in the Gone/Arrived sections of the code.
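For reference, the pattern I was working toward, modelled on Adafruit’s adafruitio_03_multiple_feeds example (see Works Consulted), looks roughly like this. The feed names match my IFTTT applets; the credentials, pin, and handler contents are placeholders rather than the actual project code.

#include "AdafruitIO_WiFi.h"

#define IO_USERNAME "your_username"
#define IO_KEY      "your_key"
#define WIFI_SSID   "your_ssid"
#define WIFI_PASS   "your_pass"

AdafruitIO_WiFi io(IO_USERNAME, IO_KEY, WIFI_SSID, WIFI_PASS);

AdafruitIO_Feed *arrived = io.feed("arrived");   // one feed object per Adafruit IO feed
AdafruitIO_Feed *gone    = io.feed("gone");

const int smaPin = 5;   // same placeholder transistor/PWM pin as in the sketch above

void handleArrived(AdafruitIO_Data *data) {
  if (data->toInt() == 1) {
    analogWrite(smaPin, 128);   // person is nearby: form the name (duty cycle to be tuned)
  }
}

void handleGone(AdafruitIO_Data *data) {
  if (data->toInt() == 1) {
    analogWrite(smaPin, 0);     // person has left: relax the wire
  }
}

void setup() {
  pinMode(smaPin, OUTPUT);
  io.connect();                        // connect to Adafruit IO over Wi-Fi
  arrived->onMessage(handleArrived);   // register one handler per feed
  gone->onMessage(handleGone);
  while (io.status() < AIO_CONNECTED) {
    delay(500);                        // wait for the connection
  }
}

void loop() {
  io.run();   // keep the connection alive and dispatch incoming feed messages
}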

I’d like the chance to work with shape-memory alloys properly and expand this concept – until then, I can prove the concept with an LED.

Works Consulted

https://learn.adafruit.com/adafruit-feather-m0-basic-proto/adapting-sketches-to-m0

https://learn.adafruit.com/adafruit-io-basics-analog-output/arduino-code

https://github.com/adafruit/Adafruit_IO_Arduino/blob/master/examples/adafruitio_12_group_sub/adafruitio_12_group_sub.ino

www.makezine.com/2012/01/31/skill-builder-working-with-shape-memory-alloy/

https://www.arduino.cc/en/Tutorial/TransistorMotorControl

https://github.com/adafruit/Adafruit_IO_Arduino/blob/master/examples/adafruitio_03_multiple_feeds/adafruitio_03_multiple_feeds.ino

https://github.com/adafruit/Adafruit_IO_Arduino/blob/master/examples/adafruitio_12_group_sub/adafruitio_12_group_sub.ino

Muscle Manager

img_20190207_083929 docu3screenshot_20190207-085348

Github: https://github.com/npyalex/Muscle-Sensor/

Concept

I get headaches often. I clench my jaw when I’m stressed, when I’m focusing, or when I’m nervous. Discussions with many professionals throughout my life have convinced me that habitual jaw clenching is bad for my teeth, my bones, and my muscles, and is a major factor in my headaches.

Mindfulness practice has helped somewhat. With increased body awareness I have been better able to notice when my jaw is clenched, and adjust. I then also ask myself “why might I have been clenching my jaw? What here is making me stressed, or anxious, or focused?” With the symptom noticed I am then able to look outward for the cause.

1

I imagined a simple wearable (a hat or band?) that could conceal the sensor stickers of an EMG muscle sensor. I’ve been enjoying playing around with If This Then That (IFTTT) in another class, and I thought this might be a fun way to integrate it as an unobtrusive opportunity for self-reflection.

My concept, then, was a wearable that detects jaw tension. When the tension is sustained the wearable sends a notification to the wearer’s phone, reminding them that their jaw is tense. No judgement is implied; it is simply a statement of fact. The wearer can self-reflect and adjust as required.

A high sensor reading is sent to Adafruit IO, which triggers an IFTTT applet, which sends a notification to the user’s phone.

Process

After playing with and testing the sensor on various muscles with the help of my friend and colleague Amreen, I wrote some code interfacing with Adafruit IO and commented it for clarity, viewable here.

The code uses the Adafruit Feather microcontroller and Adafruit IO over Wi-Fi to connect the board to the internet. I used the Adafruit IO Arduino library and the guide here. Note that the Adafruit Feather won’t connect to 5 GHz Wi-Fi networks!

In summary, the code checks every ten seconds to see if your tension is above a threshold. If it’s above the threshold twice in a row (signifying extended tension) it sends a “1” value to Adafruit IO. An IFTTT applet, listening to the feed, sends a notification to the user’s phone with a non-judgemental reminder that they are experiencing jaw tension.
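A condensed sketch of that logic follows; the actual, commented code is in the repo linked above, and the feed name, threshold value, and pin assignments here are placeholders.

#include "AdafruitIO_WiFi.h"

#define IO_USERNAME "your_username"
#define IO_KEY      "your_key"
#define WIFI_SSID   "your_ssid"
#define WIFI_PASS   "your_pass"

AdafruitIO_WiFi io(IO_USERNAME, IO_KEY, WIFI_SSID, WIFI_PASS);
AdafruitIO_Feed *tension = io.feed("jaw-tension");   // placeholder feed name

const int emgPin    = A0;     // MyoWare SIG output
const int ledPin    = 13;     // built-in LED mirrors the reading
const int threshold = 600;    // tune to the wearer and electrode placement
bool wasTense = false;        // was the previous check above the threshold?

void setup() {
  pinMode(ledPin, OUTPUT);
  io.connect();
  while (io.status() < AIO_CONNECTED) {
    delay(500);
  }
}

void loop() {
  io.run();

  int reading = analogRead(emgPin);
  digitalWrite(ledPin, reading > threshold ? HIGH : LOW);   // real-time visual cue

  if (reading > threshold && wasTense) {
    tension->save(1);    // two checks in a row above threshold: IFTTT sends the notification
    wasTense = false;    // one way to avoid sending a notification on every check of a long clench
  } else {
    wasTense = (reading > threshold);
  }

  delay(10000);   // check every ten seconds
}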

img_20190207_091230

Wiring is simple. From the Getting Started with MyoWare Muscle Sensor guide on the Adafruit website:

docu4

My code includes an LED so you can have a real-time visual representation of the sensor’s readings. That LED is currently assigned to pin 13, the built-in LED on the microcontroller, so no wiring is necessary.

docu2

The Adafruit IO feed is created automatically when the Feather sends data to it.

docu3

screenshot_20190207-085348

Stumbling Blocks

Everything works in theory, but the fact that the electrodes on the EMG sensor are only good for two or three placements has been an impediment to testing. Early in testing I was able to get values consistently from the sensor, but by the end it was unresponsive and was only delivering the same, very high, result.

docu1

The upshot of this is that the infrastructure of the project – Adafruit IO to IFTTT – can still be tested, and that it works consistently.

Next Steps

More tuning of sensor placement and adjustment of thresholds and timing is required. If this project was to become a wearable it would have to move away from the MyoWare EMG sensor, as the sticker-based electrodes are not feasible for long-term use or use with a piece of clothing.

Thoughts

This is ultimately a very personal self-reflection project, as I have lived most of my life with physical issues caused by or related to jaw tension. I imagine that this system, were it used by other people, would be adapted into a system that manages whatever tension points they hold.

I chose to send a smartphone notification rather than some other form of feedback because the smartphone is ubiquitous, and notifications tend not to draw undue notice. As monitoring one’s body is a personal experience, I would prefer to have the feedback presented in a relatively subtle and unobtrusive way.

The other option for feedback I considered was haptics, but this still felt more obtrusive than I would have liked. A smartphone notification can be ignored or forgotten, while a physical sensation can not. The intention with this piece is to gently remind the user that they’re carrying tension, and communicate that information when the user is ready for it, not to force them to confront it.

References

Welcome to Adafruit IO. (n.d.). Retrieved from https://learn.adafruit.com/welcome-to-adafruit-io/arduino-and-adafruit-io

Getting Started with MyoWare Muscle Sensor. (n.d.). Retrieved from https://learn.adafruit.com/getting-started-with-myoware-muscle-sensor/placing-electrodes

IFTTT. (n.d.). IFTTT helps your apps and devices work together. Retrieved from https://ifttt.com/

Range of Motion Visualizer

img_20190130_145055

GitHub: https://github.com/npyalex/Knee-Stretch-Sensor

Overview

The Range of Motion Visualizer is an exploration of the capabilities of stretch-sensing conductive fabric. The Visualizer uses a stretch sensor in a bending motion rather than a stretching one and visualizes the output as a multicoloured band. In concept, the Visualizer is imagined as a rehabilitation tool. When prescribed a limited range of motion as part of physical therapy, a wearer would calibrate the Visualizer to their prescribed range and see their motion represented. Safe motion that would not harm their recovery would (at this stage, anyway) be represented by a small green bar. The bar would elongate and turn yellow as they approached the edge of their prescribed range, and turn red when they were outside it.

Process

After exploring the functionalities and properties of the fabrics we had been given in-class, I decided to work with the stretch sensor. I had worked with pressure sensors briefly in the first semester, so I preferred not to work with them, and had been knocking around the idea for a wearable that would visualize motion for a little while, inspired by the Motex Project for smart textiles. The stretch sensor was an opportunity to realize that idea.

img_20190124_154352

knee-fritz

I put together a circuit using the diagram provided in class as an example, and ran it while watching the Arduino Serial Monitor to see what kind of readings it generated.

Then I took some code from the Ubiquitous Computing class and used it as the basis for moving the sensor readings into Processing.
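The Arduino side of that pipeline is just the stretch sensor’s voltage divider read on an analog pin and printed over serial, roughly like this (assuming the divider sits on A0, as in my final wiring):

const int sensorPin = A0;   // junction of the stretch sensor and the fixed resistor

void setup() {
  Serial.begin(9600);       // the Processing sketch listens at the same baud rate
}

void loop() {
  int reading = analogRead(sensorPin);   // 0-1023, changing as the fabric is bent
  Serial.println(reading);               // one reading per line for easy parsing
  delay(50);
}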

Once that confirmed the connection to Processing was working, I wrote some code to adjust the length of a displayed rectangle based on the sensor reading.

After a few tests I determined what felt like a good range to implement the changing colours – yellow for approaching the danger zone, and red for beyond.

Then I put together the sleeve for holding the sensor. I took an old sock and cut away the foot. I sewed a hem into the sock, then turned it inside out and sewed in the length of stretch sensor fabric with conductive thread.

img_20190129_185352

img_20190129_192905

I left long lengths of conductive thread to attach alligator cables to while testing.

In testing this was uncomfortable and unwieldy. Because of the simplicity of the circuit I decided to minimize it. I used a small breadboard. I removed the extraneous wires and ran the power side of the sensor directly from the 5V pin. I set up the variable resistor directly from the A0 pin, and the resistor directly to ground.

img_20190129_195818

I removed the extra threads and sewed patches for alligator clips to attach to. I had to recalibrate the code at this time, as the sensor had begun returning lower readings. As a nice bonus, touching the two pads completed the circuit and served as a default “max” for the range-of-motion tracking function.

img_20190130_145055

I continued to explore visualization. I wanted to create a curve that mirrored the bend of the user’s arm. I used the curve() function and explored using curveVertex() within the beginShape()/endShape() functions. I did get a reactive curve going, but I decided that it was not as strong a visualization as the bar.

silly-boy

Next Steps

This could easily be made wireless – perhaps with XBees? I would have to do more sewing, including a pocket for the microcontroller. I also considered exploring haptic feedback in addition to – or perhaps instead of – visual feedback. I would like to include a vibrating motor that would buzz lightly when in the yellow zone and strongly when in the red. Beyond that, I would want to create a means for quickly and simply re-calibrating the sensor on the fly, and continue working on using a curved image as a visualization.

References

https://github.com/npyalex/Ubiquitous-Connectivity-Project

https://docs.google.com/presentation/d/1xHIjrmXHmO3N-q6QnVvZ4OYTXpfT2C0L8y68Nwbwj5U/edit#slide=id.g4e5141135c_0_67

http://www.motex-research.eu/about-motex.html

Deceptive Jumping Necklace

After a creative elicitation exercise involving mix-and-matching verbs, adverbs, and feelings, I sketched out a series of goofy designs.

1

Many of them were so goofy or so obtuse that when it came time to select one to pursue for this project, they had to be discarded by default. The one idea that I thought would be achievable based on the parameters of the assignment, and not so complex as to necessarily require a microcontroller, was the so-called Deceptive Jumping Necklace.

1

The Necklace would sit clasped on its wearer’s neck until, when it was most unexpected, it would unclasp and leap off. When expanding the design I imagined it held fast by a set of electromagnets controlled by a microcontroller hidden in the central pendant. This central pendant would also hold springs that would push the necklace away when it was activated. It was goofy, but it could be read as a piece of critical or dark design, which are design avenues I am interested in.

1-1

I had no practical experience with knitting. I had done some simple weaving before, but I wanted to learn to knit. Even at the time I felt that weaving would be more appropriate than knitting for this object, but I wanted to take the opportunity to push myself and learn something new. I planned to knit the body of the necklace and weave a small patch to serve as the mounting for the magnetic clasp.

img_20190117_142336

It took me several false starts to get the hang of knitting. The first round of stitches that set up the first needle was simple enough, but the process and movements for the core stitching did not come easily. Furthermore, in my hubris, I had asked my instructor for small needles, as I wished to knit something that would have the same stitch density as a weave. She warned me that larger needles would make larger loops, which would be easier to knit, and she was right. The small loops were difficult to keep ordered and occasionally got very tight.

I had to stop and restart several times, but eventually, thanks to a very helpful YouTube video, I got it going.

While knitting, I decided that a necklace was the wrong form for the project. A bracelet would maintain the same kind of affordance as the necklace with respect to the critical design aspects, and would be a little simpler and faster to make. Also, I had by now decided to try to realize the project without a microcontroller, and a bracelet would be a better fit for an object that was just a swatch of knitted cloth.

As I knit, I attempted to include two lengths of conductive thread – one on the fourth stitch from the beginning, and one on the fourth stitch from the end. These will eventually become the wiring that keeps the clasp engaged.

img_20190123_164546

The bracelet turned out well enough considering it was my first serious foray into knitting. For some reason – probably through missing or fouling up stitches – the finished knit has a distinct curvature to it, which works for a bracelet!

img_20190123_164542

For the next step, I wove a swatch to serve as a place to anchor the clasp mechanism. I had done some weaving in workshops previously so this was familiar to me, and a YouTube video was a good refresher.

img_20190123_165848

I tied off the cut portions of the weft and trimmed them down.

img_20190123_170837

img_20190124_092253

There is much more work to do. Having never knitted before, I spent a majority of the week getting comfortable with the process through trial and error. I understand now how to recognize a mistake and fix it right away, which I did not when I began. Mistakes I made early in the knit were deeply woven before I recognized what they were.

Furthermore, I settled on the initial design of the project before I truly understood the needs of it. Before this piece is completed I intend to re-imagine it so it can function without a microcontroller, and to utilize one of the fabric-based sensors. Perhaps I will eschew magnets altogether?

While I’m disappointed to not have a completed product I am excited to have discovered knitting, which I find fun and relaxing. Now that the hurdle of learning to knit has been overcome I’m looking forward to continuing exploring, and perhaps knitting myself a big fluffy scarf.

References & Resources

RJ Knits (2018, November 24). How to Knit: Easy for Beginners. Retrieved from https://www.youtube.com/watch?v=p_R1UDsNOMk&feature=youtu.be

The Met (2016, March 11). #MetKids-Weave on a Mini Loom. Retrieved from https://www.youtube.com/watch?v=AWLIy-Um7_0