Odd Man In & Consider Growth

Project Log – Odd Man In & Consider Growth (Experiment 4)

Dave Foster, Chris Luginbuhl, Karo Castro-Wunsch

Creation & Computation DIGF-6037-001 (Kate Hartman & Nicholas Puckett)

Project Description (from course Assignments tab):

Digital messaging is a pervasive part of our daily lives that has taken on a variety of forms and formats.  Working in small groups you will develop hardware and software systems that investigate methods of messaging and notifications.  Some of the topics we will cover include: synchronous/asynchronous communication, ambient data, alerts, web API’s, and messaging services.

For this project you will design and create a custom device that allows the members of your group to communicate in new ways.  Using the Wifi connection of the Feather M0, you will create standalone objects that communicate over the internet.  Each group must consider how these devices create a specific language that allows the things, people, or places on the ends of the network to communicate.

Project Initial Ideas:

From Dave (Balsamiq sketch below):

The Problem Being Addressed:

We are all, at some point in our lives, deeply immersed in one project or another (essays, art projects, etc.) and have no wish to be interrupted by Facebook Messenger, Twitter, etc. (even if these services are part of another assigned project).  What if there were a step back to the old-fashioned pager?  A self-standing, project-specific communication request alarm triggered by any content added to the project’s “group application” (Facebook, Twitter, etc.)?

The Concept:

A project specific communication request notification device (similar in concept to early pagers) which bypasses or steps back from direct Facebook/Chat-room/Web pop-ups/etc.  The idea is that the group’s 3 (or more) members would not have to be logged in to anything to receive notification that another group member was requesting communication.  It would be implied that this would be specific to messages about the project assigned to the group.

A project-specific Facebook group, Twitter account and E-mail account would be set up with membership restricted to the group members for a specific project.  Each communication modality would be assigned a colour (red, green or blue), and each member would carry/wear a small device containing their Feather board wired to differently coloured LEDs and (possibly) a vibration motor or similar noisemaker.  The LED of a modality’s assigned colour would blink (and the possible noisemaker would buzz) whenever any member had posted to that service.

Use would be as a “filter”: if you’re frantically working on an essay or other assignment, you would not have to keep any other service or application active (no distractions from your work due to pop-ups, etc.).  All you would see is a blinking light if a project-specific message was posted by another group member.


From Chris & Karo:

Concept 1:

The problem & context:

People who are trying to establish a consistent rhythm in their lives often struggle to maintain it. Typical examples include:

  • Self-employed people who wish to be at their desk and working by a certain time each day
  • People learning to play a musical instrument by practicing daily
  • People wishing to establish a regular mindfulness meditation practice
  • Writers struggling to finish a book

Establishing a routine is an important part of maintaining helpful habits (Gardner, 2012).

Accountability partners work in different ways, but one format is a quick, daily check-in, along the lines of “It’s 9am and I’m at my desk”.

The accountability partner is an idea that has gained momentum since the 90s. There is ample testimony that accountability improves the chances of sticking with a program (Inc., Quiet Rev, Huffington Post).

From Wikipedia:  Not having an accountability partner to help a person accomplish their goal is one reason 92% of people did not accomplish their New Year’s resolution according to a University of Scranton study[5] by Dan Diamond in Forbes and an article by Dale Tyson.[6]

See also: Why an Accountability Buddy Is Your Secret Weapon for Faster Growth (Entrepreneur Magazine)

At the same time, more and more people are working from home (3% of US workers work from home at least half time, according to CNN and Global Workplace Analytics and FlexJobs (link)). This means that remote accountability partners are more common than on-site ones, typically connecting via smartphones and/or social media. One problem with this arrangement is that smartphones and social media are perfect tools for procrastination (Meier, 2016).

Solutions

One approach to accountability partnering that avoids these pitfalls follows the approach of calm technology (Weiser & Brown, 1995) – that technology can help us best by existing at the periphery of awareness, rather than by demanding our attention.

Borrowing from Kate Hartman’s book “Make: Wearable Electronics: Design, prototype and wear your own interactive garments” (pp. 56–59), a sandwich switch could be used in a couch cushion, chair cushion or meditation cushion to send a wireless signal to an accountability partner indicating that the user is sitting at their work (practice, meditation, etc.). This signal could result in a public post for accountability, or a private signal sent just to the accountability partner.

Numerous variations are possible… the partners could each have the same device and commit to both sitting down at 9am. Each partner receives a visible, tactile or auditory cue that the other has arrived. Progress and consistency could be tracked on an online dashboard.

This system could also be used for exercise accountability tracking (e.g. with bend sensors integrated into exercise clothing), thereby overcoming some of the shortcomings of the much-hyped but disappointing accountability ecosystem Gym Pact (www.pactapp.com/).

References:

Gardner, B., Lally, P., & Wardle, J. (2012). Making health habitual: the psychology of “habit-formation” and general practice. The British Journal of General Practice, 62(605), 664–666. http://doi.org/10.3399/bjgp12X659466

Hartman, K. (2014). Make: Wearable Electronics: Design, prototype and wear your own interactive garments. O’Reilly Media, pp. 56–59.

Meier, A., Reinecke, L., & Meltzer, C. E. (2016). “Facebocrastination”? Predictors of using Facebook for procrastination and its effects on students’ well-being. Computers in Human Behavior, 64, 65–76.

Weiser, M., & Brown, J. S. (1995, December 21). Designing Calm Technology. http://www.ubiq.com/weiser/calmtech/calmtech.htm. Accessed 28 Oct. 2015.

Concept 2

Problem – we want to connect with loved ones, but screen time takes us away from each other.

Solutions

A metallic pendant worn against the skin incorporating the feather, heating element and Li-poly battery. A squeeze sends a message to the partner’s matching pendant, causing it to glow, vibrate, and/or heat up briefly. Partners wear matching pendants and messaging is 2-way.

Concept 3

Artwork investigating networks – neural networks, ecosystem, societal networks.

Networks are familiar from their hundreds of examples in nature and underpin the structure of our own brains: networks of neurons are at least partially responsible for how brains work.

If we created a WiFi- (or XBee-) connected network of physical nodes (a node being, e.g., an LED with a sensor or button in a housing), would it be possible to establish and demonstrate information being passed through the network?

Could human input or intervention alter, enhance or suppress the patterns of information?

If more people come to the party to interact, at what point does it enhance the complexity, connectivity or synchronicity of the network, and at what point does too much human interference make it collapse?

Final Form(s):

Dave:

  1. Free-standing container with Feather controller (coded for wireless access), lithium-ion battery & 3 LEDs (red, green, blue).  Container configured to “hook” over the screen of a laptop such that the LEDs are visible on a flat surface facing the user.
    1. Each LED coded to flash/blink (or at least turn on) to indicate a “communication request” through one of 3 pre-established group channels (Facebook, Twitter & E-mail).
    2. Allows for (semi-)uninterrupted work on other projects while remaining potentially aware of communication regarding the assigned group project.


Plain English Logic Flow (not code)

Begin:

All LEDs to OFF

Link Feather to Pubnub account

Link Pubnub account to Facebook, Twitter and E-mail account

Monitor

  1. Facebook group “Odd Man In”
  2. Twitter feed “Odd Man In”
  3. OCADU student E-mail account

IF – posting to Facebook group = YES

Go to RED LED

IF – posting to Twitter feed = YES

Go to GREEN LED

IF – posting to OCADU E-mail = YES

Go to BLUE LED

RED LED

IF – Not logged into Facebook

RED LED at maximum

IF – logged into Facebook

Ignore

GREEN LED

IF – Not logged into Twitter

GREEN LED at maximum

IF – logged into Twitter

Ignore

BLUE LED

IF – Not logged into E-mail account

BLUE LED at maximum

IF – logged into E-mail account

Ignore

Project Daily Log:

Tuesday, Nov. 14 – 12:00 to 13:30:

Chris & Dave met in the D/F lab at 12:00 for a design conference.  Dave put his idea forward (see above Balsamiq sketch) noting the physical design simplicity and adherence to the assignment parameters.  Chris mentioned some related notification input methods involving pressure sensors or switches installed in cushions.

While searching for applicable IFTTT links to Facebook Messenger, Dave received an E-mail from PubNub regarding a new service called ChatEngine that merits further examination.  Chris and Dave to meet again (probably) late Thursday afternoon.

Thurs Nov 16 – 6-8pm

Experimenting with example code. Trying to understand some of the workings:

-how JSON objects are created & parsed

-how pointers (* and &) work in C++ (see the toy sketch below)

-object-oriented programming principles (e.g. wifiClient object).
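
For future reference, a toy sketch of the pointer and reference syntax we were puzzling over – a generic C++/Arduino illustration written for this log, not code from the examples themselves:

    // Toy sketch: two ways for a function to modify a caller's variable.
    void addOneByPointer(int *n)   { *n += 1; }  // * dereferences the address
    void addOneByReference(int &n) { n += 1; }   // & makes n an alias; no dereference needed

    void setup() {
      Serial.begin(9600);
      int value = 10;
      int *ptr = &value;           // & here takes the address of value
      addOneByPointer(ptr);        // value becomes 11
      addOneByReference(value);    // value becomes 12
      Serial.println(value);       // prints 12
    }

    void loop() {}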

Fri Nov 17 – 13:00 and After Class

Chris & Dave met in the DF lab and after C & C class.  Further discussion as to which idea to implement and the method of implementation.  After a brief discussion with Kate, Dave seems to be leaning strongly towards the pager application, with Chris concentrating on the “point of presence” or “accountability partner” application.  At base, we’re trying to find a relatively simple “hey you” function specific to group members.  Looked through IFTTT for applets that might work.  We may be able to go through Adafruit I/O directly rather than over-complicating the exercise with Pubnub’s ChatEngine or similar.  We might be able to push a notification with a colour code for each communication method required (blue for Facebook Messenger, red for Twitter, etc.).  Chris has some coding examples which will be examined Tuesday.

Mon. Nov 20 (in class)

Further discussion amongst Dave, Chris & Karo re: final form for individual devices.  Dave is concentrating on the simple pager.  Chris is attracted to the “accountability partner” cushion idea.  There will be some differences in the three products.  Meeting in the D/F lab Tuesday.

We worked together to ensure everyone’s Feather worked and that we could publish and read from the same Pubnub channel. We also prototyped a version of the software which published a message (1, 2 or 3) depending on which of three switches was pressed, then read that message back from Pubnub and lit an LED corresponding to the message (see video: https://youtu.be/kjJLgw94Yiw).
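
A condensed sketch of that prototype logic, in the style of the PubNub Arduino examples – the keys, network credentials and pin numbers are placeholders, and the actual working code is in the repo linked below:

    // Publish "1", "2" or "3" when the matching switch is pressed, then read
    // the channel back and light the corresponding LED.
    #include <SPI.h>
    #include <WiFi101.h>                       // WiFi library for the Feather M0 WiFi
    #define PubNub_BASE_CLIENT WiFiClient
    #include <PubNub.h>

    const int switchPins[3] = {5, 6, 9};       // one switch per group member
    const int ledPins[3]    = {10, 11, 12};    // one LED per group member
    const char channel[]    = "oddManIn";

    void setup() {
      for (int i = 0; i < 3; i++) {
        pinMode(switchPins[i], INPUT_PULLUP);  // switches close to ground
        pinMode(ledPins[i], OUTPUT);
      }
      WiFi.begin("NETWORK", "PASSWORD");       // placeholder credentials
      while (WiFi.status() != WL_CONNECTED) delay(500);
      PubNub.begin("pub-key", "sub-key");      // placeholder PubNub keys
    }

    void loop() {
      // Publish the number of whichever switch is pressed, as a JSON string.
      for (int i = 0; i < 3; i++) {
        if (digitalRead(switchPins[i]) == LOW) {
          char msg[4];
          snprintf(msg, sizeof(msg), "\"%d\"", i + 1);
          WiFiClient *pub = PubNub.publish(channel, msg);
          if (pub) pub->stop();
          delay(250);                          // crude debounce
        }
      }
      // Read whatever arrives on the channel and light the matching LED.
      PubSubClient *sub = PubNub.subscribe(channel);
      if (sub) {
        String reply;
        while (sub->wait_for_data()) reply += (char)sub->read();
        sub->stop();
        for (int i = 0; i < 3; i++) {
          if (reply.indexOf((char)('1' + i)) >= 0) digitalWrite(ledPins[i], HIGH);
        }
      }
    }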


Caption: pager circuit working on a breadboard

Tues. Nov 21

Dave working on the container for the over-screen paging system as well as wiring for the Feather controller in his version of the project.  Final testing of code and Feather assembly (all LEDs working to spec. — code link and Fritzing diagram below).

https://github.com/DFstop/Odd-Man-In/blob/master/almost_instant_messaging.ino


Thursday Nov 23

Final form for Dave’s prototype cut from foamcore and glued together (photo below).  To set and cure overnight.  Hopefully all connections will remain intact after assembly and will work tomorrow.


Friday, November 24:

Both Chris’ and Karo’s applications functioned as planned.  Dave’s appears to have suffered a disconnect during construction of the housing, as it did not function (though it did Thursday night).  Tried multiple reloads/resets of the controller with no luck.  I believe I did at least explain the function adequately.

Consider Growth:

As mentioned above, “Consider Growth” branched off of our group project on Monday Nov 20, though we continued to meet as a group. We discussed the right kind of technology to bring to this problem – how to encourage users without distracting or irritating them.

We decided to build a sandwich switch which could be slid inside existing cushions (e.g. couch cushions or meditation cushions) or placed on top of a chair.

At the same time, we discussed different ways of reflecting users’ data back to them on a website or app. A simple line or bar graph showing daily and monthly sitting totals would be a conventional option, but we wanted to do something more imaginative to reflect the open-ended experience of taking up a new skill, discipline or hobby. More details on how the graphical representation evolved can be found below.

A typical use scenario works like this:

-At 9am, the server sends a message to start the session.

-If the user’s accountability partner is already sitting, the user’s cushion EL wire lights up to indicate so, and vice-versa.

-When the user sits, the user’s EL wire turns off.

-When both users are sitting, the server sends a message that rings the users’ bells, signalling the start of the session.

-When either user sits, a generative animation “grows” on the webpage. If both users sit, both animations grow. If either one gets up, their animation stops growing.

-When the timer reaches a set amount of time (e.g. 20 minutes), the bell rings signalling the end of the session.

-Note that once the users sit, they can choose to view the animation or not. They will not need to interact with or receive notifications from the system until the end bell rings. This is a deliberate measure to reduce distractions.

We needed to make a larger “sandwich switch” than the one illustrated in Kate’s book (Hartman, 2014, pp. 56–59). We decided to include internal “springs” of felt. As luck would have it, our first guess about the design of the insulator between the layers of conductive fabric worked well in testing on a variety of cushions. The conductive fabric was cut into two identical shapes and ironed onto the felt, and, along with the insulation layer, the five-layer sandwich was stitched together with bar tacks in the corners.


Caption: Switch – black felt with silver conductive fabric


Caption: You can’t solder to this conductive fabric. We had to stitch with conductive thread.


Caption: The lower layer of conductive fabric is visible through the layer of insulating felt with holes cut into it. It was tempting to make the holes in a seasonal snowflake pattern.


Caption: Assembled switch. The garter clip provides strain relief for the barrel jack used as a connector. We wanted to use a two-prong plug that could not be accidentally connected to our battery pack, which had a JST connector.

We chose EL wire for this application because of its soft, even light and its inherent flexibility and adaptability to different cushions. We also wanted to use a solenoid to ring a meditation gong to signal the beginning and end of the sitting session, rather than using a screen- or mobile-device-based notification.

Both the solenoid and EL wire required 5V, so we used NPN transistors to switch the 4.5V from our battery pack using the Feather’s 3.3V logic. We tested this assembly by having the sandwich switch operate the EL wire and solenoid using a simple Arduino sketch (see video: https://youtu.be/5M5sWKTFRKo)
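
The test sketch was along these lines (a minimal reconstruction with placeholder pin numbers, not the exact file we ran):

    // When the sandwich switch closes, drive the two NPN transistor bases:
    // one powers the EL wire inverter, the other fires the solenoid briefly.
    const int switchPin   = 5;    // sandwich switch to ground, internal pull-up
    const int elWirePin   = 10;   // base of the NPN switching the EL inverter
    const int solenoidPin = 11;   // base of the NPN switching the solenoid

    void setup() {
      pinMode(switchPin, INPUT_PULLUP);
      pinMode(elWirePin, OUTPUT);
      pinMode(solenoidPin, OUTPUT);
    }

    void loop() {
      bool sitting = (digitalRead(switchPin) == LOW);  // closed switch pulls the pin low
      digitalWrite(elWirePin, sitting ? HIGH : LOW);
      if (sitting) {
        digitalWrite(solenoidPin, HIGH);   // tap the gong...
        delay(100);
        digitalWrite(solenoidPin, LOW);    // ...then release so the coil doesn't overheat
        delay(2000);
      }
    }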

Putting the whole assembly into the project box with strain relief was time consuming, but it was helpful to have connectors on everything so that it could be transported easily without yanking wires out of their connections accidentally.


Caption: The solenoid (top), feather and featherwing protoboard (upper middle) with EL wire inverter (bottom) in a project box.


Caption: The wiring diagram for the device. It was all connected to the featherwing protoboard shown at the top.

We didn’t have time to make a second copy of the switch and circuit, and decided to demonstrate the operation by using a second Feather (running the same code) with an SPST tactile switch attached.

Some design sketches are below:


Caption: System architecture v1. “Everything is going to fit easily and there will be no need for a box”


Caption: The two transistor-based switch circuits. The one for the solenoid has a diode to prevent a high reverse voltage from damaging the transistor when the solenoid switches off, since it is an inductive load.


Caption: Design notebook page showing final system architecture including connectors. The notes are a prioritized list of the issues to work on. We got to most of these…


Caption: Final assembly with sandwich switch removed from inside of cushion.

The code for the Arduino is here: https://github.com/ChrisLuginbuhl/consider_growth

The code uses a single channel to publish and read from Pubnub. Messages are JSON formatted and have a user name as the key, with a simple binary code indicating whether that user is sitting (e.g. {karoMessage:1} means Karo is sitting). The website JavaScript also receives these messages, and sends a message to ring the bell.
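
To illustrate the message format only (a standalone snippet following the example above, not the repo code):

    // Each message is a one-key JSON object, e.g. {"karoMessage":1}.
    void setup() {
      Serial.begin(9600);

      // Building an outgoing message:
      char msg[32];
      int sitting = 1;   // 1 = on the cushion, 0 = away
      snprintf(msg, sizeof(msg), "{\"karoMessage\":%d}", sitting);
      Serial.println(msg);                        // -> {"karoMessage":1}

      // For a binary payload like this, a crude string check is enough to parse:
      String incoming = "{\"chrisMessage\":0}";
      bool partnerSitting = incoming.indexOf(":1") >= 0;
      Serial.println(partnerSitting ? "partner sitting" : "partner away");
    }

    void loop() {}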

Consider Growth Visualization

Code: https://github.com/KaroAntonio/consider-growth

Demo: http://realgoodinternet.me/consider-growth/

The original Consider Growth visualization concept was to employ a visualization that grows organically, i.e. morphogenetic structures that imitate the growth and system interactions of biological forms. In pursuit of this, I implemented a JS port of the differential line algorithm, hoping to use the underlying growth pattern to inform the display. The implementation worked, but due to runtime considerations its current version isn’t usable in real time. Moving on from this, we employed trigonometric waves, Perlin deformations and simple modular rhythms in combination to produce a series of parametric randomized forms that have a large amount of variation but are consistently visually engaging. The waves’ specifics can be investigated in the repo.

The waveforms were hooked into PubNub via PubNub’s JS API, so that a waveform is turned on whenever someone sits on their pillow and announces their presence in the virtual space. The intention of using a line as an avatar is to produce an environment which is non-competitive and stripped down, the limitations allowing users to be present without having to make any choices about their virtual actions and representation.

 

Assignment 3 – Help Sean Study

SEAN HARKIN



The initial concept was to design and build a device to track the amount of time the user has been playing video games, and create some sort of social-shaming output which forces the user to stop spending too much of their time playing. Unfortunately, the original scope of the project had to be reduced due to coding errors, so the final product is a device which acts as an add-on to your gamepad. When it is turned on, a tweet is sent from your own Twitter account every 10 minutes, notifying all of your followers that you are not working.


CODE

p5

https://github.com/SeanHarkin/stopsean

Arduino https://github.com/SeanHarkin/stopsean/blob/3f2445de7765f4855ee9a55d9d7875edd9b95118/stopsean.ino

I’ve been having some issues with the GitHub program, so there is only one version of the code uploaded. I have shown my construction through the comments.


SKETCHES


Figure 6 – Gamepad Add On Sketch(1)


Figure 7 – Gamepad Add On Sketch (2)


DESIGN FILES


Figure 1 – Steam API for Rumoclause


Figure 2 – @helpseanstudy Twitter Feed

Figure 3 – Test API


Figure 4 – A sample from the barrage of messages I sent myself using IFTTT


Figure 5 – The interference between API and Timer codes


Figure 8 – Adafruit.IO working test feed


Figure 9 – IFTTT working, then immediately not working as it was flooded with input from the Feather via Adafruit.io


PHOTOGRAPHS


Photographs of the prototype and presentation


PROCESS JOURNAL

Being distracted for long periods of time by video games has been and continues to be a very real problem for me, and as such, I wanted to create a novel device which would help me keep track of how long I’d been playing for, as well as give me some sort of incentive to stop and get back to work.

I considered 2 different approaches for tackling this brief:

Option 1

I very quickly realized I could use Steam’s Developer Tools to access Steam’s APIs. This would give me access to my own data (username: Rumoclause (fig. 1)), which would then allow me to track how long I’d been playing using the lastlogoff data. With this, I planned to push a notification through a P5 timer page which, after an hour, would activate some sort of output device using the Adafruit Feather. Some options I had considered for the output device:

  1. A blinking LED which would flash to let the people around me know I had been playing for over an hour (specifically designed as an undisruptive way to discourage me from wasting time in the studio space).
  2. A speaker which would announce every minute I had been playing after the hour (specifically designed as a very disruptive way to discourage me from wasting time in the studio space).
  3. A device which would just turn the game off (though I struggled with how the Adafruit Feather would be incorporated into this idea).
  4. A device which gives me an electric shock every minute after the hour of allotted game time (a suggestion from a peer who I now think might want to kill me).

Option 2

In summary, the reversal of this process. I would connect my Adafruit Feather to my gamepad, which would start a P5 timer when the device was turned on. Once the timer reached an hour, I would use IFTTT to publicly shame me, hopefully forcing me to decrease my gaming time. Some of the options for the IFTTT output which I explored:

    1. Sending myself text alerts/messages to my phone (too easy to ignore, not enough public shame).
    2. Sending text alerts/messages to my girlfriend and family (probably the most effective method I came up with – however, I worried that they might collectively kill me after a while).
    3. Posting to social media. This was ultimately the route I chose, as it seemed public enough to force me to change my habits, but not so direct as to have my girlfriend take my laptop away from me. Because I didn’t want my current social media accounts to be banned during testing, and because I really enjoyed the idea of giving my mother this specific feed to follow so she knew when to nag me appropriately, I chose to make a new Twitter feed (fig. 2).

The reason I chose option 2 was ultimately that this was a 2-week project and I thought it would be easier to produce in the time allotted. This turned out to be a significant overestimation of my own coding skills, resulting in a product which did not work as intended, or very consistently.

I began with the IFTTT. In class, using Nick’s example and walk-through, I was able to connect my Adafruit.io test page and my personal Twitter account. I chose to send myself a message every time there was data (mousePressed) on the feed test1 (fig. 3 & 4). This taught me 2 things:

  1. Do not use your own Twitter account. It sends a lot of data, which will post a lot, and you will be banned.
  2. IFTTT seemed simple! (I was very wrong).

I established a new feed through Adafruit.io called stopsean and began working in p5 in order to connect the input from my Feather to the feed.

The whole concept centred around creating a timer in p5 which could send data on to Adafruit.io once a designated time had been reached. I foolishly thought this would be a fairly easy exercise, but failed quite terribly. I began by looking at how this had been done before, and found many examples written for Processing or JavaScript, but I struggled to find examples for p5. I did find a video from The Coding Train (link found in Context) – however, when I attempted to recreate my own timer from these instructions (I had to edit it, as I was building a timer counting up from 0 instead of down from 5:00), the rest of my code would not work (fig. 5).

As the deadline approached, I received some good advice from one of my peers: I was running out of time and had to prioritize functionality. All I realistically had to do was get my Feather to talk to my p5 in a novel way. If I couldn’t get the timer working, but the .io and IFTTT were working (fig. 8 + 9), then I should concentrate on that connection.

To save time, I opted to use one of Arduino’s button examples, removing the unnecessary LED code; I pushed a button and the value changed from 1 to 0. I also added in a delay because, when I first connected the Feather to my p5, it could not handle the amount of incoming data.

Ultimately the code for my button was so simple it was not an issue. I had a greater issue with the wiring, in terms of reliability. Due to the placement of my Feather (chosen to preserve the functionality of the gamepad) and my lack of time to build a proper casing for the components, the wires connecting the Feather to the button had to be changed repeatedly due to malfunction. I had intentionally left the design and build of the casing until the end, as I knew this would be the easiest part of the project given my skills. Unfortunately, no matter how skilled I may think myself to be, you can’t make anything with no time at all. I’ve included a proposed sketch of how my casing would have looked (fig. 6 + 7).

Ultimately I do not view the project as a success. The product did not work reliably. However, it was a good learning exercise, because I do know why I do not consider this project a success: to start with, my coding skills are not up to par. This is little comfort now, but it is something I plan on building up when I have more time (i.e. the winter break). I do not plan on being a programmer, but I think coding is now a base-level skill in most areas of design and one in which I am sorely lacking. From feedback from the critique, from my peers and from some of my own reflection, I’ve realized a few of my issues on this particular piece:

  • The timer from The Coding Train would have almost certainly worked if I had executed it properly.
  • The reason my .io feed was being flooded – which caused my IFTTT applets to be rejected – was my Arduino and p5 code. At both of these stages I could have added code which would only send one button push at a time (see the sketch below), resulting in a more functional product.
  • If I had been able to add a randomized component (time, random value, etc.) into the data being sent to IFTTT, then I could have avoided the applets being blocked.
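
A minimal sketch of that “send only one push at a time” fix – state-change detection instead of the delay() I actually used (the pin number is a placeholder):

    const int buttonPin = 5;
    int lastState = HIGH;            // with INPUT_PULLUP, HIGH = not pressed

    void setup() {
      pinMode(buttonPin, INPUT_PULLUP);
      Serial.begin(9600);
    }

    void loop() {
      int state = digitalRead(buttonPin);
      if (state == LOW && lastState == HIGH) {
        Serial.println(1);           // one message per press, instead of a flood
        delay(20);                   // small debounce for contact bounce
      }
      lastState = state;
    }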

PROJECT CONTEXT

There have been many studies into the long-term effects of video games on users, both positive and negative – such as the very interesting work Daphne Bavelier has done on the subject (https://www.youtube.com/watch?time_continue=573&v=FktsFcooIG8). The purpose of this project was not to deter myself from playing video games for that reason, but because I’m very busy with my master’s and am prone to getting distracted and procrastinating.

There are hundreds – if not thousands – of different techniques, products and services which have been created for the express purpose of helping us focus in a world where we are constantly surrounded by distractions. This product was designed specifically for me, as I know how guilty I feel when I take a short break to play a game for an hour and am still sitting there playing 4 hours later. Many IoT products we use daily now use social media as a method of encouraging healthy lifestyle habits (while advertising their own products or services), such as Fitbit, Nike+, etc. However, I decided to turn this idea around and publicly shame myself out of procrastinating for long periods of time.

Coding Train Timer https://www.youtube.com/watch?v=MLtAMg9_Svw

Underdress’d by Savaya Shinkaruk

Title: Underdress’d

By: Savaya Shinkaruk


Project Description

My product was created as my response to our Peripheral assignment.

“For this project you will work individually to create a new peripheral for your computer that customizes an input or output function specifically for you.  This could be a new input device that changes the way you interact with your digital world or a means of notifying you about physical or virtual events around you.  To achieve this, you will use P5.js in conjunction with a variety of web APIs and a USB connection to your controller to create your new device.  Beyond the intended functionality of your new peripheral, you should also consider its materiality and spatial relationship to you and your computer.”  – Nick and Kate.

When I was first assigned this project, I wanted to create something that would work in the fashion industry. During class we started talking about weather APIs and I got an idea. When getting dressed, I run into issues where I am not wearing something 100% appropriate for the weather conditions – so I thought: what if I could make something to help with this fashion problem?

I go into more depth about the journey of my process towards making my product in my blog below, but the overall description of my project is:

My Underdress’d experiment is a project for people to work within their own closet to know what to wear for the day based on the temperature outside.

The goal of this project is for people to never be too warm or too cold again based on what they are wearing for the day.

Everything revolves around the current temperature; I based mine on Toronto’s weather, as that is where I am living. Then, to know whether a clothing item works for the current temperature, you have UID cards that act as ID labels for your clothing. So: open the website link, take your UID card and scan it on the scanner (which you have in your home), and the word Yes! will pop up on the screen if you should wear it; if it is a Nope!, you will get a blank screen.

And wow! You will never be underdress’d.

So, continue on to the rest of my page to read more about me and my journey in creating Underdress’d.


About Me

Savaya Shinkaruk is a fashion stylist and journalist with a keen interest in blending components of the online fashion industry with design. She graduated with a BA in communications in 2017 and is completing her MDes in Digital Futures at OCAD University.


BLOG: PROCESS JOURNAL

DAY ONE

DAY ONE OF MY EXPERIMENT:

October 30, 2017

Today we were assigned our third experiment for this class, and although I am nervous about this assignment because I am working alone, I started to look on Google for ideas on what could be created.

This assignment is to come up with a new peripheral for your computer that customizes an input and output method.

Some of the ideas I am interested in exploring more of:

  • Link the weather on my computer to link to Instagram on my iPhone to give me ideas on what to wear.
  • Take a photo of outside with your iPhone and it will connect you to a hashtag of what to wear on Instagram.
  • Study party – where a timer goes off when it is time to stop studying and ‘party’
  • You share your pulse and it will give off a scene on your computer of what you are feeling. How much coffee you need??
  • Morning selfie to tell you how much coffee you need based on your motion.
  • Morning type in to see how much coffee you need based on the mood you input.
  • Moving display with a snack time feature when you stop typing after a bit.
  • Using a slider to input your price on Amazon or Ebay.

Notes on breaking down my ideas with input and output:

  • On Idea # 1: INPUT = voice command (Alexa or Google) OR a “what do I wear today” button // links to the weather through IF statements // OUTPUT = Instagram
  • On Idea # 2: INPUT = photo // OUTPUT = Instagram
  • On Idea # 3: Need to do more research on.
  • On Idea # 4: INPUT = pulse sensor or slider?? // OUTPUT = Digital Scene
  • On Idea # 5: INPUT = photograph // OUTPUT = Digital Scene
  • On Idea # 6: INPUT = keyboard // OUTPUT = Digital Scene
  • On Idea # 7: INPUT = keyboard // OUTPUT = Digital Scene
  • On Idea # 8: INPUT = slider // OUTPUT = Ebay numbers

The first idea is the one I like the most.

From there I talked with Finlay about how I can do this when it comes to coding, as I am new to this.

Here are his suggestions:

  • Create this as a way to read your clothing with RFID (not using Instagram now) and check whether it is OK to wear based on the current weather. // Get the reader and stickers at Creatron and place the stickers on clothing.
  • Create a spreadsheet on Excel to have characteristics and YES and NO answers and statements based on the YES and NO temperature ranges I input.
  • Characteristics include: shoes // bottoms // tops // outerwear – and so on.
  • Use RFID reader as a way to keep inventory

He suggested it would be easier for me to do the coding with the RFID reader than to find a way to link my weather API to an Instagram hashtag.

In class on Friday November 3 we will be learning more about how to use input and output sources for this project. And I will get Nick’s input on what he thinks about this.

End of day one.

Thanks!


DAY TWO

DAY TWO OF MY EXPERIMENT:

November 3, 2017

Today we are working with Nick in class to understand more about how to create this project.

I asked Nick and some classmates about my project ideas and what they think will work…

So, in the end I am going to do (in short form):

I will have a “what do I wear” button that you click, which will bring you to your weather input, and through RFID it will tell you what clothing you can wear that day based on the weather.

Today I also started to work on my YES and NO answers and statements in an excel sheet.

Here is a link to the excel sheet:

https://docs.google.com/spreadsheets/d/16twY9lrF14V6dvcrtCDqIBHXuPN-m6WgZop_KFjVDyE/edit?usp=sharing

I also went to Creatron to purchase the needed materials: the RFID scanner and scanning cards.

Here are the new items I have never used, recently purchased, and will be incorporating into my experiment:  


Now, because even 15 RFID cards were expensive, I only purchased that many, so not everything in the Excel sheet will be available for this experiment. *I am not going to use all 15 in the end – I am going to use 12, so I also have some left over in case something happens.

The clothing items colour-coded in RED are the ones available in this experiment, as shown in the Excel sheet linked above.

And all the clothing in this experiment is from my personal wardrobe.

End of day two.

Thanks!


DAY THREE

DAY THREE OF MY EXPERIMENT:

November 6, 2017

Today I worked on my sketches of how I want this experiment to look in the end and how I want it to function.

Here is an image of those sketches:


I also worked on getting my weather API to work. And yay! Thank you to Feng – she helped me get it all working.

I needed to create a function where the information linked to the button was grabbing the Main.Temp information in a constant way. This is what I was missing in my code.

I followed the information Nick gave us about the TTC code and followed that system – which for the most part worked! I was just missing a few elements, like console.log.

Strangely, for all the issues I seemed to have, adding a console.log would make things work!

The research I did to get my Weather API working and functioning with P5 was from these links:

https://openweathermap.org/current#name

https://www.youtube.com/watch?v=ecT42O6I_WI&t=802s

I followed the YouTube video step by step, which got me all the correct information; I just needed to expand it a bit to include all the weather functions like Main.Temp and/or Max.Temp and Min.Temp (which I didn’t end up including in the end). Through this process of expanding the original code, I learnt a lot about the need for variables. For something to run and gather the data needed, I need to create a variable for it to happen; the function is the grouping of what data is being transported.

Here are the images of what the Weather API will show for information (BEFORE AND AFTER CLICKING THE BUTTON):

The numbers do not change that frequently as the weather does not update every second.

Image of BEFORE clicking “get current weather”:


Image of AFTER clicking “get current weather”:


End of day three.

Thanks!


DAY FOUR

DAY FOUR OF MY EXPERIMENT:

November 7, 2017

Now that I have my weather API working to get information on the current weather and the max temp of Toronto’s current weather, I am off to work on getting my RFID scanner up and running.

When I tried to hook it up yesterday, the port on my Arduino disappeared, so I had to go and get a new Feather M0 – however, the original is working again today, so I have a backup.

I originally found a PDF – RFID Quick Start Guide: Arduino – to get my Feather M0 talking to my RFID reader, but the code it indicated to use isn’t working, for some reason. Since the setup of the Feather M0 with my RFID reader matches all the other charts I Googled, I followed this setup on my board.

Here is an image of what the setup should be on my board (i.e. what wires link to what):


So, I am back to the drawing board to find different code, as that seems to be the issue now, because both my Feather M0 and my RFID reader light up when on the board.

In the end, from doing research, I found this code on this website:

NOTE: This website also confirmed I had my pin wiring correct.

https://create.arduino.cc/projecthub/Aritro/security-access-using-rfid-reader-f7c746

//

So… after typing in all of that process and trying to get my Feather to link to my RFID reader – it was a NO GO. A lot of time wasted, BUT also a learning curve, and it was good to know that in the end it was not the code; it was just the board I was using. I tried switching the code to fit the Feather M0 but it would not function either – I think that is also because I was getting a WARNING in P5 about the data information (though Roxanne told me it shouldn’t really matter because it is just a warning) – but it started working when I switched to the Uno.

I asked some of the second years to help me and give me any insight, and with that I was given an Arduino Uno – and all of a sudden the RFID reader worked and I was able to use my RFID/NFC Classic Card to get a UID number.

Here is a screenshot of the first card scan I did:


Here is a link to the YouTube video I watched to get the card scanner to work:

https://www.youtube.com/watch?v=uihjXyMuqMY&t=166s

So now that I know the best thing to use is an Arduino Uno instead of my Feather M0, I am going to work on learning more about ‘if statements’ and how to link the weather number with the UID number on the card.

And when searching through the code in the GitHub zip file I downloaded (linked below), the best code for my project turned out to be the DumpInfo file.

Here is the original DumpInfo Raw file:

https://raw.githubusercontent.com/miguelbalboa/rfid/master/examples/DumpInfo/DumpInfo.ino

For the DumpInfo code, I changed it a bit to match the way this link showed it:

I did this because it was simpler, and I understood the goal of the coding better. It is code that checks for two outcomes, “Access” and “Deny” – which in my case I changed to “YES” and “NO” (a condensed version of that pattern is sketched below).

http://mertarduinotutorial.blogspot.ca/2017/03/security-access-using-rfid-reader.html
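
A condensed version of that pattern (the card UID and pin numbers here are placeholders – you first scan your own cards to learn their UIDs):

    #include <SPI.h>
    #include <MFRC522.h>

    #define SS_PIN 10                 // typical Uno wiring in these tutorials
    #define RST_PIN 9
    MFRC522 mfrc522(SS_PIN, RST_PIN);

    void setup() {
      Serial.begin(9600);
      SPI.begin();
      mfrc522.PCD_Init();
    }

    void loop() {
      if (!mfrc522.PICC_IsNewCardPresent() || !mfrc522.PICC_ReadCardSerial()) return;

      // Build the card's UID as a hex string, e.g. "BD 31 15 2B"
      String uid = "";
      for (byte i = 0; i < mfrc522.uid.size; i++) {
        uid += String(mfrc522.uid.uidByte[i] < 0x10 ? " 0" : " ");
        uid += String(mfrc522.uid.uidByte[i], HEX);
      }
      uid.toUpperCase();

      if (uid.substring(1) == "BD 31 15 2B") {   // placeholder UID for one clothing card
        Serial.println("YES");                   // card recognized
      } else {
        Serial.println("NO");
      }
    }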

Here is where I downloaded the GitHub zip file:

https://github.com/miguelbalboa/rfid

End of day four.

Thanks!


DAY FIVE

DAY FIVE OF MY EXPERIMENT:

November 8, 2017

Here is an image of the Fritzing diagram of the board that I will be using for this project:


After doing some research about IF/ELSE statements last night, this is, in general, what I need to figure out:

This will be for me to add into P5 (written in a way I would understand what I am coding):

IF CARD ID AND TEMPERATURE = THE CORRECT RANGE OF TEMP THEN DISPLAY YES.

IF CARD ID AND TEMPERATURE DON’T EQUAL THE CORRECT RANGE OF TEMP THEN DISPLAY NO.

In Arduino, I need to add in code to link my UID card number to the weather in P5. Here is a link showing the general code I needed to look into (which, in the end, I did NOT need to use):

https://www.arduino.cc/reference/en/language/structure/control-structure/else/

I also spoke to Finlay about my research for my IF/ELSE statements, and he said I shouldn’t have to do anything in Arduino – I should just make all my UID cards “YES”.

So let’s see how this goes later on…

Taking a break from the above research, I started to work on getting the UID card numbers done.

Here are images showing that process:


Presentation Ideas:

I also started to think about how I was going to show this assignment to the class, and I decided I need a rolling rack to bring in the clothing I have decided to use.

However, bringing in a rolling rack to school will be interesting.

So, I am also going to print a small picture of the item of clothing that matches each UID card and stick it on the card – I will do this whether or not I bring the clothing items in.

Here is an image to show what I am talking about with the cards:


Project / Product Names:

  • Suitable Dress
  • Weather Pants
  • Underdressed = WE HAVE A WINNER! I later changed it to Underdress’d because, when I was designing the box to go over my wires and board, I bought a pack of stickers which didn’t have enough ‘e’ letters in it, so I changed the word to this.

Meeting with Nick:

Later on today, at 5:30 PM, I am meeting with Nick – so hopefully I will have my Arduino and P5 connecting.

I want to ask him about my IF statements, and see if what I have planned is the right direction to go.

Nick assisted in updating my IF statements to have my weather, text, and UID Card number talking to one another.

I also needed to add a drawYes function to hold my IF statements, because I had them in just my draw function – which Roxanne H pointed out to me! So thank you.

Roxanne also pointed out that I needed to add a String function, like the one I had for my weather, into my Arduino code, because when trying to get the information to link, it didn’t know what data to grab and send.

I needed to update my IF statements to link the inData code and the mainTemp code, which was done by adding in “==”.

So the new code (in simple terms) would be:

IF (ARDUINO DATA == CARD ID) THEN THE MAIN TEMP RANGE.

In the end, the concept of my project is:

When wondering what to wear based on the current weather in Toronto, you use a card that is linked to a clothing item in your closet, and scan it to receive a YES on your screen if it is appropriate to wear.

Here is a video of the first trial run of this working:

End of day five.

Thanks!


DAY SIX

DAY SIX OF MY EXPERIMENT:

November 9, 2017

So yay! Code is working and I am feeling much better about this project. I am so happy I stuck through it, even though there were a few times I thought I would have to change it.

But I love the concept and idea I came up with – so I wanted to make it work.

And it does!

So, today I worked on making a box for my board and Uno so it is covered up, and also a way to show my UID Cards.

I booked an appointment with Reza so he could assist me in building something.

Here is an image of the sketch of how I want it to look:


Here are some images to show what I crafted with Reza – it still needs to look better: 


I also decided I won’t bring in my clothing; I will make a ‘style book’ where the UID cards will sit, with the images of the clothing on the cards.

Here are images to show what I am talking about:


I have been having issues with the ‘else Nope!’ part of my code. I did what Nick suggested, where I typed in all my IF statements and then added the else Nope! at the end, but then it would do one of two things: 1. if I scanned a Yes! item before a Nope! item, the Nope! would just draw on top of the Yes! word, and 2. it would sometimes have the word Nope! always showing.

So I tried adding a delay, but that would make it stop altogether. I also tried writing the same IF statements as I did for the Yes!, but with the Nope! – but that didn’t work at all. I had Finlay look at it, and he was also confused as to why it would not show.

So I took a shortcut and just decided that Nope! would not show on the screen. If you scan an item and don’t see a Yes!, you shouldn’t wear it.

End of day six.

Thanks!


DAY SEVEN

DAY SEVEN OF MY EXPERIMENT:

November 10, 2017

Today is the day of presentations!

I am so excited to show what I created. I feel so proud and satisfied with what I was able to design and functionally create.

To take a look at my code go to this link: https://github.com/SavayaShinkaruk/Experiment3

To take a look at the website of my product go to this link: https://webspace.ocad.ca/~3164420/experiment3/indexcopy.html

Here is an image showing the final setup of my presentation of my product:


For this presentation, I set it up like a desk in your room or office area – or wherever your clothing might be.

Later down the line in this project, I would change it so there were no cards; the code would be linked to an item number on each article of clothing, which you would scan on a scanner in your closet, and you would get a colour-coded yes or no to answer your question: What should I wear?

I would also add in other weather data, like humidity and weather conditions.

End of day seven.

Thanks!


FINALE PROJECT BLOG POST

Underdress’d:

My Underdress’d experiment is a project for people to work within their own closet to know what to wear for the day based on the temperature outside.

The goal of this project is for people to never be too warm or too cold again based on what they are wearing for the day.

Everything revolves around the current temperature; I based mine on Toronto’s weather, as that is where I am living. Then, to know whether a clothing item works for the current temperature, you have UID cards that act as ID labels for your clothing. So: open the website link, take your UID card and scan it on the scanner (which you have in your home), and the word Yes! will pop up on the screen if you should wear it; if it is a Nope!, you will get a blank screen.

And wow! You will never be underdressed.

Code:

https://github.com/SavayaShinkaruk/Experiment3

URL link:

https://webspace.ocad.ca/~3164420/experiment3/indexcopy.html

Supporting Visuals:

Here is my prototype video: 

Here are some images to highlight my journey to my final product, Underdress’d:

There are more images of my process and journey in my blog post above.


Design Files:

Here is my Fritzing image:


Project Context:

The Underdress’d product is a way for men and women to never be underdressed for the current weather where they live. It is a way to never feel too cold or too warm when walking to work or school, or just going out with your friends.

Since I am new to coding, I did a lot of research on YouTube through basic tutorials for P5, and also Googled how an RFID scanner works and runs on Arduino.

During my brainstorming and idea-hatching steps in this project, I wanted to try to link something with fashion. I wanted to do this because it will look good in my portfolio to show technology I built and coded that could be used in the fashion industry. And so my Underdress’d product was created!

The goal of this assignment was to create a peripheral that customizes an input and output function. I am using Arduino and P5 for this project. For my product, I use my mouse to click the “Weather Now” button, which updates the main temperature to the current weather in Toronto. From there, you use a card as an item of clothing, scanning it to receive a Yes or No text message on your computer screen. So I am using an input of a mouse press and a card scanner, and an output of API weather updates and text.

Bibliography:

OpenWeatherMap. (n.d.). Current Weather Data. Retrieved November 6, 2017, from https://openweathermap.org/current#name

Arduino: Project Hub. (n.d.). Security Access Using RFID Reader. Retrieved November 7, 2017, from https://create.arduino.cc/projecthub/Aritro/security-access-using-rfid-reader-f7c746

Arduino: Project Hub [Photograph]. (n.d.). Retrieved from https://create.arduino.cc/projecthub/Aritro/security-access-using-rfid-reader-f7c746

Github. (n.d.). Miguelbalboa/RFID Library for MFRC522. Retrieved November 7, 2017, from https://github.com/miguelbalboa/rfid

The Coding Train. (2015, October 30). 10.5: Working with APIs in Javascript – p5.js Tutorial [Video file]. Retrieved from https://www.youtube.com/watch?v=ecT42O6I_WI&t=802s

logMaker360. (2016, January 13). MF522 RFID Write data to a tag [Video file]. Retrieved from https://www.youtube.com/watch?v=uihjXyMuqMY&t=166s

Mer Arduino and Tech. (2017, March 26). Arduino Tutorial – Security Access Using RFID Reader (MFRC522) [Video file]. Retrieved from https://www.youtube.com/watch?v=3uWz7Xmr55c

Security Access Using RFID Reader (MFRC522).  (n.d.). Retrieved November 7, 2017, from http://mertarduinotutorial.blogspot.ca/2017/03/security-access-using-rfid-reader.html

Arduino. (n.d.). Else. Retrieved November 8, 2017, from https://www.arduino.cc/reference/en/language/structure/control-structure/else/

 

Experiment 3: Palmistry Yi Ching


A Project By
Roxanne Baril-Bédard

Find it here, activated on click.
Here is the repo.

This project consists of a laser-cut acrylic box with a hand engraving and two small holes for photocells. When a hand is pressed on the engraving and the photocells are completely obscured, a random number between 0 and 63 is generated. This number is looked up in two APIs’ JSON to get the character associated with the number, the title and the description, which are shown in the browser with a cute background.

Sketches and process


Box model


I made the box outline using this. The red lines are cut; the black areas are etched.

Process of my illustration with inspiration for palette


Pictures of the box and microcontroller’s circuit


Journal

I have decided to make a palmistry-type machine, like the ones at the entrances of theatres. I am interested in all types of divination and I wanted to use the Yi Ching, which conveniently has ASCII characters that I could write out. I find that the Yi Ching has really interesting descriptions and is not widely known, so it has a little more spice than all the tarot-themed projects I have seen in esoteric tech. I like the idea of having a computer tell you your fortune.

I also wanted to make a project in which I could make cute visuals, so I decided to go for a physical sensor and some p5 sketches and vector illustrations. I want it to be a kawaii overload. I thought of using a bio sensor, but they were really expensive, so instead I decided to use a light sensor and to design an acrylic box that reminds people of the palmistry machines, inviting them to put their whole hand on the box in order to have a more immersive experience – even if, in the end, it’s picking up light or the lack thereof, almost more like a button, but with a more immersive interaction.

I first worked on designing a box that I could laser-cut. I figured the final box, since cutting takes time, should be done earlier rather than later. I designed the Illustrator file that I would send to the laser cutter.

Then I started working on making the light sensors work.

Once the box was cut, I glued it together. I wanted to put some glitter under the hand to give it a bit more pizzazz, but it did not show through the etched acrylic. I went for a cloud texture instead, to give it a dreamy feel.

My light sensors are working together. I just take their total divided by 2 to bring the data I want to send to p5 down to byte size.
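
A condensed version of what that looks like on the Arduino side (the pins and the exact byte scaling are placeholder assumptions):

    const int cellA = A0;   // the two photocells under the engraved hand
    const int cellB = A1;

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      int total = analogRead(cellA) + analogRead(cellB);   // each reads 0-1023
      byte level = map(total / 2, 0, 1023, 0, 255);        // scale the average to one byte
      Serial.write(level);                                 // p5 reads a single byte per sample
      delay(50);
    }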

I had a hard time figuring out how to “talk” to the APIs’ JSON, but I was helped by my friend via screen sharing. It was hard for me to understand how to navigate an array and how to build a function – I still barely understand what the variable in the parentheses is for.

Another problem I had was finding the perfect API with all of the info I wanted on each hexagram (the description and character of the Yi Ching hexagram selected via a randomly generated number), so I ended up pulling from two APIs instead.

For the hexagram descriptions:

https://raw.githubusercontent.com/la11111/willie-modules/master/texts_iching/iching2.json

For the hexagram characters:

https://cdn.jsdelivr.net/npm/i-ching@0.3.5/lib/data.json

I finished by spending some time making an illustration and background. I also added a little bit of responsiveness: the website works for window sizes other than 1680 x 1080 if it is in a 19:9 ratio, with the text resizing proportionally.

…im melting

Karo Castro-Wunsch + i

 

DESCRIPTION

a dialogue with a sentient glacier. it’s a needy glacier, granted, but a valid one. a glacier that just wants to be cool. don’t we all have a right to be cool? a basic glacier with froyo dreams. dreams that are melting away.

Multiscreen is used here to bring visibility to the choices people make in their interactions with social issues. Donations are traditionally made anonymously, whereas this project brings the people interacting with the ‘climate change propaganda’ into the same space, next to each other. Making the interaction public puts pressure on the people involved to make a move and put down $$$. This is not necessarily a good or effective method of propaganda, as there are definite benefits to anonymous donations, starting with personal differences in what is considered an important cause. The multiscreening also serves to give each user their own interface to interact with the piece, with each interface part of a larger cohesive whole. This emphasizes the we’re-all-in-it-togetherness of global climate issues.

 

CODE

http://realgoodinternet.me/help-im-melting/

* in order to engage with the piece solo, enter the values 1,1,0 into the text inputs, then click the ++++ button.

WASD: move

QE: tilt

RF: vertical strafe

click to cycle the narrative


PROCESS

The goal of this project is founded in the intention of making rallying propaganda for climate change. Of the many elements of our natural backdrop being lost in this process of change, glaciers are a large and symbolic one. They’re more generic than the loss of individual species, and more focused, visually. In order to draw an empathic response to the glacial loss, the idea is to anthropomorphize the glacier, maybe in an anime-esque way, and drape a human narrative over a crystal losing its integrity. There are so many metaphors to play with here with respect to the losing of shape, of hardness, of clarity, of majesty, of terrain (ice terrain). I attempted to communicate the loss of coolness (losing your cool) by projecting it onto the inanimate symbolic glacier.

The interactivity of the piece was the most difficult to determine. It was originally planned to be more complex, with objects in the 3D environment that could be offered to the glacier to try to placate it, none of which would get to the root of the problem: the user’s own IRL actions. The virtual glacier, as a symbolic form, is unsatisfiable and unfixable, its purpose being to do ‘what all ads are supposed to do: create an anxiety relievable by purchase’, purchase here meaning action in the form of buying trees. Note: the buying of trees is to offset your carbon footprint and get the glacier her (its?) coolness back. It was decided, though, that interaction in the 3D space would add little to the message of the piece. In fact, the final product in its current state also has too much going on; a simpler, more dialled-back scene composition would be more effective. The choice to add the distractions button was to call attention to the user’s own inevitable distraction – the distraction that pulls us away from things that are difficult to engage with. The idea is that by calling this out, by making a visible button for distraction, the user can pseudo-pacify themselves for a moment; the pacification would be short-lived, bringing them back to confronting the issue at hand.

Technically, much of what was set out to be accomplished was accomplished: the embodiment of a glacier in the form of a human figure textured with reflective surfaces, the incorporation of water texturing to show the melting of the glacier, the multiscreen code to split the scene into smaller pieces, and the incorporation of distraction gifs.
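
As a rough illustration of the multiscreen technique (a sketch under assumptions, not the project’s actual code – the grid size and tile indices are made up), each of the 20 browser windows can render just its own tile of one shared three.js scene using the camera’s setViewOffset:

```javascript
// Illustrative three.js sketch: render one tile of a larger shared frame.
// Each of the 20 screens would run this with its own TILE_X / TILE_Y.
import * as THREE from 'three';

const COLS = 5, ROWS = 4;      // assumed 5 x 4 grid of screens
const TILE_X = 0, TILE_Y = 0;  // this screen's position in the grid

const w = window.innerWidth, h = window.innerHeight;

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, (w * COLS) / (h * ROWS), 0.1, 1000);
camera.position.z = 5;

// setViewOffset makes this camera render only one sub-rectangle
// of the full (COLS x ROWS)-tile virtual frame.
camera.setViewOffset(w * COLS, h * ROWS, TILE_X * w, TILE_Y * h, w, h);

const renderer = new THREE.WebGLRenderer();
renderer.setSize(w, h);
document.body.appendChild(renderer.domElement);

// Stand-in for the glacier model.
scene.add(new THREE.Mesh(
  new THREE.IcosahedronGeometry(1, 1),
  new THREE.MeshNormalMaterial()
));

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();
```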

This piece was too dry to succeed in its goal of drawing an emotional response from its users and moving them to action. The aesthetic of the 20-screen interactive collage was also, I think, too fractured and fragmented for people to really engage with it. I’m inspired to attempt a piece more in line with the one made by another group, which involved taking pictures of each other in poses and using that sort of physical action as a segue to social action – a sort of catharsis of the body. The beginning of any propaganda piece should be easily engaging and emotionally charged in order to move people.

CONTEXT

Propaganda has been around for a while. Governments, religions, and corporations all love it and use it to further their ideologies. Propaganda is always about some sort of war, and the better looking and more engaging it is, the more it helps its cause. Climate change is a war against ourselves and our own over-consumptive tendencies, so it stands to reason that we need some good propaganda for it.

https://people.howstuffworks.com/propaganda4.htm

Stylistically, the piece nods to the net art/vaporwave aesthetic, with its 3D models and scene overlaid with text and .gifs.

SOURCES/LIBS

three.js was used, along with FlyControls and sound.js.

Water shader: https://github.com/jbouny/ocean

 

DOUGH NOT GAME BY MAX AND SAVAYA.

Title: Dough Not Game

Group: Savaya Shinkaruk and Max Lander


Project Description

Our game was created by brainstorming project ideas for our Multiscreens assignment.

The goal of this experiment is to create an interactive experience for 20 screens. This could be 20 laptops lined up in a row, 20 phones laid out in a grid, or something of your own imagining. Possible inputs include camera or mouse. Possible responses will be reviewed in class. Students are responsible for developing their own conceptual framework. – Nick and Kate

When we were first assigned this project we wanted to create something that would be fun but would also play with people’s emotions within a pre-designed art installation. We need an audience to create the image; with no audience, there is no art piece.

We go into more depth about the journey of our process towards making our game in our blog below, but the overall description of our project is:

We created a game where people interact with each other as much as they interact with their phones. The idea is an exhibit where you and others need to work together to create a large image – large enough that 20 screens are needed for this game, because each phone being used holds one piece of the whole image.

To play the game, navigate to dough-not.glitch.me on your smartphone. Once it loads, make an erasing motion on your phone screen to reveal an image; once you have ‘scratched’ enough to see most of it, shake your phone to stop the image from disappearing under the ellipses (circles). Zoom in on the image if necessary to align its edges with your screen, then line up your phone with your fellow players’ complementary images.

Be sure to test the game because there are lots of fun emotional factors!

So, continue on to the rest of our page to read more about the ‘Dough Not Game’ team and our journey.


About the team

Savaya Shinkaruk: Savaya Shinkaruk is a fashion stylist and journalist with a keen interest in blending components of the online fashion industry with design. She graduated with a BA in communications in 2017 and is completing her MDes in Digital Futures at OCAD University.

Max Lander: N. Maxwell Lander is a photographer, designer, game-maker and hedonist. His work often blurs the line between disgust and desire, and involves a lot of fake and real blood. He enjoys making things that engage with gender, kink, violence, and neon. In particular, his work critically engages with masculinity in ways that range from the subtle and playful to brutal and unnerving.


BLOG: PROCESS JOURNAL

DAY ONE

DAY ONE OF OUR EXPERIMENT:

October 16, 2017

Today we were introduced to our second assignment in our Creation and Computation class.

This assignment is to come up with a digital concept that is both interactive and shown across 20 digital screens. These screens could include iPhones, computers, or iPads.

During class hours, Max and I discussed some possibilities of what we could do for this assignment.

NOTE: Our research for this project mainly came from Make: Getting Started with p5.js – testing code from there and re-testing it until it worked the way we wanted it to.

We also used the Reference page on the https://p5js.org/reference/ website – again, just testing and re-testing code until it worked the way we wanted it to.

Here are some initial ideas we started with:

  1. We want our digital screen to be a phone.
  2. Each person gets a piece of a full image on their phone screen.
  3. The interactive element is to have people walk around and look for their puzzle match so people can create a large image.
  4. The other interactive element is to have each person go to the link where their image will be and have to do a ‘scratch and sniff’ method to find out what their image is.

We liked the puzzle concept so much that we decided to keep it, but to figure out a way to make it more challenging to code and design – and to add a more frantic interactive element.

Here are things we need to remember when putting our project together:

  • This assignment has to be interactive on a digital and physical level – which we accomplished in our brainstorming ideas.
  • We have to code it in a way so people don’t get the same image every time.
  • We need to make it digitally challenging for us to code, but also challenging for people to play.

In the end our concept is:

You download the code – erase to see what image you get – then assemble a puzzle – with a frustrating factor (which we still need to create).

We came up with a game where you interact with people in the room by using your phone to create a larger and full image. Each person in the room will have a different piece to the puzzle – and to figure out what your image is, you have to use your finger on your phone to make it appear.

Additions to our concept:

  • Will the image you are trying to see stay put, so you don’t have to keep using your finger to keep it on your screen?
  • What will the frustrating factor be? One idea we really like is to have random ellipses popping up on the screen while you are erasing. As if the computer is fighting back.
  • A colour theme.
  • An image.

From here we both went home, researched and tested ways to make the design and coding more intricate, and thought about how we could take the game to the next level – and make it look better visually.

Here are some sketches of the ideas we came up with:

img_3032

img_3031

End of day one.

Thanks!


DAY TWO

DAY TWO OF OUR EXPERIMENT:

October 20, 2017

Over the week Max and I were busy with other school projects, but we kept in touch via Facebook whenever we had an idea or coded something new.

During our conversations we both felt we needed to add another element to the interactive portion of people ‘scratching’ to see the image on their phone screens – the random ellipses we mentioned on day one.

In class on October 20, 2017, we reconnected face to face about some of the ideas we had thought about and started to get to work.

Things to think about / research to code / we want to incorporate:

  • Cut a full image into an even number of squares / cut it into 20 pieces OR cut it into fewer than 20 pieces so things appear more frequently.
  • Have a reload button so people can reload to get a new image if someone else already has theirs.
  • Figure out a way to have an image fit the screen.
  • Random versus sequential (x+1, x+2) when it comes to who gets what image.
  • How does the image stop?
  • List of images // each page refresh will place someone randomly in that list // and there will then be a button to move sequentially through that list (see the sketch after this list).
  • Pick a phone // ‘scratch and sniff’ process with that phone to figure out the image // see if it links to a buddy’s image // if not, press next image.
  • We will use 20 phones // but talked about only using 10 // could change this later.
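
Here is a minimal sketch of that list idea (an assumption of how it could work, not our final code): each page load drops the player at a random spot in the image list, and the button then steps through it sequentially.

```javascript
// Illustrative p5.js sketch: random starting image, sequential "next" button.
// IMAGE_FILES is a hypothetical list of the 20 puzzle pieces.
const IMAGE_FILES = ['piece00.png', 'piece01.png', /* ... */ 'piece19.png'];

let index;
let img;

function setup() {
  createCanvas(windowWidth, windowHeight);
  index = floor(random(IMAGE_FILES.length));   // random placement in the list
  loadPiece();
  const btn = createButton('NEXT IMAGE');
  btn.mousePressed(() => {
    index = (index + 1) % IMAGE_FILES.length;  // move sequentially, wrapping around
    loadPiece();
  });
}

function loadPiece() {
  img = loadImage(IMAGE_FILES[index]);
}

function draw() {
  background(255);
  if (img) image(img, 0, 0, width, height);
}
```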

Images to use:

    • Doughnuts.
    • Something poetic.
    • Something sensational.
    • We need an image that doesn’t have a lot of detail in it, so it will be easy for people to find their match.
    • So we decided on the doughnuts image – shown below.

doughnuts_crop
Testing period for some of the things we wanted to add / change / test:

P5 Code / what we looked at / what is working / what is not (a rough sketch of the copy() approach follows this list):

  • Background image – use the full image, or cover it with a colour to punch through?
  • Adding and using JavaScript too – so we can see how to load images from a different folder – as we are unsure if p5 can do so.
  • P5 copy code – it copies whatever you tell it to – in our case we are telling it to copy the image we put in the background.
  • Loaded the image; then when you click, it copies onto your canvas – it stamps (that’s the copy code) the pattern of the image we loaded into the script. BUT because it doesn’t make the image full width, it isn’t filling the window – we can stretch it or find a suitably sized image.
  • Trying to find an image that takes the size of the screen – like fit-to-content in InDesign.
  • An issue we keep seeing – it only copies at its default size – why is this?
  • Maybe try making a canvas and fitting the image to that…
  • We are trying to avoid putting another background colour over top of the image and then erasing or copying – after testing, erasing isn’t possible.
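
For reference, here is a minimal sketch of that copy()-based ‘scratch’ idea (an illustration of the technique, not our final game code – piece.png is a placeholder filename, and it assumes the image is the same size as the canvas):

```javascript
// Illustrative p5.js sketch: "scratch" through a cover colour using copy().
let img;           // the hidden puzzle piece
const BRUSH = 40;  // size of the patch revealed per touch (an assumption)

function preload() {
  img = loadImage('piece.png');  // placeholder filename
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  background(255);  // the cover the player scratches through
}

function draw() {
  if (mouseIsPressed) {
    // Stamp the patch of the hidden image under the finger onto the canvas.
    copy(img,
         mouseX - BRUSH / 2, mouseY - BRUSH / 2, BRUSH, BRUSH,   // source region
         mouseX - BRUSH / 2, mouseY - BRUSH / 2, BRUSH, BRUSH);  // destination region
  }
}
```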

Codes we are using / are wanting to use:

This list will keep being updated as we move through our project.

  • Random Background image code
  • Copy code
  • Random Ellipse code
  • Fill code
  • Stroke code
  • Random Background code
  • Button for next image code
  • Anti-Bounce code
  • Shake code
  • FrameRate code

Game names:

    • Digital scratch game
    • Puzzle scratch game
    • Puzzle party game
    • Scratch and party game
    • Erase and puzzle game
    • Scratch array game
    • Do Not game – Winner! – but we changed it… See what we changed it to as you read through the rest of our blog.

Ways to make the game visually interesting:

  • Play with the colour theme.
  • Add a background colour image – jk leave it white.
  • Make the circles a different colour / grow / change colour /
  • Have the copy be white.

After a day of testing and trying our ideas, some things worked and other things didn’t. But we assigned jobs for each of us to work on over the weekend, until we met again on Monday.

Based on the ideas we had last week though, here is a video of the first trial run:

Max: figure out a way to make the image fill the page without messing up the copy code.

Savaya: make it more visually appealing, play with the copy code and its colour, and add another ‘frustrating’ element.

Here is a video to show the colour theme we tried for the random ellipses (not final):

End of day two.

Thanks!


DAY THREE

DAY THREE OF OUR EXPERIMENT

October 23, 2017

Today we worked on the blog a little bit more to showcase our process – both as a group and separately (always thinking of the group).

Here is an image to show the colour theme of the ellipses:

screen-shot-2017-10-23-at-12-57-29-pm

These colours were chosen from the doughnut image using Adobe Color CC.

We also finalized the concept of the game:

The full image our class will be putting together – if they can – will be an image of doughnuts. We will be using 20 screens, and each person should get a different piece to create the final image.

You will download the code given to you by us and see a blank screen. Use your finger in an erasing motion on your screen to reveal which piece of the image you get. If you get one that someone else already has, click the NEXT IMAGE button and try again until you get an image no one else has.

THE CATCH: as you are trying to reveal the image on your screen, another function is also deleting it (random ellipses) – so keep swiping fast!! Will you ever be able to put together the complete image? Yes! When you have your image, shake your phone to stop the ellipses from copying over it.

Image of the emotional goal for the player using the game:

img_3033

So, now that we have figured out the context of the game and are working to finalize the colour theme, we are seeing a few issues.

A couple problems we are seeing:

  • When using the erasing motion on a phone screen (on both Android and iPhone), the movement isn’t smooth – it moves the whole page rather than just letting you ‘scratch’ to see your image with no extra movement. But yay! We fixed it (in CSS: position:fixed and overflow:hidden – see the sketch after this list).
  • The random ellipses are a little bit TOO crazy. We need to figure out a way to slow them down (the discovery of frameRate!).
  • We need to re-size the doughnut image to make it larger, so when people have their piece of the puzzle that small image will cover their phone screen.
  • When putting the ellipse code and the game code together, the scale of the ellipses was doing something weird to the image.
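
A minimal sketch of those fixes together (illustrative only, not our exact game code – the frame rate, shake threshold, and colours are assumptions):

```javascript
// Illustrative p5.js sketch: slowed-down "attacking" ellipses + shake-to-stop.
let ellipsesActive = true;

function setup() {
  createCanvas(windowWidth, windowHeight);
  frameRate(12);          // slow the draw loop so the ellipses only cause mild panic
  setShakeThreshold(30);  // assumed shake sensitivity
  // The CSS fix mentioned above, applied from the sketch:
  document.body.style.position = 'fixed';
  document.body.style.overflow = 'hidden';
  background(255);
}

function draw() {
  if (ellipsesActive) {
    // Each frame, one random ellipse "deletes" part of what the player revealed.
    noStroke();
    fill(random(255), random(255), random(255));  // placeholder palette
    ellipse(random(width), random(height), 30, 30);
  }
}

// p5 calls this automatically when the phone is shaken.
function deviceShaken() {
  ellipsesActive = false;  // freeze the image so the puzzle can be assembled
}
```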

The next step after solving these issues / steps to fix:

  • When cutting the images we realized we cannot cut the image into 20 pieces because of its size. So the glitch in our assignment would be that only 18 people would actually be a part of the whole image, though everyone would be a part of the game. – We decided not to do this, and made it work for 20 screens.
  • When sizing the image into pieces we should err on the smaller side, because people can enlarge the image afterwards to ‘connect’ the dots as closely as possible.

Here is an image to show how we are going to break up the whole image:

22851652_10155066211522897_429250688_o

End of day three.

Thanks!


DAY FOUR

DAY FOUR OF OUR EXPERIMENT 

October 24, 2017

Today we worked on turning the ‘get a new image’ tab on the page into an image, so that when you scrolled over it with your finger it wouldn’t get highlighted – which it was doing as a plain text button.

Here is an image of the New Image link:

newimage

Because the default p5 button doesn’t seem to have a built-in way to make it an image, we decided to change the CSS properties of all buttons to show the above image – this solution would not work if we were using multiple buttons, but we are not, so yay! If we were using multiple buttons, we imagine the way to do it would be to create the buttons in HTML and then look into how to link them to a p5 function.
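
Here is a sketch of what we mean (under assumptions – newimage.png stands in for the button graphic, and the sizes and position are made up):

```javascript
// Illustrative p5.js sketch: style the single button to show an image
// instead of the default browser button.
let button;

function setup() {
  createCanvas(windowWidth, windowHeight);
  button = createButton('');  // no text label
  button.style('background-image', 'url(newimage.png)');  // placeholder graphic
  button.style('background-size', 'cover');
  button.style('border', 'none');
  button.size(160, 60);    // assumed dimensions
  button.position(20, 20); // assumed placement
  button.mousePressed(nextImage);
}

function nextImage() {
  // Hypothetical handler: advance to the next puzzle piece here.
}

function draw() {
  background(255);
}
```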

We also worked on finalizing the speed and look of the ellipses. They feel much faster on a phone than they do on a computer, so after fiddling around we decided to slow them down a little bit more, so as to only produce a mild panic.

We also decided to update the name of our game from The Do Not Game to The Dough Not Game.

End of day four.

Thanks!


DAY FIVE

DAY FIVE OF OUR EXPERIMENT

October 26, 2017

Today we worked on making our game look good and finalizing all the code and the flow of the game. We did this via Facebook, messaging whenever one of us thought of something we needed to add to make the interaction of the game smooth.

Here is an image to show how it will look once everyone has gotten an image on their phone:

22833414_10159549073055451_1429570892_o

We asked on our DF Grads 2019 Facebook page what kind of phones everyone had, to make sure the sizes of the pieces of the image would fit on people’s screens. Based on everyone’s answers, we are seeing that some people will have to enlarge the image on their phone screen to match their partners’.

Here is an image to show everyone responding to our question on Facebook:

img_3060

This information also reassured us that using 20 screens will work!

For the interaction portion of the game, we were talking about how it might be a little crazy for everyone to be running around looking for their puzzle partner – so we added a little tip to our instructions.

The tip: To make this process easier for you, try getting into groups or partners so you can work on pieces of the puzzle together, rather than doing it individually and hoping you will find someone with the next image to match yours.

At the end of all our brainstorming, coding, and hard work, here is a GIF to show you a preview of our game, The Dough Not Game:

End of day five.

Thanks!


DAY SIX – THE REVEAL OF THE DOUGH NOT GAME

DAY SIX OF OUR EXPERIMENT 

October 27, 2017

To take a look at our code click this link: The Dough Not Game Code

To play the game go to this link: http://dough-not.glitch.me

Here is an image of our branding:

dough-not

Here is the final video of everyone playing our game:

Here is an image to show everyone’s interaction with the game:

img_20171027_144917

Here is an image to show the whole picture everyone created by playing the game:

Processed with VSCO with f2 preset

In the end there weren’t 20 phones being used, but it still created a cool picture!

End of day six.

Thanks!


FINAL PROJECT BLOG POST

Dough Not Game:

The Dough Not Game is a fun art and digital installation where you can connect with others in a single room – and that also plays with your emotions.

To play The Dough Not Game: you download the code that is shared with you – you do an erasing motion on your phone screen to get a piece of a complete image – once you have ‘scratched’ enough to reveal most of the image, you shake your phone to stop it from disappearing under the random ellipses – from there you find others playing the game and connect with them to match your pieces of the puzzle – and once you have found your puzzle partners, you create the full image across your phone screens. Remember, you need up to 20 people to play this game.

Here is a tip to make this game a little more interactive and fun: to make the process easier, try getting into groups or partners so you can work on pieces of the puzzle together, rather than doing it individually and hoping you will find someone with the next image to match yours.

Take a picture and tag us in it to show us the image you made with your peers!

Project Members: Max Lander and Savaya Shinkaruk.

Project Context:

The Dough Not Game was designed and created to be a fun art and digital installation game for those bonding in a common space. It is common for people to come into a shared space and automatically go to their digital screens, connecting online rather than face to face – so Max and Savaya created this game so people could connect with others face to face, while still needing a digital screen to make the interaction happen.

We also made it so people have to come together and work together to create a bigger picture – but they also need to work on their own ‘erasing and scratching’ skills to find a piece of the bigger image. Most of our research on how to create this game consisted of brainstorming fun game ideas, then using our p5.js book and the p5.js reference website to make them happen.

During our brainstorming we both liked games where you are given a mismatched complete image and have to move the pieces around to reassemble it. So we took this concept and went with it, and talked through – with our understanding of the p5 code – how we could do it. And so, The Dough Not Game was created.

With the goal of our given assignment – to create an interactive experience for 20 screens – we took that concept so you would have an interactive component on your screen, but one that was also interactive with other people in a common space. With that in mind, we created something that was fun to make and play, and explored different areas of a new coding system (p5). With a broad concept, we thought of things that we would like to use and do when it came to making an interactive piece.

Code:

The Dough Not Game Code

URL link:

http://dough-not.glitch.me

Supporting visuals and design files showing our process, along with our final code and the URL link to our game, are throughout the blog post above. ^^

Bibliography:

McCarthy, L., Reas, C., & Fry, B. (2015). Make: Getting Started with p5.js. (1st ed.). San Francisco, CA: Maker Media

p5.js. (n.d.). Reference. Retrieved October 16, 2017, from https://p5js.org/reference/

End of experiment.

Thanks!

ViCO

Vibration sensor + Slider actuator + Cotton material + Happy adjective

dsc_0237

Project Description

This project is about turning vibrations into sound. It aims to explore a different way of listening: a way in which you can feel the melody through your hands.

For this project, I took inspiration from my time as a dancer in the ensemble of the Kol Demama dance group (literal Hebrew translation: sound and silence), which combines deaf and hearing dancers. The basis of the merger was a “vibrational” system, in which the deaf dancers take their choreographic cues from beat patterns felt through their feet, or gestural signals sensed through bodily contact, as well as from visual sources.

In my project, I wanted to mimic that experience and let my audience ‘hear’ a melody created from a combination of vibration beats through their hands, while holding a cotton ball. The vibration beats, similar to music notes, hold different parameter settings for voltage and duration, which allows me to compose them into a rhythm – a melody – that the audience gets to ‘listen’ to through their hands.

There are a few ways you can experience this installation. You can cover the cotton ball with your hands. You can hold it next to your ear, to use the ‘original’ sense of hearing. I leave the listening experience open for the audience to choose. I believe listening to music is a personal experience, and everyone can choose to experience it at a different level of intimacy.

Circuit Diagrams

a1

a2

Code

https://github.com/LolaSNI/madlibs-experiment1 

SKETCHES

dsc_0227

Design

dsc_0205 dsc_0228

dsc_0239 dsc_0233

During the design process, I needed to take some parameters into consideration:
1. The cotton ball should not be packed too tight, so that the vibration motor can vibrate inside the ball. It should also not be too dense, so that the vibration can pass through the ball.
2. The design should allow the audience to feel the vibrations with their hands, and it should be at eye level.
3. The environment should be ‘clean’ of distractions so that the audience can focus on the vibrations as much as possible.
4. The wires and connections should be strong enough that if people pull and play with the ball, all the wires stay connected.
5. I covered the vibration sensor cables with hot glue to protect them. During the testing process, I realized how sensitive they are and how quickly they can be damaged.

Based on this, I decided to create a minimalistic design and let the cotton ball be the center of it. I decided to hang the cotton ball, hoping that, due to the vibrations, people would notice small movements and get closer to it.

Process Journal

I started this project using a mind-mapping technique, which allowed me to expand the ideas related to each property I could use for this assignment through free association.

dsc_0178

Following that process, I took some time to think about my options and used my imagination to consider them as a whole. I find sketching a beneficial technique for creating quick drafts of my ideas. Whenever an idea popped into my head, I created a sketch without overthinking its feasibility or whether it contained all of my properties.
t5

Focusing on vibrations as the main property, I spent time reading about vibrations – what they are, how they work, etc. I also drew inspiration from my own experience as a dancer, with the idea of using vibrations as music.

My online research led me to several artists using vibration in their work. The one who caught my attention was Alessandro Perini, who uses vibration in some of his pieces. His work Wooden Waves uses the same principle of allowing people to experience vibration through their bodies.

It was time to take things from theory to practice. After setting up the basic input-output board, I played with the vibration motor – holding it, placing it on/in/below different cotton surfaces, etc., and testing the various effects each setting has (plus the effect it has on my dog – no harm caused 🙂).

capture34

Video links:
First setting

More testing

Lola is helping to test the vibration motor

I also started looking online for code that would let me turn the vibration into a melody. I came across the project Vibration Foam Speakers, which drifted me a bit away from my original idea. At this point, I thought of changing my original design: creating a cotton cover part (to replace the foam) and playing a happy melody. I did some testing, but found that for some reason the code was not working right. I tried getting some help from my friend, with no luck. I reached a dead end :/

67

Left: original foam speaker. Right: the cotton copy I made. I tried different cotton densities.

After my meeting with Kate, she opened my mind to a new option – using beat sequences, instead of a full melody, to help demonstrate the idea. From here, my primary work was to test different voltage and duration parameters and try to compose them into a short tune. We also spoke about creating three ranges for the slider, so that different slider values would let the audience experience three different melodies.

87

Evolution of the code. From left to right: first input-output setup, Melody, Vibe beats.

In the process, I managed to create three short beat sequences and tested their effect through the cotton ball. At this stage, two issues arose. First, I realized that some of the lower voltages were not passing through the cotton ball; I played with it a bit to fine-tune the ranges. Second, I experienced some trouble with the response time of the slider. In my second meeting with Nick and Kate, Nick explained that the reason is that the loop runs all the way through before it goes back to read the slider parameters, and since each tune has a relatively long delay time (an average of 5 seconds), the slider’s response is not in sync with the actual change of melody. He also suggested two ways that might improve the problem, but they won’t solve it completely.

I managed to reduce the delay time by offering 2 sequences instead of 3, and by shortening the sequences.

Future planning…
dsc_0226

Project References

  • Alessandro Perini, Wooden Waves – His artistic production ranges from instrumental and electronic music to audiovisual and light-based works, net art, land art and vibration-based works. In his work Wooden Waves, he uses a tactile sound installation with a wooden floor as a vibrating surface, letting people feel the movement of sound vibrations along their bodies while lying on the floor.

 

 

Material MadLibs 1 – Max, Sean & Chris: Kitty Catwash

GROUP

Sean Harkin, Chris Luginbuhl, Max Lander

CARDS

Blue foam, button, servomotor, furry

PROJECT TITLE

Kitty Catwash

PROJECT DESCRIPTION

Everyone loves a clean cat. Everyone loves a carwash. Our project combines these two great things for something even greater.

Remember the first time you were in a carwash? Those soapy rollers like an undersea spectacle, magic fingers washing away the traces of your dusty travels? Now cats can experience the magic of a carwash with Kitty Catwash!

SKETCHES

We did some brainstorming and came up with a few ideas:

-Using the servomotor and an attached arm to press the button (referencing Useless Box)

-Hacking the button to use it as a spring-powered fur launcher, which would glue blue foam “fur” to a blue foam model of Chris’s bald head.

Button modified to be a launcher

-Taking the spring out of the button, compressing it with the servo and making a furry creature hop or fly (pics below)

Launcher button being operated with servo

-Modifying two buttons to have a longer spring-loaded travel, and putting them in the legs of a blue foam sasquatch. Using the servo to compress and release the buttons in alternate legs to create a walking motion.

Servo & spring powered legs make this furry guy lurch around, scaring the children.

-Making a machine to draw fur patterns with a stylus.

3 servos on an arm draw semi-random dashes that look like fur

-Using two servomotors to power large brushes/rollers like a miniature carwash… for cats. This is the concept we developed further.

EXPERIMENTATION

We tried making blue foam furry in a variety of ways:

-Drawing fur on it in pen

-Cutting triangles into it

-Using different sizes of cheese graters

-Using a woodworking rasp

-Using a lathe with a dull toolbit

Large cheese grater vs blue foam
Rasp vs. blue foam
Small cheese grater vs blue foam

Building the circuit with servomotors and making them run helped give us the idea of a carwash because of the washing machine-like back and forth motion.

DESIGN FILES

Step 1: Initial Sketches

Once the final design had been chosen, the first step was a set of rudimentary sketches to get an idea of how the product would go together. The sketches, although basic, were the preliminary basis for the rest of the build. This process allowed us to establish rough sizes for our components; however, we failed to account for the fully assembled height of the button. Thankfully, we were able to correct for this in the digital modeling stage.

Rough sketch

Step 2: Digital Modeling

The components were then digitally modeled using Autodesk Inventor. The aim was to create and assemble the components digitally, to try to predict any issues which might arise in the build. Initially the build went well, until the button component was added. As can be seen below, the casing did not accommodate the full length of the button. The solution was to increase the size of the handle’s lid. By increasing the depth of the lid instead of the base, the button itself would be better supported in the prototype.

3D model in Inventor

Step 3: Build

Building the components was fairly straightforward since we had both sketches and a digital model to work from. As the brief for the project outlined our material as blue foam, we tried to minimise any additional materials. For the handle and the lid, the blue foam was cut to size using a bandsaw, and any supplementary subtraction was made using a free-hand cutting blade. The only additional material used – outside of our Creatron kits – was the hot glue used as an adhesive. For the handle, the material was cut to the correct size and then cut in half. This allowed the excess material to be cut out with ease. The pieces were then glued back together to form the handle. The lid was also cut to size, only requiring a hole bored with a pillar drill for the button.

20171003_170314
Soldering and glueing
img_20171003_160224
See the poem “Axe Handles” by Gary Snyder for details on how cool it is to make handles by hand.
Making the kitty brushes on the lathe

Step 4: Assemble

Again, due to our thorough design process and simple design, we were able to assemble the product without much hassle. First we moved our electronics from the breadboard to the protoboard; this was always the plan, as the breadboard was thought to be too cumbersome to be included in the final product. We assembled the components on the protoboard and were able to cut it down to size for the handle. (Note: Max’s button died at this point, making us think the soldering/code was not correct; after a short while we realized it was the button and replaced it.) The protoboard was inserted into the handle using a tight fit. Similarly, the servos used a tight-fit joint into the handle, with the rollers attached to the base of the servos by a hot-glued washer. The button supplied came with a washer to secure it in place inside the lid. The group discussed using more permanent joints, however settled on the idea that this was a prototype of the final product. Although functional, if we were to proceed with the product, more permanent adhesives and joints would be used.

Circuit transferred to a protoboard
Pending UL certification
Kitty Carwash assembled

CIRCUIT DIAGRAM & LAYOUT

catwash
Circuit Layout v.1
circuit-v2_schem
Circuit diagram for v.2
circuit-v2_bb
Circuit Layout for v.2

PROMO VIDEO

https://vimeo.com/237016654

CODE

The most recent code can be found on our GitHub.

https://github.com/ChrisLuginbuhl/KittyCarwash

The v1 code can also be found there if you dig. An easier way is to find it in our other repository:

https://github.com/naxwell/CnC

CODE PROCESS

Version 1

Getting the button connected to the servo was easy. That being said, the button itself is quite finicky/sensitive. It’s very tricky to get a single press out of it, and more often than not it reads multiple clicks (AKA many 1’s). Looking into it further provided us with a number of avenues to compensate for this, but after talking it out we decided to go with holding down the button, avoiding the press-down noise altogether.

Using the “buttonservo” code (which is a mashup of the default Button and Servo examples in Arduino), we tried to play with the angles and the time between updates to see if we could get a constant increase, but again we were confronted with the finicky button. Most of the time, regardless of the settings, the button would interrupt itself and the servo would stop and start again from its updated spot, which would have been great if it could process fast enough to give a constant loop, but we weren’t able to get that happening.

So we moved on to the internet to try to find a solution, which led us to this ask for help – https://arduino.stackexchange.com/questions/17536/controlling-servo-motors-with-push-button-problem-though – which moves the servo from one end of its range to the other on a button click. The problem with this for our purposes was, again, the single click. We tried multiple ways to adapt this to be constant-press friendly – namely trying to make one click result in one degree of movement – mostly to no avail. What this process did make us realize was that the range of movement needed to be much shorter than both of these options were currently operating on, somewhere in the range of 60 degrees. And also back and forth – the back and forth is important (this was forgotten many times and resulted in half-successes where it went in one direction and then broke when we tried to add the reversal). All this led to “ButIncr” (see the GitHub repository revision with this name).

Unsurprisingly, the code did not work. As we were trying to troubleshoot this and not having much success, we returned to an earlier version that had been close and adjusted the degrees to 60, thinking that in the worst case it could be multiple fast clicks if not a constant press. We plugged it in and held it down, and it did exactly what we wanted – it moved quickly between two points. Which is great, except that’s not what it’s supposed to do. It works because it is constantly cutting itself off after only a small amount of movement. But it also functions exactly how we wanted it to, so we decided to keep it and move on to putting it all together.

Version 2

With this first version working, we created a branch of this code on GitHub, and a fresh developer was brought in to pick up the torch.

One issue with servomotors that we wanted to address is their tendency to “lurch” at full power to a new position when first connected. In our development branch, we made a change to the algorithm: in the Arduino’s loop() method, we had the Arduino read the button state and, if pressed, move from 1 degree to 60 degrees and back to 1, then check the button again and repeat.

Returning to 1 degree ensures the servo is always “parked” in the same position and avoids lurching. We used 1 degree as the start/end point rather than 0 because one of our test servos was straining against the hard stop when set to zero.

CONTEXT

As a general rule, animal owners will buy anything for their pets. In the context of the internet and a constant need for entertainment, this applies twofold when talking about things that make their pets entertaining. Further to that, cleaning cats is often the worst part of having cats, besides finding their fur in everything (an example DIY solution to all of these considerations can be found in this video of people vacuuming their cats – https://www.youtube.com/watch?v=v_qYwJ94Ja8).

As physical computing technologies become more and more accessible to more and more people (and kids!), we get to see the creation of more and more robots, many of which exist primarily as entertainment devices (most notable of these is Simone Giertz’s Shitty Robots). One of the questions sometimes asked by these experimental and DIY robot makers is: “Will a robot solve this single unique problem better than a human?”

Kitty Catwash sits firmly in between all these ideas & processes. With the Kitty Catwash, we tried to combine a solution for a real life problem, via the functional reference of the traditional carwash, with a strong sense of whimsy and entertainment value, with the ultimate goal of answering a single and super niche question – Can a robot brush my cat better than a brush?

Simone Giertz’s Shitty Robots – https://www.youtube.com/channel/UC3KEoMzNz8eYnwBC34RaKCQ

Weird pet products – http://twentytwowords.com/ridiculous-pet-products/

The many reasons why we love useless robots – https://www.newscientist.com/article/2082014-the-many-reasons-why-we-love-useless-robots/

