Blog Posts from July 2nd and July 4th

Post 1, July 2nd: First Brainstorming Session 

For the first week we did some rapid brainstorming to get us thinking about what we wanted to prototype.

Since my thesis explores strengthening the communication between the user and the prototype through smart interaction design, I started to jot down interactions I wanted to play with.

img_6417

Whichever interaction I wind up going with will then inform what output I can produce to complement that interactive input, based on preliminary research.

Post 2, July 4th: Prototyping Paper

We were discussing papers in our mini groups today, and I added a sticky note to talk about using artifacts as a gateway to explore interactive narratives: “Grasping Cultural Context through Multi-sensory Interactions” by Jamie Kwan et al. They created three prototypes to explore all the affordances of an artifact (in this case a nut prayer bead) to enhance the narrative experience for the participant. These prototypes were treated as diegetic objects. I found this paper useful in scoping out the use of artifacts to enhance a user’s experience.

You can find the paper here.

Blog Posts from July 9th and July 11th

Post 3, July 9th: 

I presented my prototype ideas to the class, followed by this question:

“How can the prototype itself fluidly create a seamless communication that the user can quickly grasp, based on the interactive inputs that take place?”

Goals:

To create a low-fidelity prototype that exemplifies a fluid interactive experience. The prototype will produce feedback the user can understand or relate to based on the actions they take.

Feedback:

I got some pretty good feedback on how to go about this. I mentioned I was interested in the idea of motion, movement, even using breathing as an input to produce relatable feedback from this interactive system. Here are a few comments that really piqued my interest:

“Can you use movement and visual data to somehow affect each other and create an ever-changing interaction in that space?”

“What do you want to say about this movement of breathing/walking?”

“How do you want to present your output? Do you want to evoke any emotion? (Meaning impacting people at the moment they realize how their input has influence in the world…) If you know what feeling you’d like people to have after seeing your work, then you can decide how to visualize your output.”

 “Visualization + sound + touch + temperature + smell.”

“Create a space with all these sensory elements and make them communicate with users/visitors in some way. Think about themes.”

I needed to really think about the interaction I wanted to complement with the visual output users would be seeing or experiencing. Almost as if I were choreographing a synchronized dance…

Post 4, July 11th:

I had whittled my interaction choices down to two separate inputs for two separate projects: movement within a space, and breathing. I chose these two specifically because they both present very fluid and natural inputs from people. We can do these two things quite easily even if we’re not able to walk.

For visual output I started to think about watercolours. I don’t know why watercolours came to mind, but I’ve listed a couple of reasons I can understand and relate to, though I don’t know if the audience will…

– Water is also very fluid.

– It’s vital to people. It’s important in most cultures and religions. Water is a big part of our lives; in a way, we’re also 70% water. (But these are facts I doubt people will pick up on based on what they’re interacting with; this is just my bias.)

– Watercolours, to me, seem to match the human nature and spirit. (An intimate response.)

First Prototype (low-fidelity):

An Interactive Bubble Blowing Experience.

I was thinking of exploring the affordances and interactivity of a bubble wand.

Users would be able to blow virtual bubbles in a physical space.

The interactivity here is blowing. Users will be familiar with a bubble wand and a bubble bottle.

First Prototype (high-fidelity):

Building upon the first prototype, it would be cool if I could include the Kinect so users could even pop these bubbles, releasing and leaving behind a splatter of watercolours. (I don’t know if users will get the connection that the watercolours are in some way related to them and their breathing, existing outside of them in a virtual space, but that’s what needs to be tested here.)

Second Prototype (high-fidelity):

An Interactive Motion Tracker that Projects a Heat-Map of People’s Movement with Watercolours.

I really like this idea. (A lot.) I think it could be the most beautiful thing I could later make for my thesis. This installation would act as a skylight: a projection of watercolours that represented the people walking down below. If a lot of people were concentrated in one area of a room, the watercolours hovering above that group would become more vivid. If there were fewer people in another area of the same room, the colours there would be less pigmented. This watercolour skylight projection ceiling would basically act as a heat map.

Now, as attractive as this idea sounds, there are a lot of interactive bottlenecks to consider here.

The audience may not make the connection between themselves and what is being projected above. After all, this is basically a cluster of watercolours representing people and motion, and making that distinction could prove confusing.

People may find that being in an area with fewer people, and seeing less vivid watercolours above them, translates poorly in their interactions. “The watercolours above me are lighter because I am alone…what does that say about me and my existence? Does that mean I’m less significant alone than when I’m in a group?” This could make people feel like g a r b a g e….

With that, I have decided to go with the first prototype and start building my thoughts from there. The interactivity is clear enough, and I feel the affordances of the bubble wand stick and poppable bubbles could make for interesting play.

 

Blog Posts from July 16th to July 20th

Post 5, July 16th

Sketches of the first iteration of my prototype:

I have decided to go with my low-fidelity prototype and create a familiar interactive space for blowing air into bubbles using a familiar tool. For now, the bubbles will pop automatically into a watercolour splatter after a couple of seconds.

img_6460

Initially I wanted users to pop the bubbles themselves, but the Kinect does not detect fingers, only hands. I’ll think about this integration in future iterations.

Post 6, July 19th

Parts and setup:

I went and bought the microphone I’ll be attaching to the bubble wand.

img_6490 img_6491

Post 7, July 20th

I was able to set up the circuit for the mic sensor with an LED. I found a snippet of code here that allowed me to test my breath sensor in the making. It wasn’t working at first, so I changed the threshold. After that, every time I blew on the mic and the reading crossed the threshold, the LED would turn on! Cool 🙂
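The snippet I used lives behind that link, but for the record, here’s a minimal Processing-side version of the same threshold test. It reads the computer’s microphone through the Minim library that ships with Processing (a stand-in I’m assuming for my actual mic + Arduino circuit), and the threshold value is just a guess you’d tune by blowing and watching the readout:

```processing
import ddf.minim.*;

Minim minim;
AudioInput mic;
float threshold = 0.15; // tuned by trial and error, just like on the circuit

void setup() {
  size(200, 200);
  minim = new Minim(this);
  mic = minim.getLineIn(Minim.MONO, 512);
}

void draw() {
  float level = mic.mix.level(); // current input amplitude, roughly 0..1
  // the on-screen square plays the role of the LED
  background(level > threshold ? color(0, 255, 0) : color(40));
  println(level); // watch this while blowing to pick a sensible threshold
}
```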

 

Post 8, July 23

I went to buy a few of my materials for the bubble wand from Michaels. Since most stores don’t sell normal one-holed bubble wands anymore…I had to be crafty and make my own:

I used a cake pop stick as the handle and some shiny tinfoil goodie bag wrap as the loop. (Finicky at best, but it worked.)

img_6672 img_6559

Right now the bubble wand only emits an array of bubbles in a variety of sizes with one puff…

Later I hope to have a continuous stream of bubbles blow out after more than one puff, and a single long-blown bubble when the user blows continuously for a long period of time. Below is a rough sketch of how I might tell those two inputs apart.
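This is untested and assumes the same Minim microphone stand-in as before: time how long the mic stays above the threshold, and treat anything past a one-second cutoff (a placeholder number) as a continuous blow:

```processing
import ddf.minim.*;

Minim minim;
AudioInput mic;
float threshold = 0.15; // same tuned-by-ear threshold as before
int blowStart = -1;     // millis() when the current blow began, -1 if none
int longBlowMs = 1000;  // placeholder cutoff where a puff becomes a long blow

void setup() {
  size(200, 200);
  minim = new Minim(this);
  mic = minim.getLineIn(Minim.MONO, 512);
}

void draw() {
  background(40);
  if (mic.mix.level() > threshold) {
    if (blowStart < 0) blowStart = millis(); // a blow just began
    if (millis() - blowStart > longBlowMs) {
      println("long blow -> keep growing one long bubble");
    }
  } else if (blowStart >= 0) {
    if (millis() - blowStart <= longBlowMs) {
      println("puff ended -> emit a burst of bubbles");
    }
    blowStart = -1; // reset for the next blow
  }
}
```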

Post 9, July 24

This was the first day at the CFC and I was setting up the very first iteration of my prototype: Air + Water.

Air + Water was my first attempt to explore certain interactive steps between the user and the prototype, leading from one cause and effect to the next. The aim is for the user to participate in an interactive experience where they are not confused about what to do, and where they notice what they leave behind in the installed work. Getting users comfortable with interacting with a familiar tool in an interactive smart space was the first step in this prototype.

The steps would:

a) Cause them to interact with the tool/smart space. (I used familiarity with a bubble wand.)

b) Cause an action that would capture their direct input. (The act of air blowing.)

c) Express that input in some shape and form relevant to the context. (In this case, a bubble from the bubble wand.)

d) Leave a mark. (My element of surprise: watercolours that represented the form of breath encased in a bubble.)

 

I got as far as creating the wand with the microphone attached and displaying an array of bubbles in a variety of sizes in Processing when users blew into the microphone. I created the bubble in Illustrator using a tutorial I found here.
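Stripped of the Illustrator artwork, that stage boils down to something like the sketch below. Plain circles stand in for my bubble image and Minim again stands in for the wand microphone; one reading over the threshold spawns a few bubbles of random sizes that drift up the canvas:

```processing
import ddf.minim.*;

Minim minim;
AudioInput mic;
float threshold = 0.15;
ArrayList<PVector> bubbles = new ArrayList<PVector>(); // x, y, diameter per bubble

void setup() {
  size(600, 600);
  minim = new Minim(this);
  mic = minim.getLineIn(Minim.MONO, 512);
}

void draw() {
  background(20);
  if (mic.mix.level() > threshold) {
    // one puff spawns a handful of bubbles in a variety of sizes
    for (int i = 0; i < 3; i++) {
      bubbles.add(new PVector(width/2 + random(-60, 60), height, random(20, 80)));
    }
  }
  noFill();
  stroke(255);
  for (int i = bubbles.size() - 1; i >= 0; i--) {
    PVector b = bubbles.get(i);
    b.y -= 1.5;                        // bubbles drift up the canvas
    ellipse(b.x, b.y, b.z, b.z);
    if (b.y < -b.z) bubbles.remove(i); // drop bubbles once they float off-screen
  }
}
```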

Here’s the current bubble:

air-water-bubble

The visual of this bubble mainly translates the action of blowing air through a familiar tool that encases the breath of the user. I wanted the bubble to match the input of this action as much as possible. Even from the moment it pops, it stains the canvas with a splatter of watercolours. This is the last piece of the user that is left with the installation. At this stage it is still questionable whether or not this leaves an impact of some sort on the user once they leave the space, or whether they feel fulfilled or included.

The bottom line, really, is creating an interactive experience that initiates participation in a non-intimidating way and allows familiarity to take shape, so users feel more comfortable and less confused about what to do once in this space.

The setup of the interaction then unfolds from point A to point B in a way that correlates the purpose of one step to the next, based on the actions leading up to the final output.

Easiness –> Action –> Input –> Feedback.

Easiness: Makes the interactive work inviting.

Action: Makes the user do an action based on what context they have been given from the installation space.

Input: The action prompts the user to give an input based on the context of the action. (For example, the user sees a bubble wand, so they are most likely going to blow through this wand.)

Feedback: The user gets feedback that relates to their input and action.

??? : The complementary expression that leaves a piece of the user with the installation piece. (In this case, a stained watercolour splat on the canvas from the bubble the user just blew from the bubble wand they knew how to use, given the context of the installation space.)

I presented this prototype idea to my class and got little feedback on it. I would get more feedback the following day, when I planned to explain my research question in regard to this step-by-step interactive prototype I had managed to achieve.

Current Research Question:

How do I approach the disconnect between humans in smart spaces and the information from smart spaces through visualized displays?

 

Post 10, July 25

img_6561-2 img_6588-2

I presented my prototype to some external CFC staff today…some feedback was more useful than the rest, especially from the staff!

Here were some snippets of my notes after speaking to them loosely about my intentions and why I created Air + Water the way I did.

“I would like to see some form of a response from the user simply touching the wand that could follow up to the act of blowing.”

“You could present different kinds of blowing pressures, and show such intense inflation through the bubble?”

 

“What do the watercolours create and connect with through the piece? It feels like the watercolours have little agency at this point.”

“You could present one bubble per breath, with its size depending on how long the breath is.”

Some technical advice:

“Do a countdown timer for how long the bubbles will live…”

“Instead of a microphone you could use a temp sensor. Much more controllable, and you’ll pick up less noise that way!”

“It’d be cool if, once the bubble popped, you could smear the colours on the digital canvas, or have people pop the bubbles themselves.”

Interesting observations:

One user picked up the bubble wand, dipped it into the jar and blew!

They also expressed how familiar and interesting the experience was.

They commented on how my instinct was right about the bubble leaving a mark with watercolours: “There is this enticing way of leaving a mark.”

“The concept of introducing familiarity is very strong here. Maybe we can have different scenarios of interactivity..”

Familiarity as a UI tool. What does that breed? Using familiarity to get to the element of surprise?

This element of surprise really stood out to me…because it is basically the end result that will eventually make the user’s interactions purposeful and, at the same time, populate and create the artwork itself.

This led me to my second research question:

How do I approach the disconnect between humans in smart spaces and the information from smart spaces through familiarity, tools and visualized displays?

Post 11, July 26

I met with the CFC director today. Her name was Ana. I showed her my prototype and walked her through my thought process.

She had some insightful feedback for me, but she got lost more in the visual and less in the steps leading toward that visual. The interactions leading up to what the user saw became so foreign to her that she suggested I go to the science centre and look at some of the installations there.

Those works focus more on visualizing input alone, and not necessarily on the user’s actions that complement each step leading up to the visual portion of the interactive space I was creating.

So, in short, I felt she had lost sight of what I was trying to get at. I thanked her for her time and started to pack up my things, and shortly after I sat down to reflect on my next steps.

The biggest question that usually came up in our meetings, and one I took into a lot of consideration, was “why breath, why air?”

Well, simply put, air is one of the most non-intrusive direct human inputs to collect, which makes this kind of data collection easier to craft into an inviting interactive space.

I think playing with air is fun despite its difficulty (all inputs are difficult to manipulate and play off of). One action can easily be crafted from another, and within the right context the reasoning can be crafted just as easily.

Post 12, July 30

Today I plan to work with the temp sensor and (hopefully) move my work into Unity. I feel I’ll have much more bandwidth for animating the bubbles there; right now they are a bit static and not very lifelike…

I tried using a temp sensor today but it was even more finicky than the microphone!…Yikes forever…

img_6675

I went back to the microphone for now.

For my next test I wanted to grow a bubble. I did that in Processing by having the bubble’s size increase whenever the microphone reached its threshold. It was a simple blow increase: as long as the user blew into the microphone, the bubble would grow.
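Boiled down, the test looks something like this (Minim once more standing in for the wand mic, and the numbers are placeholders): while the level is above the threshold, the diameter just keeps increasing.

```processing
import ddf.minim.*;

Minim minim;
AudioInput mic;
float threshold = 0.15;
float diameter = 20; // the bubble's starting size

void setup() {
  size(600, 600);
  minim = new Minim(this);
  mic = minim.getLineIn(Minim.MONO, 512);
}

void draw() {
  background(20);
  if (mic.mix.level() > threshold) {
    diameter += 2; // a simple blow increase: blowing makes the bubble grow
  }
  noFill();
  stroke(255);
  ellipse(width/2, height/2, diameter, diameter);
}
```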

 

My next step was to have the bubble slowly vanish and leave a lingering watercolour splatter in its place:

watercoloursplat

I plan to create a set of different watercolour splatters, with one appearing at random when a bubble disperses. A rough sketch of that selection logic is below.
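The idea is just to keep an array of splatter images and stamp one at random wherever a bubble disperses. The file names are hypothetical placeholders for the watercolour PNGs I’d export, and a mouse click stands in for the pop:

```processing
PImage[] splats = new PImage[4];                     // the splatter set
ArrayList<PVector> marks = new ArrayList<PVector>(); // where splats have landed
ArrayList<PImage> stains = new ArrayList<PImage>();  // which splat landed there

void setup() {
  size(600, 600);
  for (int i = 0; i < splats.length; i++) {
    splats[i] = loadImage("splat" + i + ".png"); // hypothetical filenames
  }
}

void burstAt(float x, float y) {
  // called wherever a bubble disperses: leave one random splatter behind
  marks.add(new PVector(x, y));
  stains.add(splats[int(random(splats.length))]);
}

void draw() {
  background(255);
  imageMode(CENTER);
  for (int i = 0; i < marks.size(); i++) {
    PVector m = marks.get(i);
    image(stains.get(i), m.x, m.y);
  }
}

void mousePressed() {
  burstAt(mouseX, mouseY); // stand-in trigger for a bubble popping
}
```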

Post 13, Aug 1st

I presented my second draft of Air + Water to my cohort on Thursday, August 1st, 2019: a small demo of the air bubble increase, along with a small PowerPoint presentation about my process. I spoke about all the steps I took and thought about while getting to the end result I did. Ana also mentioned that it wouldn’t hurt to try exploring other familiar tools that encourage blowing, while also thinking about how those actions complement the visual output.

Here’s a quick link to my PowerPoint presentation!

img_6782

My Instructive Failure:

I was unable to demonstrate the full contextual setup of my interactive installation, because the watercolour splatter did not appear after the bubble vanished. (I’m still working on this, but it’s more or less a coding logic issue on my part; see the sketch below for the kind of state logic I suspect it needs.)
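This isn’t the fix itself, since I haven’t solved it yet, but it shows the kind of simple state sequence I suspect the logic needs: the bubble moves through grow, float and pop states, and the splatter is only drawn once the pop state is reached, so the two visuals never fight in the same frame. Everything here is a placeholder (a size cap stands in for the blow input, a tinted blob for the splatter image):

```processing
final int GROWING = 0, FLOATING = 1, POPPED = 2;
int state = GROWING;
float diameter = 20;
int popAt; // millis() at which the floating bubble should pop

void setup() {
  size(600, 600);
}

void draw() {
  background(255);
  if (state == GROWING) {
    diameter += 1;             // placeholder for blow-driven growth
    if (diameter > 120) {
      state = FLOATING;
      popAt = millis() + 2000; // pops a couple of seconds after it's done growing
    }
  }
  if (state == FLOATING && millis() > popAt) {
    state = POPPED;            // vanish first, splatter second
  }
  if (state == POPPED) {
    noStroke();
    fill(80, 140, 220, 150);   // placeholder blob where the splatter image would go
    ellipse(width/2, height/2, 160, 140);
  } else {
    noFill();
    stroke(0);
    ellipse(width/2, height/2, diameter, diameter);
  }
}
```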

How my thinking process changed:

I started to think more about the changing state of the bubble based on the user’s input than about the aesthetics of the visualization itself. That’s why, in this latest iteration of Air + Water, I decided to have a single bubble grow when the user blew into the bubble wand and then have it float. The focus on the user’s breath being translated into the growing existence of this bubble seems clearer this way.

From there I started to think more about the interaction of the installation piece: from the bubble wand, to the bubble, to the watercolour splat collective. All of these actions in a way had to complement the action before.

Future Works:

For future work I plan to continue working on Air + Water for the rest of the summer. I believe it could potentially become integral in helping me answer my research question:

How do I approach the disconnect between people in smart spaces (such as interactive installation pieces) and the information from smart spaces using familiarity, tools and visualized displays?

I’ve become very interested in correlating the input of a human characteristic such as breath or movement (such as walking in a space) not only to what is presented and collected on a screen, but to how that input reaches that point. The level of complementarity between action and input also helps lessen this disconnect. Air + Water is definitely an example of this. I’m looking forward to exploring this direct user input relationship further within interactive installation spaces.