remember ocean

Karo Castro-Wunsch

live demo:



Process Journal

Sometimes I forget about the ocean; sometimes I’m thinking about things that seem more important at the time but in retrospect really weren’t when compared to endless water. The ocean does her own thing and takes care of herself, as far as I can tell, so she doesn’t really get up in your face too often unless you’re of the Moby Dick/Tom Hanks variety. So this piece is a somewhat agitated, demanding incarnation of the ocean, trapped in the internet as per usual. The central thought is ~ don’t forget the ocean ~ i.e. remember the ocean.

I put the demanding ocean next to population growth: the ocean at the whim of population growth, becoming more agitated due to it, becoming increasingly spastic. With life coming from the sea and all, it’s probably pretty tough being the mother of all that and keeping your kelp beds in good order too. There’s the calm ocean, more sphereish and gentle, and the cartoonishly wild ocean that doesn’t even fit on the screen, it’s having such a fit.

Your job is to mind the ocean, to let her know that yes, you are thinking of her, you do remember her. And that’s the role of the peripheral: a salt water detector in a shell that is activated only when you tap it with a finger dipped in salt water. The sensor is tuned to react only to salt water, regular water carrying too faint a signal. By touching salt water to the shell (a small altar of sorts) you let the ocean know that you’re thinking of her, in a way that assures her you also remember at least a small fragment of the sensation of the ocean. A drop of salt water on a finger is practically an ocean if you look close enough…? Touching the shell calms the ocean and she resumes peaceful orbliness, but she’s only calm so long as you actively attend to her; otherwise she slips steadily back into agitation. And so it goes.

I’m really glad of the meditation on ocean vs population this project allowed me, but what I’m less enthusiastic about is cats vs conductive thread.


^ It’s a knot; there were many of them. Future iterations of this project will use a wireless connection. The initial thought for the thread was to wrap the shell with it and leave gaps which could be bridged with salt water. This proved difficult, as the thread holds on to too much moisture, so I re-designed using a two-prong moisture detector. Also, yes, I am aware Creatron sells a moisture detector for $5; I thought I could make something that looked better, which was a successful negative result!


It worked very well, though, and was able to handle large drops of water without becoming waterlogged.

On the visual technical end, learning more WebGL and shader systems is always fun. The ocean is a deformed icosahedron that uses 3D and 4D periodic Perlin noise in conjunction with vertex shader reflection to produce its wobbliness.
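The real deformation happens in GLSL, but the idea can be sketched in plain JavaScript. This is a hedged illustration, not the project's shader: `periodicNoise` is a stand-in (a sum of sines, periodic in time) for the 3D/4D Perlin noise, and each vertex of the unit sphere is pushed outward along its own direction.

```javascript
// Stand-in for periodic Perlin noise: a bounded sum of sines, periodic in t.
function periodicNoise(x, y, z, t) {
  return 0.5 * Math.sin(x * 3 + t) +
         0.3 * Math.sin(y * 5 + t * 2) +
         0.2 * Math.sin(z * 7 + t);
}

// Displace a unit-sphere vertex along its normal (which, for a sphere
// centred at the origin, is the vertex direction itself).
function displaceVertex(v, t, amount) {
  const n = periodicNoise(v.x, v.y, v.z, t); // always in [-1, 1]
  const scale = 1 + amount * n;
  return { x: v.x * scale, y: v.y * scale, z: v.z * scale };
}
```

With `amount` at 0 the mesh stays a calm sphere; ramping `amount` up with population growth produces the increasingly agitated wobble.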

The population API doesn’t give a second-to-second population update, so to get that working I took the daily population status report and interpolated it with the average growth of humans per second.
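The interpolation described above can be sketched in a few lines. This is a hedged sketch, not the project's actual code: the function name, the report timestamp, and the growth rate of 2.6 people/second are all assumptions for illustration.

```javascript
// Interpolate between daily API reports using an average growth rate.
// dailyPopulation: the last reported figure
// reportTimeMs / nowMs: timestamps in milliseconds
// growthPerSecond: average net human growth per second (assumed value)
function interpolatedPopulation(dailyPopulation, reportTimeMs, nowMs, growthPerSecond) {
  const elapsedSeconds = (nowMs - reportTimeMs) / 1000;
  return Math.round(dailyPopulation + elapsedSeconds * growthPerSecond);
}

// e.g. one hour after the daily report, with ~2.6 people/second:
// interpolatedPopulation(7.6e9, reportTimeMs, reportTimeMs + 3600 * 1000, 2.6)
```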

To deal with the serial data from the water sensor there was quite a bit of tweaking in the form of scaling and averaging, but the numbers still seemed to drift as I kept using the sensor; I assume it started to degrade a little. Poor guy.
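The scaling-and-averaging could look something like the following. This is a minimal sketch under assumptions: the window size of 8 and the 0–1023 analog range are placeholders, not the tuning actually used.

```javascript
// Moving average over the last N raw readings, to smooth noisy serial data.
function makeSmoother(windowSize) {
  const buffer = [];
  return function smooth(raw) {
    buffer.push(raw);
    if (buffer.length > windowSize) buffer.shift(); // drop the oldest reading
    return buffer.reduce((a, b) => a + b, 0) / buffer.length;
  };
}

// Rescale the averaged 0..1023 analog reading onto 0..1, clamped.
function scaleReading(avg) {
  return Math.min(1, Math.max(0, avg / 1023));
}

const smooth = makeSmoother(8);
// each new serial value: scaleReading(smooth(rawValue))
```

Averaging hides spikes but cannot fix a sensor whose baseline drifts, which matches the behaviour described above.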


The circuit for the sensor just involves grounding the sensor wire so that it’s more constant when off, and passing the ground through a resistor so that more of the current is channeled towards the analog in. Is that how electronics works? Experimentally, though, this circuit functions.




The physical interface is a lil bowl of water and a shell. I have a good deal of remorse about not making the sensor more invisible. Ideally you just touch a bowl of water and that’s your interface. I’ve seen water bowls used as synths before but that’s a mk004 thing now.


The simple contrast of the visuals and concept makes me happy.

References / Resources

Reflective Deform Shader:

World population api:

Considerations on the Ocean:

Water Sensor Inspiration:





Finlay Braithwaite


This project realizes an interactive audible tone and interval reference. Think of it as a modern-day pitch pipe that allows users to explore complex chords instead of a single note. The Harmonizer can reproduce any frequency in the audible spectrum. With a fundamental ‘root’ frequency established, users can explore three additional harmonic intervals to build any four-part chord of their choosing. The Harmonizer allows for either equal or just intonation, illuminating a difficult-to-demonstrate yet fundamental aspect of music theory. Tactile controls allow modification of each voice’s volume, waveform, and pan. In addition, a filter can be applied to shape the tone of the Harmonizer’s output. The Harmonizer system is the combination of a web application and a USB peripheral.

Demo Video

Harmonizer DEMO

Code – Harmonizer v0.0.1



Code for Adafruit Feather M0

P5.js javascript

Breadboard Layout


Wiring Schematic


Process Journal

As this experiment is a rare occasion to focus on oneself and make a device solely for one’s own use, I naturally wanted to create a sound peripheral and application of some sort. My artistic and professional background is in music and sound design as well as audio operations and engineering.

At first, I wanted to make a tool to analyze real-time acoustic information and create a fun yet useful visual display. I thought it would be interesting to design my own device that was equal parts sound pressure meter, real-time spectrum analyzer, and waveform monitor. I began investigating the feasibility of such a project. I would connect an Arduino-compatible microphone to the Adafruit Feather M0. From there I would ultimately connect to a P5.js platform that would translate my incoming audio data into various visualizations. The beginning and end seemed very realistic, but I began worrying about the serial data connection between the Feather and P5.js. Our in-class examples ran at 9600 baud, but I felt I might need a much higher data rate. Furthermore, in probing the audio possibilities of P5.js with my laptop’s on-board microphone, I felt that an Arduino microphone was overly complicated on one hand and redundant on the other.

Moving on, I became intrigued with p5.sound’s sequencer functions. This function allows for a sequence of media files or musical notes to be triggered in a repeatable sequence. Following this exploration, I resolved to create a step-sequencer. Users would be able to define the parameters of the sequence using a physical controller. A physical peripheral becomes so important in this context. A computer really only has two variable inputs that can be controlled simultaneously, the X and Y positions of the mouse. I wanted to open this up so one could physically control eight variables simultaneously. To get the most mileage out of this, I thought of using potentiometers as my input. It appeared that the Feather would accommodate eight analog inputs, so I began dreaming about what I could do with eight pots. My first design had eight knobs controlling the pitch of eight notes in an eight-note sequence. If I could build that, I would think about adding additional functionality. This would resemble the Doepfer Musikelektronik ‘Dark Time’ analog sequencer, where every note is set using a potentiometer.

The sequencer was a nightmare to work with in the context of my design priorities. Once the sequencer pattern was set, there didn’t seem to be a fluid way to insert new note pitches into the pattern from the potentiometer value. Also, as the sequence progressed, the CPU demand of the host browser escalated until the app choked out. While this sometimes made an interesting or comical sound, it highlighted that the sequencer was turning oscillators on but not turning them off. Finding the p5.sound code slightly impenetrable with my coding experience, I did not find a workable solution to this problem.

With this left-turn in my design plans, I switched directions and began developing a tone and interval reference system. I’m fascinated with the concept of intonation, the division of the audible spectrum into musical notes. There are many different systems of musical tunings stemming from the world’s diverse musical cultures. Dividing the spectrum is first done proportionally in octaves. Each octave is a doubling in frequency. The difference between 240 Hz and 480 Hz, for example, is an octave. Within each octave we can apply simple fractions to derive tones and semitones. In western music, each octave is divided into 12 semitones. Simple fractions create intervals known as a third, fifth, seventh, etc… This system is beautiful in its simplicity, and the tones it produces have a pure relationship with one another. This type of intonation is known as just intonation. While theoretically ideal, these ratio relationships of notes don’t transpose well. Instruments created in just intonation can only play in one scale.

To get around this limitation of just intonation, a system of equal intonation was created. Equal intonation has each note equally spaced in terms of frequency from its surrounding notes. It’s a beautiful system that allows an instrument to play music in every key. However, the simple relationships of just intonation are lost, making all combinations of notes slightly dissonant. It’s a tradeoff required to create instruments for all keys and, as a result, is part of the foundation of western music. Interestingly, we’ve become so accustomed to the slightly dissonant intervals of equal intonation that music with just intonation seems antiquated and quaint; medieval in quality.
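The tradeoff is easy to see numerically. The just fifth is the exact ratio 3/2; the equal-tempered fifth is 2^(7/12), close but not identical. A small sketch (the 240 Hz root is just an example):

```javascript
// Just vs equal fifth, as frequency ratios from the root.
const justFifth = 3 / 2;                 // exactly 1.5
const equalFifth = Math.pow(2, 7 / 12);  // ~1.4983

// Frequency of an interval above a given root.
function intervalFreq(rootHz, ratio) {
  return rootHz * ratio;
}

// On a 240 Hz root:
// intervalFreq(240, justFifth)  -> 360 Hz exactly
// intervalFreq(240, equalFifth) -> slightly flat of 360 Hz
```

That fraction-of-a-hertz difference is what makes just intervals sound pure and equal-tempered ones very slightly dissonant.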

Here’s a video that does well in demonstrating the differences between equal and just intonation. You can see the challenge in this demonstration as he has to manually create each individual frequency for each interval he creates.

The concepts of intonation are hard to illustrate acoustically, as there are few instruments to elaborate with. This inspired me to create such a tool, so that educators and scholars could have a reference for complex intervals in both intonations.

My design was to create a four-oscillator tone generator that could take a fundamental frequency and oscillate at intervals related to that note. A user would set a four-part chord and then be able to switch intonations.

In my previous experiment, I had explored the concept of a two-state application. For ‘Frame It Up’ we had the launch screen and gameplay states. I wanted to take this further and create a dynamic multi-page experience for the user. I didn’t want to limit the user’s exploration by the number of physical controls. I endeavoured to create a six-page experience. I planned four pages for oscillators and their parameters, one page for effects, and one page for global parameters such as global intonation.

I first created three arrays of intervals, one for equal intonation, another for just intonation, and one with labels for both intonations. Each array entry contains the calculation to leverage a root frequency and calculate an interval. The arrays would allow me to apply a frequency to my four oscillators.

var justScale = [1, 25/24, 9/8, 6/5, 5/4, 4/3, 45/32, 3/2, 8/5, 5/3, 9/5, 15/8, 2];

//calculation of just intonation. Each interval is a simple ratio to the root.

var equalScale = [1, Math.pow(2, 1/12), Math.pow(2, 2/12), Math.pow(2, 1/4), Math.pow(2, 1/3), Math.pow(2, 5/12), Math.pow(2, 1/2), Math.pow(2, 7/12), Math.pow(2, 2/3), Math.pow(2, 3/4), Math.pow(2, 5/6), Math.pow(2, 11/12), 2];

//calculation of equal intonation. Each jump from semitone to semitone is a calculated exponential increment.

var scaleLang = ['Unison', 'Minor\nSecond', 'Major\nSecond', 'Minor\nThird', 'Major\nThird', 'Fourth', 'Diminished\nFifth', 'Fifth', 'Minor\nSixth', 'Major\nSixth', 'Minor\nSeventh', 'Major\nSeventh', 'Octave'];

//names of intervals, regardless of intonation

I thoroughly enjoyed making the code for this project. I pushed myself to become more comfortable with functions and objects. I was able to make multipurpose functions that would satisfy a number of scenarios without writing the code over a dozen times. I feel I could take this further and make my code tighter. In making the code smaller, I feel I’d be able to get a better overview of the global logic of my code while at the same time allowing me to make quick changes with global effect.

The function I’m most proud of is latchKnob. With latchKnob, a potentiometer needs to move through a parameter’s current value before it takes control of that value. It makes the dynamic switching of pages possible. Without this, all parameters on a page would snap to the current potentiometer values when the page is loaded. latchValue is an absolute (±) range that widens the window in which the knob latches.

function latchKnob(value, input) {
  if (Math.abs(value - input) < latchValue) {
    return input;  // pot is close enough to the stored value: latch on
  } else {
    return value;  // otherwise keep the stored value
  }
}


I am very happy with the physical peripheral. It’s housed in a sturdy plastic container. As a prototype it allowed me to flexibly configure and troubleshoot the electronics and Feather. Its open base made accessing the electronics easy. It was fairly straightforward to make a symmetrical design with my eight faders. Instead of two banks of four, I wanted to separate the function of two of the controls while maintaining the symmetry of the device. The dial on the left would control the page and the dial on the right would control the root note.


Overall, I’m very happy with how this project evolved. I still feel it’s a work-in-progress and I’m excited to continue working on it. As a first version, I learned a lot about the system’s strengths and weaknesses and found direction for future versions. For the next version, I’m going to make a page that displays and controls all intervals for all oscillators in one space. I also want to explore the sequencer again, allowing you to play chords in a rhythm or possibly as an arpeggio.


By: Emma Brito

Project Description

SleepyTime is a laptop peripheral that aims to assist in lulling the user to sleep. Every element of the project seeks to be a relaxing addition when falling asleep. To use SleepyTime, the web page hosting it must be open. One of three calming instrumental songs will then begin to play. The volume of the song is controlled by a light sensor: as the light gets dimmer the music also gets quieter, with the goal of gradually lulling the participant to sleep. The light sensor is hidden within a teddy bear so that nothing seems out of place within a “sleeping” environment. Finally, if the participant chooses to leave the laptop screen illuminated, they will see an animation of twinkling stars in the night sky.

GitHub Code

Process journal

Brainstorming and Beginnings

Initially I was excited to do this idea with a pulse sensor as the input, and create a relaxing activity that involved a visualization of pulse rates on the laptop screen. I felt that it suited the prompt of “making something for the computer it doesn’t have” very well. After all, laptops and computers often are a source of stress because of their relationship to work. Since we bring them home with us it seems like we can’t get away from stress. This is where a project that focuses on relaxation would come into play. It emphasizes de-stressing.

To create this idea I went to Creatron and purchased a pulse sensor. I worked with it for a while, but found it difficult to use. I initially had a hard time getting access to the values, and once I did, the sensor itself was unreliable; when I was able to get readings they were inconsistent. I tried a couple of different code examples I found on GitHub to see if they made any difference (there were no examples like we have for some of the other sensors, because pulse sensors were not included in our initial kits). After playing around with the sensor for a number of hours I decided it was time to modify the project slightly.

New Idea

I still wanted to pursue the idea of relaxation within the project, when I had the idea of taking it to its fullest extent, i.e. sleep. Once I came up with this as the ultimate goal, I quickly decided on a light sensor as the input. Initially I was planning on having it serve more as a prompt for sleep, with the lullaby turning on once the room hits a certain level of dimness. I altered the intention slightly once I realized that light levels can drastically change from room to room, regardless of the time. Having the web page opened intentionally is the best way to ensure that it is most effective. This also meant that the project would become more of a sleep assistant.

First Issues

I was easily able to wire the light sensor into the breadboard, upload the code, and see its values in the serial monitor. This was the first thing that I did. Unfortunately, getting started with P5 was not as smooth. I was taking things one step at a time: I made sure my libraries were in order and tried loading an .mp3 into my sketch. I then put it all on my webspace. Unfortunately, this led to a number of different errors.


I couldn’t figure out what the issue was, especially since the code was lifted straight from their examples. It was working for other people, but not for me. I deleted everything and re-downloaded the P5 libraries and sketch. It still didn’t work. After going through the index and code once more, the only thing I could think to do was download the libraries yet again and try the same code and file. Luckily, this time it worked! It loaded successfully on my webspace.

This meant that I could finally get to work connecting it to the Arduino. I added the port, console.logs, etc. Once they were both up and running I could begin making my project. I started with the basic sound code.


Adding Variations

I added in my “if” statements for the volume to adjust to the light value ranges. The music will get quieter as the light lowers. I also added in a few more song options and a “random” feature so that there was more variety in the music. This was done in an effort to keep the sound from becoming too repetitive. This appeared on the webspace.
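The threshold logic described above can be sketched like this. It is a hedged sketch: the cutoff values and volume levels are placeholders for illustration, not the project's actual calibration.

```javascript
// Stepped "if" ranges turning a light-sensor reading into a volume level.
// Thresholds here are assumed values, not the real calibration.
function volumeForLight(light) {
  if (light > 200) {
    return 1.0;        // bright room: full volume
  } else if (light > 120) {
    return 0.6;
  } else if (light > 60) {
    return 0.3;
  } else {
    return 0.0;        // dark room: fade out so the listener can sleep
  }
}
```

In p5 the returned level would be passed to something like `song.setVolume(...)` each time a new sensor value arrives.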


Issues with the Light Sensor and Sound

Once I tested the whole thing out, I found that the “random” song feature was the only aspect of the previous stage that worked without issue. The volume of the songs was not changing. I tried shining light on the sensor as well as covering it, but there was no difference. I went back to look at the values listed in the “if” statement and compared them to the sensor readings, and found that the light values had changed within the room. I adjusted the ranges but still saw no difference. Since everything else was working, I had a difficult time figuring out what the issue was.


Ultimately, the code in Arduino had to be changed from Serial.print to Serial.write. The volume would then change as lower levels of light reached the sensor, but would not increase in volume once uncovered. This led to another readjustment of the “if” statement ranges.
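This is why the ranges needed readjusting: Serial.write sends a single raw byte (0–255) rather than the ASCII text Serial.print produces, so the values arriving in p5 live on a different scale than the sensor's 0–1023 readings. A hedged sketch of the receiving side (names are illustrative):

```javascript
// inData arrives as a raw byte, 0..255, because the Arduino uses
// Serial.write (typically after scaling the 0..1023 reading down,
// e.g. value / 4). Remap it to a 0..1 volume, clamped.
function byteToVolume(inData) {
  return Math.min(1, Math.max(0, inData / 255));
}
```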



Once the sensor and sound system were running properly, I was able to focus on the visuals of the webpage. This had not been my priority, because visuals seemed less important in creating a “sleepy” environment (closed eyes, and all). Making the web page more appealing to look at was ideal, though. Originally I was going to work with visuals of water, but ultimately settled on the night sky. While I wanted to avoid this because of my previous project dealing with the stars, I decided it was more relevant, and water would be better suited to different audio. Originally I uploaded an image onto the canvas, but it did not enhance the experience very much. I then decided on a GIF because it could easily be looped, and the repetition of it could be hypnotic.

This meant that I had to download the p5.play library and place it in my file. I separated a series of still images of twinkling stars from a video in Photoshop in order to create a loop of five frames for the GIF. I followed the sample code and was able to load them onto the webpage as an animation, although they appear as a GIF. The main issue I encountered with the animation was that it did not fill the entire screen. I found that distracting, so I had to re-export the frames in a larger format. I found that this created a more immersive experience.


Final Touch

Once everything else was completed, it became time to hide the breadboard. I didn’t want to just place it in a box, because that didn’t seem to suit the overall intent of the project. Something related to comfort seemed like the best idea, especially something that could have a purpose. I landed on the idea of a pillow or teddy bear that would have the light sensor exposed but could still be cuddled. This is exactly what I did, with the breadboard in the teddy bear. Moving forward I would like to make this wireless, with more variety in music and visuals.


Sketches, Design files, and Photographs


Above: Two early sketches of how it might be used and set up


The Fritzing diagram followed for the light sensor

Video of the device being used

The music changing sound early on

Project Context

Many of us joke about how our laptops keep us up late at night. They often serve as a tool for insomnia rather than sleep. Additionally, technology is often seen as stress-inducing rather than relaxing because of its association with work. All this to say, I was interested in figuring out a way to create the opposite effect with this project. I wanted to create something that would assist in relaxation and have a calming effect on the user. I also didn’t want them to have to think about using the system, because that would potentially cause more stress as well.


Sleep Machine App: The connection between this app and my project is obvious. Both incorporate digital technology and the internet to create a relaxing sleep environment. I like how this one can be personalized for each user and offers a wide range of nature sounds. If I were to pursue other iterations of SleepyTime, I would like to incorporate more options for the user.

Sleep Machines: These machines are updated versions of the traditional white noise machine. Rather than playing music, they offer a variety of other soothing sounds. Most involve natural phenomena, while settings include white noise and pink noise. They can now be controlled from devices.

Sound and Sleep Studies: There have been a lot of studies about how sound can be disruptive to the sleep cycle and the negative health impacts that this can have on people. It can be beneficial, however, depending on the kind of noise utilized. Soft, repetitive sounds can drown out other disruptive noises like outside traffic or even a snoring partner. This regulates the sound environment within the space, which is crucial to getting a good night of sleep.

P5: I used ( and Make: Getting Started with p5.js for many portions of the code

Arduino Code: I used this code for the light sensor -

Songs used as lullaby

  • Andy McKee – Rylynn
  • Andy Williams (Acoustic) – Moon River
  • Iron & Wine (Instrumental) – Each Coming Night

Experiment #3 – Digi-Doodler





This “game”/peripheral is essentially an etch-a-sketch-like controller that produces the drawing digitally, alongside creative prompts, a timer, and a snapshot-saving function.




Day One

I initially had the idea to create a peripheral that would help me take better care of my eyes. I have been wearing glasses/contacts since I was in third grade, and recently I’ve begun looking at the possibility of laser eye surgery. To be a good candidate, your eyes’ prescription needs to not have changed drastically in the past little while, and I wanted to create a way to put myself in an environment where this was possible. I have the habit of using my computer deep into the night, and this strains my eyes to a degree I would like to monitor. So my initial idea was to use a light sensor on the back of my computer that would adjust for various light thresholds in my room at night. Once it detected the threshold for a dark environment, a stopwatch would be triggered on the webpage I created, logging the amount of time the sensor was within this threshold. On the day I settled on the idea, I managed to hook up the light sensor, code for it with Arduino, and then get a log of the current time through p5.js.

Day Two

My next step with this light sensor idea was to figure out how to create a stopwatch that logs and timestamps. I spoke to people in class and consulted the internet, and kept coming up short; I was only getting resources on how to set a timer instead. At this point I was getting nervous, because I didn’t know if I should continue with this idea and make some compromises in the design, or just start from scratch instead. I decided to approach the assignment again from the beginning and devise a new kind of peripheral that I need in my life. I was hoping I could settle on something that would incorporate elements of the first idea, but I didn’t want to restrict myself, so I let the ideas soar.
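In retrospect, the logging stopwatch isn't far from a timer. A minimal sketch, with invented names, of how start/stop events could be accumulated into a timestamped log:

```javascript
// A stopwatch that records each timed session into a log.
// Callers pass in timestamps (e.g. Date.now()) so the logic stays testable.
function makeStopwatch() {
  let startedAt = null;
  const log = [];
  return {
    start(now) { startedAt = now; },
    stop(now) {
      if (startedAt === null) return; // ignore stop without a matching start
      log.push({ start: startedAt, end: now, elapsedMs: now - startedAt });
      startedAt = null;
    },
    log
  };
}

// In the sketch: start() when the light reading crosses into the dark
// threshold, stop() when it crosses back out.
```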

I began thinking about note taking and how much I love that pencil-and-paper contact. But in the past couple of months, I’ve been finding that these moments are fewer and fewer. This is in part due to my binder being very heavy to bring alongside my laptop at school every day. But the bigger reason is that nobody around me seems to be taking handwritten notes, so it often made me feel like a bit of an outlier to be erasing and tugging on paper. So I tried out note taking on my laptop and, lo and behold, it is easier and faster and more convenient, yes. But a part of me still misses paper and its affordances, like…doodling! So I kept thinking in this direction and thought about etch-a-sketches, the most iconic and nostalgic doodler. I thought about its shape and how the knobs are basically two potentiometers. I thought that this was a very feasible and resourceful second idea, considering the time crunch and my building anxiety!

Day Three

I created my potentiometer breadboard and began looking into how I could create an interface similar to the etch-a-sketch’s drawing capabilities. I needed to figure out a continuous draw function. I looked in the book and eventually made sense of how to do it with an ‘if’ statement and a (mouseIsPressed) check that would then map a line with (pmouseX, pmouseY, mouseX, mouseY), covering both the X and Y axes, but I was having difficulty adapting it to the two potentiometers I needed to use.

I found some code on GitHub that introduced me to JavaScript’s splitting of a string, as well as parseInt(string, radix), as a way of creating the line extensions in the vein of the etch-a-sketch.
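The split-and-parseInt approach boils down to a few lines. This is a hedged sketch: the comma-separated "x,y" line format is an assumption about how the Arduino sends the two pot values.

```javascript
// Parse one serial line like "512,300" into the two potentiometer values.
function parsePots(line) {
  const parts = line.trim().split(',');
  return {
    x: parseInt(parts[0], 10),  // radix 10: always decimal, no surprises
    y: parseInt(parts[1], 10)
  };
}
```

Each parsed pair then becomes the endpoint of the next line segment, with the previous pair as the starting point, which is what produces the continuous etch-a-sketch stroke.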

I had issues with the resulting webpage being quite glitchy and not very smooth but I tried to buffer those problems by adjusting the delay and the thickness of the line itself.

I then met up with Nick and was able to brainstorm some ways of taking this assignment further, as an etch-a-sketch is definitely a peripheral that I need, but it is still something I had not inherently created. So I started thinking about adding creative prompts to make the experience a ‘productive’ and ‘guided’ doodling session. I was also told that adding a timer might aid the presentation. This fit into my overall idea too, of having a doodling device that still lets you focus on your tasks at hand, and what better way than having time as a limitation/restrictor? Then I had the idea of adding a screenshot function so that you could keep a kind of log of all the different doodles you create, as you would have in your binders during class.

I also went up to speak with Reza about constructing a box for my controller. He laid out all my options and I decided a ready-made box would suffice for my purposes. I used a Tupperware container and spray painted it with a white primer, which yielded a nice little container! I fitted it with two pieces of blue foam on the inside so that I could slide in the breadboard and have it stay in a stable spot instead of potentially falling out at any point. This way, it was a compact box with an open bottom and a hole on the side for the USB cable to fit through.





Make: Getting Started with p5.js

Gay Magic


By: Tommy Ting

Gay Magic is a web-based game where the player has to move the objects around the site with two potentiometers in order to unlock all the mystery items such as poppers, Truvada and a chain collar. Once all items are unlocked, a spell is cast and Dionysus is summoned to start the party.

Circuit Layout or Circuit Schematics


Code – Master file, final code and assets. – See how I progressed and worked through my code.


Supporting Visuals

Wiring up the force resistor


First Test

Second Test

Third Test

Fourth Test

Final Presentation and Game Play

Final Presentation and game play

Final Presentation


After Nick’s suggestion of moving the potentiometers in front of the fur and use the fur as a mouse rest


First Iteration of Dionysus without the animated rainbow


Final iteration of the Dionysus page


Process Journal

Day 1 Working with API

On my first day, I was looking through all the available APIs and found one that I was particularly attracted to, called “We Feel Fine”. This API draws from many different web spaces whenever someone posts something that starts with “I feel” or “I am feeling”. From this, I was hoping to use a force resistor to activate this API along with Spotify, to pair emotions with a matching song. I started to wire up the sensor and immediately found it quite hard to work with because it was very sensitive; even a very light amount of force would send it from 0 to 1023. Moreover, I realized that the “We Feel Fine” API was not working, as I wasn’t able to pull anything from it. I decided to move on and find another API to use. After some searching, I couldn’t find anything that really struck my interest.


Day 2 Moving On

I started playing around with some photo montage assets that I have been working with in my class Possible Futures with Dr. Poremba. In this class, I have been exploring the intersection of queerness and critical future studies with performance and dance. I have been cutting up all these body parts from magazines to form new shapes. One thing that really struck me during this creative process was using these body parts to form a Stonehenge scene. Queerness and witchcraft have a really interesting relationship of radicality, Otherness and anti-normativity (which I will explain in depth in my project context). I quickly animated a sprite that I had previously made, using some sample code found in the p5.play library. I really liked what I saw and decided that I would use this experiment to further develop some ideas I have been exploring in Possible Futures and my thesis proposal.


Day 3 Start Working on Stonehenge Scene

On my third day I quickly made the first Stonehenge sprite, with a crystal gem as the cursor. I wired up one potentiometer and got it to move the gem on the X-axis. I adapted the Arduino and P5 code from ITP. I had to play around with the P5 code in order to get the potentiometer to control the gem. In my P5 code, I first mapped the x position of the gem directly to inData, which only let the cursor move a very limited distance, maybe around 300 pixels. Later, I created a var which mapped the potentiometer value to the full width of the screen, and then mapped the position of the gem to that var, which allowed the gem to move full-screen. I did encounter a problem with this solution: initially I put the var up top above everything, but once I put it inside the draw function it worked perfectly.
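The fix described above is the classic use of p5's map(). A plain-JS equivalent, with assumed ranges (a 0–255 serial byte onto a hypothetical 1280-pixel-wide canvas):

```javascript
// Plain-JS equivalent of p5's map(): linearly rescale value from
// [inMin, inMax] onto [outMin, outMax].
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Inside draw(), recomputed every frame from the latest serial value:
// gem.position.x = mapRange(inData, 0, 255, 0, width);
```

Computing this inside draw() matters because inData changes every frame; a var assigned once at the top of the sketch would only capture the value at load time, which matches the problem described above.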


Day 4 Two Potentiometers and Stonehenge Scene

On my fourth day I tried wiring up two potentiometers so that one would control the X-axis and the other the Y-axis. I updated the P5 code and used the Arduino code from Nick’s example, but I couldn’t get it to work and wasn’t able to find any help from my peers. I decided to put this idea away for now because I was running out of time, and I wanted to finish the Stonehenge scene and make it a working prototype with at least one potentiometer. I then created the rest of the sprites in Photoshop and tested the collider and displacer code found in the p5.play library, which I assigned to some assets. I played around with the overlap animation-change code (also from the Play library) so the gem cursor would change into a different object when it overlapped one of the Stonehenge montages. Once that worked, I went back to Photoshop to finish creating the sprites.


Day 5 Finished Stonehenge Scene

I finished the Stonehenge scene by creating all the assets in Photoshop, and developed the game concept further by designing a very simple mechanic and goal. The player has to move the floating body parts (which the gem collides with) so they overlap the Stonehenge body parts; only one combination of stone and floating body part results in a change of animation. Once all the body parts have been changed, an image of Dionysus appears. It took me some time to figure out the code to make Dionysus appear once all the images had been changed. After a bit of playing around, I found that a simple if statement worked: if this overlaps && this overlaps && this overlaps && this overlaps, then add the Dionysus animation. Unfortunately the animation played at a very slow speed, but I only had one day left and I wanted to spend it going back to getting two potentiometers to work.
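The win condition can be sketched as a single check over all the overlap results; this is an illustrative helper, not the code from the actual sketch:

```javascript
// Hypothetical helper: Dionysus should appear only when every
// sprite.overlap(...) check is true, e.g.
// allOverlapping([arm.overlap(stone1), leg.overlap(stone2), ...])
function allOverlapping(overlaps) {
  return overlaps.every(Boolean);
}
```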


Day 6 Revisit Two Potentiometers

I found this really challenging. I revisited Nick’s example and, after some suggestions from Kate, changed my P5 code to reflect some of the wording Nick used in his. I was able to get both potentiometers working, but only one at a time. After a lot of frustrating moments, I gave up and sought help from my classmates. I had help from a number of people, but it was Finlay who was finally able to help me with the two potentiometers (he had 8 in his project…). He told me to put everything I had in my draw function into a new function called “game”, so that the draw function wouldn’t run the game unless the serial connection was established. In the ardCon function, which runs the serial connection, he told me to switch the value of serialStatus from 0 to 1. Finally, back in the draw function, we made it so that a value of 1 would start the game function. This got both potentiometers working!
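A minimal sketch of Finlay’s structure, assuming the serialStatus flag and game() function named in the journal (the game body here is a stand-in):

```javascript
// serialStatus starts at 0; the serial-connection callback flips it to 1.
let serialStatus = 0;
let updates = 0; // stand-in for what game() actually does

function ardCon() {
  // runs when the serial connection is established
  serialStatus = 1;
}

function game() {
  // everything that used to live directly in draw() goes here:
  // reading both potentiometers and updating the scene
  updates += 1;
}

function draw() {
  // the game only runs once the serial connection is up
  if (serialStatus === 1) {
    game();
  }
}
```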


Project Context

Gay Magic is a game that explores the intersection of queerness and future studies through an investigation of gay identities and subjectivities, ephemerality and history. The mechanic of the game is to move body parts around to uncover gay objects such as poppers and a Truvada pill. All of the artistic choices in the game are deliberate: the background is pink, reflecting the colour the Nazis used to label gay men in the concentration camps; the body parts draw on the history of our bodies as sites of resistance, political protest and medical experiments; and the witchcraft and magic imagery references both the shared experience we have with witches and, more generally, 70s radical political groups. The picture of Dionysus at the end of the game is there because Dionysus is the ancient Greek god of play and party, a symbol for Freud’s pleasure principle and for queer men’s groups such as the Radical Faeries.


The use of pagan and witchcraft imagery is informed by the work of Silvia Federici. Federici argued that capitalism, in order to continue existing, requires a constant expropriation of free labour, especially women’s (2004). She connects the rise of capitalism to the struggle against communalism and feudalism, arguing that the success of capitalism depended on the subjugation of communalism, and she situates the institutionalized punishment of prostitutes, witches and heretics at the beginnings of proto-capitalism (2004). By incorporating imagery of paganism and magic, I want to reintroduce the radical potential of the gay male body that has been lost to heteronormativity, privilege and capitalism.


I decided to use body parts to create my photomontage because queer and gay bodies are the sites of our political resistance and therefore the sites of our political liberation. Photomontage is a great way to illustrate the fantastic and the eerie using recognizable imagery, and it has been a technique of choice for the Surrealists, for radical architects such as Archigram, and for current speculative futures designers (Jain et al 2011). Gay bodies have been experimented on, chemically castrated (Turing), burnt (the origin of the word “faggot”), and famously wrapped in a bag and thrown onto the lawn of the White House on October 11th 1992 to protest the US government’s lack of action during the AIDS crisis (Hubbard 2012). I am also deeply interested in the softness of technology and the “triumph of software”, which is more malleable and has connotations of playfulness (Banham 1981). In his 1981 essay, Banham states that software is able to adapt quickly to change and to undermine the rigidity of hardware. I see this as an analogy for counterculture’s relationship to the strict culture of a normative, hierarchical, nuclear capitalist society. Softness also informed my choice of fur to conceal my hardware.


Ephemerality is explored in my project through the game mechanic. By moving the body parts around, the player reveals the hidden objects, but if they move the body parts away, the revealed object quickly disappears. Scholars have suggested that queerness is often represented through ephemera such as untimely death, dance and gesture as communication, and short-lived spaces (Desmond 2001; Castiglia and Reed 2011; Farmer 2000; Getsy 2016; Muñoz 2009). I wanted to incorporate this queer ephemerality by letting it inform the design of the game mechanic. Gay Magic is a small experiment within my major research project, a video game that will bring out the radical potential of the gay male body by looking at queer history, speculative futures, cyberspaces and dance.


Banham, Reyner. “Triumph of Software.” Design by Choice, edited by Penny Sparke, Rizzoli International Publications, 1981, pp. 133‐136.


Castiglia, Christopher, and Christopher Reed. If Memory Serves: Gay Men, AIDS, and the Promise of the Queer Past. University of Minnesota Press, 2012.


Desmond, Jane E, editor. Dancing Desires: Choreographing Sexualities On and Off the Stage. The University of Wisconsin Press, 2001.


Farmer, Brett. Spectacular Passions: Cinema, Fantasy, Gay Male Spectatorships. Duke University Press, 2000.


Federici, Silvia. Caliban and the Witch: Women, the Body and Primitive Accumulation. Brooklyn, NY: Autonomedia, 2004.


Getsy, David J., editor. Queer. Whitechapel Gallery and The MIT Press, 2016.


Jain, Anab, et al. “Design Futurescaping.” The Era of Objects (Blowup Reader 3), 29 Sept. 2011, pp. 6-14.


Muñoz, José Esteban. Cruising Utopia: The Then and There of Queer Futurity. New York University Press, 2009.

United in Anger: A History of ACT UP. Directed by Jim Hubbard, The Film Collaborative, 2012.

Morse Code Messager


By Feng Yuan

Project Description

This project is a Morse code message “transmitter”. People can use this device to enter a message in Morse code; the message is converted into English and displayed on the website. They can also choose to post a status message of “GETTING A MORSE CODE MESSAGE” to their own Facebook timeline.

The Morse Code Messager is meant to be a wireless, portable message sender. Unlike convenient, user-friendly modern messaging, it gives people a chance to experience communicating in this old style.

Code is Here.

Process Journal

Step 01: Research and Ideation

After the introduction to APIs and IFTTT, I came up with my first plan: Visualizing Weather Data. The idea came from my personal experience and needs, collecting weather data and visualizing the numbers in a physical, tangible way. Based on this idea, I started to research precedents.

  • ZEEQ Smart Pillow. ZEEQ can play music to help the user fall asleep, monitor and react to the user’s snoring, analyze the user’s sleep and also wake the user up. All of these functions respond to the data it collects.

ZEEQ is a great example of how to build useful functions on top of personal data. Instead of relying on a cell phone and alarm, making the bedding itself “smart” may be an effective way to improve sleep quality and guide people toward a healthier life.

  • Tempescope. Tempescope is a tabletop gizmo that displays weather forecasts and current conditions by actually recreating them inside a sealed enclosure. If the forecast says it’s going to rain, it rains inside the box. If it’s going to be cloudy or foggy, the enclosure fills with mist. With this product, people can just glance over at the Tempescope and instantly know what to expect.

Instead of checking an app, the Tempescope represents weather data in a physical way, creating a connection between the digital virtual world and our daily life.

I also did some research into which weather API I should use:

Step 02: Change the idea and start over

After further study and research into weather APIs and current precedents, I felt it wasn’t really necessary for me to build a device to visualize weather data; there are already many similar products on the market. Focusing more on my personal interests, I decided to change my plan and create something I really like!

Morse code was developed in the late 1830s. It encodes letters and numbers into sequences of short and long signals called “dots” and “dashes”. Morse code was first used in electrical telegraph systems, and was later adapted to radio communication. Today Morse code is still used in the military, in ham radio communication, in dire emergencies, etc. As an old-school means of long-distance communication, it is very easy to learn, and the equipment is portable and easily maintained.


I am so into this vintage communication approach. I decided to take this rare opportunity to create a Morse code message sender/generator for myself.


After observing and imitating the hand-sending gestures in a 1966 US Army training film, I chose to use buttons instead of the bronze handle to input messages. Here are some basic thoughts on my coding:

  • One button sends a “dot” signal; another button sends a “dash” signal.
  • The “dot” button and “dash” button are different colors to distinguish them.
  • Pressing a button sends an int value of 1 or 2. (1 = dot, 2 = dash)
  • For letters whose Morse code has fewer than 4 characters (such as I, A, E, etc.), I use 0 to fill the empty space. “I” is encoded as 1100; “D” is encoded as 2110.
  • The Arduino processes the 4 digits obtained from the buttons and generates the letter.
  • After the buttons have been pressed 4 times, a letter is translated. For letters containing fewer than 4 Morse characters, I designed an “end entering” button. The code should be something like this:

if (button pressed 4 times || “end entering” button pressed) {

    translateDigitsToLetter();    }

translateDigitsToLetter() {

    if (1st digit is dot) {

        if (2nd digit is dot) { … }

        else if (2nd digit is dash) { … }

        else if (2nd digit is empty) { … }
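Instead of nested ifs, the padded 4-digit scheme also lends itself to a lookup table. A sketch in JavaScript (the actual project did this on the Arduino; the table here only shows a few letters):

```javascript
// 1 = dot, 2 = dash, 0 = padding, as in the encoding described above.
const MORSE_DIGITS = {
  "1000": "E", // .
  "2000": "T", // -
  "1100": "I", // ..
  "1200": "A", // .-
  "1110": "S", // ...
  "2110": "D", // -..
};

function decodeLetter(digits) {
  // pad short entries so "11" and "1100" both decode to I
  const key = digits.padEnd(4, "0");
  return MORSE_DIGITS[key] || "?"; // "?" marks an unknown code
}
```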



Step 03: Complete the coding on the Arduino side.

  • Build the electronic circuit.
  • Make sure the Arduino “knows” which letter I am typing.




Step 04: Complete the coding on the P5 side.

  • Make sure the value generated by the Arduino can be sent to P5.
  • P5 “knows” which letter it is.
  • P5 stores the letters in an array and generates a string/sentence from them.
  • Make sure all null, unidentified, and empty values are removed from the array.
  • The letters can be displayed on the P5 website.
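The steps above can be sketched as a small buffer on the P5 side (illustrative names; the real sketch’s variables may differ):

```javascript
// Collect decoded letters, drop empty/unidentified values, and build
// the string that gets displayed on the page.
let letters = [];

function onLetterReceived(letter) {
  letters.push(letter);
  // remove null / undefined / empty entries before display
  letters = letters.filter((l) => l);
}

function messageString() {
  return letters.join("");
}
```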

Step 05: Craft the Arduino container to hide all the wires.

  • Following the old-school style of Morse code, I chose to use a wooden box as my container.
  • I chose a red button for dot, a blue button for dash and a black button for “end entering”.
  • With the help of Reza, I made this box.




Step 06: Connect Adafruit with Facebook in IFTTT.

In my plan, after the Morse code is translated into English letters and a string value is generated, that value should be sent to Adafruit and then show up on my Facebook timeline as a status message. However, after testing over and over again, I realized the string value generated by the Arduino could be sent to Adafruit but couldn’t be sent to, displayed on, or used by Facebook. Instead of using IFTTT, which has a super user-friendly interface but also a lot of limitations, I would probably need to use the Facebook API directly. For now, I had to change my plan to only show “GETTING A MORSE CODE MESSAGE” on my Facebook timeline.



Vimeo Video is Here.

Experiment #3 “Trump Punch”

By: Kristy Boyce
Code available on Github:

Final version with Trump reaction upon being hit by the right-hand boxing glove.









Project Description:

Trump Punch is a computer game and peripheral device intended as a stress-release aid for liberal newshounds fed up with the antics of those in political office. The current version (for rather obvious reasons) comes with a large, smug Donald Trump face that’s just ripe for the punching. In the spirit of bipartisanship, the two-button classic arcade-style controller gives the user “left” and “right” punching abilities. The game will also punch Trump on mouse click.


Process Journal:

My process began with taking a day to doodle and come up with ideas for peripherals, without regard to feasibility or failure. Some ideas were:

  • Left-handed mouse
  • Mouse that works in a 3D space in the air
  • Antique radio but with potentiometer knobs linked to Spotify
  • A classic video game
  • A sensor that detected men and catcalled them

I narrowed it down to the radio concept and the game idea. I figured I could 3D print the vintage radio enclosure or gut an actual old radio, and use some of the Chuck Norris code we played with in class; the dial options would be labelled by mood and would load a playlist via Spotify, or perhaps an on-computer XML playlist.


The game I came up with was Trump Punch, based on a simplified version of a Punch-Out!!-style classic Nintendo game. I liked the topical hook, my research suggested it was the more feasible option given my skillset, and I was interested in animating on the web.

I met with Kate on the 3rd day of the experiment and presented both ideas to her. She seemed to like the radio idea best, but based on my enthusiasm and the fact that I could storyboard out all the steps for the game, she suggested I follow that route.



I spent a good deal of time playing with the library, animating and creating sprites. I gained a lot of ground initially with the simple example interactions but ended up spending hours and hours attempting to complicate and customize the functions.


I started working with a Creative Commons-licensed PNG drawn from a photo of Trump and a red glove.


Glove & Trump Interaction:

I could get the glove to move up in a punch-like fashion by shifting its y-position 50 pixels on mouse click.

I easily got the glove to track and follow the mouse coordinates, but I ran into a lot of trouble creating the actual ‘hit’ between the glove and Trump’s face. At this point .overlap and .collide became confusing, and in the end I had to use overlap to create my collision, which was very counter-intuitive.

I was able to rotate or scale Trump’s face when the two PNGs came into contact, but I had a lot of trouble getting it to swap to a reaction PNG in a convincing way in combination with the mouse click and image overlap.



I couldn’t figure out how to make things happen on button press rather than mouse click, but it turned out all I had to do was make the P5 code listen for mouse click or button press, using “||” in the code.
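A minimal sketch of that trigger, assuming inData holds the latest serial value from the Arduino ("1" when the button is pressed) and glove is a plain object with a y property:

```javascript
// Punch when the mouse is pressed OR the Arduino button sends "1";
// the glove jabs upward 50 pixels, as in the original sketch.
function updateGlove(glove, mouseIsPressed, inData) {
  if (mouseIsPressed || inData === "1") {
    glove.y -= 50;
  }
}
```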

I was able to get 1 button working with communication between P5 and Arduino fairly easily.


Testing with buttons

There was a certain point when I couldn’t seem to get more than a weird “tap” instead of a punch, and I started playing around with a Trump Asteroids game featuring an angry uterus. I actually really like this concept, but it didn’t seem to require the two-button input setup I was already very committed to at that point.
In the end, it was a good exercise and helpful, because the examples I looked at worked with calling and animating sprites.

The angry uterus 1
Angry Uterus 2




Roxanne Henry went over some array concepts that helped me get the second button up and running, along with Nick’s example in the Experiment 3 pages. This is where I also learned that you really do need to use a ground. Really.

Syntax problems

Once all the code issues were worked out, I put the PNGs in the right order so the glove wouldn’t float behind Trump, added a left punch glove, and added a reaction sound effect for when Trump gets hit.

Peripheral Controller:

Initially, I thought I’d use a big red button you could just smash, but I wanted to create an object that I would actually keep on my desk: something that wouldn’t take up too much space and would match my aesthetic (I have a white desk with white peripherals). So, in keeping with the vintage video game feel, I went with a white and red Nintendo-ish controller design that would enclose the Feather. I followed a 6-button arcade enclosure tutorial on Adafruit and modified the design to suit.


Fusion 360 design
Fusion 360 mockup






3D printing
Disappointing controller case

But then disaster hit! The red buttons I had purchased were not the right size for my printed design. I had double-checked before purchase, asking the salesman if they were 24mm buttons, but they were actually 30mm. A smart person would have measured anyway, but I didn’t.

I ended up purchasing a box and keeping everything on the breadboard, which was functionally fine, but I really wanted that slick, white, handheld controller that I spent hours and hours on.

The horrible backup box


Drilling the horrible backup box



Final Prototype In Action:


This project was influenced by my interest in socio-political issues and the classic video game Mike Tyson’s Punch-Out!!; I see Trump as a perfect King Hippo-style character. I knew I wanted to make something highly graphic, within my skillset, and with some type of reasonable “why” factor or hook.

Moving forward I would like to add more motion to Trump: some taunting animation and audio, and a bell “ding” sound. I would also like to have the game cued via an API; perhaps when #Trump is trending in the news, the game opens. An interesting idea that came up during crit was that social media mentions could literally grow his head and strengthen him as an opponent: he feeds on the online attention, just like in real life! #SAD


In-class code from Nick Puckett & Kate Hartman


Code Snippets

Coding Train:

Donkey Hotey via Flickr

Adafruit Industries


Touch Your Knob – Max Lander


Prototype Description

Touch Your Knob is a computer peripheral designed to make interacting with sequential images and video clips, specifically of the pornographic variety, super silly and fun! Turning the large knob either advances or reverses the clip frames; the farther the turn, the faster the frames run. It allows a kind of turntable-like experience for the video frames on screen, letting the user repeat, reverse, slow and loop them.

Circuit Layout



Live site –

Supporting Visuals

Process Journal

When thinking about potential projects for this experiment, I was instantly drawn to the issues my classmates were having around gif images and p5. I was thinking about how gifs (and videos) are just a series of frames in a sequence and how that should be quite easy for p5 to accomplish. I knew I wanted to do something with a series of images.

Originally I thought it would be fun to do something silly around the idea of “working for it”: I liked the idea that people have to put in effort in order for the computer to play a video they want to see. This, obviously, would be most hilarious in a pornographic context, so I thought maybe I would make something where the user has to shake something (phallic-shaped) to advance a video. In trying to assess the technical difficulty of this idea, I realized that to make it the way I wanted, I would have to learn how to embed electronics into a silicone-molded object, which felt a little outside the scope of this project. I still wanted to play with the gif idea, though, so I tried to think of a different way to interface with sequential image files.

I like knobs. I like gifs. I was pretty sure I could make something that makes gifs with knobs. So that was the goal.

The first step was to get the code that moves sequentially through a folder of images. This wasn’t very difficult, since I had already written something similar for the last project. The internet showed me how to manipulate a number within a file name, which was the only problem I encountered: there is a different number of zeros before the numbers in files 1 – 9 than in files 10 – 99. Once that was corrected with a simple if statement, and the keyPressed p5 reference was consulted, it all worked as planned! And it was super fun to play with, even without a knob. (proof of fun –
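That zero-padding fix can be sketched like this (the frame prefix and extension are hypothetical):

```javascript
// Build a frame filename whose numeric part keeps a fixed width:
// files 1-9 carry one more leading zero than files 10-99.
function frameName(n) {
  if (n < 10) {
    return "frame00" + n + ".png";
  }
  return "frame0" + n + ".png";
}
```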

From there I wanted to see if I could get it working with a potentiometer. Using the in-class example code, hooking up the potentiometer was relatively easy. I remapped the values coming out of it to 0 through 10 so it would be an easier range to work with. This also gave each number quite a wide range, so it was easy to make the images stop changing by only moving when the value was above or below 5.
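A sketch of that remapping, assuming a raw 0 – 1023 reading (equivalent to p5’s map(raw, 0, 1023, 0, 10)):

```javascript
// Above 5 the frames advance, below 5 they reverse, and the wide band
// around the center value of 5 makes it easy to stop the images.
function potToDirection(raw) {
  const value = Math.round((raw / 1023) * 10);
  if (value > 5) return 1;  // advance
  if (value < 5) return -1; // reverse
  return 0;                 // centered: stop
}
```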

Trials with the Rotary Encoder

At this point I wanted to see if it would be better to use a knob that could continuously go around, so I picked up a rotary encoder from Creatron (this one –

I used this walk through to get it up and running –

Since the above code has statements for clockwise and counterclockwise, I added a variable that could be sent to p5 as an indicator of direction. This achieved a similar effect to the potentiometer, with one crucial difference: there was no off. It also chugged a lot and generally felt quite a bit less satisfying than the potentiometer. I think this is because the potentiometer returns a numerical value that is easier (for me, at least) to smooth. The encoder seemed slower, and after trying to figure out alternate ways to send the signal through, with little success, I decided to go back to the pot. Part of this decision was based on the fun of the piece being the back-and-forth movement of the images, which made continual motion less necessary.

*I do miss the LED though, so cute.

Failure the Second

The next goal I wanted to accomplish was to make it possible to record the screen and download a gif of it. P5 will grab individual frames or record video from the webcam, but doesn’t have any screen-record functions. It turns out that the way this used to be done is no longer possible because of some Chrome upgrades, and I couldn’t get the library recommended by the p5 reference (  to work (even the examples were not fully functional). It seems this may have been because I was not using webGL animations; as far as I can tell, they require each other.

3D printing

Since I had decided on the final circuit, it was time to work on an enclosure. Having never 3D printed anything before, I figured this was the perfect time to try it out! A friend recommended TinkerCAD ( and gave me a short little demo. It was quite easy to mock something up –


My initial mock up was not sized correctly so after some new measurements and some chatting with Reza in the Maker Lab I had two print ready files.

Box –

Lid –



Initially the box was going to be printed on the Lulzbot, but the speed was turned way down and after 2 hours it was only 2% of the way through, so we stopped it and switched to the MakerBot Replicator 2, which went much faster (about 2 and a half hours total for the box and lid). I also learned an important lesson about hairspray and the MakerBot bed when the first attempt at printing the lid lifted off the bed and got real messy. Hair. Spray.


Enthusiastically close to a semi-finished project, I soldered, assembled and apparently killed my potentiometer… So I soldered and assembled again with a new one.




A couple of notes about assembly:

  • The original hole in the lid was the smallest bit too small, so I had to drill it out a bit.

  • The potentiometer has a small notch for fitting it into something, so I dug out a little hole in the back of the lid for it to slip into.

At this point, I wanted to spend some time polishing the experience of the video frames. I had been live-testing it through Glitch, but that put a 100-image limit on the project, and I wanted to see what the real limit on the number of images was, since 100 frames is only 4 seconds of video. I selected a section of the film, exported it to frames and ended up with 896, which I assumed would chug because of load time, but it runs just as smoothly as the 100, even after changing it to display the images fullscreen.

The last piece I wanted to implement, since it was simpler than I had originally hoped, was to make the speed of the frames relative to how far from center the knob was turned. Originally, I did this by manually setting the frame rate based on a few ranges: <3, 3-5, 5-7, >7. While this worked, I quickly realized it would be way smoother to use the input directly from the potentiometer. I accomplished this by remapping the inData to values between 1 and 5 in each direction from center and then multiplying that number by 10 to set the framerate.
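That speed mapping can be sketched as follows (a hypothetical helper; the real sketch worked directly on inData):

```javascript
// The farther the knob sits from center, the faster the frames run:
// distance from center is remapped to roughly 1-5, then multiplied
// by 10 to get the frame rate.
function frameRateFromPot(raw) {
  const value = (raw / 1023) * 10;      // remap 0-1023 to 0-10
  const distance = Math.abs(value - 5); // 0 at center, 5 at either end
  return Math.max(1, Math.round(distance)) * 10; // 10 to 50 fps
}
```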

It’s all quite simple but ultimately creates a super silly interaction with the video that is quite enjoyable!

Project Context

In my most recent work, inside and outside of this program, I have been quite interested in interactive video and screen-based media, most likely due to my interest in creating cinematic VR, but also in relation to smaller-scale and simpler interactions, something I think this piece is quite successful at. It is a good technological and theoretical stepping stone between my history with porn (including my pornographic video game, PornGame the Game) and my hoped-for future with interactive cinematic VR. As an aside, it has done a very good job of making me think about the impact a custom physical interface could have on VR experiences.

Next up – embedding electronics in silicone… for reasons.

The “Pay attention” bot

The “Pay attention” bot helps the user keep in tune with the “real world” around them, even when they’re deeply absorbed in their work or listening to loud music. The bot “pays attention” for you, listening for someone to call for your attention. Once it detects someone trying to call your name, it waves at you incessantly until you acknowledge it and turn it off. By then, you are well aware that someone in the real world was looking to get your attention.

Code available on Github.


The process for this project was rather quick and uneventful, unfortunately.

My first idea was to have the Arduino itself record and process the speech recognition, with the desktop printing out alerts, but two things stopped me:

  1. I wasn’t very interested in purchasing a new board on such short notice, in case it didn’t work out; it’s a pretty big commitment!
  2. I don’t like desktop or push notifications. They vex me.

So I decided to reverse the roles: have the computer, which already has microphone access, record and process speech, and have the Arduino nag me when something gets recognized. The next task was to find a suitable library or API to help me with voice recognition. P5 was the first to offer one up. At first I was skeptical of it, since it seemed really lightweight, so I started looking at alternatives. IBM Watson’s API seemed really interesting, but they weren’t offering it for free. There were some alternatives I could have used, such as interfacing with Watson through PubNub, but the PubNub interface seemed convoluted at best and pay-to-play at worst. As someone who’s very used to getting their hands elbow-deep into code, using an interface to do the work for me was both a very disorienting and very frustrating experience. I decided to drop this route altogether.

I went back to investigating P5’s speech and speechRec add-ons. For my purposes, I needed it to record continuously. There is a continuous option available, but the example online wasn’t working and I couldn’t get it to work myself, either. I distinctly remember reading somewhere, probably in a release statement, that the continuous function was buggy and to use something else instead, but I can’t for the life of me find it anymore. I should have taken a screenshot. I’m still not used to having to document my process while coding and debugging. I’ll remember to next time.

Anyway, I ended up finding a workaround. Simply assigning an onEnd() function to the recording object and asking it to restart itself was sufficient for my needs. There was a small issue in testing where it would stop recording (evidently) during the time it takes for the state to change from “ended” to “started”, so it wouldn’t detect sounds for that small period. Given more time, I would have tried harder to get the continuous option to work, but I have learned not to linger on the small things when you need a deliverable in a short amount of time.

Debugging the “restart if you stop” function.
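The workaround boils down to this pattern, shown here with a stand-in recognizer object (p5.speech’s real API differs in its details):

```javascript
// When recognition ends, immediately start it again so listening is
// (nearly) continuous.
function makeAutoRestart(rec) {
  rec.onEnd = function () {
    rec.start();
  };
  return rec;
}

// A minimal mock to demonstrate the behaviour:
const rec = makeAutoRestart({
  running: false,
  start() { this.running = true; },
  stop() { this.running = false; if (this.onEnd) this.onEnd(); },
});
rec.start();
rec.stop(); // recognition "ends"... and restarts itself
```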

After I got that part of my P5 code working, it was very simple to activate the servo through serial input. There was a tiny hiccup when I was doing

if (myRec.resultString == "Roxanne") {...}

which wouldn’t pick up my name if it was embedded inside a sentence. For example, something like “Roxanne, do you have a minute?” would be ignored. I converted the code to

if (myRec.resultString.includes("Roxanne")) {...}

in order to search the string for my name instead, and it worked beautifully. The if statement prompted the serial port to send through a code which my Arduino was listening for.

In practice, the speech recognition was not as powerful as I would have liked (saying “Roxanne” often produced the words “Rock band”, instead), but it was sufficient for a prototype.

The Arduino code was fairly trivial, since its job was also fairly trivial. It was a slightly modified version of Kate and Nick’s basic servo code. I simply added a clause for a button press, which toggles off the variable “shouldBeMoving”, as well as a check for incoming serial data. If there was serial data, and it was the code I was feeding from P5, I toggled on my “shouldBeMoving” variable to activate the basic servo code. The arm on the servo was programmed to simply wave through a 90-degree angle, enough to be annoying and catch my attention, but not enough to be obnoxious to others.

so much code...
code code code…

I believe I broke the servo when attempting to graft the arm onto it, since it was working without fail before the arm. Afterwards, the servo appeared to get tired or simply get stuck on itself after a few swings. This made the button somewhat extraneous, since the servo was stopping itself, but I kept the button anyway: there is something satisfying about smashing it to stop the servo, and it was there just in case the servo felt exceptionally peppy and decided to keep waving for eternity.

Thanks Sean for the help in making the arm!

Another small thing I found frustrating, but necessary, was debugging. The P5 speechrec addon requires a server to function, so I needed to re-upload my code to my GitHub page whenever I wanted to test a change. Debugging became especially frustrating since that’s usually a process where I add logging statements in different places to glean information, then promptly remove them once I get the expected outcome. This made for quite a lot of commits, which the GitHub page was slow to catch; I’d often have to wait a minute or two between commits before my page was up to date. But alas, such is the way.

so much commit
If you judge my commit messages… I really can’t blame you…

Video of it working:

It lives! from Roxanne Henry on Vimeo.

References and thanks:

Servo test code

P5 Speech

IBM’s Watson

Kate and Nick’s servo code

Sean Harkins for help with making the arm

Dave Foster for help with making the box

Press in Case of Winter

Press in Case of Winter – Emilia Mason

Press in Case of Winter is part of an idea I had almost a year ago. I come from Nicaragua, a country in the tropics, so I am not a big fan of cold weather. Last December I bought myself a “Christmas Palm Tree”, painted the leaves in different colors and the stem in gold, and decorated it with colorful Christmas lights.


The project consists of a button to press when in need of tropical music and colors. I wanted to bring the tropics into a room as soon as someone comes in: play happy, tropical music from Latin America and show bright colors.


The concept  is inspired by two ideas:

1. Beating Seasonal Affective Disorder (SAD), more commonly known as the “winter blues”, by transforming my apartment into a tropical and colorful paradise. Studies have shown happy, upbeat music has a positive effect on one’s mood.

2. An appropriation of Christmas. I had never thought about the fact that we use real and fake pines as Christmas trees, decorated as if it were winter everywhere. Nicaragua is in the tropics: in December temperatures oscillate between 26 and 30 degrees during the Christmas season, so I wanted to use a local tree for my Christmas celebration.

The idea is very simple:

When coming into the apartment, press the button to warm up your mood. Pressing the button starts a random song and the animation from the P5.Js folder uploaded to Cyberduck.

Codes 🙂

P5.Js Code (Link to Github)

The code is simple:
-Specify variables for the individual songs, the array of all songs, and the colors for the frames used in the simple animation.
-Function preload for the songs and animation.
-Function setup for the canvas and setting up the serial port.
-Function pressed for the button.
-Function draw for the animation.
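The preload-and-randomize part of the steps above can be sketched as follows. This assumes p5.sound’s loadSound() and placeholder file names; the randomness source is injectable only so the pick logic is easy to test:

```javascript
// Preload an array of songs, then pick one at random per button press.
let songs = [];

function preload() {
  // Placeholder file names, not the project's actual tracks.
  const files = ["salsa.mp3", "merengue.mp3", "cumbia.mp3"];
  for (const f of files) {
    songs.push(loadSound(f)); // p5.sound's loadSound()
  }
}

// pickSong() chooses a random element; rand defaults to Math.random
// but can be replaced for testing.
function pickSong(list, rand = Math.random) {
  return list[Math.floor(rand() * list.length)];
}

// In the sketch, roughly:
// function mousePressed() { pickSong(songs).play(); }
```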

Arduino Code (Link to Github)

Button input code:
Press once for on, press again for off.
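Press-for-on/press-for-off is just a toggle, usually paired with debouncing so switch bounce doesn’t register as extra presses. A JavaScript model of that logic (the real code is an Arduino sketch; the debounce window below is a typical value, not taken from the project):

```javascript
// Model of press-to-toggle with debouncing: presses arriving within
// DEBOUNCE_MS of the previous accepted press are ignored as bounce.
const DEBOUNCE_MS = 50; // assumed window, not from the project

function makeToggle() {
  let on = false;
  let lastPress = -Infinity;
  // press(timeMs) registers a press at the given time and returns
  // the resulting on/off state.
  return function press(timeMs) {
    if (timeMs - lastPress < DEBOUNCE_MS) return on; // bounce: ignore
    lastPress = timeMs;
    on = !on;
    return on;
  };
}
```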

Circuit Diagrams:



Process Journal:
My idea began a little too ambitious for my skills.

The initial idea was to connect a weather API (taking the weather from Nicaragua) and two Spotify playlists (one for sunny days and another for rainy days). Depending on the weather in Nicaragua, the Spotify playlists were going to play either a hyper tropical song or a more relaxed tropical song when the button was pressed. This way I would be able to brighten and warm my day here in Toronto, and I would also have an idea of how the weather was in Managua (the city I am from).

Since this seemed a little too complicated, I decided to start from a very simple idea and build up from there.


Step 1: Build and Code the Arduino.
I thought this was very simple: I used an example I found on the internet and only changed the pin number, since I was only going to use one button as an input.


Step 2. Code P5.Js to preload sound
For this, I used the P5.Js manual and the help of our friend in the rainbow train.


Those resources were very helpful, but I still had issues preloading several sound files and making them play randomly every time the page loaded.

At first, I was trying to make an array to randomize the 3 songs I started testing with. In theory this was very simple, but I was having trouble making it work.


Then, I remembered that Yiyi and Jad did something similar in the past assignment so I asked Yiyi if it would be ok to use her code as an example.


I went to their project’s GitHub and used their code as an example to modify my code. I must say this was very helpful.


Having done the basics of my code, which was making p5 preload the songs, I wanted to make sure I could connect the P5 code and the Arduino. My plan was that once I had P5 and the Arduino running, I would start adding more songs and images and/or GIFs.

Once I had added the p5.serialport.js library to the HTML file, opened p5.serialcontrol, and connected the Arduino, I realized I am a genius who had never written a command in the P5 code telling it what to do with the Arduino input.
Orlando from the second year was in the studio and he helped me a little (in Spanish).


After that, I was able to write the pressed function and the serial event function.
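Roughly what a serial event handler does in this setup, assuming the p5.serialport library’s readLine() and an assumed “1”/“0” protocol from the Arduino — a reconstruction, not the project’s actual code:

```javascript
// Parse one line of incoming serial data into a button state.
// Assumed protocol: the Arduino prints "1" when the button is
// pressed and "0" otherwise.
function parseButtonReading(line) {
  const n = parseInt(line, 10);
  return Number.isNaN(n) ? null : n === 1; // null for noise/empty lines
}

// In the sketch, roughly (playTropicalSong is a hypothetical helper):
// function serialEvent() {
//   const pressed = parseButtonReading(serial.readLine());
//   if (pressed) playTropicalSong();
// }
```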

AT LAST! The Arduino and P5 were connected and I actually had a reading on my console. But I had a big problem: a lot of noise.

That day I had my office time with Kate and she recommended soldering my button and fixing the wires to get rid of the noise.

So, I spent the rest of that day trying to understand WHY ON EARTH I had so much noise.

img_9352-1 23514523_10214722742245113_973481933_o

I tried different buttons, pins, solid and stranded wires, regular breadboard, mini breadboard, no breadboard, and soldering everything I could.

NOTHING. WORKED. !!!!!

At this point I was starting to stress A LOT, because I still hadn’t made the playlists on Spotify or had any images/GIFs/animation showing on the screen. The worst part was that the console was reading so much noise that my buttons were hardly working. And I still needed to build a box for the button. I was really concerned about this because my plan was to have a small box I could stick to the wall next to my door, but since my button and board were not working I had no idea what size the box should be.

Finlay and Roxanne H. saw me hyperventilating and were generous with their time. They helped me realize the problem was not with the wires, board, Feather, pins, buttons, or the USB cable. IT WAS THE ARDUINO CODE!
I fixed it and the noise was gone!


That was frustrating, to say the least.

With my button and code working, after two days lost and wasted, I was finally able to work on the images. I downloaded lots of GIFs to use, but I was recommended to work with animations instead. Since I was running out of time, I decided to do the easiest animation possible using the P5 Play library, making frames the size of the screen in different colors.

Then I proceeded to make the box:

When I had all the basic pieces I tested everything together and failed:

I changed the button AGAIN and it was FINALLY working!

What would I do to make this better?
1. Not use the arcade button. Find another type of button, a less finicky one.
2. Make a small box and stick it to a wall instead of having it next to the computer.
3. Use a projector. After I presented, I realized that what I wanted to accomplish was an experience, not just pressing a button.
4. Connect to Spotify playlists and weather API.


Yiyi’s and Jad’s code for assignment 2: