Dikla Sinai, Finlay Braithwaite, Karo Castro-Wunsch

Productogyser is a chaos management system for shared workspaces. It polls users’ desire to focus and creates a generative artistic centrepiece that indicates the office’s combined desire for focus. The device is a small control box with a clean and minimal design. Its encoder lets the user express their state of mind. A red light indicates to your coworkers that you need to focus and prefer not to be disturbed; a green light indicates that you are willing to socialize.

The Productogyser is a custom-made product, specially designed to answer the needs of Victory Social Club, an open-space shared workplace. Victory Social Club is a multi-disciplinary production and design collective based in Toronto, Canada. Its members come from different backgrounds: filmmakers, directors, designers, sound editors, picture editors, animators, and educators. They all share one open space, which makes it challenging to maximize productivity in such a busy environment.



Design Kit

List of electronic components – Here




Since our product is custom-made for a specific client, it was necessary for us to listen to our clients and get their feedback in order to come up with the best solution for them. We thought that this would give us the best insights about the functionality, group dynamic, etc. Throughout the process of designing and testing, we worked together with our clients to make sure we were synced on the expected outcome. It was fascinating to learn about the process of creating a real product to serve other users. For some of us, it was the first interaction with such a process, and it was highly educational.

Day 1 – Interview

We got together with the people from the studio to hear about their experience working in an open space. We pitched our idea and asked them some questions to understand whether there is a real need for our product and to gather their ideas and comments. After the interview, we sat together to define our product based on their feedback.

We came up with a list of requirements:

  • The product should allow each user to indicate their availability to focus or socialize.
  • The individual should be able to express their state of mind, in a comfortable and not embarrassing way.
  • The sign for the individual’s state of mind should be visible to others.
  • There should be a shared area where everyone can be aware of the overall state of mind.
  • Since they have many guests and clients in the studio, they don’t want the codes for ‘please be quiet’ and ‘let’s socialize’ to be obvious to someone from the outside, so as not to make visitors feel uncomfortable. The codes should be more abstract than a traffic light.
  • The design of the product should be minimalistic and suit the overall studio design.

Some other things people mentioned in the interview:

  • If they need quiet, they just put on their earphones.
  • They think that most of the people are just not aware of the fact they are causing so much noise.

Full interview link: Victory Social Club – Charette



Day 2 – Worked on our first version of the product

Code Ideas

The individual stations hold and broadcast a value for each individual’s desire for focus. The value can be positive or negative, representing more or less desire for focus. The ‘focus desire’ value is dialled in using a rotary encoder. The local focus desire is reflected in the red/green LEDs integrated in the rotary encoder, red signifying a desire for focus (negative value) and green signifying an openness to distraction. This gives the user live feedback that their input has been accepted by the system. It also allows visitors in close proximity to observe and respect the user’s desire for focus.


Either state, positive or negative, depreciates over time, eventually landing and resting at 0. This is a ‘dead man’s switch’, gradually negating the user’s input over time, so that if they leave their desk their influence over the global office state diminishes.


void Depreciation() {
  if (millis() - DepreciationMarker >= DepreciationRate) {
    if (dialValue < 0) { ++dialValue; }
    if (dialValue > 0) { --dialValue; }
    DepreciationMarker = millis();
  }
}


Each device publishes its local state to feed the central generative display. Each device publishes to its own PubNub channel. The data published is the desire for focus, ranging from -255 to 255.

Channel1: {"focusDesire": -111}
Channel2: {"focusDesire": 200}
Channel3: {"focusDesire": -5}


The generative display aggregates the data from all user stations and uses it to build its display. The generative display is a JavaScript site that translates the incoming data into an engaging yet informative peripheral notification. A low-saturation red or green background informs the space of the overall desire for focus. The shape for each user is circular until they dial in a desire for focus (negative value), at which point the shape becomes more angular, representing the desire for focus. There can be multiple shapes, both circular and angular, representing the moods of individuals in the studio. However, there can only be one background colour, representing the cumulative mood of the entire studio.

This cumulative mood is returned via PubNub to Channel0. Each local station translates this global value to a NeoPixel LED, turning red for negative values and green for positive. In this way, each user can know the state of the studio regardless of their ability to see the generative display.



User Station Code – Arduino

Generative Display Code – Javascript



Day 3 – Design, testing plan and code

  • We created a list of questions for the user testing process – Here. The survey contains questions for providing feedback about our product. With the survey, we aim to understand whether the product works as expected and how intuitive the user experience is, to understand the need for each feature, get feedback about the product design, measure the overall satisfaction level, and understand what needs to be improved.
  • We started discussing ideas for the physical design, keeping our clients’ demands in mind. We did some research online for suitable plastic boxes and found some ideal soap containers at MUJI. Since they are semi-transparent, we thought they would be perfect for our needs: the NeoPixel lights are too bright, and the containers help dim their output a bit. We also tried to optimize the configuration of all the parts on a breadboard to find the minimum size required for the box.
  • We went shopping for boxes at MUJI.


The making of

We planned to create one fully functioning prototype, test it, and then replicate the successful prototype two more times.

  • Each soap container includes two pieces of foam to protect it from scratches. We decided to use the foam as part of our design for two reasons:
    1. Design element – it diffuses the light and makes it more interesting as it passes through the foam texture.
    2. Functionality – it serves as a substitute for our breadboard, helping to stabilize the parts and hold them inside the box.
  • We drilled two holes for the USB cable and the encoder.
  • We added encoder covers (same colour as the box) to make the encoders easier to see and turn.
  • Since the wiring required three GND connections, we had to connect four jumper wires – three going to the GND pins of the encoder and the NeoPixel, and one going into the GND pin on the Feather.


Day 4 – testing the code

We tried to test the code with the prototypes before the user testing, but we had to deal with some code issues first:

  • Communication with PubNub – we got ‘client error’ messages in two of our three Feathers.
  • We managed to get green light output but not red light output on the encoder LED.
  • Finlay was out of town 🙁 which made the process and communication a bit harder *BUT* we managed to figure out some of the issues over a group chat.


Day 5 – debugging

We needed to solve two main issues:

  • Showing red light for any value below zero.
  • Making the NeoPixel respond to our central art piece and show the same output – the average value across all the devices.

We did some testing and figured out that two of the three devices were simply not responding. This gave us an indication that there might be a problem with the Feathers. We started by checking the wires to make sure everything was wired correctly. Once we verified that the wiring was OK, we were quite clueless about what might be the problem.

We then decided to test on another Wi-Fi network to see if that might be the problem, and we found out that all the devices were working fine. Apparently, two of our Feathers were not registered on the OCADU Wi-Fi network. Our reaction to this discovery was a fine blend of elation, exaltation, frustration, and indignation.


Day 6 – User testing

We set up 3 working stations at the studio and asked people to plug them into their computers for 3-5 hours and, at the end of each session, answer our online survey.



We had 7 people test our product and share their feedback with us.



  • Music by Mild High Club, used with permission

Experiment 3: FoodNeg

The FoodNeg allows dog owners to know when their dog is hungry and in need of food. The FoodNeg uses a light sensor placed above your dog’s bowl. Once the dog approaches the bowl, the sensor detects the change in light level and sends a notification to the owner’s browser to let them know their dog is hungry.
A happy dog is a well-fed dog!

Designed with love for my snow-bunny Lola









User flow






Process and challenges
Along the way, I switched from my original idea of using a microphone to a light sensor. I came to realize that the Arduino microphone, which was my first choice, did not work as planned. I wanted to use the microphone to pick up my dog’s barks and howls and create two different sound levels, high and low, each representing a different need (food and play).

  • Day 1 – Tried to define the Arduino microphone as the input and map the values.
    – Started the process with a sound detector but realized it works mostly as an on/off device, not giving graded sound values, so I had to switch to a microphone to get the results I wanted (or at least I thought it would).
    – Searched online for a solution, since the microphone requires a different voltage (5V) than what my Arduino offers (3.3V). I used USB as a power source to overcome this problem.
    – Tested the input so I would be able to map the values. I figured out that the values ranged between 0 and 110, so I created two groups of values: 0-49 and 50 and up.
  • Day 2 and 3 – Built the P5.js code and connected it to the Arduino.
    – First I did some research about sound to find out what options existed. I played with some code examples, with tiny modifications (mostly graphics). – Mic input from –
    – Defined the structure of my code. I find the visualization process very helpful, especially if you are not very experienced with writing code.
  • Day 4 – Overcame the ‘need’ to use my computer’s microphone as the source for voice input.
    – Since I accidentally copied all the libraries from another project, I forgot to exclude the sound and speech libraries from the HTML.
    – Security issues in Chrome prevented me from using it as a browser because it was detecting an outside device. I looked for a solution online, but since the answers I found were too complicated for me to understand, I couldn’t figure out what was wrong and asked for help. It was suggested that I use another browser, since I probably wouldn’t be able to fix it. So I decided to try Firefox, and it worked fine there.
  • Day 5 – Lots of frustration and one MAGIC! (Continued with the P5.js code.)
    – I plugged the board in again and opened it in Chrome (out of habit), and everything worked (magic!).
    – Tried to map two levels of output, one for each sound level – complete failure.
    Howls = Low = Food animation
    Barks = High = Play animation
    – Turning point – After spending two days working on the microphone definitions and mapping, I realized that the values on my console were not changing. While looking for a solution online, I realized that the microphone, same as the sound detector, can’t pick up different levels of sound (note: always read the spec BEFORE you start using any sensor!!!). So I switched my original plan and decided to use a light sensor instead: create a basic light sensor Arduino sketch, map the values on the console, and change the P5.js code based on the new values.
    – Could not make GIF images work – I wanted the notifications to be dynamic so that they would grab the dog owner’s attention. After trying a few examples I found online to solve this issue, I decided to replace the GIF images with simple P5.js animations.
    – Tried different animation code with the p5.js play library – following the code from
    – Created the assets, wrote the animations, implemented them in the original code, and updated the HTML with the play library.
  • Day 6 – Design
    – Created the product design – a dog bowl station with the light sensor attached to it.
    – Improved the animations.
    – Tested the light sensor ranges with my dog to make sure it fits and works.


Testing and evaluation
– The design should fit the size of the dog; otherwise it won’t be able to detect it.
– Changes in natural light in different rooms affect the values in the if statements. I had to test again in the room we presented in to reset the values.
– Consider: the mess dogs make, which might harm the sensor.

Future development
Add more sensors to other parts of the house so you can track other needs, such as ‘go for a walk’ if the dog stands next to the door, or ‘let’s play’ if the dog is standing next to its toy box.



Vibration sensor + Slider actuator  + Cotton material + Happy adjective


Project Description

This project is about turning vibrations into sound. It aims to explore a different way of listening; a way in which you can feel the melody through your hands.

For this project, I took inspiration from my time as a dancer in the ensemble of the Kol Demama dance group (literal Hebrew translation – sound and silence), which brings together deaf and hearing dancers. The basis of the merger was a “vibrational” system, in which the deaf dancers take their choreographic cues from beat patterns felt through their feet, or gestural signals sensed through bodily contact, as well as from visual sources.

In my project, I wanted to mimic that experience and let my audience ‘hear’ a melody created from a combination of vibration beats, through their hands, while holding a cotton ball. The vibration beats, similar to musical notes, each hold different parameter settings for voltage and duration, which allowed me to compose them into a rhythm – a melody – that the audience gets to ‘listen’ to through their hands.

There are a few ways you can experience this installation. You can cover the cotton ball with your hands. You can hold it next to your ear, to use the ‘original’ sense of hearing. I leave the listening experience open for the audience to choose. I believe listening to music is a personal experience, and everyone can choose to experience it at a different level of intimacy.

Circuit Diagrams








During the design process, I needed to take some parameters into consideration:
1. The cotton ball should not be wound too tight, so that the vibration motor can vibrate inside the ball. It should also not be too dense, so that the vibration can pass through the ball.
2. The design should allow the audience to feel the vibrations with their hands, and it should be at eye level.
3. The environment should be ‘clean’ of distractions so that the audience can focus on the vibrations as much as possible.
4. The wires and connections should be strong enough that if people pull on and play with the ball, all the wires stay connected.
5. I covered the vibration motor’s cables with hot glue to protect them. During the testing process, I realized how sensitive they are and how quickly they can be damaged.

Based on this, I decided to create a minimalistic design and let the cotton ball be the centre of it. I decided to hang the cotton ball, hoping that due to the vibrations people would notice small movements and come closer to it.

Process Journal

I started this project using a mind-mapping technique, which allowed me to expand my ideas related to each property I could use for this assignment through free association.

Following that process, I took some time to think about my options and used my imagination to consider them as a whole. I find sketching helpful for creating quick drafts of my ideas. Whenever an idea popped into my head, I created a sketch without overthinking its feasibility or whether it contained all of my properties.

Focusing on vibration as the main property, I spent time reading about vibrations – what they are, how they work, etc. I also drew inspiration from my own experience as a dancer for the idea of using vibrations as music.

My online research led me to several artists using vibration in their work. The one who caught my attention was Alessandro Perini. His work Wooden Waves uses the same principle of allowing people to experience vibration through their bodies.

It was time to take things from theory to practice. After setting up the basic input-output board, I played with the vibration motor – trying to hold it, placing it on/in/below different cotton surfaces, etc., and testing the effect of each setting (+ the effect it had on my dog. No harm caused 🙂 ).


Video links:
First setting

More testing

Lola is helping to test the vibration motor

I also started looking online for code that would allow me to turn the vibration into a melody. I came across the project Vibration Foam Speakers, which drifted me a bit from my original idea. At this point, I thought of changing my original design, creating a cotton cover part (to replace the foam), and playing a happy melody. I did some testing but found that, for some reason, the code was not working right. I tried getting some help from a friend, with no luck. I had reached a dead end :/


Left: original foam speaker. Right: cotton copy I made. I tried different cotton densities.

After my meeting with Kate, she opened my mind to a new option – using beat sequences, instead of a full melody, to help demonstrate the idea. From here, my primary work was to test different voltage and duration parameters and try to compose them into a short tune. We also spoke about creating three ranges for the slider, based on different value ranges, to let the audience experience three different melodies.


Evolution of the code. From left to right: first input-output setup, Melody, Vibe beats.

In the process, I managed to create three short beat sequences and tested their effect through the cotton ball. At this stage, two issues arose. First, I realized that some of the lower voltages were not passing through the cotton ball; I played with it a bit to fine-tune the ranges. Second, I experienced some trouble with the response time of the slider. In my second meeting with Nick and Kate, Nick explained that the reason is that the loop runs all the way through before it goes back to read the slider parameters, and since each tune has a relatively long delay time (an average of 5 seconds), the response time of the slider is not in sync with the actual change of the melody. He also suggested two ways that might improve the problem, though they wouldn’t solve it completely.

I managed to reduce the delay time by offering 2 sequences instead of 3, and by shortening the sequences.

Future planning…

Project References

  • Alessandro Perini, Wooden Waves – His artistic production ranges from instrumental and electronic music to audiovisual and light-based works, net art, land art, and vibration-based works. Wooden Waves is a tactile sound installation that uses a wooden floor as a vibrating surface, letting people feel the movement of sound vibrations along their bodies while lying on the floor.