Experiment 3: Block[code]

Block[code] is an interactive experience that engages the user in altering and modifying on-screen visuals using tangible physical blocks. The visuals were created in Processing, exploring The Nature of Code approach to particle motion for creative coding.

Project by
Manisha Laroia

Mentors
Kate Hartman & Nick Puckett

Description
The experiment was designed to create a tangible interaction, i.e. play with the rectangular blocks, their selection and their arrangement, that would in turn alter the visual output, i.e. the organisation and motion of the rectangles on the screen. I conceptualised the project taking inspiration from physical coding, specifically Google’s Project Bloks, which uses the connection and ordering of physical blocks to generate a code output. The idea was to use the physical blocks, i.e. the rectangular tangible shapes, to influence the motion and appearance of the rectangles on the screen: from random rectangles, to coloured strips of rectangles travelling at a fixed velocity, to all the elements on the screen accelerating, giving users the experience of creating visual patterns.

img_20191104_180701-01

Inspiration
Project Bloks is a research collaboration between Google, Paulo Blikstein (Stanford University) and IDEO with the goal of creating an open hardware platform that researchers, developers and designers can use to build physical coding experiences. It is a system that developers can customise, reconfigure and rearrange to create all kinds of different tangible programming experiences. See the detailed project here.

projectbloks-580x358

Gene Sequencing Data
The visuals were largely inspired by gene or DNA sequencing data from my brief stint in the world of biotechnology. I used to love the vertical motion and the layering effect the sequencing data would create in the visual output, and I wanted to recreate that using particle motion and code. I was also inspired to draw out the commonality between genetic code and computer code and bring it into the visual experience.

gene-sequencing-data
DNA sequencing. Image sourced from NIST.gov.

The Mosaic Brush sketch on openprocessing.org by inseeing generates random pixels on the screen and uses the mouseDragged and keyPressed functions for pixel fill and visual reset. The project can be viewed here.

pixel-project

The Process
I started the project by making class objects and writing code for simpler visuals like fractal trees and single-particle motion. Taking the single-particle example as a reference, I experimented with location, velocity and acceleration to create a running stream of rectangle particles. I wanted the rectangles to leave a tail or trace as they moved vertically down the screen, so I played with changing opacity over distance and with calling background() only in the setup function, so that each moving rectangle particle leaves a stream or trace behind it [1].

In the next iterations I wrapped these rectangle particles in a class and gave it move and update functions, plus system velocity functions that depend on each particle’s location on the screen. Once I was able to create the desired effect in a single particle stream, I created multiple streams with different colours and different parameters for the layered, multi-stream effect.
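The sketch below is a minimal illustration of that structure, not the project’s actual code: a particle class with location, velocity and acceleration vectors, drawn with low-opacity fills and with background() called only in setup() so each falling rectangle accumulates into a trail. Names and parameters are invented for the example.

```
ArrayList<RectParticle> stream = new ArrayList<RectParticle>();

void setup() {
  size(400, 700);
  background(20);   // called once only, so every frame layers on the last
  noStroke();
}

void draw() {
  // spawn a new particle every few frames, up to a small cap
  if (frameCount % 10 == 0 && stream.size() < 60) {
    stream.add(new RectParticle(random(width), color(0, 255, 180)));
  }
  for (RectParticle p : stream) {
    p.update();
    p.display();
  }
}

class RectParticle {
  PVector location, velocity, acceleration;
  color c;

  RectParticle(float x, color c_) {
    location = new PVector(x, 0);
    velocity = new PVector(0, 2);         // fixed downward speed
    acceleration = new PVector(0, 0.02);  // the "accelerate" mode
    c = c_;
  }

  void update() {
    velocity.add(acceleration);
    location.add(velocity);
    if (location.y > height) {            // wrap back to the top
      location.y = 0;
      velocity.set(0, 2);
    }
  }

  void display() {
    fill(c, 60);                          // low opacity builds the trail
    rect(location.x, location.y, 10, 18);
  }
}
```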

img_20191031_124817-01

img_20191031_171129-01

img_20191104_163708-01-01

The basic model of a Tangible User Interface holds that the interface between people and digital information requires two key components: input and output, or control and representation. Controls enable users to manipulate the information, while representations are perceived with the human senses [2]. Coding is normally an on-screen experience, and I wanted the participant to be able to use the physical tangible blocks as an interface to influence and build the visuals on the screen. The tangible blocks served as the controls for manipulating the information, and its representation was displayed as the changing visuals on the screen.

the-tui

setup-examples

Choice of Aesthetics
The narrative tying physical code to biological code was the early inspiration I wanted to build the experiment around. The visuals in particular were inspired by gene sequencing imagery, with rectangular pixels running vertically in a stream. The tangible blocks were chosen to be rectangular too, with coloured stripes marked on them to relate each one to a coloured stream on the screen. The vertical screen in the setup amplified the effect of the visuals moving vertically. The colours for the bands were selected based on the fluorescent colours commonly seen in the gene sequencing inspiration images, a result of the fluorescent dyes used in sequencing.

mode-markings

img_20191105_114412-01

img_20191105_114451-01

Challenges & Learnings
(i) One of the key challenges in the experiment was to make a seamless tangible interface, such that the wired setup doesn’t interfere with the user interaction. Since it was an Arduino-based setup, getting rid of the wires was not a possibility, but they could have been hidden in a more discreet physical setup.
(ii) Ensuring the visuals produced the desired effect was also a challenge, as I was programming with particle systems for the first time. I managed this by creating a single particle with the desired parameters and then applying it to more elements in the visual.
(iii) Given more time I would have created more functions like the accelerate function, altering the visuals by slowing the frame rate, reducing the width, or changing the shape itself.
(iv) The experiment was exploratory in terms of the possibilities of this technology and software, and left room for discussion about what it could become rather than being conclusive. Questions that came up in the presentation were: how big do you imagine the vertical screen? And how do you see these tangibles becoming more playful and seamless?

img_20191105_001118

Github Link for the Project code

Arduino Circuits
The circuit for this setup was fairly simple: a pull-up resistor circuit and DIY switches made from aluminium foil.

arduino-circuit-1

arduino-circuit-2
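On the Arduino side the foil switches are read as digital inputs held HIGH through the pull-up resistors. Assuming the Arduino prints the three states once per loop as a line like "1,0,1", a Processing-side reader might look like the sketch below; the pin count and the indicator visuals are illustrative, not the project’s code.

```
import processing.serial.*;

Serial port;
boolean[] blockDown = new boolean[3];

void setup() {
  size(400, 700);
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  background(20);
  // each pressed block would enable one coloured stream in the real sketch;
  // here it just lights an indicator
  for (int i = 0; i < 3; i++) {
    fill(blockDown[i] ? color(0, 255, 180) : color(80));
    rect(40 + i * 120, height - 60, 80, 30);
  }
}

void serialEvent(Serial p) {
  String raw = p.readStringUntil('\n');
  if (raw == null) return;
  String[] states = split(trim(raw), ',');
  for (int i = 0; i < min(3, states.length); i++) {
    // with a pull-up circuit, a closed foil switch reads LOW, i.e. "0"
    blockDown[i] = states[i].equals("0");
  }
}
```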

References
[1] Shiffman, Daniel. The Nature of Code. California, 2012. Digital.

[2] Hiroshi Ishii. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international conference on Tangible and embedded interaction (TEI ’08). ACM, New York, NY, USA, xv-xxv. DOI=http://dx.doi.org/10.1145/1347390.1347392

Designing Clay Technology

By Catherine Reyto

The title is a play on Designing Calm Technology by Mark Weiser and John Seely Brown:

“Calm technology or Calm design is a type of information technology where the interaction between the technology and its user is designed to occur in the user’s periphery rather than constantly at the center of attention.”

Description

The concept for this experiment was deliberately minimalistic. The intention from the outset: a display that people could approach with an intuitive grasp of what to do and how to do it, an understanding that, while seemingly obvious, may be credited to an attuned set of motor skills ingrained through experiential play in childhood. It needed to be welcoming, malleable and, to an extent, user-defined. The setup needed to be durable and tough, with no fragility or risk of breaking; something that could endure the abuse and wear-and-tear of many hands (including those of small children) without compromising an acute level of sensitivity to movement and touch. Though the idea seems derived from Tom Gerhardt’s Mud Tub, and indeed I was inspired by watching this video and reading about the intent behind the work, it was mostly the result of coincidence.

Directly following the first class of our Experiment 3 lab, I gave a lecture to film students about drawing the human form. In an analogy about expressing the volume of a human figure, I alluded to the very tangible sensation of dividing up slabs of clay with a wire thread, and how satisfying it felt. As I said it, a student agreed enthusiastically. I took that as evidence enough that the sensation was both familiar and relatable, especially to a group of design students. But I was also interested in exploring why I had thought to use that particular analogy at all, clay having nothing to do with the class I was teaching. I had searched for a mental association that they could reference from deep memory and lock a new idea to, giving it ground. It was these same steps that I had interpreted in Ishii’s TUI mandates, as well as in Mark Weiser’s view: “The more you can do by intuition the smarter you are; the computer should extend your unconscious.”

From the top: Tom Gerhardt: Mud Tub, Tangible Interface 2009, and Cutting Clay with Wire, Delon Visual Arts

The display needed to be both self-explanatory and inviting; not by means of decor but with visuals and objects that elicit feelings of calm creativity: playing with clay in art class. The table surface was covered with that classic red-and-white checkered plastic tablecloth so associated with craft tables. The only items upon it were a black rolling pin and, next to it, a big ball of clay. I am quick to respond to any chance to play with art materials in any setting, especially in a public setting within an at least somewhat professional context, where the act of craft-making can quell anxious feelings. I guessed that within this class of like-minded artists and designers, others might also gravitate to the ‘crafts’ table, where they could zone out on the medium (clay) and a tool they have known in an ingrained way since childhood (the rolling pin).

Above the table, a large-screen display provided visual feedback as users kneaded the clay with the rolling pin. With the Arduino nestled like a level in the exact centre of the pin, the user’s hand movements were recorded by the built-in gyroscope and passed to the Processing code. The angle values over time were translated into shapes moving across the screen, with the graphics (shape, frame location, size and colour) altering with the pace and pressure of the user’s hands over the clay.
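A sketch of that mapping, under the assumption that the Arduino streams the gyroscope’s X/Y/Z angles as one comma-separated line per reading; the axis ranges and the shape properties each axis drives are illustrative choices, not the project’s code.

```
import processing.serial.*;

Serial port;
float angX, angY, angZ;

void setup() {
  size(800, 600);
  noStroke();
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  background(245);
  // roll of the pin chooses horizontal position, tilt chooses size,
  // and the third axis drives colour
  float x = map(angX, -90, 90, 0, width);
  float s = map(angY, -90, 90, 20, 200);
  fill(map(angZ, -180, 180, 0, 255), 120, 180);
  rect(x, height / 2 - s / 2, s, s);
}

void serialEvent(Serial p) {
  String raw = p.readStringUntil('\n');
  if (raw == null) return;
  String[] v = split(trim(raw), ',');
  if (v.length >= 3) {
    angX = float(v[0]);
    angY = float(v[1]);
    angZ = float(v[2]);
  }
}
```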

img_9133

The initial concept involved cutting wire through clay and capturing that satisfying feeling with direct visual feedback on the screen (sliding the wire slowly through the clay would slow down the changing colours, for instance). I faced a few too many challenges with this idea, the main one being that even if I did purchase or acquire a sensor small enough to fit comfortably in the handle of the wire cutter, it seemed to throw off the balance to have it in only one handle. It also would have meant using high-quality clay to achieve the appropriate sliding effect, and that is difficult to acquire unless purchased in large quantity. The rolling pin proved not only more practical but in fact a lot more tangible, taking direct pressure from the hands on the pin instead of the indirect pull from holding the handles of a cutting wire.

Materials:

  • PVC pipe (rolling pin)
  • Floral dry-foam (structural support for the Arduino)
  • Lady’s razor container (holding the Arduino in place)
  • Electrical tape (to seal the Arduino and USB casing)

I had been inspired by the ideas presented in Tangible Bits: Beyond Pixels (Hiroshi Ishii) and the principles of Ubiquitous Computing (Mark Weiser), of “weaving digital technology into the fabric of a physical environment and making it invisible”. It was Ishii’s emphasis on the ‘meaningful’ and ‘comprehensive’ that spoke directly to me. I often feel torn between the future of digital technology and the not-so-distant past, before it was drastically altered. It dawned on me while reading Ishii’s paper that the goal of the DF program was to find that link through art created by and made for people. It renewed my appreciation for what we were learning, but it also made me feel the need to go back to basics.

img_9230 img_9229

TUI Mandate as described in Tangible Bits: Beyond Pixels (Ishii).

That’s where clay came into the picture. I had another reason for keeping the material assembly of the concept as simple as possible: to maximize the amount of time I could devote to learning and working with the code. I envisioned dynamic, acutely responsive imagery on the screen, comprised of a myriad of images that would behave like custom brushes in Photoshop, where you can paint with photos of your cat or sample a nice arrangement of paw prints to use as an abstract texture. After some experimenting, though, I found the use of images a bit constrictive and disconnected. The most direct response came from using very basic shapes, and it makes sense: in traditional animation there is a rule of thumb that no more than one character should be moving at a time, in the same way that nothing should compete for attention with the main focal point in film. The simple familiarity of the basic shapes led to a more immediate response, with far less distraction.

Context

Since college I have worked with a Wacom tablet, and not long after began a decade of working as a demo artist for the brand itself. During that time I created courses on how to make the best use of pressure sensitivity to meet design project goals. Something that always fascinated me was the range of people who enrolled in the Digital Painting course (Continuing Education, George Brown College). The greatest challenge and most interesting aspect of this class was whom it attracted: professionals who thought it might benefit their respective professions in some way. That comprised, to name a few: a pool table designer, a costume designer, an interior designer, architects, film-makers, photographers, small-business owners and, of course, graphic designers of all kinds. Thanks to this work with digital pressure sensitivity, I feel I had inadvertently been subscribing to the Ubiquitous Computing ideology and the mandates of Tangible User Interfaces (Ishii) long before I read about them two weeks ago.

I have a personal history to lend to the context of this project. The house I grew up in was a bit of a technology paradox. There was a computer (assembled DIY-style by my programmer dad from used parts) in almost every room, each one surrounded and often semi-buried under stacks of paper, files and books (belonging to my editor mom, who hated computers). I’ve always felt my mother sits at one end of the extreme of the boomer generation and my dad at the other: the luddite and the technologist. My mom is an educational writer and potter and a believer in hard, physical work. In her retirement I have persuaded her to adapt to ‘smart’ technology tools as a means of making her life easier. She wound up distrusting the “Ok Google” voice-operated remote to the extent that she returned it for the in-the-box basic model. My dad, meanwhile, taught himself how to write code in the 80s because it was fun, recorded all his favourite classical music to MIDI files in the 90s, and still writes his own apps as a hobby. As an adult, my friendships have divided between two mindsets, both of which I identify with: on one end, developers, sci-fi aficionados and comic geeks; on the other, carpenters, classic literature, and folk and rock music. Sometimes there is an overlap (and I identify with those people the most), but it’s been rare.

With the influence that all things digital have had in recent decades, that spectrum has become murkier with each generation. Through the murkiness come big questions about where we are heading and where we are coming from. We wonder what we’ll be able to take with us and what we are leaving behind to make way for the onset of ever-changing ideals of innovation.

Until college, my art practice was a physical, tangible process. Since then it has been mostly digital, with massive pros and cons. I had held out as long as I could: majoring in Film Animation, I drew everything by hand and only used the ancient (obsolete) cameras and editors (Oxberry and Leika). When I later spent a year in the Computer Animation lab at Sheridan College, the difference in energy compared to the drawing studio at Concordia was shocking. So much so that after walking in one day and taking in the sea of zoned-out, green-lit faces, I knew I would never work in that industry. There is something wrong with how we have been adapting to our digital environments, and it’s got nothing to do with the tools themselves. It’s how we believe we need to use them. It’s not just the difference between a Graphical User Interface and a tangible one, but the philosophical problem is most easily demonstrated in their comparison, making it the best place to start. My personal spin on it is a rule of thumb that has become a theme in my teaching, educational and art-making endeavours: my mom (representing the spectrum of Leonard Cohen luddites out there) needs her curiosity to be sparked enough to ask ‘why’, while my dad and everyone else who makes a habit of looking under the hood is challenged enough to ask ‘how’. Between those two questions lies my take on Ubiquitous Computing – it just needs a friendlier name.

Technical Challenges 

After the first two group projects, I was looking forward to the opportunity to work alone so that I could prioritize learning about what was going on behind the scenes: the gyroscope and accelerometer themselves, their inputs in Arduino, and how the serial communication was sent to Processing. I very much wanted to create some stunning animated visuals, but I allowed myself to go down the rabbit holes that group work had been preventing (wisely, it turns out). I watched the class videos about the gyroscope and accelerometer several times, pausing to look up everything I didn’t immediately comprehend (including how G-force works); I researched different types of sensors; I built my own class of objects after watching and reading Shiffman tutorials. The biggest rabbit hole was my determination to use inputs from both the accelerometer and the gyroscope. It seemed vital that the beauty of the rolling pin be in its level of sensitivity, and I had a hard time accepting that my lack of experience and knowledge pushed this goal out of scope. But it was a good lesson for the remaining experiments in this course, and could serve as an invaluable reminder in my thesis work. I did manage to acquire nicely organized readings from both sensors, but everything seemed to fall apart with the map function. Even once I had resigned myself to using just the gyroscope inputs, with the map function seemingly well implemented, the values still jumped in larger increments than I could pass off as drift.
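For what it’s worth, one common way to tame jumps like that (sketched below with a noisy sine wave standing in for the sensor) is to low-pass filter the reading before map() and constrain the mapped result so residual drift cannot push it off screen. The 0.1 smoothing factor is a guess to be tuned by feel; this is a sketch, not the project’s code.

```
float smoothAngle = 0;
float rawAngle = 0;   // stand-in for the gyroscope reading

void setup() {
  size(600, 200);
}

void draw() {
  background(255);
  rawAngle = 45 * sin(frameCount * 0.02) + random(-8, 8);  // noisy test signal
  smoothAngle = lerp(smoothAngle, rawAngle, 0.1);          // low-pass filter
  float x = constrain(map(smoothAngle, -90, 90, 0, width), 0, width);
  ellipse(x, height / 2, 20, 20);
}
```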

img_9234

img_9235
Code Link

Reflections

The result of all that determination was that, although I probably learned more than I realized, I became overwhelmed by the amount of new information coming in from so many areas, and with the deadline quickly closing in, I was losing my sense of comprehension. This was nowhere more the case than in the Processing code, where I was continuously distracted by all the visual possibilities while not being able to write them cohesively. I had planned to write an elegant class of animation functions, but wound up with two rect shapes that took gyroscope X, Y and Z angle values, and a few basic animation functions. It was enough to demonstrate the concept in the presentation, but I should have allotted more time to the visuals and perhaps requested help with the map function.

As a self-observation, I found that I get easily fixated on one route to the goal. Discussions with Manisha helped me out of this mental quicksand more than once; it was in talking with her that the idea evolved from a clay-cutting wire to a rolling pin. I am, however, finding confidence in my resourcefulness in these projects: problem-solving how to hang the mobile with limited resources and major restraints (Experiment 2), and how to fix an Arduino in place like a level (a razor container fits it perfectly!) inside a DIY rolling pin (PVC pipe, with floral foam from Dollarama). I had fun exploring my home, scanning everything in sight in an abstract way and reinterpreting its potential use. Or just sitting back and exploring this process mentally, recalling, Fifth Element-style, an encyclopedic reference of tools and objects from memory until something stands out and is moved to the category of candidates for potential use, awaiting comparison against other objects with similar potential but a different set of offerings, to see how it holds up.

 

Bibliography

Gerhardt, Tom, February 19, 2010.  Mud Tub, Tangible Interface 2009 [Video file].  Retrieved from https://www.youtube.com/watch?v=4kb0u2jPotU

Delon Visual Arts, November 5, 2014. Cutting Clay with Wire [Video file]. Retrieved from https://youtu.be/CE1RngUukjk

Ishii, Hiroshi, Tangible Media Group, 2008.  Tangible Bits: Beyond Pixels.  MIT Media Laboratory. 

Weiser, Mark, 1999.  The Computer for the 21st Century. Palo Alto Research Center, Xerox, CA

Weiser, Mark and Brown, John Seely, December 21, 1995.  Designing Calm Technology. Xerox PARC, CA

Shiffman, Daniel, 2008.  Learning Processing: A Beginner’s Guide to Programming Images, Animation and Interaction.  Elsevier Inc. Burlington, MA

 

 

Skull Touch

S K U L L  T O U C H – An interactive skull

img_0604

Skull Touch
An interactive skull that reacts to people’s touch and responds with spooky sounds of varying amplitude and frequency.

Mentors
Kate Hartman & Nick Puckett

Description

This project explores different states of touch: no touch, one-finger touch, two-finger touch and grab. Capacitive sensing makes it possible to detect these different states.
When I think of the term “tangible interface”, the first thing that comes to mind is a tactile interface that anyone can feel, touch and interact with. Why the scary theme? It was Halloween time, hence the spooky skull.

Github link: https://github.com/Rajat1380/SkullTouch

Inspiration

I started with the idea of using a pressure sensor and a touch sensor, then realized that only two states could be obtained, and I was not satisfied with that. So I started looking at how to get more outputs.
I got to know about capacitive touch sensing, through which any surface can become a sensor. Watching this video by DisneyResearchHub got me thinking that this was what I had been trying to do initially. Disney developed its own hardware to detect the different touch states, and the details are not open to the public. So I started looking for alternatives and found StudioNAND/tact-hardware, which provides all the information about the capacitive sensor. I had always wanted to work with audio and to control it with different input methods, and this project gave me the push I needed. As the input device I chose a skull, since the project was happening around Halloween.

The Process

I started with the circuit setup and code to get capacitance values for the different touches. The parts list provided by StudioNAND for a low-budget capacitive sensor is given below.

Requirements
1× 1N4148 diode
1× 10mH coil

Capacitors
1× 100pF
1× 10nF

Resistors
1× 3.3k
1× 10k
1× 1M

I got all the components except the 10mH inductor; I used a 3.3mH inductor instead of the recommended one and proceeded with the circuit setup.
I had to install the TACT library for both Arduino and Processing to run the code.

Arduino Circuits

circuit

circuit2

Prototype 1

img_20191030_230809

img_20191105_185643

The output was very distinctive between no touch and one-finger touch, but not between one-finger and two-finger touch. I figured this out very late in my project. It was happening because of the 3.3mH inductor: the range I was getting was too narrow for distinctive touches. I tried winding my own 10mH inductor from a hollow cylinder and copper wire, but was not able to get near 10mH, so I proceeded with the current code.

Since I was planning to control audio with the skull as the input device through different modes of touch, I jumped directly into Processing to couple the capacitance value with the amplitude and frequency of a scary sound, to give the user a spooky experience.
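A hedged sketch of that coupling: a capacitance value arriving over serial is smoothed and bucketed into the four touch states, each of which scales the amplitude and playback rate of a looping sample via the Processing Sound library. The thresholds and file name are made up and would need calibrating against the real sensor range.

```
import processing.serial.*;
import processing.sound.*;

Serial port;
SoundFile spooky;
float cap = 0;

void setup() {
  size(300, 300);
  spooky = new SoundFile(this, "spooky.wav");  // hypothetical file name
  spooky.loop();
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  background(0);
  if (cap < 100) {              // no touch
    spooky.amp(0.05);
    spooky.rate(1.0);
  } else if (cap < 300) {       // one finger
    spooky.amp(0.4);
    spooky.rate(1.0);
  } else if (cap < 600) {       // two fingers
    spooky.amp(0.7);
    spooky.rate(1.3);
  } else {                      // grab
    spooky.amp(1.0);
    spooky.rate(1.6);
  }
}

void serialEvent(Serial p) {
  String raw = p.readStringUntil('\n');
  if (raw == null) return;
  cap = lerp(cap, float(trim(raw)), 0.2);  // smooth the incoming value
}
```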

Final Setup

group-15setup3

img_0597
img_0601
img_0602

Challenges & Learnings

  1. Libraries, and how and where to install them, were my hard-won learning in this project. The TACT library was created in 2014 and is no longer supported, so it does not work with newer Arduino microcontrollers; I had to use an Arduino UNO.
  2. I was getting noise in the capacitance output even with no touch. I tweeted the developer of the TACT library and got a reply from him; he pointed me to controlling sound via a microphone input. I still find this hard to understand. Someday I will.
  3. Getting components was hard, as I was not able to get all of them from Creatron. I tried to craft my own inductor, and it was not successful. It’s all part of learning, and it is difficult to accept the dead ends.

References

StudioNAND (2014) tact-hardware. Accessed 28 Oct. 2019. https://github.com/StudioNAND/tact-hardware

Tore Knudsen (2018) Sound Classifier tool. Accessed 29 Oct. 2019. http://www.toreknudsen.dk/journal/

Tore Knudsen (2018) Capacitive sensing + Machine Learning. Accessed 29 Oct. 2019. http://www.toreknudsen.dk/journal/

Tore Knudsen (2017) SoundClassifier. Accessed 29 Oct. 2019. https://github.com/torekndsn/SoundClassifier

Tore Knudsen (2017) Sound classification with Processing and Wekinator. Accessed 29 Oct. 2019. https://vimeo.com/276021078

DisneyResearchHub (2012) Botanicus Interacticus. Accessed 30 Oct. 2019. https://www.youtube.com/watch?v=mhasvJW9Nyc&t=49s

HOVER BEAT

cover

HOVER BEAT

A musical instrument played without touch

Jignesh Gharat


Project Description:

Hover Beat is an interactive musical instrument played without touch or physical contact, installed in a controlled environment with a constant light source. The project aims to explore interactions with a strong conceptual and aesthetic relationship between the physical interface and the events that happen in the form of audio output.

Interaction:

A project’s potential radius of interaction is usually determined by technical factors, be it simply the length of a mouse cord or the need for proximity to a monitor used as a touch screen, the angle of a camera observing the recipient, or the range of a sensor. However, the radius of interaction is often not visible from the outset—especially in works that operate with wireless sensor technology.

In this project, the attempt is not to mark out the radius of interaction or spatial boundaries at all, so that they can be discovered only through interaction.

Explorations & Process:

I started exploring different sensors that could be used for controlling sound: a sound detection sensor, a flex sensor, capacitive DIY sensors using aluminum foil, and pressure sensors. I finally ended up using a light sensor to make the interaction invisible, so that the user doesn’t see or understand how the instrument actually works. This opens up many possibilities for interacting with the object; users explore and learn while interacting.

img_20191029_131438855

Flex Sensor | Arduino Uno or Arduino Nano

img_20191030_201510090_burst000_cover_top

Light sensor LDR | Arduino Uno | Resistor

img_20191031_120447094

Using a glass bowl to calibrate the base sensor reading in the DF 601 studio environment at night.

How does the sound actually change?

The data coming from the Arduino and the LDR is used in Processing to control the playback speed and amplitude of the sound. A steady reading, taken from the constant amount of light detected by the LDR, is used as a benchmark.

Libraries in Processing:
import processing.sound.*; (for SimplePlayBack)
import processing.serial.*; (the Processing serial library)
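A minimal sketch of the benchmark idea, assuming the Arduino prints the raw LDR reading (0-1023) once per loop: the first 60 readings taken under the constant light are averaged into a baseline, and the drop from that baseline as a hand hovers over the bowl drives playback rate and volume. The file name and mapping ranges are illustrative.

```
import processing.sound.*;
import processing.serial.*;

Serial port;
SoundFile tabla;
float reading = 0, baseline = 0;
int samples = 0;

void setup() {
  size(300, 300);
  tabla = new SoundFile(this, "tabla_loop.wav");  // placeholder file name
  tabla.loop();
  tabla.amp(0.2);
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  background(30);
  if (samples < 60) return;                       // still calibrating
  float drop = constrain(baseline - reading, 0, baseline);
  tabla.rate(map(drop, 0, baseline, 1.0, 2.0));   // closer hand = faster beat
  tabla.amp(map(drop, 0, baseline, 0.2, 1.0));
}

void serialEvent(Serial p) {
  String raw = p.readStringUntil('\n');
  if (raw == null) return;
  reading = float(trim(raw));
  if (samples < 60) {                             // build the benchmark
    baseline = (baseline * samples + reading) / (samples + 1);
    samples++;
  }
}
```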

Sound:

I started experimenting with 8-bit sounds, vocals and instruments. The sounds changed their playback rate and amplitude based on the amount of light received by the LDR sensor. To minimize the noise and improve clarity in the ever-changing sounds, the best option was to work with clean beats, so I decided to work with instrumental music. The interactions were mostly hand-based, so I researched musical instruments that are played with the hands and make distinct, clear beats. The Indian classical instrument tabla, a membranophone percussion instrument, became my inspiration for the form and interaction gestures.

musician-tabla

Aesthetics: 

The form is inspired by the shape of the tabla. A semi-circular glass bowl defines the boundary, or starting point, for the readings and limits the interaction to a radius around the LDR sensor. The transparent glass actually confuses users and makes them curious about how it works. The goal was to come up with a minimal, simple and elegant product that is intuitive and responsive in real time.

img_20191031_163131
img_20191031_163119
screenshot-2019-11-05-at-7-20-15-pm

Exhibition Setup:

The installation works only in a controlled environment where the light quality is constant, because a calibrated base reading is used as the benchmark for changing the sound.

screenshot-2019-11-05-at-7-17-55-pm

Experiment Observations:

The affordances were clear enough. Because sound was playing, users got a clue that the object had something to do with touching or tapping, but on interacting they quickly found out that hovering at different heights over the object manipulates the tabla sounds. People tried playing with the instrument. People with some technical knowledge of sensors were more creative, as they worked out that it is light that controls the sound.

Coding:

Github – https://github.com/jigneshgharat/HoverBeat

Scope:

The experiment has laid a foundation to develop the project further into a home music system that reacts to the light quality and sets the musical mood; for example, if you dim the lights, the music player switches to an ambient, peaceful soundtrack. A new musical instrument can be made just by using DIY sensors at very low cost, with new and interesting interactions.


Touch

Touch: Graphite User Interface

Liam Clarke

touch head

Touch is an Arduino and Processing project that creates a simple computer with a touch interface.

The initial goal was to find ways to combine Processing and Arduino in a single interactive medium. The project is a touch screen made with conductive paint, glue, and wires. A 10×5 grid was cut into an acrylic sheet, which acts as the conductive circuit within the screen. While the initial dimensions are simple, a much more complex grid can be built on the current version. An Arduino Uno and the CapacitiveSensor library are used to register touches via the grid. The data is then sent to Processing, which performs the visual actions that create the illusion of an operating system. The screen itself is created by projecting onto the acrylic panel, with icons and features mapped to the grid using MadMapper.

second

Touch screen technologies were researched to develop ways to combine Processing and Arduino. The most attractive was an infrared touch frame around glass; the hardware for this method, a contact image sensor, can be sourced from an average household printer. While this would be a visually clean method of sensing touch, a capacitive grid was chosen for simplicity and time constraints. In capacitive touch, sensors have an electrical current running through them, and touching the screen causes a voltage change. This sends a signal to a receiver, and the touch is registered in software.

The screen was made from an acrylic panel. A grid was cut into the panel, and these grooves were used to embed the conductive material. Small holes were drilled into the panel to serve as touch points. These touch points were filled with conductive paint and protected with conductive glue. Tests were done to find the smallest groove and the least conductive paint that could still trigger a signal on the send pin.

1  

The frame of the computer is sourced from a broken microwave, gutted of its internal components. Different screens and casing shapes were tested; this setup and size were selected as the most adaptable for future expansions of the machine’s function. A large size helps improve screen resolution when back-projecting onto translucent material.

micro

The code was created using Paul Badger’s CapacitiveSensor library. Each sensor connects to two digital pins on the Arduino: the send pin connects to a 1M ohm resistor, which connects to the receive pin, and the sensor is connected between the receive pin and the resistor. Multiple sensors can share the same send pin, which helps when scaling up the number of functions on one Arduino.

2

The Processing side of the software is based on images reacting to the touch input. Images were mapped onto the grid using the Syphon Processing library, sent to the projector via MadMapper. MadMapper facilitates organizing the layout, removing the need for precise calibration of image locations within Processing.
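Since MadMapper handles the physical alignment externally, the Processing sketch mainly has to decide which “screen” to draw. Below is a toy version of that state machine, assuming the Arduino prints the index of the touched grid cell once per loop; the cell assignments and placeholder text are invented for the example.

```
import processing.serial.*;

final int HOME = 0, PAINT = 1, AUDIO = 2;
int mode = HOME;
Serial port;

void setup() {
  size(640, 480);
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  background(255);
  fill(0);
  textSize(32);
  if (mode == HOME)  text("HOME - touch 1 for paint, 2 for audio", 20, 60);
  if (mode == PAINT) text("PAINT", 20, 60);
  if (mode == AUDIO) text("AUDIO PLAYER", 20, 60);
}

void serialEvent(Serial p) {
  String raw = p.readStringUntil('\n');
  if (raw == null) return;
  int cell = int(trim(raw));
  if (cell == 0) mode = HOME;        // cell 0 acts as the home button
  else if (cell == 1) mode = PAINT;
  else if (cell == 2) mode = AUDIO;
}
```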

The style direction was based on the original Macintosh for its simplicity and colours. The point of basing the design on a real operating system was to add to the illusion of a fully functional computer. The current build features an audio player with play and pause buttons, a simulated paint program, and a return-to-home function.

final

Code:

https://github.com/lclrke/Touch

Pjrc.com. (2019). CapacitiveSensor Arduino Library with Teensy, for Capacitive Touch and Proximity Sensing. [online] Available at: https://www.pjrc.com/teensy/td_libs_CapacitiveSensor.html.

 GitHub. (2019). CapacitiveSensor. [online] Available at: https://github.com/PaulStoffregen/CapacitiveSensor.

Baxter, L. (1996). Capacitive sensors. John Wiley & Sons, p.138.

The Compass

PROJECT TITLE
The Compass

PROJECT BY
Priya Bandodkar

PORTFOLIO IMAGES

portfolio-2 portfolio-1 portfolio-4

portfolio-3

PROJECT DESCRIPTION

“The Compass” is an experiment that uses interactions with a faux compass in the physical space to navigate in a virtual 3D environment. The exploration leverages affordances of a compass such as direction, navigation, travel to intuitively steer in a virtual realm. It closely emulates a VR experience sans the VR glasses.

Participants position themselves facing the screen and rotate the handheld compass to traverse an environment on the screen in directions mimicking the movement of the physical interface. Participants can turn the device, or even choose to turn around with the device, for a subtle variation of the experience. This movement rotates the 3D sphere on the screen, creating the illusion of moving through the virtual space itself. The surface of the sphere is mapped with the texture of a panoramic landscape: a stylised city scene of crossroads, bundled with characters and vehicles to complement the navigational theme of the experiment.

As a variation on the original concept, I embedded characters in the scene for participants to search for, tapping into the ‘discovery’ affordance of the compass to create a puzzle-game experience.

PROJECT CONTEXT

As a 3D digital artist, I have always been interested in exploring the possibilities of interactive 3D experiences in my practice. The introduction to Processing opened doors to the ‘interactive’ part of this. I was keen to play with the freedom and limitations of incorporating the third dimension into Processing and to study the outcomes.

One of my future interests lies in painting 3D art in VR and making the interactions as tangible as possible. I am greatly inspired by the work of Elizabeth Edwards, an artist who paints in VR using Tilt Brush and Quill, creating astonishing 3D art and environments. I was particularly fascinated by her VR piece ‘Spaceship’, 3D-painted in Tilt Brush, and set myself the challenge of emulating this virtual experience while controlling it with a physical interface more intuitive than a mouse.

It had to be a physical interactive object that helps you look around, mimicking a circular motion. Drawing parallels in the real world, I realised the compass has been one of the most archaic yet intuitive interfaces for finding directions and navigating real space, so I decided to leverage its strong affordance to complement the visual experience of my project. While building the interactivity, I realised how easy and effortless it became to comprehend and relate the virtual environment to your own body and space from the very first prototype, and even more so when physically controlling the object with an intuitive handheld device in real space.

ideation

Studying the gyroscope and sending its data to Processing filled a crucial piece of the puzzle. It let me use the orientation information, with a few simple yet very useful lines of code, to bring about the anticipated interaction to a T.
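The core trick can be sketched in a few lines, under stated assumptions: an equirectangular panorama (the file name below is hypothetical) is textured onto a sphere, and a heading value streamed from the orientation sensor rotates it, so turning the physical compass appears to turn the world. This is an illustration, not the project’s code.

```
import processing.serial.*;

Serial port;
PShape globe;
float heading = 0;

void setup() {
  size(900, 600, P3D);
  noStroke();
  PImage pano = loadImage("city_panorama.jpg");  // placeholder panorama
  globe = createShape(SPHERE, 250);
  globe.setTexture(pano);
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  background(0);
  translate(width / 2, height / 2);
  rotateY(radians(heading));   // mirror the physical rotation
  shape(globe);
}

void serialEvent(Serial p) {
  String raw = p.readStringUntil('\n');
  if (raw == null) return;
  heading = float(trim(raw)); // heading in degrees from the sensor
}
```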

sphere

I studied VR art installations such as ‘Datum Explorer’, which creates a digital wilderness from a real landscape and builds non-linear storytelling around elusive animals. This elicited the idea of incorporating identifiable characters in my 3D environment to add an element of discovery to the experience. I looked up games based on similar concepts, such as ‘Where’s Waldo?’, to calibrate the complexity of this puzzle-game idea. I used six characters from the Simpsons family and embedded them with a glitchy graphic effect, hinting that they did not exactly belong in the scene and hence needed to be spotted.

characters

To leverage the affordance of the compass, it was important to make it compact enough to fit in the hand and be rotated. I achieved this by nesting the microcontroller on a mini-breadboard within the fabricated wooden compass. I stuck to the archaic look for the compass to keep its design relatable for participants. While incorporating the puzzle-game aspect, I realised the design of the compass could be customised to hold clues related to the game, but I decided to let that go in this version, as the six-character puzzle was simple and straightforward enough for participants to solve.

compass-1 compass-2 compass-3

OBSERVATIONS

To conclude, the interaction with the compass in the physical world to control a virtual 3D environment came about intuitively for participants and was successful. Some interesting interactions came up during the demo: participants turned around with the compass held in hand, and placed the compass near the screen while rotating the entire screen, experiencing an emulation of ‘dancing with the screen’. The experience was also compared closely to VR, but without the VR glasses, making it more personal and tangible.

CODE

https://github.com/priyabandodkar/Experiment3-TheCompass

FUTURE ITERATIONS

This is an ongoing exploration that I would like to build on using the following:

  • Layering the sphere with 3D elements and image planes in the foreground to create depth in the environment.
  • Using image arrays that appear or disappear based on the movement of the physical interface.
  • Adding intricacies and complexities to the puzzle game by including navigation clues on the physical interface.

REFERENCES

Edwards, Elizabeth. “Tilt Brush – Spaceship Scene – 3D Model by Elizabeth Edwards (@Lizedwards).” Sketchfab, https://sketchfab.com/3d-models/tilt-brush-spaceship-scene-ea0e39195ef94c9b809e88bc18cf2025.

“Datum Explorer.” Universal Assembly Unit, Wired UK, https://universalassemblyunit.com/work/datum-explorer.

“Interactive VR Art Installation Datum Explorer | WIRED.” YouTube, WIRED UK, https://www.youtube.com/watch?v=G7BaupNmfQU.

Ada, Lady. “Adafruit BNO055 Absolute Orientation Sensor.” Adafruit Learning System, https://learn.adafruit.com/adafruit-bno055-absolute-orientation-sensor/processing-test.

Strickland, Jonathan. “How Virtual Reality Works.” HowStuffWorks, HowStuffWorks, 29 June 2007, https://electronics.howstuffworks.com/gadgets/other-gadgets/virtual-reality.htm.

“14 Facts About Where’s Waldo?” 14 Facts About ‘Where’s Waldo?’ | Mental Floss, 20 Jan. 2017, https://www.mentalfloss.com/article/90967/14-facts-about-wheres-waldo.

CODE REFERENCES

https://processing.org/tutorials/p3d/

https://processing.org/tutorials/pshape/

https://processing.org/reference/sphere_.html

https://processing.org/reference/texture_.html

Commuting Fun

img-0145

Project Name
Commuting Fun by Jessie Zheng

Project Description
Commuting Fun is an interactive installation that aims to bring some fun to mundane day-to-day commuting and to take away the stress, anxiety and even anger people may experience during rush hour. Aesthetically interesting visual patterns are projected onto the interior of the vehicle and can be altered through passengers’ actions on the train. Expected actions, such as sitting on a seat, grabbing the handles, or tapping on a ticket machine, generate unexpected changes in the visuals. Commuting Fun offers new possibilities for starting people’s day with a fresh and relaxed mind. The project aims to use as much of the space as possible for exploration and interaction, encouraging passengers to move around within the vehicle and get active.

Code
https://github.com/jessiez0810/experiment3

Video of Interaction

Ideation Process
I had been wanting to make an installation for this class for a very long time, and I finally could for this project. I looked at how different interactive museums worldwide incorporate different types of interaction. A lot of them use graphics as decorative elements in the interactive environment, with certain parts of the graphics reacting to physical interactions from the participants. Based on the same idea, I wanted to create decorative patterns in Processing that would be projected onto all surfaces of the installation space, as if they were wallpaper.


To add more interesting and tangible pieces to my installation, I came up with ideas such as balloons and exercise balls, because their shapes echo the polka dot pattern on the walls, adding to the aesthetic appeal. One of the Nuit Blanche exhibitions I went to used balloons in the installation space with lights around them to amplify the visual appeal. They could also serve as switches in the circuit, triggering changes in the projected polka dots when people pulled on the balloons or sat on and moved the exercise balls. The whole space would serve as a visually pleasing play space. Moreover, I thought about how participants could become part of the exhibition: if they all wore white, their clothes could be a canvas for the projected polka dots as well. However, considering the limited timeframe for figuring out the logistics of adding balloons and exercise balls to the circuit, I changed direction and implemented something that doesn’t move much, making a stable and secure circuit less of a challenge.

pasted-image-0

Nuit Blanche Exhibition That Inspired Me

One day on my commute to school, I noticed that people interact with a lot of objects in daily life without paying much attention to them. Commuting alone involves many such objects: tapping a card on a machine to get on a streetcar, pressing the stop button to get off, grabbing handles to stay stable. These objects have been carefully and systematically designed and placed in a vehicle to ensure its functionality, and people are so used to them functioning in a certain way that they barely pay attention to them. What if something unexpected happened when people interacted with a vehicle the way they normally do? Would they behave differently? Eventually I decided to recreate part of a subway car for my installation, with objects such as seats, handles and buttons acting as switches that change the way the graphics behave. The graphics are then projected back onto the subway as an integral part of the space, so that the digital and physical spaces become a whole.

Project Context
Inspired by the article Tangible Bits: Beyond Pixels by Hiroshi Ishii, who prefers the multiple layers of interaction in TUIs over the one-dimensional GUI interaction between users and the digital screen, this project takes on the challenge of exploring the interactive relationship between the physical and virtual worlds. An installation became the inevitable choice as the ideation progressed, driven by my intention not to confine participants’ physical interactions to a single object, but to spread multiple objects throughout the installation space for participants to walk around and discover.
Eventually I decided to recreate a TTC transit vehicle as the interactive space, because of the number of tangible things to work with: stop buttons and wires, POP machines, seats, and so on. People can interact with the things in the installation space without instructions or guidance, as most people have predefined ideas of how to interact with the objects in transit vehicles from their day-to-day commuting experience, which minimizes possible confusion. As the objects are spread out across the vehicle, another level of interaction emerges that invites broader movements of participants’ bodies.
An effort was made to keep the physical and virtual worlds inseparable. Having decided to use objects in the vehicle as switches controlling the graphic interface, the next challenge was integrating it into the physical world organically. Coming from a Chinese background, I have seen themed subway trains in China whose interiors are covered with decorative designs during certain festivals. Using the graphic interface as a decorative design projected onto the train became my focus: while people interact with different objects on the train, the design of the vehicle interior changes based on those interactions.

2

A Themed Subway Train in Ningbo, China

However, with technical limitations I was unable to map and project the decorative pattern onto all surfaces of the recreated vehicle, which led me to think of other projection methods. In China, often for commercial purposes, a series of pictures is put up in the tunnels outside the train; while the train moves at speed, the windows become a frame for the animated commercials outside. For this project, the window area is used in a similar way to display the visual elements, which simplifies the projection yet still makes the visuals an integral part of the installation.


Drawn to the minimalist style of the polka dot installations in Japanese artist Yayoi Kusama’s Infinity Mirrors exhibition at the AGO, I chose polka dots as the visual element of the decorative design. It strikes me how something so simple can still be extremely visually stimulating; sometimes simplicity speaks more to the audience than a compilation of elements. Yayoi Kusama’s obsession with polka dots stems from a mental disorder rooted in her difficult relationship with her mother as a child; a therapist encouraged her to draw and paint as an outlet for her oppressive feelings. The statement her artworks make, through carefully chosen colors and deliberately arranged compositions, is truly powerful on viewers such as myself. I chose a playful and vibrant color palette for the dots and background because I believe it can soothe people’s anxiety during rush hour.

Work in Progress
I wrote the code at first using potentiometers, then replaced them with velostat pieces attached to aluminum foil, to make sure the code worked.
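A rough sketch of this test stage, assuming the Arduino prints the three sensor readings as one comma-separated line like "512,80,3": each velostat pad, once pressed past a guessed threshold, flips one property of a simple polka dot pattern. The thresholds and visual choices are placeholders, not the installation’s code.

```
import processing.serial.*;

Serial port;
float[] sensors = new float[3];

void setup() {
  size(800, 600);
  noStroke();
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  boolean seat1 = sensors[0] > 300;   // thresholds need calibrating
  boolean seat2 = sensors[1] > 300;
  boolean seat3 = sensors[2] > 300;
  background(seat1 ? color(255, 120, 160) : color(250, 240, 210));
  float speed = seat2 ? 4 : 1;        // sitting speeds the dots up
  float size  = seat3 ? 60 : 30;      // or grows them
  for (int x = 0; x < width; x += 80) {
    for (int y = 0; y < height; y += 80) {
      fill(60, 90, 200);
      ellipse(x + 40, (y + 40 + frameCount * speed) % height, size, size);
    }
  }
}

void serialEvent(Serial p) {
  String raw = p.readStringUntil('\n');
  if (raw == null) return;
  String[] v = split(trim(raw), ',');
  for (int i = 0; i < min(3, v.length); i++) sensors[i] = float(v[i]);
}
```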

img-0032

Once I decided to recreate the subway space, I jotted down a list of things found in a Toronto subway car. I printed out posters and recorded the ambient sound in a subway on my way to school.
The most challenging thing was making the chairs in our studio resemble subway seats. I found red fabric at Michael’s with a similar look and feel to the fabric on a subway seat, and a silvery metallic adhesive film which I pasted and sewed onto the red fabric. This process was extremely time-consuming, as I had no prior sewing experience; it took me almost two days to finish all three seat covers.

img-0035img-0041

I also used cardboard boxes to recreate the yellow handles on the TTC subway. I initially intended these as switches too, but couldn’t find a good way to secure them in place while people pulled on them, so they became props that add to the look of the recreated subway space.
After taping all three seat covers onto chairs, I put sensors under the covers and connected them to the circuit. I had people sit on the chairs to check that this produced the desired changes in the graphics, and made adjustments accordingly.

img-0123

However, after a few tests the aluminum foil started to crumble and break from people sitting on it repeatedly. This led me to seek an alternative conductive material that could endure stretching a bit better; conductive fabric became the enhanced alternative.
Finally, the chairs were ready to go.

Final Look

img-0155

Final Look Of The Installation

img-0129

First Aid Box To Hide The Arduino

img-0131

Polka Dots Projected Back On The Train

img-0145

Handles To Grab On To

Reference
artjouer (2018) A Visit to TeamLab Planets Tokyo: Amazing Interactive Art. Available at: https://youtu.be/G6EtM1r0Eko (Accessed: November 4, 2019).

BRANDS OF THE WORLD (2018) Toronto Transit Commission. Available at: https://www.brandsoftheworld.com/logo/toronto-transit-commission (Accessed: November 4, 2019).

British Council Arts (2015) What is digital art? Available at: https://youtu.be/2RWop0Gln24 (Accessed: November 4, 2019).

CHINADAILY (2015) Another themed subway train runs in Ningbo. Available at: http://www.chinadaily.com.cn/m/ningbo/2015-04/22/content_20503269.htm (Accessed: November 4, 2019).

Grief, A. (2015) What Toronto’s highways would look like as a TTC map. Available at: https://www.blogto.com/city/2015/09/what_torontos_highways_would_look_like_as_a_ttc_map/ (Accessed: November 4, 2019).

Ishii, H. (2019) Tangible Bits: Beyond Pixels. Massachusetts: MIT Media Laboratory. Available at: https://zhenbai.io/wp-content/uploads/2018/08/4.-Tangible-Bits-Beyond-Pixels.pdf (Accessed: November 4, 2019).

O’Neil, L. (2019) The TTC is putting fare evaders on blast in a new ad campaign. Available at: https://www.blogto.com/city/2019/05/ttc-now-putting-fare-evaders-blast/ (Accessed: November 4, 2019).

Sinha, V. (2018) Yayoi Kusama: Her world of polka dots. Available at: https://www.thejakartapost.com/life/2018/09/06/yayoi-kusama-her-world-of-polka-dots.html (Accessed: November 4, 2019).

Tate (2012) Yayoi Kusama – Obsessed with Polka Dots | Tate. Available at: https://www.youtube.com/watch?v=rRZR3nsiIeA (Accessed: November 4, 2019).

PLAY

play

By Masha Shirokova

CODE: https://github.com/MariaShirokova/Experiment3

PROJECT DESCRIPTION:

My idea was to create a multi-sensory device that allows users to explore sense-crossing and experience at least three senses.

Play is a (musical? visual? tangible?) instrument with a multisensory interface: users can play sounds and create their own sound and visual compositions on the screen by interacting with tactile sensors. Each sound displays a visual animation over the background when played. Play enables users to make a whole “orchestra” out of pom poms, glasses of water and other non-musical objects, turning a palette into a rhythmic sequencer.

For now, the device consists of three tactile objects: a glass of water, a foldable paper button and a pom pom button, which control three modes of visuals on the screen and three sounds. Further on, I would like to expand the number of objects as well as make the visual part more complex.

The device provides users with many performance possibilities. It can also be used for educational purposes and experiences, to give kids and adults a chance to interact with music in new and different ways.

PROJECT CONTEXT:

Hearing smells or seeing sounds are examples of synesthesia – one of my main research interests. This experiment is my first attempt to create a multi-sensory object that helps users understand how tightly the senses are crossed and connected to each other. In the case of Play, pushing or touching DIY buttons triggers sound and a colorful visual animation.

The history of the aesthetic expression of synesthesia arose from the paintings of Wassily Kandinsky and Piet Mondrian. It continued in the note drawings of Cornelius Cardew, who literally drew his music on notation schemes. Sometimes these were quite identifiable notes, but their duration and relative volume were to be determined by the performer. The culmination of this approach was his book “Treatise”, comprising 193 pages of lines, symbols, and various geometric or abstract shapes that largely eschew conventional musical notation. The simple grid of the board and screen interface was inspired by the geometric abstract works of Mondrian, classical notation schemes and the short films of Oskar Fischinger. The screen grid is affected by the sound and turns into a sound wave that changes depending on the volume (amplitude) of the sound.

Wassily Kandinsky was capable of “hearing” colors, which is why he composed his famous “symphony pictures” to be melodically pleasing. He combined four senses: color, hearing, touch, and smell. Experimenting with perceiving senses differently through such a device can therefore be a valuable exercise for developing imagination and creativity. In his compositions, circles, arcs and other geometric shapes seem to be moving, so I likewise used simple animated shapes and bright colors to keep the connection with the artist who experienced synesthesia.

 

artwork-vasily-kandinsky-composition-8-37-262
Composition 8 by Wassily Kandinsky

1f9284dca0f4160be8a0dcbb1f555ec1
Composition London by Piet Mondrian

d5694rjnbom swjloncilnk
Drawn notes from “Treatise” by Cornelius Cardew

Working on the sound part was a new experience for me. I picked three different sounds: a rapid drum sound and two xylophone sounds. There is a Russian band, SBPCH (Samoye Bolshoe Prostoe Chislo), that plays electronic music based on simple but nice sounds of water, rain, glass or ping pong balls. I wanted to reach for the same effect in picking my sounds: this is how I “hear” collapsing and growing shapes.

As for the tactile part, my goal was to make the tangible experience as diverse as I could, so I included a soft pom pom button, a paper button and a glass of water. Users experience something soft and colourful, something dry and solid, and something liquid at the same time – this is where the contrasts of touch, sound and visuals mix together.

I first saw the possibility of adding water to the circuit in a video by the Adafruit Industries company. Then I realized that they use different boards, which rely on capacitive touch, so I started looking for other methods of using water as a sensor. I added salt and it worked!

 

PROCESS:

First week, I started with brainstorming some initial ideas for the project:

  • Shadow play
  • Use of bubble wrap
  • Game based on the principle of Minesweeper Online Game
  • Multi-sensory device

I decided to do the last one, as it represents my research interest and, hopefully, will be helpful for my thesis.

The code from the first class, provided by Kate, became the foundation for my project. I replaced the potentiometers with DIY sensors and added more detail to the Processing code (sound and visuals).

citcuit

Circuit from the first class

Visual interface:

sketchbook

Interface sketches

For the grid, I used a soundwave (the same method we used in Echosystem for Experiment 1), which was affected by the amplitude of the sound.

1. 3D rotating cubes for the starting screen, using the P3D renderer and rotation.

screen1


2. The first sensor activates the yellow square (the square's Y position was mapped to the sensor value) and the “play more” text.

button-puk


3. The second sensor activates a static composition of a star and rectangles.

water-drum


4. The third sensor activates text and a circle (the fill color was randomized: fill(0, random(0,240), 255)); its Y position was also mapped to the sensor value. Moreover, three more ellipses were activated, their sizes changing with frameCount at different rates so that they looked like a water surface. The third sensor is also responsible for the sound wave (a sketch of this mapping appears below).
xylo-fluff



xylo

Two sensors are being activated
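As promised above, here is a minimal Processing sketch of the grid-as-soundwave and the sensor-mapped circle. It assumes the Sound library and a looping sample (xylo.wav is a hypothetical file name), and mouseY stands in for the incoming sensor value:

import processing.sound.*;

SoundFile sample;
Amplitude amp;

void setup() {
  size(600, 400);
  sample = new SoundFile(this, "xylo.wav"); // hypothetical sample file
  sample.loop();
  amp = new Amplitude(this);
  amp.input(sample);
}

void draw() {
  background(255);
  float level = amp.analyze(); // 0..1 loudness of the playing sound

  // grid of horizontal lines, displaced into a wave by the amplitude
  stroke(0);
  noFill();
  for (int gy = 40; gy < height; gy += 40) {
    beginShape();
    for (int x = 0; x <= width; x += 10) {
      float wave = sin(x * 0.05 + frameCount * 0.1) * level * 60;
      vertex(x, gy + wave);
    }
    endShape();
  }

  // circle with randomized fill, its Y mapped to the (stand-in) sensor value
  noStroke();
  fill(0, random(0, 240), 255);
  float sensorY = map(mouseY, 0, height, 50, height - 50);
  ellipse(width / 2, sensorY, 60, 60);
}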


Figuring out the sound:

Wiring the circuit to the Arduino and writing the code for the 3 DIY sensors was simple. However, working with multiple sounds was a bit challenging. First, I looked at sound libraries I could use in Processing and found the Sound library and the Minim library. With 2 sounds it was comfortable to use both, as it was possible to stop and play sound files from the 2 different libraries. However, when I added the third sound, it did not play. So, instead of pausing sounds, I changed their volume and used only the Sound library.
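A minimal sketch of that volume-based workaround, assuming the Sound library and three hypothetical sample files; the number keys 1–3 stand in for the three sensors:

import processing.sound.*;

SoundFile drum, xylo1, xylo2;

void setup() {
  size(200, 200);
  // hypothetical file names standing in for the three picked sounds
  drum  = new SoundFile(this, "drum.wav");
  xylo1 = new SoundFile(this, "xylo1.wav");
  xylo2 = new SoundFile(this, "xylo2.wav");
  // loop all three at zero volume instead of stopping/starting them
  drum.loop();  drum.amp(0);
  xylo1.loop(); xylo1.amp(0);
  xylo2.loop(); xylo2.amp(0);
}

void draw() {
  background(0);
}

void keyPressed() {
  // raise the volume of one sound and silence the others --
  // in the project these triggers come from the DIY sensors
  if (key == '1') { drum.amp(1); xylo1.amp(0); xylo2.amp(0); }
  if (key == '2') { drum.amp(0); xylo1.amp(1); xylo2.amp(0); }
  if (key == '3') { drum.amp(0); xylo1.amp(0); xylo2.amp(1); }
}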

Testing sounds


Combining sound+image

3 sounds

DIY sensors:

I was excited to work with different materials and to provide users with very different experiences. In the beginning, inspired by a performance where players used only fruit to make music, I wanted to use a lemon as one of the sensors. However, there were two “not enoughs” – not enough voltage, and the lemon was not conductive enough. So I switched to squishy circuits and tested play-dough. It also was not very reliable, even though I tried 2 kinds of resistors; the play-dough sensor only worked as an on/off switch. Therefore, I came up with the idea of 2 buttons (paper and pom-pom) which would still allow users to experience different interactions and contrasting materials. Furthermore, I still wanted to use a glass of water as a sensor. Although I did not manage to activate this sensor with the user's touch, I did make it work as a simple switch (the sensor reaches its max value when both clips are in the glass). In this case, salt helped me a lot by making the water more conductive.
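For the glass of water in particular, the reading only approaches its maximum when both clips are submerged, so it can be treated as a binary switch. A hedged Arduino-side sketch of that thresholding (the pin, resistor value and threshold are illustrative):

const int WATER_PIN = A2;   // clips + a fixed resistor form a voltage divider
const int THRESHOLD = 900;  // near max only when both clips are in the salted water

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(WATER_PIN);
  bool on = raw > THRESHOLD;  // salt raises conductivity enough to cross it
  Serial.println(on ? 1 : 0);
  delay(100);
}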

I preferred copper tape to aluminium foil as the conductive material, as it was less flexible and more stable.

  • Lemon
  • Play-dough
  • Water
  • Fluffy Button & Paper Button

Processed with VSCO with n1 preset     Processed with VSCO with n1 preset

Assembling and making:

As I see this device as an open structure for adding extra tangible objects, I decided to keep the model's structure exposed to the public as well, and did not hide the wires connected to the buttons and to the glass of water.

The tactile part consisted of a laser-cut board, the pom-pom button (fluffy balls were simply glued to a card with a copper piece covered with velostat, while the other copper piece was glued to the board), and the paper button (the sensor part was hidden inside, so pressing it allowed the 2 parts of the sensor to connect with each other).

For the visual interface, I laser-cut the board and a box where I could hide the breadboard.

img_7636

Processed with VSCO with n1 preset

Processed with VSCO with kk2 preset

REFLECTIONS:

The best part of this experiment is that my classmates were actually interested in interacting with the device and enjoyed the process of creating beats and melody.

A few days after the presentation, I am looking at the project and wondering why I did not manage to make it more complex. I know I should have set up and edited the sounds so that the resulting melody would be better.

In general, I enjoyed working on this project, as I could actually play with my favorite materials: sound, image and tactile materials. However, as I set out to use all 3 senses, I did not have enough time to work on the quality of the sound. My further plan is to improve the Processing code by adding sounds and making the visual part more complex.

Another plan is to match the sound, visual and tactile parts to real data received from people who experience synesthesia. I believe this phenomenon can be very inspiring for other users. Even for those who do not experience mixed senses, the synesthesia vocabulary can be used as a source of inspiration, since colour and music associations are very poetic and metaphorical. Perhaps users shall produce their very own vocabulary of vision to be able to experience art fully, and hopefully the future PLAY device can be useful for expanding our sense experience.

REFERENCES

16 Pineapples – Teplo. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=SimccVMCpv4.

Adafruit Capacitive Touch HAT for Raspberry Pi – Mini Kit – MPR121. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=Wk76UPRAVxI&list=PL5CF99E37E829C85B&index=130&t=0s.

An Optical Poem (1938) by Oskar Fischinger. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=_kTbt07DZZA.

Chen, P. A. (2016, November 15). How to add background music in Processing 3.0? Retrieved November 5, 2019, from https://poanchen.github.io/blog/2016/11/15/how-to-add-background-music-in-processing-3.0.

“Early Abstractions” (1946-57), Pt. 3 by Oskar Fischinger. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=RrZxw1Jb9vA.

Nelzya Skazat’ Koroche by SBP4. (n.d.). Retrieved November 5, 2019, from https://www.youtube.com/watch?v=XIictPv-5MI.

Puckett, N., & Hartman, K. (2018, November 2). DigitalFuturesOCADU/CC18. Retrieved from https://github.com/DigitalFuturesOCADU/CC18/tree/master/Experiment%203

Swinging. (n.d.). Retrieved November 5, 2019, from https://works.jonobr1.com/Swinging.

Visual music. (2019, September 19). Retrieved November 5, 2019, from https://en.wikipedia.org/wiki/Visual_music.

Live VJ Performance Show

Project Title: Live VJ Performance Show

-An interactive live performance that explored audio visualization

By Jun Li – Individual Project

wechatimg2308

Abstract

This project was an 11-minute live performance show for the audience, based on the concept of ‘Audio Visualization’. It was also my first attempt at being a VJ. All 8 of the different effects were generated from and interacted with the music input in real time. It was built on Arduino controllers and TouchDesigner, with the video output projected in the background. The purpose of this experiment was to create a very simple user interface with different switches and sliders to manipulate the effects.

This dynamic experience gives every participant the opportunity to become a ‘Visual Jockey’: they can operate and change each parameter of the audio, resulting in the creation of energetic graphics in the background.

Keywords:  VJ, Live Performance Show, Audio Visualization, Interaction.

Requirements

The goal of this experiment was to create a tangible or tactile interface for a screen-based interaction that uses Arduino and Processing. I came from a very similar undergraduate program called ‘Technoetic Art’ and had a lot of experience working with this kind of software beforehand, and I am always eager to challenge myself at a higher technical level and create fascinating projects. After discussing my idea with Kate and Nick, I received their permission. I used the technical logic of Processing and applied it to TouchDesigner, thus retaining the knowledge of serial communication across both tools.

Description

Music visualization refers to a popular form of communication that combines audio and visuals, with vision at its core and music as the carrier, using various new media technologies to interpret musical content through pictures and images. It provides an intuitive visual presentation technique for understanding, analyzing and comparing the expressiveness of the internal and external structures of musical art forms.

Vision and hearing are the most important channels through which human beings perceive the outside world. “Watching” and “listening” are the most natural, direct, and important means of recognizing the outside world through the audiovisual senses. In contemporary society, sound and video, hearing and vision, converge on shared aesthetic trends and dominate the aesthetic form of mass culture. Vision provides many conveniences for people to see and understand musical works and music culture, and people will increasingly rely on visual forms to understand audio content. The applications of music visualization are very wide: live music, exhibition sites and other music visualization systems, combined with special imagery and customized immersive content, can give people a strong visual impact while they enjoy the music.

Process & Techniques

Research question

First, I explored and researched what kinds of parameters from audio can affect the visual part, and how I could utilize them to manipulate the shape or the transformation of the visual design. Usually, audio visualization is based on the high, mid and low frequencies, the beat, and the volume of the music. So one of the most important techniques is how to get these data or parameters from the audio itself and how to convert them into an audio visualization.

Arduino Implementation

Most of my project was built in TouchDesigner. For the Arduino part, the code was not difficult: it just sent the data of a switch and 8 sliders to TD through serial communication for manipulating the visual effects.
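The Arduino code is not reproduced here, but a minimal sketch of that controller could look like the following, assuming one switch on D2 and eight sliders on A0–A7 (a board that exposes eight analog inputs, e.g. a Nano or Mega), with TouchDesigner's Serial DAT parsing the comma-separated line at the other end:

void setup() {
  Serial.begin(9600);
  pinMode(2, INPUT_PULLUP);   // the switch, wired to ground
}

void loop() {
  Serial.print(digitalRead(2) == LOW ? 1 : 0);  // switch state first
  for (int i = 0; i < 8; i++) {
    Serial.print(",");
    Serial.print(analogRead(A0 + i));           // slider i, 0-1023
  }
  Serial.println();
  delay(33);                  // ~30 updates/second is smooth enough for VJ control
}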

TouchDesigner Implementation

  1. How to generate visual effects in real time based on the different parameters of the audio?
  2. How to utilize the data coming from the Arduino to affect the visual effects that had already been generated in real time?
  3. How to arrange and switch between different visual effects, and what is the key point of a VJ performance?
  4. How to optimize the program to avoid affecting the real-time output?
  5. How to output the whole project (both the visual and sound parts) to large displays (2 screen monitors, 1 projection) and 2 speakers, so the VJ performer can monitor and change the effects in real time while showing them to the audience?

For the first and second questions, I created 8 effects utilizing different parameters from the audio.

screenshot-2019-11-04-15-43-03

1.1 – Audio analyzing (high, mid, low frequency)

screenshot-2019-11-04-22-26-11

1.2 – Utilize the data from Arduino to manipulate visual effects

screenshot-2019-11-04-22-15-40

screenshot-2019-11-04-22-15-59

screenshot-2019-11-04-22-16-13

screenshot-2019-11-04-15-49-53

1.3 – Opening introduction (particle writing system)

screenshot-2019-11-04-15-47-48

screenshot-2019-11-04-22-05-51

1.4 – 8 different visual effects (Initial state & Altered state, Same below)

screenshot-2019-11-04-15-48-32

screenshot-2019-11-04-22-06-06

screenshot-2019-11-04-15-47-33

screenshot-2019-11-04-22-06-25

screenshot-2019-11-04-15-47-14

screenshot-2019-11-04-22-06-43

screenshot-2019-11-04-15-48-47

screenshot-2019-11-04-22-06-59

screenshot-2019-11-04-15-49-15

screenshot-2019-11-04-22-07-22

screenshot-2019-11-04-15-48-18

screenshot-2019-11-04-22-07-38

For the third question, I used the data from a slider to add an additional black layer for switching and transitioning to the next effect. I also found that many VJs add a strobe effect when the music reaches its climax, so there was another additional layer, driven by the low frequencies of the audio, that increased the strobing of the video and enhanced the live atmosphere.

screenshot-2019-11-04-15-39-30

1.5 – The interface for switching the 8 effects

screenshot-2019-11-04-15-41-13

1.6 – Different functions in layers

For the fourth question: in the beginning I could, functionally, use one slider's data to control all 8 visual effects. However, I realized it was a huge challenge for my laptop to process them all on the GPU & CPU at the same time. So I had to separate the data into 8 sliders, one controlling each effect, and test it carefully. Eventually it worked, and I did optimize my program successfully.

screenshot-2019-11-04-15-37-08

1.7 – The data flow in TouchDesigner

wechatimg2306

1.8 – Arduino interface

For the last question, I explored TD's output functions; they are powerful, allowing creators to build a very convenient interface to supervise the whole process and output it to any screen and speaker.

screenshot-2019-11-04-15-34-21

1.9 – Output to display

Challenge & Learning

  1. The deadline for this project was super short: we had to submit in 9 days, including forming ideas, coding, testing, debugging and the final setup. Moreover, I used a more difficult software, TouchDesigner; although I had experience working with TD before, I was not as familiar with it as with Processing. At the same time, I didn't have much project experience and barely got any help or references from my classmates. It was truly individual work and quite challenging for me, which also excited me.
  2. Setting the goal of a live VJ performance show meant that too few visual effects would not be acceptable, nor powerful enough visually and aurally. I am a perfectionist, so I had to create a show at least 10 minutes long; that was extra pressure for me. To achieve it, I kept testing and creating different visual effects, and eventually chose the 8 strongest of the 26 effects I had created in TD.

screenshot-2019-11-04-16-55-12

2.1 – Reference library I created

  3. Processing data and rendering these effects in real time was a super heavy job for my GPU and CPU; it challenged the performance of my computer. I came close to ruining my GPU and, unfortunately, I still lost my 3.5mm audio output in the end. After coding, I had to work really hard to optimize the program to get a stable 60 FPS output, because my computer was not powerful enough, especially the GPU. Eventually there were still some frame drops during the live performance.
  4. To my satisfaction, this project was successfully made and the response from others was very positive. I achieved the goal I set in the beginning; compared to the initial goal, NOTHING had to be changed. I gained a lot of project experience and techniques in TD, including analyzing audio, visual design, data integration, manipulation and analysis, project optimization, output and management.
  5. The next step is to keep improving and optimizing my program and to create a simpler operation interface that lets users manipulate it easily.

screenshot-2019-11-04-15-20-28

2.2 – VJ setup in TouchDesigner

Code Link:

https://github.com/LLLeeee/Experiment3_C-C_JUN-LI

Conclusion

Today we use technology-based new media art as a way forward; these forms of new media art are often the forerunners of future art. Artists work in the area of “art and technology” to create collaborative artworks. What's more, in an era of such great technological transition, ways of life have changed accordingly, and it has become necessary to grasp the essence of art at a fundamental level. The cross-disciplinary nature of music visualization art is very obvious: it involves music art, visual art, and the artistic integration of the two.

When this project was shown at the exhibition, I was the VJ and presented it myself, able to personally observe the participants' interaction with and experience of it. To my satisfaction, I achieved my goal of becoming a VJ, bringing a show that challenged me incredibly.

wechatimg2315

wechatimg2314

Reference

  1. vjfader (2017). VJ Set – Sikdope live @ Mad House – ShenZhen, China. Available at: https://www.youtube.com/watch?v=uG1GrD7VQOs&t=902s
  2. Transmission Festival (2017). Armin van Buuren & Vini Vici ft. Hilight Tribe – Great Spirit (Live at Transmission Prague 2016). Available at: https://www.youtube.com/watch?v=0ohuSUNHePA
  3. Ragan, M. (2015). THP 494 & 598 | Simple Live Set Up | TouchDesigner. Available at: https://www.youtube.com/watch?v=O-CyWhN4ivI
  4. Ragan, M. (2015). THP 494 & 598 | Large Display Arrangement | TouchDesigner. Available at: https://www.youtube.com/watch?time_continue=1&v=RVqNjJfE9Lg
  5. Ragan, M. (2015). Advanced Instancing | Pixel Mapping Geometry | TouchDesigner. Available at: https://matthewragan.com/2015/08/18/advanced-instancing-pixel-mapping-geometry-touchdesigner/
  6. Ragan, M. (2013). The Feedback TOP | TouchDesigner. Available at: https://matthewragan.com/2013/06/16/the-feedback-top-touchdesigner/

Tête-à-Tête <3

Rittika Basu

Project Description: ‘Tête-à-Tête’ <3 is a private dating platform. The term ‘Tête-à-Tête’ (of French origin) refers to a secret one-to-one conversation between two people. The communication is encrypted through the use of ‘Cupid Cryptic Codes’, which are transmitted by playing the mini piano keys. The objective is to have an ‘ongoing secret chat’ while camouflaged as a piano player. There are ten piano keys in total, which generate the Solfège/Solfa – an educational technique for teaching musical notes and sight-reading skills and for familiarizing beginners with lyrical patterns. For example, ‘A’ is denoted by one red blink and generates the musical note ‘Do’. As on an actual piano, the first eight mini piano keys produce Do, Re, Mi, Fa, So, La, Ti & Do consecutively. The last 2 keys produce ‘beep’ sounds denoting the sets of odd and even numbers. The code syntax incorporates letters, numbers and several punctuation marks, in addition to a few emojis.

Visuals:

Work-In-Progress:

Work-in-Progress

whatsapp-image-2019-10-31-at-6-45-23-pm-2


whatsapp-image-2019-10-31-at-6-45-24-pm-1
Attaching alligator wires to the mini piano keys' DIY buttons
20191105_013239
Wiring the Arduino with the Mini Piano strings

Final Images:

cover-page setting-images 73420315_471938793422400_4582641864728903680_n background-4

Interactive Video:

Question: Guess what Neo is encrypting?

Answer: coffee?

Project Context:

Research: 

I started gathering ideas after observing the Mud Tub, Massage Me and the Giant Joystick. One day, I came across an article on ‘secret dating’ and its reasons. Foremost, it is a common practice among the LGBT community, as non-heterosexual forms of expressing love are taboo in several regions of the world. In many Asian and African countries, disclosure of such relationships might end in tragic incidents where the partners are jailed, penalised, killed by relatives or chemically castrated. One of the reasons could be that unconventional relationships are perceived as an act of humiliation and shame in society. Thus, homosexual lovers are often forced to pursue relationships in secrecy.

Being from India (one of the most populous countries), I have witnessed how dating, kissing or any form of displayed affection is publicly scowled at, whereas molestation, on the other hand, is many times blatantly ignored. Instead of teaching children lessons about communal friendship and healthy relationships, parents wrongly depict romantic intimacy as vile and inappropriate for youngsters. For example, during my 12th grade, my best friend's mother told her that having a boyfriend is indecent and that girls who date boys will always perform poorly in academia. This was because her parents considered ‘teenage love’ a distraction and feared that their daughter might engage in pre-marital coitus.

Developing a new language – Cupid Codes:

Contemplating the notion of secret dating, the idea of a secret system of communication struck me during the next week. I commenced reading about cryptic messaging and cipher networks, which had interested me since childhood. Inspired by ‘Morse Code’ and ‘Tap Code’, I attempted to be an amateur cryptanalyst and created my own set of cryptic codes using blinking coloured lights.

whatsapp-image-2019-11-02-at-6-30-11-pm-1

Ideation – Creating a ‘Cryptic Messaging System’

It was a struggle in the beginning, as I had to work out how to implement this transmission via serial communication within a limited set of 12 digital pins. In the fullness of time, after innumerable trials and errors, I came up with the ‘Cupid Codes’ and devised a systematic table for remembering the new language. These sets of multi-coloured blinks on the Processing screen help to transmit messages between two lovers secretly, while the other people in the environment assume that the participants are simply engaged in playing the mini piano, because the codes are conveyed through the playing (the audio and visual output) of the piano keys. A small sketch of this blink idea appears after the images below.

Tap Code

Creation of a new language – Cupid Code (multi-coloured blinks with different timings and syntaxes)
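To make the blink idea concrete, here is a minimal, hypothetical Processing sketch of one possible mapping (‘A’ = one red blink, ‘B’ = two); the colours, counts and timings are illustrative, not the actual Cupid Codes table:

int blinksLeft = 0;   // how many blinks remain for the current symbol
color blinkColor;

void setup() {
  size(400, 400);
  frameRate(4);       // each frame is one on/off step of a blink
}

void draw() {
  if (blinksLeft > 0 && frameCount % 2 == 0) {
    background(blinkColor);  // "on" step of the blink
    blinksLeft--;
  } else {
    background(0);           // "off" step between blinks
  }
}

void keyPressed() {
  if (key == 'a') { blinkColor = color(255, 0, 0); blinksLeft = 1; } // 'A': one red blink
  if (key == 'b') { blinkColor = color(255, 0, 0); blinksLeft = 2; } // 'B': two red blinks
  // further letters, numbers and emojis would extend this mapping
}

In the real project the trigger comes from the piano keys over serial rather than from the keyboard.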


Rapid Prototyping:

Utilising the knowledge of DIY switches from Kate's class, I first tried making the mini piano keys wrapped in aluminium foil to test their electrical conductivity. However, this prototype's form turned out to be very childish and juvenile, so I reserved it as a kids' version because of its bright colours and playful mechanism.

Prototyping:

After referring to videos of DIY mini pianos on YouTube and instructional images from Pinterest, I began implementing the gathered knowledge to develop my own little piano. Creating the piano keys as DIY switches was intensive and tedious. Afterwards, I soldered the jumper wires and resistors to the copper strips attached to the piano keys. The idea was that when a piano key is pressed, it completes the whole circuit and emits musical audio. There are 10 piano keys, colour-coded in red, orange, yellow, blue & green. The 2 green-coloured keys are for odd and even numbers respectively, while the remaining eight keys represent letters, emojis, phrases and punctuation in different combinations. The 10 keys emit 10 different musical notes, i.e. the Solfège or Solfeggio (a.k.a. Sol-fa, Solfa, Solfeo), which follow as Do, Re, Mi, Fa, So, La, Ti, Do, plus the beep sounds.
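As a rough illustration of the circuit logic described above, here is a minimal Arduino sketch under my assumptions – ten keys on digital pins 2–11 with INPUT_PULLUP, each press sending a key index for Processing to turn into a solfège note and a blink (the actual project code is linked below):

const int FIRST_PIN = 2;
const int NUM_KEYS  = 10;

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_KEYS; i++) {
    pinMode(FIRST_PIN + i, INPUT_PULLUP);  // key press pulls the pin to ground
  }
}

void loop() {
  for (int i = 0; i < NUM_KEYS; i++) {
    if (digitalRead(FIRST_PIN + i) == LOW) {  // pressed key completes the circuit
      Serial.println(i);  // 0-7 = Do..Do, 8-9 = the two beep keys
      delay(200);         // crude debounce: one press, one code
    }
  }
}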

Access from GitHub (Codes + Audio + Image + Typeface) :

The code is simple and derived from the files shared on Kate and Nick's GitHub page, titled ‘Experiment_3_-_Arduino_Processing_ASCII_Digital_values.ino’ (Arduino file) and ‘Experiment_3__Arduino_Processing_ASCII.pde’ (Processing file).

In Arduino: Arduino Code for Experiment 3: Tête-à-Tête <3

In Processing: Processing Code for Experiment 3: Tête-à-Tête <3

Supporting Files: Audio, background image and typeface

Execution: 

I replaced aluminium with copper because not only is Cu a better conductor of electricity, but its strips are also harder than Al foil. A cryptic communication language was developed, consisting of colour blinks that appear on the screen at different timings, to be used and exchanged between the participants or partners for flirting, messaging and calling each other. Every piano key generates a different sound, making the messaging activity seem like the playing of musical notes. Since the instrument functions like a real piano, non-participants around the partners will take them for piano players, while they can happily date in peace and privacy.

Reference:

Research + Coding Tutorials
  1. Kremer, B. (2019). Best Codes. from https://www.instructables.com/id/Best-Codes/
  2. Hartman, K. (2019). Exp3_Lab1_ArduinotoProcessing_ASCII_3DigitalValues/. Lecture, OCAD University. https://github.com/DigitalFuturesOCADU/CC19/tree/master/Experiment3/Exp3_Lab1_ArduinotoProcessing_ASCII_3DigitalValues
  3. curtis’s channel. (2016). processing: playing and using sound files [Video]. Retrieved from https://www.youtube.com/watch?v=DJJCci3kXe0
  4. Engel, M. (2014). Adding and using fonts in processing [Video]. Retrieved from https://www.youtube.com/watch?v=QmRbb-_d_vI
  5. Blum, J. (2011). Tutorial 06 for Arduino: Serial Communication and Processing [Video]. Retrieved from https://www.youtube.com/watch?v=g0pSfyXOXj
  6. Rudder, C. (2014). Seven secrets of dating from the experts at OkCupid. Retrieved, from https://www.theguardian.com/lifeandstyle/2014/sep/28/seven-secrets-of-dating-from-the-experts-at-okcupid
  7. Elford, E. (2018). HuffPost is now a part of Verizon Media. Retrieved from https://www.huffpost.com/entry/mom-secret-lesbian-relationship_n_5aa143e9e4b0d4f5b66e2b35
Audio:
  1. Rodgers & Hammerstein. (1965). “Do-Re-Mi” – THE SOUND OF MUSIC [Video]. Retrieved from https://www.youtube.com/watch?v=drnBMAEA3AM
  2. Jaz_the_MAN_2. (2015). Do, re, mi, fa, so, la, ti, do – DO stretched.wav [MP3 file]. Retrieved from https://freesound.org/people/Jaz_the_MAN_2/sounds/316899/
  3.  Jaz_the_MAN_2. (2015).  Do, re, mi, fa, so, la, ti, do – RE stretched.wav [Online]. Retrieved from https://freesound.org/people/Jaz_the_MAN_2/sounds/316909/
  4.  Jaz_the_MAN_2. (2015).  Do, re, mi, fa, so, la, ti, do – MI.wav [WAV file]. Retrieved from https://freesound.org/people/Jaz_the_MAN_2/sounds/316909/
  5.  Jaz_the_MAN_2. (2015). Do, re, mi, fa, so, la, ti, do. – FA stretched.wav [WAV file]. Retrieved from  https://freesound.org/people/Jaz_the_MAN_2/sounds/316905/
  6. Katy (2007).  Solfege – So.wav [Online]. Retrieved from https://freesound.org/people/digifishmusic/sounds/44935/
  7. Jaz_the_MAN_2. (2015). LA.wav [Online]. Retrieved from  https://freesound.org/people/Jaz_the_MAN_2/sounds/316902/
  8. Katy (2007).  Solfege – Ti.wav [Online]. Retrieved from https://freesound.org/people/digifishmusic/sounds/44936/
  9. austin1234575 (2014).  Beep 1 sec [Online]. Retrieved from https://freesound.org/people/austin1234575/sounds/213795/
  10. cheesepuff (2010).  a soothing music.mp3. [Online] Retrieved from https://freesound.org/people/cheesepuff/sounds/110215/