Project 2: Confession

The Beginning

-What is water?-

My project, and all of the ideas behind it, actually began with this seemingly easy yet difficult question. Water is visible and invisible. Water is probably the most natural and common yet mysterious matter on Earth. Water is tolerant, but can sometimes be astonishingly august. Water is elusive most of the time, yet sometimes it gives out certain patterns. Water is like thinking: seemingly logical, but actually chaos. Water is the most basic and representative substance in the world.

Ironically, I didn’t intend to use water at the very beginning of this project. The idea of using something water-like influenced me so much that I googled nearly every kind of interesting liquid or fluid I could find on the Internet. Some of them were toxic or had access issues.

I narrowed it down to two feasible fluids that could possibly help me achieve what I had in mind. The first, which probably everybody has heard of, is the non-Newtonian fluid. A non-Newtonian fluid is a fluid whose flow properties differ in any way from those of Newtonian fluids. Its most significant feature is that when it is given a certain force at a high frequency, it takes on a more solid-like character, even if just for a few moments.


A lot of experiment videos online show that what this feature of a non-Newtonian fluid creates can be presented as something almost human.

This is the video that actually gave me the idea of visualizing my concentration, but in a more melodic way. Though the non-Newtonian fluid was already fun enough, when Glen showed me another kind of interesting fluid, I was totally drawn into the world of interesting liquids. This fluid is called ferrofluid, a mixture of water and toner, something one could easily find in a printer. This kind of fluid is affected by a magnet and displays flower-like patterns when triggered by one.

Unfortunately, the idea of using multiple strong magnets was not that practical given the time limit.

I stuck to the idea of using the non-Newtonian fluid, a.k.a. corn starch (a mixture of corn starch and water in a proportion of 3:2), almost the whole time, until last Saturday when I finally did a second test on my brand new speaker, only to find that the final effect looked more like a stove ready for some pancake cooking than a conceptual artwork. I had to switch to another liquid that would clarify my concept.

At that point, it was as if someone I didn’t know at all but felt so familiar with told me, in my mind, in a very tender way: Mizu (the Japanese word for water). I would like to call it a sign rather than a symptom of hunger (I hadn’t eaten anything that day T_T). And then suddenly, everything seemed to go back to where it should be. Water, the most divine and evil (seriously) thing in the world, would be the best and only suitable material in this scenario. My mind is a chaos, and so is everyone else’s. We all have good times and bad times. We are not saints; there is negative and filthy stuff stored somewhere in our minds like a seed, waiting to be irrigated and to flower if it gets enough nutrition. You never know what other people are thinking, since in this world all of us are so good at camouflaging. However, what if one’s thinking were totally exposed under the spotlight, viewed by others? Would that make the ultimate confession? It would. So here goes the title, the best delivery I could make to the people who care, who want to care, or who even pretend to care – Confession, an ultimate and true personal reflection of me, myself and I.

The Process


The idea of exposing my mind under the spotlight, a.k.a. visualization, is the main and only topic of my concept. How do I visualize my mind in an authentic way? It should be beautiful, cool and meaningful at the same time. The whole installation should be clean and pure, but what is reflected should give the idea of both chaos and order.

In order to achieve this goal, I gave up on the idea of using anything related to Arduino, since connection and signal transmission would always be an issue bothering me until the end of the day, and all those wires would probably damage the beauty of the whole setup.

Back to the topic of visualization. When people talk about this term, it usually refers to something the audience can see and listen to. But things that cannot be visualized in reality, something more abstract like feelings and emotions, should also be counted here.

The human being is itself a miracle. Thanks to Mother Nature, we possess the faculties of thinking and imagining. When we listen to something, for example a piece of music, we can feel or imagine the feeling of the composer, or the emotion the composer tries so hard to cast on the audience. That is an extension of the definition of visualization, so why shouldn’t it be counted in this scenario? But the question is how. How do I project this kind of visualization onto the audience?

The answer I discovered along the road is the appropriateness of the visual effect and the audio effect. People possess imagination; if the quality of the multimedia content is there, there is no way the delivery will be obliterated. Another factor to note is the environment: the right spatial position maximizes the final delivery. However, the latter is not up to me, since I’m not the deal maker. But the first two parts can be controlled and corrected.


I had intended to use a giant speaker to generate frequency as well as vibration since day one. But how to drive such a huge thing was a very serious question. I tried to solder it onto an AC wire that could connect to the laptop directly, but eventually failed. I then consulted Hector, who told me I probably needed an amplifier to power this giant machine. Another advantage of using an amplifier is that it makes the sound many times louder than using the computer alone.

Apart from using a 12 inch loudspeaker and a 35 watt mono amplifier, which could reach frequencies as low as 40 Hz and remain stable, I used a 12 inch pizza pan to contain the water and protect the speaker from water damage. We know that sound travels fastest in metal, so the frequency would be transmitted well through a metal medium. The advantage of the pizza pan is that it is not that heavy, and its depth is neither so shallow nor so deep as to unbalance the look.

For the visual part, I decided to do some projection mapping in order to create a more personal space. The most commonly used objects in projection mapping performances are cubes. With this idea, I used a 13×13×26 foam cube as my projection target. The advantage of foam is that, unlike a mirror or metal, which gives a strong reflection, foam diffuses the light it reflects, which is more tender and natural. More interestingly, it can also be used as a stand to support the speaker, which makes the whole installation unified.

Since this is a brain confession, something that measures brain signals was necessary. I used the NeuroSky MindWave EEG sensor as the emitter to send out the signal representing the state of my brain. This EEG sensor can measure the alpha, beta, theta, delta and gamma waves in our brain, and it comes with three preset events for customers: meditation, attention and blink strength. Through a library called ThinkGear, this biosensor can connect to Processing directly, without an Arduino. However, I have to keep both Bluetooth and Wi-Fi off in order to hold the connection.


The whole project is divided into three parts: audio synthesis and vibration generation, visual effect creation and multi-signal transmission, and projection mapping.

According to this division, I needed at least three pieces of software running at the same time to complete the whole picture. I chose Max/MSP to do the audio processing. Even though there are Processing libraries for audio synthesis, the job is easier in Max/MSP, and I have more room to play with audio effects that would be ten times more difficult to program in Processing alone; compared with Max/MSP, those audio libraries feel unprofessional. To adjust the projection mapping at a certain scale, I used the best projection mapping software I’ve ever used, MadMapper, to deliver the visual effect.


-The code will be enclosed at the end-

This is the code part. It can be divided into four sections: signal receiving, signal processing, visual effects and signal output. For signal receiving, I used the ThinkGear library to receive the signal sent from the EEG biosensor. To get the signal correctly from the sensor, one has to implement some of the preset event handlers.

As for the signal processing, I tried to log all the figures in order to capture the emotional changes and see if there was a certain pattern. Unfortunately, all I got was pages of random figures that tell nothing. However, they can reflect the complexity of one’s thinking: the more active the state of mind, the more intense and complex the pattern shown on the mapping. There are three classes working together to give a more clarified look.

The Attention and Meditation classes simply classify the figures coming from the biosensor. The Analysis class handles not just the data sent from the Attention and Meditation classes, but also sets the mind state and frequency.
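To make the analysis step concrete, here is a minimal plain-Java sketch of what classifying the sensor figures into a mind state and a frequency could look like. The class name, thresholds and frequency mapping are my own illustration, not the original code; only the 0–100 attention/meditation range (NeuroSky’s eSense scale) and the 40 Hz floor are taken from the writeup.

```java
// Hypothetical sketch of the Analysis step. Thresholds and the
// frequency mapping are illustrative assumptions, not the original code.
public class MindAnalysis {

    // Map the 0-100 attention/meditation readings onto five mind
    // states (1 = calm ... 5 = maximum complexity). High attention
    // combined with low meditation is treated as an "active" mind.
    public static int mindState(int attention, int meditation) {
        int activity = attention - meditation + 100; // ranges 0..200
        if (activity < 80)  return 1;
        if (activity < 100) return 2;
        if (activity < 120) return 3;
        if (activity < 150) return 4;
        return 5;
    }

    // Each state could drive the audio patch at a different frequency;
    // 40 Hz was the lowest the amplifier stayed stable at.
    public static float frequency(int state) {
        return 40 + (state - 1) * 15; // 40, 55, 70, 85, 100 Hz
    }

    public static void main(String[] args) {
        System.out.println(mindState(90, 10)); // very active mind -> 5
        System.out.println(frequency(1));      // 40.0
    }
}
```

The point is only the shape of the logic: raw eSense values come in, a discrete mind state and a drive frequency go out to the visual and audio sides.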

The most important part in Processing was creating a visual effect representative enough to show the pattern of my brain. It should be chaotic and beautiful, but also expressive. This was probably the hardest part. I spent days trying to find the best visual effect. First I used the Ani library for Processing to create a water-like framework, but the projected effect wasn’t the way I had wanted it in my mind.

Abandoning that sketch, I started to focus on coding a particle system; I thought it would be cool to use the signal transmitted from the EEG biosensor to control the position of the particle emitter. Unfortunately, this went down as well when I projected it at the studio the other night. The effect felt fake and affected.

Making the decision to abandon this sketch wasn’t easy, since I had spent days programming it. Afterwards, I was in a state of hunting for new ideas, and that was when glitch art came into my life. It was Stephen who told me about glitch art, which is basically the art of finding beauty in failure and processing errors.

It felt like destiny when I found a tiny but powerful library called glitchP5, which could create awesome effects, exactly what I wanted at that time.

I was so happy, but when I found out that this library could not be used within the Syphon framework, which is the only way to connect with MadMapper, I was depressed for days. However hard it was, I had to figure out a way to reflect my state of mind.

When thinking about water, the most obvious thing is the ripple. As I observed how the complexity of ripples follows the intensity of the rain, the final piece of the puzzle became clear. This is a water park; I should do something water-like. With that idea, I did some research and finally got the result that people saw on presentation day.

Based on the algorithm by Daniel Shiffman, I made five mind states according to the complexity of the brain state. Following the signal transmitted from the sensor, the code classifies the signal by the standard I coded into my program. The more intense the signal, the more complex and active the changing pattern projected on the cube becomes.
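For reference, the core of that ripple algorithm (the classic two-buffer water effect that Daniel Shiffman’s Processing example is based on) reduces to a few lines. This is a plain-Java sketch; using the mind state to pick the damping factor is my own assumption about how the five modes could differ, not the original code.

```java
// Two-buffer ripple simulation: each cell becomes the average of its
// neighbours in the previous frame, minus its own old value, then the
// buffers are swapped. Damping controls how long ripples survive.
public class Ripple {
    int w, h;
    float[][] current, previous;
    float damping;

    Ripple(int w, int h, int mindState) {
        this.w = w; this.h = h;
        current = new float[w][h];
        previous = new float[w][h];
        // Assumption: a more active mind state keeps ripples alive longer.
        damping = 0.80f + 0.03f * mindState; // states 1..5 -> 0.83..0.95
    }

    void drop(int x, int y, float strength) {
        previous[x][y] = strength; // disturb the surface at one point
    }

    void step() {
        for (int x = 1; x < w - 1; x++) {
            for (int y = 1; y < h - 1; y++) {
                current[x][y] = (previous[x-1][y] + previous[x+1][y]
                               + previous[x][y-1] + previous[x][y+1]) / 2
                               - current[x][y];
                current[x][y] *= damping;
            }
        }
        float[][] tmp = previous; previous = current; current = tmp;
    }

    public static void main(String[] args) {
        Ripple r = new Ripple(16, 16, 3);
        r.drop(8, 8, 255);
        r.step();
        System.out.println(r.previous[8][7] > 0); // true: ripple has spread
    }
}
```

In the installation, each cell’s height would then be mapped to a pixel brightness, and a stronger sensor signal would drop more disturbances per frame.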

The final part of the Processing sketch is the signal output. Processing in my project works like a processor: it receives signals from the outside, takes action according to them, and then outputs the outcome elsewhere. In my case, Processing needs to talk to MadMapper, which is easily done through the Syphon library. It also needs to be connected with Max/MSP. I tried OSC, which is like a superpower tool for communication between different pieces of software, but I found the feedback unfriendly and it seemed it would take time to figure out. So I turned to an alternative library, MaxLink, which is super easy to use and highly recommended for users who just want to connect with Max/MSP.
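For the curious, OSC itself is not magic; under the hood an OSC 1.0 message is just a null-padded address string, a type-tag string, and big-endian arguments. Here is a minimal encoder showing what would travel on the wire from Processing to Max/MSP; the address "/mind/state" is a made-up example, not from the project.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// Minimal OSC 1.0 message encoder (illustration only). Real projects
// would use an OSC library; this just shows the wire format.
public class OscEncode {

    // OSC strings are null-terminated and padded to a 4-byte boundary.
    static byte[] pad(byte[] s) {
        int len = ((s.length + 1) + 3) / 4 * 4;
        byte[] out = new byte[len];
        System.arraycopy(s, 0, out, 0, s.length);
        return out;
    }

    // Build a message carrying a single float argument.
    public static byte[] floatMessage(String address, float value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.writeBytes(pad(address.getBytes()));
        out.writeBytes(pad(",f".getBytes())); // type tags: one float
        out.writeBytes(ByteBuffer.allocate(4).putFloat(value).array());
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] msg = floatMessage("/mind/state", 3.0f);
        System.out.println(msg.length); // 12 (address) + 4 (tags) + 4 (float) = 20
    }
}
```

Sending such a packet over UDP to Max’s `udpreceive` port is all OSC communication amounts to, which is why libraries like MaxLink can hide it so completely.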


       "If you want to find the secret of the universe, think in terms of energy,
        frequency and vibration."
                                                                    -Nikola Tesla

Audio is important, not just for creating the atmosphere but, more vitally, for the vibration. Under different frequencies, water exposes different patterns. This turns what seems fluid into a solid concept. The pattern exists within the ecology of nature; its beauty is the best way to present the way we think and imagine.

The visual effect is the first thing that catches people’s eyes. The patterns of water, displayed under a continuous light source and accompanied by melodies, not only demonstrate my concept but also illustrate the melody itself and the beauty of nature.


Using Max/MSP was probably the biggest challenge for me during this project. Admitting that Max/MSP is extraordinary at anything involving audio synthesis and processing, or even video manipulation, my first impression of this particular software was a feeling of being completely out of my depth. Though I was unfamiliar with, and probably still doubtful about, this very new way of programming, Max/MSP did wow me during the development process. Exploring granular synthesis was definitely the highlight of the whole process.

The advantage of Max/MSP is that even if the sample music isn’t perfect, with proper programming it will eventually become something that satisfies you.

As I said at the very beginning, I switched from corn starch liquid to water at the very end of the project, so in the patch you will see a single cycle~ object made only for receiving the low-frequency figures output from Processing, which gives out vibration but no sound. However, this function isn’t used at all except in mode five, where my brain reaches its maximum complexity.
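What cycle~ does in Max, a pure sine oscillator, can be reduced to a few lines of plain Java; this sketch is only to show the idea, not how the patch is built. At 40 Hz the result is felt through the speaker cone as vibration more than heard as a tone.

```java
// Fill a buffer with a pure sine wave at a given frequency,
// the same signal a cycle~ object produces in Max/MSP.
public class SineGen {
    public static float[] sine(float freq, float sampleRate, int n) {
        float[] buf = new float[n];
        for (int i = 0; i < n; i++) {
            buf[i] = (float) Math.sin(2 * Math.PI * freq * i / sampleRate);
        }
        return buf;
    }

    public static void main(String[] args) {
        // One second of 40 Hz at 44.1 kHz: forty full cycles.
        float[] buf = sine(40f, 44100f, 44100);
        int crossings = 0;
        for (int i = 1; i < buf.length; i++) {
            if (buf[i - 1] < 0 && buf[i] >= 0) crossings++;
        }
        System.out.println(crossings); // roughly one upward crossing per cycle
    }
}
```

Changing the frequency figure sent from Processing simply changes `freq` here, which is why one oscillator object was enough for all five modes.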


I got interested in projection mapping way before the beginning of this project (that’s the reason why I chose the topic of “projection adventure” in the salon session). No need for negotiating: my project would definitely include projection mapping. Another reason was that having a single, isolated object to project my visual effect onto, with the installation placed in an empty, quiet space, had always been the ideal picture in my head since day one. In order to get familiar with MadMapper, I did several experiments, some testing the communication with Processing, some just practising the software.

The MadMapper project file has to be changed for each room and position, so obviously I had to make the corresponding adjustments every time. However hard it was, I am still content that the final result was actually impressive. But there are still some issues I didn’t cover. Right now, I have successfully managed to project onto two sides of my foam cube; ideally all four sides should be wrapped in the visual effect, which would make it more delicate. I don’t know if it is possible for me to link two projectors in a loop circuit to fulfill this duty, but I’ll definitely work on it, because this technique will probably be used in my final project.

-The Journal-  

My project started very early, probably three weeks ago, since I am more confident and feel much more prepared when I’ve done enough research. I spent a week getting familiar with Processing; during this time I mainly practised coding and tried to build a more foundational knowledge of this new programming environment as well as the (to me) brand new language, Java. You can see it from my experiments: I’ve listed some of the programming experiments that were actually more like milestones in the journey of exploring. Another thing that made me quite relieved is that I now have time to program in GLSL, something I stopped doing two years ago. During my undergraduate life, coding shaders was my most horrible memory, and it made me pretty terrified. I was under a lot of pressure from deadlines and competitions, and all those negative memories and nerves made me not want to get near this particular area. However, deep down somewhere in my mind I actually really wanted to go further into it, so during the exploring process I picked up what I had left behind. Though I still need to learn a lot, I’m pretty glad that right now I’m more comfortable with this seemingly difficult (it is), but actually very interesting, field.

Along the way, I started to gather all my ideas and tried to complete the whole picture. It wasn’t until the end of week one that I had even an ambiguous idea of what my direction should be. But by that time the technical part was clarified: what functions I needed, what I should put into code, what other software I should use, etc. So during week two, I focused on delivering those functions. It was probably the most difficult week I’ve had so far. If you read my experiments you will see that during this time I rewrote all of my code at least three times, for diverse reasons. But I’m pretty glad that by the end of the second week most of my code was done, so I could concentrate on the physical delivery, making adjustments and testing in order to get my whole installation into perfect shape.

(Installation photos: IMG_0490, IMG_0500, IMG_0506, IMG_0507, IMG_0516, IMG_0512, IMG_0540, IMG_0533)

Week 3 went by so fast; every day was like a battlefield, filled with work. A very serious issue emerged without warning: my EEG biosensor had trouble connecting to my computer. I tried lots of different ways to fix this problem; only after I turned off both my Wi-Fi and Bluetooth did the problem finally ease a little.

The Final Presentation

The Reference


Link to the sketch folder (this sketch combines several programs; some parts of the code are commented out, so people who download it can uncomment them and write some code to give it a try. Please import the glitchP5, Syphon, ThinkGear and MaxLink libraries in order to activate this sketch. You will also need to write some code to activate the shader. Have fun :))

Link to the Max/MSP patch