
Brains Talking


Group members: Siyu Sun, Mehdi Farahani, Anantha Chickanayakanahalli

“Brains left by themselves tend to get noisy, with all our electronic devices to blame. When brains are brought into proximity with other brains, a sense of synchronization becomes possible and there is room for calm. In this experiment we represent this proximity with a physical installation of two objects that stand for the two brains. They are activated by changes in position: when moved, they play back the reverberating noise of everyday sounds. These sounds come to a standstill and fall silent when the brains are brought together (touching each other). This translates the effect people have on each other.”


Project Description

Brains Talking is an interactive artwork built around a study of Bluetooth Low Energy (BLE). In this installation we dig into the idea of consciousness in the human mind. Consciousness covers experience, cognition, feeling, imagination, and volition: we perceive and receive information from the world, and the brain is the crucial tool that makes this possible. Through its networks of neurons we sense and act, we speak, touch, taste, and smell, and those senses let us interact with the world. Unconsciousness, by contrast, is easier to define: no self-thought, no awareness, no capacity to interact, like stones, sculptures, installations, even our technological world. So we took still acrylic material and laser-cut the outline of a human brain to imagine future human brains. When consciousness becomes unconscious, what can we humans do?


  1. We brought in sounds from everyday life. Neurofeedback is a form of alternative treatment in which brain waves are retrained through auditory or visual stimuli. Everything in the physical world can be turned into sound, and those sounds activate our nervous system to produce feedback. So we imagined a virtual environment that future brains could interact with. The audio track is built from recordings of ordinary physical sounds: an elevator moving, water boiling, keys shaking, a hand slapping a handrail, a door closing, and so on. All of these inadvertently form our communication system. In the installation we set a defined distance that activates the sound, so people control the volume by moving the two brains closer together or further apart; the distance between them is never far. As technology develops, the relationship between humans is turning into a silent world: we focus on our smart devices and ignore the people around us. We use this idea to critique that phenomenon.


2. Mimicking the hand’s power
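The distance-controlled sound in point 1 can be sketched as a simple mapping. This is a minimal illustration with assumed threshold values and function names, not the installation's actual code:

```javascript
// Hypothetical sketch of the distance-to-volume behaviour: the closer the
// two "brains" are, the quieter the everyday-noise track, reaching full
// silence when they touch. Threshold values are assumptions.
const MAX_DISTANCE_CM = 100;  // assumed activation range
const TOUCH_THRESHOLD_CM = 2; // treated as "touching"

function distanceToVolume(distanceCm) {
  if (distanceCm <= TOUCH_THRESHOLD_CM) return 0; // silence on contact
  if (distanceCm >= MAX_DISTANCE_CM) return 1;    // full noise when far apart
  // Linear ramp between the two thresholds
  return (distanceCm - TOUCH_THRESHOLD_CM) / (MAX_DISTANCE_CM - TOUCH_THRESHOLD_CM);
}
```

In practice the distance would come from BLE signal strength between the two objects, and the returned value would scale the audio playback volume.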





Experience Video




How It Works Video


Final Images







Development Images

Laser Cutting Sketch









Circuit Diagram





Processing File:

Arduino Code:




1. Jade, Laura. “INTERACTIVE BRAINLIGHT – Brain Sculpture Lights Up Your Thoughts.” YouTube. Accessed December 3, 2021.


2. “BRAINLIGHT.” Laura Jade, August 29, 2018. Accessed December 3, 2021.


3. Jade, Laura. “BRAIN LIGHT PROJECT – RESIDENCY 2015.” July 8, 2019. Accessed December 3, 2021.


4. Kanai, R. (2010, August 31). Are Two Heads Better Than One? It Depends.

Talking With Plants by Siyu Sun



Project Description

Talking with Plants (2021) is an interactive bionic artwork based on the Arduino study. The project simulates the possibility of dialogue between humans and plants, inviting people to empathize with them. Plants play an important role in the ecosystem, accounting for about 80% of all biomass on the planet. [1] Most of their feedback, however, is slow, so people tend to overlook them. This is the condition the American botanists Elisabeth Schussler and James Wandersee named Plant Blindness: “the inability to see or notice the plants in one’s own environment, leading to the inability to recognize the importance of plants in the biosphere and in human affairs.”

I hope to make plants “conscious” through this bionic plant installation. I used fiber optics and a culture vessel as materials to create the bionic environment, then mounted real plants, picked from the forest with their roots intact, together with the fiber optics on three single LEDs to simulate growing plants. The interaction works like this: a microphone/sound sensor picks up the sound signal, the sketch reads the volume level, and different volume ranges light up LEDs of corresponding colors. Research shows that plants have strong sensation: they can distinguish different colors of light, including ultraviolet and red, and change their physiological condition according to the light they receive. [2] Based on this, I chose LEDs as the feedback output, so people can see the plants’ state change when they make sounds.
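The volume-to-LED logic can be sketched as a simple threshold mapping. It is written here in JavaScript purely for illustration (the real sketch runs on an Arduino, where the sensor reading would come from analogRead()), and the thresholds and color choices are assumptions:

```javascript
// Hypothetical mapping from a sound-sensor reading (0..1023, the usual
// 10-bit analogRead range) to an LED color. Threshold values are assumed.
function volumeToLed(level) {
  if (level < 200) return "off";   // ambient quiet: no feedback
  if (level < 500) return "green"; // soft speech
  if (level < 800) return "blue";  // normal speech
  return "red";                    // loud sound
}
```

On the Arduino side, each returned state would translate to driving one of the three single LEDs under the plants.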



Discussion of Calm Technology Principles

In this work I engaged with the first principle of Calm Technology: I. Technology should require the smallest possible amount of attention. I am a firm believer that less is more, so in this installation I used a minimalist technique to make the interaction more direct. Through the creation of the environment and the concept, I built the appearance of the installation the way I wanted, and let the audience perceive, as far as they can, the thoughts I expressed.



Experience Video (1 minute)



How It Works Video



Final Project Images





Development Images











Circuit Diagram






[1] Jose, Sarah B., et al. “Overcoming Plant Blindness in Science, Education, and Society.” Plants, People, Planet, vol. 1, no. 3, 2019, pp. 169–172.

[2] BioScience, vol. 53, no. 10, October 2003, p. 926, [0926:PB]2.0.CO;2.

Experiment 1 – Visualization of Gossip | Siyu Sun

—— Gossip can destroy a person or make them strong.





Visualization of Gossip (2021) is an immersive interactive narrative work between the human and the environment.

The technique used here builds on the ml5 PoseNet case study and sound visualization in p5.js. Audiences don’t need an external input device, such as a mouse for click events. I set up createCapture() to connect the camera, combined it with PoseNet’s tracking system and the “scroll” and “click” experiments, and used the result to drive feedback in the sound, graphics, and state of the screen.
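As a concrete sketch of the tracking step, a small helper can pull a named keypoint out of a PoseNet pose and ignore low-confidence detections. The keypoint shape ({ part, score, position: { x, y } }) follows PoseNet's documented output; the helper name and threshold value are assumptions:

```javascript
// Return the {x, y} position of a named PoseNet keypoint, or null if it
// is missing or its confidence score is below the threshold. Filtering on
// score avoids acting on jittery low-confidence detections.
function getKeypoint(pose, partName, minScore = 0.5) {
  const kp = pose.keypoints.find(k => k.part === partName);
  if (!kp || kp.score < minScore) return null;
  return kp.position; // { x, y }
}
```

In the sketch, each frame's pose from ml5's `pose` event would be passed in, e.g. `getKeypoint(pose, "nose")`.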

Back to the creative part: the work is divided into four narrative threads, and the control and feedback system and the purpose of each theme are different. Combined with the research done in Experiment 1, I hope to establish an immersive audiovisual area that expresses my conceptual understanding of “Rumors/Gossip”. I chose this form of expression because such media can engage multiple senses through rich information and has great potential to influence audiences in perception, cognition, and emotion. The sensory mode, surround effect, and resolution of an immersive experience help the audience feel present in virtual events and associate that presence with consciousness or a substitute reality. [1]





Study One: Movement and Feedback | ml5, createCapture(), preload()

The prototype of Gossip

This is my first study. I use harsh noise imagery to show what external evaluation brings; the cacophony of the music rises and falls. When people hold their head and rock from side to side, the noise image follows their movement, like noise in your head that you can’t get rid of, swirling back and forth.

The main techniques I studied in this part were ml5 PoseNet, setting up createCapture(), and learning how to preload media files. The key decision is that I used a person’s silhouette instead of the real camera feed, because I wanted the whole work to have a sense of unity. To make that work, the media file must be loaded inside preload().

Then I used ml5 PoseNet’s tracking to capture the position of the audience’s nose and used it to control the movement of the background picture, placing the noise image at the position of the eyes and nose so that it follows the audience’s movement and is difficult to shake off, which embodies the concept.
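The “hard to shake off” following effect can be approximated with linear interpolation: each frame, the noise image moves a fraction of the way toward the tracked nose position. p5 provides this as lerp(); the standalone version below (names are mine) makes the idea explicit:

```javascript
// Move `current` a fraction of the way toward `target` each frame.
// Small amounts (e.g. 0.1) give a lagging, trailing motion; 1.0 snaps
// the image directly onto the tracked position.
function follow(current, target, amount = 0.1) {
  return {
    x: current.x + (target.x - current.x) * amount,
    y: current.y + (target.y - current.y) * amount,
  };
}
```

Calling this every frame in draw(), with the nose keypoint as the target, produces the swirling, never-quite-escapable motion described above.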

How to interact

Hold your head and shake left and right.


A problem I met in this process: the visual identification is not particularly accurate, and it would be better to replace the soundwave image with the true vibration frequency, but I don’t yet know how to combine them.


createCapture(): Creates a new HTML5 <video> element that contains the audio/video feed from a webcam. The element is separate from the canvas and is displayed by default; it can be hidden with hide().

preload(): This function is used to handle the asynchronous loading of external files in a blocking way.


p5.js Present:

p5.js Edit:


Study Two: Movement As a Click | mousePressed()

— Don’t be afraid

When gossip invades your brain, the only thing to do is beat it and rebel against the odds. In this part I combined the click behavior of mousePressed() with pose tracking to change the image. I set up two different figures: the variation of the picture is controlled by identifying the viewer’s waist height, and the noisy image disappears when the viewer raises their hand. It stands for defeating the gossip.
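A sketch of the raise-hand trigger, assuming PoseNet's keypoint names (nose, leftWrist, rightWrist). In canvas coordinates a smaller y means higher on screen, so "hand raised" can be read as a wrist above the nose; score filtering is omitted for brevity:

```javascript
// True when either wrist keypoint sits above the nose keypoint.
// Expects a PoseNet-style pose: { keypoints: [{ part, position: {x, y} }] }.
function handRaised(pose) {
  const byPart = Object.fromEntries(pose.keypoints.map(k => [k.part, k.position]));
  const nose = byPart.nose;
  if (!nose) return false;
  return ["leftWrist", "rightWrist"].some(
    part => byPart[part] && byPart[part].y < nose.y
  );
}
```

The sketch would check this each frame and hide the noise image while it returns true.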

How to interact

Move from side to side and raise your hands to see the image disappear.


However, the biggest regret in this part is that I wanted the noise sound to disappear when a hand is raised, but I tried many times without success. I hope to keep studying this question over the next few days.


mousePressed(): This function is called once after every time a mouse button is pressed over the element. Some mobile browsers may also trigger this event on a touch screen if the user performs a quick tap. It can be used to attach element-specific event listeners.



p5.js Present:

p5.js Edit:



Study Three: Sound Visualization | waveform()

— Self-digestion

This is an area full of noise and language. The inspiration for this work comes from the interactive text experience of “IN ORDER TO CONTROL”. In this study I worked on sound visualization: using waveform() to read the amplitude of the sound and draw its waveform. A mic input is also set up, so when the audience speaks, their amplitude is visualized too. I placed many words on the screen, praise, criticism, and insults; people can speak any of them and watch the amplitude change. It was my first time trying this part; I followed the tutorial and experimented with the code somewhat at random, and I got the effect I wanted: the soundwave fuses with the silhouette, which means people can find a balance with gossip.
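The waveform drawing can be factored into a pure function. p5.sound's fft.waveform() returns samples in the range -1 to 1; each sample maps to a point across the canvas, and the sketch connects the points with vertex() calls inside beginShape()/endShape(). The helper name below is mine:

```javascript
// Map an array of waveform samples (-1..1) to (x, y) canvas points:
// x spreads the samples evenly across the width, y centers the wave
// vertically and scales its amplitude to half the canvas height.
function waveformPoints(samples, width, height) {
  return samples.map((s, i) => ({
    x: (i / (samples.length - 1)) * width,
    y: height / 2 + s * (height / 2),
  }));
}
```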

How to interact

In this process you watch the text on the screen and listen to the soundwave noise. You can say anything you want; the microphone records the voice and feeds it back into the waveform.


In the beginning I planned to make these sentences scroll up and down, but I struggled to get them all scrolling together.




p5.js Present:

p5.js Edit:



Study Four: Dynamic Text

— Balance yourself

After experiencing the whole work, I hope the audience learns how to establish their own point of balance within gossip: we live in a space full of external voices, and all kinds of gossip, like knives and stones, press on us and label us. It made me imagine the weight of that pressure; rumors can destroy a person or make them powerful.

How to interact

In this process, you raise your hand and move your body wherever you want. I set up an insult, “You’re a jerk!”, on the left, and a compliment, “You’re so creative!”, on the right. You can find a position that holds them in balance.
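A minimal sketch of the balance idea, assuming the tracked x position and a known canvas width (the function and weight names are mine): standing left emphasizes the insult, standing right the compliment, and the middle holds them equal.

```javascript
// Map a horizontal position to complementary weights for the two
// sentences. The weights always sum to 1, so the pair stays "in balance".
function balanceWeights(noseX, canvasWidth) {
  const t = Math.min(Math.max(noseX / canvasWidth, 0), 1); // clamp to 0..1
  return { insult: 1 - t, praise: t };
}
```

In the sketch the weights could drive each sentence's text size or opacity.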


p5.js Present:

p5.js Edit:




As I mentioned before, sound visualization is my favorite part, and I have been making related works since 2019 using Kinect combined with TouchDesigner.

Siyu Sun: “RUMOR”, a Kinect and TouchDesigner interactive art installation.



[1] Teresa Chambel. 2016. Interactive and Immersive Media Experiences. In Proceedings of the 22nd Brazilian Symposium on Multimedia and the Web (Webmedia ’16). Association for Computing Machinery, New York, NY, USA, 1.



