
Tribe — Xiangren Zheng(Gary), Harish Pillai, Yikai Zhang(Glen)

Tribe is an interactive projection and light installation. The project is divided into two parts. In the first part, the audience enjoys a two-minute projection and light show. In the second part, the show becomes interactive: audiences can control the flashing patterns of the LED lights and the video clips projected on the wall through a wireless joystick.


The concept of “Tribe” started from the idea of creating a new form of public entertainment. We noticed many recreational activities happening in the city of Toronto every day, but most of them are one-way communication, lacking interaction between performers and audiences. If we want to turn Toronto into an amusement park full of game-like encounters, we need to add more interactive elements. In addition, observing the Canadian lifestyle, we found that many Canadians love socializing and nightlife; bars, restaurants, and nightclubs are spaces they really enjoy. So during our discussions we decided to create an interactive projection and light performance. Our hope was to turn the city of Toronto into a large nighttime amusement park where everyone can be part of the show.


Step 1: Symbol

After settling on the form of the show, we started looking for a theme. We all believed that integrating technology and culture to create a new kind of art form would be very interesting. During the discussion, we remembered a book called Zero by the Japanese designer Matsuda Yukimasa. The book explores major sources of symbols — basic shapes, colours, and numbers — and places them in the context of mythologies and religions, the human life cycle, people and culture, and symbol systems. We found these symbols simple, elegant, and full of mystique, so we developed the idea of taking one symbol as our carrier and giving it new meaning by layering content onto it with new media technologies.
Screen Shot 2014-12-10 at 17.10.16Screen Shot 2014-12-10 at 17.10.32

Finally, we decided on a symbol: a hexagon formed from two overlapping equilateral triangles. We chose this shape because it feels stable and mysterious at the same time, and it looks beautiful and elegant.

2014-11-27 22.17.04

Step 2: Fabrication

The piece we made this time is comparatively large: the final structure measures 8 feet by 8 feet. The hexagon is composed of four pieces of plywood, and the two equilateral triangles on top are made of ten 8-foot wood strips. All the cutting was done at the maker lab on the 7th floor — we owe thanks to our instructor Reza, without whose help we could not have finished this part. We then carried all the materials downstairs and completed the assembly in the studio. Finally, we painted the structure white to get a better projection surface.

2014-12-04 16.45.27
2014-12-04 17.00.02
2014-12-05 22.44.15
2014-12-07 03.35.25
2014-12-07 05.08.34
Photo 2014-12-10, 6 48 03 PM

Step 3: Light system

This was our first time building a light system at this scale. For Tribe we used 23 metres of LED strip, cut into 24 segments to match the length of each side of the shape. Each segment was soldered to two 0.75 mm² wires and numbered according to its position on the structure. Finally, each strip was stuck along the visible edges of the frame to outline its geometry.
2014-12-07 01.15.48
2014-12-07 01.15.38
2014-12-06 21.04.43
2014-12-07 03.33.53
2014-12-07 08.24.05
2014-12-08 02.32.38
On the hardware side, we used one 12 V 12 A power supply to power all the LED strips. The power supply is connected to a 27-channel LED dimmer, which takes a DMX512 signal as input and turns it into PWM (pulse-width modulation) output. (DMX512 is a bus protocol used in theatres and live shows to send light-intensity information to the fixtures set up on stage.) To receive the DMX signal, the dimmer has to be connected to another device, a DMX512 interface, which sends or receives DMX512 from a computer — the bridge that lets the computer output light-intensity information to the dimmer. The standard connector for DMX is the 5-pin XLR, but since these are less common than the 3-pin XLR used for audio, we also needed a 5-pin XLR adaptor.
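The addressing logic is simple enough to sketch. A DMX512 universe carries up to 512 one-byte intensity slots, and in our setup each of the 24 strip segments maps to one dimmer channel. The Python below is a hypothetical illustration of that mapping (our actual control ran through MadMapper, not custom code; the helper names are ours):

```python
def make_dmx_frame():
    """A DMX512 universe: up to 512 8-bit intensity slots."""
    return bytearray(512)

def set_segment(frame, segment, intensity):
    """Map segment 1..24 to DMX channels 1..24 (slot index = channel - 1),
    clamping intensity to the 0..255 range a dimmer channel accepts."""
    if not 1 <= segment <= 24:
        raise ValueError("segment out of range")
    frame[segment - 1] = max(0, min(255, intensity))

frame = make_dmx_frame()
set_segment(frame, 1, 255)   # first edge fully on
set_segment(frame, 24, 128)  # last edge at half intensity
```

The dimmer then converts each channel value into a PWM duty cycle for the corresponding strip.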

Photo 2014-12-10, 8 03 17 PM

Photo 2014-12-10, 8 03 01 PM

Photo 2014-12-10, 8 02 51 PM

Photo 2014-12-10, 8 04 48 PM

Photo 2014-12-10, 8 05 00 PM

On the software side, we used MadMapper to read the pixel information from videos or pictures. MadMapper then sends DMX values derived from those pixels to the ODE DMX interface. In addition, MadMapper supports Syphon, which opens up a lot of new possibilities, because Processing has a library that can send its output directly into MadMapper. In this way we can create all kinds of interactive light or projection works by integrating these two powerful tools.

The entire process runs like this: light intensity is determined by the video fed to the software, sent through Art-Net over Ethernet (or USB) to the DMX interface, and then to the LED dimmer. The dimmer picks up the intensity values addressed to its channels and dims the LED strip connected to the corresponding output by rapidly switching the voltage provided by the power supply.
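For readers curious what "Art-Net over Ethernet" actually carries: each ArtDMX packet is a small UDP datagram with a fixed header followed by the channel bytes. The sketch below builds one such packet from the published Art-Net packet layout; it is an illustration of the wire format, not code from our project (MadMapper handled this for us):

```python
import struct

def artdmx_packet(universe, data, sequence=0):
    """Build a minimal ArtDMX packet: 'Art-Net' ID, opcode 0x5000,
    protocol version 14, then sequence/physical/universe/length and
    the raw channel data."""
    if not 2 <= len(data) <= 512:
        raise ValueError("DMX payload must be 2..512 bytes")
    return (b"Art-Net\x00"
            + struct.pack("<H", 0x5000)      # OpDmx, little-endian
            + struct.pack(">H", 14)          # protocol version, big-endian
            + bytes([sequence, 0])           # sequence, physical port
            + struct.pack("<H", universe)    # universe, little-endian
            + struct.pack(">H", len(data))   # data length, big-endian
            + bytes(data))

# 24 channels, one per edge of our structure
pkt = artdmx_packet(0, [255, 128] + [0] * 22)
```

Sending `pkt` via UDP to port 6454 of the interface is all the "Art-Net" step amounts to.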

Step 4: Interaction

In this project, at first we wanted to use the Leap Motion as our interaction tool, but after talking with Kate and Nick we learned that we had to use a radio connection as the main method of interacting with our content. After that talk we developed several ideas, such as a pair of gloves with accelerometers in them, or sending MIDI signals wirelessly to let the visuals react to the sound. However, these interactions were either not natural enough or not interesting enough. Reading the project requirements again and again, we finally focused on the word “game-like.” We thought it would be really fun to use a joystick to control our lights and projections. Based on this idea, we planned to write a Processing sketch that reads the signal sent from the Arduino and switches between videos based on the data. Processing then feeds the video to MadMapper through Syphon, and MadMapper controls both the projections and the lights according to the video it receives. This way, audiences can interact with our project wirelessly, and the interaction feels natural and engaging.
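The switching step itself is tiny. A hypothetical sketch of the logic, assuming the Arduino sends one character per joystick move over the radio link (the clip names and command letters are illustrative, not from our actual sketch):

```python
# Clip list the sketch cycles through; names are placeholders.
CLIPS = ["calm.mov", "pulse.mov", "storm.mov", "finale.mov"]

def next_clip(current, command):
    """'R'/'L' step forward/back through the clip list, wrapping at the
    ends; any other byte leaves the current clip playing."""
    if command == "R":
        return (current + 1) % len(CLIPS)
    if command == "L":
        return (current - 1) % len(CLIPS)
    return current

idx = 0
for cmd in "RRL":          # right, right, left
    idx = next_clip(idx, cmd)
```

Whatever clip `idx` points at is what Processing renders and pushes to MadMapper over Syphon.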

2014-12-02 23.31.25

Step 5: Animation

We started the animation with less than two weeks left, so we knew a detailed plan was required. Knowing what kind of video would eventually be needed, we decided to do the frame animation for the changing LED patterns first. Since we were not using NeoPixels, which can be programmed to light up individually, we had to divide the pattern the audience saw on the final presentation day into multiple segments and animate each segment's transparency — which translates into the voltage applied to each LED strip — frame by frame, following the beats of the music.

Everything depends on the tempo and the beats of the song: the more precisely the animation lands on the beats, the more mesmerizing the final effect. The patterns displayed on the polygonal object also had to be representative of the notes being played. It took us four days to complete the light video.
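Landing keyframes on beats comes down to one conversion: beats per minute to frame indices at the video's frame rate. A small sketch of that arithmetic (the BPM and fps values are examples, not our actual track):

```python
def beat_frames(bpm, fps, duration_s):
    """Frame indices (at `fps`) on which beats land, for hand-keying
    LED transparency to the music."""
    spb = 60.0 / bpm                 # seconds per beat
    frames = []
    t = 0.0
    while t < duration_s:
        frames.append(round(t * fps))
        t += spb
    return frames

# a 120 BPM track at 24 fps puts a beat every 12 frames
beats = beat_frames(120, 24, 2.0)
```

Keying each segment's opacity on those frame numbers is what makes the lights feel locked to the music.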


light video screenshot

2014-11-28 18.41.36
(Since Gary is the animator in our group, here are confessions of an animator)
As for the video we projected on the wall, honestly speaking I didn't really have a concept at all, but that was fine with me, because the final presentation would be more like an installation in a club than a conceptual artwork people would critique. As long as the motion was intense and made sense with the music, the final delivery would be enough to impress the audience.

However, there was a serious problem I overlooked until I was done with the light video. I was too ambitious at the very beginning about making the whole 1'44'' of animation, and didn't really think about how much work that would be, given that the unit of measurement here is not the second but the frame. Right after finishing the light-pattern video, I realized it would be mission impossible to do all the animation by myself before the deadline. Using some free VJ clips together with 2D and 3D work I had done before was the only solution left to me.

Even though this reduced the amount of animation I had to do personally, the job still wasn't easy. The problem was that the shape we had to project onto was irregular, so masking was inevitable, and it had to be done frame by frame against the reference light video. Also, as a perfectionist, making every single frame feel unified in art style while working from materials in different styles was tricky. I had to modify or redo some resources so the result was logically coherent and didn't feel awkward to watch.

On the software side, I mainly used After Effects to create the videos for both the projections and the light control, along with Cinema 4D for some of the 3D animation, whose files I then imported into After Effects for refinement.

During this project I benefited a lot, whether in software techniques, creative construction, or group collaboration. It was a project that combined everything — not just what we learned in C&C, but other creative techniques as well — and pulled it together in a very short time.





The final result:


Screen Shot 2014-12-11 at 03.01.46

Case Study 1:

4D virtual reality theme park
4D virtual reality theme park

This was our earliest case study, from when we were thinking hard about how we could make an interactive amusement park. It got us thinking about building some sort of infrastructure that could support one. Along the way, we began thinking about feasibility in terms of time, cost, and construction.

Our animator was inspired to use Cinema 4D to make 3D projections but found it too big a feat to accomplish in our short time frame. This video was part of the inspiration process that got us thinking about what we could do.

Case Study 2:


In this video, you can see symmetrical projection mapping in which the visual designer uses a scaffold of sorts as a grounding for the projection mapping. The designer uses MadMapper to shape a cube-like projection, which gives the illusion of a 3-D structure once the mapping is in place.

Case Study 3:

Enigmatica Mars

Another good reference on projection mapping that plays with illusions of spatial perception and 3-D space. During our case-study period, we hadn't yet given much thought to how we could use XBee in our projection-mapping project.

Case Study 4:


This video impressed us with its use of symmetry and projections on the interior of a building. At this phase of our case study, we had begun constructing our wooden star, and we felt we might be able to do something similar with our own project on a smaller scale.

Magic Fish Pond



The second project I did is called Magic Fish Pond. A fishpond was the first image that came to my mind when I heard our project should relate to water. I know Tom told me that usually your third idea is the best one you can think of, and I also know the theme of fish is not new or especially interesting, at least conceptually, since most of us immediately think of fish when we think of water. But two reasons made me stick with the idea. The first is my childhood memories. When I was little I lived with my grandpa in Suzhou, a city widely known for its garden architecture. The gardens there are normally built around a fishpond, so watching fish with my grandpa was the happiest time of my childhood. My deepest impression is that fish are beautiful — in their colours, their shapes, and their movement. Even today I still love to observe fish; it always gives me a sense of calm. So I started to think about creating something to represent that feeling.

fish garden1

The other reason is that in East Asia, fish have always been treated as a special symbol: people believe fish bring luck and fortune, which is also why people loved to build fishponds in their yards in the old days. Watching fish represents the pursuit of happiness, a love of beautiful things, and the wish for a better future. Interestingly, though, most of these meanings have no visual manifestation. So, from this point of view, I divided my work into three parts, each showing one function or pleasure of watching fish.

441_501931 Bret-Fish6




After deciding to use fish as my subject, my first task was to define their form. Simply loading a picture of a fish and moving it around in Processing would never convey the soft, sprightly feeling of a fish's body. After thinking it over for a while, the best solution came from a toy I own.

Screen Shot 2014-11-13 at 0.02.17 Screen Shot 2014-11-13 at 0.02.32

This crocodile is made of wood, and it writhes like a real crocodile when you bend it. The magic is that it is built from a chain of semi-independent units, each half-connected to the next, so it feels very lifelike when you bend or wiggle it. This reminded me of an example in the book Learning Processing, where Daniel Shiffman draws a snake out of a series of circles. The same technique can just as well draw a fish.
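The core of that circle-chain technique is a follow-the-leader constraint: the head moves, and each circle behind it is pulled to sit a fixed distance behind the one before. A minimal sketch of the idea in Python (our actual code was a Processing sketch; this just shows the geometry):

```python
import math

def follow(segments, target, seg_len):
    """Head snaps to the target; each following circle is placed exactly
    `seg_len` behind the previous one, along the line toward its old
    position - this is what makes the body writhe naturally."""
    out = [target]
    for (x, y) in segments[1:]:
        px, py = out[-1]
        angle = math.atan2(y - py, x - px)
        out.append((px + math.cos(angle) * seg_len,
                    py + math.sin(angle) * seg_len))
    return out

body = [(0.0, 0.0)] * 5                 # five circles, all at the origin
body = follow(body, (100.0, 0.0), 10.0) # drag the head to x = 100
```

Drawing a shrinking ellipse at each position gives the tapered fish body.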

Screen Shot 2014-11-12 at 23.44.00

After finishing the shape, the second step was to determine the colour of the fish. At first I tried to mimic real fish colours, but the result was not as good as I expected — until one day I found a picture of phytoplankton.

bio-2 bio-4 bio-6

Phytoplankton are marine microbes. Some are bioluminescent and emit a blue glow, and that translucent effect strongly attracted me.

Vaadhoo-Island-Maldives-2 1177458243604_6byjcA4X_l

Since I would be using water as my projection surface, I decided to draw my fish semi-transparent to create a fluorescent effect. At first I used only blue as my main colour, but as the idea developed I ended up using three gradients — six colours in total — to render the fish.



As for the movement, I again have to thank Daniel Shiffman: his book The Nature of Code helped me a great deal, and I learned most of the particle-system and genetic algorithms from it. In my experience, atan2(), noise(), dist(), and the trigonometric functions are some of the most important functions to learn — used properly, they create genuinely organic movement patterns.
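To show how two of those functions combine, here is a hedged sketch of a steering step in the style of the Nature of Code examples: atan2() gives the heading toward a target, and the distance (Processing's dist()) caps the speed so the fish eases in instead of overshooting. This is an illustration of the pattern, not our project's code:

```python
import math

def step_toward(pos, target, max_speed):
    """One movement step: head toward the target (atan2 for the angle),
    slowing as the remaining distance drops below max_speed."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dy)        # equivalent of Processing's dist()
    if d < 1e-9:
        return pos
    speed = min(max_speed, d)     # arrival easing: never overshoot
    a = math.atan2(dy, dx)
    return (pos[0] + math.cos(a) * speed, pos[1] + math.sin(a) * speed)

p = (0.0, 0.0)
for _ in range(3):
    p = step_toward(p, (10.0, 0.0), 4.0)
```

Adding a little noise() to the heading each frame is what turns this mechanical glide into an organic wander.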


I decided to use the Leap Motion as my main sensor as soon as I started the project. The reason is simple: hand gestures are the most natural way to interact with fish — we use our hands to feed them, to play with them, even to catch them. In my project I wanted to achieve most of my goals with the hands alone, without touching anything: switching among three modes, changing the size of the fish, and commanding the shoal's movement. Luckily, Processing has a Leap Motion library called LeapMotionP5, developed by the well-known generative-design studio Onformative. Thanks to their work, the library covers all of the Leap Motion API and is very easy to use. After some consideration, I chose a swipe gesture to switch between the three modes and a circle gesture to control the size of the fish.

Apart from gesture control, I remember that when I was little I liked to paddle the water back and forth while watching fish. Thinking about it now, that was a very instinctive behaviour: people always want to interact with something, and water was the only thing I could touch at the time. So I got the idea of summoning the fish by stirring the water. The problem then was how to detect the water flow. At first I considered a water-flow sensor, but those only work with very rapid flow, so they don't suit this scenario. After some searching online, I found the flex sensor to be my ideal solution: sensitive enough and easy to use. After some experiments, I created my own water-flow sensor.
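One practical detail of a home-made flow sensor is telling a real paddle stroke apart from the sensor's slowly drifting resting value. A common approach, sketched here as an assumption about how such a detector could work (the threshold and readings are made up), is to track a slow baseline and trigger on large deviations from it:

```python
def make_flow_detector(threshold, alpha=0.1):
    """Track a slow exponential-moving-average baseline of the flex
    reading; a stirring hand bends the sensor far from the baseline."""
    state = {"baseline": None}
    def detect(reading):
        if state["baseline"] is None:
            state["baseline"] = reading      # first sample seeds baseline
            return False
        moved = abs(reading - state["baseline"]) > threshold
        state["baseline"] += alpha * (reading - state["baseline"])
        return moved
    return detect

detect = make_flow_detector(threshold=30)
calm = [detect(v) for v in (512, 513, 511)]  # still water: tiny jitter
stir = detect(600)                           # a hand stirs the water
```

The slow baseline also absorbs gradual drift as the sensor soaks or warms up.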


2014-10-22 13.43.24 2014-11-13 02.39.30 2014-11-13 02.39.04

While researching the flex sensor, I came across another sensor, the gyroscope. It is familiar to all of us because every cellphone has one; it measures orientation based on the principles of angular momentum. As soon as I ran across it online, I thought of using it in my project to control the swimming direction of the shoal. But after connecting the sensor to the Arduino, I found the numbers in the raw data unbelievably huge, so I read the datasheet: on page 13 it explains that the raw accelerometer data should be divided by a factor of 16384 and multiplied by g (9.8 m/s²). After this adjustment the data finally looked normal.
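The conversion itself is one line. The factor 16384 is the LSB-per-g sensitivity of the accelerometer at its ±2 g range (per the datasheet mentioned above), so a signed 16-bit raw sample divides down to units of g and scales up to m/s²:

```python
G = 9.8            # m/s^2
LSB_PER_G = 16384  # sensitivity at the +/-2 g range, from the datasheet

def raw_accel_to_ms2(raw):
    """Convert a signed 16-bit raw accelerometer sample to m/s^2."""
    return raw / LSB_PER_G * G

# a sensor lying flat reads roughly +16384 on Z, i.e. about 1 g
z = raw_accel_to_ms2(16384)
```

The same idea applies to the gyro axes, just with a different LSB-per-unit factor.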

2014-10-29 16.32.38 2014-10-29 20.42.18 2014-10-31 13.47.07

Projection and interface:

Before this project I had seen several works using water as an interface. What impressed me most was AquaTop, which did really well in exploring different materials as interfaces and in its study of ubiquitous computing; its use of water especially inspired me, so from the very start I had decided to use water as my project's carrier. I believe water as a projection surface has two main benefits. First, used in the right way, it adds visual depth and texture to your images: in my tests, every image projected on the water gained a halo around it, which made the whole picture more aesthetically pleasing. Second, since the water itself is the display, there is no screen-size limitation, so a larger container yields a better final effect.

2014-11-13 02.36.46 2014-11-13 02.37.01 2014-11-11 19.52.30 2014-11-11 19.51.42

The last puzzle:

After finishing all the parts above, I still felt my project was missing its most important element: emotion. I believe all great artworks have one thing in common — they let the audience become emotionally involved. People love art, music, and movies because they can empathize with them and reflect on them. So I thought again about the behaviour of watching fish. One day, browsing Google's DevArt website, I found a project called Wishing Wall that I found really interesting; its background music in particular touched me profoundly. It is a project about visualizing wishes. Watching it, I suddenly realized that people behave the same way with fish: sometimes they toss a coin into the pond and make a wish at the same moment. So I started thinking about another way to record this beautiful moment — wishing is itself meaningful and full of emotion — and about seizing the opportunity to turn it into something interesting that other people could see and interact with.

Finally I focused on words. By integrating the concepts of fish and wish, I developed the idea of using fish to spell out what the audience wants to say. In the code, I used a Processing library called Geomerative, which splits text into segments; each segment becomes the destination of one fish. Audiences can type whatever they want to wish for, and a corresponding number of fish are summoned to spell it out.
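Once Geomerative has produced the points along the text outline, the remaining problem is deciding which fish swims to which point. A simple greedy nearest-target assignment is one way to do it — sketched below in Python as an illustration, with made-up coordinates; it is not the project's actual Processing code:

```python
import math

def assign_fish(fish_positions, text_points):
    """Greedy nearest-target assignment: each fish claims the closest
    point on the text outline that no other fish has taken yet."""
    remaining = list(text_points)
    targets = []
    for fx, fy in fish_positions:
        best = min(remaining,
                   key=lambda p: math.hypot(p[0] - fx, p[1] - fy))
        remaining.remove(best)
        targets.append(best)
    return targets

fish = [(0, 0), (100, 0)]
points = [(90, 5), (10, 5)]   # two samples along a glyph outline
targets = assign_fish(fish, points)
```

Each fish then steers toward its target every frame, and the shoal gradually resolves into legible text.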



During the final presentation, I could feel that most of the audience liked the wishing part best, which more or less confirmed my original point of view. It also encouraged me to create more engaging and immersive experiences in my third project.




Source Code



Circuit Diagram

Screen Shot 2014-11-14 at 23.36.12



Shiffman, D. (2008). Learning Processing. Amsterdam: Morgan Kaufmann/Elsevier.

Shiffman, D., Fry, S. and Marsh, Z. (n.d.). The nature of code.

Bohnacker, H., Gross, B., Laub, J. and Lazzeroni, C. (2012). Generative design. New York: Princeton Architectural Press.

Colossal (2014). A Maldives Beach Awash in Bioluminescent Phytoplankton Looks Like an Ocean of Stars. [online] [Accessed 15 Nov. 2014].

Yamano, S. (2014). AquaTop – An Interactive Water Surface. [online] [Accessed 15 Nov. 2014].

Onformative (2014). this is onformative, a studio for generative design. [online] [Accessed 15 Nov. 2014].

Geomerative (2014). [online] [Accessed 15 Nov. 2014].

DevArt. Art made with code. (2014). [online] [Accessed 15 Nov. 2014].


No.7 Spaceship – Cursed Monster

Project Description:

No.7 Spaceship – Cursed Monster is a haunted-house-themed narrative interaction project. My piece is one of four works that together form the concept of the cursed spaceship No.7. My original idea was to create a moving, scary alien head. By combining a series of sensors, fabric, and software, I made a monster from outer space that can perceive the movement of the audience, brighten its eyes, and spit out slime at the same time.


Circuit Diagrams:


1 Arduino Uno
2 servo motors
4 super bright LEDs





Process Journal :

Day 1 :

This was the first day after we received the topic for Project 1. The spaceship-haunted-house theme immediately reminded me of the ghost mask Daniah brought for the transmedia project. Our task then was the movie Alien; we thought we could do something with the mask, but unfortunately, for various reasons, it never made it on stage. Now it's its turn!


Day 2:

To be honest, this was my first time hearing the words "haunted house." In China we don't celebrate ghosts, so we had no prior experience making something spooky, though we really wanted to. With Halloween approaching, my friends and I decided to go to Dollarama to find some inspiration.

Dollarama had lots of stuff ready for Halloween: broken legs, bloody organs... My friends were excited about those, but I found a toy frog filled with slimy liquid and something like eggs — not scary, but really disgusting. On the way back to school, I decided to make something simple that would still leave the audience with a deep impression.

My idea was to use a servo to squeeze out the liquid when someone approaches. Because the servo we bought is not powerful enough, I needed two softer tubes. After a difficult search, I finally found two plastic tubes that could perfectly force the liquid out! I then used detergent as the slime and inserted the two tubes into the monster's mouth. The effect is subtle but convincing.



Day 3:

Today I needed to add some LEDs: two for the monster's eyes and two to light up its face. I experimented with normal LEDs, but sadly they were not bright enough, so I bought four 1 W high-power LEDs at Creatron, and the effect was perfect. With everything set up, I needed a way to trigger these parts. I tried both a Ping ultrasonic sensor and a PIR motion sensor, but neither was as good as I expected: the numbers on the console kept jumping around. Take the Ping sensor as an example — I know its working principle is to emit a sonic pulse and time the echo to judge distance, but since I can't see where the sound wave goes, I couldn't tell why the readings jumped. For stability on presentation day, I decided to use a Kinect as my sensor instead: first, it is stable enough; second, I can observe people's motion through the Kinect's camera; and as for distance, I can compute the closest point in code. In this way the Kinect doubles as a distance sensor, and if any errors occurred on presentation day I could fix them immediately. Given the complexity, I decided to do this part tomorrow and spend the rest of the day finishing the assembly.




Day 4:

Today my job was writing the code that lets the Kinect detect the closest point. When a person approaches the Kinect, the closest point in the depth image should, in theory, lie on their body; as long as that point appears in the centre of the frame, my Processing sketch sends a signal to the Arduino to drive the servos and LEDs on the board.
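The closest-point scan is a straightforward pass over the depth image. Our version lived in a Processing sketch; the Python below sketches the same idea over a plain 2-D array, where a value of 0 means "no depth reading" and is skipped, as in Kinect depth maps:

```python
def closest_point(depth, center_band):
    """Return (x, y, depth) of the nearest valid pixel, plus whether its
    x falls inside the horizontal centre band that triggers the monster."""
    best = None
    for y, row in enumerate(depth):
        for x, d in enumerate(row):
            if d > 0 and (best is None or d < best[2]):
                best = (x, y, d)
    lo, hi = center_band
    triggered = best is not None and lo <= best[0] <= hi
    return best, triggered

# toy 3x2 depth map: the approaching person is the 700 mm pixel at x=1
depth = [
    [0, 900, 850],
    [0, 700, 950],
]
best, fire = closest_point(depth, center_band=(1, 1))
```

When `fire` is true, the sketch signals the Arduino to light the eyes and run the slime servos.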



This was my first time transmitting data between Processing and Arduino. Honestly, it was a lot easier than I thought. Processing's serial library can read string or char data from the Arduino, but it can't read integer or float data directly; I suspect a parsing step is needed. I spent a lot of time trying to read integer and floating-point data from the Arduino — that part was harder than I anticipated — but after a day's effort I succeeded, although it turned out to have nothing to do with my project...
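The usual workaround for the integer problem is to send each number as ASCII text ending in a newline, buffer the incoming bytes, and parse only completed lines. A sketch of that approach in Python (the same buffering logic works with Processing's `bufferUntil('\n')`):

```python
def parse_serial(buffer, chunk):
    """Append raw serial bytes to the buffer; return (new_buffer, ints)
    for every complete newline-terminated number received so far."""
    buffer += chunk
    *lines, buffer = buffer.split(b"\n")   # last piece may be incomplete
    values = [int(line) for line in lines if line.strip()]
    return buffer, values

buf = b""
buf, vals = parse_serial(buf, b"104\n-23\n5")
# 104 and -23 are complete; "5" waits in the buffer for its newline
```

Floats work the same way with `float(line)` instead of `int(line)`.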




Day 5:

My first project is almost there. One thing I still need to add is sound. I don't plan to buy an MP3 shield, as it's expensive and its library is rather crude. My plan is to keep using Processing, which has better sound libraries and lets me control the pitch and volume of the sound. Besides, I won't have to buy another speaker either.

I used the Minim sound library to control playback, and the final result was pretty good. Then I connected all the wires, filled the two tubes with detergent, set the Kinect in the correct position, and clicked the play button. All of my friends were disgusted by my work... hmm, not bad 🙂



This was my first time completing an Arduino project. I really enjoyed the whole process, and I especially discovered so many interesting sensors I had never even imagined before. I believe this opens up chances to create something crazier in the future.


