
“LightGarden” by Chris, Hart, Phuong & Tarik

Image1

A serene environment in which OCAD students are invited to unwind and get inspired by creating generative artwork with meditative motion control.

The final Creation & Computation project, a group project, required transforming a space into an “amusement park” using hardware, software, and XBee radio transceivers.

We immediately knew that we wanted to create an immersive visual and experiential project — an interactive space which would benefit OCAD students, inspire them, and help them to unwind and meditate.


The Experience

experience1

Image credit: Yushan Ji (Cynthia)

We chose to present our project at the Graduate Gallery because the space has high ceilings, large unobscured walls, and parquet flooring. We wanted participants to feel immersed in the experience from the moment they entered our space. A branded plinth with a bowl of pulsing, glowing rock pebbles greets participants and invites them to pick up one of the two “seed” controllers resting on the pebbles. These wireless, charcoal-coloured controllers, or “seeds”, have glowing LEDs inside them, and the colours of these lights attract the attention of participants from across the space.

Two short-throw projectors display the experience seamlessly across two adjoining walls. By moving a seed around, the participant manipulates a brush cursor and draws on the display. The resulting drawing is generated by algorithms that create symmetric, mandala-like images. To enhance the kaleidoscope-like visual style, the projection is split between the two walls with the point of symmetry centred on the corner, creating an illusion of three-dimensional depth and heightening the immersion.

By tilting the seed controller up, down, left, and right, the participant shifts the position of their brush cursor. Holding the right button draws patterns based on the active brush. Clicking the left button changes the selected brush. Each brush is linked to a predetermined colour, and the colour of the active brush is indicated both by the LED inside the seed and by the on-screen cursor. Holding down both buttons for three seconds resets that participant’s drawing without affecting the other user.

To complement and enhance the meditative drawing experience, ambient music plays throughout, and wind chime sounds are generated as the player uses the brushes. Bean bags were also available in the space so participants could experience LightGarden standing or sitting.

The visual style of the projections was inspired by:

  • Mandalas
  • Kaleidoscopes
  • Zen Gardens
  • Fractal patterns
  • Light Painting
  • Natural symmetries (trees, flowers, butterflies, jellyfish)

experience2

Image credit: Yushan Ji (Cynthia)


Interactivity Relationships

LightGarden is an interactive piece that incorporates various relationships between:

  • Person to Object: expressed in the interaction between the player and the “seed” controller.
  • Object to Person: visual feedback (the on-screen cursor responds predictably whenever the player tilts the controller or clicks a button, changing its location or appearance) and auditory feedback (the wind chime sound fades in when the draw button is held) let users know that they are in control of the drawing.
  • Object to Object: our initial plan was to use the Received Signal Strength Indicator (RSSI) to illustrate the relationship between the controller and the anchor (e.g. the shorter the distance between the anchor and the “seed” controller, the faster the pulsing light on the anchor).
  • Person to Person: since there are two “seed” controllers, two players can use them to collaboratively produce a generative artwork, each with a different brush and colour.

The Setup

  • 2 short-throw projectors
  • 2 controllers, each with an Arduino Fio, an XBee, an accelerometer, two momentary switches, an RGB LED, and a lithium polymer battery
  • 1 anchor point with an Arduino Uno, a central receiver XBee, and an RGB LED
  • An Arduino Uno with a NeoPixel strip

System Diagram

SystemDia

The Software

Processing was used to receive user input and generate the brush effects. Two kinds of symmetry were modeled in the program: bilateral symmetry across the x-axis, and radial symmetry ranging from two to nine points. In addition to using different colors and drawing methods, each brush uses different kinds of symmetry to ensure that each one feels significantly different.

Each controller is assigned two brushes, which it can switch between with the toggle button. A base class was written for the brushes; it kept track of its own drawing and overlay layers and handled all of the necessary symmetry generation. Each implemented brush then extended that class and overrode the default drawing method. There was also an extension of the default brush class that allowed for smoothing, which was used by the sand brush.
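As a rough illustration of that structure, a base brush in Processing might own its layers and symmetry settings and leave the actual mark-making to subclasses. The class and method names below are invented for this sketch, not the identifiers used in our repository:

// Hypothetical sketch of the base-class/override pattern described above.
// Brushes must be constructed after size() so createGraphics() has dimensions.
abstract class Brush {
  PGraphics drawingLayer, overlayLayer;  // per-brush off-screen layers
  int radialPoints;                      // two to nine radial copies
  boolean bilateral;                     // also reflect across the x-axis?

  Brush(int radialPoints, boolean bilateral) {
    this.radialPoints = radialPoints;
    this.bilateral = bilateral;
    drawingLayer = createGraphics(width, height);
    overlayLayer = createGraphics(width, height);
  }

  // Replicate one stroke position through the brush's symmetry and let the
  // subclass decide what a single mark looks like.
  void apply(float x, float y) {
    float cx = width / 2.0, cy = height / 2.0;
    drawingLayer.beginDraw();
    for (int i = 0; i < radialPoints; i++) {
      drawingLayer.pushMatrix();
      drawingLayer.translate(cx, cy);
      drawingLayer.rotate(TWO_PI * i / radialPoints);
      drawMark(drawingLayer, x - cx, y - cy);                  // radial copy
      if (bilateral) drawMark(drawingLayer, x - cx, cy - y);   // mirrored copy
      drawingLayer.popMatrix();
    }
    drawingLayer.endDraw();
  }

  // Each concrete brush overrides only this method.
  abstract void drawMark(PGraphics g, float x, float y);
}

class DotBrush extends Brush {
  DotBrush() { super(6, true); }         // sixfold radial + bilateral symmetry
  void drawMark(PGraphics g, float x, float y) {
    g.noStroke();
    g.fill(0, 255, 255, 60);
    g.ellipse(x, y, 6, 6);
  }
}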

One major downside discovered late in development was that the P2D rendering engine won’t actually make use of the graphics card unless drawing is done in the main draw loop of the Processing sketch. Most graphics work in the sketch is first rendered off-screen, then manipulated and combined to create the final layer, so as a result the graphics card was not utilized as effectively as it could have been.
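For reference, the off-screen workflow in question is Processing's standard createGraphics() pattern, sketched below with placeholder sizes. As noted above, because most marks are rendered into buffers like this and only composited in draw(), the P2D renderer accelerated less of the work than we had hoped:

PGraphics layer;

void setup() {
  size(1280, 720, P2D);
  layer = createGraphics(width, height, P2D);  // off-screen drawing buffer
}

void draw() {
  // Render into the off-screen layer first (it accumulates between frames)...
  layer.beginDraw();
  layer.noStroke();
  layer.fill(0, 255, 255, 40);
  layer.ellipse(mouseX, mouseY, 20, 20);
  layer.endDraw();

  // ...then composite the layer(s) onto the screen.
  background(0);
  image(layer, 0, 0);
}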

Here is a listing of the four brushes implemented for the demonstration:

brushes

1. Ripple Brush

This brush uses a cyan color, and fades to white as it approaches the center of the screen. It uses both bilateral symmetry and a sixfold radial symmetry, which makes it well suited for flower-like patterns. It draws a radial burst of dots around each of the twelve cursor positions (six radial points reflected across the x-axis), and continually shifts their position to orient them toward the center of the screen. With smoothing effects applied, this creates multiple overlapping lines which interweave to create complex patterns.

2. Converge Brush

This brush uses a dark indigo color and draws lines which converge toward the center of the drawing. It has bilateral symmetry and eightfold radial symmetry. As the lines approach the edge of the screen, a noise effect is applied to them, creating a textured look. Because all lines converge, the brush creates a feeling of motion and draws the viewer toward the center of the image.

3. Sand Brush

This brush uses a vibrant turquoise color and, like the ripple brush, fades to white as it nears the center of the image. It draws a number of particles around the brush position; the size, number, and spread of these particles increase as the brush approaches the outer edge, creating a scatter effect. This brush uses sevenfold radial symmetry but does not have bilateral symmetry applied, which allows it to draw spiral patterns that the other brushes cannot.
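A rough sketch of that scatter logic, with invented constants and a made-up helper name, shows how the particle count, size, and spread can all grow with the brush's distance from the center of the image:

// Illustrative scatter for a sand-like brush, drawn into the brush's layer g.
void scatter(PGraphics g, float x, float y) {
  float cx = g.width / 2.0, cy = g.height / 2.0;
  float edge = dist(x, y, cx, cy) / dist(0, 0, cx, cy);  // 0 at center, ~1 at a corner
  int count     = int(map(edge, 0, 1, 2, 14));
  float spread  = map(edge, 0, 1, 2, 30);
  float dotSize = map(edge, 0, 1, 1, 5);

  g.noStroke();
  g.fill(lerpColor(color(255), color(64, 224, 208), edge), 90);  // white toward turquoise
  for (int i = 0; i < count; i++) {
    g.ellipse(x + random(-spread, spread), y + random(-spread, spread), dotSize, dotSize);
  }
}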

4. Silk Brush

This brush uses a purple color and has the most complex drawing algorithm of the brushes. It generates nine quadratic curves originating from the position the stroke was started to the current brush position. The effect is like strands of thread pulled up from a canvas. The brush has bilateral symmetry but only threefold radial symmetry so that the pattern is not overwhelming. Because it creates such complex designs, it is well suited for creating subtle backgrounds behind the other brushes.
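The “threads pulled up from a canvas” effect could be approximated along these lines; this is a hedged sketch with invented constants, where (sx, sy) is the position the stroke started at and (x, y) is the current brush position:

// Illustrative silk-like stroke: nine quadratic curves sharing the same start
// and end points, each with a jittered control point so the strands fan out.
void silk(PGraphics g, float sx, float sy, float x, float y) {
  g.noFill();
  g.stroke(128, 0, 160, 60);                       // translucent purple
  for (int i = 0; i < 9; i++) {
    float ctrlX = lerp(sx, x, 0.5) + random(-40, 40);
    float ctrlY = lerp(sy, y, 0.5) + random(-40, 40);
    g.beginShape();
    g.vertex(sx, sy);
    g.quadraticVertex(ctrlX, ctrlY, x, y);
    g.endShape();
  }
}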


The Controllers and Receiver

controller2

Image credit: Yushan Ji (Cynthia) and Tarik El-Khateeb

Seed Controller – Physical Design

When considering the design and functionality of our controllers, we started with a couple of goals determined by very real limitations of our intended hardware, most notably the XBee transceiver and the 3-axis accelerometer. We knew we needed accelerometer data for our visuals, and in order to have reliable, consistent data, the base orientation of the accelerometer needed to be fairly standardized. Furthermore, the XBee transceiver's signal strength drops severely when the line of sight is blocked, either by hands or by other physical objects. Taking this into consideration, we designed a controller that would suggest the correct way of being held. The single affordance we used to do this was an RGB LED that would illuminate and signify what we wanted to be the “front” of the controller.

controller3

Image credit: Tarik El-Khateeb and Phuong Vu.

Initially we had hoped to create a 3D-printed, custom-shaped controller (by modifying an existing 3D model from thingiverse.com), but after some experimentation and prototyping we quickly came to the conclusion that it was not the right solution given the time constraints of the project. In the end, we decided to go with found objects that we could customize to suit our needs. A plastic soap dish became the unlikely candidate, and after some modifications we found it to be perfect for our requirements.

To further suggest controller orientation, we installed two momentary push-buttons that would act as familiar prompts as to how to hold it. This would prevent the user from aiming the controller with just one hand. These buttons also engaged the drawing functions of the software and allowed for customization of the visuals.

The interaction model was as follows:

  1. Right Button Pressed/Held Down – Draw pixels
  2. Left Button Pressed momentarily – Change draw mode
  3. Left and Right Buttons held down simultaneously for 1.5 seconds – clears that user’s canvas.

controller1

Image credit: Tarik El-Khateeb

Seed Controller – Electronics

We decided early on to use the XBee transceivers as our wireless medium to enable cordless control of our graphical system. A natural fit when working with XBees is the Arduino Fio, a lightweight, 3.3 V microcontroller that would fit into our enclosures. Using an Arduino meant we could add an accelerometer, an RGB LED, two buttons, and the XBee without worrying about a shortage of I/O pins, as would be the case when using an XBee alone. By programming the Fio to poll the momentary buttons, we could account for the duration of each button press. This allowed some basic on-device processing of the data before sending instructions over wireless, helping to reduce unnecessary transmission. Certain commands like “clear” and “change mode” were handled by the controllers themselves, significantly increasing the reliability of these functions.

In the initial period of development, we had hoped to use the XBee-Arduino API, as certain features seemed very appealing to us. As the experimenting began, however, it became clear that the API still exposed several low-level details that significantly complicated the learning process and interfered with our development. We made a strategic decision to cut our losses with the API and instead use the more straightforward, yet significantly less reliable, method of broadcasting serial data directly and parsing it on the other end in Processing, after the wireless receiver relays it. Here is an example of the data transmitted by each controller:

<0,1,1,-1.24,0.56>

<controllerID,colorModeValue,isDrawingValue,X-axis,Y-axis>
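On the Processing side, a packet in this format can be parsed with the serial library roughly as follows; the port index, baud rate, and variable names are placeholders rather than our actual configuration:

import processing.serial.*;

Serial receiverPort;

void setup() {
  size(400, 200);
  receiverPort = new Serial(this, Serial.list()[0], 9600);  // match the receiver's port and baud rate
  receiverPort.bufferUntil('>');                            // packets end with '>'
}

void serialEvent(Serial p) {
  String packet = p.readStringUntil('>');
  if (packet == null) return;
  int start = packet.indexOf('<');
  if (start < 0) return;
  // Strip the angle brackets and split the comma-separated fields.
  String[] fields = split(packet.substring(start + 1, packet.length() - 1), ',');
  if (fields.length != 5) return;                           // ignore malformed packets

  int controllerID  = int(trim(fields[0]));
  int colorMode     = int(trim(fields[1]));
  boolean isDrawing = int(trim(fields[2])) == 1;
  float tiltX       = float(trim(fields[3]));
  float tiltY       = float(trim(fields[4]));
  // ...update the matching player's cursor and brush here...
}

void draw() { }  // keeps the sketch alive so serialEvent() keeps firing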

 

LightGardenControllerSchematic LightGardenControllerBreadBoard

Circuit diagrams for the Seed Controllers.

Wireless Receiver

In order to receive the wireless commands from both of our controllers, we created an illuminated receiver unit. The unit is composed of an Arduino Uno, an RGB LED, and an XBee; it acts as a simple relay, forwarding the serial data received via the XBee to the computer's USB port for the Processing sketch to parse. We used the SoftwareSerial library to emulate a second serial port on the Uno so we could transmit the data as fast as it was being received. In terms of design, instead of hiding the device we decided to feature it prominently in the user's view: a pulsing white LED indicates that it serves a functional purpose, and our hope was that it would remind users that wireless transmission is occurring, something we take for granted nowadays.

LightGarden_Reciever_Schematic LightGarden_Reciever_BreadBoard

Circuit diagrams for the Wireless Receiver.


Design

Branding strategy:

The LightGarden logo is a mix of two fonts from the same typeface family: SangBleu serif and sans serif. The intentional mix of serif and sans serif fonts is a reference to the mix and variety of effects, colours and brushes that are featured in the projections.

The icon consists of outlines of four seeds in motion symbolizing the four cardinal directions, four members of the group as well as the four main colours used in the visual projection.

logo

Image credit: Tarik El-Khateeb

Colour strategy:

Purple, Turquoise, Cyan and Indigo are the colours chosen for the brushes. The rationale behind using cold colours instead of warm colours is that the cold hues have a calming effect as they are visual triggers associated with water and the sky.

Purple reflects imagination, Turquoise is related to health and well-being, Cyan represents peace and tranquility and Indigo stimulates productivity.


Sound

Sound plays a major role in our project. It is an indispensable element; without it, the experience cannot be whole. Because the main theme of our project is to create a meditative environment, it was important to choose sound that was itself meditative: enhancing the visual experience rather than distracting the players from it. We needed a sound that was organic, could be looped, and yet would not become boring to participants over time.

In order to fulfill all of the aforementioned requirements, we decided to go with ambient music, an atmospheric, mood-inducing musical genre. The song “Hibernation” by Sync24 (Sync24, 2005) was selected as the background music. Using Adobe Audition (Adobe, 2014), we cut out the intro and outro of the song and beatmatched the ending and the beginning of the edited track so that it could be looped seamlessly.

sound

Image credit: Screen captures from Adobe Audition

Sound was also used to give auditory feedback to the user of the “seed” controller: whenever a player clicks the draw button, a sound plays to confirm that the action of drawing is being carried out. For this purpose we employed the sound of wind chimes, known for their atmospheric quality, as used in Ambient Mixer (Ambient Mixer, 2014). In our application, the ambient song plays repeatedly in the background, while the wind chime sound fades in and out each time the player presses and releases the draw button, allowing the chimes to fuse organically into the ambient music. To do this, we used Beads, a Processing library for handling real-time audio (Beads project, 2014). Beads provides features for playing audio files and for generating timed transitions of an audio signal, i.e. sequences of changes in its amplitude: when the draw button is pressed the amplitude of the wind chime signal increases, and when the button is released the amplitude decreases.
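A minimal version of that wiring with Beads might look like the sketch below. The sample file name and fade time are placeholders, and the actual project may have used Beads envelopes rather than a Glide, but the idea is the same: a Gain whose level eases up when the draw button goes down and back to zero when it is released (the keyboard stands in for the seed's draw button here):

import beads.*;

AudioContext ac;
Glide chimeLevel;   // glides the wind-chime volume up and down

void setup() {
  ac = new AudioContext();

  // Placeholder file name: any looping wind-chime sample in the data folder would do.
  SamplePlayer chimes = new SamplePlayer(ac, SampleManager.sample(dataPath("windchimes.wav")));
  chimes.setLoopType(SamplePlayer.LoopType.LOOP_FORWARDS);
  chimes.setKillOnEnd(false);

  chimeLevel = new Glide(ac, 0.0, 800);          // start silent, roughly 800 ms fades
  Gain chimeGain = new Gain(ac, 1, chimeLevel);
  chimeGain.addInput(chimes);
  ac.out.addInput(chimeGain);
  ac.start();
}

void draw() { }

void keyPressed()  { chimeLevel.setValue(0.8); }  // draw button down: fade chimes in
void keyReleased() { chimeLevel.setValue(0.0); }  // draw button up: fade chimes out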


Code

https://github.com/phuongvu/LightGarden


Case Studies

One: Pirates of the Caribbean: Battle for Buccaneer Gold

Pirates of the Caribbean: Battle for Buccaneer Gold is a virtual reality ride and interactive game at DisneyQuest, an “indoor interactive theme park” located in Downtown Disney at the Walt Disney World Resort in Florida (Wikipedia, 2014).

The attraction is 5 minutes long and follows a linear storyline in which Jolly Roger the Ghost Pirate appears on screen and tells the participants that their pirate ship is seeking treasure and that they can win this treasure by sinking other ships and taking their loot. The ship sails through different islands and comes across many ships to battle. 4:30 minutes into the ride, the pirate ghost re-appears and informs the players that they have to battle him and his army of skeletons in order to be able to keep any treasure they won by battling the ships. Once all the ghosts and skeletons have been defeated the final score appears on the screen.

The attraction can be experienced by up to five participants. One individual steers the pirate ship using a realistic helm, navigating a detail-rich, computer-generated 3D ocean with islands and ships. Up to four players control cannons to destroy other ships; the cannons use wireless technology to “shoot” virtual cannonballs on the screen.

The attraction uses wrap-around 3D screens, 3D surround sound, and a motion-platform ship that fully engages the participants and makes them feel like real pirates on a real ship (Shochet, 2001).

https://www.youtube.com/watch?v=5cgtnXMbJcw

 

Two: Universal Studios Transformers: The Ride 3D

Transformers: The Ride 3D (Universal Studios, 2011) is an indoor 3D amusement ride located at Universal Studios Hollywood, Universal Studios Florida, and Universal Studios Singapore. The ride is an exemplary case study of how a thrill ride, when combined with visual, auditory, and physical simulation technologies, can create an experience so immersive that it blurs the line between fiction and reality.

The setup of this attraction consists of a vehicle mounted on a motion platform that runs along a 610-metre track. Each vehicle can carry up to 12 riders, who throughout the ride are exposed to different kinds of effects, including motion, wind (hot air and air blasts), water spray, fog, vibration, and 18-metre-high 3D projections showing various Transformers characters (Wikipedia, 2014). Along the ride, participants have the chance to “fight along side with Optimus and protect the AllSpark from Decepticons over four stories tall” (Universal Studios, 2011).

 

Three: Nintendo Amiibo

The Nintendo Amiibo platform is a combination of gaming consoles and physical artifacts: figurines of well-known Nintendo characters (Wikipedia, 2014). The platform is one of many following the same trend, that is, small NFC-equipped (near-field communication) devices that, when paired with a console, add features to that console or its games. NFC is a technology built on RFID (Radio Frequency Identification), and most smartphones are now equipped with it (AmiiboToys, 2014).

The Amiibos have a small amount of memory (only 1–4 KB) and allow certain games to store data on the figurine itself (AmiiboToys, 2014). One example is the newly released Super Smash Bros. game for the Wii U: the figurines “contain” non-playable characters (NPCs) that resemble the figurine, and these characters improve their abilities based on your own playing habits, apparently becoming quite hard to beat (IGN, 2014).

The interesting aspect of the Amiibo line, and others like it, is the interaction between the digital representation of the character and the physical figurine itself. By using NFC, the experience seems almost magical, something that a physical connection would most likely ruin. There is a relationship between the player and the object, but also between the player and the on-screen character, especially when that character is aggravating the player because its skills are improving. The transparency of the technology helps dissolve the boundaries between the physical object and the fully animated character.

 

Four: Disney MagicBand

The fourth case study does not focus on an attraction in an amusement park but rather on a new one-billion-dollar wearable technology that has been introduced in the Walt Disney parks: the MagicBand (Wikipedia, 2014).

The MagicBand is a waterproof plastic wristband that contains a short-range RFID chip as well as Bluetooth technology. The bands come in adult and child sizes and store information; the wearer can use them as a hotel room key, park ticket, FastPass ticket, and PhotoPass, as well as a payment method for food, beverages, and merchandise (Ada, 2014).

The MagicBand also contains a 2.4 GHz transmitter for longer-range wireless communication, which lets the parks track the band’s location and link on-ride photos and videos to the guest’s PhotoPass account.

Thomas Staggs, Chairman of Walt Disney Theme Parks and Resorts, says that the band in the future might enable characters inside the park to address kids by their name. “The more that their visit can seem personalized, the better. If, by virtue of the MagicBand, the princess knows the kid’s name is Suzy… the experience becomes more personalized,” says Staggs. (Panzarino. 2013)

 


References & Project Context

3D printing:

http://www.thingiverse.com/thing:376028

 

Sounds:

Adobe. 2014. Adobe Audition. Retrieved from https://creative.adobe.com/products/audition

Ambient Mixer. (2014). Wind chimes wide stereo. Retrieved from http://ambient-music.ambient-mixer.com/ambient-sleeper

Beads project. 2014. Beads library. Retrieved from http://www.beadsproject.net/

Sync24. “Hibernation.” Chillogram. Last.fm, 22 December 2005. Web. 01 Dec. 2014. Retrieved from http://www.last.fm/music/Sync24/_/Hibernation

 

Case Study 1:

Disney Quest – Explore Zone. Retrieved from http://www.wdwinfo.com/downtown/disneyquest/dquestexplore.htm

Shochet, J. and Banker, T. 2001. GDC 2001: Interactive Theme Park Rides. Retrieved from http://www.gamasutra.com/view/feature/131469/gdc_2001_interactive_theme_park_.php
Wikipedia. 2014. Disney Quest. Retrieved from http://en.wikipedia.org/wiki/DisneyQuest

 

Case Study 2:

Inside the Magic. 2012. Transformers: The Ride 3D ride & queue experience at Universal Studios Hollywood. Retrieved from https://www.youtube.com/watch?v=ARJHpBgu1vM

Universal Studios. 2011. Transformers: The Ride 3D. Retrieved from http://www.universalstudioshollywood.com/attractions/transformers-the-ride-3d/.
Wikipedia. 2014. Transformers: The Ride. Retrieved from https://en.wikipedia.org/wiki/Transformers:_The_Ride

 

Case Study 3:

AmiiboToys, (2014) Inside Amiibo: A technical look at Nintendo’s new figures. Retrieved from http://www.amiibotoys.com/2014/10/17/inside-amiibo-technical-look-nintendos-new-figures/

IGN, (2014). E3 2014: Nintendo’s Amiibo Toy Project Revealed – IGN. Retrieved from http://ca.ign.com/articles/2014/06/10/e3-2014-nintendos-amiibo-toy-project-revealed?abthid=53972d736a447c7843000006 [Accessed 10 Dec. 2014].

Wikipedia. (2014). Amiibo. Retrieved from http://en.wikipedia.org/wiki/Amiibo

 

Case Study 4:

Ada. 2014. Making the Band – MagicBand Teardown and More. Retrieved from http://atdisneyagain.com/2014/01/27/making-the-band-magicband-teardown-and-more/

Panzarino, M. 2013. Disney gets into wearable tech with the MagicBand. Retrieved from http://thenextweb.com/insider/2013/05/29/disney-goes-into-wearable-tech-with-the-magic-band/

Wikipedia. 2014. MyMagic+. Retrieved from http://en.wikipedia.org/wiki/MyMagic%2B#MagicBand

 

Project Inspirations / Context:

http://www.nearfield.org/2011/02/wifi-light-painting

http://artacademy.nintendo.com/sketchpad/

http://www.procreate.si/

http://digital-photography-school.com/light-painting-part-one-the-photography/

http://www.wired.com/2011/01/bioluminescent-sea-creatures/

“Create your own Nautical Adventure” by Tarik El-Khateeb

Cover

Creation and Computation
Project 2: Water Park.

Storytelling, Interactivity and Customizability are three core elements that I am fascinated by and interested in exploring within my Master’s degree program, and this project was a good exercise for me to start experimenting with all three.

 

Background

The theme of the Creation & Computation course for this year (2014) is “Amusement” and Project 2 is a Water Park installation/interaction that utilizes water as an interface, sensor and/or projection.

I was instantly drawn to the idea of using the water as an interactive element and an interface. Instead of creating a conceptual installation piece I wanted to create a fun and interactive one that might be seen someday at an amusement park.

But what would this fun interaction be? Who would interact with it? And, what is the expected outcome of this interaction?

While coming up with a theme for Project 1, the class collectively chose “Pirates & Mermaids” as the theme for Project 2, and even though adhering to the theme was later made optional, I decided to use it as it fit the target audience I had in mind: children. I wanted to create a water-based project in which children could customize and create their own adventures – maybe even star in them.

 


The Project

The project is an interactive storytelling piece aimed at children. Its purpose is to give children full control over the narrative of a story by allowing them to choose visual elements and characters, move them around in a body of water, and watch them come alive on a screen or projection.

 

 


The Set-up

Image1

The presented project consisted of:

– One medium-size water tank (16”x10”x6”), filled halfway.
– Six coloured plastic balls, each representing an element or character.
– A wooden paddle, to move the assets around in the water.
– A webcam, aimed at the tank at water level.
– A projector, to screen the adventure.
– A set of illustrated backgrounds, assets and characters.

The premise of the interaction is quite simple: a child drops a coloured ball into the water tank and the ball comes alive on screen as an illustrated asset. The child can use the provided paddle to stir the water and move the ball(s) around and watch the assets on screen follow the same motion, direction and speed.

Through Processing, the webcam feed is analyzed to recognize the colour of each ball and call up the asset associated with that particular colour. Motion tracking is used to follow the location of the ball within the frame of the water tank and apply that motion to the asset.
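The colour recognition follows the approach of Daniel Shiffman's simple colour tracking example, credited below: scan the webcam frame for the pixel closest to a calibrated target colour and use its position to drive the asset. A stripped-down sketch of the idea, with a placeholder target colour and threshold:

import processing.video.*;

Capture cam;
color target;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  target = color(220, 60, 60);   // placeholder: re-calibrated per ball and lighting
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);

  // Find the pixel whose colour is closest to the target colour.
  float closest = Float.MAX_VALUE;
  int bestX = 0, bestY = 0;
  cam.loadPixels();
  for (int y = 0; y < cam.height; y++) {
    for (int x = 0; x < cam.width; x++) {
      color c = cam.pixels[y * cam.width + x];
      float d = dist(red(c), green(c), blue(c), red(target), green(target), blue(target));
      if (d < closest) { closest = d; bestX = x; bestY = y; }
    }
  }

  // Only treat it as a detection if the match is reasonably close.
  if (closest < 40) {
    noFill();
    stroke(255, 255, 0);
    ellipse(bestX, bestY, 20, 20);  // this position drives the on-screen asset
  }
}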

There are three different types of assets, and each one interacts with the scene in its own way:

– “Bobbing” assets: these are the blue, green, pink and red balls that are associated with assets that float on the water surface (ship, island, mermaid and pirate, respectively). The assets bob amongst the waves and move along the x-axis. The character assets load between the first and second waves and the scene assets (ship and island) load between the second and third waves.

– “Floating” asset: the brown ball (the clouds) loads in the sky rather than amongst the waves, and Processing only tracks its x-axis location without any bobbing effect, so the clouds float smoothly across the sky from one side to the other within fixed parameters.

– “Free-roaming” asset: The yellow ball (the sun) is different from all the other balls as it is attached to a black stick. This asset is a free-roaming asset, meaning that instead of it bobbing on the water or floating, the child can move the yellow ball on and over the water and watch the sun move around the sky freely.
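As a rough illustration of the bobbing behaviour described above (the constants are invented, not the calibrated values used in the project), the tracked x position of a ball can be combined with a sine-based vertical offset and a little jitter to get the cut-out, storybook feel:

float bobPhase = 0;

// Draw a floating asset at the tracked x position with a gentle bob and jitter.
void drawBobbingAsset(PImage asset, float trackedX, float baseY) {
  bobPhase += 0.05;                     // speed of the bob
  float bob = sin(bobPhase) * 6;        // amplitude in pixels
  float jitter = random(-0.5, 0.5);     // slight jitter for the paper cut-out feel
  image(asset, trackedX, baseY + bob + jitter);
}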

When the final set-up was tested on location on the day of the presentation, I discovered that the brown ball was not being detected properly by the camera: reflections on the tank and the shadows cast on and behind it confused the colour detection of the brown asset.

I painted the ball orange, but the acrylic paint did not dry in time, so for the presentation (as seen in the attached video) the clouds asset was attached to the sun asset, both recognized as the yellow ball. The clouds still retained their “floating” behaviour while the sun remained “free-roaming”, and the combination of both behaviours triggered by the same ball gave an interesting, more dramatic effect.


The Bigger Picture

How does this project fit within a water park?

I imagined this project to be a green screen pool with a projection / wall-size screen on one end.  A family or small group of children would experience this attraction together. Before they enter the pool area, they create their own adventure by selecting a background and assigning assets to coloured inflatable balls/beds and selecting a musical track.
Once inside the pool area they will see themselves on the screen inside the environment they selected, with their selected assets replacing the coloured balls. The whole adventure could be recorded and shared to the group at the end of the experience.

The idea is to create a Kinect-like play environment, but instead of interacting with invisible objects that appear on the screen, participants interact with physical objects which they can swim alongside, toss, throw, and even ride.

 


Components, Style and Design

Image3

To make the outcome appealing to children, I chose to illustrate the assets in a woodcut style with bold outlines and colours. I also wanted to give the illusion of flat 2D paper cutouts, so all the assets have drop shadows and rough edges, as if they were cut by a child from a magazine or book and layered over each other.

This is also the main reason for not incorporating depth in the camera tracking or processing, as I wanted to maintain a very flat style.

I also created a logo for the adventure using the same wood-cut style.

I built the water tank from custom cut acrylic sheets and bonded them with heavy duty water-proof silicone caulking sealant.

Image2

The Code

https://github.com/TAR1K/Nautical-Adventures


Original Concept and Reflection

The original concept for this project was a lot more elaborate than the final presented version. Within the three-week period, however, I was introduced to Processing for the very first time, started to understand how to decipher and read the code, wrote a few lines of code myself, and tried to comprehend the limitations and strengths of Processing. It was all a great challenge for me, so I chose to reduce the concept to a fully functional yet scaled-down version.

Image5

The initial concept used only three coloured balls, and instead of having set assets assigned to each one, the child would assign an asset of their choice to each ball from a ready-made asset library.

I wanted to create a touch-screen interface in which a child would be able to fully customize the scene by choosing a background, assigning characters or elements to the coloured balls, and selecting a music track to accompany their adventure.

Another feature that I wanted to add to the experience was a “storm mode”, in which the speed of the movement of the water would affect the assets on screen. For example, the scene might show a mermaid and an island bobbing gently on calm waves on a sunny day, but when the water in the tank moved faster and more aggressively, the screen would go darker, lightning storm clouds would appear, the waves would become choppier, and the assets would take on a more distressed, wind-blown appearance. As seen below, some of the assets for the “storm mode” were created but not used in the final project.

Image4

The final feature that I wanted to incorporate was a tilt sensor. I had considered mounting the tank on a see-saw-like contraption attached to a tilt sensor via an Arduino; the tilting would control the movement of the objects in the water and would initiate the previously mentioned “storm mode” when moved vigorously. I decided not to use the Arduino in this project in order to focus more on Processing, and therefore ended up using a wooden paddle to create the motion in the tank.

After simplifying the concept, the ultimate challenge was to have Processing identify the colours and track the movement of the balls seamlessly. A lot of effort was put into giving the assets a jittery bobbing motion to match the 2D storybook cut-out style of the visual assets.

Tweaking the tracking, movement, speed, and pixel locations of the assets was the most time-consuming aspect of the Processing phase. The acrylic tank was very reflective, so special care had to be taken when calibrating the colours between the balls and Processing, as every change of room or lighting setup meant the colours had to be re-calibrated.

My fellow cohort, Hart Reed, is largely responsible for the creation of the colour tracking code, which is based on the code “Simple Color Tracking” by Daniel Shiffman. My role in the coding process was assigning the colours to the assets and calling them on screen, as well as tweaking the RGB codes and assigning the pixel locations for the assets and tweaking the movement, bobbing and speed of the assets.


References & Inspirations

Project Runway. 2014. TV show. Season 13, episode: “The Rainway”. Color dye dress by Sean Kelly.
http://coub.com/view/3ve5v

The Secret of Kells. 2009. Animated movie. Directed by Tomm Moore and Nora Twomey.

Shiffman, Daniel. Learning Processing, Chapter 16. http://www.learningprocessing.com

Woodcut Illustrations by Steven Noble. http://woodcutillustration.com/


Experiments

Experiments 1, 2 & 3

One of the first concepts I wanted to explore was food colouring and water. I was inspired by an episode of Project Runway (linked above in the references), in which the designer hid pockets of powdered dye inside a white dress; when the model walked the ‘rainway’, the pouches got wet, released the dye, and coloured the dress.

I wanted to explore how I could use the effect of adding droplets of dye to water or another liquid and have the colours blend and merge. I was interested in finding ways to take advantage of colour tracking in Processing and seeing what could be created out of the merging of food colouring in liquid form.

I was also interested in seeing what could be done with a light source behind or under the glass vessel of water, and how the light could be manipulated to change, or even enhance, the effects created by the colour droplets. At one point I considered purchasing an iPhone-controlled LED light bulb to see how it could be integrated into the project, but my cohorts advised me against it, as using Processing with Apple devices was very tricky and time consuming and needed a very in-depth knowledge of Processing, which I didn’t possess.

I also explored the idea of freezing ice cubes of food colouring and seeing how they would melt and merge in hot and lukewarm water, to see how decreasing the melting speed would create slower and more interesting patterns of colour. I also tried the same experiments in a milky liquid to see whether the colours would show through an opaque substance.

After experimenting with food colouring, I came to the conclusion that the project I had in mind needed more time, a better understanding of Processing, and maybe another team member on board. I decided not to go ahead with that project, and will not reveal any of its details here in the hope that I will get a chance to work on it for the final project or in another class.

Experiments 4 & 5

When I abandoned the food colouring concept, I decided to go for a less abstract idea and adhere more to the water park amusement theme. I initially thought of using Lego pieces as ‘participants’ in my park ride, and I wanted to make sure that every piece was properly detected and translated on screen. I worked with my fellow cohort Hart Reed to come up with a Processing sketch that would detect the RGB values of any object clicked on with the mouse.

As seen in experiment 4, I wanted to make sure that I had the correct RGB values for each of the Lego pieces that I planned on using, and I wanted to create visual assets in Photoshop to match these shapes. However, it became clear that the RGB values would change depending on the lighting, and by then I had also decided against using Lego objects and opted for painted ping-pong balls.

Experiment 5 uses a modified version of Daniel Shiffman’s colour tracking code. I wanted to make sure that the coloured balls would be perfectly detected and tracked on screen before I created the assets for the Nautical Adventure projection. I needed to make sure that the tracking saw every movement and bobbing on the surface of the water, so the code created a line that would seamlessly follow the object, and through studying the consistency and accuracy of the line I would know if the tracking was accurate or not.

Experiment 6

Once I started working on the Nautical Adventure assets, I felt that the sun looked too flat and inanimate. I had the idea of animating the sun in Adobe After Effects and then calling the exported .mov video within the Processing code, using it instead of a flat .png image.

I split the sun visual into layers and animated each layer in After Effects. I also went through the video library on processing.org (https://www.processing.org/reference/libraries/video/) and experimented with calling the video, having it play on screen when needed, and looping it so that it kept playing for as long as the yellow ball was still being detected on screen.
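The looping itself relies on the Movie class from Processing's video library; a minimal sketch of the pattern, with a placeholder file name and position, looks like this (in the project, the movie would only be drawn while the yellow ball is detected):

import processing.video.*;

Movie sunMovie;

void setup() {
  size(1280, 720);
  sunMovie = new Movie(this, "sun.mov");  // placeholder file in the data folder
  sunMovie.loop();                        // keep the animation looping
}

void movieEvent(Movie m) {
  m.read();                               // grab each new frame as it arrives
}

void draw() {
  background(0);
  image(sunMovie, 40, 40);                // position matching the sky in the background
}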

One of the setbacks of using video was that I couldn’t export a good-quality video asset with a transparent background, and I didn’t want to use a .gif asset, which is limited in its number of colours and frame rate. I could export the video as a .mov with a fixed background and have it appear on screen in a specific location that matched the background behind it.

I had to make a choice between having the sun asset a well-animated but static asset or a free-roaming flat image. I eventually went for the free-roaming option, and from testing the adventure on my cohorts, it was obvious that moving the sun asset – which was the yellow ball at the end of a stick – was one of the more fun parts of the experience.

Experiments 7 & 8

The final two experiments relate to tweaking and fixing the code for the final product. There were lots of issues regarding the movement of the waves, as I wanted each wave to move at a different speed and in a different direction. My cohort Phuong Vu helped me write the code to animate the waves.

A class called “Wave” was created to control the circular motion of the waves. The class is then instantiated in the body of the main code with options to change the direction and speed, and to make sure that the waves do not move in sync, adding a more dynamic feel to the animation, within this line of code:

w1 = new Wave(loadImage("WAVE1.png"), 0, 20, 9, 1);
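For context, a hypothetical reconstruction of such a Wave class is sketched below; the parameter meanings (phase, amplitude, speed, direction) are guesses based on the call above and may not match the actual class in the repository:

// Guessed reconstruction: an image plus phase, amplitude, speed and direction
// values that keep each wave moving slightly out of sync with the others.
class Wave {
  PImage img;
  float phase, amplitude, speed;
  int direction;     // +1 or -1: which way the wave drifts
  float t = 0;

  Wave(PImage img, float phase, float amplitude, float speed, int direction) {
    this.img = img;
    this.phase = phase;
    this.amplitude = amplitude;
    this.speed = speed;
    this.direction = direction;
  }

  void display(float baseY) {
    t += 0.02 * speed;
    // Circular motion: horizontal drift plus a vertical bob, offset by the phase.
    float x = direction * cos(t + phase) * amplitude;
    float y = baseY + sin(t + phase) * amplitude * 0.5;
    image(img, x, y);
  }
}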

The video shows the result of one of the failed experiments of getting the waves to act in a choppier and faster way.

The final video shows one of the final testing phases, in which one of the assets is called incorrectly and colours are also being detected from reflections on the water tank. Fixing the colour detection was the most time-consuming aspect of the whole project, as every different lighting condition affected the outcome and the colours had to be re-calibrated each time. This issue continued until the very last moment before the project was presented to the guest jury. It was solved by adjusting the lighting in the presentation room and making sure that the tank was in front of a white background and on a white surface.

Alien Astronaut by Tarik El-Khateeb

START


The year is 2141. The Michelangelo Space Station has been abandoned for nearly a decade after the entire crew mysteriously disappeared following an unexpected collision with an unidentified space object. Analysis of the black box suggested that the lead astronaut might have caused the collision intentionally and was behind the crew’s disappearance.

A space shuttle in trouble is forced to dock at the station and, while investigating the space, the crew is haunted by an army of alien-infested microchip spiders who weave webs of metal mesh. These aliens have given life to the portrait of their leader – disguised as the lead astronaut – and use the portrait’s eyes to keep track of the crew members and send the information back to him on his spaceship. Once in a while they cause the portrait to flash the image of his alien skeleton, exposing his true form and nature.

The crew of the shuttle discovers what happened to the previous crew all too late: like that crew, they are transported to the leader’s spaceship and cocooned in metal webs by the microchip spiders, where unimaginable horrors await…


The Inspiration

The project was inspired by oil portrait paintings that have peepholes instead of eyes, which are commonly used in scary movies and haunted attractions.


Sketch


The Project and Process

The piece is a portrait of the lead astronaut of the space station, covered in metal cobwebs and alien-infested microchip spiders. The eyes move when the portrait detects motion, and the microchips try to expose the true nature of the astronaut by flashing the image of his alien skeleton from within the frame.

As seen in the sketch above, the original concept was slightly more complex: the microchip spiders were to move and rotate when triggered, and an additional layer of acrylic carried the alien skeleton. Due to time constraints, the microchip spiders were used only as static decorative elements. The sheet of acrylic was eliminated by adhering the alien skeleton to the back of the portrait, cutting down costs and making more room inside the frame for the other components.

Originally, the eyes were supposed to follow the movement of the trigger, but given that the portrait would be displayed in a crowded room, I chose not to have the eyes track movement as there was a chance that the sensor would get confused by the amount of movement in the exhibition space.


 Pully


WIP


Project Components and Design

Four wood frames were used to build the frame for the portrait; the thickness they provided was enough to hold all of the components within its interior space, including the 12 V battery pack, the breadboard, and the Arduino.

An IR proximity sensor (with a range of up to 30 cm) is used for motion detection; a small window was cut in the bottom half of the portrait for the sensor.

To make the eyes move, the printed image of the eyes was adhered to a timing belt, which stretched between two timing pulleys. One timing pulley was mounted on a standard analog servo while the other was attached to the opposite side of the frame. Once the sensor is triggered, the eyes start to slowly move from side to side.

To light up the alien skeleton silhouette, a four-foot white LED light strip was wrapped inside the frame behind the portrait. The Arduino cannot supply enough power to drive the strip on its own, so a 12 V pack of eight AA batteries was used. A TIP120 Darlington transistor sits between the Arduino and the LED strip, with the Arduino switching the flow of power from the batteries to the LEDs according to the sequence in the code. The LED lights are triggered by the sensor and flash after the eyes have moved for a specific period of time.

The portrait is printed on 70 gsm paper, and the alien skeleton silhouette is cut out of black cardboard and adhered to the back of the portrait. The frame is covered with silver corrugated cardboard and painted with acrylic paint to add an aged effect. Wire mesh and microchips are used for the cobwebs and spiders.


Astro


The Circuit

The components for the circuit are:

  • Arduino Uno (1)
  • Breadboard (Medium, 1)
  • Standard Analog Servo (Medium,1)
  • GT2 Timing Pulley (8mm bore, 2)
  • GT2 Timing Belt (2mm pitch, 1)
  • IR Proximity Sensor (4-30cm distance, 1)
  • White LED Strip (4 feet, 1)
  • AA Batteries (1.5v each, 8)
  • TIP120 Darlington Transistor (1)
  • 10kΩ Resistor (1)

Schematics

The Code

https://github.com/TAR1K/Alien-Astronaut/blob/master/Alien-arduino-code.ino


Final Product

FINAL product


Video: Process and Final Product


Challenges and Choices

  • Given that I had no programming skills prior to this course, it was a challenge to create the code for the project.
  • Timing the movement of the eyes and making the movement seem as natural as possible was the challenge that took the longest amount of time to code and get right.
  • Given more time, I would have worked more on the aesthetics of the frame, perhaps custom-building a metal frame and aging it using non-toxic chemicals.
  • The original concept had the microchip spiders moving when triggered; it would have been an interesting additional challenge to tackle.
  • After reflecting on the exhibition, I believe that the presentation of my project would have been stronger had I created an atmosphere and a setting for the portrait by designing and printing a background wallpaper of a space station and mounting the portrait on it.

Reference:

The Sinister Eleven

A set of eleven portraits that used to adorn the walls of a corridor in the Walt Disney World Haunted Mansion. The eyes of the characters would follow the guests as they passed by. In 2007 the Mansion was refurbished and the portraits were re-located to another area without the moving eyes effect.

I was very fortunate to see the original portraits at Tokyo Disneyland a few years ago. The Haunted Mansion experience was a very memorable one, especially as it utilized state-of-the-art technology alongside some of the authentic mechanisms created for the original attraction in the U.S. when it was first unveiled.

More information regarding the Sinister Eleven can be found in these two links:

http://www.doombuggies.com/history6.php

http://longforgottenhauntedmansion.blogspot.ca/2011/03/famous-ghosts-and-ghosts-trying-to-make.html


Credit and special thanks go to my fellow students Chris Olson, Hart Reed, and Phuong Vu for their guidance and their help with the coding and programming.

