Author Archive

Project 3: City as Amusement Park - Simon Says Lanterns (Tegan Power & Rida Rabbani)

Project Proposal: https://prezi.com/17_f86php4ij/90s-carnival/

Project Video: https://vimeo.com/114065701

Project Description:

Our project is an interactive two-player game installation using XBee communication. It is inspired by Simon Says, but has been altered to be two-player and to transform an existing space into an installation for game play. Lanterns of different sizes and colours communicate with each other, and players attempt to match the colour sequence that the other player has input. A set of three small lanterns (blue, green, red) have large arcade buttons that player one may press six times in any order. This input corresponds to the large lanterns hanging on the other side of the room. Once player one has input the sequence, the corresponding LED string colours blink in order on player two’s side. Player two must then match this sequence, tapping the large lanterns to activate the tilt switch in each. If player two matches the sequence correctly, all six lanterns on both ends blink in congratulations; if player two fails to match the sequence, a buzzer sounds. This game could transform any space, indoor or outdoor, and could be packaged and sold as an “install it yourself” party game.

Code:

Transmit (small lanterns) code:

https://github.com/ridasrabbani/Rida/blob/code/code2.ino

Receive (large lanterns) code:

https://github.com/ridasrabbani/Rida/blob/code/FINAL%20ASSIGNEMENT.ino

Writing the code was by far the most challenging aspect of this project and took several days to evolve and become functional. We first tested Arduino-to-Arduino XBee communication by having buttons on one end control LEDs on the other end. Each button pin was assigned a number (1, 2, 3) that would be serial-printed to the LED side; the LED side would read the incoming number and light up the corresponding LED (1 = blue, 2 = green, 3 = red). Once this was set up, we focused on the receiving end. (We wanted player one and player two’s roles to be interchangeable, but once we realized the difficulty of the code we had taken on, we decided to stick to one-way communication that would simply reset at the end of a turn.)

The first thing we did on the receiving end was create variables for player one’s and player two’s sequence inputs. Player one has six button presses, so the code looks for button presses and fills the six “YourHold” slots. We then added booleans to indicate when each player’s turn is over. Once YourTurnIsOver is true, the input sequence is digitally written to the corresponding LED strings in the large lanterns for player two to memorize. Player two then fills the six “MyHold” slots: the code again looks for presses of 1, 2 or 3 to fill them, and once player two has finished, MyTurnIsOver is true. At this point, the YourHold and MyHold values are compared. Each of the six comparisons is nested inside the last, because if any value fails to match there is no need to check the others. If all six match, all the LED pin outputs blink; if they do not, the buzzer on pin 13 is set HIGH. The system then resets for the next match.
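The comparison logic described above can be sketched as a small Python simulation (the real build runs as two Arduino sketches over XBee serial; the function name and outcome strings here are illustrative, not taken from the actual code):

```python
# A minimal Python simulation of the sequence-matching logic.
# Colour codes follow the write-up: 1 = blue, 2 = green, 3 = red.

SEQUENCE_LENGTH = 6

def play_round(your_hold, my_hold):
    """Compare player one's input (YourHold) with player two's
    response (MyHold) and return the round's outcome."""
    assert len(your_hold) == SEQUENCE_LENGTH
    assert len(my_hold) == SEQUENCE_LENGTH
    # Like the nested ifs in the sketch: stop at the first mismatch,
    # since there is no need to check the rest.
    for expected, actual in zip(your_hold, my_hold):
        if expected != actual:
            return "buzzer"   # pin 13 buzzer sounds on the real build
    return "blink"            # all six lanterns blink in congratulations

print(play_round([1, 2, 3, 1, 2, 3], [1, 2, 3, 1, 2, 3]))  # blink
print(play_round([1, 2, 3, 1, 2, 3], [1, 2, 3, 1, 2, 2]))  # buzzer
```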

Sketches:

10822644_10152630337303477_400802408_n

10850447_10152630336988477_1983402173_n

 

 

Case Studies:

Case Study 1 - Intel World Interactive Amusement Park

Intel came to Inhance needing an environment in which to explain their Intelligent System Frameworks in an educational and enjoyable way.

Interaction: It features an animated theme park that allows up to thirty people to engage with the wall, bringing up floating windows of content amidst the rides, roller coasters and people moving about the park.

Technology: The result is the Intel Amusement Park Experience, an interactive multitouch application displayed on a 3×2 multi-screen LCD wall. It integrates with a Social Media Photo Booth App that allows attendees to take photos that superimpose their faces on a roller coaster ride. The photos can be sent to Facebook, Twitter, the Intel® World Wall and their email.

Narrative: The wall brings all of Intel’s products into one environment to show connectivity throughout the entire park. Inhance’s goal was to deliver the same emotion one experiences in an amusement park, drawing attendees to the wall to touch it and learn. The result was constant excitement on people’s faces and large clusters of people touching the wall. It was highly successful at the trade shows it was created for, including Embedded World, Mobile World Congress, Design West and the Cornell Cup.

Case Study 2 - Lagoon Amusement Park

Amusement parks are all about speed. Whether it’s riding a massive roller coaster or plummeting 70 feet inside a tubular water slide, guests want to go fast.

Interaction: Lagoon is now able to satisfy the needs of its employees and guests with the updated card-printing technology, bringing the park back to its desired speed.

Narrative: Now that the Lagoon Amusement Park has established its current system, computer stations at the gates can track Season Passport access information and provide valuable marketing information. “We’re trying to increase our per-person usage through promotions such as our Season Passport Holder coupon books,” Young said. This allows the park to operate at full capacity all day long, letting guests get their season passports quickly and in a fun way.

Case Study 3 - XD Dark Ride

Set in the iconic Filmpark Babelsberg just outside Berlin, this full turnkey project was the first installation of the XD Dark Ride in the world.

Interaction: XD Dark Ride is an immersive, interactive theater with a capacity of 24 seats, so many people can ride at once.

Technology: Adding interactivity to state-of-the-art immersive special effects, it has revolutionized the world of ride simulation by combining video game technology with 3D animated movies.

Narrative: This was the first XD Dark Ride theater project in Europe: a conversion of a pre-existing spherical structure into a one-of-a-kind interactive dome integrating the world’s largest interactive screen (16 m wide).

Case Study 4 - Wizarding World of Harry Potter

The latest installment of The Wizarding World of Harry Potter is scheduled to open this summer in Orlando’s Universal Studios theme park. The new attraction features London and the magic-packed Diagon Alley.

Interaction: Guests will not only be able to enter the arches of the Leicester Square facade, but will be immersed in a bustling wizarding hub within a Muggle city, where towering buildings are slightly askew, with steep staircases and jagged edges galore.

Technology: In the real-life version, visitors will be in awe of the marble lobby and cavernous passageways. They’ll take off from here on a multi-sensory thrill ride through the vaults. And the dragon that will perch atop the bank building (reminiscent of when it escapes from the bank in the series) really does blow a giant ball of fire quite frequently. The thrill ride requires visitors to don 3D glasses and features 360-degree themed sets, intense 4K animations, and 3D projection systems for complete immersion.

Narrative: Guests around the world were impressed by the immersive experience Universal created and the meticulous attention to detail used to bring the Harry Potter stories to life. The central Harry Potter theme is brought to life through a real-life version of the story.

Photos and Diagrams:

 

photo 1photo 2

 

Soldering the arcade button to longer leads. Installing the button into the small lanterns by wrapping wire around it and the metal piece in the centre of the lantern. LED string is fit into the small lanterns and affixed to the sides to keep it in. Long leads come out the bottom for later connection to the Arduino.

10841221_10152630338748477_483116723_n-2photo 3photo 5

All three small lanterns are affixed to the black player one board. Construction paper covers with button holes are attached to the top to hide electronics inside each lantern.

photo 4-2photo (22)

 

Large lanterns are also filled with corresponding coloured LED string affixed to the edges. The tilt sensor is soldered to long leads and affixed to the bottom of the metal lantern structure. The tilt sensor had to be placed at a very specific angle so that player two’s tap would fully close the switch. Long leads are soldered to the other end of the tilt switches and LED string for connection to the Arduino.

photo 1-2photo 2-2

 

Final setup: Hanging large lanterns for player two, board mounted small lanterns for player one.

Circuit:

photo 2-3photo 1-3

 

Tools:

Hardware

  • 2 Arduinos
  • 2 XBees
  • 6 Lanterns
  • 3 Tilt sensors
  • 6 sets of LED string
  • 3 Buttons
  • 2 9V Batteries

Circuit Diagram:

Small lantern breadboard prototype: buttons are replaced with large coloured arcade buttons installed in the small lanterns. LEDs are replaced with red, green and blue LED string inside the small lanterns.

smalllantern_bb

Large lantern breadboard prototype: buttons are replaced with tilt sensors. LEDs are replaced with red, green and blue LED string inside the large lanterns.

 

 

 

largelantern_bb

 

Notes on process:

We started off by thinking of different ideas generated by the theme of the project. With the theme being an amusement park, we wanted something that involved people visually and engaged them to join in or interact with the installation. Initially we wanted to create an environment that could work both inside and outside. However, once we started working with our Simon Says idea, it didn’t really matter where the lanterns were placed, as long as they could establish communication.

Then we had to decide whether we wanted a game dedicated to one player interacting with the Simon Says lanterns, or two players playing against each other while the rest of the audience enjoyed the process of the lanterns creating a pattern and lighting up.

After establishing the two-player game installation, we had to work with the different materials. At first we were thinking of using balloons, but we settled on lights and lanterns with the XBees inside them, as we could not find balloons with enough cavity space. When we proposed the idea we were also advised that larger lanterns and materials would make a bigger impact.

The code to run our game was the more complicated part. With a lot of help from Ryan we got code that stored arrays and chunks of sequences; the point we got stuck at was getting the buttons to respond to the sequence of lights.

At the same time we managed the materials, the sensors and how they respond to one another. On the day of the final presentation we were still experimenting with the stability of the materials as well as the code, and it was more complicated than we thought. Although the code was able to store the sequence, the XBee communication was lost along the line, and once we got them communicating, one of the buttons kept sending faulty data; at some point our simple on/off button became a sensor detecting movement near it. It was with Ryan’s help that we finally got the circuit to work as a Simon Says game, but by then it was too late to set it up with the tilt sensors and lights on the larger lanterns, which despite our last-minute attempts would not send any data to the LED lights unless connected directly to the Arduino.

Project Context:

http://www.instructables.com/id/Arduino-Simon-Says/

Although the Simon Says Arduino was a very simple demonstration that did not make use of XBees, it gave us an idea of how to send information back and forth and simply test out the LEDs and match them using the buttons. The next step from here was to translate this into our more complicated wireless use of the Simon Says technique, back and forth between the lanterns, and to make it more interactive.

http://www.trio-tech.com

http://www.inhance.com/intel-world-interactive-amusement-park-1

https://www.hidglobal.com/sites/hidglobal.com/files/resource_files/hid-lagoon-amusement-park-cs-en.pdf

http://edition.cnn.com/2014/01/23/travel/wizarding-world-harry-potter-diagon/

These case studies helped us explore not only the potential of real-time technology but also how experiential and interactive attractions, sets and props add to the touch and feel of an environment. Provoking the senses and working with familiarity and surprise makes the audience curious and interested in the space and its attractions.

Project 2: Brain Activity

 

Untitled

 

Background:

In order to create a theme around the concept of water, it was important to understand the properties of projection along with its interaction with Processing. As this was my first encounter with Processing, exploring its many capabilities and the role it could play in the development of my final project was critical.

The only water projection I could initially imagine was quite literal: using the projector to project water in a tank onto a surface as simple as a wall. However, after researching and going through amazing work by some artists, my perspective broadened. I began to understand that this was only one of the ways digital technologies can come into play with the physical world.

Project Description:

The idea behind the project was the activity that goes on in the brain: thoughts and processes trigger brain frequencies, which communicate and connect with the other parts of the body and, as a result, produce a certain action or movement.

Although the brain is a complex organ, I represented it with a mirror, water, and an interactive frequency generator built in Processing and played through a speaker.

The way this worked was that every time you interacted with a low-level frequency in Processing, it generated a sound strong enough to reflect itself, through a connected speaker, in the mirror, which had water placed on it. These patterns created variations in the mirror projection, which before the mouse interaction was constant.

Code:

https://github.com/ridasrabbani/assignment-2/blob/master/assignment.ino

Process:

The first step of the project was understanding the brief in relation to Processing and projection. Over the weeks, seeing the different ways digital and physical spaces could be combined made it clear how a variety of materials and software could enable users to interact with multiple points of the project.

The software could be as simple as the camera built into the laptop, which allowed the projection, or as complex as MadMapper. With an understanding of projection, the final result could be achieved by choosing the correct combination of media.

The next step was to look at existing works on the theme of water or projection using Processing and a physical medium. Distort Yourself was one such work that had a complex yet strong idea with a fairly simple execution. In a similar way I wanted to execute an idea that the audience would want to dive deeper into and interact with. The brain, an organ so complex yet whose process and activity can be reflected on a medium, came to me, and I wanted to use a system similar to that of Distort Yourself, but bring it to life just as The Abyss creatures were projected.

My next step was to look at the way different materials interact with one another to best project my idea. I looked at different sizes of mirrors and speakers, and although size was an important element, the quality and weight of the mirrors and speakers were crucial: the heavier the mirror, the fewer vibrations and patterns it allowed on the layer of water. At the same time, I wanted to create a comfortable sound that produced the effect on the water without bursting any eardrums. After surveying the market and getting ideas from people, I was able to get a mirror with a large enough surface rather than a frame that covered the projection space.

10751571_10152581487858477_18178307_n10811503_10152582294253477_2080520156_n10805170_10152582695378477_323625154_n

 

 

 

The Beads library had a wide range of choices to choose from. Although I was thinking of using a player at first, I wanted the users to interact with the frequency screen in Processing, hence I chose the Lesson 10 interaction. The element of giving them complete control was still missing, which is why I altered the code to allow full mouse control: starting from point zero, an interaction was built by clicking on any space on the screen and keeping it pressed.

void mousePressed() {
  ac.start();
}

void mouseReleased() {
  ac.stop();
}

However, I still wanted a lower frequency to be produced, to create better patterns and vibrations, which is why I altered the frequency ratio from 10 to 0.1:

updatePixels();
// mouse listening code here
carrierFreq.setValue((float)mouseX / width * 1000 + 50);
modFreqRatio.setValue((1 - (float)mouseY / height) * 0.1 + 0.1);
}

Even with the lowest possible frequency ratio, I noticed the lowest frequency was produced when the mouse was clicked in the extreme top-left corner.
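The two mapping formulas above can be checked outside Processing. This Python sketch reproduces the same arithmetic (the function names are mine, and the example width and height values are arbitrary):

```python
# Reproduces the mouse-to-frequency mappings from the Processing sketch:
#   carrierFreq  = mouseX / width * 1000 + 50
#   modFreqRatio = (1 - mouseY / height) * 0.1 + 0.1

def carrier_freq(mouse_x, width):
    # left edge -> 50 Hz, right edge -> 1050 Hz
    return mouse_x / width * 1000 + 50

def mod_freq_ratio(mouse_y, height):
    # top edge -> 0.2, bottom edge -> 0.1 (after the 10 -> 0.1 change)
    return (1 - mouse_y / height) * 0.1 + 0.1

print(carrier_freq(0, 800))      # 50.0  (lowest carrier, left edge)
print(mod_freq_ratio(0, 600))    # 0.2   (top edge)
print(mod_freq_ratio(600, 600))  # 0.1   (bottom edge)
```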

 

Finally, I wanted the Processing sketch to run full screen, so I added a line setting the size:

size(displayWidth, displayHeight);

 

Diagram of the System:

 

diagram

Photographs:


 

 

10748639_10152581180718477_628418883_n
10807007_10152581181063477_179630985_n (1)

10799453_10152581181093477_699078001_n

 

 

Project Video:

 

 

 

Other diagrams:

brain-hi

Daydreaming-on-the-job-this-brain-wave-reading-helmet-knows-video--b8035a79cd

 

Experiments:

Experiment 1: Using For Loops

 

This was the first Processing code that I followed with the tutorial, in order to create loops. Loops not only simplify actions you would otherwise have to repeat again and again, but also show how important calculation and coordination with the rest of the code is in Processing. Although I got the code to run and execute on screen the first time, it gave me squares of colour the second time around; that may have been because I missed an important loop line, or another technical issue I didn’t account for. This code also gave me access to the large palette of colours available in Processing. In the final code of assignment two, though, I chose to vary the palette using RGB to reflect the theme of my project, the brain, using red for the frequency and black for the background:

color fore = color(102, 0, 0);
color back = color(0,0,0);

However, in the future I want to be able to choose from the colour themes available in the palette library.
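As a stand-in for the loop exercise (not the tutorial’s actual code), nested loops visiting every cell of a grid can be sketched in Python, alternating between the two colour names used above:

```python
# Nested for loops visit every cell of a grid, row by row,
# alternating between two "colours" (here just strings).

COLS, ROWS = 4, 3

def checker(cols, rows):
    grid = []
    for y in range(rows):          # outer loop: one row at a time
        row = []
        for x in range(cols):      # inner loop: each cell in the row
            row.append("fore" if (x + y) % 2 == 0 else "back")
        grid.append(row)
    return grid

for row in checker(COLS, ROWS):
    print(" ".join(row))
```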

 

Experiment 2:

Adding sound was my first encounter with sound in Processing, and it helped me understand how the sound element of media can add life to a sketch or a picture. It was also an important aspect of my final project: this tutorial gave me an understanding of how libraries can be used to create a sketch and can be further modified depending on the function you want. I also got access to the Beads library, which later became an important part of my code, as it was the skeleton of the interaction between the user and the frequency. This experiment required a WAV file and a JPEG, adding a sound file on top of a visual layer. The problem I faced was with the sound itself: it played for only a second on the first try, but re-adding the sound after the picture worked the second time around.

Experiment 3: Beads Sound Library

The choice available in the Beads library made it a hard decision to pick a particular sketch for processing sound. At first I wanted to use a music player and simply play a low-frequency sound that worked well with my projection, but I didn’t want to take the easy way out. I also wanted interaction, and although the interaction sketch worked well, it was still missing the component that gave the user full control. Hence I modified it to give better results and work only while the mouse was clicked and held. This also meant each user got a unique experience and explored the sketch from scratch without having to reload each time. In the future, when I have more time to experiment, I want to add a sound of my choice and then allow interaction with it at different frequencies, to see how I can distort existing music and create variations.

Experiment 4: The Abyss Creatures

The Abyss is a 3D space that allows interaction between graphics and animation in Processing, giving control over not only each creature’s basic shape and drawing but also its movement, focus and information. Andreas Gysin created this as a workshop for design students to explore this space in Processing and allow programming at different levels. It allows interaction, and the basic creature comes alive as a graphical output. This inspired me to see projection not just as a tool but as a way to rebuild a space where individual creatures, and multiple creatures, can coexist with one another.

The Abyss code allowed me to recreate the creature and gave me ideas for my future project: creating a virtual space people can explore and recreate. To me this experiment seemed very similar to the games I played while growing up, such as The Sims, where you create a virtual world with the properties you see fit and characters with as many qualities as you want. The personal aspect of the experience was something I wanted to create in my final project. At one point I wanted to work on this experiment as my final project, but because of the locked layers I was not able to edit the information about the movement and shape of each creature.

Experiment 5: Frequency Interaction

The frequency interaction was one of the most important experiments of my project, as it was the basis on which the sound was generated. It involved playing around with the frequency changes, the screen size, the basic elements that changed the whole feel of the project (the colours) and, most importantly, the mouse control, which allowed screen interaction only while the mouse was pressed.

However, I would have liked to add an element to the code where you do not necessarily need to keep the mouse pressed, as this is only one of the interactions with the project. The mirror reflection and patterns are another, and that would have required something to stay pressed until you pressed something else to stop it. This, together with other elements such as a more comfortable or less pitchy sound, would have shifted focus from the frequency interaction to the mirror interaction.

Nevertheless, this was one of the best sketches in the Beads sound library and, with a little tweaking here and there, worked really well with my final project.

Experiment 6: Projection

Projection was one of the experiments that followed me right to the end: even right before my turn I was setting up the Processing screen with the projector, as the Mac settings were different from the Windows ones I am used to. The projection in my case was to show the interaction with the frequency screen; however, I also wanted to project the mirror image changing with the generated frequency, and the before and after.

MadMapper was also an interesting tool, which I wanted to use to highlight an installation complementing mine on the wall. Trying out projection as part of the class generated many ideas as to how a traditional projector can be used. Projection was certainly different from how I had imagined it, from projecting onto the room itself to concentrating focus on certain points in the room.

This helped me on my project, as I learnt that people can project on different surfaces and materials and use different tools to create the right effect on the spectator. In my case this was a layer of water on the mirror, which created patterns and variations different from the original reflection.

Experiment 7: MadMapper

MadMapper, which seemed confusing yet artistic to me at the start, became much clearer when we practically tried out a projection with it. It is not only about a 2D image on a surface, but about creating 3D patterns, objects and textures, all by sizing and aligning the projection against the objects and surfaces it works best with. It was fun because you can perfect the art and recreate it as many times as you like; it is an easy, prototype-friendly piece of software. I had only used it twice, once as part of a tutorial to distort an image and a second time with friends who had tried it before, and it was an amazing experience to see the simple projection play out and transform as part of an environment.

Although we had this session very close to the final project demonstration, I really wanted to either add this element of mapping or recreate my final project with this new understanding of the software. It was also interesting to see many people implement it as part of their demonstrations or use it to create the right mood. In the future I want to try it out with After Effects and an LED set-up.

Experiment 8: Gesture Based Detection

I carried out the gesture-based experiments through the built-in camera, which allowed detection of colour, movement, distance and direction. In this case I altered the code so that the red distance dot disappeared completely as soon as a finger moved close to the camera; it detected this through the pitch-black darkness of the screen, hence detecting a certain colour. The code that allowed this was:

void mousePressed() {
  // Save the colour where the mouse is clicked in the trackColor variable
  int loc = mouseX + mouseY * video.width;
  trackColor = video.pixels[loc];
}

void keyPressed() {
  if (key == 'v') {
    videoToggle = !videoToggle;
  }
}

This came closest to colour detection and worked better than the other interactions with the camera. I wanted to incorporate this element of detection in the final project, by detecting a change in the image or a certain movement in the water, but due to the unreliable results of the speaker and mirror at different stages I was not able to test and implement it before the final project.
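The loc calculation in the snippet above flattens a 2-D pixel coordinate into the 1-D pixels array that Processing exposes. A quick Python check of that arithmetic (illustrative only; the 320-pixel width is an assumed example):

```python
# video.pixels is a 1-D, row-major array, so pixel (x, y) lives at
# index x + y * width -- the same formula as the Processing snippet.

def pixel_index(x, y, width):
    """1-D index of pixel (x, y) in a row-major pixel array."""
    return x + y * width

# e.g. in a 320-pixel-wide frame:
print(pixel_index(0, 0, 320))   # 0   (top-left pixel)
print(pixel_index(10, 2, 320))  # 650 (third row, eleventh column)
```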

 

Inspirational works and their connection with my project:

Distort Yourself with We Are Narcisses by Bertrand Lanthiez and Chloé Curé uses the simple idea of a mirror and sound to create a distortion and make viewers question their real image. However, that project is more visual than interactive, and its sound is generated based on a distance sensor. The visual and perceptive element of this finely polished work inspired me to dig deeper and create a connection with an existence which has always been there yet is not so noticeable. That made me think of the brain: such a complex organ, yet its process is so generative. Hence I wanted to use the idea of frequencies as the essential input.

However, the concept of the frequencies being constant was not quite sitting with me and I wanted to dive deeper, which is when I came across Aural Architecture / Visualize the Sound by Sung Ting Hsieh. It also applied the concept of water, using a microphone to pick up oscillations from the structures and emphasize the connection between the spaces and the audience. This is how I thought of bringing in the element of interaction and alteration of the frequency on a Processing screen, and then on the output screen, the mirror, through variations and patterns.

While working on my project with elements of interaction and interesting projections, I came across a third project, The Abyss by Filip Visnjic, a 3D space in Processing where creatures can be drawn, built and released. Although the idea in itself was interesting, what I really took from the experience was that the most basic information, such as a name and a birthday, can bring to life a concept which is so hard to visualize otherwise. By naming, creating and defining the movement of the creature, I felt part of the atmosphere and was better able to visualize the entire projection. While working on my project this feeling really resonated with me, and I wanted to concentrate on how, while going through my installation, people would think of everyday objects and actions differently in terms of how they contribute to the universe, just as one mind becomes two and infinite brain activity creates patterns, processes and variations.

Expectations and Outcomes

Although the functional aspect of the project played out the way I imagined it, I did at one point want to add OpenCV elements that would allow control over which part of the mirror the pattern could be created on, as well as a sound that was more stable yet created the same low frequency required for the vibrations.

As for the design elements of the project, I had imagined working with a bigger mirror and, later in the execution, something that would look like a floating brain. However, the frequency generated by the speaker didn’t prove strong enough to work well on a larger surface, and as for the brain, even though I had a basic template, waterproof materials and better planning were needed for my execution to mirror a brain.

As for the Processing display and function, that aspect of the project went smoothly, as it allowed me to generate the lowest-frequency noise by changing the frequency ratio, and to make the sketch full screen with one extra line in the code: size(displayWidth, displayHeight);

 

 

 

Project 2: Experiment 8 (Gesture Base detection)

I carried out the gesture based experiments through the in built camera it allowed detection of color, movement, distance and direction. In this case I alterd the code so that the distance red dot disappeared completely as soon as the finger moved close to the camera it detected this through the pitch black darkeness of the screen hence detecting a certain color. The code that allowed this was:

void mousePressed() {
// Save color where the mouse is clicked in trackColor variable
int loc = mouseX + mouseY*video.width;
trackColor = video.pixels[loc];
}

void keyPressed()
{
if(key==’v’)
{
videoToggle=!videoToggle;
}

}

This came as close to the color detection and worked better than the other interactions with the camera. Although I wanted to incorporate the this element of detection in the final project by detecting the change in the image or a certain movement in the water due to the unreliable results of the speaker and the mirror at the different stages I was not able test and implement it out before the final project.

Project 2: Experiment 7 (MadMapper)

Mad Mapper which seemed confusing yet artistic to me in the start became much clearer to me when we practically tried out a projection on this medium. It was not only about the 2D image on a surface but creating 3D patterns, objects and textures. All this by sizing and aligning it against objects and surfaces it would best work with. It was fun because you can perfect the art and recreate it as many times as it was an easy prototype friendly software. Having only used it twice,once as a part of a tutorial to distort an image and the second time with friends who had tried it before. It was an amazing experience to see the simple projection play out and transform as a part of an environment.

Although we had this session very close to the final project demonstration I really wanted to either add this element of mapping or recreate my final project with this new understanding of the software. It was also interesting to see many people implement it as a part of their demonstration or to create the right mood. In the future I want to try it out with after effects and LED set-up.

Project 2: Experiment 6 (Projection)

Projection was one of the experiments that followed me right to the end. Even right before my turn I was setting up the Processing screen with the projector, since the Mac settings were different from the Windows ones I am used to. The projection in my case was meant to show the interaction with the frequency screen; however, I also wanted to project the mirror image changing with the generated frequency, and the before and after.

MadMapper was also an interesting tool, which I wanted to use to highlight an installation on the wall complementing mine. Trying out projection as part of the class generated many ideas about how it can be used with a traditional projector. Projection was certainly different from how I had imagined it: from projecting onto the room itself to concentrating focus on certain points in the room.

This helped me with my project, as I learnt that people can project onto different surfaces and materials and use different tools to create the right effect on the spectator. In my case this was a layer of water on the mirror, which created patterns and variations different from the original reflection.

Project 2: Experiment 5 (Frequency Interaction)

The frequency interaction was one of the most important experiments of my project, as it was the basis on which the sound was generated. It involved playing around with the frequency changes, the screen size, and the basic elements that changed the whole feel of the project: the colours and, most importantly, the mouse control, which allowed screen interaction only while the mouse was pressed.
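The gating described above, where the oscillator's frequency follows the mouse only while the button is held, can be sketched in plain Java. This is an illustrative stand-in for the Beads-based sketch, not the original code; the names and the 100–1000 Hz range are assumptions:

```java
public class FreqMap {
    // Linear map of x in [0, width) to a frequency range, mirroring
    // Processing's map(mouseX, 0, width, lo, hi).
    static float mapFreq(float x, float width, float lo, float hi) {
        return lo + (x / width) * (hi - lo);
    }

    // The frequency is only updated while the mouse is pressed;
    // otherwise the oscillator holds its last value.
    static float update(boolean mousePressed, float mouseX, float width,
                        float current) {
        return mousePressed ? mapFreq(mouseX, width, 100f, 1000f) : current;
    }

    public static void main(String[] args) {
        float f = 440f;
        f = update(true, 320f, 640f, f);   // held at mid-screen -> 550 Hz
        System.out.println(f);
        f = update(false, 600f, 640f, f);  // released: frequency holds
        System.out.println(f);
    }
}
```

In the actual sketch the computed value would be fed to a Beads oscillator each frame; the point here is only the press-and-hold gate.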

However, I would have liked to add an element to the code where you do not necessarily need to keep the mouse pressed, as this is only one of the interactions with the project. The mirror reflections and patterns are another; that would have required one press to keep something active until another press stopped it. This, together with other elements such as a more comfortable, less pitchy sound, would have shifted the focus away from the frequency interaction and onto the mirror interaction.

Nevertheless, this was one of the best sketches in the Beads sound library and, with a little tweaking here and there, worked really well in my final project.

Project 2: Experiment 4 (The Abyss Creatures)

The Abyss is a 3D space that allows interaction between graphics and animation in Processing, giving control over not only the basic creatures' shapes and drawing but also their movement, focus and information. Andreas Gysin created it as a workshop for design students to explore this space in Processing and to allow programming at different levels. It supports interaction, and the basic creature comes alive as a graphical output. This inspired me to see projection not just as a tool but as a way to rebuild a space where individual creatures and multiple creatures can coexist with one another.

The Abyss code allowed me to recreate the creature and gave me ideas for my future project: creating a virtual space people can explore and recreate. This experiment seemed very similar to the games I played growing up, such as The Sims, where you create a virtual world with whatever properties you see fit and characters with as many qualities as you want. That personal aspect of the experience was something I wanted to create in my final project. At one point I also wanted to build on this experiment as my final project, but because of the locked layers I was not able to edit the information about each creature's movement and shape.

Project 2: Experiment 3 (Beads Library)

The choice available in the Beads library made it hard to pick a particular sketch for processing sound. At first I wanted to use a music player and simply play a low-frequency sound that worked well with my projection, but I didn't want to take the easy way out. I also wanted an interaction, and although the interaction sketch worked well, it was still missing the component that gave the user full control. I therefore modified it to give better results and to work only while the mouse was clicked and held. This also meant that each user got a unique experience and explored the sketch from scratch, without the sketch having to be reloaded each time. In the future, when I have more time to experiment, I want to add a sound of my choice and then allow interaction with it at different frequencies, to see how I can distort existing music and create variations.
