The Strange and the Beautiful

This series of illustrations explores the concept of universal beauty and timelessness found within the realm of geometric forms and impossible figures. Computer-generated visual components are layered into the work, enhancing the images and creating a captivating visual experience. In print form, the abstract figures draw you into another world, from another time.

Select Images:

poster001_bluepyramid_s poster001_bluepyramid002_s poster001_bluepyramid003_s poster001_bluepyramid004_s

Technical Documentation:

Digital tools used:

Adobe Illustrator (for vector tracing and drawing)
Adobe Photoshop (for colouring and texturing)
p5.js JavaScript library (for generating visual components using original code)

Code:

Some of the code for this project was adapted from my previous experiments with rule-based, computer-generated visuals. It can be found here on CodePen:

“The Sun”
https://codepen.io/natalielh/pen/RjgyME
Different parameters in the code modify the layers of randomly generated lines radiating from the center point, giving the appearance of a layered circle.

“The Sun 002”
https://codepen.io/natalielh/pen/ZaJEaN
Another iteration of “The Sun”; the generated visuals are reminiscent of ink mark-making.

“Radiating Lines”
https://codepen.io/natalielh/pen/KyLJmQ
Generated lines radiate from a center point, forming an outer circle. This tool is simple yet powerful: the designs it generates can be used in many different applications.
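These CodePen sketches are written in p5.js. As a rough illustration of the rule set behind them, here is a minimal radiating-lines sketch (shown in Processing syntax, which the p5.js versions translate to almost directly; the parameter names are mine, not the originals):

```java
// Minimal sketch of the "radiating lines" rule set. Parameters are
// illustrative: tweaking them layers the lines into circles, rings,
// or ink-like marks, as in the CodePen experiments above.
int lineCount = 600;      // how many lines to scatter around the centre
float innerRadius = 40;   // lines begin this far from the centre...
float outerRadius = 180;  // ...and end somewhere inside this distance
float jitter = 60;        // randomness in each line's length

void setup() {
  size(400, 400);
  background(255);
  stroke(0);
  noLoop();  // draw one composition per run
}

void draw() {
  translate(width / 2, height / 2);
  for (int i = 0; i < lineCount; i++) {
    float a = random(TWO_PI);                 // random direction
    float r2 = outerRadius - random(jitter);  // vary the line length
    line(cos(a) * innerRadius, sin(a) * innerRadius,
         cos(a) * r2, sin(a) * r2);
  }
}
```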

Design Files and Assets:

I created a separate vector file in Illustrator to archive the ‘impossible figure’ subject matter in the work, so that it can be easily modified and reused in future endeavors. Various texture images and Photoshop brushes were used to give the images a worn-in style.

These design files and assets can be found here:
https://drive.google.com/open?id=1LckvnVKvLBbBdMhSM65ea_zRHzuq7eDg

Context Summary

The Strange and the Beautiful is an experimental series of illustrations exploring geometric forms, optical illusions, and impossible figures as subject matter within two-dimensional design.

The aesthetic of the final prints is very antique, reminiscent of ‘found treasure’ from a mysterious time or place. The red-orange and deep purple colour scheme, along with the added ‘grunge artifacts’ resembling a worn-in print, gives the work a mysterious appearance. This is deliberate, as the series explores the concept of ‘universal beauty’ as something found during no specific time period. Theoretically, the entire design could be created by hand without very sophisticated tools, pigment, or ink. I have also found that introducing ‘print artifacts’ to a design is visually appealing for many viewers, as it contrasts with the very geometric, clean lines often found in graphical works.

The visual appeal of straight edges and solid geometric forms can be found in many contemporary applications of graphic design, especially with the rise in popularity of the minimalist, two-dimensional style. The appeal of these features goes back to ancient times and can be found throughout world history, in cases such as ancient Greek architecture and pottery, and ancient Egyptian pyramids and illustrations.

I initially planned to explore different new-media techniques in this series, such as various forms of code-based glitch art. However, I realized that this would conflict with the design direction I had set out to achieve. Having someone look at the prints and say “hey, this is glitch art!” would have undermined the idea that the designs could have been made without computers; you would conclude right away that the piece is very contemporary. Instead, I used digital media and computer generation as a tool to help achieve the look I was aiming for: code generated visuals that would have been very time-consuming to create by hand, or even with digital tools lacking complete automation. Some of the generated visuals create an organic feeling in the work, where randomly placed lines (still following a predetermined set of rules) contrast with the other visual elements. I find it interesting to leave the viewer a hint that parts were computer-generated without giving it away completely.

I definitely plan on further exploring this uncommon subject matter with different approaches producing different visual styles. Although this series focuses completely on the aesthetic of old-school prints, I would like to explore different styles in future iterations, evoking a different feeling or impression in viewers.

Artists and designers working primarily with impossible figures and optical illusions as subject matter remain relatively uncommon. In the 20th century, artists such as Maurits Cornelis Escher and Oscar Reutersvärd explored these unusual constructions. Escher created architectural scenes featuring surrealist patterns and structures that would be impossible to construct in the physical world. Reutersvärd, who independently developed the Penrose triangle, created paintings on canvas as well as coloured pencil drawings of impossible figures; these often form window-like constructions framing a sky with clouds. Viewers often find it difficult to look away, as what the eye sees in these images is in constant conflict with what the mind knows and expects to be physically possible. Impossible figures continue to be an obscure subject matter, with many works drawn or painted by hand with traditional tools.

It is difficult to find detailed research and writing about impossible figures specifically. They are uncommon as subject matter (human figures are far more common in art, for example), and they are a subcategory within the world of optical illusions and optical-illusion art. There has been documented research observing how the human brain reacts to visual stimuli that conflict with how it is programmed, producing the visual phenomena and the ‘wow’ factor found when looking at constructions involving these impossible figures. People often see colours, distortions, and other visual effects that are not in the images themselves; these are subjective and created in the brain. I am interested in further exploring surrealist optical-illusion art through a 21st-century lens, using new tools to bring the audience different experiences.

riley_movement_in_squares

Movement in Squares, Bridget Riley, 1961

Process Journal

I started by thinking about what techniques I could use to sketch these impossible figures on paper. An isometric perspective makes it very simple to blend foreground areas into the background, since shapes remain the same size on the page regardless of depth. This is particularly important for impossible figures.

I created and printed out some isometric dot paper to use for my sketches. It can be found in the design files section.
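For anyone who would rather generate their own dot paper than download mine, a minimal Processing sketch along these lines works (the spacing value is arbitrary, and this is just one way to do it):

```java
// Isometric dot paper: rows are offset by half the horizontal spacing,
// and the row height is spacing * sqrt(3) / 2, so the dots form the
// triangular lattice that isometric drawing relies on.
float spacing = 20;

void setup() {
  size(600, 800);
  background(255);
  fill(0);
  noStroke();
  float rowHeight = spacing * sqrt(3) / 2;
  for (int row = 0; row * rowHeight < height; row++) {
    float xOffset = (row % 2 == 0) ? 0 : spacing / 2;
    for (float x = xOffset; x < width; x += spacing) {
      ellipse(x, row * rowHeight, 2, 2);
    }
  }
  // saveFrame("isometric-dot-paper.png");  // uncomment to export for printing
}
```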

sketch001
sketch002

 

Drawing different figures on paper with a dot guide is a quick way for me to sketch out ideas before I do the vector work digitally.

penrose_square

For this experiment, I wanted to focus on this particular form: the Penrose square. Making one versatile vector file that can be modified in various ways is useful for trying different techniques in Photoshop.

outlines

I traced out the final lines in red after making the guide squares in green, following the grid set up in Illustrator. The tricky thing with these figures is that you can’t just draw one set of squares, then copy, paste, and resize it for another layer; each size needs to be retraced.

Then I made a mental plan of which parts would sit on top of or underneath other parts.

transparent

Tracing the fills is the trickiest part of the job, but the final result is always well worth it. I decided to have all the box corners end up on top, with the overlapping squares showing a repeated under-over-under-over pattern. Having the fills transparent makes it easier to view the lines underneath for much better accuracy. It’s important to get everything lined up as there is very little room for error.

opaque

Here is the final image, archived for later use. The colours are set so that the sides of the squares can easily be seen and selected in different programs. It can be set up in Photoshop in many different configurations; it’s a ‘base’ for creating new images on.

Making the new images in Photoshop involves selecting and masking the different fills by colour. At first, I tried adding a gradation over the entire figure, but I felt that it didn’t fit with the flat colour print look I was going for:

withburn

I still wasn’t satisfied with the look even after fully desaturating the image. I find this technique is much more effective when photographs are placed inside the fills instead of solid colours.

solid

Playing around with masking, filling, and inverting parts of this image, I ended up with something I found particularly aesthetically appealing, so I kept it and moved on to other parts of the image. I experimented with adding different colours to the background and to the fills in the figure, but I found that I enjoyed two-tone colour schemes the best.

figures

I added some abstract shapes to the background to go with the impossible figure.

addgradient

After introducing print artifacts to give the design texture and weight, I added a gradient map as a final layer for an aged-paper look: an orange and purple gradient map at 35% opacity. With the gradient map as the final layer, I can continue editing and inverting the black-and-white layers underneath while keeping a clean, non-destructive editing workflow.

print_001

sun001_002

even_002

 

Then I created the computer-generated visuals using the p5.js library, reusing some of the code I had experimented with before. Adding the visuals to the images was simple, as I generated the outputs in black and white so that they are easily visible and easily modified in image-editing programs.

I enjoyed playing with the different parameters across these iterations to achieve different results. It’s interesting to look at the variations that can be made by changing the lines (so that figures pop out of or blend into the background), placing visuals at different sizes, positions, and layers in the image, and inverting the image to reverse the positive and negative space.

References

A brief history of surrealism in design:
https://1stwebdesigner.com/modern-surrealism/

More examples of modern surrealist art and design:
https://designobserver.com/feature/a-dictionary-of-surrealism-and-the-graphic-image/37685

About surrealist landscapes:
http://artscenecal.com/ArticlesFile/Archive/Articles2001/Articles1001/SurrealistLandscapesA.html

Documented research about visual phenomena and optical illusions:
http://www.michaelbach.de/ot/

About sacred geometry:
https://www.geometrycode.com/sacred-geometry/

About Celtic knot patterns:
http://www.ancient-symbols.com/celtic-knots.html

A guide on isometric projection aimed at designers:
https://medium.com/gravitdesigner/designers-guide-to-isometric-projection-6bfd66934fc7

A website about impossible figures, the artists in their history, and a large illustrated library of figures:
http://im-possible.info/english/index.html


Tell

Tell is an interactive audiovisual installation on the transience of all things. It surrounds the viewer with slowly dissolving waves, visually connected to the sound of the viewer’s voice.

Participants speak into a corner, flooding it with waves of moving light. These images shimmer with colour and intensity unique to the tonal qualities of the participant’s speech. Each contribution’s appearance is wholly based on the individual responsible. The participant’s phrase endlessly repeats, with each echo reshaping it into something different, but derivative of the original. The waves adjust, transforming the participant’s contribution into something entirely new. This is the legacy of the participant’s voice.

Tell seeks to engage individual reflection on temporality and mortality. Tell offers no stance of its own, instead seeking to prompt and facilitate the participant’s voice.
Link to video



dereScape

Mustafa Abdelfattah, Sanmeet Chahil, Tristan Santos

dereScape

1-14

Project Description

DereScape is an interactive performance art installation that takes the viewer on a journey through videos of Toronto, projecting them onto a space that envelops passersby with vivid, interactive animations. What inspired the creation of dereScape was our desire to redefine photography and videography.

Using the city as the backdrop of the installation, we explore the urban environment and what it means to us and to the viewers themselves. This project encompasses photography, videography, interactive animation, and digital projection. These mediums are related in that each is a static visual representation viewed from a two-dimensional perspective. DereScape combines them and immerses the user in an interactive cityscape represented through pixels, which vary in depth to evoke a three-dimensional experience. The degree of pixelization the environment undergoes is determined by user interaction picked up through the live audio feed of the installation space.

DereScape evokes a meditative, sublime state through audio interaction, projection, and pixel animation. Viewers can walk in front of the projection to become part of the scene, but as they talk, the projection shifts and morphs in real time. Conceptually, sound added to the space becomes noise in the background.

DereScape’s performance-art aspect highlights an unconventional projection-photography studio session, with the introduction of a photographer and visitors as models. When the photographer is not focused and in their element, or when the subject is not still, the environment goes out of focus; in this case, it pixelates.

Ultimately, dereScape blends performance art, photography and interactive digital art to create a seamless experience, showcasing our theme of redefining photography.

Photo Journal

Link To PDF

Teaser Video


Technical Documentation

Building the dereScape installation requires:

  • A laptop
  • An external microphone
  • A projector
  • Two (2) speakers
  • A camera
  • A photographer
  • Volunteers
With regard to software, dereScape requires Processing with a few external libraries: PeasyCam for controlling the three-dimensional camera, Minim for audio input, and the Processing video library for rendering the clips of the city. A bare-bones skeleton of how these pieces fit together is sketched below.
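This is a sketch only; the file name and variable names here are placeholders, not our production code:

```java
import ddf.minim.*;          // microphone input
import peasy.PeasyCam;       // 3D camera control
import processing.video.*;   // plays the city clips

Minim minim;
AudioInput mic;
PeasyCam cam;
Movie city;

void setup() {
  size(1280, 720, P3D);
  cam = new PeasyCam(this, 500);       // orbitable 3D camera
  minim = new Minim(this);
  mic = minim.getLineIn(Minim.MONO);   // live audio feed from the space
  city = new Movie(this, "city.mp4");  // placeholder clip name
  city.loop();
}

void movieEvent(Movie m) {
  m.read();  // grab each new video frame
}

void draw() {
  background(0);
  float level = mic.mix.level();  // audio level, near 0.0 when quiet
  // ...pixelate the current frame of `city` based on `level`
  // (the pixelation step is sketched in the Process Journal below)
}
```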

Installation Diagram

Print

Code

Link to Zipped Code


Context Summary

DereScape is an interactive performance art installation designed and coordinated by Mustafa Abdelfattah, Sanmeet Chahil, and Tristan Santos of OCAD University. The installation is run by the three and aims to redefine the standard of projection photography. With experience in programming, graphic design, and photography, the group has designed an installation that demonstrates the possibility of blending interactive animation and photography into an overall seamless performance piece.

DereScape involves four main components: the subject, the photographers, the animation and the audience.

In essence, dereScape is an installation in which the photographers (in this case Mustafa, Sanmeet, and Tristan) interact with a volunteer subject by asking interview-style questions relating to the underlying theme of “one’s own journey in Toronto”. The subject is placed in front of a projected video that is animated by the subject’s interaction. The subject interacts back with the photographers, answering the interview questions through speech and body language. Vocal projections are picked up by a microphone placed near the subject, and the microphone data is passed to a Processing sketch that converts the audio into usable values; these values alter variables in the animation code, changing the form of the video projected onto the subject. The photographers take photos of the subject as they answer the questions, capturing the interaction between subject and animation through the subject’s body language and the visualization of the audio data.

The group drew inspiration from previous projects they worked on together, as well as personal endeavours pursued outside of the classroom. Mustafa, Sanmeet, and Tristan previously worked on a project titled “Pixalbum”, an exploration of raw audio data gathered from MP3 files and visualized as sound waves projected onto a pixelated album cover. Pixalbum produced still images that were later exported, printed, and produced as temporary tattoos. It also supplied one of the main functions of dereScape’s interactive animation: a pixel generator. Originally programmed by Sanmeet, the pixel-generation code was later expanded to work with video instead of still images, and the group succeeded in implementing video in a 3D space. They were also able to use live audio data from a microphone to alter the overall pixel density and form of the pixelated video, responding to user audio interaction in real time.
As mentioned previously, the group drew inspiration from individual projects; each member has experimented extensively with photography and developed a personalized style. The group found projection photography of particular interest and wanted to explore the field in more depth while using skills they already possess in programming and generative art. Unlike most projection-photography projects, dereScape offers a more intimate and interactive experience between the photographers, the subject, and the projection, while blending in performance elements in the form of an interview between the photographers and the subject for the audience to engage with.

Through research, the group took influence from a variety of sources on immersive light photography, projection mapping, and integrating video in Processing. Most of the influence and aid came from YouTube tutorials and articles on projection photography. A list of references can be seen below.

References

Articles
(2016, May 16). Terra Incognita. Retrieved December 07, 2017, from http://skywalker-terraincognita.blogspot.ca/2016/05/women-dressed-in-light.html

Yoo, A. (2015, January 30). Dancing House: Interactive Projection Mapping Crumbles House to Movement. Retrieved December 07, 2017, from https://mymodernmet.com/klaus-obermaier-dancing-house-ghent-light-festival/

Rozin, D. (n.d.). Wooden Mirror – 1999. Retrieved December 07, 2017, from http://www.smoothware.com/danny/woodenmirror.html

 

Tutorials and Sources
Magic immersive video and visual effects
https://www.youtube.com/watch?v=lX6JcybgDFo

 

Rendering video in Processing
https://processing.org/tutorials/video/
https://www.youtube.com/watch?v=G2hI9XL6oyk

 

Sound Effects
https://myaudiograb.com/12IIG4a9t4
https://drive.google.com/file/d/0B0uaHPe2B7gHV1hKbXNnTHZ3M1E/view

 


Process Journal

Throughout the creation of dereScape, we ran into many challenges and made discoveries that ultimately changed our perspective on the project. We worked on week-to-week milestones and took extensive notes on our process.

 

Project Proposal
The first idea that came to mind when we began the project took into account our common interests. We were all interested in photography, and none of us had shot projection photography in a studio-like setting before. As a result, we wanted to build on the code Sanmeet had developed for his dMAGE project, but in a video format. We wrote down our thoughts, speculated on a final product, and prepared for the proposal presentation on Tuesday.

 

Week 1

 

Tuesday, November 21
On Tuesday, November 21st, we presented the idea we had brainstormed earlier to our professors. We received feedback and decided to further research how we could introduce an interactive element.

 

Feedback
We were told to consider researching how to represent this digital pixelated cityscape in VR, and to focus more on the interaction we were trying to convey to the user. As a result, we opted to research how to use sound as a variable: both the ambient sound recorded from the environment and the sound produced by the interaction.

 

Thursday, November 23
Our goal for the end of the day was to complete a prototype running a pixelated video through Processing. We used the pixelated-movie example from Processing’s built-in video examples, and wondered whether we would be able to render a 2D video in a 3D space.
After hacking at the code and swapping the picture for a video using Processing’s video library, we successfully rendered a video in Processing. Once it rendered, we added back the pixelization element to vary the depth of the video and create a 3D effect.
After viewing the output, we learned that the pixelation influences the aesthetic of the video through how the colours render. The videos worked the way we wanted; we just needed more content. The videos that looked best featured people walking close to, or past, the camera: the closer they were, the more of the frame they took up, making them more visible even through the pixelization. We discovered that perspective is highly crucial for the pixel processing, as it adds a layer of abstraction.
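The pixelation step works roughly like this (our own condensed reconstruction, not the exact project code): sample the current movie frame at a grid interval and draw one box per sample, pushed along the Z-axis according to its brightness.

```java
// Assumes a playing Movie `m` and the P3D renderer, as in the
// skeleton under Technical Documentation. The depth range is illustrative.
void drawPixelated(Movie m, int pixelSize) {
  m.loadPixels();
  for (int y = 0; y < m.height; y += pixelSize) {
    for (int x = 0; x < m.width; x += pixelSize) {
      color c = m.pixels[y * m.width + x];
      float z = map(brightness(c), 0, 255, -100, 100);  // depth from brightness
      pushMatrix();
      translate(x, y, z);
      fill(c);
      noStroke();
      box(pixelSize);  // one 3D "pixel" per sample
      popMatrix();
    }
  }
}
```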

 

Interesting Ideas
An important aspect of any installation is the medium it is viewed on, so we thought of creating a ‘hologram’ using the pyramidal transparent-glass method. The only issue was that we couldn’t implement it in a room environment, as it would require a large amount of material to create a convincing effect. We thought that speaking in the cityscape environment might make you feel as if you don’t have a voice and aren’t being heard. We wanted the experience to work so that the more you speak, the more disconnected you become from the space, while the quieter and softer you are, the clearer it becomes. Tristan related this experience to sleep paralysis, where your consciousness is awake but you can’t physically wake up.

 

Next Tasks
For next week, we wanted to research how to use the microphone’s sound input as a dynamic variable to increase or decrease the pixel size of the video, making it interactive.

 

Week 2

 

Tuesday, November 28
Our goal for today was to get the microphone to dynamically affect the pixelation rate of the video. With the help of Adam Tindale, we were able to use the Minim sound library’s audio input mechanics, mic.mix.level(), which buffers audio from the microphone and returns its level.
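In isolation, that call is as simple as it sounds; a tiny test sketch (ours, for illustration) prints the live level so you can pick sensible scaling values:

```java
import ddf.minim.*;

Minim minim;
AudioInput mic;

void setup() {
  size(200, 200);
  minim = new Minim(this);
  mic = minim.getLineIn(Minim.MONO);
}

void draw() {
  // mic.mix.level() returns the RMS level of the current buffer,
  // a float that stays near 0.0 in a quiet room.
  println(mic.mix.level());
}
```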

 

Challenge
One challenge we encountered after getting the microphone input to work was that our mapping was stepwise, because we had written it as a chain of if statements: if the microphone level fell within a certain range, the pixels stayed at a fixed size rather than changing fluidly.

 

Next Tasks
For next week we had to capture more video of the city to construct a narrative. We also had to find a better way of getting pixel sizes to change smoothly, and to optimize the sketch to run more fluidly: when the pixel size got too small, the sketch couldn’t render frames fast enough and the video froze. Lastly, we had to decide on the presentation space and exactly what equipment we needed to rent.

 

Thursday, November 30
Today, we wanted to combine the videos we had collectively shot and organize them to best fit a narrative that would carry through to the questions we would ask the volunteer/model during the shoot. We also wanted to smooth the visual pixelation element of the installation and optimize the sketch’s performance.

 

Challenge
To smooth the visual transitions between pixel sizes, we needed to map the mic input level directly to the pixel size in the sketch. The issue was that the raw mic input numbers were very low, so to get usable readings we had to amplify the microphone’s sensitivity. We decided to fix this in software rather than hardware, creating a micSens variable to multiply the level and round it to a whole number. Another challenge was the performance optimization of the sketch; to fix that, we introduced an if statement that corrected each pixel’s size to 30 whenever it dipped below that number, since anything lower than 30 rendered very slowly and essentially froze the installation.
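Put together, the two fixes look something like this (a sketch following our description above; apart from the floor of 30, the numbers are illustrative, and `mic`, `city`, and `drawPixelated()` come from the earlier sketches):

```java
int micSens = 400;  // software amplification; the raw levels are tiny (value illustrative)

void draw() {
  background(0);
  // Map the mic level directly to a pixel size instead of stepping
  // through if-statement ranges, so the size changes fluidly.
  int pixelSize = round(mic.mix.level() * micSens);
  // Performance floor: below 30, rendering slowed to a crawl and froze the video.
  if (pixelSize < 30) {
    pixelSize = 30;
  }
  drawPixelated(city, pixelSize);
}
```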

 

Next Tasks
We required more videos to create a more coherent narrative of the environmental space, so we had to shoot over the weekend. We also needed to book the studio where we would present, and perhaps add a PeasyCam tilting method driven by binaural audio input from the microphones.

 

Week 3

 

Tuesday, December 5
Today we wanted to finalize the equipment required for the installation and book a room. We also wanted to test the Processing sketch for bugs and ensure it was functioning well.

 

Challenges
While finalizing the equipment for our installation, we confirmed we needed a projector. Since we wanted to project onto two corner walls, Haru Ji raised the issue of projection distortion: to keep the installation from looking warped, we would have to use projection-mapping software to scale it correctly so it appears flat on the wall. Haru recommended VPT 7 (a visual projection tool), and Sanmeet quickly tried to learn its ins and outs. We realized it required a plugin, Syphon, to map a live feed from an external program. After installation, the plugin would not cooperate with VPT 7 and the software simply broke. As a result, we decided to keep the projection’s distorted perspective, as it covers more surface area of the wall and creates a more immersive space that fills the visitor’s peripheral vision.

 

Final Thoughts
Ultimately, toward the end of the project we completed our last-minute explorations and experimented with various textures, patterns, and effects for the projected animation. We stuck with our theme of interactive animation and projection photography and worked on perfecting our concept to design an engaging performance for the final presentation. We worked hard on designing a seamless interaction for the subjects and the audience. Sticking with the theme of redefining photography, videography, and interactive digital art, we created an innovative and aesthetically pleasing experience.

 

We explored the creation of digital environments using Processing 3D animation and video files to animate pixels and their depth. We also incorporated sound from the installation environment to amplify the movement of video pixels along the Z-axis. The installation’s final behaviour works as follows: if nobody is talking in the space, the animation/video becomes clear; if someone is speaking, it becomes more pixelated.

 

We have discussed plans for the future of this project thoroughly. With more time and resources, we would like to explore more interactivity, possibly using an Arduino to handle numerous sensors, or interfacing a Kinect sensor. We also plan to slim the project down into a version of dereScape that explores the most intuitive and sensible ways to interact using the sensors built into one’s own computer, so that those without special equipment can try the same project. We have also discussed experimenting with different styles of projection, including 360-degree projection that lets subjects rotate around the experience (e.g., six environments on the faces of a 3D cube, each side being an environment).

 

In the end, through the expansion of our original theme, we created an interactive space focused on introspective reflection, silence, and the analysis of one’s own experience within the city of Toronto. We found that our exploration of video pixelation manipulated by a subject’s voice worked well in an interview-style performance piece. Overall, we are satisfied with what we designed and are hopeful about the possibilities of dereScape’s future expansion.

 

Photographs (Full Photo Journal Here)


Videos


Project ShadowCast

By Harit Lad and William Selviz

It started off as a way to bring recursive patterns into our everyday lives by creating fractal-inspired Christmas ornaments. We then realized that to give our idea some uniqueness, some tweaks needed to be made. From there, we explored applying fractals in our own way to three-dimensional objects, experimenting with 3D-printed prototypes. We thought about introducing light into the project and concluded that a hollow shape would suit the addition of light better. Lastly, we decided to create a laser-cut mood lamp, patterned by a self-generated algorithm, that casts interesting shadows when lit in a dark environment.


Final project video can be found here

Parts/equipment list:

  • 3D printer
  • ¼-inch plywood
  • 30-LED NeoPixel strip
  • Arduino Uno
  • Empty water bottle
  • Plastic bag diffuser

Software:

  • Cinema 4D
  • Fusion 360
  • Adobe Illustrator
  • Arduino

Code: The code can be found here

Design Files: The design files can be found here

Process Journal:

img_20171123_141143 img_20171123_141150 img_20171123_143455 img_20171123_145733 img_20171130_111349 img_20171205_124604 img_20171205_120327 img_20171205_120306


Deep Sea Aquarium

previwoj

 


Low-Poly Plant Room

Pandy, Sydney, Julie, Vivian, Tania

DIGF-2004-001

December 7, 2017

 

Concept/What it is:

Inspired by Haru’s classes on (what some would call) biological design, we decided to create a low-poly computer game based on Superhot, a Unity game by the Superhot Team. Using the word “game” very loosely, we built an interactive room with fantasy-like plants as the main focus. The player can walk up to the different animated plants and watch them react via colliders. In a way, we could call it a prototype for a larger product, because expanding this game to be more interactive and scenic is something we would all want to do in the future. We wanted to stick with the aesthetic of having everything white while making the interactive objects coloured. Keeping the room simple yet fun, the goal of the space is an almost relaxed vibe where players have time to explore and wander without feeling rushed. Our overall vision was an interactive deliverable that would be extremely interesting to play with on a VR device, had we the time and money to use one.

Technical Documentation:

  • Equipment
    • Computer or Laptop
  • Software / Platforms
    • Unity Engine
    • Rhinoceros 5
    • Maya 2017
    • Post Processing Stack (Asset store in Unity)

 

Process:

Building the Room:

No one on our team had worked in 3D in Unity before; some of us only had limited experience with 2D, which works in significantly different ways. The first step was to look into how to build models and prefabs and interact with them in the space. Starting off with one-sided planes for a first-attempt build of the room, we brought all of our assets into the scene to test lighting and interaction.

We started running into problems once we brought in the objects from Maya: they were scaled to the wrong size and were not grouped individually as whole pieces. This caused problems with placement and with the effects in the room, and the animations were difficult to apply as well. We needed an animation controller, but with the objects split into so many groups it was difficult to sort through them. We initially wanted a trigger animation with a proximity sensor as well as the light trigger, but because of the way the objects imported, once a collider or a mesh renderer was added to an object its animation became static and no longer applied. The issue was solved by using the prefabs in the system and adding the box colliders to those so that they could be triggered.

The windows were originally open to give a view of outside scenery. We used the terrain generator in Unity, which allows some custom modelling and mass-generates trees and other objects. This idea was scrapped when we downloaded the post-processing script to give the lighting a softer, more ethereal feeling: we discovered that one team member’s copy of Unity had not updated properly and was outdated, which interfered with post-processing. At first we were unsure of the source of the error, so we scrapped the outside world to focus on the internal experience of the room.

Once we identified this error, we tried to uninstall and reinstall, and encountered a glitch, discussed on the Unity forums, that had no current patch and stopped the program from running. We had to uninstall and reinstall the newest beta version, which worked. After this, post-processing installed properly but would not run or show in the game build, so we decided not to worry about it.

The second build involved taking floor prefabs and making a more finalized version of the room with lighting interactions. This time there were fewer issues than before, but the box collider/mesh issue was still a problem, so the C# script attached to the trigger had to live on a premade Unity asset.

We learned to make all models, including the space itself, in a 3D modelling program like Blender or Maya, since this would have avoided these issues, and to look more closely into how models, textures, and animations import. A large portion of our project time in Unity was spent troubleshooting, re-uploading assets, and correcting errors.

 

Lighting in Unity:

We definitely went through some ups and downs when it came to lighting in Unity. We had a vision of realistic light reflecting off white surfaces, with intense shadows and glows (in reference to Superhot). We succeeded by the end, but we tried several approaches first. The first was normal directional lighting, which looked okay, but we could not figure out the controls to make it more aesthetically pleasing. So we switched techniques: we made planes about the size of the windows and gave them a light-emitting material, enabled the static option, and tuned the values and colour of the light to what we desired. The lighting and shadows looked great, but we could not make the planes transparent, so they never had the actual “window” feel.

We then switched back to directional lighting and got the hang of the different controls, ending up with our desired glowing effect, which still looked good with the animation lighting. We also tried adjusting the lighting to change over time, as if through a real day/night cycle. Turning on real-time instead of baked lighting apparently does this, but we could not figure out how to adjust the time intervals so that the user would not have to wait a real day to see the change. That is definitely something we want to look at in the future of this game.

 

Animations (Plants, Furniture, Decoration, etc):

The idea for the room and the furniture was to create low-poly items that were also white, to keep the contrast between stationary and interactive items. We wanted the furniture to have a simplistic, modern appeal, so we settled on (mostly) white, geometric furniture. The original plan was to create the geometric furniture in Rhino and the organic furniture in Maya. With the idea of modern in mind, the table and windows were done in Rhino 5, as it has a built-in glass texture; the rest of the furniture and decorations were done in Maya.

Unfortunately, problems arose when we tried to export the 3D models. It turns out the glass material in Rhino 5 does not transfer smoothly into Unity, and Unity also could not interpret the plastic materials, turning our white cabinets a dark blackish-brown. Fortunately, the models done in Maya exported perfectly. Determining the colours for the decorations was tricky: we wanted the plants to be the focal point of the game, but we also did not want the decorations to blend into the furniture. We finally decided on a darker version of the game’s colour palette, since dulling the decorations allowed the plants to be emphasized.

 

All the plant assets were created in Maya, using a set colour theme for emphasis; this also gave a visual cue as to what the player could interact with. We stuck to a low poly count when creating our plants for aesthetic reasons, but also to keep modelling simple, as we only had a couple of weeks to work on it. The animations were done via Maya keyframes. Due to time constraints, they were kept to simple rotations and translations; with more time, we would have added more complex movements and interactions. One of the biggest issues with the plants was importing them into Unity: while some worked fine, some did not retain their colours, and some did not retain their animations (which is why some animations were missing during our demo). Working with Maya was definitely a learning experience for us. We discovered the limitations of our given timeframe, but also learned a lot about the exporting process and which shaders and shapes will not export to Unity. Additionally, we learned a lot about file and asset management, and about making the plants seamless and easy to unpack in another piece of software. The designs of our plants were loosely based on what one might see in real life, with a surreal fantasy twist for interest.

 

Layout of the Interactive Space

screen-shot-2017-12-07-at-11-02-06-pm

 

Examples of Furniture

24899237_1510375385667125_1184426542_n 25105692_1510375389000458_1289445545_n 24899250_1510375382333792_1552450227_n 24898919_1510375402333790_2013627008_n 24992173_1510375392333791_1145243278_n 24891490_1510375395667124_901263108_n

 

Examples of Plants

 

screen-shot-2017-12-07-at-9-28-27-am   screen-shot-2017-12-07-at-9-29-11-am screen-shot-2017-12-07-at-9-29-54-am screen-shot-2017-12-07-at-9-30-56-am screen-shot-2017-12-07-at-9-31-22-am screen-shot-2017-12-07-at-9-31-56-am

 

Final Game Screenshots

process1 room room1 room2 room3 screencap1 screencap2 screencap3 workinprogress

 

Final Game Video (Must click link because it exceeds max. size for blog site):

https://drive.google.com/file/d/1EmM0zMIjfQHS8mnv1Fkov4UM2_g3rps3/view?usp=sharing

 


Dress of Life

Please click here to see the full documentation; this blog post contains the main aspects, while the full document contains all details.

Press Kit:

Dress of Life

By Mika Hirata, Katrina Larson, Ziyi Wang, Vivian Wong, Anran Zhou

‘Dress of Life’ is an installation piece that depicts a brief, abstract history of evolution. Utilizing projection mapping, animations, and sound, it creates an awe-inspiring environment that draws the viewer in. As the animations are projected onto a dress form, the viewer is compelled to contemplate and recognize their place in the world as a part of evolution, not as some higher being. The dress is made from scraps of paper that emulate nature, featuring elements inspired by rivers, flowers, and tree bark. The front of the dress displays the abstract representation of evolution, while the back plays the animations in reverse to represent how life continues to cycle. We hope this installation helps the audience realize that they are just a stop in evolution, rather than the end of it.

Dress of Life Full Video

Dress of Life – Front Only Video


Technical Documentation

Parts/Equipment List:

  • Paper
  • Glue gun
  • Two Projectors
  • Two Laptops

Software:

    • p5.js
  • Adobe Premiere Pro

Code: (All links in coding process documentation)

Design Files / Assets: Link to master folder

All assets and files can be found in this master folder.

Context Summary:

The Dress of Life is an art installation that explores the human connection with nature by displaying basic evolutionary history on a dress. Using animation and code, we project an abstract depiction of evolution onto a dress, connecting humans to evolution and reminding us that we are the result of all of it; essentially, we wear the history of evolution on our backs. In this piece we combined traditional and new artistic media, pairing coded animation and projection mapping with fashion design. This combination relates to the concept of evolution, showing both old and new methods of creating.

Several artists work in spaces similar to ‘The Dress of Life’. For the base of our piece, we were inspired by Laura Baruel’s paper dresses from an exhibit called Rococo-Mania. Baruel utilized paper and fabric to create ornate, structured dresses representing different natural elements. She merged nature and architecture in the sculptures and created extravagant silhouettes, but by exclusively using white materials she forces the audience to focus on the textures and shapes of the dress. While designing our dress, we took inspiration from Baruel’s structured elements and her use of texture. Rather than using white materials to guide the viewers’ eyes, we used them for their efficacy as a projection surface […]

Click here to see full documentation for full context summary.

Process Journal

Projection Process
1. Challenges

At first, we tried to use professional projection-mapping software to create a more advanced and interactive projection-mapping experience. We tried three programs: MadMapper, VPT 7, and HeavyM. We were extremely limited by the free versions, and there were compatibility issues with VPT 7. None of these allowed us to create more than one animation, so for now we chose to integrate our animations as a single video file and project that; once we have a MadMapper licence, we will continue the project and explore further.

 2. Solutions

Trial 1: After getting advice from Haru, Adam, and Kate, we tried mapping the image onto the wall and onto people using Adobe Illustrator’s mask function. Through this experimentation we learned more about our limitations: images cannot be projected onto dark objects or clothing.

Trial 2: We switched to a 4K image; however, the resolution of the projection was not good enough to show a difference between the regular and 4K images.

Trial 3: We used an animation (a screen capture of p5.js code created in Experiment 3) and cropped it into a dress shape using Adobe Illustrator and Premiere. The moving imagery on the human body was very interesting to watch.

img_5641-2 img_5642-2
Left: First attempt to project a mask of the dress shape using Photoshop and a projector. Right: Projection results on a person.

Coding Process – details in full document.


Final Commented Code:

All Animation Videos:
https://drive.google.com/drive/folders/1usZaR5HereeAopOvdapLQGIpT4FrRj-6f30

The projected video for the front of the dress:
https://drive.google.com/file/d/1qMvVOXygEMRgoMTX_X6Lo5YFJnd4Aemx/view?usp=sharing

The projected video for the back of the dress:
https://drive.google.com/file/d/1V–IixYYoxdTeQZowQpN-J0RHx6Q2dAZ/view?usp=sharing

Dress Design Process

screen-shot-2017-12-07-at-11-45-47-pm screen-shot-2017-12-07-at-11-45-54-pm
We started with a base that would hold the shape, then added texture and designs to it. After that we added crumpled paper as a general surface; the effect fit our concept and created an interesting look.

screen-shot-2017-12-07-at-11-45-15-pm screen-shot-2017-12-07-at-11-45-24-pm
After the base was finished, we added more and more pieces. The patterns on the dress are abstracted representations of nature and life; the flowers, petals, long stripes, and rounded shapes all speak to our concept.

screen-shot-2017-12-07-at-11-45-35-pm screen-shot-2017-12-07-at-11-45-40-pm
After finishing the form of the dress, we tested projections onto it, trying our animations to see which worked. Based on that, we changed some of our designs and added more.

Initial Concept: With this being a dress of life, we wanted the dress to look and feel natural, so it would need to be soft and flowy and made of organic forms.

Ideas and inspirations: We wanted the natural elements to be abstract and shown as patterns on the dress, hoping to create a decaying effect at the bottom as a way of showing a stage in the life cycle. We also thought about adding paper-folding techniques to create a flat, clean surface, which might project better.

screen-shot-2017-12-08-at-12-03-55-am screen-shot-2017-12-08-at-12-04-02-am
Once we were almost done creating the dress, we started mapping the patterns onto it in a dark room.

To map the patterns onto the dress, we created a mask of the dress by tracing its outline in Adobe Illustrator.

We combined the patterns we created in p5.js using Adobe Premiere Pro. To crop the animations to the dress shape, we used the Track Matte Key effect; we also added a fade at the end of each animation.

img_0098 img_0107 img_0101
wechatimg229
Another trial using the projectors to map the patterns onto the dress. This time we used two projectors to cover both the front and the back, and our patterns were very clear on the dress even though the room was not dark.

Our last trial: an absolutely dark room, with two projectors mapping the patterns onto the front and the back of the dress, accompanied by music.

 

Music:

https://www.youtube.com/watch?v=kjPTQLgGpq8 Interstellar: A Trip Hop Mix, Confused Bi-Product of a Misinformed Culture

Bibliography:

Toftegaard, Kirsten. “Rokoko-Mania.” Laura Baruël, 2011, www.baruel.dk/filter/Projects/3451239.

This article provided us with an artist statement and commentary regarding Laura Baruel’s work, one of the most influential artists for the dress design. The natural elements that she created using paper inspired us and gave us a guideline for what is possible with paper.

Karayiannis, A., Lapanupat, P., and Zhu, M. “Evolutionary Design.” Nature Inspired Design, 2009, http://www.cs.bham.ac.uk/~rjh/courses/NatureInspiredDesign/2009-10/StudentWork/Group4/EvoArt.pdf.

In this article, the basics of evolutionary design were explored. They discussed the fundamentals of this art and how it uses similar terminology to genetics and evolutionary studies.

Cope, Greg. “Evolutionary Art Using A Genetic Algorithm.” Algosome, www.algosome.com/articles/genetic-algorithm-evolution-art.html.

Another article that provided a summary of genetic algorithms in programming. This type of programming inspired our ‘tree’ style animation.

Hart, Matthew. “Projection Mapping On Clothes Is Now Possible, Great For Making Memes.” Nerdist, October 2016, https://nerdist.com/projection-mapping-on-clothes-is-now-possible-great-for-making-memes/

This was a more casual article in a news format. It showcased the places that we could eventually go with projection mapping. The researchers featured were able to have the projection keep a constant position on a moving fabric.

Kestecher, Alexa. “Couture Fairy Tale.” Creators, July 2012, https://creators.vice.com/en_us/article/z4yjva/designer-franck-sorbier-projects-an-haute-couture-fairy-tale.

Similar to our design, the designer featured in this article used dresses as a screen for projections. He used both his dress and the surrounding area as projection surfaces to create an environment that conveyed a story. This all encompassing projection mapping is an inspiration for where we could take this project in the future.

 


FUTURE SPACE CLOCK

artboard-1-copy-5-100

artboard-1-copy-8-100

artboard-1-copy-9-100


Thorn Choker

Julianne Quiday (Just Julie)

Katrina Larson (Kat)

 

Description/Concept

After realizing how drastically the weather is changing, we decided to make a tattoo about global warming: a thorn-vine tattoo choker in which each thorn represents a month of the year, its size varying with that month’s temperature anomaly. The more degrees above average, the bigger the thorn, and vice versa. More detail on the data is below, but this is basically a way to raise awareness of drastic temperature changes and how they will only grow from here if the human race does nothing about it.

screen-shot-2017-10-26-at-10-54-51-am screen-shot-2017-10-26-at-10-54-59-am

 

The Data

Using data from NOAA (the National Oceanic and Atmospheric Administration), we found the temperature anomalies for each month of the past year.

This data represents the way that temperature is being altered by climate change. Since climate change is an inherently negative topic, we wanted to translate the data into something with an equally negative connotation.

 

Progress

 


img-thing

We were initially going to do a rose-bud vine, but we realized that if our research topic was something unfortunate, we should stick to a darker design.

thornprint20171026_102435

Thorns are a part of nature, but they cause pain to other species, just like rising temperatures. Each thorn’s size is proportional to the temperature’s departure from average for a specific month.

The neck placement was chosen to visualize how these temperature changes can harm humans, with larger thorns (anomalies) causing more damage. We are edgy; I don’t know if you knew that.


My Colourful Tattoo

Daniel McAdam, Thomas Graham, Natalie Le Huenen, Vivian Fu, and Erika Davis

Our project was designed as a fun experiment in our capabilities: creating a personalized image from the data we had collected. The end result is a colourful image made up of multiple blocks of colour, determined by the data we input in Processing. The sketch begins in ‘input mode’: the user types in a string, which is displayed on the screen. Pressing the Enter key shifts the sketch to ‘draw squares’ mode. In this mode, it cycles through each character in the input string and converts it to a hexadecimal value stored in an array. It then cycles through that array, pulling values for each of the red, green, and blue arguments of the fill() method, converting them to integers with unhex() before passing them. It uses the row and column indices of a 2D loop to position each new square, coloured individually by that fill() call, and these accumulate into a grid forming one large square. If the input is not long enough to fill the square, the sketch pulls the previous row of colours down to extend it to the required size. After this, it waits for user input to either save the resulting image (‘S’) or return to input mode (‘R’).
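A condensed reconstruction of that flow (the real code is in the GitHub repository linked below; variable names here are ours for illustration):

```java
String input = "";
boolean drawMode = false;
int cell = 40;  // pixel size of each coloured square

void setup() {
  size(480, 480);
  textSize(18);
}

void draw() {
  background(255);
  if (!drawMode) {
    fill(0);
    text("Type a string, then press ENTER:\n" + input, 20, 40);
  } else {
    drawSquares();
  }
}

void drawSquares() {
  int n = input.length();
  // Each character becomes a hex string, converted back to an int with unhex().
  int[] vals = new int[n];
  for (int i = 0; i < n; i++) {
    vals[i] = unhex(hex(input.charAt(i), 2));
  }
  // Three values (red, green, blue) are consumed per square.
  int side = max(1, ceil(sqrt(n / 3.0f)));
  color[] grid = new color[side * side];
  for (int row = 0; row < side; row++) {
    for (int col = 0; col < side; col++) {
      int idx = (row * side + col) * 3;
      if (idx + 2 < n) {
        grid[row * side + col] = color(vals[idx], vals[idx + 1], vals[idx + 2]);
      } else if (row > 0) {
        // Not enough input: pull the previous row's colour down.
        grid[row * side + col] = grid[(row - 1) * side + col];
      } else {
        grid[row * side + col] = color(255);
      }
      fill(grid[row * side + col]);
      noStroke();
      rect(col * cell, row * cell, cell, cell);
    }
  }
}

void keyPressed() {
  if (!drawMode) {
    if (key == ENTER || key == RETURN) {
      drawMode = true;
    } else if (key >= ' ' && key <= '~') {
      input += key;  // accept printable characters
    }
  } else if (key == 's' || key == 'S') {
    saveFrame("tattoo.png");  // save the resulting image
  } else if (key == 'r' || key == 'R') {
    input = "";
    drawMode = false;  // return to input mode
  }
}
```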

We preferred placing the image near the hand, which we thought was a cool, futuristic idea. On our hands, the square resembled a QR code; because of the similarity, we thought it could become a new design for colourful, unique QR codes, with each square tailored to an individual.

Our code and data set can be found on GitHub:
https://github.com/thomasgraham97/Atelier-Color-Barcode

 

Process Images:

colourfultattoo_page_2_image_0001

colourfultattoo_page_3_image_0001

colourfultattoo_page_3_image_0002

colourfultattoo_page_4_image_0001

colourfultattoo_page_4_image_0002

colourfultattoo_page_5_image_0001

colourfultattoo_page_5_image_0002

colourfultattoo_page_6_image_0001

colourfultattoo_page_7_image_0001

Pressing ENTER activates the image-processing code:

colourfultattoo_page_7_image_0002

 

Final Images:

colourfultattoo_page_8_image_0001

colourfultattoo_page_8_image_0002

colourfultattoo_page_8_image_0003

colourfultattoo_page_9_image_0001

colourfultattoo_page_9_image_0002

