Mustafa Abdelfattah, Sanmeet Chahil, Tristan Santos



Project Description

DereScape is an interactive performance art installation that takes the viewer on a journey through videos of Toronto, projecting them onto a space that envelops passersby in vivid, interactive animations. What inspired the creation of dereScape was our desire to redefine photography and videography.

Using the city as the backdrop of the installation, we explore the urban environment and what it means to us and to the viewers themselves. This project encompasses photography, videography, interactive animation and digital projection. These mediums are related in that each produces a static visual representation viewed from a two-dimensional perspective. DereScape combines them and immerses the user in an interactive cityscape represented through pixels that vary in depth, helping to evoke the sense of a three-dimensional experience. The degree of pixelization the environment undergoes is determined by user interaction, via a live audio feed from the installation space.

DereScape invokes a meditative and sublime state through audio interaction, projection, and pixel animations. The viewer can walk in front of the projection to become part of the scene, but as they talk, the projection begins to shift and morph in real time. Conceptually, sound added to the space becomes noise in the background of the image.

DereScape’s performance art aspect stages an unconventional projection photography studio session, introducing a photographer and casting the visitors as models. When the photographer is not focused and in their element, or when the subject is not still, the environment goes out of focus – in this case, it pixelates.

Ultimately, dereScape blends performance art, photography and interactive digital art to create a seamless experience, showcasing our theme of redefining photography.

Photo Journal

Link To PDF

Teaser Video

Technical Documentation

Building the dereScape installation requires:

  • A laptop
  • An external microphone
  • A projector
  • Two (2) speakers
  • A camera
  • A photographer
  • Volunteers

With regard to software, dereScape requires Processing with a few external libraries: PeasyCam, for controlling the three-dimensional camera; Minim, for audio input; and Processing’s video library, for rendering the clips of the city.

Installation Diagram



Link to Zipped Code

Context Summary

DereScape is an interactive performance art installation designed and coordinated by Mustafa Abdelfattah, Sanmeet Chahil and Tristan Santos of OCAD University. Run by the three of them, the installation aims to redefine the standard of projection photography. With experience in programming, graphic design and photography, the group has designed an installation that demonstrates the possibility of blending interactive animation and photography into an overall seamless performance piece.

DereScape involves four main components: the subject, the photographers, the animation and the audience.

In essence, dereScape is an installation in which the photographers (in this case Mustafa, Sanmeet and Tristan) interact with a volunteer subject by asking them interview-style questions on the underlying theme of “one’s own journey in Toronto”. The subject is placed in front of a projected video that is animated by the subject’s interaction. The subject responds to the photographers in speech and body language. Vocal projections are picked up by a microphone placed near the subject, and the microphone data is passed to a Processing sketch that converts the audio into usable values; these values alter variables in the animation code, changing the form of the video projected onto the subject. The photographers photograph the subject as they answer the questions, capturing the interaction between subject and animation through the subject’s body language and the visualization of the audio data.

The group drew inspiration from previous projects they worked on together, as well as personal endeavours pursued outside of the classroom. Mustafa, Sanmeet and Tristan previously worked on a project titled “Pixalbum”, an exploration of raw audio data gathered from MP3 files and visualized as sound waves projected on a pixelated album cover. Pixalbum produced still images that were later exported, printed and produced as temporary tattoos. It also allowed the group to devise one of the main functions of dereScape’s interactive animation: a pixel generator. Originally programmed by Sanmeet, the pixel generation code from Pixalbum was later expanded to work with video instead of still images, and the group successfully rendered that video in a 3D space. They were also able to use live audio data sent from a microphone to alter the overall pixel density and form of the pixelated video, responding to user audio interaction in real time.
As mentioned previously, the group also drew inspiration from individual projects; each member has experimented extensively with photography and developed a personalized style. The group found projection photography of particular interest and wanted to explore the field in more depth while applying the skills they already possess in programming and generative art. Unlike most projection photography projects, dereScape offers a more intimate and interactive experience between the photographers, the subject and the projection, while also blending in performance elements in the form of an interview between the photographers and the subject for the audience to engage with.

Through research, the group took influence from a variety of sources on immersive light photography, projection mapping and integrating video in Processing. Most of the influence and guidance came from YouTube tutorials and articles written on projection photography. A list of references can be seen below.


(2016, May 16). Terra Incognita. Retrieved December 07, 2017, from

Yoo, A. (2015, January 30). Dancing House: Interactive Projection Mapping Crumbles House to Movement. Retrieved December 07, 2017, from

Rozin, D. (n.d.). Wooden Mirror – 1999. Retrieved December 07, 2017, from


Tutorials and Sources
Magic immersive video and visual effects


Rendering video into processing


Sound Effects


Process Journal

Throughout the creation of dereScape, we ran into many challenges and discoveries that ultimately changed our perspective on the project. We worked on a week-to-week milestone basis and took extensive notes on our process.


Project Proposal
The first idea that came to mind when we began the project took into account our common interests. We were all interested in photography, and had never shot projection photography in a studio-like setting before. As a result, we wanted to build on the code Sanmeet had developed for his dMAGE project, but in a video format. We wrote down our thoughts, speculated on a final product, and prepared for the proposal presentation on Tuesday.


Week 1


Tuesday, November 21
On Tuesday, Nov 21st, we presented the idea we had brainstormed earlier to our professors. We received feedback and decided to further research how we could introduce an interactive element.


We were told to consider researching the possibility of representing this digital pixelated cityscape in VR, and to focus more on the interaction we were trying to convey to the user. As a result, we opted to research ways of using sound as a variable – both the ambient sound recorded from the environment and the sound arising from the interaction itself.


Thursday, November 23
We set a goal of completing, by the end of the day, a prototype that ran a pixelated video through Processing. We started from the pixelated-movie example in Processing’s built-in video examples, and wondered whether we would be able to successfully render a 2D video in a 3D space.
After hacking at the Processing code and swapping the picture for a video using Processing’s video library, we successfully rendered a video in Processing. Once it rendered, we added back the pixelization element to vary the depth of the video and create a 3D effect.
After viewing the result, we learned that the pixelation can influence the aesthetic of the video because of how the colours are rendered. The videos worked the way we wanted them to; we just needed more content. The videos that looked best featured people walking close to, or past, the camera: the closer they were, the more of the frame they took up, making them more visible even through the pixelization. We discovered that perspective is highly crucial for the pixel processing, as it adds a layer of abstraction.
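The grid-sampling idea behind the pixelated 3D effect can be sketched in plain Java. This is an illustration only: the class and method names are ours, and the real sketch works on frames from Processing’s video library rather than a raw brightness array.

```java
// Sketch of the grid-sampling logic behind the pixelated video effect.
// A video frame is modeled here as a 2D array of brightness values (0-255);
// each sampled cell's brightness is mapped to a depth along the z-axis,
// producing the "extruded pixel" 3D look described above.
public class PixelGrid {
    static float[] depthsFor(int[][] brightness, int cellSize, float maxDepth) {
        int rows = brightness.length / cellSize;
        int cols = brightness[0].length / cellSize;
        float[] depths = new float[rows * cols];
        int i = 0;
        // Sample one pixel per cell and scale its brightness to [0, maxDepth].
        for (int y = 0; y < rows * cellSize; y += cellSize) {
            for (int x = 0; x < cols * cellSize; x += cellSize) {
                depths[i++] = brightness[y][x] / 255.0f * maxDepth;
            }
        }
        return depths;
    }
}
```

A larger cellSize means fewer, chunkier pixels, which is why subjects close to the camera (filling more cells) stayed readable through the pixelization.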


Interesting Ideas
An important aspect of any installation is the medium through which it is viewed, and as a result we considered creating a ‘hologram’ with the pyramidal transparent glass method. The only issue was that we couldn’t implement it in a room environment, as it would require large amounts of material to create a convincing effect. We also thought that speaking in the cityscape environment might make you feel as if you don’t have a voice and aren’t being heard. We wanted to make it so that the more you speak, the more disconnected you become from the space, while the quieter and softer you are, the clearer it becomes. Tristan related this experience to sleep paralysis – where your consciousness is awake but you can’t physically wake up.


Next Tasks
For next week, we wanted to research how to use the microphone’s sound input as a dynamic variable that increases or decreases the pixel size of the video, making it interactive.


Week 2


Tuesday, November 28
Our goal for today was to get the microphone to dynamically affect the pixelation rate of the video. With the help of Adam Tindale, we were able to use the Minim sound library’s audio input mechanics – mic.mix.level() – which buffers audio from the microphone and returns its level.
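As a rough sketch of what that level value represents: mix.level() in Minim returns roughly the RMS amplitude of the current audio buffer, a value between 0.0 (silence) and about 1.0 (full scale). The Java below illustrates the idea and is not Minim’s actual source.

```java
// Approximation of what Minim's mix.level() computes: the RMS
// (root mean square) amplitude of the samples in the current buffer.
public class AudioLevel {
    static float level(float[] buffer) {
        float sum = 0;
        for (float sample : buffer) {
            sum += sample * sample; // accumulate squared samples
        }
        return (float) Math.sqrt(sum / buffer.length);
    }
}
```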


One challenge we encountered after getting the microphone input to work was that our logic was stepwise, built from if statements: if the microphone level fell within a certain range, the pixel size stayed at a fixed value rather than changing fluidly.
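The difference between the banded if-statement approach and a continuous mapping can be illustrated like this (the thresholds and sizes here are made up for the example, not the sketch’s actual values):

```java
// Why banded if-statements made the pixelation jump: the size snaps
// between a few fixed values, while a continuous mapping (like
// Processing's map()) tracks the mic level smoothly.
public class SteppedVsSmooth {
    // Original stepwise approach: one fixed size per level band.
    static int stepped(float level) {
        if (level < 0.1f) return 30;
        if (level < 0.3f) return 60;
        return 90;
    }

    // Continuous alternative: linearly map level in [0,1] to size in [30,90].
    static int smooth(float level) {
        return Math.round(30 + level * 60);
    }
}
```

With the stepped version, a level drifting from 0.09 to 0.11 makes the pixel size leap from 30 to 60; the smooth version moves only a couple of units.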


Next Tasks
For next week we had to capture more video of the city to construct a narrative. Furthermore, we had to find a better, more gradual way of getting the pixel sizes to change smoothly. We also needed to optimize the sketch to run more fluidly, since when the pixel size was too small the sketch could not render the video fast enough and would freeze. Lastly, we had to decide on the presentation space and exactly what equipment we needed to rent.


Thursday, November 30
Today, we wanted to combine the videos we had collectively shot and organize them into a narrative that would carry through to the questions we would ask the volunteer/model during the shoot. We also wanted to smooth the visual pixelation element of the installation and optimize the performance of the sketch.


In order to smooth the visual transitions of the pixel sizes, we needed to drive the sketch directly from the mic input level. The issue was that the raw mic input values were very low, and to get accurate readings we had to amplify the microphone sensitivity. We decided to fix this in software rather than hardware, creating a micSens variable that multiplies the level and rounds it to a whole number. Another challenge was the performance of the sketch; to fix that, we introduced an if statement that automatically reset the pixel size to 30 whenever it dipped below that number – anything lower than 30 rendered very slowly and essentially froze the installation.
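A minimal sketch of those two fixes, amplification plus a performance floor, looks like this (the micSens value here is an assumed placeholder, not our actual setting):

```java
// Sketch of the two fixes described above: amplify the tiny raw mic level
// in software, round to a whole pixel size, and clamp anything below 30
// back to 30 so the sketch never renders pixels too small to keep up.
public class PixelSize {
    static final float MIC_SENS = 500f; // software amplification factor (illustrative value)

    static int pixelSize(float rawLevel) {
        int size = Math.round(rawLevel * MIC_SENS);
        if (size < 30) size = 30; // performance floor: below 30 the sketch froze
        return size;
    }
}
```

Silence therefore yields the minimum size of 30 (the clearest the sketch allows), while louder speech produces larger, chunkier pixels.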


Next Tasks
We required more videos to create a more coherent narrative of the environmental space, so we had to go out and shoot over the weekend. We also needed to book the studio where we would present, and perhaps add a PeasyCam tilting method driven by binaural audio input from microphones.


Week 3


Tuesday, December 5
For today we wanted to finalize the equipment we required for the installation and book a room. We also wanted to test the processing sketch for any bugs and ensure it was functioning well.


While finalizing the equipment for the installation, we confirmed that we needed a projector. Since we wanted to project onto two corner walls, Haru Ji raised the issue of projection distortion. She explained that to keep the installation from looking warped, we would have to use projection mapping software to scale it correctly so it appears flat on the wall. Haru recommended using VPT 7 (visual projection tool), and Sanmeet quickly tried to learn its ins and outs. We realized that mapping a live feed from an external program required a plugin, ‘Syphon’. After we installed the plugin, it would not cooperate with VPT 7 and the entire software just borked. As a result, we decided to keep the projection’s distorted perspective, since it covers more surface area of the wall and allows for a more immersive space that fills the visitor’s peripheral vision.


Final Thoughts
Ultimately, towards the end of the project we completed our last-minute explorations and experimented with various textures, patterns and effects for our projected animation. We stuck with our theme of interactive animation and projection photography and worked on perfecting our concept to design an engaging performance for the final presentation. We worked hard on designing a seamless interaction for the subjects and the audience. Staying with the theme of redefining photography, videography and interactive digital art, we created an innovative and aesthetically pleasing experience.


We explored the creation of digital environments using Processing’s 3D animation and video files to animate pixels and their depth. We also incorporated sound from the installation environment to amplify the movement of the video pixels along the Z-axis. The installation’s finalized behaviour works as follows: if nobody is talking in the space, the animation/video becomes clear; if someone is speaking, it becomes more pixelated.


We have discussed plans for the future of this project thoroughly. With more time and resources, we would like to explore more interactivity, possibly using an Arduino and its ability to handle numerous sensors, and more specifically interfacing with a Kinect sensor. We also plan to scale the project down and create a version of dereScape that explores the most intuitive and sensible ways to interact using the sensors built into one’s own computer, so that those without the equipment can try their hand at the same project. We have also discussed experimenting with different styles of projection, including 360-degree projection, allowing subjects to rotate within the experience (e.g., six environments in a 3D cube structure, with each side an environment).


In the end, through the expansion of our original theme we created an interactive space that focuses on introspective reflection, silence and the analysis of one’s own experience within the city of Toronto. We found that our exploration of video pixelation manipulated by a subject’s voice worked well in an interview-style performance piece. Overall, we are satisfied with what we have designed and are hopeful about the possibilities of dereScape’s future expansion.


Photographs (Full Photo Journal Here)



This entry was posted in Experiment 4.
