Exp. 3-3-3 Review of Sarah Boo’s Colour Echoes by Neetu Sajan

Sarah’s work:

 https://www.instagram.com/ar/747147612590731/

https://www.facebook.com/fbcameraeffects/tryit/747147612590731/

https://drive.google.com/file/d/1h70UJyLEii7t4wiepjoCYLkfayfOfx3T/view

Sarah’s work is a captivating visual that distorts one’s surroundings by melding colour and form. Not only was it amusing to walk around my surroundings viewing the world through this lens, but each picture taken created an abstract artwork! The lines in the foreground, the contortion of various colours, and the blurred background created a really interesting relationship. Hearing from Sarah gave me additional insight into her inspiration and the process behind the piece.

This was my interview with her: 

What’s the title if you have one? 

Colour Echoes

What was your inspiration behind the concept? 

I liked the idea of warping the image of an existing physical space in a way that referenced digital methods/mediums. Concepts within glitch art and pixel sorting were definitely starting off points.

Any challenges you faced? 

I couldn’t figure out how to pixel sort or analyze the overall composition of the camera image using the patch editor in Spark AR. I also tried to figure out if there was a way to code it in Spark, but the exposed APIs seemed to match what the patch editor already offers.
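For readers unfamiliar with the technique, here is a rough sketch of what pixel sorting involves, written in Python with Pillow and NumPy purely as an illustration. Spark AR’s patch editor doesn’t expose per-pixel access like this, which is exactly the limitation Sarah ran into; the function and parameters below are my own, not part of her filter.

```python
import numpy as np
from PIL import Image

def pixel_sort_columns(path, threshold=100):
    # Load the frame and compute per-pixel brightness.
    img = np.array(Image.open(path).convert("RGB"))
    brightness = img.mean(axis=2)
    # Walk each column and reorder only the "bright" pixels by brightness.
    # (Real pixel sorters usually sort contiguous runs; this is simplified.)
    for x in range(img.shape[1]):
        idx = np.where(brightness[:, x] > threshold)[0]
        if idx.size:
            order = np.argsort(brightness[idx, x])
            img[idx, x] = img[idx, x][order]
    return Image.fromarray(img)

# pixel_sort_columns("frame.png").save("sorted.png")
```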

Would you modify or add something, if so what would it be? 

I would like to learn how to modify the image and sound on a more granular level (e.g., by rows and columns, or translating the image into waveforms) so that I can provide a better translation between the visual and the auditory. Currently, both are related through the device motion, but that really limits the things you can do with it.
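As a thought experiment on the “translate image into waveforms” idea, something like the following Python sketch could scan an image’s pixels and write the brightness values out as audio samples. This is my own hypothetical illustration of the general concept, not anything Sarah built or something Spark AR supports directly.

```python
import wave
import numpy as np
from PIL import Image

def image_to_wav(image_path, out_path, sample_rate=44100):
    # Read the image as greyscale and scan rows left-to-right, top-to-bottom.
    gray = np.array(Image.open(image_path).convert("L"), dtype=np.float32)
    samples = gray.flatten()
    # Map 0..255 brightness to the -1..1 audio range, then to 16-bit PCM.
    samples = (samples / 127.5) - 1.0
    pcm = (samples * 32767).astype(np.int16)
    with wave.open(out_path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(sample_rate)
        wf.writeframes(pcm.tobytes())

# image_to_wav("frame.png", "frame.wav")
```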

What did you learn/enjoy through the process?

I always like playing around with node-based programs, as they lend themselves well to experimentation. I enjoyed following the filter making process to completion, having it usable on the Instagram app. However, I couldn’t help but feel a bit wary about how streamlined the process is. When a big tech company puts enough resources towards developing a free tool to the point where it’s that easy for anyone to upload content, it’s usually for a nefarious reason. 

And what tools did you end up using? 

Spark AR

And could you very briefly guide me through your process creating it?

It was mostly experimentation with a vague goal. I figured the first step to getting something pixel-sorting-like was to get a visual feedback loop going, so I used my previous knowledge about achieving that through other software. I looked through the patch editor tools and linked a bunch together until I got something working. From there, I continued adding and rearranging nodes with different effects, playing with the blend modes, and tweaking the parameters until I achieved a look I thought was interesting. With the audio, I started with two recorded sound clips (that I made) and added various effects that responded to device interaction until I found an interesting combination.
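To make the “visual feedback loop” idea concrete, here is a toy NumPy sketch of the general principle: each new frame is blended (here with a screen blend) against a shifted, faded copy of the previous output. The function names and parameters are my own guesses for illustration, not Sarah’s actual patch graph.

```python
import numpy as np

def screen_blend(a, b):
    # "Screen" blend mode on float RGB images in the range [0, 1].
    return 1.0 - (1.0 - a) * (1.0 - b)

def feedback_step(camera_frame, prev_output, shift=2, decay=0.9):
    # Shift the previous output a few pixels and fade it slightly,
    # then blend that echo back over the new camera frame.
    echoed = np.roll(prev_output, shift, axis=1) * decay
    return np.clip(screen_blend(camera_frame, echoed), 0.0, 1.0)

# prev = np.zeros((240, 320, 3))
# for frame in camera_frames:   # hypothetical stream of float RGB frames
#     prev = feedback_step(frame, prev)
```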
