
Project 3 – Strange Networks – Tyler, Mufaro, Mona, Gavin

 



Video of Lines (2016)
Video of Arduino and Pure Data project (2022)


The Lost City of Eslinas by Mari.K aka MadMaraca (2021)
Researchers fitted some moon jellyfish with a prosthetic “swim controller”. Credit: Nicole Xu and John Dabiri / Caltech
The electronic swim controller made the modified jellyfish swim nearly three times faster than their normal speed. Credit: Nicole Xu and John Dabiri / Caltech
Civilization by Abrar Khan (2019)
Our ideal scene illustration, “Cyborg Jellyfish”, made with Ai, Blender, Procreate and Photoshop

Files for programming: https://github.com/tbeattysk/Cyborg-JellyFish
Files for Toronto cityscape: https://github.com/msvive/TorontoGIS-3D-model.git



Link to full video

BIBLIOGRAPHY

Developconference.com. 2022. “Dom Clarke: Develop Conference.” Develop Conference Brighton. https://www.developconference.com/whats-on/2019-speakers/speaker-detail/dom-clarke. [Accessed 12 December 2022]. 

Guido, Giulia. 2020.  “The Surreal Photographs by Elia Pellegrini.” Collateral. https://www.collater.al/en/surreal-photographs-elia-pellegrini/. [Accessed 12 December 2022].

Moment Factory. 2022. “Massive Media Architecture at Resorts World Las Vegas.” Moment Factory. https://momentfactory.com/work/all/all/resorts-world-las-vegas. [Accessed 12 December 2022]. 

Pen, Kim Seung. 2022. Kim Seung Pen. https://kimseungpen.com/. [Accessed 12 December 2022].   

Soundslikelind. 2016. “Project: Lines – Interactive Sound Art Exhibition: Cycling ’74.” Project: LINES – Interactive Sound Art Exhibition | Cycling ’74. https://cycling74.com/projects/lines-interactive-sound-art-exhibition. [Accessed 12 December 2022].  

Soundslikelind. 2016. Sounds like Lind. https://www.soundslikelind.se/. [Accessed 12 December 2022].  

“Gallery.” 2018. LOT REIMAGINED: ACROSS AN EMPTY LOT. https://emptylot.weebly.com/gallery.html. [Accessed 12 December 2022].

Spiral Circus. 2022. “Silt for Nintendo Switch.” Nintendo Official Site. https://www.nintendo.com/en-ca/store/products/silt-switch/. [Accessed 12 December 2022].

Bushwick, S. 2020. “Cyborg Jellyfish Could One Day Explore the Ocean.” Scientific American. https://www.scientificamerican.com/article/cyborg-jellyfish-could-one-day-explore-the-ocean/. [Accessed 12 December 2022].

Sketch 5 – Gavin Tao


For this sketch, I was testing out how to make Processing recognize a series of .png files and display them in sequential order, so that they read as an animated image.

I used an imageCount variable together with a consistent filename pattern, so Processing can build each filename in a loop rather than having every file typed into the code by hand. This is especially helpful if you have a lot of images.

I also used mousePressed, so that the lines “grow” while the mouse button is held down and stop growing when it is released.
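As a rough illustration of the approach (the frame count and the “lines-0001.png” filename pattern below are placeholders, not the actual files from my sketch):

int imageCount = 30;                        // total number of frames (placeholder)
PImage[] frames = new PImage[imageCount];
int currentFrame = 0;

void setup() {
  size(640, 480);
  frameRate(24);
  for (int i = 0; i < imageCount; i++) {
    // nf() zero-pads the number so it matches the filename pattern,
    // meaning no filename has to be typed out by hand
    String filename = "lines-" + nf(i + 1, 4) + ".png";
    frames[i] = loadImage(filename);
  }
}

void draw() {
  background(0);
  // Advance the frame only while the mouse button is held, so the lines
  // "grow" on press and stop growing on release
  if (mousePressed && currentFrame < imageCount - 1) {
    currentFrame++;
  }
  image(frames[currentFrame], 0, 0, width, height);
}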

Github of code:
https://github.com/geipanda/Sketch5_ImageSequence

Resources:
– This example on the Processing website helped me figure out how to use imageCount.
– Animation created by me.

Sketch 4 – Gavin Tao


Communication between Arduino and p5.js.

I loaded an animation into p5.js and mapped each individual frame to the reading from the Arduino’s potentiometer, which gives a tactile feeling of spinning Cloud’s sword. Because I didn’t set any kind of “reset” function, the animation works in both a clockwise and counter-clockwise manner (a rough sketch of this mapping follows the serial-port note below).


Initially, before loading the images, I tested a very simple project: dimming an LED with the potentiometer. I kept the LED in the final build because it acts as a signal that the Arduino side of things is working whenever I run into any kind of error.
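A minimal Arduino sketch along these lines, with assumed pin choices rather than the exact ones from my build, reads the potentiometer, dims the LED with PWM, and sends the reading over serial for p5.js to pick up:

const int potPin = A0;   // potentiometer wiper (assumed pin)
const int ledPin = 9;    // PWM-capable pin for the LED (assumed pin)

void setup() {
  pinMode(ledPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int potValue = analogRead(potPin);   // 0-1023
  int brightness = potValue / 4;       // scale down to 0-255 for PWM
  analogWrite(ledPin, brightness);     // dim the LED with the potentiometer
  Serial.write(brightness);            // send a single byte for p5.js to read
  delay(10);
}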

p5.js can’t directly read the serial information from the Arduino, so you must use an external program called p5.serialcontrol to open up the port and relay the data for p5.js to read: https://github.com/p5-serial/p5.serialcontrol/releases
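Roughly, the p5.js side can look like the sketch below, assuming the p5.serialport library is loaded alongside p5.serialcontrol; the port name, frame filenames and frame count here are placeholders rather than the values from my actual build.

let serial;                     // p5.SerialPort object from the p5.serialport library
let frames = [];                // preloaded animation frames
const numFrames = 12;           // number of exported frames (placeholder)
let potValue = 0;               // latest byte sent by the Arduino (0-255)

function preload() {
  for (let i = 0; i < numFrames; i++) {
    frames.push(loadImage('frames/cloud-' + i + '.png'));  // placeholder filenames
  }
}

function setup() {
  createCanvas(400, 400);
  serial = new p5.SerialPort();
  serial.on('data', serialEvent);           // called whenever new serial data arrives
  serial.open('/dev/tty.usbmodem14101');    // placeholder port name
}

function serialEvent() {
  potValue = Number(serial.read());         // read the single byte from the Arduino
}

function draw() {
  background(0);
  // Map the 0-255 reading onto a frame index. Turning the pot "spins" the sword,
  // and because nothing resets the index it works in both directions.
  let frameIndex = floor(map(potValue, 0, 255, 0, numFrames - 1));
  image(frames[frameIndex], 0, 0, width, height);
}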

Resources:
Serial Communication with Arduino and p5.js: https://medium.com/@yyyyyyyuan/tutorial-serial-communication-with-arduino-and-p5-js-cd39b3ac10ce
Serial input to p5: https://itp.nyu.edu/physcomp/labs/labs-serial-communication/lab-serial-input-to-the-p5-js-ide/
Cloud Strife gif: https://www.deviantart.com/zerolympiustrife/art/FFBE-Cloud-Strife-gif-1-707432653

My github containing the p5.js code and Arduino code:
https://github.com/geipanda/Sketch4_Pot_GIF

Fritzing diagram: sketch4_fritz_bb

Project 1: Screen Space – Victoria, Maryam, Gavin

Section 1: Related Works Research

A related screen space project is Minion Fun by Atharva Patil. This project uses poseNet to track the movements of the face and trigger minion sounds in different sections of the screen. Atharva Patil is a product designer currently leading design at Atlas AI, building geospatial demand-intelligence tools.

Picture of the work: screen-shot-2022-10-24-at-6-40-14-pm

This work relates to our project and research because it uses the face to create a fun interaction with sound. We used this project as our main inspiration, but instead of using the face to produce a fun sound, we used the face to produce a picture of a specific emotion (happy, sad, angry, etc.) on top of a face mask. This project helped us understand how poseNet could be used just for the face and not the entire body.

Section 2: Conceptualization

In the wake of the COVID-19 pandemic, wearing a face mask has become a norm – it is a necessary precaution, and in some countries, still enforceable by law. A face mask typically covers half of a person’s face, obscuring the nose and mouth, which can often make it harder to read another individual’s facial expression. We used this fact as a springboard for our project.

Initially, we brainstormed what a face mask may block in everyday life. As well as the aforementioned difficulty in reading facial expressions, we noted that facial recognition software (such as Apple’s Face ID) often struggles to identify a masked individual. Therefore, we wanted to put the mask at the forefront of our project – to reconfigure it as a tool for effective communication.


Some questions that arose from this initial brainstorm included: how do we ensure that the camera can recognize the face mask as opposed to a fully-exposed human face? How can we approach this in a creative manner? And ultimately, what do we want to express through the face mask?

We ran through a series of ideas – such as attempting to change the colour of the user’s mask itself – until we arrived at the concept of showcasing different facial expressions/emotions directly on the user’s mask, depending on their position within the screen space. The screen is broken down into four sections, each one representing a different emotion – happy, sad, tired, and angry. Corresponding music plays as the user enters each section.
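A rough p5.js/ml5 sketch of that quadrant logic, with the emotion images and music replaced by simple text labels (an illustration of the idea rather than our exact project code):

let video;
let poseNet;
let nose = null;
const emotions = ['happy', 'sad', 'tired', 'angry'];  // one emotion per quadrant

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  poseNet = ml5.poseNet(video);     // load the PoseNet model through ml5
  poseNet.on('pose', gotPoses);     // fires whenever poses are detected
}

function gotPoses(poses) {
  if (poses.length > 0) {
    nose = poses[0].pose.nose;      // track the nose as a proxy for the face
  }
}

function draw() {
  image(video, 0, 0, width, height);
  if (nose) {
    // Work out which quadrant the face is in:
    // 0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right.
    let quadrant = (nose.x > width / 2 ? 1 : 0) + (nose.y > height / 2 ? 2 : 0);
    fill(255);
    textSize(32);
    text(emotions[quadrant], nose.x, nose.y);
    // In the real project, an emotion image is drawn over the mask here
    // and the corresponding music track is triggered.
  }
}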

In the final iteration, we used a webcam and ran the code directly from the p5.js online editor. Ideally, we envision this working as a video filter for a social media platform. With more time, we probably wouldn’t have used the “illustrated mouth” images shown in the presentation. A potential replacement would be actual human mouths, which could create a somewhat uncanny-valley effect that expresses our main idea more clearly: a direct response to the “Hi, how are you?” text displayed on the screen. We weren’t able to finalize this aspect for the presentation because it is difficult to convey sadness, tiredness, or anger through the mouth alone. With more time, we might have created more abstract or experimental depictions of these emotions.


Videos of Project 1:
https://youtube.com/shorts/SR3K_EHgCjw
https://youtu.be/DG0jxVODSEg


BIBLIOGRAPHY:
Patil, A. (2019, January 7). Motion Music. Medium. Retrieved October 24, 2022, from https://medium.com/disintegration-anxiety-1/icm-final-project-53b624770bb6