
Experiment 1: Body As Controller

Internet Attention

My series of studies centered on the internet, the web, social media, and our relationships and experiences with these media. Given that this experiment deals with using our bodies as controllers, my focus was on visualizing the relationships we already have with these platforms and on adding friction to these technologies where necessary: making us more active participants in our relations with these platforms, and taking back some control from the algorithms designed to keep us mindlessly scrolling, clicking, and consuming.

Scroll 1 – doomScroll

For my first scroll, I thought about our relationships with endless feeds and how we have become accustomed to scrolling them for hours on end, even when we might not want to.
Using PoseNet, I prototyped a scenario where the user has to carefully hover their left hand over directional arrows to scroll. This interaction adds friction to the process, limiting the time spent scrolling and making us think more intentionally about interacting with these feeds.
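
A minimal sketch of this hover-to-scroll idea, assuming the standard ml5 PoseNet setup in the p5.js web editor (the arrow hitboxes, scroll speeds, and mirroring are my own hypothetical choices, not the original code):

let video, poseNet, pose;
let feedY = 0; // vertical offset of the placeholder "feed"

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  poseNet = ml5.poseNet(video); // ml5 loaded via script tag, as in the editor
  poseNet.on('pose', (poses) => {
    if (poses.length > 0) pose = poses[0].pose;
  });
}

function inArrow(px, py, a) {
  return px > a.x && px < a.x + a.w && py > a.y && py < a.y + a.h;
}

function draw() {
  background(220);
  for (let i = 0; i < 20; i++) rect(170, feedY + i * 120, 300, 100); // feed items
  const up = { x: 560, y: 40, w: 60, h: 60 };    // hypothetical hitboxes
  const down = { x: 560, y: 380, w: 60, h: 60 };
  rect(up.x, up.y, up.w, up.h);
  rect(down.x, down.y, down.w, down.h);
  if (pose) {
    const wx = width - pose.leftWrist.x; // mirror x so it reads like a mirror
    const wy = pose.leftWrist.y;
    circle(wx, wy, 20);
    if (inArrow(wx, wy, up)) feedY += 3;   // hover to scroll up
    if (inArrow(wx, wy, down)) feedY -= 3; // hover to scroll down
  }
}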

Present Link: https://editor.p5js.org/demilade/full/Ul9HVzur1

Edit Link: https://editor.p5js.org/demilade/sketches/Ul9HVzur1

Interaction Video: https://youtu.be/N67gJCCAJpY

Click 1 – peskyPopups

You've probably been to a website where you were assaulted by one popup after another, seemingly unending, all preventing you from accessing the content you came for. My first click study is a mini game where the user has to hover their left hand over the popup buttons and 'clap to click' them, closing the popups before the screen is overrun and the health bar turns red. The whole process is reminiscent of clapping at pesky insects invading our personal space, reflecting the similar emotions pesky popups evoke.
I use PoseNet to track the user's wrists, and the distance between the wrists simulates a clap click.
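
One plausible version of that wrist-distance check, assuming a pose variable holding the latest PoseNet detection (the threshold and re-arm logic are my own guesses):

const CLAP_DIST = 60;  // pixels; hypothetical tuned threshold
let handsApart = true; // require the hands to separate between claps

function clapped(pose) {
  const d = dist(pose.leftWrist.x, pose.leftWrist.y,
                 pose.rightWrist.x, pose.rightWrist.y);
  if (d < CLAP_DIST && handsApart) {
    handsApart = false;
    return true; // fire exactly one "click" per clap
  }
  if (d > CLAP_DIST * 2) handsApart = true; // re-arm once hands separate
  return false;
}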

Present Link: https://editor.p5js.org/demilade/full/SusMA6Svh

Edit Link: https://editor.p5js.org/demilade/sketches/SusMA6Svh

Interaction Video: https://youtu.be/rZBlnjZtTqM

Click 2 – clickSwarm

This is a visualization of how everything online is in a constant battle for our attention. Every corner of the internet is peppered with calls to action, begging us to click and give up a little more of our engagement.
For this study I use PoseNet to track the user's face via the nose. A cursor follows the nose, and any cursors within a certain threshold 'activate', following you no matter where you go. The user has to blast an imaginary fireball by bringing their wrists together, kamehameha-style, to briefly make the cursors point away; then everything reverts, showing how we only seem to escape this battle for attention for short periods of time.

Present Link: https://editor.p5js.org/demilade/full/J1UxhwUL6

Edit Link: https://editor.p5js.org/demilade/sketches/J1UxhwUL6

Interaction Video: https://youtu.be/s_OZ0YM1F5E

Scroll 2 – speedScroll

Following up on and building on doomScroll, this scrolling experience uses two hands instead of one. The user's left hand hovers over the direction they would like to scroll, and the right hand controls the scroll speed based on the intensity of a vertical hand wipe.
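
A sketch of how those two roles could combine, again assuming a pose variable from the usual PoseNet setup (the mapping from wipe intensity to speed is hypothetical):

let prevRightY = null;

function scrollDelta(pose) {
  // direction: left wrist in the top half scrolls up, bottom half scrolls down
  const dir = pose.leftWrist.y < height / 2 ? -1 : 1;
  // intensity: how far the right wrist moved vertically since the last frame
  let speed = 0;
  if (prevRightY !== null) speed = abs(pose.rightWrist.y - prevRightY);
  prevRightY = pose.rightWrist.y;
  return dir * speed; // add this to the feed offset every frame
}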

Present Link: https://editor.p5js.org/demilade/full/zP7C_Cbh8

Edit Link: https://editor.p5js.org/demilade/sketches/zP7C_Cbh8

Interaction Video: https://youtu.be/9qX_5uFL2vM

Experiment 1 – Body as Controller

The Dawn (Click)

This was the first work that I did, and I tried to keep it as simple as I could while keeping an eye on the aesthetic aspects of my work. When I run the code, the sound of birds chirping plays, similar to what we hear in the early morning. I personally hate the sound of all birds, but I think they play a significant role in helping me expand the narrative and make it more real in terms of how powerful the ambiance feels!

The way the interaction works in this piece is that, using PoseNet, it detects the distance between my wrists, and whenever I clap my hands (basically, whenever I bring my wrists together) a star pops up at a random spot in the sky.


  • Present link : https://preview.p5js.org/Parnian/present/AJzs2kDZS
  • Edit link : https://editor.p5js.org/Parnian/sketches/AJzs2kDZS
  • Video : https://vimeo.com/618232461

 

The Windmill (Swipe)

I have a personal interest in animated movies and cartoons, and this was no exception. To me, the most attractive aspect of this work is the landscape, which is again very simply drawn, yet very satisfying in a nostalgic way.

Using PoseNet, the movement of my nose along the x-axis is detected. If I move to the left, the blades of the windmill spin counterclockwise, and if I move right, they spin clockwise. The speed of the blades' spin is also sensitive to the speed of my nose's x-movement.
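
A small sketch of that velocity-to-spin idea, assuming a pose variable from a PoseNet setup (the 0.01 gain is a hypothetical tuning constant):

let prevNoseX = null;
let bladeAngle = 0;

function updateBlades(pose) {
  if (prevNoseX !== null) {
    const vx = pose.nose.x - prevNoseX; // sign gives direction, size gives speed
    bladeAngle += vx * 0.01;
  }
  prevNoseX = pose.nose.x;
}

// in draw(): translate(hubX, hubY); rotate(bladeAngle); then draw the blades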


  • Present link : https://preview.p5js.org/Parnian/present/Zv48HHyb-
  • Edit link : https://editor.p5js.org/Parnian/sketches/Zv48HHyb-
  • Video : https://vimeo.com/618250438

 

Make-A-Wish… (Click)

This is the only work in which I used clmtrackr, and I am happy to have done so! For some reason I found working with clmtrackr extremely challenging and confusing, and given the amount of time I had to prepare these four works, I know I couldn't have done better, even if I wanted to.

The way it works is that it calculates the distance between point 0 and point 14 on my face, and when it is more than a certain amount, it means I am close enough to blow out the candle and actually turn it off (since we all know that if you're not close enough, no matter how hard you blow, it won't go out). But if you want to play with it, please don't cheat, and make sure you do the actual blowing; it helps you communicate better with it! Also wait for the children to sing the "Happy Birthday" song to you. They will tell you when you can blow out the candle…
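
A rough sketch of that proximity check with clmtrackr, assuming the tracker has been started on the webcam element (the threshold is a hypothetical tuned value):

const BLOW_THRESHOLD = 250; // face width in pixels; hypothetical

function closeEnoughToBlow(ctracker) {
  const p = ctracker.getCurrentPosition(); // array of [x, y] face points, or false
  if (!p) return false;
  // points 0 and 14 sit on opposite sides of the jaw outline, so their
  // on-screen distance grows as the face approaches the camera
  const faceWidth = dist(p[0][0], p[0][1], p[14][0], p[14][1]);
  return faceWidth > BLOW_THRESHOLD;
}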


  • Present link : https://preview.p5js.org/Parnian/present/p6gUktmSM
  • Edit link : https://editor.p5js.org/Parnian/sketches/p6gUktmSM
  • Video : https://vimeo.com/618253827

 

Help Isaac! (Swipe)

At first, writing code for a game was very intimidating for me, and I preferred to stay in the "playing around" phase of my learning. But finally, for this last study, having grown more skilled and more confident along the way, I decided to go for a game.

In this study, once detected, the player's nose is attached to a picture of Isaac Newton. If the player manages to stay under the apple falling from the tree, a "Game Win" sound plays, and if not, a "Game Over" sound plays. This way they know whether they interacted with the work in the right or wrong way.
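
One way the win/lose check could look, assuming a pose from PoseNet, an apple object with x/y coordinates, and winSound/loseSound loaded in preload() (all names hypothetical):

function checkCatch(pose, apple) {
  if (apple.y >= pose.nose.y) { // the apple has fallen to head height
    const caught = abs(apple.x - pose.nose.x) < 50; // hypothetical tolerance
    if (caught) winSound.play();
    else loseSound.play();
    return true; // the round is over either way
  }
  return false; // still falling
}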


  • Present link : https://preview.p5js.org/Parnian/present/4q5S9aqDwf
  • Edit link : https://editor.p5js.org/Parnian/sketches/4q5S9aqDwf
  • Video : https://vimeo.com/618254266

 

Hey P5.js! Change colours with my movement

I developed this idea because I was very interested in how body movements can interact with technology to connect with shapes and colours. I have always had a fascination with colours and how they affect the mind and body. For example, in my apartment, almost every light bulb has a colour mode which I can change with a simple voice command. I am interested in exploring how I can move my body and have the lights change according to my movements. Therefore, I put those ideas to use in this assignment.

 

In these projects, I created four interactive studies using four different movements to change colours and move shapes. I used p5.js with PoseNet, a body-tracking model that detects different body joints to form a skeleton and estimate a human pose. Users can move different body parts to interact with the program, exploring various shapes and colours through scroll and click functions.

 

I tried to make this program user-friendly and intuitive, and to ensure that the user has fun! This project was inspired by smart-home design concepts using lights and smart assistants, such as Alexa and Siri, both of which are voice-activated. One of my biggest challenges, particularly as someone who hasn't coded before, was that the camera displays a mirrored image of what is in front of it. So, if the user moves to the right, the computer displays the movement to the left. My hope for the future is that various appliances, like lights, televisions, and speakers, can be manipulated by certain body movements that reflect our moods and desires.

 

Experiment 1: Shoulder Dance

The users can move their shoulders to the left or right, and the four vertical squares will move synchronously, following the direction of the shoulders. This interaction requires a body scroll function. There is a bigger space between the middle two squares that lets users know how far from the camera they should be: they align their eyes to fit between the two middle squares. I chose pale pink as the colour, not just because it's my favourite, but also because it's a happy, calming colour. This colour, combined with the swaying of shoulders, hopefully makes the users feel like they are doing a gentle dance.


Present: https://preview.p5js.org/MerelVS/present/G-uXtdnkL

Edit: https://editor.p5js.org/MerelVS/sketches/G-uXtdnkL

 

Experiment 2: Nose Colours

There are four coloured circles in the four quadrants of the screen. A smaller turquoise circle is projected on the nose and acts as a mouse. When the user moves the turquoise dot with their nose to one of the coloured circles, the colour of that circle is exchanged with the colour of the circle positioned diagonally to it. All the circles in the four quadrants are pastel colours. This interaction requires a click function. Besides being amusing, this project reminds me of the childhood game "connect the dots".


Present: https://preview.p5js.org/MerelVS/present/HrlZqYUyn

Edit: https://editor.p5js.org/MerelVS/sketches/HrlZqYUyn

 

Experiment 3: The Slinky

With the movement of both eyes from one end of the screen to the other, the two circles representing the eyes move and change size and colour. When the user moves right, the circles become smaller and darker. When the user moves left, the circles become larger and lighter. This interaction requires a scroll function. This project reminds me of a childhood toy I used to have: a rainbow-coloured slinky. And the good news is this one doesn't get tangled! Another sensation the user may feel when interacting with this project is that they can paint circles on a blank canvas with their eyes.


Present: https://preview.p5js.org/MerelVS/present/D0qxUJnGH

Edit: https://editor.p5js.org/MerelVS/sketches/D0qxUJnGH

 

Experiment 4: A Colour Party

When either of the dots on the user's wrists touches the opaque pink circle with "HI" written in it, a strobe light of random colours is projected on top of the user to mimic the feeling of being at a party. This interaction requires a click function. This project is for pure fun and amusement. The best way to use it is to set your computer down, stand one meter away so that the camera can detect your wrists, then raise your arms up and down and have a party!


Present: https://preview.p5js.org/MerelVS/present/LRD3bwHrx

Edit: https://editor.p5js.org/MerelVS/sketches/LRD3bwHrx

Experiment 1: Re-learning to learn…how to learn

When I was in my first year of undergrad, I was briefly tasked with learning Processing (similar to p5.js). That was its own challenge, and I remember thinking, "Well, thankfully that's over with; I won't have to use that again." That was nearly six years ago, and yet here we are. My stomach sank seeing Processing and p5.js on the course outline, but onward we go. I would say I succeeded in attempting to re-learn Processing/p5.js while trying to fight off the thought that I'm not good at it. I'm not the best, nor do I think I did all that well, but for everyone learning for the first time, tutorials and remixing code are your best friends on the journey to learning different coding languages. Looking back, I would say that I "failed" at being more creative in my studies. I spent more time learning, watching tutorials over and over, and making sure the code worked than testing out newer or more unique ways to click and scroll. I relied heavily on tutorials and peers, and I found myself reusing code across all four studies because my brain could not process how to do anything complex at the time. When in doubt, ask for help, I suppose. Still, this was an interesting experiment in resetting my mind to think about traditional coding methods.

Study 1: Changing images, in a snap! (Click)

Starting off with the one sketch that doesn't work properly. A common challenge I faced with all four studies was not declaring variables and the like properly, which would lead to my code running but not registering anything. I tried to approach each study by writing the code as simply as I could, without using the webcam until the very end. Clicking with a mouse is one of the first functions you learn, so I attempted (kind of successfully) to show images, "in a snap!". I would consider this partially successful, seeing as the console shows it registering my fingers snapping, but for whatever reason it is not loading the photos. I have a couple of theories as to why this might be, but I have not found a way to fix it yet.


Present     Edit     Demo

Study 2: Head banging until you see colours (Click)

This was inspired by literally head banging and having the blood rush to your head, in which case there is a chance you may see colours. As with the previous click, where rapid movement registers the click, I'm still not certain how to refine the code to track the movements more accurately. This was a challenge as well, since it worked selectively depending on how I wore my hair on webcam: hair up was fine, but down and obscuring my face was problematic. It also registers every head movement I make as a click, so it ends up looking more like what I was attempting for scrolling instead, minus the gradient transition between colours. Looking back, I was trying to find a way for it to register my head at one point on a grid seen by my webcam and use that as the click, but the variety that comes with the rapid colour changes is actually something I enjoy about the outcome, however unintentional.


Present     Edit     Demo

Study 3: Groovy Baby!  (Scroll) 

This was a challenge just based on the type of movement needed, which was a leg movement. I'm 5'11", so even fitting in the frame is a struggle, much less lifting my leg long enough for my webcam to register it. Luckily, framing it as if I'm just moving my mouse across a screen to change the gradient helped immensely. For this one, my webcam had a hard time keeping track of my full body; despite the target being my knee, the skeleton would often "slip" off my body and appear behind me, which quite scared me the first time it happened, as I thought there was someone behind me until I realized what was happening. It is intended to make a gradient of green, assuming it registers my knee, which is the one part of my leg I can keep in frame.


Present     Edit    Demo

Study 3: Groovy Baby! (Scroll) Part 2

I think this one is a bit more of an abstract interpretation of what "scrolling" is, as well as my most successful study. I envisioned my old home desktop running Windows XP, where the volume bar was just that, a bar, but with your mouse you could use the scroll wheel to toggle the volume up or down. So I attempted to code a kind of pseudo music player where doing groovy arm movements would change the volume. This code tracks my hand (more precisely, my wrist) and lowers the volume as I raise my hand. This is because, while sitting to code, I did not want to get up in frame to check whether putting my hand down would lower the volume, so I reversed it to let myself stay seated. Unlike the previous study, my webcam had less to track (only my upper body) and did so with more accuracy.
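
A sketch of that reversed wrist-to-volume mapping, assuming a pose from PoseNet and a song loaded as a p5.SoundFile in preload() (names hypothetical):

function updateVolume(pose, song) {
  // a raised wrist has a SMALL y, which maps to a LOW volume here,
  // matching the deliberately reversed control described above
  const v = map(pose.rightWrist.y, 0, height, 0, 1, true); // true = constrain
  song.setVolume(v);
}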


Present     Edit    Demo


*You will notice across my edit links that my projects have everything sorted into folders in the sidebar. This is just a preference, as I like to have my assets collapsed instead of seeing them stacked on top of each other.

*Please note my bibliography is quite long as I am citing every resource and media I used to learn.

Unknown. “11.jpg”, Pinterest, Uploaded by Отец всея Руси. https://www.pinterest.ca/pin/1076219642174139140/

Unknown. “12.jpg”, Pinterest, Uploaded by Lemon Nomel. https://www.pinterest.ca/pin/1076219642174137677/

Unknown. “13.jpg”, Pinterest, Uploaded by Unknown User. https://www.pinterest.ca/pin/1076219642174120807/

Unknown. “14.jpg”, Pinterest, Uploaded by ylino. oui. https://www.pinterest.ca/pin/1076219642174045040/

Turbo ft. Yoko Takahashi. “A Cruel Angel’s Thesis / Eurobeat Remix”, YouTube, Uploaded by Turbo, 2019. https://youtu.be/1gW1uHRPChc

Dan Shiffman. “7.6: Clicking on Objects -p5.js Tutorial”, YouTube, Uploaded by The Coding Train, 2015. https://youtu.be/DEHsr4XicN8

Dan Shiffman. “Q&A #1: Side-Scroller in p5.js”, YouTube, Uploaded by The Coding Train, 2016. https://youtu.be/DEHsr4XicN8

Biomatic Studios. “Let’s make Pong! (Tutorial for beginners) $1 – p5js”, YouTube, Uploaded by One Man Army Studios. https://youtu.be/m6H6SHIdAhQ

Dan Shiffman. “ml5.js: Webcam Image Classification”, YouTube, Uploaded by The Coding Train, 2018. https://youtu.be/D9BoBSkLvFo

Dan Shiffman. “11.1: Live Video and createCapture() – p5.js Tutorial”, YouTube, Uploaded by The Coding Train, 2016. https://youtu.be/bkGf4fEHKak

Kazuki Umeda. “Face detection (webcam) for p5.js coders.”, YouTube, Uploaded by Kazuki Umeda, 2021. https://youtu.be/3yqANLRWGLo

Dan Shiffman. “7.4: Mouse Interaction with Objects – p5.js Tutorial”, YouTube, Uploaded by The Coding Train, 2017. https://youtu.be/TaN5At5RWH8

Dan Shiffman. “9.12: Local Server, Text Editor, JavaScript Console – p5.js Tutorial”, YouTube, Uploaded by The Coding Train, 2016. https://youtu.be/UCHzlUiDD10

Dan Shiffman. “P5.js Web Editor: Uploading Media Files – p5.js Tutorial”, YouTube, Uploaded by The Coding Train, 2018. https://youtu.be/rO6M5hj0V-o

Lauren Lee McCarthy. “Reference: p5.SoundFile”, Webpage. https://p5js.org/reference/#/p5.SoundFile

Dan Shiffman. “ml5.js Pose Estimation with PoseNet”, YouTube, Uploaded by The Coding Train, 2020. https://youtu.be/OIo-DIOkNVg

Gloria Julien. “Code Test: Removing Images [p5.js]”, Webpage, 2019. https://medium.com/@gloriajulien/code-test-removing-images-p5-js-1fdd651e3a5f

Augmented Alice

Project Description

Augmented Alice is a series of four interactive sketches that connect with each other to imagine an augmented journey in the near future. In this experiment, I trigger different scenarios in mixed-reality settings by tracking body and hand position with p5.js and PoseNet.

 

Overall successes/failures

The process of learning creative coding is like learning a new language for me. At the beginning, it took time to get familiar with the programming environment, the syntax, and the logic flow before I started creating anything. In the first study, I experimented with replacing mouse clicking with computer-vision interaction, which went smoothly and encouraged me to take a step further. Continuing with PoseNet and Face API for body- and face-tracking functions, I started off intimidated and found it a bit tough, but once I stopped staring at my frustrated face in the webcam and spent time examining every line of the code to find the breakthrough, it turned out to be promising. The Learn section of p5.js was really helpful when I wanted to systematically understand how the code works. I believe there is a lot more I can discover, and I will work on creating more engaging responsive media to add playfulness to my interactive work in the future.

 

Study 1 Hyper consumerism

Video Link: https://vimeo.com/616672424

Study Description:

The idea for this sketch originates from a guess at how mixed reality will change the concept of the user profile and social interaction. I use emoji as an analogical representation of the new profile of the near future, since emoji are not only a means of expressing emotion but have also been adopted as tools to express relationally useful roles in conversation. The interaction is triggered using the ml5 face-detection and pose-estimation models in p5.js. It follows the movement of the viewer's nose and generates a changing emoji on the face. Key positions such as the left and right ears are also detected to determine the face angle.

 

Present Link: https://preview.p5js.org/beforespring/present/rASN7bEmF

Edit Link: https://editor.p5js.org/beforespring/sketches/rASN7bEmF

 

Study 2 Future Identity

Video Link: https://vimeo.com/616672424

Study Description:

The idea for this sketch originates from a guess at how mixed reality will change the concept of the user profile and social interaction. I use emoji as an analogical representation of the new profile of the near future, converting hexadecimal emoji codes to decimal code points to make them appear. The interaction is triggered using the ml5 face-detection and pose-estimation models in p5.js. It follows the movement of the viewer's nose and generates a changing emoji on the face. Key positions such as the left and right ears are also detected to determine the face angle.
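
The hex-to-emoji step could look like this: an emoji is just a Unicode code point, so a hex string can be parsed to a number and turned into a drawable character (the example code point is my own stand-in):

function emojiFromHex(hex) {
  return String.fromCodePoint(parseInt(hex, 16)); // '1F600' -> a grinning face
}

// e.g. in draw(), pinned to the tracked nose:
// textSize(96);
// text(emojiFromHex('1F600'), pose.nose.x, pose.nose.y);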

 

Present Link: https://preview.p5js.org/beforespring/present/rASN7bEmF

Edit Link: https://editor.p5js.org/beforespring/sketches/rASN7bEmF

 

 

Study 3 Food Delivery

Video Link: https://vimeo.com/616658482

Study Description:

Playing with the visual elements of emoji, I tried to apply p5.js with clmtrackr for facial tracking in this study, but experienced some technical errors in the process, so I eventually used Face-API instead. This sketch provides an instant "food delivery" for viewers. When viewers open their mouths to interact with the computer, a food emoji suddenly appears and flies into their mouths.

 

Present Link: https://preview.p5js.org/beforespring/present/rXS_tEO24

Edit Link: https://editor.p5js.org/beforespring/sketches/rXS_tEO24

 

Study 4 Tamagotchi

Video Link: https://vimeo.com/616668096

Study Description:

In this study, I created a virtual companion whose surface colour and eye movement can be controlled by the position of the viewer's nose. As the viewer moves left and right, the colour varies from apple green to pink and the eyes follow the flow. I use PoseNet to capture the movement of the nose and the map() function to process the RGB values. During the process, I found that when I tried to use the body as controller to generate two immediate feedbacks in one conditional statement, it didn't work and showed an unknown error. I shall try to figure out the reason in the next few days.
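
A sketch of that nose-to-colour mapping, assuming a pose from PoseNet (the exact green and pink values are my own stand-ins):

function bodyColor(pose) {
  const appleGreen = color(141, 182, 0);  // hypothetical RGB values
  const pink = color(255, 182, 193);
  const t = map(pose.nose.x, 0, width, 0, 1, true); // nose x -> blend amount
  return lerpColor(appleGreen, pink, t);
}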

Present Link: https://preview.p5js.org/beforespring/present/cQ2X7Twkp

Edit Link: https://editor.p5js.org/beforespring/sketches/cQ2X7Twkp

HAPPY PLACE

HAPPY PLACE creates a virtual space that invites the user to jump out of their seat and move to the music that plays. I interpreted the outline as four studies that take place simultaneously within one sketch; each individual study responds to an action, and together they produce a drawing or expression that illustrates the user's movement. Focusing on the emotion of happiness, the project aims to create a fun and carefree atmosphere through music, movement, and colour. As users engage with the sketch, they can switch the music on and off, adjust the volume, change the background colour, and draw on the canvas with their body.

One of the hardest parts of this project was simply being able to start. As a beginner in p5.js, having to incorporate PoseNet into the project was both intimidating and, at times, frustrating, especially when it caused unidentifiable errors. However, the process of writing a line of code, testing it, and debugging helped me understand the language and how it works, and it was beyond satisfying when the action and response actually worked. Though my code may be fairly simplistic, I primarily wanted to focus on understanding how the code works rather than trying out complicated techniques, and I was happy to see that, even so, I was able to produce studies that contributed to my concept.

Links:  Collaborative Sketch Present Link   &   Collaborative Sketch Edit Link

Click One: Change Background Colour

As the beginning step that sets the atmosphere of the collaborative sketch, the first click option asks users to choose their favourite colour as the background. To do so, the user lifts their left wrist to the top-left corner of the sketch, which generates a series of randomized colours the user can choose from as their new background. Following the theme of movement, lifting the wrist to 'click' the randomize button emulates the disco dance move 'The Point': the user performs a similar motion to start and stop the background colour changes.

Links: Click One Video   &   Click One Present Link   &   Click One Edit Link

 

Click Two: Turn On & Off Music

Click Two allows the user to turn on the song 'Good Day' [1] when their left elbow is in the bottom half of the canvas, and turn it off when it is in the top half. The challenge, given the elbow's natural resting position, is that the music plays by default; but since the music would stop once the elbow moved, this seemed like the best option. Going forward, I want to test other methods of keeping the music playing without needing the user to hold a specific position at all times, so that it functions like a proper switch.

Links:  Click Two Video   &   Click Two Present Link   &   Click Two Edit Link

 

Scroll One: Adjust Music Volume

Following Click Two, this first scroll option serves as the next step: controlling the volume of the music being played. Following the right and left shoulder positions, the volume decreases to a value of 0.2 and increases to a value of 1. This scroll action is the equivalent of a volume slider on a computer, and it promotes movement by letting users play with the sounds they create.
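
A guess at how that shoulder-driven slider could map onto the stated 0.2–1 range, assuming a pose from PoseNet and the song as a p5.SoundFile (both names hypothetical):

function shoulderVolume(pose, song) {
  const y = (pose.leftShoulder.y + pose.rightShoulder.y) / 2;
  const v = map(y, height, 0, 0.2, 1, true); // shoulders raised = louder
  song.setVolume(v);
}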

Links:  Scroll One Video   &   Scroll One Present Link   &   Scroll One Edit Link

 


Scroll Two: Draw on Canvas

Sound can be one of the strongest links to memory and emotion. By choosing a song that produces a fun and lighthearted atmosphere, the user is able to move freely and dance to the music. Tracking the nose position, this scroll function draws a line in randomized colours that follows the movement of your body. Thus, as users move around the canvas, their motion creates a unique piece of art. This works in conjunction with the first scroll option: playing with the volume changes the artwork too.

Links:  Scroll Two Video   &   Scroll Two Present Link   &   Scroll Two Edit Link

 

[1] Good Day by Greg Street ft. Nappy Roots: https://www.youtube.com/watch?v=hjPLkPsLxc4&ab_channel=GregStreetVEVO

 

let’s be young again

Summary

My experiments with p5.js and ML revolve around reviving the fun games I played when I was young and exploring the curiosity generated while conducting these experiments. Additionally, the objectives I set for myself were to understand the level of transformation possible with just the plain data coming from PoseNet, and to see if math can still be fun for me.

These activities, and coding after so many years, made me realise that it's okay to be rusty, especially when you have a tool like p5.js. Although I had to depend a lot on the reference docs and examples, towards the end, creating my own functions seemed possible. All these days I kept experimenting with motion; however, I missed out on playing with audio, and I can imagine how interactive that can become! My learning from this exercise is that there is still lots to explore in the area of creative coding.

Crucial learning: fully read the reference manual and any connected resources! In a couple of experiments, I had to do some math. My first approach was to lean on high-school math and bring in trigonometry and vector algebra. After spending a ton of time, I realized there are far easier ways using ready-made functions, and I could have saved myself the effort.

Experiment 1 – Fly away!


Remember spreading out your hands and pretending that you were a plane when you were young? This interactive piece tracks those exact poses and moves the plane left and right. Control the plane to avoid the obstacles that come up one after another at a random spot.

Link to editable code

Link to interaction

Experiment 2 – Superman is here


Take your glasses off to reveal Superman. The code measures the position of the wrist relative to the nose and, based on that, decides to either show or hide Superman. The next step for this code is to add arm movements that mimic your hands putting on or taking off your spectacles.

Link to editable code

Link to interaction

Experiment 3 – Lights out


This interactive code tracks the movement of your hand to detect a wave. Be sure to use your right hand and wave close to yourself rather than to the camera. One wave tries to blow out the candle, and waiting for a moment returns the flame to its original state. You can never take the light away.

Link to the editable code

Link to interaction

Experiment 4 – Perspectives


This interactive code is a bit different from the others. Sorry, you don't get to play. However, from wherever you try to look at the box, it will adjust its shape so that its opening appears to face you. This code required a bit of trial and error, along with matrix multiplication, to make it work. The black dot and the far side of the box move in the opposite direction to create a simple distortion. Please note, the box isn't moving as a whole; the corners move independently.

Link to the editable code

Link to interaction

 

 

 

 

Abeona: Souvenirs from a P5.js journey

Description

Abeona is the Roman goddess of the outward journey, watching over children when they take their first steps away from home [1]. This project is an overall view of my short journey with p5.js, away from home (as I usually code in a financial/banking context). Along this road, I played with all three of the body-related technologies, using hand, face, and body recognition.

When facing a creative tool as interactive as coding, building a game seems almost inevitable. So the first study is a tribute to my inner child: a candy-eating game using facial recognition. The second study explores other experiences we can have on the 2D screen; this one uses body tracking to entertain my dream of diving into the deep sea. The third study is my way of stepping back and reflecting on how the digital world is not just affecting us but also defining us [2]. For this study, I employed hand-tracking technology to show how demanding the digital world can be.

The pain points of these implementations were mainly on the body-tracking side. The tools introduced are not as accurate or as fast at tracking as most of my ideas would need. It is essential to consider the limits of each library early in the design process. On the other hand, the p5.js community and its immense resources are the Abeona of learning to code: excellent guides that make almost any implementation achievable.

 

1 & 2. The first control in this game is scrolling through head movement from side to side. With this way of scrolling, the user slowly sees the "mouth" image moving in the same direction as they move.


Scrolling through head movement

Secondly, the user controls when the candies are eaten by opening and closing their "mouth" (this acts as a toggle click).

Eating candy through mouth opening

Eating candy through mouth opening

Only with an open mouth can users eat the candies falling their way. The eaten sweets are clustered in a yellow bucket on the top-left side of the canvas. This video shows the user scrolling and clicking with face-tracking technology.

Watch the video here

Try it now!

Click here to play this Halloween-themed game. Make sure you are in a well-lit area and have your face in front of a webcam. Give the tracker a few seconds to load; a couple of seconds after you see your face, you can play.

Edit link

 

3. This project combines the joy of being in nature with coding. For this scroll, the user has to perform a swim-like motion. When the arms open, the picture scrolls. By swimming in front of the camera, users can see the lower parts of the picture, the deeper sea zones.

Watch the video here

I came across a website with a similar idea that has more details on creatures living in different depths of water [3].

Try it now!

Start your adventure by having your arms straight and in front of your chest like this:

Pause scroll with still arms

Pause scroll with still arms

Then start opening them out to the sides while keeping them straight (as you would in a breaststroke) and watch the image scroll to a deeper sea area:

Image scrolls down as user opens up their arms

Edit link

 

4. The last work signifies the competition for our attention in the digital world. In this project, a push button follows the user's index finger and clicks itself. But if the hand posture changes to a fist or a high five, the button stops following the user. This game represents both the demanding digital world and the fact that we can, and should, control our digital activities. Users gain control through awareness of the time they spend on every digital tool. In this interaction, "no action" is still an action [4]; the button will click if users do not restrain it.
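
One plausible way to separate "pointing" from a fist or high five with ml5's handpose model, which returns 21 [x, y, z] landmarks per hand (landmark 0 is the wrist, 8/12/16/20 are the fingertips; the 1.4 ratio is a hypothetical tuning):

function isPointing(hand) {
  const lm = hand.landmarks;
  const reach = (tip) => dist(lm[0][0], lm[0][1], lm[tip][0], lm[tip][1]);
  const index = reach(8);                                 // index fingertip
  const others = (reach(12) + reach(16) + reach(20)) / 3; // remaining fingertips
  // pointing: the index finger extends noticeably further than the curled fingers
  return index > others * 1.4;
}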

Watch the video here

Try it now!

Put either hand on a well-lit blank surface. The camera could have a top or front-facing view. With a pointing pose like this, you will attract the button:

Pointing pose attracts the button

With other poses (fist, high-five), you can stop the button:

Put fingers together to stop the button from moving

Edit link!

 

 

Bibliography

[1] Took, Thalia. Abeona, Roman Goddess of Journeys, http://www.thaliatook.com/OGOD/abeona.php

[2] Mühleisen, Martin. “The Impact of Digital Technology on Society and Economic Growth.” IMF F&D Magazine, June 2018, Volume 55, Number 2, https://www.imf.org/external/pubs/ft/fandd/2018/06/impact-of-digital-technology-on-economic-growth/muhleisen.htm.

[3] Liang, John, et al. “Scroll down to the Bottom of the Sea.” DeeperBlue.com, 17 Jan. 2020, https://www.deeperblue.com/scroll-down-to-the-bottom-of-the-sea/.

[4] Wilson, George, and Samuel Shpall. “Action.” Stanford Encyclopedia of Philosophy, Stanford University, 4 Apr. 2012, https://plato.stanford.edu/entries/action/.

 

Consulted and repurposed code:

Stock images used:

 

 

Experiment 1 – Visualization of Gossip | Siyu Sun

—— Gossip can destroy a person or make them strong.

 


 

Description

Visualization of Gossip (2021) is an immersive interactive narrative work between the human and the environment.

The technique I used here is based on the ml5 case studies of PoseNet and on sound visualization in p5.js, so audiences don't have to use an external input device, such as a mouse, to trigger click events. I set up createCapture() to connect the camera, then combined it with PoseNet's tracking system and the "scroll" and "click" experiments to get feedback from the sound, the graphics, and the state of the screen.

Back to the creative part: this work is divided into four narrative threads. The control-feedback system and purpose of each theme are different. Combining the research done in Experiment 1, I hope to establish an immersive area joining audio and visuals and express my conceptual understanding of rumors and gossip. I used this technique of expression because I am aware that media can attract multiple senses through rich information and has plenty of potential to influence audiences in perception, cognition, and emotion. The sensory or perceptual mode, surround effect, and resolution of an immersive experience help the audience create a sense of presence in virtual events and associate it with a sense of consciousness or substitute reality. [1]


 

 

Context

Study One: Movement and Feedback | ml5, createCapture(), preload()

The prototype of Gossip

This is my first work. I use harsh noise images to show what external evaluation brings. The cacophonous music goes up and down. When people hold their head and rock from side to side, the noise image follows their movement; it is like noise in your head that you cannot get rid of, swirling back and forth.

The main techniques I studied in this part were getting to know ml5 and PoseNet, setting up createCapture(), and learning how to preload media files. The key thing I would like to mention is that I used a person's silhouette instead of the real person over the camera, because I want the whole work to have a sense of unity. To realize this, the important thing is to load the media files inside preload().
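
The preload() pattern described above, roughly (the file names are my own placeholders):

let silhouette, noiseSound;

function preload() {
  // load assets before setup() runs, so they are ready on the first frame
  silhouette = loadImage('assets/silhouette.png');
  noiseSound = loadSound('assets/noise.mp3'); // requires the p5.sound library
}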

Then, I used ml5 PoseNet to capture the position of the audience's nose to control the movement of the background picture, and set the noise image at the position of the eyes and nose, adjusting its orientation and making it follow the audience's movement, so that it is difficult to get rid of, matching the concept.

How to do 

Hold your head and shake left and right.

Problem

The problem I met in this process: the visual identification is not particularly accurate, and it would be better to drive the soundwave image from the true vibration frequency, but I don't know how to combine them yet.

Reference

createCapture(): Creates a new HTML5 <video> element that contains the audio/video feed from a webcam. The element is separate from the canvas and is displayed by default. The element can be hidden using .hide().

preload(): This function is used to handle the asynchronous loading of external files in a blocking way.


p5.js Present:   https://preview.p5js.org/lizsun703/present/Qhbzo9zdl

p5.js Edit:         https://editor.p5js.org/lizsun703/sketches/Qhbzo9zdl

https://youtu.be/nRb6ZiC-kqY

 

Study Two: Movement As a Click | mousePressed()

— Don’t be afraid

When gossip invades your brain, the only thing you can do is beat it and rebel against the odds. In this part, I adapted the click behaviour of mousePressed() to change the image. I set up two different figures: the variation of the picture is controlled by identifying the viewer's waist height, and the noisy image disappears when the viewer raises their hand. It stands for defeating the gossip.

How to do

Move from side to side and raise your hands to see the image disappear.

Problem

However, the biggest regret in this part is that I wanted to make the noise sound disappear when I raise my hand, but I tried many times without success. I hope to continue studying this question in the next few days.

Reference

mousePressed(): This function is called once after every time a mouse button is pressed over the element. Some mobile browsers may also trigger this event on a touch screen if the user performs a quick tap. This can be used to attach element-specific event listeners.

 


p5.js Present: https://preview.p5js.org/lizsun703/present/M38Co7k5r

p5.js Edit:       https://editor.p5js.org/lizsun703/sketches/M38Co7k5r

 

 

Study Three: Sound Visualization | waveform()

— Self-digestion

This is an area full of noise and language. The inspiration for this work comes from the interactive text experience of "IN ORDER TO CONTROL". In this study, I learned sound visualization: reading the amplitude of sound and drawing its waveform with waveform(). The mic is also set up, so the audience can speak and their amplitude is visualized as well. I set up a lot of words on the screen, praise, criticism, and insults, and people can say any of these words here and see the amplitude change. It's really funny: this was my first time trying this part, and I followed the tutorial and wrote the code with some randomness (in order to experiment), then got exactly the effect I wanted. The soundwave fuses with the silhouette, which means people can find a balance with gossip.
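
The core of a mic-driven waveform in p5.sound looks roughly like this (a minimal sketch, not the exact project code):

let mic, fft;

function setup() {
  createCanvas(640, 480);
  mic = new p5.AudioIn();
  mic.start(); // browsers require a user gesture before audio can start
  fft = new p5.FFT();
  fft.setInput(mic);
}

function draw() {
  background(0);
  const wave = fft.waveform(); // amplitude values between -1 and 1
  noFill();
  stroke(255);
  beginShape();
  for (let i = 0; i < wave.length; i++) {
    const x = map(i, 0, wave.length, 0, width);
    const y = map(wave[i], -1, 1, height, 0);
    vertex(x, y);
  }
  endShape();
}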

How to do

In this process you need to watch the text on the screen and listen to the soundwave noise. You can say anything you want; the microphone will record the voice and feed it back to the waveform.

Problem

In the beginning, I designed these sentences to scroll up and down, but I found it hard to make them scroll together.

Reference

waveform(): Returns an array of amplitude values (between -1.0 and +1.0) that represent a snapshot of amplitude readings in a single buffer.


p5.js Present: https://preview.p5js.org/lizsun703/present/BsiRBoJzX

p5.js Edit:       https://editor.p5js.org/lizsun703/sketches/BsiRBoJzX

 

 

Study Four: Dynamic Text

— Balance yourself

After experiencing the whole work, I hope the audience knows how to establish their own balance point amid gossip. We live in a space full of all kinds of external voices, and all kinds of gossip, like knives and stones, affect us, put pressure on us, and also label us. It made me imagine the weight of that pressure: rumors can destroy a person or make them powerful.

How to do

In this process, you need to raise your hand and move your body wherever you want. I set up a hurtful sentence, "You're a jerk!", on the left, and a praise sentence, "You're so creative!", on the right. You can find a position that balances the two.


p5.js Present: https://preview.p5js.org/lizsun703/present/8XF0zEzkO

p5.js Edit:       https://editor.p5js.org/lizsun703/sketches/8XF0zEzkO

 

 

Others

As I mentioned before, the sound visualization study is my favorite part, and I have been doing related work since 2019, using Kinect combined with TouchDesigner.

Siyu Sun: ” RUMOR” Kinect with TouchDesigner interactive art device (vimeo.com)

 

Bibliography

[1] Teresa Chambel. 2016. Interactive and Immersive Media Experiences. In Proceedings of the 22nd Brazilian Symposium on Multimedia and the Web (Webmedia '16). Association for Computing Machinery, New York, NY, USA, 1. DOI: https://doi-org.ocadu.idm.oclc.org/10.1145/2976796.2984746

 

 

 

Seasonal Osculation

Seasonal Osculation is a series of four interactive art experiments made in the p5.js web editor. The artworks are generative pieces that respond to changes in the viewer's movement and actions. They are interacted with mainly through two different actions, performed in a new manner rather than through the conventional methods of clicking and scrolling. The artworks respond to movement in a way that bridges the gap between the viewer and the artwork, so the viewer feels part of the piece.

Inspired by the geometrical aspect of things, the artworks depict normal day-to-day shapes like circles and rectangles, with deep meaning carried by the colour and weight of the shapes. As interaction between humans and machines increases day by day, the future is not far off where art is highly interactive, made not in controlled environments but in highly public spaces, open to all to see and interact with. This opens new ways to feel connected with an artwork: not just seeing pretty artworks but actually being part of one and connecting to the piece in a deeper way. These experiments look at those new ways to interact with generative art that has deeper meaning beyond being pretty.

 

1: LOOPER


The first piece is a generative artwork representing clocks and the concept of time. When we get close to the piece, time glitches: the clocks start to act weird, and we are stuck in a time loop with the piece; every watch moves back and forth at its own rate without completing a full circle. This depicts a click trigger into the artwork: you have aligned with the piece, but you are stuck inside it.

The interaction calculates the distance between the eyes to estimate the distance of the viewer from the piece. This is a click experience.
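
A sketch of that distance "click", assuming a pose from PoseNet (the threshold is a hypothetical tuning value):

const NEAR_THRESHOLD = 120; // pixels between the eyes; hypothetical

function viewerIsClose(pose) {
  const d = dist(pose.leftEye.x, pose.leftEye.y,
                 pose.rightEye.x, pose.rightEye.y);
  // the eyes appear farther apart on screen as the viewer physically approaches
  return d > NEAR_THRESHOLD;
}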

Present Link: https://preview.p5js.org/HippieInDisguise/present/BQ7R4nOPw

Editor Link: https://editor.p5js.org/HippieInDisguise/sketches/BQ7R4nOPw

 

 

2: PUSH 


The second piece depicts the thoughts we have: we are always thinking about something, but ideas start to clutter up. The piece depicts those thoughts as circles, with the weights on the circles representing the great ideas, which tend to get lost among so many thoughts. In this piece, the viewer pushes their thoughts toward the way they want to think: the left side of the brain to explore logical, scientific ideas, or the right side of the brain to explore creative ideas.

Swipe across the screen to push your thoughts and interact with the piece. This is a scroll experience.

Present Link: https://preview.p5js.org/HippieInDisguise/present/gpVAE-2i5

Editor Link: https://editor.p5js.org/HippieInDisguise/sketches/gpVAE-2i5

 

 

3: SEASONS


This third piece is a generated art piece that depicts the colours of different seasons around the world, from summer to winter. The artwork explores the colours of a season in harmony to generate artworks that are never the same, just like the seasons: the next winter could be colder than the one that has passed.

This is a two-step interaction: when the user is close to the screen, the artwork generates more works for the same season, but when the viewer joins their hands, it cycles to a different season. This is a click experience.

Side note: it can be a bit heavy if you keep cycling through generated artworks, which could lead to delays in the interactivity.

Present Link: https://preview.p5js.org/HippieInDisguise/present/ougxmEd1K

Editor Link: https://editor.p5js.org/HippieInDisguise/sketches/ougxmEd1K

 

 

4: INFINITE


This last piece signifies an endless scroll that has no scrollbar and is not controlled by any triggers; it reflects human movement, from playing football on a field to swimming in a pool. As humans have no bounds and are always exploring, this piece responds to the movement of the viewer drifting through a boundless space, setting the piece up as an infinite scroll.

This piece interacts with the movements of the viewer. It still has room for a lot of improvement in terms of plotting the whole body onto the grid and making a silhouette of the viewer in the boundless space. This is a scroll experience.

Present Link: https://preview.p5js.org/HippieInDisguise/present/85kKWv2nX

Editor Link: https://editor.p5js.org/HippieInDisguise/sketches/85kKWv2nX
