
Human Tetris
Greg Martin
October 4, 2020
Experiment One – Creation & Computation
K. Hartman, N. Puckett – OCAD University


Human Tetris is an attempt to combine a tactile full-body game with a modern adaptation of one of the earliest human-camera interfaces. Formatted as a five-level interactive game, the project progresses through a set of puzzles in which the player must contort themselves to stay within the bounds of a ‘tripwire’, displayed on-screen. The entire experience is intended to be navigated, played, and completed at a distance of around six feet; a custom user interface was designed to allow the user to control the game flow from this distance.

The project is inspired in part by the Brain Wall game show phenomenon and the pioneering user interface designed for the PlayStation 2’s EyeToy peripheral (see below). Engineered around the principles of discoverability and learnability, the project provides a minimum of instruction or tutorial to the user and lets them instead discover both the navigation and the play mechanic. Visual feedback such as incremental shape fills and text reminders ‘train’ the user in the desired behaviour.

Smoothness, fluidity, and scalability were key goals in the development of this tool. The experience flows from beginning to end with no keyboard/mouse input required, and the JavaScript class-based level system allows for near-instantaneous creation of new levels and challenges. A system of buffers and timers is maintained in the core game loop to account for occasional drops and stutters in the pose detection system and rendering engine.

Study One – Interactions and Feedback // Present Edit Video

A player holds a body part over an interaction button. The tool feeds back a partial completion status to the user.

A key goal of the project was smooth, fluid navigation using the same paradigm as the game itself: the body should itself be the controller. The initial splash screen greets the user with a game title and a circle labelled ‘Begin’. By chance or by exploration, the user waves a hand over the button and sees immediate feedback: the button fills clockwise and progress circles appear akin to a clock face. This instantly conveys a time-based interaction and compels the user to hold or change position to impact the animation. This is the solitary navigation mechanic for users, which keeps the experience streamlined and approachable.
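A minimal p5.js sketch of this hold-to-activate button might look like the following. It assumes p5.js and ml5.js are loaded (as in the p5 web editor); the button position, the two-second hold duration, and the use of the right wrist as the ‘cursor’ are illustrative stand-ins rather than the project’s actual values:

```javascript
// Hold-to-activate button: keeping a wrist over the circle for ~2 seconds "clicks" it.
let video, poseNet, pose;
let holdFrames = 0;
const HOLD_NEEDED = 120;                 // ~2 s at 60 fps (assumed threshold)
const btn = { x: 320, y: 120, r: 60, label: 'Begin' };

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', results => { if (results.length) pose = results[0].pose; });
}

function draw() {
  // Draw the mirrored feed, then work in screen coordinates.
  push(); translate(width, 0); scale(-1, 1); image(video, 0, 0, width, height); pop();

  // Is a wrist hovering over the button? (keypoint x is mirrored to match the display)
  let hovering = false;
  if (pose) {
    const wx = width - pose.rightWrist.x;
    hovering = dist(wx, pose.rightWrist.y, btn.x, btn.y) < btn.r;
  }
  holdFrames = hovering ? holdFrames + 1 : 0;

  // Button outline plus a clockwise progress arc, clock-face style.
  noFill(); stroke(255); strokeWeight(4);
  circle(btn.x, btn.y, btn.r * 2);
  if (holdFrames > 0) {
    stroke(0, 255, 0);
    arc(btn.x, btn.y, btn.r * 2, btn.r * 2,
        -HALF_PI, -HALF_PI + TWO_PI * (holdFrames / HOLD_NEEDED));
  }
  noStroke(); fill(255); textAlign(CENTER, CENTER); text(btn.label, btn.x, btn.y);

  if (holdFrames >= HOLD_NEEDED) {
    holdFrames = 0;
    // Advance the game state here (e.g. load the first level).
  }
}
```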

Study Two – Introduction and Training Level // Present Edit Video

The player is prompted to keep their body within a small play area. This trains the user on what a successful position looks like.

In the opening levels, the primary goal is to situate the user in the activity area and introduce the basic game mechanics. Once the user has acquired the primary navigation mechanic (see Study One), the next aim is to convey how to successfully complete a level. The user is informed of their correct positioning by three key elements: a change in the tripwire stroke from yellow to green, a gradual sweeping fill of the tripwire area, and a ‘Hold it!’ banner that shakes to imply a strenuous core workout :).

Study Three – Up/Down Awareness // Present Edit Video

The user ducks to fit inside a more challenging tripwire. The player must begin to not only interact but modify their body shape to complete the levels.

Once the primary navigation and game mechanics are acquired, the game moves away from basic, symmetric tripwire shapes and into shapes requiring more significant contortion. Because up-down movements are reflected one-to-one via webcam (as opposed to left-right movement, see below), this interaction is the next logical challenge for the user. Although the vertical shape of the body must change, it is important that the tripwire remain horizontally symmetrical so as not to overly disorient the user. This level in particular can require the player to make use of the space in front of and behind them; they must project their 3D body shape onto a 2D plane.

Study Four – Left/Right Awareness and Mirroring // Present Edit Video

The user must now contort into a more challenging shape that requires their understanding of the mirroring nature of the webcam.

The mirror image effect inherent to webcam-based interaction provides the opportunity to challenge and disorient the user. This asymmetric, jagged tripwire requires more sophisticated movement. Usually, this is the most challenging level yet for players as they must separately map their vertical movement and horizontal movement, which is inverted onscreen. In terms of establishing a smooth learning curve, this level was the most difficult to design. A balance must be sought between frustration and skill development. This may be the first level to present a genuine challenge or require several attempts on the part of the player.

Study Five – Complex, Compound Movement // Present Edit Video

The player must use a combination of movements to stay within the smallest, most crooked tripwire of the game.

As the final level, this tripwire tests the knowledge acquired in the previous levels by requiring horizontal translation, tilting of the body, extension of the limbs, and an extended hold of a matching pose. By this point the game’s mechanics should be internalized and familiar to the user, allowing the level design to be foregrounded. Completing the level prompts a congratulatory splash screen before navigating the user back to the home screen so they can play again. The game can be played on a loop indefinitely, with no further mouse/keyboard interaction required.

Context

The concept for the game’s tripwires came from the Japanese game show phenomenon known as Brain Wall or nokabe, in which contestants are required to contort their bodies to pass through an advancing screen containing a specific body-sized cut-out (“Brain Wall”). The segment quickly became popular in Japan and went viral around the world (Fermoso). The format’s slapstick humour and the immediacy of the presented challenge helped versions of the show proliferate throughout more than 40 countries (Hollingworth).

In adapting nokabe to my goals of learnability, fun, and seamlessness, I was encouraged by early demonstrations to friends and family, which suggested the challenge was clear and the learning curve shallow enough to be approachable. Over a number of iterations, I refined the difficulty curve of the experience until I was introducing one mechanic or degree of freedom per level.

The concept for the interaction ‘buttons’ was based on my experiences growing up with the EyeToy, an early augmented-reality experience built for Sony’s PlayStation 2 that combined a small camera with a variety of games in which the player interacted with onscreen elements (Kim). I was interested in adapting that user experience for a world where, fifteen years on, machine learning and real-time pose detection are standard fare.

In EyeToy, menu elements were ‘active regions’ onscreen, demarcated with shapes or colours, that required a player’s body part to intersect with those regions for a short period of time (Zhou 99). Developers found that when adding a live video feed on screen, “people immediately grasped the control and interface [as] they were able to see themselves and receive immediate feedback from their on-screen actions” (Kim). I wanted to emulate that intuitive grasp of the interface so that gameplay was foregrounded rather than the user’s acquisition of interaction mechanics; employing the user’s hand as a cursor for onscreen buttons has been demonstrated as highly effective at reducing error rate and mean task time (Cheng 296).

Whereas the EyeToy’s human detection processing was based on pixel analysis of highly compressed individual frames (Kim), I was able to relatively easily incorporate live pose detection and tracking using the p5.js platform and ml5.js extension library.

The point-in-polygon math and circular animations ended up requiring significant development time. Of particular interest was efficiently solving the “pose inside tripwire” problem. Although it is a trivial judgement for the human eye, detecting whether a series of points all fall within an arbitrary polygon required some intermediate mathematics. The even-odd rule (Hormann 131) allowed me to assert that a point was inside the tripwire by confirming that a ray arbitrarily extended out from that point crossed the tripwire’s edges an odd number of times in total. This enabled the use of the relatively efficient line-line collision algorithm to confirm that the player remains fully inside a given shape (Hallberg).
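As a rough sketch of that logic (not the project’s exact code), the line-line test from Hallberg’s reference combines with the even-odd rule roughly as follows; the tripwire is assumed to be an array of {x, y} vertices and the keypoints an ml5 PoseNet keypoints array:

```javascript
// Even-odd (ray casting) test: a point is inside a polygon if a ray cast from it
// crosses the polygon's edges an odd number of times.

// Returns true if segment (x1,y1)-(x2,y2) intersects segment (x3,y3)-(x4,y4).
function segmentsIntersect(x1, y1, x2, y2, x3, y3, x4, y4) {
  const denom = (y4 - y3) * (x2 - x1) - (x4 - x3) * (y2 - y1);
  if (denom === 0) return false;                    // parallel or collinear: treat as no crossing
  const uA = ((x4 - x3) * (y1 - y3) - (y4 - y3) * (x1 - x3)) / denom;
  const uB = ((x2 - x1) * (y1 - y3) - (y2 - y1) * (x1 - x3)) / denom;
  return uA >= 0 && uA <= 1 && uB >= 0 && uB <= 1;
}

// polygon: array of {x, y} vertices. Casts a long horizontal ray to the right of the point.
// (A ray passing exactly through a vertex can double-count; a production version would
// nudge the ray or handle that case explicitly.)
function pointInPolygon(px, py, polygon) {
  let crossings = 0;
  for (let i = 0; i < polygon.length; i++) {
    const a = polygon[i];
    const b = polygon[(i + 1) % polygon.length];
    if (segmentsIntersect(px, py, px + 1e6, py, a.x, a.y, b.x, b.y)) crossings++;
  }
  return crossings % 2 === 1;                       // odd = inside
}

// A pose counts as "inside the tripwire" only if every tracked keypoint passes the test.
function poseInsideTripwire(keypoints, tripwire) {
  return keypoints.every(k => pointInPolygon(k.position.x, k.position.y, tripwire));
}
```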

Bibliography

Cheng, Kelvin, and Masahiro Takatsuka. “Initial Evaluation of a Bare-Hand Interaction Technique for Large Displays Using a Webcam.” Proceedings of the 1st ACM SIGCHI Symposium on Engineering Interactive Computing Systems – EICS 09, 2009, doi:10.1145/1570433.1570487.

Fermoso, Jose. “Viral Video of the Year Nominee: Human Tetris.” Wired, Conde Nast, 20 Dec. 2007, www.wired.com/2007/12/viral-video-of/.

Hallberg, Chris. “LINE/LINE.” Collision Detection, crhallberg.com/CollisionDetection/Website/line-line.html.

Hollingworth, William. “Japanese Game Show Takes the World by Storm.” The Japan Times, 10 Mar. 2010, www.japantimes.co.jp/news/2010/03/10/national/japanese-game-show-takes-the-world-by-storm/.

Hormann, Kai, and Alexander Agathos. “The Point in Polygon Problem for Arbitrary Polygons.” Computational Geometry, vol. 20, no. 3, 2001, pp. 131–144., doi:10.1016/s0925-7721(01)00012-8.

Ishibashi’s Room Entertainer Edition. “Brain Wall – English Sub”. Online video clip. YouTube. Google, 19 Mar. 2020. Web. 24 Sep. 2020. https://www.youtube.com/watch?v=UY9axBEJy8s

Kim, Tom. “In-Depth: Eye To Eye – The History Of EyeToy.” Gamasutra, 6 Nov. 2008, www.gamasutra.com/view/news/111925/InDepth_Eye_To_Eye__The_History_Of_EyeToy.php.

Zhou, Hanning, et al. “Static Hand Posture Recognition Based on Okapi-Chamfer Matching.” Real-Time Vision for Human-Computer Interaction, 2005, p. 99., doi:10.1007/0-387-27890-7_6.


IMITATING INTERACTIONS:
Exploring Click Events for Augmented Reality Using Body-as-Controller
KRISTJAN BUCKINGHAM


This series of p5.js sketches explores possible interactions for tracking the user’s body position to initiate click events in an Augmented Reality setting. The idea is to utilize intuitive movements so the user can perform interactions without the need to click a button or interface with a menu. As environment- and body-tracking technologies become more stable, it is important that virtual interactions feel as natural as possible. Using VR and AR shouldn’t be tedious, so it is optimal to provide the user with functions that can be executed with simple trackable gestures. While the particular interactions outlined in this series could be used as shown in a single game or experience, the idea is really to have a set of recognized actions that can trigger different effects in various scenarios across multiple apps. Although the tracking technology used to demonstrate these effects would not be the same in an AR setting, it is reasonable to speculate that similar interactions are not far off. It is through Imitating Interactions that one is able to explore what will eventually be possible. With the use of PoseNet and relatively simple tracking and coding, complex interactions can be conceptualized in a straightforward way. Hopefully, it is not long before these ideas can be applied in a practical way within the environment for which they were envisioned.

1. The Clapper

Bring your hands together to turn on the light!

p5.js present link: https://editor.p5js.org/kristjanb/present/NoBEYMg7t
p5.js edit link: https://editor.p5js.org/kristjanb/sketches/NoBEYMg7t


This sketch is based on the idea of clapping to turn the lights on or off. In this version of the sketch, the lights are initially “off”, with only a black screen and a yellow circle that tracks the user’s head. When the user’s hands are brought together (as if clapping), the lights turn “on” by revealing the video feed of the webcam. Ideally, this action would act more like a switch, but I wasn’t able to create that effect precisely. This idea is loosely based on the first interactive p5.js sketch I made, called “gotcha!”, which was a click-to-reveal mouse event.
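A hedged sketch of the detection, assuming ml5’s PoseNet keypoints and an arbitrary 80-pixel “clap” threshold; the edge-detection toggle at the end is one common way to get the latching switch behaviour described above, not necessarily how this sketch was built:

```javascript
// "Clapper": toggle the lights each time the wrists first come together.
let video, poseNet, pose;
let lightsOn = false;
let wasClapping = false;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); video.size(640, 480); video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', p => { if (p.length) pose = p[0].pose; });
}

function draw() {
  background(0);                                        // lights "off" = black screen
  if (!pose) return;

  const d = dist(pose.leftWrist.x, pose.leftWrist.y,
                 pose.rightWrist.x, pose.rightWrist.y);
  const clapping = d < 80;                              // assumed threshold in pixels
  if (clapping && !wasClapping) lightsOn = !lightsOn;   // flip only on the clap's first frame
  wasClapping = clapping;

  if (lightsOn) image(video, 0, 0, width, height);      // lights "on" = reveal the feed
  fill(255, 255, 0); noStroke();
  circle(pose.nose.x, pose.nose.y, 40);                 // yellow circle tracks the head
}
```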

2. Coffee Up

Drink your coffee to power up!

p5.js present link: https://editor.p5js.org/kristjanb/present/Zez0M204X
p5.js edit link: https://editor.p5js.org/kristjanb/sketches/Zez0M204X


The idea behind this sketch is that there would be virtual objects which can be “picked up” and interacted with naturally to perform an action. To simulate this effect, an image of a coffee mug appears at the position of the user’s hand. When that hand approaches the user’s head, a “power-up” gif appears. In an applied scenario, the coffee mug would be a virtual object that is found, picked up, and “drunk” to restore the user’s health bar, with other objects performing similar functions. Interactions like this are sometimes found in VR games, but I haven’t seen them in AR yet.

3. Picture Frame

Match the hand positions to frame yourself and take a snapshot!

p5.js present link: https://editor.p5js.org/kristjanb/present/14X9jbeUP
p5.js edit link: https://editor.p5js.org/kristjanb/sketches/14X9jbeUP


The basis of this sketch is taking a snapshot / screen capture within an AR or VR experience. Since the user is placed in a 360° 3D environment, their hands can be tracked to frame the area they would like to capture. To create this effect, illustrations of hands are placed in opposite corners of the screen, and when the user’s hand positions are close enough, a snapshot of the last frame in that position appears below. This sketch demonstrates taking a selfie, but the technique would generally be used to capture content from the user’s perspective.

4. Downside Up

Turn your head to the side to flip and distort the image!

p5.js present link: https://editor.p5js.org/kristjanb/present/u6sgRhytg
p5.js edit link: https://editor.p5js.org/kristjanb/sketches/u6sgRhytg


This sketch is based on the idea of the user changing their perspective to alter the way the virtual experience appears. When the user tilts their head to the side, the image flips upside down and becomes distorted. If this action worked more like a switch, it could be an interesting way to flip between different perspectives. The positions tracked to achieve this effect are the ear and shoulder positions on opposite sides, so the user can tilt their head in either direction for it to work. Ideally, they would not have to keep their head turned to have the effect continue.
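One way to sketch the trigger, assuming PoseNet’s named ear and shoulder keypoints and an arbitrary 60-pixel threshold for “ear close to shoulder”; the flip here is a simple 180° rotation rather than the original’s distortion:

```javascript
// Flip the image upside down while the head is tilted toward a shoulder.
let video, poseNet, pose;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); video.size(640, 480); video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', p => { if (p.length) pose = p[0].pose; });
}

function headTilted() {
  if (!pose) return false;
  // If either ear gets unusually close to a shoulder, treat the head as tilted.
  const left  = dist(pose.leftEar.x,  pose.leftEar.y,  pose.leftShoulder.x,  pose.leftShoulder.y);
  const right = dist(pose.rightEar.x, pose.rightEar.y, pose.rightShoulder.x, pose.rightShoulder.y);
  return min(left, right) < 60;        // assumed threshold in pixels
}

function draw() {
  background(0);
  if (headTilted()) {
    push();
    translate(width / 2, height / 2);
    rotate(PI);                        // flip the feed upside down while tilted
    image(video, -width / 2, -height / 2, width, height);
    pop();
  } else {
    image(video, 0, 0, width, height);
  }
}
```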

5. Flip It & Reverse It

Cross your arms to mirror and invert the image!

p5.js present link: https://editor.p5js.org/kristjanb/present/fGjos6BFo
p5.js edit link: https://editor.p5js.org/kristjanb/sketches/fGjos6BFo


Similar to the previous sketch, this would be another way for the user to change their perspective. By crossing their arms, the user flips the screen to a mirror image which is also inverted. This is achieved by tracking the user’s hand positions and triggering the effect when they cross. This is the most finicky sketch of the series and it was difficult to keep the image from appearing glitchy. It still has kind of a cool effect with the instability, but would ideally be more of a smooth switch between perspectives.

Context

Virtual Reality and Augmented Reality (VR and AR) devices have become significantly more accessible in recent years, and the capabilities promised by these platforms are starting to be unlocked as technology progresses. Facebook just released the Oculus Quest 2 VR headset at a relatively affordable price, with more stable features including new hand-tracking capabilities (Grey). For years, Apple has also been rumored to be working on its own VR headset, as well as the Apple Glass AR glasses, filing many patents for the technology along the way (Kosuch). While these devices may not yet be as compact, affordable, and capable as consumers would like, it won’t be long before they catch up. In the meantime, users must interface with clunky headsets and awkward hand-held devices, but developers need to prepare for an impending hyper-immersive future. PoseNet’s body tracking is a great way to start sketching out what those interactions could eventually look like. Although interactions like the ones proposed in these sketches may not be possible right now, the ongoing developments have been promising. The more capable hand-tracking and environment-tracking become, the more immersive these experiences will feel. “When you are able to look down and see your hands, reacting and moving in real time, your brain tells you it’s real” (Sharman). The interactions depicted in this series of sketches may not be overly complex, but there is a satisfaction in the simplicity of a natural interaction. As the technology progresses to the point where users are able to use their entire body as a controller in VR and AR experiences, actions will become increasingly intuitive. Someday even what is considered to be “real” could become ambiguous as virtual objects begin to live in shared physical environments. Until then, two-dimensional position tracking will have to suffice for speculation on the future of the body-as-controller.

References

Grey, Jess. “Review: Oculus Quest 2.” Wired, 16 Sep. 2020, https://www.wired.com/review/oculus-quest-2-review/. Accessed 24 Sep. 2020.

Kosuch, Kate. “Apple Glasses: Release Date, Price, Features and Leaks.” Tom’s Guide, 02 Oct. 2020. https://www.tomsguide.com/news/apple-glasses. Accessed 03 Oct. 2020.

Sharman, Tom. “As Oculus Goes Controller-Free, Will It Be the Future of VR?” Medium, 15 Jun. 2020. https://medium.com/virtual-library/as-oculus-goes-controller-free-will-it-be-the-future-of-vr-c69d67013d4. Accessed 03 Oct. 2020.

Trailing Tessellations



About The Project

Trailing Tessellations is a series of five interactive art experiments made using the p5.js web editor. The interactive artworks are a series of patterns that respond to changes in the movements and gestures of viewers, such that viewers can become part of the piece. This series is heavily influenced by the works of several interactive and visual artists, including Daniel Rozin, who is well known for his interactive mirrors, and Maurits Cornelis Escher, a renowned graphic artist who made mathematically inspired tessellations and prints.

Inspired by the paradoxical stability and dynamism of mosaics and tessellations, this series is all about breaking the stillness of geometry and inviting motion. Viewers are encouraged to try different kinds of movement, including bilateral and diagonal movements as well as proximity to the camera, to interact with the pieces. The series is deliberately devoid of any camera feed in order to generate a sense of companionship between the artwork and the audience. It is meant to bring about an emotive connection between the viewer and the piece, like that between an owner and their pet. Movement has always been considered an indicator of life, and the responsive movement generated by this artwork is intended to be read as anthropomorphism through design (without a specific narrative context). The slight lag from the camera feed is helpful in this regard, as it gently delays the response and generates the feeling of trailing along. Each piece depicts a particular mood and concept tied to the overall idea of patterns and tessellations.


#1: Trailing Pyramids


The first piece is a fairly simple interactive tessellation based on an orthogonal grid. Each pyramid is a unit with a moving top. The pyramid tops trail the viewer’s movements and follow them around while they are interacting with the artwork. The interaction in this piece involves nose tracking, where the viewer’s nose is tracked along the horizontal and vertical axes only. Click on the links below to see the presentation or the source code.

Presentation – https://editor.p5js.org/Krishnokoli/present/FTRm-EVbi

Source Code –  https://editor.p5js.org/Krishnokoli/sketches/FTRm-EVbi
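A minimal sketch of the trailing mechanic, assuming ml5 PoseNet for the nose keypoint; the grid size, easing factor, and wireframe pyramid drawing are illustrative choices rather than the piece’s actual construction:

```javascript
// Grid of "pyramids" whose apexes ease toward the tracked nose, creating the trailing effect.
let video, poseNet, nose = { x: 320, y: 240 };
const CELL = 80;
let tips = [];                          // one apex per grid cell

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); video.size(640, 480); video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', p => { if (p.length) nose = p[0].pose.nose; });
  for (let y = 0; y < height; y += CELL)
    for (let x = 0; x < width; x += CELL)
      tips.push({ cx: x + CELL / 2, cy: y + CELL / 2, tx: x + CELL / 2, ty: y + CELL / 2 });
}

function draw() {
  background(20);
  stroke(255); noFill();
  for (const t of tips) {
    // Ease each apex a fraction of the way toward the (mirrored) nose every frame:
    // the lag is what makes the grid appear to trail the viewer.
    t.tx = lerp(t.tx, constrain(width - nose.x, t.cx - CELL / 2, t.cx + CELL / 2), 0.1);
    t.ty = lerp(t.ty, constrain(nose.y,         t.cy - CELL / 2, t.cy + CELL / 2), 0.1);
    square(t.cx - CELL / 2, t.cy - CELL / 2, CELL);          // base of the pyramid
    line(t.cx - CELL / 2, t.cy - CELL / 2, t.tx, t.ty);      // edges to the moving apex
    line(t.cx + CELL / 2, t.cy - CELL / 2, t.tx, t.ty);
    line(t.cx - CELL / 2, t.cy + CELL / 2, t.tx, t.ty);
    line(t.cx + CELL / 2, t.cy + CELL / 2, t.tx, t.ty);
  }
}
```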


#2: Cascading Bandhani


Inspired by the ubiquitous ‘Bandhani’, a traditional textile tie-dye technique from Gujarat, India, this piece is all about the drape and its cascading interactivity. The piece trails along with the viewer, as if it is attached at the top like a fabric pinned to a clothes rack. As the viewer moves closer to the piece/camera, the prints grow in size, and as they move farther away, the prints slowly diminish. This interactivity is designed to generate a perception of fluidity and spatial depth. Click on the links below to see the presentation or the source code.

Presentation – https://editor.p5js.org/Krishnokoli/present/MRTNFlzOr

Source Code – https://editor.p5js.org/Krishnokoli/sketches/MRTNFlzOr
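Proximity can be approximated from the pose itself; the sketch below uses the distance between the two tracked eyes as the proxy (an assumption, since PoseNet gives no real depth information), with the mapping ranges and the stand-in dot motif chosen arbitrarily:

```javascript
// Scale a repeating motif with the viewer's proximity, estimated from the
// distance between the two tracked eyes (it grows as the viewer approaches).
let video, poseNet, pose;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); video.size(640, 480); video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', p => { if (p.length) pose = p[0].pose; });
}

function draw() {
  background(250, 240, 220);
  let motifSize = 30;                                  // default when no pose is found
  if (pose) {
    const eyeDist = dist(pose.leftEye.x, pose.leftEye.y, pose.rightEye.x, pose.rightEye.y);
    motifSize = map(eyeDist, 20, 200, 15, 80, true);   // closer face -> larger prints
  }
  noStroke(); fill(180, 40, 60);
  for (let y = motifSize; y < height; y += motifSize * 2)
    for (let x = motifSize; x < width; x += motifSize * 2)
      circle(x, y, motifSize);                         // stand-in for a Bandhani dot
}
```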


#3: Magnetic vision


This piece was originally inspired by the ‘Eyes on Me’ artwork by Purin Phanichphant, in which numerous eyes follow a singular point almost like a frenzy. The piece was designed to evoke how herd mentality functions: it shows how a singular point can garner extreme attention and judgement from observers. Herd mentality is a state in which people can be influenced by their peers to adopt certain behaviours on a largely emotional, rather than rational, basis. The basic movement here is tracked with the help of the nose, which is detected on both the x and y axes. Click on the links below to see the presentation or the source code.

Presentation – https://editor.p5js.org/Krishnokoli/present/gAXkdLHDO

Source Code – https://editor.p5js.org/Krishnokoli/sketches/gAXkdLHDO
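The following sketch, roughly in the spirit of this piece, points every pupil at the tracked nose using atan2; the grid spacing and eye sizes are illustrative:

```javascript
// A grid of eyes whose pupils all rotate to stare at the tracked nose.
let video, poseNet, nose = { x: 320, y: 240 };

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); video.size(640, 480); video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', p => { if (p.length) nose = p[0].pose.nose; });
}

function draw() {
  background(255);
  const target = { x: width - nose.x, y: nose.y };        // mirror x so it feels natural
  for (let y = 40; y < height; y += 80) {
    for (let x = 40; x < width; x += 80) {
      const angle = atan2(target.y - y, target.x - x);    // direction from this eye to the target
      fill(255); stroke(0); circle(x, y, 50);             // eyeball
      fill(0); noStroke();
      circle(x + cos(angle) * 12, y + sin(angle) * 12, 18);   // pupil offset toward the target
    }
  }
}
```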


#4: Blooming Binary Trees


Dan Shiffman’s Recursive Tree was a major inspiration for this particular piece. A major component of this piece, fractals, is found in various natural phenomena. In his book ‘The Nature of Code’, Shiffman explains how fractals can be interpreted as visual depictions of tree branches, lightning bolts, or mountains. This unique visualisation, along with the concept of social afforestation, was the motivation behind this piece. Like social afforestation, which requires active human participation for forestry, this piece’s interactivity lies in the proximity between the viewer and the screen. Viewers are encouraged to move back and forth in front of their webcam in order to experience the piece. Click on the links below to see the presentation or the source code.

Presentation – https://editor.p5js.org/Krishnokoli/present/mvUFFrgbs

Source Code – https://editor.p5js.org/Krishnokoli/sketches/mvUFFrgbs


#5: Polka Player


This artwork was also inspired by Purin Phanichphant’s staggered grid concept; however, this piece is an exploration of the correlation between sound, visuals, and their spatial impact. The sound and the visuals change in relation to the viewer’s proximity to the camera and the piece. The piece was inspired by the jazz classic ‘Polka Dots and Moonbeams’, which is used as a soundtrack in the background. Click on the links below to see the presentation or the source code.

Presentation – https://editor.p5js.org/Krishnokoli/present/yYAf6wPhg

Source Code – https://editor.p5js.org/Krishnokoli/sketches/yYAf6wPhg


Project Context

The context for this project is based on inspiration from the works of MC Escher, Purin Phanichphant and Daniel Rozin.

M.C. Escher was a graphic artist whose work features mathematical objects and operations, including impossible objects, explorations of infinity, reflections, symmetry, perspective, truncated and stellated polyhedra, hyperbolic geometry, and tessellations. The defining moment that pushed the artist towards the creation of the art we most associate with his name was his trip to Spain and his visit to the Alhambra Palace. There, Escher carefully copied some of the geometrical tilings that covered the façade of the palace, and from that moment his production became less observational and more formally inventive. The exploration of patterns and the regular division of the plane became the richest inspiration the artist ever encountered.

It is this connection between geometry and art, which he so deftly defined, that inspired me to learn more about tessellations, patterns, and optical illusions using digital art as a medium.

Daniel Rozin, through his work on mechanical mirrors, deftly explores the metaphorical concept of mirrors and how human beings interact with their own reflection. Most of Rozin’s works are based on the concepts of motion, form, and reflection. He uses an array of reflective and non-reflective materials along with a system of highly synchronized motors and motion sensors to generate a response to the viewer’s movement. This interactive response, which Rozin interprets as reflection, is the main attribute of his mechanical mirrors. He creates large installations that contain cameras or motion sensors to detect the movement of viewers. His installations reproduce the concept of pixels, where each unit (a woodblock, ball, toy, or piece of trash) is moved by a motor that generates a response when it senses motion from the viewer.

Purin Phanichphant is a contemporary artist and designer known for his interactive installations. His work includes an unusual combination of objects like buttons, knobs, screens, and code, which often results in interactive experiences for audiences. His work has been featured in museums, galleries, and venues around the United States, Japan, and Iceland. His use of raw, bright colours and patterns, and his subtle yet effective use of interactivity, has strongly inspired this series.

The intention behind this series was to create a set of interactive patterns that involve active participation from the viewer. After all, being able to control something with body movements, and remotely at that, is always a one-of-a-kind experience. It gives the viewer a safe space to imagine and believe in the unbelievable.

Citations

  • Parkar, Hafsah. The Cognitive Theory of Tessellations. 14 February 2014. National Institute of Design.
  • Van Dusen, B., B. C. Scannell, and R. P. Taylor. A Fractal Comparison of M.C. Escher’s and H. von Koch’s Tessellations. Oregon: n.d. E-book.
  • “Daniel Rozin.” NYU Tisch, https://tisch.nyu.edu/about/directory/itp/95804818. Accessed 4 October 2020.

 


A day at the Arcade, by Abhishek Nishu

PROJECT DESCRIPTION:

“A Day at the Arcade” is a series of artistic and gamified experiences that are commonly linked by the simple movement of your body across a screen. Just like when you’re at an arcade, you will experience multiple games built on various themes, where interactions are linked to different parts of your body. 

I started by observing our interaction with computers and how we currently experience them through a series of clicks and taps that lead to visual changes on our screens. As users, we are so focused on where we want our cursor to eventually be that the simple motion of a finger translating into an action on the screen has become a subconscious part of the experience that we take for granted. And so my series of experiments revolves around creating visual and physical experiences that bring back the presence of physical interaction with our computers. Imagine a world where you control your entire computer through different motions of your body from a distance; it reminds me of how the movie Minority Report depicts the computer of the future.

I have structured all of my experiences with the help of PoseNet’s body-tracking technology while leveraging my skills from other software. As a result, I was able to create and introduce visual elements that supported building these experiences.

 

EXPERIENCE 1: Don’t forget your mask


Experiment gif: https://youtu.be/6VALdM0d4vA

Experience description:

My first experiment begins with a simple message: during these times, always keep your mask on. Using PoseNet, I tracked the mask to the nose and set its width and height to be dynamic by tracking the distance between my ears. This keeps the mask proportionate to my face, altering its size according to the distance of my face from the screen. I would further like to explore how the perspective of the image could change with the angle of my head, for example when turning it from side to side.

Present Link: https://editor.p5js.org/Abhinishu/present/7N2Z3Fv-8

Edit Link: https://editor.p5js.org/Abhinishu/sketches/7N2Z3Fv-8 
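A hedged sketch of that anchoring and scaling, assuming ml5 PoseNet’s nose and ear keypoints; ‘mask.png’, the 1.2 width ratio, and the vertical offset are placeholders rather than the project’s actual asset or numbers:

```javascript
// Pin a mask image to the nose and scale it with the distance between the ears,
// so it stays proportionate as the face moves toward or away from the camera.
let video, poseNet, pose, maskImg;

function preload() {
  maskImg = loadImage('mask.png');      // placeholder asset name
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); video.size(640, 480); video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', p => { if (p.length) pose = p[0].pose; });
}

function draw() {
  image(video, 0, 0, width, height);
  if (pose) {
    const earDist = dist(pose.leftEar.x, pose.leftEar.y, pose.rightEar.x, pose.rightEar.y);
    const w = earDist * 1.2;            // mask slightly wider than the face (assumed ratio)
    const h = w * 0.6;                  // keep the image's rough aspect ratio
    imageMode(CENTER);
    image(maskImg, pose.nose.x, pose.nose.y + h * 0.25, w, h);   // sit a little below the nose
    imageMode(CORNER);
  }
}
```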

 

EXPERIENCE 2: The Optical Bird


Experiment gif: https://youtu.be/kdgZ4ayEW2Y

Present Link: https://editor.p5js.org/Abhinishu/present/Zqe9c0qCN

Edit Link: https://editor.p5js.org/Abhinishu/sketches/Zqe9c0qCN

Experience description:

My second experiment is a digital experience of an optical illusion. It is a concept where two overlapping static visuals, when moved on top of each other, create an animation. The sketch consists of two layers: the static bottom layer is an image of a bird, and the top layer is controlled by the movement of my right wrist from the left to the right of my screen. By moving my wrist from side to side, I was able to create the animation of the bird flying forwards and backwards.

A critical observation while exploring this interaction was seeing how PoseNet recognised and tracked my wrist even while it was out of frame, and how its tracking is sensitive to distance from the camera when using keypoints lower than the chest.

 

EXPERIENCE 3: LION KING STORYBOARD


Experiment gif: https://youtu.be/kdgZ4ayEW2Y

Present Link: https://editor.p5js.org/Abhinishu/present/AceWBoBH7

Edit Link: https://editor.p5js.org/Abhinishu/sketches/AceWBoBH7

Experience description:

My third experiment explores the interaction between media and body movements. By incorporating multiple images that transition as I move my nose from the left to the right of the screen, I was able to recreate the synopsis of a movie that makes us all nostalgic: the storyboard of The Lion King. To further enhance the experience, I use the mousePressed function to turn the images into short movie clips.

 

EXPERIENCE 4: FACE CATCHING


Experiment gif: https://youtu.be/pqZxREB5SxU

Present Link: https://editor.p5js.org/Abhinishu/present/Nv8MQKzuL

Edit Link: https://editor.p5js.org/Abhinishu/sketches/Nv8MQKzuL

Experience description:

Face Catching: well, it’s all in the name. This experiment is a game where you catch a ball using your face. As you move your face from side to side, you control the digital hands to catch the ball, and for each ball you catch you get a point. This exercise helped me explore ways to introduce gamifying elements like a score and gifs, and how to reset the game.
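A compact sketch of the core loop (falling ball, face-driven “hands”, score, reset), assuming ml5 PoseNet and using the mirrored nose x position as the catcher; the speeds, sizes, and catch tolerance are made-up values:

```javascript
// Catch-the-ball: the "hands" follow the face horizontally; catching scores a point.
let video, poseNet, pose;
let ball = { x: 320, y: 0, speed: 4 };
let score = 0;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); video.size(640, 480); video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', p => { if (p.length) pose = p[0].pose; });
}

function resetBall() {
  ball.x = random(50, width - 50);
  ball.y = 0;
}

function draw() {
  image(video, 0, 0, width, height);
  const handsX = pose ? width - pose.nose.x : width / 2;   // mirrored so it tracks the face

  // Falling ball
  ball.y += ball.speed;
  fill(255, 100, 0); noStroke(); circle(ball.x, ball.y, 40);

  // "Hands" drawn near the bottom of the screen at the face's x position
  fill(255, 220, 180); rect(handsX - 60, height - 40, 120, 20, 10);

  // Catch or miss
  if (ball.y > height - 50 && abs(ball.x - handsX) < 70) { score++; resetBall(); }
  else if (ball.y > height) { resetBall(); }

  fill(255); textSize(28); text('Score: ' + score, 20, 40);
}
```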

 

EXPERIENCE 5: TRON BIKE DANCE


Experiment gif: https://youtu.be/TilDzH6bTIA

Present Link: https://editor.p5js.org/Abhinishu/present/JHviL-YaZ

Edit Link: https://editor.p5js.org/Abhinishu/sketches/JHviL-YaZ

Experience description:

Having created my first game linked to a single body part, this game explores controlling a single object with multiple body parts. The goal is to get the bike to follow the path by controlling its movements with my wrists and ankles. I have also introduced the element of time with a 60-second timer that ends the game. This project particularly taught me about the sensitivity of PoseNet: while trying to identify which body parts to link, I learnt that the camera was more reactive to the wrists and ankles, as they are the end points of the body. I would further like to explore this game by pre-setting poses to move the bike in different directions.

 

PROJECT CONTEXT:

If you have ever watched ‘The Matrix’, ‘Iron Man’ or even ‘Minority Report’, you must wonder what it would be like to control a TV, computer or any other digital device with just a wave of your hand or any other body part. What always fascinated me about these movies is how body movements and gestures translate to visuals or user interfaces. 

Introducing gesture interfaces. While we may still need to push buttons, touch displays and trackpads, or raise our voices, we’ll increasingly be able to interact with and control our devices simply by signaling with our fingers, gesturing with our hands, and moving our bodies. “Gesture controls will notably contribute to easing our interaction with devices, reducing (and in some cases replacing) the need for a mouse, keys, a remote control, or buttons. When combined with other advanced user interface technologies such as voice commands and face recognition, gestures can create a richer user experience that strives to understand the human ‘language,’ thereby fueling the next wave of electronic innovation” (Dipert, 2016). We are already seeing real-world implementations from brands like BMW and Samsung to improve user experiences: BMW offers a feature that lets you slide your foot under the trunk of your car to open it while your hands are full, and Samsung’s smart TVs allow you to change channels by panning your hand. While the number of functional applications grows, gesture interfaces are also found in the sheer fun and interactivity of entertainment and gaming, in places like arcades, Disneyland, virtual and augmented reality zones, and even in our homes with platforms such as Nintendo’s Wii and Microsoft’s Kinect.

“Gestural interfaces are highly magical, rich with action and have great graphical possibilities” (Noessel, 2016). While the Nintendo Wii and Microsoft’s Kinect require buying additional equipment to enjoy this technology, my project looks into how we can bring the arcade home, onto the devices we already have. In these difficult times, while being cooped up at home, we could all use some fun in our lives.

As a next step for this project, I would like to explore the functional side of gesture interfaces by including the element of speech. In the film Iron Man 2, Tony says to the computer, “JARVIS, can you kindly vacuform a digital wireframe? I need a manipulable projection.” JARVIS then immediately begins to run the scan. Such a command would be hard to give through a physical gesture alone; this is where language handles abstract commands well, making it a strong complement to gesture to explore (Noessel, 2016).

CITATIONS:

  1. The Gesture Interface: A Compelling Competitive Advantage in the Technology Race, By Brian Dipert, April 2016.
  2. Stuart K. Card, in Designing with the Mind in Mind (Second Edition), 2014.
  3. What Sci-Fi Tells Interaction Designers About Gestural Interfaces, Chris Noessel, 2013.
  4. J. Davis and M. Shah, “Visual gesture recognition,” in IEE Proceedings – Vision, Image and Signal Processing, vol. 141, no. 2, pp. 101-106, April 1994, doi: 10.1049/ip-vis:19941058.
  5. What is gesture recognition? Gesture recognition defined, Sonia Schechter, 2020.
  6. What’s the future of interaction? Ashley Carman, Jan 26, 2017

Screwed Body Reverberations by Candide Uyanze

Description

Screwed Body Reverberations is a series of auditory experiments by me, Candide Uyanze, that invite users to generate their own “slow and reverbed” version of an audio track using their bodies. For each experiment, users can perform an action to control either the audio rate, reverb, panning, or all three.


Body Band


Jessy (Xin Zhang)

Description

Body Band is an interactive and playful experiment that uses the body as a controller to build an interaction between the player and screen-based sound. Standing naturally in front of the screen is the default state for each study; the player can then walk, raise their hands, twist their body, or squat to trigger the sound. In the first study, the performer turns on the sound by walking, accompanied by a metronome. Each of the five studies has its own designed movement drawn from daily life; however, most of these body movements and postures cannot be heard in real space. I recombined them with the sounds of instruments or music to represent the relationship between the physical body and sound.

Context

Normally, physical movements and postures can be seen clearly; we can understand performers’ moods through their body language. But most physical movements cannot be heard: footsteps, for example, are difficult to hear on a noisy street. However, we can still recognize specific sounds without seeing them. When the person behind you claps, you can tell by the sound without looking. From my perspective, the action generates the sound, and the movement is the visual representation of that sound, which is significant for how people build cognition. Body Band is an experimental interactive project that combines musical sound with body movement. The whole interaction is a process of transforming acoustic signals into visible posture, producing an ‘audiolization of body movement’.

Seeing a video of Lines, an interactive sound installation designed by Swedish composer Anders Lind in 2016, inspired me to use the body as a controller and as a medium for sound interaction. In that project, the traditional keyboard is redefined as lines on the wall; players’ gestures and locations are captured by sensors to produce digital music.

Some acoustic instruments are easy to get started on: it is not difficult to produce a sound the first time you pick one up, such as a guitar or piano, even though mastering a musical instrument is a complex process that may take years of practice to reach virtuosity. Music consists of many elements. The performer must understand how the music flows and then translate that understanding into bodily movements that render the sound in the physical world. Performers’ interpretations vary considerably from player to player; their movements while playing carry their personal style and understanding. But that understanding of music originates in our perception and our physical body. People naturally attune to the basic rhythm while listening to music, swaying their bodies and tapping their feet.

Additionally, people without professional musical training are capable of identifying an instrument when they hear its sound. We can distinguish the sound characteristics of percussion instruments from stringed instruments, or come up with a description of the action and guess how a sound could have been produced from hearing it alone. The perception of sound and music involves action-sound relationships. Moreover, ‘gesture’ as a term or concept can be fruitfully used when working with sound in the performance of interactive, real-time generated music. Physical body movements and actions make sound production more expressive and visual.

Study 1 Footstep metronome

Present link  https://editor.p5js.org/xinzhang.jessy/present/cVLOE0vsQ

Edit link  https://editor.p5js.org/xinzhang.jessy/sketches/cVLOE0vsQ

Video:  https://ocadu.techsmithrelay.com/WLSf       

(The sound material is from ‘Free Sound’, www.freesound.org)


Walking is one of the most basic human movements, and it has a certain rhythm. In my first study, I associated footsteps with the sound of a metronome to highlight what they share: rhythm. Whenever the player walks, the virtual metronome plays. Everyone has their own rhythm, and it can be adjusted while walking, so the sound changes from person to person, a small representation of personality.
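A rough sketch of one way such a per-step trigger could work, assuming ml5 PoseNet ankle keypoints and the p5.sound library; ‘tick.mp3’ is a placeholder asset, and detecting a step as the moment the feet swap which one is higher is an assumed heuristic, not necessarily this study’s method:

```javascript
// Footstep metronome: play a tick each time the feet swap which one is higher.
let video, poseNet, pose, tick;
let leftWasHigher = false;

function preload() {
  tick = loadSound('tick.mp3');        // placeholder sound file; needs p5.sound
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); video.size(640, 480); video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', p => { if (p.length) pose = p[0].pose; });
}

function mousePressed() {
  userStartAudio();                    // browsers require a user gesture before audio plays
}

function draw() {
  image(video, 0, 0, width, height);
  if (!pose) return;
  const leftHigher = pose.leftAnkle.y < pose.rightAnkle.y - 10;   // 10 px dead zone
  if (leftHigher !== leftWasHigher) {
    tick.play();                       // one tick per step, at the walker's own tempo
    leftWasHigher = leftHigher;
  }
}
```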

Study 2 Virtual MIDI

Present link  https://editor.p5js.org/xinzhang.jessy/present/aYX3VC8J-

Edit link  https://editor.p5js.org/xinzhang.jessy/sketches/aYX3VC8J-

Video  https://ocadu.techsmithrelay.com/SJrb


The idea for Study 2 originates from the digital MIDI instrument; here the instrument is simplified into 2D blocks that are easy to ‘play’. People without any professional musical experience can play a song by moving their bodies. Whenever a hand or arm touches a block on the screen, it triggers a MIDI sound.
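A sketch of the block-triggering idea, assuming ml5 PoseNet wrist keypoints; it substitutes a p5.Oscillator playing a C-major scale for MIDI samples, and the bottom-strip layout and thresholds are illustrative:

```javascript
// Virtual MIDI blocks: each on-screen block plays a note while a wrist is inside it.
let video, poseNet, pose, osc;
const notes = [261.63, 293.66, 329.63, 349.23, 392.00, 440.00, 493.88, 523.25]; // C major scale (Hz)

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); video.size(640, 480); video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', p => { if (p.length) pose = p[0].pose; });
  osc = new p5.Oscillator('sine');     // needs the p5.sound library
  osc.start();
  osc.amp(0);
}

function mousePressed() { userStartAudio(); }   // browsers need a gesture before audio

function draw() {
  image(video, 0, 0, width, height);
  const blockW = width / notes.length;
  let hit = -1;
  if (pose) {
    for (const wrist of [pose.leftWrist, pose.rightWrist]) {
      if (wrist.y > height * 0.6) {               // blocks live in the bottom strip
        hit = constrain(floor(wrist.x / blockW), 0, notes.length - 1);
      }
    }
  }
  for (let i = 0; i < notes.length; i++) {
    fill(i === hit ? color(255, 200, 0, 180) : color(255, 255, 255, 80));
    rect(i * blockW, height * 0.6, blockW, height * 0.4);
  }
  if (hit >= 0) { osc.freq(notes[hit]); osc.amp(0.3, 0.05); }
  else osc.amp(0, 0.2);                           // fade out when nothing is touched
}
```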

Study 3 Volume Controller

Present link  https://editor.p5js.org/xinzhang.jessy/present/fOcFQIj4v

Edit link  https://editor.p5js.org/xinzhang.jessy/sketches/fOcFQIj4v

Video https://ocadu.techsmithrelay.com/zzhS 

(The sound material is from ‘Free Sound’, www.freesound.org)


During a musical performance, the player controls their body movement in order to generate sound. In my third study, I use arm position to represent volume: the volume can be adjusted by moving the arms, and as the arms move higher, the organ becomes louder. In this piece, I want to use body movement to represent the sound visually.
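A minimal sketch of the mapping, assuming ml5 PoseNet wrist keypoints and the p5.sound library; ‘organ.mp3’ is a placeholder file name and the linear screen-height-to-volume mapping is an assumption:

```javascript
// Map the height of the raised wrist to playback volume: the higher the arm, the louder.
let video, poseNet, pose, organ;

function preload() {
  organ = loadSound('organ.mp3');      // placeholder sound file
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); video.size(640, 480); video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', p => { if (p.length) pose = p[0].pose; });
}

function mousePressed() {
  userStartAudio();                    // browsers require a user gesture before audio starts
  if (!organ.isPlaying()) { organ.loop(); organ.setVolume(0); }
}

function draw() {
  image(video, 0, 0, width, height);
  if (!pose) return;
  // Take whichever wrist is higher (smaller y) and map screen height to a 0..1 volume.
  const wristY = min(pose.leftWrist.y, pose.rightWrist.y);
  const vol = map(wristY, height, 0, 0, 1, true);
  organ.setVolume(vol);
  // Simple volume bar as visual feedback.
  fill(0, 200, 100); noStroke();
  rect(20, height - 20, 100 * vol, 10);
}
```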

Study 4 Touchable Triangle

Present link  https://editor.p5js.org/xinzhang.jessy/present/hMkplGr1P

Edit link  https://editor.p5js.org/xinzhang.jessy/sketches/hMkplGr1P

Video  https://ocadu.techsmithrelay.com/CQL4

(The sound material is from ‘Free Sound’, www.freesound.org)


In this work, the whole body can be considered part of the instrument. Whenever the performer stands up, the sound is triggered. Even without any musical experience, the player can produce their own music by moving their physical body.

Study 5 Swing Switch

Present link  https://editor.p5js.org/xinzhang.jessy/present/hirJCrC2i

Edit link  https://editor.p5js.org/xinzhang.jessy/sketches/hirJCrC2i

Video  https://ocadu.techsmithrelay.com/G4Mi   

(The sound material is from ‘Audiomicro’, audiomicro.com)


The digital world has helped us define and understand more actions. Digital music players have back and forward functions, normally represented by arrows pointing left and right. This inspired me to apply the principle in 3D physical space, using the body as a controller. When the performer moves to the left, the first piece of music is triggered, while moving to the other side triggers the other one. These movements can also be seen as dancing, which has a close relationship with music.

Bibliography

Naoyuki Houri, Hiroyuki Arita, and Yutaka Sakaguchi, ‘Audiolizing Body Movement: Its Concept and Application to Motor Skill Learning’, 2011.

The Nordic House, ‘Lines – Interactive Sound Art Installation’, https://nordichouse.is/en/event/lines-gagnvirkt-hljodraen-innsetning/

Anders Lind, http://www.soundslikelind.se/

Espie Estrella, ‘An Introduction to the Elements of Music’, https://www.liveabout.com/the-elements-of-music-2455913

Xiao Xiao, Basheer Tome, and Hiroshi Ishii (MIT Media Lab), ‘Andante: Walking Figures on the Piano Keyboard to Visualize Musical Motion’, 2014.

M. Leman and R. I. Godøy, ‘Why Study Musical Gestures?’, in R. I. Godøy and M. Leman, editors, Musical Gestures: Sound, Movement, and Meaning, Routledge, 2010.

Alexander Refsum Jensenius, ‘Action – Sound: Developing Methods and Tools to Study Music-Related Body Movement’, 2007.

Jan C. Schacher, ‘Moving Music – Exploring Movement-to-Sound Relationships’, 2016.

 


Coastal Ocean

Grace Yuan

Project Description:

Coastal Ocean is a series of five web-based interactive games inviting players to dive into a coastal underwater area at different points in time from 1800 to 2020. Together, the five games create an immersive journey in which players time-travel through the history of the collapse of the coastal oceans. The series is made in p5.js, and the body tracking is based on the PoseNet library.

This series of experiences focuses on how global warming and human activities such as overfishing and littering have impacted the coastal ocean ecosystem. Players are encouraged to interact with the games through specific hand gestures, including raising and crossing their hands, which embody actions that should be taken to protect the vulnerable marine ecosystem. The moments of adopting these particular poses are meant to be inspiring: as players explore the games, the bigger question of what we can do as individuals to protect the marine environment becomes evident. The results of human disturbance to the coastal oceans are shown step by step in the first four games, illustrating the process of fish extinction, coral bleaching, and kelp forest reduction. In the last game, Coastal Ocean: 2020, the slogan “Marine Conservation” is put up, surrounded by the restored ecosystem, advocating for a healthy future for the ocean.
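As a sketch of how two of these gestures might be detected from an ml5 PoseNet pose object (assuming un-mirrored video coordinates and PoseNet’s named keypoints; the thresholds implicit in the comparisons are illustrative, and the helper names in the usage comment are hypothetical):

```javascript
// "Hands raised": both wrists above their respective shoulders.
function handsRaised(pose) {
  return pose.leftWrist.y < pose.leftShoulder.y &&
         pose.rightWrist.y < pose.rightShoulder.y;
}

// "Hands crossed": PoseNet labels anatomical left/right, so with an un-mirrored feed
// the left wrist normally has a larger x than the right; the order flips when arms cross.
function handsCrossed(pose) {
  return pose.leftWrist.x < pose.rightWrist.x;
}

// Example use inside draw(), given a `pose` updated by poseNet.on('pose', ...):
// if (pose && handsRaised(pose)) liftTheNets();      // hypothetical game action
// if (pose && handsCrossed(pose)) raiseTheSlogan();  // hypothetical game action
```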

#1 Coastal Ocean: 1800

Present: https://editor.p5js.org/grace.yuan/present/8VjjcoORA

Edit: https://editor.p5js.org/grace.yuan/sketches/8VjjcoORA

The first game of the series represents the status of the coastal ocean ecosystem before the intervention of human activities – an environment with an abundance of marine creatures. When the player holds their hands together, a fish swims towards the diver’s hands and follows them, as if the fish is close to the diver and being treated preciously.

#2 Coastal Ocean: 1850

Present: https://editor.p5js.org/grace.yuan/present/Ws8riDR66

Edit: https://editor.p5js.org/grace.yuan/sketches/Ws8riDR66

The second game represents the early encounters between modern fishing boats and the coastal area. The kelp is gone, the color of the coral has faded, and there are fewer fish. When the player presses their hands together in a praying gesture after clapping their hands, the fish return to the coral reefs and avoid taking the bait.

#3 Coastal Ocean: 1950

Present: https://editor.p5js.org/grace.yuan/present/7apBSnU6q

Edit: https://editor.p5js.org/grace.yuan/sketches/7apBSnU6q

The third game represents the peak of overfishing. There are fewer fish, and the color of the coral continues to fade. When the player raises their hands, the diver lifts the fishing nets and the fish come out of the coral reefs.

#4 Coastal Ocean: 2000

Present: https://editor.p5js.org/grace.yuan/present/TEtElZfTh

Edit:https://editor.p5js.org/grace.yuan/sketches/TEtElZfTh

The fourth game represents the collapse of the coastal area. The corals are completely bleached, the fish are extinct, and there is only trash left in the water. When the player raises their hand, the diver catches the plastic bottle floating in the water. This game shows the result of overfishing and ocean littering: the once fertile seashore holds nothing but tons of garbage.

#5 Coastal Ocean: 2020

Present: https://editor.p5js.org/grace.yuan/present/IYkoOHT_o

Edit: https://editor.p5js.org/grace.yuan/sketches/IYkoOHT_o

The last game represents a hopeful future for the coastal ocean, where the ecosystem is restored. When the player crosses their hands, indicating saying no to commercial fishing along the coastline, the diver puts up the slogan “Marine Conservation” and the fish all swim around the board.

Project Context

The concept of creating a virtual coastal ocean experience is inspired by two influential works – the classic virtual aquarium screensaver and the documentary film Coastal Seas from Netflix’s Our Planet series.

The virtual aquarium screensaver is a nostalgic feature of the computer for me personally, because I always liked to stare at it when I was little. It is a replacement for a real fish tank and a comfort for those who can’t afford real pet fish. My original idea for this game series was to recreate my own version of the virtual aquarium, a “place” where I could stay to enjoy the underwater experience, revisit childhood memories, and achieve peace of mind. However, as I kept developing the aquarium on the screen, I couldn’t help thinking: what if one day in the future we can all afford a fish aquarium, but there are no more fish left in the ocean, and the digital aquarium is all we have as an archived history? The notions of owning, playing, remembering, destroying, and vanishing mixed together in my mind and led me to reflect on my second reference, Coastal Seas.

Our Planet: Coastal Seas is an informative episode full of stunning visuals of the waters around Raja Ampat, Indonesia, and depressing facts about coastal seas, offering viewers a feast for the eyes and food for thought simultaneously. The damage to the marine environment caused by human activities and global warming is terrible, and as soon as I learned about this history, I iterated on my original idea and decided to create a series of work that could help raise awareness of marine protection. During the research process, I came across the academic article “Historical Overfishing and the Recent Collapse of Coastal Ecosystems.” It informed me about the process of the collapse of the coastal ocean and the changes in key elements of the ecosystem, such as the kelp forests, the coral reefs, and the seagrass beds, which were later incorporated into my work. The article also illustrates a future of marine restoration; the potential of saving the coastal zones brings back hope, and I wanted to use it as an inspiring ending in my work.

With body tracking technology in hand, the question of how players’ body movements could reflect marine protection came up. My answer is to use simple but culturally meaningful hand gestures to show support and care for marine creatures and the environment. The goal is that, with the integrated visuals, sound, storytelling, and body movement, players will achieve an intuitive understanding of the environmental issue and be encouraged to help make the change.

Work Cited

Jackson, J. B. C. “Historical Overfishing and the Recent Collapse of Coastal Ecosystems.” Science, vol. 293, no. 5530, 2001, pp. 629–637., doi:10.1126/science.1059199.

“Our Planet: Coastal Seas.” Pearson, Hugh, director. Season 1, episode 4, 2019.

“Overfishing.” National Geographic, 29 July 2019, www.nationalgeographic.com/environment/oceans/critical-issues-overfishing/.


Virtual Context Interaction with Computer Vision


My series of studies focuses on simulating natural forces through computer graphics algorithms using PoseNet and Clmtrackr, which were introduced in class. With these APIs for p5.js, I am able to use a webcam to track my body and face, turning them into controllers. By moving their hands, head, or mouth, viewers can affect the particles’ movement and trigger events in my studies. Augmented reality has recently become a popular research and marketing topic; my project explores the possibility of interacting with virtual elements within physical space and how our bodies engage in that interconnection. Direct and metaphoric control are the two main methods: one triggers an event directly through a certain action, while the other influences the trajectories of elements predefined by an algorithm. Daniel Shiffman’s tutorials on particle movement algorithms provided the foundation of my work; all of the generated graphics in these studies are based on his tutorials. My professors, Kate Hartman and Nicholas Puckett, also provided amazing tutorials on body tracking to trigger events and on drawing shapes with the face tracking API.

1

Present: https://jieguann.github.io/CreationAndComputation/BodyAsControl/work/Sketch1/

Code:https://github.com/jieguann/CreationAndComputation/blob/master/BodyAsControl/work/Sketch1/sketch.js

This sketch attempts to immerse viewers in a dark night influenced by Van Gogh, where they can observe and interact with the movement of the stars on screen. Viewers raise one hand in front of their webcam and move it slowly to observe how it affects the stars’ movement. The direction of the stars is based on Perlin noise, as introduced by Shiffman. By modifying the noise field with the hand position, the stars’ paths follow the position of the viewer’s hand.
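A minimal version of that idea, assuming ml5 PoseNet for the wrist keypoint; the noise scale, bias strength, and star count are arbitrary, and the gentle pull toward the hand stands in for however the original modifies the noise field:

```javascript
// Star particles drift on Perlin-noise flow, gently biased toward the tracked wrist,
// so a slow hand movement visibly bends their paths.
let video, poseNet, wrist = null;
let stars = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); video.size(640, 480); video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', p => { if (p.length) wrist = p[0].pose.rightWrist; });
  for (let i = 0; i < 300; i++) stars.push(createVector(random(width), random(height)));
}

function draw() {
  background(10, 10, 40, 60);          // translucent background leaves faint trails
  noStroke(); fill(255, 240, 180);
  for (const s of stars) {
    // Base motion: an angle read from Perlin noise over position and time.
    const a = noise(s.x * 0.005, s.y * 0.005, frameCount * 0.01) * TWO_PI * 2;
    s.x += cos(a); s.y += sin(a);
    // Bias: nudge each star slightly toward the wrist when one is tracked.
    if (wrist) {
      s.x += (width - wrist.x - s.x) * 0.002;   // mirrored x
      s.y += (wrist.y - s.y) * 0.002;
    }
    // Wrap around the edges so stars never disappear.
    s.x = (s.x + width) % width; s.y = (s.y + height) % height;
    circle(s.x, s.y, 3);
  }
}
```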

 

2

Present: https://jieguann.github.io/CreationAndComputation/BodyAsControl/work/Sketch2/

Code: https://github.com/jieguann/CreationAndComputation/blob/master/BodyAsControl/work/Sketch2/sketch.js

The sketch places viewers in outer space and turns their head into a black hole that attracts the photons in the darkness. By moving their head through the space, viewers can watch the photons follow it, creating a real-time interactive animation that simulates how a black hole works. This work follows Shiffman’s 2D Black Hole Visualization tutorial for the attraction algorithm. To make the black hole follow the viewer’s head, I take the nose keypoint and attach the hole to it.
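A hedged sketch of the attraction, using a simple inverse-square-style pull toward the mirrored nose position (PoseNet via ml5); the constants and the photon respawn rule are invented for illustration and are not Shiffman’s or the project’s exact algorithm:

```javascript
// "Black hole" head: photons are pulled toward the tracked nose.
let video, poseNet, nose = null;
let photons = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); video.size(640, 480); video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', p => { if (p.length) nose = p[0].pose.nose; });
  for (let i = 0; i < 200; i++) {
    photons.push({ pos: createVector(random(width), random(height)),
                   vel: p5.Vector.random2D().mult(2) });
  }
}

function draw() {
  background(0, 40);                                     // translucent black leaves light trails
  const centre = nose ? createVector(width - nose.x, nose.y) : null;   // mirrored x
  if (centre) { stroke(80); fill(0); circle(centre.x, centre.y, 60); } // the "hole" itself
  stroke(255, 230, 120); strokeWeight(2);
  for (const ph of photons) {
    if (centre) {
      const pull = p5.Vector.sub(centre, ph.pos);
      const d = constrain(pull.mag(), 20, 200);
      pull.setMag(400 / (d * d));                        // stronger pull when closer
      ph.vel.add(pull).limit(4);
    }
    ph.pos.add(ph.vel);
    point(ph.pos.x, ph.pos.y);
    // Respawn photons that drift off screen so the field never empties.
    if (ph.pos.x < 0 || ph.pos.x > width || ph.pos.y < 0 || ph.pos.y > height) {
      ph.pos.set(random(width), random(height));
      ph.vel = p5.Vector.random2D().mult(2);
    }
  }
}
```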

 

3

Present: https://jieguann.github.io/CreationAndComputation/BodyAsControl/work/Sketch3/

Code: https://github.com/jieguann/CreationAndComputation/blob/master/BodyAsControl/work/Sketch3/sketch.js

This sketch offers viewers a fish-like experience. By applying a dark blue tint to the video, viewers feel as if they are in the deep sea. When they open their mouths, bubbles come out. This creates an augmented-reality experience in which the mouth triggers a virtual event on screen; I think this interaction expands the question of human-computer interfaces beyond physical touch. To trigger the bubble event when the mouth opens, I take the top and bottom points of the mouth and calculate the distance between them; this distance also affects the speed of the bubble particles.

4

Present: https://jieguann.github.io/CreationAndComputation/BodyAsControl/work/Sketch4/

Code: https://github.com/jieguann/CreationAndComputation/blob/master/BodyAsControl/work/Sketch4/sketch.js

Like the third sketch, this piece uses the bottom third of the screen to create a river, giving viewers a virtual diving experience. Once the viewer’s nose is under the water, bubbles come out. The sketch uses the face tracking API to track the nose point; once the nose’s position falls into the bottom third of the screen, the bubble event is triggered.

5

Present: https://jieguann.github.io/CreationAndComputation/BodyAsControl/work/Sketch5/

Code:https://github.com/jieguann/CreationAndComputation/blob/master/BodyAsControl/work/Sketch5/sketch.js

The sketch attempts to immerse viewers in a starfield where they can control their speed of flight through space. I visualized a speed controller in the middle; viewers can move it up and down by moving their faces. When the controller goes up, the speed of movement through space increases. Shiffman’s starfield-in-Processing tutorial inspired this sketch. Traditionally, the speed of a vehicle is usually controlled with the hands; nevertheless, with the development of computer vision techniques, I believe more innovative forms of interaction between humans and machines will be developed.

Project Context

As Levin describes, computer vision algorithms are increasingly used in artworks to track people’s activities (475). He writes that “techniques exist which can create real-time reports about people’s identities, locations, gestural movements, facial expressions, gait characteristics, gaze directions, and other characteristics” (Levin 475). Experiment 1 was a great opportunity for us to try computer vision, as using a pre-trained model simplified the process. It helped us explore how a computer can understand human activity through the camera.

Senior addresses five major paradigms for vision-based interactive artwork, and my sketches belong to the mirror interface (37). The mirror interface is a principal paradigm of interactive art that presents a mirror-like surface to viewers (Senior 38). He introduces “Magic Morphin Mirror” (1997) by Darrell et al., in which an image captured by the camera shows viewers their faces distorting and warping; the faces are detected and tracked by a face detection algorithm and a complex effect is applied to them (Senior 38). My sketches use a webcam to capture video of the viewer and reflect it on screen. By capturing face and body movement, effects are applied to the real-time video and events are triggered.

Shao presents a way to interact with virtual content through IoT-enabled devices (439), and I believe this kind of interaction with virtual content deserves attention. The follow-up work, "Toward Visualization and Interaction with Hybrid Objects and Avatars," acquires more information from the context; the aspect that interests me is its use of computer vision to count how many people are in the room and, through fuzzy logic, let that affect the avatar's emotional state (Guan, 857). This is what I would call metaphoric interaction: the agent is not directly controlled by the human; rather, human activity affects it through an algorithm. It is similar to my first sketch, in which the movement of the human's hand affects the movement of the stars.

Works Cited

Guan, Jie, et al. “Exploring a Mixed Reality Framework for the Internet-of-Things: Toward Visualization and Interaction with Hybrid Objects and Avatars.” 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). IEEE, 2020.

Levin, Golan. “Computer vision for artists and designers: pedagogic tools and techniques for novice programmers.” AI & SOCIETY 20.4 (2006): 462-482.

Senior, Andrew W., and Alejandro Jaimes. “Computer vision interfaces for interactive art.” Human-Centric Interfaces for Ambient Intelligence. Academic Press, 2010. 33-48.

Shao, Yiyi, Nadine Lessio, and Alexis Morris. “IoT avatars: mixed reality hybrid objects for core ambient intelligent environments.” Procedia Computer Science 155 (2019): 433-440.

 

Face ON by Mairead Stewart

Face ON 

By Mairead Stewart

DIGF 6037: Creation & Computation, Experiment 1

October 2, 2020

Project Description

As facial recognition and image classification algorithms begin to dominate our digital spaces, social networking sites increasingly rely on these technologies to categorize and assess images posted by their users. These algorithms may be used to promote favourable content, but they can also aid in down-ranking or even censoring images. This places a significant responsibility on social media companies to maintain fair and non-discriminatory image classification algorithms, something they are currently struggling to do. Exacerbating these issues, studies show that sites such as Facebook and Instagram tend to select for images of human faces, rewarding users who share their own likenesses with more views (Bucher, 2012; Cotter, 2018). In a business model in which the user's image becomes the commodity of large social media companies, these image classification algorithms must be examined with a critical eye.

Like the platforms it critiques, Face ON incentivizes users to share their face by providing five interactive art exhibits that work only when a user's face is in view of the camera. As soon as a user covers their face, the art on the screen begins to glitch and dissolve into greyscale, symbolizing the shadowbans and censorship that threaten users who do not conform to the guidelines of a social media platform. The project critiques the ways in which social media users' bodies and faces are commodified, categorized, and shared in ways they can neither see nor control.
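A sketch of this shared gating mechanic, using ml5.js Facemesh as a stand-in for whichever face tracker the sketches actually use: while at least one face is detected the artwork draws in colour; when the face disappears, the composition jitters and drops to greyscale.

```js
// Face-present gate: colour and stable when a face is visible,
// greyscale and glitchy when it is covered (sketch only).
let video, facemesh, faceVisible = false;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  facemesh = ml5.facemesh(video, () => console.log('Facemesh ready'));
  facemesh.on('predict', results => (faceVisible = results.length > 0));
}

function draw() {
  colorMode(HSB);
  background(0, 0, 10);

  let x = width / 2, y = height / 2;
  if (!faceVisible) {
    // "Censorship" state: jitter the composition.
    x += random(-15, 15);
    y += random(-15, 15);
  }

  const saturation = faceVisible ? 80 : 0; // greyscale when the face is hidden
  noStroke();
  fill(frameCount % 360, saturation, 90);
  rect(x - 50, y - 50, 100, 100);
}
```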

Sketch One

experiment_1-1

experiment-1-1

Present: https://editor.p5js.org/mairead/present/kGH_zzgBNa

Edit: https://editor.p5js.org/mairead/sketches/kGH_zzgBNa

Two squares move independently of one another in a smooth pattern around the screen. Variations in the colour of the stroke and background are driven by the movement of the viewer's nose, while the movement of the squares is driven by a Perlin noise function. When a viewer covers their face, the artwork fades to black and white and the movement of the two squares begins to glitch and jump. This project is based on an earlier sketch I created that had one square rotating around the mouse.
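A minimal sketch of these two drivers (assumed variable names; `pose` stands in for whichever face or pose tracker supplies the nose point, as in the gate example above):

```js
let t = 0;

function draw() {
  colorMode(HSB);
  // Hue follows the nose's horizontal position.
  const hue = pose ? map(pose.nose.x, 0, width, 0, 360) : 0;
  background(hue, 30, 95);

  // Perlin noise moves the square smoothly; a second noise
  // channel keeps x and y independent.
  const x = noise(t) * width;
  const y = noise(t + 1000) * height;
  noStroke();
  fill(hue, 80, 60);
  rect(x, y, 60, 60);

  t += 0.01;
}
```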

Sketch Two

experiment_1-2

experiment-1-2

Present: https://editor.p5js.org/mairead/present/BuY78iVWg

Edit: https://editor.p5js.org/mairead/sketches/BuY78iVWg

This study uses audio inputs to create a multicoloured audio visualization installation. Variations in colour are caused by sound level and random functions, which also control the rate of fading. Viewers are encouraged to make noise to interact with the artwork, but as soon as they hide their face, the screen fades to black and white and the colourful sound wave becomes dull and glitchy.
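A sketch of the sound-level input using p5.sound's AudioIn (the colours and level bar are illustrative, not the project's exact visuals); note that browsers require a user gesture before audio capture can start:

```js
let mic;

function setup() {
  createCanvas(640, 480);
  mic = new p5.AudioIn();
  mic.start();
}

function mousePressed() {
  userStartAudio(); // unlock audio after a user gesture
}

function draw() {
  const level = mic.getLevel();                   // 0.0 (silence) to ~1.0
  colorMode(HSB);
  const hue = map(level, 0, 0.3, 200, 360, true); // louder -> warmer colour
  background(hue, 70, 90);

  // A simple level bar stands in for the colourful waveform.
  noStroke();
  fill(hue, 90, 50);
  rect(0, height - level * height, width, level * height);
}
```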

Sketch Three

experiment_1-3

experiment-1-3

Present: https://editor.p5js.org/mairead/present/-opc6PHkk

Edit: https://editor.p5js.org/mairead/sketches/-opc6PHkk

Circles move in a wave-like or grid-like pattern, their position and rate of speed determined by a user’s head movements. As they slide past each other, they create a dissolving trail. As soon as the viewer covers their face, however, the circles begin to float upwards and downwards with a more pronounced dissolving trail while the background fades to black and white.

Sketch Four

experiment_1-4

experiment-1-4

Present: https://editor.p5js.org/mairead/present/7rLCxJNqE

Edit: https://editor.p5js.org/mairead/sketches/7rLCxJNqE

Lines of varying lengths crosshatch their way down the screen, returning to the top when they reach the bottom edge. The sound input of the viewer determines the length of each stroke. As soon as the viewer hides their face, however, the crosshatch pattern becomes irregular and glitchy and the colours fade to black and white.

Sketch Five

experiment_1-5

experiment-1-5

Present: https://editor.p5js.org/mairead/present/WfXJbRmeL

Edit: https://editor.p5js.org/mairead/sketches/WfXJbRmeL

Lines radiate from the centre of the screen, creating a kaleidoscope effect. The number of lines is controlled by the movement of the viewer’s head while the complexity of the pattern is caused by the viewer’s noise level. When a viewer covers their face, the pattern stops rotating and begins to glitch, fading to black and white.

Project Context

The content moderation and ranking algorithms of large social networking sites have long been a source of scholarship and criticism, in part because of the crucial role they can play in spreading fake news, inciting protests, and radicalizing users’ ideologies (Barrett, 2020). Moreover, these algorithms constitute a black box, preventing scholars and journalists alike from examining their internal systems. 

There is some information we can glean about these sites from simple observation, however. In her article, "Playing the visibility game: How digital influencers and algorithms negotiate influence on Instagram," author Kelley Cotter acknowledges that Instagram divulges few details about its content ranking algorithm, but argues that by examining the rhetoric and behaviours of social media influencers, we can begin to see what content the platform's algorithm favours (p. 898). Reliant on the Instagram ranking system to provide engagement, influencers follow behavioural norms laid out by the platform itself. Cotter describes the rule-heavy language influencers use when discussing Instagram's algorithms online, noting the way they "gesture to the sovereignty of Instagram and its algorithmic architecture, characterizing them with language reminiscent of government authorities or law enforcement" (p. 902).

This threat of penalization and consequent down-ranking by an algorithm is prevalent on Facebook as well. In "Want to be on the top? Algorithmic power and the threat of invisibility on Facebook," author Taina Bucher takes Foucault's concept of panopticism – or constant surveillance – and subverts it, arguing that the punishment for disobedience on Facebook is actually invisibility (2012). While Foucault saw constant visibility as a punishment, social networking sites use visibility as a reward for following their specific guidelines and will down-rank or remove content that is less desirable.

What counts as desirable content is also interesting to note. In a paper for New York University's Centre for Business and Human Rights, author Paul Barrett lays out the history, expectations, and experiences of the people who work in Facebook's content moderation system (2020). He mentions the role of Facebook's algorithms in major socio-political events such as the Rohingya crisis, as well as their failure to properly censor inflammatory and violent content. According to Barrett, Facebook's algorithm is inclined to over-censor neutral content while simultaneously ignoring white supremacist sentiments (2020). This leads to a dangerous situation in which content created by minorities may be censored more than hate speech against that same group. In these examples and many more, scholars are concerned with the amount of power these algorithms hold and the level of secrecy employed by the social media platforms themselves (Cotter, 2018).

References

Barrett, P. (2020). Who moderates the social media giants? A call to end outsourcing. Stern Centre for Business and Human Rights, 1–29.

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180. 

Cotter, K. (2018). Playing the visibility game: How digital influencers and algorithms negotiate influence on Instagram. New Media & Society, 21(4), 895–913.