Category Archives: Screen Space

An Interactive Experience on Bioluminescence by

Sunidhi Naik

Mona Safari

Wentian Zhu

LINKS TO THE P5js EDITOR

  1. Scene 1: https://editor.p5js.org/Ms.vive/sketches/UpGvpfYYXT
  2. Scene 2: https://editor.p5js.org/zhuwentianzz/sketches/Bn8X6RDx2
  3. Scene 3: https://editor.p5js.org/sunidhinaik/sketches/etC3pYFVk

LINKS TO THE P5js FULLSCREEN

  1. Scene 1: https://editor.p5js.org/Ms.vive/full/UpGvpfYYXT
  2. Scene 2: https://editor.p5js.org/zhuwentianzz/full/Bn8X6RDx2
  3. Scene 3: https://editor.p5js.org/sunidhinaik/full/etC3pYFVk


Links to the Scientific Research articles:

  1. BIOLUMINESCENCE
  2. The Facts of Deep-Sea Bioluminescence
  3. Why Are Fireflies Disappearing?
  4. 8 Brilliant Bioluminescent Animals
  5. Bioluminescence vs Biofluorescence

Links to the Artistic Inspirations:

  1. Jellyfish Village by Andreas Rocha
  2. Swamp Dwellings by Tarmo Juhola
  3. Birch Forest by Marcel Lenoir
  4. Bottle Bass by Leo Vann
  5. Magic Fingertips by Roelof Knol

Links to the related Projects:

  1. Digital Graffiti Wall – Stencils Feature by Tangible Interaction
  2. Nightrise by Moment Factory


VIDEO LINK OF THE FINAL PRESENTATION

https://www.youtube.com/watch?v=gzdloGFWhaw

VIDEO LINK OF INTERACTION ON THE PRESENTATION DAY

https://mega.nz/file/2kBjwKbK#xHCYaDTnlrGVeZn58m5b91nXL0NJYzCxVl2CFX8Dri4

BIBLIOGRAPHY

  1. Team, The Ocean Portal. “Bioluminescence.” Smithsonian Ocean, 18 Dec. 2018, https://ocean.si.edu/ocean-life/fish/bioluminescence.

  2. Montereybayaquarium.org, https://www.montereybayaquarium.org/stories/bioluminescence.

  3. Vartan, S. (2022) Why are fireflies disappearing?, Treehugger. Treehugger. Available at: https://www.treehugger.com/why-are-fireflies-disappearing-4868708 (Accessed: October 24, 2022).

  4. Glow Effects: “Easiest Glow Effect in p5.Js (2 Lines of Code).” YouTube, YouTube, 4 Dec. 2021, https://www.youtube.com/watch?v=iIWH3IUYHzM&t=125s.

  5. PoseNet: shiffman. “Ml5.Js Pose Estimation with PoseNet.” YouTube, YouTube, 9 Jan. 2020, https://www.youtube.com/watch?v=OIo-DIOkNVg.

  6. Badore, Margaret. “8 Brilliant Bioluminescent Animals.” Treehugger, Treehugger, 4 Apr. 2022, https://www.treehugger.com/nature-blows-my-mind-brilliant-bioluminescent-creatures-4858604.

  7. Science World Report. “Bioluminescence vs Biofluorescence: The Science of Glowing Seashores, Fluorescent Frogs, Sharks, Turtles and Parrots.” Science World Report, 20 Mar. 2017, https://www.scienceworldreport.com/articles/57788/20170320/bioluminescence-vs-biofluorescence-glowing-sea-shores-fluorescent-frogs.htm.


  8. Hand Tracking: “P5.Js Web Editor.” P5js.org, https://editor.p5js.org/StevesMakerspace/sketches/VCFXH2lOh. Accessed 25 Oct. 2022.

  9. Artstation.com, https://www.artstation.com/artwork/J9ZA40. Accessed 25 Oct. 2022.

  10. Artstation.com, https://www.artstation.com/artwork/J9b8ez. Accessed 25 Oct. 2022.

  11. Artstation.com, https://www.artstation.com/artwork/eaozXY. Accessed 25 Oct. 2022.

  12. “Instagram.” Instagram, https://www.instagram.com/p/Ceejmx3Bv6x/?igshid=YmMyMTA2M2Y=. Accessed 25 Oct. 2022.

  13. “—.” Instagram, https://www.instagram.com/p/CFP6tNYnSk5/?igshid=YmMyMTA2M2Y=. Accessed 25 Oct. 2022.

  14. Wigington, Patti. “The Magic & Folklore of Fireflies.” Learn Religions, 3 July 2014, https://www.learnreligions.com/the-magic-and-folklore-of-fireflies-2562505.

  15. Hakai Magazine. “The Secret History of Bioluminescence.” Hakai Magazine, https://hakaimagazine.com/features/secret-history-bioluminescence/. Accessed 25 Oct. 2022.

 

Virtuālis Puppet

Synopsis: Creation of a puppet storytelling environment in screen spaces through the use of hand tracking and hand motion controls for staging short stories for an audience.

Cast: Mufaro Mukoki as Venus; Ricardo ‘Ricky’ Quiza Suárez as Jupiter


ACT 1 – OVERTURE; The referents

Puppet Narrator. Article: Hand gesture-based interactive puppetry system to assist storytelling for children.


Figure 1: Both images illustrate different Puppet Narrator features. (Left) The system architecture, composed of three parts: input, motion control, and output. (Right) Basic gesture control: an example of using a combination of gestures to steer and manipulate the puppet. (a) Stretch. (b) Grip.

Authors: Hui Liang is a Marie Curie senior research fellow at the National Centre for Computer Animation (NCCA), Bournemouth University, UK, and an associate professor at the Communication University of China; Dr. Ismail K. Kazmi is a Senior Lecturer in Games Programming/Development at Teesside University, where he teaches a wide range of courses in Computer Science and Game Development; Peifeng Jiao is a lecturer at the basic school of the Southern Medical University of China; Jian J. Zhang is Professor of Computer Graphics at the NCCA, where he leads the National Research Centre; and Jian Chang is a Professor and active scientist in computer animation with over 15 years of research experience at the NCCA.

This article was a pivotal point of information gathering and referential research for our project. With this system, the authors intended to develop narrative ability in the virtual story world. Depth motion sensing and hand gesture control technology were utilized to implement user-friendly interaction. A key driver for developing the Puppet Narrator was ‘how digital techniques have been used to assist narrative and storytelling, especially in many pedagogical practices; with the rapid development of HCI techniques, saturated with digital media in their daily lives, young children, demands more interactive learning methods and meaningful immersive learning experiences’ (Liang, Hui, et al. 517).

The article’s abstract proposes the creation of a hand gesture-based puppetry storytelling system for young children: players intuitively use hand gestures to manipulate virtual puppets to perform a story and interact with different items in the virtual environment to assist narration. Analyzing the data collected in this article helped us scope and give form to our screen space exercise. This data includes how interaction through the system architecture happens (input, motion control, output), what kinds of hand gestures can be used, which skills and areas are trained through its use (narrative ability, cognitive skills, and motor coordination ability), what technologies were used in its realization and how, and what was accomplished through the practical application of a prototype of the Puppet Narrator system with a selected target audience.

Handsfree.js. Library: an online library with multiple example projects demonstrating its potential.


Figure 2: Both images illustrate different Handsfree.js projects. (Left) Laser pointers but with your finger (Right) Hand pointers and scrolling text. More examples at https://handsfree.dev/.

Author: Oz Ramos, a.k.a. Metamoar / Midifungi, a generative and fxhash artist exploring compositions with p5.js, who has been exploring gesture-driven creative coding for about four years.

Handsfree.js started back in 2018 while the author was homeless, as a way to help a friend recovering from a stroke at the shelter where they lived navigate the web with face gestures. Over time, and through the support and encouragement of many people, the project grew to become much more than that, expanding into a full-grown library with the capacity for both face and hand tracking. For hand tracking, it differs from ml5’s handpose library in that it can detect multiple hands, up to four, a key capability for our project.

As we began code-sketching prototypes of our project, we encountered one issue: the libraries we were working with only detected one hand. The Handsfree.js library became key in overcoming that limitation, but it offered more than just that. The online repositories of Oz Ramos, the author, documenting his and others’ explorative use of Handsfree.js were extremely useful, both as inspiration and code-wise; these examples allowed us to better understand how the hands can interact with and influence the screen space. While we manually explored ways to adapt the library to our screen space, we also incorporated plugins, like pre-established hand gestures (pinching fingers), to trigger functions within the screen space.
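To make the multi-hand capability concrete, here is a minimal p5.js sketch of how up to four hands could drive the screen space. It is an illustration rather than our project code: it assumes the Handsfree.js browser build is loaded via a script tag, that handsfree.data.hands.landmarks holds one array of 21 normalized MediaPipe landmarks per detected hand, and that maxNumHands is a valid configuration key (these data-access details may differ between library versions). Instead of the library’s pinch plugin, it derives a pinch from the thumb-tip/index-tip distance.

```javascript
// Minimal multi-hand sketch (an illustration, not the project's code).
// Assumes the Handsfree.js browser build is loaded and that
// handsfree.data.hands.landmarks is an array of hands, each with 21
// normalized MediaPipe landmarks; maxNumHands is an assumed config key.
let handsfree;

function setup() {
  createCanvas(640, 480);
  handsfree = new Handsfree({ hands: { enabled: true, maxNumHands: 4 } });
  handsfree.start();
}

function draw() {
  background(20);
  const hands = handsfree && handsfree.data && handsfree.data.hands;
  if (!hands || !hands.landmarks) return;

  hands.landmarks.forEach((hand, i) => {
    if (!hand || hand.length < 9) return;
    // MediaPipe indices: 4 = thumb tip, 8 = index fingertip (values 0..1).
    const tx = hand[4].x * width, ty = hand[4].y * height;
    const ix = hand[8].x * width, iy = hand[8].y * height;
    const pinching = dist(tx, ty, ix, iy) < 30;   // simple hand-rolled pinch test

    noStroke();
    fill(pinching ? color(255, 140, 0) : 255);
    circle(tx, ty, 12);
    circle(ix, iy, 12);
    if (pinching) text('hand ' + i + ': pinch', 10, 20 + i * 16);
  });
}
```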


ACT 1 – ARIA; The idea

Marionettes have been a part of many world cultures for thousands of years and have played a unique role in storytelling. Although puppetry was never reserved solely for the entertainment of children, our project focused on using it to assist storytelling for children. Puppetry continues to be a respected form of art; however, the new generation, born into technology, is not much fascinated by the physical instruments of play around them. They have spent the better part of their lives glued to screens. It is important for educators to devise more interactive learning methods and meaningful immersive learning experiences (Liang, Hui, et al. 517).

 In our project, we attempted to answer the following questions:

  • How can human-computer interaction assist in learning and communication?
  • What are new ways of engaging in education to facilitate learning?
  • How does HCI assist/improve engagement?


Our project uses Handsfree.js, which is easy to access through a browser. Handsfree.js uses hand tracking to position the cursor on the screen and incorporates hand gestures, creating an intuitive program. As a result, the user is able to use simple hand gestures to control the marionette puppet on the screen. To control the puppet, fingers are mapped to specific strings of the puppet. This results in a puppet that can move anywhere on the screen and whose arms make puppet-like movements. Other functions of the marionette puppet(s), such as mouth movements, are triggered by other hand gestures, like pinching the fingers.
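The following p5.js sketch illustrates the finger-to-string mapping in isolation. It is a simplified stand-in for the project code: the mouse and its mirror image substitute for two fingertips so the sketch runs on its own, whereas in the actual project those two points come from the hand-tracking landmarks.

```javascript
// Illustrative sketch: two "fingertip" points act as marionette strings.
// The mouse stands in for one fingertip, a mirrored point for the other;
// in the project these points would come from Handsfree.js landmarks.
function setup() {
  createCanvas(640, 480);
}

function draw() {
  background(30);
  const leftTip  = { x: mouseX, y: mouseY };           // stand-in fingertip 1
  const rightTip = { x: width - mouseX, y: mouseY };   // stand-in fingertip 2
  drawPuppet(leftTip, rightTip);
}

function drawPuppet(leftTip, rightTip) {
  const bodyX = (leftTip.x + rightTip.x) / 2;          // body hangs between the strings
  const bodyY = max(leftTip.y, rightTip.y) + 120;

  stroke(200);
  line(leftTip.x, leftTip.y, bodyX - 40, bodyY);       // left string to left hand
  line(rightTip.x, rightTip.y, bodyX + 40, bodyY);     // right string to right hand

  noStroke();
  fill(230, 180, 90);
  circle(bodyX, bodyY - 80, 50);                       // head
  rect(bodyX - 25, bodyY - 55, 50, 80, 10);            // torso
  circle(bodyX - 40, bodyY, 18);                       // left hand follows its string
  circle(bodyX + 40, bodyY, 18);                       // right hand follows its string
}
```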


Figure 3: A scene is composed of 1 to 2 puppets and interactive stage props. Hands are used to control puppet movement via fingertips, and gestures trigger different interactions.

The project provides a range of ready-made scenes, characters and props for educators to choose from to assist them in telling their chosen stories. If more than one scene is chosen, pressing the arrow keys on the keyboard will help them navigate from one scene to the next.
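A minimal sketch of that scene navigation: the arrow keys step through an array of scenes. The scene labels here are placeholders rather than the actual Cat Maiden scenes.

```javascript
// Minimal scene-switching sketch: arrow keys step through ready-made scenes.
const scenes = ["Scene 1", "Scene 2", "Scene 3"];   // placeholder scene labels
let current = 0;

function setup() {
  createCanvas(640, 480);
  textAlign(CENTER, CENTER);
  textSize(24);
}

function draw() {
  background(current * 60, 40, 80);                  // a different backdrop per scene
  fill(255);
  text(scenes[current], width / 2, height / 2);
}

function keyPressed() {
  if (keyCode === RIGHT_ARROW) current = min(current + 1, scenes.length - 1);
  if (keyCode === LEFT_ARROW)  current = max(current - 1, 0);
}
```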


Figure 4: The storytelling unfolds through multiple scenes, controlled by the arrow keys. For implementation, we chose Aesop’s fable The Cat-Maiden.


Figure 5: The interaction in the system occurs in a dialectic way; the puppeteer(s) controls action through screen space, reflected upon another screen space. Props can be used to expand upon the space where the action happens, mimicking puppeteer stands.

The final product is presented on a television-sized screen, and physical props, such as curtains and boxes, can be added around the screen space in an effort to emulate a traditional puppet show. An essential part of the program is that it must be user-friendly and require minimum effort from the facilitator, which means the facilitators can be the children themselves, with supervised assistance. This makes it easy for them to adapt to a variety of different stories. The facilitator still needs to stand behind the main screen, much in the fashion of a typical puppet show; however, they have the advantage of a smaller screen on which to coordinate their story. This means that they also get to see their narrative as it is experienced by others.


Figure 6: Final version of the coded system while being implemented.



ACT 3 – FINALE; The documentation


Figure 7: Body of work showing the evolution of the project.

For our presentation, we made use of a 70-inch TV connected to a laptop via HDMI. We stood and presented our puppet show in front of the laptop, and the output was shown on both the laptop and the television. As accessibility was an important consideration, we situated the puppet show in a small room to amplify the audio as much as possible without requiring additional hardware. Additionally, the presentation was accessed through the Chrome browser. The puppet show was presented to an audience who sat and/or stood in front of the large-screen television. In preparation for our presentation, we stood in front of the laptop to test the best distance to stand from it in order to move the puppets more easily (2.5 inches) and to prevent the software from picking up any additional hands that would disrupt the performance.

Upon reflection, we ought to have created a physical structure around the large-screen TV to emulate the stage of a puppet show and create a more immersive experience. Our main challenge and limitation was our own limited experience with coding. We experimented with various code sketches to mimic and simulate hand movements and to map our puppet strings to human fingers via the web camera.


Figure 8: One of the usability tests of different ways of applying hand detection on the screen and using it as a means of moving objects in the screen space.


Figure 9: Mock-up practices and usability test of the system. The interaction happens on a screen that the puppeteers use to conduct the show, while the puppet show is output to an external, bigger screen.

You can watch our presentation here: 

https://www.youtube.com/watch?v=6LSZq3VBrLo

The final code utilized in the presentation (Edit):

https://editor.p5js.org/ricardokiza654/sketches/94qop2xpi

The final code utilized in the presentation (FullScreen):

https://editor.p5js.org/ricardokiza654/full/94qop2xpi

 

Some notable sketches we created can be found on the following links:

https://editor.p5js.org/ricardokiza654/sketches/GDkoYSEVd

https://editor.p5js.org/ricardokiza654/sketches/Xa5LVG0yX

https://editor.p5js.org/mufaromukoki/sketches/mi2fY65HfG

https://editor.p5js.org/mufaromukoki/sketches/XFVMztb-l

 

To view more of our documentation visit this link.

 

Bibliography

Aesop. “The Cat-Maiden.” Aesop’s Fables. Lit2Go Edition. 1867. Web. <https://etc.usf.edu/lit2go/35/aesops-fables/377/the-cat-maiden/>. October 24, 2022.

Canada Museum of History. Qualities of a Puppet, Canada Museum of History, 2022, https://theatre.historymuseum.ca/narratives/details.php?lvl2=4812&lvl3=4826&language=english.

Fling, Helen. Marionettes: How to Make and Work Them. Dover Publications, 1973.

Flower, Cedric, and Alan Jon Fortney. Puppets: Methods and Materials. Davis Publications, 1993.

Liang, Hui, et al. “Hand Gesture-Based Interactive Puppetry System to Assist Storytelling for Children.” The Visual Computer, vol. 33, no. 4, 2016, pp. 517–531., https://doi.org/10.1007/s00371-016-1272-6

Mediapipe Solutions. “MediaPipe Hands.” Mediapipe, 2022, https://google.github.io/mediapipe/solutions/hands.html.

Ramos, Oz. “Handsfree.js Intro (Spring 2019).” Vimeo, Oz Ramos, 24 Oct. 2022, https://vimeo.com/476537051.

Ramos, Oz. “Hands-Free Libraries, Tools, and Starter Kits.” Handsfree.dev, 17 May 2022, https://handsfree.dev/.

Ramos, Oz. “MIDIBlocks/Handsfree: Quickly Integrate Face, Hand, and/or Pose Tracking to Your Frontend Projects in a Snap ✨👌.” GitHub, MidiBlocks, 2022, https://github.com/MIDIBlocks/handsfree.

Victory Infotech. “Indian Wedding Bride In Choli And Groom Kurta Pajama With Koti Couple Traditional Outfits Doodle Art PNG Picture.” Pngtree, Pngtree, 11 July 2022, https://pngtree.com/element/down?id=ODA3MTc2Ng&type=1&time=1666238499&token=NDllYmQ3MzdmY2JiYTkwMmRmYjg1MjEwMjBkYWE1M2M.

Paddle Player – Zaheen, Anusha, Prathistha

Inspiration

Artwork 1: A simple quick sketch made by Andre Burnier. The concept revolves around using the mouse click to make the ball interact with the various lines.

Name of the Artist – André Burnier

He is a graphic designer and coder based in Brazil. He completed his master’s in graphic design at AKV | St. Joost in Breda, Netherlands.

André focuses on researching the interaction between graphic design and programming. His expertise includes generative design, custom-made designer bots, as well as more traditional graphic design: logos, visual identities, and editorial work.

We were inspired by the simple graphics and movement of the ball to make a playful interaction with it, perhaps something similar to an interaction-based game, because we wanted the user experience to be fun.

In order to manipulate the overall experience, we studied the collisions of the balls. 

This also led us to search for other interactive games with balls and different objects and we came across the brick breaker game. 


https://www.instagram.com/p/CgkarqFOlN-/

Searching further for interactive games, we came across this large-scale game created by the Moment Factory. 

Artwork 2: HOTSPOT – An Interactive Sprint With Moving Targets

In the game, players must catch moving hotspots. The gameplay focuses on agility and speed; physical obstacles are optional.

One runner at a time – 3 minutes.

This game is 100% customizable and adapts to all types of environments.

Moment Factory is a multimedia studio with a full range of production expertise under one roof. Their team combines specializations in video, lighting, architecture, sound, and special effects to create remarkable experiences. With its headquarters based in Montreal, the studio also has offices in Paris, New York, Tokyo, and Singapore. They have created more than 500 unique shows and destinations. 


https://www.instagram.com/p/CgkarqFOlN-/

 

This inspired us to combine the world of a virtual game with physical space and also to explore real-time tracking and projection mapping. Virtual games can be integrated anywhere and tailored to any audience, making them ideal for public display. 

Incorporating physical activity into virtual gaming, which can be as demanding as a workout, can also be used to relieve stress or to warm up at work.

As games are often very fun, we were encouraged to create something which is simple but also engaging and interactive.

Conceptualization 

Breakout was among the first video games to be created, back in 1976, by one of the oldest video game developers, Atari. It came out as a modified version of Atari’s first game, Pong, and was referred to as “Pong turned on its side”. A concept created by Nolan Bushnell and Steve Bristow, it uses a paddle near the bottom of the screen to bounce a ball towards layers of bricks at the top of the screen. When the ball hits a brick, that brick is eliminated. The goal of the player is to eliminate all the bricks on the screen.


Following its success as an arcade game, a series of Breakout games was released by Atari in the following years, as well as numerous similar games by other developers. Breakout also inspired Steve Wozniak, its original engineer along with Steve Jobs, to create Brick Out in BASIC for the first Apple ][ computer – the first software version of the original hardware creation. Where the original was a black-and-white arcade machine that used cellophane strips on the screen to colour the bricks, this new version used both colour and sound to amplify the user’s experience of playing the game.

For our project, we analysed the three primary elements of the game – the bricks, the paddle and the ball – to see if we could manipulate these elements to create an interesting experience for the player, considering the increased possibilities that computer vision offered us now, compared to the simplicity of the original game. We found that the engagement offered simply by bouncing the ball off the paddle was very high, especially when the paddle was controlled by an unexpected body part like a nose or a wrist, and players found that experience itself to be fairly rewarding. So we decided to continue with an exploration of the possibilities of interaction between the player and the screen, while maintaining the simplicity of the gameplay.

Some of the questions that arose during this process of exploration, helping us formulate our final concept were:

  1. How does the scale of the screen space impact the player’s ability to continuously engage with the game?
  2. What is the value of the body in creating an engaging experience with a game that is traditionally played using computer hardware?
  3. Is it possible to draw the player into the screen space without physically placing them within the screen?
  4. How can analog tools like a piece of chalk and a blackboard amplify the user’s experience of the game?

In answering these questions, we found that the best user experience for our game came from presenting it at a life-sized scale, where the player is able to focus solely on the functionality of their body as the paddle and where the primary and only goal is to hit the ball. With no scorekeeping or ending, it functions as a way for the player to let go of any mental boundaries placed on the movement of their body and allows them to “enter” the screen space completely, almost forgetting about the presence of their body in the physical space.

To increase the effect of this experience, we decided to create the game with a black background and a brightly coloured ball and paddle that both changed colour based on the success of the player. This was achieved using p5.js for the design, PoseNet from ml5.js for the computer vision aspect, and p5.collide2D for the gameplay. This was then projected on a blackboard, stretching the limits of the screen beyond what the projector could offer and minimising the player’s awareness of the existence of a separate screen at all. Placing written guides on the screen indicating which two parts of the body created the paddle helped the player move more easily and offered a good level of amusement as well, by highlighting the absurdity of the placement of the paddle and how it would move. Ensuring the player could not see their actual body reflected on the screen in any way was key to minimising how conscious they were of their physical presence.
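The sketch below is a stripped-down illustration of that core loop rather than our final code (linked further down): it assumes ml5.js (PoseNet) and the p5.collide2D library are loaded via script tags, draws the paddle as the line between the two tracked wrists, and uses collideLineCircle() to bounce the ball off it. The colour changes on success and the written body-part guides are left out here.

```javascript
// Illustrative paddle sketch: the line between the two wrists is the paddle.
// Assumes ml5.js (PoseNet) and p5.collide2D are loaded via script tags.
let video, poseNet, pose;
let ball = { x: 320, y: 100, vx: 3, vy: 4, d: 30 };

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);                 // load the PoseNet model
  poseNet.on('pose', results => {
    if (results.length > 0) pose = results[0].pose;
  });
}

function draw() {
  background(0);                                // black background, no camera image shown

  // Move the ball and bounce it off the canvas edges.
  ball.x += ball.vx;
  ball.y += ball.vy;
  if (ball.x < 0 || ball.x > width) ball.vx *= -1;
  if (ball.y < 0 || ball.y > height) ball.vy *= -1;

  if (pose) {
    const a = pose.leftWrist;
    const b = pose.rightWrist;
    stroke(0, 255, 180);
    strokeWeight(12);
    line(a.x, a.y, b.x, b.y);                   // the body-controlled paddle

    // Reflect the ball when it touches the paddle line.
    if (collideLineCircle(a.x, a.y, b.x, b.y, ball.x, ball.y, ball.d)) {
      ball.vy *= -1;
    }
  }

  noStroke();
  fill(255, 200, 0);
  circle(ball.x, ball.y, ball.d);
}
```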

Explorations and User Testing Link: https://youtu.be/ZJkmakvHXEk

Documentation

Demonstration Link: https://youtu.be/ZDOcGTbguUk


The ideal state of our installation would be projected onto 2 screens wherein the person would have a bigger space to run around and play with the game providing a more wholesome experience.



Editable Code: https://editor.p5js.org/zaheensandhu/sketches/ibOwdHTrA

Fullscreen Experience: https://editor.p5js.org/zaheensandhu/full/ibOwdHTrA

Bibliography

“Fun Solutions to Transform Any Space into Immersive Playground.” Moment Factory, 2020, https://momentfactory.com/work/destinations/public-spaces/augmented-games.

https://www.andreburnier.com/about

https://momentfactory.com/work/destinations/public-spaces/augmented-games

https://momentfactory.com/about/segments/shows/lumina-night-walks


Çimen, E. Gökçe. “A Brief History of Brick Breaker Video Games.” Hero Concept, 9 Sept. 2018, https://www.heroconcept.com/a-brief-history-of-brick-breaker-video-games/. Accessed 4 Oct. 2022.

Wozniak, Steve. “How Steve Wozniak Wrote Basic for the Original Apple from Scratch.” Gizmodo, 1 May 2014, https://gizmodo.com/how-steve-wozniak-wrote-basic-for-the-original-apple-fr-1570573636.

 

Native vs. Invasive Particles

Team
Firaas Khan
Rim Armouch

Code

Fullscreen

For some time, artists have been exploring the different ways they can raise awareness around certain issues related to the environment and let people interact with nature through the medium of digital technologies.

Firaas being interested in physics and I in ecology, we decided to create an interactive installation that lets the spectator understand the way invasive species destroy and alter freshwater environments. It also sheds light on how human activities can contribute to the spreading of invasive species.

According to the Government of Canada, invasive species such as viruses, bacteria, parasites, and other micro-organisms, once introduced into a new environment outside of their natural range, can grow quickly because they have no natural predators in their new environment. As a result, they can outcompete and harm native species. They can even alter habitats to make them inhospitable for native species. This is especially concerning for species at risk.

Therefore, we asked ourselves: how can we raise awareness around environmental issues through technology?

This installation was created using an interconnected screen and webcam. It was designed in the p5.js web editor, and the interaction works by detecting a human nose through the webcam; once a nose is detected, an invasive species is introduced into the environment.
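A minimal p5.js sketch of that interaction, assuming ml5.js's PoseNet for the nose detection (this is an illustration of the mechanism, not our final code): native particles drift in the canvas, and while a nose is confidently detected an invasive particle is added every second.

```javascript
// Illustrative sketch: native particles drift; a detected nose (via PoseNet)
// introduces invasive particles into the freshwater environment.
let video, poseNet, noseDetected = false;
let natives = [], invasives = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', poses => {
    // Treat a high-confidence nose keypoint as "a person is present".
    noseDetected = poses.length > 0 && poses[0].pose.nose.confidence > 0.5;
  });
  for (let i = 0; i < 40; i++) natives.push(makeParticle(color(120, 220, 200)));
}

function makeParticle(c) {
  return { x: random(width), y: random(height), vx: random(-1, 1), vy: random(-1, 1), c };
}

function draw() {
  background(10, 30, 45);

  // A detected nose introduces one invasive particle per second (at 60 fps).
  if (noseDetected && frameCount % 60 === 0) {
    invasives.push(makeParticle(color(255, 90, 60)));
  }

  [...natives, ...invasives].forEach(p => {
    p.x = (p.x + p.vx + width) % width;      // drift and wrap around the edges
    p.y = (p.y + p.vy + height) % height;
    noStroke();
    fill(p.c);
    circle(p.x, p.y, 10);
  });
}
```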

 

 

Phase 01, Phase 02 and Phase 03 of the installation.

Animated GIF of our set-up.

Ideally, we would want our installation to be exhibited at a science fair or science museum, which is why we created small notes that describe what each particle is, and how it behaves in a freshwater environment.


 

 

 


Two projects that particularly inspired us are done by Thijs Biersteker:


The first one is Plastic Reflectic, an interactive kinetic installation that lets the spectator reflect on the growing plastic problem in our oceans.

 

Shaded Seas by Thijs Biersteker, commissioned by Science Gallery Dublin. Photo by the artist.

The second one is Shaded Seas, which shows that the power to change the overuse of plastic is in our own hands. The work sets out to keep creating awareness of the plastic problem in our oceans.

While both projects don’t necessarily use a screen or a webcam, they follow the same kind of interaction that we’re seeking in our project and shed light on similar issues. Additionally, Firaas and I thought that they could be recreated using p5.js, PoseNet, and machine learning.

Our project was developed over several weeks, and while we weren’t able to achieve everything we wanted with the code, we did go through an interesting research-through-design process that helped us choose how to design the particles, how to design the human-screen interaction, the environment we would like to create, and the way we want to present our installation.

 


Sketches made based on images gathered from our particles research.

A big part of our sketching process was related to the abstraction and the materiality of the particles. Do we want to abstract them? Make them realistic? How can we represent the fact that they are alive? How do they behave? What kind of properties do we want to give them?

Another part that we were also concerned with, was the environment in which they live.

 


Then, we decided to create a group of native particles that would float in a freshwater environment, which would later be invaded by another type of species introduced by human interaction with the screen.

 

 


 

We chose to go with an abstract and simple version of the process, making it accessible to a wide range of people. Since it’s a freshwater environment, we decided to display our experiment lying horizontally on a surface. Given how easily accessible screens with built-in webcams are, we decided to use a tablet for the user to interact with, making our project simple to install and portable.

We would like to consider our project as version 1 which can be enhanced over time. We already had a few ideas on how we can enhance the project:

+ Create an entire ecosystem and introduce more than 1 type of native particle that responds differently to the introduction of new particles

+ Elaborate on the idea of cycle and lifespan

+ Work on the visual approach. What happens if we add more details that could, later on, be used for additional behaviors?

+ Given that the iPad wasn’t processing our code efficiently, we did hear feedback related to the use of a projector and game engine, opening up the possibility to experiment with the shape of the screen

+ Elaborate more on the existing interactions, and add behaviors to the particles making the user journey even more interesting. Many users felt inclined to touch the screen, which could potentially lead to another interaction or be a limitation considering that we focused on webcam-screen communication

 

Additional properties of the native particles.

 

The next step for us would be to keep inviting users to test our initial prototype to keep note of how people would like to interact with such a project. In addition, we would dig deeper into the different technologies that can display our project taking into consideration the feedback provided by our instructors and cohort.

 

Bibliography:

Canada, Environment and Climate Change. “Government of Canada.” Canada.ca. / Gouvernement du Canada, April 19, 2021. https://www.canada.ca/en/services/environment/wildlife-plants-species/invasive-species.html.

“Thijs Biersteker —Plastic Reflectic.” 2016 Thijs Biersteker. https://thijsbiersteker.com/plasticreflectic.

“Thijs Biersteker — Shaded Seas.” 2020 Thijs Biersteker.
https://thijsbiersteker.com/shaded-seas.

Shiffman, Daniel, Shannon Fry, and Zannah Marsh. 2012. The Nature of Code. United States: D. Shiffman.

 

 

 

 

Project 1: Screen Space – Victoria, Maryam, Gavin

Section 1: Related Works Research

A related screen space project is one called Minion Fun by Atharva Patil. This project uses PoseNet to track the movements of the face to produce minion sounds in different sections of the screen. Atharva Patil is a product designer who is currently leading design at Atlas AI, building geospatial demand intelligence tools.


This work relates to our project and research because it uses the face to create a fun interaction with sound. We used this project as our main inspiration, but instead of using the face to produce a fun sound, we used the face to produce a picture of a specific emotion (happy, sad, angry, etc.) on top of a face mask. This project helped us understand how PoseNet could be used for just the face and not the entire body.

Section 2: Conceptualization

In the wake of the COVID-19 pandemic, wearing a face mask has become a norm – it is a necessary precaution and, in some countries, still enforceable by law. A face mask typically covers half of a person’s face, obscuring the nose and mouth. This can often make it harder to read another individual’s facial expression. We used this fact as a springboard for our project.

Initially, we had to brainstorm what the face mask may block in terms of everyday life. As well as the aforementioned difficulty in reading facial expressions, we also noted that when using facial recognition software (such as Apple’s Face ID), the system often struggles to identify the individual. Therefore, we wanted to put the mask at the forefront of our project – to reconfigure it as a tool for effective communication. 


Some questions that arose from this initial brainstorm included: how do we ensure that the camera can recognize the face mask as opposed to a fully-exposed human face? How can we approach this in a creative manner? And ultimately, what do we want to express through the face mask?

We ran through a series of ideas – such as attempting to change the colour of the user’s mask itself – until we arrived at the concept of showcasing different facial expressions/emotions directly on the user’s mask, depending on their position within the screen space. The screen is broken down into four sections, each one representing a different emotion – happy, sad, tired, and angry. Corresponding music plays as the user enters each section.
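As a sketch of how that quadrant logic could be wired up with ml5.js's PoseNet and p5.sound (this is an illustration, not the exact presentation code, and the sound file names are placeholders): the nose position selects one of the four emotion sections, and each section loops its own track.

```javascript
// Illustrative sketch: the nose position picks one of four emotion quadrants,
// each with its own looping soundtrack. Assumes ml5.js and p5.sound are loaded;
// 'happy.mp3' etc. are placeholder file names.
let video, poseNet, pose;
let sounds = {}, current = '';
const emotions = ['happy', 'sad', 'tired', 'angry'];

function preload() {
  emotions.forEach(e => sounds[e] = loadSound(e + '.mp3'));  // placeholder files
}

function setup() {
  createCanvas(640, 480);
  textAlign(CENTER, CENTER);
  textSize(32);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', results => {
    if (results.length > 0) pose = results[0].pose;
  });
}

function draw() {
  background(0);
  if (!pose) return;

  // Pick the quadrant from the nose position: top-left, top-right, bottom-left, bottom-right.
  const col = pose.nose.x < width / 2 ? 0 : 1;
  const row = pose.nose.y < height / 2 ? 0 : 1;
  const emotion = emotions[row * 2 + col];

  // Switch the soundtrack only when the user changes quadrant.
  if (emotion !== current) {
    if (current) sounds[current].stop();
    sounds[emotion].loop();
    current = emotion;
  }

  fill(255);
  text(emotion, width / 2, height / 2);
}
```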

In the final iteration, we used a webcam and ran the code directly from the p5.js online editor. Ideally, we envision this could work as a video filter for a social media platform. With more time, we probably wouldn’t have used the “illustrated mouth” images used in the presentation. A potential replacement would be actual human mouths, which could create a somewhat uncanny-valley situation that might express our main idea more clearly: a direct response to the “Hi, how are you?” text displayed on the screen. We weren’t able to finalize this aspect for the presentation, because it is difficult to showcase sadness, tiredness or anger through the mouth alone. With more time, we may have been able to create more abstract/experimental depictions of emotions.


Videos of Project 1:
https://youtube.com/shorts/SR3K_EHgCjw
https://youtu.be/DG0jxVODSEg


BIBLIOGRAPHY:
Patil, A. (2019, January 7). Motion Music. Medium. Retrieved October 24, 2022, from https://medium.com/disintegration-anxiety-1/icm-final-project-53b624770bb6

Common Objects Used Uncommonly: Merging screen space and physical space with analog objects as inputs

By Shipra Balasubramani, David Oppenheim, Tamika Yamamoto


Our research and design focus was on extending the screen into physical space and vice versa. We wanted to incorporate the body into the user experience and work with unconventional forms of analog input. We followed an iterative approach to research, ideation and design. 

 

1. Related Works Research 

The following projects served as our main references:

“Cleaning Windows” on EyeToy


Photo credit: David :3 from Youtube

 

The EyeToy (2003) is a color webcam accessory designed to be used with the PlayStation 2. It uses computer vision and gesture recognition to allow players to interact with video games using motion. The Cleaning Windows game is part of a series of mini-games in EyeToy: Play that has players “clean” windows with their body as fast as possible to get high scores.

It served as inspiration for our project because although the mechanics were simple, the interactive experience ignited a sense of play and delight that we wanted to tap into for our project.

 

 

Draw Me Close


Photo credit: NFB/National Theatre

 

Draw Me Close, by Jordan Tannahill, the NFB and the National Theatre, blurs the worlds of live performance, virtual reality and animation to create a vivid memoir about the relationship between a mother and her son. The individual immersive experience allows the audience member to take the part of the protagonist inside a live, animated world. This project served as a reference because of how it mapped analog objects such as a crayon/pen, a sketchbook and furniture into the virtual space to make the audience’s experience of the virtual world more tangible.

 

The Treachery of Sanctuary


Photo credit: Chris Milk

The Treachery of Sanctuary, by Chris Milk, ‘is the story of birth, death, and transfiguration that uses projections of the participants’ own bodies to unlock a new artistic language’. Digitally captured shadows of the visitor are reprojected onto this large-scale interactive installation. A shallow reflecting pool keeps the user at a distance from the screen, creating a dramatic effect and a seamless user experience. The technologies used to create this experience are openFrameworks, Microsoft Kinect and Unity3D.

The idea of particles forming the body and then disintegrating into smaller objects reformed as birds, inspired us to work with the idea of creating a body/human form out of unconventional particles/objects and tracking  the movement of the user. 

 

2. Conceptualization

 


 

Our main research questions revolve around the idea of collapsing the distance between the digital space of the screen and the physical space inhabited by the user and whether that might contribute to increased feelings of presence and affect. As part of that exploration we were interested in embodied interactions with analog objects that connected to the screen. We wanted to prompt the user to associate their physical body and space with the digital and engender a sense of surprise and delight. We discussed various references that provided inspiration for our design approach.

We wondered about getting rid of expected forms of input, for example the mouse, the Xbox or PS5 controller or even the Joy-Con controllers that Nintendo popularized with the Wii —by now, all of them are extensions of our body that we no longer think about —and instead, incorporate common objects that carry with them different associations. Our “Cleaning Windows” reference (and others) led us from bubbles and using the body as a controller to a ball as both input and extension of the body. 

 


 

We worked on individual code sketches based on our research and ideation phase. These led into our design and development phase, described in the next section. 


 

Design Considerations & Technical Description

Our larger vision would be a geometric installation with lots of affordances for projecting onto and filled with multiple everyday objects. We would play with the affordances and conventional associations of each object to create micro narratives that stemmed from the user’s own histories and relationships to those objects.

 


 

For this initial prototype we focused on an interaction with one object only, although we did work in a surprise second object to test the logic of our programmatic approach (state machines) and object recognition library.

We focused on designing a space that would not require instructions and rely instead on the affordances of the physical design – it was important that the installation feel alive when the user first entered (the screen’s camera displayed the user’s image and a grid of moving tennis balls) and that the analog object and its position afford interaction (we chose a tennis ball and positioned it on a lit pedestal). 

We used P5.js in conjunction with ml5.js and the PoseNet machine learning model as well as the COCO dataset. 

We segmented our v1 prototype concept into features and created code sketches for each —object recognition and state machine, digital object using GIFs, and the GIFs’ interactions with the skeleton —and then integrated our separate code bases for testing and debugging.  
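The sketch below illustrates the object-recognition-plus-state-machine approach with ml5.js's COCO-SSD detector, as a simplified stand-in for our integrated code (linked below): the presence or absence of a detected "sports ball" (COCO's label for a tennis ball) toggles between the idle grid state and the ball-person state. Whether detection or its absence should count as "picked up" depends on how the webcam is framed in the installation.

```javascript
// Illustrative sketch: COCO-SSD object detection drives a two-state machine.
// Assumes ml5.js is loaded via a script tag.
let video, detector, ballVisible = false;
let state = 'GRID';                        // 'GRID' <-> 'BALL_PERSON'

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  // Start detecting once the COCO-SSD model has loaded.
  detector = ml5.objectDetector('cocossd', () => detector.detect(video, gotResults));
}

function gotResults(err, results) {
  if (!err) {
    // COCO labels a tennis ball as "sports ball".
    ballVisible = results.some(r => r.label === 'sports ball' && r.confidence > 0.5);
  }
  detector.detect(video, gotResults);      // keep detecting continuously
}

function draw() {
  background(0);

  // State machine: transition when the ball appears or disappears from view.
  if (state === 'GRID' && ballVisible) state = 'BALL_PERSON';
  else if (state === 'BALL_PERSON' && !ballVisible) state = 'GRID';

  fill(255);
  textSize(24);
  if (state === 'GRID') text('Idle: grid of tennis balls', 20, 40);
  else text('Active: tennis balls form the user', 20, 40);
}
```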

Final prototype v1 code: https://editor.p5js.org/tamikayamamoto/sketches/FKTt5dcfM

Fullscreen v1 code: https://editor.p5js.org/tamikayamamoto/full/FKTt5dcfM

 

3. Presentation & Documentation

 


 

Location : Room 510 at 205 Richmond St. West (OCAD U)

Installation dimensions: 3m x 3m 

Number of participants: Single user

Hardware: Laptop, short throw projector with speaker, external webcam, tripod, 2 x spotlights

Software: P5.js web editor, ml5.js (PoseNet and COCO)  

Screen: Projection on blackboard 

Installation Design: During our initial ideation process we were driven towards creating a large-scale installation. We wanted to work with a 1:1 scale projection size, breaking away from the conventional screens as commonly used in our day-to-day lives. We used a short throw projector to project onto a black screen (blackboard),  helping to reduce the distance between the screen and the projector and allowing us to create a compact installation. 

While testing our initial work-in-progress prototype we were able to visualize and test the placement of the projector, screen, webcam and laptop. In doing so we realized the challenges that come with working with computer vision: inadequate lighting of the analog object created inaccuracies in COCO’s detection of the object. Keeping that in mind, we added two external light sources: the first lighting the object and the second acting as a bounce light, providing ambient light for the installation and helping to avoid sharp shadows cast by the user, which would have confused PoseNet.

The layout of the installation was finalized through trial and error. The intent was to enhance the experience by creating an immersive space for the user, leaving enough room for movement within the installation while keeping the focus on the interactions and the projected digital space.

 


 

User Experience Description

The following section outlines the user experience of our v1 prototype that we demonstrated during the October 20th critique, starting with a user flow diagram and then through a series of annotated gifs. 

 

User flow diagram

1. User enters the playspace and sees a grid of digital tennis balls on screen, a webcam capture that mirrors their movement, and a physical tennis ball displayed on a tripod in front of them.


2. User picks up the tennis ball. On screen, the ball grid deconstructs and forms the body of the user. Background fades to black and a parallaxing background appears.


3. As the User moves, the Ball Person on screen mirrors their movement. A parallax background that moves with the user suggests a sense of moving in a 3D environment that includes both the digital screen and the physical playspace.


4. User releases the physical tennis ball, either dropping it to the ground or returning it to the pedestal (tripod). On screen, the Ball Person deconstructs and tennis balls fall to the ground.


5. End: after five seconds, the ball grid appears on screen once more.


*We partially prototyped a second object (donut) but chose not to include it as part of the overall user experience and demoed it separately instead.


User Experience demo video (with sound):

https://youtube.com/shorts/zSFykZKXWsk?feature=share

User testing recordings (with sound): 

https://youtube.com/shorts/dw8qskS8Qr0?feature=share

4. Feedback and Next Steps

Feedback was obtained from the critique and not as part of formal user testing sessions. 

The critique began with three volunteers who tested the installation before receiving any context from us as designers. We observed their sessions and took note of their body language and utterances. During the discussion that followed our verbal presentation, we asked for the testers’ observations. Additionally, we received feedback from individuals who observed the three testers. Finally, a few more users tried the installation toward the end of the critique.

Our main takeaways from the session were:

  • Overall response to the experience was positive; users didn’t require instructions to move through the intended experience (pick up the tennis ball and play around with it and move their body); 
  • Users seemed to enjoy the key moment of picking up the ball and watching themselves transform into a ‘tennis ball person’, moving around as that form of tennis balls and then breaking apart (by dropping or placing the ball back on the pedestal); 
  • One user required prompting to pick up the ball; 
  • There was an acceptable moment of tension when users didn’t quite know what they were allowed (or supposed) to do with the tennis ball, however all users quickly began to bounce it, throw it, or put it back on the pedestal; 
  • Users did seem to want more complex behavior from the system, for example, to see one of the digital tennis balls follow their analog tennis ball when they threw it in the air or against the wall; 
  • Our demonstration of a second object (donut) seemed to be well-received, as was the larger vision of having multiple everyday objects available to play with.

Should we decide to further develop the project, we would begin by conducting formal user testing of this v1 prototype and then dive back into further research and ideation as part of a larger iterative design and build process.

 

Bibliography

“EyeToy.” Wikipedia, Wikimedia Foundation, 28 May 2022, https://en.wikipedia.org/wiki/EyeToy.

“Soarman Cleaning Windows on EyeToy.” YouTube, uploaded by David :3, 09 Mar. 2014, https://youtu.be/NZs1WfFVAPs.

“The Treachery of Sanctuary” (2012), website:  http://milk.co/treachery.html

“Draw Me Close” (2017, 2021), press kit: https://mediaspace.nfb.ca/epk/draw-me-close/

 

 

 

 

 

The Digital Pet | Screen Space Experiment


THE DIGITAL PET

October 2022

By: Divyanka Sadaphule, Nicky Guo, and Taylor Patterson

 

Context Research  

Related Work

  • For this experiment we wanted to focus on the exploration of colour and emotions.
  • Two artists that we’ve researched for inspiration are James Turrell and Jónsi. Both artists use the senses to evoke emotions and feelings through their installations.

 

James Turrell – Ganzfeld


This installation explores space and light, focusing on the effects of light on people’s moods as opposed to light just being there for illumination. James Turrell comes from a scientific background where he studied the ‘Ganzfeld Effect’. In the Ganzfeld Effect, “your brain is starved of visual stimulation and fills in the blanks on its own. This changes your perception and causes unusual visual and auditory patterns” (Healthline, 2020). This is a perfect fit for our project as we aim to link colour to emotion.

 

 

 

Jónsi – Hrafntinna (Obsidian)

Located in the Art Gallery of Ontario, this installation stood out the most with its use of sound to create deep vibrations. The installation is set in a room with little to no light and is a “Sixteen-channel sound installation, chandelier, speakers, subwoofers, carpet, fossilised amber scent.” (Art Gallery of Ontario, n.d.). Other senses such as smell and lighting are used in this installation as well. The idea is to ‘evoke the sensation of being inside a volcano’. To push our project even further, we want to incorporate the feeling of sound through vibrations.

 

The Impact of Colour Psychology on Emotions in Child Development  

Colours play an essential role when it comes to child development. Colour is a form of energy, with wavelength and frequency. Colour psychology and its impact on a child’s learning abilities and behaviour is a much-researched subject. (Olesen, 2016)

Studies demonstrate the benefits of colours where brain development, creativity, productivity, and learning are concerned. With the help of colours, neural pathways in our brains are connected. One study found that children who completed pegboard tests while wearing coloured goggles solved the tests much faster when the goggles were their favourite colour. (Co, n.d.)

The reaction to the temperature of warm versus cool colours is another matter: warm colours can calm certain students but may excite others. Likewise, cool colours might stimulate one and relax another. In addition, research studies have also shown that colours can help improve our learning ability and memory. One study concluded that red and blue are the best colours for enhancing cognitive skills and improving brain function in children. (Renk Etkisi, 2017)

 

Historical Aspects  

Several ancient cultures, including the Egyptians and Chinese, practiced chromotherapy, or the use of colours to heal. Chromotherapy is sometimes referred to as light therapy or colorology.

Chromotherapy & Phototherapy in Ancient Egypt: The ancient Egyptians used colour as a healing technique in many aspects of their lives. Colours were also associated with gods. With a strong focus on worshiping the sun, they believed that shining rays of light through coloured crystals could penetrate the body and act as a treatment for ailments.

Chromotherapy in Ancient China: Chinese culture connects colours with health. It has always been keen on the connection of body, mind, earth and spirit, which shows in the holistic Traditional Chinese Medicine (TCM) techniques that have transcended generations. It is believed that the colours you attract are alignments or imbalances with the cosmos and surrounding energy.

(JACUZZI Saunas – Clearlight Infrared SaunasTM, 2018)

 

Conceptual Framework

The initial idea behind the digital pet was to help children learn and understand emotions with the help of colours. This can also be beneficial for students with learning disabilities and ADHD, who often experience distorted colour discrimination. The impact of different colours can simulate emotions ranging from comfort and warmth to hostility and anger, which can help children understand their emotional states.

  • We created an interactive platform for people to explore two different states of emotion through sound, technology and colours.
  • Our digital pet represents two states of emotion, happy and angry. We chose colours and sounds to match each of these emotions and enhance the emotional qualities of our digital pet.

 

Research Question

  • How do we map emotion to sound, through technology and interaction while formulating a playful experience where users utilise their visual and hearing senses?

 

Technical description & design considerations

Ideal Project Location

Ideally, we would like the digital pet to be in a room set up with a projector, speakers and a webcam; the digital pet will be projected onto a wall. The colour of the octopus will reflect on all four walls of the room, and there will be a song playing. The use of colour, sound and touch will work together to produce an immersive art installation.

Transitions/Interactions

While the demonstration shown in the images below only utilises two of the three proposed emotions, happy and angry, the aim is to give users the opportunity to experience all three separate emotions/positions, with the actions below associated with each. The idea is to have an interaction-based end goal where our character is reactive. The sequence of events is shown below:

Position 1 : Static/Sad

The character will be floating; it will look sad, but it will be ready to welcome oncoming guests. The character’s position should be in the middle of the screen, and it will be white on a black background with a rain cloud, rain, thunder and lightning on all four walls.

Emotion 1 : Happy

After 20 seconds of someone entering the room, the digital pet becomes happy and turns yellow. The background will change to sunshine and a bed of floating flowers. The user will be prompted to wave their hands up and down to be ‘happy with the octopus’. The waving of the arms will change the digital pet’s colour to a yellow-orange gradient, and it will bounce up and down on the grid. The face will have a smile on it.

Emotion 2: Angry

After the user waves their arms 5 times, the octopus experiences a new emotion: anger. When our character is angry, the background will change into clouds of smoke. The character will change colour to red, the head will grow and the room will vibrate with rage. The character will have a frown on its face and angry brows at this point.
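A minimal p5.js sketch of this emotion sequence as a state machine (an illustration rather than our working code): keyboard input stands in for the webcam cues so it runs on its own, with 'p' simulating someone entering the room and 'w' simulating one wave.

```javascript
// Illustrative state machine for the pet's emotions:
// SAD (idle) -> HAPPY after 20 s of presence -> ANGRY after 5 waves.
// Presence and waves are stubbed out with the keyboard ('p' and 'w');
// in the installation these would come from webcam/pose tracking.
let state = 'SAD';
let presentSince = null;   // when a visitor was first "detected"
let waves = 0;

function setup() {
  createCanvas(640, 480);
  textAlign(CENTER, CENTER);
  textSize(28);
}

function draw() {
  // Transition rules from the concept above.
  if (state === 'SAD' && presentSince !== null && millis() - presentSince > 20000) {
    state = 'HAPPY';
  }
  if (state === 'HAPPY' && waves >= 5) {
    state = 'ANGRY';
  }

  // Colours follow the concept: white on black, yellow, red.
  if (state === 'SAD')   { background(0);            fill(255); }
  if (state === 'HAPPY') { background(255, 210, 60); fill(60);  }
  if (state === 'ANGRY') { background(200, 30, 30);  fill(255); }
  text(state, width / 2, height / 2);
}

function keyPressed() {
  if (key === 'p') presentSince = presentSince === null ? millis() : null;  // stand-in for "someone entered"
  if (key === 'w') waves++;                                                 // stand-in for one detected wave
}
```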

Emotion 1 (Happy)                                


Emotion 2 (Angry)


Code

Video of working code

______________________________________________

References

D, William. “Ganzfeld – a Light and Space Exhibiton by James Turrell.” Design Is This, 27 Oct. 2015, https://www.designisthis.com/blog/en/post/james-turrell-ganzfeld.


Healthline. (2020). Ganzfeld Effect: Sensory Deprivation Hallucinations. [online] Available at: https://www.healthline.com/health/ganzfeld-effect.

Art Gallery of Ontario. (n.d.). Jónsi: Hrafntinna (Obsidian). [online] Available at: https://ago.ca/exhibitions/jonsi-hrafntinna-obsidian [Accessed 24 Oct. 2022].

Co, P. (n.d.). How To Improve Your Child’s Mood With Colors. [online] Parent Co. Available at: https://www.parent.com/blogs/conversations/how-to-improve-your-childs-mood-with-colors#:~:text=Researchers%20at%20the%20University%20of.

Renk Etkisi (2017). Renk Etkisi | The Effect of Color | The Effects of Colors on Children. [online] Renketkisi.com. Available at: http://renketkisi.com/en/the-effects-of-colors-on-children.html.

Olesen, J. (2016). Color Psychology: Child Behavior And Learning Through Colors. [online] Color-Meanings.com. Available at: https://www.color-meanings.com/color-psychology-child-behavior-and-learning-through-colors/.

JACUZZI Saunas – Clearlight Infrared SaunasTM. (2018). The Ancient History of Color Light Therapy | Jacuzzi® Saunas Blog. [online] Available at: https://infraredsauna.com/blog/color-light-therapy-history/.

Crowded project

Full documentation PDF link: crowded-screenspace

The pandemic changed how we interact with spaces monitored by cameras and how we feel about being continuously observed in public. During the pandemic, a great deal of data and footage was collected of people through cameras, heat maps and virus tests, tracking everyone’s movements and health. Survival and health took centre stage over the continuous breaches of privacy and data collection in public spaces. People were comfortable being monitored for everyone’s safety, but being monitored has never been a comfortable feeling.

  • In this project, we simulate what it would be like to track people’s microbes and their related health. Doing so gives the audience the opportunity to experience the feeling of being seen and observed, and to see their role in the environment by interacting with the particles.
  • Our experience has three stages, and we had to break down each stage with its own implementation needs. The screens are placed in an L shape so that they do not block each other’s view or break up the participants’ experience.

In stage 1, the participant enters the room and must be scanned for microbes.

In stage 2, the participants move further into the room, view themselves on the other screen, and interact with the particles they are creating.

In stage 3, one participant is infected with an “angry” particle, indicated by a looming sound from a Bluetooth speaker at a distance.
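The sketch below is an illustrative reduction of the three stages in a single p5.js canvas, assuming ml5.js's PoseNet for presence/position and p5.sound for the looming sound ('looming.mp3' is a placeholder file name, and simple timers stand in for the real stage cues); the actual implementation is in the code repository linked below.

```javascript
// Illustrative three-stage sketch: stage 1 "scans" the participant, stage 2
// lets their detected nose emit microbe particles, stage 3 "infects" one
// particle and plays a looming sound. Assumes ml5.js and p5.sound are loaded.
let video, poseNet, pose, loomingSound;
let stage = 1, stageStart = 0, particles = [];

function preload() {
  loomingSound = loadSound('looming.mp3');      // placeholder sound file
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', results => {
    if (results.length > 0) pose = results[0].pose;
  });
  stageStart = millis();
}

function draw() {
  background(10);

  // Advance through the stages on simple timers (stand-ins for the real cues).
  if (stage === 1 && pose && millis() - stageStart > 5000) { stage = 2; stageStart = millis(); }
  if (stage === 2 && millis() - stageStart > 15000) {
    stage = 3;
    if (particles.length) particles[0].angry = true;   // one particle becomes "angry"
    loomingSound.loop();
  }

  // Stages 2 and 3: the participant's nose emits drifting microbe particles.
  if (stage >= 2 && pose && frameCount % 10 === 0) {
    particles.push({ x: pose.nose.x, y: pose.nose.y, vx: random(-1, 1), vy: random(-1, 1), angry: false });
  }

  particles.forEach(p => {
    p.x += p.vx;
    p.y += p.vy;
    noStroke();
    fill(p.angry ? color(255, 60, 60) : color(140, 220, 190));
    circle(p.x, p.y, p.angry ? 24 : 10);
  });

  fill(255);
  text('Stage ' + stage, 20, 30);
}
```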

Code link : https://github.com/tbeattysk/Screen-Space

Video link of the project : https://www.youtube.com/watch?v=pLrTyiisAzA


Group: Tyler, Yueming, Purvi