Fundamentals of Immersion

Process Journal

Jul 23

I have decided to put a freeze on the development of an immersive experience as the summative portion of this study.

This study began with the intention of exploring practices from every corner of the immersive design discipline in order to highlight effective design methods and, eventually, synthesize them into a singular “one-size-fits-all” practice-based approach. Under that framing, it made sense for the final evaluation to be a demonstration of that practice.

As the study progressed, it became clear that the discourse I found most engaging and useful was the theoretical/conceptual writing on the nature of immersion as it is understood by the various quadrants of design practitioners who utilize it. The conceptual research was highly useful in pinpointing the psychosociological phenomenon at the core of what I call immersion, which I am keen to explore. Through this process it became clear to me that before I could begin to codify a design practice, I needed to conduct a thorough context review and lay out a set of definitions – the myriad disciplines developing immersive experiences share many commonalities in the mechanisms they utilize, but lack a common terminology, or even a common definition.

It is more useful, then, for the summation of this study to be not a piece of experiential design but a comprehensive contextual analysis laying out my understanding of the terms, phenomena, and mechanisms at play in immersion. The explorations in practical design undertaken as part of this study are being put to use in the CFC Prototyping assignment.

Jul 03

This week I have been focusing on experiments incorporating Depthkit into Vuforia. I used a Kinect for Windows (graciously provided by my supervisor) to record some test footage with Depthkit, and experimented with post-processing and green screen to get a sharper image. (I will have to reshoot the test footage with a better green screen setup next time, as my rather slapdash studio setup had too many wrinkles for a clean mask.) Along the way I learned how to implement a green screen and played with Depthkit’s settings to reduce spikes.

My first new interaction is a hand reaching out of a surface – one of the first images I envisioned when I began considering volumetric capture.

Test footage with a cylindrical trigger:

I’ve also begun playing with Spark AR by Facebook. Facebook’s usage policy seems highly constraining, as Spark AR can only generate content for Facebook platforms, and thus it may not be all that useful for an artistic practice. Still, it couldn’t hurt to become familiar with the tool.

Other experiments still to come:

  • Depthkit AR interaction using an object as a trigger, incorporating the object into the film.
  • Learning how to maximize the efficiency of the available processing power, as experiments suggest the Android phone has trouble rendering multiple volumetric videos simultaneously.
  • Volumetric videos generated on a surface and in midair.

Jun 20

This week I have been continuing my AR explorations. My most noteworthy development is the incorporation of volumetric video into AR.


Situating the volumetric video clip is as simple as nesting it as a GameObject child of the intended ImageTarget. Scaling and placing it is slightly more complicated, as the GameObject seems to exist at a different origin than the video itself.
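For reference, that setup reduces to something like the following minimal Unity sketch; the class name, field names, and the offset and scale values are placeholders chosen for illustration, not measured values.

using UnityEngine;

// Minimal sketch: parent a Depthkit clip under a Vuforia ImageTarget and
// compensate for the clip's pivot offset. Names and values are placeholders.
public class AttachVolumetricClip : MonoBehaviour
{
    public Transform imageTarget;     // the ImageTarget in the scene hierarchy
    public Transform volumetricClip;  // the Depthkit clip's root GameObject

    void Start()
    {
        // Nesting the clip under the target makes it follow the tracked image.
        volumetricClip.SetParent(imageTarget, worldPositionStays: false);

        // The rendered mesh sits at an offset from the GameObject's pivot,
        // so a local offset (found by trial and error) re-centres it.
        volumetricClip.localPosition = new Vector3(0f, 0.05f, 0f); // placeholder
        volumetricClip.localRotation = Quaternion.identity;
        volumetricClip.localScale    = Vector3.one * 0.1f;         // placeholder
    }
}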

More experimentation is required. I have also done some experiments with regular video and with virtual buttons – that is, areas on a trigger image that can be occluded to cause events to fire in Unity.
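The virtual-button wiring follows Vuforia’s event-handler pattern; this sketch assumes the Vuforia 8-era Unity API (method and type names differ in later SDK versions), and the video field is a placeholder.

using UnityEngine;
using UnityEngine.Video;
using Vuforia;

// Sketch of Vuforia's classic virtual-button pattern: when the button area on
// the trigger image is occluded, OnButtonPressed fires and starts the video.
public class VirtualButtonVideo : MonoBehaviour, IVirtualButtonEventHandler
{
    public VideoPlayer video; // placeholder: assigned in the Inspector

    void Start()
    {
        // Register this handler with every virtual button nested under the target.
        foreach (var vb in GetComponentsInChildren<VirtualButtonBehaviour>())
            vb.RegisterEventHandler(this);
    }

    public void OnButtonPressed(VirtualButtonBehaviour vb)  { video.Play(); }
    public void OnButtonReleased(VirtualButtonBehaviour vb) { video.Pause(); }
}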


The app builds and runs, but it is very slow to start a video once a button is pressed. I suspect the code is deactivating and reactivating each video object every time a new one is called. I experimented with keeping all three videos running offscreen, as well as switching the renderers on and off, but succeeded only in crashing Unity. Still, it’s a promising proof of concept.
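For the record, the renderer-switching idea reduces to something like the sketch below – every clip keeps decoding and only its visibility changes – though my actual attempt was evidently less stable. All names here are placeholders.

using UnityEngine;
using UnityEngine.Video;

// Sketch of the renderer-toggling approach: all clips play continuously so no
// decode restart is needed, and switching clips only flips mesh visibility.
public class VideoSwitcher : MonoBehaviour
{
    public VideoPlayer[] videos;  // the three clips
    public Renderer[] renderers;  // one renderer per clip

    void Start()
    {
        foreach (var v in videos)
            v.Play();    // keep decoding offscreen
        ShowOnly(-1);    // hide all until a button is pressed
    }

    // Called from the virtual-button handler with the index of the pressed button.
    public void ShowOnly(int index)
    {
        for (int i = 0; i < renderers.Length; i++)
            renderers[i].enabled = (i == index);
    }
}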

Jun 12

This week I have been exploring AR through Vuforia and Unity with the help of the Udemy course “Augmented reality (AR) tutorials on Markerless tracking, Cloud AR, 3D Object detection, + more with Unity & Vuforia” (instructor Dinesh Tavasalkar).

Through a series of guided experiments I have made a sort of living sketchbook, each page of which has a different set of interactions and code.

Once I finish the course (I’m only halfway through), I would like to explore volumetric video capture in AR.


Jun 05

Peggy showed me some of the primary sources for her research. She has an old folio of translated Greek writings – apparently the only copy in existence. She showed me some of the translations and her notes on them, as well as the notes made by the fellow who originally produced the translations.

I’ll be uploading Peggy’s annotations to the translations to her website. In the meantime, here is the abstract to her dissertation.

The first part reverse-engineers the theories of Therodius (c. 250 BCE), who was the first to posit the literal existence of a realm of fiction. Therodius, who was primarily concerned with the Grecian theatre of his day, wrote that the act of Conjuration (what we today would call sub-creation, or simply writing fiction) was impossible, as human beings lacked the perspective to imagine anything that did not already exist. He pointed to the fantastical stories of Grecian mythology, arguing that the wide acceptance of such outlandish tales as gospel suggested a common origin. His last surviving writings imagine a literal realm of the gods where the Conjuration of new stories occurs, the stories being transmitted “askance” (λοξώς) to human minds. He further suggests that Conjured material can “leak” (διαρροή) should there be no minds open to receive it, and points to such phenomena as the Bacchian orgia as examples.

The second part explores contemporary communication modes through the lens of Therodius’ conjectures. It explores the semiotics of conversation, with an eye to what must remain unsaid in the name of clear communication. It examines the calculus of communication: the range of information we are able to draw from when making decisions, how our decisions are rendered into reality, and how the discarded information continues, unbeknownst to us, to exert an influence.

The third part explores known methods for transcending the brain and achieving a higher mode of consciousness, including hypnotism, psychotropic medication, and sensory deprivation. It also explores and summarizes the results of practical research.

The fourth part summarizes the core taxonomy of the research. Our perception is like a bubble, formed of the decisions we make using the information available to us, immersed in an endless ocean of discarded or incorrect information. Our “non-fictional” world exists in contrast and opposition to the “fiction”. Our world is finite while the fiction is endless. Reality, non-fiction, truth – all of it must define itself against what it is not. Therefore, that which does not exist must itself exist in some form, somewhere.

May 29

I’ve discovered a fascinating thinker who is capturing my imagination. Peggy Lind is a PhD candidate in Communication and Culture (a joint program between Ryerson and York University) whose work largely concerns the space between interactions – that is, what is unsaid. She’s careful to explain that she is not talking about body language or subconscious communication cues, but about the information exchanges that occur in the absence of mindful communication, and the methods by which manipulation can occur through the omission of key information.

I’m interested in the concept of manipulation for my thesis, and I am particularly excited by Peggy’s proposed Taxonomy of Immersion, which is the concept that led me to her.

I’ve reached out to Peggy – and we’ve exchanged some emails! She said she was happy that someone had found her work and was interested. We’re going to start meeting.

In the meantime, I’ve agreed to help her put together her website. I’m not exactly a developer, but I’m learning HTML/CSS and just discovered Bootstrap, so I’m grateful for the opportunity to practice. Peggy is sending me website content when she has time, so the site should take form as I get direction from her.

You can check it out here: https://bit.ly/30KUdmK

May 19

Process Journal Week 3: Early Layout

[Image: early immersive room layout]

Inspired by my exploration with AR, I envision a short immersive experience hinging largely on simple AR interactions using EyeJack.

Briefly, the experience as I envision it involves a character who uncovered a dark secret about the nature of reality before disappearing mysteriously. Participants retrace her steps using her research, uncover an artefact I’m thinking of as “The Lens” (incorporating EyeJack), which reveals secret truths, and realize that they too are infected by the same organism that led the character to catastrophe.

Elements I’d like to experiment with and/or include:

  • AR on the body (through a sticker or temporary tattoo)
  • Non-gamified puzzle elements
  • Unclear start moment – metatext (including the “artist’s statement”) is part of the immersion
  • Utilizing EyeJack’s scan-to-unlock feature as a checkpoint mechanic
  • Out-of-room elements (for example, a dead drop or telephone call)

I mapped out the room’s flow above to help visualize the necessary pieces and to begin considering the writing and fabrication it might require. Currently I conceive of it as mostly paper (perhaps mountable on two or fewer walls or bulletin boards), plus a locked box and some light fabrication.

Any writing and my thoughts are being kept on a live working document, which you can access here.

May 12

Process Journal Week 2: Experiments with AR

This week I have been considering story world artifacts and the potential for using Augmented Reality (AR) for hiding additional information. The “lens” of AR serves as an additional perspective and, to my mind, begs to be included as a storyworld artifact itself. How and why does the act of viewing through an AR lens modify the nature of the information revealed? How can the mechanism of AR be activated to engage critically with itself?

This week, though, I have focused on learning how to use EyeJack.

I started by making an animation using the simplest components at hand: myself and my phone’s camera. I took a few photos to serve as frames in a simple animation.


Then I opened up Photoshop, applied a “Reticulation” filter to each image to give it a bit of a mysterious feeling, and created a simple GIF animation using Photoshop’s tools.


I imagined this image and animation being somewhat like the moving pictures in Harry Potter, so I printed out a simple document with the trigger image.

EyeJack proved simple to use: the desktop EyeJack Creator app uploads the trigger image and the animation. The potential for setting up a series of AR interactions using EyeJack in a gallery or immersive happening space is exciting, especially given its ease of use.

May 07

Early Ideation

As an early concept, I have envisioned an immersive experience combining an interactive installation with storyworld artefacts.

Expanding somewhat on the Synth concept from the Transmedia Storytelling class, this idea revolves around a digital-assistant search engine called Cassi. Cassi is realized as an animated face projection-mapped onto a bust in a gallery setting.


Participants enter and ask her questions or converse with her, then receive a printed receipt summarizing their conversation before moving on.

The second part of the experience takes place “behind the curtain” in an adjoining room themed as the control centre. Players move through this office space and can look through computers and documents for information on Cassi’s construction and programming. They can enter information from their receipt to read details from their own conversation.

The crux of the experience is that Cassi, the all-knowing search engine, is manipulating the responses and saving the details of the conversations for the benefit of the organization that created her.

Practically, I envision the Cassi interaction as a chatbot utilizing text-to-speech and a Wolfram Alpha API connection.

The “behind the curtain” room I envision as a work of set-dec and worldmaking, along with a means of storing and playing back key data from each interaction – perhaps logged under a number referenced on each participant’s receipt.
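As a very rough sketch of how the question-answer loop might hang together – assuming Wolfram Alpha’s Short Answers endpoint, with the app ID, class name, and logging scheme as placeholders – the core of Cassi could be as simple as:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

// Rough sketch of the Cassi pipeline: send the participant's question to
// Wolfram Alpha's Short Answers API and return the plain-text reply, which
// would then be voiced by a text-to-speech engine and logged to the receipt.
public class CassiBot
{
    static readonly HttpClient http = new HttpClient();
    const string AppId = "YOUR-WOLFRAM-APP-ID"; // placeholder

    public static async Task<string> AskAsync(string question)
    {
        string url = "http://api.wolframalpha.com/v1/result"
                   + "?appid=" + AppId
                   + "&i=" + WebUtility.UrlEncode(question);

        // Note: the endpoint returns HTTP 501 when it has no short answer,
        // which GetStringAsync surfaces as an exception to handle upstream.
        return await http.GetStringAsync(url);
    }
}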