Blog Post 4 – VR Experience Contextual Review

Class review:

It has been fascinating to learn more about meaningful interaction in both gameplay design and the general player experience. At first, I did not think much about how many experiences include interactions that are only added as fluff to fill out the world and create the illusion of player agency, versus interactions that have an actual impact on gameplay and narrative. After Karen introduced us to the interactive thriller Erica, I feel more aware of the difference. Some interactions affected how the plot played out, while others, like wiping a bathroom mirror, felt like showcase interactions. I find that although meaningless interaction may not make any resounding impact on the overall player and gameplay experience, it can be quite satisfying and immersive. I’m also very interested in learning more about the other interactive experiences Karen mentioned, like Black Mirror’s Bandersnatch and Shibuya Crossing.

Contextual Review:

The first game I played was Land’s End on the Quest. Visually, the game was stunning in its low-poly simplicity, lighting, and calm atmosphere. The gameplay and mechanics were also simple and calm. In the early levels I played, I had no real sense of urgency or anxiety to move from one level to the next. The gaze controls had to be, and fortunately were, very smooth and satisfying, as they were the only gateway to interaction. Gaze was used to navigate to predetermined parts of the map (though there was little choice of where to go) and to solve puzzles by connecting dots and moving boulders in the scene. Overall, I think the gaze interactions added to the peacefulness and casualness of the game.

The second game I played was Chrysalis, and it felt messy, confusing, and slightly overwhelming. There seemed to be a lot of controls that did not add much to the experience. The movement scheme of holding down both grips to travel in the direction you were facing was sometimes jarring if you accidentally triggered the acceleration (which seemed to fire even when I didn’t want it to). Why did the developers choose grips to move forward? A control setup like this would make more sense if the player moved using visible propulsors. Rotating the camera manually with the joystick was also not smooth and made me nauseous when it skipped. The puzzles were not intuitive either: there was a cave “puzzle” of spheres you had to press down on, but it was not clear whether you had to solve it, and it did not seem to prevent you from moving forward in the story. In the lab, a door was locked next to a button that was disabled. One would think you could unlock it by finding a keycard or solving a puzzle, but in the end, the only way to move forward was to listen to a voice recording that completed the plot point. Finally, the visuals and interactions were confusing. The physics of some objects floating while others did not made the supposed underwater environment unclear, and many objects that were once interactable (mainly grabbable) stopped being interactable later on. Overall, I found the game fairly frustrating, and the annoying narration did not help.

The last game I played was Dead and Buried. I do not usually gravitate towards first-person shooters, but I enjoyed this one. In the target practice mode, it was satisfying to change the environment and the way the targets were set up by turning the crank, pressing the button, and pulling the lever. The reload mechanic was also fun, as it used a flick of the wrist, although this did not feel intuitive at first (to a shooter-game newbie) when I played without the tutorial. It was also odd to have the cool shooting mechanic for selecting in the menu, but then have to directly touch some of the buttons later on. I did not have time to play a game mode with movement, but from what I looked up later, the arena mode paired with an Oculus Quest lets the game track your physical movement and map it to your character in-game. Also, in the tutorial, physically ducking with the headset seemed to be an important mechanic for avoiding flying projectiles. It seems like the game is very open to fully utilizing player movement, which I find really interesting.

Blog Post 2 – Intro to VR Concepts & Production

VR 360 Tests

In today’s introduction to VR development in Unity, my group had several technical issues setting up the Game Lab’s desktop computer and the Oculus Quest, which left very little time to explore the Unity XR Interaction Toolkit. Fortunately, we did find time to try replicating the example interactable objects by adding the llama object’s bubble effect and an instantaneous grab interaction to a capsule. After experiencing the grab types, I found that the closest-proximity grab tool (Kinematic?) would be perfect for adding immersion to the game. It is not realistic to have an object teleport to you from a distance, because bending down to pick up artifacts and trash comes with the beachcomber/cleaner job description.

The UI elements were cool to explore, but we found it unlikely that we would have to implement them. We could possibly use a scroll bar on an inventory for the objects collected, but I believe it would be unnecessary because the number of objects in each collection-based trigger will be small (only 2–3 objects). If the inventory (a bucket) is overfilled with unnecessary objects, the environment will not be triggered.

After talking to Shiloh, who was able to start implementing gaze controls using a tutorial, I’m looking forward to exploring gaze controls as well. 

Finally, I worked on applying panoramic images and 360 videos to the skybox. The panoramic images worked perfectly, but I had some issues with the video player not recognizing the render texture for the 360 video. I was unable to retest the issue, but I believe we may have given the render texture the wrong dimensions. In my future explorations, I want to see if I can add multiple 360 alpha textures/videos to a scene or if I should just stick with strategically placed textured planes.
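One rule of thumb I came across while debugging: 360 skybox video is normally stored in an equirectangular layout with a 2:1 aspect ratio, and the render texture the video player writes into should match the source video’s resolution exactly. As a language-neutral sketch of that sanity check (the function name and return shape are my own invention, not a Unity API):

```python
def suggest_render_texture(video_w: int, video_h: int) -> dict:
    """Given a 360 video's dimensions, suggest a matching render texture size.

    Equirectangular 360 video is typically 2:1 (e.g. 4096x2048). The render
    texture should match the source resolution one-to-one; a mismatched size
    is one likely cause of the video player failing to display the skybox.
    """
    return {
        # Match the source video exactly rather than guessing a size.
        "render_texture": (video_w, video_h),
        # Flag sources that are not in the expected 2:1 layout.
        "equirectangular_2_to_1": video_w == 2 * video_h,
    }
```

For example, a 4096x2048 source checks out as 2:1 and suggests a 4096x2048 render texture, while a 3840x2160 (16:9) source would be flagged as not equirectangular.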

One last issue: my Oculus application kept throwing an error that seemed to prevent me from setting up Oculus Link with the Quest. The app did recognize the headset and showed it as “connected”, but I was unable to connect it to Unity. I tried repairing the app, but it still threw the error. Hopefully, by next week I can get the issue sorted out.

Blog Post 1 – VR Object Creation: Google Blocks

Google Blocks VR / Google Blocks Tree in Unity

Google Blocks
For this investigation, I decided to explore Google Blocks as my first introduction to VR 3D object creation. The software was incredibly intuitive for a beginner, and I was pleasantly surprised that it did not depend on learning a bunch of hotkeys to make object creation fast and streamlined. Compared to Quill’s confusing and complicated interface, with its many settings and variables, Google Blocks’ simple interface was fast to pick up and allowed my first modelling attempt to turn out much better than I had expected. Colouring was satisfying with the paint palette accessible at the flip of your hand, adding and creating new objects felt satisfying and modular, and the stroke tool, although low poly, was smooth to use. I had thought it would be more difficult to navigate depth and space within the VR environment, but Google Blocks’ visualization of proximity and overlap detection made modelling feel natural.

Still, there were a couple of editing tools that I had difficulty getting the hang of. I wish there were a clearer way to learn how to use all the tools without having to redo the tutorial (Quill, despite its complexity, did have a hotkey/tool instructions window that could be opened at any time). Given the simplicity of the interface, a preview and effects window could appear when the creator hovers over one of the initial tools for an extended period (similar to Photoshop). In my opinion, the modify and grab tools would have benefited the most from a tool preview, because each has a wide variety of functions. With the modify tool, I quickly understood how to move faces, vertices, and edges, but I was never aware that you could subdivide the elements of an object, or how to go about doing it. I also wish that Google Blocks had a function to subtract shapes from an existing object, to easily create custom shapes (especially ones with round indentations).

At the beginning of my session in Google Blocks, I played around with all the tools to get used to them. Even then, as I mentioned above, I was unable to discover all of them. I had looked up Google Blocks before I dove into the software and was amazed at how capable it was. In 2017, the Google Blocks team managed to fully develop a sci-fi puzzle game called “Block Isles” in two weeks, using assets made in the software and Unreal Engine. So, as much as I loved experimenting with the tools (which resulted in an extremely abstract model), I wanted to try making an actual asset. I decided to start off simple with a tree made up of cylinders and spheres. I had a lot of fun figuring out how to rotate and scale objects to create clusters of branches and “leaves”.

Overall, I felt the creative pipeline within Google Blocks was very intuitive and straightforward. First, you add the basic forms of what you want to model in their respective base colours (no airbrush painting), then you use the modify and grab tools to alter the shapes to your liking (make them more organic), and finally, you can add more details with the paint tool. After the object asset is created, it can be easily saved. These objects can then be imported into a new Google Blocks scene to set up an environment directly in the software. On top of that, all the individual forms that make up the finished model can still be rearranged in a game engine or 3D modelling program (as shown in my screenshot of the tree imported into Unity).


Show It 2 Me

Show It 2 Me was an extremely fun music-video VR experience. The neon, almost psychedelic illustrations and animations made in Google’s Tilt Brush made for a fun and fascinating ride (literally, at times). During the experience, there was little actual interaction: it consisted of creating strokes of the pink and blue gradient as you uncontrollably moved through the world, and grabbing daggers that rained from the sky. The experience was very linear and the assets repetitive, so I was not surprised that the developers posted a 360 video on YouTube that was just as enjoyable and did not feel at all lacking next to the VR version.

Traveling While Black / Travelling While Black 2

Travelling While Black

Travelling While Black, like Show It 2 Me, did not have any meaningful interaction with the environment and could easily have been a normal 360 or flat video, yet the VR platform definitely enhanced the experience. The VR documentary did a great job of using 360 video, without much of a 3D world, to get the viewer invested. As shown above, it appears that most effects were done either in production (while filming the scenes) or in post-production programs like After Effects. I am really fascinated by how old footage seemed to be projected directly onto the walls and ceiling of the diner. At first, I thought it was done in post, but the shadows of the ceiling fan appear so realistic. As a viewer, the point of the documentary is to go on a journey with the interviewees, so the effect of sitting with the subjects of the documentary and experiencing the environment as they would have was very eye-opening.

SENS VR

SENS (Chapter 1)

SENS had the most interaction out of everything I experienced. Controlling the direction the character was going with just my gaze and head tracking was very relaxing. Sometimes the amount of time it took to get from one destination to another felt very long, but I think this mechanic was meaningful. I loved how simple it was, with its repetitive directional-arrow motif guiding your way. I also really enjoyed how seamless and unjarring the shifts between first- and third-person views were.

Unframed VR: The Isle of the Dead

Unframed: A Virtual Reality Series About Swiss Painters

Unframed was a peaceful experience. There was no player interaction, so all I could do was stand and let it transport me through the history of the paintings. I enjoyed seeing the paintings come to life. I feel that if an experience like Unframed were applied to photographs, it would create a great opportunity to incorporate 360 videos.
