Blog Post 6: Getting to Completion

screenshot-349

At this point, almost all assets have been successfully implemented. In the screenshot above, seven artifacts are fully set up and I have three left to do. The artifacts are hovering above the pail for testing purposes; without access to the headset, I can't grab artifacts off the ground and manually place them into the pail. All the artifacts above can successfully combine to spawn a collection object.

screenshot-348

screenshot-347

The above screenshots show how the new inventory system works. As objects are added to the bucket, only the selected inventory object is shown. I wrote a script to keep track of the last selected inventory object so the player can cycle forwards and backwards through the inventory. The new system has solved the issue of artifacts falling out of the bucket, but we still have problems with the artifacts staying parented and kinematic once they leave the pail.
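
For anyone curious, here is a rough sketch of how that cycling logic works. It is a simplified illustration rather than the actual project script; the class and method names (PailInventory, CycleForward, CycleBackward) are made up for the example.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch (not the actual project script): tracks the currently
// selected artifact in the pail and lets the player cycle through the inventory.
public class PailInventory : MonoBehaviour
{
    private readonly List<GameObject> items = new List<GameObject>();
    private int selectedIndex = -1;            // last selected inventory object

    public void Add(GameObject artifact)
    {
        artifact.SetActive(false);             // hide everything by default
        items.Add(artifact);
        Select(items.Count - 1);               // newest item becomes the visible one
    }

    public void CycleForward()  { if (items.Count > 0) Select((selectedIndex + 1) % items.Count); }
    public void CycleBackward() { if (items.Count > 0) Select((selectedIndex - 1 + items.Count) % items.Count); }

    private void Select(int index)
    {
        if (selectedIndex >= 0 && selectedIndex < items.Count)
            items[selectedIndex].SetActive(false);   // hide the previously selected object
        selectedIndex = index;
        items[selectedIndex].SetActive(true);        // only the selected object is shown
    }
}
```

In practice, the cycle methods would be bound to controller input so the player can flip through the bucket's contents.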

During our most recent tests in VR, we also noticed some physics issues with the bucket, which caused it to distort when grabbed by the Ray Interactor. The body of the bucket would also somehow be left behind in the 3D beach scene whenever the Beach Recreation scene was entered.

screenshot-350

Speaking of the Beach Recreation scene, the old version has been replaced with a duplicate of the 3D Beach scene. The screenshot shows a version from earlier in the day; since then the set has changed a lot, and I have added the spawners for all of the Beach Rec collection objects.

screenshot-345

This screenshot is of the Sunken Ship and Strangled Fish collection objects. Like the previous screenshot, the scene was not complete when I took it, so I have added the remaining spawners and collection artifacts since then. Still, it shows one of the custom animations I made (in Blender) for the sea/ocean life. I animated the Strangled Fish, Strangled Turtle, and Turtle Hatching collection objects.

Finally, we still need to add audio, a start and quit button, the controller scheme, a popup for created collections, and Nik’s 360 skyboxes, and we need to fix the pail physics. The newest VR tests also broke the render textures of the portals, so that issue needs to be fixed as well.

Blog Post 5: Personal VR Game Progress

The majority of my progress so far has been on prototyping the setup of each scene in Echoes of the Tides and implementing the mechanics and player controls. I was unable to come to class yesterday because I was unwell, but I made sure to keep up to date with what my group was doing and worked a bit on the Unity project at home. They ended up adding their newer assets to the scenes, which was great to see, but I was unable to attach interactivity to the assets that needed it. We envision certain objects, like the radio/speakers and the bonfire, being interactable. For example, the radio would play music and the bonfire could be lit.
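
To give a sense of what I mean by interactable, here is a minimal sketch of the kind of behaviour component I would attach. The class and field names are hypothetical, and the assumption is that an XR interactable's select/activate event would call the public method; the bonfire would follow the same pattern with a ParticleSystem.

```csharp
using UnityEngine;

// Hypothetical sketch, not the project's actual setup: a small behaviour whose
// public method an XR interactable's select/activate event could call.
public class RadioBehaviour : MonoBehaviour
{
    [SerializeField] private AudioSource music;   // assigned in the inspector

    public void ToggleMusic()
    {
        if (music.isPlaying) music.Pause();
        else music.Play();
        // A BonfireBehaviour would look the same, calling Play() on a fire ParticleSystem.
    }
}
```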

 

Fortunately, I was finally able to sort out the buggy artifact proximity mechanic at home. Previously, artifact objects could be right next to each other and the proximity would either not be detected or be overridden by a false proximity flag from another, unrelated artifact. Sometimes it seemed to work, but more often than not it did not. I was able to get the Bonfire to be triggered regularly, but I was not able to untrigger it even when I moved the artifacts out of proximity of each other. Printing to the console when objects were within a certain distance showed that the if statements were working, but for some reason the code inside the if statement that triggers the creation of a collection did nothing. After several weeks of trying to fix my code, I finally decided to rewrite it and approach proximity through collisions rather than distances. I wanted to have it set up so my group could present the proximity and triggers working, but I was unable to finish it on time. Below are screenshots of the proximity-based collections working and triggering different combo objects.
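
Here is a simplified sketch of the collision-based approach I am moving to, assuming each artifact carries a trigger SphereCollider sized to the proximity radius (and at least one object in the pair has a Rigidbody so trigger events fire). The class and field names are illustrative, not my actual script.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of the rewritten proximity check: instead of comparing
// distances every frame, each artifact has a trigger collider sized to the
// proximity radius, and OnTriggerEnter/Exit maintain a set of nearby artifacts.
// Note: one of the two colliding objects also needs a Rigidbody for trigger events.
[RequireComponent(typeof(SphereCollider))]
public class ArtifactProximity : MonoBehaviour
{
    public string artifactName;   // e.g. "Lighter", "Wood", "Beer Can"

    private readonly HashSet<ArtifactProximity> nearby = new HashSet<ArtifactProximity>();

    private void OnTriggerEnter(Collider other)
    {
        var artifact = other.GetComponent<ArtifactProximity>();
        if (artifact != null && artifact != this)
        {
            nearby.Add(artifact);
            Debug.Log($"{artifactName} is in proximity of {artifact.artifactName}");
        }
    }

    private void OnTriggerExit(Collider other)
    {
        var artifact = other.GetComponent<ArtifactProximity>();
        if (artifact != null) nearby.Remove(artifact);
    }

    // Lets a collection check ask whether a specific artifact is currently in range.
    public bool IsNear(string otherName)
    {
        foreach (var a in nearby)
            if (a.artifactName == otherName) return true;
        return false;
    }
}
```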

 

screenshot-241

The Beach Recreation scene with the assets that Annie and Nik made and set up. Based on the critique we got about this scene, we have decided to focus on the Underwater scene and try implementing the bonfire collection in the 3D beach gameworld to replace the purpose of the Beach Recreation scene.

screenshot-245_li

The three objects that can make the bonfire collection are circled: Lighter, Beer Can, and Wood. In the screenshot, they are too far apart, so nothing is being triggered. In the future, it would make more sense if the bonfire was just the Wood and the Lighter, but to test out three-way combos I added the can.

screenshot-246_li

The bonfire artifacts are in close enough proximity, so the Bonfire object has been spawned into the world. Moving an object out will remove it.

screenshot-253_li

The brackets show there is distance between the artifacts that can make up the collections. The Fish and Can will trigger the “Strangled Fish” combo object and the Fish and Wood will trigger the “Sunken Ship” combo object. The pink cubes are placeholders to show where the combo objects will spawn.

screenshot-254_li

The “Strangled Fish” combo object is spawned. As you can see in the inspector, I keep track of the complete collections in a list. Each spawner’s spawn script has access to it and will spawn its combo object when it detects its collection. In the future, I may create a spawn manager to keep track of all the spawns and triggers in each world.
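
Roughly, each spawner behaves like the sketch below. The shared static list and the field names are illustrative stand-ins for the list visible in the inspector, not the actual script.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Rough sketch (names are illustrative): each spawner watches a shared list of
// completed collections and spawns its combo object when its collection appears.
public class CollectionSpawner : MonoBehaviour
{
    public static readonly List<string> completedCollections = new List<string>();

    [SerializeField] private string collectionName;    // e.g. "Strangled Fish"
    [SerializeField] private GameObject comboPrefab;   // the combo object to spawn

    private GameObject spawnedCombo;

    private void Update()
    {
        bool complete = completedCollections.Contains(collectionName);

        if (complete && spawnedCombo == null)
            spawnedCombo = Instantiate(comboPrefab, transform.position, transform.rotation);
        else if (!complete && spawnedCombo != null)
            Destroy(spawnedCombo);   // moving an artifact out of proximity removes the combo
    }
}
```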

screenshot-251_li

The “Sunken Ship” combo object is spawned. For testing, the console window will update whenever two artifacts are in proximity and will stop when they aren’t.

screenshot-250

Finally, the portal shader showing the other worlds is an ongoing issue that is still not fully resolved. In the screenshot it looks fine, because when there is only one camera and one eye to render to it's very smooth, but in VR it has issues rendering a version of the shader that is properly oriented in each eye. I will continue looking into how to solve the issue, but I may have to scrap the shader and create a portal VFX instead. The main reason I had it in was to make the worlds feel connected, almost as if you were not being teleported to the destination but were connected/parallel to it through a wormhole.

Blog Post 3 – Tutorial Report (Jan 28, 2020)

My biggest takeaway from Hector’s class on Jan 28th was the tutorial on setting up Oculus controllers in Unity. Previously, all my group’s VR tests used the XR Interaction Toolkit’s default teleportation mechanics to move around the scene. Teleporting was fun, but we wanted to recreate the feeling of walking on the beach and picking up objects that were stumbled upon, rather than automatically teleporting the player to where they wanted to go. Walking also made the beach world seem more vast, empty, and open.

The Oculus controllers were a lot more difficult to set up than I thought they would be, because my group decided to transition from Oculus Integration, which had a lot more documentation, to Unity’s new XR integration. Fortunately, Hector had done a lot of tests himself and shared with the class a fairly hard-to-find documentation page on the correct API. I started off adapting Hector’s code for accessing each individual controller to our group’s needs and implemented the new version with my controls in my PlayerScript. Unfortunately, at first it did not seem to work, so I briefly tried setting up the controllers through Unity’s Input Manager. Interestingly enough, it only recognized the left joystick. Accessing one joystick was okay for very linear movement using the global axis/coordinates of the scene, but it was awkward when the player wanted to turn around or was reoriented, which swapped the directional movement.
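
For reference, here is a minimal sketch of polling both joysticks through Unity's XR input API. This is my paraphrase of the approach, not Hector's actual code.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Minimal sketch: poll each controller's joystick through Unity's XR input API.
public class ControllerInput : MonoBehaviour
{
    private void Update()
    {
        InputDevice left  = InputDevices.GetDeviceAtXRNode(XRNode.LeftHand);
        InputDevice right = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);

        if (left.isValid && left.TryGetFeatureValue(CommonUsages.primary2DAxis, out Vector2 leftStick))
            Debug.Log($"Left joystick: {leftStick}");

        if (right.isValid && right.TryGetFeatureValue(CommonUsages.primary2DAxis, out Vector2 rightStick))
            Debug.Log($"Right joystick: {rightStick}");
    }
}
```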

In the end, I realized that the reason the controllers were not recognized through the XR integration was that the headset needed to sense someone wearing it before the scene could be played and the controllers could be found. At the moment, I am testing two methods of movement controls. In the first, the player has access to both joysticks and controls the rotation of their virtual body with one joystick and the direction they want to go (relative to the angle they have rotated) with the other. The second uses just one joystick to move in a direction based on the direction the HMD is facing through head tracking. I found that rotating the player’s body was the smoothest and easiest movement, but it was sometimes a bit much for my eyes when I looked around while moving. Moving towards the direction the HMD was facing felt fine, but it involved physically facing the direction you wanted to go. Overall, my group still needs to figure out which method feels the most comfortable.
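
Here is a rough sketch of the second scheme, moving in the direction the HMD is facing. The rig's camera transform stands in for head tracking, and the names and speed value are placeholders rather than my actual PlayerScript.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Sketch of head-tracked movement: one joystick moves the rig in the direction
// the HMD is facing, projected onto the ground plane.
public class HeadRelativeMovement : MonoBehaviour
{
    [SerializeField] private Transform head;       // the XR rig's camera (tracks the HMD)
    [SerializeField] private float speed = 1.5f;   // metres per second (placeholder value)

    private void Update()
    {
        InputDevice left = InputDevices.GetDeviceAtXRNode(XRNode.LeftHand);
        if (!left.TryGetFeatureValue(CommonUsages.primary2DAxis, out Vector2 stick))
            return;

        // Project the head's forward/right vectors onto the horizontal plane so the
        // player walks along the beach instead of into the ground or the sky.
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        Vector3 right   = Vector3.ProjectOnPlane(head.right,   Vector3.up).normalized;

        transform.position += (forward * stick.y + right * stick.x) * speed * Time.deltaTime;
    }
}
```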

Blog Post 4 – VR Experience Contextual Review

Class review:

It is fascinating to learn more about meaningful interaction in both gameplay design and general player experience. At first, I did not think too much about how a lot of experiences had interactions that were only added as fluff to fill out the world and create the illusion of more player agency, versus interactions that had an actual impact on gameplay and narrative. After Karen introduced us to the interactive thriller Erica, I feel more aware of the differences. Some interactions had an impact on how the plot played out and others, like wiping a bathroom mirror, felt like showcase interactions. I find that although meaningless interaction may not make any resounding impact on the overall player and gameplay experience, it can be quite satisfying and immersive. I’m also very interested in learning more about the other interactive experiences that Karen mentioned, like Black Mirror’s Bandersnatch and Shibuya Crossing.

Contextual Review:

The first game I played was Land’s End on the Quest. Visually, the game was stunning in its low-poly simplicity, lighting, and calm atmosphere. The gameplay and mechanics were also very simple and calm. In the early levels I played, I had no real sense of urgency or anxiety to move from one level to the next. The gaze controls had to be, and fortunately were, very smooth and satisfying, as they were the only gateway to interaction. Gaze was used to navigate to predetermined parts of the map (though there was little choice of where to go) and to solve puzzles by connecting dots and moving boulders in the scene. Overall, I think the gaze interactions did add to the peacefulness and casualness of the game.

The second game I played was Chrysalis, and it felt messy, confusing, and slightly overwhelming. There seemed to be a lot of controls that did not add much to the experience. The movement controls of holding down both grips and moving towards the direction you were facing were sometimes jarring if you accidentally pressed the acceleration (which seemed to trigger even when I didn’t want it to). Why did the developers choose to use the grips to move forward? I felt that a control setup like this would make more sense if the player moved using noticeable propulsors. Also, rotating the camera manually with the joystick was not smooth and made me nauseous when it skipped. Next, the puzzles were not that intuitive: there was a cave “puzzle” of spheres you had to press down on, but it was not clear whether you had to solve it, and it did not seem to prevent you from moving forward in the story. In the lab, a door was locked and there was a button beside it that was disabled. One would think you could unlock it by finding a keycard or doing a puzzle, but in the end the only way to move forward was to listen to a voice recording that completed the plot point. Finally, the visuals and interactions were confusing. The physics of some objects floating and others not made the supposed underwater environment unclear, and many objects that were once interactable (mainly grabbable) were not interactable later on. Overall, I found the game fairly frustrating, and the annoying narration did not help.

The last game I played was Dead and Buried. I do not usually gravitate towards first-person shooters, but I enjoyed this one. In the target practice mode, it was satisfying to be able to change the environment and the way the targets were set up by cranking the crank, pressing the button, and pulling the lever. The reload mechanic was also fun as it used a flick of the wrist, although this did not feel too intuitive at first (to a shooter-game newbie) when I played without the tutorial. It was also odd to use the cool shooting mechanic to select in the menu, but then have to directly touch some of the buttons later on. I did not have time to play a game mode with movement, but from what I looked up later, the arena mode paired with an Oculus Quest maps your physical movement to your character in-game. Also, in the tutorial, it seemed that physically ducking with the headset was an important mechanic for avoiding flying projectiles. It seems like the game is very open to fully utilizing player movement, which I find really interesting.

Blog Post 2 Intro to VR Concepts & Production

VR 360 Tests

In today’s introduction to VR development in Unity, my group had several technical issues with setting up the Game Lab’s desktop computer and the Oculus Quest. This resulted in a very limited amount of time to explore the Unity XR Interaction Toolkit. Fortunately, we did find the time to try replicating the example interactable objects by adding the Lhama object’s bubble effect and an instantaneous grab interaction to a capsule. After experiencing the grab types, I found that the closest-proximity grab tool (Kinematic?) would be perfect for adding immersion to the game. It is not realistic to have an object teleport to you from a distance, because bending down to pick up artifacts and trash comes with the beachcomber/cleaner job description.
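
The setup I have in mind looks roughly like the sketch below. It assumes the XR Interaction Toolkit's XRGrabInteractable exposes a movementType property (as it did in the version we tested) and that the hands use a Direct Interactor rather than a Ray Interactor; treat it as a sketch, not a final setup.

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Sketch only: marks an artifact as a kinematic grab interactable so it follows
// the hand smoothly when grabbed instead of snapping to it instantly.
[RequireComponent(typeof(Rigidbody))]
public class GrabbableArtifact : MonoBehaviour
{
    [SerializeField] private XRGrabInteractable grab;   // added to the same object in the inspector

    private void Awake()
    {
        // Assumption: movementType behaves as in the toolkit version we tested.
        grab.movementType = XRBaseInteractable.MovementType.Kinematic;
        // Pairing this with a Direct Interactor on the hands (rather than a Ray
        // Interactor) keeps grabs close-range, so the player has to bend down.
    }
}
```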

The UI elements were cool to explore, but we found it unlikely that we would have to implement them. We could possibly use a scroll bar on an inventory of the objects collected, but I believe it would be unnecessary because the number of objects in each collection-based trigger will be small (only 2-3 objects). If the inventory (a bucket) is overfilled with unnecessary objects, the environment will not be triggered.

After talking to Shiloh, who was able to start implementing gaze controls using a tutorial, I’m looking forward to exploring gaze controls as well. 

Finally, I worked on implementing panoramic images and 360 videos on the skybox. The panoramic images worked perfectly, but I had some issues with the video player not recognizing the render texture for the 360 video. I was unable to retest the issue, but I believe we may have given the render texture the wrong dimensions. In my future explorations, I want to see if I can add multiple 360 alpha textures/videos to a scene or if I should just stick with strategically placed textured planes.
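
For next time, this is roughly the setup I was aiming for: a VideoPlayer rendering into a RenderTexture that feeds a panoramic skybox material. The texture size and the _MainTex property name of the built-in Skybox/Panoramic shader are assumptions I still need to verify.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch of a 360 video skybox: the VideoPlayer writes into a RenderTexture,
// which is fed to a Skybox/Panoramic material assigned as the scene skybox.
public class Video360Skybox : MonoBehaviour
{
    [SerializeField] private VideoClip clip;   // equirectangular 360 video

    private void Start()
    {
        // Assumption: the render texture should match the video's 2:1 aspect ratio.
        var renderTexture = new RenderTexture(4096, 2048, 0);

        var player = gameObject.AddComponent<VideoPlayer>();
        player.clip = clip;
        player.isLooping = true;
        player.renderMode = VideoRenderMode.RenderTexture;
        player.targetTexture = renderTexture;
        player.Play();

        // Assumption: the panoramic skybox shader samples its texture from _MainTex.
        var skybox = new Material(Shader.Find("Skybox/Panoramic"));
        skybox.SetTexture("_MainTex", renderTexture);
        RenderSettings.skybox = skybox;
    }
}
```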

Finally, my Oculus application kept throwing an error that seemed to prevent me from setting up the Quest with Oculus Link. The app did recognize the headset and showed that it was “connected”, but I was unable to connect it to Unity. I tried repairing the app, but it still threw the error. Hopefully, by next week I can get the issue sorted out.

Blog Post 1 VR Object Creation: Google Blocks

Google Blocks VR
Google Blocks Tree in Unity

Google Blocks
For this investigation, I decided to explore Google Blocks as my first introduction to VR 3D object creation. The software was incredibly intuitive for a beginner, and I was pleasantly surprised that it did not depend on learning a bunch of hotkeys to make object creation fast and streamlined. Compared to Quill’s confusing and complicated interface with its many settings and variables, Google Blocks’ simple interface was fast to pick up and allowed my first modelling attempt to have a much better result than I had expected. Colouring was satisfying with the paint palette accessible at the flip of your hand, adding and creating new objects felt satisfying and modular, and the stroke tool, although low poly, was smooth to use. I had thought it would be more difficult to navigate depth and space within the VR environment, but Google Blocks’ visualization of proximity and overlap detection made modelling feel natural.

Still, there were a couple of editing tools that I had difficulty getting the hang of. I wish there were a clearer way to learn how to use all the tools without having to do the tutorial again (Quill, despite its complexity, did have a hotkey/tool instructions window that could be opened at any time). Given the simplicity of the interface, a preview and further-effects window could show up when the creator hovers over one of the initial tools for an extended period (similar to Photoshop). In my opinion, the modify and grab tools would have benefited the most from a tool preview, because each has a wide variety of functions. For the modify tool, I quickly understood how to move faces, vertices, and edges, but I was never aware you could subdivide the elements of an object or how to go about doing it. I also wish that Google Blocks had a function to subtract shapes from an existing object, to easily create custom shapes (especially ones with round indentations).

At the beginning of my session in Google Blocks, I played around with all the tools to get used to them. Even then, as I mentioned above, I was unable to discover all of them. I had looked up Google Blocks before I dove into the software and was amazed at how capable it was. In 2017, the Google Blocks team managed to fully develop a sci-fi puzzle game, called “Block Isles”, in two weeks using assets made in the software and Unreal Engine. So, as much as I loved experimenting with the tools, which resulted in an extremely abstract model, I wanted to try making an actual asset. I decided to start off simple with a tree made up of cylinders and spheres. I had a lot of fun finessing how to rotate and scale objects to create clusters of branches and “leaves”.

Overall, I felt the creative pipeline within Google Blocks was very intuitive and straightforward. First, you add the basic forms of what you want to model in their respective base colours (no airbrush painting), then you use the modify and grab tools to alter the shapes to your liking (make them more organic), and finally, you can add more details with the paint tool. After the object asset is created, it can be easily saved. These objects can then be imported into a new Google Blocks scene to set up an environment directly in the software. On top of this, all the individual forms that make up the finished model can still be rearranged in a game engine or 3D modelling program (as shown in my screenshot of the tree imported into Unity).

 

Show It 2 Me

Show It 2 Me

Show It 2 Me was an extremely fun music video VR experience. The neon, almost psychedelic illustrations and animations made in Google’s Tilt Brush made for a fun and fascinating ride (literally at times). During the experience there was little actual interaction; it was limited to creating strokes of the pink and blue gradient as you uncontrollably move through the world and grabbing daggers that rained from the sky. The experience was very linear and the assets repetitive, so I was not surprised that the developers posted a 360 video on YouTube that was just as enjoyable and did not feel at all lacking next to the VR version.

Traveling While Black
Travelling While Black 2

Travelling While Black

Travelling While Black, like Show It 2 Me, did not have any meaningful interaction with the environment and could easily have been a normal 360 or flat video, yet the VR platform definitely enhanced the experience. The VR documentary did a great job of utilizing 360 video, without much of a 3D world, to get the viewer invested. As shown above, it appears that most effects were done either in production (while filming the scenes) or in post-production programs like After Effects. I am really fascinated by how old footage seemed to be projected directly on the walls and ceiling of the diner. At first, I thought it was done in post, but the shadows of the ceiling fan appear so realistic. As a viewer, the point of the documentary is to go on a journey with the interviewees, so the effect of sitting with the subjects of the documentary and experiencing the environment as they would have was very eye-opening.

SENS VR

SENS (Chapter 1)

SENS had the most interaction out of everything I experienced. Controlling the direction the character was going with just my gaze and the head tracking was very relaxing. Sometimes the amount of time it took to get from one destination to another felt very long, but I think this mechanic was meaningful. I loved how simple it was, with the repetitive directional arrow motif guiding your way. I also really enjoyed how seamless and unjarring it was to shift between first- and third-person views.

Unframed VR: The Isle of the Dead

Unframed: A Virtual Reality Series About Swiss Painters

Unframed was a peaceful experience. There was no player interaction, so all I could do was stand and let it transport me through the history of the paintings. I enjoyed seeing the paintings come to life. I feel that if an experience like Unframed were applied to photographs, it would create a great opportunity to add 360 videos.
