During today’s class, I came across a plethora of information – some previously familiar to me and some brand new. We started by exploring the history of VR, through all its innovations and fumbling attempts to create the ultimate immersive medium. Most of this content was foreign to me; beyond a few recognizable names and companies, the process by which today’s technology developed over time was a welcome stranger to my understanding. I feel as though it is common to neglect these sorts of history lessons (especially among recent generations, and specifically when it comes to technology), as the most recent tech is almost always superior – so much so that it often makes its predecessors look primitive, ignorant or even laughable at first glance. While this misconception is inevitable for most of us, I try to remind myself that nothing would be how it is today without what it was yesterday.
After the history section, we began working with Unity to set it up for VR design. I noticed that having Hector there was very helpful for questions that are hard to phrase properly for Google, or questions on such a basic level that most documentation might omit or breeze over the answers unknowingly. Those who didn’t have the correct version of Unity set about installing it while those who did followed along with the specifics of the downloads, preferences, and plug-ins necessary to get VR working. Overall, the process is not too difficult and seems to be well supported by the devs…but that depends, of course, on your perspective of VR’s future. I find that die-hard fans of the medium are consistently underwhelmed by the lack of support and infrastructure surrounding VR, while those who see it not as the future of screens but more as an interesting deviance in technology are shocked by the communities and companies backing VR. I’m looking forward to seeing where VR goes in the future, and which of these two sides of a spectrum will end up on the wrong side of history.
I love the concept of VR and what it’s all about. Escaping to a whole new world while standing in one place? Sign me up. But I have only experienced VR a handful of times, and only through stationary, non-interactive experiences. Aside from a couple of cinematic pieces, I haven’t really dabbled in virtual reality. So, for one of my first opportunities with it to be one where I could build and create all my experiences within it myself was incredible!
The first software I tried (but was unfortunately unable to export my .obj from) was VRCat. I really enjoyed using it from the start. I could already see its potential early on, as I quite easily caught onto the software without a tutorial or instruction. The Vive was also very easy to use, and one thing that definitely made the experience all the better was the incredible range of motion the Vive allowed me to have. I built a small castle in the first 5 minutes of using the software, and while a lot of VR rigs only allow you to look around an object, I was able to have a genuinely immersive experience! I could walk all the way around it and look inside it, which was really cool. I then made a tunnel out of some toruses and walked through it. The worldbuilding possibilities in this software seem endless, the tools are extremely user friendly, and I would absolutely love to try it again another time. If only I could figure out how to export the objects, my experience would have been perfect!
The next software I tested out was Google Blocks. I was excited for this one in particular because when researching the software the night before, I got really caught up in the beautiful low-poly art they had on the site. As I am a big fan of projects that use low-poly assets, I was really looking forward to trying it out! At first, it took me a while to figure out the software as I had never used an Oculus Rift before, but once I got the hang of it, I loved every minute of it! I really loved the free hand tool (I can’t remember exactly what it was called). I loved the fact that I could draw shapeless, meaningless floating “noodles” and had so much fun with it! I also built a tower of toruses and experimented with the colour tools in that as well. In terms of which software I enjoyed more, I have to say that VRCat really captured me more with what I could do within the software. The fact that I was able to walk around the world I just created and interact with it was amazing and it was the one that really got me excited about this project and working with VR technology!
Travelling While Black
Unfortunately, I had to leave before trying out “Travelling While Black” but I was luckily able to find a recording of the gameplay on YouTube. I’ve been heavily involved in the advocacy of black people and education about black history for the longest time. Going to a prominently white private school, I always felt the need to give my perspective on things and help explain my point of view as the one taught to me in class was not my own. To be able to immerse myself into a place where I feel I am physically in a room with people who share my view of the world and offer their own perspectives and insights is incredible. I wish that I had experienced the 360 videos in VR but even watching it through someone else’s gameplay was enough to make me feel something, and was enough to make me think. It was like history right in front of you but instead of reading about it in a textbook, you were there experiencing it, and listening to the people who had to endure it themselves. I would love my next experience in VR to be with this game!
The first creative tool that I experienced is the Superhot VR game. It brings the player into a world full of angry red glass men (I saw three of them in every level that I played through, but there might be more in other levels). The player needs to fight the red glass men using the different weapons the game offers at the beginning, such as a gun, a hammer, darts, and even your own punches. At the beginning of the game, the red glass characters move really weirdly: sometimes they move fast and sometimes they just stop in place. Then I realized that time in Superhot VR follows the movement/speed of the player — as the player moves fast/slow, they move fast/slow. This means the player has full control of the world in Superhot VR, which I think is the coolest thing about this game.
The other game that I played is a boxing game, which I believe is called Creed: Rise to Glory (I might be wrong, as I didn’t see the name clearly). In this game, the background/environment (the stage, the referee, and the character) is really realistic, which increases the player’s sense of interactivity. All the player has to do is use the different gestures the system teaches at the beginning to defeat the strong boxer in front of you. I think it’s a representative experience of how VR builds strong interactivity with the player.
I experienced Blocks by Google and created a simple 3D ice cream model. For a low-poly modeller, I think the system and the functions of each section are really clear, straightforward, and easy to understand and work with. It was a really interesting and amazing process to build a model right in front of me and control it with only my hands, not a keyboard, a mouse, and a PC.
Google Blocks gives a lot more freedom to move around in physical space and gives a higher engagement level when making 3D models. Selecting specific vertices or faces on the model is significantly easier to do in VR than on a 2D screen.
Although Google Blocks doesn’t have advanced features that you would find in 3D modeling software, such as mirroring or combining shapes, one interesting feature it provides is 3D stroke painting. Stroke painting can be a fast way to create leafless trees or other detailing assets for worldbuilding.
Importing the models into Blender and Unity was seamless, and the different parts were kept separate, making them easy to edit on a computer. The low-poly base can be converted to a smoother shape on the computer if desired.
Testing Google Blocks
Overall, Google Blocks had a gentler learning curve compared to other VR 3D modeling tools such as Oculus Quill. It is a great tool for creating simple world assets, such as blocky skyline buildings or highly detailed low-poly creations.
Phone of the Wind
360 video and VR
Phone of the Wind is an interactive film combining 360° video, 2D animation, and VR. It is centered around a phone booth in Japan. The activities of phone users are shown in the 360° video of the village. As they narrate memories of their lost loved ones, the scenes switch to 2D animation, a flashback to their lives together. In the end, the player interacts with a 3D modeled version of the phone booth, replicating the one seen in the 360° video.
The film combines the three variations well for a coherent storyline. There were fade-to-white transitions whenever the film switched between 360° video and animation. However, the 2D animation and the 360° video seemed detached due to the extreme difference between the realism of the video and the imaginary, dream-like illustration style. The last scene was a 3D modeled / VR reconstruction of the phone booth, which linked back to the 360° video smoothly. This was a clear connection to the video scenes, yet I feel the 2D animations were not as consistent in terms of style.
The game design provided visual cues to direct players in the right direction, such as glowing pathways in the tunnel leading a particular way. The voice acting was quirky and helped the player figure out the narrative. There were also instructions and maps as props in the game to act as a guide. The player seems to be floating, so the up and down movement of the controllers was used to smoothly float up or down in the game. However, rotating right or left was done with the joystick and was not as smooth. The game had a similar style and feel throughout, which made it entertaining and easy to play.
For the digital creation aspect of this assignment, I used Google Blocks to create a basic 3D model. As it is a project from Google, I was pleased that my expectations of an intuitive and smooth experience were met immediately. The software lacks a 2D UI menu, which leaves all interactions built directly into the 3D environment – allowing for a seamless and distraction-free creation process. As a regular 3D modeller myself, I find the possibilities of this new medium very interesting; however, the downside of this simplicity, of course, was a relatively boiled-down feature set. Options for more advanced modelling tools (merging, loop cuts, rigging, modifiers) were understandably nowhere to be found. It appears that Google Blocks is intended to be no more than an introduction to VR 3D modelling, which leaves me curious as to whether an industry-standard modelling tool could be, or already has been, built for VR.
As it stands now, the process of creating a three-dimensional object using a mouse, keyboard and a two-dimensional screen has never felt ideal to me. Creating a 3D object demands a 3D workspace, and up until now we have been faking this space in a small 2D window for convenience. The VR modelling experience will surely improve as VR tech itself improves, hopefully arriving at a point when it will feel absurd to ever consider using a monitor. Perhaps this new medium is an inevitable next step in modelling software rather than a contingent alternative to its PC counterparts.
(not my finest work)
I’ve had the opportunity to play Beat Saber once before, which left me excited to try it again in class. In the game, cubes rhythmically fly towards the player with indicators showing which direction they must be slashed by the two lightsabers held in the player’s hands. My experience was fun and engaging at the “Normal” difficulty, discouraging at “Expert”, and reached a flow state at “Hard”. I noticed that, perhaps to a higher degree than your average game, the enjoyment of the gameplay was very much dependent on the difficulty setting – which had to be constantly considered and sometimes adjusted depending on the level.
The success of the game speaks for itself, and I believe that Beat Saber is a very effective game that sits well in VR and could not be replicated outside of it. The sometimes sweat-inducing movement involved in the gameplay is a crucial aspect of what makes it engaging, aided by its consistently modern visual aesthetic and reliance on music. Another aspect of the game design that deserves praise is the decision to keep the player still. Because of the natural limitations of VR, I believe games that don’t involve the player physically moving far distances around the game world will feel more immersive because, in reality, the player is confined to a very small space. Overall, Beat Saber is a prime example of playing to the current strengths of VR rather than overextending graphics and features and falling flat.
For this exploration, I first chose to explore Tilt Brush. I am not well-versed in VR but approached it with an open mind. When I put the Vive on my head my equilibrium was immediately thrown off. After a minute or two, I regained my mental balance and began flipping through the menus. What I discovered was a world of possibilities within Tilt Brush. The thought of 3D modelling in VR never even crossed my mind before this class. When I was asked to create an asset, I didn’t have a concept in mind. Instead, I frantically created something abstract using all the brush options I could find. This was the result.
I am pleased with this application and wish to explore its other capabilities in the near future. I feel like VR modelling is more interactive and mentally stimulating than the traditional pointing and clicking.
As for the 360 video, I watched the Mr Robot video in Within. As a fan of the show, I felt immersed in its world. Elliot – the main character – narrates his thoughts while having a mental breakdown. He is able to talk to the viewer, but the other characters cannot interact with you. It is interesting, to say the least. I can see its value as a medium for trailers and other such experiences for marketing purposes.
Lastly, I played the Siege VR demo. It was fun and easy to pick up for a novice like myself. It was a timed bow-and-arrow challenge to hit targets. The more I walked around, the more objects I found I could interact with, such as molotov bottles. It made me want to purchase a VR set for gaming purposes.
Bottle Knock-Down is an interactive work using PubNub, the p5 web editor, and the matter.js library (available here: https://brm.io/matter-js/). Using the chosen book, Ontario Soda Water Bottles by M. Carter and J. Hostetler, as a prompt, the original idea was to create a bottle-flipping simulation, inspired by the bottle-flipping trend that involves throwing a bottle and having it land upright. However, after testing out the idea, it was decided to simplify it and instead replicate the carnival game in which bottles are knocked over using a ball. The project uses four buttons (titled Ball 1, Ball 2, Ball 3, and Reset Balls), each of which either launches a ball in the sketch or resets the balls to their original positions. Participants press these buttons to launch the balls in an attempt to knock over glass bottles on the opposite side of the screen.
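The button-to-ball wiring can be sketched in plain JavaScript. This is a simplified, hypothetical stand-in for the real sketch: the button names match the ones above, but the ball state, start positions, and handler shape are assumptions, and in the actual project a PubNub subscriber receives the message while matter.js applies the launch force and handles collisions.

```javascript
// Hypothetical stand-in for the main sketch's message handling.
// Button names come from the project; the ball state and start
// positions are assumptions. In the real sketch, matter.js applies
// a force to the launched ball and resolves the bottle collisions.
const startPositions = [100, 200, 300]; // assumed resting x-positions

function makeBalls() {
  return startPositions.map(x => ({ x, launched: false }));
}

// Dispatch one incoming message from one of the four button pages.
function handleMessage(msg, balls) {
  if (msg.button === "Reset Balls") return makeBalls(); // back to start
  const index = { "Ball 1": 0, "Ball 2": 1, "Ball 3": 2 }[msg.button];
  if (index !== undefined) balls[index].launched = true;
  return balls;
}

let balls = makeBalls();
balls = handleMessage({ button: "Ball 2" }, balls);      // launches ball 2
balls = handleMessage({ button: "Reset Balls" }, balls); // resets all
```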
Fig. 1 First testing of the project.
Fig. 2 Splitting the screen into sections. Getting the objects on screen and applying matter.js to them
Fig. 3. Applying images to the game
Fig. 4. Second web application to control what is displayed on the main screen
In this Augmented Reality focused project, our team sought to create a magical experience using the technology we learned in the last design challenge. We all enjoyed working with Unity and collectively decided that we wanted to build on our AR development skills by using Vuforia. Our main target goals for this project were to include elements of storytelling and narrative, and original artwork for all the assets used. We settled on developing an app that allows users to combine our designed artwork to create ‘magical powers’, which were designed as shaders in Unity. The artwork was printed on temporary tattoo paper and stickers so that users could apply ‘powers’ to their own bodies and interact with others in a playful and engaging way.
We derived our inspiration from a 2017 art exhibition called Mirages and Miracles by Claire Bardainne & Adrien Mondot. This work gave us the idea to integrate particle systems into our designs in a way that brought the artwork to life.
Miracles and Mirages Exhibition Pieces
Our challenge for this piece was to create a collaborative AR experience: to inspire social collaboration and movement in a medium that unfortunately often ends up socially isolating. Our approach to solving this challenge was to engineer interactions off the screen. We decided this could be done through interactions with the tracking images. Thematically, we were inspired by alchemy and magical girl themes: the idea of mixing different elements and powers (including those of friendship!) to create something bigger and better.
In the process sketches pictured below, we have outlined two different foundations for the user experience design. The one on the left, design idea one, shows a scenario where users place their hand in the middle of the plinth and use their phone to activate the visual effects. The image on the right, design idea two, shows a scenario in which users place their hand into the plinth under an iPad to activate the visual effects. We also considered using wearable jewelry like bracelets or rings instead of body art; however, we opted for the body art as a more intuitive design feature. In the end, we decided to go with design idea one because the iPad camera we were using was inconsistent and laggy with image tracking.
Design Idea One Design Idea Two
Art Direction: Tattoos & Stickers
The idea behind the three tattoos was that when they overlap on top of each other, they would create the design seen on the acrylic plinth. Hence we decided on a geometric design for our tattoos as basic shapes can combine to create interesting designs. To keep a common theme through our tattoos, we repeated an identical circular outline, arrows, triangles, squares and dashed lines.
One of our challenges was to keep the tattoos simple enough to transfer easily onto the skin without ripping the design. Another aspect we had to keep in mind was the Vuforia image tracking rating. The tattoos had to be detailed yet asymmetrical to have a high feature count. Consequently, we had to edit two of our tattoos to be more asymmetrical to increase their rating to 5 stars. As a result, our original idea of the tattoos overlapping to show the exact design on the acrylic was scrapped.
Art Direction: Plinth
The artwork pictured on the acrylic plinth is a combination of all three tattoo designs layered on top of one another. We used this layered design to create a laser-cut-friendly template, which we then rasterized, scored and cut into a large acrylic sheet. The last step was to carefully glue the LED light strip onto the sides of the plinth so that the light would diffuse aesthetically and evenly.
Tech Design: Shaders and Particle Systems
The visuals of the shaders would define the “magic” of the experience. We wanted our experience to elicit a sense of play above all–that was our rationale for choosing more “cartoony” effects. While early tests were done with VFX cinema-style light bursts and photo-realistic flames, we wanted our final “powers” to be less realistic and closer to a low-poly video game style. This was our first time working with shaders. Mannie explored using Shader Graph early on, while Salisa experimented with the code route. Salisa’s shaders made it to the final version, and the code was created by closely following this tutorial. Each magic power is two shaders combined: they are made of two stacked balls to create a more interesting effect. As well, animation code was written to make each “power” bloom to life upon activation. To show that each tattoo had latent power and that something would happen, a generic particle effect was applied to each tattoo so it would sparkle when detected by Vuforia. Only when another one was detected would a power bloom to life.
Tech Design: Image Tracking and Distance Tracking
Using Unity, we created a script in C# that accurately tracks the distance between three image targets. This was an integral part of the system because we wanted users to be close enough together in order to activate their ‘power combinations’. When users view their tattoos through our AR app, a weak particle system is activated, displaying to the user their unique power element (green core, blue flame or fire ball). The script allows the system to detect when more than one image is being tracked. If two different image targets are being tracked, and the total distance between the two images is small enough, the system activates a stronger shader effect depending on the image target combination. When three image targets are tracked at an optimal distance, the users activate the largest shader effect, called ‘Big Blast’.
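The core of that distance check can be sketched outside of Unity. The following plain-JavaScript version is a simplified stand-in for the C# script: the threshold value and the return labels other than ‘Big Blast’ are assumptions, and in Unity the positions would come from each tracked image target’s transform.

```javascript
// Plain-JavaScript sketch of the distance logic in our C# script.
// THRESHOLD and the non-"Big Blast" labels are assumptions; in Unity,
// positions come from each tracked image target's transform.
const THRESHOLD = 0.3; // assumed maximum distance (scene units)

function dist(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// targets: positions of the image targets currently being tracked
function pickEffect(targets) {
  const close = (a, b) => dist(a, b) < THRESHOLD;
  if (targets.length >= 3 && close(targets[0], targets[1]) &&
      close(targets[1], targets[2]) && close(targets[0], targets[2])) {
    return "Big Blast";   // all three powers combined
  }
  if (targets.length === 2 && close(targets[0], targets[1])) {
    return "combined";    // a pairwise power combination
  }
  return "solo";          // lone tattoo: weak particle sparkle only
}
```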
We created a brief narrative surrounding the installation to tie the visual elements and artwork together. In the future we would like to do some more character building by creating character models and personas that associate with the different tattoo types.
In this installation users are drawn in by the narrative video and encouraged to take on a character persona. Each character has a specific tattoo associated with a unique elemental super power. Users can apply the tattoo of their choice to their hands, and finally approach the plinth to find two other users with different tattoo markings. By connecting with other users at the plinth, each user can activate the full potential of their powers. As they combine tattoos with others, they will unlock new, stronger powers with greater effects. In the future we hope to incorporate what we have done with this project into a gameplay setting. This could include anything from a VR game environment to a tabletop board game; ideation is still in the works. There’s potential for this to grow as a commercial game and ARG, along the lines of trading card games and LARPing.
In today’s modern age, mobile games are everywhere on everyone’s handheld device. Platform games and puzzles are among the most popular, but it’s quiz games that are beginning to take the world by storm. One game in particular, HQ, is growing increasingly popular. The game works by connecting millions of players worldwide into one huge quiz game where 12 questions are asked and a process of elimination takes place: players get booted from the game once they answer a question incorrectly. Although the biggest incentive to play is the huge cash prize that can be won by reaching the final question successfully, the extreme randomness of the game is incredibly entertaining as well. The questions begin with simple knowledge tests like “what colour do red and blue make?” but end with extremely complicated ones that no one with an average knowledge of the world would know. The thrill of racing against a clock to win a prize is an addictive experience, and I therefore wanted to create a game that would provide that, but also follow a theme.
The original project featured untimed questions about the 80s, which was fun, but it didn’t have the competitive edge that great quiz games do. The era I resonate with most is the 90s – the style, the shows, and the toys and games are all so intriguing to me. As well, most of my classmates were either born in the 90s or grew up in them, so the nostalgia factor would be enticing too.
Answer button, buzzer, name input
The players log onto the game server by opening one of the four buzzer html pages. Each has a different button design to create a more personalized experience. Once there, the players can enter their name into a box and claim their spot in the game.
Their points will automatically be set to 0 and will only be able to increase if the host sees it fit to award them with any. When the game begins, the first question will pop up. If the player knows the answer, they “smash” their buzzer. A sound will play and the player will be able to answer the question verbally.
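What a buzzer page publishes can be sketched roughly as follows. The channel name and message fields here are assumptions rather than the project’s actual values; in the real page, the returned object would be handed to the PubNub JavaScript SDK’s publish call.

```javascript
// Hedged sketch of what one buzzer page might publish on a buzz.
// The channel name and message fields are assumptions, not the
// project's actual values.
function buzzMessage(playerName, buttonId) {
  return {
    channel: "quiz-game",   // assumed channel name
    message: {
      type: "buzz",
      player: playerName,   // the name typed into the box
      button: buttonId,     // which of the four buzzer pages
      time: Date.now(),     // lets the host order near-simultaneous buzzes
    },
  };
}

const payload = buzzMessage("Alex", 2);
// In the real page this would be handed to the PubNub SDK:
// pubnub.publish(payload, status => { /* play the buzz sound */ });
```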
Timer, points, changing question slides, resetting the game
The host controls almost everything in the game. When a player answers a question correctly, the host awards a point. The host also controls the timer when a player buzzes in, and controls the slides as well.
The question slides simply had the questions on them, and when the host is ready to move on, they simply press “Next” on their side and the slide changes.
While the code worked perfectly separately, it was when I tried to tie it all together with PubNub that things started to go awry. I faced many challenges, most of them still unfixed. One challenge with the player controls was managing the game based on who buzzed in first. It was difficult to make the first button hit lock out the others, so I worked around it by having each button press send a message to the host’s console telling them who hit it first.
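The lock-out behaviour I was aiming for can be sketched as a tiny piece of host-side state, assuming buzz messages arrive one at a time:

```javascript
// Minimal sketch of the "first buzz locks out the rest" behaviour,
// assuming buzz messages reach the host one at a time.
function makeRound() {
  return { lockedBy: null };
}

// Returns true only for the first buzz of the round.
function tryBuzz(round, playerName) {
  if (round.lockedBy !== null) return false; // someone already buzzed
  round.lockedBy = playerName;
  return true;
}

const round = makeRound();
tryBuzz(round, "Sam");   // first buzz: Sam gets to answer
tryBuzz(round, "Priya"); // too late: ignored
```

Resetting for the next question would just be a fresh `makeRound()` call on the host side.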
Another challenge I had was being able to control the timer from the player’s side. I wanted it to be so that when the player buzzed in, the 30 second timer would start counting down but I found that to be a difficult task. I instead opted to add it to the host controls and while I still don’t have it working properly, I believe it would be better for the host of the game to have control over that instead.
The last issue I had was getting the slides to work. This issue also occurred in the original project, where pressing next did not change the slide and, in fact, the slides did not show at all. The “loading” message stayed on the screen the entire time and the question slides never appeared. I therefore had to resort to using PowerPoint to present the slides, as I wasn’t able to use p5 and PubNub to control them.
Illustrator and Photoshop
The first thing I did was create the slides and button jpegs. When I work, I often like to have my visual components done first and then worry about the technical side, as the aesthetics and theme inspire me. At first, I wanted to create the jpegs in Illustrator but soon realized it would be harder than I thought to create good images, and that it would be much easier to stick with Photoshop. For the buttons I was inspired by the ones used in game shows to buzz in answers: large buttons on square stands that you slam down to activate. I drew one button and came up with as many 90s-like colour schemes as possible, switching them up for each button. This was a huge upgrade from the controllers of the old game, which weren’t as aesthetically pleasing and didn’t really go with the theme. As well, since these buttons were created based on feedback I got from the other project, I believe they are as well made and user-friendly as they can be.
For the slides, I basically copied the format I used for the first version of the game. The feedback I received on that game was that my colour schemes and aesthetic choices were mostly 90’s centric instead of 80’s. I was therefore able to just use my old slides and change the questions.
Research/ PubNub/ P5
Unfortunately on the day of presentations I came down with a stomach sickness and could not attend. Based on the critique I got from the first project though, I changed a lot of things to improve it. For the most part I am proud of how it turned out in the end and hope to somewhat get it working during the summer.
For project four we decided to revisit project two’s theme of Narrative Spaces where we had created an interactive murder mystery based around a soundscape environment. We had learned that by not providing a clear plot summary or scenario introduction most participants were lost and were reliant on us for help. The murder mystery itself was solely reliant on finding evidence through audio so there was a limit on the potential for interactivity. Based on the feedback given for that project we sought to expand upon the narrative space concept by creating opportunities for more interactivity within a self-reliant closed environment.
We thought of the concept of an “Art Heist” as a way to build upon the murder mystery and transition into an escape room. Much like a murder mystery, a small group would interact within a curated environment with a goal in mind. For the interactive elements we relied on Arduino to create a keypad-based safe, a potentiometer-based safe, and a laser security system based around flex sensors. Much like project two, for the soundscape elements we developed a script and recorded scenes that would be triggered when participants touched specific objects within the room. To allow the participants to be self-reliant within the closed environment, upon entry they were given paper clues and played an audio introduction explaining the plot, the goal, the rules, and the tools they were provided with. The art gallery environment itself was created to replicate a private gallery focused on Renaissance art. This included Renaissance paintings with invisible ink clues drawn all over them, which could be found using a black light, as well as miniature statues on plinths. The participants are tasked with finding a specific art piece hidden within a locked box protected by flex sensor lasers. Like the murder mystery project, they use a conductive glove to trigger scenes based around “memories” captured within the room, which give clues to the correct paintings and the numbers needed to crack a potentiometer safe holding the key they require. Below is the detailed sequence of events showcasing all the moving parts within the escape room.
Sequence of Events:
1) Enter room and listen to the mission brief (max) and receive an envelope containing
– Brochure of art pieces (clue)
– Security Memo clue for keypad (aid)
2) Timer starts with music.
3) Find number code for the security keypad in the security memo.
4) Disable security keypad (arduino) and find black light inside. (The black light is needed to identify painting clues)
5) Find 3 audio memory clues (max), each leading to a single painting.
– School of Athens painting (clue)
– Christ among Doctors painting (clue)
– Niccolò Mauruzi da Tolentino at the Battle of San Romano painting (clue)
6) Search each painting with black light for a potentiometer number clue.
7) Crack potentiometer safe (arduino) with the 3 numbers found in the right sequence.
8) Get key inside potentiometer safe.
9) Open masterpiece lockbox with key, avoid laser security flex sensors (arduino).
10) Leave room with masterpiece before time runs out. (20 mins).
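As a rough illustration, the potentiometer safe’s sequence check (step 7) might look like this. The actual prop ran on an Arduino; the combination, the tolerance, and resetting progress on a wrong number are all assumptions here, sketched in JavaScript.

```javascript
// Hypothetical sketch of the potentiometer safe's sequence check.
// The combination, tolerance, and reset-on-error behaviour are assumptions.
const CODE = [12, 47, 88];  // assumed three-number combination
const TOLERANCE = 2;        // assumed wiggle room on the dial reading

function makeSafe() {
  return { progress: 0, open: false };
}

function enterNumber(safe, value) {
  if (Math.abs(value - CODE[safe.progress]) <= TOLERANCE) {
    safe.progress += 1;
    if (safe.progress === CODE.length) safe.open = true;
  } else {
    safe.progress = 0; // wrong number: start the sequence over
  }
  return safe;
}

const safe = makeSafe();
[12, 47, 88].forEach(n => enterNumber(safe, n)); // correct order opens it
```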
After settling upon the idea of an art heist, we began conceptualizing our three puzzles. As we had relied mainly on tinfoil circuits in our previous studio murder mystery project, we wanted to create more elaborate and complex puzzles that showcased the creative potential of the Arduino. Keeping with the art heist theme, we bounced around several ideas, from giving the players a codex to decipher a code found in the paintings to blacking out certain words in a book to provide a password. After much brainstorming and deliberation, we settled upon a potentiometer safe, a keypad, and flex sensor trip wires: they were achievable within the time we were given, we had a relatively good idea of how each puzzle would function in terms of sensors and circuitry, and they fit the theme of the art heist fairly well.
Studio recording for mission brief and audio scenes
3D printed statues & 3D modelled lock box. Discussion of keypad code.
Working on potentiometer, development of keypad lock.
Setup and testing of flex sensor lasers.
Audio editing for timer sound effect, incomplete potentiometer lockbox.
Invisible ink testing, and art gallery installation.
Preparation for gallery show.
Collaboration notes discussing traps within room, and initial room layout.
The voice acting was all recorded in the audio lab with a condenser mic using Adobe Audition. To edit the audio, I exported the session into Logic Pro. Since the scenes were recorded per person rather than in line order, I pieced each person's dialogue lines together into each scene. I cleaned up the audio using EQ and vocal effects, and added in sounds from The Legend of Zelda and Super Mario, telephone sounds, and other sounds available in the Max patch.
The Max patch is a set of toggles, one per scene, each of which plays its audio file when triggered; it's essentially a simplified DJ sampler.
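Functionally, the patch just maps each toggle to one audio file and plays it on demand. A toy Python equivalent of that sampler behaviour (the scene names and filenames below are made up; the real version was a visual Max patch, not code):

```python
# Hypothetical scene-to-file mapping; in the patch, each toggle gates one file.
SCENES = {
    "mission_brief": "mission_brief.wav",
    "memory_athens": "memory_athens.wav",
    "memory_doctors": "memory_doctors.wav",
}

def trigger(scene, playing=None):
    """Toggle a scene on: return the updated set of files that should play."""
    playing = dict(playing or {})
    playing[scene] = SCENES[scene]
    return playing

state = trigger("mission_brief")
print(state["mission_brief"])  # mission_brief.wav
```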
Opening the final chest is a fairly simple task. The chest's keyhole is surrounded by a series of five red threads, which act as “laser beam” sensors like one might see in a spy movie. One end of each thread is secured to a wall and the other is attached to one of five flex sensors. Over half of each sensor is taped flat against the wall opposite the thread's far end so that the threads stay taut while bending the sensors as little as possible.
The flex sensors are each attached to an analog input of an Arduino through a 10K resistor. The Arduino is powered by a laptop running serial control and sends the five values as a string to p5, which interprets them and assigns them to five variables. The p5 code uses the sound library and has a function that plays an alarm sound file whenever a sensor reading goes beyond a designated threshold. It also draws a small display on the canvas to help visualize the values the flex sensors are reading and to set the thresholds at which the alarm starts playing.
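The Arduino-to-p5 handoff described above is just a comma-separated line of five analog readings, which the receiving side splits, converts, and compares against a threshold. A rough Python model of both ends (the threshold value and packet format are assumptions; the real receiver was p5 with its serial and sound libraries):

```python
TRIP_THRESHOLD = 600  # assumption: tuned per sensor within the 0-1023 analog range

def make_packet(readings):
    """Arduino side: serialize five analog reads into one line."""
    return ",".join(str(r) for r in readings)

def parse_packet(line):
    """p5 side: split the line back into integer values."""
    return [int(v) for v in line.strip().split(",")]

def alarm_tripped(values, threshold=TRIP_THRESHOLD):
    """True when any thread has bent its sensor past the threshold."""
    return any(v > threshold for v in values)

packet = make_packet([512, 498, 530, 720, 505])  # fourth thread disturbed
print(alarm_tripped(parse_packet(packet)))  # True -> play the alarm sound
```

Sending all five values in one line keeps the readings synchronized, so a single comparison loop on the p5 side covers every thread at once.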
Puzzle Three Code and 3D Model CAD files:
Puzzle Three Circuit:
The potentiometer box was designed with non-destructive modelling in Fusion360. When designing the mechanism, we calculated the dimensions of the base of the Thinker statue, the space it needed in order to slide and reveal the hole, and the distance between the motor's shaft and the inside wall. That way, everything moved smoothly without the statue falling off, the hole being too small, or there being too little room to store the Arduino components and power.
The box can be disassembled into four 3D printed parts. The first is the base, a 165-millimetre cube with three holes in the front. Beside each hole is a small slot for the potentiometers to hook onto so that they do not rotate when being turned.
The second component is a platform that goes into the box first. It is as tall as the servo is deep. The purpose of it is to create a platform for the chest’s key to be placed on as high as possible, to both provide room to hide the Arduino components, as well as make the key reachable since the hole on top is rather small for a whole hand to fit into. It also has a small section cut out in the back so that it can go around where the motor mount attaches.
The third component is the servo mount. The servo goes in after the platform and slides snugly into a dovetail slot. The reason the servo goes in after the platform, rather than being printed as part of the base box, is that the platform would otherwise need a large hole to fit around the mount, exposing the Arduino and wiring or letting the key fall where the players cannot reach it. The mount also has a small cylinder that fits through one of the servo's screw holes so that the motor doesn't slide out of the exposed front. The front is left exposed so that the wires can fit in while the servo slides in from the top.
Finally, the lid has three main aspects. It is the same width and length as the box and has an inset perimeter ridge along the bottom so that it does not slide off when the motor turns. The top of the lid has two holes: one for the statue to cover where the key is visible, and a second fitted for the servo to poke out and turn the statue.
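Putting the potentiometer safe together: the three dials must each land on the number found in a painting before the servo turns the statue aside to reveal the key. A simplified Python model of that check (the combination, dial count, and servo angles are all assumptions for illustration; the real version ran on the Arduino):

```python
TARGETS = [3, 7, 5]  # hypothetical combination taken from the painting clues

def dial_value(analog_reading, positions=10):
    """Map a 0-1023 pot reading onto one of ten dial positions."""
    return min(analog_reading * positions // 1024, positions - 1)

def combination_correct(readings, targets=TARGETS):
    return [dial_value(r) for r in readings] == targets

def servo_angle(unlocked):
    """0 deg keeps the statue over the hole; 90 deg slides it clear."""
    return 90 if unlocked else 0

# Readings near the centre of each target position on the three pots.
readings = [3 * 1024 // 10 + 5, 7 * 1024 // 10 + 5, 5 * 1024 // 10 + 5]
print(servo_angle(combination_correct(readings)))  # 90 -> key revealed
```

Bucketing the raw analog reading into discrete dial positions gives each number a wide tolerance band, so players don't need to hit an exact resistance value to crack the safe.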