Final Videos




[soundcloud url="https://api.soundcloud.com/tracks/317391488?secret_token=s-jdgPR" params="color=ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="166" iframe="true" /]

[soundcloud url="https://api.soundcloud.com/tracks/317391466?secret_token=s-YYDHb" params="color=ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="166" iframe="true" /]

Enoch took the lead on the audio production. The original output of our data-to-sound experiment was a high-pitched, repetitive sound file. It was not pleasant to listen to, and it caused general anxiety.

Using a program called Audacity, Enoch modified and stylized the unique and exotic sound into something even more distinctive, but this time a sound that was pleasant to listen to. The strange artifacts that came out of the .RAW transition into audio remain as a strange chorus of echoed tones that repeat at an unpredictable rate.
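The data-to-sound step itself is simple to sketch: any file’s bytes can be reinterpreted as 16-bit PCM samples and wrapped in a WAV header. The following Python snippet is only an illustration of the technique, not our actual pipeline; the filenames and sample rate are placeholders.

```python
import wave

def raw_to_wav(raw_path, wav_path, sample_rate=44100):
    """Reinterpret arbitrary raw bytes as 16-bit mono PCM audio."""
    with open(raw_path, "rb") as f:
        data = f.read()
    data = data[: len(data) - (len(data) % 2)]  # trim to whole 16-bit samples
    with wave.open(wav_path, "wb") as w:
        w.setnchannels(1)           # mono
        w.setsampwidth(2)           # 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes(data)

# Any file at all becomes "audio" this way; here we fabricate one
with open("capture.raw", "wb") as f:
    f.write(bytes(range(256)) * 100)
raw_to_wav("capture.raw", "capture.wav")
```

Because the bytes were never meant to be samples, the result is exactly the kind of high-pitched, repetitive noise described above.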

The final audio file was passed on to create the final videos.

Polar Coordinate Noise

Extrapolating our material further, we modified our captured image into a polar ring. This ring was the inspiration for investigating the audio qualities of our data. We began to think about the potential outputs of this experiment and settled on wanting to create a ‘track’.
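The polar-ring transform can be sketched as an inverse mapping: for each output pixel, compute its radius and angle and sample the source image accordingly (rows become radius, columns become angle). This NumPy version is a rough illustration of the idea, not the tool we actually used; sizes are arbitrary.

```python
import numpy as np

def to_polar_ring(img, out_size=200):
    """Wrap a rectangular grayscale image into a polar ring:
    image rows map to radius, image columns map to angle."""
    h, w = img.shape[:2]
    cy = cx = out_size / 2.0
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    dy, dx = ys - cy, xs - cx
    r = np.hypot(dx, dy)                       # radius of each output pixel
    theta = np.arctan2(dy, dx)                 # angle in [-pi, pi]
    src_row = np.clip(r / (out_size / 2.0) * (h - 1), 0, h - 1).astype(int)
    src_col = ((theta + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    return img[src_row, src_col]

# A vertical gradient wraps into concentric rings
img = np.tile(np.arange(10, dtype=np.uint8)[:, None], (1, 20))
ring = to_polar_ring(img)
```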

We also concluded that whatever we created, it would not simply be a single-channel video performance. It had to be more engaging, and it had to explore more aspects of this ‘data shredding’.

Raw Motion Capture Noise

After the motion capture stage, it occurred to us that the data for the recorded motion was kept in a .RAW file. A .RAW file can be opened in most programs, and it is one of the most uniformly accepted filetypes across different media. In the digital environment, a .RAW file is the closest thing to a standardized ‘material’ or ‘medium’ that one can get.

.RAW files can be interpreted as:
– an image
– an audio file
– a dataset (called Raw Data)
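Reading the same bytes as an image is essentially a reshape: pick a width, and every `width` bytes become one row of grayscale pixels. A minimal Python sketch (the filename and width are illustrative, not from our project):

```python
import numpy as np

def raw_to_image(raw_path, width=512):
    """Interpret raw bytes as an 8-bit grayscale image of a chosen width."""
    data = np.fromfile(raw_path, dtype=np.uint8)
    rows = len(data) // width               # drop any trailing partial row
    return data[: rows * width].reshape(rows, width)

# Illustrative stand-in for a capture file
with open("mocap.raw", "wb") as f:
    f.write(bytes(range(256)) * 10)
img = raw_to_image("mocap.raw")
```

Changing the assumed width changes the picture entirely, which is part of why the same file can yield such different ‘materials’.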

Using this method we could extract our original performance into a digital ‘material’. Through exploration we found that the .RAW file, expressed as an image, looked like this:

Proposed Vinyl Cover

Original Greenscreen Footage

This project began as an experiment in using a green screen to key out aspects of our re-enactment of the source footage. This starting point would become the standardized footage around which the rest of the project’s elements would revolve.

The green screen capture was taken in 100 McCaul; actors Enoch and Janelle of our own group played the two characters from the original film in this intimate scene.

This performance was later recorded with motion capture technology, and the physical performance was kept as an aspect of our final piece.

Our project was an experiment in tearing apart the source footage in various ways, like putting it through a grinder, then trying to put it all back together again.

Re-Embody: Nike Basketball Commercial

Members – DO, Arslan, Karolina, Rohit



Project Description
Our team decided to recreate a Nike basketball commercial, specifically a basketball shoe advertisement. The reason behind choosing this particular video is that it fits nicely with the overall theme of the project: exploration of human body interaction. We all agreed that sports involve lots of dynamic gestures and body movement, and often require interaction with other players. Two key features of this advertisement made it a good fit for the project. First, it is easy to closely examine the body movement of the characters in the video; second, the movements shown are feasible to capture, analyse, interpret, and reproduce.



Process Breakdown


At first we thought about the best way to show the movement of a body, and in what kinds of situations we could see it expressed most clearly. That is how we ended up searching for sports ads and looking more closely at the movement of different bodies within this area. We discovered a few great examples that not only show people in action but also pay specific attention to small details and movements that are usually lost while watching sports. In the end we found that Nike ads take the most artistic approach to the problem, with some great examples of amazing movement in a pretty common activity: playing basketball.



What we found rather difficult about scriptwriting was the fact that our footage does not have a single line of dialogue between members of the team, so we ended up generously describing the very dynamic scenes of people playing basketball. There is not much to say about this process: we analyzed the footage and then described it as best we could.

Storyboarding was another matter. Since our whole piece takes place in one location and all the different angles are very dynamic, we decided at the beginning to simplify our own re-capture into scenes that were much easier to shoot. The storyboard is simple to read and follow, which was our goal: our own footage was supposed to be simple and easy to follow.



We explored the technologies that we could use to recreate this ad: 3D scanning, photogrammetry, motion capture, green screen, and stop motion. We used a Kinect and Skanect for 3D scanning; photogrammetry was done using Trnio; we captured motion through Perception Neuron, Motionbuilder, and Axis Neuron; green screen keying was done using After Effects CC; and stop motion was created using Premiere Pro.

After exploring all the technology, we found 3D scanning and Perception Neuron the most interesting. We thought that photogrammetry is still in its early stages and would require tremendous work to embed into videos, though it is good for generating quick 3D models without any extensive setup. For the final video, however, we decided to use the green screen capture method, as it gave us the flexibility to work on effects more dynamically.


Green screen is the main technology for our final footage. In the original ad, there are scenes where the characters dissolve into particle dust. One thing we didn’t expect, however, was that we couldn’t really replicate the original scenes due to the size of the studio: it was too small for us to mimic the scenes with lots of movement, e.g. the drive-in scene and the dunk-over-the-player scene. Eventually, we changed our plan and decided to film most of the scenes on an actual basketball court, adding the particular scene where players dissolve into particle dust using the chroma key technique.

The essential thing we learned from using a green screen is that we need proper lighting and exposure in order to get clean results from the capture. We had to go through several takes to get the exposure and lights right. Sometimes the green screen has folds in it, which can cause footage to go to waste because keying doesn’t work properly against an uneven background.
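The keying itself comes down to deciding, per pixel, whether the backdrop’s green dominates; folds and shadows push pixels below the threshold, which is why an uneven screen keys badly. Here is a toy NumPy sketch of the idea (the margin value is arbitrary, and After Effects’ keyer is far more sophisticated):

```python
import numpy as np

def chroma_key(frame, bg, green_margin=40):
    """Replace strongly green pixels in `frame` with pixels from `bg`.
    A pixel is keyed out when its green channel exceeds the larger of
    red and blue by more than `green_margin`."""
    f = frame.astype(int)  # avoid uint8 underflow in the subtraction
    mask = (f[..., 1] - np.maximum(f[..., 0], f[..., 2])) > green_margin
    out = frame.copy()
    out[mask] = bg[mask]
    return out

# Tiny synthetic example: a 2x2 RGB frame, half backdrop, half subject
frame = np.array([[[0, 255, 0], [200, 50, 50]],
                  [[10, 240, 20], [30, 30, 30]]], dtype=np.uint8)
bg = np.full((2, 2, 3), 128, dtype=np.uint8)
result = chroma_key(frame, bg)
```

A fold in the screen lowers the green value of its pixels, drops them below the margin, and leaves them un-keyed, exactly the artifact we kept running into.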



In this stage, our team had to retake most of the scenes, mainly due to the unexpected challenges we faced when shooting clips in a studio. As briefly mentioned in the stage 3 document, we filmed the whole scene on the basketball court. We also slightly modified the style of the video composition. First, we filmed more footage from more diverse camera angles, including detailed close-up shots of the team members acting in the footage. Instead of compositing the footage as a single video clip, we interleaved the two complete sets of composed clips in chronological order to make the overall video look more dynamic. Logic Pro X was used to make the background music. We decided to use music with no background noise; this way we could avoid any recording on set while filming running shots, etc.



We used After Effects CC to convert (key) the green screen footage and to apply the scatter effect. Keying the green screen footage was easy, but it was very complicated to combine it with the raw footage of the basketball court and other shots. Some of the reasons include differences in the resolution of the footage, recording techniques (with tripod/without tripod), stabilization, exposure, etc. We also used Premiere Pro CC to make the final video. Premiere Pro was fairly easy to use and offered a wide range of film-editing options. We worked in After Effects to create the visual effects, exporting to Premiere Pro to edit and finalize the video.



There are many ways to capture or recreate a film. Given the time, we chose green screen as our option. Green screen has its own advantages and disadvantages: the advantage is that it is easy to add effects later, but the disadvantage is that the capture process needs to be executed properly. Having a screenplay and storyboard certainly helps in the green screen capture process. The camera also plays a major role: the expectations for the video cannot be reached without access to proper lenses, especially for close-up shots. An experienced editor may be able to fix things or add more effects, but we are just beginners. This was an opportunity for us to explore technology, and we still have a lot more to learn.


Links to resources & references



KD 9 Nike Advertisement: https://www.youtube.com/watch?v=4naT8ckkj5Y



Disintegration Effect: https://www.youtube.com/watch?v=C913enLWYxE



If we had more time, we would learn to use lights and green screen properly, and re-do the capture. To take this project further, we would like to explore Motion Capture methods more extensively and utilize them to digitize the motion in our piece, especially since there was such an emphasis on motion in our source video. We could look into perhaps re-embodying the piece entirely using digital models and the Perception Neuron technology.




Re-Embody: Final Translation and Assemblage

By: Erica Park, Quinn Shoreman, and Pedro Betti


Link To Video


Original Video




Process work


Project description

Using green screen and motion capture, our group re-created the ending scene of the film Submarine. We wanted to send a strong message to the viewer, just as Submarine’s ending does with the heartfelt scene of Oliver following Jordana up the shore. We thought the scene needed a little more romance, so we had our actors hold hands at the end of the scene instead of just standing next to each other; by holding hands, they send a message about how the characters are connected. Originally we tried to create the same scene using different media, combining green screen and motion capture with 3D models, but because of the motion capture results, we shifted our idea toward a colder message for the motion capture scene.


Process Work (Pedro Betti)

Steps Taken for Final Scene Wild Card and Motion Capture work


Using and translating motion capture.

This was the first step of the wild card and our motion capture animation: translating our motion capture work through Axis Neuron, which gave us the files necessary to create a universal animation file that can be applied to any humanoid skeleton.


Making the Final Scene Background

For our wild card, before we could proceed with the animation, the background needed to be created according to the green screen picture. Using Unity 3D and its basic assets, I updated and re-skinned all of our 3D scan objects and used them to build our background beach. Unity 3D allows for both animated and static objects, so translating animation to the water and environment was quite challenging but ultimately very satisfying.


Importing Animation, Rigging, and Final Animation Refinements

The third step of our last scene was the most time-consuming part of this scene build. The animation imported from Axis Neuron was very choppy and didn’t look polished after translation, so applying it to the humanoid skeletons involved a large amount of cutting and pasting to make the animation fluid and good-looking. Once all the animation was refined and imported, rigging and creating the humans in the scene was fairly simple; again I used a mixture of Unity 3D assets and our own models. After the characters were made, I spent a long time on the rigging and animation just getting the colliders for the ground and the humans to work with each other, and then proceeded to work with the new Unity 3D animation software and programming. After quite some time it looked like a near-exact translation of what the original motion capture recording was meant to look like. At the very end I recorded several scenes and several ways for the characters to meet and hold hands, but ultimately only one option came out looking better than the others.


Recording, After Effects, CC Scatterize  

The final part of the project was the recording and the wild card scene’s special effects. This was achieved by recording the scene animation from Unity, bringing it into After Effects, and blending the many takes into a smooth video of the two characters disintegrating on screen. For the final scene and character disintegration I applied CC Scatterize, which let me control both how much and how long the disintegration happens. Once that was complete, the rest was a lot of cutting and editing video, and the final product was a perfect scene in which the two characters walk next to each other and, after a few seconds, disintegrate into the air on a deserted beach at night.
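At its core, a scatter effect displaces pixels by random offsets that grow over time. A rough NumPy stand-in for the principle (this is an illustration, not the actual CC Scatterize plug-in):

```python
import numpy as np

def scatter(frame, amount, rng):
    """Displace each pixel by a random offset of up to `amount` pixels,
    a rough stand-in for a scatter/disintegration effect."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dy = rng.integers(-amount, amount + 1, size=(h, w))
    dx = rng.integers(-amount, amount + 1, size=(h, w))
    sy = np.clip(ys + dy, 0, h - 1)  # keep source coordinates in bounds
    sx = np.clip(xs + dx, 0, w - 1)
    return frame[sy, sx]

rng = np.random.default_rng(0)
frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
# Ramping `amount` up over successive frames produces the disintegration
frames = [scatter(frame, a, rng) for a in (0, 2, 4)]
```

Animating the amount from zero upward is what makes the characters appear to break apart into the air.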


Process Work (Quinn Shoreman)

Recreating the ending scene from Submarine was difficult but rewarding, and the closest we were able to get was through the use of green screen. As I had never used a green screen before, it was a challenging task at first; having no prior knowledge made for quite a steep learning curve, but it became easier over time. Making sure the green screen was completely cropped out while the characters remained fully visible was difficult at times, since the green screen we used was quite wrinkly. However, with minor tweaking of the levels I was able to remove it completely. Other than that it was a fairly simple task, just time-consuming: having to do every scene one by one made the process take much longer. Adding in the water and moon was another somewhat difficult task, since I had to make them look in proportion to the rest of the scene, which was solved by resizing. All in all, doing the green screen work and editing it in Premiere was a challenge at first but became easier as I grew more comfortable with the software.


Process Work (Erica Park)

I worked on the final edits, combining the green screen footage and the motion capture with 3D models. Using the opacity options in After Effects, I faded the transitions in and out while matching them to the beat of the music in the video; some transitions last longer than others to match the tempo. I had some trouble with both sets of footage: because my computer could not open .aep files sent from Windows computers, everything had to be sent as a .mov or .mp4 file for me to open in After Effects, which made editing both sets of footage harder.


Resources For Green Screen:


Water for Green Screen



Moon for Green Screen




Don’t Wait for Me

By Quinn Shoreman, and Paul Ashkenas


Future Step

If we had had proper footage of the ocean, our piece could have turned out better. It was unfortunate that the green screen curtains had some wrinkles, because the footage we took was not of the best quality. Given more time on this project, we would seek out higher-quality materials to produce the best version of our film. If we were to scale up from here, we would probably use different media to produce the same scene for comparison; animation or claymation would be another great way to produce the Submarine ending scene.

They Live

Link to post


ReEmbody: Panda Cheese Final Translation and Assemblage

Re-Embody Documentation Blog Post

Group members: Carolina, Caroline, David, Bernice

Project Description – what you created and why

We created a short video based on the Panda Cheese office commercial to explore the motion and body language it employs. By reenacting the original commercial and using various capture technologies such as motion capture and 3D scanning, we wanted to convey the nonverbal tension and communication between the two main characters (the panda and Wyatt) in our video.


Photographs, video, screen capture of results (whatever visual means is appropriate for your results)


Video link: https://vimeo.com/212870240

Summary of process – both capture & translation

After trying out all the different capture methods, we decided to work mainly with chroma keying, as we felt we would have just enough time to finish a polished output with that method. By comparison, motion capture took too long to clean up, and photogrammetry and 3D scanning produced outputs that took equally long to sharpen up. The 2D animation method (which we chose for our wild card) also took too long to produce usable assets and composite.


Capturing footage for this method required setting up the green screen as well as the camera. We shot our footage according to the shot list we established at the start of the project.


Translation required keying out the green screen and substituting 2D animations, as well as stitching together clean audio for the final output. Polishing involved colour correcting our footage.


Links to resources & references

-Panda Cheese commercial: https://www.youtube.com/watch?v=aW3mJf-sFko


Future steps – how would you scale up from here?

We would focus on creating more interesting material to key in, as well as thinking of other elements we could potentially composite into the video. Instead of restricting ourselves mainly to 2D animations, we would attempt to mix in capture results from the other methods, namely better-rendered photogrammetry results and potentially results collected from the Perception Neuron suits.

Group One – Sims – Re-Embody

Alessia, Luke, Miranda, Sunny

Project Description

Our group wanted to explore The Sims as our origin piece because of the movement used within the game. Sims are characters in a simulation world that is meant to mimic human life, including human movement. What results is actually overdramatized, caricature-like expression and movement rather than natural human motion. We thought this would be really interesting to reenact, because we would be humans acting inhumanly, based on movement that was meant to be humanlike.

As a result, we created a video depicting our group acting as the characters of a Sims 4 Let’s Play video, which is basically a gameplay video including voiceover from the person playing the game. We decided to include UI elements to really ground the visuals in a gamelike experience, and we filmed the scenes in two different ways: once using Sim movements, and once using the Sim actions as guides but moving more naturally, the way we would in real life.

We also decided to take this “Let’s Play” environment one step further and use a YouTube page as our delivery method. This allows for an experience similar to watching an actual gameplay video, only you’re seeing humans reenacting gameplay instead. This was the most impactful delivery method for us, because it allowed for a fuller individual experience of the work, the same way one would experience the source material.

Final Results



Photos of Process



Summary of Process

We used chroma keying in our final result, as well as 3D models inserted into the scene to stand in for assets that weren’t easily duplicated as props. We filmed the scene multiple times from different angles and with two different types of movement (natural and unnatural) to allow a full comparison of Sims versus human movement. We also treated the in-game “pauses” as real-time pauses, with each actor freezing in place to give the illusion of being frozen in time. Additionally, we took pictures of each shot without any actors or props to fill in the chroma-keyed areas; each of these was edited in Photoshop to match up with the film itself.

The voiceover from the “player” was added in post, as were UI elements taken from the source footage itself. We then created a YouTube page to house our final product.

Links to resources & references

https://www.youtube.com/watch?v=7UpzRpKRMJs&feature=youtu.be – Initial source material

www.tf3dm.com – For the toilet model

Future steps

Future steps could include further work on UI elements, including the Sims’ heads when the player makes an action. There could also be more filming angles, to include the gradual turning of the player’s 360-degree camera. It would also be a bonus to be able to film in a real bathroom and compare that to a CGI version!

A different direction would be a full motion capture of the entire scene with all three characters, which could then be rigged with 3D models of our characters to translate the project back into digital form. This could be really interesting with both iterations of our scene: humans acting like Sims, and the natural-movement interpretations of the Sim actions.

Re-Embody: Life of Pi Flying Fish






For our final project we chose to recreate the flying fish scene from Life of Pi. We chose this clip because of its cinematographic beauty; we wanted to create a video with a strong graphic aesthetic, and the colours and details within this segment allowed us to play around with elements and create something a little different. Using motion capture gear and software, we reproduced the movements of both characters, and with the data we gathered we were able to create a completely animated version of the scene, shown as if the viewer is hovering above and looking down on the events unfolding below. In doing this we were not only able to recreate an accurate portrayal of the sequence, but were also able to show it from a completely different perspective, and in an alternate style from the original clip in Life of Pi.






Using Perception Neuron motion capture gear, the roles of both prominent characters within our source clip were reenacted by Mackenzie. In order to accurately capture every movement in the scene, each role was acted out slightly slower than it occurred in the movie, and each movement was somewhat exaggerated to ensure maximum detail. Each of these actions was recorded in motion capture software running on Windows. The files were then opened in Motionbuilder and doctored: the speed and orientation of each file was edited so that the characters’ movements made sense when incorporated into the same clip.

Mackenzie reenacts the role of Pi while wearing a Perception Neuron Motion Capture suit.

Mackenzie’s movements are captured and recorded by mo-cap software for Windows.

While the motion capture files were being edited, Maya was used to build 3D models for each character. For Pi’s character, a model of a basic male body form was found online; this was then opened in Maya, where the model was made skinnier and given pants and a turban.

Next up, it was time to create the model that would stand in for Richard Parker the tiger. From the outset of the project we had gone back and forth about how to represent the tiger. Since the motion data that we had for Richard Parker’s part was of a human simply crawling around on hands and knees, we knew it would be very difficult to rig a model of an actual tiger with the bone structure of a human while making it look natural. Instead, we decided to create a model of a human with the head of a tiger, and so we adapted the male form used for Pi and gave it the head of a beast. While it is obvious within the final clip that Richard Parker’s character is simply a human with a tiger’s head, we found that this actually addressed one of the key plot points of the story. Near the end of the story, it is brought into question whether the animals on the boat were actually real, or instead figments of Pi’s imagination representing the beastly personalities of the other people who may have been stranded alongside him. With this in mind, our depiction of Richard Parker as half-human, half-tiger becomes much more meaningful than if we had recreated the scene with a complete model of a tiger.

Once these 3D models were completed in Maya, they were imported into 3ds Max alongside the motion data for each character. Both models were rigged with a human bone structure, allowing them to move accurately with the motion data, and thus the basis for our project was complete.

Next, each of the characters with their corresponding motion clips was imported into After Effects, where all of the finishing touches were applied. We decided to depict the scene from a different perspective than the movie’s, looking down on the unfolding events from a fixed camera above. In doing this, we were able to create a very simple aesthetic for the entire clip. In Illustrator, a few more assets were created to complement the characters and fill out the scene: a boat, complete with canvas covering; a diamond of blue to represent water; small white circles to portray ripples within the water; small fish to embody the flying fish and big fish for the tuna; and a stick used by Pi to fend off Richard Parker. All of these assets were also imported into After Effects, and everything was placed and animated accordingly. Finally, sound clips from the movie and various other online sources were cut and mashed together, and the animated flying fish sequence from Life of Pi was complete.



Given more time with this project, no large changes would occur. Rather, the many hours spent refining the clip into its current form would turn into days of editing, all in the hope of creating a smoother, more accurate, and more detailed animation of the flying fish sequence. In the current version of the animation, all of the character motion remains exactly the way it was recorded by the sensors when Mackenzie reenacted each role, with the exception of the speed of each clip. Despite being surprisingly accurate already, some of the movement appears choppy, and at times limbs appear to move in impossible ways. Small inaccuracies like these can be edited in Motionbuilder; however, this is a very tedious and time-intensive process, so for the purposes of this project the files were left basically untouched. With some editing, the animation would appear much smoother, and the orientation of the characters could be refined throughout to more accurately represent the source footage.

Animating each asset in After Effects was another lengthy process that was cut slightly short by time constraints. With more time, all of the minuscule movements within the clip could be perfected, again giving the animation a higher level of accuracy and an overall smoother feel. Nevertheless, given that many of the skills required to complete this animation were learned in the process of its creation, the final output is still quite an achievement. Its many small inaccuracies and imperfections only add to the personality of this short but sweet re-embodiment of Life of Pi’s flying fish sequence.



Hugh, Pedro, Ethan, Miranda

Flask is an intelligent beverage container that interacts with the International Space Station. It allows the user to experience the excitement of space exploration through their bottle. Using the cloud network, Flask is constantly aware of the ISS’s location and relays that information to the user through an LED ring. We decided we wanted to make something fun, with a less serious tone than other projects we’ve undertaken in previous years. Thus, Flask was born. The idea connects two completely unrelated areas: beverage consumption and the International Space Station. It took shape as an intelligent bottle that displays the ISS’s location on a compass-like ring of lights. Anyone could use Flask, but it’s geared toward the “ThinkGeek” type of consumer.


User Testing Materials

NASA Flask parts list (count and cost per part):

- 3x Particle Photon Wifi Dev Module: $64.41
- 4x 4x6cm Prototyping Board: $6.67
- 4x NeoPixel Ring: $42.94
- 4x Plastic Flask: $40.00






Part 1


We began with the simple mathematics behind tracking the actual space station. From prior projects, we already knew of existing tools and technology that could help this project come together. From there we quickly decided on the form factor: a flask.

It was at this stage that we decided on using the Photon, using a bottle, and having ISS tracking in place as a core element of the tool.
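The tracking math itself is small: given the user’s position and the ISS’s current latitude and longitude (available from public feeds such as Open Notify), compute the great-circle bearing and light the nearest LED on the ring. The actual firmware runs on the Photon (see the GitHub repo); the sketch below is an illustrative Python version of the same math, with made-up example coordinates.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 toward point 2, in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def led_index(bearing, n_leds=16):
    """Map a bearing to the nearest pixel on an n-LED ring (index 0 = north)."""
    return round(bearing / (360.0 / n_leds)) % n_leds

# From a user at (0, 0), an ISS position due east sits a quarter turn around the ring
b = bearing_deg(0.0, 0.0, 0.0, 90.0)
idx = led_index(b)
```

On the bottle, the same two functions would run each time a fresh ISS position arrives from the cloud, updating which NeoPixel is lit.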

Part 2

16593600_10155034874849596_1975783129_o 16523208_10155034874949596_931137436_o 16523676_10155034875909596_399266324_o

The Photon proved frustrating to use as we got further into the code, but luckily the early decisions about the form factor allowed other members of the team to start working on different aspects of the project.

Initializing the Particle Photon involves a few elementary stages, and it is the first step in preparing for the Internet of Things.

Part 3


Development of the tool went quickly after the Photon microcontrollers arrived, and we got right to prototyping the bottle. Pictures from this stage are part of the Instructable on Nasaflask.



  • Fill canteens with water. Make sure the prototypes are in their casings.
  • Do you need extra batteries?
  • No, just bring the lithium ion battery.

Code:  https://github.com/litlw/nasaflask

In-Class User Testing – Testing By Classmates

  • Set up canteens, allow users to form their own first impressions on what the device appears to be/do.

Field Testing – You testing your own device.

  • https://goo.gl/forms/qnRjZmoo4pfV6E0q1



User testing our object was unique in that there is little input from the user; it is more of a static object that changes over time. I found that people were intrigued by the form as I used it, and wanted to know what it was. This curiosity is good and bad: while people wanted to know what it did, they couldn’t tell until I explained it to them. I think the learnability and accessibility of an object are important, and they were lacking in our product when we tested it. Stickers, instructions, or even logo imagery could have helped accessibility. Users enjoyed watching the lights change, but after the first couple of times they lost interest; adding an interactive feature could greatly reduce boredom and extend the lifespan of the product. Users suggested that the product be sold on ThinkGeek or similar sites. Our product concept was validated by the users; most of the critiques concerned our execution.



Testing periods: Saturday and Sunday, 1-9pm

Notes about Canteen user testing:

  • Seems much too large, not comfortable to carry around for long periods of time. Could be a household item, but still would be much better to make it a lot smaller.
  • Common consensus is that the idea is “neat”, “fun” and “entertaining”.
  • People liked the lights, wanted a more sleek design
  • It felt too fragile over long periods; I was always conscious of the fact that I had it with me, and worried that moving my bag a certain way would damage the electronics.
  • Most younger people liked the idea, as well as the older, “parent” aged group I asked. A lot of people said that it was the type of thing they’d see on ThinkGeek.
  • Most people said that the actual bottle wasn’t high enough quality to warrant a price that would cover our costs. Plastic just doesn’t translate into an idea of “high quality”, even though the plastic is thick.
  • The bottom compartment seems too squishy; it doesn’t stand up well on its own at all, which means you constantly have to lay it on its side, making it very difficult to see the light.
  • Most people had no idea what it did based on first appearances, and they all said that having more space-centered design on the bottle would help add to the “fun” factor as well as making the bottle more easily understood.


  • “As long as it happens semi-often it’s a cool idea”
  • “I like it as a reminder to drink water, when I’m so busy I forget.”
  • “It would definitely sell, people love that kind of shit now.”
  • “Super unique.”
  • “It’s fun! Wish it was a bit smaller.”
  • “It would need to come with instructions to explain how it works, because I have no clue what this thing does.”

General Consensus/Things Learned

  • The bottle is far too large, and the design of the device is too bulky.
  • Most people don’t want a plastic-based water bottle.
  • The idea intrigues many and most people that tested it out thought it was fun and interesting.
  • Best marketed towards the “ThinkGeek” consumer crowd.
  • Battery is seemingly sufficient


  • After the in-class user testing, our team looked at the Google Form findings and discussed possible changes we could make to the flask within our time limit.
  • After the field testing, our team came together to talk about our findings and again discussed which issues we could act on in the time left before final iterations.
  • We decided to add some of our Google Form data to our presentation, to help the class understand why some changes were made between the form they user-tested and the final form we presented.


Questions for testers:

  • When first presented, is it obvious what the object does?
  • Once in your hand is it intuitive to use?
  • Do you feel like you want to interact with the object? If not, why?
  • What context would you use this device in?
  • Is this device aesthetically pleasing?
  • How does the object make you feel when you use it?
  • What would you change about the device?
  • What appeals to you most about the device?
  • Would you purchase this product? What price point do you think is reasonable for this product?
  • The data collected should be available through the Google Form survey; if there’s an issue accessing it, let me know and I can try to fix it. I’ve never had to share results before, but I did check the box that lets anyone answering the survey see the results, so you may have to enter answers into the fields before the results display.





2 hours – I decided to write an update at the second hour, then every four hours after. The flask has been on for a while now; the lights come on for a few minutes, then turn off for a longer stretch. This has a sort of meditative effect on me right now, but I’m not sure how that will change.

4 hours – Four hours have passed and the blinking lights of the bottle don’t seem to bother me, but I do wish there was some sort of notification each time the lights turn on. I feel that would make the product more interactive in many ways. All things considered, the battery is still doing fine and the LED ring hasn’t burned out.

8 hours – After 8 hours the bottle’s blinking became background noise; I could see when the lights were showing, and they did not bother me nearly as much as earlier, when I was not yet used to them. The battery is still holding fine, the LED still hasn’t burned out, and structurally the bottle is holding up. I think I’m going to start taking a drink of water from the bottle whenever it blinks.

12 hours – At the end of the 12 hours of user testing I have a few observations. The user experience changed drastically for the better once I took a drink every time it blinked; it was not only more interactive, but I cared much more for it and wished the lights changed every time I took a drink.

Battery- and hardware-wise it held up perfectly, and as one of the programmers there is a lot I would change in the code after this user testing. It was fairly accurate, only losing the ISS near the end of the testing, when it desynchronized from the orbit.

Summary of User Testing

This user testing was about answering questions of accuracy and durability, but it yielded much more to look over and consider.

The main findings were that the flask loses accuracy over time and that the user needs extra incentive to start using it as a bottle. Though it did very well for 12 hours straight, the blink was a very bright, cold light; I now wish we had made it change colour, or shift to a warmer colour, as the ISS approaches your location. It still functions very well as a drinking game and has great potential for schools to teach younger kids about space and stellar bodies.

The battery lasted for the whole test and was still going 24 hours after, so recharging might only be necessary every 3–4 days of use. The actual components of the flask are still intact and compact enough that carrying everything around is not a burden. All in all, it is a very good design; however, there are minor changes to the algorithm that I have to make so that accuracy does not slowly desynchronize over time, and that is a matter of my math more than anything.
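For context on what that tracking math involves: a sketch of the core proximity check could take the ISS’s current ground position, compute the great-circle distance to the user with the haversine formula, and map that distance to an LED level. This is illustrative only, not the actual Nasaflask code, and the distance thresholds are made up.

```cpp
#include <cmath>

// Great-circle distance in km between two (lat, lon) points, via haversine.
double haversineKm(double lat1, double lon1, double lat2, double lon2) {
    const double R = 6371.0;                              // mean Earth radius, km
    const double toRad = 3.14159265358979323846 / 180.0;  // degrees to radians
    double dLat = (lat2 - lat1) * toRad;
    double dLon = (lon2 - lon1) * toRad;
    double a = std::sin(dLat / 2) * std::sin(dLat / 2)
             + std::cos(lat1 * toRad) * std::cos(lat2 * toRad)
             * std::sin(dLon / 2) * std::sin(dLon / 2);
    return 2.0 * R * std::asin(std::sqrt(a));
}

// Map distance to a brightness level for the LED ring (thresholds are made up).
int proximityLevel(double distKm) {
    if (distKm < 500.0)  return 3;  // roughly overhead: brightest
    if (distKm < 1500.0) return 2;
    if (distKm < 3000.0) return 1;
    return 0;                       // far away: lights off
}
```

Keeping a check like this synchronized with the real orbit is then a matter of how often the ISS position is refreshed, which matches the log’s conclusion that the fix is algorithmic.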

It was a fun product to have around the house, and I would totally buy it if I saw it online, though not in its current form: it would have to be more compact and better looking, with the entire system built into the bottle.


In conclusion, the premise behind the device was interesting to users, but the execution could be improved. Many people wanted the Flask to have more visual cues, as the connection between the Flask and its technology was hard to grasp at first glance. Because most issues were with design, the actual build of the Flask would have to be drastically improved in further iterations. Most people felt that the plastic bottle did not translate into a “quality” feel. The bottle itself was also much too large for day-to-day use and would have to be made more compact. The electronics need to be secured more permanently and downsized as well. Overall, the major problems were size and design factors. There would also have to be a change of microcontroller, as we found that the Photon could not handle the code necessary to make the device successful.

Our next steps would be to move toward a metal build and to solidify the software so that the device remains accurate. We want to integrate design elements that tell the user what they are using and how it works. Ideally we would produce a product that feels worth fifty dollars, as opposed to the twenty-dollar price point people expected to pay.

What worked best for us was the sleekness of the overall design, but that same design got in the way of clearly communicating the object’s use.

Hugh, Pedro, Ethan, Miranda

Remindlet – Final Report

Project Overview

Members: Jade Wu, Clay Burton, Do Park, John Cho

Remindlet is a wristband-style wearable which tracks any Bluetooth device to see whether it has left its wearer’s vicinity. It alerts the user with vibration and light when tracked devices go out of range.

Click here to see product video.

Final Design


This is our final design; we tried to make the whole circuit as small as possible. The final prototype turned out to be nearly half the size of the first. For future iterations, we will look at different types of material, e.g. plastic or rubber.

Production Material


1. 2 female-to-male jumper cables

2. 1 RGB LED (any size)

3. 1 SparkFun FTDI Basic Breakout

4. 1 × 2kΩ resistor, 1 × 1kΩ resistor

5. Lithium battery (3.7V)

6. Vibration motor

7. Small (or cut) protoboard

8. Arduino Pro Mini 5V

9. HC-04 Bluetooth module

10. JST 2-pin connector

11. 4-pin female-to-male connector


Final Bill of Materials

Final Circuit Diagram


Final Code: Click Here

User Testing:

Field test – One problem during our field test was that, since the electronic parts are attached to the fabric, the fabric wasn’t washable. Although it didn’t really matter for the field test itself, how dirty the bands got in two days made it apparent this would be a problem long-term. The code and Bluetooth chip were also not perfect, which caused the bracelet to malfunction from time to time (turning red and alerting the user that Bluetooth had disconnected even though it hadn’t). That was slightly annoying during testing, since it would buzz randomly; it could be fixed with a higher-grade Bluetooth module and optimized code. One last problem was that the battery wouldn’t run for long because of how much energy the Bluetooth module needed. On the positive side, the Remindlet was very comfortable to wear because of the fabric we chose (thick and almost cushiony), and the battery was removable and easily recharged, by design.

User test – Most people thought that although the Remindlet worked well, they would find it more useful in other situations or in a different format, such as a keychain.
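The random buzzing described above is essentially a debouncing problem. One possible fix, sketched below as a hypothetical helper rather than the actual Remindlet firmware, is to raise the alert only after several consecutive failed link checks, so a single dropped poll doesn’t trigger the vibration motor.

```cpp
// Debounce spurious Bluetooth "disconnected" readings: only signal an alert
// after `threshold` consecutive failed link checks. A single good reading
// resets the counter.
class DisconnectDebouncer {
public:
    explicit DisconnectDebouncer(int threshold) : threshold_(threshold) {}

    // Feed one link-check result; returns true when the alert should fire.
    bool update(bool linkOk) {
        misses_ = linkOk ? 0 : misses_ + 1;
        return misses_ >= threshold_;
    }

private:
    int threshold_;
    int misses_ = 0;
};
```

In the bracelet’s main loop, each poll of the Bluetooth module would feed `update()`, and the LED/motor would only fire when it returns true.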


How bluetooth works: Link

AT Commands Sheet: Link

Projects done with HC-05 bluetooth module: Link 1 Link 2 Link 3



Overall it was quite a fun project to do. We had a tough time figuring out how to make it work in the beginning, but it turned out great in the end. The most challenging part was understanding how the Bluetooth module works and how to command it to find other devices around it. We invested a significant amount of time researching this, and fortunately we found useful resources to make it work. For the casing, we didn’t have enough time to look into silicone printing/molding; however, we are currently looking for students from the industrial design program to take this project further. Working with them would make future prototypes more marketable and practical.



Tech-Charades (Final Report)

Project Overview
Alessia, David, Janelle and Thomas Present: Tech-Charades!

Tech-Charades is a wearable party game for 2 or more players which uses a microcontroller and TFT screen to display different images to act out.






Production Materials

Bill of Materials:

Circuit & Code:

Design Process
1) Initial Concept

The original pitch for Tech-Charades was a headband in conjunction with a touch screen, which we ended up not using due to cost. Instead we opted for a TFT display with a built-in SD card reader. Hypothetically an end user could add their own photos to the game; however, as we later learned, the screen is very particular about image formatting.
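For reference on that pickiness: Adafruit’s bitmap-drawing examples generally expect uncompressed 24-bit BMP files. A hedged helper, illustrative only since the exact constraints of the library we used may differ, could check a file header for that shape (assuming a little-endian host):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Check whether a buffer starts with the kind of BMP header a simple TFT
// bitmap loader typically accepts: "BM" magic, 24 bits per pixel, and
// BI_RGB (no compression). Header offsets follow the BITMAPINFOHEADER layout.
bool looksLikeCompatibleBmp(const uint8_t* data, size_t len) {
    if (len < 34) return false;                  // too short to hold the header
    if (data[0] != 'B' || data[1] != 'M') return false;
    uint16_t bpp;
    uint32_t compression;
    std::memcpy(&bpp, data + 28, 2);             // biBitCount (little-endian)
    std::memcpy(&compression, data + 30, 4);     // biCompression; 0 = BI_RGB
    return bpp == 24 && compression == 0;
}
```

Running user-supplied photos through a check like this before copying them to the SD card would catch most formatting mistakes.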

Alessia produced a prototype headband as pictured above, but we decided to change the form of the wearable to a baseball cap to improve comfort and stability.

Our original concept also included the use of two tilt-ball sensors for player feedback: tilt your head one way to confirm a correct answer, the other way to skip. After some tests this idea was abandoned due to the uncomfortable 90-degree angle needed to reliably activate the tilt ball. We also looked into using a PIR motion sensor; however, it would be nearly impossible to get two different outputs from it, and it was more costly and complicated than a simple button. So we made the design decision not to differentiate between a “skip” and a “correct answer”, and instead have each player track their own score. This made the circuit and code much simpler.
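That simplification boils the game loop down to a single condition. A minimal sketch of the idea (a hypothetical helper, not our actual sketch code): the image advances when either the refresh interval elapses or the one skip button is pressed.

```cpp
// Decide whether to show the next image. Unsigned subtraction keeps the
// comparison correct even when a millis()-style timer rolls over.
bool shouldAdvance(unsigned long nowMs, unsigned long lastChangeMs,
                   unsigned long intervalMs, bool skipPressed) {
    return skipPressed || (nowMs - lastChangeMs >= intervalMs);
}
```

In the real sketch this would be evaluated on each pass through loop(), with the interval set to the 30 seconds users preferred.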

2) Prototype

Getting our prototype ready for user testing was quite difficult due to the finicky nature of the TFT screen. All connections to the screen must be secure when the Arduino sketch initializes; otherwise the screen may power on but will not display anything. This meant there was no room for error when it came to soldering. Ultimately all we could prepare for user testing was the screen with the game running separately on a breadboard, plus the cap with the enclosure full of parts, though these still produced valuable feedback.

The biggest change made to the design at this point was the development of the head mount for the enclosure. Because the cap is round and the enclosure rectangular, it did not snap on properly at the time of user testing. To solve this, Thomas modeled the head mount in Rhino and planned to 3D print it, but was unable to due to a miscommunication. This setback turned out alright, though: we moved on to a papercraft version of the mount, which was lighter and cheaper than the 3D-printed version would have been.

3) Final Design(s)

Due to the technical challenges with the screen and with fitting all the electronics inside the enclosure, we each ended up with similar but slightly varied designs.

David’s Version:


Initially it seemed as though the Feather 32u4 would not be a viable microcontroller, based on the documentation for the TFT screen provided by Adafruit. However, after some trial and error we found that the Feather simply has different SPI pinouts than the Arduino Uno. These are as follows:

Pin                         Uno   Feather
TCS                         10    10
D/C                         8     9
RST*                        9     0
CCS*                        4     6
Interrupt 0 (skip button)   2     3

*customizable pins
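Assuming Adafruit-style constructors that take chip-select, data/command, and reset pins (the macro names below are illustrative, not from our sketch), porting to the Feather amounts to changing a handful of pin definitions:

```cpp
// Pin assignments from the table above. These values target the Feather 32u4;
// swap in the Uno column (10, 8, 9, 4, 2) when compiling for an Arduino Uno.
#define TFT_CS   10  // TCS: TFT chip select
#define TFT_DC    9  // D/C: data/command select (8 on the Uno)
#define TFT_RST   0  // reset, customizable (9 on the Uno)
#define SD_CS     6  // CCS: SD card chip select, customizable (4 on the Uno)
#define SKIP_BTN  3  // skip button interrupt pin (2 on the Uno)
```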

In this version of the design the Feather (rather than the screen) is mounted to the protoboard using female header, which connects to the screen via hookup wire. Female header was also soldered to the TFT instead of male header for a simple socket-to-socket connection; the tension from the wires holds the screen in place. A hole was drilled in the back of the enclosure, and the wires for the skip button were threaded through to the side of the head mount.

Alessia’s Version:


For the Uno design, the screen is soldered to male header which plugs into a set of female headers on the protoboard. Wires connected to the protoboard go from there to the Uno which is placed underneath the protoboard. Additionally, a longer set of screws is needed to keep everything in place.

Besides the soldering and wiring, the main challenge of this configuration was fitting the power supply, as the 9-volt input jack for the Uno was inaccessible. Alessia’s solution was to drill a hole in the side of the enclosure for both the skip button and a USB cable, which led to a rechargeable unit fastened to the back of the cap.

Thomas’s Version:

This version was mostly the same as Alessia’s; however, for the battery Thomas cut the jack off the 9-volt adapter and wired it directly to the gutters of the protoboard. This worked initially, but for the sake of consistency he changed the power connection on the TFT from the Vin pin to the 3V3 pin. Because there was no voltage regulator on the latter, this sadly blew out the screen. Thomas also drilled a hole in the top of the enclosure to mount the skip button.

Janelle’s Version:


Janelle used the same external battery configuration as Alessia (with a regular 9-volt) and the front-facing button configuration used by Thomas. Unfortunately, during testing the 9-volt adapter came undone, and the enclosure had a tendency to fall out of the mount.

Version Differences Overview:

                        David              Alessia            Thomas             Janelle
Microcontroller         Feather            Uno                Uno                Uno
Battery Rechargeable?   Yes (LiPo)         Yes                No                 No
Battery Placement       Inside enclosure   Outside enclosure  Inside enclosure   Outside enclosure
Button Placement        Side               Side               Front              Front

4) Instructable


User Testing

Survey: https://docs.google.com/forms/d/1bp-HzYdmvAdtVCbT_ACPRWBX2NIbTI7_SOu_CO0_bn0/viewform?edit_requested=true

Results: https://docs.google.com/spreadsheets/d/1xHZdOJ6i5qmY_dg4hD9PqGlYa3oNlTUK6_OSUQMy_XY/edit?usp=sharing


Essentially we were testing two distinct aspects of our project: the wearable and the game itself. Overall the response to the wearable was good, although there were some concerns about the weight and secure fit of the enclosure. In response, Thomas designed a head mount for the device, as previously mentioned.

As for the game, users found that it was intuitive once explained, but nearly unanimously agreed that 30 seconds was a better image refresh rate than 10 (though for demonstration purposes 10 was appropriate). Users also desired the added functionality of a button, which was planned but not ready at the time of testing.

Field Testing

Survey: https://docs.google.com/forms/d/e/1FAIpQLSdA7pSb89Lk0y3Ljvqx5g3NoHANzcwf2JZnisuN5jXsCZtlOQ

Results: tech-charades-final-usertesting-survey-google-forms-2


During field testing, some things came to our attention that did not come up in our initial in-class test. One prominent issue affecting the readability of the images was the difference in height between players. A simple solution was to have the wearer of the hat sit down during gameplay; a more involved solution for a future iteration would be an adjustable hinge for the screen to sit on.

Another takeaway from participating in the playtesting was our inherent bias/advantage as the designers: because we had become familiar with the set of images used, we were much better at the game than newcomers. Though players understood the rules just fine, they did not always find them intuitive to act on; for example, what constituted a correct answer was ambiguous and partially left to the player to determine. Players also seemed to prefer to use either words only (as in Hedbanz) or actions only (as in Charades), instead of the combination permitted by the rules.

