Author Archive

timesUp

Group Members

Shreeya Tyagi | Orlando Bascunan | Afrooz Samaei


Introduction

timesUp is a minimal, stylish wearable timer designed to increase the user’s productivity. It can be worn on the wrist as a bracelet or hung from the neck as a necklace. It features a timer that breaks working time into intervals of 45 minutes followed by 15 minutes of break, or as we call it, “play” time. Once the time is up, the device notifies the user with a short vibration and a blink of an LED.

The project was developed based on the Pomodoro time-management technique, introduced by Francesco Cirillo in the late 1980s. The goal of this product is to encourage users to commit to a set of work and play intervals, helping them maximize their efficiency and minimize distractions. In addition, taking short, scheduled breaks while working eliminates the “running on fumes” feeling that users get when they have pushed themselves too hard, leading to a more productive day.

 

 

The product includes a vibration notification system, an LED notification system, and a button for issuing commands.

The instructions for using the product are as follows:

  • Press and hold the button to Start or Stop the timer.

          – Start is signaled by a single vibration

          – Stop is signaled by a double vibration

           When it’s time to take a break or get back to work, the device notifies you with three vibrations and a blink of the LED.

 

  • Press the button to check the amount of time left on the timer. The LED will flash three times:

          – Slowly if you are within the first half of the interval

          – Quickly if you are within the second half of the interval

 

  • Press the button to toggle between Work time (45 minutes) and Play time (15 minutes); a sketch of this button logic follows below.

          – Work mode is indicated by two slow blinks of the LED

          – Play mode is indicated by two fast blinks of the LED
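The full code is linked below under Design and Production Process; the following minimal sketch only illustrates how the press-versus-hold logic can work on the Gemma. The pin assignments, timings, and the reduction to a single work interval are our illustrative assumptions, not the project’s exact code.

```cpp
// Minimal sketch of the press-vs-hold interaction described above.
// Pins and timings are assumptions; the real timesUp code is linked below.
const int BUTTON_PIN = 2;   // momentary button with external pulldown (assumed)
const int LED_PIN    = 1;   // indicator LED
const int MOTOR_PIN  = 0;   // vibration motor, driven through a transistor

const unsigned long HOLD_MS = 1000UL;          // press-and-hold threshold
const unsigned long WORK_MS = 45UL * 60000UL;  // 45-minute work interval

bool running = false;
unsigned long intervalStart = 0;

void pulse(int pin, int times, int onMs) {
  for (int i = 0; i < times; i++) {
    digitalWrite(pin, HIGH); delay(onMs);
    digitalWrite(pin, LOW);  delay(onMs);
  }
}

void setup() {
  pinMode(BUTTON_PIN, INPUT);
  pinMode(LED_PIN, OUTPUT);
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  if (digitalRead(BUTTON_PIN) == HIGH) {
    unsigned long pressed = millis();
    while (digitalRead(BUTTON_PIN) == HIGH) {}   // wait for release
    if (millis() - pressed >= HOLD_MS) {         // hold: start or stop
      running = !running;
      if (running) { intervalStart = millis(); pulse(MOTOR_PIN, 1, 300); }
      else         { pulse(MOTOR_PIN, 2, 300); }
    } else if (running) {                        // short press: time check
      bool firstHalf = (millis() - intervalStart) < WORK_MS / 2;
      pulse(LED_PIN, 3, firstHalf ? 500 : 150);  // slow vs fast blinks
    }
  }
  if (running && millis() - intervalStart >= WORK_MS) {
    pulse(MOTOR_PIN, 3, 300);                    // time's up: 3 vibrations
    pulse(LED_PIN, 1, 300);
    intervalStart = millis();                    // break timing omitted here
  }
}
```

The actual device also toggles between work and play modes on a short press; handling both that and the time check is the kind of command integration discussed under Challenges below.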

 

Context

For many people, time is an enemy. The anxiety triggered by “the ticking clock”, in particular when a deadline is involved, leads to ineffective work and study behavior which in turn elicits the tendency to procrastinate. The Pomodoro Technique was created with the aim of using time as a valuable ally to accomplish what we want to do the way we want to do it, and to empower us to continually improve our work or study processes.

The Pomodoro Technique is founded on three basic assumptions (Francesco Cirillo):  

  • A different way of seeing time (no longer focused on the concept of becoming) alleviates anxiety and in doing so leads to enhanced personal effectiveness.  
  • Better use of the mind enables us to achieve greater clarity of thought, higher consciousness, and sharper focus, all the while facilitating learning.
  • Employing easy-to-use, unobtrusive tools reduces the complexity of applying the technique, while favoring continuity, and allows you to concentrate your efforts on the activities you want to accomplish. Many time management techniques fail because they subject the people who use them to a higher level of added complexity with respect to the intrinsic complexity of the task at hand.

In order to develop this project, we considered ourselves the potential users and thought about our daily challenges and needs in order to build a personalized product. All three of us were concerned with our time and how we manage it. Hence, we decided to build a product that helps us keep better track of the passage of time while eliminating the distractions caused by traditional or phone alarms.

Design and Production Process

Since the goal of timesUp is to minimize distractions, whether caused by the user or by other people, we wanted the product to be as invisible as possible, so that it neither draws the user’s attention to itself nor invites others to inquire about it. This principle informed many of our design decisions. For instance, there is no visual sign, such as a light, indicating that the device is running. However, the user can still check the time passed, or confirm the device is running, by pressing the button and watching the LED’s different blinking modes.

In order to user-test different forms of the product and compare them to arrive at the best design solution, we decided to build three distinct products while keeping the aesthetics, design choices, and functionality identical across all three. This gave the wearable flexibility and comfort along with ease of use. We built a bracelet and two different necklaces.

 


 

We used 3D printing to build the prototypes. The main reasons for choosing 3D printing were, first, its speed and low cost, which enabled us to iterate on the product and try different forms before arriving at a final solution. Second, the light weight of the plastic made it a perfect material for this particular wearable, as comfort should be a key feature of both the bracelet and the necklace.

The 3D printer we used was the MakerBot Replicator 2 and the filament was PLA (polylactic acid). After creating a range of textural vocabulary, we printed multiple disks with different textures and stacked and glued them on top of each other. A small slit cut into the side of each product allows a USB connection, in case the battery dies in the middle of a work session or new code needs to be uploaded to the microcontroller.

The artistic details and forms of the product were explored in Autodesk Fusion 360. These were based on contemporary jewelry, which worked to our advantage since we were able to use the device in our everyday lives without it attracting much attention in public workspaces.

 


 

The circuit used in timesUp is identical across all three prototypes. It consists of a mini vibration motor, an LED, and a button, all connected to an Adafruit Gemma microcontroller. We chose the Gemma mainly for its small size and low cost. Although the Gemma’s lack of a serial port made the code difficult to debug, it provided the functionality we were looking for overall. In addition to the above components, we used diodes to protect the components against reverse or negative voltage, transistors to amplify the motor drive current, and two 1 kΩ resistors.


Here is a complete list of all the components used:

https://docs.google.com/spreadsheets/d/17sHo2XsYTJYVaqCtRQAXoin2ELTBKjEZiLoTEISM1jQ/edit?usp=sharing

Link to the Code: https://github.com/obascunan/timesUp/blob/master/timesup.ino


 

User Testing Process

After building and integrating the circuit into the 3D-printed objects, we tested the timer by setting the alarm to vibrate at one-minute intervals, in order to make sure that the timing and notifications worked correctly. We then set the alarm to vibrate 45 minutes after starting the test, indicating the start of the break, followed by another vibration after 15 minutes, marking the end of the break. Since the goal of our product was to keep the user focused and away from any possible distractions, we took notes and wrote down any significant comments on a piece of paper during the 15-minute breaks. After the user-testing session was done, we filled out the following questionnaire:

https://docs.google.com/forms/d/1fQS3xAJj5mSP7VG-xqpt2jwTUKIn6LCJoDFf-msvjWw/edit

Overall, the product felt comfortable and familiar. Although some of the notifications were occasionally confusing, the overall experience was smooth and straightforward. The bracelet was more reliable than the necklaces in terms of detecting the vibration notifications, as we noticed it was sometimes hard to feel the necklace vibrate. An unexpected challenge was isolating the electronics from water, as the enclosure is firm but permeable.

The link above contains the results of our responses to the user-testing questionnaires.

 

Instructions Handout

Challenges, Outcomes, and Future Iterations

The timesUp wearable is designed to help with time management and with increasing focus while working or studying. Although we found it challenging to commit to the designated time intervals at first, we realized that this product could help us gradually increase our efficiency and minimize distractions when used over a longer period of time.

The main challenge was to integrate the commands such that the product offered several functions while maintaining simplicity. It was difficult to explore the various functions that the combination of a button, an LED, and a vibration motor could provide, in a way that would become intuitive to the user after a while.

The other difficult aspect of the project was the size of the product. The goal was to make the product simple, light, and relatively small. Hence, we spent a considerable amount of time carefully soldering the delicate components and stacking them on top of each other so that they occupied the minimum amount of space.

For future iterations, we would like to make the On/Off switch on the Gemma accessible. This would simplify interactions with the device, as we could reserve the press-and-hold action for switching between work and rest modes and use a short press to check the time. A charging module or a disposable battery would make it easier for the user to keep the device running, as the Gemma does not charge batteries. In addition, housing the button inside the device and letting the whole face act as the press surface could give the product a sleeker look, since pressing the button is the only interaction required from the user. Lastly, waterproofing seems especially relevant for the wrist piece, as it sits within splashing range when washing hands or dishes.

 

References

Cirillo, Francesco. The Pomodoro Technique: Do More and Have Fun with Time Management. Berlin: FC Garage, 2013. Print.

https://learn.adafruit.com/buzzing-mindfulness-bracelet/overview

http://www.digitaltrends.com/wearables/re-vibe-anti-distraction-wearable/

https://www.makerbot.com/replicator/

http://www.autodesk.com/products/fusion-360/overview

Google Glass

Group Members:

Nana Zandi

Afrooz Samaei


 

Introduction

Google Glass is a head-mounted wearable computer and display that is worn like a traditional pair of glasses. On April 4, 2012, Google introduced the Glass through a Google+ post: “We think technology should work for you—to be there when you need it and get out of your way when you don’t. A group of us from Google[x] started Project Glass to build this kind of technology, one that helps you explore and share your world, putting you back in the moment. We’re sharing this information now because we want to start a conversation and learn from your valuable input. So we took a few design photos to show what this technology could look like and created a video to demonstrate what it might enable you to do.”

As Sergey Brin indicated while giving a TED talk about the Glass, the vision behind this product is related to the way we want to connect with other people and the way we want to connect with information. The main motivation behind Glass is to build something that “frees your hands, your eyes, and also ears” and eliminates having to constantly look at our phones and socially isolate ourselves. The initial vision behind Google was to “eliminate the search query and have the information come to us as we needed it and make the information universally accessible and useful” (Larry Page, 2014).

During the 2012 Google I/O event held at the Moscone Center in San Francisco, Google announced that a developer edition of the Glass, called the Explorer Edition, was available for developers to purchase for $1,500, with units shipping in early 2013. The Explorer Edition enabled technologists, educators, and developers to pilot and test the product before it became commercially available to the public. Once the device was ready, customers were invited to one of Google’s offices to pick up their Glass and receive the necessary instructions and information on how to use the device.

Shortly after that, developers started building applications for Google Glass, referred to as Glassware, based on the complete instructions provided by Google Developers. The applications include social and traditional media (e.g., Facebook, CNN), utilities (e.g., Stopwatch), and language-learning (e.g., Duolingo) applications, totaling about 150 (Forinash, 2015).

In January 2015, Google closed the Explorer program and discontinued the availability of Glass for individual buyers. However, the enterprise edition of the Glass is still under development. According to Google Developers, Glass at Work is a program intended to develop new applications and enterprise solutions with certified partners, such as AMA (Advanced Medical Applications), APX Labs, and AUGMATE.

Design

Google Glass consists of a head-mounted optical display (HMOD), a prism that functions as the monitor and visual overlay, attached to a mini-computer and a battery pack housed on the right side of the Glass. The frame is constructed of titanium and comes in five colors: charcoal, tangerine, shale, cotton, and sky.

What does Google Glass do?

Here is what Google Glass does when it’s on and connected to the internet (Google Glass For Dummies, p. 9):

  •  Takes photos and videos, and sends them to one or more of your contacts. The Glass camera sees the world through your eyes at the very moment you take the photo or record the video.
  •  Sends e-mail and text messages to your contacts, and receives the same from them.
  •  Allows you to chat live via video with one or more Google+ friends via Google Hangouts.
  •  Sends and receives phone calls.
  •  Searches the web with the Google search engine (of course) so that you can find information easily.
  •  Translates text from one language to another. Glass speaks the translated text and also shows a phonetic spelling of the translated word(s) on its screen.
  •  Provides turn-by-turn navigation with maps as you drive, ride, or walk to your destination.
  •  Shows current information that’s important to you, including the time, the weather, and your appointments for the day.
  •  Recognizes the song that’s playing on the device and identifies the artist(s) singing the song, in case you don’t know.

Technical Specifications


 

There are a few different ways to control Google Glass. One is the capacitive touchpad along the right side of the glasses, on the outside of the GPS and CPU housing. Users move from screen to screen by swiping and tapping the touchpad with a finger. Another way to control Google Glass is through voice commands. A microphone on the glasses picks up your voice and the microprocessor interprets the commands. To use voice commands, users say “OK Glass,” which brings up a list of the available commands.


In order to connect to the internet, users connect the Glass to a Wi-Fi network once they have set up their MyGlass account. Users can also connect through their smartphone with the MyGlass app, which provides internet connectivity as long as the smartphone has a data plan or access to a Wi-Fi network. Pairing the Glass to a smartphone over Bluetooth is most convenient when no Wi-Fi is available or users do not have access to a data network.

Images from Google Glass project onto the reflective surface of the built-in prism, which redirects the light toward your eye. The images are semi-transparent — you can see through them to the real world on the other side.

A visitor to the “NEXT Berlin” conference, a meeting place for the European digital industry, tries out Google Glass on April 24, 2013 in Berlin (Photo: Ole Spata/AFP/Getty Images).

The speaker on Google Glass, located in the right arm, is a Bone Conduction Transducer. That means the speaker sends vibrations that travel through your skull to your inner ear — there’s no need to plug in ear buds or wear headphones. Using the camera and speaker together allows you to make video conferencing calls.

Also on board the glasses are a proximity sensor and an ambient light sensor. These sensors help the glasses figure out if they are being worn or removed. You can choose to have your Google Glass go into sleep mode automatically if you take them off and wake up again when you put them on. 

One last sensor inside Google Glass is the InvenSense MPU-9150. This chip is an inertial sensor, which means it detects motion. This comes in handy in several applications, including one that allows you to wake up Google Glass from sleep mode just by tilting your head back to a predetermined angle.
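As a generic illustration (not Google’s actual implementation), tilt-to-wake can be computed from any three-axis accelerometer by comparing the pitch angle against a threshold; the readAccel() stub below stands in for a real MPU-9150 driver, and the 30-degree threshold is an assumption.

```cpp
// Illustrative only: detecting a "tilt head back to wake" gesture from a
// 3-axis accelerometer. readAccel() is a stub standing in for the real
// sensor driver; values here simulate a head tilted back ~35 degrees.
#include <cmath>
#include <cstdio>

struct Accel { float x, y, z; };  // acceleration in g

Accel readAccel() {
  return {0.0f, 0.574f, 0.819f};  // stub reading
}

bool shouldWake(float wakeAngleDeg) {
  Accel a = readAccel();
  // Pitch relative to gravity: 0 degrees when level, positive tilted back.
  float pitch = std::atan2(a.y, std::sqrt(a.x * a.x + a.z * a.z))
                * 180.0f / 3.14159265f;
  return pitch > wakeAngleDeg;
}

int main() {
  std::printf("wake: %s\n", shouldWake(30.0f) ? "yes" : "no");
}
```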

The power is provided by a battery housed in a wide section of the stem. It fits behind your right ear. It’s a lithium polymer battery with a capacity of 2.1 watt-hours. Google has stated that the battery life should last for “one day of typical use.” Although the battery quickly depletes with extensive use, such as taking lengthy videos, watching videos from the internet, using Bluetooth, etc., Glass recharges relatively quickly, in as little as two hours.

Applications

Healthcare

Mark Taglietti, head of ICT delivery services and vendor management at London University College Hospitals says, “Google Glass represents a step change in technical innovation, wearable technology, and the convergence of personal devices in the workplace. The healthcare applications of Glass are wide-ranging, insightful and impactful, from enabling hands-free real-time access to clinical and patient information, to the transmission of point of view audio and video for surgical research and educational purposes. Glass marks the beginning of a truly remarkable journey for technical innovation within healthcare, enabling providers to improve the delivery of care, as well as overall quality and patient experience.”

Some examples of how Google Glass is dramatically changing healthcare:

Virtual Dictation

Augmedix is a Glass application that provides a better way for doctors to enter and access important patient information in real time without being tethered to a computer. Dignity Health uses Augmedix software and Glass to streamline the interaction between physicians and patients. “This technology allows me to maintain eye contact with my patients and have continuous conversations without having to enter information into a computer,” said Dr. Davin Lundquist, a family medicine practitioner and Dignity Health’s chief medical informatics officer. “The ability to listen, communicate, and care is just as critical as the diagnosis, and this technology allows me to spend more focused and quality time with my patients.”

Telemedicine

Care providers can communicate with physicians remotely and proactively monitor patients whose Electronic Health Records (EHR) can be transmitted in real-time.

Resident training

The Stanford University Medical Center Department of Cardiothoracic Surgery uses Google Glass in its resident training program. Surgeons at the medical center use glassware from CrowdOptics to train residents on surgical procedures.

Augmented Reality allows doctors to monitor patients’ vital signs during surgical procedures without ever having to take their eyes off the patient. Live streaming of procedures can also be used with augmented reality applications for teaching.

 

Warfare

The US military has funded a “smart helmet” that can beam information to soldiers on the battlefield, with the aim of providing “real-time information on the battlefield in order to prevent harm to the soldiers.” The technology promises to give soldiers increased situational awareness, as well as easy access to important intelligence.

 

Education

The technology allows teachers and students to share information in various modes of interaction that include flipped classrooms.

Students can record interactions with fellow students, including while on field trips. Later, students can analyze their own and others’ actions and responses. Teachers can also see how other teachers apply the technology.

 

Journalism

With wearable computers like Glass, journalism is changing into a medium where news content is created and shared instantly, quite literally through the eyes of the reporter. Glass provides freedom of motion and the ability to convey more intimate stories, as journalists can be less intrusive to their subjects. Instead of taking your eyes off the action to take notes, you can record the event hands-free from your face. Using Glass, journalists could have the opportunity to tread new ground with their stories, accessing all of their resources from a single, easy-to-use device.

 

Criticism

  • Privacy Concerns

Concerns have been raised by various sources regarding the intrusion of privacy and the etiquette and ethics of using the device in public and recording people without their permission. Privacy advocates are concerned that people wearing such eyewear may be able to identify strangers in public using facial recognition, or surreptitiously record and broadcast private conversations. There have also been concerns over potential eye pain experienced by users new to Glass. Some facilities, such as Las Vegas casinos, have banned Google Glass, citing their desire to comply with Nevada state law and common gaming regulations that ban the use of recording devices near gambling areas. On October 29, 2014, the Motion Picture Association of America (MPAA) and the National Association of Theatre Owners (NATO) announced a ban on wearable technology, including Google Glass, placing it under the same rules as mobile phones and video cameras.

Internet security experts have also voiced concerns about the product and its implications. They point out that the wording of Glass’s terms of service seems to give Google more control over user data than it should have, and that facial-recognition software could raise further privacy issues.

Another concern is that Google could use the eyewear as a platform for collecting personal data and serving ads. As you go about your day wearing these glasses, Google could create a virtual profile. Based on your behaviors and location, Google could potentially serve up what it considers to be relevant advertisements to the screen on your glasses.

  • Safety Concerns

Concerns have also been raised on operating motor vehicles while wearing the device.

Similar Products

Vuzix M300


 

Vuzix’s M300 smart glasses are built for enterprise use. With an Intel Atom processor powering performance, it runs the latest version of Android with 2 GB of RAM, 16 GB of internal storage, and Wi-Fi connectivity among the more notable specs. There is also a 13-megapixel camera for taking photos, head-tracking support, and dual noise-cancelling microphones.

Epson Moverio BT-300


 

While Epson’s smart glasses have always been quite business-focused, the company has teased the prospect of using them in the gym to race in virtual environments and is working with drone maker DJI so you can control flights straight from your specs.

Sony SmartEyeGlass


 

Sony released the essential tools to allow developers to start coding applications for its Google Glass rival, and now developers can finally get hold of the SmartEyeGlass hardware. SmartEyeGlass includes an array of features, including a gyroscope, accelerometer, ambient light sensor and built-in camera. However, the monochrome screen is likely to put off consumers, if Sony chooses to release it beyond the business world.

 

References

Forinash, D. B. “Google Glass.” CALICO Journal 32.3 (2015): 609-17. Web.

Butow, Eric, and Robert Stepisnik. Google Glass For Dummies. N.p.: John Wiley & Sons, 2014. Print.

Glauser, Wendy. “Doctors among early adopters of Google Glass.” Canadian Medical Association. Journal 185.16 (2013): 1385.

Hua, Hong, Xinda Hu, and Chunyu Gao. “A high-resolution optical see-through head-mounted display with eye tracking capability.” Optics express 21.25 (2013): 30993-30998.

Parslow, Graham R. “Commentary: Google glass: A head‐up display to facilitate teaching and learning.” Biochemistry and Molecular Biology Education 42.1 (2014): 91-92.

Yus, Roberto, et al. “Demo: FaceBlock: privacy-aware pictures for google glass.” Proceedings of the 12th annual international conference on Mobile systems, applications, and services. ACM, 2014

Clark, Matt (May 8, 2013). “Google Glass Violates Nevada Law, Says Caesars Palace”.

MPAA (October 29, 2014). “MPAA and NATO Announce Updated Theatrical Anti-Theft Policy”

https://plus.google.com/+GoogleGlass/posts/aKymsANgWBD

https://developers.google.com/glass/distribute/glass-at-work

http://www.newyorker.com/business/currency/whats-the-problem-with-google-glass

http://www.businessinsider.com/military-invests-in-combat-google-glass-2014-2

http://www.dailymail.co.uk/sciencetech/article-2640869/Google-glass-war-US-military-reveals-augmented-reality-soldiers.html

https://www.wareable.com/headgear/the-best-smartglasses-google-glass-and-the-rest

http://electronics.howstuffworks.com/gadgets/other-gadgets/project-glass2.htm

http://www.catwig.com/google-glass-teardown/

Digital journalism: Your Sunday newspaper will never be the same

 

How Google Glass Will Transform Healthcare

 

 

BMI Booth


 

Description

The BMI Booth is an automated system that collects and saves data about people’s heights and weights in order to calculate their BMIs. The booth consists of a scale to measure weight, and an ultrasonic sensor installed on top to measure height. Each sensor is connected to an Adafruit Feather microcontroller that reads the values and publishes them as separate feeds, which are matched together based on the time the values were received and saved to a spreadsheet. The height and weight values are later used to calculate the BMI.
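For reference, BMI is body mass in kilograms divided by the square of height in metres: BMI = kg / m². For example, a 70 kg person who is 1.75 m tall has a BMI of 70 / (1.75 × 1.75) ≈ 22.9.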

The booth was installed near the entrance of OCAD University’s graduate building on November 24, from 10 a.m. to 10 p.m. About 50 height and weight readings were collected. Information about the project and instructions on how to use it were posted on the booth. Users had to step into the booth, wait a short time to make sure the values were read, and then step out.

Process

I started the project by experimenting with my bathroom scale. I took it apart, tried to understand each component, and figured out a way to connect it to an Arduino by going through online tutorials about similar projects. The scale consists of four load sensors placed at its four corners, forming a bridge configuration. A load sensor’s electrical resistance changes in response to pressure. The four load sensors are connected to a combinator, which is connected to an analog-to-digital converter (ADC). I removed the ADC in order to replace it with an Arduino. However, since the voltage coming from the load sensors is too low to be read by the Arduino’s ADC, an amplifier is needed to boost it; I used the HX711 for this purpose. I tested and calibrated the scale using an Arduino and later connected it to the Adafruit Feather.
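As a rough sketch of the scale-reading step, the example below assumes the widely used bogde/HX711 Arduino library; the pin numbers and calibration factor are placeholders rather than the project’s actual values (the real code is linked further down).

```cpp
// Minimal sketch of reading the hacked scale through an HX711 amplifier,
// assuming the common bogde/HX711 Arduino library. Pins and the
// calibration factor are placeholders found during calibration.
#include "HX711.h"

const int DOUT_PIN = 3;            // HX711 data out
const int SCK_PIN  = 2;            // HX711 clock
const float CALIBRATION = -7050.0; // determined with a known weight

HX711 scale;

void setup() {
  Serial.begin(9600);
  scale.begin(DOUT_PIN, SCK_PIN);
  scale.set_scale(CALIBRATION);    // scale raw readings to kilograms
  scale.tare();                    // zero the scale with nothing on it
}

void loop() {
  // Average 10 readings to smooth out noise from the load cells.
  float kg = scale.get_units(10);
  Serial.println(kg);
  delay(500);
}
```

Calibration amounts to placing a known weight on the platform and adjusting the factor until get_units() reports the correct value.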


 

When I had the scale wired up, calibrated and functioning, I added the ultrasonic sensor to the circuit. The challenge that I faced at this point was having the scale and ultrasonic sensor read values at the same time. They would read and publish values separately, but not at the same time. Hence, I decided to connect each sensor to a Feather separately, and match the values together based on the time they were read and published.

The ultrasonic sensor is mounted on top of the booth. The height is calculated by subtracting the value read by the sensor (the distance from the sensor to the top of the head) from the height of the booth.
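A minimal sketch of this calculation, assuming an HC-SR04-style sensor (the exact sensor model is not stated) and a placeholder booth height:

```cpp
// Sketch of the height measurement: booth height minus the distance from
// the top-mounted ultrasonic sensor to the top of the user's head.
// Pins and BOOTH_HEIGHT_CM are illustrative placeholders.
const int TRIG_PIN = 5;
const int ECHO_PIN = 6;
const float BOOTH_HEIGHT_CM = 220.0;  // sensor height above the floor

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

float readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);  // trigger a ping
  digitalWrite(TRIG_PIN, LOW);
  long us = pulseIn(ECHO_PIN, HIGH, 30000);  // echo round-trip time (us)
  return us * 0.0343 / 2.0;                  // speed of sound ~343 m/s
}

void loop() {
  float height = BOOTH_HEIGHT_CM - readDistanceCm();
  Serial.println(height);
  delay(1000);
}
```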

 


Construction

I kept the construction simple and reused wood from another project. The letters mounted on the body of the booth were laser-cut in the OCAD U woodshop.


 

Circuit Diagram

Ultrasonic sensor circuit

Scale circuit

Code: 

https://github.com/afroozsamaei/BMI-Booth

Spreadsheet containing recorded data:

In order to save the collected data, I initially used IFTTT to write the values to a spreadsheet. However, I noticed a delay between reading the values and having them published to the spreadsheet, which caused problems since I had to match the values based on the time they were read. Hence, I published the sensor values to two separate feeds on my Adafruit IO dashboard and imported them into a spreadsheet manually. To get more accurate results when calculating the BMI, I subtracted 2 kg from the weight values to compensate for winter clothes.

https://docs.google.com/spreadsheets/d/1QoqXUfhgkQ8PkWGlCHyKx8L5sQtak5irb_YPt_U-5h4/edit?usp=sharing

Visualization of Collected Data

 


Challenges and Learning Outcomes

This project was valuable for me in terms of integrating a new kind of sensor into my circuit, hacking an everyday object, and connecting it to the Internet. The main challenges were hacking the scale, reading values from both sensors connected to one Feather, and precisely positioning the ultrasonic sensor so it reads height values accurately. Since the sensor missed the height of the user in some cases, I reluctantly added a piece of cardboard to the booth for users to hold above their heads, yielding a more accurate height reading; I did not want to compromise the accuracy of my data.

A lesson I had learned from my other projects is to expect as little as possible from the user when designing a product. Although I tried to take this into consideration by simply asking users to step in and out of the booth without any additional work, I neglected the fact that the booth was installed in a place where people carry their bags. Based on my observations, some users stepped on the scale while carrying their bags, which might be a result of being distracted and forgetting to put the bag away, or simply laziness.

Future iterations of this project can add interaction to the booth by including a screen. The visuals shown on the screen can be either informative or simply a way of entertaining people in order to change the perspective of those who are scared of stepping on a scale.

 

Resources

http://www.instructables.com/id/Make-your-weighing-scale-hack-using-arduino/

https://learn.sparkfun.com/tutorials/load-cell-amplifier-hx711-breakout-hookup-guide

https://learn.sparkfun.com/tutorials/getting-started-with-load-cells?_ga=1.214101719.1109509025.1479484387

PacNet

Group Members

Thoreau Bakker

Afrooz Samaei

Description

PacNet is a browser-based game, based on the classic arcade game Pac-Man and built in p5.js. It features a simplified interface and does away with many of the original game elements, focusing instead on interactivity through networking. The main player navigates using a custom-made controller, built from ready-made components and an Arduino Micro housed in a custom-designed 3D-printed enclosure. Additional characters are represented by small circles and navigate using a keyboard. The main player controls the Pacman, chasing the circles on the screen. As other players open the web page, each gets a circle they can control with their laptop’s arrow keys. The players have to prevent their circles from being caught by the Pacman.

 

Design Process

The main source of inspiration for this game came from the Cineplex TimePlay. The idea was to create a collaborative game experience, in which the players either play against each other or play with each other to reach a target.

 


The scenario we intended to create was to have a main player playing at the arcade machine, with the rest of the players playing on their laptops or portable devices.


 

The first game concept that we came up with was a game in which the main player has to reach a target with the help of the others. The players would have to collect and move some blocks and place them on top of each other, in order for the main player to climb up the blocks and reach the target.

 


The players would have to move the blocks and place them on top of each other (Source: Phaser)

 

Another concept was to create a game in which the players compete against each other. We thought of a game in which the player has to collect some objects, like stars, which are controlled by other players. So, as the main player chases the objects, the other players have to control theirs and prevent it from being caught.

The player has to collect the stars, which are controlled by other players (Source: Phaser)

However, the main challenge was to create all the game objects and send the related data to PubNub in order to build a networked game. In addition, since the game library ecosystem of p5.js is not as complete as that of dedicated game engines, we spent a considerable amount of time experimenting with Phaser, a JavaScript-based game framework. After many efforts and iterations, we finally decided to narrow down the game mechanics and assets so we could build the network effectively. Hence, we built a networked version of the Pacman game and created the assets from scratch using p5.js drawing functions.

Simulating the Arcade Machine

For this prototype, we simulated the arcade machine using a laptop, a thumb joystick, and an Arduino Micro. The joystick and the Arduino are housed inside a custom-designed 3D-printed box and connected to a laptop. The values read by the Arduino control Pacman’s movements and are published to PubNub to be shared with all open web pages.
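As a rough sketch of the controller side, the example below assumes a two-axis analog thumb joystick on pins A0/A1 and a simple comma-separated serial protocol; the actual pinout and protocol in the linked code may differ.

```cpp
// Sketch of the Arduino Micro controller: read the analog joystick and
// stream "x,y" lines over serial. Pins and the protocol are assumptions.
const int X_PIN = A0;
const int Y_PIN = A1;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int x = analogRead(X_PIN);   // 0-1023, roughly 512 at rest
  int y = analogRead(Y_PIN);
  // The p5.js page parses these lines and publishes the resulting
  // direction to PubNub for all connected players.
  Serial.print(x); Serial.print(','); Serial.println(y);
  delay(50);                   // ~20 updates per second
}
```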


 

Game played in the browser

 

Circuit Diagram


Challenges and Outcomes

This project proved especially challenging, both technically and conceptually. As a team, we invested considerable time early on brainstorming with a very open and broad scope, trying to explore all possible avenues. In an effort to keep project satisfaction high for both of us, we wanted to find an idea that excited us both equally. This proved challenging, as different directions resonated with us in different ways.

Despite the challenges, the project was extremely valuable in terms of learning outcomes. We gained new skills not only in networking but also in scoping and being realistic about what is achievable in a given amount of time. Our initial vision involved a game that incorporated not only hardware and networked input but also a complex interaction involving group coordination and physics. While the idea was (and still is) a strong one, the goal was far too ambitious for our skill level and time constraints.

Future iterations of the game involve making the connections smoother, reducing the delays, and making a mobile version so the game can be played on smartphones and tablets as well.

 

Links to the Game:

Main Player: http://tinyurl.com/grqbyuc

Other Players: http://tinyurl.com/hugo8sx

Link to Code: https://github.com/afroozsamaei/PacNet

Broken Space

Team Members

Katie Micak 

Afrooz Samaei

 


 

Description

Broken Space is an interactive installation designed to display footage of the galaxy across 20 laptop screens arranged in a grid. Each laptop randomly displays one piece of the final video. Users must put the pieces together by mouse-clicking in order to form the final image. Once the first frame is properly composed, the users press the mouse wheel to make the video play.

 

Design Process 

Afrooz comes from a background in engineering, and Katie comes from an art making field. When we came together we found that communicating our ideas to each other was difficult at first, since we were viewing the project from different perspectives informed by our histories in image making.

In order to arrive at a concept, we described how we viewed screens and interactions, what we should consider about images in terms of abstraction, and how we could look at building the experience.

After a few drawings and conversations, we began showing each other images we found inspiring. Katie showed Nam June Paik’s video wall sculptures (https://www.youtube.com/watch?v=k66qFuGxrl8), and Afrooz showed an interactive iPad work shown at a Japan expo in 2014 (https://www.youtube.com/watch?v=5IPB-Bde6X0). As it turned out, we were generally speaking about creating the same type of experience: one that involved a large scale and images that would move over multiple screens, like a painting.

Our first rendition of this experiment came in the form of a wall of laptops showing an abstract image. The image would be altered through effects when a viewer passed by, which we would do by accessing the webcam. After an investigation into coding, and further conversation, we decided we could distil our idea into a simpler format while still achieving the same impact.

 

Here are the core elements we chose to investigate:

-Scale: a wall of laptops showing the video, or using the laptops as a sculptural material that would inform the work.

-Interaction: since it would not be networked we would have to keep the interaction to one screen only. We decided that the interaction would then be among the users- they would have to collaborate.

-Abstraction: We both knew that we would be working with a number of variables that would cause any image we chose to appear ‘less than perfect’, and our tactic for dealing with this was to choose an image that would be more flexible. It would have to create a larger piece once all of the screens showed it at once.

-Physical Space: We wanted to build a space that would provide a center of focus.

-Concept: We wanted our concept to reflect the technology we were using, and capitalize on the limitations of the project.

 

Realization of concept

After we decided that we would be building a grid to hold and display our image, we began searching for abstract videos that would show well across many screens, even with breaks between monitors/ shelves/ etc.

We began with the visual language of technology, data visualization, since it belongs entirely to computers. We shared images of energy scans, brain scans, and color fields. Katie began looking at color as it exists on computers and in nature, and was drawn to a suggestion on her YouTube page. It was of space. Space it was! A simple enough concept: we would use a space video as a ‘painting.’ It was effective because it has a lot of color and movement, and these images incorporated elements of design.

When constructing the puzzle we went through a number of iterations. First, we tried the first image of our video, which is a nebula. This would not work, because viewers would have no orientation for how to construct the image: it was too abstract. Then we tried an astronaut and an illustration of a planet, which were also too ambiguous.

 

Finally, we decided on a beautiful image of a planet, with the text ‘Broken Space’ as a point of orientation. It would be simpler for viewers working collaboratively to decipher and put together.

Code

The final video is cut into twenty pieces, which are stored in an array. Once the web page loads, one piece is randomly displayed on the screen. Users cycle through the images (the first frames of the videos) by LEFT-clicking (going forward) and RIGHT-clicking (going backward). Clicking the mouse wheel (or pressing the space bar, in case a mouse is not available) plays the video. For simplicity and a better display, the Enter key toggles the browser to fullscreen.

https://gist.github.com/afroozsamaei/352a8a44b22452b02fe18b3fb7969941

Link to the Web Page: https://webspace.ocad.ca/~3153570/

The Day of Exhibition

For the activity, viewers were asked to place their computers on shelves we custom-built for the project, and each viewer was given a wireless mouse to activate the screens. Using the LEFT and RIGHT click functions, viewers could click through web pages until they found the image that fit their screen’s position in the composition.

Once all the images were in place, the viewers were given a surprise moment when the puzzle turned into a larger video.

There are 20 videos with 20 different audio tracks. We decided to incorporate audio to utilize the 40 speakers readily available through the laptops. The soundscape was created by editing compositions from found material on NASA’s SoundCloud page (open source). We chose different sounds for different laptop positions: the bottom row had more of a bass sound, the top row had a higher-pitched, laser-like sonic quality, and the middle rows had more tangible sounds, like the speaking and beeping tied to spacecraft: man-made sounds. Listening to this audio in this context created an atmosphere of space, and the sounds were meant to swing from peaceful audio to a rising tension; the audio moves the viewers through emotional space.

As mentioned earlier, we were concerned with bringing the viewer into the work physically and controlling the space. The viewers had to physically click and talk to solve the puzzle. We also closed the blinds to create a dark, immersive environment, and we kept the viewers at a distance from their laptops so that they could view the work both in parts and as a whole. The audio filled the room when the video played. These elements are important for creating an experiential piece with more emotional weight. We hoped to create a sense of wonder and awe in the viewers, and that they would feel something from this experience.

It was interesting to see how viewers organized themselves to reach their goal of constructing the image. One person became the leader, checking individual screens against the whole image and approving each when it was in the correct position. The group worked well and quickly, creating the image by communicating, listening, and experimenting.

 

Conclusion

Broken Space is a piece about networks on the cosmic level. We acknowledge connections that create a whole, and we see the universe as both organized and random. We hope this metaphor translates to the concept of the internet: individual participation creates structure and larger portraits that change over time, providing insight into overarching concepts of collaboration and unity, even though these connections are mediated.

 

 

The Acryliscope

Group Members:

– Thoreau Bakker

– Orlando Bascunan

– Afrooz Samaei


Description

The Acryliscope is an automated kinetic kaleidoscope. It features opaque, transparent, and mirrored acrylic, multiple servo motors, and an Arduino microcontroller. It is the final output of an assignment built around three main components: an ultrasonic (distance) sensor, a servo motor, and a main material, acrylic. As a viewer approaches the device and leans down to look through the kaleidoscope, the proximity sensor’s threshold is tripped and the servos come to life. The servos rotate translucent green laser-cut disks until the sensor detects that the viewer has left the immediate viewing zone.

Extending / Celebrating Acrylic

Acrylic is an exceptional material with a number of qualities valuable for prototyping. It features excellent visual clarity, comes in a variety of sizes and finishes, and most of all is consistent in terms of machinability. We attempted to implement as many of the benefits of acrylic as possible. We used various opacities and finishes to channel and bounce light and capitalized on its machinability by laser cutting our 3D design models. 

Final Project

 

Video

 

Circuit Diagram

 


Code

https://gist.github.com/afroozsamaei/bf50e7905f28a396a53b931e145e4ea4

Design Process

Initially, the idea of enhancing the material led us to research acrylic’s properties and brainstorm ways to use acrylic in conjunction with an ultrasonic sensor and servo motors. Discarded ideas included automatic doors, kinetic sculptures, and instruments. We concluded that the reflective quality of mirrored acrylic could be used to produce visually attractive patterns. Hence, we decided to build a kaleidoscope that activates as a viewer approaches.
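The gist above contains the actual code; as a sketch of this approach-to-activate behavior, the example below assumes an HC-SR04-style sensor and continuous-rotation servos, with illustrative pin numbers and distance threshold.

```cpp
// Sketch of the viewer-activated kaleidoscope: spin the pattern disks
// while the ultrasonic sensor reports someone within the threshold.
// Pins, threshold, and servo type are assumptions for illustration.
#include <Servo.h>

const int TRIG_PIN = 7;
const int ECHO_PIN = 8;
const float THRESHOLD_CM = 40.0;  // viewer close enough to look inside

Servo diskA, diskB;

float readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long us = pulseIn(ECHO_PIN, HIGH, 30000);
  return us * 0.0343 / 2.0;
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  diskA.attach(9);                // continuous-rotation servos spinning
  diskB.attach(10);               // the translucent pattern disks
  diskA.write(90);                // 90 = stop for continuous servos
  diskB.write(90);
}

void loop() {
  bool viewerPresent = readDistanceCm() < THRESHOLD_CM;
  diskA.write(viewerPresent ? 110 : 90);  // spin while a viewer is close
  diskB.write(viewerPresent ? 70 : 90);   // opposite direction
  delay(100);
}
```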

Building the Device

After discussing and sketching different possible shapes for our kaleidoscope, we narrowed down our pieces as follows:

  • Three pieces of acrylic mirror to build the prism
  • Three triangular pieces of transparent acrylic to build frames, holding the prism together
  •  A piece of white opaque acrylic as a stand, behind which the wheels and circuit are hidden. The servos are also mounted on this piece
  • Two laser-cut pieces of translucent green acrylic, which provide the patterns

 

After 3D-modelling the pieces, we handed the files to the Rapid Prototyping lab for laser cutting. We later took the pieces to the Maker Lab for final trimming and assembly.

 

Challenges and Outcomes

Making the Acryliscope raised many challenges. One was that we were trying to iterate with quick turnaround, but our laser cutting was outsourced and we had to wait for new cuts. What looked fine in the software didn’t pan out exactly as we had hoped; motor assemblies touching each other and visible circle brackets were two examples. Our solution was to redesign the back plate and have a rush cut made, yet this piece still needed further work with hand tools in the Maker Lab.

In addition, we had tested the code and the circuit before building the product and mounting the discs. However, after assembling all the pieces, we realized that the additional weight the discs imposed on the servos prevented the circuit from functioning. Hence, we had to add an additional power source for the servos and the sensor.
