Author Archive

Blog Post 1 – Review 3 related projects

 

Project 1 – The Haystack Project [1]

What:

The Haystack Project is an academic initiative led by independent researchers at the International Computer Science Institute (ICSI) at UC Berkeley and IMDEA Networks, in collaboration with UMass and Stony Brook University. At the core of the project is Lumen, an Android app (not yet available for iOS) that captures data right at the source: it analyzes your mobile traffic and helps you identify privacy leaks caused by your apps and the organizations collecting this information.

 

The Lumen app monitors what Android apps do with your data

 

How/Why:

The app reports back fully anonymized pieces of information, allowing researchers to understand the kind of personal information that’s being extracted and transmitted. “We’re seeing tons of things like some applications linking the MAC address of the Wi-Fi access point as a proxy for location,” says Vallina-Rodriguez. (A base station’s MAC address identifies it uniquely, and is used by Wi-Fi location databases run by Apple, Google, and other firms.)

 

So what:

Soon after turning on Lumen, you will learn interesting facts about the apps you run on your phone. You can use Lumen to understand where your apps connect, which data they share with third parties, and even how much traffic they waste on advertising and tracking, so you can decide whether to uninstall those that strike you as too intrusive. Note that not all devices provide the features Lumen requires to operate.

Project 2 – Smart mirror for ambient home environment [2]

What:

This project describes the design and development of a futuristic smart mirror that represents an unobtrusive interface for the ambient home environment. The mirror provides a natural means of interaction through which the residents can control the household smart appliances and access personalized services.


Why/How:

A service-oriented architecture has been adopted to develop and deploy the various services, where the mirror interface, the appliances, and the news and data feeds all use Web service communication mechanisms. The smart mirror functionalities have been demonstrated by developing an easily extendable home automation system that facilitates the integration of household appliances and various customized information services.



 

[Images: smart mirror]

 

Project 3 – MobileSync Web [3]

A study of user experiences for webpage interactions on computers working with mobile technology by Chen Ji

What:

With the development of mobile technology, smartphones have become a necessity in our daily lives. The various sensors and multi-touch screens of smartphones have enabled a large number of functional, polished mobile applications and games. However, support for interactive webpages is not as mature. Since, in most circumstances, a smartphone is at hand when people use computers to browse webpages, the author considers whether mobile technology might be effective in enhancing the user experience of browsing webpages on a computer, and whether it has the potential to become a new mode of web interaction. Through research, mainly user testing and analysis, a project in the form of an interactive webpage integrating mobile technology demonstrates the potential need for this combination. The project proposes a new way for people to browse interactive webpages, one that uses mobile technology to take the user experience somewhere new.

 

Why/How:

This thesis project makes use of various mobile sensors, but the camera is excluded because iOS does not allow the iPhone’s camera to be used by web browsers; the computer’s or laptop’s webcam is likewise unavailable in the latest versions of Chrome, Firefox, and Safari. Software engineering is a crucial limitation: in theory a mobile website runs on any hardware and any operating system, but in practice it has not worked out so well (Banga & Weinhold, 2014). The platforms (iOS, Android, Windows 8, etc.) have different permission models. For instance, on iOS, web apps are not allowed to access the camera, which means that, for QR code scanning, iPhone users need to install a native app with a scan function.

So what:

I personally haven’t figured out the “So what” of this project, but it shares concepts similar to mine regarding mobile app sensors.

 

[1] https://www.fastcompany.com/40407424/smartphone-apps-are-tracking-you-and-heres-how-to-monitor-what-they-know; https://haystack.mobi/

 

[2] https://www.researchgate.net/publication/4317313_Smart_mirror_for_ambient_home_environment

[3] http://openresearch.ocadu.ca/id/eprint/585/1/Ji_Chen_2016_MDES_DIGF_THESIS.pdf

From Data to Perception – Research Blog

Research on what kind of visualization we should use

We spent a few days researching the different kinds of visualizations we could use to communicate the data we were collecting.

1. Radar Charts

 

[Images: radar charts]

Radar charts are a way of comparing multiple quantitative variables. This makes them useful for seeing which variables have similar values, or whether there are outliers among them. Radar charts are also useful for seeing which variables score high or low within a dataset, making them ideal for displaying performance.
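For a sense of how little code this takes, here is a minimal radar chart sketch in p5.js; the labels and values are invented placeholders for illustration, not data from our Muse sessions. Each variable gets an evenly spaced angle, and each value sets the radius along that spoke.

```javascript
// Minimal p5.js radar chart. Labels and values are placeholder data.
const labels = ['Focus', 'Calm', 'Stress', 'Energy', 'Fatigue'];
const values = [0.8, 0.6, 0.3, 0.7, 0.4]; // normalized to 0..1

function setup() {
  createCanvas(400, 400);
  noLoop();
}

function draw() {
  background(255);
  translate(width / 2, height / 2);
  const maxR = 140;
  const n = labels.length;

  // One spoke per variable, evenly spaced around the circle.
  for (let i = 0; i < n; i++) {
    const a = -HALF_PI + (TWO_PI * i) / n; // start at 12 o'clock
    stroke(200);
    line(0, 0, maxR * cos(a), maxR * sin(a));
    noStroke();
    fill(0);
    text(labels[i], maxR * 1.15 * cos(a) - 15, maxR * 1.15 * sin(a));
  }

  // The data polygon: radius along each spoke is proportional to the value.
  stroke(50, 100, 200);
  fill(50, 100, 200, 60);
  beginShape();
  for (let i = 0; i < n; i++) {
    const a = -HALF_PI + (TWO_PI * i) / n;
    vertex(values[i] * maxR * cos(a), values[i] * maxR * sin(a));
  }
  endShape(CLOSE);
}
```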

2. Parallel Coordinates Plot

[Images: parallel coordinates plots]

 

This type of visualization is used for plotting multivariate, numerical data. Parallel coordinates plots are ideal for comparing many variables together and seeing the relationships between them; for example, you could compare an array of products with the same attributes (computer or car specs across different models).
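The mechanics are just as simple: one vertical axis per attribute, and one polyline per record. A minimal p5.js sketch, with made-up, pre-normalized rows standing in for real product specs:

```javascript
// Minimal p5.js parallel coordinates plot. Axes and rows are placeholder data
// (values already normalized to 0..1 per attribute).
const axes = ['Price', 'Weight', 'Battery', 'Speed'];
const rows = [
  [0.2, 0.8, 0.5, 0.3],
  [0.7, 0.4, 0.9, 0.6],
  [0.5, 0.5, 0.2, 0.9],
];

function setup() {
  createCanvas(500, 300);
  noLoop();
}

function draw() {
  background(255);
  const left = 60, right = width - 60, top = 40, bottom = height - 40;

  // One vertical axis per attribute, evenly spaced left to right.
  for (let i = 0; i < axes.length; i++) {
    const x = map(i, 0, axes.length - 1, left, right);
    stroke(0);
    line(x, top, x, bottom);
    noStroke();
    fill(0);
    text(axes[i], x - 15, top - 10);
  }

  // Each data row becomes one polyline across all the axes.
  stroke(50, 100, 200, 150);
  noFill();
  for (const row of rows) {
    beginShape();
    for (let i = 0; i < row.length; i++) {
      const x = map(i, 0, axes.length - 1, left, right);
      const y = map(row[i], 0, 1, bottom, top); // higher value plots higher
      vertex(x, y);
    }
    endShape();
  }
}
```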

3. Parallel Sets

Parallel Set charts are similar to Sankey Diagrams in the way they show flow and proportions, however, Parallel Sets don’t use arrows and they divide the flow-path at each displayed line-set.

[Images: parallel sets]

 

We finally decided to go with the following visualization:

[Images: final visualization design]

 

 

List of activities the participant needed to do

1. Play 10 games of speed chess
2. Work out for 30–45 minutes
3. Watch a horror movie
4. Play a sport/board game for 30–45 minutes
5. Play with an animal/dog
6. Control day (participant goes about their normal day)
7. Learn a new instrument
8. Watch family videos/photos
9. Watch a comedy show
10. Wordplay, quizzing, and other literary games
11. Learn a new language
12. Meditate
13. Play a video game
14. Memory test
15. Sleep

We tried out the Muse headset and found that the user had to be still while it was active for us to get accurate readings, so many of our planned activities had to be replaced with ones that did not involve much movement.

We planned how every session would take place –

Each session includes a participant and an observer who serves as a moderator.

While the session takes place, the observer takes notes about events, times, and external responses. These notes can be written or audio. We also take three photos for documentation purposes: one at the beginning of the session, one in the middle, and one at the end.

At the end of each session, we record the quantitative data we get from the Muse and submit it to a Google Doc. We also note down descriptions of the sessions so that we can trace activities to particular times and events.

Muse Activity Recording

[Images: photos from the Muse recording sessions]

 

Mobile App and Website Design

We designed the Muse Monitor app using our findings while staying true to the Muse brand. We conceptualized this as what our final product would look like and where we would display our final visualization.

 

[Images: Muse Monitor app screens]

We also designed a website based on these designs.

[Images: website designs]

 

Digital Games – Weekly Blogs

Link to Weekly Blog 1

https://drive.google.com/a/ocadu.ca/file/d/0B0-O5G42CfqEZ1MtU20zY2ZqZUk/view?usp=sharing

Link to Weekly Blog 2

https://drive.google.com/a/ocadu.ca/file/d/0B0-O5G42CfqEdDdBM1owd3N2ZTg/view?usp=sharing

Link to Weekly Blog 3

https://drive.google.com/a/ocadu.ca/file/d/0B0-O5G42CfqEMm9obFJQbkVpOEU/view?usp=sharing

Link to Weekly Blog 4

https://drive.google.com/a/ocadu.ca/file/d/0B0-O5G42CfqEb1BrV2w2ckFBQVE/view?usp=sharing

Link to Weekly Blog 5

https://drive.google.com/a/ocadu.ca/file/d/0B0-O5G42CfqEbE9ILUwzWWZVdUk/view?usp=sharing

Link to Weekly Blog 6

https://drive.google.com/a/ocadu.ca/file/d/0B0-O5G42CfqESmduZ29YbF8tc00/view?usp=sharing

CASE STUDY – Discovery Wall by Squint Opera

[Image: Discovery Wall]

Presented by Sara Gazzad and Mudit Ganguly

Link to presentation 

Client: Weill Cornell Medical College
Creative Direction: Squint/Opera
Technical Direction: Hirsch&Mann
Detail Design: The Cross Kings
Fabrication: DCL
Optics: Ely Optics

ARCHITECT: Ennead Architects
INTERIOR DESIGNER: Ennead Architects
INDUSTRY RESOURCE: Hirsch&Mann
INDUSTRY RESOURCE: The Cross Kings
INDUSTRY RESOURCE: Design Communications Limited

Introduction

Biomedical research centres aren’t renowned for creative enterprise (why should they be?), but across the pond one New York organisation is bucking the trend with a stunning new digital artwork. The Weill Cornell Medical College commissioned London-based creative agencies Squint/Opera and Hirsch&Mann to produce the Discovery Wall for its new Manhattan premises, and the results are super-impressive. The final piece comprises 2,800 LED screens set behind a bank of lenticular discs. For passers-by it can be viewed as a large-scale digital artwork, but up close the screens display content relating to the college’s pioneering scientific research.

There’s a nice making-of below in which the creatives explain the project’s ongoing potential, built around the college being able to upload content through its CMS. As Daniel Hirschmann puts it: “It is extensible beyond us by design…you get to make something and watch it get better and better as people add more content over time. That is amazing!”

[Image: Discovery Wall]

What is it?

A wall-sized digital artwork created from thousands of tiny screens and lenses was designed by Squint/Opera for the $650m Belfer Research Building, part of Weill Cornell Medical College (WCMC) in Manhattan. The shimmering and animated foyer installation celebrates the college’s research work.

The large-scale digital installation (approx. 4.6 m x 2.7 m) comprises 2,800 mini screens set in a grid pattern behind a panel of thousands of circular acrylic discs, a reference to the lenses used in medical research. The dual-layer construction makes it possible to read the wall from a distance as a single image; up close, each screen shows information about medical discoveries and other news from WCMC’s website. The installation is programmed so that images and stories change constantly.

To bring the concept to life, every aspect of the hardware was designed from scratch.


[Image: Discovery Wall]

Goals

The artwork operates at three main viewing distances:

1. Far views display ‘macro’ images and text
2. Mid views display ‘mezzo’ layers of additional information
3. Up close views display ‘micro’, detailed levels of information

The goal of the installation was to celebrate the support of the building’s donors and to promote the research and discoveries made in the building. It was also designed as an intriguing and beautiful object, to be viewed close up in the lobby or seen from outside the building as a single image. Each screen has information about medical discoveries and other news fed from WCMC’s website, and the images and stories change constantly. Through the language of discovery, passers-by are drawn in and encouraged to learn more.

The vision of New York-based Ennead Architects was to commission an artwork that would promote collaboration throughout the building and give a light touch to the interior fabric. To achieve this, electronics were colour-matched to the stone cladding and the circuit boards were mounted on a transparent frame. The clear acrylic lenses magnify the stonework at oblique angles and focus on the screens when facing the wall square on. This elegant approach complements the natural feel of the building.

The double layer of screens and lenses creates a unique visual effect: the wall reads as a whole from a long distance, while the screens can be appreciated as single elements up close. The creators use this characteristic to build large-scale visuals out of smaller images taken from the archives of the Belfer research building. Thanks to this set-up, the installation shows the research and discoveries achieved in the Belfer building in a way that is visually appealing and can be enjoyed from the street or from the lobby.
[Image: Discovery Wall]

Process

During the commission, Ennead Architects advised the client and briefed Squint/Opera to develop creative concepts. The concepts were delivered by a team of specialists brought together by Squint/Opera: Hirsch&Mann led the technology design, production, and delivery; The Cross Kings led the physical detail design; and fabrication in Boston was completed by Design Communications Limited.

Squint/Opera worked closely with Hirsch&Mann to design and build all components from scratch. This involved creating many prototypes, which allowed the team to test ideas and communicate concepts to all stakeholders, taking them on the journey of developing a piece of art. The prototypes acted as a key discussion tool beyond drawings or presentations, and allowed the team to refine the design and align it with the architectural vision and the brief.

From the early stages, Squint/Opera worked with Ennead Architects to ensure practical elements were successfully integrated within the building. This included provision of extractor fans, IT and AV conduits and storage, appropriate light levels, and structural supports, ensuring the artwork will remain a permanent homage to medical discovery.
To develop the software, the team worked with variable.io to create both the front-end CMS and the back-end data storage, crunching, encoding, and control. They were able to update playback on all 2,800 LCD screens at a rate of 20 fps. The software controller was equipped with algorithms for tiled content distribution, procedural layout generation, and playlist scheduling. The whole architecture ran on NodeJS with CouchDB, talking to the hardware over a serial port via custom protocols developed by White Wing Logic.
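To make that architecture concrete, here is a speculative Node.js sketch of such a playback loop. This is our reading of the description, not variable.io’s or White Wing Logic’s actual code: the serial path, baud rate, frame header, and tile payload are invented for illustration; only the 70 x 40 screen grid and the roughly 20 fps rate come from the sources.

```javascript
// Speculative sketch of a tiled-playback controller like the one described.
// Assumes the 'serialport' npm package (v10+ API); the wire format below
// ([0xA5, col, row, payload...]) is invented for illustration.
const { SerialPort } = require('serialport');

const COLS = 70, ROWS = 40; // 2,800 screens in the wall
const FPS = 20;             // playback rate quoted above
const port = new SerialPort({ path: '/dev/ttyUSB0', baudRate: 921600 });

// Stand-in for the CMS/CouchDB content pipeline: return the pixel payload
// scheduled for one screen on one frame.
function tileForScreen(col, row, frame) {
  return Buffer.alloc(16, (col + row + frame) % 256); // placeholder pixels
}

let frame = 0;
setInterval(() => {
  for (let row = 0; row < ROWS; row++) {
    for (let col = 0; col < COLS; col++) {
      const header = Buffer.from([0xa5, col, row]);
      port.write(Buffer.concat([header, tileForScreen(col, row, frame)]));
    }
  }
  frame++;
}, 1000 / FPS);
```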
[Images: Discovery Wall CMS screens]

For the realization of the Discovery Wall, the makers created most of the hardware components from scratch. They chose a tiny screen with a high pixel density that can be used as a single-tone pixel or as part of a high-resolution composition in which all the screens create a larger image. As the screens come from a popular consumer device, the team had to reverse-engineer them and find the ideal conditions for their operation, a huge technical challenge in itself.

The wiring and mounting of the pixels is achieved by grouping eight displays onto a single printed circuit board, with its own control components and memory on the back side. The copper traces are plated in gold to give the installation a refined appearance. Each column comprises five PCBs and 40 displays.

Content

Content displayed on the Discovery Wall can be viewed differently at so-called macro, mezzo and micro levels. By looking at the installation in its macro view from across the road, visitors will see a large-scale high-resolution image on what appears to be one large display. The closer individuals get, however, the more levels of detail are uncovered.
At the mezzo level, from outside the window of the building, visitors can see titles of research topics and clusters of images amongst the LED screens. At the micro level, right up close to the installation, visitors can see high-resolution images and paragraphs of related text on the individual screens.

Content is selected and scheduled using a content management system that was designed for use with the Discovery Wall. As new discoveries are made at the research center, the content is updated. In addition to the layers of content, the curved lenses create a lenticular effect for each mini screen, changing how the artwork looks depending on where the viewer is standing.
[Image: Discovery Wall]

Additional Info

The work is designed to be permanent and has a modular design. All its parts are replaceable and serviceable, meaning maintenance time and costs can be kept to a minimum. It has a power consumption of less than 1 kW.

Each screen is a reverse-engineered iPod nano LCD; the resolution has been tested to ensure the screens can be read at the optimum image size at both macro and micro levels.

LCD screen resolution: 240 x 240 pixels
Media wall macro resolution: 70 x 40 pixels (one pixel per screen)
Total media wall resolution: 16,800 x 9,600 pixels
Power requirements: 1 kW (less than a standard heater)
Lifespan: 10+ years
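As a sanity check, these figures are mutually consistent: each column holds 5 PCBs x 8 displays = 40 displays, 70 columns x 40 displays gives the 2,800 screens quoted above, and at 240 x 240 pixels per screen the wall totals 70 x 240 = 16,800 by 40 x 240 = 9,600 pixels.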

Gallery

 

Works cited

http://www.itsnicethat.com/articles/discovery-wall

http://thecreatorsproject.vice.com/en_uk/blog/2800-screens-create-internet-enabled-lcd-mosaic

http://newatlas.com/weill-cornell-medical-college-discovery-wall/32559/

http://www.squintopera.com/projects/all-work/wcmc-discovery-wall-2/

https://www.codaworx.com/project/wcmc-discovery-wall-weill-cornell-medical-college

WCMC Discovery Wall

Discovery wall – Zoom into medical research

 

Experiment – Eaves

Eaves

Project Description

Eaves is a metadata-gathering prototype that captures audio levels in a room and sends the collected values over the internet. Project Eaves converts this data into artifacts that can be put on display to raise awareness about sound levels.

Video Link

Eaves – Prototype

[Image: Eaves prototype]

Prototype in place at three locations

Inside Eaves

Eaves is built around a Feather board that transmits data wirelessly to a cloud service, from which the data can later be retrieved. It also has a sound detection sensor module that grabs sound values from the environment. The rig is powered by a rechargeable USB battery pack.
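On the retrieval side, pulling the logged values back down and comparing rooms might look like the Node.js sketch below. The endpoint URL and JSON shape are placeholders (the real feed depends on the cloud service the Feather posts to), and a Node version with built-in fetch (18+) is assumed.

```javascript
// Hypothetical sketch of retrieving Eaves readings from the cloud and
// comparing rooms. URL and JSON shape are placeholders, not the real feed.
async function fetchReadings(room) {
  const res = await fetch(`https://example.com/eaves/${room}/data`);
  return res.json(); // assumed shape: [{ value: 512, created_at: '...' }, ...]
}

async function compareRooms(rooms) {
  for (const room of rooms) {
    const readings = await fetchReadings(room);
    const avg = readings.reduce((sum, r) => sum + r.value, 0) / readings.length;
    console.log(`${room}: average sound level ${avg.toFixed(1)}`);
  }
}

compareRooms(['room-a', 'room-b', 'room-c']);
```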

What the Data showed us


Comparison of Sound Values between three rooms.

 


Visualization of sound levels on a line graph

 

Artistic Visualization of sound levels

The data showed us the sound values for each of our three areas, enabling us to find out which area was the loudest and which was the quietest.

Code

https://github.com/afroozsamaei/Eaves-Mudit

Circuit Diagram


Credit: April Xie

Q&A

1. How does this device respond to the data it is collecting?

Eaves does not respond to the data it collects. It is simply an input that collects data.

2. How does this behaviour evolve as it collects more data about or for this person/place?

Eaves was meant to be inconspicuous and hidden. It’s meant to be an objective observer only.

3. What is the overall size and form of the object?

The housing is 7 inches tall, and the wires that allow it to hang from objects increase its height to 1 foot.

4. Does the object encourage interaction from others?

The object avoids interaction from others: if someone touches the microphone, the sensor values shoot up. Eaves was designed to be hidden and to blend in with the white ceilings.

Concept Sketches

Project Context

Silence in the classroom can boost children’s exam results, improve their self-esteem and cut down on bad behaviour, according to new research.

http://www.telegraph.co.uk/education/educationnews/8841649/Silence-is-golden-how-keeping-quiet-in-the-classroom-can-boost-results.html

http://www.huffingtonpost.com/michael-taft/noise-pollution-health-effects_b_905860.html

 

References

https://learn.adafruit.com/adafruit-microphone-amplifier-breakout/measuring-sound-levels

http://www.digikey.com/en/articles/techzone/2011/jul/the-five-senses-of-sensors—sound

 

Experiment 2 – Fascist Falldown

by Nadine Lessio and Mudit Ganguly
Creation and Computation
Digital Futures / OCAD U

Code: https://github.com/sharkwheels/CC_assignment2/tree/master/falldown_02

Weblink: http://tinyurl.com/facistfalldown


Introduction

Fascist Falldown is a physical bowling game, sort of a cross between a carnival game and beer pong. It consists of upwards of 20 mobile devices that act as pins. Users load the website (the tinyurl link above) in their phone browsers and are randomly assigned a dictator. Each user then places the phone on a cardboard stand that holds it in place, and players take turns bowling the dictators down using a ball (the ball of democracy).

Take turns bowling over the Illuminati!

How the game works

Development

When we first started this project, we had thought about doing a joint map game like D&D. Using the phone’s rotation options, we figured it would make a nice little puzzle: you click or shake to change the map tile, rotate the device to rotate the piece, and then everyone has to put their phones together to make a full map.

But during our first user-testing phase we found that the rotation of the phone wasn’t working the way we wanted, so we set out to think of another idea. Nadine simply slid her phone across the floor, and we were both like: “bowling, with phones, that would be pretty amusing”.

We then discussed what we’d like to do with bowling. Simply having an animation of bowling pins would have been too simple and lazy, so we started thinking about things being knocked over. We finally settled on dictators. The idea of dictators being knocked over is not a new concept: every time a dictatorship falls, the statues of the dictator are torn down, demolished, or defaced.

We agreed on using 10 dictators who are infamous across the world:

  1. Saddam Hussein
  2. Vladimir Putin
  3. Donald Trump
  4. Benito Mussolini
  5. Hitler
  6. Robert Mugabe
  7. Kim Jong-un
  8. Emperor Hirohito
  9. Lenin
  10. Mao

User Flow

  • Users load the weblink
  • Users click a button to get a random dictator
  • The user receives his or her dictator (State_01)
  • The user puts the phone down on the stand
  • The user bowls and knocks the phone over
  • The device registers a change on the X axis
  • The code loads an image of the fallen dictator with the score (State_02)
  • Users can then touch the screen to reset the code and get a new random dictator

Because we had 10 different dictators, we listed the assets we would need for each of them: State_01 and State_02 images (before and after the phone gets knocked down). The Before image had the name and picture of a particular dictator, while the After image had a color overlay and a number indicating the points you got for knocking that dictator down.

 

[Images: Before (State_01) and After (State_02)]

During development, we discovered some interesting quirks of mobile browsers. For starters, common interactions like “swipe” are used by iOS for navigation, so using them for game interaction was buggy; we stuck to touch. The rotation functions in p5.js also proved to be a bit of a challenge, as they only work in WebGL mode. We got around that first by looking at accelerometer data, and then by using p5’s underlying canvas to play with window rotation. We then began testing the deviceMoved() function, which would record when the device moved beyond a threshold on the X axis.
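A minimal sketch of this two-state detection in p5.js (the image file names and threshold value are placeholders, not our production assets):

```javascript
// Two-state knock-over detection, as described above. Asset names and the
// movement threshold are placeholders to be tuned on a real device.
let state = 1; // 1 = dictator standing (State_01), 2 = knocked down (State_02)
let imgBefore, imgAfter;

function preload() {
  imgBefore = loadImage('state_01.png'); // placeholder assets
  imgAfter = loadImage('state_02.png');
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  setMoveThreshold(5); // ignore small jitters
}

function draw() {
  image(state === 1 ? imgBefore : imgAfter, 0, 0, width, height);
}

// p5 calls this when the device moves beyond the threshold; we then check
// the X-axis acceleration specifically, since the knock registers there.
function deviceMoved() {
  if (state === 1 && abs(accelerationX) > 5) {
    state = 2; // the phone has been bowled over
  }
}

function touchStarted() {
  if (state === 2) state = 1; // touch resets for a new random dictator
  return false; // stop the browser from treating the touch as navigation
}
```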

[Images: first iteration]

We then added our own images to the code, along with a button that let users get a randomly assigned dictator. We also decided to divide the dictators into three point categories (low, medium, and high) and color-code them so that users could aim for a higher score.

[Images: second iteration]


Art and code also went through a few iterations. At first our graphics were very photo-oriented, but they later became illustrations, based on testing feedback that our graphics weren’t visible enough. The code originally started as more of a simple random/reset, but later we added more states to prevent false state tripping and to better deal with audio. We also had to work around an iOS restriction on auto-playing sounds: a user must interact with the touch screen first to make the browser’s audio live.

We then added sound, picking arcade sounds for each image. While the State_01 images are loaded, the arcade sounds play on a loop until the phone is knocked over. When a phone is knocked down, a new sound plays: bowling pins being knocked over.
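Wiring that audio flow up with the p5.sound library, including the iOS touch-to-unlock workaround mentioned above, can look like this (current p5.sound provides userStartAudio() for the unlock; the sound file names are placeholders):

```javascript
// Sound states plus the iOS audio-unlock workaround, assuming p5.sound.
let arcadeLoop, pinCrash;
let audioReady = false;

function preload() {
  arcadeLoop = loadSound('arcade.mp3'); // placeholder file names
  pinCrash = loadSound('pins.mp3');
}

function touchStarted() {
  if (!audioReady) {
    userStartAudio();  // must happen inside a user gesture on iOS
    arcadeLoop.loop(); // State_01: arcade sounds loop until knocked over
    audioReady = true;
  }
}

// Call this from the game logic when the phone is knocked over (State_02).
function onKnockedOver() {
  arcadeLoop.stop();
  pinCrash.play(); // the sound of bowling pins being knocked over
}
```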

Sounds of arcade for State_01

Sound of bowling pins for State_02

 

We then created the new look for our dictators and added the three point categories (low, medium, and high), color-coding them so that users could aim for a higher score.

Images for State_01

 

Images for State_02


While we were working on the visuals and code, we were also simultaneously creating stands for the phones. We made a very basic game area (marked off with just tape) and went with basic rules, the idea being that you could play this with whatever you might have on hand. Our materials followed suit: basic things like cardboard stands (from a template) and tape mean you can upcycle items like delivery boxes to play.

 


The Eye of the Illuminati shape: chosen because it’s reminiscent of a basic bowling-pin setup, so it’s familiar, but it is also a pop-culture symbol tied to conspiracies and shady government. It also gives a starting point if users want to remix or change the eye into something else.

Bugs Encountered

  • iOS/Android compatibility
  • Coding sound for both iOS and Android platforms
  • Button presses on iOS/Android platforms
  • Using the swipe function on the platforms

Future Iterations

  • Using smooth animations between states
  • In-game scorecard
  • Transitions between sounds
  • Better randomizer
  • Incorporating difficulty settings

 

 Project Context

Shit is getting too real with Trump in the election. Let’s bowl over some fascists and for a moment pretend things are ok!

When we spoke to our classmates about this project, almost all of them asked if we were including Donald Trump in our lineup. Due to popular demand, we couldn’t help but comply with our consumers’ needs.


 

References

Send me to heaven

http://www.techtimes.com/articles/55566/20150526/send-heaven-android-app-will-last-game-play-phone.htm

The Android app Send Me To Heaven, or simply S.M.T.H., is a sports game where users compete against others around the world to see how high their phones can fly in the air. The phone registers the height achieved, collecting points along the way, and of course, the closer to the heavens, the better. The results are uploaded onto leaderboards, which include the World Top 10, Week Top 10, Day Top 10, and Local Top 10.

Created by developer Carrot Pop, S.M.T.H. requires Android 2.3.3 and up, and works only with devices that have an ARMv7 processor.

 

 

Tumball

Play a cricket-inspired game using leaf blowers and a tumbleweed.
Why related:
– DIY, roll your own
– Up-cycling things around you
– Bizarre use of common objects

Wii Bowling

http://strategywiki.org/wiki/Wii_Sports/Bowling_Training

There’s a Power Throws mode where the game keeps adding more and more pins, up to 100, letting you just play with the game physics.

 

Material Madlibs – Flutter

 

 

[Images: Flutter]

Flutter Arduino Code/sketch_oct06a-_alternating_outputs.pdf

[Images: Flutter process photos]
