Category: Case Studies

inFORM-Tangible Media Group in MIT Media Lab

Shreeya Tyagi, Thoreau Bakker and Jeffin Philip

 

Introduction

inFORM is a Dynamic Shape Display that can render 3D content physically, so users can interact with digital information in a tangible way. The project was created by the Tangible Media Group at the MIT Media Lab. The purpose of this case study was to understand the design process and the research involved in the project.

The MIT Media Lab created an interface based on inFORM that gave urban planners more control to shape and view entire cities. Dynamic shape displays change how we collaborate virtually: we can touch and manipulate objects from a distance, and collaborate on 3D data sets.

MIT Media Lab

The MIT Media Lab is an interdisciplinary research laboratory at MIT devoted to projects at the intersection of technology, multimedia, sciences, art, and design. “Inventing a better future” is the theme of the Media Lab’s work. A current emphasis of Media Lab research, which encompasses the work of several research groups, is on human adaptability.

Overview of the Tangible Media Group

The Tangible Media Group, founded by Professor Hiroshi Ishii, explores the Tangible Bits and Radical Atoms visions, seeking to seamlessly couple the dual worlds of bits and atoms by giving dynamic physical form to digital information and computation.

Vision driven design research of the Tangible Media Group

“Looking back on the history of HCI, they notice that quantum leaps have rarely resulted from studies on users’ needs; they have instead stemmed from the passion and dreams of visionaries like Douglas Engelbart. By looking beyond current limitations, the group believes that vision-driven design is critical to foster these quantum leaps, while also complementing needs-driven and technology-driven design. From Tangible Bits, an early example of their vision-driven research, they are shifting to Radical Atoms, which seeks out new guiding principles and concepts to view the world of bits and atoms with new eyes, with the goal of trailblazing a new realm in interaction design.

From the three approaches in design research: technology-driven, needs-driven, and vision-driven, they focus on the vision-driven approach due to its lifespan. They know that technologies become obsolete in ~1 year, users’ needs change quickly and dramatically in ~10 years. However, they believe that a clear vision can last beyond our lifespan. While they might need to wait decades before atom hackers (like material scientists or self-organizing nano-robot engineers) can invent the necessary enabling technologies for Radical Atoms, we strongly believe the exploration of interaction design should begin from today.” (Tangible Media Group)

Context, Significance, Related Works

Hiroshi Ishii, head of the Tangible Media Group (TMG), published an interesting paper in 2008 contextualizing the inFORM project in terms of human evolution. In it, he notes that humans have developed “sophisticated skills for sensing and manipulating our physical environment”, yet most of these skills are “not used when interacting with the digital world where interaction is largely confined to graphical user interfaces“ (Ishii 32). He argues that, despite the ubiquity of graphical user interfaces (GUIs) as championed by Microsoft and Apple, there is something greatly lacking in these methods of interaction — they do not allow us to “take advantage of our evolved dexterity or utilize our skills in manipulating physical objects” (32). That this project addresses these challenges and presents an alternative way to interact with digital content is both fascinating and valuable. It is not the first or only technology to enable novel interaction with computers, but the effect it creates is almost magical.

If other, potentially more intuitive technologies exist, why have they not been adopted? Perhaps the current dominant paradigm of keyboards, mice, and, increasingly, touchscreens is to some extent shaped by the market. The assumption that it has to do, at least in part, with economies of scale seems plausible; perhaps when demand increases enough, more alternative interaction technologies will become available. The tremendous potential of technologies like inFORM to harness the incredible ‘touch’ skills humans already possess speaks to the importance of research in these fields. While inFORM was both groundbreaking and unlike anything seen before, a number of other projects relate to it conceptually.

The following examples will highlight other technologies that deal with interacting with data and virtual objects in unconventional ways.

Haptic Sculpting Device

There is a research lab at the University of Guelph called the Digital Haptic Lab, run by Dr. John Phillips and Christian Giroux. The lab takes its name from a sculpting device that provides haptic feedback through small motors embedded in a multi-axis, pen-type tool. As the user ‘sculpts’ an onscreen three-dimensional object, the device varies the motor feedback to give the feeling of interacting with a real-world object. The effect is almost startling, as one is able to ‘feel’, with the muscles of the hand, a virtual object that is not actually there. The device has a number of research and commercial applications, one of which is the design and sculpting of coins for general circulation.

This device is quite different from inFORM in that it represents a virtual object in 3D space without the object actually being there, whereas inFORM, as a “shape display”, actually renders virtual objects as real objects (albeit at low resolution) with its extendable pin blocks. Although very different, the underlying issues they address are related: how we make virtual and remote objects tangible.

Automotive Design: Still Using Clay

Another striking example of the relationship between technology and tangibility is the automotive industry’s continued use of clay to model vehicles. Despite access to the best 3D software packages, holography, VR, and other cutting-edge technology, it is still common, even mandatory, in many design studios to build clay models. Take the following quote from a Wall Street Journal article, one of many discussing this fascinating phenomenon:

“Indeed, despite Ford’s use of three-dimensional imaging technology that allows executives to don headsets and see a virtual vehicle in a computer-generated cityscape, the top brass won’t sign off on producing a new car—a decision that can involve spending a billion dollars or more—until they see full-size physical models” (see reference below).

The full-size models the article describes are made of clay, built on an armature and refined using hand-held scrapers. For all the utility afforded by new technology, these tools still lack something that requires a real human touch and the ability to see a full-size model in the real world. The article notes, however, that it is not an either-or scenario: 3D modeling is used extensively in the design process. The workflow moves back and forth between the two, and each tool affords its own abilities and perceptions.

 

 

Again, this example is very different from the inFORM project, even more so than the haptic device. It is presented, however, as an example of the importance of tangibility and real-world objects. That these remain essential even for multinational corporations with huge budgets and access to the latest technology speaks to the value of what inFORM is doing. In a way, the inFORM project incorporates the best of both worlds: the reproducibility and flexibility of the digital world, with the intuitive qualities and spatiality of analog, ‘real-world’ objects.

Other related projects:
Kinetic blocks: http://www.theverge.com/2015/10/14/9529947/mit-kinetic-blocks-shape-display-video

Technical Overview: How does it work? Sensing method, actuation method, materials, and relationship to user and audience

inFORM is a dynamic shape display and, at the same time, a tangible user interface. The key principles behind the interactivity of inFORM are dynamic affordances and constraints, implemented through haptics, actuated affordances, actuation of physical objects, and so on. By mimicking familiar interfaces from the tangible and physical domains, the inFORM interface invites and encourages the user to interact with it. By constraining these interactions, user input can be measured and used to control other actions.

The device consists of a shape display with rectangular pins, each controlled by a linear actuator, a Kinect to sense depth information, and a projector to display data. When used as a communication device, both end users have a set of these components.

Each pin of the shape display can be moved individually by its linear actuator. The movement of these actuators is controlled by Arduino boards, while depth calculation and other processing are done on an external computer. The linear actuator mechanism also employs PID controllers to track the position of each pin and to provide accurate motion with continuous error correction.
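The inFORM source code is not reproduced here, but the control idea is standard. Below is a minimal, illustrative PID loop in Python showing how a pin's position error can be turned into an actuator command; the gains, timing, and crude plant model are invented example values, not inFORM's.

```python
# Illustrative sketch (not the inFORM source): a basic PID position
# controller of the kind used to drive each pin's linear actuator.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured, dt):
        """Return an actuator command from the position error."""
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive one pin toward a 40 mm extension in 10 ms steps.
pid = PID(kp=2.0, ki=0.5, kd=0.1)
position = 0.0
for _ in range(100):
    command = pid.update(target=40.0, measured=position, dt=0.01)
    position += command * 0.01  # crude stand-in for the actuator dynamics
```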

The depth information is calculated from the Kinect’s depth image stream and mapped to the movement range of the shape display pins. The projector is used for visual feedback.
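As a sketch of that mapping, the following Python snippet downsamples a depth frame to a pin grid and scales depth onto pin travel. The grid size, travel range, and depth limits are assumed placeholders, not inFORM's actual numbers.

```python
# Illustrative sketch: pooling a Kinect depth frame down to the pin grid
# and mapping depth values onto the pins' travel range.
import numpy as np

PIN_ROWS, PIN_COLS = 30, 30          # assumed pin grid
PIN_TRAVEL_MM = 100.0                # assumed actuator travel
NEAR_MM, FAR_MM = 500.0, 1500.0      # depth range of interest

def depth_to_pin_heights(depth_frame: np.ndarray) -> np.ndarray:
    """Map an (H, W) depth image in millimetres to pin heights in mm."""
    h, w = depth_frame.shape
    # Crop to a multiple of the grid, then average-pool one value per pin.
    cropped = depth_frame[: h - h % PIN_ROWS, : w - w % PIN_COLS]
    ch, cw = cropped.shape
    pooled = cropped.reshape(PIN_ROWS, ch // PIN_ROWS,
                             PIN_COLS, cw // PIN_COLS).mean(axis=(1, 3))
    # Nearer objects raise pins higher; clamp to the travel range.
    normalized = np.clip((FAR_MM - pooled) / (FAR_MM - NEAR_MM), 0.0, 1.0)
    return normalized * PIN_TRAVEL_MM

heights = depth_to_pin_heights(np.full((480, 640), 900.0))  # 60 mm everywhere
```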

 

References/Bibliography

Tangible Media Group. “inFORM.” MIT Media Lab. http://tangible.media.mit.edu/project/inform

“One Thing Isn’t New in Car Design: Clay Prototypes.” The Wall Street Journal. Dow Jones & Company, 2014. Web. 13 Dec. 2016.

Ishii, Hiroshi. “The Tangible User Interface and Its Evolution.” Communications of the ACM 51.6 (2008): 32. Web.

Ishii, Hiroshi. “Materiable: Rendering Dynamic Material Properties in Response to Direct Physical Touch with Shape Changing Interfaces.” http://tmg-trackr.media.mit.edu/publishedmedia/Papers/598-Materiable%20Rendering%20Dynamic%20Material/Published/PDF

Ishii, Hiroshi. “TRANSFORM: Embodiment of “Radical Atoms” at Milano Design Week.” http://tmg-trackr.media.mit.edu/publishedmedia/Papers/554-TRANSFORM%20Embodiment%20of%20Radical/Published/PDF

Ishii, Hiroshi. “Radical Atoms: Beyond Tangible Bits, Toward Transformable Materials.” http://tmg-trackr.media.mit.edu/publishedmedia/Papers/485-Radical%20Atoms%20Beyond%20Tangible/Published/PDF

Minimaforms Petting Zoo

Case Study by Orlando Bascuñán, Bijun Chen

Presentation

General Overview:

Minimaforms was founded in 2002 by brothers Stephen and Theodore Spyropoulos as an experimental architecture and design practice. Using design as a mode of enquiry, the studio explores architecture and design that can enable new forms of communication. Embracing a generative and behavioral approach, the studio develops open systems that construct participatory and interactive frameworks engaging the everyday.

Pushing the boundaries of art, architecture, and design, the work of Minimaforms is interdisciplinary and forward-thinking, exploring digital design and fabrication along with communication technologies, and seeking to construct spaces of social and material interaction. In 2010, Minimaforms was nominated for the International Chernikhov Prize in architecture. In 2008, their project Memory Cloud was named one of the top ten international public art installations by the Guardian.

Recent projects include two thematic pier landmarks and the illumination concept for Renzo Piano’s master-planned 760-acre national park in Athens, a large-scale land artwork in Norway, a vehicle in collaboration with artist Krzysztof Wodiczko, a behavior-based robotic installation for the FRAC Centre, and an immersive ephemeral environment for the city of Detroit. The work of Minimaforms is in the permanent collections of the FRAC Centre (France), the Signum Foundation (Poland), and the Archigram Archive (UK). Recent exhibitions have included work shown at the Museum of Modern Art (New York), the Detroit Institute of Arts, the ICA (London), the FRAC Centre (France), Futura Gallery (Prague), the Slovak National Gallery (Bratislava), and the Architecture Foundation (UK). They have been featured in international media including the BBC, BBC Radio’s Robert Elms Show, Wired, Fast Company, the Guardian, Blueprint, and Icon Magazine, and were named Creative Review’s “One to Watch.”

Petting Zoo FRAC Centre

Petting Zoo is the latest work developed by the experimental architecture and design studio Minimaforms. The project is a speculative, life-like robotic environment that raises questions about how future environments could actively enable new forms of communication with the everyday. Artificially intelligent creatures have been designed with the capacity to learn and explore behaviors through interaction with participants. Within this immersive installation, interaction with the pets fosters human curiosity and play, forging intimate exchanges that are emotive and evolve over time. Beyond technology, the project explores new forms of enabled communication between people and their environment.

The installation exhibits life-like attributes through forms of conversational interaction, establishing communication with users that is emotive and sensorial. Conceived as an immersive installation environment, social and synthetic forms of systemic interaction allow the pets to engage and evolve their behaviors over time. The pets stimulate participation through animate behaviors communicated through kinesis, sound, and illumination. These behaviors evolve over time through interaction, enabling each pet to develop a personality. Pet interactions are stimulated by human users or by other pets within the population. Intimacy and curiosity are explored as enabling agents that externalize personal experience through forms of direct visual, haptic, and aural communication.

Early historical experiments examining similar issues can be found in the seminal cybernetic work of the Senster, developed by the British cybernetic sculptor Edward Ihnatowicz; Gordon Pask’s The Colloquy of Mobiles; and W. Grey Walter’s first electronic autonomous robots (Tortoises), called Elmer and Elsie. Petting Zoo continues Minimaforms’ ongoing research into participatory and enabling frameworks, examining cybernetic and behavior-based design systems found in other works of theirs such as Becoming Animal, exhibited in MoMA’s Talk to Me show, and Memory Cloud (Detroit, 2011; ICA London, 2008).

Behaviors
Internal patterns of observation allow the pets to synchronize movements and behavioral responses. Through active prototyping, a correlated digital/analogue feedback system has been developed that allows the system to evolve relationships and avoid repetitive controller tendencies.

Spatial Interfacing
Awareness of participants is enabled through camera tracking and data scanning that identifies human presence within contextual scenes. Real-time camera streams are processed and coupled with blob tracking and optical flow analysis to locate the positions and gestural activity of participants. Inactive participation in the environment can stimulate responses of disinterest and boredom.
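Minimaforms' actual pipeline isn't published, but the technique named here is standard. Here is a minimal sketch with OpenCV, assuming a single RGB camera: background subtraction isolates foreground "blobs", whose bounding boxes stand in for participant positions.

```python
# Illustrative sketch, not Minimaforms' code: locating participants in a
# camera stream with background subtraction and contour ("blob") tracking.
import cv2

cap = cv2.VideoCapture(0)                      # any camera stream
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)             # foreground pixels
    mask = cv2.medianBlur(mask, 5)             # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 2000:          # ignore small blobs
            continue
        x, y, w, h = cv2.boundingRect(c)       # participant position
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("participants", frame)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```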

Multi-User Interaction
Collective participation is enabled by the system’s ability to identify and map, in real time, the number of performers within a durational sequence.

 

Context/Related Projects

The Furby Experiment

In the Talking to Machines episode, Radiolab reported on the “Furby experiment”, where a Barbie, a hamster, and a Furby were presented to six brave kids in order to learn how compassionate we are about machine-simulated feelings.

The experiment consisted of the kids holding the three “subjects” upside down for as long as they felt comfortable, acting as a kind of emotional Turing test.

The results: the kids could hold the Barbie upside down indefinitely, or until they got tired. The hamster they could hold for only a painful eight seconds. And the Furby they could hold for roughly a minute, placing it closer to the hamster than to the Barbie.

The kids’ statements suggested that the Furby’s reaction to being held upside down made them uncomfortable, to the point that they felt bad about it.

Caleb Chung, the creator of the Furby, explains the reactions by starting with what he thinks an object needs in order to feel real to a human:

  1. Feel and show emotions: The Furby accomplishes this with audio, speech, and movement of its eyes and ears.
  2. Awareness of itself and the environment: When there is a loud sound, the Furby will say “Hey, loud sound!” It also “knows” when it is held upside down.
  3. Change over time: When first activated, the Furby speaks Furbish, a gibberish language, and slowly replaces it with English. There is no real language comprehension, but it acts as if it is acquiring human language.

In the experiment, all three points come into play. The Furby is aware that it is upside down (#2) and expresses fear (#1). “Me no like,” it says, until it starts crying (#3), giving a compelling impression of emotions.

Deep Dream by Google

DeepDream is a computer vision program created by Google that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, creating a dreamlike, hallucinogenic appearance in the deliberately over-processed images. Google’s program popularized the term (deep) “dreaming” to refer to the generation of images that produce desired activations in a trained deep network, and the term now refers to a collection of related approaches.
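The core mechanism is gradient ascent on the input image rather than on the network's weights. Below is a compact, illustrative sketch with PyTorch, not Google's code; the model, layer choice, step size, and iteration count are arbitrary example values.

```python
# Illustrative DeepDream-style sketch: amplify whatever activations a
# chosen convolutional layer produces by gradient ascent on the image.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:20].eval()

img = Image.open("input.jpg").convert("RGB")     # placeholder input file
x = transforms.ToTensor()(img).unsqueeze(0).requires_grad_(True)

for _ in range(20):
    model.zero_grad()
    activations = model(x)
    loss = activations.norm()        # "enhance what the layer already sees"
    loss.backward()
    with torch.no_grad():
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)  # normalized step
        x.grad.zero_()
        x.clamp_(0, 1)               # keep a valid image

transforms.ToPILImage()(x.detach().squeeze(0)).save("dream.jpg")
```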

AI-Duet

This experiment lets you make music through machine learning. A neural network was trained on many example melodies, learning musical concepts and building a map of notes and timings. You play a few notes, and see how the neural net responds. The code is available for those who want to dig deeper.

Quick, Draw! by Google

This is a game built with machine learning. You draw, and a neural network tries to guess what you’re drawing. Of course, it doesn’t always work. But the more you play with it, the more it will learn. It’s just one example of how you can use machine learning in fun ways.

 

Technical overview

The installation used Kinects to track users’ positions, movements, and gestures. With this spatial awareness, the studio created an evolving artificial-intelligence behavior that expressed moods and communicated with humans using light, sound, and movement. The tentacles were moved by strings pulled by motors.
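The actual behavior engine is not public. As a rough illustration only, here is a minimal Python mood model of the kind described, in which activity draws a pet into curiosity or excitement and inactivity drifts it toward boredom; the states, thresholds, and actuation cues are all invented.

```python
# Illustrative sketch, assuming a simple mood model: activity raises
# curiosity, inactivity drifts the pet toward boredom.
import time

class Pet:
    def __init__(self):
        self.mood = "idle"
        self.last_activity = time.time()

    def observe(self, participant_visible: bool, motion_level: float):
        """Update mood from tracking data (visibility, amount of motion)."""
        if participant_visible and motion_level > 0.5:
            self.mood = "excited"
            self.last_activity = time.time()
        elif participant_visible:
            self.mood = "curious"
            self.last_activity = time.time()
        elif time.time() - self.last_activity > 30:
            self.mood = "bored"          # disinterest after inactivity

    def act(self):
        # Map mood to actuation: motor gesture, light, and sound cues.
        return {"excited": ("fast_sway", "bright", "chirp"),
                "curious": ("slow_lean", "pulse", "coo"),
                "bored":   ("droop", "dim", "sigh"),
                "idle":    ("still", "off", None)}[self.mood]
```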

 

References

Minimaforms Studio. (n.d.). Retrieved December 11, 2016, from http://minimaforms.com/studio/

Petting Zoo by Minimaforms. (n.d.). Retrieved December 11, 2016, from http://minimaforms.com/#item=petting-zoo-frac-2

Furbidden Knowledge. (n.d.). Retrieved December 11, 2016, from http://www.radiolab.org/story/137469-furbidden-knowledge/

Mordvintsev, Alexander; Olah, Christopher; Tyka, Mike (2015). “DeepDream – a code example for visualizing Neural Networks”. Google Research. Archived from the original on 2015-07-08.

A.I. Duet – A.I. Experiments. (n.d.). Retrieved December 11, 2016, from https://aiexperiments.withgoogle.com/ai-duet

Quick, Draw! – A.I. Experiments. (n.d.). Retrieved December 11, 2016, from https://aiexperiments.withgoogle.com/quick-draw

CASE STUDY – Discovery Wall by Squint Opera


Presented by Sara Gazzad and Mudit Ganguly

Link to presentation 

Client: Weill Cornell Medical College
Creative Direction: Squint/Opera
Technical Direction: Hirsch&Mann
Detail Design: The Cross Kings
Fabrication: DCL
Optics: Ely Optics

ARCHITECT: Ennead Architects
INTERIOR DESIGNER: Ennead Architects
INDUSTRY RESOURCE: Hirsch&Mann
INDUSTRY RESOURCE: The Cross Kings
INDUSTRY RESOURCE: Design Communications Limited

Introduction

Biomedical research centres aren’t renowned for creative enterprise – why should they be – but across the pond one New York organisation is bucking the trend with a stunning new digital artwork. The Weill Cornell Medical College commissioned London-based creative agencies Squint/Opera and Hirsch&Mann to produce the Discovery Wall for its new Manhattan premises and the results are super-impressive. The final piece comprises 2,800 LED screens set behind a bank of lenticular discs. For passers-by it can be viewed as a large-scale digital artwork but up close the screens display content that relates to the college’s pioneering scientific research.

There’s a nice making-of video in which the creatives explain the project’s ongoing potential, built around the college being able to upload content through its CMS. As Daniel Hirschmann puts it: “It is extensible beyond us by design…you get to make something and watch it get better and better as people add more content over time. That is amazing!”


What is it?

A wall-sized digital artwork created from thousands of tiny screens and lenses was designed by Squint/Opera for the $650m Belfer Research Building, part of Weill Cornell Medical College (WCMC) in Manhattan. The shimmering and animated foyer installation celebrates the college’s research work.

The large-scale digital installation (approx 4.6m x 2.7m) comprises 2800 mini screens set in a grid pattern behind a panel of thousands of circular acrylic discs – a reference to the lenses used in medical research. The dual layer construction makes it possible to read the wall from a distance as a single image, and then, up close, each screen has information about medical discoveries and other news from WCMC’s website. The installation is programmed so that images and stories change constantly.

To bring the concept to life, every aspect of the hardware was designed from scratch.


Goals

The artwork operates at three main perspective viewpoints.

1. Far views display ‘macro’ images and text
2. Mid views display ‘mezzo’ layers of additional information
3. Up close views display ‘micro’, detailed levels of information

The goal of the installation was to celebrate the support of the building’s donors and promote the research and discoveries made in the building. In addition, it was designed as an intriguing and beautiful object, whether viewed close up in the lobby or seen from outside the building as a single image. Each screen carries information about medical discoveries and other news fed from WCMC’s website, and the images and stories change constantly. Through the language of discovery, passers-by are drawn in and encouraged to learn more.

The vision of New York-based Ennead Architects was to commission an artwork that would promote collaboration throughout the building and give a light touch to the interior fabric. To achieve this, electronics were colour-matched with the stone cladding and circuit boards were mounted on a transparent frame. The clear acrylic lenses magnify the stonework at oblique angles and focus on the screens when facing the wall square on. This elegant approach complements the natural feel of the building.

The double layer of screens and lenses creates a unique visual effect: the wall looks whole from a distance, while the screens can be appreciated as single elements up close. The creators use this characteristic to build large-scale visuals out of smaller images taken from the archives of the Belfer Research Building. Thanks to this set-up, the installation shows the research and discoveries achieved in the building in a way that is visually appealing and can be enjoyed from the street or from the lobby.

Process

During the commission, Ennead Architects advised the client and briefed Squint/Opera to develop creative concepts. The concepts were delivered by a team of specialists brought together by Squint/Opera: Hirsch&Mann led the technology design, production, and delivery; The Cross Kings led the physical detail design; and fabrication in Boston was completed by Design Communications Limited.

Squint/Opera worked closely with Hirsch&Mann to design and build all components from scratch. This involved creating many prototypes, which allowed the team to test ideas and communicate concepts to all stakeholders, taking them on the journey of developing a piece of art. The prototypes acted as a key discussion tool beyond drawings or presentations, and allowed the team to refine the design and align with the architectural vision and the brief.

From the early stages, Squint/Opera worked with Ennead Architects to ensure practical elements were successfully integrated within the building. This included provision of extractor fans, IT and AV conduits and storage, appropriate light levels, and structural supports, ensuring the artwork would remain a permanent homage to medical discovery.
To develop the software, the team worked with variable.io to create both the front-end CMS and the back-end data storage, crunching, encoding, and control. The system updated playback across all 2,800 LCD screens at a rate of 20 fps. The software controller was equipped with algorithms for tiled content distribution, procedural layout generation, and playlist scheduling. The whole architecture ran on NodeJS with CouchDB, talking to the hardware over a serial port via custom protocols developed by White Wing Logic.
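The production controller ran on NodeJS, but the tiling idea is easy to sketch. Assuming a 70 x 40 grid of 240 x 240 screens (the published wall dimensions), this illustrative Python snippet slices one large "macro" image into per-screen tiles; the input file and send step are placeholders.

```python
# Illustrative sketch, not the production code: slicing one large image
# into per-screen tiles for the wall's grid of mini displays.
from PIL import Image

GRID_COLS, GRID_ROWS = 70, 40        # one tile per mini screen
TILE_PX = 240                        # each screen is 240 x 240 pixels

def slice_for_wall(path: str):
    """Yield (col, row, tile) for every screen on the wall."""
    img = Image.open(path).resize((GRID_COLS * TILE_PX, GRID_ROWS * TILE_PX))
    for row in range(GRID_ROWS):
        for col in range(GRID_COLS):
            box = (col * TILE_PX, row * TILE_PX,
                   (col + 1) * TILE_PX, (row + 1) * TILE_PX)
            yield col, row, img.crop(box)

# Each tile would then be encoded and pushed to that screen's controller
# board over the serial protocol.
for col, row, tile in slice_for_wall("macro_image.png"):  # placeholder file
    pass  # e.g. send(col, row, tile.tobytes())
```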

For the realization of the Discovery Wall, the creators built most of the hardware components from scratch. They chose a tiny screen with high pixel density that could be used as a single-tone pixel or as part of a high-resolution composition in which all the screens create a larger image. As the screens come from a popular consumer device, the team had to reverse-engineer them and find the ideal conditions for their operation, a huge technical challenge in itself.

The wiring and mounting of the pixels is achieved by grouping eight displays onto a single printed circuit board, with their control components and memory on the back side. The copper traces are plated in gold to give the installation a refined appearance. Each column comprises five PCBs and 40 displays.

Content

Content displayed on the Discovery Wall can be viewed differently at so-called macro, mezzo and micro levels. By looking at the installation in its macro view from across the road, visitors will see a large-scale high-resolution image on what appears to be one large display. The closer individuals get, however, the more levels of detail are uncovered.
At the mezzo level, from outside the window of the building, visitors can see titles of research topics and clusters of images amongst the screens. At the micro level, right up close to the installation, visitors can see high-resolution images and paragraphs of related text on the individual screens.

Content is selected and scheduled using a content management system designed specifically for the Discovery Wall. As new discoveries are made at the research centre, the content is updated. In addition to the layers of content, the curved lenses create a lenticular effect for each mini screen, changing how the artwork looks depending on where the viewer is standing.

Additional Info

The work is designed to be permanent and modular. All its parts are replaceable and serviceable, meaning maintenance time and costs can be kept to a minimum. It has a power consumption of less than 1 kW.

Each screen is a reverse-engineered LCD iPod nano screen; the resolution has been tested to ensure the screens can be read at the optimum image size at both the macro and micro levels. The quick check below confirms how the published numbers fit together.

LCD screen resolution: 240 x 240 pixels
Media wall macro resolution: 70 x 40 pixels
Total media wall resolution: 16800 x 9600 pixels
Power requirements: 1 kW (less than a standard heater)
Lifespan: 10+ years
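A short sanity check of those figures in Python:

```python
# 70 x 40 screens at 240 x 240 pixels each gives the wall's totals.
cols, rows, tile = 70, 40, 240
print(cols * tile, "x", rows * tile, "pixels")  # 16800 x 9600 pixels
print(cols * rows, "screens")                   # 2800 screens
```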


Works cited

http://www.itsnicethat.com/articles/discovery-wall

http://thecreatorsproject.vice.com/en_uk/blog/2800-screens-create-internet-enabled-lcd-mosaic


http://newatlas.com/weill-cornell-medical-college-discovery-wall/32559/

http://www.squintopera.com/projects/all-work/wcmc-discovery-wall-2/

https://www.codaworx.com/project/wcmc-discovery-wall-weill-cornell-medical-college

WCMC Discovery Wall

Discovery wall – Zoom into medical research

 

Nils Völker



By: Katie Micak | Nadine Lessio

Nils Völker is a media artist based in Berlin. He was born in Aalen, Germany, and studied communication design at the Bauhaus, graduating in 2004. Völker originally started making electronics out of Lego, but soon moved to larger site-specific sculptures and kinetic installations. His work focuses on repetition, mechanical rhythm, and how inanimate objects can mimic living organisms, which he explores through simple materials such as plastic bags and cooling fans combined with custom code and electronics.

 

Sixty Eight – Nils Völker

 

Although each piece is unique, his sculptures share common characteristics: integration into an established environment, activation in response to human presence, moments of surprise, and a slow, almost meditative pace. Völker is also interested in replicating natural phenomena such as waves, breathing, geysers, or the expansion of atoms. Völker is concerned with exploring a familiar found material and conceiving how to personify these inanimate objects, creating a whole new experience and use for them.

 

Makers and Spectators

 

Völker describes his work as being of “a rather technical nature; creating electronic circuits, programming, drawing circuit boards and soldering. But in the end all the engineering lies hidden under an organic appearing surface and can only be imagined.”

When asked how Völker began with robots, he states, “It’s somehow great to deal with these purely logical and abstract things to end up with something that isn’t logic at all.” (Regine)

Although Völker states his work is not ‘logical’, he hopes to capture something that cannot be captured, either by creating containers (air in bags) or by combining contradictory elements (air in water). By using air, Völker shows movement and, in a sense, makes these works come alive, as if the viewer were standing in front of a newly discovered animal. The animal (sculpture) ‘sees’ and ‘follows’; it has a brain (computer) and lungs (fans and container).

Fans

 

Völker considers his approach to art pragmatic – a material investigation – rather than primarily conceptual. Finding ideas in accidents, experimentation, and glitches, his work is grounded in formal concerns and in exposing the natural state of his materials. The materials he chooses are generally everyday items (such as bags), re-arranged and brought into new situations or scenarios.

In “Bits and Pieces”, Völker uses a found plastic children’s toy called the Hoberman sphere. These spheres expand and contract (much like his plastic bags do), folding into a fraction of their original size by means of simple joints; the complex structure emphasizes size and movement through expansion. “Each one is moved by a servo motor and individually controlled by a micro-controller creating the illusion of organic waves appearing to move throughout the space although each single plastic ball simply expands and contracts at the right time.” (Völker) The installation was shown at NOME Gallery, Berlin, in 2016.

Bits and Pieces

Starting in 2010, Völker produced numerous “choreographed breathing” installations that all follow the same logic: simple motors and fans are activated by the viewer’s presence, and the fans ‘follow’ the viewer as they move through the space. Völker’s most notable and widely shown breathing piece is “One Hundred and Eight.” As the title suggests, 108 plastic bags are mounted to a wall and inflate and deflate in sequence, in relationship to the movements of observers in the space: as the viewer approaches, the bags inflate, and as they recede, the bags deflate. (Völker) Völker has remade this piece in numerous iterations and shapes, naming each installation for the number of bags used.
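As a rough illustration of that choreography (not Völker's code), here is a minimal Python sketch in which each bag inflates while a viewer is within reach and deflates otherwise; the wall layout and distances are invented example values.

```python
# Illustrative sketch of a "choreographed breathing" control loop:
# a wave of inflation follows the viewer along the wall.
NUM_BAGS = 108
WALL_WIDTH_M = 6.0          # assumed wall width
TRIGGER_RADIUS_M = 1.5      # assumed proximity threshold

bag_x = [i * WALL_WIDTH_M / (NUM_BAGS - 1) for i in range(NUM_BAGS)]

def fan_states(viewer_x: float) -> list[bool]:
    """True = inflate (fan on), False = deflate, for each bag."""
    return [abs(x - viewer_x) < TRIGGER_RADIUS_M for x in bag_x]

# As a viewer walks along the wall, the inflated region moves with them.
for viewer_x in (0.0, 2.0, 4.0):
    states = fan_states(viewer_x)
    print(f"viewer at {viewer_x} m -> {sum(states)} bags inflating")
```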

 

By titling this series of choreographed breathing installations after the number of bags present, it can be inferred that Völker is interested in aligning the series with a formalist art tradition; in abstract formal painting, a work’s title commonly relates to its place in a sequence. The series is also concerned with utilizing only what is necessary to the meaning of the work. In Völker’s case, the focus on the minimal activity of the material enacts a simple equation, breathing equals life, and the viewer activates the art.

This is only one area of his practice.

 

Arguably, Völker’s most interesting piece is the object performance “Captured”, staged in 2011 at MADE Space, Berlin, in collaboration with his brother, Sven Völker. The piece consists of 304 framed graphic pages surrounded by a field of 252 inflatable silver cushions. A light show accompanies the inflatables, along with a 12-minute narration broken into four chapters: the intangible, the volume, the border, and the ephemeral. The cushions were programmed in relation to each chapter, creating a sense of change over time, while the lighting intensified the drama of the air cushions and created a close relationship between all the elements at play. (Völker) Audience members attended a live performance, and the narrative was then played as an installation over the following three months.

Another work discussed in our presentation was “Fountains” (2012), a permanent installation in the Xixi Wetland Park in Hangzhou, China. This ephemeral work is designed to contrast with the solid public art that fills the park. The piece remains hidden under water until visitors walk by; then small fountains of water shoot up in sequence throughout a pond, following visitors as they move. This satisfying and active artwork is achieved using only simple PPR tubes, steel, and custom electronics. (Völker)

 

Fountains

Although Völker does not often speak about the conceptual parameters of his practice, we believe his work is soundly conceptual. Through his simple and elegant installations, Völker asks viewers to see his chosen objects as more than their primary purpose. This re-contextualization forces viewers to relate their bodies to inanimate objects, and to consider their movements in relation to the world around them. In Völker’s world, bags breathe, light tells a story, toys become atoms, and all of these reach out for connection with those who stand before them, watching idly.

 

Thirty Three

 

Works cited

P. Natalie. “Nils Völker transforms NOME into a living organism via Bits and Pieces”  Widewalls. http://www.widewalls.ch/nils-volker-nome-exhibition/. 2016.

Völker, Nils. Artist’s website. http://www.nilsvoelker.com/. 2016.

 

Jobson, Christopher. “Bits and Pieces: An Expandable Kinetic Toy Sphere Installation by Nils Völker” This is Colossal. http://www.thisiscolossal.com/2016/04/expandable-kinetic-toy-sphere-installation-by-nils-volker/ 2016.

 

Grieco, Lauren. “Nils Völker breathes life into 108 Hoberman spheres suspended inside NOME Gallery.” Frame.  http://www.frameweb.com/news/nils-v. 2016.

Mary. “ART: Nils Völker’s One Hundred and Eight.” S.O.T.R. http://sickoftheradio.com/2010/12/01/art-one-eight-mesmerizes-viewers/. 2010.

Neunhiem, Anke. “Nils Volker” iGNANT. http://www.ignant.com/2012/09/05/nils-volker/. 2010.


Regine. “A quick chat about robots, Lego and air bags with Nils Völker.” We Make Money Not Art. http://we-make-money-not-art.com/nils_voelker/. 2011.

 

DIY CELLPHONE – DAVID A. MELLIS

Case Study by Afaq Karadia, April Xie
Presentation


******** INTRO: DIY CELLPHONE ********

What is it?

DIY Cellphone is “a working (albeit basic) cellphone that you can make yourself,” created by David A. Mellis.(1) Using the Arduino GSM shield, Mellis developed a comprehensive kit of open software, hardware, and instructions for making a fully functional cellphone as part of his PhD research at the MIT Media Lab. Mellis used his DIY cellphone as his primary phone, for research purposes, for two years. The project has drawn a large international following of others who have made, customized, modified, and used their own DIY phones.

Capabilities

The DIY cellphone can “make and receive phone calls and text messages, store names and phone numbers, and display the time,” much like early Nokia cellphones of the ’90s and early ’00s.(2)

Specific features: (3)

  • Makes and receives calls and messages
  • Stores up to 250 numbers and names
  • Shows time
  • Serves as alarm clock
  • Connects to GSM networks (AT&T, T-Mobile)
  • Socket for standard SIM
  • 3.7 V, 1000 mAh rechargeable LiPo battery, charged over mini USB

Mellis created two screen variations for people to choose from:

  1. A black-and-white LCD like those found on old Nokia phones
  2. An eight-character matrix of red LEDs

The LCD shows more information (six lines of fourteen characters) but tends to break over time. The LED-matrix variant is harder to use, but the display is more robust.


Who can make the phone?

Making the DIY cellphone is complex and labour-intensive, but it is possible to complete without expert knowledge of electronics. Hand-soldering and debugging the software will take a while, depending on luck and experience.

The greatest opportunity for customization is the exterior casing. Mellis’s standard model is made of laser-cut plywood, but makers can customize the decor or make a 3D-printed enclosure.

Who is David A. Mellis?


  • While making the DIY cellphone, Mellis was a doctoral student in the High-Low Tech research group at the MIT Media Lab
  • One of the creators of Arduino
  • Currently works at Autodesk on circuit design software
  • Research interest: the “relationship between digital information and physical objects (manufacturing, electronics, programming)”; he “wants to create tools and examples to help people design, build, program electronic devices”.(4)

High-Tech DIY

  • Mellis: “[High-tech DIY is the] individual’s use of digital fabrication and embedded computation to make electronic devices”(5)
  • Has been made possible by the rise of smaller, cheaper, and more accessible computing and electronic components
  • Extends traditional DIY by blending it with (software) hacker culture

******** HOW TO MAKE A DIY CELLPHONE ********

Components of DIY Cellphone – overview

  • TOTAL COST FOR PARTS(6)
    • $105 USD FOR LCD VERSION
    • $135 USD FOR LED VERSION
  • Components(7) 
    • Around 60 total parts need to be hand soldered
    • All parts can be ordered through Digi-key, SparkFun, Arduino
    • Main components: 16 buttons, mic, speaker, magnetic buzzer, b/w LCD display (84×48 pixels) or 8-char LED matrix  

DIY Cellphone resources – open hardware and software

The code and plans for DIY cellphone are all open source. You can find these at

Hardware: damellis/cellphone2hw
Software: damellis/cellphone2

Program:

  • Approx. 1,000-line Arduino program(8)
  • An ATmega1284P microcontroller runs the user interface and communicates with the GSM module

What is the Arduino GSM Shield?

  • The Arduino GSM shield “connects your Arduino to the internet using the GPRS wireless network” with a SIM card, and allows you to “make/receive voice calls using the on-board audio/mic jack and send/receive SMS messages”.(9) Under the hood, such GSM modules are driven over a serial line with standard AT commands, as sketched below.
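Mellis's firmware is an Arduino C program (see the repositories above); as a hedged, host-side illustration of the same idea, this Python/pyserial sketch sends an SMS using standard Hayes AT commands. The serial port name and phone number are placeholders.

```python
# Illustrative sketch, not Mellis's firmware: driving a GSM module
# over serial with standard AT commands to send one SMS.
import time
import serial

modem = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)  # placeholder port

def at(command: str, wait: float = 0.5) -> bytes:
    """Send one AT command and return the modem's raw reply."""
    modem.write((command + "\r").encode())
    time.sleep(wait)
    return modem.read(modem.in_waiting or 64)

at("AT")                       # basic handshake, expect "OK"
at("AT+CMGF=1")                # put the modem in SMS text mode
at('AT+CMGS="+15555550123"')   # recipient (placeholder number)
modem.write(b"Sent from a DIY phone\x1a")  # Ctrl-Z terminates the message
time.sleep(3)
print(modem.read(modem.in_waiting or 64))
```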


Steps to making the phone

Mellis’s instructables page

Mellis’s website for making phone

There are 6 major steps Mellis describes in his instructions:

  1. Getting the parts
  2. Soldering the electronics
  3. Compiling the software
  4. Using the phone
  5. Troubleshooting/serial debugging
  6. Making the enclosure

********CONTEXT – DIY AND MAKER CULTURE********

DIY Culture: Definition  

“individuals engage raw and semi-raw materials and component parts to produce, transform, or reconstruct material possessions”(10)

Motivations include marketplace and identity enhancement; alternative to modern consumer culture(11).

History and applications:

The DIY movement began in the 1970s, as part of a renewed interest in skills for personal decor, the building and upkeep of the home, clothing, and other everyday material items.(12) It especially appealed to those living in urban environments, disconnected from the physical world and from the personal making of possessions. Magazines such as Popular Mechanics and Mechanix Illustrated became immensely popular.

Alan Watts (1967):
“Our educational system, in its entirety, does nothing to give us any kind of material competence. In other words, we don’t learn how to cook, how to make clothes, how to build houses, how to make love, or to do any of the absolutely fundamental things of life. The whole education that we get for our children in school is entirely in terms of abstractions. It trains you to be an insurance salesman or a bureaucrat, or some kind of cerebral character.”(13)

Maker Culture – Background

  • A subculture of DIY that intersects between DIY and hacker culture (software)
  • Open source hardware – electronics, robotics, fabrication
  • “Cut and paste” approach to hobbyist tech, recipe-fashion re-use of designs on maker sites and publications (e.g. Make Magazine)(14)
  • Learning through doing in a social environment
  • Rise of hackerspaces, makerspaces, fabrication labs
  • 3D printing- “a market of one person”(15)

Examples of Maker Culture websites

Specific projects David Mellis was inspired by(16)

  • Mitchel Resnick – projects to introduce programming and engineering to children
  • Lego Mindstorms products, PicoCrickets
  • Other toolkits, for designers – Phidgets, Basic Stamp, Arduino, Raspberry Pi
  • Fritzing – helping people translate prototypes into fabricated circuit boards

******** DIY CELLPHONE AS RESEARCH THROUGH DESIGN:
QUESTIONS, PROTOTYPING, TESTING, FINDINGS*******

DIY Cellphone as Research

  • The DIY cellphone was made as part of Mellis’s PhD dissertation at MIT. Dissertation: http://web.media.mit.edu/~mellis/mellis-dissertation.pdf
  • It can be considered “research through design”: the creation of knowledge through the making of an artefact.
    • A form of research in which the process of making the artefact addresses the research questions being posed. “How can I tell what I think, till I see what I make and do?”(17)

Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High Tech DIY, Co-written by David A. Mellis and Leah Buechley, MIT Media Lab

  • Following his dissertation, Mellis presented this paper at the CHI conference in Toronto in 2014
  • “New technology is an opportunity and a challenge. If we can create tools and experiences for creating modern devices, we can provide DIY with increased appeal and value – but, unless we keep up, DIY will offer fewer and fewer possibilities for the devices people use in daily lives”(18)

Research questions for DIY cellphone(19):

  • “To what extent is it feasible for people to make the technology they use in their daily lives? What obstacles and difficulties will they encounter?”
  • “How are devices (and the process of making them) transformed when they can be produced by the people who will be using them?”
  • “How can we combine the flexibility of quick prototyping processes with the reproducibility and robustness of finished products?”
  • “Who will be interested in making their own devices? Why?”
  • “What are the economics of building a high-tech device in small quantities? Which parts are even available to individual consumers? What’s required for people to customize and build their own devices?”

Design goals for DIY cellphone

  • Mellis aimed to resolve two conflicting objectives in the design plans:
    • “As easy as possible to assemble vs functional enough to serve as primary phone”(20)
  • “Robust, attractive form that can be made with digital fabrication, minimize cost, time, # processes”(21)
  • Creating DIY cellphone did not require expertise – relies on variety of info and open source libraries
    • Arduino GSM shield: did not need to know about radio frequency circuit design
    • Software: based from open source libraries


Prototyping process

9473886720_4635d89cfd_z

  • breadboard prototype
  • 1st generation POC – make and receive calls
  • 2nd generation (previously described) – has been David Mellis’s phone for 9 months
  • Kept diary of experiences

David’s research process

  • Autobiography
    • Very excited to use prototypes; very frustrating when they wouldn’t work
    • “Had no-one else to blame but myself”
  • 1 workshop for designers
    • 9 people ages 21-39
    • 2 made custom enclosures

9058289197_862b71a027_z

  • 1 workshop for public

    • 11 people (5 via fliers, 6 via word of mouth)
    • Limited customization; challenges getting different GSM modules to work

9302702376_37d57bab64_z

  • Public local and online communities
    • Some made the phone with no help at all. “An indication that the components and processes required to create the phone are, at least for some, independently accessible”(22)
    • Online community “[Showed] importance of having others in different places with different resources to replicate a device as part of refining a design”(23)
    • Obstacles that prevented daily use:
      • poor reception at home
      • no time to make repairs
    • People found time to organize dedicated sessions to get together and work on the phone

David’s research through design findings

  • “Hi tech DIY exists in an ecosystem”(24)
    • Like large companies, DIYers depend on a complex web of third-party parts and manufacturing
    • DIY not just ability to do everything yourself. Depends on the availability of parts, resources, and processes
    • Industrial manufacturers hold control over DIY
  • “DIY tech supports multiple forms of engagement”(25)
    • Workshops need to be specific to audience goals and skills
    • Workshops let people form communities of making, open door for further exploration  
  • “Bridging the gap between prototype and production”(26)
    • Prototyping tools not good for production
    • Better hobbyist software for making circuit boards needed
    • Digital fabrication has helped close the gap
    • Preference of people who used the phone in daily life:
      Reliability and robustness > functionality, ease of use
  • “Modern technology gives relevance to DIY”(27)
    • Complexity and focused concept give people more motivation than simpler devices or general purpose toolkits
    • People most excited to make cell phone (vs radios, speakers) – ubiquitous, little understanding of how they work
    • DIY must engage with things people see every day
  • “Technology requires new conceptions of transparency”(28)
    • Data sheets and source code increasingly relevant for DIY – dependent on manufacturers
  • “Has helped people make steps towards empowerment”(29) 
    • Having choice to produce technology
    • Understanding how devices are put together
    • Do we want a world in which only big companies have access to tech needed to make a modern device?

References

 

  1. David Mellis, “DIY Cellphone,” http://diy-devices.com/devices/cellphone/.
  2. Mellis, “DIY Cellphone”.

  3. Mellis, “DIY Cellphone”.
  4. MIT Media Lab, “David Mellis,” High-Low Tech group, http://highlowtech.org/?p=66
  5. David A. Mellis and Leah Buechley, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 2014, 1723.
  6. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1725.
  7. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1725.
  8. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1725.
  9. “Arduino GSM Shield”, Arduino, http://arduino.cc/en/Main/ArduinoGSMShield.
  10. Marco Wolf, Shaun McQuitty, Understanding the Do-It-Yourself Consumer: DIY Motivation and Outcomes, Academy of Marketing Science Review, 2011.
  11. Wolf, Understanding the Do-It-Yourself Consumer: DIY Motivation and Outcomes.
  12. Seattle D.I.Y., “What is DIY?” Gender Jam: LAD.I.Y & Trans Fest, https://olyladiyfest.wordpress.com/what-is-diy/.
  13. “Alan Watts (1915-1973)”, Terebess Asia Online, http://terebess.hu/english/watts6.html.
  14. Thomas MacMillan, “On State Street, “Maker” Movement Arrives”, New Haven Independent, 2012, http://www.newhavenindependent.org/index.php/archives/entry/make_haven/id_46594.
  15. Neil Gershenfeld, How to Make Almost Anything: The Digital Fabrication Revolution, https://books.google.ca/books?id=pdx5CwAAQBAJ&pg=PT15&lpg=PT15&dq=3d+printing+%22a+market+of+one+person%22&source=bl&ots=TY9bFdXVbC&sig=JzKIdtcS2lhY2-TndJ0IPVS2TVA&hl=en&sa=X&ved=0ahUKEwjto_nIxOXQAhVH12MKHaUfCmAQ6AEIMjAD#v=onepage&q=3d%20printing%20%22a%20market%20of%20one%20person%22&f=false.
  16. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1724.
  17. Christopher Frayling, Research in Art and Design, London: Royal College of Art, 1993, 5.
  18. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1724.
  19. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1723.
  20. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1725.
  21. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1725.
  22. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1728.
  23. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1728.
  24. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1728.
  25. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1729.
  26. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1729.
  27. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1730.
  28. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1731.
  29. Mellis, Do It Yourself Cellphones: An Investigation into the Possibilities and Limits of High-Tech DIY, 1731.

 

DATA IS THE NEW OIL

“DATA IS THE NEW OIL” – Jer Thorp
Case Study by Ginger Guo and Natasha Mody


Introduction and General Overview:

“Often my work stems from a question in my mind or an interesting data set,” says Thorp.

Our case study looked at the data visualization artist Jer Thorp, whose work focuses on adding narrative meaning to huge amounts of data.

An artist and educator from Vancouver, Canada, Jer Thorp currently lives in New York and has a background in genetics. His digital art practice explores the many-folded boundaries between science, data, art, and culture. Thorp’s award-winning software-based work has been exhibited globally.

Jer Thorp is the co-founder of The Office for Creative Research, a hybrid research group working at the intersection of technology, culture, and education. He is also a professor at New York University’s ITP program, has collaborated with NASA, explored a magazine archive in ‘138 Years of Popular Science’, has exhibited work at MoMA (Museum of Modern Art) in Manhattan, has spoken at multiple TED events, and is a member of the World Economic Forum’s Council on Design Innovation.

Our presentation of Jer Thorp’s work included an overview of his select pieces and an in-depth study of two specific projects that Ginger and I found captivating for many reasons.

Jer Thorp and the Office For Creative Research:

TED Talk:

 

Select Pieces – Context:

Among Jer Thorp’s many incredible works of art, an overview of the following select pieces was presented:

Project Cascade (2010–2011) – visualizes the sharing activity of New York Times content over social networks.
Sustained Silent Reading (2010) – a system that uses semantic analysis to ‘read’ through a base of content.
Random Number Multiples (2011) – produces screenprints from the work of computational artists and designers.
138 Years of Popular Science (2011) – a visualization piece exploring the archive of the magazine’s publication.

What fascinates us most about these projects, and makes them even more significant, is his exceptional ability to digitally transform data so that it serves multiple purposes:
1. The accurate representation of the information and engagement with it, as well as
2. The metamorphosis of data into visually stunning pieces of art that can, in fact, be purchased for art collections.



In-Depth Study – Context & Technical Overview:

Project 1 – On Data And Performance

According to Thorp, “A rare datum might find itself turned into sound, or, more seldom, manifested as a physical object. Always, though, the measure of the life of data is in its utility. Data that are collected but not used are condemned to a quiet life in a database”. Jer Thorp and his colleagues have been investigating the possibility of using data as a medium for performance. Here, data becomes the script, or the score, and in turn technologies that we typically think of as tools become instruments, and in some cases performers.

In this performance, A Thousand Exhausted Things, the script is MoMA’s collections database: an eighty-year-old archive of some 120,000 objects. The instruments are a variety of custom-written natural language processing algorithms, which are used to turn the text of the database (largely the titles of artworks) into a performable form.
Throughout the performance, all of the dialogue spoken by the actors is either the complete title of an artwork or the name of an artist. A data visualization, projected above the performers, shows the objects as abstracted forms as each artwork is mentioned.
By engaging with the collections database in such an unconventional form, they asked the audience to think of the database not just as a myriad of rows and columns, but as a cultural artifact.
Performance provides rich terrain for engagement with data, and perhaps allows for a new paradigm in which data are not so much operated on as allowed to operate on us.


A Thousand Exhausted Things:

from (data => viewer) to (data => performer => viewer)

 

Project 2 – Algorithmic Design And The 9/11 Memorial

In late October 2009, Jer Thorp was contacted by, and commenced work with, Local Projects, an experience design studio located in New York City. The aim was to design an algorithm for the placement of names on the 9/11 Memorial.

According to Jer Thorp, “In architect Michael Arad‘s vision for the memorial, the names were to be laid according to where people were and who they were with when they died – not alphabetical, nor placed in a grid. Inscribed in bronze parapets, almost three thousand names would stream seamlessly around the memorial pools. Underneath this river of names, though, an arrangement would provide a meaningful framework; one which allows the names of family and friends to exist together. Victims would be linked through what Arad terms ‘meaningful adjacencies’ – connections that would reflect friendships, family bonds, and acts of heroism. Through these connections, the memorial becomes a permanent embodiment of not only the many individual victims, but also of the relationships that were part of their lives before those tragic events”.

Over many years, staff at the 9/11 Memorial Foundation undertook the meticulous process of collecting adjacency requests from the victims' next of kin, creating a massive database of more than one thousand requested linkages. The next challenge was one of optimization: finding a layout that fulfilled these adjacency requests. To produce a layout that would give the memorial designers a structure on which to base the final arrangement of the names, Thorp's team built a software tool in two parts: first, an arrangement algorithm that optimized the adjacency problem to find the best possible solution, and second, an interactive tool, built in Processing, that allowed for human adjustment of the computer-generated layout.


The Algorithm:
The solution for producing a solved layout sat at the bottom of a precariously balanced stack of complex requirements. There was a basic spatial problem: the names for each pool had to fit, evenly, into a set of 76 panels (18 panels per side plus one corner). Another challenge was to place the names within the panels while satisfying as many of the requested adjacencies as possible, since there was a dense set of relations to be considered. On top of the crucial links between individual victims' names, there was a set of larger groupings in which the names had to be placed: affiliations (usually companies) and sub-affiliations (usually departments within companies). Thorp addressed these problems with an algorithm that combined several smaller routines: first, a clustering routine that built discrete sets of names in which the adjacency requests were satisfied; second, a space-filling process that placed the clusters into the panels and filled the available space with names from the appropriate groupings; and finally, a placement routine that managed the cross-panel names and adjusted spacing within and between panels.
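
Thorp has not published the memorial's source code, but the clustering step can be illustrated with a toy sketch. The following is a minimal, hypothetical Python version of the first routine, assuming adjacency requests arrive as pairs of name indices and that no cluster may outgrow a single panel's capacity; the names, signatures, and greedy union-find strategy here are our own illustration, not the project's actual implementation:

```python
# Hypothetical sketch of the clustering routine: greedily merge names that
# requested adjacency, never letting a cluster outgrow one panel.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b, cap):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return True                       # request already satisfied
        if self.size[ra] + self.size[rb] > cap:
            return False                      # merge would overflow a panel
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra                  # union by size
        self.size[ra] += self.size[rb]
        return True

def cluster_names(n_names, requests, panel_capacity):
    """Group names so that as many requested adjacencies as possible
    fall inside the same panel-sized cluster."""
    uf = UnionFind(n_names)
    satisfied = sum(uf.union(i, j, panel_capacity) for i, j in requests)
    clusters = {}
    for i in range(n_names):
        clusters.setdefault(uf.find(i), []).append(i)
    return list(clusters.values()), satisfied
```

A space-filling pass would then pack these clusters into the 76 panels, topping each panel up with unconstrained names from the matching affiliation.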

The end result was a layout that fulfilled the adjacency requests as completely as possible: with this system, the team was able to produce layouts satisfying more than 98% of the requested adjacencies.
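
How such a percentage might be measured is easy to imagine. As a rough, hypothetical formulation (not the project's actual metric), one could treat each pool's inscription as a single stream of names and count a request as satisfied when its two names sit within a small window of each other; `fulfillment_rate`, `stream`, and `window` below are all invented for illustration:

```python
def fulfillment_rate(stream, requests, window=1):
    """Score a candidate layout: a request counts as satisfied when its two
    names are within `window` positions of each other in the name stream."""
    pos = {name: k for k, name in enumerate(stream)}
    hits = sum(abs(pos[a] - pos[b]) <= window
               for a, b in requests if a in pos and b in pos)
    return hits / len(requests)
```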


The Tool:
Early in the project, it had become clear that the final layout for the names arrangement would not come directly from the algorithm. While the computerized system was able to solve the logistical problems underlying the arrangement, it was less suited to the myriad aesthetic concerns. The final layout had to be reviewed by hand: the architects needed to meticulously adjust spacing and placement so that the result would be precisely as they wanted it. With this in mind, the team built a custom software tool that allowed the memorial team to make manual changes to the layout while still keeping track of all of the adjacencies. The tool, built in Processing, let users view the layouts in different modes, easily move names within and between panels, get overall statistics about adjacency fulfillment, and export SVG versions of the entire system for micro-level adjustments in Adobe Illustrator. Other features made the process of finalizing the layout as easy as possible: users could search for individual names as well as affiliations and sub-affiliations; a change-tracking system let users see how a layout had changed over multiple saved versions; and a variety of interface options allowed for precise placement of names within panels.
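
The SVG export is the one feature whose mechanics are easy to sketch, since a panel's grid of names maps naturally onto SVG text elements. The snippet below is a bare-bones stand-in for such an exporter; the function name, layout parameters, and cell geometry are all invented for illustration:

```python
def export_panel_svg(panel_names, path, cols=5, cell_w=160, cell_h=24):
    """Write one panel's names to an SVG file for hand adjustment in a
    vector editor (a hypothetical stand-in for the real exporter)."""
    texts = []
    for k, name in enumerate(panel_names):
        x = (k % cols) * cell_w                 # column position
        y = (k // cols + 1) * cell_h            # row position (baseline)
        texts.append(f'<text x="{x}" y="{y}" font-size="14">{name}</text>')
    height = (len(panel_names) // cols + 2) * cell_h
    svg = (f'<svg xmlns="http://www.w3.org/2000/svg" '
           f'width="{cols * cell_w}" height="{height}">'
           + "".join(texts) + "</svg>")
    with open(path, "w", encoding="utf-8") as f:
        f.write(svg)
```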

Conclusion:
In conclusion, Thorp expresses the profound emotional connection he felt to the 9/11 Memorial project. That connection is, in fact, why I chose to do an in-depth study of this unique project: as a passionate New Yorker who lived through the tragic event, it resonates with me intensely.

He articulates the weight of this data and his constant awareness that the names he was working with belonged to people's near and dear ones, tragically lost in the 9/11 attacks. He writes, “In the days and months that I worked on the arrangement algorithm and the placement tool, I found myself frequently speaking these names out loud. Though I didn’t personally know anyone who was killed that day, I would come to feel a connection to each of them”.

For Thorp, while the names of the dead may be the heaviest data of all, almost every number or word he has worked with carries some link to a significant piece of the real world. He says, “It’s easy to download a data set – census information, earthquake records, homelessness figures – and forget that the numbers represent real lives”.

The care taken throughout this process helped add meaning to such a significant monument, one that will never be forgotten.

Related Projects:
Here are some artists whose work relates, in some shape or form, to Jer Thorp's, and which inspired us as well:

A musical score turned into a woven physical data sculpture –
http://nathaliemiebach.com/musical.html

Music Performance With Data:

Final Presentation:
https://drive.google.com/drive/folders/0B0v0C2NFCXOiUTYyWkZYQnpGRkk?usp=sharing

References/Bibliography:
https://ocr.nyc/
http://blog.blprnt.com/
https://tisch.nyu.edu/itp
http://blog.blprnt.com/blog/blprnt/all-the-names
http://localprojects.net/
http://mashable.com/2012/12/11/data-visualization-jer-thorp/#dWRv2smhgkqr
http://www.thelavinagency.com/speakers/jer-thorp#591255794400107479
http://blog.blprnt.com/blog/blprnt/on-data-and-performance

Google Glass

Group Members:

Nana Zandi

Afrooz Samaei


Introduction

Google Glass is a head-mounted wearable computer and display that is worn like a traditional pair of glasses. On April 4, 2012, Google introduced Glass through a Google+ post: “We think technology should work for you—to be there when you need it and get out of your way when you don’t. A group of us from Google[x] started Project Glass to build this kind of technology, one that helps you explore and share your world, putting you back in the moment. We’re sharing this information now because we want to start a conversation and learn from your valuable input. So we took a few design photos to show what this technology could look like and created a video to demonstrate what it might enable you to do.”

As Sergey Brin indicated in a TED talk about Glass, the vision behind this product concerns the way we want to connect with other people and with information. The main motivation behind Glass is to build something that “frees your hands, your eyes, and also ears” and eliminates having to constantly look at our phones and socially isolate ourselves. The initial vision behind Google was to “eliminate the search query and have the information come to us as we needed it and make the information universally accessible and useful” (Larry Page, 2014).

During the 2012 Google I/O event at the Moscone Center in San Francisco, Google announced that a developer edition of Glass, called the Explorer Edition, was available for developers to purchase for $1,500; units shipped in early 2013. The Explorer Edition enabled technologists, educators, and developers to pilot and test the product before it became commercially available to the public. Once the devices were ready, customers were invited to one of Google's offices to pick up their Glass and receive the necessary instructions on how to use it.

Shortly after that, developers began building applications for Google Glass, referred to as Glassware, based on the complete instructions provided by Google Developers. The applications included social and traditional media (e.g., Facebook, CNN), utilities (e.g., Stopwatch), and language-learning (e.g., Duolingo) apps, totaling about 150 (Forinash, 2015).

In January 2015, Google closed the Explorer program and discontinued the availability of Glass to individual buyers. However, an enterprise edition of Glass remained under development. According to Google Developers, Glass at Work is a program intended to develop new applications and enterprise solutions through certified partners such as AMA (Advanced Medical Applications), APX Labs, and Augmate.

Design

Google Glass consists of a head-mounted optical display (HMOD), a prism functioning as the monitor and visual overlay, attached to a mini-computer and a battery pack housed on the right side of the frame. The frame is constructed of titanium and comes in five colors: charcoal, tangerine, shale, cotton, and sky.

What does Google Glass do?

Here is what Google Glass does when it’s on and connected to the internet (Google Glass for Dummies, P. 9):

  •  Takes photos and videos, and sends them to one or more of your contacts. The Glass camera sees the world through your eyes at the very moment you take the photo or record the video.
  •  Sends e-mail and text messages to your contacts, and receives the same from them.
  •  Allows you to chat live via video with one or more Google+ friends via Google Hangouts.
  •  Makes and receives phone calls.
  •  Searches the web with the Google search engine (of course) so that you can find information easily.
  •  Translates text from one language to another. Glass speaks the translated text and also shows a phonetic spelling of the translated word(s) on its screen.
  •  Provides turn-by-turn navigation with maps as you drive, ride, or walk to your destination.
  •  Shows current information that’s important to you, including the time, the weather, and your appointments for the day.
  •  Recognizes the song that’s playing on the device and identifies the artist(s) singing the song, in case you don’t know.

Technical Specifications


There are a few different ways to control Google Glass. One is the capacitive touchpad along the right side of the glasses, on the outside of the housing for the GPS and CPU. Users move from screen to screen by swiping and tapping the touchpad with a finger. Another way to control Google Glass is through voice commands: a microphone on the glasses picks up the user's voice, and the microprocessor interprets the commands. To use voice commands, users say “OK Glass,” which brings up a list of the available commands.
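
The control flow this describes, a hotword that opens a menu of verb phrases, is easy to sketch. The toy below is purely illustrative Python, not the Glass GDK; the command phrases match documented Glass voice commands, but the handler functions are invented:

```python
# Toy illustration of hotword-gated command dispatch, not Glass's software.
COMMANDS = {
    "take a picture": lambda: print("capturing photo..."),
    "record a video": lambda: print("recording video..."),
    "get directions": lambda: print("starting navigation..."),
}

def handle_utterance(text):
    text = text.lower().strip()
    if text == "ok glass":
        # the hotword surfaces the menu of available commands
        print("available commands:", ", ".join(COMMANDS))
    elif text in COMMANDS:
        COMMANDS[text]()          # dispatch to the matching handler
    else:
        print("unrecognized command")

handle_utterance("ok glass")
handle_utterance("take a picture")
```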


To connect to the internet, users first set up a MyGlass account and connect Glass to a Wi-Fi network. They can also connect through a smartphone running the MyGlass app, which provides internet connectivity as long as the smartphone has a data plan or access to a Wi-Fi network. Pairing Glass to a smartphone over Bluetooth is most convenient when no Wi-Fi network is available.

Images from Google Glass project onto the reflective surface of the built-in prism, which redirects the light toward your eye. The images are semi-transparent — you can see through them to the real world on the other side.

A visitor tries out Google Glass at the “NEXT Berlin” conference in Berlin, April 24, 2013. (Photo: Ole Spata/AFP/Getty Images)

The speaker on Google Glass, located in the right arm, is a Bone Conduction Transducer. That means the speaker sends vibrations that travel through your skull to your inner ear — there’s no need to plug in ear buds or wear headphones. Using the camera and speaker together allows you to make video conferencing calls.

Also on board the glasses are a proximity sensor and an ambient light sensor. These sensors help the glasses figure out if they are being worn or removed. You can choose to have your Google Glass go into sleep mode automatically if you take them off and wake up again when you put them on. 

One last sensor inside Google Glass is the InvenSense MPU-9150. This chip is an inertial sensor, which means it detects motion. This comes in handy in several applications, including one that allows you to wake up Google Glass from sleep mode just by tilting your head back to a predetermined angle.
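
A head-tilt wake gesture of this kind reduces to reading the gravity vector off the accelerometer and comparing the implied pitch against a threshold. The sketch below is a guess at the principle, not Glass's firmware; the axis convention and the 30-degree threshold are both assumptions:

```python
import math

WAKE_ANGLE_DEG = 30  # hypothetical threshold; the real angle is configurable

def pitch_from_accel(ax, ay, az):
    """Estimate head pitch (degrees) from a 3-axis accelerometer reading in
    g units, assuming the head is otherwise still so gravity dominates."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

def should_wake(ax, ay, az):
    return pitch_from_accel(ax, ay, az) >= WAKE_ANGLE_DEG

# head tilted well back: gravity shifts off the assumed forward axis
print(should_wake(-0.6, 0.0, 0.8))  # True (pitch of about 37 degrees)
```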

The power is provided by a battery housed in a wide section of the stem. It fits behind your right ear. It’s a lithium polymer battery with a capacity of 2.1 watt-hours. Google has stated that the battery life should last for “one day of typical use.” Although the battery quickly depletes with extensive use, such as taking lengthy videos, watching videos from the internet, using Bluetooth, etc., Glass recharges relatively quickly, in as little as two hours.

Applications

Healthcare

Mark Taglietti, head of ICT delivery services and vendor management at University College London Hospitals, says, “Google Glass represents a step change in technical innovation, wearable technology, and the convergence of personal devices in the workplace. The healthcare applications of Glass are wide-ranging, insightful and impactful, from enabling hands-free real-time access to clinical and patient information, to the transmission of point of view audio and video for surgical research and educational purposes. Glass marks the beginning of a truly remarkable journey for technical innovation within healthcare, enabling providers to improve the delivery of care, as well as overall quality and patient experience.”

Some examples of how Google Glass is dramatically changing healthcare:

Virtual Dictation

Augmedix is a Glass application that provides a better way for doctors to enter and access important patient information in real time without being tethered to a computer. Dignity Health uses Augmedix software and Glass to streamline the interaction between physicians and patients. “This technology allows me to maintain eye contact with my patients and have continuous conversations without having to enter information into a computer,” said Dr. Davin Lundquist, a family medicine practitioner and Dignity Health’s chief medical informatics officer. “The ability to listen, communicate, and care is just as critical as the diagnosis, and this technology allows me to spend more focused and quality time with my patients.”

Telemedicine

Care providers can communicate with physicians remotely and proactively monitor patients whose Electronic Health Records (EHR) can be transmitted in real-time.

Resident training

The Stanford University Medical Center Department of Cardiothoracic Surgery uses Google Glass in its resident training program. Surgeons at the medical center use glassware from CrowdOptics to train residents on surgical procedures.

Augmented Reality allows doctors to monitor patients’ vital signs during surgical procedures without ever having to take their eyes off the patient. Live streaming of procedures can also be used with augmented reality applications for teaching.

 

Warfare

As one Daily Mail headline put it, “Google glass for war: The US military funded smart helmet that can beam information to soldiers on the battlefield,” delivering “real-time information on the battlefield in order to prevent harm to the soldiers.”

The technology promises to give soldiers increased situational awareness on the battlefield, as well as easy access to important intelligence.

 

Education

The technology allows teachers and students to share information in various modes of interaction that include flipped classrooms.

Students can record interactions with fellow students, including on field trips. Later, they can analyze their own and others' actions and responses. Teachers can also see how other teachers apply the technology.

 

Journalism

With wearable computers like Glass, journalism is changing into a practice where news content is created and shared instantly, quite literally through the eyes of the reporter. Glass provides freedom of motion and the ability to convey more intimate stories, since journalists can be less intrusive to their subjects. Instead of taking their eyes off the action to take notes, reporters can record events hands-free. Using Glass, journalists have the opportunity to tread new ground with their stories, accessing all of their resources from a single, easy-to-use device.

 

Criticism

  • Privacy Concerns

Concerns have been raised by various sources regarding the intrusion of privacy and the etiquette and ethics of using the device in public and recording people without their permission. Privacy advocates worry that people wearing such eyewear may be able to identify strangers in public using facial recognition, or surreptitiously record and broadcast private conversations. There have also been reports of eye pain experienced by users new to Glass. Some facilities, such as Las Vegas casinos, banned Google Glass, citing their desire to comply with Nevada state law and common gaming regulations that prohibit recording devices near gambling areas. On October 29, 2014, the Motion Picture Association of America (MPAA) and the National Association of Theatre Owners (NATO) announced a ban on wearable technology including Google Glass, placing it under the same rules as mobile phones and video cameras.

Internet security experts have also voiced concerns about the product and its implications. They point out that the wording of Google's terms of service seems to give the company more control over user data than it should have, and that facial recognition software could raise further privacy issues.

Another concern is that Google could use the eyewear as a platform for collecting personal data and serving ads. As you go about your day wearing these glasses, Google could create a virtual profile. Based on your behaviors and location, Google could potentially serve up what it considers to be relevant advertisements to the screen on your glasses.

  • Safety Concerns

Concerns have also been raised on operating motor vehicles while wearing the device.

 Similar Products

Vuzix M300


Vuzix’s M300 smart glasses are built for enterprise use. An Intel Atom processor powers performance, and the device runs the latest version of Android, with 2GB of RAM, 16GB of internal storage, and Wi-Fi connectivity among the more notable specs. There’s also a 13-megapixel camera, head-tracking support, and dual noise-cancelling microphones.

Epson Moverio BT-300


While Epson’s smart glasses have always been quite business-focused, the company has teased the prospect of using them in the gym to race in virtual environments, and is working with drone maker DJI so you can control flights straight from your specs.

Sony SmartEyeGlass


Sony released the essential tools to allow developers to start coding applications for its Google Glass rival, and now developers can finally get hold of the SmartEyeGlass hardware. SmartEyeGlass includes an array of features, including a gyroscope, accelerometer, ambient light sensor and built-in camera. However, the monochrome screen is likely to put off consumers, if Sony chooses to release it beyond the business world.

 

References

Forinash, D. B. “Google Glass.” CALICO Journal 32.3 (2015): 609-17. Web.

Butow, Eric, and Robert Stepisnik. Google Glass For Dummies. N.p.: John Wiley & Sons, 2014. Print.

Glauser, Wendy. “Doctors among early adopters of Google Glass.” Canadian Medical Association. Journal 185.16 (2013): 1385.

Hua, Hong, Xinda Hu, and Chunyu Gao. “A high-resolution optical see-through head-mounted display with eye tracking capability.” Optics express 21.25 (2013): 30993-30998.

Parslow, Graham R. “Commentary: Google glass: A head‐up display to facilitate teaching and learning.” Biochemistry and Molecular Biology Education 42.1 (2014): 91-92.

Yus, Roberto, et al. “Demo: FaceBlock: privacy-aware pictures for google glass.” Proceedings of the 12th annual international conference on Mobile systems, applications, and services. ACM, 2014

Clark, Matt (May 8, 2013). “Google Glass Violates Nevada Law, Says Caesars Palace”.

MPAA (October 29, 2014). “MPAA and NATO Announce Updated Theatrical Anti-Theft Policy”

https://plus.google.com/+GoogleGlass/posts/aKymsANgWBD

https://developers.google.com/glass/distribute/glass-at-work

http://www.newyorker.com/business/currency/whats-the-problem-with-google-glass

http://www.businessinsider.com/military-invests-in-combat-google-glass-2014-2

http://www.dailymail.co.uk/sciencetech/article-2640869/Google-glass-war-US-military-reveals-augmented-reality-soldiers.html

https://www.wareable.com/headgear/the-best-smartglasses-google-glass-and-the-rest

http://electronics.howstuffworks.com/gadgets/other-gadgets/project-glass2.htm

http://www.catwig.com/google-glass-teardown/

Digital journalism: Your Sunday newspaper will never be the same

 

How Google Glass Will Transform Healthcare

 

 

Humans Since 1982

Link to in-class presentation

Overview

Humans Since 1982 (HS82) is an art and design studio based in Stockholm, Sweden. It was founded in 2008 by Per Emanuelsson (Swedish) and Bastian Bischoff (German), both graphic designers born in 1982, hence the name. Emanuelsson and Bischoff met while studying for their MFAs at the School of Design and Crafts at the University of Gothenburg, Sweden.

Their studio emphasizes an interdisciplinary approach to design, by creating interactive technical works with artistic interfaces. HS82 works with galleries, institutions, and craftspeople from all around the world to create sculptural installation pieces and innovative designs.

HS82’s work is known for its focus on temporality, fleetingness, and rhythm. They are renowned in particular for their series of kinetic installations, most famously the “A Million Times” project. This project (re)appropriates the symbolism of analogue clock faces to create images that move through the rotation of clock hands. What is interesting is that they do not use real clocks at all; instead, they use metal strips programmed to rotate through 360°.

The hands of the clocks rotate once a minute, forming intriguing shapes and patterns as they show the time or spell out a word. The artistic intent of these pieces is to create moving images using analogue mechanisms combined with digital style and technology. The numbers are presented in a “digital” typeface.

Video: “A Million Times” by Humans Since 1982, on Vimeo.

In addition to using “clocks” as a sculptural material, HS82 uses other everyday objects such as chairs, light bulbs, hair clips, and cameras as critical materials and to create effects of space, depth, and texture.

Celebrating the Cross 1 – lounger chair resting on top of crucifix

Light Culture, 2013

Surveillance Chandelier, 2011

My favorite piece is their Surveillance Chandelier (2011, pictured above) because I enjoy the juxtaposition of the spotlights embedded within the structure of a surveillance camera. I think the play on shape and functionality is very clever and visually compelling. This is an excellent example of a critical design piece, because this piece provokes a response in the viewer.

I am trying to imagine how I would react to this piece if I was visiting someone and this was their chandelier. I think I would feel somewhat uncomfortable and disoriented. It is uncomfortable to see security cameras presented so publicly, almost shamelessly, inside someone’s home. This piece blurs the distinctions between private and public spaces, and makes the viewer/user very aware of their body, the space they take up, how they are being viewed, and by whom. This piece makes the viewer the object, the one who is being acted upon by the piece, and I find that transformation very fascinating.

Development

HS82 has been making works about temporality and spatiality since 2007, beginning with Emanuelsson and Bischoff’s graduate theses. One of their earliest projects is Clock Typefont Projection, a display of clocks projected onto a screen; the movement of the clock hands creates shapes and text. This is one of the earliest iterations of their clock series, but the clocks are much more literal, with clear numbers on the clock face, and yet less realistic because of their digital display.

Clock Typefont II is a more experimental iteration of the typefont, but by this point HS82’s signature aesthetic had been realized.

An additional example of their early work is Firefly, another projection piece. Firefly is somewhat interactive, but the visuals are pixelated and clunky, a far cry from the smooth and seamless interfaces they now produce. Still, it provides background and context for how their later themes and concepts developed. Firefly is airy, light, and whimsical, qualities their later “light sculptures” share.

The most ubiquitous work in the HS82 repertoire is the Clock Clock, which has been presented in both linear and circular formats, and as both modular and stand-alone pieces. These pieces are bold, monochromatic, and shiny, made of glass, acrylic, or metal. They also make interesting use of the negative space around the sculpture, making the Clock Clock feel monumental in the context of the gallery, museum, or building where it is displayed.

A million Times 432, 2014

Context

HS82’s pieces fit into a larger genre of work about time and clocks. It is easy to see their inspiration from artists like Richard Jackson and Alicia Eggert, who have developed similar projects over the past 20 years.

Richard Jackson was one of the pioneers of this type of art/design practice. He created 1000 Clocks in 1987 and presented several iterations until 1992. The project was an immersive experience, with clocks covering the walls of an empty room from floor to ceiling (and the ceiling itself). It was inspired by the artist’s 50th birthday and the idea of the uncontrollability of aging. The installation confronts the audience with the (literal) proximity of time and the feeling that time, and life, is running out. The clocks are synchronized and change time in unison. To me, the piece evokes an acute awareness of an impending finality.

Alicia Eggert has used similar “clock-like” mechanisms to create a typeface and spell out messages related to time and space, for example “It Never Stops”. She has other works that spell out “Eternity” and “Wonder”.

These pieces do not have the circular “clock-like” frames around the moving hands that HS82’s do, but it is clear that their concept and approach is similar and is meant to evoke similar reactions from viewers/audiences: making them aware of their mortality.

Jackson’s piece creates a tangible augmented reality for the audience. Eggert’s pieces have their wires showing; Wonder, specifically, looks very technical, almost like a bare-bones prototype. By making the mechanisms more visible, I would argue that Jackson’s and Eggert’s pieces are more “artistic” – less commercial and more critical – than HS82’s.

Technical Overview

I was not able to find detailed information about how the clock technology works. However, a fact sheet about “A Million Times” (2013) shares the following information:

  • Dimension: 344cm x 180cm x 5cm
  • Number of single clocks: 288
  • Material: aluminium + electric components
  • Electricity: standard 100-240V, 50-60Hz socket
  • Operating system: customized software controlled via iPad
  • Finish: powder coated white + black hands, screen printed dials
  • Engineering: David Cox

From the visual above, found on HS82’s website, it appears that a camera acts as a sensor to detect when viewers approach the object. This would likely trigger the rotation of the clock hands, each driven by a servo motor. The text that the hands spell out would be pre-programmed, with the hands rotating to those positions when the process is triggered.

The information sheet for “The Clock Clock White” (2010) explains:

“[This piece] Re-contextualizes time in a mix of old and new, analogue and digital. The clock is made of 24 two-handed analogue clocks. Six clocks make up a number, each of them displaying either one of its corners or one of its sides. All 24 clocks create one giant display similar to that of a digital watch. This work is notable for its digital/analogue format and the choreography that takes place between the minutes.”

The choreography described is determined by pre-programmed angles that the servo motors rotate to. The motors are likely controlled by an Arduino or a similar microcontroller, which also keeps time so that the hands stay in sync. A rough sketch of what such a digit-to-angle scheme might look like follows below.
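
Based on the information sheet's description (six two-handed clocks per digit), here is a minimal Python sketch of the idea. The angle table and the clockwise-only sweep are our own guesses at the scheme, not HS82's actual software; parking both hands at 225° as a "blank" diagonal is a common convention in clock-grid displays:

```python
# Hedged sketch of the "digital typeface": each digit is a 2x3 grid of
# two-handed clocks, every hand parked at a pre-programmed angle measured
# in degrees clockwise from 12 o'clock. Angle values are illustrative.

BLANK = (225, 225)                 # assumed "blank" diagonal for both hands
DIGITS = {
    "1": [BLANK, (180, 180),       # top row:    blank | stroke going down
          BLANK, (0, 180),         # middle row: blank | stroke passing through
          BLANK, (0, 0)],          # bottom row: blank | stroke going up
}

def step_toward(current, target, speed=6.0):
    """Advance one hand toward its target, rotating clockwise only, a few
    degrees per tick, so every hand sweeps in sync until it parks."""
    delta = (target - current) % 360
    return target if delta <= speed else (current + speed) % 360

def tick(hands, digit):
    """One animation frame: `hands` is a flat list of 12 angles (6 x 2)."""
    targets = [a for pair in DIGITS[digit] for a in pair]
    return [step_toward(c, t) for c, t in zip(hands, targets)]

# drive all hands from 0 degrees until they settle on the digit "1"
hands = [0.0] * 12
for _ in range(60):
    hands = tick(hands, "1")
```

An Arduino version would do the same lookup and interpolation, stepping each servo a few degrees per timer interrupt so the whole grid arrives in one choreographed sweep.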

References/Bibliography

“A Most Unusual Clock”. Forbes.Com, 2016, http://www.forbes.com/sites/anthonydemarco/2013/12/06/a-most-unusual-clock/#704ece1670b6.

“Alicia Eggert”. Aliciaeggert.Com, 2016, http://aliciaeggert.com/.

“Blending Analog And Digital Clocks: Humans Since 1982 Makes The Clock Clock “A Million Times” Better – Core77″. Core77, 2016, http://www.core77.com/posts/24483/blending-analog-and-digital-clocks-humans-since-1982-makes-the-clock-clock-a-million-times-better-24483.

Etherington, Rose. “A Million Times Clock Installation By Humans Since 1982”. Dezeen, 2016, https://www.dezeen.com/2013/02/26/a-million-times-by-humans-since-1982/.

Humans Since 1982, 2016, http://www.humanssince1982.com/.

“Humans Since 1982 – 29 Artworks, Bio & Shows On Artsy”. Artsy.Net, 2016, https://www.artsy.net/artist/humans-since-1982.

“Humans Since 1982 (@Humanssince1982) | Twitter”. Twitter.Com, 2016, https://twitter.com/humanssince1982.

Walsh, Daniella. “Method, Madness And Obsession In New Museum Exhibit – Laguna Beach Local News”. Laguna Beach Local News, 2016, http://www.lagunabeachindy.com/method-madness-and-obsession-in-new-museum-exhibit/.
