Project 2, “Klaus Wallpaper/Mirror”

Klaus Wallpaper/Mirror

When applying to the DFI program in the winter of 2012, I was working on a project loosely referred to as “Klaus Mirror Wallpaper.” In it, I envisioned a mirror, similar to the one in Snow White, which one could peer into and find different images looking back. In my mind, the images would ripple interactively, like a pebble thrown into water, and pop up out of the mirror’s surface.

I wanted it to double as wallpaper, its eeriness coming from how it could camouflage itself into the rest of the wallpaper. At the time, I considered this to be magical, or at the very least too technically difficult for me to accomplish on my own. Instead, I came up with a Flash animation version of what I had in mind. I made a music loop for this, called “Klaus Tomato Complete”.


Klaus Nomi was a German underground No Wave/opera performer living in New York City in the late seventies/early eighties. I have long been a fan of Nomi’s ability to transform visual art into a persona (through costume and song), a sort of precursor to Lady Gaga. Around the time that I was initially working on this project, I was listening to a cover he did of Donna Summer’s “I Feel Love.”

I made a song that was inspired by this.

“Will Munro: Total Eclipse” was the first art show I saw when I moved to Toronto in 2010, at Frank’s Gallery (AGO). The poster for this show included a large image of Klaus Nomi.

Will Munro


This poster has been hanging on the wall of my various apartments since that time. I like the idea of overtly acknowledging pop culture obsessions and making more art with and about them.

For “Klaus Wallpaper/Mirror”, I used an image of Klaus Nomi that I found on a fan tumblr. I used this image as the wallpaper because I like taking a character and turning him into a highly reproducible iconic figure (à la Andy Warhol). I like Walter Benjamin’s ideas in “The Work of Art in the Age of Mechanical Reproduction”, where he discusses how the more something is mass-produced, the less authentic it becomes: “Even with the most perfect reproduction, one thing stands out: the here and now of the work of art – its unique existence in the place where it is at this moment…The here and now of the original constitute the abstract idea of its genuineness. The whole province of genuineness is beyond technological reproducibility.” (Benjamin, 8)

Due to my highly limited Photoshop skills, I erased one of the figure’s heads by accident, but I prefer it that way.

For Halloween of that year, I found a Martha Stewart craft, “Mirror Glow Eyes”, which is a different take on the Snow White mirror, one that gives off an animated feel. I love it.

Mirror Glow Eyes

For the images that went on top of the Klaus Nomi heads, I used photos that I had taken in 2006 of video games at an arcade in Hong Kong. I have always been fascinated by the video game aesthetic: making something as synthetic as possible. I like the juxtaposition between the Klaus Nomi heads and the arcade photos.

I was also inspired by Tasman Richardson’s 2012 show, “Necropolis”, at MOCCA. He takes film and plays with it in all sorts of formats that become interactive installations. The following clip is an experience of his work.


 Working on this project has involved many steps.

  1. Load all of the images.
  2. Figure out the exact points of coordination where the arcade photos should go to be exactly on top of the Klaus heads.
  3. For some reason, after I loaded them and started playing around with my ‘if statements’, they all became huge. Take the arcade photos, resize them and touch them up in Photoshop.
  4. Create a new sketch, where I load all of these photos again.
  5. Create all of my ‘if statements’ for where I want the photos to be placed. I want it so that, depending on where you are standing, the arcade photos pop up over Klaus’ heads. This means that, at first, I have to give vague coordinates around each square for where the person has to stand in order for the arcade photo to pop up. Because these coordinates often work in alignment, it is difficult to have a person stand in one position and not have a few heads pop up at the same time.
  6. Endless playing with OpenCV and Capture, which is endlessly inspiring, but confusing, because I haven’t been able to figure out how to incorporate the demos into my sketch. So far, some other demos that I would like to play with and incorporate into Klaus Mirror Wallpaper, or another project at a future time, are:

-Suitscan: have video playing beforehand; then, when you stand somewhere, everything stops and it plays the split screen, plus colour changes.

-In OpenCV, copy is cool; the image, for some reason, reminds me of using many LFOs in music, something about how they drag.

-Spatiotemporal: half screen.

-Time displacement: ripple delay.

-In Movie, I like scratch.
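The overlap problem from step 5 (several heads popping up for one standing position) comes from one-sided checks like faceX > 240. A bounded zone test only fires when the face is inside one rectangle, so neighbouring zones stop triggering together. This is just an illustrative sketch in plain Java; the names and zone sizes are made up, not the sketch’s actual values:

```java
public class ZoneCheck {
    // True only when the detected face falls inside one rectangular zone,
    // instead of the one-sided checks (e.g. faceX > 240 && faceY > 70)
    // that let several zones fire at once.
    static boolean inZone(float faceX, float faceY,
                          float x, float y, float w, float h) {
        return faceX >= x && faceX < x + w
            && faceY >= y && faceY < y + h;
    }

    public static void main(String[] args) {
        // a face at (250, 80) is inside a 100x100 zone anchored at (240, 70)...
        System.out.println(inZone(250, 80, 240, 70, 100, 100)); // true
        // ...but not inside a neighbouring zone anchored at (500, 150)
        System.out.println(inZone(250, 80, 500, 150, 100, 100)); // false
    }
}
```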

 More Process Notes:

I was having a lot of problems trying to use OpenCV; I didn’t know where to put it in my code, at the beginning or the end? I went to see Nick and he stayed with me for what felt like over an hour and helped me TREMENDOUSLY. He told me that I needed to employ Network Communication for the project: getting another sketch running at the same time as mine. The initial one would be the ‘client’ and the other one, with OpenCV, would be the ‘server’. One of the reasons for this was to be able to keep my initial dimensions for the image (1500, 1200). Otherwise, trying to use OpenCV in my initial sketch would only be able to handle the small square that it initially wants to use (hope that is clear).

I was ecstatic when everything finally worked. It needs to be dark in the room for things to work most efficiently and it’s a bit rough, but wow, what gratification I have to see my images come and go, depending on where I am standing!!

Now to incorporate audio:

I added the Minim library and was able to have one sketch successfully play my accompanying loop, “Klaus Tomato Complete”, although when I tried to put it into my Client sketch, troubles arose. I spent a few hours trying to incorporate it into my main sketch, when it eventually occurred to me to do with Minim what I had done with OpenCV: give it its own sketch!! Duh hickey. I added the code and then one line so that it would loop and presto, all three sketches work to create my Klaus Mirror/Wallpaper!

The Code

Remember, I used three sketches. I am having trouble logging into GitHub, so I will have to give you my code here.

1. The server, which was in charge of face detection:

import hypermedia.video.*; // the OpenCV library for Processing
import processing.net.*;
import java.awt.Rectangle;

OpenCV opencv;

// contrast/brightness values
int contrast_value = 0;
int brightness_value = 0;

Server server; // make a new server
float faceX;
float faceY;
String sendData;
byte zero = 0;

void setup() {

size( 320, 240 );

opencv = new OpenCV( this );
opencv.capture( width, height ); // open video stream
opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT ); // load detection description, here-> front face detection : "haarcascade_frontalface_alt.xml"

server = new Server(this, 5204);
}

public void stop() {
opencv.stop();
super.stop();
}

void draw() {

// grab a new frame
// and convert to gray
opencv.read();
opencv.convert( GRAY );
opencv.contrast( contrast_value );
opencv.brightness( brightness_value );

// proceed detection
Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );

// display the image
image( opencv.image(), 0, 0 );

// draw face area(s) and send the coordinates to the client
noFill();
for( int i=0; i<faces.length; i++ ) {
rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height );
faceX = faces[i].x;
faceY = faces[i].y;
sendData = faceX + "," + faceY;
server.write(sendData);
server.write(zero); // the zero byte marks the end of each message
}
}

2. The client, which was responsible for my images:

import processing.net.*;

Client client;

PImage img1; // Klaus Background
PImage img2; // Boobs
PImage img3; // Egyptian Eye
PImage img4; // Clown
PImage img5; // Yellow Buttons
PImage img6; // Kind Text
PImage img7; // Pink Buttons
PImage img8; // Girl Face
PImage img9; // Sunset
float faceX;
float faceY;

void setup() {

size(1500, 1200);

client = new Client(this, "127.0.0.1", 5204); // the server sketch runs on the same machine

img1 = loadImage("Klaus Background.jpg");
img2 = loadImage("Boobs28.jpg");
img3 = loadImage("EgyptianEye28.jpg");
img4 = loadImage("Clown28.jpg");
img5 = loadImage("YellowButtons.jpg");
img6 = loadImage("KindText28.jpg");
img7 = loadImage("PinkButtons28.jpg");
img8 = loadImage("GirlFace28.jpg");
img9 = loadImage("Sunset28.jpg");
}

void draw() {
receiveData(); // put this at the top of the draw

image(img1, 0, 0);

//img2 Boobs
if (faceX > 240 && faceY > 70) {
image(img2, 216, 40, 60, 90);
}

//img3 Egyptian Eye
if (faceX > 500 && faceY > 150) {
image(img3, 675, 337, 60, 90);
}

//img4 Clown
if (faceX > 400 && faceY > 600) {
image(img4, 446, 641, 60, 90);
}

//img5 Yellow Buttons
if (faceX > 1100 && faceY > 150) {
image(img5, 1132, 336, 60, 90);
}

//img6 Text
if (faceX > 800 && faceY > 70) {
image(img6, 903, 42, 60, 90);
}

//img7 Pink Buttons
if (faceX > 210 && faceY > 150) {
image(img7, 216, 336, 60, 90);
}

//img8 Girl Face
if (faceX > 1100 && faceY > 200) {
image(img8, 1132, 40, 60, 90);
}

//img9 Sunset
if (faceX > 1100 && faceY > 50) {
image(img9, 1132, 641, 60, 90);
}
}

void receiveData() {
String[] inCoords = new String[2];
byte zero = 0;
String data;
// Read data
if (client.available() > 0) {
data = client.readStringUntil(zero);
try {
inCoords = splitTokens(data.trim(), ",");
// scale the 320x240 camera coordinates up to the size of this sketch
faceX = float(inCoords[0]) * width / 320.0;
faceY = float(inCoords[1]) * height / 240.0;
} catch (NullPointerException e) {
// no complete message yet; keep the previous coordinates
}
}
}

3. The music

import ddf.minim.*;

Minim minim;
AudioPlayer song;

void setup() {
size(100, 100);

minim = new Minim(this);

// this loads the song from the data folder
song = minim.loadFile("Klaus Tomato Complete.mp3");
song.loop(); // the one extra line so that it loops
}

void draw() {
}

**Important to note that one must press play first for the server, second for the client and third for the music, otherwise it won’t play.

***One thing that I noticed through playing with this is that, because it can be difficult for the camera to pick up the user’s face, the user (me) sometimes has to wave their arms around, and it becomes a sort of dance to get the attention of a small green light at the top of the camera screen.

****I would love to see this on a big projector!



Benjamin, Walter. The Work of Art in the Age of Mechanical Reproduction. Penguin Great Ideas. London: Penguin, 2008.





Don’t Believe What You Screen (Drones)

Big Brother


Today, the army only occupies the territory once the war is over. (Virilio, 2000)


By Umar, Brandy & Dushan.


Don’t Believe What You Screen was conceived to create an affective space to evoke a measure of the discomfort and threat experienced daily by populations under constant surveillance — with the prospect of death at any moment — by drones. Turning the ‘self’ into the ‘other’ through machine-mediated vision — as if surveilled by a drone — the disembodied gaze is presented on all sides of the viewer, re-articulating the role of the subject as victim, pilot and voyeuristic drone. This installation works to build awareness and discussion of systemic surveillance and its relation to militarism, state-sponsored power and associated unanswerable violence.

State power & violence are remotely enacted daily through ‘drone strikes’ throughout the Middle East and South Asia. Unmanned aerial vehicles (UAVs) are deployed and operated by pilots thousands of miles away, with little concern for ethical considerations or threat of reprisal. Transmitted in grainy black and white or infrared night-vision, cameras and screens mediate the distant pilot’s vision with perceived technological precision. Digital/optical “zooming”, heat-signatures and instrumentation have replaced human sight, biological colour spectrums and bodily “situational awareness” — opening up a muddled, lethal ambiguity that confuses farm tools for guns and weddings for “insurgent” meetings.

The reality of drone warfare is complicated, bizarre, and full of consequences for “both sides.” Although obscured by the extreme technological and power imbalance, drone pilots often suffer from PTSD, becoming victims of their own distant violence, all the while sitting only a few miles from home. UAVs are geographically closer to their victims than operators by many orders of magnitude — buzzing, watching and bombing 24hrs a day, 7 days a week — airborne 7/11s of surveilled death. In a confusing near-virtual space, small-screen warriors witness a real theatre of war virtually — kill, then clock out — only to commute to another theatre where hyper-realistic visions of virtual war are depicted on the big screen.

Note: For an in-depth and linguistically stunning exploration and critique of machine-mediated vision, see Paul Virilio’s War & Cinema: The Logistics of Perception and/or Vision Machines (1989 [1984]).

Surveying Projected Surveillance
To avoid what felt like simplistic notions of connection, we consciously reoriented the project away from directly relating to social media or creating a “product-based” solution, focusing instead on a discursive political art installation. In creating Drones, lines of inquiry quickly expanded to include the dangers of ubiquitous surveillance, the moral ambiguity of UAVs, aggressor-victim relations and aggressor as victim, communication gaps and lags, the effect of screens on accuracy and judgement, the cause and use of fear through (continual) sound/visual exposure to a threat, and mass media and personal culpability through inaction and apathy.

After lengthy (and disturbing) research, discussions, diagrams, drawings and confusing conversations, Drones began to take form. We were fortunate to be a group with close ties to and experience with the core concepts of our project. Most notably (and disturbingly), Umar experienced the threat and effects of drone warfare directly, while Brandy, coming from China, offered numerous insights, feedback and thoughts on ubiquitous surveillance and state power. These encounters were immeasurably informative and irreplaceable, as direct involvement most often is. Lastly, Dushan’s experience teaching & researching for the course Illustrative Activism, along with editorial illustration, helped to round out the project’s scope. All in all, constituent experiences that offered an excellent basis to take on a project of this size.

Top Secret Prototyping & Development Processes
Initially, Drones was sparked by Processing/OpenCV video tracking, linked to an Arduino with a servomotor, allowing for a physically tracked object. The prototyping and development stage(s) were extensive and difficult, thanks to the complication of the subject, the richness of resources, the extensive and varied range of facts (exacerbated and obfuscated by state secrecy and military propaganda), and the difficulty of building an experience in a space/environment so removed from the scenes of drone warfare. Investigative research on drones, pilots, victims and governmental policies (both current and future) was extremely unsettling — cementing our commitment to explore and reveal this relevant and timely subject.

Continuing our investigation, we realized the potential difficulties brought on by machine-mediated vision, finding it unimaginable that decisions of life and death were enacted at such great distance with seemingly sparse information. Questions arose of truth perception, pilot post-traumatic stress disorder (PTSD) and incapacitated communication between aggressor and victim.

From Hyper-Complicated to Simply Complicated
As previously mentioned, project ideation went through numerous iterations as we explored room layout, affective possibilities and addressed technical issues. The following are a few of the variants:

  1. Our original intention was to build a 2-user experience: one as pilot, the other, as victim — where both parties would be confronted with a vision of the other. This first version turned out to be unwieldy on all fronts, requiring an excess of space, materials, equipment and programming. The concept of re-mediated/open communication between the pilot and victim was an interesting revelation suitable for further consideration.
  2. Following pilot-victim re-mediation, we searched for ways to remove communicative barriers (physical, electronic, mental, psychological) by (a) exploring the use of open spaces where both ‘actors’ could interact freely and exchange places; (b) re-linking via electronic communication (twitter, wi-fi, video, and/or mic & speaker); and (c), creating a (physically) circular experience where victim and pilot exchanged places after a ‘successful’ strike, as a sort of revenge/punishment model.
  3. Using multiple pilot & victim interfaces/apps for a more ‘game-like’ experience: where texts or app interactions allow people to ‘strike’ each other dependent on their being assigned a given role.
  4. Data-driven experiences using real-time stats to connect the user to the real-world battlefield — sending notifications/badges of ‘successful strikes’ or implying the user was killed, maimed or caused an attack due to a perceived geo-locative action.

Final Concept

  1. Place a “primer” where the user receives a facial scan for identification purposes. This was achieved using Processing and a basic face detection example. The moment a face is detected, a video is launched containing a series of photos of classmates, teachers, activists, terrorists and logos of governmental organizations associated with surveillance (FBI, CIA, Interpol, etc). These unwelcome associations create the appearance of unwarranted suspicion and surveillance.
  2. The darkened, smoke-filled room is quiet, with a tablet lying on the ground as the only (visibly) interactive element available.
  3. Once the tablet is picked up, a button is released, setting off a series of alarming sounds (a collage of militaristic & communicative effects) while (on 3 sides) large projections of the subject are presented — viewed from above in drone-like vision, replete with crosshairs and target-distance information.
  4. Blob tracking is enabled, causing a laser to track/target the viewer, producing an imminent threat.
  5. The tablet ‘targets’ can be pressed (evoking the action of striking), causing a series of facts to appear about drones. The full-screen fact can be pressed/clicked again to return to the main map page.
  6. Once the user returns the tablet to its stand/spot, the (physical) button is pressed, causing the video to close, but the monotonous sound of the drone flying overhead continues in the darkness. This creates a contemplative space for the viewer and serves as a form of ‘death light’, playing off of the term ‘lights out’…
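The laser tracking in step 4 boils down to a coordinate mapping: the tracked blob’s x-position in the camera frame is scaled to a servo angle so the laser follows the viewer. The sketch below is an illustrative plain-Java version, not the installation code; the frame width, angle range and function names are assumptions, and map() mirrors the Processing/Arduino map() function:

```java
public class LaserTracker {
    // Linearly re-map a value from one range to another,
    // exactly as Processing's and Arduino's map() do.
    static float map(float v, float inMin, float inMax,
                     float outMin, float outMax) {
        return outMin + (v - inMin) * (outMax - outMin) / (inMax - inMin);
    }

    // Turn a blob's x-position (0..frameWidth pixels) into a servo
    // angle (0..180 degrees), clamped to the servo's physical range.
    static int servoAngle(float blobX, int frameWidth) {
        float angle = map(blobX, 0, frameWidth, 0, 180);
        return Math.round(Math.max(0, Math.min(180, angle)));
    }

    public static void main(String[] args) {
        System.out.println(servoAngle(320, 640)); // centre of frame -> 90
    }
}
```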


Ideal Scenarios

In working through Drones, a host of scenarios were suggested, considered and ultimately ruled out due to time, equipment and spacial constraints. Some thoughts on ‘ideal’ project options & situations are as follows:

  • Ceiling-mounted projectors (or short-throw key-stoning) to allow for brighter video & remove potential cast shadows; ideally, images take up the entire wall without seams.
  • Multiple lasers, from very high above in imitation of drone-targeting, with 3D movement for full-room tracking (width, depth).
  • Smooth walls and surfaces for distraction-free viewing.
  • Surround sound to create a realistic sound-scape, using distance as an audio component.
  • Actual face scanning and capture with possible real-time web-crawlers to dig up & assemble personalized initial entrance scans — enhancing the surveillance theme. This scan could also unlock the room for the viewer to enter only after entering a name & scanning.
  • Darker room for better mood-lighting and effects.
  • Smooth floor suitable for rubble piles & other ephemera to build a war-torn, damaged/distressed scene.
  • The addition of a final stage — after the tablet is returned — where a large face/eyes (of East Asian/Middle Eastern background, centrally projected) stares silent and unblinking at the viewer, representing ‘the other’, flipping reality on its head.
  • Alternately, further projections could reveal bodies lying around the viewer after a few moments of darkness. Presented in silence (or with drone-buzzing), potential for a memorable (and disturbing) experience would be highly likely.
  • In a similar vein, the tablet could also be used as augmented-reality viewfinder which reveals bodies, dolls, personal effects, etc.

As both Kate & Nick stated, Drones’ scope & potential is massive; something that could be built upon for many months in order to achieve the perfect effect/affect on the viewer and spread an important message. We strongly believe Drones could be successfully developed and made suitable for events like Nuit Blanche or professional gallery spaces.


Diagrammatic Network

Network Diagram


Coding the Drone (Github)


Sketches & Design






Circuit diagrams

ArduinoSwitch_BreadBoard ArduinoSwitch_Schematic ServerMotor_BreadBoard ServerMotor_Schematic




Projected video space & scanner:




Observable Context

In response to a question raised during critique: drones are surveillance with a consequence. We would argue that it is a misconception that (the use of) drones and (the act of) surveillance are separate entities. With autonomy as a benefit, all consequence — in this case death — is granted solely to the drone, as the capacity to kill belongs to the drone alone. Those targeted for surveillance or elimination can neither protest nor kill; a fait accompli.

An inquiry back: “How and why were drones and surveillance seen as separate entities?” This seems at best a sort of hopeful denial — or at worst, willful ignorance — as the projected reality of 30,000 drones over North America by 2020 is too nauseating to consider. Easier to divide and separate as concepts, but a freakish prospect when viewed in unity; conjoined twins made palatable as long as we can’t see the other.

This quote from the Guardian, speaks well on this point and our intent (emphasis ours): “Perhaps there’s also an element of magical thinking here, the artist hoping to denude the death effect of the drone through the spiritual power of religious belief. The frequency of use and destructive power of drones is frightening and our complicity in this process isn’t so much a condemnation of those who would seek to repress the information, as it is of those who imagine it’s simply not happening. Bridle and Goodwin are recuperating the unimaginable back into the world, and that is honourable work.” — The Guardian, Oct. 25, 2013

Another surprising (and vaguely horrifying) tendency noted during the critique was the reference to the projected images as “game-like.” This starkly illustrates the cognitive disconnect experienced by the general public when relating to machine-vision. A disconnect slyly seized upon by governments, police forces, the military and corporations to continue expanding Big Brother-styled policies. Games like Call of Duty: Modern Warfare — one of the first to institute drone-like imagery — are based on reality, not the other way around. This confusion of the virtual with the real is indicative of an increasing trend, where classical ‘reality’ — where our minds physically reside — is re-shaped through the ubiquity of screen-based imagery, the increasing realism of graphics and the gamification of our daily lives. Admittedly, Call of Duty and other games similar in nature were an important starting point in our conceptualization and research for this project.

The Guardian article, Drones Through Artists’ Eyes: Killing Machines & Political Avatars, proved informative through the possibilities shown by the works of numerous artists.

As our research revealed (through the countless deaths of civilians), the screen deceives, which begs the question: what are the moral, social, political and policy obligations necessitated in response to the inherent ambiguity of screen-based warfare?

A fantastic visualization of drone effects can be seen at

To reiterate and expand the points above, here are additional facts outlining the use and effect of drones:

  • Drones are actively deployed in Pakistan, Afghanistan, Yemen, Iraq, Libya and Somalia; with further alleged usage in Mali and Iran.
  • Many drone pilots are based on the near-opposite side of the world, up to 7,400 mi (12,000 km) distant from the ‘theatre of war’.
  • Current ‘battlegrounds’ are most often heavily populated urban environments, where guerrilla warfare tactics combined with ambiguity in target acquisition often result in civilian death.
  • With a 2 second communication/control lag, further deaths and mis-targeting occur as civilians may unexpectedly move into a target-zone.
  • The United States plans to have 30,000 drones operating over North America by 2020, controlled by a variety of interests, including policing, military, corporations, tracking companies, and private individuals.
  • By 2030, the RAF plans to have 2/3rds of its fleet unmanned, while a current aircraft in development is claimed to be the last manned jet fighter.
  • Personality strikes — where a targeted individual or group is known — are responsible for over 2,500 deaths in Pakistan alone, yet have a “success rate” of only 1.8%.
  • Signature strikes “…target individuals whose identities are unknown, but who exhibit certain patterns of behavior or defining characteristics associated with terrorist activity.” The current strategy employed is based purely on “suspicious behavior”.
  • All adult males 18-65 are considered “enemy combatants”; a loose “legal” term first enacted by the Bush administration to initiate kidnapping, torture and assassination. Unrecognized in international law, this dangerous faux-legal framework is still in use, and even expanded upon, by the Obama administration.
  • The United States uses a “Double Tap” strategy, where a target is bombed twice in a short timeframe, most often killing and maiming first responders (emergency personnel) neighbors, children, citizens, etc.
  • Repeated “Double Taps” cause a breakdown in the social fabric in the affected areas, causing further unnecessary harm and death, as people no longer come to the aid of the injured for fear for their own lives.
  • Living under constant fear, some people have compared drones to a mosquito. “You can hear them but you can’t see them.”
  • Congregation appears to be a qualifying factor for drone strikes; owing to numerous multi-person killings, people no longer attend parties, weddings or funerals.
  • Parents no longer send children to school because schools have been targeted.
  • Victims report a heart-breaking loss of faith in the concepts of “law” & “justice”. A victim whose wife and 2 daughters were murdered in a drone strike revealed there are no legal processes for him to register the wrong done to him, receive recompense, press charges or proclaim his innocence.
  • “Bugsplat is the official term used by US authorities when humans are killed by drone missiles… deliberately employed as a psychological tactic to dehumanise targets so operatives overcome their inhibition to kill; and so the public remains apathetic and unmoved to act. Indeed, the phrase has far more sinister origins and historical use: In dehumanising their Pakistani targets, the US resorts to Nazi semantics. Their targets are not just computer game-like targets, but pesky or harmful bugs that must be killed.” From Al Jazeera.

Bonus titles alternates: Drone of Arc; Drone, Drone, Drone, Goose! ; Terminator 6: Drone Alone


Al-Jazeera. (n.d.). The growing use of public and private drones in the U.S. Retrieved December 3, 2013, from

A Drone Warrior’s Torment: Ex-Air Force Pilot Brandon Bryant on His Trauma from Remote Killing. (n.d.). Democracy Now!.
Retrieved December 3, 2013, from

Drone Wars: Pilots Reveal Debilitating Stress Beyond Virtual Battlefield. (n.d.)
Retrieved December 3, 2013, from

Download Stanford/NYU Report | Living Under Drones. (n.d.). Living Under Drones. Retrieved December 3, 2013, from

Dronestream. (n.d.). Retrieved December 3, 2013, from

Piloting A Drone Is Hell. (n.d.). Popular Science. Retrieved December 3, 2013, from

Frost, Andrew. Drones Through Artists’ Eyes: Killing Machines & Political Avatars. The Guardian. Retrieved November 20, 2013.

Virilio, Paul. (1992). “Big Optics”. Trans. J. Von Stein. In Peter Weibel (ed.), On Justifying the Hypothetical Nature of Art & the Non-Identicality within the Object World, 82-93. Köln: Galerie Tanja Grunert.

Virilio, Paul. (2008). Open Sky. Radical Thinkers. Trans. Julie Rose. London: Verso Books.



What is Artwee2?

Artwee2 is an art installation that promotes the idea of citizen participation and collective art. Artwee2 is a project in its developing stages, presented by the team to demonstrate the feasibility of such a project for future development. This project started with the idea of the connection between the digital and the physical. The intention was to bridge these two worlds and make one a representation of the other, with the ultimate goal of art creation.

This project utilizes the simplicity of an object/idea to entice with the tangibility of the digital world. This artwork celebrates the impact of our digital activity, making it more transparent and fun. Fun is the sugar of the installation: it is used to stop people and make them write something that may give someone somewhere else information about a cause they may never have noticed.

The hours we spend on social media are enormous. If we could visualize them physically, we would better perceive this phenomenon, as we are made to understand the world through visualizing physical maps of events. For future development, Artwee2 has the potential to be an artsy product for people who want to physically quantify their online activity in a very creative way. Little Printer is a great example of a fun and creative product which physicalizes tweeting.

The same concept could be used for another purpose, for non-profit organizations like Greenpeace or WWF encouraging people to at least tweet something and receive feedback from the robot. The feedback could be a robotic artwork.


DistantMusic – Richard Borbridge and Sherif Taalab

Presentation Distant Music_Page_01

Live music everywhere.

Host a virtual concert in any space, streaming the note-for-note experience of great musicians in great venues.


Distant Music works by bridging a transmitter and receiver for MIDI signals, and outputting a real-time performance in distant spaces. We identified applications including house parties and in-home listening parties, shared concert experiences among multiple venues, adaptations for virtual bands, remote musical collaboration, and connecting performance and art installations in different gallery spaces.

Presentation Distant Music_Page_05Presentation Distant Music_Page_04


The concept of Distant Music was demonstrated by installing the prototype in a mock lounge setting, providing a basic visualization that responded to the key-presses transmitted across a wireless network. The lounge setting stood in for one of the prime market opportunities for this device – as a data broadcasting tool. Conceived through our “change the vibe” buttons, users in each venue could experience different streams with a click, “attending” several concerts in a single night. Distant Music takes a prevalent concept of connectivity and begins to apply it to a future conception of digital music. As more music is rooted in a digital interface, the opportunity to transmit and reinterpret traditional analogue sounds grows through technologies like MIDI, or more likely Networked MIDI and Open Sound Control (OSC). Distant Music explores the principles behind connecting to existing instruments in new ways.


The concept began as a wireless musical instrument – a virtual band member based on an Android tablet. Challenges were present throughout the prototyping. Most notably, the inherent limitations of MIDI on Android, Bluetooth idiosyncrasies, reading, parsing, and interpreting data through a serial bus, and ultimately passing serial data through to a MIDI interpreter were each “showstoppers” that needed to be considered and overcome. XBee served as an unencumbered wire replacement, but given its range, it would have limited efficacy for the final, wider intent.


The final project connected a MIDI keyboard with a visualizer and audio output in a remote location. Through its standard MIDI interface, the keyboard is attached to a MIDI shield on an Arduino; its output is processed and transmitted through an XBee radio tuned to transmit at 31250 baud – the MIDI standard – which facilitated interpretation by the receiver. A second XBee radio, receiving the raw data stream, was connected to the MIDI-IN channel of a single-port USB MIDI interface, allowing the onboard DAC and microprocessor to prepare the signal for the operating system’s MIDI subsystem, which passed it through to the MidiBus library in Processing and on to the visualizer and player prepared in the code.
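On the receiving side, the raw byte stream has to be recognized as MIDI messages before it means anything musically. A minimal sketch of Note On parsing is shown below; the class and method names are our own illustration, not the project’s actual code (the project relied on the OS MIDI subsystem and MidiBus for this):

```java
// Minimal MIDI Note On parser: a status byte 0x90-0x9F starts a Note On
// message, followed by two data bytes (note number, velocity).
// By MIDI convention, a Note On with velocity 0 is treated as a Note Off.
public class MidiNoteParser {
    public static boolean isNoteOnStatus(int statusByte) {
        return (statusByte & 0xF0) == 0x90;   // upper nibble 0x9 = Note On
    }

    public static int channel(int statusByte) {
        return statusByte & 0x0F;             // lower nibble = MIDI channel 0-15
    }

    // Returns a readable description of a 3-byte Note On message.
    public static String describe(int status, int note, int velocity) {
        if (!isNoteOnStatus(status)) return "not a NoteOn";
        if (velocity == 0) return "NoteOff ch" + channel(status) + " note " + note;
        return "NoteOn ch" + channel(status) + " note " + note + " vel " + velocity;
    }
}
```

Because the XBee link simply mirrors the serial bytes at 31250 baud, the same parsing applies whether the bytes arrive over a wire or over the radio.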




Visualizations in this project simplistically responded to NoteOn signals, “dinging” with each keypress to show explicitly the musical event. Future visualizations may enhance the experience of the sound through more atmospheric and immersive interpretations of the signals, bringing an even richer experience to the user.



In our experience, MIDI serves well as a control mechanism, but its functionality as a musical interface falls short in the face of wireless communication. In exploring Bluetooth and XBee, the limitations were apparent. Wi-Fi is the next logical step in the evolution of this project to achieve the distances implied in the prototype. The perpetual challenge is the fundamental relationship between music and time: wireless technologies are all saddled with a latency that makes true interoperability challenging at best. Previous achievements, including Yamaha’s Elton John presentation [] and emerging mobile projects like Ocarina [], are modelled on a single instrument rather than mass collaboration. The value of Distant Music remains a marketable opportunity to connect musicians in real time, rather than through the exchange of large high-resolution files or low-resolution collaborative techniques.



The project proposed several questions about the future of music production and performance: how can/will music production respond to the shrinking world and harness new collaborative techniques to create new sounds and bring disparate and distant artists together? How can technology help musical performance to reach a broader audience and more varied venues? What implications are there for musical performance as it is separated from its inherent spatial and experiential qualities? While Distant Music may pose more questions than answers, it does begin to address two key issues: the technical potential and limitations of broadcasting JIT (just-in-time), rather than produced music, and the digital future of performance and collaboration.


Project Source Code

Project Photos

Project Video



Project 3: Tweet Monitor- Created by Heather Simmons, Laura Stavro & Areen Salam





















Tweet Monitor:

Our concept for Tweet Monitor is a two-way communication product that creates a meaningful connection between users across space and time. The product enables users, via Twitter and SMS, to send and receive messages correlated to physical activities. In the first application, tweets triggered by force sensor and microphone input notify a dog owner, at most once every three minutes, when the dog has jumped or barked. A tweet from owner to dog can also be sent, activating a green light signal and a voice message to the dog. A key feature of the Tweet Monitor is a live video stream, accessible through a website at all times from anywhere in the world. Hourly summary tweets and a daily update graph allow owners to view summary information at their leisure.

For our project Tweet Monitor, we used Arthur the dog as our inspiration and created our final project for the Creation & Computation course.

User Requirements and Design Considerations:

  • Two way communications – dog to owner and owner to dog
  • Both visual and audible communications
  • Direct text message to owner to make monitoring easy
  • Frequent updates, every 5 minutes at least, in the early days
  • Hourly summary in case we eventually want to shut off the more frequent updates
  • Daily graph of activity
  • Durable – able to withstand dog jumping on it
  • Calibrated to the barks and weight of a very small dog

Project Demo:




Following a few simple steps that we found on this blog we created a visualizer. We made a few changes according to our project and integrated it with the rest of the code.

The first step was to download and install Twitter4J in Processing. The next step was setting up authentication with the Twitter API by visiting the developer website and generating a consumer key, consumer secret, access token, and token secret.

We thought it would be interesting to create a visualizer and embed it in our code to serve as a screen saver. We built an ArrayList to hold all of the words that we got from the imported tweets. The Twitter object gets built by something called the TwitterFactory, which needs the configuration information that we set using the generated access tokens and keys.

Now that we had a Twitter object, we built a query to search via the Twitter API for a specific term or phrase. This code will not always work – sometimes the Twitter API might be down, our search might not return any results, or we might not be connected to the internet. The Twitter object in twitter4j handles those conditions by throwing an exception back to us; we need a try/catch structure ready to deal with that if it happens.
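The guard pattern can be sketched generically. The helper below is our own illustration (the `Callable` wrapper stands in for twitter4j’s `search()`, which throws `TwitterException`); the point is that a failed query falls back to an empty result instead of crashing the sketch:

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;

// Generic sketch of the try/catch guard: run a query that may throw and
// fall back to an empty result. SafeQuery and searchOrEmpty are our own
// illustrative names, not twitter4j API.
public class SafeQuery {
    public static List<String> searchOrEmpty(Callable<List<String>> query) {
        try {
            return query.call();              // e.g. twitter.search(query) in twitter4j
        } catch (Exception e) {               // API down, rate-limited, offline...
            System.err.println("query failed: " + e.getMessage());
            return Collections.emptyList();   // keep the sketch running
        }
    }
}
```

With this in place, a dead network connection degrades to an empty word list rather than an unhandled exception.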

After that, we made a query request and broke the tweets into words. We put each word into the words Arraylist.  In order to make the words appear to fade over time,  we drew a faint black rectangle continuously over the window. We then randomly set the size and color of the words to appear in random order in the black window.



Figuring out the microphone was a bit of cut and paste. Though we’d seen Kate use the microphone as a trigger, we weren’t sure which library she used, so we looked at a number of different examples. The most useful was Stefan Fiedler’s: the code was in German and his example used a sonar-like visualizer, but it gave us the basis to isolate the line-in and volume code.

We then added in the println portion to allow us to properly test and calibrate the volume. Arthur’s barks vary widely in volume, which gave us an opportunity to include a range of tweets triggered by the different levels of his voice. As the volume rises so does the urgent tone of the messages. Not sure this is an accurate translation but it’s fun to give a voice to a puppy!

Sending a Tweet from Dog to Owner

We used a force sensor connected to the Arduino and the mic input via the minim library and compared those values to thresholds set in the code.  The force sensor reading was sent over the serial port to Processing – this is in our serialEvent(Serial port) method.  If the noise/force exceeded the threshold, a tweet was sent using twitter4j’s updateStatus() method.    The code to send the tweet is contained in our postMsg() function.
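The trigger logic amounts to comparing each reading against a threshold. A minimal sketch follows; the threshold values and message strings are illustrative, not the calibrated values used for Arthur:

```java
// Sketch of the bark/jump trigger: compare sensor readings to thresholds
// and decide which message (if any) should be tweeted.
public class BarkJumpTrigger {
    static final int FORCE_THRESHOLD = 400;      // raw analog reading, 0-1023 (illustrative)
    static final float VOLUME_THRESHOLD = 0.25f; // minim level(), 0.0-1.0 (illustrative)

    // Returns the message to tweet, or null if nothing crossed a threshold.
    public static String check(int force, float volume) {
        boolean jumped = force > FORCE_THRESHOLD;
        boolean barked = volume > VOLUME_THRESHOLD;
        if (jumped && barked) return "Arthur is barking AND jumping!";
        if (jumped) return "Arthur jumped on the gate";
        if (barked) return "Arthur is barking";
        return null;                              // below thresholds: stay quiet
    }
}
```

In the actual sketch the force reading arrives in serialEvent() and the result string is handed to postMsg() for the updateStatus() call.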



Twitter to text/SMS

After the twitter feed connection was established using the twitter4j library, we were able to use IFTTT to create the recipe:


This made the project more useable in the real world. Though of course one could constantly monitor @artiethewestie’s Twitter feed, that’s not really conducive to leading a normal life.

Using the SMS feature makes it easy and streamlined – no checking Twitter; instead, updates are routed to your cellphone, which vibrates or jingles when the dog barks or pushes on the gate. This gives the owner an understanding of how often, how intensely, and for how long their puppy is barking.


Two Way Communications – Sending Tweet from Owner to Dog

The function getLastTweet() allows the owner to send a Tweet with a coded keyword in it (in this case, the word Artiehome), which sets off a light on the dog’s gate and plays a sound file telling the dog that the owner is coming home soon. (We modified some existing code to get started.) getLastTweet() causes Twitter to search for all Tweets containing the word Artiehome. If it finds one, and if that Tweet is not exactly the same as the one it previously pulled, it uses splitTokens() to grab the first word in that tweet. If that first word is Artiehome, it sends the code “1” through Processing to Arduino via the serial port. Arduino reads that code from the serial port and causes the light to flash. Processing then plays the sound file (this function is playSoundClip()).
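The first-word check can be sketched in a few lines. The class and constant below are our own illustration; `String.split("\\s+")` stands in for Processing’s splitTokens(), which behaves the same way on whitespace:

```java
// Sketch of the inbound-keyword check: grab the first word of the latest
// tweet and compare it to the required key. Names are illustrative.
public class TweetKeyword {
    static final String HOME_KEY = "Artiehome";

    public static boolean matches(String tweetText) {
        String[] tokens = tweetText.trim().split("\\s+"); // splitTokens() equivalent
        return tokens.length > 0 && tokens[0].equals(HOME_KEY);
    }
}
```

Only a tweet that *starts* with the keyword matches, which is what lets “Artiehome, then space, then anything” serve as the trigger format.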

Livefeed to Website

We wanted to create a site that would allow for visual monitoring of the puppy and the tweets, since only one phone can be connected using IFTTT. Using a streaming service and a webcam, we were able to get a good angle on Arthur’s space and embed the stream into a WordPress site with the Twitter feed in the sidebar. The web page gives a single visual representation of the previously invisible trigger-and-response action, allowing the puppy and its tweets to be monitored remotely by many users. Because it is accessed over the internet, it remains available even when there is no cell phone reception but a Wi-Fi connection exists.


Hourly Update

Not all owners want a Twitter update every three minutes, so we created the function hourlyUpdate(), which, each hour, sends a summary of Arthur’s total barks and jumps in that hour to Twitter.


We then created a summary graph, tied to the keyPressed() function (up key to graph, down key to return to main), which shows Arthur’s total barks and jumps since the program was last run.



To create the Tweet Monitor product, we used a long force sensor pasted onto a hard cardboard strip cut to the length and width of the sensor. To conceal the force sensor, we wrapped canvas around the sensor and cardboard. Once it was concealed properly, we mounted it on the baby gate, making sure it was at an appropriate height for Arthur to jump and touch. The force sensor was soldered with additional wires to extend it and then wired into the Arduino. The Arduino and breadboard were safely placed in a canvas pouch behind the baby gate.


 System Diagram



Tweet Monitor Code (Functionality):

  • Wireless – BlueSMiRF or Bluetooth Mate Gold.
  • Uses Twitter4j library, video library, and minim audio library.
  • 2 modes – screensaver and readout.  Toggle between them by moving cursor into or out of upper right corner of screen.
  • Screensaver mode displays random words from Arthur’s tweets.
  • 4 inputs – force sensor in a gate, microphone in laptop, webcam, and Tweets using a keyword.
  • 4 outputs – audio, LEDs, streamed video, and Tweets.
  • Readout displays different messages depending on whether dog is barking, barking loudly, jumping on the gate, or both.
  • Once every two minutes (time threshold is adjustable), sends a Tweet saying whether the dog is barking, jumping, or both.  Owner can retrieve Tweets from anywhere and also gets a text message on cellphone (IFTTT).
  • Once an hour, sends a summary Tweet saying how many times the dog has barked or jumped this hour.
  • Pressing the up arrow key reveals a graph of the number of times the dog has barked and jumped today.  Pressing the down arrow key returns to readout mode.
  • The dog owner can send a keyword in a Tweet, from anywhere and from any account, which will cause green LEDs on the gate to flash 3 times (from Arduino), and a sound file to play (from Processing) telling Arthur that his owner will be home soon.
  • In other applications, such as eldercare, the flashing green lights and audio file could be used to indicate “help is on the way.”  In a baby monitoring application, the lights could be nightlights turned on and off via Tweet by a parent, and the sound file could be a lullaby.

Key Coding Challenges:

Avoid Crashing Twitter:

  • Twitter limits the number of Tweets to 1,000 per day.  To avoid crashing Twitter while still continuously monitoring for barks or jumps, a timer was used.  If the time threshold since the last Tweet has passed when the dog jumps or barks, a Tweet is sent.  If not, the reading is passed to a “temporary” variable whose release is triggered when the threshold has passed.
  • Twitter also prevents users from posting duplicate messages.  All bark or jump messages include a time stamp to prevent duplicates from being recorded.  The keycode for inbound tweets used to set off the LED and sound file is the word “Artiehome.”  The owner, when tweeting to the dog, tweets the word Artiehome, then space, then anything.  The reason for “space, then anything” is again to avoid a duplicate message.  The splitTokens() function is then used to parse out the first word in the inbound tweet, Artiehome, which is then compared to the required key specified in the program (Artiehome), to ensure a match.
  • Twitter also limits the number of queries an application can make per 15-minute window.  The code that compares keywords in inbound Tweets to the key in the application is therefore also nested in the timer.
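The timer guard described above can be sketched as a small class. The interval and names below are illustrative (the project used a two-minute threshold); the "pending" slot holds a reading that arrived inside the window until the window reopens:

```java
// Sketch of the rate-limit timer: allow at most one tweet per interval,
// parking the latest reading in a "pending" slot in the meantime.
public class TweetThrottle {
    private final long intervalMs;
    private long lastTweetAt;
    private String pending = null;

    public TweetThrottle(long intervalMs) {
        this.intervalMs = intervalMs;
        this.lastTweetAt = -intervalMs;   // so the very first offer goes out
    }

    // Returns the message to send now, or null if still inside the window.
    public String offer(String message, long nowMs) {
        if (nowMs - lastTweetAt >= intervalMs) {
            lastTweetAt = nowMs;
            return message;
        }
        pending = message;                // remember it for later
        return null;
    }

    // Call periodically (e.g. from draw()): releases a held message
    // once the window has reopened.
    public String releasePending(long nowMs) {
        if (pending != null && nowMs - lastTweetAt >= intervalMs) {
            String out = pending;
            pending = null;
            lastTweetAt = nowMs;
            return out;
        }
        return null;
    }
}
```

Appending a timestamp to each outgoing message (as the project does) then takes care of Twitter’s duplicate-message rejection.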

The key to comparing strings:

Use the equals() function, not ==, because == may return false even when the strings contain the same characters.  This is because == compares the location of String objects in memory, not the characters themselves.

if (homeKey.equals(twitterKey)) {
  // keyword match – flash the LEDs and play the sound clip
}
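A quick demonstration of the difference (the helper class is purely illustrative): two String objects can hold the same characters yet live at different memory locations, so only equals() is reliable.

```java
// equals() compares characters; == compares object identity (memory location).
public class StringCompare {
    public static boolean byEquals(String a, String b) { return a.equals(b); }
    public static boolean byIdentity(String a, String b) { return a == b; }
}
```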





Breadboard View:                                                    Schematic View:


Next Steps:

  • Build a data set over time for training purposes.  We could send daily data to a text file (similar to our classmates who completed the “Hush” project) and from there display daily graphs of Arthur’s barks and jumps, perhaps with hourly breakdowns within the day to identify “problem times” for the dog.
  • Consider integration of other sensors for different applications (motion sensors to detect falls in seniors, for example).
  • Consider building a durable, mobile rubber robot to increase interaction between owner and dog, similar to the one built by the Pawly team.  This robot could be controlled via Twitter “commands”.

Similar Products:

i) Angel Care



Under-the-Mattress Movement Sensor Pad

Color Video Transmission

LCD Touch Screen

2-Way Talk-Back Feature

Audio “Tic” Feature

Adjustable Camera Angle


ii) GrandCare System



Motion Sensors

Daily and weekly graphing – Detect patterns of motion

If there is No motion in kitchen from 8am to noon, email daughter.

If Any motion is detected at the foot of bed at night, turn on the bathroom light.

If there is Excessive motion in bathroom for more than 15 minutes between 10pm and 6am, contact Emergency Call List.

Door Sensors

Daily graphing – Used for doors, windows, cabinets, drawers, refrigerators

If door is Opened from 10pm to 6am, call neighbor.

If the pill cabinet is Not Opened between 8-10am, send a reminder call to the Resident.

If door is Not Opened between 10-11am call Supervisor (the Caregiver has not arrived).

Bed and Chair Sensors

Nightly graphing – Sleeping patterns or when a loved one leaves bed at night

If nobody is In Bed for more than 45 minutes between 10pm and 6am, text night nurse
(Resident got up and didn’t return to bed).


Project Code:

In order to run the following Processing sketch, please add this photo to the sketch and name it “arthur.jpg”


In order to run the Processing sketch, please download and add the sound file “artiesound.wav” to your sketch from the website link attached below:

Github Arduino Code:

Project Demo (C&C Group)



Visualizer/screensaver ref:

Two-way communications – keycoded Tweets from owner to dog

Microphone Code:

Steffen Fiedler

Minim Manual –



Project 3 – The Magic Jungle – Klaudia Rainie Han – Paula Aguirre G – Anna Sun


Can you imagine being transformed into a tree, flying with your own wings, and playing with animals?

Magic Jungle creates an interactive experience that allows the audience to explore jungle life using recent advanced technology. The virtual jungle world generates sounds and shapes as you interact with it, tracking your physical movements.

Why a jungle?

The jungle is magical and enchanted in itself.

In the project you can see how you are transformed into a whole new and unique creature, almost magically. That is why our search led us to the jungle.

The sounds, the colors, the lights and shadows, the animals, and all the elements of the jungle work together to give you an amazing sense of immersion while you are in this place.

That is why we chose a jungle and not another space: we wanted to draw on the feeling of exploring a magical world that only the jungle can produce.


We want to share a little bit of the jungle with the audience in order to promote caring for it. That is why we are bringing the jungle into the city: we want to raise awareness of the importance of the natural world for our survival and let people know the magic that nature has always had.


We wanted to create a project that gives the audience the chance to interact in a magical, fun, and interesting way. While we were thinking about what to do, we remembered an installation that took place a year ago at MoMA in New York.

This installation, “Shadow Monsters”, was designed by the interaction designer Philip Worthington. In it, monsters appeared to materialize from the shadows of the audience.


Philip Worthington. Shadow Monsters. 2004–ongoing. Java, Processing,

BlobDetection, SoNIA, and Physics software.

What will you experience?

For how long have you been stuck in the city? Have you ever thought about escaping the city and exploring jungle life? Welcome to our Magic Jungle! Our project offers the audience unique experiences interacting with animals and plants in the jungle through physical movements and sounds. In our magical world, you will be transformed into a tree or a magic creature with wings, and you will also have the opportunity to play with jungle animals. You will hear the sound of wind by touching the branches of the tree. Our magic frog will even talk to you if you pat its back. Help yourself to explore the magic of the jungle!

How does it work?

The interaction with Magic Jungle is so simple that you will love it!

You need to be in front of the Kinect so it can detect you.

When it detects you, you can start moving and watch new shapes appear attached to your own body, creating new and magical jungle creatures. In the same space you will also find jungle animals, and if you make the right move, you will hear them talk.

To make Magic Jungle we used Arduino and Processing together.

Body detection is handled by the Kinect and Processing.

You need to be in front of the Kinect so it can detect you and start the interaction. It detects your shape and shows it to you on the screen like a shadow. If you move, it starts modifying your actual body shadow into a new one. You can see in real time how you are being transformed into a new and magical jungle creature.

Flex sensors work together with the jungle-creature toys on the Arduino. You can interact with a frog and a tree.

Each flex sensor is calibrated. When someone interacts with a jungle creature, the serial monitor receives data; when a reading reaches a specific number (defined according to the range the sensor showed), a signal is sent to Processing. Processing receives the signal and plays a different sound for each jungle character.
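The calibration check boils down to a per-sensor threshold and a signal code sent over serial. A minimal sketch follows; the threshold values and signal characters are illustrative, not our calibrated numbers:

```java
// Sketch of the flex-sensor trigger: each creature's sensor has its own
// threshold, and a crossing produces the signal code sent to Processing.
public class FlexTrigger {
    static final int FROG_THRESHOLD = 600;  // raw analogRead() range 0-1023 (illustrative)
    static final int TREE_THRESHOLD = 550;

    // Returns 'F' for frog, 'T' for tree, or 0 when nothing is bent enough.
    public static char signal(int frogReading, int treeReading) {
        if (frogReading > FROG_THRESHOLD) return 'F';
        if (treeReading > TREE_THRESHOLD) return 'T';
        return 0;
    }
}
```

On the Processing side, each received code simply selects which sound file to play.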


– Input

  1. Frog (Flex Sensor)
  2. Tree (Flex Sensor)
  3. People (Motion Detection)

– Output

  1. Sound
  2. Images & Shapes

– Function

  • When the user presses the frog, Processing generates a sound and a shape on the screen.
  • When the user shakes the tree, Processing generates a sound.
  • When the user opens their arms, Processing generates wings on their figure.
  • When the user puts their arms up, Processing generates trees on their hands.
  • When the user makes monkey movements, Processing generates a monkey face on their figure.

Go Wireless

To improve the flexibility of installing the project in a large space, we decided to make the Arduino device wireless. We did this with two XBee modules, one LilyPad XBee, one XBee Explorer, and the CoolTerm application, which we used to configure the XBee modules. First we installed CoolTerm to help with the configuration of the two modules. The “B” XBee module, attached to the LilyPad XBee, is connected to the Arduino LilyPad after configuration. The “A” XBee module is attached to the XBee Explorer and connected directly to the computer. After the installation, we uploaded the code from the computer to the Arduino LilyPad and saved it for further wireless use. Once the setup is done, the two XBee modules are ready to communicate with each other, allowing the circuit to work wirelessly.
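The CoolTerm configuration boils down to pairing the two radios on the same network with mirrored addresses. A typical Series 1 AT-command session looks roughly like this (the PAN ID and address values here are placeholders, not the values we used):

```
+++             enter command mode (wait for "OK")
ATID 3001       same PAN ID on both modules
ATMY 1          this module's address ("A"; use 2 on "B")
ATDL 2          destination address ("B"'s ATMY; use 1 on "B")
ATWR            write the settings to flash
ATCN            exit command mode
```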

  1. Jungle creatures: each one has a flex sensor connected to the Arduino LilyPad board.
  2. LilyPad board: connected to XBee “B”.
  3. XBee “B” communicates wirelessly with XBee “A”.
  4. XBee “A” is connected to the computer (Processing).
  5. Computer connected to the Kinect.
  6. Kinect connected to a projector.
  7. Projector allows users to interact with Magic Jungle.



Circuit Diagrams




Code link

Pictures about the Project


Project video

Related Projects and Research

–       Philip Worthington. Shadow Monsters. 2004

–       Amnon Owed. CAN Kinect Physics Tutorial. 2012

–       Lisa Dalhuijsen and Lieven van Velthoven.  Shadow Creatures. 2009

–       Theo Watson. Interactive puppet.  2010


–       Arduino. Getting Started with LilyPad.

–       Enrique Ramos Melgar and Ciriaco Castro Diez. Arduino and Kinect Projects; design, build, blow their minds. Apress. First edition.



Project 3 Process Post – Laura Wright + Katie Meyer

Summary of project:
A microphone installed in the DFI lab records volume levels through Processing. Processing then averages the volume levels over a 10-minute time frame. Every 10 minutes a Twitter account linked to the Processing sketch will tweet the current volume level (currently, we have 4 levels). If there has been no change in volume since the previous 10 minutes, there will be no new tweet.
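The averaging and level mapping can be sketched in a few lines. The level boundaries below are illustrative, not our calibrated ones; minim volume levels run 0.0 to 1.0:

```java
import java.util.List;

// Sketch of the 10-minute averaging: readings accumulate over the window,
// then the mean selects one of the four tweet levels.
public class VolumeLevel {
    public static float average(List<Float> readings) {
        float sum = 0;
        for (float r : readings) sum += r;
        return readings.isEmpty() ? 0 : sum / readings.size();
    }

    public static String level(float avg) {
        if (avg < 0.1f) return "quiet";
        if (avg < 0.3f) return "loud";
        if (avg < 0.6f) return "really loud";
        return "super loud";
    }
}
```

Tweeting the level only when it differs from the previous window’s level is what keeps the bot from posting duplicates.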

We plan to use a tablet to run our sketch, and to leave that tablet somewhere secure in the DFI room (likely a locker). The tablet will continuously run the sketch as an application, providing us with more data in the form of tweets. We need to ensure the tablet is always plugged in, which could prove difficult if we put the tablet in a locker, but we will test that out.

First few versions of our code:

a. This version worked, but there was no function in the code to only send out a tweet every 10 minutes. This meant that once we ran the sketch, it tweeted the raw input data, every value read from the microphone. This meant that the sketch tweeted rapidly and often tweeted the same thing, pushing over the daily limit of 1,000 tweets. This caused Twitter to reject our tweets and essentially suspend our account for a few hours. This made testing a challenge.

  • First iteration:

b. This version is similar, but with functions built-in so that our sketch only sent tweets every 10 minutes. This meant that we were no longer overloading Twitter.

It also has volume averaging, which calculates the volume over a period of 10 minutes to ensure that outliers don’t skew the data. This volume average determines which tweet to send out (there are currently 4 to choose from: quiet, loud, really loud, super loud). We will likely add more than 4 later and make them more specific.

  • Second iteration:

Outstanding problems to address:
a. We need to figure out how to turn a Twitter RSS feed of our account (@VolumeBot2013) into an XML file.
b. We need to figure out how to turn an XML feed into a data visualization with Processing.

Sensoris – Research Presentation

Sensoris – a word from the Latin sens (feel) and oris (color): to feel through color.


Bogotá, the capital of Colombia, is one of the country’s favourite cities for hosting events, not only because it is the capital but because of its cultural diversity and varied recreational activities. Events in Bogotá attract a floating population from the city itself and from other regions of the country.

But people with disabilities encounter physical and socio-cultural barriers when attending an event, and often choose not to participate because of these limitations. This generates a complex phenomenon that affects the entire population and fails to guarantee equal and equitable access to facilities and services. That is why I think we need to do something about this: we need a more inclusive society where everybody, whatever their condition, can take an active part. If we have an inclusive city where we can interact together and enjoy the events the government organizes for us, we will be a truly united society. Most important of all is giving people with special conditions the opportunity to feel free, welcome, and autonomous at these events.











To identify a central issue in the development of events, I reviewed documents related to accessibility for disabled people living in the city of Bogotá. Blindness is the critical condition, accounting for 34.5% of all persons with a disability.


* Ceguera means blindness in Spanish; the chart is in Spanish because it is the official Colombian statistics chart on disabilities.

Then, to build the general matrix (general chart), I considered the different types of disabilities in order to get an idea of where to direct the project, linked to a number of variables affected by factors such as physical and socio-cultural barriers, current solutions, resource availability, and design opportunity.


Based on this information and analysis, I decided to work with people with visual difficulties, because there are currently few solutions for them, which presents a great design opportunity.

Problem to solve: 

Accessibility for people with low vision or blindness in open spaces designed for cultural events in the city of Bogotá.

Inspirational concept for the project:

Biomimetics is the general idea of designing using solutions found in natural processes – in this case, echolocation. Echolocation is a method of sensory perception by which certain animals orient themselves in their environment, detect obstacles, communicate with each other, and find food.


After learning how some animals use other senses to see, I realized that visually impaired people can also make use of other elements to see, without needing to use their eyes.


The system must be able to receive and send the signals the environment provides, so the user can interpret them and move with freedom and autonomy in the space.


Path: Modular Floor.

The modules use colors that contrast with the base module color, for low-vision users, and are easy for the color sensor to read. Icons appear only at important places along the path, advising the user of changes or points on the route.



How will it be distributed in the context?

The field of the park will be modified because we need to have a safe and stable route for our users.

1. Excavate at least 80 cm deep across the width and length of what will become the path.

2. Replace the removed material with granular fill, compacted in layers of 20-30 cm, to ensure adequate strength.

3. Build a concrete slab reinforced with steel rods.

4. Leave holes in the concrete where the module screws will later be anchored.

The modules will be placed on the path only when an event takes place.


Sensoris Accessory:


The accessory is the same color as the cane, to keep the cane’s identity and avoid making it look like a separate piece.


If someone has a different cane type, for example one with a wheel at the bottom, a hinge allows the accessory to open and be placed in position.


The accessory has a spring system which allows it, by pressure, to remain attached to the white cane. It has four metal spheres resting on springs; when the white cane is inserted, they move inward, clamping the accessory firmly onto the cane.


Accessory Parts:

– Color sensor: detects the color of the modular floor.


– PVM converter

– Bluetooth Chip


The signal captured by the sensor is transmitted to the converter, which takes the numerical color information and turns it into a signal that can be played by the PVM (playback voice module). The PVM reproduces this signal in mp3 format; the audio can be played on a speaker, headset, or other audio device.

We don’t want accidents, so we are not using cables: the information is transmitted wirelessly. The user receives the color that the sensor is reading as a voice message.
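The color-to-voice step is essentially a lookup from a detected path color to a spoken phrase. A minimal sketch follows; the palette and phrases are entirely illustrative, since the concept does not specify them:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the color-to-message lookup: the sensor's reading, matched to
// a path color, selects the phrase the voice module speaks.
public class ColorVoice {
    static final Map<String, String> PHRASES = new LinkedHashMap<>();
    static {
        PHRASES.put("yellow", "Continue straight ahead");
        PHRASES.put("red", "Stop: crossing ahead");
        PHRASES.put("blue", "Services area to your right");
    }

    public static String phraseFor(String color) {
        return PHRASES.getOrDefault(color, "Unknown surface, proceed with caution");
    }
}
```

A fallback phrase for unrecognized colors matters here: an ambiguous reading should warn the user rather than stay silent.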




In this way, Sens – Oris will optimize accessibility for visually impaired people at public events in Bogota city.

Project Design by Paula Aguirre G.

(Short part of the undergraduate Industrial Design Program “Sensoris” by Paula Aguirre, Bogota, Colombia)

Other projects related  to this topic:

– OrCam

– Cameras, GPS and sound system.

– EyeBorg