Don’t Believe What You Screen (Drones)

Big Brother

 

Today, the army only occupies the territory once the war is over. (Virilio, 2000)

 

By Umar, Brandy & Dushan.

////////////////////////////////////////////////////////////////////////////////////////////////

Don’t Believe What You Screen was conceived to create an affective space that evokes a measure of the discomfort and threat experienced daily by populations under constant drone surveillance — with the prospect of death at any moment. Turning the ‘self’ into the ‘other’ through machine-mediated vision — as if surveilled by a drone — the disembodied gaze is presented on all sides of the viewer, re-articulating the role of the subject as victim, pilot and voyeuristic drone. This installation works to build awareness and discussion of systemic surveillance and its relation to militarism, state-sponsored power and associated unanswerable violence.

State power & violence are remotely enacted daily through ‘drone strikes’ throughout the Middle East and South Asia. Unmanned aerial vehicles (UAVs) are deployed and operated by pilots thousands of miles away with little concern for ethical considerations or threat of reprisal. Cameras and screens mediate the distant pilot’s vision — transmitted in grainy black and white or infrared night vision — with perceived technological precision. Digital/optical “zooming”, heat signatures and instrumentation have replaced human sight, biological colour spectrums and bodily “situational awareness” — opening up a muddled, lethal ambiguity that confuses farm tools for guns and weddings for “insurgent” meetings.

The reality of drone warfare is complicated, bizarre, and full of consequences for “both sides.” Although obscured by the extreme technological and power imbalance, drone pilots often suffer from PTSD, becoming victims of their own distant violence while sitting only a few miles from home. UAVs are many orders of magnitude geographically closer to their victims than their operators are — buzzing, watching and bombing 24 hours a day, 7 days a week — airborne 7-Elevens of surveilled death. In a confusing near-virtual space, small-screen warriors witness a real theatre of war virtually — kill, then clock out — only to commute to another theatre where hyper-realistic visions of virtual war are depicted on the big screen.

Note: For an in-depth and linguistically stunning exploration and critique of machine-mediated vision, see Paul Virilio’s War and Cinema: The Logistics of Perception (1989 [1984]) and/or The Vision Machine.

Surveying Projected Surveillance
To avoid what felt like simplistic notions of connection, we consciously reoriented the project away from directly relating to social media or creating a “product-based” solution, focusing instead on a discursive political art installation. In creating Drones, lines of inquiry quickly expanded to include the dangers of ubiquitous surveillance, the moral ambiguity of UAVs, aggressor-victim relations and aggressor as victim, communication gaps and lags, the effect of screens on accuracy and judgement, the cause and use of fear through (continual) sound/visual exposure to a threat, and mass media and personal culpability through inaction and apathy.

After lengthy (and disturbing) research, discussions, diagrams, drawings and confusing conversations, Drones began to take form. We were fortunate to be a group with close ties and experiences related to the core concepts of our project. Most notably (and disturbingly), Umar experienced the threat and effects of drone warfare directly, while Brandy, coming from China, offered numerous insights, feedback and thoughts on ubiquitous surveillance and state power. These encounters were immeasurably informative and irreplaceable, as direct involvement most often is. Lastly, Dushan’s experience teaching & researching for the course Illustrative Activism and in editorial illustration helped to round out the project’s scope. All in all, these constituent experiences offered an excellent basis to take on a project of this size.

Top Secret Prototyping & Development Processes
Initially, Drones was sparked by Processing/OpenCV video tracking, linked to an Arduino with a servomotor, allowing for a physically tracked object. The prototyping and development stages were extensive and difficult, thanks to the complication of the subject, the richness of resources, the extensive and varied range of facts (exacerbated and obfuscated by state secrecy and military propaganda), and the difficulty of building an experience in a space/environment so removed from the scenes of drone warfare. Investigative research on drones, pilots, victims and governmental policies (both current and future) was extremely unsettling — cementing our commitment to this relevant and timely project.
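For reference, a minimal sketch of that initial spark might look like the following: Processing blob tracking (here via the OpenCV for Processing library, one of several possible wrappers) sending a servo angle to the Arduino over serial. The library choice, threshold value and serial-port index are illustrative assumptions, not our exact code.

import gab.opencv.*;
import processing.video.*;
import processing.serial.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;
Serial arduino;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  arduino = new Serial(this, Serial.list()[0], 9600); // port index is an assumption
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  opencv.loadImage(cam);
  opencv.gray();
  opencv.threshold(80); // tune to the room's lighting
  image(cam, 0, 0);

  // Find the largest bright blob (the viewer) in the thresholded frame.
  Rectangle target = null;
  for (Contour c : opencv.findContours()) {
    Rectangle r = c.getBoundingBox();
    if (target == null || r.width * r.height > target.width * target.height) {
      target = r;
    }
  }

  if (target != null) {
    noFill();
    stroke(255, 0, 0);
    rect(target.x, target.y, target.width, target.height);
    // Map the blob's horizontal position to a servo angle (0-180) and send it
    // as a single byte; the Arduino sketch simply calls Servo.write() on it.
    int angle = (int) map(target.x + target.width / 2, 0, width, 0, 180);
    arduino.write(angle);
  }
}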

Continuing our investigation, we realized the potential difficulties brought on by machine-mediated vision, finding it unimaginable that decisions of life and death were enacted at such great distance with seemingly sparse information. Questions arose about the perception of truth, pilot post-traumatic stress disorder (PTSD), and the incapacitated communication between aggressor and victim.

From Hyper-Complicated to Simply Complicated
As previously mentioned, project ideation went through numerous iterations as we explored room layout, affective possibilities and addressed technical issues. The following are a few of the variants:

  1. Our original intention was to build a 2-user experience: one as pilot, the other as victim — where both parties would be confronted with a vision of the other. This first version turned out to be unwieldy on all fronts, requiring an excess of space, materials, equipment and programming. The concept of re-mediated/open communication between the pilot and victim was an interesting revelation suitable for further consideration.
  2. Following pilot-victim re-mediation, we searched for ways to remove communicative barriers (physical, electronic, mental, psychological) by (a) exploring the use of open spaces where both ‘actors’ could interact freely and exchange places; (b) re-linking via electronic communication (twitter, wi-fi, video, and/or mic & speaker); and (c), creating a (physically) circular experience where victim and pilot exchanged places after a ‘successful’ strike, as a sort of revenge/punishment model.
  3. Using multiple pilot & victim interfaces/apps for a more ‘game-like’ experience: where texts or app interactions allow people to ‘strike’ each other dependent on their being assigned a role.
  4. Data-driven experiences using real-time stats to connect the user to the real-world battlefield — sending notifications/badges of ‘successful strikes’ or implying the user was killed, maimed or caused an attack due to a perceived geo-locative action.

Final Concept

  1. Place a “primer” where the user receives a facial scan for identification purposes. This was achieved using Processing and a basic face detection example. The moment a face is detected, a video is launched containing a series of photos of classmates, teachers, activists, terrorists and logos of governmental organizations associated with surveillance (FBI, CIA, Interpol, etc.). These unwelcome associations create the appearance of unwarranted suspicion and surveillance (a minimal sketch of this primer follows the list below).
  2. The darkened, smoke-filled room is quiet, with a tablet lying on the ground as the only (visibly) interactive element available.
  3. Once the tablet is picked up, a button is released, setting off a series of alarming sounds (a collage of militaristic & communicative effects) while large projections of the subject are presented on three sides — viewed from above in drone-like vision, replete with crosshairs and target-distance information.
  4. Blob tracking is enabled, causing a laser to track/target the viewer, producing an imminent threat.
  5. The tablet ‘targets’ can be pressed (evoking the action of striking), causing a series of facts to appear about drones. The full-screen fact can be pressed/clicked again to return to the main map page.
  6. Once the user returns the tablet to its stand/spot, the (physical) button is pressed, causing the video to close while the monotonous sound of the drone flying overhead continues in the darkness. This creates a contemplative space for the viewer and serves as a form of ‘death light’, playing off of the term ‘lights out’…
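As referenced in step 1, the primer can be roughly sketched as below: a basic face-detection example (here via the OpenCV for Processing library) switching to a pre-cut video the moment a face appears. The library and the file name "dossier.mov" are illustrative assumptions, not our exact code or asset names.

import gab.opencv.*;
import processing.video.*;

Capture cam;
OpenCV opencv;
Movie dossier; // the pre-cut video of classmates, activists, logos, etc.
boolean scanned = false;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  if (!scanned) {
    opencv.loadImage(cam);
    image(cam, 0, 0);
    // The moment any face is detected, launch the "dossier" video.
    if (opencv.detect().length > 0) {
      dossier = new Movie(this, "dossier.mov"); // hypothetical file name
      dossier.loop();
      scanned = true;
    }
  } else {
    image(dossier, 0, 0, width, height);
  }
}

void movieEvent(Movie m) {
  m.read();
}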

 

Ideal Scenarios

In working through Drones, a host of scenarios were suggested, considered and ultimately ruled out due to time, equipment and spatial constraints. Some thoughts on ‘ideal’ project options & situations are as follows:

  • Ceiling-mounted projectors (or short-throw projection with keystoning) to allow for brighter video & remove potential cast shadows; ideally the images would fill the entire wall without seams.
  • Multiple lasers, from very high above in imitation of drone-targeting, with 3D movement for full-room tracking (width, depth).
  • Smooth walls and surfaces for distraction-free viewing.
  • Surround sound to create a realistic sound-scape, using distance as an audio component
  • Actual face scanning and capture with possible real-time web-crawlers to dig up & assemble personalized initial entrance scans — enhancing the surveillance theme. This scan could also unlock the room for the viewer to enter only after entering a name & scanning.
  • Darker room for better mood-lighting and effects.
  • Smooth floor suitable for rubble piles & other ephemera to build a war-torn, damaged/distressed environment.
  • The addition of a final stage — after the tablet is returned — where a large face/eyes (of East Asian/Middle Eastern background, centrally projected) stares silent and unblinking at the viewer, representing ‘the other’, flipping reality on its head.
  • Alternately, further projections could reveal bodies lying around the viewer after a few moments of darkness. Presented in silence (or with drone-buzzing), potential for a memorable (and disturbing) experience would be highly likely.
  • In a similar vein, the tablet could also be used as augmented-reality viewfinder which reveals bodies, dolls, personal effects, etc.

As both Kate & Nick stated, Drones’ scope & potential is massive; something that could be built upon for many months in order to achieve the perfect effect/affect on the viewer and spread an important message. We strongly believe Drones could be successfully developed and made suitable for events like Nuit Blanche or professional gallery spaces.

 

Diagrammatic Network

Network Diagram

 

Coding the Drone (Github)

 

Sketches & Design

 

Screenshots

 

 

 

Circuit diagrams

ArduinoSwitch_BreadBoard ArduinoSwitch_Schematic ServerMotor_BreadBoard ServerMotor_Schematic

 

Videos

 

Projected video space & scanner: https://vimeo.com/80695898

 

Tablet: https://vimeo.com/80934126

 

Observable Context

In response to a question raised during critique: Drones are surveillance with a consequence. We would argue that it is a misconception that (the use of) drones and (the act of) surveillance are separate entities. With autonomy as a benefit, all consequence — in this case death — is granted solely to the drone, as the capacity to kill belongs to the drone alone. Those targeted for surveillance or elimination can neither protest nor kill, a fait accompli.

An inquiry back: “How and why were drones and surveillance seen as separate entities?” This seems at best a sort of hopeful denial — or at worst, willful ignorance — as the projected reality of 30,000 drones over North America by 2020 is too nauseating to consider. It is easier to divide and separate them as concepts, but a freakish prospect when viewed in unity; conjoined twins made palatable as long as we can’t see the other.

This quote from the Guardian speaks well to this point and our intent (emphasis ours): “Perhaps there’s also an element of magical thinking here, the artist hoping to denude the death effect of the drone through the spiritual power of religious belief. The frequency of use and destructive power of drones is frightening and our complicity in this process isn’t so much a condemnation of those who would seek to repress the information, as it is of those who imagine it’s simply not happening. Bridle and Goodwin are recuperating the unimaginable back into the world, and that is honourable work.” — The Guardian, Oct. 25, 2013

Another surprising (and vaguely horrifying) tendency noted during the critique was the reference to the projected images as “game-like.” This starkly illustrates the cognitive disconnect experienced by the general public when relating to machine vision — a disconnect slyly seized upon by governments, police forces, the military and corporations to continue expanding Big Brother-styled policies. Games like Call of Duty: Modern Warfare — one of the first to institute drone-like imagery — are based on reality, not the other way around. This confusion of the virtual with the real is indicative of an increasing trend, where classical ‘reality’ — where our minds physically reside — is re-shaped through the ubiquity of screen-based imagery, the increasing realism of graphics and the gamification of our daily lives. Admittedly, Call of Duty, and other games similar in nature, were an important starting point in our conceptualization and research for this project.

The Guardian article Drones Through Artists’ Eyes: Killing Machines & Political Avatars proved informative through the possibilities shown by the works of numerous artists.

As our research revealed (through the countless deaths of civilians), the screen deceives, which raises the question: What are the moral, social, political and policy obligations necessitated in response to the inherent ambiguity of screen-based warfare?

A fantastic visualization of drone effects can be seen at drones.pitchinteractive.com

To reiterate and expand the points above, here are additional facts outlining the use and effect of drones:

  • Drones are actively deployed in Pakistan, Afghanistan, Yemen, Iraq, Libya and Somalia, with further alleged usage in Mali and Iran.
  • Many drone pilots are based on the near-opposite side of the world, up to 7,400 mi (12,000 km) distant from the ‘theatre of war’.
  • Current ‘battlegrounds’ are most often heavily populated urban environments, where guerrilla warfare tactics combined with ambiguity in target-acquisition often result in civilian deaths.
  • With a 2-second communication/control lag, further deaths and mis-targeting occur as civilians may unexpectedly move into a target-zone.
  • The United States plans to have 30,000 drones operating over North America by 2030, controlled by a variety of interests, including policing, military, corporations, tracking companies, and private individuals.
  • The RAF plans to have two-thirds of its fleet unmanned by 2030, while a current aircraft in development is claimed to be the last manned jet fighter.
  • Personality strikes — where a targeted individual or group is known — are responsible for over 2,500 deaths in Pakistan alone and have a “success rate” of only 1.8%.
  • Signature strikes “…target individuals whose identities are unknown, but who exhibit certain patterns of behavior or defining characteristics associated with terrorist activity.” The current strategy employed is based purely on “suspicious behavior”. From http://www.international.ucla.edu/news/article.asp?parentid=131351
  • All adult males 18-65 are considered “enemy combatants”; a loose “legal” term first enacted by the Bush administration to initiate kidnapping, torture and assassination. Unrecognized in international law, this dangerous faux-legal framework is still in use, and even expanded upon, by the Obama administration.
  • The United States uses a “Double Tap” strategy, where a target is bombed twice in a short timeframe, most often killing and maiming first responders (emergency personnel), neighbors, children, citizens, etc.
  • Repeated “Double Taps” cause a breakdown in the social fabric of the affected areas, causing further unnecessary harm and death, as people no longer come to the aid of the injured for fear for their own lives.
  • Living under constant fear, some people have compared drones to mosquitoes: “You can hear them but you can’t see them.”
  • Congregation appears to be a qualifying factor for drone strikes; owing to numerous multi-person killings, people no longer attend parties, weddings or funerals.
  • Parents no longer send children to school because schools have been targeted.
  • Victims report a heart-breaking loss of faith in the concepts of “law” & “justice”. A victim whose wife and two daughters were murdered in a drone strike revealed there is no legal process through which he can register the wrong done to him, receive recompense, press charges or proclaim his family’s innocence.
  • “Bugsplat is the official term used by US authorities when humans are killed by drone missiles… deliberately employed as a psychological tactic to dehumanise targets so operatives overcome their inhibition to kill; and so the public remains apathetic and unmoved to act. Indeed, the phrase has far more sinister origins and historical use: In dehumanising their Pakistani targets, the US resorts to Nazi semantics. Their targets are not just computer game-like targets, but pesky or harmful bugs that must be killed.” From Al Jazeera

Bonus alternate titles: Drone of Arc; Drone, Drone, Drone, Goose!; Terminator 6: Drone Alone

___
References

Al Jazeera. (n.d.). The growing use of public and private drones in the U.S. Retrieved December 3, 2013, from http://america.aljazeera.com/watch/shows/the-stream/the-latest/2013/10/7/the-growing-use-ofdronesintheus.html

A Drone Warrior’s Torment: Ex-Air Force Pilot Brandon Bryant on His Trauma from Remote Killing. (n.d.). Democracy Now! Retrieved December 3, 2013, from http://www.democracynow.org/2013/10/25/

Drone Wars: Pilots Reveal Debilitating Stress Beyond Virtual Battlefield. (n.d.). LiveScience.com. Retrieved December 3, 2013, from http://www.livescience.com/40959-military-drone-war-psychology.html

Download Stanford/NYU Report | Living Under Drones. (n.d.). Living Under Drones. Retrieved December 3, 2013, from http://www.livingunderdrones.org/download-report/

Dronestre.am. (n.d.). Dronestream. Retrieved December 3, 2013, from http://dronestre.am

Piloting A Drone Is Hell. (n.d.). Popular Science. Retrieved December 3, 2013, from http://www.popsci.com/technology/article/2013-08/psychological-toll-drone-warfare

Frost, Andrew. (2013, October 25). Drones through artists’ eyes: killing machines and political avatars. The Guardian. Retrieved November 20, 2013, from http://www.theguardian.com/culture/2013/oct/25/drones-through-artists-eyes-killing-machines-and-political-avatars

Virilio, Paul. (1992). “Big Optics” (J. Von Stein, Trans.). In Peter Weibel (Ed.), On Justifying the Hypothetical Nature of Art and the Non-Identicality within the Object World (pp. 82–93). Köln: Galerie Tanja Grunert.

Virilio, Paul. (2008). Open Sky (Radical Thinkers) (J. Rose, Trans.). London: Verso Books.

ARTwee2


What is Artwee2?

Artwee2 is an art installation that promotes the idea of citizen participation and collective art. Artwee2 is a project in its development stages, presented by the team to demonstrate the possibility of such a project for future development. The project started with the idea of a connection between the digital and the physical. The intention was to bridge these two worlds and make one a representation of the other, with the ultimate goal of art creation.

This project utilizes the simplicity of an object/idea to make the digital world tangible. The artwork celebrates the impact of our digital activity by making it more transparent and fun. Fun is the sugar of the installation: it is used to stop people and make them write something that may give someone, somewhere else, information about a cause they may never have noticed.

The number of hours we spend on social media is enormous. If we could visualize it physically, we would better perceive this phenomenon, since we are accustomed to understanding the world through physical maps of events. For future development, Artwee2 has the potential to be an artsy product for people who want to physically quantify their online activity in a very creative way. Little Printer is a great example of a fun and creative product that physicalizes tweeting.

The same concept could be used for other purposes: non-profit organizations like Greenpeace or WWF could encourage people to at least tweet something about a cause and receive feedback from the robot. The feedback could be a robotic artwork.


Teddy Light – Project 3 (Langen, Phachanhla)

     Teddy Light

Project Description

Artist Statement

The featured artists have formal training in diverse backgrounds, chiefly in photography and the health sciences. However, both have a vested interest in education – one at the elementary and secondary level, while the other in higher education. It is important to allow students and children to learn, explore and discover on their own to set the foundation for life-long learning and curiosity – especially at a young age.

This link can be provided by the blending of art and technology into educating the children and future innovators of tomorrow. An integration of these components, especially new innovations, into the daily routine of children will help to foster the growth of these learning foundations. The Teddy Light has the dual purpose of providing light for a child (the brightness controlled by the tablet), as well as teaching young children the alphabet. The tablet interface allows children to learn the alphabet by hearing the sounds that each letter makes, as well as being able to practice drawing out each letter; thus these components connect kinesthetic, visual and auditory learning. The Teddy that accompanies the tablet makes the learning more social and fun, as children are able to learn with a friend.

Early acclimatization of children to new technologies and innovations will also allow them to exercise an early curiosity for the current reaches of development and what is possible. They grow up with the innovation, and it becomes second nature. It is inevitable in this coming age that children will be affected by digital distraction; we may as well ensure that it is informative and educational.

– Kirsti Langen & Melissa Phachanhla, 2013

 

Diagram of the Network 

Original Concept and Next Steps

The original concept which we would like to continue to further develop is the idea of the teddy bear as a friend that comes alive to learn with the child. All sounds such as the lullaby and letter files will be “spoken” by the teddy bear and played through the arduino speakers rather than the tablet. Voice recognition will also be used to turn on and off the LED string in the teddy bear’s stomach. A sample phrase would be “Teddy Light On.” This is particularly useful at night when the child needs to use the washroom and cannot easily find a light switch. Additionally, the light will not bother them when they are trying to sleep, and you can simply say “Teddy Light Off.” The child can also say “Goodnight Teddy” to play a lullaby from the arduino, so that the tablet is not left on when they fall asleep.

We had made a lot of progress in getting these audio parts started, but the quality of the sound and voice recognition were not yet good enough to include in the final product and the class critique. We believe that the problem of the scratchy sound quality was caused by using a micro SD card with too large a capacity for the MP3 shield. We were not able to locate a 1 GB micro SD card, which is the optimal storage size for the shield.

As suggested at the critique, we would also like to explore the possibility of using smaller boards and shields. The hardware can then be used in something like a patch or accessory for a teddy bear, rather than having the hardware in the teddy bear itself. This way, the child can bring a stuffed animal they already own to life with this accessory (think Toy Story).

The picture above demonstrates us playing around with the speakers and amps to have sound come from the Arduino rather than the tablet. The speaker was not quite loud enough, so we used a potentiometer in an attempt to make a volume control. The sound quality was not clear enough, however, and we decided for now that it would be best if the sound came from the tablet. It is a next step to have the sound come from the teddy bear.

MP3 shield for sound output from the Arduino (speaker moves):

https://vimeo.com/80775507

Voice Recognition to LED:

https://vimeo.com/80775508

Vimeo

https://vimeo.com/80747208

Git Hub Link

Processing:

https://github.com/MPhachanhla/Creation-and-Computation-/blob/master/Teddy%20Light%20-%20Processing

Arduino:

https://github.com/MPhachanhla/Creation-and-Computation-/blob/master/TeddyLight%20-%20Arduino

 

Sketches and Design Files

 

Screenshots

 

Circuit Diagrams

Explanation of Prototyping and Development Process

There was a lot of careful planning and thought that went into fabricating the tablet and bear prior to beginning coding. We decided to go with a white teddy bear since it would match the white fabric. It was important to have a white, semi-translucent fabric to allow the LED lights on the string to shine through and be visible. We also knew we needed a teddy bear large enough to hold an Arduino, shield and Bluetooth device, which we wanted to house in a box to protect it. The stuffing of the bear was also important, because we needed a more cotton-like stuffing to disguise the hardware; silica or bean-like material could still have been placed around the box that housed the Arduino. Red lights for the LED string were chosen to fit the theme of the heart in the bear.

Very early on, we knew it needed to be relatively easy for us to get the hardware in and out of the bear to hook up the battery and check the signal on the Bluetooth. Therefore the hardware could not merely be sewn in. Initially, we wanted a zipper to be able to access the Arduino. This, however, would have ruined the appearance of the bear as a friend. We needed a way to conceal the openings and fasteners, and that is when we decided that velcro would be best. Velcro pieces are sewn into the fabric and into the inside of the bear. The fabric also acts as a support for the LED string, since the lights are sewn into it in the shape of a heart to retain its form. The fabric therefore acts as both a cover and a scaffold for all of the hardware. We also decided to hook all of our connections straight into the Arduino board, rather than a breadboard, to make the size easier to manage when inserting the hardware into the bear.

Fabrication and prototyping for the tablet also took some thought. The width and height of the tablet were measured, and the proportions were used to photoshop the images. We wanted the images to fill the screen without becoming distorted, so we knew it was important to keep the proportions equal. For the buttons, we measured the size of one using the pixel values in the rect() function and then mapped out the coordinates on graph paper to determine the points for each button. However, we were off by a bit, as we forgot to account for the thickness of the rectangle’s stroke when initially calibrating, so quite a bit of on-screen calibration was needed to ensure that the buttons were placed in the correct spots.
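A small, hedged illustration of that calibration issue: with strokeWeight(s), the drawn rectangle extends s/2 pixels beyond the coordinates passed to rect(), so the hit test needs that margin. The coordinates below are placeholders, not our real button layout.

int bx = 100, by = 260, bw = 120, bh = 80; // placeholder button coordinates
float sw = 8;                              // stroke weight

void setup() {
  size(600, 400);
  strokeWeight(sw);
}

void draw() {
  background(255);
  stroke(0);
  fill(220);
  rect(bx, by, bw, bh);
}

// The half-stroke margin is what we originally forgot to account for.
boolean overButton(float mx, float my) {
  float m = sw / 2;
  return mx > bx - m && mx < bx + bw + m && my > by - m && my < by + bh + m;
}

void mousePressed() {
  if (overButton(mouseX, mouseY)) {
    println("button hit");
  }
}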

We also wanted the writing to feel natural, rather than done with a finger; that is why we felt it was important to have the red crayon stylus. The colour of the drawing function on the tablet was also changed to red to match the crayon. A stand was also purchased to prop the tablet up. Initially, we wanted video capabilities for the child to watch and count sheep to help them fall asleep. For the critique, we felt it was important to stage the bear and tablet in a kid’s room, or blanket fort (images below).

Determining the hardware needed before a trip to Creatron.

Math to determine the button coordinates.

Context and Research (500 words)

We have moved into a generation and mindset that believes in all-magical products that will solve all of our problems – the magic bullet, essentially. In this case, more and more parents and educators have turned to technology to begin educating their children. There are countless products on the technological market that aim to teach children academic skills from an early age. While there is a strong link between early childhood education and greater academic achievement in today’s students, we must still consider the role that technology plays (Wardle, 2008). What age is appropriate to allow children to start playing with technology such as tablets, computers and mobiles? Are these tools and gadgets so integrated into society that we do not need to worry, and are generations raised before these technologies being unnecessarily cautious? Or should we slow down and have parents and educators consider the implications of technology-based learning through devices such as computers and tablets?

Many experts believe that computers are not developmentally appropriate for children under three years old (Wardle, 2008). The same experts also believe that these technologies are useful in educating children above the age of three (Wardle, 2008). The danger, experts believe, is that computers will only reinforce national trends towards earlier, more academic skill acquisition, while ignoring other important developmental skills (Wardle, 2008). All children develop at a different pace, with different learning styles. While technology can help to enhance early childhood education, we must be careful in assuming its overall effectiveness. Correlation does not always equate with causation. Early childhood educators and teachers must still ensure that children are able to develop important skills such as a mastery of language, social interaction, and a natural curiosity to explore. Children and learners at any age will also need different learning modalities to be successful, especially since children cannot sit and complete one task for extended periods of time. We need to integrate aspects of moving, dance and play into the learning mix as well.

There are countless products and apps on the market for young learners. Two similar to the Teddy Light project are featured below. Read with Me Scout is a toy for children ages 2-5. He accompanies children as a friend that reads and asks questions from board-based books. He engages children by asking questions throughout the reading process to help build comprehension skills, rather than leaving them as passive listeners. While this product uses technology to thoughtfully probe and have readers think critically about the story, it assumes that children are answering the questions correctly, or even answering at all. A parent or other adult still needs to be present to reinforce that learning. It also limits the skill set to academics; children are more than just students, and they need to be engaged in other areas as well. Another similar product, featured in the image below, is the InnoTab 3. It offers educational games to teach reading, math, social studies, handwriting, science, problem solving and geography. Hundreds of software cartridges can be downloaded to enhance the learning and fun. It also makes use of tilt sensors, D-pad controllers, a microphone and cameras to interact with the games. There is also an e-reader and story dictionary component. In essence, it is a tablet designed for children’s learning. Again, the skills, although more varied than with Read with Me Scout, are still focused largely on academics. It does not encourage social interaction or exploration of the physical world around us.

While tablets and computers can help improve academic skills, we cannot ignore the rest of the skills that make us well rounded and able to integrate into a productive society. The answer to effective early education is a holistic approach with constant engagement, feedback and involvement of others. These technologies, although helpful, must be seen and used for what they are – mere tools, not an all-magical solution. The goal is to graduate a variety of students with varying life skills, not merely to make them encyclopedias.


Read with me Scout and the InnoTab 3 pictured above.

References

DistantMusic – Richard Borbridge and Sherif Taalab


Live music everywhere.

Host a virtual concert in any space, streaming the note-for-note experience of great musicians in great venues.

WHAT

Distant Music works by bridging a transmitter and receiver for MIDI signals, and outputting a real-time performance in distant spaces. We identified applications including house parties and in-home listening parties, shared concert experiences among multiple venues, adaptations for virtual bands, remote musical collaboration, and connecting performance and art installations in different gallery spaces.


WHY

The concept of Distant Music was demonstrated by installing the prototype in a mock lounge setting, providing a basic visualization that responded to the key-presses transmitted across a wireless network. The lounge setting stood in for one of the prime market opportunities for this device – as a data broadcasting tool. Conceived through our “change the vibe” buttons, users in each venue could experience different streams with a click, “attending” several concerts in a single night. Distant Music takes a prevalent concept of connectivity and begins to apply it to a future conception of digital music. As more music is rooted in a digital interface, the opportunity to transmit and reinterpret traditional analogue sounds grows through technologies like MIDI, or more likely Networked MIDI and Open Sound Control (OSC). Distant Music explores the principles behind connecting to existing instruments in new ways.

PROCESS

The concept began as a wireless musical instrument – a virtual band member based on an Android tablet. Challenges were present throughout the prototyping. Most notably, the inherent limitations of MIDI on Android, Bluetooth idiosyncrasies, reading, parsing, and interpreting data through a serial bus, and ultimately passing serial data through to a MIDI interpreter were each “showstoppers” that needed to be considered and overcome. XBee served as an unencumbered replacement for a wired signal, but given its range it would have limited efficacy for the final, wider intent.

PRODUCT

The final project connected a MIDI keyboard with a visualizer and audio output in a remote location. Through a standard MIDI interface, the keyboard is attached to a MIDI shield on an Arduino, processed and transmitted through an XBee radio tuned to transmit at 31250 baud — the MIDI standard — which facilitated interpretation by the receiver. A second XBee radio, receiving the raw data stream, was connected to a single-port USB MIDI interface via the MIDI-IN channel, allowing the onboard DAC and microprocessor to prepare the signal for interpretation by the operating system’s onboard MIDI subsystem; from there it was passed through to the MidiBus library in Processing and the visualizer and player prepared in the code.
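A hedged sketch of the receiving end follows: once the USB MIDI interface appears as an operating-system MIDI input, The MidiBus delivers NoteOn events to Processing, which drives the visualizer. The device index is an assumption; MidiBus.list() prints the real device names.

import themidibus.*;

MidiBus bus;
float ding = 0; // simple "ding" radius driven by the last key press

void setup() {
  size(800, 600);
  MidiBus.list();                 // print available MIDI inputs/outputs
  bus = new MidiBus(this, 0, -1); // input device 0, no output (index is an assumption)
}

void draw() {
  background(0);
  if (ding > 0) {
    noFill();
    stroke(255);
    ellipse(width / 2, height / 2, ding, ding);
    ding += 8;                    // ripple outward
    if (ding > width) ding = 0;
  }
}

// Called by The MidiBus for every NoteOn arriving over the XBee/MIDI chain.
void noteOn(int channel, int pitch, int velocity) {
  ding = map(velocity, 0, 127, 20, 120);
}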


 

HOW

Visualizations in this project simplistically responded to NoteOn signals, “dinging” with each keypress to show explicitly the musical event. Future visualizations may enhance the experience of the sound through more atmospheric and immersive interpretations of the signals, bringing an even richer experience to the user.

DistantMusic-circuits

PROBLEMS

In our experience, MIDI serves well as a control mechanism, but its functionality as a musical interface falls short in the face of wireless communication. In exploring Bluetooth and XBee, the limitations were apparent. Wi-Fi is the next logical step in the evolution of this project to achieve the distances implied in the prototype. The perpetual challenge is the fundamental relationship between music and time: wireless technologies are all saddled with a latency that makes true interoperability challenging at best. Previous achievements, including Yamaha’s Elton John presentation [http://www.youtube.com/watch?v=phLe4F8mRoc], and emerging mobile projects like Ocarina [http://www.smule.com/apps#ocarina] are modelled on a single instrument, rather than a mass collaboration. The value of Distant Music remains as a marketable opportunity to connect musicians in real time, rather than through the exchange of large high-resolution files or low-resolution collaborative techniques.


FUTURE

The project proposed several questions about the future of music production and performance: how can/will music production respond to the shrinking world and harness new collaborative techniques to create new sounds and bring disparate and distant artists together? How can technology help musical performance to reach a broader audience and more varied venues? What implications are there for musical performance as it is separated from its inherent spatial and experiential qualities? While Distant Music may pose more questions than answers, it does begin to address two key issues: the technical potential and limitations of broadcasting JIT (just-in-time), rather than produced music, and the digital future of performance and collaboration.

 

Project Source Code

Project Photos

Project Video

 

 

Project 3: Tweet Monitor – Created by Heather Simmons, Laura Stavro & Areen Salam

 


Inspiration:


Tweet Monitor:

Our concept for Tweet Monitor is a 2-way communication product that creates a meaningful connection between users across space and time.   The product enables users, via Twitter and SMS, to send and receive messages correlated to physical activities.  In the first application, tweets (via force sensor and microphone input) notify a dog owner every three minutes when the dog has jumped or barked.  A tweet from owner to dog can also be sent, activating a green light signal and a voice message to the dog. One of the key features of the Tweet Monitor is that you are able to stream a live video transmission through a website at all times from anywhere in the world.  Hourly summary tweets and a daily update graph allow owners to view summary information at their leisure.

For our project Tweet Monitor, we used Arthur the dog as our inspiration and created our final project for the Creation & Computation course.

User Requirements and Design Considerations:

  • Two way communications – dog to owner and owner to dog
  • Both visual and audible communications
  • Direct text message to owner to make monitoring easy
  • Frequent updates, every 5 minutes at least, in the early days
  • Hourly summary in case we eventually want to shut off the more frequent updates
  • Daily graph of activity
  • Durable – able to withstand dog jumping on it
  • Calibrated to the barks and weight of a very small dog

Project Demo:

 

Process:

Visualizer/Screensaver:

Following a few simple steps that we found on this blog, we created a visualizer. We made a few changes for our project and integrated it with the rest of the code.

The first step was to download and install Twitter4J in Processing. The next step was setting up authentication with the Twitter API by visiting https://dev.twitter.com/ and generating a consumer key, consumer secret, access token and token secret.

We thought it would be interesting to create a visualizer and embed it in our code to serve as a screen saver. We built an ArrayList to hold all of the words that we got from the imported tweets. The Twitter object gets built by something called the TwitterFactory, which needs the configuration information that we set using the generated keys and access tokens.

Now that we had a Twitter object, we built a query to search via the Twitter API for a specific term or phrase. This code  will not always work – sometimes the Twitter API might be down,  our search might not return any results or we might not be connected to the internet. The Twitter object in twitter4j handles those types of conditions by throwing back an exception to us; we need to have a try/catch structure ready to deal with that if it happens.

After that, we made a query request and broke the tweets into words. We put each word into the words ArrayList. In order to make the words appear to fade over time, we drew a faint black rectangle continuously over the window. We then randomly set the size and color of the words, which appear in random order in the black window.
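A condensed, hedged version of that visualizer logic is below. The keys are placeholders for the ones generated at dev.twitter.com, and the search term is illustrative only.

import twitter4j.*;
import twitter4j.conf.*;

Twitter twitter;
ArrayList<String> words = new ArrayList<String>();

void setup() {
  size(800, 600);
  background(0);
  // Build the Twitter object via TwitterFactory, using the generated keys.
  ConfigurationBuilder cb = new ConfigurationBuilder();
  cb.setOAuthConsumerKey("CONSUMER_KEY");
  cb.setOAuthConsumerSecret("CONSUMER_SECRET");
  cb.setOAuthAccessToken("ACCESS_TOKEN");
  cb.setOAuthAccessTokenSecret("TOKEN_SECRET");
  twitter = new TwitterFactory(cb.build()).getInstance();

  // The search can fail (API down, no network, no results), so it sits in a try/catch.
  try {
    QueryResult result = twitter.search(new Query("puppy")); // placeholder search term
    for (Status status : result.getTweets()) {
      for (String w : splitTokens(status.getText())) {
        words.add(w); // break each tweet into words
      }
    }
  } catch (TwitterException e) {
    println("Couldn't reach Twitter: " + e);
  }
}

void draw() {
  // A faint black rectangle drawn every frame makes older words fade out.
  noStroke();
  fill(0, 10);
  rect(0, 0, width, height);

  if (words.size() > 0 && frameCount % 10 == 0) {
    String w = words.get((int) random(words.size()));
    fill(random(255), random(255), random(255));
    textSize(random(12, 48));
    text(w, random(width), random(height - 20));
  }
}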


Microphone

Figuring out the microphone was a bit of a cut and paste. Though we’d seen Kate use the microphone as a trigger, we weren’t sure which library she used, so we had a look at a number of different examples. The most useful example was on creativecoding.org. The code was in German, and Steffen Fiedler’s example used a sonar-like visualizer, but it gave us the basis to isolate the line-in and volume code.

We then added in the println portion to allow us to properly test and calibrate the volume. Arthur’s barks vary widely in volume, which gave us an opportunity to include a range of tweets triggered by the different levels of his voice. As the volume rises, so does the urgency of the messages. We’re not sure this is an accurate translation, but it’s fun to give a voice to a puppy!
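A hedged reconstruction of the microphone piece: Minim's AudioInput gives a running level that we can println() for calibration and compare against bark thresholds. The threshold numbers here are placeholders, not Arthur's actual calibration values.

import ddf.minim.*;

Minim minim;
AudioInput in;

float quietBark = 0.05; // placeholder thresholds
float loudBark = 0.20;

void setup() {
  size(400, 200);
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 512);
}

void draw() {
  background(0);
  float level = in.mix.level(); // roughly 0.0 to 1.0
  println(level);               // watch this while the dog barks to calibrate
  if (level > loudBark) {
    println("URGENT: loud bark detected");
  } else if (level > quietBark) {
    println("bark detected");
  }
}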

Sending a Tweet from Dog to Owner

We used a force sensor connected to the Arduino and the mic input via the minim library and compared those values to thresholds set in the code.  The force sensor reading was sent over the serial port to Processing – this is in our serialEvent(Serial port) method.  If the noise/force exceeded the threshold, a tweet was sent using twitter4j’s updateStatus() method.    The code to send the tweet is contained in our postMsg() function.
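A hedged, stripped-down sketch of that dog-to-owner path: the Arduino prints one force reading per line, serialEvent() parses it, and a tweet goes out when the threshold is crossed. The threshold, port index and message text are placeholders, and the sketch assumes twitter4j credentials are available (for example via a twitter4j.properties file).

import processing.serial.*;
import twitter4j.*;

Twitter twitter;
Serial port;
int forceThreshold = 400; // placeholder; calibrate to the gate and the dog

void setup() {
  twitter = TwitterFactory.getSingleton(); // assumes keys in twitter4j.properties
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');                  // the Arduino sends one reading per line
}

void draw() {
  // nothing to draw; the work happens in serialEvent()
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  int force = int(trim(line));
  if (force > forceThreshold) {
    postMsg("Arthur jumped on the gate! " + hour() + ":" + nf(minute(), 2) + ":" + nf(second(), 2));
  }
}

void postMsg(String msg) {
  try {
    twitter.updateStatus(msg); // twitter4j call
  } catch (TwitterException e) {
    println("Tweet failed: " + e);
  }
}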


 

Twitter to text/SMS

After the Twitter feed connection was established using the twitter4j library, we were able to use IFTTT to create a recipe that routes the tweets to SMS.


This made the project more useable in the real world. Though of course one could constantly monitor @artiethewestie’s Twitter feed, that’s not really conducive to leading a normal life.

Using the SMS feature makes it easy and streamlined – no checking Twitter; instead, updates are routed to your cellphone, which vibrates or jingles when the dog barks or pushes on the gate. This gives the owner an understanding of how often, how intensely and for how long their puppy is barking.


Two Way Communications – Sending Tweet from Owner to Dog

The function getLastTweet() allows the owner to send a Tweet with a coded keyword in it (in this case, the word Artiehome), which sets off a light on the dog’s gate and plays a sound file telling the dog that the owner is coming home soon. (We modified some code from http://lucidtronix.com/tutorials/32 to get started.) getLastTweet() causes Twitter to search for all Tweets containing the word Artiehome. If it finds one, and if that Tweet is not exactly the same as the one it previously pulled, it uses splitTokens() to grab the first word in that tweet. If that first word is Artiehome, it sends the code “1” through Processing to Arduino via the serial port. Arduino reads that code from the serial port and causes the light to flash. Processing then plays the sound file (this function is playSoundClip()).
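A hedged version of getLastTweet(), meant to drop into the sketch above (it reuses the same twitter object, serial port and a playSoundClip() helper):

String lastHandled = "";

void getLastTweet() {
  try {
    QueryResult result = twitter.search(new Query("Artiehome"));
    if (result.getTweets().size() == 0) return;
    String text = result.getTweets().get(0).getText();
    if (text.equals(lastHandled)) return;   // same tweet as last time, ignore
    lastHandled = text;

    String[] tokens = splitTokens(text);    // the first word must be the key
    if (tokens.length > 0 && tokens[0].equals("Artiehome")) {
      port.write('1');                      // Arduino flashes the green LEDs
      playSoundClip("artiesound.wav");      // Processing plays the voice message
    }
  } catch (TwitterException e) {
    println("Search failed: " + e);
  }
}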

Livefeed to Website

We wanted to create a site that would allow for visual monitoring of the puppy and the tweets, since only one phone can be connected using IFTTT. By using Ustream.tv and a webcam, we were able to get a good angle on Arthur’s space. Ustream.tv is a great system, and we were able to embed it into a WordPress site with the Twitter feed in the sidebar. The web page gives a single visual representation of the previously invisible trigger-and-response action, allowing the puppy and its tweets to be monitored remotely by many users. It is accessed over Wi-Fi, which makes it available at those times when there is no cell phone reception but an internet or Wi-Fi connection is available.


Hourly Update

Not all owners want a Twitter update every three minutes, so we created the function hourlyUpdate(), which, each hour, sends a summary of Arthur’s total barks and jumps in that hour to Twitter.

Graph

We then created a summary graph, tied to the keyPressed() function (up key to graph, down key to return to main), which shows Arthur’s total barks and jumps since the program was last run.
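A minimal sketch of the keyPressed() toggle and a bar graph (the counts come from the main sketch; the layout here is purely illustrative):

boolean showGraph = false;
int totalBarks = 0; // accumulated by the main sketch
int totalJumps = 0;

void setup() {
  size(400, 350);
  textSize(16);
}

void draw() {
  background(255);
  fill(0);
  if (showGraph) {
    text("Barks today: " + totalBarks, 50, 30);
    text("Jumps today: " + totalJumps, 220, 30);
    fill(100, 100, 255);
    rect(50, 320 - totalBarks * 5, 80, totalBarks * 5);
    fill(255, 100, 100);
    rect(220, 320 - totalJumps * 5, 80, totalJumps * 5);
  } else {
    text("Readout mode (press UP for the graph)", 20, 30);
  }
}

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP) showGraph = true;    // up arrow: show the graph
    if (keyCode == DOWN) showGraph = false; // down arrow: back to the readout
  }
}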


Fabrication:

To create the Tweet Monitor product, we used a long force sensor, which we pasted on a hard cardboard strip cut to the length and width of the sensor. In order to conceal the force sensor, we wrapped canvas around the force sensor and cardboard. Once the force sensor was concealed properly, we mounted it on the baby gate, making sure it was at an appropriate height for Arthur the dog to jump and touch the force sensor on the gate. The force sensor was soldered to additional wires to extend it and then wired into the Arduino. The Arduino, along with the breadboard, was safely placed in a canvas pouch behind the baby gate.


 System Diagram


 

Tweet Monitor Code (Functionality):

  • Wireless – BlueSMiRF or Bluetooth Mate Gold.
  • Uses Twitter4j library, video library, and minim audio library.
  • 2 modes – screensaver and readout.  Toggle between them by moving cursor into or out of upper right corner of screen.
  • Screensaver mode displays random words from Arthur’s tweets.
  • 4 inputs – force sensor in a gate, microphone in laptop, webcam, and Tweets using a keyword.
  • 4 outputs – audio, LEDs, streamed video, and Tweets.
  • Readout displays different messages depending on whether dog is barking, barking loudly, jumping on the gate, or both.
  • Once every two minutes (time threshold is adjustable), sends a Tweet saying whether the dog is barking, jumping, or both.  Owner can retrieve Tweets from anywhere and also gets a text message on cellphone (IFTTT).
  • Once an hour, sends a summary Tweet saying how many times the dog has barked or jumped this hour.
  • Pressing the up arrow key reveals a graph of the number of times the dog has barked and jumped today.  Pressing the down arrow key returns to readout mode.
  • The dog owner can send a keyword in a Tweet, from anywhere and from any account, which will cause green LEDs on the gate to flash 3 times (from Arduino), and a sound file to play (from Processing) telling Arthur that his owner will be home soon.
  • In other applications, such as eldercare, the flashing green lights and audio file could be used to indicate “help is on the way.”  In a baby monitoring application, the lights could be nightlights turned on and off via Tweet by a parent, and the sound file could be a lullaby.

Key Coding Challenges:

Avoid Crashing Twitter:

  • Twitter limits the number of Tweets to 1,000 per day.  To avoid crashing Twitter while still continuously monitoring for barks or jumps, a timer was used (a sketch of the timer follows this list).  If the time threshold since the last Tweet has passed when the dog jumps or barks, a Tweet is sent.  If not, the reading is passed to a “temporary” variable whose release is triggered when the threshold has passed.
  • Twitter also prevents users from posting duplicate messages.  All bark or jump messages include a time stamp to prevent duplicates from being recorded.  The keycode for inbound tweets used to set off the LED and sound file is the word “Artiehome.”  The owner, when tweeting to the dog, tweets the word Artiehome, then space, then anything.  The reason for “space, then anything” is again to avoid a duplicate message.  The splitTokens() function is then used to parse out the first word in the inbound tweet, Artiehome, which is then compared to the required key specified in the program (Artiehome), to ensure a match.
  • Twitter also limits the number of queries an application can make per 15 minute window.  The code which compares keywords in inbound Tweets to the key in the application is therefore also nested in the timer.
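As referenced above, a hedged reconstruction of the timer gate (it assumes the postMsg() wrapper shown earlier; the interval is adjustable):

int lastTweet = 0;
int tweetInterval = 3 * 60 * 1000; // milliseconds between tweets; adjustable
String pending = null;             // the "temporary" variable holding a blocked message

void maybeTweet(String msg) {
  // The timestamp keeps every message unique so Twitter doesn't reject duplicates.
  String stamped = msg + " (" + hour() + ":" + nf(minute(), 2) + ":" + nf(second(), 2) + ")";
  if (millis() - lastTweet > tweetInterval) {
    postMsg(stamped);
    lastTweet = millis();
  } else {
    pending = stamped; // hold it until the window reopens
  }
}

void draw() {
  // Release a held message (and run any inbound-keyword queries) only once the
  // time threshold has passed, keeping us inside Twitter's rate limits.
  if (pending != null && millis() - lastTweet > tweetInterval) {
    postMsg(pending);
    lastTweet = millis();
    pending = null;
  }
}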

The key to comparing strings:

Use the equals() function, not ==, because == may return false even when the strings are equal: == compares the location of the String objects in memory, not the characters themselves.

if (homeKey.equals(twitterKey)) {

playSoundClip("artiesound.wav");

twitterKey = "";

}

Fritzing:

Breadboard View & Schematic View:

Next Steps:

  • Build a data set over time for training purposes.  We could send daily data to a text file (similar to our classmates who completed the “Hush” project) and from there display daily graphs of Arthur’s barks and jumps, perhaps with hourly breakdowns within the day to identify “problem times” for the dog.
  • Consider integration of other sensors for different applications (motion sensors to detect falls in seniors, for example).
  • Consider building a durable, mobile rubber robot to increase interaction between owner and dog, similar to the one built by the Pawly team.  This robot could be controlled via Twitter “commands”.

Similar Products:

i) Angel Care


 

  • Under-the-Mattress Movement Sensor Pad
  • Color Video Transmission
  • LCD Touch Screen
  • 2-Way Talk-Back Feature
  • Audio “Tic” Feature
  • Adjustable Camera Angle

 

ii) GrandCare System


 

Motion Sensors

Daily and weekly graphing – Detect patterns of motion

If there is No motion in kitchen from 8am to noon, email daughter.

If Any motion is detected at the foot of bed at night, turn on the bathroom light.

If there is Excessive motion in bathroom for more than 15 minutes between 10pm and 6am, contact Emergency Call List.

Door Sensors

Daily graphing – Used for doors, windows, cabinets, drawers, refrigerators

If door is Opened from 10pm to 6am, call neighbor.

If the pill cabinet is Not Opened between 8-10am, send a reminder call to the Resident.

If door is Not Opened between 10-11am call Supervisor (the Caregiver has not arrived).

Bed and Chair Sensors

Nightly graphing – Sleeping patterns or when a loved one leaves bed at night

If nobody is In Bed for more than 45 minutes between 10pm and 6am, text night nurse
(Resident got up and didn’t return to bed).

————————————————————————————————————————————————-

Project Code:

In order to run the following Processing sketch, please add this photo to the sketch and name it “arthur.jpg”

arthur

In order to run the Processing sketch, please download and add the sound file “artiesound.wav” to your sketch from the website link attached below:

Github Arduino Code: https://github.com/heathersimmons/Creation2/blob/master/tweetDogArduino

Project Demo (C&C Group)

Vimeo: https://vimeo.com/groups/209883/videos/80624483

References:

Visualizer/screensaver ref: http://saglamdeniz.com/blog/?p=124

Two-way communications – keycoded Tweets from owner to dog http://lucidtronix.com/tutorials/32

Microphone Code:

Steffen Fiedler http://www.creativecoding.org/lesson/topics/audio/sound-in-processing

Minim Manual – http://code.compartmental.net/tools/minim/manual-minim/

 

 

PROJECT 3 – Laura Wright and Katie Meyer: HUSH

Project description

HUSH measures volume in a chosen space, and then displays that volume in three different ways: it tweets the average volume levels from @VolumeBot2013, displays the current volume levels in the chosen space, and displays historic volume trends online.

HUSH connects two different spaces: the chosen space, and wherever the user is. In this case, the Digital Futures Initiative lab was chosen as a test space. A student can check how loud the space is before travelling from home to work there.

The idea for HUSH sprang from a frustration with the often-noisy DFI lab space. About 35 students use the communal space, and it inevitably gets loud sometimes. We wanted a way to be able to see how loud the lab is before making the (sometimes long) commute to school.

Vimeo video:

When we began researching the project, it was difficult for us to find comparables; measuring volume in shared spaces seemed to be done in informal ways, such as through qualitative analysis and casual verbal reporting, unless there was a formal workplace study underway. The lack of examples on that side led us to use sound visualizers as touch points for the project. On Open Processing, we found circular sound visualizations (http://www.openprocessing.org/sketch/5989) which became points of inspiration for our analog clock. Projects that visualized other data sets, like text, also helped us reimagine the traditional clock face for our purposes.

We conducted informal interviews with classmates across DFI1 and DFI2, and discovered that noise levels in the space are a major issue. Many shared the frustration of commuting to the space to study, only to find on arrival that it was too loud for them to concentrate.

“Sometimes it gets really loud, but most of the time it’s quite tolerable,” said Humberto Aldaz, a first year DFI student.

The fluctuating noise levels seemed to cause problems.

“I find it frustrating when I come to the studio space and it’s loud,” said Ardavan Mirhosseini, another first year student.

And so, HUSH was born.

How it works

HUSH uses a microphone that is hooked up to a laptop. The laptop runs several sketches in Processing. The laptop is also connected to a projector.


There are three Processing sketches involved with HUSH:

  • Twitter: the first sketch reads the volume levels. The average volume level of the room is tweeted out by @VolumeBot2013 every 10 minutes, unless there has been no change from the previous 10 minutes.
  • Projection: the second sketch reads the volume levels and displays them on a clock interface. In the DFI lab, a projector is used to display this visualization on the wall. The colours of the clock’s minute and hour hands change as the volume level changes.
  • Data visualization: the third sketch reads the volume levels and gathers them in a text file. This file is then converted, through Processing, into an interactive data visualization which shows the volume trends for the lab over a 24-hour period, overwriting the levels for each day to show general trends. This is then put online. This visualization allows users to see when the lab is usually at its loudest or quietest.

Hardware and Code

Circuit diagram:


Links to github:

 

Design files

  • User Journey
  • Clock visualization
  • Data visualization

 

Problems encountered

While the idea for this project is fairly simple, it was a technical challenge for a number of reasons.

1. Working with Twitter

We installed the Twitter4j library (http://twitter4j.org/en/) and figured out what code we needed to write to hook up our sketch with a Twitter account. However, Twitter only allows for 1,000 tweets a day. It is extremely easy to ‘overload’ Twitter (meaning exceed that number) with Processing sketches because they run very quickly. Once you send too many tweets in a row, or too many tweets that have the same text, Twitter will shut your account down for a few hours. This makes testing with Twitter frustrating. We eventually found a way to make our account only tweet every five minutes, and to also add a timestamp to every tweet so that there are no duplicates, which Twitter does not like.

2. The wires

We used a microphone to record the data. In order to position the microphone in a central location, we had to run long wires, which was surprisingly difficult. We spent a long time working with TRS cables before realizing we needed TRRS cables, which carry a microphone signal. These are surprisingly hard to come by, which added an extra challenge.

3. Recording data

We had several options for gathering our volume data. Originally we wanted to pull an XML feed from the Twitter account, but this proved too difficult, so we chose to work with the Processing sketch’s own volume readings. After much searching online, and help from Nick, we managed to collect our volume data in a .txt file, which can then be used for the visualizations.
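
As an illustration of that step (again, not our exact code), a Processing sketch can log timestamped volume readings to a text file with a PrintWriter; the file name volumes.txt is just an example:

    import ddf.minim.*;

    Minim minim;
    AudioInput in;
    PrintWriter output;

    void setup() {
      minim = new Minim(this);
      in = minim.getLineIn(Minim.MONO, 512);
      output = createWriter("volumes.txt");    // example file name, written into the sketch folder
    }

    void draw() {
      // log a timestamped volume sample every frame
      output.println(hour() + ":" + nf(minute(), 2) + ":" + nf(second(), 2) + "," + in.mix.level());
      output.flush();                          // flush so the data survives if the sketch is stopped abruptly
    }

    void keyPressed() {
      output.close();                          // press any key to close the file cleanly and quit
      exit();
    }

One sample per frame is far more data than needed, so in practice it makes sense to write a value only every few seconds.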

4. Exporting to Android

Ideally, we wanted to export our Processing sketches as Android apps and run them on a tablet. This was not working, and we only later discovered that the Minim library (on which our volume measuring relies heavily) does not work on Android. So the tablet dream had to die.

 

A screen shot of our Processing code, which shows a funny error message we got when we accidentally added a data file to the same sketch twice.


Alternative applications for HUSH

The DFI lab was a test space for the HUSH project, but the idea applies to other shared spaces where noise levels are a factor, such as cafés, libraries or offices. It could also be used in places that are meant to be loud, such as concert venues (is the band playing yet?) or bars (is it busy yet?). Moving forward, we hope to get the project working on a tablet and permanently install it in the DFI space.

Project 3 – The Magic Jungle – Klaudia Rainie Han – Paula Aguirre G – Anna Sun

MAGIC JUNGLE

Can you imagine being transformed into a tree, flying with your own wings, or playing with animals?

Magic Jungle creates an interactive experience that allows the audience to explore jungle life using some of the most recent technology. The virtual jungle world generates sounds and shapes as you interact with it, tracking your physical movements.

Why a jungle?

The jungle is magical and enchanting in itself.

In the project you can see yourself being transformed, almost magically, into a whole new and unique creature. That is why, in our search for a setting, we came up with the jungle.

The sounds, the colors, the lights and shadows, the animals and all the other elements of the jungle work together to give you an amazing sense of immersion while you are in this place.

That is why we chose a jungle and not another space: we want to make use of the feeling of exploring a magical world that only the jungle can produce.

Eco-care

We want to share a little bit of the jungle with the audience in order to promote caring for it. That is why we are bringing the jungle into the city: we want to raise awareness of how important the natural world is for our survival and let people experience the magic that nature has always had.

Inspiration

We wanted to create a project that gives the audience the chance to interact in a magical, fun and interesting way. That is why, when we were thinking about what to do, we remembered an installation presented a year ago at MoMA in New York.

This installation, called “Shadow Monsters”, was designed by the interaction designer Philip Worthington. In it, the fantasy is to see monsters materialize from the audience’s shadows.


Philip Worthington. Shadow Monsters. 2004–ongoing. Java, Processing, BlobDetection, SoNIA, and Physics software.

What will you experience?

How long have you been stuck in the city? Have you ever thought about escaping it and exploring jungle life? Welcome to our Magic Jungle! Our project offers the audience a unique experience of interacting with jungle animals and plants through physical movement and sound. In our magical world you will be transformed into a tree or a magic creature with wings, and you will also have the opportunity to play with jungle animals. You will hear the sound of wind when you touch the branches of the tree, and our magic frog will even talk to you if you pat its back. Help yourself to the magic of the jungle!

How does it work?

The interaction with Magic Jungle is so simple that you will love it!

You need to be in front of the Kinect so it can detect you.

When it detects you, you can start moving and watch new shapes appear attached to your own body, turning you into a new and magical jungle creature. In the same space you will also find jungle animals, and if you make the right move, you will hear them talk.

To make Magic Jungle we used Arduino and Processing together.

Body detection is handled by the Kinect and Processing.

When you stand in front of the Kinect, it detects your shape and shows it on the screen like a shadow. As you move, it modifies your body’s shadow into a new one, so you can see in real time how you are being transformed into a new and magical jungle creature.
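
As a rough illustration of that silhouette idea (not our project code, and assuming the SimpleOpenNI library, a common Kinect library for Processing at the time), a minimal user-shadow sketch could look like the following; method names vary a little between SimpleOpenNI versions:

    import SimpleOpenNI.*;

    SimpleOpenNI context;

    void setup() {
      size(640, 480);
      context = new SimpleOpenNI(this);
      context.enableDepth();   // depth camera
      context.enableUser();    // body tracking (older versions use enableUser(SimpleOpenNI.SKEL_PROFILE_ALL))
    }

    void draw() {
      context.update();
      background(0);
      int[] userMap = context.userMap();   // one label per depth pixel; 0 means no user there
      loadPixels();
      for (int i = 0; i < userMap.length && i < pixels.length; i++) {
        if (userMap[i] != 0) {
          pixels[i] = color(255);          // draw the detected body as a white shadow
        }
      }
      updatePixels();
    }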

Flex sensors work together with the jungle-creature toys on the Arduino. You can interact with a frog and a tree.

Each flex sensor is calibrated. When you interact with one of the jungle creatures, the serial monitor receives the sensor’s data; when the reading reaches a specific number (defined according to the range each sensor showed during calibration), a signal is sent to Processing. Processing receives the signal and plays a different sound for each jungle character.
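
A simplified Processing-side sketch of that idea follows. It is only an illustration: the serial message format, the port index, the thresholds and the sound file names are assumed example values, not the project’s actual ones.

    import processing.serial.*;
    import ddf.minim.*;

    Serial port;
    Minim minim;
    AudioPlayer frogSound, treeSound;

    void setup() {
      minim = new Minim(this);
      frogSound = minim.loadFile("frog.mp3");            // assumed file names in the sketch's data folder
      treeSound = minim.loadFile("wind.mp3");
      port = new Serial(this, Serial.list()[0], 9600);   // assumes the XBee/Arduino is the first serial port
      port.bufferUntil('\n');
    }

    void draw() {
      // drawing of the jungle shapes would happen here
    }

    void serialEvent(Serial p) {
      String line = p.readStringUntil('\n');
      if (line == null) return;
      // assumed message format from the Arduino: "frog,<value>" or "tree,<value>"
      String[] parts = split(trim(line), ',');
      if (parts.length < 2) return;
      int value = int(parts[1]);
      if (parts[0].equals("frog") && value > 600) {      // 600 is a placeholder calibration threshold
        frogSound.rewind();
        frogSound.play();
      } else if (parts[0].equals("tree") && value > 500) {
        treeSound.rewind();
        treeSound.play();
      }
    }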

Prototyping

– Input

  1. Frog (Flex Sensor)
  2. Tree (Flex Sensor)
  3. People (Motion Detection)

– Output

  1. Sound
  2. Images & Shapes

– Function

  • When the user presses the frog, Processing will generate a sound and also a shape on the screen.
  • When the user shakes the tree, Processing will generate a sound.
  • When the user opens their arms, Processing will generate wings on their figure.
  • When the user puts their arms up, Processing will generate trees on their hands.
  • When the user makes monkey movements, Processing will generate a monkey face on their figure.

Go Wireless

In order to improve the flexibility of setting the project up in a large space, we decided to make the Arduino device wireless. We achieved this with two XBee modules, one LilyPad XBee, one XBee Explorer and the CoolTerm application, which we used to configure the XBee modules (a typical configuration is sketched after the list below). First we installed CoolTerm to help us configure the two modules. The “B” XBee module, which sits on the LilyPad XBee, is connected to the Arduino LilyPad after configuration. The “A” XBee module sits on the XBee Explorer and is connected directly to the computer. After the setup, we upload the code from the computer to the Arduino LilyPad and save it there for further wireless use. Once this is done, the two XBee modules are ready to communicate with each other and allow the circuit to work wirelessly.

  1. Jungle creatures: each one has a flex sensor connected to the Arduino LilyPad board.
  2. LilyPad board: connected to XBee “B”.
  3. XBee “B”: communicates wirelessly with XBee “A”.
  4. XBee “A”: connected to the computer (Processing).
  5. Computer: connected to the Kinect.
  6. Kinect: connected to a projector.
  7. Projector: allows users to interact with Magic Jungle.
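
For reference, a typical point-to-point configuration of two Series 1 XBee modules in CoolTerm uses AT commands along these lines; the PAN ID and addresses are example values, not the ones we used:

    +++            enter command mode (the module answers "OK")
    ATID 3001      shared PAN ID for both modules (example value)
    ATMY 1         this module's own 16-bit address
    ATDL 2         destination address (the other module's ATMY)
    ATDH 0         destination address, high part
    ATWR           write the settings to the module's memory
    ATCN           leave command mode

Module “A” on the XBee Explorer would then be given the mirrored addresses (ATMY 2, ATDL 1) so that each module points at the other.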

PROTOTYPING

 

Circuit Diagrams

MAGIC JUNGLE_bb

MAGIC JUNGLE_esquema

 

Code link

https://github.com/KlaudiaHan/project-3

Pictures of the Project


Project video

Related Projects and Research

–       Philip Worthington. Shadow Monsters. 2004. http://www.youtube.com/watch?v=g0TOQo_7te4

–       Amnon Owed. CAN Kinect Physics Tutorial. 2012.

–       Lisa Dalhuijsen and Lieven van Velthoven. Shadow Creatures. 2009.

–       Theo Watson. Interactive Puppet. 2010. https://vimeo.com/16985224

–       Arduino. Getting Started with LilyPad.

–       Enrique Ramos Melgar and Ciriaco Castro Diez. Arduino and Kinect Projects: Design, Build, Blow Their Minds. Apress, first edition.