Experiment 5 – OK to Touch!?

Project Title
OK to Touch!?

Team members
Priya Bandodkar & Manisha Laroia

Mentors
Kate Hartman & Nick Puckett

Project Description
Code | Computer Vision | p5js
OK to Touch!? is an interactive experience that brings inconspicuous tracking technology into the spotlight, making it visible to users through their interactions with everyday objects. The concept uses experience to convey how users' private data could be tracked, without their consent, in the digital future.

A variety of popular scripts are invisibly embedded on many web pages, harvesting a snapshot of your computer's configuration to build a digital fingerprint that can be used to track you across the web, even if you clear your cookies. It is only a matter of time before these tracking technologies take over the 'Internet of Things' we are starting to surround ourselves with. From billboards to books and from cars to coffeemakers, physical computing and smart devices are becoming more ubiquitous than we can fathom. As users of smart devices, our very first click or touch on a smart object signs us up to be tracked online and to become another data point in the web of 'device fingerprinting', with no conspicuous privacy policies and no apparent warnings.

With this interactive experience, the designers are attempting to ask the question:
How might we create awareness about the invisible data tracking methods as the ‘Internet of Things’ expands into our everyday objects of use?

Background
In mid-2018, when our inboxes were full of 'updated privacy policy' emails, it was not a chance event that all companies decided to update their policies at the same time, but an after-effect of the enforcement of the GDPR. The General Data Protection Regulation (EU) 2016/679 (GDPR) is a regulation in EU law on data protection and privacy for all individual citizens of the European Union (EU) and the European Economic Area (EEA). It also addresses the transfer of personal data outside the EU and EEA. The consequences of data breaches for political, marketing and technological practices are evident in the Facebook–Cambridge Analytica scandal, the Aadhaar login breach, and the Google Plus leaks that exposed the data of 500,000 people, then 52.5 million more, to name a few.

[Image: News articles about recent data scandals. Image source: https://kipuemr.com/security-compliance/security-statement/]

When the topic of data privacy is brought up in discussion circles, some get agitated about their freedom being affected, some take the fifth, and some say that 'we have nothing to hide.' Data privacy is not about hiding but about being ethical. Much of the data shared across the web is used by select corporations to make profits at the cost of the individual's digital labour; that is why no free software is really free: it comes at the cost of your labour of using it and of allowing the data it generates to be used. Most people don't know what is running in the background of the web pages they hop onto or the voice interactions they have with their devices, and if they don't see it, they don't believe it. With more and more conversation happening around data privacy and ethical design, we believed it would help if we could make this invisible background data transmission visible to the user and initiate a discourse.

Inspiration


The Touch Project
The designers of the Touch project— which explores near-field communication (NFC), or close-range wireless connections between devices—set out to make the immaterial visible, specifically one such technology, radio-frequency identification (RFID), currently used for financial transactions, transportation, and tracking anything from live animals to library books. “Many aspects of RFID interaction are fundamentally invisible,” explains Timo Arnall. “As users we experience two objects communicating through the ‘magic’ of radio waves.” Using an RFID tag (a label containing a microchip and an antenna) equipped with an LED probe that lights up whenever it senses an RFID reader, the designers recorded the interaction between reader and tag over time and created a map of the space in which they engaged. Jack Schulze notes that alongside the new materials used in contemporary design products, “service layers, video, animation, subscription models, customization, interface, software, behaviors, places, radio, data, APIs (application programming interfaces) and connectivity are amongst the immaterials.”
See the detailed project here.


This is Your Digital Fingerprint
Because data is the lifeblood for developing the systems of the future, companies are continuously working to ensure they can harvest data from every aspect of our lives. As you read this, companies are actively developing new code and technologies that seek to exploit our data at the physical level. Good examples of this include the quantified self movement (or “lifelogging”) and the Internet of Things. These initiatives expand data collection beyond our web activity and into our physical lives by creating a network of connected appliances and devices, which, if current circumstances persist, probably have their own trackable fingerprints. From these initiatives, Ben Tarnoff of Logic Magazine concludes that “because any moment may be valuable, every moment must be made into data. This is the logical conclusion of our current trajectory: the total enclosure of reality by capital.” More data, more profit, more exploitation, less privacy. See the detailed article here.


Paper Phone
Paper Phone is an experimental app, developed by the London-based studio Special Projects as part of Google's digital wellbeing experiments, which helps you take a little break from your digital world by printing a personal booklet of the information you'll need that day. Printed versions of the functions you use the most, such as contacts, calendars and maps, let you get things done in a calmer way and help you concentrate on the things that matter most. See the detailed project here.


IRL: Online Life is Real Life
Our online life is real life. We walk, talk, work, LOL and even love on the Internet – but we don't always treat it like real life. Host Manoush Zomorodi explores this disconnect with stories from the wilds of the Web, and gets to the bottom of online issues that affect us all. Whether it's privacy breaches, closed platforms, hacking, fake news, or cyberbullying, we the people have the power to change the course of the Internet, keeping it ethical, safe, weird, and wonderful for everyone. IRL is an original podcast from Firefox, the tech company that believes privacy isn't a policy – it's a right. Hear the podcast here.

These sources helped define the question we were asking and inspired us to show the connection between the physical and the digital: to make the invisible visible and tangible.

The Process

The interactive experience grew out of the 'How Might We' question we raised after our research on data privacy, and we began sketching out the details of the interaction:

  • Which interactions we wanted: touch, sound, voice, or tapping into user behaviour
  • Which tangible objects to use: everyday objects, a new product designed with affordances to interact with, or digital products like mobile phones and laptops
  • Which programming platform to use, and
  • How the setup and user experience would work

While proposing the project we intended to make tangible interactions using Arduino boards embedded in desk objects, with Processing alongside to create visuals that would illustrate the data tracking. We wanted the interactions to be seamless and the setup to look so normal, intuitive and inconspicuous that it would reflect the hidden, creepy nature of data-tracking techniques. Here is the initial setup we had planned to design:

[Image: the initially planned installation setup]

Interestingly, in our early proposal discussion we raised the concern of having too many wires in the display if we used Arduino, and our mentors proposed we look at the ml5 library with p5.js: a machine-learning library that works with p5.js to recognize objects using computer vision. We tried the YOLO model in ml5 and iterated on the code, trying to recognize objects like remotes, mobile phones, pens, or books. The challenges with this approach were creating the visuals we wanted to accompany each recognized object, tracking multiple interactions at once, and overlaying graphics on the video being captured. It was very exciting to use this library because we did not have to depend on hardware interactions: we could build a setup with no wires and no visible digital interactions, a mundane scene that could then spring the surprise of the tracking visuals in support of the concept.
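
The project ran YOLO in the browser through ml5.js. For readers who want to poke at the same idea outside p5.js, here is a rough sketch of the detection loop using OpenCV's DNN module in C++; this is our illustration, not the project's code, and the model file names are placeholders for a YOLOv3 config/weights pair downloaded separately (no non-maximum suppression, for brevity):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <vector>

int main() {
    // Placeholder model files; a real run needs a downloaded YOLOv3 pair.
    cv::dnn::Net net = cv::dnn::readNetFromDarknet("yolov3.cfg", "yolov3.weights");
    cv::VideoCapture cap(0);                     // webcam, as in the installation
    cv::Mat frame;
    while (cap.read(frame)) {
        // YOLO expects a square, normalized RGB blob.
        cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0 / 255.0,
                                              cv::Size(416, 416), cv::Scalar(),
                                              true, false);
        net.setInput(blob);
        std::vector<cv::Mat> outs;
        net.forward(outs, net.getUnconnectedOutLayersNames());

        // Each output row is [cx, cy, w, h, objectness, class scores...].
        for (const cv::Mat &out : outs) {
            for (int r = 0; r < out.rows; ++r) {
                cv::Mat scores = out.row(r).colRange(5, out.cols);
                double conf;
                cv::minMaxLoc(scores, nullptr, &conf, nullptr, nullptr);
                if (conf > 0.5) {                // assumed confidence cutoff
                    const float *d = out.ptr<float>(r);
                    int w = int(d[2] * frame.cols), h = int(d[3] * frame.rows);
                    int x = int(d[0] * frame.cols) - w / 2;
                    int y = int(d[1] * frame.rows) - h / 2;
                    cv::rectangle(frame, cv::Rect(x, y, w, h),
                                  cv::Scalar(0, 255, 0), 2);
                }
            }
        }
        cv::imshow("detections", frame);
        if (cv::waitKey(1) == 27) break;         // Esc to quit
    }
    return 0;
}
```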

[Image: Using the ml5 YOLO model to track and recognize a remote.]

[Image: Using the openCV library for p5.js and Processing.]

While using the ml5 library we also came across the openCV libraries that work with Processing and p5.js, and iterated with their pixel-change (frame-difference) function. We created overlay visuals on the video capture, and also versions without the video capture, producing a data-tracking map of sorts. Eventually we took the optical-flow library example and built the final visual output on top of it. For input we used a webcam and ran the captured video feed through p5.js.
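
The piece itself used the p5.js examples; as a rough illustration of the underlying frame-differencing idea, a minimal C++ OpenCV sketch might look like the following (the threshold value and the green motion overlay are our assumptions):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                 // webcam input, as in the installation
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, prev, diff;
    cap >> frame;
    cv::cvtColor(frame, prev, cv::COLOR_BGR2GRAY);

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // Pixels that changed since the last frame are where motion happened.
        cv::absdiff(gray, prev, diff);
        cv::threshold(diff, diff, 25, 255, cv::THRESH_BINARY);

        // Paint the motion mask over the live feed as green "tracking" marks.
        frame.setTo(cv::Scalar(0, 255, 0), diff);

        cv::imshow("data tracking", frame);
        gray.copyTo(prev);
        if (cv::waitKey(1) == 27) break;     // Esc to quit
    }
    return 0;
}
```

Dropping the live feed and drawing only the accumulated mask gives the "data tracking map" effect described above.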

Challenges & Learnings:
Our biggest learning was in the process of prototyping and creating setups to user test and understand the nuances of creating an engaging experience.
The final setup was to have a webcam at the top tracking any interaction with the products on the table; the input video feed would be processed to produce a digital output of data-tracking visualizations. For the output we tried various combinations: using a projector to throw visuals as the user interacted with the objects, using an LCD display to overlay the visuals on the video feed, or using a screen display in the background to form a map of the data points collected through the interaction.

Top projection was an output method we felt would be very interesting, since we would be able to throw a projection onto the products as the user interacted with them, creating the visual layer of awareness we wanted. Unfortunately, each time we used top projection the computer-vision code picked up a lot of visual noise: each projection was added to the video capture as input, a feedback loop of frames built up, and unnecessary extra projections appeared that broke the discreet experience we wanted to create. Projections looked best in dark spaces, but that would have compromised the effectiveness of the webcam, and computer vision was the backbone of the project. Eventually we used an LCD screen and a top-mounted webcam.

[Image: Test with the video overlay that looked like an infographic.]
[Image: Testing the top projections. This projection method generated a lot of visual noise for the webcam and had to be dropped.]
[Image: Showing the data-point tracking without the video capture.]
[Image: Setting up the top webcam and hiding it within a paper lamp for the final setup.]

Choice of Aesthetics:
The final video feed with the data-tracking visuals looked more like an infographic, subtler than the strange, surveillance-like experience we wanted to create. So we decided to apply a video filter as an additional visual layer on the capture, to show that it had undergone some processing and was being watched or tracked. The video was displayed on a large screen placed adjacent to a mundane desk with typical desk objects: books, a lamp, plants, stationery, stamps, a cup and blocks.

[Image: the final setup]

Having a bare webcam during the critique made the kind of interaction evident to users; learning from that, we hid the webcam inside a paper lamp for the final setup. This added another cryptic layer to the interaction, reinforcing the concept.


These objects were chosen and displayed so as to create a desk workspace where people could come, sit, and start interacting with the objects through the affordances created. Affordances were built in through semi-opened books, bookmarks inside books, an open notepad with stamps and ink pads, a small semi-opened wooden box, a half-filled cup of tea with a tea bag, a wooden block, stationery, and a magnifying glass, all hinting at a simple desk that could well be a smart desk, tracking the user's every move and transmitting data without consent on every interaction made with the objects. The webcam hung over the table, discreetly covered by a paper lamp to add to the everyday-ness of the desk setup.

Each time a user interacted with the setup, the webcam would track the motion and the changes in the pixel field and generate data-capturing visuals, sparking in the user the sense that something strange was happening and making them question whether it was OK to Touch!?


Workplan:

23rd – 25th November: Material procurement and quick prototyping to select the final objects
26th – 28th November: Writing the code and digital fabrication
28th – 30th November: Testing and bug-fixing
1st – 2nd December: Installation and final touches
3rd – 4th December: Presentation


The project code is available on GitHub here.

__________________
References

Briz, Nick. “This Is Your Digital Fingerprint.” Internet Citizen, 26 July 2018, www.blog.mozilla.org/internetcitizen/2018/07/26/this-is-your-digital-fingerprint/.

Chen, Brian X. “’Fingerprinting’ to Track Us Online Is on the Rise. Here’s What to Do.” The New York Times, The New York Times, 3 July 2019, www.nytimes.com/2019/07/03/technology/personaltech/fingerprinting-track-devices-what-to-do.html.

Groote, Tim. “Triangles Camera.” OpenProcessing, www.openprocessing.org/sketch/479114

Grothaus, Michael. “How our data got hacked, scandalized, and abused in 2018”. FastCompany. 13 December 2018. www.fastcompany.com/90272858/how-our-data-got-hacked-scandalized-and-abused-in-2018

Hall, Rachel. Chapter 7, Terror and the Female Grotesque: Introducing Full-Body Scanners to the U.S. Airports pp. 127-149 In Eds. Rachel E. Dubrofsky and Shoshana Amielle Maynet, Feminist Surveillance Studies. Durham: Duke University Press, 2015.

Khan, Arif. “Data as Labor” Singularity Net. Medium, 19 November 2018
blog.singularitynet.io/data-as-labour-cfed2e2dc0d4

Szymielewicz, Katarzyna, and Bill Budington. “The GDPR and Browser Fingerprinting: How It Changes the Game for the Sneakiest Web Trackers.” Electronic Frontier Foundation, 21 June 2018, www.eff.org/deeplinks/2018/06/gdpr-and-browser-fingerprinting-how-it-changes-game-sneakiest-web-trackers.

Antonelli, Paola. “Talk to Me: Immaterials: Ghost in the Field.” MoMA, www.moma.org/interactives/exhibitions/2011/talktome/objects/145463/.

Shiffman, Daniel. “Computer Vision: Motion Detection – Processing Tutorial” The Coding Train. Youtube. 6 July 2016. www.youtube.com/watch?v=QLHMtE5XsMs

 

Experiment 5: Eternal Forms

Names of the Group Members:

Catherine Reyto, Jun Li, Rittika Basu, Sananda Dutta, Jevonne Peters

Project Description:

“Eternal Forms” is an interactive artwork incorporating geometric elements in motion. The construction of the elements is highly precise in order to generate an optical illusion of constant collision. The illusion results from linear patterns overlapping in motion between geometric forms: the foreground square is firmly fixed while the background circle rotates constantly, and the display lights change their chromatic values as participants interact from varying proximities.

The artwork takes inspiration from various light and form installation projects by Nonotak, an artist duo consisting of Illustrator Noemi Schipfer and architect-musician Takami Nakamoto. Nonotak works with sound, light and patterns achieved with repeating geometric forms. The installation work aims to immerse the viewer by enveloping them in the space with dreamlike kinetic visuals. The duo is also known for embedding custom-built technology in their installations, as well as conventional technology to achieve desired effects in unconventional ways.

Visuals:

Final Images:

 

Circuit Diagrams:

https://www.circuito.io/app?components=512,11021

Project Context

Initial Proposal

Originally we intended to continue our explorations with RGB displays. Four of the five group members had come a long way working together on Experiment 4 (Eggsistential Workspace), only to have our communicating displays malfunction on account of the unexpected fragility of the pressure sensors. We had hoped to pick up where we left off by disassembling our previous RGB displays and revamping the project into an elaborate interactive installation for the Open Show. We designed a four-panel display, each panel showcasing a pattern of birds from our respective countries (Canada, Saint Lucia, India and China). The birds would be laser-cut and lit by effect patterns with the RGBs.

After many hours of strategizing, we found we were facing too many challenges in the RGB code which, given our time constraints, became overly risky. For example, we intended to isolate specific lights within the RGB strips, designating the neighbouring lights on the string to be turned off. Once we broke down how complex this would prove to be (each message sent to an LED would involve sending messages to all preceding LEDs in the string), it became clear that the desired codebase was out of scope.

We returned to the drawing board and began restrategizing a plan that could work within the constraints of our busy schedules, deadlines and combined skills. Having five people in the group meant a lot of conflicting ideas, making it tricky to move out of the brainstorming process and into prototype iteration. But we were all interested in kinetic sculptures, and the more examples we came across, the more potential we saw in devising one of our own. It seemed like an effective way of keeping us all equally involved in the strategy as well as the code. Having minimal experience working with gears (only Jun Li had used them previously), we were intrigued by the challenge of constructing them. We came across the example below and began to deconstruct it, replacing the hand-spun propulsion with a motor and controlling the speed and direction by means of proximity sensors.
Video:
https://www.youtube.com/watch?v=–O9eyKIubY

Though we aimed to keep the design as simple as possible, we weren't able to gauge the complexities of the assembly until we really started to dig into the design. We thought a pulley system could be built, where a mechanism surrounding the motor could trigger motion in another part of the structure by way of gears. We were mesmerized by the rhythmic patterns we came across, in particular the work of Nonotak studio, who primarily work with light and sound installations. Taking inspiration from their pieces, we decided to create visual illusions based on the concept of pattern overlap. We also planned to use light and a distance sensor to make the piece an interactive light display.

https://www.nonotak.com/_MASK

Tools and Software

  • Distance sensor
  • RGB lights
  • Servo motor (360)
  • Nano and Uno board
  • Acrylic sheets – black and diffused
  • ¼″ and ⅛″ Baltic Birch sheets
  • Laser cutting techniques
  • Illustrator

Ideation

Our previous ideas seemed complicated to implement, so we sat down for a second round of brainstorming on possible outcomes within the given time frame and resources. We began by browsing existing projects: the kinetic installations of Nonotak Studio, 'The Twister Star Huge' by Lyman Whitaker, 'Namibia' by Uli Aschenborn, and Spunwheel's award-winning sculptures made from circular grids. We then proceeded with the creation of our own circular grid: a system constructed from several interlinking ellipses running across a common circumference of the centre ellipse. This grid served as the base for our subsequent designs.

We also drew inspiration from Félix González-Torres, a Cuban-born American visual artist who created minimal installations from common objects like lightbulb strings, clocks, paper, photographs, printed texts and hard candies. He was a member of 'Group Material', a New York-based artist organisation formed to promote collaborative projects around cultural activism and community education.

Process

Several constructions and geometrical formations were explored. We studied how to create an optical illusion with forms in motion. We simplified the curvatures into straight lines, since we were unsure of the feasibility and reliability of complicated junctions; thus one simple circle and one simple square were chosen.

As you can see in the diagrams above, a layout was created to give us an idea of the entire frame, its size, the materials to be used, as well as the complications or hindrances we might encounter along the way.

After finalizing the entire setup, we came up with a list of the different layers that would be encased in an open wooden box (20″ × 20″). The list, from top to bottom, is as follows:

  1. Square black acrylic sheet with laser-cut patterns – the front view (covering the open wooden box), which remains stationary – 20″ × 20″
  2. Circular black acrylic sheet with laser-cut patterns – in motion, its centre connected to the 360° servo motor – 18.5″ across
  3. Diffused white acrylic sheet with a cut outline in the centre to fix the base of the servo motor
  4. RGB lights + Nano and Uno boards – stuck to the base of the wooden box
  5. A small wooden strip holding a distance sensor, attached in front of the installation – this changes the pattern lighting based on distance

Image: The 2 layers of forms that were laser cut to include in our final setup.

Image: The RGB bulbs set up to create an even distribution of light across the 20”x 20” board.

Image: After the setup was done, a couple of effects created using lights and the motion of the overlapping layers.

Prototyping

We created numerous miniature samples of our design and overlapped them. We experimented with black and white by playing with the following arrangements:

  • White square rotating on a white square (stabilised)
  • Black square rotating on a white square (stabilised)
  • Black square rotating on a black square (stabilised)
  • White square rotating on a black square (stabilised)
  • White circle rotating on a white square (stabilised)

Coding

Motor — the motor is set to run slowly counterclockwise at the optimum speed to give the right interplay with the geometry. It's important to get the speed exactly right, or the lines will not produce the desired effect.

Lights — the distance sensor reads in a value and includes it in a running average of the distance (the last 10 readings). It then maps that averaged distance to a value used for the brightness of the lights and the speed of the effects: the closer the viewer, the brighter the lights but the slower the effects. The distance also determines which light effect is shown. When the viewer is very close, the lights breathe; a little further away, they blink quickly; and at the standard distance the colour is painted onto the background. Each effect adds to the illusion.
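
As an illustration of that logic, a minimal Arduino-style sketch might look like the following. The pin numbers, distance thresholds, and the single PWM LED (standing in for the RGB strips) are our assumptions, not the project's actual code:

```cpp
#include <Servo.h>

// Illustrative pins/thresholds only; the real piece drives RGB strips.
const int TRIG_PIN = 7, ECHO_PIN = 8;   // HC-SR04-style distance sensor
const int LED_PIN = 9;                  // PWM pin standing in for the lights
const int N = 10;                       // running-average window

Servo rotor;                            // 360° (continuous-rotation) servo
long readings[N];
long total = 0;
int idx = 0;

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long us = pulseIn(ECHO_PIN, HIGH, 30000);
  return us / 58;                       // echo time (us) -> centimetres
}

void breatheEffect(int maxB, int stepDelay) {
  for (int b = 0; b <= maxB; b += 5) { analogWrite(LED_PIN, b); delay(stepDelay); }
  for (int b = maxB; b >= 0; b -= 5) { analogWrite(LED_PIN, b); delay(stepDelay); }
}

void blinkEffect(int b, int stepDelay) {
  analogWrite(LED_PIN, b); delay(stepDelay * 4);
  analogWrite(LED_PIN, 0); delay(stepDelay * 4);
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  for (int i = 0; i < N; i++) readings[i] = 0;
  rotor.attach(6);
  rotor.write(85);  // just under 90: a slow spin (direction depends on the servo)
}

void loop() {
  // Running average of the last N distance readings.
  total -= readings[idx];
  readings[idx] = readDistanceCm();
  total += readings[idx];
  idx = (idx + 1) % N;
  long avg = total / N;

  // Closer -> brighter but slower effects, as described above.
  int b = map(constrain(avg, 10, 150), 10, 150, 255, 30);
  int d = map(constrain(avg, 10, 150), 10, 150, 40, 5);

  if (avg < 30)      breatheEffect(b, d);      // very close: breathe
  else if (avg < 80) blinkEffect(b, d);        // mid-range: quick blink
  else               analogWrite(LED_PIN, b);  // standard distance: paint colour
}
```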

Github: https://github.com/jevi-me/CC19-EXP-5

Final Project & Exhibition Reflection

For the exhibition we were given an entire room to display the piece. We projected a video of the manufacturing onto one wall, and on the opposite wall we solved the concern of the empty space by projecting artwork appropriate to the display: a generative mandala formation of various altering forms (coded by Sananda in her individual project). That work lets participants create their own patterns in varying colours through manual alterations with potentiometers. We also had some calming tunes playing along with the projected laser-cutting video.

Many in attendance commented that they couldn't pull their eyes away from the piece, and that it was meditative, mesmerising and calming. We also received three offers to purchase the installation. One participant analysed the piece, praising the use of colour, line, geometry and interaction that made it aesthetically pleasing, and we noticed many leaving and returning with their friends so they could experience the illusion themselves and interact with the distance sensor with great delight. Overall, the experience of light and subtle motion in a dark room created some beautiful visual illusions, and that became the highlight of our experiment.

References

  1. SCHIPFER, NOEMI, and TAKAMI NAKAMOTO. “MASKS, Kinetic Object Series”. Nonotak.Com, 2014, https://www.nonotak.com/_MASKS.
  2. Kvande, Ryan. “Sculpture 40″ – Award Winning Optical Illusion Sculptures”. Spunwheel, 2019, https://www.spunwheel.com/40-kinetic-sculptures1.html
  3. Whitaker, Lyman. “Whitaker Studio”. Whitakerstudio.Com, 2019, https://www.whitakerstudio.com/

Absolutely on Music

by Lilian Leung

Project Description

Absolutely on Music is an interactive prototype of a potential living space. The space is composed of a sensor activated light that turns on when a participant sits on the chair and a copy of Haruki Murakami and Seiji Ozawa’s book Absolutely on Music, which plays the music the author and conductor talk about in each chapter of the book.

This experiment expands upon my personal exploration of tangible interfaces, as well as further research into slow-tech and the use of Zero UI (invisible user interfaces). The exploration is meant to re-evaluate our relationship with technology as something that can augment everyday inanimate objects rather than creating alternative screen-based experiences centered on a hand-held device. The audio played beneath the chair corresponds to each chapter of the book, which is divided into six individual conversations, each centered on a different topic and part of Ozawa's career. Five tracks are played, as the sixth chapter discusses no musical piece in detail. The auditory feedback playing the music featured in the book creates a multi-sensory experience, and broadens the book's audience from solely music experts who need no musical reference to anyone looking to enjoy a light read.

Project Process

November 21 – 22 (Proposal Day)

Early research pointed to using an Arduino UNO instead of an Arduino Nano so I'd be able to use an MP3 shield to play my audio rather than depending on Processing. For early exploration, I looked into using a combination of flex sensors and pressure sensors on the binding of the book and on the front and back covers to detect when the book was being picked up. This layout was inspired by Yining Shi (2015), who mapped the Jurassic Park novel to the movie.

After my proposal meeting, I switched from flex sensors to copper-tape switches to get more reliable data. From there I decided on the modes of the experiment and how the book and chair should behave when not being used.

Modes

Idle Mode: Lamp – Dim Light; Book – Orchestra Tuning
Active Mode: Lamp – Full Brightness; Book – Play Audio

 

Having purchased the MP3 shield, I started formatting the MicroSD card with the appropriate tracks for each chapter, named 'track00X' so they could be read by the Arduino and shield. From the shield diagrams, I would only be able to use analog pins 0–5 and digital pins 0–1, 3–5, and D10. From this, I laid out the switch for each chapter on D0–1 and D3–5, and kept D10 for the lamp and sensor input and output.


November 23
To create a more natural reading space, I went to purchase a chair and cushion from IKEA. I tried to pick a more laid back chair so that participants would be interested in sitting down rather than repurposing a studio chair. The supporting beam in the back of the chair allowed for a space to safely and discreetly place all my wiring that wouldn’t be seen. 

For the lamp design, I initially intended to create a free-standing lamp, but after some thought I decided to incorporate it inside the side table so there would be less clutter in the space. I wanted the side table to be minimal and to discreetly hide the light and all the wiring involved.


November 25

To conserve time and memory on the Arduino, the audio clip for each chapter was compressed to a maximum of 7 minutes rather than the full one- to two-hour performance. I tested the MP3 shield with external speakers instead of just headphones to check the sound quality.

November 26-27

Early iterations of the code for the Arduino and MP3 shield weren't working, as tracks refused to play with the if/else statements. The revisions I made with the help of Nick Puckett were adding a statement to always play the default tuning audio (track 7), and simply changing the track number on each switch rather than playing and stopping each track as it played.
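
A sketch of that revised logic is below. The library calls follow the style of the SFEMP3Shield/vs1053 examples referenced in the bibliography, where numbered 'trackNNN.mp3' files are played by index; treat the pin map and setup details as assumptions rather than the project's actual code:

```cpp
#include <SPI.h>
#include <SdFat.h>
#include <SFEMP3Shield.h>

// Chapter switches on D0-1 and D3-5 as described above; D10 drives the lamp.
// Assumed wiring: a closed copper-foil contact pulls its pin HIGH.
const uint8_t SWITCH_PINS[5] = {0, 1, 3, 4, 5};
const uint8_t TUNING_TRACK = 7;          // "track007.mp3": orchestra tuning

SdFat sd;
SFEMP3Shield MP3player;
uint8_t currentTrack = TUNING_TRACK;

void setup() {
  for (uint8_t i = 0; i < 5; i++) pinMode(SWITCH_PINS[i], INPUT);
  sd.begin(SD_SEL, SPI_FULL_SPEED);      // SD_SEL comes from the library config
  MP3player.begin();
  MP3player.playTrack(currentTrack);     // idle state: always play the tuning audio
}

void loop() {
  // Default to the tuning track; a closed chapter switch overrides it.
  uint8_t wanted = TUNING_TRACK;
  for (uint8_t i = 0; i < 5; i++) {
    if (digitalRead(SWITCH_PINS[i]) == HIGH) {  // chapter i+1 is open
      wanted = i + 1;                           // plays "track001".."track005"
      break;
    }
  }
  // Change tracks only when the selection changes, instead of
  // stopping and restarting on every pass through loop().
  if (wanted != currentTrack) {
    MP3player.stopTrack();
    MP3player.playTrack(wanted);
    currentTrack = wanted;
  }
  // Restart the current track when it finishes, so the idle tuning loops.
  if (!MP3player.isPlaying()) MP3player.playTrack(currentTrack);
  delay(50);
}
```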

In the early production stage of the side table, I cut a set of 7.5″ by 10″ sheets of ¼ inch plywood, one with a 4.5″ diameter circle in the center and one with a 5″ diameter circle, to house the LED strip for the light. A hole was drilled in the bottom to hide the wiring away. To diffuse the light from the LED, a frosted acrylic sheet was cut to hold the lighting securely. I chose to place the LED light on the underside of the table so the light would be more discreet and readers wouldn't have a bright light shining directly up at their faces while reading.


Once the wiring was complete, I soldered the wiring onto a protoboard to securely hold everything. I used screw terminals for the wiring for the book switches, the chair pressure sensor and the side table light to be able to transport my work easily between locations and to easily troubleshoot wiring problems. From there I mounted a small board I made to the back supporting beam of the chair so the protoboard and Arduino could safely be placed inside.

November 28 – 29

To finish the side table, I stacked and glued four sheets of ½ inch plywood to make the legs stronger. For the wiring, I routed a channel into one side of the ½ inch plywood from the inside, so the wiring could be hidden entirely within the legs of the table and emerge discreetly at the base.


With the side table complete, I brought all the items together to see how the space would look all together.


December 1

To solve the last few problems I was having with the Arduino and getting the chapter switches working, I updated the code from a single if/else statement to an if statement followed by an "else if" for each remaining chapter. Another issue was difficulty uploading my code, as I'd frequently get the following error in the serial monitor:

avrdude: stk500_getsync() attempt 10 of 10: not in sync

I managed to solve that upload error on the Arduino Uno: on an online forum, a user mentioned it can be caused by something being wired into pin 0 (RX), and unplugging that wire during upload fixed the issue. Another problem was the consistency of the switches turning on and off, as participants may hold the book at different angles and might not apply enough pressure for the switches to read properly HIGH or LOW. Originally every switch state was specified, with only one switch indicated as HIGH; due to the inconsistency in pressure, I removed the states that proved unreliable.

Ch. 1 Switch Ch. 2 Switch Ch. 3 Switch Ch. 4 Switch Ch. 5 Switch
HIGH LOW
LOW HIGH LOW LOW
LOW HIGH HIGH LOW
LOW LOW HIGH
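
One common way to tame flaky contact switches like these is to debounce the reading in software, accepting a state only once it has held steady. A small hedged sketch (pin number and timing are our assumptions, not the project's code):

```cpp
// Debounce sketch for a flaky copper-foil contact switch.
const int SWITCH_PIN = 3;               // assumed chapter-switch pin
const unsigned long DEBOUNCE_MS = 50;   // assumed settling time

int stableState = LOW;                  // last accepted reading
int lastRaw = LOW;
unsigned long lastChange = 0;

void setup() {
  pinMode(SWITCH_PIN, INPUT);           // foil contacts pull the pin HIGH
  Serial.begin(9600);
}

void loop() {
  int raw = digitalRead(SWITCH_PIN);
  if (raw != lastRaw) {                 // contact flickered; restart the timer
    lastChange = millis();
    lastRaw = raw;
  }
  // Accept the reading only once it has held steady long enough.
  if (millis() - lastChange > DEBOUNCE_MS && raw != stableState) {
    stableState = raw;
    Serial.println(stableState == HIGH ? "chapter open" : "chapter closed");
  }
}
```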

From there, the final test was adding the speakers back in with the finished chair and table to make sure the speakers would fit comfortably underneath the chair.

 

Project Context

Absolutely on Music explores the use of audio and tactile sensors to create a more immersive experience around inanimate objects in the home, rather than creating an augmented screen-based experience. The experiment is based on the philosophy of slow-tech, countering our drive to develop tools that work ever more efficiently and let us do more, faster (Slow Living LDN, 2018). The space is meant to re-evaluate our experience of technology and the potential of creating a multi-sensory and accessible home.

This work is an example of Zero UI, which involves interacting with a device or application without a touchscreen or visual graphic interface. Zero UI technology allows individuals to communicate with devices through natural means such as voice, movement and glances (Inkbotdesign, 2019). Most Zero UI devices belong to the Internet of Things and are interconnected with a larger network. For this experiment, I wanted to create a multi-sensory experience requiring no networked communication and no quantified data gathering, allowing participants a more immersed and mindful experience with an inanimate object.

I chose Absolutely on Music by Murakami and Ozawa purposely for its references to auditory content that readers may be unfamiliar with, and for how searching for that music can interrupt the reading experience rather than keeping it seamless. Playing the music in place makes the content accessible to a broader spectrum of readers.

A book was chosen as the object because of the ongoing debate between reading from a digital screen and reading a physical copy on paper. A physical book is a 'dumb' object, allowing a slower, more leisurely experience free of the distractions that come with reading on a digital device.

The book used is Absolutely on Music, a series of six conversations between the Japanese author Haruki Murakami and the Japanese conductor Seiji Ozawa. Classical music, like fine art, is often seen as difficult to access and deeply personal; interest declines as individuals distrust their own reactions, perceiving classical music as belonging to more sophisticated folk, as a New York Times op-ed notes (Hoffman, 2018). By playing the actual audio through speakers below the chair, anyone can follow along with the book without any prior knowledge of the musical works described.

Within the book, the author and conductor discuss both of their careers, from key performances in Ozawa's career to Murakami's passion for music; musical pieces are deeply integral to all his works, from The Wind-Up Bird Chronicle opening with Rossini's The Thieving Magpie to a Haydn concerto within the pages of Kafka on the Shore (2002).

To elevate the sensory experience of the book, switches were placed within the first five chapters (conversations) of the book. The audio described in each chapter is played via the switch situated in that chapter, providing context for the works Murakami and Ozawa are discussing.

Table of Contents of Absolutely on Music

1st Conversation – Mostly on the Beethoven 3rd piano concerto
Interlude 1 – On Manic Record Collectors

2nd Conversation – Brahms at Carnegie Hall
Interlude 2 – The relationship of writing music

3rd Conversation – What happened in the 1960s
Interlude 3 – Eugene Ormandy’s Baton

4th Conversation – On the music of Gustav Mahler
Interlude 4 – From the Chicago Blues to Shin'ichi Mori

5th Conversation – the Joys of Opera
6th Conversation – “There’s no single way to teach, you make it up as you go along”

Based on the contents of the book, I pulled the main musical piece the two individuals spoke about into a tracklist that I would use for the interactive book. 

Idle Mode – Orchestral tuning audio
Chapter 1 – Glenn Gould's 1962 Brahms Piano Concerto in C minor
Chapter 2 – Seiji Ozawa's Beethoven's 9th Symphony
Chapter 3 – Seiji Ozawa's Rite of Spring (by Igor Stravinsky)
Chapter 4 – Seiji Ozawa's The Titan / Resurrection (by Gustav Mahler)
Chapter 5 – Dieskau, Scotto and Bergonzi's Rigoletto
Chapter 6 – (no audio; no single musical piece in focus)


Project Video

Github Code

You can view the github repository here

Circuit Diagram

[Image: circuit diagram]

*In the actual wiring, each button switch is two pieces of copper foil placed on opposite pages.

*The schematic shows a SparkFun VS1053, though I used a Geeetech VS1053; the available pins are laid out slightly differently. Whereas the SparkFun version in the diagram shows D3 and D4 in use, those pins are free on the Geeetech MP3 shield.

Exhibition Reflection

For the Digital Futures Open Show, my piece was exhibited at the entrance of the Experience Media Lab. I set up the space with some plants and an additional light as props to make the area more comfortable. The space was quieter than the Graduate Gallery, which suited the piece and allowed participants to sit down and experience it one at a time without too much background noise. Rather than leaving the lamp seat-pressure activated, I kept the table light on so that participants could see the reading space clearly.


My reflection on the experience comes from the participant side: I noticed people were initially hesitant to sit down on the chair, unsure whether they were supposed to touch it, or whether the space I created was an art piece at all. I felt the piece was successful because it read as a natural reading space, and I don't mind the confusion, as the chair and book were designed in the context of a home rather than as an exhibition piece.

There was some static from the speaker, and I noticed that participants expected a much faster response from the book when the music changed: many orchestral pieces have a naturally slow build-up, so some participants flipped through the pages to make the music change faster, or couldn't hear the musical piece at all.

While the book audio was designed for a single reader who listens while reading rather than flipping through the pages, in hindsight I'd probably revise the audio for exhibition display to begin in the middle of each musical piece, so that participants could grasp the concept faster.

Some helpful feedback on how to improve the piece and learn more about invisible interfaces was to read Enchanted Objects: Design, Human Desire, and the Internet of Things by David Rose. Other feedback suggested exploring a Maxuino, which has more audio capability and support for Ableton Live, in case I wanted more control over my audio files and audio quality than the MP3 shield offers.

 

Bibliography

Arduino Library vs1053 for SdFat. (n.d.). Retrieved November 29, 2019, from https://mpflaga.github.io/Arduino_Library-vs1053_for_SdFat/.

Hoffman, M. (2018, April 19). A Note to the Classically Insecure. The New York Times. Retrieved from https://www.nytimes.com/2018/04/18/opinion/classical-music-insecurity.html.

Inkbotdesign. (2019, August 13). Zero UI: The End Of Screen-based Interfaces And What It Means For Businesses. Retrieved from https://inkbotdesign.com/zero-ui/.

Kwon, R. O. (2016, November 24). Absolutely on Music by Haruki Murakami review – in conversation with Seiji Ozawa. Retrieved from https://www.theguardian.com/books/2016/nov/24/absolutely-on-music-haruki-murakami-review-seiji-ozawa. 

LDN, S. L. (2019, May 25). Embracing Digital Detox and Slow Tech. Retrieved from https://www.slowlivingldn.com/lifestyle/slow-tech-digital-detox/. 

Murakami, H., & Ozawa, S. (2017). Absolutely on music conversations with Seiji Ozawa. London: Vintage. 

Shi, Y. (2015, February 7). Book Remote. Retrieved from https://www.youtube.com/watch?v=M1WrbADjfmM&feature=emb_title. 

Tench, B. (2019, February 11). Some Reflections on Slow Technology. Retrieved November 29, 2019, from https://www.becktench.com/blog/2019/2/11/some-reflections-on-slow-technology.

What Is My Purpose?

Project Title: What Is My Purpose?

By: Nilam Sari

Project Description:

This project is a 5×5×5 inch wooden cube with a 3D-printed robotic arm. A hammer-head-shaped piece is attached at the end of the arm. The arm repeatedly hits the top part of its own body, a sheet of clear acrylic; the robotic piece appears to be harming itself.

Process: 

I started this project by creating a timeline, because I thought I should be more organized with this project to meet the tight deadlines.


I modeled my design in Rhino3D to help me visualize the arm that needed to be fabricated with the 3D printer.


At first I designed the arm to hold a chisel, but after printing and testing it, the servos couldn't handle the chisel's weight, so I compromised with an IKEA knife. That didn't work either, so I settled on a 3D-printed hammer head holding a 1-inch bolt.


At first I had trouble attaching the servo motors to the 3D-printed parts, but Arsh suggested that I cut out the little servo extensions and fit them into the 3D-printed parts, instead of trying to make a hole that fits directly onto the servo, and it worked perfectly (thank you, Arsh!).

Next, it was time to code the movements of the arm.

At first I achieved the movements I wanted, but they were too predictable; it felt too "robotic". So I randomized the movements within a range and got the results I wanted.
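
As an illustration of that randomization idea, a minimal Arduino sketch might look like this; the angles, pins and delays are illustrative guesses, not the project's exact values:

```cpp
#include <Servo.h>

Servo shoulder;   // lifts the arm (assumed pin 9)
Servo elbow;      // swings the hammer head (assumed pin 10)

void setup() {
  shoulder.attach(9);
  elbow.attach(10);
  randomSeed(analogRead(0));           // unconnected pin as a noise source
}

void loop() {
  // Wind up: raise the arm to a randomized height so each strike differs.
  int liftAngle  = random(60, 110);    // randomized within a range
  int strikeWait = random(300, 1200);  // irregular pauses feel less "robotic"

  shoulder.write(liftAngle);
  delay(400);

  // Strike: swing down onto the acrylic sheet.
  elbow.write(random(20, 35));
  delay(200);

  // Recover to a rest position before the next hit.
  elbow.write(90);
  shoulder.write(40);
  delay(strikeWait);
}
```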

Then I worked on the wooden body. This went smoothly and was pretty uneventful; the lack of tools in the maker lab made the process take longer than it needed to, but I managed to stay on my timeline.


When I was done with the wooden body, I attached the arm onto it.

When I did a test run, it worked okay; the only problem was the chisel/knife issue mentioned above. Then I installed threaded inserts at the corners of the top of the box to bolt down the 1/16 inch acrylic sheet.

At first I wanted this piece to be battery-powered, complete with an on/off button on the bottom. But a 9V battery wasn't strong enough to run the two servos. I asked Kate for help and learned that it's more about the current than the voltage: a 9V battery supplies far less current than AA cells can. So I wired four AA batteries in series and tried again; still not enough. Kate suggested adding another four AA batteries in parallel with the first four, doubling the available current, and it worked!

However, the battery couldn’t last long enough and the servo started getting weaker after a couple of minutes. It was a hard decision, but I had to compromise and use cable to power the robot from the wall outlet.

For the show, I originally wanted to add delay() calls so the servo motors would get breaks in between and not overheat, but with the delays added the motors didn't run the same way they did without them.

Code: https://github.com/nilampwns/experiment5

Video: 

Visuals:


Circuit Diagrams: 

[Image: wiring diagram]

Project Context:

We are constantly surrounded by technology that does tasks for us. When machines do not carry out their tasks properly, they are deemed broken. Can we co-exist with machines that are not designed to serve us humans? What kind of emotion is evoked by seeing a machine hit itself over and over? Are we projecting our feelings onto non-living objects? These were the questions that lingered in my mind when I decided to create this robot.

I believe that robots have the potential to teach us humans how to empathize again. That belief is what drove me to graduate school, and it drives my life in general. The piece I created for Experiment 5 is in some way a part of my long-term research: can robots teach us humans to empathize with a non-living being, and ultimately, with each other?

There have been multiple studies that ask participants about the possibility of robots getting hurt, or in which participants are asked to hurt a robot. As Kate Darling (2015) wrote in her research paper, "Subjects were asked how sorry they felt for the individual robots and were asked to chose which robot they would save in an earthquake. Based on the subjects preferring the humanoid robots over the mechanical robots, Riek et al. postulate that anthropomorphism causes empathy towards robotic objects."

People tend to empathize more with robots that look like them. I wanted to see how far I could strip my piece of anthropomorphism, and to push it even further by making a robot whose whole purpose is to hit itself. That's why I made the body a plain wooden cube with visible microcontrollers; the only thing that looks at all anthropomorphic is the robotic arm.

The act of self-harming is jarring. It’s emotional. It’s a sensitive topic. But what does self harming mean to a robot that cannot feel pain? It does not have sensors that detect pain.

Not about self-harming exactly, but Max Dean's piece "Robotic Chair" (2008) is a chair that breaks apart into multiple pieces, then slowly searches for its missing parts and puts itself back together again autonomously. Viewers' reactions were positive: "As stated further on the Robotic Chair website, the chair has been shown to elicit 'empathy, compassion and hope' in its viewers" (Langill, 2013).

I acknowledge that my approach is very philosophical. The title of the work, "What Is My Purpose?", is a question that even we humans have not yet answered. I hope to make people consider that it's okay for machines to have no purpose, to not serve us, and still exist around us, and for us to still empathize with them. That way, maybe humanity could learn to empathize more with each other.

Exhibition Reflection:

The exhibition was lovely. I got to talk to many people about my research, receive feedback and reactions, or simply chat. Unfortunately, the servos overheated and gave out 30 minutes into the show. I thought it was the Arduino, so I unscrewed the piece to hit the reset button, but that didn't change anything, and letting it cool down for a couple of minutes didn't work either. I had a video of it running, which I showed to people who were interested. Thankfully, it was running when the show was at its busiest.

People told me they liked my work. When I asked what they liked about it, a couple said they found it funny and silly. Some said they could feel the robot's frustration. Some felt so empathetic that they felt bad watching the robot hit itself over and over. One person even called it an "idiot robot." It was a mixed bag of reactions, but most people enjoyed the piece.

References:

Darling, Kate. "Empathic Concern and the Effect of Stories in Human-Robot Interaction." With P. Nandy and C. Breazeal. Proceedings of the IEEE International Workshop on Robot and Human Communication (ROMAN), 2015.

Dean, Max. “Robotic Chair”. 2008.

Langill, Caroline Seck. “The Living Effect: Autonomous Behavior in Early Electronic Media Art”. Relive: Media Art Histories. MIT Press. 2013.

Eternal Forms

Experiment 5: Proposal

Members

Catherine Reyto, Jun Li, Rittika Basu, Sananda Dutta, Jevonne Peters

Project Description

“Eternal Forms” is an interactive artwork incorporating geometric elements in motion. The forms accelerate their rotation as a user interacts from varying proximities. The construction of the elements is highly precise, with accurate measurements, in order to generate an optical illusion of constant collision. The framework is built on a circular grid system designed from several interlinking ellipses running across a common circumference of the centre ellipse.

Parts/materials/ technology list 

  • Servo
  • Distance sensor
  • Pulley
  • Acrylic
  • Wood
  • Arduino – distance sensor and servo motor based coding
  • Laser Cutting

Work Plan

Base Circular Grid – Sets the basis of the design constructions.
Design Explorations
  • 24th Nov (Sun): Pattern exploration and design created based on the finalized patterns
  • 25th Nov (Mon): Design a network of connected illusions using the finalized patterns. Also, try out prototype laser cuts of the final pieces. Check out the weight and scaling optimizations.
  • 26th Nov (Tue): Work on the code – Arduino side. Also, try the servo functioning with the prototype model.
  • 27th Nov (Wed): Combine the servo and the sensor part of the experiment and check code functioning.
  • 28th Nov (Thu):  Create the final design laser cuts with the materials finalized – acrylic or wood or both. Additionally, creating the mounting piece that needs to go up on the wall.
  • 29th Nov (Fri): Iterate and work on creating multiple kinetic sculptures and make them interactive. Also, work on the display set-up of the installation.
  • 30th Nov (Sat): Trouble-shooting
  • 1st Dec (Sun): Trouble-shooting
  • 2nd Dec (Mon): Trouble-shooting
  • 3rd Dec (Tue): Final Presentation

Physical installation details

There will be multiple interactive artworks mounted on walls, where the user can observe the rotation speed change based on proximity. The artworks will be animated by servo motors.

Resource List

10 Nails and hammer – For mounting the artwork.

Extension Cord

Power Plug

Wall space of 5ft x 3ft to mount the piece.

References

  1. Kvande, Ryan. “Sculpture 40″ – Award Winning Optical Illusion Sculptures”. Spunwheel, 2019, https://www.spunwheel.com/40-kinetic-sculptures1.html.
  2. Whitaker, Lyman. “Whitaker Studio”. Whitakerstudio.Com, 2019, https://www.whitakerstudio.com/.

Experiment 5 — Leave A Window Open For Me

1.Project title: Leave A Window Open For Me

2.Project description: 

For this project, I'm aiming to create an installation: a space within a box built from mirrored acrylic sheets and wood, filled with mirrors to form a room of infinite reflections. It reflects my own experience in New York, when I was going through an emotional breakdown over a long period of time. I was suffering from insomnia and depression and had no motivation to do anything; the window was the thing I stared at most, from day to night, from rain to shine.

I managed to step out of the emotional trap eventually, but looking out of the window seems to have become one of my daily routines. Except this time, things have changed.

A 3D-printed bed placed in the middle of the installation represents my room and my mental state. One side of the box has a cutout playing the role of the window, with the LED matrix panel placed behind it.

Audiences are expected to observe the installation through the peephole in the front board.

3.Parts / materials / technology list:

LED matrix panel, Arduino UNO, acrylic sheets with mirror backing, glue gun, 3D printer, laser cutter, transformer adapter, power cord, etc.

4.Work plan: 

Nov 21-22: Proposal & Discussion

Nov 23-24: Coding & Researching

Nov 25-26: Purchasing materials & Building small prototype & test

Nov 27-28: Coding & 3D printing & Laser cutting

Nov 29- Dec 3: Debugging & Assembly

Dec 4: Open show

5.Physical installation details: 

The small room in the Graduate Gallery that is usually used to screen films, since a darker space is preferable; a pedestal holding the installation will be placed against the wall.

 

Initial sketches:

[Images: initial sketches]

CityBLOCK

 


Project Title – CityBLOCK

Team Member – Rajat Kumar

Mentors
Kate Hartman & Nick Puckett

Project Description
CityBLOCK is a modular city-builder experience in which the user controls on-screen building blocks with designed cubes, building their dream city by rearranging the cubes.

What makes Toronto so unique? Canada's largest and most diverse city, it's home to a dynamic mix of tourist attractions, from museums and galleries to the world-famous CN Tower and, just offshore, the Toronto Islands. And Niagara Falls is just a short drive away.
This project lets you build your own version of Toronto.
Everyone has their own image of their city. When I came to Toronto for the first time, I was really surprised by the city's fusion of old and new architecture. I always wanted to show what I think about this city; almost every day I go to the Harbourfront and admire the CN Tower, thinking about how it became the 'icon' of the city. Other historical architecture, like the Flatiron Building and St. Lawrence Market, makes the city even more unique and identifiable.
This is my take on representing Toronto from my perspective, and of course the city builder is modular, so anything can be added later.

GITHUB LINK – here

Inspiration

The Reactable is an electronic musical instrument with a tabletop tangible user interface that was developed within the Music Technology Group at the Universitat Pompeu Fabra in Barcelona, Spain by Sergi Jordà, Marcos Alonso, Martin Kaltenbrunner, and Günter Geiger.

What I like about the Reactable is that any physical object can become smart and can interact with and modify digital content.

The only input device used here is a camera, which makes for a very clean setup at an open show.

Workflow

This setup is designed for tabletop interaction and requires a lot of effort to build the table and configure the camera with a projector.

Technology

  • reacTIVision 1.5.1
  • TUIO client – Processing
  • OV2710 camera
  • Projector – supporting a short-throw lens with a throw ratio of 0.6 or below
  • Infrared LED – 850 nm illuminator

Materials:
  • Wooden/paper blocks
  • Table, 4.5 ft × 3.5 ft
  • Projector mount

These were the minimum requirements I collected from several posts on the reacTIVision forum.

First of all, I wanted to try the reacTIVision software and break it down to understand how it works and its constraints. There isn't much about it on the internet, so I just started with the documentation.

Work Plan

22nd – 23rd: Ideation, finalizing the concept
24th – 27th: Coding, configuring reacTIVision with Processing
27th – 30th: Testing and preparing for the exhibition
1st – 2nd: Installation/setup
3rd – 4th: Exhibition

After the first meeting, my whole project got flipped. Initially I was thinking of making a tabletop interactive city builder; it became something better: tracking fiducial markers from the front and projecting onto a vertical plane. It was a smart move, hiding the markers and making the aesthetics of the project much cooler.

I dropped the IR LEDs and the OV2710 camera and started with a webcam, to keep the setup clean.

[Image: Initial graphics and their coupled fiducial markers]

 

Process

In test 1 I checked reacTIVision's ability to detect a fiducial marker. I displayed an image of the test marker on my smartphone at different brightness levels, and reacTIVision recognized it in almost every run.

Next I tested a Logitech USB camera. At that point I didn't know how to configure the camera with reacTIVision (it's an .xml file). Without autofocus it detected all the markers but gave very little depth; with autofocus it detected at most 4 markers over a larger depth range, but a marker placed near the camera would go out of focus when I put another marker farther away, limiting me to a very shallow interaction area.

After much testing with the webcam, I found the minimum distance between camera and marker should be 40–50 cm and the maximum 75–80 cm, giving me a nice region for interaction.

In the third test, I measured the lag between the user's input and the feedback on the screen. Text tracking was pretty good and stuck consistently to the marker.

In the fourth test, I brought in the graphics. Initially, nothing rendered on screen because another line of code was drawing a white background over the top of the images. After fixing that, a new challenge arose: aligning the images along one horizontal plane. The anchor point of each image sat at its centre, so I fixed the alignment by offsetting every image against one common reference image. The distance between two images is also affected by the camera distance, which created the need for a fixed platform.
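
As a rough sketch of that fix (the file names and marker IDs below are hypothetical, and it assumes imageMode(CENTER), i.e. images anchored at their centres as described above): the backdrop is drawn once at the top of draw(), and each building is shifted by half its height difference against a common reference image so that all the bases land on one line.

import TUIO.*;

TuioProcessing tuioClient;
PImage cnTower, flatiron;
PImage reference; // common reference image used to align all the bases

void setup() {
  size(1280, 720);
  imageMode(CENTER);                     // anchor every image at its centre
  cnTower   = loadImage("cn_tower.png"); // hypothetical file names
  flatiron  = loadImage("flatiron.png");
  reference = cnTower;
  tuioClient = new TuioProcessing(this);
}

void draw() {
  background(255); // backdrop FIRST; drawing it after the images was the bug
  for (TuioObject tobj : tuioClient.getTuioObjectList()) {
    PImage img = (tobj.getSymbolID() == 0) ? cnTower : flatiron; // IDs are hypothetical
    // Shift by half the height difference so every base meets the reference line
    float dy = (reference.height - img.height) / 2.0;
    image(img, tobj.getScreenX(width), tobj.getScreenY(height) + dy);
  }
}

// Empty TUIO callbacks, as in the earlier sketch
void addTuioObject(TuioObject o) { }  void updateTuioObject(TuioObject o) { }
void removeTuioObject(TuioObject o) { }  void addTuioCursor(TuioCursor c) { }
void updateTuioCursor(TuioCursor c) { }  void removeTuioCursor(TuioCursor c) { }
void addTuioBlob(TuioBlob b) { }  void updateTuioBlob(TuioBlob b) { }
void removeTuioBlob(TuioBlob b) { }  void refresh(TuioTime t) { }

Note that the vertical offset only fixes the alignment on screen; the apparent spacing between buildings still changes with camera distance, which is why a fixed platform was needed.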


I selected some historical buildings of Toronto and made digital vectors of them for laser cutting. Nick suggested having multiple copies of a building, e.g. four CN Towers. Since I had assigned one marker to the CN Tower, I made four identical markers, one for each copy. In this way I had multiple replicas of the same building, and to keep track of the buildings and their assigned markers, I created an Excel sheet. Then I attached the markers to their respective buildings.
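
In code, that spreadsheet boils down to a lookup table from fiducial symbol ID to building graphic. A sketch of the idea (IDs and file names are hypothetical): reacTIVision tracks each physical copy of a marker as its own TuioObject with a unique session ID even when the symbol ID is identical, so all four CN Towers render at once.

import TUIO.*;
import java.util.HashMap;

TuioProcessing tuioClient;
HashMap<Integer, PImage> buildings = new HashMap<Integer, PImage>();

void setup() {
  size(1280, 720);
  imageMode(CENTER);
  // One symbol ID per building type; the four identical CN Tower markers all share ID 0
  buildings.put(0, loadImage("cn_tower.png"));    // hypothetical IDs and file names
  buildings.put(1, loadImage("flatiron.png"));
  buildings.put(2, loadImage("st_lawrence.png"));
  tuioClient = new TuioProcessing(this);
}

void draw() {
  background(255);
  // Every tracked marker instance appears here, duplicates included
  for (TuioObject tobj : tuioClient.getTuioObjectList()) {
    PImage img = buildings.get(tobj.getSymbolID());
    if (img != null) {
      image(img, tobj.getScreenX(width), tobj.getScreenY(height));
    }
  }
}

// Empty TUIO callbacks, as in the earlier sketches
void addTuioObject(TuioObject o) { }  void updateTuioObject(TuioObject o) { }
void removeTuioObject(TuioObject o) { }  void addTuioCursor(TuioCursor c) { }
void updateTuioCursor(TuioCursor c) { }  void removeTuioCursor(TuioCursor c) { }
void addTuioBlob(TuioBlob b) { }  void updateTuioBlob(TuioBlob b) { }
void removeTuioBlob(TuioBlob b) { }  void refresh(TuioTime t) { }

The nice part of this split is that adding a new building is just one more laser-cut piece, one more marker, and one more row in the table, which is what keeps the city builder modular.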

Internal Critique

Some valuable inputs from peers and mentors
1. A staircase-like platform will help users interact with the BLOCK pieces in the intended way.
2. Add some animation or triggers to move the images, so the project becomes more interactive.
3. A bigger TV will be more impactful for the project than the projector.

For (1), a quick fix would have been marking a region on the table, but some users held a building by its base and in the process unknowingly hid the marker from the camera. So the staircase (laser-cut on the same day as the critique; thanks to Jignesh for helping me out with it) turned out to be the perfect solution, as the open show proved.

For (2), I made a background animation. During playback, though, it was too laggy; even though the video was only 5 MB, processing it slowed everything down, so I disabled it.

For (3), I used a TV screen. The graphics were much brighter and sharper, and most importantly the setup became cleaner and more minimal.

 

Exhibition Setup

(Photos of the exhibition setup)

Take-Aways from the Exhibition

Some people were surprised and looked around for the sensor that was tracking the buildings. Some touched the table to check whether there was a magnetic field. Some thought I was using an algorithm that detects the shapes of the buildings (that one kept me thinking for a while) and replicates them on the screen.
After I explained how cityBLOCK works, they complimented the project for being smart and clean in terms of setup.
There was an elderly couple, and the man was having so much fun playing with the blocks that his wife had to remind him they still had other work to see. I count that as a success for a simple interactive project.

Challenges & Learnings:

The biggest hurdle for me was that I had no idea where to start. There were no proper resources, but I knew it could be done; all I needed was one example project to understand the communication between the camera and Processing.

Getting image graphics onto the screen when I showed a marker to the camera was the most time-consuming phase of the project. It turned out to be just a code issue: another draw call was painting a white background on top of the images.
Once I had the images showing, the rest was completing the tasks the project needed.

References

http://reactivision.sourceforge.net/#files
https://www.tuio.org/?software

 

Birds of a Feather

Experiment 5: Birds of a Feather

Course: Creation & Computation

Digital Futures, 2019 

Proposal

Members: Catherine Reyto, Jun Li, Rittika Basu, Sananda Dutta, Jevonne Peters

Project title:  Birds of a Feather

Project visual concept

Project description:

This project is an installation piece involving a display composed of LEDs and various materials (primarily laser-cut acrylic). We will be taking the interactive lightscape aspects of Experiment 1 (Sananda and Jevi), Experiment 2 (Jevi and Li), and Experiment 4 (Catherine, Li, Rittika, Jevi), and the work with LEDs and acrylic from Catherine and Sananda's Experiment 2.

The theme of our installation is birds. Specifically, birds as they have been represented stylistically by the artists of our respective countries of origin. Aesthetically, the range of species and the variation of visual patterns and textures work well with our optics. We will be working with distance sensors as a means of interaction with participants, limiting the activity to simple actions (activating and deactivating light based on proximity, and some light animation).


Materials / technology list:

The range of bird groupings and styles will span four panels. If we can acquire them, we hope to mount these panels on clothing racks on wheels so they are portable. Behind the panels is where we will assemble our circuits and Arduinos (hidden from view). The wiring will feed into the display via holes in the panels. The birds will be laser-cut from acrylic sheets in varying colours, though we may include other materials (paper, ink, and translucent fabrics) to emphasize the shift in artistic style.

Lighting:

  • Ambient/atmosphere: remote-lit bulbs
  • Display:
    • 2 x RGB strips / 12” (50 RGBs per strip)
    • 2 x RGB strips / 1 m (50 RGBs per strip)
    • Individual LEDs
    • Jumper wires
    • 5 x distance sensors (Ultrasonic Range Finder – LV-MaxSonar-EZ0)
    • 5 x Arduino Nano 33 IoT
    • 2 x Arduino Mega 2560
    • 1 x Arduino Uno

Material:

– Plastics (acrylic), laser-cut

  • Designed in Photoshop, Procreate and Illustrator

Programming: 

  • Arduino
  • Touch Designer

Work plan

Sunday November 23: Round 1 / bird designs created

Monday November 24: Round 1 / Bird designs laser-cut

  • Schematic / circuits 
  • Arduino programming

Tuesday November 25: Round 1 / Bird designs laser-cut 

Wednesday November 26: Round 2 / Bird designs laser-cut

  • Initial tests/ RGBs and sensors
  • Panels designed for displays

Thursday November 27: Round 2 / Bird designs laser-cut

Friday November 28:  Round 3 / Bird designs laser-cut

  • Panels drilled for circuits 

Saturday November 29: Assembly / testing
Sunday November 30: Assembly  / testing 

Monday December 1 : Assembly / testing

Tuesday December 2:  Install in space

Wednesday December 3 :  Install in space

 

Physical installation details: The displays will be mounted like paintings, but as the artwork will be attached to the panels at varying distances, there will be a 3D aspect (depth). The LED circuits will be mounted on the back of the panels, with the display components positioned in front, so that they're ‘coloured’ by the LEDs while diffusing the light in the process.

The panels will in turn be supported by clothing racks on wheels for portability.  

Resources / materials: 

  • Acrylic sheets
  • Tissues / translucent fabric
  • Acrylic paint (black matte)
  • Foam / wood /cork-board panels


Ocular


Project Title
Ocular
Animated Robotics – Interactive Art Installation

Project by Jignesh Gharat


Project Description

An animated robotic motion with a life-like spectrum, bringing emotive behaviour into the physical dimension. An object whose motion has living qualities such as sentiment, emotion, and awareness, revealing a complex inner state, expressions, and behavioural patterns.
He is excited to peek outside the box and explore the surroundings, but as soon as he sees a person nearby he panics and hides back in the box, as if he is shy or feels unsafe. He doesn't like attention but enjoys staring at others. What if a blank canvas could try to express itself, instead of viewers projecting their own interpretations, emotions, beliefs, and stories onto it?


Technology

  • MacBook Pro
  • 1 x Arduino Uno/Nano
  • 2–3 servo motors
  • VL53L0X laser distance sensor
  • LED bulb
  • Power bank, 20,000 mAh

Materials

  • Acrylic/paper/3D printing (periscope)
  • 4 x plywood panels, 30” x 30”

Work plan

22nd – 23rd – Material procurement, storytelling, and ideation
24th – 27th – Code: debugging, calibrating the distance sensor, mockup, testing
27th – 30th – Iterating, fabrication, and assembly
1st – 2nd – Installation/setup
3rd – 4th – Exhibition


Physical installation details

The head of the robot (a periscope) observes the surroundings, curious to explore what is happening around it. As a viewer comes into the range of the distance sensor, the robot quickly hides inside the box, then peeks out. When no one is in the sensor's range, the robot pops out again.
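
As a toy illustration of this hide-and-peek logic (a simulation only, not the installation code): in the Processing mock-up below, the mouse's distance to the box stands in for the VL53L0X range reading, and an eased "peek" value stands in for the servo angle; the trigger distance and easing speeds are made-up values.

// Mock-up of the hide/peek behaviour; mouse distance simulates the sensor
float peek = 1.0;          // 1 = fully peeked out, 0 = hidden in the box
final float TRIGGER = 150; // stand-in "sensor range", in pixels

void setup() {
  size(400, 400);
}

void draw() {
  background(240);
  float boxX = width / 2, boxY = height - 80;
  float distance = dist(mouseX, mouseY, boxX, boxY);
  // Viewer in range -> duck quickly; nobody around -> re-emerge slowly
  float target = (distance < TRIGGER) ? 0 : 1;
  float speed  = (target == 0) ? 0.3 : 0.05;
  peek = lerp(peek, target, speed);
  fill(120); rect(boxX - 40, boxY, 80, 60);                    // the box
  fill(50);  rect(boxX - 10, boxY - peek * 70, 20, peek * 70); // periscope head
}

In the actual installation, the same two-speed easing would map onto servo angles, with the VL53L0X reading replacing the mouse distance.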


Exhibition Space

Preferably a silent art gallery space with a plinth. Spotlight on the plinth.

Interactive Canvas

  1. Project Title

Interactive Canvas (Working title) by Arsalan Akhtar

 

  2. Project Description

 

Ambient advertising is key to winning consumers today, which encourages the discovery of various storytelling tools. The “walls” around us are an empty canvas that could tell a lot of interactive stories. Thus, I would like to make an interactive wall that tells a story through sound or display when a visitor is in close proximity. The subject of the story is the breach of digital privacy, and how we in this digital age have given permissions to the mobile apps we use daily.

 

  3. Parts, Materials and Technology

 

  • Arduino Micro
  • Resistors
  • Conductive Material
  • Projector
  • Piece of wood
  • Processing and Arduino Software
  • Photoshop for illustration

 

  4. Work Plan

 

  • Nov 20–22: Low-fidelity prototyping with Arduino, conductive material and resistors
  • Nov 23–24: Work in Processing on the storytelling drawing
  • Nov 25–26: Procure wood and engrave storytelling assets
  • Nov 27–28: Debugging of hardware and software
  • Nov 29 – Dec 1: Debugging of the physical installation
  • Dec 2: Test run
  • Dec 3–4: Display

 

  5. Physical Installation

A rectangular piece of wood or paper (around 6 ft x 6 ft), attached or hung against the wall with the story engraved on it and facing a projector. Visitors could interact with features on the wall and learn the story. Thus, a flat wall such as the one in the DF gallery could be a great place.

  6. Resource list

 

  • a bracket to hold the piece of canvas against the wall.

 

Thank you