Eternal Forms

Experiment 5: Proposal

Members

Catherine Reyto, Jun Li, Rittika Basu, Sananda Dutta, Jevonne Peters

Project Description

“Eternal Forms” is an interactive artwork incorporating geometric elements in motion. The forms rotate faster as a user approaches, responding to a range of proximities. The elements are constructed with high precision and accurate measurements to generate an optical illusion of constant collision. The framework is built on a circular grid system, composed of several interlinked ellipses running across a common circumference of the central ellipse.

Parts/materials/technology list

  • Servo
  • Distance sensor
  • Pulley
  • Acrylic
  • Wood
  • Arduino – coding for the distance sensor and servo motor
  • Laser Cutting
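As a rough sketch of the Arduino-side logic (the distance window, speed range, and function names are assumptions of mine, not project code), the proximity-to-speed mapping for a continuous-rotation servo might look like this:

```cpp
#include <algorithm>

// Arduino-style map(): linearly rescale x from [inMin, inMax] to [outMin, outMax].
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Convert a distance reading (cm) into a continuous-rotation servo command:
// 90 means stopped, values toward 180 spin faster. Readings are clamped to an
// assumed useful sensor window of 20-200 cm.
int speedFromDistance(long distanceCm) {
    long d = std::min(std::max(distanceCm, 20L), 200L);
    // Near (20 cm) -> fast (180); far (200 cm) -> stopped (90).
    return static_cast<int>(mapRange(d, 20, 200, 180, 90));
}
```

In the actual sketch, `loop()` would read the distance sensor and write the result of `speedFromDistance()` to the servo each cycle.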

Work Plan

Base Circular Grid – Sets the basis of the design constructions.
Design Explorations
  • 24th Nov (Sun): Pattern exploration and design created based on the finalized patterns
  • 25th Nov (Mon): Design a network of connected illusions using the finalized patterns. Also, try out prototype laser cuts of the final pieces. Check out the weight and scaling optimizations.
  • 26th Nov (Tue): Work on the code – Arduino side. Also, try the servo functioning with the prototype model.
  • 27th Nov (Wed): Combine the servo and the sensor part of the experiment and check code functioning.
  • 28th Nov (Thu): Create the final design laser cuts with the materials finalized – acrylic, wood, or both. Additionally, create the mounting piece that needs to go up on the wall.
  • 29th Nov (Fri): Iterate and work on creating multiple kinetic sculptures and make them interactive. Also, work on the display set-up of the installation.
  • 30th Nov (Sat): Trouble-shooting
  • 1st Dec (Sun): Trouble-shooting
  • 2nd Dec (Mon): Trouble-shooting
  • 3rd Dec (Tue): Final Presentation

Physical installation details

There will be multiple interactive artworks mounted on walls, where users can observe the rotation speed change based on their proximity. The artworks will be animated by servo motors.

Resource List

10 Nails and hammer – For mounting the artwork.

Extension Cord

Power Plug

Wall space of 5ft x 3ft to mount the piece.

References

  1. Kvande, Ryan. “Sculpture 40″ – Award Winning Optical Illusion Sculptures”. Spunwheel, 2019, https://www.spunwheel.com/40-kinetic-sculptures1.html.
  2. Whitaker, Lyman. “Whitaker Studio”. Whitakerstudio.Com, 2019, https://www.whitakerstudio.com/.

Experiment 5 — Leave A Window Open For Me

1.Project title: Leave A Window Open For Me

2.Project description: 

For this project, I’m aiming to create an installation: a space within a box built from mirrored acrylic sheets and wood. It is meant to be filled with mirrors to form a room with infinite reflections. It reflects my own experience in New York, when I was going through an emotional breakdown for a long period of time. I was suffering from insomnia and depression and had no motivation to do anything; the window was the thing I stared at the most, from day to night, from rain to shine.

I managed to step out of the emotional trap eventually, but looking out of the window seems to have become one of my daily routines. Except this time, things have changed.

There will be a 3D-printed bed placed in the middle of the installation to represent my room and my mental state. One side of the box will have a cutout playing the role of the window, with the LED matrix panel placed behind it.

The audience is expected to observe the installation through the peephole in the front board.

3.Parts / materials / technology list:

LED matrix panel, Arduino UNO, acrylic sheets with mirror backing, glue gun, 3D printer, laser cutting machine, transformer adapter, tool power cord, etc.
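The day-to-night shift behind the window could be sketched as a simple colour crossfade before frames are pushed to the LED matrix panel. This is a minimal sketch written as plain C++; the colours, names, and 0–255 fade parameter are my assumptions, not final code:

```cpp
// A colour as 8-bit RGB channels.
struct RGB { int r, g, b; };

// Linear blend between two colours; t runs from 0 (full day) to 255 (full night).
// Each channel is interpolated independently with integer arithmetic, as an
// Arduino sketch would do it.
RGB blend(RGB day, RGB night, int t) {
    return RGB{
        day.r + (night.r - day.r) * t / 255,
        day.g + (night.g - day.g) * t / 255,
        day.b + (night.b - day.b) * t / 255,
    };
}
```

Driving `t` slowly from 0 to 255 and back would give the window its day-to-night cycle; rain or shine could be layered on top as separate animations.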

4.Work plan: 

Nov 21-22: Proposal & Discussion

Nov 23-24: Coding & Researching

Nov 25-26: Purchasing materials & Building small prototype & test

Nov 27-28: Coding & 3D printing & Laser cutting

Nov 29- Dec 3: Debugging & Assembly

Dec 4: Open show

5.Physical installation details: 

The small room in the graduate gallery that is usually used for screening films, since a darker space is preferable; a pedestal holding the installation will be placed against the wall.

 

Initial sketches:

img_3487 img_3488 img_3489 img_3490 img_3491 img_3492

img_3860

CityBLOCK

 


Project Title – CityBLOCK

Team Member – Rajat Kumar

Mentors
Kate Hartman & Nick Puckett

Project Description
CityBLOCK is a modular city-builder experience in which the user controls the city’s building blocks with designed cubes, building their dream city by rearranging the cubes.

What makes Toronto so unique? As Canada’s largest and most diverse city, it’s home to a dynamic mix of tourist attractions, from museums and galleries to the world-famous CN Tower and, just offshore, the Toronto Islands. And just a short drive away is Niagara Falls.
This project will let you build your version of Toronto city.
Everyone has their own imagination of their city. When I came to Toronto for the first time, I was really surprised by the fusion of old and new architecture in the city. I have always wanted to show what I think about this city; I go to the harbourfront almost every day and admire the CN Tower, thinking about how it became the “icon” of the city. The other historical architecture of the city, like the Flatiron Building and St. Lawrence Market, makes it more unique and identifiable.
This is my take on representing Toronto from my perspective, and of course, this city builder is modular, so anything can be added later.

GITHUB LINK – here

Inspiration

The Reactable is an electronic musical instrument with a tabletop tangible user interface that was developed within the Music Technology Group at the Universitat Pompeu Fabra in Barcelona, Spain by Sergi Jordà, Marcos Alonso, Martin Kaltenbrunner, and Günter Geiger.

What I like about the Reactable is that any physical object can become smart and can interact with, alter, and modify digital content.

The only input device used here is a camera, which makes for a very clean setup for an open show.

Working flow

This setup is for tabletop interaction and requires a lot of effort to build the table and configure the camera with a projector.

Technology

  • reacTIVision 1.5.1
  • TUIO Client – Processing
  • OV2710 camera
  • Projector – one that supports a short-throw lens with a throw ratio of 0.6 or below
  • Infrared LED – 850 nm illuminator

Material

  • Wooden/paper blocks
  • Table, 4.5 ft × 3.5 ft
  • Projector mount

These were the minimum requirements I collected from several posts on the reacTIVision forum.

First of all, I wanted to try the reacTIVision software and break it down to understand how it works and what its constraints are. There is not much about it on the internet, so I just started with the documentation.

Work Plan

22nd – 23rd – Ideation, finalizing the concept
24th – 27th – Coding, configuring reacTIVision with Processing
27th – 30th – Testing and getting ready for the exhibition
1st – 2nd – Installation/setup
3rd – 4th – Exhibitions

After the first meeting, my whole project got flipped. Initially, I was planning a tabletop interactive city builder; it became something better: tracking fiducial markers from the front and projecting onto a vertical plane. Hiding the markers was a smart move that made the aesthetics of the project much cooler.

I removed the IR LEDs and the separate camera and started with a webcam in order to keep the setup clean.

Initial graphics and their coupled fiducial markers

 

Process

In test 1, I tested the recognition ability of reacTIVision in detecting the fiducial marker. I captured an image of the test marker and tested it with reacTIVision at different brightness settings on my smartphone, and it was recognized in almost every test run.

This time I was testing a Logitech USB camera, and at this point I didn’t know how to configure the camera with reacTIVision (via its .xml file). Without autofocus, it detected all the markers but gave very little depth; with autofocus, it detected at most 4 markers over a larger depth range, but a marker placed near the camera went out of focus when I placed another marker farther away, limiting me to a very shallow interaction area.

After much testing with the web camera, I found the minimum distance between camera and marker should be 40–50 cm and the maximum 75–80 cm, giving me a nice region for interaction.

In test 3, I tested the lag between the user’s input and the feedback on the screen. Tracking was pretty good, and the text stuck to the marker consistently.

In test 4, I tested with graphics. Initially, nothing rendered on the screen because another line of code drew a white background over the top of the image. After fixing that, a new challenge arose: aligning the images on one horizontal plane. The anchor point of an image is set to its centre by default; this was fixed by offsetting each image against one common reference image. The distance between two images is also affected by the camera distance, which created the need for a fixed platform.
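The centre-anchor fix can be sketched as follows (the project itself used Processing; this is the same arithmetic in plain C++, with names of my own):

```cpp
// Images drawn from their centre float at different heights when the buildings
// have different heights. To baseline-align an image against one common
// reference image, shift its centre so the two bottom edges coincide:
// imgCenterY + imgHeight/2 == refCenterY + refHeight/2.
double alignedCenterY(double refCenterY, double refHeight, double imgHeight) {
    return refCenterY + (refHeight - imgHeight) / 2.0;
}
```

Each building graphic is then drawn at `alignedCenterY(...)` instead of the marker’s raw centre, so all buildings sit on one horizontal plane.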


I selected some historical buildings from Toronto and made digital vectors of them for laser cutting. Nick suggested having multiple copies of each building, such as 4 CN Towers. Since I had assigned one marker to the CN Tower, I had to make 4 identical markers, one for each copy. In this way I had multiple replicas of the same building, and to keep track of the buildings and their assigned markers I created a spreadsheet. Then I attached the markers to their respective buildings.

Internal Critique

Some valuable inputs from peers and mentors
1. A staircase-like platform will help the user interact with the BLOCK pieces in the intended way.
2. Add some animation or triggers to move images so that the project will be more interactive.
3. A bigger TV will be more impactful for the project rather than the projector.

For 1, a quick fix would have been marking a region on the table, but some users held a building by its base and, in the process, unknowingly hid the marker from the camera. So the staircase (laser-cut the same day after the critique; thanks to Jignesh for helping me out with it) turned out to be the perfect solution, as the open show proved.

For 2 I did make the background animation.

While playing, the background video was too laggy and slow to process; although it was only 5 MB, I was still getting too much lag, so I disabled the video.

For 3, I used a TV screen: the graphics were much brighter and sharper, and most importantly, the setup became cleaner and more minimal.

 

Exhibition Setup

img_20191204_165627

group-1170

Take-Aways from the Exhibition

Some people were surprised and looked around for the sensor that was tracking the buildings. Some touched the table to check whether there was a magnetic field. Some thought I was using an algorithm that detects the shape of the buildings (which kept me thinking for a while) and replicates them on the screen.
After I explained how CityBLOCK works, they complimented the project for being smart and clean in terms of setup.
There was an older couple; the man was so happy playing with the blocks that his wife had to remind him they had other work to see as well. I think this was a success for my project: being a simple, interactive project.

Challenges & Learnings:

The biggest hurdle for me was that I had no idea where to start. There are no proper resources, but I knew it could be done; all I needed was one project to understand the communication between the camera and Processing.

Getting image graphics onto the screen when showing a marker to the camera was the most time-consuming phase of the project. It turned out to be just a code issue: a white background was being drawn on top of the images.
Once I got the images working, the rest was completing the tasks necessary for the project.

References

http://reactivision.sourceforge.net/#files
https://www.tuio.org/?software

 

Birds of a Feather

Experiment 5: Birds of a Feather

Course: Creation & Computation

Digital Futures, 2019 

Proposal

Members: Catherine Reyto, Jun Li, Rittika Basu, Sananda Dutta, Jevonne Peters

Project title:  Birds of a Feather

Project visual concept

Project description:

This project is an installation piece involving a display composed of LEDs and various materials (primarily laser-cut acrylic). We will be drawing on the interactive lightscape aspects of Experiment 1 (Sananda and Jevi), Experiment 2 (Jevi and Li), and Experiment 4 (Catherine, Li, Rittika, Jevi), and on the work with LEDs and acrylic from Catherine and Sananda’s Experiment 2.

The theme of our installation is birds. Specifically, birds as they have been represented stylistically by artists from our respective countries of origin. Aesthetically, the range of species and the variation of visual patterns and textures work well with our optics. We will be using distance sensors as a means of interaction with participants, limiting the activity to simple actions (activating and deactivating light based on proximity, and some light animation).
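As a sketch of the proximity interaction (the distance window and fade range below are placeholders, not final values), a sensor reading could fade a panel’s LEDs like this:

```cpp
#include <algorithm>

// Map an ultrasonic reading (cm) to a strip brightness (0-255): full
// brightness when a visitor is within 30 cm, fading linearly to off
// past 150 cm. Readings outside the window are clamped.
int brightnessFromDistance(long cm) {
    long d = std::min(std::max(cm, 30L), 150L);
    return static_cast<int>(255 - (d - 30) * 255 / (150 - 30));
}
```

On the Arduino, the returned value would scale the RGB strip’s colour each frame, giving the activate/deactivate behaviour a soft edge instead of a hard on/off.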

img_9536

Parts/materials/technology list:

The range of bird groupings and styles will span four panels. If it’s possible to acquire them, we are hoping to mount these panels to clothing racks on wheels so they are portable. Behind the panels is where we will assemble our circuits and Arduinos (hidden from view). The wiring will feed into the display via holes in the panels. The birds will be laser-cut from acrylic sheets in varying colours, though we may include other materials (paper, ink, and translucent fabrics) to emphasize the shift in artistic style.

Lighting:

  • Ambient/atmosphere: remote-lit bulbs
  • Display:
    • 2 x RGB strips  / 12” (50 RGBs per strip) 
    • 2 x RGB strips / 1 m (50 RGBs per strip)
    • Individual LEDS
    • Jumper wires
    • 5 x distance sensors (Ultrasonic Range Finder –  LV – MaxSonar-EZ0)
    • 5 x Arduino Nano 33 IoT
    • 2 x Arduino Mega 2560
    • 1 x Arduino Uno 

Material :

– Plastics (acrylic), laser-cut

  • Designed in Photoshop, Procreate and Illustrator

Programming: 

  • Arduino
  • Touch Designer

Work plan

Sunday November 23: Round 1 / bird designs created

Monday November 24: Round 1 / Bird designs laser-cut

  • Schematic / circuits 
  • Arduino programming

Tuesday November 25: Round 1 / Bird designs laser-cut 

Wednesday November 26: Round 2 / Bird designs laser-cut

  • Initial tests/ RGBs and sensors
  • Panels designed for displays

Thursday November 27: Round 2 / Bird designs laser-cut

Friday November 28:  Round 3 / Bird designs laser-cut

  • Panels drilled for circuits 

Saturday November 29: Assembly / testing
Sunday November 30: Assembly  / testing 

Monday December 1 : Assembly / testing

Tuesday December 2:  Install in space

Wednesday December 3 :  Install in space

 

Physical installation details: The displays will be mounted like paintings, but as the artwork will be attached to the panels at varying distances, there will be a 3D aspect (depth). The LED circuits will be mounted on the back side of the panels, with the display components positioned in front, so that they’re ‘coloured’ by the LEDs while diffusing the light in the process.

The panels will in turn be supported by clothing racks on wheels for portability.  

Resources / materials: 

  • Acrylic sheets
  • Tissues / translucent fabric
  • Acrylic paint (black matte)
  • Foam / wood /cork-board panels

 

 

 

 

 

 

Ocular


Project Title
Ocular
Animated Robotics – Interactive  Art Installation

Project by Jignesh Gharat


Project Description

An animated robotic motion with a life-like quality, bringing emotive behaviour into the physical dimension: an object whose motion has living qualities, such as sentiment, emotion, and awareness, revealing a complex inner state, expressions, and behavioural patterns.
He is excited to peek outside the box and explore his surroundings, but as soon as he sees a person nearby he panics and hides back in the box, as if he is shy or feels unsafe. He doesn’t like attention but enjoys staring at others. What if a blank canvas could try to express itself, instead of viewers projecting their interpretations, emotions, beliefs, and stories onto it?


Technology

  • MacBook Pro
  • 1 Arduino UNO/ Nano
  • 2/3 Servo motors
  • VL53L0X laser distance sensor
  • LED Bulb
  • Power bank, 20,000 mAh

Materials

  • Acrylic/Paper/3D printing (Periscope)
  • 4 Plywood 30” x 30”

Work plan

22nd – 23rd _Material procurement, storytelling, and ideation
24th – 27th _Code- Debugging, Calibrating distance sensor, Mockup, Testing
27th – 30th_Iterating, Fabrication, and Assembling
1st – 2nd_Installation/ setup
3rd – 4th_Exhibitions


Physical installation details

The head of the robot (periscope) observes the surroundings and is curious to explore what is happening around it. As a viewer comes into the range of the distance sensor, the robot quickly hides inside the box, then peeks out. When no one is in range of the sensor, the robot pops out again.
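The hide/peek behaviour might be sketched as a small state machine (the thresholds and hysteresis band below are assumptions of mine, not Ocular’s actual code):

```cpp
// Two states: the periscope is either out (Peeking) or retracted (Hidden).
enum class OcularState { Peeking, Hidden };

// Hysteresis keeps the servo from chattering right at the threshold:
// hide when someone is closer than 60 cm, and reappear only once the
// nearest reading is farther than 90 cm.
OcularState nextState(OcularState current, int distanceCm) {
    if (current == OcularState::Peeking && distanceCm < 60) return OcularState::Hidden;
    if (current == OcularState::Hidden && distanceCm > 90) return OcularState::Peeking;
    return current;
}
```

Each transition would drive the servos to the corresponding pose; the gap between 60 cm and 90 cm prevents rapid hiding/popping when a viewer hovers at the edge of range.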


Exhibition Space

Preferably a silent art gallery space with a plinth. Spotlight on the plinth.

Interactive Canvas

  1. Project Title

Interactive Canvas (Working title) by Arsalan Akhtar

 

  1. Project Description

 

Ambient advertising is key to winning consumers today, and it encourages the discovery of various storytelling tools. The “walls” around us are an empty canvas that could tell a lot of interactive stories. Thus, I would like to make an interactive wall that tells a story through sound or display when a visitor is in close proximity. The subject of the story is the breach of digital privacy and how, in this digital age, we have given permissions to the mobile apps we use daily.

 

  3. Parts, Materials and Technology

 

  • Arduino micro
  • Resistors
  • Conductive Material
  • Projector
  • Piece of wood
  • Processing and Arduino Software
  • Photoshop for illustration
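A common way to sense touch on conductive material with an Arduino and a resistor is capacitive sensing. A hedged sketch (names, threshold, and smoothing factor are mine, not the project’s) of turning raw readings into touch events:

```cpp
// Smooths raw capacitive readings and compares them against a calibrated
// threshold, so a single noisy spike does not register as a touch.
class TouchDetector {
public:
    explicit TouchDetector(long threshold) : threshold_(threshold), smoothed_(0) {}

    // Feed one raw sensor sample; returns true while a touch is detected.
    bool update(long raw) {
        // Exponential smoothing (weight 3:1 toward history) rejects spikes.
        smoothed_ = (smoothed_ * 3 + raw) / 4;
        return smoothed_ > threshold_;
    }

private:
    long threshold_;
    long smoothed_;
};
```

A touch event detected this way would then trigger the corresponding sound or projected visual for that part of the engraved story.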

 

  4. Work Plan

 

  • Nov 20-22: Low-fidelity prototyping with Arduino, conductive material and resistors
  • Nov 23-24: Work on processing to discover storytelling drawing
  • Nov 25-26: Procure wood and engrave storytelling assets
  • Nov 27-28: debugging of hardware and software
  • Nov 29-Dec1: debugging of physical installation
  • Dec 2: test run
  • Dec 3-4: Display

 

  5. Physical Installation

A rectangular piece of wood or paper (around 6 ft x 6 ft) attached or hung against the wall, with the story engraved on it and facing a projector. Visitors can interact with features on the wall and learn about the story. Thus, a flat wall such as the one in the DF gallery would be a great place.

  6. Resource list 

 

  • a bracket to hold the piece of canvas against the wall.

 

Thank you

 

 

 

 

 

 

Final assignment: Une Sculpture cinétique

By Jessie, Liam, Masha

Project Title: 

Une Sculpture cinétique  (working title)

 

Project description:

This project will display patterns and movement usually found in nature, such as a bird flying or a flower blooming, through kinetic sculptures. Changes in the patterns will be controlled by visitors’ interactions with the sculptures. The project’s intention is to build a sense of connection between humans and nature and to reflect on our relationship with it.


Parts / materials / technology list:

  • Arduino Nano/Uno
  • Servos – number to be decided
  • Digital fabrication, including laser-cut wood/plastic/acrylic
  • Threads (fishing line)
  • 3 laser sensors
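As an illustration only (nothing here is from the project code), a wingbeat-like motion for the servos could be generated from a sine wave, with the amplitude later scaled by the visitors’ proximity:

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// Returns a servo angle in degrees, sweeping sinusoidally around a centre
// of 90 degrees. amplitudeDeg controls how wide the "wingbeat" is, and
// frequencyHz how fast it flaps.
int wingAngle(double timeSec, double frequencyHz, double amplitudeDeg) {
    double angle = 90.0 + amplitudeDeg * std::sin(2.0 * kPi * frequencyHz * timeSec);
    return static_cast<int>(std::lround(angle));
}
```

Calling this each frame with the elapsed time and writing the result to a servo gives a smooth flapping pattern; tying `amplitudeDeg` to a laser sensor reading would make the motion respond to visitors.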

 

Work plan:

22 -26 November- Designing patterns and going through a few stages of prototyping

27-28 November- Coding and debugging

29 November- Test-assembling parts together to see if it works

30 November- Laser-cutting the final product and putting it all together

1-2 December- Final testing and debugging

 

Physical installation details:

OCAD Graduate Onsite Gallery 

 

Resource list:

A display table with the size of 2m x 1m

An extension power cord

 

Experiment 5 – Proposal


Project title

An Interface to Interact with Persian Calligraphy

Project description

This project will be an extension of what was presented in experiment three, an interface to interact with Persian calligraphy. After looking into different possibilities, I narrowed down the ideas into two different scenarios to extend the existing project. The main purpose of both scenarios is to improve the impression of the project for users.

Scenario 1:
Improving the interaction and visuals

One of the main things that I wanted to achieve in experiment 3 was to give users visual feedback on the specific point of interaction when touching the fabric. In the final result, I was only able to create one-dimensional horizontal feedback. Providing more accurate feedback is one of the ideas in this scenario. This scenario will most likely include:

  • Flat hanging fabric
  • Adding extra sensors or analyzing data differently
  • Modifying visuals respectively
  • Using multiple projectors (if applicable)
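One possible way to “analyze the data differently” for more accurate feedback (a sketch under my own assumptions, not the project’s actual method): mount two distance sensors facing each other across the top of the fabric, so the left sensor’s reading gives the horizontal touch position while agreement between the two readings confirms a real touch:

```cpp
// Result of a touch estimate: whether a touch was accepted, and its
// horizontal position in metres from the left edge.
struct Touch { bool valid; double x; };

// width: the sensor-to-sensor distance across the fabric. A touch is
// accepted when the two readings roughly add up to the width (within
// tolerance), meaning both sensors see the same obstruction.
Touch locateTouch(double leftM, double rightM, double width, double tolerance) {
    double gap = leftM + rightM - width;
    if (gap < -tolerance || gap > tolerance) return Touch{false, 0.0};
    // Average the two independent estimates of the same point.
    return Touch{true, (leftM + (width - rightM)) / 2.0};
}
```

The accepted `x` could then drive the visuals at the actual point of contact rather than along a single horizontal band.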

Scenario 2:
Creating a more immersive interface

I consider the use of a hanging piece of fabric a successful experiment as I had positive feedback from most of the participants. As a result, one of the possibilities to extend the project is to try different ways of hanging the fabric to surround the participants. This scenario will most likely include:

  • U-shaped hanging fabric
  • Analyzing new characteristics of the fabric and possible ways of interaction
  • Modifying the arrangement of sensors
  • Modifying visuals respectively
  • Using multiple projectors (if applicable)

 

Parts/materials/technology list

  • Arduino (one or two)
  • VL53L0X laser distance sensors (two or more)
  • Connecting wires
  • MacBooks
  • Short-throw projectors
  • A large piece of white fabric
  • Wooden bar (straight or curved) for hanging the fabric
  • Tripods, stands or extending arms to install distance sensors
  • Processing
  • Other tools to create visuals (SVG animation tools, Adobe Illustrator, …)

 

Work plan

Both scenarios require an initial setup so that data can be collected from the sensors and the code creating the visuals can be tested and modified. Similarly, in both scenarios, new visuals should be designed, code should be modified, physical parts should be made, and the final result should be calibrated for the presentation space.


In my experience with the previous experiment, the whole work can’t be divided into linear phases and is mostly done in an iterative way. However, some of the most important dates are:

NOV25: End of exploration/research/ideation

NOV25: Setting up the test setup (a prototype of the curved bar, if required)

NOV25 – DEC1: Creating visuals, analyzing data, coding

DEC1 – DEC3: Making required parts for the final setup

DEC3 – Final tests and calibration (final calibration will be done on DEC4)

 

Physical installation details

In my previous experience, this project is best presented in a dark and empty space. Ideally, a narrow space would make it easier to cover the space with the fabric and divide it so that people won’t walk behind the fabric.

The physical installation is also highly dependent on the final space. I need to know the exact specifications of the space so that I can measure all the distances, model them, and start creating everything accordingly. Ideally, I would like to install in room #118 to have full control over light and setup.


Experiment 5 – proposal

(Un)seen
by Nadine Valcin


Project description
Much of my work deals with memory and forgotten histories. I am constantly searching for new ways to portray the invisible past that haunts the present. (Un)seen is a video installation about presence/absence that uses proxemics to trigger video and sound. It recreates a ghostly presence appearing on a screen whose voice constantly beckons the viewer to get closer, but whose image recedes into the frame as the viewer tries to engage with it. Ultimately, the viewer is invited to touch the cloth it is projected on, but if they do, the ghost completely disappears from view, leaving an empty black screen.

With permission, I will be using unused footage from a previous project, comprised of closeups of a Black woman on a black background, and will be recording and mixing a new soundtrack.

Parts / materials / technology list
MacBook Pro
Arduino Nano
Distance sensors (2 or 3) HC-SR04 or KS102 1cm-8M Ultrasonic Sensors
King size bedsheet, hanging from rod
Projector
2 speakers (or 4?)
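The recede-and-disappear behaviour could be sketched like this (the distance thresholds and scale range are placeholders, not the final design):

```cpp
#include <algorithm>

// Returns a scale factor for the projected ghost image: full size (1.0) when
// the viewer is 4 m or farther away, shrinking toward 0.2 at 1 m, and 0.0
// (an empty black screen) once the cloth is touched, read here as a flag
// from the sensor behind the screen.
double ghostScale(double viewerM, bool clothTouched) {
    if (clothTouched) return 0.0;
    double d = std::min(std::max(viewerM, 1.0), 4.0);
    // Linear fade between 0.2 (near) and 1.0 (far).
    return 0.2 + (d - 1.0) * (1.0 - 0.2) / 3.0;
}
```

The same distance reading could cross-fade the voice between the speaker pairs, so the beckoning grows as the image recedes.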

Work plan
22.11.19-24.11.19     Edit 3 video loops
24.11.19-25.11.19     Write ghost dialogue and research sound
26.11.19-27.11.19    Record and edit sound
22.11.19-27.11.19     Program distance sensors and interaction
27.11.19                       Mount bedsheet on rod
28.11.19-02.12.19    Testing and de-bugging
03.12.19-04.12.19    Presentation

Physical installation details
The ideal space for the installation would be Room 118 on the ground floor.
The ideal would be for the image to be projected from the rear onto the sheet. This would require a dark space and enough room behind and in front of the sheet. The mount for the sheet will be kept deliberately light. Metal wire can be used to hang the rod holding the sheet from the ceiling, but this would potentially require discreet hooks screwed or attached to the ceiling.

Set-up Option A

unseen-setup-a1

Set-up Option B

Resource list
Hand drill and other tools TBA
Ladder
Projector
2 (or 4?) speakers
2 pedestals for the sensors (?)
Cables for the speakers
Power bar and electrical extension cords
Table

Questions
– Can I have 4 speakers and have them play different sounds in pairs? I.e. the speakers behind the screen wouldn’t play the same sound as the speakers in front of the screen.
– Do I actually need 3 distance sensors – 1 behind the screen for the functions triggered by people touching the screen and two mounted (possibly on pedestals) slightly in front of the screen at each side?
– Is it possible to hang things from the ceiling?
– Would a motion sensor also be useful to activate the installation when someone comes into the room?

Experiment 5 Proposal

Zero UI (Working Title)

For Experiment 5, I’d like to expand on tangible interfaces and explore Zero UI (invisible user interfaces): technology fully incorporated within a room (or household) through pressure-sensitive furniture and sensors with auditory feedback, elevating regular objects (a book) to create a more immersive experience instead of depending on screen-based ones. This experiment is an exploration in creating a multi-sensory reading experience, with content catered to the book’s contents.

The experiment would involve a pressure-sensor chair that lights up a nearby lamp when the participant sits down. The pressure sensor may be installed visibly on the chair or hidden within the design of a cushion or lining. The participant can pick up the book and read or flip through it, and hear the music referred to in the book playing from a speaker hidden away (possibly below or behind the chair). The audio would be mapped to whichever section of the book the participant is on.
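The section-to-audio mapping might be sketched as follows (the boundary values are invented for illustration; a real build would calibrate them against the flex sensor on the book):

```cpp
#include <cstddef>

// Given the raw flex reading -- which changes as the reader progresses
// through the book -- return the index of the audio track for the current
// section. boundaries[] holds the reading at which each section starts,
// in reading order; the last boundary passed wins.
std::size_t trackFor(int flexReading, const int boundaries[], std::size_t n) {
    std::size_t track = 0;
    for (std::size_t i = 0; i < n; ++i) {
        if (flexReading >= boundaries[i]) track = i;
    }
    return track;
}
```

On the Arduino side, a change in the returned index would cue the MP3 shield to start the matching piece for that chapter.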

screenshot-2019-11-19-at-8-26-20-pm

The book I’d like to use is still undecided, but it should be one with many musical references, such as Haruki Murakami’s The Wind-Up Bird Chronicle, which begins with the protagonist listening to Rossini’s The Thieving Magpie and refers to many other classical pieces. Another possibility is J.R.R. Tolkien’s The Hobbit, with Howard Shore’s music from the film franchise playing instead.

Project Main Components and Parts

  1. Arduino Nano
  2. Flex Sensor
  3. Pressure Sensor
  4. MP3 Shield (?)
  5. External Speakers
  6. Lightbulb and Wiring (Lamp)

Additional Components and Parts

  1. Chair (Supporting Prop)
  2. Fabric/Cushion (To Hide/Place Sensor)
  3. Book (Prop)
  4. Mat Rug (Prop To Hide Cables)

Workback Schedule

Fri, Nov 22 –  Proposal Presentation
Sat, Nov 23 –  Coding + Gathering Digital Assets + Building Lo-Fi Breadboard Prototype
Sun, Nov 24 – Coding + Gathering Digital Assets + Building Lo-Fi Breadboard Prototype
Mon, Nov 25 –  Coding + Creatron for Final Components
Tues, Nov 26 –  Presenting Progress of Lo-Fi Breadboard Prototype + Revisions
Wed, Nov 27 – Prop Purchasing
Thurs, Nov 28 – Laser Cutting Components and Coding
Friday, Nov 29 – Troubleshooting / Bug Fixes
Sat, Nov 30 – Troubleshooting / Bug Fixes
Sun, Dec 1 –  Troubleshooting / Bug Fixes
Mon, Dec 2 – Troubleshooting / Bug Fixes
Tues, Dec 3 – Final Critique
Wed, Dec 4 – Open Show

Physical Installation

I’d ideally like to place the setup in the corner of a room with dimmer lighting, so the light from the lamp is more visible when it turns on. Supporting objects within the setup would be the chair participants can sit on, with the sensor attached.

screenshot-2019-11-19-at-8-26-37-pm

screenshot-2019-11-19-at-8-26-45-pm

Resource List

  1. Chair and Side table
  2. Will need extension cords for power
  3. External speakers

Info for Open Show

Preferably displayed in the Grad Gallery room. I will just need an electrical outlet nearby or extension cord. We will need to book external speakers from AV.