Goodnight & Good Morning

main-image

Group Members

Grace Yuan, Abhishek Nishu

Project Description

Our project Goodnight & Good Morning is inspired by conversations Grace and I have had over the past remote fall semester. Each time we talked, we began by asking each other the time and whether the other person was awake enough to discuss our projects. Despite all the technology around us, the answer to this question never came in an obvious way. So we took this project as an opportunity to design an object that would not only fit aesthetically into our spaces, but also visually represent day and night in our two cities across the world. Our daylight clocks take the form of a shadow box, where each box consists of a single graphic broken down into multiple layers with a light source behind them creating shadows. We also wished to introduce a piece of culture into one another’s space, so we each designed the visuals for the other person’s shadow box. Grace’s shadow box uses a Mughal painting designed by me, and mine, designed by Grace, represents the Chinese architecture she grew up around.

Mughal painting layers

Chinese architecture layers

As the sun rises in Mumbai (India) and sets in San Francisco (U.S.A.), Grace’s shadow box brightens, becoming completely illuminated during her night. The same happens in reverse, with my shadow box coming on as the sun rises in California. This reminds each of us whether it is day or night for the other and when would be a good time to reach out.

With the help of an LDR (light-dependent resistor), we log the light-intensity values in our environments and translate them into the increasing and decreasing brightness of the LEDs. To make the light values accurate, we performed an exercise documenting each other’s light levels so we could set our sensor ranges to react accordingly. We would further like to build on this concept by including a sound input, which would tap into other senses and allow more detailed messages to be sent.
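To make the sensor-to-brightness mapping concrete, here is a minimal Arduino sketch of the local half of the logic. The pin choices and the calibrated bounds are placeholder assumptions, and the networked exchange of readings between the two boxes (see the GitHub repo below) is omitted; this is a sketch of the idea, not the project code.

```cpp
// Minimal sketch: map LDR readings to LED brightness (local side only).
// In the real build, the reading drives the LEDs on the *partner's* box
// over the network. Pins and calibration bounds below are assumptions.

const int LDR_PIN = A0;         // LDR voltage divider input
const int LED_PIN = 9;          // PWM-capable pin driving the shadow-box LEDs
const int DARK_READING = 80;    // calibrated "night" value (placeholder)
const int BRIGHT_READING = 850; // calibrated "day" value (placeholder)

void setup() {
  pinMode(LED_PIN, OUTPUT);
  Serial.begin(9600);           // log light-intensity values for calibration
}

void loop() {
  int reading = analogRead(LDR_PIN);  // 0-1023
  int clamped = constrain(reading, DARK_READING, BRIGHT_READING);
  // Daylight at one end -> fully lit box; darkness -> LEDs off.
  int brightness = map(clamped, DARK_READING, BRIGHT_READING, 0, 255);
  analogWrite(LED_PIN, brightness);
  Serial.println(reading);
  delay(200);
}
```

The calibration exercise described above is what fixes `DARK_READING` and `BRIGHT_READING` for each environment, so the two boxes react over comparable ranges.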

Experience Video

https://ocadu.techsmithrelay.com/0I8o

Behind the scenes

https://ocadu.techsmithrelay.com/K7Vs

Final project images

Grace’s final project images:

grace-final-image-3

grace-final-image-2

microsoftteams-image-1

 

Nishu’s final project images:

img_0559

img_0594

img_0586

 

Development images

1. Rough development of ideas

img_0067

img_0562-copy

 

2. Testing & building circuit

img_0019

img_0431-copy

screen-shot-2020-11-29-at-10-34-01-am

 

3. Prototyping

img_0069

img_0520-copy

 

4. First glimpse of prototype coming together

img_0070

img_0536-copy

Link to the Arduino code hosted on Github

https://github.com/AbhishekNishu16/Experiment-5.git

Circuit diagram

circuit-diagram

Project Context

“What time is it over there?” While working with teammates in different parts of the world, we kept asking each other this one particular question over and over again.

Since the COVID-19 pandemic, remote working has become “the new normal,” and it is common to work with someone who sits in another city, in another timezone, experiencing the opposite time of day. The idea of informing one another of our local time intrigued us. As we explored concepts around time and daylight, one related work, Patch of Sky, showed us the great potential of visualizing real-time information through an ambient approach. Patch of Sky is an interactive lamp that displays colored light animations in response to the weather at the owner’s Facebook location. It was developed with Arduino and BERG Cloud to bring the sky and nature into interior, personal space. Reflecting on our project: by visualizing time with the brightness of LEDs, we create a sense of companionship in real time. The ambient light indicates both the daylight and the presence of the other person, offering a sense of warmth through the colored light. When one person is asleep or turns off the light, the LEDs on the partner device also turn off, as the person is not around.

Coming to the form of the design, we wanted to tell a story through the device, to represent our own cultures and ourselves. In our vision, the device should also be a beautiful object that we would love to keep and use to decorate our homes. The American Surrealist artist Joseph Cornell is well known for his interactive shadow box art, which combines Constructivism and Surrealism through the juxtaposition of found objects and photographs. The particular structure of the shadow box inspired us, as it often integrates illustration and storytelling with sophisticated detail. The layering technique not only sets off the smooth visual effect of the LED lights but also adds depth and playfulness to the design. We studied a variety of paper-cut shadow boxes and decided to customize our own designs. By creating a set of illustrated frames for each other, and making each device represent the other person, we learned about Indian and Chinese cultures and opened up conversations throughout the process. Every time we look at the box, we not only read the time of day across the devices but also understand the identity and message communicated through the design. It was a meaningful and joyful experience that involved bonding, laughter, stories, making, crafting, and much more. The journey of building the device becomes part of the final product, leaving us with an embodiment of our partnership.

For our next step, we would like to share the resources and provide instructions to the public, so that people can create their own communicating Goodnight & Good Morning boxes. Thanks to the nature of the paper-cut shadow box, anyone with access to the materials should be able to make their own and even customize the design based on their culture, their stories, and the message they want to communicate. It is suitable for friends, families, couples, and any pair of people who care about each other.

Citations & Bibliography

“Collection Online: Joseph Cornell”. Guggenheim, https://www.guggenheim.org/artwork/artist/joseph-cornell.

The Joseph Cornell Box, https://www.josephcornellbox.com.

“Patch of Sky.” Fabrica Features, 2014, https://www.fabricafeatures.com/2014/patch-of-sky/.


Air Band

key-project-image

Band Members:  

Mairead Stewart - Synth

Bernice Lai - Accordion

Kristjan Buckingham - Drums

Clinton Akomea-Agyin - Guitar

Abhishek Nishu - Saxophone

Project Description:

AirBand is a networked series of digital ‘instruments’ that can be played through physical movement. To participate in AirBand, a player opens the p5 web editor and chooses one of five instrument options: guitar, drums, saxophone, accordion, or synth. These choices are flexible, allowing any number of players on each instrument at a time. Next, the player is encouraged to dance around their space while the PoseNet body-tracking library registers their position. Moving from one side of the screen to the other selects from a set of preloaded audio tracks for the chosen instrument, and clapping the hands makes that track play. As more players join the band, their music can be heard by everyone else, meaning that though players may be on opposite sides of the world, they can still dance and play music with one another. While many aspects of the AirBand experience are the same for each instrument, the visual elements have been personalised, so players see a different art experience on their screen for each instrument they play.
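The core interaction can be sketched in a few lines of p5.js. The snippet below is a simplified, single-player sketch, assuming the ml5.js and p5.sound libraries are included and three placeholder audio files; the networking layer that broadcasts each player’s triggers to the rest of the band (see the editor links below) is omitted.

```javascript
// Simplified sketch of one AirBand instrument's interaction loop.
// File names are placeholders; networking between players is omitted.
let video, poseNet, pose;
let tracks = [];

function preload() {
  tracks.push(loadSound('sax1.mp3')); // placeholder assets
  tracks.push(loadSound('sax2.mp3'));
  tracks.push(loadSound('sax3.mp3'));
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', results => { if (results.length > 0) pose = results[0].pose; });
}

function draw() {
  image(video, 0, 0, width, height);
  if (!pose) return;

  // Horizontal position of the body picks one of the loaded tracks.
  let choice = floor(map(pose.nose.x, 0, width, 0, tracks.length));
  choice = constrain(choice, 0, tracks.length - 1);

  // A "clap" = wrists close together; it triggers the chosen track.
  let d = dist(pose.leftWrist.x, pose.leftWrist.y,
               pose.rightWrist.x, pose.rightWrist.y);
  if (d < 50 && !tracks[choice].isPlaying()) {
    tracks[choice].play();
  }
}
```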

AirBand is not only a tool for creative expression, it is a real-time collaborative experience. As our social interaction increasingly moves to virtual spaces, experiences like these give participants an enjoyable alternative to the virtual meetings they are accustomed to. Rather than passively joining a video call, AirBand encourages participants to become active members of the group, physically contributing to a joint creative project.

Experience Video:

https://ocadu.techsmithrelay.com/K5QW

Behind the scenes:

https://ocadu.techsmithrelay.com/Ld9Z

Network Diagram:

network-2

Final Project Images: 

Nishu:

final-image-copy

Mairead:

mairead_final_image

Bernice:

bernice_final_image-1

Kristjan:

kristjan_final_image

Clinton:

clinton_final_image-1-copy

Development images: 

img_0256-copy

img_0257-copy

screenshot-2020-11-19-2-59-09-pm

screen-shot-2020-11-20-at-10-22-49-am

screenshot-2020-11-19-at-11-30-19-pm

Link to the code:

Kristjan’s Edit Link:

https://editor.p5js.org/kristjanb/sketches/ZCi4G6dDY

Kristjan’s Present Link: https://editor.p5js.org/kristjanb/present/ZCi4G6dDY

 

Mairead’s Edit Link:

https://editor.p5js.org/mairead/sketches/lc_bqnZK3

Mairead’s Present Link:

https://editor.p5js.org/mairead/present/lc_bqnZK3

 

Bernice’s Edit Link:

https://editor.p5js.org/3149332/sketches/48Ny1AaI9

Bernice’s Present Link:

https://editor.p5js.org/3149332/present/48Ny1AaI9

 

Nishu’s Edit Link:

https://editor.p5js.org/Abhinishu/sketches/H53lh0FlG

Nishu’s Present Link:

https://editor.p5js.org/Abhinishu/present/H53lh0FlG

 

Clinton’s Edit Link:

https://editor.p5js.org/clinagyin/sketches/15PAtJyRd

Clinton’s Present Link:

https://editor.p5js.org/clinagyin/present/15PAtJyRd

 

Project Context:

AirBand is based on a number of projects that aim to create a shared space for remote musical collaboration. Although the challenge of connecting musicians virtually is hardly new, the pandemic has drastically increased demand for such a service.

Latency seems to be the primary and most commonly addressed issue. With a reliable high-speed internet connection and a direct Ethernet connection instead of WiFi, some platforms such as JamKazam are able to reduce latency to an acceptable degree, but truly synchronous real-time playback is not currently possible (BandMix). The most popular workaround is to use a base track that each musician plays along to separately; all of the recordings are then layered together manually, rather than actually playing live (Pogue).

For those who do choose to navigate the latency issues and play together live remotely, another, less commonly addressed challenge is access to instruments. It is often assumed that most musicians have their instruments at home, but that is not always the case. The Shared Piano from Google Creative Lab, the main inspiration for this project, addresses this issue by letting users play music through a connected electronic piano or their computer keyboard (Google Creative Lab).

21 Swings, created by Mouna Andraos and Melissa Mongiat and exhibited yearly in Montréal, does not connect users remotely, but it does allow multiple users to contribute to a shared musical experience in a physical way, without traditional instruments. Instead, a series of swings triggers distinct notes, prompting users to work together to create a composition (“21 Balançoires (21 Swings)”).

AirBand attempts to expand on the Shared Piano and 21 Swings by letting each user “play” a different instrument, triggering distinct sounds remotely. PoseNet was incorporated to introduce more physicality to the experience, encouraging participants to dance and move around. Each collaborator can also customize their corresponding visuals, creating a more tailored experience.

Whether playing over base tracks to simulate live jams, using technology to simulate instruments, or finding other ways to work around the lag, people are determined to find ways to come together and collaborate on music remotely (Randles). This endeavor is leading to some amazing innovations and many impressive creative solutions.

Works Cited

“21 Balançoires (21 Swings).” Vimeo, uploaded by Daily tous les jours, 24 Apr. 2012, https://vimeo.com/40980676.

Google Creative Lab. “Shared Piano.” Experiments with Google, June 2020, https://experiments.withgoogle.com/shared-piano.

Pogue, David. “How to Make Your Virtual Jam Session Sound—and Look—Good.” Wired, 4 Jun. 2020, https://www.wired.com/story/zoom-music-video-coronavirus-tips/. Accessed 18 Nov. 2020.

Randles, Clint. “Making music at a distance – how to come together online to spark your creativity.” The Conversation, 13 Apr. 2020, https://theconversation.com/making-music-at-a-distance-how-to-come-together-online-to-spark-your-creativity-135141. Accessed 18 Nov. 2020.

“Virtual Jamming Tips.” BandMix, https://www.bandmix.com/virtual-jamming/. Accessed 18 Nov. 2020.

 

A day at the arcade by Abhishek Nishu

project-1-copy-2


PROJECT DESCRIPTION:

“A Day at the Arcade” is a series of artistic and gamified experiences linked by one common thread: the simple movement of your body across a screen. Just like at an arcade, you will experience multiple games built on various themes, where the interactions are linked to different parts of your body.

I started by observing our interaction with computers and how we currently experience them through a series of clicks and taps that lead to visual changes on our screens. As users, we are so focused on where we want our cursor to end up that the simple motion of a finger translating into an action on the screen has become a subconscious part of the experience, one we take for granted. My series of experiments therefore revolves around creating visual and physical experiences that bring back the presence of physical interaction with our computers. Imagine a world where you control your entire computer through different motions of your body, from a distance, much like the computers depicted in the movie Minority Report.

I structured all my experiences with the help of PoseNet’s body-tracking technology while leveraging my skills in other design software. As a result, I was able to create and introduce visual elements that supported building these experiences.

 

EXPERIENCE 1: Don’t forget your mask

Don't forget your mask

Experiment gif: https://youtu.be/6VALdM0d4vA

Experience description:

My first experiment begins with a simple message: during these times, always keep your mask on. Using PoseNet, I anchored the mask to the nose and set its width and height dynamically by tracking the distance between my ears. This keeps the mask proportionate to my face, altering its size with my face’s distance from the screen. I would further like to explore how the perspective of the image could change with the angle of my head.
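The anchoring logic can be sketched as follows. This is a minimal sketch assuming ml5.js is included and a placeholder 'mask.png' asset, with assumed scale factors; see the edit link below for the actual sketch.

```javascript
// Minimal sketch: the mask follows the nose and scales with the
// ear-to-ear distance. 'mask.png' and the scale factors are placeholders.
let video, poseNet, pose, maskImg;

function preload() {
  maskImg = loadImage('mask.png');
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', r => { if (r.length > 0) pose = r[0].pose; });
}

function draw() {
  image(video, 0, 0, width, height);
  if (!pose) return;
  // Ear distance keeps the mask proportionate to the face,
  // whatever its distance from the camera.
  let earDist = dist(pose.leftEar.x, pose.leftEar.y,
                     pose.rightEar.x, pose.rightEar.y);
  imageMode(CENTER);
  image(maskImg, pose.nose.x, pose.nose.y + earDist * 0.3,
        earDist * 1.4, earDist);
  imageMode(CORNER);
}
```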

Present Link: https://editor.p5js.org/Abhinishu/present/7N2Z3Fv-8

Edit Link: https://editor.p5js.org/Abhinishu/sketches/7N2Z3Fv-8 

 

EXPERIENCE 2: The Optical Bird

the-optical-bird

Experiment gif: https://youtu.be/kdgZ4ayEW2Y

Present Link: https://editor.p5js.org/Abhinishu/present/Zqe9c0qCN

Edit Link: https://editor.p5js.org/Abhinishu/sketches/Zqe9c0qCN

Experience description:

My second experiment is a digital take on an optical illusion: two overlapping static visuals that create an animation when one is moved over the other. The sketch consists of two layers, where the static bottom layer is an image of a bird and the top layer is controlled by the movement of my right wrist from one side of the screen to the other. By moving my wrist from side to side, I can animate the bird flying forward and backward.
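A minimal sketch of the two-layer mechanic, assuming ml5.js and placeholder 'bird.png' and 'stripes.png' assets (the real sketch is at the links above):

```javascript
// Sketch of the illusion: the striped overlay's x-offset follows the
// right wrist, animating the bird beneath. Assets are placeholders.
let video, poseNet, pose, birdImg, stripesImg;

function preload() {
  birdImg = loadImage('bird.png');
  stripesImg = loadImage('stripes.png');
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', r => { if (r.length > 0) pose = r[0].pose; });
}

function draw() {
  background(255);
  image(birdImg, 0, 0, width, height);         // static bottom layer
  if (!pose) return;
  // Wrist position slides the overlay; small offsets step the animation.
  let offset = map(pose.rightWrist.x, 0, width, -20, 20);
  image(stripesImg, offset, 0, width, height); // moving top layer
}
```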

A critical observation from exploring this interaction was how PoseNet recognised and tracked my wrist even while it was out of frame, and how sensitive its tracking is to the distance from the camera when you use keypoints below the chest.

 

EXPERIENCE 3: The Lion King Storyboard

the-optical-bird-copy-2

Experiment gif: https://youtu.be/kdgZ4ayEW2Y

Present Link: https://editor.p5js.org/Abhinishu/present/AceWBoBH7

Edit Link: https://editor.p5js.org/Abhinishu/sketches/AceWBoBH7

Experience description:

My third experiment explores the interaction between media and body movement. By loading multiple images that transition as I move my nose from one side of the screen to the other, I was able to recreate the synopsis of a movie that makes us all nostalgic: The Lion King. To further enhance the experience, I use the mousePressed() function to swap the images for short movie clips.
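The navigation reduces to indexing an image array by the nose’s x-position. A minimal sketch, assuming ml5.js and placeholder frame files (the clip-swapping on mouse press is left out):

```javascript
// Sketch of the storyboard navigation: the nose's x-position indexes
// into an array of frames. File names are placeholders.
let video, poseNet, pose;
let frames = [];

function preload() {
  for (let i = 0; i < 5; i++) frames.push(loadImage('scene' + i + '.jpg'));
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', r => { if (r.length > 0) pose = r[0].pose; });
}

function draw() {
  background(0);
  if (!pose) return;
  // Divide the screen into one zone per storyboard frame.
  let idx = constrain(floor(map(pose.nose.x, 0, width, 0, frames.length)),
                      0, frames.length - 1);
  image(frames[idx], 0, 0, width, height);
}
```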

 

EXPERIENCE 4: Face Catching

face-catching

Experiment gif: https://youtu.be/pqZxREB5SxU

Present Link: https://editor.p5js.org/Abhinishu/present/Nv8MQKzuL

Edit Link: https://editor.p5js.org/Abhinishu/sketches/Nv8MQKzuL

Experience description:

Face Catching: it’s all in the name. This experiment is a game where you catch a ball using your face. As you move your face from side to side, you control a pair of digital hands that catch the ball, and each catch earns a point. This exercise helped me explore gamified elements such as a score and GIFs, and how to reset the game.
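A minimal sketch of the catch-and-score loop, assuming ml5.js; the speeds, sizes, and thresholds are placeholder values, and the hands are drawn as a simple rectangle rather than the game’s artwork:

```javascript
// Sketch of the catch mechanic: the nose drives the "hands", a ball
// falls from a random x, and a catch scores a point.
let video, poseNet, pose;
let ballX, ballY, score = 0;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', r => { if (r.length > 0) pose = r[0].pose; });
  resetBall();
}

function resetBall() {
  ballX = random(40, width - 40);
  ballY = 0;
}

function draw() {
  image(video, 0, 0, width, height);
  ballY += 5;                          // the ball falls
  circle(ballX, ballY, 30);
  if (!pose) return;
  let handsX = pose.nose.x;            // the hands follow the face
  rect(handsX - 40, height - 30, 80, 20);
  if (ballY > height - 40 && abs(ballX - handsX) < 50) {
    score++;                           // caught: score and reset
    resetBall();
  } else if (ballY > height) {
    resetBall();                       // missed: reset without a point
  }
  text('Score: ' + score, 10, 20);
}
```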

 

EXPERIENCE 5: Tron Bike Dance

tron-bike-dance

Experiment gif: https://youtu.be/TilDzH6bTIA

Present Link: https://editor.p5js.org/Abhinishu/present/JHviL-YaZ

Edit Link: https://editor.p5js.org/Abhinishu/sketches/JHviL-YaZ

Experience description:

Having built my first game around a single body part, this game explores controlling a single object with multiple body parts. The goal is to keep the bike on the path by steering it with my wrists and ankles. I also introduced the element of time: a 60-second timer that ends the game. This project particularly taught me about the sensitivity of PoseNet. While identifying which body parts to link, I learned that the camera is most responsive to the wrists and ankles, as they are the end points of the body. I would like to extend this game by presetting poses that move the bike in different directions.
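A minimal sketch of the multi-limb control and the 60-second timer, assuming ml5.js; the control mapping, thresholds, and speeds are my placeholder assumptions, and the bike is drawn as a simple circle:

```javascript
// Sketch of multi-limb steering plus a 60-second countdown.
let video, poseNet, pose;
let bikeX = 320, bikeY = 240;
let startTime;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', r => { if (r.length > 0) pose = r[0].pose; });
  startTime = millis();
}

function draw() {
  image(video, 0, 0, width, height);
  let remaining = 60 - floor((millis() - startTime) / 1000);
  if (remaining <= 0) {
    text('Game over', width / 2, height / 2);  // timer ends the game
    return;
  }
  text(remaining + 's', 10, 20);
  if (!pose) return;
  // Each raised limb nudges the bike in one direction (assumed mapping).
  if (pose.leftWrist.y < pose.nose.y) bikeX -= 2;
  if (pose.rightWrist.y < pose.nose.y) bikeX += 2;
  if (pose.leftAnkle.y < height - 100) bikeY -= 2;
  if (pose.rightAnkle.y < height - 100) bikeY += 2;
  circle(bikeX, bikeY, 24);                    // stand-in for the bike
}
```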

 

PROJECT CONTEXT:

If you have ever watched The Matrix, Iron Man, or even Minority Report, you must have wondered what it would be like to control a TV, computer, or any other digital device with just a wave of your hand or another body part. What always fascinated me about these movies is how body movements and gestures translate into visuals and user interfaces.

Enter gesture interfaces. While we may still need to push buttons, touch displays and trackpads, or raise our voices, we will increasingly be able to interact with and control our devices simply by signaling with our fingers, gesturing with our hands, and moving our bodies. “Gesture controls will notably contribute to easing our interaction with devices, reducing (and in some cases replacing) the need for a mouse, keys, a remote control, or buttons. When combined with other advanced user interface technologies such as voice commands and face recognition, gestures can create a richer user experience that strives to understand the human ‘language,’ thereby fueling the next wave of electronic innovation” (Dipert, 2016). We are already seeing real-world implementations from brands like BMW and Samsung: some BMW models let you open the trunk by sliding your foot under the rear bumper when your hands are full, and Samsung’s smart TVs let you change channels by panning your hand. While the number of functional applications grows, gesture interfaces also thrive in the purely fun and interactive space of entertainment and gaming, from arcades, Disneyland, and virtual- and augmented-reality zones to our homes, with platforms such as Nintendo’s Wii and Microsoft’s Kinect.

“Gestural interfaces are highly magical, rich with action and have great graphical possibilities” (Noessel, 2013). While the Nintendo Wii and Microsoft’s Kinect require buying additional equipment to enjoy this technology, my project looks at how we can bring the arcade home, into the devices we already have. In these difficult times, while cooped up at home, we could all use some fun in our lives.

As a next step to my project, I would like to explore the functional side of gesture interfaces by including the element of speech. In the film Iron Man 2, Tony says to the computer, “JARVIS, can you kindly vacuform a digital wireframe? I need a manipulable projection.” JARVIS immediately begins to run the scan. Such a command would be hard to give through a physical gesture; this is where language handles abstract commands well, making speech a strong complement to gesture worth exploring (Noessel, 2013).

CITATIONS:

  1. Dipert, Brian. “The Gesture Interface: A Compelling Competitive Advantage in the Technology Race.” April 2016.
  2. Card, Stuart K. In Designing with the Mind in Mind (Second Edition), 2014.
  3. Noessel, Chris. “What Sci-Fi Tells Interaction Designers About Gestural Interfaces.” 2013.
  4. Davis, J., and M. Shah. “Visual Gesture Recognition.” IEE Proceedings – Vision, Image and Signal Processing, vol. 141, no. 2, April 1994, pp. 101-106, doi:10.1049/ip-vis:19941058.
  5. Schechter, Sonia. “What Is Gesture Recognition? Gesture Recognition Defined.” 2020.
  6. Carman, Ashley. “What’s the Future of Interaction?” 26 Jan. 2017.