Silent Signals

PROJECT TITLE, SUBTITLE
Silent Signals
Breaking the Language Barrier with Technology

TEAM MEMBERS
Priya Bandodkar, Jignesh Gharat, Nadine Valcin

PORTFOLIO IMAGES

portfolio-02 portfolio-03 portfolio-04 portfolio-05

PROJECT DESCRIPTION

‘Silent Signals’ is an experiment that aims to break down language barriers by enabling users in different locations to send and receive text messages in the language of the intended recipient(s) using simple gestures. The gesture detection framework is based on poseNet, and the experiment uses PubNub to send and receive the messages. It is intended to be a seamless interaction in which users’ bodies become controllers and triggers for the messages. It does away with the keyboard as an input and takes communication into the physical realm, engaging humans in embodied interactions. It can accommodate multiple users, regardless of the spatial distance between them.

PROJECT CONTEXT

When we first started thinking about communication, we realized that between the three of us, we had three different languages: Priya and Jignesh’s native Hindi, Nadine’s native French, and English, which all three of us shared as a second language. We imagined teams collaborating on projects across international borders, isolated seniors who may only speak one language, and globetrotting millennials who forge connections throughout the world. How could we enable them to connect across language barriers?

Our first idea was to build a translation tool that would allow people to text one another seamlessly in two different languages. This would involve a translation API such as Cloud Translation by Google (https://cloud.google.com/translate/), which has the advantage of automatic language detection through artificial intelligence.

We then thought that it would be more natural and enjoyable for each user to speak their preferred language directly, without the intermediary of text. That would require a speech-to-text API and a text-to-speech API. The newly released Web Speech API (https://wicg.github.io/speech-api/) would fit the bill, as would the Microsoft Skype Translator API (https://www.skype.com/en/features/skype-translator/), which has the added benefit of direct speech-to-speech translation in some languages; unfortunately, that functionality is not available for Hindi.

p1 (diagram: translation flow between Language A and Language B)

As we discovered that there are already several translation apps on the market, we decided to push the concept one step further by enabling communication without the use of speech, and started looking into visual communication.

The Emoji

pic2

Source: Emojipedia (https://blog.emojipedia.org/ios-9-1-includes-new-emojis/)

Derived from the Japanese term for “picture character”, the emoji has grown exponentially in popularity since its online launch in 2010. More than 6 billion emojis are exchanged every day, and 90% of regular emoji users rated emoji messages as more meaningful than simple texting (Evans, 2018). Emojis have become part of our vocabulary as they proliferate, at times expressing relatively complex emotions and ideas with a single icon.

Sign Language

Sign languages allow the hearing impaired to communicate. We also use our hands to express concepts and emotions. Every culture has a set of codified hand gestures that have specific meanings. 

pic3

American Sign Language

Source: Carleton University (https://carleton.ca/slals/modern-languages/american-sign-language/)

Culturally-Coded Gestures

Source: Social Mettle (https://socialmettle.com/hand-gestures-in-different-cultures)

MOTIVATION

At the same time, we started thinking about how we could use technology differently, as the three of us shared a desire to make our interactions with it more intuitive and natural.

“Today’s technology can sometimes feel like it’s out of sync with our senses as we peer at small screens, flick and pinch fingers across smooth surfaces, and read tweets “written” by programmer-created bots. These new technologies can increasingly make us feel disembodied.”

Paul R. Daugherty, Olof Schybergson and H. James Wilson
Harvard Business Review

This preoccupation is by no means original. Gestural control technology is being developed for many different applications, especially as part of interfaces with smart technology. In the Internet of Things, it serves to make interactions with devices easy and intuitive, having them react to natural human movements. Google’s Project Soli, for example, uses hand gestures to control different functions on a smart watch. 

ADAPTATION CHALLENGES

Some of the challenges in implementing this approach are that there is currently no standard format for body-to-machine gestures, and that gestures and their meanings vary from country to country. For example, while the thumbs-up gesture pictured above has a positive connotation in the North American context, it has a vulgar connotation in West Africa and the Middle East.

CONCEPT

pic4

The original concept was a video chat that would include visuals or text (in the user’s language), triggered by gestures of the chat participants. We spent several days attempting to use different tools to achieve that result before Nick Puckett informed us that what we were trying to achieve was nearly impossible via PubNub. This left us with the rather unsatisfactory option of the user only being able to see themselves on screen. We nevertheless forged ahead with a modified concept that had these parameters:

  • Using the body and gestures for simple online communications
  • Creating a series of gestures with codified meanings for simple expressions that can be translated into 3 different languages

TECHNICAL ASPECT

gif1

poseNet Skeleton

Source: ml5.js (https://ml5js.org/reference/api-PoseNet/)

We leveraged the poseNet model (via the ml5.js library), which allows for real-time human pose estimation. It tracks 17 keypoints (nodes) on the body using the webcam and creates a skeleton that corresponds to human movements. Using the keypoint information tracked by poseNet, we were able to define the relationships of different body parts to one another, use their relative distances, and translate that into code.
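To illustrate how this keypoint data is accessed, here is a minimal p5.js sketch using the ml5.js wrapper around poseNet. The confidence threshold and the drawing code are illustrative assumptions, not the project's actual implementation (see the GitHub link below for that).

```javascript
// Minimal p5.js + ml5.js sketch showing how poseNet keypoints are read.
// The 0.5 confidence threshold is an illustrative assumption.
let video;
let poseNet;
let pose; // most recent pose result

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // Load the poseNet model on the webcam feed and listen for pose results
  poseNet = ml5.poseNet(video, () => console.log('poseNet ready'));
  poseNet.on('pose', (results) => {
    if (results.length > 0) {
      pose = results[0].pose;
    }
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (!pose) return;
  // Each of the 17 keypoints has a part name, a position and a confidence score
  for (const kp of pose.keypoints) {
    if (kp.score > 0.5) {
      fill(0, 255, 0);
      noStroke();
      ellipse(kp.position.x, kp.position.y, 10, 10);
    }
  }
}
```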

pic5

poseNet tracking nodes

Source: Medium (https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5)

TECHNICAL LIMITATIONS

As we continued to develop the code, we soon realised that poseNet tracking was rather unstable and at times finicky, as it is purely based on the pixel information it receives from the camera. The output fluctuated with several factors such as the lighting, the contrast of clothing against the background, and the user’s distance from the screen. Consequently, a gesture would not always be captured if these external factors weren’t favourable. Dark clothing and skin seemed to be particularly problematic.

We originally had 10 gestures coded, but the challenge of integrating them all was that their parameters sometimes interfered or overlapped with one another. To avoid this, we limited the prototype to 5. We had to be mindful of using parameters that were precise enough not to overlap with other gestures, yet broad enough to account for the fact that different body types and people perform these gestures in slightly different ways.
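As a sketch of what such a gesture parameter might look like in code, the hypothetical check below tests whether both wrists are held close together above the head, normalising distances by shoulder width so the same thresholds work for different body sizes and distances from the camera. The parts, ratios and thresholds are assumptions for illustration, not the gestures we actually shipped.

```javascript
// Hypothetical gesture predicate: "hands together above the head".
// The 0.5 confidence cut-off and the 0.6 * shoulderWidth tolerance are
// illustrative values, not the project's actual parameters.
function getKeypoint(pose, partName) {
  return pose.keypoints.find((kp) => kp.part === partName);
}

function isHandsTogetherAboveHead(pose) {
  const lw = getKeypoint(pose, 'leftWrist');
  const rw = getKeypoint(pose, 'rightWrist');
  const nose = getKeypoint(pose, 'nose');
  const ls = getKeypoint(pose, 'leftShoulder');
  const rs = getKeypoint(pose, 'rightShoulder');
  if ([lw, rw, nose, ls, rs].some((kp) => kp.score < 0.5)) return false;

  // Normalise by shoulder width so the check tolerates different body types
  // and distances from the camera.
  const shoulderWidth = dist(ls.position.x, ls.position.y, rs.position.x, rs.position.y);
  const wristsClose =
    dist(lw.position.x, lw.position.y, rw.position.x, rw.position.y) < 0.6 * shoulderWidth;
  const aboveHead = lw.position.y < nose.position.y && rw.position.y < nose.position.y;
  return wristsClose && aboveHead;
}
```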

Since there are very limited resources dealing with p5.js and PubNub together, we had difficulty finding code examples to help us resolve some of the coding problems we encountered. Most notable among these was publishing the graphic messages we designed (instead of text) so that they would be superimposed on the recipient’s interface. As a result, we only managed to display graphics on the sender’s interface and send text messages to the recipient.
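For context, the snippet below is a minimal sketch of the publish/subscribe flow, using the PubNub JavaScript SDK loaded alongside p5.js. The keys, channel name and message shape are placeholders, not the project's actual values.

```javascript
// Minimal sketch of sending a gesture as a text message over PubNub.
// Keys, channel name and message fields are placeholders.
const pubnub = new PubNub({
  publishKey: 'pub-c-your-publish-key',
  subscribeKey: 'sub-c-your-subscribe-key',
  uuid: 'silent-signals-user-1',
});

let incomingText = '';

function setup() {
  createCanvas(640, 480);
  textSize(24);
  // Receive messages published by other participants on the channel
  pubnub.addListener({
    message: (event) => {
      incomingText = event.message.text;
    },
  });
  pubnub.subscribe({ channels: ['silent-signals'] });
}

// Called when a gesture is recognised; publishes its translated text
function sendGestureMessage(gestureName, translatedText) {
  pubnub.publish({
    channel: 'silent-signals',
    message: { gesture: gestureName, text: translatedText },
  });
}

function draw() {
  background(255);
  fill(0);
  text(incomingText, 20, 40); // the recipient sees the text version of the gesture
}
```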

CODE ON GITHUB

https://github.com/jigneshgharat/Silent-Signals

OUTCOME

  • Participants expressed that it was a unique and satisfying experience to engage in this form of embodied interaction using gestures.
  • The users appreciated the fact that we developed our own set of gestures to communicate instead of confining ourselves to existing sign languages.

NEXT STEPS

We would like to complete the experience by publishing image messages to recipients with corresponding translations, rather than the current text interface.

REFERENCES

Oliveira, Joana. “Emoji, the New Global Language?” OpenMind, https://www.bbvaopenmind.com/en/technology/digital-world/emoji-the-new-global-language/. Accessed online, November 14, 2019.

Evans, Vyvyan. The Emoji Code: The Linguistics Behind Smiley Faces and Scaredy Cats. Picador, 2018. Excerpt: https://us.macmillan.com/excerpt?isbn=9781250129062. Accessed online, November 15, 2019.

Daugherty, Paul R., Olof Schybergson, and H. James Wilson. “Gestures Will Be the Interface for the Internet of Things.” Harvard Business Review, 8 July 2015, https://hbr.org/2015/07/gestures-will-be-the-interface-for-the-internet-of-things. Accessed online, November 12, 2019.

Oved, Dan. “Real-time Human Pose Estimation in the Browser with TensorFlow.js.” Medium, 2018, https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5. Accessed online, November 10, 2019.

The Compass

PROJECT TITLE
The Compass

PROJECT BY
Priya Bandodkar

PORTFOLIO IMAGES

portfolio-2 portfolio-1 portfolio-4

portfolio-3

PROJECT DESCRIPTION

“The Compass” is an experiment that uses interactions with a faux compass in physical space to navigate a virtual 3D environment. The exploration leverages affordances of a compass, such as direction, navigation, and travel, to intuitively steer through a virtual realm. It closely emulates a VR experience, sans the VR glasses.

Participants position themselves facing the screen and rotate the handheld compass to traverse the on-screen environment in directions that mimic the movement of the physical interface. Participants can turn the device, or even choose to turn around with the device, for a subtle variation of the experience. This movement rotates the 3D sphere on the screen, creating the illusion of moving through the virtual space itself. The surface of the 3D sphere is mapped with a texture of a panoramic landscape: a stylised city scene of crossroads populated with characters and vehicles to complement the navigational theme of the experiment.

As a variation on the original concept, I embedded characters in the scene that participants need to search for, tapping into the ‘discovery’ affordance of the compass and creating a puzzle-game experience.

PROJECT CONTEXT

As a 3D digital artist, I have always been interested in exploring the possibilities of interactive 3D experiences in my practice. The introduction to Processing opened the door to the ‘interactive’ part of this. I was then keen to play with the freedom and limitations of incorporating the third dimension into Processing and to study the outcomes.

One of my future interests lies in painting 3D art in VR and making those interactions as tangible as possible. I am greatly inspired by the work of Elizabeth Edwards, an artist who paints in VR using Tilt Brush and Quill, creating astonishing 3D art and environments in the medium. I was particularly fascinated by her VR piece ‘Spaceship’, which was 3D painted using Tilt Brush. I set myself the challenge of emulating this virtual experience and controlling it with a physical interface that was more intuitive than a mouse.

It had to be a physical interactive object that helps you look around by mimicking a circular motion. Drawing parallels with the real world, I realised the compass has been one of the most ancient yet intuitive interfaces for finding directions and navigating real space, so I decided to leverage its strong affordance to complement the visual experience of my project. While building the interactivity, I realised how easy and effortless it became to comprehend and relate the virtual environment to your own body and space from the very first prototype, even more so because the virtual space was being controlled by an intuitive handheld object in real space.

ideation

Studying the gyroscope and sending its data to Processing filled in a crucial piece of the puzzle. It let me use the orientation information, with a few simple yet very useful lines of code, to achieve the anticipated interaction to a T.
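The project itself was written in Processing (see the code link below); as a rough p5.js analogue of the core mapping, the sketch below rotates a textured sphere by a heading angle. The asset name and the stand-in for the serial/orientation input are assumptions for illustration.

```javascript
// Rough p5.js analogue of the core idea: a heading angle (which would come
// from the orientation sensor over serial) drives the rotation of a sphere
// textured with a panoramic landscape. Asset name and input stand-in are
// placeholders.
let panorama;           // equirectangular city panorama
let headingDegrees = 0; // would be updated from the compass heading

function preload() {
  panorama = loadImage('city-panorama.jpg'); // placeholder asset
}

function setup() {
  createCanvas(800, 600, WEBGL);
  noStroke();
}

function draw() {
  background(0);
  // Turning the physical compass rotates the scene around the viewer.
  rotateY(-radians(headingDegrees));
  texture(panorama);
  // A large sphere surrounds the camera; since p5's WEBGL renderer does not
  // cull back faces by default, its inside acts as a 360-degree backdrop.
  sphere(1000);
}

// Stand-in for the sensor input: drag the mouse to simulate rotating the compass.
function mouseDragged() {
  headingDegrees = map(mouseX, 0, width, 0, 360);
}
```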

sphere

I studied VR art installations such as ‘Datum Explorer’, which creates a digital wilderness from a real landscape and builds non-linear storytelling around elusive animals. This sparked the idea of incorporating identifiable characters in my 3D environment to add an element of discovery to the experience. I looked up games based on similar concepts, such as ‘Where’s Waldo?’, to calibrate the complexity of this puzzle-game idea. I used six characters from the Simpsons family and embedded them with a glitching graphic effect, hinting that they did not quite belong in the scene and hence needed to be spotted.

characters

To leverage the affordance of the compass, it was important to make it compact enough to fit in the hand and be rotated. I achieved this by nesting the microcontroller on a mini-breadboard inside the fabricated wooden compass. I stuck to an antique look for the compass to keep the design relatable for participants. While incorporating the puzzle-game aspect, I realised the design of the compass could be customised to hold clues related to the game, but I decided to let go of that in this version, as the six-character puzzle was simple and straightforward enough for participants to solve.

compass-1 compass-2 compass-3

OBSERVATIONS

To conclude, the interaction with the compass in the physical world to control a virtual 3D environment came intuitively to participants and was successful. Some interesting interactions came up during the demo: participants decided to turn around with the compass held in hand, and some placed the compass near the screen and rotated the entire screen to experience a kind of ‘dancing with the screen’. The experience was also compared closely to VR, but without the VR glasses, making it more personal and tangible.

CODE

https://github.com/priyabandodkar/Experiment3-TheCompass

FUTURE ITERATIONS

This is an exploration in continuum that I would like to build on using the following:

  • Layering the sphere with 3D elements and image planes in the foreground to create depth in the environment.
  • Using image arrays that appear or disappear based on the movement of the physical interface.
  • Adding intricacies and complexities to the puzzle game by including navigation clues on the physical interface.

REFERENCES

Edwards, Elizabeth. “Tilt Brush – Spaceship Scene – 3D Model by Elizabeth Edwards (@Lizedwards).” Sketchfab, https://sketchfab.com/3d-models/tilt-brush-spaceship-scene-ea0e39195ef94c9b809e88bc18cf2025.

“Datum Explorer.” Universal Assembly Unit, Wired UK, https://universalassemblyunit.com/work/datum-explorer.

“Interactive VR Art Installation Datum Explorer | WIRED.” YouTube, WIRED UK, https://www.youtube.com/watch?v=G7BaupNmfQU.

Ada, Lady. “Adafruit BNO055 Absolute Orientation Sensor.” Adafruit Learning System, https://learn.adafruit.com/adafruit-bno055-absolute-orientation-sensor/processing-test.

Strickland, Jonathan. “How Virtual Reality Works.” HowStuffWorks, HowStuffWorks, 29 June 2007, https://electronics.howstuffworks.com/gadgets/other-gadgets/virtual-reality.htm.

“14 Facts About Where’s Waldo?” 14 Facts About ‘Where’s Waldo?’ | Mental Floss, 20 Jan. 2017, https://www.mentalfloss.com/article/90967/14-facts-about-wheres-waldo.

CODE REFERENCES

https://processing.org/tutorials/p3d/

https://processing.org/tutorials/pshape/

https://processing.org/reference/sphere_.html

https://processing.org/reference/texture_.html

random (ANIMALS, MONSTERS & HUMANS)

PROJECT TITLE, SUBTITLE
random (ANIMALS, MONSTERS & HUMANS)
A fun grouping game that entails random (shuffling, counting, clustering).

TEAM MEMBERS
Priya Bandodkar & Jun Li

PORTFOLIO IMAGES

image3

image2

image4

PROJECT DESCRIPTION

Random (ANIMALS, MONSTERS & HUMANS) is a smartphone-based interactive physical game experience. The game facilitates both person-to-person and person-to-phone interactions, and can be played with 20 or more players in an open space.

It is a character-based game containing a variety of single and mixed sets of personas: animals, monsters, or humans. On hitting start, the code shuffles, randomises, and assigns a persona or a mixed set to each player. The result is completely random, based on when the participant chooses to stop the shuffle, leading to a dynamic range of character sets on the floor. The host then announces a pairing condition between these animals, monsters and humans (for instance, 1 human, 2 animals and 2 monsters need to form a team). The players then have to group up with other participants so that their cluster meets the announced pairing condition, and those who do advance to the next round. This continues until we have a winner(s).

It was interesting to see how the game playfully turned out to be an integrity test, as some players sneaked in multiple shuffles to find a matching character, and thus survive longer in the game.

PLAY THE GAME

image5

PROJECT CONTEXT

Ideation

We explored and brainstormed to generate concepts that could weave in what we thought was the essence of this experiment—creating one connecting experience with 20 screens without networking. We also wanted to build an engagement that allowed users to have fun. Some of the ideas that came to mind were creating a shopping aisle or gallery of unusual products with smartphones as interactive displays, and building a world map installation with phones as different regions/cities around the world, interactively depicting their landscapes and interesting facts. Although these concepts had quite a bit of creative scope and aligned with the initial brief, we realised they lacked the element of fun. We did a second round of ideation, drawing inspiration from ice-breaker games like Musical Chairs.[1] That’s when we came across the ‘Group in the numbers of…’ game by Michael Hartley.[2] It requires each player to pair up with other participants so that the group totals the number announced by the host. It was a simple concept, but it had the potential to serve as a footing we could build on.

Concept Development

Taking this concept further, we wondered what would happen if each player had a dynamic, random number for each round. Visualising this in a digital context, the smartphone was the ideal choice for introducing the twist.

Prototyping

Looking up code resources, we realised that the p5.js math references could help generate a dynamic number with each click. Here is a process video of testing this out:
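Alongside the video, a minimal sketch of that first test might look like the following; the number range and layout are illustrative assumptions.

```javascript
// A new random number on every tap/click (range is illustrative).
let currentNumber = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
  textAlign(CENTER, CENTER);
  textSize(120);
}

function draw() {
  background(255);
  fill(0);
  text(currentNumber, width / 2, height / 2);
}

function mousePressed() {
  // random(1, 11) returns a float in [1, 11); floor() makes it an integer from 1 to 10
  currentNumber = floor(random(1, 11));
}

// On phones, taps also need to be handled explicitly
function touchStarted() {
  currentNumber = floor(random(1, 11));
  return false; // prevent default touch behaviour (e.g. scrolling)
}
```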

Furthermore, to lend an organic feel to the game, we brought characters into play instead of numbers. We also sub-categorised the characters into families of animals, humans and monsters (earlier, aliens) with a view to introducing an additional layer of challenge. We further considered adding mixed sets of characters to shake up the combination possibilities between participants, as drawn below. Here is a process video of testing the functionality with images:

image6-1

Game Flow Visualisation

We mapped the flow of the game early on for two reasons—to gauge the scope and plan for milestones, and to foresee possible challenges. It was succinctly visualised as below:

  1. The game starts with 20 individual participants in an open area, each with 1 phone.
  2. The game is established using an animation with the game title and a button to proceed and play.
  3. The rules of the game are explained to participants verbally and/or via a rules page within the game.
  4. The participants trigger the shuffle on their respective phones either by shaking the phone or by tapping the screen.
  5. Pictures or animated GIF loops of characters start shuffling in a random order.
  6. Players tap the screen to stop the shuffle and land on one character.
  7. The facilitator picks a pairing condition between different characters based on numbers (from a bowl of paper chits, which may need to be improvised depending on the players left) and announces it.
  8. This requires players to group up in different numbers to meet the announced criteria. Players have 30 seconds to do this.
  9. Players quickly pair up to form teams.
  10. The ones left out are eliminated.
  11. We continue with more rounds (3-4) until we have a winner(s).

Code

We built on the random() functionality tested during prototyping. While developing the code with GIFs, we ran into a roadblock using random GIF loops in place of the images. The GIFs did not work in either browser (Safari & Chrome). Looking up references online about animated GIFs [2], we found a way to play a single GIF, but the same logic did not scale to loading 18+ GIFs. We also realised that a repository of GIFs would take a toll on the loading time of the game due to their relatively large file sizes. To overcome this predicament, we decided to introduce color into our images. To complement the idea further, we added code that picked a different background color from a selected color palette each time. This multiplied the image possibilities and made the visuals less repetitive.
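A condensed sketch of this approach is shown below: each shuffle picks a random character image and a random background color from a fixed palette. The file names and hex values are placeholders, not the project's actual assets or palette.

```javascript
// Random character image + random palette background on every tap.
// File names and colors are placeholders.
const characterFiles = ['animal.png', 'monster.png', 'human.png'];
const palette = ['#F4D35E', '#EE964B', '#95B8D1', '#B8E0D2'];

let characters = [];
let currentImage;
let currentBg;

function preload() {
  for (const file of characterFiles) {
    characters.push(loadImage(file));
  }
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  imageMode(CENTER);
  shuffleOnce();
}

function draw() {
  background(currentBg);
  image(currentImage, width / 2, height / 2);
}

// Re-rolling both the image and the background color multiplies the number
// of possible screens without adding any extra image files.
function shuffleOnce() {
  currentImage = random(characters); // p5's random() picks an element from an array
  currentBg = random(palette);
}

function touchStarted() {
  shuffleOnce();
  return false;
}
```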

Visual Aesthetics

We were initially exploring graphic styles that would lend well to creating GIF loops. The purpose of including animated characters was essentially because they would add so much more life to the screen. We had narrowed down to the pixel art and the doodle style.

giff-1
We went ahead with the doodle style (right) because it had more scope to bring out different character personalities using facial expressions. Below are the character designs and GIF loop:

giff-3

Since we were unable to proceed with GIFs midway through the development phase, we decided to lift the visual aesthetics instead by adding colors from a 4-color palette.

image8

The idea of randomising the background color in code led to an even more diverse image repository for the game.

GAME TESTING AND CRITIQUE

We launched the ‘random (ANIMALS, MONSTERS & HUMANS)’ game at the Creation and Computation Experiment 1 Critique on 27 September 2019. The video features highlights from a round of the game played by our classmates:

Observations during gameplay

  • Players that got eliminated in the early rounds were keen on being a part of the game in some other way.
  • It was a playful integrity test as some players sneaked in multiple shuffles to find a matching image to stay longer in the game.
  • Participants adapted to the game fairly quickly, and were able to pair up in the initial 10 seconds or so.
  • There was a glitch in viewing the gameplay on the Android interface leading to cropped images.
  • Everyone seemed to have enjoyed it. There was a request for more rounds in order to decide a single winner.

LEARNINGS

  • Building interactivity in p5.js using random images with an optimal loading time.
  • Working within the screen-size constraints of different platforms by adding the background color in the code.
  • For Priya: Embracing the steep learning curve that came with coding for the first time, and developing physical computing skills in a span of 20 days.
  • For Lee: Applying p5.js to create a working prototype of the idea and learning design skills from Priya.

CONCLUSION

  • Including GIF loops would have made the game experience even more interesting and dynamic.
  • We would be able to restrict players from tapping multiple shuffles (read: cheating) in one round by adding an ongoing round number to the screen.
  • Rotation and device-shake functionalities would not be suitable for this game, as players need to move around quickly to form groups.
  • The game itself was successful, as the players enjoyed and wanted to play more rounds. We thus achieved the objective we had in mind.

TAKEAWAYS FOR FUTURE ITERATIONS

  • Including the ongoing ‘number of round’ functionality.
  • Finding a way to involve players that get eliminated in the initial rounds.
  • Working on including GIFs.
  • Adding sound effects to make the game more interesting and playful.
  • Making it more versatile to work on different operating systems and browsers.

CODE

https://editor.p5js.org/leelijun961118/present/PBjABRx9C6  (developed for smartphone platform)

GITHUB

https://github.com/LLLeeee/Creation-Computation-Project1/tree/master

REFERENCES

[1] “How to Play Musical Chairs.” wikiHow, 29 Mar. 2019. https://www.wikihow.com/Play-Musical-Chairs.

[2] Hartley, Michael. “Game of Getting Into Groups of Given Numbers | Dr Mike’s Math Games for Kids.” Dr-mikes-math-games-for-kids.com, 2019. http://www.dr-mikes-math-games-for-kids.com/groups-of-given-numbers.html.

OTHER ONLINE RESOURCES

Random functionality: https://p5js.org/reference/#group-Math

Phone functionality: Touches: https://p5js.org/reference/#/p5/touches

Using GIFs in p5.js:

Discussion: https://github.com/processing/p5.js/issues/3380

Library: https://github.com/wenheLI/p5.gif/

Example: https://editor.p5js.org/kjhollen/sketches/S1bVzeF8Z

p5.js to GIF: https://www.youtube.com/watch?v=doGFUaw_2yI

Array: https://www.youtube.com/watch?v=VIQoUghHSxU&list=PLRqwX-V7Uu6Zy51Q-x9tMWIv9cueOFTFA&index=27

Loading Animation: https://www.youtube.com/watch?v=UWgDKtvnjIU