
Mystical Lab

by Nooshin Mohtashami


Mystical Lab (click on image to see video)

Mystical Lab is a finger labyrinth that plays different pieces of music as your finger strolls through its different quadrants and areas. A labyrinth is a meandering path with a single route leading from the entrance to a center. Labyrinths are an ancient archetypal pattern dating back 4,000 years or more, generally used for walking meditation or ceremonies, evoking metaphor, sacred geometry, spiritual pilgrimage, religious practice, mindfulness, environmental art, and community building. You enter a labyrinth at its entrance and walk the path to the center. All labyrinths have only one path in to the center and the same path out. You can stay in the center for as long as you wish, and when you are ready to leave, you walk the same path back to the entrance/exit. A finger labyrinth is similar to a full-sized labyrinth, but it is smaller and traversed with a finger rather than on foot: the user traces the path from the starting point to the center and then traces it back again.

In the Mystical Lab, when the user touches the first sensor at the entrance of the labyrinth, the introductory music starts. As the user moves forward in the labyrinth and touches different sensors along the path, different music is played: each touch sensor is associated with a specific piece of music that plays until the next sensor is touched. This continues until the center of the labyrinth is reached. The user then traverses back to the entrance, and when the touch sensor at the entrance/exit is touched again, the music stops.
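The post doesn't include the sensor-handling source, but the behaviour above boils down to a small state machine. Here is a minimal, hypothetical p5.js + p5.sound sketch of that logic, assuming each MPR121 electrode index (0 = entrance/exit) is delivered to handleTouch() by some transport such as serial (not shown), and that the track file names are placeholders:

// Touch-to-music state machine: each touch point owns one piece of music.
let tracks = [];      // one sound per touch point (file names are hypothetical)
let current = -1;     // index of the track currently playing; -1 = silence
let walking = false;  // true between entering and exiting the labyrinth

function preload() {
  for (let i = 0; i < 4; i++) {
    tracks[i] = loadSound('track' + i + '.mp3');
  }
}

function handleTouch(sensor) {
  if (sensor === 0 && walking) {        // entrance/exit touched again: stop
    if (current >= 0) tracks[current].stop();
    current = -1;
    walking = false;
    return;
  }
  walking = true;
  if (sensor === current) return;       // same area: keep the current piece
  if (current >= 0) tracks[current].stop();
  tracks[sensor].loop();                // plays until the next sensor is touched
  current = sensor;
}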

Connecting the MPR121 to the outside world

Creating touch points for the different areas


Colour makes a difference (or does it?)

Videos and Source Code

Circuit Diagram

Mystical Lab Circuit Diagram

References

The Labyrinth Society, https://labyrinthsociety.org/about-labyrinths

Walking a Labyrinth, Veriditas, https://www.veriditas.org/New-to-the-Labyrinth

What I would do next:

  • re-make the labyrinth surface, maybe even carve it in wood and make the sensors’ surface smoother
  • create images in the circles that are being drawn on the screen


LuminArt: light up your art


by Nooshin Mohtashami


LuminArt is a simple installation that consists of a painted canvas, a series of LEDs installed on (and integrated into) the canvas, one light sensor, and one microcontroller (Arduino Nano 33 BLE Sense) with a built-in microphone to monitor the sounds around the canvas.

The canvas is activated when the room it's in is dark (or when the light sensor is covered). Once the canvas is activated (ON), the microcontroller's microphone monitors the sounds around the canvas and randomly lights up the LEDs, illuminating different areas of the art.
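The full Arduino sketch is linked below; as a rough illustration of the control loop, here is a minimal JavaScript sketch of the same decision logic, with the hardware reads and writes replaced by hypothetical stand-in functions (the thresholds and LED count are also assumptions, not values from the project):

// Control loop: active only in the dark, reacts to sound with random LEDs.
const DARK_THRESHOLD = 50;    // below this, the room counts as dark (assumed)
const SOUND_THRESHOLD = 200;  // above this, there is sound to react to (assumed)
const NUM_LEDS = 8;           // number of LEDs on the canvas (assumed)

// Hypothetical stand-ins for the real hardware calls:
const readLight = () => 30;          // ambient light level
const readSoundLevel = () => 300;    // microphone amplitude
const setLed = (i, on) => console.log('LED ' + i + (on ? ' on' : ' off'));

function loop() {
  const active = readLight() < DARK_THRESHOLD;  // canvas is ON only in the dark
  if (active && readSoundLevel() > SOUND_THRESHOLD) {
    const led = Math.floor(Math.random() * NUM_LEDS);
    setLed(led, Math.random() < 0.5);           // toggle a randomly chosen LED
  } else if (!active) {
    for (let i = 0; i < NUM_LEDS; i++) setLed(i, false);  // passive painting
  }
}
setInterval(loop, 100);  // poll ten times a second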

The colours of the LEDs in each section of the canvas match the colour of the art where they are placed, so when the LEDs are OFF they blend in with the art and the canvas looks more or less like a regular painting.

Calm Technology Principles

This was created with a minimal amount of technology to solve the problem of illuminating the canvas in the dark (Calm Technology Principle VII). It also requires minimal attention from the user to be activated (Calm Technology Principle I): if the room is dark and the user talks, sings, or plays music, or even if there are general sounds in the room, the technology starts turning the lights on and off. Otherwise, it remains a passive painting and disappears into the background, following the social norms of being a painting (Calm Technology Principle VIII).

Videos:

LuminArt in action (54 seconds) // How LuminArt works (1:46 min)

Link to the Arduino code on GitHub

https://github.com/n00shinm/LuminArt

Circuit Diagram

Circuit Diagram for LuminArt


Experiment 1 – Body As Controller

Experiment 1: Body as Controller in 3 parts

by Nooshin Mohtashami

The goal of this experiment is twofold:

  1. To explore the possibility of creating interactive virtual body art inspired by my related works research, and
  2. To use the computer’s built-in webcam and body movements to initiate interactions with the computer program, instead of the mouse or keyboard already attached to it. Specifically, creating two ways to perform the action CLICK and two ways to perform the action SCROLL.

The tools and libraries used in this experiment are p5.js and ml5.js, which provide immediate access to pre-trained machine-learning models through a web browser (and much more).

The goals were reached in three separate parts, listed below.

Part 1: Moving Sparkles

p5.js Present: https://preview.p5js.org/nooshin/present/QoWPCen-3

p5.js Editor:  https://editor.p5js.org/nooshin/sketches/QoWPCen-3

This is a simple sketch to show how controlled movement (scroll) can be implemented using a body pose.


Fig 1 – Moving Sparkles


Fig 2 – lifting shoulder to move the sparkles

In this sketch, lifting your shoulder towards your ear moves the sparkles on the screen towards the lifted-shoulder side (please note that the camera view on the screen is reversed). That is, lifting the left shoulder towards your ear moves the sparkles to the left of the screen, and lifting the right shoulder moves them to the right.
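The pose check behind this is a simple comparison of shoulder and ear heights. A minimal sketch of the idea, assuming ml5's PoseNet model and a hand-tuned pixel threshold (the original sketch's exact setup and values aren't shown here):

let video, pose;
let sparkleX;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  sparkleX = width / 2;
  const poseNet = ml5.poseNet(video, () => console.log('model ready'));
  poseNet.on('pose', (poses) => { if (poses.length > 0) pose = poses[0].pose; });
}

function draw() {
  background(0);
  if (pose) {
    // A shoulder counts as "lifted" when it comes within LIFT pixels of the ear.
    const LIFT = 60;  // assumed threshold
    if (pose.leftShoulder.y - pose.leftEar.y < LIFT) sparkleX -= 3;   // drift left
    if (pose.rightShoulder.y - pose.rightEar.y < LIFT) sparkleX += 3; // drift right
    sparkleX = constrain(sparkleX, 0, width);  // keep the sparkles on screen
  }
  noStroke();
  fill(255, 230, 120);
  circle(sparkleX + random(-15, 15), height / 2 + random(-15, 15), 6);  // one "sparkle"
}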

This is not a very sophisticated program: the sparkles do run “out” of the screen, and if the user starts by lifting the right shoulder first, the sparkles disappear and the user can get confused. However, this was one of my first experiments, and I learned a lot about the basics of working with the p5.js and ml5.js libraries!


Part 2: Bowie’s Bolt

p5.js Present: https://preview.p5js.org/nooshin/present/izwLGOi_N

p5.js Editor: https://editor.p5js.org/nooshin/sketches/izwLGOi_N

This is a sketch to show how state change (click) can be implemented using a body pose.


Fig 3 – Bowie’s Bolt


Fig 4 – wrist to eye to change the virtual make-up colour

In this sketch, there are two pre-defined “virtual make-ups” drawn on the user’s face using the p5.js vertex() function. The user can change the colour of the virtual make-up by touching, or “clicking”, an eye (bringing a wrist up to it, as in Fig 4).
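The “click” itself reduces to a distance check between the wrist and eye keypoints. A minimal sketch of that idea, called from draw() once a pose is available (keypoint names follow ml5’s PoseNet; the colour list and the 40 px threshold are assumptions):

const COLOURS = ['#e63946', '#457b9d', '#ffb703'];  // assumed palette
let colourIndex = 0;
let clicked = false;  // debounce: one touch counts as one click

function checkEyeClick(pose) {
  // Distance from each wrist to the eye on the same side.
  const d = Math.min(
    dist(pose.leftWrist.x, pose.leftWrist.y, pose.leftEye.x, pose.leftEye.y),
    dist(pose.rightWrist.x, pose.rightWrist.y, pose.rightEye.x, pose.rightEye.y)
  );
  if (d < 40 && !clicked) {
    colourIndex = (colourIndex + 1) % COLOURS.length;  // next make-up colour
    clicked = true;   // ignore further frames until the wrist moves away
  } else if (d >= 40) {
    clicked = false;
  }
}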

One important lesson from this experiment concerns the size of the drawings on the screen relative to the user’s face. When I was programming this, I created the shape of the virtual make-up based on my distance from the webcam at the time, and although the virtual make-up moves on the screen with the model, it currently does not resize itself to stay proportional to the face. This is because I used absolute offsets from the x,y coordinates of the body parts to create the shape. For example, to start the drawing I used:
vertex(eyeL.x, eyeL.y - 100);      // 100 px above the left eye
vertex(eyeL.x - 80, eyeL.y - 100); // 80 px further left, same height
This draws a line from the left eye’s x-position to the left eye’s (x − 80)-position while keeping the y-position constant. Now if the user moves further from the screen, the face looks smaller but the 80 and 100 remain constant, and the virtual make-up takes over the entire face (Fig 5). In a future version I would use dynamic values to draw the shapes: for example, calculating the distance between the ears to find the width of the face on the screen, and using that as a reference for where to place the virtual make-up.

Fig 5 – the size of the virtual make-up is not dynamic in the current sketch
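Concretely, the fixed offsets could be replaced by fractions of the ear-to-ear distance, so the make-up scales with the face. A short sketch of that fix, inside the same beginShape() block (earL/earR are assumed keypoint variables in the style of eyeL; the fractions are illustrative, not the original values):

const faceW = dist(earL.x, earL.y, earR.x, earR.y);  // face width on screen
vertex(eyeL.x, eyeL.y - 0.6 * faceW);                // offsets now scale with faceW
vertex(eyeL.x - 0.5 * faceW, eyeL.y - 0.6 * faceW);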


Part 3: Finger Painting

p5.js Present: https://preview.p5js.org/nooshin/present/tLrxxrEjb

p5.js Editor: https://editor.p5js.org/nooshin/sketches/tLrxxrEjb



Fig 6 – Finger painting on the screen



This is a 2-in-1 sketch in which both movement (scroll) and state change (click) are implemented. I started from a useful Steve’s Makerspace video on how to create finger painting on the screen, then extended it with random colours and with clearing the screen by bringing the thumb and index finger together. It was also important to track a state of “painting” vs. “not painting”, so that the screen doesn’t get cleared unintentionally if the thumb and index finger accidentally come close together while the user is painting; a minimal sketch of this state logic follows the instructions below.

Instructions are as follows:

  • First raise your hand ✋ for the program to recognize it. A white pointer will show on your index finger, meaning it’s ready.
  • To start painting: point your index finger 👆. The white pointer will turn red. You are now ready to paint on the screen.
  • To stop painting: open your hand and show all fingers ✋. You can move your hand to a different location, point your index finger, and paint more.
  • To clear the screen: stop painting first ✋, then touch index + thumb together 👌.
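Here is a minimal sketch of the pinch-to-clear guard described above, assuming ml5’s Handpose model (in its landmark indexing, 4 is the thumb tip and 8 is the index fingertip; the 30 px pinch threshold is a guess). The open-hand vs. pointing-finger check that toggles the painting state is omitted for brevity:

let predictions = [];
let painting = false;  // toggled elsewhere by the open-hand / pointing check

function setup() {
  createCanvas(640, 480);
  background(255);
  const video = createCapture(VIDEO);
  video.hide();
  const handpose = ml5.handpose(video, () => console.log('model ready'));
  handpose.on('predict', (results) => { predictions = results; });
}

function draw() {
  if (predictions.length === 0) return;
  const lm = predictions[0].landmarks;      // 21 [x, y, z] points
  const [ix, iy] = lm[8];                   // index fingertip
  const [tx, ty] = lm[4];                   // thumb tip
  const pinch = dist(ix, iy, tx, ty) < 30;  // thumb + index together

  if (pinch && !painting) {
    background(255);                        // clear only when NOT painting
  } else if (painting) {
    noStroke();
    fill(200, 30, 30);
    circle(ix, iy, 8);                      // paint at the fingertip
  }
}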



Summary & Learnings 

I learned a lot from this experiment, including that the ml5.js models seem to prefer:

  • Bright and clear rooms: body parts and movements were recognized much faster and more accurately in bright rooms and against solid-coloured backgrounds than in dark rooms or with lots of colours or objects in the background.
  • Light clothing: light-coloured clothing worked better than dark-coloured clothing for body-movement recognition.
  • No jewelry: wearing a bracelet or watch on the wrist seems to cause delays and confusion in recognizing hands/wrists and their location.
  • Slow movements: moving slowly made it easier for the model to recognize the body and its movements (or maybe this is a programmer’s issue!).

And as I started writing this summary, I realized I could extend the project to let the user draw their own “virtual make-up” on the screen with a finger (Part 3) and, when done, have the program tie the drawn shape to the user’s facial coordinates. Then, using Part 2, the user could change the colour of their virtual make-up. Definitely a fun future project 🙂
