An Interface to Interact with Persian Calligraphy

By Arshia Sobhan

This experiment is an exploration of designing an interface to interact with Persian calligraphy. On a deeper level, I tried to find some possible answers to this question: what is a meaningful interaction with calligraphy? Inspired by the works of several artists, and drawing on my personal experience of practicing Persian calligraphy for more than 10 years, I wanted to add new possibilities for interacting with this art form. The output of this experiment was a prototype with simple modes of interaction to test the viability of the idea.

Context

Traditionally, Persian calligraphy has been used mostly statically. Once created by the artist, the artwork is not meant to be changed. Whether on paper, on the tiles of buildings, or carved in stone, the result remains static. Even when the traditional standards of calligraphy are manipulated by modern artists, the artifact is usually fixed in form and shape after being created.

I have been inspired by the works of artists who take a new approach to calligraphy, usually distorting shapes while preserving the core visual aspects of the calligraphy.

"Heech" by Parviz Tanavoli Photo credit: tanavoli.com
“Heech” by Parviz Tanavoli
Photo credit: tanavoli.com
Calligraphy by Mohammad Bozorgi Photo credit: wsimag.com
Calligraphy by Mohammad Bozorgi
Photo credit: wsimag.com
Calligraphy by Mohammad Bozorgi Photo credit: magpie.ae
Calligraphy by Mohammad Bozorgi
Photo credit: magpie.ae

I was also inspired by the works of Janet Echelman, who creates building-sized dynamic sculptures that respond to environmental forces, including wind, water, and sunlight. Her combination of large pieces of mesh with projection creates wonderful 3D objects in space.

Photo credit: echelman.com

The project “Machine Hallucinations” by Refik Anadol was another source of inspiration, and it led to the idea of morphing calligraphy: displaying a cross-section of an invisible 3D object in space, within which two pieces of calligraphy morph into each other.

Work Process

Medium and Installation

Very soon I had the idea of back projection on a hanging piece of fabric. I found it suitable in the context of calligraphy for three main reasons:

  • Freedom of Movement: I found this aspect relevant because of my own experience with calligraphy. The reed pen used in Persian calligraphy moves freely on the paper and is often very sensitive and hard to control; a hanging fabric offers a similar freedom of movement.
  • Direct Touch: Back projection makes it possible for users to directly touch what they see on the fabric without casting shadows.
  • Optical Distortions: Movements of the fabric create optical distortions that make the calligraphy more dynamic without losing its identity.

Initially, I ran some tests on a 1 m × 1 m piece of light grey fabric, but for the final prototype I selected a larger piece of white fabric for a more immersive experience. The final setup was also limited by other factors, such as the specifications of the projector (luminance, short-throw capability and resolution). I tried to keep the human scale in mind when designing the final setup.

Installation sketch

Hanging fabric

Visuals

My initial idea for the visuals projected on the fabric was a morphing between two pieces of calligraphy. I used two works that I had created earlier, based on two masterpieces by Mirza Gholamreza Esfahani (1830–1886). These two, along with a third, had been used in one of my other projects for the Digital Fabrication course, where I explored the concept of dynamic layering in Persian calligraphy.

My recent project for the Digital Fabrication course, exploring dynamic layering in Persian calligraphy

Using several morphing tools, including Adobe Illustrator, I couldn’t achieve a desirable result, because these programs were not able to maintain the characteristics of the calligraphy in the intermediate stages.

The morphing of two calligraphy pieces using Adobe Illustrator

Consequently, I changed the visual concept to one that suited both the idea of gradual change and the properties of the medium.


After creating the SVG animation, all the frames were exported as a PNG sequence consisting of 241 final images. These images were later loaded as an array in Processing.
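As a rough illustration of how the sequence can be loaded in Processing, the minimal sketch below reads the exported frames into a PImage array; the folder and file names (frames/frame-000.png and so on) are my assumptions, not the actual asset names.

PImage[] frames = new PImage[241];
int frameIndex = 0;

void setup() {
  size(1280, 720);
  // Load the exported PNG sequence, one image per step of the gradual change.
  for (int i = 0; i < frames.length; i++) {
    frames[i] = loadImage("frames/frame-" + nf(i, 3) + ".png");
  }
}

void draw() {
  background(0);
  // frameIndex is later driven by the sensor data (see Hardware and Coding below).
  image(frames[frameIndex], 0, 0, width, height);
}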

In the next step, after switching from one sensor to two, three more layers were added to this array. The purpose of these layers was to give users feedback when interacting with different parts of the interface. With only two sensors, however, this feedback was limited to differentiating between left and right interactions.
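A simplified sketch of how such feedback layers could be overlaid is shown below; the layer file names and the 30 mm threshold are assumptions for illustration, and DL and DR stand for the left and right sensor readings described later in the Coding section.

PImage leftFeedback, rightFeedback;  // extra feedback layers, loaded in setup()
int DL, DR;                          // latest left and right sensor readings (mm)

void drawFeedback() {
  int threshold = 30;  // assumed dead zone in mm
  if (DL < DR - threshold) {
    image(leftFeedback, 0, 0, width, height);   // the left side is being pushed
  } else if (DR < DL - threshold) {
    image(rightFeedback, 0, 0, width, height);  // the right side is being pushed
  }
}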

Hardware

In the first version, I started working with one ultrasonic sensor (MaxBotix MB1000, LV-MaxSonar-EZ0) to measure the distance to the centre of the fabric and map it to the index of the image array.

The issue with this sensor was its resolution of one inch, which resulted in jumps of around 12 steps in the image array, and the result was not satisfactory. I tried dividing the data from the distance sensor to increase the resolution (because I didn’t need the whole range of the sensor), but I still couldn’t reduce the jumps to fewer than 8 steps; the interaction was not smooth enough.
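The arithmetic behind these jumps is easy to reconstruct. Assuming a working range of about 500 mm (2 × deltaD, using the calibration variables described later in the Coding section; the 500 mm figure is my assumption for illustration) mapped onto the 241 frames, one 1-inch sensor step moves the index by roughly 12 frames:

// Illustrative only: 241 frames over ~500 mm gives ~0.48 frames per mm,
// so one 25.4 mm (1 inch) sensor step jumps the index by about 12 frames.
int frameFromDistance(float distanceMM, float d0, float deltaD) {
  return constrain(round(map(distanceMM, d0 - deltaD, d0 + deltaD, 0, 240)), 0, 240);
}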

Distance data from LV-MaxSonar-EZ0 after calibration

For the second version, I used two VL53L0X laser distance sensors with a resolution of 1 mm. Although the datasheet claims a range of 2 m, the range I could actually achieve was only about 1.2 m. However, this range was enough for my setup.

Distance data from VL53L0X laser distance sensor with 1 mm resolution
VL53L0X laser distance sensor in the final setup

Coding

Initially, I had an issue reading data from the two VL53L0X laser distance sensors. The library provided for the sensor included an example of reading data from two sensors, but the connections to the Arduino were not documented. This issue was resolved shortly, and I was able to read both sensors and send their data to Processing using the AP_Sync library.
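Since the exact AP_Sync calls are specific to that library, the sketch below shows the same data path using Processing's standard Serial library instead, assuming the Arduino prints the two readings as comma-separated millimetre values, one pair per line; the port index and baud rate are assumptions.

import processing.serial.*;

Serial port;
int DL = 0, DR = 0;  // left and right sensor readings in mm

void setup() {
  size(1280, 720);
  port = new Serial(this, Serial.list()[0], 115200);  // assumed port and baud rate
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  String[] parts = split(trim(line), ',');  // e.g. "412,398"
  if (parts.length == 2) {
    DL = int(parts[0]);
    DR = int(parts[1]);
  }
}

void draw() {
  background(0);
  text("DL: " + DL + "   DR: " + DR, 20, 30);
}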

I also needed to calibrate the data with each setup. For this purpose, I designed the code to be easily calibrated. My variables are as follows:

  • DL: data from the left sensor
  • DR: data from the right sensor
  • D: the average of DL and DR
  • D0: the distance of the hanging fabric at rest from the sensors (using D as the reference)
  • deltaD: the range of fabric movement (pulling and pushing) from D0 in both directions

With each setup, the only variables that need to be redefined are D0 and deltaD. In Processing, these data control different visual elements, such as the index of the image array. The x position of the gradient mask is also controlled by the difference between DL and DR, with an additional speed factor that can change the sensitivity of the movement.
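A minimal sketch of this mapping is shown below, assuming DL and DR are already being updated (for example by the serial code above); the concrete values of D0, deltaD and the speed factor are placeholders to be set per setup.

float D0 = 450;          // resting distance of the fabric from the sensors (mm), set per setup
float deltaD = 250;      // range of fabric movement from D0 (mm), set per setup
float speedFactor = 2.0; // assumed value; scales the sensitivity of the mask movement
int DL, DR;              // current sensor readings, updated elsewhere

void updateVisuals() {
  float D = (DL + DR) / 2.0;  // average distance to the fabric
  // Map the push/pull range [D0 - deltaD, D0 + deltaD] onto the 241-frame sequence.
  int frameIndex = constrain(round(map(D, D0 - deltaD, D0 + deltaD, 0, 240)), 0, 240);
  // Shift the gradient mask horizontally based on the left/right difference.
  float maskX = width / 2.0 + (DR - DL) * speedFactor;
  // frameIndex selects frames[frameIndex]; maskX positions the gradient mask.
}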

Code Repository:
https://github.com/arshsob/Experiment3

References

- https://www.echelman.com/

- http://refikanadol.com/

- https://www.tanavoli.com/about/themes/heech/

- http://islamicartsmagazine.com/magazine/view/the_next_generation_contemporary_iranian_calligraphy/