Invisible ‘Kelk’

Arshia Sobhan Sarbandi


Invisible ‘Kelk’* is an interactive installation inspired by the transformation of Persian script, from calligraphy to the digital typefaces that almost all current publications are based on. This project is a visual extension of my previous project, an exploration of designing an interface to interact with Persian calligraphy.

*‘Kelk’ is the Persian word for the calligraphy reed.

From Calligraphy to Typography

Reza Abedini is one of the most famous Iranian graphic designers, whose main sources of inspiration are the visual arts and Persian calligraphy. What is original about his work is that he was the first to recognize the creative potential of Persian calligraphy and to transform it into contemporary graphic typography.[1][4] In an interview, Abedini says: “After many years, because we have repeatedly seen and got used to typography in newspapers, it has become the reference of readability, and not calligraphy anymore. And now, everything written in that form is considered readable, which is one hundred percent wrong in my opinion.”[2]


Left: Vaqaye-Ettefaghiye, the second Iranian newspaper published in 1851, written in Nastaliq script (image: Wikipedia)
Right: Hamshahri, one of the current newspapers in Iran, using glyph-based typography (image:


Abedini argues that we have lost the potential of Persian calligraphy as a result of adapting to a technology that was not created for Persian script – movable type.[3] Below, you can see Abedini’s original words written in calligraphy (Nastaliq, the prominent style in Persian calligraphy) and in typography, using one of the most common typefaces currently found in books and newspapers:

Despite the obvious visual difference between the two, the important fact is that both of these writings are completely readable to an Iranian reader.

Designing the Visuals

Similar to my previous project, two different things happen when the hanging piece of fabric – the canvas – is pushed or pulled from its state of balance. All the words from Abedini’s short statement about calligraphy and typography are scattered on the canvas in a random pattern that refreshes every minute. You see the same words on both sides, in calligraphy and in typography.
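
A minimal sketch of that per-minute reshuffle (the actual piece was written in Processing; the word list, canvas size, and function name here are illustrative):

```javascript
// Scatter a list of words at random positions on the canvas.
// An injectable `rng` makes the layout reproducible for testing.
function scatter(words, width, height, rng = Math.random) {
  return words.map(w => ({
    word: w,
    x: rng() * width,   // random horizontal position
    y: rng() * height,  // random vertical position
  }));
}

// In the installation, scatter() would be called again every
// 60 000 ms to refresh the random pattern.
```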

The composition of the calligraphic words is inspired by an important style in Nastaliq script called Siah-Mashgh. This style is usually defined by tilted words written on multiple baselines all over the canvas.


Calligraphy pieces by Mirza Gholamreza Isfahani in Siah-Mashgh style


What I found really interesting is that although the words are randomly positioned in my project, and the random pattern changes every minute, the result retains its visual identity in terms of calligraphic style. The words are in harmony in all the different random positions, which, in my opinion, is a result of the great visual potential of Persian calligraphy.


Sample screenshots of how calligraphy words appear in different random patterns



Technical Details

Almost all the technical details are the same as in the previous project, except that I used a slightly larger piece of fabric in the exhibition and a cleaner physical setup. A larger piece of hanging fabric results in slower movements, both at rest and during interaction, which I found more suitable for the overall experience I wanted.

Exhibition Reflection

From observing and talking with the people interacting with the project, two major points emerged. The first was that when confronting the hanging fabric, many people hesitated to physically interact with it; even after being told it was OK to touch the fabric, they touched it very gently unless they were assured that they were supposed to ‘push’ it. People were also much less likely to pull the fabric than to push it. However, after getting comfortable with the interaction, they usually spent several minutes with the piece and found it pleasing.

One possible solution to the initial hesitation could be installing the piece where people have to push the fabric to pass through it. Another could be adding some airflow to the environment so that the fabric moves slightly in both directions from its resting state, providing a clue that moving the fabric in either direction produces different visual feedback.

Code Repository



Experiment 5 – Proposal


Project title

An Interface to Interact with Persian Calligraphy

Project description

This project will be an extension of what was presented in experiment three: an interface to interact with Persian calligraphy. After looking into different possibilities, I narrowed the ideas down to two different scenarios for extending the existing project. The main purpose of both scenarios is to improve the impression the project makes on users.

Scenario 1:
Improving the interaction and visuals

One of the main things that I wanted to achieve in experiment 3 was to give users visual feedback at the specific point of interaction when touching the fabric. In the final result, I was only able to create one-dimensional horizontal feedback. Providing more accurate feedback is one of the ideas in this scenario. This scenario will most likely include:

  • Flat hanging fabric
  • Adding extra sensors or analyzing data differently
  • Modifying the visuals accordingly
  • Using multiple projectors (if applicable)

Scenario 2:
Creating a more immersive interface

I consider the use of a hanging piece of fabric a successful experiment as I had positive feedback from most of the participants. As a result, one of the possibilities to extend the project is to try different ways of hanging the fabric to surround the participants. This scenario will most likely include:

  • U-shaped hanging fabric
  • Analyzing new characteristics of the fabric and possible ways of interaction
  • Modifying the arrangement of sensors
  • Modifying the visuals accordingly
  • Using multiple projectors (if applicable)


Parts/materials/technology list

  • Arduino (one or two)
  • VL53L0X laser distance sensors (two or more)
  • Connecting wires
  • MacBooks
  • Short-throw projectors
  • A large piece of white fabric
  • Wooden bar (straight or curved) for hanging the fabric
  • Tripods, stands or extending arms to install distance sensors
  • Processing
  • Other tools to create visuals (SVG animation tools, Adobe Illustrator, …)


Work plan

Both scenarios require an initial setup so that data can be collected from the sensors and the code creating the visuals can be tested and modified. Similarly, in both scenarios, new visuals have to be designed, code modified, physical parts built, and the final result calibrated for the presentation space.


In my experience with the previous experiment, the whole work can’t be divided into linear phases and is mostly done iteratively. However, some of the most important dates are:

NOV25: End of exploration/research/ideation

NOV25: Setting up the test setup (a prototype of the curved bar, if required)

NOV25 – DEC1: Creating visuals, analyzing data, coding

DEC1 – DEC3: Making required parts for the final setup

DEC3 – Final tests and calibration (final calibration will be done on DEC4)


Physical installation details

Based on my previous experience, this project is best presented in a dark and empty space. Ideally, a narrow space would make it easier to cover the space with the fabric and divide it so that people won’t walk behind the fabric.

The physical installation is also highly dependent on the final space. I need to know the exact specifications of the space so that I can measure all the distances, model them, and start creating everything accordingly. Ideally, I would like to install in room #118 to have full control over light and setup.


An Interface to Interact with Persian Calligraphy

By Arshia Sobhan

This experiment is an exploration of designing an interface to interact with Persian calligraphy. On a deeper level, I tried to find some possible answers to this question: what is a meaningful interaction with calligraphy? Inspired by the works of several artists, along with my personal experience of practicing Persian calligraphy for more than 10 years, I wanted to add more possibilities for interacting with this art form. The output of this experiment was a prototype with simple modes of interaction to test the viability of the idea.


Traditionally, Persian calligraphy has mostly been used statically. Once created by the artist, the artwork is not meant to be changed. Whether on paper, on the tiles of buildings, or carved in stone, the result remains static. Even when the traditional standards of calligraphy are manipulated by modern artists, the artifact is usually solid in form and shape after being created.

I have been inspired by the works of artists who took a new approach to calligraphy, usually distorting its shapes while preserving its core visual aspects.

“Heech” by Parviz Tanavoli
Photo credit:
Calligraphy by Mohammad Bozorgi
Photo credit:
Calligraphy by Mohammad Bozorgi
Photo credit:

I was also inspired by the works of Janet Echelman, who creates building-sized dynamic sculptures that respond to environmental forces including wind, water, and sunlight. Her large pieces of mesh, combined with projection, create wonderful 3D objects in space.

Photo credit:

The project “Machine Hallucinations” by Refik Anadol was another source of inspiration that led to the idea of morphing calligraphy: displaying an intersection of an invisible 3D object in space, in which two pieces of calligraphy morph into each other.

Work Process

Medium and Installation

Very soon I had the idea of back projection on a hanging piece of fabric. I found it suitable in the context of calligraphy for three main reasons:

  • Freedom of Movement: I found this aspect relevant because of my own experience with calligraphy. The reed used in Persian calligraphy moves freely on the paper and is often very hard to control and very sensitive.
  • Direct Touch: Back projection makes it possible for the users to directly touch what they see on the fabric, without any shadows.
  • Optical Distortions: Movements of the fabric create optical distortions that make the calligraphy more dynamic without losing its identity.

Initially, I ran some tests on a 1 m × 1 m piece of light grey fabric, but for the final prototype, I selected a larger piece of white fabric for a more immersive experience. However, the final setup was also limited by other factors, such as the specifications of the projector (luminance, short-throw ability and resolution). I tried to keep the human scale in mind when designing the final setup.



My initial idea for the visuals projected on the fabric was a morph between two pieces of calligraphy. I used two works that I had created earlier based on two masterpieces by Mirza Gholamreza Esfahani (1830–1886). These two, along with another, were used in one of my other projects for the Digital Fabrication course, where I was exploring the concept of dynamic layering in Persian calligraphy.

My recent project for Digital Fabrication course, exploring dynamic layering in Persian calligraphy

Using several morphing tools, including Adobe Illustrator, I couldn’t achieve a desirable result: these programs were not able to maintain the characteristics of the calligraphy in the intermediate stages.

The morphing of two calligraphy pieces using Adobe Illustrator

Consequently, I changed the visual idea to match both the idea of gradual change and the properties of the medium.


After creating the SVG animation, all the frames were exported as a PNG sequence of 241 images. These images were later used as an array in Processing.
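
As a sketch of how such a sequence can be addressed as an array (the original sketch was in Processing; the file-name pattern here is a hypothetical example):

```javascript
// Build an array of frame file names for a 241-frame PNG sequence.
// Zero-padded names keep the frames in order when loaded.
const FRAME_COUNT = 241;

function frameName(i) {
  return `frame-${String(i).padStart(3, '0')}.png`; // e.g. frame-007.png
}

const frames = Array.from({ length: FRAME_COUNT }, (_, i) => frameName(i));
// In Processing, each name would be passed to loadImage() once in setup().
```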

In the next step, after switching to two sensors instead of one, three more layers were added to this array. The purpose of those layers was to give users feedback when interacting with different parts of the interface. With only two sensors, however, this feedback was limited to differentiating between left and right interactions.


In the first version, I started with one ultrasonic sensor (MaxBotix MB1000, LV-MaxSonar-EZ0) to measure the distance to the centre of the fabric and map it to an index in the image array.

The issue with this sensor was its resolution of one inch. It resulted in jumps of around 12 steps in the image array, and the result was not satisfactory. I tried dividing the data from the distance sensor to increase the resolution (since I didn’t need the sensor’s whole range), but I still couldn’t reduce the jumps to fewer than 8 steps. The interaction was not smooth enough.
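
The arithmetic behind those jumps can be sketched as follows. The 241-frame sequence is from the project; the 500 mm travel range is an assumed figure chosen for illustration (with it, one 1-inch sensor step maps to roughly 12 frames):

```javascript
// The 241-frame PNG sequence is from the project; the 500 mm travel
// range of the fabric is an assumed figure used for illustration.
const FRAMES = 241;   // length of the image array
const RANGE_MM = 500; // assumed total fabric travel (push + pull)

// Map a distance reading (mm from one end of the travel range)
// to a frame index in the image array.
function frameIndex(distanceMm) {
  const clamped = Math.min(Math.max(distanceMm, 0), RANGE_MM);
  return Math.round((clamped / RANGE_MM) * (FRAMES - 1));
}

// A sensor that only reports 1-inch (25.4 mm) steps skips several
// frames between consecutive readings:
const jumpPerStep = (25.4 / RANGE_MM) * (FRAMES - 1); // ≈ 12 frames
```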

Distance data from LV-MaxSonar-EZ0 after calibration

For the second version, I used two VL53L0X laser distance sensors with a resolution of 1 mm. Although the datasheet claims a range of 2 m, the range I could achieve was only 1.2 m. However, this range was enough for my setup.

Distance data from VL53L0X laser distance sensor with 1mm resolution
VL53L0X laser distance sensor in the final setup


Initially, I had an issue reading data from the two VL53L0X laser distance sensors. The library provided for the sensor included an example of reading from two sensors, but the connections to the Arduino were not documented. This issue was resolved shortly, and I was able to read and send data from both sensors to Processing using the AP_Sync library.

I also needed to calibrate the data with each setup. For this purpose, I designed the code to be easily calibrated. My variables are as follows:

DL: data from the left sensor
DR: data from the right sensor
D: the average of DL and DR
D0: the distance of the hanging fabric in rest to the sensors (using D as the reference)
deltaD: the range of fabric movement (pulling and pushing) from D0 in both directions

With each setup, the only variables that need to be redefined are D0 and deltaD. In Processing, these data control different visual elements, such as the index of the image array. The x position of the gradient mask is controlled by the difference between DL and DR, with an additional speed factor that changes the sensitivity of the movement.
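
A minimal sketch of this calibration logic, assuming a linear mapping from displacement to frame index; the variable names follow the list above, but the exact formulas in the original Processing sketch may differ:

```javascript
// Variable names follow the list above; the mapping itself is a
// sketch and may differ from the original Processing code.
const FRAMES = 241; // length of the image array

function visualState(DL, DR, D0, deltaD, speedFactor) {
  const D = (DL + DR) / 2; // average of both sensor readings
  // Normalize the displacement from the rest position D0 to [-1, 1]
  // over the travel range deltaD, then map it onto a frame index.
  let t = (D - D0) / deltaD;
  t = Math.min(Math.max(t, -1), 1);
  const index = Math.round(((t + 1) / 2) * (FRAMES - 1));
  // The left/right difference drives the x position of the gradient
  // mask; speedFactor tunes the sensitivity of that movement.
  const maskX = (DL - DR) * speedFactor;
  return { index, maskX };
}
```

At rest (D equal to D0) the sketch sits on the middle frame; pushing or pulling sweeps toward either end of the sequence.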

Code Repository:







Road Hawgs

Creation and Computation | Experiment 1
By: Catherine Reyto & Arshia Sobhan

Project Description

Road Hawgs is a two-team game played using phones as physical pieces on the game field. Each team tries to reach its own finish line through obstacles on the field, while trying to prevent the other team from succeeding. In this game, phone displays are used as multi-dimensional game pieces, giving players access to all the tools they need in one place.

The core concept of this game is group game-play using phones, which are often considered a cause of people's isolation in social contexts – even in a regular multi-player computer game, players stare at their own screens without any physical interaction. This game combines the togetherness of traditional board games with the new possibilities that a mobile phone screen can provide as a game piece.

Project Context

We were both interested in the idea of people using their phones as tactile objects to physically connect with one another, but it took us a few iterations before we landed on the roadblock game.  

Group game-play is reminiscent of childhood, when structured social activities were routine and commonplace. We had this in mind – the act of people gathered around, say, a puzzle or a board game – in considering how we would approach this experiment. We were interested in exploring what the experience of group games feels like: being lost in focus but with occasional bursts of bickering, cheering or laughter, competitive impulses, and most of all a strong sense of ‘togetherness’. We imagined a group leaned in shoulder-to-shoulder, far removed from the isolating tendencies typically associated with personal-use digital screens of any sort. We started thinking about the environments that tend to go hand-in-hand with group game-play – living-room floors, basement rec rooms, cottages – and eventually got to the idea of camping. This, in turn, led us to our first iteration of an image-matching concept. The idea involved the whole group ‘assembling’ a tent (an image of one, that is, with all 20 phone screens making up the canvas), working together to build it out piece by piece, not far off from the real-life experience of threading the poles hand-over-hand to set up a tent in the woods. And just like real camping, when the work is done and it’s time to kick back and enjoy the view (or at least the fire), we thought of setting up all the laptop screens to mimic the experience with an emitted glow or a panoramic image. But we were dissuaded by the technical challenges that this level of orchestration involved. It entailed either networking, which was ruled out, or a level of programming beyond our three weeks with p5.



We were still attached to the collaborative puzzle work of image-matching and spent the next session brainstorming. We were both drawn to visual patterns and the power of code to alter complex graphics into drastically new mosaic designs with just a few taps (or clicks). We really liked the idea of working with Islamic tile patterns, both on account of their captivating beauty and because, like code, the designs are grounded in mathematical principles. But as Jessie and Kaitlin found with their scavenger hunt map, we anticipated that the variety of screen sizes would be too disruptive to the visual rhythm.

Photo Credit:
Illustrating the visual interference caused by the framing around various phone screen displays

We also became increasingly aware of an overarching issue beyond screen-size interference. For the class to interact with the screens towards a common goal, we both felt a challenge was needed – not simply for the sake of competition or upping the ante, but to continue with our ideas about group game-play. We wanted to see our classmates working together, sharing frustrations and accomplishments as they competed in large groups.

Figuring out our challenge led us through several iterations of wheel-spinning and creative frustration. We kept falling short of the target with concepts that were visually stimulating but too easily achieved, to the point of risking complacency. We frequently turned to the work of artist and designer Purin Phanichphant for inspiration, eventually coming across the artwork that led us to the idea of matching pieces. Phanichphant’s Optical Maze Generator allowed us to make that final connection, though at first only in an abstract sense. As soon as we saw the maze and how it worked, we unanimously agreed to build our idea from it. We tapped around on the screen, rotating the squares of a grid of shape patterns, and began to visualize the idea of positioning parts of vertical and horizontal pieces.

Phanichphant’s Optical Maze Generator


Designing the Game

A vision came to mind from the old version of the PC game SimCity (Will Wright, 1989). The game involves strategically building a metropolis on an allotted budget in order to grow the population and, in turn, increase that budget to continue expansion and growth. One of the greatest satisfactions came from laying down pieces of road pavement, because it signified enough profit to invest in infrastructure.

SimCity, 1989 Video Game (Photo Credit:

We cut up several sheets of paper into phone screen-sized portions, then plotted out a system of match points (mid-width, mid-height, and all corners) for each screen. The shapes could be combined in many ways by simply rotating the sheets of paper to match connection points from one road piece to another. The strategic positioning of the road pieces was devised with the building blocks of Tetris (designed by Alexey Pajitnov) in mind: minimal variation (we used four shapes), relying greatly on rotation for combining point-to-point shape connections.
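
The rotation-and-matching logic above can be sketched in a few lines; the edge names and functions here are illustrative, not the original p5 code:

```javascript
// A road piece's connections are named by the edges they touch.
// A quarter-turn clockwise maps each edge to the next one around.
const EDGES = ['top', 'right', 'bottom', 'left'];

function rotateCW(connections) { // e.g. ['top', 'bottom'] for a straight
  return connections.map(e => EDGES[(EDGES.indexOf(e) + 1) % 4]);
}

// Two adjacent pieces connect when one piece's edge faces the
// other's opposite edge (left meets right, top meets bottom).
function connects(edgeA, edgeB) {
  return EDGES.indexOf(edgeA) === (EDGES.indexOf(edgeB) + 2) % 4;
}
```

Rotating a straight piece turns a top-bottom road into a left-right one, which is essentially what players did by physically rotating their phones.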




To create a bit of context and make the game more interesting, we threw in a few literal roadblocks, in the form of a river, a passing train, and construction/road work. Each ‘blocker’ presented its own challenge: rivers and train crossings need bridges, unless you choose to simply wait for the train to pass (and miss a turn), and construction sites limit or alter your route. We added ‘relief’ pieces to the mix: a bridge for crossing the river and train tracks, and a ‘bribe’ to override the construction.

With our pieces laid out, we felt good about having everything we needed to make a game simple yet clever enough that we could imagine it actually being played outside the classroom, by kids and adults alike. We just needed to work out the game rules, and we quickly learned that there is nothing simple about that. Game design is a puzzle in itself – a story with a definitive beginning, middle and end that needs a delicate balance of pain and gain points. We wanted to focus on the collective experience of the whole class but keep the element of competition a priority. To solve this, we divided the class into two large teams pitted against one another on the same road. The tricky part came in trying to define their common goal. Was it to gain more distance (finish line), or more points (flags)? We also had to keep the demo time in mind, which meant omitting the luxury of a first-round trial run. A complex set of rules could make for a far more interesting game, but we were always aware of having to keep it at a basic level. We also added the factor of luck by having a dice toss determine the number of moves each team has in a turn.

The title of the game has a double meaning. According to Wikipedia, a road hog is a motorist who drives recklessly or inconsiderately, making it difficult for others to proceed safely or at a normal speed. Since the goal of the game is to be the first team to reach the finish line, the players will be placing pieces haphazardly, their strategy in selection curtailed by the pressure of the group (like the round-robin scenario in table tennis). Because both teams are ‘building’ the same road, they are detrimentally dependent on one another to win, thus making a play on the term ‘hog’, as in to hoard for oneself. That concept was inspired by the billiards game “9-Ball”, which takes a non-linear approach to winning (rather than a cumulative tally of points).

Final Version of the Game

After discussing different scenarios, we finalized these rules for the game:

  • Two teams are differentiated with two colours (team pink and team green)
  • The teams build the same road in turns
  • The number of moves in each turn is determined by a dice toss
  • Each team has its own respective finish line (placed side by side).
  • There are some obstacles in the field that prevent teams from going straight
  • Each team can use as many blockers as they want to deter the other team
  • When a team is blocked, they need to use a relief tool to get past the block
  • Teams have to physically use their phones in the game. They rotate them when finding the desired direction of a road piece, and they stack one phone on top of another when using blockers and reliefs (ie. bridge over the river).


Blockers:
  • Dynamite: Destroys the last three moves (road pieces)
  • River: Blocks the road (“bridge” is needed to pass)
  • Train: Blocks the road (“bridge” is needed to pass)
  • Construction: Limits the directions to continue (“bribe” can be used to pass through in any direction)


Reliefs:
  • Bridge: to pass the river and the train
  • Bribe: To pass through construction in any desired direction




Using p5 and Technical Challenges

As far as p5 was concerned, in spite of our limited knowledge, we were pretty good at communicating approach strategies. We caught one another if an idea seemed out of scope, and Arsh really stepped up when it came to tackling challenges like adding a swipe mechanism. The swipe was the fundamental feature needed for easy, intuitive game-play, as well as a great solution for simplifying our navigation. We aimed to keep every aspect of the game as minimal as possible because we anticipated losing time to explaining the game rules in the demo.

After finalizing the tools, we designed a simple navigation system using tap and swipe. Players have three separate tabs – for road pieces, blockers and reliefs – that can be accessed by swiping left and right. In each tab, they can then tap to toggle through subsets (i.e. road shapes, blocker types). Although tap was quite easily achieved using the “touchStarted” event and variables to loop the toggle, the swipe function was not very straightforward. After some searching and testing, we finally used an example from Shiffman incorporating hammer.js. It enables swipes in all four directions and worked properly on iOS and Android in all browsers. We only needed the left and right swipes to give access to blockers and reliefs, with road pieces being the default toolset on the home screen.
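
A simplified sketch of that tab/toggle state logic (the real swipe events come from hammer.js and taps from p5's touchStarted(); the names and piece counts here are illustrative, not the original code):

```javascript
// Three tabs of tools; tapping cycles through the pieces in a tab,
// swiping left/right moves between tabs.
const TABS = ['roads', 'blockers', 'reliefs'];
const PIECES = { roads: 4, blockers: 4, reliefs: 2 };

let tab = 0;   // current tab index (roads is the home screen)
let piece = 0; // current piece within the tab

function onSwipe(direction) { // 'left' or 'right', from hammer.js
  if (direction === 'left') tab = (tab + 1) % TABS.length;
  if (direction === 'right') tab = (tab + TABS.length - 1) % TABS.length;
  piece = 0; // reset the toggle when the tab changes
}

function onTap() { // called from p5's touchStarted()
  piece = (piece + 1) % PIECES[TABS[tab]];
}
```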

The dice toss was also executed in p5, using the shake gesture to mimic a real-life dice toss. The only factor that needed tweaking was the shake threshold (setShakeThreshold()). After a bit of ‘road’ testing, we settled on a threshold of 40. But for presentation’s sake, yes – Nick had a good point: a real die would have sufficed.
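
The roll itself reduces to a uniform pick from 1 to 6; in p5 it would be triggered from deviceShaken() after setShakeThreshold(40), as described above. A minimal sketch:

```javascript
// Triggered from p5's deviceShaken() once the acceleration change
// exceeds the threshold set by setShakeThreshold(40).
function roll() {
  return Math.floor(Math.random() * 6) + 1; // a fair six-sided die
}
```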

We felt a little restrained by our limited skill level. There were plenty of cute extras we had to rule out, like small animals scurrying across the road pieces as idle animation. We were both eager to challenge ourselves with p5, but the time restrictions added a precarious element to our codebase. It was, and still is, apparent that we could do with some refactoring, as doing so would lead to a free playground for adding and experimenting. Because we were sharing the codebase, and some of the code had been pulled from elsewhere (the foundation of the swipe feature was courtesy of Shiffman), we sometimes hesitated to tamper with one another’s code. But we also worked really well at overcoming issues in the code when we were able to sit down together to work through them.

Presentation Reflection

Presenting first meant it was really difficult to gauge how to make the best use of our time. Right off the bat, it was apparent that our projector tutorial of the game rules had been too detailed. In hindsight, it would have been more efficient to lead our classmates straight to the QR codes so there would be ample time for everyone to figure out the game in a hands-on trial run.

It was a painful oversight that we hadn’t thought to load the QR code for the die into one of our own phones before we started the game. We didn’t want to interrupt the flow of the lineups, as we were wary of how much time we had left. This led to the call-out of ‘fake’ die rolls – the sort of on-the-spot thinking that happens in a worked-up presentation.

In spite of what became a bit of a chaotic moment, it was really satisfying to see the game successfully play out. We had anticipated the long start where the road needed to grow close enough to the finish line before the real fun of the game kicked in. In our own test runs of the game, limited to just two phones, we could already see that we needed dispersed pain/pressure points to overcome that issue. We resolved that becoming master game designers might take a few more iterations yet. But in the meantime, we had achieved what we set out to do: we got to watch our classmates compete and cheer and laugh as they used their phones like blocks from a classic board game.







Shiffman. (n.d.). hammer.js swipe. Retrieved from p5.js Web Editor:

Hammer.js. (n.d.). Retrieved from Swipe Recognizer:

Pajitnov, A. (1984, June 6). Tetris Analysis. Retrieved from

Phanichphant, P. (n.d.). Experiments with P5.js. Retrieved from

Phanichphant, P. (n.d.). Optical Maze. Retrieved from


Luke Stern, S. W. (2015). Game of Phones. Retrieved from

Wright, Will. (February 2, 1989) SimCity. DOS, Maxis