Minimaforms Petting Zoo

Case Study by Orlando Bascuñán, Bijun Chen


General Overview:

Minimaforms was founded in 2002 by brothers Stephen and Theodore Spyropoulos as an experimental architecture and design practice. Using design as a mode of enquiry, the studio explores architecture and design that can enable new forms of communication. Embracing a generative and behavioral approach, the studio develops open systems that construct participatory and interactive frameworks engaging the everyday.

Pushing the boundaries of art, architecture and design, the work of Minimaforms is interdisciplinary and forward-thinking, exploring digital design and fabrication along with communication technologies to construct spaces of social and material interaction. In 2010, Minimaforms was nominated for the International Chernikhov Prize in architecture. In 2008 their project Memory Cloud was named one of the top ten international public art installations by the Guardian.

Recent projects include two thematic pier landmarks and the illumination concept for Renzo Piano's master-planned 760-acre National Park in Athens, a large-scale land art work in Norway, a vehicle in collaboration with artist Krzysztof Wodiczko, a behavior-based robotic installation for the FRAC Centre, and an immersive ephemeral environment for the city of Detroit. The work of Minimaforms is in the permanent collections of the FRAC Centre (France), the Signum Foundation (Poland) and the Archigram Archive (UK). In 2008 the Telegraph also named Memory Cloud one of the top ten international art installations. Recent exhibitions have included work shown at the Museum of Modern Art (New York), Detroit Institute of Arts, ICA (London), FRAC Centre (France), Futura Gallery (Prague), Slovak National Gallery (Bratislava), and the Architecture Foundation (UK). They have been featured in international media including the BBC, BBC Radio's Robert Elms Show, Wired Magazine, Fast Company, the Guardian, Blueprint, and Icon Magazine. They were named Creative Review's "One to Watch."

Petting Zoo FRAC Centre

Petting Zoo is the latest work developed by experimental architecture and design studio Minimaforms. The project is a speculative, life-like robotic environment that raises questions of how future environments could actively enable new forms of communication with the everyday. Artificially intelligent creatures have been designed with the capacity to learn and explore behaviors through interaction with participants. Within this immersive installation, interaction with the pets fosters human curiosity and play, forging intimate exchanges that are emotive and evolve over time. Beyond technology, the project explores new forms of enabled communication between people and their environment.

The installation exhibits life-like attributes through forms of conversational interaction, establishing communication with users that is emotive and sensorial. Conceived as an immersive installation environment, social and synthetic forms of systemic interaction allow the pets to engage and evolve their behaviors over time. Pets stimulate participation through animate behaviors communicated via kinesis, sound and illumination. These behaviors evolve through interaction, enabling each pet to develop a personality. Pet interactions are stimulated by human users or by other pets within the population. Intimacy and curiosity are explored as enabling agents that externalize personal experience through forms of direct visual, haptic and aural communication.
Early historical experiments that examine similar issues can be found in the seminal cybernetic work of the Senster, developed by the British cybernetic sculptor Edward Ihnatowicz; Gordon Pask's The Colloquy of Mobiles; and W. Grey Walter's first electronic autonomous robots (tortoises), called Elmer and Elsie. Petting Zoo continues Minimaforms' ongoing research into participatory and enabling frameworks examining cybernetic and behavior-based design systems, which can also be found in other works of theirs such as Becoming Animal, exhibited in MoMA's Talk to Me show, and Memory Cloud (ICA London, 2008; Detroit, 2011).

Internal patterns of observation allow the pets to synchronize movements and behavioral responses. Through active prototyping, a correlated digital/analogue feedback loop has been developed that allows the system to evolve relationships and avoid repetitive controller tendencies.

Spatial Interfacing
Awareness of participants is enabled through camera tracking and data scanning that identifies human presence within contextual scenes. Real-time camera streams are processed and coupled with blob tracking and optical flow analysis to locate the positions and gestural activity of participants. Inactive participation of a performer in the environment can stimulate responses of disinterest and boredom.
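The installation's actual code is not public, but the pipeline described above can be sketched in a toy form: difference two camera frames to get a motion mask, then group the moving pixels into blobs and report each blob's centroid. Everything here (thresholds, frame size, the flood-fill labelling) is an illustrative assumption, not Minimaforms' implementation.

```python
import numpy as np

def find_participants(prev_frame, frame, motion_thresh=30, min_area=20):
    """Toy version of the tracking pipeline: frame-difference the camera
    stream, threshold it into a binary motion mask, then report the
    centroid and area of each moving blob."""
    # 1. Motion mask: pixels that changed more than the threshold.
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    mask = diff > motion_thresh

    # 2. Naive connected-component labelling (4-connectivity flood fill).
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        current += 1
        stack = [(y, x)]
        while stack:
            cy, cx = stack.pop()
            if not (0 <= cy < mask.shape[0] and 0 <= cx < mask.shape[1]):
                continue
            if not mask[cy, cx] or labels[cy, cx]:
                continue
            labels[cy, cx] = current
            stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]

    # 3. One (centroid, area) record per blob large enough to be a person.
    blobs = []
    for lab in range(1, current + 1):
        ys, xs = np.nonzero(labels == lab)
        if len(ys) >= min_area:
            blobs.append(((ys.mean(), xs.mean()), len(ys)))
    return blobs

# Synthetic 64x64 "camera" frames: one participant moves into view.
prev = np.zeros((64, 64), dtype=np.uint8)
cur = prev.copy()
cur[10:20, 30:40] = 255              # a 10x10 bright region appears
print(find_participants(prev, cur))  # one blob near (14.5, 34.5)
```

A production system would use a library such as OpenCV for blob detection and dense optical flow rather than hand-rolled flood fill, but the shape of the data (per-blob position and size per frame) is the same.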

Multi-User Interaction
Collective participation is enabled by the ability of the system to identify and map, in real time, the number of performers within a durational sequence.
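Keeping a count of performers over a durational sequence means associating each frame's detections with ongoing tracks. A minimal sketch, assuming the per-frame input is a list of blob centroids (the `max_jump` distance and greedy nearest-neighbour matching are invented for illustration):

```python
import math

def update_tracks(tracks, detections, max_jump=50.0):
    """Toy multi-user bookkeeping: match each detection to the nearest
    existing track, or start a new track, so the performer count
    persists across frames."""
    unmatched = list(detections)
    for tid, pos in list(tracks.items()):
        if not unmatched:
            break
        # Greedy nearest-neighbour association.
        best = min(unmatched, key=lambda d: math.dist(pos, d))
        if math.dist(pos, best) <= max_jump:
            tracks[tid] = best
            unmatched.remove(best)
    for det in unmatched:                    # a new performer entered the scene
        tracks[max(tracks, default=0) + 1] = det
    return tracks

tracks = {}
update_tracks(tracks, [(10.0, 10.0)])                # first performer appears
update_tracks(tracks, [(12.0, 11.0), (50.0, 60.0)])  # a second one joins
print(len(tracks))  # 2 performers currently mapped
```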


Context/Related Projects

The Furby Experiment

In its Talking to Machines episode, Radiolab reported on the "Furby experiment," in which they presented a Barbie, a hamster and a Furby to six brave kids in order to learn how compassionate we are toward machine-simulated feelings.

The experiment consisted of the kids holding the three "subjects" upside down for as long as they felt comfortable, acting as a kind of emotional Turing test.

The results were that the kids could hold the Barbie indefinitely, or until they got tired. They could hold the hamster for only a painful eight seconds. The Furby they could hold for roughly a minute, placing it closer to the hamster than to the Barbie.

The kids' statements indicated that the Furby's reaction to being held upside down made them uncomfortable, to the point that they felt bad about it.

Caleb Chung, the creator of the Furby, explains the reactions starting with what he considers the three things an object needs for a human to perceive it as real:

  1. Feel and show emotions: The Furby accomplishes this with audio, speech and the movement of its eyes and ears.
  2. Awareness of itself and the environment: When there is a loud sound, the Furby will say "Hey, loud sound." It also "knows" when it is held upside down.
  3. Change over time: When first activated, the Furby speaks Furbish, a gibberish language, and slowly replaces it with English. There is no real language comprehension, but it acts as if it is acquiring human language.

In the experiment, all three points come into play. The Furby is aware that it is upside down (#2) and expresses fear (#1). "Me no like," it says, until it starts crying (#3), giving a compelling impression of emotions.
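The three points can be read as a tiny state machine: a stimulus triggers an emotional response that escalates the longer it persists. The sketch below is an illustrative guess at that structure, not the Furby's actual firmware logic; the stimulus names and escalation steps are invented.

```python
class ToyFurby:
    """Toy model of Chung's three points: emotion, awareness, change."""

    def __init__(self):
        self.distress = 0                 # point 3: internal state changes over time

    def sense(self, stimulus):
        # Point 2: awareness of itself and the environment.
        if stimulus == "loud sound":
            return "Hey, loud sound!"
        if stimulus == "upside down":
            self.distress += 1
            # Point 1: show emotion, escalating the longer it is held.
            if self.distress < 3:
                return "Me no like!"
            return "*crying*"
        return "..."

furby = ToyFurby()
print([furby.sense("upside down") for _ in range(4)])
# escalates from "Me no like!" to "*crying*"
```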

Deep Dream by Google

DeepDream is a computer vision program created by Google which uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dreamlike, hallucinogenic appearance in the deliberately over-processed images. Google's program popularized the term (deep) "dreaming" to refer to the generation of images that produce desired activations in a trained deep network, and the term now refers to a collection of related approaches.
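The core trick is worth separating from the imagery: the network's weights are frozen, and gradient ascent is run on the input image so that a chosen activation grows. A minimal numpy sketch, with a single hand-made linear filter standing in for a trained convolutional layer (the real DeepDream uses a deep network such as Inception):

```python
import numpy as np

rng = np.random.default_rng(0)
filt = rng.standard_normal((8, 8))       # stand-in for a learned feature detector
image = np.zeros((8, 8))                 # start from a blank "image"

for _ in range(100):
    activation = np.sum(filt * image)    # the unit we want to excite
    grad = filt                          # d(activation)/d(image) for a linear unit
    image += 0.1 * grad                  # ascend: amplify what the unit responds to

# The image now strongly resembles the pattern the unit detects.
final = np.sum(filt * image)
print(final > 0)   # True: the activation has been driven up
```

With a real network the gradient is computed by backpropagation through many layers, which is what produces the characteristic eyes-and-dog-faces texture, but the update rule is the same.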


A.I. Duet by Google

This experiment lets you make music through machine learning. A neural network was trained on many example melodies, learning musical concepts and building a map of notes and timings. You play a few notes and see how the neural net responds. At the time of writing, Google was working on putting the experiment on the web so that anyone could play with it; in the meantime, the code is available and the project video explains how it works.

Quick, Draw! by Google

This is a game built with machine learning. You draw, and a neural network tries to guess what you’re drawing. Of course, it doesn’t always work. But the more you play with it, the more it will learn. It’s just one example of how you can use machine learning in fun ways.


Technical overview

The installation used Kinects to track users' positions, movements and gestures. With this spatial awareness, the studio created an evolving artificial-intelligence behavior that expressed moods and communicated with humans using light, sound and movement. The tentacles were moved by strings pulled by motors.
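A minimal sketch of the kind of behaviour loop the overview describes, mapping hypothetical Kinect-style distance readings to a mood that drives the three output channels (light, sound, motor-driven tentacles). The mood names, thresholds and output values are all invented for illustration; the real system's behavior evolves over time rather than following a fixed table.

```python
def choose_mood(user_distance_m, seconds_idle):
    """Pick a mood from (assumed) Kinect readings: distance to the
    nearest user, and how long the scene has been inactive."""
    if user_distance_m is None or seconds_idle > 30:
        return "bored"       # nobody engaging: withdraw the tentacles
    if user_distance_m < 0.5:
        return "excited"     # close interaction: bright light, fast motion
    return "curious"         # someone nearby: slow beckoning movement

def actuate(mood):
    # Each mood is expressed through the three channels named above.
    table = {
        "bored":   {"light": 0.1, "sound": "low hum",  "motor_speed": 0.0},
        "curious": {"light": 0.5, "sound": "chirps",   "motor_speed": 0.3},
        "excited": {"light": 1.0, "sound": "warbling", "motor_speed": 0.9},
    }
    return table[mood]

print(actuate(choose_mood(0.3, seconds_idle=2)))   # excited: full light, fast motors
```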



Minimaforms Studio. (n.d.). Retrieved December 11, 2016, from

Petting Zoo by Minimaforms. (n.d.). Retrieved December 11, 2016, from

Furbidden Knowledge. (n.d.). Retrieved December 11, 2016, from

Mordvintsev, Alexander; Olah, Christopher; Tyka, Mike (2015). “DeepDream – a code example for visualizing Neural Networks”. Google Research. Archived from the original on 2015-07-08.

A.I. Duet – A.I. Experiments. (n.d.). Retrieved December 11, 2016, from

Quick, Draw! – A.I. Experiments. (n.d.). Retrieved December 11, 2016, from