River Styx (2018)


by Shikhar Juayl, Tyson Moll, and Georgina Yeboah

Figure 1. River Styx being presented at the “Current and Flow” end-of-semester grad show at OCAD U’s Grad Gallery on December 7th, 2018.

A virtual kayaking experience in the mythical world of the River Styx. Navigate through the rivers of fire, hate, and forgetfulness using our handmade kayak controller, steering through rubble, rock and ruins. Discover jovial spirits, ancient gods and the arms of drowning souls across the waters between the worlds of the living and the dead.

The project environment was built in Unity using free 3D assets and characters created from volumetric footage. We used the Xbox One Kinect and software called Depthkit in the game:play lab at OCAD U to produce the mysterious animated figures in the project. The vessel is operated with Tyson’s Arduino-based kayak controller, newly revised with 3D-printed parts and more comfortable controls.

GITHUB LINK

*The scripts and 3D objects used in the project are available via GitHub, but due to the size of several assets, the Unity project is not included.

Figure 4. Sketched diagram of the circuit for the paddle controller.

PROCESS JOURNAL

Monday, Nov 26

When we first convened as a group, we discussed the possibility of taking the existing experience that Tyson created for his third Creation & Computation experiment and porting it over to Unity to take advantage of the engine’s capacity for cutting-edge experiences. As Unity is widely used in the game development industry as well as for simulations, we saw an excellent opportunity to explore and develop for new technologies we had access to at OCAD U, such as virtual reality and volumetric video capture. We also thought it would be exciting to use Arduino-based controllers in a game-development project; a cursory web search revealed that Uniduino, a Unity plugin, was made for this purpose.

We also wanted to explore incorporating a narrative element into the environment, and to consider re-adapting the paddle control concept for a brand-new experience. The River Styx was the first idea to come to mind; it married the water-sport concept with a mythological theme that could be flexibly adjusted to our needs. Georgina had also worked on a paper airplane flight simulator for her third C&C experiment, which inspired us to look at alternative avenues for creating and exploring a virtual space, including gliding. We agreed to reconvene after exploring these ideas in sketches and research.

Tuesday, Nov 27

We came up with several exciting ideas for alternative methods of controlling our ‘craft’ but eventually came full-circle and settled on improving the existing paddle controller. The glider, while fun in concept, left several questions regarding how to comfortably hold and control the device without strain. Our first ideas for the device imagined it with a sail. We then considered abstracting the concept of this controller in order to remove extraneous hardware elements. VR controllers, for example, look very different from the objects that they are supposed to represent in VR, effectively making them adaptive to various experiences and more wieldy. As we continued to explore these ideas, it occurred to us that the most effective use of our time would be to improve an already tried-and-true device and save ourselves the two or three days it would take us to properly develop an alternative. Having further researched the River Styx lore and mythos, we were also very excited to explore the concept with the paddle controller and resolved to approach our project accordingly.


Wednesday, Nov 28

We visited the game:play lab at 230 Richmond Street for guidance in creating volumetric videos with the Kinect. Second-year Digital Futures student Max Lander was kind enough to guide us and give us pointers about using volumetric videos in Unity. Later that day, we wrote a serial port connection script to start integrating Tyson’s old paddle code into Unity.

Once that was completed, we started looking into adding bodies of water to our environment using MP4 videos. The quality turned out not to be what we were going for, so we switched to water assets from the standard Unity packages and began building our scenes.

For the paddle, we measured the elements of the original controller with a caliper and remodelled them in Rhinoceros for 3D printing. Although the prospect of using an authentic paddle appealed to us, we chose to keep the existing PVC piping and wood dowel design to reduce the time spent searching for just the right paddle and redesigning the attached elements. To improve communication between the ultrasonic sensor and the controller, the splash guards from the original kayak paddle controller were properly affixed to the dowel, as was the paddle. The ultrasonic sensor essentially uses sonar to determine distance, so it was important that the splash guards sit perpendicular to its signal to ensure the sound reflected properly back. Likewise, we created a more permanent connection between the paddle headboards and the dowel, and a neatly enclosed casing for the Arduino and sensors.
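For readers unfamiliar with these sensors, the sketch below shows the basic echo-timing read for an HC-SR04-style ultrasonic sensor. It is a minimal illustration rather than our exact code, and the pin numbers are placeholders.

```cpp
// Minimal ultrasonic distance read (HC-SR04-style sensor).
// TRIG_PIN / ECHO_PIN are illustrative, not necessarily our wiring.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  // Fire a 10-microsecond trigger pulse, then time the echo's round trip.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  long duration = pulseIn(ECHO_PIN, HIGH);     // echo time in microseconds
  float distanceCm = duration * 0.0343 / 2.0;  // sound travels ~0.0343 cm/us
  Serial.println(distanceCm);
  delay(50);
}
```

This is why the splash guards mattered: a tilted reflector scatters the pulse, and the `pulseIn` timing becomes noisy or times out.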

The process of printing the materials took about five days, as not all printers were accessible and several parts needed to be redesigned to fit the printer beds available in the Maker Lab. We also found that the roll of filament we had purchased from Creatron for the project consistently caused printing errors compared to others, which wasted significant time troubleshooting and adjusting the printers.


Thursday, Nov 29

This was our first session finding Unity assets and integrating them into the Unity editor. We used a couple of references to help shape and build the worlds we wanted to create, and managed to find a few assets we could work with from the start, such as our boat. As we added more assets to our environment, we noticed that some were heavier than others and caused a lot of lag when we ran the game, so we decided to use lower-poly 3D models. Once we were satisfied with an environment, we added a first-person (FPS) controller from Unity’s standard assets to the boat and began to navigate the world we had created. We wanted to experience what exploring these rivers through this view would be like, and to later replace the FPS controller with our customizable Arduino paddle.

Figure x. Shikhar working on the River Styx environment.

Friday, Nov 30

Hoping that it would simplify our lives, we purchased Uniduino, a premade plugin from the Unity Asset Store. This turned out not to be the case: its interface and documentation seemed to imply that we would need to program our Arduino through Unity instead of working with our pre-existing code developed in the Arduino IDE and its serial output. We ended up resolving this with the help of a tutorial by Alan Zucconi; we transmitted a string of comma-separated numbers associated with the variables that operate the paddle and split them with string-handling functions in a C# script.
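For illustration, the Arduino side of this approach can be as simple as the sketch below. The sensor reads are stand-ins (the real controller reads the gyroscope and ultrasonic sensor), but the comma-separated line format is the idea we borrowed.

```cpp
// Illustrative sender: one comma-separated line per loop, split on the
// Unity side with C# string handling. A0/A1 are stand-in readings.
void setup() {
  Serial.begin(9600);
}

void loop() {
  int tilt = analogRead(A0);      // stand-in for the gyro value
  int distance = analogRead(A1);  // stand-in for the ultrasonic value
  Serial.print(tilt);
  Serial.print(',');
  Serial.println(distance);       // Unity receives e.g. "512,143"
  delay(20);                      // throttle to keep the buffer shallow
}
```

On the Unity side, each received line is split on the comma and parsed into the variables that drive the kayak.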

After some initial troubleshooting, we managed to get the gyroscope and ultrasonic sensor integrated with Unity by applying rotation and movement to an onscreen cube. The only caveat was that there was a perceptible, growing lag, which we decided to resolve on a later date.

 

Saturday, Dec 1st

As our environment for River Styx grew, we continued to discuss adding the other rivers of the Greek mythological underworld: the River of Pain, the River of Forgetfulness, the River of Fire, and the River of Wailing. We then started to brainstorm a map layout for these multiple rivers and what reflective element each river should have. Our discussions expanded into game design versus an exploratory experience; we wanted to see what aspects we could implement to make it more of a game and less exploratory. However, weighing the development time we had left against the risk of overcomplicating things for ourselves, we decided to keep it an exploratory experience.

 

Monday, Dec 3rd

As we continued developing the other scenes, we came across assets to help distinguish our otherwise indistinguishable river environments: lava for the River of Fire, fog for the River of Forgetfulness. With all these assets and the possible addition of volumetric videos, we decided we needed a powerful computer to run our Unity project and reduce lag while running and working on it. We considered asking our professors for one, but the only machine capable of handling our needs was in the DF studio, where, without administrative permissions, we could not upload additional software or install the drivers needed to resolve serial port issues. To avoid these bottlenecks we decided to use Tyson’s personal PC tower to continue work on the project, and later for the installation at our upcoming grad show.

We also converted the kayak controller code from JavaScript to C#, in an uncalibrated state, for use in the Unity game engine. The first movement we saw in Unity’s play window was noticeably slow, but it indicated that our translation of the code worked. For convenience, the variables we would need to calibrate the device were declared ‘public’ in our code, which allowed us to edit them manually from the Inspector window in Unity without the risk of adjusting a ‘private’ variable in error.

Tuesday, Dec 4th

We reconvened in the game:play lab to capture volumetric videos with the Xbox One Kinect and Depthkit and import them into Unity. Depthkit comes with several features for manipulating data captured from the Kinect camera, including a slider for cutting out objects beyond a particular distance, along with other undesirable artifacts. To use the captures as looping animations, we tried to keep our recordings in sync with a ‘neutral’ state determined at the start, to avoid having the footage jump significantly between the first and last frames. Given that the Kinect and Depthkit render the captured information as a video file, we also needed to be mindful of recording times and the number of objects we wanted to include, to reduce the performance impact.

Some of the animations we captured included hands, exaggerated faces, ‘statues’ of god-like figures, and silly dances. We frequently took advantage of the clipping area to isolate particular limbs in frame. In one instance, we created a four-armed creature using two subjects: one in frame, and the other hidden behind them in cropped space, contributing a second set of arms.


Wednesday, Dec 5th

At this stage we had three official scenes created, and the paddle’s parts were ready to be assembled after going through the laser cutter. We began writing teleport code that would let the user jump from one cave entrance to the next in each scene, but decided against including it: we wanted the user to explore without feeling goal-driven to get from one place to another. Instead, we decided to act as facilitators and transport them whenever they wanted, adding a key press that teleports the user from one scene to the next.

We had plenty of fun using the Zero Days Look for our Depthkit captures, which was created for the VR film of the same name. It let us move beyond the photographic appearance generated by default and incorporate colour, lines, points, and shapes into the volumetric renditions. The more we worked with it, the more familiar we became with its interface and with how our adjustments would look in-game, as not all features of the plugin rendered directly in Unity’s world view window during editing.


Thursday, Dec 6th – Friday, Dec 7th

Prior to showcasing our project, we moved all of our Unity assets and code to Tyson’s personal PC tower and continued our work from there. We began integrating the volumetric videos into Unity and play-tested the environment to get a feel for how comfortable it was to navigate with the paddle. Feeling that the kayak’s motion was a bit slow for public demonstration, we tweaked the speed increment, friction, and maximum motion until it felt fluid.

Reception for the project was positive overall. Interestingly, children were able to pick up the controls with relative ease. Since the ultrasonic sensor targeted an area beyond the reach of their smaller hands, they were able to grip the paddle device wherever they desired. This could also be attributed to a lack of preconceptions about how the device works; one of the most experienced paddlers seemed to have the most difficulty operating it.


 


Project Context

TASC: Combining Virtual Reality with Tangible and Embodied Interactions to Support Spatial Cognition by Jack Shen-Kuen Chang, Georgina Yeboah, Alison Doucette, Paul Clifton, Michael Nitsche, Timothy Welsh, and Ali Mazalek.

Tangibles for Augmenting Spatial Cognition (T.A.S.C.) is an ongoing grant project conducted at the Synaesthetic Media Lab in Toronto, Ontario. Led by Dr. Ali Mazalek, the team investigates spatial abilities such as perspective-taking and way-finding, and creates tangible counterparts to complement the spatial ability being assessed in VR environments. The aim is to explore the effects that tangibles within VR spaces have on participants’ spatial cognitive abilities through physical practice, going beyond 2D spatial testing.

Our project relates to the idea of integrating a customizable controller whose purpose complements the situation it is used in. For example, in the T.A.S.C. project the user controls tangible blocks to solve a multi-perspective-taking puzzle. In our River Styx environment, the paddle complements its environment while also increasing embodiment in virtual space. We also designed the paddle to behave like an actual paddle: if it dips low enough to either the left or right, the kayak rotates in that direction while also moving forward.

T.A.S.C. and River Styx both explore the physicality and mechanics of a tool and integrate its use into the environment it belongs to. We hope to later integrate VR into River Styx to heighten the immersion of paddling through such environments.

The Night Journey by Bill Viola and the USC Game Innovation Lab in Los Angeles.

The Night Journey is an experimental art game that uses both game and video techniques to tell the story of an individual’s journey towards enlightenment. With no clear paths or objectives, and with underpinnings in historical philosophical writings, the game focuses its core narrative on creating a personal, sublime experience for the individual participant. Actions the player takes are reflected in the game’s world.

The techniques incorporated from video footage and the narrative premise of the game gave us inspiration for how we might tackle the scenic objectives for our project and interpret the paths we wanted players to take in River Styx.

References

  • https://docs.unity3d.com/ScriptReference/Rigidbody-isKinematic.html
  • https://forum.unity.com/threads/simple-collision-detection-blocking-movement.86712/
  • https://docs.depthkit.tv/docs/unity-plugin
  • https://www.unity3dtips.com/move-objects-in-unity/
  • http://www.uniduino.com/
  • https://www.arduino.cc/
  • https://www.zerodaysvr.com/


Cloudy and Tangled Thoughts

By Olivia Prior, Amreen Ashraf, and Nick Alexander

“Cloudy and Tangled Thoughts is an interactive piece which uses conductive fabric to explore the movement of light and space. Participants are invited to sit down and explore. Relax on a comfortable blanket and watch the clouds drift by. An array of irregular objects catch and refract the light, gently moving in relation to your position on the blanket, creating a sense of serenity.”

Audience enjoying final exhibit.


GitHub: https://github.com/alusiu/cloud-gazing

 

OVERVIEW

Cloudy and Tangled Thoughts evokes the experience of lying on a blanket, gazing at the sky, watching patterns form and dissipate in the leaves, wind, and clouds.

It consists of a blanket made from traditional and conductive textiles and a lattice of hanging geometric chimes. When participants lie or press on the blanket, lights and servo motors hidden among the chimes activate, causing them to swirl and tinkle. When more people lie on the blanket the pattern of lights and motion becomes more intricate in turn.

We succeeded in fabricating all the necessary technology, creating the code, assembling it, and proving the concept. However, the result did not live up to our vision. The team believes that the idea is strong and the tech is viable, and we will return to this project in order to develop it to a point where it meets our expectations.

 

CONCEPT

Cloudy and Tangled Thoughts started with a feeling. The team wanted to create a screenless installation that evoked a feeling of peace and wonder. We wanted to use technology to bring people together through a magical experience, in a way that was unfamiliar to the average user. We envisioned the experience of lying on a blanket watching the clouds make shapes. It was important to us not to create something simple like an on-off switch, a mechanism most people understand intrinsically, but instead to create a relationship between sensors and output that generated a sense of wonder.

The prompt for the project from the Creation & Computation class was Refine & Combine. We were to return to a previous project and expand on it. While the concept we came up with was not directly related to a previous project any of us had done, we felt confident that our previous work with code, servos, lights, sensors, and fabrication put us at the same place developmentally as we would be if this had been a prior project.

 

PROCESS

The process began with a discussion of the kind of work we wanted to create, along with the kind of skills, technology, and existing projects we wanted to carry forward. When we settled on the concept above we began to brainstorm ways to realize it.

 We knew from the beginning that we wanted to work with a blanket and conductive fabric, but we debated over what form the apparatus hanging above, which needed to interact with the conductive fabric, would take. Since we had begun by imagining looking up at clouds, we researched installations and works of art utilizing cloud imagery and looked for inspiration there.

Cloud inspiration

We decided that a series of geometric shapes would complement the organic, flowing nature of the blanket below. We envisioned multi-coloured plexiglass, laser-cut into geometric shapes, hanging like wind chimes, diffusing light from above as they drifted and tinkled.

We submitted a proposal and consulted with our professors, Kate Hartman and Nick Puckett, for their opinions on how best to proceed. Kate provisioned us with 16 sq ft of conductive fabric, Velostat, and a sewing machine to experiment with. Nick suggested that, rather than use heavy and difficult-to-cut plexiglass, we look into vellum as our cloud material, as it was light and would keep its shape after being folded.

Experimentation with vellum gave us a lot of data. We liked the way light moved through it and its versatility, and after trying several forms we settled on a triangular prism shape for our cloud-chime objects. However, we did not like the look of the vellum and wanted something more uniform and robust. We settled on a thin plastic and consulted with John Diessel in the Plastics Lab. John suggested that, since we planned to manufacture many identical objects from lightweight plastic, vacuum forming was the best process for us. We built a form out of wood with the help of Reza Safaei in the Maker Lab and returned to the Plastics Lab to begin making what we affectionately came to refer to as “the boys”, all of which is discussed in detail below.

 

Hanging apparatus

Our conductive quilt was designed to control and move elements using servos. At first we went for simple geometrical shapes and simple constructions. 

cloud shape ideation

We started out wanting to control 12 unique shapes, as we had built 12 sensors into our quilt. We became very focused on the quilt, to the point that we had used up much of our allotted time before turning our attention to the servos and hanging apparatus. Our professor Nick Puckett suggested using vellum, a material which is easy to control due to its light weight. The problem we encountered with vellum was that whenever we folded it into the shape we wanted, it would become brittle and break at the folds. We were also getting frustrated over how to control each shape with the servos and how to mount the servos on the ceiling. We considered, but decided against, laser-cutting a mount to hold the servos together, as we felt it was too late in the process to begin something we were unfamiliar with. At this point the team was getting unsure about using the servos to control the shapes. Our teammate Olivia suggested buying some fans and a relay, so that the quilt would start a fan based on where the participants sat; the fan would then blow the shapes around. We did a rapid prototype, using the vellum to construct simple modular shapes, and hung them up to see the effect. We all agreed that the effect the simple shapes created with the lights would look great.

Prototype with vellum and lighting

We decided we liked the shapes and the effect they had. We were still unsure about the fans when we looked further into buying a relay, which was expensive, and we were not sure whether we would have to write new code for the fans. In addition, the relays came with a large constraint: only one attached object per relay could be powered at a time. This, combined with the cost, led us to shelve the idea of using fans.

We made a trip to the plastics lab at 100 McCaul to consult on our simple modular shapes. John Diessel suggested we use lightweight acrylics and the vacuum-forming machine, and that we fabricate a mold that could be used with the machine. The largest size the machine could handle was 12×12 inches.

We went back to 205 and visited Reza at the Maker Lab on the 7th floor. He understood exactly what we were looking for and helped us construct a form based on our vellum prototype.

We took the form back to the plastics lab, where we were instructed on how to vacuum-form our shapes. Each sheet gave us 8 shapes, which meant we could produce many shapes quickly. We bought ten 12×12 sheets, 5 in a translucent colour and 5 in white.


Vacuum form in action


It took roughly five hours to construct and cut the shapes. The Maker Lab and the plastics lab were both closed at night, which meant we had to cut the shapes by hand using scissors. This took a long time and was physically taxing on the team.


After we had finished molding our plastic shapes, we were still unsure how to hang them from the ceiling grid of the Experimental Media Lab. We decided to use aircraft cable to hang the apparatus and to clip the shapes on with aircraft cable as well. We made a trip to Canadian Tire in the morning to buy crimpers for the cables. Unable to find the right grid to hang, we made an exploratory supply run and came across a barbecue grill. Excited by the image of three circular mounts hanging in a staggered manner, we decided to buy three barbecue grills.


By the time we bought the barbecue grills, we only had 80 hanging shapes, which, cut in half, gave us about 160. We soon realized that unfortunately that wouldn’t be enough for three grills, which meant another quick run to the plastics lab.

This is where we, as a team, should have scaled down rather than up. Building the hanging apparatus consumed a lot of time and energy. We know in retrospect that it would have been better to go with large, simple shapes rather than so many small ones. The small shapes made sense in the moment and looked good when hung, but they took a long time to construct.

 

Blanket Controller

Meanwhile we had also been fabricating the blanket. We used documentation from “Intro to Textile Game Controllers Workshop” run by Kate Hartman to fabricate analog sensors from the conductive fabric she gave us.

We built several small sensors to test, including one we sewed into a “sandwich” with regular fabric above and below, to approximate the effect of the sensor when sewn into the blanket.


The test sensors worked well, and we felt we were ready to scale up and begin fabricating full-size sensors. We laid out a large sheet of paper in order to mark and measure out the approximate size of the blanket.


We decided a size of 4 feet by 4 feet was ideal: large enough for two people to lie comfortably while not being too large to manage. We debated for some time the best way to lay out and orient the sensors, with pitches ranging from as few as four sensors arranged in quadrants to dozens arranged in small triangles.

Blanket and sensor ideas 1

sensor ideation

We settled on the final version, pictured below. It allowed us to have either end’s point of contact on the edge of the blanket, meaning we would not need to run wiring through the blanket proper. It was, we felt, a manageable number of sensors, yet enough to give us plenty of options for interactions in the final code. We also felt it was aesthetically pleasing, and thus an excellent blend of form and function. We selected classic “picnic” fabric in red, blue, yellow, and white gingham to give the device the affordance of a homemade picnic blanket.


We plotted the sensor placement at 3-inch intervals, allowing 3 inches of Velostat width per sensor, with conductive fabric cut slightly thinner than the Velostat. We ironed the conductive fabric to strips of red-checked cloth, attached the Velostat with dabs of hot glue, and folded the two sides together. They were kept in place with a few more dabs of hot glue until they could be sewn together permanently. We took pains to avoid puncturing the conductive fabric, sewing along the outside of the Velostat. We left the ends of the conductive fabric trailing out of pockets at either end of the sensor to allow for easy connection later.

Building full-scale sensor: measuring out pattern for cutting fabric and Velostat.

Building full-scale sensor: measuring Velostat for cutting.

Building full-scale sensor: pattern tracing onto Velostat.

Building full-scale sensor: Velostat laid out onto our 4×4 ft model.

Below: the process for making the sensors.

  • After attaching two 3-inch-wide lengths of Velostat, block out a length of cloth slightly wider.
  • Cut out the cloth.
  • Cut out a second length, the same size as the first.
  • Place and iron lengths of iron-on adhesive.
  • Iron the conductive fabric to the iron-on adhesive.
  • Lay the Velostat over the conductive fabric.
  • Use small dabs of hot glue to keep the Velostat secure on both sides.
  • Not pictured: sew both halves together.

As we completed each sensor, we tested it to ensure it was viable. When all the sensors were sewn and tested, we cut a swatch of blue checked cloth at 4.5×4.5 feet to be the base of the blanket. We measured out and placed our sensors where we wanted them, then pinned them in place.

We conceived of and experimented with a power bus of conductive fabric along two sides of the blanket, to reduce the amount of wiring we would have to attach. We liked this idea, as it made use of the blanket’s form to inform the function of the installation. However, we discovered that this layout diminished the effective voltage too much to get usable sensor readings, and we shelved the idea of the power bus. In retrospect, this should have been a warning sign that the power we were supplying was insufficient for our purposes.
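A rough way to see why, if each sensor is read as a simple voltage divider (a simplification of our actual circuit): the fabric bus adds a series resistance $R_{bus}$, so the voltage at the analog pin becomes

$$V_{read} = V_{cc} \cdot \frac{R_{fixed}}{R_{fixed} + R_{sensor} + R_{bus}}$$

As $R_{bus}$ grows with the length of the conductive-fabric run, the usable swing in $V_{read}$ shrinks, which matches the diminished readings we observed.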

One by one we sewed a hem between the sensors. This fixed them in place on the blanket base, covered up the ratty ends of the sensors, and had the added benefit of making the blanket look softer and more inviting to sit down on. We intended to tuck the extra fabric at the outside of the blanket over, making a hem and a channel for wiring, while keeping the blanket looking nice and minimizing obvious electrical attachment points.

Fabric getting ready to be attached to the sensor

Ironing fabric adhesive

Laying out conductive fabric onto adhesive

Ironing sensor onto fabric

Stitching fabric and sensor

Unfortunately, our trusty sewing machine hit a snag late in production: the housing for the lower bobbin was pushed out of alignment, jamming the machine. While this is apparently not an uncommon problem, online diagnostics recommended taking the machine in for service rather than attempting a layperson’s fix. Without enough time to get the machine fixed or exchanged before the deadline, this was as far as we would get with our blanket. Luckily, all the sensors were secured by this time, and subsequent stitching would have been purely aesthetic.


Code

Our main concern was how to code the blanket to create an interesting relationship between the laid-out sensors and the servo motors above, and we were curious how users would explore the interaction between the two separate components. We contemplated a one-to-one relationship (i.e., one servo motor for every sensor). We also considered a rippling effect among the servo motors: when one servo activated, a chain of surrounding servos would also move.

It was also important to us that the clouds above reflect the participant’s position beneath the hanging apparatus; we found this interesting because it made the piece about the reflection of interaction.

The design of our quilt gave us a natural aesthetic of “quadrants”. We decided we could determine the user’s position from the sum of the values in each quadrant, and from there we mapped out all of the inputs and outputs that needed to have a relationship.


Before we scaled up, we wanted to test the textile analog sensors as inputs controlling the output of a servo motor and LED light strips. We determined the threshold of both sensors under some pressure, then used that data to decide when the motors and LED lights should activate. This was a great initial proof of concept, and we decided to proceed with this base code.
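A minimal sketch of that proof of concept, assuming the textile sensor is wired as a voltage divider into A0 (the pins and the threshold value are placeholders; we found our thresholds empirically):

```cpp
#include <Servo.h>

// One textile pressure sensor gating one servo and one LED.
const int SENSOR_PIN = A0;   // Velostat voltage divider (placeholder pin)
const int LED_PIN = 6;
const int THRESHOLD = 300;   // placeholder; tune by watching the monitor

Servo chimeServo;

void setup() {
  pinMode(LED_PIN, OUTPUT);
  chimeServo.attach(9);
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(SENSOR_PIN);  // 0-1023
  Serial.println(reading);               // watch this to pick the threshold
  if (reading > THRESHOLD) {             // pressure detected
    digitalWrite(LED_PIN, HIGH);
    chimeServo.write(90);                // swing the chime
  } else {
    digitalWrite(LED_PIN, LOW);
    chimeServo.write(0);                 // rest position
  }
  delay(50);
}
```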

Our next step was to think about how to create a more interesting connection between the user activating the sensors and the motors and LED lights starting; we did not want the quilt to simply become a switch. As a solution, we created cases for each quadrant. Each quadrant would take the sum of its sensors’ inputs, which indicated how many sensors were likely being activated in that quadrant. The cases were as follows (a code sketch follows the list):

 

Maximum: most likely all of the sensors are being activated

  • Trigger all of the associated servos
  • Trigger all of the associated LED lights at full brightness

Medium: most likely two of the sensors are being activated with great pressure

  • Trigger two (or one) of the associated servos
  • Trigger two (or one) of the associated LED lights at full brightness

Minimum: most likely the sensors are being lightly activated

  • Randomly choose one servo to go on and off each time this case is triggered
  • Choose the corresponding LED light to go on and off

Resting: all of the servos and LED lights are off
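A compressed sketch of this logic for one quadrant of three sensors; the pins and cutoff values are illustrative (ours were tuned by hand), and each LED strip is reduced to a single PWM pin:

```cpp
#include <Servo.h>

const int SENSOR_PINS[3] = {A0, A1, A2};  // one quadrant's sensors
const int LED_PINS[3]    = {3, 5, 6};     // PWM-capable pins
Servo servos[3];

void setOutput(int i, bool on) {
  analogWrite(LED_PINS[i], on ? 255 : 0);
  servos[i].write(on ? 90 : 0);
}

void setup() {
  for (int i = 0; i < 3; i++) {
    pinMode(LED_PINS[i], OUTPUT);
    servos[i].attach(9 + i);              // servos on pins 9, 10, 11
  }
  randomSeed(analogRead(A5));             // floating pin as a noise source
}

void loop() {
  int sum = 0;
  for (int i = 0; i < 3; i++) sum += analogRead(SENSOR_PINS[i]);

  if (sum > 1800) {                       // Maximum: everything on
    for (int i = 0; i < 3; i++) setOutput(i, true);
  } else if (sum > 900) {                 // Medium: a subset on
    for (int i = 0; i < 3; i++) setOutput(i, i < 2);
  } else if (sum > 250) {                 // Minimum: one at random, on or off
    setOutput(random(3), random(2) == 1);
  } else {                                // Resting: all off
    for (int i = 0; i < 3; i++) setOutput(i, false);
  }
  delay(100);
}
```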

 

Setting up for the critique

As we set up for our critique, it became apparent that we had scaled up too much to implement our code, and while assembling we decided to scale our inputs down to three. Kate suggested that, rather than isolating the interaction to one quadrant, we divide all of the sensors into three “super sensors”. Our quilt pattern naturally gave us three rings of sensors: one on the outside, one in the middle, and one on the inside. We connected our quilt according to the diagram below:

 

Diagram of the quilt wiring for the three “super sensors”.

Another thing that became apparent was that the hanging apparatus, due to its circular shape, was hard to mount and hang in a balanced manner. We had run out of aircraft cable – which had proved extremely difficult to work with – so we used twine to get the shape mounted. Another difficulty was wiring the apparatus down to the floor: we were not prepared with wires long enough to reach our breadboard, and the long individual wires we tried were impractical. Kate and Nick lent us long modular wiring, which significantly helped with the hanging process. We also learned that Kate is a master of knots; her wizardry helped us hang the apparatus quickly and safely.

 

Components


Diagram only shows what was connected for critique

  • 1 x Arduino Mega
  • 3 x textile sensors
  • 3 x 50-ohm resistors
  • 3 x LED light strips [6 pixels each]
  • 3 x Micro Servo Motors

 

OUTCOME

We had lofty expectations for this project which the completed version did not meet. No aspect of the build was lax – we felt, in the end, that there had not been enough time in the two weeks we were allotted for the team to build, test, and iterate on the design enough to reach the state of completion we had envisioned.

In the end, we did have an interactive experience in which the quilt activated LED lights and gentle servos above. We also incorporated a projection behind the piece to elevate the sense of being out in nature.

 

REFLECTIONS

This project taught us many things about working with unfamiliar materials and pursuing lofty goals in a short time frame. Some core reflections we have taken away are below.

We encountered many challenges we did not foresee or appreciate during the planning phase. These included:

  • The amount of time required to fabricate objects of the size and complexity we envisioned
  • The difficulty and time required in learning to effectively use new tools
  • Power management with sensors that we had created from scratch
  • Effectively scaling from a working prototype to a full-sized installation
  • Accounting for the “unknown unknowns” that crop up in projects

Were we to take on a similar project in future, we would:

  • Focus on one core interaction – for example, we would focus on only the blanket or the hanging apparatus
  • Do careful math when fabricating rather than making estimates
  • Start with fewer/smaller materials and scale up
  • Make purchases of materials in small amounts to prototype with

In terms of the use of textiles, we made several discoveries:

  • Our sensors only worked consistently when the ground and the positive were clipped to the opposite ends of the fabric. We experimented with having the two ends of the circuit clipped close together, which – while somewhat effective – was unreliable for our purpose.
  • When all of the twelve sensors were divided and clipped together to make three “super” sensors, we had to lower the resistors significantly to get any viable reading to use with our code.
  • Physically small sensors gave more reliable readings than large sensors at the same voltage.
  • It is possible to use conductive fabric as a “power bus” to power multiple sensors – though at our scale, this diminished the voltage to an amount where they were not usable for our purpose.

Next steps to take when we return to this project include:

  • Test the sensors with higher power and/or using multiple power sources
  • Test multiple variations of circuitry running through the blanket
  • Design, from scratch, an apparatus for hanging the clouds, with the same focus we had as we designed the blanket
  • Explore wireless communication with the hanging apparatus
  • Reconsider the form of the “above” apparatus
    • For example, explore projection of a generative image rather than a physical apparatus

 

Resources:

Kate Hartman & Yiyi Shao. Intro to Textile Game Controllers. Workshop held at Dames Making Games at Toronto Media Arts Centre on November 14, 2018

A special thank you to Nick Puckett whose advice on fabrication was invaluable, and who went out of his way to help the project get set up in time for its show.

A special thank you to Kate Hartman for her donation of material and tools, for going out of her way to help the project get set up in time for its show, and whose infectious enthusiasm kept us going.

Sound Synthesis

Project by: April De Zen, Veda Adnani and Omid Ettehadi
GitHub Link: https://github.com/Omid-Ettehadi/Sound-Synthesis


Music Credit: Anish Sood @anishsood

Contributors: Olivia Prior and Georgina Yeboah
A special thanks to Olivia and Georgina for letting us leverage the code from Experiment 2, Attentive Motions. Without the hard work contributed by both these ladies, the musical spheres would not have been finished in time; we are truly grateful.

Figure 1.1: Left, Final display of Sound Synthesis
Figure 1.2: Center, Sound Synthesis Team
Figure 1.3: Right, Special thanks to Attentive Motions Team

Project overview
Sound Synthesis is an interactive light and music display that allows anyone passing by to become the party DJ. There are three touch points in the system. The first is the ‘DJ console’, made up of children’s blocks; each block controls a different sound stem, triggered by placing the block on the console. The other two touch points are wireless clear spheres containing LEDs and a gyroscope, each triggering another sound stem when the sphere is moved. These interactions not only activate sound and lighting but also invoke a sense of play across all ages.

Intended context
The team’s intent was simple: bring music and life to a gallery show using items common in child’s play. Relinquishing control over the music and ambience at a public event seems crazy, but this trio was screwy enough to give it a try. The goal was to build musical confidence in the crowd and allow them to ‘play’ without the threat of failure. For a moment, anyone is capable of contributing to the mood of the party, regardless of their musical experience.


Figure 2.1: Left, Final display of Sound Synthesis
Figure 2.2: Veda showcasing the capabilities of each musical sphere
Figure 2.3: Veda showcasing the capabilities of the DJ console
Figure 2.4: Center display in action

Product video

Production Materials


Ideation
The team brainstormed different ways to combine older projects into a playful music experience for those visiting the end-of-semester show. The ideation process started off quite ambitious, attempting to match the footprint of another project called ‘The Sound Cave’.

Figure 3.1: Left, Initial drawing of floor layout
Figure 3.2: Center, Initial drawing of DJ console, sphere and proposed fabrication of center display
Figure 3.3: Right, Initial drawing of additional touch points for more interactions (if time allowed)

The Sound Cave had five stations hooked up to its center unit, with a different interaction at each station. The original plan was to use the display tower from Omid’s Urchestra project as our center display, with a few alterations. The first station would involve a kid’s puzzle taken from Veda’s Kid’s Puzzler project; the interaction would remain the same, using pull-up resistors and copper tape to create a button. The next station would have the clear spheres from the Attentive Motions project, with the interaction also unchanged, using the gyroscope to sense motion and send a signal to the main unit. The next three stations would be brand new, and this is where our ambitions got the best of us. After further group discussion, we decided to add only one more station to the project. The new station would involve a version of a touch sensor that required a wearable to ground the circuit; see Figure 3.3.


Figure 4.1: Left, for a detailed understanding of the LED tower: Urchestra
Figure 4.2: Center, for a detailed understanding of the block puzzle: The Kid’s Puzzler
Figure 4.3: Right, for a detailed understanding of the clear spheres: Attentive Motions

Journey Map


Figure 5.1: Top, The first ambitious version of the Journey Map
Figure 5.2: Bottom, A more realistic and achievable Journey Map

Scheduling
As a team, we came up with a schedule. Early on, we wanted to make sure we were being realistic about the amount of work we were taking on, especially since there were many other final projects in other classes. We arrived at this schedule, which needed to shift from time to time, but overall we were able to stick to it and achieve a final product we are all very proud of (with enough sleep).


Figure 6.1: Team workback schedule

Programming
One of the benefits of revisiting previous projects is that most of the hard work has already been done. The first thing we needed to do was see what data we could get from each of them and assess what else needed to be added or altered.


Figure 7.1: Left, changing the Arduino Micro to a Feather ESP32; circuitry for the DJ console and spheres; installing the circuitry into the base of the box
Figure 7.2: Center, moving the circuits from breadboards onto prototyping boards
Figure 7.3: Right, adding LEDs to the puzzle

The DJ Console (Blocks): The puzzle used six switches, with copper tape underneath the shapes to complete the circuit. It also had a single LED to indicate when any of the shapes was placed in its right position. Each shape corresponded to a specific sound that was then played through the p5.js sketch.

We wanted to stick to the same principle, with a straightforward addition: instantaneous feedback on any changes the users made. Instead of having only one LED, we placed six, indicating how many blocks were active at any time. The system still used an Arduino Micro that sent the switch data over a serial connection to the p5.js sketch, which then sent it on to PubNub so that the display system could use it.
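The input side of the console reduces to something like the sketch below; the pin assignments are placeholders, and the copper-tape contacts behave as ordinary switches against the Arduino’s internal pull-ups:

```cpp
// Six copper-tape switches with internal pull-ups, one feedback LED per
// block, and one CSV line over serial for the p5.js sketch to forward
// to PubNub. Pin numbers are illustrative.
const int SWITCH_PINS[6] = {2, 3, 4, 5, 6, 7};
const int LED_PINS[6]    = {8, 9, 10, 11, 12, 13};

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 6; i++) {
    pinMode(SWITCH_PINS[i], INPUT_PULLUP);  // a placed block reads LOW
    pinMode(LED_PINS[i], OUTPUT);
  }
}

void loop() {
  for (int i = 0; i < 6; i++) {
    bool placed = (digitalRead(SWITCH_PINS[i]) == LOW);
    digitalWrite(LED_PINS[i], placed ? HIGH : LOW);  // instant feedback
    Serial.print(placed ? 1 : 0);
    Serial.print(i < 5 ? ',' : '\n');                // e.g. "1,0,1,0,0,0"
  }
  delay(50);
}
```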

The Sphere: The ball used an Arduino Micro, an Adafruit orientation sensor, an LED strip, and a small speaker. It used to make noise whenever it was too stable, asking people to move it. We no longer wanted the device to play any sounds; we only wanted it to send the orientation data to PubNub. To do that, we got rid of the speaker and swapped the Arduino Micro for a Feather ESP32 board, which read the data from the orientation sensor and sent it to PubNub. To provide real-time feedback to the user, the LED strip lights up whenever the ball is shaken.
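A simplified sketch of the sphere-side logic is below. It assumes a BNO055-style Adafruit orientation sensor (an assumption; the standard Adafruit library is shown) and reduces “shaken” to a fast change in heading, leaving out the PubNub publish and the LED strip animation:

```cpp
#include <Wire.h>
#include <Adafruit_Sensor.h>
#include <Adafruit_BNO055.h>

Adafruit_BNO055 bno = Adafruit_BNO055(55);  // I2C orientation sensor
float lastHeading = 0;

void setup() {
  Serial.begin(115200);
  if (!bno.begin()) {
    Serial.println("No BNO055 detected");
    while (true) delay(10);
  }
}

void loop() {
  sensors_event_t event;
  bno.getEvent(&event);                 // orientation in degrees
  // Crude shake test: a large jump in heading between reads.
  // (Ignores wrap-around at 0/360 for brevity.)
  float delta = fabs(event.orientation.x - lastHeading);
  lastHeading = event.orientation.x;
  if (delta > 15.0) {                   // placeholder threshold
    Serial.println("shaken");           // real sketch: publish to PubNub
  }                                     // and light the LED strip
  delay(100);
}
```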

The Center Display: The display used an Arduino Micro, an LED strip, and nine switches made of copper tape. The biggest issue with this design was the need for copper tape under shoes to complete the circuit, so we got rid of the tape and used the design purely as a display. We added two extra LED strips to make the experience much better.
The p5.js sketch read the data sent from the two balls and the puzzle and, based on their configuration, played the associated track. The data was then sent to the Arduino Micro over the serial connection to control the three LED strips. The primary LED strip was tied to the puzzle: if any of the keys were placed, the strip would flash green every 2 seconds; otherwise it would flash white. The other two LED strips were each tied to a specific ball and would flash the same colour as the ball that was shaken.
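The two-second flash is a straightforward millis() pattern. The sketch below shows the idea reduced to a single RGB LED and a one-character serial protocol; the real build drove three addressable strips from richer serial data:

```cpp
// Flash green every 2 s while any block is placed, white otherwise.
const int R_PIN = 9, G_PIN = 10, B_PIN = 11;  // PWM pins (placeholders)
bool anyBlockPlaced = false;
unsigned long lastFlash = 0;
bool flashOn = false;

void setColor(int r, int g, int b) {
  analogWrite(R_PIN, r);
  analogWrite(G_PIN, g);
  analogWrite(B_PIN, b);
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (Serial.available()) {
    anyBlockPlaced = (Serial.read() == '1');   // '1' = a block is down
  }
  if (millis() - lastFlash >= 2000) {          // toggle every two seconds
    lastFlash = millis();
    flashOn = !flashOn;
    if (!flashOn)            setColor(0, 0, 0);
    else if (anyBlockPlaced) setColor(0, 255, 0);     // green
    else                     setColor(255, 255, 255); // white
  }
}
```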


Figure 8: The team testing the units

Sound and Design
Sound was the most critical piece of the experience for us. Since none of us had worked with music before, we were most concerned about how the experience would come alive without high-quality sound output. Instead of making any guesses, we turned to Adam Tindale, who has been working with sound for the last three decades.

Our meeting was extremely productive, and the most important lesson from it was the difference between creating a musical experience and a musical instrument. To create a musical instrument, you have to have a very deep understanding of the instrument: how it works and what sounds it can produce. The audience for such experiences is usually musicians with similar knowledge. We found a relevant case study that demonstrated this, and we knew this was not the experience we wanted to create.


Figure 9.1: Left, Cave of Sounds, a musical experience
Figure 9.2: Right, Color Chord, a technological musical instrument

We wanted to design an experience that made it easy to play with music and could empower users of all experience levels to create music of their own. Learning a musical instrument is difficult and requires countless hours of disciplined practice; we asked how we might do the opposite and create something inclusive, easy to use, and engaging at the same time. We needed a total of eight sounds: six for the DJ console (puzzle blocks) that set the main track, and two accent sounds, one for each sphere, triggered upon shaking.

We began our search online, among royalty-free sounds available for use. We even tried working with Ableton and GarageBand to see if any sounds would work together to create a synchronized soundtrack. But nothing available online was good enough, and since none of us had prior sound-making experience, we turned to our friends to collaborate with us on this.

Anish Sood is a renowned DJ, songwriter, and music producer based in Goa, India. The genres he focuses on are EDM, house, techno, and electro house, which felt like the right fit for our experience. We got on a call together and briefed him in detail about the project. We wanted a track that was upbeat yet soothing, and not monotonous to listen to; we took inspiration from the artist Kygo to describe the kind of sounds we wanted to produce. We also shared with Anish many pictures and videos of the parts of the experience and our vision for it. He was extremely receptive and put together a beautiful track for us within 24 hours of our call. He created six sounds for the DJ console, divided between base sounds and overlapping instrumental and vocal sounds. He also sent us the master track so we knew what it would all sound like when it came together.

Playlist for the DJ Console:
https://soundcloud.com/user667414258/sets/sound-synthesis-stem-set/s-b34Rx

For the spheres, we wanted sounds that accentuated the base track from the console well. After a mini-brainstorm we settled on a tambourine and a gong for the spheres.

Playlist for the Spheres:
https://soundcloud.com/user667414258/sets/sound-synthesis-stem-set-sphere-sounds/s-Zlnq0

 

Fabrication
Our fabrication process was smooth and streamlined. The following steps were part of the process:

The DJ Console (Blocks): We already had the base for the DJ console in place from Experiment 3. This included the puzzle itself, a base box for it, and a single LED to indicate whether the device had been activated. To convert the design from a kid’s toy into something more mature, we decided to spray-paint its colourful keys in a simple black-and-white scheme. We also had to add five more holes for the additional feedback LEDs, and one for the connecting cable. While presenting, we used a plinth that housed the laptop underneath.


Figure 10.1: Left, drilling holes for LED lights into box
Figure 10.2: Center, Adding circuitry into box
Figure 10.3: Right, Spray painting shapes for box

The Sphere: The fabrication process for the spheres was already done in Experiment 2. The only things that needed to change were the circuit and the addition of a battery holder for the LED strips so that they could run for more than three hours.

The Center Display: We decided to stick with the same object that was made for Experiment 3. The only change needed was removing the extra ultrasonic sensors from the box. We added a base to the design so that we could glue down the three cylinders holding the three LED strips, and a back panel so that the LED strips would be invisible when the device was off.


Figure 11.1: Left, adding more LEDs to original circuit created for the Kid’s Puzzler project
Figure 11.2: Center left, rewiring new and improved DJ console
Figure 11.3: Right, April making alterations and rewiring the original display unit used in the Urchestra project


Figure 12.1: Left, Final project layout
Figure 12.2: Center, Fine tuning the blocks and sphere
Figure 12.3: Right, Fine tuning the center display

Final Fritzing Diagrams


Figure 13: The final circuit for the hamster balls


Figure 14: The final circuit for the Blocks (DJ Console)


Figure 15: The final circuit for the center display

Presentation & Show


Figure 16.1: Left, Final floor plan of Sound Synthesis
Figure 16.2: Right, Instructional signs placed on plinth under each interactive device

For the final show, we wanted to make sure the connection between the three pieces was clear and that users knew what each piece did. To do that, a clean installation of the work was crucial. We placed all the objects in a corner where the display center was visible from each station. We used plinths of the same height and printed short instructions for each piece to make sure users were clear on their role in the experience. We also printed matching ID cards and wore black and white, to look like a team at the exhibit.

One issue we had to deal with was refreshing the display unit’s web browser every now and then, as the large quantity of data sent to it made it crash if left open for a long time. We made sure at least one person was at the station at all times so that nothing went wrong.

We received very positive feedback on the project. People were very interested in how easy it was to act as a DJ and play with the sounds without having to worry about the pace of each track or how to synchronize them. Kids especially enjoyed the experience because they were used to the puzzle and the games, and they really liked being in charge of what was playing. Other people enjoyed the unusual interface for the music: they liked how simple it was to control, how little work they had to do to get good sounds out of the system, and how instantaneous the feedback was. One suggested improvement was to add more tracks and give users the ability to choose which track goes with each piece.

Reflection

As a team, we really hit our stride with this project. Since we all enjoyed working together so much during project 4, we thought we would go out with a bang together in project 5. The three of us each brought something different to the table, and we found ways to utilize each team member’s strengths. Omid not only spearheaded the coding but was also extremely patient, slowing his process down so we could all understand the code for each device and troubleshoot any errors. Veda is extremely detailed in her design approach: it’s not enough for something to just look good; she makes sure each design is functional and user-friendly in every detail. April brought her professional experience with meticulous project management, scoping and planning, graphic design, and human-centered thinking. Her skill set with fabrication and printing methods was also a blessing.

One of the most important lessons for us was to scope realistically, and leave a safety margin for debugging and troubleshooting. We also made sure to give ourselves enough time to iron out all the details for the actual presentation and setup.

After all the hard work, we achieved something that works beyond the level of a basic prototype. Hamster balls were dropped and the system crashed, but everything was back up and running without anyone at the party noticing. We are extremely proud of the final product and still can’t believe how well it turned out. If this project were ever scaled up, it would require more stable software and possibly custom microcontrollers, but for a two-week student project, we are very proud.


Figure 17.1: Left, April and Veda rocking out at the final show
Figure 17.2: Right, Veda continues to rock, while Omid makes sure everything is under control

References
(n.d.). Retrieved from http://www.picaroon.eu/tangibleorchestra.html
Cave of Sounds. (n.d.). Retrieved from http://caveofsounds.com/
Romano, Z. (2014, May 22). A tangible orchestra one can walk through and play with others. Retrieved from https://blog.arduino.cc/2014/05/22/a-tangible-orchestra-one-can-walk-through-and-play-with-others/
Schoen, M. (n.d.). Color Chord. Retrieved from https://schoenmatthew.com/color-chord
Tangible Orchestra – Walking through the music. (2014, June 03). Retrieved from https://www.mediaarchitecture.org/tangible-orchestra/

Tinker Box

Abstract

Tinker Boxes are physical manipulatives designed for digital interaction. They are based on the concept of MiMs, or "Montessori-inspired Manipulatives" (Zuckerman, Arida, & Resnick, 2005). The boxes are low-fidelity devices used to bridge physical interaction with the digital world. They are aimed at children aged 5 to 7, but can be extended to any age depending on the frontend software designed to fit the interaction. This iteration of the software looks at using it as a scaffolding tool to teach kids how to recreate or understand the making of a physical object, Lego toys in this instance. The plan is to extend it to other educational games and interactions that explore concrete and abstract concepts.


Introduction

What can I tinker with next? The goal of Experiment 5 was to take a project from the past weeks and add to it in some meaningful way. This could be anything, but "anything" is a very large canvas given 1.2 weeks from concept to completion. I would like to say it was all clear from the start, but it was murky what this next step would look like. When I thought about what I liked about the old project and what I did not, it was quite clear the biggest pain point was the potentiometer: it broke the interaction and dictated how far the kids could make the characters chase each other before having to reverse and go backward to add more interaction.

This whole experiment would be about figuring out how to use the rotary encoder. The initial idea was to change the whole first version of the box and add the rotary encoder to it, but this would mean reverse-engineering the hardware to fit in the new part. It was not really worth it, as the first version worked quite well as a proof of concept and I wanted to keep it that way.

I then decided to use the RE [Rotary Encoder] to create a new kind of interaction, but also to examine critically what it was I was building. I used the case study assignment to dive deeper into what the interaction meant and how I could position the work in a meaningful way, drawing on the paper "Extending Tangible Interfaces for Education: Digital Montessori-inspired Manipulatives" (Zuckerman, Arida, & Resnick, 2005).

Methodology

So what can I make interactive and meaningful to children? This was the question I kept asking myself. It's easy to build an interaction, but making it meaningful and pedagogical is where the challenge is. I looked at my kids for inspiration, which is where I usually start: they are learning through play every single day, but we seem to miss it.

I'm jumping ahead a bit, because before I could even imagine what kind of interaction I wanted, I needed to get the rotary encoder working and sending data. This may seem like a no-brainer for a coder, but for a person new to the coding world of p5 and Arduino it was a critical first step, otherwise I'd have no dice!

The base code for the project was a mix of the class code from Nick and Kate and code from Atkinson at multiwingspan.co.uk. I got the encoder sending a signal and then used the JSON library to parse the data so I could read it in p5.js. In retrospect, I realized this is not the best way to do it: since I need to map different variables based on the length of the sprite animation I will be controlling, the better way is to set a large range in Arduino and then map that range down to what I need for each individual interaction. This is a bit technical, but if you do venture into using my code it is something to keep in mind when you modify it to your needs.
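
To make that advice concrete, here is a minimal sketch of the p5.js side; the message shape and the p5.serialport readLine() call are assumptions based on my setup, not the exact code in the repo:

    let frameIndex = 0;

    // Parse the JSON the Arduino sends, e.g. {"encoder": 512}, then map the
    // large 0-1023 range down to the 12 frames of one sprite animation.
    function gotData() {
      const data = JSON.parse(serial.readLine() || '{}');
      frameIndex = floor(map(data.encoder, 0, 1023, 0, 11));
      frameIndex = constrain(frameIndex, 0, 11); // guard against overshoot
    }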


OK, now that I had the encoder and the button working, I assembled all of the hardware even before I could get the software working. Why would I do that? Basically because time was running out: once you write software, debugging and refining is a rabbit hole you can go down till the cows come home, and I might never get to finish the physical hardware. I have had this happen on other projects, where the software takes precedence and the hardware ends up being presented on a proto-board because there is no time left for refinement and fabrication.


The circuit is pretty simple, as you can see in the Fritzing diagram below. It uses:

  • 1 Rotary encoder
  • 1 button
  • 1 Arduino Micro Original

That's it. The circuit is also very clean, so I could get to the main task of creating the interaction.


Once the circuit was done, I built the housing and then soldered all the components onto the PCB.


Software

I now looked at all the possible interactions I could create using this rotary dial.

The basic idea is turning a value up or down; you can then map this value to anything you like. In my case, I decided to use sprite animations.

Coming back to observing my son play with Lego: he would iterate and create new things (cars, safes, vending machines; the list was exhaustive), following along with YouTube videos or just experimenting. He would then share these creations with us at home and take them to school to show his friends. The thing is, people could see the completed work but not the process of getting there, or even the individual parts that made the whole. This sparked an idea based on other stop-motion projects I had seen. With my son's permission, I broke apart his creations brick by brick and shot them using an iPhone and a tripod. I then used that footage to create sprite sheets controlled by the Tinker Box's rotary encoder. It took a bit of time to figure out how sprite sheets worked and what was possible, but it worked and the end result was satisfying. I then used the button to change the sprite and show another animation; in this way the user could scroll through the different creations I had made animations for.
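
As an illustration, a minimal p5.js sketch of this setup could look like the following; the file names, frame size and the buttonPressed() hook are assumptions, not my actual asset names:

    let sheets = [];                     // one sprite sheet per Lego creation
    let current = 0;                     // which creation is showing
    let frameIndex = 0;                  // set from the rotary encoder over serial
    const FRAME_W = 320, FRAME_H = 240;  // size of a single frame in the strip

    function preload() {
      sheets[0] = loadImage('car.png');              // assumed asset names
      sheets[1] = loadImage('vending-machine.png');
    }

    function draw() {
      background(255);
      // Copy one frame out of the horizontal strip onto the canvas.
      image(sheets[current], 0, 0, FRAME_W, FRAME_H,
            frameIndex * FRAME_W, 0, FRAME_W, FRAME_H);
    }

    function buttonPressed() {           // called when the Arduino button fires
      current = (current + 1) % sheets.length;
    }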


The interaction was automatic; no instructions were needed. People turned the knob and clicked the button, because I had built on their past experience of what buttons and knobs do. It was now just a matter of changing the software to create a pedagogical experience for the child.

Some ideas I came up with based on this interaction are:

  • Simple Machines: the box could be turned on its side and the knob replaced with a crank, lending itself to simple machines like cranes, fishing poles, ratchets, pulleys, etc.
  • The process of folding and unfolding has numerous pedagogical uses, not least of which is the wonder of seeing what is inside something: the layers of the earth's crust, making planets move, or rotating objects on a different axis.
  • This makes the MiM very versatile yet simple in its interaction; the triangle completes when the software fits the user and the interaction.

Feedback

Some of the feedback was that I should use this as an educational tool for product assembly, such as building IKEA furniture, and pitch it to the company to create stop-motion videos showing the different steps.

There was also interest in seeing how two of these devices could change the interaction if they controlled different aspects of the same Object/Interaction.

Summary

I would like to explore this prototype further and build more Tinker Boxes that network together or are even wireless. I had an early idea of building a wireless interaction, but Nick said the interaction might be delayed because of using a server like PubNub. I will look to see if there is any way to interface directly with the Mac/PC without the use of third-party software.

References

Zuckerman, O., Arida, S., & Resnick, M. (2005). Extending tangible interfaces for education.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems – CHI 05. doi:10.1145/1054972.1055093

Atkinson, M. (n.d.). MultiWingSpan. Retrieved from http://www.multiwingspan.co.uk/arduino.php?page=rotary2

GitHub code can be found here: https://github.com/imaginere/Experiment5DF

//generative(systems);

Experiment 5: Refine/Combine/Unwind


Exploring an Art & Graphic Design movement through computational means.

GitHub Link

Team Members
Carisa Antariksa, Joshua McKenna, Ladan Siad


Project Description

//generative(systems); is an investigation into Constructivism, an art and graphic design movement from the 1920s, through generative form. By referencing elements popularized by that movement, this project explores the automation of design processes on the basis of variance, and the discourse around the movement towards engineered design systems. How do we as designers create distinct works when we are in a time of algorithmic design? Can our work still be dynamic, compelling and emotive? //generative(systems); examines the process of algorithmic automation and how it will affect viewers' connection to the aesthetic experience.

This experiment expands upon the Generative Poster project presented in Experiment 3 of the Creation & Computation course, This and That. The original concept involved generating multiple iterations of a single design through computational means, where the intent was for a user to have a unique copy of a poster according to a predetermined design system.

Video

Tune by beatpick

How it works

Our code executed the following instructions:

  1. Randomly select a background color from a ready-made array of colors;
  2. Randomly select elements (triangles, circles, rectangles and other quadrilaterals);
  3. Randomly assign a color independently to each element;
  4. Randomly determine the number of elements to be placed on the page;
  5. Randomly determine the composition, placement and scale of each element;
  6. For each element selected, build an accompanying string of the p5 code that draws it;
  7. Send the string values to a separate browser page in the order they appear on the graphic sketch;
  8. Save the composition as a .jpg;
  9. Print the code from the browser to a physical paper artifact;
  10. Instruct the generative graphic system to wait until the print sketch has finished printing;
  11. Restart the sketch.

This then repeated throughout the exhibition, conveying the design decisions the computer made based on the algorithm set in the code.
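
A condensed sketch of steps 1 through 5 in p5.js (the palette and element counts here are placeholders, not our production values):

    // Pick a background from a ready-made palette, then place a random
    // number of randomly colored, randomly scaled elements.
    const palette = [[230, 60, 50], [20, 20, 20], [240, 230, 210], [40, 90, 150]];

    function setup() {
      createCanvas(400, 565);
      background(random(palette));                           // step 1: background
      const n = int(random(4, 9));                           // step 4: element count
      for (let i = 0; i < n; i++) {
        noStroke();
        fill(random(palette));                               // step 3: element color
        const kind = random(['circle', 'rect', 'triangle']); // step 2: element type
        const x = random(width), y = random(height), s = random(30, 150); // step 5
        if (kind === 'circle') ellipse(x, y, s);
        else if (kind === 'rect') rect(x, y, s, s * random(0.2, 1));
        else triangle(x, y, x + s, y, x + random(s), y - s);
      }
    }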

Project Context

For this project, we wanted to use the tools we had learned in the course to dive into the history of graphic systems. The idea of using generative computation was very interesting to us as a tool for designers in this age, where the act of creating iterations through a more efficient algorithm can open up more possibilities and a higher frequency of "happy accidents." We first had to decide which graphic design movement we wanted to reference as the basis of our system. Initially we proposed three historical graphic movements as possible selections for our project: the Memphis graphic design movement from the late 70s, the Bauhaus movement from the early 30s, and the Russian Constructivism movement from the mid 20s. We soon narrowed our selection down to two, Constructivism and possibly Bauhaus, given that one movement preceded the other.

From left to right, original posters from the Constructivism, Bauhaus, Memphis Movements

Constructivism was a movement with origins in 1920s Russia, where it was primarily an art and architectural movement. It refused the concept of art for 'art's sake,' which catered to the traditional bourgeois class of society, and instead focused on using art and design as a political tool. Within this time period, famous Constructivist Kazimir Malevich also coined the term "construction art," or "Suprematism," as a prominent characteristic of the movement's visual language. The term was also widely influenced by the works of artist turned designer Aleksander Rodchenko, who experimented with these forms and combined them with photomontages.

Russian Suprematist Paintings by Kazimir Malevich

In its essence, the characteristics of this movement involved a free combination of basic geometric forms, such as circles, squares, lines and rectangles, within a limited range of colors. The application of this visual language ranged from product packaging to logos, posters, book covers and advertisements. Despite its unfortunate end following the rule of Stalin, the legacy of this style lives on through succeeding movements, particularly Bauhaus and International Type Design. Nowadays its principles are widely used as parameters in contemporary design, especially within flat design for digital media.

In our research, we also selected three case studies relevant to the scope of this project iteration, ranging across applications in modern architecture and graphic design.

  1. Project Discover explores the capability of software to produce an endless slew of architectural designs that satisfy specific criteria within a given project in terms of cost, constructibility or performance dynamics, at a pace and level of productivity that would be impossible for human beings to match. The hope is to go beyond basic automation to create an expanded role for the human designer and a more dynamic and collaborative interaction between computer design software and human designers in the future. This study helped contextualize our intentions with the project.
  2. "Randomatizm" is an exploration into the Suprematism movement of early-1900s Russia. The term Suprematism refers to an abstract art based upon "the supremacy of pure artistic feeling" rather than on the visual depiction of objects. To date, the online gallery randomatizm.art presents 16 works by famous figures of Suprematism, and visitors to the online exposition have generated more than 80,000 random files. We found the variance in juxtapositions between each generated composition to be a good foundation for our research in expanding upon this experiment.
  3. "Oi" is a flexible brand identity created by the renowned Wolff Olins for the Brazilian telecommunications company. The idea behind the "logo generator," as the application is called, is for the logo to move, wobble, and respond to customers in a playful and interactive way within the company's mobile and web apps. In application, the identity mixes all of its elements (techie-looking typeface, icons, and blobs) in various ways to keep the identity flexible yet cohesive.

We also owe most of our references to https://algorithms.design/, where the author explores algorithmic and generative design across different design applications.

Process Journal

Sketches of Initial Concept

There were many opportunities to continue different projects that involved physical computing with p5, but the interest that brought us together was in completing a browser-based project. In a group meeting with several others in the cohort, a proposal to revisit the Generative Poster project was brought forward, and the three of us agreed to push that idea forward. It would also further our learning in coding with p5. Along with that exploration, we wanted to see how it could contribute to the discourse around algorithmic design and the potential it has to affect us as designers in the future. All three of us have a substantial background in visual communication and graphic design, which motivated us to proceed with this concept.

In writing our proposal for Kate and Nick, we spent time defining the project and the scope of what would be achievable in the minimum viable product. We also discussed and shared projects that inspired the visual styles we were interested in conveying. From that basis, we decided on the design movements we could emulate using generative design. We discussed aspects of existing projects that we liked and saw how they linked to movements that have been applied in contemporary design, such as International Type Design, Memphis and Bauhaus.

Following the proposal, we noted the feedback given to us in a meeting with them the next day. The key piece of advice that Kate and Nick gave us was to treat the projection as a digital prototype on the wall. We were also advised to make sure we exhibited the canvas of our poster(s) to achieve the greatest impact; the scaling of the projection was key. What would the template for the posters be? Would it be projected on a wall, or onto poster paper?

There were also opportunities to think about how the space could be used to introduce unexpected outputs, such as showing the transparency of machine thinking through the console log. The possibilities for presenting this project went in many directions, from conveying the computer "thinking" through the generative aspect of the code to introducing minor interactivity with visitors of the installation through human input.

MVP of envisioned Installation

Converging into Main Idea

Our criteria for selecting a graphic design movement depended on the positioning and placement of the elements on the canvas (whether the elements were abstract shapes, typography or simple forms). We noted that if a grid system was present, it would be a helpful parameter and boundary for elements to appear within as part of our generative poster. We studied each movement carefully to determine whether a system was evident that could be reinterpreted and generated independently of the reference paintings. Additionally, we wanted to ensure that whichever movement we selected, the elements could be drawn easily inside p5.js; there also needed to be a distinct color palette and theme that could be referenced.

Given the scale of a single generative system, we decided collectively as a group to attempt only a single system based on one movement, as opposed to three independent systems (one for each previously mentioned movement: Memphis, Bauhaus and Constructivism).

We began our process by attempting to recreate some of the abstract forms of the Memphis movement, but soon realized that it would be a challenge to do so in p5.js, as the shapes involved a lot of masking of circular patterns and use of the bezier() function.

Contemporary Memphis elements that we wanted to draw in p5

We also explored shapes within the Bauhaus movement by looking at more complex shape code, such as drawing arcs at different angles with arc(), and also created a document that identified the common aspects of the movement.

90-degree arcs and semicircles

Elements identified for the Bauhaus iteration of the project

We then opted to start with what we agreed was the easiest of the three movements to build a generative system from: the Russian Constructivism movement. After studying several of Kazimir Malevich's Suprematism paintings from the 1920s, we recognized some recurring shapes and forms: circles, rectangles, triangles (equilateral, isosceles and scalene), and quadrilaterals, all elements that could be made fairly easily with p5's drawing functions. We observed the common compositions created in his work so we could apply them in the code, in terms of how each element appeared on the screen. By referencing the color palette of this graphic design movement and using these paintings as a guideline, we began to build out the code that would later generate random compositions with the above-mentioned elements.

Elements identified for the Constructivism iteration of the project

Coding Process and Challenges


Throughout the process, we assigned tasks to each other to ease the workflow. There were many aspects of the installation that we wanted to implement in the MVP. We began by executing the sketches onto three canvases that continuously generate in one browser, which might have complicated the process more than it should have. It required the use of namespacing, and we found a resource that was useful in executing this.

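For reference, a minimal sketch of namespacing in p5 (instance mode): each canvas gets its own sketch instance and draws independently on the same page.

    // Each call to new p5() creates an independent, namespaced sketch.
    const posterSketch = (p) => {
      p.setup = () => { p.createCanvas(400, 565); };
      p.draw = () => {
        p.fill(230, 60, 50);
        p.ellipse(p.width / 2, p.height / 2, 100);
      };
    };
    new p5(posterSketch); // first canvas
    new p5(posterSketch); // second, fully independent canvas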

We soon scrapped this option, as it posed many challenges in the code: we would have had to spend too much time maintaining the sketch and drawing within each canvas.

The main challenge we faced while coding our system was making it behave human-like; we wanted it to convey the idea of the system making its own design decisions (i.e. selecting a triangle or a circle as the first drawn element). It was not as simple as we thought to code a machine to act this way. In our first iteration of this project, the code generated the entire poster at once before moving on to create a new sketch. It took some time to program a timer into the draw function of the system so that each element would be drawn consecutively, one after another. Along with this problem, we also had to program an additional timer that communicated via PubNub between the two sketches. This was done so that the print sketch could pause the generative graphic sketch once a poster was completed. In the end, this was solved by setting the frameRate and checking the frameCount for each element in the draw() function.
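
A minimal sketch of that pacing technique (the shapes and timings are placeholders): at a low frameRate, gating each element on a particular frameCount makes the poster build up step by step instead of appearing all at once.

    function setup() {
      createCanvas(400, 565);
      frameRate(1);                                  // one drawing step per second
    }

    function draw() {
      if (frameCount === 1) background(230, 60, 50); // step 1: background
      if (frameCount === 3) ellipse(200, 150, 120);  // step 2: first element
      if (frameCount === 5) rect(80, 300, 200, 40);  // step 3: second element
      if (frameCount === 7) noLoop();                // poster complete, pause here
    }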

After defining each randomly generated element as a numbered string (e.g. var myString3 = "// Circle\nnoStroke();\nfill(" + circleColor + ");\ntranslate(" + translate2X + ", " + translate2Y + ");\nellipse(0, 0, " + radius2 + ", " + radius2 + ");" instead of placing it into console.log("")), we were finally able to publish what was drawn on the canvas through PubNub to another browser. Some overlapping of the sent messages remained, but this was quickly fixed by defining the position of each text block based on the input, as seen in the print.js file.
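
The pattern, roughly, is to mirror each drawing call with an equivalent code string and publish it. A minimal sketch, assuming the PubNub v4 JavaScript SDK (the keys and channel name are placeholders):

    const pubnub = new PubNub({ publishKey: 'pub-key', subscribeKey: 'sub-key' });

    function drawCircle(x, y, r, col) {
      noStroke();
      fill(col); // col as an [r, g, b] array
      ellipse(x, y, r, r);
      // Build the equivalent p5 code as a string...
      const myString = '// Circle\nnoStroke();\nfill(' + col + ');\nellipse(' +
        x + ', ' + y + ', ' + r + ', ' + r + ');';
      // ...and publish it so the print sketch can lay it out as text.
      pubnub.publish({ channel: 'generative-posters', message: { code: myString } });
    }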

Another task we had to accomplish was printing each poster as it was generated. We wanted the print to work without opening the dialog box, just receiving the information and executing the function. The term that kept coming up in our research was silent printing, or background printing. The way we were able to silent print was by going into the browser configuration and changing the parameter that disables the dialog box pop-up. This required us to use Firefox, because the instructions were easily accessible: in the about:config page we created New > Boolean, entered the preference name print.always_print_silent, clicked OK, and set the boolean to true. This successfully made printing happen automatically without opening the dialog box. After implementing this, printing was as easy as placing the print() command in the code; the browser's print() method prints the contents of the current window. This process is well reflected in the final video.
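
With that preference set, the trigger in the sketch is a one-liner. A minimal sketch of the hand-off (the fixed delay before restarting is an assumption, standing in for our PubNub pause/resume timer):

    // With print.always_print_silent = true in Firefox, window.print()
    // sends the page straight to the printer with no dialog.
    function onPosterComplete() {
      window.print();
      setTimeout(() => location.reload(), 5000); // wait, then restart the sketch
    }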

Aside from printing the console log, we also wanted a database of all the generated posters that the code executed through draw. We managed to link it to a Dropbox folder through the save() command.

Presentation and Exhibition

For the installation, we had a clear idea of how to arrange all the elements in our corner of the Grad Gallery. Given another day, we could have scaled up our installation to two generative systems, Constructivism and Bauhaus, next to each other, with the informative posters for each placed in the center area to create a conversation between the two movements and algorithmic systems. However, the Constructivism iteration was the MVP we had hoped for, and we were collectively very satisfied with the outcome. We focused the projection on the corner in a portrait orientation by rotating the projector, and placed the printer next to the screen that showed the saved images of the generated posters. The title of the project was also projected to convey the overall concept of this experiment. Over 1200 posters were created, and they are available to view and save through a Dropbox here.

There was an overall positive response to the installation. Many visitors commended the idea of generative design and commented on how it could even reach the level of teaching an AI, albeit a simple version of one. The final exhibit demonstrated to our audience a process that involved both the designer, as the creator of the tool and algorithm, and the computer, generating the sketch based on the parameters set by the designer. It was also an ode to the history of how these systems began to be applied in ways that can create social and political impact. Furthermore, this process could provide further commentary on how a machine might understand a cultural movement such as Constructivism; however, it is important to note that the machine is still only as smart as the person who makes it.


Special thanks to Adam Tindale for advising us through some p5 challenges, and to Omid Ettehadi for guiding us through PubNub.

References

Algorithm-Driven Design? How AI is Changing Design. (n.d.). Retrieved from https://algorithms.design/

Autodesk Research. (2017, February 23). Project Discover: An application of generative design for architectural space planning. Retrieved from https://www.autodeskresearch.com/publications/project-discover-application-generative-design-architectural-space-planning

Bill. (2008, August 22). “Silent” Printing in a Web Application. Retrieved from https://stackoverflow.com/questions/21908/silent-printing-in-a-web-application

Creative Bloq. (2013, October 11). The easy guide to design movements: Constructivism. Retrieved from https://www.creativebloq.com/graphic-design/easy-guide-design-movements-constructivism-10134843

Flask, D. (2015). Constructivism : Design Is History. Retrieved from http://www.designishistory.com/1920/constructivism/

Google. (n.d.). Google Fonts. Retrieved from https://fonts.google.com/specimen/Titillium+Web

Hebert, E. (n.d.). Namespacing / Multiple Canvases in P5JS. Retrieved from https://codepen.io/edhebert/pen/VpBzRo

Howe, M. (2017, April 5). The Promise of Generative Design. Retrieved from https://www.world-architects.com/en/architecture-news/insight/the-promise-of-generative-design

Miller, M. (2016, April 19). The Ultimate Responsive Logo Reacts To The Sound Of Your Voice. Retrieved from https://www.fastcompany.com/3059059/the-ultimate-responsive-logo-reacts-to-the-sound-of-your-voice

Moskovsky, A., & Day, Z. (2018). Randomatizm. Retrieved from http://randomatizm.hack.exchange/

Ning, Y. (2017). p5PlayGround. Retrieved from http://yining1023.github.io/p5PlayGround

Tate. (2015). Suprematism – Art Term | Tate. Retrieved from https://www.tate.org.uk/art/art-terms/s/suprematism

UnderConsideration. (2016, April 4). New Logo and Identity for Oi by Wolff Olins and Futurebrand. Retrieved from https://www.underconsideration.com/brandnew/archives/new_logo_and_identity_for_oi_by_wolff_olins_and_futurebrand.php

Grow You A Great Big Web Of Wires


Grow You a Big Web of Wires is a simple project built from copper tape, an Arduino Micro, lots of LED lights and a mess of wires. I wanted to explore what our homes and indoor spaces might look and feel like with the artifice of nature. This project was part sculpture, part installation and part futures imagining.

The leaves of the 'plants' are made of conductive copper tape that activates string LEDs when the leaves are touched together. They are beautiful, but not alive, nor do they clean the air. The leaves look like leaves, the wires look like roots, but at best they are a facsimile. The plants are meant to spark conversations about the place of nature's contributions to our indoor lives.

“This project came to light from a love of plants; I found myself in a contemplation of what our homes might look like with more artificial and less natural. The forms I created are plant like, but are missing that chlorophyllic energy of something alive, though there is electrical energy flowing through each. These little plants are imaginings of a halfway point of the uncanny natural valley.” – Grow you a big web of wires 2018

Idea Process

In my initial thinking about this project I wanted to explore the gestures and communication design of plants and trees in nature.

I began pondering these questions to spark my ideation:

  • How do we think about communication methods of nature?
  • How can we use these notions to improve the way we as humans communicate?
  • How does the medium of physical technology change how we interact with these creations?
  • What are these things that plants talk about? Imagine a conversation between a pair of plants.
Idea sketches

Material Process

I began by imagining how these plants would look. I knew that I wanted to recreate the way traditional house plants look in our homes. I went to source some wire at the hardware store and came back with 8 meters of galvanized steel wire.

Lots of experimentation went into the leaf shape and engraving veins into the copper tape. This was one of the shapes I was happy with.

The galvanized steel was pliable and easy to cut, but was VERY messy!

I began manufacturing many leaves for the tree form. It reminded me of a beautiful fall day, but not quite.

I began sculpting with the galvanized wire and initially had an idea of a tree shaped from a thousand wires all twisted together; however, it proved more difficult than I had hoped. It could certainly happen, but it would take much more time than I had. So I settled on creating a tree form out of some copper pipe I had lying around, then wound the steel wire around and through the pipe to create a base sketch of a tree, taking cues from the plants in my house to structure the stems and leaf patterns.

In creating this tree form I had a good contemplation session about how different metal is compared to plant life.

After building the form of the tree I began another plant. In working with the copper tape while making the leaves, I began to get a sense of the best way to use this material. I keep some Sansevieria plants in my home in the places where there is not a lot of light. They are an extremely hardy plant with beautiful sculptural leaves; they don't need to be watered often and can live through almost anything, so it seemed like an excellent candidate to recreate. Using big long pieces of copper tape and solid-core wire, I began to form the plant by making leaves of different lengths and anchoring them in the plant pot using a base made out of a plastic lid.

My first Sanseveria leaf and my first proof of concept using a copper tape switch and a simple pullup resistor connection.

The final Snake plant

Code and Circuit Process

The circuit for this project was graciously offered to me by Veda Adnani from her project The Kid's Puzzler. It was quite a simple setup utilizing the digital pins on the Arduino Micro. It was easy to create the proof of concept; however, there was much testing to be done with the copper tape once it was formed into the leaves and attached to the LEDs. There were a lot of real-world differences once the circuit was connected: sometimes the copper didn't make a connection, sometimes the LED wires didn't connect in the breadboard. It ended up taking a lot of troubleshooting and patience to come up with a setup that would work every time.

Circuit Diagram

Github Code

https://github.com/lc-w/GYAGBWoW

Final Presentation

The second part of this project was the actual installation of the plants. I had decided on displaying this work in the hallway between the Experimental Media Room and the main Graduate Gallery; this is a transitional space that could be peaceful and allow viewers to have a quiet moment in the dark to reflect on these plants. The one issue I had been wrestling with all week was the walls, which were covered in a wheat-pasted repeat pattern created by Inbal Newman. I had huge plans of covering the wall in large paper, or a projection, or even hanging a long white curtain in front of the work. But through the process of making the wire plant I realized that Inbal's art and my creations had a good synergy and complemented each other, so I began to wonder if the works could be incorporated together. The final display was directly in front of the piece, and it did work well.

The title card in front of Inbal’s wheat paste wall.

Title and Description Cards

The whole scene lit up and ready to glow.

The installation was quiet and contemplative; the two plants were placed on plinths with the title and info cards next to them, and there was a mess of string lights everywhere. There was a lot of positive feedback. I was happy with the installation as a first iteration, but much like the first iteration of Grow You a Jungle, I want to go bigger! I am envisioning this project in a room with a multitude of copper plants, perhaps with a kitchen or bedroom area as the setting. In the future I would like to join the two projects together, using real plants as a switch for sound and the copper plants as a switch for light, creating a cycle of dependency between the plants and humans.

Detail shots of both plants

References

One of the first projects I researched while thinking about this project was Botanicus Interacticus, a multi-faceted work funded by the Disney Research lab.

Botanicus Interacticus is “a technology for designing highly expressive interactive plants, both living and artificial. Driven by the rapid fusion of computing and living spaces, we take interaction from computing devices and places it in the physical world using livings plants as an interactive medium.” (Sato 2012)

Botanicus Interacticus

This project uses the electrical currents in plants to enable a person to create music by touching a plant. They also created artificial plants that responded to touch. It was a look at how we can program interactivity into the world around us using the electricity that is inherent in it. I was interested in understanding how our gestures could be examined and used to reveal new ways of connecting to nature, and this project was a big influence on my thinking.

Sonnengarten

Sonnengarten is an interactive light installation that reveals the relationship of plants and light. When a user presses their hand against the plant installation, "for a short time the plant is symbolically deprived of its energy of live" (Sonnengarten 2015), so the light in the installation changes. This project had me thinking about how the lack of light indoors can affect a plant's growth, and how its survival is reliant on the person taking care of it. The cycle of dependency came to mind, and I began to think about how to connect the ideas from Grow You a Jungle to this new project.

Final Thoughts

This project was actually a lot more challenging than I initially expected, and that was mostly due to working with the copper tape. It was an exercise in learning your material and pushing it to the limits of its use. In working with the circuit and doing all the troubleshooting I came to a stronger understanding of how to fix connection issues. Something as small as a solder connection needs to be checked in the process of discovering the problem.

Cross the Dragon – An Interactive Educational Exhibit


Project Name : Cross the Dragon

Team Members: Norbert Zhao, Alicia Blakely, and Maria Yala

Summary:

Cross the Dragon is an interactive art installation that explores economic changes in developing countries and the use of digital media to create open communication and increase awareness of economic investment from global powers in developing countries. The main inputs in the piece are a word-find game on a touch interface and an interactive mat. When a word belonging to one of the four fields (Transport, Energy, Real Estate, or Finance) is found, a video is projected onto a touch-responsive mat. Through the touch-sensitive mat, one can initiate another video in response to the found word. The interactive mat plays video through projection mapping. In order to interact with the mat again, one has to find another word. We have left the information in the videos open to interpretation, to keep it unbiased and to build a gateway to communication through art and digital gaming practices.

What we wanted to accomplish:

Through this interactive installation, the idea was not to impose preconceived notions about the educational information provided. The installation is designed to encourage a positive thought process through touch, infographic video and play. Through this interface we can conceptualize and promote discussion of information that is not highly publicized, widely accessible, or generally discussed in Canada.

Ideation & Inspiration:

Ideation

This project was inspired by a story shared by one of our cohort. She described how Chinese companies are building a new artificial island off the beach in downtown Colombo, her hometown, and are planning to turn it into Sri Lanka's new economic hub. At the same time, in the southern port of Hambantota, the Sri Lankan government borrowed more than $1 billion from China for a strategic deep-water port, but couldn't repay the money, so it signed an agreement entrusting the management of the port to a Chinese national company for 99 years.

For us, such news was undoubtedly new and shocking. With China's economic growth and increasing voice in international affairs, especially after the Belt and Road Initiative was launched in 2013, China began to carry out a variety of large investment projects around the world, especially in developing countries in Asia and Africa, where its investment in infrastructure projects has peaked. We also discovered a series of reports from The New York Times, How China Has Become a Superpower, which contains detailed data about China's investments in other countries and project details.

This project therefore focused on the discussion around the controversy of this topic: some people think these investments have helped local economic development, while others think it is neo-colonialism. From the beginning of concept development, we knew this topic would have an awareness aspect. It was important to portray a topic that has a profound effect on the social and cultural lives and identities of people across the globe, a heterogeneous subject in the sense that it stems into other socioeconomic conditions. After discussion and data research, we decided to focus on China's growing influence, especially economic influence, in Africa.

Finally, we decided to explore this interesting topic through interactive design. We came up with the idea of creating a mini-exhibition through which visitors could explore the story behind this topic by interacting with a game. When visitors first come into contact with the exhibition, they do not have detailed information about it, but after a series of game interactions, detailed information about the exhibition theme is presented in the form of intuitive visual design. The resulting self-exploration process gives visitors a deeper impression of the topic.

Inspiration

These three interactive projects were chosen because of how they combine an element of play and the need for discovery in an exhibition setting. They engage the audience both physically and mentally, which is something we aim to do with our own project.

Case Study 1 – Interactive Word Games

An interactive crossword puzzle made for the National Museum in Warsaw for their “Anything Goes” exhibit that was curated by children. It was created by Robert Mordzon, a .NET Developer/Electronic Designer, and took 7 days to construct.


Case Study 2: Projection Mapping & Touch interactions

We were interested in projection mapping and explored a number of projects that used projection mapping with board games to create interactive surfaces that combined visuals and sounds with touch interactions.


Case Study 3: Interactive Museum Exhibits

ArtLens Exhibition is an experimental gallery that puts you – the viewer – into conversation with masterpieces of art, encouraging engagement on a personal and emotional level. The exhibit features a collection of 20 masterworks of art that rotate every 18 months to provide new, fresh experiences for repeat visitors. The art selection and barrier-free digital interactives inspire you to approach the museum's collection with greater curiosity, confidence, and understanding. Each artwork in ArtLens Exhibition has two corresponding games in different themes, allowing you to dive deeper into understanding the object. ArtLens Exhibition opened to the public at the Solstice Party in June 2017.


Technology:

We combined two of our projects, FindWithFriend and Songbeats & Heartbeats, for our final project. The aspects of the two projects we were drawn to are the interactions. We wanted to create an educational exhibition that has a gamified component and encourages discovery, almost like the Please Touch Museum.

Interactions:

We combined the touch interactions from the wordsearch & interactive mat.

Components:

P5, Arduino, PubNub, Serial Connection

Brainstorm


Team brainstorming the user flow and interactions


Refined brainstorm diagram showing user flow, nodes, and interactions

How it works:

The piece works like a relay race, where one interaction on an iPad triggers a video projection onto an interactive mat. When a sensor on the mat is touched, it triggers a different projection showing the audience more data and information.

The audience is presented with a wordsearch game in a p5 sketch (SKETCH A) with the four keywords "Transport", "Energy", "Real Estate" and "Financial", representing the industries in which China has made huge investments. Once a word is found, e.g. "Transport", a message is published to PubNub and received by a second p5 sketch (SKETCH B), which plays a projection about transport projects. When the audience touches the mat, the sensor value (ON/OFF) is sent via an Arduino/p5 serial connection to SKETCH B, which stops the Transport projection and displays more information about China's transport projects in different African countries.

Step 1: Sketch A – Wordfind game

The viewer's initial interaction with the "Cross the Dragon" exhibit is initiated in the word-find game, created using p5.js. The gameboard is created using nested arrays that form the word-find matrix. Each tile on the board is created from a Tile class with the following attributes: x, y coordinates; RGB color values; a color string description based on the RGB values; a size for its width and height; booleans inPlay, isLocked, and isWhite; and a tile category that indicates whether the tile is for Transport, Finance, Real Estate or Energy.

To create the gameboard, three arrays were used: one containing the letters for each tile; another containing the values indicating whether a tile was in play or not, made up of 1s and 0s (tiles in play, i.e. tiles containing letters of the words to be found, were marked with 1s, and decoy tiles with 0s); and a last array indicating the tile categories using a letter, i.e. T, F, R, E, and O for the decoy tiles. The matrix was created by iterating over the arrays using nested for loops.
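
A minimal sketch of that construction (the array contents and tile size are placeholders; the attribute names follow the description above):

    class Tile {
      constructor(x, y, letter, inPlay, category) {
        this.x = x; this.y = y;
        this.letter = letter;
        this.inPlay = inPlay;     // true if part of a hidden word
        this.isLocked = false;    // set once counted, so it isn't recounted
        this.isWhite = true;      // all tiles start out white
        this.category = category; // 'T', 'F', 'R', 'E', or 'O' for decoys
      }
    }

    const tiles = [];
    const size = 60; // tile width/height in pixels
    for (let row = 0; row < 11; row++) {
      for (let col = 0; col < 11; col++) {
        const i = row * 11 + col;
        tiles.push(new Tile(col * size, row * size,
          letters[i], inPlayFlags[i] === 1, categories[i]));
      }
    }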


The arrays used to create the game board tile matrix of clickable square tiles


Generating the 11×11 game board and testing tile sizes

Once the tile sizes were determined, we focused on how the viewer would select the words for the four industries. The original Find With Friends game catered to multiple players, identifying each with a unique color. Here, however, there is only one input point, an iPad, so we decided to have just two colors show up on the game board: red to indicate a correct tile and grey to indicate a decoy tile. When the p5 sketch is initiated, all tiles are generated as white and marked with the booleans inPlay and isWhite. When a tile is clicked and its inPlay value is true, it turns red; if its inPlay value is false, it turns grey.


Testing that inPlay tiles turn red when clicked

The image below shows testing of the discover button. When a word is found and the discover button is clicked, a search function loops through the gameboard tiles, counting the tiles that are inPlay and have turned red; a tally of the clicked tiles is recorded in four variables, one for each industry. There are 9 Transport tiles, 6 Energy tiles, 10 Real Estate tiles, and 7 Finance tiles. Once the loop is complete, a checkIndustries() function is called to check the tally. If all the tiles in a category have been found, the function sets a global variable currIndustry to the found industry and then calls a function to pass that industry to PubNub. When a tile is found to be in play and clicked, it is locked so that the next time the discover button is clicked, the tile is not counted again.
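
A minimal sketch of that check (the tile counts come from the paragraph above; the pubnub object and channel name are placeholders):

    const TOTALS = { Transport: 9, Energy: 6, RealEstate: 10, Finance: 7 };
    let currIndustry = null;

    function checkIndustries(tally) {
      // tally holds the counts of red, inPlay tiles, e.g. { Transport: 9, ... }
      for (const industry in TOTALS) {
        if (tally[industry] === TOTALS[industry]) {
          currIndustry = industry; // remember which industry was completed
          pubnub.publish({ channel: 'cross-the-dragon',
                           message: { industry: currIndustry } });
        }
      }
    }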


Testing that inPlay tiles are registered when found and that already found tiles are not recounted for the message sent to PubNub.

Step 2: Sketch B – Projection Sketch – Part 1

When the sketch initializes, a logo animation video, vid0, plays on the screen, and a state variable initialized as 0 is set to 1 in readiness for the next state, which plays video 1, a general information video on a found industry.

When the second p5 sketch receives a message from PubNub, it uses the string in the message body indicating the current industry to determine which video to play. The videos are loaded in the sketch's preload function and played in the body of the HTML page crossthedragon.html. During testing we discovered that we had to hide the videos using CSS and show them only when we wanted to play them, re-hiding them afterwards, because otherwise they would all be drawn onto the screen overlapping each other. When the sketch is loaded, the videos are added to two arrays: one holding the initial videos and another holding the secondary videos that provide additional information. The positions in both arrays are the same for each industry: Transport at index 0, Energy at 1, Real Estate at 2, and Finance at 3.

Once a message is received, a function setupProjections(theIndustry) is called. The function takes the current industry from the PubNub message as an argument and uses it to determine which video should be played. It sets the values of the globals vid1 and vid2 by using the industry to pull the right video from each of the two arrays, e.g. if Transport was found, vid1 = videos1[0] and vid2 = videos2[0].

A function makeProjectionsFirstVid() is called. This function stops the initial “Cross the Dragon” animation from playing and hides it, then hides vid2 and plays vid1. It then updates a global variable state to 2 in readiness for the second in-depth informational video.

Note: vid0 only plays when state is 0, vid1 only plays when state is 1, and vid2 only plays when state is 2.
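
Putting those pieces together, a minimal sketch of the state machine (function and variable names follow the description above; the industry-to-index lookup is an assumption):

    let state = 0; // 0 = logo animation, 1 = general video, 2 = detail video
    let vid1, vid2;

    function setupProjections(theIndustry) {
      const i = ['Transport', 'Energy', 'Real Estate', 'Finance'].indexOf(theIndustry);
      vid1 = videos1[i]; // general information video for the found industry
      vid2 = videos2[i]; // in-depth video for the same industry
      makeProjectionsFirstVid();
    }

    function makeProjectionsFirstVid() {
      vid0.stop(); vid0.hide(); // stop and hide the "Cross the Dragon" animation
      vid2.hide();              // keep the in-depth video hidden for now
      vid1.show(); vid1.play();
      state = 2;                // ready for the second, in-depth video
    }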

Step 2: Sketch B – Projection Sketch – Part 2: Arduino over serial connection

The second in-depth video is triggered whenever a signal is sent over a serial connection from the Arduino, when the viewer interacts with the touch-sensitive mat. Readings from the 3 sensors are sent over a serial connection to the p5 sketch. During testing we determined that using a higher threshold for the sensors produced the desirable effect of reducing the number of messages sent over the serial connection, thus speeding up the p5 sketch and reducing system crashes. We set the code up so that messages were only sent when the total sensor value recorded was greater than 1000. The message was encoded in JSON format. The p5 sketch parses the message and uses the sensor indicator value passed, i.e. either 0 or 1, to determine whether to turn on the second video. If the sensor indicator is 0 (OFF), the video start is not triggered; if the value is 1 (ON), the video is triggered. The makeProjectionsSecVid() function triggers the start of the video: if the state is 2, vid1 is stopped and hidden, and vid2 is shown and played on a loop. An isv2Playing boolean is set to true and is used to determine whether to restart the video, preventing it from jumping through videos if one is already playing.
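
A minimal sketch of that receiving side (the JSON shape and the p5.serialport readLine() call are assumptions based on the description above):

    let isv2Playing = false;

    function serialEvent() {
      const data = JSON.parse(serial.readLine() || '{}'); // e.g. {"sensor": 1}
      if (data.sensor === 1 && state === 2 && !isv2Playing) {
        makeProjectionsSecVid();
      }
    }

    function makeProjectionsSecVid() {
      vid1.stop(); vid1.hide();
      vid2.show(); vid2.loop(); // play the in-depth video on a loop
      isv2Playing = true;       // prevents restarts while a video is playing
    }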

Electronic Development 

While choosing materials, I decided to use a force-sensitive resistor with a round, 0.5" diameter sensing area. This FSR varies its resistance depending on how much pressure is applied to the sensing area: the harder the force, the lower the resistance. When no pressure is being applied, its resistance is larger than 1MΩ, and it can sense applied force anywhere in the range of 100g to 10kg. To make running power along the board easier, I used an AC-to-DC converter that supplied 3V and 5V power along both sides of the breadboard. Since the FSR sensors are plastic, some of the connections came loose in transit, and one of the challenges was having to replace the sensors a few times. When this occurred, I would follow up with quick testing through the serial monitor in Arduino to make sure all sensors were active. To save time, I soldered a few extra sensors to wires so the old ones could be switched out easily if they became damaged.


Materials for the Interactive Mat Projection

  • Breadboard
  • Jumper cables
  • Flex, Force, & Load Sensor x 3
  • YwRobot Power Supply
  • Adafruit Feather ESP32
  • Wire
  • 4×6 piece of canvas material
  • Optoma Projector
  • 6 x 10k resistors

Video Creation Process

Information on the four most representative investment fields was extracted from a database of investment relationships between China and Africa: transport, energy, real estate and finance. Transport and real estate are very typical, because two famous parts of China's infrastructure investment in Africa are railway and stadium construction. In addition, energy is an important part of China's global investment. The finance field corresponds to the most controversial part of China's investment: when the recipient country cannot repay a huge loan, it needs to exchange other interests, and Sri Lanka's port is a typical example.

Initially, we wanted to present the investment data in the four fields through infographics, but after discussion we believed video to be a more visual and attractive way to present it. So we made two videos for each field. When visitors find the correct word for a field, they are shown the general situation of China's investment in that field, in the world and in Africa; this is video 1, including data, locations, times and so on. When visitors press the mat, the projector plays a more detailed video about the field; this is video 2, with details of specific projects.

For video 1, we used Final Cut to animate infographics produced in Adobe Illustrator, and added representative project images from the field in the latter half of the video, so that visitors get a general understanding of the field.

For video 2, we used Photoshop and Final Cut to edit some representative project images from the field, and then added key words about each project to the image, so that visitors can have a clear and intuitive understanding of these projects.

The Presentation

The project was exhibited in a gallery setting in the OCAD U Graduate Gallery space. Below are some images from the final presentation night.


Setting up the installation


People interacting with the Cross the Dragon installation

Reflection and Feedback

Many of the members of the public who interacted with the Cross the Dragon exhibit were impressed by the interactions and appreciated the educational qualities of the project. Many people stuck around to talk about the topics brought up by the videos, asking to know more about the projects, where the information came from and how the videos were made. Others were more interested in just the interaction, but most participants did engage in open-ended dialogue without being prompted. Overall, feedback was positive. People seemed to be really interested in changing the informational video after finding a word in the puzzle. Some participants suggested slowing down the videos so that they could actually read all the information in the text.

For future iterations of this project, we would like to explore projection mapping further so that we can make the interactive mat more engaging. We noticed that once people found out they could touch the mat, they tended to want to keep touching and exploring it. We had spoken earlier in our brainstorming about including audio and text with animation, and we believe having more sensitive areas on the mat to create more interactions would be a good way to include these. It was also suggested that we project the videos onto a wall as well, so that people around the room would be included in the experience without having to be physically at the exhibition station.

References

Code Link on Github – Cross The Dragon

P5 Code Links:

Hiding & Showing HTML5 Video – Creative Coding

Creating a Video array – Processing Forum

HTML5 Video Features – HTML5 Video Features

Hiding & Showing video – Reddit JQuery

Reference Links:

1] https://learn.adafruit.com/force-sensitive-resistor-fsr/using-an-fsr

2] http://osusume-energy.biz/20180227155758_arduino-force-sensor/

3] https://gist.github.com/mjvo/f1f0a3fdfc16a3f9bbda4bba35e6be5b

4] http://woraya.me/blog/fall-2016/pcomp/2016/10/19/my-sketch-serial-input-to-p5js-ide

5] https://www.nytimes.com/interactive/2018/11/18/world/asia/world-built-by-china.html

6] http://www.sais-cari.org/

7] http://www.aei.org/china-global-investment-tracker/


Sound and Heart Beats – Interactive Mat

Music Beats & Heart beats by Alicia Blakey


Music and Heart Beats is an interactive installation that allows users to wirelessly send sounds and interact with a digital record player. Through the installation, you can either send someone a sound beat or a heartbeat. Listening to certain music, or to the sound of a loved one's heartbeat, has been shown to help improve mood and reduce anxiety.

If a user opens the application connected to the interactive record player, they can see when others are playing songs. The digital record player starts spinning when a user interacts with the app that corresponds with the installation. LED lights at the pin of the record player indicate that music is being played, and the interaction can also be initiated through touch sensors.

This art installation also conceptualizes the experience of taking a moment to engage your senses of hearing and touch: to have fun, take a few minutes out of your day to feel good, and listen to sounds that are good for your body and mind.




Ideation

Initially, I had a few variations of this idea that encompassed the visuals of music vibrations and heartbeat blips. After the first iteration, the art and practice of putting on a record made me engage with the act of listening more. The visual aspect of watching a record play is captivating in itself; I always notice that after someone puts on a record, they stay and watch it spinning. There is something mesmerizing in the intrinsic components of this motion. I wanted to create an interaction that was more responsive with colour, light and sound. Expanding on the cyclical nature of the turntable as a visual, the intent was to create an environment.

 


 

Development

While choosing materials, I decided to use a force-sensitive resistor (FSR) with a round, 0.5″-diameter sensing area. An FSR varies its resistance depending on how much pressure is applied to the sensing area: the harder the force, the lower the resistance. When no pressure is applied, its resistance is larger than 1 MΩ, and it can sense applied force anywhere in the range of 100 g to 10 kg. I also used a WS2812B NeoPixel strip enveloped in plastic tubing. The LED strip required 5 V power while the Feather controller required 3 V, so to make running power along the board easier I used an AC-to-DC converter that supplied 3 V and 5 V rails along both sides of the breadboard.

 

 


Coding 

When initializing the video, testing showed it worked best for the video sequence to sit over the controller, which I did by changing the z-index styling. My next step was to apply a mask style over the whole desktop page to prevent clicks from altering the p5 sketch. I styled controller.js to be in the same location on both desktop and mobile so the two could share PubNub x/y click locations. The media.js file connects with controller.js for play and stop commands. One of the initial issues was a long loading time for the mobile client; the solution was to set, with inline JavaScript, a variable we could use to stop the mobile client from running the onload audio function. The mobile and desktop sites worked on Android but not on iPhone: PubNub would initiate on Android phones, but in the end I could not debug the iOS issue. If the desktop HTML page was still loading its media.js while a mobile client was trying to communicate with it, the result was unexpected behaviour. A possible solution would be a callback function on the desktop that tells the mobile client it has loaded.
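As a rough sketch of the layering and signalling described above, assuming a p5.js sketch and the PubNub v4 JavaScript SDK (the file name, channel name and keys below are placeholders, not the project's actual values):

let vid;

function setup() {
  createCanvas(windowWidth, windowHeight);
  vid = createVideo('record-player.mp4');  // placeholder file name
  vid.position(0, 0);
  vid.style('z-index', '2');               // video sequence sits over the p5 canvas
  vid.hide();                              // hidden until a play command arrives
}

// play/stop commands of the kind media.js receives from controller.js
function playVideo() { vid.show(); vid.loop(); }
function stopVideo() { vid.stop(); vid.hide(); }

// One fix for the loading race: once the desktop media is ready,
// announce it so the mobile client knows it can start communicating.
const pubnub = new PubNub({ publishKey: 'pub-...', subscribeKey: 'sub-...' });
function mediaLoaded() {
  pubnub.publish({ channel: 'record-player', message: { desktopReady: true } });
}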

 

 

 


 

Materials

  • Breadboard
  • Jumper cables
  • Flex, Force, & Load Sensor x 3
  • YwRobot Power Supply
  • Adafruit Feather ESP32
  • Wire
  • 4×8 Canvas Material
  • Optoma Projector
  • 6 x 10kΩ resistors
  • 3.2 ft plastic tubing

 

I decided to use a breadboard instead of a protoboard this time because the touch-sensitive mat was large. For the prototype to remain mobile I needed to be able to disconnect the LEDs and power converter, which made it easier to roll the mat up and quickly reconnect everything. Since I was running over 60 LEDs, I used a 9-volt supply fed through the converter. I originally tested with 3.7 kΩ resistors but found the sensors were not very responsive; after replacing them with 10 kΩ resistors, the mat varied much more in sensitivity and was more accurate. This makes sense for a voltage-divider reading: with a fixed resistor R, the output is Vout = Vcc · R / (R + R_FSR), so a larger R spreads the FSR's resistance range over more of the usable voltage swing.

 

The outcome of my project was interesting: people were really engrossed in just watching the video projected onto the interactive mat. Controlling the LEDs was a secondary interaction that users seemed to enjoy, but simply watching the playback while listening to music seemed to induce a state of calm and happiness. Feedback on the installation was very positive; it was noted that the projection was hypnotic in nature, which was exactly the state of calm and enjoyment the installation was designed to bring. Although the LEDs were very responsive to the touch sensors, there was some flicker, which I suspect was due to the converter failing; I had purchased it used, and after this experience with the YwRobot converter I would buy new for other projects. Other comments suggested adding another interaction to the p5.js sketch so users could control the motion of the record in the video with the sensors. The overall reaction was very promising for this prototype, and I'm extremely happy with how it concluded: it produced the definitive emotional reaction it was designed for.

https://github.com/aliciablakey/SoundBeats2HeartBeats.git

 


 

References

https://learn.adafruit.com/force-sensitive-resistor-fsr/using-an-fsr

http://osusume-energy.biz/20180227155758_arduino-force-sensor/

https://gist.github.com/mjvo/f1f0a3fdfc16a3f9bbda4bba35e6be5b

http://woraya.me/blog/fall-2016/pcomp/2016/10/19/my-sketch-serial-input-to-p5js-ide

DISCHORD / SPACE QUEST PIZZA

By Ladan, Shikhar, Peiheng


Project description:

Space Quest Pizza is an endless arcade game with a simple goal (with a twist): collect the pizza and avoid the enemies. The goal may be simple, but it is the interactions that add difficulty, as multiple players control the same character and have to work together to survive.

 

GitHub Link: https://github.com/ace5160/dischord_game


Ideation:

1. Brainstorming


Initially with this project, we wanted to explore how to combine music and visualizers into a game. We decided as a group to work with our strengths, which were game design and visual design.

In the meeting, we talked about ideas we could execute: a music-based dungeon crawler, or a MIDI-controller game (Guitar Hero inspired). We talked about pros and cons, feasibility, and the skills each of us could contribute to the project.

Finally, we had a direction. We wanted to create an aesthetically strong game that incorporated music somehow. Our initial idea was a dungeon crawler with an aspect of cooperation that players had to figure out as they played. Music drove the aesthetic and rhythm of the game: every time a player got a power-up, it would change the genre of the music playing as well as the look of the game. We named it DISCHORD.

2. Dischord game

These are the initial sketches for our first game idea, Dischord.


In the first ideation session we talked about what we would have liked to explore with this project. There was an interest in gameplay and music, and we came up with a dungeon crawler driven aesthetically and dynamically by music.

We started building the game with the p5.play games library. We created the game before we had been fully introduced to PubNub, thinking we could build the game out fully and incorporate the network afterwards, not realizing that the network would constrain the type of messages we could send.

In our first meeting/question session with Nick and Kate, we realized we couldn't simply bolt PubNub onto the backend after creating the game. They both made suggestions on how to move forward; the one we felt we could execute in the time we had was to create a browser-based game with a phone controller. Once we had a pivot point, we tried to fold their suggestion into the game we already had, but kept finding it difficult to connect with PubNub.

We were not able to get the p5.play library to work with PubNub, so we decided to start from scratch, building on a piece of code that used a second-screen controller to move a ball with the device's built-in accelerometer. We talked about what kind of interaction we wanted, and once we simplified it to multiple players controlling one avatar, we started building out the game. With the controls reduced to moving a ball up and down on screen, we were able to start world-building and defining the goals of the game. Moving to this simpler code skeleton is also when we moved away from the Dischord concept and toward Space Quest Pizza.

 

3. Space Quest Pizza

As mentioned above, we adjusted the game architecture because of PubNub; this idea came into existence when our old game could not be achieved due to technical issues. We went to the game lab and spoke to them about interactions in games, and we looked at a few games that used unique interactions in the context of networking. This is what drove us to Space Quest Pizza: a simple avoid-and-collect game that requires immense collaboration and dialogue between all players.


In the end, we decided that the game consists of a master screen and several mobile devices. Each user gets one of four buttons (up, down, left or right) and controls the movement of the game character on the master screen through these different keys.

The game character needs to avoid the enemy characters that constantly appear in the game; if it touches an enemy, the character dies. Meanwhile it should try its best to catch the falling pizzas and score points, with the score displayed at the top of the master screen.

At the same time, on the first page of the game we set two options: a two-person group and a four-person group. In the two-person group, the device interfaces get an up/down button and a left/right button; in the four-person group, the devices get up, down, left and right buttons respectively. Users scan a QR code on the first page of the master screen to get the button interface on their phone. The QR code does not carry any explanation: users need to press their key to find out which direction they have. We wanted this sense of mystery to make the game more fun and to get users working closely together from the beginning.

 

Coding process:

1. Game skeleton

The game idea derived from our brainstorming, where we wanted a multiplayer game that was intense and in which music was a big element.

Dischord was supposed to be a multiplayer, music-based dungeon crawler where all players had the common objective of surviving and reaching the goal. All the obstacles and power-ups were based on three music genres (pop, rock and R&B), and the obstacles behaved like a visualizer.

In terms of the code for Space Quest Pizza, we started off with p5.play to help us with collisions and sprite creation. After working on the level design, we realized that establishing a PubNub connection alongside p5.play was not feasible.

We went back to the drawing board and changed the idea; that's where Space Quest Pizza started, this time without p5.play.

For the enemies, we used a chase ("keep") function to make them pursue the player. Each enemy targets a point in the area around the player, which gives each one a separate behaviour of its own.

For the collisions, we simply compared the x and y coordinate values of the enemies and the player.
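A simplified sketch of that chase-and-collide logic (p5.js; the offsets, speed and the gameOver() handler are illustrative, not the project's actual values):

let player = { x: 200, y: 200, size: 24 };
// each enemy aims near, not at, the player, so the group doesn't clump together
let enemies = [
  { x: 0,   y: 0,   offX:  30, offY: -20 },
  { x: 400, y: 0,   offX: -40, offY:  25 },
  { x: 0,   y: 400, offX:  10, offY:  40 },
];
const ENEMY_SPEED = 1.5;

function updateEnemies() {
  for (const e of enemies) {
    // steer each enemy toward its own target point around the player
    const tx = player.x + e.offX;
    const ty = player.y + e.offY;
    const d = dist(e.x, e.y, tx, ty);
    if (d > 1) {
      e.x += ((tx - e.x) / d) * ENEMY_SPEED;
      e.y += ((ty - e.y) / d) * ENEMY_SPEED;
    }
    // collision: compare the x and y coordinates, as described above
    if (abs(e.x - player.x) < player.size && abs(e.y - player.y) < player.size) {
      gameOver();  // hypothetical handler
    }
  }
}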

 

Music:

Music was an important part of both game ideas. For Dischord, as seen in the video, the concept was that the environment moved according to the beat of the music; once the player got a power-up, it changed the environment's colour and rhythm. The video below shows the initial idea of how the environment would move to the music. This was something we ultimately wanted to add to Space Quest Pizza but didn't have time for.

The music that ended up in the background of SQP was composed by Ladan; once we heard it, we agreed as a group that it went well with the gameplay.

 

Pubnub connection:

In the initial game plan, all players' characters' actions would be displayed synchronously on all users' interfaces, and we tried to achieve this interaction across devices through PubNub. At the beginning, we had a fairly naive understanding of PubNub: we believed that data, images and functions could all be transmitted through it. After many coding attempts and a discussion with Kate, we found that PubNub cannot transmit images, let alone animations, so this plan was undoubtedly difficult to implement.

So, as mentioned in the ideation section, Nick suggested we adjust our idea to a more achievable plan: instead of displaying the game synchronously across multiple devices, all users entering the game control the movement of a single character on the master screen through a handheld device. The only messages sent across PubNub would be up, down, right and left.

Therefore, in the final game, users click the buttons on their phone screens to control the movement of a character on the master game screen, earning points for eating pizza and avoiding enemies. We assigned the values 1 to 4 to the up, down, right and left buttons; when PubNub delivers a value to the master screen, the character's x/y position changes according to the value received, producing the movement.
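A minimal sketch of that protocol, assuming the PubNub v4 JavaScript SDK (the keys, channel name and step size are placeholders):

const pubnub = new PubNub({ publishKey: 'pub-...', subscribeKey: 'sub-...' });

// phone controller: each button publishes its value, 1 to 4
function sendDirection(value) {  // 1 = up, 2 = down, 3 = left, 4 = right
  pubnub.publish({ channel: 'sqp-controls', message: { dir: value } });
}

// master screen: move the character whenever a value arrives
let character = { x: 300, y: 300 };
const STEP = 10;

pubnub.subscribe({ channels: ['sqp-controls'] });
pubnub.addListener({
  message: (m) => {
    const d = m.message.dir;
    if (d === 1) character.y -= STEP;
    if (d === 2) character.y += STEP;
    if (d === 3) character.x -= STEP;
    if (d === 4) character.x += STEP;
  },
});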

 

Visual design:

One of the major decisions that drove the design aesthetic was that our group members wanted to explore pixel art. Once we decided on the look and feel of the game, it was easy to move forward with designing the layout and characters. We chose pixel art to support the nostalgic feeling of video games, and we wanted to combine that retro arcade feel with the modern technology of networking to give the game emotional resonance.

We went through a few iterations of the background first.


We picked a space theme for the game, nostalgic for the arcade feel of games like Galaga and Asteroids. After that we began our pixel-art iterations, starting simple and improving the visuals from the bottom up. The colour palette was a dark, blue-tinted hue; for the game entities we used a lighter palette so that the main elements and the background were easy to tell apart.


To make the game visually more relevant to the theme, we downloaded an alien-language font from the internet and made different buttons to display on the phone.

 

User testing on the last day:

Feedback:

In the game mechanics, we planned to award one point for every pizza the character ate, to improve players' motivation. Due to time and code constraints, this function was not fully implemented.

In addition, the QR codes were close to each other and carried no hints, so when a group of people rushed to try the game, as on presentation day, some scanned the same QR code, causing some confusion.

We received positive feedback on the look and feel of the game. One part we could have grown was the achievement portion of the gameplay: it wasn't clear to players what they were supposed to do without us telling them, and it also wasn't clear when the game ended.

Future iterations:

We plan on adding a few new features in the game in terms of mechanics and interactions.

  • We plan to randomize the controls between players at some point mid-game, to add another level of complexity and push players to collaborate even more.
  • We want to show how long the players stay alive, to make them feel like they are achieving something.
  • We plan on making this an exhibition game where the controls are split depending on the number of people in the room, which we plan to achieve with an ultrasonic sensor at the entrance that detects when someone enters.
  • For the next iteration we want to incorporate music into the gameplay: sounds for when the player moves as well as background music, plus a small visualization with the stars moving and twinkling to the music, to give the gameplay more dynamism.

 

References:

  1. https://molleindustria.github.io/p5.play/docs/classes/Group.html
  2. http://molleindustria.github.io/p5.play/examples/index.html?fileName=asteroids.js
  3. https://creative-coding.decontextualize.com/making-games-with-p5-play/
  4. https://www.youtube.com/watch?v=4cusD-ut9I0
  5. https://phaser.io/tutorials/getting-started-phaser3/part5
  6. https://www.pubnub.com/tutorials/javascript/multiplayer-game/
  7. http://spaceteam.ca/
  8. https://p5js.org/
  9. https://forum.processing.org/two/discussion/23531/how-to-pause-play-a-sketch-with-the-same-button
  10. https://www.reddit.com/r/javascript/comments/5mjcdh/is_there_a_way_to_pause_frames_in_p5js/
  11. https://www.pubnub.com/developers/demos/pongnub/

The Flower of Life

Experiment 4: Network


Team Members: Naomi Shah, Carisa Antariksa, Mazin Chabayta

Code: https://github.com/mazchab/Life

Project Description

‘The Flower of Life’ is a kinetic sculpture that represents the life and death rates of various countries around the world through two oppositely spinning disks. The installation visually represents populations through the speed at which the disks spin, while an accompanying screen communicates the quantitative figures. The goal of the sculpture is to physicalize and merge the ideas of life and death into a single whole that is almost pulsating with life.

How it works

A user steps up to ‘The Flower of Life’ installation, enters the name of a country of their choice into one laptop, and hits ‘enter’. Quantitative birth- and death-rate data then appears on a second laptop screen. This, in turn, drives the installation, causing two disks mounted on 360-degree servo motors to rotate in opposite directions, animating the information displayed on screen. The orange disc represents births and the black disc represents deaths; ‘The Flower of Life’ signifies the eternal circle of life through its constantly rotating disks.


Digital Visualization of the Kinetic Sculpture

Ideation

PHASE 1

Before splitting into groups, a number of us from the cohort decided to approach team formation differently for this experiment. We met for a group brainstorming session so we could arrive at our concepts collaboratively, with each member contributing to as many ideas as possible. Over two days we mapped out our disparate ideas and ‘workshopped’ them with one another, developing around six or seven strong ideas and finally narrowing them down to a couple we were interested in exploring. The concepts developed during this self-initiated workshop let us form teams on the basis of what interested each of us personally.

From the beginning, our group was interested in using physical computing as an element of this experiment, and after Cy Keener's talk a few weeks earlier, we were inspired to experiment with physical computing as a form of data physicalization. Furthermore, we were keen to experiment with bringing global data into the classroom for participants to interact with.

PHASE 2


Converging into an idea

Our ideation process began by assessing the kind of dataset we wanted to represent visually through our installation. We wanted the project to speak to everyone, regardless of where they are from or what their background is. Our cohort includes people from many different parts of the world, and we wanted to bring everyone in front of the sculpture to have a relationship with it. Continuing that line of thought, we eventually found the type of data that has a presence inside every one of us: life and death, through population numbers.

Taking on the task of visualizing life and death can be tricky because they are subjective experiences with different meanings for each of us. Yet they are natural occurrences, happening every second of every minute all around the world, and we wanted to communicate that. We also wanted to avoid a depressing impact on our audience, so the visual representation could not be unpleasant to look at. Eventually, we settled on a traditional, well-recognized symbol for life, the “flower of life”: a symbol that has been used by different civilizations since the 9th century BC and that today is usually associated with life and birth.


Other iterations of the kinetic sculpture


Final form of the kinetic sculpture

Planning

High Priority Tasks:

We broke our tasks up according to the core elements of the installation and categorized them as high or low priority. The high-priority tasks involved working out the code that would be the foundation of the project and building a basic prototype of the installation. The low-priority tasks extended to refining the experience of interacting with the installation through beautification and conceptual layering. We regularly assessed our tasks against our time frame.

How to pull data from an API using p5js

This was the first time any of us had worked with API data. There were two ways we could approach it: pull synchronous data from a locally hosted JSON file, or pull a simulation of real-time data from a URL API using asynchronous callbacks. The former would require us to use PubNub to meet the networking requirement, while the latter gave us the option of not using PubNub at all. We needed to find data that best reflected our concept, or that let us tweak the concept only minutely, and that would determine whether we used a locally hosted JSON file or a URL API.

Translating quantitative data visually

We needed to make sure we were pulling specific quantitative information from the API data in p5.js that would drive the kinetic sculpture via the Arduino. Our biggest priority was ensuring that the difference between the birth and death numbers actually translated to the sculpture, with the disks moving at varying speeds to represent populations.

Designing and building the kinetic sculpture

While this project had immense possibilities for visual representation, we chose to keep the fabrication simple and work with materials from the university's inventory, given the time constraint. We made quick iteration sketches to determine how it would look and how it should be built, deciding to start with just one pair of disks and to make more for other countries if time permitted.

Low Priority Tasks:

Designing the web pages for input of country and output of quantitative data

We wanted to design the UI for the information input and output pages on both screens to aesthetically complement the installation ‘Flower of Life’. This would give the project an overall sense of finesse and completion while allowing for more engagement from participants.

Multiple disks for representation

We wanted to shortlist a selected number of countries and make a pair of disks for each, representing the population of each of these countries. This would have allowed us to create a more wholesome data visualization through the tangible exploration of data.

Fabrication

Once our concept was ready, we planned the building process. We considered several options for presenting the kinetic sculpture, such as hanging it on a wall or placing it on a pedestal. After realizing that the flower needed to be at eye level for the visual effect to be experienced by users, we decided to build a roughly 6-ft-high wooden pedestal for stability. Next, we quickly sketched the structure and all its parts and sourced most of them from our OCAD facilities.


Materials Used:

20 inches of acrylic sheets with varying colors
Wood for pedestal and frame around rotating disks

Hardware:

2x 360-degree servo motors
1x Breadboard
1x Arduino Micro
1x External power source with wiring
Jumper wires


We started our fabrication process with a consultation with Reza, the Maker's Lab manager, and based on his advice we made some crucial changes to the structure. He advised a cross base rather than a square base, making the structure lighter and easier to move around. At this point, we also decided to add a frame around the disks to hide the flatness of the acrylic and give the sculpture more depth.


Fabrication Process 1

For the disks, we were limited to a maximum weight of 4.8 kg each, since this is the servos' capacity. Although we considered using wood for the disks, acrylic was lighter and more colourful, which suited our requirements better. After sourcing the acrylic, we used the laser cutter to cut the disks.


Fabrication Process 2

Since we had decided on bigger servo motors with more torque to handle the size of the disks, we tested them ahead of time. During testing, we noticed the servos drew too much power from the Arduino board, and we had to find a better power solution. Realizing the sculpture would not be mobile, instead of investing in batteries we used the internal power converter inside a wall adapter: we stripped a USB cable, connected power and ground to the breadboard, and used those terminals as a power source for the servos. That ensured a constant supply of power.


Power supply link  + Cut USB cable

In the spirit of last-minute complications, one of our disks fell and broke into multiple pieces. We tried gluing it back together, but eventually had it laser-cut again on the morning of the presentation.


Setbacks

Coding Process

P5 – Serial Connection to Arduino

Since most of the functions happen in p5, figuring out this portion of the code was a constant challenge; it was also the first time we had used p5 to send data over a serial connection to Arduino. We first divided the p5 code into the main functions needed and wrote them separately. We started from the serial-connection examples we had from class, which gave us some trouble at first, but eventually we were able to maintain a connection between the two.
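A minimal sketch of that connection, assuming the p5.serialport library (and its companion serial server) used in the class examples; the port name and message format are placeholders:

let serial;

function setup() {
  serial = new p5.SerialPort();          // needs the p5.serialport server running locally
  serial.open('/dev/tty.usbmodem1411');  // placeholder port name
}

// send the two mapped disk speeds as one comma-separated line
function sendSpeeds(birthSpeed, deathSpeed) {
  serial.write(round(birthSpeed) + ',' + round(deathSpeed) + '\n');
}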

P5 – API Call & JSON Objects

This was the first time we had pulled data from a live source. It was a challenge to understand what APIs are, how they work, and how each API is called differently. We experimented a lot with API callbacks and even explored using weather conditions from a free online weather API. There were many query tests, from placing the .json information into setup() to making an asynchronous callback on that data in draw(). This was a manual way to call the information, but it would not have been viable because the content of the .json file was quite big (information for roughly 220 countries was listed).


Placing data in setup()

Aside from working with a local .json, we also experimented with a URL for the API. This proved successful: we were able to pull data by specifying the correct path.


Calling data from URL
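As a sketch of that URL approach, using p5's loadJSON with an asynchronous callback (the endpoint path below is illustrative of the population.io API style, not necessarily the exact one we used):

function getCountryData(country) {
  const url = 'http://api.population.io/1.0/population/' + country + '/today-and-tomorrow/';
  loadJSON(url, gotData);  // asynchronous: gotData fires when the reply arrives
}

function gotData(data) {
  console.log(data);       // inspect the JSON, then pull out the fields needed
}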

Eventually, we decided to make the local JSON file work. We based it on information found online in text (.txt) format and converted it accordingly. After extensive research, we worked out the best way to write a JSON array of objects, each object holding the set of data we needed to send to Arduino. We learned the correct way to create that file and call it from a p5 sketch, an excellent learning experience that will be very useful for us in the future. The file was then served successfully through a Chrome web-server extension (200 OK), and the preload function loaded the .json.
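A minimal sketch of the approach that ended up in the project: preload() guarantees the JSON is ready before setup() runs (the file name and field names are placeholders):

let countries;

function preload() {
  countries = loadJSON('countries.json');  // placeholder file name
}

function setup() {
  // e.g. an array of objects, one per country:
  // { "name": "Canada", "birthRate": ..., "deathRate": ... }
  console.log(countries);
}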

P5 – PubNub

Another extremely useful tool we learned during this experiment was PubNub. We needed the user to be able to enter a country name and see the result on the servos, and we needed that connection to be wireless. So we created a PubNub channel called ‘Kinetic Life’ and used its ‘subscribe’ and ‘publish’ functions to send commands over the internet. We assigned one computer to subscribe to the channel and receive data, and the other to publish and send out commands: computer A sent a request, through PubNub, to computer B, which looked up the JSON and sent the relevant response over serial to the Arduino.
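A minimal sketch of that two-computer setup, again assuming the PubNub v4 JavaScript SDK (the keys and the lookUpAndSend() handler are placeholders; the channel name follows the text):

const pubnub = new PubNub({ publishKey: 'pub-...', subscribeKey: 'sub-...' });

// computer A: publish the requested country name
function submitCountry(name) {
  pubnub.publish({ channel: 'Kinetic Life', message: { country: name } });
}

// computer B: receive it, look the country up in the JSON,
// and forward the mapped values over serial to the Arduino
pubnub.subscribe({ channels: ['Kinetic Life'] });
pubnub.addListener({
  message: (m) => lookUpAndSend(m.message.country),  // hypothetical handler
});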

P5 – Mapping Data

One of the main lessons learned in experiment 3 was the correct way to map values. Mapping lets us take any data, regardless of size or complexity, and express it as a simple output, either on screen or for the Arduino. Although it is a single line of code, used correctly it can be a powerful tool.
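For example, squeezing a birth-rate figure into the range the servo code expects (the numbers here are illustrative):

const birthRate = 14.2;                            // births per 1,000 people
const servoSpeed = map(birthRate, 0, 50, 0, 180);  // p5's map(): 0 to 50 in, 0 to 180 out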

Arduino – Data Receiving & Output

The code for the Arduino was fairly simple and to the point. With a serial connection open between p5 and Arduino, it is fairly easy to have the Arduino receive the data and output it through the servos. However, we quickly learned that servos are not as simple as they may seem, and hacking them is not an easy task. After several sessions of trial and error, we were able to control the speed and direction of the servos' rotation based on the data coming in from the p5 sketch.


Final Circuitry

 

Presentation Day


Strengths Observed:

We invited participants to come forward, input the name of a country of their choice, and watch the disks spin as a visualization of its birth and death rates. We observed that many people entered the names of countries they come from or have roots in, and expressed surprise at some of the data. Furthermore, many participants reported feeling mesmerized by the design of the disks, which gave the effect of an optical illusion.

Limitations Observed:

Each time a participant entered the name of a new country, we had to reset the Arduino manually. It would have been more effective to have a reset button, or to let the discs rotate only for a specific time before a new country name was fed into the input box. This would have improved the user experience considerably.

Furthermore, multiple discs representing various countries would have been much more effective as a data visualisation, allowing comparisons between birth and death rates across different geographical regions. Not only would this have been more visually striking, it would also have allowed participants to reflect on growing and declining populations across the world at the same time.

Finally, gaining more control over the speed of the servos would have made the contrast between birth and death rates more pronounced. While we did succeed in influencing the speed slightly, it would probably not have been obvious had participants not been looking for it.

Feedback from Participants:

Creating multiple disks for effective Data Visualisation

Creating multiple disks to represent different countries would have been more impactful because of the comparisons that would arise, while also looking more striking aesthetically. This is something we did intend to do initially; however, troubleshooting the code at every stage occupied much of our time, leaving little for further fabrication.

Improved Data Input

Participants were instructed to start every country's name with a capital letter, which would then show the quantitative data and allow the disks to spin. One limitation of our dataset was that it yielded results only when the input was written in this specific format. Options raised during the feedback session were:

  1. Creating a drop-down menu instead of having participants type the name, avoiding mistakes with capital letters.
  2. Creating a map interface to make the input more striking, immersive and aesthetically pleasing.

Future scope:

Making interactive data visualisation tangible and experiential can be a fantastic way to immerse participants in large datasets that would otherwise be missed as passive, quantitative information. Data physicalization not only gives participants the opportunity to immerse themselves in different contexts, but can also tell an effective narrative through the data it represents.

It has the ability to inspire a sense of wonder at how its underlying technology allows us to explore, create and communicate with each other in new ways.

‘The Flower of Life’ is a first prototype toward this ambitious goal. The experiment could be carried further with diverse datasets and interactions to tell a narrative about our world's growing population, especially in the context of climate change and depleting resources. While digital data visualisation can be effective and interactive as well, tangible, tactile interaction with an installation allows for a more immersive sensory experience.

Another possible outcome of ‘The Flower of Life’ could be a call to action upon interacting with the installation, focusing not on what we already intuitively know, but on where this knowledge could take us in the future.

References

Flower Of Life – A Thorough Explanation. Retrieved November 26, 2018, from https://www.tokenrock.com/explain-flower-of-life-46.html

Population API. Retrieved November 26, 2018, from http://api.population.io/

Servo Won’t Stop Rotating. Retrieved November 26, 2018, from https://arduino.stackexchange.com/questions/1321/servo-wont-stop-rotating

Servo 360 Continuous Rotation. Retrieved November 26, 2018, from https://www.sparkfun.com/datasheets/Robotics/servo-360_e.pdf

Toddmotto/public-apis. Retrieved November 26, 2018, from https://github.com/toddmotto/public-apis

Weather API – Free Weather API JSON And XML – Developer API Weather For Website – Apixu. Retrieved November 26, 2018, from https://www.apixu.com/

Wolfram, S. (2002, January 1). Note (d) For Why These Discoveries Were Not Made Before: A New Kind Of Science | Online By Stephen Wolfram [Page 872]. Retrieved November 25, 2018, from https://www.wolframscience.com/nks/notes-2-3–ornamental-art
