Category: Experiment 5

Cloudy and Tangled Thoughts

By Olivia Prior, Amreen Ashraf, and Nick Alexander

“Cloudy and Tangled Thoughts is an interactive piece which uses conductive fabric to explore the movement of light and space. Participants are invited to sit down and explore. Relax on a comfortable blanket and watch the clouds drift by. An array of irregular objects catch and refract the light, gently moving in relation to your position on the blanket, creating a sense of serenity.”

Audience enjoying final exhibit.

img_3415

GitHub: https://github.com/alusiu/cloud-gazing

 

OVERVIEW

Cloudy and Tangled Thoughts evokes the experience of lying on a blanket, gazing at the sky, watching patterns form and dissipate in the leaves, wind, and clouds.

It consists of a blanket made from traditional and conductive textiles and a lattice of hanging geometric chimes. When participants lie or press on the blanket, lights and servo motors hidden among the chimes activate, causing them to swirl and tinkle. When more people lie on the blanket the pattern of lights and motion becomes more intricate in turn.

We succeeded in fabricating all the necessary technology, creating the code, assembling it, and proving the concept. However, the result did not live up to our vision. The team believes that the idea is strong and the tech is viable, and we will return to this project in order to develop it to a point where it meets our expectations.

 

CONCEPT

Cloudy and Tangled Thoughts started with a feeling. The team wanted to create a screenless installation that evoked peace and wonder. We wanted to use technology to bring people together through a magical experience, in a way that would be unfamiliar to the average user. We envisioned the experience of lying on a blanket watching the clouds make shapes. It was important to us not to create something as simple as an on-off switch, a mechanism most people understand intrinsically, but instead to create a relationship between sensors and output that generated a sense of wonder.

The prompt for the project from the Creation & Computation class was Refine & Combine. We were to return to a previous project and expand on it. While the concept we came up with was not directly related to a previous project any of us had done, we felt confident that our previous work with code, servos, lights, sensors, and fabrication put us at the same place developmentally as we would be if this had been a prior project.

 

PROCESS

The process began with a discussion of the kind of work we wanted to create, along with the kind of skills, technology, and existing projects we wanted to carry forward. When we settled on the concept above we began to brainstorm ways to realize it.

 We knew from the beginning that we wanted to work with a blanket and conductive fabric, but we debated over what form the apparatus hanging above, which needed to interact with the conductive fabric, would take. Since we had begun by imagining looking up at clouds, we researched installations and works of art utilizing cloud imagery and looked for inspiration there.

Cloud inspiration

We decided that a series of geometric shapes would complement the organic, flowing nature of the blanket below. We envisioned multi-coloured plexiglass, laser-cut into geometric shapes, hanging like wind chimes and diffusing light from above as they drifted and tinkled.

We submitted a proposal and consulted with our professors, Kate Hartman and Nick Puckett, on how best to proceed. Kate provisioned us with 16 sq ft of conductive fabric, Velostat, and a sewing machine to experiment with. Nick suggested that, rather than use heavy and difficult-to-cut plexiglass, we look into vellum as our cloud material, as it was light and would keep its shape after being folded.

Experimentation with vellum gave us a lot of data. We liked the way light moved through it and its versatility, and after trying several forms we settled on a triangular prism shape for our cloud-chime objects. However, we did not like the look of the vellum and wanted something more uniform and robust. We settled on a thin plastic and consulted with John Diessel in the Plastics Lab. John suggested that, since we planned to manufacture many identical objects from lightweight plastic, vacuum forming was the best process for us. We built a form out of wood with the help of Reza Safaei in the Maker Lab and returned to the Plastics Lab to begin making what we affectionately came to call “the boys”, all of which is discussed in detail below.

 

Hanging apparatus

Our conductive quilt was designed to control and move elements using servos. At first we went for simple geometric shapes and constructions.

cloud shape ideation

We had started out wanting to control 12 unique shapes, as we had built 12 sensors into our quilt. We became very focused on the quilt, to the point that we had used up much of our allotted time before turning our attention to the servos and hanging apparatus. Our professor Nick Puckett had suggested using vellum, a material that is easy to control because it is so lightweight. The problem we encountered with vellum was that whenever we folded it into the shape we wanted, it would become brittle and break at the folds. We were also growing frustrated with how to control each shape with the servos and how to mount the servos on the ceiling. We considered, but decided against, laser-cutting a mount to hold the servos together, as we felt it was too late in the process to begin something unfamiliar. At this point the team was unsure about using servos to control the shapes at all. Our teammate Olivia suggested buying fans and a relay, so that the quilt would start a fan based on where participants sat; the fan would then blow the shapes around. We did a rapid prototype, using the vellum to construct simple modular shapes, and hung them up to see the effect. We all agreed that the effect the simple shapes created with the lights would look great.

Prototype with vellum and lighting

We decided we liked the shapes and the effect they had. We were still unsure about the fans: the relay we looked into buying was expensive, and we were not sure whether we would have to write new code for the fans. In addition, the relays came with a large constraint: only one attached object per relay could be powered at a time. This, combined with the cost, led us to shelve the idea of using fans.

We made a trip to the Plastics Lab at 100 McCaul to consult on our simple modular shapes. John Diessel suggested we use lightweight acrylic and the vacuum forming machine, and that we fabricate a mold to use with the machine. The largest size the machine could handle was 12×12 inches.

We went back to 205 and visited Reza at the Maker Lab on the 7th floor. He understood exactly what we were looking for and helped us construct a form based on our vellum prototype.

We took the form back to the Plastics Lab, where we were shown how to use the machine to vacuum form our shapes. Each sheet gave us 8 shapes, which meant we could produce a lot of shapes fast. We bought ten 12×12 sheets, five in a translucent colour and five in white.

img_3256

Vacuum form in action

img_3290

It took roughly 5 hours to construct and cut the shapes. Because the Maker Lab and the Plastics Lab were both closed at night, we had to cut the shapes by hand using scissors. This took a long time and was physically taxing on the team.

img_3302

After we had finished molding our plastic shapes, we were still unsure how to hang them from the ceiling grid of the Experimental Media Lab. We decided to use aircraft cable both to hang the apparatus and to clip the shapes to it. We made a trip to Canadian Tire in the morning to buy crimpers for the aircraft cable. Unable to find the right grid to hang, we made an exploratory supply run and came across a barbecue grill. Excited by the image of three circular mounts hanging in a staggered manner, we bought three barbecue grills.

img_4023

At the point when we bought the barbecue grills, we had only 80 hanging shapes, which cut in half gave us about 160. We soon realized that wouldn’t be enough for three grills, which meant another quick run to the Plastics Lab.

This is where we, as a team, should have scaled down rather than up. Building the hanging apparatus consumed a lot of time and energy. We know in retrospect that it would have been better to go with a few large, simple shapes rather than so many small ones. The small shapes made sense in the moment and looked good when hung, but they took a long time to construct.

 

Blanket Controller

Meanwhile we had also been fabricating the blanket. We used documentation from “Intro to Textile Game Controllers Workshop” run by Kate Hartman to fabricate analog sensors from the conductive fabric she gave us.

 

We built several small sensors to test, including one we sewed into a “sandwich” with regular fabric above and below, in order to approximate the effect of the sensor when sewn into the blanket.

img_20181130_183524

The test sensors worked well, and we felt we were ready to scale up and begin fabricating full-size sensors. We laid out a large sheet of paper in order to mark and measure out the approximate size of the blanket.

img_20181130_170444

We decided a size of 4 feet by 4 feet was ideal: large enough for two people to lie comfortably while not too large to manage. We debated for some time the best way to lay out and orient the sensors, with pitches ranging from as few as four sensors arranged in quadrants to dozens arranged in small triangles.

Blanket and sensor ideas 1

sensor ideation

We settled on the final version, pictured below. It allowed each sensor's points of contact to sit at the edge of the blanket, meaning we would not need to run wiring through the blanket proper. It was, we felt, a manageable number of sensors, yet enough to give us many options for interactions in the final code. We also felt it was aesthetically pleasing, and thus an excellent blend of form and function. We selected classic “picnic” fabric in red, blue, yellow, and white gingham to give the device the affordance of a homemade picnic blanket.

img_20181201_182313

We plotted the sensor placement at 3-inch intervals, allowing 3 inches of velostat width per sensor, with conductive fabric cut slightly thinner than the velostat. We ironed the conductive fabric to strips of red-checked cloth, attached the velostat with dabs of hot glue, and folded the two sides together. They were kept in place with a few more dabs of hot glue until they could be sewn together permanently. We took pains to avoid puncturing the conductive fabric, sewing along the outside of the velostat. We left the ends of the conductive fabric trailing out of pockets at either end of the sensor to allow for easy connection later.

 

Building full scale sensor:
Measuring out pattern for cutting fabric and Velostat.

Building full scale sensor:
Measuring Velostat for cutting.

Building full scale sensor:
Pattern tracing onto Velostat.

Building full scale sensor:
Velostat laid out onto our 4×4 ft model

Below: the process for making the sensors.

img_20181206_133808

After attaching two 3-inch-wide lengths of velostat, block out a length of cloth slightly wider.

img_20181206_134108

Cut out the cloth.

img_20181206_134444

Cut out a second length, the same size as the first.

img_20181206_134952

Place and iron lengths of iron-on adhesive.

img_20181206_135230

Iron the conductive fabric to the iron-on adhesive.

img_20181206_135418

Lay the velostat over the conductive fabric.

img_20181206_135552

Use small dabs of hot glue to keep the velostat secure on both sides.

Not pictured: sewing both halves together.

As we completed each sensor, we tested it to ensure it was viable. When all the sensors were sewn and tested, we cut a swatch of blue checked cloth at 4.5×4.5 feet to be the base of the blanket. We measured out and placed our sensors where we wanted them, then pinned them in place.

We conceived of and experimented with a power bus of conductive fabric along two sides of the blanket, to reduce the amount of wiring we would have to attach. We liked this idea as it made use of the blanket's form to inform the function of the installation. However, we discovered that this layout diminished the voltage too much to get reliable sensor readings, and we shelved the idea of the power bus. In retrospect, this should have been a warning sign that the power we were supplying was insufficient for our purposes.
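The voltage drop we saw can be reasoned about as a voltage divider: each velostat sensor forms a divider with its pull-down resistor, and running many sensors off one bus of resistive conductive fabric adds series resistance that eats into the readable range. A minimal sketch of that arithmetic, in JavaScript for illustration; the resistance values below are assumptions, not measurements from our build:

```javascript
// Voltage divider output for one velostat sensor read through a shared bus.
// vcc: supply voltage (volts); rBus: resistance of the conductive-fabric bus run;
// rSensor: velostat resistance under the current pressure; rPull: pull-down resistor.
// All resistances in ohms.
function dividerOut(vcc, rBus, rSensor, rPull) {
  return vcc * rPull / (rBus + rSensor + rPull);
}

// Direct wiring: negligible bus resistance, full reading range.
const direct = dividerOut(5.0, 0, 2000, 1000);
// Shared fabric bus: extra series resistance flattens the signal.
const bussed = dividerOut(5.0, 4000, 2000, 1000);
```

With the same pressure on the sensor, the bussed reading comes out at well under half the direct one, which matches the unusable readings we saw.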

One by one we sewed a hem between the sensors. This fixed them in place on the blanket base, covered up the ratty ends of the sensors, and had the added benefit of making the blanket look softer and more inviting to sit on. We intended to tuck the extra fabric at the outside of the blanket over, making a hem and channel for wiring, keeping the blanket looking nice and minimizing obvious electrical attachment points.

Fabric getting ready to be attached to the sensor

Ironing fabric adhesive

Laying out conductive fabric onto adhesive

Ironing sensor onto fabric

Stitching fabric and sensor

Unfortunately, our trusty sewing machine hit a snag late in production: the housing for the lower bobbin was pushed out of alignment, jamming the machine. While apparently not an uncommon problem, online diagnostics recommended taking the machine in for service rather than attempting a fix as a layperson. Without enough time to get the machine fixed or exchanged before the deadline, this was as far as we would get with our blanket. Luckily all the sensors were secured by this time, and subsequent stitching would have been purely aesthetic.

img_20181207_175015

Code

Our main concern was how to code the blanket to create an interesting relationship between the laid-out sensors and the servo motors above. We were curious how users would explore the interaction between the two separate components. We contemplated a one-to-one relationship (i.e. one servo motor for every sensor). We also considered a rippling effect among the servo motors: when one servo was activated, a chain of the surrounding servo motors would also move.

It was also important to us that the clouds above reflected the participant's position beneath the hanging apparatus. We thought this was interesting because it made the piece about the reflection of interaction.

The design of our quilt provided us with the given aesthetic of “quadrants”. We decided that we could determine the user's position based on the sum of the values from each quadrant. From there we mapped out all of the inputs and outputs that needed to have a relationship.

img_3727

Before we scaled up, we wanted to test the textile analog sensors as inputs controlling a servo motor and LED light strips. We determined the threshold of both sensors when some pressure was placed on them, then used that data to determine when the motors and LED lights should be activated. This was a great initial proof of concept, and we decided to proceed with this base code.
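The proof-of-concept logic was essentially a threshold test: read the analog value, compare it against the pressure threshold we measured, and drive the servo and LEDs when it is exceeded. Sketched here in plain JavaScript rather than our actual Arduino code; the threshold and output values are illustrative:

```javascript
// Decide whether a sensor reading should activate the servo and LED strip.
// reading: 0-1023 analog value; threshold: value measured under light pressure.
function isActive(reading, threshold) {
  return reading > threshold;
}

// Map an activation decision to simple output commands.
function outputsFor(reading, threshold) {
  const on = isActive(reading, threshold);
  return {
    servoAngle: on ? 90 : 0,      // swing the chime servo when pressed
    ledBrightness: on ? 255 : 0,  // full brightness when pressed, else off
  };
}
```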

Our next step was to think about how to create a more interesting connection between the user activating the sensors and the motors and LED lights starting. We did not want the quilt to simply become a switch. As a solution, we created cases for each quadrant. Each quadrant would take the sum of its sensors' input, which would suggest how many sensors were being activated in that quadrant. The cases were as follows:

 

Maximum: most likely all of the sensors are being activated

  • Trigger all of the associated servos
  • Trigger all of the associated LED lights at full brightness

 

Medium: most likely two of the sensors are being activated with great pressure

  • Trigger two (or one) of the associated servos
  • Trigger two (or one) of the associated LED lights at full brightness

 

Minimum: most likely the sensors are being lightly activated

  • Randomly choose one servo to go on and off each time this case is triggered
  • Choose the corresponding LED light to go on and off

 

Resting: all of the servos and LED lights are off
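The four cases amount to a simple classifier over each quadrant's summed reading. This is a sketch of that logic rather than our exact code; the cutoff values are placeholders for the thresholds we calibrated:

```javascript
// Classify a quadrant from the sum of its sensor readings (0-1023 each).
// Cutoffs below are illustrative; ours were found by calibration.
function quadrantCase(sum, cutoffs = { min: 200, med: 900, max: 1800 }) {
  if (sum >= cutoffs.max) return "maximum"; // likely all sensors pressed
  if (sum >= cutoffs.med) return "medium";  // likely two pressed with great pressure
  if (sum >= cutoffs.min) return "minimum"; // light activation
  return "resting";                         // everything off
}
```

Each case string then maps onto the servo and LED behaviours listed above.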

 

Setting up for the critique

As we set up for our critique, it became apparent we had scaled up too far to implement our code as written, and while assembling we decided to scale our inputs down. Kate suggested that, rather than isolating the interaction to one quadrant, we divide all of the sensors into three “super sensors”. Our quilt pattern naturally gave us three rings of sensors: one on the outside, one in the middle, and one on the inside. We connected our quilt according to this diagram:
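Grouping the twelve sensors into three ring-shaped “super sensors” just means treating each ring's combined reading as one input. A sketch of that grouping; the ring assignment below is hypothetical, standing in for the wiring in the diagram:

```javascript
// readings: array of 12 sensor values, indexed to match the quilt layout.
// The ring membership here is a made-up example, not our actual wiring.
const rings = {
  outer:  [0, 1, 2, 3, 4, 5],
  middle: [6, 7, 8, 9],
  inner:  [10, 11],
};

// Collapse twelve readings into three "super sensor" values.
function superSensors(readings, ringMap = rings) {
  const sum = (idxs) => idxs.reduce((acc, i) => acc + readings[i], 0);
  return {
    outer: sum(ringMap.outer),
    middle: sum(ringMap.middle),
    inner: sum(ringMap.inner),
  };
}
```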

 

img_9317

Another thing that became apparent was that the hanging apparatus, due to its circular shape, was hard to mount and hang in a balanced manner. We had run out of aircraft cable – which had proved extremely difficult to work with – so we used twine to get the shape mounted. Another difficulty was wiring the apparatus down to the floor: we were not prepared with wires long enough to reach our breadboard. We attempted to use long individual wires, but that was impractical. Kate and Nick lent us long modular wiring, which significantly helped with the hanging process. We also learned that Kate is a master of knots; her wizardry helped us hang the apparatus quickly and safely.

 

Components

cloudgazing2_bb

The diagram shows only what was connected for the critique

  • 1 x Arduino Mega
  • 3 x textile sensors
  • 3 x 50 ohm resistors
  • 3 x LED light strips [6 pixels each]
  • 3 x Micro Servo Motors

 

OUTCOME

We had lofty expectations for this project which the completed version did not meet. No aspect of the build was lax; rather, we felt that the two weeks we were allotted had not been enough time to build, test, and iterate on the design to the state of completion we had envisioned.

In the end, we did have an interactive experience in which the quilt activated LED lights and gently moving servos above. We also incorporated a projection behind the piece to elevate the sense of being out in nature.

 

REFLECTIONS

This project taught us many things about working with unfamiliar materials and pursuing lofty goals in a short time frame. Some core reflections we have taken away are below.

We encountered many challenges we did not foresee or appreciate during the planning phase. These included:

  • The amount of time required to fabricate objects of the size and complexity we envisioned
  • The difficulty and time required in learning to effectively use new tools
  • Power management with sensors that we had created from scratch
  • Effectively scaling from a working prototype to a full-sized installation
  • Accounting for the “unknown unknowns” that crop up in projects

Were we to take on a similar project in future, we would:

  • Focus on one core interaction – for example, we would focus on only the blanket or the hanging apparatus
  • Do careful math when fabricating rather than making estimates
  • Start with fewer/smaller materials and scale up
  • Make purchases of materials in small amounts to prototype with

In terms of the use of textiles, we came across a couple of discoveries:

  • Our sensors only worked consistently when the ground and the positive were clipped to the opposite ends of the fabric. We experimented with having the two ends of the circuit clipped close together, which – while somewhat effective – was unreliable for our purpose.
  • When all of the twelve sensors were divided and clipped together to make three “super” sensors, we had to lower the resistors significantly to get any viable reading to use with our code.
  • Physically small sensors gave more reliable readings than large sensors at the same voltage.
  • It is possible to use conductive fabric as a “power bus” to power multiple sensors – though at our scale, this diminished the voltage to an amount where they were not usable for our purpose.

Next steps to take when we return to this project include:

  • Test the sensors with higher power and/or using multiple power sources
  • Test multiple variations of circuitry running through the blanket
  • Design, from scratch, an apparatus for hanging the clouds, with the same focus we had as we designed the blanket
  • Explore wireless communication with the hanging apparatus
  • Reconsider the form of the “above” apparatus
    • For example, explore projection of a generative image rather than a physical apparatus

 

Resources:

Kate Hartman & Yiyi Shao. Intro to Textile Game Controllers. Workshop held at Dames Making Games at Toronto Media Arts Centre on November 14, 2018

A special thank you to Nick Puckett whose advice on fabrication was invaluable, and who went out of his way to help the project get set up in time for its show.

A special thank you Kate Hartman for her donation of material and tools, for going out of her way to help the project get set up in time for its show, and whose infectious enthusiasm kept us going.

Sound Synthesis

Project by: April De Zen, Veda Adnani and Omid Ettehadi
GitHub Link: https://github.com/Omid-Ettehadi/Sound-Synthesis

screen-shot-2018-12-10-at-12-20-19-pm

Music Credit: Anish Sood @anishsood

Contributors: Olivia Prior and Georgina Yeboah
A special thanks to Olivia and Georgina for letting us leverage the code from experiment 2, Attentive Motions. Without the hard work contributed by both these ladies the musical spheres would not have been finished in time, we are truly grateful.
screen-shot-2018-12-10-at-12-01-27-pm

Figure 1.1: Left, Final display of Sound Synthesis
Figure 1.2: Center, Sound Synthesis Team
Figure 1.3: Right, Special thanks to Attentive Motions Team

Project overview
Sound Synthesis is an interactive light and music display that allows anyone passing by to become the party DJ. There are three touch points in the system. The first is the ‘DJ console’, made up of children's blocks; each block controls a different sound stem, triggered by placing the block on the console. The other two touch points are wireless clear spheres containing LEDs and a gyroscope, each triggering another sound stem when the sphere is moved. These interactions not only activate sound and lighting but also invoke a sense of play among all ages.

Intended context
The team's intent was simple: bring music and life to a gallery show using items common in children's play. Relinquishing control over the music and ambience at a public event seems crazy, but this trio was screwy enough to give it a try. The goal was to build musical confidence among the crowd and allow them to ‘play’ without the threat of failure. For a moment, anyone is capable of contributing to the mood of the party, regardless of their musical experience.

screen-shot-2018-12-10-at-12-31-28-pm

Figure 2.1: Left, Final display of Sound Synthesis
Figure 2.2: Veda showcasing the capabilities of each musical sphere
Figure 2.3: Veda showcasing the capabilities of the DJ console
Figure 2.4: Center display in action

Product video

Production Materials

screen-shot-2018-12-10-at-9-53-43-pm

Ideation
The team brainstormed different ways to combine older projects to create a playful music experience for those visiting the end-of-semester show. The ideation process started off quite ambitious, attempting to match the footprint of another project called ‘The Sound Cave’.
screen-shot-2018-12-10-at-11-31-10-am

Figure 3.1: Left, Initial drawing of floor layout
Figure 3.2: Center, Initial drawing of DJ console, sphere and proposed fabrication of center display
Figure 3.3: Right, Initial drawing of additional touch points for more interactions (if time allowed)

The Sound Cave had five stations hooked up to its center unit, with a different interaction at each station. The original plan was to use the display tower from Omid's Urchestra project as our center display, with a few alterations. The first station would involve a kid's puzzle taken from Veda's Kid's Puzzler project; the interaction would remain the same, using pull-up resistors and copper tape to create a button. The next station would have the clear spheres from the Attentive Motions project, with the interaction also unchanged, using the gyroscope to sense motion and send a signal to the main unit. The next three units would be brand new, and this is where our ambitions got the best of us. After further group discussion, it was decided to add only one more station to the project. The new station would involve a version of a touch sensor requiring a wearable to ground the circuit; see figure 3.3.

screen-shot-2018-12-10-at-11-42-21-am

Figure 4.1: Left, For a detailed understanding of the LED tower : Urchestra
Figure 4.2: Center, For a detailed understanding of the block puzzle : The Kid’s Puzzler
Figure 4.3: Right, For a detailed understanding of the clear spheres : Attentive Motion

Journey Map

screen-shot-2018-12-10-at-4-39-48-pm

Figure 5.1: Top, The first ambitious version of the Journey Map
Figure 5.2: Bottom, A more realistic and achievable Journey Map

Scheduling
As a team, we came up with a schedule. Early on we wanted to make sure we were being realistic about the amount of work we were taking on, especially with many other final projects in other classes. We arrived at the schedule below, which needed to shift from time to time, but overall we were able to stick to it and achieve a final product we are all very proud of (with enough sleep).

screen-shot-2018-12-10-at-4-45-46-pm

Figure 6.1: Team workback schedule

Programming
One of the benefits of revisiting previous projects is that most of the hard work has already been done. The first thing we needed to do was see what data we could get from each of them and assess what else needed to be added or altered.

screen-shot-2018-12-10-at-1-17-55-pm

Figure 7.1: Left, Changing the Arduino Micro to a Feather ESP32
Figure 7.2: Center, Moving the circuits from breadboards onto prototyping boards; circuitry for the DJ console and spheres
Figure 7.3: Right, Adding LEDs to the puzzle and installing the circuitry into the base of the box

The DJ Console (Blocks) The puzzle used six switches, with copper tape underneath the shapes to complete the circuit. It also had a single LED light to indicate when any of the shapes was placed in its right position. Each shape corresponded to a specific sound that was then played through the P5 file.

We wanted to stick to the same principle, with a straightforward addition. We wanted to provide instantaneous feedback to users upon any change, so instead of having only one LED, we placed six of them, indicating how many blocks were active at any time. The system still used an Arduino Micro that sent the switch data over a serial connection to the P5 file. The data was then sent to PubNub so that the display system could use it.
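The flow from blocks to display was: the Arduino reads six switches, sends their states over serial to P5, and P5 publishes them to PubNub. A sketch of the browser-side half; the message shape and the channel name in the comment are assumptions for illustration, not our actual identifiers:

```javascript
// Turn six switch states into the message the display side listens for.
// switches: array of six booleans, one per block position.
function blockMessage(switches) {
  return {
    activeCount: switches.filter(Boolean).length, // how many blocks are placed
    blocks: switches,                             // per-block on/off states
  };
}

// With the PubNub JS SDK, publishing would look roughly like:
// pubnub.publish({ channel: "dj-console", message: blockMessage(states) });
```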

The Sphere The ball used an Arduino Micro, an Adafruit orientation sensor, an LED strip, and a small speaker. It used to make noise whenever it sat still, prompting people to move it. We no longer wanted the device to play any sounds; we only wanted it to send the orientation data to PubNub. To do that, we got rid of the speaker and swapped the Arduino Micro for a Feather ESP32 board. The board read the data from the orientation sensor and sent it to PubNub. To provide real-time feedback to the user, the LED strip would light up whenever the ball was shaken.
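Detecting a shake from orientation data can be approximated by watching how much consecutive readings change: a large enough frame-to-frame delta counts as a shake. A sketch of that idea; the threshold value is illustrative, not taken from our code:

```javascript
// prev, curr: orientation readings as {x, y, z} Euler angles in degrees.
// Returns true when the change between frames exceeds the shake threshold.
function isShaken(prev, curr, threshold = 15) {
  const delta = Math.abs(curr.x - prev.x)
              + Math.abs(curr.y - prev.y)
              + Math.abs(curr.z - prev.z);
  return delta > threshold;
}
```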

The Center Display The display used an Arduino Micro, an LED strip, and nine switches made of copper tape. The biggest issue with this design was that it needed copper tape under participants' shoes to complete the circuit. So we got rid of the tape and used the design only as a display, adding two extra LED strips to enhance the experience.
The P5 file read the data sent from the two balls and the puzzle and, based on their configuration, played the tracks associated with them. The data was then sent to the Arduino Micro over the serial connection to control the three LED strips. The primary LED strip was focused on the puzzle: if any of the keys were placed, the strip would flash green every 2 seconds; otherwise it would flash white. The other two LED strips were each tied to a specific ball, and would flash the same colour as the ball that was shaken.
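The display logic reduces to a small state-to-colour mapping. A sketch of it; the green/white colours and two-second flash come from the description above, while the object shapes are illustrative:

```javascript
// Choose the primary strip's flash colour from the puzzle state.
// The strip flashes this colour every 2 seconds.
function primaryColour(anyKeyPlaced) {
  return anyKeyPlaced ? "green" : "white";
}

// Each of the other two strips mirrors the colour of the ball it tracks,
// flashing only when that ball has been shaken.
function ballStripColour(ball) {
  return ball.shaken ? ball.colour : "off";
}
```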

screen-shot-2018-12-10-at-2-05-16-pm

Figure 8: The team testing the units

Sound and Design
Sound was the most critical piece of the experience for us. Since none of us had worked with music before, we were most concerned about how the experience would come alive without high-quality sound output. Instead of guessing, we turned to Adam Tindale, who has been working with sound for the last three decades.

Our meeting was extremely productive, and its most important lesson was the difference between creating a musical experience and a musical instrument. To create a musical instrument you must have a very deep understanding of the instrument: how it works and what sounds it can produce. The audience for such pieces is usually musicians with similar knowledge. We found a relevant case study that illustrated this, and we knew this was not the experience we wanted to create.

screen-shot-2018-12-10-at-2-19-45-pm

Figure 9.1: Left, Cave of sounds, a musical experience
Figure 9.2: Right, Color cord – a technological musical instrument

We wanted to design an experience that made it easy to play with music and could empower users of all experience levels to create music of their own. Learning a musical instrument is a difficult task requiring countless hours of disciplined practice; we asked how we might do the opposite and create something inclusive, easy to use, and engaging at the same time. We needed a total of eight sounds: six for the DJ console (puzzle blocks) that set the main track, and two accent sounds, one for each sphere, triggered upon shaking.

We began our search for sounds online, among royalty-free sounds available to use. We even tried working with Ableton and GarageBand to see if any sounds would work together in a synchronized soundtrack. But nothing available online was good enough, and since none of us had prior sound-making experience, we turned to our friends to collaborate on this.

Anish Sood is a renowned DJ, songwriter, and music producer based in Goa, India. The genres he focuses on are EDM, house, techno, and electro house, which felt like the right fit for our experience. We did a call together and briefed him in detail about the project. We wanted a track that was upbeat yet soothing, and not monotonous to listen to; we took inspiration from the artist Kygo to describe the kind of sound we wanted. We also shared with Anish many pictures and videos of the parts of the experience and our vision for it. He was extremely receptive and put together a beautiful track for us within 24 hours of our call. He created six sounds for the DJ console, divided between base sounds and overlapping instrumental and vocal sounds. He also sent us the master track so we knew what it would all sound like when it came together.

Playlist for the DJ Console:
https://soundcloud.com/user667414258/sets/sound-synthesis-stem-set/s-b34Rx

For the spheres, we wanted sounds that accentuated the base track from the console. After a mini-brainstorm we settled on a tambourine and a gong.

Playlist for the Spheres:
https://soundcloud.com/user667414258/sets/sound-synthesis-stem-set-sphere-sounds/s-Zlnq0

 

Fabrication
Our fabrication process was smooth and streamlined. The following steps were part of the process:

The DJ Console (Blocks): We already had the base for the DJ console in place from Experiment 3. This included the puzzle itself, a base box for it, and a single LED to indicate whether the device had been activated. To convert the design from a kids' toy into something more mature, we spray-painted its colourful keys in a simple black-and-white scheme. We also added five more holes for additional LED feedback, and one for the connecting cable. While presenting, we used a plinth that housed the laptop underneath.

screen-shot-2018-12-10-at-2-09-35-pm

Figure 10.1: Left, drilling holes for LED lights into box
Figure 10.2: Center, Adding circuitry into box
Figure 10.3: Right, Spray painting shapes for box

The Sphere: The fabrication of the spheres was already done in Experiment 2. The only changes required were to the circuit, plus the addition of a battery holder for the LED strips so that they could run for more than three hours.

The Center Display: We decided to stick with the same object made for Experiment 3. The only change needed was removing the extra ultrasonic sensors from the box. We added a base to the design so that we could glue down the three cylinders holding the three LED strips, and a back panel so that the LED strips would be invisible when the device was off.

screen-shot-2018-12-10-at-1-43-49-pm

Figure 11.1: Left, adding more LEDs to original circuit created for the Kid’s Puzzler project
Figure 11.2: Center left, rewiring new and improved DJ console
Figure 11.3: Right, April making alterations and rewiring to the original display unit used in the Urchestra project

screen-shot-2018-12-10-at-1-08-30-pm

Figure 12.1: Left, Final project layout
Figure 12.2: Center, Fine tuning the blocks and sphere
Figure 12.3: Right, Fine tuning the center display

Final Fritzing Diagrams

screen-shot-2018-12-10-at-10-30-54-pm

Figure 13: The final circuit for the hamster balls

screen-shot-2018-12-10-at-10-30-29-pm

Figure 14: The final circuit for the Blocks (DJ Console)

screen-shot-2018-12-10-at-10-30-43-pm

Figure 15: The final circuit for the center display

Presentation & Show

screen-shot-2018-12-10-at-1-50-16-pm

Figure 16.1: Left, Final floor plan of Sound Synthesis
Figure 16.2: Right, Instructional signs placed on plinth under each interactive device

For the final show, we wanted to make sure the connection between the three pieces was clear and that users knew what each piece did. To do that, a clean installation of the work was crucial. We placed all the objects in a corner where the display unit could be seen from each of the stations. We used plinths of the same height and printed short instructions on what to do with each piece, so users were clear on their role in the experience. We also printed matching ID cards and wore black and white, to look like a team at the exhibit.

One issue we had to deal with was refreshing the web browser for our display unit every now and then, as the large quantity of data sent to it made it crash if left open for a long time. We made sure at least one person was at the station at all times so nothing could go wrong.

We received very positive feedback on the project. People were very interested in how easy it was to act as a DJ and play with the sounds without having to worry about the pace of each track or how to synchronize them. Kids especially enjoyed the experience because they were familiar with the puzzle and the games, and they really liked being in charge of what was playing. Others enjoyed the unusual interface for the music: they liked how simple it was to control, how little work they had to do to get good sounds out of the system, and how instantaneous the feedback was. One suggested improvement was to add more tracks and give users the ability to choose which track belongs to each piece.

Reflection

As a team, we really hit our stride with this project. Since we all enjoyed working together so much during project 4, we thought we would go out with a bang together in project 5. The three of us each brought something different to the table, and we found ways to utilize each team member's strengths. Omid not only spearheaded the coding, but was also extremely patient, slowing down his process so we could all understand the code of each device and troubleshoot errors. Veda is extremely detailed in her design approach: it's not enough for something to just look good; she makes sure each design is functional and user-friendly in every detail. April brought her professional experience with meticulous project management, scoping and planning, graphic design, and human-centered thinking. Her skill set with fabrication and printing methods was also a blessing.

One of the most important lessons for us was to scope realistically, and leave a safety margin for debugging and troubleshooting. We also made sure to give ourselves enough time to iron out all the details for the actual presentation and setup.

After all the hard work, we were able to achieve something that works beyond the level of a basic prototype. Hamster balls were dropped and the system crashed, but everything was back up and running without anyone at the party noticing. We are extremely proud of the final product and still can't believe how well it turned out. If this project were ever scaled up, it would require more stable software and possibly custom microcontrollers, but for a two-week student project, we are very proud.

screen-shot-2018-12-10-at-2-49-25-pm

Figure 17.1: Left, April and Veda rocking out at the final show
Figure 17.2: Right, Veda continues to rock, While Omid makes sure everything is under control

References
(n.d.). Retrieved from http://www.picaroon.eu/tangibleorchestra.html
Cave of Sounds. (n.d.). Retrieved from http://caveofsounds.com/
Romano, Z. (2014, May 22). A tangible orchestra one can walk through and play with others. Retrieved from https://blog.arduino.cc/2014/05/22/a-tangible-orchestra-one-can-walk-through-and-play-with-others/
Schoen, M. (n.d.). Color Chord. Retrieved from https://schoenmatthew.com/color-chord
Tangible Orchestra – Walking through the music. (2014, June 03). Retrieved from https://www.mediaarchitecture.org/tangible-orchestra/

 

 

Tinker Box

Abstract

Tinker Boxes are physical manipulatives designed for digital interaction, based on the concept of MiMs, or "Montessori-inspired Manipulatives" (Zuckerman, Arida, & Resnick, 2005). The boxes are low-fidelity devices that bridge physical interaction and the digital world. They are aimed at children aged 5 to 7, but can be extended to any age depending on the front-end software designed to fit the interaction. This iteration of the software uses the box as a scaffolding tool to teach kids how to recreate or understand the making of a physical object, Lego toys in this instance. The plan is to extend it to other educational games and interactions that explore concrete and abstract concepts.

img_3288

Introduction

What can I tinker with next? The goal of Experiment 5 was to take a project from the past weeks and add to it in some meaningful way. This could be anything, but "anything" is a very large canvas given 1.2 weeks from concept to completion. I would like to say it was all clear from the start, but it was murky what this next step would look like. When I thought about what I liked about the old project and what I did not, it was quite clear that the biggest pain point was the potentiometer breaking the interaction, dictating how far the kids could make the characters chase each other before having to reverse and go backward to add more interaction.

This whole experiment would be about figuring out how to use the rotary encoder. The initial idea was to change the whole first version of the box and add the rotary encoder to it, but this would mean reverse-engineering the hardware to fit in the new part. It was not really worth it: the first version worked quite well as a proof of concept, and I wanted to keep it that way.

I then decided to use the RE (rotary encoder) to create a new kind of interaction, but also to examine critically what it was I was building. I used the case study assignment to dive deeper into what the interaction meant and how I could position the work in a meaningful way, grounding it in the paper "Extending Tangible Interfaces for Education: Digital Montessori-inspired Manipulatives".

Methodology

So what can I make interactive and meaningful to children? This was the question I kept asking myself. It's easy to build an interaction, but making it meaningful and pedagogical is where the challenge lies. I looked to my kids for inspiration, which is where I usually start; they are learning through play every single day, but we seem to miss it.

I'm jumping ahead a bit, because before I could even imagine what kind of interaction I wanted, I needed to get the rotary encoder working and sending data. This may seem like a no-brainer for a coder, but for a person new to the coding world of p5 and Arduino it was a critical first step; otherwise I had no dice!

The base code for the project was a mix of the class code from Nick and Kate and code from Atkinson at multiwingspan.co.uk. I got the encoder sending a signal and then used the JSON library to parse the data so I could read it in p5.js. In retrospect, this is not the best way to do it: since I need to map different variables based on the length of the sprite animation being controlled, the better way is to set a large range in Arduino and then map that range down to what I need for each individual interaction. This is a bit technical, but if you do venture into using my code, it is something to keep in mind when modifying it to your needs.
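To make that re-mapping idea concrete, here is a minimal sketch in plain JavaScript (not my actual project code; the 0–1023 raw encoder range and the function names are assumptions) of scaling a raw count from the Arduino down to a sprite-frame index in the browser:

```javascript
// Plain re-implementation of p5.js map() so this runs anywhere.
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) * (outMax - outMin)) / (inMax - inMin);
}

// Clamp a raw encoder count (assumed 0..1023) and map it down to a
// frame index for whatever sprite animation is currently loaded.
function encoderToFrame(rawCount, frameCount) {
  const clamped = Math.min(Math.max(rawCount, 0), 1023);
  return Math.floor(mapRange(clamped, 0, 1023, 0, frameCount - 1));
}
```

The point of keeping the wide range on the Arduino side is that the same firmware then works for an 18-frame animation or a 60-frame one; only the `frameCount` argument changes per interaction.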

img_3250

OK, now that I had the encoder and the button working, I assembled all of the hardware even before I could get the software working. Why would I do that? Basically because time was running out, and once you write software, debugging and refining is a rabbit hole you can disappear into till the cows come home, and I might never have finished the physical hardware. This has happened on other projects, where the software takes precedence and the hardware ends up being presented on a proto-board because there was no time for refinement and fabrication.

img_3266

The circuit is pretty simple, as you can see in the Fritzing diagram below. It uses:

  • 1 Rotary encoder
  • 1 button
  • 1 Arduino Micro Original

That's it. The circuit is also very clean, so I could get on with the main task of creating the interaction.

fritz

Once the circuit was done, I built the housing and soldered all the components onto the PCB.

img_3265

Software

I then looked at all the possible interactions I could create using this rotary dial.

The basic idea is turning a value up or down; you can then map this value to anything you like. In my case, I decided to use sprite animations.

Coming back to observing my son play with Lego: he would iterate and create new things, cars, safes, vending machines, the list was exhaustive. He would follow along with YouTube videos or just iterate, then share these creations with us at home and take them to school to show his friends. The thing is, people could see the completed work but not the process of getting there, or even the individual parts that made the whole. This sparked an idea based on other stop-motion projects I had seen. With my son's permission, I broke apart his creations brick by brick and shot them using an iPhone and a tripod. I then used the photos to create sprite sheets controlled by the Tinker Box's rotary encoder. It took a bit of time to figure out how the sprite sheets worked and what was possible, but it worked, and the end result was satisfying. I then used the button to change the sprite and show another sprite animation; in this way the user could scroll through the different creations I had made animations for.
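The knob-plus-button behaviour described above can be sketched roughly like this (a simplified model, not the actual project code; the class name, the 0–1 knob-position convention, and the sheet data are my assumptions):

```javascript
// Each sheet is one stop-motion capture of a Lego creation.
class SpritePlayer {
  constructor(sheets) {
    this.sheets = sheets; // e.g. [{ name: "car", frames: 18 }, ...]
    this.current = 0;     // index of the active sprite sheet
  }
  // Button press: advance to the next creation, wrapping around.
  nextSheet() {
    this.current = (this.current + 1) % this.sheets.length;
  }
  // Encoder position (normalized 0..1) selects a frame in the active sheet.
  frameAt(position) {
    const frames = this.sheets[this.current].frames;
    const clamped = Math.min(Math.max(position, 0), 1);
    return Math.floor(clamped * (frames - 1));
  }
}
```

Turning the knob scrubs backward and forward through the build, so the child controls the "assembly" and "disassembly" of the model frame by frame.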

img_3291

The interaction was automatic; no instructions were needed. People turned the knob and clicked the button. I had built on their past experience of what buttons and knobs do; it was now just a matter of changing the software to create a pedagogical experience for the child.

Some ideas I came up with based on this interaction are:

  • Simple Machines: the box could be turned on its side and the knob replaced with a crank, lending itself to simple machines like cranes, fishing poles, ratchets, pulleys, etc.
  • Folding and unfolding: this process has numerous pedagogical uses, not least the simple wonder of seeing what is inside something, for instance the layers of the earth's crust, making planets move, or rotating objects on a different axis.
  • This makes the MiM very versatile yet simple in its interaction; the triangle completes when the software fits the user and the interaction.

Feedback

Some of the feedback was that I should use this as an educational tool for product assembly, such as building IKEA furniture, and pitch it to the company to create stop-motion videos showing the different steps.

There was also interest in seeing how two of these devices could change the interaction if they controlled different aspects of the same object or interaction.

Summary

I would like to explore this prototype further and build more Tinker Boxes that network, or are even wireless. I had an early idea of building a wireless interaction, but Nick said it might introduce delay because of using a server like PubNub. I will look into whether there is any way to interface directly with the Mac/PC without the use of third-party software.

References

Zuckerman, O., Arida, S., & Resnick, M. (2005). Extending tangible interfaces for education.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems – CHI 05. doi:10.1145/1054972.1055093

Atkinson, M. (n.d.). MultiWingSpan. Retrieved from http://www.multiwingspan.co.uk/arduino.php?page=rotary2

GitHub code can be found here: https://github.com/imaginere/Experiment5DF

//generative(systems);

Experiment 5: Refine/Combine/Unwind

 

 

 

Exploring an Art & Graphic Design movement through computational means.

GitHub Link

Team Members
Carisa Antariksa, Joshua McKenna, Ladan Siad

pic1

Project Description

//generative(systems); is an investigation into Constructivism, an art and graphic design movement from the 1920s, through generative form. By referencing elements popularized by that movement, this project explores the automation of design processes on the basis of variance, and the discourse around the movement towards engineered design systems. How do we as designers create distinct works in a time of algorithmic design? Can our work still be dynamic, compelling and emotive? //generative(systems); examines the process of algorithmic automation and how it will affect viewers' connection to the aesthetic experience.

This experiment expands upon the Generative Poster project presented in the Creation & Computation course's Experiment 3, This and That. The original concept involved generating multiple iterations of a single design through computational means, with the intent that each user receive a unique copy of a poster produced according to a predetermined design system.

Cross the Dragon – An Interactive Educational Exhibit

screenshot_2018-12-10-jpeg-image-480-x-270-pixels

Project Name : Cross the Dragon

Team Members: Norbert Zhao, Alicia Blakely, and Maria Yala

Summary:

Cross the Dragon is an interactive art installation that explores economic changes in developing countries, using digital media to create open communication and increase awareness of economic investment by global powers in developing countries. The main inputs in the piece are a word-find game on a touch interface and an interactive mat. When a word belonging to one of the four fields (Transport, Energy, Real Estate, or Finance) is found, a video is projected onto a touch-responsive mat. Through the touch-sensitive mat one can initiate a second video in response to the found word; the mat plays video through projection mapping. To interact with the mat again, one has to find another word. We have left the information in the videos open to interpretation, to keep it unbiased and to build a gateway to communication through art and digital gaming practices.

What we wanted to accomplish:

Through this interactive installation, the idea was not to impose preconceived notions about the educational information provided. The installation is designed to encourage a positive thought process through touch, infographic video, and play. Through this interface we can conceptualize and promote discussion of information that is not highly publicized, widely accessible, or generally discussed in Canada.

Ideation & Inspiration:

Ideation

This project was inspired by a story shared by one of our cohort. She described how Chinese companies are building a new artificial island off the beach in downtown Colombo, her hometown, and are planning to turn it into Sri Lanka's new economic hub. Meanwhile, at the southern port of Hambantota, the Sri Lankan government borrowed more than $1 billion from China for a strategic deep-water port, but couldn't repay the money, so it signed an agreement entrusting management of the port to a Chinese national company for 99 years.

For us, such news was undoubtedly new and shocking. With China's economic growth and increasing voice in international affairs, especially after the Belt and Road Initiative was launched in 2013, China began to carry out a variety of large investment projects around the world, especially in developing countries in Asia and Africa, where Chinese investment in infrastructure projects has peaked. We also discovered a series of reports from The New York Times, "How China Has Become a Superpower", containing detailed data about China's investment in other countries and project details.

This project therefore focused on the controversy around the topic: some people think these investments have helped local economic development, while others see neo-colonialism. From the beginning of concept development, we knew this topic would have an awareness aspect. It was important to portray a topic that has a profound effect on the social and cultural lives and identities of people across the globe, and that is heterogeneous in the sense that it stems into other socioeconomic conditions. After discussion and data research, we decided to focus on China's growing influence, especially economic influence, in Africa.

Finally, we decided to explore this topic through interactive design. We came up with the idea of a mini-exhibition through which visitors can explore the story behind the topic by interacting with a game. When visitors first encounter the exhibition, they have no detailed information about it, but after a series of game interactions, detailed information about the exhibition's theme is presented in the form of intuitive visual design. The resulting process of self-exploration gives visitors a deeper impression of the topic.

Inspiration

These three interactive projects were chosen because of how they combine an element of play and the need for discovery in an exhibition setting. They engage the audience both physically and mentally, which is something we aim to do with our own project.

Case Study 1 – Interactive Word Games

An interactive crossword puzzle made for the National Museum in Warsaw for their “Anything Goes” exhibit that was curated by children. It was created by Robert Mordzon, a .NET Developer/Electronic Designer, and took 7 days to construct.

screenshot_2018-12-10-final-presentation

Case Study 2: Projection Mapping & Touch interactions

We were interested in projection mapping and explored a number of projects that used projection mapping with board games to create interactive surfaces that combined visuals and sounds with touch interactions.

screenshot_2018-12-10-final-presentation1

Case Study 3: Interactive Museum Exhibits

ArtLens Exhibition is an experimental gallery that puts you, the viewer, into conversation with masterpieces of art, encouraging engagement on a personal and emotional level. The exhibit features a collection of 20 masterworks that rotate every 18 months to provide fresh experiences for repeat visitors. The art selection and barrier-free digital interactives inspire you to approach the museum's collection with greater curiosity, confidence, and understanding. Each artwork in ArtLens Exhibition has two corresponding games in different themes, allowing you to dive deeper into understanding the object. ArtLens Exhibition opened to the public at the Solstice Party in June 2017.

screenshot_2018-12-10-final-presentation2

Technology:

We combined two of our previous projects, FindWithFriend and Songbeats & Heartbeats, for our final project. The aspects of the two projects we were drawn to are the interactions. We wanted to create an educational exhibition with a gamified component that encourages discovery, much like the Please Touch Museum.

Interactions:

We combined the touch interactions from the wordsearch & interactive mat.

Components:

P5, Arduino, PubNub, Serial Connection

Brainstorm

img_6212

Team brainstorming the user flow and interactions

screenshot_2018-12-10-untitled-diagram-xml-draw-io

Refined brainstorm diagram showing user flow, nodes, and interactions

How it works:

The piece works like a relay race: an interaction on an iPad triggers a video projection onto an interactive mat. When a sensor on the mat is touched, it triggers a different projection showing the audience more data and information.

The audience is presented with a wordsearch game in a P5 sketch (SKETCH A) containing four keywords: "Transport", "Energy", "Real Estate", and "Finance", representing the industries in which China has made huge investments. Once a word is found, e.g. "Transport", a message is published to PubNub and received by a second P5 sketch (SKETCH B), which plays a projection about transport projects. When the audience touches the mat, the sensor value (ON/OFF) is sent via an Arduino/P5 serial connection to SKETCH B, which stops the transport projection and displays more information about China's transport projects in different African countries.
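The relay logic can be summarized as a small state machine. This is a schematic only: the message shapes and field names are assumptions, and in the real piece the word-found leg travels over PubNub while the mat-touch leg arrives over serial.

```javascript
// One projection state per moment in the relay:
// {} -> general video for a found industry -> in-depth video after a mat touch.
function handleMessage(state, msg) {
  if (msg.type === "wordFound") {
    // SKETCH A found a word: start the general video for that industry.
    return { industry: msg.industry, video: "general" };
  }
  if (msg.type === "matTouch" && state.video === "general") {
    // Mat sensor fired: switch to the detailed video for the same industry.
    return { industry: state.industry, video: "detail" };
  }
  return state; // a mat touch before any word is found does nothing
}
```

The guard on `matTouch` is what enforces the "find another word before touching the mat again" rule described above.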

Step 1: Sketch A – Wordfind game

The viewer's initial interaction with the Cross the Dragon exhibit begins with the wordfind game, created using p5.js. The gameboard is built from nested arrays that form the word-find matrix. Each tile on the board is created from a Tile class with the following attributes: x, y coordinates; RGB color values; a color description string based on the RGB values; a size for its width and height; the booleans inPlay, isLocked, and isWhite; and a tile category indicating whether the tile is for Transport, Finance, Real Estate, or Energy.

To create the gameboard, three arrays were used: one containing the letters for each tile; another containing 1's and 0's indicating whether a tile was in play (tiles containing letters of the words to be found were marked with 1's, decoy tiles with 0's); and a last array indicating the tile categories with a letter, i.e. T, F, R, E, and O for the decoy tiles. The matrix was created by iterating over the arrays with nested for loops.
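As an illustration of the three-parallel-array approach, here is a small 3×3 excerpt (the letters and layout are invented for the example; the real board is 11×11 and the tiles are full Tile objects rather than plain literals):

```javascript
// Three parallel arrays: letters, in-play flags, and categories.
const letters    = [["T", "R", "O"], ["A", "N", "X"], ["N", "S", "P"]];
const inPlayMap  = [[1, 1, 0],       [1, 1, 0],       [1, 1, 1]];
const categories = [["T", "T", "O"], ["T", "T", "O"], ["T", "T", "T"]];

function buildBoard(letters, inPlayMap, categories, tileSize) {
  const board = [];
  for (let row = 0; row < letters.length; row++) {
    for (let col = 0; col < letters[row].length; col++) {
      board.push({
        x: col * tileSize,                    // pixel position on the canvas
        y: row * tileSize,
        letter: letters[row][col],
        inPlay: inPlayMap[row][col] === 1,    // decoys are false
        category: categories[row][col],       // T/F/R/E, or O for decoys
        isWhite: true,                        // all tiles start white
      });
    }
  }
  return board;
}
```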

screenshot_2018-12-10-cross-the-dragon1

The arrays used to create the game board tile matrix of clickable square tiles

buildingtiles

Generating the 11×11 game board and testing tile sizes

Once the tile sizes were determined, we focused on how the viewer would select the words for the four industries. The original Find With Friends game catered to multiple players, identifying each with a unique color. Here, however, there is only one input point, an iPad, so we decided to have just two colors on the game board: red to indicate a correct tile and grey to indicate a decoy tile. When the p5 sketch is initiated, all tiles are generated as white and marked with the booleans inPlay and isWhite. When a tile is clicked and its inPlay value is true, it turns red; if its inPlay value is false, it turns grey.

testingwords

Testing that inPlay tiles turn red when clicked

The image below shows testing of the discover button. When a word is found and the discover button is clicked, a search function loops through the gameboard tiles, counting the tiles that are inPlay and have turned red; a tally of the clicked tiles is recorded in four variables, one for each industry. There are 9 Transport tiles, 6 Energy tiles, 10 Real Estate tiles, and 7 Finance tiles. Once the loop completes, a checkIndustries() function checks the tallies. If all the tiles in a category have been found, the function sets a global variable currIndustry to the found industry and then calls a function to pass that industry to PubNub. When an in-play tile is found and clicked, it is locked so that the next time the discover button is clicked the tile is not counted again.
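A hypothetical reconstruction of that tally-and-check logic, simplified for illustration: a Set of already-announced industries stands in for the per-tile locking described above, and TARGETS uses the tile counts quoted in the text.

```javascript
// Tiles needed to complete each industry: Transport, Energy, Real Estate, Finance.
const TARGETS = { T: 9, E: 6, R: 10, F: 7 };

function checkIndustries(tiles, alreadyFound) {
  const tally = { T: 0, E: 0, R: 0, F: 0 };
  for (const tile of tiles) {
    if (tile.inPlay && tile.clicked) tally[tile.category]++;
  }
  for (const cat of Object.keys(TARGETS)) {
    if (tally[cat] === TARGETS[cat] && !alreadyFound.has(cat)) {
      alreadyFound.add(cat); // lock: don't announce the same industry twice
      return cat;            // in the real sketch this goes out to PubNub
    }
  }
  return null; // no newly completed industry on this discover click
}
```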

testingdiscover

Testing that inPlay tiles are registered when found and that already found tiles are not recounted for the message sent to PubNub.

Step 2: Sketch B – Projection Sketch – Part 1

When the sketch initializes, a logo animation video, vid0, plays on screen, and a state variable initialized at 0 is set to 1 in readiness for the next state, which plays video 1, a general information video about a found industry.

When the second p5 sketch receives a message from PubNub, it uses the string in the message body indicating the current industry to determine which video to play. The videos are loaded in the sketch's preload function and played in the body of the HTML page crossthedragon.html. During testing we discovered that we had to hide the videos using CSS and show them only when we wanted to play them, re-hiding them afterwards, because otherwise they would all be drawn onto the screen overlapping each other. When the sketch loads, videos are added to two arrays: one holding the initial videos and another holding the secondary videos that provide additional information. The positions in both arrays are the same for each industry: Transport at index 0, Energy at 1, Real Estate at 2, and Finance at 3.

Once a message is received, a function setupProjections(theIndustry) is called. It takes the current industry from the PubNub message as an argument and uses it to determine which video should be played, setting the values of the globals vid1 and vid2 by pulling the corresponding entries from the two video arrays; e.g. if Transport was found, vid1 = videos1[0] and vid2 = videos2[0].

A function makeProjectionsFirstVid() is then called. It stops the initial "Cross the Dragon" animation and hides it, then hides vid2 and plays vid1. It also updates the global variable state to 2 in readiness for the second, in-depth informational video.

Note: vid0 only plays when state is 0, vid1 only plays when state is 1, and vid2 only plays when state is 2.
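Putting that note in code form, the three-state video logic might look like this (a minimal model with assumed method names, not the project's actual sketch, which also handles the CSS show/hide of each video element):

```javascript
// vid0 = logo loop, vid1 = general industry video, vid2 = in-depth video.
class Projection {
  constructor() {
    this.state = 0;        // advances 0 -> 1 -> 2 as described in the text
    this.playing = "vid0"; // the logo animation plays first
  }
  onSketchReady() {        // after initialization, arm state 1
    if (this.state === 0) this.state = 1;
  }
  onIndustryFound() {      // PubNub message: swap to the general video
    if (this.state === 1) { this.playing = "vid1"; this.state = 2; }
  }
  onMatTouch() {           // serial signal: swap to the in-depth video
    if (this.state === 2) this.playing = "vid2";
  }
}
```

The state guards are what keep a stray mat touch from skipping straight to vid2 before a word has been found.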

Step 2: Sketch B – Projection Sketch – Part 2: Arduino over serial connection

The second, in-depth video is triggered whenever a signal is sent over a serial connection from the Arduino when the viewer interacts with the touch-sensitive mat. Readings from the three sensors are sent over the serial connection to the p5 sketch. During testing we determined that using a higher threshold for the sensors had the desirable effect of reducing the number of messages sent over the serial connection, speeding up the p5 sketch and reducing system crashes. We set up the code so that messages were only sent when the total sensor value was greater than 1000. The message was encoded in JSON format. The p5 sketch parses the message and uses the sensor indicator value, either 0 or 1, to determine whether to turn on the second video: 0 means OFF and the video is not triggered; 1 means ON and the video is triggered. The makeProjectionsSecVid() function starts the video: if the state is 2, vid1 is stopped and hidden, and vid2 is shown and played on a loop. An isv2Playing boolean is set to true and used to determine whether to restart the video, preventing the sketch from jumping between videos while one is already playing.
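The two halves of that serial exchange can be sketched as follows (the Arduino side is modeled here in JavaScript for illustration; the JSON key name "touch" is an assumption, while the 1000 threshold is the one quoted above):

```javascript
// "Arduino side": only send a message when the summed readings from
// the three mat sensors exceed the 1000 threshold.
function shouldSend(sensorValues) {
  const total = sensorValues.reduce((a, b) => a + b, 0);
  return total > 1000;
}

// p5 side: parse the JSON serial message and read the 0/1 indicator.
function isTouchOn(rawMessage) {
  return JSON.parse(rawMessage).touch === 1; // 1 = ON, trigger vid2
}
```

Raising the send threshold is effectively rate-limiting at the source, which is why it relieved pressure on the browser sketch.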

Electronic Development 

While choosing materials I decided to use a force-sensitive resistor (FSR) with a round, 0.5″-diameter sensing area. An FSR varies its resistance depending on how much pressure is applied to the sensing area: the harder the force, the lower the resistance. When no pressure is applied, its resistance is larger than 1 MΩ, and it can sense applied force anywhere in the range of 100 g to 10 kg. To make running power along the board easier, I used an AC-to-DC converter that supplied 3 V and 5 V along both sides of the breadboard. Since the FSR sensors are plastic, some of the connections came loose in transit, and one of the challenges was having to replace the sensors a few times. When this occurred, we would follow up with quick testing through the serial monitor in Arduino to make sure all sensors were active. To save time, I soldered a few extra sensors to wires so that damaged ones could be switched out easily.
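For reference, here is the usual voltage-divider arithmetic for an FSR like this one. This is a sketch under assumptions: the FSR in series with one of the 10 kΩ resistors from the materials list, read by the Feather ESP32's 12-bit ADC at 3.3 V; the actual wiring in the project may differ.

```javascript
// Estimate the FSR's resistance from a raw ADC reading, assuming the
// common divider hookup: Vcc -> FSR -> (ADC pin) -> 10k resistor -> GND.
function fsrResistance(adcReading, vcc = 3.3, adcMax = 4095, rFixed = 10000) {
  const vOut = (adcReading / adcMax) * vcc; // voltage across the fixed resistor
  if (vOut <= 0) return Infinity;           // no pressure: FSR > 1 MOhm
  return rFixed * (vcc - vOut) / vOut;      // divider equation solved for R_fsr
}
```

Low readings mean high FSR resistance (light touch); as pressure increases, resistance drops and the ADC reading climbs, which is why a simple summed threshold over the three sensors works for touch detection.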

screenshot_2018-12-10-cross-the-dragon2

Materials for the Interactive Mat Projection

  • Breadboard
  • Jumper cables
  • Flex, Force, & Load Sensor x 3
  • YwRobot Power Supply
  • Adafruit Feather ESP32
  • Wire
  • 4×6 piece of canvas material
  • Optoma Projector
  • 6 x 10k resistors

Video Creation Process

Information was extracted for the four most representative investment fields from a database of investment relationships between China and Africa: transport, energy, real estate, and finance. Transport and real estate are very typical, because two famous parts of China's infrastructure investment in Africa are railway and stadium construction. Energy is also an important part of China's global investment. The finance field corresponds to the most controversial part of China's investment: when the recipient country cannot repay a huge loan, it must exchange other interests. Sri Lanka's port is a typical example.

Initially, we wanted to present the investment data in the four fields through infographics, but after discussion we decided that video is a more visual and attractive way to present it. So we made two videos for each field. When visitors find the correct word for a field, they are shown the general situation of China's activity in the world and in Africa in that field (video 1), including data, locations, timing and so on. When visitors touch the mat, the projector plays a more detailed video about the field (video 2), with details of specific projects.

In video 1, we used Final Cut to animate infographics produced in Adobe Illustrator, and added representative project images from the field in the latter half of the video, so that visitors gain a general understanding of that field.

In video 2, we used Photoshop and Final Cut to edit representative project images from the field, then overlaid key words about each project so that visitors could form a clear, intuitive understanding of these projects.

The Presentation

The project was exhibited in a gallery setting in the OCAD U Graduate Gallery space. Below are some images from the final presentation night.

settingup

Setting up the installation

shotsfrompresenation

People interacting with the Cross the Dragon installation

Reflection and Feedback

Many of the members of the public who interacted with the Cross the Dragon exhibit were impressed by the interactions and appreciated the educational qualities of the project. Many people stuck around to talk about the topics raised by the videos, asking to know more about the projects, where the information came from, and how the videos were made. Others were more interested in the interaction itself, but most participants engaged in open-ended dialogue without being prompted. Overall, feedback was positive. People seemed genuinely interested in changing the informational video after finding the word in the puzzle. Some participants suggested slowing down the videos so that they could actually read all of the text.

For future iterations of this project, we would like to explore projection mapping further to make the interactive mat more engaging. We noticed that once people discovered they could touch the mat, they tended to keep touching and exploring it. We had discussed including audio and animated text earlier in our brainstorming, and we believe adding more sensitive areas on the mat would be a good way to incorporate these and create richer interactions. It was also suggested that we project the videos onto a wall as well, so that people around the room would be included in the experience without having to be physically at the exhibition station.

References

Code Link on Github – Cross The Dragon

P5 Code Links:

Hiding & Showing HTML5 Video – Creative Coding

Creating a Video array – Processing Forum

HTML5 Video Features – HTML5 Video Features

Hiding & Showing video – Reddit JQuery

Reference Links:

[1] https://learn.adafruit.com/force-sensitive-resistor-fsr/using-an-fsr

[2] http://osusume-energy.biz/20180227155758_arduino-force-sensor/

[3] https://gist.github.com/mjvo/f1f0a3fdfc16a3f9bbda4bba35e6be5b

[4] http://woraya.me/blog/fall-2016/pcomp/2016/10/19/my-sketch-serial-input-to-p5js-ide

[5] https://www.nytimes.com/interactive/2018/11/18/world/asia/world-built-by-china.html

[6] http://www.sais-cari.org/

[7] http://www.aei.org/china-global-investment-tracker/
