Experiment 4 – Script redux

Nik Szafranek

I ended up trying to push the code further for Experiment 2. The code still doesn’t work, but I’m getting somewhere. I was particularly drawn to this because I’ve always been fascinated by writing and linguistics. I really enjoyed the challenge despite not getting it working in the end, and I feel like I’ve gotten more comfortable with code. I hope to explore further.

Here is the link to the code:

https://github.com/nszafranek/Script

I have gotten it to recognize chunks of text and split them into component characters, but I’m still struggling with concatenating and displaying them. A toy sketch of that step is below.
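For anyone curious, a toy p5.js sketch of that split-and-rejoin step might look like this (the names and the sample string are placeholders, not the repo’s actual code):

// hypothetical illustration of splitting a chunk into characters and rejoining it
function setup() {
  createCanvas(400, 200);
  background(255);
  let chunk = "script";          // a chunk of recognized text
  let chars = chunk.split("");   // split into component characters
  let rebuilt = chars.join("");  // concatenate them back together
  fill(0);
  textSize(32);
  text(rebuilt, 20, 100);        // display the recombined string
}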

Jin Zhang (3161758)
Siyue Liang (3165618)

Documentation: Sound Interactive Installation


Project Description
For this project, our group decided to experiment with the capacitive sensor. CapacitiveSensor is an Arduino library that turns any conductive material into a sensor that senses proximity. Our original idea was to make an installation out of metal wire that people could move their hands close to in order to control the amplitude and speed of the audio. We didn’t achieve that in the end because the sensor didn’t behave the way we expected: we had to actually touch the metal with our hands to make the values change, which was not what we aimed for.
Inspiration & Related Works
In the beginning, we were inspired by the wire loop game, in which the player uses a loop wand to pass along a metal wire maze without touching the wire. The cool thing about this well-known game is the metal wire itself: it can be bent into any shape you want while still playing the role of a sensor. Based on this game, we thought the idea of using metal wire as a sensor to control sound and visual effects would be super cool. (Here is the link to a simple wire loop game we found online: https://www.instructables.com/id/Wire-Loop-Game-Tutorial/)

We did some research online and found a lot of cool metal artwork. We wanted to build cool metal sculptures and connect each of them to a different piano note.


Building Process

  • Code For Arduino

#include <CapacitiveSensor.h>

/*
* https://forum.arduino.cc/index.php?topic=188022.0
* CapacitiveSense Library Demo Sketch
* Paul Badger 2008
* Uses a high value resistor e.g. 10M between send pin and receive pin
* Resistor affects sensitivity; experiment with values, 50K - 50M. Larger resistor values yield larger sensor values.
* Receive pin is the sensor pin - try different amounts of foil/metal on this pin
*/
CapacitiveSensor cs_4_2 = CapacitiveSensor(4,2); // 10M resistor between pins 4 & 2, pin 2 is sensor pin, add a wire and/or foil if desired
/*
CapacitiveSensor cs_8_7 = CapacitiveSensor(8,7);
CapacitiveSensor cs_7_6 = CapacitiveSensor(7,6);
CapacitiveSensor cs_9_8 = CapacitiveSensor(9,8);
CapacitiveSensor cs_11_10 = CapacitiveSensor(11,10);
CapacitiveSensor cs_13_12 = CapacitiveSensor(13,12);
*/

void setup()
{
cs_4_2.set_CS_AutocaL_Millis(0xFFFFFFFF); // turn off autocalibrate on channel 1 – just as an example
/* cs_8_7.set_CS_AutocaL_Millis(0xFFFFFFFF);
cs_7_6.set_CS_AutocaL_Millis(0xFFFFFFFF);
cs_9_8.set_CS_AutocaL_Millis(0xFFFFFFFF);
cs_11_10.set_CS_AutocaL_Millis(0xFFFFFFFF);
cs_13_12.set_CS_AutocaL_Millis(0xFFFFFFFF); */
Serial.begin(9600);
}

void loop()
{
long start = millis();
long total1 = cs_4_2.capacitiveSensor(50);
/* long total2 = cs_8_7.capacitiveSensor(50);
long total3 = cs_7_6.capacitiveSensor(50);
long total4 = cs_9_8.capacitiveSensor(50);
long total5 = cs_11_10.capacitiveSensor(50);
long total6 = cs_13_12.capacitiveSensor(50); */

// tab character for debug window spacing

Serial.println(total1);// print sensor output 1
/* Serial.println(total2);
Serial.println(total3);
Serial.println(total4);
Serial.println(total5);
Serial.println(total6); */
Serial.println("\t");

delay(30); // arbitrary delay to limit data to serial port
}

  • Code For Processing

import processing.serial.*;
import processing.sound.*;

Serial myPort;

int sensor1;

// Declare the processing sound variables
SoundFile sound;
Amplitude rms;

// Declare a scaling factor
float scale=5;

// Declare a smooth factor
float smooth_factor=0.25;

// Used for smoothing
float sum;

public void setup() {
fullScreen(P3D);
//size(800,800,P3D);

String portName = "/dev/cu.usbmodem141301";

myPort = new Serial(this, portName, 9600);
myPort.bufferUntil('\n');

//Load and play a soundfile and loop it
sound = new SoundFile(this, "sound2.mp3");
sound.loop();

// Create and patch the rms tracker
rms = new Amplitude(this);
rms.input(sound);

}

public void draw() {

// Set background color, noStroke and fill color
background(0);

//println("sensor1: " + sensor1);

sound.rate(map(sensor1*0.05, 0, width, 1, 4.0));
//sound.amp(map(mouseY, 0, width, 0.2, 1.0));

// smooth the rms data by smoothing factor
sum += (rms.analyze() - sum) * smooth_factor;

// rms.analyze() returns a value between 0 and 1. It's
// scaled to height/2 and then multiplied by a scale factor
float rms_scaled=sum*(height/2)*scale;
noFill();
if (sensor1 > 1000){
stroke(random(0,255),random(0,255),random(0,255));}
else {
stroke(255);
}

strokeWeight(5);
translate(width/2,height/2);
rotateX(-PI/6);
//if (sensor1 > 1000){
//rotateY(PI/3 + sensor1 * PI);}

rotateY(frameCount*0.04);
for (int size = 10; size < 400; size += 50){
box(size,size, rms_scaled);
}

}
void serialEvent(Serial myPort) {
//yPos = myPort.read();//like serial.write in arduino
// read the serial buffer:
String myString = myPort.readStringUntil('\n');

if (myString != null) {
// println(myString);
myString = trim(myString);

// split the string at the commas
// and convert the sections into integers:
int sensors[] = int(split(myString, ','));
for (int sensorNum = 0; sensorNum <=0; sensorNum++) {
print("Sensor " + sensorNum + ": " + sensors[sensorNum] + "\t");
}
println();
if (sensors[0] != 0){
sensor1 = sensors[0];
}

/* sensor2 = sensors[1];
sensor3 = sensors[2]; */

}
}


Features & Goals
Our goal was to make a sound interactive installation where the speed and amplitude are controlled by the value of the capacitive sensor.

In terms of the code, we were planning on using 4 to 6 capacitive sensors for the installation. However, after wiring and connecting all 6 sensors, they stopped working for some reason. So we took them out one at a time and checked whether the circuit worked, but it would not work until there was only one sensor left in the circuit. We also wanted the sensor to control some LEDs apart from the visuals, but when we added the output pin for the LED, the serial port would stop writing values to Processing, which was really frustrating to see.

In terms of the physical build, when we noticed that the metal wire wasn’t reacting to proximity, we tried replacing it with other materials. We were not sure whether the conductivity of the metal was limiting the effectiveness of the code, so we put conductive paint on paper and used that as the capacitive sensor, but the same thing happened.
So we switched back to using metal wire. We spent a lot of time considering the form of the installation. We thought it would be better if the visuals and the installation somehow related to each other, hence the music notes. A cube shape was the other option, but we eventually decided to go with the notes.

References

http://playground.arduino.cc/Main/CapacitiveSensor?from=Main.CapSense

https://processing.org/reference/libraries/sound/SoundFile.html


Experiment 4: Final Report

Contact

Isaak Shingray-Jedig
DIGF 2004-001
December 7, 2018

Inspiration

The inspiration for this project came from a past project in the Digital Futures program. Accio, the culminating first-year assignment that Roshe Leynes and I worked on, introduced me to computer vision. The project we developed allowed people to interact relatively easily and naturally with an onscreen render, in large part thanks to the amazing potential of computer vision (in that case through the use of Kinect). With Experiment 4 I set out to explore computer vision further through a drawing experience that would use a large piece of conductive or resistive fabric as a makeshift paper or medium, in combination with a Kinect for tracking the tip of a finger. In practice, the Kinect 1 wasn’t well suited to fine movement detection, so instead I looked to video-based computer vision and found my answer: colour tracking. Colour tracking allowed for high levels of accuracy without the general depth noise that the Kinect was prone to producing, so I settled on it as my method for computer vision.

 

Experience

Users sit in front of a computer running the program. They place a small fabric ring on the index finger of their dominant hand. They hold the space bar to use the colour tracking calibrator and see the webcam feed, where they click on the coloured band attached to the ring on their finger. They then release space and begin to draw in the same way that one would finger paint or draw in sand.

 


Features and Components

The components used in Contact were:

  • USB webcam
  • Boom arm desk lamp
  • Large piece of conductive fabric for desk
  • Small conductive fabric ring for finger with a patch of red electrical tape
  • Wires
  • Arduino

The physical construction of Contact was relatively simple but had to be set up in a very particular way, because the webcam needed a clear line of sight on the entire pad of fabric, which in turn had to be very brightly lit. In combination with the code, these components came together to produce a responsive touch drawing experience. The most difficult part of Contact's development by far was the implementation of reliable colour tracking. A few hurdles in that process were understanding how to think of colours in a space relative to X, Y, and Z, and understanding that every single pixel on screen had to be checked each frame. In terms of rendering, when I tried to use an array to store line values, the system became very slow and unresponsive, due to the heavy load that checking every pixel puts on the computer every frame. For this reason, I put the background in the setup function and made two arrays two places long so that the program would draw one line at a time without erasing the previous lines. A sketch of the colour tracking idea follows below.
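For illustration, here is a minimal p5.js sketch of that per-pixel colour search, treating R, G and B like X, Y and Z; this is a hedged reconstruction of the idea, not the repo’s code, and the target colour is a placeholder:

// find the webcam pixel closest to a target colour (e.g. the red tape on the ring)
let cam;
let target = [255, 0, 0]; // placeholder target colour

function setup() {
  createCanvas(640, 480);
  cam = createCapture(VIDEO);
  cam.size(640, 480);
  cam.hide();
  background(255); // background only in setup, so drawn marks persist
  noStroke();
  fill(0);
}

function draw() {
  cam.loadPixels();
  let best = Infinity, bx = 0, by = 0;
  // every single pixel is checked each frame, which is the heavy part
  for (let y = 0; y < cam.height; y++) {
    for (let x = 0; x < cam.width; x++) {
      let i = 4 * (y * cam.width + x);
      // distance between colours, with R, G, B treated like X, Y, Z
      let d = dist(cam.pixels[i], cam.pixels[i + 1], cam.pixels[i + 2],
                   target[0], target[1], target[2]);
      if (d < best) { best = d; bx = x; by = y; }
    }
  }
  ellipse(bx, by, 5, 5); // mark the tracked fingertip position
}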

 


Goals

The goal of the project was to create a responsive and reliable, but above all else natural, drawing experience.  I think that in many ways Contact succeeded, especially in terms of usability and the responsive, reliable operation of the program through computer vision.  In terms of a natural experience, the physical setup of a camera attached to the back end of a lamp gave Contact a relatively natural feeling, but after receiving feedback from the professors and seeing my peers interact with it, I think that the gestural nature of the experience should be explored further.  I will look to expand upon what exactly makes the experience unique from other forms of touch technology and play to its natural strengths as a system.  The most immediate area for experimentation, to me, is to move the experience away from a seated position and into some form of more physical experience.  After having written the word sand in the experience section, I would also like to explore sand in a similar format.

 


https://github.com/isaakshingray/experiment4

 

Contextual works

Drawing using Kinect V2

https://pterneas.com/2016/03/15/kinect-drawing/

 

Computer Vision for Artists and Designers

http://www.flong.com/texts/essays/essay_cvad/

 

The pixel array



Alternative Methods of Input: Synchronization of Music & Gameplay

a project by Denzel Arthur & Angela Zhang

Denzel Arthur: Unity Gameplay

For this assignment I wanted to continue exploring Unity, most importantly the developer side of things. I wanted to connect my love for music and video games in order to give meaning to the back-end side of the project. Through this assignment I got to work with the Animator in Unity, scripting in C#, serial port communication between Arduino and Unity, music synchronization within the Unity application, and Ableton for creating and cutting sound clips.

 

The first part of this project was research on the concept, to find out how I could replicate it. I immediately found resources from the developers of “140”, a music-based platformer that became the main inspiration for this project. As the game was developed in Unity, the majority of the information provided by the developers aided me in my quest. They had information about frames per second, beats per second, and how to match them in the Unity game engine. Although the code they added to the PDF was no longer active, the explanation itself was enough for me to create a prototype of the concept.
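The core of that beat-matching explanation reduces to simple arithmetic. Here is a hedged sketch of it in JavaScript for illustration (the game scripts themselves were written in C#, and the tempo below is a placeholder):

// how many seconds each beat lasts at a given tempo
const bpm = 120;                  // placeholder tempo
const secondsPerBeat = 60 / bpm;  // 0.5 s per beat at 120 BPM

// which beat of the song we are on after a given number of elapsed seconds
function currentBeat(elapsedSeconds) {
  return Math.floor(elapsedSeconds / secondsPerBeat);
}

// fire a beat-synced event (move an enemy, pulse a platform) once per beat
let lastBeat = -1;
function onUpdate(elapsedSeconds) {
  const beat = currentBeat(elapsedSeconds);
  if (beat !== lastBeat) {
    lastBeat = beat;
    // trigger the beat-synced gameplay event here
  }
}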

The second part of the project was setting up the Unity scene. This part of the process involved using basic Unity primitive objects to create a rough version of the game. The primitive objects were used to arbitrarily represent the player object, enemies, and environment. With these basic assets, I was able to program collision systems, death and respawn animations, and other triggers like button events.

The third part of the process was the physical computing. I was initially supposed to work with another classmate to create more elaborate buttons, so instead I created basic prototype buttons from parts found in the Arduino kit. These buttons became the main source of communication between the game and the player during the presentation. A very lackluster physical presentation, but that seems to be a trend in my work here in the Digital Futures program. Nonetheless, after the buttons were created I proceeded to connect the physical buttons, attached to the microcontroller, to the Unity engine. This proved more challenging than it needed to be due to poor documentation of the free methods of connection, but after purchasing the Uduino kit, the connection became a seamless process. This stage also included programming the buttons and adjusting the animations, mechanics, scripts, and audio files in order to get a prototype that was playable and had the right amount of difficulty.

The final part of this process was creating a visually appealing product within the game engine by adjusting the virtual materials and shaders within Unity, and swapping out any assets for ones that fit the concept of the game. I still went with primitive shapes in order to achieve a simplistic aesthetic, but certain shapes were modified in order to make the different levels and enemy types seem more diverse.


 

Angela Zhang: Physical Interface

For Experiment 3, I wanted to use capacitive sensing to make a series of buttons that would control the gameplay in Unity for Denzel’s game. I started with a 9×9” wooden board primed with gesso, with a ¾” thick border so that there is a bit of material to protect the electro-galvanized nails that would be nailed in as solder points.

I did a digital painting in Procreate to plan out my design for the buttons.

conceptual drawing – digital drawing on iPad & Procreate

I ended up using the blue triangle design and splitting the yellow design in half to be two triangles that look like left and right arrows, which I intended to use as FRONT and BACK navigation; the blue triangle would be the JUMP button.

process – stencil for design, graphite on tracing paper
tracing paper stencil – shaded w 5B pencil on backside

I traced the design onto some tracing paper with graphite pencil.

On the opposite side of the tracing paper, I used a 5B graphite pencil to shade in the entire shape to make a transferable stencil.

Tracing with a pen with the shaded side down I transferred the design onto the gesso board.

process – transferred design onto gesso-ed 9×9″ wooden board.

Once the design was transferred, I applied green painter’s tape around the edges so that when I applied the black conductive paint overtop, the edges would come out clean. I added three more rectangular buttons for additional functionality. Once everything was transferred, I hammered electro-galvanized nails, about 1.5 cm long, into each of the ‘buttons’ [not really buttons yet, but empty spaces where the buttons should be]. Because the nails were so small, I used a thumb tack to make some of the registration holes for better accuracy.

process – back of gesso board with electro galvanized nails.

I then applied a generous coat of conductive paint by Bare Conductive, mixed with a little bit of water, as the carbon-based paint is a little sticky and hard to work with; water is conductive, so this did not prove to be a problem. After I finished painting the designs with conductive paint, I sealed them with a layer of acrylic varnish spray to prevent the paint from rubbing off when touched. For some of the buttons, I planned to put on another layer of acrylic paint to see if it was possible to activate the conductive paint with coloured paint overtop, to allow for more than just black designs as I had planned.

process – conductive paint applied and tape taken off
final button – conductive paint, acrylic varnish spray, regular acrylic paint, final coat of acrylic varnish
Final Painting.
back of board – set up

I painted the background with regular acrylic paint to make the board more aesthetically pleasing. With the final layer of acrylic paint and a final coat of varnish, I was ready to test my connections. Using a soldering iron, I soldered wires to each of the connections, then alligator-clipped each wire to an electrode on the Touch Board microcontroller.

The LED on the Touch Board lit up when I touched each button, so it was all working. The only thing I noticed was that the connections were very sensitive to touch, so if the wires in the back were touching one another, they would sometimes trigger an electrode they were not supposed to. This can be solved with better cable management and by enclosing the microcontroller inside the back of the board if I want to make this a standalone project.

The original idea was to hook up the board to Unity so that they could replace the tactile buttons that Denzel described using in his half of the documentation. Using the Arduino IDE, I uploaded the following code to the Touch Board [screenshots do not show code in entirety]:

screenshot – Uduino code
screenshot – Uduino code [cont’d]
screenshot – Uduino code [cont’d]
The Uduino code (to bridge the serial connection between Unity and Arduino) uploaded successfully onto the Touch Board’s ATmega32U4 chip, the same chip as the Arduino Leonardo. The problem with the connection, however, was that the conductive paint buttons used capacitive sensing logic rather than digital ON/OFF switch logic, and neither Denzel nor I was proficient enough in C# to change the Unity scripts so that the capacitive touch buttons could be used to trigger movement in the game engine. I tried looking at a few tutorials on this and watched one about analog inputs to Unity, which used a potentiometer as an example. I wasn’t sure this was going to be what I needed in the scope of time that I had, so I ended up settling on another idea and decided to attempt the Unity game controller with Denzel at a later date, when we both had the time to look at forums (lol).

I changed the function of the conductive painting to be a MIDI keyboard, as the Touch Board is particularly good for being used as a MIDI/USB device. I uploaded this code instead to the Touch Board:

Arduino IDE – MIDI interface example sketch from Touch Board Examples Library

I then used Ableton Live as a DAW to make my MIDI device produce sound. I changed the preferences in Ableton > Preferences > Audio Inputs > Touch Board, as well as the Output. I also turned on Track, Sync, and Remote so I could map any parameter within Ableton to my conductive painting, just like any regular MIDI keyboard. I used Omnisphere for a library of sounds I could play with my MIDI; because the capacitive buttons are analogue, I can map parameters like granular effects and pitch bends onto the buttons, as well as trigger tracks, pads in a drum rack or sampler, or any of the Session view channels in Ableton to trigger whole loops.

Omnisphere 2.0 VST in Ableton – sound library
Conductive painting inside of Ableton

Even though we did not successfully link Unity and the painting together, I still feel like I learned a lot from creating this unusual interface, and I will push this idea further in both Unity and Ableton; I want to use Max for Live to trigger even more parameters in physical space, eventually things like motors.

Atelier: Animated Movie Experiment: Dimitra Grovestine

“Experiment 1” Assignment

Dimitra Grovestine

3165616

September 27, 2018

 

GitHub Link: https://github.com/dimitragrover/Ateliyay

Drive Link to Movie: https://drive.google.com/file/d/1OfjblX1HizLFLFF2SBLrmi8BLyrTs6CN/view?usp=sharing

Introduction

For the first experiment, we were told to choose something of interest to us and use the code we had learned to enhance our craft. A big part of my life consists of acting and the creation of productions, so I wanted to put together a complete project and decided to explore creating an animated movie.

A large topic in my computer theory class this semester has been discovering the difference between human thinking and computer thinking. One of the largest differences that we have outlined, is that humans carry emotion where computers do not. I wanted to test out how various digital translations and movements portray human-like emotions.

When choosing my video topic, I wanted to tackle an important subject, one that would be effective presented through basic geometric shapes. I ended up choosing the topic of bullying and the overall human emotion of feeling unwanted, or feeling like you don’t fit in with others. The choice to use geometric shapes enhances the topic because there is no race, gender or sexual orientation attached to circles; yet there does exist a colour difference between the two main characters of the film. This colour difference allows viewers of all ages to recognize that there is a difference between the two circles, yet it is not a difference that can be directly mapped to one single social difference between humans. I found that my choice to use a geometric shape and to include a colour difference made a socially difficult topic easier to understand. It also allows viewers to rethink their prejudices.

Humans tend to hold strong social opinions about other humans different from themselves. Often, when presented with information about socially oppressed groups within other species, such as in the animal kingdom, those same humans think that the division is silly or that it doesn’t make sense. All in all, my use of non-human forms was to help others rethink the social and physical judgements that they carry.

Storyboard Development

Song choice: Don’t Laugh at Me (by Mark Wills)

Scene 1


Scene 1 stayed fairly similar to my original thoughts. I decided to implement a basic horizontal transition to mimic the human act of entering a space.

Scenes 2, 3 and 4


Scene 2 actually never ended up occurring in the final cut. Scene 3’s jiggling momentum was created using a random X position while moving up Y at a constant rate. The jiggling is very subtle, and I think this is effective for mimicking nerves, because the human feeling is very internal rather than an external quality intensely visible to others.

Scenes 5, 6 and 7


When increasing the number of circles, a for loop was used. To create the feeling of explosion, the radius was changed, increasing and decreasing like a heart rate. Finally, the spinning was produced using cosine and sine as well as height changes, making the circle appear to move into the distance.
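As a rough illustration of that sine/cosine spin, a minimal p5.js sketch might look like this (the numbers are placeholders, not the film’s actual values):

// a circle orbiting with cosine/sine, shrinking and growing to suggest distance
let angle = 0;

function setup() {
  createCanvas(720, 480);
}

function draw() {
  background(0);
  let x = width / 2 + cos(angle) * 150;  // horizontal sweep
  let y = height / 2 + sin(angle) * 40;  // slight height change
  let size = 65 + sin(angle) * 20;       // radius pulsing like a heart rate
  fill(255, 0, 0);
  ellipse(x, y, size);
  angle += 0.05;
}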

Scenes 8 and 9


In the film, I established a form of communication between the circles through the ever-changing size of their radii. When the two shapes were in the process of morphing into one colour, I layered the two circles on top of each other to begin to show the complementary colour forming. I had them moving back and forth across the screen to mimic time passing. I also used a fade effect to allow the past scene to remain present for a small moment at the start of a new scene. This cinematic feature helped portray the passing of time.

Scene 10


Once the circles finally blended colours, I used a for loop again to fill up the screen and conclude the film.

Canvas Exploration (How to Create a Movie on Canvas)

As I began exploring the creation of an animated film, I thought it would be interesting to see if I could get an entire movie to play on the canvas using timers. Going into the project, my last resort would be the use of Premiere Pro. My very first test with the timer involved playing back the first scene twice. This is when I realized that all of the timers start from the moment you hit refresh in the browser. When I play the exact scene back, and trigger the second timer at the moment the first scene is done, it does some funny things. Three quarters of the way through the first scene, it speeds up the movement of the three circles. When the second scene begins playing, the circles are moving at the new fast pace. On loop, the circles continue to move at the new fast pace.

 

Timer Experiment Code:

let timer = 2;
let timer1 = 5;
var Xplace = 0;

function setup() {
  createCanvas(720, 480);
}

function draw() {
  background(0);

  if (frameCount % 20 == 0 && timer > 0) {
    timer--;
  }
  if (timer == 0) {
    Xplace = Xplace - 1;
    if (Xplace < 0) {
      Xplace = width;
    }
    //red baby character
    ellipse(Xplace - 85, 300, 65);
    fill(255, 0, 0);

    //red parents
    ellipse(Xplace - 30, 200, 100);
    fill(255, 0, 0);
    ellipse(Xplace - 140, 200, 100);
    fill(255, 0, 0);
  }

  // note: timer1 is never decremented here, so the second block never triggers
  if (frameCount % 20 == 0 && timer1 > 0) {
  }
  if (timer1 == 0) {
    Xplace = Xplace - 1;
    if (Xplace < 0) {
      Xplace = width;
    }
    //red baby character
    ellipse(Xplace - 85, 300, 65);
    fill(255, 0, 0);

    //red parents
    ellipse(Xplace - 30, 200, 100);
    fill(255, 0, 0);
    ellipse(Xplace - 140, 200, 100);
    fill(255, 0, 0);
  }
}

 

To attempt to solve this problem, I decided to try adding an additional frameCount condition to the problem area. The only sweet spot I found when including the additional frameCount condition was at a rate of 30, which gave me the same scene twice. But was my code now functioning properly, or was it just playing the results of the first timer on repeat? I decided to create a timer producing the second scene. Unfortunately, it presented no errors in the console but did not play the first scene followed by the second; rather, it produced the first scene on loop. There was also a visual error of a white flash at the beginning of scene one. I wasn’t sure whether this white flash on the circles had something to do with the white circles in scene two or not.

 

Timer Experiment Code 2

let timer = 2;
let timer1 = 5;
var Xplace = 0;

function setup() {
  createCanvas(720, 480);
}

function draw() {
  background(0);

  if (frameCount % 20 == 0 && timer > 0) {
    timer--;
  }
  if (timer == 0) {
    Xplace = Xplace - 1;
    if (Xplace < 0) {
      Xplace = width;
    }
    //red baby character
    ellipse(Xplace - 85, 300, 65);
    fill(255, 0, 0);

    //red parents
    ellipse(Xplace - 30, 200, 100);
    fill(255, 0, 0);
    ellipse(Xplace - 140, 200, 100);
    fill(255, 0, 0);
  }

  // note: timer1 is never decremented here, so the white scene never triggers
  if (frameCount % 20 == 0 && timer1 > 0) {
  }
  if (timer1 == 0) {
    Xplace = Xplace + 1;
    if (Xplace < 0) {
      Xplace = width;
    }
    //white baby character
    ellipse(Xplace - 85, 300, 65);
    fill(255, 255, 255);

    //white parents
    ellipse(Xplace - 30, 200, 100);
    fill(255, 255, 255);
    ellipse(Xplace - 140, 200, 100);
    fill(255, 255, 255);
  }
}
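One way around the refresh-to-start timer pitfall is to schedule scenes from millis() measured against a recorded start time. This is only a sketch of the idea under that assumption, not the code used for the final film:

// scene scheduling from elapsed time instead of shared frameCount timers
let startTime;

function setup() {
  createCanvas(720, 480);
  startTime = millis(); // record when the sketch actually begins
}

function draw() {
  background(0);
  let elapsed = (millis() - startTime) / 1000; // seconds since setup
  if (elapsed < 5) {
    sceneOne();
  } else if (elapsed < 10) {
    sceneTwo();
  }
}

function sceneOne() {
  fill(255, 0, 0);
  ellipse(width / 2, height / 2, 65); // placeholder scene content
}

function sceneTwo() {
  fill(255);
  ellipse(width / 2, height / 2, 100); // placeholder scene content
}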

 

Another concern of mine was the film lacking basic cinematic features, ones that can be found in a movie editor, such as scene-to-scene transitions. I was also worried about getting exact timing with the chosen audio. The audio I chose was a very important aspect of the piece and was a big help in guiding the storyline. For that reason, I chose to use video editing software to enhance the actual piece and story.
Final Thoughts:

Overall, I want to be very honest with these final thoughts. I believe that I was successful in achieving what I set out to do; however, I feel that I missed the bar on this overall project, and this is something that I did not realize until I saw the explorations of my classmates. I spent a lot of time on this project and on my efforts to present a story. I was very set on creating something complete, something within my comfort zone, and something that I could visually see working, and I do believe I was successful at producing exactly that. I believe I enhanced my knowledge of animating and film making using the canvas. However, after reviewing the works of others, I think I would have liked to try to create something a little less complete and something I was unfamiliar with. I think that I potentially misinterpreted the instructions and missed out on a greater learning opportunity. All in all, I think that itself was the learning opportunity for me. It definitely will change how I move forward with projects in this class and in my career.

Experiment 1: Cubical Vortex Effect – Jin Zhang & Siyue Liang

Experiment I: Cubical Vortex Effect

Jin Zhang (3161758) & Siyue Liang (3165618)

09.27.2018

Atelier I

Github Link:

https://github.com/haizaila/Experiment1


 

We were interested in the examples of sound-controlled animation shown in class, so we wanted to experiment more with it. Based on this idea, we tried to drive our 3D shapes with songs or input from the computer microphone.

The reason we chose to work with HTML5 and JavaScript is that we had similar lessons last year, so we had already learned some basics and are comfortable working with them.

 

This is what we made in the very beginning. Cubes and cones are displayed from far to near by adding a z-axis. It creates a tunnel-like shape and looks very cool.


 

 

Our original thought was to input an audio file and make the animation react to the music. We tried using “loadSound” and “analyzer.getLevel” to turn the music into a value for the animation, but it didn’t work because the audio file couldn’t be loaded properly for some reason. So we went back to using “micLevel”. However, because the microphone records every tiny acoustic signal it picks up, the animation doesn’t move as orderly; it is easily disrupted by outside noise.
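For reference, a minimal p5.js sketch of the micLevel approach might look like this (the shape and the numbers are placeholders, not our actual sketch; it requires the p5.sound library):

// mic level driving the size of a rotating 3D shape
let mic;

function setup() {
  createCanvas(400, 400, WEBGL);
  mic = new p5.AudioIn();
  mic.start(); // the browser will ask for microphone permission
}

function draw() {
  background(0);
  let level = mic.getLevel();         // amplitude between 0 and 1
  let d = map(level, 0, 1, 10, 300);  // louder room, bigger shape
  rotateY(frameCount * 0.02);
  cone(d / 2, 100);                   // diameter follows the mic value
}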

 

 

After that, we just played around with the code more and got some of these random effects. Some of them worked and some did not. Then we added a few if statements to make the shapes switch to cones when the mouse is pressed, just to make it more fun and interactive.


We tried to change the “rotate” code to make the cubes and cones rotate at different angles individually. However, for some reason, everything could only rotate at one angle as a whole. A sketch of a likely fix is below.
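One likely explanation, though we did not verify it, is that the rotation was applied to the whole scene; wrapping each shape’s transforms in push()/pop() would give each one its own angle, roughly like this (values are placeholders):

// per-shape rotation by isolating transforms with push()/pop()
function setup() {
  createCanvas(400, 400, WEBGL);
}

function draw() {
  background(0);
  for (let i = 0; i < 5; i++) {
    push();                               // isolate this shape's transforms
    translate(0, 0, -i * 100);            // position along the z-axis tunnel
    rotateY(frameCount * 0.01 * (i + 1)); // each shape gets its own angle
    box(50);
    pop();                                // restore the global state
  }
}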

 

I really liked this one because it has a cool futuristic, space-like feel to it. We got this by setting the diameter of the cones equal to the mic value; when there is no sound, the cones become these rod-like shapes.


 

In our final version, “micLevel” controls the stroke colour, the size, and the rotation of the shapes, while mouseX and mouseY control their stroke and movement.


 

It’s a really interesting process to watch the result change as we play with different elements. We had a lot of fun doing this project together and we will definitely keep exploring sound-interactive art.

 

 

Reference/Inspiration

This sketch is from OpenProcessing; it inspired our idea of creating 3D effects and exploring the beauty of geometric shapes.

We referenced the part where a for loop is used to create shapes and continually translate them to different positions according to the mouse coordinates.

https://www.openprocessing.org/sketch/494306

Experiment 1 – Image To Sound – Nik Szafranek

 

//Basic Concept

A simple script to convert raw image data into raw audio data by tricking the very simple MIME tag at the front of a Base64 data URL.

//Rationale


The jumping-off point was an attempt to extend part of a previous project, which focused on mixing in sound from various sources, and to just muck about with code. It has been made to work in Python, as can be seen here: https://www.hackster.io/sam1902/encode-image-in-sound-with-python-f46a3f, but as I was more familiar with JavaScript I thought I’d try my hand at it that way, adding in what we’ve learned with p5. It doesn’t actually work, though.

//Code Outline

1. Get image
2. Convert Image to Raw Data
3. Convert Raw Data into a form that can be coherently turned into sound
4. Play sound
5. Save Sound
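As a sketch of steps 2 through 4, the core of the Base64 tag trick in browser JavaScript looks roughly like this. This illustrates the concept rather than the repo’s code; as noted above, the re-tagged data is not a valid WAV stream, so playback fails:

// draw something, grab its Base64 payload, and re-tag it as audio
const canvas = document.createElement("canvas");
canvas.width = canvas.height = 64;
const ctx = canvas.getContext("2d");
ctx.fillStyle = "red";
ctx.fillRect(0, 0, 64, 64);

const imageURL = canvas.toDataURL("image/png");      // "data:image/png;base64,..."
const payload = imageURL.split(",")[1];              // raw Base64 data, tag removed
const audioURL = "data:audio/wav;base64," + payload; // lie about the type

const sound = new Audio(audioURL);
sound.play().catch(() => console.log("not a decodable WAV, as expected"));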

//Github Link

https://github.com/nszafranek/Image-to-sound
//Code snippets from:
https://thebestschools.org/magazine/turn-data-images-into-sounds/

Removing A Character From The Start/End of a String In Javascript


https://stackoverflow.com/questions/1789945/how-to-check-whether-a-string-contains-a-substring-in-javascript
https://stackoverflow.com/questions/4366730/how-do-i-check-if-a-string-contains-a-specific-word
https://stackoverflow.com/questions/17762763/play-wav-sound-file-encoded-in-base64-with-javascript

How to remove text from string in JavaScript?


https://stackoverflow.com/questions/6094117/concat-to-string-at-beginning

Experiment 1: Smiley Happy Boi – James

The program listens to the sound picked up by the device’s microphone and displays it on screen in the form of a rainbow-coloured smiley face. The farther out a beam reaches, the louder the room is. The program only creates a visualization of the sound while the user holds down the left mouse button, allowing the user to decide when to start and stop. Additionally, the user can press the space bar to change the direction, and hit the ‘A’ key on their keyboard to restart it. All of these controls are displayed on screen so the user can easily figure out how to use it.
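A minimal p5.js sketch of the beam idea might look like this (the face and the space bar/‘A’ key controls from the real project are left out, and it requires the p5.sound library):

// rainbow beams whose length follows the microphone level
let mic;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start();
  colorMode(HSB); // hue 0-360 for easy rainbows
}

function draw() {
  background(0);
  let len = map(mic.getLevel(), 0, 1, 40, 200); // louder room, longer beams
  translate(width / 2, height / 2);
  if (mouseIsPressed) { // only visualize while the mouse is held
    for (let a = 0; a < 360; a += 15) {
      stroke(a, 100, 100);
      line(0, 0, cos(radians(a)) * len, sin(radians(a)) * len);
    }
  }
}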

Link to GitHub of project:  https://github.com/JamesLR/Smiley-Sound-Boi

I decided to use p5.js for the project, as Processing is efficient and simple to use for this particular idea. It was given a smiley face and bright rainbow colours in order to make the user happy while using it. Everyone can use a little extra happiness in their lives, and little things like a happy, rainbow-coloured face can go a long way. Combined with the fun of testing out sound levels, the project serves to brighten the user’s day. 99designs.ca even wrote an article about how colours can affect people’s emotions, which you can find linked here: https://99designs.ca/blog/tips/how-color-impacts-emotions-and-behaviors/

Having a variety of bright colours appear when playing around with the program can really help someone’s attitude, and so can seeing a smile, even if just in a small little program. If I had had more time, there would have been multiple expressions for the little smiley face, such as shock or tiredness. I actually did try to achieve this originally, but shifting between the expressions proved problematic, and the feature was eventually cut last minute.

(Also, the browser asks for microphone permission every time, but that is kind of out of my control, so…)
