Dev 4 – In Progress | Nicky Guo

Description 

My plan for Project 2 is to attach a giant, soft toy hammer to the gripper of the robot arm, which stays stationary while a person studies or works at a table next to it. Whenever the person is about to fall asleep at the table from fatigue and sleepiness, the robot arm moves towards them with the giant hammer and hits them on the head to wake them up. 

As progress on Project 2, I have finished the initial design and fabrication of the tool attached to the robot arm, as well as some simple drafts to help explain my idea. Regarding tool selection, I chose a softer material for the head of the hammer, so that it can wake the user up without hurting them or becoming unbearable. At the same time, I chose a hard material for the rod of the hammer, so that it can be attached to the gripper more stably. I therefore used a rolled-up blanket as the head of the hammer, inserted the cardboard core of a wallpaper roll through it as the rod, and taped the joints to hold them in place. With that, my homemade hammer is complete; the next step is to mount it on the gripper for testing. The connection between the hammer and the gripper also needs further testing to ensure stability. 

Process 

screenshot-2023-03-31-at-8-57-43-pm screenshot-2023-03-31-at-8-57-36-pm screenshot-2023-03-31-at-8-57-27-pm

screenshot-2023-03-31-at-8-57-18-pm

What I learned 

In selecting and considering tools so far, I had to take the robot's limitations and feasibility into account; for example, I had to choose a rigid rod to match the size and gripping stability of the robot arm's gripper. The current robot cannot yet adapt to all environments and conditions, so there are many limitations that we as designers must account for. At the same time, I used a soft material for the head of the hammer from the human (user's) point of view: a real "wake-up robot" must, first of all, never cause harm to humans, and secondly, if each strike causes too much pain, the user may stop using the function or the robot altogether. Therefore, we need to take into account the various factors on both the robot and human sides and then find a good balance. 

Devs 4 – Group 1

Group members: Victoria Gottardi, Siyu Sun, Maryam Dehghani, Yueming Gao

Investigation

Can robots only simulate humans? Can robots be “othered”? Post-humanism suggests that the relationship between organisms and machines is both evolutionary and simulated. Will future human pets also be mechanized? When humans interact with mechanical animals, can emotions be generated as well?

The background for this work is post-humanism and the potential evolution of relationships between living organisms and machines. With the rise of robotics and artificial intelligence, it is essential to consider the implications of these advancements for the future of society.

Moreover, asking whether machines can be “othered” raises significant questions about how we perceive and interact with technology. If we can create machines that appear to have a sense of self or otherness, our relationship with technology may become more complex and nuanced. As we continue to develop more advanced robotics and artificial intelligence, it is crucial to consider the ethical implications of these advancements and how they may shape the future of society.

In Project 2, we aim to explore the idea of simulating the habits of cats using a mechanical arm. The project seeks to achieve a level of interaction between the robot and humans such that the robot appears like a cat playing in the human world. We will use Processing and a robot arm, establish a connection between the two, and design two basic actions to simulate the behaviour of a cat.

Process

In Devs 4, to test whether our group's project concept can be implemented, we want the robot to carry out some simple interactions with humans. We use Processing to connect with the robot: essentially, we need to send data to the robot and design two basic actions that simulate a cat. We therefore plan to test two actions first. The first involves the robot arm slapping something off a table, while the second is a slower movement followed by quick hits. These actions are designed to mimic the behaviour of a cat and create a sense of interaction between the robot and the human observer. To do this, we designed two states (a rough sketch of the state logic follows the state descriptions below).

State1: Simulate a cat slapping something.

screenshot-2023-03-31-at-4-58-50-pm

 

State2: Simulate a cat hitting something.

screenshot-2023-03-31-at-4-59-10-pm
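To make the idea concrete, here is a minimal p5.js-style sketch of the state logic. The state names, the waypoint values, and the sendToRobot() helper are placeholders for illustration only, not our actual Processing script (which is linked below).

// two states standing in for the cat behaviours described above
const SLAP = 0;            // one quick sideways swipe
const HIT = 1;             // slow approach followed by quick taps
let state = SLAP;

function setup() {
  createCanvas(200, 200);
  frameRate(1);            // send a target roughly once per second while testing
}

function draw() {
  background(0);
  if (state === SLAP) {
    // single fast motion across the table edge
    sendToRobot({ x: 0.4, y: -0.2, speed: 1.0 });   // placeholder waypoint
  } else {
    // creep in slowly, then strike quickly
    sendToRobot({ x: 0.4, y: 0.0, speed: 0.1 });
    sendToRobot({ x: 0.4, y: 0.1, speed: 1.0 });
  }
}

function keyPressed() {
  // toggle between the two behaviours while testing
  state = (state === SLAP) ? HIT : SLAP;
}

function sendToRobot(target) {
  // placeholder: in practice this would send the target to the robot over your data link
  console.log(JSON.stringify(target));
}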

The following pictures show the test screens.

devs4_movements-of-statements

img_8953 img_7711

Robot Script URL: https://github.com/vicisnotokay/Devs-4

Devs 4 – Exploration

Group 2 (Mona, Ricky, Shipra, Zaheen)

ezgif-com-video-to-gif-6

Description + Process

For this Devs, we wanted to explore the relationship between a human and an orchestra conductor in the form of a robot. During a performance, a conductor serves as a messenger for the composers. Their responsibility is to understand the music and convey it through gestures (hand movements) so transparently that the musicians in the orchestra understand it perfectly.

As a starting point, we wanted to experiment with one aspect of this and then move ahead. We therefore tried to understand the hand movements of a conductor, how they are reflected in the music, and whether the robot would be able to mimic those movements.
A case study we looked at while exploring was ABB’s robot YuMi, whose performance was developed by capturing the movements of maestro Andrea Colombini through a process known as lead-through programming. The robot’s arms were guided to follow the conductor’s motions meticulously; the movements were then recorded and further fine-tuned in the software. Taking inspiration from this example, we wanted to see how a robot could be in charge of conducting a performance.

 

Final Project Video

Final Video

Exploration

We worked with state machines to create the different instances. We developed each state to perform an action on the laptop and an action/movement for the robot. We worked with three states: Pause, Low, and High. The Pause state is the nil/zero state: no action happens and no audio is played. The Low state plays the audio at a lower volume and slower speed, and the robot makes smaller, slower movements. The High state plays the audio at a faster pace and at a higher volume, and the robot's actions/movements change accordingly.
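As a rough illustration, the laptop side of this state machine could be sketched in p5.js with the p5.sound library along the following lines. The audio file name, key bindings, and the exact volume/rate values are illustrative assumptions; the robot-side movement data that our build sends in each state is omitted here.

let song;
let state = 'PAUSE';       // 'PAUSE', 'LOW', or 'HIGH'

function preload() {
  song = loadSound('track.mp3');   // placeholder sound file
}

function setup() {
  createCanvas(200, 200);
}

function draw() {
  background(0);
  text(state, 20, 20);
  if (state === 'PAUSE') {
    if (song.isPlaying()) song.pause();    // nil state: no audio, no robot motion
  } else if (state === 'LOW') {
    if (!song.isPlaying()) song.loop();
    song.setVolume(0.3);                   // quieter ...
    song.rate(0.8);                        // ... and slower playback; robot makes small, slow moves
  } else if (state === 'HIGH') {
    if (!song.isPlaying()) song.loop();
    song.setVolume(1.0);                   // louder ...
    song.rate(1.3);                        // ... and faster playback; robot makes larger, faster moves
  }
}

function keyPressed() {
  if (key === '0') state = 'PAUSE';
  if (key === '1') state = 'LOW';
  if (key === '2') state = 'HIGH';
}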

 

ezgif-com-video-to-gif-8
ezgif-com-video-to-gif-4

Code

Further Steps

Moving forward, we intend to work on:

  1. Creating a performance piece that will include the robot controlling the music composition. This might be in relation to how the volume changes, the pitch, and the speed of the music composition.
  2. Designing a tool – a baton – that will be provided to the robot. We want to explore how we can play with scale relative to the robot by providing it with a miniature-sized tool.
  3. Creating a storyboard to choreograph the scene and props, while developing the character of the robot.

Learnings/ Takeaways

This project helped us understand the first few steps of sending data and working with state machines. It opened up various opportunities for exploration, and we narrowed down on this topic because we are developing it further for Project 2. The intent was to mimic the movements of a conductor while synchronising those movements with a classical sound piece. After understanding the basics of working this way, we were able to create a program in Processing that sends mouse data and triggers the volume levels and speed of the soundtrack.

 

 

Dev 3 – Group 6: Anas Raza, Faraas Khan

headd3-g6

Investigation

What is it you are trying to figure out?

Our main goal at this stage is to test sending and receiving data from the robot arm. From this, we are currently investigating visual effects that can be generated with robotic arm data, including visceral effects emerging from the synchronized physical motion of the arm with on-screen graphical media, and whether this combination creates a unique and immersive experience. A potential application of this combination is to create interactive installations that incorporate both visual and participatory physical elements. In the next stage, we hope to explore the possibilities of having the robot arm respond to human movement (if circumstances allow).

Documentation of process and results

For this exploration we are using an existing p5 code example and replacing the motion/interaction coordinates with values from the robot arm. Because of the arm's orientation, we have mapped the arm's Y value to the screen's X coordinate and the arm's Z value to the screen's Y coordinate.

// arm Y value drives the sketch's horizontal position
let rx = map(rposY, -260, 200, 100, windowWidth - 100);
// arm Z value drives the vertical position (output range reversed, since screen Y increases downward)
let ry = map(rposZ, 455, 875, windowHeight - 100, 100);
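For context, a stripped-down sketch built around this mapping might look like the following. The WebSocket address and the { y, z } message format are assumptions standing in for however your robot bridge publishes its data; the ranges are the ones shown above.

let rposY = 0;             // updated by the WebSocket message handler
let rposZ = 600;

function setup() {
  createCanvas(windowWidth, windowHeight);
  // hypothetical endpoint; replace with the address your robot data server uses
  const socket = new WebSocket('ws://localhost:8080');
  socket.onmessage = (event) => {
    const data = JSON.parse(event.data);   // assumes a payload like { "y": ..., "z": ... }
    rposY = data.y;
    rposZ = data.z;
  };
}

function draw() {
  background(20);
  // same calibration as above: arm Y -> screen X, arm Z -> screen Y
  const rx = map(rposY, -260, 200, 100, windowWidth - 100);
  const ry = map(rposZ, 455, 875, windowHeight - 100, 100);
  noStroke();
  fill(255, 80, 80);
  circle(rx, ry, 40);
}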

dev3-g6

In case the GIF doesn’t work:
https://drive.google.com/file/d/1t-Iz9S-UbgfYsIG6ERZCywWqvDcqUoWX/view?usp=sharing

How did it go?

This experiment went well, we think. We had the opportunity to practice preliminary yet crucial steps, including establishing server connections, calibrating data, and exploring creative directions.

Learning Experience

Every interaction with the robotic arm is a learning experience at this stage. Just knowing how to control the robot externally opens many possibilities for us. The most important lesson in this part of the assignment was figuring out data mapping: the values the robot publishes are very different from the graphics coordinates. Data calibration is the most critical part of any project where two different digital systems interact. We are using the map() function to calibrate data in this setting, but as the project becomes more complex, constraints on incoming or outgoing values may be needed. We hope to learn more about controlling the robot arm as we progress to the next stages.
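For example, p5's constrain() can be layered on top of map() so that an out-of-range reading from the robot cannot push the graphics off screen; a one-line sketch of the idea, using the same placeholder ranges as above:

// clamp the mapped value to the intended on-screen range
let rx = constrain(map(rposY, -260, 200, 100, windowWidth - 100), 100, windowWidth - 100);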

Forked code: https://editor.p5js.org/anasraza/sketches/S-1tIFDG2

 

Devs 3 – Group 1

Group members: Victoria Gottardi, Siyu Sun, Maryam Dehghani, Yueming Gao

What we are investigating

For this assignment, we are investigating what kind of code we can create with p5.js and how that code responds to the robot's movements when connected through the WebSocket. We picked p5.js because it is the software we are most comfortable with, and we wanted to see what the robot-to-code connection would look like in a web browser. We do not currently have a finalized Project 2 concept, so we took this opportunity to get comfortable with reading data from the robot in a program like p5.js, and we hope this experiment will give us some new skills that we can apply to Project 2 if needed. Lastly, conducting this experiment might give us inspiration as we begin to solidify our Project 2 concept.

 

How did it go

In Devs 3, we used p5 to test connecting the laptop to the robotic arm through a WebSocket.

First of all, we set up the real-time robot data application on the computer. This application allows us to see the data as the robot moves, such as its coordinate position, rotation angle, and speed.

Then we opened the p5 sketch we had prepared earlier and tested it. As you can see, when the arm moves, the red dot moves with it, and the white particles chase the red dot. This works because we set the x coordinate of the red dot equal to the robot's x value, and its y coordinate equal to the robot's y value.
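A simplified version of the sketch's core logic might look like this. Here robotX and robotY stand in for whatever variables the WebSocket handler updates, and the mapping ranges, particle count, and easing speeds are illustrative placeholders rather than our exact values.

let robotX = 0;            // updated from the robot data received over the WebSocket
let robotY = 0;
let particles = [];

function setup() {
  createCanvas(600, 600);
  for (let i = 0; i < 50; i++) {
    particles.push({ x: random(width), y: random(height), speed: random(0.02, 0.08) });
  }
}

function draw() {
  background(0);
  // red dot driven directly by the robot's x / y values, mapped into canvas space
  const dotX = map(robotX, -300, 300, 0, width);
  const dotY = map(robotY, -300, 300, 0, height);
  noStroke();
  fill(255, 0, 0);
  circle(dotX, dotY, 20);
  // white particles ease toward the red dot each frame
  fill(255);
  for (const p of particles) {
    p.x += (dotX - p.x) * p.speed;
    p.y += (dotY - p.y) * p.speed;
    circle(p.x, p.y, 4);
  }
}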

In addition, p5 has an advantage as a web-based editor: as long as the audience clicks the URL we share, they can watch the interactive process on their own devices.

dev3adev3b

What we learned

In this process, we learned how to connect to the robotic arm through a WebSocket. We also tried to use the robotic arm's data to establish interactions on different devices. There are many forms of interaction, but all of them are based on changes in data, and through coding we can directly see how changes in the robotic arm's data affect the p5 sketch. This makes us feel, intuitively, that there is a very close connection between data and our lives. Knowing how to use data to drive change will be very important for our future projects.

 

The code link:

  1. Robot Script: https://github.com/vicisnotokay/Devs3
  2. P5 sketch: https://editor.p5js.org/y.gao3322/sketches/h78qqfkMQ

Dev 3 – Group 5

Team members:

Yifan Xu, Wentian Zhu, Ellie Huang, Jiamin Liu

Description of what you are investigating

In Project 2, we are investigating the potential of robot arms to enhance sports and gaming training. In the physical world, gaining expertise in sports and games usually entails repetitive exercise and access to appropriate equipment. Unfortunately, such resources may not be accessible to everyone, which is where robot arms come in as a valuable aid. By generating virtual reality scenes and offering support for sport or game-based training, robot arms can be of tremendous assistance to a wider population.

screen-shot-2023-03-17-at-6-22-05-pm

Aim Lab Mobile

Robots are commonly employed in technology-driven businesses and academic research, but they have yet to be fully integrated into people’s daily lives. In the context of shooting training, traditional aim trainers are limited by fixed tracks laid out on the ground or ceiling, which make it difficult to change or replace targets easily. In contrast, the use of a robot arm in shooting games offers unparalleled flexibility. By acting as a moving target, the robot arm’s speed and position can be quickly and easily adjusted without the need for rebuilding tracks. This makes it an ideal tool for aim training during sports and gaming activities, offering a more immersive and interactive experience for users.

To achieve our goal, we plan to incorporate Unity, a popular game development platform, to create a target and establish a motion pattern that reacts to the robot arm’s movements. This will effectively turn the robot arm into a moving target, providing a challenging and engaging game for aim training. This modality could be further designed to support a variety of sports and games, enabling us to explore how robot arms can enhance training for different activities.

Our ultimate aim is to demonstrate the effectiveness of using robot arms in sports and gaming training. We believe that the flexibility and adaptability of robot arms make them a valuable tool for people who do not have access to traditional training resources. With the help of robot arms, individuals can train in a more immersive and interactive environment, leading to better results and a more enjoyable training experience.

 

Documentation of process and results

  1. Link Unity to the robot arm

 screen-shot-2023-03-17-at-6-23-20-pm

  2. Create Unity code

screen-shot-2023-03-17-at-6-24-32-pm
screen-shot-2023-03-17-at-6-25-18-pm

In Dev 3, we successfully mapped the parameters received from the robot data input scripts to the plane's XYZ coordinates. This allowed us to establish the connection between the robot arm and the virtual environment. In addition, we divided the XYZ parameter values from the robot data by 100 so that we could track the plane in the Unity window.

  3. Test the code

mar-18-2023-20-22-38

Video:

https://youtu.be/fednh6YGnmg

Code we used

https://github.com/NarrowSpace/HumanRobotCollab_Dev3_WIP

 

How did it go?

In our recent experiment, we were able to achieve horizontal movement after some tests. However, we faced challenges due to space constraints and not accounting for the length of the robot arm, which prevented us from developing code that would allow flipping motions. Such motions would have increased the difficulty level of the shooting game.

Moreover, we found that there is still much to learn about vertical motion and speed control. However, we are confident that we have enough time to enhance the code and explore these aspects further. Despite the challenges, we are encouraged by the progress we have made so far and remain committed to advancing our understanding of robot arms and human-robot interaction.

 

 

Description of what we learned

During Dev 3, we gained valuable insights that will help us in our future work with robot arms. One of the key takeaways was the importance of understanding axes and coordinate systems. This knowledge is critical for ensuring the precision of the robot arm's movements and must be applied when writing code. By gaining a deeper understanding of the axes, we improved the accuracy of our robot arm movements and ensured that they matched the intended motion patterns.

In addition to learning about axes, we also improved our programming and debugging skills. Through trial and error, we were able to identify and fix bugs in the code, which helped us to develop more robust and efficient software. This experience will be valuable for future projects, as we will be better equipped to identify and fix errors in our code.

Moreover, this exploration allowed us to gain a fundamental understanding of the game development process. This involved designing the game mechanics and exploring human-robot interaction. Through this process, we gained insight into the complexities of game development and the importance of iterative testing and debugging.

 

Devs 3 – Follow Me (Group 2)

Group 2: Shipra, Mona, Ricky, and Zaheen


ezgif-com-video-to-gif-2

 

img_7381-4

 

Description

For Devs 3, we wanted to experiment with using the data received from the robot to trigger a response. The primary goal of this experiment was to familiarise ourselves with the connections between the robot and the programs: Processing, p5, and TouchDesigner. During these experiments, we worked with a simple program to generate simple responses, for example moving a point on the screen according to the robot's coordinates. By adjusting these parameters, we wanted to investigate tracking the robot's movements further and creating a visual response on the screen.

Final Project Video

Robot + Screen


MindMap of the Project & Learnings

After settling on the first round of explorations, we decided to work with Processing. We programmed the code to track the coordinates of the robot and used the map() function to translate those coordinates onto the screen. The first iteration drew a path tracing the robot's movements and waypoints. Since we did not want to make another drawing tool, we built on this logic for the final idea: an eyeball that traces and follows a small dot on the screen, with both the eye and the dot mapped from the robot's coordinates. We were also inspired by thinking about how to create a responsive, emotive on-screen reaction to the movement of the robot; how to make the screen “communicate” with the robot.
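To illustrate the final idea, here is the same logic written as a p5.js-style sketch (we prototyped in Processing, but the structure is analogous); robotX/robotY and the mapping ranges are placeholders, not values from our build.

let robotX = 0;            // updated from the robot's published coordinates
let robotY = 0;

function setup() {
  createCanvas(600, 600);
}

function draw() {
  background(230);
  // the small dot the eye follows, mapped from the robot's coordinates
  const dotX = map(robotX, -300, 300, 50, width - 50);
  const dotY = map(robotY, -300, 300, 50, height - 50);
  fill(0);
  noStroke();
  circle(dotX, dotY, 10);
  // eyeball fixed at the centre of the canvas
  stroke(0);
  fill(255);
  circle(width / 2, height / 2, 120);
  // pupil offset toward the dot, clamped so it stays inside the eye
  const angle = atan2(dotY - height / 2, dotX - width / 2);
  const r = min(40, dist(dotX, dotY, width / 2, height / 2));
  noStroke();
  fill(0);
  circle(width / 2 + cos(angle) * r, height / 2 + sin(angle) * r, 40);
}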

This process helped us understand the limitations of working with this technology. We realized the importance of working towards smaller goals: it helped us understand the movements, familiarize ourselves with the controls, and see how the data can be translated into something meaningful using these programs. This assignment, as the first step towards Project 2, resulted in a short animation controlled by the robot, in which we tracked and mapped its movements. The intent was to create smooth transitions between the robot and the animation. This exercise was a great learning experience and will add value to our Project 2 ideation process.

 

ezgif-com-video-to-gif-3

 

Test and Trial: Challenges

Our goal in this step was to determine the best approach and explore our preferred way of creating an insightful interaction with the robot by conducting a series of tests. As we came up with many different ideas and concepts, we decided to follow a learning-through-making approach to find out where this interaction fits best, both creatively and practically. Through these tests and trials, we gained a deeper understanding of what it means to interact with a non-human agent, in this case a robot, and we were pushed to think about the effort and challenges we might face in this field, as well as the flexibility needed to change paths, which will surely lead to more insightful results. The following are some experiments with our concepts using different platforms:

  • TouchDesigner

In this sketch, our initial test of receiving OSC data from a cellphone was successful, but we couldn't figure out how to adapt it to process data from the robot. The project is as follows:

TD-osc test.gif

  • Pure Data

As with the first sketch, this one was not successful with the data received from the robot. The project details are as follows.
https://www.youtube.com/watch?v=vf84-FVJZx4

screen-shot-2023-03-17-at-10-15-06-am

Project 1 – Group 5

By: Victoria, Firaas, and Yifan

Victrola Record Player Advertisement 

Link to Final Video

still_of_video

For this assignment, we wanted to create a 20-second advertisement for a Victrola record player. The robot arm is very capable of taking smooth, professional-looking shots, and we wanted to utilize this as much as we could. For this advertisement, we really wanted to show off the capabilities of the record player itself while also making it look visually appealing, with strategic shots carried out by the robot arm. When looking through various examples of Victrola advertisements, one thing we kept seeing was how the shots showed the record player in a very aesthetically pleasing format. Here are two examples:

Example 1

Example 2

However, we wanted to take a different approach and give our video a slightly vintage aesthetic with the sound, a filter, and music. A vinyl crackle was used at the beginning of the video, before the music started playing, to help fit the vintage aesthetic; it is also a pleasing sound to the ears. The song chosen was “November Rain” by Guns N’ Roses, specifically the guitar solo around the 7-minute mark. Lastly, to give the video an advertisement feel, we wanted it to end with a fade to the Victrola logo.

The Setup 

Storyboard:

img-20230212-wa0003

Background/Set:

background_set

The dark crimson velvet really complemented our desire for a vintage aesthetic when we were choosing the background. We used the table lamp to replicate a spotlight on a red carpet, in order to make the record player stand out more. Because of the light we added, it took us a long time to adjust the program: we didn't want the robot arm's shadow to obscure the main focus, the record player. In addition to serving as a spotlight, the table lamp also had to be positioned where it could not be knocked over by the robot arm as it moved along the path we had established.

After five trials, we eventually anchored the lamp in the left-hand corner of the table, with the robot arm in the right-hand corner. The light came from left to right, completely avoiding casting the arm's shadow on the record player. This placement also meant the robot arm and the table lamp were staggered.

Development Process

During our first attempt at operating the arm, we struggled for a bit. Initially, the video was going to have several pans from right to left and from bottom to top, as seen in the paper storyboard. However, when attempting the right-to-left pan, the arm would arc upwards when the program played fully, despite the setup being completely straight. Likewise, the controls for the arm got flipped around, making the arm difficult to control overall. Ultimately, we worked around these unexpected constraints and focused more on the arm orbiting around the record player, as seen in the final video.

First Video Attempt

img_20230213_111759 img_20230214_144603img_20230213_111753

Link to Code

 

 

 

Project 1: Lost In – Group 2

Lost in

Group 2: Siyu Sun, Yueming Gao, Maryam Dehghani

cover2

 

Project Description

In this project, we studied how to control the robotic arm and establish its operating range in order to carry out automated shooting. The controllability of the robotic arm inspired us to use it in experimental video production. An experimental film is one without a single objective language or narration, subverting traditional storytelling techniques. It mainly takes the form of short films and is closely related to surrealism, expressionism, and avant-garde art. Methods used in experimental film include defocusing, distortion, staining, repetition, and quick editing of the image. Unsynchronized sound and image, voice variations, grotesque characters, and vague themes are also common in experimental movies.

To achieve a unique camera language in experimental film, we think robotic arms are a good starting point for research, so we wanted to test this technique in our first project.

 

Project Concept

We live in a world full of sound, and people cannot control what will happen in their lives. We may feel depressed, sad, or happy. How do we face it when something happens in our lives? The only thing we can do is create opportunities and become stronger.

 

Moodboard

3671677683240_-pic

 

 

FINAL VIDEO

 

How we made it

There are four scenes in this video in total.

Scene1:

scene1-pic

scene1_robot

scene1_light-the-candle

  • Mark some objects in a circle around the plate
  • Set the angle and position of the first waypoint
  • Set up every waypoint

The camera repeatedly crosses between the empty plate and the candle, symbolizing the diversity of events happening at the same time.

 

Scene2:

microsoftteams-image-2

movement-of-robot

writing

throwing-paper-balls

  • One-shot
  • The camera moves closer to the paper.
  • The camera follows the movement of the writing.
  • The camera follows the movement of throwing the paper ball.

This part narrates a person in a very angry mood: she writes some words down on paper and throws it away, trying to relieve herself.

 

Scene3:

wechatimg366

scene3_dizzy-circling

  • Project the visual onto the wall.
  • The camera moves close to the wall and then performs a rotating movement.

We use the rotating movement to achieve a sense of dizziness, which is part of the mood as well.

 

Scene4:

movements-sketch-04

scene4_candles

  • The arm goes up and down to create a zoom effect with the camera lens.

Candles are a symbol of sunlight, brightness, and hope. At the height of despair, we should always look for openings of hope: every valuable person around us, every goal we have for the future, and even every good memory from the past. Search for them all; each one is like a light shining in our hearts.

 

About MAYA:

scene3mayarobot

 

Reflection:

At the beginning (the Dev 1 assignment), we tried to record the first scene, but we didn't pay much attention to the environment setup. This is the first video we recorded:

It is easy to see the messy background, and you can even see people in the view. Since we wanted to make a film, it was not suitable to show this, so we remade the first scene. We also gained a lot of experience with MAYA, such as considering the environmental situation more carefully and thinking about how to design the movement.

Group 1 – Diaries of a Robot

Diaries of a Robot
Mona Safari, Dorothy Choi, Jiamin Liu, Zaheen Sandhu

gif-1-with-text

From researching creative inspirations, we were drawn to cinematic themes in nature, miniatures, and light imagery. Our brainstorming led us to investigate how to create a simple, cinematic nature scene (e.g. animals and a cottage at the top of a hill in the countryside, near the ocean) that involves spatial and temporal changes, such as a day-to-night transition.

picture1
picture2

Description

Our main concept for project 1 revolves around creating a cinematic nature scene. We manipulated miniature elements and the robot arm creatively to demonstrate different camera angles that can capture and illustrate the scene.

The story for Project 1 centres on the robot's point of view. The robot is getting in touch with nature, its elements, and the native people, which is reflected in the words of the poem that accompanies our 20-second cinematography. Throughout this journey, our process involved storyboarding at the initial stage, experimenting with the robot arm, and settling on our final resources to create the scene.

ezgif-com-video-to-gif-1
Our Explorations

screenshot-2023-02-28-at-2-20-32-pm

Devs 1 – Exploration
Devs 2 – Exploration

 

Story Board and Scene Creation

scene
Light and shadow
(Day-to-night transitions)

We explored these spatial and temporal changes through different light effects, using a mobile colour app and various tangible materials (e.g. a pink bubble wrap envelope and a water bottle) to create the ideal coloured light to reflect morning-to-evening transitions.

We also considered the light's impact on the scene in terms of ambiance and storytelling.

light

Cinematography

The voiceover narration comes from the lens of the “Robot arm”. We personified the Robot, as it is visiting nature and getting in touch with its world.

“Camera scenes follow the poem, reflecting respect for nature.”
picture4

 

May the warp be the white light of morning,

May the weft be the red light of evening,

That we may walk fittingly where grass is green,

O our Mother the Earth, O our Father the Sky.

American Indian | Tewa Song

 

The Process – Behind the scene

picture5picture6

picture7
picture7-1

 

 


Touch Point of the Process

One of the most crucial, challenging, and tricky parts of our process was controlling and playing with the lighting of the scene. To get the effect of morning light and transition it into evening light, we decided to use coloured lights on our mobile phones and play with the spatial and temporal changes. For the first part, our main goal was a seamless transition of the sun rising and the resulting sunlight effect. For this effect, we used a mix of orange and blue light: we hit record, waited a few seconds, and then pointed the light sources toward the scene, moving along with the arm so the lighting effect stayed seamless. The next part was the transition to evening light, for which we slowly moved away from the orange light source but kept the blue light, bringing it closer to the camera slowly. Through this process we were able to achieve a slow yet seamless colour transition.

 

light-steps

picture8
ezgif-4-cb6d017c89                                                                        ezgif-4-091ab62523

 

Our Challenges

  • ENVIRONMENT

    One of the main challenges we faced while setting up our scene was the surrounding environment appearing in the video. We noticed that it was tough to record the scene without exposing the immediate environment to the camera. To tackle this, we created walls out of thick sheets. While this solved our main challenge, it also acted as a bonus element for our project: we wanted to play with light and shadow, and the sheet made a great background for that purpose. We also tested different angles and close-up focus with the camera to avoid capturing the surrounding environment as best we could.

picture9
  • TIMING

    Sometimes it was difficult to pinpoint how fast or how slow the robot arm should move with respect to the scene, and also with respect to the narration. There was a lot of learning through trial and error during this time. We finalized the timing by first creating a pace for the narration and then matching it with each of the keyframes when shooting the scene.
  • LIGHTING

    Sometimes the lighting was tricky – as it may have created too much shadow, captured too much shadow from the robot arm, or didn’t have enough brightness/opacity in colour. We found that the flashlight function on different cellphones has variations in brightness, so we used it to our advantage. For example, a brighter light against an opaque colour can create a bright colour light instead of a dull one. Additionally, casting light adjacent to or in the opposite direction of the robot arm (rather than in the same direction) would minimize shadow capture from the robot arm itself.


Development for the Future!

Initially, we intended to use a blurry background, but during our experimentation stage, we were unable to do so. This could also have been accomplished in a few other ways, but we decided to focus on the main part, which was the narrative of the project, and make it more valuable. After all, this is something we definitely need to experiment with in the future!

 

Final Video: Diaries of a Robot