
Group 3 – Project 1: Robot Camera

Project 1
Group 3: Wentian, Nicky, Shipra, Gavin

Final video:


In project 1, our group intended to investigate the rapport between humans and robots. Rather than treating the robot arm as a clumsy machine, we personified the robot and created a live scenario where robots can interact with humans.

In the scenario we created, the robot is endowed with a “personality”, creating the illusion of it interacting with a human in a distinctive and enchanting way.

While exploring the robot’s operation, one of the issues we recognized is that robots still fall into uncanny valley territory when trying to replicate human behavior. Although futuristic robots in sci-fi movies have demonstrated possibilities of consciousness, the scenario in which robots fully possess human characteristics is still not plausible in today’s world. Based on this identified problem space, we wanted to experiment with potential interactions between humans and robots personified with conscious characteristics, mindsets, and behaviors.

We brainstormed an interesting scenario where the actor interacts with the robot arm without real conversation or physical contact. Specifically, the actor gossips with someone on a phone call, and the noise she makes wakes up the robot arm. The actor then gestures at the robot to go away, and the robot pretends not to eavesdrop on her conversation until the moment it hears something explosive.

Our intended outcome is that, by showcasing how robots could interact with us in daily circumstances, people (especially technology designers and researchers) could be inspired to conduct further research in the field of robot personas.


We created a setup that matched the script and storyboard. We used props such as books, plants, a water bottle, paper, a laptop, and a desk lamp to create the appearance of a design studio/office. Our actor was placed directly opposite the robot arm. A hidden microphone was set up next to her to capture sound, and the phone was put in the robot arm’s gripper. We used mostly natural lighting by opening the blinds on the window, supplemented by the desk lamp featured in the video and an additional hidden off-screen lamp. A meta element was added to the project: the robot is looking at its own operating instructions throughout the final video.

MIMIC for Maya helped us visualize the final video, but we decided to go with manual input for the robot arm’s waypoints because we felt more comfortable with those controls at this point in our learning. The trickiest aspect was getting the timing correct – we had to coordinate the actor and robot to create the illusion of them responding to each other. An additional challenge was ensuring the robot didn’t function solely as a mechanical object. We gave it somewhat natural actions – looking away, glancing at its documents, being nosy – to personify it and give it some character.

We set waypoints to create the robot’s different actions. The difficult part was applying different speeds to the robot arm – some actions, such as the robot looking at its documents, required a bit more breathing room, whereas the robot being “surprised” by what it hears needed a much faster movement. After testing this out, we used the “wait” keyframe for the parts where the robot needed to pause.
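The timing logic we worked out on the controller can be sketched in code. This is a hypothetical model, not the arm’s actual program: the `Waypoint` class, its fields, and the single illustrative joint angle are our own invention, but they mirror how each waypoint carried a target, a speed, and an optional “wait” pause.

```python
# Hypothetical sketch of our waypoint timing (not the actual robot
# controller program): each waypoint carries a target, the speed of
# the move into it, and an optional pause ("wait" keyframe).
from dataclasses import dataclass

@dataclass
class Waypoint:
    name: str
    angle_deg: float     # single illustrative joint angle
    speed_dps: float     # degrees/second for the move into this point
    wait_s: float = 0.0  # pause after arriving ("wait" keyframe)

def sequence_duration(waypoints):
    """Estimate total playback time: travel between consecutive
    waypoints at each segment's own speed, plus any pauses."""
    total = 0.0
    for prev, curr in zip(waypoints, waypoints[1:]):
        travel = abs(curr.angle_deg - prev.angle_deg) / curr.speed_dps
        total += travel + curr.wait_s
    return total

# A slow "reading the documents" move vs. a fast "surprised" jolt.
script = [
    Waypoint("rest", 0, 10),
    Waypoint("look_at_docs", 30, 10, wait_s=2.0),  # breathing room
    Waypoint("surprised", 90, 120),                # snap reaction
]
print(sequence_duration(script))  # 3.0 travel + 2.0 wait + 0.5 travel = 5.5
```

Varying `speed_dps` per segment is what gives the slow, deliberate moves their breathing room while keeping the “surprised” reaction snappy.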


Development Process

Storyboard and link to script
Mimic images and link to Mimic output (1, 2)
After settling on an idea, we started gathering references for shots and the script. The intent was to create an environment where the actor and the robot could coexist, a scene where they could have a conversation. After writing the first draft of the script, we started framing the scenes and sequences. Since it is a short video, we broke it down into 8 major frames and drew a storyboard based on the script. This was further developed in Mimic, which helped us understand timing, movement, and camera angles.

After this ideation and prototyping stage, we gathered props (set elements) from the Digital Futures Lounge (6th floor) and created the environment for the shoot. Since the scene is set in a study environment, we decided to work with the robot arm and the table setup. After a few test runs with the actor, we were able to synchronize the script to the robot’s movements. It was easier to modify the dialogue delivery than to change the robot’s movement. The final shot was edited with added sound effects: a phone ringtone and robot arm sounds (to exaggerate the movements). A blinking action was also added to make it seem like the robot had just woken up from sleep upon hearing the ringtone.

Code or Files
Sound design and robot movement: Sound Design: Robotic Arm Sequence

Group 3 – Dev 1

Dev 1
Group 3: Wentian, Nicky, Shipra, Gavin

For Dev 1, we wanted to experiment with the robot arm’s spatial movement. Primarily, the goal was to test how the robot arm could handle a 360-degree encircling of an object. We attached a phone to the gripper, using the phone’s camera to take photos and videos of the object. We also wanted to explore shooting angles such as a top-down perspective. In these experiments, we tried to figure out how fluid the robot arm could be after we programmed a series of movement commands into the console. We observed the speed of the arm transitioning between points, whether it would collide with the object, and the quality of the photographs and videos taken on the phone. This is prep work for our Project 1, where the camera’s perspective will be crucial. It will also be very useful for Dev 2, where we plan to use the robot to 3D scan an object.


First, we placed our main character “Dead Panda” in the central area of the table beside the robot arm. To keep the arm from reaching its maximum downward range, we put some wood blocks under the object to raise it. Then we positioned the robot arm’s first waypoint and adjusted the camera’s shooting angle. After the initial preparation, we made our first attempt by inputting four waypoints, one programmed by each of us. Unfortunately, our first attempt failed: the gripper kept doing full rotations during the demonstration – a potential hazard for the object – because we were manually changing the gripper’s position and angle by hand. After another failed attempt, we realized we needed to control the gripper and arm from the control pad rather than by moving the robot directly. Although the camera collided with the object several times due to poorly positioned waypoints, we eventually accomplished the task and got a smooth encircling video of our main object.
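The geometry of the encircling shot can be sketched as follows. This is an illustrative model only: the waypoints were actually entered by hand on the control pad, and the function name, coordinates, and the 0.30 m radius are assumptions, though the four-waypoint count matches what we used.

```python
# Hypothetical sketch of the encircling shot: n waypoints evenly
# spaced on a circle around the object, each with a yaw that keeps
# the phone camera pointed at the centre. (Illustrative only; the
# real waypoints were taught by hand on the pad.)
import math

def orbit_waypoints(cx, cy, z, radius, n=4):
    """Return n (x, y, z, yaw) tuples on a circle of the given
    radius around (cx, cy), with yaw facing the centre."""
    points = []
    for i in range(n):
        theta = 2 * math.pi * i / n
        x = cx + radius * math.cos(theta)
        y = cy + radius * math.sin(theta)
        yaw = math.atan2(cy - y, cx - x)  # look back at the object
        points.append((x, y, z, yaw))
    return points

# Four waypoints around the object, raised on wood blocks (z = 0.25 m).
for x, y, z, yaw in orbit_waypoints(0.0, 0.0, 0.25, 0.30, n=4):
    print(f"x={x:+.2f} y={y:+.2f} z={z:.2f} yaw={math.degrees(yaw):+.0f}")
```

Keeping the yaw locked on the centre is what prevented the kind of collision we hit when positioning the gripper by eye: every waypoint stays at a fixed standoff distance from the object.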



VIDEO OF DEV 1 <— Click



Learnings & Outcomes
While going through this process of trial and error with the robot arm, we faced a few of the aforementioned challenges. The process helped us understand the limitations of working with this technology. We realized that working toward smaller goals helped us understand the movements and familiarize ourselves with the controls. This first step toward Project 1 resulted in a short 360-degree video of a small object placed on the table. The intent was to create a smooth transition between camera angles and positions. We achieved that to an extent by working as a team to art-direct the scene. This exercise added value to our Project 1 ideation process.