Devs 3 – Follow Me (Group 2)

Group 2: Shipra, Mona, Ricky, and Zaheen



Description

For Devs 3, we wanted to experiment with using the data received from the robot to trigger a response. The primary goal of this experiment was to familiarize ourselves with the connections between the robot and the programs: Processing, P5, and TouchDesigner. During these experiments, we worked with simple programs to generate simple responses, for example, moving a point on the screen according to the robot's coordinates. By adjusting these parameters, we wanted to investigate tracking the robot's movements further and creating a visual response on the screen.
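
As a reference point, here is a minimal Processing sketch of that idea, using the oscP5 library to receive the robot's coordinates over OSC and move a point accordingly. The port, the address pattern /robot/pos, and the coordinate range are assumptions; the actual values depend on how the robot-side sender is configured.

```processing
import oscP5.*;
import netP5.*;

OscP5 oscP5;
float robotX = 0;
float robotY = 0;

void setup() {
  size(800, 600);
  // Listen for incoming OSC messages on port 12000 (the port is an assumption).
  oscP5 = new OscP5(this, 12000);
}

void draw() {
  background(0);
  // Map the robot's work-area coordinates (assumed range -500..500 mm)
  // onto the screen.
  float x = map(robotX, -500, 500, 0, width);
  float y = map(robotY, -500, 500, 0, height);
  noStroke();
  fill(255);
  ellipse(x, y, 20, 20);
}

// oscP5 calls this whenever an OSC message arrives.
void oscEvent(OscMessage msg) {
  // "/robot/pos" is a placeholder address pattern; it depends on
  // how the robot-side sender is set up.
  if (msg.checkAddrPattern("/robot/pos")) {
    robotX = msg.get(0).floatValue();
    robotY = msg.get(1).floatValue();
  }
}
```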

Final Project Video

Robot + Screen


Mind Map of the Project & Learnings

After settling on a direction from the first round of explorations, we decided to work with Processing. We programmed the sketch to track the coordinates of the robot and used the map() function to translate those coordinates onto the screen. The first iteration drew a path tracing the robot's movements and waypoints. Since we did not want to make another drawing tool, we built on this logic for the final idea: an eyeball that traces and follows a small dot on the screen, with both the eye and the dot mapped from the robot's coordinates. We were also inspired by thinking about how to create a responsive, emotive screen reaction to the movement of the robot; how to make the screen "communicate" with the robot.
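
A minimal sketch of that eye-following logic is below. To keep it self-contained, the mouse stands in for the map()-ed robot coordinates; in our setup the target position came from the OSC data instead.

```processing
// Eye-follow sketch: the pupil points toward a small dot whose position
// stands in for the mapped robot coordinates.

ArrayList<PVector> trail = new ArrayList<PVector>();

void setup() {
  size(800, 600);
}

void draw() {
  background(20);

  // Stand-in target: replace with the map()-ed robot coordinates.
  PVector target = new PVector(mouseX, mouseY);

  // Trace the path, as in our first "drawing tool" iteration.
  trail.add(target.copy());
  if (trail.size() > 200) trail.remove(0);
  stroke(120);
  noFill();
  beginShape();
  for (PVector p : trail) vertex(p.x, p.y);
  endShape();

  // Draw the eyeball at the center of the screen.
  PVector eye = new PVector(width / 2, height / 2);
  noStroke();
  fill(255);
  ellipse(eye.x, eye.y, 160, 160);

  // Offset the pupil toward the target, clamped inside the eyeball.
  PVector toTarget = PVector.sub(target, eye);
  toTarget.limit(50);
  fill(0);
  ellipse(eye.x + toTarget.x, eye.y + toTarget.y, 50, 50);

  // The small dot the eye is following.
  fill(255, 80, 80);
  ellipse(target.x, target.y, 14, 14);
}
```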

This process helped us understand the limitations of working with this technology. We realized the importance of working towards smaller goals: doing so helped us understand the movements, familiarize ourselves with the controls, and see how the data could be translated into something meaningful within the programs. This assignment, the first step towards Project 2, resulted in a short animation controlled by the robot, in which we tracked and mapped its movements. The intent was to be able to create smooth transitions between the robot and the animation. This exercise was a great learning experience and will add value to our Project 2 ideation process.
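
One way to approach those smooth transitions is to ease the on-screen point toward the latest robot position rather than jumping to it each frame. A short sketch of that idea, with placeholder coordinates standing in for the incoming robot data:

```processing
float robotX = 400, robotY = 300;  // latest mapped robot coordinates
                                   // (placeholders; fed by OSC in practice)
float px, py;                      // displayed, smoothed position

void setup() {
  size(800, 600);
}

void draw() {
  background(0);
  // Move a fraction of the remaining distance each frame;
  // 0.1 is a smoothing factor we would tune by eye.
  px = lerp(px, robotX, 0.1);
  py = lerp(py, robotY, 0.1);
  noStroke();
  fill(255);
  ellipse(px, py, 20, 20);
}
```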


Tests and Trials: Challenges

Our goal in this step was to determine the best approach and to explore, through a series of tests, our preferred way of creating an insightful interaction with the robot. Since we came up with many different ideas and concepts, we decided to follow a learning-through-making approach to find out where this interaction fits best, both creatively and practically. Through these tests and trials, we gained a deeper understanding of what it means to interact with a non-human agent, in this case a robot. We were forced to think about the effort and challenges we might face in this field, and to stay flexible enough to change course, which led to more insightful results. The following are some experiments with our concepts on different platforms:

  • TouchDesigner

In this sketch, our initial test of receiving OSC data from a cellphone was successful, but we could not figure out how to adapt it to process data from the robot. The project is as follows:

[GIF: TouchDesigner OSC test]

  • Pure Data

As with the first sketch, this one was not successful in processing the data received from the robot. The project details are as follows:
https://www.youtube.com/watch?v=vf84-FVJZx4

[Screenshot: Pure Data project]
