Category Archives: Devs

Dev 3

Activity

For this assignment, our goal was to connect to the robot using Processing and WebSocket, and to experiment with how the robot responds to data sent from Processing. We chose these tools because we were comfortable with them. This activity is not directly related to Project 2; rather, it is meant to build familiarity with different programming tools and with controlling the robot through software, as well as to inspire a project direction.
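As a rough illustration of the kind of sketch involved, a minimal Processing example is shown below: it connects to a WebSocket address, prints whatever the robot sends back, and sends a test message on a mouse click. It assumes the Processing Websockets library is installed, and the address and outgoing message format are placeholders, since the actual bridge used in class may differ.

import websockets.*;

WebsocketClient robot;

void setup() {
  size(300, 200);
  // placeholder address for the robot's WebSocket bridge
  robot = new WebsocketClient(this, "ws://192.168.0.10:8025/robot");
}

void draw() {
  background(0);
}

// live data from the robot arrives here as text
void webSocketEvent(String msg) {
  println("robot: " + msg);
}

void mousePressed() {
  // hypothetical outgoing message; the real format depends on the bridge
  robot.sendMessage("hello robot");
}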

Process

It was a difficult process due to several challenges. The first was that my laptop was outdated, and I had to update macOS to meet the compatibility requirements of the software involved. The second was the connection with the robot: despite being connected, I was not receiving any live data from it. The third was that the robot would not respond, likely because of confusion in the code/script or some other issue. With patient self-troubleshooting through Google searches and video tutorials, along with help from other students, I was able to get the process moving along.

What I learned

This Dev 3 activity helped me understand the interaction between the robot, the software, and the human (me). I learned that it is important to understand the robot's XYZ coordinate system in order to speak the language the robot understands, that is, how it translates instructions into movement. This helped me get the robot to move the way I wanted it to. Additionally, at the beginning I had trouble connecting to the robot from my laptop, even after completing all the setup steps correctly. From this experience, I learned that the robot's system can get confused if it receives more than one network connection and cannot always tell which one to listen to. I was able to troubleshoot by turning my Wi-Fi off and on so that one connection stayed dominant.

Dev 4 – Small wins and lessons

Project Idea

Project 2 is inspired by a children's fishing game and by the games at the Canadian National Exhibition (CNE) where you play to win prizes. The idea is for the robot to interact collaboratively with the human, with the common goal of catching or scooping an object. If the robot, in collaboration with the human, is successful, the robot receives a prize.

Process

The animation and movement of the robot revolve around sending data to the robot. We first played with the robot using the standard UR touchscreen interface to get a feel for the movements, since we are familiar with touchscreen control. We then used Autodesk Maya 2020 to program the movements offline and tested the script on the robot. We had mixed results. Though the scooping idea worked well in theory, it was difficult to execute in practice: the scooping motion was hard to perform accurately, and the object rarely ended up in the box. We therefore decided to pivot. Instead of scooping, we had the robot do a sweeping motion, inspired by sweeping the floor or playing mini golf.

Below are a few pictures of the process:

 

[Process screenshots]

What I learned

Regarding designing robot animation in Maya 2020 versus animating the robot with live touchscreen control, I learned that offline animation does not always translate directly to live animation. There was some initial trouble getting the robot started in the right direction, as it appeared to want to go ‘around the world’ before executing the intended animation. Certain movements were also constrained; for example, it was hard to get the robot to take its prize off the table, and when the animated movements looped back, the robot could not reposition itself. These constraints altered some ideas. Lastly, the process inspired thinking on my feet: I altered and pivoted ideas many times, considering the robot's constraints and overall feasibility. This taught me that no idea needs to be set in stone, and to go with the flow with the robot.

Next steps

Finalize animation with virtual robot using Maya 2020, set up the robot’s video scene, and consider additional animation and production/post-production activities.

 

Dev 4 – Group 5

[Images]

 

Introduction:

In Development 4, we are continuing our exploration of creating mixed reality by combining real objects with a virtual environment. Since we have successfully imported robot data into Unity and achieved live communication between the robot and the software, we are now aiming to incorporate the VR headset (Meta Quest 2) and use its built-in Passthrough feature to create the mixed reality experiment.

 

Process:

[Image]

 

First, we imported the latest Oculus Integration SDK (v50) into Unity, dragged the OVRCameraRig prefab from the SDK into our scene, and added the OVR Passthrough Layer component to the OVRCameraRig to activate the passthrough feature.

Our original plan was to develop an aiming game in which the robot arm would act as a moving target. Players could view the moving target through the OVR camera while shooting at it with a controller within the game's mechanics. However, we encountered several issues that kept us from pushing this forward. The biggest problem was that the passthrough feature in v50 does not support direct playback in Unity, so we needed to build the scene and install the resulting .apk file on the headset, which unfortunately meant that live communication with the robot arm was impossible.

After discussing with Nick and the group members, we decided to simplify the communication chain. Instead of going through the robot, a screen monitor, and the VR headset, we chose to let the robot arm communicate directly with the controllers, without an additional screen as an intermediary.

After these two attempts, our concept for the game shifted. Instead of making a straightforward aiming game, we would like to target kids as our primary players and design a game that primarily helps them practice their reaction capacity, which can be beneficial for developing prompt and responsive thinking.

Therefore, our next step is to explore different controllers, such as Xbox or PlayStation controllers and Arduino joysticks, that can connect to Unity while enabling live communication with the robot arm. Meanwhile, we are considering incorporating user interface design components to build a game scene that differs from a typical shooting game and can be tailored to children. The overall structure of the game is still up for debate.

 

 

Dev 4 – In Progress |Nicky Guo

Description 

My plan for Project 2 is to attach a giant, soft toy hammer to the gripper of the robot arm, which is kept stationary, while a person studies or works at a table next to the robot arm. Whenever the person is about to fall asleep at the table from fatigue and sleepiness, the robot arm moves towards them with the giant hammer and taps them on the head to wake them up.

As progress on Project 2, I have finished the initial fabrication and design of the tool attached to the robot arm, as well as some simple drafts to help communicate my idea. Regarding tool selection, I chose a softer material for the head of the hammer, so that it can wake the user without hurting them or becoming more than they can take. For the rod of the hammer, I chose a hard material instead, so that the hammer can be attached to the gripper more stably. I used a rolled-up blanket as the head of the hammer, inserted a cardboard wallpaper-roll tube as the handle, and taped the joints to fix them. With that, my homemade hammer is complete; the next step is to mount it on the gripper for testing. The hammer-gripper connection also needs further testing to ensure stability.

Process 

[Process photos]

What I learned 

In terms of the consideration and selection of tools so far, I had to take into account some of the robot's limitations and questions of feasibility; for example, I had to choose a rigid rod to match the size and stability of the gripper on the robot arm. Because current robots cannot yet adapt to all environments and conditions, there are many limitations, and as designers we must take this into account. At the same time, I used soft material for the head of the hammer from the human (user) point of view: if this were a real “wake-up” robot, it must never cause harm to humans, and if each strike caused too much pain, the user might stop using this function or the robot altogether. We therefore need to take into account the various factors on both the robot and human sides and find a good balance.

Devs 4 – Group 1

Group members: Victoria Gottardi, Siyu Sun, Maryam Dehghani, Yueming Gao

Investigation

Can robots only simulate humans? Can robots be “othered”? Inspired by post-humanism, we see the relationship between organisms and machines as both evolutionary and simulated. Will future human pets also be mechanized? When humans interact with mechanical animals, can emotions also be generated?

The background of this project is post-humanism and the potential evolution of relationships between living organisms and machines. With the rise of robotics and artificial intelligence, it is essential to consider the implications of these advancements for the future of society.

Moreover, the question of whether machines can be “othered” raises significant questions about how we perceive and interact with technology. If we can create machines that appear to have a sense of self or otherness, it is possible that our relationship with technology will become more complex and nuanced. As we continue to develop more advanced robotics and artificial intelligence, it is crucial to consider the ethical implications of these advancements and how they may shape the future of society.

In Project 2, we aim to explore the idea of simulating the habits of cats using a mechanical arm. The project seeks to achieve a level of interaction between the robot and humans such that the robot appears like a cat playing in the human world. We will use Processing and a robot arm, establish a connection between the two, and design two basic actions that simulate the behaviour of a cat.

Process

In Devs 4, to test whether our group's project concept can be implemented, we wanted the robot to perform some simple interactions with humans. We use Processing to connect with the robot: essentially, we send data to the robot and design two basic actions that simulate a cat. We planned to test two actions first: in the first, the robot arm slaps something off a table; the second is a slower movement followed by quick hits. These actions are designed to mimic the behaviour of a cat and create a sense of interaction between the robot and the human observer. We therefore designed two states.

State 1: Simulate a cat slapping something.

[Screenshot]

 

State 2: Simulate a cat hitting something.

[Screenshot]
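As a rough sketch of this two-state structure, the Processing snippet below switches between the two cat behaviours on a key press and prints a hypothetical command string for each; in the real setup these strings would be sent to the robot over the WebSocket connection, and the poses and message format shown here are placeholders.

int state = 1;   // 1 = slap, 2 = slow approach followed by quick hits

void setup() {
  size(300, 200);
}

void draw() {
  background(0);
  fill(255);
  text("Current state: " + state, 20, 30);
}

void keyPressed() {
  // switch between the two cat behaviours
  state = (state == 1) ? 2 : 1;
  if (state == 1) {
    // one quick sideways swipe, like a cat knocking something off a table
    println("move 0.40 -0.20 0.30 fast");   // placeholder pose and speed
  } else {
    // creep in slowly, then deliver two quick hits
    println("move 0.40 0.00 0.30 slow");
    println("move 0.40 0.00 0.25 fast");
    println("move 0.40 0.00 0.30 fast");
  }
}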

The following pictures are the test screens.

[Diagram of the state movements and test photos]

Robot Script URL: https://github.com/vicisnotokay/Devs-4

Devs 4 – Exploration

Group 2 (Mona, Ricky, Shipra, Zaheen)

[GIF]

Description + Process

For this Devs, we wanted to explore the relationship between a human and an orchestra conductor in the form of a robot. A conductor during a performance serves as a messenger for the composers. Their responsibility is to understand the music and convey it through gestures (hand movements) so transparently that the musicians in the orchestra understand it perfectly.

As a starting point, we wanted to experiment with one aspect of this and then move ahead. Therefore, we tried to understand the hand movements of a conductor, how that reflects in the music composition, and whether the robot will be able to mimic those movements.
A case study we looked at while exploring was ABB's robot YuMi, in which the robot's performance was developed by capturing the movements of maestro Andrea Colombini through a process known as lead-through programming. The robot's arms were guided to follow the conductor's motions meticulously; the movements were then recorded and further fine-tuned in the software. Taking inspiration from this example, we wanted to see how a robot could be in charge of conducting a performance.

 

Final Project Video

Final Video

Exploration

We worked with state machines to create the different instances, and developed each state to perform an action on the laptop and an action/movement for the robot. We worked with three states: Pause, Low, and High. The Pause state is the nil/zero state: no action happens and no audio is played. The Low state plays the audio at a lower volume and speed, and the robot makes smaller, slower movements. The High state plays the audio at a faster pace and a higher volume, and the robot's actions/movements change accordingly.
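A minimal Processing sketch of these three states might look like the following, using the Sound library's amp() and rate() calls to change volume and playback speed. The track name is a placeholder, and in the actual piece each state change would also trigger a corresponding robot movement.

import processing.sound.*;

SoundFile track;
int state = 0;   // 0 = Pause, 1 = Low, 2 = High

void setup() {
  size(300, 200);
  track = new SoundFile(this, "soundtrack.mp3");   // placeholder file name
  track.loop();
  track.amp(0);                                    // start in Pause: silent
}

void draw() {
  background(state * 80);
  fill(255);
  text("State: " + state, 20, 30);
}

void keyPressed() {
  state = (state + 1) % 3;     // cycle Pause -> Low -> High
  if (state == 0) {            // Pause: no audio
    track.amp(0);
  } else if (state == 1) {     // Low: quieter and slower
    track.amp(0.3);
    track.rate(0.8);
  } else {                     // High: louder and faster
    track.amp(1.0);
    track.rate(1.3);
  }
  // a matching robot movement command would also be sent here
}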

 

[GIFs]

Code

Further Steps

Moving forward, we intend to work on:

  1. Creating a performance piece in which the robot controls the music composition. This might relate to how the volume, pitch, and speed of the composition change.
  2. Designing a tool, a baton, that will be provided to the robot. We want to explore how we can play with scale relative to the robot by providing a miniature-sized tool.
  3. Creating a storyboard to choreograph the scene and props, while developing the character of the robot.

Learnings/ Takeaways

This project helped us understand the first few steps of working with sending data and state machines. It opened up various opportunities for exploration, and we narrowed down on this topic because we are developing it further for Project 2. The intent was to mimic the movements of a conductor while synchronising those movements to a classical sound piece. After understanding the basics of this method, we were able to create a Processing program that sends mouse data to trigger the volume levels and speed of the soundtrack.

 

 

Devs3 – Group 1

Group members: Victoria Gottardi, Siyu Sun, Maryam Dehghani, Yueming Gao

What we are investigating

For this assignment, we are investigating what kind of code we can create with p5.js, and how that code responds to the robot's movements via the WebSocket connection. We picked p5.js because we are the most comfortable with that software, and we wanted to see what a robot-to-code connection in a web-based tool would look like. We do not currently have a finalized Project 2 concept, so we took this opportunity to get comfortable with receiving data from the robot in a program like p5.js, and we hope this experiment will give us new skills we can apply to Project 2 if needed. Lastly, conducting this experiment might give us inspiration as we begin to solidify our Project 2 concept.

 

How did it go

In Devs 3, we used p5.js to test connecting the laptop to the robotic arm through WebSocket.

First, we set up real-time data for the robot on the computer. This allows us to see the data as the robot moves, such as coordinate position, rotation angle, and speed.

Then we opened the previously prepared p5.js sketch to test. When the arm moves, the red dot moves with it, and the white particles chase the red dot. This is because we set the x coordinate of the red dot equal to the robot's x value and its y coordinate equal to the robot's y value.

In addition, p5.js has an advantage as a web-based editor: as long as the audience clicks the URL we share, they can see the interactive process on different devices.

[Images]

What we learned

In this process, we learned how to connect to the robotic arm through WebSocket. We also tried using the robotic arm's data to build interactions on different devices. There are many forms of interaction, but all of them are based on changes in data, and through code we can directly see how changes in the robotic arm's data affect the p5.js sketch. This made us feel, intuitively, how close the connection between data and our lives actually is, and knowing how to use data to create change will be very important for our future projects.

 

The code link:

  1. Robot Script: https://github.com/vicisnotokay/Devs3
  2. P5 sketch: https://editor.p5js.org/y.gao3322/sketches/h78qqfkMQ

Dev3-Group5

Team members:

Yifan Xu, Wentian Zhu, Ellie Huang, Jiamin Liu

Description of what you are investigating

In Project 2, we are investigating the potential of robot arms to enhance sports and gaming training. In the physical world, gaining expertise in sports and games usually entails repetitive exercise and access to appropriate equipment. Unfortunately, such resources may not be accessible to everyone, which is where robot arms come in as a valuable aid. By generating virtual reality scenes and offering support for sport or game-based training, robot arms can be of tremendous assistance to a wider population.

[Screenshot]

Aim Lab Mobile

Robots are commonly employed in technology-driven businesses and academic research, but they have yet to be fully integrated into people’s daily lives. In the context of shooting training, traditional aim trainers are limited by fixed tracks laid out on the ground or ceiling, which make it difficult to change or replace targets easily. In contrast, the use of a robot arm in shooting games offers unparalleled flexibility. By acting as a moving target, the robot arm’s speed and position can be quickly and easily adjusted without the need for rebuilding tracks. This makes it an ideal tool for aim training during sports and gaming activities, offering a more immersive and interactive experience for users.

To achieve our goal, we plan to incorporate Unity, a popular game development platform, to create a target and establish a motion pattern that reacts to the robot arm’s movements. This will effectively turn the robot arm into a moving target, providing a challenging and engaging game for aim training. This modality could be further designed to support a variety of sports and games, enabling us to explore how robot arms can enhance training for different activities.

Our ultimate aim is to demonstrate the effectiveness of using robot arms in sports and gaming training. We believe that the flexibility and adaptability of robot arms make them a valuable tool for people who do not have access to traditional training resources. With the help of robot arms, individuals can train in a more immersive and interactive environment, leading to better results and a more enjoyable training experience.

 

Documentation of process and results

  1. Link Unity to the robot arm

[Screenshot]

  2. Create Unity code

[Screenshots]

In Dev 3, we successfully mapped the parameters received from the robot data input scripts to the plane's XYZ coordinates, which allowed us to establish the connection between the robot arm and the virtual environment. In addition, we divided the XYZ values from the robot data by 100 so that we could track the plane within the Unity window.

  3. Test the code

[GIF]

Video:

https://youtu.be/fednh6YGnmg

Code we used

https://github.com/NarrowSpace/HumanRobotCollab_Dev3_WIP

 

How did it go?

In our recent experiment, we achieved horizontal movement after some tests. However, space constraints and not accounting for the length of the robot arm prevented us from developing code for flipping motions, which would have increased the difficulty level of the shooting game.

Moreover, we found that there is still much to learn about vertical motion and speed control. However, we are confident that we have sufficient time to enhance the code and explore these aspects further. Despite the challenges, we are encouraged by the progress we have made so far and remain committed to advancing our understanding of robot arms and human-robot interaction.

 

 

Description of what we learned

During Dev 3, we gained valuable insights that will help us in our future work with robot arms. One of the key takeaways was the importance of understanding axes and coordinate systems. This knowledge is critical to the precision of the robot arm's movements and must be applied when writing code. By gaining a deeper understanding of the axes, we were able to improve the accuracy of our robot arm movements and ensure that they matched the intended motion patterns.

In addition to learning about axes, we also improved our programming and debugging skills. Through trial and error, we were able to identify and fix bugs in the code, which helped us to develop more robust and efficient software. This experience will be valuable for future projects, as we will be better equipped to identify and fix errors in our code.

Moreover, this exploration allowed us to gain a fundamental understanding of the game development process. This involved designing the game mechanics and exploring human-robot interaction. Through this process, we gained insight into the complexities of game development and the importance of iterative testing and debugging.

 

Devs 3 – Follow Me (Group 2)

Group 2: Shipra, Mona, Ricky, and Zaheen


[GIF]

 

[Photo]

 

Description

For Devs 3, we wanted to experiment with using the data received from the robot to trigger a response. The primary goal of this experiment was to familiarise ourselves with the connections between the robot and the programs: Processing, p5.js, and TouchDesigner. During these experiments, we worked with a simple program to generate simple responses, for example moving a point on the screen according to the robot's coordinates. By adjusting these parameters, we wanted to investigate tracking the robot's movements further and creating a visual response on the screen.

Final Project Video

Robot + Screen


MindMap of the Project & Learnings

After settling on the first round of explorations, we decided to work with Processing. We programmed the code to track the coordinates of the robot and used the map() function to translate those coordinates onto the screen. The first iteration drew a path tracing the robot's movements and waypoints. Since we did not want to make another drawing tool, we built on this logic for the final idea: an eyeball that traces and follows a small dot on the screen, with both mapped from the robot's coordinates. We were also inspired by thinking about how to create a responsive, emotive screen action in reaction to the movement of the robot, that is, how to make the screen “communicate” with the robot.
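As a small sketch of this mapping logic, the Processing code below uses map() to place a dot from robot-style x/y values and draws an eye whose pupil turns toward that dot. The robot values are simulated here with the mouse, and the assumed -0.5 to 0.5 range is a placeholder for the real workspace limits.

float robotX, robotY;   // stand-ins for live values received from the robot

void setup() {
  size(600, 400);
}

void draw() {
  background(20);

  // simulate incoming robot coordinates with the mouse (placeholder range)
  robotX = map(mouseX, 0, width, -0.5, 0.5);
  robotY = map(mouseY, 0, height, -0.5, 0.5);

  // map robot coordinates onto the screen to position the dot
  float dotX = map(robotX, -0.5, 0.5, 0, width);
  float dotY = map(robotY, -0.5, 0.5, 0, height);
  noStroke();
  fill(255, 0, 0);
  ellipse(dotX, dotY, 12, 12);

  // draw an eye at the centre whose pupil looks toward the dot
  float cx = width / 2.0;
  float cy = height / 2.0;
  fill(255);
  ellipse(cx, cy, 120, 120);
  float angle = atan2(dotY - cy, dotX - cx);
  fill(0);
  ellipse(cx + cos(angle) * 35, cy + sin(angle) * 35, 40, 40);
}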

This process helped us understand the limitations of working with this technology. We realized the importance of working towards smaller goals, which helped us understand the movements, familiarize ourselves with the controls, and see how the data can be translated into something meaningful using these programs. This assignment, the first step towards Project 2, resulted in a short animation controlled by the robot, in which we tracked and mapped its movements. The intent was to create smooth transitions between the robot and the animation. This exercise was a great learning experience and will add value to our Project 2 ideation process.

 

[GIF]

 

Test and Trial: Challenges

Our goal in this step was to determine the best approach and explore our preferred way of creating an insightful interaction with the robot by conducting a series of tests. As we came up with many different ideas and concepts, we decided to follow a learning-while-making approach to find out where this interaction fits most creatively and practically. Through all of these tests and trials, we gained a deeper understanding of what it means to interact with a non-human agent, in this case a robot, and we were pushed to think about the potential effort and challenges we might face in this field, as well as the need for enough flexibility to change paths, which will ultimately lead to more insightful results. The following are some experiments with our concepts using different platforms:

  • Touch Designer

In this sketch, our initial test receiving OSC data from a cellphone was successful, but we could not figure out how to adapt it to process data from the robot. The project is as follows:

[GIF: TouchDesigner OSC test]

  • Pure Data

As with the first sketch, this one was not successful with the data received from the robot. The project details are as follows.
https://www.youtube.com/watch?v=vf84-FVJZx4

[Screenshot]

Devs 2 – Group 5

By: Victoria Gottardi, Yifan Xu, and Firaas Khan

What are We Investigating? 

For Devs 2, we wanted to experiment with setting up objects of different heights on the table and discover how the robot arm could film these objects to our precise specifications. We also wanted to experiment with how the gripper moves while holding the camera. For Project 1, we are planning to create an advertisement, so we must be very strategic with our shots. We want to make sure the robot gives us the most affordances for creating clean, usable shots. The positioning and movement of the phone and gripper will be crucial to getting what we need.

The Process

[Photos]

Here is the link to the first attempt

Here is the link to the second attempt

 

Code

Here is the link to attempt one’s code

Here is the link to attempt two’s code

 

What We Learned

For this experiment, our main takeaway was that the gripper does not spin 360 degrees, so in order to move it into the correct position, you must make sure it is not already turned to its limit in one direction. We also learned that spinning the gripper too much can take away from the shot and can be visually nauseating for the viewer. In these experiments, we also played with the duration of each waypoint and discovered that 3-4 seconds worked best; this amount of time appeared to capture our desired items effectively. However, the ideal duration of each waypoint may also depend on what we plan to show in the shot and how we want the viewer to see and understand it.