At first, I was unsure what to do for the midterm. I tried out a bunch of code, from turning the camera into a grid to animating it, but it all seemed too little. Eventually, I looked over the tutorials, saw one on blending an image with your camera, and that's when an idea struck me: if I merged the colours and shapes from previous assignments with the video from the camera, the shapes would not only modify the video but also change its colour.
The code mainly uses an offset in the display.frag file, together with texCoordinate, to move and morph the video blended with the image. I did want to add generateGrid and randomGrid to the sketch, but after a couple of hours of trying the code in different places, I decided to stick with my original concept. The current code checks whether the mouse is pressed with the mousePressed() function; once you press the display, it turns fullscreen. After that, the code reads your mouse's position through uMousePosition and lets you move and morph the display in your preferred direction.

The code works exceptionally well when there is more colour and light in the room: the video picks this up, merges it with the shape image, and more detailed designs go into it, and vice versa. The video also picks up the shapes of the image and changes whatever is in your camera to match them, which I find particularly interesting. Getting screenshots of the changes is pretty difficult, so click the link above if you wish to see it.
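As a rough sketch of the idea in display.frag (only texCoordinate and uMousePosition come from my description above; the uniform names uVideo and uImage, the offset strength, and the 50/50 blend amount are guesses, not the exact values in my code):

```glsl
precision mediump float;

varying vec2 texCoordinate;   // interpolated texture coordinate from the vertex shader

uniform sampler2D uVideo;     // live camera feed (assumed name)
uniform sampler2D uImage;     // shape/colour image from earlier assignments (assumed name)
uniform vec2 uMousePosition;  // normalised mouse position, 0..1

void main() {
  // Offset the video lookup by the mouse position so the feed
  // slides and morphs as the cursor moves across the display.
  vec2 offset = (uMousePosition - 0.5) * 0.2;
  vec4 video  = texture2D(uVideo, fract(texCoordinate + offset));
  vec4 shapes = texture2D(uImage, texCoordinate);

  // Blend the two sources: the shape image tints the video and
  // imposes its forms on whatever the camera sees.
  gl_FragColor = mix(video, shapes, 0.5);
}
```

On the p5.js side, the sketch would pass the camera and image in with setUniform() each frame and call fullscreen() inside mousePressed(); the exact wiring depends on how the sketch is set up.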