midterm project

I forgot to post when I finished the midterm, and somehow lost the draft. I’ll try to reconstruct my process.

When I first started on the p5 shaders midterm, I wanted to make a digital-mirror version of the screen melt from Doom 1 and 2. When you start a new level in those games, the previous screen becomes a foreground image that melts away in a series of offset descending columns, revealing the game in the background.

[Image: ScreenWipe Melt.png]

At first I thought it would be a fun application of what we’d learned, but I forgot that John Carmack is a literal wizard.
I started by creating a grid from our image like we’d learned, but after indexing the grid locations, I’d undo the tiling to get the whole image back. That way, individual segments of the image could still be altered independently.
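In shader terms, the idea was roughly this (a sketch with invented names and constants, not the actual project code): compute a column index from the texture coordinate, but keep sampling the full, untiled image.

```glsl
// Fragment-shader sketch. Uniform/varying names are assumptions.
precision mediump float;

uniform sampler2D uTex;   // the camera capture
varying vec2 vTexCoord;

const float NUM_COLS = 40.0; // arbitrary column count

void main() {
  // Index the column this fragment falls in, without actually tiling:
  float col = floor(vTexCoord.x * NUM_COLS);
  // Any per-column change (like a vertical drop) can key off `col`,
  // while the lookup itself still uses the untiled coordinate.
  gl_FragColor = texture2D(uTex, vTexCoord);
}
```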
I had two separate layers of the same camera capture, because I wanted the screen wipe to reveal a permanent change to the base image, like a discolouration or distortion. I think this mostly worked, but once the columns had descended offscreen there would be some totally unintended distortion: I think the final pixel of each column was being stretched out.
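That stretched-out trailing pixel sounds like texture clamping: once the shifted lookup coordinate leaves the 0–1 range, texture2D just repeats the edge row. One possible guard (a sketch, assuming separate foreground and background samplers) is to fall back to the background whenever the shifted coordinate goes out of bounds:

```glsl
precision mediump float;

uniform sampler2D uFront;  // melting foreground (assumed name)
uniform sampler2D uBack;   // permanently altered background (assumed name)
uniform float uDrop;       // how far this fragment's column has fallen
varying vec2 vTexCoord;

void main() {
  vec2 shifted = vec2(vTexCoord.x, vTexCoord.y + uDrop);
  if (shifted.y > 1.0) {
    // Past the edge: show the background instead of letting
    // clamp-to-edge smear the foreground's last row of pixels.
    gl_FragColor = texture2D(uBack, vTexCoord);
  } else {
    gl_FragColor = texture2D(uFront, shifted);
  }
}
```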

The part that caused me trouble was the screen wipe itself.

I wanted each column to descend at its own time, so I used time as the seed for a random function. This seed would decide which column would begin dropping next each (arbitrary unit of time), and it would also decide how the background would be altered.

First issue: the seed never refreshed, so my random function never changed the pattern in which columns dropped.
Second issue: whenever I got the timed offset of columns working, each column would immediately drop to the current height of the first dropped column. My vertical offset would NEVER work.
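For what it’s worth, the usual way to get per-column offsets (a sketch, not what I actually wrote) is to give each column its own start delay and measure the drop distance from that delay, so a column that hasn’t started yet stays at zero instead of snapping to the global drop height:

```glsl
precision mediump float;

uniform float uTime;
varying vec2 vTexCoord;

const float NUM_COLS = 40.0; // arbitrary

// Cheap hash so the randomness varies per column
// instead of coming from a seed that never refreshes.
float hash(float n) {
  return fract(sin(n * 12.9898) * 43758.5453);
}

void main() {
  float col   = floor(vTexCoord.x * NUM_COLS);
  float delay = hash(col) * 2.0;                 // each column starts at its own time
  float drop  = max(0.0, uTime - delay) * 0.25;  // stays 0 until the delay passes
  // ...apply `drop` to the foreground lookup here...
  gl_FragColor = vec4(vec3(drop), 1.0);          // visualize the offset
}
```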

I don’t think I ever got around to implementing the colour changes in the background, or if I did, they weren’t carrying over properly. I remember mix() yelling at me about the data types being passed in, and about too many components coming in and out.
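Those mix() complaints were probably about mismatched argument types: the first two arguments must be the same type, and the third is either that same type or a float. A quick sketch of what GLSL accepts and rejects:

```glsl
vec3 a = vec3(1.0, 0.0, 0.0);
vec3 b = vec3(0.0, 0.0, 1.0);

vec3 ok1 = mix(a, b, 0.5);             // fine: vec3, vec3, float
vec3 ok2 = mix(a, b, vec3(0.5));       // fine: component-wise weights
// vec3 bad  = mix(a, vec4(b, 1.0), 0.5); // error: first two types must match
// float bad2 = mix(a, b, 0.5);           // error: result is vec3, not float
```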

I took a look at the original code for the effect, and at a replication by Shadertoy user neur0sys. neur0sys’ code helped me structure mine a bit better, and I think it’s how I managed to get a limited version of the offset working at the speed I wanted. Ultimately, though, the original plan led to a dead end. Doom’s pseudo-RNG was a big list of numbers in an array, and neur0sys had done the same. Quickly dumping all those pseudorandom values into an array wasn’t possible in the version of GLSL we were using: our arrays were extremely limited, and I think I read that’s more or less the case in every version of GLSL.
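GLSL ES 1.00 really does make that approach painful: in a fragment shader, arrays can only be indexed with constant expressions (and loop counters), so replaying Doom’s 256-entry table by dynamic index isn’t really on the table. One classic workaround, which I didn’t try, is to bake the table into a 256×1 texture uploaded from the sketch and sample it instead of indexing an array:

```glsl
precision mediump float;

uniform sampler2D uRndTable; // assumed: a 256x1 texture holding the table

float tableLookup(float i) {
  // Sample the center of texel i; the red channel holds the table value.
  float u = (mod(i, 256.0) + 0.5) / 256.0;
  return texture2D(uRndTable, vec2(u, 0.5)).r;
}
```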

I wasn’t about to learn #version 300 es while this thing was already late, so I pivoted to developing an interesting effect I’d stumbled upon while trying to figure things out.


At one point I wanted to make the dropping columns look like they were dragging the permanent image along like dough or something, so I pretty much started hooking everything up to sine waves. I don’t think I got my intended effect, but I found a distortion pattern that looked a bit like holographic plastic, or certain patterns of glass. When it came time to try something other than the screen melt, I repurposed this idea. The columns wouldn’t fall out entirely; they’d bob up and down in place. The distortion happening within the columns themselves gives the appearance of more columns while also blurring the image overall, and the base image slowly distorts horizontally. It makes for a cool blurry, disorienting effect: kind of incognito, but you can still tell what’s going on. It’s like when the secret Illuminati guys decide they still need to keep their webcams on for some reason.
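The bobbing version can be sketched roughly like this (all the constants here are invented; the real values were tuned by feel):

```glsl
precision mediump float;

uniform sampler2D uTex;
uniform float uTime;
varying vec2 vTexCoord;

const float NUM_COLS = 40.0; // arbitrary

void main() {
  float col = floor(vTexCoord.x * NUM_COLS);
  // Columns bob up and down in place instead of falling away:
  float bob = sin(uTime * 2.0 + col * 0.7) * 0.02;
  // A slow horizontal drift on the base image:
  float drift = sin(uTime * 0.3 + vTexCoord.y * 6.0) * 0.01;
  gl_FragColor = texture2D(uTex, vec2(vTexCoord.x + drift, vTexCoord.y + bob));
}
```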

Here’s the link again.
