Computer Graphics Lab 1: Fractals and Blue Screening

Ben Mitchell and Zach Pezzementi

Lab Description

There were two parts to this assignment. The first was manipulating images taken against a blue-screen background in order to insert the subject of each picture into a new background. The second was generating visualizations of fractal sets derived from the mathematical equations for attractors.

Part One: Bluescreening

In the first part, an image of each student was taken with a digital camera. In order to extract the student from the background, it was necessary to determine which pixels were student and which were background. This was done by transforming the image into r-g chromaticity space and examining the distance from each pixel to the average color of three pixels known to be background. Pixels whose distance to this color fell below a given threshold were classified as background; the remainder were foreground.
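The classification step above can be sketched as follows. This is a minimal illustration, not the lab's actual code; the function names (`chromaticity`, `mean_chromaticity`, `is_background`) and the threshold value are hypothetical.

```python
def chromaticity(pixel):
    """Map an (R, G, B) pixel into r-g chromaticity space."""
    r, g, b = pixel
    total = r + g + b
    if total == 0:              # avoid division by zero on pure black
        return (0.0, 0.0)
    return (r / total, g / total)

def mean_chromaticity(samples):
    """Average chromaticity of a few known-background pixels."""
    chromas = [chromaticity(p) for p in samples]
    n = len(chromas)
    return (sum(c[0] for c in chromas) / n,
            sum(c[1] for c in chromas) / n)

def is_background(pixel, bg_chroma, threshold=0.05):
    """True if the pixel's chromaticity lies within `threshold`
    of the average background chromaticity."""
    pr, pg = chromaticity(pixel)
    br, bg = bg_chroma
    dist = ((pr - br) ** 2 + (pg - bg) ** 2) ** 0.5
    return dist < threshold
```

Because chromaticity divides out overall intensity, a dim blue pixel and a bright blue pixel map to roughly the same point, which is what makes this more robust to shadows than thresholding in raw RGB.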

Some basic image manipulation functions were written for resizing, rotating, and mirroring images. One or more of these functions was applied to the input image, allowing it to be altered to better fit into a novel background.
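Mirroring and 90-degree rotation, for example, are pure pixel rearrangements. A minimal sketch in Python (hypothetical names; images here are just lists of rows of pixel values):

```python
def mirror_horizontal(img):
    """Mirror an image left-to-right by reversing each row."""
    return [list(reversed(row)) for row in img]

def rotate_90_cw(img):
    """Rotate 90 degrees clockwise: reverse the row order,
    then transpose rows and columns."""
    return [list(row) for row in zip(*img[::-1])]
```

Both operations move every pixel exactly once and need no interpolation, unlike resizing.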

At this point, an unrelated background image was read in, and the transformed foreground from the student image was overlaid onto it at a specified location. Feathering was applied to the edges to eliminate artifacts from the background-separation process and to make the foreground look more like a part of the background image. The feathering was based on a grassfire-like two-pass algorithm that computes each foreground pixel's distance from the background. An alpha-blending coefficient based on this distance, together with a command-line "feathering" parameter, determined how much the foreground and background were each weighted in the output image.
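The grassfire distance and the resulting blend can be sketched as below. This is an illustrative reconstruction under the description above, not the lab's code; the names and the linear alpha ramp are assumptions.

```python
def grassfire_distance(mask):
    """Two-pass grassfire/chamfer distance transform.
    mask[y][x] is True for foreground pixels; returns each pixel's
    4-connected distance (in pixels) to the nearest background pixel."""
    h, w = len(mask), len(mask[0])
    INF = h * w
    dist = [[INF if mask[y][x] else 0 for x in range(w)] for y in range(h)]
    # Pass 1: top-left to bottom-right, propagating from above and left.
    for y in range(h):
        for x in range(w):
            if y > 0:
                dist[y][x] = min(dist[y][x], dist[y - 1][x] + 1)
            if x > 0:
                dist[y][x] = min(dist[y][x], dist[y][x - 1] + 1)
    # Pass 2: bottom-right to top-left, propagating from below and right.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                dist[y][x] = min(dist[y][x], dist[y + 1][x] + 1)
            if x < w - 1:
                dist[y][x] = min(dist[y][x], dist[y][x + 1] + 1)
    return dist

def blend(fg, bg, dist, feather):
    """Alpha-blend one pixel: alpha ramps linearly from 0 at the mask
    edge to 1 at `feather` pixels inside the foreground."""
    alpha = min(dist / feather, 1.0) if feather > 0 else 1.0
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))
```

Two sweeps suffice because any shortest 4-connected path to the background can be split into a monotone "down-right" part and a monotone "up-left" part, each handled by one pass.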

One of the problems we had was blue spill in areas of shadow; working in r-g space helped, but in very dark regions even this breaks down. Also, even with good feathering, it was still evident that the two images had not originally been one. This was due to several factors outside our control, such as light-source direction, intensity, and color. While it might be possible to adjust for the latter two, the first is impossible to correct without a full 3-D model of the subject of the image.

Part Two: Fractals

In the second part of the lab, we examined mathematical functions called attractors. Specifically, we looked at two families of sets they generate, Julia sets and the Mandelbrot set. A (filled) Julia set is the set of points in the complex plane whose orbits under the simplest such attractor, the iteration z → z² + C, remain bounded rather than climbing to infinity.

We generated images of a Julia set by defining a window in the complex plane to examine and a complex constant C that defines the set. The real axis was mapped to X and the imaginary axis to Y. For each point, the iteration defined by X, Y, and C was evaluated. If the value went to infinity, the pixel at that image coordinate was set to a shade of blue whose brightness was proportional to how long it took the iterate to exceed a threshold indicating it would climb infinitely. If it did not go to infinity, the pixel was set to black.
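The per-pixel computation can be sketched as follows. This is a hedged reconstruction from the description above; the iteration cap of 256, the bailout radius of 2, and the function names are assumptions, not the lab's actual parameters.

```python
def julia_escape_time(z, c, max_iter=256, bailout=2.0):
    """Iterate z <- z^2 + c starting from the given point; return the
    iteration at which |z| exceeds the bailout radius, or max_iter if
    the orbit never escapes (the point is treated as in the set)."""
    for i in range(max_iter):
        if abs(z) > bailout:
            return i
        z = z * z + c
    return max_iter

def julia_pixel(x, y, width, height, xmin, xmax, ymin, ymax, c):
    """Map image coordinates into the complex-plane window (real axis
    to X, imaginary axis to Y) and shade the pixel blue in proportion
    to escape time; non-escaping points are black."""
    re = xmin + (xmax - xmin) * x / (width - 1)
    im = ymin + (ymax - ymin) * y / (height - 1)
    t = julia_escape_time(complex(re, im), c)
    if t == 256:                    # never escaped: inside the set
        return (0, 0, 0)
    return (0, 0, min(255, t))      # brighter blue = slower escape
```

A bailout radius of 2 works because once |z| > 2 (and |z| ≥ |C|), the magnitude grows monotonically, so the orbit is guaranteed to diverge.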

The Mandelbrot set is defined point by point: a point belongs to the set if the Julia iteration with C equal to the complex coordinates of that point, evaluated starting from the origin, stays bounded rather than going to infinity. Once again, the image plane is the complex plane, with X real and Y imaginary. The pixels in the Mandelbrot image were again colored in proportion to the time it took the iteration to go to infinity. In this case, first the blue channel increased; once it had reached full saturation, the red channel, and then the green channel, increased in the same fashion.
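The membership test and the channel-cascade coloring might look like this. As before this is an illustrative sketch; the cap of 768 iterations (3 × 255, one ramp per channel) and the function names are hypothetical.

```python
def mandelbrot_escape_time(c, max_iter=768, bailout=2.0):
    """Iterate z <- z^2 + c starting from z = 0; return the escape
    iteration, or max_iter if the orbit stays bounded (c is in the
    Mandelbrot set)."""
    z = 0j
    for i in range(max_iter):
        if abs(z) > bailout:
            return i
        z = z * z + c
    return max_iter

def cascade_color(t, max_iter=768):
    """Fill the blue channel first, then red, then green, each in
    proportion to escape time, as described above."""
    if t == max_iter:
        return (0, 0, 0)            # inside the set: black
    b = min(t, 255)                 # ramp 1: blue
    r = min(max(t - 255, 0), 255)   # ramp 2: red, after blue saturates
    g = min(max(t - 510, 0), 255)   # ramp 3: green, after red saturates
    return (r, g, b)
```

Chaining the three ramps gives a smooth blue → magenta → white progression as escape times grow near the boundary of the set.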

Both types of image were generated with 4-sample-per-pixel jittered super-sampling, a form of anti-aliasing.
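Jittered super-sampling of a single pixel can be sketched as below (hypothetical names; `shade` stands in for whichever per-point fractal evaluation is being anti-aliased):

```python
import random

def jitter_sample(x, y, shade, samples=4, rng=None):
    """Anti-alias pixel (x, y) by averaging `samples` evaluations of
    shade() at randomly jittered sub-pixel positions."""
    rng = rng or random.Random(0)   # fixed seed for reproducibility
    total = 0.0
    for _ in range(samples):
        sx = x + rng.random()       # random offset in [0, 1) inside the pixel
        sy = y + rng.random()
        total += shade(sx, sy)
    return total / samples
```

The random offsets trade the regular aliasing patterns of fixed-grid sampling for less objectionable noise, which is the "spattering" effect mentioned in question 7 below.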

Lab Questions

1.

The pictures were initially 2560x1920 pixels. This was determined by opening them in xview, and reading the size off the terminal.

2.

The origin is in the upper left hand corner, with x increasing to the right, and y increasing down. This was determined by using prior knowledge.

3.

XV has the same coordinate space as our PPMs. Middle clicking in XV gives you pixel information including coordinates, and color in several color spaces.

4.

See above.

5.

We simply tried different insertion sizes and coordinates (specified on the command line) to see how they looked, and used the set that looked best. Mirroring and rotation were done by pixel-by-pixel swapping. Shrinking used 4-corner projection of pixels, with each new pixel being a weighted combination of old pixels. We also wrote an algorithm for enlarging an image, but had no chance to use it given the very high resolution of the original images.

6.

Ben liked the recursive tail at the right-hand extreme of the set. Zach preferred the small gap at the right-hand edge of the main lobe. It is a point of contention between us. We don't like to talk about it.

7.

We tried simple black and white, and we tried a couple of variations of intensity relative to time until divergence. We thought that the blue for the Julia sets was pretty, and liked the "rim of fire" we got from the spatterings of color at the inner rim of the Mandelbrot set (which come from the probabilistic method of super-sampling). We thought alternating colors based on modulus math was ugly, and excessively Seussian. Also, flat black and white proved kind of boring.

Lab Extensions

We implemented feathering and transparency. Also, we put a fractal watermark on the Godzilla picture. This represents our assurance that it is, in fact, the real thing, and not a cheap fake.