The first modification created two temporary image arrays to store the left and right views of the 3D scene. Each view was then rendered to the back buffer of a Silicon Graphics Indigo2 workstation with the camera offset horizontally by a small distance. The magnitude of this offset is a multiple of the diagonal extent of all objects in the scene. The value of this multiple was determined empirically and is proportional to the distance from the eye to the center of the scene: as this distance decreases, the multiple, and therefore the horizontal offset, also decreases. Without this automatic adjustment, objects in the scene would have a horizontal offset too great for the brain to fuse the two views into a single image.
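The sketch below illustrates one way the offset computation and two-pass rendering described above could be written in C with OpenGL; it is not the original implementation. The callback render_scene(), the scaling constant 0.001, and the assumption that the world x axis is the horizontal screen axis are illustrative placeholders.

#include <math.h>
#include <GL/gl.h>
#include <GL/glu.h>

extern void render_scene(void);    /* hypothetical callback that draws the model */

void render_stereo_views(const double eye[3], const double center[3],
                         double scene_diagonal,
                         unsigned char *left_img, unsigned char *right_img,
                         int width, int height)
{
    /* Distance from the eye to the center of the scene. */
    double dx = center[0] - eye[0];
    double dy = center[1] - eye[1];
    double dz = center[2] - eye[2];
    double eye_distance = sqrt(dx * dx + dy * dy + dz * dz);

    /* The empirically tuned multiple grows with the eye distance (0.001 is an
       illustrative constant only); the horizontal offset is this multiple of
       the scene's diagonal extent. */
    double multiple = 0.001 * eye_distance;
    double offset   = multiple * scene_diagonal;

    /* Left eye: camera shifted half the offset to the left (world x is
       assumed to be the horizontal screen axis). */
    glDrawBuffer(GL_BACK);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eye[0] - offset / 2.0, eye[1], eye[2],
              center[0], center[1], center[2],
              0.0, 1.0, 0.0);
    render_scene();
    glReadBuffer(GL_BACK);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, left_img);

    /* Right eye: camera shifted half the offset to the right. */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(eye[0] + offset / 2.0, eye[1], eye[2],
              center[0], center[1], center[2],
              0.0, 1.0, 0.0);
    render_scene();
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, right_img);
}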
  After the red, green, and blue components of each image are captured, the green and blue components of the image representing the left eye's view are set to 0. Likewise, the red component of the right eye's image is set to 0. In this way, the red lens of the anaglyph glasses passes only pixels containing red (the left eye's view), and the blue lens passes only pixels containing green or blue (the right eye's view). Green and blue are coupled together because both colors can be seen through the blue lens of the anaglyph glasses. The two modified images are then added together, resulting in a single image whose red component comes from the left eye's view and whose green and blue components come from the right eye's view. Because the brain fuses these two views into a single image, the colors of the original model are also fused, producing a color 3D stereoscopic image. This combined image is rendered to the front (visible) buffer. When viewed through red/blue anaglyph glasses, it creates the illusion of depth.
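  A sketch of this channel-masking and composition step follows, again in C with OpenGL under the assumption that the two views were captured as packed GL_RGB, GL_UNSIGNED_BYTE arrays as in the previous sketch; the buffer names and the raster-positioning call are illustrative.

#include <GL/gl.h>

void compose_anaglyph(unsigned char *left_img, unsigned char *right_img,
                      unsigned char *anaglyph, int width, int height)
{
    int i;
    int n = width * height * 3;      /* packed RGB, one byte per component */

    for (i = 0; i < n; i += 3) {
        /* Zero green and blue in the left view, red in the right view. */
        left_img[i + 1] = 0;
        left_img[i + 2] = 0;
        right_img[i]    = 0;

        /* Adding the masked images leaves red from the left view and
           green/blue from the right view in the combined image. */
        anaglyph[i]     = left_img[i];
        anaglyph[i + 1] = right_img[i + 1];
        anaglyph[i + 2] = right_img[i + 2];
    }

    /* Draw the combined image into the front (visible) buffer.  With identity
       modelview and projection matrices, (-1, -1) is the lower-left corner
       of the viewport. */
    glDrawBuffer(GL_FRONT);
    glRasterPos2f(-1.0f, -1.0f);
    glDrawPixels(width, height, GL_RGB, GL_UNSIGNED_BYTE, anaglyph);
    glFlush();
}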