The hardware of the environment currently consists of the Fakespace Immersive Workbench, a Polhemus position/orientation-tracked probe, and a four-processor SGI Onyx. The workbench projects alternating left/right eye images onto the underside of a horizontal translucent table top. Using Crystal Eyes shutter glasses, the user is presented with a large-format stereoscopic image.
The software of the environment is designed to provide the user with the illusion that the volumetric data set resides in the physical space above the table surface, with the probe position co-located within the data set volume. The data set volume can be arbitrarily positioned relative to the table surface. To assist in visual orientation, the table surface presents, as a background image, the intersection of the data set volume with the plane of the table surface. Imaged portions of the data set appear above the table surface. The data set is imaged using a ray-casting volume rendering algorithm with a lighting model incorporating diffuse and specular highlights on high-opacity material. Categories of data values can be associated with arbitrary opacity values and with real or false colors. The lighting model is also capable of supporting shadows.
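The mapping from data-value categories to opacities and colors, followed by front-to-back compositing along each cast ray, can be sketched as follows. This is a minimal illustration, not the actual implementation: the look-up tables, step size, and early-termination threshold are assumptions, and the diffuse/specular shading and shadow terms are omitted for brevity.

```python
import numpy as np

def composite_ray(samples, opacity_lut, color_lut, step=1.0):
    """Front-to-back compositing of scalar samples along one ray.

    samples:     1-D array of byte voxel values (0-255) sampled along the ray.
    opacity_lut: 256-entry array mapping each value category to an opacity in [0, 1].
    color_lut:   256x3 array mapping each value to an RGB color (real or false).
    """
    color = np.zeros(3)
    alpha = 0.0
    for v in samples:
        a = opacity_lut[v] * step          # opacity contributed by this sample
        color += (1.0 - alpha) * a * color_lut[v]
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:                  # early ray termination: ray is opaque
            break
    return color, alpha
```

A fully shaded renderer would modulate `color_lut[v]` by diffuse and specular terms derived from the local data gradient before compositing; the accumulation structure stays the same.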
The software is structured as a client/server application. The client is responsible for user interaction, tracker communication, and visual display. The client communicates user actions to the server; the server performs all calculations and returns image updates for display. The server is multi-threaded to exploit multiprocessor architectures. It is possible to run the client and server processes on the same computer, or to run the client process on a low-end workstation connected to the server via Ethernet. The server process is machine independent and has also been implemented on an HP K200 server. The system is implemented entirely in software and does not utilize the geometry hardware of the SGI computer.
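The way a multi-threaded server can exploit a multiprocessor machine is sketched below: the image returned to the client is divided into scanline bands, each rendered independently by a worker thread. The band decomposition and the per-pixel placeholder are illustrative assumptions, standing in for the ray-casting work described above.

```python
from concurrent.futures import ThreadPoolExecutor

def render_band(y0, y1, width):
    # Placeholder per-pixel work standing in for ray casting; each band
    # depends only on its own pixel coordinates, so bands run in parallel.
    return [[(x + y) % 256 for x in range(width)] for y in range(y0, y1)]

def render_image(width, height, workers=4):
    # Split the image into contiguous scanline bands, one task per worker.
    bounds = [(i * height // workers, (i + 1) * height // workers)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        bands = pool.map(lambda b: render_band(b[0], b[1], width), bounds)
    image = []
    for band in bands:          # map() preserves band order
        image.extend(band)
    return image
```

Because each band is independent, the decomposition scales naturally to the number of available processors, which matches the description of running the server on four-processor hardware.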
The system is capable of working with a variety of data sets, including MRI and CT, as well as cryosection data such as that provided by the Visible Human Project (TM). The current implementation uses a single byte value for each voxel. Data sets with a larger dynamic range must be scaled to this size. Color or multi-modality data sets must be palettized to fit this constraint. Both the male and female data sets of the Visible Human Project are adequately covered by a 256-value color palette.
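Scaling a wider-range data set down to the single-byte-per-voxel format can be sketched as a simple linear rescale; the min/max normalization shown here is one reasonable assumption, and a real pipeline might instead use a fixed window around the modality's range of interest.

```python
import numpy as np

def scale_to_byte(volume):
    """Linearly rescale a volume with a larger dynamic range (e.g. 12-bit CT)
    into the single-byte-per-voxel format the renderer expects."""
    v = volume.astype(np.float64)
    lo, hi = v.min(), v.max()
    if hi == lo:
        # Degenerate volume: every voxel maps to zero.
        return np.zeros(volume.shape, dtype=np.uint8)
    return ((v - lo) / (hi - lo) * 255.0).round().astype(np.uint8)
```

Color and multi-modality data would go through an analogous quantization step, mapping each voxel to the index of its nearest entry in a 256-value palette rather than to a scaled scalar.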