
Next: Methods Up: Title Page Previous: Full Text Index Index: Full Text Index Contents: Conference Page 

Introduction

      With the advance of virtual endoscopy, mapping gray-scale datasets of medical images into Red-Green-Blue (RGB) space has become highly desirable for visualization, since it can greatly aid patient diagnosis. For example, navigating through a color-textured colon model helps detect polyps and inspect abnormalities in virtual colonoscopy [Hong'1995]. The 3D color texture information of the Visible Human dataset, supported by the National Library of Medicine, provides a rich resource for such mapping onto gray-scale images from CT, MR (magnetic resonance), ultrasound, etc. The goal of this work is to achieve an accurate mapping from Visible Human color textures to gray-scale CT images in virtual colonoscopy.

      In the computer graphics community, great effort has been devoted to developing efficient texture-mapping algorithms [1]. Texture mapping is usually implemented in a hardware engine. To take advantage of this accelerator, a 3D geometry model (based on polygons) is extracted from a CT dataset by the marching cubes algorithm [2]. A 2D texture can then be mapped onto the 3D geometry model. This strategy has recently been applied to texture modeling and sampling tasks [3, 4, 5, 6].
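The iso-surface extraction step can be illustrated with its 2D analogue, marching squares: each grid cell is classified by which corners lie above the iso-level, and crossing points are placed on cell edges by linear interpolation. The toy "CT slice" below is an assumption for illustration, not data from the paper:

```python
import numpy as np

def interp_crossing(level, pa, pb, va, vb):
    """Linearly interpolate the iso-level crossing between corners pa, pb."""
    t = (level - va) / (vb - va)
    return (pa[0] + t * (pb[0] - pa[0]),
            pa[1] + t * (pb[1] - pa[1]))

def marching_squares(field, level):
    """Extract iso-contour line segments from a 2D scalar field
    (2D analogue of the marching cubes idea)."""
    segments = []
    rows, cols = field.shape
    for i in range(rows - 1):
        for j in range(cols - 1):
            corners = [(i, j), (i, j + 1), (i + 1, j + 1), (i + 1, j)]
            vals = [field[p] for p in corners]
            inside = [v >= level for v in vals]
            if all(inside) or not any(inside):
                continue  # cell entirely above or below: no crossing
            pts = []
            for k in range(4):
                a, b = k, (k + 1) % 4
                if inside[a] != inside[b]:
                    pts.append(interp_crossing(level, corners[a], corners[b],
                                               vals[a], vals[b]))
            # ambiguous (saddle) cells yield 4 points; pair them naively
            for k in range(0, len(pts) - 1, 2):
                segments.append((pts[k], pts[k + 1]))
    return segments

# toy "CT slice": a bright disk (radius 5) on a dark background
y, x = np.mgrid[0:16, 0:16]
slice_ = ((x - 8) ** 2 + (y - 8) ** 2 < 25).astype(float)
contour = marching_squares(slice_, 0.5)
```

The same per-cell classification and edge interpolation generalize to 3D, where each voxel cell is matched against the marching cubes case table to emit triangles instead of line segments.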

      The main drawback of this strategy is that a 2D texture does not represent the 3D features of the CT dataset. Consequently, the produced images do not accurately reflect the information in the dataset. To mitigate this drawback, effort has been devoted to transfer functions that produce 3D texture directly from the CT dataset. The transfer function must be created manually by the user, a time-consuming task that requires great effort to tune the transfer-function parameters. Although a genetic algorithm was recently proposed to determine the parameters automatically, the results are far from satisfactory.
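A transfer function of this kind is commonly a piecewise-linear lookup from intensity to color and opacity. The sketch below shows the idea; the control points (Hounsfield-unit thresholds, opacity and gray values) are illustrative assumptions, not values from this work:

```python
import numpy as np

# Hypothetical control points: intensity (Hounsfield units) -> opacity, gray.
# Air is kept fully transparent; dense tissue/bone becomes opaque and bright.
ct_points      = np.array([-1000.0, -500.0, 0.0, 300.0, 1000.0])
opacity_points = np.array([0.0, 0.0, 0.2, 0.8, 1.0])
gray_points    = np.array([0.0, 0.1, 0.5, 0.9, 1.0])

def transfer_function(intensity):
    """Map CT intensities to (gray, alpha) by piecewise-linear interpolation."""
    alpha = np.interp(intensity, ct_points, opacity_points)
    gray = np.interp(intensity, ct_points, gray_points)
    return gray, alpha

g, a = transfer_function(np.array([-1000.0, -750.0, 300.0]))
```

Manually adjusting such control points until the rendered image looks right is precisely the tedious process the paragraph above describes.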

      We propose an alternative approach that takes advantage of recent developments in computer vision, image processing, artificial intelligence, computer graphics, and visualization. The main goal of our approach is to map the natural texture of tissue (from the Visible Human dataset or from images provided by optical endoscopy) onto the corresponding tissue segmented from a dataset. The approach consists of five phases (texture segmentation, analysis, modeling, matching, and synthesis) and is called SAMMS. Each phase has its own focus and is briefly described below: (1) Texture segmentation from the Visible Human dataset is based on a 3D adaptive region-growing method. (2) Texture analysis adopts a second-generation wavelet transform to extract features from the segmented textures. (3) Texture modeling uses multi-scale statistical theory to model the extracted features. (4) Texture matching adopts cross entropy to distinguish material patterns and matches the modeled features to the segmented classes of the CT dataset. In the CT segmentation, the adaptive region growing selects a seed of low intensity inside the colon (air space). The seed then grows toward the colon wall, fluid, or stool. This method can automatically detect a mixture of materials and assign a mixture percentage to each material by Bayesian estimation. (5) Once two textures are matched, we perform texture synthesis. In material-mixture regions we use a color-composition technique, assigning the transparency value of each material according to its mixture percentage.
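The seeded growth in phase (4) can be sketched in 2D. The acceptance test below (intensity within a tolerance of the running region mean) is a simple stand-in for the paper's 3D adaptive criterion, and the toy slice is an assumed example:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, accepting 4-connected neighbors whose
    intensity lies within `tol` of the running region mean."""
    grown = np.zeros(img.shape, dtype=bool)
    grown[seed] = True
    mean, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]
                    and not grown[ni, nj]
                    and abs(img[ni, nj] - mean) <= tol):
                grown[ni, nj] = True
                # update the region mean adaptively as the region grows
                mean = (mean * count + img[ni, nj]) / (count + 1)
                count += 1
                queue.append((ni, nj))
    return grown

# toy slice: a low-intensity "air" lumen (0) inside a bright "wall" (100)
slice_ = np.full((10, 10), 100.0)
slice_[3:7, 3:7] = 0.0
mask = region_grow(slice_, seed=(5, 5), tol=30.0)
```

Growth starts from the dark air space and stops at the colon wall because the wall intensity falls outside the tolerance; the paper's method additionally resolves partial-volume voxels at that boundary into material-mixture percentages by Bayesian estimation.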

      In the following, we briefly describe the five phases and show the viability of reproducing color tissue textures. We then present preliminary results on the Visible Female dataset, supported by the National Library of Medicine.

