

Next: Methods Up: Title Page Previous: Full Text Index Index: Full Text Index Contents: Conference Page 

Introduction

      Digital color images can be stored most simply as a sequential bit-mapped field of pixel values. For color images this approach is typical of the 'raw' format, which encodes each pixel as a 24-bit binary word, 8 bits for each of the three channels red, green and blue (RGB). An image such as a full-color VHD slice, with a spatial resolution of 2048 by 1216 pixels, occupies about 7.5 MB. Since files of this size are difficult to manage in on-line browsing of the VHD repository, compressed versions of the VHD images are more suitable. Although great care must be taken not to lose relevant image content, and lossless compression would of course be preferable, lossy compression is tolerable: at low compression ratios visual perception cannot discriminate the original image from its compressed version, and even at high compression ratios the distortion may be acceptable because large structures can still be recognized. Moreover, a low-resolution approximation may be sufficient for the user to decide whether the image should be decoded or retrieved at the original full size. Here we propose to upgrade the services already offered by the VISIBLE HUMAN DATASET - MILANO MIRROR SITE with on-line consultation of the VHD performed on a lossy compressed version of the images. A suitable solution is presented, based on progressive compression through Wavelet Multiresolution Decomposition and Embedded Zerotree (EZ) coding.
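As a check on the figures above, the raw size of a 24-bit RGB image follows directly from its resolution; the snippet below assumes the nominal 2048 by 1216 full-colour VHD slice size:

```python
# Raw storage for a 24-bit RGB image: 3 bytes per pixel.
# The 2048 x 1216 resolution is the nominal VHD full-colour slice size.
width, height = 2048, 1216
bytes_per_pixel = 3  # 8 bits each for R, G and B
size_bytes = width * height * bytes_per_pixel
print(size_bytes)        # 7471104
print(size_bytes / 1e6)  # ~7.5 MB
```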

      Many solutions to image compression have been proposed; they range from bit reduction and image transformation followed by bit reduction (lossless techniques) to lossy approaches which, after image transformation, exploit scalar or vector quantization ([2]) based on an entropic model. Operating solely on bits, the bit-mapped field can be compressed without any loss of information through dedicated algorithms such as run-length encoding (RLE). This algorithm describes bit sequences in a reduced space in a simpler way: for instance, a run of zeros is replaced by a zero followed by the number of its occurrences. Image compression ratios between 2 and 4 can be obtained. These ratios are not satisfactory for large images, however, and other techniques have to be used. All the advanced compression techniques try to describe the image content in a domain different from the color domain, transforming the pixel values so as to capture most of the information in a few coefficients. The objective of any image transform is indeed to decorrelate the pixel values by projecting the original image onto a dual space through a set of basis functions, which localizes most of the information content into a reduced set of coefficients. Since the information content of an image is localized both in the spatial domain and in the frequency domain, transforms able to recover precise descriptors of these features are required. They are based on three main steps, grouped as follows:

     1. transformation of the pixel values into a dual domain;
     2. quantization of the transform coefficients;
     3. entropy coding of the quantized values.

     The above three steps together constitute the analysis phase; retrieving the image pixel values is called the synthesis phase. Compression reduces the bit rate for transmission or storage while keeping acceptable fidelity with the original image. Different techniques can be applied for coding the image; common to all is the compression ratio, which determines the effective quality of the coded image.
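As a concrete illustration of the lossless run-length idea mentioned earlier, the following Python sketch collapses runs of identical values into (value, count) pairs. It is a simplified byte-level variant for illustration, not the exact bit-level scheme used by any particular format:

```python
def rle_encode(data):
    # Collapse each run of identical values into a (value, count) pair.
    encoded = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        encoded.append((data[i], j - i))
        i = j
    return encoded

def rle_decode(pairs):
    # Expand each (value, count) pair back into a run: fully lossless.
    out = []
    for value, count in pairs:
        out.extend([value] * count)
    return out

row = [0] * 12 + [255] * 4 + [0] * 16
packed = rle_encode(row)  # [(0, 12), (255, 4), (0, 16)]
assert rle_decode(packed) == row
```

On data dominated by long runs, as in this row, 32 samples collapse to 3 pairs; on noisy data the pairs can outnumber the samples, which is why RLE alone rarely exceeds the modest ratios quoted above.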

     JPEG compression, adopting the discrete cosine transform (a linear mapping) computed through fast Fourier transform algorithms, performs a projection into a spatial-frequency-like domain. Coding the obtained coefficients without loss of information allows a compression ratio of about 5.5 to be reached. Nevertheless, the transform is not applied to the entire image, which would be very time consuming and would perform poorly. Suppose a DCT of an image yields a set of high frequencies with low amplitude. If the DCT is applied to the whole image, no spatial localization can be recovered, and canceling out this set of frequencies can lead to high distortion in image regions where the low frequencies also have low amplitude. To overcome these two drawbacks, the JPEG algorithm applies the DCT to 8x8 blocks, which both speeds up the procedure and localizes the frequency content. For each block a set of coefficients is obtained, and according to entropy rules the least significant ones can be discarded. On the other hand, the phenomenon of blockiness cannot be avoided when the image is deeply compressed: high discontinuities are generated at the edges of each block (see figure 1). Another drawback of Fourier analysis is that the information is split in the transform domain when localized features are present in the image, i.e. the energy of discontinuous signals is dispersed across many coefficients. Consider an image consisting of a black background with a few white spots. Capturing these localized features in the dual domain requires many frequency components: although very few coefficients are needed to encode the image directly in the image domain, many more are required in the frequency domain.
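The energy compaction that JPEG exploits can be illustrated with a direct, unoptimized implementation of the 2-D DCT-II on a single 8x8 block. This is a sketch of the textbook definition, not the fast factorized transform real encoders use; on a smooth block, almost all of the energy lands in the low-frequency corner:

```python
import math

N = 8  # JPEG operates on 8x8 pixel blocks

def alpha(k):
    # Orthonormal DCT-II scale factors.
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def dct2(block):
    # 2-D DCT-II of an NxN block, straight from the definition (O(N^4)).
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A smooth gradient block: energy concentrates in the low-frequency corner.
block = [[float(x + y) for y in range(N)] for x in range(N)]
coeffs = dct2(block)
total = sum(c * c for row in coeffs for c in row)
low = sum(coeffs[u][v] ** 2 for u in range(2) for v in range(2))
# low / total is close to 1: a handful of coefficients carry nearly
# all the energy, so the rest can be quantized away with little distortion.
```

The same calculation on a block containing a sharp edge spreads the energy over many coefficients, which is exactly the Fourier-analysis drawback discussed above.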

        Wavelets ([1], [6], [7]) have been proposed in this field because of their appealing property of representing both smooth areas and discontinuities compactly in the transform domain, and they promise better results than JPEG in terms of image distortion and compression ratio.
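A minimal sketch of this compactness property uses one analysis step of the Haar wavelet, the simplest member of the family (shown only for illustration; the decomposition used in this work may employ different filters). A step edge produces a single nonzero detail coefficient, localizing the discontinuity instead of spreading its energy over many frequencies:

```python
def haar_step(signal):
    # One level of the Haar wavelet transform: pairwise averages give the
    # coarse approximation, pairwise differences give the detail coefficients.
    averages = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    details  = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return averages, details

# A step edge: constant everywhere except one discontinuity.
signal = [4.0] * 7 + [10.0] * 9
approx, detail = haar_step(signal)
# Only the pair straddling the edge yields a nonzero detail coefficient:
# detail == [0.0, 0.0, 0.0, -3.0, 0.0, 0.0, 0.0, 0.0]
```

Iterating the same step on the coarse sequence yields the multiresolution decomposition; zerotree coders such as EZ then exploit the fact that most detail coefficients are near zero.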

