Mumbai University > EXTC > Sem 7 > Data Compression and Encryption
Marks: 10 M
Year: May 2013
Approach 1: This is appropriate for bi-level images, where each pixel is represented by one bit. Applying the principle of image compression to a bi-level image means that the immediate neighbors of a pixel P tend to be identical to P. Thus it makes sense to use run-length encoding (RLE) to compress such an image. Example: facsimile compression.
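A minimal sketch of the idea (not the actual facsimile standard, which uses Huffman-coded run lengths): encode one scan line as a sequence of run lengths, starting by convention with a run of white (0) pixels.

    def rle_encode_line(bits):
        """Run-length encode one scan line of a bi-level image.
        bits is a list of 0/1 pixels; the output is a list of run
        lengths, starting (by convention) with a run of 0s (white)."""
        runs = []
        current, length = 0, 0
        for b in bits:
            if b == current:
                length += 1
            else:
                runs.append(length)
                current, length = b, 1
        runs.append(length)
        return runs

    # A mostly-white line of 40 pixels compresses to three numbers:
    line = [0]*20 + [1]*5 + [0]*15
    print(rle_encode_line(line))   # [20, 5, 15]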
Approach 2: This is also for bi-level images. The principle of image compression tells us that the neighbors of a pixel tend to be similar to it. We can extend this principle: if the current pixel has color C (where C is either black or white), then pixels of the same color seen in the past tend to have the same immediate neighbors. This approach looks at n of the near neighbors of the current pixel and considers them an n-bit number. This number is the context of the pixel. In principle there can be 2^n contexts, but because of image redundancy we expect them to be distributed in a nonuniform way: some contexts should be common while others will be rare. This approach is used by JBIG.
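A minimal sketch of how such an n-bit context might be formed; the 4-pixel neighbor template here is a simplification for illustration, not the actual 10-pixel JBIG template.

    def context_of(img, r, c):
        """Form a 4-bit context for pixel (r, c) of a bi-level image
        from four already-seen neighbors: W, NW, N, NE.
        Pixels outside the image are treated as 0 (white)."""
        def px(i, j):
            if 0 <= i < len(img) and 0 <= j < len(img[0]):
                return img[i][j]
            return 0
        neighbors = [px(r, c-1), px(r-1, c-1), px(r-1, c), px(r-1, c+1)]
        ctx = 0
        for bit in neighbors:
            ctx = (ctx << 1) | bit
        return ctx   # an integer in 0..15, one of 2^4 possible contexts

Each context selects its own probability model for the entropy coder; counting pixel colors per context over a real image shows the nonuniform distribution the paragraph describes.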
Approach 3: Separate the grayscale image into n bi-level images (bit planes) and compress each with RLE and prefix codes. The principle of image compression seems to imply intuitively that two adjacent pixels that are similar in the grayscale image will be identical in most of the n bi-level images. This, however, is not true: adjacent values such as 127 (01111111) and 128 (10000000) differ in every bit position. The remedy is to first recode the pixel values in a code where consecutive values differ by only one bit. An example of such a code is the reflected Gray code.
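A minimal sketch of the reflected Gray code: converting from binary is a one-line operation, and consecutive values then differ in exactly one bit plane.

    def binary_to_gray(n):
        """Reflected Gray code of n: XOR n with itself shifted right."""
        return n ^ (n >> 1)

    # The troublesome pair 127/128 now differs in a single bit:
    print(format(binary_to_gray(127), '08b'))   # 01000000
    print(format(binary_to_gray(128), '08b'))   # 11000000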
Approach 4: Use the context of a pixel to predict its value. The context of a pixel is the values of some of its neighbors. We can examine some neighbors of a pixel P, compute an average A of their values, and predict that P will have the value A. The principle of image compression tells us that our prediction will be correct in most cases, almost correct in many cases, and completely wrong in a few cases. The (mostly small) prediction errors can then be encoded instead of the pixels themselves. This is used in the MLP method.
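A minimal sketch of this idea, assuming a simple three-neighbor average as the predictor; the actual MLP method uses a more elaborate context and progressive levels.

    def predict_and_residual(img):
        """Predict each pixel as the average of its W, N, and NW
        neighbors (taken as 0 outside the image) and return the
        prediction errors. The residuals cluster around 0, so they
        compress well with an entropy coder."""
        h, w = len(img), len(img[0])
        residuals = [[0]*w for _ in range(h)]
        for r in range(h):
            for c in range(w):
                west  = img[r][c-1]   if c > 0 else 0
                north = img[r-1][c]   if r > 0 else 0
                nw    = img[r-1][c-1] if r > 0 and c > 0 else 0
                prediction = (west + north + nw) // 3
                residuals[r][c] = img[r][c] - prediction
        return residuals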
Approach 5: Transform the values of the pixels and encode the transformed values. Compression is achieved by reducing or removing redundancy. The redundancy of an image is caused by the correlation between pixels, so transforming the pixels to a representation where they are decorrelated eliminates the redundancy. In a highly correlated image the pixels tend to have equiprobable values, which results in maximum entropy. If the transformed pixels are decorrelated, certain values become common, thereby having large probabilities, while others are rare. This results in small entropy. Quantizing the transformed values can produce efficient lossy image compression.
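A minimal sketch of the decorrelation idea, using a one-level Haar-style average/difference transform on pixel pairs; real codecs use the DCT or full wavelet transforms.

    def haar_pairs(row):
        """Replace each pixel pair (a, b) with its average and its
        difference. For correlated pixels a and b are close, so the
        differences cluster near 0 (small entropy); quantizing them
        gives lossy compression."""
        averages, differences = [], []
        for i in range(0, len(row) - 1, 2):
            a, b = row[i], row[i+1]
            averages.append((a + b) / 2)
            differences.append((a - b) / 2)
        return averages + differences

    print(haar_pairs([100, 102, 101, 99, 98, 100]))
    # [101.0, 100.0, 99.0, -1.0, 1.0, -1.0]  <- differences are small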
Approach 6: The principle of this approach is to separate a continuous-tone color image into three grayscale images and compress each of the three separately, using approach 3, 4, or 5. For a continuous-tone image, the principle of image compression implies that adjacent pixels have similar, although perhaps not identical, colors. An important feature of this approach is the use of a luminance-chrominance color representation: the eye is sensitive to small changes in luminance but not in chrominance. This allows the loss of considerable data in the chrominance components while making it possible to decode the image without a significant visible loss of quality.
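A minimal sketch of the separation, using the standard ITU-R BT.601 RGB-to-YCbCr weights; Y is the luminance component, while Cb and Cr are chrominance and can be subsampled or quantized heavily.

    def rgb_to_ycbcr(r, g, b):
        """Convert one RGB pixel (components 0-255) to YCbCr (BT.601).
        Y carries most of the perceptually important detail."""
        y  =  0.299    * r + 0.587    * g + 0.114    * b
        cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
        cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
        return y, cb, cr

    # A gray pixel has all its information in Y; Cb and Cr sit at 128.
    print(rgb_to_ycbcr(120, 120, 120))   # (120.0, 128.0, 128.0)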
Approach 7: A different approach is needed for discrete-tone images. Recall that such an image contains uniform regions, and a region may appear several times in the image. A good example is a screen dump: such an image consists of text and icons, and each character of text and each icon is a region that may appear several times in the image. A possible way to compress such an image is to scan it, identify regions, and find repeating regions. If a region B is identical to an already-found region A, then B can be compressed by writing a pointer to A on the compressed stream. The block decomposition method (FABD) is an example of how this approach can be implemented.
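A minimal sketch of the repeated-region idea, using fixed-size tiles and a dictionary of tiles seen so far; the actual FABD method uses a variable-size block decomposition rather than a fixed grid.

    def compress_tiles(img, tile=4):
        """Scan an image in fixed-size blocks; emit ('ptr', index) for
        a block identical to one already seen, else ('raw', block)."""
        seen = {}    # block contents -> position of first occurrence
        output = []
        h, w = len(img), len(img[0])
        for r in range(0, h - tile + 1, tile):
            for c in range(0, w - tile + 1, tile):
                block = tuple(tuple(img[r+i][c+j] for j in range(tile))
                              for i in range(tile))
                if block in seen:
                    output.append(('ptr', seen[block]))   # repeat: pointer
                else:
                    seen[block] = len(output)
                    output.append(('raw', block))         # first time: raw
        return output

On a screen dump, where the same character glyphs and icons recur many times, most tiles become short pointers.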
Approach 8: Partition the image into parts (overlapping or not) and compress it by processing the parts one by one. Suppose that the next unprocessed image part is part number 15. Try to match it with parts 1-14, which have already been processed. If part 15 can be expressed, for example, as a combination of parts 5 (scaled) and 11 (rotated), then only the few numbers that specify the combination need be saved, and part 15 can be discarded. If part 15 cannot be expressed as a combination of already-processed parts, it is declared processed and is saved in raw format. This approach is the basis of the various fractal methods for image compression. It applies the principle of image compression to image parts instead of to individual pixels. Applied this way, the principle tells us that "interesting" images have a certain amount of self-similarity: parts of the image are identical or similar to the entire image or to other parts.
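A minimal sketch of the matching step, assuming only two candidate transformations (identity and a 90-degree rotation) and an exact-match test; real fractal coders search scaled, rotated, and intensity-adjusted domain blocks for the best approximate match.

    def rotate90(part):
        """Rotate a square block (tuple of row tuples) 90 deg clockwise."""
        return tuple(zip(*part[::-1]))

    def encode_part(part, processed):
        """Try to express `part` as a transform of an earlier part.
        Returns ('ref', index, transform) on success; otherwise stores
        the part in raw format and returns ('raw', part)."""
        for idx, earlier in enumerate(processed):
            if part == earlier:
                return ('ref', idx, 'identity')
            if part == rotate90(earlier):
                return ('ref', idx, 'rot90')
        processed.append(part)    # no match: declare processed, keep raw
        return ('raw', part)

    processed = [((1, 2), (3, 4))]
    print(encode_part(((3, 1), (4, 2)), processed))   # ('ref', 0, 'rot90')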