Explain Bi-level Image Compression Standards
1 Answer

As more and more documents are handled in electronic form, efficient methods for compressing bi-level images (those with only 1-bit, black-and-white pixels) are much in demand. A familiar example is fax images.

Each scan line in the image is treated as a sequence of alternating runs of black and white pixels. However, considering the neighboring pixels and the nature of the data to be coded allows much more efficient algorithms to be constructed.
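The run-based view of a scan line can be sketched as a simple run-length pass; this is an illustrative helper, not part of any JBIG bitstream format:

```python
def run_lengths(scan_line):
    """Collapse a bi-level scan line (0 = white, 1 = black) into runs.

    Returns a list of (pixel_value, run_length) pairs, the raw material
    that run-based fax coders entropy-code.
    """
    runs = []
    for pixel in scan_line:
        if runs and runs[-1][0] == pixel:
            runs[-1] = (pixel, runs[-1][1] + 1)
        else:
            runs.append((pixel, 1))
    return runs

print(run_lengths([0, 0, 0, 1, 1, 0, 0, 0, 0, 1]))
# [(0, 3), (1, 2), (0, 4), (1, 1)]
```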

The JBIG Standard

JBIG is the coding standard recommended by the Joint Bi-level Image Experts Group for binary images. This lossless compression standard is used primarily to code scanned images of printed or handwritten text, computer-generated text, and facsimile transmissions.

It offers progressive encoding and decoding capability, in the sense that the resulting bitstream contains a set of progressively higher-resolution images. The standard can also be used to code grayscale and color images by coding each bit-plane independently, but this is not its main objective.
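Coding each bit-plane independently simply means splitting every pixel's binary representation into separate bi-level images. A minimal sketch of that decomposition (the plane count of 8 assumes 8-bit grayscale):

```python
def bit_planes(gray_image, bits=8):
    """Split a grayscale image (list of rows of ints) into `bits` bi-level planes.

    Plane k holds bit k of every pixel (bit 0 = least significant); each
    plane is a binary image that a bi-level coder can compress on its own.
    """
    return [
        [[(pixel >> k) & 1 for pixel in row] for row in gray_image]
        for k in range(bits)
    ]

image = [[0, 255], [170, 85]]   # 2x2 toy grayscale image
planes = bit_planes(image)
print(planes[0])  # least-significant plane: [[0, 1], [0, 1]]
print(planes[7])  # most-significant plane:  [[0, 1], [1, 0]]
```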

The JBIG compression standard has three separate modes of operation: progressive, progressive-compatible sequential, and single-progression sequential. The progressive-compatible sequential mode uses a bitstream compatible with the progressive mode; the only difference is that in this mode the data is divided into strips.

The input image goes through a sequence of resolution-reduction and differential-layer encoders. Each is equivalent in functionality, except that their input images have different resolutions.
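The resolution-reduction step produces each lower-resolution layer from the one above it. JBIG's actual reduction uses a lookup table over a pixel neighborhood to preserve thin lines and dither patterns; the sketch below substitutes a much cruder rule (OR over each 2×2 block) purely to illustrate the layering idea:

```python
def reduce_resolution(image):
    """Halve the resolution of a bi-level image (list of rows of 0/1).

    Simplified stand-in for JBIG's table-based reduction: each output
    pixel is the OR of a 2x2 input block, so any black pixel survives
    at the lower resolution. Assumes even dimensions.
    """
    h, w = len(image), len(image[0])
    return [
        [image[2 * i][2 * j] | image[2 * i][2 * j + 1]
         | image[2 * i + 1][2 * j] | image[2 * i + 1][2 * j + 1]
         for j in range(w // 2)]
        for i in range(h // 2)
    ]

layer = [[1, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
print(reduce_resolution(layer))  # [[1, 0], [0, 1]]
```

Applying the function repeatedly yields the stack of progressively lower-resolution layers that the differential-layer encoders then code.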

The JBIG2 Standard

By contrast, the JBIG2 standard is explicitly designed for lossy, lossless, and lossy-to-lossless image compression. The design goal of JBIG2 is not only to provide superior lossless compression performance over existing standards, but also to incorporate lossy compression at much higher compression ratios, with as little visible degradation as possible.

A unique feature of JBIG2 is that it is both quality progressive and content progressive. By quality progressive, we mean that the bitstream behaves like that of the JBIG standard: image quality progresses from lower to higher (or possibly lossless) quality.

Content progressive, on the other hand, means that different types of image data can be added to the bitstream progressively.

The JBIG2 encoder decomposes the input bi-level image into regions of different attributes and codes each region separately, using different coding methods. Another feature that sets JBIG2 apart from other image compression standards is its ability to represent multiple pages of a document in a single file, enabling it to exploit interpage similarities.

JBIG2 offers content-progressive coding and superior compression performance through model-based coding, in which different models are constructed for the different data types in an image, realizing additional coding gain.

The JBIG2 specification expects the encoder to first segment the input image into regions of different data types, in particular text and halftone regions. Each region is then coded independently, according to its characteristics.

Text-Region Coding

Each text region is further segmented into pixel blocks containing connected black pixels. These blocks correspond to the characters that make up the content of this region. Then, instead of coding all pixels of every character occurrence, the bitmap of one representative instance of each character is coded and placed into a dictionary. For any character to be coded, the algorithm first tries to find a match among the characters already in the dictionary; if one is found, only a pointer to the dictionary entry and the character's position on the page need be coded.
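The dictionary-matching step can be sketched as a pixel-difference search. The `threshold` parameter and the function names here are illustrative assumptions, not part of the JBIG2 specification:

```python
def mismatch(a, b):
    """Count differing pixels between two equal-size bitmaps (lists of rows)."""
    return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def match_symbol(bitmap, dictionary, threshold=0):
    """Return the index of a dictionary bitmap matching `bitmap`, or None.

    threshold=0 demands an exact match (lossless); a small positive
    threshold tolerates a few differing pixels (lossy), at the risk of
    substituting a visually similar but wrong character.
    """
    for index, entry in enumerate(dictionary):
        if len(entry) == len(bitmap) and mismatch(entry, bitmap) <= threshold:
            return index
    return None

glyphs = [[[1, 0], [0, 1]],   # toy 2x2 "characters"
          [[1, 1], [1, 1]]]
print(match_symbol([[1, 1], [1, 1]], glyphs))                # 1
print(match_symbol([[1, 1], [1, 0]], glyphs, threshold=1))   # 1 (one pixel off)
```

When no match is found, the encoder codes the new bitmap and adds it to the dictionary.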

Halftone-Region Coding

The JBIG2 standard suggests two methods for halftone image coding. The first is similar to the context-based arithmetic coding used in JBIG. The only difference is that the new standard allows the context template to include as many as 16 template pixels, four of which may be adaptive.
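In context-based arithmetic coding, the already-coded neighbor pixels are packed into an integer that selects the probability model for the current pixel. A minimal sketch of that context construction (the template offsets here are illustrative, not the standard's defined templates):

```python
def context_index(image, row, col, template):
    """Build an integer context from already-coded neighbor pixels.

    `template` is a list of (drow, dcol) offsets, all pointing at pixels
    that precede (row, col) in raster order; JBIG2 allows up to 16 such
    pixels, four of which may be repositioned adaptively. Pixels outside
    the image read as 0 (white).
    """
    ctx = 0
    for drow, dcol in template:
        r, c = row + drow, col + dcol
        bit = image[r][c] if 0 <= r < len(image) and 0 <= c < len(image[0]) else 0
        ctx = (ctx << 1) | bit
    return ctx

img = [[1, 0],
       [0, 1]]
# Context for pixel (1, 1) from its upper-left and upper neighbors:
print(context_index(img, 1, 1, [(-1, -1), (-1, 0)]))  # 2 (bits 1, 0)
```

The resulting index feeds the arithmetic coder's adaptive probability estimate for the pixel being coded.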

The second method is called descreening. This involves converting the halftone back to grayscale and coding the grayscale values. In this method, the bi-level region is divided into blocks of size mb × nb. For an m × n bi-level region, the resulting grayscale image has dimensions

mg = ⌊(m + mb − 1)/mb⌋ by ng = ⌊(n + nb − 1)/nb⌋.

The grayscale value of each block is computed as the sum of the binary pixel values in the corresponding mb × nb block. The bit-planes of the grayscale image are coded using context-based arithmetic coding. The grayscale values are used as indices into a dictionary of halftone bitmap patterns; the decoder uses each value to index into this dictionary and reconstruct the original halftone image.
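The descreening step above can be sketched directly from the formulas; this is an illustrative summation, not the standard's bitstream procedure:

```python
def descreen(region, mb, nb):
    """Sum the 1-bits in each mb x nb block of a bi-level region.

    Produces the mg x ng grayscale image described above, with
    mg = floor((m + mb - 1)/mb) and ng = floor((n + nb - 1)/nb),
    so partial blocks at the edges still get a grayscale value.
    Each value later indexes a dictionary of halftone patterns.
    """
    m, n = len(region), len(region[0])
    mg = (m + mb - 1) // mb
    ng = (n + nb - 1) // nb
    gray = [[0] * ng for _ in range(mg)]
    for i in range(m):
        for j in range(n):
            gray[i // mb][j // nb] += region[i][j]
    return gray

print(descreen([[1, 1], [1, 0]], 2, 2))     # [[3]]
print(descreen([[1, 0, 1], [0, 1, 0]], 2, 2))  # [[2, 1]]
```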
