Explain MPEG video compression standard.
1 Answer

The name MPEG is an acronym for Moving Pictures Experts Group. MPEG is a method for video compression, which involves the compression of digital images and sound, as well as synchronization of the two.

There are currently several MPEG standards:

  • MPEG-1 is intended for intermediate data rates, on the order of 1.5 Mbit/sec.
  • MPEG-2 is intended for high data rates of at least 10 Mbit/sec.
  • MPEG-3 was intended for HDTV compression but was found to be redundant and was merged with MPEG-2.
  • MPEG-4 is intended for very low data rates of less than 64 Kbit/sec.

    i. In principle, a motion picture is a rapid flow of a set of frames, where each frame is an image. In other words, a frame is a spatial combination of pixels, and a video is a temporal combination of frames that are sent one after another.

    ii. Compressing video, then, means spatially compressing each frame and temporally compressing a set of frames.

    iii. Spatial Compression: The spatial compression of each frame is done with JPEG (or a modification of it). Each frame is a picture that can be independently compressed.

    iv. Temporal Compression: In temporal compression, redundant frames are removed.

    v. To temporally compress data, the MPEG method first divides frames into three categories:

    vi. I-frames, P-frames, and B-frames. Figure 1 shows a sample sequence of frames.


Fig1: MPEG frames

    vii. Figure 2 shows how I-, P-, and B-frames are constructed from a series of seven frames.


Fig2: MPEG frame construction
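The JPEG-style spatial compression mentioned in step (iii) rests on the 8×8 discrete cosine transform. The sketch below computes the forward 2-D DCT-II of one 8×8 block directly from its definition; it is a naive O(n⁴) illustration, not the optimized transform a real encoder would use:

```python
import math

# Direct 2-D DCT-II of an 8x8 block, as used in JPEG-style spatial compression.
# block is an 8x8 list of sample values; the result is an 8x8 list of
# frequency coefficients with the DC term at out[0][0].
def dct_8x8(block):
    def c(k):
        # Normalization factor: 1/sqrt(2) for the DC index, 1 otherwise.
        return 1 / math.sqrt(2) if k == 0 else 1.0

    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out
```

For a uniform block all the energy ends up in the DC coefficient `out[0][0]`, which is why DCT followed by quantization compresses smooth image areas so well.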

I-frames: An intracoded frame (I-frame) is an independent frame that is not related to any other frame.

They appear at regular intervals. An I-frame must appear periodically to handle sudden changes in the scene that the previous and following frames cannot show. Also, when a video is broadcast, a viewer may tune in at any time; if there were only one I-frame, at the beginning of the broadcast, a viewer who tuned in late would not receive a complete picture. I-frames are independent of other frames and cannot be constructed from other frames.

P-frames: A predicted frame (P-frame) is related to the preceding I-frame or P-frame. In other words, each P-frame contains only the changes from the preceding frame. The changes, however, cannot cover a big segment. For example, for a fast-moving object, the new changes may not be recorded in a P-frame. P-frames can be constructed only from previous I- or P-frames. P-frames carry much less information than other frame types and carry even fewer bits after compression.

B-frames: A bidirectional frame (B-frame) is related to the preceding and following I-frame or P-frame. In other words, each B-frame is relative to the past and the future. Note that a B-frame is never related to another B-frame.
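The dependency rules for the three frame types can be sketched in a few lines of Python. `reference_frames` is a hypothetical helper (not part of any MPEG API) that, given a display-order sequence of frame types, returns the indices of the anchor frames each frame is predicted from:

```python
# For each frame in a display-order sequence such as "IBBPBBP", list the
# indices of the I-/P-frames it depends on:
#   I-frames: none (independent), P-frames: the preceding I- or P-frame,
#   B-frames: the preceding AND following I- or P-frame.
# Assumes every B-frame has a following anchor frame in the sequence.
def reference_frames(sequence):
    deps = []
    for i, t in enumerate(sequence):
        if t == 'I':
            deps.append([])                     # self-contained
        elif t == 'P':
            prev = max(j for j in range(i) if sequence[j] in 'IP')
            deps.append([prev])                 # predicted from the past
        elif t == 'B':
            prev = max(j for j in range(i) if sequence[j] in 'IP')
            nxt = min(j for j in range(i + 1, len(sequence))
                      if sequence[j] in 'IP')
            deps.append([prev, nxt])            # past and future anchors
    return deps
```

Note that no B-frame index ever appears in the dependency lists, reflecting the rule that a B-frame is never related to another B-frame.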

  • According to the MPEG standard, the entire movie is considered a video sequence consisting of pictures, each having three components: one luminance component and two chrominance components (Y, U and V).
  • The luminance component contains the gray-scale picture, and the chrominance components provide the colour, hue and saturation.
  • Each component is a rectangular array of samples, and each row of the array is called a raster line.
  • The eye is more sensitive to spatial variations of luminance than to similar variations in chrominance. Hence the MPEG-1 standard samples the chrominance components at half the resolution of the luminance component.
  • The input to the MPEG encoder is called the source data, and the output of the MPEG decoder is called the reconstructed data.
  • The MPEG decoder has three parts: the audio layer, the video layer, and the system layer.
  • The system layer reads and interprets the various headers in the source data and transmits the data to either the audio or the video layer.
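A minimal sketch of that half-resolution chrominance sampling, assuming 4:2:0-style subsampling in which each chrominance plane is reduced by a factor of two in both directions by averaging 2×2 neighbourhoods (real encoders may use other filters):

```python
# Downsample a chrominance plane to half resolution horizontally and
# vertically. plane is a list of equal-length rows with even dimensions;
# each output sample is the integer average of a 2x2 block of inputs.
def subsample_420(plane):
    h, w = len(plane), len(plane[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            s = (plane[y][x] + plane[y][x + 1]
                 + plane[y + 1][x] + plane[y + 1][x + 1])
            row.append(s // 4)   # integer average of the 2x2 neighbourhood
        out.append(row)
    return out
```

Because the eye tolerates this loss of chrominance detail, the two subsampled planes together cost only half as many samples as the full-resolution luminance plane.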
  • The basic building block of an MPEG picture is the macro block as shown:

Fig: MPEG macroblock structure

  • A macroblock consists of a 16×16 block of luminance (gray-scale) samples, divided into four 8×8 blocks, plus two 8×8 blocks of chrominance samples.
  • The MPEG compression of a macroblock consists of passing each of the six blocks through DCT, quantization and entropy encoding, similar to JPEG.
  • A picture in MPEG is made up of slices, where each slice is a contiguous set of macroblocks having a similar gray-scale component.
  • The concept of a slice is important when a picture contains uniform areas.
  • The MPEG standard defines a quantizer scale that takes values in the range 1 to 31. The quantization rule for intra coding is:

$$Q_\text{DCT} = \frac{16 \times \text{DCT} + \text{sign}(\text{DCT}) \times \text{quantizer\_scale} \times Q}{2 \times \text{quantizer\_scale} \times Q}$$

Where

DCT = the discrete cosine transform coefficient being encoded

Q = the quantization coefficient from the quantization table

quantizer_scale = the quantizer scale parameter (1 to 31)


The quantization rule for nonintra coding is:

$$Q_\text{DCT} = \frac{16 \times \text{DCT}}{2 \times \text{quantizer\_scale} \times Q}$$
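Both quantization rules can be sketched directly in Python. `quantize_intra` and `quantize_nonintra` are hypothetical names, and integer floor division is used as a simple stand-in for the standard's exact rounding behaviour:

```python
# Sign of a coefficient: -1, 0, or +1.
def sign(x):
    return (x > 0) - (x < 0)

# Intra-coding rule: the sign-dependent term in the numerator rounds the
# result away from zero before the division.
# DCT: coefficient being encoded; Q: entry from the quantization table;
# quantizer_scale: stage parameter in the range 1..31.
def quantize_intra(DCT, Q, quantizer_scale):
    return (16 * DCT + sign(DCT) * quantizer_scale * Q) // (2 * quantizer_scale * Q)

# Nonintra-coding rule: same denominator, no rounding term.
def quantize_nonintra(DCT, Q, quantizer_scale):
    return (16 * DCT) // (2 * quantizer_scale * Q)
```

Note how a larger `quantizer_scale` or table entry `Q` shrinks the quantized value toward zero, which is exactly where the lossy bit-rate control of MPEG happens.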

  • The quantized coefficients $Q_\text{DCT}$ are encoded using a nonadaptive Huffman method; the standard defines specific Huffman code tables, which were computed by collecting statistics.