Discuss the drawbacks of different conventional methods in audio compression. Explain silence compression in lossy compression technique.

Mumbai University > EXTC > Sem 7 > Data Compression and Encryption

Marks: 10 M

Year: Dec 2014

1 Answer

Drawbacks of conventional audio compression methods:

Conventional compression methods such as RLE, statistical coding, and dictionary-based coding can be used to losslessly compress sound files, but the results depend heavily on how the sound data responds to each of these three classes of methods.

  1. Run length encoding:

    a) RLE may work well when the sound contains long runs of identical samples. With 8-bit samples, long runs of identical samples occur fairly often.

    b) With 16-bit samples, long runs of identical values are rare, so RLE becomes ineffective.

  2. Statistical methods:

    a) They assign variable-size codes to the samples according to their frequency of occurrence.

    b) With 8-bit samples there are only 256 possible values, so in a large audio file the samples may end up with a nearly flat distribution. Such a file therefore does not respond well to Huffman coding.

    c) With 16-bit samples there are 65,536 possible values, so some values may occur very often while others are rare. Such a file may therefore compress better with arithmetic coding (the first sketch after this list compares run lengths and sample distributions for 8-bit and 16-bit audio).

  3. Dictionary-based methods:

    a) Dictionary-based methods expect to find the same phrases again and again in the data. This happens with text, where certain strings may repeat often.

    b) Sound, however, is an analog signal, and the particular samples generated depend on the precise way the ADC works. With 8-bit samples, for example, a signal level of 8 mV may become a sample value of 2, but levels very close to it, say 7.6 mV or 8.5 mV, may become samples with different values.

    c) This is why parts of speech that sound identical to us, and should therefore have produced identical phrases, end up being digitized slightly differently and enter the dictionary as different phrases, thereby reducing compression. Dictionary-based methods are therefore not well suited to sound compression (see the second sketch after this list).
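Here is a minimal sketch, in Python, illustrating points 1 and 2 on a synthetic signal. The quiet 100 Hz tone and the helpers quantize, longest_run, and entropy_bits are assumptions made for illustration, not part of any standard method: the same waveform is quantized at 8 and 16 bits, then we measure the longest run of identical samples (what RLE exploits) and the zero-order entropy of the sample values (what statistical coders exploit).

```python
import math
from collections import Counter

def quantize(wave, bits):
    """Map floating-point samples in [-1, 1] to signed integers of the given width."""
    scale = 2 ** (bits - 1) - 1
    return [round(x * scale) for x in wave]

def longest_run(samples):
    """Length of the longest run of identical consecutive samples (what RLE exploits)."""
    best = run = 1
    for prev, cur in zip(samples, samples[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def entropy_bits(samples):
    """Zero-order entropy in bits/sample: a lower bound on what a
    symbol-by-symbol statistical coder such as Huffman can achieve."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A quiet 100 Hz tone, one second at 8 kHz, amplitude 2% of full scale.
wave = [0.02 * math.sin(2 * math.pi * 100 * t / 8000) for t in range(8000)]

for bits in (8, 16):
    s = quantize(wave, bits)
    print(f"{bits:>2}-bit: {len(set(s))} distinct values, "
          f"longest run = {longest_run(s)}, "
          f"entropy = {entropy_bits(s):.2f} bits/sample")
```

On this toy signal the 8-bit version collapses to a handful of values and produces runs RLE can use, while the 16-bit version spreads over far more values and the runs largely disappear; real recordings differ in the exact numbers but behave similarly in kind.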
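The second sketch illustrates point 3, using zlib as a stand-in LZ77 dictionary coder. The two-tone signal, the digitize helper, and the level offset are all assumptions made for illustration: the same analog signal is digitized twice with a tiny level shift, and the second take no longer matches the phrases the coder learned from the first.

```python
import math
import zlib

def digitize(offset):
    """8-bit quantization of a fixed two-tone analog signal, shifted by a small
    fraction of one quantization step (the offset stands in for ADC/level drift)."""
    return bytes(
        128 + round(100 * math.sin(2 * math.pi * 440 * t / 8000)
                    + 20 * math.sin(2 * math.pi * 887 * t / 8000)
                    + offset)
        for t in range(4000)
    )

take1 = digitize(0.0)    # original digitization
take2 = digitize(0.15)   # the "same" sound, re-digitized with a slight level shift

changed = sum(a != b for a, b in zip(take1, take2))
print(f"samples that differ: {changed} of {len(take1)}")

# An exact repeat is nearly free for a dictionary coder; a re-digitized
# "repeat" is not, because its phrases no longer match the stored ones.
print("take1 alone:          ", len(zlib.compress(take1)))
print("take1 + exact copy:   ", len(zlib.compress(take1 + take1)))
print("take1 + re-digitized: ", len(zlib.compress(take1 + take2)))
```

The exact copy costs the dictionary coder very little extra, while the re-digitized copy, which would sound the same, costs considerably more because its phrases do not recur verbatim.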

Principle of silence compression:

  • The principle of silence compression is to treat small samples as if they were silence (i.e., as samples of 0).
  • This generates run lengths of zero, so silence compression is actually a variant of RLE, suitable for sound compression.
  • This method uses the fact that some people have less sensitive hearing than others, and will tolerate the loss of sound that is so quiet they may not hear it anyway.
  • Audio files containing long periods of low-volume sound will respond to silence compression better than other files with high-volume sound. This method requires a user-controlled parameter that specifies the largest sample that should be suppressed.
  • Two other parameters are also necessary, although they may not have to be user-controlled. One specifies the shortest run length of small samples, typically 2 or 3.
  • The other specifies the minimum number of consecutive large samples needed to terminate a run of silence. For example, if this parameter is 3, a run of 15 small samples, followed by two large samples, followed by 13 small samples is treated as one silence run of 30 samples, whereas the runs 15, 3, 13 become two distinct silence runs of 15 and 13 samples, with nonsilence in between (a sketch of this logic follows below).
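Below is a minimal sketch of the silence-compression logic described above. The function name, the token format, and the default parameter values are my own choices for illustration, not a standard: samples at or below threshold count as quiet, a silence run is encoded only if it is at least min_run samples long, and only end_count consecutive loud samples terminate a run, so shorter loud interruptions are absorbed into the silence.

```python
def silence_compress(samples, threshold=2, min_run=3, end_count=3):
    """Return a token list: ('S', run_length) for each silence run,
    original sample values everywhere else (a lossy transform)."""
    out = []
    i, n = 0, len(samples)
    while i < n:
        if abs(samples[i]) <= threshold:
            # Scan ahead: fewer than end_count consecutive loud samples
            # are absorbed into the run instead of terminating it.
            j, loud, last_quiet = i, 0, i
            while j < n and loud < end_count:
                if abs(samples[j]) <= threshold:
                    loud, last_quiet = 0, j
                else:
                    loud += 1
                j += 1
            run_len = last_quiet - i + 1
            if run_len >= min_run:
                out.append(('S', run_len))   # the whole stretch becomes silence
                i = last_quiet + 1
                continue
        out.append(samples[i])               # loud, or a too-short quiet run
        i += 1
    return out

# 15 quiet samples, 2 loud ones, 13 quiet: one silence run of 30 (2 < end_count).
print(silence_compress([1] * 15 + [90, 85] + [1] * 13))
# 15 quiet, 3 loud, 13 quiet: runs of 15 and 13 with nonsilence in between.
print(silence_compress([1] * 15 + [90, 85, 88] + [1] * 13))
```

The two calls reproduce the example in the last bullet: with end_count = 3, two loud samples in the middle do not break the run, while three do.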