written 7.7 years ago by | • modified 2.8 years ago |
Mumbai university > Electronics and telecommunication Engineering > Sem 7 > Data compression and Encryption
Marks: 4
Years: May 2016
Compression ratio:
Compression ratio is the ratio of the number of bits required to represent the data before compression to the number of bits required after compression. For example, if an image that requires 65,536 bytes before compression requires only 16,384 bytes afterwards, the compression ratio is 4:1.
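As a small sketch of the idea (the byte counts below are made up for illustration), the ratio can be computed directly:

```python
def compression_ratio(original_size, compressed_size):
    """Ratio of the original size to the compressed size (same units)."""
    return original_size / compressed_size

# Hypothetical example: a 65,536-byte image compressed to 16,384 bytes.
print(compression_ratio(65536, 16384))  # 4.0, i.e. a 4:1 compression ratio
```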
Distortion:
In order to determine the efficiency of a compression algorithm, we have to have some way of quantifying the difference between the original and the reconstructed data. This difference between the original and the reconstruction is called the distortion.
Lossy techniques are generally used for the compression of data that originate as analog signals such as speech and video.
In the compression of speech and video, the final arbiter of quality is the human observer.
Since human responses are difficult to model mathematically, many approximate measures of distortion are used to determine the quality of the reconstructed waveforms.
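One widely used approximate distortion measure for waveforms is the mean squared error (MSE) between the original and reconstructed samples. The sketch below uses made-up sample values purely for illustration:

```python
def mse(original, reconstructed):
    """Mean squared error between two equal-length sample sequences."""
    assert len(original) == len(reconstructed)
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)

# Made-up samples: a short original waveform and its lossy reconstruction.
x = [10, 12, 15, 20]
y = [10, 13, 14, 20]
print(mse(x, y))  # 0.5
```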
Compression rate:
Compression rate is the average number of bits required to represent a single sample or symbol of the data after compression, for example bits per pixel for images or bits per second for speech.
Fidelity and quality:
Fidelity and quality are the terms used to describe the difference between the reconstruction and the original.
When we say that the fidelity or quality of a reconstruction is high, we mean that the difference between the reconstruction and the original is small.
Whether this difference is a mathematical difference or a perceptual one should be evident from the context.
Self-information:
Shannon defined a quantity called self-information.
Suppose we have an event A, which is a set of outcomes of some random experiment. If P(A) is the probability that event A will occur, then the self-information associated with A is given by
i(A) = log_b [1/P(A)] = -log_b P(A) ----------- (1)
If the probability of an event is low, the amount of self-information associated with it is high.
If the probability of an event is high, the amount of self-information associated with it is low.
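Equation (1) can be checked numerically. The sketch below uses base 2, so the self-information comes out in bits; the probabilities are chosen arbitrarily for illustration:

```python
import math

def self_information(p, base=2):
    """Self-information i(A) = -log_b P(A); in bits when base = 2."""
    return -math.log(p, base)

# A rare event carries more information than a likely one.
print(self_information(0.125))  # ~3.0 bits
print(self_information(0.5))    # ~1.0 bit
```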
The information obtained from the occurrence of two independent events is the sum of the information obtained from the occurrence of individual events.
Suppose A and B are two independent events. The self-information associated with the occurrence of both events A and B is, from equation (1),

i(AB) = log_b [1/P(AB)]

As A and B are independent,

P(AB) = P(A) P(B)

and therefore

i(AB) = log_b [1/(P(A) P(B))]
      = log_b [1/P(A)] + log_b [1/P(B)]

i(AB) = i(A) + i(B)
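The additivity property above can be verified numerically for two hypothetical independent events (the probabilities below are chosen arbitrarily):

```python
import math

def self_info(p, base=2):
    """Self-information -log_b(p)."""
    return -math.log(p, base)

p_a, p_b = 0.5, 0.25
# For independent events, P(AB) = P(A) * P(B).
joint = self_info(p_a * p_b)
separate = self_info(p_a) + self_info(p_b)
print(joint, separate)  # both ~3.0 bits, up to floating-point rounding
```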