Mumbai University > EXTC > Sem 7 > Data Compression and Encryption
Marks: 10 M
Year: May 2013
Thus we encode and transmit the first value of the sequence, followed by the encoding of the differences between successive samples:
Example: Source Sequence
6.2 , 9.7 , 13.2 , 5.9 , 8 , 7.4 , 4.2 , 1.8
Taking the differences:
6.2 , 3.5 , 3.5 , -7.3 , 2.1 , -0.6 , -3.2 , -2.4
Lossless Reconstruction
Adding the $n^{th}$ difference to the $(n-1)^{th}$ reconstructed value gives the $n^{th}$ reconstructed value, i.e. $x_n = d_n + x_{n-1}$, so the original sequence is recovered exactly.

Lossy Scheme
We use a 7-level quantizer with output values (-6 , -4 , -2 , 0 , 2 , 4 , 6).
On quantization:
6 , 4 , 4 , -6 , 2 , 0 , -4 , -2
Thus the reconstructed sequence is:
6 , 10 , 14 , 8 , 10 , 10 , 6 , 4
The difference (error) between the original and the reconstructed signal is:
0.2 , -0.3 , -0.8 , -2.1 , -2 , -2.6 , -1.8 , -2.2
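A minimal Python sketch that reproduces the worked example above (the sequence and quantizer levels come from the example; the helper names are our own):

```python
# Take differences, quantize to the 7 levels {-6,-4,-2,0,2,4,6},
# then reconstruct by a running sum of the quantized differences.

def quantize(d):
    """Nearest of the 7 output levels -6..6 in steps of 2."""
    return max(-6, min(6, 2 * round(d / 2)))

x = [6.2, 9.7, 13.2, 5.9, 8.0, 7.4, 4.2, 1.8]

# First value, then differences between successive samples
d = [x[0]] + [x[i] - x[i - 1] for i in range(1, len(x))]
q = [quantize(di) for di in d]

# Reconstruct: add each quantized difference to the previous reconstruction
xr = []
for qi in q:
    xr.append(qi if not xr else xr[-1] + qi)

err = [round(a - b, 1) for a, b in zip(x, xr)]
print(q)    # [6, 4, 4, -6, 2, 0, -4, -2]
print(xr)   # [6, 10, 14, 8, 10, 10, 6, 4]
print(err)  # [0.2, -0.3, -0.8, -2.1, -2.0, -2.6, -1.8, -2.2]
```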
i. Adjacent audio samples tend to be similar in much the same way that neighboring pixels in an image tend to have similar colours.
ii. The simplest way to exploit this redundancy is to subtract adjacent samples and code the differences, which tend to be small integers. Any audio compression method based on this principle is called DPCM (Differential Pulse Code Modulation).
iii. A block diagram of DPCM is shown below:
$$\text{Figure 2.5.a DPCM Encoder}$$
$$\text{Figure 2.5.b DPCM Decoder}$$
iv. The DPCM system consists of two major components: a predictor and a quantizer.
v. DPCM gains its advantage from the reduction in the variance of the transmitted differences.
vi. How much the variance is reduced depends on how well the predictor can predict the next symbol from past reconstructed symbols.
vii. The variance of the prediction error is given by:
$$\sigma_d^2 = E\left[(X_n - P_n)^2\right]$$
The reconstructed sequence is given by:
$$\hat{X}_n = X_n + q_n$$
where $q_n$ is the quantization error.
The predictor sequence $\{P_n\}$ is given by:
$$P_n = f(\hat{X}_{n-1}, \hat{X}_{n-2}, \ldots, \hat{X}_0)$$
Assuming the quantization error is small, we can replace $\hat{X}_n$ by $X_n$:
$$P_n = f(X_{n-1}, X_{n-2}, \ldots, X_0)$$
Assume the predictor to be linear:
$$P_n = \sum_{i=1}^{N} a_i x_{n-i}$$
The variance of the prediction error is then:
$$\sigma_d^2 = E\left[\left(x_n - \sum_{i=1}^{N} a_i x_{n-i}\right)^2\right]$$
To minimize it, we set the partial derivative with respect to each coefficient to zero:
$$\frac{\partial \sigma_d^2}{\partial a_1} = -2E\left[\left(x_n - \sum_{i=1}^{N} a_i x_{n-i}\right)x_{n-1}\right] = 0 \\ \frac{\partial \sigma_d^2}{\partial a_2} = -2E\left[\left(x_n - \sum_{i=1}^{N} a_i x_{n-i}\right)x_{n-2}\right] = 0 \\ \vdots \\ \frac{\partial \sigma_d^2}{\partial a_N} = -2E\left[\left(x_n - \sum_{i=1}^{N} a_i x_{n-i}\right)x_{n-N}\right] = 0$$
Taking the expectations, we can rewrite these equations as:
$$\sum_{i=1}^{N} a_i R_{xx}(i-1) = R_{xx}(1) \\ \sum_{i=1}^{N} a_i R_{xx}(i-2) = R_{xx}(2) \\ \vdots \\ \sum_{i=1}^{N} a_i R_{xx}(i-N) = R_{xx}(N)$$
In matrix form RA=P
Where
$$R=\begin{bmatrix} R_{xx}(0) & R_{xx}(1) & \cdots & R_{xx}(N-1) \\ R_{xx}(1) & R_{xx}(0) & \cdots & R_{xx}(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ R_{xx}(N-1) & R_{xx}(N-2) & \cdots & R_{xx}(0) \end{bmatrix}$$
$$A=\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} \quad \text{and} \quad P=\begin{bmatrix} R_{xx}(1) \\ R_{xx}(2) \\ \vdots \\ R_{xx}(N) \end{bmatrix}$$
Thus we can find the predictor coefficients as:
$$A = R^{-1} P$$
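A minimal Python sketch of this computation (assuming numpy is available; the biased autocorrelation estimate and the function name are our own illustrative choices): estimate $R_{xx}(k)$ from the data, build the Toeplitz matrix $R$, and solve $RA = P$:

```python
import numpy as np

def predictor_coeffs(x, N):
    """Solve RA = P for the N linear predictor coefficients."""
    x = np.asarray(x, dtype=float)
    # Biased sample autocorrelation estimates R_xx(k), k = 0..N
    Rxx = np.array([x[:len(x) - k] @ x[k:] for k in range(N + 1)]) / len(x)
    R = np.array([[Rxx[abs(i - j)] for j in range(N)] for i in range(N)])
    P = Rxx[1:]
    return np.linalg.solve(R, P)   # equivalent to R^{-1} P

# e.g. a 2nd-order predictor for the source sequence used earlier
a = predictor_coeffs([6.2, 9.7, 13.2, 5.9, 8.0, 7.4, 4.2, 1.8], N=2)
print(a)   # P_n = a[0] * x_{n-1} + a[1] * x_{n-2}
```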
i. Compression is possible only because sound and thus audio samples tend to have redundancies.
Disadvantages of DPCM:
ii. In DPCM we subtract adjacent samples and code the differences; however, this method is inefficient because it does not adapt itself to the varying magnitudes of the audio stream.
iii. Hence, better results are obtained using ADPCM (Adaptive Differential Pulse Code Modulation).
iv. ADPCM uses the previous sample to predict the current sample.
v. It then computes the difference between the current sample and its prediction and quantizes the difference.
$$\text{Figure 2.6.a ADPCM Encoder}$$
vi. The adaptive quantizer receives the difference $D[n]$ between the current input sample $X[n]$ and the predictor output $X_p[n-1]$.
vii. The quantizer computes and outputs the quantized code $C[n]$ of this difference.
viii. The same code is sent to an adaptive dequantizer (the same dequantizer is used by the decoder), which produces the quantized difference value $D_q[n]$.
ix. This value is added to the previous predictor output $X_p[n-1]$, and the sum $X_p[n]$ is sent to the predictor to be used in the next step.
$$\text{Figure 2.6.b ADPCM Decoder}$$
x. The decoder's input is the code $C[n]$, which it dequantizes to a difference $D_q[n]$; this is added to the preceding predictor output $X_p[n-1]$ to form the next output $X_p[n]$.
xi. This output is also fed into the predictor to be used in the next step.
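The sketch below (plain Python, not any standardized ADPCM codec; the step-adaptation rule and all names are illustrative assumptions) shows the loop described above: a previous-sample predictor plus a quantizer whose step size adapts to recent code magnitudes, with the encoder and decoder adapting in lockstep:

```python
# Illustrative ADPCM loop: previous-sample predictor + adaptive step size.
# The adaptation rule is a made-up example, not a standard codec's.

def adapt(step, c):
    """Grow the step after large codes, shrink it after small ones."""
    return step * 1.5 if abs(c) > 2 else max(0.5, step * 0.9)

def adpcm_encode(samples, step=1.0):
    codes, xp = [], 0.0                  # xp is the predictor output X_p[n-1]
    for x in samples:
        c = int(round((x - xp) / step))  # quantize the difference D[n] to C[n]
        codes.append(c)
        xp += c * step                   # mirror the decoder's reconstruction
        step = adapt(step, c)
    return codes

def adpcm_decode(codes, step=1.0):
    out, xp = [], 0.0
    for c in codes:
        xp += c * step                   # X_p[n] = X_p[n-1] + D_q[n]
        out.append(xp)
        step = adapt(step, c)            # same adaptation keeps both in sync
    return out

x = [6.2, 9.7, 13.2, 5.9, 8.0, 7.4, 4.2, 1.8]
print(adpcm_decode(adpcm_encode(x)))     # tracks x up to quantization error
```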