DPCM (Differential Pulse Code Modulation)
i. Adjacent audio samples tend to be similar in much the same way that neighboring pixels in an image tend to have similar colours.
ii. The simplest way to exploit this redundancy is to subtract adjacent samples and code the differences, which tend to be small integers. Any audio compression method based on this principle is called DPCM.
iii. A block diagram of DPCM is shown below:
$$\text{Figure 2.5.a Encoder}$$
$$\text{Figure 2.5.b Decoder}$$
iv. The DPCM system consists of two major components: a predictor and a quantizer.
v. DPCM gains its advantage from the reduction in variance of the coded signal.
vi. How much the variance is reduced depends on how well the predictor can predict the next symbol from past reconstructed symbols.
vii. The variance of the difference is given by:
$$\sigma_d^2 = E\left[(X_n - P_n)^2\right]$$
The reconstructed sequence is given by:
$$\hat{X}_n = X_n + q_n$$
The predictor sequence $\{P_n\}$ is given by:
$$P_n = f(\hat{X}_{n-1}, \hat{X}_{n-2}, \ldots, \hat{X}_0)$$
Assuming the quantization error is small, we can replace $\hat{X}_n$ by $X_n$:
$$P_n = f(X_{n-1}, X_{n-2}, \ldots, X_0)$$
Assuming the predictor to be linear:
$$P_n = \sum_{i=1}^{N} a_i X_{n-i}$$
The variance is then:
$$\sigma_d^2 = E\left[\left(x_n - \sum_{i=1}^{N} a_i x_{n-i}\right)^2\right]$$
Setting the partial derivative with respect to each coefficient to zero:
$$\frac{\partial \sigma_d^2}{\partial a_1} = -2E\left[\left(x_n - \sum_{i=1}^{N} a_i x_{n-i}\right)x_{n-1}\right] = 0$$
$$\frac{\partial \sigma_d^2}{\partial a_2} = -2E\left[\left(x_n - \sum_{i=1}^{N} a_i x_{n-i}\right)x_{n-2}\right] = 0$$
$$\vdots$$
$$\frac{\partial \sigma_d^2}{\partial a_N} = -2E\left[\left(x_n - \sum_{i=1}^{N} a_i x_{n-i}\right)x_{n-N}\right] = 0$$
Taking the expectations, we can rewrite these equations as:
$$\sum_{i=1}^{N} a_i R_{xx}(i-1) = R_{xx}(1)$$
$$\sum_{i=1}^{N} a_i R_{xx}(i-2) = R_{xx}(2)$$
$$\vdots$$
$$\sum_{i=1}^{N} a_i R_{xx}(i-N) = R_{xx}(N)$$
In matrix form, $RA = P$, where
$$R=\begin{bmatrix} R_{xx}(0) & R_{xx}(1) & \cdots & R_{xx}(N-1) \\ R_{xx}(1) & R_{xx}(0) & \cdots & R_{xx}(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ R_{xx}(N-1) & R_{xx}(N-2) & \cdots & R_{xx}(0) \end{bmatrix}$$
$$A=\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} \quad \text{and} \quad P=\begin{bmatrix} R_{xx}(1) \\ R_{xx}(2) \\ \vdots \\ R_{xx}(N) \end{bmatrix}$$
- These equations are known as the discrete form of the Wiener-Hopf equations.
- If we know the autocorrelation $R_{xx}(k)$ for $k = 0, 1, \ldots, N$, we can find the predictor coefficients:
$$A = R^{-1} P$$
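As a sketch of how $A = R^{-1}P$ could be computed numerically (assuming NumPy; the function name and the autocorrelation estimate are illustrative, and the signal is assumed long and roughly stationary):

```python
import numpy as np

def predictor_coefficients(x, N):
    """Solve the discrete Wiener-Hopf equations R A = P for an
    order-N linear predictor, estimating Rxx(k) from the samples x.
    Illustrative sketch, not a production LPC routine."""
    x = np.asarray(x, dtype=float)
    # Biased autocorrelation estimates Rxx(k) for k = 0..N
    Rxx = np.array([x[:len(x) - k] @ x[k:] / len(x) for k in range(N + 1)])
    # R is symmetric Toeplitz: R[i, j] = Rxx(|i - j|); P[i] = Rxx(i + 1)
    R = np.array([[Rxx[abs(i - j)] for j in range(N)] for i in range(N)])
    P = Rxx[1:N + 1]
    return np.linalg.solve(R, P)  # A = R^{-1} P

# A strongly correlated test signal (a random walk): the coefficients
# sum to a value close to 1, i.e. the predictor tracks the last sample.
x = np.cumsum(np.random.default_rng(0).normal(size=2000))
a = predictor_coefficients(x, N=2)
```

Because $R$ is Toeplitz, a Levinson-Durbin solver would be cheaper than a general matrix solve, but `np.linalg.solve` keeps the sketch short.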
DPCM example:

- x = [8.5, 9.3, 6.6, 4.5, 7.8] → sampler output
- x̂ = [9, 9, 7, 5, 8] → quantizer output
- d = [9, 0, 2, 2, -3] → DPCM output (differences)
- d̂ = [9, 0, 2, 2, -3] → received input
- r = [9, 9, 7, 5, 8] → reconstructed output
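The worked example above can be reproduced in a few lines of Python (a minimal sketch; the differences are taken as previous minus current, which is the convention that matches the numbers in the example):

```python
def dpcm_encode(samples):
    """Send the first quantized sample as-is, then code each
    difference (previous - current)."""
    diffs = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        diffs.append(prev - cur)
    return diffs

def dpcm_decode(diffs):
    """Undo the differencing to reconstruct the samples."""
    out = [diffs[0]]
    for d in diffs[1:]:
        out.append(out[-1] - d)
    return out

x_q = [9, 9, 7, 5, 8]      # quantizer output
d = dpcm_encode(x_q)       # -> [9, 0, 2, 2, -3]
r = dpcm_decode(d)         # -> [9, 9, 7, 5, 8]
```

Note that after the first sample the coded values are small integers, which is exactly the redundancy DPCM exploits.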
ADPCM (Adaptive Differential Pulse Code Modulation)
i. Compression is possible only because sound, and thus audio samples, tend to have redundancies.
Disadvantages of DPCM:
ii. In DPCM we subtract adjacent samples and code the differences; however, this method is inefficient because it does not adapt itself to the varying magnitudes of the audio stream.
iii. Hence, better results are obtained using ADPCM (Adaptive Differential Pulse Code Modulation).
iv. ADPCM uses the previous sample to predict the current sample.
v. It then computes the difference between the current sample and its predictions and quantizes the difference.
$$\text{Figure 2.6.a ADPCM Encoder}$$
vi. The adaptive quantizer receives the difference $D[n]$ between the current input sample $X[n]$ and the predictor output $X_p[n-1]$.
vii. The quantizer computes and outputs the quantized code $C[n]$ of $D[n]$.
viii. The same code is sent to an adaptive dequantizer (the same dequantizer is used by the decoder), which produces the next quantized difference value $D_q[n]$.
ix. This value is added to the previous predictor value $X_p[n-1]$, and the sum $X_p[n]$ is sent to the predictor to be used in the next step.
$$\text{Figure 2.6.b ADPCM Decoder}$$
x. The decoder's input is the code $C[n]$; it dequantizes it to a difference $D_q[n]$, which is added to the preceding predictor output $X_p[n-1]$ to form the next output $X_p[n]$.
xi. The next output is also fed into the predictor to be used in the next step.
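The encoder/decoder loop described above can be sketched as follows. This is a toy illustration with a made-up step-size adaptation rule (grow the step after large codes, shrink it after small ones), not a standard codec such as IMA ADPCM:

```python
def adpcm_encode(samples):
    """Toy ADPCM encoder: the predictor is simply the previous
    reconstructed value Xp[n-1]; the quantizer step size adapts to the
    magnitude of recent codes. Illustrative sketch only."""
    step, xp = 1.0, 0.0        # quantizer step and predictor state
    codes = []
    for x in samples:
        d = x - xp                             # difference D[n]
        c = max(-4, min(4, round(d / step)))   # quantized code C[n]
        codes.append(c)
        xp += c * step                         # Xp[n] = Xp[n-1] + Dq[n]
        # adapt: grow step on large codes, shrink on small ones
        step = step * 1.5 if abs(c) >= 3 else max(0.5, step * 0.9)
    return codes

def adpcm_decode(codes):
    """Decoder mirrors the encoder's dequantizer, predictor, and step
    adaptation, so it tracks the same Xp[n] from the codes alone."""
    step, xp, out = 1.0, 0.0, []
    for c in codes:
        xp += c * step                         # Xp[n] = Xp[n-1] + Dq[n]
        out.append(xp)
        step = step * 1.5 if abs(c) >= 3 else max(0.5, step * 0.9)
    return out

codes = adpcm_encode([8.5, 9.3, 6.6, 4.5, 7.8])
recon = adpcm_decode(codes)
```

The key point the sketch shows is that encoder and decoder run identical dequantizer-plus-predictor state machines, so no predictor state needs to be transmitted.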