To code a sequence of length m, the Huffman procedure requires codewords for all possible sequences of length m.
For example, if the original alphabet size is k, the size of the codebook is $k^m$. Taking reasonable values k = 16 and m = 20 gives a codebook of $16^{20}$ entries, which is obviously not a viable option.
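A quick check in Python, using the values from the example above, makes the blow-up concrete:

```python
# Extended-Huffman codebook size k**m for alphabet size k and block length m.
k, m = 16, 20
print(f"codebook entries: {k**m:.3e}")  # ~1.209e+24 entries -- not storable
```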
With the Shannon-Fano coding procedure we do not need to build the entire codebook; instead, we simply obtain the code for the tag corresponding to a given sequence. This makes it entirely feasible to code sequences of length 20 or much longer.
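A minimal sketch of the tag idea, assuming a small illustrative three-symbol model (the probabilities and the sequence are my own choices, not from the question): each symbol narrows an interval inside [0, 1), and any number in the final interval identifies the entire sequence, so no codebook is ever enumerated.

```python
from fractions import Fraction

# Illustrative source model (assumed): probability and cumulative
# probability for each symbol, kept exact with fractions.
probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
cum, acc = {}, Fraction(0)
for s, p in probs.items():
    cum[s] = acc      # total probability of symbols listed before s
    acc += p

def tag(sequence):
    low, high = Fraction(0), Fraction(1)
    for s in sequence:
        width = high - low
        low, high = low + width * cum[s], low + width * (cum[s] + probs[s])
    return (low + high) / 2  # any point in [low, high) identifies the sequence

print(tag("abca" * 5))  # a length-20 sequence, coded directly
```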
With Huffman coding we cannot make m large, so we cannot get rates closer to the source entropy. Shannon-Fano coding can (the exception being sources whose probabilities are all powers of two, for which Huffman coding already achieves the entropy).
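A small experiment (the probabilities are my own illustrative choice) shows both halves of this claim: a dyadic source where Huffman already equals the entropy, and a skewed source where its per-symbol rate stays well above the entropy unless symbols are blocked into longer groups.

```python
import heapq, math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs)

def huffman_avg_length(probs):
    # Standard Huffman merging: each merge of two subtrees adds one bit
    # to every leaf below it, so summing the merged weights gives the
    # average codeword length.
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        p, q = heapq.heappop(heap), heapq.heappop(heap)
        total += p + q
        heapq.heappush(heap, p + q)
    return total

for probs in ([0.5, 0.25, 0.125, 0.125],   # dyadic: rate equals entropy
              [0.9, 0.05, 0.03, 0.02]):    # skewed: rate well above entropy
    print(f"H = {entropy(probs):.4f} bits, "
          f"Huffman rate = {huffman_avg_length(probs):.4f} bits")
```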
Shannon-Fano coding is more complex to implement than Huffman coding.
Huffman coding is not adaptable to changing input statistics; the code tree has to be regenerated and preserved. It is much easier to adapt Shannon-Fano (arithmetic) coding to changing input statistics: all we need to do is estimate the probabilities of the input alphabet by keeping a count of the letters as they are coded. There is no tree to preserve.
With Huffman there is a need to generate the code a priori, so separating the modeling and coding procedures is not feasible. Shannon-Fano coding has no need to generate a code a priori; this property lets us separate the modeling and coding procedures in a manner that is not feasible with Huffman coding.
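A minimal sketch of that separation, assuming a count-based model (all class and function names here are hypothetical): the coder only ever asks the model for a probability interval, so an adaptive model plugs in without rebuilding any codebook or tree.

```python
from fractions import Fraction

class CountModel:
    """Modeling step: probability estimates from running symbol counts."""
    def __init__(self, alphabet):
        self.alphabet = list(alphabet)
        self.counts = {s: 1 for s in self.alphabet}  # start at 1: no zero probabilities

    def interval(self, sym):
        # Cumulative-probability interval of sym under the current counts.
        total = sum(self.counts.values())
        acc = 0
        for s in self.alphabet:
            if s == sym:
                return Fraction(acc, total), Fraction(acc + self.counts[s], total)
            acc += self.counts[s]

    def update(self, sym):
        self.counts[sym] += 1  # adapt the model; the coder never changes

def encode(sequence, model):
    """Coding step: narrow the tag interval using whatever model is supplied."""
    low, high = Fraction(0), Fraction(1)
    for sym in sequence:
        lo_p, hi_p = model.interval(sym)
        width = high - low
        low, high = low + width * lo_p, low + width * hi_p
        model.update(sym)
    return (low + high) / 2

print(encode("abba", CountModel("ab")))
```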