
Entropy coding

In information theory, an entropy coding (or entropy encoding) is any lossless data compression method
that attempts to approach the lower bound declared by Shannon's source coding theorem, which states that
any lossless data compression method must have an expected code length greater than or equal to the
entropy of the source.[1]

More precisely, the source coding theorem states that for any source distribution, the expected code length
satisfies $\mathbb{E}_{x \sim P}[\ell(d(x))] \ge \mathbb{E}_{x \sim P}[-\log_b(P(x))]$, where $\ell$ is the
number of symbols in a code word, $d$ is the coding function, $b$ is the number of symbols used to make
output codes and $P$ is the probability of the source symbol. An entropy coding attempts to approach this
lower bound.
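As a concrete illustration (not part of the theorem statement itself), the following sketch computes the entropy of an assumed four-symbol source distribution and compares it with the expected length of a hypothetical binary prefix code for the same symbols; the expected code length can never fall below the entropy:

```python
import math

# Hypothetical source distribution over four symbols (an assumption for
# illustration; any distribution behaves the same way).
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# A hypothetical binary prefix code d(x) for the same symbols.
code = {"a": "0", "b": "10", "c": "110", "d": "111"}

# Shannon entropy with b = 2 output symbols: H = -sum P(x) * log2 P(x)
entropy = -sum(p * math.log2(p) for p in probs.values())

# Expected code length: E[len(d(x))]
expected_length = sum(probs[s] * len(code[s]) for s in probs)

print(f"entropy              = {entropy:.3f} bits/symbol")
print(f"expected code length = {expected_length:.3f} bits/symbol")
assert expected_length >= entropy - 1e-12  # the source coding bound
```

For this dyadic distribution the two values coincide at 1.75 bits/symbol, which is why Huffman coding is optimal here; for other distributions the expected code length strictly exceeds the entropy.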

Two of the most common entropy coding techniques are Huffman coding and arithmetic coding.[2] If the
approximate entropy characteristics of a data stream are known in advance (especially for signal
compression), a simpler static code may be useful. These static codes include universal codes (such as
Elias gamma coding or Fibonacci coding) and Golomb codes (such as unary coding or Rice coding).
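For illustration only, here is a minimal sketch of three of the static codes mentioned above (unary, Elias gamma, and Rice coding) as Python functions returning bit strings; bit conventions vary between implementations, so these follow one common choice:

```python
def unary(n: int) -> str:
    """Unary code for n >= 0: n ones followed by a terminating zero."""
    return "1" * n + "0"

def elias_gamma(n: int) -> str:
    """Elias gamma code for n >= 1: (len-1) zeros, then n in binary."""
    b = bin(n)[2:]                  # binary representation, MSB first
    return "0" * (len(b) - 1) + b

def rice(n: int, k: int) -> str:
    """Rice code with parameter k: quotient in unary, remainder in k bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return unary(q) + format(r, f"0{k}b")

# Small values get short codewords, which is why these static codes suit
# sources whose symbol probabilities decay with magnitude.
print(elias_gamma(1), elias_gamma(5), rice(9, k=2))  # -> 1 00101 11001
```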

Since 2014, data compressors have started using the asymmetric numeral systems family of entropy coding
techniques, which allows combining the compression ratio of arithmetic coding with a processing cost
similar to that of Huffman coding.
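The following is a minimal, non-streaming sketch of the range variant of ANS (rANS) using Python's arbitrary-precision integers; the frequency table is an assumption for illustration, and real compressors add renormalization so the state fits in a machine word, which is omitted here for clarity:

```python
def build_cum(freqs):
    """Cumulative frequency table: slots cum[s] .. cum[s+1]-1 belong to symbol s."""
    cum = [0]
    for f in freqs:
        cum.append(cum[-1] + f)
    return cum

def rans_encode(symbols, freqs):
    M, cum = sum(freqs), build_cum(freqs)
    x = 1                              # initial state
    for s in reversed(symbols):        # encode in reverse so decoding runs forward
        f, c = freqs[s], cum[s]
        x = (x // f) * M + (x % f) + c
    return x

def rans_decode(x, freqs, count):
    M, cum = sum(freqs), build_cum(freqs)
    out = []
    for _ in range(count):
        r = x % M                      # the slot identifies the symbol
        s = next(i for i in range(len(freqs)) if cum[i] <= r < cum[i + 1])
        x = freqs[s] * (x // M) + r - cum[s]
        out.append(s)
    return out

# Round trip with an assumed frequency table (symbol i occurs freqs[i] times per M = 10).
freqs = [6, 3, 1]
data = [0, 0, 1, 2, 0, 1, 0, 0]
state = rans_encode(data, freqs)
assert rans_decode(state, freqs, len(data)) == data
```

Each encoding step grows the state by roughly log2(M / freqs[s]) bits, i.e. the symbol's information content, which is how this single integer approaches the entropy bound.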

Contents
Entropy as a measure of similarity
See also
References
External links

Entropy as a measure of similarity


Besides using entropy coding as a way to compress digital data, an entropy encoder can also be used to
measure the amount of similarity between streams of data and already existing classes of data. This is done
by generating an entropy coder/compressor for each class of data; unknown data is then classified by
feeding the uncompressed data to each compressor and seeing which compressor yields the highest
compression. The coder with the best compression is probably the coder trained on the data that was most
similar to the unknown data.
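As a rough sketch of this idea, using a general-purpose compressor (zlib) in place of a purpose-built entropy coder trained per class, one can measure how many extra compressed bytes the unknown data adds on top of a reference sample from each class and pick the class with the smallest increase; the class names and samples below are assumptions for illustration:

```python
import zlib

def added_bytes(reference: bytes, unknown: bytes) -> int:
    """Extra compressed bytes needed for `unknown` on top of `reference`."""
    return len(zlib.compress(reference + unknown)) - len(zlib.compress(reference))

def classify(unknown: bytes, classes: dict[str, bytes]) -> str:
    """Pick the class whose reference data lets `unknown` compress best."""
    return min(classes, key=lambda name: added_bytes(classes[name], unknown))

# Hypothetical class samples; in practice these would be sizable corpora.
classes = {
    "english": b"the quick brown fox jumps over the lazy dog " * 50,
    "digits":  b"3141592653589793238462643383279502884197 " * 50,
}
print(classify(b"to be or not to be, that is the question", classes))  # likely "english"
```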

See also
Arithmetic coding
Asymmetric numeral systems (ANS)
Context-adaptive binary arithmetic coding (CABAC)
Huffman coding
Range coding
References
1. Duda, Jarek; Tahboub, Khalid; Gadgil, Neeraj J.; Delp, Edward J. (May 2015). "The use of asymmetric
numeral systems as an accurate replacement for Huffman coding"
(https://ieeexplore.ieee.org/document/7170048/). 2015 Picture Coding Symposium (PCS): 65–69.
doi:10.1109/PCS.2015.7170048 (https://doi.org/10.1109%2FPCS.2015.7170048).
2. Huffman, David (1952). "A Method for the Construction of Minimum-Redundancy Codes". Proceedings of
the IRE. Institute of Electrical and Electronics Engineers (IEEE). 40 (9): 1098–1101.
doi:10.1109/jrproc.1952.273898 (https://doi.org/10.1109%2Fjrproc.1952.273898). ISSN 0096-8390
(https://www.worldcat.org/issn/0096-8390).

External links
Information Theory, Inference, and Learning Algorithms
(http://www.inference.phy.cam.ac.uk/mackay/itila/book.html), by David MacKay (2003), gives an
introduction to Shannon theory and data compression, including Huffman coding and arithmetic coding.
Source Coding (http://iphome.hhi.de/wiegand/assets/pdfs/VBpart1.pdf), by T. Wiegand and
H. Schwarz (2011).
