Information Theory (Information Theory by J. V. Stone, 2015)

Page 1

Information Theory (Information Theory by J. V. Stone, 2015)

Page 2

Claude Shannon (1916 – 2001)

Shannon, C. (1948). A mathematical theory of communication. Bell System Technical Journal, 27:379–423.

• A mathematical definition of information
• How much information can be communicated between different elements of a system

Whether we consider computers, evolution, physics, artificial intelligence, quantum computation, or the brain, their behaviors are largely determined by the way they process information.

Information is a well-defined and measurable quantity as important as mass and velocity in describing the universe.

Shannon’s theory ranks alongside those of Darwin–Wallace, Newton, and Einstein.

Page 3

Information vs. Data and Signal vs. Noise

Information (useful data or signal) is embedded in data, which consists of signal and noise (note: signal-to-noise ratio, SNR).

1 bit is the amount of information required to choose between two equally probable alternatives (note: bit vs. binary digit).

n bits produce $m = 2^n$ equally probable alternatives. If you have n bits of information, then you can choose from $m = 2^n$ equally probable alternatives.

m equally probable alternatives possess $n = \log_2 m$ bits of information. If you have to choose between m equally probable alternatives, then you need $n = \log_2 m$ bits of information.
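As a quick numerical illustration (a minimal sketch, not part of the original slides), the relation between the number of equally probable alternatives m and the number of bits n can be checked directly:

```python
import math

def bits_needed(m):
    """Bits of information needed to pick one of m equally probable alternatives: n = log2(m)."""
    return math.log2(m)

def alternatives(n_bits):
    """Number of equally probable alternatives that n bits can distinguish: m = 2**n."""
    return 2 ** n_bits

print(bits_needed(2))      # 1.0 bit  (one fair coin flip)
print(bits_needed(8))      # 3.0 bits (one fair 8-sided die)
print(alternatives(20))    # 1048576 alternatives from 20 bits
```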

Page 4

Finding a Route, Bit by Bit

1 bit is the amount of information required to choose between two equally probable alternatives.

How much information is needed to correctly arrive at the point D when you have no information about the route?

What is the meaning (or usefulness) of the first 1 bit of information?

What is the meaning (or usefulness) of the second 1 bit of information? What if you already knew you should turn right?

What is the meaning (or usefulness) of the third 1 bit of information? What if you already knew that there was a 71% probability that a right turn is correct?
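A brief numeric aside (not in the original slides, but anticipating the surprisal formula introduced later): if a right turn is already believed correct with probability 0.71, then being told "turn right" carries only about half a bit of Shannon information.

```python
import math

p_right = 0.71              # prior probability that "turn right" is correct
h = -math.log2(p_right)     # Shannon information (surprisal) of that answer
print(round(h, 2))          # ≈ 0.49 bits, well under 1 full bit
```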

Page 5

A Million Answers to Twenty Questions

Toy problem of three questions:
• “Is it inanimate?”
• “Is it a mammal?”
• “Is it a cat?”
• Does the ordering of the words matter?

1 bit is the amount of information required to choose between two equally probable alternatives.

With 20 questions, how many words can you deal with? Note $2^{20} = 1{,}048{,}576 \approx 10^6$.

How about 40 questions? Note $2^{40} = 1{,}099{,}511{,}627{,}776 \approx 10^{12}$.

If you made 40 turns to drive from Seoul to Bundang, you have avoided arriving at 1,099,511,627,775 incorrect destinations.
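To make the twenty-questions arithmetic concrete, here is a small illustrative sketch (not from the slides): repeatedly halving a pool of $2^{20}$ equally likely candidates with yes/no questions identifies a single item in exactly 20 questions.

```python
pool_size = 2 ** 20          # 1,048,576 equally probable candidates
questions = 0
while pool_size > 1:
    pool_size //= 2          # each yes/no answer rules out half of the remaining candidates
    questions += 1
print(questions)             # 20
```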

Page 6

An electric current produces a magnetic field.

26 pairs of electric lines or 23 or 1 (how about bidirectional telegraphy?)

Morse code for efficient use of a single channel

• Short codewords for most common letters

E, T, A, N, M, etc.

• Longer codewords for less common letters

Z, Q, J, X, Y, etc.

Telegraphy
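As an informal illustration of the variable-length idea behind Morse code mentioned above (shorter codewords for more frequent letters), here is a small sketch using a handful of real International Morse codewords; the letter set and example text are chosen only for illustration.

```python
# Short codewords for common letters, longer ones for rare letters (International Morse Code).
morse = {
    'E': '.',    'T': '-',                    # most common English letters
    'A': '.-',   'N': '-.',  'M': '--',
    'Q': '--.-', 'Z': '--..', 'J': '.---',    # rare letters get longer codewords
}

def encode(text):
    return ' '.join(morse[ch] for ch in text.upper() if ch in morse)

print(encode("ant"))   # ".- -. -"  (2 + 2 + 1 = 5 dot/dash symbols for three fairly common letters)
```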

Page 7

Image and Pixels

An image is a 2D array of pixels (picture elements).

Each pixel is represented as a number expressing luminance, color, X-ray absorption, proton density, temperature, or other quantities.

An image is a 2D array of numbers, which can be reformatted as a long 1D array of numbers.

The number representing a pixel is a value chosen from a range such as [0, 255].

If a pixel takes one value from m equally probable values, the pixel conveys $n = \log_2 m$ bits of information.
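For example (a minimal sketch, not part of the slides), an 8-bit pixel drawn from 256 equally probable values carries 8 bits, so a 100 × 100 image carries at most 80,000 bits:

```python
import math

levels_per_pixel = 256                         # values in [0, 255]
bits_per_pixel = math.log2(levels_per_pixel)   # n = log2(m) = 8 bits
image_pixels = 100 * 100

print(bits_per_pixel)                    # 8.0
print(image_pixels * bits_per_pixel)     # 80000.0 bits at most, before any compression
```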

Page 8

Most images have predictable internal structures with redundant information.

Encoding method A: coding 1 bit (0 or 1) per pixel for 100 × 100 pixels (10,000 bits of information?)

Encoding method B: coding the locations of white pixels

Encoding method C: coding the numbers of black pixels before the next white pixels

Encoding method D (run-length coding): coding the numbers of pixels preceding the next changes from 0 to 1 or from 1 to 0

All of these methods can be lossless.

What is the amount of information in each of these images?

Binary (B&W) Image
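A minimal run-length coding sketch in the spirit of method D above (illustrative only; the slides do not specify an implementation):

```python
def run_length_encode(bits):
    """Encode a sequence of 0s and 1s as (starting bit, list of run lengths)."""
    runs = []
    current, count = bits[0], 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return bits[0], runs

row = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1]
print(run_length_encode(row))   # (0, [3, 2, 4, 1]) -- long runs compress well
```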

Page 9

Direct coding: 100 × 100 × 8 = 80,000 bits

Difference coding: 100 × 100 × 7 = 70,000 bits (12.5% reduction)

How much information does each pixel contain?
• The smallest number of binary digits required to represent each pixel is equal to the amount of information (measured in bits) implicit in each pixel.
• The average information per pixel turns out to be 5.92 bits.
• The image can be compressed without loss of information.

8-bit Gray Scale Image: pixel values in [0, 255]; pixel-to-pixel differences in [-63, 63]
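A small difference-coding sketch (an illustrative assumption about the scheme, not the slides' exact method): store the first pixel, then successive differences, which typically span a narrower range than the raw values and so need fewer bits per pixel.

```python
def difference_encode(pixels):
    """First value plus successive differences; smooth images give small differences."""
    diffs = [pixels[i] - pixels[i - 1] for i in range(1, len(pixels))]
    return pixels[0], diffs

def difference_decode(first, diffs):
    out = [first]
    for d in diffs:
        out.append(out[-1] + d)
    return out

row = [100, 102, 101, 105, 110, 110]
first, diffs = difference_encode(row)
print(diffs)                                    # [2, -1, 4, 5, 0] -- small values, cheap to encode
print(difference_decode(first, diffs) == row)   # True: the scheme is lossless
```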

Page 10

Data Compression and Difference Coding

About 126 million photoreceptors feed roughly 1 million nerve fibers, so the visual data must be compressed.

Page 11

Encoding and Decoding in Human Vision System

Page 12

Efficient Coding Hypothesis and Evolution

Evolution of sense organs, and of the brains that process data from those organs, is primarily driven by the need to minimize the energy expended for each bit of information acquired from the environment.

The ability to separate signal from noise is fundamental to the Darwin–Wallace theory of evolution by natural selection. Evolution works by selecting the individuals best suited to a particular environment so that, over many generations, information about the environment gradually accumulates within the gene pool.

Thus, natural selection is essentially a means by which information about the environment is incorporated into DNA (deoxyribonucleic acid).

Page 13

Acoustic Signal

Television and Teleaudition?

Page 14

Information Processing System

The ability to separate signal from noise, to extract information from data, is crucial for all information processing systems.

Television and all of those modern devices and systems

Human nervous system and all of those biological systems

Page 15

Television and Broadcasting System

Page 16

Malmivuo and Plonsey, 1995

Human Nervous System

Page 17

Information Processing System

Probability theory, entropy, and information theory.

[Figure: Source (Transmitter) → Data s → Encoder → Input x = g(s) → Channel (+ Noise) → Output y → Decoder → Data s → Receiver]

Page 18

Outcome, Sample Space, Outcome Value, Random Variable, and Probability

[Figure: Experiment (coin flip) → Outcome = head → Mapping: X(head) = 1; Experiment (coin flip) → Outcome = tail → Mapping: X(tail) = 0]

Outcome: head (xh) or tail (xt)

Sample space: Ax = {xh, xt}

Outcome value: 1 or 0

Random variable: X(xh) = 1 and X(xt) = 0

Probability distribution: p(X) = {p(X=xh), p(X=xt)} = {p(xh), p(xt)}

Page 19

Probability Distributions

[Figure: probability distributions for a fair coin, a biased coin, and a fair 8-sided die]

Page 20

Random Variable

A random variable X is a function that maps each outcome x of an experiment (e.g. a coin flip) to a number X(x), which is the outcome value of x.

If the outcome value of x is 1, this may be written as X = 1 or x = 1.

Examples for a 6-sided die include

$$X = \begin{cases} 1, & \text{if } x \text{ is } 1 \\ 2, & \text{if } x \text{ is } 2 \\ 3, & \text{if } x \text{ is } 3 \\ 4, & \text{if } x \text{ is } 4 \\ 5, & \text{if } x \text{ is } 5 \\ 6, & \text{if } x \text{ is } 6 \end{cases} \quad \text{or} \quad X = \begin{cases} 0, & \text{if } x \text{ is } 1, 3, \text{ or } 5 \\ 1, & \text{if } x \text{ is } 2, 4, \text{ or } 6 \end{cases}$$

The random variable X is associated with its probability distribution $p(X) = \{p(x_1), p(x_2), \cdots, p(x_k)\}$ and sample space $A_x = \{x_1, x_2, \cdots, x_k\}$.
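These two mappings can be written directly as functions; a small sketch (illustrative only, not from the slides):

```python
import random

# Two different random variables defined on the same 6-sided-die experiment.
def X_face(x):
    """Maps each die outcome to its face value: X(x) = x."""
    return x

def X_even(x):
    """Maps outcomes 2, 4, 6 to 1 and outcomes 1, 3, 5 to 0."""
    return 1 if x in (2, 4, 6) else 0

outcome = random.randint(1, 6)           # one roll of a fair die
print(outcome, X_face(outcome), X_even(outcome))
```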

Page 21

Information Processing System

[Figure: Source (Transmitter) → Data s → Encoder → Input x = g(s) → Channel (+ Noise) → Output y → Decoder → Data s → Receiver]

Page 22

Source and Message

A message is an ordered sequence of k symbols: $\mathbf{s} = (s_1, s_2, \cdots, s_k)$.

Each symbol $s_i$ is an outcome (value) of a random variable $S$ with a sample space $A_s = \{s_1, s_2, \cdots, s_\alpha\}$.

The random variable $S$ is associated with the probability distribution $p(S) = \{p(s_1), p(s_2), \cdots, p(s_\alpha)\}$ and $\sum_{i=1}^{\alpha} p(s_i) = 1$.

Page 23

Encoder and Channel Input

An encoder transforms the message $\mathbf{s}$ to a channel input $\mathbf{x} = g(\mathbf{s})$.

The channel input $\mathbf{x} = (x_1, x_2, \cdots, x_n)$ is an ordered sequence of codewords $x_i$.

Each codeword $x_i$ is an outcome (value) of a random variable $X$ with a sample space $A_x = \{x_1, x_2, \cdots, x_m\}$, which is a codebook.

The random variable $X$ is associated with the probability distribution $p(X) = \{p(x_1), p(x_2), \cdots, p(x_m)\}$ and $\sum_{i=1}^{m} p(x_i) = 1$.

The probability of a codeword $x_i$ is $p(x_i)$.

Page 24

Code

A code is a list of symbols and their corresponding codewords.

A code can be envisaged as a look-up table such as

Symbol Codeword

𝑠1 = 3 𝑥1 = 000

𝑠2 = 6 𝑥2 = 001

𝑠3 = 9 𝑥3 = 010

𝑠4 = 12 𝑥4 = 011

𝑠5 = 15 𝑥5 = 100

𝑠6 = 18 𝑥6 = 101

𝑠7 = 21 𝑥7 = 110

𝑠8 = 24 𝑥8 = 111
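This look-up table translates directly into code; a minimal sketch (the symbol values and codewords are taken from the table above):

```python
# Codebook from the table: 8 symbols, each mapped to a 3-bit codeword (log2 8 = 3).
codebook = {3: '000', 6: '001', 9: '010', 12: '011',
            15: '100', 18: '101', 21: '110', 24: '111'}
decode_table = {v: k for k, v in codebook.items()}

def encode(message):
    return [codebook[s] for s in message]

def decode(codewords):
    return [decode_table[x] for x in codewords]

msg = [3, 24, 9]
print(encode(msg))            # ['000', '111', '010']
print(decode(encode(msg)))    # [3, 24, 9]
```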

Page 25

Channel and Channel Output

A channel produces a channel output $\mathbf{y} = (y_1, y_2, \cdots, y_n)$, which is an ordered sequence of n values $y_i$.

Each channel output value $y_i$ is an outcome (value) of a random variable $Y$ with a sample space $A_y = \{y_1, y_2, \cdots, y_m\}$.

The random variable $Y$ is associated with the probability distribution $p(Y) = \{p(y_1), p(y_2), \cdots, p(y_m)\}$ and $\sum_{i=1}^{m} p(y_i) = 1$.

The probability of the channel output value $y_i$ is $p(y_i)$.

Page 26

Decoder and Received Message

A decoder transforms the channel output $\mathbf{y}$ to a received message $\mathbf{r} = g^{-1}(\mathbf{y})$ using the code.

The received message may contain an error, as $\mathbf{r} = g^{-1}(\mathbf{x} + \boldsymbol{\eta}) = \mathbf{s} + \mathbf{e}$, due to channel noise.

The error rate of a code is the number of incorrect inputs associated with the codebook divided by the number of possible inputs.
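As an illustrative sketch of this pipeline (assumed details: a binary symmetric channel that flips each bit with probability 0.1, and the 3-bit codebook from the earlier look-up table), one can estimate how often the decoded message differs from the source message:

```python
import random

codebook = {3: '000', 6: '001', 9: '010', 12: '011',
            15: '100', 18: '101', 21: '110', 24: '111'}
decode_table = {v: k for k, v in codebook.items()}

def channel(bits, p_flip=0.1):
    """Binary symmetric channel: flip each bit independently with probability p_flip."""
    return ''.join(b if random.random() > p_flip else str(1 - int(b)) for b in bits)

message = [random.choice(list(codebook)) for _ in range(10_000)]
received = [decode_table[channel(codebook[s])] for s in message]
errors = sum(r != s for r, s in zip(received, message))
print(errors / len(message))   # ≈ 1 - 0.9**3 ≈ 0.27 symbol error rate with no error correction
```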

Page 27

Channel Capacity

Channel capacity is the maximum amount of information that can be communicated through the channel.

Channel capacity is measured in terms of the amount of information per symbol, as bits/symbol.

If a channel communicates n symbols per second, its capacity is expressed in terms of information per second, as bits/s.

If an alphabet of a symbols is transmitted through a noiseless channel at n symbols/s, its channel capacity is $n \log_2 a$ bits/s.

The rate of information through a given channel is less than the channel capacity due to noise and/or code inefficiency.

The channel capacity is the maximum rate that can be achieved when considered over all possible codes.

Page 28

Channel Capacity

[Figure: Data s → Encoder → Input x = g(s) → Channel (+ Noise) → Output y → Decoder → Data s]

Page 29

Shannon’s Desiderata: Properties of Information

Continuity. The amount of information associated with an outcome increases or decreases continuously (smoothly) as the probability of the outcome changes.

Symmetry. The amount of information associated with a sequence of outcomes does not depend on the order of the sequence.

Maximal value. The amount of information associated with a set of outcomes cannot be increased if those outcomes are already equally probable.

Additivity. The information associated with a set of outcomes is obtained by adding the information of individual outcomes.

There is only one definition of information satisfying all four of these properties.

Page 30

Information as Surprise

The Shannon information or surprisal of an outcome (value) $x$ with probability $p(x)$ is

$$h(x) = \log_2 \frac{1}{p(x)} = -\log_2 p(x) \text{ bits}.$$

Shannon information is a measure of surprise.

Page 31

Entropy is Average Shannon Information

Entropy is the average Shannon information or surprisal of a random variable $X$ with probability distribution $p(X) = \{p(x_1), p(x_2), \cdots, p(x_m)\}$:

$$H(X) = \sum_{i=1}^{m} p(x_i) \log_2 \frac{1}{p(x_i)} = E[h(x)] \text{ bits}.$$

For a sample of n observed outcome values, the entropy can be approximated by the average surprisal

$$H(X) \approx \bar{h}(x) = \frac{1}{n} \sum_{i=1}^{n} \log_2 \frac{1}{p(x_i)} = \frac{1}{n} \sum_{i=1}^{n} h(x_i) \text{ bits}.$$

The entropy of a random variable with m equally probable outcome values is the logarithm of that number:

$$H(X) = \log_2 m \text{ bits}, \quad \text{i.e. } m = 2^{H(X)} \text{ equally probable values}.$$
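A short numerical check of these formulas (a minimal sketch, not from the slides), using the fair coin and the fair 8-sided die that appear on the next slide:

```python
import math

def entropy(p):
    """H(X) = sum_i p_i * log2(1/p_i), in bits."""
    return sum(pi * math.log2(1 / pi) for pi in p if pi > 0)

print(entropy([0.5, 0.5]))    # 1.0 bit  (fair coin)
print(entropy([1/8] * 8))     # 3.0 bits (fair 8-sided die)
print(entropy([0.9, 0.1]))    # ≈ 0.47 bits (biased coin: less uncertainty than a fair coin)
```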

Page 32

Entropy is a Measure of Uncertainty

The average uncertainty of a variable X is summarized by its entropy H(X).

If we are told the value of X, the amount of information we get is, on average, exactly equal to its entropy.

Fair coin: H(X) = 1 bit. Fair die with 8 sides: H(X) = 3 bits.

Page 33

Satellite TV

Amount of data in HDTV images
• 1920 (H) × 1080 (V) × 3 (R, G, B) = 6,220,800 pixel values
• 256 values per pixel means $\log_2 256 = 8$ bits/pixel
• 6,220,800 × 8 = 49,766,400 bits/image ≈ 50 million bits/image
• Frame rate of 30 images/s produces about 1,500 megabits/s

Satellite channel
• Channel capacity = 19.2 megabits/s

Remedies for an effective compression factor of about 78 ≈ 1,500/19.2:
• Squeeze all the redundant data out of the images spatially and temporally (Moving Picture Experts Group, MPEG, using the cosine transform).
• Remove components invisible to the human eye (filtering; high resolution for intensity, low resolution for color).
• Recode the resultant data so that all symbols occur equally often.
• Add a small amount of redundancy for error correction.
• How about MP3 (MPEG-1 or MPEG-2 Audio Layer III) and JPEG (Joint Photographic Experts Group)?
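The arithmetic on this slide can be checked in a few lines (a minimal sketch):

```python
values_per_image = 1920 * 1080 * 3          # R, G, B values per HDTV frame
bits_per_image = values_per_image * 8       # 8 bits per value
bit_rate = bits_per_image * 30              # 30 frames per second

print(values_per_image)                     # 6,220,800
print(bits_per_image)                       # 49,766,400 bits/image
print(bit_rate / 1e6)                       # ≈ 1,493 megabits/s (≈ 1,500)
print(bit_rate / 1e6 / 19.2)                # ≈ 78: the required compression factor
```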

Page 34

Evolution by Natural Selection

Evolution is a process where natural selection acts as a mechanism for transferring information from the environment to the collective genome of a species.
• Each individual is asked, in effect, “Are the genes in this individual better or worse than average?”
• The answer comes in units called “fitness”, measured by the number of grown-up offspring.
• Over many generations, information about the environment becomes implicit in the genome of a species.

The human genome contains about $3 \times 10^9$ (3 billion) pairs of nucleotides. Each nucleotide comprises one element in one half of the double helix of the DNA molecule.

The human genome makes use of four nucleotides: adenine (A), guanine (G), thymine (T), and cytosine (C).

One may define the fitness as the proportion of good genes above 50%.

Page 35

Does Sex Accelerate Evolution? (MacKay, 2003)

The mutation rate of a genome of $N$ nucleotides is the probability that each gene will be altered from one generation to the next (the average proportion of altered genes).
• For a sexual population, the largest tolerable mutation rate is about $1/\sqrt{N}$.
• For an asexual population, the largest tolerable mutation rate is about $1/N$.

For a sexually reproducing population, the rate at which a genome of $N$ nucleotides accumulates information from the environment can be as large as $\sqrt{N}$ bits/generation.
• For $N = 3 \times 10^9$, the rate is about 54,772 bits/generation.
• The collective genome of the current generation would have about 54,772 bits more information about its environment than the genomes of the previous generation.

For an asexually reproducing population, the rate at which a genome of $N$ nucleotides accumulates information from the environment is about 1 bit/generation.
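The 54,772 figure is simply $\sqrt{N}$ for $N = 3 \times 10^9$; a one-line check (illustrative only):

```python
import math

N = 3e9                          # nucleotides in the human genome
print(int(math.sqrt(N)))         # 54772 bits/generation for a sexual population (MacKay, 2003)
print(int(math.sqrt(N)) / 1)     # ratio of sexual (√N bits) to asexual (1 bit) information rates
```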

Page 36

Does Sex Accelerate Evolution? (MacKay, 2003)

Sexual reproduction: new information ≤ $\sqrt{N}$ bits/generation.

Asexual reproduction: new information ≈ 1 bit/generation.

A large genome increases evolutionary speed, but it also decreases tolerance to mutations.
• If N is large, evolution is fast but the tolerable mutation rate is small.
• If N is small, evolution is slow but the tolerable mutation rate is large.
• Evolution should have found a compromise in N.

The collective genome of a species should maximize the Shannon information acquired about its environment for each joule of expended energy. This efficient evolution is an extension of the efficient coding hypothesis.

            New information            Mutation rate
Sexual      $\sqrt{N}$                 $1/\sqrt{N}$
Asexual     1                          $1/N$
Ratio       $\sqrt{N}$                 $\sqrt{N}$

Page 37

The Human Genome: How Much Information?

The alphabet of DNA is A, G, T, and C.

For a message with N letters, the number of possible messages is $m = 4^N$.

The maximum amount of information conveyed by this message is $H = \log_2 4^N = N \log_2 4 = 2N$ bits.

For the human genome with $N = 3 \times 10^9$ nucleotides, $H = 6 \times 10^9$, or 6 billion bits of information.

6 billion bits of information is stored in the DNA inside the nucleus of every cell.

Page 38

Enough DNA to Wire Up a Brain?

Neurons are the only connection between us and the physical world.

$10^{11}$ neurons/brain × $10^4$ synapses/neuron = $10^{15}$ synapses/brain

$3 \times 10^9$ nucleotides in the human genome

If each nucleotide specified one synapse, the genome could encode only about one millionth of all synapses, even without using any DNA for the rest of the body.

Therefore, there is not enough DNA in the human genome to specify every single synaptic connection in the brain.

The human brain must learn! Learning provides a way for a brain to use information from the environment to specify the correct set of all $10^{15}$ synapses.
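A quick order-of-magnitude check of this argument (a minimal sketch):

```python
neurons = 1e11
synapses_per_neuron = 1e4
synapses = neurons * synapses_per_neuron    # 1e15 synapses per brain

nucleotides = 3e9                           # nucleotides in the human genome
print(nucleotides / synapses)               # 3e-06: the genome could label only ~3 in a million synapses
```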

Page 39

“Everything must be based on a simple idea. And … this idea, once we have finally discovered it, will be so compelling, so beautiful, that we will say to one another, yes, how could it have been any different?”

“… the universe is made of information; matter and energy are only incidental.”

John Wheeler, 1986

Page 40

EOD