Chapter 7 Image Compression - USTC



Page 1: Chapter7 Image Compression - USTC

Chapter 7 Image Compression

• Preview
• 7.1 Introduction
• 7.2 Fundamental concepts and theories
• 7.3 Basic coding methods
• 7.4 Predictive coding
• 7.5 Transform coding
• 7.6 Introduction to international standards


Page 2: Chapter7 Image Compression - USTC

7.1 Introduction
7.1.1 Why code data?

• To reduce storage volume
• To reduce transmission time

– One colour image
  • 760 by 580 pixels
  • 3 channels, each 8 bits
  • 1.3 Mbyte
– Video data
  • same resolution
  • 25 frames per second
  • 33 Mbyte/second

7.1.2 Classification

Lossless compression: reversible
Lossy compression: irreversible
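These figures are simple arithmetic; a minimal Python sketch to verify them, assuming one byte per channel:

```python
# Storage for one 760x580 colour image and for video at the same resolution.
width, height, channels, bytes_per_channel = 760, 580, 3, 1

image_bytes = width * height * channels * bytes_per_channel
print(f"one image : {image_bytes / 1e6:.1f} Mbyte")              # ~1.3 Mbyte

video_bytes_per_second = image_bytes * 25                        # 25 frames/second
print(f"video rate: {video_bytes_per_second / 1e6:.1f} Mbyte/second")  # ~33 Mbyte/second
```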


Page 3: Chapter7 Image Compression - USTC

7.1 Introduction
7.1.3 Compression methods

• Entropy coding: Huffman coding, arithmetic coding
• Run-length coding: G3, G4
• LZW coding: arj (DOS), zip (Windows), compress (UNIX)
• Predictive coding: linear prediction, non-linear prediction
• Transform coding: DCT, KLT, Walsh-T, WT (wavelet)
• Others: vector quantization (VQ), fractal coding, the second-generation coding


Page 4: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.1 Data redundancy

If n_1 denotes the original data amount and n_2 denotes the compressed data amount, then the relative data redundancy R_D can be defined as

R_D = 1 - \frac{1}{C_R}

where C_R, commonly called the compression ratio, is

C_R = \frac{n_1}{n_2}

For example, when n_2 = n_1, C_R = 1 and R_D = 0; when n_2 \ll n_1, C_R \to \infty and R_D \to 1.


Page 5: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.1 Data redundancy

• Other expressions of the compression ratio:

C_R = \frac{n_2}{n_1} \qquad \text{or} \qquad C_R = \frac{8\ \text{bits/pixel}}{L\ \text{bits/pixel}}, \quad L = \frac{n_2}{M \times N}

where L is the average bit rate of the compressed M×N image.

• Three basic types of image redundancy:

Coding redundancy
Interpixel redundancy
Psychovisual redundancy


Page 6: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.1 Data redundancy: coding redundancy

If r_k represents the gray levels of an image and each r_k occurs with probability p_r(r_k), then

p_r(r_k) = \frac{n_k}{n}, \qquad k = 0, 1, \ldots, L-1

where L is the number of gray levels, n_k is the number of times the kth gray level appears in the image, and n is the total number of pixels in the image.


Page 7: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.1 Data redundancy: coding redundancy

If the number of bits used to represent each value of r_k is l(r_k), then the average number of bits required to represent each pixel is

L_{avg} = \sum_{k=0}^{L-1} l(r_k)\, p_r(r_k)

For example:

rk   Pr(rk)   Code1   l1(rk)   Code2    l2(rk)
r0   0.19     000     3        11       2
r1   0.25     001     3        01       2
r2   0.21     010     3        10       2
r3   0.16     011     3        001      3
r4   0.08     100     3        0001     4
r5   0.06     101     3        00001    5
r6   0.03     110     3        000001   6
r7   0.02     111     3        000000   6

Page 8: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.1 Data redundancy: coding redundancy

L_{avg} = \sum_{k=0}^{7} l_2(r_k)\, p_r(r_k)
        = 2(0.19) + 2(0.25) + 2(0.21) + 3(0.16) + 4(0.08) + 5(0.06) + 6(0.03) + 6(0.02)
        = 2.7 \text{ bits}

C_R = 3 / 2.7 = 1.11

R_D = 1 - \frac{1}{1.11} = 0.099
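A minimal sketch reproducing this computation from the table above:

```python
probs = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
len2  = [2, 2, 2, 3, 4, 5, 6, 6]          # l2(rk) of the variable-length code

L_avg = sum(l * p for l, p in zip(len2, probs))
C_R   = 3 / L_avg                          # natural 3-bit code vs. Code 2
R_D   = 1 - 1 / C_R

print(round(L_avg, 2))   # 2.7 bits
print(round(C_R, 2))     # 1.11
print(round(R_D, 3))     # 0.1 (the slide's 0.099 comes from rounding C_R to 1.11)
```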


Page 9: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.1 Data redundancy: interpixel redundancy


Page 10: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.1 Data redundancy: interpixel redundancy

Autocorrelation coefficients:

\gamma(\Delta n) = \frac{A(\Delta n)}{A(0)}

where

A(\Delta n) = \frac{1}{N - \Delta n} \sum_{y=0}^{N-1-\Delta n} f(x, y)\, f(x, y + \Delta n)
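A minimal sketch of this computation along one image row; the function name and the test signals are illustrative, not from the slides:

```python
import numpy as np

def autocorr(row, max_shift):
    """gamma(dn) = A(dn)/A(0), with
    A(dn) = 1/(N-dn) * sum_{y=0}^{N-1-dn} f(y) * f(y+dn)."""
    row = row.astype(np.float64)
    N = row.size
    A = np.array([(row[:N - dn] * row[dn:]).sum() / (N - dn)
                  for dn in range(max_shift + 1)])
    return A / A[0]

rng = np.random.default_rng(0)
smooth = np.cumsum(rng.normal(size=256))   # strongly correlated "row"
noise  = rng.normal(size=256)              # uncorrelated "row"
print(autocorr(smooth, 5))                 # stays close to 1: interpixel redundancy
print(autocorr(noise, 5))                  # drops toward 0 immediately
```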


Page 11: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.1 Data redundancy: interpixel redundancy


Page 12: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.1 Data redundancy: psychovisual redundancy

Psychovisual redundancy is associated with real or quantifiable visual information that is not essential for normal visual processing.

(Figure: the same image quantized at 8 bits/pixel, 4 bits/pixel, and 2 bits/pixel.)


Page 13: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.2 Fidelity criteria: objective

Let f(x, y) represent an input image and \hat{f}(x, y) denote the decompressed image. For any value of x and y, the error e(x, y) between f(x, y) and \hat{f}(x, y) can be defined as

e(x, y) = \hat{f}(x, y) - f(x, y)

so that the total error between the two images is

\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \left[ \hat{f}(x, y) - f(x, y) \right]


Page 14: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.2 Fidelity criteria: objective

The root-mean-square error e_{rms} between f(x, y) and \hat{f}(x, y) is

e_{rms} = \left[ \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \left[ \hat{f}(x, y) - f(x, y) \right]^2 \right]^{1/2}

The mean-square signal-to-noise ratio is defined as

SNR_{ms} = 10 \lg \frac{\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \hat{f}(x, y)^2}{\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \left[ \hat{f}(x, y) - f(x, y) \right]^2}


Page 15: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.2 Fidelity criteria: objective

The signal-to-noise ratio is defined as

SNR = 10 \lg \frac{\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \left[ \hat{f}(x, y) - \bar{\hat{f}} \right]^2}{\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \left[ \hat{f}(x, y) - f(x, y) \right]^2}

where \bar{\hat{f}} is the mean of the decompressed image. The peak signal-to-noise ratio is defined as

PSNR = 10 \lg \frac{\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f_{max}^2}{\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \left[ \hat{f}(x, y) - f(x, y) \right]^2} = 10 \lg \frac{f_{max}^2}{e_{rms}^2}
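A minimal sketch of these two objective criteria; the function name is illustrative:

```python
import numpy as np

def fidelity(f, f_hat, f_max=255.0):
    """Root-mean-square error and peak signal-to-noise ratio (in dB)
    between an image f and its decompressed version f_hat."""
    f, f_hat = np.asarray(f, float), np.asarray(f_hat, float)
    mse   = ((f_hat - f) ** 2).mean()
    e_rms = np.sqrt(mse)
    psnr  = 10 * np.log10(f_max ** 2 / mse)   # = 10 lg(f_max^2 / e_rms^2)
    return e_rms, psnr
```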

Page 16: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.2 Fidelity criteria: subjective

Side-by-side comparisons.


Page 17: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.3 Image compression models

Source encoder model:

f(x, y) → [decorrelation] → [quantizer] → [symbol encoder] → compressed data
           removes           removes       removes
           interpixel        psychovisual  coding
           redundancy        redundancy    redundancy


Page 18: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.4 Elements of information theory: terms

1948, Claude Shannon, "A mathematical theory of communication"

News: data
Information: contents
Information source: symbols
  – Memoryless source
  – Markov source


Page 19: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.4 Elements of information theory: self-information

A random event E that occurs with probability P(E) is said to contain I(E) units of information:

I(E) = \log \frac{1}{P(E)} = -\log P(E)

For example: P(E) = 0 \Rightarrow I(E) = \infty; \qquad P(E) = 1 \Rightarrow I(E) = 0

Units: base 2, bits (binary digits); base e, nats (natural digits); base 10, hartleys.


Page 20: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.4 Elements of information theory: entropy of the source

Definition: suppose X = \{x_0, x_1, \ldots, x_{N-1}\} is a discrete random variable, and the probability of x_j is p(x_j), namely

\begin{bmatrix} X \\ P(X) \end{bmatrix} = \begin{bmatrix} x_0 & x_1 & \cdots & x_{N-1} \\ p(x_0) & p(x_1) & \cdots & p(x_{N-1}) \end{bmatrix}

Then

H(X) = \sum_{j=0}^{N-1} p(x_j) \log \frac{1}{p(x_j)} = -\sum_{j=0}^{N-1} p(x_j) \log p(x_j)


Page 21: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.4 Elements of information theory: entropy of the source

Properties:

(1) H(X) \ge 0

(2) H(1, 0, \ldots, 0) = H(0, 1, \ldots, 0) = \cdots = H(0, 0, \ldots, 1) = 0

(3) H(X) = H(x_0, \ldots, x_{N-1}) \le \log N, with equality when p(x_j) = 1/N for all j

Example: X = \{1, 2, 3, 4, 5, 6, 7, 8\}, p_j = 1/8 for each j:

H(X) = \sum_{j=1}^{8} \frac{1}{8} \log_2 8 = 3 \text{ bits/symbol}
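A minimal sketch of the definition, checking the example and property (2); the function name is illustrative:

```python
import numpy as np

def entropy(probs):
    """H(X) = sum_j p(x_j) * log2(1 / p(x_j)), in bits per symbol."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                          # terms with p = 0 contribute nothing
    return (p * np.log2(1.0 / p)).sum()

print(entropy([1/8] * 8))                 # 3.0 -- the uniform case, log2 N
print(entropy([1, 0, 0]))                 # 0.0 -- a deterministic source
```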

Page 22: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.4 Elements of information theory: conditional entropy

Definitions:

H_0(X) = \sum_i p(x_i) \log \frac{1}{p(x_i)}

H_1(X) = \sum \sum p(x_{i-1})\, p(x_i \mid x_{i-1}) \log \frac{1}{p(x_i \mid x_{i-1})}

H_2(X) = \sum \sum p(x_{i-1} x_{i-2})\, p(x_i \mid x_{i-1} x_{i-2}) \log \frac{1}{p(x_i \mid x_{i-1} x_{i-2})}

H_m(X) = \sum \sum p(x_{i-1} \cdots x_{i-m})\, p(x_i \mid x_{i-1} \cdots x_{i-m}) \log \frac{1}{p(x_i \mid x_{i-1} \cdots x_{i-m})}

Property (for an m-th order Markov source):

H_\infty = H_m < H_{m-1} < \cdots < H_2 < H_1 < H_0


Page 23: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.4 Elements of information theory: using information theory

A simple 8-bit image:

21 21 21 95 169 243 243 243
21 21 21 95 169 243 243 243
21 21 21 95 169 243 243 243
21 21 21 95 169 243 243 243

Probabilities of the source symbols:

Gray level   Count   Probability
21           12      3/8
95           4       1/8
169          4       1/8
243          12      3/8

Zero-order entropy:

H_0(X) = \sum_i p(x_i) \log \frac{1}{p(x_i)} = 1.81 \text{ bits/pixel}


Page 24: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.4 Elements of information theory: using information theory

Relative frequency of pairs of pixels:

Gray-level pair   Count   Probability
(21, 21)          8       2/8
(21, 95)          4       1/8
(95, 169)         4       1/8
(169, 243)        4       1/8
(243, 243)        8       2/8
(243, 21)         4       1/8

First-order estimate of the entropy (the entropy of the pair distribution, divided by 2 to express it per pixel):

H_1(X) = 1.25 \text{ bits/pixel}
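A minimal sketch of both estimates for this image, assuming the slide's pair counts wrap circularly within each row (which reproduces the table above):

```python
import numpy as np
from collections import Counter

row = [21, 21, 21, 95, 169, 243, 243, 243]
img = np.array([row] * 4)                              # the sample image

# Zero-order entropy from the gray-level histogram.
p0 = np.array(list(Counter(img.ravel()).values())) / img.size
H0 = (p0 * np.log2(1 / p0)).sum()                      # ~1.81 bits/pixel

# Entropy of the pair distribution (8 circular pairs per row), per pixel.
pairs = Counter((row[i], row[(i + 1) % 8]) for i in range(8))
p1 = np.array(list(pairs.values())) * 4 / 32.0         # 32 pairs in total
H1 = (p1 * np.log2(1 / p1)).sum() / 2                  # 1.25 bits/pixel
print(round(H0, 2), round(H1, 2))
```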


Page 25: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.4 Elements of information theory: noiseless coding theorem

Also called Shannon's first theorem. It defines the minimum average code word length per source symbol that can be achieved:

H(X) \le L_{avg}

For a memoryless (zero-memory) source:

H(X) = \sum_i p(x_i) \log \frac{1}{p(x_i)}

But image data are usually a Markov source; how should they be coded?


Page 26: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.4 Elements of information theory: noiseless coding theorem

Coding steps:

X (Markov) → [decorrelation] → Y (memoryless) → [entropy coding] → L_Y
X (Markov) → [entropy coding] → L_X

H(X) = H_m(X) < H_0(X),  L_X = H_0(X)
H(Y) = H_0(Y),           L_Y = H_0(Y)
H(X) = H(Y)  ⇒  L_Y = H_m(X) < L_X


Page 27: Chapter7 Image Compression - USTC

7.2 Fundamental concepts and theories
7.2.4 Elements of information theory: noisy coding theorem

For a given distortion D, the rate-distortion function R(D) is less than H(X), and R(D) is the minimum achievable coding rate L_{avg}.


Page 28: Chapter7 Image Compression - USTC

7.3 Basic coding methods
7.3.1 Huffman coding

Huffman, 1952.

• Codes low-probability symbols with long words and high-probability symbols with short words
• Yields the smallest possible number of code symbols per source symbol
• The source symbols must be coded one at a time


Page 29: Chapter7 Image Compression - USTC

7.3 Basic coding methods
7.3.1 Huffman coding

Step 1: create a series of source reductions by ordering the probabilities.
Step 2: combine the two lowest-probability symbols into a single symbol that replaces them in the next source reduction.
Step 3: repeat step 2 until a reduced source with only two symbols remains.

For example:

Symbol   Probability   1      2      3      4
A        0.4           0.4    0.4    0.4    0.6
B        0.3           0.3    0.3    0.3    0.4
C        0.1           0.1    0.2    0.3
D        0.1           0.1    0.1
E        0.06          0.1
F        0.04


Page 30: Chapter7 Image Compression - USTC

7.3 Basic coding methods
7.3.1 Huffman coding

Step 4: code each reduced source with 0 and 1, starting with the smallest source and working back to the original source.

Symbol   Probability   Code
A        0.4           1
B        0.3           00
C        0.1           011
D        0.1           0100
E        0.06          01010
F        0.04          01011

L_{avg} = 0.4 \times 1 + 0.3 \times 2 + 0.1 \times 3 + 0.1 \times 4 + 0.06 \times 5 + 0.04 \times 5 = 2.2 \text{ bits/symbol}

H(A) = -\sum_{i=1}^{6} p(a_i) \log p(a_i) = 2.14 \text{ bits/symbol}

L_{avg} > H(X)
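A minimal sketch of the reduction-and-merge procedure using a heap; the function name is illustrative. Ties may be broken differently from the slide, so individual code words can differ, but the average length is still optimal:

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a binary Huffman code for {symbol: probability}."""
    tie = count()                         # tie-breaker so heapq never compares dicts
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # the two lowest-probability groups
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

probs = {"A": 0.4, "B": 0.3, "C": 0.1, "D": 0.1, "E": 0.06, "F": 0.04}
code = huffman_code(probs)
L_avg = sum(probs[s] * len(w) for s, w in code.items())
print(code)
print(L_avg)    # 2.2 bits/symbol, as on the slide
```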

Page 31: Chapter7 Image Compression - USTC

7.3 Basic coding methods
7.3.2 Arithmetic coding

• An entire sequence of source symbols (a message) is assigned a single arithmetic code word
• The code word itself defines an interval of real numbers between 0 and 1

The encoding process of arithmetic coding can be explained through the following example.


Page 32: Chapter7 Image Compression - USTC

7.3 Basic coding methods
7.3.2 Arithmetic coding: example (encoding)

• Let us assume that the source symbols are {a1, a2, a3, a4} and the probabilities of these symbols are {0.2, 0.2, 0.4, 0.2}
• To encode the message a1 a2 a3 a3 a4
• The interval [0, 1) is divided into four sub-intervals: [0.0, 0.2), [0.2, 0.4), [0.4, 0.8), [0.8, 1.0)

Symbol   Probability   Initial interval
a1       0.2           [0.0, 0.2)
a2       0.2           [0.2, 0.4)
a3       0.4           [0.4, 0.8)
a4       0.2           [0.8, 1.0)

Page 33: Chapter7 Image Compression - USTC

7.3 Basic coding methods
7.3.2 Arithmetic coding: example (encoding)

• We take the first symbol a1 from the message; its encoding range is [0.0, 0.2)
• The second symbol a2 is encoded by taking the 20%–40% portion of [0.0, 0.2) as the new interval [0.04, 0.08)
• And so on; the process can be followed visually in the figure
• Finally, choose a number from the final interval [0.06752, 0.0688) as the output: 0.06800


Page 34: Chapter7 Image Compression - USTC

7.3 Basic coding methods
7.3.2 Arithmetic coding: example (decoding)

0       < 0.068000 < 0.2     → a1
0.04    < 0.068000 < 0.08    → a2
0.056   < 0.068000 < 0.072   → a3
0.0624  < 0.068000 < 0.0688  → a3
0.06752 < 0.068000 < 0.0688  → a4


Page 35: Chapter7 Image Compression - USTC

7.3 Basic coding methods
7.3.4 Binary image coding: constant area coding

Blocks are classified as all white, all black, or mixed intensity.

• The most probable category is coded with the 1-bit code word 0
• The other two categories are coded with the 2-bit code words 10 and 11
• The code assigned to the mixed-intensity category is used as a prefix


Page 36: Chapter7 Image Compression - USTC

7.3 Basic coding methods
7.3.4 Binary image coding: run-length coding

• Represent each row of an image by a sequence of lengths that describe the runs of black and white pixels
• This is the standard compression approach in facsimile (FAX) coding: CCITT G3, G4

(1,8) (0,3) (1,14) (0,4) (1,28) (0,3) (1,23) (0,3) (1,29) (0,3) (1,28) (0,3) (1,31) (0,4) (1,25) (0,4) (1,38) (0,4)

8 3 14 4 28 3 23 3 29 3 28 3 31 4 25 4 38 4
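A minimal sketch of the idea (without the CCITT code tables); the function name is illustrative:

```python
def run_lengths(row):
    """Run-length encode one row of a binary image as (value, length) pairs."""
    runs, current, length = [], row[0], 0
    for pixel in row:
        if pixel == current:
            length += 1
        else:
            runs.append((current, length))
            current, length = pixel, 1
    runs.append((current, length))
    return runs

row = [1] * 8 + [0] * 3 + [1] * 14 + [0] * 4
print(run_lengths(row))    # [(1, 8), (0, 3), (1, 14), (0, 4)]
```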


Page 37: Chapter7 Image Compression - USTC

7.3 Basic coding methods
7.3.4 Binary image coding: contour tracing and coding

• Δ′ is the difference between the starting coordinates of the front contours on adjacent lines
• Δ″ is the difference between the front-to-back contour lengths on adjacent lines
• Code: the new start, Δ′, and Δ″


Page 38: Chapter7 Image Compression - USTC

But for a gray-scale image…

162 161 159 161 162 160 158 156 156 161
163 158 159 159 160 158 155 155 156 158
159 157 159 156 159 159 154 152 155 153
155 157 156 156 158 157 156 155 154 156
156 157 155 151 157 156 155 156 156 154
157 156 156 156 156 156 154 156 155 155
158 157 155 155 156 155 155 155 155 155
156 155 156 153 156 155 156 155 154 156
155 155 157 154 157 155 157 158 158 158

Runs of identical values are rare here, so run-length coding does not help; the neighboring values are nevertheless highly correlated, which motivates predictive coding.


Page 39: Chapter7 Image Compression - USTC

7.4 Predictive coding
7.4.1 Lossless coding

Encoder:  e_n = f_n - \hat{f}_n   (the prediction error is then entropy coded, e.g. with Huffman or arithmetic coding)

Decoder:  f_n = e_n + \hat{f}_n


Page 40: Chapter7 Image Compression - USTC

7.4 Predictive coding
7.4.1 Lossless coding

In most cases, the prediction is formed by a linear combination of m previous pixels. That is,

\hat{f}_n = round\left[ \sum_{i=1}^{m} \alpha_i f_{n-i} \right]

where m is the order of the linear predictor and the \alpha_i are prediction coefficients. For example, a 1-D linear predictor can be written

\hat{f}(x, y) = round\left[ \sum_{i=1}^{m} \alpha_i f(x, y - i) \right]


Page 41: Chapter7 Image Compression - USTC

7.4 Predictive coding
7.4.1 Lossless coding: experimental result

m = 1, α = 1:   \hat{f}(x, y) = round\left[ f(x, y-1) \right]

(Figure: original image and residual image; gray level 128 represents an error of 0.)
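A minimal sketch of this first-order predictor and its exact inverse; the function names are illustrative:

```python
import numpy as np

def residual(img):
    """Lossless prediction with m=1, alpha=1: predict each pixel by its left
    neighbor and keep the error e(x,y) = f(x,y) - f(x,y-1)."""
    img = img.astype(np.int16)              # errors can be negative
    e = img.copy()
    e[:, 1:] = img[:, 1:] - img[:, :-1]     # first column is sent unpredicted
    return e

def reconstruct(e):
    return np.cumsum(e, axis=1).astype(np.uint8)   # inverts the prediction

img = np.array([[162, 161, 159, 161], [163, 158, 159, 159]], dtype=np.uint8)
assert (reconstruct(residual(img)) == img).all()   # lossless round trip
```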


Page 42: Chapter7 Image Compression - USTC

7.4 Predictive coding
7.4.1 Lossless coding: experimental result

m = 1, α = 1. The prediction error is well modeled by a zero-mean Laplacian density

p_e(e) = \frac{1}{\sqrt{2}\,\sigma_e}\, e^{-\sqrt{2}\,|e| / \sigma_e}

(Figure: histogram of the original image vs. histogram of the residual image.)

Page 43: Chapter7 Image Compression - USTC

7.4 Predictive coding
7.4.1 Lossless coding: JPEG lossless compression standard

Location of pixels (x is the pixel being predicted):

c  b
a  x

Prediction schemes:

Scheme   Prediction value
0        Null
1        a
2        b
3        c
4        a + b - c
5        a + (b - c)/2
6        b + (a - c)/2
7        (a + b)/2


Page 44: Chapter7 Image Compression - USTC

7.4 Predictive coding
7.4.2 Lossy coding: model

The encoder quantizes the prediction error before symbol coding; the decoder reconstructs

\dot{f}_n = \dot{e}_n + \hat{f}_n

where \dot{e}_n is the quantized prediction error.


Page 45: Chapter7 Image Compression - USTC

7.4 Predictive coding
7.4.2 Lossy coding: optimal predictors

Differential pulse code modulation (DPCM) minimizes the encoder's mean-square prediction error

E\{e_n^2\} = E\left\{ \left[ f_n - \hat{f}_n \right]^2 \right\}

subject to the constraint that

\hat{f}_n = \sum_{i=1}^{m} \alpha_i f_{n-i}

so that

E\{e_n^2\} = E\left\{ \left[ f_n - \sum_{i=1}^{m} \alpha_i f_{n-i} \right]^2 \right\}


Page 46: Chapter7 Image Compression - USTC

7.4 Predictive coding
7.4.2 Lossy coding: optimal predictors

Experiment with four predictors:

(1) \hat{f}(x, y) = 0.97 f(x, y-1)

(2) \hat{f}(x, y) = 0.5 f(x, y-1) + 0.5 f(x-1, y)

(3) \hat{f}(x, y) = 0.75 f(x, y-1) + 0.75 f(x-1, y) - 0.5 f(x-1, y-1)

(4) \hat{f}(x, y) = \begin{cases} 0.97 f(x, y-1) & \text{if } \Delta h \le \Delta v \\ 0.97 f(x-1, y) & \text{otherwise} \end{cases}

where

\Delta h = |f(x-1, y) - f(x-1, y-1)|, \qquad \Delta v = |f(x, y-1) - f(x-1, y-1)|

Neighborhood:

f(x-1, y-1)   f(x-1, y)   f(x-1, y+1)
f(x, y-1)     f(x, y)
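A minimal sketch of the adaptive predictor (4); the function name is illustrative, and it assumes x, y ≥ 1 so that all three neighbors exist:

```python
def predict4(f, x, y):
    """Predictor (4): switch between the left and the upper neighbor
    according to where the local gradient is smaller."""
    dh = abs(f[x - 1][y] - f[x - 1][y - 1])     # horizontal gradient
    dv = abs(f[x][y - 1] - f[x - 1][y - 1])     # vertical gradient
    return 0.97 * f[x][y - 1] if dh <= dv else 0.97 * f[x - 1][y]
```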


Page 47: Chapter7 Image Compression - USTC

7.4 Predictive coding
7.4.2 Lossy coding: optimal predictors

Experiment:

(Figure: the original test image.)


Page 48: Chapter7 Image Compression - USTC

7.4 Predictive coding
7.4.2 Lossy coding: optimal predictors

Conclusion: the prediction error decreases as the order of the predictor increases.

Page 49: Chapter7 Image Compression - USTC

7.4 Predictive coding
7.4.2 Lossy coding: optimal quantization

(Figure: the staircase quantization function t = q(s).)


Page 50: Chapter7 Image Compression - USTC

7.5 Transform Coding
7.5.1 Review of image transforms: definitions

Direct transform:   Y = TX, \qquad y_{m,n} = \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} x_{i,j}\, \Phi(i, j, m, n)

Inverse transform:  X = T^{-1} Y

(polynomial expression and vector expression)

Question: why can a transformation compress data?


Page 51: Chapter7 Image Compression - USTC

7.5 Transform Coding
7.5.1 Review of image transforms: properties

• Entropy preserving: H(X) = H(Y)

• Energy preserving: \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} X^2(m, n) = \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} Y^2(k, l)

• Decorrelation: H(Y) = H_0(Y), H(X) = H_m(X), H_0(X) > H_0(Y)

• Energy re-assignment

(Figure: energy distribution of the original image, its DFT, and its DCT.)

Page 52: Chapter7 Image Compression - USTC

7.5 Transform Coding
7.5.2 Transform coding system

Encoder:
f(x, y) → [construct n×n subimages] → [forward transform] → [quantizer] → [symbol encoder] → bit stream

Decoder:
bit stream → [symbol decoder] → [inverse transform] → [merge n×n subimages] → \hat{f}(x, y)


Page 53: Chapter7 Image Compression - USTC

7.5 Transform Coding
7.5.3 Transform selection

Information packing ability:  KLT > DCT > DFT > WHT
Computational complexity:     WHT < DCT < DFT < KLT

7.5.4 Sub-image size selection

• Computational complexity increases as the subimage size increases
• Correlation decreases as the subimage size increases
• The most popular subimage sizes are 8×8 and 16×16

Page 54: Chapter7 Image Compression - USTC

7.5 Transform Coding
7.5.5 Bit allocation: zonal coding

• The transform coefficients of maximum variance carry the most information and should be retained.

(Figure: an 8×8 zonal mask that keeps the low-frequency coefficients, shown next to the corresponding bit-allocation matrix, which assigns the most bits to the retained coefficients of largest variance.)

Page 55: Chapter7 Image Compression - USTC

7.5 Transform Coding
7.5.5 Bit allocation: threshold coding

(Figure: an 8×8 threshold mask; the retained coefficients are ordered for coding by the zig-zag scan.)

Page 56: Chapter7 Image Compression - USTC

7.5 Transform Coding
7.5.5 Bit allocation: threshold coding

There are three basic ways to threshold the transformed coefficients:

• A single global threshold can be applied to all subimages
• A different threshold can be used for each subimage
• The threshold can be varied as a function of the location of each coefficient within the subimage

\hat{T}(u, v) = round\left[ \frac{T(u, v)}{Z(u, v)} \right]

where T(u, v) are the transform coefficients and Z(u, v) is the quantization matrix.
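A minimal sketch of this quantization step on one 8×8 block, using SciPy's DCT. The Z matrix shown is the well-known JPEG luminance quantization table, and the level shift by 128 follows common JPEG practice; both are stated here as assumptions, since the slides do not give the table explicitly:

```python
import numpy as np
from scipy.fft import dctn, idctn

# JPEG luminance quantization matrix Z(u,v).
Z = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def quantize_block(block, Z):
    """T_hat(u,v) = round(T(u,v) / Z(u,v)) for one 8x8 image block."""
    T = dctn(block - 128.0, norm='ortho')      # level shift, then 2-D DCT
    return np.round(T / Z).astype(int)

def dequantize_block(T_hat, Z):
    return idctn(T_hat * Z, norm='ortho') + 128.0

block = np.tile(np.linspace(100, 200, 8), (8, 1))   # a smooth test block
print(quantize_block(block, Z))    # most high-frequency entries become zero
```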


Page 57: Chapter7 Image Compression - USTC

7.5 Transform Coding
7.5.5 Bit allocation: threshold coding

Z(u, v) may simply be assigned a particular constant value c.

(Figure: the Z(u, v) quantization matrix used in the JPEG standard.)

57

Page 58: Chapter7 Image Compression - USTC

7 5 Transform Coding7.5 Transform Coding7.5.5 bit allocation: threshold coding

Experiment results

Left column: quantize with Z(u,v)q ( , )

Right column: quantize with 4*Z(u,v)

Digital Image Processing Prof.Zhengkai Liu Dr.Rong Zhang

58

Page 59: Chapter7 Image Compression - USTC

7.5 Transform Coding
7.5.6 JPEG lossy compression standard

f(x, y) → [construct 8×8 subimages] → [DCT] → [quantizer] → [zig-zag scan] → [Huffman coding] → bit stream

Question: why do mosaic (blocking) artifacts appear in JPEG images?


Page 60: Chapter7 Image Compression - USTC

7.5 Transform Coding
7.5.7 JPEG 2000 lossy compression standard



Page 63: Chapter7 Image Compression - USTC

7.6 Introduction to international standards

• Image compression standards:
  - JPEG, JPEG 2000
• Video compression standards:
  - MPEG-1, MPEG-2, MPEG-4
  - H.261, H.263, H.263+, H.264/AVC

Page 64: Chapter7 Image Compression - USTC

Video compression standards


Page 65: Chapter7 Image Compression - USTC

Homework

6.7 (11-14)
(1) Explain whether a variable-length coding method can compress an image with 2^n gray levels that has already been histogram-equalized.
(2) Does such an image contain interpixel redundancy?

6.8 (11-15)
(1) For a source with 3 symbols, how many unique Huffman codes are there?
(2) Construct these codes.

6.11 (11-22)
The symbols a, e, i, o, u, ? occur with probabilities 0.2, 0.3, 0.1, 0.2, 0.1, 0.1 respectively. Decode the arithmetic-coded value 0.23355 for a message of length 6.
