TRANSCRIPT
ITCT Lecture 9.1: Image Data Compression (1)
Prof. Ja-Ling Wu
Department of Computer Science
and Information Engineering
National Taiwan University
Information Theory 2
Contents
Lecture 9.1
I. Introduction
II. Predictive Techniques
Lecture 9.2
I. Image Coding in Visual Telephony
II. Coding of Two-Tone Images
III. References
Image Data Compression
I. Introduction:
Image data compression is concerned with minimizing the number of bits required to represent an image.
Applications of data compression are primarily in the "transmission" and "storage" of information.
Data compression is also applied in the development of "fast algorithms", where the number of operations required to implement an algorithm is reduced by working with the compressed data
--- compressed-domain signal processing.
Image data compression techniques:
• Pixel coding: PCM/quantization, run-length coding, bit-plane coding
• Predictive coding: delta modulation, line-by-line DPCM, 2-D DPCM, intra-/inter-frame techniques, adaptive
• Transform coding: zonal coding, threshold coding, DCT/waveform, real/integer/lapped, multi-dimensional techniques, adaptive
• Others: hybrid coding, vector quantization, compressed sensing, JPEG2000/DVC
Image data compression methods fall into two common categories:
A. Redundancy Coding:
– redundancy reduction
– information lossless
e.g., predictive coding: DM, DPCM
B. Entropy Coding:
– entropy reduction
– inevitably results in some distortion
e.g., transform coding
For digitized data, "distortionless compression" techniques are possible.
Some methods for entropy reduction:
Subsampling: reduce the sampling rate
Coarse quantization: reduce the number of quantization levels
Frame repetition / interlacing: reduce the refresh rate (number of frames per second), e.g., for TV signals
II. Predictive Techniques:
Basic principle: to remove the mutual redundancy between successive pixels and encode only the new information.
DPCM:
Consider a sampled sequence u(m), coded up to m = n−1. Let $\tilde u(n-1), \tilde u(n-2), \ldots$ denote the values of the reproduced (decoded) sequence.
At m = n, when u(n) arrives, a quantity $\bar u(n)$, an estimate of u(n), is predicted from the previously decoded samples $\tilde u(n-1), \tilde u(n-2), \ldots$, i.e.,

$\bar u(n) = \psi\big(\tilde u(n-1), \tilde u(n-2), \ldots\big)$,  $\psi$: "prediction rule"

Prediction error:

$e(n) = u(n) - \bar u(n)$

If $\tilde e(n)$ is the quantized value of e(n), then the reproduced value of u(n) is:

$\tilde u(n) = \bar u(n) + \tilde e(n)$
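In code, DPCM amounts to quantizing only the prediction error and feeding the decoded value back into the predictor. A minimal sketch; the previous-sample predictor and the step-4 uniform quantizer here are illustrative choices, not taken from the slides:

```python
def dpcm_code(samples, predict, quantize):
    """One-pass DPCM coder/decoder sketch, in the slides' notation:
    u = input, u_bar = prediction, e = prediction error,
    e~ = quantized error, u~ = reconstructed sample."""
    recon = []                        # decoded sequence u~(n)
    codes = []                        # transmitted quantized errors e~(n)
    for u in samples:
        u_bar = predict(recon)        # predict from past *decoded* samples
        e_t = quantize(u - u_bar)     # quantize the prediction error only
        codes.append(e_t)
        recon.append(u_bar + e_t)     # u~(n) = u_bar(n) + e~(n)
    return codes, recon

# illustrative previous-sample predictor and coarse uniform quantizer (step 4)
predict = lambda past: past[-1] if past else 0
quantize = lambda e: 4 * round(e / 4)
codes, recon = dpcm_code([10, 13, 19, 22, 21], predict, quantize)
```

Because the predictor sees the decoded samples, the pointwise coding error u(n) − ũ(n) is exactly the quantization error of e(n), so errors do not accumulate.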
DPCM CODEC (block diagram): the encoder forms $e(n) = u(n) - \bar u(n)$, quantizes it to $\tilde e(n)$, and transmits it over the communication channel; a predictor with delay in the feedback loop forms $\bar u(n)$ from the reconstructed $\tilde u(n) = \bar u(n) + \tilde e(n)$. The decoder (reconstruction filter) runs an identical predictor with delay to rebuild $\tilde u(n)$.
Note:

$\tilde u(n) = \bar u(n) + \tilde e(n) = u(n) - e(n) + \tilde e(n)$
$\Rightarrow\ u(n) - \tilde u(n) = e(n) - \tilde e(n) = q(n),$

where q(n) is the quantization error in e(n).

Remarks:
1. The pointwise coding error in the input sequence is exactly equal to q(n), the quantization error in e(n).
2. With a reasonable predictor the mean square value of the differential signal e(n) is much smaller than that of u(n).
Conclusion:
For the same mean square quantization error, e(n)
requires fewer quantization bits than u(n).
The number of bits required for transmission has
been reduced while the quantization error is kept the
same.
Feedback Versus Feedforward Prediction
An important aspect of DPCM is that the prediction is
based on the output — the quantized samples — rather
than the input — the unquantized samples.
This results in the predictor being in the "feedback loop" around the quantizer, so that the quantization error at a given step is fed back to the quantizer input at the next step. This has a "stabilizing effect" that prevents DC drift and accumulation of error in the reconstructed signal $\tilde u(n)$.
If the prediction rule is based on the past inputs, the signal reconstruction error would depend on all the past and present quantization errors in the feedforward prediction-error sequence ε(n).
Generally, the MSE of feedforward reconstruction will be greater than that in DPCM.

Feedforward coding (block diagram): the encoder predicts $\bar u(n)$ from past inputs, then quantizes and entropy-codes the error $\varepsilon(n) = u(n) - \bar u(n)$; the decoder entropy-decodes $\tilde\varepsilon(n)$ and adds it to its own predictor output to form $\tilde u(n)$.
Example
The sequence 100, 102, 120, 120, 120, 118, … is to be predictively coded using the prediction rule:

$\bar u(n) = \tilde u(n-1)$  for DPCM
$\bar u(n) = u(n-1)$  for the feedforward predictive coder.

Assume a 2-bit quantizer, as shown below, is used (decision levels at −2, 0, 2; output levels −5, −1, 1, 5), except that the first sample is quantized separately by a 7-bit uniform quantizer, giving

$\tilde u(0) = u(0) = 100.$
           DPCM                               Feedforward predictive coder
 n  u(n)   ū(n)  e(n)  ẽ(n)  ũ(n)  u−ũ       ū(n)  ε(n)  ε̃(n)  ũ(n)  u−ũ
 0  100     —     —     —    100    0         —     —     —    100    0
 1  102    100    2     1    101    1        100    2     1    101    1
 2  120    101   19     5    106   14        102   18     5    106   14
 3  120    106   14     5    111    9        120    0    −1    105   15
 4  120    111    9     5    116    4        120    0    −1    104   16
 5  118    116    2     1    117    1        120   −2    −5     99   19
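The table can be reproduced mechanically. A sketch assuming the 2-bit quantizer has decision levels −2, 0, 2 and output levels −5, −1, 1, 5 (as read off the figure and the table entries):

```python
def q2bit(e):
    # 2-bit quantizer from the example: outputs -5, -1, 1, 5
    if e <= -2: return -5
    if e <= 0:  return -1
    if e <= 2:  return 1
    return 5

u = [100, 102, 120, 120, 120, 118]

# DPCM: predict from the previously *decoded* sample
dpcm = [u[0]]                       # first sample sent via a 7-bit quantizer
for n in range(1, len(u)):
    dpcm.append(dpcm[-1] + q2bit(u[n] - dpcm[-1]))

# Feedforward: the encoder predicts from the previous *input* sample,
# but the decoder can only add the quantized errors to its own output
ff = [u[0]]
for n in range(1, len(u)):
    ff.append(ff[-1] + q2bit(u[n] - u[n - 1]))
```

The reconstructions match the table: DPCM ends only 1 away from the input, while the feedforward decoder drifts to 99 because quantization errors accumulate.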
Delta Modulation (DM):
Predictor: one-step delay function
Quantizer: 1-bit quantizer

$\bar u(n) = \tilde u(n-1)$
$e(n) = u(n) - \tilde u(n-1)$

DM coder (block diagram): the encoder forms e(n) with a unit-delay predictor and quantizes it to a single bit $\tilde e(n)$; the decoder integrates the received $\tilde e(n)$ to rebuild $\tilde u(n)$.
[Figure: DM output $\tilde u(n)$ tracking u(n), showing slope overload in the steep region and granularity in the flat region.]

Primary limitations of DM:
1) Slope overload: in large-jump regions; max. slope = (step size) × (sampling freq.)
2) Granularity noise: in almost-constant regions
3) Instability to channel noise

Step size effect: for a fixed sampling frequency, increasing the step size reduces (i) slope overload but increases (ii) granular noise, and vice versa.
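Both limitations are easy to exhibit. A sketch in which the ramp input and the two step sizes are illustrative:

```python
def delta_mod(u, step):
    """Linear delta modulation sketch: 1-bit quantizer plus one-step-delay
    predictor. `step` is the fixed step size (illustrative parameter)."""
    recon, x = [], 0.0
    for sample in u:
        bit = 1 if sample - x >= 0 else -1   # transmitted bit
        x += bit * step                      # e~(n) = ±step, integrated
        recon.append(x)
    return recon

ramp = [2 * n for n in range(8)]    # input rises 2 per sample
small = delta_mod(ramp, 1)          # slope overload: max slope 1 < 2
large = delta_mod(ramp, 4)          # tracks the ramp, but oscillates
```

With step 1 the output climbs only 1 per sample and falls ever further behind the ramp (slope overload); with step 4 it keeps up but hunts around the signal (granular noise).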
Adaptive Delta Modulation
This adaptive approach simultaneously minimizes the
effects of both slope overload and granular noise.
Adaptive DM coder (block diagram): a 1-bit quantizer (±1) drives an adaptive function that scales the step size $\Delta_K$; the minimum step $\Delta_{\min}$ and the previous output $E_K$ are stored.

$E_{K+1} = \operatorname{sgn}\big(X_{K+1} - \tilde X_K\big)$

$\Delta_{K+1} = \begin{cases} 2\Delta_K, & \text{if } E_{K+1} = E_K \\ \Delta_K/2, & \text{if } E_{K+1} \ne E_K \end{cases}$, subject to $\Delta_{K+1} \ge \Delta_{\min}$

$\tilde X_{K+1} = \tilde X_K + E_{K+1}\,\Delta_{K+1}$
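A sketch of adaptive delta modulation; the doubling/halving adaptation rule and the parameter names (d_min, d0) are illustrative stand-ins for the slide's adaptive function:

```python
def adaptive_dm(u, d_min=0.5, d0=1.0):
    """Adaptive DM sketch: grow the step while successive output bits
    agree (fights slope overload), shrink it toward a stored minimum
    d_min while they alternate (fights granularity)."""
    recon, x, step, prev = [], 0.0, d0, 1
    for s in u:
        bit = 1 if s - x >= 0 else -1
        # adapt the step before taking it, never below d_min
        step = max(d_min, step * (2.0 if bit == prev else 0.5))
        x += bit * step
        recon.append(x)
        prev = bit
    return recon

recon = adaptive_dm([2 * n for n in range(8)])
```

On the same ramp that overloaded a fixed step of 1, the adapted step grows until the output catches the signal, then shrinks again once it overshoots.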
DPCM Design
There are two components to design in a DPCM
system :
i. The predictor
ii. The quantizer
Ideally, the predictor and quantizer would be optimized together using a linear or nonlinear technique. In practice, a suboptimum design approach is adopted:
i. Linear predictor
ii. Zero-memory quantizer
Remark: For this approach, the number of quantizing levels, M, must be relatively large (M ≥ 8) to achieve good performance.
Design of the linear predictor

Let the predictor estimate the current sample $S_0$ as a linear combination of the n previous samples:

$\hat S_0 = a_1 S_1 + a_2 S_2 + \cdots + a_n S_n,$

with prediction error $e = S_0 - \hat S_0$. Choose the coefficients $a_i$ to minimize the mean square error

$E\big[(S_0 - \hat S_0)^2\big] = E\big[(S_0 - a_1 S_1 - a_2 S_2 - \cdots - a_n S_n)^2\big].$

Setting $\dfrac{\partial}{\partial a_i} E\big[(S_0 - \hat S_0)^2\big] = 0$ gives

$E\big[(S_0 - \hat S_0)\,S_i\big] = 0, \quad i = 1, 2, \ldots, n,$

i.e., the error is orthogonal to the data. With $R_{ij} = E[S_i S_j]$, this yields the normal equations

$a_1 R_{i1} + a_2 R_{i2} + \cdots + a_n R_{in} = R_{i0}, \quad i = 1, 2, \ldots, n,$

or, in matrix form,

$\begin{bmatrix} R_{11} & R_{12} & \cdots & R_{1n} \\ R_{21} & R_{22} & \cdots & R_{2n} \\ \vdots & & & \vdots \\ R_{n1} & R_{n2} & \cdots & R_{nn} \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} R_{10} \\ R_{20} \\ \vdots \\ R_{n0} \end{bmatrix}.$
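For a small predictor order the normal equations can be solved in closed form. A sketch for n = 2, assuming the autocorrelation model $R(k) = \rho^{|k|}$ (a standard model; the slides leave $R_{ij}$ general):

```python
def predictor_coeffs_2(rho):
    """Solve the 2x2 normal equations R a = r for a second-order linear
    predictor, assuming R_ij = rho**|i-j| (illustrative source model)."""
    R11, R12, R21, R22 = 1.0, rho, rho, 1.0   # autocorrelation matrix
    r1, r2 = rho, rho ** 2                    # right-hand side R_10, R_20
    det = R11 * R22 - R12 * R21
    a1 = (r1 * R22 - R12 * r2) / det          # Cramer's rule
    a2 = (R11 * r2 - R21 * r1) / det
    return a1, a2

a1, a2 = predictor_coeffs_2(0.95)
var_e = 1.0 - a1 * 0.95 - a2 * 0.95 ** 2      # sigma_e^2 = R00 - a1 R01 - a2 R02
```

For this first-order-Markov model the solution collapses to a1 = ρ, a2 = 0, and the error variance $\sigma_e^2 = 1 - \rho^2$ is far below the signal variance $R_{00} = 1$.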
When $\hat S_0$ comprises these optimized coefficients $a_i$, the mean square error signal is:

$\sigma_e^2 = E\big[(S_0 - \hat S_0)^2\big] = E\big[(S_0 - \hat S_0)\,S_0\big] - E\big[(S_0 - \hat S_0)\,\hat S_0\big].$

But $E\big[(S_0 - \hat S_0)\,\hat S_0\big] = 0$ (orthogonality principle), so

$\sigma_e^2 = E\big[(S_0 - \hat S_0)\,S_0\big] = R_{00} - a_1 R_{01} - a_2 R_{02} - \cdots - a_n R_{0n} \le R_{00} = \sigma^2,$

where $\sigma_e^2$ is the variance of the difference signal and $R_{00} = \sigma^2$ is the variance of the original signal.

The variance of the error signal is less than the variance of the original signal.
Remarks:
1. The complexity of the predictor depends on “n”.
2. “n” depends on the covariance properties of the
original signal.
Design of the DPCM Quantizer

Review of the uniform quantizer. Quantization comes in three forms:
1. Zero-memory quantization
2. Block quantization
3. Sequential quantization

Quantization error: $q = y(X) - X$, where X is the input and Y = y(X) the output. (A midtread quantizer has an output level at zero; a midriser does not.)

Average distortion:

$D = \int \big(y(x) - x\big)^2 p(x)\,dx$

SNR:

$\mathrm{SNR} = 10\log_{10}\frac{\sigma^2}{D}$ in dB, where $\sigma^2$ is the variance of the input x.
[Figure: transfer characteristic Y = y(X) of a uniform midtread quantizer with breakpoints x1, …, x6; the granular region lies between the lower and upper overload regions. Within each interval, p(x) is taken as constant.]
The output level $y_i$ is always at the midpoint of the input interval $\Delta = x_i - x_{i-1}$. Assume p(x) is constant in the interval and equal to $p(x_i)$:
– Lower overload region: $\Delta = x_1 - x_0$, with $x_1 \gg x_0$
– Granular region: $\Delta = x_i - x_{i-1}$, $2 \le i \le M-1$
– Upper overload region: $\Delta = x_M - x_{M-1}$, with $x_M \gg x_{M-1}$

[Figure: uniform midtread quantizer with M = 9, output levels y1, …, y8 at the midpoints of the intervals defined by x1, …, x9.]

$D = \sum_{i=2}^{M-1} \int_{x_{i-1}}^{x_i} \big(y_i - x\big)^2 p(x)\,dx \approx \sum_{i=2}^{M-1} p(x_i)\,\frac{\Delta^3}{12},$

where we assume the contribution of the overload regions is negligible, i.e., $p(x_1) = p(x_M) = 0$.
Since $\sum_i p(x_i)\,\Delta \approx \int p(x)\,dx = 1$,

$D \approx \frac{\Delta^2}{12} \sum_i p(x_i)\,\Delta = \frac{\Delta^2}{12}.$

If the input pdf is uniform on $(-V, V)$, i.e. $p(x) = \dfrac{1}{2V}$ (source model), the input variance is

$\sigma^2 = \int_{-V}^{V} \frac{x^2}{2V}\,dx = \frac{V^2}{3},$

and with $\Delta = \dfrac{2V}{M}$ (quantizer characteristics),

$D = \frac{\Delta^2}{12} = \frac{(2V/M)^2}{12} = \frac{V^2}{3M^2}.$
Then

$\mathrm{SNR} = 10\log_{10}\frac{\sigma^2}{D} = 10\log_{10}\frac{V^2/3}{V^2/3M^2} = 10\log_{10} M^2 = 20\log_{10} M.$

If $M = 2^n$ (an n-bit quantizer),

$\mathrm{SNR} = 20\,n\log_{10} 2 \approx 6n$ (in dB)

– valid only for the PCM quantizer.
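The 6 dB/bit rule can be checked numerically. A Monte-Carlo sketch (the function name and trial count are illustrative):

```python
import math, random

random.seed(0)

def uniform_snr(bits, trials=200_000):
    """Estimate the SNR of an n-bit uniform (midriser) quantizer driven
    by an input uniform on (-V, V); theory predicts about 6n dB."""
    V, M = 1.0, 2 ** bits
    step = 2 * V / M
    err2 = sig2 = 0.0
    for _ in range(trials):
        x = random.uniform(-V, V)
        y = (math.floor(x / step) + 0.5) * step   # midriser output level
        err2 += (y - x) ** 2                      # accumulates D
        sig2 += x ** 2                            # accumulates sigma^2
    return 10 * math.log10(sig2 / err2)

snr4 = uniform_snr(4)    # theory: 20*log10(16) ≈ 24.1 dB
```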
B. DPCM Quantizer
The pdf of the input signal to the DPCM quantizer is not at all uniform, since a "good" predictor would be expected to produce many near-zero differences between the predicted values and the actual values. A typical shape for this distribution is highly peaked around zero.

[Figure: peaked pdf p(d) of the prediction differences, e.g., Laplacian.]

∴ A non-uniform quantizer is required.
Non-uniform quantizer = compressor + uniform quantizer + expander:

$Y = C^{-1}\big(Q(C(X))\big),$

where C is the compressor characteristic (with slope $dC(x)/dx$), Q is a uniform quantizer, and $C^{-1}$ is the expander.
[Figure: compressor characteristic C(X) on $(-X_{\max}, X_{\max})$, followed by the uniform quantizer, followed by the expander characteristic $C^{-1}(X)$.]
For this model, the mean-square distortion can be approximately represented as:

$D \approx \frac{1}{12M^2} \int_{-L}^{L} \left[\frac{2L}{C'(x)}\right]^2 p(x)\,dx,$

where $C'(x)$ is the slope of the nonlinear function and 2L is the quantizer range.
Lloyd-Max Quantizer: the most popular one.
1. Each interval limit should be midway between the neighboring levels:

$x_i = \frac{y_i + y_{i+1}}{2}$

2. Each level should be at the centroid of the input probability density function over the interval for that level, that is:

$\int_{x_{i-1}}^{x_i} (x - y_i)\,p(x)\,dx = 0$
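The two conditions suggest the classical alternating design. A sketch on an empirical sample set (the initialization and iteration count are illustrative):

```python
def lloyd_max(samples, levels, iters=50):
    """Lloyd-Max design sketch: alternate the two conditions above --
    boundaries midway between levels, levels at the centroids of their
    cells -- on an empirical sample set."""
    lo, hi = min(samples), max(samples)
    # start from evenly spaced output levels
    y = [lo + (i + 0.5) * (hi - lo) / levels for i in range(levels)]
    for _ in range(iters):
        x = [(y[i] + y[i + 1]) / 2 for i in range(levels - 1)]  # condition 1
        cells = [[] for _ in range(levels)]
        for s in samples:
            cells[sum(s > t for t in x)].append(s)    # assign s to its cell
        y = [sum(c) / len(c) if c else y[i]           # condition 2: centroid
             for i, c in enumerate(cells)]
    return x, y

x, y = lloyd_max([-3, -1, 1, 3], 2)
```

On this symmetric toy input the design converges to a boundary at 0 with levels at the two cell means.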
Logarithmic quantizer (log PCM): choose C so that $\dfrac{dC(x)}{dx} = \dfrac{K}{x}$.

μ-law (US, Canada, Japan):

$y(x) = V\,\frac{\ln\!\big(1 + \mu|x|/V\big)}{\ln(1+\mu)}\,\operatorname{sgn}(x)$

A-law (Europe):

$y(x) = \begin{cases} \dfrac{A|x|}{1+\ln A}\,\operatorname{sgn}(x), & 0 \le |x| \le \dfrac{V}{A} \\[1ex] \dfrac{V\big(1+\ln(A|x|/V)\big)}{1+\ln A}\,\operatorname{sgn}(x), & \dfrac{V}{A} \le |x| \le V \end{cases}$
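The compressor / uniform quantizer / expander chain can be sketched directly with the μ-law characteristic (the 8-bit default and the function names are illustrative):

```python
import math

def mu_law_compress(x, V=1.0, mu=255.0):
    """mu-law compressor: C(x) = V * ln(1 + mu|x|/V) / ln(1 + mu) * sgn(x)."""
    return math.copysign(V * math.log(1 + mu * abs(x) / V) / math.log(1 + mu), x)

def mu_law_expand(y, V=1.0, mu=255.0):
    """Inverse mapping C^{-1}(y)."""
    return math.copysign(V / mu * ((1 + mu) ** (abs(y) / V) - 1), y)

def compand_quantize(x, bits=8, V=1.0, mu=255.0):
    """Non-uniform quantizer = compressor + uniform quantizer + expander."""
    step = 2 * V / 2 ** bits
    y = mu_law_compress(x, V, mu)
    yq = (math.floor(y / step) + 0.5) * step     # uniform midriser on C(x)
    return mu_law_expand(yq, V, mu)
```

Because the compressor steepens near zero, small inputs (where the DPCM error pdf is peaked) see a much finer effective step than large ones.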
If a Laplacian function is used to model p(e), the input pdf of the DPCM quantizer,

$p(e) = \frac{1}{\sqrt{2}\,\sigma_e}\exp\!\left(-\frac{\sqrt{2}\,|e|}{\sigma_e}\right),$

then the variance of the quantization error (as $V \to \infty$) is

$\sigma_q^2 \approx \frac{9\,\sigma_e^2}{2M^2},$

and the SNR for the non-uniform quantizer in DPCM becomes:

$\mathrm{SNR} = 10\log_{10}\frac{\sigma^2}{\sigma_q^2} = 10\log_{10}\frac{2M^2}{9} + 10\log_{10}\frac{\sigma^2}{\sigma_e^2}.$

Since $M = 2^n$,

$\mathrm{SNR} \approx 6n - 6.56 + 10\log_{10}\frac{\sigma^2}{\sigma_e^2}$ (dB).

For the same pdf, PCM gives:

$\mathrm{SNR} \approx 6n - 6.56$ (dB).

∴ DPCM improves the SNR by $10\log_{10}\big(\sigma^2/\sigma_e^2\big)$.
ADPCM:
i. Adaptive prediction
ii. Adaptive quantization

DPCM for image coding: each scan line of the image is coded independently by the DPCM techniques. For a very slowly time-varying image (ρ = 0.95) and a Laplacian-pdf quantizer, an 8 to 10 dB SNR improvement over PCM can be expected; that is, the SNR of 6-bit PCM can be achieved by 4-bit line-by-line DPCM for ρ = 0.97.

Two-dimensional DPCM: a 2-D predictor, e.g.

$\bar u(m,n) = a_1\,\tilde u(m, n-1) + a_2\,\tilde u(m-1, n) + a_3\,\tilde u(m-1, n-1) + a_4\,\tilde u(m-1, n+1).$
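A sketch of 2-D DPCM over a raster scan, using three of the four neighbours; the coefficient values, the zero fallback on the first row/column, and the round-to-integer quantizer are illustrative stand-ins:

```python
def dpcm2d(img, a=(0.75, 0.75, -0.5)):
    """2-D DPCM sketch with prediction from decoded neighbours:
    u_bar(m,n) = a1*u~(m,n-1) + a2*u~(m-1,n) + a3*u~(m-1,n-1).
    Missing neighbours on the first row/column are taken as 0."""
    H, W = len(img), len(img[0])
    rec = [[0.0] * W for _ in range(H)]     # decoded image u~
    errs = [[0.0] * W for _ in range(H)]    # transmitted quantized errors
    for m in range(H):
        for n in range(W):
            left = rec[m][n - 1] if n else 0.0
            up = rec[m - 1][n] if m else 0.0
            diag = rec[m - 1][n - 1] if m and n else 0.0
            pred = a[0] * left + a[1] * up + a[2] * diag
            e = round(img[m][n] - pred)     # stand-in quantizer
            errs[m][n] = e
            rec[m][n] = pred + e
    return errs, rec

errs, rec = dpcm2d([[100] * 3 for _ in range(3)])
```

On a flat image only the first row and column carry non-zero errors; all interior prediction errors are zero, which is exactly the redundancy removal the 2-D predictor is after.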