CS654: Digital Image Analysis
Lecture 33: Introduction to Image Compression and Coding

Upload: cuthbert-greer
Post on 17-Jan-2016

TRANSCRIPT

Page 1:

CS654: Digital Image Analysis

Lecture 33: Introduction to Image Compression and Coding

Page 2:

Outline of Lecture 33

• Introduction to image compression
• Measure of information content
• Introduction to coding
• Redundancies
• Image compression model

Page 3:

Goal of Image Compression

• The goal of image compression is to reduce the amount of data required to represent a digital image.

Page 4:

Approaches

Lossless
• Information preserving
• Low compression ratios

Lossy
• Not information preserving
• High compression ratios

Page 5:

Data ≠ Information

• Data and information are NOT synonymous terms.

• Data is the means by which information is conveyed.

• Data compression aims to reduce the amount of data required to represent a given quantity of information, while preserving as much of that information as possible.

• The same amount of information can be represented by various amounts of data.

Page 6:

Compression Ratio (CR)

Compression ratio:

CR = n1 / n2

where n1 and n2 are the numbers of information-carrying units (e.g., bits) in the original and compressed representations, respectively.

Page 7:

Data Redundancy

• Relative data redundancy:

RD = 1 - 1/CR

Example:

A compression engine represents 10 bits of information of the source data with only 1 bit. Calculate the data redundancy present in the source data.

CR = 10/1 = 10

∴ RD = 1 - 1/10 = 0.9 = 90%
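As a quick sketch (the function names are mine, not from the lecture), both quantities can be computed directly:

```python
def compression_ratio(n1, n2):
    """CR = n1 / n2: original data units over compressed data units."""
    return n1 / n2

def relative_redundancy(cr):
    """RD = 1 - 1/CR: fraction of the original data that is redundant."""
    return 1 - 1 / cr

cr = compression_ratio(10, 1)  # 10 bits of source data coded with 1 bit
rd = relative_redundancy(cr)   # 0.9, i.e., 90% redundancy
```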

Page 8:

Types of Data Redundancy

1. Coding redundancy: more bits are used to code the gray levels than necessary.
2. Interpixel redundancy: neighboring pixel values are similar (correlated).
3. Psychovisual redundancy: information that is irrelevant to the human visual system.

Compression attempts to reduce one or more of these redundancy types.

Page 9:

Coding Redundancy

• Code: a list of symbols (letters, numbers, bits, etc.)

• Code word: a sequence of symbols used to represent a piece of information or an event (e.g., gray levels).

• Code word length: number of symbols in each code word

For an N x M image with L gray levels, let r_k denote the k-th gray level, P(r_k) its probability of occurrence, and l(r_k) the number of bits used to represent it.

Expected value:

E(X) = Σ_x x P(X = x)

Average number of bits:

L_avg = E(l(r_k)) = Σ_{k=0}^{L-1} l(r_k) P(r_k)

Total number of bits: N M L_avg

Page 10:

Coding Redundancy: Fixed-length coding

P(r_k)   Code 1   l(r_k)
0.19     000      3
0.25     001      3
0.21     010      3
0.16     011      3
0.08     100      3
0.06     101      3
0.03     110      3
0.02     111      3

L_avg = 3 bits; total number of bits = 3NM

Page 11:

Coding Redundancy: Variable-length coding

P(r_k)   Code 1   l_1(r_k)   Code 2    l_2(r_k)
0.19     000      3          11        2
0.25     001      3          01        2
0.21     010      3          10        2
0.16     011      3          001       3
0.08     100      3          0001      4
0.06     101      3          00001     5
0.03     110      3          000001    6
0.02     111      3          000000    6

L_avg = 2.7 bits

CR = 3/2.7 = 1.11

RD = 1 - 1/1.11 = 0.099 ≈ 10%
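The numbers above can be verified in a few lines (a sketch; the two lists just transcribe the table):

```python
# Gray-level probabilities and the code word lengths of Code 2,
# transcribed from the table above.
probs    = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
lengths2 = [2, 2, 2, 3, 4, 5, 6, 6]

l_avg = sum(p * l for p, l in zip(probs, lengths2))  # average bits/pixel
cr = 3 / l_avg        # relative to the 3-bit fixed-length Code 1
rd = 1 - 1 / cr       # relative data redundancy, about 10%
```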

Page 12:

Inter-pixel redundancy

• Inter-pixel redundancy implies that pixel values are correlated.

• A pixel value can be reasonably predicted from the values of its neighbors.
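As an illustrative sketch (the scanline values are invented), predicting each pixel from its left neighbor turns correlated values into small residuals that are cheaper to encode:

```python
# Because neighboring pixels are correlated, the differences between
# adjacent pixels cluster near zero and are cheaper to encode.
row = [100, 101, 101, 103, 104, 104, 105, 107]  # hypothetical scanline

# Predict each pixel by its left neighbor; keep the first pixel as-is.
residuals = [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]
# residuals: [100, 1, 0, 2, 1, 0, 1, 2] -- small, highly skewed values

# The original row is exactly recoverable from the residuals:
reconstructed = [residuals[0]]
for d in residuals[1:]:
    reconstructed.append(reconstructed[-1] + d)
```

No information is lost here; the mapping only removes inter-pixel redundancy so that a later variable-length code can exploit the skewed residual distribution.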

Page 13:

Psychovisual redundancy

• The human eye does not respond with equal sensitivity to all visual information.

• It is more sensitive to the lower frequencies than to the higher frequencies in the visual spectrum.

• Discard data that is perceptually insignificant!

Page 14:

Example

[Figure: the same image shown with 256 gray levels, with 16 gray levels plus random noise, and with 16 gray levels.]

Page 15:

Measuring Information

• What is the minimum amount of data sufficient to completely describe an image without loss of information?

• How do we measure the information content of a message/image?

Page 16:

Modeling Information

• We assume that information generation is a probabilistic process.

• Associate information with probability: a random event E with probability P(E) contains I(E) = -log P(E) units of information.

• Note: when P(E) = 1, I(E) = 0 (a certain event conveys no information).

Page 17:

How much information does a pixel contain?

• Suppose that gray level values are generated by a random process; then the gray level r_k contains

I(r_k) = -log P(r_k)

units of information!

Page 18:

How much information does an image contain?

• Average information content of an image (assuming statistically independent random events):

E[I(r_k)] = Σ_{k=0}^{L-1} I(r_k) P(r_k)   units/pixel (e.g., bits/pixel)

• Entropy, using I(r_k) = -log P(r_k):

H = -Σ_{k=0}^{L-1} P(r_k) log P(r_k)
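Using the gray-level distribution from the variable-length coding example, the entropy can be evaluated directly (a sketch, not part of the slides):

```python
import math

# Gray-level probabilities from the variable-length coding example.
probs = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]

# First-order entropy H = -sum p * log2(p), in bits/pixel.
H = -sum(p * math.log2(p) for p in probs)
# H is about 2.65 bits/pixel, slightly below the 2.7 bits/pixel achieved
# by Code 2, so that variable-length code is close to the entropy bound.
```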

Page 19:

Redundancy (revisited)

• Redundancy: R = L_avg - H

where L_avg is the average code word length and H is the entropy.

• Note: if L_avg = H, then R = 0 (no redundancy).

Page 20:

Image Compression Model

f(x, y) → Source encoder → Channel encoder → Channel → Channel decoder → Source decoder → f'(x, y)

• The source encoder and source decoder perform the compression.
• The channel encoder and channel decoder provide noise tolerance.

Page 21:

Functional blocks

Page 22:

Functional blocks

Mapper: transforms input data in a way that facilitates reduction of inter-pixel redundancies.

Page 23:

Functional blocks

Quantizer: reduces the accuracy of the mapper’s output in accordance with some pre-established fidelity criteria.
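As a hedged sketch (the function and bin size are illustrative, not from the lecture), a uniform quantizer mapping 256 gray levels down to 16 might look like:

```python
# Hypothetical uniform quantizer: reduce 256 gray levels to 16 bins by
# keeping only the bin index and representing each pixel by the bin
# midpoint. This step discards information and is irreversible.
def quantize(pixel, levels=16):
    step = 256 // levels
    return (pixel // step) * step + step // 2  # midpoint of the bin

q = quantize(100)  # 100 falls in bin [96, 112) and maps to 104
```

This is where lossy compression loses information: many distinct inputs collapse to the same output value, which is what makes higher compression ratios possible.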

Page 24:

Functional blocks

Symbol encoder: assigns the shortest code to the most frequently occurring output values.
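One standard realization of such an encoder is Huffman coding, listed on the last slide under statistical encoding. A minimal sketch (function name and structure are my own):

```python
import heapq
from itertools import count

def huffman_code(probabilities):
    """Build a prefix code that assigns shorter code words to more
    probable symbols (the job of the symbol encoder)."""
    # Heap entries: (probability, tiebreaker, {symbol: code-so-far}).
    tie = count()
    heap = [(p, next(tie), {sym: ""}) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

# Gray-level probabilities from the earlier coding-redundancy example.
code = huffman_code({0: 0.19, 1: 0.25, 2: 0.21, 3: 0.16,
                     4: 0.08, 5: 0.06, 6: 0.03, 7: 0.02})
```

For this distribution the resulting average code word length is 2.7 bits/pixel, matching Code 2 in the earlier example.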

Page 25:

Fidelity Criteria

• How close is f'(x, y) to f(x, y)?

• Criteria:

• Subjective: based on human observers
• Objective: mathematically defined criteria

Page 26:

Subjective Fidelity Criteria

Page 27:

Objective Fidelity Criteria

• Root mean square error (RMS), for an M x N image:

e_rms = [ (1/MN) Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} (f'(x, y) - f(x, y))^2 ]^{1/2}

• Mean-square signal-to-noise ratio (SNR):

SNR_ms = Σ_x Σ_y f'(x, y)^2 / Σ_x Σ_y (f'(x, y) - f(x, y))^2
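A small numeric sketch of both criteria (the two "images" are made-up 4-pixel lists, flattened for simplicity):

```python
f  = [10.0, 20.0, 30.0, 40.0]   # original image f(x, y), flattened
fp = [11.0, 19.0, 31.0, 39.0]   # decompressed image f'(x, y)

n = len(f)
err_sq = [(b - a) ** 2 for a, b in zip(f, fp)]

e_rms = (sum(err_sq) / n) ** 0.5                 # root-mean-square error
snr_ms = sum(b * b for b in fp) / sum(err_sq)    # mean-square SNR
```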

Page 28:

Lossless Compression

Types of coding:

• Repetitive sequence encoding: RLE
• Statistical encoding: Huffman, Arithmetic, LZW
• Predictive coding: DPCM
• Bitplane coding