Digital Communication Techniques


Page 1: Digital Communication Techniques

Chapter No:- 02

Presented By, Er. Swapnil Kaware,

Chapter Name:- Source and Channel Coding


Page 2: Digital Communication Techniques

Communication System

The purpose of a Communication System is to transport an information-bearing signal from a source to a user destination via a communication channel.

ELEMENTS OF DIGITAL COMMUNICATION SYSTEMS:

1. Analog Information Sources.

2. Digital Information Sources.

(i). Analog Information Sources → a microphone actuated by speech, or a TV camera scanning a scene; these produce continuous-amplitude signals.

(ii). Digital Information Sources → teletype machines or the numerical output of a computer, which consist of a sequence of discrete symbols or letters.


Page 3: Digital Communication Techniques

(i). Source Coding:- Code data to represent the information more efficiently; this reduces the “size” of the data.
* Analog: encode analog source data into a binary format.
* Digital: reduce the “size” of digital source data.

(ii). Channel Coding:- Code data for transmission over a noisy communication channel; this increases the “size” of the data.
* Digital: add redundancy to identify and correct errors.
* Analog: represent digital values by analog signals.

Communication System


Page 4: Digital Communication Techniques

Communication System


Page 5: Digital Communication Techniques

Coding in communication system

• In the engineering sense, coding can be classified into four areas:

• Encryption:- to encrypt information for security purposes.

• Data Compression:- to reduce space for the data stream.

• Data Translation:- to change the form of representation of the information so that it can be transmitted over a communication channel.

• Error Control:- to encode a signal so that errors which occur can be detected and possibly corrected.


Page 6: Digital Communication Techniques

Source Coding

(i). The process by which information symbols are mapped to alphabetical symbols is called source coding.

(ii). The mapping is generally performed in sequences or groups of information and alphabetical symbols.

(iii). It must be performed in such a manner that it guarantees the exact recovery of the information symbols from the alphabetical symbols; otherwise it destroys the basic theme of source coding.

(iv). The source coding is called lossless compression if the information symbols are exactly recovered from the alphabetical symbols otherwise it is called lossy compression.

(v). Source coding is also known as the compression or bit-rate reduction process.


Page 7: Digital Communication Techniques

(vi). It is the process of removing redundancy from the source symbols, which essentially reduces data size.

(vii). Source coding is a vital part of any communication system as it helps to use disk space and transmission bandwidth efficiently.

(viii). The source coding can either be lossy or lossless.

(ix). In case of lossless encoding, error free reconstruction of source symbols is possible, whereas, exact reconstruction of source symbols is not possible in case of lossy encoding.

Source Coding


Page 8: Digital Communication Techniques

• Suppose a word ‘Zebra’ is going to be sent out. Before this information can be transmitted to the channel, it is first translated into a stream of bits (‘0’ and ‘1’).

• The process is called source coding. There are many commonly used ways to perform this translation.

• For example, if ASCII code is used, each letter is represented by a 7-bit pattern, the so-called code word.

• The letters ‘Z’, ‘e’, ‘b’, ‘r’, ‘a’ are then encoded as ‘1011010’, ‘1100101’, ‘1100010’, ‘1110010’, ‘1100001’.
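These 7-bit patterns are easy to verify programmatically; the short Python check below simply prints the ASCII code word for each letter of ‘Zebra’.

# Print the 7-bit ASCII code word for each letter of "Zebra".
for ch in "Zebra":
    print(ch, format(ord(ch), "07b"))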

Source Coding


Page 9: Digital Communication Techniques

Source Encoder / Decoder

Source Encoder (or Source Coder):-

(i). It converts the input (i.e. the symbol sequence) into a binary sequence of 0’s and 1’s by assigning code words to the symbols in the input sequence.

(ii). For example, if a source set has one hundred symbols, then the number of bits used to represent each symbol will be 7, because 2⁷ = 128 unique combinations are available.

(iii). The important parameters of a source encoder are block size, code word lengths, average data rate and the efficiency of the coder (i.e. the actual output data rate compared to the minimum achievable rate).


Page 10: Digital Communication Techniques

Source Decoder:-

(i). The source decoder converts the binary output of the channel decoder into a symbol sequence.

(ii). The decoder for a system using fixed-length code words is quite simple, but the decoder for a system using variable-length code words is very complex.

(iii). The aim of source coding is to remove redundancy from the transmitted information, so that the bandwidth required for transmission is minimized.

(iv). Code words are assigned based on the probability of each symbol: the higher the probability, the shorter the code word.

Ex: Huffman coding.

Source Encoder / Decoder


Page 11: Digital Communication Techniques

Huffman Coding

• Rules:-

• Order the symbols (‘Z’, ‘e’, ‘b’, ‘r’, ‘a’) by decreasing probability and denote them S1 to Sn (n = 5 in this case).

• Combine the two symbols (Sn, Sn-1) having the lowest probabilities.

• Assign ‘1’ as the last symbol of Sn-1 and ‘0’ as the last symbol of Sn.

• Form a new source alphabet of (n-1) symbols by combining Sn-1 and Sn into a new symbol S’n-1 with probability P’n-1 = Pn-1 + Pn.

• Repeat the above steps until the final source alphabet has only one symbol, with probability equal to 1.
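These rules translate almost directly into code. Below is a minimal Python sketch of the procedure using a priority queue; the symbol probabilities are assumed purely for illustration, since the slides do not give them. (Building each codeword by prepending a bit at every merge is equivalent to the appending rule above, read from the root of the code tree.)

import heapq
from itertools import count

def huffman(probs):
    # Each heap entry: (probability, tie-breaker, {symbol: partial codeword}).
    tie = count()
    heap = [(p, next(tie), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p_n, _, low = heapq.heappop(heap)     # S_n: lowest probability -> bit '0'
        p_n1, _, high = heapq.heappop(heap)   # S_(n-1): next lowest    -> bit '1'
        merged = {s: "0" + c for s, c in low.items()}
        merged.update({s: "1" + c for s, c in high.items()})
        heapq.heappush(heap, (p_n + p_n1, next(tie), merged))
    return heap[0][2]                         # the final symbol has probability 1

# Assumed probabilities for the 'Zebra' alphabet (not specified in the slides):
print(huffman({"a": 0.35, "e": 0.25, "Z": 0.20, "b": 0.10, "r": 0.10}))

With these assumed probabilities, the two rarest symbols (‘b’ and ‘r’) receive the longest code words, exactly as the construction predicts.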

Page 12: Digital Communication Techniques

Huffman coding.


Page 13: Digital Communication Techniques

• The channel encoder systematically adds bits to the message bits to be transmitted.

• After passing through the channel, the Channel decoder will detect and correct the errors.

• A simple example is to send ‘000’ (‘111’ correspondingly) instead of sending only one ‘0’ (‘1’ correspondingly) to the channel.

• Due to noise in the channel, the received bits may become ‘001’; but only ‘000’ or ‘111’ could have been sent.

• By the majority-logic decoding scheme, it is decoded as ‘000’, and therefore the message must have been a ‘0’.

• In general, the channel encoder divides the input message bits into blocks of k message bits and replaces each k-bit message block with an n-bit code word by introducing (n − k) check bits into each message block.
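As a small sketch of the ‘000’/‘111’ example just described, the Python below triples each bit and decodes each received 3-bit block by majority logic (the function names are ours):

from collections import Counter

def repeat3_encode(bits):
    # Send '000' (or '111') instead of a single '0' (or '1').
    return [b for b in bits for _ in range(3)]

def repeat3_decode(received):
    # Majority-logic decoding over each 3-bit block.
    blocks = (received[i:i + 3] for i in range(0, len(received), 3))
    return [Counter(blk).most_common(1)[0][0] for blk in blocks]

print(repeat3_decode([0, 0, 1]))   # the noisy block '001' decodes to [0]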

Channel Encoder / Decoder:


Page 14: Digital Communication Techniques

Channel Encoder / Decoder:

There are two methods of channel coding.

1. Block Coding: The encoder takes a block of ‘k’ information bits from the source encoder and adds ‘r’ error-control bits, where ‘r’ depends on ‘k’ and on the error-control capability desired.

2. Convolutional Coding: The information-bearing message stream is encoded in a continuous fashion by continuously interleaving information bits and error-control bits.


Page 15: Digital Communication Techniques

Block Coding:

• Denoted by (n, k), a block code is a collection of code words, each with length n, carrying k information bits and r = n − k check bits.

• It is linear if it is closed under addition mod 2.

• A Generator Matrix G (of order k × n) is used to generate the code: G = [ Ik  P ],

• where Ik is the k × k identity matrix and P is a k × (n − k) matrix selected to give desirable properties to the code produced.

• For example, denote D as the message (a k-bit row vector), G as the generator matrix and C as the code word; then C = D·G (mod 2).
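To make this concrete, the sketch below builds a k = 4, n = 7 generator matrix G = [ I4  P ] and encodes by C = D·G mod 2 in Python. The particular P is our illustrative choice (it happens to generate the well-known Hamming (7, 4) code); the slides do not prescribe one.

import numpy as np

P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])   # G = [ I_k  P ], here k = 4, n = 7

def block_encode(d):
    return (np.array(d) @ G) % 2           # C = D G (mod 2)

print(block_encode([1, 0, 1, 1]))          # [1 0 1 1 1 0 0]; first k bits = message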


Page 16: Digital Communication Techniques
Page 17: Digital Communication Techniques

Average Mutual Information & Entropy

Information theory answers two fundamental questions in communication theory:

1. What is the ultimate lossless data compression?
2. What is the ultimate transmission rate of reliable communication?

Information theory gives insight into the problems of statistical inference, computer science, investments and many other fields.


Page 18: Digital Communication Techniques

• Entropy is a measure of the number of specific ways in which a system may be arranged, often taken to be a measure of disorder, or a measure of progressing towards thermodynamic equilibrium.

• The entropy of an isolated system never decreases, because isolated systems spontaneously evolve towards thermodynamic equilibrium, which is the state of maximum entropy.

• Entropy was originally defined for a thermodynamically reversible process as dS = δQ/T, where the entropy (S) is found from an incremental reversible transfer of heat into a closed system (δQ) divided by the uniform thermodynamic temperature (T) of that system.

Entropy


Page 19: Digital Communication Techniques

• The above definition is sometimes called the macroscopic definition of entropy because it can be used without regard to any microscopic picture of the contents of a system.

• In thermodynamics, entropy has been found to be more generally useful and it has several other formulations.

• Entropy was discovered when it was noticed to be a quantity that behaves as a function of state.

• Entropy is an extensive property, but it is often given as an intensive property of specific entropy as entropy per unit mass or entropy per mole.

Entropy


Page 20: Digital Communication Techniques

Entropy

(i). The entropy of a random variable is a function which attempts to characterize the “unpredictability” of that random variable.

(ii). Consider a random variable X representing the number that comes up on a roulette wheel and a random variable Y representing the number that comes up on a fair 6-sided die.

(iii). The entropy of X is greater than the entropy of Y: in addition to the numbers 1 through 6, the values on the roulette wheel can also take on the values 7 through 36. In some sense, X is less predictable.


Page 21: Digital Communication Techniques

(iv). But entropy is not just about the number of possible outcomes. It is also about their frequency.

(v). For example, let Z be the outcome of a weighted six-sided die that comes up “2” 90% of the time. Z has lower entropy than Y, which represents a fair 6-sided die.

(vi). The weighted die is less unpredictable, in some sense. But entropy is not a vague concept; it has a precise mathematical definition.

(vii). In particular, suppose a random variable X takes on values in a set X = {x1, x2, ..., xn}; its entropy is then defined as on the next slide.

Entropy


Page 22: Digital Communication Techniques

The most fundamental concept of information theory is the entropy. The entropy of a random variable X is defined by

H(X) = − Σ_{x ∈ X} p(x) log₂ p(x).

The entropy is non-negative. It is zero when the random variable is “certain”, i.e. its value can be predicted perfectly.
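The definition translates directly into Python. Applied to the die examples above (assuming, since the slides do not say, that the weighted die spreads its remaining 10% evenly over the other five faces):

from math import log2

def entropy(probs):
    # H(X) = -sum over x of p(x) * log2 p(x); zero-probability terms contribute 0.
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([1/6] * 6))            # fair die Y: ~2.585 bits
print(entropy([0.9] + [0.02] * 5))   # weighted die Z: ~0.70 bits, lower as claimed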

Entropy


Page 23: Digital Communication Techniques

Joint Entropy

(i). Joint entropy is the entropy of a joint probability distribution, or of a multi-valued random variable.

(ii). For example, one might wish to know the joint entropy of a distribution of people defined by hair color C and eye color E, where C can take on 4 different values from a set C and E can take on 3 values from a set E.

(iii). P(E, C) then defines the joint probability distribution of hair color and eye color.


Page 24: Digital Communication Techniques

Joint and Conditional Entropy

For two random variables X and Y, the joint entropy is defined by

H(X, Y) = − Σ_x Σ_y p(x, y) log₂ p(x, y).

The conditional entropy is defined by

H(Y | X) = − Σ_x Σ_y p(x, y) log₂ p(y | x).


Page 25: Digital Communication Techniques

Mutual Information

(i). Mutual information is a quantity that measures the relationship between two random variables that are sampled simultaneously.

(ii). In particular, it measures how much information is communicated, on average, in one random variable about another.

(iii). Intuitively, one might ask: how much does one random variable tell me about another?

• For example, suppose X represents the roll of a fair 6-sided die, and Y represents whether the roll is even (0 if even, 1 if odd). Clearly, the value of Y tells us something about the value of X, and vice versa.

• That is, these variables share mutual information.

Page 26: Digital Communication Techniques

(iv). On the other hand, if X represents the roll of one fair die, and Z represents the roll of another fair die, then X and Z share no mutual information.

(v). The roll of one die does not contain any information about the outcome of the other die.

(vi). An important theorem from information theory says that the mutual information between two variables is 0 if and only if the two variables are statistically independent.

Mutual information


Page 27: Digital Communication Techniques

Mutual Information

(i). The mutual information of X and Y is defined by

I(X; Y) = Σ_x Σ_y p(x, y) log₂ [ p(x, y) / (p(x) p(y)) ].

(ii). Note that the mutual information is symmetric in its arguments; that is, I(X; Y) = I(Y; X).

(iii). Mutual information is also non-negative.


Page 28: Digital Communication Techniques

Mutual Information and Entropy

It follows from the definitions of entropy and mutual information that

I(X; Y) = H(X) − H(X | Y) = H(Y) − H(Y | X).

The mutual information is the reduction in the entropy of X when Y is known.
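This identity can be checked numerically for the earlier example (X a fair 6-sided die, Y its parity). Since Y is fully determined by X, H(Y | X) = 0, so the mutual information should equal H(Y) = 1 bit:

from math import log2

pXY = {(x, x % 2): 1/6 for x in range(1, 7)}   # joint distribution of (roll, parity)
pX = {x: 1/6 for x in range(1, 7)}
pY = {0: 1/2, 1: 1/2}

I = sum(p * log2(p / (pX[x] * pY[y])) for (x, y), p in pXY.items())
print(I)   # 1.0 bit: knowing the parity answers one binary question about the roll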


Page 29: Digital Communication Techniques

Conditional Mutual Information

The conditional mutual information of X and Y given Z is defined by

I(X; Y | Z) = H(X | Z) − H(X | Y, Z).


Page 30: Digital Communication Techniques

Chain Rules of Mutual Information

It can be shown from the definitions that the mutual information of (X, Y) and Z is the sum of the mutual information of X and Z and the conditional mutual information of Y and Z given X. That is,

I(X, Y; Z) = I(X; Z) + I(Y; Z | X).

Page 31: Digital Communication Techniques

Chain Rules of Entropy

From the definition of entropy, it can be shown that for two random variables X and Y, the joint entropy is the sum of the entropy of X and the conditional entropy of Y given X:

H(X, Y) = H(X) + H(Y | X).


Page 32: Digital Communication Techniques

Discrete Memoryless Source

(i). Suppose that a probabilistic experiment involves the observation of the output emitted by a discrete source during every unit of time (signaling interval).

(ii). The source output is modeled as a discrete random variable S, which takes on symbols from an alphabet.

(iii). We assume that the symbols emitted by the source during successive signaling intervals are statistically independent.

(iv). A source having the properties just described is called a discrete memoryless source.

(v). If the source symbols occur with different probabilities, and the probability ‘pk’ is low, then there is more surprise, and therefore more information, when symbol ‘sk’ is emitted by the source.
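Quantitatively, this is captured by the self-information I(sk) = log₂(1/pk) bits: a symbol of probability 1/8 conveys 3 bits, while a near-certain symbol conveys almost none.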


Page 33: Digital Communication Techniques

Lempel–Ziv Algorithm

An innovative, radically different method was introduced in 1977 by Abraham Lempel and Jacob Ziv.

This technique (called Lempel–Ziv) actually consists of two considerably different algorithms, LZ77 and LZ78. LZ78 inserts one- or multi-character, non-overlapping, distinct patterns of the message to be encoded into a dictionary.

The multi-character patterns are of the form C0C1 ... Cn−1Cn. The prefix of a pattern consists of all the pattern characters except the last: C0C1 ... Cn−1.


Page 34: Digital Communication Techniques

Lempel–Ziv algorithm:

(i). Encode a string by finding the longest match anywhere within a window of past symbols, and represent the string by a pointer to the location of the match within the window together with the length of the match.

(ii). This algorithm is simple to implement and has become popular as one of the early standard algorithms for file compression on computers because of its speed and efficiency.

(iii). It is also used for data compression in high-speed modems.

Page 35: Digital Communication Techniques

Lempel–Ziv algorithm: As mentioned earlier, static coding schemes require some knowledge about the data before encoding takes place.

Universal coding schemes, like LZW, do not require advance knowledge and can build such knowledge on-the-fly.

LZW is the foremost technique for general purpose data compression due to its simplicity and versatility.

It is the basis of many PC utilities that claim to “double the capacity of your hard drive”.

LZW compression uses a code table, with 4096 as a common choice for the number of table entries.

Page 36: Digital Communication Techniques

Codes 0-255 in the code table are always assigned to represent single bytes from the input file.

When encoding begins, the code table contains only the first 256 entries, with the remainder of the table being blank.

Compression is achieved by using codes 256 through 4095 to represent sequences of bytes.

As the encoding continues, LZW identifies repeated sequences in the data, and adds them to the code table.

Decoding is achieved by taking each code from the compressed file, and translating it through the code table to find what character or characters it represents.

Lempel-Ziv algorithm


Page 37: Digital Communication Techniques

1  Initialize table with single character strings
2  P = first input character
3  WHILE not end of input stream
4      C = next input character
5      IF P + C is in the string table
6          P = P + C
7      ELSE
8          output the code for P
9          add P + C to the string table
10         P = C
11 END WHILE
12 output code for P
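The pseudocode maps directly onto the runnable Python sketch below (identifier names are ours). Applied to the string of Example 1 that follows, it reproduces the code sequence <66><65><256><257><65><260>.

def lzw_compress(data):
    table = {chr(i): i for i in range(256)}   # codes 0-255: single characters
    next_code = 256
    p = data[0]
    output = []
    for c in data[1:]:
        if p + c in table:
            p = p + c                          # keep extending the current match
        else:
            output.append(table[p])            # output the code for P
            table[p + c] = next_code           # add P + C to the string table
            next_code += 1
            p = c
    output.append(table[p])                    # output the code for the final P
    return output

print(lzw_compress("BABAABAAA"))               # [66, 65, 256, 257, 65, 260]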

Lempel-Ziv algorithm


Page 38: Digital Communication Techniques

Example 1: Compression using LZW

Example 1: Use the LZW algorithm to compress the string

BABAABAAA


Page 39: Digital Communication Techniques

Example 1: LZW Compression Step 1

BABAABAAA        P = A, C = empty

STRING TABLE                    ENCODER OUTPUT
string    codeword              representing    output code
BA        256                   B               66


Page 40: Digital Communication Techniques

Example 1: LZW Compression Step 2

BABAABAAA        P = B, C = empty

STRING TABLE                    ENCODER OUTPUT
string    codeword              representing    output code
BA        256                   B               66
AB        257                   A               65


Page 41: Digital Communication Techniques

Example 1: LZW Compression Step 3

BABAABAAA        P = A, C = empty

STRING TABLE                    ENCODER OUTPUT
string    codeword              representing    output code
BA        256                   B               66
AB        257                   A               65
BAA       258                   BA              256


Page 42: Digital Communication Techniques

Example 1: LZW Compression Step 4

BABAABAAA        P = A, C = empty

STRING TABLE                    ENCODER OUTPUT
string    codeword              representing    output code
BA        256                   B               66
AB        257                   A               65
BAA       258                   BA              256
ABA       259                   AB              257


Page 43: Digital Communication Techniques

Example 1: LZW Compression Step 5

BABAABAAA        P = A, C = A

STRING TABLE                    ENCODER OUTPUT
string    codeword              representing    output code
BA        256                   B               66
AB        257                   A               65
BAA       258                   BA              256
ABA       259                   AB              257
AA        260                   A               65

Page 44: Digital Communication Techniques

Example 1: LZW Compression Step 6

BABAABAAA        P = AA, C = empty

STRING TABLE                    ENCODER OUTPUT
string    codeword              representing    output code
BA        256                   B               66
AB        257                   A               65
BAA       258                   BA              256
ABA       259                   AB              257
AA        260                   A               65
                                AA              260

Page 45: Digital Communication Techniques

LZW Decompression

The LZW decompressor creates the same string table during decompression.

It starts with the first 256 table entries initialized to single characters.

The string table is updated for each character in the input stream, except the first one.

Decoding is achieved by reading codes and translating them through the code table as it is being built.


Page 46: Digital Communication Techniques

LZW Decompression Algorithm

1  Initialize table with single character strings
2  OLD = first input code
3  output translation of OLD
4  WHILE not end of input stream
5      NEW = next input code
6      IF NEW is not in the string table
7          S = translation of OLD
8          S = S + C
9      ELSE
10         S = translation of NEW
11     output S
12     C = first character of S
13     add translation of OLD + C to the string table
14     OLD = NEW
15 END WHILE
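A matching Python sketch of the decompressor (names are ours). The subtle point is the unseen-code case of lines 6-8: the new string must then be the previous translation plus its own first character.

def lzw_decompress(codes):
    table = {i: chr(i) for i in range(256)}    # codes 0-255: single characters
    next_code = 256
    old = codes[0]
    result = [table[old]]
    for new in codes[1:]:
        prev = table[old]
        s = table[new] if new in table else prev + prev[0]   # unseen-code case
        result.append(s)
        table[next_code] = prev + s[0]         # add translation(OLD) + C to table
        next_code += 1
        old = new
    return "".join(result)

print(lzw_decompress([66, 65, 256, 257, 65, 260]))   # "BABAABAAA"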


Page 47: Digital Communication Techniques

Example 2: LZW Decompression

Example 2: Use LZW to decompress the output sequence of

Example 1:

<66><65><256><257><65><260>.


Page 48: Digital Communication Techniques

Example 2: LZW Decompression Step 1

<66><65><256><257><65><260>        OLD = 66, NEW = 65, S = A, C = A

STRING TABLE                    DECODER OUTPUT
string    codeword              string
                                B
BA        256                   A


Page 49: Digital Communication Techniques

Example 2: LZW Decompression Step 2

<66><65><256><257><65><260>        OLD = 65, NEW = 256, S = BA, C = B

STRING TABLE                    DECODER OUTPUT
string    codeword              string
                                B
BA        256                   A
AB        257                   BA


Page 50: Digital Communication Techniques

Example 2: LZW Decompression Step 3

<66><65><256><257><65><260>        OLD = 256, NEW = 257, S = AB, C = A

STRING TABLE                    DECODER OUTPUT
string    codeword              string
                                B
BA        256                   A
AB        257                   BA
BAA       258                   AB


Page 51: Digital Communication Techniques

Example 2: LZW Decompression Step 4

<66><65><256><257><65><260>        OLD = 257, NEW = 65, S = A, C = A

STRING TABLE                    DECODER OUTPUT
string    codeword              string
                                B
BA        256                   A
AB        257                   BA
BAA       258                   AB
ABA       259                   A


Page 52: Digital Communication Techniques

Example 2: LZW Decompression Step 5

<66><65><256><257><65><260>        OLD = 65, NEW = 260, S = AA, C = A

STRING TABLE                    DECODER OUTPUT
string    codeword              string
                                B
BA        256                   A
AB        257                   BA
BAA       258                   AB
ABA       259                   A
AA        260                   AA


Page 53: Digital Communication Techniques

LZW Summary

This algorithm compresses repetitive sequences of data well.

Since the code words are 12 bits, any single encoded character will expand the data size rather than reduce it.

In this example, 72 bits of input (9 characters × 8 bits) are represented by 72 bits of output (6 codes × 12 bits). After a reasonable string table is built, compression improves dramatically.

Advantages of LZW over Huffman:
 LZW requires no prior information about the input data stream.
 LZW can compress the input stream in one single pass.
 Another advantage of LZW is its simplicity, allowing fast execution.


Page 54: Digital Communication Techniques

LZW: Limitations

 What happens when the dictionary gets too large (i.e., when all 4096 locations have been used)?

Here are some options usually implemented:

Simply forget about adding any more entries and use the table as is.

Throw the dictionary away when it reaches a certain size.

Throw the dictionary away when it is no longer effective at compression.

Clear entries 256-4095 and start building the dictionary again.

Some clever schemes rebuild a string table from the last N input characters.


Page 55: Digital Communication Techniques

Coding of Analog Sources

There are three types of analog source encoding:

• Temporal waveform coding: designed to represent digitally the time-domain characteristics of the signal. (i). Pulse-code modulation (PCM). (ii). Differential pulse-code modulation (DPCM). (iii). Delta modulation (DM).

• Spectral waveform coding: the signal waveform is subdivided into different frequency bands, and either the time waveform in each band or its spectral characteristics are encoded. (Each subband can be encoded as a time-domain waveform or in the frequency domain.)

• Model-based coding: based on a mathematical model of the source (the source is modeled as a linear system that results in the observed source output). Instead of transmitting samples of the source, the parameters of the linear system are transmitted with an appropriate excitation. If the number of parameters is sufficiently small, this provides large compression.


Page 56: Digital Communication Techniques

Rate–Distortion Theory

(i). In rate–distortion theory, the rate is usually understood as the number of bits per data sample to be stored or transmitted.

(ii). In the simplest case (which is actually used in most cases), the distortion is defined as the expected value of the square of the difference between the input and output signals (i.e., the mean squared error).

(iii). However, most lossy compression techniques operate on data that will be perceived by human consumers (listening to music, watching pictures and video).

(iv). The distortion measure should therefore preferably be modeled on human perception and perhaps aesthetics: much like the use of probability in lossless compression, distortion measures can ultimately be identified with loss functions as used in Bayesian estimation and decision theory.


Page 57: Digital Communication Techniques

(v). In audio compression, perceptual models (and therefore perceptual distortion measures) are relatively well developed and routinely used in compression techniques such as MP3 or Vorbis, but are often not easy to include in rate–distortion theory.

(vi). In image and video compression, the human perception models are less well developed and inclusion is mostly limited to the JPEG and MPEG weighting (quantization, normalization) matrix.

Rate–Distortion Theory


Page 58: Digital Communication Techniques

Rate Distortion Function

• RATE: the number of bits per data sample to be stored or transmitted.

• DISTORTION: a measure of the difference between input and output; common choices are 1. the Hamming distance and 2. the squared error.

• Rate-distortion theory is the branch of information theory addressing the problem of determining the minimal amount of entropy or information that should be communicated over a channel so that the source can be reconstructed at the receiver with a given distortion.
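As a standard closed-form reference point (stated here for orientation; it is not derived in these notes): for a memoryless Gaussian source of variance σ² under the mean-squared-error distortion measure,

R(D) = (1/2) log₂(σ²/D) for 0 ≤ D ≤ σ², and R(D) = 0 for D > σ².

The rate falls to zero once the allowed distortion reaches the source variance, because the receiver can then simply output the source mean.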


Page 59: Digital Communication Techniques

Rate Distortion Function


Page 60: Digital Communication Techniques

Variance of Input and Output Image (Example of Distortion)


Page 61: Digital Communication Techniques

Quantization

• Quantization, in mathematics and digital signal processing, is the process of mapping a large set of input values to a smaller set – such as rounding values to some unit of precision.

• A device or algorithmic function that performs quantization is called a quantizer. The round-off error introduced by quantization is referred to as quantization error.

• In analog-to-digital conversion, the difference between the actual analog value and the quantized digital value is called quantization error or quantization distortion.

• This error is due either to rounding or to truncation.

Page 62: Digital Communication Techniques

• The error signal is sometimes considered as an additional random signal called quantization noise.

• Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding.

• Quantization also forms the core of essentially all lossy compression algorithms.

• A quantizer describes the relation between the encoder input values and the decoder output values.

Quantization


Page 63: Digital Communication Techniques

Quantization


Page 64: Digital Communication Techniques

Quantizer

• The design of the quantizer has a significant impact on the amount of compression obtained and the loss incurred in a lossy compression scheme.

• A quantizer is nothing but the combination of an encoder mapping and a decoder mapping.

(i). Encoder mapping: the encoder divides the range of the source into a number of intervals; each interval is represented by a distinct codeword.

(ii). Decoder mapping: for each received codeword, the decoder generates a reconstruction value.


Page 65: Digital Communication Techniques

Components of a Quantizer

1. Encoder mapping: divides the range of values that the source generates into a number of intervals.

2. Each interval is then mapped to a codeword. It is a many-to-one, irreversible mapping.

3. The codeword identifies only the interval, not the original value.

4. If the source or sample values come from an analog source, the device is called an A/D converter.


Page 66: Digital Communication Techniques

Mapping of a 3-bit Encoder: the input axis (shown from −3.0 to 3.0) is divided into eight intervals, labeled left to right with the codes 000 through 111.


Page 67: Digital Communication Techniques

Mapping of a 3-bit D/A Converter

Input Code    Output
000           -3.5
001           -2.5
010           -1.5
011           -0.5
100            0.5
101            1.5
110            2.5
111            3.5


Page 68: Digital Communication Techniques

Components of a Quantizer

Decoder:- Given the codeword, the decoder gives an estimated value that the source might have generated.

Usually this is the midpoint of the interval, but a more accurate estimate will depend on the distribution of the values within the interval.

In estimating the value, the decoder might generate some errors.


Page 69: Digital Communication Techniques

Digitizing a Sine Wave

t       4cos(2πt)   A/D Output   D/A Output   Error
0.05    3.8         111          3.5          +0.3
0.10    3.2         111          3.5          −0.3
0.15    2.4         110          2.5          −0.1
0.20    1.2         101          1.5          −0.3
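The table can be reproduced with a short Python sketch of this 3-bit quantizer (step size 1.0, midpoint reconstruction; the function names are ours):

import math

def adc(x):
    # 3-bit encoder: unit-width intervals mapped to code indices 0..7.
    return min(max(math.floor(x) + 4, 0), 7)

def dac(code):
    # Reconstruct at the midpoint of the interval, as in the D/A table above.
    return code - 3.5

for t in (0.05, 0.10, 0.15, 0.20):
    x = 4 * math.cos(2 * math.pi * t)
    c = adc(x)
    print(f"t={t:.2f}  x={x:.1f}  code={c:03b}  output={dac(c)}  error={x - dac(c):+.1f}")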


Page 70: Digital Communication Techniques

Step Encoder


Page 71: Digital Communication Techniques

Step Encoder


Page 72: Digital Communication Techniques

Scalar Quantization

• Many of the fundamental ideas of quantization and compression are easily introduced in the simple context of scalar quantization.

• An example: any real number x can be rounded off to the nearest integer, say

q(x) = round(x)

• Maps the real line R (a continuous space) into a discrete space.


Page 73: Digital Communication Techniques

Vector Quantization

• Vector Quantization Rule:- Vector quantization (VQ) of X may be viewed as the classification of the outcomes of X into a discrete number of sets or cells in N-space.

• Each cell is represented by a vector output Yj.

• Given a distance measure d(x, y), we have:

• VQ output: Q(X) = Yj iff d(X, Yj) < d(X, Yi) for all i ≠ j.

• Quantization region: Vj = { X : d(X, Yj) < d(X, Yi) for all i ≠ j }.
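A minimal NumPy sketch of this nearest-neighbor rule; the 2-D codebook is an arbitrary illustrative choice:

import numpy as np

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0], [0.0, -1.5]])  # the Yj

def vq(x):
    d = np.linalg.norm(codebook - x, axis=1)   # d(X, Yj) for every cell
    j = int(np.argmin(d))                      # Q(X) = Yj with the smallest distance
    return j, codebook[j]

print(vq(np.array([0.9, 0.8])))                # -> (1, array([1., 1.]))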

Page 74: Digital Communication Techniques

Vector Quantization


Page 75: Digital Communication Techniques

The Schematic of a Vector Quantizer


Page 76: Digital Communication Techniques

BCH Codes

• BCH (Bose–Chaudhuri–Hocquenghem) codes form a large class of multiple random-error-correcting codes.

• They were first discovered by A. Hocquenghem in 1959 and independently by R. C. Bose and D. K. Ray-Chaudhuri in 1960.

• BCH codes are cyclic codes. Binary BCH codes are the most popular.

• The first decoding algorithm for binary BCH codes was devised by Peterson in 1960. Since then, Peterson’s algorithm has been refined by Berlekamp, Massey, Chien, Forney and many others.


Page 77: Digital Communication Techniques

BCH codes

• In coding theory the BCH codes form a class of cyclic error-correcting codes that are constructed using finite fields.

• BCH codes were invented in 1959 by French mathematician Alexis Hocquenghem, and independently in 1960 by Raj Bose and D. K. Ray-Chaudhuri.

• The acronym BCH comprises the initials of these inventors' names.

• One of the key features of BCH codes is that during code design, there is a precise control over the number of symbol errors correctable by the code.


Page 78: Digital Communication Techniques

• It is possible to design binary BCH codes that can correct multiple bit errors.

• Another advantage of BCH codes is the ease with which they can be decoded, namely, via an algebraic method known as syndrome decoding.

• This simplifies the design of the decoder for these codes, using small low-power electronic hardware.

• BCH codes are used in applications such as satellite communications, compact disc players, DVDs, disk drives, solid-state drives and two-dimensional bar codes.

BCH codes


Page 79: Digital Communication Techniques

Parameters of BCH Codes

For any positive integers m ≥ 3 and t < 2^(m−1), there exists a binary BCH code with:

Block length:           n = 2^m − 1
Number of check bits:   n − k ≤ mt
Minimum distance:       d_min ≥ 2t + 1

• The code is t-error correcting.

• For example, for m = 6 and t = 3:
  n = 2^6 − 1 = 63
  n − k = mt = 18, so k = 45
  d_min ≥ 2t + 1 = 7
  This is the triple-error-correcting (63, 45) BCH code.


Page 80: Digital Communication Techniques

Generator Polynomial of BCH Codes

• Let α be a primitive element in GF(2^m). For 1 ≤ i ≤ t, let m_{2i−1}(x) be the minimal polynomial of the field element α^{2i−1}.

• The generator polynomial g(x) of a t-error-correcting primitive BCH code of length 2^m − 1 is given by

g(x) = LCM( m₁(x), m₃(x), ..., m_{2t−1}(x) )

LCM: Least Common Multiple. Note that the degree of g(x), which equals n − k, is at most mt.


Page 81: Digital Communication Techniques

BCH Encoding

• Let m(x) be the message polynomial to be encoded:

m(x) = m₀ + m₁x + ... + m_{k−1}x^{k−1},   where mᵢ ∈ GF(2).

• Dividing x^{n−k} m(x) by g(x), we have

x^{n−k} m(x) = q(x) g(x) + p(x),

where the remainder p(x) = p₀ + p₁x + ... + p_{n−k−1}x^{n−k−1} gives the check bits.

• Then

u(x) = p(x) + x^{n−k} m(x)

is the codeword polynomial for the message m(x).
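These steps translate into the hedged Python sketch below (polynomials as bit lists, lowest power first; helper names are ours). The demo generator g(x) = 1 + x + x³ is the t = 1, m = 3 binary BCH code, i.e. the Hamming (7, 4) code.

def gf2_remainder(dividend, divisor):
    # Remainder of polynomial division over GF(2) (coefficients XOR).
    r = list(dividend)
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:
            for j, gj in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= gj
    return r[:len(divisor) - 1]

def bch_encode(msg, g):
    shifted = [0] * (len(g) - 1) + list(msg)   # x^(n-k) * m(x)
    p = gf2_remainder(shifted, g)              # parity polynomial p(x)
    return p + list(msg)                       # u(x) = p(x) + x^(n-k) m(x)

print(bch_encode([1, 0, 1, 1], [1, 1, 0, 1])) # (7,4) codeword: [1, 0, 0, 1, 0, 1, 1]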

Page 82: Digital Communication Techniques

Decoding of BCH Codes

• Consider a BCH code with n = 2^m − 1 and generator polynomial g(x).

• Suppose a code polynomial

v(x) = v₀ + v₁x + ... + v_{n−1}x^{n−1}

is transmitted, and let

r(x) = r₀ + r₁x + ... + r_{n−1}x^{n−1}

be the received polynomial.

• Then r(x) = v(x) + e(x), where e(x) is the error polynomial.


Page 83: Digital Communication Techniques

Reed–Solomon (RS) codes

• In coding theory, Reed–Solomon (RS) codes are non-binary cyclic error-correcting codes invented by Irving S. Reed and Gustave Solomon.

• They described a systematic way of building codes that could detect and correct multiple random symbol errors.

• By adding t check symbols to the data, an RS code can detect any combination of up to t erroneous symbols, or correct up to ⌊t/2⌋ symbols.

• As an erasure code, it can correct up to t known erasures, or it can detect and correct combinations of errors and erasures.


Page 84: Digital Communication Techniques

• Furthermore, RS codes are suitable as multiple-burst bit-error correcting codes, since a sequence of b + 1 consecutive bit errors can affect at most two symbols of size b.

• The choice of t is up to the designer of the code, and may be selected within wide limits.

• In Reed–Solomon coding, source symbols are viewed as coefficients of a polynomial p(x) over a finite field.

• The original idea was to create n code symbols from k source symbols by oversampling p(x) at n > k distinct points, transmit the sampled points, and use interpolation techniques at the receiver to recover the original message.

Reed–Solomon (RS) codes

Page 85: Digital Communication Techniques

• That is not how RS codes are used today. Instead, RS codes are viewed as cyclic BCH codes, where encoding symbols are derived from the coefficients of a polynomial constructed by multiplying p(x) with a cyclic generator polynomial.

• This gives rise to efficient decoding algorithms.

• Reed–Solomon codes have since found important applications from deep-space communication to consumer electronics.

• They are prominently used in consumer electronics such as CDs, DVDs and Blu-ray Discs, in data transmission technologies such as DSL and WiMAX, in broadcast systems such as DVB and ATSC, and in computer applications such as RAID 6 systems.

Reed–Solomon (RS) codes


Page 86: Digital Communication Techniques

Motivation for RS Soft Decision Decoder

• Hard-decision decoding does not fully exploit the decoding capability.

• Efficient soft-decision decoding of RS codes remains an open problem.

RS Coded Turbo Equalization System (block diagram): source → RS encoder → interleaver (Π) → PR encoder → AWGN channel; at the receiver, the channel equalizer and the RS decoder iteratively exchange a-priori and extrinsic information through interleaving (Π) and de-interleaving, and the RS decoder's hard decisions go to the sink.

• A soft-input soft-output (SISO) algorithm is favorable.


Page 87: Digital Communication Techniques

Applications

• Data storage

• Bar codes

• Data transmission

• Space transmission


Page 88: Digital Communication Techniques

Reed Muller Codes

• Reed Muller codes are some of the oldest error correcting codes.

• Error correcting codes are very useful in sending information over long distances or through channels where errors might occur in the message.

• They have become more prevalent as telecommunications have expanded and developed a use for codes that can self-correct.

• Reed–Muller codes were invented in 1954 by D. E. Muller and I. S. Reed.

• In 1972, a Reed–Muller code was used by Mariner 9 to transmit black and white photographs of Mars.

• Reed–Muller codes are relatively easy to decode.

Page 89: Digital Communication Techniques

Decoding Reed Muller:-

Decoding Reed Muller encoded messages is more complex than encoding them.

The theory behind encoding and decoding is based on the distance between vectors.

The distance between any two vectors is the number of places in the two vectors that have different values.

The basis for Reed–Muller decoding is the assumption that the closest codeword in R(r, m) to the received message is the original encoded message.

Reed Muller Codes

Page 90: Digital Communication Techniques

• This method of decoding is carried out by an algorithm built on the vector operations described below.

• The vector spaces used here consist of strings of length 2^m, where m is a positive integer, of numbers in F₂ = {0, 1}.

• The code words of a Reed–Muller code form a subspace of such a space.

• Vectors can be manipulated by three main operations: addition, multiplication, and the dot product.

Reed Muller Codes

Page 91: Digital Communication Techniques

• For two vectors x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn), addition is defined by x + y = (x1 + y1, x2 + y2, ..., xn + yn),

• where each xi or yi is either 1 or 0, and

• 1 + 1 = 0; 0 + 1 = 1; 1 + 0 = 1; 0 + 0 = 0.

• For example, if x and y are defined as x = (10011110) and y = (11100001), then the sum of x and y is x + y = (10011110) + (11100001) = (01111111).

Reed Muller Codes


Page 92: Digital Communication Techniques

Convolution codes


Page 93: Digital Communication Techniques

• In telecommunication, a convolutional code is a type of error-correcting code in which each m-bit information symbol (each m-bit string) to be encoded is transformed into an n-bit symbol, where m/n is the code rate (n ≥ m).

• The transformation is a function of the last k information symbols, where k is the constraint length of the code.

• Convolutional codes are used extensively in numerous applications in order to achieve reliable data transfer, including digital video, radio, mobile communication, and satellite communication.

• These codes are often implemented in concatenation with a hard-decision code, particularly Reed Solomon.

• Prior to turbo codes, such constructions were the most efficient, coming closest to the Shannon limit.

Convolution codes


Page 94: Digital Communication Techniques

• Convolutional Encoding:-

• To convolutionally encode data, start with k memory registers, each holding one input bit. Unless otherwise specified, all memory registers start with a value of 0.

• The encoder has n modulo-2 adders (a modulo-2 adder can be implemented with a single Boolean XOR gate, where the logic is 0+0 = 0, 0+1 = 1, 1+0 = 1, 1+1 = 0), and n generator polynomials – one for each adder (see the figure below).

• An input bit m1 is fed into the leftmost register. Using the generator polynomials and the existing values in the remaining registers, the encoder outputs n bits.

Convolution codes


Page 95: Digital Communication Techniques

• Now bit-shift all register values to the right (m1 moves to m0, m0 moves to m−1) and wait for the next input bit.

• If there are no remaining input bits, the encoder continues to output bits until all registers have returned to the zero state.

• The figure below shows a rate-1/3 (m/n) encoder with constraint length (k) of 3. The generator polynomials are G1 = (1,1,1), G2 = (0,1,1) and G3 = (1,0,1). Therefore, the output bits are calculated (modulo 2) as follows:

n1 = m1 + m0 + m−1
n2 = m0 + m−1
n3 = m1 + m−1
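These three equations give the runnable Python sketch below (names are ours); the two trailing zeros flush the registers back to the all-zero state, as described above.

def conv_encode(bits):
    m0 = m_1 = 0                        # both memory registers start at 0
    out = []
    for m1 in list(bits) + [0, 0]:      # two flushing zeros
        out += [(m1 + m0 + m_1) % 2,    # n1 = m1 + m0 + m-1  (G1 = 1,1,1)
                (m0 + m_1) % 2,         # n2 = m0 + m-1       (G2 = 0,1,1)
                (m1 + m_1) % 2]         # n3 = m1 + m-1       (G3 = 1,0,1)
        m_1, m0 = m0, m1                # shift right: m1 -> m0, m0 -> m-1
    return out

print(conv_encode([1, 0]))              # [1,0,1, 1,1,0, 1,1,1, 0,0,0]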

Convolution codes


Page 96: Digital Communication Techniques

Convolution codes


Page 97: Digital Communication Techniques

Convolutional codes

• Convolutional codes map information to code bits sequentially by convolving a sequence of information bits with “generator” sequences.

• A convolutional encoder encodes K information bits into N > K code bits at each time step.

• Convolutional codes can be regarded as block codes for which the encoder has a certain structure such that we can express the encoding operation as convolution.


Page 98: Digital Communication Techniques

• The convolutional code is linear.

• Code bits generated at time step i are affected by information bits up to M time steps i – 1, i – 2, …, i – M back in time. M is the maximal delay of information bits in the encoder.

• Code memory is the (minimal) number of registers to construct an encoding circuit for the code.

• The constraint length is the overall number of information bits affecting code bits generated at time step i: constraint length = code memory + K = MK + K = (M + 1)K.

• A convolutional code is systematic if the N code bits generated at time step i contain the K information bits.

Convolutional codes


Page 99: Digital Communication Techniques


Modified State Diagram

A path from state (00) back to (00) is denoted by D^i L^j N^k, where the exponent i is the path weight, j is the path length, and k is the number of information 1's.


Page 100: Digital Communication Techniques


Transfer Function

• The transfer function T(D, L, N) is

T(D, L, N) = D⁵L³N / (1 − DLN(1 + L)).


Page 101: Digital Communication Techniques

• The distance properties and the error-rate performance of a convolutional code can be obtained from its transfer function.

• Since a convolutional code is linear, the set of Hamming distances of the code sequences generated up to some stage in the trellis, measured from the all-zero code sequence, is the same as the set of distances of the code sequences with respect to any other code sequence.

• Thus, we assume that the all-zero path is the input to the encoder.

Transfer Function


Page 102: Digital Communication Techniques


Page 103: Digital Communication Techniques


Transfer Function

• Performing long division:

T(D, L, N) = D⁵L³N + D⁶L⁴N² + D⁶L⁵N² + D⁷L⁵N³ + ...

• If we are interested only in the Hamming-distance property of the code, set N = 1 and L = 1 to get the distance transfer function:

T(D) = D⁵ + 2D⁶ + 4D⁷ + ...

There is one code sequence of weight 5, therefore d_free = 5. There are two code sequences of weight 6, four code sequences of weight 7, and so on.


Page 104: Digital Communication Techniques


Performance

• The event error probability is defined as the probability that the decoder selects a code sequence that was not transmitted.

• For two codewords at Hamming distance d on a binary symmetric channel with crossover probability p, the pairwise error probability is

PEP(d) = Σ_{i = (d+1)/2}^{d} C(d, i) p^i (1 − p)^{d−i}   (for odd d),

which can be upper-bounded by

PEP(d) ≤ [ 4p(1 − p) ]^{d/2}.

• The upper bound for the event error probability (the decoder leaving the correct node for an incorrect one) is given by

P_event ≤ Σ_{d ≥ d_free} A(d) · PEP(d),

where A(d) is the number of codewords at distance d.


Page 105: Digital Communication Techniques


Performance

• Using T(D, L, N), we can formulate this as

P_event < T(D, L, N) evaluated at L = 1, N = 1, D = 2√(p(1 − p)).

• The bit error rate (not probability) is written as

P_bit < dT(D, L, N)/dN evaluated at L = 1, N = 1, D = 2√(p(1 − p)).


Page 106: Digital Communication Techniques

Viterbi Decoding Algorithm

• The Viterbi algorithm is a standard component of tens of millions of high-speed modems. It is a key building block of the modern information infrastructure.

• The symbol "VA" is ubiquitous in the block diagrams of modern receivers.

• Essentially, the VA finds a path through any Markov graph, which is a sequence of states governed by a Markov chain.

• It has many practical applications:
– convolutional decoding and channel trellis decoding,
– fading communication channels,
– partial-response channels in recording systems,
– optical character recognition,
– voice recognition,
– DNA sequence analysis,
– etc.


Page 107: Digital Communication Techniques

• A Viterbi decoder uses the Viterbi algorithm to decode a bitstream that has been encoded using a convolutional code.

• There are other algorithms for decoding a convolutionally encoded stream (for example, the Fano algorithm).

• The Viterbi algorithm is the most resource-consuming of these, but it performs maximum-likelihood decoding.

• It is most often used for decoding convolutional codes with constraint lengths k<=10, but values up to k=15 are used in practice.

• Viterbi decoding was developed by Andrew J. Viterbi and published in the paper "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm", IEEE Transactions on Information Theory, vol. IT-13, pp. 260-269, April 1967.
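For concreteness, here is a hedged Python sketch of hard-decision Viterbi decoding for the rate-1/3, constraint-length-3 encoder introduced earlier (state = (m0, m−1); names are ours). It keeps one survivor path per state under the Hamming metric, and in the demo it corrects a single flipped bit:

def viterbi_decode(received, n_info):
    def step(state, bit):                        # one trellis transition
        m0, m_1 = state
        out = ((bit + m0 + m_1) % 2, (m0 + m_1) % 2, (bit + m_1) % 2)
        return (bit, m0), out                    # next state, 3 output bits
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    metric = {s: 0 if s == (0, 0) else float("inf") for s in states}
    path = {s: [] for s in states}
    for t in range(n_info + 2):                  # +2 for the flushing zeros
        r = received[3 * t:3 * t + 3]
        new_metric = {s: float("inf") for s in states}
        new_path = {s: [] for s in states}
        for s in states:
            for bit in (0, 1):
                ns, out = step(s, bit)
                d = metric[s] + sum(a != b for a, b in zip(out, r))
                if d < new_metric[ns]:           # keep the better survivor
                    new_metric[ns], new_path[ns] = d, path[s] + [bit]
        metric, path = new_metric, new_path
    return path[(0, 0)][:n_info]                 # the path must end in state (0,0)

# The encoding of [1, 0] with one bit flipped in the third triple still decodes:
print(viterbi_decode([1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0], 2))   # -> [1, 0]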

Viterbi Decoding Algorithm


Page 108: Digital Communication Techniques

• Applications:-

(i). The Viterbi decoding algorithm is widely used in the following areas:

(ii). Radio communication: digital TV (ATSC, QAM, DVB-T, etc.), radio relay, satellite communications, PSK31 digital mode for amateur radio.

(iii). Decoding trellis-coded modulation (TCM), the technique used in telephone-line modems to squeeze high spectral efficiency out of 3 kHz-bandwidth analog telephone lines.

(iv). Computer storage devices such as hard disk drives.

(v). Automatic speech recognition.

Viterbi Decoding Algorithm


Page 109: Digital Communication Techniques

Viterbi Decoding

• Viterbi decoding is one of two types of decoding algorithms used with convolutional encoding; the other type is sequential decoding.

• Sequential decoding has the advantage that it can perform very well with long-constraint-length convolutional codes, but it has a variable decoding time.

• A discussion of sequential decoding algorithms is beyond the scope of this tutorial; the reader can find sources discussing this topic in the Books about Forward Error Correction section of the bibliography.

• Viterbi decoding has the advantage that it has a fixed decoding time. It is well suited to hardware decoder implementation.


Page 110: Digital Communication Techniques

• But its computational requirements grow exponentially as a function of the constraint length, so it is usually limited in practice to constraint lengths of K = 9 or less.

• Stanford Telecom produces a K = 9 Viterbi decoder that operates at rates up to 96 kbps, and a K = 7 Viterbi decoder that operates at up to 45 Mbps.

• Advanced Wireless Technologies offers a K = 9 Viterbi decoder that operates at rates up to 2 Mbps.

• NTT has announced a Viterbi decoder that operates at 60 Mbps, but I don't know its commercial availability.

• Moore's Law applies to Viterbi decoders as well as to microprocessors, so consider the rates mentioned above as a snapshot of the state-of-the-art taken in early 1999.

Viterbi Decoding


Page 111: Digital Communication Techniques

Trellis Coded Modulation

• In telecommunication, trellis modulation (also known as trellis coded modulation, or simply TCM).

• TCM is a modulation scheme which allows highly efficient transmission of information over band-limited channels such as telephone lines.

• Trellis modulation was invented by Gottfried Ungerboeck, who was working for IBM in the 1970s.


Page 112: Digital Communication Techniques

• The name trellis was coined because a state diagram of the technique, when drawn on paper, closely resembles the trellis lattice used in rose gardens.

• The scheme is basically a convolutional code of rate r/(r + 1).

• Ungerboeck's unique contribution is to apply the parity check on a per symbol basis instead of the older technique of applying it to the bit stream then modulating the bits.

• The key idea he termed Mapping by Set Partitions.

Trellis Coded Modulation.


Page 113: Digital Communication Techniques

• This idea was to group the symbols in a tree like fashion then separate them into two limbs of equal size.

• At each limb of the tree, the symbols were further apart.

• Although hard to visualize in multi-dimensions, a simple one dimension example illustrates the basic procedure.

• Suppose the symbols are located at [1, 2, 3, 4, ...].

• Then take all odd symbols and place them in one group, and the even symbols in the second group.

Trellis Coded Modulation.


Page 114: Digital Communication Techniques

• This is not quite accurate because Ungerboeck was looking at the two dimensional problem, but the principle is the same, take every other one for each group and repeat the procedure for each tree limb.

• He next described a method of assigning the encoded bit stream onto the symbols in a very systematic procedure. Once this procedure was fully described, his next step was to program the algorithms into a computer and let the computer search for the best codes.

• The results were astonishing. Even the simplest (4-state) code produced error rates nearly one one-thousandth of an equivalent uncoded system.

• For two years Ungerboeck kept these results private and only conveyed them to close colleagues.

• Finally, in 1982, Ungerboeck published a paper describing the principles of trellis modulation.

Trellis Coded Modulation.


Page 115: Digital Communication Techniques

References

1. K. Sam Shanmugam, “Digital and Analog Communication Systems”, John Wiley.

2. Simon Haykin, “An Introduction to Analog and Digital Communication”, John Wiley.

3. Bernard Sklar, “Digital Communication: Fundamentals & Applications”, Pearson Education.

4. Hsu, “Analog & Digital Communications”, Tata McGraw-Hill, 2nd edition.


Page 116: Digital Communication Techniques

Thank You!!!!! Have A Nice Day!!!!!
