
DHANALAKSHMI COLLEGE OF ENGINEERING, CHENNAI

DEPARTMENT OF INFORMATION TECHNOLOGY

IT6002 - INFORMATION THEORY AND CODING TECHNIQUES

UNIT – I INFORMATION ENTROPY FUNDAMENTALS

PART – A (2 MARKS)

1. Write the Kraft-McMillan inequality for the instantaneous code. (M - 10)

Kraft-McMillan inequality:

K-1
∑ 2^(-lk) ≤ 1
k=0

The codeword lengths of a prefix code satisfy the above equation. Here, lk is the codeword length of the kth symbol sk, and K is the total number of symbols in the alphabet. Applying the equation to a prefix code with four symbols:

3
∑ 2^(-lk) = 2^(-l0) + 2^(-l1) + 2^(-l2) + 2^(-l3)
k=0

From the code, observe that l0 = 1, l1 = 2, l2 = 3 and l3 = 3 digits (bits). Putting these values in the above equation,

3
∑ 2^(-lk) = 2^(-1) + 2^(-2) + 2^(-3) + 2^(-3) = 1
k=0

Thus the Kraft-McMillan inequality is satisfied.

2. What is meant by prefix property?

In prefix coding, no codeword is a prefix of any other codeword.
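The check above is easy to reproduce programmatically; a minimal sketch, with the codeword lengths l0..l3 taken from the example:

```python
# Kraft-McMillan check: a binary prefix code with codeword lengths l_k
# exists if and only if sum(2**-l_k) <= 1.

def kraft_sum(lengths, radix=2):
    """Sum of radix**(-l) over all codeword lengths."""
    return sum(radix ** (-l) for l in lengths)

lengths = [1, 2, 3, 3]            # l0, l1, l2, l3 from the example above
total = kraft_sum(lengths)
assert total <= 1                 # inequality satisfied -> a prefix code exists
print(total)                      # 1.0: the code is complete (no slack left)
```

A sum strictly below 1 would mean the code leaves room for extra codewords; a sum above 1 means no prefix code with those lengths can exist.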

i. Prefix codes are uniquely decodable.
ii. Prefix codes satisfy the Kraft-McMillan inequality.

3. State the properties of entropy. (M - 11)

Properties of entropy:

a. Entropy is zero if the event is sure or impossible: H = 0 if pk = 0 or 1.
b. When pk = 1/M for all M symbols, the symbols are equally likely; for such a source the entropy is H = log2 M.
c. The upper bound on entropy is Hmax = log2 M.
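These properties can be checked numerically with a small entropy function (an illustrative sketch, not part of the syllabus answer):

```python
from math import log2

def entropy(probs):
    """H = sum over k of pk * log2(1/pk); zero-probability symbols contribute 0."""
    return sum(p * log2(1 / p) for p in probs if p > 0)

M = 4
print(entropy([1.0, 0.0]))          # 0.0: a sure event carries no information
print(entropy([1 / M] * M))         # log2(M) = 2.0 for equally likely symbols
print(entropy([0.5, 0.25, 0.25]))   # 1.5, below the upper bound log2(3)
```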

4. What is the Shannon limit? (D - 13)

The Shannon-Hartley theorem gives the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise. It is an application of the noisy-channel coding theorem to the archetypal case of a continuous-time analog communications channel subject to Gaussian noise.

5. State the channel capacity theorem. (M - 11)(N - 10)

The channel capacity of a discrete memoryless channel is the maximum average mutual information, where the maximization is taken with respect to the input probabilities P(xi). For a band-limited Gaussian channel,

C = B log2(1 + S/N) bits/sec

Here B is the channel bandwidth and S/N is the signal-to-noise ratio.

6. What is the relationship between uncertainty and information?

Uncertainty:
i. It is the probability of occurrence of the events.
ii. The uncertainty of an event decides the amount of information.

Information:
i. It is the content received due to the occurrence of events.
ii. The information received from a certain event is zero, but the information received from a rare event is maximum.

7. What is Huffman coding?

In Huffman coding, a separate codeword is used for each character. The codewords produced by Huffman coding give an optimum (minimum average length) code.

8. Calculate the entropy of a source emitting symbols with probabilities x = 1/5, y = 1/2, z = 1/3.

H = 1/5 log2 5 + 1/2 log2 2 + 1/3 log2 3 ≈ 1.49 bits/symbol

9. Define – Uncertainty

Uncertainty:
i. It is the probability of occurrence of the events.
ii. The uncertainty of an event decides the amount of information.

10. Differentiate: Uncertainty, Information and Entropy. (N - 10)

S.No | Uncertainty | Information | Entropy
1. | It is the probability of occurrence of the events. | It is the content received due to the occurrence of events. | It is the average information received due to the occurrence of multiple events.
2. | The uncertainty of an event decides the amount of information. | Information received from a certain event is zero, but information received from a rare event is maximum. | Entropy is zero if the event is sure or impossible.

11. List out the properties of mutual information.

i. The mutual information is symmetric: I(X; Y) = I(Y; X).
ii. The mutual information is always non-negative: I(X; Y) ≥ 0.
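Both properties can be verified on a small joint distribution; the 2x2 matrix below is an assumed example, not from the question paper:

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) = sum of P(x,y) * log2(P(x,y) / (P(x)P(y))) over the joint matrix."""
    px = [sum(row) for row in joint]           # marginal of X (row sums)
    py = [sum(col) for col in zip(*joint)]     # marginal of Y (column sums)
    return sum(p * log2(p / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, p in enumerate(row) if p > 0)

pxy = [[0.4, 0.1],
       [0.1, 0.4]]                             # assumed joint distribution P(x, y)
pyx = [list(col) for col in zip(*pxy)]         # transposing swaps the roles of X and Y
print(mutual_information(pxy))                 # non-negative
print(mutual_information(pyx))                 # identical value: I(X;Y) = I(Y;X)
```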

12. Calculate the amount of information if pk = 1/4.

Amount of information: Ik = log2 (1/pk) = log2 4 = log10 4 / log10 2 = 2 bits

13. Define – Shannon-Fano Code Method

The Shannon-Fano method is simpler than Huffman coding but generally less efficient, so it is not usually used on its own. The symbols are sorted in order of decreasing probability and repeatedly partitioned into two groups of probability as nearly equal as possible, with 0 assigned to one group and 1 to the other.

14. What is average information?

Average information or entropy is given as,

H = ∑ pk log2 (1/pk), the sum taken over k = 1 to M

Here, pk is the probability of the kth message and M is the total number of messages generated by the source.

15. Define – Rate of information transmission across the channel Rate of information transmission across the channel is given as, Dt = [H(X) – H(X/Y)]r bits/sec

Here H(X) is the entropy of the source. H(X/Y) is the conditional entropy.

16. Define – Bandwidth Efficiency

The ratio of channel capacity to bandwidth is called bandwidth efficiency:

Bandwidth efficiency = channel capacity (C) / bandwidth (B)

17. What is the capacity of a channel having infinite bandwidth?

The capacity of such a channel is given as,

C = 1.44 (S/N0)

Here S is the signal power and N0 is the noise power spectral density.
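The infinite-bandwidth limit can be illustrated numerically: with S/N0 held fixed, C = B log2(1 + S/(N0 B)) creeps up to 1.44 (S/N0) as B grows. The value of S/N0 below is an assumed example:

```python
from math import log2

def capacity(B, snr):
    """Shannon-Hartley capacity C = B * log2(1 + S/N) in bits/sec."""
    return B * log2(1 + snr)

S_over_N0 = 1000.0                        # assumed signal power / noise PSD
for B in (1e3, 1e5, 1e7):
    C = capacity(B, S_over_N0 / B)        # with N = N0 * B, S/N = (S/N0) / B
    print(f"B = {B:10.0f} Hz -> C = {C:7.1f} bits/sec")
# C approaches the limit 1.44 * (S/N0) = 1440 bits/sec as B grows without bound
```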

18. Define – Discrete Memoryless Channel

For a discrete memoryless channel, the input and output are both discrete random variables. The current output depends only upon the current input for such a channel.

19. Find the entropy of a source emitting symbols x, y, z with probabilities of 1/5, 1/2, 1/3 respectively.

p1 = 1/5, p2 = 1/2, p3 = 1/3

H = ∑ pk log2 (1/pk) = 1/5 log2 5 + 1/2 log2 2 + 1/3 log2 3 ≈ 1.49 bits/symbol

20. An alphabet set contains 3 letters A, B, C transmitted with probabilities of 1/3, 1/4, 1/4. Find the entropy.

p1 = 1/3, p2 = 1/4, p3 = 1/4

H = ∑ pk log2 (1/pk) = 1/3 log2 3 + 1/4 log2 4 + 1/4 log2 4 ≈ 1.528 bits/symbol

21. Define – Information

The amount of information is Ik = log2 (1/pk), where pk is the probability of occurrence of the kth message.

22. Write the properties of information.

i. If there is more uncertainty about the message, the information carried is also more.
ii. If the receiver knows the message being transmitted, the amount of information carried is zero.
iii. If I1 is the information carried by message m1 and I2 is the information carried by m2, then the amount of information carried jointly by m1 and m2 (when independent) is I1 + I2.

Part – B (16 MARKS)

1. Explain briefly the source coding theorem. (N - 10)

2. Explain Huffman encoding algorithm. (M - 11)

3. A discrete memory less source has an alphabet of five symbols whose probabilities of occurrence are

as described here :

Symbols S0 S1 S2 S3 S4

Probability 0.4 0.2 0.2 0.1 0.1

Compute the Huffman code for this source, the entropy, and the average codeword length of the source encoder. (M - 11)(N - 10)
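The Huffman merge for this source can be sketched by tracking only codeword lengths (a heap-based illustration, not the exam's tabular working):

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Codeword lengths produced by a binary Huffman merge."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)        # two least probable groups
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1                 # each merge adds one bit to its members
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

probs = [0.4, 0.2, 0.2, 0.1, 0.1]           # S0..S4 from the question
lens = huffman_lengths(probs)
L = sum(p * l for p, l in zip(probs, lens))     # average codeword length
H = sum(p * log2(1 / p) for p in probs)         # source entropy
print(f"lengths = {lens}, L = {L:.2f} bits, H = {H:.3f} bits")
```

For this source the merge yields lengths {2, 2, 2, 3, 3}, so the average codeword length is 2.2 bits/symbol against an entropy of about 2.12 bits/symbol.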

4. Explain the properties of entropy.

5. Explain channel capacity and derive the channel capacity for binary symmetric channel.

Channel capacity can be expressed in terms of mutual information: C = max I(X; Y), where the maximization is over the input probabilities P(xi).

6. Explain mutual information and its properties. (J - 14) (N - 10)

7. In the messages, each letter occurs the following percentage of times:

Letter           A    B    C    D    E    F
% of occurrence  23   20   11   9    15   22

(1) Calculate the entropy of this alphabet of symbols.

(2) Devise a codebook using Huffman technique and find the average codeword length.

(3) Devise a codebook using Shannon-Fano technique and find the average codeword length.

(4) Compare and comment on the results of both techniques.

8. A discrete memoryless source has five symbols X1, X2, X3, X4 and X5 with probabilities 0.4, 0.19, 0.16, 0.15 and 0.15 respectively. Construct a Shannon-Fano code for the source and find the code efficiency. The Shannon-Fano algorithm is worked out in the table below.

Message | Probability of messages | I | II | III | Codeword for messages | Number of bits per message (nK)
X1 | 0.4  | 0 |   |   | 0   | 1
X2 | 0.19 | 1 | 0 | 0 | 100 | 3
X3 | 0.16 | 1 | 0 | 1 | 101 | 3
X4 | 0.15 | 1 | 1 | 0 | 110 | 3
X5 | 0.15 | 1 | 1 | 1 | 111 | 3
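From the completed table, the average codeword length and the code efficiency follow directly. A small sketch using the probabilities and codeword lengths exactly as tabulated (the figures are the source's):

```python
from math import log2

# Probabilities and codeword lengths exactly as tabulated above
probs  = [0.40, 0.19, 0.16, 0.15, 0.15]
n_bits = [1, 3, 3, 3, 3]

L = sum(p * n for p, n in zip(probs, n_bits))   # average codeword length
H = sum(p * log2(1 / p) for p in probs)         # entropy of the tabulated figures
print(f"L = {L:.2f} bits/message, efficiency H/L = {H / L:.3f}")
```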

9. Write short notes on:

(a) Binary Communication channel

(b) Binary Symmetric channel.

10. i) How will you calculate channel capacity?

ii) Write channel coding theorem and channel capacity theorem

iii) Calculate the entropy for the given sample data AAABBBCCD

iv) Prove Shannon information capacity theorem

11. i) Use differential entropy to compare the randomness of random variables

ii) A four-symbol alphabet has the following probabilities:

Pr(a0) = 1/2
Pr(a1) = 1/4
Pr(a2) = 1/8
Pr(a3) = 1/8

and an entropy of 1.75 bits. Find a codebook for this four-letter alphabet that satisfies the source coding theorem.

iii) Write the entropy for a binary symmetric source

iv) Write down the channel capacity for a binary channel

12. (a) A discrete memoryless source has an alphabet of five symbols whose probabilities of occurrence are as described here:

Symbols:     X1   X2   X3   X4   X5
Probability: 0.2  0.2  0.1  0.1  0.4

Compute the Huffman code for this source. Also calculate the efficiency of the source encoder.

(b) A voice-grade channel of the telephone network has a bandwidth of 3.4 kHz. Calculate:

(i) the information capacity of the telephone channel for a signal-to-noise ratio of 30 dB, and

(ii) the minimum signal-to-noise ratio required to support information transmission through the telephone channel at the rate of 9.6 kb/s. (8)

13. A discrete memoryless source has an alphabet of seven symbols whose probabilities of occurrence are as described below:

Symbol: s0    s1    s2      s3      s4     s5     s6
Prob:   0.25  0.25  0.0625  0.0625  0.125  0.125  0.125

(i) Compute the Huffman code for this source, moving a combined symbol as high as possible.

(ii) Calculate the coding efficiency.

(iii) Why does the computed code have an efficiency of 100%?

14. (i) Consider the following binary sequence: 111010011000101110100. Use the Lempel-Ziv algorithm to encode this sequence. Assume that the binary symbols 1 and 0 are already in the codebook.

(ii) What are the advantages of the Lempel-Ziv encoding algorithm over Huffman coding?
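For part (i), an LZ78-style parse can be sketched as below. This version starts from an empty dictionary; the question's variant seeds the dictionary with the symbols 0 and 1, so the exact phrase indices would differ, but the parsing idea is the same. Any unfinished phrase at the end of the stream is ignored in this sketch.

```python
def lz78_parse(bits):
    """LZ78-style parse: each phrase is the longest phrase seen so far plus one
    new bit, emitted as (index of that prefix phrase, new bit)."""
    dictionary = {"": 0}
    phrases, current = [], ""
    for b in bits:
        if current + b in dictionary:
            current += b                     # keep extending the match
        else:
            phrases.append((dictionary[current], b))
            dictionary[current + b] = len(dictionary)
            current = ""
    return phrases

print(lz78_parse("111010011000101110100"))
# the sequence parses into the phrases 1, 11, 0, 10, 01, 100, 010, 111, 0100
```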

15. A discrete memoryless source has an alphabet of five symbols with the probabilities for its output as given here:

[X] = [x1 x2 x3 x4 x5]

P[X] = [0.45 0.15 0.15 0.10 0.15]

Compute two different Huffman codes for this source. For these two codes, find:

(i) the average codeword length

(ii) the variance of the average codeword length over the ensemble of source symbols

16. A discrete memoryless source X has five symbols x1, x2, x3, x4 and x5 with probabilities p(x1) = 0.4, p(x2) = 0.19, p(x3) = 0.16, p(x4) = 0.15 and p(x5) = 0.1.

(i) Construct a Shannon-Fano code for X and calculate the efficiency of the code.

(ii) Repeat for the Huffman code and compare the results.

17. Consider that two sources S1 and S2 emit messages x1, x2, x3 and y1, y2, y3 with joint probability P(X, Y) as shown in the matrix form. Calculate the entropies H(X), H(Y), H(X/Y) and H(Y/X). (16)

18. Apply the Huffman coding procedure to the following message ensemble and determine the average length of the encoded message. Also determine the coding efficiency. Use coding alphabet D = 4. There are 10 symbols.

X = [x1, x2, x3, ..., x10]

P[X] = [0.18, 0.17, 0.16, 0.15, 0.10, 0.08, 0.05, 0.05, 0.04, 0.02] (16)

UNIT - II: DATA AND VOICE CODING

PART – A (2 MARKS)

1. Draw the block diagram of a pulse code modulator.

2. Differentiate delta modulation from DPCM.

Delta Modulation: DM encodes each input sample with only one bit. It sends information about +∂ or -∂, i.e., a step rise or fall.

DPCM: DPCM can use more than one bit to encode a sample. It sends information about the difference between the actual sample value and the predicted sample value.

3. What is adaptive DPCM?

Adaptive Differential Pulse-Code Modulation (ADPCM) is a variant of Differential Pulse-Code

Modulation (DPCM) that varies the size of the quantization step, to allow further reduction of the

required bandwidth for a given signal-to-noise ratio.

4. Mention two merits of DPCM.

i. The bandwidth requirement of DPCM is less compared to PCM.
ii. Quantization error is reduced because of the prediction filter.

5. What is the main difference between DPCM and DM?

DM encodes each input sample with only one bit; it sends information about +∂ or -∂, i.e., a step rise or fall. DPCM can use more than one bit to encode a sample; it sends information about the difference between the actual sample value and the predicted sample value.

6. How can the message be recovered from PAM?

The message can be recovered from PAM by passing the PAM signal through a reconstruction filter. The reconstruction filter integrates the amplitudes of the PAM pulses. Amplitude smoothing of the reconstructed signal is done to remove amplitude discontinuities due to the pulses.

7. Write an expression for the bandwidth of binary PCM with N messages, each with a maximum frequency of fm Hz.

If v bits are used to code each input sample, then the bandwidth of PCM is given as,

BT ≥ N · v · fm

Here v · fm is the bandwidth required by one message.

8. How is a PDM wave converted into PPM?

The PDM signal is given as a clock signal to a monostable multivibrator. The multivibrator triggers on the falling edge, hence a PPM pulse of fixed width is produced after the falling edge of the PDM pulse. PDM represents the input signal amplitude in the form of the width of the pulse, and a PPM pulse is produced after this width. In other words, the position of the PPM pulse depends upon the input signal amplitude.

9. Mention the use of an adaptive quantizer in adaptive digital waveform coding schemes.

An adaptive quantizer changes its step size according to the variance of the input signal, hence the quantization error is significantly reduced. ADPCM uses adaptive quantization, and the bit rate of such schemes is reduced because of it.

10. What do you understand from adaptive coding?

In adaptive coding, the quantization step size and prediction filter coefficients are changed as per

properties of input signal. This reduces the quantization error and number of bits used to represent

the sample value. Adaptive coding is used for speech coding at low bit rates.

11. What is meant by quantization?

While converting the signal value from analog to digital, quantization is performed. The analog value

is assigned to the nearest digital level. This is called quantization. The quantized value is then

converted to equivalent binary value. The quantization levels are fixed depending upon the number

of bits. Quantization is performed in every analog to digital conversion.
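The assignment to the nearest level described above can be sketched in a line; the step size 0.25 is an assumed value for illustration:

```python
def quantize(x, step):
    """Mid-tread uniform quantizer: snap x to the nearest multiple of step."""
    return step * round(x / step)

step = 0.25                                   # assumed step size
for x in (0.10, 0.40, 0.62):
    xq = quantize(x, step)
    print(f"x = {x:.2f} -> xq = {xq:.2f}, error = {x - xq:+.3f}")
# for a uniform quantizer the error magnitude never exceeds step/2
```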

12. What is quantization?

The process of converting the original signal m (t) into a new signal (or) quantized signal mq (t)

which is an approximation of m (t) is known as quantization.

13. What is quantization error?

The difference between the original signal m (t) and the quantized signal mq (t) is called quantization

error.

14. What is uniform quantization?

If the step size 's' is fixed, then it is said to be uniform quantization. This is also known as linear quantization.

15. What is non-uniform quantization?

If the step size 's' is not fixed, then it is said to be non-uniform quantization. This is also known as non-linear quantization. Non-linear quantization is used to reduce the quantization error for low-amplitude signals.

16. What is meant by mid-tread quantization?

If the origin lies in the middle of a tread of the staircase graph, then it is said to be mid-tread quantization.

17. What is meant by mid-rise quantization?

If the origin lies in the middle of a rising part of the staircase graph, then it is said to be mid-rise quantization.

18. What is PCM?

A signal which is to be quantized prior to transmission is usually sampled. We may represent each quantized level by a code number and transmit the code numbers rather than the sample values themselves. This system of transmission is termed PCM.

19. Specify the three elements of a regenerative repeater.

i. Equalizer
ii. Timing circuit
iii. Decision-making device

20. What is DPCM?

DPCM transmits the difference between the current sample value m(k) at sampling time k and the immediately preceding sample value m(k-1) at time k-1. These differences are added at the receiver to generate a waveform identical to the message signal m(t).

21. How does DPCM work?

DPCM works on the principle of prediction.

22. How is a speech signal coded at low bit rates?

• Remove redundancies from the speech signal as far as possible.
• Assign the available bits to code the non-redundant parts of the speech signal in an efficient manner.

By means of these two steps, the 64 kbps PCM rate is reduced to 32 kbps, 16 kbps, 8 kbps or 4 kbps.

23. What is ADPCM?

A digital coding scheme that uses both adaptive quantization and adaptive prediction is called ADPCM. ADPCM is used to reduce the number of bits per sample from 8 to 4.

24. What is AQF?

Adaptive quantization with forward estimation. Here, unquantized samples of the input signal are used to derive forward estimates of σM(nTs).

25. What is AQB?

Adaptive quantization with backward estimation. Here, samples of the quantizer output are used to derive backward estimates of σM(nTs).

26. What is APF?

Adaptive prediction with forward estimation. Here, unquantized samples of the input signal are used to derive forward estimates of the predictor coefficients.

27. What is APB?

Adaptive prediction with backward estimation. Here, samples of the quantizer output and the prediction error are used to derive backward estimates of the predictor coefficients.

28. What is ASBC?

Adaptive sub-band coding. ASBC is a frequency-domain coder, i.e., the speech signal is processed in the frequency domain, with the step size varying with frequency.

29. Give the main difference between PCM and ASBC.

ASBC is capable of digitizing speech at a rate of 16 kbps, whereas PCM digitizes speech at a rate of 64 kbps.

30. What is the noise-masking phenomenon?

Noise can be measured in decibels. If the noise is more than 15 dB below the signal level in the band, the human ear does not perceive the noise. This is known as the noise-masking phenomenon.

31. How much delay takes place in ASBC?

About 25 ms, because a large number of arithmetic operations are involved in the adaptive sub-band coder, whereas in PCM this delay is not encountered.

32. What is delta modulation?

Delta modulation is a DPCM scheme in which the difference signal ∆ (t) is encoded into a single bit

0 (or) 1. This single bit is used to increase (or) decrease the estimate mˆ(t).
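A linear (fixed-step) delta modulator is a few lines of code; the input samples and the step size below are assumed for illustration:

```python
def delta_modulate(samples, step):
    """DM sketch: one bit per sample; the staircase estimate rises or falls by one step."""
    bits, approx, estimate = [], [], 0.0
    for s in samples:
        bit = 1 if s >= estimate else 0      # compare sample with current estimate
        estimate += step if bit else -step   # staircase moves +step or -step
        bits.append(bit)
        approx.append(estimate)
    return bits, approx

bits, approx = delta_modulate([0.2, 0.5, 0.7, 0.6, 0.3], step=0.25)
print(bits)     # exactly one bit per input sample
print(approx)   # the staircase approximation mˆ(t)
```

Hunting and slope overload (questions 33-35 below) both fall out of this loop: a constant input makes the estimate oscillate around it, and a steep input outruns the fixed step.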

33. What are the drawbacks in delta modulation?

• Granular noise (or) hunting

• Slope overloading

34. What is hunting?

Initially there is a large discrepancy between m(t) and mˆ(t), and mˆ(t) approaches m(t) in a stepwise fashion. Once mˆ(t) has caught up with m(t), if m(t) remains unvarying, mˆ(t) keeps swinging up and down, above and below m(t). This process is known as hunting.

35. What is slope overloading?

Consider a signal m(t) which, over an extended time, exhibits a slope so large that mˆ(t) cannot keep up with it. The excessive disparity between m(t) and mˆ(t) is described as slope overloading.

36. How can hunting and slope overloading problems be solved?

These two problems can be solved by adaptive delta modulation, which varies the step size in an adaptive fashion.

PART – B (16 MARKS)

1. (i) Compare and contrast DPCM and ADPCM

(ii) Define pitch, period and loudness

(iii) What is decibel?

(iv) What is the purpose of DFT?

2. (i) Explain delta modulation with examples.

(ii) Explain sub-band adaptive differential pulse code modulation

(iii) What will happen if speech is coded at low bit rates

3. With the block diagram explain DPCM system. Compare DPCM with PCM & DM systems.

4. (i) Explain DM systems with block diagram

(ii) Consider a sine wave of frequency fm and amplitude Am, which is applied to a delta modulator of step size Δ. Show that slope overload distortion will occur if

Am > Δ / (2π fm Ts)

where Ts is the sampling period. What is the maximum power that may be transmitted without slope overload distortion?

5. Explain adaptive quantization and prediction with backward estimation in ADPCM system with block

Diagram.

6. (i) Explain delta modulation systems with block diagrams

(ii) What is slope – overload distortion and granular noise and how it is overcome in adaptive delta

modulation

7. What is modulation? Explain how the adaptive delta modulator works with different algorithms?

Compare delta modulation with adaptive delta modulation.

8. Explain pulse code modulation and differential pulse code modulation.

UNIT – III: ERROR CONTROL CODING

PART A (2 MARKS)

1. State the properties of cyclic codes.

The properties of cyclic codes are :

Linearity Property

This property states that sum of any two code words is also a valid codeword. For example, Let X1 and X2

are two code words. Then,X3 = X1 + X2

Here, X3 is also a valid codeword. This property shows that cyclic code is also a linear code.

Cyclic Property

Every cyclic shift of the valid code vector produces another valid codevector. Because of this property, the

name „cyclic‟ is given. Consider an n-bit codevector as shown below:

X = {xn-1,xn-2,…..,x1,x0}

2. List out the example for repetition codes. (D - 13)

A single message bit is encoded into a block of n identical bits, producing an (n, 1) block code. There are only two codewords in the code:

i. the all-zero codeword
ii. the all-one codeword

3. What is a syndrome? (N - 10)

The syndrome gives an indication of the errors present in the received vector Y. If YH^T = 0, then there are no errors in Y and it is a valid code vector. A non-zero value of YH^T is called the syndrome; it indicates that Y is not a valid code vector and contains errors.

4. Why are cyclic codes extremely well suited for error detection?

i. They are easy to encode.

ii. They have well defined mathematical structure. Therefore efficient decoding schemes are

available.

5. State the syndrome properties. (D - 13)

The syndrome properties are:

i. The syndrome is obtained by S = YH^T.
ii. If Y = X, then S = 0, i.e., there is no error in the output.
iii. If Y ≠ X, then S ≠ 0, i.e., there is an error in the output.
iv. The syndrome depends upon the error pattern only, i.e., S = EH^T.
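These properties can be demonstrated with the (7, 4) Hamming code. The parity check matrix below is one common arrangement (the columns are the binary numbers 1 to 7), assumed for illustration:

```python
def syndrome(y, H):
    """S = Y * H^T over GF(2): each syndrome bit is a parity check on Y."""
    return [sum(yi * hi for yi, hi in zip(y, row)) % 2 for row in H]

# Parity check matrix of the (7,4) Hamming code, columns = binary 1..7
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

codeword = [1, 1, 1, 0, 0, 0, 0]      # a valid codeword for this H
print(syndrome(codeword, H))           # [0, 0, 0]: Y = X, so S = 0
corrupted = codeword[:]
corrupted[4] ^= 1                      # flip bit 5
print(syndrome(corrupted, H))          # non-zero: the error is detected
```

The corrupted vector's syndrome depends only on which bit was flipped, not on the codeword itself, matching property iv.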

6. List out the properties of a syndrome. (D - 13)

The syndrome depends only on the error patterns and not on the transmitted code word. All error

patterns that differ by a code word have the same syndrome.

7. What is spatial frequency?

The rate of change of pixel magnitude along the scanning line is called spatial frequency.

8. What is Hamming distance?

The Hamming distance between two code vectors is equal to the number of elements in which they differ. For example, let the two codewords be

X = (101) and Y = (110)

These two codewords differ in the second and third bits; therefore the Hamming distance between X and Y is two.

9. What is code efficiency?

Code efficiency is the ratio of the message bits in a block to the bits transmitted for that block by the encoder, i.e.,

Code efficiency = message bits (k) / transmitted bits (n)

10. What is meant by systematic and non systematic codes?

In a systematic block code, message bits appear first and then check bits. In the nonsystematic code, message

and check bits cannot be identified in the code vector.

11. What is meant by linear code?

A code is linear if modulo-2 sum of any two code vectors produces another code vector. This means any code

vector can be expressed as linear combination of other code vectors.

12. What are the error detection and correction capabilities of Hamming codes?

The minimum distance (dmin) of Hamming codes is 3. Hence they can be used to detect double errors or correct single errors. Hamming codes are basically linear block codes with dmin = 3.

13. What is meant by cyclic code?

Cyclic codes are a subclass of linear block codes. They have the property that a cyclic shift of one codeword produces another codeword. For example, consider the codeword X = (xn-1, xn-2, ..., x1, x0). Shifting it cyclically to the left gives X' = (xn-2, xn-3, ..., x0, xn-1), which is also a valid codeword.

14. How is the syndrome calculated in Hamming codes and cyclic codes?

In Hamming codes, the syndrome is calculated as S = YH^T, where Y is the received vector and H^T is the transpose of the parity check matrix. In cyclic codes, the syndrome polynomial is given as

S(p) = rem [Y(p)/G(p)]

Here Y(p) is the received vector polynomial and G(p) is the generator polynomial.
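The remainder can be computed by long division over GF(2), with XOR in place of subtraction. The generator g(p) = p^3 + p + 1 of the (7, 4) cyclic Hamming code is used as an assumed example:

```python
def poly_mod2_remainder(dividend, divisor):
    """Remainder of GF(2) polynomial division; coefficient lists, highest degree first."""
    r = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if r[i]:                              # leading term present: subtract divisor
            for j, d in enumerate(divisor):
                r[i + j] ^= d                 # subtraction mod 2 is XOR
    return r[-(len(divisor) - 1):]            # remainder has degree < deg(divisor)

g = [1, 0, 1, 1]                 # g(p) = p^3 + p + 1
y = [1, 0, 1, 0, 0, 1, 1]        # (p^3 + 1) * g(p): a valid codeword polynomial
print(poly_mod2_remainder(y, g))              # [0, 0, 0] -> no error
y_err = y[:]
y_err[6] ^= 1                                 # flip the lowest-order bit
print(poly_mod2_remainder(y_err, g))          # non-zero syndrome -> error detected
```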

15. Define – Hamming Distance

The Hamming distance between two code vectors is equal to the number of elements in which they differ. For example, let the two codewords be

X = (101) and Y = (110)

These two codewords differ in the second and third bits; therefore the Hamming distance between X and Y is two.

16. Compare LZ and LZW coding. (D - 13)

S.No | LZ Coding | LZW Coding
1. | Codewords are formed by appending a new character. | Codewords are new words appearing in the string.
2. | No separate character set is stored in the dictionary initially. | The complete character set is stored in the dictionary initially.

17. What is meant by frequency and temporal masking? (D - 13)

When the ear hears a loud sound, a certain time must pass before it can hear a quieter sound; this is called temporal masking.

When multiple signals are present, as is the case with general audio, a strong signal may reduce the sensitivity of the ear to other signals near it in frequency; this effect is known as frequency masking.

18. What is a BCH code?

BCH codes are the most extensive and powerful error-correcting cyclic codes. The decoding of BCH codes is comparatively simple. For any positive integers m and t, there exists a BCH code with the following parameters:

Block length: n = 2^m - 1
Number of parity check bits: n - k ≤ mt
Minimum distance: dmin ≥ 2t + 1

19. What is an RS code?

Reed-Solomon (RS) codes are non-binary BCH codes. The encoder for RS codes operates on multiple bits simultaneously. The (n, k) RS code takes groups of m-bit symbols from the incoming binary data stream, k such symbols in one block, and the encoder adds (n - k) redundant symbols to form a codeword of n symbols. An RS code has:

Block length: n = 2^m - 1 symbols
Message size: k symbols
Number of parity check symbols: n - k = 2t
Minimum distance: dmin = 2t + 1

20. What is the use of error control coding?

The main use of error control coding is to reduce the overall probability of error, which is also

known as channel coding.

21. What is the difference between a systematic code and a non-systematic code?

If the message bits are followed by the parity check bits, it is said to be a systematic code. If the message bits and parity check bits are randomly arranged, it is said to be a non-systematic code.

22. What is a repetition code? (D - 13)

A single message bit is encoded into a block of n identical bits, producing an (n, 1) block code. There are only two codewords in the code:

i. the all-zero codeword
ii. the all-one codeword

23. What is forward acting error correction method?

The method of controlling errors at the receiver through attempts to correct noise-induced errors is

called forward acting error correction method.

24. What is error detection?

The decoder accepts the received sequence and checks whether it matches a valid message sequence.

If not, the decoder discards the received sequence and notifies the transmitter (over the reverse

channel from the receiver to the transmitter) that errors have occurred and the received message must

be retransmitted. This method of error control is called error detection.

25. Give the properties of syndrome in linear block code.

The syndrome depends only on the error patterns and not on the transmitted code word.

All error patterns that differ by a code word have the same syndrome.

26. What is a Hamming code?

This is a family of (n, k) linear block codes with:

Block length: n = 2^m - 1
Number of message bits: k = 2^m - m - 1
Number of parity bits: n - k = m

where m ≥ 3 is a positive integer.

27. When is a code said to be cyclic?

Linearity property

The sum of any two code words in the code is also a code word.

Cyclic property

Any cyclic shift of a code word in the code is also a code word.

28. Give the difference between linear block code and cyclic code.

Linear block code can be simply represented in matrix form

Cyclic code can be represented by polynomial form

29. What is a generator polynomial?

The generator polynomial g(x) is a polynomial of degree n - k that is a factor of x^n + 1, where g(x) is the polynomial of least degree in the code. g(x) may be expanded as

g(x) = 1 + ∑ gi x^i + x^(n-k), the sum taken over i = 1 to n-k-1

where each coefficient gi is equal to 0 or 1. According to this expansion, the polynomial g(x) has two terms with coefficient 1 separated by n - k - 1 terms.

30. What is a parity check polynomial?

The parity check polynomial h(x) is a polynomial of degree k that is a factor of x^n + 1. h(x) may be expanded as

h(x) = 1 + ∑ hi x^i + x^k, the sum taken over i = 1 to k-1

where each coefficient hi is equal to 0 or 1. According to this expansion, the polynomial h(x) has two terms with coefficient 1 separated by k - 1 terms.

31. How can a syndrome polynomial be calculated?

The syndrome polynomial is the remainder that results from dividing the received polynomial r(x) by the generator polynomial g(x):

r(x) = q(x) g(x) + S(x)

32. Give two properties of the syndrome in a cyclic code.

i. The syndrome of a received word polynomial is also the syndrome of the corresponding error polynomial.
ii. When the degree of the error polynomial e(x) is less than n - k, the syndrome polynomial S(x) is identical to e(x).

33. What is Hamming distance (HD)?

The number of bit positions in which two code vectors differ is known as the Hamming distance.

(e.g) if c1 = 1 0 0 1 0 1 1 0 and c2 = 1 1 0 0 1 1 0 1, then HD = 5

34. What is weight of a code vector?

The number of non-zero components in a code vector is known as weight of a code vector.

(e.g) if c1 = 1 0 0 1 0 1 1 0 then W(c1) = 4
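Both quantities reduce to one-line counts; the vectors below are those from the two examples above:

```python
def hamming_distance(c1, c2):
    """Number of positions in which two equal-length code vectors differ."""
    return sum(a != b for a, b in zip(c1, c2))

def weight(c):
    """Number of non-zero components of a code vector."""
    return sum(1 for bit in c if bit)

c1 = [1, 0, 0, 1, 0, 1, 1, 0]
c2 = [1, 1, 0, 0, 1, 1, 0, 1]
print(hamming_distance(c1, c2))   # 5, as in the HD example above
print(weight(c1))                 # 4, as in the weight example above
```

Note that the weight of a vector equals its Hamming distance from the all-zero vector.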

35. What is meant by minimum distance?

The minimum distance of a linear block code is the smallest Hamming distance between any pair of codewords in the code.

(e.g) if c1 = 0 0 1 1 1 0, c2 = 0 1 1 0 1 1 and c3 = 1 1 0 1 1 0, then dmin = 3

36. What is a coset leader?

A coset leader is the most probable error pattern in a coset; an (n, k) code has 2^(n-k) cosets.

37. What is a convolutional code?

A convolutional code is one in which parity bits are continuously interleaved with the information (or message) bits.

38. What is meant by constraint length?

The constraint length (K) of a convolutional code is defined as the number of shifts a single message

bit takes to enter the shift register and finally come out of the encoder output.

K = M + 1, where M is the number of shift register stages.

PART – B (16 MARKS)

1. Explain in detail, the Viterbi algorithm for decoding of convolutional codes, with a suitable example.

The Viterbi algorithm performs maximum likelihood decoding. It finds the path through the trellis with the

best metric (minimum Hamming distance / minimum Euclidean distance).

i. At each step in the trellis, it compares the metrics of all paths entering each state, and keeps only the

path with the best metric (minimum Hamming distance) together with its metric. The selected path

is known as the survivor path.

ii. It proceeds in the trellis by eliminating the least likely paths.

A Convolutional code is specified by three parameters (n,k,K) or (k/n,K) where

i. Rc=k/n is the rate efficiency, determining the number of data bits per coded bit.

ii. K is the size of the shift register.

iii. Constraint length = n*K, i.e. the effect of each bit have its influence on n*K bits.
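The survivor-path bookkeeping described above can be sketched in Python for the rate-1/2, constraint-length-3 code of question 2 below, with generators g1(x) = 1 + x + x^2 and g2(x) = 1 + x^2. This is a hard-decision, minimum-Hamming-distance sketch, not an optimized implementation:

```python
# State = (s1, s2), the two previous input bits of the shift register.

def outputs(u, state):
    """Modulo-2 adder outputs for input bit u in a given state."""
    s1, s2 = state
    return (u ^ s1 ^ s2, u ^ s2)     # taps 111 and 101

def viterbi(received):               # received: list of (v1, v2) pairs
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    INF = float('inf')
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    paths = {s: [] for s in states}
    for pair in received:
        new_metric = {s: INF for s in states}
        new_paths = {}
        for s in states:
            if metric[s] == INF:
                continue
            for u in (0, 1):
                v = outputs(u, s)
                branch = (v[0] != pair[0]) + (v[1] != pair[1])
                nxt = (u, s[0])      # shift the register
                m = metric[s] + branch
                if m < new_metric[nxt]:   # keep only the survivor
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])
    return paths[best]

# Encoded [1,0,0,1,1] plus two flush bits:
rx = [(1,1), (1,0), (1,1), (1,1), (0,1), (0,1), (1,1)]
decoded = viterbi(rx)
print(decoded[:5])   # [1, 0, 0, 1, 1] -- flush bits dropped
```

Because this code has a free distance of 5, the same call still recovers the message when any single received bit is flipped.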

2. Construct a convolution encoder for the following specifications: rate efficiency 1/2, constraint length

3; the connections from the shift register to the modulo-2 adders are described by the following equations,

g1(x) = 1 + x + x^2, g2(x) = 1 + x^2. Determine the output codeword for the message [10011].
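A direct simulation of this encoder (assuming two flush zeros are appended to clear the register, as is conventional) gives the codeword:

```python
def conv_encode(msg, gens=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2, constraint-length-3 convolutional encoder.
    Each generator lists taps as coefficients of 1, x, x^2."""
    state = [0, 0]                      # shift register contents
    out = []
    for u in msg + [0, 0]:              # two flush bits to clear the register
        bits = [u] + state              # current input plus stored bits
        for g in gens:
            out.append(sum(b & t for b, t in zip(bits, g)) % 2)
        state = [u, state[0]]           # shift the register
    return out

print(conv_encode([1, 0, 0, 1, 1]))
# [1,1, 1,0, 1,1, 1,1, 0,1, 0,1, 1,1]
```

The interleaved output pairs 11 10 11 11 01 01 11 agree with what one gets by convolving the message with each generator sequence modulo 2.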

3. How is dfree determined using the Trellis algorithm? Explain.

4. Explain in detail, convolutional code and compare it with block codes. (J - 14)

5. Explain convolutional encoder with an example.

6. How are P-frames and B-frames encoded and decoded? (J - 14)(M - 10)

7. Consider a Hamming code C which is determined by the parity check matrix.

(i) Show that the two vectors c1 = (0010011) and c2 = (0001111) are code words of C and calculate

the Hamming distance between them.

(ii) Assume that a code word c was transmitted and that a vector r = c + e is received. Show that the

syndrome s = r·H^T depends only on the error vector e.

(iii) Calculate the syndromes for all possible error vectors e with Hamming weight ≤ 1 and list them in a table. How can this be used to

correct a single bit error in an arbitrary position?

(iv) What are the length n and the dimension k of the code? Why can the minimum Hamming distance dmin

not be larger than three?

8. (i) What are linear block codes?

(ii) How to find the parity check matrix?

(iii) Give the syndrome decoding algorithm

(iv) Design a linear block code with dmin ≥ 3 for some block length n = 2^m - 1

9. a. Consider the generation of a (7,4) cyclic code by the generator polynomial g(x) = 1 + x + x^3. Calculate the

code word for the message sequence 1001 and construct the systematic generator matrix G.

b. Draw the diagram of encoder and syndrome calculator generated by polynomial g(x)?
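One way to check the codeword for part (a) by hand is to compute the remainder of x^(n-k) m(x) divided by g(x). A small GF(2) sketch in Python (helper names are illustrative; polynomials are integers with bit i = coefficient of x^i):

```python
def gf2_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def cyclic_encode(msg_bits, g, n_k):
    """Systematic cyclic encoding: c(x) = x^(n-k) m(x) + remainder."""
    m = 0
    for i, bit in enumerate(msg_bits):   # bit i = coefficient of x^i
        m |= bit << i
    shifted = m << n_k                   # x^(n-k) m(x)
    b = gf2_mod(shifted, g)              # parity polynomial b(x)
    c = shifted | b
    return [(c >> i) & 1 for i in range(n_k + len(msg_bits))]

# g(x) = 1 + x + x^3 for the (7,4) code, message 1001 -> m(x) = 1 + x^3
print(cyclic_encode([1, 0, 0, 1], 0b1011, 3))
# [0, 1, 1, 1, 0, 0, 1] -> parity 011 followed by message 1001
```

The resulting codeword 0111001 is the standard textbook answer for this generator and message.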

10. For the convolution encoder shown below encode the message sequence (10011). also prepare the

code tree for this encoder

11. (i) Find a (7,4) cyclic code to encode the message sequence (10111) using the generator polynomial

g(x) = 1 + x + x^3

(ii) Calculate the systematic generator matrix for the polynomial g(x) = 1 + x + x^3. Also draw the encoder

diagram. (M - 11)

12. Verify whether g(x) = 1 + x + x^2 + x^3 + x^4 is a valid generator polynomial for generating a cyclic code for the

message [111].

13. A convolution encoder is defined by the following generator polynomials:

g0(x) = 1 + x + x^2 + x^3 + x^4, g1(x) = 1 + x + x^3 + x^4, g2(x) = 1 + x^2 + x^4

(i) What is the constraint length of this code?

(ii) How many states are in the trellis diagram of this code

(iii) What is the code rate of this code?

14. Construct a convolution encoder for the following specifications: rate efficiency = 1/2,

constraint length = 4. The connections from the shift register to the modulo-2 adders are described by the following

equations: g1(x) = 1 + x, g2(x) = x.

Determine the output codeword for the input message [1110].

UNIT – IV COMPRESSION TECHNIQUES

PART A (2 MARKS)

1. What is Dolby AC-3? (M - 10)

In Dolby AC-3, the audio signal is not given directly to the bit allocation algorithm. Rather, a spectral envelope of the audio

signal is provided to the bit allocation algorithm as well as to the decoder. The bit allocation is not sent to the

decoder; instead, the decoder reconstructs the audio signal from the representation of the spectral envelope.

Hence, it does not need the bit allocation details directly.

2. What is sub-band coding?

In signal processing, sub-band coding is any form of transform coding that breaks a signal into a

number of different frequency bands and encodes each one independently. This decomposition is

often the first step in data compression for audio and video signals.

3. State the main application of GUI.

The main applications of GUI are:

A GUI facilitates a smooth interface between the human users and the operating system.

Parameter handling, controlling and monitoring are done with the help of the GUI.

4. Distinguish between global color table and local color table in GIF.

If the whole image refers to a single table of colours, that table is called the global colour table. If a portion of

the image has its own table of colours, that table is called a local colour table and applies only to that

portion of the image.

5. State the various methods used for text compression. (M - 10)

The various methods used for text compression are:

Lossless compression:

With lossless compression, every single bit of data that was originally in the file remains after the file

is uncompressed.

Lossy compression:

With lossy compression, some of the original data is lost, so the file cannot be recovered exactly when it is uncompressed.

6. What is Run-length coding?

Run-length encoding is the simplest lossless encoding technique. It is mainly used to compress text or

digitized documents. Binary data strings are compressed well by run-length encoding. Consider the

binary data string 111111100000011111000……….

If we apply run-length coding to the above data string, we get 7,1; 6,0; 5,1; 3,0……

Thus there are seven binary 1's, followed by six binary 0's, followed by five binary 1's and so on.
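A minimal sketch of the scheme, returning (run length, bit) pairs for a binary string:

```python
def run_length_encode(bits):
    """Collapse a binary string into (run length, bit) pairs."""
    runs = []
    count = 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1            # extend the current run
        else:
            runs.append((count, prev))
            count = 1             # start a new run
    runs.append((count, bits[-1]))
    return runs

print(run_length_encode("111111100000011111000"))
# [(7, '1'), (6, '0'), (5, '1'), (3, '0')]
```

The output matches the 7,1; 6,0; 5,1; 3,0 sequence above; a practical scheme would then entropy-code the run lengths themselves.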

7. What is arithmetic coding?

Arithmetic coding offers the opportunity to create a code that exactly represents the frequency of

any character. A single character is defined by belonging to a specific interval. Larger intervals

are used for more frequent characters, and smaller intervals for rare ones.

The size of those intervals is proportional to the frequency. Arithmetic coding is the most efficient

procedure, but its usage has been restricted by patents.

8. What are the types of JPEG algorithms?

There are two types of JPEG algorithms they are:

Baseline JPEG: During decoding, this algorithm draws the image line by line until the complete image is displayed.

Progressive JPEG: During decoding, this JPEG algorithm draws the whole image at once,

but in very poor quality. Then another layer of data is added over the previous image to improve its

quality. Progressive JPEG is used for images on the web: the user can make out the image before it is

fully downloaded.

9. What is TIFF? (D - 13)

Tagged Image File Format (TIFF) is used for the transfer of images as well as digitized documents. It

supports pixel resolutions of up to 48 bits: 16 bits are used for each of the R, G and B colours. A code

number indicates the particular format in TIFF.

10. What is Graphics Interchange Format(GIF)?

Graphics Interchange Format (GIF) is used for representation and compression of graphical images.

Each pixel of a 24-bit colour image is represented by 8 bits for each of the R, G and B colours. Such an

image has 2^24 colours. GIF uses only 256 colours from the set of 2^24 colours.

11. What is Code Efficiency?

The code efficiency is the ratio of message bits in a block to the transmitted bits for that block by the

encoder, i.e.,

Code efficiency = Message bits / Transmitted bits

= k / n

12. State the main application of Graphics Interchange Format(GIF)

The GIF format is used mainly on the internet to represent and compress graphical images. GIF images can be

transmitted and stored over the network in interlaced mode; this is very useful when images are transmitted over

low bit rate channels.

13. What is JPEG standard?

JPEG stands for joint photographic exports group. This has developed a standard for compression of

monochrome/color still photographs and images. This compression standard is known as JPEG standard. It is

also know as ISO standard 10918. It provides the compression ratios upto 100:1.

14. Why differential encoding is carried out only for DC coefficient in JPEG?

The DC coefficient represents the average colour/luminance/chrominance in the corresponding

block. Therefore it is the largest coefficient in the block.

A very small physical area is covered by each block. Hence the DC coefficients do not vary much

from one block to the next.

Since the DC coefficients vary slowly, differential encoding is the best suited compression for DC

coefficients. It encodes the difference between each pair of values rather than their absolute values.
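A small sketch of the idea with assumed DC values (the numbers are illustrative, not from any standard):

```python
# DC coefficients vary slowly from block to block, so the first value
# is sent as-is and every later value as a difference from its predecessor.
dc = [12, 13, 11, 11, 10, 9]          # assumed example DC values

diffs = [dc[0]] + [cur - prev for prev, cur in zip(dc, dc[1:])]
print(diffs)          # [12, 1, -2, 0, -1, -1]

# Decoding accumulates the differences back:
recovered = []
total = 0
for d in diffs:
    total += d
    recovered.append(total)
assert recovered == dc
```

The differences are small numbers clustered around zero, which entropy-code into far fewer bits than the absolute values would.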

15. What do you mean by "GIF interlaced mode"?

The image data can be stored and transferred over the network in an interlaced mode. The data is stored in such

a way that the decompressed image is built up in a progressive way.

16. Write the advantages of Data compression.

Huge amount of data is generated in text, images, audio, speech and video.

Because of compression, transmission data rate is reduced.

Storage becomes less due to compression. Due to video compression, it is possible to store one complete

movie on two CDs.

Transportation of the data is easier due to compression.

17. Write the drawbacks of data compression.

Due to compression, some of the data is lost.

Compression and decompression increases complexity of the transmitter and receiver.

Coding time is increased due to compression and decompression.

18. How compression is taken place in text and audio?

In text, the large volume of information is reduced, whereas in audio the bandwidth is reduced.

19. Specify the various compression principles?

• Source encoders and destination decoders

• Loss less and lossy compression

• Entropy encoding

• Source encoding

20. What is lossy and loss less compression?

The Compressed information from the source side is decompressed in the destination side and if

there is loss of information then it is said to be lossy compression.

The Compressed information from the source side is decompressed in the destination side and if

there is no loss of information then it is said to be loss less compression. Loss less compression is

also known as reversible.

21. What is statistical encoding?

Statistical encoding is used for a set of variable length code words, in which shortest code words are

represented for frequently occurring symbols (or) characters.

22. What is Differential encoding?

Differential encoding is used to represent the difference in amplitude between the current

value/signal being encoded and the immediately preceding value/signal.

23. What is transform coding?

This is used to transform the source information from spatial time domain representation into

frequency domain representation.

24. What is meant by spatial frequency?

The rate of change in magnitude while traversing the matrix is known as spatial frequency.

25. What is a horizontal and vertical frequency component?

If we scan the matrix in horizontal direction then it is said to be horizontal frequency components.

If we scan the matrix in Vertical direction then it is said to be vertical frequency components.

26. What is static and dynamic coding?

After finding the code words these code words are substituted in a particular type of text is known as

static coding.

If the code words may vary from one transfer to another then it is said to be dynamic coding.

27. Let us consider the codeword for A is 1, the codeword for B is 01, the codeword for C is 001 and the

codeword for D is 000. How many bits are needed for transmitting the text AAABBCCD?

3 × 1 + 2 × 2 + 2 × 3 + 1 × 3 = 16 bits
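The count can be verified mechanically for any variable-length code table:

```python
# The prefix code from the question: shorter codewords for more
# frequent characters.
code = {'A': '1', 'B': '01', 'C': '001', 'D': '000'}
text = "AAABBCCD"

total_bits = sum(len(code[ch]) for ch in text)
print(total_bits)   # 16
```

Counting per character: three A's (3 bits), two B's (4 bits), two C's (6 bits) and one D (3 bits) give 16 bits in total.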

28. Give two differences between Arithmetic coding and Huffman coding.

i. The code words produced by arithmetic coding can approach the Shannon value arbitrarily closely,

whereas the code words produced by Huffman coding only give an optimum integer-length prefix code.

ii. In arithmetic coding a single code word is used for each encoded string of characters,

whereas in Huffman coding a separate codeword is used for each character.

29. What is global and local color table? (N - 10)

If the whole image is related to the table of colors then it is said to be global color table.

If the portion of the image is related to the table of colors then it is said to be local color table.

30. What is termination code and Make-up code table?

Code words in the termination-code table are for white (or) black run lengths of from 0 to 63 pels in

steps of one pel.

Code words in the Make up-code table are for white (or) black run lengths that are multiples of 64

pels.

31. What is meant by over scanning?

Over scanning means all lines start with a minimum of one white pel. Therefore the receiver knows

the first codeword always relates to white pels and then alternates between black and white.

32. What is modified Huffman code?

If the coding scheme uses two sets of code words (termination and make up). They are known as

Modified Huffman codes.

33. What is one-dimensional coding?

If the scan line is encoded independently then it is said to be One-dimensional coding.

34. What is two-dimensional coding?

Two-dimensional coding is also known as Modified Modified Read (MMR) coding. MMR identifies

black and White run lengths by comparing adjacent scan lines.

35. What is meant by pass mode?

If the run lengths in the reference line (b1b2) is to the left of the next run-length in the coding line

(a1a2) (i.e.) b2 is to the left of a1, then it is said to be pass mode.

36. What is meant by vertical mode?

If the run lengths in the reference line (b1b2) overlap the next run-length in the coding line (a1a2) by

a maximum of plus or minus 3 pels, then it is said to be vertical mode.

37. What is meant by horizontal mode?

If the run lengths in the reference line (b1b2) overlap the next run-length in the coding line (a1a2) by

more than plus or minus 3 pels, then it is said to be horizontal mode.

PART – B (16 MARKS)

1. Explain the working of JPEG encoder and decoder, with a block diagram. (N - 10)

2. For a linear block code, prove with example that

(i) The syndrome depends only on error pattern and not on transmitted code word.

(ii) All error patterns that differ by a codeword have the same syndrome.

i. The number of codewords is 2^k, since there are 2^k distinct messages.

ii. The set of vectors {gi} are linearly independent, since we must have a set of unique codewords.

iii. Linearly independent vectors mean that no vector gi can be expressed as a linear combination of the other

vectors.

iv. These vectors are called basis vectors of the vector space C.

v. The dimension of this vector space is the number of basis vectors, which is k.

vi. gi ∈ C: the rows of G are all legal codewords.

vii. An (n,k) block code, where Block means the encoder accepts a block of message symbols and generates a block of

codeword symbols.

viii. Linear means the addition of any two valid codewords results in another valid codeword.

Code rate

i. As the code rate increases, the error-correcting capability decreases but bandwidth efficiency improves.

ii. As the code rate decreases, the error-correcting capability increases at the cost of bandwidth.

3. Consider the generation of a (7,4) cyclic code by the generator polynomial g(x) = 1 + x + x^3.

Calculate the code word for the message sequence [1001] and construct systematic generator matrix G.

Draw the diagram of encoder and syndrome calculator generated by the polynomial.(M - 11)

4. Explain syndrome and its properties. (M - 10)

5. Explain syndrome decoding in linear block codes with example. (M - 10)

6. Explain the Hamming codes with example.

7. Explain in detail, Cyclic codes.

8. Explain the entropy encoding blocks of JPEG standard.

9. Explain in detail, Adaptive Huffman coding, with the help of an example.

10. Assume that the character set and probabilities are e = 0.3, n = 0.3, t = 0.2, w = 0.1, . = 0.1. Derive the

codeword value for the string "entw#". Explain how the decoder determines the original string from the

received codeword value.
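The interval-narrowing procedure can be sketched as follows, assuming '#' stands in as the terminator symbol with the listed probability 0.1 and that cumulative ranges are assigned in the order given:

```python
# Assumed symbol order and probabilities from the question.
probs = [('e', 0.3), ('n', 0.3), ('t', 0.2), ('w', 0.1), ('#', 0.1)]

# Build the cumulative interval for each symbol, e.g. e -> [0.0, 0.3).
ranges = {}
cum = 0.0
for sym, p in probs:
    ranges[sym] = (cum, cum + p)
    cum += p

# Encoding narrows [low, high) once per symbol.
low, high = 0.0, 1.0
for sym in "entw#":
    width = high - low
    s_low, s_high = ranges[sym]
    low, high = low + width * s_low, low + width * s_high

print(low, high)   # ~[0.16002, 0.1602): any number in this interval
                   # is a valid codeword for "entw#"
```

The decoder reverses the process: it checks which symbol's range contains the received number, emits that symbol, rescales, and repeats until the terminator is decoded.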

11. Explain adaptive Huffman coding for the message “Malayalam”.

The basic Huffman algorithm has been extended for the following reasons:

(a) The previous algorithms require statistical knowledge which is often not available (e.g., live audio, video).

(b) Even when it is available, it could be a heavy overhead, especially when many tables have to be sent, as when a non-order-0 model is used, i.e. one taking into account the impact of the previous symbol on the probability of the current

symbol (e.g., "q" and "u" often come together).

12. (i)Explain the various stages in JPEG standard

(ii)Differentiate loss less and lossy compression technique and give one example for each

(iii)State the prefix property of Huffman code

13. (a) Draw the JPEG encoder schematic and explain

(b) Assuming a quantization threshold value of 16, derive the resulting quantization error for each of

the following DCT coefficients 127, 72, 64, 56,-56,-64,-72,-128.
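One common reading of question 13(b) is uniform quantization in which each coefficient is divided by the threshold and truncated toward zero, then rescaled on decoding. Under that assumption (which may differ from the intended scheme), the errors can be tabulated as:

```python
# Assumed scheme: quantized value = trunc(coeff / threshold),
# dequantized value = quantized value * threshold.
threshold = 16
coeffs = [127, 72, 64, 56, -56, -64, -72, -128]

for c in coeffs:
    q = int(c / threshold)            # int() truncates toward zero
    dequantized = q * threshold
    print(c, dequantized, c - dequantized)   # e.g. 127 -> 112, error 15
```

Whatever the exact rounding convention, the magnitude of each quantization error is always strictly less than the threshold.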

14. (i) Explain arithmetic coding with suitable example

(ii) Compare arithmetic coding algorithm with Huffman coding

15. (i) Draw JPEG encoder block diagram and explain each block

(ii) Why DC and AC coefficients are encoded separately in JPEG

16. (a) Explain in brief ,the principles of compression

(b) In the context of compression for Text ,Image ,audio and Video which of the compression

techniques discussed above are suitable and Why?

UNIT – V: AUDIO AND VIDEO CODING

PART A (2 MARKS)

1. What is Dolby AC-1? (M – 10)

Dolby AC-1 is used for audio coding. It uses a psychoacoustic

model at the encoder and has fixed bit allocations for each subband.

2. What is the minimum frame/sec required in MPEG? (M - 10)

MPEG (Motion Pictures Expert Group) was formed by the ISO to formulate a set of standards

relating to a range of multimedia applications that involve the use of video with sound.

3. State the main difference between MPEG video compression algorithm and H.261.

MPEG: MPEG stands for Motion Pictures Expert Group. It was formed by the ISO. MPEG has developed the standards for compression of video with audio.

H.261: The H.261 video compression standard has been defined by the ITU-T for the provision of video telephony and video conferencing over an ISDN.

4. What is MPEG?

MPEG stands for Motion Pictures Expert Group(MPEG). It was formed by ISO. MPEG has

developed the standards for compression of video with audio. MPEG audio coders are used for

compression of audio. This compression mainly uses perceptual coding.

5. State the advantages of Lempel-Ziv algorithm over Huffman coding. (N - 10)

Advantages of Lempel-Ziv algorithm:

i. The codewords are formed by appending new character.

ii. No separate character set is stored in the dictionary initially.

iii. The data characters as well as dictionary indices are transmitted.

Advantages of Huffman coding:

i. Codes for the characters are derived.

ii. Precision of the computer does not affect coding.

iii. Huffman coding is a simple technique.

6. Compare LZW with Huffman coding. (N - 10)

LZW Coding:

1. Codewords are new words appearing in the string.

2. The complete character set is stored in the dictionary initially.

Huffman Coding:

1. Codes for the characters are derived.

2. The precision of the computer does not affect coding.

7. What are vocal tract excitation parameters?

The origin, pitch, period and loudness are known as vocal tract excitation parameters

8. Give the classification of vocal tract excitation parameters

voiced sounds

unvoiced sounds

9. What is CELP?

CELP – code excited Linear Prediction

In this model, instead of treating each digitized segment independently for encoding purpose, just a

limited set of segments is used, each known as waveform template.

10. What are the international standards used in code excited LPC?

ITU-T Recommendations G.728, G.729, G.729(A) and G.723.1

11. What is perceptual coding?

Perceptual encoders are designed for the compression of general audio such as that associated with a

digital television broadcast. This process is called perceptual coding.

12. What is algorithmic delay?

Before the speech samples can be analyzed, it is necessary to store the block of samples in memory,

i.e. in a buffer. The time taken to accumulate the block of samples in memory is known as the

algorithmic delay.

13. What is temporal masking?

After the ear hears a loud sound, it takes a further short time before it can hear a quieter sound. This is

known as temporal masking.

14. What is called critical bandwidth?

The width of each curve at a particular signal level is known as the critical bandwidth for that

frequency. Experiments have shown that for frequencies less than 500 Hz, the critical bandwidth

remains constant at about 100 Hz.

15. What is meant by dynamic range of a signal?

The dynamic range of a signal is defined as the ratio of the maximum amplitude of the signal to the

minimum amplitude, and is measured in decibels (dB).

16. What is MPEG?

MPEG-Motion Pictures Expert Group (MPEG)

MPEG was formed by the ISO to formulate a set of standards relating to a range of multimedia

applications that involves the use of video with sound

17. What is the use of DFT in MPEG audio coder?

DFT-Discrete Fourier transforms

DFT is a mathematical technique by which the 12 sets of 32 PCM samples are first transformed into

an equivalent set of frequency components.

18. What are SMRs?

SMRs – Signal to Mask Ratios

SMRs indicate those frequency components whose amplitude is below the related audible threshold.

19. What is meant by AC in Dolby AC-1?

AC stands for acoustic coder. Dolby AC-1 was designed for use in satellites to relay FM radio programs and the sound

associated with television programs.

20. What is meant by the backward adaptive bit allocation mode?

It is the operation mode in which, instead of each frame containing bit allocation information in addition

to the set of quantized samples, the frame contains the encoded frequency coefficients that are present in the

sampled waveform segment. This is known as the encoded spectral envelope, and this mode of

operation is the backward adaptive bit allocation mode.

21. List out the various video features used in multimedia applications.

a. Interpersonal - Video telephony and video conference

b. Interactive – Access to stored video in various forms

c. Entertainment – Digital television and movie/video – on demand

22. What does the digitization format define?

The digitization format defines the sampling rate that is used for the luminance signal Y and the two

chrominance signals, Cb and Cr, and their relative positions in each frame.

23. What is SQCIF?

SQCIF – Sub Quarter Common Intermediate Format.

It is used for video telephony, with 162 Mbps for the 4:2:0 format as used for digital television

broadcasts.

24. What is motion estimation and motion compensation?

The technique used to exploit the high correlation between successive frames is to predict the

content of many of the frames. The accuracy of the prediction operation is determined by how well

any movement between successive frames is estimated. This operation is known as motion estimation.

Since the motion estimation process is not exact, additional information must also be sent to indicate

any small differences between the predicted and actual positions of the moving segments involved.

This is known as motion compensation.

25. What are intracoded frames?

Frames that are encoded independently are called intracoded frames or I-frames.

26. What is meant by GOP?

GOP- group of pictures. The number of frames/pictures between successive I- frames is known as a

group of pictures.

27. What is a macro block?

The digitized contents of the Y matrix associated with each frame are first divided into two-dimensional

blocks of 16 × 16 pixels, each known as a macroblock.

28. What is H.261?

H.261 video compression standard has been defined by the ITU-T for the provision of video

telephony and video conferencing over an ISDN.

29. What is GOB?

GOB- Group of blocks. Although the encoding operation is carried out on individual macro blocks, a

large data structure known as a group of block is also defined.

30. What are AVOs and VOPs?

AVOs- Audio Visual Objects

VOPs- Video Objects Planes

31. What is the difference between MPEG-4 and other standards?

The difference between MPEG-4 and other standards is that MPEG-4 has a number of content-based

functionalities.

32. What are blocking artifacts?

A high quantization threshold leads to blocking artifacts, which are caused by macroblocks

encoded using high thresholds differing visibly from those quantized using lower thresholds.

33. What are convolutional codes? How are they different from block codes? (N - 10)

Convolutional codes are error-correcting codes used to reliably transmit digital data over unreliable

communication channels subject to channel noise. Unlike block codes, they operate on a continuous stream of bits rather than on fixed-size blocks.

34. State the principle of Turbo coding. (N - 10)

The significance of Turbo coding is:

i. High weight code words.

ii. The decoder generates estimates of code words in two stages of decoding with interleaving and

deinterleaving.

iii. This is like the circulation of air in a turbo engine for better performance. Hence these codes are called turbo

codes.

35. What are the reasons to use an inter leaver in a turbo code? (M - 10)

An interleaver is a device that rearranges the ordering of a sequence of symbols in a deterministic

manner. The two main design issues are the interleaver size and the interleaver map. The interleaver is used to

feed the encoders with permutations so that the generated redundancy sequences can be assumed

independent.

36. Define – Constraint Length

The constraint length (K) of a convolutional code is defined as the number of shifts a single message

bit takes to enter the shift register and finally come out of the encoder output.

K = M + 1

37. What are the differences between block and convolution codes?

Block codes:

1. The information bits are followed by the parity bits.

2. There is no data dependency between blocks.

3. Useful for data communications.

Convolution codes:

1. The information bits are spread along the sequence.

2. Data passes through convolutional codes in a continuous stream.

3. Useful for low-latency communications.

38. Define – Constraint Length of a Convolutional Code.

Constraint length is the number of shifts over which the single message bit can influence the

encoder output. It is expressed in terms of message bits.

39. What are convolutional codes?

A convolutional code is one in which parity bits are continuously interleaved with the information (or message)

bits.

40. Define – Turbo Code

Parallel Concatenated Convolutional Codes (PCCC), called Turbo Codes, have solved the dilemma

of structure and randomness through concatenation and interleaving respectively. The introduction of

turbo codes has given most of the gain promised by the channel-coding theorem.

41. What is dolby AC-1?

Dolby AC-1 is used for audio coding. It uses a psychoacoustic

model at the encoder and has fixed bit allocations for each subband.

42. What is the need of MIDI standard?

MIDI stands for Musical Instrument Digital Interface. It specifies the details of the

digital interface between various musical instruments and a microcomputer. It is essential for accessing, recording or

storing the music generated by musical instruments.

43. What is perceptual coding?

In perceptual coding only the perceptual features of the sound are stored. This gives a high degree of

compression. The human ear is not equally sensitive to all frequencies. Similarly, masking of a weaker

signal takes place when a louder signal is present nearby. These properties are exploited in perceptual

coding.

PART – B (16 MARKS)

1. Write short note on CELP principles.

2. Explain the significance of D-frames in video coding.

3. Explain in detail, Turbo decoding.

4. Explain in detail, Turbo codes and their uses.

5. Explain the MPEG algorithm for video encoding, with a block diagram. (N - 10)

6. Explain Linear predictive coding. (D - 13)(M - 11)

7. Write short notes on H.261 video compression standard. (D - 13)

8. Explain the concepts of frequency masking and temporal masking. How are they used in perceptual coding? (N - 10)

9. Explain the various versions of Dolby ACs stating their merits and demerits, with neat illustrations.

10. (i) Explain the principles of perceptual coding

(ii) Why LPC is not suitable to encode music signal?

11. (i) Explain the encoding procedure of I,P and B frames in video encoding with suitable diagrams.

(ii) What are the special features of MPEG -4 standards (J - 14)

12. Explain the Linear Predictive Coding (LPC) model of analysis and synthesis of speech signal. State

the advantages of coding speech signal at low bit rates

13. Explain the encoding procedure of I,P and B frames in video compression techniques, State intended

application of the following video coding standard MPEG -1 , MPEG -2, MPEG -3 , MPEG -4

14. (i) What are macro blocks and GOBs?

(ii) On what factors does the quantization threshold depend in H.261 standards?

(iii) Explain the MPEG compression techniques. (M - 11)

15. (i) Explain the various Dolby audio coders.

(ii) Explain any two audio coding techniques used in MPEG.

16. Explain the following audio coders:

i) MPEG audio coders

ii) Dolby audio coders.

17. (i) Explain the “Motion estimation” and “Motion Compensation” phases of P and B frame encoding

process with diagrams wherever necessary

(ii) Write a short note on the “Macro Block” format of H.261 compression standard