DC Error Correcting Codes

DESCRIPTION
Linear Block Codes; Hamming Codes; Communication Systems: Automatic Repeat Request (ARQ); Reed–Muller Codes; Cyclic Codes: CRC, BCH, and RS Codes; Cyclic and Convolutional Codes; Cyclic Linear Codes; Modulation, Demodulation, and Coding; Error Control Coding; Error Correction Codes and Multiuser Communication: Multiple Access Techniques; Error Detection and Correction; Modern Coding: LDPC Codes; Digital Transmission Coding; Data Link Layer: Error Detection and Correction; Error Control in the Binary Channel; Convolutional Codes; Channel Coding; Data Link Control and Protocols; FEC Systems; Cyclic Codes for Error Detection; Viterbi Algorithm. (Note: parity, error handling, standard arrays, and encoding/decoding appear throughout.)

TRANSCRIPT
![Page 1: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/1.jpg)
Department of Electrical and Computer Engineering
Digital Communication and Error Correcting Codes
Timothy J. Schulz, Professor and Chair
Engineering Exploration, Fall 2004
![Page 2: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/2.jpg)
Digital Coding for Error Correction
Digital Data
• ASCII Text
  A 01000001
  B 01000010
  C 01000011
  D 01000100
  E 01000101
  F 01000110
  . . .
![Page 3: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/3.jpg)
Digital Sampling
• Each sample is quantized to one of eight 3-bit levels: 000, 001, 010, 011, 100, 101, 110, 111.
• Example sampled bit stream: 000 010 001 000 001 011 011 011 010 000 111 110 111 111 111
![Page 4: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/4.jpg)
Digital Communication
• Example: Frequency Shift Keying (FSK) – Transmit a tone with a frequency determined by each bit b:
  s(t) = (1 - b) cos(2π f_0 t) + b cos(2π f_1 t)
![Page 5: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/5.jpg)
Digital Channels
• Binary Symmetric Channel: a transmitted 0 or 1 is received correctly with probability 1-p and flipped with probability p.
• Error probability: p
![Page 6: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/6.jpg)
Error Correcting Codes
• Encode book (information bit → channel bits): 0 → 000, 1 → 111
• 3 channel bits per 1 information bit: rate = 1/3
• Decode book (channel bits → information bit):
  000 → 0, 001 → 0, 010 → 0, 011 → 1, 100 → 0, 101 → 1, 110 → 1, 111 → 1
![Page 7: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/7.jpg)
Error Correcting Codes

  information bits:   0     0     1     0     1
  channel code:       000   000   111   000   111
  received bits:      010   000   100   001   110
  decoded bits:       0     0     0     0     1

5 channel errors; 1 information error
![Page 8: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/8.jpg)
Error Correcting Codes
[Plot: bit error probability versus channel error probability, both from 0 to 0.5, for the rate-1/3 repetition code.]
• A decoding error is made only if the channel makes two or three errors in a block of 3 channel bits.
  situation   errors        probability
  ccc         no errors     (1-p)(1-p)(1-p) = 1 - 3p + 3p^2 - p^3
  cce         one error     (1-p)(1-p)(p)   = p - 2p^2 + p^3
  cec         one error     (1-p)(p)(1-p)   = p - 2p^2 + p^3
  cee         two errors    (1-p)(p)(p)     = p^2 - p^3
  ecc         one error     (p)(1-p)(1-p)   = p - 2p^2 + p^3
  ece         two errors    (p)(1-p)(p)     = p^2 - p^3
  eec         two errors    (p)(p)(1-p)     = p^2 - p^3
  eee         three errors  (p)(p)(p)       = p^3

error probability = 3p^2 - 2p^3
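As a quick numerical check (a sketch added here, not part of the original slides), the closed form above can be verified by summing the two- and three-error cases:

```python
# Block-error probability of the rate-1/3 repetition code: the decoder
# fails only when the channel makes two or three errors in a 3-bit block.
def block_error_prob(p):
    return 3 * p**2 * (1 - p) + p**3  # P(2 errors) + P(3 errors)

p = 0.1
# Matches the closed form 3p^2 - 2p^3 (approximately 0.028 at p = 0.1),
# well below the raw channel's error probability of 0.1.
assert abs(block_error_prob(p) - (3 * p**2 - 2 * p**3)) < 1e-12
```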
![Page 9: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/9.jpg)
Error Correcting Codes
• Codes are characterized by the number of channel bits (M) used for (N) information bits. This is called an N/M code.
• An encode book has 2^N entries, and each entry is an M-bit codeword.
• A decode book has 2^M entries, and each entry is an N-bit information-bit sequence.
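To make the bookkeeping concrete, here is a small sketch (illustrative names, not from the slides) of the encode and decode books for the 1/3 repetition code shown earlier:

```python
# Encode book: 2^N = 2 entries, each an M = 3 bit codeword.
encode_book = {'0': '000', '1': '111'}

# Decode book: 2^M = 8 entries, each an N = 1 bit sequence (majority vote).
decode_book = {format(w, '03b'): '1' if format(w, '03b').count('1') >= 2 else '0'
               for w in range(8)}

assert len(encode_book) == 2**1
assert len(decode_book) == 2**3
assert decode_book['010'] == '0' and decode_book['101'] == '1'
```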
![Page 10: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/10.jpg)
![Page 11: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/11.jpg)
![Page 12: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/12.jpg)
![Page 13: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/13.jpg)
![Page 14: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/14.jpg)
![Page 15: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/15.jpg)
EE576 Dr. Kousa Linear Block Codes 15
Linear Block Codes
![Page 16: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/16.jpg)
Basic Definitions
• Let u be a k-bit information sequence and v the corresponding n-bit codeword. A total of 2^k n-bit codewords constitutes an (n,k) code.
• Linear code: The sum of any two codewords is a codeword.
• Observation: The all-zero sequence is a codeword in every
linear block code.
![Page 17: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/17.jpg)
Generator Matrix
• All 2^k codewords can be generated from a set of k linearly independent codewords.
• Let g_0, g_1, …, g_{k-1} be a set of k linearly independent codewords.
• v = u·G
  G = | g_0     |
      | g_1     |
      | ...     |
      | g_{k-1} |,   where g_i = ( g_{i,0}, g_{i,1}, …, g_{i,n-1} )
![Page 18: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/18.jpg)
Systematic Codes
• Any linear block code can be put in systematic form: n-k check bits followed by k information bits.
• In this case the generator matrix takes the form G = [ P | I_k ].
• This matrix corresponds to the set of k codewords generated by the information sequences that have a single nonzero element. Clearly this set is linearly independent.
![Page 19: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/19.jpg)
Generator Matrix (cont’d)
• EX: The generating set for the (7,4) code:
  1000 ===> 1101000;  0100 ===> 0110100
  0010 ===> 1110010;  0001 ===> 1010001
• Every codeword is a linear combination of these 4 codewords. That is: v = u·G, where G = [ P | I_k ]:

  G = | 1 1 0 1 0 0 0 |
      | 0 1 1 0 1 0 0 |
      | 1 1 1 0 0 1 0 |
      | 1 0 1 0 0 0 1 |

• Storage requirement reduced from 2^k (n+k) to k(n-k).
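The mapping v = u·G can be sketched directly (an illustration assuming the systematic G = [P | I_4] whose rows are the four codewords of the generating set above):

```python
# Rows of G are the codewords of 1000, 0100, 0010, 0001.
G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0, 1]]

def encode(u):
    # v_j = sum_i u_i * g_{i,j} (mod 2)
    return [sum(ui * gi[j] for ui, gi in zip(u, G)) % 2 for j in range(7)]

assert encode([1, 0, 0, 0]) == [1, 1, 0, 1, 0, 0, 0]   # 1000 ==> 1101000
assert encode([0, 0, 0, 1]) == [1, 0, 1, 0, 0, 0, 1]   # 0001 ==> 1010001
# Linearity: the codeword of 1100 is the sum of the first two rows of G.
assert encode([1, 1, 0, 0]) == [(a + b) % 2 for a, b in zip(G[0], G[1])]
```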
![Page 20: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/20.jpg)
Parity-Check Matrix
• For G = [ P | I_k ], define the matrix H = [ I_{n-k} | P^T ].
• (The size of H is (n-k)×n.)
• It follows that GH^T = 0.
• Since v = u·G, then vH^T = u·GH^T = 0.
• The parity-check matrix of code C is the generator matrix of another code C_d, called the dual of C.
• For the (7,4) code above:

  H = | 1 0 0 1 0 1 1 |
      | 0 1 0 1 1 1 0 |
      | 0 0 1 0 1 1 1 |
![Page 21: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/21.jpg)
Encoding Using the H Matrix (Parity Check Equations)
• With v = v_0 v_1 v_2 v_3 v_4 v_5 v_6, where v_3 v_4 v_5 v_6 are the information bits, vH^T = 0 gives:
  v_0 + v_3 + v_5 + v_6 = 0  ==>  v_0 = v_3 + v_5 + v_6
  v_1 + v_3 + v_4 + v_5 = 0  ==>  v_1 = v_3 + v_4 + v_5
  v_2 + v_4 + v_5 + v_6 = 0  ==>  v_2 = v_4 + v_5 + v_6
![Page 22: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/22.jpg)
Encoding Circuit
![Page 23: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/23.jpg)
Minimum Distance
• DF: The Hamming weight of a codeword v, denoted w(v), is the number of nonzero elements in the codeword.
• DF: The minimum weight of a code, w_min, is the smallest weight of the nonzero codewords in the code:
  w_min = min { w(v) : v ∈ C, v ≠ 0 }
• DF: The Hamming distance between v and w, denoted d(v,w), is the number of locations where they differ. Note that d(v,w) = w(v+w).
• DF: The minimum distance of the code:
  d_min = min { d(v,w) : v, w ∈ C, v ≠ w }
• TH3.1: In any linear code, d_min = w_min.
![Page 24: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/24.jpg)
Minimum Distance (cont’d)
• TH3.2: For each codeword of Hamming weight l there exist l columns of H such that the vector sum of these columns is zero. Conversely, if there exist l columns of H whose vector sum is zero, there exists a codeword of weight l.
• COL 3.2.2: The d_min of C is equal to the minimum number of columns in H that sum to zero.
• EX:

  H = | 1 0 0 1 0 1 1 |
      | 0 1 0 1 1 1 0 |
      | 0 0 1 0 1 1 1 |
![Page 25: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/25.jpg)
Decoding Linear Codes
• Let v be transmitted and r be received, where r = v + e.
• e is the error pattern e_1 e_2 … e_n, where
  e_i = 1 if an error has occurred in the ith location, and e_i = 0 otherwise.
• The weight of e determines the number of errors.
• We will attempt both processes: error detection and error correction.
![Page 26: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/26.jpg)
Error Detection
• Define the syndrome
  s = rH^T = (s_0, s_1, …, s_{n-k-1})
• If s = 0, then r is a codeword and the receiver assumes r = v, e = 0.
• If e is identical to some codeword, then s = 0 as well, and the error is undetectable.
• EX 3.4: s = (s_0 s_1 s_2) = (r_0 r_1 … r_6) H^T:
  s_0 = r_0 + r_3 + r_5 + r_6
  s_1 = r_1 + r_3 + r_4 + r_5
  s_2 = r_2 + r_4 + r_5 + r_6
![Page 27: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/27.jpg)
Error Correction
• s = rH^T = (v + e)H^T = vH^T + eH^T = eH^T
• The syndrome depends only on the error pattern.
• Can we use the syndrome to find e, and hence do the correction?
• Syndrome digits are linear combinations of the error digits. They provide information about the error locations.
• Unfortunately, for n-k equations and n unknowns there are 2^k solutions. Which one to use?
![Page 28: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/28.jpg)
Example 3.5
• Let r = 1001001 ==> s = 111:
  s_0 = e_0 + e_3 + e_5 + e_6 = 1
  s_1 = e_1 + e_3 + e_4 + e_5 = 1
  s_2 = e_2 + e_4 + e_5 + e_6 = 1
• There are 16 error patterns that satisfy the above equations, some of them:
  0000010  1101010  1010011  1111101
• The most probable one is the one with minimum weight. Hence v* = 1001001 + 0000010 = 1001011.
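Example 3.5 can be replayed numerically (a sketch assuming the (7,4) parity-check matrix whose syndrome equations are s_0 = r_0 + r_3 + r_5 + r_6, etc., as used in these slides):

```python
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

def syndrome(r):
    # s = r * H^T over GF(2)
    return [sum(h * x for h, x in zip(row, r)) % 2 for row in H]

r = [1, 0, 0, 1, 0, 0, 1]
assert syndrome(r) == [1, 1, 1]                      # s = 111
e = [0, 0, 0, 0, 0, 1, 0]                            # minimum-weight solution
v = [(a + b) % 2 for a, b in zip(r, e)]
assert v == [1, 0, 0, 1, 0, 1, 1]                    # v* = 1001011
assert syndrome(v) == [0, 0, 0]                      # v* is a codeword
```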
![Page 29: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/29.jpg)
Standard Array Decoding
• The transmitted codeword is any one of v_1, v_2, …, v_{2^k}.
• The received word r is any one of the 2^n n-tuples.
• Partition the 2^n words into 2^k disjoint subsets D_1, D_2, …, D_{2^k} such that the words in subset D_i are closer to codeword v_i than to any other codeword.
• Each subset is associated with one codeword.
![Page 30: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/30.jpg)
Standard Array Construction
1. List the 2^k codewords in a row, starting with the all-zero codeword v_1.
2. Select an error pattern e_2 and place it below v_1. This error pattern will be a correctable error pattern, and therefore it should be selected such that:
   (i) it has the smallest weight possible (most probable error);
   (ii) it has not appeared before in the array.
3. Add e_2 to each codeword and place the sum below that codeword.
4. Repeat Steps 2 and 3 until all possible error patterns have been accounted for. There will always be 2^n / 2^k = 2^(n-k) rows in the array. Each row is called a coset. The leading error pattern is the coset leader.
• Note that choosing any element in the coset as coset leader does not change the elements in the coset; it simply permutes them.
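The four steps can be sketched on a toy example (the (3,1) repetition code {000, 111}, chosen here only for illustration; the slides use larger codes):

```python
from itertools import product

n, codewords = 3, [(0, 0, 0), (1, 1, 1)]

def add(a, b):
    return tuple((x + y) % 2 for x, y in zip(a, b))

array, used = [], set()
# Visiting words in order of weight implements step 2(i); skipping words
# already placed implements step 2(ii).
for leader in sorted(product([0, 1], repeat=n), key=sum):
    if leader not in used:
        array.append([add(leader, c) for c in codewords])  # step 3
        used.update(array[-1])

assert len(array) == 2**(n - 1)                # 2^n / 2^k = 4 cosets
assert array[0] == codewords                   # row 1 is the code itself
assert all(sum(row[0]) <= 1 for row in array)  # every leader has weight <= 1
```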
![Page 31: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/31.jpg)
Standard Array
• TH 3.3: No two n-tuples in the same row are identical. Every n-tuple appears in one and only one row.

  v_1 = 0        v_2               v_3               ...   v_{2^k}
  e_2            e_2 + v_2         e_2 + v_3         ...   e_2 + v_{2^k}
  e_3            e_3 + v_2         e_3 + v_3         ...   e_3 + v_{2^k}
  ...
  e_{2^(n-k)}    e_{2^(n-k)}+v_2   e_{2^(n-k)}+v_3   ...   e_{2^(n-k)}+v_{2^k}
![Page 32: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/32.jpg)
Standard Array Decoding is Minimum Distance Decoding
• Let the received word r fall in subset D_i and the lth coset. Then r = e_l + v_i.
• r will be decoded as v_i. We show that r is closer to v_i than to any other codeword v_j:
  d(r, v_i) = w(r + v_i) = w(e_l + v_i + v_i) = w(e_l)
  d(r, v_j) = w(r + v_j) = w(e_l + v_i + v_j) = w(e_l + v_s)
• As e_l and e_l + v_s are in the same coset, and e_l is selected to be of minimum weight among the patterns that had not appeared before, w(e_l) ≤ w(e_l + v_s).
• Therefore d(r, v_i) ≤ d(r, v_j).
![Page 33: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/33.jpg)
Standard Array Decoding (cont’d)
• TH 3.4: Every (n,k) linear code is capable of correcting exactly 2^(n-k) error patterns, including the all-zero error pattern.
• EX: The (7,4) Hamming code:
  # of correctable error patterns = 2^3 = 8
  # of single-error patterns = 7
  Therefore, all single-error patterns, and only single-error patterns, can be corrected. (Recall the Hamming bound, and the fact that Hamming codes are perfect.)
![Page 34: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/34.jpg)
Standard Array Decoding (cont’d)
EX 3.6: The (6,3) code defined by the H matrix:

  H = | 1 0 0 0 1 1 |
      | 0 1 0 1 0 1 |
      | 0 0 1 1 1 0 |

Parity-check equations (v = v_0 v_1 … v_5; v_3, v_4, v_5 are the information bits):
  v_0 = v_4 + v_5
  v_1 = v_3 + v_5
  v_2 = v_3 + v_4

Codewords: 000000  110001  101010  011011  011100  101101  110110  000111

d_min = 3
![Page 35: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/35.jpg)
Standard Array Decoding (cont’d)
• Can correct all single errors and one double error pattern
000000 110001 101010 011011 011100 101101 110110 000111
000001 110000 101011 011010 011101 101100 110111 000110
000010 110011 101000 011001 011110 101111 110100 000101
000100 110101 101110 011111 011000 101001 110010 000011
001000 111001 100010 010011 010100 100101 111110 001111
010000 100001 111010 001011 001100 111101 100110 010111
100000 010001 001010 111011 111100 001101 010110 100111
100100 010101 001110 111111 111000 001001 010010 100011
![Page 36: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/36.jpg)
The Syndrome
• Huge storage memory (and searching time) is required by standard array decoding.
• Recall the syndrome: s = rH^T = (v + e)H^T = eH^T
• The syndrome depends only on the error pattern and not on the transmitted codeword.
• TH 3.6: All 2^k n-tuples of a coset have the same syndrome. The syndromes of different cosets are different.
  1st part: (e_l + v_i)H^T = e_l H^T.
  2nd part: Let e_j and e_l be leaders of two cosets, j < l, and assume they have the same syndrome. Then e_j H^T = e_l H^T, so (e_j + e_l)H^T = 0. This implies e_j + e_l = v_i, or e_l = e_j + v_i. This means that e_l is in the jth coset. Contradiction.
![Page 37: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/37.jpg)
The Syndrome (cont’d)

  Error Pattern   Syndrome
  0000000         000
  1000000         100
  0100000         010
  0010000         001
  0001000         110
  0000100         011
  0000010         111
  0000001         101

• There are 2^(n-k) coset leaders and 2^(n-k) syndromes (a one-to-one correspondence).
• Instead of forming the standard array we form a decoding table of the correctable error patterns and their syndromes.
![Page 38: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/38.jpg)
Syndrome Decoding
Decoding Procedure:
1. For the received vector r, compute the syndrome s = rH^T.
2. Using the table, identify the coset leader (error pattern) e_l.
3. Add e_l to r to recover the transmitted codeword v.
• EX: r = 1110101 ==> s = 001 ==> e = 0010000
  Then v = 1100101.
• Syndrome decoding reduces the storage memory from n·2^n to (2n-k)·2^(n-k). It also reduces the searching time considerably.
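A sketch of the resulting decoding table for the (7,4) code (using the same H as in the earlier examples):

```python
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

def syndrome(r):
    return tuple(sum(h * x for h, x in zip(row, r)) % 2 for row in H)

# Correctable patterns: the all-zero pattern (i = -1 below yields it)
# plus the seven single-error patterns.
leaders = [tuple(1 if j == i else 0 for j in range(7)) for i in range(-1, 7)]
table = {syndrome(e): e for e in leaders}
assert len(table) == 2**3                       # one syndrome per coset

r = (1, 1, 1, 0, 1, 0, 1)                       # the slide's example
e = table[syndrome(r)]
v = tuple((a + b) % 2 for a, b in zip(r, e))
assert syndrome(r) == (0, 0, 1) and e == (0, 0, 1, 0, 0, 0, 0)
assert v == (1, 1, 0, 0, 1, 0, 1)               # v = 1100101
```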
![Page 39: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/39.jpg)
Hardware Implementation
• Let r = r_0 r_1 r_2 r_3 r_4 r_5 r_6 and s = s_0 s_1 s_2.
• From the H matrix:
  s_0 = r_0 + r_3 + r_5 + r_6
  s_1 = r_1 + r_3 + r_4 + r_5
  s_2 = r_2 + r_4 + r_5 + r_6
• From the table of syndromes and their corresponding correctable error patterns, a truth table can be constructed. A combinational logic circuit with s0 , s1 , s2 as input and e0 , e1 , e2 , e3 , e4 , e5 , e6 as outputs can be designed.
![Page 40: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/40.jpg)
Decoding Circuit for the (7,4) HC
![Page 41: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/41.jpg)
Error Detection Capability
• A code with minimum distance d_min can detect all error patterns of weight d_min - 1 or less. It can detect many higher-weight error patterns as well, but not all.
• In fact, the number of undetectable error patterns is 2^k - 1 out of the 2^n - 1 nonzero error patterns.
• DF: A_i = the number of codewords of weight i.
• {A_i; i = 0, 1, …, n} = the weight distribution of the code.
• Note that A_0 = 1 and A_j = 0 for 0 < j < d_min.
• The probability of an undetected error:
  P_u(E) = Σ_{i=d_min..n} A_i p^i (1-p)^(n-i)
![Page 42: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/42.jpg)
• EX: Undetected error probability of the (7,4) HC:
  A_0 = A_7 = 1; A_1 = A_2 = A_5 = A_6 = 0; A_3 = A_4 = 7
  P_u(E) = 7p^3(1-p)^4 + 7p^4(1-p)^3 + p^7
  For p = 10^-2, P_u(E) ≈ 7x10^-6.
• Define the weight enumerator:
  A(z) = Σ_{i=0..n} A_i z^i
• Then
  P_u(E) = Σ_{i=d_min..n} A_i p^i (1-p)^(n-i) = (1-p)^n Σ_{i=1..n} A_i (p/(1-p))^i
• Letting z = p/(1-p), and noting that A_0 = 1:
  P_u(E) = (1-p)^n [ A(p/(1-p)) - 1 ]
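The quoted numbers can be reproduced directly (a sketch; A_3 = A_4 = 7 and A_7 = 1 as stated above):

```python
def Pu(p):
    # Pu(E) = 7 p^3 (1-p)^4 + 7 p^4 (1-p)^3 + p^7
    return 7 * p**3 * (1 - p)**4 + 7 * p**4 * (1 - p)**3 + p**7

def A(z):
    # Weight enumerator of the (7,4) Hamming code.
    return 1 + 7 * z**3 + 7 * z**4 + z**7

p = 1e-2
assert abs(Pu(p) - 7e-6) < 0.5e-6               # about 7x10^-6, as quoted
# Same value via Pu = (1-p)^n [A(p/(1-p)) - 1] with n = 7:
assert abs((1 - p)**7 * (A(p / (1 - p)) - 1) - Pu(p)) < 1e-15
```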
![Page 43: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/43.jpg)
• The probability of undetected error can also be found from the weight enumerator of the dual code:
  P_u(E) = 2^-(n-k) B(1 - 2p) - (1 - p)^n
  where B(z) is the weight enumerator of the dual code.
• When neither A(z) nor B(z) is available, P_u may be upper bounded by
  P_u ≤ 2^-(n-k) [1 - (1 - p)^n]
• For good channels (p → 0): P_u ≤ 2^-(n-k)
![Page 44: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/44.jpg)
Error Correction Capability
• An (n,k) code with minimum distance d_min can correct up to t errors, where
  t = floor( (d_min - 1) / 2 )
• It may be able to correct some higher-weight error patterns, but not all.
• The total number of patterns it can correct is 2^(n-k).
• If Σ_{i=0..t} C(n,i) = 2^(n-k), the code is perfect.
• The probability of a decoding error:
  P(E) = Σ_{i=t+1..n} C(n,i) p^i (1-p)^(n-i) = 1 - Σ_{i=0..t} C(n,i) p^i (1-p)^(n-i)
![Page 45: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/45.jpg)
Hamming Codes
• Hamming codes constitute a family of single-error correcting codes defined by:
  n = 2^m - 1,  k = n - m,  m ≥ 3
• The minimum distance of the code is d_min = 3.
• Construction rule for H:
  H is an (n-k)×n matrix, i.e. it has 2^m - 1 columns of m-tuples. The all-zero m-tuple cannot be a column of H (otherwise d_min = 1). No two columns are identical (otherwise d_min = 2). Therefore, the H matrix of a Hamming code of order m has as its columns all nonzero m-tuples. The sum of any two columns is a column of H, so the sum of some three columns is zero, i.e. d_min = 3.
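The construction rule is easy to check in code (a sketch for m = 3, where the columns are all nonzero 3-tuples):

```python
m = 3
# Columns of H: binary representations of 1 .. 2^m - 1.
cols = [tuple((i >> b) & 1 for b in range(m)) for i in range(1, 2**m)]

assert len(cols) == 2**m - 1            # n = 7 columns
assert all(any(c) for c in cols)        # no all-zero column (d_min > 1)
assert len(set(cols)) == len(cols)      # no two columns identical (d_min > 2)
# The sum of two distinct columns is another nonzero column, so some
# three columns sum to zero: d_min = 3.
s = tuple((a + b) % 2 for a, b in zip(cols[0], cols[1]))
assert s in cols
```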
![Page 46: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/46.jpg)
Systematic Hamming Codes
• In systematic form: H = [ I_m | Q ]
• The columns of Q are all m-tuples of weight 2 or more.
• Different arrangements of the columns of Q produce different codes, but with the same distance property.
• Hamming codes are perfect codes: they meet the Hamming bound
  Σ_{i=0..t} C(n,i) = 2^(n-k)
  with equality; the left side = 1 + n, and the right side = 2^m = n + 1.
![Page 47: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/47.jpg)
Decoding of Hamming Codes
• Consider a single-error pattern e(i), where i is a number determining the position of the error.
• s = e(i)·H^T = H_i^T = the transpose of the ith column of H.
• Example: for the error pattern e = 0 1 0 0 0 0 0, with H columns 100, 010, 001, 110, 011, 111, 101, the syndrome is s = 0 1 0, the second column of H.
![Page 48: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/48.jpg)
Decoding of Hamming Codes (cont’d)
• That is, the (transpose of the) ith column of H is the syndrome corresponding to a single error in the ith position.
• Decoding rule:
  1. Compute the syndrome s = rH^T.
  2. Locate the error (i.e. find the i for which s^T = H_i).
  3. Invert the ith bit of r.
![Page 49: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/49.jpg)
Weight Distribution of Hamming Codes
• The weight enumerator of Hamming codes is:
  A(z) = [ (1 + z)^n + n(1 - z)(1 - z^2)^((n-1)/2) ] / (n + 1)
• The weight distribution can also be obtained from the recursive equations:
  A_0 = 1, A_1 = 0
  (i+1)A_{i+1} + A_i + (n - i + 1)A_{i-1} = C(n,i),  i = 1, 2, …, n
• The dual of a Hamming code is a (2^m - 1, m) linear code. Its weight enumerator is:
  B(z) = 1 + (2^m - 1) z^(2^(m-1))
![Page 50: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/50.jpg)
History
• In the late 1940s, Richard Hamming recognized that the further evolution of computers required greater reliability, in particular the ability not only to detect errors but to correct them. His search for error-correcting codes led to the Hamming Codes (perfect 1-error correcting codes) and the extended Hamming Codes (1-error correcting and 2-error detecting codes).
![Page 51: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/51.jpg)
Uses
• Hamming Codes are still widely used in computing, telecommunication, and other applications.
• Hamming Codes are also applied in:
  – Data compression
  – Some solutions to the popular puzzle The Hat Game
  – Block Turbo Codes
![Page 52: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/52.jpg)
A [7,4] binary Hamming Code
• Let our codeword be (x_1 x_2 … x_7) ∈ F_2^7.
• x_3, x_5, x_6, x_7 are chosen according to the message (perhaps the message itself is (x_3 x_5 x_6 x_7)).
• x_4 := x_5 + x_6 + x_7 (mod 2)
• x_2 := x_3 + x_6 + x_7
• x_1 := x_3 + x_5 + x_7
![Page 53: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/53.jpg)
[7,4] binary Hamming codewords
![Page 54: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/54.jpg)
A [7,4] binary Hamming Code
• Let a = x_4 + x_5 + x_6 + x_7 (= 1 iff one of these bits is in error)
• Let b = x_2 + x_3 + x_6 + x_7
• Let c = x_1 + x_3 + x_5 + x_7
• If there is an error (assuming at most one), then abc will be the binary representation of the subscript of the offending bit.
• If (y1 y2 … y7) is received and abc ≠ 000, then we assume the bit abc is in error and switch it. If abc=000, we assume there were no errors (so if there are three or more errors we may recover the wrong codeword).
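The rule can be sketched as a small decoder (1-based indexing as on the slide; the test word is the deck's later worked example):

```python
def decode(y):
    x = [0] + list(y)                   # pad so x[1] is bit x1
    a = (x[4] + x[5] + x[6] + x[7]) % 2
    b = (x[2] + x[3] + x[6] + x[7]) % 2
    c = (x[1] + x[3] + x[5] + x[7]) % 2
    pos = 4 * a + 2 * b + c             # abc read as a binary number
    if pos:                             # abc != 000: flip the offending bit
        x[pos] ^= 1
    return x[1:]

codeword = [1, 0, 1, 1, 0, 1, 0]        # satisfies all three parity rules
assert decode(codeword) == codeword     # abc = 000, no error
# Received (1 0 1 0 0 1 0): abc = 100 = 4, so bit 4 is flipped back.
assert decode([1, 0, 1, 0, 0, 1, 0]) == codeword
```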
![Page 55: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/55.jpg)
Definition: Generator and Check Matrices
• For an [n, k] linear code, the generator matrix is a k×n matrix for which the row space is the given code.
• A check matrix for an [n, k] code is a generator matrix for the dual code; in other words, an (n-k)×n matrix M for which Mx^T = 0 for all x in the code.
![Page 56: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/56.jpg)
A Construction for binary Hamming Codes
• For a given r, form an r × (2^r - 1) matrix M, the columns of which are the binary representations (r bits long) of 1, …, 2^r - 1.
• The linear code for which this is the check matrix is a [2^r - 1, 2^r - 1 - r] binary Hamming Code = { x = (x_1 x_2 … x_n) : Mx^T = 0 }.

Example Check Matrix
• A check matrix for a [7,4] binary Hamming Code:
![Page 57: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/57.jpg)
Syndrome Decoding
• Let y = (y_1 y_2 … y_n) be a received word.
• The syndrome of y is S := L_r y^T (with L_r the check matrix above). If S = 0 then there was no error. If S ≠ 0 then S is the binary representation of some integer 1 ≤ t ≤ n = 2^r - 1, and the intended codeword is recovered by flipping the tth bit of y.
![Page 58: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/58.jpg)
Example Using L_3
• Suppose (1 0 1 0 0 1 0) is received.
• The syndrome 100 is 4 in binary, so the intended codeword was (1 0 1 1 0 1 0).
![Page 59: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/59.jpg)
Extended [8,4] binary Hamming Code
• As with the [7,4] binary Hamming Code:
  – x_3, x_5, x_6, x_7 are chosen according to the message.
  – x_4 := x_5 + x_6 + x_7
  – x_2 := x_3 + x_6 + x_7
  – x_1 := x_3 + x_5 + x_7
• Add a new bit x_0 such that
  x_0 = x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_7,
  i.e., the new bit makes the sum of all the bits zero. x_0 is called a parity check.
![Page 60: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/60.jpg)
Extended binary Hamming Code
• The minimum distance between any two codewords is now 4, so an extended Hamming Code is a 1-error correcting and 2-error detecting code.
• The general construction of a [2^r, 2^r - 1 - r] extended code from a [2^r - 1, 2^r - 1 - r] binary Hamming Code is the same: add a parity check bit.
![Page 61: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/61.jpg)
Check Matrix Construction of Extended Hamming Code
• The check matrix of an extended Hamming Code can be constructed from the check matrix of a Hamming code by adding a zero column on the left and a row of 1’s to the bottom.
![Page 62: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/62.jpg)
q-ary Hamming Codes
• The binary construction generalizes to Hamming Codes over an alphabet A = {0, …, q-1}, q ≥ 2.
• For a given r, form an r × (q^r - 1)/(q - 1) matrix M over A, any two columns of which are linearly independent.
• M determines a [(q^r - 1)/(q - 1), (q^r - 1)/(q - 1) - r] (= [n, k]) q-ary Hamming Code for which M is the check matrix.
![Page 63: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/63.jpg)
Example: ternary [4, 2] Hamming
• Two check matrices for some [4, 2] ternary Hamming Codes:
![Page 64: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/64.jpg)
Syndrome decoding: the q-ary case
• The syndrome of a received word y, S := My^T, will be a multiple of one of the columns of M, say S = αm_i, with α a scalar and m_i the ith column of M. Assume an error vector of weight 1 was introduced: y = x + (0 … α … 0), with α in the ith spot.
![Page 65: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/65.jpg)
Example: q-ary Syndrome
• [4,2] ternary code with the check matrix above; the word (0 1 1 1) is received. The syndrome identifies the error pattern (0 0 2 0).
• So decode (0 1 1 1) as (0 1 1 1) − (0 0 2 0) = (0 1 2 1).
![Page 66: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/66.jpg)
Perfect 1-error correcting
• Hamming Codes are perfect 1-error correcting codes. That is, any received word with at most one error will be decoded correctly, and the code has the smallest possible size of any code that does this.
• For a given r, any perfect 1-error correcting linear code of length n = 2^r - 1 and dimension n - r is a Hamming Code.
![Page 67: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/67.jpg)
Proof: 1-error correcting
• A code will be 1-error correcting if
– spheres of radius 1 centered at codewords cover the codespace, and
– the minimum distance between any two codewords is ≥ 3, since then spheres of radius 1 centered at codewords are disjoint.
• Suppose codewords x, y differ in 1 bit. Then x − y is a codeword of weight 1, so M(x − y) ≠ 0, while every codeword c satisfies Mc = 0. Contradiction. If x, y differ in 2 bits, then M(x − y) is the difference of two multiples of columns of M. No two columns of M are linearly dependent, so M(x − y) ≠ 0, another contradiction. Thus the minimum distance is at least 3.
![Page 68: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/68.jpg)
Perfect
• A sphere of radius δ centered at x is S_δ(x) = {y ∈ A^n : d_H(x, y) ≤ δ}, where A is the alphabet F_q and d_H is the Hamming distance.
• A sphere of radius e contains Σ_{i=0}^{e} C(n, i)(q − 1)^i words.
• If C is an e-error correcting code, then the spheres of radius e around distinct codewords are disjoint, so |C| · Σ_{i=0}^{e} C(n, i)(q − 1)^i ≤ q^n, and hence |C| ≤ q^n / Σ_{i=0}^{e} C(n, i)(q − 1)^i.
![Page 69: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/69.jpg)
Perfect
• This last inequality is called the sphere-packing bound for an e-error correcting code C of length n over F_q:
|C| ≤ q^n / Σ_{i=0}^{e} C(n, i)(q − 1)^i,
where n is the length of the code (for Hamming codes, e = 1).
• A code for which equality holds is called perfect.
![Page 70: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/70.jpg)
Proof: Perfect
• The right side of this, for e = 1, is q^n / (1 + n(q − 1)).
• The left side is q^{n−r}, where n = (q^r − 1)/(q − 1).
• Indeed, q^{n−r}(1 + n(q − 1)) = q^{n−r}(1 + (q^r − 1)) = q^{n−r} · q^r = q^n.
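The equality can be checked numerically for small q and r (the function name below is ours, a quick sketch rather than anything from the slides):

```python
# Check the sphere-packing equality q^(n-r) * (1 + n(q-1)) = q^n for
# q-ary Hamming codes with n = (q^r - 1)/(q - 1).
def is_perfect_hamming(q, r):
    n = (q**r - 1) // (q - 1)
    codewords = q**(n - r)          # |C| = q^(n-r)
    sphere = 1 + n * (q - 1)        # words within Hamming distance 1
    return codewords * sphere == q**n

for q, r in [(2, 3), (2, 4), (3, 2), (3, 3), (5, 2)]:
    assert is_perfect_hamming(q, r)
```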
Applications
• Data compression
• Turbo codes
• The Hat Game
![Page 71: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/71.jpg)
Data Compression
• Hamming codes can be used for a form of lossy compression.
• If n = 2^r − 1 for some r, then any n-tuple of bits x is within distance at most 1 of a Hamming codeword c. Let G be a generator matrix for the Hamming code, and let mG = c.
• For compression, store x as m. For decompression, decode m as c. This saves r bits of space but corrupts (at most) 1 bit.
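A minimal sketch of this compression scheme for n = 7 (r = 3), assuming the common systematic generator and check matrices for the binary Hamming (7,4) code (the slides do not fix a particular G, so these matrices are an assumption of the sketch):

```python
# Lossy compression with the binary Hamming (7,4) code: store a 7-bit word
# as the 4-bit message of its nearest codeword. G = [I4|A] and H = [A^T|I3].
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]

def encode(m):                       # c = mG over GF(2)
    return [sum(m[i]*G[i][j] for i in range(4)) % 2 for j in range(7)]

def nearest_codeword(x):             # syndrome decoding: flip at most 1 bit
    s = [sum(H[i][j]*x[j] for j in range(7)) % 2 for i in range(3)]
    c = list(x)
    if any(s):
        cols = [[H[i][j] for i in range(3)] for j in range(7)]
        c[cols.index(s)] ^= 1        # syndrome = column of flipped position
    return c

def compress(x):                     # 7 bits -> 4 bits (may corrupt 1 bit)
    return nearest_codeword(x)[:4]   # systematic: message = first 4 bits

def decompress(m):
    return encode(m)
```

Round-tripping a 7-bit word through compress/decompress changes at most one bit, as the slide claims.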
![Page 72: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/72.jpg)
The Hat Game
• A group of n players enters a room, whereupon each receives a hat. Each player can see everyone else's hat but not his own.
• The players must each simultaneously guess a hat color, or pass.
• The group loses if any player guesses the wrong hat color or if every player passes.
• Players are not necessarily anonymous; they can be numbered.
• Assignment of hats is assumed to be random.
• The players can meet beforehand to devise a strategy.
• The goal is to devise the strategy that gives the highest probability of winning.
![Page 73: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/73.jpg)
EE 551/451, Fall, 2007
Communication Systems
Zhu Han
Department of Electrical and Computer Engineering
Class 25
Dec. 6th, 2007
![Page 74: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/74.jpg)
EE 541/451 Fall 2007
Outline
 Project 2
 ARQ Review
 Linear Code
 – Hamming Code Revisit
 – Reed–Muller code
 Cyclic Code
 – CRC Code
 – BCH Code
 – RS Code
![Page 75: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/75.jpg)
ARQ, FEC, HEC
 ARQ: tx → rx with an error detection code; receiver returns ACK/NACK
 Forward Error Correction (FEC): tx → rx with an error correction code
 Hybrid Error Correction (HEC): tx → rx with an error detection/correction code; receiver returns ACK/NACK
![Page 76: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/76.jpg)
Hamming Code
 H(n,k): k information bits, n overall code length
 n = 2^m − 1, k = 2^m − m − 1:
 H(7,4), rate 4/7; H(15,11), rate 11/15; H(31,26), rate 26/31
 H(7,4): distance d = 3, correction ability 1, detection ability 2
 Remember that it is good to have larger distance and rate.
 Larger n means larger delay, but usually a better code.
![Page 77: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/77.jpg)
Hamming Code Example
 H(7,4)
 Generator matrix G: first 4-by-4 identity matrix
 Message information vector p
 Transmission vector x
 Received vector r and error vector e
 Parity check matrix H
![Page 78: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/78.jpg)
Error Correction
 If there is no error, the syndrome vector z = 0.
 If there is one error at location 2, the new syndrome vector z corresponds to the second column of H. Thus an error has been detected in position 2, and it can be corrected.
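The slides' matrices did not survive transcription; assuming the common systematic H(7,4) pair G = [I_4 | A], H = [A^T | I_3], the procedure can be reproduced as follows:

```python
# Sketch of H(7,4) syndrome computation. The systematic G = [I4|A] and
# H = [A^T|I3] below are assumptions, not the slides' own matrices.
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]

def encode(p):                        # x = pG over GF(2)
    return [sum(p[i]*G[i][j] for i in range(4)) % 2 for j in range(7)]

def syndrome(r):                      # z = Hr^T over GF(2)
    return [sum(H[i][j]*r[j] for j in range(7)) % 2 for i in range(3)]

p = [1, 0, 1, 1]
x = encode(p)
assert syndrome(x) == [0, 0, 0]       # no error -> zero syndrome

e = [0, 1, 0, 0, 0, 0, 0]             # one error at location 2
r = [xi ^ ei for xi, ei in zip(x, e)]
z = syndrome(r)
assert z == [H[i][1] for i in range(3)]   # z = second column of H
```

By linearity, z = H(x + e)^T = He^T, which for a single error is exactly the corresponding column of H.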
![Page 79: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/79.jpg)
Exercise
 Same problem as the previous slide, but p = (1001)' and the error occurs at location 4 instead.
 Pause for 5 minutes.
 Might be 10 points in the finals.
![Page 80: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/80.jpg)
Important Hamming Codes
 Hamming (7,4,3)-code. It has 16 codewords of length 7. It can be used to send 2^7 = 128 messages and can be used to correct 1 error.
 • Golay (23,12,7)-code. It has 4 096 codewords. It can be used to transmit 8 388 608 messages and can correct 3 errors.
 Quadratic residue (47,24,11)-code. It has 16 777 216 codewords and can be used to transmit 140 737 488 355 328 messages and correct 5 errors.
![Page 81: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/81.jpg)
Reed–Muller code
![Page 82: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/82.jpg)
Cyclic code
 Cyclic codes are of interest and importance because
 – They possess a rich algebraic structure that can be utilized in a variety of ways.
 – They have extremely concise specifications.
 – They can be efficiently implemented using simple shift registers.
 – Many practically important codes are cyclic.
 In practice, cyclic codes are often used for error detection (cyclic redundancy check, CRC)
 – Used for packet networks
 – When an error is detected by the receiver, it requests retransmission
 – ARQ
![Page 83: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/83.jpg)
BASIC DEFINITION of Cyclic Code
![Page 84: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/84.jpg)
FREQUENCY of CYCLIC CODES
![Page 85: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/85.jpg)
EXAMPLE of a CYCLIC CODE
![Page 86: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/86.jpg)
POLYNOMIALS over GF(q)
![Page 87: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/87.jpg)
EXAMPLE
![Page 88: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/88.jpg)
Cyclic Code Encoder
![Page 89: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/89.jpg)
Cyclic Code Decoder
 Divider
 Similar structure to the multiplier in the encoder
![Page 90: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/90.jpg)
Cyclic Redundancy Checks (CRC)
![Page 91: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/91.jpg)
Example of CRC
![Page 92: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/92.jpg)
Checking for errors
![Page 93: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/93.jpg)
Capability of CRC
 An error E(x) is undetectable if it is divisible by G(x). The following can be detected:
 – All single-bit errors, if G(x) has more than one nonzero term
 – All double-bit errors, if G(x) has a factor with three terms
 – Any odd number of errors, if G(x) contains a factor x + 1
 – Any burst with length less than or equal to n − k
 – A fraction of error bursts of length n − k + 1; the fraction is 1 − 2^−(n−k−1)
 – A fraction of error bursts of length greater than n − k + 1; the fraction is 1 − 2^−(n−k)
 Powerful error detection; more computational complexity than the Internet checksum
Page 652
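The divide-and-compare mechanics behind these detection properties can be sketched bitwise. The generator x^3 + x + 1 below is an illustrative choice, not one of the standard CRC polynomials:

```python
# Minimal bitwise CRC sketch: transmit the message followed by the
# remainder of x^r * m(x) divided by g(x) over GF(2), r = deg g(x).
def crc_remainder(bits, gen):
    bits = list(bits) + [0] * (len(gen) - 1)   # append r zero bits
    for i in range(len(bits) - len(gen) + 1):
        if bits[i]:                            # XOR-divide step
            for j, g in enumerate(gen):
                bits[i + j] ^= g
    return bits[-(len(gen) - 1):]              # r-bit remainder

def transmit(msg, gen):
    return msg + crc_remainder(msg, gen)

def check(word, gen):                          # True iff no error detected
    r = len(gen) - 1
    return crc_remainder(word[:-r], gen) == word[-r:]

gen = [1, 0, 1, 1]                             # g(x) = x^3 + x + 1
word = transmit([1, 1, 0, 1, 0, 1], gen)
assert check(word, gen)
corrupted = word[:]
corrupted[2] ^= 1                              # single-bit error
assert not check(corrupted, gen)               # detected: g(x) has >1 term
```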
![Page 94: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/94.jpg)
BCH Code
 Bose, Ray-Chaudhuri, Hocquenghem
 – Multiple error correcting ability
 – Ease of encoding and decoding
 – Page 653
 Most powerful cyclic code
 – For any positive integer m and t < 2^(m−1), there exists a t-error correcting (n,k) code with n = 2^m − 1 and n − k ≤ mt.
 Industry standards
 – (511, 493) BCH code in ITU-T Rec. H.261 "Video codec for audiovisual services at p × 64 kbit/s", a video coding standard used for video conferencing and video phones.
 – (40, 32) BCH code in ATM (Asynchronous Transfer Mode)
![Page 95: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/95.jpg)
BCH Performance
![Page 96: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/96.jpg)
Reed-Solomon Codes
An important subclass of non-binary BCH
Page 654
Wide range of applications– Storage devices (tape, CD, DVD…)
– Wireless or mobile communication
– Satellite communication
– Digital television/Digital Video Broadcast(DVB)
– High-speed modems (ADSL, xDSL…)
![Page 97: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/97.jpg)
Examples
 10.2 page 639
10.3 page 648
10.4 Page 651
Might be 4 points in the final
![Page 98: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/98.jpg)
1971: Mariner 9
 Mariner 9 used a [32,6,16] Reed-Muller code to transmit its grey images of Mars.
 Camera rate: 100,000 bits/second
 Transmission speed: 16,000 bits/second
![Page 99: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/99.jpg)
1979+: Voyagers I & II
 Voyagers I & II used a [24,12,8] Golay code to send their color images of Jupiter and Saturn.
 Voyager 2 traveled further, to Uranus and Neptune. Because of the higher error rate it switched to the more robust Reed-Solomon code.
![Page 100: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/100.jpg)
Modern Codes
 More recently Turbo codes were invented; they are used in 3G cell phones, (future) satellites, and in the Cassini-Huygens space probe [1997–].
 Other modern codes: Fountain, Raptor, LT, online codes…
 Next, next class
![Page 101: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/101.jpg)
Error Correcting Codes
The imperfectness of a given code is the difference between the code's required Eb/No to attain a given word error probability (Pw) and the minimum possible Eb/No required to attain the same Pw, as implied by the sphere-packing bound for codes with the same block size k and code rate r.
![Page 102: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/102.jpg)
Radio System Propagation
![Page 103: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/103.jpg)
Satellite Communications
 Large communication area: any two places within the coverage of the satellite's radio transmission can communicate with each other.
 Seldom affected by land disasters (high reliability).
 Circuits can be started upon establishing an earth station (prompt circuit starting).
 Can be received at many places simultaneously, realizing broadcast and multi-access communication economically (multi-access feature).
 Very flexible circuit installation; can disperse over-centralized traffic at any time.
 One channel can be used in different directions or areas (multi-access connecting).
![Page 104: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/104.jpg)
GPS
 Just a timer, 24 satellites
 Calculating position
![Page 105: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/105.jpg)
Cyclic codes
CHAPTER 3: Cyclic and convolution codes
Cyclic codes are of interest and importance because
• They possess a rich algebraic structure that can be utilized in a variety of ways.
• They have extremely concise specifications.
• They can be efficiently implemented using simple shift registers.
• Many practically important codes are cyclic.
Convolution codes allow encoding of streams of data (bits).
![Page 106: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/106.jpg)
BASIC DEFINITION AND EXAMPLES
•Definition A code C is cyclic if
•(i) C is a linear code;
•(ii) any cyclic shift of a codeword is also a codeword, i.e. whenever a_0 … a_{n−1} ∈ C, then also a_{n−1} a_0 … a_{n−2} ∈ C.
Example
(i) Code C = {000, 101, 011, 110} is cyclic.
(ii) Hamming code Ham(3, 2): with the generator matrix below it is equivalent to a cyclic code.
(iii) The binary linear code {0000, 1001, 0110, 1111} is not cyclic, but it is equivalent to a cyclic code.
(iv) Is the Hamming code Ham(2, 3) with the generator matrix below
(a) cyclic?
(b) equivalent to a cyclic code?
Generator matrix for Ham(3, 2):
G =
1 0 0 0 0 1 1
0 1 0 0 1 0 1
0 0 1 0 1 1 0
0 0 0 1 1 1 1

Generator matrix for Ham(2, 3):
G =
1 0 1 1
0 1 1 2
![Page 107: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/107.jpg)
FREQUENCY of CYCLIC CODES
•Compared with linear codes, cyclic codes are quite scarce. For example, there are 11 811 (7,3) binary linear codes, but only two of them are cyclic.
•Trivial cyclic codes. For any field F and any integer n ≥ 3 there are always the following cyclic codes of length n over F:
• No-information code - code consisting of just one all-zero codeword.
• Repetition code - code consisting of codewords (a, a, …, a) for a ∈ F.
• Single-parity-check code - code consisting of all codewords with parity 0.
• No-parity code - code consisting of all codewords of length n.
•For some cases, for example for n = 19 and F = GF(2), the above four trivial cyclic codes are the only cyclic codes.
![Page 108: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/108.jpg)
EXAMPLE of a CYCLIC CODE
•The code with the generator matrix
•has codewords
• c1 = 1011100 c2 = 0101110 c3 =0010111
• c1 + c2 = 1110010 c1 + c3 = 1001011 c2 + c3 = 0111001
• c1 + c2 + c3 = 1100101
•and it is cyclic because the right shifts have the following impacts:
• c1 → c2, c2 → c3, c3 → c1 + c3
• c1 + c2 → c2 + c3, c1 + c3 → c1 + c2 + c3, c2 + c3 → c1
• c1 + c2 + c3 → c1 + c2
G =
1 0 1 1 1 0 0
0 1 0 1 1 1 0
0 0 1 0 1 1 1
![Page 109: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/109.jpg)
POLYNOMIALS over GF(q)
• F_q[x] denotes the set of all polynomials over GF(q).
• deg(f(x)) = the largest m such that x^m has a non-zero coefficient in f(x).
Multiplication of polynomials If f(x), g(x) ∈ F_q[x], then deg(f(x)g(x)) = deg(f(x)) + deg(g(x)).
Division of polynomials For every pair of polynomials a(x), b(x) ≠ 0 in F_q[x] there exists a unique pair of polynomials q(x), r(x) in F_q[x] such that
a(x) = q(x)b(x) + r(x), deg(r(x)) < deg(b(x)).
Example Divide x^3 + x + 1 by x^2 + x + 1 in F_2[x].
Definition Let f(x) be a fixed polynomial in F_q[x]. Two polynomials g(x), h(x) are said to be congruent modulo f(x), notation
g(x) ≡ h(x) (mod f(x)),
if g(x) − h(x) is divisible by f(x).
![Page 110: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/110.jpg)
RING of POLYNOMIALS
•The set of polynomials in F_q[x] of degree less than deg(f(x)), with addition and multiplication modulo f(x), forms a ring denoted F_q[x]/f(x).
•Example Calculate (x + 1)^2 in F_2[x] / (x^2 + x + 1). It holds
•(x + 1)^2 = x^2 + 2x + 1 ≡ x^2 + 1 ≡ x (mod x^2 + x + 1).
•How many elements does F_q[x] / f(x) have?
•Result | F_q[x] / f(x) | = q^deg(f(x)).
•Example Addition and multiplication in F_2[x] / (x^2 + x + 1):
Definition A polynomial f(x) in F_q[x] is said to be reducible if f(x) = a(x)b(x), where a(x), b(x) ∈ F_q[x] and deg(a(x)) < deg(f(x)), deg(b(x)) < deg(f(x)).
If f(x) is not reducible, it is irreducible in F_q[x].
Theorem The ring F_q[x] / f(x) is a field if f(x) is irreducible in F_q[x].
+     | 0     1     x     1+x
------+-----------------------
0     | 0     1     x     1+x
1     | 1     0     1+x   x
x     | x     1+x   0     1
1+x   | 1+x   x     1     0

·     | 0     1     x     1+x
------+-----------------------
0     | 0     0     0     0
1     | 0     1     x     1+x
x     | 0     x     1+x   1
1+x   | 0     1+x   1     x
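These tables can be reproduced programmatically; representing a_0 + a_1·x as the pair (a_0, a_1), a small sketch (function names ours):

```python
# Arithmetic in F_2[x]/(x^2 + x + 1), with a0 + a1*x stored as (a0, a1).
def add(p, q):
    return tuple((a + b) % 2 for a, b in zip(p, q))

def mul(p, q):
    (a0, a1), (b0, b1) = p, q
    # (a0 + a1 x)(b0 + b1 x) = a0b0 + (a0b1 + a1b0) x + a1b1 x^2,
    # and x^2 = x + 1 modulo x^2 + x + 1.
    c0, c1, c2 = a0 * b0, a0 * b1 + a1 * b0, a1 * b1
    return ((c0 + c2) % 2, (c1 + c2) % 2)

elems = [(0, 0), (1, 0), (0, 1), (1, 1)]      # 0, 1, x, 1+x
# Every non-zero element has a multiplicative inverse, so this is a field,
# as the theorem above promises for an irreducible modulus:
for p in elems[1:]:
    assert any(mul(p, q) == (1, 0) for q in elems[1:])
assert mul((1, 1), (1, 1)) == (0, 1)          # (1+x)^2 = x
```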
![Page 111: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/111.jpg)
THE RING R_n = F_q[x] / (x^n − 1)
•Computation modulo x^n − 1
•Since x^n ≡ 1 (mod x^n − 1), we can compute f(x) mod x^n − 1 as follows:
•In f(x) replace x^n by 1, x^{n+1} by x, x^{n+2} by x^2, x^{n+3} by x^3, …
•Identification of words with polynomials
•a_0 a_1 … a_{n−1} ↔ a_0 + a_1 x + a_2 x^2 + … + a_{n−1} x^{n−1}
•Multiplication by x in R_n corresponds to a single cyclic shift:
•x (a_0 + a_1 x + … + a_{n−1} x^{n−1}) = a_{n−1} + a_0 x + a_1 x^2 + … + a_{n−2} x^{n−1}
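This identity is easy to check by direct computation; a small sketch (helper name ours):

```python
# In R_n = F_q[x]/(x^n - 1), multiplying by x is a single cyclic shift
# of the coefficient word a_0 a_1 ... a_{n-1}.
def poly_mul_mod(a, b, n, q):
    """Multiply two coefficient lists in F_q[x]/(x^n - 1)."""
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[(i + j) % n] = (out[(i + j) % n] + ai * bj) % q  # x^n -> 1
    return out

a = [1, 0, 1, 1, 0]                  # a(x) = 1 + x^2 + x^3 in R_5 over GF(2)
x = [0, 1, 0, 0, 0]                  # the polynomial x
assert poly_mul_mod(a, x, 5, 2) == [0, 1, 0, 1, 1]   # one cyclic shift of a
```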
![Page 112: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/112.jpg)
Algebraic characterization of cyclic codes
• Theorem A code C is cyclic if and only if C satisfies two conditions:
• (i) a(x), b(x) ∈ C ⇒ a(x) + b(x) ∈ C
• (ii) a(x) ∈ C, r(x) ∈ R_n ⇒ r(x)a(x) ∈ C
• Proof
• (1) Let C be a cyclic code. C is linear ⇒ (i) holds.
• (ii) Let a(x) ∈ C, r(x) = r_0 + r_1 x + … + r_{n−1} x^{n−1}.
• Then r(x)a(x) = r_0 a(x) + r_1 x a(x) + … + r_{n−1} x^{n−1} a(x)
• is in C by (i), because the summands are cyclic shifts of a(x).
• (2) Let (i) and (ii) hold.
• Taking r(x) to be a scalar, the conditions imply linearity of C.
• Taking r(x) = x, the conditions imply cyclicity of C.
![Page 113: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/113.jpg)
CONSTRUCTION of CYCLIC CODES
•Notation If f(x) ∈ R_n, then
⟨f(x)⟩ = {r(x)f(x) | r(x) ∈ R_n}
•(multiplication is modulo x^n − 1).
•Theorem For any f(x) ∈ R_n, the set ⟨f(x)⟩ is a cyclic code (generated by f).
•Proof We check conditions (i) and (ii) of the previous theorem.
•(i) If a(x)f(x) ∈ ⟨f(x)⟩ and b(x)f(x) ∈ ⟨f(x)⟩, then
• a(x)f(x) + b(x)f(x) = (a(x) + b(x)) f(x) ∈ ⟨f(x)⟩.
•(ii) If a(x)f(x) ∈ ⟨f(x)⟩, r(x) ∈ R_n, then
• r(x) (a(x)f(x)) = (r(x)a(x)) f(x) ∈ ⟨f(x)⟩.
Example C = ⟨1 + x^2⟩, n = 3, q = 2.
We have to compute r(x)(1 + x^2) for all r(x) ∈ R_3.
R_3 = {0, 1, x, 1 + x, x^2, 1 + x^2, x + x^2, 1 + x + x^2}.
Result C = {0, 1 + x, 1 + x^2, x + x^2}
C = {000, 011, 101, 110}
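This enumeration of ⟨1 + x^2⟩ can be reproduced mechanically (helper name ours):

```python
# Enumerate the cyclic code <f(x)> = { r(x)f(x) : r(x) in R_n }
# for f(x) = 1 + x^2, n = 3, q = 2.
from itertools import product

def poly_mul_mod(a, b, n, q):
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[(i + j) % n] = (out[(i + j) % n] + ai * bj) % q
    return out

n, q = 3, 2
f = [1, 0, 1]                                   # f(x) = 1 + x^2
code = {tuple(poly_mul_mod(list(r), f, n, q))   # r(x)f(x) mod (x^n - 1)
        for r in product(range(q), repeat=n)}
assert code == {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}
```

The result matches the slide: C = {000, 011, 101, 110}.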
![Page 114: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/114.jpg)
Characterization theorem for cyclic codes
•We show that all cyclic codes C have the form C = ⟨f(x)⟩ for some f(x) ∈ R_n.
•Theorem Let C be a non-zero cyclic code in R_n. Then
• there exists a unique monic polynomial g(x) of the smallest degree such that
• C = ⟨g(x)⟩, and
• g(x) is a factor of x^n − 1.
Proof (i) Suppose g(x) and h(x) are two monic polynomials in C of the smallest degree. Then the polynomial g(x) − h(x) ∈ C has a smaller degree, and multiplication by a scalar makes a monic polynomial out of it. If g(x) ≠ h(x) we get a contradiction.
(ii) Suppose a(x) ∈ C. Then
a(x) = q(x)g(x) + r(x) (deg r(x) < deg g(x))
and
r(x) = a(x) − q(x)g(x) ∈ C. By minimality,
r(x) = 0, and therefore a(x) ∈ ⟨g(x)⟩.
![Page 115: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/115.jpg)
Characterization theorem for cyclic codes
•(iii) Clearly,
•x^n − 1 = q(x)g(x) + r(x) with deg r(x) < deg g(x)
•and therefore r(x) ≡ −q(x)g(x) (mod x^n − 1), so
•r(x) ∈ C ⇒ r(x) = 0 ⇒ g(x) is a factor of x^n − 1.
GENERATOR POLYNOMIALS
Definition If for a cyclic code C it holds
C = ⟨g(x)⟩,
then g is called the generator polynomial for the code C.
![Page 116: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/116.jpg)
HOW TO DESIGN CYCLIC CODES?
•The last claim of the previous theorem gives a recipe to get all cyclic codes of a given length n.
•Indeed, all we need to do is to find all factors of
• x^n − 1.
•Problem: Find all binary cyclic codes of length 3.
•Solution: Since
• x^3 − 1 = (x + 1)(x^2 + x + 1)
• and both factors are irreducible in GF(2),
•we have the following generator polynomials and codes.
• Generator polynomial | Code in R_3 | Code in V(3,2)
• 1 | R_3 | V(3,2)
• x + 1 | {0, 1 + x, x + x^2, 1 + x^2} | {000, 110, 011, 101}
• x^2 + x + 1 | {0, 1 + x + x^2} | {000, 111}
• x^3 − 1 (= 0) | {0} | {000}
![Page 117: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/117.jpg)
Design of generator matrices for cyclic codes
• Theorem Suppose C is a cyclic code of codewords of length n with the generator polynomial
• g(x) = g_0 + g_1 x + … + g_r x^r.
• Then dim(C) = n − r, and a generator matrix G_1 for C is the (n − r) × n matrix

G_1 =
[ g_0 g_1 g_2 … g_r 0   0   … 0   ]
[ 0   g_0 g_1 g_2 … g_r 0   … 0   ]
[ 0   0   g_0 g_1 g_2 … g_r … 0   ]
[ …                               ]
[ 0   0   … 0   g_0 g_1 g_2 … g_r ]

Proof
(i) All rows of G_1 are linearly independent.
(ii) The n − r rows of G_1 represent the codewords
g(x), xg(x), x^2 g(x), …, x^{n−r−1} g(x). (*)
(iii) It remains to show that every codeword in C can be expressed as a linear combination of the vectors from (*).
Indeed, if a(x) ∈ C, then
a(x) = q(x)g(x).
Since deg a(x) < n, we have deg q(x) < n − r. Hence
q(x)g(x) = (q_0 + q_1 x + … + q_{n−r−1} x^{n−r−1}) g(x)
= q_0 g(x) + q_1 x g(x) + … + q_{n−r−1} x^{n−r−1} g(x).
![Page 118: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/118.jpg)
EXAMPLE
•The task is to determine all ternary codes of length 4 and generators for them.
•Factorization of x^4 − 1 over GF(3) has the form
•x^4 − 1 = (x − 1)(x^3 + x^2 + x + 1) = (x − 1)(x + 1)(x^2 + 1)
•Therefore there are 2^3 = 8 divisors of x^4 − 1, and each generates a cyclic code.
• Generator polynomial | Generator matrix
• 1 | I_4
• x − 1 | [ −1 1 0 0; 0 −1 1 0; 0 0 −1 1 ]
• x + 1 | [ 1 1 0 0; 0 1 1 0; 0 0 1 1 ]
• x^2 + 1 | [ 1 0 1 0; 0 1 0 1 ]
• (x − 1)(x + 1) = x^2 − 1 | [ −1 0 1 0; 0 −1 0 1 ]
• (x − 1)(x^2 + 1) = x^3 − x^2 + x − 1 | [ −1 1 −1 1 ]
• (x + 1)(x^2 + 1) | [ 1 1 1 1 ]
• x^4 − 1 (= 0) | [ 0 0 0 0 ]
![Page 119: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/119.jpg)
Check polynomials and parity-check matrices for cyclic codes
•Let C be a cyclic [n,k]-code with the generator polynomial g(x) (of degree n − k). By the last theorem, g(x) is a factor of x^n − 1. Hence
• x^n − 1 = g(x)h(x)
•for some h(x) of degree k (h(x) is called the check polynomial of C).
•Theorem Let C be a cyclic code in R_n with a generator polynomial g(x) and a check polynomial h(x). Then c(x) ∈ R_n is a codeword of C if and only if c(x)h(x) ≡ 0 (this and the following congruences are modulo x^n − 1).
Proof Note that g(x)h(x) = x^n − 1 ≡ 0.
(i) c(x) ∈ C ⇒ c(x) = a(x)g(x) for some a(x) ∈ R_n
⇒ c(x)h(x) = a(x) g(x)h(x) ≡ 0.
(ii) c(x)h(x) ≡ 0
⇒ c(x) = q(x)g(x) + r(x), deg r(x) < n − k = deg g(x)
⇒ c(x)h(x) ≡ 0 ⇒ r(x)h(x) ≡ 0 (mod x^n − 1)
Since deg (r(x)h(x)) < n − k + k = n, we have r(x)h(x) = 0 in F[x], and therefore
r(x) = 0 ⇒ c(x) = q(x)g(x) ∈ C.
![Page 120: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/120.jpg)
POLYNOMIAL REPRESENTATION of DUAL CODES
•Since dim(⟨h(x)⟩) = n − k = dim(C⊥), we might easily be fooled into thinking that the check polynomial h(x) of the code C generates the dual code C⊥.
•Reality is "slightly different":
•Theorem Suppose C is a cyclic [n,k]-code with the check polynomial
•h(x) = h_0 + h_1 x + … + h_k x^k,
•then
•(i) a parity-check matrix for C is

H =
[ h_k h_{k−1} … h_0 0   … 0   ]
[ 0   h_k h_{k−1} … h_0 … 0   ]
[ …                           ]
[ 0   … 0   h_k h_{k−1} … h_0 ]

•(ii) C⊥ is the cyclic code generated by the polynomial
•h̄(x) = h_k + h_{k−1} x + … + h_0 x^k,
•i.e. the reciprocal polynomial of h(x).
![Page 121: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/121.jpg)
POLYNOMIAL REPRESENTATION of DUAL CODES
•Proof A polynomial c(x) = c_0 + c_1 x + … + c_{n−1} x^{n−1} represents a codeword of C if c(x)h(x) = 0. For c(x)h(x) to be 0, the coefficients at x^k, …, x^{n−1} must be zero, i.e.
c_0 h_k + c_1 h_{k−1} + … + c_k h_0 = 0
c_1 h_k + c_2 h_{k−1} + … + c_{k+1} h_0 = 0
…
c_{n−k−1} h_k + c_{n−k} h_{k−1} + … + c_{n−1} h_0 = 0
•Therefore, any codeword c_0 c_1 … c_{n−1} ∈ C is orthogonal to the word h_k h_{k−1} … h_0 0 0 … 0 and to its cyclic shifts.
•The rows of the matrix H are therefore in C⊥. Moreover, since h_k = 1, these row vectors are linearly independent. Their number is n − k = dim(C⊥). Hence H is a generator matrix for C⊥, i.e. a parity-check matrix for C.
•In order to show that C⊥ is the cyclic code generated by the polynomial
•h̄(x) = h_k + h_{k−1} x + … + h_0 x^k = x^k h(x^{−1}),
•it is sufficient to show that h̄(x) is a factor of x^n − 1.
•Observe that h(x^{−1})g(x^{−1}) = (x^{−1})^n − 1, and since
•x^k h(x^{−1}) · x^{n−k} g(x^{−1}) = x^n (x^{−n} − 1) = 1 − x^n,
•h̄(x) is indeed a factor of x^n − 1.
![Page 122: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/122.jpg)
ENCODING with CYCLIC CODES I
•Encoding using a cyclic code can be done by a multiplication of two polynomials - a message polynomial and the generator polynomial of the cyclic code.
•Let C be an (n,k)-code over a field F with the generator polynomial
•g(x) = g_0 + g_1 x + … + g_r x^r of degree r = n − k.
•If a message vector m is represented by a polynomial m(x) of degree less than k, and m is encoded by
•m ↦ c = mG_1,
•then the following relation between m(x) and c(x) holds:
•c(x) = m(x)g(x).
•Such an encoding can be realized by the shift register shown in the figure below, where the input is the k-bit message to be encoded followed by n − k 0's, and the output is the encoded message.
•Shift-register encodings of cyclic codes. Small circles represent multiplication by the corresponding constant, nodes represent modular addition, squares are delay elements.
![Page 123: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/123.jpg)
ENCODING of CYCLIC CODES II
•Another method for encoding of cyclic codes is based on the following (so-called systematic) representation of the generator and parity-check matrices for cyclic codes.
•Theorem Let C be an (n,k)-code with generator polynomial g(x) and r = n − k. For i = 0, 1, …, k − 1, let G_{2,i} be the length-n vector whose polynomial is G_{2,i}(x) = x^{r+i} − (x^{r+i} mod g(x)). Then the k × n matrix G_2 with row vectors G_{2,i} is a generator matrix for C.
•Moreover, if H_{2,j} is the length-n vector corresponding to the polynomial H_{2,j}(x) = x^j mod g(x), then the r × n matrix H_2 with row vectors H_{2,j} is a parity-check matrix for C. If the message vector m is encoded by
• m ↦ c = mG_2,
•then the relation between the corresponding polynomials is
• c(x) = x^r m(x) − [x^r m(x) mod g(x)].
•On this basis one can construct the following shift-register encoder for the case of a systematic representation of the generator for a cyclic code:
•Shift-register encoder for systematic representation of cyclic codes. Switch A is closed for the first k ticks and open for the last r ticks; switch B is down for the first k ticks and up for the last r ticks.
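A sketch of the systematic rule c(x) = x^r m(x) − [x^r m(x) mod g(x)] over GF(2), where addition and subtraction coincide (helper names ours; g(x) = 1 + x + x^3, r = 3, gives a (7,4) code):

```python
# Systematic cyclic encoding over GF(2): append the remainder of
# x^r m(x) divided by g(x) so that the result is divisible by g(x).
def mod_gf2(a, g):
    a = a[:]                                  # polynomial long division
    for i in range(len(a) - 1, len(g) - 2, -1):
        if a[i]:
            for j, gj in enumerate(g):
                a[i - len(g) + 1 + j] ^= gj   # subtract g(x) * x^(i-deg g)
    return a[:len(g) - 1]                     # remainder, degree < deg g

g = [1, 1, 0, 1]                              # g(x) = 1 + x + x^3, r = 3
m = [1, 0, 1, 1]
shifted = [0, 0, 0] + m                       # x^3 m(x)
rem = mod_gf2(shifted, g)
c = rem + m                                   # remainder bits, then message
assert mod_gf2(c, g) == [0, 0, 0]             # c(x) is divisible by g(x)
```

This is the same computation a CRC performs, which is why the systematic encoder and the CRC divider share the shift-register structure.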
![Page 124: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/124.jpg)
Hamming codes as cyclic codes
•Definition (Again!) Let r be a positive integer and let H be an r × (2^r − 1) matrix whose columns are the distinct non-zero vectors of V(r,2). Then the code having H as its parity-check matrix is called a binary Hamming code, denoted Ham(r,2).
•It can be shown that binary Hamming codes are equivalent to cyclic codes.
Theorem The binary Hamming code Ham (r,2) is equivalent to a cyclic code.
Definition If p(x) is an irreducible polynomial of degree r such that x is a primitive element of the field F2[x] / p(x), then p(x) is called a primitive polynomial.
Theorem If p(x) is a primitive polynomial over GF(2) of degree r, then the cyclic code p(x) is the code Ham (r,2).
![Page 125: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/125.jpg)
Hamming codes as cyclic codes
•Example The polynomial x^3 + x + 1 is irreducible over GF(2) and x is a primitive element of the field F2[x] / (x^3 + x + 1):
•F2[x] / (x^3 + x + 1) =
•{0, 1, x, x^2, x^3 = x + 1, x^4 = x^2 + x, x^5 = x^2 + x + 1, x^6 = x^2 + 1}
•The parity-check matrix for a cyclic version of Ham(3,2):

H =
[1 1 1 0 1 0 0]
[0 1 1 1 0 1 0]
[1 1 0 1 0 0 1]
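The example can be checked mechanically: the columns of such an H are the successive powers of x in F2[x]/(x^3 + x + 1), and they run through every non-zero vector of V(3,2) exactly once. A short sketch (the helper name powers_of_x and the int-bitmask convention, bit i = coefficient of x^i, are mine):

```python
def powers_of_x(p, r, n):
    """x^0, x^1, ..., x^(n-1) reduced mod p(x) over GF(2); p has degree r."""
    elems, a = [], 1
    for _ in range(n):
        elems.append(a)
        a <<= 1            # multiply by x
        if a >> r & 1:     # degree reached r: reduce by p(x)
            a ^= p
    return elems
```

For p(x) = 1 + x + x^3 the seven powers are all of 1…7 in some order, so placing them as columns yields a parity-check matrix of Ham(3,2).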
![Page 126: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/126.jpg)
PROOF of THEOREM
•The binary Hamming code Ham(r,2) is equivalent to a cyclic code.
•It is known from algebra that if p(x) is an irreducible polynomial of degree r, then the ring F2[x] / p(x) is a field of order 2^r.
•In addition, every finite field has a primitive element. Therefore, there exists an element α of F2[x] / p(x) such that
• F2[x] / p(x) = {0, 1, α, α^2, …, α^(2^r - 2)}.
•Let us identify the element a0 + a1x + … + a_(r-1)x^(r-1) of F2[x] / p(x) with the column vector
• (a0, a1, …, a_(r-1))^T
•and consider the binary r × (2^r - 1) matrix
• H = [1 α α^2 … α^(2^r - 2)].
•Let now C be the binary linear code having H as a parity-check matrix.
•Since the columns of H are all the distinct non-zero vectors of V(r,2), C = Ham(r,2).
•Putting n = 2^r - 1 we get
• C = {f0 f1 … f_(n-1) ∈ V(n,2) | f0 + f1 α + … + f_(n-1) α^(n-1) = 0} (2)
• = {f(x) ∈ Rn | f(α) = 0 in F2[x] / p(x)} (3)
•If f(x) ∈ C and r(x) ∈ Rn, then r(x)f(x) ∈ C because
• r(α)f(α) = r(α) · 0 = 0
•and therefore, by one of the previous theorems, this version of Ham(r,2) is cyclic.
![Page 127: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/127.jpg)
BCH codes and Reed-Solomon codes
•BCH codes and Reed-Solomon codes are among the most important cyclic codes for applications.
•Definition A polynomial p is said to be minimal for an element x over Zq if p(x) = 0 and p is irreducible over Zq.
Definition A cyclic code of codewords of length n over Zq, q = p^r, p a prime, is called a BCH code¹ of distance d if its generator g(x) is the least common multiple of the minimal polynomials for
ω^l, ω^(l+1), …, ω^(l+d-2)
for some l, where ω is a primitive n-th root of unity.
If n = q^m - 1 for some m, then the BCH code is called primitive.
¹BCH stands for Bose, Ray-Chaudhuri and Hocquenghem, who discovered these codes.
Definition A Reed-Solomon code is a primitive BCH code with n = q - 1.
Properties:• Reed-Solomon codes are self-dual.
![Page 128: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/128.jpg)
CONVOLUTION CODES
• Very often it is important to encode an infinite stream or several streams of data - say bits.
• Convolution codes, with simple encoding and decoding, are a quite simple generalization of linear codes and have encodings as cyclic codes.
• An (n,k) convolution code (CC) is defined by a k × n generator matrix, the entries of which are polynomials over F2.
• For example,
• G1 = [x^2 + 1, x^2 + x + 1]
• is the generator matrix for a (2,1) convolution code CC1, and a 2 × 3 polynomial generator matrix G2 defines a (3,2) convolution code CC2.
![Page 129: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/129.jpg)
ENCODING of FINITE POLYNOMIALS
• An (n,k) convolution code with a k × n generator matrix G can be used to encode a
• k-tuple of plain-polynomials (polynomial input information)
• I = (I0(x), I1(x), …, I_(k-1)(x))
• to get an n-tuple of output polynomials
• C = (C0(x), C1(x), …, C_(n-1)(x))
• as follows:
• C = I · G
![Page 130: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/130.jpg)
EXAMPLES
• EXAMPLE 1
• (x^3 + x + 1) · G1 = (x^3 + x + 1) · (x^2 + 1, x^2 + x + 1)
• = (x^5 + x^2 + x + 1, x^5 + x^4 + 1)
• EXAMPLE 2 encodes a pair of message polynomials with the 2 × 3 generator matrix G2 in the same way, C = I · G2.
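EXAMPLE 1 is a carry-less polynomial multiplication over GF(2), which can be reproduced with a few lines of Python (gf2_mul is my name; polynomials stored as ints with bit i = coefficient of x^i):

```python
def gf2_mul(a, b):
    """Product of two GF(2) polynomials stored as ints (carry-less multiplication)."""
    res = 0
    while b:
        if b & 1:
            res ^= a      # add a * x^j for each set bit x^j of b
        a <<= 1
        b >>= 1
    return res
```

Multiplying 1 + x + x^3 by x^2 + 1 and by x^2 + x + 1 reproduces the two components of EXAMPLE 1.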
![Page 131: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/131.jpg)
ENCODING of INFINITE INPUT STREAMS
• The way infinite streams are encoded using convolution codes will be illustrated on the code CC1.
• An input stream I = (I0, I1, I2, …) is mapped into the output stream C = (C00, C10, C01, C11, …) defined by
• C0(x) = C00 + C01x + … = (x^2 + 1) I(x)
• and
• C1(x) = C10 + C11x + … = (x^2 + x + 1) I(x).
• The first multiplication can be done by the first shift register from the next figure; the second multiplication can be performed by the second shift register on the next slide, and it holds
• C0i = Ii + I(i-2), C1i = Ii + I(i-1) + I(i-2).
• That is, the output streams C0 and C1 are obtained by convolving the input stream with the polynomials of G1.
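The two convolutions can be sketched as a single shift-register loop (a minimal sketch, assuming a finite input stream and a register that starts empty; conv_encode is my name):

```python
def conv_encode(bits):
    """Rate-1/2 encoder for CC1: output taps 1 + x^2 and 1 + x + x^2."""
    s1 = s2 = 0                 # delay elements holding I_(i-1) and I_(i-2)
    out = []
    for b in bits:
        out.append((b ^ s2, b ^ s1 ^ s2))   # (C0_i, C1_i)
        s1, s2 = b, s1
    return out
```

Feeding I(x) = 1 + x^2 + x^3 yields the coefficient streams of (x^2 + 1)I(x) and (x^2 + x + 1)I(x), truncated to the input length.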
![Page 132: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/132.jpg)
ENCODING
The first shift register (taps 1, x, x^2)
will multiply the input stream by x^2 + 1, and the second shift register (taps 1, x, x^2)
will multiply the input stream by x^2 + x + 1.
![Page 133: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/133.jpg)
ENCODING and DECODING
[figure: shift register with taps 1, x, x^2; input I; output streams C00, C01, C02 and C10, C11, C12]
The following shift-register will therefore be an encoder for the code CC1.
For decoding of convolution codes the so-called Viterbi algorithm is used.
![Page 134: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/134.jpg)
Cyclic Linear Codes
Rong-Jaye Chen
![Page 135: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/135.jpg)
OUTLINE
[1] Polynomials and words
[2] Introduction to cyclic codes
[3] Generating and parity check matrices for cyclic codes
[4] Finding cyclic codes
[5] Dual cyclic codes
![Page 136: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/136.jpg)
Cyclic Linear Codes
• [1] Polynomials and words
– 1. Polynomials over K:
K[x] = {a0 + a1x + a2x^2 + a3x^3 + … + anx^n : a0, …, an ∈ K}; for an ≠ 0, deg(f(x)) = n
– 2. Eg 4.1.1 adds and multiplies three polynomials f(x), g(x), h(x) over K = GF(2): the sums f(x) + g(x) and f(x) + h(x) cancel matching terms mod 2, and the product f(x)g(x) is expanded and its coefficients reduced mod 2.
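Since K = GF(2), polynomial addition is just coefficientwise XOR; with polynomials stored as ints (bit i = coefficient of x^i, my convention), Eg-4.1.1-style sums reduce to one operator:

```python
def gf2_add(a, b):
    """Sum of two GF(2) polynomials: addition mod 2 is coefficientwise XOR."""
    return a ^ b

def deg(f):
    """Degree of a nonzero GF(2) polynomial stored as an int."""
    return f.bit_length() - 1
```

For instance (1 + x^3 + x^4) + (x + x^2 + x^3) = 1 + x + x^2 + x^4, and any polynomial added to itself vanishes.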
![Page 137: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/137.jpg)
Cyclic Linear Codes
– 3. [Algorithm 4.1.8] Division algorithm
Let f(x) and h(x) be in K[x] with h(x) ≠ 0. Then there exist unique polynomials q(x) and r(x) in K[x] such that
f(x) = q(x)h(x) + r(x),
with r(x) = 0 or deg(r(x)) < deg(h(x)).
– 4. Eg. 4.1.9
f(x) = x + x^2 + x^6 + x^8, h(x) = 1 + x + x^2 + x^4
q(x) = 1 + x + x^4, r(x) = 1 + x + x^2 + x^3
f(x) = h(x)(1 + x + x^4) + (1 + x + x^2 + x^3)
deg(r(x)) = 3 < deg(h(x)) = 4
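The division algorithm itself is a short loop over GF(2) (gf2_divmod is my name; polynomials as ints, bit i = coefficient of x^i):

```python
def gf2_divmod(f, h):
    """Quotient and remainder of f(x) / h(x) over GF(2); h must be nonzero."""
    q = 0
    while f.bit_length() >= h.bit_length():
        shift = f.bit_length() - h.bit_length()
        q |= 1 << shift        # next quotient term x^shift
        f ^= h << shift        # cancel the leading term of f
    return q, f
```

Dividing 1 + x^7 by 1 + x + x^3 returns the cofactor x^4 + x^2 + x + 1 with zero remainder, matching the factorization used later for the cyclic Hamming code.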
![Page 138: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/138.jpg)
– 5. Code represented by a set of polynomials
• A code C of length n can be represented as a set of polynomials over K of degree at most n-1:
– c = a0 a1 a2 … a(n-1) of length n in K^n
– c(x) = a0 + a1x + a2x^2 + … + a(n-1)x^(n-1) over K
– 6. Eg 4.1.12

Codeword c | Polynomial c(x)
0000 | 0
1010 | 1 + x^2
0101 | x + x^3
1111 | 1 + x + x^2 + x^3
![Page 139: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/139.jpg)
Cyclic Linear Codes
– 7. f(x) and p(x) are equivalent modulo h(x):
f(x) mod h(x) = r(x) = p(x) mod h(x), i.e. f(x) ≡ p(x) (mod h(x))
– 8. Eg 4.1.15
f(x) = 1 + x^4 + x^9 + x^11, h(x) = 1 + x^5, p(x) = 1 + x^6
f(x) mod h(x) = 1 + x = p(x) mod h(x)
=> f(x) and p(x) are equivalent mod h(x)!!
– 9. Eg 4.1.16
f(x) = 1 + x^2 + x^6 + x^9 + x^11, h(x) = 1 + x^2 + x^5, p(x) = x^2 + x^8
f(x) mod h(x) = x + x^4, p(x) mod h(x) = 1 + x^3
=> f(x) and p(x) are NOT equivalent mod h(x)!!
![Page 140: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/140.jpg)
Cyclic Linear Codes
– 10. Lemma 4.1.17
If f(x) ≡ g(x) (mod h(x)), then
f(x) + p(x) ≡ g(x) + p(x) (mod h(x))
and
f(x) · p(x) ≡ g(x) · p(x) (mod h(x))
– 11. Eg. 4.1.18
f(x) = 1 + x^7, g(x) = 1 + x, h(x) = 1 + x^6, p(x) = 1 + x^2,
so f(x) ≡ g(x) (mod h(x)); then
f(x) + p(x) and g(x) + p(x): ((1 + x^7) + (1 + x^2)) mod h(x) = ((1 + x) + (1 + x^2)) mod h(x)
f(x) · p(x) and g(x) · p(x): ((1 + x^7)(1 + x^2)) mod h(x) = 1 + x + x^2 + x^3 = ((1 + x)(1 + x^2)) mod h(x)
![Page 141: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/141.jpg)
• [2] Introduction to cyclic codes
– 1. cyclic shift π(v)
• v = 010110, π(v) = 001011

v    | 10110 | 111000 | 0000 | 1011
π(v) | 01011 | 011100 | 0000 | 1101

– 2. cyclic code
• A code C is a cyclic code (or linear cyclic code) if (1) the cyclic shift of each codeword is also a codeword and (2) C is a linear code
• C1 = {000, 110, 101, 011} is a cyclic code
• C2 = {000, 100, 011, 111} is NOT a cyclic code
– v = 100, π(v) = 010 is not in C2
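The cyclic shift π can be written in one line on bit strings (cyc_shift is my name):

```python
def cyc_shift(v):
    """One cyclic shift pi(v): the last bit wraps around to the front."""
    return v[-1] + v[:-1]
```

It reproduces the table above, and the counterexample for C2: cyc_shift('100') gives '010', which is not in C2.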
![Page 142: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/142.jpg)
Cyclic Linear Codes
– 3. The cyclic shift π is a linear transformation:
Lemma 4.2.3 π(v + w) = π(v) + π(w), and π(av) = aπ(v), a ∈ K = {0,1}.
Thus, to show that a linear code C is cyclic it is enough to show that π(v) ∈ C for each word v in a basis for C.
• If S = {v, π(v), π^2(v), …, π^(n-1)(v)} and C = <S>, then v is a generator of the linear cyclic code C.
![Page 143: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/143.jpg)
Cyclic Linear Codes
– 4. Cyclic codes in terms of polynomials: π(v) ↔ xv(x) mod (1 + x^n)
Eg 4.2.11 v = 1101000, n = 7, v(x) = 1 + x + x^3

word    | polynomial (mod 1 + x^7)
0110100 | x v(x) = x + x^2 + x^4
0011010 | x^2 v(x) = x^2 + x^3 + x^5
0001101 | x^3 v(x) = x^3 + x^4 + x^6
1000110 | x^4 v(x) = x^4 + x^5 + x^7 ≡ 1 + x^4 + x^5
0100011 | x^5 v(x) = x^5 + x^6 + x^8 ≡ x + x^5 + x^6
1010001 | x^6 v(x) = x^6 + x^7 + x^9 ≡ 1 + x^2 + x^6
![Page 144: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/144.jpg)
Cyclic Linear Codes
– 5. Lemma 4.2.12 Let C be a cyclic code and let v ∈ C. Then for any polynomial a(x), c(x) = a(x)v(x) mod (1 + x^n) is a codeword in C.
– 6. Theorem 4.2.13 C: a cyclic code of length n; g(x): the generator polynomial, which is the unique nonzero polynomial of minimum degree in C. If degree(g(x)) = n - k, then:
• 1. C has dimension k
• 2. g(x), xg(x), x^2 g(x), …, x^(k-1) g(x) are a basis for C
• 3. If c(x) ∈ C, then c(x) = a(x)g(x) for some polynomial a(x) with degree(a(x)) < k
![Page 145: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/145.jpg)
Cyclic Linear Codes
– 7. Eg 4.2.16
the smallest linear cyclic code C of length 6 containing g(x) = 1 + x^3 ↔ 100100 is
{000000, 100100, 010010, 001001, 110110,
101101, 011011, 111111}
– 8. Theorem 4.2.17
g(x) is the generator polynomial for a linear cyclic code of length n if and only if g(x) divides 1 + x^n (so 1 + x^n = g(x)h(x)).
– 9. Corollary 4.2.18
The generator polynomial g(x) for the smallest cyclic code of length n containing the word v (polynomial v(x)) is g(x) = gcd(v(x), 1 + x^n).
– 10. Eg 4.2.19
n = 8, v = 11011000, so v(x) = 1 + x + x^3 + x^4
g(x) = gcd(1 + x + x^3 + x^4, 1 + x^8) = 1 + x^2
Thus g(x) = 1 + x^2 generates the smallest cyclic linear code containing v(x), which has dimension 6.
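Corollary 4.2.18 can be checked with a Euclidean gcd on GF(2) polynomials (gf2_gcd is my name; polynomials as ints, bit i = coefficient of x^i):

```python
def gf2_gcd(a, b):
    """Greatest common divisor of two GF(2) polynomials (Euclidean algorithm)."""
    while b:
        while a.bit_length() >= b.bit_length():   # a := a mod b
            a ^= b << (a.bit_length() - b.bit_length())
        a, b = b, a
    return a
```

gcd(1 + x + x^3 + x^4, 1 + x^8) evaluates to 1 + x^2, reproducing Eg 4.2.19.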
![Page 146: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/146.jpg)
Cyclic Linear Codes
• [3] Generating and parity-check matrices for cyclic codes
– 1. It is effective to find a generating matrix.
• The simplest generator matrix (Theorem 4.2.13):

G =
[ g(x)         ]
[ xg(x)        ]
[ :            ]
[ x^(k-1) g(x) ]

n: length of the code, k = n - deg(g(x))
![Page 147: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/147.jpg)
Cyclic Linear Codes
2. Eg 4.3.2
• C: the linear cyclic code of length n = 7 with generator polynomial g(x) = 1 + x + x^3, deg(g(x)) = 3, => k = 4

g(x)     = 1 + x + x^3
xg(x)    = x + x^2 + x^4
x^2 g(x) = x^2 + x^3 + x^5
x^3 g(x) = x^3 + x^4 + x^6

G =
[1 1 0 1 0 0 0]
[0 1 1 0 1 0 0]
[0 0 1 1 0 1 0]
[0 0 0 1 1 0 1]
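The rows of G are just shifted copies of the coefficients of g(x), so the matrix can be regenerated mechanically (cyclic_generator_matrix is my name; g as an int, bit i = coefficient of x^i):

```python
def cyclic_generator_matrix(g, n):
    """Rows x^i g(x), i = 0..k-1, as lists of n coefficients (k = n - deg g)."""
    k = n - (g.bit_length() - 1)
    return [[(g << i) >> j & 1 for j in range(n)] for i in range(k)]
```

For g(x) = 1 + x + x^3 and n = 7 this yields the 4 × 7 matrix of Eg 4.3.2.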
![Page 148: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/148.jpg)
Cyclic Linear Codes
3. Efficient encoding for cyclic codes
Let C be a cyclic code of length n and dimension k (so the generator polynomial g(x) has degree n - k).
message polynomial a(x) = a0 + a1x + … + a(k-1)x^(k-1)
(representing the source message (a0, a1, …, a(k-1)))
Encoding algorithm: c(x) = a(x)g(x)
- more time-efficient compared with that of a general linear code (c = aG)
![Page 149: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/149.jpg)
Cyclic Linear Codes
– 4. Parity check matrix
• H: wH = 0 if and only if w is a codeword
• Syndrome polynomial s(x)
– c(x): a codeword, e(x): error polynomial, and w(x) = c(x) + e(x)
– s(x) = w(x) mod g(x) = e(x) mod g(x), because c(x) = a(x)g(x)
– H: the i-th row ri is the word of length n-k with ri(x) = x^i mod g(x)
– wH = (c + e)H => c(x) mod g(x) + e(x) mod g(x) = s(x)
![Page 150: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/150.jpg)
Cyclic Linear Codes
– 5. Eg 4.3.7
• n = 7, g(x) = 1 + x + x^3, n - k = 3

r0(x) = 1 mod g(x)   = 1           -> 100
r1(x) = x mod g(x)   = x           -> 010
r2(x) = x^2 mod g(x) = x^2         -> 001
r3(x) = x^3 mod g(x) = 1 + x       -> 110
r4(x) = x^4 mod g(x) = x + x^2     -> 011
r5(x) = x^5 mod g(x) = 1 + x + x^2 -> 111
r6(x) = x^6 mod g(x) = 1 + x^2     -> 101

H =
[1 0 0]
[0 1 0]
[0 0 1]
[1 1 0]
[0 1 1]
[1 1 1]
[1 0 1]
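Each row ri of H is x^i mod g(x), so the whole table can be regenerated (cyclic_parity_rows is my name; g as an int, bit i = coefficient of x^i):

```python
def cyclic_parity_rows(g, n):
    """Row i of H is x^i mod g(x), written as n-k = deg(g) coefficients."""
    r = g.bit_length() - 1
    rows = []
    for i in range(n):
        a = 1 << i
        while a.bit_length() >= g.bit_length():   # reduce mod g(x)
            a ^= g << (a.bit_length() - g.bit_length())
        rows.append([a >> j & 1 for j in range(r)])
    return rows
```

For g(x) = 1 + x + x^3 and n = 7 this reproduces the seven rows of Eg 4.3.7.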
![Page 151: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/151.jpg)
Cyclic Linear Codes
• [4] Finding cyclic codes
– 1. To construct a linear cyclic code of length n:
• find a factor g(x) of 1 + x^n with deg(g(x)) = n - k
• Irreducible polynomials:
– f(x) in K[x], deg(f(x)) >= 1
– there are no a(x), b(x) such that f(x) = a(x)b(x), deg(a(x)) >= 1, deg(b(x)) >= 1
• For n <= 31, the factorization of 1 + x^n is tabulated (see Appendix B)
• Improper cyclic codes: K^n and {0}
![Page 152: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/152.jpg)
Cyclic Linear Codes
– 2. Theorem 4.4.3
If n = 2^r s, then 1 + x^n = (1 + x^s)^(2^r).
– 3. Coro 4.4.4
Let n = 2^r s, where s is odd, and let 1 + x^s be the product of z irreducible polynomials. Then there are (1 + 2^r)^z - 2 proper linear cyclic codes of length n.
![Page 153: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/153.jpg)
Cyclic Linear Codes
– 4. Idempotent polynomials I(x)
• I(x) = I(x)^2 mod (1 + x^n), for odd n
• Find a "basic" set of the ci(x):
Ci = { s = 2^j i (mod n) | j = 0, 1, …, r }, where 2^r mod n = 1
ci(x) = Σ_{j ∈ Ci} x^j
I(x) = Σ_i ai ci(x), ai ∈ {0,1}
![Page 154: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/154.jpg)
Cyclic Linear Codes
– 5. Eg 4.4.12
For n = 7:
C0 = {0}, so c0(x) = 1
C1 = {1, 2, 4} = C2 = C4, so c1(x) = x + x^2 + x^4
C3 = {3, 5, 6} = C5 = C6, so c3(x) = x^3 + x^5 + x^6
I(x) = a0 c0(x) + a1 c1(x) + a3 c3(x), ai ∈ {0,1}, I(x) ≠ 0
– 6. Theorem 4.4.13 Every cyclic code contains a unique idempotent polynomial which generates the code. (?)
![Page 155: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/155.jpg)
Cyclic Linear Codes
– 7. Eg. 4.4.14 Find all cyclic codes of length 9:
C0 = {0}, C1 = {1, 2, 4, 8, 7, 5}, C3 = {3, 6}
c0(x) = 1, c1(x) = x + x^2 + x^4 + x^5 + x^7 + x^8, c3(x) = x^3 + x^6
==> I(x) = a0 c0(x) + a1 c1(x) + a3 c3(x)

Idempotent polynomial I(x)          | Generator polynomial g(x) = gcd(I(x), 1 + x^9)
1                                   | 1
x + x^2 + x^4 + x^5 + x^7 + x^8     | 1 + x + x^3 + x^4 + x^6 + x^7
x^3 + x^6                           | 1 + x^3
1 + x + x^2 + x^4 + x^5 + x^7 + x^8 | 1 + x + x^2
:                                   | :
![Page 156: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/156.jpg)
Cyclic Linear Codes
• [5] Dual cyclic codes
– 1. The dual code of a cyclic code is also cyclic
– 2. Lemma 4.5.1
a ↔ a(x), b ↔ b(x), and b' ↔ b'(x) = x^n b(x^(-1)) mod (1 + x^n);
then a(x)b(x) mod (1 + x^n) = 0 iff π^k(a) · b' = 0 for k = 0, 1, …, n-1
– 3. Theorem 4.5.2
C: a linear cyclic code of length n and dimension k with generator g(x).
If 1 + x^n = g(x)h(x), then C⊥ is a linear cyclic code of dimension n - k with generator x^k h(x^(-1)).
![Page 157: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/157.jpg)
Cyclic Linear Codes
– 4. Eg. 4.5.3 g(x) = 1 + x + x^3, n = 7, k = 7 - 3 = 4
h(x) = 1 + x + x^2 + x^4
the generator for C⊥ is
g⊥(x) = x^4 h(x^(-1)) = x^4 (1 + x^(-1) + x^(-2) + x^(-4)) = 1 + x^2 + x^3 + x^4
– 5. Eg. 4.5.4 g(x) = 1 + x + x^2, n = 6, k = 6 - 2 = 4
h(x) = 1 + x + x^3 + x^4
the generator for C⊥ is g⊥(x) = x^4 h(x^(-1)) = 1 + x + x^3 + x^4
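The map h(x) -> x^k h(x^(-1)) just reverses the coefficient vector read up to degree k, so both examples can be checked mechanically (reciprocal is my name; polynomials as ints, bit i = coefficient of x^i):

```python
def reciprocal(p, k):
    """x^k * p(1/x): reverse the coefficients of p, read up to degree k."""
    return int(format(p, 'b').zfill(k + 1)[::-1], 2)
```

Eg 4.5.3 maps 1 + x + x^2 + x^4 to 1 + x^2 + x^3 + x^4, and in Eg 4.5.4 the palindromic h(x) = 1 + x + x^3 + x^4 is its own reciprocal.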
![Page 158: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/158.jpg)
Modulation, Demodulation and Coding Course
Period 3 - 2005
Sorour Falahati
Lecture 8
![Page 159: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/159.jpg)
Last time we talked about:
Coherent and non-coherent detections
Evaluating the average probability of symbol error for different bandpass modulation schemes
Comparing different modulation schemes based on their error performances.
![Page 160: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/160.jpg)
EE576 Dr. Kousa Linear Block Codes 1602005-02-09 Lecture 8
Today, we are going to talk about:
• Channel coding
• Linear block codes
– The error detection and correction capability
– Encoding and decoding
– Hamming codes
– Cyclic codes
![Page 161: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/161.jpg)
What is channel coding?
• Channel coding: transforming signals to improve communications performance by increasing the robustness against channel impairments (noise, interference, fading, …)
– Waveform coding: transforming waveforms to better waveforms
– Structured sequences: transforming data sequences into better sequences, having structured redundancy.
• "Better" in the sense of making the decision process less subject to errors.
![Page 162: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/162.jpg)
Error control techniques• Automatic Repeat reQuest (ARQ)
– Full-duplex connection, error detection codes
– The receiver sends feedback to the transmitter indicating whether an error was detected in the received packet (Not-Acknowledgement (NACK)) or not (Acknowledgement (ACK)).
– The transmitter retransmits the previously sent packet if it receives NACK.
• Forward Error Correction (FEC)– Simplex connection, error correction codes– The receiver tries to correct some errors
• Hybrid ARQ (ARQ+FEC)– Full-duplex, error detection and correction codes
![Page 163: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/163.jpg)
Why use error correction coding?
– Error performance vs. bandwidth
– Power vs. bandwidth
– Data rate vs. bandwidth
– Capacity vs. bandwidth
[figure: PB versus Eb/N0 (dB) for coded and uncoded systems]
Coding gain: for a given bit-error probability, the reduction in Eb/N0 that can be realized through the use of the code:
G [dB] = (Eb/N0)u [dB] - (Eb/N0)c [dB]
![Page 164: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/164.jpg)
Channel models
• Discrete memoryless channels– Discrete input, discrete output
• Binary Symmetric channels– Binary input, binary output
• Gaussian channels– Discrete input, continuous output
![Page 165: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/165.jpg)
Some definitions – cont’d
• Binary field:
– The set {0,1}, under modulo-2 addition and multiplication, forms a field.
– The binary field is also called the Galois field, GF(2).

Addition: 0+0=0, 0+1=1, 1+0=1, 1+1=0
Multiplication: 0·0=0, 0·1=0, 1·0=0, 1·1=1
Linear block codes
![Page 166: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/166.jpg)
Some definitions – cont'd
• Fields:
– Let F be a set of objects on which two operations '+' and '·' are defined.
– F is said to be a field if and only if:
1. F forms a commutative group under the + operation; the additive identity element is labeled "0".
2. F - {0} forms a commutative group under the · operation; the multiplicative identity element is labeled "1".
3. The operations '+' and '·' distribute: a·(b + c) = (a·b) + (a·c).
![Page 167: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/167.jpg)
Some definitions – cont'd
• Vector space:
– Let V be a set of vectors and F a field of elements called scalars. V forms a vector space over F if:
1. Commutative: u + v = v + u ∈ V
2. Closure: ∀a ∈ F, ∀v ∈ V: a·v ∈ V
3. Distributive: (a + b)·v = a·v + b·v and a·(u + v) = a·u + a·v
4. Associative: (a·b)·v = a·(b·v)
5. ∀v ∈ V, 1·v = v
![Page 168: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/168.jpg)
Some definitions – cont'd
– Examples of vector spaces
• The set of binary n-tuples, denoted by Vn; e.g.
V4 = {(0000), (0001), (0010), (0011), (0100), (0101), (0110), (0111), (1000), (1001), (1010), (1011), (1100), (1101), (1110), (1111)}
• Vector subspace:
– A subset S of the vector space Vn is called a subspace if:
• the all-zero vector is in S;
• the sum of any two vectors in S is also in S.
• Example: {(0000), (0101), (1010), (1111)} is a subspace of V4.
![Page 169: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/169.jpg)
Some definitions – cont'd
• Spanning set:
– A collection of vectors G = {v1, v2, …, vn}, the linear combinations of which include all vectors in a vector space V, is said to be a spanning set for V, or to span V.
• Example: {(1000), (0110), (1100), (0011), (1001)} spans V4.
• Bases:
– A spanning set for V that has minimal cardinality is called a basis for V.
• Cardinality of a set is the number of objects in the set.
• Example: {(1000), (0100), (0010), (0001)} is a basis for V4.
![Page 170: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/170.jpg)
Linear block codes
• Linear block code (n,k)
– A set C ⊆ Vn with cardinality 2^k is called a linear block code if, and only if, it is a subspace of the vector space Vn.
• Members of C are called codewords.
• The all-zero codeword is a codeword.
• Any linear combination of codewords is a codeword.
![Page 171: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/171.jpg)
Linear block codes – cont’d
[figure: mapping from the message space Vk onto the code C, a subspace of Vn, via the bases of C]
![Page 172: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/172.jpg)
Linear block codes – cont'd
• The information bit stream is chopped into blocks of k bits.
• Each block is encoded to a larger block of n bits.
• The coded bits are modulated and sent over the channel.
• The reverse procedure is done at the receiver.
[figure: Data block (k bits) -> Channel encoder -> Codeword (n bits)]
Code rate: Rc = k/n; there are n - k redundant bits.
![Page 173: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/173.jpg)
Linear block codes – cont'd
• The Hamming weight of a vector U, denoted by w(U), is the number of non-zero elements in U.
• The Hamming distance between two vectors U and V is the number of elements in which they differ: d(U,V) = w(U + V).
• The minimum distance of a block code is
d_min = min_{i≠j} d(Ui, Uj) = min_i w(Ui)
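These definitions translate directly into code (the names weight, distance and min_distance are mine; codewords as lists of bits):

```python
def weight(u):
    """Hamming weight w(U): number of non-zero elements."""
    return sum(u)

def distance(u, v):
    """Hamming distance d(U,V) = w(U + V)."""
    return sum(a ^ b for a, b in zip(u, v))

def min_distance(code):
    """For a linear code, d_min equals the minimum weight of a nonzero codeword."""
    return min(weight(c) for c in code if any(c))
```

For the small cyclic code {000, 110, 101, 011} from the earlier slides, min_distance returns 2.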
![Page 174: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/174.jpg)
Linear block codes – cont'd
• Error detection capability is given by e = d_min - 1.
• Error correcting capability t of a code, which is defined as the maximum number of guaranteed correctable errors per codeword, is t = ⌊(d_min - 1)/2⌋.
![Page 175: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/175.jpg)
Linear block codes – cont'd
• For memoryless channels, the probability that the decoder commits an erroneous decoding is bounded by
P_M <= Σ_{j=t+1}^{n} C(n,j) p^j (1-p)^(n-j)
– p is the transition probability, or bit error probability, over the channel.
• The decoded bit error probability is approximately
P_B ≈ (1/n) Σ_{j=t+1}^{n} j C(n,j) p^j (1-p)^(n-j)
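The bound on P_M is a binomial tail sum, so it can be transcribed directly (block_error_bound is my name):

```python
from math import comb

def block_error_bound(n, t, p):
    """Upper bound on the decoder's block error probability for a
    t-error-correcting code of length n on a memoryless channel."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1))
```

For a single-error-correcting (t = 1) code of length 7 at p = 0.01 the bound is about 2·10^-3, and it grows with p as expected.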
![Page 176: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/176.jpg)
Linear block codes – cont'd
• Discrete, memoryless, symmetric channel model
[figure: binary symmetric channel; Tx bits 0/1 are received correctly with probability 1-p and cross over with probability p]
– Note that for coded systems, the coded bits are modulated and transmitted over the channel. For example, for M-PSK modulation on AWGN channels (M > 2):
p ≈ (2 / log2 M) Q( sqrt(2 Ec log2 M / N0) sin(π/M) )
where Ec is the energy per coded bit, given by Ec = Rc Eb.
EE576 Dr. Kousa Linear Block Codes 1772005-02-09 Lecture 8
Linear block codes –cont’d
– A matrix G is constructed by taking as its rows the vectors on the basis, .
nVkV
C
Bases of C
mapping
},,,{ 21 kVVV
knkk
n
n
k vvv
vvv
vvv
21
22221
112111
V
V
G
![Page 178: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/178.jpg)
Linear block codes – cont’d
• Encoding in (n,k) block code: U = mG

(u1, u2, …, un) = (m1, m2, …, mk) · [V1; V2; …; Vk]
                = m1·V1 + m2·V2 + … + mk·Vk

– The rows of G are linearly independent.
![Page 179: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/179.jpg)
Linear block codes – cont’d
• Example: Block code (6,3)

G =
[ V1 ]   [1 1 0 1 0 0]
[ V2 ] = [0 1 1 0 1 0]
[ V3 ]   [1 0 1 0 0 1]

Message vector | Codeword
000 | 000000
100 | 110100
010 | 011010
110 | 101110
001 | 101001
101 | 011101
011 | 110011
111 | 000111
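The codeword table can be regenerated by the rule U = mG: XOR together the rows of G selected by the message bits (a minimal sketch, assuming the (6,3) generator matrix G = [110100; 011010; 101001] of the example; the names encode and G63 are mine):

```python
def encode(m, G):
    """Codeword U = m G over GF(2)."""
    u = [0] * len(G[0])
    for bit, row in zip(m, G):
        if bit:
            u = [a ^ b for a, b in zip(u, row)]
    return u

G63 = [[1, 1, 0, 1, 0, 0],
       [0, 1, 1, 0, 1, 0],
       [1, 0, 1, 0, 0, 1]]
```

For instance the message 110 maps to the codeword 101110.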
![Page 180: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/180.jpg)
Linear block codes – cont'd
• Systematic block code (n,k)
– For a systematic code, the first (or last) k elements in the codeword are information bits.

G = [P | Ik]
Ik: k × k identity matrix
P: k × (n-k) matrix

U = (u1, u2, …, un) = (p1, p2, …, p(n-k), m1, m2, …, mk)
(parity bits, then message bits)
![Page 181: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/181.jpg)
Linear block codes – cont’d
• For any linear code we can find an (n-k) × n matrix H whose rows are orthogonal to the rows of G:
GH^T = 0
• H is called the parity check matrix and its rows are linearly independent.
• For systematic linear block codes:
H = [I(n-k) | P^T]
![Page 182: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/182.jpg)
Linear block codes – cont’d
• Syndrome testing:
– S is the syndrome of r, corresponding to the error pattern e.

(Block diagram: Data source → Format → Channel encoding → Modulation → channel → Demodulation → Detection → Channel decoding → Format → Data sink; U transmitted, r received, message m, estimate m̂.)

r = (r1, r2, …, rn)  received codeword or vector
e = (e1, e2, …, en)  error pattern or vector

r = U + e
S = rH^T = eH^T
![Page 183: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/183.jpg)
Linear block codes – cont’d
• Standard array
1. For row i, i = 2, 3, …, 2^(n−k), find a vector in Vn of minimum weight which is not already listed in the array.
2. Call this pattern e_i and form the i-th row as the corresponding coset:

U1 = 0 (zero codeword)   U2              …   U_2^k
e2                       e2 + U2         …   e2 + U_2^k
⋮                        ⋮                    ⋮
e_2^(n−k)                e_2^(n−k) + U2  …   e_2^(n−k) + U_2^k

(The first row holds the codewords, the first column holds the coset leaders, and each row is a coset.)
![Page 184: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/184.jpg)
Linear block codes – cont’d
• Standard array and syndrome table decoding
1. Calculate S = rH^T.
2. Find the coset leader, ê = e_i, corresponding to S.
3. Calculate Û = r + ê and the corresponding m̂.
– Note that Û = r + ê = (U + e) + ê = U + (e + ê)
• If ê = e, the error is corrected.
• If ê ≠ e, an undetectable decoding error occurs.
![Page 185: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/185.jpg)
Linear block codes – cont’d
• Example: Standard array for the (6,3) code (codewords in the first row, coset leaders in the first column, each row a coset):

000000 110100 011010 101110 101001 011101 110011 000111
000001 110101 011011 101111 101000 011100 110010 000110
000010 110110 011000 101100 101011 011111 110001 000101
000100 110000 011110 101010 101101 011001 110111 000011
001000 111100 010010 100110 100001 010101 111011 001111
010000 100100 001010 111110 111001 001101 100011 010111
100000 010100 111010 001110 001001 111101 010011 100111
010001 100101 001011 111111 111000 001100 100010 010110
![Page 186: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/186.jpg)
Linear block codes – cont’d

Error pattern   Syndrome
010001          111
100000          100
010000          010
001000          001
000100          110
000010          011
000001          101
000000          000

U = (101110) transmitted; r = (001110) received.
The syndrome of r is computed:
S = rH^T = (100)
The error pattern corresponding to this syndrome:
ê = (100000)
The corrected vector is estimated:
Û = r + ê = (001110) + (100000) = (101110)
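The worked example above can be checked directly. This is a minimal sketch (the `syndrome` helper is illustrative), with H = [I3 | P^T] for the (6,3) code written out explicitly:

```python
H = [[1, 0, 0, 1, 0, 1],
     [0, 1, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 1]]

def syndrome(r):
    """S = rH^T (mod 2): one parity check per row of H."""
    return [sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H]

r = [0, 0, 1, 1, 1, 0]        # received vector
S = syndrome(r)               # -> [1, 0, 0]
e_hat = [1, 0, 0, 0, 0, 0]    # coset leader listed for syndrome (100)
U_hat = [ri ^ ei for ri, ei in zip(r, e_hat)]
print(S, U_hat)               # [1, 0, 0] [1, 0, 1, 1, 1, 0]
```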
![Page 187: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/187.jpg)
Hamming codes
• Hamming codes
– Hamming codes are a subclass of linear block codes and belong to the category of perfect codes.
– Hamming codes are expressed as a function of a single integer m ≥ 2:

Code length:                  n = 2^m − 1
Number of information bits:   k = 2^m − m − 1
Number of parity bits:        n − k = m
Error correction capability:  t = 1

– The columns of the parity-check matrix, H, consist of all non-zero binary m-tuples.
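The parameter formulas above translate directly into code; a small sketch (the function name is illustrative):

```python
def hamming_params(m):
    """Hamming-code parameters as a function of the single integer m >= 2."""
    n = 2**m - 1           # code length
    k = 2**m - 1 - m       # number of information bits
    return n, k, n - k, 1  # (n, k, parity bits, correctable errors t)

print(hamming_params(3))   # (7, 4, 3, 1) -- the (7,4) code of the next slide
```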
![Page 188: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/188.jpg)
Hamming codes
• Example: Systematic Hamming code (7,4)

H = [P^T | I3] =
| 1 0 1 1 | 1 0 0 |
| 1 1 0 1 | 0 1 0 |
| 1 1 1 0 | 0 0 1 |

G = [I4 | P] =
| 1 0 0 0 | 1 1 1 |
| 0 1 0 0 | 0 1 1 |
| 0 0 1 0 | 1 0 1 |
| 0 0 0 1 | 1 1 0 |
![Page 189: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/189.jpg)
Cyclic block codes
• Cyclic codes are a subclass of linear block codes.
• Encoding and syndrome calculation are easily performed using feedback shift-registers.
– Hence, relatively long block codes can be implemented with a reasonable complexity.
• BCH and Reed–Solomon codes are cyclic codes.
![Page 190: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/190.jpg)
Cyclic block codes
• A linear (n,k) code is called a Cyclic code if all cyclic shifts of a codeword are also codewords.

U = (u0, u1, u2, …, u(n−1))
U^(i) = (u(n−i), u(n−i+1), …, u(n−1), u0, u1, …, u(n−i−1))    “i” cyclic shifts of U

– Example:
U = (1101)
U^(1) = (1110), U^(2) = (0111), U^(3) = (1011), U^(4) = (1101) = U
![Page 191: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/191.jpg)
Cyclic block codes
• The algebraic structure of Cyclic codes implies expressing codewords in polynomial form:

U(X) = u0 + u1X + u2X^2 + … + u(n−1)X^(n−1)    (degree n−1)

• Relationship between a codeword and its cyclic shifts:

XU(X) = u0X + u1X^2 + … + u(n−2)X^(n−1) + u(n−1)X^n
      = u(n−1) + u0X + u1X^2 + … + u(n−2)X^(n−1) + u(n−1)(X^n + 1)
      = U^(1)(X) + u(n−1)(X^n + 1)

– Hence:

U^(1)(X) = X U(X) modulo (X^n + 1)

By extension:

U^(i)(X) = X^i U(X) modulo (X^n + 1)
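The identity U^(i)(X) = X^i U(X) mod (X^n + 1) can be checked against a direct bit rotation. A small sketch over GF(2), with polynomials stored as coefficient lists u0…u(n−1) (helper names are illustrative):

```python
def shift_poly(u, i):
    """Multiply U(X) by X^i and reduce mod (X^n + 1) over GF(2)."""
    n = len(u)
    out = [0] * n
    for power, coeff in enumerate(u):
        out[(power + i) % n] ^= coeff   # X^n wraps around to X^0
    return out

def shift_rotate(u, i):
    """i right cyclic shifts of the codeword bits."""
    n = len(u)
    return [u[(j - i) % n] for j in range(n)]

u = [1, 1, 0, 1]                        # U = (1101) from the example
print(shift_poly(u, 1), shift_rotate(u, 1))   # [1, 1, 1, 0] [1, 1, 1, 0]
```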
![Page 192: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/192.jpg)
Cyclic block codes
• Basic properties of Cyclic codes:
– Let C be a binary (n,k) linear cyclic code
1. Within the set of code polynomials in C, there is a unique monic polynomial g(X) of minimal degree r < n. g(X) is called the generator polynomial:

g(X) = g0 + g1X + … + grX^r

2. Every code polynomial U(X) in C can be expressed uniquely as

U(X) = m(X)g(X)

3. The generator polynomial g(X) is a factor of X^n + 1.
![Page 193: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/193.jpg)
Cyclic block codes
4. The orthogonality of G and H in polynomial form is expressed as g(X)h(X) = X^n + 1. This means h(X) is also a factor of X^n + 1.
5. The row i, i = 1, …, k, of the generator matrix is formed by the coefficients of the (i−1)-th cyclic shift of the generator polynomial:

    | g(X)         |   | g0 g1 … gr                |
G = | Xg(X)        | = |    g0 g1 … gr             |
    | ⋮            |   |        ⋱                  |
    | X^(k−1)g(X)  |   |          g0 g1 … gr       |
![Page 194: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/194.jpg)
Cyclic block codes
• Systematic encoding algorithm for an (n,k) Cyclic code:
1. Multiply the message polynomial m(X) by X^(n−k).
2. Divide the result of Step 1 by the generator polynomial g(X). Let p(X) be the remainder.
3. Add p(X) to X^(n−k)m(X) to form the codeword U(X).
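The three steps above can be sketched with GF(2) polynomial long division. Polynomials are coefficient lists, lowest degree first; the helper names are illustrative, not from the lecture:

```python
def poly_mod(num, den):
    """Remainder of num(X) / den(X) over GF(2) (long division, XOR arithmetic)."""
    num = num[:]
    d = max(i for i, c in enumerate(den) if c)    # degree of den
    for i in range(len(num) - 1, d - 1, -1):
        if num[i]:                                # cancel the leading term
            for j, c in enumerate(den):
                num[i - d + j] ^= c
    return num[:d]

def encode_cyclic(m, g, n):
    k = len(m)
    shifted = [0] * (n - k) + m                   # step 1: X^(n-k) m(X)
    p = poly_mod(shifted, g)                      # step 2: remainder p(X)
    return p + m                                  # step 3: U = p(X) + X^(n-k) m(X)

# (7,4) code with g(X) = 1 + X + X^3, message m = (1011), i.e. m(X) = 1 + X^2 + X^3
print(encode_cyclic([1, 0, 1, 1], [1, 1, 0, 1], 7))   # [1, 0, 0, 1, 0, 1, 1]
```

The output matches the worked example on the next slide: parity bits (100) followed by the message bits (1011).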
![Page 195: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/195.jpg)
Cyclic block codes
• Example: For the systematic (7,4) Cyclic code with generator polynomial g(X) = 1 + X + X^3:
1. Find the codeword for the message m = (1011).

n = 7, k = 4, n − k = 3
m = (1011),  m(X) = 1 + X^2 + X^3

X^(n−k)m(X) = X^3 m(X) = X^3 + X^5 + X^6

Divide X^(n−k)m(X) by g(X):

X^3 + X^5 + X^6 = (1 + X + X^2 + X^3)(1 + X + X^3) + 1
                   quotient q(X)      generator g(X)   remainder p(X)

Form the codeword polynomial:

U(X) = p(X) + X^3 m(X) = 1 + X^3 + X^5 + X^6

U = (1 0 0 1 0 1 1)
     parity bits (100), message bits (1011)
![Page 196: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/196.jpg)
Cyclic block codes
2. Find the generator and parity check matrices, G and H, respectively.

g(X) = 1 + 1·X + 0·X^2 + 1·X^3  →  (g0, g1, g2, g3) = (1101)

    | 1 1 0 1 0 0 0 |
G = | 0 1 1 0 1 0 0 |
    | 0 0 1 1 0 1 0 |
    | 0 0 0 1 1 0 1 |

Not in systematic form. We do the following:
row(3) ← row(3) + row(1)
row(4) ← row(4) + row(2) + row(1)

    | 1 1 0 | 1 0 0 0 |
G = | 0 1 1 | 0 1 0 0 |      = [P | I4]
    | 1 1 1 | 0 0 1 0 |
    | 1 0 1 | 0 0 0 1 |

    | 1 0 0 | 1 0 1 1 |
H = | 0 1 0 | 1 1 1 0 |      = [I3 | P^T]
    | 0 0 1 | 0 1 1 1 |
![Page 197: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/197.jpg)
Cyclic block codes
• Syndrome decoding for Cyclic codes:
– The received codeword in polynomial form is given by

r(X) = U(X) + e(X)     (received codeword; e(X) is the error pattern)

– The syndrome is the remainder obtained by dividing the received polynomial by the generator polynomial:

r(X) = q(X)g(X) + S(X)

– With the syndrome and the Standard array, the error is estimated.
• In Cyclic codes, the size of the standard array is considerably reduced.
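A valid codeword divides exactly by g(X), so its syndrome is the zero remainder. A minimal sketch reusing the same GF(2) long-division idea as the encoder (helper name illustrative):

```python
def poly_mod(num, den):
    """Remainder of num(X) / den(X) over GF(2)."""
    num = num[:]
    d = max(i for i, c in enumerate(den) if c)
    for i in range(len(num) - 1, d - 1, -1):
        if num[i]:
            for j, c in enumerate(den):
                num[i - d + j] ^= c
    return num[:d]

g = [1, 1, 0, 1]                  # g(X) = 1 + X + X^3
U = [1, 0, 0, 1, 0, 1, 1]         # the (7,4) codeword from the earlier example
print(poly_mod(U, g))             # a codeword divides exactly: [0, 0, 0]
```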
![Page 198: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/198.jpg)
Example of the block codes

(Figure: bit error probability P_B versus Eb/N0 [dB] for QPSK and 8PSK.)
![Page 199: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/199.jpg)
ADVANTAGE of GENERATOR MATRIX:
We need to store only the k rows of G instead of the 2^k vectors of the code.
For the example we have looked at, a generator array of dimensions (3×6) replaces the original code vector table of dimensions (8×6).
This is a definite reduction in complexity.
![Page 200: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/200.jpg)
Systematic Linear Block Codes
A systematic (n,k) linear block code has such a mapping that part of the generated sequence coincides with the k message digits.
The remaining (n−k) digits are parity digits.
A systematic linear block code has a generator matrix of the form:

G = [P | Ik] =
| p11 p12 … p1,(n−k) | 1 0 … 0 |
| p21 p22 … p2,(n−k) | 0 1 … 0 |
| ⋮                  |    ⋱    |
| pk1 pk2 … pk,(n−k) | 0 0 … 1 |
![Page 201: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/201.jpg)
P is the parity array portion of the generator matrix, with pij = (0 or 1).
Ik is the (k×k) identity matrix.
With the systematic generator, encoding complexity is further reduced since we do not need to store the identity matrix.
Since U = mG:

(u1, u2, …, un) = (m1, m2, …, mk) ·
| p11 p12 … p1,(n−k) | 1 0 … 0 |
| p21 p22 … p2,(n−k) | 0 1 … 0 |
| ⋮                  |    ⋱    |
| pk1 pk2 … pk,(n−k) | 0 0 … 1 |
![Page 202: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/202.jpg)
EE576 Dr. Kousa Linear Block Codes 202
where
And the parity bits are
given the message k-tuple
m= m1,……..,mk
And the general code vector n-tuple
U = u1,u2,……,uk
Systematic code vector is:
U = p1,p2……..,pk , m1,m2,……,mk
nkniforkni
m
kniforki
pk
mi
pmi
pmiu
),....,1(
)(,.....,1.......2211
)(,........
)(,22)(,11
2........
2221212
1........
2121111
knkp
km
knpm
knpm
knp
kp
kmpmpmp
kp
kmpmpmp
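The parity-bit formulas above can be transcribed directly; a small sketch (function name illustrative) using the parity array P of the (6,3) example that follows:

```python
P = [[1, 1, 0],   # row (p11 p12 p13)
     [0, 1, 1],   # row (p21 p22 p23)
     [1, 0, 1]]   # row (p31 p32 p33)

def encode_systematic(m, P):
    """U = (p1..p_(n-k), m1..mk) with p_i = sum_j m_j * p_ji (mod 2)."""
    n_k = len(P[0])
    parity = [sum(mj * P[j][i] for j, mj in enumerate(m)) % 2 for i in range(n_k)]
    return parity + list(m)

print(encode_systematic([1, 0, 1], P))   # [0, 1, 1, 1, 0, 1]
```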
![Page 203: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/203.jpg)
Example:
For a (6,3) code the code vectors are described as

                   | 1 1 0 | 1 0 0 |
U = (m1, m2, m3) · | 0 1 1 | 0 1 0 |
                   | 1 0 1 | 0 0 1 |
                        P      I3

U = (m1+m3, m1+m2, m2+m3, m1, m2, m3)
   = (u1,   u2,    u3,    u4, u5, u6)
![Page 204: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/204.jpg)
Parity Check Matrix (H)
We define a parity-check matrix since it will enable us to decode the received vectors.
For a (k×n) generator matrix G, there exists an ((n−k)×n) matrix H such that the rows of G are orthogonal to the rows of H, i.e., GH^T = 0.
To satisfy the orthogonality requirement, the H matrix is written as:

H = [I(n−k) | P^T]
![Page 205: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/205.jpg)
Hence

      | 1 0 … 0             |
      | 0 1 … 0             |
      | ⋮    ⋱              |     | I(n−k) |
H^T = | 0 0 … 1             |  =  | P      |
      | p11 p12 … p1,(n−k)  |
      | p21 p22 … p2,(n−k)  |
      | ⋮                   |
      | pk1 pk2 … pk,(n−k)  |

The product UH^T of each code vector is a zero vector:

UH^T = (p1 + p1, p2 + p2, …, p(n−k) + p(n−k)) = 0

Once the parity check matrix H is formed we can use it to test whether a received vector is a valid member of the codeword set. U is a valid code vector if and only if UH^T = 0.
![Page 206: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/206.jpg)
EE576 Dr. Kousa Linear Block Codes 206
Syndrome Testing
Let r = r1, r2, ……., rn be a received code vector (one of 2n n-tuples)
Resulting from the transmission of U = u1,u2,…….,un (one of the 2k n-tuples).
r = U + e
where e = e1, e2, ……, en is the error vector or error pattern introduced by the channel In space of 2n n-tuples there are a total of (2n –1) potential nonzero error patterns.
The SYNDROME of r is defined as:
S = r HT
The syndrome is the result of a parity check performed on r to
determine whether r is a valid member of the codeword set.
![Page 207: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/207.jpg)
If r contains detectable errors, the syndrome has some non-zero value.
The syndrome of r is seen as:

S = (U + e) H^T = UH^T + eH^T

Since UH^T = 0 for all code words, then:

S = eH^T

An important property of linear block codes, fundamental to the decoding process, is that the mapping between correctable error patterns and syndromes is one-to-one.
![Page 208: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/208.jpg)
The parity check matrix must satisfy:
1. No column of H can be all zeros, or else an error in the corresponding code vector position would not affect the syndrome and would be undetectable.
2. All columns of H must be unique. If two columns are identical, errors corresponding to these code word locations will be indistinguishable.
![Page 209: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/209.jpg)
Example:
Suppose that code vector U = [ 1 0 1 1 1 0 ] is transmitted and the vector r = [ 0 0 1 1 1 0 ] is received.
Note one bit is in error.
Find the syndrome vector, S, and verify that it is equal to eH^T.
The (6,3) code has the generator matrix G we have seen before:

G = | 1 1 0 | 1 0 0 |
    | 0 1 1 | 0 1 0 |
    | 1 0 1 | 0 0 1 |
        P       I

P is the parity matrix and I is the identity matrix.
![Page 210: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/210.jpg)
      | 1 0 0 |
      | 0 1 0 |
H^T = | 0 0 1 |
      | 1 1 0 |
      | 0 1 1 |
      | 1 0 1 |

S = rH^T = [ 0 0 1 1 1 0 ] H^T = [ 1, 1+1, 1+1 ] = [ 1 0 0 ]
(syndrome of the corrupted code vector)

Now we can verify that the syndrome of the corrupted code vector is the same as the syndrome of the error pattern:

S = eH^T = [ 1 0 0 0 0 0 ] H^T = [ 1 0 0 ]
(= syndrome of the error pattern)
![Page 211: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/211.jpg)
Error Correction
Since there is a one-to-one correspondence between correctable error patterns and syndromes, we can correct such error patterns.
Assume the 2^n n-tuples that represent possible received vectors are arranged in an array called the standard array:
1. The first row contains all the code vectors, starting with the all-zeros vector.
2. The first column contains all the correctable error patterns.
The standard array for an (n,k) code is:

U1 = 0      U2              …  Ui              …  U_2^k
e2          U2 + e2         …  Ui + e2         …  U_2^k + e2
⋮           ⋮                   ⋮                   ⋮
ej          U2 + ej         …  Ui + ej         …  U_2^k + ej
⋮           ⋮                   ⋮                   ⋮
e_2^(n−k)   U2 + e_2^(n−k)  …  Ui + e_2^(n−k)  …  U_2^k + e_2^(n−k)
![Page 212: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/212.jpg)
Each row, called a coset, consists of an error pattern in the first column, also known as the coset leader, followed by the code vectors perturbed by that error pattern.
The array contains all 2^n n-tuples in the space Vn.
Each coset consists of 2^k n-tuples, so there are 2^n / 2^k = 2^(n−k) cosets.
If the error pattern caused by the channel is a coset leader, the received vector will be decoded correctly into the transmitted code vector Ui. If the error pattern is not a coset leader, the decoding will produce an error.
![Page 213: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/213.jpg)
Syndrome of a Coset
If ej is the coset leader of the jth coset, then Ui + ej is an n-tuple in this coset.
The syndrome of this coset is:

S = (Ui + ej)H^T = UiH^T + ejH^T = ejH^T

All members of a coset have the same syndrome, and in fact the syndrome is used to estimate the error pattern.
![Page 214: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/214.jpg)
Error Correction Decoding
The procedure for error correction decoding is as follows:
1. Calculate the syndrome of r using S = rH^T.
2. Locate the coset leader (error pattern), ej, whose syndrome equals rH^T.
3. This error pattern is the corruption caused by the channel.
4. The corrected received vector is identified as U = r + ej.
We retrieve the valid code vector by subtracting out the identified error.
Note: In modulo-2 arithmetic, subtraction is identical to addition.
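The four-step procedure can be sketched end to end: build the syndrome look-up table from the coset leaders, then correct. A minimal illustration for the (6,3) code (helper names are illustrative):

```python
H = [[1, 0, 0, 1, 0, 1],
     [0, 1, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 1]]

coset_leaders = [[0, 0, 0, 0, 0, 0],
                 [0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 1, 0], [0, 0, 0, 1, 0, 0],
                 [0, 0, 1, 0, 0, 0], [0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0],
                 [0, 1, 0, 0, 0, 1]]

def syndrome(v):
    return tuple(sum(a * b for a, b in zip(v, row)) % 2 for row in H)

# one-to-one mapping: syndrome -> coset leader
table = {syndrome(e): e for e in coset_leaders}

def decode(r):
    e_hat = table[syndrome(r)]                    # steps 1-3
    return [ri ^ ei for ri, ei in zip(r, e_hat)]  # step 4: U = r + e (mod 2)

print(decode([0, 0, 1, 1, 1, 0]))   # corrects back to [1, 0, 1, 1, 1, 0]
```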
![Page 215: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/215.jpg)
Example: Locating the error pattern:
For the (6,3) linear block code we have seen before, the standard array can be arranged as:
000000 110100 011010 101110 101001 011101 110011 000111
000001 110101 011011 101111 101000 011100 110010 000110
000010 110110 011000 101100 101011 011111 110001 000101
000100 110000 011110 101010 101101 011001 110111 000011
001000 111100 010010 100110 100001 010101 111011 001111
010000 100100 001010 111110 111001 001101 100011 010111
100000 010100 111010 001110 001001 111101 010011 100111
010001 100101 001011 111111 111000 001100 100010 010110
![Page 216: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/216.jpg)
The valid code vectors are the eight vectors in the first row, and the correctable error patterns are the eight coset leaders in the first column.
Decoding will be correct if and only if the error pattern caused by the channel is one of the coset leaders.
We now compute the syndrome corresponding to each of the correctable error sequences by computing S = ejH^T for each coset leader, with

      | 1 0 0 |
      | 0 1 0 |
H^T = | 0 0 1 |
      | 1 1 0 |
      | 0 1 1 |
      | 1 0 1 |
![Page 217: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/217.jpg)
Syndrome look-up table:

Error pattern   Syndrome
0 0 0 0 0 0     0 0 0
0 0 0 0 0 1     1 0 1
0 0 0 0 1 0     0 1 1
0 0 0 1 0 0     1 1 0
0 0 1 0 0 0     0 0 1
0 1 0 0 0 0     0 1 0
1 0 0 0 0 0     1 0 0
0 1 0 0 0 1     1 1 1
![Page 218: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/218.jpg)
Error Correction
We receive the vector r and calculate its syndrome S. We then use the syndrome look-up table to find the corresponding error pattern. This error pattern is an estimate of the error; we denote it as ê.
The decoder then adds ê to r to obtain an estimate of the transmitted code vector û:

Û = r + ê = (U + e) + ê = U + (e + ê)

If the estimated error pattern is the same as the actual error pattern, that is, if ê = e, then û = U.
If ê ≠ e, the decoder will estimate a code vector that was not transmitted, and hence we have an undetectable decoding error.
![Page 219: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/219.jpg)
Example
Assume code vector U = [ 1 0 1 1 1 0 ] is transmitted and the vector r = [ 0 0 1 1 1 0 ] is received.
The syndrome of r is computed as:
S = [ 0 0 1 1 1 0 ] H^T = [ 1 0 0 ]
From the look-up table, 100 has the corresponding error pattern:
ê = [ 1 0 0 0 0 0 ]
The corrected vector is then Û = r + ê = 0 0 1 1 1 0 + 1 0 0 0 0 0 = 1 0 1 1 1 0 (corrected)
In this example the actual error pattern is the estimated error pattern, hence û = U.
![Page 220: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/220.jpg)
3F4 Error Control Coding
Dr. I. J. Wassell
![Page 221: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/221.jpg)
Introduction
• Error Control Coding (ECC)
– Extra bits are added to the data at the transmitter (redundancy) to permit error detection or correction at the receiver
– Done to prevent the output of erroneous bits despite noise and other imperfections in the channel
– The positions of the error control coding and decoding are shown in the transmission model
![Page 222: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/222.jpg)
Transmission Model

Transmitter: Digital Source → Source Encoder → Error Control Coding → Line Coding → Modulator (Transmit Filter, etc.) → X(ω)
Channel: Hc(ω), with additive Noise N(ω)
Receiver: Y(ω) → Demod (Receive Filter, etc.) → Line Decoding → Error Control Decoding → Source Decoder → Digital Sink
![Page 223: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/223.jpg)
Error Models
• Binary Symmetric Memoryless Channel
– Assumes transmitted symbols are binary
– Errors affect ‘0’s and ‘1’s with equal probability (i.e., symmetric)
– Errors occur randomly and are independent from bit to bit (memoryless)

(Channel diagram: 0 → 0 and 1 → 1 each with probability 1−p; crossovers 0 → 1 and 1 → 0 each with probability p.)

p is the probability of bit error, or the Bit Error Rate (BER) of the channel
![Page 224: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/224.jpg)
Error Models
• Many other types
• Burst errors, i.e., contiguous bursts of bit errors
– output from DFE (error propagation)
– common in radio channels
– Insertion, deletion and transposition errors
• We will consider mainly random errors

Error Control Techniques
• Error detection in a block of data
– Can then request a retransmission, known as automatic repeat request (ARQ), for sensitive data
– Appropriate for
• Low delay channels
• Channels with a return path
– Not appropriate for delay sensitive data, e.g., real time speech and data
• Forward Error Correction (FEC)
– Coding designed so that errors can be corrected at the receiver
– Appropriate for delay sensitive and one-way transmission (e.g., broadcast TV) of data
– Two main types, namely block codes and convolutional codes. We will only look at block codes
![Page 225: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/225.jpg)
Block Codes
• We will consider only binary data
• Data is grouped into blocks of length k bits (dataword)
• Each dataword is coded into blocks of length n bits (codeword), where in general n > k
• This is known as an (n,k) block code
• A vector notation is used for the datawords and codewords,
– Dataword d = (d1 d2….dk)
– Codeword c = (c1 c2……..cn)
• The redundancy introduced by the code is quantified by the code rate,
– Code rate = k/n
– i.e., the higher the redundancy, the lower the code rate

Block Code - Example
• Dataword length k = 4
• Codeword length n = 7
• This is a (7,4) block code with code rate = 4/7
• For example, d = (1101), c = (1101001)
![Page 226: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/226.jpg)
Error Control Process

Source code data chopped into blocks → Channel coder: Dataword (k bits) → Codeword (n bits) → Channel → Codeword + possible errors (n bits) → Channel decoder → Dataword (k bits) + Error flags

• Decoder gives corrected data
• May also give error flags to
– Indicate reliability of decoded data
– Help with schemes employing multiple layers of error correction
![Page 227: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/227.jpg)
Parity Codes
• Example of a simple block code – Single Parity Check Code
– In this case, n = k+1, i.e., the codeword is the dataword with one additional bit
– For ‘even’ parity the additional bit is

q = Σ (i=1..k) d_i  (mod 2)

– For ‘odd’ parity the additional bit is 1−q
– That is, the additional bit ensures that there are an ‘even’ or ‘odd’ number of ‘1’s in the codeword

Parity Codes – Example 1
• Even parity
(i) d = (10110) so, c = (101101)
(ii) d = (11011) so, c = (110110)
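The two worked examples above can be reproduced directly; a minimal sketch of even-parity encoding and checking (function names illustrative):

```python
def parity_encode(d):
    """Even single parity check code (n = k+1): append q = sum(d) mod 2."""
    q = sum(d) % 2
    return d + [q]

def parity_check(c):
    """True if the received block has an even number of ones (parity passes)."""
    return sum(c) % 2 == 0

c = parity_encode([1, 0, 1, 1, 0])
print(c, parity_check(c))   # [1, 0, 1, 1, 0, 1] True
```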
![Page 228: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/228.jpg)
Parity Codes – Example 2
• Coding table for (4,3) even parity code

Dataword   Codeword
000        0000
001        1001
010        1010
011        0011
100        1100
101        0101
110        0110
111        1111
![Page 229: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/229.jpg)
Parity Codes
• To decode
– Calculate sum of received bits in block (mod 2)
– If sum is 0 (1) for even (odd) parity then the dataword is the first k bits of the received codeword
– Otherwise error
• Code can detect single errors
• But cannot correct the error since the error could be in any bit
• For example, if the received codeword is (100000), the transmitted codeword could have been (000000) or (110000), with the error being in the first or second place respectively
• Note the error could also lie in other positions, including the parity bit
• Known as a single error detecting code (SED). Only useful if the probability of getting 2 errors is small, since parity will become correct again
• Used in serial communications
• Low overhead but not very powerful
• Decoder can be implemented efficiently using a tree of XOR gates
![Page 230: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/230.jpg)
Hamming Distance
• Error control capability is determined by the Hamming distance
• The Hamming distance between two codewords is equal to the number of differences between them, e.g.,
10011011
11010010   have a Hamming distance = 3
• Alternatively, can compute by adding the codewords (mod 2):
= 01001001 (now count up the ones)
• The Hamming distance of a code is equal to the minimum Hamming distance between two codewords
• If Hamming distance is:
1 – no error control capability; i.e., a single error in a received codeword yields another valid codeword

X X X X X X X     X is a valid codeword

Note that this representation is diagrammatic only. In reality each codeword is surrounded by n codewords, that is, one for every bit that could be changed.
![Page 231: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/231.jpg)
Hamming Distance
• If Hamming distance is:
2 – can detect single errors (SED); i.e., a single error will yield an invalid codeword

X O X O X O     X is a valid codeword
                O is not a valid codeword

See that 2 errors will yield a valid (but incorrect) codeword

• If Hamming distance is:
3 – can correct single errors (SEC) or can detect double errors (DED)

X O O X O O X   X is a valid codeword
                O is not a valid codeword

See that 3 errors will yield a valid but incorrect codeword
![Page 232: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/232.jpg)
Hamming Distance - Example
• Hamming distance 3 code, i.e., SEC/DED
– Or can perform single error correction (SEC)

10011011   X
11011011   O   ← this code corrected to the codeword above
11010011   O   ← this code corrected to the codeword below
11010010   X

X is a valid codeword
O is an invalid codeword
![Page 233: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/233.jpg)
Hamming Distance
• The maximum number of detectable errors is dmin − 1
• The maximum number of correctable errors is given by

t = ⌊(dmin − 1) / 2⌋

where dmin is the minimum Hamming distance between 2 codewords and ⌊.⌋ means round down to the nearest integer (floor).
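For a linear code, dmin equals the minimum Hamming weight of the non-zero codewords (a fact stated later in these slides), so both quantities are easy to compute. A small sketch using the (6,3) code's codewords from the standard-array slide:

```python
codewords = ["000000", "110100", "011010", "101110",
             "101001", "011101", "110011", "000111"]

# dmin = minimum weight over the non-zero codewords of a linear code
d_min = min(cw.count("1") for cw in codewords if cw != "000000")
t = (d_min - 1) // 2    # floor((dmin - 1)/2) correctable errors

print(d_min, t)   # 3 1
```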
![Page 234: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/234.jpg)
Linear Block Codes
• As seen from the second Parity Code example, it is possible to use a table to hold all the codewords for a code and to look up the appropriate codeword based on the supplied dataword
• Alternatively, it is possible to create codewords by addition of other codewords. This has the advantage that there is now no longer the need to hold every possible codeword in the table.
• If there are k data bits, all that is required is to hold k linearly independent codewords, i.e., a set of k codewords none of which can be produced by linear combinations of 2 or more codewords in the set.
• The easiest way to find k linearly independent codewords is to choose those which have ‘1’ in just one of the first k positions and ‘0’ in the other k−1 of the first k positions.
![Page 235: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/235.jpg)
Linear Block Codes• For example for a (7,4) code, only four codewords are
required, e.g.,
1111000
1100100
1010010
0110001
• So, to obtain the codeword for dataword 1011, the first, third and fourth codewords in the list are added together, giving 1011010
• This process will now be described in more detail
![Page 236: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/236.jpg)
Linear Block Codes
• An (n,k) block code has code vectors
d = (d1 d2….dk) and
c = (c1 c2……..cn)
• The block coding process can be written as c = dG, where G is the Generator Matrix

    | a11 a12 … a1n |   | a1 |
G = | a21 a22 … a2n | = | a2 |
    | ⋮             |   | ⋮  |
    | ak1 ak2 … akn |   | ak |

• Thus,

c = Σ (i=1..k) d_i a_i

• The a_i must be linearly independent, i.e., since codewords are given by summations of the a_i vectors, then to avoid 2 datawords having the same codeword, the a_i vectors must be linearly independent.
![Page 237: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/237.jpg)
Linear Block Codes
• The sum (mod 2) of any 2 codewords is also a codeword, i.e., since for datawords d1 and d2 we have

d3 = d1 + d2

then,

c3 = Σ (i=1..k) d3i·a_i = Σ (i=1..k) (d1i + d2i)·a_i
   = Σ (i=1..k) d1i·a_i + Σ (i=1..k) d2i·a_i = c1 + c2

• 0 is always a codeword, i.e., since all zeros is a dataword, then

c = Σ (i=1..k) 0·a_i = 0
![Page 238: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/238.jpg)
EE576 Dr. Kousa Linear Block Codes 238
Error Correcting Power of LBC
• The Hamming distance of a linear block code (LBC) is simply the minimum Hamming weight (number of 1s, or equivalently the distance from the all-0 codeword) of the non-zero codewords
• Note d(c1,c2) = w(c1 + c2) as shown previously
• For an LBC, c1 + c2 = c3
• So min(d(c1,c2)) = min(w(c1 + c2)) = min(w(c3))
• Therefore, to find the minimum Hamming distance we just need to search among the 2^k codewords for the minimum Hamming weight – far simpler than a pairwise check over all possible codeword pairs.
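This weight search is easy to sketch. The generator rows below are an assumed systematic (7,4) Hamming generator (of the kind used later in these slides), so the expected minimum distance is 3:

```python
from itertools import product

# dmin of a linear block code equals the minimum Hamming weight over the
# 2^k - 1 non-zero codewords, so a single enumeration suffices -- no
# pairwise distance checks. G holds an assumed (7,4) Hamming generator.

G = [0b1000011, 0b0100101, 0b0010110, 0b0001111]

def min_distance(rows):
    """Minimum Hamming weight over all non-zero codewords generated by rows."""
    best = None
    for bits in product([0, 1], repeat=len(rows)):
        if not any(bits):
            continue  # skip the all-zero codeword
        c = 0
        for b, row in zip(bits, rows):
            if b:
                c ^= row
        w = bin(c).count("1")
        best = w if best is None else min(best, w)
    return best

print(min_distance(G))  # -> 3, so the code is single-error correcting
```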
![Page 239: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/239.jpg)
EE576 Dr. Kousa Linear Block Codes 239
Linear Block Codes – example 1
• For example a (4,2) code, suppose

  G = [ 1 0 1 1 ]      a1 = [1011]
      [ 0 1 0 1 ]      a2 = [0101]

• For d = [1 1], then

  c = a1 + a2 = [1011] + [0101] = [1110]
![Page 240: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/240.jpg)
EE576 Dr. Kousa Linear Block Codes 240
Linear Block Codes – example 2
• A (6,5) code with

  G = [ 1 0 0 0 0 1 ]
      [ 0 1 0 0 0 1 ]
      [ 0 0 1 0 0 1 ]
      [ 0 0 0 1 0 1 ]
      [ 0 0 0 0 1 1 ]

• is an even single-parity-check code
![Page 241: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/241.jpg)
EE576 Dr. Kousa Linear Block Codes 241
Systematic Codes
• For a systematic block code the dataword appears unaltered in the codeword – usually at the start
• The generator matrix has the structure,

  G = [ I | P ] = [ 1 0 … 0 | p11 p12 … p1R ]
                  [ 0 1 … 0 | p21 p22 … p2R ]
                  [ … … … … | … … … …       ]
                  [ 0 0 … 1 | pk1 pk2 … pkR ]      R = n − k

• P is often referred to as the parity bits
• I is the k×k identity matrix; it ensures the dataword appears at the beginning of the codeword
• P is a k×R matrix
![Page 242: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/242.jpg)
EE576 Dr. Kousa Linear Block Codes 242
Decoding Linear Codes
• One possibility is a ROM look-up table
• In this case the received codeword is used as an address
• Example – even single parity check code;

  Address   Data
  000000    0
  000001    1
  000010    1
  000011    0
  ………       .

• The data output is the error flag, i.e., 0 – codeword OK, 1 – error detected
• If no error, the dataword is the first k bits of the codeword
• For an error correcting code the ROM can also store datawords
• Another possibility is algebraic decoding, i.e., the error flag is computed from the received codeword (as in the case of simple parity codes)
• How can this method be extended to more complex error detection and correction codes?
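For the single-parity-check code, the algebraic error flag mentioned above is just the XOR of all received bits; a minimal sketch:

```python
# Algebraic decoding of the even single-parity-check code: the error flag is
# the XOR (mod-2 sum) of all received bits; 0 means the parity is satisfied.

def parity_flag(codeword: str) -> int:
    flag = 0
    for bit in codeword:
        flag ^= int(bit)
    return flag

print(parity_flag("000011"))  # even number of 1s -> 0 (codeword OK)
print(parity_flag("000001"))  # odd number of 1s  -> 1 (error detected)
```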
![Page 243: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/243.jpg)
EE576 Dr. Kousa Linear Block Codes 243
Parity Check Matrix
• A linear block code is a linear subspace Ssub of the space S of all length-n vectors
• Consider the subset Snull of all length-n vectors in S that are orthogonal to all vectors in Ssub
• It can be shown that the dimensionality of Snull is n−k, where n is the dimensionality of S and k is the dimensionality of Ssub
• It can also be shown that Snull is a valid subspace of S and consequently Ssub is also the null space of Snull
• Snull can be represented by its basis vectors; arranged as rows, these form the generator matrix H of Snull, of dimension n−k = R
• This matrix is called the parity check matrix of the code defined by G, where G is the generator matrix of Ssub, of dimension k
• Note that the number of vectors in the basis defines the dimension of the subspace
• So H has R = n−k rows, and all vectors in the null space are orthogonal to all the vectors of the code
• Since the rows of H, namely the vectors bi, are members of the null space, they are orthogonal to any code vector
• So a vector y is a codeword only if yHT = 0
• Note that a linear block code can be specified by either G or H
![Page 244: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/244.jpg)
EE576 Dr. Kousa Linear Block Codes 244
Parity Check Matrix
• So H is used to check if a codeword is valid,

      [ b1 ]   [ b11 b12 … b1n ]
  H = [ b2 ] = [ b21 b22 … b2n ]
      [ …  ]   [ … … … …       ]
      [ bR ]   [ bR1 bR2 … bRn ]      R = n − k
• The rows of H, namely, bi, are chosen to be orthogonal to rows of G, namely ai
• Consequently the dot product of any valid codeword with any bi is zero
![Page 245: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/245.jpg)
EE576 Dr. Kousa Linear Block Codes 245
Parity Check Matrix
• This is so since

  c = Σ(i=1..k) di ai

and so

  bj·c = bj·Σ(i=1..k) di ai = Σ(i=1..k) di (ai·bj) = 0
• This means that a codeword is valid (but not necessarily correct) only if cHT = 0. To ensure this it is required that the rows of H are independent and orthogonal to the rows of G
• That is, the bi span the remaining R (= n − k) dimensions of the codespace
![Page 246: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/246.jpg)
EE576 Dr. Kousa Linear Block Codes 246
Parity Check Matrix
• For example consider a (3,2) code. In this case G has 2 rows, a1 and a2
• Consequently all valid codewords sit in the subspace (in this case a plane) spanned by a1 and a2
• In this example the H matrix has only one row, namely b1. This vector is orthogonal to the plane containing the rows of the G matrix, i.e., a1 and a2
• Any received codeword which is not in the plane containing a1 and a2 (i.e., an invalid codeword) will thus have a component in the direction of b1 yielding a non- zero dot product between itself and b1
![Page 247: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/247.jpg)
EE576 Dr. Kousa Linear Block Codes 247
Parity Check Matrix• Similarly, any received codeword which is in the
plane containing a1 and a2 (i.e., a valid codeword) will not have a component in the direction of b1 yielding a zero dot product between itself and b1
[Figure: codewords c1, c2 and c3 lie in the plane spanned by a1 and a2; b1 is orthogonal to that plane]
![Page 248: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/248.jpg)
EE576 Dr. Kousa Linear Block Codes 248
Error Syndrome
• For error correcting codes we need a method to compute the required correction
• To do this we use the Error Syndrome, s of a received codeword, cr
s = crHT
• If cr is corrupted by the addition of an error vector e, then
  cr = c + e
and
  s = (c + e)HT = cHT + eHT = 0 + eHT = eHT
• The syndrome depends only on the error
![Page 249: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/249.jpg)
EE576 Dr. Kousa Linear Block Codes 249
Error Syndrome
• That is, we can add the same error pattern to different codewords and get the same syndrome.
– There are 2^(n−k) syndromes but 2^n error patterns
– For example, for a (3,2) code there are 2 syndromes and 8 error patterns
– Clearly no error correction is possible in this case
– Another example: a (7,4) code has 8 syndromes and 128 error patterns
– With 8 syndromes we can provide a different value to indicate single errors in any of the 7 bit positions, as well as the zero value to indicate no errors
• Now need to determine which error pattern caused the syndrome
![Page 250: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/250.jpg)
EE576 Dr. Kousa Linear Block Codes 250
Error Syndrome
• For systematic linear block codes, H is constructed as follows,

  G = [ I | P ] and so H = [ −PT | I ]

where I is the k×k identity for G and the R×R identity for H (over GF(2), −PT = PT)
• Example, (7,4) code, dmin = 3

  G = [ I | P ] = [ 1 0 0 0 0 1 1 ]
                  [ 0 1 0 0 1 0 1 ]
                  [ 0 0 1 0 1 1 0 ]
                  [ 0 0 0 1 1 1 1 ]

  H = [ −PT | I ] = [ 0 1 1 1 1 0 0 ]
                    [ 1 0 1 1 0 1 0 ]
                    [ 1 1 0 1 0 0 1 ]
![Page 251: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/251.jpg)
EE576 Dr. Kousa Linear Block Codes 251
Error Syndrome - Example
• For a correctly received codeword cr = [1101001],

                        [ 0 1 1 ]
                        [ 1 0 1 ]
                        [ 1 1 0 ]
  s = cr HT = [1101001] [ 1 1 1 ] = [000]
                        [ 1 0 0 ]
                        [ 0 1 0 ]
                        [ 0 0 1 ]
![Page 252: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/252.jpg)
EE576 Dr. Kousa Linear Block Codes 252
Error Syndrome - Example
• For the same codeword, this time with an error in the first bit position (the rightmost bit), i.e.,

  cr = [1101000]

                        [ 0 1 1 ]
                        [ 1 0 1 ]
                        [ 1 1 0 ]
  s = cr HT = [1101000] [ 1 1 1 ] = [001]
                        [ 1 0 0 ]
                        [ 0 1 0 ]
                        [ 0 0 1 ]

• In this case the syndrome 001 indicates an error in bit 1 of the codeword
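Both syndrome computations can be reproduced in a few lines of Python. The rows of H below are the systematic (7,4) parity check matrix assumed from these slides (0111100, 1011010, 1101001), consistent with the quoted syndromes 000 and 001:

```python
# Syndrome computation s = cr * H^T (mod 2): each syndrome bit is the dot
# product of the received word with one row of H, reduced mod 2.

H = ["0111100", "1011010", "1101001"]  # assumed rows of the systematic H

def syndrome(cr: str) -> str:
    return "".join(
        str(sum(int(c) & int(h) for c, h in zip(cr, row)) % 2) for row in H
    )

print(syndrome("1101001"))  # valid codeword          -> 000
print(syndrome("1101000"))  # error in rightmost bit  -> 001
```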
![Page 253: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/253.jpg)
EE576 Dr. Kousa Linear Block Codes 253
Comments about H• The minimum distance of the code is equal to the minimum number of
columns (non-zero) of H which sum to zero
• We can express

  cr HT = cr0 d0 + cr1 d1 + … + cr(n−1) dn−1

where cr = (cr0 cr1 … cr(n−1)) and d0, d1, …, dn−1 are the column vectors of H
• Clearly cr HT is a linear combination of the columns of H
• For a codeword with weight w (i.e., w ones), then crHT is a linear combination of w columns of H.
• Thus we have a one-to-one mapping between weight w codewords and linear combinations of w columns of H
• Thus the minimum weight w for which some w columns of H sum to zero is the minimum codeword weight, and so dmin = w
![Page 254: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/254.jpg)
EE576 Dr. Kousa Linear Block Codes 254
Comments about H• For the example code, a codeword with min weight (dmin = 3) is
given by the first row of G, i.e., [1000011]• Now form linear combination of first and last 2 cols in H, i.e.,
[011]+[010]+[001] = 0
• So need min of 3 columns (= dmin) to get a zero value of cHT in this example
Standard Array
• From the standard array we can find the most likely transmitted codeword given a particular received codeword without having to have a look-up table at the decoder containing all possible codewords in the standard array
• Not surprisingly it makes use of syndromes
![Page 255: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/255.jpg)
EE576 Dr. Kousa Linear Block Codes 255
Standard Array
• The Standard Array is constructed as follows,

  c1 (all zero)   c2        c3        …   cM        → s0
  e1              c2+e1     c3+e1     …   cM+e1     → s1
  e2              c2+e2     c3+e2     …   cM+e2     → s2
  e3              c2+e3     c3+e3     …   cM+e3     → s3
  …               …         …         …   …         → …
  eN              c2+eN     c3+eN     …   cM+eN     → sN

  All patterns in a row have the same syndrome; different rows have distinct syndromes
• The array has 2^k columns (i.e., equal to the number of valid codewords) and 2^R rows (i.e., the number of syndromes)
![Page 256: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/256.jpg)
EE576 Dr. Kousa Linear Block Codes 256
Standard Array
• Imagine that the received codeword (cr) is c2 + e3 (shown in bold in the standard array)
• The most likely codeword is the one at the head of the column containing c2 + e3
• The corresponding error pattern is the one at the beginning of the row containing c2 + e3
• So in theory we could implement a look-up table (in a ROM) which could map all codewords in the array to the most likely codeword (i.e., the one at the head of the column containing the received codeword)
• This could be quite a large table so a more simple way is to use syndromes
• The standard array is formed by initially choosing ei to be,– All 1 bit error patterns– All 2 bit error patterns– ……
• Ensure that each error pattern not already in the array has a new syndrome. Stop when all syndromes are used
![Page 257: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/257.jpg)
EE576 Dr. Kousa Linear Block Codes 257
Standard Array
• This block diagram shows the proposed implementation
[Block diagram: cr → compute syndrome → s → look-up table → e; then c = cr + e]
![Page 258: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/258.jpg)
EE576 Dr. Kousa Linear Block Codes 258
Standard Array• For the same received codeword c2 + e3, note that the
unique syndrome is s3
• This syndrome identifies e3 as the corresponding error pattern
• So if we calculate the syndrome as described previously, i.e., s = crHT
• All we need to do now is to have a relatively small table which associates s with their respective error patterns. In the example s3 will yield e3
• Finally we subtract (or equivalently add in modulo 2 arithmetic) e3 from the received codeword (c2 + e3) to yield the most likely codeword, c2
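The syndrome-table shortcut described above can be sketched as follows, using the same assumed systematic H as elsewhere in this section; the table maps each single-bit-error syndrome to its pattern, and correction is a final XOR:

```python
# Syndrome-table decoding for a single-error-correcting (7,4) code: build a
# small syndrome -> error-pattern table, then correct by XOR-ing the pattern.

H = ["0111100", "1011010", "1101001"]  # assumed rows of the systematic H

def syndrome(word: str) -> str:
    return "".join(
        str(sum(int(a) & int(b) for a, b in zip(word, row)) % 2) for row in H
    )

# Look-up table: syndrome of each single-bit error pattern -> that pattern
table = {}
for i in range(7):
    e = "".join("1" if j == i else "0" for j in range(7))
    table[syndrome(e)] = e

def correct(cr: str) -> str:
    s = syndrome(cr)
    if s == "000":
        return cr  # zero syndrome: accept as-is
    e = table[s]
    return "".join(str(int(a) ^ int(b)) for a, b in zip(cr, e))

print(correct("1101000"))  # single error corrected -> 1101001
```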
![Page 259: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/259.jpg)
EE576 Dr. Kousa Linear Block Codes 259
Hamming Codes
• We will consider a special class of SEC codes (i.e., Hamming distance = 3) where,
– Number of parity bits R = n − k and n = 2^R − 1
– The syndrome has R bits
– The 0 value implies zero errors
– There are 2^R − 1 other syndrome values, i.e., one for each bit that might need to be corrected
– This is achieved if each column of H is a different binary word – remember s = eHT
![Page 260: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/260.jpg)
EE576 Dr. Kousa Linear Block Codes 260
Hamming Codes
• The systematic form of the (7,4) Hamming code is,

  G = [ I | P ] = [ 1 0 0 0 0 1 1 ]
                  [ 0 1 0 0 1 0 1 ]
                  [ 0 0 1 0 1 1 0 ]
                  [ 0 0 0 1 1 1 1 ]

  H = [ −PT | I ] = [ 0 1 1 1 1 0 0 ]
                    [ 1 0 1 1 0 1 0 ]
                    [ 1 1 0 1 0 0 1 ]

• The original form is non-systematic,

  G = [ 1 1 1 0 0 0 0 ]
      [ 1 0 0 1 1 0 0 ]
      [ 0 1 0 1 0 1 0 ]
      [ 1 1 0 1 0 0 1 ]

  H = [ 0 0 0 1 1 1 1 ]
      [ 0 1 1 0 0 1 1 ]
      [ 1 0 1 0 1 0 1 ]

• Compared with the systematic code, the column orders of both G and H are swapped so that the columns of H form a binary count
![Page 261: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/261.jpg)
EE576 Dr. Kousa Linear Block Codes 261
Hamming Codes - Example
• For a non-systematic (7,4) code,

  d = 1011
  c = 1110000 + 0101010 + 1101001 = 0110011
  e = 0010000
  cr = 0100011
  s = crHT = eHT = 011

• Note the error syndrome is the binary address of the bit to be corrected
Hamming Codes• The column order is now 7, 6, 1, 5, 2, 3, 4, i.e., col. 1 in the non-
systematic H is col. 7 in the systematic H.
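The whole example can be checked end to end; the G and H rows below are the non-systematic (7,4) matrices as assumed in this section:

```python
# End-to-end check of the non-systematic (7,4) Hamming example: encode,
# inject a single-bit error, and read the error position off the syndrome.

G = ["1110000", "1001100", "0101010", "1101001"]  # assumed generator rows
H = ["0001111", "0110011", "1010101"]             # columns count 1..7 in binary

def encode(dataword: str) -> str:
    c = 0
    for bit, row in zip(dataword, G):
        if bit == "1":
            c ^= int(row, 2)
    return format(c, "07b")

def syndrome(cr: str) -> str:
    return "".join(
        str(sum(int(a) & int(b) for a, b in zip(cr, row)) % 2) for row in H
    )

c = encode("1011")
print(c)             # -> 0110011
cr = format(int(c, 2) ^ int("0010000", 2), "07b")  # flip bit 3
print(cr)            # -> 0100011
s = syndrome(cr)
print(s, int(s, 2))  # -> 011 3: the syndrome is the address of the bad bit
```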
![Page 262: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/262.jpg)
EE576 Dr. Kousa Linear Block Codes 262
Hamming Codes
• Double errors will always result in wrong bit being corrected, since– A double error is the sum of 2 single errors– The resulting syndrome will be the sum of the
corresponding 2 single error syndromes– This syndrome will correspond with a third single
bit error– Consequently the ‘corrected’ codeword will now
contain 3 bit errors, i.e., the original double bit error plus the incorrectly corrected bit!
![Page 263: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/263.jpg)
EE576 Dr. Kousa Linear Block Codes 263
Bit Error Rates after Decoding
• For a given channel bit error rate (BER), what is the BER after correction (assuming a memoryless channel, i.e., no burst errors)?
• To do this we will compute the probability of receiving 0, 1, 2, 3, … errors and then compute their effect
• Example – a (7,4) Hamming code with a channel BER of 1%, i.e., p = 0.01:

  P(0 errors received) = (1 − p)^7 = 0.9321
  P(1 error received) = 7p(1 − p)^6 = 0.0659
  P(2 errors received) = (7·6/2) p^2 (1 − p)^5 = 21p^2(1 − p)^5 = 0.002
  P(3 or more errors) = 1 − P(0) − P(1) − P(2) = 0.000034
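These probabilities are just binomial terms; a quick numerical check:

```python
from math import comb

# Probability of exactly m channel errors in an n-bit codeword over a
# memoryless BSC: the binomial term C(n, m) p^m (1-p)^(n-m).

def p_errors(m: int, n: int = 7, p: float = 0.01) -> float:
    return comb(n, m) * p**m * (1 - p) ** (n - m)

p0, p1, p2 = p_errors(0), p_errors(1), p_errors(2)
print(round(p0, 4))               # 0.9321
print(round(p1, 4))               # 0.0659
print(round(p2, 4))               # 0.002
print(f"{1 - p0 - p1 - p2:.6f}")  # 0.000034 (3 or more errors)
```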
![Page 264: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/264.jpg)
EE576 Dr. Kousa Linear Block Codes 264
Bit Error Rates after Decoding
• Single errors are corrected, so 0.9321 + 0.0659 = 0.998 of codewords are correctly decoded
• Double errors cause 3 bit errors in a 7-bit codeword, i.e., (3/7)·4 bit errors per 4-bit dataword, that is 3/7 bit errors per bit. Therefore the double-error contribution is 0.002 · 3/7 = 0.000856
![Page 265: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/265.jpg)
EE576 Dr. Kousa Linear Block Codes 265
Bit Error Rates after Decoding
• The contribution of triple or more errors will be less than 0.000034 (since the worst that can happen is that every databit becomes corrupted)
• So the BER after decoding is approximately 0.000856 + 0.000034 = 0.0009 = 0.09%
• This is an improvement over the channel BER by a factor of about 11
![Page 266: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/266.jpg)
EE576 Dr. Kousa Linear Block Codes 266
Perfect Codes
• If a codeword has n bits and we wish to correct up to t errors, how many parity bits (R) are needed?
• Clearly we need sufficient error syndromes (2^R of them) to identify all error patterns of up to t errors
– Need 1 syndrome to represent 0 errors
– Need n syndromes to represent all 1-bit errors
– Need n(n−1)/2 syndromes to represent all 2-bit errors
– Need nCe = n!/((n−e)! e!) syndromes to represent all e-bit errors
![Page 267: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/267.jpg)
EE576 Dr. Kousa Linear Block Codes 267
Perfect Codes
• So,

  2^R ≥ 1 + n                                  to correct up to 1 error
  2^R ≥ 1 + n + n(n−1)/2                       to correct up to 2 errors
  2^R ≥ 1 + n + n(n−1)/2 + n(n−1)(n−2)/6       to correct up to 3 errors

  If equality holds then the code is Perfect

• The only known perfect codes are the SEC Hamming codes and the TEC Golay (23,12) code (dmin = 7). Using the previous equation,

  1 + 23 + 23·22/2 + 23·22·21/6 = 1 + 23 + 253 + 1771 = 2048 = 2^11 = 2^(23−12)
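The sphere-packing equality is easy to verify numerically for both families of perfect codes:

```python
from math import comb

# A t-error-correcting (n, k) code is perfect when the Hamming spheres of
# radius t exactly fill the space: sum_{i=0..t} C(n, i) == 2^(n-k).

def sphere_size(n: int, t: int) -> int:
    return sum(comb(n, i) for i in range(t + 1))

print(sphere_size(7, 1), 2 ** (7 - 4))     # 8 8       (Hamming, t = 1)
print(sphere_size(23, 3), 2 ** (23 - 12))  # 2048 2048 (Golay, t = 3)
```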
![Page 268: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/268.jpg)
EE576 Dr. Kousa Linear Block Codes 268
Summary
• In this section we have– Used block codes to add redundancy to messages
to control the effects of transmission errors– Encoded and decoded messages using Hamming
codes– Determined overall bit error rates as a function of
the error control strategy
![Page 269: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/269.jpg)
Error Correction Codes & Multi-user Communications
![Page 270: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/270.jpg)
2006/07/07 Wireless Communication Engineering I 270
Agenda• Shannon Theory
• History of Error Correction Code
• Linear Block Codes
• Decoding
• Convolution Codes
• Multiple-Access Technique
• Capacity of Multiple Access
• Random Access Methods
![Page 271: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/271.jpg)
2006/07/07 Wireless Communication Engineering I 271
Shannon Theory: R < C → reliable communication
Redundancy (parity bits) in the transmitted data stream → error correction capability

Encoding: Block Code (code length is fixed) vs. Convolutional Code (coding rate is fixed)
Decoding: Hard-Decoding (digital information) vs. Soft-Decoding (analog information)
![Page 272: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/272.jpg)
2006/07/07 Wireless Communication Engineering I 272
History of Error Correction Code• Shannon (1948):
Random Coding, Orthogonal Waveforms
• Golay (1949): Golay Code, Perfect Code
• Hamming (1950): Hamming Code (Single Error Correction, Double Error Detection)
• Gilbert (1952): Gilbert Bound on Coding Rate
![Page 273: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/273.jpg)
2006/07/07 Wireless Communication Engineering I 273
• Muller (1954): Combinatorial Digital Function and Error Correction Code
• Elias (1954): Tree Code, Convolutional Code
• Reed and Solomon (1960): Reed-Solomon Code (Maximum Distance Separable Code)
• Hocquenghem (1959) and Bose and Chaudhuri (1960): BCH Code (Multiple Error Correction)
• Peterson (1960): Binary BCH Decoding, Error Location Polynomial
![Page 274: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/274.jpg)
2006/07/07 Wireless Communication Engineering I 274
• Wozencraft and Reiffen (1961): Sequential Decoding for convolutional codes
• Gallager (1962): LDPC Codes
• Fano (1963): Fano Decoding Algorithm for convolutional codes
• Zigangirov (1966): Stack Decoding Algorithm for convolutional codes
• Forney (1966): Generalized Minimum Distance Decoding (Error and Erasure Decoding)
• Viterbi (1967): Optimal Decoding Algorithm for convolutional codes
![Page 275: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/275.jpg)
2006/07/07 Wireless Communication Engineering I 275
• Berlekamp (1968): Fast Iterative BCH Decoding
• Forney (1966): Concatenated Codes
• Goppa (1970): Goppa Code (Rational Function Code)
• Justesen (1972): Justesen Code (Asymptotically Good Code)
• Ungerboeck and Csajka (1976): Trellis-Coded Modulation for bandwidth-constrained channels
• Goppa (1980): Algebraic-Geometry Code
![Page 276: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/276.jpg)
2006/07/07 Wireless Communication Engineering I 276
• Welch and Berlekamp (1983): Remainder Decoding Algorithm without using Syndrome
• Araki, Sorger and Kotter (1993): Fast GMD Decoding Algorithm
• Berrou (1993): Turbo Code, parallel concatenated convolutional codes
![Page 277: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/277.jpg)
2006/07/07 Wireless Communication Engineering I 277
Basics of Decoding
a) Hamming distance d(ci, cj) ≥ 2t + 1
b) Hamming distance d(ci, cj) ≥ 2t
The received vector is denoted by r; t → number of correctable errors
![Page 278: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/278.jpg)
2006/07/07 Wireless Communication Engineering I 278
Linear Block Codes
(n, k, dmin) code
  n : code length
  k : number of information bits
  dmin : minimum distance
  k/n : coding rate

d ↑ → good error correction capability; k/n ↓ → low rate; r = n − k ↑ → d ↑

An (n, k, d) Linear Block Code is a linear subspace of dimension k in an n-dimensional linear space.
![Page 279: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/279.jpg)
2006/07/07 Wireless Communication Engineering I 279
Arithmetic operations (+, −, ×, /) for encoding and decoding are over a finite field GF(Q), where Q = p^r, p: prime number, r: positive integer

Example GF(2):

  addition (XOR)        multiplication (AND)
  + | 0 1               × | 0 1
  0 | 0 1               0 | 0 0
  1 | 1 0               1 | 0 1
![Page 280: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/280.jpg)
2006/07/07 Wireless Communication Engineering I 280
[Encoder]
• The Generator Matrix G and the Parity Check Matrix H
  k information bits X → encoder G → n-bit codeword C

  C = XG

• Dual (n, n − k) code: the complementary orthogonal subspace
  Parity Check Matrix H = Generator Matrix of the dual code

  CHt = 0
  GHt = 0
![Page 281: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/281.jpg)
2006/07/07 Wireless Communication Engineering I 281
![Page 282: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/282.jpg)
2006/07/07 Wireless Communication Engineering I 282
error vector & syndrome

  s = rHt = (c + e)Ht = eHt

  c : codeword vector
  e : error vector
  r : received vector (after hard decision)
  s : syndrome

  s → e (decoding process)
![Page 283: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/283.jpg)
2006/07/07 Wireless Communication Engineering I 283
[Minimum Distance]
Singleton Bound: since any dmin − 1 columns of H are linearly independent and H has rank n − k,

  dmin ≤ n − k + 1 (Singleton Bound)

Maximum Distance Separable (MDS) Code: dmin = n − k + 1, e.g. Reed-Solomon Code

• Some Specific Linear Block Codes
– Hamming Code (n, k, dmin) = (2^m − 1, 2^m − 1 − m, 3)
– Hadamard Code (n, k, dmin) = (2^m, m + 1, 2^(m−1))
![Page 284: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/284.jpg)
2006/07/07 Wireless Communication Engineering I 284
Easy Encoding
• Cyclic Codes
  C = (cn−1, …, c0) is a codeword → (cn−2, …, c0, cn−1) is also a codeword
  Codeword polynomial: C(p) = cn−1 p^(n−1) + … + c1 p + c0
  Cyclic shift: p·C(p) mod (p^n + 1)
![Page 285: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/285.jpg)
2006/07/07 Wireless Communication Engineering I 285
Encoding:
  Message polynomial: X(p) = xk−1 p^(k−1) + … + x1 p + x0
  Codeword polynomial: C(p) = X(p) g(p), where g(p) is the generator polynomial of degree n − k
  p^n + 1 = g(p) h(p), where h(p) is the parity polynomial

The encoder is implemented with shift registers.
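The product C(p) = X(p)g(p) is carry-less (GF(2)) polynomial multiplication; a small sketch, taking g(p) = p^3 + p + 1 (an assumed generator of a (7,4) cyclic code) for illustration:

```python
# Cyclic-code encoding as GF(2) polynomial multiplication C(p) = X(p) g(p).
# Polynomials are stored as integer bit masks (bit i = coefficient of p^i).

def gf2_mul(a: int, b: int) -> int:
    """Carry-less multiply: shift-and-XOR instead of shift-and-add."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

g = 0b1011  # assumed generator g(p) = p^3 + p + 1, degree n - k = 3
x = 0b1010  # message polynomial X(p) = p^3 + p
print(format(gf2_mul(x, g), "07b"))  # -> 1001110
```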
![Page 286: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/286.jpg)
2006/07/07 Wireless Communication Engineering I 286
Encoder for an (n, k) cyclic code.
![Page 287: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/287.jpg)
2006/07/07 Wireless Communication Engineering I 287
Syndrome calculator for (n, k) cyclic code.
Digital to Analog (BPSK):

  c = 0 → s = +1
  c = 1 → s = −1    i.e.  s = 1 − 2c
![Page 288: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/288.jpg)
2006/07/07 Wireless Communication Engineering I 288
Soft-Decoding & Maximum Likelihood

  r = s(k) + n
  r = (r1, …, rn),  s(k) = (s1(k), …, sn(k)),  n = (n1, …, nn)

Likelihood: Prob(r | s(k))

  Max(k) Prob(r | s(k)) → Min(k) ||r − s(k)||^2 → Max(k) Correlation(r, ck)
![Page 289: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/289.jpg)
2006/07/07 Wireless Communication Engineering I 289
• Optimum Soft-Decision Decoding of Linear Block Codes
The optimum receiver has M = 2^k matched filters → M correlation metrics

  C(r, ci) = Σ(j=1..n) (2cij − 1) rj

where
  ci : i-th codeword
  cij : j-th bit position of the i-th codeword
  rj : j-th received signal

→ The largest matched filter output is selected.
![Page 290: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/290.jpg)
2006/07/07 Wireless Communication Engineering I 290
Error probability for soft-decision decoding (coherent PSK):

  PM ≤ exp(−γb Rc dmin + k ln 2)

where
  γb : SNR per bit
  Rc = k/n : coding rate

Uncoded binary PSK: P2 ≤ (1/2) exp(−γb)
![Page 291: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/291.jpg)
2006/07/07 Wireless Communication Engineering I 291
Coding gain:

  Cg = 10 log10(Rc dmin − k ln 2 / γb)

  dmin ↑ → Cg ↑
![Page 292: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/292.jpg)
2006/07/07 Wireless Communication Engineering I 292
• Hard-Decision Decoding

Discrete-time channel = modulator + AWGN channel + demodulator
→ BSC with crossover probability

  p = Q(√(2 γb Rc))          coherent PSK
  p = Q(√(γb Rc))            coherent FSK
  p = (1/2) exp(−γb Rc / 2)  noncoherent FSK
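The coherent-PSK crossover probability can be evaluated with the standard relation Q(x) = erfc(x/√2)/2; the rate and SNR values below are illustrative assumptions:

```python
from math import erfc, sqrt

# BSC crossover probability for coherent PSK: p = Q(sqrt(2 * Rc * gamma_b)).

def q_func(x: float) -> float:
    """Gaussian tail probability via the complementary error function."""
    return 0.5 * erfc(x / sqrt(2))

def bsc_crossover_psk(rc: float, gamma_b: float) -> float:
    return q_func(sqrt(2 * rc * gamma_b))

# Higher SNR per bit gives a smaller crossover probability (illustrative values)
print(bsc_crossover_psk(4 / 7, 1.0))
print(bsc_crossover_psk(4 / 7, 10.0))
```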
![Page 293: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/293.jpg)
2006/07/07 Wireless Communication Engineering I 293
Maximum-Likelihood Decoding → Minimum Distance Decoding
Syndrome calculation with the parity check matrix H:

  S = YHt = (Cm + e)Ht = eHt

where
  Cm : transmitted codeword
  Y : received codeword at the demodulator
  e : binary error vector
![Page 294: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/294.jpg)
2006/07/07 Wireless Communication Engineering I 294
• Bounds on Minimum Distance of Linear Block Codes (Rc vs. dmin)
– Hamming upper bound (2t < dmin):

  Rc ≤ 1 − (1/n) log2 [ Σ(i=0..t) C(n, i) ]

– Plotkin upper bound:

  Rc ≤ 1 − 2dmin/n + (1/n) log2(2dmin)
• Comparison of Performance between Hard-Decision and Soft-Decision Decoding
→ At most 2dB difference
![Page 295: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/295.jpg)
2006/07/07 Wireless Communication Engineering I 295
– Elias upper bound:

  dmin/n ≤ 2A(1 − A), where Rc ≤ 1 + A log2 A + (1 − A) log2(1 − A), 0 ≤ A ≤ 1/2

– Gilbert-Varshamov lower bound:

  dmin/n ≥ H^(−1)(1 − Rc), where H(·) is the binary entropy function
![Page 296: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/296.jpg)
2006/07/07 Wireless Communication Engineering I 296
![Page 297: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/297.jpg)
2006/07/07 Wireless Communication Engineering I 297
• Interleaving of Coded Data for Channels with Burst Errors
Multipath and fading channels → burst errors
Burst error correction code: Fire code, with correctable burst length b ≤ (n − k)/2

Block and convolutional interleaving are effective against burst errors.
![Page 298: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/298.jpg)
2006/07/07 Wireless Communication Engineering I 298
Convolution Codes
The performance of convolutional codes exceeds that of block codes, as shown via the Viterbi algorithm:

  P(e) ≤ z e^(−nE(R))

  E(R) : error exponent
![Page 299: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/299.jpg)
2006/07/07 Wireless Communication Engineering I 299
![Page 300: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/300.jpg)
2006/07/07 Wireless Communication Engineering I 300
Constraint length-3, rate-1/2 convolutional encoder.
![Page 301: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/301.jpg)
2006/07/07 Wireless Communication Engineering I 301
• Parameters of a convolutional code: constraint length K, minimum free distance
• Optimum Decoding of Convolutional Codes – the Viterbi Algorithm: practical for K ≤ 10
• Probability of Error for Soft-Decision Decoding
![Page 302: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/302.jpg)
2006/07/07 Wireless Communication Engineering I 302
Trellis for the convolutional encoder
![Page 303: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/303.jpg)
2006/07/07 Wireless Communication Engineering I 303
  Pe ≤ Σ(d=dfree..∞) ad Q(√(2 γb Rc d))

where ad : the number of paths of distance d

• Probability of Error for Hard-Decision Decoding
  The Hamming distance is the metric for hard decisions
![Page 304: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/304.jpg)
2006/07/07 Wireless Communication Engineering I 304
Turbo Coding
![Page 305: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/305.jpg)
2006/07/07 Wireless Communication Engineering I 305
RSC Encoder
![Page 306: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/306.jpg)
2006/07/07 Wireless Communication Engineering I 306
Shannon Limit & Turbo Code
![Page 307: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/307.jpg)
Multiple Access Techniques
1. A common communication channel shared by many users: the up-link in a satellite communication system, a set of terminals connected to a central computer, a mobile cellular system
2. A broadcast network: down-links in a satellite system, radio and TV broadcast systems
3. Store-and-forward networks
4. Two-way communication systems
- FDMA (Frequency-Division Multiple Access)
- TDMA (Time-Division Multiple Access)
- CDMA (Code-Division Multiple Access): suited to burst and low-duty-cycle information transmission; spread-spectrum signals have small cross-correlations
Without spreading, random access leads to collisions and interference, handled by a retransmission protocol.
Multi-user Communications
![Page 308: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/308.jpg)
Capacity of Multiple Access Methods
In FDMA, normalized total capacity Cn = KCK / W (total bit rate for all K users per unit of bandwidth)
C_n = log2(1 + C_n ε_b / N_0)
where
W: bandwidth
ε_b: energy per bit
N_0: noise power spectral density
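The fixed-point relation C_n = log2(1 + C_n ε_b/N_0) above can be solved numerically by simple iteration. A minimal sketch (an illustrative aid, not from the slides; ε_b/N_0 is taken as a linear ratio):

```python
import math

def fdma_normalized_capacity(ebn0, iters=200):
    """Solve Cn = log2(1 + Cn * Eb/N0) by fixed-point iteration."""
    cn = 1.0  # initial guess
    for _ in range(iters):
        cn = math.log2(1 + cn * ebn0)
    return cn

# Total capacity per hertz grows with Eb/N0:
for ebn0_db in (0, 10, 20):
    ebn0 = 10 ** (ebn0_db / 10)
    print(ebn0_db, "dB ->", round(fdma_normalized_capacity(ebn0), 3))
```

At ε_b/N_0 = 1 (0 dB) the nonzero fixed point is exactly C_n = 1, which is a quick sanity check on the iteration.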
![Page 309: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/309.jpg)
Normalized capacity as a function of εb / N0 for FDMA.
![Page 310: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/310.jpg)
Total capacity per hertz as a function of εb / N0 for FDMA.
![Page 311: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/311.jpg)
In TDMA, there is a practical limit on the transmitter power.
In noncooperative CDMA, the normalized total capacity C_n again satisfies an implicit equation in C_n and ε_b / N_0.
![Page 312: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/312.jpg)
Normalized capacity as a function of εb / N0 for noncooperative CDMA.
![Page 313: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/313.jpg)
Capacity region of two-user CDMA multiple access Gaussian channel.
Capacity region for multiple users
![Page 314: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/314.jpg)
Code-Division Multiple Access
• CDMA Signal and Channel Models
• The Optimum Receiver: synchronous transmission, asynchronous transmission
• Suboptimum detectors (computational complexity grows linearly with the number of users, K):
- Conventional single-user detector: near-far problem
- Decorrelation detector
- Minimum mean-square-error detector
- Other types of detectors
• Performance characteristics of detectors
![Page 315: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/315.jpg)
Random Access Methods
• ALOHA systems and protocols
- Channel access protocol: synchronized (slotted) ALOHA, unsynchronized (unslotted) ALOHA
- Throughput for slotted ALOHA
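The slotted-ALOHA throughput referred to above follows the classic result S = G·e^(−G), peaking at 1/e for offered load G = 1. A small sketch stating that standard formula (and its pure-ALOHA counterpart), not anything derived on these slides:

```python
import math

def slotted_aloha_throughput(G):
    """A slot succeeds iff exactly one station transmits in it: S = G * e^-G."""
    return G * math.exp(-G)

def unslotted_aloha_throughput(G):
    """Pure (unslotted) ALOHA has a two-slot vulnerable period: S = G * e^-2G."""
    return G * math.exp(-2 * G)

# Peaks: 1/e ~ 0.368 at G = 1 (slotted), 1/(2e) ~ 0.184 at G = 0.5 (unslotted)
print(slotted_aloha_throughput(1.0), unslotted_aloha_throughput(0.5))
```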
![Page 316: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/316.jpg)
![Page 317: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/317.jpg)
Throughput & Delay Performance
![Page 318: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/318.jpg)
• Carrier Sense Systems and Protocols: CSMA/CD (carrier sense multiple access with collision detection)
- Nonpersistent CSMA
- 1-persistent CSMA
- p-persistent CSMA
![Page 319: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/319.jpg)
![Page 320: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/320.jpg)
![Page 321: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/321.jpg)
![Page 322: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/322.jpg)
10.322
Chapter 10: Error Detection and Correction
![Page 323: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/323.jpg)
10-1 INTRODUCTION
This section discusses some issues related, directly or indirectly, to error detection and correction.
Topics discussed in this section: Types of Errors, Redundancy, Detection Versus Correction, Modular Arithmetic
![Page 324: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/324.jpg)
Figure 10.1 Single-bit error
In a single-bit error, only 1 bit in the data unit has changed.
![Page 325: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/325.jpg)
Figure 10.2 Burst error of length 8
A burst error means that 2 or more bits in the data unit have changed.
![Page 326: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/326.jpg)
Error detection/correction
Error detection:
- Checks whether any error has occurred
- Does not care about the number of errors
- Does not care about the positions of errors
Error correction:
- Needs to know the number of errors
- Needs to know the positions of errors
- More difficult
![Page 327: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/327.jpg)
Figure 10.3 The structure of encoder and decoder
To detect or correct errors, we need to send extra (redundant) bits with data.
![Page 328: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/328.jpg)
Modular Arithmetic
- Modulus N: the upper limit
- In modulo-N arithmetic, we use only the integers in the range 0 to N − 1, inclusive.
- If N is 2, we use only 0 and 1
- No carry in the calculation (addition and subtraction)
![Page 329: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/329.jpg)
Figure 10.4 XORing of two single bits or two words
![Page 330: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/330.jpg)
10-2 BLOCK CODING
In block coding, we divide our message into blocks, each of k bits, called datawords. We add r redundant bits to each block to make the length n = k + r. The resulting n-bit blocks are called codewords.
Topics discussed in this section: Error Detection, Error Correction, Hamming Distance, Minimum Hamming Distance
![Page 331: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/331.jpg)
Figure 10.5 Datawords and codewords in block coding
![Page 332: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/332.jpg)
The 4B/5B block coding discussed in Chapter 4 is a good example of this type of coding. In this coding scheme, k = 4 and n = 5.
As we saw, we have 2^k = 16 datawords and 2^n = 32 codewords. We saw that 16 out of the 32 codewords are used for message transfer and the rest are either used for other purposes or unused.
Example 10.1
![Page 333: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/333.jpg)
Figure 10.6 Process of error detection in block coding
![Page 334: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/334.jpg)
Table 10.1 A code for error detection (Example 10.2)
![Page 335: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/335.jpg)
Figure 10.7 Structure of encoder and decoder in error correction
![Page 336: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/336.jpg)
Table 10.2 A code for error correction (Example 10.3)
![Page 337: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/337.jpg)
Hamming Distance
The Hamming distance between two words is the number of differences between corresponding bits.
The minimum Hamming distance is the smallest Hamming distance between all possible pairs in a set of words.
![Page 338: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/338.jpg)
We can count the number of 1s in the XOR of two words.
1. The Hamming distance d(000, 011) is 2 because 000 ⊕ 011 = 011 (two 1s).
2. The Hamming distance d(10101, 11110) is 3 because 10101 ⊕ 11110 = 01011 (three 1s).
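The XOR-and-count rule can be sketched in a few lines of Python (an illustration, not part of the original slides):

```python
def hamming_distance(a: str, b: str) -> int:
    """d(a, b): the number of positions where two equal-length binary words
    differ, i.e. the number of 1s in their bitwise XOR."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("000", "011"))      # 2
print(hamming_distance("10101", "11110"))  # 3
```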
![Page 339: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/339.jpg)
Find the minimum Hamming distance of the coding scheme in Table 10.1.
Solution: We first find all Hamming distances.
Example 10.5
The dmin in this case is 2.
![Page 340: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/340.jpg)
Find the minimum Hamming distance of the coding scheme in Table 10.2.
Solution: We first find all the Hamming distances.
The dmin in this case is 3.
Example 10.6
![Page 341: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/341.jpg)
• The minimum Hamming distance for our first code scheme (Table 10.1) is 2. This code guarantees detection of only a single error.
• For example, if the third codeword (101) is sent and one error occurs, the received codeword does not match any valid codeword. If two errors occur, however, the received codeword may match a valid codeword and the errors are not detected.
Example 10.7
Minimum Distance for Error Detection
To guarantee the detection of up to s errors in all cases, the minimum Hamming distance in a block code must be dmin = s + 1.
Why?
![Page 342: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/342.jpg)
• Table 10.2 has dmin = 3. This code can detect up to two errors. When any of the valid codewords is sent, two errors create a codeword which is not in the table of valid codewords. The receiver cannot be fooled.
• What if three errors occur?
Example 10.8
![Page 343: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/343.jpg)
Figure 10.8 Geometric concept for finding dmin in error detection
![Page 344: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/344.jpg)
Figure 10.9 Geometric concept for finding dmin in error correction
To guarantee correction of up to t errors in all cases, the minimum Hamming distance in a block code must be dmin = 2t + 1.
![Page 345: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/345.jpg)
A code scheme has a Hamming distance dmin = 4. What is the error detection and correction capability of this scheme?
Solution: This code guarantees the detection of up to three errors (s = 3), but it can correct up to one error. In other words, if this code is used for error correction, part of its capability is wasted. Error correction codes need to have an odd minimum distance (3, 5, 7, …).
Example 10.9
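The arithmetic of Example 10.9 (s = dmin − 1 for detection, t = ⌊(dmin − 1)/2⌋ for correction) can be checked mechanically. The codeword sets below are the ones given for Tables 10.1 and 10.2 in Forouzan's text (000/011/101/110 and 00000/01011/10101/11110):

```python
from itertools import combinations

def min_hamming_distance(codewords):
    """dmin: the smallest pairwise Hamming distance in the code."""
    return min(sum(x != y for x, y in zip(a, b))
               for a, b in combinations(codewords, 2))

def capability(dmin):
    """Guaranteed detection s and correction t from dmin = s + 1 = 2t + 1."""
    return dmin - 1, (dmin - 1) // 2

assert min_hamming_distance(["000", "011", "101", "110"]) == 2
assert min_hamming_distance(["00000", "01011", "10101", "11110"]) == 3
print(capability(4))  # (3, 1): detects up to 3 errors, corrects 1
```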
![Page 346: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/346.jpg)
10-3 LINEAR BLOCK CODES
• Almost all block codes used today belong to a subset called linear block codes.
• A linear block code is a code in which the exclusive OR (addition modulo-2 / XOR) of two valid codewords creates another valid codeword.
![Page 347: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/347.jpg)
Let us see if the two codes we defined in Table 10.1 and Table 10.2 belong to the class of linear block codes.
1. The scheme in Table 10.1 is a linear block code because the result of XORing any codeword with any other codeword is a valid codeword. For example, the XORing of the second and third codewords creates the fourth one.
2. The scheme in Table 10.2 is also a linear block code. We can create all four codewords by XORing two other codewords.
Example 10.10
![Page 348: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/348.jpg)
Minimum Distance for Linear Block Codes
The minimum Hamming distance is the number of 1s in the nonzero valid codeword with the smallest number of 1s.
![Page 349: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/349.jpg)
Linear Block Codes: simple parity-check code, Hamming codes
Table 10.3 Simple parity-check code C(5, 4)
• A simple parity-check code is a single-bit error-detecting code in which n = k + 1 with dmin = 2.
• The extra bit (the parity bit) makes the total number of 1s in the codeword even.
• A simple parity-check code can detect an odd number of errors.
![Page 350: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/350.jpg)
Figure 10.10 Encoder and decoder for simple parity-check code
![Page 351: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/351.jpg)
Let us look at some transmission scenarios. Assume the sender sends the dataword 1011. The codeword created from this dataword is 10111, which is sent to the receiver. We examine five cases:
1. No error occurs; the received codeword is 10111. The syndrome is 0. The dataword 1011 is created.
2. One single-bit error changes a1. The received codeword is 10011. The syndrome is 1. No dataword is created.
3. One single-bit error changes r0. The received codeword is 10110. The syndrome is 1. No dataword is created.
Example 10.12
![Page 352: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/352.jpg)
4. An error changes r0 and a second error changes a3. The received codeword is 00110. The syndrome is 0. The dataword 0011 is created at the receiver. Note that here the dataword is wrongly created due to the syndrome value.
5. Three bits (a3, a2, and a1) are changed by errors. The received codeword is 01011. The syndrome is 1. The dataword is not created. This shows that the simple parity check, guaranteed to detect one single error, can also find any odd number of errors.
Example 10.12 (continued)
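The five scenarios of Example 10.12 can be replayed in a few lines (an illustrative sketch; the even-parity convention is the one stated for Table 10.3):

```python
def parity_encode(dataword: str) -> str:
    """Append an even-parity bit r0 so the codeword has an even number of 1s."""
    r0 = str(dataword.count("1") % 2)
    return dataword + r0

def syndrome(received: str) -> int:
    """Even parity check: 0 if the number of 1s is even, else 1."""
    return received.count("1") % 2

assert parity_encode("1011") == "10111"
# The five received words of Example 10.12; syndromes: 0 1 1 0 1
for word in ("10111", "10011", "10110", "00110", "01011"):
    print(word, syndrome(word))
```

Note that case 4 (00110, two errors) yields syndrome 0 and so slips through, exactly as the example describes.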
![Page 353: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/353.jpg)
Figure 10.11 Two-dimensional parity-check code
![Page 354: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/354.jpg)
Figure 10.11 Two-dimensional parity-check code
![Page 355: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/355.jpg)
Figure 10.11 Two-dimensional parity-check code
![Page 356: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/356.jpg)
Table 10.4 Hamming code C(7, 4)
1. All Hamming codes discussed in this book have dmin = 3.
2. The relationship between m and n in these codes is n = 2^m − 1.
![Page 357: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/357.jpg)
Figure 10.12 The structure of the encoder and decoder for a Hamming code
![Page 358: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/358.jpg)
Table 10.5 Logical decision made by the correction logic analyzer
r0 = a2 + a1 + a0
r1 = a3 + a2 + a1
r2 = a1 + a0 + a3
s0 = b2 + b1 + b0 + q0
s1 = b3 + b2 + b1 + q1
s2 = b1 + b0 + b3 + q2
(all additions modulo 2)
![Page 359: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/359.jpg)
Let us trace the path of three datawords from the sender to the destination:
1. The dataword 0100 becomes the codeword 0100011. The codeword 0100011 is received. The syndrome is 000; the final dataword is 0100.
2. The dataword 0111 becomes the codeword 0111001. The codeword 0011001 is received. The syndrome is 011. After flipping b2 (changing the 1 to 0), the final dataword is 0111.
3. The dataword 1101 becomes the codeword 1101000. The codeword 0001000 is received. The syndrome is 101. After flipping b0, we get 0000, the wrong dataword. This shows that our code cannot correct two errors.
Example 10.13
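Example 10.13 can be verified directly from the parity and syndrome equations above. The sketch below hard-codes the syndrome-to-bit table; the flip positions for syndromes other than 011 and 101 (which the example gives) are inferred from those same equations:

```python
def hamming_encode(a3, a2, a1, a0):
    """Return codeword [b3, b2, b1, b0, q2, q1, q0] using
    r0 = a2+a1+a0, r1 = a3+a2+a1, r2 = a1+a0+a3 (mod 2)."""
    r0 = (a2 + a1 + a0) % 2
    r1 = (a3 + a2 + a1) % 2
    r2 = (a1 + a0 + a3) % 2
    return [a3, a2, a1, a0, r2, r1, r0]

# Syndrome (s2, s1, s0) -> index of the bit to flip in
# [b3, b2, b1, b0, q2, q1, q0]; None means no error detected.
FLIP = {(0, 0, 0): None, (0, 0, 1): 6, (0, 1, 0): 5, (0, 1, 1): 1,
        (1, 0, 0): 4, (1, 0, 1): 3, (1, 1, 0): 0, (1, 1, 1): 2}

def hamming_decode(word):
    b3, b2, b1, b0, q2, q1, q0 = word
    s0 = (b2 + b1 + b0 + q0) % 2
    s1 = (b3 + b2 + b1 + q1) % 2
    s2 = (b1 + b0 + b3 + q2) % 2
    word = list(word)
    pos = FLIP[(s2, s1, s0)]
    if pos is not None:
        word[pos] ^= 1
    return word[:4]  # the dataword b3 b2 b1 b0

# The three traces of Example 10.13:
assert hamming_encode(0, 1, 0, 0) == [0, 1, 0, 0, 0, 1, 1]
assert hamming_decode([0, 0, 1, 1, 0, 0, 1]) == [0, 1, 1, 1]  # one error fixed
assert hamming_decode([0, 0, 0, 1, 0, 0, 0]) == [0, 0, 0, 0]  # two errors: wrong
```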
![Page 360: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/360.jpg)
10-4 CYCLIC CODES
Cyclic codes are special linear block codes with one extra property: in a cyclic code, if a codeword is cyclically shifted (rotated), the result is another codeword.
Topics discussed in this section: Cyclic Redundancy Check, Hardware Implementation, Polynomials, Cyclic Code Analysis, Advantages of Cyclic Codes, Other Cyclic Codes
![Page 361: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/361.jpg)
Table 10.6 A CRC code with C(7, 4)
![Page 362: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/362.jpg)
Figure 10.14 CRC encoder and decoder
![Page 363: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/363.jpg)
Figure 10.15 Division in CRC encoder
![Page 364: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/364.jpg)
Figure 10.16 Division in the CRC decoder for two cases
![Page 365: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/365.jpg)
Figure 10.21 A polynomial to represent a binary word
![Page 366: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/366.jpg)
Figure 10.22 CRC division using polynomials
The divisor in a cyclic code is normally called the generator polynomial or simply the generator.
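Division by the generator is just shift-and-XOR arithmetic. The sketch below assumes the divisor 1011 (the generator x³ + x + 1 used in this C(7,4) CRC example) and reproduces the dataword-1001 division of Figure 10.15:

```python
def crc_remainder(dataword: str, divisor: str = "1011") -> str:
    """Modulo-2 long division: append len(divisor)-1 zeros, then XOR-divide."""
    r = len(divisor) - 1
    bits = [int(b) for b in dataword + "0" * r]
    div = [int(b) for b in divisor]
    for i in range(len(dataword)):
        if bits[i]:                      # quotient bit is 1: subtract (XOR)
            for j in range(len(div)):
                bits[i + j] ^= div[j]
    return "".join(map(str, bits[-r:]))  # the remainder (CRC bits)

def crc_encode(dataword: str, divisor: str = "1011") -> str:
    return dataword + crc_remainder(dataword, divisor)

def crc_check(codeword: str, divisor: str = "1011") -> bool:
    """Decoder side: the received word must leave a zero remainder."""
    bits = [int(b) for b in codeword]
    div = [int(b) for b in divisor]
    for i in range(len(codeword) - len(divisor) + 1):
        if bits[i]:
            for j in range(len(div)):
                bits[i + j] ^= div[j]
    return all(b == 0 for b in bits)

print(crc_encode("1001"))  # 1001110, as in Figure 10.15
```

A single-bit error (e.g. 1001111) leaves a nonzero remainder and is therefore detected.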
![Page 367: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/367.jpg)
10-5 CHECKSUM
The last error detection method we discuss here is called the checksum. The checksum is used in the Internet by several protocols although not at the data link layer. However, we briefly discuss it here to complete our discussion on error checking.
Topics discussed in this section: Idea, One's Complement, Internet Checksum
![Page 368: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/368.jpg)
Suppose our data is a list of five 4-bit numbers that we want to send to a destination. In addition to sending these numbers, we send the sum of the numbers. For example, if the set of numbers is (7, 11, 12, 0, 6), we send (7, 11, 12, 0, 6, 36), where 36 is the sum of the original numbers. The receiver adds the five numbers and compares the result with the sum. If the two are the same, the receiver assumes no error, accepts the five numbers, and discards the sum. Otherwise, there is an error somewhere and the data are not accepted.
Example 10.18
![Page 369: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/369.jpg)
We can make the job of the receiver easier if we send the negative (complement) of the sum, called the checksum. In this case, we send (7, 11, 12, 0, 6, −36). The receiver can add all the numbers received (including the checksum). If the result is 0, it assumes no error; otherwise, there is an error.
Example 10.19
![Page 370: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/370.jpg)
How can we represent the number 21 in one’s complement arithmetic using only four bits?
Solution: The number 21 in binary is 10101 (it needs five bits). We can wrap the leftmost bit and add it to the four rightmost bits: (0101 + 1) = 0110, or 6.
Example 10.20
![Page 371: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/371.jpg)
How can we represent the number −6 in one’s complement arithmetic using only four bits?
SolutionIn one’s complement arithmetic, the negative or complement of a number is found by inverting all bits. Positive 6 is 0110; negative 6 is 1001. If we consider only unsigned numbers, this is 9. In other words, the complement of 6 is 9. Another way to find the complement of a number in one’s complement arithmetic is to subtract the number from 2n − 1 (16 − 1 in this case).
Example 10.21
![Page 372: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/372.jpg)
Figure 10.24 Example 10.22
![Page 373: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/373.jpg)
Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are added using one's complement addition.
4. The sum is complemented and becomes the checksum.
5. The checksum is sent with the data.
Note
![Page 374: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/374.jpg)
Receiver site:
1. The message (including checksum) is divided into 16-bit words.
2. All words are added using one's complement addition.
3. The sum is complemented and becomes the new checksum.
4. If the value of the checksum is 0, the message is accepted; otherwise, it is rejected.
Note
![Page 375: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/375.jpg)
Let us calculate the checksum for a text of 8 characters (“Forouzan”). The text needs to be divided into 2-byte (16-bit) words. We use ASCII (see Appendix A) to change each byte to a 2-digit hexadecimal number. For example, F is represented as 0x46 and o is represented as 0x6F. Figure 10.25 shows how the checksum is calculated at the sender and receiver sites. In part a of the figure, the value of the partial sum for the first column is 0x36. We keep the rightmost digit (6) and insert the leftmost digit (3) as the carry in the second column. The process is repeated for each column. Note that if there is any corruption, the checksum recalculated by the receiver is not all 0s. We leave this as an exercise.
Example 10.23
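The column-by-column hex addition of Figure 10.25 is equivalent to 16-bit one's-complement summation with end-around carry. A sketch (an illustrative implementation, not Forouzan's):

```python
def ones_complement_sum16(words):
    """Add 16-bit words with end-around carry (one's complement addition)."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # wrap the carry around
    return total

def internet_checksum(data: bytes) -> int:
    """Complement of the one's-complement sum of the 16-bit words of data."""
    if len(data) % 2:
        data += b"\x00"  # pad to an even number of bytes
    words = [int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2)]
    return ~ones_complement_sum16(words) & 0xFFFF

data = b"Forouzan"
cks = internet_checksum(data)
words = [int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2)]
# Receiver check: the complemented sum of data words + checksum is 0.
print(hex(cks), ~ones_complement_sum16(words + [cks]) & 0xFFFF)
```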
![Page 376: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/376.jpg)
Figure 10.25 Example 10.23
![Page 377: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/377.jpg)
Modern Coding Theory: LDPC Codes
Hossein Pishro-NikUniversity of Massachusetts Amherst
November 7, 2006
![Page 378: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/378.jpg)
378
Outline
- Introduction and motivation
- Error control coding
- Block codes
- Minimum distance
- Modern coding: LDPC codes
- Practical challenges
- Application of LDPC codes to holographic data storage
![Page 379: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/379.jpg)
Errors in Information Transmission
Digital Communications: Transporting information from one party to another, using a sequence of symbols, e.g. bits.
Transmitted bits: …. 0110010101 …
Received bits (a corrupted version of the transmitted bits): …. 0100010101 …
Noise & interference: the received sequence may differ from the transmitted one.
![Page 380: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/380.jpg)
Magnetic recording Track
Sector
010101011110010101001
Some of the bits may change during the transmission from the disk to the disk drive
Errors in Information Transmission: Cont.
![Page 381: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/381.jpg)
These communication systems can be modeled as Binary Symmetric Channels (BSC): each bit is flipped independently with probability p, where 0 < p < 0.5.
Sender → BSC → Receiver
Information bits 10010…10101… → corrupted bits 10110…00101…
Transition probabilities: 0 → 0 and 1 → 1 with probability 1 − p; 0 → 1 and 1 → 0 with probability p.
Errors in Information Transmission: Cont.
![Page 382: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/382.jpg)
Pioneers of Coding Theory
Richard Hamming Claude Shannon
Bell Telephone Laboratories
![Page 383: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/383.jpg)
Error Control Coding: Repetition Codes
BSC: bit error probability p = 0.01
10010…10101… 10110…00101…
Error Control Coding: Use redundancy to reduce the bit error rate
Three-fold repetition code: send each bit three times
Example:
![Page 384: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/384.jpg)
Repetition Codes: Cont.
Encoder (repeat each bit three times): codeword (y1, y2, y3) = (x1, x1, x1)
BSC
Corrupted codeword: (z1, z2, z3)
Decoder (majority voting): estimate x̂1
![Page 385: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/385.jpg)
Repetition Codes: Cont.
Encoder: (0) → codeword (0,0,0)
BSC
Corrupted codeword: (1,0,0)
Decoder (majority voting): (0)
Successful decoding!
![Page 386: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/386.jpg)
Decoding Error
Decoding error probability p_e = Prob{2 or 3 bits in the codeword received in error}:
p_e = 3p²(1 − p) + p³ ≈ 3 × 10⁻⁴ for p = 0.01
Advantage: reduced bit error rate
Disadvantage: we lose bandwidth because each bit must be sent three times
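The p_e figure above can be checked both in closed form and by simulating majority-vote decoding over a BSC (an illustrative sketch):

```python
import random

p = 0.01
# Closed form: decoding fails when 2 or 3 of the 3 repeated bits are flipped.
pe = 3 * p**2 * (1 - p) + p**3
print(pe)  # about 3e-4

# Monte Carlo check: send a bit as (x, x, x), decode by majority vote.
random.seed(1)
trials = 200_000
failures = sum(
    sum(random.random() < p for _ in range(3)) >= 2  # 2 or 3 flips -> error
    for _ in range(trials)
)
print(failures / trials)
```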
![Page 387: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/387.jpg)
Error Control Coding: Block Codes
Information block (x1, x2, …, xk) → Encoder → codeword (y1, y2, …, yn) → BSC → corrupted codeword (z1, z2, …, zn) → Decoder → (x̂1, x̂2, …, x̂k)
Encoding: mapping the information block to the corresponding codeword
Decoding: an algorithm for recovering the information block from the corrupted codeword
Always n > k: redundancy in the codeword
![Page 388: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/388.jpg)
Error Control Coding: Block Codes
k: code dimension, n: code length
This is called an (n, k) block code
Information block (x1, x2, …, xk) → Encoder → codeword (y1, y2, …, yn) → BSC → corrupted codeword (z1, z2, …, zn) → Decoder → (x̂1, x̂2, …, x̂k)
![Page 389: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/389.jpg)
Code Rate
R = code rate = code dimension / code length = k / n
R shows the amount of redundancy in the codeword
Higher R = lower redundancy
0 ≤ R ≤ 1
In general an (n, k) block code is a 1-1 mapping from k bits to n bits: (x1, x2, …, xk) → (y1, y2, …, yn)
![Page 390: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/390.jpg)
Repetition Codes Revisited
Encoder: (y1, y2, y3) = (x1, x1, x1); BSC; Decoder: majority voting over (z1, z2, z3) → x̂1
For the repetition code: k = 1, n = 3, R = 1/3
The repetition code is a (3,1) block code
![Page 391: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/391.jpg)
391
Block Codes: Cont.
)1,1,1()1(
)0,0,0()0(
There are two valid codewords in the repetition code:
Valid codewords
A (5,3) block code: (n=5, k=3, R=3/5):
(000) → (00000)
(001) → (01100)
(010) → (10010)
(011) → (01011)
(100) → (10111)
(101) → (01110)
(110) → (01001)
(111) → (11000)
8 valid codewords
The number of valid codewords is equal to the number of possible information blocks, 2^k.
![Page 392: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/392.jpg)
392
Block Codes: Cont.
All k-tuples (2^k points), for example (0,0), (0,1), (1,0), (1,1), are mapped one-to-one to the valid codewords among all n-tuples (2^n points):
(x1, x2, ..., xk) → (y1, y2, ..., yn)
![Page 393: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/393.jpg)
393
Good Block Codes
There exist more efficient and more powerful codes.
Good Codes:
High rates = lower redundancy (depends on the channel error rate p)
Low error rate at the decoder
Simple and practical encoding and decoding
![Page 394: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/394.jpg)
394
Linear Block Codes
A linear mapping C: (x1, x2, ..., xk) → (y1, y2, ..., yn)

Linear block codes:
Simple structure: easier to analyze
Simple encoding algorithms
(y1, y2, ..., yn) = (x1, x2, ..., xk) G

        | g11 g12 ... g1n |
    G = | g21 g22 ... g2n |
        | ............... |
        | gk1 gk2 ... gkn |

Generator matrix G (k × n)
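Encoding with a generator matrix is just a vector-matrix product over GF(2). A small Python sketch, using a hypothetical generator matrix for a (5,3) code (the matrix entries are illustrative, not taken from the slides):

```python
def encode_linear(x, G):
    # y = x . G over GF(2): each codeword bit is a mod-2 sum of information bits
    n = len(G[0])
    return [sum(x[i] * G[i][j] for i in range(len(x))) % 2 for j in range(n)]

# Hypothetical generator matrix for a (5,3) systematic linear code
G = [[1, 0, 0, 1, 1],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1]]

assert encode_linear([1, 1, 0], G) == [1, 1, 0, 0, 1]
```

Because the mapping is linear, the sum (XOR) of any two codewords is again a codeword, which is what makes linear codes easy to analyze.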
![Page 395: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/395.jpg)
395
Linear Block Codes
There are many practical linear block codes:
Hamming codes
Cyclic codes
Reed-Solomon codes
BCH codes
…
![Page 396: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/396.jpg)
396
Channel Capacity
Channel capacity (Shannon): The maximum achievable data rate
Shannon capacity is achievable using random codes
Information block (x1, x2, ..., xk) → Encoder → Codeword (y1, y2, ..., yn) → Noisy channel → Corrupted codeword (z1, z2, ..., zn) → Decoder → (x1, x2, ..., xk)
![Page 397: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/397.jpg)
397
Shannon Codes
All k-tuples (2^k points), for example (0,0), (0,1), (1,0), (1,1), are mapped by a random mapping to valid codewords among all n-tuples (2^n points):
(x1, x2, ..., xk) → (y1, y2, ..., yn)
![Page 398: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/398.jpg)
398
Shannon Random Codes
As n (the block length) goes to infinity, random codes achieve the channel capacity, i.e.,
Code rate R approaches C, while the decoding error probability goes to zero
![Page 399: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/399.jpg)
399
Error Control Coding: Low-Density Parity-Check (LDPC) Codes
Ideal codes:
Have efficient encoding
Have efficient decoding
Can approach channel capacity

Low-density parity-check (LDPC) codes:
Random codes: based on random graphs
Simple iterative decoding
![Page 400: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/400.jpg)
400
t-Error-Correcting Codes
The repetition code can correct one error in the codeword; however, it fails to correct a higher number of errors.
A code that is capable of correcting t errors in the codewords is called a t-error-correcting code.
The repetition code is a 1-error-correcting code.
![Page 401: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/401.jpg)
401
Minimum Distance
The minimum distance of a code is the minimum Hamming distance between its codewords:
d_min = Min{dist(u,v): u and v are codewords}

For the repetition code, since there are only two valid codewords, c1 = (0, 0, 0) and c2 = (1, 1, 1),

d_min = dist(c1, c2) = 3
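The minimum distance can be computed by brute force over all codeword pairs; a short Python sketch (a code with minimum distance d_min can correct t = (d_min - 1) // 2 errors, which gives t = 1 for the repetition code):

```python
from itertools import combinations

def hamming_dist(u, v):
    # Number of positions in which the two words differ
    return sum(a != b for a, b in zip(u, v))

def min_distance(codewords):
    # Minimum Hamming distance over all pairs of valid codewords
    return min(hamming_dist(u, v) for u, v in combinations(codewords, 2))

rep_code = ["000", "111"]
assert min_distance(rep_code) == 3
assert (min_distance(rep_code) - 1) // 2 == 1   # t = 1 error corrected
```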
![Page 402: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/402.jpg)
402
Minimum Distance: Cont.
All vectors of length n (n-tuples)
![Page 403: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/403.jpg)
403
Minimum Distance: Cont.
Higher minimum distance = Stronger code
Example: For the repetition code, t = 1 and d_min = 3.
![Page 404: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/404.jpg)
404
Modern Coding Theory
• Random linear codes can achieve channel capacity
• Linear codes can be encoded efficiently
• Decoding of linear codes: NP-hard
• Gallager’s idea: find a subclass of random linear codes that can be decoded efficiently
![Page 405: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/405.jpg)
405
Modern Coding Theory
Iterative coding schemes: LDPC codes, Turbo codes
Iterative Decoder
Encoder
BSC
Iterative decoding instead of distance-based decoding
![Page 406: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/406.jpg)
406
Introduction to Channel Coding
Noisy channels:
Example: binary erasure channel (BEC)
Other channels: Gaussian channel, binary symmetric channel,…
Information bits (10010…10101…) → Noisy channel → Corrupted bits (10e10…e01e1…)
BEC: inputs {0, 1}, outputs {0, 1, e}, where e denotes an erasure
![Page 407: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/407.jpg)
407
Low-Density Parity-Check Codes
Defined by random sparse graphs (Tanner graphs): variable (bit) nodes y1, y2, y3, ..., yn are connected to check (message) nodes, and each check node enforces a parity constraint such as y1 ⊕ y2 ⊕ y3 = 0.

Simple iterative decoding: message-passing algorithm
![Page 408: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/408.jpg)
408
Important Recent Developments
Luby et al. and Richardson et al.: density evolution; optimization using density evolution
Shokrollahi et al.: capacity-achieving LDPC codes for the binary erasure channel (BEC)
Richardson et al. and Jin et al.: efficient encoding; irregular repeat-accumulate codes
![Page 409: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/409.jpg)
409
Standard Iterative Decoding over the BEC
Codeword 0100101101001 → BEC → received word with some bits erased (e)
Standard Iterative Algorithm:
Repeat for every check node {
    If only one of its neighbors is missing, recover it
}
Example: a check node whose neighbors are y1, y2, y3, with y1 = 0 and y2 = 1 received and y3 erased, recovers y3 = y1 ⊕ y2 = 0 ⊕ 1 = 1.
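The standard iterative algorithm above can be sketched in Python for the BEC; the function name and data layout are illustrative:

```python
def decode_bec(received, checks):
    # received: list of bits, with 'e' marking an erasure
    # checks: list of parity checks, each a list of variable-node indices
    #         whose bits must XOR to 0
    y = list(received)
    progress = True
    while progress:
        progress = False
        for check in checks:
            erased = [i for i in check if y[i] == 'e']
            # A check with exactly one erased neighbor recovers it
            # as the XOR of its known neighbors
            if len(erased) == 1:
                y[erased[0]] = sum(y[i] for i in check if y[i] != 'e') % 2
                progress = True
    return y

# Check y1 + y2 + y3 = 0 with y3 erased: recovered as 0 XOR 1 = 1
assert decode_bec([0, 1, 'e'], [[0, 1, 2]]) == [0, 1, 1]
```

When every check touching the erasures has two or more erased neighbors, the loop makes no progress and the algorithm stops, which is exactly the stopping-set failure discussed next.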
![Page 410: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/410.jpg)
410
Standard Iterative Decoding: Cont.
Decoding is successful!
![Page 411: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/411.jpg)
411
Algorithm A: Cont.
The algorithm may fail
It gets stuck when every check node touching the erased bits has two or more erased neighbors, so no check can recover anything; such a set of erased positions is called a stopping set S.
![Page 412: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/412.jpg)
412
Practical Challenges: Finite-Length Codes
• In practice, we need to use short or moderate-length codes
• Short or moderate length codes do not perform as well as long codes
![Page 413: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/413.jpg)
413
Error Floor of LDPC Codes
Figure: BER (from 10^-1 down to 10^-9) versus average erasure probability of the channel, comparing a code with a high error floor against one with a low error floor.
Capacity-approaching LDPC Codes suffer from the error floor problem
![Page 414: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/414.jpg)
414
Volume Holographic Memory (VHM) Systems
![Page 415: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/415.jpg)
415
• Thermal noise, shot noise, …
• Limited diffraction
• Aberration
• Misalignment error
• Inter-page interference (IPI)
• Photovoltaic damage
• Non-uniform erasure
Noise and Error Sources
![Page 416: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/416.jpg)
416
The Scaling Law

• The magnitudes of systematic errors and thermal noise are assumed to remain unchanged with respect to M (the number of pages), and SNR is proportional to 1/M²:

SNR ∝ 1/M²

• SNR decreases as the number of pages increases.
• There exists an optimum number of pages that maximizes the storage capacity.
![Page 417: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/417.jpg)
417
Raw Error Distribution over a Page
• Bits in different regions of a page are affected by different noise powers. • The noise power is higher at edges.
![Page 418: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/418.jpg)
418
Properties and Requirements
• Can use large block lengths
• Non-uniform error correction
• Error floor: Target BER < 10^-12
• Simple implementation: Simple decoding
![Page 419: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/419.jpg)
419
Ensembles for Non-uniform Error Correction

Figure: Tanner graph with variable nodes (the bits from the first region, …) connected to check nodes c1, c2, …, ck.
![Page 420: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/420.jpg)
420
Ensemble Properties
• Threshold effect
• Concentration theorem
• Density evolution
![Page 421: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/421.jpg)
421
Ensemble Properties
• Stability condition (BEC):
![Page 422: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/422.jpg)
422
Design Methodology
• The performance of the decoder is not directly related to the minimum distance.
• However, the minimum distance still plays an important role:– Example: error floor effect
• To eliminate the error floor, we avoid degree-two variable nodes so that the code has a large minimum distance.
• For efficient decoding, and also for simplicity of design, we use low degrees for variable nodes.
![Page 423: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/423.jpg)
423
Performance on VHM
Figure: BER (from 10^-2 down to 10^-9) versus SNR for code lengths n = 10^4 and n = 10^5; Rate = 0.85, average degree = 6. Gap from capacity at BER 10^-9: 0.6 dB.
![Page 424: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/424.jpg)
424
Storage Capacity
Figure: storage capacity (Gbits, 0 to 1) versus number of pages (2000 to 6000):
Information-theoretic capacity for soft-decision decoding: 0.95 Gb
LDPC, soft decision: 0.84 Gb
LDPC, hard decision: 0.76 Gb
RS, hard decision: 0.52 Gb
![Page 425: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/425.jpg)
425
Conclusion
• Carefully designed LDPC codes can result in a significant increase in the storage capacity.
• By incorporating channel information in the design of LDPC codes:
- Small gap from capacity
- Error floor reduction
- More efficient decoding
![Page 426: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/426.jpg)
426
Modern Coding Theory
The performance of the decoder is not directly related to the minimum distance.
However, the minimum distance still plays an important role:
Example: error floor effect
![Page 427: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/427.jpg)
McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004
Chapter 4
DigitalTransmission
![Page 428: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/428.jpg)
4.1 Line Coding
Some Characteristics
Line Coding Schemes
Some Other Schemes
![Page 429: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/429.jpg)
Figure 4.1 Line coding
![Page 430: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/430.jpg)
Figure 4.2 Signal level versus data level
![Page 431: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/431.jpg)
Figure 4.3 DC component
![Page 432: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/432.jpg)
Example 1

A signal has two data levels with a pulse duration of 1 ms. We calculate the pulse rate and bit rate as follows:

Pulse Rate = 1/10^-3 = 1000 pulses/s
Bit Rate = Pulse Rate × log2 L = 1000 × log2 2 = 1000 bps

Example 2

A signal has four data levels with a pulse duration of 1 ms. We calculate the pulse rate and bit rate as follows:

Pulse Rate = 1/10^-3 = 1000 pulses/s
Bit Rate = Pulse Rate × log2 L = 1000 × log2 4 = 2000 bps
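Examples 1 and 2 follow from the same formula; a quick Python check (the helper name is illustrative):

```python
from math import log2

def bit_rate(pulse_rate, levels):
    # Bit rate = pulse rate x log2(L), where L is the number of data levels
    return pulse_rate * log2(levels)

pulse_rate = 1 / 1e-3              # 1 ms pulses -> 1000 pulses/s
assert bit_rate(pulse_rate, 2) == 1000   # Example 1
assert bit_rate(pulse_rate, 4) == 2000   # Example 2
```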
![Page 433: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/433.jpg)
Figure 4.4 Lack of synchronization
![Page 434: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/434.jpg)
Example 3

In a digital transmission, the receiver clock is 0.1 percent faster than the sender clock. How many extra bits per second does the receiver receive if the data rate is 1 Kbps? How many if the data rate is 1 Mbps?

Solution

At 1 Kbps: 1000 bits sent, 1001 bits received, 1 extra bps
At 1 Mbps: 1,000,000 bits sent, 1,001,000 bits received, 1000 extra bps
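The arithmetic in Example 3 can be checked with a one-line helper (the name is illustrative):

```python
def extra_bits_per_second(data_rate, clock_skew):
    # A receiver clock running `clock_skew` (fractional) faster than the
    # sender samples that many extra bits every second
    return int(data_rate * clock_skew)

assert extra_bits_per_second(1_000, 0.001) == 1          # 1 Kbps -> 1 extra bps
assert extra_bits_per_second(1_000_000, 0.001) == 1000   # 1 Mbps -> 1000 extra bps
```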
![Page 435: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/435.jpg)
Figure 4.5 Line coding schemes
![Page 436: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/436.jpg)
Figure 4.6 Unipolar encoding
Note: Unipolar encoding uses only one voltage level.
![Page 437: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/437.jpg)
Figure 4.7 Types of polar encoding
Note: Polar encoding uses two voltage levels (positive and negative).
![Page 438: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/438.jpg)
Note: In NRZ-L the level of the signal is dependent upon the state of the bit.

Note: In NRZ-I the signal is inverted if a 1 is encountered.
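A minimal Python sketch of NRZ-I, assuming the line starts at the negative level (the starting level is a convention, not stated in the slides):

```python
def nrz_i(bits):
    # NRZ-I: invert the current signal level on a 1, hold it on a 0
    level, signal = -1, []
    for b in bits:
        if b == 1:
            level = -level
        signal.append(level)
    return signal

assert nrz_i([1, 0, 1, 1]) == [1, 1, -1, 1]
```

Because only 1s produce transitions, a long run of 0s gives no transitions at all, which is the synchronization weakness noted below.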
![Page 439: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/439.jpg)
Figure 4.8 NRZ-L and NRZ-I encoding
![Page 440: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/440.jpg)
Figure 4.9 RZ encoding
![Page 441: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/441.jpg)
Note: A good encoded digital signal must contain a provision for synchronization.
![Page 442: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/442.jpg)
Figure 4.10 Manchester encoding
![Page 443: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/443.jpg)
Note: In Manchester encoding, the transition at the middle of the bit is used for both synchronization and bit representation.
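A minimal Python sketch of Manchester encoding; the polarity convention used here (0 as high-to-low, 1 as low-to-high) is an assumption, since conventions differ between texts:

```python
def manchester(bits):
    # Manchester: every bit interval has a mid-bit transition, so the
    # receiver can recover the clock from the signal itself
    half = {0: (1, -1),   # 0 -> high-to-low (assumed convention)
            1: (-1, 1)}   # 1 -> low-to-high
    signal = []
    for b in bits:
        signal.extend(half[b])
    return signal

assert manchester([1, 0]) == [-1, 1, 1, -1]
```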
![Page 444: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/444.jpg)
Figure 4.11 Differential Manchester encoding
![Page 445: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/445.jpg)
Note: In differential Manchester encoding, the transition at the middle of the bit is used only for synchronization. The bit representation is defined by the inversion or noninversion at the beginning of the bit.

Note: In bipolar encoding, we use three levels: positive, zero, and negative.
![Page 446: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/446.jpg)
Figure 4.12 Bipolar AMI encoding
![Page 447: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/447.jpg)
Figure 4.13 2B1Q
![Page 448: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/448.jpg)
Figure 4.14 MLT-3 signal
![Page 449: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/449.jpg)
4.2 Block Coding
Steps in Transformation
Some Common Block Codes
![Page 450: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/450.jpg)
Figure 4.15 Block coding
![Page 451: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/451.jpg)
Figure 4.16 Substitution in block coding
![Page 452: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/452.jpg)
Table 4.1 4B/5B encoding

Data Code  | Data Code
0000 11110 | 1000 10010
0001 01001 | 1001 10011
0010 10100 | 1010 10110
0011 10101 | 1011 10111
0100 01010 | 1100 11010
0101 01011 | 1101 11011
0110 01110 | 1110 11100
0111 01111 | 1111 11101
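The substitution step of 4B/5B can be sketched as a dictionary lookup over the data codes of Table 4.1 (the function name is illustrative):

```python
# Data codes from Table 4.1: each 4-bit group maps to a 5-bit code
FOUR_B_FIVE_B = {
    "0000": "11110", "0001": "01001", "0010": "10100", "0011": "10101",
    "0100": "01010", "0101": "01011", "0110": "01110", "0111": "01111",
    "1000": "10010", "1001": "10011", "1010": "10110", "1011": "10111",
    "1100": "11010", "1101": "11011", "1110": "11100", "1111": "11101",
}

def encode_4b5b(bits):
    # Substitute each 4-bit group with its 5-bit code
    assert len(bits) % 4 == 0
    return "".join(FOUR_B_FIVE_B[bits[i:i+4]] for i in range(0, len(bits), 4))

assert encode_4b5b("00010110") == "0100101110"
```

The 5-bit codes are chosen so that no data code has more than one leading 0 or more than two trailing 0s, which limits runs of 0s on the line.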
![Page 453: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/453.jpg)
Table 4.1 4B/5B encoding (Continued)

Data Code
Q (Quiet) 00000
I (Idle) 11111
H (Halt) 00100
J (start delimiter) 11000
K (start delimiter) 10001
T (end delimiter) 01101
S (Set) 11001
R (Reset) 00111
![Page 454: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/454.jpg)
Figure 4.17 Example of 8B/6T encoding
![Page 455: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/455.jpg)
4.3 Sampling

Pulse Amplitude Modulation
Pulse Code Modulation
Sampling Rate: Nyquist Theorem
How Many Bits per Sample?
Bit Rate
![Page 456: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/456.jpg)
Figure 4.18 PAM
![Page 457: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/457.jpg)
Note: Pulse amplitude modulation has some applications, but it is not used by itself in data communication. However, it is the first step in another very popular conversion method called pulse code modulation.
![Page 458: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/458.jpg)
Figure 4.19 Quantized PAM signal
![Page 459: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/459.jpg)
Figure 4.20 Quantizing by using sign and magnitude
![Page 460: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/460.jpg)
Figure 4.21 PCM
![Page 461: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/461.jpg)
Figure 4.22 From analog signal to PCM digital code
![Page 462: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/462.jpg)
Note: According to the Nyquist theorem, the sampling rate must be at least 2 times the highest frequency.
![Page 463: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/463.jpg)
Figure 4.23 Nyquist theorem
![Page 464: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/464.jpg)
Example 4

What sampling rate is needed for a signal with a bandwidth of 10,000 Hz (1000 to 11,000 Hz)?

Solution

The sampling rate must be twice the highest frequency in the signal:
Sampling rate = 2 × 11,000 = 22,000 samples/s
![Page 465: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/465.jpg)
Example 5

A signal is sampled. Each sample requires at least 12 levels of precision (+0 to +5 and -0 to -5). How many bits should be sent for each sample?

Solution

We need 4 bits: 1 bit for the sign and 3 bits for the value. A 3-bit value can represent 2^3 = 8 levels (000 to 111), which is more than what we need. A 2-bit value is not enough since 2^2 = 4. A 4-bit value is too much because 2^4 = 16.
![Page 466: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/466.jpg)
Example 6

We want to digitize the human voice. What is the bit rate, assuming 8 bits per sample?

Solution

The human voice normally contains frequencies from 0 to 4000 Hz.
Sampling rate = 4000 × 2 = 8000 samples/s
Bit rate = sampling rate × number of bits per sample = 8000 × 8 = 64,000 bps = 64 Kbps

Note: We can always change a band-pass signal to a low-pass signal before sampling. In this case, the sampling rate is twice the bandwidth.
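The calculation in Example 6 generalizes to a small helper (the function name is illustrative):

```python
def pcm_bit_rate(highest_freq_hz, bits_per_sample):
    # Nyquist: sample at twice the highest frequency, then multiply by
    # the number of bits per sample to get the PCM bit rate
    sampling_rate = 2 * highest_freq_hz
    return sampling_rate * bits_per_sample

# Example 6: voice (0-4000 Hz) at 8 bits/sample -> 64 Kbps
assert pcm_bit_rate(4000, 8) == 64_000
```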
![Page 467: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/467.jpg)
4.4 Transmission Mode
Parallel Transmission
Serial Transmission
![Page 468: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/468.jpg)
Figure 4.24 Data transmission
![Page 469: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/469.jpg)
Figure 4.25 Parallel transmission
![Page 470: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/470.jpg)
Figure 4.26 Serial transmission
![Page 471: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/471.jpg)
Note: In asynchronous transmission, we send 1 start bit (0) at the beginning and 1 or more stop bits (1s) at the end of each byte. There may be a gap between each byte.

Note: Asynchronous here means “asynchronous at the byte level,” but the bits are still synchronized; their durations are the same.
![Page 472: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/472.jpg)
Figure 4.27 Asynchronous transmission
![Page 473: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/473.jpg)
Note: In synchronous transmission, we send bits one after another without start/stop bits or gaps. It is the responsibility of the receiver to group the bits.
![Page 474: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/474.jpg)
Figure 4.28 Synchronous transmission
![Page 475: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/475.jpg)
PART III
Data Link Layer
![Page 476: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/476.jpg)
Position of the data-link layer
![Page 477: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/477.jpg)
Data link layer duties
![Page 478: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/478.jpg)
LLC and MAC sublayers
![Page 479: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/479.jpg)
IEEE standards for LANs
![Page 480: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/480.jpg)
Chapters
Chapter 10 Error Detection and Correction
Chapter 11 Data Link Control and Protocols
Chapter 12 Point-To-Point Access
Chapter 13 Multiple Access
Chapter 14 Local Area Networks
Chapter 15 Wireless LANs
Chapter 16 Connecting LANs
Chapter 17 Cellular Telephone and Satellite Networks
Chapter 18 Virtual Circuit Switching
![Page 481: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/481.jpg)
Chapter 10
Error Detectionand
Correction
![Page 482: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/482.jpg)
Note: Data can be corrupted during transmission. For reliable communication, errors must be detected and corrected.
![Page 483: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/483.jpg)
10.1 Types of Error
Single-Bit Error
Burst Error
![Page 484: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/484.jpg)
Note: In a single-bit error, only one bit in the data unit has changed.
![Page 485: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/485.jpg)
10.1 Single-bit error
Note: A burst error means that 2 or more bits in the data unit have changed.
![Page 486: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/486.jpg)
10.2 Burst error of length 5
![Page 487: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/487.jpg)
10.2 Detection
Redundancy
Parity Check
Cyclic Redundancy Check (CRC)
Checksum
![Page 488: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/488.jpg)
Note: Error detection uses the concept of redundancy, which means adding extra bits for detecting errors at the destination.
![Page 489: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/489.jpg)
10.3 Redundancy
![Page 490: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/490.jpg)
10.4 Detection methods
![Page 491: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/491.jpg)
10.5 Even-parity concept
![Page 492: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/492.jpg)
Note: In parity check, a parity bit is added to every data unit so that the total number of 1s is even (or odd for odd parity).
![Page 493: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/493.jpg)
Example 1
Suppose the sender wants to send the word world. In ASCII the five characters are coded as
1110111 1101111 1110010 1101100 1100100
The following shows the actual bits sent
11101110 11011110 11100100 11011000 11001001
Example 2
Now suppose the word world in Example 1 is received by the receiver without being corrupted in transmission.
11101110 11011110 11100100 11011000 11001001
The receiver counts the 1s in each character and comes up with even numbers (6, 6, 4, 4, 4). The data are accepted.
![Page 494: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/494.jpg)
Example 3
Now suppose the word world in Example 1 is corrupted during transmission.
11111110 11011110 11101100 11011000 11001001
The receiver counts the 1s in each character and comes up with even and odd numbers (7, 6, 5, 4, 4). The receiver knows that the data are corrupted, discards them, and asks for retransmission.
Note: Simple parity check can detect all single-bit errors. It can detect burst errors only if the total number of errors in each data unit is odd.
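The parity encoding and checking of Examples 1–3 can be reproduced with a short sketch (the function names are my own, not from the slides):

```python
def add_even_parity(char):
    """Append a parity bit so the 8-bit unit has an even number of 1s."""
    bits = format(ord(char), '07b')          # 7-bit ASCII
    parity = '1' if bits.count('1') % 2 else '0'
    return bits + parity

def check_even_parity(unit):
    """A unit is accepted only if its count of 1s is even."""
    return unit.count('1') % 2 == 0

sent = [add_even_parity(c) for c in "world"]
print(sent)  # ['11101110', '11011110', '11100100', '11011000', '11001001']
print(all(check_even_parity(u) for u in sent))   # True  (Example 2: accepted)
print(check_even_parity('11111110'))             # False (Example 3: corrupted 'w')
```

The corrupted unit `11111110` has seven 1s, so the receiver rejects it, exactly as in Example 3.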
![Page 495: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/495.jpg)
10.6 Two-dimensional parity
![Page 496: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/496.jpg)
Example 4
Suppose the following block is sent:
10101001 00111001 11011101 11100111 10101010
However, it is hit by a burst noise of length 8, and some bits are corrupted.
10100011 10001001 11011101 11100111 10101010
When the receiver checks the parity bits, some of the bits do not follow the even-parity rule and the whole block is discarded.
Note: In two-dimensional parity check, a block of bits is divided into rows and a redundant row of bits is added to the whole block.
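The fifth unit in Example 4's block is the column-wise parity of the four data units; a short sketch verifying this (the helper name is mine):

```python
def column_parity(rows):
    """XOR the data units column by column to get the redundant row."""
    out = []
    for col in zip(*rows):
        out.append(str(sum(int(b) for b in col) % 2))
    return ''.join(out)

data = ['10101001', '00111001', '11011101', '11100111']
print(column_parity(data))  # '10101010' – the fifth unit in the block sent

# The block corrupted by the burst error no longer matches its parity row:
bad = ['10100011', '10001001', '11011101', '11100111']
print(column_parity(bad) == '10101010')  # False – the whole block is discarded
```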
![Page 497: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/497.jpg)
10.7 CRC generator and checker
![Page 498: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/498.jpg)
10.8 Binary division in a CRC generator
![Page 499: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/499.jpg)
10.9 Binary division in CRC checker
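The generator and checker in Figures 10.8–10.9 both perform modulo-2 long division; a minimal sketch (the dataword `100100` and divisor `1101` are illustrative values, not taken from this transcript):

```python
def mod2_div(bits, divisor):
    """Modulo-2 long division: XOR the divisor in wherever the leading bit is 1.
    Returns the remainder (the last len(divisor)-1 bits of the register)."""
    reg = [int(b) for b in bits]
    div = [int(b) for b in divisor]
    for i in range(len(bits) - len(divisor) + 1):
        if reg[i]:
            for j, d in enumerate(div):
                reg[i + j] ^= d
    return ''.join(map(str, reg[-(len(divisor) - 1):]))

# Generator: append n-k zeros to the dataword, divide, keep the remainder as CRC
rem = mod2_div('100100' + '000', '1101')
print(rem)                               # '001'
# Checker: dividing the received codeword gives an all-zero remainder
print(mod2_div('100100' + rem, '1101'))  # '000' – no error detected
```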
![Page 500: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/500.jpg)
10.10 A polynomial
10.11 A polynomial representing a divisor
![Page 501: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/501.jpg)
Table 10.1 Standard polynomials

| Name | Polynomial | Application |
|------|------------|-------------|
| CRC-8 | x^8 + x^2 + x + 1 | ATM header |
| CRC-10 | x^10 + x^9 + x^5 + x^4 + x^2 + 1 | ATM AAL |
| ITU-16 | x^16 + x^12 + x^5 + 1 | HDLC |
| ITU-32 | x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1 | LANs |
![Page 502: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/502.jpg)
Example 5
It is obvious that we cannot choose x (binary 10) or x^2 + x (binary 110) as the polynomial, because both are divisible by x. However, we can choose x + 1 (binary 11) because it is not divisible by x, but is divisible by x + 1. We can also choose x^2 + 1 (binary 101) because it is divisible by x + 1 (binary division).
Example 6
The CRC-12 polynomial
x^12 + x^11 + x^3 + x + 1
which has a degree of 12, will detect all burst errors affecting an odd number of bits, will detect all burst errors with a length less than or equal to 12, and will detect, 99.97 percent of the time, burst errors with a length of 12 or more.
![Page 503: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/503.jpg)
10.12 Checksum
![Page 504: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/504.jpg)
10.13 Data unit and checksum
![Page 505: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/505.jpg)
The sender follows these steps:
• The unit is divided into k sections, each of n bits.
• All sections are added using one's complement to get the sum.
• The sum is complemented and becomes the checksum.
• The checksum is sent with the data.

The receiver follows these steps:
• The unit is divided into k sections, each of n bits.
• All sections are added using one's complement to get the sum.
• The sum is complemented.
• If the result is zero, the data are accepted; otherwise, rejected.
![Page 506: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/506.jpg)
![Page 507: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/507.jpg)
Example 7
Suppose the following block of 16 bits is to be sent using a checksum of 8 bits.
10101001 00111001
The numbers are added using one's complement:

           10101001
         + 00111001
         ----------
Sum        11100010
Checksum   00011101

The pattern sent is 10101001 00111001 00011101
![Page 508: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/508.jpg)
Example 8
Now suppose the receiver receives the pattern sent in Example 7 and there is no error.
10101001 00111001 00011101
When the receiver adds the three sections, it will get all 1s, which, after complementing, is all 0s and shows that there is no error.
10101001
00111001
00011101
Sum 11111111
Complement 00000000 means that the pattern is OK.
![Page 509: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/509.jpg)
Example 9
Now suppose there is a burst error of length 5 that affects 4 bits.
10101111 11111001 00011101
When the receiver adds the three sections, it gets
10101111
11111001
00011101
Partial Sum 1 11000101
Carry 1
Sum 11000110
Complement 00111001 the pattern is corrupted.
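Examples 7–9 can be reproduced with a small one's-complement adder (a sketch; the names are my own):

```python
def ones_complement_sum(words, bits=8):
    """Add words, wrapping any carry back into the low-order bits."""
    total = 0
    mask = (1 << bits) - 1
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # end-around carry
    return total

data = [0b10101001, 0b00111001]
s = ones_complement_sum(data)
checksum = ~s & 0xFF
print(format(s, '08b'), format(checksum, '08b'))   # 11100010 00011101 (Example 7)

# Example 8: the intact pattern sums to all 1s; its complement is all 0s
print(ones_complement_sum(data + [checksum]))      # 255

# Example 9: the corrupted pattern gives a non-zero complement
bad = [0b10101111, 0b11111001, 0b00011101]
print(format(~ones_complement_sum(bad) & 0xFF, '08b'))  # 00111001 – corrupted
```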
![Page 510: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/510.jpg)
10.3 Correction
Retransmission
Forward Error Correction
Burst Error Correction
![Page 511: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/511.jpg)
Table 10.2 Data and redundancy bits

| Number of data bits (m) | Number of redundancy bits (r) | Total bits (m + r) |
|---|---|---|
| 1 | 2 | 3 |
| 2 | 3 | 5 |
| 3 | 3 | 6 |
| 4 | 3 | 7 |
| 5 | 4 | 9 |
| 6 | 4 | 10 |
| 7 | 4 | 11 |
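The table follows from the single-error-correction requirement 2^r ≥ m + r + 1; a quick sketch that regenerates it:

```python
def redundancy_bits(m):
    """Smallest r such that 2**r >= m + r + 1 (enough syndromes to name
    every single-error position plus the no-error case)."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

for m in range(1, 8):
    r = redundancy_bits(m)
    print(m, r, m + r)
# 1 2 3 / 2 3 5 / 3 3 6 / 4 3 7 / 5 4 9 / 6 4 10 / 7 4 11
```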
![Page 512: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/512.jpg)
10.14 Positions of redundancy bits in Hamming code
![Page 513: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/513.jpg)
10.15 Redundancy bits calculation
![Page 514: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/514.jpg)
10.16 Example of redundancy bit calculation
![Page 515: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/515.jpg)
10.17 Error detection using Hamming code
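Figures 10.14–10.17 place the redundancy bits at positions 1, 2, 4 and 8 of an 11-bit unit; a sketch of that scheme (the helper names are mine, and the 7-bit dataword below is arbitrary):

```python
def hamming_encode(data7):
    """Place 7 data bits around parity positions 1, 2, 4, 8 (1-indexed)."""
    code = [0] * 12                       # index 0 unused
    data_positions = [p for p in range(1, 12) if p not in (1, 2, 4, 8)]
    for p, bit in zip(data_positions, data7):
        code[p] = bit
    for r in (1, 2, 4, 8):                # parity r covers positions with bit r set
        for p in range(1, 12):
            if p != r and (p & r):
                code[r] ^= code[p]
    return code[1:]

def error_position(code11):
    """A non-zero syndrome gives the position of a single corrupted bit."""
    pos = 0
    for r in (1, 2, 4, 8):
        parity = 0
        for p in range(1, 12):
            if p & r:
                parity ^= code11[p - 1]
        if parity:
            pos += r
    return pos

word = hamming_encode([1, 0, 0, 1, 1, 0, 1])
print(error_position(word))               # 0 – clean word, syndrome is zero
word[5] ^= 1                              # corrupt position 6
print(error_position(word))               # 6 – the receiver can flip it back
```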
![Page 516: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/516.jpg)
10.18 Burst error correction example
![Page 517: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/517.jpg)
EE576 Dr. Kousa Linear Block Codes 517
Chapter 11
Error-Control Coding
Lecture edition by K. Heikkinen
![Page 518: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/518.jpg)
Chapter 11 contents
• Introduction
• Discrete Memoryless Channels
• Linear Block Codes
• Cyclic Codes
• Maximum Likelihood Decoding of Convolutional Codes
• Trellis-Coded Modulation
• Coding for Compound-Error Channels

Chapter 11 goals
• To understand error-correcting codes, their theorems and principles
– block codes, convolutional codes, etc.
![Page 519: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/519.jpg)
Introduction
• A cost-effective facility for transmitting information at a given rate and level of reliability and quality
– measured by the signal energy per bit-to-noise power density ratio
– achieved practically via error-control coding
• Error-control methods
• Error-correcting codes
![Page 520: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/520.jpg)
Discrete Memoryless Channels
![Page 521: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/521.jpg)
Discrete Memoryless Channels
• Discrete memoryless channels (see Fig. 11.1) are described by a set of transition probabilities
– in the simplest form, binary coding {0,1} is used, of which the BSC is an appropriate example
– channel noise is modelled as an additive white Gaussian noise channel
• the two above constitute so-called hard-decision decoding
– other solutions are so-called soft-decision decoding
![Page 522: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/522.jpg)
Linear Block Codes
• A code is said to be linear if any two words in the code can be added in modulo-2 arithmetic to produce a third code word in the code
• A linear block code has n bits, of which k bits are always identical to the message sequence
• The remaining n − k bits are computed from the message bits in accordance with a prescribed encoding rule that determines the mathematical structure of the code
– these bits are also called parity bits
![Page 523: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/523.jpg)
Linear Block Codes
• Normally code equations are written in matrix form (with a 1-by-k message vector)
– P is the k-by-(n−k) coefficient matrix
– I (of order k) is the k-by-k identity matrix
– G is the k-by-n generator matrix
• Another way to show the relationship between the message bits and parity bits
– H is the parity-check matrix
![Page 524: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/524.jpg)
Linear Block Codes
• In syndrome decoding the generator matrix (G) is used in the encoding at the transmitter and the parity-check matrix (H) at the receiver
– if a bit is corrupted, r = c + e, which leads to two important properties:
• the syndrome depends only on the error pattern, not on the transmitted code word
• all error patterns that differ by a code word have the same syndrome
![Page 525: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/525.jpg)
Linear Block Codes
• The Hamming distance (or minimum distance) can be used to calculate the difference between code words
• We have a certain number (2^k) of code vectors, of which the subsets constitute a standard array for an (n,k) linear block code
• We pick the error patterns of a given code
– coset leaders are the most obvious error patterns
![Page 526: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/526.jpg)
Linear Block Codes
• Example: Let H be the parity-check matrix whose columns are
– (1110), (0101), (0011), (0001), (1000), (1111)
– the code generator G gives us the following code words (c):
• 000000, 100101, 111010, 011111
– let us find n, k and n − k
– what will we find if we multiply Hc?
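The closing question can be checked directly: here n = 6, there are 4 = 2^2 code words so k = 2, and n − k = 4. Treating the six 4-bit vectors as the columns of H, every code word multiplies to the zero syndrome (a sketch):

```python
cols = ['1110', '0101', '0011', '0001', '1000', '1111']  # columns of H
codes = ['000000', '100101', '111010', '011111']

def syndrome(c):
    """Hc: XOR together the H-columns selected by the 1s of the code word."""
    s = [0, 0, 0, 0]
    for bit, col in zip(c, cols):
        if bit == '1':
            s = [a ^ int(b) for a, b in zip(s, col)]
    return ''.join(map(str, s))

for c in codes:
    print(c, syndrome(c))   # every code word gives 0000
```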
![Page 527: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/527.jpg)
Linear Block Codes
Examples of (7,4) Hamming code words and error patterns
![Page 528: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/528.jpg)
Cyclic Codes
• Cyclic codes form a subclass of linear block codes
• A binary code is said to be cyclic if it exhibits the two following properties:
– the sum of any two code words in the code is also a code word (linearity)
• this means we are speaking of linear block codes
– any cyclic shift of a code word in the code is also a code word (cyclic property)
• Mathematically this is expressed in polynomial notation
![Page 529: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/529.jpg)
Cyclic Codes
• The polynomial plays a major role in the generation of cyclic codes
• If we have a generator polynomial g(x) of an (n,k) cyclic code with certain k polynomials, we can create the generator matrix (G)
• The syndrome polynomial of the received code word corresponds to the error polynomial
![Page 530: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/530.jpg)
Cyclic Codes
![Page 531: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/531.jpg)
Cyclic Codes
• Example: A (7,4) cyclic code with a block length of 7; let us find the polynomials that generate the code (see Example 3 in the book)
– find the code polynomials
– find the generator matrix (G) and parity-check matrix (H)
• Other remarkable cyclic codes:
– Cyclic redundancy check (CRC) codes
– Bose–Chaudhuri–Hocquenghem (BCH) codes
– Reed–Solomon codes
![Page 532: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/532.jpg)
Convolutional Codes
• Convolutional codes work in a serial manner, which suits such applications better
• The encoder of a convolutional code can be viewed as a finite-state machine that consists of an M-stage shift register with prescribed connections to n modulo-2 adders, and a multiplexer that serializes the outputs of the adders
• Convolutional codes are portrayed in graphical form using three different diagrams:
– Code Tree
– Trellis
– State Diagram
![Page 533: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/533.jpg)
Maximum Likelihood Decoding of Convolutional Codes
• We can create a log-likelihood function for a convolutional code that has a certain Hamming distance
• The book presents an example algorithm (Viterbi)
– the Viterbi algorithm is a maximum-likelihood decoder, which is optimum for an AWGN channel (see Fig. 11.17)
• initialisation
• computation step
• final step
![Page 534: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/534.jpg)
Trellis-Coded Modulation
• Here coding is described as a process of imposing certain patterns on the transmitted signal
• Trellis-coded modulation has three features:
– The number of signal points is larger than what is required, therefore allowing redundancy without sacrificing bandwidth
– Convolutional coding is used to introduce a certain dependency between successive signal points
– Soft-decision decoding is done in the receiver
![Page 535: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/535.jpg)
Coding for Compound-Error Channels
• Compound-error channels exhibit independent and burst error statistics (e.g. PSTN channels, radio channels)
• Error-protection methods (ARQ, FEC)
![Page 536: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/536.jpg)
Telecommunications Technology
The Logical Domain
Chapter 6
Error Control in the Binary Channel
Fall 2007
![Page 537: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/537.jpg)
The Exclusive OR
• If A and B are binary variables, the XOR of A and B is defined as:
0 + 0 = 1 + 1 = 0
0 + 1 = 1 + 0 = 1
XOR with 1 complements variable
• Note dij = w(xi XOR xj)
XOR truth table (rows A, columns B):

| XOR | 0 | 1 |
|-----|---|---|
| 0   | 0 | 1 |
| 1   | 1 | 0 |
![Page 538: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/538.jpg)
Hamming Distance
• The weight of a code word is the number of 1s in it.
• The Hamming distance between two code words is equal to the number of digits in which they differ.
• The distance dij between xi = 1110010 and xj = 1011001 is 4.
![Page 539: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/539.jpg)
x1 = 1110010, x2 = 1100001, y = 1100010

d(x1, y) = w(1110010 + 1100010) = w(0010000) = 1

d(x2, y) = w(1100001 + 1100010) = w(0000011) = 2
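These weight and distance calculations can be checked mechanically (a sketch; function names are my own):

```python
def weight(x):
    """Number of 1s in a code word."""
    return x.count('1')

def distance(a, b):
    """Hamming distance = weight of the modulo-2 sum = count of differing digits."""
    return sum(p != q for p, q in zip(a, b))

print(distance('1110010', '1011001'))   # 4 – the dij example above
print(distance('1110010', '1100010'))   # 1 – d(x1, y)
print(distance('1100001', '1100010'))   # 2 – d(x2, y)
```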
![Page 540: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/540.jpg)
The Binary Symmetric Channel

(Figure: input x0 passes to output y0 and x1 to y1 with probability 1 − p; the crossovers x0 → y1 and x1 → y0 each occur with probability p.)
![Page 541: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/541.jpg)
BSC and Hamming Distance
• If x is the input code word to a BSC and y is the output, then y = x + n, where the noise vector n has a 1 wherever an error has occurred:
x = 1110010, n = 0010000, y = 1100010
• An error causes a distance of 1 between input and output code words.
![Page 542: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/542.jpg)
Geometric Point-of-View
(Figure: cube with vertices 000, 001, 010, 011, 100, 101, 110, 111.)

In the BSC, any error changes a code word into another code word.
Code: the set of all 8 3-digit words; minimum distance = 1.
![Page 543: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/543.jpg)
Reduced Rate Source
(Figure: the same cube with only 4 of the 8 vertices used as code words, e.g. the even-weight words 000, 011, 101, 110.)

Code: a set of 4 3-digit words; minimum distance = 2.
![Page 544: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/544.jpg)
Error Correction and Detection Capability
• The distance between two code words is the number of places in which they differ.
• dmin is the distance between the two code words which are closest together.
• A code with minimum distance dmin may be used
– to detect dmin − 1 errors, or
– to correct ⌊(dmin − 1)/2⌋ errors.
![Page 545: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/545.jpg)
The probability that y came from x1 = Pr{1 error} = p q^6
The probability that y came from x2 = Pr{2 errors} = p^2 q^5
p(y|x1) > p(y|x2) when p < 1/2
The received word is more likely to have come from the closest code word.
(Here y lies at distance 1 from x1 and distance 2 from x2.)
Decode the received vector as the closest code word => correction
![Page 546: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/546.jpg)
Error Detection and Correction

Shannon's Noisy Channel Coding Theorem:
• To achieve error-free communications
– reduce the rate below capacity
– add structured redundancy
• Increase the distance between code words

Error Correcting Codes
• Add redundancy in a structured way
• For example, add a single parity check digit
• Choose the value of the appended digit to make the number of 1s even (or odd)
![Page 547: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/547.jpg)
Parity Check Equation

Message bits m3 m2 m1 m0 with appended check digit c1:
c_k = Σ_{i=0}^{n−1} m_i  (addition is modulo 2)

Modulo-2 addition table (rows A, columns B):

| + | 0 | 1 |
|---|---|---|
| 0 | 0 | 1 |
| 1 | 1 | 0 |
![Page 548: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/548.jpg)
An (n,k) Group Code

c1 = m1 + m2 + m4
c2 = m1 + m3 + m4
c3 = m2 + m3 + m4

Code word: x7 x6 x5 x4 x3 x2 x1 = m4 m3 m2 m1 c3 c2 c1

Example code words:
1 1 1 0 1 0 0
1 1 0 1 0 1 0
1 0 1 1 0 0 1
![Page 549: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/549.jpg)
Parity Check and Generator Matrices

(Note: these slides label the 4-by-7 generator matrix H and the 3-by-7 parity-check matrix G, the reverse of the usual convention.)

H =
1 0 0 0 1 1 1
0 1 0 0 1 1 0
0 0 1 0 1 0 1
0 0 0 1 0 1 1

G =
1 1 1 0 1 0 0
1 1 0 1 0 1 0
1 0 1 1 0 0 1
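Whatever the labels, the two matrices form a valid generator/parity-check pair: every row of the 4-by-7 matrix is orthogonal, modulo 2, to every row of the 3-by-7 matrix. A quick check:

```python
GEN = [[1,0,0,0,1,1,1],
       [0,1,0,0,1,1,0],
       [0,0,1,0,1,0,1],
       [0,0,0,1,0,1,1]]
CHK = [[1,1,1,0,1,0,0],
       [1,1,0,1,0,1,0],
       [1,0,1,1,0,0,1]]

# Every generator-row / check-row inner product is 0 modulo 2,
# so every code word has a zero syndrome.
for g in GEN:
    for h in CHK:
        assert sum(a & b for a, b in zip(g, h)) % 2 == 0
print("every generator row is orthogonal to every check row (mod 2)")
```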
![Page 550: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/550.jpg)
Codes

Message: m3 m2 m1 m0
Code word: x6 x5 x4 x3 x2 x1 x0

x = mH  (transmitted code word)
y = x + e  (received word)

The error event e has a 1 wherever an error has occurred.
![Page 551: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/551.jpg)
Syndrome Calculation

y = mH + e
s = yG^T
s = mHG^T + eG^T
s = eG^T
![Page 552: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/552.jpg)
Hamming Code
• In the example, n = 7 and k = 4.
• There are r = n − k = 3 parity check digits.
• This code has a minimum distance of 3, thus all single errors can be corrected.
• If no error occurs, the syndrome is 000.
• If errors occur, the syndrome is another (non-zero) 3-bit sequence.
• Each single error gives a unique syndrome.
• Any single error is more likely to occur than any double, triple, or higher-order error.
• Any non-zero syndrome is most likely to have occurred because the single error that could cause it occurred, rather than for any other reason.
• Therefore, deciding that the single error occurred is most likely the correct decision.
• Hence, the term error correction.
![Page 553: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/553.jpg)
Properties of Binomial Variables
• Given n bits with a probability of error p and a probability of no error q = 1 − p:
– The probability of no errors is q^n
– The probability of a particular single error is p q^(n−1)
– The probability of a particular k-error pattern is p^k q^(n−k)
• It is no problem to show that if p < 1/2 then any k-error event is more likely than any (k+1)-error event.
– The most likely number of errors is np. When p is very low the most likely error event is NO ERRORS; single errors are next most likely.
• Single-error-correcting codes can be very effective!
![Page 554: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/554.jpg)
Hamming Codes
• Hamming codes are (n,k) group codes where n = k + r is the length of the code words, k is the number of data bits, r is the number of parity check bits, and 2^r = n + 1.
• Typical codes are:
– (7,4), r = 3 (2^3 = 8)
– (15,11), r = 4 (2^4 = 16)
– (63,57), r = 6
• Hamming codes are ideal single-error-correcting codes.
![Page 555: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/555.jpg)
Hamming Code Performance

• If the probability of bit error without coding is p_u and with coding is p_c:
• the probability of a word error without coding is 1 − (1 − p_u)^n
• the probability of a word error using a (7,4) Hamming code is 1 − (1 − p_c)^7 − 7 p_c (1 − p_c)^6
• p_u is the uncoded channel error probability.
• p_c is the probability of bit error when E_b/N_0 is reduced to 4/7 of that at which p_u was calculated.
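Plugging in a sample value shows the benefit of correcting all single errors (p_u = p_c = 10^-3 here is an arbitrary illustrative number; in practice p_c would be somewhat higher than p_u because of the reduced energy per bit):

```python
def p_word_uncoded(pu, n=7):
    """Word error = at least one bit error in n uncoded bits."""
    return 1 - (1 - pu) ** n

def p_word_hamming74(pc):
    """Word error = more than one bit error in the 7-bit code word."""
    return 1 - (1 - pc) ** 7 - 7 * pc * (1 - pc) ** 6

p = 1e-3
print(p_word_uncoded(p))     # ~7.0e-3
print(p_word_hamming74(p))   # ~2.1e-5 – all single errors are corrected
```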
![Page 556: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/556.jpg)
Cyclic Codes
• Cyclic codes are algebraic group codes in which the code words form an ideal.
• If the bits are considered coefficients of a polynomial, every code word is divisible by a generator polynomial.
• The rows of the generator matrix are cyclic permutations of one another.

(An ideal is a group in which every member is the product of two others.)
![Page 557: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/557.jpg)
Cyclic Code Generation

A message M = 110011 can be expressed as the polynomial M(X) = X^5 + X^4 + X + 1 (the code digits are the coefficients of the polynomial).

With a generator polynomial P(X) = X^4 + X^3 + 1, the code word can be generated as T(X) = X^n·M(X) + R(X), where R(X) is the remainder when X^n·M(X) is divided by P(X), i.e. X^n·M(X)/P(X) = Q(X) + R(X)/P(X), and n is the degree of P(X).
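The polynomial arithmetic above is just shift-and-XOR on bit strings. A sketch using the slide's own example (M = 110011, P(X) = X^4 + X^3 + 1, i.e. divisor bits 11001); the helper name is illustrative:

```python
def crc_remainder(message_bits, divisor_bits):
    # Append deg(P) zeros to the message (multiply by X^n), then reduce
    # modulo P(X) with long division over GF(2) (subtraction = XOR).
    n = len(divisor_bits) - 1           # degree of P(X)
    bits = message_bits + [0] * n       # X^n * M(X)
    for i in range(len(message_bits)):
        if bits[i]:                     # leading coefficient 1: subtract P
            for j, d in enumerate(divisor_bits):
                bits[i + j] ^= d
    return bits[-n:]                    # remainder R(X)

M = [1, 1, 0, 0, 1, 1]                  # M(X) = X^5 + X^4 + X + 1
P = [1, 1, 0, 0, 1]                     # P(X) = X^4 + X^3 + 1
R = crc_remainder(M, P)                 # remainder bits
codeword = M + R                        # T(X) = X^n M(X) + R(X)
```

By construction the codeword leaves remainder zero when divided by P(X), which is exactly the property the receiver checks.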
![Page 558: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/558.jpg)
Applications of Cyclic Codes
• Cyclic codes (used for the cyclic redundancy check, CRC) are routinely used to detect errors in data transmission.
• Typical codes are:
– CRC-16: P(X) = X^16 + X^15 + X^2 + 1
– CRC-CCITT: P(X) = X^16 + X^12 + X^5 + 1
Cyclic Code Capabilities• A cyclic code will detect:
– All single-bit errors.
– All double-bit errors.
– Any odd number of errors.
– Any burst error for which the length of the burst is less than the length of the CRC.
– Most larger burst errors.
![Page 559: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/559.jpg)
Convolutional Codes
• Block codes are memoryless codes: each output depends only on the current k-bit block being coded.
• The bits in a convolutional code depend on previous source bits.
• The source bits are convolved with the impulse response of a filter.

Why convolutional codes? Because the code set grows exponentially with code length, the hypothesis being that the rate could be maintained as n grew, unlike all block codes (the Wozencraft contribution).
![Page 560: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/560.jpg)
Convolutional Coder
• Rate 1/2 convolutional coder
Xi → Xi−1 → Xi−2
Input
O1
O2
Encoded output
1 1 0 1 0 1 ... 11 10 11 01 01 01 ...
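A rate-1/2 encoder like the one sketched can be written directly as a shift register with XOR taps. The tap polynomials below (G1 = 111, G2 = 101, the classic constraint-length-3 pair) are an assumption, since the slide does not state them, so the output sequence here need not match the slide's example:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    # Rate-1/2 convolutional encoder: two output bits per input bit,
    # computed from the current bit Xi and the register contents Xi-1, Xi-2.
    x1 = x2 = 0
    out = []
    for x in bits:
        o1 = (g1[0] & x) ^ (g1[1] & x1) ^ (g1[2] & x2)
        o2 = (g2[0] & x) ^ (g2[1] & x1) ^ (g2[2] & x2)
        out += [o1, o2]
        x1, x2 = x, x1                  # shift the register
    return out
```

Because the register carries two past bits, each input bit influences three consecutive output pairs; that memory is what the trellis on the next slides depicts.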
![Page 561: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/561.jpg)
Trellis Diagram

[Trellis with input bits 0 and 1 and states 00, 10, 01, 11]
![Page 562: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/562.jpg)
[Trellis path for the example sequence; branch labels show the encoder outputs 11, 10, 11, 01, 01, 01 through states 00, 10, 01, 11]
![Page 563: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/563.jpg)
Decoding

[Trellis with states 00, 10, 01, 11]

Insert an error in a sequence of transmitted bits and try to decode it.
![Page 564: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/564.jpg)
Sequential Decoding
• The decoder determines the most likely output sequence.
• It compares the received sequence with all possible sequences that might have been produced by the coder.
• It selects the sequence that is closest to the received sequence.

Viterbi Decoding
• Choose a decoding-window width b in excess of the block length.
• Compute all code words of length b and compare each to the received word.
• Select the code word closest to the received word.
• Re-encode the decoded frame and subtract it from the received word.

Turbo Codes
• Turbo codes were invented by Berrou, Glavieux and Thitimajshima in 1993.
• Turbo codes achieve excellent error-correction capability at rates very close to the Shannon bound.
• Turbo codes are concatenated or product codes; iterative decoding is used.
![Page 565: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/565.jpg)
Interleaved Concatenated Code
Information
Inner checks on information
Outer checks on information
Checks on checks
![Page 566: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/566.jpg)
Coding and Decoding
Inner Coder (Block)

Outer Coder (Convolutional)
Outer Decoder Inner Decoder
![Page 567: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/567.jpg)
Turbo Code Performance
η = spectral efficiency = bits per second per hertz
![Page 568: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/568.jpg)
The Problem: Noise
Information Source
Message
Transmitter
Noise Source
Destination
Message
Receiver

Received Signal
Signal
Message = [1 1 1 1]
Noise = [0 0 1 0]
Message = [1 1 0 1]
![Page 569: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/569.jpg)
Poor solutions
Single Checksum
• Truth table (XOR):

A B | A XOR B
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0

• General form:
Data = [1 1 1 1]
Message = [1 1 1 1 0]

• Repeats –
Data = [1 1 1 1]
Message =
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
![Page 570: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/570.jpg)
Why they are poor
S/N is the signal-to-noise ratio
W is the raw channel capacity
C is the channel capacity

C = W log2(1 + S/N)

Shannon Efficiency

Repeat 3 times:
• This divides W by 3.
• It divides overall capacity by at least a factor of 3.

Single Checksum:
• Allows an error to be detected but requires the message to be discarded and resent.
• Each error reduces the channel capacity by at least a factor of 2 because of the thrown-away message.
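Shannon's capacity formula can be evaluated directly. A small sketch; the bandwidth and SNR values are illustrative only:

```python
import math

def shannon_capacity(W_hz, snr_linear):
    # Channel capacity C = W log2(1 + S/N), in bits per second,
    # with W in hertz and S/N as a linear (not dB) ratio.
    return W_hz * math.log2(1 + snr_linear)

# e.g. a 3 kHz telephone-grade channel at an SNR of 1000 (30 dB):
C = shannon_capacity(3000, 1000)
```

Note that doubling W doubles C, while doubling S/N only adds one bit per second per hertz, which is why the "repeat 3 times" scheme above is such a poor trade.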
![Page 571: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/571.jpg)
Hamming's Solution
Encoding:
• Multiple checksums. Message = [a b c d]
r= (a+b+d) mod 2
s= (a+b+c) mod 2
t= (b+c+d) mod 2
Code=[r s a t b c d]
Message=[1 0 1 0]
r=(1+0+0) mod 2 =1
s=(1+0+1) mod 2 =0
t=(0+1+0) mod 2 =1
Code=[ 1 0 1 1 0 1 0 ]
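The checksum construction above is a few lines of code; this follows the slide's parity equations and bit ordering exactly:

```python
def hamming_encode(a, b, c, d):
    # (7,4) Hamming encoding with the slide's three parity checks,
    # producing the code word [r s a t b c d].
    r = (a + b + d) % 2
    s = (a + b + c) % 2
    t = (b + c + d) % 2
    return [r, s, a, t, b, c, d]
```

Running it on the slide's message [1 0 1 0] reproduces the code word [1 0 1 1 0 1 0].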
![Page 572: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/572.jpg)
Simulation
Stochastic Simulation:
• 100,000 iterations
• Add errors to (7,4) data
• No repeated randoms
• Measure error detection

Results:
• One error: 100%
• Two errors: 100%
• Three errors: 83.43%
• Four errors: 79.76%
[Fig 1: Error detection rate (%) versus number of errors introduced]
![Page 573: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/573.jpg)
How it works: 3 dots

• Only 3 possible words; distance increment = 1.
• Two valid code words (blue).
• One excluded state (red).
• It is really a checksum: single error detection, no error correction.

[Diagram: code words A, B, C]
This is a graphic representation of the “Hamming Distance”
![Page 574: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/574.jpg)
Hamming Distance
Definition:
The number of elements that need to be changed (corrupted) to turn one codeword into another.
The Hamming distance from:
• [0101] to [0110] is 2 bits
• [1011101] to [1001001] is 2 bits
• “butter” to “ladder” is 4 characters
• “roses” to “toned” is 3 characters
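The definition translates directly into code; a one-line sketch that covers both the bit-string and the character-string examples:

```python
def hamming_distance(x, y):
    # Count the positions at which two equal-length sequences differ.
    assert len(x) == len(y), "Hamming distance needs equal-length inputs"
    return sum(a != b for a, b in zip(x, y))
```

It works on any pair of equal-length sequences (lists of bits, strings of characters), matching the four examples above.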
![Page 575: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/575.jpg)
Another Dot
The code space is now 4. The Hamming distance is still 1.

Allows:
• Error DETECTION for Hamming distance = 1.
• Error CORRECTION for Hamming distance = 1.
• For Hamming distances greater than 1, an error gives a false correction.
![Page 576: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/576.jpg)
Even More Dots
Allows:
• Error DETECTION for Hamming distance = 2.
• Error CORRECTION for Hamming distance = 1.
• For Hamming distances greater than 2, an error gives a false correction.
• For a Hamming distance of 2, an error is detected but cannot be corrected.
![Page 577: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/577.jpg)
Multi-dimensional Codes
Code Space:
• 2-dimensional
• 5 element states
Circle packing makes more efficient use of the code-space
![Page 578: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/578.jpg)
Cannon Balls
• http://wikisource.org/wiki/Cannonball_stacking• http://mathworld.wolfram.com/SpherePacking.html
Efficient Circle packing is the same as efficient 2-d code spacing
Efficient Sphere packing is the same as efficient 3-d code spacing
Efficient n-dimensional sphere packing is the same as efficient n-dimensional code spacing
![Page 579: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/579.jpg)
More on Codes

• Hamming (11,7)
• Golay Codes
• Convolutional Codes
• Reed-Solomon Error Correction
• Turbo Codes
• Digital Fountain Codes
An Example

We will
• Encode a message
• Add noise to the transmission
• Detect the error
• Repair the error
![Page 580: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/580.jpg)
Encoding the message
To encode our message, we multiply it by the generator matrix H:

H =
[1 0 0 0 0 1 1]
[0 1 0 0 1 0 1]
[0 0 1 0 1 1 0]
[0 0 0 1 1 1 1]

message · H = code

But why? You can verify that:

Hamming[1 0 0 0]=[1 0 0 0 0 1 1]
Hamming[0 1 0 0]=[0 1 0 0 1 0 1]
Hamming[0 0 1 0]=[0 0 1 0 1 1 0]
Hamming[0 0 0 1]=[0 0 0 1 1 1 1]

where multiplication is the logical AND and addition is the logical XOR.
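The AND/XOR matrix product is short to write out. The rows of H below are taken from the four products listed above; the function name is illustrative:

```python
H = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(message):
    # code = message * H over GF(2): AND for multiply, XOR for add.
    # Each 1 in the message selects a row of H; the code word is the
    # XOR of the selected rows.
    code = [0] * 7
    for bit, row in zip(message, H):
        if bit:
            code = [c ^ r for c, r in zip(code, row)]
    return code
```

Because the left part of H is the identity, the first four bits of every code word are the message itself, which is why decoding later amounts to trimming the parity bits.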
![Page 581: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/581.jpg)
Add noise
• If our message is Message = [0 1 1 0], multiplying yields Code = [0 1 1 0 0 1 1].

• Let's add an error: pick a digit to mutate.
Code => [0 1 0 0 0 1 1]
![Page 582: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/582.jpg)
Testing the message
• We receive the erroneous string:

Code = [0 1 0 0 0 1 1]

• We test it:

Decoder · Code^T = [0 1 1]

• And indeed it has an error.

The matrix used to decode is:

Decoder =
[1 0 1 0 1 0 1]
[1 1 0 0 1 1 0]
[1 1 1 1 0 0 0]

To test if a code is valid:
• Does Decoder · Code^T = [0 0 0]?
– Yes means it is valid.
– No means it has error(s).
![Page 583: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/583.jpg)
Repairing the message

• To repair the code we find the column in the decoder matrix whose elements match the test (syndrome) vector.
• Decoder · Code^T is [0 1 1], which matches the column for the third element of our code.
• We flip that bit: our repaired code is [0 1 1 0 0 1 1].

Decoding the message

We trim the last 3 elements from our repaired code and we have our original message: [0 1 1 0 0 1 1] => [0 1 1 0]
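The whole test-and-repair procedure can be sketched end to end. The parity-check matrix below is derived from the generator rows used earlier (it satisfies H_check · G^T = 0 over GF(2)); its row order may differ from the slide's decoder matrix, but the mechanism is the same: the syndrome equals the H_check column at the error position.

```python
# Parity-check matrix for the generator used above: H_check = [P^T | I3],
# where P is the parity part of the generator rows (an assumed but
# mathematically consistent reconstruction, not copied from the slide).
H_check = [
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]

def syndrome(code):
    # Decoder * code^T over GF(2); [0,0,0] means the code word is valid.
    return [sum(h * c for h, c in zip(row, code)) % 2 for row in H_check]

def repair(code):
    s = syndrome(code)
    if s == [0, 0, 0]:
        return code                       # no single-bit error detected
    # The syndrome matches the column of H_check at the error position.
    cols = [[row[j] for row in H_check] for j in range(7)]
    fixed = code[:]
    fixed[cols.index(s)] ^= 1             # flip the erroneous bit
    return fixed
```

Repairing the slide's received word [0 1 0 0 0 1 1] flips the third bit back, and trimming the last three elements recovers the message [0 1 1 0].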
![Page 584: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/584.jpg)
Channel Coding in IEEE802.16e
Student: Po-Sheng Wu
Advisor: David W. Lin
![Page 585: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/585.jpg)
Outline
• Overview
• RS code
• Convolution code
• LDPC code
• Future Work
Overview
![Page 586: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/586.jpg)
RS code
• The RS code in 802.16a is derived from a systematic RS (N=255, K=239, T=8) code on GF(2^8)
![Page 587: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/587.jpg)
RS code
![Page 588: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/588.jpg)
RS code
• This code then is shortened and punctured to enable variable block size and variable error-correction capability.
• Shortening: (n, k) → (n−l, k−l)
• Puncturing: (n, k) → (n−l, k)
• In general, the generator polynomial is g(x) = (x + α^h)(x + α^(h+1)) ⋯ (x + α^(h+2T−1)); in IEEE 802.16a, h = 0.
![Page 589: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/589.jpg)
RS code
• They are shortened to K′ data bytes and punctured to permit T′ bytes to be corrected.
• When a block is shortened to K′, the first 239 − K′ bytes of the encoder input shall be zero.
• When a codeword is punctured to permit T′ bytes to be corrected, only the first 2T′ of the total 16 parity bytes shall be employed.
• When shortened and punctured to (48, 36, 6), the first 203 (= 239 − 36) information bytes are assigned 0, and only the first 12 (= 2 × 6) bytes of R(X) are employed in the codeword.
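The parameter bookkeeping for shortening and puncturing is simple arithmetic; this sketch (helper name and return convention are illustrative) mirrors the (48, 36, 6) example:

```python
def rs_shorten_puncture(K_prime, T_prime, N=255, K=239, T=8):
    # Derive the shortened/punctured parameters from the (255, 239, T=8)
    # mother RS code: pad the data with leading zeros, keep 2T' parity bytes.
    assert K_prime <= K and T_prime <= T
    pad = K - K_prime            # leading zero bytes fed to the encoder
    parity = 2 * T_prime         # parity bytes of R(X) actually transmitted
    N_prime = K_prime + parity   # transmitted block length
    return N_prime, pad, parity

N_prime, pad, parity = rs_shorten_puncture(36, 6)
```

With K′ = 36 and T′ = 6 this yields the (48, 36, 6) code of the slide: 203 zero pad bytes at the encoder and 12 parity bytes kept.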
![Page 590: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/590.jpg)
Shortened and Punctured
![Page 591: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/591.jpg)
RS code
![Page 592: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/592.jpg)
RS code
• Decoding: Euclid's (or the Berlekamp) algorithm is a common decoding algorithm for RS codes.
• Four steps:
– compute the syndrome values
– compute the error-location polynomial
– compute the error locations
– compute the error values
![Page 593: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/593.jpg)
Convolution code
• Each RS code word is encoded by a binary convolutional encoder, which has a native rate of 1/2 and a constraint length equal to 7.
![Page 594: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/594.jpg)
Convolution code
• “1” means a transmitted bit and “0” denotes a removed bit; note that the rate has been changed by this puncturing from that of the native rate-1/2 convolutional code.
![Page 595: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/595.jpg)
Convolution code
• Decoding: Viterbi algorithm
![Page 596: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/596.jpg)
Convolution code
• The convolutional code in IEEE 802.16a needs to be terminated in a block, and thus becomes a block code.
• Three methods achieve this termination:
– Direct truncation
– Zero tail
– Tail biting
![Page 597: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/597.jpg)
RS-CC code
• Outer code: RS code
• Inner code: convolutional code
• Input data streams are divided into RS blocks; each RS block is then encoded by a tail-biting convolutional code.
• Between the convolutional coder and the modulator is a bit interleaver.
![Page 598: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/598.jpg)
LDPC code
• LDPC: low-density parity-check matrix.
• LDPC codes are also linear codes; the codewords can be expressed as the null space of H: Hx = 0.
• Low density enables efficient decoding:
– Better decoding performance than Turbo codes
– Close to the Shannon limit at long block lengths
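The defining condition Hx = 0 can be checked directly for a toy low-density matrix. Both H and the test vectors here are illustrative, not taken from the 802.16e standard:

```python
# Toy sparse parity-check matrix: each row is one parity constraint
# touching only a few code bits (that sparsity is the "low density").
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def is_codeword(H, x):
    # x is a codeword iff every parity check is satisfied: Hx = 0 over GF(2).
    return all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)
```

Message-passing decoders (the sum-product algorithm on the Tanner graph, covered on the following slides) exploit exactly this structure: each sparse row becomes a check node connected to few variable nodes.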
![Page 599: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/599.jpg)
LDPC code
• n is the length of the code; m is the number of parity-check bits
![Page 600: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/600.jpg)
LDPC code
• Base model
![Page 601: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/601.jpg)
LDPC code
• If p(f,i,j) = −1, replace it by a z×z zero matrix;
• else p(f,i,j) is the circular shift size:

p(f,i,j) = p(i,j) for p(i,j) ≤ 0, and p(f,i,j) = ⌊p(i,j)·z_f / z_0⌋ for p(i,j) > 0
![Page 602: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/602.jpg)
LDPC code
• Encoding: [u p1 p2]
• Decoding:
– Tanner Graph
– Sum-Product Algorithm
![Page 603: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/603.jpg)
LDPC code
• Tanner Graph
![Page 604: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/604.jpg)
LDPC code
• Sum-Product Algorithm
![Page 605: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/605.jpg)
LDPC code
![Page 606: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/606.jpg)
LDPC code
Future Work
• Implement these algorithms in software
• Find decoding algorithms that speed up the process
![Page 607: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/607.jpg)
Chapter 11
Data Link Control
and Protocols
![Page 608: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/608.jpg)
11.1 Flow and Error Control
Flow Control
Error Control
Flow control refers to a set of procedures used to restrict the amount of data that the sender can send before waiting for acknowledgment.

Error control in the data link layer is based on automatic repeat request (ARQ), which is the retransmission of data.
![Page 609: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/609.jpg)
11.2 Stop-and-Wait ARQ
Operation
Bidirectional Transmission
![Page 610: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/610.jpg)
11.1 Normal operation
![Page 611: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/611.jpg)
11.2 Stop-and-Wait ARQ, lost frame
![Page 612: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/612.jpg)
11.3 Stop-and-Wait ARQ, lost ACK frame
![Page 613: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/613.jpg)
Note: In Stop-and-Wait ARQ, numbering frames prevents the retaining of duplicate frames.
![Page 614: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/614.jpg)
11.4 Stop-and-Wait ARQ, delayed ACK
![Page 615: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/615.jpg)
Note: Numbered acknowledgments are needed if an acknowledgment is delayed and the next frame is lost.
![Page 616: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/616.jpg)
11.5 Piggybacking
![Page 617: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/617.jpg)
11.3 Go-Back-N ARQ
Sequence Number
Sender and Receiver Sliding Window
Control Variables and Timers
Acknowledgment
Resending Frames
Operation
![Page 618: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/618.jpg)
11.6 Sender sliding window
![Page 619: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/619.jpg)
11.7 Receiver sliding window
![Page 620: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/620.jpg)
11.8 Control variables
![Page 621: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/621.jpg)
11.9 Go-Back-N ARQ, normal operation
![Page 622: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/622.jpg)
11.10 Go-Back-N ARQ, lost frame
![Page 623: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/623.jpg)
11.11 Go-Back-N ARQ: sender window size
![Page 624: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/624.jpg)
Note: In Go-Back-N ARQ, the size of the sender window must be less than 2^m; the size of the receiver window is always 1.
![Page 625: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/625.jpg)
11.4 Selective-Repeat ARQ
Sender and Receiver Windows
Operation
Sender Window Size
Bidirectional Transmission
Pipelining
![Page 626: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/626.jpg)
11.12 Selective Repeat ARQ, sender and receiver windows
![Page 627: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/627.jpg)
11.13 Selective Repeat ARQ, lost frame
![Page 628: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/628.jpg)
Note: In Selective Repeat ARQ, the size of the sender and receiver windows must be at most one-half of 2^m.
![Page 629: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/629.jpg)
11.14 Selective Repeat ARQ, sender window size
![Page 630: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/630.jpg)
Example 1
In a Stop-and-Wait ARQ system, the bandwidth of the line is 1 Mbps, and 1 bit takes 20 ms to make a round trip. What is the bandwidth-delay product? If the system data frames are 1000 bits in length, what is the utilization percentage of the link?
Solution
The bandwidth-delay product is
1 × 10^6 × 20 × 10^−3 = 20,000 bits
The system can send 20,000 bits during the time it takes for the data to go from the sender to the receiver and then back again. However, the system sends only 1000 bits. We can say that the link utilization is only 1000/20,000, or 5%. For this reason, for a link with high bandwidth or long delay, use of Stop-and-Wait ARQ wastes the capacity of the link.
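The utilization arithmetic generalizes to any window size; a sketch (function and variable names are illustrative) that reproduces both this example and the Go-Back-N figure in Example 2:

```python
def utilization(bandwidth_bps, rtt_s, frame_bits, window_frames=1):
    # Fraction of the bandwidth-delay product actually filled:
    # bits in flight per round trip / bits the pipe could hold.
    bdp = bandwidth_bps * rtt_s
    return min(1.0, window_frames * frame_bits / bdp)

u_stop_and_wait = utilization(1e6, 20e-3, 1000)       # 1 frame per RTT
u_go_back_n = utilization(1e6, 20e-3, 1000, 15)       # 15-frame window
```

With a single 1000-bit frame per 20 ms round trip the link runs at 5%; widening the window to 15 frames raises it to 75%, matching Example 2.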
![Page 631: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/631.jpg)
Example 2
What is the utilization percentage of the link in Example 1 if the link uses Go-Back-N ARQ with a 15-frame sequence?
Solution
The bandwidth-delay product is still 20,000. The system can send up to 15 frames or 15,000 bits during a round trip. This means the utilization is 15,000/20,000, or 75 percent. Of course, if there are damaged frames, the utilization percentage is much less because frames have to be resent.
![Page 632: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/632.jpg)
11.5 HDLC
Configurations and Transfer Modes
Frames
Frame Format
Examples
Data Transparency
![Page 633: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/633.jpg)
11.15 NRM
![Page 634: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/634.jpg)
11.16 ABM
![Page 635: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/635.jpg)
11.17 HDLC frame
![Page 636: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/636.jpg)
11.18 HDLC frame types
![Page 637: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/637.jpg)
11.19 I-frame
![Page 638: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/638.jpg)
11.20 S-frame control field in HDLC
![Page 639: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/639.jpg)
11.21 U-frame control field in HDLC
![Page 640: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/640.jpg)
Table 11.1 U-frame control commands and responses

Command/response | Meaning
SNRM | Set normal response mode
SNRME | Set normal response mode (extended)
SABM | Set asynchronous balanced mode
SABME | Set asynchronous balanced mode (extended)
UP | Unnumbered poll
UI | Unnumbered information
UA | Unnumbered acknowledgment
RD | Request disconnect
DISC | Disconnect
DM | Disconnect mode
RIM | Request information mode
SIM | Set initialization mode
RSET | Reset
XID | Exchange ID
FRMR | Frame reject
![Page 641: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/641.jpg)
Example 3

Figure 11.22 shows an exchange using piggybacking where there is no error. Station A begins the exchange of information with an I-frame numbered 0 followed by another I-frame numbered 1. Station B piggybacks its acknowledgment of both frames onto an I-frame of its own. Station B’s first I-frame is also numbered 0 [N(S) field] and contains a 2 in its N(R) field, acknowledging the receipt of A’s frames 1 and 0 and indicating that it expects frame 2 to arrive next. Station B transmits its second and third I-frames (numbered 1 and 2) before accepting further frames from station A. Its N(R) information, therefore, has not changed: B’s frames 1 and 2 indicate that station B is still expecting A’s frame 2 to arrive next.
![Page 642: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/642.jpg)
11.22 Example 3
![Page 643: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/643.jpg)
Example 4
In Example 3, suppose frame 1 sent from station B to station A has an error. Station A informs station B to resend frames 1 and 2 (the system is using the Go-Back-N mechanism). Station A sends a reject supervisory frame to announce the error in frame 1. Figure 11.23 shows the exchange.
![Page 644: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/644.jpg)
Figure 11.23 Example 4
![Page 645: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/645.jpg)
Note: Bit stuffing is the process of adding one extra 0 whenever there are five consecutive 1s in the data so that the receiver does not mistake the data for a flag.
![Page 646: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/646.jpg)
11.24 Bit stuffing and removal
![Page 647: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/647.jpg)
11.25 Bit stuffing in HDLC
![Page 648: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/648.jpg)
Chapter 11
Data Link Control
Copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
![Page 649: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/649.jpg)
11-1 FRAMING
The data link layer needs to pack bits into frames, so that each frame is distinguishable from another. Our postal system practices a type of framing. The simple act of inserting a letter into an envelope separates one piece of information from another; the envelope serves as the delimiter.
Topics discussed in this section:
Fixed-Size Framing
Variable-Size Framing
![Page 650: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/650.jpg)
Figure 11.1 A frame in a character-oriented protocol
![Page 651: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/651.jpg)
Figure 11.2 Byte stuffing and unstuffing
Byte stuffing is the process of adding 1 extra byte whenever there is a flag or escape character in the text.
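The byte-stuffing rule can be sketched as follows (a minimal illustration in Python; the FLAG and ESC values 0x7E and 0x7D follow common HDLC/PPP convention but are assumptions here, and the function names are mine):

```python
FLAG, ESC = 0x7E, 0x7D  # assumed delimiter and escape values

def byte_stuff(payload):
    """Precede every flag or escape byte in the payload with an escape byte."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)  # the stuffed (extra) byte
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed):
    """Drop each escape byte and keep the byte that follows it."""
    out, escaped = bytearray(), False
    for b in stuffed:
        if escaped or b != ESC:
            out.append(b)
            escaped = False
        else:
            escaped = True  # skip the escape, keep the next byte verbatim
    return bytes(out)
```

Unstuffing at the receiver is the exact inverse, so a stuff/unstuff round trip returns the original payload.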
![Page 652: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/652.jpg)
Figure 11.3 A frame in a bit-oriented protocol
Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0 in the data, so that the receiver does not mistake the pattern 0111110 for a flag.
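The same rule can be written out as a short sketch (illustrative Python, not production HDLC framing; bits are modeled as a list of 0/1 integers):

```python
def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)  # the stuffed bit
            run = 0
    return out

def bit_unstuff(bits):
    """Remove the 0 that follows every run of five 1s (assumes valid input)."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:          # this is the stuffed 0; drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            skip = True
    return out
```

For example, stuffing [0,1,1,1,1,1,1,0] yields [0,1,1,1,1,1,0,1,0]: a 0 is inserted after the fifth 1, and unstuffing restores the original sequence.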
![Page 653: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/653.jpg)
Figure 11.4 Bit stuffing and unstuffing
![Page 654: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/654.jpg)
11-2 FLOW AND ERROR CONTROL
The most important responsibilities of the data link layer are flow control and error control. Collectively, these functions are known as data link control.
Topics discussed in this section:
Flow Control
Error Control
![Page 655: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/655.jpg)
Flow control refers to a set of procedures used to restrict the amount of data that the sender can send before waiting for acknowledgment.
Error control in the data link layer is based on automatic repeat request, which is the retransmission of data.
Note
![Page 656: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/656.jpg)
11-3 PROTOCOLS
Now let us see how the data link layer can combine framing, flow control, and error control to achieve the delivery of data from one node to another.
The protocols are normally implemented in software by using one of the common programming languages.
To make our discussions language-free, we have written in pseudocode a version of each protocol that concentrates mostly on the procedure instead of delving into the details of language rules.
![Page 657: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/657.jpg)
Figure 11.5 Taxonomy of protocols discussed in this chapter
![Page 658: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/658.jpg)
11-4 NOISELESS CHANNELS
Let us first assume we have an ideal channel in which no frames are lost, duplicated, or corrupted. We introduce two protocols for this type of channel.
Topics discussed in this section:
Simplest Protocol
Stop-and-Wait Protocol
![Page 659: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/659.jpg)
Figure 11.6 The design of the simplest protocol with no flow or error control
![Page 660: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/660.jpg)
Figure 11.7 Flow diagram for Example 11.1
Figure 11.7 shows an example of communication using this protocol. It is very simple. The sender sends a sequence of frames without even thinking about the receiver. To send three frames, three events occur at the sender site and three events at the receiver site. Note that the data frames are shown by tilted boxes; the height of the box defines the transmission time difference between the first bit and the last bit in the frame.
![Page 661: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/661.jpg)
Figure 11.8 Design of Stop-and-Wait Protocol
![Page 662: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/662.jpg)
Figure 11.9 Flow diagram for Example 11.2
Figure 11.9 shows an example of communication using this protocol. It is still very simple. The sender sends one frame and waits for feedback from the receiver. When the ACK arrives, the sender sends the next frame. Note that sending two frames in the protocol involves the sender in four events and the receiver in two events.
![Page 663: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/663.jpg)
11-5 NOISY CHANNELS
Although the Stop-and-Wait Protocol gives us an idea of how to add flow control to its predecessor, noiseless channels are nonexistent. We discuss three protocols in this section that use error control.
Topics discussed in this section:
Stop-and-Wait Automatic Repeat Request
Go-Back-N Automatic Repeat Request
Selective Repeat Automatic Repeat Request
![Page 664: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/664.jpg)
Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and retransmitting of the frame when the timer expires.
In Stop-and-Wait ARQ, we use sequence numbers to number the frames. The sequence numbers are based on modulo-2 arithmetic. The acknowledgment number always announces, in modulo-2 arithmetic, the sequence number of the next frame expected.
Note
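The modulo-2 bookkeeping in the note above amounts to very little code (a sketch; the function names are mine):

```python
def next_seq(s):
    """Stop-and-Wait ARQ sequence numbers advance modulo 2: 0, 1, 0, 1, ..."""
    return (s + 1) % 2

def ack_for(received_seq):
    """The ACK announces the next frame expected, again modulo 2."""
    return next_seq(received_seq)
```

So a correctly received frame 0 is acknowledged with ACK 1, and frame 1 with ACK 0.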
![Page 665: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/665.jpg)
Figure 11.11 Flow diagram for an example of Stop-and-Wait ARQ.
Frame 0 is sent and acknowledged. Frame 1 is lost and resent after the time-out. The resent frame 1 is acknowledged and the timer stops. Frame 0 is sent and acknowledged, but the acknowledgment is lost. The sender has no idea if the frame or the acknowledgment is lost, so after the time-out, it resends frame 0, which is acknowledged.
![Page 666: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/666.jpg)
Assume that, in a Stop-and-Wait ARQ system, the bandwidth of the line is 1 Mbps, and 1 bit takes 20 ms to make a round trip. What is the bandwidth-delay product? If the system data frames are 1000 bits in length, what is the utilization percentage of the link?
Solution: The bandwidth-delay product is (1 × 10^6 bps) × (20 × 10^-3 s) = 20,000 bits.
Example 11.4
The system can send 20,000 bits during the time it takes for the data to go from the sender to the receiver and then back again. However, the system sends only 1000 bits. We can say that the link utilization is only 1000/20,000, or 5 percent. For this reason, for a link with a high bandwidth or long delay, the use of Stop-and-Wait ARQ wastes the capacity of the link.
![Page 667: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/667.jpg)
What is the utilization percentage of the link in Example 11.4 if we have a protocol that can send up to 15 frames before stopping and worrying about the acknowledgments?
Solution: The bandwidth-delay product is still 20,000 bits. The system can send up to 15 frames or 15,000 bits during a round trip. This means the utilization is 15,000/20,000, or 75 percent. Of course, if there are damaged frames, the utilization percentage is much less because frames have to be resent.
Example 11.5
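The arithmetic in Examples 11.4 and 11.5 can be checked directly (a small Python sketch using the numbers from the text; variable names are mine):

```python
# Example 11.4/11.5 numbers: 1 Mbps link, 20 ms round trip
bandwidth_bps = 1_000_000
round_trip_s = 20e-3

bdp_bits = bandwidth_bps * round_trip_s  # bandwidth-delay product: 20,000 bits
frame_bits = 1000

stop_and_wait_util = frame_bits / bdp_bits   # one frame per round trip -> 5%
window_util = 15 * frame_bits / bdp_bits     # up to 15 frames in flight -> 75%

print(bdp_bits, stop_and_wait_util, window_util)  # 20000.0 0.05 0.75
```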
![Page 668: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/668.jpg)
In the Go-Back-N Protocol, the sequence numbers are modulo 2^m, where m is the size of the sequence number field in bits.
Note
![Page 669: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/669.jpg)
Figure 11.12 Send window for Go-Back-N ARQ
![Page 670: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/670.jpg)
The send window is an abstract concept defining an imaginary box of size 2^m − 1 with three variables: Sf, Sn, and Ssize.
The send window can slide one or more slots when a valid acknowledgment arrives.
Note
![Page 671: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/671.jpg)
Figure 11.13 Receive window for Go-Back-N ARQ
![Page 672: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/672.jpg)
The receive window is an abstract concept defining an imaginary box of size 1 with one single variable Rn. The window slides when a correct frame has arrived; sliding occurs one slot at a time.
Note
![Page 673: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/673.jpg)
Figure 11.15 Window size for Go-Back-N ARQ
![Page 674: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/674.jpg)
In Go-Back-N ARQ, the size of the send window must be less than 2^m; the size of the receive window is always 1.
Note
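The send-window variables Sf, Sn, and Ssize described above can be modeled in a few lines (an illustrative sketch of the window bookkeeping only, not a full ARQ implementation; the class and method names are mine):

```python
class GoBackNSender:
    """Go-Back-N send window with sequence numbers modulo 2**m."""
    def __init__(self, m):
        self.m = m
        self.Ssize = 2**m - 1  # send window must be less than 2**m
        self.Sf = 0            # first outstanding (unacknowledged) frame
        self.Sn = 0            # next frame to send

    def outstanding(self):
        return (self.Sn - self.Sf) % 2**self.m

    def can_send(self):
        return self.outstanding() < self.Ssize

    def send(self):
        seq = self.Sn
        self.Sn = (self.Sn + 1) % 2**self.m
        return seq

    def ack(self, Rn):
        """Rn announces the next frame expected; slide Sf forward to it."""
        while self.Sf != Rn and self.outstanding() > 0:
            self.Sf = (self.Sf + 1) % 2**self.m
```

With m = 3 the sender can have at most 7 frames outstanding; an ACK carrying Rn = 3 slides the window so that frames 0–2 are released and sending can resume.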
![Page 675: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/675.jpg)
Figure 11.16 Flow diagram for Example 11.6
This is an example of a case where the forward channel is reliable, but the reverse is not. No data frames are lost, but some ACKs are delayed and one is lost.
![Page 676: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/676.jpg)
Figure 11.17 Flow diagram for Example 11.7
Scenario showing what happens when a frame is lost.
![Page 677: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/677.jpg)
Stop-and-Wait ARQ is a special case of Go-Back-N ARQ in which the size of the send window is 1.
Note
![Page 678: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/678.jpg)
Figure 11.18 Send window for Selective Repeat ARQ
![Page 679: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/679.jpg)
Figure 11.19 Receive window for Selective Repeat ARQ
![Page 680: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/680.jpg)
Figure 11.21 Selective Repeat ARQ, window size
![Page 681: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/681.jpg)
In Selective Repeat ARQ, the size of the sender and receiver windows must be at most one-half of 2^m.
Note
![Page 682: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/682.jpg)
Figure 11.22 Delivery of data in Selective Repeat ARQ
![Page 683: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/683.jpg)
Figure 11.23 Flow diagram for Example 11.8
Scenario showing how Selective Repeat behaves when a frame is lost.
![Page 684: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/684.jpg)
11-6 HDLC
High-level Data Link Control (HDLC) is a bit-oriented protocol for communication over point-to-point and multipoint links. It implements the ARQ mechanisms we discussed in this chapter.
Topics discussed in this section:
Configurations and Transfer Modes
Frames
Control Field
![Page 685: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/685.jpg)
Figure 11.25 Normal response mode
Figure 11.26 Asynchronous balanced mode
![Page 686: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/686.jpg)
Figure 11.27 HDLC frames; control field format for the different frame types
![Page 687: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/687.jpg)
Table 11.1 U-frame control command and response
![Page 688: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/688.jpg)
Figure 11.31 Example of piggybacking with error
Figure 11.31 shows an exchange in which a frame is lost. Node B sends three data frames (0, 1, and 2), but frame 1 is lost. When node A receives frame 2, it discards it and sends a REJ frame for frame 1. Note that the protocol being used is Go-Back-N with the special use of an REJ frame as a NAK frame. The NAK frame does two things here: It confirms the receipt of frame 0 and declares that frame 1 and any following frames must be resent. Node B, after receiving the REJ frame, resends frames 1 and 2. Node A acknowledges the receipt by sending an RR frame (ACK) with acknowledgment number 3.
![Page 689: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/689.jpg)
11-7 POINT-TO-POINT PROTOCOL
Although HDLC is a general protocol that can be used for both point-to-point and multipoint configurations, one of the most common protocols for point-to-point access is the Point-to-Point Protocol (PPP). PPP is a byte-oriented protocol.
Topics discussed in this section:
Framing
Transition Phases
Multiplexing
Multilink PPP
![Page 690: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/690.jpg)
Figure 11.32 PPP frame format
PPP is a byte-oriented protocol using byte stuffing with the escape byte 01111101.
![Page 691: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/691.jpg)
Figure 11.33 Transition phases
![Page 692: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/692.jpg)
Figure 11.35 LCP packet encapsulated in a frame
![Page 693: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/693.jpg)
Table 11.2 LCP packets
![Page 694: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/694.jpg)
Table 11.3 Common options
![Page 695: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/695.jpg)
Figure 11.36 PAP packets encapsulated in a PPP frame
![Page 696: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/696.jpg)
Figure 11.37 CHAP packets encapsulated in a PPP frame
![Page 697: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/697.jpg)
Figure 11.38 IPCP packet encapsulated in PPP frame
Code value for IPCP packets
![Page 698: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/698.jpg)
®
www.intel.com/labs
A Survey of Advanced FEC Systems
Eric Jacobsen
Minister of Algorithms, Intel Labs
Communication Technology Laboratory/Radio Communications Laboratory
July 29, 2004
With a lot of material from Bo Xia, CTL/RCL
![Page 699: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/699.jpg)
Communication and Interconnect Technology Lab
Outline
What is Forward Error Correction?
The Shannon Capacity formula and what it means
A simple Coding Tutorial
A Brief History of FEC
Modern Approaches to Advanced FEC
Concatenated Codes
Turbo Codes
Turbo Product Codes
Low Density Parity Check Codes
![Page 700: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/700.jpg)
Information Theory Refresh
The Shannon Capacity Equation:
C = W log2(1 + P / N)
where C is the channel capacity (bps), W is the channel bandwidth (Hz), P is the transmit power, and N is the noise power. This gives two fundamental ways to increase data rate: increase the bandwidth or increase the signal-to-noise ratio.
C is the highest data rate that can be transmitted error free under the specified conditions of W, P, and N. It is assumed that P is the only signal in the memoryless channel and N is AWGN.
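The capacity formula is easy to evaluate numerically (a small Python sketch; the example bandwidth and SNR are mine, chosen to give a round answer):

```python
import math

def shannon_capacity(W_hz, P, N):
    """C = W * log2(1 + P/N): the error-free rate limit for an AWGN channel."""
    return W_hz * math.log2(1 + P / N)

# e.g. a 1 MHz channel at an SNR of 15 (about 11.8 dB):
print(shannon_capacity(1e6, 15, 1))  # 4,000,000 bps, since log2(16) = 4
```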
![Page 701: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/701.jpg)
A simple example
A system transmits messages of two bits each through a channel that corrupts each bit with probability Pe.
Tx Data = { 00, 01, 10, 11 }   Rx Data = { 00, 01, 10, 11 }
The problem is that it is impossible to tell at the receiver whether the two-bit symbol received was the symbol transmitted, or whether it was corrupted by the channel.
Tx Data = 01   Rx Data = 00
In this case a single bit error has corrupted the received symbol, but it is still a valid symbol in the list of possible symbols. The most fundamental coding trick is just to expand the number of bits transmitted so that the receiver can determine the most likely transmitted symbol by finding the valid codeword with the minimum Hamming distance to the received symbol.
![Page 702: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/702.jpg)
Continuing the Simple Example
A one-to-one mapping of symbol to codeword is produced:
Symbol : Codeword
00 : 0010
01 : 0101
10 : 1001
11 : 1110
The result is a systematic block code with Code Rate R = 1/2 and a minimum Hamming distance between codewords of dmin = 2.
A single-bit error can be detected and corrected at the receiver by finding the codeword with the closest Hamming distance. The most likely transmitted symbol will always be associated with the closest codeword, even in the presence of multiple bit errors.
This capability comes at the expense of transmitting more bits, usually referred to as parity, overhead, or redundancy bits.
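Minimum-distance decoding for this toy code is a one-liner over the codebook (a sketch using the mapping from the slide; ties between equally close codewords are broken arbitrarily here):

```python
# Symbol-to-codeword mapping from the example above
codebook = {'00': '0010', '01': '0101', '10': '1001', '11': '1110'}

def hamming(a, b):
    """Number of bit positions in which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    """Return the symbol whose codeword is closest to the received word."""
    return min(codebook, key=lambda s: hamming(codebook[s], received))
```

For example, receiving 0111 (codeword 0101 with one bit flipped) still decodes to symbol 01, because 0101 is the nearest valid codeword.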
![Page 703: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/703.jpg)
Coding Gain
The difference in performance between an uncoded and a coded system, considering the additional overhead required by the code, is called the Coding Gain. In order to normalize the power required to transmit a single bit of information (not a coded bit), Eb/No is used as a common metric, where Eb is the energy per information bit and No is the noise power in a unit-Hertz bandwidth.
The uncoded symbols require a certain amount of energy to transmit, in this case over period Tb. The coded symbols at R = 1/2 can be transmitted within the same period if the transmission rate is doubled. Using No instead of N normalizes the noise, considering the differing signal bandwidths.
![Page 704: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/704.jpg)
Coding Gain and Distance to Channel Capacity Example
[BER vs. Eb/No (dB) plot, roughly 1 to 11 dB on the horizontal axis and 1e-7 to 0.1 on the vertical axis. Curves: uncoded QPSK (the "matched-filter bound"), Viterbi-RS at R = 3/4, and Turbo Codes at R = 3/4 and R = 9/10 with RS, with the capacity limits for R = 3/4 and R = 9/10 marked. Annotated coding gains of ~5.95 dB and ~6.35 dB, and distances to capacity of ~1.4 dB and ~2.58 dB.]
These curves compare the performance of two Turbo Codes with a concatenated Viterbi-RS system. The TC with R = 9/10 appears to be inferior to the R = 3/4 Vit-RS system, but is actually operating closer to capacity.
![Page 705: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/705.jpg)
FEC Historical Pedigree (1950–1970)
1948: Shannon's paper
Hamming defines basic binary codes
Reed and Solomon define ECC technique
BCH codes proposed
Gallager's thesis on LDPCs
Forney suggests concatenated codes
Viterbi's paper on decoding convolutional codes
Berlekamp and Massey rediscover Euclid's polynomial technique and enable practical algebraic decoding
Early practical implementations of RS codes for tape and disk drives
![Page 706: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/706.jpg)
FEC Historical Pedigree II (1980–2000)
1982: Ungerboeck's TCM paper
TCM heavily adopted into standards
RS codes appear in CD players
First integrated Viterbi decoders (late 1980s)
1993: Berrou's Turbo Code paper
Turbo Codes adopted into standards (DVB-RCS, 3GPP, etc.)
Renewed interest in LDPCs due to TC research
2003: LDPC beats Turbo Codes for the DVB-S2 standard
![Page 707: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/707.jpg)
Block Codes
Codeword: [ Data Field | Parity ]
Systematic Block Code: if the codeword is constructed by appending redundancy to the payload data field, it is called a "systematic" code.

The "parity" portion can be actual parity bits, or generated by some other means, like a polynomial function or a generator matrix. The decoding algorithms differ greatly.

The code rate, R, can be adjusted by shortening the data field (using zero padding) or by "puncturing" the parity field.

Generally, a block code is any code defined with a finite codeword length.

Examples of block codes: BCH, Hamming, Reed-Solomon, Turbo Codes, Turbo Product Codes, LDPCs.

Essentially all iteratively-decoded codes are block codes.
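As a concrete (hypothetical) instance of a systematic block code, here is a minimal sketch of a (7,4) Hamming encoder whose generator matrix has the form [I | P], so the codeword is literally the data field followed by parity:

```python
import numpy as np

# Generator matrix for a systematic (7,4) Hamming code: G = [I | P].
# The first 4 bits of each codeword are the data field, the last 3 are parity.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(data_bits):
    """Append parity to the 4-bit data field: codeword = data . G (mod 2)."""
    return np.array(data_bits) @ G % 2

codeword = encode([1, 0, 1, 1])
print(codeword)   # first 4 bits are the data field itself (systematic)
```

The code rate here is R = k/n = 4/7; shortening the data field or puncturing the parity would adjust it, as described above.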
![Page 708: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/708.jpg)
Convolutional Codes
Convolutional codes are generated using a shift register to apply a polynomial to a stream of data. The resulting code can be systematic if the data is transmitted in addition to the redundancy, but it often isn't.

This is the convolutional encoder for the p = 133/171 polynomial that is in very wide use. This code has a constraint length of k = 7. Some low-data-rate systems use k = 9 for a more powerful code.

This code is naturally R = 1/2, but deleting selected output bits, or "puncturing" the code, can be done to increase the code rate.

Convolutional codes are typically decoded using the Viterbi algorithm, whose complexity increases exponentially with the constraint length. Alternatively, a sequential decoding algorithm can be used, which requires a much longer constraint length for similar performance.
Diagram from [1]
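The encoder described above can be sketched in a few lines. This is an illustrative software model; bit-ordering conventions for the taps vary between references, so treat the exact output ordering as an assumption:

```python
# Sketch of the rate-1/2, constraint-length K = 7 convolutional encoder with
# generator polynomials 133 and 171 (octal), as described above.
G1, G2 = 0o133, 0o171   # each generator spans the 7-bit register
K = 7

def conv_encode(bits):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)  # shift the new bit in
        out.append(bin(state & G1).count('1') % 2)   # first output stream
        out.append(bin(state & G2).count('1') % 2)   # second output stream
    return out

print(conv_encode([1, 0, 1, 1]))   # two output bits per input bit (R = 1/2)
```

Puncturing, as mentioned above, would simply delete selected entries of `out` according to a fixed pattern to raise the rate.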
![Page 709: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/709.jpg)
Convolutional Codes - II
Diagrams from [1]
This is the code-trellis, or state diagram, of a k = 2 convolutional code. Each end node represents a code state, and the branches represent codewords selected when a one or a zero is shifted into the encoder.

The correcting power of the code comes from the sparseness of the trellis. Since not all transitions from any one state to any other state are allowed, a state-estimating decoder that looks at the data sequence can estimate the input data bits from the state relationships.

The Viterbi decoder is a Maximum Likelihood Sequence Estimator that estimates the encoder state using the sequence of transmitted codewords.

This provides a powerful decoding strategy, but when it makes a mistake it can lose track of the sequence and generate a stream of errors until it reestablishes code lock.
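A minimal hard-decision Viterbi decoder can illustrate the sequence-estimation idea. This sketch uses a small K = 3, rate-1/2 code with generators 7/5 (octal) rather than the code in the slide's trellis, and shows a single channel error being corrected:

```python
# Hard-decision Viterbi decoding sketch for a small rate-1/2 code with
# generators 7 and 5 (octal), constraint length K = 3 (illustrative code).
K, G = 3, (0o7, 0o5)

def encode(bits):
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        out += [bin(state & g).count('1') % 2 for g in G]
    return out

def viterbi(rx):
    n_states = 1 << (K - 1)
    INF = float('inf')
    metric = [0] + [INF] * (n_states - 1)       # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(rx), 2):
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                    # try shifting in a 0 and a 1
                full = ((s << 1) | b) & ((1 << K) - 1)
                ns = full & (n_states - 1)      # next state: last K-1 bits
                exp = [bin(full & g).count('1') % 2 for g in G]
                dist = (exp[0] != rx[i]) + (exp[1] != rx[i + 1])
                if metric[s] + dist < new_metric[ns]:
                    new_metric[ns] = metric[s] + dist
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]     # survivor with best metric

data = [1, 0, 1, 1, 0, 0]       # trailing zeros flush the encoder
rx = encode(data)
rx[3] ^= 1                      # inject a single channel error
print(viterbi(rx))              # the error is corrected
```

The survivor-path bookkeeping here is what allows the decoder to "reestablish lock": every state keeps only its best incoming path at each step.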
![Page 710: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/710.jpg)
Concatenated Codes
A very common and effective code is the concatenation of an inner convolutional code with an outer block code, typically a Reed-Solomon code. The convolutional code is well-suited for channels with random errors, and the Reed-Solomon code is well suited to correct the bursty output errors common with a Viterbi decoder. An interleaver can be used to spread the Viterbi output error bursts across multiple RS codewords.
Data → RS Encoder → Interleaver → Conv. Encoder → Channel → Viterbi Decoder → De-Interleaver → RS Decoder → Data
(outer code: RS; inner code: convolutional)
![Page 711: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/711.jpg)
Concatenating Convolutional Codes: parallel and serial
Parallel concatenation: Data → CC Encoder 1, and via Interleaver → CC Encoder 2 → Channel → a Viterbi/APP Decoder per constituent code (one behind a De-Interleaver) → Combiner → Data

Serial concatenation: Data → CC Encoder 1 → Interleaver → CC Encoder 2 → Channel → Viterbi/APP Decoder → De-Interleaver → Viterbi/APP Decoder → Data
![Page 712: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/712.jpg)
Iterative Decoding of CCCs
Rx Data → Viterbi/APP Decoder 1 ⇄ Interleaver / De-Interleaver ⇄ Viterbi/APP Decoder 2 → Data
Turbo Codes add coding diversity by encoding the same data twice through concatenation. Soft-output decoders are used, which can provide reliability update information about the data estimates to each other for use during a subsequent decoding pass.

The two decoders, each working on a different codeword, can "iterate" and continue to pass reliability update information to each other in order to improve the probability of converging on the correct solution. Once some stopping criterion has been met, the final data estimate is provided for use.

These Turbo Codes provided the first known means of achieving decoding performance close to the theoretical Shannon capacity.
![Page 713: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/713.jpg)
MAP/APP Decoders: Maximum A Posteriori / A Posteriori Probability
• Two names for the same thing
• Basically runs the Viterbi algorithm across the data sequence in both directions
• ~Doubles complexity
• Becomes a bit estimator instead of a sequence estimator
• Optimal for convolutional Turbo Codes; needs two passes of MAP/APP per iteration
• Essentially 4x computational complexity over a single-pass Viterbi
• The Soft-Output Viterbi Algorithm (SOVA) is sometimes substituted as a suboptimal simplification compromise
![Page 714: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/714.jpg)
Turbo Code Performance
![Page 715: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/715.jpg)
Turbo Code Performance II
[Figure: measured BER vs. Eb/No (dB), with BER from 10^-1 down to 10^-8, for Uncoded, Vit-RS R = 1/2, Vit-RS R = 3/4, Vit-RS R = 7/8, Turbo Code R = 1/2, Turbo Code R = 3/4, and Turbo Code R = 7/8.]
The performance curves shown here were end-to-end measured performance in practical modems. The black lines are a PCCC Turbo Code, and the blue lines are for a concatenated Viterbi-RS decoder. The vertical dashed lines show QPSK capacity for R = 3/4 and R = 7/8. The capacity for QPSK at R = 1/2 is 0.2 dB.

The TC system clearly operates much closer to capacity. Much of the observed distance to capacity is due to implementation loss in the modem.
![Page 716: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/716.jpg)
Tricky Turbo Codes
Since the differential encoder has R = 1, the final code rate is determined by the amount of repetition used.
Data → Repeat Section (1:2) → Interleaver → Accumulate Section (D with feedback)

Repeat-Accumulate codes use simple repetition followed by a differential encoder (the accumulator). This enables iterative decoding with extremely simple codes. These types of codes work well in erasure channels.

Outer code: R = 1/2 (the repetition). Inner code: R = 1 (the accumulator).
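The repeat-accumulate structure above can be sketched in a few lines; the interleaver permutation here is an arbitrary illustrative choice:

```python
import random

# Sketch of an R = 1/2 repeat-accumulate encoder: repeat each bit 1:2,
# permute with a fixed (randomly chosen, illustrative) interleaver, then
# differentially encode with the accumulator.
def ra_encode(bits, perm):
    repeated = [b for b in bits for _ in range(2)]   # 1:2 repeat section
    interleaved = [repeated[p] for p in perm]        # interleaver
    out, acc = [], 0
    for b in interleaved:                            # accumulate section:
        acc ^= b                                     # y_i = y_(i-1) XOR x_i
        out.append(acc)
    return out

data = [1, 0, 1, 1]
perm = random.Random(0).sample(range(2 * len(data)), 2 * len(data))
code = ra_encode(data, perm)
print(len(code) // len(data))   # 2 output bits per data bit, so R = 1/2
```

Since the accumulator is rate 1, only the 1:2 repetition sets the final rate, as noted above.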
![Page 717: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/717.jpg)
Turbo Product Codes
[Diagram: a 2-dimensional data field with parity appended in each dimension: horizontal Hamming codes across the rows, vertical Hamming codes down the columns.]
The so-called "product codes" are codes created on the independent dimensions of a matrix. A common implementation arranges the data in a 2-dimensional array, and then applies a Hamming code to each row and column as shown.

The decoder then iterates between decoding the horizontal and vertical codes.

Since the constituent codes are Hamming codes, which can be decoded simply, the decoder complexity is much less than Turbo Codes. The performance is close to capacity for code rates around R = 0.7-0.8, but is not great for low code rates or short blocks. TPCs have enjoyed commercial success in streaming satellite applications.
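The product construction can be sketched as follows. For brevity this uses single-bit even parity per row and column in place of the Hamming constituent codes described above, which keeps the row-and-column structure visible:

```python
import numpy as np

# Product-code sketch: arrange data in a 2-D array and append a check to each
# row and each column. Single-bit even parity stands in for the Hamming
# constituent codes of a real TPC (a simplification for illustration).
def product_encode(data):                 # data: k x k array of 0/1
    data = np.asarray(data)
    rows = np.hstack([data, data.sum(axis=1, keepdims=True) % 2])
    full = np.vstack([rows, rows.sum(axis=0, keepdims=True) % 2])
    return full                           # (k+1) x (k+1) codeword array

cw = product_encode([[1, 0, 1],
                     [0, 1, 1],
                     [1, 1, 0]])
print(cw)
# Every row and every column of the result has even parity, so a single
# flipped bit is located by its failing row and its failing column.
```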
![Page 718: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/718.jpg)
Low Density Parity Check Codes: iterative decoding of simple parity check codes
• First developed by Gallager, with iterative decoding, in 1962!
• Published examples of good performance with short blocks
  - Kou, Lin, Fossorier, IEEE Trans. Inform. Theory, Nov. 2001
• Near-capacity performance with long blocks
  - Very near! Chung et al., "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit", IEEE Comm. Lett., Feb. 2001
• Complexity issues, especially in the encoder
• Implementation challenges: encoder, decoder memory
![Page 719: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/719.jpg)
LDPC Bipartite Graph
This is an example bipartite graph for an irregular LDPC code: variable nodes (the codeword bits) are connected by edges to check nodes.
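The bipartite graph can be represented directly by a parity-check matrix H: variable node j connects to check node i wherever H[i][j] = 1. The small H below is illustrative only; a real LDPC matrix would be far larger and sparser:

```python
# Illustrative (not standardized) parity-check matrix for a tiny code:
# 4 check nodes (rows), 6 variable nodes (columns).
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]

# One graph edge per 1 in H.
edges = [(i, j) for i, row in enumerate(H) for j, v in enumerate(row) if v]
print(len(edges))

def satisfies_checks(word):
    """A word is a codeword iff every check node sees even parity."""
    return all(sum(word[j] for j in range(len(word)) if row[j]) % 2 == 0
               for row in H)

print(satisfies_checks([0, 0, 0, 0, 0, 0]))   # the all-zero word always passes
```

The sparseness of H (few edges per node) is what keeps the per-iteration message-passing cost low.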
![Page 720: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/720.jpg)
Iteration Processing
1st half iteration: at each check node (one per parity bit), compute the forward metrics λ, the backward metrics ρ, and the edge messages r:

λ(i+1) = max*(λ(i), q(i))    ρ(i) = max*(ρ(i+1), q(i))    r(i) = max*(λ(i), ρ(i+1))

2nd half iteration: at each variable node (one per code bit), compute mV and the q message for each edge:

mV = mV(0) + Σ r's    q(i) = mV - r(i)

(The q messages flow from variable nodes to check nodes along the edges; the r messages flow back.)
![Page 721: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/721.jpg)
LDPC Performance Example
Figure is from [2]
LDPC performance can be very close to capacity. The closest performance to the theoretical limit ever achieved was with an LDPC, within 0.0045 dB of capacity.

The code shown here is a high-rate code and is operating within a few tenths of a dB of capacity.

Turbo Codes tend to work best at low code rates and not so well at high code rates. LDPCs work very well at both high and low code rates.
![Page 722: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/722.jpg)
Current State-of-the-Art

Block Codes
• Reed-Solomon widely used in CD-ROM and communications standards. Fundamental building block of basic ECC

Convolutional Codes
• K = 7 CC is very widely adopted across many communications standards
• K = 9 appears in some limited low-rate applications (cellular telephones)
• Often concatenated with RS for streaming applications (satellite, cable, DTV)

Turbo Codes
• Limited use due to complexity and latency: cellular and DVB-RCS
• TPCs used in satellite applications for reduced complexity

LDPCs
• Recently adopted in DVB-S2 and ADSL; being considered in 802.11n and 802.16e
• Complexity concerns, especially memory; expect broader consideration
![Page 723: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/723.jpg)
EE576 Dr. Kousa Linear Block Codes 723
Cyclic Codes for Error Detection
W. W. Peterson and D. T. Brown
by Maheshwar R Geereddy
![Page 724: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/724.jpg)
Notations
k = Number of binary digits in the message before encoding
n = Number of binary digits in the encoded message
n – k = number of check bits
k < n
Definition: A code is called cyclic if [x_(n-1) x_0 x_1 ... x_(n-2)] is a codeword whenever [x_0 x_1 ... x_(n-1)] is a codeword.
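The definition can be checked by brute force on a small example. This sketch builds the (7,4) cyclic code generated by 1 + X + X^3 (a generator chosen here for illustration) and verifies closure under cyclic shifts:

```python
from itertools import product

# Polynomials are tuples/lists of coefficients, low order first.
def poly_mul(a, b, n=7):
    """Multiply two GF(2) polynomials modulo X^n - 1."""
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj:
                out[(i + j) % n] ^= 1
    return tuple(out)

g = [1, 1, 0, 1]                        # coefficients of 1 + X + X^3
code = {poly_mul(m, g) for m in product([0, 1], repeat=4)}

print(len(code))                        # 2^4 = 16 codewords
shift = lambda c: c[-1:] + c[:-1]       # one cyclic shift (multiply by X)
print(all(shift(c) in code for c in code))   # every shift is a codeword
```

Closure holds because 1 + X + X^3 divides X^7 - 1, which is exactly the condition that makes the set of multiples of the generator shift-invariant.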
![Page 725: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/725.jpg)
b = length of a burst of errors
G(X) = message polynomial
P(X) = generator polynomial
R(X) = remainder on dividing X^(n-k) G(X) by P(X)
F(X) = encoded message polynomial
E(X) = error polynomial
H(X) = received encoded message polynomial, H(X) = F(X) + E(X)
![Page 726: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/726.jpg)
Polynomial Representation of Binary Information

It is convenient to think of binary digits as coefficients of a polynomial in the dummy variable X. Polynomials are written low-order-to-high-order, and are treated according to the laws of ordinary algebra, with one exception: addition is done modulo two.
![Page 727: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/727.jpg)
Algebraic Description of Cyclic Codes

A cyclic code is defined in terms of a generator polynomial P(X) of degree n-k.

If P(X) has X as a factor, then every code polynomial has X as a factor, and therefore a zero-order coefficient equal to zero.

Only codes for which P(X) is not divisible by X are considered.
![Page 728: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/728.jpg)
Encoded Message Polynomial F(X)

• Compute X^(n-k) G(X)
• R(X) = remainder of X^(n-k) G(X) / P(X)
• Add the remainder to X^(n-k) G(X):
  F(X) = X^(n-k) G(X) + R(X)

Since X^(n-k) G(X) = Q(X) P(X) + R(X) and addition is modulo two, adding R(X) cancels the remainder, so F(X) = Q(X) P(X).
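The encoding steps above can be sketched with polynomials stored as Python integers (bit i holding the coefficient of X^i). The generator used here is a common CRC-8 polynomial, and the message value is arbitrary:

```python
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (XOR long division)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

P = 0b100000111          # P(X) = X^8 + X^2 + X + 1 (a CRC-8 generator), n-k = 8
G = 0b1101011            # message polynomial G(X), arbitrary example value

shifted = G << 8         # X^(n-k) G(X): append n-k zero check positions
R = poly_mod(shifted, P) # R(X) = remainder of X^(n-k) G(X) / P(X)
F = shifted ^ R          # F(X) = X^(n-k) G(X) + R(X)  (mod-2 addition = XOR)

print(poly_mod(F, P))    # the encoded message is divisible by P(X): prints 0
```

Flipping any single bit of F leaves a nonzero remainder, which is Theorem 1 below in action: P(X) has more than one term, so all single errors are detected.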
![Page 729: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/729.jpg)
Principles of Error Detection and Error Correction
• An encoded message containing errors can be represented by
H (X) = F (X) + E (X)
H (X) = Received encoded message polynomial
F (X) = encoded message polynomial
E (X) = error polynomial
![Page 730: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/730.jpg)
Principles of Error Detection and Error Correction Contd…

• To detect errors, divide the received, possibly erroneous, message H(X) by P(X) and test the remainder.
• If the remainder is nonzero, an error has been detected.
• If the remainder is zero, either no error or an undetectable error has occurred.
![Page 731: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/731.jpg)
DETECTION OF SINGLE ERRORS

• Theorem 1: A cyclic code generated by a polynomial P(X) with more than one term detects all single errors.
• Proof:
  - A single error in the i'th position of an encoded message corresponds to an error polynomial X^i.
  - For detection of single errors, it is necessary that P(X) does not divide X^i.
  - Obviously no polynomial with more than one term divides X^i.
![Page 732: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/732.jpg)
DETECTION OF SINGLE ERRORS Contd…

• Theorem 2: Every polynomial divisible by 1 + X has an even number of terms.
• Also, if P(X) contains a factor 1 + X, any odd number of errors will be detected.
![Page 733: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/733.jpg)
Double and Triple Error Detecting Codes (Hamming Codes)

• Theorem 3: A code generated by the polynomial P(X) detects all single and double errors if the length n of the code is no greater than the exponent e to which P(X) belongs.
  - Detecting double errors requires that P(X) does not divide X^i + X^j for any i, j < n.
![Page 734: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/734.jpg)
Double and Triple Error Detecting Codes Contd…

• Theorem 4: A code generated by P(X) = (1 + X) P1(X) detects all single, double, and triple errors if the length n of the code is no greater than the exponent e to which P1(X) belongs.
  - Single and triple errors are detected by the presence of the factor 1 + X, as proved in Theorem 2.
  - Double errors are detected because P1(X) belongs to the exponent e >= n, as proved in Theorem 3.
  - Q.E.D.
![Page 735: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/735.jpg)
Detection of a Burst-Error

• A burst-error of length b will be defined as any pattern of errors for which the number of symbols between the first and last errors, including these errors, is b.
• Theorem 5: Any cyclic code generated by a polynomial of degree n-k detects any burst-error of length n-k or less.
  - Any burst polynomial can be factored as E(X) = X^i E1(X), where E1(X) is of degree b-1.
  - The burst is detected if P(X) does not divide E(X).
  - Since P(X) is assumed not to have X as a factor, it could divide E(X) only if it could divide E1(X).
  - But b <= n-k, therefore P(X) is of higher degree than E1(X), which implies that P(X) cannot divide E1(X).
  - Q.E.D.
• Theorem 6: The fraction of bursts of length b > n-k that are undetected is
  2^-(n-k) if b > n-k+1
  2^-(n-k-1) if b = n-k+1
![Page 736: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/736.jpg)
Detection of Two Bursts of Errors (Abramson and Fire Codes)

• Theorem 7: The cyclic code generated by P(X) = (1 + X) P1(X) detects any combination of two burst-errors of length two or less if the length of the code, n, is no greater than e, the exponent to which P1(X) belongs.
• Proof: there are four types of error patterns
  - E(X) = X^i + X^j
  - E(X) = (X^i + X^(i+1)) + X^j
  - E(X) = X^i + (X^j + X^(j+1))
  - E(X) = (X^i + X^(i+1)) + (X^j + X^(j+1))
![Page 737: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/737.jpg)
Other Cyclic Codes

• There are several important cyclic codes which have not been discussed in this paper.
  - BCH codes (developed by Bose, Chaudhuri, and Hocquenghem) are a very important type of cyclic code.
  - Reed-Solomon codes are a special type of BCH code that is commonly used in compact disc players.

Implementation

• Briefly, to encode a message G(X), n-k zeros are appended (i.e. the multiplication X^(n-k) G(X) is performed) and then X^(n-k) G(X) is divided by the polynomial P(X) of degree n-k. The remainder is then subtracted from X^(n-k) G(X), replacing the n-k zeros.
• The encoded message is divisible by P(X), which is used to check for errors.
![Page 738: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/738.jpg)
Implementation Contd…

• It can be seen that modulo 2 arithmetic has simplified the division considerably.
• Here we do not require the quotient, so the division to find the remainder can be described as follows:
  1) Align the coefficients of the highest-degree terms of the divisor and dividend and subtract (same as addition modulo two).
  2) Align the coefficients of the highest-degree terms of the divisor and the difference and subtract again.
  3) Repeat the process until the difference has a lower degree than the divisor.
• The hardware to implement this algorithm is a shift register and a collection of modulo-two adders.
• The number of shift register positions is equal to the degree of the divisor, P(X), and the dividend is shifted through high-order first, left to right.
![Page 739: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/739.jpg)
Implementation Contd…

• As the first one (the coefficient of the high-order term of the dividend) shifts off the end, we subtract the divisor by the following procedure:
  1. In the subtraction, the high-order terms of the divisor and dividend always cancel. As the high-order term of the dividend is shifted off the end of the register, this part of the subtraction is done automatically.
  2. Modulo-two adders are placed so that when a one shifts off the end of the register, the divisor is subtracted from the contents of the register. The register then contains a difference that is shifted until another one comes off the end, and then the process is repeated. This continues until the entire dividend has been shifted into the register.
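The register procedure described above can be modeled directly in software. This sketch uses a small illustrative divisor, P(X) = X^3 + X + 1:

```python
# Software model of the shift-register divider described above: the register
# length equals the degree of P(X); when a one shifts off the end, the
# remaining divisor coefficients are XORed into the register.
def shift_register_remainder(dividend_bits, p_bits):
    """Divide by P(X), shifting the dividend in high-order bit first."""
    degree = len(p_bits) - 1            # p_bits: coefficients, high to low
    reg = [0] * degree
    for bit in dividend_bits:
        out = reg[0]                    # coefficient shifting off the end
        reg = reg[1:] + [bit]           # shift the next dividend bit in
        if out:                         # a one came off: subtract divisor
            reg = [r ^ p for r, p in zip(reg, p_bits[1:])]
    return reg                          # remainder, high order first

# (X^6 + X^4 + X^3 + 1) mod (X^3 + X + 1); P(X) -> [1, 0, 1, 1]
rem = shift_register_remainder([1, 0, 1, 1, 0, 0, 1], [1, 0, 1, 1])
print(rem)   # remainder polynomial, high order first
```

The high-order term of the divisor never appears in the XOR because, as the text notes, it always cancels with the bit that just shifted off.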
![Page 740: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/740.jpg)
![Page 741: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/741.jpg)
Input: 100010001101011

Register contents after each shift (input bit -> register):
0 -> 10 00 1
0 -> 11 10 1
0 -> 11 01 1
1 -> 11 00 0
1 -> 11 10 0
0 -> 11 11 0
1 -> 01 11 1
0 -> 00 01 0
1 -> 00 00 1
1 -> 00 10 1
![Page 742: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/742.jpg)
![Page 743: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/743.jpg)
![Page 744: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/744.jpg)
Implementation Contd…

• To minimize the hardware, it is desirable to use the same register for both encoding and error detection.
• If the circuit of Fig. 3 is used for error detection, the remainder on dividing X^(n-k) H(X) by P(X) is obtained instead of the remainder on dividing H(X) by P(X).
• This makes no difference, because if H(X) is not evenly divisible by P(X), then obviously X^(n-k) H(X) will not be divisible either.
• Error correction is a much more difficult task than error detection.
• It can be shown that each different correctable error pattern must give a different remainder after division by P(X).
• Therefore error correction can be done.
Conclusion

• Cyclic codes for error detection provide high efficiency and ease of implementation.
• They provide standardization, such as CRC-8 and CRC-32.
![Page 745: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/745.jpg)
The Viterbi Algorithm
• An application of dynamic programming: the Principle of Optimality
• Search of Citation Index: 213 references since 1998
• Applications
  – Telecommunications
    • Convolutional codes, trellis codes
    • Inter-symbol interference in digital transmission
    • Continuous-phase transmission
    • Magnetic recording, partial-response signaling
    • Diverse others
  – Image restoration
  – Rainfall prediction
  – Gene sequencing
  – Character recognition
![Page 746: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/746.jpg)
Milestones
• Viterbi (1967): decoding of convolutional codes
• Omura (1968): the VA is optimal
• Kobayashi (1971): magnetic recording
• Forney (1973): classic survey recognizing the generality of the VA
• Rabiner (1989): influential survey paper on hidden Markov chains
![Page 747: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/747.jpg)
Example: Principle of Optimality

[Figure: Professor X chooses an optimum path on his trip to lunch, from the EE Bldg past nodes labeled Publish and Perish to the Faculty Club, crossing bridges labeled N and S; the path segments carry costs such as .2, .3, .5, .7, .8, 1.0, 1.2]

• Optimal: 6 adds; brute force: 8 adds
• With N bridges, optimal: 4(N+1) adds; brute force: (N-1)2^N adds
• Key idea: find the optimal path to each bridge
![Page 748: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/748.jpg)
Digital Transmission with Convolutional Codes

[Block diagram: Information Source → Convolutional Encoder → BSC (crossover probability p) → Viterbi Algorithm → Information Sink]

  Source sequence:   A^N = a_1, a_2, ..., a_N
  Encoded sequence:  c_1, c_2, ..., c_N
  Received sequence: B^N = b_1, b_2, ..., b_N
  Decoded sequence:  estimates of a_1, a_2, ..., a_N
![Page 749: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/749.jpg)
Maximum a Posteriori (MAP) Estimate

Define D(B^N, A^N) = Hamming distance between the sequences B^N and A^N, and let p = bit error probability of the BSC. Then the MAP estimate is

  max over a_1,...,a_N of P(b_1,...,b_N | a_1,...,a_N) = max p^D(A^N,B^N) (1-p)^(N - D(A^N,B^N))

Equivalently (for p < 1/2),

  min over a_1,...,a_N of D(A^N, B^N) log((1-p)/p)

i.e. choose the sequence at minimum Hamming distance from the received one.

Brute force = exponential growth with N
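The equivalence between maximizing the BSC likelihood and minimizing Hamming distance is easy to check numerically; a small sketch (function and variable names are illustrative):

```python
def bsc_likelihood(d, n, p):
    """P(received | sent) over a BSC when the two words differ in d of n bits."""
    return p ** d * (1 - p) ** (n - d)

# For p < 1/2 the likelihood is strictly decreasing in the distance d,
# so maximizing it over codewords = minimizing the Hamming distance.
n, p = 7, 0.1
likelihoods = [bsc_likelihood(d, n, p) for d in range(n + 1)]
assert all(likelihoods[d] > likelihoods[d + 1] for d in range(n))
```

At p = 1/2 the likelihood no longer depends on d, which is why the equivalence needs p < 1/2.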
![Page 750: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/750.jpg)
Convolutional Codes: Encoding a Sequence

Example: (3,1) code; efficiency = input/output = 1/3 (output, input labeling).

[Figure: shift-register encoder with delay elements T holding the state bits s1 s2; consistent with the trellis table on a later slide, the three output bits for input i are o1 = i, o2 = i + s1, o3 = i + s1 + s2 (modulo-2 sums)]

Initial state: s1 s2 = 0 0

Input:  1 1 0 1 0 0
Output: 111 100 010 110 011 001 000
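The encoder can be written out directly. The generator taps below are an inference from the state/trellis tables on the following slides (o1 = i, o2 = i XOR s1, o3 = i XOR s1 XOR s2); with them, the input 110100 followed by one flushing zero reproduces the slide's output sequence:

```python
def conv_encode(bits, flush=1):
    """(3,1) convolutional encoder; the state (s1, s2) holds the two most
    recent inputs. Taps inferred from the trellis table:
    for input i the outputs are (i, i^s1, i^s1^s2)."""
    s1 = s2 = 0
    out = []
    for i in list(bits) + [0] * flush:    # flush zeros drive the state back toward 00
        out.append((i, i ^ s1, i ^ s1 ^ s2))
        s1, s2 = i, s1                    # shift the new input into the register
    return out

groups = conv_encode([1, 1, 0, 1, 0, 0])
print(["".join(map(str, g)) for g in groups])
# -> ['111', '100', '010', '110', '011', '001', '000']
```

The rate is 1/3, as the slide states: three output bits per input bit.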
![Page 751: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/751.jpg)
Markov Chain for a Convolutional Code

[Fig. 2.14: state-transition diagram over the states 00, 10, 01, 11; each arc is labeled input - output: 1-111, 1-100, 0-011, 0-000, 1-101, 0-010, 0-001, 1-110]
![Page 752: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/752.jpg)
Trellis Representation

[Figure: one trellis section. Current states s1s2 = 00, 01, 10, 11 on the left, next states on the right; each branch is labeled with its output triple, with the legend distinguishing input-0 and input-1 branches:
  from 00: output 000 (input 0) or 111 (input 1)
  from 01: 001 (0) or 110 (1)
  from 10: 011 (0) or 100 (1)
  from 11: 010 (0) or 101 (1)]
![Page 753: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/753.jpg)
Iteration for Optimization

The shift-register contents define the state, so minimizing over input sequences is the same as minimizing over state sequences:

  min over a_1,...,a_N of D(A^N, B^N) = min over s_1,...,s_N of D(A^N, B^N)

For the memoryless BSC the distance is additive:

  D(A^N, B^N) = sum_{i=1}^{N} d(a_i, b_i)

Therefore

  min over s_1,...,s_N of D(A^N, B^N)
    = min over s_1,...,s_N of ( D(A^{N-1}, B^{N-1}) + d(a_N, b_N) )
    = min over s_N of  min over s_1,...,s_{N-1}/s_N of ( D(A^{N-1}, B^{N-1}) + d(a_N, b_N) )
    = min over s_N of ( d(a_N, b_N) + min over s_1,...,s_{N-1}/s_N of D(A^{N-1}, B^{N-1}) )

where s_1,...,s_{N-1}/s_N denotes state sequences consistent with the final state s_N.
![Page 754: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/754.jpg)
Key step!

Conditioning the inner minimization on s_N is redundant once s_{N-1} is fixed:

  min over s_1,...,s_{N-2}/s_{N-1},s_N of D(A^{N-1}, B^{N-1}) = min over s_1,...,s_{N-2}/s_{N-1} of D(A^{N-1}, B^{N-1})

so the recursion becomes

  min over s_1,...,s_N of D(A^N, B^N)
    = min over s_N/s_{N-1} of ( d(a_N, b_N) [incremental distance]
        + min over s_1,...,s_{N-2}/s_{N-1} of D(A^{N-1}, B^{N-1}) [accumulated distance] )

Linear growth in N
![Page 755: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/755.jpg)
Deciding Previous State

The same recursion is applied at every step i:

  min over s_1,...,s_{i-1}/s_i of D(A^i, B^i)
    = min over s_{i-1}/s_i of ( d(a_i, b_i) + min over s_1,...,s_{i-2}/s_{i-1} of D(A^{i-1}, B^{i-1}) )

[Figure: searching the previous states. For a state at stage i (e.g. 00, 10), each candidate predecessor at stage i-1 contributes its accumulated distance D(A^{i-1}, B^{i-1}) (e.g. 4 and 2) plus the incremental distance d(a_i, b_i) of its branch (e.g. 1, 2, 4), found by comparing the branch output a_i (e.g. 000, 001) with the received b_i = 010; the minimizing predecessor survives.]
![Page 756: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/756.jpg)
Optimum Sequence Detection

• Viterbi Algorithm: the shortest path through the trellis detects the sequence
• First step: shortest path (Hamming distance) to s0
• Then trace through the successive states s0, s1, s2, s3
• Trellis codes: the same algorithm with Euclidean distance
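The survivor recursion just described can be sketched as a hard-decision Viterbi decoder for the (3,1) code of the encoding example (taps as inferred there; all names are illustrative). It keeps, for each state, only the minimum-distance path into that state:

```python
def viterbi_decode(triples):
    """Hard-decision Viterbi decoding for the rate-1/3 code with outputs
    (i, i^s1, i^s1^s2) and state (s1, s2); minimizes total Hamming
    distance over the trellis, assuming the encoder starts in state (0,0)."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    INF = float("inf")
    dist = {s: (0 if s == (0, 0) else INF) for s in states}   # accumulated distances
    paths = {s: [] for s in states}                           # survivor input sequences
    for r in triples:
        ndist = {s: INF for s in states}
        npaths = {}
        for (s1, s2) in states:
            if dist[(s1, s2)] == INF:
                continue
            for i in (0, 1):
                out = (i, i ^ s1, i ^ s1 ^ s2)                # branch output
                d = dist[(s1, s2)] + sum(a != b for a, b in zip(out, r))
                nxt = (i, s1)                                 # next state
                if d < ndist[nxt]:                            # keep only the survivor
                    ndist[nxt] = d
                    npaths[nxt] = paths[(s1, s2)] + [i]
        dist, paths = ndist, npaths
    best = min(states, key=lambda s: dist[s])                 # best final state
    return paths[best]
```

With one channel bit flipped, the decoder still recovers the slide's input 110100 (plus the flushing zero), since the path at distance 1 beats every alternative.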
![Page 757: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/757.jpg)
Inter-symbol Interference

[Block diagram: Transmitter → Channel → Equalizer → VA → Decisions]

Transmitted signal: sum_{i=1}^{N} a_i p(t - iT)

Received signal: z(t) = sum_{i=1}^{N} a_i h(t - iT) + n(t)

Pulse correlations: r_{i-j} = ∫ h(t - iT) h(t - jT) dt

Finite-memory channel: r_{i-j} = 0 for |i - j| > m
![Page 758: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/758.jpg)
AWGN Channel: MAP Estimate

  min over a_1,...,a_N of ∫ ( z(t) - sum_{i=1}^{N} a_i h(t - iT) )^2 dt

i.e. the Euclidean distance between the received signal and each possible transmitted signal.

Simplification (expanding the square and dropping the data-independent term):

  min over a_1,...,a_N of ( -2 sum_{i=1}^{N} a_i Z_i + sum_{i=1}^{N} sum_{j=1}^{N} a_i a_j r_{i-j} )

where Z_i = ∫ z(t) h(t - iT) dt is the output of the matched filter.
![Page 759: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/759.jpg)
Viterbi Algorithm for ISI

With channel memory m, define the state as the shift-register contents s_k = { a_{k-m}, ..., a_{k-1} }; the state is the number of symbols in memory.

Accumulated distance:

  D_k(Z_1,...,Z_k; s_1,...,s_{k+1}) = sum_{i=1}^{k} ( -2 a_i Z_i + sum_j a_i a_j r_{i-j} )

Incremental distance:

  d_k(Z_k; s_{k+1}, s_k) = -2 a_k Z_k + a_k^2 r_0 + 2 a_k sum_{i=k-m}^{k-1} a_i r_{k-i}

Recursion, as before:

  min over s_1,...,s_k/s_{k+1} of D_k
    = min over s_k/s_{k+1} of ( d_k(Z_k; s_{k+1}, s_k) + min over s_1,...,s_{k-1}/s_k of D_{k-1} )
![Page 760: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/760.jpg)
Magnetic Recording

• Magnetization pattern: m(t) switches between two levels at the symbol boundaries kT according to the data a_k
• The magnetic flux passes over the heads, and reading differentiates the pulses:

  e(t) = (d m(t)/dt) * h(t) = sum_k 2 x_k h(t - kT)

  where x_k is set by consecutive symbols (x_k = a_k - a_{k-1}) and h(t) is a Nyquist pulse

• Sampled output: controlled ISI, so the same model (and the VA) applies as for partial-response signaling
![Page 761: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/761.jpg)
Continuous Phase FSK

Transmitted signal: y = cos( ω_{a_k} t + x_k π ),  kT ≤ t ≤ (k+1)T

Digital input sequence: a_1, a_2, ..., a_N

Constraint (continuous phase): ω_{a_k} kT + x_k π = ω_{a_{k-1}} kT + x_{k-1} π  (mod 2π)

Example, binary signaling: one tone completes whole cycles per interval, the other an odd number of half cycles, so

  x_k = 0 if an even number of ones has been sent, 1 if an odd number

[Figure: the two binary CPFSK waveforms over one signaling interval, 0 ≤ t ≤ 1: whole cycles vs an odd number of half cycles]
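The phase index x_k just tracks the parity of the ones sent so far, since each transmitted one leaves the phase an odd number of half cycles ahead. A tiny sketch (assuming x_k is the index in effect during interval k; names are illustrative):

```python
def phase_indices(bits):
    """x_k = parity of the number of ones sent before interval k;
    each transmitted 1 adds an odd number of half cycles, toggling x."""
    x, xs = 0, []
    for b in bits:
        xs.append(x)      # index in effect while bit b is being sent
        x ^= b            # a 1 toggles the parity for the next interval
    return xs

print(phase_indices([1, 0, 1, 1]))   # -> [0, 1, 1, 0]
```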
![Page 762: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/762.jpg)
Merges and State Reduction

• All paths eventually merge
• Computation is of order (number of states)^2
• To reduce complexity, carry only the high-probability states, or force merges

[Figure: optimal paths through the trellis]
![Page 763: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/763.jpg)
Effect of Blurring

[Figure: input pixel → optical channel → AWGN → blurred output]

Optical output signal:

  s(i,j) = sum_{l=-L}^{L} sum_{m=-L}^{L} a(i-l, j-m) h(l,m) + n(i,j)

where L is the optical blur width. Blurring is analogous to ISI.
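The blurring sum can be sketched directly in pure Python (noise term omitted; the nested-list image representation and indexing conventions are illustrative):

```python
def blur(a, h, L):
    """2-D blur as ISI: s(i,j) = sum_{l,m=-L..L} a(i-l, j-m) h(l,m).
    a is an image indexed [row][col]; h is (2L+1)x(2L+1), indexed
    h[l+L][m+L]; pixels outside the image are treated as 0."""
    R, C = len(a), len(a[0])
    s = [[0.0] * C for _ in range(R)]
    for i in range(R):
        for j in range(C):
            for l in range(-L, L + 1):
                for m in range(-L, L + 1):
                    if 0 <= i - l < R and 0 <= j - m < C:
                        s[i][j] += a[i - l][j - m] * h[l + L][m + L]
    return s
```

A single lit pixel under a uniform 3x3 kernel spreads evenly over its neighborhood, which is exactly the 2-D analogue of a symbol smearing over adjacent signaling intervals.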
![Page 764: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/764.jpg)
Row Scan

• Scan the image row by row; the VA finds the optimal row sequence
• Known state transitions and decision feedback are utilized for state reduction
![Page 765: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/765.jpg)
Hidden Markov Chain

• Data suggests Markovian structure
• Estimate initial state probabilities
• Estimate transition probabilities
• VA used for estimation of probabilities
• Iterate
![Page 766: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/766.jpg)
Rainfall Prediction

[Figure: hidden Markov chain with hidden states No rain, Showery dry, Showery wet, Rainy dry, Rainy wet, inferred from rainfall observations]
![Page 767: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/767.jpg)
DNA Sequencing

• DNA: double helix
  – Sequences of four nucleotides: A, T, C and G
  – Pairing between strands: A bonds with T, and C with G
• Genes
  – Made up of codons, i.e. triplets of adjacent nucleotides
  – Genes can overlap

[Figure: nucleotide sequence CGGATTC, with codon A appearing in three overlapping genes (Gene 1, Gene 2, Gene 3)]
![Page 768: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/768.jpg)
Hidden Markov Chain: Tracking Genes

[Figure: state diagram with states H, M1-M4, S, P1-P4, E]

• S: start, the first codon of a gene
• P1-P4: +1, ..., +4 from the start
• E: stop
• H: gap
• M1-M4: -1, ..., -4 from the start
• Initial and transition probabilities are known
![Page 769: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/769.jpg)
Recognizing Handwritten Chinese Characters

• Input: text-line images
• Estimate the stroke width
• Set up an m × n grid
• Estimate initial and transition probabilities
• Detect possible segmentation paths by VA (results on the next slide)
![Page 770: DC Error Correcting Codes](https://reader038.vdocuments.site/reader038/viewer/2022102422/5474ee20b4af9f985d8b4590/html5/thumbnails/770.jpg)
Example: Segmenting Handwritten Characters

• All possible segmentation paths
• Removal of overlapping paths
• Elimination of redundant paths
• Discarding of near paths