Channel Coding (York University) — lecture slide transcript
Channel Coding
Chapter 6
Topics
• Waveform Coding and Structured Sequences
• Types of Error Control
• Structured Sequences
• Linear Block Codes
• Error Detection and Correction
• Cyclic Codes
Introduction
• Channel coding is the class of signal transformations designed to improve communication performance by enabling the transmitted signal to better withstand the effects of channel impairments.
Introduction
• Waveform coding: transforms the waveform set into a better one that is easier to detect and less subject to errors.
• Structured sequences: add redundant bits that can be used to detect (or correct) errors.
Antipodal/Orthogonal Signals
• Same ideas as in Chapter 3
Waveform Coding
• Transform a waveform set into another waveform set to minimize PB
• The most popular types are orthogonal and biorthogonal codes.
• The smallest possible cross-correlation is −1.
• This is achievable only when M = 2; in general, a cross-correlation of 0 (orthogonal) is used for M > 2.
Waveform Coding
• The cross-correlation between two signals is a measure of the distance between them.
• If each waveform is represented by a sequence of pulses with levels +1, −1, then

z_ij = (number of digit agreements − number of digit disagreements) / (total number of digits in the sequence)
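As an illustration (a Python sketch, not part of the original slides; the function name is my own), the cross-correlation formula above can be computed directly for ±1 pulse sequences:

```python
# z_ij = (agreements - disagreements) / total digits, for +/-1 sequences.
def cross_correlation(x, y):
    assert len(x) == len(y)
    agree = sum(1 for a, b in zip(x, y) if a == b)
    disagree = len(x) - agree
    return (agree - disagree) / len(x)

# Identical sequences correlate to +1, antipodal ones to -1.
s1 = [+1, -1, +1, -1]
print(cross_correlation(s1, s1))                # -> 1.0
print(cross_correlation(s1, [-a for a in s1]))  # -> -1.0
print(cross_correlation(s1, [+1, +1, -1, -1]))  # -> 0.0 (orthogonal)
```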
Orthogonal Codes

An orthogonal code set can be generated from a Hadamard matrix:

H1 = [0 0
      0 1]

Hk = [H(k−1)  H(k−1)
      H(k−1)  H̄(k−1)]

where H̄ denotes the bitwise complement. For example,

H2 = [0 0 0 0
      0 1 0 1
      0 0 1 1
      0 1 1 0]

Data set → orthogonal code set: each k-bit message in the data set is mapped to one row of the Hadamard matrix Hk.
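The recursion above is easy to code. A minimal Python sketch (not from the slides; starting from a 1×1 zero matrix so that k steps produce the 2^k × 2^k matrix Hk):

```python
def hadamard(k):
    """Build H_k by H_k = [[H_(k-1), H_(k-1)], [H_(k-1), complement(H_(k-1))]]."""
    H = [[0]]  # starting point: a single 0
    for _ in range(k):
        H = [row + row for row in H] + \
            [row + [b ^ 1 for b in row] for row in H]
    return H

for row in hadamard(2):
    print(''.join(map(str, row)))
# -> 0000, 0101, 0011, 0110: the orthogonal code set for 2-bit data
```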
Biorthogonal Codes

A biorthogonal set is formed from an orthogonal set and its complement:

Bk = [H(k−1)
      H̄(k−1)]

For example,

B3 = [0 0 0 0
      0 1 0 1
      0 0 1 1
      0 1 1 0
      1 1 1 1
      1 0 1 0
      1 1 0 0
      1 0 0 1]

The 3-bit data set 000, 001, …, 111 maps one-to-one onto the 8 rows of B3.
Biorthogonal Codes
z_ij = {  1    for i = j
         −1    for i ≠ j, |i − j| = M/2
          0    for i ≠ j, |i − j| ≠ M/2
Biorthogonal codes require half as many bits per symbol as orthogonal codes.
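The cross-correlation pattern above can be verified numerically. A Python sketch (my own construction of B3 from H2, following the slide's definition):

```python
# Build the biorthogonal set B_3 = [H_2 ; complement(H_2)] and check
# the stated cross-correlation pattern.
def hadamard(k):
    H = [[0]]
    for _ in range(k):
        H = [r + r for r in H] + [r + [b ^ 1 for b in r] for r in H]
    return H

H2 = hadamard(2)
B3 = H2 + [[b ^ 1 for b in row] for row in H2]  # 8 codewords, 4 bits each

def z(x, y):  # cross-correlation of 0/1 words, mapped to +/-1 levels
    return sum(1 if a == b else -1 for a, b in zip(x, y)) / len(x)

M = len(B3)
for i in range(M):
    for j in range(M):
        expected = 1 if i == j else (-1 if abs(i - j) == M // 2 else 0)
        assert z(B3[i], B3[j]) == expected
print("biorthogonal correlation pattern verified for B_3")
```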
Transorthogonal Codes
Transorthogonal (Simplex) codes are generated from an orthogonal set by deleting the first digit of each codeword.
z_ij = {  1            for i = j
         −1/(M − 1)    for i ≠ j
Simplex codes require the least Eb/No for a specified symbol error rate.
For large M, all three code types (orthogonal, biorthogonal, simplex) perform essentially identically.
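The simplex construction and its cross-correlation can be checked directly. A Python sketch (my own code, following the slide's "delete the first digit" construction):

```python
# Delete the first digit of each orthogonal (Hadamard) codeword and
# verify z_ij = -1/(M-1) for all i != j.
def hadamard(k):
    H = [[0]]
    for _ in range(k):
        H = [r + r for r in H] + [r + [b ^ 1 for b in r] for r in H]
    return H

simplex = [row[1:] for row in hadamard(3)]  # M = 8 codewords, 7 bits each
M = len(simplex)

def z(x, y):  # cross-correlation of 0/1 words, mapped to +/-1 levels
    return sum(1 if a == b else -1 for a, b in zip(x, y)) / len(x)

for i in range(M):
    for j in range(M):
        expected = 1 if i == j else -1 / (M - 1)
        assert abs(z(simplex[i], simplex[j]) - expected) < 1e-12
print("simplex cross-correlations all equal -1/(M-1) off-diagonal")
```

The first Hadamard column is all zeros, so deleting it removes one guaranteed agreement: distinct rows go from correlation 0 over 8 digits to (3 − 4)/7 = −1/7.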
Waveform Coding
• Each k-bit message selects one of the 2^k generator codewords.
• The resulting 2^k code bits are sent to the modulator (BPSK).
Waveform Coding
• At the receiver, the signal is demodulated and fed to M correlators (or matched filters).
• The correlation is done over a codeword duration T = 2^k · Tc.
• For real-time communication, the codeword duration should equal the message duration: T = 2^k · Tc = k · Tb.
Waveform Coding
For orthogonal coding and AWGN, the correlator outputs are (on average) 0 except for the one matched to the transmitted codeword.

Antipodal:    P_B = Q(√(2·E_s/N0))
Orthogonal:   P_M(M) ≤ (M − 1) · Q(√(E_s/N0))

where E_s is the energy per symbol.
Structured Sequences

• In structured sequences, extra bits are added to the message to reduce the probability of error (or to detect errors).
• A k-bit data block is transmitted as an n-bit message (n − k added bits).
• The code is referred to as an (n, k) code.
• k/n is called the code rate.
Channel Models
• Discrete Memoryless Channel (DMC)
– A discrete input alphabet, a discrete output alphabet
– P(j|i) is the probability of receiving j given that i was sent
– If the input is U = u1, u2, …, uN and the output is Z = z1, z2, …, zN, then

P(Z|U) = ∏_{m=1}^{N} P(z_m | u_m)
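The memoryless product above is a one-liner in code. A minimal Python sketch (the BSC-style transition table and crossover probability p = 0.1 are my own illustrative assumptions):

```python
# DMC likelihood P(Z|U) = product over m of P(z_m | u_m).
from math import prod

p = 0.1  # assumed crossover probability for illustration
P = {(0, 0): 1 - p, (1, 1): 1 - p, (0, 1): p, (1, 0): p}  # P[(z, u)]

def likelihood(Z, U):
    return prod(P[(z, u)] for z, u in zip(Z, U))

U = [1, 0, 1, 1]
Z = [1, 0, 0, 1]          # one symbol flipped
print(likelihood(Z, U))   # p * (1-p)**3
```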
Channel Models
• Gaussian Channel
– A generalization of the DMC
– The input is a discrete alphabet; the output is the input plus Gaussian noise
– The demodulator output is a continuous alphabet, or a quantized version of it (>2 levels); the demodulator is said to make a soft decision
– Since the decision is not hard (0/1), a probability of symbol error is not meaningful at the demodulator output
Channel Models
• Binary Symmetric Channel
– The BSC is a special case of the DMC
– The alphabet has 2 elements: 0, 1
– P(0|1) = P(1|0) = p
– P(1|1) = P(0|0) = 1 − p
– The demodulator output is either 1 or 0; the demodulator is said to make a hard decision
Single Parity Check Codes
• Even parity or odd parity
• Rate = k/(k+1)

Probability of j errors in n symbols:

P(j, n) = C(n, j) · p^j · (1 − p)^(n−j)

Probability of undetected error (only even numbers of errors pass the parity check):

P_nd = Σ_{j=1}^{⌊n/2⌋} C(n, 2j) · p^(2j) · (1 − p)^(n−2j)
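The undetected-error sum above is straightforward to evaluate. A Python sketch (the rate-4/5 example and p = 10^−3 are my own illustrative choices):

```python
# A single parity check misses exactly the even-weight error patterns.
from math import comb

def p_undetected(n, p):
    return sum(comb(n, 2 * j) * p ** (2 * j) * (1 - p) ** (n - 2 * j)
               for j in range(1, n // 2 + 1))

# Example: k = 4, n = 5 (rate 4/5) on a BSC with p = 1e-3.
print(p_undetected(5, 1e-3))  # roughly C(5,2) * p^2 = 1e-5
```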
Rectangular Code
• Data are arranged in an M×N matrix.
• A parity check is appended to each row and to each column.
• Code rate = k/n = MN / ((M+1)(N+1))
Rectangular Code
• If the code can correct t and fewer errors, the probability of a block (message) error is bounded by

P_M ≤ Σ_{j=t+1}^{n} C(n, j) · p^j · (1 − p)^(n−j)

• For small p, the first term (j = t + 1) dominates.
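The bound above is easy to tabulate. A Python sketch (the (7, t = 1) example and p = 10^−2 are my own illustrative choices):

```python
# With t-error correction, a block is decoded wrongly only when more
# than t channel errors occur.
from math import comb

def p_block_error(n, t, p):
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(t + 1, n + 1))

print(p_block_error(7, 1, 1e-2))  # dominated by the j = 2 term, ~2e-3
```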
Rectangular Codes

Coding gain: the reduction in required E_b/N0, at a given error performance, due to coding:

G (dB) = (E_b/N0)_u (dB) − (E_b/N0)_c (dB)

where u denotes the uncoded and c the coded system. The energy per coded channel symbol is E_c = R·E_b, with code rate R = k/n.
Trade-offs
Error performance vs. BW
Power vs. BW
Data rate vs. BW
Linear Block Codes
• An (n,k) code maps length-k tuples to length-n tuples.
• The set of all binary n-tuples forms a vector space Vn over the binary field, which has 2 operations: multiplication and addition (exclusive-or).
• A subset S of Vn is called a subspace of Vn iff
– the all-zeros vector is in S, AND
– the sum of any 2 vectors in S is also in S.
Linear Block Codes
• In Vn, the 2^n tuples can be represented as points in the space.
• Some of these points are in S.
• There are 2 contradicting objectives:
– Code efficiency: we want to pack Vn with elements of S.
– Error detection: we want the elements of S to be as far away as possible from each other.
Linear Block Codes
• Consider the (6,3) code below.
• Encoding could be done by table lookup.
• The lookup table stores 2^k codewords of n bits each — too large for large k.

Message   Codeword
000       000000
100       110100
010       011010
110       101110
001       101001
101       011101
011       110011
111       000111
Generator Matrix
• Since the codewords form a k-dimensional subspace of the n-dimensional space, we can use the basis of the subspace to generate any element in the subspace.
• If these basis vectors (linearly independent n-tuples) are V1, V2, …, Vk, any codeword can be represented as
• U = m1·V1 + m2·V2 + … + mk·Vk
Generator Matrix

G = [V1     [v11  v12  …  v1n
     V2   =  v21  v22  …  v2n
     ⋮        ⋮
     Vk]     vk1  vk2  …  vkn]

U = mG, where m = [m1, m2, …, mk]

For the example in the previous table:

G = [V1     [1 1 0 1 0 0
     V2   =  0 1 1 0 1 0
     V3]     1 0 1 0 0 1]

The fourth message in the table, m = [1 1 0], is encoded as

U4 = 1·V1 + 1·V2 + 0·V3 = 1 0 1 1 1 0
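Encoding U = mG over GF(2) takes only a few lines. A Python sketch using the (6,3) generator rows V1, V2, V3 from the table above (the function name is my own):

```python
# U = mG over GF(2) for the (6,3) code.
G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def encode(m, G):
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(m))) % 2 for j in range(n)]

print(encode([1, 1, 0], G))  # -> [1, 0, 1, 1, 1, 0], matching the table
```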
Systematic Linear Block Codes
• A systematic linear block code is a mapping from a k-dimensional space to n-dimensional space such that the k-bit message is a part of the n-bit codeword.
G = [P | Ik]

We need only store the P part of the matrix.
Systematic Linear Block Codes

G = [P | Ik] = [p11  p12  …  p1,n−k  1 0 … 0
                p21  p22  …  p2,n−k  0 1 … 0
                 ⋮                      ⋮
                pk1  pk2  …  pk,n−k  0 0 … 1]

U = mG = [u1, u2, …, un] = [p1, p2, …, p_{n−k}, m1, m2, …, mk]

where the parity bits are

p_i = m1·p1i + m2·p2i + … + mk·pki   (mod 2),   i = 1, …, n − k

and the last k bits of the codeword (the message part) are the message itself.
Parity Check Matrix
• A matrix H satisfying GH^T = 0 is called the parity-check matrix.
• The rows of G are orthogonal to the rows of H.
• For a systematic code, H = [I_{n−k} | P^T], so H^T = [I_{n−k}; P] (I_{n−k} stacked on P).
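The orthogonality GH^T = 0 can be checked mechanically. A Python sketch for the (6,3) code above (the helper names are my own):

```python
# Verify G H^T = 0 over GF(2) for the (6,3) code, with H = [I3 | P^T].
G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
H = [[1, 0, 0, 1, 0, 1],
     [0, 1, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 1]]

def matmul_gf2(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

Ht = [list(col) for col in zip(*H)]  # transpose
print(matmul_gf2(G, Ht))             # -> 3x3 all-zeros matrix
```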
For any codeword U = [p1, …, p_{n−k}, m1, …, mk] and H^T = [I_{n−k}; P]:

U H^T = [p1 + (m1·p11 + … + mk·pk1), …, p_{n−k} + (m1·p1,n−k + … + mk·pk,n−k)]
      = [p1 + p1, …, p_{n−k} + p_{n−k}]
      = 0

since each parity bit p_i is defined as exactly that sum. Hence G H^T = 0, and U H^T = 0 for every codeword U.
Syndrome testing
• The received vector r can be expressed as
• r = U + e, where e is the error vector (2^n − 1 possible nonzero error patterns).
• Define the syndrome S = rH^T = (U + e)H^T
• S = UH^T + eH^T = eH^T
• There is a one-to-one correspondence between the syndrome and the correctable error pattern.
Syndrome testing
• For a single error, e contains only one nonzero element.
• eH^T selects one row of H^T.
• No column of H can be all zeros; otherwise the corresponding error would not be detected.
• No 2 columns of H can be identical; otherwise different errors would produce the same syndrome.
Example

For the (6,3) code:

H^T = [1 0 0
       0 1 0
       0 0 1
       1 1 0
       0 1 1
       1 0 1]

Transmitted: U = 1 0 1 1 1 0
Received:    r = 0 0 1 1 1 0

S = rH^T = [1 0 0]

e = [1 0 0 0 0 0] gives eH^T = [1 0 0] = S, so the error is in the first bit.
Error Correction
• There is a one-to-one correspondence between correctable error patterns and syndromes.

The standard array:

U1 = 0         U2               U3               …   U_{2^k}
e2             U2 + e2          U3 + e2          …   U_{2^k} + e2
e3             U2 + e3          U3 + e3          …   U_{2^k} + e3
⋮
e_{2^{n−k}}    U2 + e_{2^{n−k}}  U3 + e_{2^{n−k}}  …  U_{2^k} + e_{2^{n−k}}

• The first column contains the coset leaders.
• Each row is a coset.
• The 2^n different n-tuples each appear exactly once in the array.
• If the received word is r = U_i + e_j, it lies in row j, column i of the standard array.
• The decoder adds the coset leader e_j of that row to r to obtain the corrected codeword U_i.
• The array has 2^k columns and 2^{n−k} rows (cosets).
• Decoding corrects the error only if the actual error pattern is a coset leader.
Implementation

S = rH^T

The syndrome is transformed into an error pattern by combinational logic; e.g., for the (6,3) code, the syndrome 101 maps to the error pattern 000001. That means the input to the last (output) gate is 101.
Example

Transmitted: U = 1 0 1 1 1 0
Received:    r = 1 0 0 1 1 0

S = rH^T = [0 0 1]

The coset leader for this syndrome is e = [0 0 1 0 0 0], so the corrected codeword is r + e = 1 0 1 1 1 0.
Error Detecting and Correcting Capabilities
• The Hamming weight w(U) of a codeword U is defined as the number of nonzero elements in U.
• The distance between 2 codewords: d(U, V) = w(U + V).
• The distance between 2 codewords is the number of bits that must be changed to turn one into the other.
• The minimum distance of a code is the minimum distance between any 2 distinct codewords.
• Since the sum of any 2 codewords is another codeword (for linear codes), the minimum distance of a linear code is min w(U) over all codewords U, excluding the all-zeros codeword.
[Figure: Hamming distance between codewords — a single error is detectable, a double error is detectable, a triple error is undetectable.]
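Since the minimum distance of a linear code equals its minimum nonzero weight, it can be found by enumerating all 2^k codewords. A Python sketch for the (6,3) code (my own code):

```python
# d_min of a linear code = minimum weight over nonzero codewords.
from itertools import product

G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def codewords(G):
    k, n = len(G), len(G[0])
    for m in product([0, 1], repeat=k):
        yield tuple(sum(m[i] * G[i][j] for i in range(k)) % 2
                    for j in range(n))

d_min = min(sum(c) for c in codewords(G) if any(c))
print(d_min)  # -> 3 for this (6,3) code
```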
Error Detection and Correction
• Having received r, estimate the transmitted codeword U.
• Maximum likelihood decoding: decide on U_i if

p(r | U_i) = max over all j of p(r | U_j)

• For the BSC, the likelihood of U_i with respect to r is inversely related to the distance d(r, U_i), so equivalently decide on U_i if

d(r, U_i) = min over all j of d(r, U_j)
Error Detection and Correction
• To detect and correct t errors:   t = ⌊(d_min − 1)/2⌋
• To detect e errors:   e = d_min − 1
• To correct α errors and detect β errors (α ≤ β):   d_min ≥ α + β + 1
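As a quick worked check of the relations above (using d_min = 5, the value of the (8,2) code constructed later in the chapter):

```python
# t = floor((d_min - 1)/2), e = d_min - 1
d_min = 5
t = (d_min - 1) // 2
e = d_min - 1
print(t, e)  # -> 2 4: corrects up to 2 errors, or detects up to 4
```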
Erasure Correction
• A symbol is declared erased if the receiver receives a signal of very poor quality.
• The position of an erasure is known, although the correct symbol value is not.
• ρ or fewer erasures can be corrected if d_min ≥ ρ + 1.
• α errors and γ erasures can be corrected if d_min ≥ 2α + γ + 1.
Estimating Error Capability

• The minimum number of rows (cosets) needed in the standard array to correct all combinations of t or fewer errors:

2^{n−k} ≥ 1 + C(n,1) + C(n,2) + … + C(n,t)

• The number of parity bits:

n − k ≥ log2[1 + C(n,1) + … + C(n,t)]    (Hamming bound, for high-rate codes)

d_min ≤ n · 2^{k−1} / (2^k − 1)    (Plotkin bound, for low-rate codes)
An (n,k) Code

• Assume an error-correction capability of t = 2:
1. d_min = 2t + 1 = 5
2. For a non-trivial code, k = 2
3. For n = 7, the Plotkin bound is not satisfied; for n = 8, both the Plotkin and Hamming bounds are satisfied
• → an (8,2) code
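The bound check above can be reproduced numerically. A Python sketch (my own helper names) testing n = 7 and n = 8 for d_min = 5, k = 2:

```python
# Hamming bound: 2^(n-k) >= sum_{j=0}^{t} C(n, j)
# Plotkin bound (low rate): d_min <= n * 2^(k-1) / (2^k - 1)
from math import comb

def hamming_ok(n, k, t):
    return 2 ** (n - k) >= sum(comb(n, j) for j in range(t + 1))

def plotkin_ok(n, k, d_min):
    return d_min <= n * 2 ** (k - 1) / (2 ** k - 1)

for n in (7, 8):
    print(n, hamming_ok(n, 2, 2), plotkin_ok(n, 2, 5))
# -> 7 True False   (n = 7 fails the Plotkin bound)
# -> 8 True True    (n = 8 satisfies both: the (8,2) code)
```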
(8,2) Code
• The all-zeros word must be in the code.
• Closure property: the sum of any 2 codewords is another codeword.
• Each codeword is 8 bits; every nonzero codeword has a weight of at least 5.
• d_min = 5
• Assume a systematic code: the last 2 bits are the message.
(8,2) code

G = [0 0 1 1 1 1 1 0
     1 1 1 1 0 0 0 1]

H^T = [1 0 0 0 0 0
       0 1 0 0 0 0
       0 0 1 0 0 0
       0 0 0 1 0 0
       0 0 0 0 1 0
       0 0 0 0 0 1
       0 0 1 1 1 1
       1 1 1 1 0 0]

Message   Codeword
00        00000000
01        11110001
10        00111110
11        11001111
</gr-replace>
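The (8,2) code above can be verified in a few lines. A Python sketch (my own code) checking GH^T = 0 and the minimum weight:

```python
# Verify the (8,2) code: G H^T = 0 over GF(2), and minimum weight 5.
G = [[0, 0, 1, 1, 1, 1, 1, 0],
     [1, 1, 1, 1, 0, 0, 0, 1]]
P = [row[:6] for row in G]
# H^T = [I6 ; P] stacked, i.e. H = [I6 | P^T]
Ht = [[1 if i == j else 0 for j in range(6)] for i in range(6)] + P

for g in G:
    s = [sum(g[i] * Ht[i][j] for i in range(8)) % 2 for j in range(6)]
    assert s == [0] * 6  # every row of G has zero syndrome

codewords = [[0] * 8, G[0], G[1], [a ^ b for a, b in zip(G[0], G[1])]]
print(min(sum(c) for c in codewords if any(c)))  # -> 5, so d_min = 5
```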
Error Detection/Correction Tradeoff
• The syndrome s_i = e_i·H^T is calculated for all 2^{n−k} correctable error patterns.
• Correction is done by locating the syndrome and adding the corresponding error pattern to the received word.
• With d_min ≥ α + β + 1 and d_min = 5:

Correct α   Detect β
2           2
1           3
0           4
Error Detection/Correction Tradeoff
• Detecting single and double errors and correcting them: similar to the previous decoder.
• Detecting 3 errors while correcting 1: draw a line under row 9 of the standard array (the zero row plus the 8 single-error cosets); map only those syndromes to error patterns, and declare a detected (uncorrected) error whenever the syndrome is nonzero but below the line.