
Chapter 5: Linear Block Codes

Vahid Meghdadi
University of Limoges
[email protected]

Outline

Basic principles
  Introduction
  Block codes
  Error detection
  Error correction
Linear Block Coding


Introduction

- Transmission takes place through a noisy channel.
- Transmission errors can occur: 1's become 0's and 0's become 1's.
- To correct the errors, some redundancy bits are added to the information sequence; at the receiver this redundancy is exploited to locate the transmission errors.
- Here, only binary transmission is considered.
- In this chapter we try to add this redundancy to the information bits in an optimal way.


Block codes (1/3)

Definition of block coding:

- A message m is a binary sequence of length k: m ∈ A^k, with A = {0, 1}.
- To each message m corresponds a codeword c, a binary sequence of length n, with n > k.


Block codes (2/3)

I The code space contains 2n points but only 2k of them arevalid codewords.

I A code must be a one-to-one relation (injection)

Example:m ∈ {00, 01, 10, 11}

c ∈ {000, 011, 101, 110}

This is a one parity check code. 001,010,100,111 /∈ C
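A minimal Python sketch of this example: it appends an even-parity bit to each 2-bit message (one possible mapping consistent with the codeword set above) and lists the resulting codebook.

```python
from itertools import product

def parity_encode(m):
    """Append an even-parity bit so the codeword has an even number of 1's."""
    return m + (sum(m) % 2,)

k, n = 2, 3
messages = list(product([0, 1], repeat=k))        # 2^k = 4 messages
codebook = {m: parity_encode(m) for m in messages}

for m, c in codebook.items():
    print(m, "->", c)

# The code space has 2^n = 8 points, but only 2^k = 4 are valid codewords.
valid = set(codebook.values())
invalid = set(product([0, 1], repeat=n)) - valid
print("non-codewords:", sorted(invalid))          # 001, 010, 100, 111
```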


Block code (3/3)

Definition: The rate of a code is defined as k/n. For example, for the previous code n = 3 and k = 2, so the rate is 2/3.


Linear codes

Definition: A binary code is linear if the following condition is satisfied (addition is bitwise modulo 2):

∀ c1, c2 ∈ C : c1 + c2 ∈ C

Example: with the previous code,

011 + 101 = 110 ∈ C
011 + 110 = 101 ∈ C
101 + 110 = 011 ∈ C


Hamming distance and Hamming weight (1/2)

Definition: The Hamming weight of a binary sequence is the number of 1's in the sequence.

w([101]) = 2, w([11101]) = 4

Definition: The Hamming distance between two binary sequences v and w is the number of places where they differ.

Example:
v = [110101], w = [110011]
⇒ d(v, w) = 2


Hamming distance and Hamming weight (2/2)

Property 1: d(v,w) + d(w, x) ≥ d(v, x) (triangle inequality)

Property 2: d(v,w) = w(v + w)

Example: v = [110101], w = [110011]

v + w = [000110], so d(v, w) = w(v + w) = 2
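These two definitions translate directly into code; a small sketch checking Property 2 on the example above:

```python
def hamming_weight(v):
    """Number of 1's in a binary sequence."""
    return sum(v)

def hamming_distance(v, w):
    """Number of positions where v and w differ."""
    return sum(a != b for a, b in zip(v, w))

v = (1, 1, 0, 1, 0, 1)
w = (1, 1, 0, 0, 1, 1)
s = tuple((a + b) % 2 for a, b in zip(v, w))          # v + w over GF(2)

print(hamming_weight((1, 0, 1)))                      # 2
print(hamming_distance(v, w))                         # 2
print(hamming_distance(v, w) == hamming_weight(s))    # True: d(v,w) = w(v+w)
```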


Minimum distance of block code

dmin = min{ d(v, w) : v, w ∈ C and v ≠ w }

If the code is linear:

dmin = min{ w(v + w) : v, w ∈ C and v ≠ w }
     = min{ w(x) : x ∈ C and x ≠ 0 }
     = wmin

Theorem: The minimum distance of a linear block code is equal to the minimum weight of its nonzero codewords.
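The theorem can be checked numerically on the single parity check code; a short sketch assuming the codeword set given earlier:

```python
from itertools import combinations

C = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # single parity check code

def hamming_distance(v, w):
    return sum(a != b for a, b in zip(v, w))

# Minimum pairwise distance over all distinct codeword pairs
d_min = min(hamming_distance(v, w) for v, w in combinations(C, 2))

# Minimum weight over all nonzero codewords
w_min = min(sum(c) for c in C if any(c))

print(d_min, w_min)   # 2 2 -> dmin equals the minimum nonzero weight
```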


Good codes

A good code is one whose minimum distance is large. For the previous example dmin = 2:

{00, 01, 10, 11} ⇒ {000, 011, 101, 110}


Error detection (1/3)

- The noisy channel introduces an error pattern: c is sent, r is received, where r = c + e.
- w(e) is the number of errors that occur during the transmission.
- e is called the error pattern.
- For a code with minimum distance dmin, any two distinct codewords differ in at least dmin places.
- No error pattern of weight dmin − 1 or less can change one codeword into another.


Error detection (2/3)

- Any error pattern of weight at most dmin − 1 can be detected.
- Some error patterns of weight dmin or larger can be detected as well.

Example: C = {000, 011, 101, 110}

For this code dmin = 2:

- any single error is detected,
- and the only weight-3 error pattern, 111, is detected as well (it is not a codeword).


Error detection (3/3)

- If c + e ∈ C with e ≠ 0, the error cannot be detected. Because the code is linear, this happens exactly when e is itself a nonzero codeword.
- So, for an (n, k) code, there are exactly 2^n − 2^k error patterns that can be detected.

Example: for the previous code, all the following error patterns can be detected:

{[001], [010], [100], [111]}

For this code, n = 3, k = 2, and 2^n − 2^k = 4.


Error Detection Capacity

- Error detection fails only when the error pattern is identical to a nonzero codeword: in that case c + e ∈ C and the error is not detected.

Definition: Weight distribution

- Let Ai be the number of codewords of weight i.
- A0, A1, ..., An are called the weight distribution of the code.


Code example

For a (7,4) code (the one used in the examples below): A0 = 1, A1 = A2 = 0, A3 = 7, A4 = 7, A5 = A6 = 0, A7 = 1
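These numbers can be reproduced by enumerating all 2^k codewords; the sketch below assumes the (7,4) generator matrix that appears later in this chapter.

```python
from itertools import product
from collections import Counter

# (7,4) generator matrix used later in this chapter
G = [[1, 1, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0, 1, 0],
     [0, 1, 1, 0, 0, 0, 1]]
k, n = len(G), len(G[0])

def encode(m, G):
    """c = mG over GF(2)."""
    return tuple(sum(m[i] * G[i][j] for i in range(len(G))) % 2
                 for j in range(len(G[0])))

# Count codeword weights over all 2^k messages
weights = Counter(sum(encode(m, G)) for m in product([0, 1], repeat=k))
A = [weights.get(i, 0) for i in range(n + 1)]
print(A)   # [1, 0, 0, 7, 7, 0, 0, 1]
```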


Undetected Error Probability

Definition: Pu(E) is the probability of undetected error:

Pu(E) = Prob(e is a nonzero codeword)

On a BSC channel with transition (crossover) probability p:

Pu(E) = Σ_{i=1}^{n} Ai p^i (1 − p)^{n−i}


Example

Considering the previous (7,4) code:

Pu(E) = 7p^3(1 − p)^4 + 7p^4(1 − p)^3 + p^7

For example, if p = 0.01 then Pu(E) ≈ 7 × 10^−6.

[Figure: binary symmetric channel; 0→0 and 1→1 with probability 1 − p, 0→1 and 1→0 with probability p.]
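A quick numerical check of this example (a sketch using the weight distribution A = [1, 0, 0, 7, 7, 0, 0, 1] given earlier):

```python
def undetected_error_prob(A, p):
    """Pu(E) = sum over i >= 1 of A_i * p^i * (1-p)^(n-i)."""
    n = len(A) - 1
    return sum(A[i] * p**i * (1 - p)**(n - i) for i in range(1, n + 1))

A = [1, 0, 0, 7, 7, 0, 0, 1]              # weight distribution of the (7,4) code
print(undetected_error_prob(A, 0.01))     # about 6.8e-06, i.e. roughly 7e-06
```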


Error correction 1/2

How many errors can be corrected? Let t be a positive integer such that 2t + 1 ≤ dmin < 2t + 2. Let v be the transmitted sequence, r the received sequence, and w any other codeword. We can write:

d(v, r) + d(w, r) ≥ d(v, w)

Suppose that t′ errors occur, so d(v, r) = t′. For any v, w ∈ C with v ≠ w:

d(w, v) ≥ dmin ≥ 2t + 1

So d(w, r) ≥ 2t + 1 − t′. If t′ ≤ t then:

d(w, r) > t ≥ t′ = d(v, r)

Conclusion: for any error pattern of t or fewer errors, r is closer to the transmitted codeword v than to any other codeword. So any t errors or fewer can be corrected.


Error correction 2/2

The number of vectors (n-tuples) at distance exactly u from a given codeword is C(n, u).

Definition: The Hamming sphere of radius t around a codeword contains all vectors within Hamming distance t of it. Its size is

V(n, t) = Σ_{j=0}^{t} C(n, j)


Conclusion

- A block code with minimum distance dmin guarantees the correction of all error patterns of t = ⌊(dmin − 1)/2⌋ or fewer errors.
- t = ⌊(dmin − 1)/2⌋ is called the error correcting capability of the code.
- Some patterns of t + 1 or more errors can be corrected as well.
- In general, for a t-error-correcting (n, k) linear code, 2^(n−k) error patterns can be corrected.
- For a t-error-correcting (n, k) linear code using an optimal error correcting algorithm at the receiver, the probability that the receiver makes a wrong correction is upper bounded by (see the numerical sketch below):

  P(E) < Σ_{i=t+1}^{n} C(n, i) p^i (1 − p)^{n−i}
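A small numerical sketch of this bound for a single-error-correcting (7,4) code; the crossover probability p = 0.01 is an assumption chosen only for illustration:

```python
from math import comb

def block_error_bound(n, t, p):
    """Upper bound: sum_{i=t+1}^{n} C(n,i) p^i (1-p)^(n-i)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

print(block_error_bound(n=7, t=1, p=0.01))   # about 2.0e-03
```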

Linear Block Coding

Introduction

- Given n and k, how can the code be designed?
- How can an information sequence be mapped to the corresponding codeword?
- We are interested in systematic design.


Code subspace

- An (n, k) linear code is a k-dimensional subspace of the vector space of all binary n-tuples, so it is possible to find k linearly independent codewords g0, g1, ..., gk−1 that span this space.
- Any codeword can then be written as a linear combination of these basis vectors:

c = m0·g0 + m1·g1 + ... + mk−1·gk−1


Generator matrix

We can arrange these k linearly independent codewords (vectors) as the rows of a k × n matrix:

G = [ g0   ]   [ g0,0    g0,1    ...  g0,n−1   ]
    [ g1   ] = [ g1,0    g1,1    ...  g1,n−1   ]
    [ ...  ]   [ ...                           ]
    [ gk−1 ]   [ gk−1,0  gk−1,1  ...  gk−1,n−1 ]

c = mG
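A minimal sketch of the encoding rule c = mG over GF(2); the generator matrix used here is one possible choice for the (3,2) single parity check code from earlier, not one given in the chapter:

```python
def encode(m, G):
    """Codeword c = mG, with arithmetic modulo 2."""
    k, n = len(G), len(G[0])
    return [sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]

# One possible generator matrix for the (3,2) single parity check code
G = [[1, 0, 1],
     [0, 1, 1]]

for m in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(m, "->", encode(m, G))   # codewords 000, 011, 101, 110
```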


Equivalent code

- One can obtain the same code (same set of codewords) with a different generator matrix. For example, we can exchange any two rows of G.
- The only difference is that we obtain another mapping from information sequences to codewords.
- We can also replace a row by a linear combination of that row with another row.
- All the codes obtained in this way are equivalent.


Systematic codes

Definition:

- If every codeword contains the corresponding information sequence unchanged, the code is called systematic. It is convenient to group these bits either at the end or at the beginning of the codeword.
- In this case the generator matrix can be divided into two sub-matrices, [P | I].


Systematic codes

G = [ g0   ]   [ p0,0    p0,1    ...  p0,n−k−1    1 0 ... 0 ]
    [ g1   ] = [ p1,0    p1,1    ...  p1,n−k−1    0 1 ... 0 ]
    [ ...  ]   [ ...                              ...       ]
    [ gk−1 ]   [ pk−1,0  pk−1,1  ...  pk−1,n−k−1  0 0 ... 1 ]

G = [ P(k×(n−k))  I(k×k) ]

So for a given message m the codeword is c = mG = [mP  m], a 1 × n vector. The first part of the codeword is the parity bits and the second is the systematic part of the codeword.

Note: Any linear code can be transformed into a systematic code.


Example

Example: u = [1101] and

G = [ 1 1 1 1 0 0 0 ]
    [ 1 0 1 0 1 0 0 ]
    [ 1 1 0 0 0 1 0 ]
    [ 0 1 1 0 0 0 1 ]

So the codeword is [0011101], the sum of the first, second and fourth rows of G. The last four bits are the systematic part of the codeword.
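A short sketch that reproduces this example numerically:

```python
def encode(m, G):
    """c = mG over GF(2)."""
    return [sum(m[i] * G[i][j] for i in range(len(G))) % 2 for j in range(len(G[0]))]

G = [[1, 1, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0, 1, 0],
     [0, 1, 1, 0, 0, 0, 1]]

u = [1, 1, 0, 1]
c = encode(u, G)
print(c)        # [0, 0, 1, 1, 1, 0, 1]
print(c[3:])    # the last four bits equal the message u (systematic part)
```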


Parity check matrix

For any k × n matrix G with k linearly independent rows, there exists an (n − k) × n matrix H with n − k linearly independent rows such that any vector in the row space of G is orthogonal to the rows of H, and vice versa. So we can say:

- An n-tuple c is a codeword generated by G if and only if cH^T = 0.

The matrix H is called the parity check matrix.

- The rows of H span a code called the dual code of the code generated by G.
- GH^T = 0


Parity check matrix of systematic codes

For the systematic code G = [ P(k×(n−k))  I(k×k) ], the parity check matrix is simply H = [ I((n−k)×(n−k))  P^T((n−k)×k) ]. (Why?)

Exercise: For the following generator matrix, give the generator matrix of its dual code.

G = [ 1 1 1 1 0 0 0 ]
    [ 1 0 1 0 1 0 0 ]
    [ 1 1 0 0 0 1 0 ]
    [ 0 1 1 0 0 0 1 ]

Note: A non-systematic linear code can be transformed into systematic form and then the parity check matrix can be calculated. This parity check matrix can also be used for the original code.
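A sketch of the construction H = [I | P^T] and of the check GH^T = 0, applied to the generator matrix above (it assumes G is already in the systematic form [P | I]):

```python
def parity_check_from_systematic(G):
    """For G = [P | I_k], return H = [I_(n-k) | P^T]."""
    k, n = len(G), len(G[0])
    r = n - k
    P = [row[:r] for row in G]                       # P is k x (n-k)
    H = [[1 if i == j else 0 for j in range(r)] +    # identity block I_(n-k)
         [P[row][i] for row in range(k)]             # column i of P, i.e. row i of P^T
         for i in range(r)]
    return H

G = [[1, 1, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0, 1, 0],
     [0, 1, 1, 0, 0, 0, 1]]

H = parity_check_from_systematic(G)
for row in H:
    print(row)

# Check GH^T = 0 (mod 2)
ok = all(sum(g[j] * h[j] for j in range(len(g))) % 2 == 0 for g in G for h in H)
print("GH^T = 0:", ok)   # True
```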


Minimum distance of the code

Theorem: The minimum distance of a code is equal to the smallest number of columns of H that are linearly dependent. That is, every set of dmin − 1 or fewer columns of H is linearly independent, and there is some set of dmin columns that is linearly dependent.


Singleton bound

For an (n, k) linear code

dmin ≤ n − k + 1

Proof: rank(H) = n − k, so any n − k + 1 columns of H must be linearly dependent (the row rank of a matrix is equal to its column rank). So dmin cannot be larger than n − k + 1, according to the previous theorem.

Definition: A code for which dmin = n − k + 1 is called a "maximum distance separable" (MDS) code.


Systematic Error Detection

- An n-tuple r is a codeword if and only if rH^T = 0.
- The product S = rH^T is a 1 × (n − k) vector called the syndrome of r.
- If the syndrome is not zero, a transmission error is declared.


Systematic Error Correction

Maximum likelihood error correction:

ĉ = arg min_{c ∈ C} dH(c, r)

Should we test all the possibilities? For a (256, 200) binary code, there are 2^200 ≈ 1.6 × 10^60 possibilities to be verified!

Solution: syndrome decoding, which decreases the number of possibilities.


Syndrome Decoding

r is the received sequence, r = c + e.

S = rH^T = (c + e)H^T = cH^T + eH^T = eH^T

Conclusion: The syndrome is not a function of the transmitted codeword but only of the error pattern. So we only need to construct a table of possible error patterns with their corresponding syndromes.


Syndrome Table Construction

- There are 2^n possible received vectors.
- There are 2^k valid codewords.
- There are 2^(n−k) possible syndromes.

First we generate all the error patterns of weight 1, calculate the corresponding syndromes and store them in a table. Then we increase the error pattern weight and do the same thing. Each time a new syndrome has already been stored, the error pattern is thrown away and we continue with the next one. When all the 2^(n−k) syndromes are found, the table is complete.

For the previous example (256, 200), the table has 2^56 ≈ 7.2 × 10^16 entries, which is still too big.
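A sketch of this table-building procedure, assuming the (7,4) parity check matrix H = [I | P^T] derived earlier from the example generator matrix:

```python
from itertools import combinations

# Parity check matrix of the (7,4) code used earlier (H = [I | P^T])
H = [[1, 0, 0, 1, 1, 1, 0],
     [0, 1, 0, 1, 0, 1, 1],
     [0, 0, 1, 1, 1, 0, 1]]
n = len(H[0])

def syndrome(v, H):
    """S = vH^T over GF(2)."""
    return tuple(sum(v[j] * row[j] for j in range(len(v))) % 2 for row in H)

table = {}                        # syndrome -> lowest-weight error pattern found
weight = 0
while len(table) < 2 ** len(H):   # stop when all 2^(n-k) syndromes are covered
    for positions in combinations(range(n), weight):
        e = tuple(1 if i in positions else 0 for i in range(n))
        table.setdefault(syndrome(e, H), e)   # keep only the first (lowest-weight) pattern
    weight += 1

for s, e in sorted(table.items()):
    print(s, e)
```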


Exercise

H = [ 1 0 0 0 0 1 1 ]
    [ 0 1 0 0 1 0 1 ]
    [ 0 0 1 0 1 1 0 ]
    [ 0 0 0 1 1 1 1 ]

It is a (7,3) code with dmin = 4. Complete the syndrome table, which has 16 rows:

Error pattern   Syndrome
0000000         0000
1000000         1000
...             ...
0000001         1101
1100000
1010000
...             ...
1110000         1110


Exercise

Suppose r = [0, 0, 1, 1, 0, 1, 1]. Calculate the syndrome and then correct r in the maximum likelihood sense. Use the same H as in the previous example.


Hamming Code

Definition: If the syndrome table is completely filled by all the error patterns with w(e) ≤ t, the code is called perfect.

For example, for the (7, 4) code with t = 1, the syndrome table is completed by the error patterns of weight 1. These perfect codes with t = 1 are called Hamming codes.

For Hamming codes there are 2^(n−k) − 1 non-zero syndromes, and this number must be equal to the number of error patterns of weight 1, i.e. 2^(n−k) − 1 = n. Calling the number of parity bits m = n − k, we can write for a Hamming code:

n = 2^m − 1,  k = 2^m − m − 1

So the valid Hamming codes are (3, 1), (7, 4), (15, 11), ..., (2^m − 1, 2^m − m − 1).
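A small sketch listing the valid Hamming code parameters for the first few values of m:

```python
for m in range(2, 7):
    n = 2 ** m - 1
    k = 2 ** m - m - 1
    # Perfect single-error-correcting condition: 2^(n-k) - 1 == n
    assert 2 ** (n - k) - 1 == n
    print(f"m = {m}: ({n}, {k}) Hamming code, rate {k}/{n}")
```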


Error Detection Performance Example

As we saw before, the probability that a received sequence is mistaken for a codeword is given by:

Pu(E) = Σ_{i=1}^{n} Ai p^i (1 − p)^{n−i}

For a (7,4) code the weight distribution is [1, 0, 0, 7, 7, 0, 0, 1], and for a (15,11) code it is [1, 0, 0, 35, 105, 168, 280, 435, 435, 280, 168, 105, 35, 0, 0, 1]. Using a BSC channel derived from a BPSK system with p = Q(√(2R·Eb/N0)), we can plot the probability that a received sequence is erroneously taken for a codeword.
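A sketch of this computation, using Q(x) = erfc(x/√2)/2; it reproduces the p values quoted on the next slide:

```python
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def undetected_error_prob(A, p):
    n = len(A) - 1
    return sum(A[i] * p**i * (1 - p)**(n - i) for i in range(1, n + 1))

A74 = [1, 0, 0, 7, 7, 0, 0, 1]
A1511 = [1, 0, 0, 35, 105, 168, 280, 435, 435, 280, 168, 105, 35, 0, 0, 1]

for (n, k), A in [((7, 4), A74), ((15, 11), A1511)]:
    R = k / n
    for ebno_db in (6, 10):
        p = Q(sqrt(2 * R * 10 ** (ebno_db / 10)))
        print(f"({n},{k})  Eb/N0 = {ebno_db} dB:  p = {p:.3g},  Pu = {undetected_error_prob(A, p):.3g}")
```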


Pu curves

- R = 4/7:
  Eb/N0 = 6 dB, p = 0.0165
  Eb/N0 = 10 dB, p = 3.6 × 10^−4
- R = 11/15:
  Eb/N0 = 6 dB, p = 0.0078
  Eb/N0 = 10 dB, p = 6.41 × 10^−5

[Figure: Pu versus SNR (dB) from 0 to 10 dB for the (7,4) and (15,11) codes; Pu falls from about 10^0 to below 10^−10.]

It means that at 10 dB of Eb/N0, out of about 10^11 received sequences only one codeword error goes undetected (this refers to codewords, not to bits).


Error correction probability for perfect codes

Up to t errors are corrected by a perfect code, where t = ⌊(dmin − 1)/2⌋. So the probability that a received sequence falls outside the Hamming sphere of radius t can be calculated easily. It is in fact the probability of failure for a correcting receiver:

P(F) = 1 − Σ_{j=0}^{t} C(n, j) p^j (1 − p)^{n−j}
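A short sketch of this failure probability for the two Hamming codes (t = 1), reusing p = Q(√(2R·Eb/N0)) from the earlier slides:

```python
from math import comb, erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def failure_prob(n, t, p):
    """P(F) = 1 - sum_{j=0}^{t} C(n,j) p^j (1-p)^(n-j)."""
    return 1 - sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1))

for n, k in [(7, 4), (15, 11)]:
    R = k / n
    for ebno_db in (6, 10):
        p = Q(sqrt(2 * R * 10 ** (ebno_db / 10)))
        print(f"({n},{k})  Eb/N0 = {ebno_db} dB:  P(F) = {failure_prob(n, 1, p):.3g}")
```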


Error correction probability for perfect codes

[Figure: P(F) versus Eb/N0 (dB) from 0 to 10 dB for the (7,4) and (15,11) codes; P(F) falls from about 10^0 to about 10^−7.]

Note that this is not the bit error rate but the word (codeword) error rate.


Soft Decision Decoding

- A BPSK modulation is used to send the codeword (−√Ec, +√Ec).
- The Hamming distance between two codewords is related to their Euclidean distance by dE = 2√(Ec·dH).
- With an (n, k, dmin) block code and AWGN with σ² = N0/2, a transmitted codeword is a point in n-dimensional space.
- Ec and Eb are related by Ec = R·Eb, where R = k/n is the code rate.
- Suppose that there are on average K codewords at distance dmin from a codeword. The probability of block decoding error (by the union bound) is

  P(E) ≈ K·Q(√(2R·dmin·Eb/N0))

- Neglecting K, the asymptotic coding gain compared with the uncoded system is R·dmin.
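Expressed in dB, the asymptotic coding gain is 10·log10(R·dmin); a quick sketch for the (7,4) and (15,11) Hamming codes (both have dmin = 3):

```python
from math import log10

for n, k, dmin in [(7, 4, 3), (15, 11, 3)]:
    gain_db = 10 * log10((k / n) * dmin)
    print(f"({n},{k},{dmin}): asymptotic coding gain ≈ {gain_db:.2f} dB")
```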


Modifications to Linear Codes: Extension

An (n + 1, k, d + 1) code can be constructed from an (n, k, d) code by:

1. Add a last all-zero column to H.
2. Add a last all-one row to the new H.
3. Use linear operations to obtain an equivalent systematic code.
4. Find the generator matrix G.

Exercise: Give the systematic generator matrix of an (8, 4, 4) code obtained from the following (7, 4, 3) code:

H = [ 1 0 0 1 0 1 1 ]
    [ 0 1 0 1 1 1 0 ]
    [ 0 0 1 0 1 1 1 ]
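Steps 1 and 2 of this construction can be sketched as follows; only the extended parity check matrix is built here, and finding the systematic G is left to the exercise above:

```python
def extend_parity_check(H):
    """Append an all-zero column to H, then an all-one row (overall parity check)."""
    H_ext = [row + [0] for row in H]
    H_ext.append([1] * len(H_ext[0]))
    return H_ext

H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

for row in extend_parity_check(H):
    print(row)   # 4 x 8 parity check matrix of the extended (8,4,4) code
```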


Modifications to Linear Codes: Puncturing

A code is punctured by deleting one column of the H matrix. In this way an (n, k) code is transformed into an (n − 1, k) code. The minimum distance of the code may be decreased by one.
