Channel Coding Part 2

Uploaded by dario-sinchi-torres on 29-Sep-2015

TRANSCRIPT

  • Channel Coding Part 2

  • Introduction

    A linear block code is represented by two integers, n and k, and a generator matrix or polynomial. The integer k is the number of data bits that form an input to a block encoder, and n is the total number of bits in the associated codeword out of the encoder.

    A convolutional code is described by three integers, n, k, and K.

    K is a parameter known as the constraint length; it represents the number of k-tuple stages in the encoding shift register. An output n-tuple is not only a function of an input k-tuple, but is also a function of the previous K-1 input k-tuples.

  • Convolutional Encoding

    The input message source is denoted by the sequence m = m1, m2, ..., mi, ..., where each mi represents a binary digit (bit) and i is a time index. We shall assume that each mi is equally likely to be a one or a zero and independent from digit to digit; therefore, the bit sequence lacks any redundancy. The encoder transforms each sequence m into a unique codeword sequence U, given by:

    U = G(m)

    The sequence U can be partitioned into a sequence of branch words: U = U1, U2, ..., Ui, .... Each branch word Ui is made up of n binary code symbols, often called channel symbols, channel bits, or code bits.

  • Convolutional Encoding

    Figure 1 indicates a typical communication application: the codeword sequence U modulates a waveform s(t); this is corrupted by noise, resulting in a received waveform r(t) and a demodulated sequence Z = Z1, Z2, ..., Zi, ....

    The task of the decoder is to produce an estimate m' = m'1, m'2, ... of the original message sequence, using the received sequence Z together with a priori knowledge of the encoding procedure.

  • Convolutional Encoding

    Figure 1. Encode/decode and modulate/demodulate portions of a communication link.

  • Convolutional Encoder Representation

    We need to characterize the encoding function G(m), so that for a given input sequence m we can readily compute the output sequence U.

    The most popular methods for representing a convolutional code are:

    Connection pictorial

    Connection vectors

    State diagram

    Tree diagram

    Trellis diagram

  • Convolutional Encoder Representation

    Connection Representation

    Figure 2 on the next slide illustrates a (2,1) convolutional encoder with constraint length K=3. There are n=2 modulo-2 adders.

    At each input bit time, a bit is shifted into the leftmost stage and the bits in the register are shifted one position to the right.

    Next, the output switch samples the output of each modulo-2 adder, thus forming the code symbol pair making up the branch word associated with the bit just inputted.

    The sampling is repeated for each inputted bit.

  • Convolutional Encoder Representation

    The choice of connections between the adders and the stages of the register gives rise to the characteristics of the code, and the connections are not chosen arbitrarily.

    Unlike a block code that has a fixed word length n, a convolutional code has no particular block size.

    However, convolutional codes are often forced into a block structure by periodic truncation.

    Figure 2. Convolutional encoder (rate 1/2, K=3)
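    The shift-register operation just described can be sketched in a few lines of Python. This is an illustrative sketch, not code from the slides: the function name conv_encode is invented, but the tap vectors (111 and 101) and the K-1 flush zeros follow the Figure 2 encoder described in the text.

```python
# Sketch of the Figure 2 rate-1/2, K=3 convolutional encoder
# (connection vectors g1 = 111, g2 = 101, as given in the text).

def conv_encode(msg, g=((1, 1, 1), (1, 0, 1))):
    """Encode a bit list, flushing with K-1 zeros; returns the code bits."""
    K = len(g[0])                      # constraint length
    reg = [0] * K                      # encoding shift register, initially all zeros
    out = []
    for bit in msg + [0] * (K - 1):    # append K-1 flush zeros
        reg = [bit] + reg[:-1]         # shift the new bit into the leftmost stage
        for taps in g:                 # one modulo-2 adder per output code bit
            out.append(sum(t * r for t, r in zip(taps, reg)) % 2)
    return out

# The text's example: m = 101 encodes to 11 10 00 10 11.
print(''.join(map(str, conv_encode([1, 0, 1]))))  # -> 1110001011
```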

  • Convolutional Encoder Representation

    One way to represent the encoder is to specify a set of n connection vectors, one for each of the n modulo-2 adders.

    Each vector has dimension K and describes the connection of the encoding shift register to that modulo-2 adder.

    A one in the i-th position of the vector indicates that the corresponding stage in the shift register is connected to the modulo-2 adder, and a zero in a given position indicates that no connection exists.

  • Convolutional Encoder Representation

    For the encoder example in Figure 2, we can write the connection vector g1 for the upper connections and g2 for the lower connections as follows:

    g1 = 111
    g2 = 101

    Figure 3. Convolutional encoder (rate 1/2, K=3)

  • Convolutional Encoder Representation

    Now consider that a message vector m = 101 is convolutionally encoded with the encoder shown in Figure 2. The three message bits are inputted, one at a time, at times t1, t2, and t3, as shown in Figure 4.

    Subsequently, (K-1) = 2 zeros are inputted at times t4 and t5 to flush the register and thus ensure that the tail end of the message is shifted the full length of the register.

    The output sequence is seen to be 11 10 00 10 11, where the leftmost symbol represents the earliest transmission.

    The entire output sequence, including the code symbols resulting from flushing, is needed to decode the message.

  • Convolutional Encoder Representation

    Figure 4. Convolutional encoding of a message sequence with a rate 1/2, K=3 encoder.

  • Convolutional Encoder Representation

    Impulse Response of the Encoder

    We can approach the encoder in terms of its impulse response, that is, the response of the encoder to a single one bit that moves through it. Consider the contents of the register in Figure 3 as a one moves through it:

  • Convolutional Encoder Representation

    The output sequence for the single input one is called the impulse response of the encoder. Then, for the input sequence m = 101, the output may be found by the superposition, or linear addition, of the time-shifted input impulses as follows:

    Observe that this is the same output as that obtained in Figure 4, demonstrating that convolutional codes are linear, just like the linear block codes of Chapter 6.
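    The superposition argument can be checked numerically. The sketch below (illustrative only; the helper names are inventions) computes the impulse response of the Figure 2 encoder and reconstructs the codeword for m = 101 by modulo-2 addition of time-shifted copies of it.

```python
# Sketch: convolutional codes are linear, so the codeword for m = 101 equals
# the modulo-2 sum of time-shifted impulse responses (Figure 2 encoder,
# g1 = 111, g2 = 101, n = 2 output bits per input bit).

def impulse_response(g=((1, 1, 1), (1, 0, 1))):
    """Encoder output as a single 'one' moves through the K register stages."""
    K = len(g[0])
    resp = []
    for stage in range(K):             # register contents: 100, 010, 001
        reg = [1 if s == stage else 0 for s in range(K)]
        for taps in g:
            resp.append(sum(t * r for t, r in zip(taps, reg)) % 2)
    return resp                        # branch words 11, 10, 11

def encode_by_superposition(msg, n=2):
    h = impulse_response()
    out = [0] * (n * (len(msg) + 2))   # message plus K-1 flush bits
    for i, bit in enumerate(msg):
        if bit:                        # add the response shifted by i branch words
            for j, v in enumerate(h):
                out[i * n + j] ^= v
    return out

print(''.join(map(str, encode_by_superposition([1, 0, 1]))))  # -> 1110001011
```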

  • Convolutional Encoder Representation

    Notice that the effective code rate for the foregoing example, with a 3-bit input sequence and a 10-bit output sequence, is k/n = 3/10, quite a bit less than the rate 1/2 that might have been expected from the knowledge that each input data bit yields a pair of output channel bits.

    The reason for the disparity is that the final data bit into the encoder needs to be shifted through the encoder. All of the output channel bits are needed in the decoding process. If the message had been longer, say 300 bits, the output codeword sequence would contain 604 bits, resulting in a code rate of 300/604, much closer to 1/2.

  • Convolutional Encoder Representation

    Sometimes, the encoder connections are characterized by generator polynomials, similar to those used for describing the feedback shift register implementation of cyclic codes.

    We can represent a convolutional encoder with a set of n generator polynomials, one for each of the n modulo-2 adders. Each polynomial is of degree K-1 or less and describes the connection of the encoding shift register to that modulo-2 adder, much the same way that a connection vector does.

    The coefficient of each term in the (K-1)-degree polynomial is either 1 or 0, depending on whether a connection exists or does not exist between the shift register and the modulo-2 adder in question.

  • Convolutional Encoder Representation

    For the encoder example in Figure 3, the generator polynomials g1(X) and g2(X) for the upper and lower connections are:

    g1(X) = 1 + X + X^2

    g2(X) = 1 + X^2

    The lowest-order term in the polynomial corresponds to the input stage of the register.

    First, express the message vector m = 101 as a polynomial: m(X) = 1 + X^2. The output sequence U(X) is then found as:

    U(X) = m(X)g1(X) interlaced with m(X)g2(X)
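    The polynomial view can be sketched directly in code. Below is an illustrative GF(2) polynomial multiplication (the helper name poly_mul_gf2 is an invention) showing that m(X)g1(X) and m(X)g2(X), interlaced, reproduce the codeword of Figure 4.

```python
# Sketch: generator-polynomial view of the Figure 3 encoder.
# g1(X) = 1 + X + X^2 and g2(X) = 1 + X^2; the message m = 101 is
# m(X) = 1 + X^2. Each output stream is m(X)*g_i(X) over GF(2), and the
# codeword interlaces the two products.

def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (low order first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

m  = [1, 0, 1]             # m(X)  = 1 + X^2
g1 = [1, 1, 1]             # g1(X) = 1 + X + X^2
g2 = [1, 0, 1]             # g2(X) = 1 + X^2

u1 = poly_mul_gf2(m, g1)   # 1 + X + X^3 + X^4
u2 = poly_mul_gf2(m, g2)   # 1 + X^4
U  = [bit for pair in zip(u1, u2) for bit in pair]
print(''.join(map(str, U)))  # -> 1110001011, matching Figure 4
```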

  • Convolutional Encoder Representation

    State Representation and the State Diagram

    A convolutional encoder belongs to a class of devices known as finite-state machines, which is the general name given to machines that have a memory of past signals.

    The adjective finite refers to the fact that there are only a finite number of unique states that the machine can encounter.

    In the most general sense, the state consists of the smallest amount of information that, together with a current input to the machine, can predict the output of the machine.

    The state provides some knowledge of the past signaling events and the restricted set of possible outputs in the future.

  • Convolutional Encoder Representation

    One way to represent simple encoders is with a state diagram; such a representation for the encoder in Figure 2 is shown in Figure 5.

    The states, shown in the boxes of the diagram, represent the possible contents of the rightmost K-1 stages of the register, and the paths between the states represent the output branch words resulting from such state transitions.

    Figure 5. Encoder state diagram (rate 1/2, K=3).
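    The transition structure of Figure 5 can be tabulated programmatically. This sketch (illustrative; the helper name branch is an invention) lists, for each state and input bit, the next state and output branch word of the Figure 2 encoder.

```python
# Sketch of the Figure 5 state diagram: states are the rightmost K-1 = 2
# register stages; for each (state, input bit) we list the next state and
# the output branch word (Figure 2 encoder, g1 = 111, g2 = 101).

def branch(state, bit, g=((1, 1, 1), (1, 0, 1))):
    reg = [bit] + list(state)      # shift the input bit into the register
    word = tuple(sum(t * r for t, r in zip(taps, reg)) % 2 for taps in g)
    return tuple(reg[:-1]), word   # (next state, output branch word)

for state in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    for bit in (0, 1):
        nxt, word = branch(state, bit)
        print(f"state {state} / input {bit} -> state {nxt}, output {word}")
```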

  • Convolutional Encoder Representation

    Example 1: Convolutional Encoding

    For the encoder shown in Figure 2, show the state changes and the resulting output codeword sequence U for the message sequence m = 11011, followed by K-1 = 2 zeros to flush the register. Assume that the initial contents of the register are all zeros.

    Solution:

  • Convolutional Encoder Representation

    Example 2

    In Example 1 the initial contents of the register are all zeros. This is equivalent to the condition that the given input sequence is preceded by two zero bits (the encoding is a function of the present bit and the K-1 prior bits). Repeat Example 1 with the assumption that the given input sequence is preceded by two one bits, and verify that now the codeword sequence U for input sequence m = 11011 is different from the codeword found in Example 1.

    Solution

  • Convolutional Encoder Representation

    By comparing this result with that of Example 1, we can see that each branch word of the output sequence U is not only a function of the input bit, but is also a function of the K-1 prior bits.

  • Convolutional Encoder Representation

    The Tree Diagram

    The tree diagram adds the dimension of time to the state diagram. The tree diagram for the convolutional encoder shown in Figure 2 is illustrated in Figure 6. At each successive input bit time the encoding procedure can be described by traversing the diagram from left to right, each tree branch describing an output branch word.

    The branching rule for finding a codeword sequence is as follows:

    If the input bit is a zero, its associated branch word is found by moving to the next rightmost branch in the upward direction.

    If the input bit is a one, its branch word is found by moving to the next rightmost branch in the downward direction.

    Assume that the initial contents of the encoder are all zeros.

  • Convolutional Encoder Representation

    Figure 6. Tree representation of encoder (rate 1/2, K=3).

  • Convolutional Encoder Representation

    The added dimension of time in the tree diagram (compared to the state diagram) allows one to dynamically describe the encoder as a function of a particular input sequence.

    However, can you see one problem in trying to use a tree diagram for describing a sequence of any length? The number of branches increases as a function of 2^L, where L is the number of branch words in the sequence. You would quickly run out of paper, and patience.

  • Convolutional Encoder Representation

    The Trellis Diagram

    The trellis diagram, by exploiting the repetitive structure, provides a more manageable encoder description than does the tree diagram. The trellis diagram for the convolutional encoder of Figure 2 is shown in Figure 7.

    In the trellis diagram, a solid line denotes the output generated by an input bit zero, and a dashed line denotes the output generated by an input bit one.

  • Convolutional Encoder Representation

    Figure 7. Encoder trellis diagram (rate 1/2, K=3).

  • Convolutional Encoder Representation

    The nodes of the trellis characterize the encoder states: the first-row nodes correspond to the state a = 00, and the second and subsequent rows correspond to the states b = 10, c = 01, and d = 11. At each unit of time, the trellis requires 2^(K-1) nodes to represent the 2^(K-1) possible encoder states.

    The trellis in our example assumes a fixed periodic structure after trellis depth 3 is reached. In the general case, the fixed structure prevails after depth K is reached. At this point and thereafter, each of the states can be entered from either of two preceding states. Of the two branches, one corresponds to an input bit zero and the other corresponds to an input bit one.

    One time-interval section of a fully-formed encoding trellis structure completely defines the code. The only reason for showing several sections is for viewing a code-symbol sequence as a function of time.

  • Formulation of the Convolutional Decoding Problem

    Maximum Likelihood Decoding

    The maximum likelihood concept,

    P(Z|U^(m')) = max over all U^(m) of P(Z|U^(m))

    is the formalization of a common-sense way to make decisions when there is statistical knowledge of the possibilities.

    To make the binary maximum likelihood decision, given a received signal z, means only to decide that s1(t) was transmitted if

    p(z|s1) > p(z|s2)

    The parameter z represents z(T), the receiver pre-detection value at the end of each symbol duration time t = T.

    There are typically a multitude of possible codeword sequences that might have been transmitted. To be specific, for a binary code, a sequence of L branch words is a member of a set of 2^L possible sequences.

  • Formulation of the Convolutional Decoding Problem

    Therefore, in the maximum likelihood context, we can say that the decoder chooses a particular U^(m') as the transmitted sequence if the likelihood P(Z|U^(m')) is greater than the likelihoods of all the other possible transmitted sequences.

    The likelihood functions are given or computed from the specifications of the channel.

    Generally, it is computationally more convenient to use the logarithm of the likelihood function, since this permits the summation, instead of the multiplication, of terms:

    log P(Z|U^(m)) = sum over i of log P(Zi|Ui^(m)) = sum over i, sum over j = 1 to n, of log p(zji|uji^(m))

  • Formulation of the Convolutional Decoding Problem

    Channel Models: Hard Versus Soft Decisions

    The codeword sequence, made up of branch words with each branch word comprised of n code symbols, can be considered to be an endless stream, as opposed to a block code, in which the source data and their codewords are partitioned into precise block sizes. The stream then enters the modulator, where the code symbols are transformed into signal waveforms.

    The modulation may be baseband or bandpass. In general, l symbols at a time, where l is an integer, are mapped into signal waveforms si(t), where i = 1, 2, ..., M = 2^l.

    Consider that a binary signal transmitted over a symbol interval (0, T) is represented by s1(t) for a binary one and s2(t) for a binary zero. The received signal is r(t) = si(t) + n(t), where n(t) is a zero-mean Gaussian noise process.

  • Formulation of the Convolutional Decoding Problem

    The conditional probabilities of z, p(z|s1) and p(z|s2), are shown in Figure 8, labeled likelihood of s1 and likelihood of s2.

    Figure 8. Hard and soft decoding decisions.

  • Formulation of the Convolutional Decoding Problem

    Binary Symmetric Channel

    A BSC is a discrete memoryless channel that has binary input and output alphabets and symmetric transition probabilities. It can be described by the conditional probabilities:

    P(0|1) = P(1|0) = p
    P(1|1) = P(0|0) = 1 - p

    The probability that an output symbol will differ from the input symbol is p, and the probability that the output symbol will be identical to the input symbol is (1 - p).

    The BSC is an example of a hard-decision channel, which means that, even though continuous-valued signals may be received by the demodulator, a BSC allows only firm decisions, such that each demodulator output symbol consists of one of two binary values.

  • Formulation of the Convolutional Decoding Problem

    Let U^(m) be a transmitted codeword over a BSC with symbol error probability p, and let Z be the corresponding received decoder sequence.

    Suppose that U^(m) and Z are each L-bit-long sequences and that they differ in dm positions (that is, the Hamming distance between them is dm).

    Then, since the channel is assumed to be memoryless, the probability that this U^(m) was transformed to the specific received Z at distance dm from it can be written as:

    P(Z|U^(m)) = p^dm (1 - p)^(L - dm)

    And the log-likelihood function is

    log P(Z|U^(m)) = -dm log((1 - p)/p) + L log(1 - p)

    If we compute this quantity for each possible transmitted sequence, the last term in the equation will be constant in each case.
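    The log-likelihood expression above can be checked numerically: for p < 0.5 it decreases as the Hamming distance dm grows, so ranking candidate codewords by likelihood is the same as ranking them by distance. A small illustrative sketch (the function name is an invention):

```python
# Numerical check of the BSC log-likelihood: for p < 0.5,
# log P(Z|U) = -dm*log((1-p)/p) + L*log(1-p) is strictly decreasing in the
# Hamming distance dm, so maximizing likelihood = minimizing distance.

import math

def log_likelihood(d, L, p):
    return -d * math.log((1 - p) / p) + L * math.log(1 - p)

p, L = 0.1, 10
lls = [log_likelihood(d, L, p) for d in range(L + 1)]
assert all(a > b for a, b in zip(lls, lls[1:]))   # decreasing in d
print([round(v, 2) for v in lls[:3]])             # -> [-1.05, -3.25, -5.45]
```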

  • Formulation of the Convolutional Decoding Problem

    Assuming that p < 0.5, the log-likelihood decreases as dm grows, so maximizing the likelihood is equivalent to choosing the codeword U^(m) that is closest in Hamming distance to the received sequence Z.

  • Formulation of the Convolutional Decoding Problem

    The Viterbi Convolutional Decoding Algorithm

    The Viterbi algorithm essentially performs maximum likelihood decoding; however, it reduces the computational load by taking advantage of the special structure in the code trellis.

    The advantage of Viterbi decoding, compared with brute-force decoding, is that the complexity of a Viterbi decoder is not a function of the number of symbols in the codeword sequence.

    The algorithm involves calculating a measure of similarity, or distance, between the received signal at time ti and all the trellis paths entering each state at time ti.

    The Viterbi algorithm removes from consideration those trellis paths that could not possibly be candidates for the maximum likelihood choice.

    Note that the goal of selecting the optimum path can be expressed, equivalently, as choosing the codeword with the maximum likelihood metric, or as choosing the codeword with the minimum distance metric.
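    The survivor-elimination idea can be sketched as a minimal hard-decision Viterbi decoder. This is an illustrative sketch only (function names are inventions); it assumes the Figure 2 code (g1 = 111, g2 = 101) with the encoder flushed by K-1 zeros, so the winning path is read from state 00.

```python
# Minimal hard-decision Viterbi decoder sketch for the rate-1/2, K=3 code,
# using Hamming-distance branch metrics as described in the text.

def conv_encode(msg, g=((1, 1, 1), (1, 0, 1))):
    K = len(g[0]); reg = [0] * K; out = []
    for bit in msg + [0] * (K - 1):            # flush with K-1 zeros
        reg = [bit] + reg[:-1]
        out += [sum(t * r for t, r in zip(taps, reg)) % 2 for taps in g]
    return out

def viterbi_decode(z, n_msg, g=((1, 1, 1), (1, 0, 1))):
    n = len(g)
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    INF = float('inf')
    metric = {s: (0 if s == (0, 0) else INF) for s in states}  # start in 00
    paths = {s: [] for s in states}
    for step in range(0, len(z), n):
        rx = z[step:step + n]
        new_metric = {s: INF for s in states}
        new_paths = {}
        for s in states:
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                reg = [bit] + list(s)
                word = [sum(t * r for t, r in zip(taps, reg)) % 2 for taps in g]
                nxt = tuple(reg[:-1])
                m = metric[s] + sum(a != b for a, b in zip(rx, word))
                if m < new_metric[nxt]:        # keep only the survivor path
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[(0, 0)][:n_msg]               # flushed: path ends in state 00

msg = [1, 1, 0, 1, 1]
u = conv_encode(msg)
u[3] ^= 1                                      # inject one channel error
assert viterbi_decode(u, len(msg)) == msg
print("decoded:", viterbi_decode(u, len(msg)))
```

Because the free distance of this code is 5 (shown later in the chapter), the decoder recovers the message despite the injected error.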

  • Formulation of the Convolutional Decoding Problem

    An Example of Viterbi Convolutional Decoding

    For simplicity, a BSC is assumed; thus Hamming distance is a proper distance measure. The encoder for this example is shown in Figure 2, and the encoder trellis diagram is shown in Figure 7.

    We start at time t1 in the 00 state. Since in this example there are only two possible transitions leaving any state, not all branches need be shown initially.

    The basic idea behind the decoding procedure can best be understood by examining the Figure 7 encoder trellis in concert with the Figure 10 decoder trellis.

    For the decoder trellis it is convenient, at each time interval, to label each branch with the Hamming distance between the received code symbols and the branch word corresponding to the same branch from the encoder trellis.

  • Formulation of the Convolutional Decoding Problem

    The example in Figure 10 shows a message sequence m, the corresponding codeword sequence U, and a noise-corrupted received sequence Z = 11 01 01 10 01.

    The branch words seen on the encoder trellis branches characterize the encoder in Figure 2 and are known a priori to both the encoder and the decoder.

    The labels on the decoder trellis branches are accumulated by the decoder on the fly.

    From the received sequence shown in Figure 10, we see that the code symbols received at (following) time t1 are 11.

    In order to label the decoder branches at (departing) time t1 with the appropriate Hamming distance metric, we look at the Figure 7 encoder trellis.

    Looking at the encoder trellis again, we see that a state 00 to 10 transition yields an output branch word of 11, which corresponds exactly with the code symbols we received at time t1.

  • Formulation of the Convolutional Decoding Problem

    Therefore, on the decoder trellis, we label the state 00 to 10 transition with a Hamming distance of 0.

    In summary, the metric entered on a decoder trellis branch represents the difference (distance) between what was received and what "should have been" received had the branch word associated with that branch been transmitted.

    Figure 10. Decoder trellis diagram (rate 1/2, K=3).

  • Formulation of the Convolutional Decoding Problem

    The basis of Viterbi decoding is the following observation: if any two paths in the trellis merge to a single state, one of them can always be eliminated in the search for an optimum path.

    For example, Figure 11 shows two paths merging to state 00.

    Figure 11. Decoder trellis diagram (rate 1/2, K=3).

  • Formulation of the Convolutional Decoding Problem

    Viterbi decoding consists of computing the metrics for the two paths entering each state and eliminating one of them.

    At a given time, the winning path metric for each state is designated as the state metric for that state at that time.

    The first few steps in our decoding example are as shown in Figure 12.

  • Formulation of the Convolutional Decoding Problem

    Figure 12. Selection of survivor paths. (a) Survivors at t2. (b) Survivors at t3. (c) Metric comparisons at t4. (d) Survivors at t4. (e) Metric comparison at t5. (f) Survivors at t5. (g) Metric comparisons at t6. (h) Survivors at t6.

  • Formulation of the Convolutional Decoding Problem

    Path Memory and Synchronization

    The storage requirements of the Viterbi decoder grow exponentially with constraint length. For a code with rate 1/n, the decoder retains a set of 2^(K-1) paths after each decoding step.

    All of the 2^(K-1) paths tend to have a common stem which eventually branches to the various states.

    A simple decoder implementation, then, contains a fixed amount of path history and outputs the oldest bit on an arbitrary path each time it steps one level deeper into the trellis.

    The amount of path storage required is u = h 2^(K-1), where h is the length of the information bit path history per state.

  • PROPERTIES OF CONVOLUTIONAL CODES

    Distance Properties of Convolutional Codes

    Consider the distance properties of convolutional codes in the context of the simple encoder in Figure 7.3 and its trellis diagram in Figure 7.7. We want to evaluate the minimum distance between all possible pairs of codeword sequences.

    The minimum distance is related to the error-correcting capability of the code.

  • PROPERTIES OF CONVOLUTIONAL CODES

    Assuming that the all-zeros input sequence was transmitted, the paths of interest are those that start and end in the 00 state and do not return to the 00 state anywhere in between.

    An error will occur whenever the distance of any other path that merges with the a = 00 state at time ti is less than that of the all-zeros path up to time ti, causing the all-zeros path to be discarded in the decoding process.

    In other words, given the all-zeros transmission, an error occurs whenever the all-zeros path does not survive.

    Thus an error of interest is associated with a surviving path that diverges from and then remerges to the all-zeros path.

  • PROPERTIES OF CONVOLUTIONAL CODES

    Why is it necessary for the path to remerge? Isn't the divergence enough to indicate an error?

    Yes, of course, but an error characterized by only a divergence means that the decoder, from that point on, will be outputting "garbage" for the rest of the message duration.

    We want to quantify the decoder's capability in terms of errors that will usually take place, that is, we want to learn the easiest way for the decoder to make an error.

  • PROPERTIES OF CONVOLUTIONAL CODES

    The minimum distance for making such an error can be found by exhaustively examining every path from the 00 state to the 00 state.

    First, let us redraw the trellis diagram, shown in Figure 7.16, labeling each branch with its Hamming distance from the all-zeros codeword instead of with its branch word symbols.

    The Hamming distance between two unequal-length sequences is found by first appending the necessary number of zeros to the shorter sequence to make the two sequences equal in length.

  • PROPERTIES OF CONVOLUTIONAL CODES

    Consider all the paths that diverge from the all-zeros path and then remerge for the first time at some arbitrary node. From Figure 7.16 we can compute the distances of these paths from the all-zeros path.

    There is one path at distance 5 from the all-zeros path; this path departs from the all-zeros path at time t1 and merges with it at time t4.

    Similarly, there are two paths at distance 6: one which departs at time t1 and merges at time t5, and the other which departs at time t1 and merges at time t6, and so on.

  • PROPERTIES OF CONVOLUTIONAL CODES

    We can also see from the dashed and solid lines of the diagram that the input bits for the distance 5 path are 1 0 0; it differs in only one input bit from the all-zeros input sequence.

    Similarly, the input bits for the distance 6 paths are 1 1 0 0 and 1 0 1 0 0; each differs in two positions from the all-zeros path.

    The minimum distance in the set of all arbitrarily long paths that diverge and remerge, called the minimum free distance, or simply the free distance, is seen to be 5 in this example.

  • PROPERTIES OF CONVOLUTIONAL CODES

    For calculating the error-correcting capability t of the code, we repeat Equation (6.44) with the minimum distance replaced by the free distance df:

    t = floor((df - 1)/2)

    where floor(x) means the largest integer no greater than x. Setting df = 5, we see that the code characterized by the Figure 7.3 encoder can correct any two channel errors.

  • PROPERTIES OF CONVOLUTIONAL CODES

    Consider Figure 7.16 and the possible divergence-remergence error paths. From this picture one sees that the decoder cannot make an error in any arbitrary way.

    The error path must follow one of the allowable transitions. The trellis pinpoints all such allowable paths.

  • PROPERTIES OF CONVOLUTIONAL CODES

    Although Figure 7.16 presents the computation of free distance in a straightforward way, a more direct closed-form expression can be obtained:

    First, we label the branches of the state diagram as either D^0 = 1, D, or D^2, as shown in Figure 7.17, where the exponent of D denotes the Hamming distance from the branch word of that branch to the all-zeros branch.

    The self-loop at node a can be eliminated, since it contributes nothing to the distance properties of a codeword sequence relative to the all-zeros sequence.

  • PROPERTIES OF CONVOLUTIONAL CODES

    Furthermore, node a can be split into two nodes (labeled a and e), one of which represents the input and the other the output of the state diagram.

    All paths originating at a = 00 and terminating at e = 00 can be traced on the modified state diagram of Figure 7.17.

    We can calculate the transfer function of path a b c e (starting and ending at state 00) in terms of the indeterminate "placeholder" D as D^2 * D * D^2 = D^5.

    The exponent of D represents the cumulative tally of the number of ones in the path, and hence the Hamming distance from the all-zeros path.

  • PROPERTIES OF CONVOLUTIONAL CODES

    The paths a b d c e and a b c b c e each have the transfer function D^6, and thus a Hamming distance of 6 from the all-zeros path. We now write the state equations as

    Xb = D^2 Xa + Xc

    Xc = D Xb + D Xd

    Xd = D Xb + D Xd

    Xe = D^2 Xc

    where Xa, ..., Xe are dummy variables for the partial paths to the intermediate nodes.

  • PROPERTIES OF CONVOLUTIONAL CODES

    The transfer function, T(D), sometimes called the generating function of the code, can be expressed as T(D) = Xe/Xa. By solving the state equations,

    T(D) = D^5 / (1 - 2D)

    = D^5 + 2D^6 + 4D^7 + ... + 2^l D^(l+5) + ...

    The transfer function for this code indicates that there is a single path of distance 5 from the all-zeros path, two of distance 6, four of distance 7, and so on. In general, there are 2^l paths of distance l + 5 from the all-zeros path, where l = 0, 1, 2, .... The free distance df of the code is the Hamming weight of the lowest-order term in the expansion of T(D).
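    The expansion of T(D) can be verified by brute force: enumerate every path that diverges from state 00 and remerges for the first time, tallying Hamming weights. An illustrative sketch (helper names are inventions), for the same g1 = 111, g2 = 101 encoder:

```python
# Brute-force check of T(D) = D^5 + 2D^6 + 4D^7 + ... for the rate-1/2,
# K=3 code: count first-remerge paths by accumulated Hamming weight.

from collections import Counter

def branch(state, bit, g=((1, 1, 1), (1, 0, 1))):
    reg = [bit] + list(state)
    w = sum(sum(t * r for t, r in zip(taps, reg)) % 2 for taps in g)
    return tuple(reg[:-1]), w          # (next state, branch Hamming weight)

def remerge_weights(max_weight):
    counts = Counter()
    # Start with the diverging branch: input 1 from state 00 (weight 2).
    frontier = [branch((0, 0), 1)]     # entries: (state, accumulated weight)
    while frontier:
        nxt = []
        for state, w in frontier:
            if w > max_weight:
                continue
            for bit in (0, 1):
                s2, bw = branch(state, bit)
                if s2 == (0, 0):       # remerged: record the path weight
                    if w + bw <= max_weight:
                        counts[w + bw] += 1
                else:
                    nxt.append((s2, w + bw))
        frontier = nxt
    return dict(counts)

print(remerge_weights(7))   # -> {5: 1, 6: 2, 7: 4}
```

The counts match the series coefficients: one path at distance 5, two at 6, four at 7.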

  • PROPERTIES OF CONVOLUTIONAL CODES

    ln evaluating distance properties, the transfer function,T(D),cannot be used for long constraint lengths since thecomplexity of T(D)increases exponentially with constraintlength.

    The transfer function can be used to provide more detailedinformation than just the distance of the various paths.

    We can introduce a factor N into all branch transitionscaused by the input bit one. Thus, as each branch istraversed, the cumulative exponent on N increases by one,only if that branch transition is due to an input bit one.

    For the convolutional code characterized in the Figure 7.3example. the additional factors L and N are shown on themodified state diagram of Figure 7.18

  • PROPERTIES OF CONVOLUTIONAL CODES

    Xb = D^2 L N Xa + L N Xc

    Xc = D L Xb + D L Xd

    Xd = D L N Xb + D L N Xd

    Xe = D^2 L Xc

    The transfer function of this augmented state diagram is

    T(D, L, N) = D^5 L^3 N / (1 - D L (1 + L) N)

    = D^5 L^3 N + D^6 L^4 (1 + L) N^2 + D^7 L^5 (1 + L)^2 N^3 + ...

    Thus, we can verify some of the path properties displayed in Figure 7.16. There is one path of distance 5, length 3, which differs in one input bit from the all-zeros path.

  • PROPERTIES OF CONVOLUTIONAL CODES

    There are two paths of distance 6, one of which is length 4, the other length 5, and both differ in two input bits from the all-zeros path. Also, of the distance 7 paths, one is of length 5, two are of length 6, and one is of length 7; all four paths correspond to input sequences that differ in three input bits from the all-zeros path.

    Thus if the all-zeros path is the correct path and the noise causes us to choose one of the incorrect paths of distance 7, three bit errors will be made.

  • PROPERTIES OF CONVOLUTIONAL CODES

    Systematic and Nonsystematic Convolutional Codes

    A systematic convolutional code is one in which the input k-tuple appears as part of the output branch word n-tuple associated with that k-tuple. Figure 7.19 shows a binary, rate 1/2, K=3 systematic encoder.

    For linear block codes, any nonsystematic code can be transformed into a systematic code with the same block distance properties. This is not the case for convolutional codes.

    The reason for this is that convolutional codes depend largely on free distance; making the convolutional code systematic, in general, reduces the maximum possible free distance for a given constraint length and rate.

  • PROPERTIES OF CONVOLUTIONAL CODES

    Table 7.1 shows the maximum free distance for rate 1/2 systematic and nonsystematic codes for K = 2 through 8. For large constraint lengths the results are even more widely separated.

  • PROPERTIES OF CONVOLUTIONAL CODES

    Catastrophic Error Propagation in Convolutional Codes

    A catastrophic error is defined as an event whereby a finite number of code symbol errors cause an infinite number of decoded data bit errors.

    Massey and Sain have derived a necessary and sufficient condition for convolutional codes to display catastrophic error propagation.

    For rate 1/n codes with register taps designated by polynomial generators, the condition for catastrophic error propagation is that the generators have a common polynomial factor (of degree at least one).

  • PROPERTIES OF CONVOLUTIONAL CODES

    For example, Figure 7.20a illustrates a rate 1/2, K=3 encoder with upper polynomial g1(X) and lower polynomial g2(X), as follows:

    g1(X) = 1 + X

    g2(X) = 1 + X^2 = (1 + X)(1 + X)

    The generators g1(X) and g2(X) have in common the polynomial factor 1 + X. Therefore, the encoder in Figure 7.20a can manifest catastrophic error propagation.
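    The Massey-Sain condition can be tested mechanically with a GF(2) polynomial GCD. An illustrative sketch (helper names are inventions; polynomials are coefficient lists, low order first):

```python
# Sketch: a rate 1/n encoder is catastrophic when its generator polynomials
# share a factor of degree >= 1; test with a GF(2) polynomial GCD.

def gf2_deg(p):
    """Degree of a GF(2) polynomial (low-order-first bit list); -1 if zero."""
    for i in range(len(p) - 1, -1, -1):
        if p[i]:
            return i
    return -1

def gf2_mod(a, b):
    """Remainder of a divided by b over GF(2)."""
    a = list(a)
    da, db = gf2_deg(a), gf2_deg(b)
    while da >= db and da >= 0:
        shift = da - db
        for i in range(db + 1):        # subtract (XOR) shifted copy of b
            a[i + shift] ^= b[i]
        da = gf2_deg(a)
    return a

def gf2_gcd(a, b):
    while gf2_deg(b) >= 0:
        a, b = b, gf2_mod(a, b)
    return a

# Figure 7.20a generators: g1(X) = 1 + X, g2(X) = 1 + X^2 = (1 + X)^2.
print("catastrophic:", gf2_deg(gf2_gcd([1, 1, 0], [1, 0, 1])) >= 1)  # -> True

# The Figure 2 code (g1 = 1 + X + X^2, g2 = 1 + X^2) has no common factor:
print("catastrophic:", gf2_deg(gf2_gcd([1, 1, 1], [1, 0, 1])) >= 1)  # -> False
```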

  • PROPERTIES OF CONVOLUTIONAL CODES

    In terms of the state diagram for a code of any rate, catastrophic errors can occur if, and only if, any closed-loop path in the diagram has zero weight (zero distance from the all-zeros path).

    To illustrate this, consider the example of Figure 7.20. The state diagram in Figure 7.20b is drawn with the state a = 00 node split into two nodes, a and e, as before.

    Assuming that the all-zeros path is the correct path, the incorrect path a b d d ... d c e has exactly 6 ones, no matter how many times we go around the self-loop at node d.

  • PROPERTIES OF CONVOLUTIONAL CODES

    Thus for a BSC, for example, three channel errors may cause us to choose this incorrect path. An arbitrarily large number of errors (two plus the number of times the self-loop is traversed) can be made on such a path.

    We observe that for rate 1/n codes, if each adder in the encoder has an even number of connections, the self-loop corresponding to the all-ones data state will have zero weight, and consequently, the code will be catastrophic.

  • PROPERTIES OF CONVOLUTIONAL CODES

    Performance Bounds for Convolutional Codes

    The probability of bit error, PB, for a binary convolutional code using hard-decision decoding can be shown to be upper bounded as follows:

    PB <= dT(D, N)/dN  evaluated at N = 1, D = 2 sqrt(p(1 - p))

    where p is the probability of channel symbol error.

  • PROPERTIES OF CONVOLUTIONAL CODES

    For the example of Figure 7.3, T(D, N) is obtained from T(D, L, N) by setting L = 1 in the transfer function:

    T(D, N) = D⁵N / (1 − 2DN)

    dT(D, N)/dN = D⁵ / (1 − 2DN)²

    so that, evaluating at N = 1 and D = 2√(p(1 − p)),

    P_B ≤ [2√(p(1 − p))]⁵ / [1 − 4√(p(1 − p))]²
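    As an illustration, this hard-decision bound for the Figure 7.3 code can be evaluated numerically. The sketch below (the function name is ours, not from the text) is valid only while 4√(p(1 − p)) < 1, i.e. for sufficiently small p:

```python
import math

def pb_bound_hard(p, df=5):
    """Transfer-function bound for the rate-1/2, K=3 code of Figure 7.3:
    P_B <= D^df / (1 - 2D)^2 with D = 2*sqrt(p*(1 - p))."""
    D = 2.0 * math.sqrt(p * (1.0 - p))
    if 2.0 * D >= 1.0:
        raise ValueError("bound diverges: requires 4*sqrt(p*(1-p)) < 1")
    return D ** df / (1.0 - 2.0 * D) ** 2

# The bound tightens rapidly as the channel symbol error probability drops:
for p in (1e-2, 1e-3, 1e-4):
    print(p, pb_bound_hard(p))
```

    Note that the bound is loose for large p and meaningless once D approaches 1/2, which is why the sketch rejects that region.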

    For coherent BPSK modulation over an additive white Gaussian noise (AWGN) channel, it can be shown that the bit error probability is bounded by:

    P_B ≤ Q(√(2 d_f E_c/N_0)) exp(d_f E_c/N_0) · dT(D, N)/dN,  evaluated at N = 1, D = exp(−E_c/N_0)

    where

    E_c/N_0 = r E_b/N_0
    E_c/N_0 = ratio of channel symbol energy to noise power spectral density
    E_b/N_0 = ratio of information bit energy to noise power spectral density
    r = k/n = rate of the code

    Therefore, for the rate 1/2 code with free distance d_f = 5, in conjunction with coherent BPSK and hard-decision decoding, we can write

    P_B ≤ Q(√(5 E_b/N_0)) exp(5E_b/2N_0) · exp(−5E_b/2N_0) / [1 − 2 exp(−E_b/2N_0)]²

    = Q(√(5 E_b/N_0)) / [1 − 2 exp(−E_b/2N_0)]²
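    This closed-form bound for the rate 1/2, d_f = 5 code is straightforward to evaluate over a range of E_b/N_0 values. A minimal sketch (function names are illustrative; Q is implemented via the complementary error function):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pb_bound_bpsk(ebno_db, df=5, r=0.5):
    """P_B <= Q(sqrt(2*df*r*Eb/N0)) / (1 - 2*exp(-r*Eb/N0))^2
    for the rate-1/2, df=5 code with coherent BPSK."""
    ebno = 10.0 ** (ebno_db / 10.0)          # dB -> linear ratio
    num = Q(math.sqrt(2.0 * df * r * ebno))
    den = (1.0 - 2.0 * math.exp(-r * ebno)) ** 2
    return num / den

# Bound falls steeply with increasing Eb/N0:
for db in (4.0, 6.0, 8.0):
    print(db, pb_bound_bpsk(db))
```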

    Coding Gain

    Coding gain is defined as the reduction, usually expressed in decibels, in the required E_b/N_0 to achieve a specified error probability of the coded system over an uncoded system with the same modulation and channel characteristics.

    Table 7.2 lists an upper bound on the coding gains, compared to uncoded coherent BPSK, for several maximum free distance convolutional codes with constraint lengths varying from 3 to 9 over a Gaussian channel with hard-decision decoding.

    Table 7.3 lists the measured coding gains, compared to uncoded coherent BPSK, achieved with hardware implementation or computer simulation over a Gaussian channel with soft-decision decoding.

    The uncoded E_b/N_0 is given in the leftmost column. From Table 7.3 we can see that coding gain increases as the error probability is decreased.

    However, the coding gain cannot increase indefinitely; it has an upper bound as shown in the table. This bound in decibels can be shown to be

    coding gain ≤ 10 log10(r d_f) dB

    where r is the code rate and d_f is the free distance.
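    The bound takes one line to compute; a minimal sketch (the function name is ours), applied to the rate 1/2, d_f = 5 code used as the running example:

```python
import math

def coding_gain_bound_db(r, df):
    """Upper bound on coding gain in dB: 10*log10(r * df)."""
    return 10.0 * math.log10(r * df)

# Rate-1/2, df = 5 code:
print(round(coding_gain_bound_db(0.5, 5), 2))   # -> 3.98 (dB)
```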

    Examination of Table 7.3 also reveals that at P_B = 10⁻⁷, for code rates of 1/2 and 2/3, the weaker codes tend to be closer to the upper bound than are the more powerful codes.

    Typically, Viterbi decoding is used over binary input channels with either hard or 3-bit soft quantized outputs. The constraint lengths vary between 3 and 9, the code rate is rarely smaller than 1/3, and the path memory is usually a few constraint lengths.

    Best Known Convolutional Codes

    The connection vectors or polynomial generators of a convolutional code are usually selected based on the code's free distance properties.

    The first criterion is to select a code that does not have catastrophic error propagation and that has the maximum free distance for the given rate and constraint length.

    Then the number of paths at the free distance d_f, or the number of data bit errors the paths represent, should be minimized.

    The selection procedure can be further refined by considering the number of paths or bit errors at d_f + 1, at d_f + 2, and so on, until only one code or class of codes remains.

    A list of the best known codes of rate 1/2, K = 3 to 9, and rate 1/3, K = 3 to 8, based on this criterion was compiled by Odenwalder and is given in Table 7.4.

    The connection vectors in this table represent the presence or absence (1 or 0) of a tap connection on the corresponding stage of the convolutional encoder, the leftmost term corresponding to the leftmost stage of the encoder register.

    It is interesting to note that these connections can be inverted (leftmost and rightmost can be interchanged in the above description).

  • OTHER CONVOLUTIONAL DECODING ALGORITHMS

    Sequential Decoding

    Prior to the discovery of an optimum algorithm by Viterbi, other algorithms had been proposed for decoding convolutional codes.

    The earliest was the sequential decoding algorithm, originally proposed by Wozencraft and subsequently modified by Fano.

    Consider that, using the encoder shown in Figure 7.3, a sequence m = 1 1 0 1 1 is encoded into the codeword sequence U = 1 1 0 1 0 1 0 0 0 1.

    Assume that the received sequence Z is, in fact, a correct rendition of U. The decoder has available a replica of the encoder code tree, shown in Figure 7.6, and can use the received sequence Z to penetrate the tree.

    The decoder starts at the time t1 node of the tree and generates both paths leaving that node. The decoder follows the path that agrees with the received n code symbols.

    At the next level in the tree, the decoder again generates both paths leaving that node, and follows the path agreeing with the second group of n code symbols.

    Suppose, however, that the received sequence Z is a corrupted version of U. The decoder starts at the time t1 node of the code tree and generates both paths leading from that node.

    If the received n code symbols coincide with one of the generated paths, the decoder follows that path. If there is no agreement, the decoder follows the most likely path but keeps a cumulative count of the number of disagreements between the received symbols and the branch words on the path being followed. If two branches appear equally likely, the receiver uses an arbitrary rule, such as following the zero input path.

    If the disagreement count exceeds a certain number (which may increase as we penetrate the tree), the decoder decides that it is on an incorrect path, backs out of the path, and tries another.

    The decoder keeps track of the discarded pathways to avoid repeating any path excursions.
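    The procedure above can be sketched in code. What follows is a toy depth-first search for the Figure 7.3 encoder (generators 111 and 101); it is not the Fano algorithm, and the fixed disagreement limit stands in for the running threshold described in the text. All names here are ours:

```python
G = (0b111, 0b101)   # taps of the two modulo-2 adders (Figure 7.3)

def branch(state, bit):
    """One encoder branch: new 2-bit state and the n=2 output symbols."""
    reg = (bit << 2) | state            # shift new bit into 3-stage register
    out = tuple(bin(reg & g).count("1") % 2 for g in G)
    return reg >> 1, out

def decode(z, limit):
    """Depth-first tree search; back out when disagreements exceed limit."""
    n_branches = len(z) // 2
    stack = [(0, 0, ())]                # (state, disagreements, decided bits)
    while stack:
        state, dis, bits = stack.pop()
        if len(bits) == n_branches:
            return bits                 # reached the end of the tree
        rx = z[2 * len(bits): 2 * len(bits) + 2]
        for bit in (1, 0):              # 0-branch is popped (explored) first
            s, out = branch(state, bit)
            d = dis + sum(a != b for a, b in zip(out, rx))
            if d <= limit:
                stack.append((s, d, bits + (bit,)))
    return None                         # no path stayed within the threshold

# m = 1 1 0 1 1 encodes to U = 11 01 01 00 01; flip one received symbol:
z = (1,1, 0,1, 1,1, 0,0, 0,1)
print(decode(z, limit=1))               # -> (1, 1, 0, 1, 1)
```

    With one channel error and limit = 1, every wrong branch accumulates too many disagreements and is abandoned, so the search backs out and recovers the original message.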

    Comparisons and Limitations of Viterbi and Sequential Decoding

    The major drawback of the Viterbi algorithm is that while error probability decreases exponentially with constraint length, the number of code states, and consequently decoder complexity, grows exponentially with constraint length.

    On the other hand, the computational complexity of the Viterbi algorithm is independent of channel characteristics.

    Sequential decoding achieves asymptotically the same error probability as maximum likelihood decoding but without searching all possible states.

    In Figure 7.24, some typical P_B versus E_b/N_0 curves for these two popular solutions to the convolutional decoding problem, Viterbi decoding and sequential decoding, illustrate their comparative performance using coherent BPSK over an AWGN channel.
