TRANSCRIPT
ECE 6640 Digital Communications
Dr. Bradley J. Bazuin, Assistant Professor
Department of Electrical and Computer Engineering, College of Engineering and Applied Sciences
Chapter 7
7. Channel Coding: Part 2
7.1. Convolutional Encoding
7.2. Convolutional Encoder Representation
7.3. Formulation of the Convolutional Decoding Problem
7.4. Properties of Convolutional Codes
7.5. Other Convolutional Decoding Algorithms
Sklar’s Communications System
Notes and figures are based on or taken from materials in the course textbook: Bernard Sklar, Digital Communications, Fundamentals and Applications,
Prentice Hall PTR, Second Edition, 2001.
Signal Processing Functions
Waveform Coding and Structured Sequences
• Structured Sequences:
– transforming waveforms into "better" waveform representations that contain redundant bits
– use the redundancy for error detection and correction
• Block codes are memoryless.
• Convolutional codes have memory!
Convolutional Encoding
• The encoder transforms each message sequence m into a unique codeword sequence U = G(m). Even though the sequence m uniquely defines the sequence U, a key feature of convolutional codes is that a given k-tuple within m does not uniquely define its associated n-tuple within U, since the encoding of each k-tuple is a function not only of that k-tuple but also of the K-1 input k-tuples that precede it.
• Each k-tuple affects not just the codeword generated when it is input, but the next K-1 codewords as well.
– the system has memory
Convolutional Encoder Diagram
• Each message, mi, may be a k-tuple (or, for k = 1, a single bit)
• K messages are in the encoder
• For each message input, an n-tuple is generated
• The code rate is k/n
• We will usually be working with k=1 and n=2 or 3
Proakis Convolutional Encoder
John G. Proakis, “Digital Communications, 4th ed.,” McGraw Hill, Fourth Edition, 2001. ISBN: 0-07-232111-3.
Representations
• Several methods are used for representing a convolutional encoder, the most popular being:
– the connection pictorial
– connection vectors or polynomials
– state diagrams
– tree diagrams
– trellis diagrams
Connection Representation
• k = 1, n = 2, K = 3
• Generator Polynomials
– G1 = 1 + X + X2
– G2 = 1 + X2
• To end a message, K-1 "zero" messages are transmitted. This allows the encoder to be flushed.
– the effective code rate is then lower than k/n; the actual rate is k·m_length / (n·(m_length + K - 1))
– this is a zero-tailed encoder
Impulse Response of the Encoder
• Allow a single "1" to transition through the K stages:
– 100 -> 11
– 010 -> 10
– 001 -> 11
– 000 -> 00
• If the input message were 1 0 1:
– 1: 11 10 11
– 0:    00 00 00
– 1:       11 10 11
– Sum: 11 10 00 10 11
– the sum is the transmitted n-tuple sequence (if a 2-zero tail follows)
– the sequence/summation involves superposition (linear addition)
• The impulse response of one k-tuple sums with the impulse responses of successive k-tuples!
Convolutional Encoding the Message
• As each k-tuple is input, an n-tuple is output.
• This is a rate 1/2 encoding.
• The "constraint length" is K = 3, the length of the k-tuple shift register.
• The effective code rate for m_length = 3 is 3/10.
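The effective-rate calculation can be sketched in a few lines (the function name is mine, not from the slides):

```python
def effective_rate(k, n, m_length, K):
    """Effective code rate of a zero-tailed convolutional encoder:
    k*m_length message bits produce n*(m_length + K - 1) code bits,
    since K-1 zero k-tuples are appended to flush the encoder."""
    return (k * m_length) / (n * (m_length + K - 1))

# Rate-1/2, K=3 code with a 3-bit message: 3 bits in, 10 code bits out.
print(effective_rate(1, 2, 3, 3))   # 0.3, i.e. 3/10
```

As m_length grows, the tail overhead vanishes and the effective rate approaches k/n.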
Proakis (3,1), rate 1/3, K=3 Pictorial
• Generator Polynomials
– G1 = 1
– G2 = 1 + X2
– G3 = 1 + X + X2
Polynomial Representation
• Generator Polynomials (also represented in octal)
– G1 = 1 + X + X2 (7 octal)
– G2 = 1 + X2 (5 octal)
• A binary input is assumed.
• There are two generator polynomials, therefore n = 2.
– each polynomial generates one of the elements of the n-tuple output
• Polynomial multiplication can be used to generate the output sequences. For m = 1 0 1, m(X) = 1 + X2:
• m(X)*g1(X) = (1 + X2)*(1 + X + X2) = 1 + X + X3 + X4
• m(X)*g2(X) = (1 + X2)*(1 + X2) = 1 + X4
• Output: (11, 10, 00, 10, 11) as before
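The polynomial products above can be checked with a short GF(2) multiplication sketch (the helper name is mine):

```python
def gf2_poly_mul(a, b):
    """Multiply binary polynomials given as coefficient lists, lowest order first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj   # modulo-2 accumulation
    return out

m  = [1, 0, 1]   # m(X)  = 1 + X^2  (message 1 0 1)
g1 = [1, 1, 1]   # g1(X) = 1 + X + X^2
g2 = [1, 0, 1]   # g2(X) = 1 + X^2

c1 = gf2_poly_mul(m, g1)    # [1, 1, 0, 1, 1] -> 1 + X + X^3 + X^4
c2 = gf2_poly_mul(m, g2)    # [1, 0, 0, 0, 1] -> 1 + X^4
print(list(zip(c1, c2)))    # interleaved pairs: (1,1), (1,0), (0,0), (1,0), (1,1)
```

Interleaving the two product sequences reproduces the slide's output (11, 10, 00, 10, 11).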
State Representation
• Using the same encoding:
• Solid lines represent 0 inputs
• Dashed lines represent 1 inputs
• The n-tuple output is shown with the state transition
• It can be verified that inputting 2 zeros always returns the encoder to the same "steady state"
• Note: the two previous k-tuples provide the state; the new k-tuple drives the transitions
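The state diagram can be tabulated directly: with the state being the two previous input bits, each input bit selects one of two transitions. A sketch for the rate-1/2 code above (g1 = 111, g2 = 101); the function name is mine:

```python
def step(state, b, taps=((1, 1, 1), (1, 0, 1))):
    """One state transition: returns (output n-tuple, next state).
    state = (s1, s2), the two most recent input bits, s1 newest."""
    reg = (b,) + state
    out = tuple(sum(r & t for r, t in zip(reg, g)) % 2 for g in taps)
    return out, (b, state[0])

# Two zero inputs return any state to (0, 0), the "steady state":
for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    _, s = step(state, 0)
    _, s = step(s, 0)
    print(state, '->', s)   # always (0, 0)
```

This is exactly the "flushing" behavior of the K-1 zero tail messages.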
Proakis (3,1) K=3 State Diagram
Solid Lines are 0 inputs
Dashed Lines are 1 inputs
Tree Diagram
• Input values define where to go next.
• Each set of branch outputs transmits complementary n-tuples
• States can be identified by repeated level operations
Proakis (3,1) K=3 Polynomial and Tree
Generator connection vectors (from the figure): g1 = 100, g2 = 101, g3 = 111
Trellis Diagram
• The tree structure repeats itself.
• The tree/state diagrams define a finite number of states.
– the tree has an ever-increasing number of branches to show the complete path of a message
– the state diagram folds back on itself, so observing the complete path is difficult
• Can we identify diagrammatically a figure that shows both the state transitions and the entire message path?
• Yes, the Trellis Diagram
Trellis Diagram
• Initial state, state development, a continuous observable pattern, "fully engaged" after K-1 inputs; the tail zeros can easily be traced back to the "initial state".
Proakis (3,1) K=3 Trellis Diagram
A more complicated example follows
• Proakis (3,2) K=2 Encoder– Pictorial– Polynomial and Tree– State Diagram– Trellis
Proakis (3,2) K=2 Pictorial
Proakis (3,2) K=2 Polynomial and Tree
Generator connection vectors (from the figure): g1 = 1101, g2 = 1011, g3 = 0101
Proakis (3,2) K=2 State Diagram
Solid Lines are 0 inputs
Dashed Lines are 1 inputs
Proakis (3,2) K=2 Trellis Diagram
Encoding
• Each approach can be readily implemented in hardware.
• Good codes have been found by computer searches for each value of the constraint length, K.
• That was the easy part; now for decoding.
Decoding Convolutional Codes
• As the codes have memory, we wish to use a decoder that achieves the minimum probability of error … using a condition called maximum likelihood.
ECE 5820 MAP and ML
• There are various estimators for signals combined with random variables.
• In general we are interested in the maximum a-posteriori (MAP) estimate of X for a given observation Y:
x_MAP = arg max over x of P(X=x | Y=y)
– this requires knowledge of the a-priori probability, since, by Bayes' rule,
P(X=x | Y=y) = P(Y=y | X=x) · P(X=x) / P(Y=y)
ECE 5820 MAP and ML
• In some situations, we know P(Y=y | X=x) but not the a-priori probabilities.
• In these cases, we form a maximum likelihood (ML) estimate:
x_ML = arg max over x of P(Y=y | X=x)
ECE 5820 Markov Process
• A sequence or "chain" of subexperiments in which the outcome of a given subexperiment determines which subexperiment is performed next.
• If the output from the previous state in a trellis is known, the next state depends only on the previous state and the new input.
– the decoding can be computed one step at a time to determine the maximum likelihood path
• Viterbi's improvement on this concept:
– In a trellis, there is a repetition of states. If two paths arrive at the same state, only the path with the maximum likelihood must be maintained; the "other path" can never become the ML path!
For a Markov chain of states:
P(s_n, s_n-1, …, s_0) = P(s_n | s_n-1) · P(s_n-1 | s_n-2) · … · P(s_1 | s_0) · P(s_0)
Maximum Likelihood Decoding
• If all input message sequences are equally likely, the decoder that achieves the minimum probability of error is the one that compares the conditional probabilities of all possible paths against the received sequence:
P(Z | U^(m')) = max over all U^(m) of P(Z | U^(m))
– where the U^(m) are the possible message paths
• For a memoryless channel, we can base the computation on the individual values of the observed path Z:
P(Z | U^(m)) = ∏_i P(Z_i | U_i^(m)) = ∏_i ∏_j P(z_ji | u_ji^(m))
– where Z_i is the ith branch of the received sequence Z, z_ji is the jth code symbol of Z_i, and similarly for U^(m) and u_ji^(m)
ML Computed Using Logs
• As the probability is a product of products, computational precision and the final magnitude are of concern.
• By taking the log of the products, a summation may be performed instead of multiplications.
– constants can easily be absorbed
– similar sets of magnitudes can be pre-computed and/or scaled to more desirable values
– the precision used for the values can vary as desired for the available bit precision (hard vs. soft values)
Channel Models: Hard vs. Soft Decisions
• Our previous symbol determinations selected a detected symbol with no other considerations: a hard decision.
• The decision used computed metrics to make the determination, which were then discarded.
• What if the relative certainty of the decision were maintained along with the decision?
– if one decision influences another, hard decisions prevent that certainty from being used
– maintaining a soft decision may allow higher overall decision accuracy when such interactions exist
ML in Binary Symmetric Channels
• Bit error probability:
– P(0|1) = P(1|0) = p
– P(1|1) = P(0|0) = 1 - p
• Suppose Z and a possible message U^(m) differ in dm positions (the Hamming distance). Then the ML probability for an L-bit message becomes
P(Z | U^(m)) = p^dm · (1 - p)^(L - dm)
• Taking the log:
log P(Z | U^(m)) = dm·log p + (L - dm)·log(1 - p) = -dm·log((1 - p)/p) + L·log(1 - p)
ML in Binary Symmetric Channels
• The ML value for each possible U^(m) is then
log P(Z | U^(m)) = -dm·A + B, with A = log((1 - p)/p) and B = L·log(1 - p)
– the constant B is identical for all possible U and can be pre-computed
– the log of the probability ratio, A, is also a constant; for p < 1/2, A > 0, so maximizing the log-likelihood is equivalent to minimizing dm
• Overall, we are looking for the possible sequence with minimum Hamming distance.
– for hard decisions, we use the Hamming distance
– for soft decisions, we can use the "certainty values" shown in the previous figure!
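The equivalence between maximizing the BSC log-likelihood and minimizing Hamming distance is easy to check numerically (the function name is mine):

```python
import math

def log_likelihood(dm, L, p):
    """log P(Z|U) over a BSC when U differs from Z in dm of L positions."""
    return dm * math.log(p) + (L - dm) * math.log(1 - p)

# For p < 1/2 the likelihood strictly decreases as the Hamming
# distance dm grows, so the ML sequence is the closest one.
p, L = 0.05, 10
vals = [log_likelihood(dm, L, p) for dm in range(L + 1)]
print(all(a > b for a, b in zip(vals, vals[1:])))   # True
```

Each additional disagreement changes the log-likelihood by log p - log(1 - p), a negative constant, which is the slope -A from the slide.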
Viterbi Decoding
• The previous slide suggested that all possible U should be checked to determine the minimum value (or maximum likelihood).
– If we compute the "metrics" for each U as the n-tuples arrive, the trellis structure can reduce the number of computations that must be performed.
– For a trellis with 2^(K-1) states, only that number of possible U paths needs to be considered.
• Each trellis state has two arriving paths. If we compute path values for each one, only the smallest needs to be maintained.
• The larger can never become smaller as more n-tuples arrive!
• Therefore, only 2^(K-1) surviving paths, vs. 2^L possible paths for U, must be considered!
Viterbi Decoder Trellis
• Decoder Trellis with Hamming distances shown for each of the possible paths from “state to state”.
Viterbi Example
• m: 1 1 0 1 1• U: 11 01 01 00 01• Z: 11 01 01 10 01
merging paths
Viterbi Example
• m: 1 1 0 1 1• U: 11 01 01 00 01• Z: 11 01 01 10 01
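The example above can be reproduced in full with a compact hard-decision Viterbi decoder for this rate-1/2, K = 3 code (g1 = 111, g2 = 101). This is my own sketch, not code from the course materials:

```python
TAPS = ((1, 1, 1), (1, 0, 1))   # g1 = 111, g2 = 101 (rate 1/2, K = 3)

def branch(state, b, taps=TAPS):
    """Output n-tuple and next state for input bit b from state (s1, s2)."""
    reg = (b,) + state
    out = tuple(sum(r & t for r, t in zip(reg, g)) % 2 for g in taps)
    return out, (b, state[0])

def encode(bits, taps=TAPS):
    state, out = (0, 0), []
    for b in bits:
        o, state = branch(state, b, taps)
        out.extend(o)
    return out

def viterbi_decode(received, taps=TAPS):
    """Hard-decision Viterbi: minimum cumulative Hamming distance."""
    n = len(taps)
    pm = {(0, 0): 0}                      # path metrics, start in state 00
    history = []                          # per-step survivor pointers
    for i in range(0, len(received), n):
        z = received[i:i + n]
        new_pm, back = {}, {}
        for state, metric in pm.items():
            for b in (0, 1):
                out, nxt = branch(state, b, taps)
                cand = metric + sum(o != r for o, r in zip(out, z))
                if cand < new_pm.get(nxt, float('inf')):
                    new_pm[nxt], back[nxt] = cand, (state, b)
        pm = new_pm
        history.append(back)
    state = min(pm, key=pm.get)           # best-metric final state
    bits = []
    for back in reversed(history):        # trace the survivor path back
        state, b = back[state]
        bits.append(b)
    return bits[::-1]

m = [1, 1, 0, 1, 1]
U = encode(m)                 # 11 01 01 00 01
Z = U[:]; Z[6] ^= 1           # one channel error -> 11 01 01 10 01
print(viterbi_decode(Z))      # [1, 1, 0, 1, 1]
```

Despite the channel error in the fourth branch, the minimum-distance survivor is the transmitted message, exactly as the merging-paths figure shows.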
Viterbi Decoding Implementation: Add-Compare-Select
• Section 7.3.5.1, p. 406
Possible Connections
Viterbi Decoding Implementation: Add-Compare-Select
• State metric update based on new branch metric values:
– hard decoding uses a bit-difference (Hamming) measure
– soft decoding uses rms distances between actual and expected branch values
– the minimum path value is maintained after comparing incoming paths
– paths that are not maintained are eliminated
• When all remaining paths share the same initial branch, update the output sequence.
• Path history does not have to go back to the beginning anymore…
MATLAB
• See doc comm.ViterbiDecoder
– the MATLAB equivalent to Dr. Bazuin's trellis simulation structure
– Use:
– t = poly2trellis(K,[g1 g2 g3])
– t2 = poly2trellis([3],[7 7 5])
– t2.outputs
– t2.nextStates
MATLAB Simulations
• Communication Objects.– see ViterbiComm directory for demos– TCM Modulation
• comm.PSKTCMModulator• comm.RectangularQAMTCMModulator• comm.GeneralQAMTCMModulator
– Convolutional Coding• comm.ConvolutionalEncoder• comm.ViterbiDecoder (Hard and Soft)
• comm.TurboEncoder – available from Matlab, no demo
Properties of Convolutional Codes
• Distance Properties
– If an all-zero sequence is input and there is a bit error, how, and how long, will it take to return to the all-zeros path?
– Find the "minimum free distance," d_f
• It is counted in code-bit errors before returning (not time steps, and not states moved through)
• This determines the error-correction capability:
t = floor( (d_f - 1) / 2 )
– Systematic and non-systematic codes
• For linear block codes, any non-systematic code can be transformed into a systematic code (structure with I and data in columns)
• This is not true for convolutional codes. Convolutional codes focus on free distance; making them systematic would reduce the distance!
Catastrophic Error Propagation
• A necessary and sufficient condition for catastrophic error propagation is that the generator polynomials have a common factor.
– a catastrophic error occurs when a finite number of code symbol errors can generate an infinite number of decoded data bit errors
– See Section 7.4.3 and p. 414
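The common-factor condition can be checked with a small GF(2) polynomial GCD, treating each generator as an integer bitmask (e.g. 0b111 = 1 + X + X^2). The sketch and its names are mine:

```python
def gf2_mod(a, b):
    """Remainder of binary-polynomial division a mod b (integer bitmasks)."""
    while a and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def gf2_gcd(a, b):
    """GCD of binary polynomials via Euclid's algorithm over GF(2)."""
    while b:
        a, b = b, gf2_mod(a, b)
    return a

# g1 = 1+X+X^2, g2 = 1+X^2: coprime, so no catastrophic propagation.
print(bin(gf2_gcd(0b111, 0b101)))   # 0b1
# g1 = 1+X, g2 = 1+X^2 = (1+X)^2: common factor 1+X -> catastrophic.
print(bin(gf2_gcd(0b11, 0b101)))    # 0b11
```

A GCD of 1 means the generators share no nontrivial factor, so the condition for catastrophic propagation is not met.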
Computing Distance Caused by a One
• Split the state diagram to start at 00… and end at 0…
– show state transitions with the following notation
– D: code-bit errors ("1" outputs) for a path
• Define the state equations using the state diagram.
– determine the result with the smallest power of D and interpret
– See Figure 7.17 and p. 411
Computing Distance, Number of Branches, and Branch Transition caused by a One
• Split the state diagram to start at 00… and end at 0…
– show state transitions with the following notation:
– D: code bit errors for a path
– L: one factor for every branch
– N: one factor for every branch taken due to a "1" input
• Define the state equations using the state diagram.
– determine the result with the smallest power of D and interpret
– See Figure 7.18 and p. 412
Computing Distance, Number of Branches, and Branch Transition caused by a One
Interpretation: N = 1, one branch transition caused by a "1" input; L = 3, three branches taken; D = 5, five "1" outputs (the Hamming distance of the error path)
Performance Bounds
• Upper bound of the bit error probability:
P_B ≤ dT(D,N)/dN |  evaluated at N = 1, D = 2√(p(1-p))
– for Figure 7.18 and Eq. 7.15 on p. 412
• For this code, T(D,N) = D^5·N / (1 - 2DN), so
dT(D,N)/dN = D^5 / (1 - 2DN)^2
• Evaluating at N = 1, D = 2√(p(1-p)):
P_B ≤ [2√(p(1-p))]^5 / [1 - 4√(p(1-p))]^2
Performance Bounds
• For coherent BPSK over an AWGN channel,
p = Q(√(2Ec/N0)),  with Ec/N0 = r·(Eb/N0) = (k/n)·(Eb/N0)
• The bound then becomes (for this code, d_f = 5, r = 1/2):
P_B ≤ Q(√(5Eb/N0)) · exp(5Eb/(2N0)) · dT(D,N)/dN |  at N = 1, D = exp(-Eb/(2N0))
= Q(√(5Eb/N0)) / [1 - 2·exp(-Eb/(2N0))]^2
Coding Gain Bounds
• From Eq. 6.19:
G(dB) = (Eb/N0)uncoded (dB) - (Eb/N0)coded (dB), for the same P_B value
• This is bounded by 10·log10 of the code rate times the minimum free distance:
G(dB) ≤ 10·log10(r·d_f)
• Coding gains are shown in Tables 7.2 and 7.3, p. 417
Proakis Error Bounds (1)
• The sequence (first-event) error probability is bounded in terms of the transfer function:
P_e < T(Z) |  at Z = Δ
and the bit error probability by
P_b < (1/k) · dT(Y,Z)/dY |  at Y = 1, Z = Δ
– where Δ = Σ_y √( p(y|0)·p(y|1) ), summed over the channel outputs
• For convolutional codes, based on the Chapter 7 derivation, the union bound over path weights is
P_e < Σ (d = d_free to ∞) a_d·Δ^d
– for soft decisions: Δ = exp(-Rc·Eb/N0)
– for hard decisions: Δ = 2√(p(1-p))
John G. Proakis, "Digital Communications, 5th ed.," McGraw Hill, 2008. ISBN: 978-0-07-295716-6.
Proakis Error Bounds (2)
• Additional error bounds and computations:
– Hard-decision pairwise error probability (pp. 514-515), for odd d:
P_2(d) = Σ (k = (d+1)/2 to d) C(d,k)·p^k·(1-p)^(d-k)
– for even d, the k = d/2 term enters with weight 1/2:
(1/2)·C(d, d/2)·p^(d/2)·(1-p)^(d/2)
– and the union bound
P_e < Σ (d = d_free to ∞) a_d·P_2(d)
Soft Decision Viterbi
• The previous example used fractional "soft values"
– see the Viterbi example slides online
• For digital processing hardware: use integer values and map the observed code symbols to the maximum integer values.
– for 0 and 1 in an 8-level system, use 0 and 7
– compute distances as the rms value from the desired received code to the observed received code
• Note that only 4 values need to be computed to define all branch metric values.
• Example: see Fig. 7.22. For (0,0) and (7,7), compute the distances from (0,7) and (7,0), and you have all 4!
– apply computations and comparisons as done before
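The soft branch metric with 3-bit (8-level) quantization can be sketched as a squared Euclidean distance between the received levels and the ideal levels 0 and 7; the function name is mine:

```python
def soft_branch_metric(expected_bits, received_levels, top=7):
    """Squared Euclidean distance between the ideal levels for the
    expected code bits (0 -> 0, 1 -> top) and the received soft levels."""
    ideal = [top if b else 0 for b in expected_bits]
    return sum((i - r) ** 2 for i, r in zip(ideal, received_levels))

# A mid-scale received pair (3, 4) is nearly equidistant from the
# candidate branch labels, carrying "certainty" a hard decision drops.
for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(bits, soft_branch_metric(bits, (3, 4)))
```

The four printed values are the only distinct metrics needed per received pair, matching the slide's observation that 4 computed values define all branch metrics.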
Other Decoding Methods
Viterbi decoding on the trellis is not the only way…
Section 8.5 of Proakis: • Sequential Decoding Algorithm, p. 525• Stack Algorithm, p. 528• Feedback Decoding, p. 529
Sequential Decoding
• Existed prior to Viterbi.
• Generate a hypothesis about the transmitted sequence:
– compute a metric between the hypothesis and the received signal
– if the metric indicates reasonable agreement, go forward; otherwise go backward, change the hypothesis, and keep trying
– see Section 7.5 and pp. 422-425
• Complexity:
– Viterbi grows exponentially with constraint length
– sequential decoding is independent of the constraint length
• Can have buffer memory problems at low SNR (many trials).
Feedback Decoding
• Use a look-ahead approach to determine the minimum "future" Hamming distance.
– the look-ahead length, L, covers received code symbols forward in time
– compare look-ahead paths for minimum Hamming distance and take the tree branch that contains the minimum value
• Section 7.5.3, pp. 427-429
• Called a feedback decoder because detection decisions are fed back to compute the next set of code paths to search for a minimum.
References
• http://home.netcom.com/~chip.f/viterbi/tutorial.html • http://www.eccpage.com/
• B. Sklar, "How I learned to love the trellis," in IEEE Signal Processing Magazine, vol. 20, no. 3, pp. 87-102, May 2003.
– http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1203212
Practical Considerations (1)
• Convolutional codes are widely used in many practical applications of communication system design.
– The choice of constraint length is dictated by the desired coding gain.– Viterbi decoding is predominantly used for short constraint lengths
(K ≤ 10)– Sequential decoding is used for long-constraint-length codes, where
the complexity of Viterbi decoding becomes prohibitive.
Practical Considerations (2)
• Two important issues in the implementation of Viterbi decoding are:
1. The effect of path memory truncation, which is a desirable feature that ensures a fixed decoding delay.
2. The degree of quantization of the input signal to the Viterbi decoder.
• As a rule of thumb, we stated that path memory truncation to about five constraint lengths has been found to result in negligible performance loss.
• In addition to path memory truncation, the computations were performed with eight-level (three bits) quantized input signals from the demodulator.
Practical Considerations (3)
• Figure 8.6–2 illustrates the performance obtained by simulation for rate 1/2, constraint-lengths K = 3, 5, and 7 codes with memory path length of 32 bits.
• The broken curves are performance results obtained from the upper bound in the bit error rate given by Equation 8.2–12.
• Note that the simulation results are close to the theoretical upper bounds, which indicates that the degradation due to path memory truncation and quantization of the input signal has a minor effect on performance (0.20-0.30 dB).
MATLAB
• ViterbiComm
– poly2trellis: define the convolutional code and trellis to be used
– istrellis: ensuring that the trellis is valid
– distspec: computes the free distance and the first N components of the weight and distance spectra of a linear convolutional code
– comm.ConvolutionalEncoder
– quantiz: a quantization index and a quantized output value, allowing either a hard or soft output value
– comm.ViterbiDecoder: either hard or soft decoding
– bercoding
– Viterbi_Hard.m
– Viterbi_Soft.m
Supplemental Information
• John G. Proakis, "Digital Communications, 5th ed.," McGraw Hill, 2008, Chapter 8. ISBN: 978-0-07-295716-6.
Figure 8.1-2
• Convolutional Code (3,1), rate 1/3, n=3, k=1, K=3
• Generator Polynomials– G1 = 1 – G2 = 1 + D2
– G3= 1 + D + D2
Polynomial Representation: Example 8.1-1
• Generator Polynomials (also represented in octal)
– G1 = 1
– G2 = 1 + D2
– G3 = 1 + D + D2
• A binary input is assumed.
• There are three generator polynomials, therefore n = 3.
• Polynomial multiplication can be used to generate the output sequences for u = (1 0 0 1 1 1):
• c1 = u(D)*g1(D) = (1 + D3 + D4 + D5)*(1)
• c2 = u(D)*g2(D) = (1 + D3 + D4 + D5)*(1 + D2)
• c3 = u(D)*g3(D) = (1 + D3 + D4 + D5)*(1 + D + D2)
Polynomial Computation: Example 8.1-1
• EXAMPLE 8.1–1. Let the sequence u = (100111) be the input sequence to the convolutional encoder shown in Figure 8.1–2.
6 bits in plus 2 zero tail bits (to flush the memory) gives an 8 x 3 = 24-bit output sequence.
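The 24-bit result can be generated with a shift-register sketch of the Figure 8.1-2 encoder (c1 = b, c2 = b xor s2, c3 = b xor s1 xor s2, with two zero tail bits); the code and names are mine:

```python
def encode_31(bits, tail=2):
    """Rate-1/3, K=3 encoder: g1 = 1, g2 = 1 + D^2, g3 = 1 + D + D^2."""
    s1 = s2 = 0
    out = []
    for b in bits + [0] * tail:          # append zero tail to flush memory
        out += [b, b ^ s2, b ^ s1 ^ s2]  # c1, c2, c3
        s1, s2 = b, s1
    return out

c = encode_31([1, 0, 0, 1, 1, 1])
print(len(c))   # 24 code bits for 6 message bits plus 2 tail bits
```

The first output triple is 111, since the first "1" sees an all-zero register and every generator has a tap on the current bit.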
Decoding Convolutional Codes
• As the codes have memory, we wish to use a decoder that achieves the minimum probability of error … using a condition called maximum likelihood.
• But first there is the Transfer Function of a Convolutional Code
Transfer Function Example (1)
• The textbook outlines a procedure for "tracing" from an "input" back to the "output"
– moving from state a back to state a!
• ignore the trivial path (a to a)
• describe each path transition by its number of output "ones"
• determine the state equations
Transfer Function Example (2)
• The transfer function for the code is defined as T(Z) = Xe/Xa.
• By solving the state equations given above, we obtain
Transfer Function Example (3)
• The transfer function for this code indicates that there is a single path of Hamming distance d = 6 from the all-zero path that merges with the all-zero path at a given node.
– the transfer function defines how many "ones" will be output based on 1 or more errors in decoding
– if all 0's are transmitted and a "state error" occurs, there will be 6 ones transmitted before returning to the "base"/correct state, and a cycle consisting of a-c-b-a will have to be completed
– such a path is called a first event error and is used to bound the error probability of convolutional codes
• The transfer function T(Z) introduced above is similar to the weight enumeration function (WEF) A(Z) for block codes introduced in Chapter 7.
Augmenting the Transfer Function (1)
• The transfer function can be used to provide more detailed information than just the distance of the various paths.
– Suppose we introduce a factor Y into all branch transitions caused by the input bit 1. Thus, as each branch is traversed, the cumulative exponent on Y increases by 1 only if that branch transition is due to an input bit 1.
– Furthermore, we introduce a factor of J into each branch of the state diagram so that the exponent of J will serve as a counting variable to indicate the number of branches in any given path from node a to node e.
Augmenting the Transfer Function (2)
• This form for the transfer functions gives the properties of all the paths in the convolutional code.
– That is, the first term in the expansion of T(Y, Z, J) indicates that the distance d = 6 path is of length 3 and that, of its three information bits, one is a 1.
– The second and third terms in the expansion of T(Y, Z, J) indicate that of the two d = 8 paths, one is of length 4 and the second has length 5.
References (Conv. Codes)
• K. Larsen, "Short convolutional codes with maximal free distance for rates 1/2, 1/3, and 1/4 (Corresp.)," in IEEE Transactions on Information Theory, vol. 19, no. 3, pp. 371-372, May 1973.
– http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1055014
• E. Paaske, "Short binary convolutional codes with maximal free distance for rates 2/3 and 3/4 (Corresp.)," in IEEE Transactions on Information Theory, vol. 20, no. 5, pp. 683-689, Sep 1974.
– http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1055264
• J. Conan, "The Weight Spectra of Some Short Low-Rate Convolutional Codes," in IEEE Transactions on Communications, vol. 32, no. 9, pp. 1050-1053, Sep 1984.
– http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1096180
• Jinn-Ja Chang, Der-June Hwang and Mao-Chao Lin, "Some extended results on the search for good convolutional codes," in IEEE Transactions on Information Theory, vol. 43, no. 5, pp. 1682-1697, Sep 1997.
– http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=623175