© Keith M. Chugg, 2017
Introduction & Motivation
EE564: Digital Communication and Coding Systems
Keith M. Chugg
Spring 2017 (updated 2020)
Course Topics (from Syllabus)
• Overview of Comm/Coding
• Signal representation and Random Processes
• Optimal demodulation and decoding
• Uncoded modulation, demodulation, and performance
• Classical FEC
• Modern FEC
• Non-AWGN channels (intersymbol interference)
• Practical considerations (PAPR, synchronization, spectral masks, etc.)
Overview Topics
• Why Digital Comm? Why not analog?
• The digital comm system block diagram
• Source model and entropy
• Separation and channel capacity (mutual information)
• Modulations, Channels, Soft vs. Hard Decision Information
• Performance measures
• Overview of Coding
• More Channels
Digital vs. Analog Communications
• Digital comm = send one of a finite number of signals at the transmitter (most modern systems are digital)
• Analog comm = send messages on the continuum (e.g., analog FM radio)
• Why Digital?
• Exploits digital processing resources (Moore’s Law) via ASICs, FPGAs, DSPs, etc.
• More robustness and better fidelity — via use of memory in encoding/decoding
• Control the amount of degradation from source to sink
• Security (encryption)
• Easier to share resources: multiplexing, routing, multiple access, multimedia
• Advantages in multi-hop systems — keeps distortion from accumulating over hops
Digital Comm. Block Diagram
[Block diagram: Information Source → Source Coder → Channel Coder → Modulator → Channel → Demodulator → Channel Decoder → Source Decoder → Information Sink, with noise, distortion, and interference entering at the channel. The source output s(t) or s_n may be continuous or discrete time, digital or analog; the source coder output b_i and channel coder output c_j are discrete time and digital; the modulator output x(t) and channel output r(t) are continuous-time analog waveforms; the demodulator produces decisions on c_j (soft or hard). An "our focus" annotation marks the channel coder through channel decoder chain.]
Separation Theorem
Channel coder: get bits through the channel at an information rate R < C, where C is the channel capacity.
Source coder: sample, quantize, and compress the source to an acceptable distortion with minimal information rate R.
There is no benefit to combining these two tasks if the encoding block length for each encoder can be arbitrarily large.
[Block diagram repeated from the previous slide, here annotated with the source-coding and channel-coding tasks above.]
Source Model: Binary Memoryless Source

Source: s_n ~ iid, Bernoulli(p)

Entropy of the source:

H(s_n(u)) = p \log_2(1/p) + (1-p) \log_2\big(1/(1-p)\big) (bits/source symbol)
[Plot: average entropy per binary symbol vs. p (probability of 1); the curve rises from 0 at p = 0 to 1 bit at p = 0.5 and falls back to 0 at p = 1.]
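As a quick numerical companion to this plot, here is a minimal sketch (assuming numpy; the function name is mine) of the binary entropy function:

```python
import numpy as np

def binary_entropy(p):
    """Entropy of a Bernoulli(p) source in bits/source symbol, with H(0) = H(1) = 0."""
    p = np.atleast_1d(np.asarray(p, dtype=float))
    h = np.zeros_like(p)
    m = (p > 0) & (p < 1)  # avoid log2(0); those limit terms contribute 0
    h[m] = p[m] * np.log2(1 / p[m]) + (1 - p[m]) * np.log2(1 / (1 - p[m]))
    return h

print(binary_entropy([0.5, 0.11, 1.0]))  # [1.0, ~0.5, 0.0]
```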
Lossless Compression
[Diagram: Source (s_n ~ iid, Bernoulli(p)) → Ideal Compression → b_i ~ iid, Bernoulli(1/2)]
• (Lossless) source coding theorem:
• “Source can be compressed to its entropy and no further”
• For asymptotically large encoding block size
• On average, H bits of b are produced for each symbol of s
For EE564, the effective information source is b and it is iid, Bernoulli(0.5)
(we will consider sources with p != 0.5 for iterative decoding)
Coding Block Diagram
[Block diagram: information bits b_i → Channel Coder → coded symbols c_j (e.g., coded bits) → Channel → channel outputs z_j or y_j → Channel Decoder → information bit decisions, with decisions on c_j (soft or hard) formed inside the decoder]
This simplified model abstracts away the modulation/demodulation and the details of the waveform channel; it is the model we use to study coding.
Channel Capacity
Channel capacity for a memoryless channel:

C = \max_{p_{x(u)}(\cdot)} I(x(u); y(u)), \qquad P(y|x) = \prod_n P(y_n|x_n)

[Diagram: memoryless channel with input x_n and output y_n]
Mutual information:

I(x(u); y(u)) = \sum_y \sum_x p_{x(u),y(u)}(x,y) \log_2\!\left(\frac{p_{x(u),y(u)}(x,y)}{p_{x(u)}(x)\, p_{y(u)}(y)}\right)
= \sum_y \sum_x p_{x(u),y(u)}(x,y) \left[\log_2\!\left(\frac{1}{p_{x(u)}(x)}\right) - \log_2\!\left(\frac{1}{p_{x(u)|y(u)}(x|y)}\right)\right]
= \underbrace{H(x(u))}_{\text{entropy in } x(u)} - \underbrace{H(x(u)|y(u))}_{\text{entropy in } x(u) \text{ given } y(u)}
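To make the definition concrete, here is a small sketch (assuming numpy; the function name is mine) that computes I(X;Y) directly from a joint pmf:

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in bits from a joint pmf given as a 2-D array p_xy[x, y]."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x), column vector
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y), row vector
    mask = p_xy > 0                         # 0 * log(0) terms contribute 0
    ratio = p_xy[mask] / (p_x @ p_y)[mask]  # p(x,y) / (p(x) p(y))
    return float(np.sum(p_xy[mask] * np.log2(ratio)))

# BSC with crossover 0.2 and uniform input: I = 1 - H(0.2) = 0.2781 bits
print(mutual_information([[0.4, 0.1], [0.1, 0.4]]))
```

Capacity requires maximizing this quantity over the input distribution; for symmetric channels such as the BSC on the next slide, the uniform input achieves the maximum.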
Channel Capacity: Binary Symmetric Channel
C(\epsilon) = 1 - H(\epsilon) = 1 - \epsilon \log_2(1/\epsilon) - (1-\epsilon) \log_2\big(1/(1-\epsilon)\big)
[Plot: capacity of the BSC (bits/channel-use) vs. epsilon (BSC error probability); C = 1 at ε = 0 and ε = 1, and C = 0 at ε = 1/2.]

[Diagram: BSC transition diagram with input c_j ∈ {0, 1} and output y_j ∈ {0, 1}, labeled with p_{y_j(u)|c_j(u)}(y_j|c_j): correct transitions with probability (1 − ε), crossovers with probability ε]
The BSC is a special case of a discrete memoryless channel (DMC).
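A one-function sketch of this capacity formula (assuming numpy; the function name is mine):

```python
import numpy as np

def bsc_capacity(eps):
    """C(eps) = 1 - H(eps) in bits/channel-use; the uniform input is optimal."""
    if eps == 0.0 or eps == 1.0:
        return 1.0  # deterministic (or deterministically inverted) channel
    return 1.0 - (eps * np.log2(1 / eps) + (1 - eps) * np.log2(1 / (1 - eps)))

print(bsc_capacity(0.2))  # ~0.2781, the value used on the next slide
```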
Interpreting BSC Capacity
[Plot: BSC capacity vs. epsilon, marking C(0.2) = 0.2781 bits/channel-use.]
[Diagram: FEC Encode maps b (10,000 information bits) to c, a block of 40,000 binary digits consisting of the information (systematic) bits plus parity bits p, for a rate of r = 1/4 bits/channel-use. The block is sent over a BSC with ε = 0.2, and FEC Decode recovers b̂ from y.]

Since r = 1/4 < C(0.2), a code can be found with error probability ≈ 0.
This is an example of what capacity means: capacity is achieved (approached) through coding.
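To see why nontrivial codes are needed, here is a hypothetical Monte Carlo sketch (assuming numpy; the parameters are illustrative) of a rate-1/5 repetition code on this same BSC. Even though 1/5 < C(0.2), simple repetition with majority voting is nowhere near error-free; the theorem promises that some code of rate up to 0.2781 is, and finding practical ones is the coding problem.

```python
import numpy as np

rng = np.random.default_rng(564)
eps, n_rep, n_bits = 0.2, 5, 200_000   # rate 1/5 < C(0.2) = 0.2781

bits = rng.integers(0, 2, n_bits)
coded = np.repeat(bits, n_rep)                        # repetition encoder
received = coded ^ (rng.random(coded.size) < eps)     # BSC(0.2) bit flips
votes = received.reshape(n_bits, n_rep).sum(axis=1)   # majority-vote decoder
decisions = votes > n_rep // 2
print(np.mean(decisions != bits))  # ~0.058: far from "error probability ~ 0"
```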
Typical In-Phase/Quadrature Digital Modulation
This approach converts the waveform channel into a 2-dimensional vector channel.
[Block diagram: on the transmit side, the symbol streams x_j^I and x_j^Q are impulse-weighted, Σ_j x_j^I δ(t − jT_s) and Σ_j x_j^Q δ(t − jT_s), passed through Nyquist pulse shaping p(t) to give Σ_j x_j^I p(t − jT_s) and Σ_j x_j^Q p(t − jT_s), and mixed onto √2 cos(2πf_c t) and −√2 sin(2πf_c t). The channel adds noise n(t). On the receive side, mixing with the same carriers followed by pulse matched filtering and sampling at jT_s yields z_j^I and z_j^Q, where R_p(t) = p(t) ∗ p(−t) satisfies the Nyquist condition R_p(jT_s) = δ_K(j).]
Common I/Q Digital Modulations
Binary Phase Shift Keying (BPSK)
Quadrature Phase Shift Keying (QPSK)
8-ary PSK (8-PSK)
16-ary Quadrature Amplitude Modulation (16-QAM)
[Constellation diagrams in the I/Q plane: BPSK with points at ±√E_b on the I axis; QPSK and 8-PSK with points on a circle of radius √E_s; and 16-QAM, for which the minimum distance of the square constellation is d = \sqrt{6 E_s / (M - 1)}.]
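A small sketch (assuming numpy) that builds the square 16-QAM constellation and checks the minimum-distance expression above:

```python
import numpy as np

M = 16
levels = np.arange(-3, 4, 2)                     # [-3, -1, 1, 3] on each rail
const = (levels[:, None] + 1j * levels[None, :]).ravel()

Es = np.mean(np.abs(const) ** 2)                 # average symbol energy (10 here)
diffs = np.abs(const[:, None] - const[None, :])
d_min = diffs[diffs > 0].min()                   # smallest inter-point distance

print(d_min, np.sqrt(6 * Es / (M - 1)))          # both 2.0
```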
Trade-offs with M?
• Dimension = Time * Bandwidth
• The I/Q constellations use 2 dimensions
• As you increase M
• More bits/channel use (log2(M))
• Points get closer for fixed energy
[Two sketch plots: throughput (bits/channel use) vs. SNR (dB), and P_error (log scale) vs. SNR (dB), each with curves for M = 2, 4, 8, 16.]
These are the two types of performance plots that we use to evaluate coding and modulation schemes
Throughput vs. SNR Trade (IT)
[Sketch: information rate (bits/channel use) vs. SNR (dB); the capacity curve separates the achievable region below (e.g., via coding) from the non-achievable region above.]
Channel capacity shows up on this type of plot as regions of achievable performance
Throughput vs. SNR Trade (IT)
[Plot: bandwidth efficiency (bps/Hz) vs. Eb/No (dB) for the unconstrained AWGN channel and for the QPSK/AWGN, 8PSK/AWGN, and 16QAM/AWGN constrained capacities.]
Example channel capacity curves for AWGN channels with and without modulation constraints
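The unconstrained AWGN curve on such a plot has a closed form: for spectral efficiency η = log2(1 + SNR) with SNR = η·Eb/No, a target η requires Eb/No = (2^η − 1)/η. A short sketch (assuming numpy):

```python
import numpy as np

eta = np.linspace(0.1, 10, 200)               # spectral efficiency, bps/Hz
ebno_db = 10 * np.log10((2**eta - 1) / eta)   # minimum Eb/No for reliability

print(ebno_db[0])    # ~ -1.44 dB; as eta -> 0 this approaches the -1.59 dB limit
print(ebno_db[-1])   # ~ 20.1 dB needed for 10 bps/Hz
```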
Throughput vs. SNR Trade
[Plot: bandwidth efficiency (bits/sec/Hz) vs. Eb/No (dB), showing theoretical limits for QPSK/AWGN (BPSK/AWGN): the AWGN capacity, the QPSK/AWGN capacity, finite-block-size limits (BLER = 1e-4) for k = 65,536; 16,384; 4096; 2048; 1024; 512; 256; 128, and operating points for the (7,4) Hamming code with soft-in and hard-in decoding, a 64-state convolutional code (soft), and no coding.]
How some common coding schemes with BPSK modulation compare to capacity
Throughput vs. SNR Trade
(from Keith Chugg et al., TrellisWare Technologies, IEEE 802.11-04/0953r4 submission, September 2004, slide 28)
[Plot: bandwidth efficiency (info bits/symbol) vs. required SNR for 1% PER (dB), comparing AWGN performance with the log2(1 + SNR) bound: curves for BPSK, QPSK, 16QAM, 64QAM, and 256QAM against their respective constrained-capacity bounds; all cases use 8000 info bits.]
An example of a standards contribution based on these same concepts.
Bit-Interleaved Coded Modulation (BICM)
[Block diagram: b_i → FEC Encode → c_j → Interleave → d_j → Modulation Mapper → x_k → AWGN → z_k → Modulation De-mapper → d_j or M[d_j] → De-interleave → c_j or M[c_j] → FEC Decode → b̂_i]
Most common approach to coding & modulation used in practice
(you will simulate a BICM system with a modern code this semester)
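A minimal transmit/receive sketch of the interleaving piece (assuming numpy; the FEC blocks are stubbed out as random bits, and the Gray-mapped QPSK labeling shown is one common choice):

```python
import numpy as np

rng = np.random.default_rng(1)
coded_bits = rng.integers(0, 2, 128)        # stand-in for FEC encoder output c_j
perm = rng.permutation(coded_bits.size)     # pseudo-random interleaver

interleaved = coded_bits[perm]              # d_j
pairs = interleaved.reshape(-1, 2)          # two bits per QPSK symbol
symbols = ((1 - 2 * pairs[:, 0]) + 1j * (1 - 2 * pairs[:, 1])) / np.sqrt(2)  # x_k
print(round(np.mean(np.abs(symbols) ** 2), 6))    # 1.0: unit-energy symbols

# Receiver side: de-interleave (on hard bits here; in practice on soft metrics M[d_j])
deinterleaved = np.empty_like(interleaved)
deinterleaved[perm] = interleaved
print(np.array_equal(deinterleaved, coded_bits))  # True
```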
Summary of Channel Coding
• Forward Error Control/Correction Coding (FEC)
• As described previously — add redundancy and send across channel
• Error Detection Coding (aka CRC = Cyclic Redundancy Check)
• Detect if an error has occurred on the channel, but no correction
• Automatic Repeat Request (ARQ)
• “Hey, I did not get that, send it again!”
Typical Use of Coding in Modern Systems
Hybrid ARQ (H-ARQ) System
[Block diagram: info bits → CRC Encode → info bits + CRC parity → FEC Encode → info bits + CRC parity + FEC parity → Channel (including modulation) → FEC Decode (hard or soft decisions in) → CRC Decode (hard decisions in) → CRC pass? Y: accept; N: request a retransmit over the feedback channel.]
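A tiny sketch of the error-detection half of this system, using Python's standard-library CRC-32 (the 4-byte CRC length and frame layout here are illustrative):

```python
import zlib

def crc_append(payload: bytes) -> bytes:
    """Transmitter: append a CRC-32 so the receiver can detect channel errors."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def crc_pass(frame: bytes) -> bool:
    """Receiver: recompute the CRC over the payload and compare."""
    payload, rx_crc = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == rx_crc

frame = crc_append(b"info bits")
print(crc_pass(frame))                              # True -> accept
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]    # one corrupted byte
print(crc_pass(corrupted))                          # False -> request a retransmit
```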
Codes as Constraints on Variables
Only 2^k of the 2^n (n x 1) binary vectors are in the code
[Diagram: one constraint node connected to the n binary symbols c_0, c_1, c_2, c_3, …, c_{n−1}]
Constraint on n binary symbols
• Repetition Code (equality constraint)
• all n bits are the same
• Single Parity Check Code (SPC code)
• only patterns with even number of 1s
Example: Repetition Code
Codewords for n = 4: 0000, 1111
Number of codewords = 2, so k = 1
Encoder:
take one information bit in and output n copies of this bit
rate = 1/n (info bits per channel use)
[Tanner graph: one equality (=) constraint node connected to c_0, c_1, c_2, c_3]
Example: Single Parity Check Code
Codewords for n = 4: 0000, 0011, 1100, 1010, 0101, 1001, 0110, 1111
Number of codewords = 8, so k = 3 = n − 1
Encoder:
take n − 1 information bits in and output these plus one parity bit, which is the mod-2 sum of the input bits
rate = (n − 1)/n (info bits per channel use)
[Tanner graph: one single-parity-check (+) constraint node connected to c_0, c_1, c_2, c_3]
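A minimal encoder sketch for the SPC code (assuming numpy; the function name is mine):

```python
import numpy as np

def spc_encode(info_bits):
    """Append one parity bit (the mod-2 sum) to n-1 information bits."""
    info_bits = np.asarray(info_bits) % 2
    return np.append(info_bits, info_bits.sum() % 2)

print(spc_encode([0, 1, 1]))  # [0 1 1 0]: every codeword has an even number of 1s
```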
Example: (7,4) Hamming Code
Linear Block Code (“Multiple Parity Check Code”)
H = \begin{bmatrix} 1 & 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 \end{bmatrix}

code bits c_0, c_1, c_2, c_3, c_4, c_5, c_6

Hc = 0 (mod 2)
All three SPCs must be satisfied simultaneously
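A brute-force check of this code (assuming numpy), enumerating all 2^7 binary 7-tuples and keeping those that satisfy Hc = 0 (mod 2):

```python
import numpy as np
from itertools import product

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

codewords = [c for c in product([0, 1], repeat=7)
             if not np.any((H @ np.array(c)) % 2)]
print(len(codewords))                        # 16 = 2^4, so k = 4
print(sorted(sum(c) for c in codewords)[1])  # 3: smallest nonzero weight = d_min
```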
Example: (7,4) Hamming Code
H = \begin{bmatrix} 1 & 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 \end{bmatrix}
All local constraints must be satisfied simultaneously
[Tanner graph: variable/equality (=) nodes for c_0 … c_6 connected to three single-parity-check (+) nodes, one per row of H]
Parity Check Graph or Tanner Graph
Example: (7,4) Hamming Code
H = \begin{bmatrix} 1 & 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 \end{bmatrix}
All local constraints must be satisfied simultaneously
Parity Check Trellis Graphical Model
[Trellis graphical model: equality (=) nodes for c_0 … c_6 linked in a chain through t_0 … t_6, e.g., hidden, non-binary state variables]
Example: Low Density Parity Check (LDPC) Code
Just a very large (multiple) parity check code with mostly 0s:

H = \begin{bmatrix} 1 & 0 & \cdots & 1 & 0 & 0 \\ 0 & 0 & \cdots & 0 & 1 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 1 & 1 & \cdots & 0 & 0 & 0 \end{bmatrix}

The number of 1s in a row (e.g., the first) is the number of bits in that SPC; the number of 1s in a column (e.g., the second) is the number of SPCs that code bit is involved in.
A systematic way to build codes with very large block size
Example: Low Density Parity Check (LDPC) Code
The trick is in the decoding algorithm:
Repeatedly do “soft-in/soft-out (SISO)” decoding of each local code and exchange these soft decisions (messages, beliefs, metrics)
ITERATE until things look good!
[Diagram: an equality (=) constraint node and an SPC (+) constraint node, each connected to c_0, c_1, c_2, c_3 — the Equality Constraint SISO and the SPC SISO, with incoming and outgoing soft-decision information on each edge]

The SISO rule (message update rule) depends on the code constraint.
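As an illustration, here is a hypothetical sketch (assuming numpy) of the two update rules in the log-likelihood-ratio domain: the exact rule for the equality constraint, and the common min-sum approximation for the SPC constraint (the exact SPC rule uses a tanh product):

```python
import numpy as np

def equality_siso(llrs_in):
    """Equality node: each outgoing LLR is the sum of the other incoming LLRs."""
    llrs_in = np.asarray(llrs_in, dtype=float)
    return llrs_in.sum() - llrs_in

def spc_siso_minsum(llrs_in):
    """SPC node (min-sum): sign from the other edges' signs, magnitude from their min."""
    llrs_in = np.asarray(llrs_in, dtype=float)
    out = np.empty_like(llrs_in)
    for i in range(llrs_in.size):
        others = np.delete(llrs_in, i)
        out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

print(equality_siso([2.0, -0.5, 1.0]))    # [ 0.5  3.   1.5]
print(spc_siso_minsum([2.0, -0.5, 1.0]))  # [-0.5  1.  -0.5]
```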
Example: Low Density Parity Check (LDPC) Code
This simple construction, combined with this decoding algorithm, can approach channel capacity for large block sizes.
Summary of Modern FEC
• Construct large codes (big n) by connecting simple, local (or constituent) codes via pseudo-random permutations
• Iterative decoding
• Run SISO decoding for each local code
• Exchange soft-information between local code SISOs
Overview Topics
• Why Digital Comm? Why not analog?
• The digital comm system block diagram
• Source model and entropy
• Separation and channel capacity (mutual information)
• Modulations, Channels, Soft vs. Hard Decision Information
• Performance measures
• Overview of Coding
• More Channels
More on Channel Models
AWGN and Intersymbol Interference (ISI) Channels

[Diagrams: (top) AWGN channel: r(u,t) = x(u,t) + n(u,t), with a flat noise power level N_0/2 across frequency f; (bottom) AWGN-ISI channel: x(u,t) passes through an LTI filter h(t) to give y(u,t), then r(u,t) = y(u,t) + n(u,t); the noise level N_0/2 is still flat, but the channel gain is frequency selective across the signal band.]
Channel Models
• We will focus primarily on the AWGN channel
• Several approaches to ISI-AWGN
• Convert to many parallel, narrow, frequency channels with each having flat gain: Orthogonal Frequency Division Multiplexing (OFDM)
• Converts to many parallel AWGN channels (illustrated in the sketch after this list)
• Use a constrained receiver structure such as a linear filter to try to invert ISI effects: (Linear) Channel Equalization
• Do optimal data detection with ISI channel modeled: MAP Sequence/Symbol Detection — Viterbi for hard-out and Forward-Backward Algorithm for soft-out
• We will learn these algorithms as part of the coding material
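To preview the OFDM idea numerically, a short sketch (assuming numpy; the channel taps are illustrative): with a cyclic prefix, the linear ISI channel acts as a circular convolution, which the FFT diagonalizes into parallel one-tap (flat) channels:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                  # number of subcarriers
h = np.array([1.0, 0.5, 0.2])           # illustrative ISI channel taps
x = rng.standard_normal(N)              # one OFDM symbol in the time domain

# A cyclic prefix (omitted here) makes the channel act as circular convolution:
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N)))

# In the frequency domain, ISI becomes N parallel flat-gain channels:
X, Y, Hf = np.fft.fft(x), np.fft.fft(y), np.fft.fft(h, N)
print(np.allclose(Y, Hf * X))           # True: Y_k = H_k X_k per subcarrier
```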