Theory and Design of Digital Communication Systems by Tri T. Ha



  • Theory and Design of Digital Communication Systems

    Providing the underlying principles of digital communication and the design techniques of real-world systems, this textbook prepares senior undergraduate and graduate students for the engineering practices required in industry. Covering the core concepts, including link analysis, modulation, demodulation, spread spectrum, equalization, channel fading effects, and channel coding, it provides step-by-step mathematical derivations to aid understanding of background material. In addition to describing the basic theory, the principles of system and subsystem design are introduced, enabling students to visualize the intricate connections between subsystems and understand how each aspect of the design supports the overall goal of achieving reliable communications. Throughout the book, theories are linked to practical applications with over 250 real-world examples, whilst 370 varied homework problems in three levels of difficulty enhance and extend the text material. With this textbook, students can understand how digital communication systems operate in the real world, learn how to design subsystems, and evaluate end-to-end performance with ease and confidence.

    Tri T. Ha is a Professor in the Department of Electrical and Computer Engineering at the Naval Postgraduate School (NPS), Monterey, California, a position he has held since 1989. Prior to joining NPS he worked at Fairchild Industries and GTE, and was an Associate Professor at Virginia Tech for four years. He is an IEEE Fellow who has written two previous textbooks, and his current research interests are in wireless communications and cyber warfare.

  • Theory and Design of Digital Communication Systems

    TRI T. HA
    Naval Postgraduate School, Monterey, California

  • Cambridge University Press

    Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Dubai, Tokyo, Mexico City

    Cambridge University Press
    The Edinburgh Building, Cambridge CB2 8RU, UK

    Published in the United States of America by Cambridge University Press, New York

    www.cambridge.org
    Information on this title: www.cambridge.org/9780521761741

    © Cambridge University Press 2011

    This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

    First published 2011

    Printed in the United Kingdom at the University Press, Cambridge

    A catalog record for this publication is available from the British Library

    Library of Congress Cataloging in Publication data
    Ha, Tri T., 1949–
    Theory and design of digital communication systems / Tri T. Ha.
    p. cm.
    ISBN 978-0-521-76174-1 (hardback)
    1. Digital communications. I. Title.
    TK5103.7.H35 2011
    384 dc22

    2010024374

    ISBN 978-0-521-76174-1 Hardback

    Additional resources for this publication at www.cambridge.org/9780521761741

    Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

  • World peace must develop from inner peace. Peace is not the absence of violence. Peace is the manifestation of human compassion.
    14th Dalai Lama of Tibet (Inscription on United States Congressional Medal)

  • Contents

    Preface
    Acknowledgements
    List of symbols
    List of abbreviations

    1 Introduction
      1.1 Brief overview
      1.2 Scope
      1.3 Summary
      Bibliography

    2 Deterministic signal analysis
      Introduction
      2.1 General description of deterministic signals
        Continuous-time signals
        Discrete-time signals
        Periodic and aperiodic signals
        Analog and digital signals
      2.2 Power and energy
      2.3 Orthogonal signals
      2.4 Signal space
        Gram–Schmidt procedure
        Signal vectors
      2.5 Linear time-invariant systems
      2.6 Convolution
      2.7 Fourier series of continuous-time periodic signals
        Parseval relation for periodic signals
        Unit step and unit impulse functions
        Power spectral density
        Bandwidth
        Frequency shifting property
        Response of an LTI system to a periodic signal
      2.8 Fourier transform of continuous-time signals
        Frequency shifting property
        Parseval relation
        Fourier transform of periodic signals
        Response of LTI systems
        Tables of Fourier properties and Fourier transform pairs
      2.9 Autocorrelation
        Autocorrelation and energy spectral density
        Autocorrelation and power spectral density
        Output autocorrelation and power spectral density
      2.10 Sampling
        Sampling theorem
        Impulse-train sampling
        Reconstruction with an ideal lowpass filter
      2.11 Bandpass signals
        Representations
        Response of an LTI bandpass system
      2.12 Summary
      Problems
      Further reading
      Bibliography

    3 Random signal analysis
      Introduction
      3.1 Review of probability theory
        Total probability theorem
        Bayes theorem
        Independence
        Union bound
      3.2 Random variables
        Bayes theorem and total probability revisited
      3.3 Random processes
        Autocorrelation and autocovariance
        Types of random processes
        Power spectral density
      3.4 Gaussian process
        Linear transformation
        Sampling
        Sufficient statistics for signal processing in white Gaussian noise
        Karhunen–Loève expansion
        Whitening filter
        The central limit theorem
      3.5 Gaussian-derived processes
        Rayleigh and Rice processes
        Squared envelope
        Sum of squared Gaussian processes
        Nakagami-m density function
        Log-normal density function
      3.6 Summary
      Problems
      Further reading
      Bibliography

    4 Information theory and channel coding
      Introduction
      4.1 Entropy of a discrete source
      4.2 Source coding
        Huffman codes
        Shannon noiseless source coding theorem (first theorem)
      4.3 Discrete channel
        Mutual information
        Channel capacity
        Shannon noisy channel coding theorem (main theorem)
      4.4 Gaussian channel
        Differential entropy
        Mutual information and channel capacity
        Bandlimited Gaussian channel capacity
        Gaussian channel with discrete inputs
      4.5 Channel coding with block codes
      4.6 Low-density parity-check codes (LDPC)
        Tanner graph
        Message passing algorithm (MPA)
      4.7 Channel coding with convolutional codes
        Hard decoding: BSC
        Soft decoding: DMC
        Soft decoding: Gaussian input vector
      4.8 Summary
      Problems
      Further reading
      Bibliography

    5 Communication link analysis
      Introduction
      5.1 Basic wireless communication link
        Thermal noise
        Effective noise temperature
        Receiver noise model
        System signal-to-noise ratio
      5.2 Cellular communication link
        Frequency spectrum
        Major cellular standards
        Cell connection
        Path loss: two-ray ground reflection model
        Hata model
        Modified Hata model
        Reciprocity and the reverse link
      5.3 Co-channel interference in a narrowband cellular system
        Combined signal-to-interference and noise ratio
        Sectoring
        Microcell-zoning
      5.4 CDMA cellular link analysis
        Forward link
        Reverse link
      5.5 Satellite communication link
      5.6 Summary
      Problems
      Further reading
      Bibliography

    6 Modulation
      Introduction
      6.1 Review of double sideband-amplitude modulation (DSB-AM)
      6.2 Digital modulation
      6.3 Phase shift keying (PSK)
      6.4 Differential phase shift keying (DPSK)
      6.5 Amplitude shift keying (ASK)
      6.6 Frequency shift keying (FSK)
        Power spectral density
      6.7 Minimum shift keying (MSK)
        Power spectral density
        Modulator
      6.8 Gaussian minimum shift keying (GMSK)
      6.9 The need for M-ary modulation
      6.10 M-ary amplitude shift keying (MASK)
        Signal space
        Power spectral density
        Modulator
      6.11 M-ary phase shift keying (MPSK)
        Signal space
        Power spectral density
        Modulator
        Offset quadrature phase shift keying (OQPSK)
      6.12 Differential M-ary phase shift keying (DMPSK)
        Alternative differential encoding for DQPSK
        Direct symbol mapping
        Modulator
      6.13 π/4-shifted differential quadrature phase shift keying (π/4-DQPSK)
        Signal space
        Direct symbol mapping
      6.14 M-ary quadrature amplitude modulation (MQAM)
        Signal space
        Power spectral density
        Differential MQAM (DMQAM)
      6.15 Code shift keying (CSK)
        Power spectral density
        Modulator
      6.16 M-ary frequency shift keying (MFSK)
        Power spectral density
        Modulator
      6.17 Continuous phase modulation (CPM)
        Power spectral density
        Modulator
      6.18 Orthogonal frequency division multiplexing (OFDM)
        Practical baseband implementation
        Cyclic prefix
      6.19 Trellis coded modulation (TCM)
        Ungerboeck TCM
        Pragmatic TCM
      6.20 Summary
      Problems
      Further reading
      Bibliography

    7 Demodulation
      Introduction
      7.1 The matched filter
        Time domain interpretation
        Frequency domain interpretation
        Output signal and noise waveforms
        Decision variable
        Summary
      7.2 The correlator
      7.3 The matched filter–envelope detector (noncoherent matched filter)
        Output signal-to-noise ratio
        Decision variable
        Summary
      7.4 The quadrature correlator–square law detector (noncoherent correlator)
      7.5 The threshold detector
        Optimum threshold
        Maximum likelihood criterion
      7.6 The maximum detector
        Gaussian decision variables
        Rice and Rayleigh decision variables
      7.7 Binary demodulation
        Coherent PSK
        Coherent DPSK
        Direct detection ASK
        Coherent FSK
        Coherent MSK and precoded MSK and GMSK
        Noncoherent FSK and MSK
        Noncoherent DPSK
        Performance summary of binary modulation techniques
      7.8 Minimum Euclidean distance detector
        Symbol error probability
      7.9 M-ary maximum detector
        Gaussian decision variables
        Rice and Rayleigh decision variables
      7.10 M-ary demodulation
        Coherent L-path demodulator: a signal space approach
        Coherent M-path demodulator
        Noncoherent M-path demodulator
      7.11 Coherent MASK
      7.12 Coherent MPSK
      7.13 Coherent DMPSK
      7.14 Noncoherent DMPSK
      7.15 Noncoherent π/4-DQPSK
      7.16 Coherent MQAM and DMQAM
      7.17 Coherent CSK and MFSK
      7.18 Noncoherent CSK and MFSK
      7.19 Coherent CPM with sequence detection
      7.20 Coherent CPM with symbol-by-symbol detection
      7.21 Noncoherent CPM
      7.22 Performance summary of M-ary modulation techniques
      7.23 OFDM demodulation
        Timing synchronization
        Carrier phase synchronization
      7.24 Binary demodulation with convolutional codes
        Hard decoding
        Soft decoding: Gaussian input vector
        Soft decoding: χ² input vector
      7.25 TCM demodulation and decoding
      7.26 Summary
      Appendix 7A: The Q-function
      Problems
      Further reading
      Bibliography

    8 Spread spectrum
      Introduction
      8.1 Direct sequence modulation
        Orthogonal covering
        IS-95 forward link
        Code division multiple access (CDMA)
        IS-95 reverse link
      8.2 Direct sequence demodulation
        Quadrature orthogonal covering demodulation
        Noncoherent demodulation of DS-CSK
        CDMA: performance evaluation
      8.3 Frequency hop spread spectrum
        Partial-band jamming
        Multi-tone jamming
        Follower jamming
      8.4 Summary
      Problems
      Further reading
      Bibliography

    9 Intersymbol interference and equalization
      Introduction
      9.1 Intersymbol interference
        Nyquist criterion for zero ISI
      9.2 Optimum demodulator for bandlimited channel
        Condition for maximum SNR0
        Condition for zero ISI
        Solution for |HT(f)| and |HR(f)|
      9.3 Zero-forcing linear equalizer (ZF-LE)
        Summary
      9.4 Mean-square error linear equalizer (MSE-LE)
      9.5 Zero-forcing decision-feedback equalizer (ZF-DFE)
      9.6 Mean-square error decision-feedback equalizer (MSE-DFE)
      9.7 Maximum likelihood sequence detection
      9.8 Fractionally spaced equalizer (FSE)
      9.9 Summary
      Problems
      Further reading
      Bibliography

    10 Fading channels
      Introduction
      10.1 Physical characterization of fading channels
        Time-varying effect
        Space-varying effect
        Summary of fading characteristics
      10.2 Mathematical representation of fading channels
        Channel impulse response
        Multipath autocorrelation and Doppler profiles
        Clarke–Doppler power spectrum
        Generalized Doppler power spectrum
      10.3 Coherent demodulation
        Equivalent complex-valued demodulator
        Rayleigh
        Rice
        Nakagami-m
        Effect of channel tap error
      10.4 Pilot symbol-aided decision-feedback demodulation
        Differential and double-differential decision-feedback algorithms
      10.5 OFDM
        Low mobility
        High mobility
      10.6 Noncoherent demodulation
        Rayleigh
        Rice
        Nakagami-m
        Doppler tracking of MFSK
        Doppler tracking of CSK
      10.7 Pilot tone-aided demodulation of orthogonal covering signal
        Complex spreading and despreading
        Doppler analysis
      10.8 Noncoherent demodulation of offset quadrature DS-CSK
      10.9 Time diversity
        Level crossing rate
        Average fade duration
      10.10 Maximal ratio combining (MRC)
        Rayleigh
        Rice
        Nakagami-m
      10.11 Selection combining (SC)
      10.12 Equal gain combining (EGC)
        Coherent EGC
        Noncoherent EGC
      10.13 Frequency diversity
        Fast frequency hop (FFH)
        OFDM subcarrier combining
        Rake receiver
      10.14 Receive antenna diversity (SIMO)
        Mobile station antennas
        Base station antennas
        Performance
      10.15 Transmit antenna diversity (MISO)
        Space-time coding
        Alamouti code
        Rate space-time code
        Sufficient statistics
        Rate space-time code
      10.16 Transmit–receive antenna diversity (MIMO)
      10.17 Channel capacity
        Slow fading
        Slow fading: receive antenna diversity
        Slow fading: transmit antenna diversity
        Slow fading: transmit and receive antenna diversity
        Slow fading: OFDM
        Fast fading
      10.18 Summary
      Appendix 10A: Complex-valued demodulators
      Appendix 10B: Bit error probabilities
        Rayleigh
        Nakagami-m
        Rayleigh diversity: χ²-density function with 2L degrees of freedom
      Problems
      Further reading
      Bibliography

    Index

  • Preface

    This book was written with two goals in mind: to provide the underlying principles of digital communication and to study design techniques integrated with real-world systems. The ultimate aim of a communication system is to provide reliable transmission of information to the user(s). This fundamental foundation was established in 1948 by Claude Shannon, the founding father of information theory, and led eventually to the development of modern digital communication. Analog communication is near extinction or at the very gate of it. The full spectrum dominance of digital communication has arrived and new frontiers are being established every decade; from cellular systems to wireless LAN and MAN, the bit rates are being pushed ever higher for ubiquitous mobile applications.

    Knowing the limit of digital transmission is vital to the design of future communication systems, particularly mobile wireless systems, where both spectrum and power are precious resources, and design techniques can be used to manipulate these two main resources to fit real-world applications. No single technique can cover all the requirements of a modern communication system, which makes it necessary for students to understand the intricate web between subsystems, each designed to support others to achieve the common goal of reliable communication.

    The book contains more than 250 examples to help students achieve a firmer understanding of the subject. The problems at the end of each chapter follow closely the order of the sections. They are designed for three levels: level one covers the straightforward application of equations; level two requires patience and deep thinking; whilst level three requires some research of the literature to assist in finding a solution. A solutions manual for the instructor accompanies the book.

    The book was written for both senior undergraduate and graduate students studying communications at universities and colleges. The entire book is suitable for two-semester courses in digital communications. The first course is typically a one-semester senior course in digital communication, which may be taken by students new to studying communications (the conventional wisdom is that students should learn analog communication before learning digital communications) or after completing an introductory course in communication systems (one that is heavy in analog communication systems such as AM and FM). The second course is a one-semester course for graduate students who already have a firm background in random variables and processes. The practical material included in this book (much of it focused on commercial and military systems) will be helpful for practitioners and professionals in the digital communication field.

    As in the learning of any subject, some prerequisites are required for the reading of this book. A first course in probability theory is necessary and exposure to random processes would be helpful. Readers should also be familiar with linear system analysis. A knowledge of analog communication is helpful but not required. For readers who do not have the patience to go through all the design techniques but would appreciate the beauty of the underlying principles, we recommend our favorite book, Principles of Digital Communication, authored by the legendary Robert G. Gallager.


  • Acknowledgements

    I would like to express my thanks to Dr. Phil Meyler of Cambridge University Press for his enthusiastic support and for suggesting the title of the book to closely reflect its coverage. I would like to thank the anonymous reviewers for their valuable comments that helped in improving the book. I am grateful for the support of my friends Nathan Beltz, Professor Vicente Garcia, Professor Jeff Knorr, Professor Frank Kragh, Donna Miller, Rita Painter, Professor Clark Robertson, Professor Wei Su, Dr. Jan Tighe, and Dr. Charlie Victory. I also would like to thank my students over the years for their valuable suggestions to improve the original notes. I am grateful to my parents for providing me the opportunity to go to the United States for higher education, and to Thuy, Khanh, Tuan, Huong and Dennis, Thu, and Hien for their help and support over the years. This book is dedicated to the people who work tirelessly for world peace.


  • Symbols

    A amplitude, smallest signal amplitude in MQAM, azimuth angle

    A matrix A

    ‖A‖² squared Frobenius norm of matrix A

    A∗ conjugate and transpose of matrix A

    Ad number of paths of Hamming weight d that merge with the all-zero paths

    Ae effective aperture area of the receiver antenna

    A ∩ B, AB intersection of set A and set B, A and B

    A ∪ B union of set A and set B, A or B

    a Gaussian filter parameter

    a(hR) correction factor in Hata model

    af frequency sensitivity

    ak Fourier series coefficients

    B bandwidth

    Bd information weight (number of information bit errors) of all paths of Hamming weight d

    Bd free information weight (number of information bit errors) of all paths of Euclidean distance d free of TCM

    C channel capacity in coded bits/input symbol, correlation

    C channel capacity in coded bits/second

    Co outage capacity

    cDK correlation metric in MLSD

    cN correlation metric in CPM

    CX(t1, t2) autocovariance of the random process x(t)

    C(z) transfer function of causal ZF-LE, z-transform of the sequence c(k)

    Cov(Nj, Nk) covariance of two noise samples

    c code word, PN sequence

    ci a coded bit, a differentially coded bit

    c(t) PN function of N chips

    c̃(t) complex PN function

    cI(t) PN function of the I-channel

    cQ(t) PN function of the Q-channel

    c(x|Hi) metric for CPM demodulation

    Di ith symbol in the symbol stream


  • d distance

    d data sequence

    dB decibel

    d free free distance of a convolutional code

    d free Euclidean free distance of TCM

    di, d(i) normalized bit or symbol amplitude, di ∈ {−1, 1}, di ∈ {0, 1}

    {di} data sequence

    dmin minimum Hamming distance, minimum Euclidean distance

    d(u,v) Hamming distance between two code words

    d(x, si) Euclidean distance between two vectors

    E energy (with or without a subscript), smallest symbol energy in MQAM or MASK, electric-field wave, elevation angle

    Eb bit energy

    Ēb diversity bit energy

    E(d,t) free space E-field at distance d from transmitter and time t

    Eh hop energy

    EIRP effective isotropic radiated power

    E0 free space E-field at distance d0 from transmitter

    Es symbol energy

    Ēs diversity symbol energy

    E[X] expected value (mean value) of X

    E[X²] mean-square value of X

    e 2.718

    e error word

    e(k) error sequence of MSE-LE

    e(t) error process

    F noise figure

    F{x(t)} Fourier transform of x(t)

    F(z) minimum-phase transfer function, transfer function of a synthetic channel, z-transform of the sequence f(k)

    F(1/z) maximum-phase function

    F⁻¹{X(f)} inverse Fourier transform of X(f)

    2F1 Gauss hypergeometric function

    F⁻¹|h|²(po) SNR outage signal-to-noise ratio

    FX(x) distribution function of X

    FXY(x, y) joint distribution function of X and Y

    f|h|(x) density function of the channel tap magnitude |h|

    fXY(x, y) joint density function of X and Y

    fX|Y(x|y) conditional density function of X given Y

    f frequency, Doppler shift


  • fc carrier frequency

    fD Doppler spread

    fj instantaneous carrier frequency of FH

    fL;i instantaneous local carrier frequency of a frequency synthesizer

    fm maximum Doppler shift

    fs sampling frequency

    fX(x) density function of X

    fX(x) density function of the random vector X

    G amplifier gain, gain of a two-port network

    G generator matrix, space-time block code matrix

    G TCM asymptotic coding gain

    GDC gain of a downconverter

    GLNA gain of a low-noise amplifier

    GR receiver antenna gain

    GR/Ts antenna gain-to-noise temperature ratio of the earth station

    GS/Tsat antenna gain-to-noise temperature ratio of the satellite

    GT transmitter antenna gain

    g parity

    g(t) pulse

    gn(t) orthonormal pulse shapes in OFDM

    g(x) code generator polynomial

    H Hadamard matrix (with or without a subscript), parity check matrix, channel tap matrix

    H source entropy in bits/second

    Hi hypothesis

    H(f ) transfer function or frequency response

    HFE(f) transfer function of the front-end filter of the receiver

    |H(f)| magnitude response (amplitude response)

    HT(f) transfer function of the transmit filter

    HR(f) transfer function of the receive filter

    H(k) N-point DFT of the sequence h(n)

    H(X) entropy of the discrete random variable (discrete source) X

    h Planck constant, digital modulation index, complex channel tap

    h row vector of a Hadamard matrix

    ĥ MMSE of the vector h

    hi, ĥi ith channel tap

    h̃i complex channel tap

    hL(t) complex envelope of the impulse response h(t)

    h(n) sequence used in OFDM

    hR receive antenna height


  • hT transmit antenna height

    h(t) impulse response

    h(X) differential entropy of a continuous random variable X

    h(X|Y) conditional differential entropy of a continuous random variable X

    h(X) differential entropy of a continuous n-dimensional random vector X

    h(x) monic binary irreducible primitive polynomial, PN code polynomial

    I interference power, photodiode current

    Im interchannel interference

    I MUI variable

    I(X) self-information of the discrete random variable (discrete source) X

    I(ui; vj) pair-wise mutual information

    I(U; V) mutual information

    I0 modified Bessel function of the first kind of zero order

    In modified Bessel function of the first kind of nth order

    J jamming variable

    J0, J0 jamming spectral density, Bessel function of zero order

    K number of simultaneous users in CDMA, Kelvin

    K covariance matrix

    k integer, Boltzmann constant, number of information bits in a block code, number of inputs of a convolutional encoder

    k0 free space wave number

    k/n code rate

    LC path loss

    L diversity order, loss of a two-port network

    L(f ) transfer function of an equalizer

    Lr receive antenna diversity

    Ls number of symbol times for m transmitted symbols in transmit diversity

    LTL loss of a transmission line

    Lt transmit antenna diversity

    LdB mean value in decibels of the log-normal density variable

    L(λ; P1, . . . , Pn) Lagrangian

    l length of a code word

    l average length of a code word

    ln natural logarithm

    ln Λ(X|y) conditional ln-likelihood ratio

    log base-10 logarithm

    log2 base-2 logarithm

    M number of distinct M-ary symbols

    M(r | c) path metric in Viterbi algorithm or log-likelihood function

    m mean value of a random variable, Nakagami-m parameter


    mA(1/X) arithmetic mean of 1/X(f)

    mG(X) geometric mean of X(f)

    m(n) metric for OFDM timing synchronization

    m(t) message signal

    mlij message sent by the bit node i to check node j at the lth iteration

    mlji message sent by the check node j to bit node i at the lth iteration

    m2l sum of the squares of the means

    mX(t) mean value of the random process x(t)

    N noise variable, available noise power, number of OFDM subcarriers, period of a PN sequence

    N noise vector, complex noise

    N noise variable at detector inputN noise variable

    N complex noise variable, complex noise vector

    N(f) power spectral density of the equivalent lowpass noise n(t)

    ND number of branches in a frequency bin determinator

    NH number of hop bins

    Ni system noise power

    Nn average number of nearest neighbors of a signal vector

    N(0, σ²) Gaussian random variable with zero mean and variance σ²

    N(0, σ²) Gaussian vector with iid components of zero mean and variance σ²

    Nk noise sample

    N0/2 power spectral density of noise

    NV average number of level crossings

    n code word length, path loss exponent

    nI ; NI in-phase noise variables

    nQ; NQ quadrature noise variables

    (n,k) block code of k information bits and code word length n

    n(t) noise

    nL(t) complex envelope of bandpass noise

    n0(t) output noise of a matched filter

    P power

    Pc;h probability of correctly identifying the frequency bin

    Pd pair-wise error probability Pr(c → c′)

    Pe error probability (bit, symbol, code word)

    P(f ) energy spectrum, Fourier transform of pulse shape p(t)

    Pj power of a jamming signal

    Pp peak power

    Pr(A) probability of A

    Pr(A,B) joint probability of A and B


  • Pr(A|B) conditional probability of A given B

    Pr(c → c′) pair-wise error probability

    Pr(c | r) a posteriori probability

    Pr(r | c) likelihood of the transmitted code vector c

    PT transmit power

    P(z) linear predictor in MSE-DFE

    p crossover probability of a BSC, probability of a binary symbol

    pout outage probability

    pUV(ui, vj) joint distribution of (ui, vj)

    p(vj|ui) transition probability of a discrete channel

    pX(xi) distribution of the discrete random variable X, i = 1, 2, . . . , n

    p(t) pulse shape

    Q(a, b) Marcum Q-function

    Q(x) Gaussian integral Q-function of argument x

    Q(z) transfer function of the composite channel in suboptimum MSE-LE

    q optimum number of jamming tones

    R resistance, Rayleigh random variable, source rate in symbols/second

    Rb bit rate

    Rc chip rate

    R responsivity

    Re Earth's radius (6378 km)

    Rh(τ, Δt) multipath autocorrelation profile

    Rhi(Δt) ith path autocorrelation

    Ri−j autocorrelation of the data sequence {di}

    Rs symbol rate

    R(t) envelope of a bandpass process

    RV level crossing rate

    Rw Walsh chip rate

    Rx(τ) autocorrelation of the WSS random process x(t)

    RX(t1, t2) autocorrelation of the random process x(t)

    r value assumed by a random variable R, code rate, spectral efficiency

    r received word

    r2 signal-to-intertone interference ratio

    re extinction ratio

    S input variable of a Gaussian channel

    S input vector of a Gaussian channel

    S sample spaceS(f ) power spectral density (with or without a subscript)

    Sh; f 0 multipath Doppler profileShif 0 ith path Doppler power spectrum


  • s syndrome vector, orthogonal covering symbol

    sgn(x) signum function

    sign(x) sign of x

    s(t) digital signal

    {si(t)} set of M digital signals, i = 1, 2, . . . , M

    {si} set of M signal vectors in the signal space, i = 1, 2, . . . , M

    ‖si‖ norm of the signal vector si

    sik coefficients of the Gram–Schmidt orthogonal expansion, IQ values of a two-dimensional signal vector

    sL(n) time samples of an OFDM signal

    sL(t) complex envelope of a bandpass signal s(t)

    s0(t) output signal of a matched filter

    sp pilot symbol

    T time interval, period, sampling period

    T phase error rotation matrix

    TA antenna noise temperature

    Tb bit time

    Tc chip time, channel coherence time

    Td time delay, multipath delay spread

    TDC effective noise temperature of a downconverter

    TDM effective noise temperature of a demodulator

    Te effective noise temperature

    Th hop time

    TLNA effective noise temperature of a low-noise amplifier

    Tn physical temperature of the resistor

    T0 reference temperature, time interval, period

    Tp pulse width

    Tr(A) trace of matrix A

    Ts symbol time, system noise temperature

    t time, error-correcting capability of a block code

    U set of M input symbols of a discrete channel

    A1 ∪ A2 ∪ . . . ∪ AM union of the sets Ai

    u(t) unit step function

    u message vector

    V voltage, set of Q output symbols of a discrete channel

    Var(X) variance of X

    Vrms root mean-square voltage

    VT threshold voltage

    v radial velocity

    v(t) voltage signal


  • W bandwidth, watt

    {wn(t)} set of Walsh functions, n = 1, 2, . . . , M

    X random variable, discrete source, output variable of a Gaussian channel or a matched filter

    X random vector, output vector of a Gaussian channel

    X̄ mean value (expected value) of X

    X̄² mean-square value of X

    X(ej2πf′) discrete-time Fourier transform of the sequence x(k)

    |X(f)|² energy spectral density of the energy signal x(t)

    X(f) Fourier transform of x(t), folded spectrum

    X(k) N-point DFT of the sequence x(n)

    X n nth extension of the discrete source X

    XT(f) Fourier transform of xT(t), the 2T-truncation of x(t), −T ≤ t ≤ T

    X(z) z-transform of the sequence x(k), transfer function of the composite channel in optimum MSE-LE

    x value assumed by a random variable X

    x value assumed by a random vector X

    ⌊x⌋ integer part of x

    x(n) discrete-time signal, sequence used in OFDM

    x(t) continuous-time signal (with or without a subscript)

    xI(t) in-phase component of the bandpass signal x(t)

    {xk(t)} set of L orthonormal basis functions, k = 1, 2, . . . , L

    {xk} set of L orthonormal basis vectors, k = 1, 2, . . . , L

    xL(t) complex envelope (equivalent lowpass signal) of the bandpass signal x(t)

    xp(t) periodic signal

    xQ(t) quadrature component of the bandpass signal x(t)

    xs(t) sampled function

    xT(t) 2T-truncation of x(t), −T ≤ t ≤ T

    Y(k) N-point DFT of the sequence y(n)

    y(n) sequence

    y(t) continuous-time function

    Z pre-mapped vector at the input of the combiner

    Z(k) frequency samples of an OFDM signal (the IQ values of symbols of

    OFDM subcarriers)

    Z0 amplifier transimpedance

    z_k(t) complex envelope of the kth OFDM subcarrier

    ∗ linear convolution

    ⊛ circular convolution

    ( )* complex conjugate

    arbitrary constant


    α_n normalized signal amplitude in MQAM

    α_p complex Doppler factor

    |α_p| Doppler loss factor

    β proportionality constant, roll-off factor of a raised-cosine filter

    γ threshold

    γ_{k,m} complex Doppler loss factor

    γ_n MSK data stream

    Γ ground reflection coefficient, gamma function

    ΔF frequency offset in OFDM

    Δf peak frequency deviation

    "l differential Doppler phase error

    "l 1 post-estimated differential Doppler phase error

    "l "l 1 double-differential Doppler phase error jamming pulse duty cycle, fraction of FH bandwidth being jammed,

    fraction of a hop being jammed

    δ_{ij} 0 for i ≠ j and 1 for i = j

    δ(t) unit impulse function

    ε phase error

    "l Doppler phase error

    θ phase

    θ_k azimuth angle of the kth wave

    θ_L Earth station longitude

    θ_l Earth station latitude

    θ_S GEO satellite longitude

    θ(t) phase function

    λ wavelength, Lagrange multiplier

    μ conditional mean value

    Λ(X|y) conditional likelihood ratio

    Π product, fractional coverage area

    ρ spatial correlation coefficient

    ρ_X(t) normalized autocovariance of the random process x(t)

    σ² variance of noise

    σ_dB standard deviation of the log-normal density variable in decibels

    σ_X² variance of the random variable X

    σ_s² power of the diffuse paths

    τ time delay variable, average fade duration

    τ_i(t) path delay

    θ_n phase state in CPM

    φ_k polar angle of the kth wave

    φ impossible event, null set, phase in MFSK and CPM, phase shift


    Φ_0(f) power spectral density of the equalizer output noise

    {φ_k(t)} set of orthonormal eigenfunctions of the noise autocorrelation

    χ voice activity factor or data duty cycle

    χ² chi-square

    ψ angle of mobile direction with respect to the x-axis, Doppler phase error

    Ψ(f) power spectral density of sampled noise

    Ψ_0(f) power spectral density of output noise of ZF-LE

    ψ(t) phase of a bandpass process

    Ω mean-square value of the envelope of the Nakagami-m process


    Abbreviations

    2G second generation

    3G third generation

    A/D analog/digital conversion

    AGN additive Gaussian noise

    AMPS advanced mobile phone system

    APD avalanche photodiode

    ASK amplitude shift keying

    AWGN additive white Gaussian noise

    BCH Bose-Chaudhuri-Hocquenghem code

    BEC binary erasure channel

    BPA belief propagation algorithm

    BSC binary symmetric channel

    CDM code division multiplexing

    CDMA code division multiple access

    CDMA 2000 3G CDMA

    CP cyclic prefix

    CPM continuous phase modulation

    CP-MFSK continuous phase M-ary frequency shift keying

    CRC cyclic redundancy check

    CSI channel side information

    CSIR channel side information at the receiver

    CSK code shift keying

    D/A digital/analog conversion

    DD-DF double-differential decision-feedback algorithm

    D-DF differential decision-feedback algorithm

    DEMUX demultiplexer

    DFS decision-feedback selection

    DFT discrete Fourier transform

    DMC discrete memoryless channel

    DMPSK differential M-ary phase shift keying

    DMQAM differential M-ary quadrature amplitude modulation

    DPSK differential phase shift keying

    DQPSK differential quadrature phase shift keying


    DSB-AM double sideband amplitude modulation

    DS direct sequence

    DS-CSK direct sequence code shift keying

    DS-PSK direct sequence phase shift keying

    DS-SS direct sequence spread spectrum

    DTFT discrete-time Fourier transform

    EGC equal gain combining

    EIRP effective isotropic radiated power

    ESN electronic serial number

    ETACS extended total access cellular system

    FCC Federal Communications Commission

    FDM frequency division multiplexing

    FDMA frequency division multiple access

    FFH fast frequency hop

    FFT fast Fourier transform

    FH frequency hop

    FIR finite impulse response

    FM frequency modulation

    FSE fractionally spaced equalizer

    FSK frequency shift keying (binary frequency shift keying)

    fT frequency-time product

    GEO geostationary orbit

    GMSK Gaussian minimum shift keying

    GPS global positioning system

    GSM global system for mobile communication

    |h|²SNR instantaneous SNR

    ICI intercarrier interference

    ICI interchannel interference

    IDFT inverse discrete Fourier transform

    IEEE Institute of Electrical and Electronics Engineers

    IFFT inverse fast Fourier transform

    iid independent and identically distributed

    IIR infinite impulse response

    IPI intrapath interference

    IS interim standard

    ISI intersymbol interference

    ISI intersample interference

    JDC Japanese digital cellular system

    JTACS Japanese total access communication system

    LDPC low-density parity-check code


    LFSR linear feedback shift register

    LLR log-likelihood ratio

    LR likelihood ratio

    L-REC rectangular pulse of duration L symbols

    L-RC raised cosine pulse shape of duration L symbols

    LSB lower sideband

    LTI linear time-invariant

    MAP maximum a posteriori

    MASK M-ary amplitude shift keying

    MFSK M-ary frequency shift keying

    MIMO multiple-input multiple-output

    MIN mobile identification number

    MIP multipath intensity profile

    MISO multiple-input single-output

    ML maximum likelihood

    MLSD maximum likelihood sequence detection

    MMSE minimum mean-square error

    MPA message passing algorithm

    MPSK M-ary phase shift keying

    MQAM M-ary quadrature amplitude modulation

    MRC maximal ratio combining

    MSC mobile switching center

    MSE-DFE mean-square error decision-feedback equalizer

    MSE-LE mean-square error linear equalizer

    MSK minimum shift keying

    MUI multi-user interference

    MUX multiplexer

    NAMPS narrowband advanced mobile phone system

    NRZ non-return-to-zero

    NTACS narrowband total access communication systems

    OFDM orthogonal frequency division multiplexing

    OOK onoff keying

    OQPSK offset quadrature phase shift keying

    PCS personal communication system

    PD pin photodiode

    PDC Pacific (or personal) digital cellular system

    PDF probability distribution function

    pdf probability density function

    π/4-DQPSK π/4 shift differential quadrature phase shift keying

    PLL phase-locked loop


    PN pseudo-noise

    PSK phase shift keying (binary phase shift keying)

    PSTN public switched telephone network

    QPSK quadrature phase shift keying

    RS Reed-Solomon code

    SC selection combining

    SCM station class mark

    SFH slow frequency hop

    SIMO single-input multiple-output

    SINR signal-to-interference and noise ratio

    SINR path signal-to-interference-and-noise ratio

    SINR0 output signal-to-interference plus noise ratio

    SIR signal-to-interference ratio

    SIRi input signal-to-interference ratio

    SIR0 output signal-to-interference ratio

    SJNR0 output signal-to-jamming-plus-noise ratio

    SJRi input signal-to-jamming ratio

    SJRi; p input signal-to-pulse jamming ratio

    SJR0 output signal-to-jamming ratio

    SJR0; p output signal-to-pulse jamming ratio

    SNR signal-to-noise ratio

    SNR diversity symbol signal-to-noise ratio

    SNR0 output signal-to-noise ratio

    SPA sum product algorithm

    TCM trellis coded modulation

    TDMA time division multiple access

    TIA/EIA Telecommunication Industry Association/Electronic Industry Association

    USB upper sideband

    USDC US digital cellular

    VCO voltage-controlled oscillator

    WCDMA wideband CDMA (3G CDMA)

    WLAN wireless local area network

    WMAN wireless metropolitan area network

    WSCS wide-sense cyclostationary

    WSS wide-sense stationary

    ZF-DFE zero-forcing decision-feedback equalizer

    ZF-LE zero-forcing linear equalizer


    1 Introduction

    If the mind is in tranquility, time and space cease to exist.
    (Essence of Buddhism)

    1.1 Brief overview

    This book provides the principles of digital communication and studies techniques to design and analyze digital communication systems for point-to-point and point-to-multipoint transmission and reception. Other than for radio broadcasting, modern communication systems are going digital, and in the USA the conversion of analog TV broadcasting into digital HDTV broadcasting at the beginning of 2009 signified the coming end of analog communications. Communications between living beings began with the voice, and the three biggest voice systems in the world are the telephone, and the cellular and radio broadcasting systems.

    The dissemination of visual activities then propelled the development of TV broadcasting systems. The pioneer telephone network and radio broadcasting systems employed analog communication techniques, such as AM and FM, for transmission of analog voice, as did the analog TV broadcasting systems, which employed VSB-AM for picture transmission. The quality of the message, such as voice and images, at the analog receiver depends on how well the waveform that carries the message over the physical channel (twisted-pair telephone wires, coaxial and fiber-optic cables, space, and water) can be reproduced. In addition, the fidelity of the received message depends on the signal-to-noise ratio at the receiver input. For good analog communications, the signal-to-noise ratio must be large, and this requires high-power transmitters, such as are used in AM radio and TV broadcasting. FM radio broadcasting requires a large frequency spectrum, such as 200 kHz per station, which shows that analog communications do not utilize power and bandwidth efficiently. Furthermore, the advent of the Internet requires audio, video, imagery, and text messages to be integrated for transmission over a common channel, and this in effect rules out analog communications such as AM and FM.

    In analog communications, the message signal requires an infinite set of continuous-time waveforms for transmission over a physical channel. This is because the message itself, such as audio or video, must first be converted into a voltage baseband waveform with a continuous range in amplitude that has countless possible values. When the baseband voltage waveform is used to modulate an RF carrier for transmission, such as in AM or FM, the modulated RF signal transmitted over the physical channel also has countless possible values in both its amplitude and frequency ranges. The only way to recover the message signal is to faithfully reproduce the baseband waveform from the modulated signal. This can be done easily in the case of no noise and no equipment imperfections, but otherwise the fidelity of the message signal may be reduced. Digital communication does not involve the faithful reproduction of the baseband waveform in the presence of noise and equipment imperfections. Digital communication operates instead with a finite set of continuous-time modulation waveforms for transmission over a physical channel. This implies that the message signal must be represented by a finite set of voltage baseband waveforms. Mathematically, a finite set of waveforms can only represent a finite set of alphabets, commonly referred to as symbols. A symbol consists of a fixed number of binary digits or bits. For example, the set of four distinct symbols {00, 01, 10, 11} can be represented by four distinct waveforms {±A cos 2πf_c t, ±A sin 2πf_c t}. The time separation of consecutive waveforms that represent a symbol stream is called the symbol time, which is the inverse of the symbol rate. If the waveforms are of finite duration then this duration is the symbol time. This begs the question of how to obtain the bits or symbols that represent the message. The process of converting a voltage baseband waveform that represents an audio or video message into bits is referred to as analog-to-digital conversion (A/D). Text messages generated by computers are inherently in bits, so with A/D conversion, audio, video, text, and imagery can all be integrated into a single digital stream of bits. The process of A/D, bit-symbol mapping, baseband waveform shaping, and modulation is referred to as digital transmission. The process of demodulating the modulated signal, detecting the symbol, symbol-bit mapping, and digital-to-analog conversion (D/A) is called digital reception.
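The bit-to-waveform mapping described above can be sketched in a few lines of code. This is a minimal illustration, not a complete modulator: the four-symbol alphabet {00, 01, 10, 11} follows the text, while the amplitude, carrier frequency, and sampling grid are arbitrary choices, and each waveform simply restarts at t = 0 every symbol time.

```python
import math

A, fc = 1.0, 4.0          # illustrative amplitude and carrier frequency (Hz)
Ts = 1.0                  # symbol time: the inverse of the symbol rate

# Each 2-bit symbol selects one of four distinct waveforms
# {±A cos 2πfc·t, ±A sin 2πfc·t}.
WAVEFORMS = {
    "00": lambda t:  A * math.cos(2 * math.pi * fc * t),
    "01": lambda t: -A * math.cos(2 * math.pi * fc * t),
    "10": lambda t:  A * math.sin(2 * math.pi * fc * t),
    "11": lambda t: -A * math.sin(2 * math.pi * fc * t),
}

def modulate(bits, samples_per_symbol=32):
    """Map a bit string to a sampled waveform, one symbol per Ts."""
    out = []
    for i in range(0, len(bits), 2):
        w = WAVEFORMS[bits[i:i + 2]]
        out += [w(n * Ts / samples_per_symbol)
                for n in range(samples_per_symbol)]
    return out

sig = modulate("0011")    # two symbols -> 64 samples
```

A real transmitter would of course shape the baseband pulses and keep carrier phase continuity; the dictionary lookup is only meant to make the "finite set of waveforms" idea concrete.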

    Digital communication makes no attempt to reproduce the finite set of voltage baseband waveforms. Instead, the receiver detects the energy content of each baseband waveform in the presence of noise and equipment imperfections, and then makes a best estimate of which transmitted symbol was received. If the signal-to-noise ratio per symbol is reasonably large, a symbol will most likely be detected correctly with high probability. If not, a symbol error may occur. This is the essence of digital communication. For a given signal-to-noise ratio, an analog communication receiver attempts to reproduce the voltage baseband waveform with certain subjective fidelity. On the other hand, for a given signal-to-noise ratio per symbol, a digital communication receiver produces symbols with a quantitative error rate. It is important to know in advance the lower bound of the signal-to-noise ratio per symbol for a specified error rate irrespective of the type and size of the set of modulation waveforms. In 1948 Claude Shannon established this lower bound and also provided the channel capacity for reliable transmission [1]. Shannon's work gives the designers of digital communication systems the freedom to choose the set of modulation waveforms that achieve either the best power or bandwidth efficiency, or a trade-off combination of both. As long as the transmission rate is below the channel capacity and the signal-to-noise ratio per symbol is above the Shannon limit, reliable communication is possible with an arbitrarily small error rate. Guided by the Shannon channel capacity theorem (main theorem), the designer can further integrate error-correction codes with modulation techniques to lower the signal-to-noise ratio per symbol required to achieve a specified error rate. The first error-correction code, the Hamming code, was discovered by Richard W. Hamming in 1950, two years after Shannon published his landmark work [2]. In addition to the main theorem, the Shannon first theorem provided the framework for encoding a discrete source of a finite set of symbols to minimize the bit rate at the source encoder output. This allows the compression of the A/D samples of the message signal to remove redundancy and any insignificant information not perceptible by the human eye or ear.
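The two Shannon quantities mentioned above are easy to compute. For a bandlimited AWGN channel the capacity is C = B log2(1 + S/N), and as the bandwidth grows without bound the required Eb/N0 approaches ln 2, about -1.59 dB, the Shannon limit. A small sketch (the 1 MHz bandwidth and SNR of 15 are arbitrary example values):

```python
import math

def capacity_bps(bandwidth_hz, snr_linear):
    """Shannon capacity of a bandlimited AWGN channel: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Infinite-bandwidth limit: the minimum Eb/N0 for reliable communication
# approaches ln 2, i.e. about -1.59 dB.
shannon_limit_db = 10 * math.log10(math.log(2))

c = capacity_bps(1e6, 15.0)   # 1 MHz channel at S/N = 15 -> 4 Mbit/s
```

Any rate below `c`, with Eb/N0 above the limit, can in principle be made arbitrarily reliable by coding; the formula says nothing about how complex that code must be.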

    The most common compression algorithms in use today are MP3 for music, JPEG for pictures, and MPEG for video. Figure 1.1 shows the conceptual block diagram of a digital communication system. The material in the book is organized to cover the transmitter, receiver, and channel.

    1.2 Scope

    Chapter 2 provides a general study of deterministic signals that can be analyzed with the Fourier transform and Fourier series. Simple classification of signals and the concepts of power and energy are reviewed. One important class of signals, namely orthogonal signals, such as the Walsh functions employed in IS-95, CDMA 2000, and WCDMA, is discussed in detail. The majority of continuous-time and finite-energy signals in practice can be conveniently analyzed via their signal spaces. These signal spaces are displays of the signal vectors in their respective constellations. The signal vectors, which can be viewed as the A/D versions of a signal set, contain all the information about the signal set. Practical communication systems are inherently linear time-invariant systems operating in the small-signal range. They can be analyzed by Fourier series and Fourier transform to provide a frequency-domain snapshot of the signal bandwidth. The concept of autocorrelation and its relationship with energy or power spectral density is discussed for linear time-invariant systems. The sampling theorem that governs the A/D conversion of an analog signal and the Nyquist-Shannon interpolation for reconstruction of the analog signal are presented. Finally, the representations of a bandpass signal, that is, the signal sent over a physical channel, are discussed. The complex envelope (equivalent lowpass signal) of a bandpass signal that can be employed to simplify the analysis of a communication system is also included.

    [Figure 1.1 Conceptual block diagram of a digital communication system. Transmit path: analog message → A/D and compression → encoding → bit-symbol mapping → baseband waveform shaping → modulation → channel. Receive path: demodulation and signal processing → equalization and detection → symbol-bit mapping → decoding and decompression → D/A → analog message.]
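The Walsh functions mentioned above can be generated, in their sampled form, from the Hadamard matrix recursion: rows of an N x N Hadamard matrix (N a power of 2) are mutually orthogonal ±1 sequences. A minimal sketch verifying the orthogonality property:

```python
def hadamard(n):
    """Build the n x n Hadamard matrix (n a power of 2) by the recursion
    H_2n = [[H_n, H_n], [H_n, -H_n]]; rows are sampled Walsh sequences."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def dot(a, b):
    """Inner product of two equal-length sequences."""
    return sum(x * y for x, y in zip(a, b))

H8 = hadamard(8)
# Distinct rows have inner product 0; each row has energy 8.
```

This is why length-N orthogonal covering can multiplex N channels: despreading with one row nulls the contribution of every other row.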

    Chapter 3 studies random signals and their statistics. Although a finite set of deterministic signals is employed to represent a finite set of information symbols, the transmitted symbols are truly random, with each one in the set occurring with a fixed probability. Therefore, the infinite series of signals transmitted over the channel is indeed a random process with finite power. The study of random processes allows the establishment of the Fourier transform relationship between the autocorrelation of the random process and its power spectral density via the Einstein-Wiener-Khinchine theorem. The emphasis here is on cyclostationary processes, which encompass all digitally modulated signals. The Gaussian process that is used to represent channel noise is discussed in sufficient detail. Sampling of a bandlimited white Gaussian process, sufficient statistics for white Gaussian samples, the Karhunen-Loeve theorem, and the whitening filter are studied. To study the performance of wireless communication via a fading channel we look at a variety of processes derived from the Gaussian process, such as the Rayleigh, Rice, Nakagami-m, χ², and log-normal processes.
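The derivation of fading envelopes from the Gaussian process can be previewed numerically: the envelope of two independent zero-mean Gaussian quadratures is Rayleigh distributed, and adding a deterministic line-of-sight component to one quadrature yields the Rice distribution. A Monte Carlo sketch (the unit variance and sample count are arbitrary choices):

```python
import math, random

random.seed(1)

def envelope(sigma=1.0, los=0.0, n=50_000):
    """Envelope sqrt((los + X)^2 + Y^2) with X, Y ~ N(0, sigma^2):
    Rayleigh when los = 0, Rice otherwise."""
    return [math.hypot(los + random.gauss(0, sigma), random.gauss(0, sigma))
            for _ in range(n)]

ray = envelope()                     # Rayleigh samples
mean_ray = sum(ray) / len(ray)       # theory: sigma * sqrt(pi/2) ~ 1.2533
```

The same recipe with `los > 0` shifts the density toward the Rice shape, which is why Rice fading models channels with a dominant specular path.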

    Chapter 4 provides a general study of information theory developed by Shannon, and addresses both source and channel coding. The concepts of source entropy and prefix codes are discussed. Shannon-Fano and Huffman prefix codes are used as examples. The Shannon first theorem is presented with a proof, and the concept of mutual information is presented together with the Shannon main theorem for a discrete channel. The concept of differential entropy for a Gaussian channel is introduced, and leads to mutual information and Shannon channel capacity. Vector Gaussian channels and the water-filling strategy are presented to highlight the concept of channel coding, which is also interpreted via the sphere packing bound. The channel capacity of a bandlimited Gaussian channel and the channel capacity of a Gaussian channel with discrete inputs are derived. The latter provides a snapshot of how efficiently digitally modulated signals perform as compared to the Shannon capacity. Channel coding can be done with error-correction codes such as block codes and convolutional codes. Performance of coded digital signals is presented for both block codes and convolutional codes, with the emphasis on low-density parity-check (LDPC) codes and convolutional codes. The decoding of LDPC codes is implemented via the message passing algorithm. The decoding of convolutional codes is carried out via the Viterbi algorithm, which includes hard decoding, and quantized or unquantized soft decoding.
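The source-coding side of this chapter can be previewed with a small sketch: the entropy H = -Σ p log2 p lower-bounds the average codeword length of any prefix code, and Huffman's algorithm meets the bound exactly for the dyadic distribution assumed below (the four-symbol source is an invented example, not one from the text).

```python
import heapq, math

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}   # assumed source

entropy = -sum(p * math.log2(p) for p in probs.values())  # 1.75 bits/symbol

def huffman(probs):
    """Return a prefix code {symbol: bitstring} via Huffman's algorithm."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, i, merged))
        i += 1
    return heap[0][2]

code = huffman(probs)
avg_len = sum(probs[s] * len(w) for s, w in code.items())
```

For this source `avg_len` equals the entropy, 1.75 bits/symbol; for non-dyadic probabilities Huffman stays within one bit of the bound, as the Shannon first theorem guarantees.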

    Chapter 5 examines methods for establishing a communication link between the transmitter and receiver, commonly referred to as link analysis. The link budget involves the allocation of power to the transmitter and noise temperature (or noise figure) to the receiver so that a signal-to-noise ratio is established at the receiver to match a specified error rate range. Given the transmitter power and the channel attenuation, the power of the received signal can then be established. The channel attenuation is unique for each physical medium. The chapter begins with the concept of the noise temperature of a two-port network, which leads to the concept of the system noise temperature of a cascade of two-port networks modeling a receiver. The system noise temperature allows the evaluation of noise power in the receiver bandwidth, and hence the system signal-to-noise ratio. The physical channels investigated in this chapter are the cellular and satellite channels. For the cellular channel, we adopt the well-known Hata model to estimate the median path loss between the transmitter and receiver. The presence of co-channel interference between cells is also taken into account. Both narrowband cellular systems (IS-136, GSM) and wideband CDMA cellular systems (IS-95, CDMA 2000, WCDMA) are covered. For a satellite channel, the communication link is a point-to-point link, consisting of up- and downlinks. The Friis formula for free-space attenuation is employed to establish the uplink or downlink attenuation.
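The cascade idea behind the system noise temperature can be sketched directly: for stages with noise temperatures T1, T2, ... and gains G1, G2, ..., T_sys = T1 + T2/G1 + T3/(G1 G2) + ..., and the noise power in bandwidth B is N = k T_sys B. The stage values below (a 35 K LNA with 20 dB gain ahead of a 500 K mixer with 10 dB gain) are illustrative, not from the text.

```python
BOLTZMANN = 1.380649e-23   # J/K

def system_noise_temp(stages):
    """System noise temperature of a cascade of two-port networks.
    stages: list of (noise_temp_K, gain_linear) in cascade order."""
    t_sys, g_prod = 0.0, 1.0
    for t, g in stages:
        t_sys += t / g_prod    # later stages are divided by the gain ahead of them
        g_prod *= g
    return t_sys

t_sys = system_noise_temp([(35.0, 100.0), (500.0, 10.0)])   # 35 + 500/100 = 40 K
noise_power = BOLTZMANN * t_sys * 1e6   # noise power in a 1 MHz bandwidth (W)
```

The division by the accumulated gain is why a low-noise, high-gain first stage dominates the receiver's noise performance.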

    Chapter 6 presents modulation techniques for transmitting information over the physical channel. The chapter essentially has two parts, namely binary modulation and M-ary modulation. The structure of each modulation technique is studied via the signal waveform, the power spectral density, and the modulator. For binary modulation we investigate phase shift keying (PSK), differential phase shift keying (DPSK), amplitude shift keying (ASK) (commonly referred to as intensity-modulated on-off keying (OOK), a technique used in fiber-optic communication), frequency shift keying (FSK), minimum shift keying (MSK), and Gaussian MSK employed by the GSM cellular standard. Many practical applications require either higher spectral efficiency or higher power efficiency than binary modulation techniques can provide; M-ary modulation can accommodate both. The second part of this chapter covers M-ary amplitude shift keying (MASK), M-ary phase shift keying (MPSK), offset quadrature phase shift keying (OQPSK), differential M-ary phase shift keying (DMPSK), π/4 shifted differential quadrature phase shift keying (π/4-DQPSK), M-ary quadrature amplitude modulation (MQAM), code shift keying (CSK), M-ary frequency shift keying (MFSK), and continuous phase modulation (CPM). The chapter continues with a treatment of the dominant multiplexing-modulation technique, namely orthogonal frequency division multiplexing (OFDM), which is used in many wireless standards. The chapter ends with a look at trellis coded modulation (TCM) for bandlimited channels. Both Ungerboeck and pragmatic TCM are investigated.
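The core OFDM operation, transforming subcarrier IQ values into a time-domain symbol and prepending a cyclic prefix, can be sketched with a naive inverse DFT (a practical modulator uses the IFFT; the four QPSK values and prefix length below are hypothetical):

```python
import cmath

def ofdm_symbol(iq_values, cp_len):
    """Build one OFDM symbol: N-point inverse DFT of the subcarrier IQ
    values, then prepend the last cp_len samples as a cyclic prefix."""
    n = len(iq_values)
    time = [sum(z * cmath.exp(2j * cmath.pi * k * m / n)
                for k, z in enumerate(iq_values)) / n
            for m in range(n)]
    return time[-cp_len:] + time

# Four subcarriers carrying QPSK IQ values (illustrative):
sym = ofdm_symbol([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], cp_len=1)  # 5 samples
```

The prefix copies the symbol tail, so the first sample equals the last; this is what turns the channel's linear convolution into a circular one and keeps the subcarriers orthogonal through a dispersive channel.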

    Chapter 7 provides a treatment of digital demodulation. A generic digital demodulator consists of two major subsystems, namely the signal processor and the detector. There are four types of signal processor: the matched filter, the correlator, the noncoherent matched filter, and the noncoherent correlator. The first two types are employed in coherent demodulation while the last two types are used in noncoherent demodulation. For binary demodulation the two fundamental detectors are the threshold and maximum detectors. For M-ary demodulation the two fundamental detectors are the minimum Euclidean distance detector and the M-ary maximum detector. Combining the signal processor(s) and the detector in that order produces an L-path demodulator for the set of digital signals with L orthonormal basis functions and an M-path demodulator for the set of M orthogonal signals. The bit error probability analysis is carried out for binary modulation techniques such as coherent PSK, coherent DPSK, direct-detection ASK (for fiber-optic communication), coherent FSK, coherent MSK, precoded MSK and GMSK, noncoherent FSK and MSK, and noncoherent DPSK. For M-ary demodulation, the bit error probability analysis is carried out for coherent MASK, coherent MPSK, coherent DMPSK, noncoherent DMPSK, coherent MQAM and DMQAM, coherent CSK and MFSK, noncoherent CSK and MFSK, coherent CPM with sequence detection, coherent CPM with symbol-by-symbol detection, and noncoherent CPM. The chapter continues with OFDM demodulation, with emphasis on the IEEE 802.11a,g standards. Finally, the demodulation and decoding of TCM are studied and performance analysis is investigated. The Viterbi algorithm is again used to illustrate the decoding process.
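The correlator-plus-threshold-detector pair can be previewed for coherent PSK over AWGN: the correlator output is ±sqrt(Eb) plus Gaussian noise of variance N0/2, the threshold detector decides by sign, and the simulated bit error rate should track the textbook result Q(sqrt(2 Eb/N0)). The Eb/N0 of 4 dB and bit count are arbitrary simulation choices.

```python
import math, random

random.seed(7)

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def simulate_psk_ber(ebn0_db, n_bits=200_000):
    """Coherent PSK: correlator output r = ±sqrt(Eb) + noise,
    threshold detector decides by the sign of r."""
    eb = 1.0
    n0 = eb / 10 ** (ebn0_db / 10)
    sigma = math.sqrt(n0 / 2)          # noise variance N0/2 at the correlator
    errors = 0
    for _ in range(n_bits):
        bit = random.getrandbits(1)
        r = (1 if bit else -1) * math.sqrt(eb) + random.gauss(0, sigma)
        errors += (r > 0) != bool(bit)
    return errors / n_bits

ber = simulate_psk_ber(4.0)   # theory: Q(sqrt(2 * Eb/N0)) at 4 dB
```

Running the same loop at several Eb/N0 values reproduces the waterfall curve that Chapter 7 derives analytically.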

    Chapter 8 investigates two major spread spectrum communication techniques for both commercial and military applications: direct sequence (DS) and frequency hop (FH). The chapter begins with a presentation of the pseudo-noise (PN) sequences needed for spreading the modulated signal. Next the concept of quadrature orthogonal covering using Walsh functions of the same length for multiplexing DS signals with an identical symbol rate is discussed. This concept is then extended to variable-length orthogonal covering for variable symbol rates. IS-95 is used as a real-life example for the study of the direct sequence spread spectrum. The demodulation of DS signals in the presence of tone jamming, broadband jamming, and pulse jamming is analyzed. Demodulation of quadrature orthogonal covering (IS-95 forward link) as well as noncoherent DS-CSK (IS-95 reverse link) is presented. The analysis of code division multiple access (CDMA) with random spreading sequences is presented together with a closed-form expression and a tight upper bound for bit error probability. For frequency hop signals, three jamming strategies are studied: partial-band jamming, multi-tone jamming, and follower jamming. Both slow and fast hops are considered for follower jamming.
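PN sequences of the kind used for spreading are typically produced by a linear feedback shift register (LFSR). As a sketch, the degree-4 primitive polynomial x^4 + x + 1 (a standard small example, not one singled out by the text) generates an m-sequence of maximal period 2^4 - 1 = 15 with eight 1s and seven 0s per period:

```python
def lfsr_sequence(taps, state, length):
    """Fibonacci-form binary LFSR: output the last stage, feed back the
    XOR of the tapped stages (taps are 1-based stage numbers)."""
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

# x^4 + x + 1 -> taps at stages 4 and 1; any nonzero seed works.
seq = lfsr_sequence(taps=[4, 1], state=[1, 0, 0, 0], length=30)
```

The balance and run-length properties of such sequences are what make them look noise-like to a receiver that does not know the generator.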

    Chapter 9 deals with intersymbol interference (ISI) in a bandlimited channel. The Nyquist criterion for zero ISI is stated together with the corresponding pulse shapes that satisfy it. The design of an optimum demodulator for a bandlimited channel with Gaussian noise is carried out. The optimum demodulator relies on the signal pulse shape implemented at the modulator. The channel is converted to an ideal channel via an equalizer implemented at the modulator (the equalizer is a filter with a transfer function equal to the inverse transfer function of the channel). At the demodulator, a matched filter matched to the signal pulse shape simultaneously achieves both the maximum signal-to-noise ratio and zero ISI as long as the pulse shape at the matched filter output satisfies the Nyquist criterion of zero ISI. In practice, because the channel transfer function is not known or varies with time, ISI removal is instead implemented at the demodulator. The equalizer implemented at the demodulator can be classified into two types: linear and nonlinear. The treatment of linear equalizers covers zero-forcing and mean-square error equalizers. The latter alleviates the noise enhancement effect that severely degrades the former in channels with deep attenuation in the passband. Nonlinear equalizers such as zero-forcing decision-feedback and mean-square error decision-feedback equalizers can avoid the noise enhancement effect altogether, although in channels with severe distortion the error propagation due to decision feedback could worsen the performance. To obtain optimum performance, maximum likelihood sequence detection may be employed to mitigate the ISI. The motivation behind sequence detection is to use the symbol energy that resides in the ISI portion of the symbol to aid the detection instead of throwing it away. The Viterbi algorithm is employed in practice for sequence detection. Finally, a fractionally spaced equalizer that can mitigate timing error is presented.
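The zero-forcing idea, inverting the channel transfer function, can be sketched for a two-tap ISI channel H(z) = 1 + 0.5 z^{-1} (an invented example). The exact inverse 1/H(z) is realized recursively; note that the sketch is noise-free, so it does not show the noise enhancement that penalizes ZF equalizers in channels with deep spectral attenuation.

```python
def channel(symbols, h=(1.0, 0.5)):
    """Two-tap ISI channel: x[n] = h0*s[n] + h1*s[n-1] (illustrative taps)."""
    out, prev = [], 0.0
    for s in symbols:
        out.append(h[0] * s + h[1] * prev)
        prev = s
    return out

def zf_equalizer(x, h=(1.0, 0.5)):
    """Zero-forcing linear equalizer: the exact inverse filter 1/H(z),
    realized recursively as y[n] = (x[n] - h1*y[n-1]) / h0."""
    y, prev = [], 0.0
    for v in x:
        prev = (v - h[1] * prev) / h[0]
        y.append(prev)
    return y

tx = [1, -1, -1, 1, 1]
rx = channel(tx)           # ISI-corrupted observation
eq = zf_equalizer(rx)      # recovers tx exactly in the absence of noise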


    Chapter 10 studies the transmission and reception of a digitally modulated signal over a fading channel. Unlike the AWGN channel, where noise is the only problem, a fading channel is a greater challenge, as it may cause signal envelope variations, phase errors, and intersymbol interference, all of which are detrimental to the performance of the signal. Thus, it is necessary to understand the mechanism that causes these unwanted effects and find ways to mitigate them. A fading channel arises from the movement of the transmitter and receiver, commonly referred to as the time-varying effect or Doppler effect. This may cause random amplitude attenuation and random phase rotation of the signal space. In underwater acoustic communication, the medium also changes over time, further compounding the problem. A fading channel also exhibits a space-varying effect, where the locations of the transmitter and receiver and the physical structures in the environment dictate the paths that the signal may travel. A transmitted signal representing an arbitrary symbol may arrive at the receiver via multiple paths. Depending on the time delay between paths, signal echoes of a symbol may overlap the next several symbols, causing intersymbol interference. Both time-varying and space-varying effects can be classified into four fading characteristics: slow fading with random amplitude attenuation and negligible phase error, fast fading with random amplitude attenuation and large phase error, flat fading with random amplitude attenuation and negligible intersymbol interference, and frequency-selective fading with random amplitude attenuation and intersymbol interference. A mobile wireless channel may have two of these four characteristics, where the random amplitude attenuation is described by a special distribution (Rayleigh, Rice, or Nakagami-m). For analysis, it is convenient to model a fading channel with a channel impulse response that includes both time- and space-varying effects. From the channel impulse response, the multipath autocorrelation and Doppler profiles are derived, which lead to the concept of the Doppler power spectrum. The Clarke and Aulin Doppler spectra are studied as examples. Using a mathematical model, the performance of a modulated signal in a fading channel is analyzed. First, ideal coherent demodulation (assuming the carrier phase is always available for symbol-by-symbol detection) is investigated and the fading channel is assumed to produce only random amplitude attenuation. Channel tap estimation as well as the channel tap error effect is studied to reflect real-world situations.

    Next, the slow fading channel with random amplitude attenuation is investigated for pilot symbol-aided demodulation (the pilot symbols are periodically transmitted in the symbol stream; this represents a less desirable situation than ideal coherent demodulation but remains in line with practical applications). These investigations are extended to OFDM (the major waveform adopted by IEEE 802.11a-g and 802.16, to name a few) where a slow and frequency-selective fading channel is assumed. The fundamentals of coherent demodulation are extended to noncoherent demodulation, where the Doppler tracking of orthogonal signals is investigated. The next discussion centers on another major waveform adopted by IS-95, CDMA 2000, and WCDMA (for use in the forward channel, the reverse channel, or both) for their respective cellular systems, namely, orthogonal covering and spread spectrum signals. Complex spreading and despreading as well as Doppler analysis and tracking are presented. For completeness the demodulation of the signal used in the reverse channel of IS-95 is also presented. Once the Doppler phase error resulting from either slow or fast fading is corrected via Doppler tracking, and the ISI resulting from frequency-selective fading is mitigated via OFDM and equalization, only random amplitude attenuation remains to be dealt with. This particular effect can be effectively alleviated via the use of time, frequency, antenna, or multipath delay diversity. Diversity is a signal combining method that makes use of uncorrelated signal redundancy for both transmission and reception to enhance symbol detection in the presence of a deep fade, which may destroy a non-diversity symbol. Diversity can be achieved via redundant symbol interleaving for time diversity, or via uncorrelated subcarrier combining in OFDM for frequency diversity. It can also be achieved via multiple transmit antennas for transmit antenna diversity, via multiple receive antennas for receive antenna diversity, or via a Rake receiver for multipath delay diversity. Combinations of these methods are also possible. Three main signal combining methods are studied: maximal ratio combining (MRC), selection combining (SC), and equal gain combining (EGC). MRC is the optimum combining scheme for coherent or pilot symbol-aided demodulation in AWGN and is the most commonly used method for wireless LAN, MAN, WAN, and cellular systems. It is superior to SC and EGC, although it cannot be used for noncoherent demodulation, unlike the other two schemes. Wireless communication in a fading channel favors the use of multiple transmit antennas for performance enhancement. This type of antenna diversity employs orthogonal space-time block codes with rates of 1/2, 3/4, and 1, with MRC at the receiver. The Alamouti code with unity rate was recommended for the IEEE 802.16 family. Integrating both transmit and receive antenna diversity provides the receiver with a powerful method to combat random amplitude attenuation.
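The MRC principle described above is compact enough to sketch: each branch is weighted by the conjugate of its channel gain, so the branch phases align and the output SNR is the sum of the branch SNRs. The two complex gains below are invented for illustration.

```python
def mrc_snr(branch_snrs):
    """Maximal ratio combining: output SNR is the sum of the branch SNRs."""
    return sum(branch_snrs)

def mrc_combine(gains, received):
    """Weight each branch by the conjugate of its channel gain, then sum."""
    return sum(g.conjugate() * r for g, r in zip(gains, received))

h = [0.9 + 0.3j, 0.2 - 0.7j]          # illustrative branch channel gains
s = 1 + 0j                            # transmitted symbol
z = mrc_combine(h, [g * s for g in h])
# Noise-free output is s * sum(|h_k|^2): phase aligned, real and positive.
```

Because the combiner output scales with the total channel energy, a deep fade on one branch is compensated by the others, which is exactly the diversity benefit the text describes.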

    The remaining parts of this chapter investigate the capacity of a fading channel. Since a fade causes an outage that may drive the instantaneous channel capacity to zero, the average channel capacity defined for an AWGN channel does not exist for a slow fading channel. Therefore, the outage channel capacity is defined instead. The evaluation of outage capacity for a slow fading channel, as well as slow fading channels with receive antenna diversity (SIMO), transmit antenna diversity (MISO), both receive and transmit antenna diversity (MIMO), and OFDM, is presented. For a fast fading channel, the average channel capacity is well defined since symbols fade independently and there exists a coding system that ensures a maximum reliable rate, which is the ensemble average rate. Fast fading is less detrimental than slow fading from the capacity point of view but requires more complex channel coding to deal with both Doppler phase error (virtually eliminated via Doppler tracking in slow fading) and random amplitude attenuation.

    1.3 Summary

    The structure of the book can be summarized as follows:

    * Acquiring the prerequisite knowledge of communication signals: Chapters 2 and 3.
    * Packaging the message and introducing the concept of signal-to-noise ratio and bandwidth: Chapter 4.
    * Measuring and establishing the required signal-to-noise ratio for a given communication coverage: Chapter 5.
    * Sending the message based on the required signal-to-noise ratio and bandwidth: Chapter 6.
    * Receiving the message and providing the best detection: Chapter 7.
    * Enhancing the survivability of narrowband modulation in the presence of interference via bandwidth spreading, i.e., spread spectrum modulation: Chapter 8.
    * Pulse shaping and equalizing the effect of a bandlimited channel for modulated signals: Chapter 9.
    * Dealing with sending and receiving signals over a mobile channel for previously discussed modulation techniques: Chapter 10.


    2 Deterministic signal analysis

    Introduction

    In this chapter we lay the foundation for the analysis and design of communication systems, and digital communication systems in particular. We employ deterministic signals to carry information from the transmitter to the receiver. These deterministic signals contain certain a priori features sufficiently adequate for the receiver to retrieve the information. Note that the information always appears random to the receiver, that is, it does not know which data it will receive; otherwise, communications would not be needed. Deterministic signals form a very broad class of signals; therefore, the first step is to categorize them so that their characterization can be fully exploited. The categorization leads to the labels continuous-time, discrete-time, periodic, aperiodic, analog, digital, energy, and power signals. Further study leads us to orthogonal signals and the use of signal space to represent digital signals as vectors. We also review linear time-invariant (LTI) systems and the important convolution operation that relates the inputs and outputs of an LTI system.

    We then investigate the Fourier series representation of continuous-time periodic signals, and the Fourier transform of continuous-time aperiodic signals. The Fourier transform is indispensable in the analysis and design of LTI systems. The energy spectral density of an energy signal and the power spectral density of a power signal are studied. From here the autocorrelation functions of both energy and power signals are examined.

    The process of representing a continuous-time signal by its samples is then studied using the sampling theorem. We also discuss the process of recovering a continuous-time signal from its samples. Finally, we study various representations of bandpass signals, which are commonly used in the analysis of communication systems.

    2.1 General description of deterministic signals

    A deterministic signal is completely specified at any instant of time t. There is no uncertainty about its value at t. The transmitter employs deterministic signals to carry random information. When the receiver receives a transmitted signal that has been corrupted by noise (a random signal), it attempts to detect the information by stripping away the deterministic signals. A deterministic signal can fall into a number of categories, which are described below.


    Continuous-time signals

    A continuous-time signal x(t) is a function of the real variable t. For example, the sinusoidal signal

    $$x(t) = A\cos 2\pi f_0 t \qquad (2.1)$$

    shown in Figure 2.1 is a function of the time variable t, with amplitude A and frequency $f_0$.

    Discrete-time signals

    A discrete-time signal x(n) is a sequence where the values of the index n are integers. For example, the discrete-time sinusoidal signal

    $$x(n) = A\cos 2\pi f_0 n \qquad (2.2)$$

    with $f_0 = 1/12$ is shown in Figure 2.2.

    Figure 2.1 Sinusoidal signal.

    Figure 2.2 Discrete-time sinusoidal signal.


    Often a discrete-time signal is obtained from a continuous-time signal x(t) by sampling at the time instants separated by a sampling interval T. Thus, $x(n) = x(nT)$. For example, if $x(t) = \cos 2\pi f_0 t$, then $x(n) = x(nT) = \cos 2\pi f_0 nT$. Specifically, $f_0 T = 1/12$ for Figure 2.2.

    Periodic and aperiodic signals

    A periodic signal x(t) satisfies the following condition:

    $$x(t) = x(t + T_0) \qquad (2.3)$$

    for all time t, where $T_0$ is the period. A signal x(t) that is not periodic is referred to as an aperiodic or nonperiodic signal.

    Example 2.1 Continuous-time sinusoid

    The signal $x(t) = A\cos 2\pi f_0 t$ shown in Figure 2.1 is a continuous-time periodic signal with period $T_0 = 1/f_0$. Note that

    $$x(t + T_0) = A\cos 2\pi f_0(t + T_0) = A\cos(2\pi f_0 t + 2\pi f_0 T_0) = A\cos(2\pi f_0 t + 2\pi) = A\cos 2\pi f_0 t = x(t) \qquad (2.4)$$

    Periodic signals are defined analogously in discrete time. Specifically, a discrete-time periodic signal x(n) satisfies the following condition:

    $$x(n) = x(n + N_0) \qquad (2.5)$$

    for all integers n, where the positive integer $N_0$ is the period.

    Example 2.2 Discrete-time sinusoid

    The signal $x(n) = A\cos 2\pi f_0 n$ is periodic if

    $$A\cos 2\pi f_0 n = A\cos 2\pi f_0(n + N_0) = A\cos(2\pi f_0 n + 2\pi f_0 N_0) \qquad (2.6)$$

    This implies that $2\pi f_0 N_0$ must be a multiple of $2\pi$. Thus, for an arbitrary integer m

    $$2\pi f_0 N_0 = 2\pi m \qquad (2.7)$$

    or equivalently

    $$f_0 = m/N_0 \qquad (2.8)$$

    Thus x(n) is periodic only if $f_0$ is a rational number. Obviously, the fundamental frequency is $f_0 = 1/N_0$, assuming that m and $N_0$ do not have a common factor. The sequence $x(n) = A\cos(2\pi n/12)$ is periodic with period $N_0 = 1/f_0 = 12$.
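    The rationality test in (2.8) lends itself to a quick numerical check. The following sketch (the function name, search bound, and tolerance are illustrative choices, not from the text) finds the smallest period N0 for which f0*N0 is an integer:

    ```python
    import math

    def smallest_period(f0: float, max_period: int = 1000):
        """Return the smallest N0 with f0*N0 an integer (cf. (2.8)), or None."""
        for n0 in range(1, max_period + 1):
            # x(n) = cos(2*pi*f0*n) repeats after N0 samples iff f0*N0 is an integer
            if math.isclose(f0 * n0, round(f0 * n0), abs_tol=1e-9):
                return n0
        return None

    # f0 = 1/12 is rational with m = 1, N0 = 12, as in Figure 2.2:
    assert smallest_period(1 / 12) == 12
    # f0 = 1/(2*pi) is irrational, so no period is found within the search bound:
    assert smallest_period(1 / (2 * math.pi)) is None
    ```

    The tolerance guards against floating-point rounding; an exact rational-arithmetic test would use `fractions.Fraction` instead.
    
    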

    Analog and digital signals

    An analog signal is a signal with a continuous range of amplitudes. For example, the signal $x(t) = A\cos 2\pi f_0 t$, $-\infty < t < \infty$, is an analog signal.

    A digital signal is a member of a set of M unique analog signals that represent M data symbols.

    Example 2.3 Antipodal signals

    The binary digital signals $s_1(t) = A$, $iT_b \le t < (i+1)T_b$, and $s_2(t) = -A$, $iT_b \le t < (i+1)T_b$, shown in Figure 2.3 represent bit 1 and bit 0, respectively. The parameter $T_b$ is the duration of $s_1(t)$ or $s_2(t)$, and it is called the bit time (bit interval, bit duration). The time index i is an integer. The amplitude is A. This signaling technique is called antipodal signaling (also referred to as bipolar, non-return-to-zero (NRZ), or polar NRZ in the literature).

    The digital signals $s_1(t)$ and $s_2(t)$ can be employed to represent the sequence of bits {1, 0, 1, 1}, as shown in Figure 2.4.
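    The mapping of bits to antipodal levels can be sketched in a few lines; the function name and the sample-per-bit discretization below are illustrative, not from the text:

    ```python
    def antipodal_waveform(bits, amplitude=1.0, samples_per_bit=8):
        """Map bit 1 -> +A and bit 0 -> -A, holding each level for one bit interval Tb."""
        wave = []
        for b in bits:
            level = amplitude if b == 1 else -amplitude
            wave.extend([level] * samples_per_bit)
        return wave

    # The bit sequence {1, 0, 1, 1} of Figure 2.4, one sample per bit:
    assert antipodal_waveform([1, 0, 1, 1], samples_per_bit=1) == [1.0, -1.0, 1.0, 1.0]
    ```
    
    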

    Example 2.4 Phase shift keying

    The set of two digital signals $s_1(t) = A\cos 2\pi f_c t$, $iT_b \le t < (i+1)T_b$, and $s_2(t) = -A\cos 2\pi f_c t = A\cos(2\pi f_c t + \pi)$, $iT_b \le t < (i+1)T_b$, shown in Figure 2.5 can be

    Figure 2.3 Antipodal signals.

    Figure 2.4 Representation of the bit sequence {1, 0, 1, 1}.

    employed to represent bit 1 and bit 0, respectively. The frequency $f_c$ is normally greater than $1/T_b$. The parameter $R_b = 1/T_b$ is referred to as the bit rate. Note that there is a phase shift of $\pi$ radians between bits 1 and 0. This signaling or modulation technique is called binary phase shift keying or simply PSK (also referred to as BPSK or 2PSK in the literature).

    The sequence of bits {1, 0, 1, 1} represented by s1(t) and s2(t) is shown in Figure 2.6.

    A generalization of PSK is M-ary phase shift keying (MPSK) where M data symbols are represented by M signals with distinct phase shifts. Each symbol has k bits, hence $M = 2^k$. The symbol time (symbol interval, symbol duration) is $T_s$. The parameter $R_s = 1/T_s$ is referred to as the symbol rate. Note that a symbol has k bits; therefore, a symbol time consists of k bit times, that is, $T_s = kT_b$. Thus, the symbol rate is 1/k times the bit rate, that is, $R_s = R_b/k = 1/kT_b$.
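    The bit-rate to symbol-rate conversion above is simple arithmetic; a small sketch (the function name is illustrative) makes the relationship Rs = Rb/k explicit:

    ```python
    import math

    def symbol_rate(bit_rate: float, M: int) -> float:
        """Rs = Rb / k for an M-ary scheme carrying k = log2(M) bits per symbol."""
        k = round(math.log2(M))
        if 2 ** k != M:
            raise ValueError("M must be a power of two")
        return bit_rate / k

    # QPSK (M = 4, k = 2): a 1 Mbit/s stream corresponds to 500 ksymbols/s
    assert symbol_rate(1_000_000, 4) == 500_000.0
    ```
    
    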

    The MPSK signals can be written in an arbitrary symbol interval $iT_s \le t < (i+1)T_s$, where i is an integer, as follows:

    $$s_n(t) = A\cos\left(2\pi f_c t + \frac{(2n-1)\pi}{M}\right), \quad n = 1, 2, \ldots, M \qquad (2.9)$$

    where the frequency $f_c > 1/T_s$.

    Example 2.5 Quadrature phase shift keying

    For the case $M = 4$, there are four signals representing four distinct symbols 00, 01, 10, 11. In this case, each symbol has $k = 2$ bits and the signals are given as

    Figure 2.5 Binary digital signals.

    Figure 2.6 Representation of bit sequence {1, 0, 1, 1}.

    $$\begin{aligned}
    s_1(t) &= A\cos\left(2\pi f_c t + \frac{\pi}{4}\right) &&\text{for } 00\\
    s_2(t) &= A\cos\left(2\pi f_c t + \frac{3\pi}{4}\right) &&\text{for } 01\\
    s_3(t) &= A\cos\left(2\pi f_c t + \frac{5\pi}{4}\right) &&\text{for } 11\\
    s_4(t) &= A\cos\left(2\pi f_c t + \frac{7\pi}{4}\right) &&\text{for } 10
    \end{aligned}$$

    This signaling or modulation technique is called quadrature phase shift keying or simply QPSK (also referred to as 4PSK in the literature).
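    The phase assignment of (2.9) can be sketched directly; the function names below are illustrative, not from the text:

    ```python
    import math

    def mpsk_phase(n: int, M: int) -> float:
        """Phase (2n - 1)*pi/M of the n-th MPSK signal in (2.9), n = 1, ..., M."""
        return (2 * n - 1) * math.pi / M

    def mpsk_sample(n: int, M: int, A: float, fc: float, t: float) -> float:
        """One sample of s_n(t) = A*cos(2*pi*fc*t + (2n - 1)*pi/M)."""
        return A * math.cos(2 * math.pi * fc * t + mpsk_phase(n, M))

    # QPSK (M = 4): the four phases are pi/4, 3*pi/4, 5*pi/4, 7*pi/4
    assert [round(mpsk_phase(n, 4) / math.pi, 2) for n in (1, 2, 3, 4)] == [0.25, 0.75, 1.25, 1.75]
    ```
    
    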

    Example 2.6 4-ary amplitude shift keying

    Consider the set of $M = 4$ symbols {00, 01, 10, 11}. The following four digital signals represent the above symbols in an arbitrary symbol interval $iT_s \le t < (i+1)T_s$:

    $$\begin{aligned}
    s_1(t) &= A\cos 2\pi f_c t &&\text{for } 00\\
    s_2(t) &= 3A\cos 2\pi f_c t &&\text{for } 01\\
    s_3(t) &= -A\cos 2\pi f_c t &&\text{for } 10\\
    s_4(t) &= -3A\cos 2\pi f_c t &&\text{for } 11
    \end{aligned}$$

    Figure 2.7 shows the sequence {00, 01, 10, 11} as represented by $s_1(t)$, $s_2(t)$, $s_3(t)$, and $s_4(t)$ in time. This signaling or modulation technique is called 4-ary amplitude shift keying or simply 4ASK.

    In the next section we will investigate two special types of deterministic signals, namely, power and energy signals, and their relationship.

    Figure 2.7 4ASK for the sequence {00, 01, 10, 11}.

    2.2 Power and energy

    The average power in watts delivered by a voltage signal v(t) in volts to a resistive load R in ohms is given by

    $$P = \frac{V_{\mathrm{rms}}^2}{R} \qquad (2.10)$$

    where $V_{\mathrm{rms}}$ is the root mean square (rms) value of v(t), defined as

    $$V_{\mathrm{rms}} = \left[\lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} v^2(t)\,dt\right]^{1/2} \qquad (2.11)$$

    In communications, the value of the resistive load is normally assumed to be

    $$R = 1\ \Omega \qquad (2.12)$$

    Thus the average (normalized) power in watts is given by

    $$P = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} v^2(t)\,dt \qquad (2.13)$$

    A signal v(t) is called a power signal if and only if the average power is non-zero and finite, that is,

    $$0 < P < \infty \qquad (2.14)$$

    Example 2.7 Power of a sinusoid

    Let a voltage signal $v(t) = A\cos 2\pi f_0 t$ volts be applied across a resistive load $R = 1\ \Omega$. The average (normalized) power in watts delivered to the load is

    $$\begin{aligned}
    P &= \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} (A\cos 2\pi f_0 t)^2\,dt\\
    &= \lim_{T\to\infty} \frac{A^2}{4T}\int_{-T}^{T} (1 + \cos 4\pi f_0 t)\,dt\\
    &= \frac{A^2}{2} + \lim_{T\to\infty} \frac{A^2}{4T}\left[\frac{\sin 4\pi f_0 T}{4\pi f_0} + \frac{\sin 4\pi f_0 T}{4\pi f_0}\right]\\
    &= \frac{A^2}{2} + \frac{A^2}{2}\lim_{T\to\infty} \frac{\sin 4\pi f_0 T}{4\pi f_0 T} = \frac{A^2}{2}
    \end{aligned} \qquad (2.15)$$

    Therefore, $v(t) = A\cos 2\pi f_0 t$ is a power signal.


    The energy in joules of a voltage signal v(t) in volts is given by

    $$E = \int_{-\infty}^{\infty} |v(t)|^2\,dt \qquad (2.16)$$

    A signal v(t) is called an energy signal if and only if its energy is non-zero and finite, that is,

    $$0 < E < \infty \qquad (2.17)$$

    Example 2.8 Energy of a sinusoid

    The power signal in Example 2.7 has infinite energy, as proved below:

    $$E = \int_{-\infty}^{\infty} (A\cos 2\pi f_0 t)^2\,dt = \int_{-\infty}^{\infty} \frac{A^2}{2}(1 + \cos 4\pi f_0 t)\,dt = \infty \qquad (2.18)$$

    The above two examples show that a power signal has infinite energy. In the next two examples we investigate some additional energy signals.

    Example 2.9 Energy and power of antipodal signals

    Consider the digital signals $s_1(t) = A$, $iT_b \le t < (i+1)T_b$, and $s_2(t) = -A$, $iT_b \le t < (i+1)T_b$, as given in Example 2.3. The energy $E_b$ of these two signals, commonly referred to as the bit energy, can be calculated as follows:

    $$E_b = \int_{-\infty}^{\infty} s_1^2(t)\,dt = \int_{-\infty}^{\infty} s_2^2(t)\,dt = \int_{iT_b}^{(i+1)T_b} A^2\,dt = A^2 T_b \qquad (2.19)$$

    The energy of $s_1(t)$ and the energy of $s_2(t)$ satisfy (2.17), hence both signals are energy signals. However, the average power of these two signals is zero, as shown below:

    $$P = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} (\pm A)^2\,dt = \lim_{T\to\infty} \frac{1}{2T}\int_{iT_b}^{(i+1)T_b} A^2\,dt = \lim_{T\to\infty} \frac{A^2 T_b}{2T} = 0 \qquad (2.20)$$


    Example 2.10 Energy and power of PSK signals

    Consider the PSK signals $s_1(t) = A\cos 2\pi f_c t$, $iT_b \le t <$

    and $s_2(t) = x(t - iT_b) = -A\cos 2\pi f_c t = A\cos(2\pi f_c t + \pi)$, $iT_b \le t < (i+1)T_b$, for odd i. We can write the sequence as one composite signal as follows:

    $$s(t) = \sum_{i=-\infty}^{\infty} x(t - iT_b) = A\sum_{i=-\infty}^{\infty} (-1)^i p(t - iT_b)\cos 2\pi f_c t \qquad (2.25)$$

    where $p(t - iT_b)$ is a pulse of unit amplitude and duration $T_b$, as shown in Figure 2.8:

    $$p(t - iT_b) = \begin{cases} 1, & iT_b \le t < (i+1)T_b\\ 0, & \text{otherwise} \end{cases} \qquad (2.26)$$

    The average power of s(t) is given by (2.13):

    $$P = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} s^2(t)\,dt = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} A^2\left[\sum_{i=-\infty}^{\infty} (-1)^i p(t - iT_b)\cos 2\pi f_c t\right]^2 dt \qquad (2.27)$$

    Since the pulses $p(t - hT_b)$ and $p(t - iT_b)$ do not overlap for $i \ne h$, the cross-terms that involve the product $p(t - hT_b)\,p(t - iT_b)$ are all zeros, and (2.27) reduces to

    $$\begin{aligned}
    P &= \lim_{T\to\infty} \frac{A^2}{2T}\int_{-T}^{T} \left[\sum_{i=-\infty}^{\infty} p^2(t - iT_b)\cos^2 2\pi f_c t\right] dt\\
    &= \lim_{T\to\infty} \frac{A^2}{2T}\int_{-T}^{T} \cos^2 2\pi f_c t\,dt\\
    &= \lim_{T\to\infty} \frac{A^2}{4T}\int_{-T}^{T} (1 + \cos 4\pi f_c t)\,dt = \frac{A^2}{2}
    \end{aligned} \qquad (2.28)$$

    The last two lines of (2.28) follow (2.15). From (2.22) and (2.24), we have the following relationship between the energy of the digital signals $s_1(t)$ and $s_2(t)$ and their associated sequence s(t) in (2.25):

    Figure 2.8 Unit amplitude pulse of duration $T_b$.

    $$E_b \approx PT_b \qquad (2.29)$$

    or

    $$E_b = PT_b \qquad (2.30)$$

    when the product $nf_cT_b$ is a positive integer.

    Example 2.11 shows that although the infinite sequence s(t) of digital signals has finite average power, its component digital signal must have zero average power. If its component digital signal had finite non-zero power, an infinite number of them would result in a sequence s(t) possessing infinite power. The concept can be made simpler by the following example.

    Example 2.12 Power of a finite sequence of PSK signals

    Instead of using the power averaged over infinite time, we can employ the power averaged over finite time. Let P(T) be the power of a voltage signal v(t) averaged over the finite time T:

    $$P(T) = \frac{1}{2T}\int_{-T}^{T} v^2(t)\,dt \qquad (2.31)$$

    Now consider the sequence s(t) in (2.25). For simplicity let $f_c = k/T_b$, where k is a positive integer. By using the definition of power in (2.31) and choosing $T = nT_b$, where n is a positive integer, we can calculate the power of s(t) as follows:

    $$P(T) = \frac{A^2}{2} + \frac{A^2}{2}\,\frac{\sin 4\pi f_c T}{4\pi f_c T} = \frac{A^2}{2} + \frac{A^2}{2}\,\frac{\sin 4\pi nk}{4\pi nk} = \frac{A^2}{2} \qquad (2.32)$$

    Similarly, the power of the digital signal $s_1(t) = A\,p(t - iT_b)\cos 2\pi f_c t$ or $s_2(t) = -A\,p(t - iT_b)\cos 2\pi f_c t$ averaged over $2T = 2nT_b > 4(i+1)T_b$ is

    $$P_s(T) = \frac{1}{2T}\int_{-T}^{T} A^2 p^2(t - iT_b)\cos^2 2\pi f_c t\,dt = \frac{A^2}{4T}\int_{iT_b}^{(i+1)T_b} (1 + \cos 4\pi f_c t)\,dt = \frac{A^2 T_b}{4T} \qquad (2.33)$$

    Substituting $T = nT_b$ into (2.33) we get

    $$P_s(T) = \frac{A^2}{4n} \qquad (2.34)$$


    Since there are 2n non-overlapping signals $s_1(t)$ and $s_2(t)$ in the interval 2T, it is obvious that

    $$P(T) = 2nP_s(T) \qquad (2.35)$$

    In practice, (2.31) is used in laboratory instruments to measure power averaged over a finite time. This is also true for other average quantities, such as dc value and rms value.
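    The finite-time averages of (2.31) through (2.35) can be verified numerically. The following sketch (parameter values and function names are illustrative) truncates the composite sequence of (2.25) to the bits inside the averaging window:

    ```python
    import math

    def finite_time_power(v, T: float, num_samples: int = 100_000) -> float:
        """P(T) = (1/2T) * integral_{-T}^{T} v(t)^2 dt by a midpoint Riemann sum (cf. (2.31))."""
        dt = 2 * T / num_samples
        return sum(v(-T + (k + 0.5) * dt) ** 2 for k in range(num_samples)) * dt / (2 * T)

    A, Tb = 1.0, 1e-3
    k, n = 4, 8                     # fc = k/Tb and T = n*Tb, as in Example 2.12
    fc = k / Tb
    T = n * Tb

    def p(t: float, i: int) -> float:
        """Unit-amplitude pulse on [i*Tb, (i+1)*Tb), cf. (2.26)."""
        return 1.0 if i * Tb <= t < (i + 1) * Tb else 0.0

    def s(t: float) -> float:
        """Composite PSK sequence of (2.25), truncated to the bits inside [-T, T)."""
        carrier = math.cos(2 * math.pi * fc * t)
        return A * sum((-1) ** i * p(t, i) * carrier for i in range(-n, n))

    def s1(t: float) -> float:
        """One component signal A*p(t - i*Tb)*cos(2*pi*fc*t), with i = 0."""
        return A * p(t, 0) * math.cos(2 * math.pi * fc * t)

    P = finite_time_power(s, T)     # -> A^2/2, cf. (2.32)
    Ps = finite_time_power(s1, T)   # -> A^2/(4n), cf. (2.34)
    assert abs(P - A ** 2 / 2) < 1e-3
    assert abs(Ps - A ** 2 / (4 * n)) < 1e-4
    assert abs(P - 2 * n * Ps) < 1e-2   # cf. (2.35)
    ```

    Choosing fc as an integer multiple of 1/Tb makes the sinusoidal terms vanish exactly, so the numerical averages match the closed-form results closely.
    
    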

    2.3 Orthogonal signals

    Orthogonal signals are employed in many modern communication systems. Two signals x(t) and y(t) are said to be orthogonal over the interval (a, b) if their inner product is zero, that is,

    $$\int_a^b x(t)y^*(t)\,dt = 0 \qquad (2.36)$$

    where $y^*(t)$ is the complex conjugate of y(t). If, in addition, the energy of each signal defined over the interval (a, b) is unity, that is,

    $$E_x = \int_a^b |x(t)|^2\,dt = 1, \qquad E_y = \int_a^b |y(t)|^2\,dt = 1 \qquad (2.37)$$

    then the two signals are said to be orthonormal. A set of signals $\{x_i(t)\}$ is said to be orthogonal (orthonormal) if each pair of signals in the set is orthogonal (orthonormal). Conditions (2.36) and (2.37) can be relaxed to include quasi-orthogonal (quasi-orthonormal) signals. Two signals x(t) and y(t) are said to be quasi-orthogonal over the interval (a, b) if their inner product is much less than the energy of each signal, that is,

    $$\left|\int_a^b x(t)y^*(t)\,dt\right| \ll E_x \approx E_y \qquad (2.38)$$

    In addition, if

    $$E_x \approx E_y \approx 1 \qquad (2.39)$$

    then the two signals are said to be quasi-orthonormal. A set of signals $\{x_i(t)\}$ is said to be quasi-orthogonal (quasi-orthonormal) if each pair of signals in the set is quasi-orthogonal (quasi-orthonormal).

    Example 2.13 Orthonormal sinusoids with identical frequency

    Consider two sinusoidal signals $x(t) = \sqrt{2/T_0}\cos 2\pi n f_0 t$ and $y(t) = \sqrt{2/T_0}\sin 2\pi n f_0 t$, where $n = 1, 2, \ldots$, and $T_0 = 1/f_0$. Substituting x(t) and y(t) into (2.36) with $a = 0$ and $b = T_0$ we have


    $$\int_0^{T_0} \frac{2}{T_0}\cos 2\pi n f_0 t \sin 2\pi n f_0 t\,dt = \frac{1}{T_0}\int_0^{T_0} \sin 4\pi n f_0 t\,dt = \frac{1}{T_0}\left[-\frac{\cos 4\pi n f_0 t}{4\pi n f_0}\right]_0^{T_0} = 0 \qquad (2.40)$$

    Furthermore, applying (2.37) we obtain the energy of x(t) and y(t), defined over $(0, T_0)$, as follows:

    $$E = \int_0^{T_0} \frac{2}{T_0}\cos^2 2\pi n f_0 t\,dt = \int_0^{T_0} \frac{2}{T_0}\sin^2 2\pi n f_0 t\,dt = \frac{1}{T_0}\int_0^{T_0} (1 \pm \cos 4\pi n f_0 t)\,dt = 1 \qquad (2.41)$$

    Thus, the two signals x(t) and y(t) are orthonormal over $(0, T_0)$.
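    The orthonormality of this quadrature pair can be confirmed numerically by approximating the inner product of (2.36); the sketch below uses a midpoint Riemann sum with illustrative values of f0 and n:

    ```python
    import math

    def inner_product(x, y, a: float, b: float, num_samples: int = 100_000) -> float:
        """Midpoint-rule approximation of integral_a^b x(t)*y(t) dt for real signals (cf. (2.36))."""
        dt = (b - a) / num_samples
        return sum(x(a + (k + 0.5) * dt) * y(a + (k + 0.5) * dt)
                   for k in range(num_samples)) * dt

    f0, n = 50.0, 3
    T0 = 1 / f0
    x = lambda t: math.sqrt(2 / T0) * math.cos(2 * math.pi * n * f0 * t)
    y = lambda t: math.sqrt(2 / T0) * math.sin(2 * math.pi * n * f0 * t)

    assert abs(inner_product(x, y, 0, T0)) < 1e-6        # orthogonal, cf. (2.40)
    assert abs(inner_product(x, x, 0, T0) - 1) < 1e-6    # unit energy, cf. (2.41)
    assert abs(inner_product(y, y, 0, T0) - 1) < 1e-6
    ```
    
    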

    The above example shows that two sinusoidal signals with a relative phase difference of $\pi/2$ radians are orthogonal over an integer multiple of their period $1/nf_0 = T_0/n$. In practice, the frequency of the sinusoidal signals might not be an integer multiple of $1/T_0$. In other words, n is not a positive integer but a positive real number. In this case the energy $E_x$ of x(t) and the energy $E_y$ of y(t) as calculated in (2.41) become

    $$E_x = 1 + \frac{\sin 4\pi n}{4\pi n} \approx 1 \qquad (2.42a)$$

    $$E_y = 1 - \frac{\sin 4\pi n}{4\pi n} \approx 1 \qquad (2.42b)$$

    for a large n. Also from (2.40) we have for a large n:

    $$\int_0^{T_0} \frac{2}{T_0}\cos 2\pi n f_0 t \sin 2\pi n f_0 t\,dt = \frac{1}{T_0}\int_0^{T_0} \sin 4\pi n f_0 t\,dt = \frac{1}{T_0}\left[-\frac{\cos 4\pi n f_0 t}{4\pi n f_0}\right]_0^{T_0} = \frac{1 - \cos 4\pi n}{4\pi n} \ll 1 \qquad (2.43)$$

    Thus, the two signals x(t) and y(t) have near-unity energy and their inner product is much smaller than unity. In this case they are quasi-orthonormal.

    Example 2.14 Orthonormal sinusoids with different frequencies

    Consider the set of sinusoidal signals $\{x_n(t)\}$, $n = 1, 2, \ldots$, where $x_n(t)$ is given by

    $$x_n(t) = \sqrt{\frac{2}{T_0}}\cos(2\pi n f_0 t + \theta_n), \quad T_0 = \frac{1}{f_0} \qquad (2.44)$$

    Substituting a pair of signals $x_k(t)$ and $x_m(t)$ with $k \ne m$ into (2.36) and setting $a = 0$ and $b = T_0$ yields

    $$\begin{aligned}
    \int_0^{T_0} x_k(t)x_m(t)\,dt &= \int_0^{T_0} \frac{2}{T_0}\cos(2\pi k f_0 t + \theta_k)\cos(2\pi m f_0 t + \theta_m)\,dt\\
    &= \frac{1}{T_0}\int_0^{T_0} \{\cos[2\pi(k - m)f_0 t + \theta_k - \theta_m] + \cos[2\pi(k + m)f_0 t + \theta_k + \theta_m]\}\,dt = 0
    \end{aligned} \qquad (2.45)$$

    Furthermore, it can be seen that the energy of each signal $x_k(t)$ and $x_m(t)$ is unity by applying (2.37). Thus, the set of sinusoidal signals $\{x_n(t)\}$ is orthonormal over $(0, T_0)$.

    The signals in (2.45) are often employed to represent data bits or symbols. For example, two adjacent signals $x_n(t)$ and $x_{n+1}(t)$ can be used to represent bit 0 and bit 1, respectively. The parameter $T_0$ is set to be the bit time $T_b$. Note that the frequencies of the two signals are separated by $f_0 = 1/T_0 = 1/T_b$, which is also the bit rate. This is the minimum frequency spacing between $x_n(t)$ and $x_{n+1}(t)$. This modulation technique is referred to as binary frequency shift keying or simply FSK (also referred to as BFSK or 2FSK in the literature). In another example, $M = 2^k$ adjacent signals $x_n(t), x_{n+1}(t), \ldots, x_{n+M-1}(t)$ can be used to represent M symbols. In this case the parameter $T_0$ is set to be the symbol time $T_s$. The adjacent frequencies are separated by the minimum frequency spacing $f_0 = 1/T_0 = 1/T_s$, which is also the symbol rate. This is called M-ary frequency shift keying or simply MFSK. Figure 2.9 illustrates an FSK representation of two consecutive bits 0 and 1.
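    The orthogonality of FSK tones spaced by 1/Tb, which follows from (2.45), can be checked numerically; the function name and parameter values below are illustrative:

    ```python
    import math

    def fsk_correlation(f1: float, f2: float, phi1: float, phi2: float,
                        T: float, num_samples: int = 100_000) -> float:
        """(2/T) * integral_0^T cos(2*pi*f1*t + phi1)*cos(2*pi*f2*t + phi2) dt, midpoint rule."""
        dt = T / num_samples
        return (2 / T) * sum(
            math.cos(2 * math.pi * f1 * ((k + 0.5) * dt) + phi1)
            * math.cos(2 * math.pi * f2 * ((k + 0.5) * dt) + phi2)
            for k in range(num_samples)) * dt

    Tb = 1e-3          # bit interval; minimum FSK tone spacing is f0 = 1/Tb
    f0 = 1 / Tb
    # Tones at 3*f0 and 4*f0 (spacing 1/Tb) are orthogonal for arbitrary phases:
    assert abs(fsk_correlation(3 * f0, 4 * f0, 0.3, 1.1, Tb)) < 1e-6
    # A tone correlated with itself has normalized energy 1:
    assert abs(fsk_correlation(3 * f0, 3 * f0, 0.3, 0.3, Tb) - 1) < 1e-6
    ```
    
    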

    Example 2.15 Continuous-phase orthonormal sinusoids with minimum frequency spacing

    Consider two sinusoidal signals $x_{n,i}(t)$ and $x_{n+1,i}(t)$ defined over the interval $(iT_0, (i+1)T_0)$ as follows:

    Figure 2.9 FSK representation of two consecutive and different bits.

    $$x_{n,i}(t) = \sqrt{\frac{2}{T_0}}\cos[\pi n f_0(t - iT_0) + \theta_{n,i}] \qquad (2.46)$$

    $$x_{n+1,i}(t) = \sqrt{\frac{2}{T_0}}\cos[\pi(n+1) f_0(t - iT_0) + \theta_{n+1,i}] \qquad (2.47)$$

    where $n = 2, 3, \ldots$, $T_0 = 1/f_0$, and $\theta_{n,i}$ and $\theta_{n+1,i}$ are the initial phases of $x_{n,i}(t)$ and $x_{n+1,i}(t)$ at time $t = iT_0$, respectively. Note that the minimum frequency spacing between these two signals is $f_0/2$. Applying these two signals to (2.36) over the interval $(iT_0, (i+1)T_0)$ we get

    $$\begin{aligned}
    \int_{iT_0}^{(i+1)T_0} & x_{n,i}(t)\,x_{n+1,i}(t)\,dt\\
    &= \frac{1}{T_0}\int_{iT_0}^{(i+1)T_0} \{\cos[\pi f_0(t - iT_0) + \theta_{n+1,i} - \theta_{n,i}] + \cos[(2n+1)\pi f_0(t - iT_0) + \theta_{n,i} + \theta_{n+1,i}]\}\,dt\\
    &= \frac{1}{T_0}\left[\frac{\sin[\pi f_0(t - iT_0) + \theta_{n+1,i} - \theta_{n,i}]}{\pi f_0} + \frac{\sin[(2n+1)\pi f_0(t - iT_0) + \theta_{n,i} + \theta_{n+1,i}]}{(2n+1)\pi f_0}\right]_{iT_0}^{(i+1)T_0}\\
    &= \frac{1}{T_0}\left[\frac{\sin(\pi + \theta_{n+1,i} - \theta_{n,i}) - \sin(\theta_{n+1,i} - \theta_{n,i})}{\pi f_0} + \frac{\sin[(2n+1)\pi + \theta_{n,i} + \theta_{n+1,i}] - \sin(\theta_{n,i} + \theta_{n+1,i})}{(2n+1)\pi f_0}\right]
    \end{aligned} \qquad (2.48)$$

    The result in (2.48) shows that the two signals are not orthogonal unless the following conditions are satisfied:

    $$\theta_{n+1,i} - \theta_{n,i} = k\pi \qquad (2.49a)$$

    $$\theta_{n+1,i} + \theta_{n,i} = m\pi \qquad (2.49b)$$

    where k and m are integers. This implies that $\theta_{n,i}$ and $\theta_{n+1,i}$ must be an integer multiple of $\pi$ radians. In practice, it is difficult to force the phases of two sinusoidal signals at different frequencies to