
INTRODUCTION

Over the past decade, the rapid expansion of digital communication technologies has been simply astounding. Internet, a word and concept once familiar only to technologists and the scientific community, has permeated every aspect of people's daily lives.

It is quite difficult to find any individual in a modern society who has not been touched by new communication technologies ranging from cellular phones to Bluetooth. This book examines the basic principles of communication by electric signals. Before modern times, messages were carried by runners, carrier pigeons, lights, and fires. These schemes were adequate for the distances and "data rates" of the age. In most parts of the world, these modes of communication have been superseded by electrical communication systems,* which can transmit signals over much longer distances (even to distant planets and galaxies) and at the speed of light.

Electrical communication is dependable and economical; communication technologies improve productivity and energy conservation. Increasingly, business meetings are conducted through teleconferences, saving the time and energy formerly expended on travel. Ubiquitous communication allows real-time management and coordination of project participants from around the globe. E-mail is rapidly replacing the more costly and slower "snail mails." E-commerce has also drastically reduced some costs and delays associated with marketing, while customers are also much better informed about new products and product information. Traditional media outlets such as television, radio, and newspapers have been rapidly evolving in the past few years to cope with, and better utilize, the new communication and networking technologies. The goal of this textbook is to provide the fundamental technical knowledge needed by next-generation communication engineers and technologists for designing even better communication systems of the future.

1.1 COMMUNICATION SYSTEMS

Figure 1.1 presents three typical communication systems: a wire-line telephone-cellular phone connection, a TV broadcasting system, and a wireless computer network. Because of the numerous examples of communication systems in existence, it would be unwise to attempt to study the details of all kinds of communication systems in this book. Instead, the most efficient and effective way to learn about communication is by studying the major functional blocks common to practically all communication systems. This way, students are not

* With the exception of the postal service.

Figure 1.1 Some examples of communication systems: wireless computers, wireless access point, wire-line network.

merely learning the operations of those existing systems they have studied; more importantly, they can acquire the basic knowledge needed to design and analyze new systems never encountered in a textbook. To begin, it is essential to establish a typical communication system model as shown in Fig. 1.2. The key components of a communication system are as follows.

The source originates a message, such as a human voice, a television picture, an e-mail message, or data. If the data is nonelectric (e.g., human voice, e-mail text, television video), it must be converted by an input transducer into an electric waveform referred to as the baseband signal or message signal through physical devices such as a microphone, a computer keyboard, or a CCD camera.

The transmitter modifies the baseband signal for efficient transmission. The transmitter may consist of one or more subsystems: an A/D converter, an encoder, and a modulator. Similarly, the receiver may consist of a demodulator, a decoder, and a D/A converter.

Figure 1.2 Communication system.

The channel is a medium of choice that can convey the electric signals at the transmitter output over a distance. A typical channel can be a pair of twisted copper wires (telephone and DSL), coaxial cable (television and internet), an optical fiber, or a radio link. Additionally, a channel can also be a point-to-point connection in a mesh of interconnected channels that form a communication network.

The receiver reprocesses the signal received from the channel by reversing the signal modifications made at the transmitter and removing the distortions made by the channel. The receiver output is fed to the output transducer, which converts the electric signal to its original form—the message.

The destination is the unit to which the message is communicated.

A channel is a physical medium that behaves partly like a filter that generally attenuates the signal and distorts the transmitted waveforms. The signal attenuation increases with the length of the channel, varying from a few percent for short distances to orders of magnitude in interplanetary communications. Signal waveforms are distorted because of physical phenomena such as frequency-dependent gains, multipath effects, and Doppler shift. For example, a frequency-selective channel causes different amounts of attenuation and phase shift to different frequency components of the signal. A square pulse is rounded or "spread out" during transmission over a low-pass channel. These types of distortion, called linear distortion, can be partly corrected at the receiver by an equalizer with gain and phase characteristics complementary to those of the channel. Channels may also cause nonlinear distortion through attenuation that varies with the signal amplitude. Such distortions can also be partly corrected by a complementary equalizer at the receiver. Channel distortions, if known, can also be precompensated by transmitters by applying channel-dependent predistortions.

In a practical environment, signals passing through communication channels not only experience channel distortions but also are corrupted along the path by undesirable interferences and disturbances lumped under the broad term noise. These interfering signals are random and are unpredictable from sources both external and internal. External noise includes interference signals transmitted on nearby channels; human-made noise generated by faulty contact switches of electrical equipment, automobile ignition radiation, fluorescent lights, microwave ovens, and cellphone emissions; and natural noise from lightning, electric storms, and solar and intergalactic radiation. With proper care in system design, external noise can be minimized or even eliminated in some cases. Internal noise results from thermal motion of charged particles in conductors, random emission, and diffusion or recombination of charged carriers in electronic devices. Proper care can reduce the effect of internal noise but can never eliminate it. Noise is one of the underlying factors that limit the rate of telecommunications.

Thus, in practical communication systems, the channel distorts the signal, and noise accumulates along the path. Worse yet, the signal strength decreases while the noise level remains steady regardless of the distance from the transmitter. Thus, the signal quality is continuously worsening along the length of the channel. Amplification of the received signal to make up for the attenuation is to no avail because the noise will be amplified by the same proportion, and the quality remains, at best, unchanged.* These are the key challenges that we must face in designing modern communication systems.

1.2 ANALOG AND DIGITAL MESSAGES

Messages are digital or analog. Digital messages are ordered combinations of finite symbols or codewords. For example, printed English consists of 26 letters, 10 numbers, a space, and several punctuation marks. Thus, a text document written in English is a digital message constructed from the ASCII keyboard of 128 symbols. Human speech is also a digital message, because it is made up from a finite vocabulary in a language.† Music notes are also digital, even though the music sound itself is analog. Similarly, a Morse-coded telegraph message is a digital message constructed from a set of only two symbols—dash and dot. It is therefore a binary message, implying only two symbols. A digital message constructed with M symbols is called an M-ary message.

Analog messages, on the other hand, are characterized by data whose values vary over a continuous range and are defined for a continuous range of time. For example, the temperature or the atmospheric pressure of a certain location over time can vary over a continuous range and can assume an (uncountable) infinite number of possible values. A piece of music recorded by a pianist is also an analog signal. Similarly, a particular speech waveform has amplitudes that vary over a continuous range. Over a given time interval, an infinite number of possible different speech waveforms exist, in contrast to only a finite number of possible digital messages.

1.2.1 Noise Immunity of Digital Signals

It is no secret to even a casual observer that every time one looks at the latest electronic communication products, newer and better "digital technology" is replacing the old analog technology. Within the past decade, cellular phones have completed their transformation from the first-generation analog AMPS to the current second-generation (e.g., GSM, CDMA) and third-generation (e.g., WCDMA) digital offspring. More visibly in every household, digital video technology (DVD) has made the analog VHS cassette systems almost obsolete. Digital television continues the digital assault on analog video technology by driving out the last analog holdout of color television. There is every reason to ask: Why are digital technologies better? The answer has to do with both economics and quality. The case for economics is made by noting the ease of adopting versatile, powerful, and inexpensive high-speed digital microprocessors. But more importantly at the quality level, one prominent feature of digital communications is the enhanced immunity of digital signals to noise and interferences.

Digital messages are transmitted as a finite set of electrical waveforms. In other words, a digital message is generated from a finite alphabet, while each character in the alphabet can be represented by one waveform or a sequential combination of such waveforms. For example, in sending messages via Morse code, a dash can be transmitted by an electrical pulse of amplitude A/2 and a dot can be transmitted by a pulse of negative amplitude −A/2 (Fig. 1.3a).

* Actually, amplification may further deteriorate the signal because of additional amplifier noise.
† Here we imply the information contained in the speech rather than its details such as the pronunciation of words and varying inflections, pitch, and emphasis. The speech signal from a microphone contains all these details and is therefore an analog signal, and its information content is more than a thousand times greater than the information accessible from the written text of the same speech.

Figure 1.3 (a) Transmitted signal. (b) Received distorted signal (without noise). (c) Received distorted signal (with noise). (d) Regenerated signal (delayed).

In an M-ary case, M distinct electrical pulses (or waveforms) are used; each of the M pulses represents one of the M possible symbols. Once transmitted, the receiver must extract the message from a distorted and noisy signal at the channel output. Message extraction is often easier from digital signals than from analog signals because the digital decision must belong to the finite-sized alphabet. Consider a binary case: two symbols are encoded as rectangular pulses of amplitudes A/2 and −A/2. The only decision at the receiver is to select between two possible pulses received; the fine details of the pulse shape are not an issue. A finite alphabet leads to noise and interference immunity. The receiver's decision can be made with reasonable certainty even if the pulses have suffered modest distortion and noise (Fig. 1.3). The digital message in Fig. 1.3a is distorted by the channel, as shown in Fig. 1.3b. Yet, if the distortion is not too large, we can recover the data without error because we need make only a simple binary decision: Is the received pulse positive or negative? Figure 1.3c shows the same data with channel distortion and noise. Here again, the data can be recovered correctly as long as the distortion and the noise are within limits. In contrast, the waveform shape itself in an analog message carries the needed information, and even a slight distortion or interference in the waveform will show up in the received signal. Clearly, a digital communication system is more rugged than an analog communication system in the sense that it can better withstand noise and distortion (as long as they are within a limit).

1.2.2 Viability of Distortionless Regenerative Repeaters

One main reason for the superior quality of digital systems over analog ones is the viability of regenerative repeaters and network nodes in the former. Repeater stations are placed along the communication path of a digital system at distances short enough to ensure that noise and distortion remain within a limit. This allows pulse detection with high accuracy. At each repeater station, or network node, the incoming pulses are detected such that new, "clean" pulses are retransmitted to the next repeater station or node. This process prevents the accumulation of noise and distortion along the path by cleaning the pulses at regular repeater intervals. We can thus transmit messages over longer distances with greater accuracy. There has been widespread application of distortionless regeneration by repeaters in long-haul communication systems and by nodes in a large (possibly heterogeneous) network.

For analog systems, signals and noise within the same bandwidth cannot be separated. Repeaters in analog systems are basically filters plus amplifiers and are not "regenerative." Thus, it is impossible to avoid in-band accumulation of noise and distortion along the path. As a result, the distortion and the noise interference can accumulate over the entire transmission path as a signal traverses through the network. To compound the problem, the signal is attenuated continuously over the transmission path. Thus, with increasing distance the signal becomes weaker, whereas the distortion and the noise accumulate more. Ultimately, the signal, overwhelmed by the distortion and noise, is buried beyond recognition. Amplification is of little help, since it enhances both the signal and the noise equally. Consequently, the distance over which an analog message can be successfully received is limited by the first transmitter power. Despite these limitations, analog communication was used widely and successfully in the past for short- to medium-range communications. Nowadays, because of the advent of optical fiber communications and the dramatic cost reduction achieved in the fabrication of high-speed digital circuitry and digital storage devices, almost all new communication systems being installed are digital. But some old analog communication facilities are still in use, including those for AM and FM radio broadcasting.
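The contrast can be made concrete with a toy simulation. The Python sketch below compares, under illustrative assumptions of our own (20 hops, each attenuating the pulses by half and adding Gaussian noise), an analog chain that must amplify noise along with signal against a regenerative chain that re-decides the bits at every hop:

```python
import numpy as np

# Toy comparison of analog vs. regenerative repeater chains.
# All parameters (hop count, attenuation, noise level) are illustrative.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=100_000)
tx = np.where(bits == 1, 1.0, -1.0)        # binary pulses of +/-1
hops, gain, sigma = 20, 0.5, 0.1           # per-hop attenuation and noise rms

analog = tx.copy()
digital = tx.copy()
for _ in range(hops):
    # Analog repeater: amplify by 1/gain, which also amplifies all the
    # noise accumulated so far.
    analog = (analog * gain + sigma * rng.standard_normal(tx.size)) / gain
    # Regenerative repeater: decide each bit, retransmit a clean pulse.
    noisy = digital * gain + sigma * rng.standard_normal(tx.size)
    digital = np.where(noisy > 0, 1.0, -1.0)

print("analog chain error rate: ", np.mean((analog > 0) != (bits == 1)))
print("digital chain error rate:", np.mean((digital > 0) != (bits == 1)))
```

Under these assumptions the analog chain's noise grows with the square root of the hop count, while the regenerative chain's error rate stays near its single-hop value.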

1.2.3 Analog-to-Digital (A/D) Conversion

Despite the differences between analog and digital signals, a meeting ground exists between them: conversion of analog signals to digital signals (A/D conversion). A key device in electronics, the analog-to-digital (A/D) converter, enables digital communication systems to convey analog source signals such as audio and video. Generally, analog signals are continuous in time and in range; that is, they have values at every time instant, and their values can be anything within the range. On the other hand, digital signals exist only at discrete points of time, and they can take on only finite values. A/D conversion can never be 100% accurate. Since, however, human perception does not require infinite accuracy, A/D conversion can effectively capture necessary information from the analog source for digital signal transmission.

Two steps take place in A/D conversion: a continuous time signal is first sampled into a discrete time signal, whose continuous amplitude is then quantized into a discrete level signal. First, the frequency spectrum of a signal indicates relative magnitudes of various frequency components. The sampling theorem (Chapter 6) states that if the highest frequency in the signal spectrum is B (in hertz), the signal can be reconstructed from its discrete samples, taken uniformly at a rate not less than 2B samples per second. This means that to preserve the information from a continuous-time signal, we need transmit only its samples (Fig. 1.4).

Figure 1.4 Analog-to-digital conversion of a signal: the message m(t), with amplitude range (−mp, mp), and its quantized samples.

However, the sample values are still not digital because they lie in a continuous dynamic range. Here, the second step of quantization comes to the rescue. Through quantization, each sample is approximated, or "rounded off," to the nearest quantized level, as shown in Fig. 1.4. As human perception has only limited accuracy, quantization with sufficient granularity does not compromise the signal quality. If amplitudes of the message signal m(t) lie in the range (−mp, mp), the quantizer partitions the signal range into L intervals. Each sample amplitude is approximated by the midpoint of the interval in which the sample value falls. Each sample is now represented by one of the L numbers. The information is thus digitized. Hence, after the two steps of sampling and quantizing, the analog-to-digital (A/D) conversion is completed.

The quantized signal is an approximation of the original one. We can improve the accuracy of the quantized signal to any desired level by increasing the number of levels L. For intelligibility of voice signals, for example, L = 8 or 16 is sufficient. For commercial use, L = 32 is a minimum, and for telephone communication, L = 128 or 256 is commonly used.
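As a concrete illustration, the following Python sketch samples a tone at a rate above 2B and applies the midpoint quantizer described above; the tone frequency, sampling rate, and L are illustrative choices, not values from the text:

```python
import numpy as np

# Sampling and midpoint quantization (illustrative parameters).
B = 5.0          # highest frequency in the signal, Hz
fs = 4 * B       # sampling rate, above the sampling theorem's minimum of 2B
L = 16           # number of quantization intervals
mp = 1.0         # message amplitudes lie in (-mp, mp)

t = np.arange(0, 1, 1 / fs)            # uniform sampling instants
m = mp * np.sin(2 * np.pi * B * t)     # samples of the message m(t)

# Partition (-mp, mp) into L intervals of width delta and replace each
# sample by the midpoint of the interval it falls in.
delta = 2 * mp / L
idx = np.clip(np.floor((m + mp) / delta), 0, L - 1).astype(int)
m_q = -mp + (idx + 0.5) * delta

print("max quantization error:", np.max(np.abs(m - m_q)))  # at most delta/2
```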

A typical distorted binary signal with noise acquired over the channel is shown in Fig. 1.3. If A is sufficiently large in comparison to typical noise amplitudes, the receiver can still correctly distinguish between the two pulses. The pulse amplitude is typically 5 to 10 times the rms noise amplitude. For such a high signal-to-noise ratio (SNR), the probability of error at the receiver is less than 10⁻⁶; that is, on the average, the receiver will make fewer than one error per million pulses. The effect of random channel noise and distortion is thus practically eliminated. Hence, when analog signals are transmitted by digital means, some error, or uncertainty, in the received signal can be caused by quantization, in addition to channel noise and interferences. By increasing L, we can reduce to any desired amount the uncertainty, or error, caused by quantization. At the same time, because of the use of regenerative repeaters, we can transmit signals over a much longer distance than would have been possible for the analog signal. As will be seen later in this text, the price for all these benefits of digital communication is paid in terms of increased processing complexity and bandwidth of transmission.
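The error-rate figure quoted above can be checked with a small Monte Carlo sketch. Assuming pulses of ±A/2 in Gaussian noise of rms value sigma, with A/2 = 5*sigma (the low end of the "5 to 10 times" range), the theoretical error probability is Q(5), about 2.9 × 10⁻⁷:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma = 1.0
A = 10.0 * sigma                        # so the pulse amplitude A/2 = 5*sigma

bits = rng.integers(0, 2, size=1_000_000)
tx = np.where(bits == 1, A / 2, -A / 2)           # transmitted pulses
rx = tx + sigma * rng.standard_normal(tx.size)    # channel adds noise

decisions = (rx > 0).astype(int)        # "is the received pulse positive?"
print("measured error rate:", np.mean(decisions != bits))
print("theoretical Q(5):   ", norm.sf(5.0))       # about 2.9e-7, below 1e-6
```

With only a million pulses, the measured count will usually be zero errors, consistent with the fewer-than-one-per-million figure.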

1.2.4 Pulse-Coded Modulation—A Digital Representation

Once the A/D conversion is over, the original analog message is represented by a sequence of samples, each of which takes on one of the L preset quantization levels. The transmission of this quantized sequence is the task of digital communication systems. For this reason, signal waveforms must be used to represent the quantized sample sequence in the transmission process. Similarly, a digital storage device also would need to represent the samples as signal waveforms. Pulse-coded modulation (PCM) is a very simple and yet common mechanism for this purpose.

First, one information bit refers to one binary digit of 1 or 0. The idea of PCM is to represent each quantized sample by an ordered combination of two basic pulses: p1(t) representing 1 and p0(t) representing 0. Because each of the L possible sample values can be written as a bit string of length log2 L, each sample can therefore also be mapped into a short pulse sequence that represents the binary sequence of bits. For example, if L = 16, then each quantized level can be described uniquely by 4 bits. If we use two basic pulses, p1(t) = A/2 and p0(t) = −A/2, a sequence of four such pulses gives 2 × 2 × 2 × 2 = 16 distinct patterns, as shown in Fig. 1.5. We can assign one pattern to each of the 16 quantized values to be transmitted. Each quantized sample is now coded into a sequence of four binary pulses. This is the principle of PCM transmission, where signaling is carried out by means of only two basic pulses (or symbols).
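A minimal sketch of this binary PCM mapping (the function and variable names are ours, chosen for illustration):

```python
A = 1.0  # pulse amplitude scale, so p1 = +A/2 and p0 = -A/2

def pcm_encode(level: int, n_bits: int = 4) -> list[float]:
    """Map a quantized level in 0..2**n_bits - 1 to a pulse-amplitude sequence."""
    word = format(level, f"0{n_bits}b")            # e.g. 5 -> "0101"
    return [A / 2 if b == "1" else -A / 2 for b in word]

for level in (0, 5, 15):
    print(level, pcm_encode(level))
# 0 -> four -A/2 pulses, 5 -> [-A/2, +A/2, -A/2, +A/2], 15 -> four +A/2 pulses
```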

Figure 1.5 Example of PCM encoding. Each digit's 4-bit binary equivalent is transmitted as a pulse code waveform of positive (1) and negative (0) pulses.

Digit   Binary equivalent
0       0000
1       0001
2       0010
3       0011
4       0100
5       0101
6       0110
7       0111
8       1000
9       1001
10      1010
11      1011
12      1100
13      1101
14      1110
15      1111

The binary case is of great practical importance because of its simplicity and ease of detection. Much of today's digital communication is binary.*

Although PCM was invented by P. M. Rainey in 1926 and rediscovered by A. H. Reeves in 1939, it was not until the early 1960s that the Bell System installed the first communication link using PCM for digital voice transmission. The cost and size of vacuum tube circuits were the chief impediments to the use of PCM in the early days before the discovery of semiconductor devices. It was the transistor that made PCM practicable.

From all these discussions on PCM, we arrive at a rather interesting (and to a certain extent not obvious) conclusion—that every possible communication can be carried on with a minimum of two symbols. Thus, merely by using a proper sequence of winks of the eye, one can convey any message, be it a conversation, a book, a movie, or an opera. Every possible detail (such as various shades of colors of the objects and tones of the voice, etc.) that is reproducible on a movie screen or on the high-definition color television can be conveyed with no less accuracy, merely by winks of an eye.†

* An intermediate case exists where we use four basic pulses (quaternary pulses) of amplitudes ±A/2 and ±3A/2. A sequence of two quaternary pulses can form 4 × 4 = 16 distinct levels of values.
† Of course, to convey the information in a movie or a television program in real time, the winking would have to be at an inhumanly high speed. For example, the HDTV signal is represented by 19 million bits (winks) per second.


1.3 CHANNEL EFFECT, SIGNAL-TO-NOISE RATIO, AND CAPACITY

In designing communication systems, it is important to understand and analyze important factors such as the channel and signal characteristics, the relative noise strength, the maximum number of bits that can be sent over a channel per second, and, ultimately, the signal quality.

1.3.1 Signal Bandwidth and Power

In a given (digital) communication system, the fundamental parameters and physical limitations that control the rate and quality are the channel bandwidth B and the signal power Ps. Their precise and quantitative relationships will be discussed in later chapters. Here we shall demonstrate these relationships qualitatively.

The bandwidth of a channel is the range of frequencies that it can transmit with reasonable fidelity. For example, if a channel can transmit with reasonable fidelity a signal whose frequency components vary from 0 Hz (dc) up to a maximum of 5000 Hz (5 kHz), the channel bandwidth B is 5 kHz. Likewise, each signal also has a bandwidth that measures the maximum range of its frequency components.

The faster a signal changes, the higher its maximum frequency is, and the larger its bandwidth is. Signals rich in content that changes quickly (such as those for battle scenes in a video) have larger bandwidth than signals that are dull and vary slowly (such as those for a daytime soap opera or a video of sleeping animals). A signal can be successfully sent over a channel if the channel bandwidth exceeds the signal bandwidth.

To understand the role of B, consider the possibility of increasing the speed of information transmission by compressing the signal in time. Compressing a signal in time by a factor of 2 allows it to be transmitted in half the time, and the transmission speed (rate) doubles. Time compression by a factor of 2, however, causes the signal to "wiggle" twice as fast, implying that the frequencies of its components are doubled. Many people have had firsthand experience of this effect when playing a piece of audiotape twice as fast, making the voices of normal people sound like the high-pitched speech of cartoon characters. Now, to transmit this compressed signal without distortion, the channel bandwidth must also be doubled. Thus, the rate of information transmission that a channel can successfully carry is directly proportional to B. More generally, if a channel of bandwidth B can transmit N pulses per second, then to transmit KN pulses per second by means of the same technology, we need a channel of bandwidth KB. To reiterate, the number of pulses per second that can be transmitted over a channel is directly proportional to its bandwidth B.

The signal power Ps plays a dual role in information transmission. First, Ps is related to the quality of transmission. Increasing Ps strengthens the signal pulse and diminishes the effect of channel noise and interference. In fact, the quality of either analog or digital communication systems varies with the signal-to-noise ratio (SNR). In any event, a certain minimum SNR at the receiver is necessary for successful communication. Thus, a larger signal power Ps allows the system to maintain a minimum SNR over a longer distance, thereby enabling successful communication over a longer span.

The second role of the signal power is less obvious, although equally important. From the information theory point of view, the channel bandwidth B and the signal power Ps are, to some extent, exchangeable; that is, to maintain a given rate and accuracy of information transmission, we can trade Ps for B, and vice versa. Thus, one may use less B if one is willing to increase Ps, or one may reduce Ps if one is given bigger B. The rigorous proof of this will be provided in Chapter 14.

In short, the two primary communication resources are the bandwidth and the transmitted power. In a given communication channel, one resource may be more valuable than the other, and the communication scheme should be designed accordingly. A typical telephone channel, for example, has a limited bandwidth (3 kHz), but the power is less restrictive. On the other hand, in space vehicles, huge bandwidth is available but the power is severely limited. Hence, the communication solutions in the two cases are radically different.

1.3.2 Channel Capacity and Data Rate

Channel bandwidth limits the bandwidth of signals that can successfully pass through, whereas signal SNR at the receiver determines the recoverability of the transmitted signals. Higher SNR means that the transmitted signal pulse can use more signal levels, thereby carrying more bits with each pulse transmission. Higher bandwidth B also means that one can transmit more pulses (faster variation) over the channel. Hence, SNR and bandwidth B can both affect the underlying channel "throughput." The peak throughput that can be reliably carried by a channel is defined as the channel capacity.

One of the most commonly encountered channels is known as the additive white Gaussian noise (AWGN) channel. The AWGN channel model assumes no channel distortions except for the additive white Gaussian noise and its finite bandwidth B. This ideal model captures application cases with distortionless channels and provides a performance upper bound for more general distortive channels. The band-limited AWGN channel capacity was dramatically highlighted by Shannon's equation,

C = B log2(1 + SNR) bit/s        (1.1)

Here the channel capacity C is the upper bound on the rate of information transmission per second. In other words, C is the maximum number of bits that can be transmitted per second with a probability of error arbitrarily close to zero; that is, the transmission is as accurate as one desires. The capacity only points out this possibility, however; it does not specify how it is to be realized. Moreover, it is impossible to transmit at a rate higher than this without incurring errors. Shannon's equation clearly brings out the limitation on the rate of communication imposed by B and SNR. If there is no noise on the channel (assuming SNR = ∞), then the capacity C would be ∞, and the communication rate could be arbitrarily high. We could then transmit any amount of information in the world over one noiseless channel. This can be readily verified. If noise were zero, there would be no uncertainty in the received pulse amplitude, and the receiver would be able to detect any pulse amplitude without error. The minimum pulse amplitude separation can be arbitrarily small, and for any given pulse, we have an infinite number of fine levels available. We can assign one level to every possible message. Because an infinite number of levels are available, it is possible to assign one level to any conceivable message. Cataloging such a code may not be practical, but that is beside the point. Rather, the point is that if the noise is zero, communication ceases to be a problem, at least theoretically. Implementation of such a scheme would be difficult because of the requirement of generation and detection of pulses of precise amplitudes. Such practical difficulties would then set a limit on the rate of communication. It should be remembered that Shannon's result, which represents the upper limit on the rate of communication over a channel, would be achievable only with a system of monstrous and impractical complexity, and with a time delay in reception approaching infinity. Practical systems operate at rates below the Shannon rate.
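Equation (1.1) is easy to evaluate numerically. A minimal sketch, assuming the 3 kHz telephone bandwidth quoted in Section 1.3.1 and an illustrative 30 dB receiver SNR of our own choosing:

```python
import math

def capacity(B_hz: float, snr_linear: float) -> float:
    """Shannon capacity, Eq. (1.1): C = B * log2(1 + SNR)."""
    return B_hz * math.log2(1 + snr_linear)

B = 3000.0                    # telephone channel bandwidth, Hz (Sec. 1.3.1)
snr = 10 ** (30.0 / 10)       # assumed 30 dB SNR as a linear power ratio
print(f"C = {capacity(B, snr):,.0f} bit/s")        # about 29,902 bit/s

# The unequal resource exchange: doubling SNR adds roughly B bit/s,
# while doubling B nearly doubles C when the SNR is high.
print(f"{capacity(B, 2 * snr):,.0f} vs {capacity(2 * B, snr):,.0f} bit/s")
```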


In conclusion, Shannon's capacity equation demonstrates qualitatively the basic role played by B and SNR in limiting the performance of a communication system. These two parameters then represent the ultimate limitation on the rate of communication. The possibility of resource exchange between these two basic parameters is also demonstrated by the Shannon equation.

As a practical example of trading SNR for bandwidth B, consider the scenario in which we meet a soft-spoken man who speaks a little bit too fast for us to fully understand. This means that as listeners, our bandwidth B is too low and therefore, the capacity C is not high enough to accommodate the rapidly spoken sentences. However, if the man can speak louder (increasing power and hence the SNR), we are likely to understand him much better without changing anything else. This example illustrates the concept of resource exchange between SNR and B. Note, however, that this is not a one-to-one trade. Doubling the speaker volume allows the speaker to talk a little faster, but not twice as fast. This unequal trade effect is fully captured by Shannon's equation [Eq. (1.1)], where doubling the SNR cannot always compensate for a 50% loss of B.

1.4 MODULATION AND DETECTION

Analog signals generated by the message sources or digital signals generated through A/D conversion of analog signals are often referred to as baseband signals because they typically are low pass in nature. Baseband signals may be directly transmitted over a suitable channel (e.g., telephone, fax). However, depending on the channel and signal frequency domain characteristics, baseband signals produced by various information sources are not always suitable for direct transmission over a given channel. When the signal and channel frequency bands do not match, the channel cannot be moved; hence, the message must be moved to the right channel frequency bandwidth. Message signals must therefore be further modified to facilitate transmission. In this conversion process, known as modulation, the baseband signal is used to modify (i.e., modulate) some parameter of a radio-frequency (RF) carrier signal.

A carrier is a sinusoid of high frequency. Through modulation, one of the carrier sinusoidal parameters—such as amplitude, frequency, or phase—is varied in proportion to the baseband signal m(t). Accordingly, we have amplitude modulation (AM), frequency modulation (FM), or phase modulation (PM). Figure 1.6 shows a baseband signal m(t) and the corresponding AM and FM waveforms. In AM, the carrier amplitude varies in proportion to m(t), and in FM, the carrier frequency varies in proportion to m(t). To reconstruct the baseband signal at the receiver, the modulated signal must pass through a reversal process called demodulation.
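A short sketch generating AM and FM waveforms like those of Fig. 1.6 from a single-tone message; the carrier frequency, tone frequency, and modulation depths are illustrative values chosen only to make the effects visible:

```python
import numpy as np

fs, fc, f_tone = 10_000, 500.0, 20.0   # sample rate, carrier, message tone (Hz)
t = np.arange(0, 0.2, 1 / fs)
m = np.cos(2 * np.pi * f_tone * t)     # baseband message m(t)

# AM: the carrier amplitude tracks m(t).
am = (1 + 0.7 * m) * np.cos(2 * np.pi * fc * t)

# FM: the instantaneous frequency is fc + kf*m(t), so the carrier phase
# is the running integral of that frequency.
kf = 100.0                             # peak frequency deviation, Hz
phase = 2 * np.pi * np.cumsum(fc + kf * m) / fs
fm_wave = np.cos(phase)
```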

As mentioned earlier, modulation is used to facilitate transmission. Some of the important reasons for modulation are given next.

1.4.1 Ease of Radiation/Transmission

For efficient radiation of electromagnetic energy, the radiating antenna should be on the order of a fraction or more of the wavelength of the driving signal. For many baseband signals, the wavelengths are too large for reasonable antenna dimensions. For example, the power in a speech signal is concentrated at frequencies in the range of 100 to 3000 Hz. The corresponding wavelength is 100 to 3000 km. This long wavelength would necessitate an impractically large antenna. Instead, by modulating a high-frequency carrier, we effectively translate the signal spectrum to the neighborhood of the carrier frequency that corresponds to a much smaller wavelength. For example, a 10 MHz carrier has a wavelength of only 30 m, and its transmission can be achieved with an antenna size on the order of 3 m.

Figure 1.6 Modulation: (a) carrier; (b) modulating (baseband) signal m(t); (c) amplitude-modulated wave; (d) frequency-modulated wave.

In this respect, modulation is like letting the baseband signal hitch a ride on a high-frequency sinusoid (carrier). The carrier and the baseband signal may also be compared to a stone and a piece of paper. If we wish to throw a piece of paper, it cannot go too far by itself. But if it is wrapped around a stone (a carrier), it can be thrown over a longer distance.
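The antenna sizing above follows from wavelength = c/f. A quick check of the numbers quoted in this subsection:

```python
c = 3e8  # speed of light, m/s

for f in (100.0, 3000.0, 10e6):        # speech band edges and a 10 MHz carrier
    lam = c / f
    print(f"f = {f:>12,.0f} Hz -> wavelength = {lam:,.0f} m, "
          f"tenth-wave antenna = {lam / 10:,.0f} m")
# 100 Hz -> 3,000 km; 3000 Hz -> 100 km; 10 MHz -> 30 m (a 3 m antenna)
```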

1.4.2 Simultaneous Transmission of Multiple Signals—Multiplexing

Modulation also allows multiple signals to be transmitted at the same time in the same geographical area without direct mutual interference. A case in point is simply demonstrated by considering the output of multiple television stations carried by the same cable (or over the air) to people's television receivers. Without modulation, multiple video signals would all be interfering with one another because all baseband video signals effectively have the same bandwidth. Thus, cable TV or broadcast TV without modulation would be limited to one station at a time in a given location—a highly wasteful protocol because the channel bandwidth is many times larger than that of the signal.

One way to solve this problem is to use modulation. We can use various TV stations to modulate different carrier frequencies, thus translating each signal to a different frequency range. If the various carriers are chosen sufficiently far apart in frequency, the spectra of the modulated signals (known as TV channels) will not overlap and thus will not interfere with each other. At the receiver (TV set), a tunable bandpass filter can select the desired station or TV channel for viewing. This method of transmitting several signals simultaneously, over nonoverlapping frequency bands, is known as frequency division multiplexing (FDM). A similar approach is also used in AM and FM radio broadcasting. Here the bandwidth of the channel is shared by various signals without any overlapping.

Another method of multiplexing several signals is known as time division multiplexing (TDM). This method is suitable when a signal is in the form of a pulse train (as in PCM). When the pulses are made narrower, the spaces left between pulses of one user signal are used for pulses from other signals. Thus, in effect, the transmission time is shared by a number of signals by interleaving the pulse trains of various signals in a specified order. At the receiver, the pulse trains corresponding to various signals are separated.
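A minimal TDM sketch: three users' sample streams are interleaved in a fixed round-robin order onto the shared channel and de-interleaved at the receiver (the data values are arbitrary placeholders):

```python
import numpy as np

users = np.array([[1, 2, 3, 4],       # user 0's samples
                  [5, 6, 7, 8],       # user 1's samples
                  [9, 10, 11, 12]])   # user 2's samples

tdm = users.T.reshape(-1)             # interleave: 1, 5, 9, 2, 6, 10, ...
print("on the channel:", tdm)

recovered = tdm.reshape(-1, 3).T      # receiver separates the pulse trains
print("user 1 recovered:", recovered[1])   # [5 6 7 8]
```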

1.4.3 Demodulation

Once multiple modulated signals have arrived at the receiver, the desired signal must be detected and recovered into its original baseband form. Note that because of FDM, the first stage of a demodulator typically requires a tunable bandpass filter so that the receiver can select the modulated signal at a predetermined frequency band specified by the transmission station or channel. Once a particular modulated signal has been isolated, the demodulator will then need to convert the carrier variation of amplitude, frequency, or phase, back into the baseband signal voltage.

For the three basic modulation schemes of AM, FM, and PM, the corresponding demodulators must be designed such that the detector output voltage varies in proportion to the input modulated signal's amplitude, frequency, and phase, respectively. Once circuits with such response characteristics have been implemented, the demodulators can downconvert the modulated (RF) signals back into the baseband signals that represent the original source message, be it audio, video, or data.

1.5 DIGITAL SOURCE CODING AND ERROR CORRECTION CODING

As stated earlier, SNR and bandwidth are two factors that determine the performance of a given communication system. Unlike analog communication systems, digital systems often adopt aggressive measures to lower the source data rate and to fight against channel noise. In particular, source coding is applied to generate the fewest bits possible for a given message without sacrificing its detection accuracy. On the other hand, to combat errors that arise from noise and interferences, redundancy needs to be introduced systematically at the transmitter, such that the receivers can rely on the redundancy to correct errors caused by channel distortion and noise. This process is known as error correction coding by the transmitter and decoding by the receiver.

Source coding and error correction coding are two successive stages in a digital communication system that work in a see-saw battle. On one hand, the job of source coding is to remove as much redundancy from the message as possible to shorten the digital message sequence that requires transmission. Source coding aims to use as little bandwidth as possible without considering channel noise and interference. On the other hand, error correction coding intentionally introduces redundancy intelligently, such that if errors occur upon detection, the redundancy can help correct the most likely errors.


Randomness, Redundancy, and Source Coding

To understand source coding, it is important to first discuss the role of randomness in communications. As noted earlier, channel noise is a major factor limiting communication performance because it is random and cannot be removed by prediction. On the other hand, randomness is also closely associated with the desired signals in communications. Indeed, randomness is the essence of communication. Randomness means unpredictability, or uncertainty, of a source message. If a source had no unpredictability, like a friend who always wants to repeat the same story on "how I was abducted by an alien," then the message would be known beforehand and would contain no information. Similarly, if a person winks, it conveys some information in a given context. But if a person winks continuously with the regularity of a clock, the winks convey no information. In short, a predictable signal is not random and is fully redundant. Thus, a message contains information only if it is unpredictable. Higher predictability means higher redundancy and, consequently, less information. Conversely, more unpredictable or less likely random signals contain more information.

Source coding reduces redundancy based on the predictability of the message source. The objective of source coding is to use codes that are as short as possible to represent the source signal. Shorter codes are more efficient because they require less time to transmit at a given data rate. Hence, source coding should remove signal redundancy while encoding and transmitting the unpredictable, random part of the signal. The more predictable messages contain more redundancy and require shorter codes, while messages that are less likely contain more information and should be encoded with longer codes. By assigning more likely messages with shorter source codes and less likely messages with longer source codes, one obtains more efficient source coding. Consider the Morse code, for example. In this code, various combinations of dashes and dots (code words) are assigned to each letter. To minimize transmission time, shorter code words are assigned to more frequently occurring (more probable) letters (such as e, t, and a) and longer code words are assigned to rarely occurring (less probable) letters (such as x, q, and z). Thus, on average, messages in English would tend to follow a known letter distribution, thereby leading to shorter code sequences that can be quickly transmitted. This explains why Morse code is a good source code.

It will be shown in Chapter 14 that for digital signals, the overall transmission time is minimized if a message (or symbol) of probability P is assigned a code word with a length proportional to log(1/P). Hence, from an engineering point of view, the information of a message with probability P is proportional to log(1/P). This is known as entropy (source) coding.
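The log(1/P) rule can be illustrated numerically. The letter probabilities below are rough illustrative figures of our own, not exact English statistics:

```python
import math

probs = {"e": 0.13, "t": 0.09, "a": 0.08, "q": 0.001, "z": 0.0007}

for sym, p in probs.items():
    # A symbol of probability P ideally gets a code about log2(1/P) bits long.
    print(f"{sym}: P = {p:<7} ideal code length ~ {math.log2(1 / p):5.1f} bits")
```

Frequent letters like e come out near 3 bits while rare letters like q need about 10, mirroring the short and long Morse code words above.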

Error Correction Coding

Error correction coding also plays an important role in communication. While source coding removes redundancy, error correction codes add redundancy. The systematic introduction of redundancy supports reliable communication.4 Because of redundancy, if certain bits are in error due to noise or interference, other related bits may help them recover, allowing us to decode a message accurately despite errors in the received signal. All languages are redundant. For example, English is about 50% redundant; that is, on the average, we may throw out half the letters or words without losing the meaning of a given message. This also means that in any English message, the speaker or the writer has free choice over half the letters or words, on the average. The remaining half is determined by the statistical structure of the language. If all the redundancy of English were removed, it would take about half the time to transmit a telegram or telephone conversation. If an error occurred at the receiver, however, it would be rather difficult to make sense out of the received message. The redundancy in a message, therefore, plays a useful role in combating channel noises and interferences.


It may appear paradoxical that in source coding we would remove redundancy, only to add more redundancy at the subsequent error correction coding. To explain why this is sensible, consider the removal of all redundancy in English through source coding. This would shorten the message by 50% (for bandwidth saving). However, for error correction, we may restore some systematic redundancy, except that this well-designed redundancy is only half as long as what was removed by source coding while still providing the same amount of error protection. It is therefore clear that a good combination of source coding and error correction coding can remove inefficient redundancy without sacrificing error correction. In fact, a very popular problem in this field is the persistent pursuit of joint source-channel coding that can maximally remove signal redundancy without losing error correction.

How redundancy can enable error correction can be seen with an example: to transmit samples with L = 16 quantizing levels, we may use a group of four binary pulses, as shown in Fig. 1.5. In this coding scheme, no redundancy exists. If an error occurs in the reception of even one of the pulses, the receiver will produce a wrong value. Here we may use redundancy to eliminate the effect of possible errors caused by channel noise or imperfections. Thus, if we add to each code word one more pulse of such polarity as to make the number of positive pulses even, we have a code that can detect a single error in any place. Thus, to the code word 0001 we add a fifth pulse, of positive polarity, to make a new code word, 00011. Now the number of positive pulses is 2 (even). If a single error occurs in any position, this parity will be violated. The receiver knows that an error has been made and can request retransmission of the message. This is a very simple coding scheme. It can only detect an error; it cannot locate or correct it. Moreover, it cannot detect an even number of errors. By introducing more redundancy, it is possible not only to detect but also to correct errors. For example, for L = 16, it can be shown that properly adding three pulses will not only detect but also correct a single error occurring at any location. Details on the subject of error correcting codes will be discussed in Chapter 15.
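A sketch of this even-parity scheme on 4-bit PCM words (the helper names are ours):

```python
def add_parity(word: str) -> str:
    """Append one bit so the resulting 5-bit word has an even number of 1s."""
    return word + str(word.count("1") % 2)

def parity_ok(word: str) -> bool:
    return word.count("1") % 2 == 0

sent = add_parity("0001")          # -> "00011", two 1s: even parity
corrupted = "01011"                # one bit of "00011" flipped in transit
print(parity_ok(sent))             # True: no error detected
print(parity_ok(corrupted))        # False: single error detected
# Note: flipping two bits restores even parity, so double errors slip through,
# just as the text warns.
```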

1.6 A BRIEF HISTORICAL REVIEW OF MODERN TELECOMMUNICATIONS

Telecommunications (literally: communications at a distance) are always critical to human society. Even in ancient times, governments and military units relied heavily on telecommunications to gather information and to issue orders. The first type was with messengers on foot or on horseback; but the need to convey a short message over a large distance (such as one warning a city of approaching raiders) led to the use of fire and smoke signals. Using signal mirrors to reflect sunlight (heliography) was another effective way of telecommunication. Its first recorded use was in ancient Greece. Signal mirrors were also mentioned in Marco Polo's account of his trip to the Far East.1 These ancient visual communication technologies are, amazingly enough, digital. Fires and smoke in different configurations would form different codewords. On hills or mountains near Greek cities there were also special personnel for such communications, forming a chain of regenerative repeaters. In fact, fire and smoke signal platforms still dot the Great Wall of China. More interestingly, reflectors or lenses, equivalent to the amplifiers and antennas we use today, were used to directionally guide the light farther.

Naturally, these early visual communication systems were very tedious to set up and could transmit only several bits of information per hour. A much faster visual communication system was developed just over two centuries ago. In 1793 Claude Chappe of France invented and performed a series of experiments on the concept of "semaphore telegraph." His system was a series of signaling devices called semaphores, which were mounted on towers, typically spaced 10 km apart. (A semaphore looked like a large human figure with signal flags in both hands.) A receiving semaphore operator would transcribe visually, often with the aid of a telescope, and then relay the message from his tower to the next, and so on. This visual telegraph became the government telecommunication system in France and spread to other countries, including the United States. The semaphore telegraph was eventually eclipsed by electric telegraphy. Today, only a few remaining streets and landmarks with the name "Telegraph Hill" remind us of the place of this system in history. Still, visual communications (via Aldis lamps, ship flags, and heliographs) remained an important part of maritime communications well into the twentieth century.

These early telecommunication systems are optical systems based on visual receivers. Thus, they can cover only line-of-sight distance, and human operators are required to decode the signals. An important event that changed the history of telecommunication occurred in 1820, when Hans Christian Oersted of Denmark discovered the interaction between electricity and magnetism.2 Michael Faraday made the next crucial discovery, which changed the history of both electricity and telecommunications, when he found that electric current can be induced on a conductor by a changing magnetic field. Thus, electricity generation became possible by magnetic field motion. Moreover, the transmission of electric signals became possible by varying an electromagnetic field to induce current change in a distant circuit. The amazing aspect of Faraday's discovery on current induction is that it provides the foundation for wireless telecommunication over distances without line-of-sight, and more importantly, it shows how to generate electricity as an energy source to power such systems. The invention of the electric telegraph soon followed, and the world entered the modern electric telecommunication era.

Modern communication systems have come a long way from their infancy. Since it would be difficult to detail all the historical events that mark the recent development of telecommunication, we shall instead use Table 1.1 to chronicle some of the most notable events in the development of modern communication systems. Since our focus is on electrical telecommunication, we shall refrain from reviewing the equally long history of optical (fiber) communications.

It is remarkable that all the early telecommunication systems are symbol-based digital systems. It was not until Alexander Graham Bell's invention of the telephone system that analog live signals were transmitted. Live signals can be instantly heard or seen by the receiving users. The Bell invention that marks the beginning of a new (analog communication) era is therefore a major milestone in the history of telecommunications. Figure 1.7 shows a copy of an illustration from Bell's groundbreaking 1876 telephone patent. Scientific historians often hail this invention as the most valuable patent ever issued in history.

The invention of telephone systems also marks the beginning of the analog communication era and live signal transmission. On an exciting but separate path, wireless communication began in 1887, when Heinrich Hertz first demonstrated a way to detect the presence of electromagnetic waves. French scientist Edouard Branly, English physicist Oliver Lodge, and Russian inventor Alexander Popov all made important contributions to the development of radio receivers. Another important contributor to this area was the Croatian-born genius Nikola Tesla. Building upon earlier experiments and inventions, Italian scientist and inventor Guglielmo Marconi developed a wireless telegraphy system in 1895 for which he shared the Nobel Prize in Physics in 1909. Marconi's wireless telegraphy marked a historical event of commercial wireless communications. Soon, the marriage of the inventions of Bell and Marconi allowed analog audio signals to go wireless, thanks to amplitude modulation (AM) technology.


TABLE 1.1 Important Events of the Past Two Centuries of Telecommunications

Year     Major Events
1820     First experiment of electric current causing magnetism (by Hans C. Oersted)
1831     Discovery of induced current from electromagnetic radiation (by Michael Faraday)
1830-32  Birth of telegraph (credited to Joseph Henry and Pavel Schilling)
1837     Invention of Morse code by Samuel F. B. Morse
1864     Theory of electromagnetic waves developed by James C. Maxwell
1866     First transatlantic telegraph cable in operation
1876     Invention of telephone by Alexander G. Bell
1878     First telephone exchange in New Haven, Connecticut
1887     Detection of electromagnetic waves by Heinrich Hertz
1896     Wireless telegraphy (radio telegraphy) patented by Guglielmo Marconi
1901     First transatlantic radio telegraph transmission by Marconi
1906     First amplitude modulation radio broadcasting (by Reginald A. Fessenden)
1907     Regular transatlantic radio telegraph service
1915     First transcontinental telephone service
1920     First commercial AM radio stations
1921     Mobile radio adopted by Detroit Police Department
1925     First television system demonstration (by Charles F. Jenkins)
1928     First television station W3XK in the United States
1935     First FM radio demonstration (by Edwin H. Armstrong)
1941     NTSC black and white television standard; first commercial FM radio service
1947     Cellular concept first proposed at Bell Labs
1948     First major information theory paper published by Claude E. Shannon; invention of transistor by William Shockley, Walter Brattain, and John Bardeen
1949     Construction of the Golay code for 3 (or fewer) bit error correction
1950     Hamming codes constructed for simple error corrections
1953     NTSC color television standard
1958     Integrated circuit proposed by Jack Kilby (Texas Instruments)
1960     Construction of the powerful Reed-Solomon error correcting codes
1962     First computer telephone modem developed: Bell Dataphone 103A (300 bit/s)
1962     Low-density parity check error correcting codes proposed by Robert G. Gallager
1968-69  First error correction encoders on board NASA space missions (Pioneer IX and Mariner VI)
1971     First wireless computer network: AlohaNet
1973     First portable cellular telephone demonstration to the U.S. Federal Communications Commission, by Motorola
1978     First mobile cellular trial by AT&T
1984     First handheld (analog) AMPS cellular phone service by Motorola
1989     Development of DSL modems for high-speed computer connections
1991     First (digital) GSM cellular service launched (Finland); first wireless local area network (LAN) developed (AT&T-NCR)
1993     Digital ATSC standard established
1993     Turbo codes proposed by Berrou, Glavieux, and Thitimajshima
1996     First commercial CDMA (IS-95) cellular service launched; first HDTV broadcasting
1997     IEEE 802.11(b) wireless LAN standard
1998     Large-scope commercial ADSL deployment
1999     IEEE 802.11a wireless LAN standard
2000     First 3G cellular service launched
2003     IEEE 802.11g wireless LAN standard


Figure 1.7 Illustration from Bell's U.S. Patent No. 174,465, issued March 7, 1876. (From the U.S. Patent and Trademark Office.)

broadcast was first demonstrated by American inventor Major Edwin H. Armstrong. Armstrong's FM demonstration in 1935 took place at an IEEE meeting in New York's Empire State Building.

A historic year for both communications and electronics was 1948, the year that witnessed the rebirth of digital communications and the invention of semiconductor transistors. The rebirth of digital communications is owing to the originality and brilliance of Claude E. Shannon, widely known as the father of modern digital communication and information theory. In two seminal articles published in 1948, he first established the fundamental concept of channel capacity and its relation to information transmission rate. Deriving the channel capacity of several important models, Shannon³ proved that as long as information is transmitted through a channel at a rate below the channel capacity, error-free communication is possible. Given noisy channels, Shannon showed the existence of good codes that can make the probability of transmission error arbitrarily small. This noisy channel coding theorem gave rise to the modern field of error correcting codes. Coincidentally, the invention of the first transistor in the same year (by William Shockley, Walter Brattain, and John Bardeen) paved the way to the design and implementation of more compact, more powerful, and less noisy circuits to put Shannon's theorems into practical use. The launch of the Mariner IX Mars orbiter in March of 1971 was the first NASA mission officially equipped with error correcting codes, which reliably transmitted photos taken from Mars.

Today, we are in an era of digital and multimedia communications, marked by the widespread applications of computer networking and cellular phones. The first telephone modem for home computer connection to a mainframe was developed by AT&T Bell Labs in 1962. It uses an acoustic coupler to interface with a regular telephone handset. The acoustic coupler converts the local computer data into audible tones and uses the regular telephone microphone


to transmit the tones over telephone lines. The coupler receives the mainframe computer data via the telephone headphone and converts them into bits for the local computer terminal, typically at rates below 300 bit/s. Rapid advances in integrated circuits (first credited to Jack Kilby in 1958) and digital communication technology dramatically increased the link rate to 56 kbit/s by the 1990s. By 2000, wireless local area network (WLAN) modems had been developed to connect computers at speeds of up to 11 Mbit/s. These commercial WLAN modems, the size of a credit card, were first standardized as IEEE 802.11b.

Technological advances also dramatically reshaped cellular systems. While the cellular concept was developed in 1947 at Bell Labs, commercial systems were not available until 1983. The "mobile" phones of the 1980s were bulky and expensive, mainly used for business. The world's first cellular phone, developed by Motorola in 1983 and known as the DynaTAC 8000X, weighed 28 ounces and cost $3995, earning it the nickname "the brick." These analog phones were basically two-way FM radios for voice only. Today, a cellphone is truly a multimedia, multifunctional device that is useful not only for voice communication but also for sending and receiving e-mail, accessing websites, and displaying videos. Cellular devices are now very small, weighing no more than a few ounces. Unlike in the past, cellular phones are now for the masses. In fact, Europe now has more cellphones than people. In Africa, 13% of the adult population now owns a cellular phone.

Throughout history, the progress of human civilization has been inseparable from technological advances in telecommunications. Telecommunications played a key role in almost every major historical event. It is not an exaggeration to state that telecommunications helped shape the very world we live in today and will continue to define our future. It is therefore the authors' hope that this text can help stimulate the interest of many students in telecommunication technologies. By providing the fundamental principles of modern digital and analog communication systems, the authors hope to provide a solid foundation for the training of future generations of communication scientists and engineers.

REFERENCES

1. M. G. Murray, "Aimable Air/Sea Rescue Signal Mirrors," The Bent of Tau Beta Pi, pp. 29-32, Fall 2004.

2. B. Bunch and A. Hellemans, Eds., The History of Science and Technology: A Browser's Guide to the Great Discoveries, Inventions, and the People Who Made Them from the Dawn of Time to Today, Houghton Mifflin, Boston, 2004.

3. C. E. Shannon, "A Mathematical Theory of Communication," Bell Syst. Tech. J., part I: pp. 379-423; part II: pp. 623-656, July 1948.

4. S. Lin and D. J. Costello Jr., Error Control Coding, 2nd ed., Prentice Hall, Upper Saddle River, NJ, 2004.


2 SIGNALS AND SIGNAL SPACE

In this chapter we discuss certain basic signal concepts. Signals are processed by systems. We shall start by explaining the terms signals and systems.

Signals
A signal, as the term implies, is a set of information or data. Examples include a telephone or a television signal, the monthly sales figures of a corporation, and the daily closing prices of a stock market (e.g., the Dow Jones averages). In all these examples, the signals are functions of the independent variable time. When an electric charge is distributed over a surface, however, the signal is the charge density, a function of space rather than time. In this book we deal almost exclusively with signals that are functions of time. The discussion, however, applies equally well to other independent variables.

Systems
Signals may be processed further by systems, which may modify them or extract additional information from them. For example, an antiaircraft missile launcher may want to know the future location of a hostile moving target that is being tracked by radar. The radar signal gives the past location and velocity of the target. By properly processing the radar signal (the input), one can approximately estimate the future location of the target. Thus, a system is an entity that processes a set of signals (inputs) to yield another set of signals (outputs). A system may be made up of physical components, as in electrical, mechanical, or hydraulic systems (hardware realization), or it may be an algorithm that computes an output from an input signal (software realization).

2.1 SIZE OF A SIGNAL
The size of any entity is a quantity that indicates its largeness or strength. Generally speaking, signal amplitude varies with time. How can a signal that exists over a certain time interval with varying amplitude be measured by one number that will indicate the signal size or signal strength? Such a measure must consider not only the signal amplitude, but also its duration. For instance, if we are to devise a single number V as a measure of the size of a human being, we must consider not only his or her width (girth), but also the height. The product of girth


Figure 2.1 Examples of signals: (a) signal with finite energy; (b) signal with finite power.

and height is a reasonable measure of the size of a person. If we wish to be a little more precise, we can average this product over the entire length of the person. If we make a simplifying assumption that the shape of a person is a cylinder of radius r, which varies with the height h of the person, then a reasonable measure of the size of a person of height H is his volume V, given by

V = \pi \int_0^H r^2(h) \, dh

Signal Energy
Arguing in this manner, we may consider the area under a signal g(t) as a possible measure of its size, since it takes into account not only the amplitude, but also the duration. However, this would be a defective measure, because even for a large signal g(t), its positive and negative areas might cancel each other, wrongly indicating a signal of small size. This difficulty can be corrected by defining the signal size as the area under g²(t), which is always positive. We call this measure the signal energy E_g, defined (for a real signal) as

E_g = \int_{-\infty}^{\infty} g^2(t) \, dt   (2.1)

This definition can be generalized to a complex valued signal g(t) as

E_g = \int_{-\infty}^{\infty} |g(t)|^2 \, dt   (2.2)

There are also other possible measures of signal size, such as the area under |g(t)|. The energy measure, however, is not only more tractable mathematically, it is also more meaningful (as shown later) in the sense that it is indicative of the energy that can be extracted from the signal.

Signal Power
The signal energy must be finite for it to be a meaningful measure of signal size. A necessary condition for the energy to be finite is that the signal amplitude must → 0 as |t| → ∞ (Fig. 2.1a). Otherwise the integral in Eq. (2.1) will not converge.

If the amplitude of g(t) does not → 0 as |t| → ∞ (Fig. 2.1b), the signal energy is infinite. A more meaningful measure of the signal size in such a case would be the time average of the


energy (if it exists), which is the average power P_g, defined (for a real signal) by

P_g = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} g^2(t) \, dt   (2.3)

We can generalize this definition for a complex signal g(t) as

P_g = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |g(t)|^2 \, dt   (2.4)

Observe that the signal power P_g is the time average (mean) of the squared signal amplitude, that is, the mean squared value of g(t). Indeed, the square root of P_g is the familiar rms (root-mean-square) value of g(t).

The mean of an entity averaged over a large time interval approaching infinity exists if the entity is either periodic or has a statistical regularity. If such a condition is not satisfied, the average may not exist. For instance, a ramp signal g(t) = t increases indefinitely as |t| approaches infinity, and neither the energy nor the power exists for this signal.

Comments
It should be stressed that "signal energy" and "signal power" are inherent characteristic values of a signal. They have nothing to do with the consumption of the signal by any load. Signal energy and signal power are used to measure the signal strength or size. For instance, if we approximate a signal g(t) by another signal z(t), the approximation error is e(t) = g(t) - z(t). The energy (or power) of e(t) is a convenient indicator of the approximation accuracy. It provides us with a quantitative measure for determining the closeness of the approximation. It also allows us to determine whether one approximation is better than another. During transmission over a channel in a communication system, message signals are corrupted by unwanted signals (noise). The quality of the received signal is judged by the relative sizes of the desired signal and the unwanted signal (noise). In this case the ratio of the message signal and noise signal powers (the signal-to-noise power ratio) is a good indication of the received signal quality.

Units of Signal Energy and Power
The standard units of signal energy and power are the joule (J) and the watt (W). However, in practice, it is often customary to use logarithmic scales to describe signal power. This notation saves the trouble of dealing with decimal points and zeros when the signal power is very large or small. As a convention, a signal with an average power of P watts can be said to have a power of

[10 \log_{10} P] \ \text{dBW} \quad \text{or} \quad [30 + 10 \log_{10} P] \ \text{dBm}

For example, -30 dBm represents a signal power of 10⁻⁶ W in linear scale.
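As a quick numerical check of these conversions, the following MATLAB lines (a minimal sketch, not part of the original text) compute both logarithmic measures for the 10⁻⁶ W example above.

    % Convert a linear power P (in watts) to dBW and dBm.
    P = 1e-6;                    % the 10^-6 W example from the text
    P_dBW = 10*log10(P);         % -60 dBW
    P_dBm = 30 + 10*log10(P);    % -30 dBm, as stated above
    fprintf('P = %g W -> %.1f dBW = %.1f dBm\n', P, P_dBW, P_dBm);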

Example 2.1 Determine the suitable measures of the signals in Fig. 2.2.

The signal in Fig. 2.2a approaches 0 as |t| → ∞. Therefore, the suitable measure for this signal is its energy E_g, given by

E_g = \int_{-\infty}^{\infty} g^2(t)\, dt = \int_{-1}^{0} (2)^2 \, dt + \int_{0}^{\infty} \left(2e^{-t/2}\right)^2 \, dt = 4 + 4 = 8


Figure 2.2 Signals for Example 2.1.

The signal in Fig. 2.2b does not approach 0 as |t| → ∞. However, it is periodic, and therefore its power exists. We can use Eq. (2.3) to determine its power. For periodic signals, we can simplify the procedure by observing that a periodic signal repeats regularly each period (2 seconds in this case). Therefore, averaging g²(t) over an infinitely large interval is equivalent to averaging it over one period (2 seconds in this case). Thus

P_g = \frac{1}{2} \int_{-1}^{1} g^2(t)\, dt = \frac{1}{2} \int_{-1}^{1} t^2\, dt = \frac{1}{3}

Recall that the signal power is the square of its rms value. Therefore, the rms value of this signal is 1/√3.
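Both measures can be verified numerically. The sketch below assumes the waveforms read off Fig. 2.2 as reconstructed above: g(t) = 2 on [-1, 0] and 2e^{-t/2} for t ≥ 0 in part (a), and g(t) = t over one period [-1, 1] in part (b).

    % Numerical check of Example 2.1 (waveforms assumed from Fig. 2.2).
    Eg = integral(@(t) 4*ones(size(t)), -1, 0) ...
       + integral(@(t) (2*exp(-t/2)).^2, 0, Inf);   % expect 8
    Pg = (1/2)*integral(@(t) t.^2, -1, 1);          % expect 1/3
    fprintf('Eg = %.4f, Pg = %.4f\n', Eg, Pg);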

Example 2.2 Determine the power and the rms value of
(a) g(t) = C cos(ω₀t + θ)
(b) g(t) = C₁ cos(ω₁t + θ₁) + C₂ cos(ω₂t + θ₂)  (ω₁ ≠ ω₂)
(c) g(t) = D e^{jω₀t}

(a) This is a periodic signal with period T₀ = 2π/ω₀. The suitable measure of its size is power. Because it is a periodic signal, we may compute its power by averaging its energy over one period 2π/ω₀. However, for the sake of generality, we shall solve this problem by averaging over an infinitely large time interval using Eq. (2.3).

P_g = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} C^2 \cos^2(\omega_0 t + \theta)\, dt = \lim_{T \to \infty} \frac{C^2}{2T} \int_{-T/2}^{T/2} [1 + \cos(2\omega_0 t + 2\theta)]\, dt

    = \lim_{T \to \infty} \frac{C^2}{2T} \int_{-T/2}^{T/2} dt + \lim_{T \to \infty} \frac{C^2}{2T} \int_{-T/2}^{T/2} \cos(2\omega_0 t + 2\theta)\, dt

The first term on the right-hand side equals C²/2, while the second term is zero, because the integral appearing in this term represents the area under a sinusoid over a very large time interval T with T → ∞. This area is at most equal to the area of half a cycle, because of cancellations of the positive and negative areas of a sinusoid. The second term is this area multiplied by C²/2T with T → ∞. Clearly this term is zero, and

P_g = \frac{C^2}{2}   (2.5a)

This shows the well-known fact that a sinusoid of amplitude C has a power C²/2 regardless of its angular frequency ω₀ (ω₀ ≠ 0) and its phase θ. The rms value is C/√2. If the signal frequency is zero (dc, or a constant signal of amplitude C), the reader can show that the power is C².

(b) In this case

P_g = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} [C_1 \cos(\omega_1 t + \theta_1) + C_2 \cos(\omega_2 t + \theta_2)]^2 \, dt

    = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} C_1^2 \cos^2(\omega_1 t + \theta_1)\, dt + \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} C_2^2 \cos^2(\omega_2 t + \theta_2)\, dt
      + \lim_{T \to \infty} \frac{2 C_1 C_2}{T} \int_{-T/2}^{T/2} \cos(\omega_1 t + \theta_1) \cos(\omega_2 t + \theta_2)\, dt

Observe that the first and the second integrals on the right-hand side are the powers of the two sinusoids, which are C₁²/2 and C₂²/2, as found in part (a). We now show that the third term on the right-hand side is zero if ω₁ ≠ ω₂:

\lim_{T \to \infty} \frac{2 C_1 C_2}{T} \int_{-T/2}^{T/2} \cos(\omega_1 t + \theta_1) \cos(\omega_2 t + \theta_2)\, dt
    = \lim_{T \to \infty} \frac{C_1 C_2}{T} \int_{-T/2}^{T/2} \{ \cos[(\omega_1 + \omega_2)t + \theta_1 + \theta_2] + \cos[(\omega_1 - \omega_2)t + \theta_1 - \theta_2] \}\, dt = 0

Consequently,

P_g = \frac{C_1^2}{2} + \frac{C_2^2}{2}   (2.5b)

and the rms value is \sqrt{(C_1^2 + C_2^2)/2}.

We can readily extend this result to any sum of sinusoids with distinct angular frequencies ωₙ (ωₙ ≠ 0). In other words, if

g(t) = \sum_{n=1}^{\infty} C_n \cos(\omega_n t + \theta_n)

where none of the sinusoids have identical frequencies, then

P_g = \frac{1}{2} \sum_{n=1}^{\infty} C_n^2   (2.5c)

(c) In this case the signal is complex valued, and we use Eq. (2.4) to compute the power. Thus,

P_g = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |D e^{j\omega_0 t}|^2 \, dt

Recall that |e^{jω₀t}| = 1, so that |D e^{jω₀t}|² = |D|², and

P_g = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |D|^2 \, dt = |D|^2   (2.5d)

Its rms value is |D|.

Comment: In part (b), we have shown that the power of the sum of two sinusoids is equal to the sum of the powers of the sinusoids. It may appear that the power of g₁(t) + g₂(t) is simply P_{g₁} + P_{g₂}. Be cautioned against such a generalization! All we have proved here is that this is true if the two signals g₁(t) and g₂(t) happen to be sinusoids. It is not true in general! In fact, it is not true even for sinusoids if the two sinusoids are of the same frequency. We shall show later (Sec. 2.5.4) that only under a certain condition, called the orthogonality condition, is the power (or energy) of g₁(t) + g₂(t) equal to the sum of the powers (or energies) of g₁(t) and g₂(t).
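Equation (2.5b) is easy to confirm numerically over a large but finite averaging window; the amplitudes, frequencies, and phases below are arbitrary illustrative choices, so the result is only approximately (C₁² + C₂²)/2.

    % Numerical illustration of Eq. (2.5b) for two sinusoids of distinct frequencies.
    T  = 100;                                 % large (finite) averaging window
    C1 = 3; w1 = 2*pi*5;  th1 = 0.7;
    C2 = 2; w2 = 2*pi*11; th2 = -1.2;
    g  = @(t) C1*cos(w1*t + th1) + C2*cos(w2*t + th2);
    t  = linspace(-T/2, T/2, 2000001);        % fine grid for the oscillations
    Pg = trapz(t, g(t).^2)/T;
    fprintf('Pg = %.4f, (C1^2+C2^2)/2 = %.4f\n', Pg, (C1^2 + C2^2)/2);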

2.2 CLASSIFICATION OF SIGNALS
There are various classes of signals. Here we shall consider only the following pairs of classes that are suitable for the scope of this book.

1. Continuous time and discrete time signals

2. Analog and digital signals

3. Periodic and aperiodic signals

4. Energy and power signals

5. Deterministic and probabilistic signals

2.2.1 Continuous Time and Discrete Time Signals

A signal that is specified for every value of time t (Fig. 2.3a) is a continuous time signal, and a signal that is specified only at discrete points of t = nT (Fig. 2.3b) is a discrete time signal. Audio and video recordings are continuous time signals, whereas the quarterly gross domestic product (GDP), the monthly sales of a corporation, and stock market daily averages are discrete time signals.


Figure 2.3 (a) Continuous time and (b) discrete time signals (panel (b) shows the annualized U.S. GDP quarterly percentage growth).

2.2.2 Analog and Digital Signals

One should not confuse analog signals with continuous time signals. The two concepts are not the same. This is also true of the concepts of discrete time and digital signals. A signal whose amplitude can have any value in a continuous range is an analog signal. This means that an analog signal amplitude can take on an (uncountably) infinite number of values. A digital signal, on the other hand, is one whose amplitude can take on only a finite number of values. Signals associated with a digital computer are digital because they take on only two values (binary signals). For a signal to qualify as digital, the number of values need not be restricted to two. It can be any finite number. A digital signal whose amplitudes can take on M values is an M-ary signal, of which binary (M = 2) is a special case. The terms "continuous time" and "discrete time" qualify the nature of a signal along the time (horizontal) axis. The terms "analog" and "digital," on the other hand, describe the nature of the signal amplitude (vertical) axis. Figure 2.4 shows signals of various types. It is clear that analog is not necessarily continuous time, nor does digital need to be discrete time. Figure 2.4c shows an example of an analog, discrete time signal. An analog signal can be converted into a digital signal (analog-to-digital or A/D conversion) through quantization (rounding off), as explained in Section 6.2.
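As a preview of that quantization step, the following sketch rounds analog-valued samples to the nearest of M allowed levels; the signal, sampling grid, and M = 8 are illustrative choices, not taken from the text.

    % Minimal sketch of amplitude quantization (rounding to the nearest of M levels).
    t = 0:0.01:1;                   % sampling instants (discrete time)
    g = sin(2*pi*3*t);              % analog-valued samples in [-1, 1]
    M = 8;                          % allowed amplitude levels -> 8-ary signal
    levels = linspace(-1, 1, M);    % uniform quantizer levels
    [~, idx] = min(abs(g(:) - levels), [], 2);   % nearest level per sample
    gq = levels(idx);               % digital (M-ary) signal
    stairs(t, gq); hold on; plot(t, g); hold off;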


Figure 2.4 Examples of signals: (a) analog and continuous time; (b) digital and continuous time; (c) analog and discrete time; (d) digital and discrete time.


Figure 2.5 Periodic signal of period T₀.

2.2.3 Periodic and Aperiodic Signals

A signal g(t) is said to be periodic if there exists a positive constant T₀ such that

g(t) = g(t + T_0) \quad \text{for all } t   (2.6)

The smallest value of T₀ that satisfies the periodicity condition (2.6) is the period of g(t). The signal in Fig. 2.2b is a periodic signal with a period of 2. Naturally, a signal is aperiodic if it is not periodic. The signal in Fig. 2.2a is aperiodic.

By definition, a periodic signal g(t) remains unchanged when time-shifted by one period. This means that a periodic signal must start at t = -∞, because if it starts at some finite instant, say, t = 0, the time-shifted signal g(t + T₀) will start at t = -T₀ and g(t + T₀) would no longer be the same as g(t). Therefore, a periodic signal, by definition, must start from -∞ and continue forever, as shown in Fig. 2.5. Observe that a periodic signal shifted by an integral multiple of T₀ remains unchanged. Therefore, g(t) may also be considered to be a periodic signal with period mT₀, where m is any integer. However, by definition, the period is the smallest interval that satisfies periodicity condition (2.6). Therefore, T₀ is the period.


2.2.4 Energy and Power Signals

A signal with finite energy is an energy signal, and a signal with finite power is a power signal. In other words, a signal g(t) is an energy signal if

E_g = \int_{-\infty}^{\infty} |g(t)|^2 \, dt < \infty   (2.7)

Similarly, a signal with a finite and nonzero power (mean square value) is a power signal. In other words, a signal is a power signal if

0 < \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |g(t)|^2 \, dt < \infty   (2.8)

The signals in Fig. 2.2a and b are examples of energy and power signals, respectively. Observe that power is the time average of the energy. Since the averaging is over an infinitely large interval, a signal with finite energy has zero power, and a signal with finite power has infinite energy. Therefore, a signal cannot be both an energy signal and a power signal. If it is one, it cannot be the other. On the other hand, certain signals with infinite power are neither energy nor power signals. The ramp signal is one example.

Comments
Every signal observed in real life is an energy signal. A power signal, on the other hand, must have an infinite duration. Otherwise its power, which is its average energy (averaged over an infinitely large interval), will not approach a (nonzero) limit. Obviously it is impossible to generate a true power signal in practice, because such a signal would have infinite duration and infinite energy.

Also, because of periodic repetition, periodic signals for which the area under |g(t)|² over one period is finite are power signals; however, not all power signals are periodic.

2.2.5 Deterministic and Random Signals

A signal whose physical description is known completely, in either mathematical or graphical form, is a deterministic signal. If a signal is known only in terms of a probabilistic description, such as mean value, mean squared value, and distributions, rather than its full mathematical or graphical description, it is a random signal. Most of the noise signals encountered in practice are random signals. All message signals are random signals because, as will be shown later, a signal, to convey information, must have some uncertainty (randomness) about it. The treatment of random signals will be discussed in later chapters.

2.3 SOME USEFUL SIGNAL OPERATIONS

We discuss here three useful and common signal operations: shifting, scaling, and inversion. Since the signal variable in our signal description is time, these operations are discussed as time shifting, time scaling, and time inversion (or folding). However, this discussion is equally valid for functions having independent variables other than time (e.g., frequency or distance).


Figure 2.6 Time shifting a signal: (a) g(t); (b) the delayed signal g(t - T); (c) the advanced signal g(t + T).

2.3.1 Time Shifting

Consider a signal g(t) (Fig. 2.6a) and the same signal delayed by T seconds (Fig. 2.6b), which we shall denote by φ(t). Whatever happens in g(t) (Fig. 2.6a) at some instant t also happens in φ(t) (Fig. 2.6b) T seconds later, at the instant t + T. Therefore

\phi(t + T) = g(t)   (2.9)

or

\phi(t) = g(t - T)   (2.10)

Therefore, to time-shift a signal by T, we replace t with t - T. Thus, g(t - T) represents g(t) time-shifted by T seconds. If T is positive, the shift is to the right (delay). If T is negative, the shift is to the left (advance). Thus, g(t - 2) is g(t) delayed (right-shifted) by 2 seconds, and g(t + 2) is g(t) advanced (left-shifted) by 2 seconds.

2.3.2 Time Scaling

The compression or expansion of a signal in time is known as time scaling. Consider the signal g(t) of Fig. 2.7a. The signal φ(t) in Fig. 2.7b is g(t) compressed in time by a factor of 2. Therefore, whatever happens in g(t) at some instant t will happen in φ(t) at the instant t/2, so that

\phi\left(\frac{t}{2}\right) = g(t)   (2.11)

and

\phi(t) = g(2t)   (2.12)


Figure 2.7 Time scaling a signal: (a) g(t); (b) g(2t); (c) g(t/2).

Observe that because g(t) = 0 at t = T₁ and T₂, the same thing must happen in φ(t) at half these values. Therefore, φ(t) = 0 at t = T₁/2 and T₂/2, as shown in Fig. 2.7b. If g(t) were recorded on a tape and played back at twice the normal recording speed, we would obtain g(2t). In general, if g(t) is compressed in time by a factor a (a > 1), the resulting signal φ(t) is given by

\phi(t) = g(at)   (2.13)

We can use a similar argument to show that g(t) expanded (slowed down) in time by a factor a (a > 1) is given by

\phi(t) = g\left(\frac{t}{a}\right)   (2.14)

Figure 2.7c shows g(t/2), which is g(t) expanded in time by a factor of 2. Note that the signal remains anchored at t = 0 during the scaling operation (expanding or compressing). In other words, the signal at t = 0 remains unchanged. This is because g(t) = g(at) = g(0) at t = 0.

In summary, to time-scale a signal by a factor a, we replace t with at. If a > 1, the scaling is compression, and if a < 1, the scaling is expansion.

Example 2.3 Consider the signals g(t) and z(t) in Fig. 2.8a and b, respectively. Sketch (a) g(3t) and (b) z(t/2).

(a) g(3t) is g(t) compressed by a factor of 3. This means that the values of g(t) at t = 6, 12, 15, and 24 occur in g(3t) at the instants t = 2, 4, 5, and 8, respectively, as shown in Fig. 2.8c.

(b) z(t/2) is z(t) expanded (slowed down) by a factor of 2. The values of z(t) at t = 1, -1, and -3 occur in z(t/2) at instants 2, -2, and -6, respectively, as shown in Fig. 2.8d.


Figure 2.8 Examples of time compression and time expansion of signals.

2.3.3 Time Inversion (or Folding)

Time inversion may be considered a special case of time scaling with a = -1 in Eq. (2.13). Consider the signal g(t) in Fig. 2.9a. We can view g(t) as a rigid wire frame hinged at the vertical axis. To invert g(t), we rotate this frame 180° about the vertical axis. This time inversion, or folding [the mirror image of g(t) about the vertical axis], gives us the signal φ(t) (Fig. 2.9b). Observe that whatever happens in Fig. 2.9a at some instant t also happens in Fig. 2.9b at the instant -t. Therefore

\phi(-t) = g(t)

and

\phi(t) = g(-t)   (2.15)

Therefore, to time-invert a signal we replace t with -t; the time inversion of g(t) yields g(-t). Consequently, the mirror image of g(t) about the vertical axis is g(-t). Recall also that the mirror image of g(t) about the horizontal axis is -g(t).

Example 2.4 For the signal g(t) shown in Fig. 2.10a, sketch g(-t).

The instants -1 and -5 in g(t) are mapped into instants 1 and 5 in g(-t). If g(t) = e^{t/2}, then g(-t) = e^{-t/2}. The signal g(-t) is shown in Fig. 2.10b.
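Because each of the three operations of this section is just a substitution for t, all of them can be visualized in a few lines of MATLAB; the time-limited signal below is an arbitrary illustrative choice, not one from the figures.

    % Sketch: time shifting, scaling, and inversion as argument substitutions.
    g = @(t) exp(-t/2).*(t >= -1 & t <= 5);   % an arbitrary time-limited signal
    t = -10:0.01:10;
    plot(t, g(t), t, g(t - 2), t, g(3*t), t, g(-t));
    legend('g(t)', 'g(t-2): delay', 'g(3t): compression', 'g(-t): folding');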


Figure 2.9 Time inversion (reflection) of a signal.

Figure 2.10 Example of time inversion.

Figure 2.11 (a) Unit impulse and (b) its approximation.

2.4 UNIT IMPULSE SIGNAL
The unit impulse function δ(t) is one of the most important functions in the study of signals and systems. Its definition and application provide much convenience that is not permissible in pure mathematics.

The unit impulse function δ(t) was first defined by P. A. M. Dirac (hence it is often known as the "Dirac delta") as

\delta(t) = 0, \quad t \neq 0   (2.16)

\int_{-\infty}^{\infty} \delta(t)\, dt = 1   (2.17)

We can visualize an impulse as a tall, narrow rectangular pulse of unit area, as shown in Fig. 2.11. The width of this rectangular pulse is a very small value ε; its height is a very large value 1/ε in the limit as ε → 0. The unit impulse therefore can be regarded as a rectangular pulse with a width that has become infinitesimally small, a height that has become infinitely large, and an overall area that remains constant at unity.* Thus, δ(t) = 0 everywhere except at t = 0, where it is, strictly speaking, undefined. For this reason a unit impulse is represented graphically by the spearlike symbol in Fig. 2.11a.

Multiplication of a Function by an Impulse
Let us now consider what happens when we multiply the unit impulse δ(t) by a function φ(t) that is known to be continuous at t = 0. Since the impulse exists only at t = 0, and the value of φ(t) at t = 0 is φ(0), we obtain

\phi(t)\,\delta(t) = \phi(0)\,\delta(t)   (2.18a)

Similarly, if φ(t) is multiplied by an impulse δ(t - T) (an impulse located at t = T), then

\phi(t)\,\delta(t - T) = \phi(T)\,\delta(t - T)   (2.18b)

provided φ(t) is defined at t = T.

The Sampling Property of the Unit Impulse Function
From Eq. (2.18) it follows that

\int_{-\infty}^{\infty} \phi(t)\,\delta(t - T)\, dt = \phi(T) \int_{-\infty}^{\infty} \delta(t - T)\, dt = \phi(T)   (2.19a)

provided φ(t) is continuous at t = T. This result means that the area under the product of a function with an impulse δ(t) is equal to the value of that function at the instant where the unit impulse is located. This very important and useful property is known as the sampling (or sifting) property of the unit impulse.

Depending on the value of T and the integration limits, the impulse function may or may not lie within those limits. Thus, it follows that

\int_{a}^{b} \phi(t)\,\delta(t - T)\, dt = \begin{cases} \phi(T) & a < T < b \\ 0 & T < a < b \ \text{ or } \ T > b > a \end{cases}   (2.19b)

The Unit Impulse as a Generalized Function
The unit impulse function definition (2.17) leads to a nonunique function.¹ Moreover, δ(t) is not even a true function in the ordinary sense. An ordinary function is specified by its values for all time t. The impulse function is zero everywhere except at t = 0, and at this, the only interesting point of its range, it is undefined. In a more rigorous approach, the impulse function is defined not as an ordinary function but as a generalized function, where δ(t) is defined by Eq. (2.19a). We say nothing about what the impulse function is or what it looks like. Instead, it is defined only in terms of the effect it has on a test function φ(t). We define a unit impulse as a function for which the area under its product with a function φ(t) is equal to the value of the function φ(t) at the instant where the impulse is located. Recall that the sampling property [Eq. (2.19a)] is the consequence of the classical (Dirac) definition of the impulse in Eq. (2.17). In contrast, in the generalized function approach, the sampling property [Eq. (2.19a)] defines the impulse function.

* The impulse function can also be approximated by other pulses, such as a positive triangle, an exponential pulse, or a Gaussian pulse.


Figure 2.12 (a) Unit step function u(t). (b) Causal exponential e^{-at}u(t).

The Unit Step Function u(t)
Another familiar and useful function, the unit step function u(t), is often encountered in circuit analysis and is defined by Fig. 2.12a:

u(t) = \begin{cases} 1 & t \ge 0 \\ 0 & t < 0 \end{cases}   (2.20)

If we want a signal to start at t = 0 (so that it has a value of zero for t < 0), we need only multiply the signal by u(t). A signal that starts after t = 0 is called a causal signal. In other words, g(t) is a causal signal if

g(t) = 0, \quad t < 0

The signal e^{-at} represents an exponential that starts at t = -∞. If we want this signal to start at t = 0 (the causal form), it can be described as e^{-at}u(t) (Fig. 2.12b). From Fig. 2.11b, we observe that the area from -∞ to t under the limiting form of δ(t) is zero if t < 0 and unity if t ≥ 0. Consequently,

\int_{-\infty}^{t} \delta(\tau)\, d\tau = u(t) = \begin{cases} 0 & t < 0 \\ 1 & t \ge 0 \end{cases}   (2.21a)

From this result it follows that

\frac{du}{dt} = \delta(t)   (2.21b)

2.5 SIGNALS VERSUS VECTORS
There is a strong connection between signals and vectors. Signals that are defined at only a finite number of time instants (say N) can be written as vectors (of dimension N). Thus, consider a signal g(t) defined over a closed time interval [a, b]. If we pick N points uniformly on the time interval [a, b] such that

t_1 = a, \quad t_2 = a + \epsilon, \quad t_3 = a + 2\epsilon, \quad \ldots, \quad t_N = a + (N-1)\epsilon = b, \qquad \epsilon = \frac{b - a}{N - 1}

we then can write a signal vector g as an N-dimensional vector

\mathbf{g} = \begin{bmatrix} g(t_1) & g(t_2) & \cdots & g(t_N) \end{bmatrix}


Figure 2.13 Component (projection) of a vector along another vector.

As the number of time instants N increases, the sampled signal vector g will grow. Eventually, as N → ∞, the signal values would form a vector g of infinitely large dimension. Because ε → 0, the signal vector g would transform into the continuous time signal g(t) defined over the interval [a, b]. In other words,

\lim_{N \to \infty} \mathbf{g} = g(t), \quad t \in [a, b]

This relationship clearly shows that continuous time signals are straightforward generalizations of finite dimension vectors. Thus, basic definitions and operations in a vector space can be applied to continuous time signals as well. We now highlight this connection between the finite dimension vector space and the continuous time signal space.

We shall denote all vectors by boldface type. For example, x is a certain vector with magnitude or length ‖x‖. A vector has magnitude and direction. In a vector space, we can define the inner (dot or scalar) product of two real-valued vectors g and x as

<\mathbf{g},\ \mathbf{x}> = \|\mathbf{g}\|\,\|\mathbf{x}\| \cos \theta   (2.22)

where θ is the angle between vectors g and x. Using this definition we can express ‖x‖, the length (norm) of a vector x, as

\|\mathbf{x}\|^2 = <\mathbf{x},\ \mathbf{x}>   (2.23)

This defines a normed vector space.

2.5.1 Component of a Vector along Another Vector

Consider two vectors g and x, as shown in Fig. 2.13. Let the component of g along x be cx. Geometrically, the component of g along x is the projection of g on x, and is obtained by drawing a perpendicular from the tip of g to the vector x, as shown in Fig. 2.13. What is the mathematical significance of a component of a vector along another vector? As seen from Fig. 2.13, the vector g can be expressed in terms of vector x as

\mathbf{g} = c\mathbf{x} + \mathbf{e}   (2.24)

However, this does not describe a unique way to decompose g in terms of x and e. Figure 2.14 shows two of the infinite other possibilities. From Fig. 2.14a and b, we have

\mathbf{g} = c_1\mathbf{x} + \mathbf{e}_1 = c_2\mathbf{x} + \mathbf{e}_2   (2.25)


Figure 2.14 Approximations of a vector in terms of another vector.

The question is: Which is the "best" decomposition? The concept of optimality depends on what we wish to accomplish by decomposing g into two components.

In each of these three representations, g is given in terms of x plus another vector called the error vector. If our goal is to approximate g by cx (Fig. 2.13),

\mathbf{g} \simeq \hat{\mathbf{g}} = c\mathbf{x}   (2.26)

then the error in this approximation is the (difference) vector e = g - cx. Similarly, the errors in the approximations of Fig. 2.14a and b are e₁ and e₂. What is unique about the approximation of Fig. 2.13 is that its error vector is the shortest (with the smallest magnitude or norm). We can now define mathematically the component (or projection) of a vector g along vector x to be cx, where c is chosen to minimize the magnitude of the error vector e = g - cx.

Geometrically, the magnitude of the component of g along x is ‖g‖ cos θ, which is also equal to c‖x‖. Therefore

c\,\|\mathbf{x}\| = \|\mathbf{g}\| \cos \theta

Based on the definition of the inner product between two vectors, multiplying both sides by ‖x‖ yields

c\,\|\mathbf{x}\|^2 = \|\mathbf{g}\|\,\|\mathbf{x}\| \cos \theta = <\mathbf{g},\ \mathbf{x}>

and

c = \frac{<\mathbf{g},\ \mathbf{x}>}{<\mathbf{x},\ \mathbf{x}>} = \frac{1}{\|\mathbf{x}\|^2} <\mathbf{g},\ \mathbf{x}>   (2.27)

From Fig. 2.13, it is apparent that when g and x are perpendicular, or orthogonal, then g has a zero component along x; consequently, c = 0. Keeping an eye on Eq. (2.27), we therefore define g and x to be orthogonal if the inner (scalar or dot) product of the two vectors is zero, that is, if

<\mathbf{g},\ \mathbf{x}> = 0   (2.28)

2.5.2 Decomposition of a Signal and Signal Components

The concepts of vector component and orthogonality can be directly extended to continuous time signals. Consider the problem of approximating a real signal g(t) in terms of another real signal x(t) over an interval [t₁, t₂]:

g(t) \simeq c\,x(t), \quad t_1 \le t \le t_2   (2.29)


The error e(t) in this approximation is

e(t) = \begin{cases} g(t) - c\,x(t) & t_1 \le t \le t_2 \\ 0 & \text{otherwise} \end{cases}   (2.30)

For the "best approximation," we need to minimize the error signal, that is, minimize its norm. The minimum signal norm corresponds to the minimum energy E_e over the interval [t₁, t₂], given by

E_e = \int_{t_1}^{t_2} e^2(t)\, dt = \int_{t_1}^{t_2} [g(t) - c\,x(t)]^2\, dt

Note that the right-hand side is a definite integral with t as the dummy variable. Hence E_e is a function of the parameter c (not t), and E_e is minimized for some choice of c. To minimize E_e, a necessary condition is

\frac{dE_e}{dc} = 0   (2.31)

or

\frac{d}{dc} \left\{ \int_{t_1}^{t_2} [g(t) - c\,x(t)]^2\, dt \right\} = 0

Expanding the squared term inside the integral, we obtain

-2 \int_{t_1}^{t_2} g(t)\,x(t)\, dt + 2c \int_{t_1}^{t_2} x^2(t)\, dt = 0

from which we obtain

c = \frac{\int_{t_1}^{t_2} g(t)\,x(t)\, dt}{\int_{t_1}^{t_2} x^2(t)\, dt} = \frac{1}{E_x} \int_{t_1}^{t_2} g(t)\,x(t)\, dt   (2.32)

To summarize our discussion, if a signal g(t) is approximated by another signal x(t) as

g(t) \simeq c\,x(t)

then the optimum value of c that minimizes the energy of the error signal in this approximation is given by Eq. (2.32).

Taking our clue from vectors, we say that a signal g(t) contains a component cx(t), where c is given by Eq. (2.32). As in vector space, cx(t) is the projection of g(t) on x(t). Consistent with vector space terminology, we say that if the component of a signal g(t) of the form x(t) is zero (that is, c = 0), the signals g(t) and x(t) are orthogonal over the interval [t₁, t₂].


In other words, with respect to real-valued signals, two signals x(t) and g(t) are orthogonal when there is zero contribution from one signal to the other (i.e., when c = 0). Thus, x(t) and g(t) are orthogonal if and only if

\int_{t_1}^{t_2} g(t)\,x(t)\, dt = 0   (2.33)

Based on the illustration of vectors in Fig. 2.14, two signals are orthogonal if and only if their inner product is zero. This relationship indicates that the integral of Eq. (2.33) is closely related to the concept of the inner product between vectors.

Indeed, compared to the standard definition of the inner product of two N-dimensional vectors g and x,

<\mathbf{g},\ \mathbf{x}> = \sum_{i=1}^{N} g_i x_i

the integration of Eq. (2.33) has an almost identical form. We therefore define the inner product of two (real-valued) signals g(t) and x(t) over a time interval [t₁, t₂] as

<g(t),\ x(t)> = \int_{t_1}^{t_2} g(t)\,x(t)\, dt   (2.34)

Recall from algebraic geometry that the square of a vector length, ‖x‖², is equal to <x, x>. Keeping this concept in mind and continuing to draw the analogy to vector analysis, we define the norm of a signal g(t) as

\|g(t)\| = \sqrt{<g(t),\ g(t)>}   (2.35)

which is the square root of the signal energy in the time interval. It is therefore clear that the norm of a signal is analogous to the length of a finite-dimensional vector. More generally, the signals may not be merely defined over a continuous segment [t₁, t₂].*

Example 2.5 For the square signal g(t) shown in Fig. 2.15, find the component in g(t) of the form sin t. In other words, approximate g(t) in terms of sin t:

g(t) \simeq c \sin t, \quad 0 \le t \le 2\pi

such that the energy of the error signal is minimum.

In this case

x(t) = \sin t \quad \text{and} \quad E_x = \int_0^{2\pi} \sin^2 t \, dt = \pi

* Indeed, the signal space under consideration may be over a set of time segments represented simply by Θ. For such a more general space of signals, the inner product is defined as an integral over the time domain Θ:

<g(t),\ x(t)> = \int_{\Theta} g(t)\,x(t)\, dt   (2.36)

Given this inner product definition, the signal norm ‖g(t)‖ = √(<g(t), g(t)>) and the signal space can be defined for any time domain signals.


Figure 2.15 Approximation of a square signal in terms of a single sinusoid.

From Eq. (2.32), we find

c = \frac{1}{\pi} \int_0^{2\pi} g(t) \sin t \, dt = \frac{1}{\pi} \left[ \int_0^{\pi} \sin t \, dt + \int_{\pi}^{2\pi} (-\sin t)\, dt \right] = \frac{4}{\pi}   (2.37)

Therefore

g(t) \simeq \frac{4}{\pi} \sin t   (2.38)

represents the best approximation of g(t) by the function sin t, which will minimize the error signal energy. This sinusoidal component of g(t) is shown shaded in Fig. 2.15. As in vector space, we say that the square function g(t) shown in Fig. 2.15 has a component of signal sin t with magnitude 4/π.
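The value c = 4/π can be confirmed directly from Eq. (2.32); in the sketch below, the square wave is written as +1 on (0, π) and -1 on (π, 2π), matching Fig. 2.15.

    % Numerical check of Example 2.5 via Eq. (2.32).
    g = @(t) 1 - 2*(t > pi);          % square wave: +1 on (0,pi), -1 on (pi,2*pi)
    x = @(t) sin(t);
    c = integral(@(t) g(t).*x(t), 0, 2*pi) / integral(@(t) x(t).^2, 0, 2*pi);
    fprintf('c = %.4f, 4/pi = %.4f\n', c, 4/pi);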

2.5.3 Complex Signal Space and Orthogonality

So far we have restricted our discussion to real functions of t. To generalize the results to complex functions of t, consider again the problem of approximating a function g(t) by a function x(t) over an interval (t₁ ≤ t ≤ t₂):

g(t) \simeq c\,x(t)   (2.39)

where g(t) and x(t) are complex functions of t. In general, both the coefficient c and the error

e(t) = g(t) - c\,x(t)   (2.40)

are complex. Recall that the energy E_x of the complex signal x(t) over an interval [t₁, t₂] is

E_x = \int_{t_1}^{t_2} |x(t)|^2\, dt

For the best approximation, we need to choose c to minimize E_e, the energy of the error signal e(t), given by

E_e = \int_{t_1}^{t_2} |g(t) - c\,x(t)|^2\, dt   (2.41)


Recall also that

|u + v|^2 = (u + v)(u^* + v^*) = |u|^2 + |v|^2 + u^* v + u v^*   (2.42)

After some manipulation, we can use this result to express the integral E_e in Eq. (2.41) as

E_e = \int_{t_1}^{t_2} |g(t)|^2\, dt - \frac{1}{E_x} \left| \int_{t_1}^{t_2} g(t)\,x^*(t)\, dt \right|^2 + E_x \left| c - \frac{1}{E_x} \int_{t_1}^{t_2} g(t)\,x^*(t)\, dt \right|^2

Since the first two terms on the right-hand side are independent of c, it is clear that E_e is minimized by choosing c such that the third term is zero. This yields the optimum coefficient

c = \frac{1}{E_x} \int_{t_1}^{t_2} g(t)\,x^*(t)\, dt   (2.43)

In light of the foregoing result, we need to redefine orthogonality for the complex case as follows: complex functions (signals) x₁(t) and x₂(t) are orthogonal over an interval (t₁ ≤ t ≤ t₂) as long as

\int_{t_1}^{t_2} x_1(t)\,x_2^*(t)\, dt = 0 \quad \text{or} \quad \int_{t_1}^{t_2} x_1^*(t)\,x_2(t)\, dt = 0   (2.44)

In fact, either equality suffices. This is a general definition of orthogonality, which reduces to Eq. (2.33) when the functions are real.

Similarly, the definition of the inner product for complex signals over a time domain Θ can be modified as

<g(t),\ x(t)> = \int_{\{t\,:\,t \in \Theta\}} g(t)\,x^*(t)\, dt   (2.45)

Consequently, the norm of a signal g(t) is simply

\|g(t)\| = \left[ \int_{\{t\,:\,t \in \Theta\}} |g(t)|^2\, dt \right]^{1/2}   (2.46)

2.5.4 Energy of the Sum of Orthogonal Signals

We know that the square of the geometric length (or magnitude) of the sum of two orthogonal vectors is equal to the sum of the squares of the magnitudes of the two vectors. Thus, if vectors x and y are orthogonal, and if z = x + y, then

\|\mathbf{z}\|^2 = \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2

This is in fact the famous Pythagorean theorem. We have a similar result for signals: the energy of the sum of two orthogonal signals is equal to the sum of the energies of the two signals. Thus, if signals x(t) and y(t) are orthogonal over an interval [t₁, t₂], and if z(t) = x(t) + y(t), then

E_z = E_x + E_y   (2.47)

We now prove this result for complex signals, of which real signals are a special case. From Eq. (2.42) it follows that

\int_{t_1}^{t_2} |x(t) + y(t)|^2\, dt = \int_{t_1}^{t_2} |x(t)|^2\, dt + \int_{t_1}^{t_2} |y(t)|^2\, dt + \int_{t_1}^{t_2} x(t)\,y^*(t)\, dt + \int_{t_1}^{t_2} x^*(t)\,y(t)\, dt
    = \int_{t_1}^{t_2} |x(t)|^2\, dt + \int_{t_1}^{t_2} |y(t)|^2\, dt   (2.48)

The last equality follows from the fact that, because of orthogonality, the two integrals of the cross products x(t)y*(t) and x*(t)y(t) are zero. This result can be extended to a sum of any number of mutually orthogonal signals.

2.6 CORRELATION OF SIGNALS
By defining the inner product and the norm of signals, we have set the foundation for signal comparison. Here again, we can benefit by drawing parallels to the familiar vector space. Two vectors g and x are similar if g has a large component along x. If c in Eq. (2.27) is large, the vectors g and x are similar. We could consider c to be a quantitative measure of similarity between g and x. Such a measure, however, would be defective, because c varies with the norms (or lengths) of g and x. To be fair, the amount of similarity between g and x should be independent of the lengths of g and x. If we double the length of g, for example, the amount of similarity between g and x should not change. From Eq. (2.27), however, we see that doubling g doubles the value of c (whereas doubling x halves the value of c). A similarity measure based directly on the coefficient c is clearly faulty. Similarity between two vectors is indicated by the angle θ between the vectors. The smaller the θ, the larger the similarity, and vice versa. The amount of similarity can therefore be conveniently measured by cos θ. The larger the cos θ, the larger the similarity between the two vectors. Thus, a suitable measure would be ρ = cos θ, which is given by

\rho = \cos \theta = \frac{<\mathbf{g},\ \mathbf{x}>}{\|\mathbf{g}\|\,\|\mathbf{x}\|}   (2.49)

We can readily verify that this measure is independent of the lengths of g and x. This similarity measure ρ is known as the correlation coefficient. Observe that

-1 \le \rho \le 1   (2.50)

Thus, the magnitude of ρ is never greater than unity. If the two vectors are aligned, the similarity is maximum (ρ = 1). Two vectors aligned in opposite directions have the maximum dissimilarity (ρ = -1). If the two vectors are orthogonal, the similarity is zero.

We use the same argument in defining a similarity index (the correlation coefficient) for signals. For convenience, we shall consider the signals over the entire time interval from -∞ to ∞. To establish a similarity index independent of the energies (sizes) of g(t) and x(t), we must normalize c by normalizing the two signals to have unit energies. Thus, the appropriate similarity index ρ, analogous to Eq. (2.49), is given by

\rho = \frac{1}{\sqrt{E_g E_x}} \int_{-\infty}^{\infty} g(t)\,x(t)\, dt   (2.51)


Observe that multiplying either g(t) or x(t) by any constant has no effect on this index. Thus, it is independent of the size (energies) of g(t) and x(t). By using the Cauchy-Schwarz inequality (proved in Appendix B),* one can show that the magnitude of ρ is never greater than 1:

-1 \le \rho \le 1   (2.52)

2.6.1 Best Friends, Opposite Personalities, and Complete Strangers

We can readily verify that if g(t) = Kx(t), then ρ = 1 when K is any positive constant, and ρ = -1 when K is any negative constant. Also, ρ = 0 if g(t) and x(t) are orthogonal. Thus, the maximum similarity [when g(t) = Kx(t)] is indicated by ρ = 1, and the maximum dissimilarity [when g(t) = -Kx(t)] is indicated by ρ = -1. When the two signals are orthogonal, the similarity is zero. Qualitatively speaking, we may view orthogonal signals as unrelated signals. Note that maximum dissimilarity is qualitatively different from unrelatedness. For example, we have best friends (ρ = 1), opposite personalities (ρ = -1), and complete strangers, who do not share anything in common (ρ = 0). Our opposite personalities are not strangers; they are, in fact, people who always do and think in opposite ways.

We can readily extend this discussion to complex signal comparison. We generalize the definition of ρ to include complex signals as

\rho = \frac{1}{\sqrt{E_g E_x}} \int_{-\infty}^{\infty} g(t)\,x^*(t)\, dt   (2.53)

Example 2.6 Find the correlation coefficient ρ between the pulse x(t) and the pulses gᵢ(t), i = 1, 2, 3, 4, 5, and 6, shown in Fig. 2.16.

We shall compute ρ using Eq. (2.51) for each of the six cases. Let us first compute the energies of all the signals:

E_x = \int_0^5 x^2(t)\, dt = \int_0^5 dt = 5

In the same way we find E_{g_1} = 5, E_{g_2} = 1.25, and E_{g_3} = 5. Also, to determine E_{g_4} and E_{g_5}, we determine the energy E of e^{-at}u(t) over the interval t = 0 to T:

E = \int_0^T \left(e^{-at}\right)^2\, dt = \int_0^T e^{-2at}\, dt = \frac{1}{2a}\left(1 - e^{-2aT}\right)

* The Cauchy-Schwarz inequality states that for two real energy signals g(t) and x(t), \left( \int_{-\infty}^{\infty} g(t)\,x(t)\, dt \right)^2 \le E_g E_x, with equality if and only if x(t) = Kg(t), where K is an arbitrary constant. There is a similar inequality for complex signals.


Figure 2.16 Signals for Example 2.6.

For g₄(t), a = 1/5 and T = 5. Therefore, E_{g_4} = 2.1617. For g₅(t), a = 1 and T = ∞. Therefore, E_{g_5} = 0.5. The energy of g₆(t) is given by

E_{g_6} = \int_0^5 \sin^2(2\pi t)\, dt = 2.5

From Eq. (2.51), the correlation coefficients for the six cases are found to be

\rho_1 = \frac{1}{\sqrt{5 \cdot 5}} \int_0^5 (1)(1)\, dt = 1

\rho_2 = \frac{1}{\sqrt{1.25 \cdot 5}} \int_0^5 (0.5)(1)\, dt = 1

\rho_3 = \frac{1}{\sqrt{5 \cdot 5}} \int_0^5 (-1)(1)\, dt = -1

\rho_4 = \frac{1}{\sqrt{2.1617 \cdot 5}} \int_0^5 e^{-t/5}\, dt = 0.961

\rho_5 = \frac{1}{\sqrt{0.5 \cdot 5}} \int_0^5 e^{-t}\, dt = 0.628

\rho_6 = \frac{1}{\sqrt{2.5 \cdot 5}} \int_0^5 \sin(2\pi t)\, dt = 0

Comments on the Results from Example 2.6

Because g₁(t) = x(t), the two signals have the maximum possible similarity, and ρ = 1. However, the signal g₂(t) also shows the maximum possible similarity, with ρ = 1. This is because we have defined ρ to measure the similarity of the waveshapes, and it is independent of the amplitude (strength) of the signals compared. The signal g₂(t) is identical to x(t) in shape; only the amplitude (strength) is different. Hence ρ = 1. The signal g₃(t), on the other hand, has the maximum possible dissimilarity with x(t), because it is equal to -x(t).

For g₄(t), ρ = 0.961, implying a high degree of similarity with x(t). This is reasonable because g₄(t) is very similar to x(t) over the duration of x(t) (for 0 ≤ t ≤ 5). Just by inspection, we notice that the variations or changes in both x(t) and g₄(t) are at similar rates. Such is not the case with g₅(t), where we notice that the variations in g₅(t) are generally at a higher rate than those in x(t). There is still a considerable degree of similarity; both signals always remain positive and show no oscillations, and both have zero or negligible strength beyond t = 5. Thus, g₅(t) is similar to x(t), but not as similar as g₄(t). This is why ρ = 0.628 for g₅(t).

The signal g₆(t) is orthogonal to x(t), so that ρ = 0. This appears to indicate that the dissimilarity in this case is not as strong as that of g₃(t), for which ρ = -1. This may seem odd, because g₃(t) appears more similar to x(t) than does g₆(t). The dissimilarity between x(t) and g₃(t) is of the nature of opposite characteristics; in a way they are very similar, but in opposite directions. On the other hand, x(t) and g₆(t) are dissimilar because they go their own ways irrespective of each other's variations; theirs is the dissimilarity of strangers. Hence the dissimilarity of g₆(t) to x(t) rates lower than that of g₃(t). Sec. 2.10 provides a MATLAB exercise that numerically computes these correlation coefficients [sign_cor.m].
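That exercise is not reproduced here, but a minimal stand-in for it, computing ρ of Eq. (2.51) for the pair x(t) and g₅(t) above, might look like this (the numerical result should match 0.628):

    % Correlation coefficient of Eq. (2.51) for x(t) and g5(t) = exp(-t)u(t).
    x   = @(t) double(t >= 0 & t <= 5);     % unit-height pulse on [0, 5]
    g5  = @(t) exp(-t).*(t >= 0);           % causal exponential
    Ex  = integral(@(t) x(t).^2, 0, 5);     % = 5
    Eg5 = integral(@(t) g5(t).^2, 0, Inf);  % = 0.5
    rho = integral(@(t) x(t).*g5(t), 0, 5)/sqrt(Ex*Eg5);
    fprintf('rho = %.4f (text: 0.628)\n', rho);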

2.6.2 Application to Signal Detection

Correlation between two signals is an extremely important concept that measures the degree of similarity (agreement or alignment) between the two signals. This concept is widely used for signal processing in radar, sonar, digital communication, electronic warfare, and many other applications.

We explain such applications by an example in which a radar signal pulse is transmitted in order to detect a suspected target. If a target is present, the pulse will be reflected by it. If a target is not present, there will be no reflected pulse, just noise. By detecting the presence or absence of the reflected pulse, we confirm the presence or absence of a target. The crucial problem in this procedure is to detect the heavily attenuated, reflected pulse (of known waveform) buried among the unwanted noise and interferences. Correlation of the received pulse with the transmitted pulse can be of great help in this situation.

We denote the transmitted radar pulse signal as g(t). The received radar return signal is

z(t) = \begin{cases} a\,g(t - t_0) + w(t) & \text{target present} \\ w(t) & \text{target absent} \end{cases}   (2.54)

where a represents the target reflection and attenuation loss, t₀ represents the round-trip propagation delay (which equals twice the target distance divided by the speed of the electromagnetic wave), and w(t) captures all the noise and interferences. The key to target detection is the orthogonality between w(t) and g(t - t₀), that is,

\int_{t_1}^{t_2} w(t)\,g^*(t - t_0)\, dt = 0, \quad t_1 < t_0 < t_2   (2.55)


Thus, to detect whether a target is present, a correlation can be computed between z(t) and a delayed pulse signal g(t - t₀):

\int_{t_1}^{t_2} z(t)\,g^*(t - t_0)\, dt = \begin{cases} a E_p & \text{target present} \\ 0 & \text{target absent} \end{cases}   (2.56)

Here E_p is the pulse energy. Given the orthogonality between w(t) and g(t - t₀), the target detection problem reduces to a thresholding problem in Eq. (2.56): determine whether the correlation is aE_p or 0 by applying a magnitude threshold. Note that when t₀ is unknown, a bank of N correlators, each using a different delay τᵢ,

\int_{t_1}^{t_2} z(t)\,g^*(t - \tau_i)\, dt, \quad i = 1, \ldots, N   (2.57)

may be applied to allow the receiver to identify the peak correlation at the correct delay τᵢ = t₀. Alternatively, a correlation function can be used (as in the next section).

A similar situation exists in digital communication when we are required to detect the presence of one of two known waveforms in the presence of noise. Consider the case of binary communication, where two known waveforms are received in a random sequence. Each time we receive a pulse, our task is to determine which of the two (known) waveforms has been received. To make the detection easier, we should make the two pulses as dissimilar as possible. This means we should select one pulse to be the negative of the other pulse. This gives the highest dissimilarity (ρ = -1). This scheme is sometimes called the antipodal scheme. We can also use orthogonal pulses, which result in ρ = 0. In practice both these options are used, although antipodal is the best in terms of distinguishability between the two pulses.

Let us consider the antipodal scheme in which the two pulses are p(t) and -p(t). The correlation coefficient ρ of these pulses is -1. Assume no noise or any other imperfections in the transmission. The receiver consists of a correlator that computes the correlation coefficient between p(t) and the received pulse. If ρ is 1, we decide that p(t) was received, and if ρ is -1, we decide that -p(t) was received. Because of the maximum possible dissimilarity between the two pulses, detection is easier. In practice, however, there are several imperfections. There is always an unwanted signal (noise) superimposed on the received pulses. Moreover, during transmission, pulses are distorted and dispersed (spread out) in time. Consequently, the correlation coefficient is no longer ±1, but has a smaller magnitude, thus reducing their distinguishability. We use a threshold detector, which decides that if ρ is positive, the received pulse is p(t), and if ρ is negative, the received pulse is -p(t).

Suppose, for example, that p(t) has been transmitted. In the ideal case, the correlation of this pulse at the receiver would be 1, the maximum possible. Now, because of noise and pulse distortion, ρ is less than 1. In some extreme situations, noise and pulse overlapping (spreading) make this pulse so dissimilar to p(t) that ρ is negative. In such a case the threshold detector decides that -p(t) has been received, thus causing a detection error. In the same way, if -p(t) is transmitted, channel noise and pulse distortion could cause a positive correlation, resulting in a detection error. Our task is to make sure that the transmitted pulses have sufficient energy to ensure that the damage caused by noise and other imperfections remains within a limit, with the result that the error probability is below some acceptable bound. In the ideal case, the margin provided by the correlation ρ for distinguishing the two pulses is 2 (from 1 to -1, and vice versa). Noise and channel distortion reduce this margin. That is why it is important to start with as large a margin as possible. This explains why the antipodal scheme has the best


performance in terms of guarding against channel noise and pulse distortion. For other reasons, however, as mentioned earlier, schemes such as the orthogonal scheme, where ρ = 0, are also used, even though they provide a smaller margin (from 0 to 1, and vice versa) for distinguishing the pulses. A quantitative discussion of correlation in digital signal detection is presented in Chapter 11.

In later chapters we shall discuss pulse dispersion and pulse distortion during transmission, as well as the calculation of error probability in the presence of noise.

2.6.3 Correlation Functions

We should revisit the application of correlation to signal detection in radar, where a signal pulse is transmitted to detect a suspected target. By detecting the presence or absence of the reflected pulse, we confirm the presence or absence of a target. By measuring the time delay between the transmitted pulse and the received (reflected) pulse, we determine the distance of the target. Let the transmitted and the reflected pulses be denoted by g(t) and z(t), respectively, as shown in Fig. 2.17. If we were to use Eq. (2.51) directly to measure the correlation coefficient ρ, we would obtain

\rho = \frac{1}{\sqrt{E_g E_z}} \int_{-\infty}^{\infty} z(t)\, g^*(t)\, dt = 0    (2.58)

Thus, the correlation is zero because the pulses are disjoint (nonoverlapping in time). The integral in Eq. (2.58) will yield zero even when the pulses are identical but relatively time-shifted. To avoid this difficulty, we compare the received pulse z(t) with the transmitted pulse g(t) shifted by τ. If for some value of τ there is a strong correlation, we will detect not only the presence of the pulse but also its relative time shift with respect to g(t). For this reason, instead of using the integral on the right-hand side of Eq. (2.58), we use the modified integral ψ_zg(τ), the cross-correlation function of two complex signals g(t) and z(t), defined by

\psi_{zg}(\tau) = \int_{-\infty}^{\infty} z(t)\, g^*(t-\tau)\, dt = \int_{-\infty}^{\infty} z(t+\tau)\, g^*(t)\, dt    (2.59)

Therefore, ψ_zg(τ) is an indication of the similarity (correlation) of g(t) with z(t) advanced (left-shifted) by τ seconds.
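
Equation (2.59) suggests a direct numerical procedure for the radar problem: slide g(t) past z(t) and locate the peak of ψ_zg(τ). The following is a minimal sketch, assuming a triangular pulse and a hypothetical 0.3-second echo delay.

% Minimal sketch: estimating an echo delay from the peak of the
% cross-correlation function, Eq. (2.59). The triangular pulse shape
% and the 0.3 s delay are illustrative assumptions.
dt = 0.001; t = 0:dt:2;
g = max(0, 1 - abs((t-0.2)/0.1));              % transmitted pulse g(t)
z = 0.5*max(0, 1 - abs((t-0.5)/0.1));          % attenuated echo, delayed 0.3 s
tau = 0:dt:1;
psi = zeros(size(tau));
for k = 1:length(tau)
    n = k-1;                                   % shift in samples
    psi(k) = sum(z(n+1:end).*g(1:end-n))*dt;   % psi_zg(tau): int z(t) g(t-tau) dt
end
[~, imax] = max(psi);
fprintf('estimated delay = %.3f s\n', tau(imax));   % prints approximately 0.300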

2.6.4 Autocorrelation Function

The correlation of a signal with itself is called the autocorrelation. The autocorrelation function ψ_g(τ) of a real signal g(t) is defined as

\psi_g(\tau) = \int_{-\infty}^{\infty} g(t)\, g(t+\tau)\, dt    (2.60)

It measures the similarity of the signal g(t) with its own displaced version. In Chapter 3, we shall show that the autocorrelation function provides valuable spectral information about the signal.


Figure 2.17 Physical explanation of the autocorrelation function: (a) the pulses g(t) and z(t); (b) g(t) shifted by τ.

2.7 ORTHOGONAL SIGNAL SETS

In this section we show a way of representing a signal as a sum of an orthogonal set of signals. In effect, this orthogonal set of signals forms a basis for the specific signal space. Here again we can benefit from the insight gained from a similar problem in vectors. We know that a vector can be represented as a sum of orthogonal vectors, which form the coordinate system of a vector space. The problem in signals is analogous, and the results for signals are parallel to those for vectors. For this reason, let us review the case of vector representation.

2.7.1 Orthogonal Vector Space

Consider a multidimensional Cartesian vector space described by three mutually orthogonal vectors x_1, x_2, and x_3, as shown in Fig. 2.18 for the special case of a three-dimensional vector space. First, we shall seek to approximate a three-dimensional vector g in terms of two orthogonal vectors x_1 and x_2:

g \simeq c_1 x_1 + c_2 x_2

The error e in this approximation is

e = g - (c_1 x_1 + c_2 x_2)

or equivalently

g = c_1 x_1 + c_2 x_2 + e

Building on our earlier geometrical argument, it is clear from Fig. 2.18 that the length of the error vector e is minimum when it is perpendicular to the (x_1, x_2) plane, and when c_1 x_1 and c_2 x_2 are the projections (components) of g on x_1 and x_2, respectively. The constants c_1 and c_2 are therefore given by the formula in Eq. (2.27).

Now let us determine the best approximation to g in terms of all three mutually orthogonal vectors x_1, x_2, and x_3:

g \simeq c_1 x_1 + c_2 x_2 + c_3 x_3    (2.61)


Figure 2.18 Representation of a vector in three-dimensional space.

Figure 2.18 shows that a unique choice of c_1, c_2, and c_3 exists for which Eq. (2.61) is no longer an approximation but an equality:

g = c_1 x_1 + c_2 x_2 + c_3 x_3

In this case, c_1 x_1, c_2 x_2, and c_3 x_3 are the projections (components) of g on x_1, x_2, and x_3, respectively. Note that the approximation error e is now zero when g is approximated in terms of the three mutually orthogonal vectors x_1, x_2, and x_3. This is because g is a three-dimensional vector, and the vectors x_1, x_2, and x_3 represent a complete set of orthogonal vectors in three-dimensional space. Completeness here means that it is impossible in this space to find any other vector x_4 that is orthogonal to all three vectors x_1, x_2, and x_3. Any vector in this space can therefore be represented (with zero error) in terms of these three vectors. Such vectors are known as basis vectors, and the set of vectors is known as a complete orthogonal basis of this vector space. If a set of vectors {x_i} is not complete, then the approximation error will generally not be zero. For example, in the three-dimensional case just discussed, it is generally not possible to represent a vector g in terms of only two basis vectors without error.

The choice of basis vectors is not unique. In fact, each set of basis vectors corresponds to a particular choice of coordinate system. Thus, a three-dimensional vector g may be represented in many different ways, depending on the coordinate system used.

To summarize, if a set of vectors {x_i} is mutually orthogonal, that is, if

\langle \mathbf{x}_m, \mathbf{x}_n \rangle = \begin{cases} 0 & m \neq n \\ |\mathbf{x}_m|^2 & m = n \end{cases}    (2.62)

and if this basis set is complete, a vector g in this space can be expressed as

g = c_1 x_1 + c_2 x_2 + c_3 x_3    (2.63a)

where the constants c_i are given by

c_i = \frac{\langle \mathbf{g}, \mathbf{x}_i \rangle}{|\mathbf{x}_i|^2}, \quad i = 1, 2, 3    (2.63b)
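
Equations (2.62) and (2.63) translate directly into a few lines of MATLAB. The following minimal sketch uses an arbitrary orthogonal (but not orthonormal) basis of our own choosing and confirms that the error vector vanishes, as completeness requires.

% Minimal sketch: expanding a 3-D vector in an orthogonal basis, Eq. (2.63b).
% The basis vectors and g below are arbitrary illustrative choices.
x1 = [1; 1; 0]; x2 = [1; -1; 0]; x3 = [0; 0; 2];   % mutually orthogonal
g  = [3; -1; 5];
c1 = dot(g,x1)/dot(x1,x1);          % c_i = <g, x_i>/|x_i|^2
c2 = dot(g,x2)/dot(x2,x2);
c3 = dot(g,x3)/dot(x3,x3);
e  = g - (c1*x1 + c2*x2 + c3*x3);   % error vector
disp([c1 c2 c3]); disp(norm(e))     % norm(e) = 0: the basis is complete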


2.7.2 Orthogonal Signal Space

We continue with our signal approximation problem, using the clues and insights developed for vector approximation. As before, we define orthogonality of a signal set x_1(t), x_2(t), ..., x_N(t) over a time domain Θ (which may be an interval [t_1, t_2]) as

\int_{\Theta} x_m(t)\, x_n^*(t)\, dt = \begin{cases} 0 & m \neq n \\ E_n & m = n \end{cases}    (2.64)

If all the signal energies are equal, E_n = 1, then the set is normalized and is called an orthonormal set. An orthogonal set can always be normalized by dividing x_n(t) by \sqrt{E_n} for all n. Now, consider the problem of approximating a signal g(t) over Θ by a set of N mutually orthogonal signals x_1(t), x_2(t), ..., x_N(t):

g(t) \simeq c_1 x_1(t) + c_2 x_2(t) + \cdots + c_N x_N(t)    (2.65a)

= \sum_{n=1}^{N} c_n x_n(t)    (2.65b)

It can be shown that E_e, the energy of the error signal e(t) in this approximation, is minimized if we choose

c_n = \frac{\int_{\Theta} g(t)\, x_n^*(t)\, dt}{\int_{\Theta} |x_n(t)|^2\, dt} = \frac{1}{E_n} \int_{\Theta} g(t)\, x_n^*(t)\, dt, \quad n = 1, 2, \ldots, N    (2.66)

Moreover, if the orthogonal set is complete, then the error energy E_e → 0, and the representation in (2.65) is no longer an approximation but an equality. More precisely, let the N-term approximation error be defined by

e_N(t) = g(t) - [c_1 x_1(t) + c_2 x_2(t) + \cdots + c_N x_N(t)] = g(t) - \sum_{n=1}^{N} c_n x_n(t), \quad t \in \Theta    (2.67)

If the orthogonal basis is complete, then the error signal energy converges to zero; that is,

\lim_{N \to \infty} \int_{\Theta} |e_N(t)|^2\, dt = 0    (2.68)

Strictly speaking, in a mathematical sense a signal may not converge to zero everywhere even though its error energy does, because a signal may be nonzero at some isolated points.* Still, for all practical purposes, signals are continuous for all t, and the equality (2.68) states that the error signal has zero energy as N → ∞. Thus, for N → ∞, the equality (2.65) can be loosely written as

g(t) = \sum_{n=1}^{\infty} c_n x_n(t)    (2.69)

* Known as a measure-zero set.

where the coefficients c_n are given by Eq. (2.66). Because the error signal energy approaches zero, it follows that the energy of g(t) is now equal to the sum of the energies of its orthogonal components.

The series on the right-hand side of Eq. (2.69) is called the generalized Fourier series of g(t) with respect to the set {x_n(t)}. When the set {x_n(t)} is such that the error energy E_N → 0 as N → ∞ for every member of some particular signal class, we say that the set {x_n(t)} is complete on {t : Θ} for that class of g(t), and the set {x_n(t)} is called a set of basis functions or basis signals. In particular, the class of (finite-) energy signals over Θ is denoted as L²{Θ}.
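
The projection formula (2.66) applies to any orthogonal basis, not just sinusoids. The following minimal sketch approximates an arbitrary test signal with a basis of four disjoint rectangular pulses on [0, 1); both the basis and the signal are illustrative choices, not taken from the text.

% Minimal sketch: a generalized Fourier series, Eq. (2.66), using a basis
% of four disjoint rectangular pulses (an illustrative orthogonal set).
dt = 1e-4; t = 0:dt:1-dt;
g  = t;                                   % signal to approximate (arbitrary)
ghat = zeros(size(t));
for n = 1:4
    xn = (t >= (n-1)/4) & (t < n/4);      % n-th basis pulse
    En = sum(xn)*dt;                      % E_n (unit-height pulses)
    cn = sum(g.*xn)*dt/En;                % Eq. (2.66)
    ghat = ghat + cn*xn;                  % N-term approximation
end
Ee = sum((g-ghat).^2)*dt                  % error energy E_e

Doubling the number of pulses roughly quarters E_e here; with a complete basis the error energy would go to zero, as Eq. (2.68) states.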

2.7.3 Parseval's Theorem

Recall that the energy of the sum of orthogonal signals is equal to the sum of their energies. Therefore, the energy of the right-hand side of Eq. (2.69) is the sum of the energies of the individual orthogonal components. The energy of a component c_n x_n(t) is c_n^2 E_n. Equating the energies of the two sides of Eq. (2.69) yields

E_g = c_1^2 E_1 + c_2^2 E_2 + c_3^2 E_3 + \cdots = \sum_n c_n^2 E_n    (2.70)

This important result goes by the name of Parseval's theorem. Recall that the signal energy (the area under the squared value of a signal) is analogous to the square of the length of a vector in the vector-signal analogy. In vector space we know that the square of the length of a vector is equal to the sum of the squares of the lengths of its orthogonal components. Parseval's theorem (2.70) is the statement of this fact as applied to signals.

2.7.4 Some Examples of Generalized Fourier Series

Signal representation by generalized Fourier series shows that signals are vectors in every sense. Just as a vector can be represented as a sum of its components in a variety of ways, depending upon the choice of a coordinate system, a signal can be represented as a sum of its components in a variety of ways. Just as we have vector coordinate systems formed by mutually orthogonal vectors, we also have signal coordinate systems (basis signals) formed by a variety of sets of mutually orthogonal signals. There exists a large number of orthogonal signal sets that can be used as basis signals for generalized Fourier series. Some well-known signal sets are trigonometric (sinusoid) functions, exponential (sinusoid) functions, Walsh functions, Bessel functions, Legendre polynomials, Laguerre functions, Jacobi polynomials, Hermite polynomials, and Chebyshev polynomials. The functions that concern us most in this book are the trigonometric and exponential sinusoids, which occupy the rest of the chapter.


2.8 TRIGONOMETRIC FOURIER SERIES

We first consider the class of (real or complex) periodic signals with period T_0. This space of periodic signals of period T_0 has a well-known complete orthogonal basis formed by real-valued trigonometric functions. Consider the signal set:

{1, \cos \omega_0 t, \cos 2\omega_0 t, \ldots, \cos n\omega_0 t, \ldots; \sin \omega_0 t, \sin 2\omega_0 t, \ldots, \sin n\omega_0 t, \ldots}    (2.71)

A sinusoid of angular frequency nω_0 is called the nth harmonic of the sinusoid of angular frequency ω_0 when n is an integer. Naturally, the sinusoidal frequency f_0 (in hertz) is related to its angular frequency ω_0 via

\omega_0 = 2\pi f_0

These terms offer different conveniences and are equivalent. Both ω_0 and 2πf_0 are commonly used in practice, and neither offers any distinct advantage. For this reason, we will use ω_0 and f_0 interchangeably in this book, according to the convenience of the particular problem.

The sinusoid of angular frequency ω_0 serves as an anchor in this set, called the fundamental tone, of which all the remaining terms are harmonics. Note that the constant term 1 is the zeroth harmonic in this set because cos (0 × ω_0 t) = 1. We can show that this set is orthogonal over any interval of duration T_0 = 2π/ω_0, which is the period of the fundamental. This follows from the equations (proved in Appendix A):

\int_{T_0} \cos n\omega_0 t \, \cos m\omega_0 t \, dt = \begin{cases} 0 & n \neq m \\ T_0/2 & m = n \neq 0 \end{cases}    (2.72a)

\int_{T_0} \sin n\omega_0 t \, \sin m\omega_0 t \, dt = \begin{cases} 0 & n \neq m \\ T_0/2 & m = n \neq 0 \end{cases}    (2.72b)

and

\int_{T_0} \sin n\omega_0 t \, \cos m\omega_0 t \, dt = 0 \quad \text{for all } n \text{ and } m    (2.72c)

The notation \int_{T_0} means "integral over an interval from t = t_1 to t_1 + T_0 for any value of t_1." These equations show that the set (2.71) is orthogonal over any contiguous interval of duration T_0. This is the trigonometric set, which can be shown to be a complete set.3,4 Therefore, we can express a signal g(t) by a trigonometric Fourier series over any interval of duration T_0 seconds as

g(t) = a_0 + a_1 \cos \omega_0 t + a_2 \cos 2\omega_0 t + \cdots + b_1 \sin \omega_0 t + b_2 \sin 2\omega_0 t + \cdots, \quad t_1 \le t \le t_1 + T_0    (2.73a)

or

g(t) = a_0 + \sum_{n=1}^{\infty} (a_n \cos n\omega_0 t + b_n \sin n\omega_0 t), \quad t_1 \le t \le t_1 + T_0    (2.73b)


where

\omega_0 = 2\pi f_0 = \frac{2\pi}{T_0} \quad \text{and} \quad f_0 = \frac{1}{T_0}    (2.74)

We can use Eq. (2.66) to determine the Fourier coefficients a_0, a_n, and b_n. Thus

a_n = \frac{\int_{t_1}^{t_1+T_0} g(t) \cos n\omega_0 t \, dt}{\int_{t_1}^{t_1+T_0} \cos^2 n\omega_0 t \, dt}    (2.75)

The integral in the denominator of Eq. (2.75), as seen from Eq. (2.72a) (with m = n), is T_0/2 when n ≠ 0. Moreover, for n = 0 the denominator is T_0. Hence

a_0 = \frac{1}{T_0} \int_{t_1}^{t_1+T_0} g(t)\, dt    (2.76a)

and

a_n = \frac{2}{T_0} \int_{t_1}^{t_1+T_0} g(t) \cos n\omega_0 t \, dt = \frac{2}{T_0} \int_{t_1}^{t_1+T_0} g(t) \cos 2\pi n f_0 t \, dt, \quad n = 1, 2, 3, \ldots    (2.76b)

By means of a similar argument, we obtain

b_n = \frac{2}{T_0} \int_{t_1}^{t_1+T_0} g(t) \sin n\omega_0 t \, dt = \frac{2}{T_0} \int_{t_1}^{t_1+T_0} g(t) \sin 2\pi n f_0 t \, dt, \quad n = 1, 2, 3, \ldots    (2.76c)

We note that the trigonometric Fourier series of Eq. (2.73) applies to both real and complex periodic signals.
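
Equations (2.76) translate directly into numerical integration. The following minimal sketch approximates a_0, a_n, and b_n for a sampled periodic signal; the sawtooth test signal is an illustrative choice, not from the text.

% Minimal sketch: numerical trigonometric Fourier coefficients, Eq. (2.76).
% The sawtooth test signal g(t) = t over 0 <= t < 1 is an arbitrary choice.
T0 = 1; dt = 1e-4; t = 0:dt:T0-dt; f0 = 1/T0;
g  = t;                                        % one period of the signal
N  = 5;                                        % number of harmonics kept
a0 = sum(g)*dt/T0;                             % Eq. (2.76a)
a  = zeros(1,N); b = zeros(1,N);
for n = 1:N
    a(n) = 2/T0*sum(g.*cos(2*pi*n*f0*t))*dt;   % Eq. (2.76b)
    b(n) = 2/T0*sum(g.*sin(2*pi*n*f0*t))*dt;   % Eq. (2.76c)
end
disp(a0); disp(a); disp(b)
% For this sawtooth, a_n ~ 0 and b_n ~ -1/(n*pi), which the output confirms.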

2.8.1 Compact Trigonometric Fourier Series

If the periodic signal g(t) is real, then the trigonometric Fourier series coefficients of Eq. (2.76) are also real. Consequently, the Fourier series in Eq. (2.73) contains real-valued sine and cosine terms of the same frequency. We can combine the two terms of each frequency into a single term of the same frequency by using the well-known trigonometric identity

a_n \cos n\omega_0 t + b_n \sin n\omega_0 t = C_n \cos (n\omega_0 t + \theta_n)    (2.77)

where

C_n = \sqrt{a_n^2 + b_n^2}    (2.78a)

\theta_n = \tan^{-1} \left( \frac{-b_n}{a_n} \right)    (2.78b)


For consistency we denote the dc term a_0 by C_0; that is,

C_0 = a_0    (2.78c)

From the identity (2.77), the trigonometric Fourier series in Eq. (2.73) can be expressed in the compact form of the trigonometric Fourier series as

g(t) = C_0 + \sum_{n=1}^{\infty} C_n \cos (2\pi n f_0 t + \theta_n), \quad t_1 \le t \le t_1 + T_0    (2.79)

where the coefficients C_n and θ_n are computed from a_n and b_n by using Eq. (2.78). Equation (2.76a) shows that a_0 (or C_0) is the average value of g(t) (averaged over one period). This value can often be determined by direct inspection of g(t).

Example 2.7 Find the compact trigonometric Fourier series for the exponential e^{-t/2} shown in Fig. 2.19a over the interval 0 ≤ t ≤ π.

Because we are required to represent g(t) by the trigonometric Fourier series over the interval 0 ≤ t ≤ π, T_0 = π, and the fundamental angular frequency and the fundamental frequency are

\omega_0 = \frac{2\pi}{T_0} = 2 \text{ rad/s} \quad \text{and} \quad f_0 = \frac{1}{T_0} = \frac{1}{\pi} \text{ Hz}

respectively. Therefore

g(t) = a_0 + \sum_{n=1}^{\infty} (a_n \cos 2nt + b_n \sin 2nt), \quad 0 \le t \le \pi

where [from Eq. (2.76)]

a_0 = \frac{1}{\pi} \int_0^{\pi} e^{-t/2}\, dt = 0.504

a_n = \frac{2}{\pi} \int_0^{\pi} e^{-t/2} \cos 2nt\, dt = 0.504 \left( \frac{2}{1+16n^2} \right)

and

b_n = \frac{2}{\pi} \int_0^{\pi} e^{-t/2} \sin 2nt\, dt = 0.504 \left( \frac{8n}{1+16n^2} \right)

Therefore

g(t) = 0.504 + 0.504 \sum_{n=1}^{\infty} \frac{2}{1+16n^2} \left( \cos 2nt + 4n \sin 2nt \right), \quad 0 \le t \le \pi


Figure 2.19 (a, b) Periodic signal and (c, d) its Fourier spectra (amplitude and phase).

To find the compact Fourier series, we use Eq. (2.78) to find its coefficients:

C_0 = a_0 = 0.504

C_n = \sqrt{a_n^2 + b_n^2} = 0.504 \sqrt{\frac{4}{(1+16n^2)^2} + \frac{64n^2}{(1+16n^2)^2}} = 0.504 \left( \frac{2}{\sqrt{1+16n^2}} \right)    (2.80)

\theta_n = \tan^{-1} \left( \frac{-b_n}{a_n} \right) = \tan^{-1}(-4n) = -\tan^{-1} 4n

The amplitudes and phases of the dc component and the first seven harmonics are computed from Eq. (2.80) and displayed in Table 2.1. We can use these numerical values to express g(t) in the compact trigonometric Fourier series as

g(t) = 0.504 + 0.504 \sum_{n=1}^{\infty} \frac{2}{\sqrt{1+16n^2}} \cos (2nt - \tan^{-1} 4n), \quad 0 \le t \le \pi    (2.81a)

= 0.504 + 0.244 \cos (2t - 75.96°) + 0.125 \cos (4t - 82.87°) + 0.084 \cos (6t - 85.24°) + 0.063 \cos (8t - 86.42°) + \cdots, \quad 0 \le t \le \pi    (2.81b)


TABLE 2.1

n      0       1         2         3         4         5         6         7
C_n    0.504   0.244     0.125     0.084     0.063     0.0504    0.042     0.036
θ_n    0       -75.96°   -82.87°   -85.24°   -86.42°   -87.14°   -87.61°   -87.95°
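
The entries of Table 2.1 can be reproduced by numerical integration of Eq. (2.76) followed by the conversion (2.78); a minimal sketch:

% Minimal sketch: reproducing the C_n and theta_n of Table 2.1 numerically.
T0 = pi; dt = 1e-5; t = 0:dt:T0-dt;
g  = exp(-t/2);
Cn = zeros(1,4); th = zeros(1,4);
for n = 1:4
    a = 2/T0*sum(g.*cos(2*n*t))*dt;      % Eq. (2.76b) with omega_0 = 2
    b = 2/T0*sum(g.*sin(2*n*t))*dt;      % Eq. (2.76c)
    Cn(n) = sqrt(a^2 + b^2);             % Eq. (2.78a)
    th(n) = atan2(-b, a)*180/pi;         % Eq. (2.78b), in degrees
end
disp(Cn)   % ~ [0.244 0.125 0.084 0.063], matching Table 2.1
disp(th)   % ~ [-75.96 -82.87 -85.24 -86.42] degrees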

2.8.2 Periodicity of the Trigonometric Fourier Series

We have shown how an arbitrary signal g(t) may be expressed as a trigonometric Fourier series over any continuous interval of T_0 seconds. The Fourier series is equal to g(t) over this interval alone, as shown in Fig. 2.19a and b. Outside this interval, the series is not necessarily equal to g(t). It would be interesting to find out what happens to the Fourier series outside this interval. We now show that the trigonometric Fourier series is a periodic function of period T_0 (the period of the fundamental). Let us denote the trigonometric Fourier series on the right-hand side of Eq. (2.79) by φ(t). Therefore

\varphi(t) = C_0 + \sum_{n=1}^{\infty} C_n \cos (2\pi n f_0 t + \theta_n) \quad \text{for all } t

and

\varphi(t + T_0) = C_0 + \sum_{n=1}^{\infty} C_n \cos \left[ 2\pi n f_0 (t + T_0) + \theta_n \right]
= C_0 + \sum_{n=1}^{\infty} C_n \cos (2\pi n f_0 t + 2\pi n + \theta_n)
= C_0 + \sum_{n=1}^{\infty} C_n \cos (2\pi n f_0 t + \theta_n)
= \varphi(t) \quad \text{for all } t    (2.82)

since f_0 T_0 = 1.

This shows that the trigonometric Fourier series is a periodic function of period T_0 (the period of its fundamental). For instance, φ(t), the Fourier series on the right-hand side of Eq. (2.81b), is a periodic function in which the segment of g(t) in Fig. 2.19a over the interval 0 ≤ t ≤ π repeats periodically every π seconds, as shown in Fig. 2.19b.*

Thus, when we represent a signal g(t) by the trigonometric Fourier series over a certain interval of duration T_0, the function g(t) and its Fourier series φ(t) need be equal only over that interval of T_0 seconds. Outside this interval, the Fourier series repeats periodically with period T_0. Now if the function g(t) were itself periodic with period T_0, then a Fourier series representing g(t) over an interval T_0 will also represent g(t) for all t (not just over the interval T_0). Moreover, such a periodic signal g(t) can be generated by a periodic repetition of any of its segments of duration T_0. Therefore, the trigonometric Fourier series representing a segment of g(t) of duration T_0 starting at any instant represents g(t) for all t. This means that in computing the coefficients a_0, a_n, and b_n, we may use any value for t_1 in Eq. (2.76). In other words, we may perform this integration over any interval of T_0. Thus, the Fourier coefficients of a series representing a periodic signal g(t) (valid for all t) can be expressed as

* In reality, the series convergence at the points of discontinuity shows about 9% overshoot (Gibbs phenomenon).

a_0 = \frac{1}{T_0} \int_{T_0} g(t)\, dt    (2.83a)

a_n = \frac{2}{T_0} \int_{T_0} g(t) \cos 2\pi n f_0 t \, dt, \quad n = 1, 2, 3, \ldots    (2.83b)

and

b_n = \frac{2}{T_0} \int_{T_0} g(t) \sin 2\pi n f_0 t \, dt, \quad n = 1, 2, 3, \ldots    (2.83c)

where \int_{T_0} means that the integration is performed over any interval of T_0 seconds.

2.8.3 The Fourier Spectrum

The compact trigonometric Fourier series in Eq. (2.79) indicates that a periodic signal g(t) can be expressed as a sum of sinusoids of frequencies 0 (dc), f_0, 2f_0, ..., nf_0, ..., whose amplitudes are C_0, C_1, C_2, ..., C_n, ..., and whose phases are 0, θ_1, θ_2, ..., θ_n, ..., respectively. We can readily plot amplitude C_n versus f (the amplitude spectrum) and θ_n versus f (the phase spectrum). These two plots of magnitude and phase together are the frequency spectra of g(t).

Figure 2.19c and d shows the amplitude and phase spectra for the periodic signal φ(t) in Fig. 2.19b. These spectra tell us at a glance the frequency composition of φ(t): that is, the amplitudes and phases of the various sinusoidal components of φ(t). Knowing the frequency spectra, we can reconstruct or synthesize φ(t), as shown on the right-hand side of Eq. (2.81a). Therefore, the frequency spectra in Fig. 2.19c and d provide an alternative description, the frequency domain description of φ(t). The time domain description of φ(t) is shown in Fig. 2.19b. A signal therefore has a dual identity: the time domain identity φ(t) and the frequency domain identity (Fourier spectra). The two identities complement each other; taken together, they provide a better understanding of a signal.

Series Convergence at Jump Discontinuities
When there is a jump discontinuity in a periodic signal g(t), its Fourier series at the point of discontinuity converges to the average of the left-hand and right-hand limits of g(t) at the instant of discontinuity. This property was discussed with regard to the convergence of the approximation error to zero (in energy) in Sec. 2.7.2. In Fig. 2.19b, for instance, the periodic signal φ(t) is discontinuous at t = 0, with φ(0+) = 1 and φ(0-) = e^{-π/2} = 0.208. The corresponding Fourier series converges to the value (1 + 0.208)/2 = 0.604 at t = 0. This is easily verified from Eq. (2.81b) by setting t = 0.

Existence of the Fourier Series: Dirichlet Conditions
There are two basic conditions for the existence of the Fourier series.

1. For the series to exist, the Fourier series coefficients a_0, a_n, and b_n must be finite. From Eq. (2.76), it follows that the existence of these coefficients is guaranteed if g(t) is absolutely integrable over one period; that is,

\int_{T_0} |g(t)|\, dt < \infty    (2.84)


This is known as the weak Dirichlet condition. If a function g(t) satisfies the weak Dirichlet condition, the existence of a Fourier series is guaranteed, but the series may not converge at every point. For example, if a function g(t) is infinite at some point, then obviously the series representing the function will be nonconvergent at that point. Similarly, if a function has an infinite number of maxima and minima in one period, then the function contains an appreciable amount of infinite-frequency components, the higher coefficients in the series do not decay rapidly, and the series will not converge rapidly or uniformly. Thus, for a convergent Fourier series, in addition to condition (2.84), we require that:

2. The function g(t) have only a finite number of maxima and minima in one period, and only a finite number of finite discontinuities in one period.

These two conditions are known as the strong Dirichlet conditions. We note here that any periodic waveform that can be generated in a laboratory satisfies the strong Dirichlet conditions and hence possesses a convergent Fourier series. Thus, the physical possibility of a periodic waveform is a valid and sufficient condition for the existence of a convergent series.

Figure 2.20 (a) Square pulse periodic signal and (b) its Fourier spectrum.

Example 2.8 Find the compact trigonometric Fourier series for the periodic square wave w(t) shown in Fig. 2.20a, and sketch its amplitude and phase spectra.

The Fourier series is

w(t) = a_0 + \sum_{n=1}^{\infty} (a_n \cos 2\pi n f_0 t + b_n \sin 2\pi n f_0 t)



where

a_0 = \frac{1}{T_0} \int_{T_0} w(t)\, dt

In the foregoing equation, we may integrate w(t) over any interval of duration T_0. In this particular case, the signal period is T_0 = 2π. Thus

\omega_0 = \frac{2\pi}{T_0} = 1

and it is simpler to use ω_0 to represent the Fourier series. Figure 2.20a shows that the best choice for the region of integration is from -T_0/2 to T_0/2. Because w(t) = 1 only over (-T_0/4, T_0/4) and w(t) = 0 over the remaining segment,

a_0 = \frac{1}{T_0} \int_{-T_0/4}^{T_0/4} dt = \frac{1}{2}    (2.85a)

We could have found a_0, the average value of w(t), to be 0.5 merely by inspection of w(t) in Fig. 2.20a. Also,

a_n = \frac{2}{T_0} \int_{-T_0/4}^{T_0/4} \cos n\omega_0 t \, dt = \frac{2}{n\pi} \sin \left( \frac{n\pi}{2} \right) = \begin{cases} 0 & n \text{ even} \\ \frac{2}{n\pi} & n = 1, 5, 9, 13, \ldots \\ -\frac{2}{n\pi} & n = 3, 7, 11, 15, \ldots \end{cases}    (2.85b)

and

b_n = \frac{2}{T_0} \int_{-T_0/4}^{T_0/4} \sin n\omega_0 t \, dt = 0    (2.85c)

In these derivations we used the fact that f_0 T_0 = 1. Therefore

w(t) = \frac{1}{2} + \frac{2}{\pi} \left( \cos 2\pi f_0 t - \frac{1}{3} \cos 6\pi f_0 t + \frac{1}{5} \cos 10\pi f_0 t - \frac{1}{7} \cos 14\pi f_0 t + \cdots \right)    (2.86)

= \frac{1}{2} + \frac{2}{\pi} \left( \cos t - \frac{1}{3} \cos 3t + \frac{1}{5} \cos 5t - \frac{1}{7} \cos 7t + \cdots \right)

Observe that b_n = 0 and all the sine terms are zero. Only the cosine terms appear in the trigonometric series. The series is therefore already in the compact form, except that the amplitudes of alternating harmonics are negative. Now by definition, the amplitudes C_n are positive [see Eq. (2.78a)]. The negative sign can be accommodated by a phase of π radians, as seen from the trigonometric identity*

-\cos x = \cos (x - \pi)

Using this fact, we can express the series in Eq. (2.86) as

w(t) = \frac{1}{2} + \frac{2}{\pi} \left[ \cos \omega_0 t + \frac{1}{3} \cos (3\omega_0 t - \pi) + \frac{1}{5} \cos 5\omega_0 t + \frac{1}{7} \cos (7\omega_0 t - \pi) + \frac{1}{9} \cos 9\omega_0 t + \cdots \right]

This is the desired form of the compact trigonometric Fourier series. The amplitudes are

C_0 = \frac{1}{2}

C_n = \begin{cases} 0 & n \text{ even} \\ \frac{2}{n\pi} & n \text{ odd} \end{cases}

and the phases are

\theta_n = \begin{cases} 0 & n \neq 3, 7, 11, 15, \ldots \\ -\pi & n = 3, 7, 11, 15, \ldots \end{cases}

We could plot the amplitude and phase spectra using these values. We can, however, simplify our task in this special case if we allow the amplitude C_n to take on negative values. If this is allowed, we do not need a phase of -π to account for the sign. This means that the phases of all components are zero, and we can discard the phase spectrum and manage with only the amplitude spectrum, as shown in Fig. 2.20b. Observe that there is no loss of information in doing so and that the amplitude spectrum in Fig. 2.20b has the complete information about the Fourier series in (2.86). Therefore, whenever all sine terms vanish (b_n = 0), it is convenient to allow C_n to take on negative values. This permits the spectral information to be conveyed by a single spectrum, the amplitude spectrum. Because C_n can be positive as well as negative, the spectrum is called the amplitude spectrum rather than the magnitude spectrum.
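
The partial sums of Eq. (2.86) are easy to visualize. This minimal sketch synthesizes w(t) from its first several harmonics (the number of terms kept is an arbitrary choice) and exhibits the ringing near the jumps noted earlier in connection with the Gibbs phenomenon.

% Minimal sketch: partial sums of the square-wave series, Eq. (2.86).
t = -2*pi:0.001:2*pi;
w = 0.5*ones(size(t));                         % dc term
for n = 1:2:19                                 % odd harmonics (even ones vanish)
    w = w + (2/(n*pi))*sin(n*pi/2)*cos(n*t);   % a_n cos(n*omega_0*t), omega_0 = 1
end
plot(t, w); xlabel('t'); title('19-harmonic approximation of w(t)');
% The roughly 9% overshoot near each jump is the Gibbs phenomenon.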

Another useful function related to the periodic square wave is the bipolar square wave w_0(t) shown in Fig. 2.21. We encounter this signal in switching applications.

Figure 2.21 Bipolar square pulse periodic signal.

* Because cos (x ± π) = -cos x, we could have chosen the phase π or -π. In fact, cos (x ± Nπ) = -cos x for any odd integral value of N. Therefore, the phase can be chosen as ±Nπ, where N is any convenient odd integer.


Note that w_0(t) is basically w(t) minus its dc component. It is easy to see that

w_0(t) = 2[w(t) - 0.5]

Hence, from Eq. (2.86), it follows that

w_0(t) = \frac{4}{\pi} \left( \cos \omega_0 t - \frac{1}{3} \cos 3\omega_0 t + \frac{1}{5} \cos 5\omega_0 t - \frac{1}{7} \cos 7\omega_0 t + \cdots \right)    (2.87)

Comparison of Eq. (2.87) with Eq. (2.86) shows that the Fourier components of w_0(t) are identical to those of w(t) in every respect except for the doubling of the amplitudes and the loss of the dc term.

Example 2.9 Find the trigonometric Fourier series and sketch the corresponding spectra for the periodic impulse train δ_{T_0}(t) shown in Fig. 2.22a.

The trigonometric Fourier series for δ_{T_0}(t) is given by

\delta_{T_0}(t) = C_0 + \sum_{n=1}^{\infty} C_n \cos (2\pi n f_0 t + \theta_n), \quad f_0 = \frac{1}{T_0}

We first compute a_0, a_n, and b_n:

a_0 = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} \delta(t)\, dt = \frac{1}{T_0}

a_n = \frac{2}{T_0} \int_{-T_0/2}^{T_0/2} \delta(t) \cos 2\pi n f_0 t \, dt = \frac{2}{T_0}

Figure 2.22 (a) Impulse train and (b) its Fourier spectrum.


This result follows from the sampling property (2.19a) of the impulse function. Similarly, by using the sampling property of the impulse, we obtain

b_n = \frac{2}{T_0} \int_{-T_0/2}^{T_0/2} \delta(t) \sin 2\pi n f_0 t \, dt = 0

Therefore, C_0 = 1/T_0, C_n = 2/T_0, and θ_n = 0. Thus

\delta_{T_0}(t) = \frac{1}{T_0} \left( 1 + 2 \sum_{n=1}^{\infty} \cos 2\pi n f_0 t \right)    (2.88)

Figure 2.22b shows the amplitude spectrum. The phase spectrum is zero.

2.8.4 The Effect of Symmetry

The Fourier series for the signal g(t) in Fig. 2.19a (Example 2.7) consists of sine and cosine terms, but the series for the signal w(t) in Fig. 2.20a (Example 2.8) consists of cosine terms only. In some cases the Fourier series consists of sine terms only. None of this is accidental. It can be shown that the Fourier series of any even periodic function g(t) consists of cosine terms only, and the series for any odd periodic function g(t) consists of sine terms only (see Prob. 2.8-1).

2.9 THE EXPONENTIAL FOURIER SERIES

We noted earlier that orthogonal signal representation is NOT unique. While the trigonometric Fourier series allows a good representation of all periodic signals, here we provide a new orthogonal representation of periodic signals that is equivalent to the trigonometric Fourier series but has a simpler form.

First of all, it is clear that the set of exponentials e^{jnω_0 t} (n = 0, ±1, ±2, ...) is orthogonal over any interval of duration T_0 = 2π/ω_0; that is,

\int_{T_0} e^{jm\omega_0 t} \left( e^{jn\omega_0 t} \right)^* dt = \int_{T_0} e^{j(m-n)\omega_0 t}\, dt = \begin{cases} 0 & m \neq n \\ T_0 & m = n \end{cases}    (2.89)

Moreover, this set is a complete set.3,4 From Eqs. (2.66) and (2.69), it follows that a signal g(t) can be expressed over an interval of T_0 seconds as an exponential Fourier series

g(t) = \sum_{n=-\infty}^{\infty} D_n e^{jn\omega_0 t}    (2.90)


where [see Eq. (2.66)]

D_n = \frac{1}{T_0} \int_{T_0} g(t)\, e^{-jn\omega_0 t}\, dt    (2.91)

The exponential Fourier series is basically another form of the trigonometric Fourier series. Each sinusoid of frequency f can be expressed as the sum of two exponentials, e^{j2πft} and e^{-j2πft}. As a result, the exponential Fourier series consists of components of the form e^{jn2πf_0 t}, with n varying from -∞ to ∞. The exponential Fourier series in Eq. (2.90) is periodic with period T_0.

To see its close connection with the trigonometric series, we shall rederive the exponential Fourier series from the trigonometric Fourier series. A sinusoid in the trigonometric series can be expressed as a sum of two exponentials by using Euler's formula:

C_n \cos (n\omega_0 t + \theta_n) = \frac{C_n}{2} \left[ e^{j(n\omega_0 t + \theta_n)} + e^{-j(n\omega_0 t + \theta_n)} \right] = D_n e^{jn\omega_0 t} + D_{-n} e^{-jn\omega_0 t}    (2.92)

where

D_n = \frac{C_n}{2} e^{j\theta_n}, \quad D_{-n} = \frac{C_n}{2} e^{-j\theta_n}    (2.93)

Recall that g(t) is given by

g(t) = C_0 + \sum_{n=1}^{\infty} C_n \cos (n\omega_0 t + \theta_n)

Use of Eq. (2.92) in the foregoing equation (and letting C_0 = D_0) yields

g(t) = D_0 + \sum_{n=1}^{\infty} \left( D_n e^{jn\omega_0 t} + D_{-n} e^{-jn\omega_0 t} \right) = \sum_{n=-\infty}^{\infty} D_n e^{jn\omega_0 t}

which is precisely equivalent to Eq. (2.90) derived earlier. Equation (2.93) shows the close connection between the coefficients of the trigonometric and the exponential Fourier series.

Observe the compactness of expressions (2.90) and (2.91), and compare them with the expressions corresponding to the trigonometric Fourier series. These two equations demonstrate very clearly the principal virtue of the exponential Fourier series. First, the form of the series is more compact. Second, the mathematical expression for deriving the coefficients of the series is also compact. It is much more convenient to handle the exponential series than the trigonometric one. In system analysis as well, the exponential form proves more convenient than the trigonometric form. For these reasons we shall use the exponential (rather than trigonometric) representation of signals in the rest of the book.



Example 2.10 Find the exponential Fourier series for the signal in Fig. 2.19b (Example 2.7).

In this case, T_0 = π, 2πf_0 = 2π/T_0 = 2, and

\varphi(t) = \sum_{n=-\infty}^{\infty} D_n e^{j2nt}

where

D_n = \frac{1}{T_0} \int_{T_0} \varphi(t)\, e^{-j2nt}\, dt = \frac{1}{\pi} \int_0^{\pi} e^{-t/2} e^{-j2nt}\, dt = \frac{-1}{\pi \left( \frac{1}{2} + j2n \right)} \left. e^{-\left( \frac{1}{2} + j2n \right)t} \right|_0^{\pi} = \frac{0.504}{1 + j4n}    (2.94)

and

\varphi(t) = 0.504 \sum_{n=-\infty}^{\infty} \frac{1}{1 + j4n} e^{j2nt}    (2.95a)

= 0.504 \left( 1 + \frac{e^{j2t}}{1 + j4} + \frac{e^{j4t}}{1 + j8} + \frac{e^{j6t}}{1 + j12} + \cdots + \frac{e^{-j2t}}{1 - j4} + \frac{e^{-j4t}}{1 - j8} + \frac{e^{-j6t}}{1 - j12} + \cdots \right)    (2.95b)

Observe that the coefficients D_n are complex. Moreover, D_n and D_{-n} are conjugates, as expected [see Eq. (2.93)].
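
The closed form D_n = 0.504/(1 + j4n) can be checked against a direct numerical evaluation of Eq. (2.91); a minimal sketch:

% Minimal sketch: numerical exponential Fourier coefficients, Eq. (2.91),
% compared with the closed form of Example 2.10.
T0 = pi; dt = 1e-5; t = 0:dt:T0-dt;
g  = exp(-t/2);
for n = -2:2
    Dn = sum(g.*exp(-1j*2*n*t))*dt/T0;     % Eq. (2.91), omega_0 = 2
    Dc = 0.504/(1 + 4j*n);                 % closed form, Eq. (2.94)
    fprintf('n = %2d   numeric %7.4f%+7.4fj   closed form %7.4f%+7.4fj\n', ...
            n, real(Dn), imag(Dn), real(Dc), imag(Dc));
end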

Exponential Fourier Spectra
In exponential spectra, we plot the coefficients D_n as a function of ω. But since D_n is complex in general, we need two plots: the real and imaginary parts of D_n, or the amplitude (magnitude) and the angle of D_n. We prefer the latter because of its close connection to the amplitudes and phases of the corresponding components of the trigonometric Fourier series. We therefore plot |D_n| versus ω and ∠D_n versus ω. This requires that the coefficients D_n be expressed in polar form as |D_n| e^{j∠D_n}.

Comparison of Eqs. (2.76a) and (2.91) (for n = 0) shows that D_0 = a_0 = C_0. Equation (2.93) shows that for a real periodic signal, the twin coefficients D_n and D_{-n} are conjugates, and

|D_n| = |D_{-n}| = \frac{1}{2} C_n    (2.96a)

\angle D_n = \theta_n \quad \text{and} \quad \angle D_{-n} = -\theta_n    (2.96b)

Thus,

D_n = |D_n| e^{j\theta_n} \quad \text{and} \quad D_{-n} = |D_n| e^{-j\theta_n}    (2.97)

Note that |D_n| are the amplitudes (magnitudes) and ∠D_n are the angles of the various exponential components. From Eq. (2.96) it follows that the amplitude spectrum (|D_n| vs. f) is an even function of f and the angle spectrum (∠D_n vs. f) is an odd function of f when g(t) is a real signal.

For the series in Example 2.10 [see Eq. (2.94)], for instance,

D_0 = 0.504

D_1 = \frac{0.504}{1 + j4} = 0.122 e^{-j75.96°} \;\Rightarrow\; |D_1| = 0.122, \; \angle D_1 = -75.96°

D_{-1} = \frac{0.504}{1 - j4} = 0.122 e^{j75.96°} \;\Rightarrow\; |D_{-1}| = 0.122, \; \angle D_{-1} = 75.96°

and

D_2 = \frac{0.504}{1 + j8} = 0.0625 e^{-j82.87°} \;\Rightarrow\; |D_2| = 0.0625, \; \angle D_2 = -82.87°

D_{-2} = \frac{0.504}{1 - j8} = 0.0625 e^{j82.87°} \;\Rightarrow\; |D_{-2}| = 0.0625, \; \angle D_{-2} = 82.87°

and so on. Note that D_n and D_{-n} are conjugates, as expected [see Eq. (2.96)]. Figure 2.23 shows the frequency spectra (amplitude and angle) of the exponential Fourier series for the periodic signal φ(t) in Fig. 2.19b.

We notice some interesting features of these spectra. First, the spectra exist for positive as well as negative values of f (the frequency). Second, the amplitude spectrum is an even function of f and the angle spectrum is an odd function of f. Finally, we see a close connection between these spectra and the spectra of the corresponding trigonometric Fourier series for φ(t) (Fig. 2.19c and d). Equation (2.96a) shows the connection of the trigonometric spectra (C_n and θ_n) with the exponential spectra (|D_n| and ∠D_n). The dc components D_0 and C_0 are identical in both spectra. Moreover, the exponential amplitude spectrum |D_n| is half the trigonometric amplitude spectrum C_n for n ≥ 1. The exponential angle spectrum ∠D_n is identical to the trigonometric phase spectrum θ_n for n > 0. We can therefore produce the exponential spectra merely by inspection of the trigonometric spectra, and vice versa.

Figure 2.23 Exponential Fourier spectra for the signal in Fig. 2.19a.

What Does Negative Frequency Mean?
The existence of the spectrum at negative frequencies is somewhat disturbing to some people because, by definition, the frequency (the number of repetitions per second) is a positive quantity. How do we interpret a negative frequency -f_0? Using a trigonometric identity, a sinusoid of negative frequency -f_0 can be expressed, with ω_0 = 2πf_0, as

\cos (-\omega_0 t + \theta) = \cos (\omega_0 t - \theta)

This clearly shows that the frequency of the sinusoid cos (-ω_0 t + θ) is |ω_0|, which is a positive quantity. The common idea that a frequency must be positive comes from the traditional notion that frequency is associated with a real-valued sinusoid (such as a sine or a cosine). In reality, the concept of frequency for a real-valued sinusoid describes only the rate of the sinusoidal variation, without concern for the direction of the variation. This is because real-valued sinusoidal signals do NOT contain information on the direction of their variation.

The concept of negative frequency is meaningful only when we are considering complex sinusoids, for which the rate and the direction of variation are both meaningful. Observe that

e^{\pm j\omega_0 t} = \cos \omega_0 t \pm j \sin \omega_0 t

This relationship clearly shows that positive or negative ω_0 both lead to periodic variation at the same rate. However, the resulting complex signals are NOT the same. Because |e^{\pm j\omega_0 t}| = 1, both e^{+jω_0 t} and e^{-jω_0 t} are unit length complex variables that can be shown on the complex plane. We illustrate the two exponential sinusoids as unit length complex variables varying with time t in Fig. 2.24. The rotation rate for both exponentials e^{\pm j\omega_0 t} is |ω_0|. It is clear that for positive frequency, the exponential sinusoid rotates counterclockwise, whereas for negative frequency, the exponential sinusoid rotates clockwise. This illustrates the real meaning of negative frequency.

There exists a good analogy between positive/negative frequency and positive/negative velocity. Just as people are reluctant to use negative velocity to describe a moving object, they are equally unwilling to accept the notion of "negative" frequency. However, once we understand that negative velocity simply refers to both the negative direction and the actual speed of a moving object, negative velocity makes perfect sense. Likewise, negative frequency does NOT describe the rate of periodic variation of a sine or a cosine. It describes the direction of rotation of a unit length exponential sinusoid and its rate of revolution.
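
The two senses of rotation are easy to check numerically; a minimal sketch (the frequency value is an arbitrary choice):

% Minimal sketch: e^{+j w0 t} rotates counterclockwise, e^{-j w0 t} clockwise.
w0 = 2*pi; t = 0:0.01:1;            % one full revolution
zp = exp( 1j*w0*t);                 % positive-frequency exponential sinusoid
zn = exp(-1j*w0*t);                 % negative-frequency exponential sinusoid
plot(real(zp), imag(zp), 'b', real(zn), imag(zn), 'r--');
axis equal; xlabel('Re'); ylabel('Im');
% Both traces are the unit circle; the sense of rotation differs:
angle(zp(2))   % small positive angle: counterclockwise start
angle(zn(2))   % small negative angle: clockwise start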


Figure 2.24 (a) Unit length complex variable with positive frequency, rotating counterclockwise; (b) unit length complex variable with negative frequency, rotating clockwise.

Another way of looking at the situation is to say that the exponential spectra are a graphical representation of the coefficients D_n as a function of f. The existence of the spectrum at f = -nf_0 is merely an indication that an exponential component e^{-jnω_0 t} exists in the series. We know that a sinusoid of frequency nω_0 can be expressed in terms of a pair of exponentials e^{jnω_0 t} and e^{-jnω_0 t} [Eq. (2.92)]. That both sine and cosine consist of positive- and negative-frequency exponential sinusoidal components clearly indicates that we are NOT able to use ω_0 alone to describe the direction of their periodic variations. Indeed, both sine and cosine functions of frequency ω_0 consist of two equal-sized exponential sinusoids of frequency ±ω_0. Thus, the frequency of a sine or cosine is the absolute value of its two component frequencies and denotes only the rate of the sinusoidal variation.

Example 2.11 Find the exponential Fourier series for the periodic square wave w(t) shown in Fig. 2.20a.

w(t) = \sum_{n=-\infty}^{\infty} D_n e^{jn2\pi f_0 t}

where

D_n = \frac{1}{T_0} \int_{T_0} w(t)\, e^{-jn2\pi f_0 t}\, dt = \frac{1}{T_0} \int_{-T_0/4}^{T_0/4} e^{-jn2\pi f_0 t}\, dt
= \frac{-1}{jn2\pi f_0 T_0} \left( e^{-jn2\pi f_0 T_0/4} - e^{jn2\pi f_0 T_0/4} \right)
= \frac{2}{n2\pi f_0 T_0} \sin \left( \frac{n2\pi f_0 T_0}{4} \right) = \frac{1}{n\pi} \sin \left( \frac{n\pi}{2} \right)

where the last step uses f_0 T_0 = 1; for n = 0 the integral gives D_0 = 1/2 directly.

In this case D_n is real. Consequently, we can do without the phase or angle plot if we plot D_n versus f instead of the amplitude spectrum (|D_n| vs. f), as shown in Fig. 2.25. Compare this spectrum with the trigonometric spectrum in Fig. 2.20b. Observe that D_0 = C_0 and |D_n| = |D_{-n}| = C_n/2, as expected.

Figure 2.25 Exponential Fourier spectrum of the square pulse periodic signal.

Example 2.12 Find the exponential Fourier series and sketch the corresponding spectra for the impulse train δ_{T_0}(t) shown in Fig. 2.26a.

The exponential Fourier series is given by

\delta_{T_0}(t) = \sum_{n=-\infty}^{\infty} D_n e^{jn2\pi f_0 t}, \quad f_0 = \frac{1}{T_0}    (2.98)

where

D_n = \frac{1}{T_0} \int_{T_0} \delta_{T_0}(t)\, e^{-jn2\pi f_0 t}\, dt

Choosing the interval of integration as (-T_0/2, T_0/2) and recognizing that over this interval δ_{T_0}(t) = δ(t), we write

D_n = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} \delta(t)\, e^{-jn2\pi f_0 t}\, dt

In this integral, the impulse is located at t = 0. From the sampling property of the impulse function, the integral on the right-hand side is the value of e^{-jn2\pi f_0 t} at t = 0 (where the impulse is located). Therefore

D_n = \frac{1}{T_0}    (2.99)

and

\delta_{T_0}(t) = \frac{1}{T_0} \sum_{n=-\infty}^{\infty} e^{jn2\pi f_0 t}    (2.100)

Equation (2.100) shows that the exponential spectrum is uniform (D_n = 1/T_0) for all frequencies, as shown in Fig. 2.26b. The spectrum, being real, requires only the amplitude plot; all phases are zero. Compare this spectrum with the trigonometric spectrum shown in Fig. 2.22b. The dc components are identical, and the exponential spectrum amplitudes are half of those in the trigonometric spectrum for all f > 0.

Figure 2.26 (a) Impulse train and (b) its exponential Fourier spectrum.

Parseval's Theorem in the Fourier Series
A periodic signal g(t) is a power signal, and every term in its Fourier series is also a power signal. The power P_g of g(t) is equal to the power of its Fourier series. Because the Fourier series consists of terms that are mutually orthogonal over one period, the power of the Fourier series is equal to the sum of the powers of its Fourier components. This follows from Parseval's theorem. We have already demonstrated this result in Example 2.2 for the trigonometric Fourier series. It is also valid for the exponential Fourier series. Thus, for the trigonometric Fourier series

g(t) = C_0 + \sum_{n=1}^{\infty} C_n \cos (n\omega_0 t + \theta_n)

the power of g(t) is given by

P_g = C_0^2 + \frac{1}{2} \sum_{n=1}^{\infty} C_n^2    (2.101)

For the exponential Fourier series

g(t) = D_0 + \sum_{n=-\infty,\, n \neq 0}^{\infty} D_n e^{jn\omega_0 t}

the power is given by (see Prob. 2.1-7)

P_g = \sum_{n=-\infty}^{\infty} |D_n|^2    (2.102a)


For a real g(t), |D_{-n}| = |D_n|. Therefore

P_g = D_0^2 + 2 \sum_{n=1}^{\infty} |D_n|^2    (2.102b)

Comment: Parseval's theorem occurs in many different forms, such as in Eqs. (2.70), (2.101), and (2.102a). Yet another form is found in the next chapter for nonperiodic signals. Although these forms appear different, they all state the same principle: that the square of the length of a vector equals the sum of the squares of its orthogonal components. The form (2.70) applies to energy signals, the form (2.101) applies to periodic signals represented by the trigonometric Fourier series, and the form (2.102a) applies to periodic signals represented by the exponential Fourier series.
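
As a numerical check, Parseval's theorem in the form (2.102b) can be applied to the square wave of Example 2.11, whose power computed directly is 0.5 (the signal equals 1 over half of each period); a minimal sketch:

% Minimal sketch: checking Parseval's theorem, Eq. (2.102b), for the
% square wave of Example 2.11, where D_0 = 1/2 and D_n = sin(n*pi/2)/(n*pi).
N  = 1000;                        % harmonics kept (series truncation)
n  = 1:N;
Dn = sin(n*pi/2)./(n*pi);         % exponential coefficients for n >= 1
P  = 0.25 + 2*sum(Dn.^2);         % D_0^2 + 2*sum|D_n|^2, Eq. (2.102b)
fprintf('P from series = %.5f (exact value 0.5)\n', P);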

2.10 MATLAB EXERCISES

In this section, we provide some basic MATLAB exercises to illustrate the process of signal generation, signal operations, and Fourier series analysis.

Basic Signals and Signal Graphing
Basic functions can be defined by using MATLAB's m-files. We give three MATLAB programs that implement three basic functions when a time vector t is provided:

• ustep.m implements the unit step function u(t)

• rect.m implements the standard rectangular function rect(t)

• triangl.m implements the standard triangle function Δ(t)

% (file name: ustep.m)
% The unit step function is a function of time 't'.
% Usage y = ustep(t)
%
%   ustep(t) = 0,  if t <  0
%   ustep(t) = 1,  if t >= 0
%
% t - must be real-valued and can be a vector or a matrix
%
function y = ustep(t)
y = (t >= 0);
end

Figure 2.27 Graphing a signal.

% (file name: rect.m)
% The rectangular function is a function of time 't'.
% Usage y = rect(t)
%
%   rect(t) = 1,  if |t| < 0.5
%   rect(t) = 0,  if |t| > 0.5
%
% t - must be real-valued and can be a vector or a matrix
%
function y = rect(t)
y = (sign(t+0.5) - sign(t-0.5) > 0);
end

% (file name: triangl.m)
% The triangle function is a function of time 't'.
% Usage y = triangl(t)
%
%   triangl(t) = 1-|t|,  if |t| <  1
%   triangl(t) = 0,      if |t| >= 1
%
% t - must be real-valued and can be a vector or a matrix
%
function y = triangl(t)
y = (1-abs(t)).*(t >= -1).*(t < 1);
end

We now show how to use MATLAB to generate a simple signal plot through an example program, siggraf.m. In this example, we construct and plot the signal

y(t) = e^{-t} \sin (10\pi t)\, u(t+1)

The resulting figure is shown in Fig. 2.27.

% (file name: siggraf.m)
% To graph a signal, the first step is to determine
% the x-axis and the y-axis to plot.
% We can first decide the length of x-axis to plot
t = [-2:0.01:3];    % "t" is from -2 to 3 in 0.01 increment
% Then evaluate the signal over the range of "t" to plot
y = exp(-t).*sin(10*pi*t).*ustep(t+1);
figure(1); fig1 = plot(t,y);          % plot t vs y in figure 1
set(fig1,'Linewidth',2);              % choose a wider line-width
xlabel('\it t');                      % use italic 't' to label x-axis
ylabel('{\bf y}({\it t})');           % use boldface 'y' to label y-axis
title('{\bf y}_{\rm time domain}');   % title can use subscript

Signal Operations
Some useful signal operations introduced in Sec. 2.3 can be implemented and illustrated by means of MATLAB. We provide the example program sigtransf.m to demonstrate the basic operations. First, we generate a segment of an exponentially decaying function

y(t) = e^{-|t|/4} [u(t) - u(t-4)]

We then apply time scaling and time shifting to generate the new signals

y_1(t) = y(2t), \quad y_2(t) = y(t+2)

Finally, time scaling, time inversion, and time shifting are all utilized to generate a new signal

y_3(t) = y(2-2t)

The original signal y(t) and its transformations are all shown in Fig. 2.28.

% (file name: sigtransf.m)
% To apply a time-domain transformation on a signal y = g(t),
% simply redefine the x-axis.
% To graph a signal, the first step is to determine
% the x-axis and the y-axis to plot.
% We can first decide the range of 't' for g(t)
t = [-3:0.002:5];    % "t" is from -3 to 5 in 0.002 increment
% Then evaluate the signal over the range of "t" and
% form the corresponding signal vector
% y = g(t) = exp(-|t|/4)[u(t)-u(t-4)]
y = exp(-abs(t)/4).*(ustep(t)-ustep(t-4));
figure(1);
subplot(221); fig0 = plot(t,y);   % plot y vs t in subfigure 1
set(fig0,'Linewidth',2);          % choose a wider line-width
xlabel('\it t');                  % use italic 't' to label x-axis
ylabel('{\bf y}({\it t})');       % use boldface 'y' to label y-axis


Figure 2.28 Examples of basic signal operations: the original signal y(t), time scaling y(2t), time shifting y(t+2), and y(2-2t).

title('original signal y(t)');
% Now we evaluate the time-scaled signal
t1 = t*2;                          % scale in time
% Form the corresponding signal vector
y1 = exp(-abs(t1)/4).*(ustep(t1)-ustep(t1-4));
figure(1);
subplot(222); fig1 = plot(t,y1);   % plot y1 vs t in subfigure 2
set(fig1,'Linewidth',2);           % choose a wider line-width
xlabel('\it t');                   % use italic 't' to label x-axis
ylabel('{\bf y_1}({\it t})');      % use boldface 'y_1' to label y-axis
title('time scaling y(2t)');

% Now we evaluate the time-shifted signal
t2 = t+2;                          % shift in time
% Form the corresponding signal vector
y2 = exp(-abs(t2)/4).*(ustep(t2)-ustep(t2-4));
figure(1);
subplot(223); fig2 = plot(t,y2);   % plot y2 vs t in subfigure 3
set(fig2,'Linewidth',2);           % choose a wider line-width
xlabel('\it t');                   % use italic 't' to label x-axis
ylabel('{\bf y_2}({\it t})');      % use boldface 'y_2' to label y-axis
title('time shifting y(t+2)');


% Now we evaluate the time-reversed and scaled signal
t3 = 2-t*2;                        % time reversal + scaling in time
% Form the corresponding signal vector
y3 = exp(-abs(t3)/4).*(ustep(t3)-ustep(t3-4));
figure(1);
subplot(224); fig3 = plot(t,y3);   % plot y3 vs t in subfigure 4
set(fig3,'Linewidth',2);           % choose a wider line-width
xlabel('\it t');                   % use italic 't' to label x-axis
ylabel('{\bf y_3}({\it t})');      % use boldface 'y_3' to label y-axis
title('y(2-2t)');

subplot(221); axis([-3 5 -.5 1.5]); grid;
subplot(222); axis([-3 5 -.5 1.5]); grid;
subplot(223); axis([-3 5 -.5 1.5]); grid;
subplot(224); axis([-3 5 -.5 1.5]); grid;

Periodic Signals and Signal Power
Periodic signals can be generated by first determining the signal values in one period and then repeating the same signal vector multiple times.

In the following MATLAB program, PfuncEx.m, we generate a periodic signal and observe its behavior over 2M periods. The period in this example is T = 6. The program also evaluates the average signal power, which is stored as the variable y_power, and the signal energy in one period, which is stored in the variable y_energyT.

% (file name: PfuncEx.m)
% This example generates a periodic signal, plots the signal,
% and evaluates the average signal power in y_power and the signal
% energy in 1 period T: y_energyT
echo off; clear; clf;
% To generate a periodic signal g_T(t),
% we can first decide the signal within one period of 'T' for g(t)
Dt = 0.002;   % Time interval (to sample the signal)
T  = 6;       % period = T
M  = 3;       % To generate 2M periods of the signal
t  = [0:Dt:T-Dt];   % "t" goes for one period [0, T] in Dt increment
% Then evaluate the signal over the range of "t"
y = exp(-abs(t)/2).*sin(2*pi*t).*(ustep(t)-ustep(t-4));
% Multiple periods can now be generated.
time = [];
y_periodic = [];
for i = -M:M-1,
    time = [time i*T+t];
    y_periodic = [y_periodic y];
end
figure(1); fy = plot(time, y_periodic);


Figure 2.29 Generating a periodic signal.

set(fy,'Linewidth',2); xlabel('t');
echo on
% Compute average power
y_power = sum(y_periodic.*y_periodic)*Dt/(max(time)-min(time))
% Compute signal energy in 1 period T
y_energyT = sum(y.*conj(y))*Dt

The program generates the periodic signal shown in Fig. 2.29 and the numerical answers:

y_power = 0.0813

y_energyT = 0.4878

Signal Correlation
The concept of signal correlation introduced in Sec. 2.6 can be implemented directly by means of a MATLAB program. In the next computer example, the program sign_cor.m evaluates the signal correlation coefficients between x(t) and each of the signals g_1(t), g_2(t), ..., g_5(t). The program first generates Fig. 2.30, which illustrates the six signals in the time domain.

% (file name: sign_cor.m)
clear
% To generate the 6 signals x(t), g_1(t), ..., g_5(t)
% of Example 2.6,
% we can first decide the signal within the period of 'T' for g(t)
Dt = 0.01;   % time increment Dt


Figure 2.30 The six signals x(t), g_1(t), ..., g_5(t) from Example 2.6.

T = 6.0;          % time duration = T
t = [-1:Dt:T];    % "t" goes between [-1, T] in Dt increment
% Then evaluate the signals over the range of "t" to plot
x  = ustep(t)-ustep(t-5);
g1 = 0.5*(ustep(t)-ustep(t-5));
g2 = -(ustep(t)-ustep(t-5));
g3 = exp(-t/5).*(ustep(t)-ustep(t-5));
g4 = exp(-t).*(ustep(t)-ustep(t-5));
g5 = sin(2*pi*t).*(ustep(t)-ustep(t-5));

subplot(231); sig1 = plot(t,x,'k');    % plot x(t)
xlabel('t'); ylabel('x(t)');           % label axes
set(sig1,'Linewidth',2);               % change linewidth
axis([-.5 6 -1.2 1.2]); grid           % set plot range
subplot(232); sig2 = plot(t,g1,'k');
xlabel('t'); ylabel('g_1(t)');
set(sig2,'Linewidth',2);
axis([-.5 6 -1.2 1.2]); grid
subplot(233); sig3 = plot(t,g2,'k');
xlabel('t'); ylabel('g_2(t)');
set(sig3,'Linewidth',2);
axis([-.5 6 -1.2 1.2]); grid
subplot(234); sig4 = plot(t,g3,'k');
xlabel('t'); ylabel('g_3(t)');
set(sig4,'Linewidth',2);
axis([-.5 6 -1.2 1.2]); grid
subplot(235); sig5 = plot(t,g4,'k');
xlabel('t'); ylabel('g_4(t)');
set(sig5,'Linewidth',2);
axis([-.5 6 -1.2 1.2]); grid