Wireless & Cellular Communications
Class Notes for TLEN-5510
Thomas Schwengler

Copyright ©2010 Thomas Schwengler. All rights reserved. No part of this document may be reproduced, reprinted, transmitted, or utilized in any form, without written permission from the author. Requests to the author should be addressed to [email protected].




Chapter 1 Wireless Communications

This chapter introduces digital wireless communications systems and the spectrum landscape.

1.1 Brief History

Many textbooks begin with a good historical overview of wireless communications and excellent insights (see for instance [3, 5, 7]), and we will not attempt to reproduce them; let us simply summarize in a few statements the major innovations in wireless systems.

1.1.1 Past

Early smoke signals and carrier pigeons may of course be seen as a form of wireless communications, but offer little modern interest. Early coding schemes can be attributed to the British scientist Robert Hooke, who invented large mobile panels coding the letters of the alphabet (1684). More elaborate schemes appeared in the late 18th century, including the noteworthy optical telegraph invented by the French physicist Claude Chappe (1791); these large signaling towers transmitted coded words (rather than letters) over long distances, and were developed in the following years into a large network over major cities in France and surrounding countries. This precursor to radio communications suffered, however, from outages due to fog or rain, and in that sense is still rather reminiscent of current microwave, millimeter-wave, or infrared radio links.

True radio communications were of course based on the work of Maxwell and the experiments of Hertz. The first use of radio to transmit coded information was probably proposed by Tesla in the 1880’s, and the first radio communication systems were described in his papers around 1891. Nearly simultaneously, Marconi patented the wireless telegraph and demonstrated to the world the usefulness of mobile communications with ships crossing the English Channel. Interestingly, the infancy of radio communications already emphasized several important points: 1) certain radio frequencies overcome line-of-sight obstructions and weather impediments, 2) mobility is the main application, 3) patent protection is paramount.

The next major advances in radio systems were developed during and after World War II, and benefited from significant research around radar and remote sensing. Subsequently, different applications flourished: TV broadcasting in the 1940’s probably has the merit of introducing the first standardization of communications technology, leading to major television standards (the NTSC Color Standard in 1953, and more recently the ATSC Digital Standard in 2009). Standards have become very important in all aspects of wireless communications, and will be analyzed in more detail later.


Cellular systems were devised by AT&T Bell Labs in the seventies. Continued improvements in standards and products provide increasing spectral efficiencies, lower prices, and wider consumer acceptance. Amazing growth occurred in the wireless industry during the 1980’s and 1990’s, which led to almost ubiquitous service availability and cheap service plans; some irrational exuberance in the industry also caused failures and bankruptcies around 2000 (such as excessive spectrum bidding, expensive satellite services, or some early broadband wireless initiatives).

1.1.2 Present

Current events and trends in the wireless industry are in constant evolution; any description inevitably becomes obsolete and should be rewritten every year. A few important data points may be recalled, but the reader is better off browsing the latest industry analyst reports, publications, and trade journals for a clear picture of the current industry.

Mobile penetration rates continue to increase, and have exceeded 100% in several countries since 2006. Penetration numbers exceeding a country’s population may seem odd at first; they are due to multiple user accounts (and authentication SIM cards) for different uses: work vs. personal, or data vs. voice.

Mobile data usage, such as text and multimedia messaging, mobile Web, and downloads, reached 50% adoption in 2007. Use of SMS reached 41%, web browsing 11% (M:Metrics, Jul. 2007).

In the US, wireless minutes passed landline minutes of use in 2005. Telcos continue to sustain landline losses in most developed countries, while wireless subscribers increase significantly. (In the US, new wireless subscribers neared 4 million quarterly in 2008.)

Laptop sales have exceeded desktop sales in Europe since 2006, and in the US since 2008, fueling the need for more mobile data.

Mobile data revenues became significant in 2006. US monthly revenues for wireless providers average approximately $55 per subscriber ($40 voice and $15 data).

Figure 1.1: Wireless penetration rates in the US.


1.1.3 Future

“Prediction is very difficult, especially about the future.” (Niels Bohr, Danish physicist, 1885 - 1962). Nevertheless, most predictions show a continuing growth trend, in the US and worldwide, far exceeding fixed telephony and fixed broadband data. We will investigate several new standards and technologies that will lead us to better understand future wireless trends (in particular regarding expected throughput, capacity, coverage, and cost).

Figure 1.2: Worldwide 2008 estimates of population, fixed phone lines, fixed broadband (such as ADSL and cable modem), and mobile subscribers.

1.2 Spectrum

Spectrum is a very important notion for any wireless system: it refers to the range of frequencies used by the system’s electromagnetic waves. When several services use the same spectrum in the same location, interference occurs that may be harmful to these services. Governments therefore step in and set rules and regulations for spectrum coordination.

1.2.1 Spectrum Use

Spectrum is a valuable resource for many applications such as radio communications (terrestrial and satellite), radar, and remote sensing. Some applications require very quiet portions of spectrum (such as deep-space observation); others may share or reuse spectrum fairly aggressively. In all cases rules and regulations are in place for each spectrum band, which will be reviewed further.

Different bands of spectrum benefit from very different properties. Lower spectrum propagates better through the atmosphere and through obstacles; it has been traditionally used for radio and TV broadcasting, and is excellent for mobile applications. Higher spectrum is more abundant and therefore used for higher-throughput applications, but is much more attenuated by atmospheric particles and weather variations, and typically needs line-of-sight conditions.

Figure 1.3: Atmospheric attenuation in dB/km for various RF frequencies: water vapor (in blue) in continental climate (7.5 g/kg), and oxygen absorption (in red) are the dominant atmospheric gaseous absorptions. Total atmospheric absorption is also shown (in black).

A wide range of spectrum is used in modern communications, from radio frequencies of a few megahertz (MHz), to gigahertz (GHz) microwaves and millimeter waves, to terahertz (THz) infrared lasers for free-space optics. Accurate study of radio waves at any given frequency can be performed by studying electromagnetic field properties: propagation, scattering, etc. Maxwell’s equations are used to determine wave characteristics in complex situations, but in many practical situations full-wave modeling is too complex, and often too time-consuming; instead the industry has been relying on simple rules and approximations. We will see some of these rules in chapter 3, but we start here with some high-level properties of radio frequencies.

Communication systems use diverse frequencies either in a wired mode (with a waveguide from transmitter to receiver, such as coaxial cable, copper wires, or fiber), or in a wireless mode, which is our focus here. To summarize:

Low RF frequencies (a few MHz) propagate very well through most media (in the lower atmosphere, and even below ground) and are well suited for long-range communications. Main drawbacks: little spectrum available and large antenna sizes.


Higher frequencies (VHF and UHF, in the 100 MHz range) propagate well in the atmosphere; they are impeded by obstacles like buildings and vegetation, but still allow for good coverage in most situations.

Increasing demand for radio communication systems opened services in the GHz range. These services see higher attenuation in the atmosphere and by obstacles, which lowers their typical coverage range, but they may be useful for higher-capacity services.

Higher frequencies still (microwaves and millimeter waves, 10 to 100 GHz) are used for line-of-sight links only; they are attenuated by the atmosphere, especially by rain.

Infrared communications (THz) are also used for line-of-sight communications, either indoors or for outdoor links, in which case fog and dust are sometimes an impediment.

1.2.2 Atmospheric Effects

The Earth’s atmosphere is a complex gaseous mixture and has various effects on radio propagation. The dominant effect is usually attenuation, although some depolarization effects also occur. Atmospheric absorption is dominated by water vapor and oxygen. A simple but good model for atmospheric gas attenuation is given in ITU-R recommendation P.676-6, “Attenuation by Atmospheric Gases”.

Water vapor absorption varies with humidity, and the absorption rate is estimated in dB/km by the formula below, as a function of the water vapor density (ρ in g/m3). At sea level, in continental climates (like Europe), a typical value ρ = 7.5 g/m3 is often used.

γw = ( 0.050 + 0.0021ρ + 3.6∕((f − 22.2)² + 8.5) + 10.6∕((f − 183.3)² + 9.0) + 8.9∕((f − 325.4)² + 26.3) ) f² ρ × 10⁻⁴    (1.1)

with f the frequency in GHz (formula valid below 350 GHz).

Attenuation peaks occur at frequencies of resonance of the water molecule (such as 23 GHz, 183 GHz, and 324 GHz), where a lot of transmitted energy is absorbed and does not propagate very far.

The oxygen absorption is less variable and shows similar peaks of absorption at 60 GHz and 120 GHz. Its formula is defined piecewise as:

γo = ( 7.19 × 10⁻³ + 6.09∕(f² + 0.227) + 4.81∕((f − 57)² + 1.50) ) f² × 10⁻³, for f < 57 GHz
γo = ( 3.79 × 10⁻⁷ f + 0.265∕((f − 63)² + 1.59) + 0.028∕((f − 118)² + 1.47) ) (f + 198)² × 10⁻³, for f > 63 GHz    (1.2)

In addition to gaseous absorption, hydrometeors are also a severe cause of loss for frequencies above 10 GHz or so. Heavy rains and large drop sizes have an especially severe impact. At 30 GHz for instance, heavy 50-100 mm/hr precipitation causes 10-20 dB/km attenuation, and will usually cause radio outages. This may be the case for LMDS (28 GHz) or point-to-point microwave links (6, 11, 18, 23 GHz).

The above formulas show that atmospheric attenuation and its variations are minimal at frequencies below, say, 6 GHz, and are rarely a consideration for cellular systems. Atmospheric absorption and rain fades are however significant at higher frequencies, for which wireless links need to take the impact of this variability into account. Rain regions are defined (ITU or Crane rain regions) in order to gauge the probability of heavy rain outages, and maximum link distances are calculated for a certain percentage of availability.
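To make these orders of magnitude concrete, here is a minimal Python sketch evaluating equations (1.1) and (1.2); the list of frequencies is an arbitrary set of common link bands, and ρ = 7.5 g/m3 is the continental default quoted above.

```python
# Minimal sketch: specific gaseous attenuation (dB/km) from the approximations
# in equations (1.1) and (1.2); f in GHz, water vapor density rho in g/m^3.

def gamma_water(f, rho=7.5):
    """Water vapor attenuation (dB/km), equation (1.1), f < 350 GHz."""
    return (0.050 + 0.0021 * rho
            + 3.6 / ((f - 22.2) ** 2 + 8.5)
            + 10.6 / ((f - 183.3) ** 2 + 9.0)
            + 8.9 / ((f - 325.4) ** 2 + 26.3)) * f ** 2 * rho * 1e-4

def gamma_oxygen(f):
    """Oxygen attenuation (dB/km), equation (1.2), defined piecewise."""
    if f < 57:
        return (7.19e-3 + 6.09 / (f ** 2 + 0.227)
                + 4.81 / ((f - 57) ** 2 + 1.50)) * f ** 2 * 1e-3
    # second branch, intended for f > 63 GHz
    return (3.79e-7 * f + 0.265 / ((f - 63) ** 2 + 1.59)
            + 0.028 / ((f - 118) ** 2 + 1.47)) * (f + 198) ** 2 * 1e-3

for f in (6.0, 11.0, 18.0, 23.0, 38.0):
    print(f"{f:5.1f} GHz: {gamma_water(f) + gamma_oxygen(f):.3f} dB/km")
```

With these assumptions the totals stay well below 0.02 dB/km up to about 11 GHz, consistent with the statement that gaseous absorption is rarely a concern for cellular bands.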

1.2.3 Duplexing

Some spectrum bands are paired, that is, split between uplink and downlink operations, using Frequency Division Duplexing (FDD). Others (including all unlicensed bands) use the same band for uplink and downlink, and use different time slots to separate uplink from downlink traffic: this is Time Division Duplexing (TDD).

Each duplexing scheme has advantages in some situations. FDD schemes tend to be slightly more spectrally efficient and well adapted to symmetric traffic needs; they are typically preferred for long-range links such as private lines and for voice communications (typically based on symmetrical standards: DS0, DS1, etc.). TDD schemes are slightly less spectrally efficient since some quiet times are required between uplink and downlink traffic (at least greater than the round-trip delay between the base and the furthest device); but they offer the great advantage of allowing dynamic changes in the amount of time (and therefore bandwidth) dedicated to uplink versus downlink, which may actually be more spectrally efficient for asymmetric or bursty data services. TDD schemes are therefore useful for small cells and LAN data applications.

Equipment cost considerations are also sometimes important in choosing between TDD and FDD: FDD devices are typically more expensive to manufacture since frequency diplexers must be used to transmit and receive at the same time. To alleviate that cost, some cheaper devices use hybrid frequency division duplexing (H-FDD): they use paired spectrum like an FDD device (i.e. different transmit and receive bands), but are not capable of transmitting and receiving at the same time (like a TDD device). FDD vs. TDD preferences sometimes give rise to endless arguments, but in practice they are often determined by local spectrum rules and regulations. Another argument often heard is that adaptive antenna systems (smart antennas) and MIMO systems require fast channel estimation and are better suited to TDD schemes (FDD requires subscriber devices to report the downlink channel response back to the base, which takes some time and reduces the adaptivity of the system).
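The TDD guard-time constraint mentioned above is easy to quantify; a minimal sketch, with arbitrary example cell radii:

```python
# Sketch: minimum TDD guard time, taken (per the text) as at least the
# round-trip propagation delay to the furthest device in the cell.

C = 3.0e8  # speed of light, m/s

def min_guard_us(cell_radius_m):
    return 2 * cell_radius_m / C * 1e6  # round trip, in microseconds

for r_km in (1, 10, 30):  # example cell radii
    print(f"{r_km:2d} km cell -> guard time >= {min_guard_us(r_km * 1e3):.1f} us")
```

A 30 km cell already needs 200 μs of quiet time per downlink/uplink turnaround, which is one reason TDD is more attractive for small cells.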

1.3 US Spectrum Landscape


In the US, spectrum is regulated by several government institutions: the National Telecommunications and Information Administration (NTIA) Office of Spectrum Management (OSM) for government and military use, and the Federal Communications Commission (FCC) for commercial use.1 Some bands are also opened for license-exempt use, and have contributed to the widespread adoption of wireless LAN technologies like 802.11a, b, g, or n.

1.3.1 Unlicensed Spectrum

The main bands for unlicensed (or license-exempt) use in the United States are listed below. These bands are governed by FCC Part 15 rules, and may not cause harmful interference to authorized services. Different sections of Part 15 govern different technologies (like frequency hopping and spread spectrum), and have been modified several times by the FCC.

Unused TV spectrum:

(below 698 MHz) TV channels 2 to 51. The FCC issued an order in 2008 to allow unlicensed use of “white spaces” left unused by TV broadcasters. Such an amount of spectrum at low frequency is causing unprecedented enthusiasm, especially for rural environments. Rules and restrictions may still change (as of mid 2008):

Channels 2-51 are available for fixed service (except 3, 4, and 37), with up to 1 W power output and 4 W EIRP, as long as no TV signal is in the channel or in an adjacent channel. Antennas are required to be mounted outdoors.

Channels 21-51 (except 37) are for personal/portable service, including mobile. Devices that use spectrum scanning and an Internet database can operate at 100 mW when no adjacent TV broadcast is present. Portable devices that rely on spectrum scanning only are limited to 50 mW EIRP on all channels, and to 40 mW EIRP in a channel adjacent to a TV broadcast.

Channels can be aggregated, which allows for higher capacity and adapts well to emerging standards (like WiMAX or LTE).

Figure 1.4: Unused TV channels (below 700 MHz) are referred to as white spaces, and may be used in an unlicensed manner in the US, for fixed and mobile services, under certain restrictions.


ISM 900 MHz:

(902-928 MHz) industrial, scientific, and medical band. Maximum power up to 100 mW.

ISM 2.4 GHz:

(2400-2483.5 MHz) industrial, scientific, and medical band. Maximum power up to 4 W equivalent isotropically radiated power (EIRP). Additional limits on peak power spectral density (PSD) and out-of-band emissions.

UNII bands:

at 5 GHz, Unlicensed National Information Infrastructure (U-NII) bands.

UNII-1 (5.15-5.25 GHz), intended for indoor short-range networking devices, 200 mW maximum EIRP.

UNII-2 (5.25-5.35 GHz), intended for communications within and between buildings, such as campus-type networks, 1 W maximum EIRP. Recent rules allow for more uses, with Dynamic Frequency Selection (DFS) capability required in order to protect Federal Government radar systems.

UNII-3 (5.725-5.825 GHz), intended for community networking communications devices operating over a range of several kilometers, 1 W transmit power, up to 23 dBi antenna gain, but no more than 4 W EIRP.

In addition, the 5.47-5.725 GHz band has recently been opened for unlicensed operations similar to the UNII-3 band, but with Dynamic Frequency Selection (DFS) required in order to protect Federal Government radar systems.

Higher Bands:

Higher bands, typically for point-to-point links at 24 GHz, 60 GHz, and infrared free-space optics, are also available for unlicensed use.

1.3.2 Licensed Spectrum

Different spectrum bands are made available for commercial use at different times. The spectrum bands listed in this section are of interest for US activities; interestingly, these bands all have very different histories and different rules. Many of the detailed band plans are available at www.fcc.gov under auctions.

Cellular and PCS spectrum:

The first US cellular spectrum, at 800 MHz, was given to interested operators in 1982 and 1986 to encourage rolling out mobile wireless systems. With the booming success of these cellular systems, the FCC decided to auction more spectrum, at 1900 MHz, referred to as PCS spectrum (for Personal Communication Services).


Figure 1.5: Cellular band plan at 800 MHz: two 20-MHz blocks (A and B) allocated by the FCC in 1982, augmented in 1986 (A* and B*).

Figure 1.6: PCS band plan: the PCS band was auctioned by the FCC in 1994-1996; different block sizes combined with spectrum caps encouraged newcomers in the industry.

Figure 1.7: AWS-1 band plan, pairing 1.7 GHz and 2.1 GHz spectrum, auctioned in 2006; later auctions AWS-2 and AWS-3 plan a few more nearby blocks.

Figure 1.8: New 700 MHz band plan converts former TV channels 52 to 69 to different bands, of different sizes, some FDD, some TDD. These bands are auctioned for commercial use; a portion is reserved for public safety use. From www.fcc.gov.

UHF channels at 700 MHz:

With the migration of TV broadcasting to digital, some spectrum becomes available in the 700 MHz band (TV channels 52 to 69, 698-806 MHz). A portion of that spectrum will be available for new public safety services after the transition to digital TV is completed. Previous lower 700 MHz auctions (auctions 44, 49, 60, and 73) took place in 2002, 2003, 2005, and 2008.

AWS at 1.7-2.1 GHz:

Auction 66, for Advanced Wireless Services (AWS), started July 28, 2006; 168 bidders qualified to participate. The auction of 90 megahertz of AWS spectrum reached close to $15 billion. The top bidders were T-Mobile, Verizon, cable operators (bidding as SpectrumCo, LLC), MetroPCS, Cingular, and Cricket.

WCS at 2.3 GHz:

This small auction was mostly overlooked (auction 14, in 1997), and brought only $13.6 million for 30 MHz of spectrum (major bidders: Comcast, BellSouth, Metricom). That spectrum was designated very generically as Wireless Communication Service (WCS), and was left unused for a long time. It has become very interesting again since it corresponds to a band of interest for 802.16e, and standard equipment exists for mobile WiMAX and WiBro.

EBS and BRS at 2.5 GHz:

Formerly MMDS and ITFS, these spectrum bands are now referred to as Educational Broadband Services (EBS) and Broadband Radio Services (BRS). Sprint is the largest license holder in this band, which is well suited for 802.16e mobile WiMAX. Sprint started major efforts at the WiMAX Forum in 2006 for its next-generation “4G” network, created a partnership with Clearwire in 2007, and announced investments in excess of $3 billion (mostly from cable operators) for a 2009-2010 WiMAX rollout.

A new band plan was proposed by the FCC to transition the old 6 MHz analog TV channels to 5.5 MHz channels. In 2006 broadband radio service (BRS) operators began filing their plans to initiate transitions of the 2.5 GHz band in various basic trading areas (BTAs) under the new rules adopted by the FCC in 2004.


Figure 1.9: New BRS and EBS band plan (formerly MMDS channels). From www.fcc.gov.

Fixed Wireless Access:

3.65-3.7 GHz is a fairly new band for fixed wireless access in the US, much like the European 3.4-3.6 GHz band, albeit with much less bandwidth. The band is regulated by non-exclusive nationwide licenses (150 km buffer zones were created around existing fixed satellite stations); power is limited to 25 W EIRP maximum in 25 MHz (or 1 W in 1 MHz, to keep the same power spectral density), and 1 W EIRP per 25 MHz maximum for mobile stations.

A contention-based protocol is required to allow multiple users to share the same spectrum, by forcing a transmitter to provide reasonable opportunities for other transmitters to operate. Equipment is currently available and accredited for use in 25 MHz of the band; the remaining 25 MHz requires equipment to sense preexisting military systems, and currently has no accredited equipment.

Public Safety at 4.9 GHz:

In 2002, in the 4.9 GHz Order, the Commission designated 50 MHz in the 4.9 GHz band for exclusive public safety use. In many cases local governments control this spectrum, and opportunities exist around data services for emergency response or public safety systems (some Wi-Fi-like or WiMAX-like products exist in that band today).

Vehicle Communications at 5.9 GHz:

The FCC has modified some of the licensing and service rules it adopted in 2003 for dedicated short-range communications (DSRC) in the intelligent transportation systems (ITS) radio service in the 5.9 GHz band. Channel 172 (5.855-5.865 GHz) is reserved for “vehicle-to-vehicle safety communications for accident avoidance and mitigation, and safety of life and property applications,” the Commission said, while Channel 184 (5.915-5.925 GHz) is “for high-power, longer-distance communications to be used for public safety applications involving safety of life and property, including road intersection collision mitigation.”


That band is the area of focus of 802.11p, and may be an opportunity for public safety applications. The automobile industry has had most of the activity so far, around smart cars and collision avoidance.

Higher Bands:

Higher microwave bands are available for licensing, either on a case-by-case basis (6, 11, 18, 23 GHz, and more recently 70 and 80 GHz) or over wider geographical areas (LMDS at 28, 31, 38 GHz). These bands are used for high-capacity fixed links, as a fiber alternative in urban environments as well as for rural long haul where wired links are too costly, such as in mountainous regions. Their main engineering parameters deal with outage probabilities due to rain attenuation.2

1.3.3 In Summary

The above sections show a fairly complex spectrum landscape in the US, with different bands coming from different uses at different times; this overview suggests the following points:

This complex band plan hints at how difficult worldwide harmonization of spectrum might be.

Regulators have been very active recently, which shows the high value of that limited resource.

One puzzling aspect of spectrum is the wide range of prices seen at auctions and other acquisitions: some companies got great deals, while others have to pay high premiums.

Comments from the industry are numerous before recent rulemakings.

Rules and rights vary greatly between license holders regarding selling, leasing, swapping, or disaggregating spectrum.

Hidden costs of spectrum clearing and service relocation may be high in some cases.

Overall, rules associated with a spectrum band are technically savvy but fairly technology agnostic: the FCC tends to be standard agnostic in order to allow all technologies and promote innovation (unlike the European approach for IMT-2000, for instance, which specified what standards could be used in that band).

1.4 Other Devices and FCC part 15

It may be important here to open a parenthesis on other electrical devices: in general they all emit electromagnetic radiation, wanted or not, sometimes guided as in a modem (for transport via coax cable or copper twisted pairs). Consequently, the FCC sets maximum allowed emission levels in all bands.

More generally, the Code of Federal Regulations (CFR) is the set of rules and regulations for federal administrative law, governing anything from agriculture to public health, including telecommunications under title 47. The FCC rules around telecommunications (including spectrum) are therefore referred to as CFR title 47. It has multiple parts, governing amateur radio, satellite, microwave, cellular, etc. And one part, 15, deals with spurious emissions of any device in any band.

Part 15 deals mostly with fairly low-power devices (typically less than 1 mW). The unlicensed bands identified in §1.3.1 are covered by Part 15 and allow for reasonably high power levels; all other bands are strictly limited. Power limitations are set in terms of electric field strength at a certain distance from the device: typically 200 μV/m at 3 meters below 960 MHz, and 500 μV/m above. An equivalent in terms of EIRP can be calculated; see for instance the details in FCC document OET bulletin no. 63, “Understanding the FCC Regulations for Low-Power, Non-Licensed Transmitters,” Oct. 1993. Other than in unlicensed bands, radiated EIRP is limited to less than -40 dBm, which is very low...
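The field-strength-to-EIRP conversion mentioned above can be sketched with the free-space far-field relation EIRP = E²d²∕30 (E in V/m, d in meters, EIRP in watts), a standard relation, applied here to the two Part 15 limits quoted above:

```python
import math

def eirp_dbm(e_v_per_m, distance_m):
    """EIRP equivalent to a field strength E measured at a given distance,
    assuming free-space far-field conditions: EIRP = E^2 d^2 / 30 (watts)."""
    eirp_w = e_v_per_m ** 2 * distance_m ** 2 / 30.0
    return 10 * math.log10(eirp_w * 1e3)  # convert W to dBm

print(round(eirp_dbm(200e-6, 3), 1))  # 200 uV/m at 3 m: about -49.2 dBm
print(round(eirp_dbm(500e-6, 3), 1))  # 500 uV/m at 3 m: about -41.2 dBm
```

The second value is consistent with the “less than -40 dBm” figure quoted above.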

1.5 Homework

1. Do a quick literature search. List one or more detailed references (chapters and pages) of books or papers where you found good general wireless communication introductory comments. List a few bullets of significant recent facts and statistics in the wireless industry; comment on the points that you found particularly enlightening. (Warning: do not copy/paste entire paragraphs; summarize important points in your own words and sentences.)

2. What are the advantages of digital wireless over analog?

3. What are the advantages of 3G technologies over 2G?

4. Estimate how far apart cellular systems were introduced, from first, to second, to third generation. (Document the references that lead you to these dates.) From these estimates, when would you anticipate fourth generation systems to be introduced? Explain why.

5. This problem reviews and compares spectrum acquisition for the major bidders of the 2008 700 MHz auction.

a. Find the average price per MHz per pop of the 700 MHz auction held in 2008 for the top 4 bidders (Verizon Wireless, AT&T, Echostar, and Qualcomm) – use FCC published results (on www.fcc.gov);

b. Summarize licenses and prices for the top four bidders;

c. Conclude on the real winner(s) of the auction. (Who got (1) the best coverage, (2) the most capacity, and (3) the best deal?)

6. Consider a 10-mile microwave point-to-point link set in the US, in a fairly dry continental climate.

a. What is the typical atmospheric loss (or attenuation), in dB, of the link for the typical point-to-point microwave frequencies used in the US (6, 11, 18, and 23 GHz)?

b. What happens when it rains heavily?

7. In the US, the FCC is standard agnostic; by contrast, the EU often mandates that one standard be used in a given band. Explain each choice in a few statements and describe the advantages of each approach.

8. We consider manufacturing a wireless device under the rules of Part 15 (see §1.4). We want the device to operate in the PCS band (at 1.9 GHz) and remain under the Part 15 power levels.

a. What are the maximum transmitted electric fields and EIRP allowed for data communications? (Hint: find FCC OET bulletin no. 63 and examine the maximum power levels allowed; ignore levels associated with periodic or intermittent control signals.)

b. Assuming receiver devices require -90 dBm received power for a good communications level, what is the link budget allowed by such a device?

c. Comparatively, a typical Wi-Fi link budget is around 100 dB, and is commonly said to propagate around 300 feet. How far would our manufactured system propagate? (Show all your calculations.)

Chapter 2 Cellular Systems

An important aspect of wireless communications systems consists of splitting a large area into many cells. That cellular property of wireless systems has major impacts on overall service quality, and introduces important difficulties such as how to hand over from one cell to another. Coverage, capacity, interference, and spectrum reuse are important concerns of cellular systems; this chapter reviews these aspects as well as the technologies, tools, and standards used to optimize them.

2.1 Cellular Concepts

The many frequency blocks detailed earlier are used for a variety of communications services. Higher frequencies (say above 6 GHz) are mostly used for point-to-point services such as dedicated private lines. Lower frequencies are better suited for broader coverage, and are split into geographical cells. This cellular model is the fundamental system of modern mobile communications and is reviewed in this section.

2.1.1 Frequency Reuse

Covering a large geographic area with a limited amount of spectrum leads to the consideration of co-channel interference, that is, interference from different areas (or cells) that use the same frequency channel.1 Co-channel interference considerations are usually approached by considering the following parameters:

S: total number of RF channels available (given the amount of spectrum and the channel width dictated by the technology standard),

S0: number of channels per cell (for a given capacity),

K: the reuse factor, i.e. the number of cells in a cluster, which is repeated to provide coverage over a large area.

The three quantities are linked by the straightforward relation S = S0 K. The overall capacity goal in an area is therefore an important parameter in determining the reuse factor K. Of course, higher capacity seems to mean that the lowest reuse factor (K = 1) is always the best choice. This, however, must be balanced with interference considerations: indeed a higher reuse factor (K = 3, 4, 7, or even 19) provides more distance between cells using the same channel, and therefore lowers interference.


2.1.2 Interference Considerations in Reuse

Assume a propagation model using a power path loss exponent n, such that power decays as 1∕dⁿ (d being the distance separating the transmit station from the receiver); that is, the received power may be expressed as Pr∕Pt = A∕dⁿ, where Pt is the transmit power and A is some constant.2

Figure 2.1: Frequency reuse patterns K = 3, 4, and 7, on hexagonal cells. The bold contour shows the pattern of cells repeated to provide wide-area coverage. Di shows the shortest distance between cells reusing the same frequency.

With this model, the signal-to-interference ratio for a mobile at the cell edge (at distance R from its serving base station) is estimated as

S∕I = R⁻ⁿ ∕ ( D1⁻ⁿ + D2⁻ⁿ + ... + Di0⁻ⁿ )    (2.1)

where i0 is the number of co-channel cells nearest to the cell (called first tier or tier one); that number increases with K. Di is the distance to the tier-one cells reusing the same frequency (as shown in figure 2.1). In the case of the hexagonal cell approximation, the expression simplifies to [1]:

S∕I = ( √(3K) )ⁿ ∕ i0    (2.2)

We’ll see more details on n later; its value varies typically between 2 and 4 with the type of terrain. We’ll also see that specific wireless technologies require a certain signal-to-noise-and-interference ratio (mostly based on data rates); equation (2.2) then leads to a minimal acceptable K.

The tradeoff between the capacity required and the interference level required leads to the choice of K. In some areas, however, that value may need to change, and several techniques are used to improve on it.
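A minimal sketch of that tradeoff, evaluating equation (2.2) with i0 = 6 first-tier interferers (the usual hexagonal value) and an assumed path loss exponent n = 4:

```python
import math

def si_db(K, n=4.0, i0=6):
    """First-tier S/I from equation (2.2): (sqrt(3K))^n / i0, in dB."""
    return 10 * math.log10(math.sqrt(3 * K) ** n / i0)

for K in (1, 3, 4, 7, 12):
    print(f"K={K:2d}: S/I = {si_db(K):5.1f} dB")
```

With these assumptions, K = 7 yields about 18.7 dB; a technology requiring roughly 18 dB of S/I therefore pushes the design toward the classic K = 7 pattern, while more interference-tolerant technologies can afford smaller K and higher capacity.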

2.1.3 Capacity and Coverage Improvements

The usual techniques for coverage and capacity improvements include the following:

Reuse factor K: may be reduced in congested areas.

Trunking: increase blocking probability to gain capacity.

Cell splitting: microcells, picocells, femtocells.

Sectoring: often 3, and up to 6 sectors.

Range extension: use repeaters or low-noise amplifiers.

And of course, increasingly efficient standards also provide significant gains, and will be detailed in further sections.

2.2 System Capacity

All the widely used digital standards (TDMA GSM, TDMA ANSI-136, CDMA cdmaOne - IS-95 or ANSI-95) gave birth to a large amount of literature on how to deploy and optimize capacity. For TDMA, the number of time slots and the voice coding characteristics give a capacity limit (although interference considerations are still important); the modulations used and the equipment link budget set the limit for coverage. CDMA systems, however, have no such hard limit: tradeoffs are possible between capacity, coverage, and other considerations linked to performance (such as the likelihood of call setup failure, and dropped calls). The possibility of soft handoff introduces even more parameters.3

Cellular analog capacity:

Fairly straightforward: every voice channel uses a 30 kHz frequency channel, these frequencies may be reused according to a certain reuse pattern, and the system is FDMA. The overall capacity comes from the total amount of spectrum, the channel width, and the pattern in which frequencies are reused.

An effective way to define cellular system capacity is the number of voice channels per cell per spectrum unit (e.g. 1 MHz):

m = M ∕ (K · W)    (2.3)

where

m is the system radio capacity (number of channels per cell per MHz),
W is the total bandwidth available (in MHz),
M is the total number of channels in that bandwidth,
K is the reuse factor.

The frequency reuse factor hides a lot of complexity and will be examined in further detail ([1] ch. 3.2 and 9.7); its value depends greatly on the signal-to-interference levels acceptable to the cellular system.
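A minimal worked example of equation (2.3), with AMPS-like numbers assumed for illustration (12.5 MHz per direction, 30 kHz channels, K = 7):

```python
W = 12.5                  # total bandwidth, MHz (assumed, per-carrier AMPS-like)
M = int(W * 1e6 // 30e3)  # number of 30 kHz channels in W -> 416
K = 7                     # reuse factor (assumed)
m = M / (K * W)           # channels per cell per MHz, equation (2.3)
print(M, round(m, 2))     # 416 channels, about 4.75 ch/cell/MHz
```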

TDMA/FDMA capacity:

Digital FDMA systems have the same capacity equation as above; capacity improvements mainly come from voice coding (in order to reduce channel bandwidth) and from elaborate schemes (such as frequency hopping) to decrease the reuse factor.

TDMA capacity is similar as well, but must also take into consideration that one frequency channel has several time slots, and therefore several voice channels. Therefore:

m = (NTS · M) ∕ (pr · K · W)    (2.4)

where

m is the system radio capacity (number of channels per cell per MHz),
NTS is the number of time slots (number of voice channels in a frequency channel),
pr is the fraction of rate (e.g. 1 for full rate, 1/2 for half rate),
W is the total bandwidth available (in MHz),
M is the total number of frequency channels in that bandwidth,
K is the reuse factor.

The fraction pr directly impacts the capacity, since when fractional vocoder rates are used the remainder of the time slot may be used for other subscribers. This capacity equation sometimes has an additional factor (1 - χ), where χ is the fraction of the channel used for signaling rather than voice. Although simple, the equation already allows for some discussion of parameters that may be improved, such as an increased number of time slots with better voice coders, and as always the reuse factor.
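A minimal sketch of equation (2.4), including the optional (1 - χ) signaling factor; the GSM-like numbers (200 kHz channels, 8 time slots, K = 4) are assumptions for illustration:

```python
def tdma_capacity(W_mhz, ch_khz, n_ts, K, pr=1.0, chi=0.0):
    """Radio capacity m (channels per cell per MHz), equation (2.4)."""
    M = W_mhz * 1e3 / ch_khz  # number of frequency channels in W
    return n_ts * M * (1 - chi) / (pr * K * W_mhz)

print(tdma_capacity(10, 200, 8, K=4))          # full rate: 10.0 ch/cell/MHz
print(tdma_capacity(10, 200, 8, K=4, pr=0.5))  # half rate: 20.0 ch/cell/MHz
```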

CDMA capacity:

A usual capacity equation for CDMA systems may be fairly easily derived as follows (for the reverse link). First examine a base station with N mobiles: the noise and interference power spectral density due to all mobiles in that same cell is ISC = (N - 1)Sα, where S is the received power density for each mobile, and α is the voice activity factor. Other-cell interference IOC is estimated as a reuse fraction β of the same-cell interference level, such that IOC = βISC (usual values of β are around 1∕2). The total noise and interference at the base is therefore Nt = ISC(1 + β). Next assume the mobile signal power density received at the base station is S = R·Eb∕W. Eliminating ISC, we derive:

N = 1 + (W∕R) ∕ ( (Eb∕Nt) · α · (1 + β) )    (2.5)

where

W is the channel bandwidth (in Hz),
R is the user data bit rate (symbol rate),
Eb∕Nt is the ratio of energy per bit to total noise,
α is the voice activity factor (for the reverse link), typically 0.5,
β is the interference reuse fraction: the ratio of the interference level due to other cells to that of the cell in consideration. (The number 1 + β is sometimes called the reuse factor, and 1∕(1 + β) the reuse efficiency.)

This simple equation (2.5) gives us the number of voice channels in a CDMA frequency channel4; the relation to the former capacity measure is simply m = N∕W.
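A minimal numeric sketch of equation (2.5); the Eb∕Nt of 7 dB and the IS-95-like W and R are assumed values for illustration:

```python
def cdma_channels(W_hz, R_bps, ebnt_db, alpha=0.5, beta=0.5):
    """Reverse-link voice channels N from equation (2.5)."""
    ebnt = 10 ** (ebnt_db / 10)  # convert dB to linear
    return 1 + (W_hz / R_bps) / (ebnt * alpha * (1 + beta))

N = cdma_channels(1.25e6, 9600, 7.0)
print(round(N, 1), round(N / 1.25, 1))  # about 36 channels, m of about 28.5 ch/cell/MHz
```

Comparing with the AMPS and GSM sketches earlier gives a feel for why CDMA was adopted for 3G capacity.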

We can already see some hints of CDMA optimization and investigate certain possible improvements for a 3G system. In particular: improving α can be achieved with dim-and-burst capabilities, β with interference mitigation and antenna downtilt considerations, R with vocoder rate, W with wider-band CDMA, and Eb∕Nt with better coding and interference mitigation techniques.

Some aspects, however, are omitted in this equation and are required to quantify other capacity improvements, mainly those due to power control and softer/soft handoff algorithms.

Of course other limitations come into play for wireless systems, such as base station (and mobile) sensitivity, which may be incorporated into similar formulas; further considerations include forward power limitations, channel element blocking, backhaul capacity, mobility, and handoff.

A final note on capacity: voice capacity is often given in Erlangs, and refers to trunking efficiency given a certain blocking probability ([2] p. 350, or [1] §3.6).
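Since Erlang capacity comes up repeatedly, here is a minimal sketch of the standard iterative Erlang B recursion, used to find the offered load a given number of channels supports; the 36-channel example and 2% blocking target are illustrative assumptions:

```python
def erlang_b(offered_erlangs, channels):
    """Blocking probability via the usual numerically stable recursion."""
    b = 1.0
    for k in range(1, channels + 1):
        b = offered_erlangs * b / (k + offered_erlangs * b)
    return b

# Offered load supportable by 36 channels at 2% blocking, by bisection:
lo, hi = 0.0, 100.0
for _ in range(50):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if erlang_b(mid, 36) < 0.02 else (lo, mid)
print(round(lo, 1), "Erlangs")  # about 27.3 Erlangs
```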

2.3 Standard Air Interfaces

We first briefly review current mobile digital technologies, how they were initially introduced, and how they evolved. Good overviews may be found in the introductory chapters of many good textbooks, including [1] ch. 1, 2, 11.1; and excellent perspectives can be found in [3], ch. 1.

Analog cellular phones:

Advanced Mobile Phone Service (AMPS) was developed by Bell Laboratories in the 1970’s, and started in the US after the FCC allocation in 1983 of 40 MHz of paired spectrum in the 800 MHz frequency range. For further details, see [1] ch. 1, 2, & 11.1, and [4] ch. 3, 4. Briefly stated, the system uses frequency division multiple access (FDMA), duplex frequencies for uplink and downlink (frequency division duplexing - FDD), 30 kHz channels, one user per channel, analog voice modulation (FM), and blank-and-burst transmission.

RF channel: 30 kHz
Reuse pattern: typically 7
Duplex: FDD
Multiple access: FDMA
Multiplex: 1 traffic channel per RF channel
Voice: FM modulation


Digital wireless systems:

Second generation cellular systems are characterized by the introduction of voice digitizing and digital encoding, thus opening a number of DSP possibilities such as forward error correction schemes. Frequency or time division multiple access techniques are used (FDMA or TDMA). Code division multiple access (CDMA) was introduced by Qualcomm (TIA-EIA IS-95, or ANSI-95) and became the basis for the main 3G systems. Overall capacity is increased, and signaling capabilities and system intelligence are considerably enriched.

RF channel: 30 kHz, 200 kHz in GSM, 1.25 MHz for CDMA
Reuse pattern: 7 (less with frequency hopping), 1 for CDMA
Duplex: mostly FDD (emergence of TDD)
Multiple access: FDMA, TDMA (8 full-rate time slots for GSM), or CDMA
Voice: digitally encoded: GSM full rate 13.4 kb/s, CDMA 13 kb/s QCELP or 8 kb/s EVRC

Third generation systems:

Digital systems were further improved upon, mostly for higher voice capacity and higher data rates; they evolved into third generation standards.

RF channel: 1.25, 5, 10, 15 MHz
Reuse pattern: 1 (CDMA)
Duplex: mostly FDD, some TDD
Multiple access: CDMA
Voice: digitally encoded, bit rates 8 kb/s and below
Data: up to several Mbps (3.1 Mbps for EV-DO, 15 Mbps for HSDPA)


Fourth generation systems:

The industry is interested in further improvements towards fourth generation standards.

RF channel: generally wider: 10, 20 MHz
Reuse pattern: 1-1.5 (OFDMA – see §7.3.3)
Duplex: FDD or TDD depending on spectrum
Multiple access: OFDMA
Voice: based on VoIP
Data: IP based, flat architecture, convergence

Third generation systems bring higher bit rates, and some questions remain about the requirements of the fourth generation. The ITU standards community specifies rates for IMT-Advanced around 100 Mbps for full mobility and 1 Gbps for fixed wireless, with a timeframe of 2012 to 2015. Air interfaces that may achieve such rates focus on multicarrier techniques like OFDM, and advanced antenna systems such as multiple-input multiple-output (MIMO) systems. In addition, fourth generation standards include requirements such as low latency, an open network architecture (flat, IP-based), and convergence with other fixed and mobile standards.

2.4 Speech Coding

The introduction of digital wireless systems means that the acoustic voice waveform is no longer simply converted to an electrical signal directly transmitted over the RF channel. Voice is now digitized and encoded, and the resulting bit stream is transmitted and of course decoded on the receiving side. Although this process requires additional DSP, it opens the door to many optimization algorithms and is much more efficient than analog voice transmission.


2.4.1 Basic Vocoder Theory

Digital voice coding (vocoding) is very important yet very subjective. Voice coding theory is a domain of study of its own; introductory overviews are presented for instance in [2] ch. 15, or [1] ch. 8.

2.4.2 Classic Cellular Vocoders

Analog vocoders emerged at Bell Laboratories in the late 1920’s, and became more elaborate and efficient at dealing with the harmonics important to a good understanding of voice, while minimizing the bandwidth required, which is in the 500 Hz to 3400 Hz range. The digital era brought significant changes. Initial digital systems sampled that range, which at the Nyquist rate (8,000 samples per second, 8 bits per sample) leads to a 64 kilobits per second (kbps, kbit/s, or kb/s) bit rate. This is referred to as pulse-code modulation (PCM). More elaborate algorithms, however, can achieve reasonably good voice transmission by transmitting a codebook (a set of parameters for a given voice coding algorithm) at as little as 2.4 kb/s: a 26x improvement. Usually these algorithms provide acceptable voice quality, but may perform poorly in specific situations such as noisy environments, background music, or when combined with different voice coding systems (such as PCM) and external voice mail systems. Several vocoder systems exist and have been chosen in 2G and 3G standards:

CELP:

Code Excited Linear Prediction, 2400 and 4800 bit/s, Federal Standard 1016, used in STU-III.

QCELP:

Qualcomm Code Excited Linear Prediction, also known as Qualcomm PureVoice, developed in 1994, and used in initial IS-95 CDMA networks. Two bit rates are available, QCELP8 and QCELP13, using 8 and 13 kb/s respectively, well adapted to this standard’s 9.6 kb/s and 14.4 kb/s frames. It was later improved upon by EVRC.

RCELP:

Relaxed Code Excited Linear Prediction, a more advanced algorithm that does not attempt to match the original signal exactly but rather a simplified pitch contour.

EVRC:

Enhanced Variable Rate Codec, a speech codec used in CDMA networks that replaced QCELP. EVRC uses RCELP and provides better voice quality at 8 kb/s than QCELP8. A half-rate EVRC was also developed to further lower the bit rate at the cost of some quality.

CVSD:

Continuously Variable Slope Delta modulation, 16 kb/s, used in wideband encryptors such as the KY-57.

MELP:

Mixed Excitation Linear Prediction, MIL STD 3005, 2.4 kb/s.

ADPCM:

Adaptive Differential Pulse Code Modulation (G.721, G.726).

Comparing the quality differences between vocoders is usually done by testing a number of standard phrases and assessing the quality of the transmitted result under various conditions. That assessment is subjective and is usually given a grade called the Mean Opinion Score (MOS), between 0 (completely unintelligible) and 4 (perfect quality). Many test devices now offer algorithms providing a MOS, but many initial tests relied on actual opinion surveys.

2.5 Migration to 3G

Second generation cellular systems certainly achieved major capacity improvements and contributed to the fast adoption of wireless handsets throughout the world. In many countries like the US and Korea, the number of wireless handset customers now exceeds that of wired telephone customers. And the growth continues.

Third generation systems focused on increasing capacity yet again, and on introducing efficient high-speed mobile data systems. Given past heavy investments in different 2G networks, adoption of a common 3G standard had tremendous cost implications and competitive advantages.

These efforts from the wireless industry focused on improving widely deployed systems and migrating them towards a third generation. All major digital technologies proposed an evolution path to a next generation, typically broader band (in throughput and spectrum), and additional technologies were proposed and standardized. Technical arguments and different industry interests were in competition, migration paths were suggested, and we review the fundamental reasons behind the relative successes and failures to harmonize toward one unique standard.

Several proposals:

Initially 10 new proposals were submitted to the ITU body responsible for standardizing next generation systems: 2 TDMA, 8 CDMA. (See details in a US contribution to the ITU: US8F01-16, February 2001.)

Harmonization process:

A difficult harmonization effort was undertaken from 1998 to 2001 by the ITU. Many technical comparisons and discussions ensued, resulting in some harmonization, but falling short of selecting one unique worldwide standard.

Successes:

All TDMA solutions remained as an evolution path, but not a true 3G alternative. CDMA solutions were narrowed down to two. Other important issues such as spectrum plans, emission levels, and spectrum sharing with satellites and high-altitude platform stations (HAPS) were also discussed and approved with relative success.

Failures:

One major issue remained: merging the last two CDMA camps, the 3G partnership project (3GPP), based on UMTS (WCDMA), and 3GPP2, based on cdma2000. The former was very reluctant to tread on the intellectual property of the latter, and the latter was adamant about conserving a smooth evolution and backward compatibility with cdmaOne. Frequency choices of existing CDMA solutions were also somewhat less efficient than new greenfield approaches, as illustrated in figure 2.2. Rumors of lawsuits appeared, claiming that WCDMA techniques were infringing on 2G CDMA solutions. Ericsson, a major WCDMA advocate, ultimately purchased Qualcomm’s infrastructure branch (including rights to cdmaOne intellectual property), and the discussions stalled; it seemed obvious that neither camp had any incentive to give in, hence today’s two competing standards: UMTS-WCDMA and cdma2000.

Figure 2.2: Existing CDMA carrier use (left) is convenient for migration to a multicarrier standard, but may be less efficient than full spreading over the same frequency block (right).

In short, two major 3G standards remain in competition, and the choice for any carrier is clear: 2G technology clearly points either toward a GSM to UMTS migration (3GPP), or alternatively toward a cdmaOne to 3G-1X cdma2000 evolution (3GPP2). The latter is certainly initially cheaper, has advantages in equipment availability, and has well-known performance; but the former may benefit from larger economies of scale when GSM carriers migrate to UMTS services.

In 2002, the CDMA Americas Congress (San Diego, December 2002) estimated that cdmaOne operators benefited from a smooth transition and a well-known standard, giving them a one- or two-year head start over GSM efforts towards UMTS. Indeed cdma2000 (3G 1X) systems have been available since 2002, and IS-856 (3G-1X EV-DO) has been widely available in the US and Asia since 2004. GPRS and UMTS finally caught up in 2006. High-speed data services (HSPA) still lag behind EV-DO in coverage in 2008, but most dense areas in the US are well covered by both technologies.

Choosing a migration path is only the first step; upgrading the network is of course very costly. Initially service providers had to decide how long to delay network upgrades: voice capacity and time to market for high-speed data services were the driving factors. Now service providers have to decide how many resources to dedicate to voice versus data.

2.6 Another migration to 4G?

Second generation cellular systems achieved digital voice efficiency, and third generation systems focused on increasing capacity and data rates; what more can a fourth generation standard achieve? Can we hope for a unique worldwide standard this time? When can we expect to see 4G products?

Today’s main 3G standards each have an evolution towards a 4G standard. They have a number of commonalities:

LTE:

The Long Term Evolution of the current GSM/UMTS/3GPP set of standards uses OFDMA on the forward link, and SC-FDMA (a single-carrier OFDMA scheme) on the reverse link. Interestingly, GSM carriers migrated once to CDMA, and now propose to abandon it for OFDMA. LTE promises to carry much of the international crowd of operators and create economies of scale, allow for international roaming, etc.5

UMB:

Ultra Mobile Broadband was the proposed evolution for the other 3G camp (cdma2000, 3GPP2). But when the largest cdma2000 carrier (Verizon Wireless) announced migration plans towards LTE, all UMB efforts in the industry practically died.

WiMAX:

WiMAX is an emerging wireless standard based on 802.16e. The WiMAX Forum (www.wimaxforum.org) has claimed that it is capable of much better rates than other current CDMA-based 3G standards; it is based on OFDMA, and therefore looks very much like a 4G standard. Others claim that it still does not meet 4G requirements, but that 802.16m, its longer-term evolution, will be the true 4G standard. Unlike other 4G standards, WiMAX seems to have a smooth evolution path and will preserve backward compatibility with current 802.16e systems.

Oddly enough, two different camps seem to emerge again: LTE and WiMAX, each backed by different suppliers and different operators. Just as for 3G, the two 4G camps use very similar technologies (OFDMA this time) and there are very few technical reasons why they should not harmonize to a unique standard. Practically, however, just as for 3G, we will likely see at least two major standards, for several reasons:

Timelines are different enough: operators can immediately roll out WiMAX with future evolution plans, or alternatively carriers can keep existing 3G networks operational for a few more years before considering 4G LTE replacements.

The type of spectrum: WiMAX focused initial efforts on TDD, whereas LTE will focus on the legacy 2G-3G spectrum, which is mostly FDD.

Intellectual property of course plays an important role and might hold a few surprises.

Backward compatibility: WiMAX prepares a smoother evolution, LTE starts a greenfield approach, so why compromise to please a few WiMAX competitors?

In many respects the fourth generation seems to repeat the history of the third generation debates. The next few years will teach us whether one standard has an edge over the other (technically, economically), or whether something yet unforeseen takes the industry by surprise.

Which one of these standards will conquer the 4G world? Although difficult to predict, a few elements of an answer can be outlined. First and foremost, the current huge 2G/3G service base seems to migrate toward LTE, giving it potentially the best economy-of-scale perspective. Equipment manufacturers therefore cannot ignore the LTE trend: Alcatel-Lucent, Ericsson, Nokia, and NEC are clearly pushing for an LTE-only future. Motorola and Samsung Electronics are developing WiMAX products as well as LTE.

The other main argument to consider is that of spectrum: the vast majority of mobile operators operate in FDD spectrum (see sections 1.2.3 and 1.3), and LTE will provide an evolution first in that mode. WiMAX on the other hand chose to focus first on TDD bands, and is the obvious choice for TDD spectrum owners.

Finally, the overall timeline for evolution is important: current cellular providers have made significant investments in EV-DO, or are still in major deployment phases of HSPA. Newcomers, on the other hand, who need high data rates today with a smooth evolution towards 4G later, are more likely to choose WiMAX.

All in all, the industry is likely to be ultimately dominated by LTE, but the timeline is still uncertain. As we are still in the early phases of 4G standardization, some carriers are already warning against multiple standards: why not try to converge WiMAX and LTE into one standard? The practicality of the situation is that few companies and experts are interested in spending time and efforts towards such a unified standard. Both sides seem to be content with their solution... or the battle for intellectual property and individual company interests might be too much of an obstacle to overcome.

2.7 Technology Advances

Recent technology advances aim at increasing capacity further. Technology improvements are sometimes the result of a major standard modification, but sometimes simple schemes can be added to existing standards and allow for additional improvements with minimal infrastructure changes.

2.7.1 Speech Coding

Voice coding algorithms and DSP capabilities have improved, and current voice codecs operate on less power and with greater processing efficiency. (Refer to [2] ch. 15, or [1] ch. 8 for speech coding details.) GSM, for instance, improved voice digitization and quantization from RPE-LTP to a series of AMR standards. IS-95 systems have a parallel evolution, with EVRC and half-rate EVRC.

Another standard, the selectable mode vocoder (SMV), was in the works but never saw any success in the industry. It based its requirements on: operation in the presence of frame erasures, noise suppression for background noises, reasonable performance with music for on-hold situations, equivalent performance in different languages, multiple quality modes and multiple bit rates, and seamless transitions from mode to mode. SMV was designed to offer four modes of operation:

Mode 0 is designed to improve voice quality over EVRC with the same capacity requirements as EVRC.

Mode 1 is designed to maintain the quality provided by EVRC while realizing a capacity benefit.

Mode 2 is for the system operator who is willing to sacrifice some voice quality robustness in order to realize a significant capacity gain.

Similarly, Mode 3 of SMV provides even more capacity gains. But the voice quality is, by toll grade standards, poor.

Tests and opinion groups conducted by the SMV development group confirm that mode 0 is equivalent to or better than the best EVRC performance, and that mode 1 is equivalent to or better than minimal EVRC performance. Simulations and tests show the following capacity improvements over cdma2000 using EVRC:

Mode     Forward   Reverse   Erlang B capacity (2% blocking)
Mode 0   0%        0%        0%
Mode 1   27%       16%       34%
Mode 2   49%       29%       61%
Mode 3   60%       35%       75%

These requirements and the resulting capacity vs. quality tradeoffs seem useful and attractive to service providers. Nevertheless this standard never took off, which illustrates that some standard evolutions (even when based on sound requirements and good improvements) are not destined for successful adoption by the industry. Instead, in this case, a simple half-rate EVRC is typically employed when additional capacity is needed at the cost of quality (as in mode 2).
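For illustration, the Erlang B column above can be related to channel-count gains with a short computation; the following Python sketch (the 20-channel sector is a hypothetical example, not a figure from the SMV trials) shows how a 27% channel gain becomes a larger Erlang gain at 2% blocking, thanks to trunking efficiency:

    # Minimal sketch: Erlang B blocking and offered traffic at 2% blocking.
    def erlang_b(traffic: float, channels: int) -> float:
        """Blocking probability, stable recursive form of Erlang B."""
        b = 1.0
        for n in range(1, channels + 1):
            b = traffic * b / (n + traffic * b)
        return b

    def capacity_at_blocking(channels: int, target: float = 0.02) -> float:
        """Largest offered traffic (Erlangs) with blocking below target (bisection)."""
        lo, hi = 0.0, 2.0 * channels
        for _ in range(60):
            mid = (lo + hi) / 2
            if erlang_b(mid, channels) < target:
                lo = mid
            else:
                hi = mid
        return lo

    # Hypothetical sector: 20 channels, then 27% more channels (cf. mode 1 forward gain).
    base, more = capacity_at_blocking(20), capacity_at_blocking(25)
    print(f"20 ch: {base:.1f} E; 25 ch: {more:.1f} E; Erlang gain: {more/base - 1:.0%}")

Trunking efficiency explains why the Erlang B column grows faster than the per-link percentage gains.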

2.7.2 Handoff

Handoff or handover is the technique implemented by a cellular system to hand over the communication from one cell to the next. Handoff considerations are of course especially important for highly mobile service.

First and second generation systems use hard handoff. In hard handoff a cell (or sector) hands off the communication to another channel in another cell; the mobile is therefore in communication with only one cell at a time. While efficient, that system may cause some voice calls to drop when moving from one cell to another.

With CDMA, and the use of reuse factor K = 1, a new type of handoff appeared: soft handoff. In soft handoff, a mobile may be in communication with several sectors or cells at the same time. Soft handoff is a great technique to provide seamless fast transitions between cells; it provides a kind of macro-diversity, but may lower system capacity if too many redundant links are used. Improving handoff efficiency is therefore important to ensure good call performance while limiting the amount of power spent on these redundant links. Soft handoff is typically a mobile assisted procedure: the mobile unit detects pilot strengths from various sectors and reports them (via Pilot Strength Measurement Message) to the base station. The decision is then made to add, drop or replace sectors according to set power thresholds and timers (Tadd, Tdrop, Ttdrop).6
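As a rough illustration of this mobile-assisted procedure, the following Python sketch maintains an active set from reported pilot strengths; the threshold and timer values are hypothetical, not standard defaults:

    # Minimal sketch: CDMA soft-handoff active-set maintenance.
    T_ADD, T_DROP, T_TDROP = -14.0, -16.0, 4.0    # dB, dB, seconds (illustrative)
    active = {}                                   # pilot id -> drop-timer start (or None)

    def update(reports, now):
        """reports: pilot id -> Ec/Io (dB), as in a Pilot Strength Measurement Message."""
        for pilot, ecio in reports.items():
            if pilot not in active and ecio > T_ADD:
                active[pilot] = None              # strong new pilot: add to active set
            elif pilot in active:
                if ecio < T_DROP and active[pilot] is None:
                    active[pilot] = now           # weak pilot: start Ttdrop timer
                elif ecio >= T_DROP:
                    active[pilot] = None          # pilot recovered: cancel timer
        for pilot, started in list(active.items()):
            if started is not None and now - started > T_TDROP:
                del active[pilot]                 # timer expired: drop the leg

    update({"A": -10.0, "B": -15.0}, now=0.0)     # A added; B stays out (below Tadd)
    update({"A": -17.0}, now=1.0)                 # A below Tdrop: timer starts
    update({"A": -17.0}, now=6.0)                 # Ttdrop exceeded: A dropped
    print(active)                                 # {}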


Figure 2.3: CDMA handoff parameters: Tadd, Tdrop thresholds, Ttdrop timer.

CDMA vendors focus on a wide range of soft handoff features, including the following:

Base station assisted: In order to limit forward power transmitted, the base station eliminates handoff when not necessary (i.e. when combined Ec∕Io > TQuality).

Extended Handoff Direction Message: IS-95B and cdma2000 add a priority group for neighbour lists, which allows sorting neighboring sectors in likelihood preference: in cell, first tier, second tier, etc. The extended message also allows for a fast swap of pilots in one operation, when needed.

Access and channel assignment: IS-95B and cdma2000 extend the handoff capabilities to the various stages of call establishment; in particular, call originations, page responses, and channel assignment messages now include a list of pilots that allow the call to be established into soft handoff.

Dynamic Thresholds: Instead of fixed add and drop thresholds, a linear function of the combined Ec∕Io prevents the addition of a pilot when not required (Tadd-B = slope × combined Ec∕Io + intercept).


Figure 2.4: CDMA IS-95B handoff parameters: Tadd-B slope and intercept.

These features buy significant capacity and improve call setup statistics.

Other call feature improvements, sometimes proprietary to equipment manufacturers, continually appear: for instance, algorithms to detect a lost call. Other improvements refine power control or gating, and relate to fundamental 3G improvements, which will be examined later.

2.7.3 Adaptive Modulations

For systems primarily designed for voice, modulations were chosen to be reliable and to operate well at fairly low SNR (like QPSK). For data systems it is advantageous to take advantage of higher-order modulation schemes such as 16QAM and 64QAM when the radio link allows it. Higher modulations are more spectrally efficient but prone to higher bit error rates, and may cause more retransmissions. (A sketch of the selection logic follows the list below.)

Data bursts: when the SNR allows for it, use higher modulation and coding rates for better spectral efficiency.

Adaptive modulation: fast modulation changes frame by frame allow for efficient scheduling of high-speed data bursts when the radio channel is capable of it.

ARQ: automatic repeat requests are used to lower modulation when necessary and retransmit faded data.
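The selection logic itself is a simple table lookup; a minimal Python sketch follows, with hypothetical SNR switching points (deployed systems tune these to coding and BLER targets):

    # Minimal sketch: per-frame adaptive modulation and coding selection.
    MCS_TABLE = [                      # (min SNR dB, scheme, information bits/symbol)
        (18.0, "64QAM r3/4", 4.5),
        (12.0, "16QAM r3/4", 3.0),
        (6.0,  "QPSK r3/4",  1.5),
        (0.0,  "QPSK r1/2",  1.0),
    ]

    def select_mcs(snr_db):
        """Pick the most spectrally efficient scheme the reported SNR supports."""
        for min_snr, scheme, eff in MCS_TABLE:
            if snr_db >= min_snr:
                return scheme, eff
        return None, 0.0               # below decode threshold: defer, or rely on ARQ

    for snr in (21.0, 9.5, -3.0):
        print(snr, "dB ->", select_mcs(snr))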


2.7.4 Interference Mitigation

Interference may be cancelled or mitigated by changing antenna patterns as required. Such systems are usually referred to as smart antennas, and are in essence an elaborate extension of sectoring. The aim may be to balance the load, steer a main lobe toward a user, or create a null in the direction of an interferer. Some systems are static, others are dynamic and change with cell load. Some systems are passive, others include active amplification devices. The main types of smart antenna systems may be described as follows (a beam-steering sketch follows the list):

Active antennas: An array of passive and active elements using multiple power amplifiers on the transmit side, and a low-noise amplifier on the receive side.

Switched beams: A fixed array of narrow beams, combined to form various size sectors.

Adaptive arrays: An array of elements offering several degrees of freedom to steer a beam in a certain direction, or create nulls. Array elements are sometimes amplified or attenuated, or are purely passive and utilize phase shifts to create the wanted patterns.

Spatial Division Multiple Access (SDMA): A sophisticated combination of many adaptive elements.
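As an illustration of beam steering with phase-only weights, the following Python sketch computes the response of a uniform linear array; the 8-element size and half-wavelength spacing are illustrative assumptions:

    # Minimal sketch: steering a uniform linear array (half-wavelength spacing).
    import numpy as np

    N = 8                                        # array elements (illustrative)

    def steering_vector(theta_deg):
        """Array response toward angle theta (0 deg = broadside), d = lambda/2."""
        theta = np.radians(theta_deg)
        return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

    w = np.conj(steering_vector(20.0)) / N       # matched weights: main lobe at +20 deg

    for angle in (20.0, 0.0, -40.0):
        response = 20 * np.log10(abs(w @ steering_vector(angle)))
        print(f"{angle:+5.1f} deg: {response:6.1f} dB")
    # The +20 deg direction sees the full (normalized) array gain; other
    # directions fall on the pattern's sidelobes, which adaptive weights can
    # further shape into nulls toward interferers.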

Smart antenna systems of all kinds are available in the industry, but their use is typically limited to dense areas where they bring the most gain. The cost of equipment (sometimes due to the complex transmit aspect) and fairly large antenna system sizes are major drawbacks for these systems [8]. Conversely, the trend for base station electronics is to become smaller and cheaper; the market for smart antenna systems has therefore been smaller than once anticipated, but recent interest in MIMO systems may renew interest in antenna array systems.

2.7.5 Diversity

Antenna diversity is a wonderful technique to improve link budgets; receive diversity simply consists of having more than one antenna at the receiving site. Given the power limitations of a mobile handset, receive diversity has been implemented at the cell site from the early days of cellular systems. Good diversity schemes can add 8 to 11 dB to the uplink budget, thus improving voice quality and capacity on that link. The goal of antenna diversity is to provide two uncorrelated paths and combine the two signals, thus reducing the probability of deep fades. A general guideline is to measure or calculate the correlation coefficient, ρ, and try to achieve the lowest possible correlation between the two paths.

Diversity improvements are of two kinds: improvements on existing receive diversity in the uplink, and introduction of transmit diversity for the forward link.

Figure 2.5: Test setup to measure several antenna spacings for horizontal space diversity for a PCS system: antennas are placed 2λ, 5λ, and 10λ apart.

Figure 2.6: Cellular networks utilize many types of towers and poles, and even some disguises depending on the areas to cover. Different antennas make use of different diversity schemes (space for the left two, polarization for the far right). And some antennas are slightly downtilted (right) to reduce interference to neighboring cells.

Receive diversity has been used from the early days of cellular, and is as popular as ever. Classic diversity schemes use two antennas at the base station and some algorithm to combine signals. (More details in ref. [2] ch. 13.)

Spatial diversity: Used at every sector, well known combining techniques, probably the most efficient type of diversity.

Angular diversity: Typically of little use, its benefits are usually exploited by softer handoff (within a site) or smart antennas.

Time diversity: Currently heavily used (interleaving, half chip offset in I and Q QPSK transmission, rake receivers).

Polarization diversity: Widely used, convenient for small base station sites where antennas cannot be separated.

Transmit diversity is an important feature for forward link capacity improvement. Since handsets are rather small, their receive diversity capabilities are limited, and therefore transmit diversity schemes were long ignored. Transmit diversity schemes are now introduced in 802.11n wireless LANs as well as all forward-looking cellular technologies. (A worked example follows the list below.)

Orthogonal Transmit Diversity (OTD): Coded symbol streams are split into two data streams, each containing half the number of symbols, modulated and spread separately (with two different codes), and transmitted on two different antennas, thus doubling the transmit rate.

Space-Time Spreading (STS): Coded symbol streams are duplicated into two identical streams, modulated and spread separately (with two different codes), and transmitted on two different antennas. The key difference with OTD is that in STS all of the data is sent out on each antenna. This scheme provides redundancy rather than data rate improvement.

Multiple input, multiple output systems (MIMO): These systems are the real emphasis of all new wireless standards: they combine transmit and receive diversity, using several transmitting antennas and several receiving antennas. Systems in use for fixed wireless have shown good performance; a lot of recent research focuses on MIMO processing. [9]
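As a worked example of space-time transmit diversity, the classic Alamouti 2×1 scheme (a close relative of the STS idea above, shown here as an illustration rather than the exact cdma2000 construction) recovers both symbols with simple linear combining:

    # Minimal sketch: Alamouti 2x1 space-time block code (noiseless illustration).
    import numpy as np
    rng = np.random.default_rng(0)

    s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)      # two QPSK symbols
    h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)      # flat-fading gains

    # Slot 1: antennas transmit (s1, s2); slot 2: (-conj(s2), conj(s1)).
    r1 = h1 * s1 + h2 * s2
    r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

    # Linear combining: each symbol is recovered with 2-branch diversity gain.
    norm = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / norm
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / norm
    print(np.allclose([s1_hat, s2_hat], [s1, s2]))             # True (no noise)

Like STS, all of the data leaves on both antennas; the redundancy buys diversity rather than rate.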

2.7.6 MIMO

Multiple Input Multiple Output (MIMO) systems are becoming popular in all wireless standards, from cellular evolutions like LTE to wireless LANs like 802.11n. A MIMO system splits a data stream into multiple streams encoded differently, transmitted over different antennas, and received by multiple antennas. Two main improvements generally result from such a scheme: diversity or capacity increase.

Transmitting the same data stream over different antennas results in a complex transmit and receive diversity scheme. Transmitting different streams on different antennas, on the other hand, adds capacity by spatial division multiplexing (SDM). In both cases a MIMO system is more efficient when good channel estimation is performed between transmit and receive elements.
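The capacity benefit is often quantified with the classic expression C = log2 det(I + (SNR∕Nt) H Hᴴ); a minimal Python sketch for an idealized i.i.d. Rayleigh channel with receiver-side channel knowledge (an illustrative assumption, not a deployed-system model):

    # Minimal sketch: ergodic MIMO capacity over an i.i.d. Rayleigh channel.
    import numpy as np
    rng = np.random.default_rng(1)

    def mimo_capacity(nt, nr, snr_db, trials=2000):
        snr = 10 ** (snr_db / 10)
        total = 0.0
        for _ in range(trials):
            h = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
            m = np.eye(nr) + (snr / nt) * (h @ h.conj().T)
            total += np.log2(np.linalg.det(m).real)
        return total / trials

    for nt, nr in ((1, 1), (2, 2), (4, 4)):
        print(f"{nt}x{nr}: {mimo_capacity(nt, nr, 10.0):.1f} bit/s/Hz at 10 dB SNR")
    # Capacity grows roughly linearly with min(Nt, Nr): the SDM gain.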

Figure 2.7: MIMO systems use multiple antennas on each side, to provide transmit and receive diversity schemes, or spatial division multiplexing for improved capacity.

2.7.7 Other Optimization Techniques

Technology advances and standard improvements target an increase in capacity, coverage, data rate, or some other system performance aspect. In many cases however some simple optimization techniques can be used to increase performance:

Antenna height: higher for further range, or lower to reduce interference.

Cell splitting (into smaller cells: microcells, picocells, femtocells).

Sectoring: often 3, and up to 6 sectors.

Range extension by repeaters or low-noise amplifiers to increase coverage.

Antenna downtilt, which conversely reduces coverage to limit interference with other cells, and is sometimes necessary as cells are reduced in size for increased capacity. Some antennas use electrical downtilt while others are physically tilted down.

And a number of parameter adjustments (power levels, handoff parameters, etc.)

Many of these techniques are used by operators for different optimization needs depending on capacity and coverage demand. In some cases optimization may be seasonal due to different foliage patterns or different usage patterns. In all cases RF networks demand constant tweaking to provide optimal performance.

2.8 Fixed Wireless Access

Fixed wireless access has been used in rural areas for several decades. It is sometimes referred to as wireless local loop (WLL), and is an alternative way to provide Plain Old Telephone Service (POTS) in remote areas where wired solutions are impractical for various reasons. In most cases, trenching long distances to place communication conduits (for fiber or copper) is very costly, especially in mountainous areas. In these cases, WLL provides the ability to extend the Public Switched Telephone Network (PSTN) via wireless solutions.

2.8.1 Classic Architectures

Radio solutions for wireless local loops have been rolled out extensively since the 1970s. Some such radio services are still in place and in use today. These systems of course rely on analog radios, and were designed to offer voice service over fairly long distances. Many of these radio systems however are no longer manufactured, and become difficult to maintain as spare parts become rare. Therefore, new solutions are increasingly needed for WLL; these solutions need to be cost-effective, reliable, adaptable to a wide range of situations, and compliant with local exchange carrier technical, legal, and regulatory standards. Unfortunately, volumes and price points of WLL services are generally low, and suppliers consequently treat the opportunity as a fairly low priority.

Initially, WLL focused on providing extensions to the public switched telephone network (PSTN). Consequently, wire centers and central offices in remote areas had to be outfitted with different radio solutions to reach remote customers. As the PSTN evolved to digital voice, digital switching, and Class 5 features (such as call waiting, caller ID, 3-way calling, and others), WLL systems evolved to include many of these features. WLL products therefore focused on providing feature parity for these Class 5 services to mimic other wired infrastructure. Connectivity to Class 5 switches like the Lucent 5ESS or Nortel DMS100 is specified in Telcordia standards such as GR-303 or GR-008, and WLL systems evolved to use these standard interfaces to the PSTN. Radio connections for these legacy WLL radios rely on TDM circuits: DS0s are simply carried over radio channels, and aggregated into DS1s where backhaul extensions are needed to reach further clusters of customers.

Radio frequencies were allocated for wireless local loop applications, and are referred to as Land Mobile Radio (LMR). LMR radio links for telephony use frequencies in the UHF/VHF band (138-512 MHz), which provide great propagation characteristics even in difficult terrain and fairly heavy tree density. These frequencies however are becoming very rare. In fact, they are in such demand that the FCC recently mandated that radio systems increase their spectral efficiencies and use only a narrow band of spectrum. Much legacy LMR equipment uses 20-25 kHz RF channels (referred to as wideband). Narrowbanded LMR equipment occupies a channel bandwidth of 12.5 kHz or less. An FCC order (FCC-04-292) released December 23, 2004 mandates that LMR equipment which transmits on frequencies between 138-512 MHz (VHF and UHF) must migrate to 12.5 kHz bandwidth by January 1, 2013. In addition, the FCC order mentions the goal of reaching 6.25 kHz channelization, so new WLL systems are urged to deploy these narrow RF channels.

Other radio solutions work in the 2.4 GHz and 5 GHz unlicensed bands, building on the popularity and therefore economies of scale of 802.11a/b/g radios. Unfortunately the popularity of these radios for Wi-Fi LANs also creates a lot of interference. WLL systems are unfortunately less flexible than most LAN devices: they deal with voice, they often use fixed antennas mounted on outside walls, roofs, or nearby poles, and they have to provide emergency communications (life line); consequently the uncertainty of potential Wi-Fi interference can be a major risk. A few systems therefore have a 900 MHz version; although less spectrum is available and less power is allowed, that frequency can be a very useful alternative. Some cordless phones and baby monitors have to coexist in the band, but it is in general much more available for use. Replacing old legacy radios is usually a problem as they use lower legacy frequencies; 900 MHz usually offers solutions with the least amount of new design; and as mentioned above, new TV white spaces are a wonderful new opportunity to explore.

2.8.2 Cellular WLL

In addition to the frequencies mentioned above, wireless carriers have access to licensed cellular spectrum. Its main purpose is of course to provide mobile cellular service, but fixed applications are possible as well.

Fixed radio links usually behave differently from mobile radio links: they are typically less variable in time (therefore easier to predict or equalize), and their fading statistics are generally easier to deal with. Consequently fixed propagation is usually advantageous for a wireless system. Several important aspects of fixed systems should be emphasized.

Propagation

Mobile communication links are more likely to be obstructed, with a theoretical path loss of 40 dB/decade; fixed links on the other hand can approach 20 dB/decade path loss. Even though these theoretical arguments are questionable, experimentally, fixed communication typically benefits from better propagation constants. In addition, an advantage of fixed customers is that (when the need arises) the customer's antenna can be elevated in order to reach (almost) line-of-sight with the base station and therefore improve propagation characteristics.

Propagation modeling of a fixed radio link has fundamental differences with that of a mobile link. Most wireless propagation models are designed for mobile communications, whereas fixed communication links have been derived from simpler models (such as free-space estimates and Fresnel zone considerations); the main reason is that the vast data collection campaigns used for empirical models are nearly always obtained by extensive drive testing, hence mobile.

The problem of collecting fixed data for an empirical model is more difficult; in many cases experimenters present methods to locally average data (over one half of a wavelength) to remove small-scale fading due to multipath. Small-scale fading is difficult to quantify accurately, and even a large number of fixed data points would provide insufficient sampling to evaluate its impact. Another important issue is that of antenna beamwidth (or directivity). Mobile data collections are conducted using an omnidirectional antenna (isotropic with respect to azimuth). It has long been known that the antenna beamwidth, and more specifically the distribution of angles of arrival with respect to the direction of motion of a mobile, are important parameters to quantify the fading of a mobile link [1].

Consequently fixed data models may differ in some cases from the usual empirical models. One contribution to IEEE 802.16 [36] analyzes these details and proposes models based on a large PCS data campaign and associated model [35].

Good fixed models would be welcomed by the industry for fixed wireless access, but the current use of cellular and PCS models is likely to continue for a number of reasons: first, they provide a good estimate for initial design (site-specific models and simulations are used for more precise predictions); second, some time is necessary to roll out large fixed wireless systems that can be used and analyzed in order to provide a wide modeling range; lastly, by the time these fixed models exist, the focus of wireless access is likely to turn again towards mobility.

Advantages of Fixed Links

Fixed links have a few important differences in propagation characteristics. These differences have a significant impact on reach, capacity, and therefore overall cost of a fixed wireless system.

Many mobile radio links, especially urban, suffer from fast fading. Fixed links, especially in more rural areas, still suffer from some fading, but it is slower, mostly due to changes in the neighbouring scatterers. As a result, the bit error probability and frame error rates (FER) are typically improved; that is, similar (or better) FER can be achieved with lower SNR (or Eb/No). In an IS-95 CDMA system for instance, the industry usually accepts Eb/No levels of 4 dB for fixed communication, rather than the 7 dB needed for mobility. All other parameters being equal, a reduction of the Eb/No target by 3 dB nearly doubles capacity, since capacity scales inversely with the required linear Eb/No and 10^(3∕10) ≈ 2. (Refer to CDMA capacity in 2.2.)

Frequency reuse is a very important aspect of system capacity since all wireless systems are limited by the amount of spectrum they can use. Analyses have shown that fixed users with fairly narrow antenna beamwidths oriented toward a given base station offer more efficient spectrum reuse patterns than what mobile omnidirectional users require.

Mobile usage means handoff between base stations, which requires additional radio resources. Fixed usage does not require these radio resources and therefore increases system capacity.

Yet another advantage of narrow beamwidth antennas is that the RF link is improved. In addition, repeaters can be strategically placed at the customer premises to improve that link further.

Fixed wireless links can therefore provide greater reach and capacity than equivalent mobile links. As a result, some of these otherwise costly cellular systems have been used for fixed use, sometimes with minor modifications. WLL equipment is sometimes as simple as cordless telephones; in fact one standard was based on that approach and gained popularity in Europe: Digital European Cordless Telecommunications (DECT) was a popular 2G cellular standard, and even had a third generation evolution during the IMT-2000 international efforts of harmonizing 3G standards. CDMA systems like IS-95 also have profiles and even specific devices for fixed service.

In some cases, wireless local loop base stations became handy to deploy in rural areas to provide extended coverage, and to reach the minimum service mandated by the FCC for PCS spectrum auctions for instance. More recently 3G and 4G systems are advertising their fixed capabilities again and may be trying to compete with other wired broadband services. The main advantage of these data systems is not necessarily to provide high-speed data at 65 mph on a highway, but rather to offer good throughput in a fixed use scenario, with roaming capability anywhere you may need it.

2.8.3 IP-based Architectures

Voice over IP (VoIP) is an efficient and widely accepted method of providing telephony. When considering wireless transport, the efficient compression of VoIP is an especially valuable property. Most recent WLL radio solutions therefore use VoIP transport; this is especially convenient as most consumer and enterprise radio solutions are based on IP and Ethernet. Consequently fairly cheap off-the-shelf systems can be adapted to WLL voice and data delivery. The problem remains however to interface these systems with the nearest telephony network. Several architectures are possible for WLL, depending on the location of network elements with voice features.
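The bandwidth economy of compressed VoIP can be seen from a simple packet-overhead computation; the Python sketch below uses commonly quoted codec figures (G.711 at 64 kb/s, G.729 at 8 kb/s, 20 ms packetization) and counts only RTP/UDP/IPv4 headers, since layer-2 overhead is radio-specific:

    # Minimal sketch: per-call IP bandwidth for two common VoIP codecs.
    HEADERS = 12 + 8 + 20                                # RTP + UDP + IPv4, bytes/packet
    CODECS = {"G.711": (64.0, 20), "G.729": (8.0, 20)}   # (kb/s, ms per packet)

    for name, (rate_kbps, interval_ms) in CODECS.items():
        payload = rate_kbps * 1000 / 8 * interval_ms / 1000  # bytes per packet
        pps = 1000 / interval_ms                             # packets per second
        total_kbps = (payload + HEADERS) * 8 * pps / 1000
        print(f"{name}: {total_kbps:.0f} kb/s per direction")
    # G.711: 80 kb/s; G.729: 24 kb/s -- compression cuts the per-call
    # air-interface load by more than a factor of three.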

Figure 2.8: Fixed wireless links, or wireless local loop (WLL), provide fixed wireless voice and/or data links. Voice services use a voice over IP gateway; additional data services are routed to a broadband data network, and bypass the voice gateway.

Voice Integration

In most rural areas, a local central office has TDM voice circuits available rather than a VoIP system. Consequently, a VoIP gateway is normally required for WLL purposes. Suppliers of WLL systems often have a VoIP gateway as part of the solution; until recently, these solutions were still difficult to roll out because of the VoIP gateway cost and its operations integration. Today, the solution is more practical because radio systems such as those based on the Wi-Fi physical layer and WiMAX systems offer good, fairly cheap, reliable radio solutions; and small-size gateways are available at reasonable prices with good interface standards. Interfaces from the gateway to the switching fabric have to rely on legacy telephony standards. One solution is to connect the VoIP gateway to a telephony Class 5 switch via GR-008 or GR-303. These Telcordia standards allow a gateway to connect to a switch (with one or two T1 lines), and to access Class 5 features (such as call waiting, caller ID, 3-way calling, etc.) An alternative solution when GR-008 or GR-303 interfaces are not supported is to simply interface with analog tip and ring lines. Of course this method has the disadvantage of offering no remote alarming or troubleshooting capability.

The remainder of the voice transport between the voice gateway and the customer end-point follows typical IP transport architectures. Network elements usually interface with Ethernet (10/100, sometimes 1000BASE-T). Many radio systems use a somewhat proprietary physical and MAC layer to ensure reliable voice transport, but often these systems are based on Wi-Fi or WiMAX physical layers. A number of protocols are available to establish a reliable IP session that can provide voice transport, including the Session Initiation Protocol (SIP) or the Media Gateway Control Protocol (MGCP); ITU recommendation H.323 also provides interoperability standards for multimedia communications over IP, including voice features.

Data Integration

Data features are also available on many WLL radios, but are somewhat different. Features like fax and low data rates (up to 56 kbps) are fairly simple to add to most WLL setups, and are supported much like voice switches support them. The task is slightly different when trying to add higher data rates (in the multiple Mbps range). Indeed, higher data rates can no longer interface with the voice switch and need to be split onto a data network of their own. If a high-speed internet network is available in the area, data sessions have to be routed to that network, while voice traffic needs to be identified as such and routed towards the VoIP gateway.

2.9 Homework

1. In a table, list all the wireless technologies popular in modern wireless services (2G, 3G, Wi-Fi, WiMAX). Research and list their main parameters such as: (a) frequency of operation; (b) RF channel bandwidth; (c) peak uplink and downlink data rates; (d) standard body for air interface; (e) modulation type; (f) multiple access; (g) capacity estimate as in §2.2.


2. Calculate capacity numbers for the following standards mentioned in §2.2 and 2.3. In each case, simply assume K = 7 as the reuse factor.
a. Verify that AMPS system capacity is independent of the amount of spectrum available, and is m = 4.7 ch./cell/MHz.
b. Calculate GSM full rate system capacity. (Answer: m = 5.7)
c. Calculate GSM half rate system capacity.

3. CDMA capacity improvement:
a. What capacity gain does a CDMA service provider achieve by changing its handsets from QCELP vocoders to EVRC vocoders?
b. In addition, the better speech coding allows the typical Eb∕Nt to be reduced from 7 dB to 6.5 dB. What is the total capacity gain?

4. CDMA capacity:
a. Derive in detail the capacity formula (2.5) for CDMA systems.
b. Compute a radio system capacity (mCDMA) for IS-95 half rate EVRC (Eb∕Nt = 6.5 dB).

5. Different radio standards system capacity:
a. Compare radio system capacity for the above IS-95 half rate EVRC, GSM half rate voice frames, DECT, and PHS (search online, or refer for instance to [1] chapter 11 for the last two).
b. What are the chances of PHS or DECT to evolve into a 3G standard?

6. You invented a new voice coder that allows you to code voice at 4.8 kb/s rather than 9.6 kb/s with no significant voice degradation.
a. What will the link budget improvement be?
b. Using capacity equations, quantify the impact on network capacity.

7. As an operator, you are faced with the difficult decision of having to regularly upgrade your network to better standards and newer equipment. Assume you are operating a GSM network and you consider upgrading it to UMTS. Consider (a) price and availability of equipment, (b) timeline to upgrade, (c) impact of other carriers' timelines, (d) field experience and proven technology, (e) other considerations.

8. Similarly to the above problem, you now operate a UMTS network with voice and high-speed packet data. Write a proposal to upgrade it to a fourth generation system (with the same above considerations).

Chapter 3 Radio Propagation Modeling

This section introduces propagation characteristics and models for cellular systems. It summarizes important notions, and expands a bit on some aspects of fixed-versus-mobile and indoor-versus-outdoor propagation modeling. For a good introduction to details of radio propagation, refer to [2] ch. 4 to 7, or [1] ch. 3 to 5.

Before studying details of propagation, a few notation conventions are necessary. Although no further details are derived here, the reader is assumed to be familiar with the following general concepts of electromagnetic field and wave theory.

Wireless communications signals of interest are electromagnetic waves, and may be – at least in free space – derived from the electric field E.

The power density of the electromagnetic wave may be written in the form of the Poynting vector: P = E × H. The power density is the modulus of the Poynting vector, Pd = |P|. In free space the power density of the electromagnetic wave is proportional to the modulus squared of the electric field: Pd(t) = |E(t)|²∕η0, where η0 ≈ 377 Ω is the impedance of the vacuum (and by approximation of air). 1

The electric field may be identified with the transmitted signal S(t) = s(t)⋅exp(j2πft), where s(t) is the (real) user-encoded information to transmit, and f is the carrier frequency. S(t) is a complex function whose real part Re{S(t)} = s(t)⋅cos(2πft) is the physical quantity of interest; although the complex function S(t) is usually used for simpler mathematical treatment, one should remember that its real part is the meaningful quantity.

Similarly, we identify the received signal with the received electric field; we will denote the received signal R(t) = r(t)⋅exp(j2πft).

Given the above, received power densities are given by the expression Pd(t) = |R(t)|²∕η0. The actual received power Pr also depends on the effective area of the receiving antenna: Pr(t) = AePd(t) = Ae|R(t)|²∕η0 (see further details in §3.2 and §3.3).

More details of E-field propagation will be studied later with ray tracing; but most of the remainder of the section deals with simpler expressions for power levels in path loss (PL) and link budgets.

3.1 Propagation Characteristics

Between transmitter and receiver, the wireless channel is modeled by several key parameters. These parameters vary significantly with the environment: rural versus urban, or flat versus mountainous. Different kinds of fading occur; they are often separated into three types [1] [3]:

Distance dependence: path loss is approximated by PL = PL0 + 10n × log(d), where n is the path loss exponent, which varies with terrain and environment, and is described further in §3.3 and §3.4.

Large-scale shadowing: causes variations over larger areas, and is caused by terrain, building, and foliage obstructions; its impact on link budgets is detailed further in §3.7. The large-scale fading due to various obstacles is commonly accepted to follow a log-normal distribution ([22], [23], [24] ch. 7). This means that its attenuation x measured in dB is normally distributed N(m,ς), with mean m and standard deviation ς. The probability density function of x is given by the usual Gaussian formula:

p(x) = (1∕(ς√(2π))) exp(-(x - m)²∕(2ς²))     (3.1)

Small-scale fading: causes great variation within a half wavelength. It is caused by multipath and moving scatterers. Resulting fades are usually approximated by Rayleigh, Ricean, or similar fading statistics – measurements also show good fit to Nakagami-m and Weibull distributions. Radio systems rely on diversity, equalizing, channel coding, and interleaving schemes to mitigate its impact.
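These three effects combine into the usual link simulation recipe; the Python sketch below draws received-power samples with a one-slope path loss, log-normal shadowing, and Rayleigh small-scale fading (the exponent n = 3.5 and deviation ς = 8 dB are illustrative values, not taken from a specific model):

    # Minimal sketch: distance dependence + log-normal shadowing + Rayleigh fading.
    import numpy as np
    rng = np.random.default_rng(2)

    def received_power_dbm(tx_dbm, d_m, pl0_db=40.0, n=3.5, sigma_db=8.0):
        path_loss = pl0_db + 10 * n * np.log10(d_m)              # distance dependence
        shadowing = rng.normal(0.0, sigma_db)                    # large-scale, N(0, sigma) dB
        ray_power = (rng.normal() ** 2 + rng.normal() ** 2) / 2  # exponential, mean 1
        return tx_dbm - path_loss + shadowing + 10 * np.log10(ray_power)

    print([round(received_power_dbm(43.0, 500.0), 1) for _ in range(5)])  # dBm samples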

Different spectrum bands have very different propagation characteristics and require different prediction models. Some propagation models are well suited for computer simulation in presence of detailed terrain and building data; others aim at providing simpler general path loss estimates [25].

3.2 Free-Space Propagation

The simplest approach is to estimate the power ratio between transmitter and receiver as a function of the separation distance d; that ratio is referred to as path loss. A physical argument of conservation of energy leads to the Friis power transmission formula in free space. A transmitted power source Pt radiates spherically; the portion of that power impinging on an effective area Ae at a distance d is Pr = PtGtAe∕(4πd²). The effective area of an antenna is related to its gain by Ae∕λ² = G∕(4π), which is applied to the receiving antenna, and thus yields:

Pr∕Pt = GtGr (λ∕(4πd))²     (3.2)

(Pt and Pr are the transmitted and received power, Gt and Gr are the transmitter and receiver antenna gain, λ is the wavelength of the signal, and d is the separation distance).

Figure 3.1: Spherical free-space propagation.


This equation shows a free-space dependence in 1∕d², and is sometimes expressed in decibels (dB): L(dB) = 10 × log(Pt∕Pr).

In many cases, antenna gains are considered separately, and one chooses to focus on the path loss between the two antennas. The path loss reflects how much power is dissipated between transmitter and receiver antennas (without counting any antenna gain). Of course the path loss variation with distance is d², or 20 log(d) in dB, which is characteristic of a free-space model. The exponent (here n = 2) is called the path loss exponent, and may vary in other models. Path loss is often expressed as a function of frequency (f), distance (d), and a scaling constant that contains all other factors of the formula. For instance:

L(dB) = 32.44 + 20 log(f∕f0) + 20 log(d∕d0)     (3.3)

where f0 = 1 MHz, and d0 = 1 km. Note that the constant 32.44 changes with the reference frequency f0 and the reference distance d0.
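Equation (3.3) is easy to check numerically; a short Python sketch:

    # Minimal sketch: free-space path loss per equation (3.3).
    import math

    def free_space_loss_db(f_mhz, d_km):
        return 32.44 + 20 * math.log10(f_mhz) + 20 * math.log10(d_km)

    print(f"{free_space_loss_db(1900, 1.0):.1f} dB")  # ~98 dB at 1.9 GHz, 1 km
    print(f"{free_space_loss_db(1900, 2.0):.1f} dB")  # +6 dB per doubled distance (n = 2)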

3.3 Ray Tracing

Ray tracing is a method that uses a geometric approach, and examines what paths the wireless radio signal takes from transmitter to receiver, as if each path were a ray of light (possibly reflecting off surfaces). Ray-tracing predictions are better when good map information of the area is available, but the predicted results may not be applicable in other locations.

Fairly simple and fairly general models may be devised from ray tracing concepts as well. The well-known two-ray model uses the fact that for most wireless propagation cases, two paths exist from transmitter to receiver: a direct path and a bounce off the ground. That model alone shows some important variations of the received signal with distance. [1] [3]

Ray tracing models are important for a good understanding of radio propagation; they are extensively used in software propagation prediction packages, which justifies a closer look at them in this section. Rays are an optical approximation for the propagation of the electromagnetic wave; in free space it is convenient to focus on the propagation of the electric field.

3.3.1 Two-Ray Model

With a few notation conventions we can explain a simple but useful model with two rays.


Figure 3.2: 2-ray model geometry.

Figure 3.2 shows a fixed tower (e.g. in a cellular system) at a height hb, and a fixed or mobile client device at a distance d0, and at a height hm (usually lower). The figure shows a direct ray and an indirect ray bouncing off the ground, assumed to be a perfect plane (this assumption is referred to as the flat-earth model).

It is easy to see from this figure that the two path lengths are:

l = √(d0² + (hb - hm)²)     (3.4)

l′ = √(d0² + (hb + hm)²)     (3.5)

Assuming free-space propagation, equation (3.2) can be written Pr∕Pt = |p0|² in terms of a parameter p0(t) = R(t)∕S(t). The received signal at a distance d0 is therefore:

p0(t) = (λ∕(4π)) [ e^(-j2πl∕λ)∕l + Γ (s(t - τ)∕s(t)) e^(-j2πl′∕λ)∕l′ ]     (3.6)

where λ = c∕f is the wavelength, τ is the time difference between the two paths, and Γ is the ground reflection coefficient. We will elaborate on Γ in §3.3.2 (for now let us simply assume perfect reflection and use Γ = -1).2

Another important assumption must be made here to simplify the model: we will assume that τ is small compared to the symbol length of the useful information, that is, s(t) ≈ s(t - τ). For a bounce off the ground, that assumption is fairly safe; but in general we will have to recall that it means the delay spread (the spread of values of τ) is assumed small compared to the transmitted symbol duration.

So finally we obtain:

p0 = (λ∕(4π)) [ e^(-j2πl∕λ)∕l + Γ e^(-j2πl′∕λ)∕l′ ]     (3.7)

the last factor of which can be easily plotted and examined for variations of the received signal strength.

Figure 3.3: Simple propagation models: free-space one-slope direct line of sight, and two-ray with direct ray and ground reflected ray.

Figure 3.3 represents the path loss attenuation Pr∕Pt = |p0|2 (in dB) as a function of logarithm of distance; it uses hb = 8 m, hm = 2 m, f = 2.4 GHz, Gt = Gr = 0dBi, and Γ as given later in §3.3.2. The direct path (using the first term only of (3.7)) leads to the simple free-space model; the complete expression leads to the two-ray model, which shows interesting characteristics:

In close proximity the overall power decay is in 1∕d², with maxima and fast fades due to additive and destructive components of the two rays.

After a certain cutoff distance, usually taken to be 4hbhm∕λ, the model approaches power decay in 1∕d⁴.
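The behavior of figure 3.3 can be reproduced directly from (3.7); the Python sketch below uses the same geometry (hb = 8 m, hm = 2 m, f = 2.4 GHz) but assumes perfect reflection Γ = -1 for simplicity:

    # Minimal sketch: two-ray received power |p0|^2 from equation (3.7), Gamma = -1.
    import numpy as np

    hb, hm, f = 8.0, 2.0, 2.4e9
    lam = 3e8 / f

    def two_ray_db(d0):
        l = np.sqrt(d0**2 + (hb - hm)**2)      # direct path, eq. (3.4)
        lp = np.sqrt(d0**2 + (hb + hm)**2)     # ground bounce, eq. (3.5)
        p0 = (lam / (4 * np.pi)) * (np.exp(-2j * np.pi * l / lam) / l
                                    - np.exp(-2j * np.pi * lp / lam) / lp)
        return 20 * np.log10(np.abs(p0))

    d = np.array([10.0, 100.0, 512.0, 2000.0]) # cutoff 4*hb*hm/lam is ~512 m here
    print(two_ray_db(d))
    # Beyond the cutoff the decay steepens from 1/d^2 toward 1/d^4.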


3.3.2 Reflection and Refraction

Before moving ahead, we need to take a closer look at reflection coefficients used for indirect rays. The details of this analysis come from boundary conditions for electromagnetic waves traveling between two media. In general these boundary conditions vary with the polarization of the wave and the media permittivities. For a wave impinging on the ground with an angle of incidence θ, the ground reflection coefficient depends on the polarization and is given by equation (3.8):

Γv = (εr sinθ - √(εr - cos²θ)) ∕ (εr sinθ + √(εr - cos²θ)),  Γh = (sinθ - √(εr - cos²θ)) ∕ (sinθ + √(εr - cos²θ))     (3.8)

Γ is the ground reflection coefficient (Γv for vertical and Γh for horizontal polarization), and follows from impedance matching at the boundary, where Z is the characteristic impedance of the media, as obtained by transmission line theory [70] [24]; θ is the ray angle of incidence, measured from the surface (as shown on fig. 3.4); εr is the complex relative permittivity of the medium: εr = εr′ - jεr″ ≈ εr′ - j60ςλ, where εr′ is the lossless relative permittivity and ς is the conductivity (in Ω⁻¹m⁻¹).

The wave typically has many polarized components; even when a transmitter uses vertically polarized antennas, different scatterers in the path may depolarize the wave. Nevertheless, the majority of cellular systems use vertical polarization, which is shown empirically to propagate slightly better in most practical cellular environments. In these cases, the electric field is near vertical, and the reflection (and refraction) on a surface is shown on figure 3.4.

Figure 3.4: Vertical polarization ground reflection coefficient.


Figure 3.5: Horizontal polarization ground reflection coefficient.

Similarly, rays bouncing off walls have a reflection coefficient (of course a vertically polarized wave now needs to be considered as impinging on the surface with the electric field near the surface plane, as a horizontally polarized wave does on the ground).

Values for complex permittivities may be used approximately from table 3.1 (from [24] p. 55, and a few other references); www.fcc.gov/mb/audio/m3/ gives ground conductivity maps for the US.

Table 3.1: Relative permittivities for various materials.

Material        εr           ς (Ω⁻¹m⁻¹)            Comments
Vacuum          1                                  By definition
Air             1.00054                            Usually approximated to 1.0
Glass           3.8-8                              Varies with glass types
Wood            1.5-2.1
Drywall         2.8
Polystyrene     2.4-2.7
Dry brick       4
Concrete        4.5                                May vary 4-6
Limestone       7.5          0.03
Marble          11.6
Fresh water     80.2         0.01
Sea water       80.2         5
Snow            1.3
Ice             3.2
Ground          15 (7-30)    0.005 (0.001-0.03)    Varies with type and humidity

Further refinements may be considered regarding the thickness of walls: the ground may easily be considered as an infinite semi-plane, but walls are usually thin enough to make that approximation questionable. The impact of wall thickness is shown in [24] p. 75.
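The coefficients of (3.8) are easy to evaluate with the permittivities of table 3.1; the Python sketch below uses average ground (εr = 15, ς = 0.005 Ω⁻¹m⁻¹) at 2.4 GHz:

    # Minimal sketch: ground reflection coefficients per equation (3.8).
    import numpy as np

    def reflection_coeffs(theta_deg, eps_r, sigma, lam):
        """theta is the grazing angle from the surface, per figure 3.4."""
        th = np.radians(theta_deg)
        eps = eps_r - 1j * 60 * sigma * lam          # complex relative permittivity
        root = np.sqrt(eps - np.cos(th) ** 2)
        g_v = (eps * np.sin(th) - root) / (eps * np.sin(th) + root)  # vertical
        g_h = (np.sin(th) - root) / (np.sin(th) + root)              # horizontal
        return g_v, g_h

    for theta in (1.0, 10.0, 45.0, 90.0):
        g_v, g_h = reflection_coeffs(theta, 15.0, 0.005, 0.125)
        print(f"{theta:4.0f} deg: |Gv| = {abs(g_v):.2f}, |Gh| = {abs(g_h):.2f}")
    # At grazing incidence both magnitudes approach 1 (with ~180 deg phase),
    # which justifies Gamma = -1 in the two-ray model.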

3.3.3 Multiple Rays

Figure 3.6: Six- and ten-ray model geometry.

The above 2-ray approach can easily be extended to add as many rays as required [3]. We may add rays bouncing off each side of a street in an urban corridor, leading to a 6-ray model (with rays R0, R1, R2 each having a direct and a ground-bouncing ray). Adding four more rays (bouncing on both sides: R3, R4, dashed lines in figure 3.6) leads to a 10-ray model.


The direct two rays were computed earlier; the additional rays may easily be obtained from the geometry of figure 3.6. Let us assume for instance a street corridor of width ws, with a transmitter on a light pole at wt from the walls. For simplicity, let us move the receiving point down the street at constant wt from the wall; in that case the distance Tx to Rx represented by R1 is d1 = √(d0² + 4wt²), so much like (3.7):

p1 = Γ1 (λ∕(4π)) [ e^(-j2πl1∕λ)∕l1 + Γ e^(-j2πl1′∕λ)∕l1′ ]     (3.9)

where l1 = √(d1² + (hb - hm)²), and l1′ = √(d1² + (hb + hm)²). Γ1 is the reflection coefficient off the nearest wall, and is computed from (3.8), but with angles taken with respect to the walls.

Additional rays (R2 and more) can be calculated in expressions resembling (3.9), and added to others in order to produce a multiple-ray model.

Figure 3.7: Ray tracing plots of received signal power indicator 20 log|∑i=0..N pi| as a function of log d0 for N ∈ {0,2,3,5}. A typical suburban case is taken with a street width of 20 feet, and an average distance from street to home of wt = 10 feet (so ws = 40 feet).

Figure 3.7 shows the increased fading statistics when more rays are taken into account. The figure simply represents the received signal power indicator 20 log|∑i=0..N pi| as a function of log d0 for N ∈ {0,2,3,5}. For that plot a typical suburban case is taken with a street width of 20 feet, and an average distance from street to home of wt = 10 feet (so ws = 40 feet).

3.3.4 Residential Model

As previously mentioned, this approach is interesting for urban and suburban corridors. We further assume that property lengths and home lengths along the street are approximately identical (say 100 feet and 80 feet respectively). In that case, some rays escape the corridor and never reach the receiver – as illustrated in figure 3.8, R3 rays escape the urban canyon and never reach the receiver. Taking into account these gaps shows a slightly modified model (figure 3.9). Alternatively, instead of examining where rays may escape the corridor, a simplified model may be used that takes into account a power loss proportional to the gaps [40].

Figure 3.8: Ray tracing geometry for a street corridor: some rays escape the corridor through gaps between homes.

Figure 3.9: Ray tracing power levels down a street, with gaps between homes.

3.3.5 Indoor Penetration

Most cellular towers are placed outdoors, while eighty percent of phone calls are placed indoors. Therefore the problem of how much of the signal strength propagating down the street might be available indoors is of great interest. Grazing angles of incidence are somewhat concerning in urban and suburban corridors. Figure 3.10 shows a typical case where wireless systems (base stations or access points) may be placed on the opposite side of the street to provide coverage to residences.

Figure 3.10: Ray tracing impinging on home walls.


An indoor system may detect the optimal signal among outside sources. Received power levels on the home front wall (inside and outside) are compared in figure 3.11.

Figure 3.11: Received power levels from four rays outside and inside home front wall.

In our previous urban corridor model, the angles of incidence should be restricted to rays illuminating walls (as in figure 3.12). 3 4

Figure 3.12: Angles of incidence illuminating homes in an urban corridor.


(3.10)

(3.11)

(3.12)

(3.13)

Angles of incidence between these values should be used to calculate penetration losses such as:

(3.14)

For instance, in a Lakewood neighborhood a light pole is placed every three homes on opposite street sides (i.e. a pole every six homes); we get the values in table 3.2 for the furthest home (n = 3, 100-foot properties, 80-foot-long homes, 40-foot-wide streets, and wt = 10 feet). The value Lge ≈ 10 dB is typical for residential areas. (More details in §3.6.)

Table 3.2: Angles of incidence in a suburban area in Lakewood, CO.

Pole position    θ3 (deg)    θ4 (deg)    L′ge from (3.14)
Across street    19.4        14.6        0.5 Lge
Same side        6.7         5.0         8.0 Lge

3.3.6 Indoor Propagation

Propagation within a building is yet another problem of interest, and is different when the signal comes from the outside or has a source within the building. Indoor propagation varies greatly with the type of building, and the position of access points within the building – how far from walls, how high compared to obstructions and furniture. 3D ray models are sometimes used to better predict these situations. Other generic models are detailed in §3.4.5.

3.4 Empirical Models

Empirical models are simpler but provide good first order modeling for a wide range of locations. A handful of empirical models are widely accepted for cellular communications; these models usually simply consist of computing a path loss exponent n from some linear regression argument on a set of field data, and deriving a model like:

L = L0 + 10 n log(d∕d0)     (3.15)

(where the intercept L0 is the path loss at an arbitrary reference distance d0). These models are referred to as empirical one-slope models; their applications and domains of validity are well described and analyzed for instance in [3] ch. 2, [1] ch. 4, [5], [24] ch. 6-7. They provide a first estimate used by service providers in wireless systems' design phase.5

A couple of important points should be kept in mind about most propagation models. The first is that large amounts of empirical data are collected usually at cellular or PCS frequencies (800 MHz or 1900 MHz), and extensions to other frequencies are derived as discussed in §3.4.6. The second is that these data points are collected while driving and may not accurately reflect fixed wireless links, which is discussed in more details in §2.8.

3.4.1 COST 231-Hata Model

A one-slope empirical model was derived by Okumura [26] from extensive measurements in urban and suburban areas. It was later put into equations by Hata [27]. This Okumura-Hata model, valid for 150 MHz to 1.5 GHz, was later extended to PCS frequencies, 1.5 GHz to 2 GHz, by the COST project ([28], [29] ch. 4), and is referred to as the COST 231-Hata model; it is still widely used by cellular operators. The model provides good path loss estimates for large urban cells (1 to 20 km), and a wide range of parameters like frequency, base station height (30 to 200 m), and environment (rural, suburban or dense urban).

L = c0 + cf log(f∕1MHz) - b(hB) - a(hM) + (44.9 - 6.55 log(hB∕1m)) log(d∕1km) + CM     (3.16)

with the following values:

Table 3.3: Values for COST 231 Hata and Modified Hata model.

Frequency (MHz)    c0 (dB)    cf (dB)    b(hB) (dB)
150-1500           69.55      26.16      13.82 log(hB∕1m)
1500-2000          46.3       33.9       13.82 log(hB∕1m)

The parameter a(hM) is strongly impacted by surrounding buildings, and is sometimes refined according to city sizes:

Table 3.4: Values of a(hM) for COST 231-Hata model according to city size.

Frequency (MHz)    City size       a(hM) (dB)
150-2000           Small-medium    (1.1 log(f∕1MHz) - 0.7)(hM∕1m) - 1.56 log(f∕1MHz) + 0.8
150-300            Large           8.29 (log(1.54 hM∕1m))² - 1.1
300-2000           Large           3.2 (log(11.75 hM∕1m))² - 4.97

And an additional parameter CM is added to take into account city size, and can be summarized for both models as:

Table 3.5: Values of CM for COST 231-Hata model according to city size.

Frequency (MHz)    City size                   CM (dB)
150-1500           Urban                       0
150-1500           Suburban                    -2 (log(f∕28MHz))² - 5.4
150-1500           Open rural                  -4.78 (log(f∕1MHz))² + 18.33 log(f∕1MHz) - 40.94
1500-2000          Medium city and suburban    0
1500-2000          Metropolitan center         3


Empirical values of the model are limited to distances and tower heights that were used to derive the model; consequently the model is usually restricted to:

Base station antenna height: 30 to 200 m
Mobile height: 1 to 10 m
Cell range: 1 to 20 km
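The model is compact enough to evaluate directly; a minimal Python sketch of (3.16) for the 1500-2000 MHz band, using the small/medium city a(hM) of tables 3.3-3.5:

    # Minimal sketch: COST 231-Hata median path loss, equation (3.16), 1500-2000 MHz.
    import math

    def cost231_hata_db(f_mhz, d_km, hb_m, hm_m, metropolitan=False):
        a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * hm_m
                - (1.56 * math.log10(f_mhz) - 0.8))
        cm = 3.0 if metropolitan else 0.0
        return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(hb_m)
                - a_hm + (44.9 - 6.55 * math.log10(hb_m)) * math.log10(d_km) + cm)

    # PCS example: 1900 MHz, 30 m tower, 1.5 m mobile, 5 km cell.
    print(f"{cost231_hata_db(1900, 5.0, 30.0, 1.5):.1f} dB")   # ~162 dB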

3.4.2 COST 231-Walfisch-Ikegami Model

Another popular model is the Walfisch-Ikegami-Bertoni model [33] [34], also revised by the COST project ([28], [29] ch. 4) into a COST 231-Walfisch-Ikegami model. It is based on considerations of reflection and scattering above and between buildings in urban environments. It considers both line of sight (LOS) and non line of sight (NLOS) situations. It is designed for 800 MHz to 2 GHz, base station heights of 4 to 50 m, and cell sizes up to 5 km, and is especially convenient for predictions in urban corridors.

The case of line of sight is approximated by a model using free-space approximation up to 20 m and the following beyond:

L(dB) = 42.6 + 26 log(d∕1km) + 20 log(f∕1MHz)     (3.17)

The model for non line of sight takes into account various scattering and diffraction properties of the surrounding buildings:

L = L0 + Lrts + Lmsd     (3.18)

where L0 represents free space loss, Lrts is a correction factor representing diffraction and scatter from rooftop to street, and Lmsd represents multiscreen diffraction due to urban rows of buildings. These terms vary with street width, building height and separation, angle of incidence, and are detailed in table 3.6.

Table 3.6: Values for COST 231-Walfisch-Ikegami model.

Parameter    Value (dB)
L0           32.4 + 20 log(d∕1km) + 20 log(f∕1MHz)
Lrts         -16.9 - 10 log(w∕1m) + 10 log(f∕1MHz) + 20 log(ΔhM∕1m) + LOri
w            Average street width
ΔhM          hRoof - hM
LOri         -10 + 0.354 φ for 0° ≤ φ < 35°;  2.5 + 0.075 (φ - 35°) for 35° ≤ φ < 55°;  4.0 - 0.114 (φ - 55°) for 55° ≤ φ ≤ 90°
φ            Road orientation with respect to direct radio path (see figure 3.13)
Lmsd         Lbsh + ka + kd log(d∕1km) + kf log(f∕1MHz) - 9 log(b∕1m)
b            Average building separation
ΔhB          hB - hRoof
Lbsh         -18 log(1 + ΔhB∕1m) for hB > hRoof;  0 for hB ≤ hRoof
ka           54 for hB > hRoof;  54 - 0.8 ΔhB∕1m for hB ≤ hRoof and d ≥ 0.5 km;  54 - 0.8 (ΔhB∕1m)(d∕0.5km) for hB ≤ hRoof and d < 0.5 km
kd           18 for hB > hRoof;  18 - 15 ΔhB∕hRoof for hB ≤ hRoof
kf           -4 + 0.7 (f∕925MHz - 1) for medium city and suburban;  -4 + 1.5 (f∕925MHz - 1) for metropolitan centers

Figure 3.13: Definition of street orientation angle φ for use in the COST 231-Walfisch-Ikegami model: in the best case (φ = 0°) the direction of propagation follows the street; in the worst case (φ = 90°) the main radio wave is perpendicular to the street.

The model is usually restricted to:

Frequency: 800 to 2000 MHz
Base station antenna height: 4 to 50 m
Mobile height: 1 to 3 m
Cell range: 0.2 to 5 km

3.4.3 Erceg Model

More recently Erceg et al. [35] proposed a model derived from a vast amount of data at 1.9 GHz, which makes it a preferred model for PCS and higher frequencies. The model was in particular adopted by the 802.16 study group [36] and is popular with WiMAX suppliers for 2.5 GHz products, and even 3.5 GHz fixed WiMAX.

L = L0 + 10 γ log(d∕d0) + s,  for d > d0     (3.19)

where free space approximation is used for d < d0. Values for L0, γ, and s are defined in tables 3.7 and 3.8:

Table 3.7: Values for Erceg model.

Parameter    Value (dB)
L0           20 log(4πd0∕λ) as in free space
d0           100 m
γ            (a - b hB + c∕hB) + x ςγ
s            y ς
ς            μς + z ςς
x, y, z      Gaussian random variables N(0,1)

Table 3.8: Values for Erceg model parameters in various terrain categories.

Terrain A: hilly / moderate to heavy tree density
Terrain B: hilly / light tree density, or flat / moderate to heavy tree density
Terrain C: flat / light tree density

Parameter    A         B         C
a            4.6       4.0       3.6
b (m⁻¹)      0.0075    0.0065    0.0050
c (m)        12.6      17.1      20.0
ςγ           0.57      0.75      0.59
μς           10.6      9.6       8.2
ςς           2.3       3.0       1.6

The model is usually restricted to:

Frequency: 800 to 3700 MHz
Base station antenna height: 10 to 80 m
Mobile height: around 2 m
Cell range: 0.1 to 8 km

The model is particularly interesting as it provides more than an estimate for path loss exponent and path loss: it also gives a measure of the variation about that median value in terms of three zero-mean, unit-variance Gaussian random variables (x, y, z ~ N(0,1)).
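A minimal Python sketch of (3.19), using the terrain category B parameters of table 3.8 and the median case x = y = z = 0:

    # Minimal sketch: Erceg model median path loss, equation (3.19), category B.
    import math

    def erceg_median_db(f_hz, d_m, hb_m, a=4.0, b=0.0065, c=17.1, d0=100.0):
        lam = 3e8 / f_hz
        l0 = 20 * math.log10(4 * math.pi * d0 / lam)   # free space at d0, table 3.7
        gamma = a - b * hb_m + c / hb_m                # median path loss exponent
        return l0 + 10 * gamma * math.log10(d_m / d0)

    # 2.5 GHz link, 30 m base station, 2 km range.
    print(f"{erceg_median_db(2.5e9, 2000.0, 30.0):.1f} dB")   # ~137 dB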

3.4.4 Multiple Slope Models

Further refinements of these models, in which multiple path loss exponents (n1, n2) are used at different ranges, provide some improvements, especially in heavy-multipath indoor environments. For outdoor propagation, two slopes are sometimes used: one near free space for close points, and another empirically determined. In fact we have seen that our 2-ray model could be approximated by a 2-slope model: n1 = 2, and n2 = 4 for distances greater than 4hbhm∕λ.

It seems however that variations from site to site are generally such that these multiple-slope improvements are fairly small, and simple one-slope models are generally a good enough first approximation. More detailed site-specific models are required for better results; but they require additional effort and site-specific terrain or building data.

3.4.5 In-building

Indoor propagation often has to be estimated by site-specific models with features specific to a particular building: construction material, wall thickness, floor and ceiling material all have a strong impact on wave guiding within the building. Some models simply approximate the number of walls and floors, with an average loss for each. See in particular the COST 231 approach in §3.6.


A similar model for indoor environment is the Motley-Keenan model ([2],§7.2), which estimates path loss between transmitter and receiver by a free space component (L0) and additive loss in terms of wall attenuation factors (Fwall) and floor attenuation factors (Ffloor).

L = L0 + Σ Fwall + Σ Ffloor     (3.20)

Wall attenuation factors vary greatly, typically 10 to 20 dB (see table 3.10 in §3.6); and floor attenuation factors are reported to vary between 10 and 40 dB depending on buildings. [1]

This model is very site-specific, yet sometimes imprecise as it does not take into account proximity of windows, external walls, etc.; but it can be useful as a guideline to estimate signal strength in different rooms, suites, and floors in buildings.
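A minimal Python sketch of (3.20); the wall and floor factors are illustrative mid-range picks from the spreads quoted above:

    # Minimal sketch: Motley-Keenan indoor loss, equation (3.20).
    import math

    def motley_keenan_db(f_hz, d_m, n_walls, n_floors,
                         f_wall_db=12.0, f_floor_db=20.0):
        lam = 3e8 / f_hz
        l0 = 20 * math.log10(4 * math.pi * d_m / lam)   # free-space component
        return l0 + n_walls * f_wall_db + n_floors * f_floor_db

    # 2.4 GHz access point, 15 m away, through two walls and one floor.
    print(f"{motley_keenan_db(2.4e9, 15.0, 2, 1):.1f} dB")   # ~108 dB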

3.4.6 Frequency Variations

Frequency of operation impacts propagation and path loss estimates. As many models are built on cellular or PCS data measurements, one must be careful about extending them to other frequency ranges.

As seen in equation (3.3) in §3.2, the impact of frequency on free-space propagation is 20 log f. Some empirical measurements confirm the trend [37], and the extension is used for instance in the COST 231-Walfisch-Ikegami model.

Empirical evidence also shows however that frequency extensions may be obtained by adding a frequency dependence in f^2.6 (or a 26 log f term in dB), as suggested by [38], and used for instance in the Okumura-Hata model [27] and the 802.16 contribution [36].

Finally other important aspects have a significant impact as frequency changes. Spatial diversity gain typically improves with frequency since spatial separation increases when related to wavelength ([39] shows a 2dB diversity gain from cellular 850 MHz to PCS 1.9 GHz). Doppler spread and impact on symbol duration should also be studied separately and may have a significant impact on a change of frequency [41]. Impact on in-building penetration is examined further in §3.6.

3.4.7 Foliage

Foliage attenuates radio waves and may cause additional variations in high wind conditions [42]. Propagation losses vary for instance with the position of the transmitter with respect to the tree canopy; they also vary with the types and density of foliage, and with seasons. [43][44][45][46]


We will report in a later chapter on the impact of foliage for fixed wireless links at 3.5 GHz, in a suburban area as foliage grows from the winter months into the spring (see figure 9.7).

Studies have been published at different frequencies, and the impact of foliage is reported in a number of ways: some identify empirical attenuation statistics with Rayleigh, Ricean, or Gaussian variables; others derive excess path loss, or attenuation per meter of vegetation.

As a rule of thumb, at our frequencies of interest (2-6 GHz) a single tree causes approximately 10-12 dB attenuation, and typical estimates are 1-2 dB/m attenuation. Deciduous trees in winter cause less attenuation: 0.7-0.9 dB/m. See table 3.9 and figure 3.14.

Practically, the height of the antenna with respect to tree height (or canopy height) strongly impact propagation characteristics; different path loss estimates and path loss exponents may be empirically derived depending on relative height with the canopy.[47]

Table 3.9: Vegetation loss caused by tree foliage: single-tree loss (dB) and per-meter loss (dB/m) — summary of values reported at various frequencies.

Source          Frequency (GHz)   Single tree (dB)         Per meter (dB/m)        Comments
Benzair [43]    2.0               20.0                     1.05                    Summer
                4.0               27.5                     1.40
                2.0               9.5                      0.70                    Winter
                4.0               10.7                     0.85
Dalley [44]     3.5               11.2                     1.9                     With leaves
                5.8               12.0                     2.0
Wang [46]       1.0               10.0                     -                       Single tree
                2.0               14.0                     -
                4.0               18.0                     -
Torrico [47]    1.0               -                        0.7                     With leaves
                2.0               -                        1.0
Approximation                     12.01 + 7.46 log(fGHz)   0.54 + 1.40 log(fGHz)

Figure 3.14: Tree foliage attenuation as a function of frequency.
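The approximation row of table 3.9 can be evaluated directly; a minimal sketch follows (function names are ours, and the fits are only rough interpolations of the reported values):

from math import log10

def tree_loss_db(f_ghz):
    """Approximate single-tree loss (dB), fit from table 3.9."""
    return 12.01 + 7.46 * log10(f_ghz)

def foliage_loss_db_per_m(f_ghz):
    """Approximate per-meter foliage loss (dB/m), fit from table 3.9."""
    return 0.54 + 1.40 * log10(f_ghz)

for f in (2.0, 3.5, 5.8):
    print(f"{f} GHz: single tree ~{tree_loss_db(f):.1f} dB, "
          f"~{foliage_loss_db_per_m(f):.2f} dB/m")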

3.5 Further Modeling Work

The above models are in a sense simplistic, as they focus on path loss as a function of distance. Although these models work well for large cellular coverage prediction, they are often deemed insufficient for smaller cells such as wireless LANs, especially where multipath is dominant, as in a heavy urban environment or an indoor environment.


An interesting and important activity around propagation modeling is the COST project (Coopération européenne dans le domaine de la recherche scientifique et technique), a European Union forum for cooperative scientific research that has been useful in focusing efforts and publishing valuable summary reports for wireless communications needs.

COST 207
"Digital Land Mobile Radio Communications", March 1984 - September 1988: developed the channel model used for GSM.

COST 231
"Evolution of Land Mobile Radio (Including Personal) Communications", April 1989 - April 1996: contributed to the deployment of GSM1800, DECT, HIPERLAN 1, and UMTS, and defined propagation models for IMT-2000 frequency bands [29].

COST 259
"Wireless Flexible Personalised Communications", December 1996 - April 2000: contributed to wireless LAN modeling and to the 3GPP channel model [30].

COST 273
"Towards Mobile Broadband Multimedia Communications", May 2001 - June 2005: contributed to standardisation efforts in 3GPP and UMTS networks, and provided channel models for MIMO systems [31].

The COST 2100 effort has now started [32] to continue the COST 273 work, and is an important activity around the current MIMO advances in the wireless industry.

3.6 In-Building Penetration

Fixed wireless service may use antennas placed on individual homes, but that comes with a number of obvious problems: customers may not welcome structures on their homes, and installation time and cost are high. Therefore, even for fixed services, important operational aspects lead operators to ship a small device, like an ADSL or cable modem, that customers may install without on-site technician time. Furthermore, the clear advantage of wireless data services lies in portability or full mobility; it therefore seems clear that the trend is to pursue small indoor devices.

Sending an RF signal into buildings comes at an additional cost, which can be quantified by an additional building penetration loss in the link budget. Unfortunately, indoor penetration measurements are difficult to compare from one experiment to another. The difficulty arises mostly from the fact that indoor and outdoor environments are so different that the method of data collection may cause large variations between the two environments; the following parameters have an influence: antenna beamwidth, angle of incidence, outside multipath, indoor multipath, distance from the walls, etc.

The COST project proposes models for indoor penetration – see [29] §4.6 – with variations of angle of incidence. The COST 231 indoor model simply uses a line-of-sight path loss with an indoor component:

L = LFS(S + d) + Lin    (3.21)

where LFS is the free-space path loss of equation (3.3) taken over the total path, S is the outdoor path, d is the indoor path, and

Lin = Le + Lge(1 - sinθ)² + max(Γ1, Γ2)    (3.22)

where Le is the normal-incidence first-wall penetration loss; the next term represents the added loss due to the angle of incidence θ, and is sometimes averaged over empirical values of incidence, in which case the average may be noted L′ge = ⟨Lge(1 - sinθ)²⟩; and the last term max(Γ1, Γ2) estimates the loss within the building, whether going through walls or down a corridor.

Since angles of incidence are not always known, the averaged estimate L′ge is sometimes more convenient; as a rough estimate, averaging over angles of incidence between -π/2 and π/2 leads to the following:

L = LFS(S + d) + Le + L′ge + max(Γ1, Γ2)    (3.23)

Empirical values of L′ge are reported to be ≈ 5.7-6.4 dB [51] for residential areas; therefore we may use Lge ≈ 10 dB. For urban environments, COST 231 reports Lge ≈ 20 dB.

As for further interior loss, the COST model distinguishes between propagation through walls and propagation down corridors. Through ni interior walls of loss Li each: Γ1 = ni Li. In a corridor: Γ2 = α(d′ - 2)(1 - sinθ)², with an empirical propagation loss α = 0.6 dB/m (d′ in meters).
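A minimal Python sketch of equations (3.21)-(3.22) follows, assuming θ is defined so that normal incidence corresponds to θ = π/2 (making the (1 - sinθ)² term vanish); the function name is ours and the example values are taken loosely from table 3.10, for illustration only:

from math import log10, sin

def cost231_indoor_loss_db(f_mhz, s_m, d_m, theta_rad,
                           le_db, lge_db, n_walls, li_db, alpha=0.6):
    """COST 231 building penetration sketch (equations 3.21-3.22):
    free-space loss over the total path S+d, plus first-wall loss Le,
    an angle-of-incidence term, and the larger of wall or corridor loss."""
    l_fs = 32.4 + 20 * log10(f_mhz) + 20 * log10((s_m + d_m) / 1000.0)
    grazing = (1 - sin(theta_rad)) ** 2
    gamma1 = n_walls * li_db                 # through interior walls
    gamma2 = alpha * (d_m - 2) * grazing     # down a corridor
    return l_fs + le_db + lge_db * grazing + max(gamma1, gamma2)

# Example: 1.8 GHz, 200 m outdoor + 10 m indoor, normal incidence,
# Le = 7 dB, Lge = 20 dB, two 10 dB interior walls (table 3.10 values).
print(cost231_indoor_loss_db(1800, 200, 10, 1.5708, 7, 20, 2, 10))  # ~104 dB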


Figure 3.15: COST-231 indoor penetration loss model.

Typical values for the model reported in [29] and [51] are summarized in table 3.10.

Table 3.10: Penetration Loss into buildings, from COST-231 model.

Material              Frequency   Le    Lge    L′ge   Li
Wood, plaster         900 MHz     4     -      4      4
Concrete w/ windows   1.8 GHz     7     ≈20    6      10
Residential           2.5 GHz     6.2   ≈10    6.1    3


Figure 3.16: Penetration loss into residential buildings, cumulative density distribution for 700 MHz, 900 MHz, 1.9 GHz, and 5.8 GHz.

Measurement campaigns show that the distribution of building penetration loss is close to log-normal [22]: a Gaussian function is a good approximation of the cumulative distribution function (CDF) of indoor measurements in dB. The mean and standard deviation of indoor penetration loss vary with frequency, types of homes, and the environment around the homes. Variations also depend on the location within the building (near an outside wall, a window, or further inside). Finally, the angle of incidence with the outside wall also has a significant impact.

With that in mind, we consider that in-building penetration is a log-normal random variate independent of the large-scale shadowing. Therefore, the log-normal fading used for indoor propagation should be the normal random variable N(μi, σ² + σi²), combining the outdoor shadowing N(0, σ²) with the penetration loss N(μi, σi²). Both the median penetration loss and the modified excess margin should be taken into account in a new indoor link budget.

This has a significant impact on the total link budget. Consequently, indoor radio units need to somehow increase their link budgets, for instance with a plurality of antennas making use of diversity schemes or MIMO.

3.6.1 Residential Homes

In most residential and suburban environments, the surfaces involved are mostly made of glass, bricks, wood, and drywall. Penetration is often dominated by paths through windows and roofs; losses are relatively low and go up with frequency. Precise characterization of in-building penetration is difficult; a rough approximation with an average penetration loss μi around 10 to 15 dB and a standard deviation σi around 6 dB seems to be the norm in published studies. Table 3.11 and figure 3.17 summarize some published results for residential homes.

Table 3.11: Penetration loss into residential buildings: median loss (μi) and standard deviation (σi) from experimental results reported at various frequencies.

Source             Frequency (GHz)   μi (dB)   σi (dB)   Comments
Aguirre [48][52]   0.9               6.4       6.8       7 Boulder residences
                   1.9               11.6      7.0
                   5.9               16.1      9.0
Wells [49]         0.86              6.3       6         Sat. meas. into 5 homes
                   1.55              6.7       6
                   2.57              6.7       6
Durgin [66]        5.8               14.9      5.6       [66] Table 5 average
Martijn [50]       1.8               12.0      4.0       [50] Table 1
Oestges [51]       2.5               12.3      -         [51] Table 6 (avg. Le + L′ge)
Schwengler         1.9               12.0      6.0       Personal measurements
Schwengler [69]    5.8               14.7      5.5       [69] Table 2
Average            0.9               6.4       6.4
                   ≈2                10.3      6.3
                   5.8               13.8      6.7

Figure 3.17: In-building loss for residential buildings: measurement campaigns published for different frequencies, in different residential areas.

Figure 3.18: In-building loss for urban office buildings and high-rises: measurement campaigns published for different frequencies, in different urban areas.

Page 74: Wireless & Cellular Communications

74

3.6.2 Urban Environments

In dense urban areas, experiments show different trends, as illustrated in figure 3.18: some papers show penetration loss increasing with frequency [48][52]; some claim losses are independent of frequency [61][54]; others show a decrease with frequency [58][57][56]. Furthermore, the variations between buildings and types of environments nearly always exceed the frequency variations. These environments are dominated by reflections off metal-reinforced concrete and heavily reflective glass. In the case of high-rises, penetration also depends on the floor and on the height of neighboring buildings or clutter.

3.7 Large-Scale Shadowing

Path loss models give a median estimate for received power. That value varies greatly within a wavelength (small-scale variations) as well as with different obstructions and shadowing (large-scale variations).

3.7.1 Log-Normal Shadowing

As mentioned earlier, large-scale variations caused by the shadowing of obstacles are shown to follow a log-normal distribution [22][23][24]; they are usually incorporated into path loss estimates by the addition of a zero-mean Gaussian random variable, with a standard deviation σ often estimated from empirical measurements. Commonly accepted values for σ are between 6 dB and 12 dB. Measured values of σ seem to display a Gaussian distribution as well, and depend on: the radio frequency, the type of environment (rural, suburban, or urban), and base station and subscriber station heights. Reports may be found in the literature ([22],[63]–[69]) and are summarized in table 3.12.

Table 3.12: Path loss exponent (n) and log-normal shadowing standard deviation (σ, in dB) — summary of values for various frequencies reported for suburban or residential areas.

Source                Frequency (GHz)   Path Loss Exponent n   σ (dB)    Comments
Seidel [63]           0.9               2.8                    9.6       Suburban (Stuttgart)
Erceg [35]            1.9               4.0                    9.6       Terrain category B
Feuerstein [64]       1.9               2.6                    7.7       Medium antenna height
Abhayawardhana [65]   3.5               2.13                   6.7-10    [65] Tables 2, 3
Durgin [66]           5.8               2.93                   7.85      [66] Fig. 7, residential
Porter [67]           3.7               3.2                    9.5       Some denser urban
Rautiainen [68]       5.3               4.0                    6.1       [68] Figs. 3, 4
Schwengler [69]       5.8               2.0                    6.9       LOS
                      5.8               3.5                    9.5       NLOS
                      3.5               2.7                    11.7      Near LOS
Average               3.5-5.8           3.0                    8.7
                      0.9-1.9           3.1                    9.0
Approximation         1-6               n = 3.0                σ = 9.29 - 1.58 log(fGHz)
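As an illustration, the following sketch draws log-normal shadowing samples around a one-slope model, using the approximation row of table 3.12 for σ; the reference intercept pl0_db is a hypothetical placeholder:

import random
from math import log10

def shadowed_path_loss_db(d_m, f_ghz, pl0_db=20.0, d0_m=1.0, n=3.0):
    """One-slope path loss plus a zero-mean Gaussian shadowing term;
    sigma follows the table 3.12 approximation (values in dB)."""
    sigma = 9.29 - 1.58 * log10(f_ghz)
    median = pl0_db + 10 * n * log10(d_m / d0_m)
    return median + random.gauss(0.0, sigma)

samples = [shadowed_path_loss_db(500, 3.5) for _ in range(10000)]
print(sum(samples) / len(samples))  # close to the ~101 dB median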

3.7.2 Coverage Reliability

In order to maintain the signal strength above a threshold with a given probability despite fading, an additional margin is added to the link budget. The probability of service availability is sometimes determined at the edge of the cell, or, more usefully, over the entire cell surface. Jakes' equation [70] is often used for an estimate of such an excess fade margin.


Figure 3.19: Large-scale shadowing can be pictured as a zero-mean Gaussian random variable around the median received power. An additional fade margin or excess margin F is chosen to model received power levels at a greater probability than the median 50 percent of the time.

The assumption is that the shadowing statistic throughout the cell is log-normally distributed (i.e. values in dB are normally distributed):

p(x) = 1/(σ√(2π)) · exp(-(x - m)²/(2σ²))    (3.24)

The probability that x exceeds the threshold x0 (the receiver threshold that provides an acceptable signal) at a given radius R is

P(x > x0) = ∫_{x0}^{∞} p(x) dx    (3.25)

By integrating the probability density function from x0 to ∞, the edge reliability result is

P(x > x0) = 1/2 - 1/2 · erf((x0 - m)/(σ√2))    (3.26)


For a fade margin of zero (x0 - m = 0) at a given R, the error function (erf) also equals zero, resulting in 50% edge reliability.

Alternative representations of that formula sometimes make use of the complementary error function or the Q function:

P(x > x0) = (1/2) erfc((x0 - m)/(σ√2)) = Q((x0 - m)/σ)    (3.27)

Instead of the edge reliability, the reliability over the entire cell area is often more useful: the fraction of useful service area FA(R), within a circle of radius R, where the received signal strength from a radiating base-station antenna exceeds a threshold value x0; it is the integration of the probability over incremental areas, as shown below.

FA(R) = (1/(πR²)) ∫∫ P(x > x0) dA = (2/R²) ∫_{0}^{R} P(x > x0) r dr    (3.28)

With the assumption that the mean value of the signal strength, m, behaves according to an r^-n propagation law, then m = α - 10 n log10(r/R), where α (in dB) is a constant determined from the transmitter power, antenna heights and gains, and so on, and n is the propagation exponent value. Substituting m into the probability density function gives the area reliability (after substitution and integration by parts):

FA(R) = 1/2 [1 - erf(a) + exp((1 - 2ab)/b²) (1 - erf((1 - ab)/b))]    (3.29)

where

a = (x0 - α)/(σ√2),    b = (10 n log10 e)/(σ√2)    (3.30)

The additional margin computed in that way is added to the link budget, which therefore may be designed to represent any percentage of service reliability rather than a median power level. These considerations are examined further in section 3.8.
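A minimal Python sketch of equations (3.26) and (3.29)-(3.30) follows; function names are ours, and the example margin, σ, and exponent values are illustrative:

from math import erf, exp, sqrt, log10

def edge_reliability(fade_margin_db, sigma_db):
    """Probability that the signal exceeds threshold at the cell edge
    (equation 3.26 with x0 - m = -fade_margin)."""
    return 0.5 - 0.5 * erf(-fade_margin_db / (sigma_db * sqrt(2)))

def area_reliability(fade_margin_db, sigma_db, n):
    """Jakes' fraction of useful service area (equations 3.29-3.30)."""
    a = -fade_margin_db / (sigma_db * sqrt(2))
    b = 10 * n * log10(exp(1)) / (sigma_db * sqrt(2))
    return 0.5 * (1 - erf(a)
                  + exp((1 - 2 * a * b) / b**2) * (1 - erf((1 - a * b) / b)))

# Example: 8 dB shadowing, path loss exponent 3.5, 5 dB fade margin.
print(edge_reliability(5, 8))        # ~0.73 at the cell edge
print(area_reliability(5, 8, 3.5))   # ~0.89 over the whole cell area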

3.8 Link Budgets

What are link budgets and why are they useful? Link budgets are a convenient tool to compare different technologies and different systems. As we have seen earlier, coverage distances vary greatly with terrain conditions; link budgets allow for good system comparison while removing some of these variations. The assumptions in link budgets also give an estimate of system capacity.

We will examine link budgets for popular wireless access standards such as cdmaOne (IS-95), cdma2000 (IS-2000), EV-DO (IS-856) [74], and fixed WiMAX. We will also discuss coverage and capacity tradeoffs, increasing throughput vs. capacity or coverage, and soft handoff benefits and costs.

3.8.1 Important Parameters

Link budgets from different radio manufacturers are sometimes difficult to compare because they use different terms and definitions (without always clearly specifying them). Always compare them against a common definition, and try to identify the following parameters.

EIRP (or ERP):
Defines the maximum transmit power. Effective isotropic radiated power (EIRP) is the power radiated relative to a perfect isotropic antenna; it is obtained by adding available transmit power and antenna gain in dBi, and removing any loss (due to cable, inefficiency, angle away from boresight, etc.). Sometimes (though rarely) manufacturers give an equivalent parameter called effective radiated power (ERP), which is the power radiated relative to a dipole; it is obtained by adding available power to antenna gain in dBd (instead of dBi) and again removing any transmission loss.

ERP is smaller than EIRP by the amount of gain difference between an isotropic antenna and a dipole, that is 2.15 dB. (Indeed a dipole gain is 0 dBd=2.15 dBi, so any antenna gain G may be expressed in either unit with the simple conversion G(dBi) = G(dBd) + 2.15 dB, and EIRP=ERP+2.15 dB.)

SNR or Eb/N0:
A minimum signal-to-noise ratio (SNR) is required at the receiver to achieve a certain error probability given a signal modulation. The SNR is sometimes expressed in terms of energy per bit over noise power spectral density, noted Eb/N0 or Eb/Nt.

Receiver sensitivity:
On the receiving side some measure of sensitivity must be given: it is usually expressed in terms of a signal-to-noise ratio required above a certain noise floor.

Some care must be taken in defining the receiver sensitivity, which is the lowest power level at which the received signal may be decoded; it is usually defined as a power level above ambient noise and interference, and depends on several parameters such as bit rate, coding, and error rate. It uses the following parameters:


Boltzman’s constant: k = 1.38 ⋅ 10-23 J/K, Reference temperature T0 = 290 K, (63∘F, 17∘C), hence kT0 = -174 dBm/Hz, Thermal noise is the noise caused by components in the receiving chain (of

bandwidth B): N0 = kT0B dBm, Total noise is the sum of thermal noise and any other noise sources, usually noted

Nt. Hystorically it was sometimes identified with N0. Total noise and interference also adds to Nt any source of interferences, and is

usually noted I0 or It. Noise figure F of the receiver, the noise added by the receiver system. Receiver sensitivity is the minimal signal for successful decoding. If a certain SNR is

required the receiver sensitivity is simply: S = F ⋅Eb ⋅R, where Eb is the energy per user bit, and R is that user bit rate (e.g. 9.6 kb/s):

(3.31)

Other similar and equivalent expressions may be derived for the system sensitivity using the minimum SNR required in an RF channel of bandwidth B, in which case S = SNR · F · Nt, or:

S(dBm) = -174 + 10 log10(B) + SNR(dB) + F(dB)    (3.32)

(as seen in the link budget of figure 3.23).
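For instance, a sketch of equation (3.31); the example Eb/N0 and noise figure values are illustrative:

from math import log10

def sensitivity_dbm(rate_hz, ebn0_db, noise_figure_db):
    """Receiver sensitivity per equation (3.31): kT0 = -174 dBm/Hz,
    plus bit rate (or bandwidth), required Eb/N0, and noise figure."""
    return -174 + 10 * log10(rate_hz) + ebn0_db + noise_figure_db

# Example: 9.6 kb/s voice, 7 dB Eb/N0 target, 5 dB noise figure.
print(sensitivity_dbm(9600, 7, 5))  # ~ -122 dBm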

Excess margin:
Any number of reasons may require adding an excess margin to the link budget: increased service availability, in-building penetration, etc. When comparing radio systems, it is important to verify that the same excess margin conditions are used. In radio systems deep fades occur, and an excess margin is used to achieve a given success rate at the receiver; it is therefore referred to as a fade margin.

Maximum allowable path loss:
In the end the goal of the link budget is to derive the difference between transmitted radiated power and received power (possibly removing any margin required). That result is what quantifies the radio system performance: it provides a technology-independent value that can be used for coverage, capacity, or other estimates.

Different manufacturers present link budgets differently, and some analyses are required to reduce them to a common format. Still, transmitted EIRP, receiver sensitivity, excess margin, and maximum allowable path loss can usually be found.
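A minimal sketch of this bookkeeping follows; the numbers are illustrative, not from any specific vendor link budget:

def max_allowable_path_loss(eirp_dbm, sensitivity_dbm, rx_gain_dbi=0.0,
                            excess_margin_db=0.0):
    """Maximum allowable path loss: radiated power minus receiver
    sensitivity, plus receive antenna gain, minus any excess margin."""
    return eirp_dbm + rx_gain_dbi - sensitivity_dbm - excess_margin_db

# Example: 50 dBm EIRP base, -122 dBm mobile sensitivity, 8 dB fade margin.
print(max_allowable_path_loss(50, -122, excess_margin_db=8))  # 164 dB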


3.8.2 Reverse Link Budget

Equipment manufacturers typically claim a certain reverse link budget, which is studied by potential operator buyers in order to predict performance, coverage, and capacity, and to compare them with other equipment vendors. The reverse link lends itself well to a straightforward power budget, based on the mobile maximum transmit power and the base station sensitivity level; the industry commonly admits that reverse link budgets are the basis for radio design, and the forward link is studied subsequently, simply to verify that it provides enough resources to be balanced with the reverse link.

3.8.3 Forward Link Budget

Equipment manufacturers sometimes do not provide forward link budgets, and argue that systems are usually reverse-link limited. Of course systems should be balanced, and the forward link budget, which is a power allocation among devices within the cell, should be considered as well.

Unlike in the reverse link, the entire power is not necessarily allocated to one remote client device: either a portion of the orthogonal channels (CDMA or OFDMA) is allocated to it, or a certain percentage of the time (as in TDMA, or IS-856 EV-DO). The link budget should reflect that fact, as shown in the forward link budget figures of this section. (The details of the derivation of these percentages are not trivial and depend heavily on standards, system efficiency, and supplier implementations.)

3.8.4 Licensed vs. Unlicensed Radios

Whenever possible, operators use licensed spectrum for wireless communications. For instance, CDMA systems are commonly used at PCS frequencies (1.9 GHz), and fixed WiMAX systems operate at 3.5 to 3.7 GHz. We summarize parameters for these licensed radio systems with the link budgets shown in figures 3.20 to 3.22.

Link budgets in unlicensed bands are similar to the above but are usually limited by a lower maximum allowed EIRP set by government regulations (FCC in the US) as shown in a separate table on figure 3.23.


Figure 3.20: Reverse link budgets for cdmaOne, cdma2000, and IS-856 (1.9 GHz).

Figure 3.21: Forward link budgets for cdmaOne, cdma2000, and IS-856 (1.9 GHz).

Figure 3.22: Reverse link budget for fixed WiMAX (3.5 GHz).


Figure 3.23: Reverse link budget for unlicensed fixed WiMAX (5.8 GHz).

3.9 Small Scale Fading

We’ve seen in §3.3 the impact of multiple rays on propagation models: this effect of multipath causes deep fades within small distances and is referred to as small-scale fading. Another important yet different cause of fading is that of small frequency variations such as doppler effect. Both of these small-scale fading effects are studied in this section. The section presents a summary of small-scale fading; for further details refer to [1] chapter 5.

3.9.1 Multipath Fading

Multipath fading is significant for both mobile and fixed wireless systems. Intuitively, that type of fading varies with the surrounding scatterers, which reflect the wavefront differently between transmitter and receiver. Practically, it is very important to quantify that aspect of the propagation environment, and even to tailor the standard to perform well in such an environment: for instance, we will see later that the length of a transmitted symbol depends on the multipath situation in which it has to perform well.

In the time domain, multipath parameters can be seen as the spread of the arriving waves. In the frequency domain, the concept is less intuitive and relates to a coherence bandwidth, that is, the width of spectrum attenuated by a fade. The main parameters are summarized in table 3.13.


Table 3.13: Multipath fading parameters to measure and quantify.

Domain             Multipath Parameter                        Symbol
Time domain        Channel impulse response                   h(t)
                   Mean excess delay                          τ̄
                   RMS delay spread                           στ
                   Excess delay spread (for X dB threshold)   τX
Frequency domain   Channel transfer function                  H(f)
                   Coherence bandwidth (90%)                  Bc ≈ 1/(50στ)
                   Coherence bandwidth (50%)                  Bc ≈ 1/(5στ)

Flat or frequency-selective fading: Depending on the values of the above parameters and how they compare to the transmitted symbol rate, the wireless channel will have flat fading (over the entire bandwidth used) or frequency-selective fading. This is of course a frequency domain interpretation describing what happens to the signal spectrum: it is either faded over its entire extent, or selectively over only a portion of it.

High or low delay spread: Again depending on the values of the above parameters and how they compare to the length of a transmitted symbol, the wireless channel is said to have high delay spread (heavy multipath) or low delay spread (low multipath). This simply says in the time domain what flat vs. frequency-selective fading said in the frequency domain.
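A small sketch of the table 3.13 rules of thumb (the 1 μs delay spread is an illustrative outdoor value):

def coherence_bw_hz(rms_delay_spread_s, corr=0.9):
    """Coherence bandwidth from RMS delay spread (table 3.13):
    Bc ~ 1/(50 sigma_tau) for 90% correlation, 1/(5 sigma_tau) for 50%."""
    return 1.0 / ((50 if corr >= 0.9 else 5) * rms_delay_spread_s)

sigma_tau = 1e-6  # 1 microsecond RMS delay spread
print(coherence_bw_hz(sigma_tau, corr=0.9))  # 20 kHz (90% correlation)
print(coherence_bw_hz(sigma_tau, corr=0.5))  # 200 kHz (50% correlation)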


3.9.2 Time Dispersion

Another aspect of wireless communication, different from the above, is the concept of how fast things change in the wireless channel; that aspect is referred to as time dispersion, or the Doppler effect.

In the time domain, these parameters describe how fast the wireless channel is changing; that aspect is obviously important for estimating the quality of communication: for instance, if the channel has a certain property, how long can we count on it? This defines, for instance, how often training sequences should be sent to estimate the wireless channel.

In the frequency domain the effect is best described by the Doppler spread: it describes how fast the transmitter, receiver, and scatterers in between are moving; of course, the faster they move, the faster the wireless channel changes.

Table 3.14: Doppler fading parameters to measure and quantify.

Domain             Doppler Parameter           Symbol
Frequency domain   Channel transfer function   H(f)
                   Doppler spread              BD = 2fm
Time domain        Channel impulse response    h(t)
                   Coherence time (50%)        Tc ≈ 9/(16πfm)
                   Practical rule of thumb     Tc ≈ 0.423/fm


Fast or slow fading: Depending on the values of these parameters and how they compare to the length of a transmitted symbol, the wireless channel will have fast fading (the channel changes faster than a transmitted symbol) or slow fading (one or several transmitted symbols fit within a fade). This is of course a time domain interpretation describing how the fading time compares to the transmitted symbol time.

High or low Doppler: Again depending on the values of the above parameters and how they compare to the transmitted symbol rate, the wireless channel is said to have high or low Doppler spread. This simply expresses in the frequency domain what fast vs. slow fading said in the time domain. It is important to understand the non-intuitive equivalence: for a given transmission rate, slow fading means long fades, meaning high coherence time, and therefore low Doppler spread.
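A small sketch of the table 3.14 parameters; the mobile speed and carrier frequency are illustrative:

def doppler_parameters(speed_m_s, f_carrier_hz, c=3e8):
    """Maximum Doppler shift fm = v f / c, Doppler spread BD = 2 fm,
    and the practical coherence time Tc ~ 0.423/fm (table 3.14)."""
    fm = speed_m_s * f_carrier_hz / c
    return fm, 2 * fm, 0.423 / fm

# Example: 100 km/h mobile at 1.9 GHz.
fm, bd, tc = doppler_parameters(100 / 3.6, 1.9e9)
print(f"fm = {fm:.0f} Hz, BD = {bd:.0f} Hz, Tc = {tc*1e3:.1f} ms")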

3.9.3 Fading summary

Wireless engineers might talk about fast fading meaning all of the above types of small-scale fading; this usage should be avoided; instead always refer to it as small-scale fading, as opposed to large-scale shadowing.

It is important to reiterate the difference between the types of fading presented in the previous two sections, and to understand that the characteristics presented in these two sections are completely uncorrelated: a wireless channel can be fast or slow, and flat or frequency-selective. In either case, the parameters above have to be compared to the transmitted symbol period (in the time domain) or to the data symbol baseband frequency (in the frequency domain).

To summarize: remember the following.

The amount of multipath is related to the delay spread στ and inversely to the coherence bandwidth Bc; it produces flat vs. frequency-selective fades.

The variability of the transmission medium is related to the coherence time Tc and inversely to the Doppler spread BD; it produces fast or slow fades.


Figure 3.24: Summary of small-scale parameters: multipath parameters and time variance (Doppler) parameters are different concepts; but each has a parameter interpretation in the frequency domain or time domain.

3.9.4 Fading distribution

Small-scale fading is caused by different reflections of the signal (delayed, frequency shifted, alternately constructive or destructive) and varies fast over time and space. It is usually taken into account by a random variable with a certain probability distribution.

We also derive estimation methods in order to fit empirical data sets to specific distributions (see their graphs on figure 3.25).

Rayleigh: Rayleigh fading channels are widely used in theoretical approaches as well as in empirical urban studies. They are generally accepted to model multipath environments with no direct line of sight (LOS). Given two zero-mean Gaussian random variables x and y, representing by the central limit theorem the sum of a large number of multipaths (practically more than six), it is shown that the signal envelope or amplitude α = √(x² + y²) is Rayleigh distributed [7]. The channel amplitude follows the Rayleigh distribution:

pα(α) = (2α/Ω) exp(-α²/Ω),  α ≥ 0    (3.33)

where Ω = E(α²) is the mean square value of the random variate α.

The signal power is then related to α². When the noise spectral density N0 is assumed to be one-sided Gaussian, the SNR has an exponential distribution [71]. Let us use as a measure of signal-to-noise ratio γ = α²Es/N0:


pγ(γ) = λ exp(-λγ),  γ ≥ 0    (3.34)

where λ = 1/E(γ).

We can estimate the parameter λ for the exponential distribution in some way, such as by matching the first two moments with sample data {Xi}, of mean mX and standard deviation sX:

λ̂ = 1/sX,  μ̂ = mX - sX    (3.35)

Rice: The amplitude of a fading channel may have a dominant component; the faded amplitude is now given by α = √((x + s)² + y²), where s is the amplitude of the dominant component. Its probability distribution is given by the Ricean distribution:

pα(α) = (2α(1+K)/Ω) exp(-K - (1+K)α²/Ω) I0(2α√(K(1+K)/Ω)),  α ≥ 0    (3.36)

where I0(z) is the modified Bessel function of the first kind of order zero. That channel model offers the advantage of having an additional parameter K (the ratio of dominant to scattered power) that has a physical meaning; but the Bessel function makes it computationally difficult, and it has no straightforward form for its power or SNR.

Nakagami-m: Similarly, a Nakagami-m fading channel is often used for fading channels:

pα(α) = (2m^m α^(2m-1)/(Γ(m) Ω^m)) exp(-mα²/Ω),  α ≥ 0    (3.37)

The SNR then follows the distribution [7][71]:

pγ(γ) = (m^m γ^(m-1)/(Γ(m) γ̄^m)) exp(-mγ/γ̄),  γ ≥ 0    (3.38)

which is gamma distributed, with mean γ̄ = E(γ). The problem of estimating parameters is more complicated in this case (as discussed in [72], ch. 17.7). Still, moment matching estimates lead to:

m̂ = (mX/sX)²,  γ̄ ≈ mX    (3.39)


Gaussian: Although not usually used for fading, the normal (or Gaussian) distribution is given for comparison:

p(x) = 1/(σ√(2π)) exp(-(x - μ)²/(2σ²))    (3.40)

for which we may use the following simple estimates:

μ̂ = (1/N) Σi Xi = mX,  σ̂² = (1/N) Σi (Xi - mX)²    (3.41)

The estimate for μ̂ is unbiased and corresponds to both moment matching and maximum likelihood; with a large enough sample size, the estimate for σ̂, although biased, is usually a good estimate.

Log-normal: There is a general consensus that large-scale fading may be approximated by lognormal distributions [22]. Its probability distribution is:

p(x) = 1/(xσ√(2π)) exp(-(ln x - μ)²/(2σ²)),  x > 0    (3.42)

for which the best estimates are simply obtained by the change of variable Y = ln X and referring to the Gaussian case. A more complex approach would be to investigate Z = ln(X - Θ), but these estimations are more difficult and in many cases rather inaccurate ([72], ch. 14).

Weibull: The flexibility and relative simplicity of the Weibull distribution may also be convenient and lead to good data fits [73]:

p(x) = (a/b)(x/b)^(a-1) exp(-(x/b)^a),  x ≥ 0    (3.43)

To estimate its parameters, the simplest approach is to follow Weibull's method based on the first two moments about the smallest sample value ([72], ch. 21).

3.9.5 Case Study: Dropped Calls and Setup Failures

The above probability distributions are useful for modeling fading in wireless channels. In some cases empirical measurements are taken, and identifying the fading distribution may be a difficult problem. We look in this section at a specific case of fading in urban cores, and measure different events.


Test drives are conducted throughout major US cities, in which mobile handsets continually place calls on several major cellular service providers. For that purpose a van is outfitted with several handsets, each cabled to a roof antenna; these antennas are placed as far apart as physically possible to limit interference. A system is set up to place a 90-second call on every handset, then remain idle for 30 seconds, and repeat the cycle. A wealth of data may be analyzed and compared; in particular we focus presently on the occurrence of dropped calls and call setup failures.

Dropped Calls:
We peg a dropped call every time a handset is in active talk mode and for some reason loses that call. In such a case, the handset remains in idle mode for 30 seconds before attempting to place another call.

Setup Failures:
We peg a setup failure every time an idle handset attempts to originate a call to a fixed test number and fails. Such an attempt may fail for many reasons including trunk blocking, network resource allocation failures, RF resource failures, handoff failures, etc.

In this example we collect the rates of dropped calls and setup failures for major cities and service providers by driving between 1000 and 1500 miles (depending on the size of the city) on every major road and a portion of secondary roads. The data collected for dropped calls and setup failures is summarized in table 3.15.

Table 3.15: Measured dropped calls and setup failures in several major US cities on different cellular networks.

Moment of sample     Dropped Calls    Setup Failures
Mean (mX)            1.32%            2.32%
Standard dev. (sX)   1.44%            2.67%


Moment matching for the above probability density functions leads to table 3.16.

Table 3.16: Moment matching for measured dropped calls and setup failures in several major US cities on different cellular networks.

Estimated      Parameters for               Parameters for
Distribution   Dropped Calls                Setup Failures
Uniform        b - a = 0.0500               b - a = 0.0925
Exponential    λ = 69.2761, μ = -0.0012     λ = 37.4559, μ = -0.0035
Gamma          a = 0.8383, b = 0.0158       a = 0.7542, b = 0.0307
Gaussian       μ = 0.0132, σ = 0.0144       μ = 0.0232, σ = 0.0267
Lognormal      μ = 0.0130, σ = 0.0140       μ = 0.0230, σ = 0.0255
Weibull        a = 0.38, b = 0.0022         a = 0.53, b = 0.0150

A simple error measure may be used to quantify the differences between the measured data and the different probability distribution functions covered above. The best fit (minimal error) is that of the gamma distribution; graphical representations also show a good fit.
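As a check, the following sketch reproduces the exponential and gamma rows of table 3.16 from the table 3.15 dropped-call moments (the results agree with the table to within rounding of the sample statistics):

m_x, s_x = 0.0132, 0.0144  # dropped-call sample mean and std (table 3.15)

# Shifted exponential, matching the first two moments (equation 3.35):
lam, mu = 1 / s_x, m_x - s_x
# Gamma, shape a and scale b, from mean = a*b and variance = a*b^2:
a, b = (m_x / s_x) ** 2, s_x ** 2 / m_x

print(f"exponential: lambda = {lam:.2f}, mu = {mu:.4f}")  # ~69.4, -0.0012
print(f"gamma: a = {a:.3f}, b = {b:.4f}")                 # ~0.84, 0.0157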


Figure 3.25: Dropped call rates and various distributions estimated with moment matching; the best fit is that of the gamma distribution.

3.10 Homework

1. At the beginning of section 3.2, we start to derive a free-space model from Friis' equation. (a) Rederive in detail Friis' formula (3.2). (b) Assuming in the above that Gt = Gr = 1 (= 0 dBi), derive (3.3).

2. Find the paper [35] by V. Erceg et al., "An Empirically Based Path Loss Model for Wireless Channels in Suburban Environments", IEEE Journal on Selected Areas in Communications, Vol. 17, No. 7, July 1999. This popular paper for PCS propagation modelling and design deserves some attention. Read it and answer the following questions:

a. Summarize the data collection campaign methods and size.
b. Summarize key findings.
c. A key finding is that path loss exponent variations are Gaussian; how is that proven in the paper?

3. Plot path loss prediction versus distance and log distance for a cellular system you are designing with the following assumptions: PCS frequency (1900 MHz), base height 20 m, mobile 2 m, suburban area, flat terrain with moderate tree density.

a. Use and compare the 3 following models: free space, COST 231-Hata, and Erceg (use a median path loss, i.e. x = y = z = 0).
b. Using a typical 140 dB maximum allowable path loss for a CDMA voice system, what is the range (cell radius) according to these models?

4. Repeat the above problem with the unlicensed frequency 5.8 GHz and a link budget of 120 dB. Compare. (Use the same models, including the COST 231-Hata and Erceg models, even though the frequency exceeds their domain of validity.)

5. Compare the received power levels of the free-space (n = 2) and 2-ray models for a PCS signal (1900 MHz).

a. Write a program (in any language of your choice) to plot a graph of power level versus log of distance (from 100 m to 10 km). Submit code with comments and explanations, and a resulting figure.
b. What cell site radius would be ideal for a system design? Why?

6. Plot and compare on a same graph the propagation estimates for a radio system at 2.4 GHz and another at 5.8 GHz (all other parameters being equal); use hb = 8 m, hm = 2 m; use a) the two-ray model from §3.3.1, b) the 6-ray model from §3.3.3. Point out the main differences.

7. Assume an environment where lognormal shadowing is defined by standard deviation σ = 6 dB. Using §3.7.2, answer the following questions:

a. What fade margin is required to achieve 50% edge reliability?
b. What fade margin is required to achieve 90% edge reliability?
c. Further assume Hata's model for propagation, which has path loss exponent n = 4.49 - 0.655 log(hBTS(m)), and a 20 m high average base station. What is the usable surface area reliability with the previous two fade margins?

8. In this problem we consider some simple handoff rules and derive the impact of handoff on system capacity.

a. A mobile is moving from base station 1 (B1) to base station 2 (B2) at speed v. We assume for simplicity that the system has no shadowing and can compensate for all fast fading, so the power received at the base station can be written: Pr = P0 - 10 n log10(d/d0), where P0 and d0 are reference values (P0 = 0 dBm, d0 = 1 m), and n = 3.6 is the path loss exponent.

Assume that a call is dropped if the power received by all base stations is below a minimal power Pmin = -110 dBm. Assume that the system initiates handoff when base station B1 power drops below PHO = -108 dBm. The time required to complete the handoff is Δt = 4 s.

Question: above what speed vmax of the mobile would the call be dropped?

b. With the above distances, in what percentage of the cell coverage is a mobile in a handoff situation? (Simply assume circular cells.)

9. The popularity of Wi-Fi systems is obvious for small residential or small office applications; but the standard is getting so popular that attempts are being made at widening its use to much larger areas, almost like a cellular system. This problem aims at applying a few concepts seen so far to study the advantages and disadvantages of such possible wide-area Wi-Fi systems.

a. Some cities are trying to cover entire urban areas with Wi-Fi mesh. What are the main reasons why a service provider would consider using Wi-Fi mesh for major coverage areas rather than a 3G system (like EV-DO)?
b. What are the main disadvantages of a wide-coverage wireless system using Wi-Fi? (Consider unlicensed aspects, frequency reuse, link budgets, and other parameters.)
c. An 802.11a system proposes to use 20 MHz channels (TDD) in the new 5.4 GHz frequency block. What advantages / disadvantages does this system have over a 2.4 GHz Wi-Fi system?
d. Assume you are allowed 4 Watt EIRP for a Wi-Fi system, and assume a typical receiver sensitivity of -90 dBm for a 20 MHz Wi-Fi channel. Try to build a link budget for such a typical system. Give your answer in a typical link budget table; use any estimates that make sense to you – justify or discuss where you are unsure.


e. What is the maximum allowable path loss for good outdoor coverage? (Use 90% coverage reliability.)
f. How many access points per square mile would you recommend to provide good outdoor coverage? (Assume a simple one-slope model with path loss exponent n = 3.)

10. We consider a very simple atmospheric attenuation model on a link from an earth station to a LEO satellite orbiting the earth. The link works at 300 GHz, and because satellites are not geostationary, the angle of elevation from the ground to the satellite varies continuously. The total atmospheric attenuation Az at zenith (that is, at an elevation angle of 90 degrees) can be estimated from figure 1.3 in chapter 1, assuming a 2 km thick atmosphere.

When the elevation angle α changes, the slant path attenuation varies since the thickness of the atmosphere traversed by the radio link increases. That slant path attenuation is usually approximated by the cosecant law:

A(α) = Az csc(α) = Az / sin(α)

(Values are not in dB in the above equation.)

a. Estimate total link attenuation for the satellite at zenith.
b. Estimate total link attenuation at α = 5 degrees (near the horizon).
c. Assume that one satellite is always visible in the sky (between 5 and 90 degrees of elevation), and that the system can hand over from one satellite to another. What link budget variation does the system have to deal with to maintain a link continuously?
d. Does the link budget variation from the previous question change if the frequency of operation is 20 GHz?

11. Wireless satellite systems sometimes use a constellation of LEO satellites revolving around the earth. We will focus in this problem on the Doppler effect analysis of LEO satellite systems.

Let us consider a system like Globalstar, operating at 1.6 GHz, with satellites 1400 km above ground, and with a rotation period of approximately 2 hours. (Also remember the average earth radius is approximately 6350 km.)

a. What are LEO satellites? What are their main advantages? What are their main disadvantages?
b. Calculate the speed of such a satellite. (In the remainder we assume this speed constant, and we assume that the rotation of the earth is much slower and therefore negligible.)
c. We now suppose that we have an earth station (such as a satellite handset) placed at location R. When the satellite appears at the horizon, at H1 on figure 3.26, what is the angle α?
d. What is the Doppler shift when the satellite appears at the horizon (H1)? (Note: for each of these Doppler shift questions, specify whether the shift is positive or negative.)
e. What is the Doppler shift at zenith (Z)?
f. What is the Doppler shift when the satellite disappears at the horizon (H2)?
g. Conclude on the total amount of Doppler spread that such a LEO satellite system has to handle.
h. If we now consider Iridium instead of Globalstar, the system operates at 1.6 GHz also, with satellites 800 km above ground, moving at 27,000 km/hr. What is that system's total Doppler spread?

Figure 3.26: Doppler geometry for LEO satellites.

12. We examine the advantages of inserting a tower-top low-noise amplifier (LNA) at a cell site.

a. For a wireless base station, the noise floor of a system may be expressed as kTeffB, where Teff is an effective temperature that may be calculated as follows (be careful: values in the formulas are not in dB). For a conventional CDMA base station:

Teff = Tant + (Lc - 1)Tc + Lc TBTS    (3.44)

b. Assume the following values: antenna temperature Tant = 50 K; cable effective temperature and loss Tc = 300 K, Lc = 2 dB; BTS effective temperature TBTS = 1200 K (equivalent to a 7 dB noise figure); cdmaOne channel width B = 1.25 MHz.

What is the noise floor of that conventional cdmaOne base station?

c. In some cases, that noise floor is considered too high, and one tries to reduce it by inserting a low-noise amplifier. For a base station with an added tower-top low-noise amplifier (LNA), the effective temperature of the system is calculated by:

Teff = Tant + TLNA + [(Lc - 1)Tc + Lc TBTS] / GLNA    (3.45)

d. Assume the same values as above, and: LNA effective temperature TLNA = 150 K, LNA gain GLNA = 20 dB.

What is the noise floor of that system with the LNA?

e. What is the sensitivity improvement (noise floor improvement) of the receiving system when the LNA is inserted? Give the result in dB.
f. This improvement translates into a direct gain in the total link budget. That gain may be seen as an improvement in coverage. Assume a 1/r³ propagation model (that is, a one-slope model with path loss exponent n = 3); what is the cell radius improvement?
g. Yet another use may be in terms of service reliability. If we assume log-normal shadowing of σ = 8 dB, and if we have a 6 dB fade margin before installing the LNA, what is the edge coverage before installing the LNA?
h. What is the improvement in edge coverage probability achieved by the LNA? Give the result P(LNA) - P(no LNA).

Chapter 4 Practical Aspects of Wireless Systems

Practical aspects of deploying a wireless network include many efforts beyond radio and network planning: real estate considerations for the choice of cell site locations and dealing with local regulators are often part of a complex process, which is presented in this chapter.

We also review several ancillary yet important aspects of wireless systems: methods to synchronize cells, security aspects, and finally RF radiation levels and health concerns.

4.1 Cell Site Considerations

Wireless cellular networks require building very many cell sites, both to provide good coverage and to provide added capacity where needed.

4.1.1 Real Estate

Real estate is a major effort behind the choice of cell site placement and build-out. Several options are available to operators to gain access to tower or roof space. Owned property is of course an option: operators can use existing land or buildings (such as central offices) as cell sites. In many cases, however, that needs to be augmented by some leased properties where the operator builds a tower; in these cases a recurring lease cost is incurred. In some cases public right-of-way can be used to access public streets or even existing poles.

Figure 4.1: Some landowners combine revenue from several service providers by building several towers on their properties. Tower owners may also allow several service providers on one tower. This location shows a combination. In these cases interference studies have to be performed before allowing multiple antennas.

Co-location on an existing tower is also an option: tower companies or competitors may have towers with some spare space; in these cases the highest spot on the tower is usually taken, but a lower location might be an option. Structural integrity studies and interference studies are usually performed. If co-location on the same tower is not possible, building another tower nearby can be an alternative. Dealing with tower companies can be an easier process than dealing with many different land owners, but usually comes at a higher lease price. Dealing with competing operators can bring other business issues, but sometimes the reciprocal needs of competitors can lead to good mutual agreements.

Location choice is one of the most important decisions: premium locations come at a price; sometimes more cells around a prime area (like a mall or airport) may be cheaper than the on-site lease cost.

Height vs. number is also an important design choice: higher macrocells cover more terrain but cost more; lower sites (minicells, microcells, nanocells, picocells) are cheaper, but more are needed (including backhaul). Now even femtocells are proposed, which are cheap (like a LAN access point) and may be connected to the network over a residential backhaul line (like DSL or cable modem).

Many different types of towers are available to operators according to their needs and for different heights, equipment weights, and wind conditions. Operators usually choose the cheapest possible among the following types of structures: large lattice towers, monopoles (galvanized steel, wood), or existing structures such as rooftops, water towers, and utility poles (electricity, telephone).

High towers provide the most coverage, and typically allow for a large tower-top structure that can provide good horizontal diversity and achieve the best possible gains. For these towers, a fairly large enclosed area is usually placed nearby with large base station cabinets and power backup cabinets; remote locations may need several hours of power backup to account for technician travel time. Thicker coax cables are needed for higher towers, since longer cables cause more cable loss and noise on the receiving end, and thicker cables reduce that cable loss. Tower-top low-noise amplifiers are an option (but with possible maintenance issues).

Densely populated areas and cities usually no longer allow large towers, and smaller monopoles are the typical alternative, with smaller footprints and smaller equipment. Co-location is sometimes enforced by cities; European regulations even attempted to force antenna sharing between operators.

Figure 4.2: Different cell site types may be hidden in structures more pleasing to landlords or local regulators.


4.1.2 Building Process

The design choices and processes involved in building all these cell sites in a time frame consistent with operator needs are sometimes complex.

Site location:
Where to place a tower is of course the main concern: capacity or coverage needs dictate the general area where a new site should be built, but its exact location depends on many other practical considerations such as lease opportunities, poles in place, and prices.

A site survey is usually necessary to examine practical details. Finally, when a location is chosen, local government hearings may be required, zoning requirements may be a restriction, and proper permits must be obtained.

Site Preparation:
If all goes well with local regulations, a contractor can be chosen to build the new site, and site preparation starts. Soil analysis is conducted to assess shifting in the long term. Sometimes National Environmental Policy Act (NEPA) analyses are required: the presence of toxic contaminants in the soil can prevent digging and construction. It is important for an operator to complete these analyses before starting any construction work, or the operator is at risk of having to clean up any dangerous or toxic material.

Structure:
A structural study (height, weight, wind) must be performed for the tower or rooftop. When a tower is erected, overhead trees and power lines can be an obstacle. FAA regulations must also be followed: for towers above 200 ft or near airports, filing, lighting, and marking are required.

Electrical work and grounding are important considerations as well: cellular towers by nature exceed the surrounding structures, and as such they are likely to be hit by lightning. Lightning rods with proper grounding are nearly always required.

Power must be brought to the tower for the wireless equipment, batteries are usually needed, and in some cases lighting is required for safety reasons. In many cases local power companies can bring metered power (for a fee and a recurring cost) but in some remote areas, power considerations can be an issue.

Construction:
Concrete pads are required for large towers. A cheaper alternative is to use wooden poles simply placed in a hole with a large crane (hole depth usually amounts to 20% of total pole height). In either case, a crane (or sometimes a helicopter) and crew must be hired for several days. Antennas, and coaxial cables from the base station to the antennas, have to be installed.

Concrete pads are also poured to place electronic and power cabinets. Fenced (and padlocked) enclosures are usually built when a large amount of equipment is placed at the site; in some cases a chain-link enclosure is enough, or for esthetic reasons a wall or a wooden privacy fence may be mandated by local government or land owner to hide the equipment cabinetry.

Coordination & Project management:
Site preparation and construction schedules must be coordinated with local regulations, public notices, permits, etc. Cost tracking and risk management must be regularly evaluated. In certain cases, land owners may change their minds and make additional costly requests; an operator has to know when any unexpected delays or costs may exceed the anticipated value of a new site, and in some cases the investments spent towards a potential new site are simply abandoned.

Backhaul:
Wired backhaul means monthly recurring charges; wireless backhaul costs more upfront; other options, like mesh or repeaters, may be cheaper when lower capacity is required.

Voice backhaul was, and still is, often provided by wired T1s ordered from local telcos; although not always cheap, they are available nearly everywhere. Data backhaul initially had (and in many cases still has) no choice but to rely on additional T1s. 3G and 4G data backhaul bring additional issues: throughput rates are increasing and cell sizes are decreasing; wireless carriers typically prefer highly reliable and scalable fiber backhaul, which is expensive to build out.

Decommissioning:
In rare cases, a tower has to be decommissioned. Various reasons may cause that: land owners may no longer wish to renew a lease (or may try to significantly increase leasing fees). Decommissioning is costly as well; in most cases leasing agreements have a clause stipulating that, if for any reason the operator leaves the property, the site must be returned to its original state. Deconstruction costs, including the removal of the tower, concrete pads, and fencing, and possible landscaping efforts, have to be taken into account.

While early cellular systems relied on very large umbrella cells, the recent tendency is to opt for more sites, but cheaper and smaller ones. There are several reasons for that tendency: historically, wireless networks were built to cover highways and commuter traffic, and now more pedestrian and indoor coverage is required; recent wireless services focus on increased data rates, which require higher modulation and lower link budgets, and therefore smaller cells; and finally the cost of equipment is decreasing, thus allowing many more sites (almost like LAN access points) rather than very large and expensive base stations that need to cover several square miles.

4.1.3 Cost of Build-Out

Wireless networks require a high capital investment upfront to create the necessary coverage before customers can benefit from satisfactory service. That large cost covers planning and building cell sites, as well as the recurring costs of leases, backhaul, and general operations.

General estimates of course vary with location, negotiation, and time; rough orders of magnitude can be outlined as follows:

One-time costs for a new cell site vary from k$100 for small towers or monopoles to k$300 for larger towers.

Recurring monthly costs vary from $1000s/mo down to $100s/mo depending on the real estate and location. Some utility poles can be as low as $10s/mo; rights-of-way may even be free.

Construction costs include a crane and crew at approximately k$10/day.

The overall process for building a new tower may last 6 months to a year.

Table 4.1: City-wide wireless roll-out cost for a major city (2 GHz).

One city roll-out       Number    Cost (k$)    Total (M$)
Major sites             300       200          60
Minor sites             50        100          5
Addl. coverage          40        200          8
Equipment               390       60           23
Site prep and cabinet   390       20           8
Wireless backhaul       30%       50           6
Total roll-out          390                    110

Recurring cost                    (k$/yr)      (M$/yr)
Property leases                   9.6          3.7
Wired backhaul                    10.8         4.2
Maintenance, repair                            1
Yearly total                                   9

It is also important not to underestimate the recurring costs of operation: leases, backhaul, electricity, repair, and maintenance are expenses that increase with the number of sites deployed.

Of course other expenses add to the above for other business needs such as: a network operation center (NOC) to monitor alarms, schedule and dispatch repairs, load new software, schedule upgrades, etc. A fleet of engineers is also required for constant network optimization (RF, network, switching, signaling, and more).

The above estimates vary greatly with times and locations. Many factors have a high impact on these estimates: the frequency of operation has a major impact (lower frequencies require fewer cells), and the amount of spectrum available has an impact as capacity considerations appear.

The cost of rolling out networks on a large scale (nationwide) is difficult to estimate reliably. Major carriers have recently announced costs of upgrading networks to higher data standards: Sprint and Verizon announced in 2006 investments in excess of $1.5 billion for EV-DO upgrades to their existing networks. Sprint-Nextel announced plans for $3 billion in nationwide 4G/WiMAX rollout (plus backhaul), and later modified that estimate to $2.5 billion including backhaul (presumably due to the added participation of Clearwire).

700 MHz deployments are also raising questions: coverage is easier at that frequency and may lower the cost and time of achieving nationwide coverage; estimates vary from as low as $2 billion to as much as $12 billion to build out a nationwide 700 MHz system (not including the cost of the spectrum).

4.2 Synchronization and GPS

Some cellular standards, such as CDMA, require a very accurate timing reference at the cell sites within a network. Although any timing synchronization source is allowed, the Global Positioning System (GPS) is often used to provide that timing. GPS is also used for mobile location determination, such as in handsets for emergency calls (911) or for location-based services (see section 10.5).

The GPS system is a wonderful tool and merits some attention and a few explanations. The GPS network was built for the Department of Defense and comprises a constellation of (initially at least 24) satellites orbiting the earth (twice a day) at an 11,000-mile orbit. Satellites are grouped in trajectory planes (A to F), and are numbered within a plane (A1 to A4).


Figure 4.3: GPS satellite constellation of at least 24 satellites in their different orbital planes A to F; satellites in a plane are numbered, for instance A1 to A4 in the A-plane.

Each satellite has a very precise time source from caesium and rubidium atomic clocks and transmits a fast pseudo-random sequence; these sequences can be correlated to determine very accurate time offsets between the receiver location and each satellite. Each time offset corresponds to a distance to a satellite, which allows for position determination.

The legacy GPS signals are transmitted on two frequencies known as L1 (1575.42 MHz) and L2 (1227.6 MHz). Each satellite uses a different ranging code, or spreading sequence; the ranging codes have very low cross-correlation with one another. Each satellite uses two different ranging codes: the coarse acquisition (C/A) code and the precision (P(Y)) code. The C/A code is a short (1023-chip) code modulated onto the L1 frequency, a 1.023 Mcps pseudo-random sequence repeated every millisecond. The P(Y) code is a longer code (6.1871 × 10^12 chips) modulated onto both L1 and L2; it is a 10.23 Mcps sequence used to improve accuracy. Signal strengths reaching the earth are generally in the -127 to -130 dBm range. (For more details see www.gps.gov.)

Simple geometric considerations show that four spheres are needed for an unambiguous location: three spheres determine two possible locations, which is often sufficient since one of the two candidates can usually be discarded (the fourth sphere being the surface of the earth, or because the discarded candidate would be moving at extra-terrestrial speed and is unlikely to be the receiver of interest). In practice the receiver's inexpensive clock also adds a fourth unknown (its time bias), so a fourth satellite is needed anyway; any additional satellite signal can be used to further refine positioning as well as to provide an accurate timing source.
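To illustrate the pseudorange geometry, the sketch below synthesizes pseudoranges from four hypothetical satellite positions (all coordinates invented for the demonstration) and solves for receiver position and clock bias with a few Gauss-Newton iterations:

```python
import numpy as np

# Hypothetical satellite positions (m, earth-centered frame) and a receiver
# location used only to synthesize pseudoranges for this demonstration.
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
true_pos = np.array([1113e3, 4668e3, 4245e3])
bias = 3e-3 * 299792458.0        # receiver clock error, expressed in meters

rho = np.linalg.norm(sats - true_pos, axis=1) + bias   # pseudoranges

# Gauss-Newton: solve for (x, y, z, clock bias) from the four pseudoranges.
est = np.zeros(4)                # start at the earth's center, zero bias
for _ in range(10):
    geom = np.linalg.norm(sats - est[:3], axis=1)      # geometric distances
    residual = rho - (geom + est[3])
    # Jacobian: d(model)/d(position) is the unit vector from satellite to
    # the estimate; d(model)/d(bias) is 1.
    J = np.hstack([(est[:3] - sats) / geom[:, None],
                   np.ones((len(sats), 1))])
    est += np.linalg.lstsq(J, residual, rcond=None)[0]

print("position error (m):", np.linalg.norm(est[:3] - true_pos))
```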

In practice the three or more spheres do not exactly intersect, and some distance uncertainties remain: the actual sequence correlation process is imprecise to about 10 ns, an uncertainty of 3 m. Ionospheric and tropospheric effects change the permittivity of the atmosphere, which impacts the speed of transmission and causes variations; these variations depend on trajectories, for instance whether a satellite is near zenith or near the horizon. They are typically slow, though, and can somewhat be mitigated. Similarly, some multipath distortion due to nearby terrestrial structures may occur; it can vary very fast and may need other mitigation methods. Satellite clock errors, orbit adjustments, or temporary unavailability are also causes of error. Even relativistic terms due to satellite speeds cause onboard clocks to drift slowly. These errors may add up to several meters, and simple GPS measurements have an accuracy of 30 to 100 m depending on environment and number of satellites.

Many of these errors can be attenuated by differential GPS (DGPS), in which two nearby stations measure GPS location and time; since the stations are relatively close (say within 100 miles), the large atmospheric variations are very similar for both. Equip one station as a fixed, known location with an accurate atomic clock, and the atmospheric errors can be estimated and removed to improve the second station's measurements. DGPS can reach sub-10 m position accuracy.

4.3 Health Concerns

The question of health risks associated with wireless devices comes up regularly in the press. The significant increase in cellular phone use concerns the public and regulators. When requesting a permit to build a new tower, operators have to answer questions from city council members or elected officials about health risks and concerns.

Health hazards caused by radiations vary greatly with frequency: ionizing radiations, starting above UV rays and including X-rays and gamma rays, have severe physiological effects. Radiations below 300 GHz are our concern here; they are referred to as non-ionizing since their energy levels are below hundreds of eV and therefore not sufficient to affect valence bonds in molecules and atoms.

Figure 4.4: Spectrum of ionizing and non-ionizing radiations. Radiations above 300 GHz ionize molecules and cause severe physiological damage; radiations below 300 GHz are non-ionizing and safer to use on large scales – source: FCC report [76]

They may still be somewhat controversial, since public opinion is concerned about the increasing use of radio devices; clear evidence of health hazards is harder to come by, but studies nonetheless continue to consider carcinogenic, reproductive, and neurological effects. Health risks associated with high-level radiations have long been a concern, but studies have failed to provide convincing proof. The effects of power lines on the central nervous system have been of most concern: one study cites an apparent increased risk of leukemia in children exposed to power-line magnetic fields (from 50-60 Hz power lines) in excess of 0.4 μT; a more recent report finds the epidemiological evidence not strong enough to justify the conclusion [75]. Studies have focused on workers exposed to static magnetic fields up to a few millitesla with no signs of elevated cancer risks; other studies found no association with the risk of brain or lymphatic cancer at either static or power frequencies (50-60 Hz), but the number of cases studied was small, and more studies are required.

In the US, the Occupational Safety and Health Administration (OSHA) publishes study results and guidelines for safety. Other bodies, such as the IEEE and the FDA, monitor new studies. Internationally, guidelines were issued by the International Commission on Non-Ionizing Radiation Protection (ICNIRP) in a 1998 report. Other international bodies such as the Independent Expert Group on Mobile Phones (IEGMP) have also published reports and recommendations on mobile phones and health.

A difficulty in the development of guidelines for exposure to radiations lies in the fact that health impacts may be difficult to track, and that controversies sometimes arise from the source of funding of some studies (with potential conflicts of interest).

We’ll study two major concerns related to wireless systems: 1) the presence of towers and high power transmitters on a property, and 2) the effects of handsets on the human body.

4.3.1 Health risks from cellular towers

Base stations may cause concern because of the relatively high power they radiate and the number of fairly large antennas they sometimes carry.

RF energy absorbed by the body elevates its temperature and can have major effects: as the heat is distributed throughout the body by blood circulation, it causes whole-body heating, which has the following impacts:

- It causes drowsiness, headaches, and other temporary effects such as increased sweat rate and dehydration.

- It has a major impact on the cardiovascular system and the body's thermoregulation abilities (as in a phase of heavy exercise), and may be severe in an unhealthy individual.

- The ability to carry out cognitive tasks is also affected by heat, increasing unsafe behavior and reducing vigilance and performance (especially dangerous while climbing a tower).

- The effect on pregnant women seems no greater than on other women, and the fetus is somewhat shielded from temperature stress (if the umbilical cord is healthy and not occluded); but elevated temperature can induce developmental defects. Also, 2- to 3-month-old infants are more vulnerable to heat stress.

Studies show that an elevation of 1°C starts to have effects. Many of these effects are temporary (with the exception of prenatal development effects), but they may nonetheless be dangerous and increase health risks or accidents. In all cases, the only remedy is simply to move away from the radiations.


Some localized heating may be cause for concern as well; it generally occurs when tissue temperature exceeds 42°C for more than about an hour:

- Burns and lesions (damage to kidney, liver, and muscle tissues has been reported); cardiovascular problems may arise as well.

- Male germ cells in the testes normally require lower temperature; 3-5°C heating will result in lower sperm count lasting several weeks.

- Eye lens opacity (cataract) may occur with acute RF heating (more than 41-43°C for more than 2 hours).

The eyes and the testes are known to be particularly vulnerable to localized heating by RF energy because of the relative lack of available blood flow to dissipate excessive heat; effects are usually temporary except in the case of the eyes where irreversible damage (cataract) may occur.

Studies and reports all seem to conclude that exposures from living near a cell site are very low and unlikely to pose any health risk. Of course, the widespread presence of cell sites is still a fairly recent situation, and more long-term studies are required.

4.3.2 Health risks from cellular handsets

The main health risk associated with the use of cellular handsets might be that of increased inattention and resulting accidents, especially on the road. New York first passed laws banning cell phones while driving in 2001 (followed by New Jersey, Connecticut, Utah, and Washington, D.C.). Most recently, California passed laws in July 2008 restricting cell phone usage to experienced drivers and to the use of hands-free kits. The topic has been the subject of numerous studies on causes of inattention while driving, use of hands-free kits, seriousness and frequency of accidents, etc. Some interesting findings:

- The National Highway Traffic Safety Administration (NHTSA) estimates that 85% of cell phone users talk on the phone while driving.

- An estimated 6% of road accidents each year in the US are caused by drivers talking on their phones (2,600 people killed and 330,000 injured in cell-phone-related car accidents in 2008).

- Studies of state bans on hand-held devices while driving suggest such a ban might save 300 lives a year in California (mostly in adverse conditions such as on wet and icy roads).

- Motorists who use cell phones while driving are four times as likely to get into crashes serious enough to injure themselves (Australian study, 2005).

- Talking on the phone while driving is as dangerous as driving drunk, even with a hands-free kit, and causes an 18% slower braking reflex (Univ. of Utah, 2006).

- Hand-held users need to redial 18% of the time, hands-free users 40% (NHTSA, 2004).

(The last few points suggest that accidents are caused more by driver inattention due to the phone conversation than by the fact that hands are busy with a handset.)


Other serious risks of cellular phone use have been investigated, ranging from impact on pacemakers to cancer-causing handsets. It is difficult to separate anecdotal data, hearsay, or fibs from actual facts. Studies are rare, and sometimes funded by parties feared not to be impartial.

Early handsets and car phones used fairly high power levels (as high as 3 W), and may have caused health concerns. (Early wireless industry workers have even alleged that early trial prototypes caused severe health damage such as cancerous brain tumors.) But current handsets seem to raise no such concerns: transmitted powers are usually below 200 mW and never exceed dangerous power density levels.

The high peak-to-average ratios associated with modern digital techniques like GSM and CDMA have caused concerns as well, but the problem is more difficult to pose and less obvious to study. Studies have considered various physiological effects:

- Cognitive effects
- Pulse modulation effects on calcium efflux from the nervous system
- Mutation
- Initiation or promotion of tumors
- Increased risk of cancer (brain tumors, malignant melanoma in eye or skin)
- Effects on pregnancies
- Fatigue, headaches, warmth
- Stress, nervousness
- Even mortality rate

Evidence from hundreds of studies, covering populations near major broadcast facilities or cell site towers, and analog, digital, and even cordless phone usage, has been examined and reported. Most non-thermal effects remain inconclusive so far and show no health impact from these types of low-level RF exposures [75].

Most studies conclude that temperature elevation of the human body is the main effect of RF radiations, but also agree that further research and studies are required, especially on longer-term data. Recent research has also suggested that non-thermal effects do occur. The phenomenon of RF “hearing” has been reported and verified: the human auditory system responds to RF energy pulses (in the 2 MHz to 10 GHz range), converting them in or around the cochlea to acoustic energy, similar to a buzz, click, or hiss. Finally, alterations in animal behavior patterns following RF and microwave radiation exposure have been observed.

In conclusion: current evidence is derived from fairly recent usage data, and with the ever-increasing use of wireless devices, including at younger ages, longer-term studies are required. Until convincing data are produced, the issue is likely to gain more press attention in the next few years, and even to become a politicized debate. For example, in the European community, countries such as France, the U.K., and Germany have issued advisories to limit exposure to cell phone radiations. The French government is currently proposing a cell-phone ban for children, pending more studies on long-term effects on the developing body. And in July 2008, a research group in the US issued an advisory of caution including particularly strong advice for protecting children from cell phone radio waves, due to “children’s small size, rapidly growing bodies and brains, and potential for long-term exposure”.

The bottom line is that these questions are unlikely to be settled for a few years, until new convincing research results are produced. For now, the industry remains in a state where regular review of current guidelines is necessary.

4.3.3 Official guidelines

The FCC issues guidelines for human exposure to RF levels, derived from the recommendations of two expert organizations, the National Council on Radiation Protection and Measurements (NCRP) and the Institute of Electrical and Electronics Engineers (IEEE), and based on the many studies mentioned above. Many other expert bodies generate similar guidelines for the EU and other parts of the world; the World Health Organization (WHO) is usually involved in a harmonization effort so that every country agrees to similar guidelines.

In the case of RF (non-ionizing) radiations, the power density of radiations impinging on the human body is used to measure the level of exposure. The rate at which RF energy is actually absorbed by the body is called the Specific Absorption Rate (SAR); the SAR is usually expressed in watts per kilogram (W/kg).

The exposure limits used by the FCC are expressed in terms of SAR, electric and magnetic field strength, and power density for transmitters operating at frequencies from 300 kHz to 100 GHz. [76] [77] Since 1996, FCC guidelines have been based on the estimate that potentially harmful biological effects can occur at a SAR level of 4 W/kg averaged over the whole body (as identified by NCRP, IEEE, and ICNIRP reports). Safety guidelines are derived from these thresholds, and additional safety factors have been incorporated to consider partial-body exposure. The FCC differentiates between occupational exposure in controlled environments and general population exposure, which tends to occur in less controlled conditions. The FCC also adopted limits for localized (partial-body) absorption in terms of SAR, shown in Table 4.2, which apply for instance to devices such as hand-held cellular telephones.

Exposure limits for frequencies from 300 kHz to 100 GHz are shown in Table 4.3. The reason for differentiating by frequency is that higher frequencies have fairly low penetration depth, whereas lower frequencies penetrate the body and may heat internal tissues. For frequencies above 10 GHz (3 cm wavelength), heating occurs mainly at the skin, which acts as a shield and (other than at the eyes) may safely sustain fairly high radiation levels. From 10 GHz down to 3 GHz (3 cm to 10 cm wavelength) the penetration and heating are deeper, and from 1.2 GHz to 150 MHz (25 cm to 200 cm) penetration and absorption are sufficient to cause heating of internal body tissues, hence requiring the lowest exposure limits.

Furthermore, a time average is considered over which exposure levels are averaged; in that calculation, exposure levels may be exceeded for short periods as long as the average over the specified time does not exceed guideline levels. For instance, if the exposure level exceeds the prescribed limit by a small percentage, the exposure time is simply reduced by that percentage, as in the sketch below.
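A minimal sketch of that time-averaging arithmetic (the numbers are invented for illustration):

```python
# If the measured power density is 25% above the limit, compliance with the
# time-averaged MPE is kept by limiting presence within the averaging window.
limit    = 1.0    # mW/cm^2 (e.g., general population above 1500 MHz)
measured = 1.25   # mW/cm^2, hypothetical measured density
window   = 30.0   # minutes, averaging time for general population exposure
allowed = window * limit / measured
print(f"stay at most {allowed:.0f} of every {window:.0f} minutes")  # 24 min
```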

Table 4.2: FCC Limits for Localized (Partial-body) Exposure – from FCC report [76].

Specific absorption rate (SAR)   Occupational/Controlled   General/Uncontrolled
(100 kHz - 6 GHz)                exposure                  exposure
Whole-body                       < 0.4 W/kg                < 0.08 W/kg
Partial-body                     ≤ 8 W/kg                  ≤ 1.6 W/kg

Table 4.3: FCC Limits for Maximum Permissible Exposure (MPE) – from FCC report [77].

(A) Limits for Occupational/Controlled Exposure

Frequency Range   Electric Field       Magnetic Field       Power Density   Averaging Time
(MHz)             Strength E (V/m)     Strength H (A/m)     (mW/cm2)        (minutes)
0.3-3.0           614                  1.63                 (100)*          6
3.0-30            1842/f               4.89/f               (900/f²)*       6
30-300            61.4                 0.163                1.0             6
300-1500          –                    –                    f/300           6
1500-100,000      –                    –                    5               6

(B) Limits for General Population/Uncontrolled Exposure

Frequency Range   Electric Field       Magnetic Field       Power Density   Averaging Time
(MHz)             Strength E (V/m)     Strength H (A/m)     (mW/cm2)        (minutes)
0.3-1.34          614                  1.63                 (100)*          30
1.34-30           824/f                2.19/f               (180/f²)*       30
30-300            27.5                 0.073                0.2             30
300-1500          –                    –                    f/1500          30
1500-100,000      –                    –                    1.0             30

f = frequency in MHz.

* = Plane-wave equivalent or far-field equivalent power density.


NOTE 1: Occupational/controlled limits apply in situations in which persons are exposed as a consequence of their employment provided those persons are fully aware of the potential for exposure and can exercise control over their exposure. Limits for occupational/controlled exposure also apply in situations when an individual is transient through a location where occupational/controlled limits apply provided he or she is made aware of the potential for exposure.

NOTE 2: General population/uncontrolled limits apply in situations in which the general public may be exposed, or in which persons who are exposed as a consequence of their employment may not be fully aware of the potential for exposure or cannot exercise control over their exposure.
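The frequency dependence of the power-density limits in Table 4.3 is easy to encode; a minimal sketch (band edges are assigned to the lower row as an assumption):

```python
def mpe_limit(f_mhz, occupational=True):
    """FCC MPE power-density limit (mW/cm^2) vs frequency, per Table 4.3."""
    if occupational:
        if 0.3 <= f_mhz < 3.0:      return 100.0
        if 3.0 <= f_mhz < 30:       return 900.0 / f_mhz**2
        if 30 <= f_mhz < 300:       return 1.0
        if 300 <= f_mhz < 1500:     return f_mhz / 300.0
        if 1500 <= f_mhz <= 100e3:  return 5.0
    else:
        if 0.3 <= f_mhz < 1.34:     return 100.0
        if 1.34 <= f_mhz < 30:      return 180.0 / f_mhz**2
        if 30 <= f_mhz < 300:       return 0.2
        if 300 <= f_mhz < 1500:     return f_mhz / 1500.0
        if 1500 <= f_mhz <= 100e3:  return 1.0
    raise ValueError("frequency outside 0.3 MHz - 100 GHz")

print(mpe_limit(1900, occupational=False))   # PCS band: 1.0 mW/cm^2
```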

Figure 4.5: FCC Limits for Maximum Permissible Exposure (MPE) as a function of frequency, for non-ionizing radiations. Both occupational and general public exposure guidelines are given. Limits are lowest where the body absorption rate is highest – from [77].

4.4 Environmental Concerns

The telecommunication industry and overall Information and Communication Technologies (ICT) seem to have a beneficial impact on the environment, generally encouraging eco-friendly transfer of information rather than transport of paper goods or people. Still, some concerns arise from the fact that the sector is growing rapidly, and with it the manufacturing of electronic devices (by 2008, the number of ICT users had tripled worldwide since the adoption of the Kyoto Protocol in December 1997). Consequently, green initiatives should be two-fold: first, a continued effort to replace more polluting industries with wider ICT use; and second, improved green practices to make ICT itself more environmentally friendly.

Note that these efforts are increasingly important for service providers as general public environmental awareness increases. U.S. telecom service providers need to move immediately to offer green communications services, or risk losing out on a multi-trillion-dollar global market, an Insight Research study states. In 2008, for instance, China Mobile cited environmental reasons as a major criterion for equipment selection, and attributed major contracts to Nokia-Siemens for that reason; the public relations aspect was of course important, especially with the Olympic spotlight on Beijing, but China Mobile also cited power consumption as a major source of operational savings.

Note however that before touting green products, companies must implement earnest green strategies and products rather than advertising half-baked green measures. Too much recent green advertising has resulted in accusations of “greenwashing” and customer mistrust. Government watchdog agencies are starting to crack down on false green ads, and environmental groups are exposing frauds and exaggerations. [78]

4.4.1 Background

Greenhouse gases are gases that cause atmospheric heating; naturally occurring greenhouse gases include water vapor, carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), and ozone (O3). Other chemically produced gases, such as chlorofluorocarbons (CFC), are potent greenhouse gases, and major efforts have been in place to eliminate their use. Other gases like carbon monoxide (CO) and nitrogen oxides (NO, NO2) do not have a direct global warming effect but affect atmospheric absorption. And some emissions like sulfur dioxide (SO2) or small carbon particles form very small droplets that significantly affect cloud formation and other atmospheric absorption properties. An equivalency table of Global Warming Potentials (GWP) was created to estimate the impact of each gas in terms of CO2 equivalent. [79]

CO2 emissions, resulting both directly from the combustion of fossil fuels and indirectly from the generation of electricity, are approximately divided into the following categories (as of 2005): industry, 27 percent; residential end-use, 21 percent; commercial end-use, 18 percent.

4.4.2 ICT Greenhouse Contribution

It is generally estimated that ICT contribute between 2 and 3 percent of global greenhouse gas (GHG) emissions (www.itu.int/climate). These numbers now seem universally agreed upon, even though some earlier opinions contended that information technology consumes up to 13 percent of US energy and keeps growing. In July 2008 the ITU created a new group to work on standards related to the impact of ICT on climate change, in particular by promoting the use of more energy-efficient devices and networks and the development of technical standards recommendations to reduce power requirements.

The major negative contribution of ICT to climate change comes from the proliferation of user devices, all of which need power and radiate heat (nearly 3 billion cellular phones in 2008). These devices also increase in computing power and use (including unnecessary spam). The manufacturing and disposal processes of electronic equipment are major areas of focus.

4.4.3 ICT Opportunities

ICT's major opportunities for energy savings are in reducing power, waste, and travel. Various studies report on potential savings; for instance, the estimated opportunity to reduce carbon footprint from improvements in telecommunication networks in Australia is 4.9 percent of total national emissions, making it one of the most significant opportunities. [80]

Identifying and quantifying opportunities to save greenhouse gas emissions is sometimes complex. Impact on climate change can be examined in terms of three distinct kinds of effects:

First order or direct effects arise from the design, production, distribution, maintenance, and disposal of goods and services by the ICT industry. This carbon footprint is roughly proportional to the size of the economic sector, currently estimated at around 5+ percent of GDP, and accounts for GHG emissions generally estimated to be in the range of 2-3 percent.

Second order or indirect effects that arise from the application and use of ICT such as the energy required to power and cool network devices. Second order impacts are usually estimated to be higher than first order impacts but more difficult to measure. One recent report estimated that indirect effects might account for 80 percent of ICT-generated GHG emissions. (Madden and Weissbrod, 2008, p.7)

Third order or systemic effects arise from changes in general behavior enabled by the availability and use of ICT goods and services (such as less commuting and traveling). These changes affect nearly all industry sectors as well as general consumer behavior; their impact is so indirect that it is still more difficult to measure. In addition, a number of predictions are necessary to account for this effect, making it more uncertain to quantify. For instance, increased energy efficiency sometimes leads to lower energy cost and higher consumption, and may not equate to GHG emission reduction.

4.4.4 Current Opportunities

In total, use of broadband networks could decrease our dependence on oil by up to 11 percent over the next 10 years. The majority of energy savings is estimated to come from telecommuting, followed by teleconferencing leading to travel reduction; but the other initiatives listed below also have major potential, given with their equivalent greenhouse gas emission savings (in million tons of CO2 - Mt CO2). [81]


Telecommuting: 600 Mt CO2. Personal vehicles account for 20% to 50% of greenhouse gas emissions. Telecommuting also reduces real estate construction, parking, and heating/cooling.

Teleconferencing: 199.8 Mt CO2, if 10% of airline travel could be replaced by teleconferencing over the next 10 years.

Business-to-business e-commerce: estimated US reduction of greenhouse gases over the next 10 years: 206.3 Mt CO2.

Consumer e-commerce: 28.1 Mt CO2. Electronic goods like books, music, and video are usually more eco-friendly to produce and especially to distribute.

Telemedicine: fewer home visits, admissions, and readmissions, which also increases care quality; combined with rural availability of services, it increases access to care.

Distance learning: reduces commuting, increases availability of programs.

Home monitoring: remote-controlled thermostats can achieve 10-15% energy savings (WWF study, 2002).

Electronic billing, office mail, and electronic documents: 67.2 Mt CO2 (paper, plastic, delivery); shifting newspaper subscriptions from physical to online media alone would save 57.4 million tons. Between 2002 and 2006, first-class mail declined from 103.5 billion pieces to 97.6 billion pieces, i.e. at least 200 kt of paper saved (savings impact trees, landfill space, and chemical bleaching, inking, etc.).

Fiber transport: more energy efficient than copper; Verizon estimates FiOS total energy consumption at 38% of DSL's. Home power consumptions are equivalent, but central office power consumption is greater for DSL: copper requires 32 kWh/year/line, compared to 12 kWh/year/line for fiber to the home. Fiber maintenance cost is 61% of copper's, hence fewer truck rolls.

Fleet vehicles: retrofit vans to hybrids, new hybrid vehicles, bio-diesel, eco-friendly lubricants, etc.

Central offices and data centers:

- Green energy (solar)

- More efficient generators (Data393 announced environmentally friendly upgrades, Denver, Aug. 22, 2008)

- Underutilized mainframe computing power, which could be increased by 25% (IBM, Mainz meeting, 2007)

- Network equipment power-down

- More efficient N+1 redundancy schemes

- Use of advanced 20-year-rated UPS batteries

- Hot aisle / cold aisle containment

- Motion-sensor lighting

- Efficient humidification systems

Data storage:

- The average utilization rate for servers ranges from 5 to 15 percent, and for non-networked storage from 20 to 40 percent (Jeff Nick, EMC).

- Virtualization and consolidation lead to better utilization of servers and storage.

- Information lifecycle management: information accessibility changes over time; 70% of data volume is rarely or never accessed and can be migrated to storage that consumes less energy.

- De-duplication: avoid data duplication and regular backup duplication.

IT equipment efficiency: remote computer management for computers whether on or off; thin-client computers in call centers that use central data systems.

Network element efficiency: reduce the quantity of materials used for equipment (China Mobile green initiative, Dec. 2007); lessen the weight of each piece of equipment; increase equipment integration capacity; lower equipment power. For consumer products: increase small-transformer efficiency, lower power consumption, improve sleep modes, etc.

Local green power generation: solar panels, micro turbines.

Manufacturing: reduce toxic chemicals used in electronics; reduce energy used by electronic products; reduce manufacturing energy use and waste.

Recycling: reduce waste; provide a good infrastructure for recycling and reusing old electronic equipment, copper wires, etc. Recent estimates indicate that we recycle less than 10 percent of all our unwanted electronic products, including computers, televisions, and cell phones (www.epa.gov).

4.4.5 Specific Wireless Opportunities

It is fairly difficult to compare the environmental effects of manufacturing radio equipment with those of producing and installing wired (copper or fiber) lines. Nevertheless, wireless technologies do present a few specific advantages.

Location-based services: route mapping reduces fleet driving by finding the nearest available trucks. Route optimization by prioritizing right turns reduces idle time: UPS estimates that in 2007 it saved 3.1 million gallons of fuel and avoided discharging 32,000 tons of CO2 into the air by turning right whenever possible. Wireless broadband also allows freight and freight vehicles to be monitored in real time, allowing more fully laden vehicles (Australian estimates [80]: avoiding 25% of unladen truck mileage leads to 2.9 Mt CO2 savings per annum, i.e. 0.52% of total national emissions). The Australian study further surmises that good wireless service may facilitate personalized public transport, leading to higher vehicle occupancy and more convenient taxis, car pools, etc.

Wireless phone, battery, and accessory recycling: advertise recycling centers, give credit for old handsets and accessories. Virgin cellular prides itself on including a prepaid envelope for new customers to return older phones.

Equipment: improve chargers and transformers; many small consumer-device transformers are always plugged in and very inefficient, especially at low load. Sleep and low-power modes in mobile devices lower power consumption and increase battery life. Activity detection and link-status monitoring in wireless APs and Ethernet switches claim 20 to 40 percent power savings.

4.5 Homework

1. Put together an estimated cost of rollout of WiMAX service for a minor city like Boulder, CO. Assume a 140 dB link budget for a 2.5 GHz WiMAX network. Estimate the cell radius for good outdoor coverage. Find a map of the area, and decide where to place cell sites.

2. Assuming a GPS satellite has a circular orbit 20,200 km above the earth (of radius near 6,350 km), calculate its speed and period of revolution.

3. Discuss the health risks for a crew working right in front of a PCS transmitting antenna (16 W base station). Consider the exposure power density if a worker is in front of the antenna and absorbs all the energy radiated within a 1 ft by 2 ft area on his body.

4. Are there any health risks associated with using a modern handset (200 mW) for long calls exceeding an hour? Consider mainly the SAR absorbed by the user's head (averaging 5 kg for a human adult).

Chapter 5 Wireless Security

Modern communications offer ubiquitous access to public and private information. But that convenience and flexibility also bring serious security concerns.

5.1 Security Concerns

Authentication: allowing the right customer to have access to the right data. This is also combined with detailed billing: it is important for service providers to bill for elaborate services, and to make sure these services are not hijacked by non-paying users.

Private data security: securing data from eavesdropping is important to consumers and businesses.

Public data security, or digital rights management: controlling the distribution of copyrighted material is necessary for content providers to offer access to pay content.

5.1.1 Strengths and Weaknesses

All these concerns are well known in the industry, and many security standards are available to implement necessary security features. In the case of wireless systems, a few special restrictions apply, specific design needs exist, and different types of wireless threats and challenges are encountered such as remote jamming and eavesdropping.

Nevertheless, in some cases wireless solutions combined with the proper security measures can bring precious advantages: mobility of course; resiliency, since conduits cannot be broken, vandalized, or stolen; speed, as new links can be installed faster with the proper equipment. Terrestrial or satellite wireless systems also offer good redundancy to wired solutions (for both voice and data).

Wireless systems are often used to provide a redundant path to the usual fiber and copper infrastructure; they may offer a useful alternative to accidental or deliberate cable cuts, and to natural disasters such as earthquakes, hurricanes, and floods. In several instances a robust cellular system provided reliable communications when other fixed infrastructure failed. On February 28, 2001, a powerful earthquake with a magnitude of 6.8 jolted Seattle, WA; tremors were felt as far away as Portland, OR. Although wired telephony outages were rare, the exceptional overload of communication demand was well complemented by the cellular infrastructure, which performed extremely well throughout the crisis and was quickly augmented by temporary sites where most needed. Other major disasters such as Hurricane Katrina, the 9/11 terrorist attack, and the major North-Eastern grid failure have shown both the resiliency and the limits of many wireless networks and their usefulness in emergency situations. In October 2008, the FCC sought to improve them further by mandating an 8-hour power backup for most cell sites within a year, in case of emergency needs (except where health, safety risks, or local laws are of concern).

5.1.2 Concerns and Mitigation

Given unknown eavesdropping threats and major international communication intelligence and signal intelligence projects, any wireless communication may be intercepted by a competitor or a potentially harmful entity. Wireless propagation may be monitored at great distances, and wireless systems are consequently more vulnerable than copper or fiber waveguides. Nevertheless, wired optical and electrical communications may be snooped as well, and their implicit physical layer security should not be trusted.

Wireless systems can be designed to minimize authentication vulnerabilities and other threats to privacy. Wireless equipment manufacturers and service providers must be careful to consider all aspects of wireless vulnerabilities.

Wireless classic attacks are usually categorized as follows:

Passive attack: eavesdropping on the wireless signal, which propagates somewhat uncontrollably.

Cryptographic attacks: some encryption schemes are vulnerable to various attack methods.

Active attack: eavesdropping as well as injecting traffic for various purposes: denial of service or information hijacking.

Jamming attack: the simplest denial of service, usually feasible with very little information about the wireless system.

Man-in-the-middle attack: more subtle and more dangerous; given enough information about the wireless system, an attacker can emulate all authentication parameters of an access node and try to intercept traffic from other nodes.

In many cases wireless security is especially difficult to achieve because the physical layer is more vulnerable.

Table 5.1: Typical wireless security concerns: vulnerabilities, threats, and associated mitigation measures.

- Vulnerability: RF signal reliability. Threat: jamming, a brute-force denial-of-service attack. Mitigation: a resilient network with redundancy and interference detection and localization.

- Vulnerability: RF signal propagation beyond controlled areas. Threat: eavesdropping; experiments have shown that wireless LANs can be attacked with fairly simple equipment (a very high gain antenna and a simple NIC client card) from nearly 20 miles away, farther with elaborate equipment. Mitigation: robust encryption and authentication (as implemented in layer 2 of 802.11i, and in recent cellular algorithms).

- Vulnerability: heavy cellular network usage. Constraint: encryption schemes cannot significantly increase the required throughput. Mitigation: efficient key management, limited public-key solutions.

- Vulnerability: low mobile-station computing capacity. Threat: hardware (handsets, PDAs) is only capable of fairly simple, vulnerable encryption schemes. Mitigation: use specific mobile hardware for secure communication.

- Vulnerability: mobile stations may cause security leaks. Threat: man-in-the-middle attacks may introduce malware or spyware into entire networks. Mitigation: strict hardware control and authentication.

5.1.3 Regulations and Protection

Communication systems today are used by major companies to exchange sensitive data: business espionage within the US is strictly regulated, but international eavesdropping remains a concern. Foreign threats are a concern for business intelligence (manufacturing and trade secrets, intellectual property, etc.) since potential illegalities are difficult to prosecute. And government services are even more concerned with political and military information exchanged over the air.


Several international monitoring initiatives have been active since World War II. Today, they mostly rely on terrestrial centers, high-altitude aircraft, and satellites. [11] [12] In the US, communication intelligence falls within the guidelines of the United States Signals Intelligence Directive (USSID) and US privacy laws. Terrestrial and satellite service providers must help government entities as necessary, as mandated by CALEA (the Communications Assistance for Law Enforcement Act). Foreign threats also exist; one should always assume major satellite and terrestrial eavesdropping systems are in place from several foreign nations. In particular, major efforts in signal intelligence have been documented at the strategically located Lourdes signal intelligence facility, near Havana, Cuba.

5.2 Security for Wireless Systems

Wireless systems are very different in nature, from high-capacity dedicated point-to-point links (terrestrial or satellite) to heavily used cellular or satellite systems; each needs specific security measures.

5.2.1 Satellite Communications

Satellite communications are especially vulnerable to eavesdropping due to the large areas they cover. A few earth stations were famously known to intercept all satellite communications in the 80s: only a few geosynchronous TELSAT satellites were carrying mostly voice traffic, and eavesdropping was a fairly simple task. Current satellite communications are still more likely to be intercepted given the long communication range; terrestrial communications, by contrast, require much more geographic frequency reuse, which makes long-range interception very difficult.

Some commercial security systems are available for satellite handsets, relying on common encryption algorithms. Nevertheless, satellite links should always be considered insecure, and specific authentication and encryption measures should always be used.

Satellite services are however a good solution for redundant and resilient service in case of major emergencies. In the US, four companies provide mobile satellite service (MSS): in the days immediately after Hurricane Katrina, MSS providers deployed over 20,000 satellite handsets to the Gulf Coast region, and their handset manufacturers moved to a 24x7 production schedule to keep up with demand. [15]§172

5.2.2 Point-to-point Systems

General point-to-point wireless links usually use microwave frequencies (6 GHz to 38 GHz), or even mm-wave frequencies (60 GHz to 80 GHz). These radio links cover several miles and deliver point-to-point services; they are typically used to reach rural locations, as well as between high-rises in dense urban environments.


Historically, eavesdropping on long-haul terrestrial links has played important roles, at least during World War II, when after 1941 Vetterlein's eavesdropping station (near Noordwijk, on the Dutch coast) monitored and descrambled over 30 daily Allied conversations [11].

Current microwave links are based on proprietary systems and generally do not have specific embedded physical layer security. This is usually acceptable when these links are used to provide public voice and data infrastructure (e.g. in rural areas). These links should be considered vulnerable, and end-to-end encryption at higher layers should be implemented when microwave links are used as part of data transport.

5.2.3 Cellular Systems

The vast majority of wireless applications focus on wide coverage and mobility. Consequently very specific security measures have to be applied. In addition the volume of data and large amount of usage on shared resources often determine security measure requirements and limitations.

Cellular security systems are often limited because of their massive commercial use [6, ch. 13]:

- Mass production: equipment and services are mass produced and need to be cheap.

- Import/export: algorithms must be allowed to be manufactured, implemented, and carried anywhere in the world.

- Limited handset computing power: size, weight, power, battery, and processor speed limit encryption strength.

- Heavy use: the data-rate increase required by some encryption algorithms must remain limited.

Physical Layer Security Some carriers mention that frequency hopping (FH) or CDMA spread spectrum provides a layer of security that makes eavesdropping very difficult and expensive. That may have held some semblance of truth in the 90s, but it is surprising that these arguments are still being used: the type of FH or spreading used in modern digital communications systems has very little to do with low-detection communications, even though that is what FH and spread spectrum were originally devised for. Anyone who has conducted spectrum surveys or digital cellular network data collection knows that these signals are very easy to detect and that cheap devices are available to decode them.

GSM Security GSM authentication relies on SIM cards, which are convenient for customers, as they are not tied to a handset, and reasonably (although not fully) secure. The GSM security architecture uses public keys and three primary security algorithms:

- A3 for handset authentication to a GSM network.

- A5/1 in Western Europe (or A5/2 elsewhere): a stream cipher for voice encryption.

- A8 for key generation.

Unfortunately, in an effort to keep the algorithms fairly secret, GSM created the secret Security Algorithm Group of Experts (SAGE), and did not submit the algorithms to a worldwide analysis effort. Consequently, design flaws surfaced in 1998-1999: encrypted conversations could be decrypted in real time with a simple 15-millisecond attack on a typical portable PC, rendering over 215 million GSM phones non-secure at the time. Since then, improved security standards have become available for GSM (3GPP TS 33.105, TS 33.120), still providing acceptable security levels. Nevertheless, some attacks are known (for instance based on power monitoring in the SIM card) and GSM should not be considered absolutely secure.

CDMA Security CDMA systems (cdmaOne IS-95, to CDMA2000) provide authentication, signaling message encryption, and voice privacy. They use CAVE (the Cellular Authentication and Voice Encryption algorithm) for authentication, and CMEA (the Cellular Message Encryption Algorithm) for signaling encryption. For authentication, CDMA (CAVE) uses:

- the subscriber's A-key,

- the handset serial number (ESN).

CDMA handsets use the authentication key (A-key) to produce two shared secret data (SSD) values using the CAVE algorithm: two 64-bit values (SSD_A and SSD_B), one for authentication, the other for encryption algorithms. CDMA/3GPP2 voice and data encryption is standardized in S.S0053 - Common Cryptographic Algorithms.

Much like GSM, CDMA keeps algorithms such as CAVE relatively secret, and these algorithms are therefore likely to have security weaknesses.

A subset of the PN sequence (the private long code mask) is also used to uniquely scramble user voice and thus provide voice privacy; but given the predictability of the long code, this system is generally deemed fairly vulnerable to cryptanalysis.

In many cases carriers have not implemented voice privacy or other sophisticated encryption standards. Specific security measures should therefore be implemented in partnership with the service provider when additional private communication requirements exist.

Upper Layer Security The large amount of wireless capacity needed for commercial service, combined with the fairly low computing capabilities of many mobile devices, makes strong encryption impractical for many wireless services. Some service providers offer specific handsets that include hardware capable of better security applications for voice and data encryption. The main concern in these cases is that the coded information increases capacity demand, which may come at premium prices over public cellular infrastructure. All classic application layer security schemes can be implemented on wireless devices. Current IP-based wireless networks adapt very well to IPsec and VPN schemes, which make wireless access by controlled and authenticated wireless devices very safe.

5.2.4 Wireless LAN

Wireless LANs typically rely on various security protocols in their lower layers, but these standards have had notorious weaknesses. For Wi-Fi, WEP (based on RC4) has well-documented weaknesses and is now considered completely unsafe [19]. EAP-related encryption schemes have been reported broken as well. WPA (using TKIP, still RC4-based) and now WPA2 (using AES with 128-bit keys or more) provide fairly safe WLAN environments, although WPA/WPA2 are also vulnerable to brute-force attacks, as reported recently by ElcomSoft in a press release. Further initiatives within 802.11i improve on authentication as well as encryption with the CBC-MAC protocol (CCMP). A minimal sketch of the RC4 stream cipher underlying WEP follows.
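The sketch below shows the cipher construction only (WEP additionally prepends a 24-bit per-packet IV to the key, which is at the root of its key-scheduling weakness); it is illustrative, not something to use for real security:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Minimal RC4 (KSA + PRGA), the stream cipher underlying WEP.
    Illustrative only; RC4 and WEP are both considered broken."""
    # Key-scheduling algorithm (KSA): permute the state with the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"iv+secret-key", b"hello")       # hypothetical key for the demo
assert rc4(b"iv+secret-key", ct) == b"hello"   # same keystream decrypts
```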

The IEEE 802 standards body also pursues an effort to standardize MAC layer security (MACsec) for many IP-based standards, including Ethernet and wireless LAN/MAN.

5.3 Basic Concepts

A few basic concepts are common to security systems: encryption is of course an important one, but it should be emphasized that many other important building blocks contribute to a robust security system, such as authentication, key generation, and key distribution. These major concepts are detailed in this section.

5.3.1 Authentication and Identification

In order to provide secure information exchange, a network must comprise secure, trusted elements only. In an IT infrastructure, this must include considerations for all devices connected at any time to the network. These devices and their allowed functions must be trusted, which requires a strict authentication procedure. Authentication is crucial in all parts of a secured system:

- Hardware and software applications must be secure, including: storage and memory, bootloader, applications, I/O bus, and CPU communications,

- Network links between entities must be secure and immune to attacks, eavesdropping, etc.


IT professionals are increasingly worried about wireless devices that physically enter secure premises, since, in connecting to an otherwise secure network, they can create holes in its security system.

5.3.2 Encryption

Standard encryption schemes keep information exchange secure, mostly by careful handling of encryption/decryption and of key generation and management.

Encryption/decryption mechanisms usually use well-known, publicly standardized algorithms. Note that implementation flaws and key-management issues can render a seemingly sound protection scheme nearly useless: WEP, used in 802.11, is based on the robust RC4 cipher, but has notorious key-scheduling weaknesses: a single static key is used by all users, and some keys cause weaknesses that allow an attacker to recover the key after a few minutes of network capture in a basic ciphertext attack. [18] [19]

Robust encryption algorithms are nearly always public and well standardized, in order to have weathered many analyses and withstood multiple cracking efforts. Algorithms like DES, 3DES, and AES are fairly secure at a given time, but they also become obsolete, mostly because of the length of their keys.

5.3.3 Commonly accepted encryption safety levels

Currently DES is considered insecure, mostly because of its 56-bit key length. 3DES with two or three keys is more secure, but is advantageously replaced by AES. AES (Rijndael) with a 128-bit key is currently considered safe; 192- and 256-bit key options are available for further protection in upcoming years.

Standard algorithms tend to increase key length to keep up with increasing computing capabilities. Today at least 128 bits are required for safe, publicly standardized symmetric algorithms; sometimes 192 bits are preferred.

Generally, an order of magnitude more is required for public-key (asymmetric) algorithms, which rely on the mathematical difficulty of inverting certain computationally easy operations:

Integer factorization: factoring a product of two primes, n = p · q; used for instance by RSA. The modulus needs to exceed 300 digits, leading to key lengths in excess of 1024 bits; 2048-bit keys are preferred.

Discrete logarithm: y = g^x (mod p); used by the government's Digital Signature Algorithm (DSA) and the classic Diffie-Hellman scheme. Similar key lengths: 1024 to 2048 bits.

Elliptic curve discrete logarithm: Q = x · G, for a point G on an elliptic curve y² = x³ + ax + b (mod p); more recent, this algorithm is sometimes feared to need more time to weather cryptanalytic attacks, but it has a great advantage in key length: a 172-bit key might be equivalent to the two 1024-bit key systems above. Consequently, elliptic curve cryptography is well suited to computation-limited environments such as wireless devices [11].
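A toy Diffie-Hellman exchange illustrates the discrete-logarithm construction; the parameters below are deliberately tiny and hypothetical (real deployments use 2048-bit groups or elliptic curves, as noted above):

```python
import secrets

p = 0xFFFFFFFB                      # a small prime (2**32 - 5): toy size only
g = 5                               # hypothetical generator for the demo
a = secrets.randbelow(p - 2) + 1    # Alice's private exponent
b = secrets.randbelow(p - 2) + 1    # Bob's private exponent
A = pow(g, a, p)                    # public values, exchanged in the clear
B = pow(g, b, p)
# Each side raises the other's public value to its own secret exponent;
# an eavesdropper seeing only (p, g, A, B) faces the discrete-log problem.
assert pow(B, a, p) == pow(A, b, p)
```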

The above considerations are generally accepted, and based on a few reasonable premises:

1. These difficult inverse mathematical problems have no known fast algorithms, but they have not been proven intractable: faster algorithms might emerge and change the public-key cryptography landscape overnight.

2. Moore's law is likely to continue for years to come. If a breakthrough in computational speed were to occur that far exceeded Moore's law, it would have a significant impact on the encryption and privacy industry. (Quantum computing might one day provide the basis for such a breakthrough.)

5.4 Further Areas of Research

Encryption techniques are constantly evolving, and research efforts focus on many areas. A few interesting research domains are listed below:

Standard MAC layer security: WLAN now addresses security in 802.11i. New data confidentiality protocols are the Temporal Key Integrity Protocol (TKIP) and the very robust counter mode with CBC-MAC protocol (CCMP).

Radio channel characteristics: for wireless communications, interesting properties of the wireless channel between transmitter and receiver can be used for secret information sharing; these properties are inherently secure since they are fairly unique to one communication link, and therefore potentially hard for an eavesdropper to duplicate. [21]

Handsets and public-key algorithms: handsets and portable devices have improved in computing power, but are still very limited in their ability to perform intensive computations (if for no other reason than battery life). Public-key (asymmetric) algorithms were therefore not practical for mobile handsets. Still, current computing capabilities, combined with the apparently lower key lengths required by elliptic curve cryptosystems (ECC), make that area of investigation more attractive for wireless communications.

Quantum cryptography: quantum cryptography and quantum key distribution are important technologies in current research initiatives. Based on the quantum principle that eavesdropping may modify quantum properties, these techniques may offer very robust encryption tools for future security standards. Although applications are unlikely for wireless devices in the near future, research groups are studying combining principles of quantum cryptography with fiber optic communications to provide secure optical systems.

5.5 In Summary

Wireless security is an end-to-end task that includes many different aspects such as authentication of hardware and software, encryption/decryption, and key management (generation, distribution, renewal). Different systems require different considerations: terrestrial or satellite, private or public; some systems are constrained by the computing limitations of end-user devices, or by traffic volumes. In general, wireless communication systems – microwave transport, cellular, or satellite – are not encrypted by operators, and should not be trusted. Whenever necessary, additional security measures should be implemented for both voice and data.

Chapter 6 CDMA

This section provides an introduction to CDMA concepts such as orthogonal and pseudo-orthogonal coding. It first introduces simple but powerful properties of auto-correlation and cross-correlation of specific bit sequences, and then illustrates how a CDMA standard like cdmaOne makes use of these properties. It then focuses on additional improvements brought by the third generation of standards (3G) that use CDMA.

This section presents a cursory overview that allows the reader to understand the importance of the orthogonal and pseudo-orthogonal properties of codes used in CDMA. A few block diagrams are reproduced (from the cdmaOne standard) but the section does not go in depth into the various layers of the standard. For a complete description of cdmaOne, refer for instance to [82], [83], or [84].

6.1 CDMA Basics

CDMA systems spread a slow information bit rate with a fast chip sequence, transmit it over the air, and retrieve the original information. How to actually spread and retrieve the information is standardized in detail in IS-95. Three main tools are used:

- Walsh codes: 64-chip orthogonal sequences.

- A short code, 2^15 - 1 = 32767 chips long, which has the property of being orthogonal to any nonzero offset of itself.

- A long code, 2^42 chips long, used to generate unique sequences, which are pseudo-orthogonal to one another.


The following sections go into further details on where these tools are used in various aspects of the IS-95 air interface: on forward links, reverse links, in access mode, and in traffic mode.

6.1.1 Walsh Codes

Forward Link Walsh codes are orthogonal codes; IS-95 uses them to multiplex several mobile communications (and control channels) on the forward link. Walsh codes are simply built from Hadamard matrices, as represented in figure 6.1.

Figure 6.1: Hadamard matrix recursion used to build Walsh codes.
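A minimal sketch of that recursion (Python/NumPy), building the 64 x 64 Walsh set used by IS-95 and checking mutual orthogonality:

```python
import numpy as np

def walsh(n):
    """Walsh codes of length 2**n via the Hadamard recursion H -> [[H, H], [H, -H]]."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

W = walsh(6)                          # 64 codes of 64 chips, as in IS-95
# Rows are mutually orthogonal: correlating any two distinct codes gives 0
assert np.array_equal(W @ W.T, 64 * np.eye(64, dtype=int))
```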

In the forward link, each mobile uses a specific Walsh code sequence; all sequences are multiplexed together in a total combined sequence:

S(t) = Σ_i g_i s_i W_i(t)    (6.1)

(where s_i represents the information symbol (+1 or -1), and g_i is the individual channel gain). That sequence is manipulated further and sent over the air; on the receiver side, it is decoded by simply integrating for each channel: for channel k, the information bit is retrieved from the sign of the integral:

(6.2)

Reverse Link In the reverse link of IS-95, Walsh codes are not used in that manner but simply encode bits in a 64-ary encoding scheme: that is, each 6-bit sequence is mapped to one of 2^6 = 64 possible Walsh codes of length 64. Figure 6.2 illustrates the example with 2 bits encoded by 2^2 = 4-chip Walsh codes.


Figure 6.2: Walsh codes can be used to encode n-ary bit sequences as in the IS-95 reverse link. This example shows a 4-bit long Walsh code set used to encode 2 bits.

In further evolutions like IS-2000 and IS-856, the reverse link multiplexes several channels similarly to the forward link, and orthogonal properties of the Walsh codes provide the multiplexing scheme:

$$S_{tot}(t) = \sum_i g_i\, s_i\, W_i(t) \qquad (6.3)$$

Encoding Example Figure 6.3 illustrates this CDMA spread-spectrum encoding. In the example, we encode two separate user bit streams: user 0 transmits the logical bitstream u_0 = [0,1], which is physically represented by u_0 = [+1,-1]. Each bit is encoded (“spread”) by Walsh code 0: W_0 = [+1,+1,+1,+1], thus giving the represented symbol stream for user 0. Assume next that this symbol stream is amplified by some gain (here g_0 = 1). Note that the gain may vary from one user to the next, and from one user bit to the next.

A similar stream is produced for user 1's logical bitstream u_1 = [0,0]. More user bit streams could be added, but this example only shows two separate user streams u_0 and u_1, here transmitted with equal gain g_0 = g_1 = 1.


Figure 6.3: A simple example of CDMA coding using orthogonal Walsh codes of length 4.

Decoding Example Now examining figure 6.3, the initial user sequences can be retrieved by simply applying formula (6.2) for every Walsh code, bit by bit. The reader is invited to multiply the sequence of figure 6.3 bit by bit, and integrate (which is here straightforward since piecewise constant functions are involved) to retrieve each user bit value (+1 or -1). A further exercise is given at the end of the section, where the total symbol sequence is given (see figure 6.13) and the reader is invited to retrieve each user sequence (u_i) and gain (g_i).
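The whole encode/decode cycle of figure 6.3 can be reproduced in a few lines. The sketch below is our own illustration (user 1 is assumed to use Walsh code 1): it spreads two user bit streams per equation (6.1) and recovers them per equation (6.2):

```python
import numpy as np

W = np.array([[1,  1,  1,  1],     # W0
              [1, -1,  1, -1]])    # W1 (assumed for user 1)

u0, g0 = np.array([+1, -1]), 1.0   # user 0: logical bits [0, 1]
u1, g1 = np.array([+1, +1]), 1.0   # user 1: logical bits [0, 0]

# Spread each user bit by its Walsh code and sum the streams (equation 6.1)
s_tot = g0 * np.kron(u0, W[0]) + g1 * np.kron(u1, W[1])

def decode(s_tot, wk):
    """Correlate with Walsh code wk, bit by bit (equation 6.2)."""
    return np.sign(s_tot.reshape(-1, len(wk)) @ wk)

print(decode(s_tot, W[0]))   # [ 1. -1.] -> user 0 recovered
print(decode(s_tot, W[1]))   # [ 1.  1.] -> user 1 recovered
```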

6.1.2 Short Code

Short codes are bit sequences with very specific autocorrelation functions. IS-95 uses a 2^15 - 1 = 32767 chip short code. For a first example let us use here a 2^2 - 1 = 3 bit long short code: 100.

Cyclically permuted, that short code has the following property: if +1 is counted for every bit of the permuted sequence that is identical to the bit of the original sequence, and -1 is counted when that bit is different, every permuted sequence totals -1, whereas the original sequence obviously totals its length (here 3).

100 (original sequence)

010: +1 agreement - 2 disagreements = -1
001: +1 agreement - 2 disagreements = -1
100: +3 agreements - 0 disagreements = 3

If we note $\langle S, S^{(n)} \rangle$ the above computation, where $S^{(n)}$ is the sequence cyclically shifted by $n$ chips, we have for a short code sequence of length $N$:

$$\langle S, S^{(n)} \rangle = \begin{cases} N & \text{if } n = 0 \\ -1 & \text{if } n \neq 0 \end{cases} \qquad (6.4)$$

This is a remarkable autocorrelation property of bit sequences that is used in CDMA. As another example, verify that the following two short code sequences (of length 2^3 - 1 = 7) satisfy the same property as the table above: s_1 = 0011101 and s_2 = 1110010. These two sequences are used for Gold codes in other CDMA standards. Another short code (of length 2^5 - 1 = 31) is s_3 = 0000101011101100011111001101001.
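The property of equation (6.4) is easy to verify numerically; the short sketch below (our own helper, not standard code) computes the agreements-minus-disagreements count for every cyclic shift of s_1:

```python
def autocorr(seq: str, shift: int) -> int:
    """<S, S^(n)>: +1 per agreeing chip, -1 per disagreeing chip,
    comparing a sequence to its cyclic shift by `shift` chips."""
    shifted = seq[shift:] + seq[:shift]
    return sum(+1 if a == b else -1 for a, b in zip(seq, shifted))

s1 = "0011101"                               # length 2^3 - 1 = 7
print([autocorr(s1, n) for n in range(7)])   # [7, -1, -1, -1, -1, -1, -1]
```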

IS-95 uses a much longer such sequence (2^15 - 1 = 32767 chips), which provides many possible offset sequences orthogonal to one another, used to differentiate between sectors on the forward link.

6.1.3 Long Code

The long code is a sequence of bits used for its pseudo-orthogonal properties; that much longer sequence does not have the perfect autocorrelation property of the short code, but a similar one such that:

$$\left|\langle S, S^{(n)} \rangle\right| \ll N \quad \text{for } n \neq 0 \qquad (6.5)$$

This pseudo-orthogonal property is used on the IS-95 reverse link in a 1.2288 Mcps chip sequence to differentiate between mobiles. On the forward link the long code is decimated down to a 19.2 kbps bit stream for a unique user mask.

6.2 Overview of cdmaOne


cdmaOne, or IS-95, commonly refers to the first implementations of CDMA techniques as invented by Qualcomm Inc. We first review the basic functions of IS-95 CDMA channels, and we will build on these to understand further IS-2000 improvements in the next section.

6.2.1 Forward Link Control Channel

During a call, both traffic and certain control data are embedded in the forward-link channels. Three forward-link control channels, the pilot channel, the synchronization channel, and the paging channel, are discussed in this section.

Pilot Channel The primary purposes of the forward-link pilot channel are to allow mobile units to detect the presence of a sector and to provide some form of timing reference for those mobiles so that coherent demodulation can be accomplished on the other forward-link channels. Detection of pilot channel power levels is also used to control handoff.

Synchronization Channel The synchronization channel transmitted by each sector is used along with the pilot channel to provide important system information to the mobile unit, such as revision (IS-95A, B, or IS-2000), system ID, Network ID, system time, long code state, etc.

Paging Channel One or more paging channels can be utilized within each sector of a CDMA system to inform mobile units of incoming calls, to convey channel assignments during call setup, and to transmit other overhead system information to the mobile units. The forward-link paging channel is used to set up a call, or to exchange short messages (SMS) in idle mode. But once a call is established, the sector transmits information specific to a particular mobile unit within the forward-link traffic channel.

6.2.2 Forward Link Traffic Channel


Figure 6.4: Forward-Link Pilot, Paging, Synch, and Traffic Channel rate set 1 (from J-STD-008).


Figure 6.5: Forward-Link I and Q Transmission.

The forward-link traffic channel carries user data such as voice communication. The forward-link voice traffic is parsed into 20-ms frames. On a frame-by-frame basis, bit rates associated with the particular Rate Set being employed by the connection are allowed. Frame Quality Indicator (FQI) bits are added as appropriate to these bit streams so that the mobile unit can apply a Cyclic Redundancy Check (CRC) to evaluate whether there are errors in the frame after reception. Finally, an 8-bit encoder tail is appended to the frame.

Different convolutional encoding schemes and symbol repetition are then applied for different vocoding rates to achieve a fixed bit rate of 19.2 kbps as set by the IS-95 standard. For less-than-full-rate frames, symbol repetition is applied as appropriate such that the equivalent overall data rate is still 19.2 kbps after symbol repetition.

Finally, after convolutional encoding, symbol repetition, and block interleaving, the bit stream is effectively multiplied by the decimated “Long Code”. Note that there is a single long PN code utilized by the system, based on system time (and hence aligned with Coordinated Universal Time). This PN code runs at a rate of 1.2288 Mcps, and still only repeats every 41 days due to the length of the code and the design of the shift register used to generate it. This long code is decimated, or sampled every 64th bit, and the resulting 19.2-kbps stream is the decimated long code which multiplies the traffic channel bit stream of interest.
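These figures can be checked with simple arithmetic; a quick sketch:

```python
chip_rate = 1.2288e6                 # long code chip rate (chips/s)
period = 2**42                       # long code length in chips
print(period / chip_rate / 86400)    # ~41.4 -> the code repeats every ~41 days
print(chip_rate / 64)                # 19200.0 -> decimated long code at 19.2 kbps
```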

6.2.3 Reverse Link Access Channel


Figure 6.6: Access Channel.

The reverse-link access channel is used to access the system during call setup; it is used to request service or to answer a page for an incoming call. The mobile unit makes an Access Attempt, in which it repeatedly attempts to access the system until either the network responds or the maximum number of attempts have been made.

Figure 6.7: Access Attempts on the IS-95 access channel consist of Access Probe Sequences of successive Access Probes of increasing power.

Within each Access Attempt, the mobile unit transmits a series of Access Probe Sequences, each separated by a waiting period to allow the system time to respond. If a response is obtained, then no more Access Probe Sequences are sent; if a maximum settable number of Access Probe Sequences are sent without a response from the network, then the mobile unit discontinues trying to contact the network. Each Access Probe Sequence within an Access Attempt consists of a series of Access Probes of increasing power, each separated by a waiting period to allow the system time to respond to each probe.


6.2.4 Reverse-Link Traffic Channel

Figure 6.8: Reverse-Link Traffic channel Rate Set 1.

The reverse-link traffic channel processes user information such as voice in successive processing steps. The first step is to add Frame Quality Indicator (FQI) bits, which allow the base station to determine whether errors occurred during transmission via a Cyclic Redundancy Check (CRC). After the addition of the FQI bits, encoder tail bits are added to each 20-ms frame. The result is a bit stream at a rate determined by the Rate Set selected for the connection (as established at call setup through negotiation between the network and the mobile unit) and the sub-rate required to transmit the information for that individual 20-ms frame.

A convolutional encoder followed by symbol repetition is then employed to achieve a 28.8-ksps stream. The resulting bit stream is then sent through a 576-bit interleaver (576 bits per 20-ms frame) to mitigate burst errors during transmission.

On the reverse link, Walsh codes are used for modulation rather than to discriminate between separate traffic channels. For each six sequential bits in the coded and interleaved voice frame, one of 64 possible 64-chip Walsh codes is transmitted, selected by the values of the 6 input bits (64-ary orthogonal modulation).

6.2.5 Handoff


Figure 6.9: IS-95 and IS-2000 handoff sets: pilots are promoted or demoted to different sets as their power levels compare to different threshold values or as their timers expire.

A short handoff overview was given in section 2.7.2. In fact the handoff process is based on several sets, populated by different possible PN offset values that may be used by the mobile for soft handoff. Figure 6.9 summarizes these sets and the processes in place to promote a possible sector seen by the mobile from any set to the next as their power level exceeds given thresholds (and conversely to demote them as their power weakens).

6.2.6 Power Control

Power control is essential (although not mandatory) in a CDMA system to resolve the near-far problem. Forward-link and reverse-link power control are very different in nature and are discussed separately.


Figure 6.10: IS-95 forward-link power control simply reports to the base the quality of frames. Depending on rate set, frame errors are reported by an Erasure Indicator Bit (EIB) or a separate Power Measurement Report Message (PMRM).

Forward-Link Power Control At the initiation of a call, the serving sector begins transmitting the forward-link traffic channel at a fixed default power level. After the connection is established, a form of closed-loop forward-link power control takes over, in which the mobile unit reports the quality of the received forward-link signal and the sector responds by adjusting the power allocated to that connection. For Rate Set 1 calls (9.6 kb/s), the mobile unit reports frame errors via a Power Measurement Report Message embedded in the reverse-link traffic channel. The reporting is usually periodic or can be threshold-based. Typically the sector will periodically reduce the transmit power allocated to a specific traffic channel, by small steps (typically between 0.2 dB and 0.5 dB), as long as it receives a Power Measurement Report Message (PMRM). When a PMRM is not received, it then increases its transmit power by a larger increment (1-2 dB, up to a maximum).

For Rate Set 2 calls (14.4 kb/s), forward-link power control is simpler: the mobile sends a forward-link Erasure Indicator Bit (EIB) in each reverse-link traffic-channel frame to indicate whether the most recent frame was in error based on the Cyclic Redundancy Check (CRC). In good conditions (no erasures reported), the base lowers power in the traffic channel by dn_adj (typically 0.2 dB to 0.5 dB). If an erasure is reported, the traffic channel needs more power, and the base powers up by up_adj (typically higher than dn_adj, such as 1 dB).

Reverse-Link Power Control In an IS-95 system, reverse-link power control is performed at an effective rate of 800 Hz. That is, every 1.25 ms, the mobile unit will either power up or power down in response to the various power control commands that it receives from the serving sectors.


Figure 6.11: IS-95 reverse-link power control: closed loop. An inner loop adjusts Eb/N0 every 1.25 ms; an outer loop recalculates the Eb/N0 set point to reach a target FER.

Open-Loop Reverse-Link Power Control Open-loop power control is utilized to establish a connection between a base station and a mobile unit. This applies particularly to the reverse-link access channel and the reverse-link traffic channel before closed-loop power control takes over. Under open-loop power control, the mobile unit estimates the required reverse-link transmit power for the signal to be detected at the base station based on the received power in the forward-link pilot channel. This open-loop estimate for the transmit power is given by a formula similar to the following:

$$P_{Tx} = -P_{Rx} + K + \mathrm{NOM\_PWR} + \mathrm{INIT\_PWR} \qquad (6.6)$$

where $P_{Tx}$ is the mobile transmit power in dBm, $K$ is a so-called turn-around constant, $P_{Rx}$ is the received pilot-channel power in dBm, and NOM_PWR and INIT_PWR are parameters that can be changed by the network operator to adjust the initial power that mobile units throughout the network use to initially access the system. K is typically -73 dB for cellular-band CDMA whereas K is typically -76 dB for PCS-band CDMA.
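As a minimal sketch (the function and parameter names below are ours; the defaults are the typical values quoted above, not operator settings), the open-loop estimate can be computed as:

```python
def open_loop_tx_dbm(p_rx_dbm: float, k_db: float = -73.0,
                     nom_pwr_db: float = 0.0, init_pwr_db: float = 0.0) -> float:
    """Open-loop transmit power estimate per equation (6.6):
    the weaker the received pilot, the stronger the mobile transmits.
    k_db is the turn-around constant (-73 dB cellular, -76 dB PCS)."""
    return -p_rx_dbm + k_db + nom_pwr_db + init_pwr_db

print(open_loop_tx_dbm(-90.0))   # pilot received at -90 dBm -> transmit ~ +17 dBm
```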

Closed-Loop Reverse-Link Power Control (Outer Loop) Each individual call being supported by a base station has a target Eb/N0 set point (ratio of energy per transmitted information bit to the power spectral density of the noise plus interference floor) associated with that call. When the connection is initiated, a default value (Eb/N0)_nom is assigned to this target Eb/N0. Typical default values are on the order of 7 dB for mobile subscribers. The reverse-link Frame Erasure Rate (FER) truly characterizes the absolute quality of the received signal, not the Eb/N0 value. The network operator therefore specifies a desired voice quality by setting a target FER; values of 1-2% are typical. The outer-loop reverse-link power control periodically adjusts the target Eb/N0 set point to achieve the desired FER. If for some period of time the FER is nearly 0%, then this target Eb/N0 value can be decreased slightly. If, on the other hand, the desired FER is being exceeded, then this target Eb/N0 value will be increased slightly to improve the FER. The network operator can typically set absolute minimum and maximum bounds beyond which this Eb/N0 set point cannot be adjusted by the outer-loop power-control algorithm.

Closed-Loop Reverse-Link Power Control (Inner Loop) The channel element within any base station supporting the call essentially continuously monitors the effective Eb∕N0 of the reverse-link signal that it has received from the mobile unit. Every 1.25 ms, it compares the actual measured Eb∕N0 value to the particular target Eb∕N0 set point that has been determined to be required to maintain call quality via the outer-loop reverse-link power control. If the measured Eb∕N0 value is lower than the target Eb∕N0, then a power-up command is issued to the mobile unit for this 1.25-ms interval. If the measured Eb∕N0 value is higher than the target Eb∕N0, then a power-down command is issued to the mobile unit for this 1.25-ms interval. Note that either a power-up or power-down command is issued. There is no corresponding command for no power adjustment of the mobile unit.
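The interplay of the two loops can be sketched as follows; this is a toy illustration only (step sizes, bounds, and the proportional down-step are our assumptions, not IS-95 parameter values):

```python
def inner_loop_command(measured_ebno_db: float, setpoint_db: float) -> float:
    """Every 1.25 ms: always a +1 or -1 dB command, never 'hold'."""
    return +1.0 if measured_ebno_db < setpoint_db else -1.0

def outer_loop_update(setpoint_db: float, frame_erased: bool,
                      target_fer: float = 0.01, up_db: float = 0.5,
                      lo: float = 4.0, hi: float = 10.0) -> float:
    """Every 20 ms frame: raise the Eb/N0 setpoint on an erasure, lower it
    slightly otherwise; the asymmetric steps make the long-run erasure
    rate converge toward target_fer, within operator bounds [lo, hi]."""
    delta = up_db if frame_erased else -up_db * target_fer / (1 - target_fer)
    return min(hi, max(lo, setpoint_db + delta))
```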

6.3 Improvements for cdma2000

We now review the basic functions of IS-2000 CDMA channels, focusing on differences and improvements compared to IS-95. IS-2000 CDMA systems are similar to IS-95 systems, and still use the fundamental three tools of IS-95, but in different ways, and with additional improvements:

Walsh codes: 64-chip orthogonal sequences.

A short code: 2^15 - 1 = 32767 chips long, similar to that of IS-95.

A long code: 2^42 chips long, similar to that of IS-95.

IS-2000 specifies a spread spectrum radio interface that uses Code Division Multiple Access (CDMA) technology to meet the requirements for 3G wireless communication systems. IS-2000 is a family of standards that deal with all aspects of cdma2000 systems:

IS-2000-1: Introduction to cdma2000 Standards for Spread Spectrum Systems
IS-2000-2: Physical Layer Standard for cdma2000 Spread Spectrum Systems
IS-2000-3: Medium Access Control (MAC) Standard for cdma2000 Spread Spectrum Systems
IS-2000-4: Signaling Link Access Control (LAC) Standard for cdma2000 Spread Spectrum Systems
IS-2000-5: Upper Layer (Layer 3) Signaling Standard for cdma2000 Spread Spectrum Systems


In addition, the family includes a standard that specifies analog operation, to support dual-mode mobile stations and base stations (which we shall not discuss here):

IS-2000-6: Analog Signaling Standard for cdma2000 Spread Spectrum Systems

Throughout this document, use of the term cdma2000 or IS-2000 refers to this family of standards.

6.3.1 Forward Link Architecture

The IS-2000 forward link is composed of the following channel types:

F-PICH: pilot channel, similar to the IS-95 pilot for backward compatibility.

F-SYNC: sync channel, similar to IS-95.

F-PCH: paging channel, similar to IS-95, but with extended messages, such as the Extended CDMA Channel List Message (ECLM), which broadcasts the 3G-1X capable carriers to 3G-1X capable mobiles.

F-FCH: fundamental channels, 9.6 kbps, similar to the traffic channel for voice; transmit user data and signaling for data sessions.

F-SCH: supplemental channels, new channels specific to IS-2000 for high-speed packet data (HSPD). Supplemental channels are added to reach higher bit rates.

F-QPCH: quick paging channel, a new IS-2000 channel for battery life conservation; informs the mobile when it needs to monitor the paging channel. F-QPCH is an uncoded, spread, On-Off-Keying (OOK) modulated spread spectrum signal. It increases the standby time of the mobile. The Broadcast Indicator Bit in F-QPCH is set to ON to indicate there is a broadcast page/message on the corresponding forward common signaling channel.


6.3.2 Reverse Link Architecture

The IS-2000 reverse link is fundamentally different from IS-95: it is composed of several orthogonal channels, multiplexed on different Walsh codes (thus resembling a little more the IS-95 forward-link channel structure, including the presence of a reverse pilot channel). The 64-ary modulation by Walsh codes is only used in radio configurations 1 and 2 for 2G backward compatibility, and is not used for the 3G radio configurations 3 and higher.

R-FCH: fundamental channel, used for voice traffic, or for signaling in High-Speed Packet Data (HSPD).

R-SCH: supplemental channel, supports HSPD bursts above 9.6 kbps.

R-PICH: reverse pilot channel, fundamentally new; provides a phase reference for the reverse link, as well as power control information.

R-ACH: access channel, fairly similar to IS-95; used to establish a call.

6.3.3 Power Control

Power control systems and typical algorithms are reviewed for IS-2000, and compared to the IS-95 capabilities. Note that for radio configurations 1 & 2 (RC1 & RC2), no 3G improvements exist, due to backward compatibility. In the remainder of this section, when we examine “3G power control”, it concerns RC3 to RC6. Specific power control considerations for packet data are also briefly presented.

Forward Link Power Control The 2G forward-link power control is based on the mobile station reporting frame quality. For Rate Set 1, the mobile reports frame error information in Power Measurement Report Messages (PMRM) as long as the frame error rate (FER) is below a set threshold. In Rate Set 2, the mobile uses the Erasure Indicator Bit (EIB) to indicate a bad frame; the base may then keep track of FER, and adjust power levels accordingly. The 3G forward-link power control is much improved (for radio configurations 3 to 6), and more closely related to the 2G reverse-link power control: it operates at 800 Hz and involves both the mobile station and the base station. For voice: the mobile reports commands on the reverse pilot channel for the base to power up or down, every 1.25 ms, based on the comparison of estimated Eb/Nt with a set point (inner loop); that set point is updated every frame (20 ms), based on the target FER (outer loop). For data, the scheme is very similar, with the following nuances:


Because of the bursty nature of data, set points and thresholds may be selected differently.

When supplemental channels are used in addition to the fundamental channel, they have to share the power control sub-channel (on the reverse pilot channel), thus slowing down the process.

Minimum and maximum gains depend on the speed of power control.

FER targets vary for voice and for different data speeds.

Reverse Link Power Control The 2G reverse-link power control had an open loop and a closed loop (with an inner and an outer loop). The 3G reverse-link open-loop power control resembles that of 2G (or RC1 & RC2), but requires a few new parameters due to the presence of the reverse pilot channel (R-PICH). The mean output power of the latter is computed from the last access channel power (P(R-ACH)) and a set offset (RLGAIN_ADJ) as follows:

$$P(\text{R-PICH}) = P(\text{R-ACH}) + \mathrm{RLGAIN\_ADJ} \qquad (6.7)$$

Subsequently the mean traffic channel (fundamental channel R-FCH) power is computed by a fixed offset:

$$P(\text{R-FCH}) = P(\text{R-PICH}) + \text{offset} \qquad (6.8)$$

For voice, the 3G reverse-link closed-loop power control is similar to 2G, since it operates with an 800 Hz inner loop and a 50 Hz outer loop, but it is based on the reverse pilot channel, which simplifies the algorithm. Indeed the R-PICH is continuous (not gated), which allows the base station (inner loop) to simply measure the R-PICH Ec/I0 (not the R-FCH Eb/Nt), and to send power up/down commands accordingly. The outer loop sets a frame-by-frame Ec/I0 set point based on the target FER.

For data, the algorithm is similar, but with different set points; data and voice channels use different target FERs, and different initial Eb/Nt set points. Also, unlike the forward link, cdma2000 supports only one inner loop on the reverse link. Consequently, when supplemental channels are used, the mobile applies the inner-loop power control commands to both the R-FCH and R-SCH. Two outer loops still exist, one for the R-FCH and one for the R-SCH, since the two have different set points and target FERs.

6.4 Improvements for UMTS

Although mostly presented as an evolution for GSM operators, UMTS, or its underlying air interface standard wideband CDMA (WCDMA), may be regarded as a dissident evolution of cdmaOne, inasmuch as it resembles the latter much more than a TDMA standard such as GSM.


6.4.1 WCDMA Summary

We therefore present an introduction to WCDMA with an emphasis on differences with the cdmaOne and cdma2000 line of standards.

DS-CDMA: WCDMA, like the other standards seen thus far, uses a direct-sequence CDMA spreading scheme. The spreading sequences, however, differ from those of cdmaOne: long and short codes are replaced by Gold codes.

Chip rate: WCDMA uses a 3.84 Mc/s rate in a 5 MHz channel. Wider bandwidths (10 or 15 MHz) are also allowed.

Frames: WCDMA standardizes 10 ms frames. These frames are well suited for voice since they allow the system to maintain an overall delay much below the 40 ms level at which voice delays become noticeable. In some cases (for high-speed data especially), longer frames gain from better interleaving, which WCDMA remedies by allowing inter-frame interleaving.

Asynchronous mode: Unlike cdma2000, WCDMA does not require all base stations to share a time reference such as the global positioning system (GPS). This is an advantage for indoor deployment, for instance, but is less robust to interference and does not benefit from the years of operational experience that synchronous CDMA systems have.

Orthogonal channels: WCDMA is very similar to cdma2000 in its use of orthogonal codes (orthogonal variable spreading factors - OVSF), which are variable-length Walsh codes. These are used on the forward and reverse links. In particular pilot channels are used for coherent detection in both.

Scrambling codes: In the forward link Gold codes are used as scrambling codes; they provide good autocorrelation and cross-correlation properties, much like offset short codes do in cdmaOne and cdma2000. Gold codes are also used as scrambling codes in the WCDMA reverse link, instead of cdmaOne long codes.

Power control: WCDMA also uses open-loop power control at call establishment, and closed-loop power control with an inner loop and an outer loop on both links. The inner loop works at 1.5 kHz and tries to maintain a signal to interference ratio (SIR) proportional to an Eb/Nt. The outer loop works with a frame quality indicator, between 10 and 100 Hz, and floats the SIR set point to maintain a bit error rate (BER) or block error rate (BLER).

Multi-user detection (MUD): Unlike cdmaOne or cdma2000, WCDMA allows different choices of spreading sequences on the reverse link. A short scrambling code is designed for base station efficiency when MUD is used. When MUD is not used at the base station, a long scrambling code is preferred.

Soft and softer handoff: Much like in cdmaOne, soft and softer handoffs are based on timers and thresholds linked to the forward-link pilot channels involved.

6.4.2 Power Control and Multi-User Detection

As seen above, WCDMA implements fast closed-loop power control much like cdma2000 (slightly faster). Additionally, WCDMA implementations may use some flavour of multi-user detection, which we briefly treat here.

Multi-user detection algorithms may be optimal or suboptimal.

Optimal multi-user detection consists of a maximum likelihood sequence estimator (MLSE); its complexity increases exponentially with the number of users, and therefore suboptimal algorithms are of interest, and often preferred.

Suboptimal detection uses either linear or interference cancellation algorithms.

Suboptimal linear detection simply uses the inverse of the cross-correlation matrix of all signals received; no specific care is taken to remove noise, and relative signal amplitudes do not have to be known. The method is similar to an equalizer.

Suboptimal interference cancellation is an algorithm that uses parallel or successive interference cancellation (SIC), detecting and removing each user's signal, one by one, starting from the strongest.

Multi-user detection may be used in the forward and reverse links. The two schemes are very different in concept. The forward link is synchronous and has very different power levels multiplexed for each mobile (due to their varying distances to the base station). The reverse link, on the other hand, is asynchronous, and the received signals are all power-controlled to reach the receiver at comparable amplitude. Also, the base station must decode all users, whereas the mobile is only interested in one traffic channel and a pilot (one proposal suggests cancelling only that pilot, which must be decoded by the handset anyway). Finally, the computing power available at the base station is much greater than the mobile's.

6.5 Comparison

The items listed above show the close proximity in spirit of WCDMA and cdma2000. We have seen already that the few remaining major differences were not ironed out for legal and business reasons more than for technical incompatibility. The result of having two major standards so close in their fundamentals is a shame in some respects, but it might also be viewed as yet another degree of freedom for certain operators. Indeed operators reluctant to use the US-controlled GPS system see an alternative in asynchronous WCDMA.

Practically, the air interface is only the tip of the iceberg; the underlying network infrastructure is an important part of the standard evolution: cdma2000 manufacturers built their evolution around the same network standard as cdmaOne (IS-41), whereas WCDMA equipment used GSM-MAP.

6.6 3G Evolution Data Optimized

The above CDMA evolutions are part of the third generation of mobile standards (3GPP, 3GPP2); recall that their main requirements were two-fold: to increase voice capacity and to allow higher data rates (higher than a few multiples of 9.6 or 14.4 kbps, that is). Subsequently, however, much higher peak rates in the megabit range were needed, and standards appeared to address that need: standards like 1x-EV-DO (1-channel evolution - data optimized, 1-channel referring to the previously used cdmaOne 1.25 MHz frequency channel), and HSPA (high-speed packet access).

EVDO is a data standard for the cdmaOne-cdma2000 (3GPP2) line of products. It initially optimizes the forward link for high-speed data, for targeted applications such as Web browsing, file transfer, VoIP, push-to-talk, video streaming, and video conferencing.

EVDO is also a CDMA standard: it also works in a 1.25 MHz frequency channel (or multiple 1.25 MHz channels), and is similar in some aspects: it uses Walsh codes, short codes, and long codes. But it is different in several ways: it uses turbo codes and a hybrid ARQ (for early termination of forward-link packets), and it uses adaptive modulation (QPSK to 16QAM).

EVDO has several releases:

Release 0: forward-link (download) peak data rate of 2.4 Mbps (although average advertised download throughput is typically 300 kbps); reverse (upload) peak data rate of 153.6 kbps, which was quickly deemed insufficient.

Revision A: EVDO Rev. A, backward compatible with Rel. 0, enhanced several important aspects, such as a peak downlink of 3.1 Mbps and uplink of 1.8 Mbps, and support for larger packets with multiple users per packet. Other improvements include more efficient peak traffic, access channel, and better QoS mechanisms.

Revision B: EV-DO Rev. B supports yet higher rates, up to 4.9 Mbps downlink, and may combine 3 carriers for a peak rate of 14.7 Mbps (similar to and competing with HSPA). EVDO Rev. B started deployment in late 2008.

Forward Link One of the main fundamental differences of EVDO is in its forward link: orthogonal channels are no longer transmitted simultaneously for multiple users; instead, one user receives the entire set of channels in one given time slot. EVDO is thus a time-division multiplexed (TDM) standard; see figure 6.12.

Figure 6.12: EVDO forward link compared to cdmaOne forward link: the EVDO forward link transmits the entire sector power and all its orthogonal channels for pilot, control channels, or actual user data channels, during a given time slot. IS-95, on the other hand, transmits pilot, paging, synch, and user data simultaneously, over different orthogonal channels.

Similarly, a pilot channel and control channels exist in their own time slots, during which no actual user data is transmitted. This scheme needs no forward-link power control since at any time the entire sector power is dedicated to one given user; instead the modulation varies to adapt to the link condition. EVDO adapts the modulation rate on the forward link from QPSK, to 8PSK, to 16QAM, depending on the wireless channel conditions.

Another novelty is the use of virtual soft handoff, in which a client device in handoff may transmit to several base stations, but on the forward link, only one sector (estimated the best at the time) transmits to the client. This soft handoff is very efficient in providing macro-diversity on the reverse link while optimizing throughput and limiting interference on the forward link.

Reverse Link The EVDO reverse link is much more like cdma2000: it is a CDMA link; it is power controlled for the pilot and MAC channels, with the now familiar closed and open loops; the power of the user data channel varies with data rate. Walsh codes (of length 4, 8, and 16) are used for orthogonal channels, including the pilot (Walsh code 0 of length 16), DRC (code 8 of length 16), ACK (code 4 of length 8), and Data (code 2 of length 4).

6.7 Homework

1. Summarize cdma2000 improvements, a) for voice, b) for data.

2. When does more soft handoff help? (Analyze the forward and reverse links.) When does soft handoff hurt, and what is the risk of limiting the amount of soft handoff?

3. Some vendors do not allow high-speed packet data (in cdma2000 systems) to be in soft handoff; can you think of reasons why? (Or can you think of more reasons to try to convince that vendor to support it?)

4. Decode the bit sequence given in figure 6.13 given that Walsh codes are of length 8, and 4 user bits have been encoded and transmitted as explained earlier in this chapter and shown in figure 6.3. Find the bit sequence and gains for each user, and note which ones are null (i.e. have no data at all).

Figure 6.13: Homework problem: decode this simple example of CDMA coding with 8-bit Walsh codes.

5. Prove that any Walsh code Hn, with n≠0, of any size (or length) p, p ≥ 2, has as many zeros as it has ones.

6. In a convolutional encoder of code rate r = 1/3 and constraint length K = 7, how does the output bit rate compare to the input bit rate?

7. In cdma2000 reverse link, a user stream of 9.6kbps is sent to an encoder r = 1∕4, K = 9, and then through a symbol repetition (2x). What data rate is obtained? What Walsh code length should you use to spread that stream into a typical 1.2288Mcps CDMA channel?


Chapter 7 OFDM

This section provides an introduction to OFDM concepts. It first introduces simple signal transmission concepts and orthogonal subcarrier properties. It then illustrates how standards like Wi-Fi and WiMAX make use of OFDM properties.

7.1 Overview of OFDM

We’ve seen important and popular standards that use direct spreading sequences on user data, thus implementing a CDMA scheme. We now review another technique called Orthogonal Frequency Division Multiplexing (OFDM), which is increasingly popular and adopted by standards like Wi-Fi, WiMAX, and LTE.

7.1.1 OFDM Basics

OFDM techniques consist of splitting a user data stream into several sub-streams, which are sent in parallel on several subcarriers. These sub-streams and subcarriers benefit from a number of properties that we now review in detail.

Recall the classic example of a continuous wave to encode information: the carrier frequency in itself is not capable of encoding information. The quantity of information s(t) is encoded by changes or modulation of the wave, and affects the amount of spectrum required, Δf_c, as shown in figure 7.1.

Figure 7.1: Time-frequency duality.

One can of course use several carriers $f_i$, $i \in \{1,2,\ldots,N_c\}$, and filter them separately. That is a common approach and is used extensively in FDMA systems: in particular, multiple network operators who own licenses over a same area must take care not to exceed allowed levels of adjacent-channel interference into one another's bands.

OFDM improves on the idea by using orthogonal properties of functions to increase spectral efficiency, by choosing a specific interval $\Delta f = f_{i+1} - f_i$ between subcarriers. Multiple parallel signal streams are used: $s_i(t) = e^{j\omega_i t}$ (where $\omega_i = 2\pi f_i$), and in the frequency domain: $S_i(f) = \delta(f - f_i)$.

In fact time signals are limited to a time window, and a user information symbol has a time interval for transmission, $[0, T_s]$, so

$$s_i(t) = u_i\, e^{j\omega_i t} \ \ \text{for } t \in [0, T_s], \quad s_i(t) = 0 \ \text{otherwise} \qquad (7.1)$$

(where $u_i$ is a user information symbol) and the frequency domain representation of the signal is modified from a perfect Dirac function $\delta(f - f_i)$ to a sinc function:

$$S_i(f) = u_i\, T_s\, e^{-j\pi (f - f_i) T_s}\, \frac{\sin\left(\pi (f - f_i) T_s\right)}{\pi (f - f_i) T_s} \qquad (7.2)$$

This last expression is derived from Fourier transform, using definitions from the next section.

Figure 7.2: Orthogonal carrier spacing.

7.1.2 Fourier Transform

Now recall the duality between time domain and frequency domain, with the Fourier transform (and inverse Fourier transform) to switch from one domain to the other. Several definitions exist; let us use the following definition of the Fourier transform:

$$U(f) = \int_{-\infty}^{+\infty} u(t)\, e^{-2\pi j f t}\, dt \qquad (7.3)$$

and inverse Fourier transform:

$$u(t) = \int_{-\infty}^{+\infty} U(f)\, e^{+2\pi j f t}\, df \qquad (7.4)$$

(Other definitions exist, with different signs under the exponent and different 2π factors, so it is important to always specify what definitions are used.) With this definition, the reader can readily derive formula (7.2):

$$S_i(f) = \int_0^{T_s} u_i\, e^{j 2\pi f_i t}\, e^{-j 2\pi f t}\, dt = u_i\, T_s\, e^{-j\pi (f - f_i) T_s}\, \frac{\sin\left(\pi (f - f_i) T_s\right)}{\pi (f - f_i) T_s} \qquad (7.5)$$

7.1.3 Orthogonality

OFDM is a multicarrier modulation in which a user bit stream (of rate $R_u$) is transmitted over $N_c$ subcarriers, each having a symbol rate $R_s = R_u / N_c$, or a symbol duration $T_s = 1/R_s = N_c / R_u$. The advantage of that parallel transmission is that the symbol time may be increased, which mitigates inter-symbol interference.

Each symbol stream is multiplied by a function $\varphi_k$ from a family of orthogonal functions $\{\varphi_k\}$, $k \in \{0,\ldots,N_c - 1\}$. In CDMA, these functions were Walsh codes; in OFDM, they are windowed complex exponentials (or possibly cosine functions):

$$\varphi_k(t) = e^{j\omega_k t} \ \ \text{for } t \in [0, T_s], \quad \varphi_k(t) = 0 \ \text{otherwise} \qquad (7.6)$$

So in a similar manner to the CDMA forward link presented in section 6.1, multiple channels are multiplexed and combined, using exponential functions instead of Walsh code sequences:

$$S_{tot}(t) = \sum_{k=0}^{N_c-1} u_k\, \varphi_k(t) \qquad (7.7)$$


(where $u_i = s_i g_i$; $s_i$ represents the information symbol (+1 or -1), $g_i$ is the individual channel gain) or, for many successive bits $m = 0, 1, 2, \ldots$:

$$S'_{tot}(t) = \sum_{m} \sum_{k=0}^{N_c-1} u_{k,m}\, \varphi_k(t - m T_s) \qquad (7.8)$$

That sequence is manipulated further and sent over the air; on the receiver side, that sequence may be decoded by using the orthogonal properties of $\{\varphi_k\}$:

$$\frac{1}{T_s}\int_0^{T_s} \varphi_k(t)\, \varphi_l^*(t)\, dt = \delta_{k-l} \qquad (7.9)$$

where $\delta_{k-l}$ is the Kronecker symbol (i.e. 1 if $k = l$, 0 otherwise), and the asterisk (*) denotes the complex conjugate of the expression. Equations (7.6) and (7.9) give us the orthogonality condition for subcarrier spacing: the integral equals zero if and only if $(\omega_k - \omega_l) T_s = 2\pi n$, for $n$ a nonzero integer. This leads to the condition $\Delta f = n/T_s$, and $1/T_s$ is the smallest separation between two subcarriers.
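The orthogonality condition is easy to check numerically; the following sketch approximates the integral of equation (7.9) with a Riemann sum for a few subcarrier pairs spaced by multiples of 1/T_s:

```python
import numpy as np

Ts = 1.0                                  # symbol time (arbitrary units)
t = np.linspace(0.0, Ts, 10_000, endpoint=False)
dt = Ts / t.size

def phi(k):
    """Windowed complex exponential with f_k = k / Ts (equation 7.6)."""
    return np.exp(2j * np.pi * (k / Ts) * t)

for k, l in [(3, 3), (3, 4), (3, 7)]:
    inner = np.sum(phi(k) * np.conj(phi(l))) * dt / Ts   # (1/Ts) * integral
    print(k, l, round(abs(inner), 6))     # ~1 when k == l, ~0 otherwise
```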

On the receiving side, the sequence may be decoded by simply integrating for each channel; for channel k, the information bit is retrieved from the sign of the integral:

$$\hat{s}_k = \mathrm{sign}\left(\int_0^{T_s} S_{tot}(t)\, \varphi_k^*(t)\, dt\right) \qquad (7.10)$$

In practice, however, an additional trick is used: direct and inverse Fourier transforms perform the decoding.

7.1.4 Discrete Fourier Transform

If we then examine the Fourier transform of our functions given in equation (7.6), we obtain a sinc function of pseudo-period Ts, which means that in the frequency domain subcarriers are spaced exactly such that the peak of the next one corresponds to the previous one’s first zero – see figure 7.3.


Figure 7.3: OFDM subcarrier spacing.

The overall envelope looks a bit like a spread spectrum signal, and may be tapered further to reduce out of band spectral power density.

The above choice of orthogonal basis functions has another useful property, relating to the Fourier transform. Indeed sampling $S_{tot}$ turns the above expressions into the usual discrete Fourier transform (DFT); therefore, instead of multiplying, summing, and then integrating for decoding, OFDM allows one to simply carry out a DFT and its inverse (IDFT), which are very efficient operations.

Looking back at the Fourier transform (7.3), and sampling the time function and its Fourier transform (with $N$ samples), one may define the following notations: $u_k = u(t_k)$, $t_k = k\tau$, $k \in \{0,1,\ldots,N-1\}$, and $U_n = U(f_n)$, $f_n = \frac{n}{N\tau}$, $n \in \{-N/2,\ldots,N/2\}$. One then obtains the discrete Fourier transform

$$U_n = \sum_{k=0}^{N-1} u_k\, e^{-2\pi j k n / N} \qquad (7.11)$$

and the inverse discrete Fourier transform

$$u_k = \frac{1}{N} \sum_{n} U_n\, e^{+2\pi j k n / N} \qquad (7.12)$$

Now comparing these discrete transforms to $S'_{tot}$ above with the particular OFDM orthogonal functions, one sees that the $s_{k,m} \cdot g_{k,m}$ coefficients are Fourier transforms of the complex amplitudes of the subcarriers.

Consequently, encoding and decoding of an OFDM signal are in practice not done with an integration like (7.10), but by simple FFT. The transmitter builds $\{S(\omega_n)\} = \mathrm{DFT}\{s_{k,m} \cdot g_{k,m}\}$, and the receiver decodes the received spectral signal: $S_{tot} = \mathrm{IDFT}\{S(\omega_n)\}$.


If sampling is made as a power of 2 ($N = 2^p$), DFT (and IDFT) algorithms run in $O(N \log_2 N)$; they are then referred to as fast Fourier transforms (FFT) and are very efficient to implement. OFDM schemes are therefore based on the nearest power of two, and when fewer subcarriers are used, the same order $N = 2^p$ is kept for the FFT algorithms, but with zero entries for a subset $N_z = N - N_c$.
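A minimal round-trip sketch illustrates the idea (here the inverse transform sits at the transmitter, the conventional arrangement; with the sign conventions of (7.3) the roles of DFT and IDFT may be swapped):

```python
import numpy as np

N = 64                                    # FFT size = number of subcarriers
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, +1.0], N)     # one BPSK symbol per subcarrier

tx = np.fft.ifft(symbols)                 # transmitter: subcarriers -> time samples
# ... (ideal channel) ...
rx = np.fft.fft(tx)                       # receiver: one FFT recovers all subcarrier
                                          # amplitudes, replacing integration (7.10)
assert np.allclose(rx.real, symbols)
```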

7.1.5 Number of Subcarriers

Note that increasing the number of subcarriers in a given band of spectrum does not increase capacity, but it provides a useful parameter to optimize: there is an interesting tradeoff between the number of subcarriers $N_c$ and the subcarrier symbol time $T_s$. The more subcarriers are used, the longer their symbol time is, which means that the overall rate of information remains the same, but a longer symbol time is useful for multipath mitigation (recall the conditions under which equalizers are required). Consequently subcarrier spacing is a fundamental parameter to choose for an OFDM standard like Wi-Fi or WiMAX.

7.1.6 Other Usual Techniques

A few more standard techniques are used in combination with the above OFDM definition in practical radio systems. [89]

Guard time limits inter-symbol interference (ISI): added guard time allows for larger delay spread and limits multipath interference from one symbol to the next.

Cyclic prefix limits inter-carrier interference (ICI): by transmitting a cyclic replica of the signal as a cyclic prefix, frequency orthogonality is improved between carriers.

Data scrambling, FEC encoding, interleaving, puncturing, even MIMO are also typically used as in other modern radio systems.

OFDM systems are therefore well suited to resolve rich multipath situations and slowly time-varying channels, which explains their popularity for standards like Wi-Fi. They are, however, sensitive to Doppler shift and phase noise.

7.2 Overview of Wi-Fi

Wi-Fi is a standard for interoperable equipment, certified by the Wi-Fi alliance, and based on various iterations of IEEE 802.11, which uses OFDM for its highest throughput profiles. Wi-Fi has been the most successful local area network standard, and it is worth spending some time examining some of its OFDM parameters.

Details of the 802.11 air interface can be found in a number of references, and recent books have good overviews of the latest efforts [90], including good overviews of 802.11n [91]. We only examine here some aspects of 802.11 as they relate to OFDM, in order to provide some insight on performance goals and limitations.


7.2.1 802.11a/g

General Parameters: 802.11a/g use an N = 64 point FFT in a 20 MHz channel, with subcarrier spacing δf = 1/Ts = 312.5 kHz and symbol time Ts = 3.2 μs. A 4 μs time block is used, with a cyclic prefix to lower ISI. 52 of the 64 subcarriers are populated: 4 are pilots (for phase and frequency training and tracking), and 48 actually carry data.

The 802.11g packet structure includes the following:

Short training field (STF): 8 μs; a 10x repetition of a 0.8 μs symbol. Uses 12 subcarriers; it has good autocorrelation properties and a low peak-to-average ratio, and is also used for automatic gain control (AGC).

Long training field (LTF): 8 μs, composed of two 3.2 μs training symbols prepended by a 1.6 μs cyclic prefix. Used for time acquisition and channel estimation.

Signal field (SIG): 4 μs (3.2 + 0.8 cyclic prefix), contains 24 BPSK bits describing transmit rate, modulation, coding, and length. Together with the training fields it forms the preamble (totaling 20 μs).

Data field: includes the service field (16 bits: 7 used to synchronize the descrambler, 9 reserved for future use), data bits, tail bits, and optionally padding bits. The data field consists of a stream of symbols, each 4 μs (3.2 + 0.8 cyclic prefix), transmitted over 48 data subcarriers and 4 pilots.

7.2.2 802.11n

802.11n is a high-throughput amendment to 802.11 containing improvements over 802.11a/g. So what exactly is improved in this high-throughput amendment? We review its major improvements in the physical and MAC layers.

Physical layer

A number of improvements in the physical (PHY) layer were designed to increase throughput in some situations, although these improvements may come at a cost, which will be pointed out.


Modulation   bits/Hz   bits/subcarrier   bits/symbol (48 subcx)   11g rate   11n 40 MHz   40 MHz 4x4 MIMO
BPSK 1/2        1          1/2                  24                    6          13.5          54
BPSK 3/4        1          3/4                  36                    9           -             -
QPSK 1/2        2          1                    48                   12          27           108
QPSK 3/4        2          1.5                  72                   18          40.5         162
16QAM 1/2       4          2                    96                   24          54           216
16QAM 3/4       4          3                   144                   36          81           324
64QAM 2/3       6          4                   192                   48         108           432
64QAM 3/4       6          4.5                 216                   54         121.5         486
64QAM 5/6       6          5                    -                    -          135           540

Table 7.1: Throughput rates (in Mbps) for 802.11a/g/n calculated for different modulations, given the number of data subcarriers used (48 for 11g, 52 for 11n, 108 in 40 MHz) and the symbol time of 3.2 μs, hence a data rate per subcarrier of 312.5 kbps × 3.2/4 = 250 kbps, because only 3.2 of every 4 μs carry actual data.
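The rates in table 7.1 follow directly from these parameters; a small sketch:

```python
SUBCARRIER_RATE = 312.5e3 * 3.2 / 4.0     # = 250 kbps per data subcarrier

def phy_rate_mbps(bits_per_symbol: int, code_rate: float,
                  n_subcarriers: int) -> float:
    return bits_per_symbol * code_rate * n_subcarriers * SUBCARRIER_RATE / 1e6

print(phy_rate_mbps(6, 3/4, 48))    # 54.0  -> 802.11g top rate (64QAM 3/4)
print(phy_rate_mbps(6, 5/6, 52))    # 65.0  -> 802.11n, 20 MHz
print(phy_rate_mbps(6, 5/6, 108))   # 135.0 -> 802.11n, 40 MHz
```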

OFDM carriers: 48 data subcarriers (+4 pilots) for 11g; 52 (+4 pilots) for 11n; and 108 (+6 pilots) for 40 MHz operations. That increase in data subcarriers brings the maximum throughput up from 54 Mbps to 58.5 Mbps. Tradeoff: higher cost of mitigating interference in adjacent channels.

FEC: The maximum FEC rate is increased from 3/4 to 5/6, hence reaching a maximum data rate of 65 Mbps, or 135 Mbps in 40 MHz channels. Tradeoff: more errors may occur, in which case the system can revert to a lower modulation.

Guard interval: The guard interval, or cyclic prefix, may be shortened from 0.8 μs to 0.4 μs, thus increasing the actual data rate to 72.2 Mbps, or 150 Mbps in 40 MHz. Tradeoff: more ISI.

Multiple spatial streams: MIMO offers 2, 3, or even 4 times the above rates, reaching 270/300 and 540/600 Mbps rates. Tradeoff: system complexity and cost.

Greenfield preamble: The 802.11n preamble is modified for higher throughput, adding a high-throughput training field (a series of 4 μs fields). A legacy mode appends this new preamble to the 11g preamble for backward compatibility, whereas the shorter “greenfield” 802.11n-only preamble increases throughput by 10 to 15 percent, at the cost of backward compatibility.

Low density parity check (LDPC): increases FEC efficiency and throughput in some cases.

MAC layer

The media access control (MAC) layer deals with multiple element addressing, channel access prioritization, and control. It transmits among other things beacons with regulatory and management information (such as country code, allowed channels, max power), and scans channels for beacons. Scanning is usually done passively but when regulations allow it, active probe requests can be sent for specific SSIDs or BSSIDs. MAC improvements for 802.11n include:

Fragmentation: large frames may see considerable channel variations over time (especially in poor conditions, thus at low bit rates). Consequently frames can be fragmented, and only an erred fragment needs to be resent. MAC service data units (MSDU) above a certain settable threshold are broken into several fragments sent over different MAC protocol data units (MPDU).

Aggregation: Aggregated MSDU or MPDU: A-MSDU is an efficient MAC frame format that aggregates multiple MSDUs in a single MPDU, whose maximum size is extended to 4 KBytes and optionally 8 KBytes. A-MPDU is another form of aggregation that aggregates multiple MPDUs in a single PHY frame, whose maximum size is extended to 64 KBytes.

Enhanced Block Ack: a new scheme in which the sender requests to enter a block acknowledgement (BA) request session, in which BAs are requested periodically instead of having ongoing BAs.

Optional features: Other optional features are standardized as part of the 802.11n MAC:

a reduced inter-frame space (RIFS < SIFS) during data bursts improves burst efficiency,

a reverse direction protocol allows clients to let peers use potentially unused transmit slots,

fast link adaptation optimizes throughput vs. loss in fast-changing channel conditions,

transmit beamforming (TxBF) control,

power save multi-poll (PSMP): a new channel scheduling scheme with very few idle mode transmissions, for handheld devices to save power.

7.3 Overview of WiMAX

IEEE 802.16 is a standard for wide area wireless networks [92]. The group focused from the beginning on important service provider requirements for service reliability. Consequently 802.16 standardizes important features such as quality of service (QoS), security, and flexible and scalable operations in many RF bands. WiMAX goes one step further and narrows down some implementation choices of 802.16 in order to achieve interoperation between equipment manufacturers. WiMAX still standardizes several air interfaces and several profiles in different frequency bands. Of course, performance varies with frequency, channel bandwidth, and other profile characteristics; and conformance between products and suppliers exists only within a given profile. [93]

7.3.1 Fixed and Mobile

Two very different families of WiMAX systems exist and should be treated separately: fixed and mobile WiMAX. In addition, a regional initiative, WiBro, which resembles mobile WiMAX, has been standardized in Korea.

Fixed WiMAX is a reliable and efficient air interface, based on 802.16-2004 [94], used for fixed broadband access. Several profiles exist for fixed WiMAX, including different bandwidths, carrier frequencies, and duplexing schemes. Its air interface is based on Orthogonal Frequency Division Multiplexing (OFDM), and access between multiple users within a sector is managed by time-division multiple access (TDMA). While equipment has been available since 2004, major milestones were achieved in 2005 when suppliers demonstrated successful inter-vendor operations. Conformance testing [95] led to the first WiMAX equipment being certified in January 2006.

Fixed WiMAX profiles at 3.5 MHz (TDD and FDD) in the 3.5 GHz band were the first to be certified and will be examined in this chapter; 10 MHz TDD channels at 5.8 GHz are another important profile for use in unlicensed bands worldwide.


Mobile WiMAX is an extension of the above that includes a new standard for mobility: 802.16e-2005 [96]. Mobile operations require more complexity in the air interface and in the network architecture. Therefore, mobile WiMAX defines a different standard with considerations such as location registers, paging, handoff, battery saving modes, and other network functions to manage mobility. Its air interface is based on Orthogonal Frequency Division Multiple Access (OFDMA).

Release-1 Mobile WiMAX profiles cover 5, 7, 8.75, and 10 MHz channel bandwidths for operations in the 2.3 GHz, 2.5 GHz, 3.3 GHz, and 3.5 GHz frequency bands. Plugfests showing interoperability between suppliers started in September 2006.

WiBro is a Korean initiative for Wireless Broadband. Similar in many ways to mobile WiMAX, WiBro includes mobility and handoff, and has been commercially available in Korea since mid-2006.

WiBro operates in 10 MHz TDD channels at 2.3 GHz, and uses OFDMA. It targets mobile usage up to 60 mph.

7.3.2 OFDM Fixed WiMAX

Although the standards community is focusing on mobile WiMAX, fixed WiMAX applications still have a small role to play, especially in less dense areas. Small and large service providers worldwide have conducted over 200 fixed WiMAX trials, and analysts once estimated some growth potential for the fixed wireless market. All in all, fixed wireless access usually remains a fairly small-scale offering, led by small carriers, and does not achieve the order of magnitude of mobile wireless carriers. (Recall figure 1.2 from chapter 1.)

Fixed WiMAX is based on the 802.16d standard and has the following properties:

OFDM air interface with 256 subcarriers in 1.75 or 3.5 MHz channels, TDMA

TDD, FDD, or hybrid FDD (H-FDD) operations

Adaptive modulation (BPSK 1/2 to 64QAM 3/4)

Power control: uplink open-loop and closed-loop (up to 30 dB/second)

Forward Error Correction (concatenated Reed-Solomon convolutional coding, Convolutional Turbo Encoding, or Block Turbo Encoding)

Puncturing for rate variability

Optional STS transmit diversity

802.16d flexible channel bandwidth from 1.75 MHz to 20 MHz, but only a subset (1.75, 3.5 MHz) for WiMAX profiles

MAC supports Automatic Repeat Request (ARQ) to remedy imperfect link adaptation; BW allocated differently for different physical modes; contention based

Standard QoS profiles (real time to best effort)

Mesh mode: optional topology for license-exempt operation

Product interoperability and conformance certified at 3.5 GHz [97]

7.3.3 OFDMA Mobile WiMAX

OFDM is primarily used for fixed access. For mobility WiMAX uses a method for providing multiple user access in different simultaneous OFDM subchannels. This Orthogonal Frequency Division Multiple Access (OFDMA) is the true focus of 802.16 and WiMAX standards. Figure 7.4 shows how groups of subcarriers form subchannels, which are allocated to different users (as well as pilot and control channels). [98] [99]

Mobile WiMAX is based on the 802.16e standard and has the following properties:

802.16e OFDMA air interface with a scalable number of subcarriers (128 to 2048) in different channel widths

TDD, FDD, or hybrid FDD (H-FDD) operations

Adaptive modulation (BPSK 1/2 to 64QAM 3/4)

Power control: uplink open-loop and closed-loop (up to 30 dB/second)

Forward Error Correction (concatenated Reed-Solomon convolutional coding, Convolutional Turbo Encoding, or Block Turbo Encoding)

Puncturing for rate variability

Optional Adaptive Antenna Systems (AAS) for MIMO diversity

802.16e flexible channel bandwidth from 1.75 MHz to 20 MHz, but only a subset for WiMAX profiles (the Release-1 profiles listed above)

MAC supports Automatic Repeat Request (ARQ) to remedy imperfect link adaptation; BW allocated differently for different physical modes; contention based

Elaborate QoS profiles

Handoff and soft handoff for mobility

Mesh mode: optional topology in the works

Product interoperability and conformance efforts at 2.3-2.5 GHz [97]

WiMAX frame structures are very flexible in terms of use of subcarriers, which can be allocated to different subscriber units according to their needs (figure 7.5).

Figure 7.4: OFDMA subcarriers, as used by WiMAX: at a given time certain subgroups of subcarriers are dedicated to specific subscribers.

Figure 7.5: WiMAX subframes are very flexible in allocating subcarriers to different subscribers according to demand.

Further, the number of subcarriers is used as a means to establish frequency reuse schemes. Recall from §2.1.1 that the reuse factor has a strong impact on spectrum efficiency, and that one of the strengths of CDMA is to allow a reuse factor of one, whereas FDMA or TDMA schemes needed higher, less efficient reuse factors. Mobile WiMAX and OFDMA use a neat trick called fractional reuse to optimize spectrum in different areas. The concept is simple: use all subcarriers near the center of the cell (full use of subcarriers, or FUSC), but only make partial use of subcarriers (PUSC) in areas where they would interfere.


Figure 7.6: Fractional reuse of subcarriers: some areas only use subgroups of subcarriers (F1, F2, or F3) to avoid interference where they overlap. Areas near the center can make full use of all subcarriers.

Further work in 802.16m will provide the 4G evolution in a backward-compatible way (including MIMO and OFDMA); 4G improvements are inserted in reserved fields that can be ignored by legacy 802.16e gear, but utilized by future 802.16m equipment.

7.4 Overview of LTE

As seen previously, fourth generation standards all use some kind of OFDM scheme. Once again, not all industry standard groups are willing to converge to one unique standard. In addition to WiMAX, at least two other standards remain in the picture: UMB and LTE (the two remaining camps from the 3G evolution seen in §2.5).

The goal of LTE is to provide 3GPP with further evolutions and maintain its competitiveness. The current architecture used by cellular technologies is the UMTS terrestrial radio access network (UTRAN), with HSPA for high-speed data transmissions. LTE aims at enhancing this architecture by using the evolved UTRAN architecture, thus simplifying the current UTRAN and supporting multiple vendors. LTE proposes to use an IP-based backhaul network capable of supporting high quality of service. LTE aims at flexible and efficient use of current frequency bands as well as of new frequency bands, at reducing the cost and time of transporting bits over the network, at increasing uplink and downlink speeds, and at optimizing power consumption at the terminal nodes. It also aims at removing single points of failure from the network.

As seen in section 2.6, UMB is losing momentum, but LTE is likely to be a successful 4G standard and is worthy of some attention. Its air interface, like that of other 4G standards, revolves around OFDMA, but with a few differences, which are reviewed below.

MIMO is used either to enhance data rates or to increase redundancy (diversity and MRC). The other usual tools are used as well: convolutional and turbo codes, and adaptive modulation (QPSK, 16QAM, 64QAM). LTE offers a flexible range of channel bandwidths (1.25, 5, 10, or 20 MHz), which is well adapted to the current cellular and PCS bands; WiMAX channel widths are similar (3.5, 7, 14, and 20 MHz) but reflect more recent or eclectic band plans.

On the downlink, OFDMA is used, but the uplink departs from the usual OFDMA approach (which is somewhat unfortunate, since it may make impossible any further attempt at convergence between 4G standards). The LTE uplink uses single carrier FDMA (SC-FDMA), a single-carrier scheme relying on frequency domain equalization (FDE).


In SC-FDMA, a bit stream is converted into single carrier symbols, then a discrete Fourier transform (DFT) is applied to it; subcarriers are mapped to these DFT tones, and an inverse DFT (IDFT) is performed to convert back for transmission. Much like in OFDMA, the signal has a cyclic prefix to limit ICI, and pulse shaping is used to limit ISI. The reasons for preferring SC-FDMA over OFDMA are mainly that transmitting mobile units have strict limitations on transmit power, and that peak-to-average power ratios (PAPR) are high for OFDMA. [103]
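The PAPR argument is easy to check numerically. The following sketch (tone counts are illustrative, not an LTE parameter set) maps the same QPSK symbols both ways, directly onto subcarriers for OFDMA and through a DFT precoder for SC-FDMA, and compares peak-to-average power ratios over many symbols:

import numpy as np

rng = np.random.default_rng(0)
N_DATA, N_FFT, N_SYM = 64, 512, 200   # occupied tones, IFFT size, symbols

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

ofdma, scfdma = [], []
for _ in range(N_SYM):
    d = (rng.choice([-1, 1], N_DATA) + 1j * rng.choice([-1, 1], N_DATA)) / np.sqrt(2)

    X = np.zeros(N_FFT, complex)
    X[:N_DATA] = d                                 # OFDMA: symbols straight onto tones
    ofdma.append(papr_db(np.fft.ifft(X)))

    X = np.zeros(N_FFT, complex)
    X[:N_DATA] = np.fft.fft(d) / np.sqrt(N_DATA)   # SC-FDMA: DFT precoding first
    scfdma.append(papr_db(np.fft.ifft(X)))

print(f"mean PAPR  OFDMA: {np.mean(ofdma):.1f} dB   SC-FDMA: {np.mean(scfdma):.1f} dB")
# SC-FDMA comes out a few dB lower, easing the mobile's amplifier requirements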

LTE has a simple frame structure for the downlink and uplink. For FDD, 10 ms frames are used, divided into 20 sub-frames (of 0.5 ms each). Each sub-frame uses 7 OFDM symbols and a cyclic prefix. There are three downlink channels in the physical layer: shared, control, and common control. And there are two uplink channels: the shared and the control channel. Modulation techniques used for uplink and downlink are QPSK, 16QAM, and 64QAM, while the broadcast channel uses only QPSK.

7.5 Fixed wireless high-speed access

As mentioned earlier, mobility is the main focus in the wireless industry. Nevertheless, a few important applications require fixed broadband standards. Rural access solutions, for instance, are driven by the need to reduce the costs associated with network infrastructure. In many cases true mobility (at vehicular speed) is not necessary, and service availability at multiple locations offers most of the customer value without too great a price to pay (such as complex handover schemes).

Wireless local area networks (WLAN) are another example of convenient service without complex mobility needs, and have seen enormous success in providing cheap and flexible high-speed Internet access for many applications like residential or business “hot spots”, warehousing, retail, airports, hotels, etc.

The industry has been active on parallel technologies for broadband access: Wi-Fi has become the preferred technology for LAN’s and is expanding to larger WAN’s; 3G evolutions to HSDPA and EV-DO are gaining momentum; and finally WiMAX products are coming along.

Wi-Fi:
Providers like Boingo and Wayport (now an AT&T company) provide Wi-Fi access throughout the US and other countries, with multiple agreements similar to cellular roaming agreements.

Wi-Fi access points are typically fairly cheap devices, but some efforts are attempting to widen coverage to large campuses, cities, and mobile applications; added range and mobility, however, are neither inexpensive nor easy. The question is whether these initiatives are sustainable and profitable. Wi-Fi allows for fairly cheap roll-out in unlicensed spectrum, but has mobility limits, fairly weak link budgets, and some operational concerns. (See section 8.2.)

After a hyped beginning, some major initiatives struggle: Wi-Fi networks in Philadelphia and Taipei show slow growth and expensive operations. Albuquerque has tried several initiatives and, years and millions of dollars later, still has no coverage;5 Portland's network overpromised and disappoints in coverage and capacity.6

Engineering, operating, and profitably building a Wi-Fi network is far from easy: service providers must be well prepared for RF coverage uncertainties and complaints, outages, problems, upgrades, and other operational difficulties, which are worse in uncontrolled, unlicensed spectrum.

3G evolution:
3G migration from mobile standards towards HSDPA or EV-DO mitigates some of the unlicensed uncertainties, and provides proven cellular techniques that evolve to higher data rates.

Link budgets are well known; operational networks provide known coverage and good mobility. Major drawbacks may reside in the heavy upfront network investments, expensive infrastructure costs, and client devices.

WiMAX:
Another family of standards built as an attempt to take the best of both worlds: starting with cheap fixed LAN/WAN profiles, and adding important carrier requirements (QoS, range, mobility).

Built on new, efficient, IEEE-standard radios, WiMAX adapts to a wide range of frequencies. Carriers have great expectations for ubiquitous, high-speed, mobile coverage. WiMAX is a good standard that holds promises of economies of scale (like Wi-Fi), but has risks linked to expensive devices and expensive spectrum (like 3G).

Towards 4G:
All of the above are struggling to provide what is typically understood to be fourth generation wireless services: a flexible, ubiquitous, broadband family of services. Is 4G an evolution, an alternative, or a supplement to 3G, or something more?

In terms of capacity, typical requirements for 4G were agreed upon worldwide at the ITU and focus on 100 Mbps full mobile service and 1 Gbps fixed service [5]; to that end, air interfaces focus on OFDMA, channel flexibility (wider for more capacity), and MIMO. But beyond this effort, other aspects are important for 4G, mostly around flexible QoS and flexible support for heterogeneous services (data, voice, video, multimedia). 4G is therefore much more than an air interface; it relates to the more complex set of aspects of an end-to-end service and its underlying network. (It often relates to IP-centric networks, and perhaps IMS.)

Figure 7.7: A comparison table between various OFDM standards is a good starting point for comparison between standards; it allows one to clearly outline the advantages of certain standards.

7.6 Homework


1. Derive equation (7.2) using the definition of the Fourier transform in §7.1.2.

2. An 802.11a system uses Nc = 48 subcarriers for data and 4 more for pilots.

a. What is the nearest power of two N = 2^p?
b. 802.11a uses 20 MHz channels; what is Δf between subcarriers in such a channel?
c. What is each user's information bit rate? (Assume a BPSK modulation, i.e. only one bit transmitted per symbol.)
d. Compare to the spectral efficiency of WCDMA, where 3.84 Mc/s are transmitted in 5 MHz. (Again assume 1 chip is 1 symbol for that comparison.)

Other non-standard solutions are becoming popular, such as one by Flarion (now owned by Qualcomm). The proposal was the basis of work for another IEEE group to be created: 802.20. The proposal initially used 113 subcarriers, 17 of which are used for pilots. (The next four questions refer to this solution.)

e. What is the nearest power of two N = 2^p?
f. This uses 1.25 MHz channels (convenient for current CDMA providers). What is Δf between subcarriers in such a channel?
g. What is each user's information bit rate? (Assume a BPSK modulation, i.e. only one bit transmitted per symbol.)
h. Compare to the spectral efficiency of 802.11a above.

3. Besides spectral efficiency, what advantages and disadvantages can you think of between the solutions presented in the previous problem? (Hint: think of fading characteristics and §7.1.5.)

4. The following 14-bit sequence 00001111001101 is to be encoded on an OFDM system. Represent each bit by a BPSK symbol, ±1. Ignore any pilot signal, i.e. every subcarrier is for data transmission.

a. Implement a system within 1 MHz of spectrum bandwidth. Specify how many subcarriers are used and their frequency separation.
b. Compute complex coefficients for each subcarrier by FFT. (Zero out the Nz trailing ports.) Use Matlab for instance, and compute the FFT.
c. Show the approximate spectral shape, i.e. the modulus of the sum of all subcarriers with their associated coefficients. (Use Matlab, or any other graphic software, or approximate by hand drawing; in any case, show details of your method.)
d. Truncate the result to the first 14 bits, again fill in the remaining bits by zero, compute the IFFT, and explain how to retrieve data from the original bit stream.

5. We consider a Wi-Fi 802.11g system where approximately 64 subcarriers are used in a 20 MHz channel.

a. Assuming that symbol periods must be greater than 10 times the delay spread (Ts > 10στ), what is the maximum delay spread in which this system performs well? (For simplicity, ignore guard bands, cyclic prefix, etc., and assume that the entire symbol duration is for user data.)
b. What happens if the delay spread is much greater?
c. Searching for typical delay spreads in various sources, is Wi-Fi subcarrier spacing adequate for most indoor environments?
d. We now consider a WiMAX 802.16d system with 256 subcarriers over a 3.5 MHz channel. Searching again for typical delay spreads in various sources, in what environment would this system be appropriate? (Indoors? In rural areas? In major cities?)
e. 802.16e now standardizes 512 subcarriers for 3.5 MHz channels. In what environment might this be an improvement?
f. Explain why WiMAX is better suited than Wi-Fi access points for providing wireless access throughout a city.

6. A city is interested in a system providing coverage for its citizens city-wide.

a. Suggest a few state-of-the-art wireless systems for the city's consideration and justify your choices.
b. Make a table showing advantages of these technologies for a carrier to provide extensive wireless coverage for a city. (Include at least considerations around: spectrum, cell sizes (consider power allowed, propagation, and delay spread), indoor service availability, and mobility (Doppler, handoff).)
c. The city is also interested in having its police, fire department, and other first response emergency services communicate on that system. Are there any additional important arguments to consider for this type of use? Would they preclude any of your above systems, and why?

Chapter 8 Wi-Fi System Performance

Wi-Fi systems and services are evolving from simple stand-alone access points (AP) to complex networks. It may be interesting to review some initial technology evaluations, which showed promising data rates, and examine some extensions to larger systems.

8.1 Data Rates

Wi-Fi systems rely on IEEE 802.11a, b, g, and more recently n. Early systems used frequency hopping and direct sequence spread spectrum, but most systems now focus on the OFDM physical layer (802.11a at 5 GHz and 802.11g at 2.4 GHz). Their data rate is determined by the type of modulation and coding: from BPSK 1/2 to 64QAM 3/4 (64QAM 5/6 for 11n). Physical data rates for 802.11a/g are quoted up to 54 Mbps, but maximum payload or user data throughput cannot exceed 36 Mbps, and actual measured throughput varies with suppliers (up to 30 Mbps); interoperability between suppliers introduces even greater variations.
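These PHY rates follow directly from the 802.11a/g OFDM parameters: 48 data subcarriers and a 4 μs symbol (3.2 μs useful time plus 0.8 μs guard interval). A quick check in Python:

# 802.11a/g PHY rate = subcarriers x coded bits x code rate / symbol time
T_SYM = 4e-6          # OFDM symbol duration, seconds (3.2 us + 0.8 us guard)
N_DATA = 48           # data subcarriers

modes = [("BPSK", 1, 1/2), ("BPSK", 1, 3/4), ("QPSK", 2, 1/2), ("QPSK", 2, 3/4),
         ("16QAM", 4, 1/2), ("16QAM", 4, 3/4), ("64QAM", 6, 2/3), ("64QAM", 6, 3/4)]

for name, bits, rate in modes:
    mbps = N_DATA * bits * rate / T_SYM / 1e6
    print(f"{name} {rate:.2f}: {mbps:.0f} Mbps")   # 6, 9, 12, 18, 24, 36, 48, 54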

Table 8.1: Wi-Fi maximum rates: theoretical and actual measured throughput in an indoor lab, with controlled environment and low interference, for 802.11g, 20 MHz channels at 2.4 GHz. (Actual rates are an average over different brands of access points and client devices.)

Modulation   20 MHz sensitivity (dBm)   SNR (dB)   Phy. rate (Mbps)   Max payload (Mbps)   Actual (Mbps)
BPSK 1/2     -90.6                      6.4        6                  5.2                  2.8
BPSK 3/4     -88.6                      8.5        9                  8.1                  4.3
QPSK 1/2     -87.6                      9.4        12                 10.6                 6.0
QPSK 3/4     -85.8                      11.2       18                 16.0                 9.0
16QAM 1/2    -80.6                      16.4       24                 19.0                 11.8
16QAM 3/4    -78.8                      18.2       36                 25.9                 18.1
64QAM 2/3    -74.3                      22.7       48                 31.6                 24.0
64QAM 3/4    -72.6                      24.4       54                 36.2                 26.8

Table 8.1 summarizes typical data rates observed in a 20 MHz TDD 802.11g channel. These results were measured with low interference and all client devices near the AP and within the same room, leading to system operation at maximum modulation. Perfect conditions allow throughputs between 25 and 30 Mbps in some cases, but in most conditions network engineers should keep in mind throughput ranges such as those of table 8.1 and figure 8.2, as opposed to the often quoted 54 Mbps.

A recent addendum to the standard, 802.11n, increases throughput from the 54 Mbps PHY rate to 150, 300, and even 600 Mbps by various new schemes, such as higher coding rates, 40 MHz channel width, and MIMO. Here again, actual throughputs are lower (of the order of half the PHY rate).

Figure 8.1: Wi-Fi throughput measured for a variety of access points and client cards at close indoor range, for an increasing number of client PC's.

Figure 8.2: Wi-Fi one-way delay measured for a variety of access points and client cards at close indoor range, for an increasing number of client PC's per AP. (Chart shows average delay, and error bars represent a standard deviation in each direction.)

8.2 Municipal Wi-Fi

Municipalities have tried in recent years to deploy cheap wireless networks, free to use for their citizens. The common opinion is that Wi-Fi networks would be cheaper than other mobile wireless products. That opinion relies mainly on two arguments: 1) spectrum is free, and 2) Wi-Fi client devices are essentially free since they are part of laptop computers. Other arguments include 3) access points are cheap (when compared to cellular equipment), and 4) they are easier and cheaper to install (when compared to cellular towers).

The first two facts are a great advantage for Wi-Fi, since spectrum and client devices are an important part of the cost of providing service to the general population; the other two, however, hide more subtle aspects of wireless systems operations. [104]

8.2.1 Case study: Longmont

Let us look closely at the case of Longmont, CO, and compare a citywide Wi-Fi network to a 3G data network (for instance an operational CDMA EV-DO network).

The city of Longmont contracted Kite Network in late 2006 to provide Wi-Fi Internet access throughout the city. The network was built to cover an ambitious 22 square miles of Longmont, and was completed in less than 90 days, which is much more successful than many other initiatives.1 [105] The city of Longmont is also covered by several cellular networks, including an EV-DO network, which is used in our comparison.

Network Cost

The cost of rollout, spectrum, and recurring expenses for both EV-DO and Wi-Fi can be estimated from various sources. EV-DO rollout costs can be calculated using typical urban cell site costs (seen above), as if the network were rolled out in a greenfield manner. Of course, a cellular network operator would incur less cost by reusing real estate and poles, but we compare here a greenfield build-out of the two networks.

Wi-Fi numbers can be estimated in a similar manner, and in our example are the actual build-out costs reported.2 These figures show a very important and often misunderstood fact: the build-out costs for a 3G network and a Wi-Fi network are fairly similar. For both networks, the number of sites is not estimated but taken from the actual networks (5 EV-DO sites, and 600 Wi-Fi APs).

The cost of spectrum is estimated from published auction results in that area, prorated for the Longmont population. Spectrum is of course a high cost for EV-DO, while free for Wi-Fi. But controlled, licensed spectrum allows for consistent and reliable throughput city-wide.

Recurring costs such as power, real estate, and wired backhaul are significant for both networks: EV-DO needs a few high-cost, large towers; Wi-Fi needs, for similar coverage, cheaper but more numerous AP locations.

The operational cost of maintaining wireless networks is often overlooked; it shows a main difference between the networks: Wi-Fi costs more than $4 million annually (per SEC filing). EV-DO, on the other hand, is much cheaper: the network has fewer sites and requires fewer field technicians, and less NOC (Network Operations Center) manpower and tech support to resolve maintenance issues. We estimate that cost at $2.7 million for a city the size of Longmont; in many cases, that cost is part of a more extensive operation (over a greater region, or even nationwide). Several factors contribute to the operational cost differences between the two types of networks. The costs of repair, for instance, are very different: repairing a Wi-Fi access point usually requires bucket crews, since the devices are located high up on traffic poles or street lamps, while EV-DO electronic equipment is on the ground. Trends also seem to show that the frequency of repair of a relatively cheap access point is far higher than that of a more expensive EV-DO cell site. Fewer cell-site locations and less equipment are always an operational advantage for software or hardware upgrades and general maintenance. Finally, scaling up a network to much wider areas and many more cities is more manageable with a network requiring fewer sites. The comparisons are summarized in table 8.2.

Table 8.2: Network cost comparison between EVDO and Wi-Fi for the city of Longmont.

Item                      EVDO             Wi-Fi
Spectrum                  $1.4 million     $0
No. of cell sites         5                600
Tower (per site)          $200k            $0
Site prep (per site)      $50k             $500
Equipment (per site)      $120k            $3k
Total build-out           $2 million       $2 million
Power                     $15k             $80
Real estate leases        $3k              $0
Backhaul                  $2.4k per site   $14k total
Monthly recurring cost    $102,000         $62,000
Field techs               2                5
NOC techs                 1                1
Tech support              2                3
Yearly operations         $2.7 million     $4 million

The comparison is actually rather close, and does not show one roll-out as a clear winner. In practice, however, lower operational expenses and clean spectrum ensuring reliable service make a big difference that seems to outweigh the free aspects of client devices and spectrum. To back up our analysis and complete our comparison, one should add that EV-DO is still operational in Longmont, whereas since January 2008 Kite Network is no longer in a financial position to manage the Wi-Fi network, opened it for free use, and put it on the auction block.3 (Similar things happened in other cities, such as Philadelphia.)

8.2.2 Other Successes and Failures

Other municipal initiatives were reported in the news in past years: nearly every city has released a request for proposal to deploy city-wide Wi-Fi. Some requests were targeted at public safety responders, including use of a specific spectrum block available for free for that purpose (4.9 GHz). Some requested complete city coverage extending over hundreds of square miles. For instance, in Albuquerque and surrounding cities, a request even included the build-out of areas where no homes were yet built: it was argued that the presence of a wireless network would be an incentive for builders. Of course, designing a wireless network before buildings are in place makes no sense.4 Nearly every major and medium city in the US considered some degree of wireless coverage in past years, and many of the most ambitious projects were abandoned. Companies such as Earthlink were intending to build such networks, using advertising to pay for most of it. The unforeseen operational expenses and the limited demand were most likely to blame.

Not all large Wi-Fi coverage failed, though. Some cities and large campuses were fairly successful. The successful initiatives all had a few things in common: 1) a reasonable target build-out, such as a campus or city public buildings, and most importantly 2) an anchor tenant, such as city employees, ensuring some minimum revenue stream to sustain operational expenses.

8.2.3 Other Factors

Of course, our business analysis is limited here to the roll-out of the existing networks as we tested them. Many additional performance optimization methods could be considered for both networks. Another important cost aspect is the cost of acquisition of new customers, including activation, credit checks, billing, and the distribution of client devices. Again, the latter gives a significant advantage to Wi-Fi, given the major penetration of Wi-Fi devices.

Other considerations should include usage needs and patterns, especially around indoor coverage. The majority of data needs are still indoors (at home or at work). Indoor coverage from outdoor access points is very difficult to provide reliably. (Recall link budgets and indoor penetration.) Furthermore, the residential market has a very obsolete and inhomogeneous population of PCs; focusing on Centrino-enabled laptops may be a serious restriction to any business case.

Outdoor needs for local citizens and the traveling population are well covered by cell phones, Blackberries, iPhones, and a variety of devices. Outdoor usage is of course limited by weather and temperature. Denver, for instance, has on average 150 days per year with an average temperature of 32 degrees or colder, during which no laptop is likely to be used in parks or on public benches. Denver is of course comparatively cold, and southern cities may have better outdoor usage potential; but conversely, how likely is outdoor usage on days exceeding 90 degrees, or when sun glare is too strong? Interesting data points can be gathered from www.weatherbase.com.

Table 8.3: Typical temperatures and weather conditions in major US cities.

City            Sunny days   Rainy days   Days above 90F   Days below 32F
Denver          115          89           34               155
Dallas          136          79           100              39
San Francisco   140          67           2                30
San Diego       147          42           4                0
Seattle         71           150          1                20

Some cities are of course more conducive than others to outdoor use. Rainy days will impact outdoor use, as will days above 90 degrees and days below 32 degrees. All in all, the total outdoor use window across a variety of cities is typically less than half the year.

8.3 Large Coverage Considerations

Considerations of coverage are important: coverage radius, percent reliability, frequency reuse, interference (intracell, intercell, and other sources).

These considerations are similar to any cellular network, and even more stringent, since only 3 non-overlapping channels are available, for a reuse factor of 3 (although reuse factors of 4 with channels 1, 4, 8, and 11 have been rolled out, e.g. in Longmont).

Capacity provisioning is also a major consideration. Understanding the network bottlenecks is critical to a good network design.


Figure 8.3: Wi-Fi access points in a Longmont residential area

8.3.1 Radio Parameters Analysis and Modeling

In an initial design phase, a simple one-slope model and low-resolution terrain data suffice for a rough estimate to qualify customers. As operations progress, actual measurements should be compared to predictions and the process should be refined further.

The variations in RSSI as a function of distance were measured in a residential neighborhood and are shown here. Note that measurements taken in the street may show a fairly low slope (due to the street corridor effect).

Figure 8.4: Received signal strength indicator (in dBm) of a Wi-Fi system as a function of distance (on a logarithmic scale).


Figure 8.5: Wi-Fi physical modulation rate as a function of distance (on a logarithmic scale).

8.3.2 Throughput Measurements

Having now characterized RF levels, we focus on the parameter of most interest: data throughput. Throughput is affected by distance, shadowing, and interference. The parameter of importance is the signal to noise ratio (SNR); it can be estimated from RSSI and ambient noise measurements, or can usually be reported in some form by the RF equipment.

The standard deviation of measured signal strength is also interesting to consider. In most cellular trials, mobile data is collected, which makes it impossible to quantify variations over long periods of time for a given location. In a population of fixed locations, however, a measured standard deviation over a long period may be useful in predicting seasonal changes in the radio channel.

Chapter 9 WiMAX System Performance

Service providers are in an intensive phase of trials and performance evaluation for fixed WiMAX systems and services. Initial technical evaluations showed promising data rates, and a number of wider-scale trials were conducted on larger customer bases throughout the world, in Europe, Asia, and the Americas.

9.1 Data Rates


IEEE 802.16 and WiMAX profiles allow for several radio channel bandwidths, which lead to very different data rates. In a given profile, the physical layer data rate of a WiMAX system is determined by the type of modulation and coding: from BPSK 1/2 to 64QAM 3/4. Theoretical data rates are quoted in standards or by manufacturers, but actual throughput varies with suppliers: a degradation of 40% to 50% is often observed. Table 9.1 summarizes typical data rates observed in a 3.5 MHz FDD channel (also see figure 9.7). That seemingly large difference is mainly due to timing delays necessary for scheduling and collision avoidance between users. Actual data results vary with suppliers, and interoperability between suppliers introduces even greater variations. Nevertheless, the great value of WiMAX certified products is to guarantee some minimum performance: a service provider may rely on the fact that WiMAX certified products will work well with other suppliers' products certified for the same profile.

Table 9.1: WiMAX 3.5 MHz channel maximum theoretical and actual measured throughput (at 3.5 GHz).

Modulation   3.5 MHz sensitivity (dBm)   SNR (dB)   Theoretical (Mbps)   Actual (Mbps)
BPSK 1/2     -90.6                       6.4        1.41                 0.86
BPSK 3/4     -88.6                       8.5        2.1                  1.28
QPSK 1/2     -87.6                       9.4        2.82                 1.72
QPSK 3/4     -85.8                       11.2       4.23                 2.58
16QAM 1/2    -80.6                       16.4       5.64                 3.44
16QAM 3/4    -78.8                       18.2       8.47                 5.16
64QAM 2/3    -74.3                       22.7       11.29                6.88
64QAM 3/4    -72.6                       24.4       12.71                7.74

These results are for a 3.5 MHz channel in one direction; a full duplex FDD system may see up to twice as much throughput in the total 7 MHz bandwidth. Of course, different profiles and channel widths lead to different throughput results. An unlicensed TDD 10 MHz channel profile, for instance, has the advantage of adapting to asymmetrical data demand. Similar benchmark tests show that such a system is also capable of throughput around 8 Mbps (see §9.7 and specifically figure 9.8).
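The theoretical column of table 9.1 can be reproduced from the 802.16d OFDM parameters: 192 data subcarriers out of a 256-point FFT and a sampling factor of 8/7; the 1/16 guard fraction below is our assumption (the standard also allows 1/4, 1/8, and 1/32):

# Reproduce the theoretical rates of table 9.1 (802.16d OFDM PHY)
BW = 3.5e6                          # channel bandwidth, Hz
FS = BW * 8 / 7                     # sampling rate (8/7 sampling factor)
N_FFT, N_DATA, G = 256, 192, 1/16   # FFT size, data subcarriers, guard (assumed)

t_sym = N_FFT / FS * (1 + G)        # useful symbol time plus cyclic prefix (68 us)

for name, bits, rate in [("BPSK", 1, 1/2), ("BPSK", 1, 3/4), ("QPSK", 2, 1/2),
                         ("QPSK", 2, 3/4), ("16QAM", 4, 1/2), ("16QAM", 4, 3/4),
                         ("64QAM", 6, 2/3), ("64QAM", 6, 3/4)]:
    print(f"{name} {rate:.2f}: {N_DATA * bits * rate / t_sym / 1e6:.2f} Mbps")
# prints 1.41, 2.12, 2.82, 4.24, 5.65, 8.47, 11.29, 12.71 -- table 9.1 within rounding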

Interference from other cells (co-channel interference) strongly impacts actual rates [106] [107]. And in unlicensed cases, unwanted interference in the band is also a concern: the minimum signal to noise ratios listed in the table must be maintained for a given throughput.

In order to compare system performance in diverse environments, tests are usually conducted with traffic load generators and fading emulators. Service providers can thus create repeatable benchmark tests, in a controlled environment, in order to compare equipment performance under different conditions. These tests quantify access performance in large rural areas, suburban areas, or dense urban cores, both for fixed access and full mobility.

Stanford University Interim (SUI) models are used to create a small number of models that emulate different terrain types, Doppler shifts, and delay spreads, as summarized in table 9.2. Terrain types are defined as follows (from [35]). A: the maximum path loss category, hilly terrain with moderate-to-heavy tree densities; B: the intermediate path loss category, hilly with light tree density or flat with moderate-to-heavy tree density; C: the minimum path loss category, mostly flat terrain with light tree densities. In some cases, these terrain categories are used to refer to obstructed urban, low-density suburban, and rural environments, respectively.

Table 9.2: Typical parameters for SUI-1 to 6 channel models (delay spread values estimated for 30-degree antenna azimuthal beamwidths, and Ricean K-factors are for 90% cell coverage; from [36]).

Channel Model   Terrain Type   RMS Delay Spread (μs)   Doppler Shift   Ricean K (dB)
SUI-1           C              0.042 (Low)             Low             14.0
SUI-2           C              0.069 (Low)             Low             6.9
SUI-3           B              0.123 (Low)             Low             2.2
SUI-4           B              0.563 (High)            High            1.0
SUI-5           A              1.276 (High)            Low             0.4
SUI-6           A              2.370 (High)            High            0.4

9.2 Experimental Data

Fade emulators used in a lab environment can recreate the above SUI fading profiles. Radio systems can then be evaluated in different fading environments.
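A software version of such an emulator is easy to sketch in Python. The tap delays and powers below are the values commonly quoted for SUI-3 (0, 0.4, and 0.9 μs at 0, -5, and -10 dB) and should be treated as assumptions; the Ricean K of 2.2 dB on the first tap is taken from table 9.2. Note that the RMS delay spread computed from this omnidirectional profile comes out larger than the 30-degree-antenna value in the table:

import numpy as np

rng = np.random.default_rng(2)

# SUI-3-like tapped delay line (tap values assumed, see text above)
delays = np.array([0.0, 0.4, 0.9]) * 1e-6          # seconds
p = 10 ** (np.array([0.0, -5.0, -10.0]) / 10)      # linear tap powers
K = 10 ** (2.2 / 10)                               # Ricean K on tap 1 (table 9.2)

def draw_taps():
    # One channel realization: Ricean first tap, Rayleigh delayed taps
    h = (rng.normal(size=3) + 1j * rng.normal(size=3)) / np.sqrt(2)
    h[0] = np.sqrt(K / (K + 1)) + h[0] * np.sqrt(1 / (K + 1))
    return h * np.sqrt(p)

# RMS delay spread of the power-delay profile (omnidirectional value)
mean_d = np.sum(p * delays) / p.sum()
rms = np.sqrt(np.sum(p * delays ** 2) / p.sum() - mean_d ** 2)
print(f"omni RMS delay spread: {rms * 1e6:.3f} us")   # about 0.26 us

print("one realization, tap gains (dB):",
      np.round(20 * np.log10(np.abs(draw_taps())), 1))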


Figure 9.1: Throughput variations in time of a WiMAX radio system at 5.8 GHz in two different SUI fading models: SUI-1 (left) and SUI-3 (right). Radio system modulation was fixed to 64QAM 2/3.

The radio system under test comprises one base station (BS) and several subscriber stations (SS's). The air interface is a short direct line of sight, and the signal is then sent through the fading emulator. Different fading channels are programmed in the emulator. The fading emulator provides two separate channels (forward and reverse links), each comprised of several multipaths, each of which is independently faded and delayed. Fade statistics for the direct path are either Rayleigh or Ricean; delayed paths are attenuated and Rayleigh faded as specified by the SUI models. As in many wireless LAN devices, our radio devices are TDD and have duplex ports: transmit and receive signals are cabled to the same antenna. In this test, because of the unidirectional nature of the fade emulator, transmit and receive paths are separated by circulators and faded by two independent channels. Additional attenuation (pads) is added where necessary. Finally, a traffic generator is connected (via 100bT Ethernet) to the BS, and laptops are connected to SS's for data collection. Figure 9.2 shows the detailed setup.


Figure 9.2: Fade emulator setup between base and several clients.

9.3 Field Data

As an example, let us illustrate the above with data for fixed broadband access in a residential suburban area. Unlike mobile cellular systems, a fixed wireless access system needs a careful selection process for qualifying customers. Propagation tools and terrain data are used in that process, but the level of detail is a matter of choice. A precise qualification process leads to better targeted mailing and may avoid miscalculated predictions. Service providers cannot afford to be too optimistic nor too pessimistic in their predictions: false negatives are a missed revenue opportunity, and false positives lead to wasted technician time and unhappy customers. It is therefore time well spent to refine selection criteria and tools as much as possible.

A simple selection process consists of geocoding customers' addresses and correlating them to terrain data as well as to a simple propagation model for an initial estimate. Address geocoding, however, is far from a perfect process. A customer address may not give reliable longitude and latitude, and will rarely hint at where an outdoor antenna may be in good RF visibility of a base station. Some manual processing and even some local knowledge of the area may be required; and in the end, a site visit may still discard a possible location. The quality of terrain data and RF modeling is of course also of high importance. Terrain data can be obtained at no cost from U.S. geological surveys (100 or 30 meter accuracy), which is useful for path loss prediction, but will not accurately predict shadowing in all areas. More granular data, including building data, with sub-meter accuracy, can be obtained at much higher cost. Another alternative is to drive-test around the area of interest and optimize a propagation model for a given area. Many software packages allow for such model optimization, which significantly improves prediction tools. (Of course these models, as well as these drive-test optimizations, are usually based on mobile data.)

9.4 Other Trial Considerations

In many cases trial data are published and compared to existing models, or (if extensive enough) they are used to create a new propagation model. Many other aspects of major customer trials are important to service providers, such as: customer qualification, installation, support, troubleshooting, and overall estimation of customer satisfaction.

The overall trial goal makes a significant difference in trial results: the customer selection process for instance may focus on capacity limitations in a specific area, or it may be geared towards testing distance limits of a radio system; clearly trial results will be different.

Trial architectures vary. Most WiMAX radio systems use Ethernet network interfaces, but many systems require a mixture of backhaul or longhaul transport, which include microwave, copper, or fiber links, over TDM T1, T3, SONET, etc.

Integration with a monitoring system is also a major portion of a technical trial. Major network elements (including customer devices) should be monitored. Maintenance, repairs, and upgrades should be performed in a low-intrusion maintenance window in order to limit the impact of down time.

Most network elements should be controlled remotely and centrally from a network operations center. Good control of network elements, including customer equipment, is precious for system support, especially when it reaches large scale.

Data collection is highly important for a trial. As a successful trial moves into production, ongoing data analyses are still important for network optimization.

Customer satisfaction surveys and focus groups are also an integral part of a complete trial; they should also continue into production phase and be compared to network quality metrics.

9.5 Radio Parameters Analysis and Modeling

In an initial design phase, a simple one-slope model and low-resolution terrain data suffice for a rough estimate to qualify customers. As operations progress, actual measurements should be compared to predictions and the process should be refined further.


For instance, an initial selection process leads to the chart in figure 9.3. Actual measurements show the right trend, but some variations are very large (sometimes in excess of 20 dB). Better modeling and drive testing should be considered in this case.1

Figure 9.3: Actual RSSI (in dBm) measured at customer locations versus predicted RSSI (in dBm) from planning model.

During the trial, a received signal strength indicator (RSSI), in dBm, is logged at all customer locations. A plot of RSSI as a function of the logarithm of distance is graphed in figure 9.4. The logarithmic scale for the distance is simply justified by the fact that a one-slope model shows as a linear approximation on the graph. Many propagation studies use this scale, since it allows for easy comparison of path loss exponents. The variations in RSSI for a given customer location are represented by error bars at each point. Each error bar represents a standard deviation; that is, the total width of the error bar shows two standard deviations.


Figure 9.4: Received signal strength indicator (in dBm) as a function of distance (on a logarithmic scale).

The next step in data analysis is a comparison between the data set and typical models. For that comparison, a path loss estimate should be derived from the empirical system. The RSSI measurement provides one term of the path loss. The other is in the transmitted power level, which depends on base station power, cable loss, antenna pattern, and even (to a small extent) on the deviation from boresight of the sector’s antenna.2 Path loss estimates are represented in figure 9.5.

Figure 9.5: Empirical path loss (in dB) as a function of distance (on a logarithmic scale), and comparison to prediction models.

Approximation of path loss to a one-slope model leads to the following equation:

PL(d) = PL(d0) + 10 n log10(d / d0)    (9.1)

with d0 = 1 km. The trial environment is compared to typical cellular models as discussed below.

The path loss exponent is approximately n = 2.7. The Walfisch-Ikegami model for line of sight in urban corridors predicts n = 2.6. Other reports have shown similar results for 3.5 GHz: [65] reports values of n between 2.13 and 2.7 for rural and suburban environments, and [67] reports n = 3.2. But many other models predict higher exponents, n between 3.5 and 4.5. (See path loss exponents in table 1.1.)

Otherwise, approximations are fairly good with Erceg-B and C models. Erceg-B is the best fit and is represented on figure 9.5.


The most popular method to compute a slope estimate is a least square error estimate. In that method, a set of error terms {ei} is defined between each data point and a linear estimate. Minimizing the sum of the squares of these errors yields the slope and intercept, which intuitively gives a good approximation of the data set. That method also benefits from the following important properties [108]:

1. Least square estimated slope and intercept are unbiased estimators.
2. Standard deviations of the slope and intercept depend only on the known data points and the standard deviation of the error set {ei}.
3. Estimated slope and intercept are linear combinations of the errors {ei}.

From the last point, if we assume that the errors are independent normal random variables (as in a log-normal shadowing situation) the estimated slope and intercept are also normally distributed. If we assume more generally that the data points are independent, the central limit theorem implies that for large data sets, the estimated slope and intercept tend to be normally distributed.

For the last assumption to be true, very low correlation of the wireless channel must exist between data points. This is the case when data points are measured at fixed locations tens or hundreds of meters apart — in which case measurements show very low correlations between the respective fading channels. Similarly, this is the case even in a mobile cellular environment, from one cell to another.

The important conclusion is that the path loss exponent is approximated by a normal (or Gaussian) random variable.
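As a minimal illustration, the following sketch fits a one-slope model by least squares on synthetic data generated with the trial's values (exponent n = 2.7 and 11.7 dB log-normal shadowing); the 130 dB intercept at 1 km is an arbitrary illustrative value:

import numpy as np

rng = np.random.default_rng(3)

# Synthetic trial data: true exponent and shadowing as measured above;
# the 1 km intercept of 130 dB is illustrative only
n_true, pl0, sigma = 2.7, 130.0, 11.7
d = rng.uniform(0.3, 5.0, 200)                   # distances in km, d0 = 1 km
pl = pl0 + 10 * n_true * np.log10(d) + rng.normal(0, sigma, d.size)

# Least-squares line in PL vs log10(d/d0): slope = 10 n, intercept = PL(d0)
slope, intercept = np.polyfit(np.log10(d), pl, 1)
print(f"estimated n = {slope / 10:.2f}, PL(d0) = {intercept:.1f} dB")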

We also verify a few more key findings as in [35], for a 3.5 GHz fixed link:

1. The free-space approximation (PL0 = 20 log10(4πd/λ)) works well within 100 m.
2. The path loss exponent depends strongly on the height of the transmitter (mobile height being more or less constant throughout).
3. Variations around median path loss are Gaussian within a cell (log-normal shadowing), with standard deviation σ ≈ 11.7 dB.
4. Unfortunately, our limited number of cells does not allow us to quantify the nature of the variations of σ over the population of macro cells.

9.6 Throughput Measurements

Having now characterized RF levels, we focus on the parameter of most interest: data throughput. Throughput is affected by distance, shadowing, and interference. The parameter of importance is the signal to noise ratio (SNR); it can be estimated from RSSI and ambient noise measurements, or can usually be reported in some form by the RF equipment. The SNR has a direct impact on the modulation used by the link3 and therefore on the throughput of that link. That throughput is graphed as a function of distance in figure 9.6.

Figure 9.6: Throughput in Mbps measured at customer locations as a function of distance to base station, with ten point moving average, and logarithmic fit.

In fact, modulation and throughput change from time to time. It may be important to study the statistical distribution of the resulting throughput, as in figures 9.7 (and 9.8). These figures show the probability of reaching a certain throughput, over the population of fixed locations under test. These plots may be compared to plots representing fixed modulations and controlled fading environments described in §9.1. Fading statistics in suburban areas show close correlation with SUI models 3 and 5, and throughput density functions near those of 16QAM 3/4 in such fading environments [109].

Finally, we report on the standard deviation of measured signal strength. In most cellular trials, mobile data is collected, which makes it impossible to quantify variations over long periods of time for a given location. In a population of fixed locations, however, a measured standard deviation over a long period may be useful in predicting seasonal changes in the radio channel. Typical standard deviations in fixed links over several months vary between 1 and 6 dB; when deciduous trees are present, the value increases in the spring as leaves come out. Trial data also show that the standard deviation tends to increase with distance. A median value of the standard deviation of path loss is given by:

σ(d) = σ0 + mσ log10(d / d0)    (9.2)

with d0 = 1 km, the intercept σ0 and slope mσ being fitted to the trial data.

Seasonal variations are especially noticeable as leaves come out. The impact on the link budget has been reported for fixed wireless links [45] and in different wind conditions [42]. We measure some variations of the path loss exponent, the intercept, and the log-normal shadowing. In many cases the wireless system can adapt to these variations, but in some marginal locations, where the link budget nears the maximum allowable path loss, throughput is affected. As shown in figure 9.7, low bit rates are affected the most by changes in foliage.

Figure 9.7: Throughput cumulative distribution statistic measured in various foliage conditions in a 3.5 MHz FDD channel at 3.5 GHz.

9.7 Unlicensed WiMAX

Although 802.16-2004 WiMAX equipment is certified conformant at 3.5 GHz, other profiles exist, and it is important to mention the unlicensed profiles at 5.8 GHz. Unlicensed operations were initially seen as a wonderful opportunity to achieve economies of scale. These profiles were scheduled to be standardized as a second wave after the 3.5 GHz profiles; unfortunately, as of summer 2007, they seem to have been put on hold indefinitely.

Nevertheless, suppliers manufacture equipment at 5.8 GHz that follows these profiles and has all the WiMAX properties except for certified interoperability between suppliers. The main profile of these products is the following: 5.8 GHz, TDD, 5 or 10 MHz channels, 256 or 512 FFT. Their performance is illustrated in this section.

Note that comparisons between field and lab data may be interesting for further service predictions. One experiment made in downtown Denver shows that the environment behaves like a SUI-3 model, and that adaptive modulation seems to maintain a radio link between QPSK and 16QAM.


Figure 9.8: Throughput cumulative distribution statistic measured in a 10 MHz TDD channel at 5.8 GHz, for data measured in the field and data emulated in the lab.

Figure 9.9: Throughput statistical distribution in SUI-3 model at 5.8 GHz for several modulations BPSK-1/2, QPSK-3/4, 16QAM-3/4, and 64QAM-2/3.

Chapter 10 Core Network Evolution

After a compendious look at radio technologies in third generation systems, we now focus on core network elements. These elements are an integral part of the wireless mobility function, and have undergone significant changes from the 2G circuit-switched (voice centric) environment to the 3G high speed packet data capabilities.

10.1 Wireless Signaling

Modern communication signaling is almost always carried on a separate network from the actual user payload (or bearer services); this kind of signaling is called out-of-band. In the US, Signaling System 7 (SS7) is widely used, and many services such as 800 numbers, caller ID, calling name, and local number portability are implemented on SS7 networks. The US SS7 network is particularly large and cumbersome due to the many regional Bell operating companies. The SS7 network is a major signaling network for wireless voice services, so we review the basics of SS7 and how it applies to wireless network signaling.

The SS7 lower OSI layers (physical, data link, and network) are defined by the Message Transfer Part (MTP). The transport layer is defined by SCCP (Signaling Connection Control Part). Upper layers use TCAP (Transaction Capabilities Application Part) and ISUP (ISDN User Part, where ISDN is the Integrated Services Digital Network).

10.1.1 Physical Network

SS7 has three types of nodes or signaling points:

Service Switching Point (SSP):
Network elements responsible for providing access to the voice switching fabric, radio, and base station controllers.

Service Control Point (SCP):
Interfaces to databases and database applications such as 800 number translation and the HLR.

Signal Transfer Point (STP):
Transfer points connecting different SS7 backbones and different SSP's and SCP's.

Actual links between these nodes are typically DS0 (56 kbps) in North America; ATM and DS1 are also supported. The ITU standardizes 64 kbps links, widely used in Europe and most of Asia.

SS7 links are engineered for redundancy: STPs are always deployed in pairs, and links are dimensioned to sustain the additional load of another failing link. Several types of links exist in SS7. A-links (for Access) connect an SSP to the SS7 network. B-links (for Bridge) connect one STP pair to another, as seen in figure 10.1. A-links are usually engineered for a busy hour load of 40% of capacity, so if one link to the SSP fails, the other is 80% loaded, which still sustains normal operations; the extra 20% is used to accommodate extra bursts of signals. B-links are engineered for a 20% load at busy hour, so as to provide redundancy upon failure of one STP and another link, or of three out of the four B-links. C-links connect two STPs within a pair and ensure redundancy of that node; they typically carry small amounts of network management messaging, but are dimensioned to carry the total load of both STPs connected.
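These dimensioning rules translate into a simple failure-mode check (a toy calculation, not an engineering tool):

# Toy check of the SS7 dimensioning rules quoted above
def surviving_load(per_link_load, n_links, n_failed):
    # Load on each remaining link when n_failed of n_links fail,
    # assuming traffic redistributes evenly over the survivors
    return per_link_load * n_links / (n_links - n_failed)

print(f"A-link pair, one failure:    {surviving_load(0.40, 2, 1):.0%}")  # 80%
print(f"B-link quad, three failures: {surviving_load(0.20, 4, 3):.0%}")  # 80%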


Figure 10.1: SS7 Signaling Network.

10.1.2 Mobility Management

Wireless networks are linked by the SS7 signaling network; in particular, the following entities use that network:

MSC:
An SSP in SS7, the MSC provides the switching fabric for voice, the interface to the fixed voice network (PSTN) as well as to the radio network (BSC, BTS, MS).

HLR:
An SCP, the HLR provides records for MIN, ESN, subscriber features, class of service, etc.

VLR:
The VLR is a database similar to the HLR, but created temporarily on a visiting (or roaming) switch. Since its interface to the switch is not open nor standardized, the VLR is practically always part of the MSC; still, it is usually represented as a separate functional entity in network diagrams due to the important role it plays during registration and call delivery.

AC (or AuC):
An SCP containing authentication keys and performing algorithms for validation of mobiles (MS). The AC typically computes complex encryption schemes and algorithms, as does the MS; if they match, the AC validates that MS.

SMSC (or MC):
An SSP, the Message Center or Short Message Service Center stores or forwards messages to mobiles or other Short Message Entities (SME).

With these nodes, a mobile wireless network handles handoff, registration on foreign (roaming) systems, and call delivery when allowed.


Figure 10.2: SS7 Signaling during handoff.

10.2 2G Wireless Network

Second generation wireless cellular systems brought significant improvement to wireless voice capacity; all standards in competition focused on a digital air interface and circuit switched connectivity to legacy public switched telephony networks including SS7 signaling for voice feature integration with legacy telephony networks.

The US cdmaOne network uses the TIA IS-41 (or ANSI-41) standard to define interfaces and communication between major elements of the network. GSM is based on a completely different set of standards described in the GSM MAP. Although the two are very different, similar functional blocks can be identified, and the interfaces between these blocks are defined in the two families of standards. Some attempts at convergence were made, and encouraging signs appeared: Qualcomm 6300 chips allowing dual GSM-CDMA2000 modes were available in October 2002, and an Interoperability Interworking Function (IIF) was devised to offer transparent operations between IS-41 and GSM MAP. Nevertheless, market demand never forced the wide availability of dual mode (CDMA-GSM) handsets, and the two networks never reached much of an integration stage.


Figure 10.3: IS-41 Network Reference Architecture.

10.3 3G Wireless Network Evolutions

Second generation wireless networks are defined by digital air interfaces and digital voice circuit switching. Similarly, third generation systems are probably defined by a combination of two characteristics: a more efficient air interface, and a data capable network that may move beyond circuit switched call flows to open wireless communications to packet switched networks, and especially IP-based routing.

The dominance of two 2G networks evolved into two families of 3G networks. WCDMA networks are typically based on an evolution of GSM/GPRS networks, for which European network reference models are used. CDMA2000, on the other hand, uses network reference models developed in US standard bodies. The two reference architectures are somewhat similar, but with some major differences.

10.3.1 GSM/GPRS Networks

The evolution of GSM networks included the introduction of a new packet data standard: General Packet Radio Service (GPRS) was introduced on existing GSM networks to provide higher speed packet-switched data, up to 21.4 kbps per radio timeslot. Consequently, the GPRS architecture is similar to that of GSM, but with two new network nodes: the serving GPRS support node (SGSN) and the gateway GPRS support node (GGSN), which provide access to data networks such as the public Internet.

UMTS networks build on the previous reference models, and are standardized in detail by the third generation partnership project (3GPP). Details may be found at www.3gpp.org/ftp/Specs/archive/25_series/.


10.3.2 IS-41 Networks

The evolution of IS-41-based CDMA systems is CDMA2000; that standard (and already IS-95B) also introduces a packet data architecture for mobile networks: the A-interface becomes more elaborate (A10/A11 interfaces), and a packet data serving node (PDSN) is added to direct packet data calls towards the IP network before they even reach the MSC.

10.4 New Network Features and Application

The major network change is the introduction of packet data capability; in conjunction, a number of new features and services are making their debut.

10.4.1 Multimedia Message Services

2G services saw the use of the short message service (SMS) increase in popularity, starting in Europe. With new handsets such as camera-phones (even short video) on the market, larger messaging services are required, called the multimedia message service (MMS).

10.4.2 Presence

Presence detection offers nice features, such as identifying in a handset any contact presently available, who may be reached by phone or by instant messaging. Further refinements are considered in combination with location-based services.

Presence properties are standardized by the IETF session initiation protocol (SIP), which is an application layer protocol to locate users and start applications such as voice over IP, multimedia broadcast, etc. In particular, push-to-talk services are popular and now available from many wireless service providers.1 Dual-mode handsets are now in production to allow seamless transfer from cellular wireless to a Wi-Fi hotspot in a home or office when available. The problems of handset cost, battery life (constantly listening to two networks), and network architecture are some of the most interesting in the industry at the present time.

10.4.3 Convergence

The tighter integration of the 3G packet data network and IP networks allows for many converged applications. These applications allow customers to keep one account and access seemingly identical applications such as email, instant messaging, voice mail, etc. Location determination, as examined in the next section, especially when combined with presence, generates a wealth of innovative new applications.

10.5 Location Based Services


Location based services (LBS) are born from an FCC mandate to provide location for emergency 911 calls: the FCC required wireless carriers to provide enhanced 911 service, allowing 911 operators to locate callers within a certain range. In CDMA networks, a new network element, the position determination entity (PDE), is responsible for locating the handset. A specific standard, IS-801, deals with position determination information.

Figure 10.4: IS-801 Location-based services and E911.

In E911, the request for localization comes from a public safety answering point (PSAP), where the 911 call was answered. The PSAP operator can trigger the location determination: IS-801 takes over, interrupts the IS-95/IS-41 voice call, and returns a location estimate to the PSAP operator. Carriers implemented E911 services as government entities requested them: public safety entities applied first for phase 1, in which only the sector and base station originating the emergency call were identified, and subsequently for phase 2, where location accuracy is mandated within 50 meters 67% of the time, and within 150 meters 95% of the time.

Many carriers are also interested in revenue generating location services. Some claim that the E911 mandated accuracy caused 80% of the engineering cost and effort, and that for 20% more, carriers can reap the benefits of that system by using it to generate revenue. Location-based services were slow to start, though; a few fleet services used vehicle tracking applications, with disappointing results. One of the most innovative services was introduced by Disney in 2005 (as a Mobile Virtual Network Operator, i.e. a service provider reselling minutes on a network operated by another major carrier such as Sprint), advertising the ability for parents to access a web service locating their children. Since then, similar services have appeared. For these services, an application running on a Location Positioning Server (LPS) or Mobile Positioning Center (MPC) queries the handset using IS-801, and uses its location information for different services. Constant network queries would be too much of a burden on the network, so applications need to be written to query for location only sporadically. Network operators have a strict certification process to ensure that applications have an acceptable network load. Another important aspect is that these applications have to be authenticated, to ensure that location information is not disclosed to unauthorized parties. Popular applications using location, yet not too network intensive, include:

Area restriction: parents may define an area and receive an alarm when a child's cell phone leaves that area. Note that the application does not require frequent querying, but simply triggers an alarm (such as an SMS to a parent) when the child's mobile is no longer in a set of base stations (see the sketch after this list).

Navigation: a user can set a route from one location to another and get an alarm upon deviating from that route by taking a wrong turn. Again, triggers are based on a simple test, not on frequent position determination.

Real-time tracking: applications plotting a fleet's position on a map in near real time are possible too; position queries are typically made every 15-20 minutes in order not to overload the network.
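
As an illustration of the trigger-based (rather than polling) design of the area-restriction service, here is a minimal Python sketch; the allowed-cell set, phone identifiers, and SMS stand-in are hypothetical.

    # Area-restriction (geofence) trigger: alarm only when the phone's
    # serving cell leaves a predefined set; no periodic position queries.
    ALLOWED_CELLS = {"cell-1021", "cell-1022", "cell-1187"}  # home area

    def send_sms(number, text):
        print(f"SMS to {number}: {text}")   # stand-in for an SMS gateway

    def on_serving_cell_change(phone_id, new_cell_id):
        if new_cell_id not in ALLOWED_CELLS:
            send_sms("+13035551234",
                     f"{phone_id} left the allowed area (now on {new_cell_id})")

    on_serving_cell_change("child-phone", "cell-2040")  # outside: triggers SMS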

Two methods are considered: assisted global positioning system (AGPS) and advanced forward-link trilateration (AFLT).

Assisted GPS requires a GPS receiver in the device, which can capture latitude and longitude. It works well in wide-open areas where several satellites can be seen for location determination, but is ineffective indoors.

Advanced forward-link trilateration (AFLT) uses round-trip delays from a plurality of sectors to trilaterate. That method works better when more cell sites are visible (at least three), and it requires network operators to calibrate their networks for better accuracy. Often a hybrid of the two methods is used for actual position determination.
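
The trilateration step can be sketched as a least-squares fit of a position to ranges derived from the round-trip delays. The Python sketch below uses made-up site coordinates and ignores the timing-calibration errors a real AFLT deployment must correct for.

    # AFLT-style multilateration sketch: round-trip delays -> ranges ->
    # Gauss-Newton least-squares 2-D position fit. Sites are illustrative.
    import numpy as np

    C = 3.0e8  # speed of light, m/s

    def aflt_position(sites, rtd_s, iters=20):
        ranges = C * np.asarray(rtd_s) / 2.0   # one-way range estimates
        p = sites.mean(axis=0)                 # initial guess: centroid
        for _ in range(iters):
            d = np.linalg.norm(sites - p, axis=1)
            residual = d - ranges              # fit error per sector
            J = (p - sites) / d[:, None]       # Jacobian of the distances
            step, *_ = np.linalg.lstsq(J, -residual, rcond=None)
            p = p + step
        return p

    sites = np.array([[0.0, 0.0], [2000.0, 0.0], [1000.0, 1800.0]])
    true_pos = np.array([800.0, 600.0])
    rtds = 2 * np.linalg.norm(sites - true_pos, axis=1) / C
    print(aflt_position(sites, rtds))          # recovers ~[800, 600]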

IS-801 allows for an elaborate information exchange to refine position accuracy: the mobile station reports its nearest base station(s) to the PDE, which determines which GPS satellites to look for; the MS then attempts to lock onto these satellites' signals and determines its latitude and longitude via GPS, a more accurate location that it reports back to the PDE.
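
The exchange can be summarized by the following Python sketch; the class and method names are illustrative stand-ins, not actual IS-801 message identifiers from the standard.

    # Sketch of the assisted-GPS exchange: coarse fix from cell data,
    # assistance data down to the handset, refined GPS fix back up.
    class StubPDE:
        def coarse_position(self, cells):      # rough fix from cell database
            return (39.74, -104.99)
        def assistance_data(self, coarse):     # satellites worth searching
            return {"satellites": [4, 7, 13, 20]}
        def record_position(self, fix):        # store the refined location
            return fix

    class StubMS:
        def report_nearby_cells(self):         # MS -> PDE: serving sector(s)
            return ["sector-17A"]
        def gps_fix(self, assist):             # lock onto assisted satellites
            return (39.7392, -104.9847)

    def agps_session(ms, pde):
        cells = ms.report_nearby_cells()
        assist = pde.assistance_data(pde.coarse_position(cells))
        return pde.record_position(ms.gps_fix(assist))

    print(agps_session(StubMS(), StubPDE()))   # refined latitude/longitude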


Figure 10.5: Location services can first estimate a position, and send the handset information such as which GPS satellites to lock onto in order to refine the position estimate.

10.6 Fixed Mobile Convergence

In the early nineties, operators started to consider cellular phones that might have a cordless-phone mode to switch to within the home. Most projects were abandoned, but the idea sees renewed interest today under the name fixed-mobile convergence (FMC).

The purpose of such an offering is twofold: better in-home coverage, and free minutes when calling from home and/or from select other hotspots such as a business location. Of course, roaming onto other Wi-Fi networks may also be considered.

10.6.1 Wi-Fi FMC

Today's wide presence of Wi-Fi and the broad acceptance of VoIP bring dual-mode phones back into consideration: phones that can switch to VoIP over Wi-Fi.

Handoff from CDMA to Wi-Fi may be difficult, since the CDMA active mode does not have the ability to also monitor another RF technology to hand off to. Wi-Fi to CDMA handover is usually easier, but it still needs an efficient handover algorithm between two different sets of RF conditions (in different frequencies and standards), and a call-establishment time short enough to maintain fairly good call perception.
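
A typical way to keep such a handover stable is to add hysteresis and a dwell timer so the phone does not ping-pong between networks. The thresholds in this Python sketch are illustrative; a real algorithm would also weigh call-setup latency and CDMA signal quality.

    # Dual-mode handover decision with hysteresis and a dwell timer.
    WIFI_ENTER_DBM = -65   # Wi-Fi must be at least this strong to hand in
    WIFI_EXIT_DBM = -75    # ...and this weak before handing back out
    MIN_DWELL_S = 10       # minimum time to stay after any handover

    def choose_network(current, wifi_rssi_dbm, seconds_since_handover):
        if seconds_since_handover < MIN_DWELL_S:
            return current                     # dwell timer: no ping-pong
        if current == "cdma" and wifi_rssi_dbm >= WIFI_ENTER_DBM:
            return "wifi"                      # strong hotspot: hand in
        if current == "wifi" and wifi_rssi_dbm <= WIFI_EXIT_DBM:
            return "cdma"                      # leaving coverage: hand out
        return current                         # hysteresis band: stay put

    print(choose_network("cdma", -60, 30))     # -> "wifi"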

Two major approaches are used for FMC: VCC and UMA. The Voice Call Continuity (VCC) standardization initiative started by defining different means for the fixed network to handle call control and reach a mobile on a fixed wireless access point or on a mobile cellular network. The Unlicensed Mobile Access (UMA) standard tries to improve upon that effort by defining a set of features that a mobile can access across several different networks. Although both approaches have advantages, the latter is also well suited to adapt to future network evolutions such as IMS.

Data can be supported as well: the usual mobile IP sessions can be maintained between Wi-Fi and EVDO, through the PDSN or PCF. A common home agent (HA) is used, and the usual mobile IP handoffs are triggered.
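
The mobile-IP side of such a data handoff reduces to re-registering a new care-of address with the common home agent whenever the active interface changes, so sessions keep the stable home address. In this Python sketch, the home agent address and registration helper are hypothetical placeholders.

    # Mobile IP re-registration on a Wi-Fi <-> EVDO interface change.
    HOME_AGENT = "ha.carrier.example.net"   # hypothetical common HA
    HOME_ADDRESS = "10.20.30.40"            # stable address seen by apps

    def register_with_ha(ha, home_addr, care_of_addr):
        # Stand-in for sending a Mobile IP registration request (RRQ).
        print(f"RRQ to {ha}: {home_addr} now via care-of {care_of_addr}")

    def on_interface_change(new_care_of_addr):
        register_with_ha(HOME_AGENT, HOME_ADDRESS, new_care_of_addr)

    on_interface_change("192.0.2.55")       # e.g. new Wi-Fi address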

10.6.2 Femtocell

We have mentioned that the traditional circuit-switched fabric of cellular networks is shifting toward a flexible IP packet-switched network for data.

We have also seen the trend for cell sites to shrink from large-coverage sites to smaller sites, allowing higher modulation and higher data rates. One consequence of that trend is the arrival of small devices, almost like access points, designed to cover only a residential home or a small office.

These small base stations are called femtocells and are typically designed to provide service to a handful of customers within a home. They are treated differently from macro cell sites in at least two respects: their RF resources and coordination (including handoff) are different, and their backhaul network is different.

Consider for instance the CDMA femtocells introduced by Sprint or Verizon (a provisioning sketch follows this list):

- Femtocells typically have GPS capability and report their location to the network; an RF database then tells them what RF channel they are allowed to transmit on.

- Radio resources are managed largely independently of neighboring cells, which for instance precludes soft handoff.

- Access limitations (restricted to a few accounts or open to the public) as well as (hard) handover into or out of the femtocell can be handled in dedicated servers.

- Voice and data usage and billing can be handled differently.
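
The power-up provisioning step might look like the following Python sketch; the provisioning URL and response fields are hypothetical, as each operator defines its own interface.

    # Femtocell power-up: report GPS position, receive an RF channel grant.
    import json
    import urllib.request

    def provision(femto_id, lat, lon):
        req = urllib.request.Request(
            "https://rf-db.carrier.example.net/provision",  # hypothetical
            data=json.dumps({"id": femto_id, "lat": lat, "lon": lon}).encode(),
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            # e.g. {"channel": 283, "max_power_dbm": 13}
            return json.load(resp)

    grant = provision("femto-00A1", 39.7392, -104.9847)
    print("transmit on channel", grant["channel"])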

For voice, femtocells have to include a minimal base station controller function (including the EVRC vocoder), and use a slightly more complex A-interface in the IS-41 network. A special MSC, much like a VoIP switch, can handle all femtocells in an area, just as another MSC would in that area. For data, the specific femto MSC has a PCF, which redirects data traffic to the IP network.


Figure 10.6: Femtocells are small cell sites in a home, using the customer's own high-speed Internet connection for backhaul.

10.7 Handsets Operating Systems and Applications

Handsets are of course very resource-limited in computing power, memory, network access, and even battery life. Consequently, specialized operating systems are needed.

Many proprietary operating systems might have been acceptable for voice-only handsets; but even then, operators always like to differentiate their handsets and give them a certain unified look or user interface, rather than whatever handset manufacturers devise. Furthermore, some level of handset customization may be left to the end customer and can be a good source of revenue, such as backdrops, screen savers, and downloadable ring tones.

Recent improvements in handset computing capabilities, as well as the shift to more data capabilities, made these completely proprietary environments impractical, and several standard options emerged, such as RIM's BlackBerry OS, Symbian, the Open Handset Alliance's Android, Palm OS, Microsoft's Windows Mobile, Java ME, and BREW.

Palm OS is an important environment that initially gained much attention for handsets with PDA functions.

BlackBerry succeeded in creating a very loyal user base with early simple features and a fairly wide range of applications (on the RIM OS).

Sun produced the Java Platform, Micro Edition (Java ME), a small-footprint virtual machine environment taking advantage of Java's class and package download mechanisms, its language, and the popular Sun development environment. As a result, Java ME is a very rich development environment, widely portable across many handsets and mobile computing devices.

BREW is an operating system devised by Qualcomm (to compete with Java, as its name implies). Its advantage is that a chipmaker designed it for its own chip line; consequently the OS is very close to the hardware and very efficient for CDMA handsets. Some restrictions and certification requirements from Qualcomm are however a drawback, which pushes the industry to consider more open solutions.

Windows Mobile was slower to gain acceptance, with a limited and somewhat unstable version 5; but versions 6.x are gaining momentum on higher-end handsets that integrate widely used Windows applications such as Office, email, instant messaging, and others.

Symbian and Google's Open Handset Alliance (Android) are other initiatives to produce open operating systems for mobile applications.

Last but not least, Apple produces an increasingly popular software development kit for iPhone and iPod Touch applications.

In many cases, applications are produced by third-party developers who specialize in understanding different mobile platforms. Consequently, network operators have a certification process to ensure applications behave acceptably on the bandwidth-constrained wireless network. For instance, applications are not allowed to continuously ping the network or make overly demanding over-the-air requests (see for instance location-based services).

Handsets may have the most popular applications preloaded. In fact, small portions of the applications, called stubs, are often preloaded to start with; these can download the remainder of the application as the end user needs it.
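
The stub pattern amounts to a tiny preloaded launcher that fetches the full application on first use, as in this Python sketch; the download URL and cache path are hypothetical.

    # Stub launcher: download the rest of the application on first run.
    import os
    import urllib.request

    APP_CACHE = "/data/apps/full_app.bin"   # hypothetical local path
    APP_URL = "https://apps.carrier.example.net/full_app.bin"

    def run_from(path):
        print(f"launching application from {path}")  # placeholder

    def launch():
        if not os.path.exists(APP_CACHE):   # first run: fetch remainder
            urllib.request.urlretrieve(APP_URL, APP_CACHE)
        run_from(APP_CACHE)                 # hand off to the full app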

10.8 Industry Trends and Outlook

Global availability is becoming a reality. GSM has long touted global roaming capabilities, and additional convergence efforts around a common Long Term Evolution (LTE), based on a flat IP architecture, will help further. Roaming is a large business opportunity: a key value-added service, as well as a revenue generator on the inbound side. Data roaming has long been slow to progress; the first technical obstacles have been solved (SS7 connectivity, authentication, and fraud issues), and open IP architectures certainly help along the way. Business elements, billing, and roaming agreements sometimes remain a problem.