
A PROJECT ON

Linear Companding Transform for the Reduction of Peak-to-Average Power Ratio of OFDM Signals

ABSTRACT

A major drawback of orthogonal frequency-division multiplexing (OFDM) signals is their high peak-to-average power ratio (PAPR), which causes serious degradation in performance when a nonlinear power amplifier (PA) is used. Companding transform (CT) is a well-known method to reduce PAPR without restrictions on system parameters such as number of subcarriers, frame format and constellation type. Recently, a linear nonsymmetrical companding transform (LNST) that has better performance than logarithmic-based transforms such as μ-law companding was proposed. In this paper, a new linear companding transform (LCT) with more design flexibility than LNST is proposed. Computer simulations show that the proposed transform has a better PAPR reduction and bit error rate (BER) performance than LNST with better power spectral density (PSD).

CHAPTER 1

INTRODUCTION

ORTHOGONAL frequency division multiplexing (OFDM) has been attracting substantial attention due to its excellent performance under severe channel conditions. The rapidly growing applications of OFDM include WiMAX, DVB/DAB and 4G wireless systems.

OVERVIEW

Initial proposals for OFDM were made in the 60s and the 70s. It has taken more than a quarter of a century for this technology to move from the research domain to the industry. The concept of OFDM is quite simple, but the practicality of implementing it has many complexities; accordingly, this project is implemented fully in software. OFDM depends on the orthogonality principle. Orthogonality means that the sub-carriers are mutually orthogonal, so that cross talk between co-channels is eliminated and inter-carrier guard bands are not required. This greatly simplifies the design of both the transmitter and receiver; unlike conventional FDM, a separate filter for each sub-channel is not required.

Orthogonal Frequency Division Multiplexing (OFDM) is a digital multi-carrier modulation scheme which uses a large number of closely spaced orthogonal sub-carriers. A single stream of data is split into parallel streams, each of which is coded and modulated onto a sub-carrier. Each sub-carrier is modulated with a conventional modulation scheme (such as quadrature amplitude modulation) at a low symbol rate, maintaining data rates similar to conventional single-carrier modulation schemes in the same bandwidth. Thus the high bit rate carried on a single carrier is reduced to lower bit rates on the sub-carriers.

In practice, OFDM signals are generated and detected using the Fast Fourier Transform algorithm. OFDM has developed into a popular scheme for wideband digital communication, wireless as well as over copper wires. FDM systems have actually been common for many decades. However, in FDM the carriers are all independent of each other; there is a guard period in between them and no overlap whatsoever. This works well because in an FDM system each carrier carries data meant for a different user or application. FM radio is an FDM system. FDM systems are not ideal for wideband systems, because using FDM would waste too much bandwidth. This is where OFDM makes sense. In OFDM, sub-carriers overlap. They are orthogonal because the peak of one sub-carrier occurs when the other sub-carriers are at zero. This is achieved by realizing all the sub-carriers together using the Inverse Fast Fourier Transform (IFFT). The demodulator at the receiver recovers the parallel channels from an FFT block. Note that each sub-carrier can still be modulated independently.
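As a concrete illustration of the IFFT/FFT view described above, the following minimal Python (NumPy) sketch maps QPSK data onto orthogonal sub-carriers with an IFFT and recovers it with an FFT. The number of sub-carriers, the random data and the QPSK mapping are illustrative assumptions, not parameters taken from this project.

```python
import numpy as np

# Minimal OFDM modulator/demodulator sketch: QPSK symbols are treated as
# frequency-domain data, an IFFT builds the time-domain OFDM symbol, and
# the receiver FFT recovers the sub-carrier data.
N = 64                                            # number of sub-carriers (assumed)
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N)

# Map bit pairs to QPSK points (+-1 +-1j)/sqrt(2)
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx_time = np.fft.ifft(symbols)                    # one OFDM symbol in the time domain
rx_freq = np.fft.fft(tx_time)                     # demodulation at the receiver

print(np.allclose(rx_freq, symbols))              # True: the overlapping sub-carriers do not interfere
```

Because each sub-carrier completes an integer number of cycles over the symbol period, the FFT returns the transmitted symbols exactly in this noiseless example.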

CHAPTER 2

Background:

Most first generation systems were introduced in the mid 1980s, and can be characterized by the use of analog transmission techniques and the use of simple multiple access techniques such as Frequency Division Multiple Access (FDMA). First generation telecommunications systems such as the Advanced Mobile Phone Service (AMPS) only provided voice communications. They also suffered from a low user capacity, and security problems due to the simple radio interface used. Second generation systems were introduced in the early 1990s, and all use digital technology. This provided an increase in user capacity of around three times, achieved by compressing the voice waveforms before transmission.

Third generation systems are an extension of the complexity of second-generation systems and are expected to be introduced after the year 2000. The system capacity is expected to be increased to over ten times that of the original first generation systems. This is to be achieved by using complex multiple access techniques such as Code Division Multiple Access (CDMA) or an extension of TDMA, and by improving the flexibility of the services available. The telecommunications industry faces the problem of providing telephone services to rural areas, where the customer base is small but the cost of installing a wired phone network is very high. One method of reducing the high infrastructure cost of a wired system is to use a fixed wireless radio network. The problem with this is that, for rural and urban areas, large cell sizes are required to get sufficient coverage.

Fig.1.1 shows the evolution of current services and networks to the aim of combining them into a unified third generation network. Many currently separate systems and services such as radio paging, cordless telephony, satellite phones and private radio systems for companies etc, will be combined so that all these services will be provided by third generation telecommunications systems.

Fig. 1.1 Evolution of current networks to the next generation of wireless networks.

Currently, Global System for Mobile telecommunications (GSM) technology is being applied to fixed wireless phone systems in rural areas. However, GSM uses time division multiple access (TDMA), which has a high symbol rate leading to problems with multipath causing inter-symbol interference. Several techniques are under consideration for the next generation of digital phone systems, with the aim of improving cell capacity, multipath immunity, and flexibility. These include CDMA and OFDM. Both these techniques could be applied to providing a fixed wireless system for rural areas. However, each technique has different properties, making it more suited for specific applications.

OFDM is currently being used in several new radio broadcast systems including the proposal for high definition digital television (HDTV) and digital audio broadcasting (DAB). However, little research has been done into the use of OFDM as a transmission method for mobile telecommunications systems. In CDMA, all users transmit in the same broad frequency band using specialized codes as a basis of channelization. Both the base station and the mobile station know these codes, which are used to modulate the data sent. OFDM/COFDM allows many users to transmit in an allocated band by subdividing the available bandwidth into many narrow bandwidth carriers. Each user is allocated several carriers in which to transmit their data.

The transmission is generated in such a way that the carriers used are orthogonal to one another, thus allowing them to be packed together much closer than standard frequency division multiplexing (FDM). This leads to OFDM/COFDM providing a high spectral efficiency.

Orthogonal Frequency Division Multiplexing is a scheme used in the area of high-data-rate mobile wireless communications such as cellular phones, satellite communications and digital audio broadcasting. This technique is mainly utilized to combat inter-symbol interference.

Multiple Access Techniques:

Multiple access schemes are used to allow many simultaneous users to use the same fixed bandwidth radio spectrum. In any radio system, the bandwidth allocated to it is always limited. For mobile phone systems the total bandwidth is typically 50 MHz, which is split in half to provide the forward and reverse links of the system. Sharing of the spectrum is required in order to increase the user capacity of any wireless network. FDMA, TDMA and CDMA are the three major methods of sharing the available bandwidth among multiple users in a wireless system. There are many extensions and hybrid techniques for these methods, such as OFDM and hybrid TDMA/FDMA systems. However, an understanding of the three major methods is required for understanding any extensions to them.

Frequency Division Multiple Access (FDMA):

In Frequency Division Multiple Access (FDMA), the available bandwidth is subdivided into a number of narrower band channels. Each user is allocated a unique frequency band in which to transmit and receive. During a call, no other user can use the same frequency band.

Each user is allocated a forward link channel (from the base station to the mobile phone) and a reverse channel (back to the base station), each being a one-way link. The transmitted signal on each of the channels is continuous, allowing analog transmissions. The bandwidth of FDMA channels is generally low (30 kHz) as each channel only supports one user. FDMA is used as the primary breakup of large allocated frequency bands and is used as part of most multi-channel systems.

Fig. 1.2 & Fig. 1.3 show the allocation of the available bandwidth into several channels.

Time Division Multiple Access:

Time Division Multiple Access (TDMA) divides the available spectrum into multiple time slots, by giving each user a time slot in which they can transmit or receive. Fig. 1.4 shows how the time slots are provided to users in a round robin fashion, with each user being allotted one time slot per frame. TDMA systems transmit data in a buffer and burst method, thus the transmission of each channel is non-continuous.

Fig. 1.4 TDMA scheme, where each user is allocated a small time slot

The input data to be transmitted is buffered over the previous frame and burst transmitted at a higher rate during the time slot for the channel. TDMA cannot send analog signals directly due to the buffering required, thus it is only used for transmitting digital data. TDMA can suffer from multipath effects, as the transmission rate is generally very high. This leads to the multipath signals causing inter-symbol interference. TDMA is normally used in conjunction with FDMA to subdivide the total available bandwidth into several channels. This is done to reduce the number of users per channel, allowing a lower data rate to be used. This helps reduce the effect of delay spread on the transmission. Fig. 1.5 shows the use of TDMA with FDMA. Each channel based on FDMA is further subdivided using TDMA, so that several users can transmit on the one channel. This type of transmission technique is used by most digital second generation mobile phone systems. For GSM, the total allocated bandwidth of 25 MHz is divided into 125 channels of 200 kHz each using FDMA. These channels are then subdivided further by using TDMA so that each 200 kHz channel allows 8-16 users.

Fig. 1.5 TDMA/FDMA hybrid, showing that the bandwidth is split into frequency channels and time slots.

Code Division Multiple Access:

Code Division Multiple Access (CDMA) is a spread spectrum technique that uses neither frequency channels nor time slots. In CDMA, the narrow band message (typically digitized voice data) is multiplied by a large bandwidth signal, which is a pseudo random noise code (PN code). All users in a CDMA system use the same frequency band and transmit simultaneously. The transmitted signal is recovered by correlating the received signal with the PN code used by the transmitter. Fig. 1.6 shows the general use of the spectrum using CDMA.

Some of the properties that have made CDMA useful are: signal hiding and non-interference with existing systems, anti-jam and interference rejection, information security, accurate ranging, multiple user access, and multipath tolerance.

Fig. 1.6 Code Division Multiple Access (CDMA)

Fig. 1.7 shows the process of a CDMA transmission. The data to be transmitted (a) is spread before transmission by modulating the data using a PN code. This broadens the spectrum as shown in (b). In this example the process gain is 125, as the spread spectrum bandwidth is 125 times greater than the data bandwidth. Part (c) shows the received signal. This consists of the required signal, plus background noise, and any interference from other CDMA users or radio sources.

The received signal is recovered by multiplying the signal by the original spreading code. This process causes the wanted received signal to be de-spread back to the original transmitted data. However, all other signals, which are uncorrelated with the PN spreading code used, become more spread. The wanted signal in (d) is then filtered, removing the wide-spread interference and noise signals.

Fig. 1.7 Basic CDMA Generation.

CDMA Generation:

CDMA is achieved by modulating the data signal with a pseudo random noise sequence (PN code), which has a chip rate higher than the bit rate of the data. The PN code sequence is a sequence of ones and zeros (called chips), which alternate in a random fashion. The data is modulated by modulo-2 adding the data with the PN code sequence. This can also be done by multiplying the signals, provided the data and PN code are represented by 1 and -1 instead of 1 and 0. Fig. 1.8 shows a basic CDMA transmitter.

Fig. 1.8 Simple direct sequence modulator

The PN code used to spread the data can be of two main types. A short PN code (typically 10-128 chips in length) can be used to modulate each data bit. The short PN code is then repeated for every data bit, allowing for quick and simple synchronization of the receiver. Fig. 1.9 shows the generation of a CDMA signal using a 10-chip length short code. Alternatively, a long PN code can be used. Long codes are generally thousands to millions of chips in length, and thus are only repeated infrequently. Because of this they are useful for added security, as they are more difficult to decode.

Fig.1.9 Direct sequence signals
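To make the spreading operation concrete, here is a small, self-contained sketch of direct-sequence spreading with a short PN code, in the same spirit as Fig. 1.9. The 10-chip code, the data length and the per-bit decision rule are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.integers(0, 2, size=8)             # data bits (0/1)
pn = rng.integers(0, 2, size=10)              # 10-chip short PN code (illustrative)

# Spreading: repeat the PN code for every data bit and modulo-2 add (XOR).
spread = np.bitwise_xor(np.repeat(data, len(pn)), np.tile(pn, len(data)))

# Despreading: XOR with the same code, then decide each bit from its chips.
chips = np.bitwise_xor(spread, np.tile(pn, len(data))).reshape(len(data), -1)
recovered = (chips.mean(axis=1) > 0.5).astype(int)
print(np.array_equal(recovered, data))        # True in this noiseless example
```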

CHAPTER 3

THEORY & RESEARCH

Theory & Research Introduction:

The OFDM technology was first conceived in the 1960s and 1970s during research into minimizing ISI, due to multipath. The expression digital communications in its basic form is the mapping of digital information into a waveform called a carrier signal, which is a transmitted electromagnetic pulse or wave at a steady base frequency of alternation on which information can be imposed by increasing signal strength, varying the base frequency, varying the wave phase, or other means. In this instance, orthogonality is an implication of a definite and fixed relationship between all carriers in the collection. Multiplexing is the process of sending multiple signals or streams of information on a carrier at the same time in the form of a single, complex signal and then recovering the separate signals at the receiving end.

Modulation is the addition of information to an electronic or optical signal carrier. Modulation can be applied to direct current (mainly by turning it on and off), to alternating current, and to optical signals. One can think of blanket waving as a form of modulation used in smoke signal transmission (the carrier being a steady stream of smoke). In telecommunications in general, a channel is a separate path through which signals can flow. In optical fiber transmission using dense wavelength-division multiplexing, a channel is a separate wavelength of light within a combined, multiplexed light stream. This project focuses on the telecommunications definition of a channel.

OFDM Principles:

OFDM is a special form of Multi Carrier Modulation (MCM) with densely spaced sub-carriers with overlapping spectra, thus allowing for multiple access. MCM is the principle of transmitting data by dividing the stream into several bit streams, each of which has a much lower bit rate, and by using these sub-streams to modulate several carriers. This technique is being investigated as the next generation transmission scheme for mobile wireless communications networks.

Fourier Transform:

Back in the 1960s, the application of OFDM was not very practical. This was because at that point, several banks of oscillators were needed to generate the carrier frequencies necessary for sub-channel transmission. Since this proved to be difficult to accomplish during that time period, the scheme was deemed as not feasible.

However, the advent of the Fourier transform eliminated the initial complexity of the OFDM scheme, where the harmonically related frequencies generated by Fourier and inverse Fourier transforms are used to implement OFDM systems. The Fourier transform is used in linear systems analysis, antenna studies, etc. The Fourier transform, in essence, decomposes or separates a waveform or function into sinusoids of different frequencies which sum to the original waveform. It identifies or distinguishes the different frequency sinusoids and their respective amplitudes.

The Fourier transform of f(x) is defined as

F(w) = ∫_{-∞}^{+∞} f(x) e^{-jwx} dx          (1)

and its inverse is denoted by

f(x) = (1/2π) ∫_{-∞}^{+∞} F(w) e^{jwx} dw          (2)

However, the digital age forced a change upon the traditional form of the Fourier transform to encompass the discrete values that exist in all digital systems. The modified series was called the Discrete Fourier Transform (DFT). The DFT of a discrete-time signal x(n) is defined as

X(k) = Σ_{n=1}^{N} x(n) e^{-j2π(n-1)(k-1)/N},   1 ≤ k ≤ N          (3)

and its associated inverse is denoted by

x(n) = (1/N) Σ_{k=1}^{N} X(k) e^{j2π(n-1)(k-1)/N},   1 ≤ n ≤ N          (4)

However, in OFDM, another form of the DFT is used, called the Fast Fourier Transform (FFT), which is a DFT algorithm developed in 1965. This new transform reduced the number of computations from something on the order of

N²          (5)

to the order of (N/2)·log2(N).
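The reconstructed equations above can be checked numerically. The short sketch below evaluates the DFT sum directly (on the order of N² operations) and compares it with NumPy's FFT, which computes the same transform far more efficiently; the signal length and test data are arbitrary.

```python
import numpy as np

def dft_direct(x):
    """Direct evaluation of the DFT sum, requiring on the order of N^2 operations."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N x N matrix of complex exponentials
    return W @ x

x = np.random.default_rng(2).standard_normal(64) + 0j
print(np.allclose(dft_direct(x), np.fft.fft(x)))   # True: the FFT computes the same DFT
```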

Orthogonality:

In geometry, orthogonal means "involving right angles" (from Greek ortho, meaning right, and gon meaning angled). The term has been extended to general use, meaning the characteristic of being independent (relative to something else). It also can mean: non-redundant, non-overlapping, or irrelevant. Orthogonality is defined for both real and complex valued functions. The functions ψm(t) and ψn(t) are said to be orthogonal with respect to each other over the interval a < t < b if they satisfy the condition

∫_a^b ψm(t) ψn*(t) dt = 0,   where n ≠ m          (6)
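A quick numerical check of condition (6) for two real sinusoids is shown below; the choice of harmonics and the one-period integration interval are illustrative assumptions.

```python
import numpy as np

# Two harmonics of a common fundamental are orthogonal over one period (m != n).
m, n, T = 3, 5, 1.0
t = np.linspace(0.0, T, 100_001)
inner = np.trapz(np.cos(2 * np.pi * m * t / T) * np.cos(2 * np.pi * n * t / T), t)
print(abs(inner) < 1e-6)   # True: the inner product over a full period is essentially zero
```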

OFDM Carriers:

As mentioned before, OFDM is a special form of MCM, and the OFDM time domain waveforms are chosen such that mutual orthogonality is ensured even though sub-carrier spectra may overlap. With respect to OFDM, it can be stated that orthogonality is an implication of a definite and fixed relationship between all carriers in the collection. It means that each carrier is positioned such that it occurs at the zero energy frequency point of all other carriers. The sinc function, illustrated in Fig. 2.1, exhibits this property and it is used as a carrier in an OFDM system.

Fig. 2.1 OFDM sub-carriers in the frequency domain (fu is the sub-carrier spacing)

Orthogonal Frequency Division Multiplexing:

Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier transmission technique, which divides the available spectrum into many carriers, each one being modulated by a low rate data stream. OFDM is similar to FDMA in that the multiple user access is achieved by subdividing the available bandwidth into multiple channels that are then allocated to users. However, OFDM uses the spectrum much more efficiently by spacing the channels much closer together. This is achieved by making all the carriers orthogonal to one another, preventing interference between the closely spaced carriers.

Coded Orthogonal Frequency Division Multiplexing (COFDM) is the same as OFDM except that forward error correction is applied to the signal before transmission. This is to overcome errors in the transmission due to lost carriers from frequency selective fading, channel noise and other propagation effects. For this discussion the terms OFDM and COFDM are used interchangeably, as the main focus of this project is on OFDM, but it is assumed that any practical system will use forward error correction, and thus would be COFDM.

In FDMA each user is typically allocated a single channel, which is used to transmit all the user information. The bandwidth of each channel is typically 10 kHz-30 kHz for voice communications. However, the minimum required bandwidth for speech is only 3 kHz. The allocated bandwidth is made wider than the minimum amount required to prevent channels from interfering with one another. This extra bandwidth is to allow for signals from neighboring channels to be filtered out, and to allow for any drift in the center frequency of the transmitter or receiver. In a typical system up to 50% of the total spectrum is wasted due to the extra spacing between channels.

This problem becomes worse as the channel bandwidth becomes narrower and the frequency band increases. Most digital phone systems use vocoders to compress the digitized speech. This allows for an increased system capacity due to a reduction in the bandwidth required for each user. Current vocoders require a data rate somewhere between 4-13 kbps, depending on the quality of the sound and the type used. Thus each user only requires a minimum bandwidth of somewhere between 2-7 kHz, using QPSK modulation. However, simple FDMA does not handle such narrow bandwidths very efficiently. TDMA partly overcomes this problem by using wider bandwidth channels, which are used by several users. Multiple users access the same channel by transmitting their data in time slots. Thus, many low data rate users can be combined together to transmit in a single channel which has a bandwidth sufficient so that the spectrum can be used efficiently.

There are, however, two main problems with TDMA. There is an overhead associated with the changeover between users due to time slotting on the channel. A changeover time must be allocated to allow for any tolerance in the start time of each user, due to propagation delay variations and synchronization errors. This limits the number of users that can be sent efficiently in each channel. In addition, the symbol rate of each channel is high (as the channel handles the information from multiple users), resulting in problems with multipath delay spread. OFDM overcomes most of the problems with both FDMA and TDMA. OFDM splits the available bandwidth into many narrow band channels (typically 100-8000). The carriers for each channel are made orthogonal to one another, allowing them to be spaced very close together, with no overhead as in the FDMA example. Because of this there is no great need for users to be time multiplexed as in TDMA, thus there is no overhead associated with switching between users.

The orthogonality of the carriers means that each carrier has an integer number of cycles over a symbol period. Due to this, the spectrum of each carrier has a null at the center frequency of each of the other carriers in the system. This results in no interference between the carriers, allowing them to be spaced as close as theoretically possible. This overcomes the problem of the overhead carrier spacing required in FDMA. Each carrier in an OFDM signal has a very narrow bandwidth (e.g. 1 kHz), thus the resulting symbol rate is low. This results in the signal having a high tolerance to multipath delay spread, as the delay spread must be very long to cause significant ISI (e.g. > 500 μs).

OFDM generation:

To generate OFDM successfully the relationship between all the carriers must be carefully controlled to maintain the orthogonality of the carriers. For this reason, OFDM is generated by firstly choosing the spectrum required, based on the input data, and modulation scheme used. Each carrier to be produced is assigned some data to transmit. The required amplitude and phase of the carrier is then calculated based on the modulation scheme (typically differential BPSK, QPSK, or QAM).

The required spectrum is then converted back to its time domain signal using an Inverse Fourier Transform. In most applications, an Inverse Fast Fourier Transform (IFFT) is used. The IFFT performs the transformation very efficiently, and provides a simple way of ensuring the carrier signals produced are orthogonal. The Fast Fourier Transform (FFT) transforms a cyclic time domain signal into its equivalent frequency spectrum. This is done by finding the equivalent waveform, generated by a sum of orthogonal sinusoidal components. The amplitude and phase of the sinusoidal components represent the frequency spectrum of the time domain signal.

The IFFT performs the reverse process, transforming a spectrum (amplitude and phase of each component) into a time domain signal. An IFFT converts a number of complex data points, whose length is a power of 2, into a time domain signal of the same number of points. Each data point in the frequency spectrum used for an FFT or IFFT is called a bin. The orthogonal carriers required for the OFDM signal can be easily generated by setting the amplitude and phase of each bin, then performing the IFFT. Since each bin of an IFFT corresponds to the amplitude and phase of a set of orthogonal sinusoids, the reverse process guarantees that the carriers generated are orthogonal.

Fig. 2.2 OFDM Block Diagram

Fig. 2.2 shows the setup for a basic OFDM transmitter and receiver. The signal generated is at baseband, thus the signal is filtered, then stepped up in frequency before transmission. OFDM time domain waveforms are chosen such that mutual orthogonality is ensured even though sub-carrier spectra may overlap. Typically QAM or Differential Quadrature Phase Shift Keying (DQPSK) modulation schemes are applied to the individual sub-carriers. To prevent ISI, the individual blocks are separated by guard intervals wherein the blocks are periodically extended.

Modulation Techniques:

Quadrature Amplitude Modulation (QAM):

This modulation scheme is also called quadrature carrier multiplexing. In fact, this modulation scheme enables two DSB-SC modulated signals to occupy the same transmission bandwidth at the receiver output; it is, therefore, known as a bandwidth-conservation scheme. The QAM transmitter consists of two separate balanced modulators, which are supplied with two carrier waves of the same frequency but differing in phase by 90°. The outputs of the two balanced modulators are added and transmitted.

Fig. 2.3 QAM System

The transmitted signal is thus given by

S(t) = A X1(t) cos(2πFc t) + A X2(t) sin(2πFc t)

Hence, the multiplexed signal consists of the in-phase component A X1(t) and the quadrature phase component A X2(t).
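The sketch below illustrates the quadrature carrier multiplexing equation above: two baseband messages share one carrier frequency on cosine and sine carriers and are separated again by coherent detection. The carrier frequency, message tones and the crude moving-average low-pass filter are illustrative assumptions, not values from this project.

```python
import numpy as np

fs, fc, A = 100_000, 5_000, 1.0                 # sample rate, carrier, amplitude (assumed)
t = np.arange(0, 0.01, 1 / fs)
x1 = np.cos(2 * np.pi * 200 * t)                # in-phase message X1(t)
x2 = np.sin(2 * np.pi * 300 * t)                # quadrature message X2(t)

# Quadrature carrier multiplexing: S(t) = A*X1(t)*cos(2*pi*Fc*t) + A*X2(t)*sin(2*pi*Fc*t)
s = A * x1 * np.cos(2 * np.pi * fc * t) + A * x2 * np.sin(2 * np.pi * fc * t)

def lowpass(v, taps=101):
    """Crude moving-average low-pass filter, good enough for this illustration."""
    return np.convolve(v, np.ones(taps) / taps, mode="same")

# Coherent detection: multiply by each carrier and low-pass to recover the messages.
x1_hat = 2 * lowpass(s * np.cos(2 * np.pi * fc * t)) / A
x2_hat = 2 * lowpass(s * np.sin(2 * np.pi * fc * t)) / A
```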

Balanced Modulator:

A DSB-SC signal is basically the product of the modulating or baseband signal and the carrier signal. Unfortunately, a single electronic device cannot generate a DSB-SC signal. The circuit needed to generate a DSB-SC signal is called a product modulator, i.e., a balanced modulator. We know that a non-linear resistance or a non-linear device may be used to produce AM, i.e., one carrier and two sidebands. However, a DSB-SC signal contains only two sidebands. Thus, if two non-linear devices such as diodes, transistors etc. are connected in balanced mode so as to suppress the carriers of each other, then only the sidebands are left, i.e., a DSB-SC signal is generated. Therefore, a balanced modulator may be defined as a circuit in which two non-linear devices are connected in a balanced mode to produce a DSB-SC signal.

Quadrature Phase Shift Keying (QPSK):

In communication systems, we have two main resources:
1. Transmission power
2. Channel bandwidth
If two or more bits are combined into one symbol, then the signaling rate is reduced, and hence the bandwidth required in the transmission channel is reduced. In QPSK, two successive bits in the data sequence are grouped together. This reduces the bit rate or signaling rate and thus reduces the bandwidth of the channel. In BPSK, when the symbol changes level, the phase of the carrier changes by 180°; because there are only two symbols in BPSK, the phase shift occurs in two levels only. However, in QPSK, two successive bits are combined, and this combination of two bits forms four distinct symbols. When the symbol changes to the next symbol, the phase of the carrier changes by a multiple of 90°.

S.No   Input successive bits     Symbol   Phase shift in carrier
I=1    1 (1 V), 0 (-1 V)         S1       π/4
I=2    0 (-1 V), 0 (-1 V)        S2       3π/4
I=3    0 (-1 V), 1 (1 V)         S3       5π/4
I=4    1 (1 V), 1 (1 V)          S4       7π/4

Generation of QPSK:

Here the input binary sequence is first converted into a bipolar NRZ type of signal. This signal is denoted by b(t). It represents binary 1 by +1 V and binary 0 by -1 V. The demultiplexer divides b(t) into two separate bit streams of the odd numbered and even numbered bits. Here Be(t) represents the even numbered sequence and Bo(t) represents the odd numbered sequence. The symbol duration of both of these sequences is 2Tb. Hence, each symbol consists of two bits.

Fig.2.4 Generation of QPSK

It may be observed that the first even bit occurs after the first odd bit. Hence, the even numbered bit sequence Be(t) starts with a delay of one bit period due to the first odd bit. Thus, the first symbol of Be(t) is delayed by one bit period Tb with respect to the first symbol of Bo(t). This delay of Tb is known as the offset. This shows that changes in the levels of Be(t) and Bo(t) cannot occur at the same time due to the offset or staggering. The bit stream Be(t) modulates the cosine carrier and Bo(t) modulates the sinusoidal carrier. These modulators are balanced modulators. The two quadrature carriers, √Ps cos(2πFc t) and √Ps sin(2πFc t), are shown in the figure. Due to the offset, the phase shift in the QPSK signal is limited to π/2.
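A small baseband sketch of the offset (staggered) QPSK generation described above is given below: the bipolar NRZ stream is demultiplexed into odd and even sub-streams, the even stream is delayed by one bit period, and the two streams modulate quadrature carriers. The bit pattern, samples per bit and carrier frequency are illustrative assumptions.

```python
import numpy as np

bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
b = 2 * bits - 1                          # bipolar NRZ: 1 -> +1 V, 0 -> -1 V
bo, be = b[0::2], b[1::2]                 # odd- and even-numbered bit streams

sps = 80                                  # samples per bit period Tb (assumed)
bo_wave = np.repeat(bo, 2 * sps)          # each symbol lasts 2*Tb
be_wave = np.repeat(be, 2 * sps)
# Offset: Be(t) starts one bit period Tb later (leading gap filled with zeros here).
be_wave = np.concatenate([np.zeros(sps), be_wave])[: len(bo_wave)]

fc = 2.0                                  # carrier cycles per bit period (assumed)
t = np.arange(len(bo_wave)) / sps
s = be_wave * np.cos(2 * np.pi * fc * t) + bo_wave * np.sin(2 * np.pi * fc * t)
```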

FFT & IFFT:

In practice, OFDM systems are implemented using a combination of FFT and IFFT blocks that are mathematically equivalent versions of the DFT and IDFT, respectively, but more efficient to implement. An OFDM system treats the source symbols (e.g., the QPSK or QAM symbols that would be present in a single carrier system) at the transmitter as though they are in the frequency domain. These symbols are used as the inputs to an IFFT block that brings the signal into the time domain. The IFFT takes in N symbols at a time, where N is the number of sub-carriers in the system. Each of these N input symbols has a symbol period of T seconds. Recall that the basis functions for an IFFT are N orthogonal sinusoids. These sinusoids each have a different frequency and the lowest frequency is DC. Each input symbol acts like a complex weight for the corresponding sinusoidal basis function. Since the input symbols are complex, the value of the symbol determines both the amplitude and phase of the sinusoid for that sub-carrier.

The IFFT output is the summation of all N sinusoids. Thus, the IFFT block provides a simple way to modulate data onto N orthogonal sub-carriers. The block of N output samples from the IFFT makes up a single OFDM symbol. The length of the OFDM symbol is NT, where T is the IFFT input symbol period mentioned above.

Fig. 2.5 FFT & IFFT diagram

After some additional processing, the time-domain signal that results from the IFFT is transmitted across the channel. At the receiver, an FFT block is used to process the received signal and bring it into the frequency domain. Ideally, the FFT output will be the original symbols that were sent to the IFFT at the transmitter. When plotted in the complex plane, the FFT output samples will form a constellation, such as 16-QAM. However, there is no notion of a constellation for the time-domain signal. When plotted on the complex plane, the time-domain signal forms a scatter plot with no regular shape. Thus, any receiver processing that uses the concept of a constellation (such as symbol slicing) must occur in the frequency domain.

Adding a Guard Period to OFDM:

One of the most important properties of OFDM transmissions is the robustness against multipath delay spread. This is achieved by having a long symbol period, which minimizes the ISI. The level of robustness can in fact be increased even more by the addition of a guard period between transmitted symbols. The guard period allows time for multipath signals from the previous symbol to die away before the information from the current symbol is gathered. The most effective guard period to use is a cyclic extension of the symbol. If a mirror in time of the end of the symbol waveform is put at the start of the symbol as the guard period, this effectively extends the length of the symbol, while maintaining the orthogonality of the waveform. Using this cyclically extended symbol, the samples required for performing the FFT (to decode the symbol) can be taken anywhere over the length of the symbol. This provides multipath immunity as well as symbol time synchronization tolerance. As long as the multipath delay echoes stay within the guard period duration, there is strictly no limitation regarding the signal level of the echoes: they may even exceed the signal level of the shorter path. The signal energy from all paths just adds at the input to the receiver, and since the FFT is energy conservative, the whole available power feeds the decoder. If the delay spread is longer than the guard interval then it begins to cause ISI. However, provided the echoes are sufficiently small they do not cause significant problems. This is true most of the time, as multipath echoes delayed longer than the guard period will have been reflected off very distant objects. Other variations of guard periods are possible. One possible variation is to have half the guard period a cyclic extension of the symbol, as above, and the other half a zero amplitude signal. This will result in a signal as shown in Fig. 2.6. Using this method the symbols can be easily identified. This possibly allows for symbol timing to be recovered from the signal, simply by applying envelope detection. The disadvantage of using this guard period method is that the zero period does not give any multipath tolerance, thus the effective active guard period is halved in length. It is interesting to note that this guard period method has not been mentioned in any of the research papers read, and it is still not clear whether symbol timing needs to be recovered using this method.

Fig. 2.6 Section of an OFDM signal showing 5 symbols, using a guard period which is half a cyclic extension of the symbol, and half a zero amplitude signal.
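The cyclic-extension guard period described above is easy to demonstrate. In the sketch below, the tail of each IFFT output block is copied to its front; after a short multipath channel whose delay spread is smaller than the guard period, discarding the prefix and equalizing each sub-carrier with a single tap recovers the data exactly. The block sizes, channel taps and QPSK data are assumptions for illustration only.

```python
import numpy as np

N, CP = 64, 16                                          # sub-carriers and guard length (assumed)
rng = np.random.default_rng(3)
freq_data = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), size=N)

ofdm_symbol = np.fft.ifft(freq_data)
tx = np.concatenate([ofdm_symbol[-CP:], ofdm_symbol])   # cyclic extension used as guard period

h = np.array([1.0, 0.0, 0.4, 0.0, 0.2])                 # multipath channel, delay spread < guard
rx = np.convolve(tx, h)[: len(tx)]

# Receiver: discard the guard period, FFT, then one-tap equalization per sub-carrier.
rx_freq = np.fft.fft(rx[CP:CP + N])
equalized = rx_freq / np.fft.fft(h, N)
print(np.allclose(equalized, freq_data))                # True: the ISI falls inside the guard period
```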

CHAPTER 5

PROPAGATION CHANNEL CHARACTERISTICS

Propagation Characteristics of Mobile Radio Channels:

In an ideal radio channel, the received signal would consist of only a single direct path signal, which would be a perfect reconstruction of the transmitted signal. However, in a real channel, the signal is modified during transmission in the channel. It is known that the performance of any wireless system is affected by the medium of propagation, namely the characteristics of the channel. In telecommunications in general, a channel is a separate path through which signals can flow. In the ideal situation, a direct line of sight between the transmitter and receiver is desired. But it is not a perfect world; hence it is imperative to understand what goes on in the channel so that the original signal can be reconstructed with the least number of errors. The received signal consists of a combination of attenuated, reflected, refracted, and diffracted replicas of the transmitted signal. On top of all this, the channel adds noise to the signal and can cause a shift in the carrier frequency if the transmitter or receiver is moving (Doppler effect). Understanding of these effects on the signal is important because the performance of a radio system is dependent on the radio channel characteristics.

Attenuation:

Attenuation is the drop in the signal power when transmitting from one point to another. It can be caused by the transmission path length, obstructions in the signal path, and multipath effects. Fig. 3.1 shows some of the radio propagation effects that cause attenuation. Any objects which obstruct the line of sight signal from the transmitter to the receiver can cause attenuation.

Fig. 3.1 Some channel characteristics

Shadowing of the signal can occur whenever there is an obstruction between the transmitter and receiver. It is generally caused by buildings and hills, and is the most important environmental attenuation factor. Shadowing is most severe in heavily built up areas, due to the shadowing from buildings. However, hills can cause a large problem due to the large shadow they produce. Radio signals diffract off the boundaries of obstructions, thus preventing total shadowing of the signals behind hills and buildings. However, the amount of diffraction is dependent on the radio frequency used, with low frequencies diffracting more than high frequency signals. Thus high frequency signals, especially Ultra High Frequency (UHF) and microwave signals, require line of sight for adequate signal strength. To overcome the problem of shadowing, transmitters are usually elevated as high as possible to minimize the number of obstructions. Typical amounts of variation in attenuation due to shadowing are shown in Table 3.1.

Table 3.1 Typical attenuation in a radio channel.

Shadowed areas tend to be large, resulting in the rate of change of the signal power being slow. For this reason, it is termed slow-fading, or lognormal shadowing.

Multipath Effects:

Rayleigh fading:

In a radio link, the RF signal from the transmitter may be reflected from objects such as hills, buildings, or vehicles. This gives rise to multiple transmission paths at the receiver. Fig. 3.2 shows some of the possible ways in which multipath signals can occur.

Fig.3.2 Multipath Signals

Because of multipath, the phase of the signal may result in constructive or destructive interference when it reaches the receiver. This is experienced over very short distances (typically at half wavelength distances), and is thus termed fast fading. These variations can vary from 10-30 dB over a short distance.

Fig. 3.3 Typical Rayleigh fading while the mobile unit is moving.

The Rayleigh distribution is commonly used to describe the statistical time varying nature of the received signal power. It describes the probability of the signal level being received due to fading. Table 3.2 shows the probability of the signal level for the Rayleigh distribution.

Table 3.2 Cumulative distributions for Rayleigh distribution

Frequency Selective Fading:

In any radio transmission, the channel spectral response is not flat. It has dips or fades in the response due to reflections causing cancellation of certain frequencies at the receiver. Reflections off nearby objects (e.g. ground, buildings, trees, etc.) can lead to multipath signals of similar signal power as the direct signal. This can result in deep nulls in the received signal power due to destructive interference. For narrow bandwidth transmissions, if the null in the frequency response occurs at the transmission frequency then the entire signal can be lost. This can be partly overcome in two ways.

By transmitting a wide bandwidth or spread spectrum signal, as in CDMA, any dips in the spectrum only result in a small loss of signal power, rather than a complete loss. Another method is to split the transmission up into many small bandwidth carriers, as is done in a COFDM/OFDM transmission. The original signal is spread over a wide bandwidth; thus, any nulls in the spectrum are unlikely to occur at all of the carrier frequencies. This will result in only some of the carriers being lost, rather than the entire signal. The information in the lost carriers can be recovered provided enough forward error correction is sent.

Delay Spread:

The received radio signal from a transmitter consists typically of a direct signal, plus reflections off objects such as buildings, mountains, and other structures. The reflected signals arrive at a later time than the direct signal because of the extra path length, giving rise to a slightly different arrival time of the transmitted pulse, thus spreading the received energy. Delay spread is the time spread between the arrival of the first and last multipath signal seen by the receiver. In a digital system, the delay spread can lead to inter-symbol interference. This is due to the delayed multipath signal overlapping with the following symbols. This can cause significant errors in high bit rate systems, especially when using time division multiplexing (TDMA). Fig. 3.4 shows the effect of inter-symbol interference due to delay spread on the received signal. As the transmitted bit rate is increased the amount of inter-symbol interference also increases. The effect starts to become very significant when the delay spread is greater than about 50% of the bit time.

Fig. 3.4 Multipath delay spread

Table 3.3 shows the typical delay spread that can occur in various environments. The maximum delay spread in an outdoor environment is approximately 20 μs, thus significant inter-symbol interference can occur at bit rates as low as 25 kbps.

Table. 3.3 Typical Delay Spread

Inter-symbol interference can be minimized in several ways. One method is to reduce the symbol rate by reducing the data rate for each channel (i.e. split the bandwidth into more channels using frequency division multiplexing). Another is to use a coding scheme which is tolerant to inter-symbol interference such as CDMA.

Doppler Shift:

When a wave source and a receiver are moving relative to one another, the frequency of the received signal will not be the same as that of the source. When they are moving toward each other the frequency of the received signal is higher than that of the source, and when they are moving away from each other the frequency decreases. This is called the Doppler effect. An example of this is the change of pitch in a car's horn as it approaches and then passes by. This effect becomes important when developing mobile radio systems. The amount the frequency changes due to the Doppler effect depends on the relative motion between the source and receiver and on the speed of propagation of the wave. The Doppler shift in frequency can be written as:

Δf = (v / c) · f0

where Δf is the change in frequency of the source seen at the receiver, f0 is the frequency of the source, v is the speed difference between the source and the receiver, and c is the speed of light. For example, let f0 = 1 GHz and v = 60 km/hr (16.7 m/s); then the Doppler shift will be:

Δf = (16.7 / 3×10^8) × 10^9 ≈ 55.7 Hz

This shift of about 55 Hz in the carrier will generally not affect the transmission. However, Doppler shift can cause significant problems if the transmission technique is sensitive to carrier frequency offsets (for example COFDM) or the relative speed is higher (for example in low earth orbiting satellites).
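The worked example above is a one-line calculation; a tiny sketch of it follows, with the values taken from the text.

```python
# Doppler shift for the worked example above.
f0 = 1e9        # carrier frequency, Hz
v = 16.7        # relative speed, m/s (60 km/h)
c = 3e8         # speed of propagation (speed of light), m/s
delta_f = v / c * f0
print(round(delta_f, 1))   # approximately 55.7 Hz, i.e. the ~55 Hz quoted above
```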

Inter Symbol Interference:

As communication systems evolve, the need for high symbol rates becomes more apparent. However, current multiple access schemes with high symbol rates encounter several multipath problems, which lead to ISI. An echo is a copy of the original signal delayed in time. ISI takes place when echoes on different-length propagation paths result in overlapping received symbols. Problems can occur when one OFDM symbol overlaps with the next one. There is no correlation between two consecutive OFDM symbols and therefore interference from one symbol with the other will result in a disturbed signal. In addition, the symbol rate of communications systems is practically limited by the channel's bandwidth. For higher symbol rates, the effects of ISI must be dealt with seriously. Several channel equalization techniques can be used to suppress the ISI caused by the channel. However, to do this, the channel impulse response (CIR) must be estimated. Recently, OFDM has been used to transmit data over a multipath channel. Instead of trying to cancel the effects of the channel's ISI, a set of sub-carriers can be used to transmit information symbols in parallel sub-channels over the channel, where the system's output will be the sum of all the parallel channels' throughputs. This is the basis of how OFDM works. By transmitting in parallel over a set of sub-carriers, the data rate per sub-channel is only a fraction of the data rate of a conventional single carrier system having the same output. Hence, a system can be designed to support high data rates while deferring the need for channel equalization.

In addition, once the incoming signal is split into the respective transmission sub-carriers, a guard interval is added between each symbol. Each symbol consists of a useful symbol duration, Ts, and a guard interval, Δt, in which part of the signal of duration Ts is cyclically repeated. This is shown in Fig. 3.5.

Fig. 3.5 Combating ISI using a guard interval

As long as the multi path propagation delays do not exceed the duration of the interval, no inter-symbol interference occurs and no channel equalization is required.

Channels Used:

The transmission signal models the electromagnetic wave which travels from transmitter to receiver. Along the way the wave encounters a wide range of different environments. Channel models represent the attempt to model these different environments. Their aim is to introduce well defined disturbances to the transmission signal. In this section we discuss channel models which are typical for DAB transmission. We consider the effects of noise, movement, and signal reflection. The general strategy is to have a pictorial representation of the channel environment before we introduce the mathematical model.

Overview Diagram:

The following figure shows again the block diagram of a communication system. Such a system consists of sender, channel and receiver. In this section we focus on the channel aspect of the communication system. In the block diagram, s(t) is the transmission signal and s'(t) is the received transmission signal.

Frequency Offset Channel:

The frequency offset channel introduces a static frequency offset. One possible cause for such a frequency offset is a slowly drifting time base, normally a crystal oscillator, in either transmitter or receiver. The frequency offset channel tests the frequency correction circuit in the receiver. The following figure shows the block diagram of the frequency offset channel.

The mathematical model follows as:

s'(t) = s(t) · e^(j2πΔf t)

AWGN Channel:

For the Additive White Gaussian Noise (AWGN) channel, the received signal is equal to the transmitted signal with some portion of white Gaussian noise added. This channel is particularly important for discrete models operating on a restricted number space, because this allows one to optimise the circuits in terms of their noise performance. The block diagram of the AWGN channel is given in the next figure.

s'(t) = s(t) + n(t)

where n(t) is a sample function of a Gaussian random process. This represents white Gaussian noise.

Multipath Channel:

The multipath channel is the last of the static channels. It reflects the fact that electromagnetic waves can travel over various paths from the transmission antenna to the receiver antenna. The receiver antenna sums up all the different signals. Therefore, the mathematical model of the multipath environment creates the received transmission signal by summing up scaled and delayed versions of the original transmission signal. This superposition of signals causes ISI. The following figure shows a multipath environment.

The block diagram, shown in the next figure, details a DSP model for the multipath environment.

The mathematical model follows as:

s'(t) = Σ_i a_i · s(t − τ_i)

where a_i and τ_i are the gain and delay of path i.
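The static channel models above can be sketched in a few lines. The helpers below add white Gaussian noise at a chosen SNR and sum scaled, delayed replicas of the signal for the multipath model; the gains, delays and test signal are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def awgn(s, snr_db):
    """AWGN channel: s'(t) = s(t) + n(t), with n(t) complex white Gaussian noise."""
    noise_power = np.mean(np.abs(s) ** 2) / (10 ** (snr_db / 10))
    n = np.sqrt(noise_power / 2) * (rng.standard_normal(len(s)) + 1j * rng.standard_normal(len(s)))
    return s + n

def multipath(s, gains=(1.0, 0.5, 0.3), delays=(0, 3, 7)):
    """Multipath channel: sum of scaled and delayed versions of the input signal."""
    out = np.zeros(len(s) + max(delays), dtype=complex)
    for a, d in zip(gains, delays):
        out[d:d + len(s)] += a * s
    return out

s = np.exp(2j * np.pi * 0.01 * np.arange(200))    # illustrative complex baseband signal
r = awgn(multipath(s), snr_db=20)                 # multipath followed by additive noise
```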

Fading Channels:

Fading channels represent a mathematical model for wireless data exchange in a physical environment which changes over time. These changes arise for two reasons:

1. The environment is changing even though the transmitter and receiver are fixed; examples are changes in the ionosphere, movement of foliage and movement of reflectors and scatterers.

2. The transmitter and receiver are mobile even though the environment might be static.

The next figure shows a multipath fading environment. The fading is modeled by the fact that the environment is changing.

The block diagram, shown in the next figure, details a DSP model for the multipath environment

Mathematically, the DSP model can be formulated as follows:

s'(t) = Σ_i a_i(t) · s(t − τ_i(t))

where the gains a_i(t) and delays τ_i(t) now vary with time.

The DSP model and mathematical description are close to the underlying physical phenomena. This makes them unsuitable for practical channel models. To establish practical channel models we employ statistical methods to abstract and generalize the fading channel models. In the following two subsections we discuss Rayleigh and Rician fading channels. Both represent statistical channel models; the difference between them is that the Rayleigh model does not assume a direct or prominent path and the Rician model assumes a direct path. The last channel model extends the ideas of Rayleigh and Rician fading channels with mobility aspects. The resulting mobile fading channels model the degrading effects in the frequency domain of wireless multipath channels.

Rayleigh Fading:

Rayleigh fading is caused by multipath reception. The mobile antenna receives a large number, say N, of reflected and scattered waves. Because of wave cancellation effects, the instantaneous received power seen by a moving antenna becomes a random variable, dependent on the location of the antenna. To simplify the derivation of the fading models, an unmodulated carrier of the form s(t) = A cos(2πfc t) is used as the transmission signal. Based on the block diagram, the complex envelope of the received signal is:

g(t) = A Σ_{i=1}^{N} a_i(t) · e^(−j2πfc τ_i(t))

where a_i(t) is the gain factor and τ_i(t) is the delay for a specific path i at a specific time t. Collecting the sum into an amplitude and a phase, this can be written as g(t) = r_Ra(t) · e^(jθ(t)),

where r_Ra(t) is a sample function of a Rayleigh distributed random process with probability density function

p(r) = (r / σ²) · exp(−r² / (2σ²)),   r ≥ 0

and the phase θ(t) is uniformly distributed in the interval [0, 2π). The general form of this channel model is:

s'(t) = r_Ra(t) · cos(2πfc t + θ(t))

Here, r_Ra and θ are the amplitude and phase from a particular measurement of a Rayleigh distributed random process. This channel is called the Rayleigh fading channel.

Rician Fading Channel:

The model behind Rician fading is similar to that for Rayleigh fading, except that in Rician fading a strong dominant component is present. This dominant component can for instance be the line-of-sight wave. Refined Rician models also consider:

1. that the dominant wave can be a phasor sum of two or more dominant signals, e.g. the line-of-sight plus a ground reflection. This combined signal is then mostly treated as a deterministic (fully predictable) process;

2. that the dominant wave can also be subject to shadow attenuation. This is a popular assumption in the modeling of satellite channels.

Besides the dominant component, the mobile antenna receives a large number of reflected and scattered waves.

A Rician fading channel indicates that there is a prominent or direct path over which the electromagnetic wave can travel. Compared to the Rayleigh channel model above, the Rician fading channel model has an additional A cos(2πfc t) component to reflect the prominent path:

s'(t) = A cos(2πfc t) + r_Ra(t) · cos(2πfc t + θ(t))

The above equation can be written as:

s'(t) = r_Ri(t) · cos(2πfc t + φ(t))

where r_Ri(t) is a sample function of a random process with a Rician distributed probability density function (pdf):

p(r) = (r / σ²) · exp(−(r² + A²) / (2σ²)) · I0(A·r / σ²),   r ≥ 0

where I0 is the zero order modified Bessel function of the first kind, given by:

I0(x) = (1 / 2π) ∫_0^{2π} exp(x · cos φ) dφ

and the distribution of the phase φ is expressed in terms of the error function,

where the error function is defined as:

erf(x) = (2 / √π) ∫_0^x exp(−t²) dt

The ratio K = A² / (2σ²), referred to as the K-factor, relates the power in the unfaded (dominant) and faded (scattered) components. Values of K >> 1 indicate less severe fading, whereas values of K close to 0 indicate severe, Rayleigh-like fading.
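The statistical models above can be simulated directly. The sketch below draws flat-fading channel gains: the Rayleigh case is a zero-mean complex Gaussian (no dominant path), and the Rician case adds a deterministic line-of-sight term whose power is K times the scattered power. The variance, K-factor and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def rayleigh_gains(n, sigma2=0.5):
    """Zero-mean complex Gaussian gains; their magnitude is Rayleigh distributed."""
    return np.sqrt(sigma2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def rician_gains(n, K, sigma2=0.5):
    """Add a dominant (line-of-sight) component whose power is K times the scattered power."""
    return np.sqrt(2 * sigma2 * K) + rayleigh_gains(n, sigma2)

r_rayleigh = np.abs(rayleigh_gains(10_000))        # Rayleigh-distributed envelope
r_rician = np.abs(rician_gains(10_000, K=5.0))     # Rician envelope with K-factor 5
```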

CHAPTER 6

PAPR Reduction Methods:

PAPR reduction methods have been studied for many years and a significant number of methods have been developed. These methods are discussed below.

Clipping: Clipping naturally happens in the transmitter if the power back-off is not enough. Clipping leads to clipping noise and out-of-band radiation. Filtering after clipping can reduce the out-of-band radiation, but at the same time it can cause peak regrowth. Repeated clipping and filtering can be applied to reduce peak regrowth at the expense of complexity. Several methods for mitigation of the clipping noise at the receiver have been proposed: for example, reconstruction of the clipped sample based on other samples in the oversampled signal.

Coding: Coding methods include Golay complementary sequences [1], a block coding scheme [2], complementary block codes (CBC) [3], modified complementary block codes (MCBC) [3], etc. An application of the Golay complementary sequences is limited by the fact that they cannot be used with M-QAM modulation. The simple scheme proposed in [2] relies on lookup tables containing sequences with lower PAPR. This method does not attempt to utilize those sequences for error correction/detection. CBC utilizes complement bits that are constructed from a subset of the information bits. MCBC is a modification of CBC suitable for a large number of sub-carriers. Coding methods have low complexity, but PAPR reduction is achieved at the expense of redundancy, causing data rate loss.
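To make the clipping discussion concrete, the sketch below computes the PAPR of one oversampled OFDM symbol and applies simple amplitude clipping at a chosen clipping ratio. The number of sub-carriers, oversampling factor and 3 dB clipping ratio are illustrative assumptions, not parameters of the proposed scheme.

```python
import numpy as np

rng = np.random.default_rng(6)
N, L = 256, 4                                    # sub-carriers and oversampling factor (assumed)
X = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), size=N)

# Oversample by zero-padding the middle of the spectrum before the IFFT.
x = np.fft.ifft(np.concatenate([X[:N // 2], np.zeros((L - 1) * N), X[N // 2:]])) * L

def papr_db(sig):
    return 10 * np.log10(np.max(np.abs(sig) ** 2) / np.mean(np.abs(sig) ** 2))

def clip(sig, ratio_db=3.0):
    """Amplitude clipping: limit the envelope to ratio_db above the RMS level, keeping the phase."""
    a_max = np.sqrt(np.mean(np.abs(sig) ** 2)) * 10 ** (ratio_db / 20)
    mag = np.abs(sig)
    return np.where(mag > a_max, a_max * sig / np.maximum(mag, 1e-12), sig)

print(papr_db(x), papr_db(clip(x)))              # clipping lowers the PAPR at the cost of distortion
```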

Partial Transmit Sequences (PTS): a set of sub-carriers of an OFDM symbol is divided into non-overlapping sub-blocks [4]. Each sub-block undergoes zero-padding and an IDFT, resulting in p(k), k = 1, ..., V, called partial transmit sequences. Peak value optimization is performed over a linear combination of the PTSs, x = Σ_{k=1}^{V} b(k) · p(k), where b(k) is the optimization parameter. The optimization parameter is often limited to four rotation factors, {±1, ±j}.

Selected Mapping (SLM) [5]: a set of sub-carriers of an OFDM symbol is multiplied sub-carrier wise by U rotation vectors b. Then all the U rotated data blocks are transformed into the time domain by IDFT and the vector with the lowest PAPR is selected for transmission.

Interleaving [6]: the same data block is interleaved by K different interleavers. K IDFTs of the original data block and the modified data blocks are calculated. The PAPR of the K blocks is calculated, and the block with minimum PAPR is transmitted.

Tone Reservation (TR) [7]: L sub-carriers are reserved for peak reduction purposes. The values of the signals to insert on the peak reduction sub-carriers are computed by a suitable linear programming algorithm.

Tone Injection (TI) [7]: TI maps one constellation point of the original constellation (for example QPSK) to several constellation points of an expanded constellation (for example 16-QAM). PAPR reduction is achieved by choosing among the constellation points of the expanded constellation.

Active Constellation Extension (ACE) [8]: ACE modifies the original constellation by moving nominal constellation points located on the outer constellation boundaries in directions that do not decrease the Euclidean distances between constellation points.

Nonlinear Companding Transform (NCT) [9, 10]: NCT compands the original OFDM signal using a strictly monotone increasing function. The companded signal can be recovered by the inverse function at the receiver.

COMPANDING BASICS

A companding system compresses the signal at the input and expands the signal at the output in order to keep the signal level above the noise level during processing. In other words, companding amplifies small inputs so that the signal level is well above the noise floor during processing. At the output, the original input signal is then restored by a simple attenuation. Companding increases the SNR when the input signal is low and therefore reduces the effect of a system's noise sources.

HISTORY AND APPLICATIONS

The concept of companding was first patented by A.B. Clark at AT&T in 1928. The purpose of the patent was to adaptively transmit images through a noisy medium such that the received image has tone values similar to the image transmitted. Since then, companding has been developed for numerous other applications such as audio transmission and recording, communication systems, and signal processing. The μ-law, for example, applies a logarithmic formula to audio or speech signals such that the transmitted signal has a smaller number of bits. The logarithmic formula compresses the signal by allocating more bits or quantization levels to smaller values, improving the signal-to-quantization-noise ratio for small amplitudes, which would otherwise degrade as the signal amplitude decreases. New variations of companding for transmission and communications have been proposed using the hyperbolic tangent function.

Used for OFDM signals, the hyperbolic tangent can reduce the magnitude of the signal peaks to increase the efficiency in transmission. In all the examples cited above, only a simple compression algorithm is required because the transmission channel is modeled as additive noise. If convolutions occurred within the channel, as in companding for signal processing, then compensation methods would be required to reduce the effects of transients. Recent applications of companding have been developing in the fields of analog and digital signal processing. Tsividis first proposed using companding in analog signal processors. Unlike previous companding techniques where the transmitter and the receiver are at geographically different locations, the compressor and expander for a signal processor can be placed on the same chip, introducing the concept of syllabic companding: the compression level is known to both the compressor and the expander and can be determined by the average value of the input rather than the instantaneous value of the input. The concept was demonstrated for a second order high-Q band pass filter. It was found that transients occur within companding signal processors, and various compensation methods have been proposed. Using companding in signal processors has the additional benefit of lower power dissipation for a required signal to noise ratio. It has been shown that increasing the SNR requirements of an analog system by 3 dB requires twice the capacitance area and double the power dissipation. If companding is employed, then the necessary SNR can be met without such a large increase in chip size and power consumption.

The benefits of companding

For a generic signal processor, the output comprises three components: signal, noise and distortion. If the processor is linear, then Figure 3-1 shows the relationship between the input level and the output components as the input level increases. The output signal is proportional to the input level, while the noise is generally independent of the input. At high input levels, the linearity of the system can no longer be maintained, and distortion occurs.

A plot of the input level versus the signal-to-noise-and-distortion ratio (SNDR) is shown in Figure 3-2(a). The plot demonstrates that the dynamic range of the system is limited when the output SNDR is required to be above a given level. To increase the dynamic range, it is possible to redesign the processor such that the noise level is lowered. However, a decreased noise level can only increase the dynamic range by a small amount; see Figure 3-2(b). For a wider acceptable dynamic range, companding must be implemented; see Figure 3-2(c).

Companding works by compressing the dynamic range of the signal at the input and expanding the dynamic range at the output. Compression occurs when small signals are amplified and large signals are attenuated. Thus, the signal strength can be kept significantly higher than any noise that may be introduced into the system, even at low input power. At the output, the expander then restores the input signal, keeping the signal power above the noise level and increasing the SNDR of the signal. Companding affects the signal differently depending on its level; this can be seen in Figure 3-3.

There are two types of companding techniques, instantaneous and syllabic companding. The two techniques differ in how the signal is compressed and expanded. Instantaneous companding compresses the input signal according to the instantaneous input level. Syllabic companding compresses depending on the average strength, for example the envelope or the peak, of the input signal.

Instantaneous companding

Instantaneous companding is a memoryless system: signals are compressed according to the current values in the system, not past values. The μ-law algorithm, for example, is an instance of instantaneous companding. The compressed signal is calculated by Equation 3.1a and Equation 3.1b. A higher μ corresponds to a higher compression level. The μ-law behavior with μ = 255 is plotted in Figure 3-4. From the figure, it can be seen that small input values are amplified while large values are attenuated.
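For reference, a minimal MATLAB sketch of the μ-law compressor and its expander is given below. The referenced Equations 3.1a/3.1b are not reproduced in this document, so the standard textbook form is shown instead; the normalisation to |x| <= 1 is an assumption made here.

mu = 255;
compress = @(x) sign(x).*log(1 + mu*abs(x))/log(1 + mu);    % mu-law compressor
expand   = @(y) sign(y).*((1 + mu).^abs(y) - 1)/mu;         % inverse (expander)
x = linspace(-1, 1, 1001);                                  % normalized test signal
maxErr = max(abs(expand(compress(x)) - x));                 % round-trip error (numerical only)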

Nonlinear Companding Transforms

One of the most attractive schemes is the nonlinear companding transform, due to its good system performance in terms of PAPR reduction and BER, low implementation complexity, and no bandwidth expansion. The first nonlinear companding transform was μ-law companding, which is based on the μ-law speech processing algorithm, and it has shown better performance than the clipping method. The μ-law scheme mainly focuses on enlarging signals with small amplitudes while keeping the peak signals unchanged; it therefore increases the average power of the transmitted signal and may push the signal into the saturation region of the HPA, making the system performance worse.

In fact, the nonlinear companding transform is also a special kind of clipping scheme. The differences between clipping and the nonlinear companding transform can be summarized as follows: 1) The clipping method deliberately clips large signals when the amplitude of the original OFDM signal exceeds a given threshold, so the clipped signals cannot be recovered at the receiver. Nonlinear companding transforms, in contrast, compand the original OFDM signal using a strictly monotone increasing function, so the companded signal can be recovered correctly at the receiver through the corresponding inverse of the nonlinear transform function. 2) Nonlinear companding transforms enlarge the small signals while compressing the large signals, increasing the immunity of the small signals to noise, whereas the clipping method does not change the small signals. The clipping method therefore suffers from three major problems: in-band distortion, out-of-band radiation, and peak regrowth after digital-to-analog conversion. As a result, the system performance degradation due to clipping can be severe, whereas nonlinear companding transforms can operate with good BER performance while keeping good PAPR reduction.

Design criteria for nonlinear companding transforms have also been given in the literature. Since the distribution of the original OFDM signal is known (for example, the Rayleigh distribution of the OFDM amplitudes), the nonlinear companding transform function can be obtained through theoretical analysis and derivation according to the desired distribution of the companded OFDM signal. For example, the amplitude of the original OFDM signal can be transformed into a desired distribution with a prescribed PDF; the nonlinear transform function is then derived accordingly, and such a transform belongs to the exponential companding scheme. Based on this design criterion, two types of nonlinear companding transforms, based on the error function and the exponential function respectively, have been proposed.

It is well known that the original OFDM signal has a very sharp, rectangular-like power spectrum, as shown in Fig. 4. This good property is affected by PAPR reduction schemes, e.g. through slower spectrum roll-off, more spectral side-lobes, and higher adjacent channel interference.
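A compact sketch of this design criterion (the notation here is assumed for illustration, not taken from the cited papers): if the original amplitude |x| has the Rayleigh CDF F(x) = 1 - exp(-x^2/sigma^2) and the companded amplitude is required to follow a target CDF F_d, then a monotone companding function can be chosen as

h(x) = F_d^{-1}( F(x) ) = F_d^{-1}( 1 - exp(-x^2/sigma^2) ),  for x >= 0,

so that F_d(h(x)) = F(x) and the companded amplitude indeed follows the desired distribution; choosing an appropriate target distribution leads, for example, to the exponential companding scheme mentioned above.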

Fig. 5. Block diagram of TR/TI approaches for PAPR reduction.

Many PAPR reduction schemes cause the generation of spectral side-lobes, but nonlinear companding transforms cause far fewer. As seen in Fig. 4, the error and exponential companding transforms have much less impact on the original power spectrum than the μ-law companding scheme. The main reason is that the error and exponential companding schemes not only enlarge the small-amplitude signals but also compress the large-amplitude signals, while keeping the average power unchanged through a proper choice of parameters, which increases the immunity of small-amplitude signals to noise. The μ-law companding transform, by contrast, increases the average power level and therefore requires a larger linear operation region in the HPA.

The nonlinear companding transform is a nonlinear process and may lead to significant distortion and performance loss through companding noise. Companding noise can be defined as the noise caused by peak regrowth after digital-to-analog conversion, which generates in-band distortion and out-of-band noise, and by the channel noise that is magnified after the inverse nonlinear companding transform. The out-of-band noise can be handled by oversampling and filtering, while the in-band distortion and the magnified channel noise require iterative estimation. Unlike additive white Gaussian noise (AWGN), companding noise is generated by a known process that can be recreated at the receiver and subsequently removed; a framework for an iterative receiver has accordingly been proposed to eliminate companding noise in companded and filtered OFDM systems.

As a multi-carrier modulation technique, orthogonal frequency division multiplexing (OFDM) has the following advantages: robustness to multipath fading, inter-symbol interference, co-channel interference and impulsive parasitic noise; lower implementation complexity compared with single-carrier solutions; and high spectral efficiency in supporting broadband wireless communications. OFDM is therefore believed to be a suitable technique for broadband wireless communications and has been used in many wireless standards, such as digital audio broadcasting (DAB), terrestrial digital video broadcasting (DVB-T), the ETSI HIPERLAN/2 standard, the IEEE 802.11a standard for wireless local area networks (WLAN), and the IEEE 802.16a standard for wireless metropolitan area networks (WMAN). Original OFDM signals have a very high peak-to-average power ratio (PAPR), which requires sophisticated (expensive) radio transmitters with high-power amplifiers operating over a very large linear range. Otherwise, nonlinear signal distortion occurs and leads to high adjacent channel interference and poor system performance. Many PAPR reduction schemes based on different techniques, such as clipping and filtering, window shaping, block coding, the partial transmit sequence (PTS) technique, the selective mapping (SLM) technique, phase optimization, tone reservation and injection, and nonlinear companding transform schemes, have been proposed in the literature. Specifically, it has been shown that the μ-law companding scheme can reduce PAPR more effectively than the clipping approach. However, compared with the original signals, the compressed signals have a larger average power level and still do not exhibit uniform distributions.

In this paper, we propose and analyze a new nonlinear companding technique, called exponential companding, to reduce the PAPR of OFDM signals. It can effectively transform the original Gaussian-distributed OFDM signals into uniformly distributed (companded) signals without changing the average power level. Unlike the μ-law companding scheme, which mainly focuses on enlarging small signals, the exponential companding scheme adjusts both small and large signals without bias, so it is able to offer better performance in terms of PAPR reduction, bit error rate (BER) and phase error for OFDM systems.
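A minimal MATLAB sketch of this idea is given below. It assumes a complex-Gaussian model for the OFDM samples and a companding degree d (both illustrative assumptions), and it preserves the average power numerically rather than through a closed-form constant.

numSamp = 1e5; d = 2;                                    % d: assumed companding degree
x = (randn(numSamp,1) + 1j*randn(numSamp,1))/sqrt(2);    % Gaussian model of OFDM samples
sigma2 = mean(abs(x).^2);                                % average signal power
F = 1 - exp(-abs(x).^2/sigma2);                          % Rayleigh CDF of the amplitude
mag = F.^(1/d);                                          % d-th power of this amplitude is uniform
alpha = sqrt(sigma2/mean(mag.^2));                       % scale so the average power is kept
y = alpha*mag.*exp(1j*angle(x));                         % companded signal, phase preserved
papr = @(s) 10*log10(max(abs(s).^2)/mean(abs(s).^2));
fprintf('PAPR before: %.2f dB, after: %.2f dB\n', papr(x), papr(y));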

PAPR FORMULATION

Fig. 1 shows a typical companded OFDM system, where the input bit stream is first converted into N parallel lower-rate bit streams and then fed into a symbol mapper to obtain the symbols X(k). These symbols are applied to the IFFT to generate the OFDM symbol, which can be expressed as

x(n) = (1/sqrt(N)) * sum_{k=0}^{N-1} X(k) exp(j 2 pi k n / N),  n = 0, 1, ..., N-1.   (1)

The PAPR of the discrete OFDM signal may be expressed as

PAPR = max_{0<=n<=N-1} |x(n)|^2 / E{ |x(n)|^2 }.   (2)

If the OFDM signal is oversampled by a factor of 4, its PAPR is a good approximation to that of the continuous-time OFDM signal. Oversampling by a factor of L can be achieved by padding the symbols with (L-1)N zeros. After the IFFT, the resulting samples are converted to serial form and the companding transform (CT) is performed. To guarantee that all transformed signals are under a given threshold, a digital clipper (not shown in Fig. 1) is used after the CT. Note that, due to the disadvantages of clipping, the CT should be designed carefully so that the amount of clipped signal is as small as possible. A cyclic prefix (CP) is then inserted into the OFDM symbol interval to eliminate intersymbol interference (ISI).
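A short MATLAB sketch of (1)-(2) with L-times oversampling realized by zero-padding in the frequency domain follows; the parameter values are illustrative, and inserting the zeros in the middle of the spectrum is one common way of implementing the padding described above.

N = 64; L = 4;
X = exp(1j*pi/4*(2*randi([0 3],N,1)+1));             % example QPSK symbols
Xpad = [X(1:N/2); zeros((L-1)*N,1); X(N/2+1:N)];     % pad with (L-1)*N zeros
x = ifft(Xpad)*sqrt(N*L);                            % oversampled time-domain OFDM symbol
papr_dB = 10*log10(max(abs(x).^2)/mean(abs(x).^2));  % PAPR as in (2)
fprintf('PAPR = %.2f dB\n', papr_dB);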

PROPOSED TRANSFORM

It has been shown that a linear companding transform with an inflexion point (LNST) can outperform logarithmic-based companding transforms such as μ-law companding. LNST can be expressed as in (3), where the two scale factors and the inflexion point v are the design parameters. Since the OFDM signal is complex-valued, the companding transform is applied to the real and imaginary parts separately. At the receiver, the original signal can be recovered according to (4), where the first term is the noise component, the second is the quantization noise (which is usually very small), and the two index sets identify the small- and large-amplitude OFDM samples. It is assumed that the receiver has knowledge of the two sets.

It is clear that, due to the presence of the inflexion point v, the small and large parts of the signal can be treated with different scales: small amplitudes are enlarged by one scale while large amplitudes are compressed by another. This gives more flexibility and freedom in designing the companding form to meet given system requirements such as PAPR reduction, signal average power, power amplifier characteristics, and BER, and hence leads to better performance. However, taking the more accurate view that the OFDM signal consists of three parts (small amplitudes, large amplitudes, and average amplitudes), more design flexibility and performance enhancement can be achieved if each of these parts is treated independently with its own scale. To satisfy this, a new linear companding transform (LCT) with two inflexion points is proposed; the new transform is given in (5).

The corresponding expression at the receiver is given in (6), where the region scale factors and the two inflexion points are the design parameters. Regarding the scale applied to the average-amplitude region, setting its value to unity effectively reduces the undesired effect of noise transformation at the receiver, since average amplitudes are scaled by unity and hence no inverse scaling is required for them at the receiver. Fig. 2 shows the profiles of both transforms for a given value of A; it is clear that, with two inflexion points, more design flexibility is available, and hence a better tradeoff between PAPR and BER can be achieved. Fig. 3 shows the original and companded OFDM signals on the complex plane, where the transforms are designed to preserve the average power of the input signal as a case study; for practical purposes, the average power of the companded signal should be selected to best fit the characteristics of the specific PA included in the system. The variance of the transformed noise at the receiver, along with the average power of the companded signal, are derived in Appendix A. It is obvious that the signal companded by the proposed LCT has the lowest PAPR (smallest radius), since the LCT allows more PAPR reduction through extra compression of the large amplitudes and extra enlargement of the small amplitudes without affecting the average amplitudes, and hence reallocates power among all subcarriers. Moreover, the flexibility of the proposed transform allows the abrupt jumps in the transformed signal to be reduced, which leads to a better power spectrum, as depicted in Fig. 4.

Since the receiver must have knowledge of the index sets, side information should be transmitted along with the signal. For LNST, either index set can be transmitted as side information on dedicated subcarriers or embedded in training sequences. Specifically, if v is set equal to the square root of the signal average power, then transmitting the large-amplitude index set results in less overhead, since it contains a smaller number of indices; this is because large-amplitude samples usually occur with low probability. Regarding the proposed transform, the advantages of the extra inflexion point come at the price of another index set that must be transmitted.
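A minimal MATLAB sketch of a two-inflexion-point linear companding of this kind follows. All parameter values, the thresholds, and the use of the complex magnitude to define the regions are illustrative assumptions; the paper applies the transform to the real and imaginary parts separately and optimises its parameters.

numSamp = 1e5;
x = (randn(numSamp,1) + 1j*randn(numSamp,1))/sqrt(2);   % Gaussian model of OFDM samples
sigma = sqrt(mean(abs(x).^2));
v1 = 0.6*sigma;  v2 = 1.8*sigma;                        % assumed inflexion points
k1 = 1.6;  k2 = 1;  k3 = 0.5;                           % region scales (unity for average amplitudes)
r = abs(x);
idxSmall = r <= v1;  idxLarge = r > v2;                 % index sets (the side information)
scale = k2*ones(numSamp,1);
scale(idxSmall) = k1;  scale(idxLarge) = k3;
y = scale.*x;                                           % companded signal
xHat = y./scale;                                        % receiver: per-region inverse scaling
papr = @(s) 10*log10(max(abs(s).^2)/mean(abs(s).^2));
fprintf('PAPR: original %.2f dB, companded %.2f dB, recovery error %.1e\n', ...
        papr(x), papr(y), max(abs(xHat - x)));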



Fig. 3. (a) The original OFDM signal; (b) the OFDM signal companded by LNST; (c) the OFDM signal companded by the proposed transform.

Fig. 4. Power spectra of the LNST and the proposed transforms.

Fig. 5. SSPA characteristics.

NONLINEAR POWER AMPLIFIER

A widely accepted memoryless solid-state power amplifier (SSPA) model [19], extensively used in investigating the PAPR of OFDM signals, is the Rapp model [20], in which a memoryless nonlinearity is assumed; the PA therefore has a frequency-nonselective response. Representing the complex envelope of the input signal to the amplifier as

s_in(t) = A(t) exp(j phi(t)),   (7)

the transmitted output signal according to the model can be expressed as

s_out(t) = [ g A(t) / (1 + (g A(t)/A_sat)^(2p))^(1/(2p)) ] exp(j phi(t)),   (8)

where g is the amplifier gain, A_sat is the saturation level, and p is a positive number that controls the nonlinearity characteristics of the amplifier. According to this model, the SSPA introduces no phase distortion and only AM/AM conversion is produced.

The input power back-off (IBO) can be expressed as

IBO = 10 log10( A_sat^2 / P_in ),   (9)

where P_in is the average power of the input signal. According to (9), the average power of the input signal should be scaled to the proper value for a given PA characteristic. Fig. 5 shows the characteristics of the SSPA model; as shown, for large values of p the model converges to a hard-limiting amplifier that is exactly linear until it reaches its output saturation level. A good approximation to existing amplifiers is obtained by choosing p in the range of 2 to 3. In this paper, we chose p = 2.
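A short MATLAB sketch of the Rapp AM/AM model in (8), with the input scaled to a chosen IBO as in (9), is given below; the variable names and the test signal are illustrative assumptions.

rappSSPA = @(s,g,Asat,p) g*s ./ (1 + (g*abs(s)/Asat).^(2*p)).^(1/(2*p));  % AM/AM only, no AM/PM
g = 1; Asat = 1; p = 2;                          % p = 2 approximates practical amplifiers
IBO_dB = 0;                                      % input back-off in dB
x = (randn(1e5,1) + 1j*randn(1e5,1))/sqrt(2);    % example complex baseband signal
x = x*sqrt(Asat^2/(mean(abs(x).^2)*10^(IBO_dB/10)));   % set the average input power per (9)
y = rappSSPA(x, g, Asat, p);                     % amplifier output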

CHAPTER 7

PERFORMANCE SIMULATION

In order to evaluate and compare the performance of the proposed transform and examine its impact on the system, a MATLAB simulation was performed, assuming a nonlinear AWGN channel and using randomly generated data bits with DQPSK modulation. Symbols are transmitted over 64 subcarriers with a 256-point IFFT/FFT (oversampling factor equal to 4). The LNST and proposed LCT transform parameters, together with the achieved PAPR reduction, are tabulated in Table I, and the simulation results are presented in Fig. 6.

As shown in Table I, the PAPR reduction capability of the proposed transform reaches 70% of the original PAPR on average, which is 20% more than the LNST capability. This is demonstrated in Fig. 6(a), which shows the simulated complementary cumulative distribution function (CCDF) of each CT. The CCDFs were obtained by randomly generating 100,000 OFDM symbols and, at each threshold value, counting the number of blocks that exceed that value. Regarding the BER performance, which is depicted in Fig. 6(b), the proposed transform performs excellently. Specifically, for the target BER with an IBO of 0 dB, the proposed transform requires a signal-to-noise ratio (SNR) of 12.5 dB, which is just 0.38 dB above the performance bound obtained by disabling the SSPA and transmitting the original OFDM signal directly. In comparison, the LNST requires 13.29 dB, which is 1.17 dB above the bound.


Fig. 6. Performance of LNST and the proposed LCT. (a) Simulated CCDFs of LNST and the proposed transform. (b) BER performance in a nonlinear AWGN channel.

APPLICATIONS

Orthogonal Frequency Division Multiplexing (OFDM) is an attractive multicarrier technique for mitigating the effects of the multipath delay spread of the radio channel, and has therefore been adopted in:
1. Several wireless standards as well as a number of mobile multimedia applications.
2. WiMAX.
3. 4G wireless systems.
4. DVB/DAB.
5. Wireless networks in the downlink (with SC-FDMA in the uplink).
6. High-speed wireless multiple-access communication systems.

Among all these techniques, the simplest solution is perhaps to clip the transmitted signal when its amplitude exceeds a desired threshold. However, clipping is a highly nonlinear process which produces significant out-of-band interference (OBI). Improved versions of the clipping technique have been proposed (i.e. clipping and filtering) that remove the out-of-band radiation, but they may cause some peak regrowth, allowing the signal samples to exceed the clipping level at some points.
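For reference, a one-line MATLAB sketch of such amplitude clipping is shown below; the threshold Amax and the phase-preserving form are illustrative assumptions.

clipOFDM = @(s, Amax) min(abs(s), Amax).*exp(1j*angle(s));   % clip the magnitude, keep the phase
% Example: y = clipOFDM(x, 1.5*sqrt(mean(abs(x).^2)));       % clip at 1.5 times the RMS level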

ADVANTAGES

Another attractive solution is the companding technique, which was originally designed for speech processing using the classical μ-law transformation and has been shown to be rather effective. It is among the most attractive PAPR reduction techniques for multicarrier transmission due to its good performance and low complexity. This technique soft-compresses, rather than hard-clips, the signal peaks and causes far less OBI. However, companding techniques may introduce undesired effects because of the requisite expansion of the compressed signal at the receiver end, a process which amplifies receiver noise.

CONCLUSION

In this paper, a new linear companding transform with two inflexion points has been proposed in order to increase the flexibility of the companding design. The results show that the proposed transform has a higher PAPR reduction capability and better BER performance than LNST, with less spectral broadening. In general, with the aid of two inflexion points, different signal levels can be scaled independently of one another. Thus, the proposed transform can be designed to meet system requirements and power amplifier characteristics and to achieve an excellent tradeoff between PAPR reduction and BER performance. Furthermore, the proposed transform is simple to implement and places no limitations on system parameters such as the number of subcarriers, modulation order, or constellation type.

APPENDIX A

The transform gain is defined through the peak and average powers of the original and companded signals: it relates the peak power of the companded signal and the peak power of the original signal, together with the average powers of the original and companded signals.

With the probability density function of the OFDM signal denoted as p(x), which follows a Gaussian distribution, the average power of the companded signal can be written as the expectation of the squared companding function taken with respect to p(x).

The variance of the transformed noise term at the receiver can be written in terms of the inverse region scaling applied by the de-companding operation and the variances of the Gaussian channel noise and the quantization error, respectively.
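As an illustration of how such expressions arise, assume the piecewise-linear, three-region form sketched earlier (scales k1, 1, k3 with inflexion points v1 < v2, applied to a zero-mean Gaussian sample x with density p(x); this notation is an assumption made here for illustration, not the report's own). The average power of the companded signal then splits over the three regions as

E{ h(x)^2 } = k1^2 * int_{|x| <= v1} x^2 p(x) dx + int_{v1 < |x| <= v2} x^2 p(x) dx + k3^2 * int_{|x| > v2} x^2 p(x) dx,

and, correspondingly, noise falling on samples in the small- and large-amplitude index sets is scaled by 1/k1^2 and 1/k3^2 after the inverse transform at the receiver, while noise on samples in the average-amplitude set is unchanged.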

MATLAB

A.1 Introduction

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. MATLAB stands for matrix laboratory, and was written originally to provide easy access to matrix software developed by the LINPACK (linear system package) and EISPACK (eigensystem package) projects. MATLAB is therefore built on a foundation of sophisticated matrix software in which the basic element is an array that does not require pre-dimensioning, which allows many technical computing problems, especially those with matrix and vector formulations, to be solved in a fraction of the time.

MATLAB features a family of application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow learning and applying specialized technology. These are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation and many others. Typical uses of MATLAB include: math and computation, algorithm development, data acquisition, modeling, simulation and prototyping, data analysis, exploration and visualization, scientific and engineering graphics, and application development, including graphical user interface building.

A.2 Basic Building Blocks of MATLAB

The basic building block of MATLAB is the MATRIX. The fundamental data type is the array. Vectors, scalars, real matrices and complex matrices are handled as specific classes of this basic data type. The built-in functions are optimized for vector operations. No dimension statements are required for vectors or arrays.

A.2.1 MATLAB Window

MATLAB works through several windows: the Command window, Workspace window, Current directory window, Command history window, Editor window, Graphics window and Online-help window.

A.2.1.1 Command Window

The command window is where the user types MATLAB commands and expressions at the prompt (>>) and where the output of those commands is displayed. It is opened when the application program is launched. All commands, including user-written programs, are typed in this window at the MATLAB prompt for execution.

A.2.1.2 Workspace Window

MATLAB defines the workspace as the set of variables that the user creates in a work session. The workspace browser shows these variables and some information about them. Double-clicking on a variable in the workspace browser launches the Array Editor, which can be used to obtain information.

A.2.1.3 Current Directory Window

The Current Directory tab shows the contents of the current directory, whose path is shown in the current directory window. For example, in the Windows operating system the path might be as follows: C:\MATLAB\Work, indicating that the directory work is a subdirectory of the main directory MATLAB, which is installed in drive C. Clicking on the arrow in the current directory window shows a list of recently used paths. MATLAB uses a search path to find M-files and other MATLAB-related files. Any file run in MATLAB must reside in the current directory or in a directory that is on the search path.

A.2.1.4 Command History Window

The Command History window contains a record of the commands a user has entered in the command window, including both current and previous MATLAB sessions.
Previously entered MATLAB commands can be selected and re-executed from the command history window by right-clicking on a command or sequence of commands. This is useful for selecting various options in addition to executing the commands, and is a helpful feature when experimenting with commands in a work session.

A.2.1.5 Editor Window

The MATLAB editor is both a text editor specialized for creating M-files and a graphical MATLAB debugger. The editor can appear in a window by itself, or it can be a sub-window in the desktop. In this window one can write, edit, create and save programs in files called M-files. The MATLAB editor window has numerous pull-down menus for tasks such as saving, viewing, and debugging files. Because it performs some simple checks and also uses color to differentiate between various elements of code, this text editor is recommended as the tool of choice for writing and editing M-functions.

A.2.1.6 Graphics or Figure Window

The output of all graphics commands typed in the command window is seen in this window.

A.2.1.7 Online Help Window

MATLAB provides online help for all its built-in functions and programming language constructs. The principal way to get help online is to use the MATLAB help browser, opened as a separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by typing help browser at the prompt in the command window. The Help Browser is a web browser integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. The Help Browser consists of two panes: the help navigator pane, used to find information, and the display pane, used to view the information. Self-explanatory tabs other than the navigator pane are used to perform a search.

A.3 MATLAB Files

MATLAB has two main types of files for storing information: M-files and MAT-files.

A.3.1 M-Files

These are standard ASCII text files with an .m extension to the file name. Matrices and programs can be created using M-files, which are text files containing MATLAB code. The MATLAB editor or another text editor is used to create a file containing the same statements that would be typed at the MATLAB command line, and the file is saved under a name that ends in .m. There are two types of M-files:

1. Script files: a script is an M-file with a set of MATLAB commands in it and is executed by typing the name of the file on the command line. These files work on the global variables currently present in the environment.

2. Function files: a function file is also an M-file, except that the variables in a function file are all local. This type of file begins with a function definition line.

A.3.2 MAT-Files

These are binary data files with a .mat extension that are created by MATLAB when data is saved. The data is written in a special format that only MATLAB can read. These files are loaded into MATLAB with the load command.
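As a small illustration of the two M-file types (the file names and contents below are purely illustrative examples):

% Function file, saved as addTwo.m (variables inside are local):
function y = addTwo(x)
    y = x + 2;
end

% Script file, saved as demoScript.m (operates on workspace variables):
a = 5;
b = addTwo(a)   % calls the function file above; displays b = 7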

A.4 The MATLAB System

The MATLAB system consists of five main parts:

A.4.1 Development Environment

This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.

A.4.2 The MATLAB Mathematical Function Library

This is a vast collection of computational algorithms ranging from elementary functions like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

A.4.3 The MATLAB Language

This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick throw-away programs, and "programming in the large" to create complete, large and complex application programs.

A.4.4 Graphics

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications.

A.4.5 The MATLAB Application Program Interface (API)

This is a library that allows you to write C and FORTRAN programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for reading and writing MAT-files.

A.5 SOME BASIC COMMANDS

pwd : prints the working directory
demo : demonstrates what is possible in MATLAB
who : lists all of the variables in your MATLAB workspace
whos : lists the variables and describes their matrix size
clear : erases variables and functions from memory
clear x : erases the matrix 'x' from your workspace
close : by itself, closes the current figure window
figure : creates an empty figure window
hold on : holds the current plot and all axis properties so that subsequent graphing commands add to the existing graph
hold off : sets the next-plot property of the current axes to "replace"
find : finds indices of nonzero elements, e.g. d = find(x>100) returns the indices of the vector x that are greater than 100
break : terminates execution of an m-file or a WHILE or FOR loop
for : repeats statements a specific number of times; the general form of a FOR statement is FOR variable = expr, statement, ..., statement, END, e.g. for n=1:cc/c; magn(n,1)=nanmean(a((n-1)*c+1:n*c,1)); end
diff : difference and approximate derivative, e.g. DIFF(X), for a vector X, is [X(2)-X(1) X(3)-X(2) ... X(n)-X(n-1)]
NaN : the arithmetic representation for Not-a-Number; a NaN is obtained as a result of mathematically undefined operations like 0.0/0.0
INF : the arithmetic representation for positive infinity; infinity is also produced by operations like dividing by zero, e.g. 1.0/0.0, or from overflow, e.g. exp(1000)
save : saves all the matrices defined in the current session into the file matlab.mat, located in the current working directory
load : loads the contents of matlab.mat into the current workspace
save filename x y z : saves the matrices x, y and z into the file titled filename.mat
save filename x y z /ascii : saves the matrices x, y and z into the file titled filename.dat
load filename : loads the contents of filename into the current workspace; the file can be a binary (.mat) file
load filename.dat : loads the contents of filename.dat into the variable filename
xlabel( ) : allows you to label the x-axis
ylabel( ) : allows you to label the y-axis
title( ) : allows you to give a title for the plot
subplot( ) : divides the figure window into sub-axes so that several plots can be placed in one figure