07 circuits & systems. analog & digital signal processing

1041
210 ACTIVE FILTERS ACTIVE FILTERS quality performance and low cost resulted in a fundamental change in design philosophy. An electrical filter may be defined as ‘‘a transducer for sepa- Designers, previously cost-constrained to single-amplifier second-order sections, were now able to consider multiampli- rating waves on the basis of their frequencies’’ (1). There are numerous everyday uses for such devices ranging from the fier sections whose performance and multipurpose functions made commercial production a viable proposition. In particu- filter that allows one to select a particular radio station, to the circuit that detects brainwaves, to resonant cavities that lar, the state variable topology (5) formed the basis for a uni- versal filter yielding all basic filtering functions from a sin- operate at microwave frequencies. Indeed, filters are needed for operation across the electromagnetic spectrum. Further- gle structure. Inductor replacement and direct simulation techniques more, they are required to perform frequency selection to sat- isfy various specialized approximating functions, not neces- such as the leapfrog approach (6) offered low-sensitivity ana- logs of classical LC filters. The difficulty in tuning these de- sarily confined to the conventional low-pass, bandpass, high- pass, and band-stop forms. vices was simplified enormously by the introduction of com- puter-controlled laser trimming using hybrid microelectronics However, the purpose of this article is to focus on a partic- ular category of filter, the active filter, whose evolution over technology. Indeed, by the mid-1970s, sophisticated fifth-or- der elliptic characteristic filters were in large-scale production the past 40 years has been heavily influenced by advances in microelectronic circuit fabrication. The earliest active filters within the Bell System (7). Thus, over a period of 20 years (1954–1974), active filter were motivated by the need to overcome significant limita- tions of inductor–capacitor (LC) passive filters, namely: designers had come to rely upon a relatively small number of basic building blocks to form second-order sections, or were basing higher-order designs on analogs of LC structures. Al- 1. In the audio band, inductors are bulky and prone to though many realizations used discrete components, larger- pick up. scale production of thick and thin film hybrid microelectronic 2. Resistor–capacitor (RC) filter structures offer a limited structures was quite common. range of responses and are subject to substantial pass- The advent of switched-capacitor filters in 1979 (8) over- band attenuation. came the need to laser trim resistors and yielded the first fully integrated active filters. While truly a sampled-data By contrast, active RC structures can realize (theoreti- technique, the use of sufficiently high clock frequencies meant cally) lossless filter characteristics in miniaturized form. Pas- that active filters could be used up to 100 kHz, far higher sive and active filter properties are summarized in Table 1. than by conventional analog techniques. Subsequent develop- A disadvantage of the active filter is its need for a power ments have led to metal-oxide semiconductor field-effect tran- supply and the incorporation of one or more active elements, sistor-capacitor (MOSFET-C) and operational transconduc- usually operational amplifiers. As a result, highly selective tance amplifier-capacitor (OTA-C) filters (9) which yield filters need careful design so as to avoid instability. 
However, authentic analog performance at frequencies exceeding 1 as active filter design has matured, a small number of highly MHz. reliable topologies have evolved that provide solid perfor- The following sections will concentrate on a few fundamen- mance across a variety of fabrication technologies. tal filter design techniques that form the basis for modern The earliest active filters used discrete components and active filter design. The Sallen and Key, multiple loop feed- were based upon direct synthesis of RC sections with appro- back, and state variable structures have stood the test of time priately embedded active devices such as the negative imped- and have proven to be as effective in discrete component real- ance converter (2). Second-order sections were then cascaded izations as they have in MOSFET-C structures. They all form to form higher order structures. higher-order filters when cascaded with similar sections. Fi- Subsequently, a catalog of building blocks was developed nally, the leapfrog design and direct replacement techniques by Sallen and Key (3), which led to a much broader interest are discussed as examples of direct higher-order filter syn- in active filters. This was due in no small part to removal of thesis. the need for classical synthesis expertise. However, widespread use of active filters was still inhib- SECOND-ORDER STRUCTURES ited by concerns over sensitivity, particularly when compared to the passband performance of passive filters. This was over- The fundamental building blocks for active RC filters are sec- come by the simulation of the floating inductor (4) and the ond-order structures which can readily be cascaded to realize widespread availability of operational amplifiers whose high- higher-order approximating functions described by the gen- eral voltage transfer function: V o V i = H · s 2 + ω z Q z s + ω 2 z s 2 + ω p Q p s + ω 2 p (1) where z , Q z and p , Q p refer to the zero and pole frequency and Q, respectively. All-pole functions (low-pass, bandpass, high-pass) occur when only one of the numerator terms (s 0 , Table 1. Comparison of Active and Passive Filter Properties Audio Band Filters LC Active RC Bulky Small Lossy (low Q) Lossless (high Q) Stable (absolutely) Stability depends upon design Transmission loss Capable of transmission gain J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright # 1999 John Wiley & Sons, Inc.

Upload: angel-celestial

Post on 18-Feb-2016

77 views

Category:

Documents


30 download

DESCRIPTION

07 Circuits & Systems. Analog & Digital Signal Processing

TRANSCRIPT

  • 210 ACTIVE FILTERS

    ACTIVE FILTERS quality performance and low cost resulted in a fundamentalchange in design philosophy.

    An electrical filter may be defined as a transducer for sepa- Designers, previously cost-constrained to single-amplifiersecond-order sections, were now able to consider multiampli-rating waves on the basis of their frequencies (1). There are

    numerous everyday uses for such devices ranging from the fier sections whose performance and multipurpose functionsmade commercial production a viable proposition. In particu-filter that allows one to select a particular radio station, to

    the circuit that detects brainwaves, to resonant cavities that lar, the state variable topology (5) formed the basis for a uni-versal filter yielding all basic filtering functions from a sin-operate at microwave frequencies. Indeed, filters are needed

    for operation across the electromagnetic spectrum. Further- gle structure.Inductor replacement and direct simulation techniquesmore, they are required to perform frequency selection to sat-

    isfy various specialized approximating functions, not neces- such as the leapfrog approach (6) offered low-sensitivity ana-logs of classical LC filters. The difficulty in tuning these de-sarily confined to the conventional low-pass, bandpass, high-

    pass, and band-stop forms. vices was simplified enormously by the introduction of com-puter-controlled laser trimming using hybrid microelectronicsHowever, the purpose of this article is to focus on a partic-

    ular category of filter, the active filter, whose evolution over technology. Indeed, by the mid-1970s, sophisticated fifth-or-der elliptic characteristic filters were in large-scale productionthe past 40 years has been heavily influenced by advances in

    microelectronic circuit fabrication. The earliest active filters within the Bell System (7).Thus, over a period of 20 years (19541974), active filterwere motivated by the need to overcome significant limita-

    tions of inductorcapacitor (LC) passive filters, namely: designers had come to rely upon a relatively small number ofbasic building blocks to form second-order sections, or werebasing higher-order designs on analogs of LC structures. Al-1. In the audio band, inductors are bulky and prone tothough many realizations used discrete components, larger-pick up.scale production of thick and thin film hybrid microelectronic2. Resistorcapacitor (RC) filter structures offer a limitedstructures was quite common.range of responses and are subject to substantial pass-

    The advent of switched-capacitor filters in 1979 (8) over-band attenuation.came the need to laser trim resistors and yielded the firstfully integrated active filters. While truly a sampled-data

    By contrast, active RC structures can realize (theoreti- technique, the use of sufficiently high clock frequencies meantcally) lossless filter characteristics in miniaturized form. Pas- that active filters could be used up to 100 kHz, far highersive and active filter properties are summarized in Table 1. than by conventional analog techniques. Subsequent develop-

    A disadvantage of the active filter is its need for a power ments have led to metal-oxide semiconductor field-effect tran-supply and the incorporation of one or more active elements, sistor-capacitor (MOSFET-C) and operational transconduc-usually operational amplifiers. As a result, highly selective tance amplifier-capacitor (OTA-C) filters (9) which yieldfilters need careful design so as to avoid instability. However, authentic analog performance at frequencies exceeding 1as active filter design has matured, a small number of highly MHz.reliable topologies have evolved that provide solid perfor- The following sections will concentrate on a few fundamen-mance across a variety of fabrication technologies. tal filter design techniques that form the basis for modern

    The earliest active filters used discrete components and active filter design. The Sallen and Key, multiple loop feed-were based upon direct synthesis of RC sections with appro- back, and state variable structures have stood the test of timepriately embedded active devices such as the negative imped- and have proven to be as effective in discrete component real-ance converter (2). Second-order sections were then cascaded izations as they have in MOSFET-C structures. They all formto form higher order structures. higher-order filters when cascaded with similar sections. Fi-

    Subsequently, a catalog of building blocks was developed nally, the leapfrog design and direct replacement techniquesby Sallen and Key (3), which led to a much broader interest are discussed as examples of direct higher-order filter syn-in active filters. This was due in no small part to removal of thesis.the need for classical synthesis expertise.

    However, widespread use of active filters was still inhib-SECOND-ORDER STRUCTURESited by concerns over sensitivity, particularly when compared

    to the passband performance of passive filters. This was over-The fundamental building blocks for active RC filters are sec-come by the simulation of the floating inductor (4) and theond-order structures which can readily be cascaded to realizewidespread availability of operational amplifiers whose high-higher-order approximating functions described by the gen-eral voltage transfer function:

    VoVi

    = H

    {s2 + z

    Qzs + 2z

    }{

    s2 + pQp

    s + 2p} (1)

    where z, Qz and p, Qp refer to the zero and pole frequencyand Q, respectively. All-pole functions (low-pass, bandpass,high-pass) occur when only one of the numerator terms (s0,

    Table 1. Comparison of Active and Passive Filter Properties

    Audio Band Filters

    LC Active RC

    Bulky SmallLossy (low Q) Lossless (high Q)Stable (absolutely) Stability depends upon designTransmission loss Capable of transmission gain

    J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright # 1999 John Wiley & Sons, Inc.

  • ACTIVE FILTERS 211

    V1

    G1

    C1

    C2

    G2

    V3

    K

    G3

    RCnetwork

    3

    21

    V1 V3

    K

    Figure 2. Sallen and Key second-order bandpass filter using posi-Figure 1. Sallen and Key structure consisting of a controlled source tive-gain controlled source.and an RC network. Appropriate choice of the RC network yields allbasic forms of second-order transfer functions.

    Table 2 illustrates four topologies and the resulting voltagetransfer function when that RC structure is used in the cir-

    s1, or s2) is present. A notch occurs when the s1 term disap- cuit of Fig. 1. Thus, it is seen that low-pass and high-passpears in the numerator. We will not discuss the more general sections can be achieved with a positive-gain controlledcase (10) that arises when all numerator terms are present source, whereas the two bandpass sections require a negative-simultaneously. gain controlled source (11).

    The topologies that follow are suitable for design using dis- Although not included in the original catalog, the five-ele-crete, hybrid, or fully monolithic fabrication. Furthermore, ment bandpass circuit of Fig. 2, which utilizes a positive-gainthey have stood the test of time for ease of design, tuning controlled source, is now generally incorporated under thesimplicity, and relatively low cost. Sallen and Key banner. In this case:

    Sallen and Key

    Sallen and Key originally proposed (3) a family of all-pole fil-ters based upon the circuit shown in Fig. 1, for which

    V3V1

    =s

    KG1C1

    s2 +{

    G2(1K)C1

    + (G1 +G3)C1

    + G3C1

    }s+ G3(G1 +G2)

    C1C2

    (3)

    Design is most commonly restricted to the positive-gainV3V1

    = Ky21y22 + Ky23

    (2)

    controlled source realizations, despite the inherent positivefeedback used to enhance Qp. In general, if realizations areBy appropriate choice of the passive RC network, it is possiblerestricted to Qp 10, the advantages of the lower componentto realize all forms of basic filter. However, because the cre-spread in this design outweigh the stability considerations.ation of a band-stop (notch) section requires the use of twin TDesign is relatively straightforward and proceeds by coeffi-networks, which are inherently difficult to tune, we confinecient matching.our discussion to the realization of all-pole functions.

    Example 1. For our example, we design a second-order low-pass Chebyshev filter having 0.5 dB passband ripple and apassband gain of 20 dB. From the standard tables (1215),the normalized transfer function is

    V3V1

    = Hs2 + 1.426s + 1.516 (4)

    A passband gain of 20 dB is equivalent to an absolute gain of10, so that H 15.16. By matching the low-pass expressionfrom Table 2 and the coefficients of Eq. (4), we obtain

    KG1G2C1C2

    = 15.16 (5a)G1G2C1C2

    = 1.516 (5b)G2C2

    + G1 + G2C1

    KG2C2

    = 1.426 (5c)

    Thus, K 10 [from Eqs. (5a) and (5c)]. The remaining twoequations contain four unknowns, indicating freedom ofchoice for two of the elements. Such freedom of choice is acharacteristic of the coefficient-matching technique. For con-venience, set C1 C2 IF, since this is highly desirable in

    Table 2. Sallen and Key Realizations

    CircuitNo. RC Structure Voltage Transfer Function

    a

    1

    2

    3

    4

    aK > 0 for circuits 1 and 2, K < 0 for circuits 3 and 4.

    1G1

    C1

    C2

    KG1G2C1C2

    G2

    3

    2

    C1

    C2G22

    1G1

    3

    1C1 C2

    G1

    G2

    3

    2

    C2 G22

    1C1

    G13

    G2C2

    s2 + s + +G1G2C1C2

    KG2C2

    G1 + G2C1

    KG2C2

    G2C2

    s2 + s

    s

    s

    + + (1 K )G1G2C1C2

    G1 + G2C1

    KG1(1 K )C1

    s(1 K )s

    2 + +G1G2

    (1 K )C1C2

    Ks2

    G2C2s2 + s + (1 K ) ++ G1G2C1C2G1C1G2C1

    G1C1

    + +G2C2

    G2C1

  • 212 ACTIVE FILTERS

    RCnetwork

    3

    2+

    1

    V1 V3

    Figure 3. Multiple feedback (MFB) structure consisting of an opera-tional amplifier and an RC network. Appropriate choice of the RCnetwork yields all basic forms of second-order transfer functions.

    many practical realizations. As a result,

    Table 3. MFB Structure and Voltage Transfer Functions

    Filter Type Network Voltage Transfer Function

    (a) Low-pass

    (b) High-pass

    (c) Bandpass

    G1 C2G3

    G4 C5G1G3

    s2C2C5 + sC5(G1 + G3 + G4) + G3G4

    s2C1C3s2C3C4 + sG5(C1 + C3 + C4) + G2G5

    sG1C3s2C3C4 + sG5(C3 + C4) + G5(G1 + G2)

    +

    C1 G2C3

    C4 G5

    +

    G1 G2C3

    C4 G5

    +

    G1G2 = 1.516 (6a)G1 8G2 = 1.426 (6b)

    The only solution yielding positive values is G1 4.268 S andG2 0.355 S. Impedance and frequency denormalization can stable, negtive-feedback circuit. Specific realizations of thebe applied, depending upon specific design requirements. all-pole functions are shown in Table 3.

    Realization of the basic low-pass Sallen and Key filter has As for Sallen and Key sections, design proceeds by coeffi-been widely discussed in the literature (11). Popular alterna- cient matching. A widely used design set is illustrated in Ta-tives to the C1 C2 IF used above are as follows: ble 4, for which both bandpass and high-pass circuits use

    equal-valued capacitors. No such solution is possible for the1. Setting C1 C2 C, G1 G3 G, and K 3 (1/Q) low-pass circuit, though an equal-valued resistor pair is pos-2. Setting K 1 (thereby eliminating two gain-setting re- sible.

    sistors), G1 nG3 and C1 mC2 Although highly stable, the MFB structure has a pole Qdependent upon the square root of component ratios. Thus,3. Setting K 2 (for equal-valued gain-setting resistors),for a Qp of n, the maximum component spread will be propor-C1 C2 C and G3 Q2G1tional to n2. As a result, the MFB arrangement is best suitedto modest values of Qp, typically not greater than 10.Multiple Feedback Structure

    The multiple feedback (MFB) structure (16) is derived fromModified Multiple-Loop Feedback Structurethe general feedback configuration of Fig. 3, in which the ac-

    tive element is an ideal operational amplifier. In positive-feedback topologies such as the Sallen and Key,The most common realization of the RC network is shown Qp is enhanced by subtracting a term from the damping (s1)in Fig. 4, which yields the MFB transfer function coefficient in the denominator. By contrast, in negative-feed-

    back topologies such as the MFB, high values of Qp are ob-tained at the expense of large spreads in element values. The

    V3V1

    = Y1Y3Y5(Y1 + Y2 + Y3 + Y4) + Y3Y4

    (7)two techniques are combined in the modified multiple-loop

    The basic all-pole functions can be realized by single-elementreplacement of the admittances Y1 Y5, yielding a highly

    Y5Y4Y3

    Y2

    Y11 2

    3

    Figure 4. Three-terminal, double-ladder structure for use in MFBsections.

    Table 4. Element Values for MFB Realizations (H is theNumerator Constant in Each Case)

    Element Value

    Bandpass High-pass Low-pass

    G1 H C1 H G1 Hp

    G2 2pQp H G2 p(2 H)Qp C2 Qp(22p H)

    2pC3 1 C3 1 G3 p

    C4 C3 C4 C3 G4 G3

    G5 p

    2QpG5

    pQp(2 H)

    C5 2p

    Qp(22p H)

  • ACTIVE FILTERS 213

    versatility and ease of tuning. The advent of the operationalamplifier eliminated earlier cost concerns, and the ability torealize relatively high-Q sections remains an attractive con-sideration. However, it is the ability of the circuit to yield allbasic forms of second-order sections by appropriate choice ofoutput terminal that has made it so popular for commercialmanufacture (19). Custom filters are readily fabricated by ap-propriate interconnection of terminals, yielding the universalfilter terminology of several vendors. In particular, highly re-liable notch filters are possible through the addition of a sum-

    Vi

    G1

    GaGb

    C3

    C4G5

    Vo

    +

    ming amplifier to the basic three-amplifier array.The circuit shown in Fig. 7 is an example of a state-vari-Figure 5. Modified multiple-loop feedback (MMFB) structure due to

    able section and can be recognized as an analog computer re-Deliyannis which yields a second-order bandpass function. By judi-alization of a second-order differential equation. It is morecious use of positive feedback, this circuit reduces the large compo-

    nent spreads which are characteristic of the MFB structure while commonly referred to as the Huelsman-Kerwin-Newcombyielding greater stability margin than the Sallen and Key ar- (HKN) filter (5). In the frequency domain it is capable of yield-rangement. ing a variety of voltage transfer functions, according to the

    particular output connections used. Assuming ideal opera-tional amplifiers, the specific transfer functions are as follows:

    feedback (MMFB) circuit (17) of Fig. 5, for which1. The low-pass response with

    VoVi

    = sC3G1(1 + k)s2C3C4 + s{G5(C3 + C4) kC3G1} + G1G5

    (8)V1Vi

    =(

    R2[R3 + R10]R3[R1 + R2]

    )/D(s) (9a)

    where k Gb/Ga, and the Q-enhancement term signifies thepresence of positive feedback.

    2. The bandpass response withDesign of this bandpass circuit proceeds by coefficientmatching, although the reader is advised to adopt the step-by-step procedure developed by Huelsman (11).

    A generalization of the MMFB circuit, yielding a fully bi-

    V2Vi

    = R9C2s(

    R2[R3 + R10]R3[R1 + R2]

    )/D(s) (9b)

    quadratic transfer ratio has been developed by Friend et al.(18), as shown in Fig. 6. This arrangement was used exten- 3. The high-pass response withsively in the Bell System where the benefits of computer-con-trolled (deterministic) laser trimming techniques and large-scale manufacture were utilized. Although this resulted in

    V3Vi

    = R2(R3 + R10)R3(R1 + R2)

    C1C2R8R9s2/

    D(s) (9c)

    quite exacting realizations using tantalum thin-film technol-ogy, the structure is less suited to discrete component realiza-

    wheretions. An ordered design process based upon coefficientmatching is presented elsewhere by Huelsman (15).

    D(s) = C1C2R8R9s2 +{

    R1(R3 + R10)R3(R1 + R2)

    }C2R9s +

    R10R3State Variable Structure

    Based upon analog computer design techniques, the state A general biquadratic function may be obtained by combin-variable (SV) structure (5) assumed popularity because of its ing the various outputs via a summing network, as shown in

    Fig. 8. The composite voltage transfer function then becomes

    VoVi

    = R2(R10 + R3)R5(R6 + R7)(R1 + R2)R3(R4 + R5)R7

    C1C2R8R9s2 +

    ([R4 + R5]R6R5[R6 + R7]

    )R9C2s +

    R4R5

    C1C2R8R9s2 +(

    R1[R3 + R10]R3[R1 + R2]

    )C2R9s +

    R10R3

    (10)

    Now consider the design of a low-pass response

    T(s) = Hs2 + (ps/Qp) + 2p

    (11)Vi

    C2

    Vo

    C1

    +

    R4

    R6

    R5 R7 RD

    Rc

    Rb

    R2

    It is clear from Eqs. 9(a) and (10) that there is considerableflexibility in the design since there are nine passive compo-Figure 6. The Friend biquad which generalizes the MMFB structure

    of Fig. 5 so as to yield biquadratic filters. nents and only three specified variables in Eq. (11).

  • 214 ACTIVE FILTERS

    Figure 7. State variable filter capable of yieldinga variety of all-pole second-order transfer func-tions.

    Vi

    V3 (high-pass)

    V2(bandpass)

    Vi(low-pass)

    C1 C2

    R3

    R10

    R2

    R8

    R9

    R1

    +

    +

    +

    The design equations are thus Setting C IF and R8 R9 R yields the following simpli-fied equations

    2p =1

    R2(14a)

    Qp = 12(

    1 + R2R

    )(14b)

    H = 2R2/R(1 + R2

    R

    )R2

    (14c)

    p =

    R10R3R8R9C1C2

    (12a)

    Qp =

    R10R3

    C1C2R8R9{R1(R3 + R10)R3(R1 + R2)

    R9C2} (12b)

    H = R2(R3 + R10)R3(R1 + R2)C1C2R8R9

    (12c)

    Therefore, the design equations areSelecting C1 C2 C and R1 R3 R10 R gives

    R = 1p

    (15a)

    R2R

    + 2Qp 1 (15b)

    The gain constant, H, is fixed as [2 (1/Qp)]2p.

    p = (R8R9C2)1/2 (13a)

    Qp = (R + R2)2R

    R8R9

    (13b)

    H = 2R2(R + R2)C2R8R9

    (13c)

    Figure 8. Composite state variable structure.The addition of an output summing network tothe arrangement of Fig. 7 yields fully biquad-ratic transfer functions.

    Vin

    V3

    V1

    Vout

    C1 C2

    R3

    R10

    R2

    R8

    R7

    R9

    R1

    +

    R6

    +

    +

    V2

    +

    R4

    R5

  • ACTIVE FILTERS 215

    Example 2. Here we design a state-variable filter satisfying HIGHER-ORDER REALIZATIONSthe following normalized elliptic function characteristic hav-ing a notch at 1.4 kHz. Higher-order filters may be designed by cascading second-or-

    der structures of the form described in the previous section.Odd-order functions are accommodated by the addition of asingle-pole section or, if combined with a low-Q pole-pair, bythe addition of a third-order section. The section types (Sallen

    T(s) = H(s2 + 2z )

    s2 + pQp

    s + 2p= s

    2 + 1.438664s2 + 0.314166s + 1.167222 (16)

    and Key, MFB, MMFB, SV) may be mixed in a realization soThus, z 1.199, p 1.08038, and Qp 3.4389. that the SV is used for higher Q and notch functions. Particu-

    Realization requires the use of the summing network to lar care must be taken with the selection of pole-zero pairscombine the low-pass and high-pass outputs. Since no band- and the ordering of sections due to considerations of dynamicpass component is required, the left-hand end of resistor R7 range. A fuller discussion of these features is described else-(Fig. 8) should be grounded. where (2022).

    Now consider the realization of the low-pass section. Set The major advantage of the cascade approach is the abilityR C 1 and Eqs. 15(a) and (b) to give the normalized to independently tune each pole pair. This is offset to somecomponent values as degree by the higher sensitivity to component changes and

    the care needed to properly order the sections and pair thepoles and zeroes. A widely used alternative bases designs onthe passive LC prototype whose passband sensitivity is mini-mal. The most common approaches are described below.

    C1 = C2 = 1FR = R8 = R9 = 0.926R = 1 so that R1 = R3 = R10 = 1

    and R2 = 5.878Inductor Replacement

    The gain constant, H, has the value 1.995. The frequency As indicated above, it is highly desirable to base active RCdenormalization factor, n, is filter designs upon passive LC prototypes because of the re-

    sulting low passband sensitivity. An added advantage resultsfrom the availability of tabulated LC designs (1215), whichn = 2 1.4 10

    3

    1.199= 7.351 103

    obviate the need for sophisticated synthesis techniques. Thus,for a given standard approximating function, the LC proto-Assume that available capacitors have a value of 6800 pF.type may be established with the aid of design tables.Then, the impedance denormalization factor is evaluated as

    The resulting inductors may be replaced by means of anappropriately terminated generalized impedance converter(GIC). The ideal GIC is shown in Fig. 9, for whichZn =

    16800 1012 7.351 103 = 20 k

    Therefore, the denormalized values are Z11 =a11a22

    = k(s)ZL

    Z22 =a22a11

    = 1k(s)

    ZL

    (18a)

    (18b)

    if a12 a21 0.

    C1 = C2 = 6800 pFR1 = R3 = R10 = 20 kR8 = R9 = 18.7 kR2 = 118 k

    standard 1% values

    The high-pass and low-pass outputs may now be combinedto yield the desired transfer function of Eq. (16). Thus, bysubstituting normalized element values into Eq. (10)

    VoVin

    = 1.709R5(R6 + R7)R7(R4 + R5)

    {s2 + 1.1672(R4/R5)

    s2 + 0.314179s + 1.1672}

    (17)

    The location of z is obtained by appropriate choice of the re-sistor ratio, R4/R5. Hence,

    R4R5

    = 1.2326

    Choosing R5 20 k gives R4 24.65 k. The dc gain ofthe filter is determined by appropriate choice of R6/R7. If theseresistors are set equal at 20 k, the resulting dc gain is5.52 dB.

    The filter may be tuned by means of the R4 to R5 ratio to

    GIC[a]

    21

    (a)

    1 2

    Z11ZL

    ZL

    GIC[a]

    1

    (b)

    1

    2

    2

    Z22ZL

    ZL

    locate the notch accurately. In practice, this may be observedby closing a Lissajous pattern on an oscilloscope. The fre- Figure 9. Generalized impedance converter (GIC). (a) Load at termi-

    nal 2. (b) Load at terminal 1.quency at which this occurs will be z.

  • 216 ACTIVE FILTERS

    +

    1Z1 Z2 Z3

    ZLZL

    Z4

    +

    2

    Figure 10. Antoniou GICthe most widely used realization of thisimportant circuit element.

    R

    L1 L3 L4

    C1 C2 C3

    L2 R

    R

    R1

    C1 C2 C3

    R2

    R

    R3 R4

    (a)

    (b)

    Figure 11. High-pass filter realization using direct inductor replace-The most commonly used realization of the GIC, from An-ment. (a) Passive prototype. (b) Active realization using resistor-ter-toniou (23), is shown in Fig. 10. In this caseminated GICs to realize the grounded inductors.

    k(s) = Z1Z3Z2Z4

    (19)capacitor, because

    Thus, if we selectZ22

    s= j

    = 12D

    (20)

    Z1 = Z3 = Z4 = R and Z2 =1

    sC However, the term frequency-dependent negative resistance(FDNR) has achieved universal adoption. D is in units of

    we obtain k(s) sk. If ZL R1, then (farad)2ohms and is represented by the symbol shown inFig. 13.

    Z11 = skR1 A synthesis technique incorporating FDNRs (24) over-comes the need for floating inductor simulation in LC proto-

    and we have simulated a grounded inductor whose Q value types. If the admittances in a network are scaled by a factorfar exceeds that of a conventional coil. Indeed, audio band Q s, neither the voltage nor current transfer ratios are affected,factors of the order of 1000 are readily obtained if high-qual- because they are formed from ratios of impedance or admit-ity capacitors are used in the GIC. tance parameters. However, scaling does affect the network

    Grounded inductor simulation is readily applicable to the elements as follows:realization of high-pass filters, as illustrated in Fig. 11. Notethat a dot ( ) is used to denote terminal 1 of the GIC, because Admittance Y(s) becomes sY(s) (transformed admittance)it is a directional device having a conversion factor k(s) from Capacitor sC becomes s2C (FDNR)terminal 1, and 1/k(s) from terminal 2.

    Inductor 1/sL becomes 1/L (resistor)The simulation of a floating inductor requires the use of

    Resistor 1/R becomes s/R (capacitor)two GICs, as shown in Fig. 12. However, the simulation ofstructures containing several floating inductors can become

    Inductors are thus eliminated and a new, but topologicallyundesirable due to the excessive number of active blocks.equivalent, network is formed.

    Frequency-dependent Negative ResistanceExample 3. In this example, we realize a doubly terminated

    low-pass filter having a fourth-order Butterworth characteris-Depending upon the choice of impedances Z1 Z4, the GIC ofFig. 10 may be used to provide conversion factors of sn, wheren 1, 2. If one internal port impedance is capacitive andthe other three are resistive, the conversion factor is ks inone direction and 1/ks in the other. Use of two internal ca-pacitors yields ks2 and 1/ks2, respectively. Using the firstcombination of elements and a capacitor at port 1 produces a

    ks ks

    kR

    port 2 impedance given by Z22 (1/s2)D, where D is frequencyinvariant. At real frequencies, this represents a second-order Figure 12. GIC realization of a floating inductor.

  • ACTIVE FILTERS 217

    for which, for example, RA may be set at 100 so as to avoidloading the capacitor.

    Denormalization of the circuit is straightforward, notingthat an FDNR of normalized value Dn, is denormalized usingthe expression

    D = DnZn2n

    (21)

    The FDNR approach is most effective when all inductorsare floating. In more complex arrangements, floating FDNRsresult whenever a floating capacitor is present in the originalprototype. Since the replacement of each floating FDNR re-quires the use of two GICs, the alternative of partial transfor-mation (25,26) is preferred.

    The technique is illustrated in Fig. 15, for which the com-posite transmission matrix [a] for the three-section cascadeis given as

    ks2

    kR

    R

    D1

    s2D

    Z =

    ks

    C kC

    (a)

    (c)

    (b)

    Figure 13. FDNR symbol and realization. (a) Symbol for FDNR ofvalue D. (b) Realization of FDNR by resistively-terminated GIC. (c)Realization of FDNR by capacitively-terminated GIC.

    [a] =

    1 1

    01

    k1sn

    a11 a12k1sn

    k1a21sn a22

    [1 0

    0 k2sn

    ]=

    a11a12k2

    k1

    a21a22k2

    k1

    (22)

    tic. From design tables, we obtain the LC prototype of Fig. Hence, for matched GICs (k1 k2), we see that [a] [a].14(a). Transformation yields the so-called DCR network of The technique is illustrated in Fig. 16(a)(c) for a band-Fig. 14(b). If biasing problems are encountered due to the pass section. Using direct FDNR realization of Fig. 16(b)presence of floating capacitors, they may be overcome by the would require a total of five GICs. The partial transformationaddition of shunt resistors, RA and RB, as shown in Fig. 14(c). of Fig. 16(c) reduces the requirement to three GICs. Clearly,In order to preserve the passband loss of the original network, the savings are more dramatic for higher-order realizations.these resistors are arranged to yield a dc gain of 0.5. Hence,

    Leapfrog Realization

    The leapfrog technique (6) was introduced over 40 years agoRB

    RA + 0.7654 + 1.8478= 0.5

    and represents the first of several multiloop feedback simula-tion methods (2729). Its simplicity and elegance derivesfrom a one-to-one relationship between passive reactances ina ladder structure and integrators in the leapfrog model.

    The technique is particularly well suited to the realizationof low-pass responses, which are the most difficult to realizeby direct replacement methods. Although the presence of mul-tiple feedback loops can render tuning difficult, the closematching of capacitor ratios and the similarity of the activeblocks rendered this approach ideal for the realization ofswitched-capacitor filters (SCF). Indeed, SCF technology revi-talized interest in the leapfrog approach.

    Consider the output sections of the low-pass LC filter ofFig. 17(a). The relationships between the various voltagesand currents are shown in Eqs. 23

    (a)

    (c)

    (b)

    1

    10.7654 F1.8478 F

    1F

    1F0.7654 F21.8478 F2

    1.8478 0.7654

    1F

    1FRB

    0.7654 F21.8478 F2

    1.8478 0.7654

    0.7654 H 1.8478 H

    RA

    Figure 14. FDNR realization of low-pass filter. (a) LC prototype offourth-order Butterworth filter. (b) DCR network derived from (a). (c)Resistive shunts added for biasing purposes.

    i1 = sC1RV0i2 = i1 + i0

    V2 =sL1R

    i2

    V3 = V2 + V0i3 = sC2RV3i4 = i3 + i2

    V4 =sL2R

    i4

    V5 = V4 + V3

    (23a)

    (23b)

    (23c)

    (23d)

    (23e)

    (23f)

    (23g)

    (23h)

  • 218 ACTIVE FILTERS

    Figure 15. The use of two GICs to yield an embed-ded network equivalence which eliminates the needto realize floating FDNRs.

    [a]1 2 1 2

    k1sna11 a12

    k1a21sn a22k1sn k2sn

    Thus, working from output to input, we have alternating pro-cesses of differentiation and addition. Now, consider themultifeedback integrator structure of Fig. 17(b), for which

    1 = sT10 (24a)2 = 1 + 0 (24b)3 = sT22 (24c)4 = 3 + 0 (24d)5 = sT34 (24e)6 = 5 + 4 (24f )7 = sT46 (24g)8 = 7 + 6 (24h)

    Thus, for every current and voltage in Eqs. (23ah), there isa corresponding quantity i in Eqs. (24ah). Furthermore, ifcorresponding factors such as C1R1 and T1, L1/R and T2 areset equal, the two systems have full equivalence.

    As a result, LC low-pass structures may be simulatedby a straightforward process, as illustrated in Fig. 18. Moredetailed discussions of this approach, including its extensionbeyond the low pass are presented elsewhere (14, Ch. 10).As an analog of a passive LC filter, the leapfrog structureprovides a low sensitivity structure, and one which is inher-ently stable.

    INTEGRATED FILTERS

    As indicated previously, the earliest active filters were fabri-

    (a)

    (c)

    (b)

    Lo L2Co C2

    C1L1

    Ro R2DoA B

    A B

    D2

    D1R1

    Ro R2Co

    D1R1

    s sC2

    cated using discrete components and, eventually, operationalamplifiers. The selection of high-quality capacitors and low-Figure 16. Partial transformation to eliminate floating FDNRs. (a)tolerance, high-performance resistors is crucial to the ulti-LC bandpass section. (b) DCR realization of (a). (c) Partial transfor-mate quality of the filter (20). Finally, the circuit must bemation of (a) by embedding the section AABB between two GICs.tuned by the adjustment of one or more trimmer pots.

    Figure 17. Basic equivalence of LC and leapfrogstructures. (a) LC prototype. (b) Multifeedback inte-grator structure.

    (a)

    (b)

    1C2 C1

    V4

    V5

    8 7 6

    i3i4

    V0V3

    i2L2/R

    1/sT4

    L1/R i1i0

    V2

    5 4 31/sT3 1/sT2 1/sT1 2 1 0

  • ACTIVE FILTERS 219

    The advent of laser trimming, combined with thick and Fully integrated filters have been developed using theMOSFET-C (32) technique, which is based upon monolithicthin film hybrid microelectronic processing, not only led to

    larger-scale production but allowed for much more precise operational amplifiers, capacitors, and MOS (metal oxidesemiconductor) transistors. The latter are biased in theiradjustment of resistors. Coupled with numerically controlled

    resistor adjustment, hybrid microelectronics fabrication led ohmic region to yield tunable resistors. The technique allowsthe designer to take advantage of well-tried RC active filterto more widespread use of active filters. However, the quest

    for ever-smaller structures, and for higher cut-off frequen- design methods but is restricted in frequency by the opera-tional amplifiers and the nonlinear nature of the simulatedcies ultimately led to fully integrated filters. Several major

    technical problems inhibited the fabrication of fully inte- resistance.Further limitations occur due to integrated circuit parasit-grated filters:

    ics and switching noise resulting from the tuning circuitry.These problems can be overcome by using fully balanced dif-1. The relatively low bulk resistance of silicon, whichferential circuits so that parasitic effects appear as commonmeant that large values of resistance required an un-mode signals. Fully balanced circuits are usually derived fromduly large volume.their single-ended counterparts, and are based upon well-2. The relatively low dielectric constant of silicon whichtried structures such as those described in earlier sections. Aresulted in excessively large capacitor plate area.useful general rule (9) for converting from single-ended to a3. The inability to trim passive elements.balanced circuit is presented below:

    Active-R filters (3031) which utilize the single-pole roll- Identify ground node(s) in the single-ended circuit.off model of an operational amplifier provide an effective ca- Mirror the circuit at ground, duplicating all elements,pacitance for simple, high cut-off filters. However, the need to

    and divide the gain of all active devices by two.accurately determine the roll-off properties of each amplifier Change the sign of the gain of all mirrored active devicesrenders this approach inefficient in the absence of sophisti-

    and merge so that any resulting pair with inverting-non-cated on-chip self-tuning circuitry (9).inverting gains becomes one balanced differential input-Switched-capacitor filters were the first fully-integrateddifferential output device.structures. Although they are strictly sampled-data systems,

    they simulate an analog system if the clock frequency is much Realize any devices whose sole effect in the original cir-higher than the cut-off frequency. Although a more detailed cuit is a sign inversion by a simple crossing of wires.description of SCFs is presented in SWITCHED CAPACITOR NET-WORKS, two of their advantages are worthy of note at this time: The conversion process for a state variable filter is shown

    in Fig. 19(ab), while Fig. 19(c) shows the MOSFET-C real-1. The filters are fully integrated. ization in which the resistors of Fig. 19(b) have been replaced

    by MOSFET-simulated resistors.2. Performance depends upon the ratio of relatively smallcapacitors and an accurate clock to establish circuit By contrast, fully integrated active filters based upon the

    operational transconductance amplifier (OTA) (3334) aretime constants with high precision.

    Vi

    Vi Vo

    V0

    T5 T4 T3 T2 T1

    (a)

    (b)

    Figure 18. Leapfrog realization of low-pass LC filter. (a) Fifth-order LC filter. (b) Leapfrog real-ization of (a).

  • 220 ACTIVE FILTERS

    R2

    R3R1

    R

    R

    R4

    C1

    Vi

    Vi+

    C2

    Vo

    +

    R2

    R2

    R3

    R3

    R1

    Vi

    Vo+

    Vo

    R1

    R4

    R4

    C1

    C1

    C2

    C2

    +

    +

    +

    +

    +

    Vi+

    R2

    R2

    R3

    R3

    R1

    Vi

    Vo+

    Vo

    R1

    R4

    R4

    C1

    C1C2

    C2

    +

    +

    +

    +

    +

    (a)

    (b)

    (c)

    Figure 19. Realization of MOSFET-C state variable filter. (a) Original active RC version usingsingle-ended amplifiers. (b) Fully differential, active-RC version. (c) MOSFET-C version withMOSFETs R1 . . . R4 replacing equivalent resistors of (b).

  • ACTIVE FILTERS 221

    gmI0V

    +

    V

    +

    Figure 20. Circuit symbol for the operational transconductance am-plifier (OTA).

    simpler to design and have a much wider frequency range.This has led to OTA-C structures capable of accurate opera-tion at frequencies beyond 100 MHz (35).

    The OTA is a high-gain voltage-controlled current source,which is relatively easy to fabricate using CMOS or comple-mentary bipolar technology. Some basic properties of the OTAare as follows:

    1. High gain-bandwidth product that yields filters with

    I0

    V1

    I1

    +

    ReqReq

    V1 V2V1 V2

    I2I1

    +

    +

    Req

    (a)

    (b)higher operating frequencies than those using conven-tional operational amplifiers. Figure 21. Resistance simulation using OTAs. (a) Grounded resistor.

    2. Can be electronically tuned to modify its transconduc- (b) Floating resistor.tance.

    3. Infinite input impedance and infinite output impedance.from which:

    The circuit symbol for the OTA is shown in Fig. 20, forwhich

    [I1I2

    ]=[

    gm1 gm1gm2 gm2

    ][V1V2

    ](29)

    I0 = gm(V+ V ) (25)For matched devices, gm1 gm2, and Eq. (29) represents afloating resistor of value 1/gm. Various building blocks canwhere gm is the transconductance, a typical value being 500now be developed, forming the basis for simulation of struc-A/V. gm can be controlled by Ic such that:tures such as the state variable. For example, the simplesummer shown in Fig. 22(a) yieldsgm = KIc (26)

    where Ic is in microamps and a typical value of K is 15. Of V0 =gm1gm3

    V1 +gm2gm3

    V2 (30)

    particular importance, Eq. (26) is valid over a wide range,perhaps as much as six decades for Ic, that is, 0.001 to 1000A. In addition, the gain-bandwidth is also proportional toIc and may extend to hundreds of megahertz. This will belimited by input and output parasitics.

    An OTA-C filter structure depends upon the ability of theOTA to simulate large values of resistance. Hence, in conjunc-tion with relatively small values of capacitance, it is possibleto set the appropriate filter time constants without undue useof silicon real estate.

    Resistance can be simulated by the circuits shown in Figs.21(a,b). For the grounded resistance,

    I1 = I0 = gm(0 V ) = gmV1

    Hence,

    Req = V1I1= 1

    gm(27)

    Thus, if gm 105S, Req 100 k.For the floating resistance of Fig. 21(b):

    V1

    +

    +

    V2 V0

    (a)

    1

    2

    +3

    V+

    +

    VV0C

    1 +

    2

    (b)

    Figure 22. OTA filter building blocks. (a) Summer. (b) Damped inte-grator.

    I1 = gm1(V2 V1)I2 = gm2(V1 V2)

    (28a)

    (28b)

  • 222 ACTIVE FILTERS

    5. W. J. Kerwin, L. P. Huelsman, and R. W. Newcomb, State-vari-able synthesis for insensitive integrated circuit transfer func-tions, IEEE Journal, SC-2: 8792, 1967.

    6. F. E. Girling and E. F. Good, Active filters, Wireless World, 76:341345, 445450, 1970. The leapfrog method was first describedby the same authors in RRE Memo No. 1177, September, 1955.

    7. R. A. Friedenson et al., RC active filters for the D3 channel bankfilter, Bell Syst. Tech. J., 54 (3): 507529, 1975.

    8. R. W. Brodersen, P. R. Gray, and D. A. Hodges, MOS switched-capacitor filters, Proc. IEEE, 67: 6175, 1979.

    9. R. Schaumann, M. S. Ghausi, and K. R. Laker, Design of AnalogFilters, Englewood Cliffs, NJ: Prentice-Hall, 1990.

    10. R. W. Daniels, Approximation Methods for Electronic Filter De-sign, New York: McGraw-Hill, 1974.

    +1

    +

    2 +

    3

    +4

    BandpassLow-pass

    11. P. Bowron and F. W. Stephenson, Active Filters for Communica-Figure 23. OTA circuit yielding both bandpass and low-pass second- tions and Instrumentation, London: McGraw-Hill, 1979.order transfer functions. 12. A. I. Zverev, Handbook of Filter Synthesis, New York: Wiley, 1967.

    13. E. Christian and E. Eisenman, Filter Design Tables and Graphs,New York: Wiley, 1966.

    whereas the circuit of Fig. 22(b) realizes a damped integrator, 14. F. W. Stephenson (ed.), RC Active Filter Design Handbook, Newfor which York: Wiley, 1985.

    15. L. P. Huelsman, Active and Passive Analog Filter DesignAnIntroduction, New York: McGraw-Hill, 1993.V0 =

    gm1sC + gm2

    (V + V ) (31)16. F. W. Stephenson, Single-amplifier multiple-feedback filters, in

    W-K. Chen, (ed.), The Circuits and Filters Handbook, New York:Furthermore, by setting gm2 0 (eliminating the second OTA), CRC Press/IEEE Press, 1995.Eq. (31) reduces to that of an undamped integrator. 17. T. Deliyannis, High-Q factor circuit with reduced sensitivity,

    Other biquads, not based directly on earlier topologies, Electron. Lett., 4 (26): 577579, 1968.may also be realized. For example, the biquad of Fig. 23 may

    18. J. J. Friend, C. A. Harris, and D. Hilberman, STAR: an activebe analyzed to yield a bandpass output of biquadratic filter section, IEEE Trans. Circuits Syst., CAS-22:

    115121, 1975.19. L. P. Huelsman and P. E. Allen, Introduction to the Theory and

    Design of Active Filters, New York: McGraw-Hill, 1980.20. F. W. Stephenson and W. B. Kuhn, Higher-order filters, in J. T.

    VbpVi

    = s(gm1/C1)s2 +

    (gm2C2

    )s + gm3gm4

    C1C2

    (32)

    Taylor and Q. Huang (eds.), CRC Handbook of Electrical Filters,New York: CRC Press, 1997, pp. 119139.

    SUMMARY 21. G. S. Moschytz, A second-order pole-zero pair selection for nth-order minimum sensitivity networks, IEEE Trans., CT-17 (4):

    RC active filters have reached a degree of maturity that could 527534, 1970.not have been envisaged when they were first conceived over 22. M. S. Ghausi and K. R. Laker, Modern Filter Design, Englewood40 years ago. The successive introductions of operational am- Cliffs, NJ: Prentice-Hall, 1981.plifiers, laser trimming, hybrid microelectronic fabrication, 23. A. Antoniou, Realization of gyrators using operational amplifiersand, finally, fully integrated filters have all helped advance and their use in RC-active network synthesis, Proc. IEEE, 116

    (11): 18381850, 1969.the state of the art. However, the thread linking all of thesetechnological advances has been the retention of a small num- 24. L. T. Bruton, Network transfer functions using the concept of

    frequency-dependent negative resistance, IEEE Trans., CT-16:ber of topologies and techniques that have been proven to406408, 1969.yield reliable filters for large-scale practical applications.

    25. A. W. Keen and J. L. Glover, Active RC equivalents of RCL net-These structures have formed the basis for discussion in thisworks by similarity transformation, Electron. Lett. 7 (11): 288article. By no means do they represent all the possibilities,290, 1971.but they do form a solid basis upon which further study may

    26. L. T. Bruton and A. B. Haase, Sensitivity of generalized immit-be based.tance converter-embedded ladder structures, IEEE Trans., CAS-21 (2): 245250, 1974.

    BIBLIOGRAPHY 27. G. Hurtig, The primary resonator block technique of filter synthe-sis, Proc. IEEE International Filter Symposium, Santa Monica,

    1. F. Jay (ed.), IEEE Standard Dictionary of Electrical and Electronic CA, April 1972, p. 84, [US Patent 3,720,881, March, 1973].Terms, 4th ed., New York: IEEE Press, 1988. 28. K. R. Laker and M. S. Ghausi, Synthesis of low-sensitivity

    2. J. G. Linvill, RC active filters, Proc. IRE, 12: 555564, 1954. multiloop feedback active RC filter, IEEE Trans., CAS-21 (2):252259, 1974.3. R. P. Sallen and E. L. Key, A practical method of designing RC

    active filters, IRE Trans., CT-2: 7485, 1955. 29. J. Tow, Design and evaluation of shifted-companion form of ac-tive filters, Bell Syst. Tech. J., 54 (3): 545568, 1975.4. A. G. J. Holt and J. R. Taylor, Method of replacing ungrounded

    inductances by grounded gyrators, Electron. Lett., 1 (4): 105, 30. J. R. Brand and R. Schaumann, Active R filters: Review of theoryand practice, IEEE J., ECS-2 (4): 89101, 1978.1965.

  • ACTIVE PERCEPTION 223

    31. A. S. Sedra and P. O. Brackett, Filter Theory and Design: Activeand Passive, Portland OR: Matrix, 1978.

    32. Y. Tsividis, M. Banu, and J. Khoury, Continuous-Time MOSFET-C Filters in VLSI, IEEE Trans., CAS-33: 112540, 1986.

    33. Y. P. Tsividis and J. O. Voorman (eds.), Integrated Continuous-Time Filters, Piscataway, NJ: IEEE Press, 1993.

    34. R. L. Geiger and E. Sanchez-Sinencio, Active filter design usingoperational transconductance amplifiers: A tutorial, IEEE Circ.Dev. Mag., CDM-1: 2032, 1985.

    35. M. Atarodi and J. Choma, Jr., A 7.2 GHz bipolar operationaltransconductance amplifier for fully integrated OTA-C filters, J.Analog Integ. Circ. Signal Process., 6 (3): 243253, 1994.

    F. WILLIAM STEPHENSONVirginia Polytechnic Institute and

    State University

    ACTIVE FILTERS. See ANALOG FILTERS; CASCADE NET-WORKS.

    ACTIVE NETWORK SYNTHESIS. See CURRENT CON-VEYORS.

  • ADAPTIVE FILTERS 259

    Now consider the expectation of Eq. (37): For stable convergence each term in Eq. (45) must be lessthan one, so we must have

    E[www(n+ 1)] =E[www(n)] + 2E[d(n)xxx(n)] 2E[xxx(n)xxx(n)T]E[www(n)] (38) 0 < < 1

    max(46)

    We have assumed that the filter weights are uncorrelatedwhere max is the largest eigenvalue of the correlation matrixwith the input signal. This is not strictly satisfied, becauseR, though this is not a sufficient condition for stability underthe weights depend on x(n); but we can assume that hasall signal conditions. The final convergence rate of the algo-small values because it is associated with a slow trajectory.rithm is determined by the value of the smallest eigenvalue.So, subtracting the optimum solution from both sides of Eq.An important characteristic of the input signal is therefore(38), and substituting the autocorrelation matrix R and cross-the eigenvalue spread or disparity, defined ascorrelation vector p, we get

    max/min (47)E[www(n + 1)] R1ppp = E[www(n)] R1ppp + 2R{R1ppp E[www(n)]}(39)

    So, from the point of view of convergence speed, the idealNext, defining value of the eigenvalue spread is unity; the larger the value,

    the slower will be the final convergence. It can be shown (3)that the eigenvalues of the autocorrelation matrix are (n + 1) = E[www(n + 1)] R1ppp (40)bounded by the maximum and minimum values of the power

    from Eq. (39) we obtain spectral density of the input.It is therefore concluded that the optimum signal for fast-

    est convergence of the LMS algorithm is white noise, and that (n + 1) = (III 2R) (n) (41)any form of coloring in the signal will increase the conver-

    This process is equivalent to translation of coordinates. Next, gence time. This dependence of convergence on the spectralwe define R in terms of an orthogonal transformation (7): characteristics of the input signal is a major problem with the

    LMS algorithm, as discussed in Ref. 6.R = KTQK (42)

    LMS-Based Algorithmswhere Q is a diagonal matrix consisting of the eigenvalues

    The Normalized LMS Algorithm. The normalized LMS(0, 1, . . ., N) of the correlation matrix R, and K is the uni-(NLMS) algorithm is a variation of the ordinary LMS algo-tary matrix consisting of the eigenvectors associated withrithm. Its objective is to overcome the gradient noise amplifi-these eigenvalues.cation problem. This problem is due to the fact that in theSubstituting Eq. (42) in Eq. (41), we havestandard LMS, the correction e(n)x(n) is directly propor-tional to the input vector x(n). Therefore, when x(n) is large,the LMS algorithm amplifies the noise.

    (n + 1) = (III 2KTQK) (n)= KT(III 2Q)K (n) (43) Consider the LMS algorithm defined by

    Multiplying both sides of the Eq. (43) by K and defining www(n + 1) = www(n) + 2e(n)xxx(n) (48)

    Now consider the difference between the optimum vector w*vvv(n + 1) = K (n + 1)

    = (III 2Q)vvv(n) (44) and the current weight vector w(n):we may rewrite Eq. (44) in matrix form as

    vvv(n) = www www(n) (49)

    Assume that the reference signal and the error signal are

    d(n) = wwwTxxx(n) (50)e(n) = d(n) www(n)Txxx(n) (51)

    Substituting Eq. (50) in Eq. (51), we obtain

    e(n) = wwwTxxx(n) www(n)Txxx(n)= [wwwT www(n)T]xxx(n)= vvvT(n)xxx(n)

    (52)

    We decompose v(n) into its rectangular components

    vvv(n) = vvvo(n) + vvvp(n) (53)

    v0(n)v1(n)

    ...vN1(n)

    =

    (1 21)n(1 22)n

    ...(1 2N )n

    v0(0)v1(0)

    ...vN1(0)

    (45)

  • 260 ADAPTIVE FILTERS

    Therefore, the NLMS algorithm given by Eq. (64) is equiva-lent to the LMS algorithm if

    2 = xxxT(n)xxx(n)

    (66)

    vp(n)

    vp(n)vp(n1)

    vp(n) x(n)

    NLMS AlgorithmFigure 13. Geometric interpretation of the NLMS algorithm.

    where vo(n) and vp(n) are the orthogonal component and theparallel component of v(n) with respect to the input vector.This implies

    vvvp(n) = Cxxx(n) (54)

    where C is a constant. Then substituting Eq. (53) and Eq. (54)in Eq. (52), we get

    Parameters: M = filter order = step size

    Initialization: Set www(0) = 0Computation: For n = 0,1, 2, . . ., compute

    y(n) = www(n)Txxx(n)e(n) = d(n) y(n) =

    xxxT(n)xxx(n)

    www(n + 1) = www(n) + e(n)xxx(n)e(n) = [vvvo(n) + vvvp(n)]Txxx(n) (55)

    Time-Variant LMS Algorithms. In the classical LMS algo-e(n) = [vvvo(n) + Cxxx(n)]Txxx(n) (56)rithm there is a tradeoff between validity of the final solutionand convergence speed. Therefore its use is limited for severalBecause vo(n) is orthogonal to x(n), the scalar multiplicationpractical applications, because a small error in the coefficientisvector requires a small convergence factor, whereas a highconvergence rate requires a large convergence factor.vvvTo xxx(n) = 0 (57)

    The search for an optimal solution to the problem of ob-taining high convergence rate and small error in the finalThen solving for C from Eqs. (56) and (57) yieldssolution has been an arduous in recent years. Various algo-rithms have been reported in which time-variable conver-gence coefficients are used. These coefficients are chosen so

    C = e(n)xxxT(n)xxx(n)

    (58)

    as to meet both requirements: high convergence rate and lowand MSE. Interested readers may refer to Refs. 914.

Recursive Least-Squares Algorithm

The recursive least-squares (RLS) algorithm is required for rapidly tracking adaptive filters when neither the reference-signal nor the input-signal characteristics can be controlled. An important feature of the RLS algorithm is that it utilizes information contained in the input data extending back to the instant of time when the algorithm is initiated. The resulting convergence is therefore typically an order of magnitude faster than for the ordinary LMS algorithm.

In this algorithm the mean squared value of the error signal is directly minimized by a matrix inversion. Consider the FIR filter output

$$y(n) = \mathbf{w}^T\mathbf{x}(n) \quad (67)$$

where x(n) is the input vector given by x(n) = [x(n), x(n-1), . . ., x(n-M+1)]^T and w is the weight vector. The optimum weight vector is computed in such a way that the mean squared error E[e²(n)] is minimized, where

$$e(n) = d(n) - y(n) = d(n) - \mathbf{w}^T\mathbf{x}(n) \quad (68)$$
$$E[e^2(n)] = E[\{d(n) - \mathbf{w}^T\mathbf{x}(n)\}^2] \quad (69)$$

To minimize E[e²(n)], we can use the orthogonality principle in the estimation of the minimum. That is, we select the weight vector in such a way that the output error is orthogonal to the input vector.


Then from Eqs. (67) and (68), we obtain

$$E[\mathbf{x}(n)\{d(n) - \mathbf{x}^T(n)\mathbf{w}\}] = \mathbf{0} \quad (70)$$

Then

$$E[\mathbf{x}(n)\mathbf{x}^T(n)\mathbf{w}] = E[d(n)\mathbf{x}(n)] \quad (71)$$

Assuming that the weight vector is not correlated with the input vector, we obtain

$$E[\mathbf{x}(n)\mathbf{x}^T(n)]\,\mathbf{w} = E[d(n)\mathbf{x}(n)] \quad (72)$$

which can be rewritten as

$$\mathbf{R}\mathbf{w} = \mathbf{p} \quad (73)$$

where R and p are the autocorrelation matrix of the input signal and the correlation vector between the reference signal d(n) and the input signal x(n), respectively. Next, assuming ergodicity, p can be estimated in real time as

$$\mathbf{p}(n) = \sum_{k=0}^{n} \lambda^{\,n-k}\, d(k)\,\mathbf{x}(k) \quad (74)$$

$$\mathbf{p}(n) = \sum_{k=0}^{n-1} \lambda^{\,n-k}\, d(k)\,\mathbf{x}(k) + d(n)\,\mathbf{x}(n) = \lambda \sum_{k=0}^{n-1} \lambda^{\,n-k-1}\, d(k)\,\mathbf{x}(k) + d(n)\,\mathbf{x}(n) \quad (75)$$

$$\mathbf{p}(n) = \lambda\,\mathbf{p}(n-1) + d(n)\,\mathbf{x}(n) \quad (76)$$

where λ is the forgetting factor. In a similar way, we can obtain

$$\mathbf{R}(n) = \lambda\,\mathbf{R}(n-1) + \mathbf{x}(n)\mathbf{x}^T(n) \quad (77)$$

Then, multiplying Eq. (73) by R⁻¹ and substituting Eq. (76) and Eq. (77), we get

$$\mathbf{w} = [\lambda\mathbf{R}(n-1) + \mathbf{x}(n)\mathbf{x}^T(n)]^{-1}\,[\lambda\mathbf{p}(n-1) + d(n)\,\mathbf{x}(n)] \quad (78)$$

Next, according to the matrix inversion lemma

$$(\mathbf{A} + \mathbf{B}\mathbf{C}\mathbf{D})^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}(\mathbf{D}\mathbf{A}^{-1}\mathbf{B} + \mathbf{C}^{-1})^{-1}\mathbf{D}\mathbf{A}^{-1} \quad (79)$$

with A = λR(n − 1), B = x(n), C = 1, and D = x^T(n), we obtain

$$\mathbf{w}(n) = \left[\frac{1}{\lambda}\mathbf{R}^{-1}(n-1) - \frac{1}{\lambda}\mathbf{R}^{-1}(n-1)\mathbf{x}(n)\left(\frac{1}{\lambda}\mathbf{x}^T(n)\mathbf{R}^{-1}(n-1)\mathbf{x}(n) + 1\right)^{-1}\frac{1}{\lambda}\mathbf{x}^T(n)\mathbf{R}^{-1}(n-1)\right][\lambda\mathbf{p}(n-1) + d(n)\,\mathbf{x}(n)] \quad (80)$$

$$\mathbf{w}(n) = \frac{1}{\lambda}\left[\mathbf{R}^{-1}(n-1) - \frac{\mathbf{R}^{-1}(n-1)\mathbf{x}(n)\mathbf{x}^T(n)\mathbf{R}^{-1}(n-1)}{\lambda + \mathbf{x}^T(n)\mathbf{R}^{-1}(n-1)\mathbf{x}(n)}\right][\lambda\mathbf{p}(n-1) + d(n)\,\mathbf{x}(n)] \quad (81)$$

Next, for convenience of computation, let

$$\mathbf{Q}(n) = \mathbf{R}^{-1}(n) \quad (82)$$

and

$$\mathbf{K}(n) = \frac{\mathbf{R}^{-1}(n-1)\mathbf{x}(n)}{\lambda + \mathbf{x}^T(n)\mathbf{R}^{-1}(n-1)\mathbf{x}(n)} \quad (83)$$

Then from Eq. (81) we have

$$\mathbf{w}(n) = \frac{1}{\lambda}\left[\mathbf{Q}(n-1) - \mathbf{K}(n)\mathbf{x}^T(n)\mathbf{Q}(n-1)\right][\lambda\mathbf{p}(n-1) + d(n)\,\mathbf{x}(n)] \quad (84)$$

$$\mathbf{w}(n) = \mathbf{Q}(n-1)\mathbf{p}(n-1) + \frac{1}{\lambda}d(n)\mathbf{Q}(n-1)\mathbf{x}(n) - \mathbf{K}(n)\mathbf{x}^T(n)\mathbf{Q}(n-1)\mathbf{p}(n-1) - \frac{1}{\lambda}d(n)\mathbf{K}(n)\mathbf{x}^T(n)\mathbf{Q}(n-1)\mathbf{x}(n) \quad (85)$$

$$\mathbf{w}(n) = \mathbf{w}(n-1) + \frac{1}{\lambda}d(n)\mathbf{Q}(n-1)\mathbf{x}(n) - \frac{\mathbf{Q}(n-1)\mathbf{x}(n)\mathbf{x}^T(n)\mathbf{w}(n-1)}{\lambda + \mathbf{x}^T(n)\mathbf{Q}(n-1)\mathbf{x}(n)} - \frac{1}{\lambda}\,\frac{d(n)\mathbf{Q}(n-1)\mathbf{x}(n)\mathbf{x}^T(n)\mathbf{Q}(n-1)\mathbf{x}(n)}{\lambda + \mathbf{x}^T(n)\mathbf{Q}(n-1)\mathbf{x}(n)} \quad (86)$$

$$\mathbf{w}(n) = \mathbf{w}(n-1) + \frac{1}{\lambda}\,\frac{\mathbf{Q}(n-1)\mathbf{x}(n)}{\lambda + \mathbf{x}^T(n)\mathbf{Q}(n-1)\mathbf{x}(n)}\left[\lambda d(n) + d(n)\mathbf{x}^T(n)\mathbf{Q}(n-1)\mathbf{x}(n) - \lambda\mathbf{x}^T(n)\mathbf{w}(n-1) - d(n)\mathbf{x}^T(n)\mathbf{Q}(n-1)\mathbf{x}(n)\right] \quad (87)$$

$$\mathbf{w}(n) = \mathbf{w}(n-1) + \frac{\mathbf{Q}(n-1)\mathbf{x}(n)}{\lambda + \mathbf{x}^T(n)\mathbf{Q}(n-1)\mathbf{x}(n)}\left[d(n) - \mathbf{x}^T(n)\mathbf{w}(n-1)\right] \quad (88)$$

Finally, we have

$$\mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{K}(n)\,\xi(n) \quad (89)$$

where

$$\mathbf{K}(n) = \frac{\mathbf{Q}(n-1)\mathbf{x}(n)}{\lambda + \mathbf{x}^T(n)\mathbf{Q}(n-1)\mathbf{x}(n)} \quad (90)$$

and ξ(n) is the a priori estimation error, based on the old least-squares estimate of the weight vector that was made at time n − 1, and defined by

$$\xi(n) = d(n) - \mathbf{w}^T(n-1)\mathbf{x}(n) \quad (91)$$

Then Eq. (89) can be written as

$$\mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{Q}(n)\,\xi(n)\,\mathbf{x}(n) \quad (92)$$

where Q(n) is given by

$$\mathbf{Q}(n) = \frac{1}{\lambda}\left[\mathbf{Q}(n-1) - \frac{\mathbf{Q}(n-1)\mathbf{x}(n)\mathbf{x}^T(n)\mathbf{Q}(n-1)}{\lambda + \mathbf{x}^T(n)\mathbf{Q}(n-1)\mathbf{x}(n)}\right] \quad (93)$$


The applicability of the RLS algorithm requires that the recursion for Q(n) be initialized by choosing a starting value Q(0) that ensures the nonsingularity of the correlation matrix R(n) (3).

RLS Algorithm
Initialization: Set Q(0) and w(0) = 0
Computation: For n = 1, 2, . . ., compute
  K(n) = Q(n − 1) x(n) / [λ + x^T(n) Q(n − 1) x(n)]
  ξ(n) = d(n) − w^T(n − 1) x(n)
  w(n) = w(n − 1) + K(n) ξ(n)
  Q(n) = (1/λ) [Q(n − 1) − K(n) x^T(n) Q(n − 1)]
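The boxed RLS recursion can be sketched in code as follows. The initialization Q(0) = I/δ with a small hypothetical δ is one common way of guaranteeing the nonsingular start discussed above; it, the function name, and the parameter values are assumptions rather than prescriptions from the text.

```python
import numpy as np

def rls(x, d, M, lam=0.99, delta=0.01):
    """RLS recursion of Eqs. (89)-(93); lam is the forgetting factor."""
    N = len(x)
    w = np.zeros(M)                    # w(0) = 0
    Q = np.eye(M) / delta              # Q(0): regularized inverse-correlation start
    for n in range(M, N):
        xn = x[n - M + 1:n + 1][::-1]  # x(n) = [x(n), ..., x(n-M+1)]^T
        K = Q @ xn / (lam + xn @ Q @ xn)       # gain vector, Eq. (90)
        xi = d[n] - w @ xn                     # a priori error, Eq. (91)
        w = w + K * xi                         # weight update, Eq. (89)
        Q = (Q - np.outer(K, xn @ Q)) / lam    # inverse-correlation update, Eq. (93)
    return w
```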

    IMPLEMENTATIONS OF ADAPTIVE FILTERS 12. T. Aboulnasr and K. Mayas, A robust variable step size LMS-type algorithm: Analysis and simulations, IEEE Trans. Signal

    In the last few years many adaptive filter architectures have Process., 45: 631639, 1997.been proposed, for reducing the convergence rate without in- 13. F. Casco et al., A variable step size (VSS-CC) NLMS algorithm,creasing the computational cost significantly. The digital im- IEICE Trans. Fundam., E78-A (8): 10041009, 1995.plementations of adaptive filters are the most widely used. 14. M. Nakano et al., A time varying step size normalized LMS algo-They yield good performance in terms of adaptivity, but con- rithm for adaptive echo canceler structures, IEICE Trans. Fun-sume considerable area and power. Several implementations dam., E78-A (2): 254258, 1995.achieve power reduction by dynamically minimizing the order 15. J. T. Ludwig, S. H. Nawab, and A. P. Chandrakasan, Low-powerof the digital filter (15) or employing parallelism and pipelin- digital filtering using approximate processing, IEEE J. Soliding (16). On the other hand, high-speed and low-power appli- State Circuits, 31: 395400, 1996.cations require both parallelism and reduced complexity (17). 16. C. S. H. Wong et al., A 50 MHz eight-tap adaptive equalizer for

    Is well known that analog filters offer advantages of small partial-response channels, IEEE J. Solid State Circuits, 30: 228234, 1995.area, low power, and higher-frequency operation over their

    digital counterparts, because analog signal-processing opera- 17. R. A. Hawley et al., Design techniques for silicon compiler imple-tions are normally much more efficient than digital ones. mentations of high-speed FIR digital filters, IEEE J. Solid State

    Circuits, 31: 656667, 1996.Moreover, since continuous-time adaptive filters do not needanalog-to-digital conversion, it is possible to prevent quanti- 18. M. H. White et al., Charge-coupled device (CCD) adaptive dis-

    crete analog signal processing, IEEE J. Solid State Circuits, 14:zation-related problems.132147, 1979.Gradient descent adaptive learning algorithms are com-

    19. T. Enomoto et al., Monolithic analog adaptive equalizer inte-monly used for analog adaptive learning circuits because ofgrated circuit for wide-band digital communications networks,their simplicity of implementation. The LMS algorithm is of-IEEE J. Solid State Circuits, 17: 10451054, 1982.ten used to implement adaptive circuits. The basic elements

    20. F. J. Kub and E. W. Justh, Analog CMOS implementation of highused for implementing the LMS algorithm are delay elementsfrequency least-mean square error learning circuit, IEEE J. Solid(which are implemented with all-pass first-order sections),State Circuits, 30: 13911398, 1995.multipliers (based on a square law), and integrators. The

    21. Y. L. Cheung and A. Buchwald, A sampled-data switched-currenttechniques utilized to implement these circuits are discrete-analog 16-tap FIR filter with digitally programmable coefficientstime approaches, as discussed in Refs. 18 to 21, and continu-in 0.8 m CMOS, Int. Solid-State Circuits Conf., February 1997.ous-time implementations (22,23,24).

    22. J. Ramirez-Angulo and A. Daz-Sanchez, Low voltage program-Several proposed techniques involve the implementation ofmable FIR filters using voltage follower and analog multipliers,the RLS algorithm, which is known to have very low sensitiv-Proc. IEEE Int. Symp. Circuits Syst., Chicago, May 1993.ity to additive noise. However, a direct analog implementa-

    23. G. Espinosa F.-V. et al., Ecualizador adaptivo BiCMOS de tiempotion of the RLS algorithm would require a considerable effort.continuo, utilizando una red neuronal de Hopfield, CONIELEC-To overcome this problem, several techniques have been pro-OMP97, UDLA, Puebla, Mexico, 1997.posed, such as structures based on Hopfield neural networks

    24. L. Ortz-Balbuena et al., A continuous time adaptive filter struc-(23,25,26,27).ture, IEEE Int. Conf. Acoust., Speech Signal Process., Detroit,1995, pp. 10611064.

BIBLIOGRAPHY

1. S. U. H. Qureshi, Adaptive equalization, Proc. IEEE, 73: 1349–1387, 1985.
2. J. Makhoul, Linear prediction: A tutorial review, Proc. IEEE, 63: 561–580, 1975.
3. S. Haykin, Adaptive Filter Theory, 3rd ed., Upper Saddle River, NJ: Prentice-Hall, 1996.
4. B. Friedlander, Lattice filters for adaptive processing, Proc. IEEE, 70: 829–867, 1982.
5. J. J. Shynk, Adaptive IIR filtering, IEEE ASSP Mag., 6 (2): 4–21, 1989.
6. P. Hughes, S. F. A. Ip, and J. Cook, Adaptive filters: a review of techniques, BT Technol. J., 10 (1): 28–48, 1992.
7. B. Widrow and S. D. Stearns, Adaptive Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1985.
8. B. Widrow and M. E. Hoff, Jr., Adaptive switching circuits, IRE WESCON Conv. Rec., part 4, 1960, pp. 96–104.
9. J. Nagumo and A. Noda, A learning method for system identification, IEEE Trans. Autom. Control, AC-12: 282–287, 1967.
10. R. H. Kwong and E. W. Johnston, A variable step size LMS algorithm, IEEE Trans. Signal Process., 40: 1633–1642, 1992.
11. I. Nakanishi and Y. Fukui, A new adaptive convergence factor with constant damping parameter, IEICE Trans. Fundam. Electron. Commun. Comput. Sci., E78-A (6): 649–655, 1995.
12. T. Aboulnasr and K. Mayyas, A robust variable step size LMS-type algorithm: Analysis and simulations, IEEE Trans. Signal Process., 45: 631–639, 1997.
13. F. Casco et al., A variable step size (VSS-CC) NLMS algorithm, IEICE Trans. Fundam., E78-A (8): 1004–1009, 1995.
14. M. Nakano et al., A time varying step size normalized LMS algorithm for adaptive echo canceler structures, IEICE Trans. Fundam., E78-A (2): 254–258, 1995.
15. J. T. Ludwig, S. H. Nawab, and A. P. Chandrakasan, Low-power digital filtering using approximate processing, IEEE J. Solid-State Circuits, 31: 395–400, 1996.
16. C. S. H. Wong et al., A 50 MHz eight-tap adaptive equalizer for partial-response channels, IEEE J. Solid-State Circuits, 30: 228–234, 1995.
17. R. A. Hawley et al., Design techniques for silicon compiler implementations of high-speed FIR digital filters, IEEE J. Solid-State Circuits, 31: 656–667, 1996.
18. M. H. White et al., Charge-coupled device (CCD) adaptive discrete analog signal processing, IEEE J. Solid-State Circuits, 14: 132–147, 1979.
19. T. Enomoto et al., Monolithic analog adaptive equalizer integrated circuit for wide-band digital communications networks, IEEE J. Solid-State Circuits, 17: 1045–1054, 1982.
20. F. J. Kub and E. W. Justh, Analog CMOS implementation of high frequency least-mean-square error learning circuit, IEEE J. Solid-State Circuits, 30: 1391–1398, 1995.
21. Y. L. Cheung and A. Buchwald, A sampled-data switched-current analog 16-tap FIR filter with digitally programmable coefficients in 0.8 μm CMOS, Int. Solid-State Circuits Conf., February 1997.
22. J. Ramirez-Angulo and A. Diaz-Sanchez, Low voltage programmable FIR filters using voltage follower and analog multipliers, Proc. IEEE Int. Symp. Circuits Syst., Chicago, May 1993.
23. G. Espinosa F.-V. et al., Ecualizador adaptivo BiCMOS de tiempo continuo, utilizando una red neuronal de Hopfield [Continuous-time BiCMOS adaptive equalizer using a Hopfield neural network], CONIELECOMP '97, UDLA, Puebla, Mexico, 1997.
24. L. Ortiz-Balbuena et al., A continuous time adaptive filter structure, IEEE Int. Conf. Acoust., Speech Signal Process., Detroit, 1995, pp. 1061–1064.
25. M. Nakano et al., A continuous time equalizer structure using Hopfield neural networks, Proc. IASTED Int. Conf. Signal Image Process., Orlando, FL, November 1996, pp. 168–172.
26. G. Espinosa F.-V., A. Diaz-Mendez, and F. Maloberti, A 3.3 V CMOS equalizer using Hopfield neural network, 4th IEEE Int. Conf. Electron., Circuits, Syst., ICECS '97, Cairo, 1997.


27. M. Nakano-Miyatake and H. Perez-Meana, Analog adaptive filtering based on a modified Hopfield network, IEICE Trans. Fundam., E80-A: 2245–2252, 1997.

    Reading List

M. L. Honig and D. G. Messerschmitt, Adaptive Filters: Structures, Algorithms, and Applications, Norwell, MA: Kluwer, 1988.

B. Mulgrew and C. F. N. Cowan, Adaptive Filters and Equalisers, Norwell, MA: Kluwer, 1988.

S. Proakis et al., Advanced Signal Processing, Singapore: Macmillan.

GUILLERMO ESPINOSA FLORES-VERDAD
JOSE ALEJANDRO DIAZ MENDEZ
National Institute for Research in Astrophysics, Optics and Electronics

ALL-PASS FILTERS

    NETWORKS, ALL-PASS

    FILTERS, ALL-PASS

    PHASE EQUALIZERS

All-pass filters are often included in the catalog of classical filter types. A listing of types of classical filters reads as follows: low-pass, high-pass, bandpass, band-stop, and all-pass filters. The transfer functions (see Transfer functions) of all of these filters can be expressed as real, rational functions of the Laplace transform variable s (see Laplace transforms). That is, these transfer functions can be expressed as the ratio of two polynomials in s which have real coefficients. All of the types of filters listed have frequency-selective magnitude characteristics except for the all-pass filter. That is, in the sinusoidal steady state, a low-pass filter passes low-frequency sinusoids relatively well and attenuates high-frequency sinusoids. Similarly, a bandpass filter in sinusoidal steady state passes sinusoids having frequencies that are within the filter's passband relatively well and attenuates sinusoids having frequencies lying outside this band. It should be kept in mind that all of the filters on the list modify the phase of applied sinusoids (see Filtering theory). Figure 1 shows idealized representations of the magnitude characteristics of classical filters for comparison.

However, the all-pass filter is the only filter on the list having a magnitude characteristic that is not frequency selective; in the sinusoidal steady state, an all-pass filter passes sinusoids having any frequency. The filter does not change the amplitude of the input sinusoid, or it changes the amplitudes of input sinusoids by the same amount no matter the frequency. An all-pass filter modifies only the phase, and this is the property that is found useful in signal processing.

Only the transfer function of the all-pass filter, expressed as a rational function of s, must have zeros (loss poles) in the right-half s plane (RHP). The poles and zeros of the transfer function are mirror images with respect to the origin. The transfer functions of the other filters are usually minimum-phase transfer functions, meaning that the zeros of these transfer functions are located in the left-half s plane (LHP) or on the imaginary axis but not in the open RHP. As a result of these properties, the transfer function of an all-pass filter, TAP(s), can be expressed as a gain factor H times a ratio of polynomials in which the numerator polynomial can be constructed from the denominator polynomial by replacing s by −s, thereby creating zeros that are images of the poles. H can be positive or negative.

The primary application of all-pass filters is in phase equalization of filters having frequency-selective magnitude characteristics. A frequency-selective filter usually realizes an optimum approximation to ideal magnitude characteristics. For example, a Butterworth low-pass filter approximates the ideal brick-wall low-pass magnitude characteristic [see Fig. 1(a)] in a maximally flat manner. An ideal filter also has linear phase in the passband in order to avoid phase distortion. But the Butterworth filter does not have linear phase. So an all-pass filter is designed to be connected in cascade with the Butterworth filter in order to linearize its phase characteristic. This application is discussed in greater detail later in this article.

Another application of all-pass filters is the creation of delay for a variety of signal-processing tasks. A signal-processing system may have several branches, and, depending on the application, it may be important to make the delay in each branch approximately equal. This can be done with all-pass filters. On the other hand, a signal-processing task may require delaying one signal relative to another. Again, an all-pass filter can be used to provide the delay. This application is also discussed in greater detail in this article.

    PROPERTIES OF ALL-PASS FILTERS

The transfer function of an all-pass filter, TAP(s), has the form

$$T_{AP}(s) = H\,\frac{D(-s)}{D(s)} \quad (1)$$

where the constant H is the gain factor, which can be positive or negative, and D(s) is a real polynomial in s. Thus, a first-order all-pass filter transfer function, denoted by TAP1, with a pole on the negative real axis at s = −a is given by

$$T_{AP1}(s) = H\,\frac{a - s}{a + s} \quad (2)$$

and a second-order transfer function, denoted as TAP2, with complex poles described by undamped natural frequency (or natural mode frequency) ω₀ and Q (1/2 < Q < ∞ for complex poles in the open LHP) is given by

$$T_{AP2}(s) = H\,\frac{s^2 - (\omega_0/Q)\,s + \omega_0^2}{s^2 + (\omega_0/Q)\,s + \omega_0^2} \quad (3)$$

Of course, an all-pass transfer function can be created that has two real-axis poles, as would be obtained by cascading two buffered first-order transfer functions, but all-pass transfer functions with complex poles are the most useful for phase equalization of filters.

To show that the magnitude characteristic is constant for all frequencies for all-pass transfer functions of any order, we first obtain from Eq. (1)

$$T_{AP}(j\omega) = H\,\frac{D(-j\omega)}{D(j\omega)} = H\,\frac{D^*(j\omega)}{D(j\omega)} \quad (4)$$

where * indicates the conjugate. Then from Eq. (4), we obtain

$$|T_{AP}(j\omega)| = |H|\,\frac{|D^*(j\omega)|}{|D(j\omega)|} = |H| \quad (5)$$

This result is also shown graphically in Fig. 2 for the case of a second-order all-pass transfer function with complex poles. An arbitrary point P has been selected on the jω axis, and we see that the lengths of the vectors from the poles to the point are the same as the lengths of the vectors from the zeros to point P.



Figure 1. Idealized magnitude characteristics of classical filters. (a) Low-pass filter. (b) High-pass filter. (c) Bandpass filter. (d) Band-stop filter. (e) All-pass filter.

Figure 2. A pole-zero plot for a second-order all-pass transfer function with complex poles is shown. An arbitrary point on the jω axis, denoted as P, has been selected. The lengths of the vectors from the poles to point P are the same as the corresponding lengths of the vectors from the zeros to point P.

Thus, the magnitude characteristic is determined only by H and is not a function of frequency.
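A quick numerical check of Eq. (5), assuming the second-order form of Eq. (3) with H = 1, ω₀ = 1, and Q = 2 (values chosen only for illustration):

```python
import numpy as np
from scipy.signal import freqs

# Second-order all-pass of Eq. (3): numerator s^2 - (w0/Q)s + w0^2,
# denominator s^2 + (w0/Q)s + w0^2, with H = 1.
w0, Q = 1.0, 2.0
num = [1.0, -w0 / Q, w0**2]
den = [1.0,  w0 / Q, w0**2]
w, h = freqs(num, den, worN=np.logspace(-2, 2, 400))
print(np.max(np.abs(np.abs(h) - 1.0)))   # essentially zero: |T(jw)| = |H| everywhere
```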

The phase, however, is a function of frequency. Denoting the phase of the all-pass transfer function as θAP, selecting H to be positive for convenience, and denoting the phase of D(jω) as θD, we can write

$$\theta_{AP}(\omega) = -2\,\theta_D(\omega) \quad (6)$$

The all-pass transfer function produces phase lag (1). If H is negative, then an additional phase of π radians is introduced. Figure 3 shows the phase plots obtained for the first-order transfer function in Eq. (2) with a = 1 and for the second-order transfer function in Eq. (3) with ω₀ = 1 and Q values of 1, 2, and 4, together with one smaller value of Q; H is positive for all the transfer functions.

Figure 3. Phase plots for the first-order all-pass transfer function of Eq. (2) with a zero at s = 1 and for second-order transfer functions of Eq. (3) with ω₀ = 1 and Q values of 1, 2, and 4, together with one smaller value of Q. H is positive for all the transfer functions.

Upon examination of the plots generated by the second-order transfer functions, it is seen that for the smallest value of Q there is no point of inflection. For certain higher values of Q, there is a point of inflection. The point of inflection is obtained by differentiating the expression for phase two times with respect to ω, equating the result to zero, and solving for a positive ω. The result is

$$\omega_{\mathrm{inflection}} = \omega_0\sqrt{\sqrt{4 - \frac{1}{Q^2}} - 1} \quad (7)$$

Thus, for Q greater than 0.578, there is a point of inflection in the phase plot.
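The 0.578 threshold is easy to verify numerically. The sketch below assumes the phase of Eq. (3) with H > 0 and ω₀ = 1, and simply looks for a sign change in a numerical second derivative; the Q values tested are arbitrary.

```python
import numpy as np

# Phase of the second-order all-pass in Eq. (3) with H > 0, omega_0 = 1.
def phase(w, Q):
    return -2 * np.arctan2(w / Q, 1 - w**2)   # arctan2 keeps the branch continuous

w = np.linspace(1e-3, 3, 20000)
for Q in (0.5, 0.7, 1.0, 2.0):
    d2 = np.gradient(np.gradient(phase(w, Q), w), w)   # numerical second derivative
    print(Q, np.any(np.diff(np.sign(d2)) != 0))        # sign change => inflection point
# Prints False for Q = 0.5 and True for the larger Q values,
# consistent with the threshold Q = 1/sqrt(3) = 0.578.
```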

The negative of the derivative of phase with respect to ω is the group delay, denoted as τ(ω). Group delay is also


termed envelope delay or merely delay, and its units are seconds. Oftentimes, designers prefer working with delay rather than phase because delay can be expressed as a rational function of ω, whereas the expression for phase involves the transcendental function tan⁻¹( ). For example, the phase of the second-order transfer function with positive H, ω₀ = 1, and Q = 2 (see Eq. 3) is

$$\theta_{AP}(\omega) = -2\tan^{-1}\!\left[\frac{\omega/2}{1 - \omega^2}\right] \quad (8)$$

However, the delay is given by

$$\tau(\omega) = \frac{\omega^2 + 1}{(1 - \omega^2)^2 + \omega^2/4} \quad (9)$$

which is a rational function of ω resulting from the derivative of the arctangent function. Figure 4 depicts the delays corresponding to the phase plots given in Fig. 3. For Q greater than 0.578, the plots of delay exhibit peaks. For sufficiently large Q², the peaks occur essentially at ω₀ = 1.
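For reference, carrying out the differentiation on the phase of Eq. (3) (with H > 0) gives the general delay expression, of which Eq. (9) is the ω₀ = 1, Q = 2 special case; this is a re-derivation under the form assumed in Eq. (3), not a reproduction of the original display:

$$\tau(\omega) = -\frac{d\theta_{AP}}{d\omega} = \frac{\dfrac{2\omega_0}{Q}\,(\omega^2 + \omega_0^2)}{(\omega_0^2 - \omega^2)^2 + \left(\dfrac{\omega\,\omega_0}{Q}\right)^2}$$

Setting ω = 0 gives τ(0) = 2/(ω₀Q), which is consistent with the smallest-Q curve in Fig. 4 having the largest delay near the origin.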

    PHASE DISTORTION

At steady state, a linear, time-invariant network affects only the amplitude and phase of an applied sinusoid to produce an output sinusoid. The output sinusoid has the same frequency as the input sinusoid. If the input signal is composed of two sine waves of different frequencies, then, depending on the network, the output signal could be changed in amplitude or in phase or both. For example, suppose the network is a low-pass filter and that the input signal consists of two sinusoids with different frequencies, but both frequencies lie within the passband. In this case, the network should pass the signal to the output with minimum distortion. Since the frequencies of the sine waves that make up the input signal lie within the passband, very little amplitude distortion is encountered. However, the result can be severely phase distorted. If no phase distortion is to be produced, then the phase characteristic in the passband of the network must be linear and, hence, have the form −kω + θ₀, where k is the magnitude of the slope of the phase characteristic and θ₀ is the phase at ω = 0. Furthermore, if θ₀ is neither 0 nor a multiple of 2π radians, then a distortion known as phase-intercept distortion results. In the following, phase-intercept distortion is not considered. The interested reader is referred to Ref. 2 for further information on phase-intercept distortion.

To illustrate the effects that a system with linear phase has on an input signal, let an input signal v(t), given by

$$v(t) = A_1\sin(\omega t) + A_2\sin(2\omega t) \quad (10)$$

be applied to a network with transfer function T(s). Assume the phase of the system is given by θ(ω) = −kω, where k is a positive constant, and assume that |T(jω)| = |T(j2ω)| = 1. In Eq. (10), A1 and A2 are the peak amplitudes of the two sinusoids that make up v(t). The output signal can be written as

$$v_o(t) = A_1\sin(\omega t - k\omega) + A_2\sin(2\omega t - 2k\omega) \quad (11)$$

Rewriting Eq. (11), we obtain

$$v_o(t) = A_1\sin[\omega(t - k)] + A_2\sin[2\omega(t - k)] \quad (12)$$

where it is seen that each sinusoid in vo(t) is delayed by the same amount, namely k seconds. The output voltage has been delayed by k seconds, but there is no phase distortion.

However, suppose the phase of the system is given by θ(ω) = −kω³, a nonlinear phase characteristic. With the input signal given by Eq. (10) and, as before, assuming that |T(jω)| = |T(j2ω)| = 1, we obtain

$$v_o(t) = A_1\sin[\omega(t - k\omega^2)] + A_2\sin[2\omega(t - 4k\omega^2)] \quad (13)$$

From Eq. (13), it is seen that the sinusoids are delayed by different amounts of time. The nonlinear phase characteristic has resulted in phase distortion. Although the human ear is relatively insensitive to phase changes (3), applications such as control and instrumentation can be greatly impaired by phase distortion. To illustrate this important point further, assume that a signal vi(t) is applied to three different hypothetical amplifiers. The signal vi(t) is composed of two sinusoids and is given in Eq. (14).

One sinusoid has twice the frequency of the other sinusoid. One amplifier is perfectly ideal and has a gain G1 = 10 with no phase shift. The second amplifier has a gain magnitude equal to 10 and has a linear phase shift given by θ = −ω. Thus, its transfer function can be expressed as

$$T_2(j\omega) = 10\,e^{-j\omega} \quad (15)$$

The third amplifier also has a gain magnitude equal to 10, but it has a nonlinear phase characteristic given by θ = −ω³. Thus, its transfer function is given by

$$T_3(j\omega) = 10\,e^{-j\omega^3} \quad (16)$$

Figure 5 depicts the output of the first amplifier. Since the amplifier is perfectly ideal, the output is exactly 10vi. Figure 6 shows the output of the second amplifier, and it is seen that the waveform at the output of the amplifier with linear phase is the same as shown in Fig. 5 except that the waveform in Fig. 6 has been delayed by 1 s. Delay of the entire waveform does not constitute phase distortion. On the other hand, the output of the amplifier with nonlinear phase, shown in Fig. 7, is clearly distorted. For example, its peak-to-peak value is more than 12% larger than it should be. In the next section, we examine the use of a second-order all-pass filter to linearize the phase of nth-order low-pass filters.
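The three-amplifier comparison is easy to reproduce numerically. In the sketch below the input amplitudes and the frequency are assumptions (Eq. (14) fixes only the 2:1 frequency ratio), so the percentage quoted above is not necessarily reproduced; the point demonstrated is that the linear-phase output is an exact delayed copy of the input while the cubic-phase output matches no delayed copy.

```python
import numpy as np

# Hypothetical input: two unit-amplitude sinusoids, one at twice the other's frequency.
w = 1.0                                     # rad/s (assumed)
t = np.linspace(0, 6*np.pi, 6000)
vi = lambda tt: np.sin(w*tt) + np.sin(2*w*tt)

def amplifier(gain, phase):
    """Steady-state output for flat gain and phase function phase(omega) in radians."""
    return gain*(np.sin(w*t + phase(w)) + np.sin(2*w*t + phase(2*w)))

v1 = amplifier(10, lambda om: 0.0)       # ideal amplifier: exactly 10*vi(t)
v2 = amplifier(10, lambda om: -om)       # linear phase -omega: a pure 1 s delay
v3 = amplifier(10, lambda om: -om**3)    # nonlinear (cubic) phase

print(np.max(np.abs(v2 - 10*vi(t - 1.0))))   # ~0: delayed but undistorted
shifts = np.linspace(0, 2*np.pi/w, 400)
print(min(np.max(np.abs(v3 - 10*vi(t - s))) for s in shifts))  # large: no delay of vi matches v3
```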

PHASE EQUALIZATION

Phase equalization is the term used to describe compensation employed with a filter or a system to remedy phase distortion. The goal is to achieve linear phase (flat time delay), and the compensator is labeled a phase equalizer. In this section, we derive the specifications for a second-order all-pass filter that can be used to linearize the phase of most all-pole low-pass filters.


Figure 4. Delay plots for the all-pass transfer functions listed in Fig. 3. The plot for the smallest Q has the largest delay near the origin.

Figure 5. Output voltage of the perfectly ideal amplifier with the input voltage given by Eq. (14). The amplifier has a gain equal to 10 with no phase shift.

Figure 6. Output voltage of the amplifier with linear phase characteristic. The output voltage is delayed 1 s in comparison to the output voltage shown in Fig. 5.


Figure 7. Output voltage of the amplifier with nonlinear phase characteristic with the input voltage given by Eq. (14). The effects of phase distortion are easily seen when this waveform is compared with those in Figs. 5 and 6.

Figure 8. Cascade connection of a second-order low-pass filter with a second-order all-pass filter. It is assumed there is no loading between the two filters.

The technique can also be extended to other phase-equalization tasks. We begin the derivation by linearizing the phase of a second-order low-pass filter having a transfer function given by Eq. (17).

Figure 8 depicts the cascade connection of this low-pass filter with a second-order all-pass filter with transfer function TAP(s). The form of the transfer function of the all-pass filter is given by Eq. (3), but for the purposes of this derivation, let us designate its undamped natural frequency as ωA and its Q as QA. The overall phase of the cascade circuit is θ(ω) = θL(ω) + θA(ω), where θL and θA are the phases contributed by the low-pass filter and the all-pass filter, respectively. We wish to make θ(ω) approximate linear phase in the Maclaurin sense (1). Since θ is an odd function of ω, the Maclaurin series for θ has the form

$$\theta(\omega) = K_1\omega + K_3\omega^3 + K_5\omega^5 + K_7\omega^7 + \cdots \quad (18)$$

where K1 is the first derivative of θ(ω) with respect to ω with the result evaluated at ω = 0, K3 is proportional to the third derivative evaluated at ω = 0, and so on. Therefore, we want to choose ωA and QA to make K3 and K5 equal to zero in Eq. (18). Then K₇ω⁷ will be the lowest-order undesired term in the series for θ(ω).

The phase θL contributed by the second-order low-pass filter can be expressed as given in Eq. (19).

The use of a program that is capable of performing symbolic algebra is recommended to obtain the Maclaurin series for θL. The results are given in Eq. (20).
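The symbolic-algebra step can be illustrated with sympy. The low-pass section is assumed here to have the standard form ω₀²/(s² + (ω₀/Q)s + ω₀²); the printed coefficients correspond to K1, K3, K5, and K7 of Eq. (18) for the low-pass filter alone, not to Eq. (20) verbatim.

```python
import sympy as sp

w, w0, Q = sp.symbols('omega omega_0 Q', positive=True)

# Phase of an assumed second-order low-pass section w0^2/(s^2 + (w0/Q)s + w0^2),
# evaluated on s = j*omega; the Maclaurin expansion is taken about omega = 0.
theta_L = -sp.atan((w*w0/Q) / (w0**2 - w**2))

series = sp.series(theta_L, w, 0, 8).removeO()
for k in (1, 3, 5, 7):
    print(k, sp.simplify(series.coeff(w, k)))   # odd-power coefficients of the phase series
```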

Equation (20) can also be used to write the series for the phase of the all-pass filter directly. Then, forming θ = θL + θA and truncating the result after the term containing ω⁷, we obtain Eq. (21).

The next step is to set the coefficients of ω³ and ω⁵ equal to zero in Eq. (21). Thus, we must satisfy Eqs. (22a) and (22b).

Introduce parameters a and b to represent the left sides of Eqs. (22a) and (22b), respectively. That is, let a and b be defined as in Eqs. (23a) and (23b).


Thus, we have two equations, Eqs. (22a) and (22b), that involve a, b, QA, and ωA. Upon eliminating ωA, we obtain a twelfth-order equation for QA, given in Eq. (24), where the parameter d is defined in Eq. (25). For a given second-order low-pass transfer function, d can be found from Eqs. (23a) and (23b). Then a positive solution for QA is sought from Eq. (24). Finally, ωA is obtained from Eq. (26).

Note that a positive result must be found both for QA from Eq. (24) and for ωA from Eq. (26) in order to obtain a solution.

Although only a second-order low-pass filter transfer function was utilized to derive Eqs. (24) and (26), these two equations are used for the nth-order all-pole case as well, because only the parameters a, b, and d need to be modified. For example, suppose we wish to linearize the phase of a normalized fourth-order Butterworth filter, denoted as B4(s), with a second-order all-pass filter. The transfer function B4(s) is given by

$$B_4(s) = \frac{1}{\left(s^2 + \dfrac{s}{Q_1} + 1\right)\left(s^2 + \dfrac{s}{Q_2} + 1\right)} \quad (27)$$

where ω₁ = ω₂ = 1, Q₁ = 0.541196, and Q₂ = 1.306563. The parameters a and b become the values given in Eq. (28).

Calculating d from Eq. (25) and employing Eqs. (24) and (26), we obtain QA = 0.5434 and ωA = 1.0955. If the normalized Butterworth transfer function is to be frequency scaled to a practical frequency, then the all-pass transfer function must be frequency scaled by the same amount.
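The quoted values can be checked numerically. The sketch below assumes the all-pass has the form of Eq. (3) with H = 1 and compares the group-delay ripple of the normalized B4(s) alone against the cascade B4(s)TAP2(s); the ripple metric is only one reasonable choice.

```python
import numpy as np
from scipy.signal import freqs

Q1, Q2 = 0.541196, 1.306563          # Butterworth section Qs quoted in the text
QA, wA = 0.5434, 1.0955              # equalizer values quoted in the text

b4_den = np.polymul([1, 1/Q1, 1], [1, 1/Q2, 1])   # (s^2 + s/Q1 + 1)(s^2 + s/Q2 + 1)
ap_num = [1, -wA/QA, wA**2]                       # Eq. (3) numerator with H = 1 (assumed)
ap_den = [1,  wA/QA, wA**2]

w = np.linspace(1e-3, 1.0, 2000)                  # passband of the normalized filter

def group_delay(num, den):
    _, h = freqs(num, den, worN=w)
    return -np.gradient(np.unwrap(np.angle(h)), w)

tau_b4 = group_delay([1], b4_den)                 # Butterworth alone
tau_eq = tau_b4 + group_delay(ap_num, ap_den)     # cascade with the all-pass equalizer

print(np.ptp(tau_b4) / np.mean(tau_b4))   # noticeable delay ripple without equalization
print(np.ptp(tau_eq) / np.mean(tau_eq))   # smaller: the delay is flattened over the passband
```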

Phase equalization has been applied only to transfer functions of even order in the derivation and the example. To apply phase equalization to an odd-order filter, we must determine the additional factor to add to each parameter a and b. An odd-order, all-pole, low-pass filter transfer function To(s) can be expressed as To(s) = T1(s)TE(s), where T1(s) is given by

$$T_1(s) = \frac{k}{s + k} \quad (29)$$

k is positive, and TE(s) is the remaining portion of the overall transfer function and is of even order. We have assumed that the odd order of To(s) arises because of the existence of one real-axis pole, the usual case. All other poles of To(s) occur in complex conjugate pairs. Denoting the phase of T1(jω) as θ1(ω), we write

$$\theta_1(\omega) = -\tan^{-1}\!\left(\frac{\omega}{k}\right) \quad (30)$$

If we consider the case of linearizing the phase given in Eq. (30) with a second-order all-pass transfer function, we obtain the additional contributions given in Eq. (31), and these terms are added to the expressions for the parameters a and b for higher-order odd transfer functions.

Table 1 provides the values for QA and ωA needed to linearize the phase of low-pass Butterworth filters with a 3.01-dB variation in the passband and the phase of 1-dB-ripple Chebyshev low-pass filters. Note that no solution exists for the second-order Butterworth filter. As an application of Table 1, we find the step responses of two normalized fifth-order Butterworth filters. One filter has a second-order all-pass connected in cascade in order to linearize its phase, and the other does not. The transfer function B5(s) is given by

$$B_5(s) = \frac{1}{(s + 1)(s^2 + 0.618034\,s + 1)(s^2 + 1.618034\,s + 1)} \quad (32)$$

In Fig. 9, the step response of B5(s) has less delay, and the step response of B5(s)TAP2(s), with QA and ωA obtained from Table 1, has greater delay due to the presence of the all-pass filter.


Figure 9. Step responses of fifth-order Butterworth low-pass filters with and without phase equalization. The step response of the filter with phase equalization exhibits preshoot and has greater delay.

However, it is seen from Fig. 9 that the response of the phase-equalized filter more nearly approximates the step response of an ideal low-pass filter with delay, because the response of an ideal filter should begin ringing before it rises to the upper amplitude level. In other words, it should exhibit preshoot.

Oftentimes, the design of filters having frequency-selective magnitude characteristics other than low-pass is accomplished by applying Cauer transformations to a low-pass prototype transfer function. Unfortunately, the Cauer transformations do not preserve the phase characteristics of the low-pass transfer function. Thus, if a Cauer low-pass-to-bandpass transformation is applied to a low-pass filter transfer function that has approximately linear phase, the resulting bandpass filter transfer function cannot be expected to have linear phase, especially if the bandwidth of the bandpass filter is relatively wide. An approach to linearizing the phase of filters other than low-pass filters is to make use of a computer program that plots the delay resulting from cascading a specified magnitude-selective filter with one or more all-pass filters. Using Eq. (7) and Figure 4 as guides, the peaks of the time delays of the all-pass filters can be placed to achieve approximately linear overall phase.

    AN APPLICATION OF DELAY

An all-pass filter can be combined with a comparator to obtain a slope-polarity detector circuit (4). The basic arrangement of the all-pass filter and the comparator is shown in Fig. 10. An LM311 comparator works well in this circuit, and a first-order all-pass filter can be used for input signals that are composed of sinusoids that do not differ greatly in frequency. To understand the behavior of this circuit, suppose vi(t) = A sin(ωt), where A is positive and represents the peak value of the sine wave. The output voltage of the all-pass filter is vA(t) = A sin[ω(t − t1)], where t1 is the delay in seconds caused by the filter. Figure 11 depicts vi(t), vA(t), and the output voltage of the comparator vo(t) for A = 4 V and ω = 2π(100) rad/s. The output terminal of the comparator has a pull-up resistor connected to 5 V. Ideally, when the

    Figure 10. Essential components of a slope-polarity detector.

slope of vi(t) is positive, vo(t) is high, and vo(t) is low when the slope of vi(t) is negative. Actually, the circuit's output changes state at a time which is slightly past the time at which vi(t) changes slope. It is at this later time that the delayed input to the comparator, vA(t), causes the polarity of the voltage [vi(t) − vA(t)] between the leads of the comparator to change. The need for an all-pass filter in this application is clear because the amplitude of the input signal must not be changed by the delaying circuit no matter the frequencies present in the input signal. A first-order all-pass filter is ordinarily adequate for the task. The pole and zero can be set far from the origin, and their placement is not overly critical. Too little delay results in insufficient overdrive for the comparator. Too much delay increases the error in the time at which the comparator changes state, because the polarity of the voltage between the leads of the comparator does not change soon enough after the slope of vi(t) changes. For the example illustrated in Fig. 11 involving a simple sine wave, the amplitudes of the input and delayed sine waves are equal at a time closest to zero denoted by tE and given by

$$t_E = \frac{\pi}{2\omega} + \frac{t_1}{2} \quad (33)$$

The voltage difference, denoted as VE, between the peak of the input sine wave and the level at which the input and delayed sine waves are equal is given by

$$V_E = A\left[1 - \cos\!\left(\frac{\omega t_1}{2}\right)\right] \quad (34)$$

If, for example, the delay provided by the all-pass filter is 0.5 ms for the 100-Hz input sine wave, then the input sine wave will have decreased by approximately 50 mV from its peak value before the comparator begins to change state. This circuit works well at steady state for input signals that do not contain significant high-frequency components. Thus, it works reasonably well if the input signal is a triangular waveform, but it does not work well with square waves.
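Substituting the quoted numbers into Eq. (34) as reconstructed above (A = 4 V, f = 100 Hz, t1 = 0.5 ms) reproduces this figure:

$$V_E = 4\left[1 - \cos\!\left(\frac{2\pi(100)(0.5\times 10^{-3})}{2}\right)\right] \approx 4\,(1 - \cos 0.157) \approx 49\ \mathrm{mV}$$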

    A SYNTHESIS APPLICATION

First-order all-pass filters can be utilized to realize filters with magnitude-selective characteristics. For example, the circuit shown in Fig. 12, which is based on (5) and (6), realizes a bandpass filter transfer function by using a first-order all-pass circuit in a feedback loop.


Figure 11. Input voltage vi(t), delayed input voltage vA(t), and comparator output voltage vo(t) for the slope-polarity detector shown in Figure 10 when the input voltage is a sine wave.

The overall transfer function of the circuit is given in Eq. (35), where K1 is the gain factor associated with the transfer function of the first-order all-pass filter. If C1R1 = CR, K1 = 1, and R2 = R3, then Eq. (35) reduces to the transfer function of a standard second-order bandpass filter, given in Eq. (36).

The Q and ω₀ of the poles in Eq. (36) are given in Eq. (37).

Although the circuit requires the matching of elements and several operational amplifiers, including, possibly, a buffer at the input, it demonstrates that all-pass filters can be employed in the realization of filters having frequency-selective magnitude characteristics.

Figure 12. Second-order bandpass filter realized by incorporating a first-order all-pass filter in a feedback path.

Figure 13. Passive circuit that can be used to realize a first-order all-pass filter. Only one capacitor is needed.

    ALL-PASS CIRCUIT REALIZATIONS

    Voltage-mode Realizations

In this section, we examine a variety of circuits used to realize all-pass transfer functions for which the input and output variables of interest are voltages. Inductorless circuits for first-order all-pass filters can be realized using the bridge circuit shown in Fig. 13. The transfer function of this circuit is given in Eq. (38).

If R1 = R2, then Eq. (38) reduces to the transfer function of an all-pass filter (7). However, the requirement that R1 = R2 results in a gain factor that is small in magnitude. Also, a common ground does not exist between the input and output ports of the circuit.

The bridge circuit shown in Fig. 14, which can be redrawn as a symmetrical lattice, can realize first-order all-pass transfer functions with a gain factor equal to 1. The transfer function of this circuit is given in Eq. (39).

If ZB = R and ZA = 1/(sC), a first-order all-pass transfer function is obtained. If inductors are allowed in the circuit, then the circuit in Fig. 14 can realize higher-order all-pass transfer functions.


Figure 14. Passive circuit that can be used to realize first-order all-pass filters with gain factor equal to 1. Two capacitors are needed. If inductors are allowed, this circuit can realize higher-order all-pass transfer functions with complex poles.

For example, suppose a circuit is needed to realize a third-order all-pass transfer function TAP3(s), given in Eq. (40),

where p(s) and q(s) are the numerator and denominator polynomials, respectively. The denominator polynomial q(s) can be expressed as the sum of its even part, m(s), and its odd part, n(s). Thus, q(s) = m(s) + n(s). If the roots of q(s) are confined to the open LHP, then the ratios n/m and m/n meet the necessary and sufficient conditions to be an LC driving-point impedance (8). Thus, if the numerator and denominator of the transfer function in Eq. (40) are divided by m(s), we obtain Eq. (41).

By comparing the result in Eq. (41) with Eq. (39), it is seen that ZA = 1 and the box labeled ZB in Fig. 14 consists of the series connection of a 1 henry inductor and an LC tank circuit that resonates at 1 rad/s. However, the resulting circuit requires six reactive elements and does not have a common ground between the input and output ports, and these properties may preclude the use of bridge-circuit all-pass networks in some applications.
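The algebra behind this identification can be sketched as follows, assuming (since Eq. (39) is not reproduced here) that the symmetrical lattice of Fig. 14 realizes the standard form (ZB − ZA)/(ZB + ZA), and taking H = 1 so that p(s) = q(−s):

$$T_{AP3}(s) = \frac{p(s)}{q(s)} = \frac{q(-s)}{q(s)} = \frac{m(s) - n(s)}{m(s) + n(s)} = \frac{1 - n(s)/m(s)}{1 + n(s)/m(s)}$$

Matching this to (ZB − ZA)/(ZB + ZA) with ZA = 1 Ω identifies ZB with m(s)/n(s) (or its reciprocal, depending on the sign convention of Eq. (39)), which, as noted above, is realizable as an LC driving-point impedance.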

Single-transistor first-order all-pass transfer function realizations have been described by several authors. The interested reader may refer to Refs. 9 and 10 for additional information. Inductorless second-order realizations are also described in Refs. 9 and 10, but the poles and zeros of the transfer functions are confined to the real axis. Rubin and Even extended the results in Ref. 9 to include higher-order all-pass transfer functions with complex poles, but inductors are employed (11).

Figure 15 shows two first-order all-pass circuits based on operational amplifiers (op-amps) (12, 13; also see Active filters). The transfer functions are given by Ta = (Z2 − kR1)/(Z2 + R1) and Tb = (−kZ1 + R2)/(Z1 + R2). Thus, if Z2 in Fig. 15(a) or Z1 in Fig. 15(b) is selected to be the impedance of a capacitor and k = 1, then first-order all-pass circuits are realized. The circuit in Fig. 15(a) can be used

    Figure 15. Single op-a