
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. IT-30, NO. 1, JANUARY 1984

    A Modified Sequential Detection Procedure

Abstract - A modified version of sequential testing is proposed and applied to discrete-time signal detection. The resulting detection procedure exhibits simplicity in structure and in analysis and retains most of the optimal features of sequential detection. Also, two simple and efficient truncation schemes are suggested for the proposed detection procedure, which is unrealizable without truncation.

Manuscript received October 6, 1980; revised July 5, 1983. This research was supported in part by the National Science Foundation under Grant ECS79-18915 and in part by the U.S. Army Research Office under Contract DAAG29-82-K-0095. This research was presented in part at the 15th Annual Conference on Information Sciences and Systems, The Johns Hopkins University, Baltimore, MD, March 1981.

C. C. Lee is with the Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL 60201.

    J. B. Thomas is with the Department of Electrical Engineering and Computer Science, Princeton University, Princeton, NJ 08544.

    I. INTRODUCTION

WALD'S sequential testing procedure [1]-[4] provides a significant advantage in signal detection: to

achieve a certain accuracy, it requires, on the average, substantially fewer samples than fixed-sample-size (FSS) procedures [5]-[8]. This average saving can be very large (often greater than 50 percent). However, sequential detection involves a more complicated structure and a more difficult analysis that may mitigate against using it. A comparison must be performed after each sample, a feedback link is required to ask for additional samples, and the test function (i.e., the test statistic) will change after each sample. Although a sequential procedure will always terminate in a finite number of steps, only an appropriately

    CHUNG C. LEE, MEMBER, IEEE, AND JOHN B. THOMAS, FELLOW, IEEE

0018-9448/84/0100-0016$01.00 © 1983 IEEE

LEE AND THOMAS: MODIFIED DETECTION PROCEDURE

truncated version can be useful in practice. The analysis of sequential procedures is basically parallel to the theory of random walk with absorbing barriers [9], [10]. Since the integral equations associated with random walk problems usually do not have explicit solutions except for some special cases [11], approximation methods are widely used in sequential analysis to determine the test thresholds, to calculate the average sample numbers (ASN), to treat the truncation problems, etc. [1]-[4]. These approximations are sometimes inappropriate [4].

On the other hand, the sample-efficiency of sequential detection clearly indicates that the FSS detection method is wasting information. Therefore, the problem of exploring other statistical procedures so as to avoid the disadvantages while retaining the sample-efficiency of sequential detection deserves investigation. In this paper, a memoryless grouped-data sequential (MLGDS) procedure is proposed and applied to signal detection.

Consider the following sampling policy: at each stage, take a package of $N_0$ new samples, calculate a test statistic which is based on these $N_0$ observations, and perform a two-threshold test. Stop sampling if one of the thresholds is crossed and the decision between the hypothesis and the alternative is made. Otherwise, discard this package of samples and proceed to the next stage.

    In such a detection procedure, the decision function (the test statistic) remains the same from stage to stage, no memory is required to store the previous data, and, as will be shown, the complexity of analysis is of the same degree as that of analyzing an FSS detector with the additional requirement of a geometric distribution.

Intuitively, this MLGDS procedure may appear to be much worse than Wald's sequential procedure because of the discarding of samples. However, discarding of information will not occur in the MLGDS procedure if a decision is reached at the first stage. Therefore, if the package size is large enough so that the probability of termination at the first stage is high, then the probability of discarding samples is low, and thus the associated loss in efficiency is not so great as one might expect. As a result, an MLGDS detector can have a sample-efficiency approaching that of the corresponding sequential detector, as will be seen in Section IV.
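The sampling policy above is easy to exercise numerically. The following Monte Carlo sketch runs the MLGDS procedure for the Gaussian antipodal example treated in Section III; the parameter values ($N_0 = 25$, $\theta/\sigma = 0.2$, thresholds $A = N_0\theta$, $B = -A$) are illustrative choices of ours, not taken from the paper.

```python
import random

# Monte Carlo sketch of the MLGDS procedure (illustrative parameters).
def mlgds_trial(theta, sigma, N0, A, B, under_k, rng):
    """Run one MLGDS test; return (decision, total samples used)."""
    mean = theta if under_k else -theta   # antipodal-signal model
    n = 0
    while True:
        t = sum(rng.gauss(mean, sigma) for _ in range(N0))  # linear statistic
        n += N0
        if t > A:
            return "K", n
        if t < B:
            return "H", n
        # neither threshold crossed: discard the package, go to the next stage

rng = random.Random(0)
N0, theta, sigma = 25, 0.2, 1.0
A = N0 * theta                      # simplified choice A = N0*theta, B = -A
trials = [mlgds_trial(theta, sigma, N0, A, -A, False, rng) for _ in range(10000)]
alpha_hat = sum(d == "K" for d, _ in trials) / len(trials)   # empirical Type I error
asn_hat = sum(n for _, n in trials) / len(trials)            # empirical ASN
```

With these numbers the per-stage probabilities are $p = 1 - \Phi(2) \approx 0.023$ and $q = 0.5$, so the geometric-stage analysis of Section II predicts a level near 0.044 and an ASN near 48; the empirical estimates should land close to those values.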

II. MLGDS DETECTION PROCEDURE - DESCRIPTION AND ANALYSIS

For testing a simple hypothesis against a simple location alternative based on $n$ independent and identically distributed samples, we have the model

$$H: X_i = N_i + \theta_0 \qquad \text{versus} \qquad K: X_i = N_i + \theta_1$$

where $\theta_1 > \theta_0$. The MLGDS test procedure is defined as follows: at the $n$th stage, using the $N_0$ samples of that stage, form a test statistic $T_{N_0}(\mathbf{X}_n)$ where

$$\mathbf{X}_n = \left(X_{(n-1)N_0+1}, X_{(n-1)N_0+2}, \ldots, X_{nN_0}\right)$$

and perform the following:

$$T_{N_0}(\mathbf{X}_n) \begin{cases} > A & \Rightarrow \text{decide } K, \\ < B & \Rightarrow \text{decide } H, \\ \text{otherwise} & \Rightarrow \text{proceed to stage } n + 1. \end{cases}$$

The values for $N_0$, $A$, and $B$ are predetermined so as to achieve the desired test level $\alpha$ and power $\beta$ and to satisfy some optimal or problem-simplifying conditions.

Let $p \triangleq P\{T_{N_0} > A \mid H\}$, $q \triangleq P\{T_{N_0} < B \mid H\}$, and $r = 1 - p - q$. In the same way, let $p' \triangleq P\{T_{N_0} > A \mid K\}$, $q' \triangleq P\{T_{N_0} < B \mid K\}$, and $r' = 1 - p' - q'$.

Then, when H is true, the MLGDS detection procedure will decide K with probability $p$ at the first stage, $rp$ at the second stage, $r^2p$ at the third stage, and so on. Therefore the probability of a Type I error is given by
$$\alpha = p + rp + r^2p + \cdots = p/(1 - r). \qquad (1)$$
Also, under H, the procedure will terminate at the first stage with probability $(1 - r)$, at the second stage with probability $r(1 - r)$, at the third stage with probability $r^2(1 - r)$, and so on. Thus the average sample number (ASN) under H is given by
$$E_H(N) = N_0 \sum_{k=1}^{\infty} k r^{k-1}(1 - r) = N_0/(1 - r). \qquad (2)$$

Similarly, when K is true, we can obtain the power and the ASN as
$$\beta = p'/(1 - r') \qquad (3)$$
and
$$E_K(N) = N_0/(1 - r'), \qquad (4)$$
respectively.
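The error probabilities and ASNs in (1)-(4) follow directly from the per-stage probabilities. A minimal sketch (the notation is ours; the primed quantities are passed as `p_k`, `q_k`):

```python
def mlgds_metrics(p, q, p_k, q_k, N0):
    """Level, power, and ASNs of the MLGDS test from per-stage probabilities.

    p, q: P{T > A | H}, P{T < B | H}; p_k, q_k: the same under K; N0: package size.
    """
    r_h = 1.0 - p - q            # P{continue | H}
    r_k = 1.0 - p_k - q_k        # P{continue | K}
    alpha = p / (1.0 - r_h)      # eq. (1): geometric series p(1 + r + r^2 + ...)
    beta = p_k / (1.0 - r_k)     # eq. (3)
    asn_h = N0 / (1.0 - r_h)     # eq. (2): N0 times the expected number of stages
    asn_k = N0 / (1.0 - r_k)     # eq. (4)
    return alpha, beta, asn_h, asn_k
```

Because the stage count is geometric, everything reduces to one division per quantity; this is the "same degree of complexity as an FSS detector" claim made above in concrete form.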

Before the test procedure can be executed, we must select the package size $N_0$ and the test thresholds $A$ and $B$. Clearly, the best choice for these quantities is one which not only satisfies (1) and (3), but also minimizes the ASNs. However, solving an optimization problem of this kind is not straightforward and depends on the actual noise distribution. In the following, we will investigate a Gaussian example through which we can attain an understanding of the simplicity and the performance of the proposed procedure.

    III. AN MLGDS LINEAR DETECTOR

    Let us use the MLGDS test procedure to treat the detection of known antipodal signals in additive white Gaussian noise; that is, consider the hypothesis pair

$$H: X_i = N_i - \theta, \qquad \text{versus} \qquad K: X_i = N_i + \theta, \qquad \theta > 0,$$

where the $N_i$ are normal and independent with mean 0 and variance $\sigma^2$. It is well known that the Neyman-Pearson optimal detector for this problem is the linear


    detector. Therefore, our test scheme will be based on the linear test statistic

$$T_{N_0} = \sum_{i=(n-1)N_0+1}^{nN_0} X_i.$$

It follows that $T_{N_0} \sim N(-N_0\theta,\, N_0\sigma^2)$ under H and $T_{N_0} \sim N(N_0\theta,\, N_0\sigma^2)$ under K and that

$$p = 1 - \Phi\!\left(\frac{A + N_0\theta}{\sqrt{N_0}\,\sigma}\right) \qquad (5)$$
and
$$q = \Phi\!\left(\frac{B + N_0\theta}{\sqrt{N_0}\,\sigma}\right), \qquad (6)$$
where $\Phi(\cdot)$ is the unit normal cumulative distribution function (cdf).

Suppose $\alpha = 1 - \beta$ so that $B = -A$, $p = q'$, and $q = p'$ by symmetry. For the moment, let us pick $A = N_0\theta$ so that $q = \Phi(0) = 0.5$. Then, it follows from (1) that
$$\alpha = \frac{1 - \Phi\!\left(2\sqrt{N_0}\,\theta/\sigma\right)}{1 - \Phi\!\left(2\sqrt{N_0}\,\theta/\sigma\right) + 0.5}, \qquad (7)$$
which implies
$$N_0 = \left[\frac{\sigma}{2\theta}\,\Phi^{-1}\!\left(1 - \frac{\alpha}{2(1 - \alpha)}\right)\right]^2. \qquad (8)$$
Since $p/q = \alpha/(1 - \alpha)$, we have $p + q = [2(1 - \alpha)]^{-1}$ and
$$E_H(N) = E_K(N) \triangleq E(N) = \frac{1}{2}\left[\frac{\sigma}{\theta}\,\Phi^{-1}\!\left(1 - \frac{\alpha}{2(1 - \alpha)}\right)\right]^2 (1 - \alpha). \qquad (9)$$

For the FSS linear detector, the required sample size is given by
$$N_F = \left[\frac{\sigma}{\theta}\,\Phi^{-1}(1 - \alpha)\right]^2. \qquad (10)$$
Thus, the relative efficiency (RE) of this MLGDS linear detector (Detector 2) compared to the FSS linear detector (Detector 1) is given by
$$\mathrm{RE}_{2,1} \triangleq \frac{N_F}{E(N)} = \frac{2}{1 - \alpha}\left[\frac{\Phi^{-1}(1 - \alpha)}{\Phi^{-1}\!\left(1 - \frac{\alpha}{2(1 - \alpha)}\right)}\right]^2. \qquad (11)$$
A plot of $\mathrm{RE}_{2,1}$ versus $\alpha = 1 - \beta$ is shown in Fig. 1. It can be seen that $\mathrm{RE}_{2,1}$ approaches 2 when $\alpha$ becomes very small. Indeed, using the continuity and strict monotonicity of $\Phi^{-1}(\cdot)$, it is easy to verify that the asymptotic relative efficiency (ARE) is given by
$$\mathrm{ARE}_{2,1} \triangleq \lim_{\alpha = 1 - \beta \to 0} \mathrm{RE}_{2,1} = 2. \qquad (12)$$

Although this relation is derived for Gaussian noise and the linear detector, it holds under all noise distributions and test statistics for which the central limit theorem is applicable.
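The simplified design above can be computed directly. The sketch below implements the reconstructed formulas for $N_0$, $E(N)$, $N_F$, and $\mathrm{RE}_{2,1}$; the value $\theta/\sigma = 0.2$ is an illustrative choice of ours (it matches the signal-to-noise ratio used later in Table I).

```python
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf   # unit normal quantile function

def simplified_design(alpha, theta_over_sigma):
    """Package size, ASN, FSS size, and RE for the simplified A = N0*theta design."""
    p = alpha / (2.0 * (1.0 - alpha))     # from alpha = p/(p + 1/2) with q = 1/2
    x = Phi_inv(1.0 - p)                  # x = 2 sqrt(N0) theta/sigma
    N0 = (x / (2.0 * theta_over_sigma)) ** 2
    EN = 2.0 * N0 * (1.0 - alpha)         # E(N) = N0/(p + q)
    NF = (Phi_inv(1.0 - alpha) / theta_over_sigma) ** 2   # FSS sample size
    return N0, EN, NF, NF / EN

N0, EN, NF, RE = simplified_design(1e-6, 0.2)
```

As $\alpha$ shrinks, the returned relative efficiency climbs toward the asymptotic value 2, while the MLGDS ASN stays well below the FSS sample size.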

Fig. 1. Relative efficiency of the simplified MLGDS linear detector (Detector 2) relative to the FSS linear detector (Detector 1), plotted as $\mathrm{RE}_{2,1}$ versus $\alpha$ (Gaussian noise).

    IV. OPTIMIZATION OF MLGDS DETECTION PROCEDURE

Although we have shown by an example that an MLGDS detector can be asymptotically twice as efficient as the corresponding FSS detector, it must be noted that such a performance evaluation is too conservative. The test thresholds and the package size were chosen so that $q = p' = 1/2$ to facilitate the analysis. To achieve a minimum average sample number for given error probabilities, the selection of these parameters is not arbitrary. Since the objective is to save samples over FSS detection, the package size $N_0$ must be considerably smaller than the sample size $N_F$ required by the corresponding FSS procedure. Based on (12), we can predict that the optimal package size should be smaller than $N_F/2$ for the Gaussian case when $\alpha = 1 - \beta$. On the other hand, as illustrated in Section I, the package size cannot be too small; otherwise, the loss in efficiency resulting from discarding samples may be significant. For example, in [12], a statistical procedure that is closely related to the case $N_0 = 1$ was used to treat binary transmission problems. As indicated in [13], such a procedure is very inefficient when compared to Wald's sequential method and is inappropriate for signal detection.

With the probabilities of error ($\alpha$ and $1 - \beta$) prespecified, we will select the package size $N_0$ and the test thresholds $A$ and $B$ so that the resulting ASN is a minimum. In practice, we need only optimize one of these parameters because the other two will then be determined.

Assume that the test statistic is of the form
$$T_{N_0} = \sum_{i=1}^{N_0} z_i \qquad (13)$$
for which the central limit theorem applies for large $N_0$, where the $z_i$ are independent identically distributed (i.i.d.) with mean


$E(z_i) = \mu$ under H and $E(z_i) = \mu + \theta$ under K and variance $\sigma_z^2 = \sigma^2$ under both H and K. Then when $N_0$ is large, we have
$$\left[(T_{N_0} - N_0\mu)/\sqrt{N_0}\,\sigma\right] \to N(0, 1) \quad \text{in distribution under } H \qquad (14)$$
and
$$\left[(T_{N_0} - N_0(\mu + \theta))/\sqrt{N_0}\,\sigma\right] \to N(0, 1) \quad \text{in distribution under } K, \qquad (15)$$
where $N(0, 1)$ denotes the unit Gaussian distribution.

Assume $\alpha = 1 - \beta$ so that, by symmetry, the two test thresholds for the MLGDS test are symmetric about $N_0(\mu + \theta/2)$ and the ASNs under H and under K are equal. In this case we have

$$B = N_0(\mu + \theta/2) - C,$$
$$A = N_0(\mu + \theta/2) + C,$$
and
$$E_H(N) = E_K(N) \triangleq E(N), \qquad (16)$$
where $C$ is a positive constant that determines the distance between the two thresholds. Define a new parameter $k$ so that $C = kN_0\theta$; then $B$ and $A$ can be expressed respectively as
$$B = N_0\mu + (1/2 - k)N_0\theta$$
and
$$A = N_0\mu + (1/2 + k)N_0\theta. \qquad (17)$$
Then, the optimization problem becomes that of finding the $N_0$ and $k$ that minimize $E(N)$. From (14) and (17), it follows that, for large $N_0$,
$$(1/2 + k)\sqrt{N_0}\,\theta/\sigma \simeq \Phi^{-1}(1 - p) \qquad (18)$$
and
$$N_0 \simeq \frac{\left[\Phi^{-1}(1 - p)\right]^2}{(1/2 + k)^2\theta^2/\sigma^2}. \qquad \mathrm{(18a)}$$
Similarly, we have for large $N_0$,
$$(1/2 - k)\sqrt{N_0}\,\theta/\sigma \simeq \Phi^{-1}(q), \qquad (19)$$
$$N_0 = (\theta/\sigma)^{-2}\left[\Phi^{-1}(q) - \Phi^{-1}(p)\right]^2, \qquad (20)$$
and
$$k = \frac{\Phi^{-1}(p) + \Phi^{-1}(q)}{2\left[\Phi^{-1}(p) - \Phi^{-1}(q)\right]}. \qquad (21)$$

Also, since $\alpha = p/(1 - r) = p/(p + q)$, we can write
$$q = (1 - \alpha)p/\alpha. \qquad (22)$$
The ASN can thus be expressed as
$$E(N) = N_0/(1 - r) = N_0/(p + q) = (\theta/\sigma)^{-2}\left\{\Phi^{-1}\!\left[(1 - \alpha)p/\alpha\right] - \Phi^{-1}(p)\right\}^2 \frac{\alpha}{p} \triangleq \mathrm{ASN}(p). \qquad (23)$$
Therefore, the optimization problem is essentially a univariate one: find $p^* \in S$ such that $\mathrm{ASN}(p^*) =$


TABLE I
COMPARISON BETWEEN OPTIMIZED MLGDS DETECTOR AND FSS DETECTOR BASED ON NUMBER OF SAMPLES REQUIRED TO ACHIEVE THE SAME ERROR PROBABILITIES*

α (= 1 − β)   MLGDS E(N)   FSS N_F   % saving by MLGDS   % saving by sequential detector   E(N)/N_0*
10^-2           312.0        541          41                   58                           1.50
10^-3           492.1        954          48                   63                           1.37
10^-4           664.2       1383          52                   66                           1.30
10^-5           830.7       1818          54                   68                           1.26
10^-6           993.5       2259          56                   69                           1.23
10^-7          1153.4       2703          57                   70                           1.20
10^-8          1310.8       3149          58                   70                           1.19
10^-9          1466.2       3597          59                   71                           1.17
10^-10         1619.8       4046          60                   71                           1.16
10^-11         1772.0       4497          61                   71                           1.15
10^-12         1922.7       4948          61                   71                           1.14
10^-13         2072.3       5400          62                   72                           1.13
10^-14         2220.8       5853          62                   72                           1.13
10^-15         2368.4       6306          62                   72                           1.13
10^-16         2515.1       6760          63                   72                           1.12
10^-17         2660.9       7214          63                   72                           1.12
10^-18         2806.0       7669          63                   72                           1.11

*The percent saving of samples by the corresponding sequential detector is also included for comparison (θ/σ = 0.2).

$\min_{p \in S} \mathrm{ASN}(p)$, where $S$ is defined as the set $\{p \mid 0 < p < \alpha\} \cap \{p \mid N_0 \text{ given in (20) is a positive integer}\}$. We require $p < \alpha$ since $p = (1 - r)\alpha < \alpha$.

Upon obtaining $p^*$, the optimal $q$, $k$, $N_0$, $A$, and $B$ can be calculated through (22), (21), (20), and (17), stepwise. Let $q^*$, $k^*$, $N_0^*$, $A^*$, and $B^*$ denote these optimal parameters. In what follows, we will find $p^*$ by means of a simple numerical approach. We assume that $N_0$ is large enough so that the above asymptotic approximations are accurate, and we will thus treat (19), (20), (21), and (23) as equalities. Based on this assumption the following two facts hold when $\alpha(= 1 - \beta) < 1 - \sqrt{2}/2$. Proofs are provided in Appendix A.

Fact I: $N_0$ is strictly increasing with $p$ (and thus a one-to-one correspondence exists) for $\alpha < 1 - \sqrt{2}/2$.

Fact II: $N_0' < N_0^* < 2N_0'(1 - \alpha)$ for $\alpha < 1/2$, where $N_0'$ is the package size associated with the case $k = 1/2$, which can be calculated from (20), (21), and (22).

It follows from these two facts that the set $S$ is finite and contains fewer than $N_0'$ elements. Therefore, for each $N_0$ between $N_0'$ and $2N_0'(1 - \alpha)$, we may solve for $p$ from (20) and then calculate $\mathrm{ASN}(p)$ from (23). The optimal parameters $p^*$ and $N_0^*$ can thus be obtained.
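The univariate search described above is straightforward to carry out numerically. The sketch below minimizes the reconstructed $\mathrm{ASN}(p)$ of (23) over a grid of $p \in (0, \alpha)$; the grid resolution and $\theta/\sigma = 0.2$ (the value used in Table I) are our illustrative choices, and the integer constraint on $N_0$ is ignored for simplicity.

```python
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf

def asn(p, alpha, theta_over_sigma):
    """ASN(p) of eq. (23) together with the package size N0 of eq. (20)."""
    q = (1.0 - alpha) * p / alpha                              # eq. (22)
    N0 = ((Phi_inv(q) - Phi_inv(p)) / theta_over_sigma) ** 2   # eq. (20)
    return N0 * alpha / p, N0       # E(N) = N0/(p + q) = N0 * alpha / p

def optimize(alpha, theta_over_sigma, grid=2000):
    """Grid search for p* in (0, alpha) minimizing ASN(p)."""
    best = None
    for i in range(1, grid):
        p = alpha * i / grid        # candidate p; note q = (1-alpha)p/alpha < 1
        EN, N0 = asn(p, alpha, theta_over_sigma)
        if best is None or EN < best[0]:
            best = (EN, N0, p)
    return best

EN, N0, p_star = optimize(1e-6, 0.2)   # EN comes out near the 993.5 of Table I
```

The returned $E(N)/N_0$ is the expected number of stages, which for these values lands in the 1.2-1.3 range reported in the last column of Table I.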

Table I is a comparison among FSS detection, sequential detection, and optimized MLGDS detection in terms of sample efficiency. It is observed that MLGDS detection is much more efficient than FSS detection. The expected number of stages (or of packages of samples), $\mathrm{ASN}(p^*)/N_0^*$, required by the MLGDS procedure to reach a decision is given in the last column of the table. We see that fewer than 1.5 stages are expected, so that fewer than two comparisons are expected in the MLGDS decision process.

Although those optimal parameters vary with the probability of error specified, numerical computations show that the sample-efficiency is not affected appreciably if, instead of $N_0^*$ and $k^*$, we choose either $k = 0.36$ or $N_0 = N_F/3$


when the probability of error lies in the range of $10^{-2}$ to $10^{-18}$, where $N_F$ is the sample size required by the corresponding FSS procedure. These approximations to the optimal parameters are very useful when the Gaussian approximations of (14) and (15) hold.

    Concerning the asymptotic performance of the MLGDS detector relative to the FSS detector we have the following theorem.

Theorem 1: Assume that the test statistic is asymptotically Gaussian (in the sense of (14) and (15)), that $\alpha = 1 - \beta$, and that, for given $\alpha$, the package size and the test thresholds for the MLGDS detector are chosen so as to minimize the ASN. Then, as $\alpha \to 0$, the ARE of an optimum MLGDS detector relative to the corresponding FSS detector equals 4.

The proof of this theorem is given in Appendix B. Under the same assumptions, the ARE of Wald's sequential test relative to the FSS test is also equal to 4 (see [14]). We conclude that an MLGDS detector can be asymptotically as efficient as the corresponding sequential detector.

V. TRUNCATION OF THE MLGDS DETECTOR

As in sequential detection, the sample size $N$ of the MLGDS detector is a random variable; thus it may happen that the sample size is undesirably large, and so a truncation scheme is indispensable. The truncation problem associated with Wald's sequential procedure is complicated [15]-[17] since it involves truncating a random walk. In contrast, since the sample size for the MLGDS detector is geometrically distributed, the truncation problem is simpler.

Define Type A truncation as follows: perform the MLGDS test until at most $j$ stages are executed. If the decision between H and K is still pending, take $N_F$ new samples and perform an FSS test.

The test level and power associated with the Type A truncated MLGDS detector are thus given by
$$\alpha_A = (p + rp + \cdots + r^{j-1}p) + r^j\alpha = \alpha \qquad (24)$$
and
$$\beta_A = (p' + r'p' + \cdots + (r')^{j-1}p') + (r')^j\beta = \beta. \qquad (25)$$

We conclude that the test level and power are invariant to Type A truncation. On the other hand, Type A truncation will affect the ASN. Since the probability with which the Type A truncated MLGDS test terminates at the $k$th ($k \le j$) stage is $r^{k-1}(1 - r)$ under H, and since the test must terminate no later than the $(j + 1)$st stage, the probability that the terminating stage has to be executed is given by
$$P_H\{M_A = j + 1\} = 1 - \sum_{k=1}^{j} r^{k-1}(1 - r) = r^j. \qquad (26)$$

Similarly, we have under K that
$$P_K\{M_A = j + 1\} = 1 - \sum_{k=1}^{j} (r')^{k-1}(1 - r') = (r')^j. \qquad (27)$$

TABLE II
FRACTIONAL INCREMENTS IN ASN CAUSED BY TYPE A TRUNCATION ON APPROXIMATELY OPTIMIZED (k = 0.36) MLGDS LINEAR DETECTOR (α = 1 − β, GAUSSIAN NOISE)

α (= 1 − β):   10^-2   10^-3   10^-4   10^-5   10^-6   10^-7
Δ(N)_A:        0.025   0.022   0.017   0.013   0.010   0.008

α (= 1 − β):   10^-8   10^-9   10^-10  10^-11  10^-12
Δ(N)_A:        0.006   0.004   0.003   0.002   0.001

Therefore, the ASNs required by Type A truncated MLGDS detection can be expressed as follows. Under H we have
$$E_H(N)_A = N_0\sum_{k=1}^{j} k r^{k-1}(1 - r) + r^j(jN_0 + N_F), \qquad (28)$$
and the normalized increase in ASN is
$$\Delta_H(N)_A \triangleq \frac{E_H(N)_A - E_H(N)}{E_H(N)} = r^j\left[(1 - r)(N_F/N_0) - 1\right]. \qquad (29)$$
A similar result can be obtained under K by replacing $r$ by $r'$ in (29).
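Equation (29) makes the truncation penalty a one-line computation. A sketch, with illustrative values for $r$, $N_0$, and $N_F$ chosen near the $\alpha = 10^{-6}$ row of Table I:

```python
def type_a_increment(r, N0, NF, j):
    """Fractional ASN increase Delta_H(N)_A of eq. (29) for truncation after j stages."""
    return r ** j * ((1.0 - r) * NF / N0 - 1.0)

# Near the alpha = 1e-6 design (r about 0.19, N0 about 800, NF = 2259), with j = 3:
delta = type_a_increment(0.19, 800, 2259, 3)
```

The result is on the order of 0.01, in line with the entries of Table II, and the $r^j$ factor drives the penalty to zero geometrically as $j$ grows.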

For $j = 3$, Table II lists numerical values for $\Delta(N)_A$ that result from applying Type A truncation to the MLGDS linear detector with $k = 0.36$. These quantities clearly exhibit the negligible change in ASN for Type A truncation. Further verification of this phenomenon is given in the following result.

    Theorem 2: Under the same assumptions as made in Theorem 1,

$$\lim_{\alpha = 1 - \beta \to 0} \Delta(N)_A = 0.$$

    The proof of this theorem is given in Appendix C. For the Type A truncation scheme, the maximum sample

size ($jN_0 + N_F$) seems unnecessarily large. To avoid this defect an alternative scheme, called Type B truncation, is proposed as follows: let $j$ denote the smallest integer greater than the quantity $N_F/N_0$. If the decision between the hypothesis and the alternative has not been made after $j$ stages of the MLGDS detection procedure have been executed, stop sampling, sum up those $j$ test statistics already computed, and perform a one-threshold test at the $(j + 1)$st stage.

We see immediately that fewer than $N_F + N_0$ samples are needed for Type B truncation, but memory is required to record the sum of the test statistics. Involving memory is the major disadvantage of Type B truncation.

    To investigate the performance of Type B truncation, let us start by observing the (j + 1)st stage. This terminating stage is essentially a conditional FSS test: that is, given

$B < T_{N_0}(\mathbf{X}_k) < A$ for all $1 \le k \le j$, it decides K if $\sum_{k=1}^{j} T_{N_0}(\mathbf{X}_k) > T$ and decides H otherwise,


TABLE III
PROBABILITY OF ERROR AND ITS UPPER BOUND FOR A TYPE B TRUNCATED MLGDS LINEAR DETECTOR

α = 1 − β       α_B             bound on α_B
1.00 × 10^-2    1.27 × 10^-2    1.97 × 10^-2
1.00 × 10^-3    1.47 × 10^-3    1.98 × 10^-3
1.00 × 10^-4    1.49 × 10^-4    1.98 × 10^-4
1.00 × 10^-5    1.51 × 10^-5    1.99 × 10^-5
1.00 × 10^-6    1.52 × 10^-6    1.99 × 10^-6
1.00 × 10^-7    1.51 × 10^-7    1.99 × 10^-7
1.00 × 10^-8    1.50 × 10^-8    2.00 × 10^-8
1.00 × 10^-9    1.48 × 10^-9    2.00 × 10^-9
1.00 × 10^-10   1.45 × 10^-10   2.00 × 10^-10
1.00 × 10^-11   1.42 × 10^-11   2.00 × 10^-11
1.00 × 10^-12   1.40 × 10^-12   2.00 × 10^-12

where the threshold $T$ must be chosen appropriately. Therefore, the test level and power associated with this terminating stage are given, respectively, by
$$\alpha_{j+1} = P\Bigl\{\textstyle\sum_{k=1}^{j} T_{N_0}(\mathbf{X}_k) > T \,\Big|\, B < T_{N_0}(\mathbf{X}_k) < A \text{ for all } 1 \le k \le j,\ H\Bigr\} \qquad (30)$$
and
$$\beta_{j+1} = P\Bigl\{\textstyle\sum_{k=1}^{j} T_{N_0}(\mathbf{X}_k) > T \,\Big|\, B < T_{N_0}(\mathbf{X}_k) < A \text{ for all } 1 \le k \le j,\ K\Bigr\}. \qquad (31)$$
The overall level and power of the Type B truncated detector are then
$$\alpha_B = (p + rp + \cdots + r^{j-1}p) + r^j\alpha_{j+1} \qquad (32)$$
and
$$\beta_B = (p' + r'p' + \cdots + (r')^{j-1}p') + (r')^j\beta_{j+1}, \qquad (33)$$
and it is shown in Appendix D that
$$\alpha_B \le (2 - r^j)\alpha \qquad (34)$$
and
$$\beta_B \ge \beta - (1 - \beta)\bigl[1 - (r')^j\bigr]. \qquad (35)$$
Table III lists, for the Gaussian linear detector, the resulting level $\alpha_B$ together with the upper bound (34).

APPENDIX A

Proofs of Facts I and II

Differentiating (20) with $q = (1 - \alpha)p/\alpha$ gives
$$\frac{dN_0}{dp} = 2(\sigma/\theta)^2\left[\Phi^{-1}(q) - \Phi^{-1}(p)\right]\left\{\frac{1 - \alpha}{\alpha\,\phi\left[\Phi^{-1}(q)\right]} - \frac{1}{\phi\left[\Phi^{-1}(p)\right]}\right\},$$
where $\phi(\cdot)$ is the unit normal density. Since $q > p$, we have $\Phi^{-1}(q) - \Phi^{-1}(p) > 0$.

Also, by employing the inequality [18]
$$\left|\Phi^{-1}(p)\right| \le \left[-2\ln\sqrt{p(1 - p)}\,\right]^{1/2},$$
we obtain, after some algebra,
$$\frac{1 - \alpha}{\alpha\,\phi\left[\Phi^{-1}(q)\right]} - \frac{1}{\phi\left[\Phi^{-1}(p)\right]} \ge \frac{\sqrt{2\pi}}{2(1 - \alpha)}\left[(1 - \alpha) - \sqrt{1/2}\,\right] > 0 \qquad \text{if } \alpha < 1 - \sqrt{2}/2.$$

Thus, $dN_0/dp > 0$ for $\alpha < 1 - \sqrt{2}/2$ and Fact I follows.

When $k = 1/2$, we have $q = 1/2$ from (19) and $p = \alpha/[2(1 - \alpha)] \triangleq p_0$ by (22). Then, it follows that $E(N) = N_0'/(1 - r) = N_0'/(p_0 + 1/2) = 2N_0'(1 - \alpha)$. Therefore, $2N_0'(1 - \alpha)$ is the ASN associated with the case where $p = \alpha/[2(1 - \alpha)]$ (i.e., $k = 1/2$). Thus,
$$N_0^* < N_0^*/(p^* + q^*) = \mathrm{ASN}(p^*) < 2N_0'(1 - \alpha).$$


On the other hand, evaluating $(d/dp)\,\mathrm{ASN}(p)$ at $p = p_0$ shows that its sign is determined by a term involving $\phi[\Phi^{-1}(p_0)]$ and $\Phi^{-1}(p_0)$. For $\alpha < 1/2$, we have $p_0 < 1/2$ and $\Phi^{-1}(p_0) < 0$, and, since $\phi(0)$ is the maximum of the unit normal density $\phi(\cdot)$, this term is negative. We conclude that, for $\alpha < 1/2$, we have $(d/dp)\,\mathrm{ASN}(p)\big|_{p=p_0} < 0$ and thus $\mathrm{ASN}(p)$ is decreasing at $p = p_0$. Therefore, $p^* > p_0$ and thus $N_0^* > N_0'$ by Fact I. □

    APPENDIX B

Proof of Theorem 1

We proved in Appendix A that $p^* > \alpha/[2(1 - \alpha)]$ or, equivalently, $q^* > 1/2$. It thus follows from (19) that $k^* < 1/2$. Since we are concerned with optimum MLGDS detectors, the parameter $k$ will be constrained to $0 < k < 1/2$. Note that choosing the optimum package size and optimum test thresholds is achieved by choosing an optimum $k$ (i.e., $k^*$).

Since $p = (1 - r)\alpha < \alpha$, we have $\Phi^{-1}(1 - p) > \Phi^{-1}(1 - \alpha)$. Therefore, it follows from (18) that, as $\alpha$ approaches zero, $N_0$ will approach infinity. We thus obtain, for $k < 1/2$, $\lim_{\alpha\to0} q = \lim_{N_0\to\infty} q = 1$ and thus
$$\lim_{\alpha\to0} r = \lim_{\alpha\to0} p = 0$$
and
$$\lim_{\alpha\to0} \frac{E(N)}{N_0} = \lim_{\alpha\to0} \frac{1}{1 - r} = 1. \qquad \mathrm{(B1)}$$
Also, note that
$$\lim_{\alpha\to0} \frac{\Phi^{-1}(1 - p)}{\Phi^{-1}(1 - \alpha)} = \lim_{\alpha\to0} \frac{\Phi^{-1}(1 - \alpha + r\alpha)}{\Phi^{-1}(1 - \alpha)} = 1. \qquad \mathrm{(B2)}$$
Combining (18a), (B1), and (B2) yields
$$\lim_{\alpha\to0} \frac{E(N)}{\left[\Phi^{-1}(1 - \alpha)\right]^2} = \frac{1}{(1/2 + k)^2\theta^2/\sigma^2} \ge (\theta/\sigma)^{-2},$$

where the equality holds when $k = (1/2)^-$. For the corresponding FSS detection procedure, it is easy to verify that
$$\lim_{\alpha\to0} \frac{N_F}{\left[\Phi^{-1}(1 - \alpha)\right]^2} = (2\sigma/\theta)^2.$$
Therefore, we obtain that
$$\mathrm{ARE}_{2,1} = \lim_{\alpha = 1 - \beta \to 0} \frac{N_F}{E(N)} = 4(1/2 + k)^2 \le 4,$$
where the equality holds when $k = (1/2)^-$. (This implies that $k^*$ approaches $1/2$ from the left as $\alpha$ approaches 0.) This completes the proof. □

    APPENDIX C

    Proof of Theorem 2

When $\alpha = 1 - \beta$, we have $r = r'$ and $\Delta_H(N)_A = \Delta_K(N)_A \triangleq \Delta(N)_A$. As shown in Appendix B, $\lim_{\alpha\to0} r = 0$ and $\lim_{\alpha\to0} N_F/N_0 = 4$. Therefore
$$\lim_{\alpha\to0} \Delta(N)_A = \lim_{\alpha\to0} r^j\left[(1 - r)(N_F/N_0) - 1\right] = 0. \qquad \square$$

    APPENDIX D

    Proofs of (34) and (35)

Let $U$, $V$, and $W$ denote the following three events, respectively: $\sum_{k=1}^{j} T_{N_0}(\mathbf{X}_k) > T$; $B < T_{N_0}(\mathbf{X}_k) < A$ for all $1 \le k \le j$; and H is true. Then we have, from (30),
$$\alpha_{j+1} = P(U \mid V, W) = \frac{P(U, V \mid W)}{P(V \mid W)} \le \frac{P(U \mid W)}{P(V \mid W)}.$$
But $P(V \mid W) = r^j$ and $P(U \mid W) \le \alpha$, since $P(U \mid W)$ is the probability of a Type I error associated with a one-threshold test whose sample size ($jN_0$) is greater than $N_F$. Therefore, we have $\alpha_{j+1} \le \alpha/r^j$. On substituting this inequality into (32), we obtain
$$\alpha_B \le p + rp + \cdots + r^{j-1}p + \alpha = (2 - r^j)\alpha.$$
Similarly, we can prove that
$$1 - \beta_{j+1} \le (1 - \beta)/(r')^j.$$
Thus, we have
$$\beta_{j+1} \ge 1 - (1 - \beta)/(r')^j = \beta - (1 - \beta)\left[1 - (r')^j\right]/(r')^j.$$
On substituting this inequality into (33), we obtain
$$\beta_B \ge \beta - (1 - \beta)\left[1 - (r')^j\right]. \qquad \square$$

    REFERENCES

[1] A. Wald, Sequential Analysis. New York: Wiley, 1947.
[2] Z. Govindarajulu, Sequential Statistical Procedures. New York: Academic, 1975.
[3] G. B. Wetherill, Sequential Methods in Statistics. New York: Wiley, 1966.
[4] B. K. Ghosh, Sequential Tests of Statistical Hypotheses. Reading, MA: Addison-Wesley, 1970.
[5] J. J. Bussgang and D. Middleton, "Optimum sequential detection of signals in noise," IRE Trans. Inform. Theory, vol. IT-1, pp. 5-18, Dec. 1955.
[6] H. Blasbalg, "Experimental results in sequential detection," IRE Trans. Inform. Theory, vol. IT-5, pp. 41-51, June 1959.
[7] J. J. Bussgang, "Sequential methods in radar detection," Proc. IEEE, vol. 58, no. 5, pp. 731-743, May 1970.
[8] S. Tantaratana and J. B. Thomas, "Relative efficiency of the sequential probability ratio test in signal detection," IEEE Trans. Inform. Theory, vol. IT-24, no. 1, pp. 22-31, Jan. 1978.
[9] W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 1. New York: Wiley, 1968.
[10] J. Kemperman, The General One-Dimensional Random Walk with Absorbing Barriers - With Applications to Sequential Analysis. 's-Gravenhage: Excelsior, 1950.
[11] J. G. Proakis, "Exact distribution functions of test length for sequential processors with discrete input data," IEEE Trans. Inform. Theory, vol. IT-9, pp. 182-190, Mar. 1963.


[12] B. Harris, A. Hauptschein, and L. S. Schwartz, "Optimum decision feedback systems," IRE Nat. Convention Rec., part 2, pp. 3-10, 1957.
[13] S. S. L. Chang, B. Harris, and K. C. Morgan, "Cumulative binary decision feedback systems," in Proc. Nat. Electronics Conf., pp. 1045-1056, 1958.
[14] H. Chernoff, "Large sample theory - parametric case," Ann. Math. Statist., vol. 27, pp. 1-22, 1956.
[15] J. Bussgang and M. Marcus, "Truncated sequential hypothesis tests," IEEE Trans. Inform. Theory, vol. IT-13, no. 3, pp. 512-516, July 1967.
[16] S. Tantaratana and J. B. Thomas, "Truncated sequential probability ratio test," Inform. Sci., vol. 13, pp. 283-300, 1977.
[17] T. W. Anderson, "A modification of the sequential probability ratio test to reduce the sample size," Ann. Math. Statist., vol. 31, pp. 165-197, 1960.
[18] A. J. Strecok, "On the calculation of the inverse of the error function," Math. Comput., vol. 22, no. 101, pp. 144-158, 1968.

A New Lower Bound for the a-Mean Error of Parameter Transmission over the White Gaussian Channel

MARAT V. BURNASHEV

    Abstract-Upper and lower bounds are derived on the exponential behavior of the a-mean error for parameter transmission over the additive white Gaussian noise channel. The method of proving the lower bound may be used in many information theory and mathematical statistics problems.

    I. DESCRIPTION OF THE PROBLEM: STATEMENT OF MAIN RESULTS

ASSUME that we must transmit some parameter $\theta \in \Theta = [0, 1]$ over a channel with additive white Gaussian noise. For transmission we may use any signal $S_t(\theta)$, $0 \le t \le T$, satisfying only the energy constraint
$$\sup_{\theta}\|S(\theta)\|^2 \le A, \qquad \|S\|^2 = \int_0^T S_t^2\,dt. \qquad (1.1)$$

The transmitted signal $S_t(\theta)$ is disturbed by additive white Gaussian noise with unit intensity. From a mathematical point of view, the observed signal $X_t$ is described by the stochastic differential
$$dX_t = S_t(\theta)\,dt + dW_t, \qquad 0 \le t \le T. \qquad (1.2)$$
Given an estimate $\hat\theta$ of $\theta$ from the observation, the quantity of interest is the $a$-mean error
$$e_a(A) = \inf_{\{S_t(\theta)\},\,\hat\theta}\ \sup_{\theta \in [0,1]} E\bigl|\hat\theta - \theta\bigr|^a, \qquad a > 0. \qquad (1.3)$$

The problem is how to evaluate the function $e_a(A)$ and how to choose the signals $\{S_t(\theta)\}$ and the estimate $\hat\theta$. (The duration of transmission $T$ is not important in the problem statement, and all results depend only on the energy $A$.) It should be noted that in this paper we do not consider any bandwidth constraints on $\{S_t(\theta)\}$. Such investigations will be given in a forthcoming paper.

We are interested only in the asymptotic behavior of $e_a(A)$ when $A \to \infty$. It is clear that
$$e_a(A) = \exp\{-\gamma(a)A\}, \qquad A \to \infty,\ a > 0, \qquad (1.4)$$
and so the problem is how to evaluate the function $\gamma(a)$. Using quantization of the set $\Theta = [0, 1]$ and the set of orthogonal signals $\{S_t(\theta_i)\}$ it is simple to get the following upper bound for $e_a(A)$ [1].

    Theorem 1:

$$e_a(A) \le C(1 + A^p)\exp\{-\gamma_u(a)A\}, \qquad (1.5)$$
where $\gamma(a) \ge \gamma_u(a)$ and $\gamma_u(a)$ is given piecewise, by one expression for $0 < a \le 1$ and another for $a > 1$.

0018-9448/84/0100-0023$01.00 © 1983 IEEE