
IEEE TRANSACTIONS ON COMPUTERS, VOL. C-36, NO. 3, MARCH 1987

Pseudorandom Testing

KENNETH D. WAGNER, CARY K. CHIN, AND EDWARD J. McCLUSKEY, FELLOW, IEEE

Abstract-Algorithmic test generation for high fault coverage is an expensive and time-consuming process. As an alternative, circuits can be tested by applying pseudorandom patterns generated by a linear feedback shift register (LFSR). Although no fault simulation is needed, analysis of pseudorandom testing requires the circuit detectability profile.

Measures of test quality are developed for pseudorandom testing. These include an exact expression and an approximation for the expected fault coverage. The influence of each fault on the expected fault coverage can then be evaluated. Relationships between test confidence, fault coverage, fault detectability, and test length are also examined.

Previous analyses of pseudorandom testing have often used random testing as an approximation. It is shown that the random test model is not in general a good approximation. Finally, analysis of the pseudorandom input vector model is extended to situations where the size of the test pattern generator is not equal to the number of inputs to the circuit.

Index Terms-Detectability profile, fault coverage, pseudorandom testing, random testing, test confidence, test generation, test length.

I. INTRODUCTION

THE use of pseudorandom vectors to test combinational circuits effectively avoids long and complex algorithmic test pattern generation procedures. Only a fault-free circuit simulation is required for the correct circuit output response. An important use of pseudorandom test patterns is in systems with BIST (built-in self-test), which internally generate test vectors. In such systems a custom test set may be placed in a look-up table, making it very costly to store and apply. Also, many test pattern generation programs produce pseudorandom test patterns initially, augmenting them with algorithmically generated test vectors when necessary.

For identical test quality, more pseudorandom test vectors are required than algorithmically generated test vectors. Since pseudorandom test patterns have little development time and cost, the test engineer must balance this saving against the increased test length. For a pseudorandom test to be practical,

Manuscript received February 23, 1986; revised June 26, 1986. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada under their Postgraduate Scholarship program, by Data General Corporation under their Honors Cooperative program, by the MCC, and by the National Science Foundation under Grant DCR-8200129.

K. D. Wagner was with the Center for Reliable Computing, Stanford University, Stanford, CA 94305. He is now with the EDS VLSI Design Rules Control Department, IBM, Poughkeepsie, NY 12602.

C. K. Chin was with the Center for Reliable Computing, Stanford University, Stanford, CA 94305. He is now with Integrated CMOS Systems Corporation, Sunnyvale, CA 94806.

E. J. McCluskey is with the Center for Reliable Computing, Departments of Electrical Engineering and Computer Science, Stanford University, Stanford, CA 94305.

IEEE Log Number 8612057.

the test length must be significantly less than that of an exhaustive test (its upper bound), or the test length will be prohibitive for most circuits. To obtain the exact test length for a desired test quality, or the exact test quality for a given test length, requires fault simulation. This paper shows how to calculate test length and how to estimate measures of test quality for pseudorandom testing, without requiring circuit fault simulation. It compares the variation of test quality in actual pseudorandom tests (using fault simulation) with the test quality estimates.

An LFSR (linear feedback shift register) is the most

common circuit structure used to produce pseudorandom vectors. Although its patterns are deterministically generated and therefore repeatable, the LFSR output sequence possesses some of the properties of random sequences [6].

A (homogeneous) Bernoulli process [15] is generally used

when modeling LFSR pattern generation. This is called random pattern generation; the patterns are used in random testing. Their practicality is discussed in [14]. However, it is probabilistically more accurate to use a nonhomogeneous Bernoulli process to model the LFSR generator. This is called pseudorandom pattern generation, and the patterns are used in pseudorandom testing. This model has been avoided in the past, in part because it was assumed that it would produce similar results, and in part because of the supposed intractability of the analysis. The greater accuracy of the pseudorandom generation model, along with some initial results, is presented in [3].

The test procedure analyzed in this paper is shown in Fig. 1.

In Sections III-VI it will be assumed that m = n, i.e., the pattern generator with m outputs has been matched to the n-input combinational circuit-under-test (CUT). Cases where m ≠ n are also of interest and are considered in Section VII. The single stuck-at fault model is used. There are M possible faults in the CUT.

A total of N = 2^n different test vectors can be applied to an n-input CUT, where the test vectors are produced by either a random or pseudorandom pattern generator. Sections III-V show and interpret the various measures of test quality for pseudorandom testing. These same measures are developed in Section VI for random testing. The pseudorandom test model is always superior to the random test model: it produces more accurate results, its test length estimates are shorter for a selected test quality, and its test quality results are better for a selected test length.

This paper analyzes and discusses several measures of test quality. These include expected fault coverage, test confidence, weighted test confidence, and the probability of 100 percent fault coverage. The most attention is paid to the fault

0018-9340/87/0300-0332$01.00 © 1987 IEEE


Fig. 1. Test procedure: a random or pseudorandom pattern generator (m bits) supplies input vectors to the n-input circuit-under-test (single stuck-at fault model); the output vectors feed an output comparator (explicit comparison), which signals ERROR on a mismatch.

coverage, since it is the standard measure of quality in algorithmic test pattern generation. Test confidence, the probability that a particular fault will be detected in a test sequence, is related to fault coverage through the weighted test confidence measure.

II. FAULT DETECTABILITY

The detectability of a fault is the number of different input vectors that cause a circuit output error when the fault is present [8]. To calculate test length and measures of test quality for pseudorandom (and random) testing, the detectability of every fault in the circuit fault set is needed. To obtain this fault set information, since all faults in a fault equivalence class have identical detectabilities, it is sufficient to know the detectability of one fault chosen from each fault class together with the class sizes. The detectability profile H, expressed as a vector [h_1, h_2, ..., h_N], is constructed from the detectability analysis. Element h_k of the vector is the number of faults in the circuit that have detectability k (for detectable faults, 1 ≤ k ≤ N). The detectability profile of a simple circuit is shown in Fig. 2. An important property of the profile is

  Σ_{k=1}^{N} h_k = M.

The detectability profile of most integrated circuits is verydifficult to obtain. Exhaustive fault simulation is the bruteforce approach. A computationally expensive TPG algorithmlike the D-algorithm [10] can be generalized to find the exactprofile. The profile can also be approximated using probabilis-tic analyses like the Cutting Algorithm [11]. Since the leadingnonzero elements of the profile will be shown to generallyhave the most significance, testability measures can be appliedto the CUT to determine those fault classes requiring exactanalysis.
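As an illustration of the brute-force approach, a minimal sketch (assuming the 2-input AND gate of Fig. 2, with input signals X, Y and output Z) computes the detectability profile by exhaustive fault simulation:

```python
from itertools import product

def good_and(x, y):
    return x & y

def faulty_and(x, y, fault):
    """AND gate with one stuck-at fault; fault = (signal, stuck_value)."""
    sig, v = fault
    if sig == 'X':
        x = v
    if sig == 'Y':
        y = v
    z = x & y
    return v if sig == 'Z' else z

faults = [(s, v) for s in 'XYZ' for v in (0, 1)]  # M = 6 single stuck-at faults
n, N = 2, 4                                       # N = 2^n input vectors

# Detectability k of a fault = number of input vectors whose output differs
# from the fault-free response.
detectability = {
    f: sum(good_and(x, y) != faulty_and(x, y, f)
           for x, y in product((0, 1), repeat=n))
    for f in faults
}

# Profile H: element h_k counts the faults with detectability k, 1 <= k <= N.
H = [sum(1 for d in detectability.values() if d == k) for k in range(1, N + 1)]
print(H)  # -> [5, 0, 1, 0], matching Fig. 2; sum(H) equals M
```

For any larger circuit the same loop runs over 2^n vectors per fault, which is exactly why the exhaustive approach does not scale.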

III. FAULT COVERAGE

The measure used to rate the quality of an algorithmically generated test set is fault coverage, the fraction of potential faults in the circuit that can be detected by applying the test set. [1] has calculated fault coverage and test length for pseudorandom testing using testability analysis and a fast fault grading algorithm.

Expected Fault Coverage

An estimate of the fault coverage of a pseudorandom test sequence is necessary to allow direct comparison with algorithmically generated test sets. The expected fault coverage E[C_L] is the expected number of faults that can be detected in a test of length L, divided by the total number of

[Fig. 2 shows a 2-input AND gate (inputs X, Y; output Z) with its six single stuck-at faults X/0, X/1, Y/0, Y/1, Z/0, Z/1 and the input vectors that detect each. Five faults have detectability k = 1 and one (Z/1) has detectability k = 3, giving H = [h_1, h_2, h_3, h_4] = [5, 0, 1, 0].]

Fig. 2. Detectability profile H of 2-input AND gate.

possible circuit faults M. It will be shown that the expected fault coverage can be obtained when the detectability profile is known. The exact fault coverage of a pseudorandom test can only be determined by fault simulation.

In pseudorandom testing, each test vector is chosen with equal probability out of a "pool" that initially contains N different vectors, and is not replaced (sampling without replacement). Assume a fault of detectability k is present, i.e., k of the N vectors in the pool detect the fault. When a sequence of t test vectors is applied, the probability p_t that the fault is first detected by the tth vector is

  p_t = [(N-k)/N] [(N-k-1)/(N-1)] [(N-k-2)/(N-2)] ... [(N-k-(t-2))/(N-(t-2))] [k/(N-(t-1))]
      = C(N-t, k-1) / C(N, k),  t ≥ 1.¹   (1)

The jth term, (N-k-(j-1))/(N-(j-1)), 1 ≤ j ≤ t-1, of the product expression for p_t is the probability that the jth test vector did not detect the fault (and is removed from the pool of available vectors). The tth term, k/(N-(t-1)), is the probability that the fault is detected when k detecting vectors remain in a pool of N-(t-1) vectors. It is implicitly assumed that the pseudorandom generator produces all patterns, i.e., the LFSR is maximal length and has been modified to include the all-zero pattern [9]. Since a fault of detectability k must be detected within N-k+1 pseudorandom test vectors, p_t = 0 when t ≥ N-k+2.

The relationship of probability p_t to the hypergeometric

distribution is discussed in Appendix A. From the derivation in Appendix A using (1), the expected fault coverage is

  E[C_L] = 1 - Σ_{k=1}^{N-L} [C(N-L, k) / C(N, k)] (h_k / M).   (2)

Since the range of the summation is 1 ≤ k ≤ N-L, each summation term in (2) decreases the expected fault coverage when its associated h_k is nonzero. The upper limit of the summation is N-L, since faults with detectabilities N-L+1 through N must have been detected in a pseudorandom test of length L.

¹ The combinatorial notation is defined as C(a, b) = a!/[(a-b)! b!], with C(a, b) = 0 for b > a and C(a, 0) = 1 for a ≥ 0.
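Under this model, a detectability-k fault escapes a length-L test with probability C(N-L, k)/C(N, k), which is what (2) sums over the profile. The sketch below, using an assumed toy profile for a hypothetical 3-input CUT, checks the closed form against direct simulation of sampling without replacement:

```python
import random
from math import comb

def expected_coverage(H, N, M, L):
    # Eq. (2): E[C_L] = 1 - sum over k of [C(N-L,k)/C(N,k)] * h_k/M
    return 1.0 - sum(comb(N - L, k) / comb(N, k) * h / M
                     for k, h in H.items() if k <= N - L)

# Assumed profile for a hypothetical 3-input CUT: N = 8 vectors, M = 6 faults
H, N, M, L = {1: 2, 2: 3, 4: 1}, 8, 6, 3

# Direct simulation: draw L of the N vectors without replacement. By symmetry
# only k matters, so let vectors 0..k-1 be the ones that detect a k-fault.
rng = random.Random(1)
trials = 100_000
total = 0.0
for _ in range(trials):
    first = min(rng.sample(range(N), L))          # smallest vector index drawn
    total += sum(h for k, h in H.items() if first < k) / M
mc = total / trials

exact = expected_coverage(H, N, M, L)
print(round(exact, 4))   # -> 0.6012; the simulated value mc agrees closely
```

The simulated average coverage converges to the closed-form value, which is the content of the Appendix A derivation.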


The quotient of combinatorial terms in (2) is [(N-L)(N-L-1)(N-L-2) ... (N-L-(k-1))] / [N(N-1)(N-2) ... (N-(k-1))]. Assuming N-L ≫ k, this quotient can be rewritten as (N-L)^k / N^k = (1 - L/N)^k. Thus, (2) is approximated by

  E[C_L] ≈ 1 - Σ_{k=1}^{N-L} (1 - L/N)^k (h_k / M).   (3)

The change in expected coverage with the addition of one

test vector is also shown in Appendix A, in (40) and (41).

The pseudorandom test expected fault coverage results of

(2) and (3) can be applied and reformulated in several ways. For instance, they can be used directly on circuits with known detectability profiles. [8] gives profiles for seven circuits, and the SN181 ALU was profiled in this research. Table I shows the circuit characteristics.

Applying (2) and (3), the expected fault coverage for the circuit examples is shown in Figs. 3-5. The examples are divided into combinational circuits with n < 4, 4 ≤ n ≤ 5, and n ≥ 6 inputs. The exact solution of (2) is used to produce Fig. 3 and the portions of Fig. 4 where the approximation N-L ≫ k is invalid.

For the SN181 ALU, the exhaustive test length is 16384 and the minimum test length for 100 percent fault coverage is 12 [7]. Table II shows a set of fault simulation coverage results on the SN181 ALU using different polynomials and different seeds over a range of test lengths. For test lengths below 76, no result differs from the estimate by more than 6 percent; for larger test lengths, no result differs from the estimate by more than 2.5 percent. These results validate the use of the expected fault coverage as a test quality measure. The fault simulations were performed on an ALU 181 model with 400 possible single stuck-at faults, including the same 374 faults used in the detectability analysis shown in Table I. Undetected faults still remain in most of the simulations after the test sequence of length 150 is complete. For each undetected fault class, the rightmost column of the table lists the number of detected fault classes with smaller detectabilities.

Fault Coverage Loss Components

One can calculate Δf_Lk, the expected fault coverage loss component of a fault with detectability k in a test of length L. The loss component concept allows evaluation of the effect of each fault on the expected fault coverage. As shown in (7), the degradation from 100 percent expected fault coverage in a pseudorandom test can be calculated as the sum of the fault coverage loss components due to each fault in the circuit fault set. The fault coverage loss component can be extracted from (2) as

  Δf_Lk = (1/M) [C(N-L, k) / C(N, k)], where 1 ≤ k ≤ N-L.   (4)

Approximating, making the assumption N-L ≫ k,

  Δf_Lk ≈ (1/M) (1 - L/N)^k   (5)

TABLE I
CHARACTERISTICS OF CIRCUIT EXAMPLES

NAME                  N      M    DETECTABILITY PROFILE H
Decoder               4     36    [h_1, h_2, h_3] = [20, 12, 4]
PLA                   8     92    [h_1, h_2, h_3, h_4, h_5] = [15, 23, 31, 11, 12]
Schneider Ex.        16     44    [h_1, h_2, h_3, h_4] = [23, 19, 1, 1]
Full Adder           16     90    [h_1, h_2, h_3, h_4, h_5, h_6, h_8] = [1, 11, 2, 43, 21, 4, 8]
2-Stage Adder        32    142    [h_4, h_8, h_16] = [86, 24, 32]
3-Level NAND Tree   256     30    [h_…, h_…, h_…, h_…] = [20, 3, 6, 1]
SN153 MUX          4096    112    [h_256, h_512, h_768, h_1024, h_3072] = [72, 6, 6, 18, 10]
SN181 ALU         16384    374    [h_96=1, h_128=1, h_176=1, h_192=7, h_216=1, h_256=2, h_264=1, …, h_832=4, h_880=1, h_888=1, h_896=14, h_1024=26, h_1120=4, h_1280=8, …, h_1808=1, h_2048=92, …, h_3152=1, h_3920=1, h_4096=59, h_4776=2, …, h_5428=2, h_5456=1, h_5532=2, h_5600=8, h_6144=2, h_6176=4, h_6832=2, h_6944=5, h_8192=29, h_9440=1, h_9552=6, h_12288=37, h_14080=1]

Fig. 3. Expected fault coverage (small circuits): expected fault coverage versus test length L; curves: 1. Decoder (2 inputs), 2. PLA (3 inputs).

Fig. 4. Expected fault coverage (medium circuits): expected fault coverage versus test length L; curves: 1. Schneider Example (4 inputs), 2. 2-Stage Adder (5 inputs), 3. Full Adder (4 inputs).

Fig. 5. Expected fault coverage (large circuits): expected fault coverage versus test length L; curves: 1. 3-Level NAND Tree (8 inputs), 2. SN153 MUX (12 inputs), 3. SN181 ALU (14 inputs).

  Δf_Lk ≈ (1/M) {1 - k(L/N) + k(k-1)(L/N)^2/2 - ...}.   (6)


TABLE II
FAULT COVERAGE OF THE ALU 181 IN PSEUDORANDOM TESTING

                               TEST LENGTH                       SMALLER-k DETECTED
                 L=25   L=50   L=75   L=100  L=125  L=150        FAULT CLASSES
Expected Fault
Coverage         88.18  94.88  97.10  98.14  98.75  99.13

Poly 1  Seed 1   90.00  97.25  99.75  99.75  99.75 100.00
        Seed 2   91.25  97.50  98.00  98.50  98.50  98.75        3
        Seed 3   89.25  93.25  94.75  96.00  96.25  98.50        3, 6
        Seed 4   93.50  97.75  98.00  98.75  99.75  99.75        3
        Seed 5   91.75  97.25  98.00  98.25  98.25  98.25        3, 30
        Seed 6   85.00  92.75  95.25  98.75  99.50  99.50        7, 30
Poly 2  Seed 1   95.25  97.75  99.50  99.50  99.50  99.50        0, 30
        Seed 2   91.50  95.50  97.25  98.75 100.00 100.00
        Seed 3   88.75  94.00  96.25  96.25  96.25  96.50        2, 2, 2, 14
        Seed 4   90.75  98.25  99.50  99.75  99.75  99.75        7
        Seed 5   92.00  94.00  98.75  99.50  99.75  99.75        3
        Seed 6   88.50  95.25  97.00  97.50  97.50  97.50        0, 2, 2, 4, 27, 49
Poly 3  Seed 1   88.25  91.00  97.25  98.25  98.75  98.75        3
        Seed 2   91.50  94.50  97.00  98.00  98.25  98.25        0, 2, 29
        Seed 3   88.25  94.50  96.50  97.00  97.25  97.75        3, 13
        Seed 4   93.50  95.00  98.00  98.00  99.50  99.50        0, 8
        Seed 5   91.00  96.50  96.50  96.75  97.50  98.50        2, 9, 15
        Seed 6   87.50  95.50  97.25  97.25  99.50  99.50        7, 30
Poly 4  Seed 1   89.00  94.25  97.75  98.25  98.75  98.75        3
        Seed 2   91.25  95.50  96.25  96.50 100.00 100.00
        Seed 3   89.00  95.00  97.50  98.50  98.50  98.50        0, 2
        Seed 4   87.75  96.75  97.75  97.75  99.00  99.00        2, 6, 29, 51
        Seed 5   92.00  95.15  97.75  99.00  99.75  99.75        2
        Seed 6   88.50  97.25  99.25 100.00 100.00 100.00

From (4), the faults causing the largest loss of coverage are those with the smallest detectabilities. These faults are counted in the initial nonzero elements of the detectability profile. The largest fault equivalence class counted in the leading nonzero element of the profile can be covered for free by seeding the LFSR with a test pattern capable of detecting it. The expected fault coverage calculation can then omit the loss components due to faults in this fault class.

Define the expected fault uncoverage E[U_L] as 1 - E[C_L]. From (2), the expected fault uncoverage is the sum of the fault coverage losses of faults with detectabilities not exceeding N-L,

  E[U_L] = Σ_{k=1}^{N-L} h_k Δf_Lk.   (7)
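The loss-component bookkeeping can be sketched numerically. Assume (hypothetically) a circuit with M = 30 faults, one of detectability k = 2 and 29 of detectability k = 16, tested at normalized length L/N = 0.1875; using the approximation of (5):

```python
M = 30
H = {2: 1, 16: 29}    # assumed profile: one hard fault (k=2), 29 easy (k=16)
LN = 0.1875           # normalized test length L/N

# Eq. (5): per-fault loss component; Eq. (7): uncoverage is the weighted sum.
loss = {k: (1 / M) * (1 - LN) ** k for k in H}   # loss per fault
uncoverage = sum(H[k] * loss[k] for k in H)      # E[U_L]
coverage = 1 - uncoverage                        # E[C_L]

print(round(100 * loss[2], 2),                   # -> 2.2  (percent, k=2 fault)
      round(100 * H[16] * loss[16], 2),          # -> 3.49 (percent, 29 k=16 faults)
      round(100 * coverage, 1))                  # -> 94.3 (percent coverage)
```

With this assumed profile the single hard fault contributes 2.2 percent of uncoverage while the 29 easy faults together contribute about 3.5 percent, so the hard fault alone does not dominate.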

Fig. 6. Expected fault coverage loss components (M = 30), pseudorandom model: loss component versus normalized test length L/N (0 to 0.25); curves run from hard-to-detect faults (k = 2) to easy-to-detect faults (k = 16).

Whether the faults counted in the initial nonzero elements of the detectability profile will dominate the fault uncoverage depends on the composition of the profile. The separation of nonzero profile elements and their magnitudes can vary widely between circuits.

Example 1: Analyzing any circuit with M = 30 possible faults corresponds to the use of Fig. 6, produced from (5). The expected fault coverage loss component in the figure is given as Δf_Lk x 100 percent. Assume a circuit has one low detectability fault, k = 2, and 29 other faults, each with k = 16 (i.e., H = [h_2, h_16] = [1, 29]). Selecting a normalized test length of 0.1875, the expected fault coverage loss is 2.2 percent due to the h_2 fault and 29 x 0.12 percent = 3.48 percent due to the h_16 faults. The expected fault uncoverage is 2.2 percent + 3.48 percent = 5.68 percent, and thus the expected fault coverage is 94.3 percent. It is clear that the h_2 element alone does not dominate fault uncoverage for this CUT.

Fig. 6 also shows that for very short test lengths, all faults cause about equal fault coverage loss. With a test sequence of length zero, fault coverage is zero, and each fault is responsible for a fault coverage loss of (1/M) x 100 percent, regardless of its detectability.

IV. OTHER MEASURES OF TEST QUALITY

Other measures of test quality are considered in this section. The relationships between these measures and the expected fault coverage measure are analyzed.

Test Confidence

Test length calculation is often based upon a selected test confidence c_L, the probability that a particular fault will be detected in a test of length L. Test confidence can be derived using (1). The fault chosen for the test length calculation is either: 1) the worst case fault, i.e., the fault with minimum detectability in the CUT, or 2) the upper bound fault, i.e., a fault with detectability k = 1, used where the detectability profile of the CUT is unknown. Such faults have the largest impact on measures of test quality when compared to any other faults. From Appendix B,

  c_L = Σ_{t=1}^{L} p_t = 1 - C(N-L, k) / C(N, k).   (8)

Since test confidence is generally the parameter chosen and test length must be calculated, (8) can be reordered. Test length is then given by the smallest integer value of L that satisfies the inequality

  C(N-L, k) ≤ C(N, k) (1 - c_L).   (9)

It is also possible to find an accurate approximation to test length by rewriting (9) as

  (N-L)(N-L-1)(N-L-2) ... (N-L-(k-1)) ≤ [N(N-1)(N-2) ... (N-(k-1))] (1 - c_L).

Assuming N-L ≫ k, the expression can be approximated as (N-L)^k ≤ N^k (1 - c_L), or

  L ≈ ⌈N[1 - (1 - c_L)^(1/k)]⌉.²   (10)

² ⌈x⌉ denotes the least integer greater than or equal to x.
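A sketch comparing the exact test length from (9), found by brute-force search, with the closed-form estimate of (10); N = 1024 and the (k, c_L) pairs are assumptions chosen for illustration:

```python
from math import ceil, comb

def length_exact(N, k, cL):
    # Smallest L with C(N-L, k) <= C(N, k) * (1 - cL)  -- Eq. (9)
    bound = comb(N, k) * (1 - cL)
    return next(L for L in range(N + 1) if comb(N - L, k) <= bound)

def length_approx(N, k, cL):
    # Eq. (10): L = ceil(N * (1 - (1 - cL)**(1/k)))
    return ceil(N * (1 - (1 - cL) ** (1 / k)))

N = 1024  # assumed: a 10-input CUT
for k, cL in [(1, 0.95), (2, 0.99), (8, 0.999), (32, 0.999)]:
    print(k, cL, length_exact(N, k, cL), length_approx(N, k, cL))
```

Because C(N-L, k)/C(N, k) ≤ (1 - L/N)^k, the estimate of (10) never falls below the exact answer; here the two agree to within a few vectors, with the gap growing for large k, as the text notes.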


Fig. 7. Test length versus detectability: normalized test length L/N versus detectability k (0 to 64), for test confidences c_L = 0.99 and c_L = 0.9.

Fig. 8. Test length versus test confidence: normalized test length L/N versus test confidence c_L (0.1 to 0.9), for fault detectabilities from k = 1 to k = 16.

The assumption N-L ≫ k is reasonable because any increase in fault detectability k provokes a much larger decrease in test length L. Equation (10) is very accurate except in the cases of circuits with very small numbers of inputs or faults with very large detectabilities. In both these cases, the exact solution provided by (9) is easily obtained. The assumption is actually too restrictive for faults with detectabilities k = 1 or k = 2. For these low detectability faults, (10) still applies with the following more relaxed restrictions: 1) for k = 1, no assumptions are made and L = ⌈N c_L⌉, and 2) for k = 2, assume N ≫ 1 and L ≈ ⌈N[1 - (1 - c_L)^(1/2)]⌉.

A plot of normalized test length versus detectability for different test confidences is shown in Fig. 7, and plots of normalized test length versus test confidence for different fault detectabilities are shown in Figs. 8 and 9. All are derived from (10). An important feature to note in Fig. 8 (whose region of interest is expanded in the log plot of Fig. 9) is that test length increases very slowly with increasing test confidence until a very high confidence is required, at which point the required test length rises dramatically. The relationship is extremely sensitive in this region. For instance, with k = 32, c_L = 0.9990 requires L = 0.19N, while c_L = 1 - 10^-10 requires L = 0.51N and c_L = 1.0 requires L = N (for all values of k).

Test length calculations based on worst case faults are unnecessarily pessimistic. The effect of faults with larger detectabilities should be considered. For instance, from (10), to detect a k = 2 fault, a normalized test length of 0.761 is required for a test confidence of 0.943. This test length exceeds four times the estimate made for a fault coverage of 94.3 percent in Example 1 of Section III. For the SN181 ALU, a test length of 755 for a test confidence of 0.989 would be required based on the worst case fault. Since the goal of pseudorandom testing is to detect all possible faults in the CUT, test quality measures which consider the fault set are more reasonable, and significantly shorter test lengths can often be obtained.

Expected Test Length

From Appendix C, the expected test length E[L_i] for a particular fault F_i of detectability k in pseudorandom testing is

  E[L_i] = Σ_{t=1}^{N} t p_t = (N+1)/(k+1).   (11)
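The closed form (N+1)/(k+1) can be spot-checked by simulating the first-detection time directly; N = 32 and k = 4 are assumed values for illustration:

```python
import random

N, k = 32, 4               # assumed: N = 32 vectors, k = 4 of them detecting
rng = random.Random(7)
trials = 200_000

# Applying all N vectors in a random order, the first-detection time is one
# plus the smallest of the k detecting vectors' positions; sampling k distinct
# positions and taking the minimum is equivalent to permuting and scanning.
avg = sum(min(rng.sample(range(N), k)) + 1 for _ in range(trials)) / trials

print(round(avg, 2), (N + 1) / (k + 1))   # both near 6.6
```

The simulated mean converges on (N+1)/(k+1) = 33/5 = 6.6 test vectors.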

Fig. 9. Test length versus test confidence (expanded for c_L = 1): normalized test length L/N versus test confidence (0.9 to 0.99999, log scale), for detectabilities k = 2, 4, 8, 16, and 32.

If k_i is the detectability of fault F_i, and all faults are equally likely, then the average value of the test length for the fault set is

  E[L] = (1/M) Σ_{i=1}^{M} E[L_i] = (1/M) Σ_{i=1}^{M} (N+1)/(k_i+1) = [(N+1)/M] Σ_{k=1}^{N} h_k/(k+1).   (12)

The expected test length E[L] can be used as a crude measure of circuit testability. However, it gives no information about how test length affects measures of test quality, unlike the expected fault coverage measure of Section III and the following measure.

Weighted Test Confidence

To include the entire fault set F (composed of faults F_i) in the test length calculation, [14] proposed a "weighted sum" solution for random testing. It is modified in (13) for pseudorandom testing. Let Pr{F_i|F} be the probability that fault F_i has occurred, given that a fault is present. In a test of length L there is a particular test confidence associated with the detection of each fault in the circuit fault set. The weighted test confidence T_L is the weighted average of these test confidence values,

  T_L = [Σ_{i: F_i ∈ F} c_Li Pr{F_i|F}] / [Σ_{i: F_i ∈ F} Pr{F_i|F}]   (13)

where

  Σ_{i: F_i ∈ F} Pr{F_i|F} = 1 and L ≥ 1.


In (13), k_i is the detectability of fault F_i and c_Li is the test confidence of detecting the fault in a test of length L, from (8). Assuming N-L ≫ k_i for all i, (13) can be approximated by

  T_L ≈ Σ_{i: F_i ∈ F} [1 - (1 - L/N)^(k_i)] Pr{F_i|F}.   (14)

It is difficult to evaluate the practicality of the weighted test confidence measure from the preceding definition and equations. However, when faults are equiprobable, Appendix D proves that the weighted test confidence and the expected fault coverage over a fault set F are identical.

The correspondence between the expected fault coverage and weighted test confidence measures can also be seen by close examination of (5) and (10). Equation (10) can be rewritten as (1 - c_L) ≈ (1 - L/N)^k. If fault confidence loss is defined as (1 - c_L)/M for a fault of detectability k, then the fault coverage loss Δf_Lk and the fault confidence loss are identical. Hence, Figs. 6, 8, and 9 are equivalent versions of the amount of confidence/coverage lost per fault.
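The equivalence can be verified numerically from (8) and (2); the detectability list below is an assumed toy fault set with equiprobable faults:

```python
from math import comb

def confidence(N, L, k):
    # Eq. (8): probability a detectability-k fault is caught within L vectors
    return 1 - comb(N - L, k) / comb(N, k)

ks = [1, 1, 2, 4, 4, 8]        # assumed fault detectabilities (M = 6 faults)
N, L = 16, 5
M = len(ks)

T_L = sum(confidence(N, L, k) * (1 / M) for k in ks)           # Eq. (13)
E_CL = 1 - sum(comb(N - L, k) / comb(N, k) / M for k in ks)    # Eq. (2)

print(round(T_L, 6), round(E_CL, 6))   # the two values are identical
```

With equal weights Pr{F_i|F} = 1/M, the sum in (13) is term-for-term the complement of the sum in (2), which is the substance of the Appendix D proof.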

100 Percent Fault Coverage

[12] takes a different approach to pseudorandom test analysis. It considers the probability that a test will provide 100 percent fault coverage. The mathematics become unwieldy for exact solutions involving more than two faults, so an upper bound and a lower bound to test length are provided for M ≥ 3. The bounds assume that no faults share any common test vectors that can detect them. [8] adopts this approach to calculate the expected number of vectors needed for 100 percent coverage, and then also notes that the mathematics for all practical cases is too difficult to solve.

We argue that if 100 percent fault coverage must be obtained, then probabilistic arguments are not reasonable. If a pseudorandom generator is used, 100 percent coverage requires either exhaustive test, or a complete fault simulation which ensures the test is long enough to detect all possible faults. If less than 100 percent coverage is satisfactory, then it is more sensible to use the expected fault coverage measure; it provides a fault coverage estimate for all test lengths, and allows the influence of test length on fault coverage to be clearly understood.

V. INTERPRETATION OF RESULTS

Equation (2) and its approximation in (3) allow calculation of the expected fault coverage for any circuit whose complete detectability profile is known. If the first nonzero elements of the detectability profile have small indexes, say k < 8, these count hard-to-detect faults [2] or random-pattern resistant faults [11]. Such faults are difficult to detect in a CUT because few test patterns can both provoke them and sensitize them to an output; there is a low probability of applying a test vector capable of their detection. Much concern over the effectiveness of pseudorandom testing is focused on the presence of hard-to-detect faults.

Three different methods have been proposed to deal with hard-to-detect faults. They are: 1) adding test patterns to the test set to specifically detect these faults [11], where one worst case fault class can be detected "free" by seeding the LFSR, 2) circuit modification (insertion of test points) to increase the probability of provoking and/or sensitizing these faults [5], and 3) using weighted pattern generation, i.e., changing the probability of 1's and 0's in pattern generation to bias the test vector inputs towards those few that detect these faults [2].

The influence of hard-to-detect faults on test length had not been well understood in the past. Instead, some multiple of the detectability of the worst case fault was used as a threshold value. Only those faults with detectabilities lower than the threshold were considered significant and used in the test length calculation [12], [8]. Rules can also be based upon the generally exponential shape of fault coverage curves [16].

Equation (4) is a precise measure of the influence of each fault on the expected fault coverage for a given test length. Expected fault coverage loss component curves such as those in Fig. 6 are easy to obtain. Each is derived only on the basis of the number of possible faults in a circuit and otherwise is circuit independent. Although Example 1 of Section III is somewhat contrived, it illustrates that the test length required for a stuck-at fault test of a circuit is not necessarily determined by its few hard-to-detect faults. The circuit fault set rather than single worst case or hard-to-detect faults should be considered when evaluating pseudorandom test length requirements; the result may be a significant decrease in test length while maintaining adequate fault coverage.

VI. RANDOM TESTING

Expected Fault Coverage

[13] analyzed random testing when considering a single fault in the CUT. The results are repeated in (15) and (20)-(22). In random testing, each test vector is chosen out of a "pool" of N different vectors and immediately replaced (sampling with replacement). Only k of the N vectors in the pool detect the fault. Let X be a random variable representing the number of test vectors applied until the fault is detected. The random variable X is geometrically distributed [15]. The probability p_t that the fault is first detected by the tth test vector is

    p_t = Pr{X = t} = [(N − k)/N]^{t−1}(k/N)
        = (1 − k/N)^{t−1}(k/N)  where t ≥ 1.    (15)

[8] derived the expression for the expected fault coverage of random testing. The derivation is repeated in Appendix A using (15). For random testing, the expected fault coverage E[C_L] is

    E[C_L] = 1 − (1/M) Σ_{k=1}^{N} (1 − k/N)^L h_k.    (16)

The random test expected fault coverage result of (16), like the pseudorandom test result of (2), can be applied directly to circuits with known detectability profiles. Fault coverage loss components and marginal expected coverage can also be extracted.
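Both coverage expressions operate directly on a detectability profile {h_k}. A minimal sketch, using a hypothetical profile (the h_k values below are illustrative, not from the paper's examples):

```python
from math import comb

def expected_coverage_random(N, L, profile):
    # Eq. (16): E[C_L] = 1 - (1/M) * sum_k h_k (1 - k/N)^L
    # profile maps detectability k -> h_k (number of faults); M = sum of h_k.
    M = sum(profile.values())
    return 1.0 - sum(h * (1.0 - k / N) ** L for k, h in profile.items()) / M

def expected_coverage_pseudorandom(N, L, profile):
    # Eq. (2): E[C_L] = 1 - (1/M) * sum_k h_k C(N-L, k)/C(N, k)
    M = sum(profile.values())
    return 1.0 - sum(h * comb(N - L, k) / comb(N, k)
                     for k, h in profile.items()) / M
```

Because pseudorandom testing never repeats a vector, its expected coverage is never below the random-model value for the same test length, and it reaches exactly 1 at L = N.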


Fault Coverage Loss Components

For random testing, Δf_Lk, the expected fault coverage loss in a test of length L per fault of detectability k, can be found analogously to that of pseudorandom testing. The degradation from 100 percent expected fault coverage in a random test is calculated as the linear sum of the losses due to each individual fault in the circuit fault set, as shown in (19).

Using (16), the fault coverage loss component can be identified as

    Δf_Lk = (1/M)(1 − k/N)^L    (17)

        = (1/M){1 − L(k/N) + L(L − 1)(k/N)²/2 − ···}.    (18)

From (16), the expected fault uncoverage is the sum of the fault coverage losses of all faults,

    E[U_L] = Σ_{k=1}^{N} h_k Δf_Lk.    (19)

Fig. 10. Expected fault coverage loss components (M = 30, N = 256), random model. (Horizontal axis: normalized test length L/N.)
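Equations (17) and (19) translate directly into code, and the identity E[U_L] = 1 − E[C_L] from (16) provides a built-in cross-check. The profile below is hypothetical, chosen only to exercise the formulas.

```python
def coverage_loss_component(N, L, k, M):
    # Eq. (17): expected coverage loss per fault of detectability k (random model).
    return (1.0 - k / N) ** L / M

def expected_uncoverage(N, L, profile):
    # Eq. (19): E[U_L] = sum_k h_k * loss(k); profile maps k -> h_k.
    M = sum(profile.values())
    return sum(h * coverage_loss_component(N, L, k, M) for k, h in profile.items())
```

Lower-detectability faults contribute larger loss components, which is why the leading entries of the detectability profile dominate the uncoverage.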

For example, analyzing random testing of any circuit with M = 30 possible faults corresponds to the use of Fig. 10, produced from (17). The expected fault coverage loss in a test of length L per fault of detectability k is Δf_Lk × 100 percent. Using the random test model, the expected fault coverage in Example 1 of Section III is 93.3 percent when N = 256 and L = 48 (L/N = 0.1875).

Other Measures of Test Quality

The test confidence c_L, or probability that a particular fault is detected within the first L test vectors applied, can be found using (15)

    c_L = Σ_{t=1}^{L} p_t = 1 − (1 − k/N)^L  where L ≥ 1.    (20)

This result can also be obtained by considering the test confidence as one minus the probability that a test of length L does not detect the fault. Rearranging (20) to solve for the test length L, which must be a positive integer value,

    L = ⌈log (1 − c_L)/log (1 − k/N)⌉.    (21)

Consider the Taylor series expansion of the natural log function

    log_e (1 − x) = −x − (x²/2) − (x³/3) − ···.

If x ≪ 1, this expression simplifies to log_e (1 − x) ≈ −x. Assuming k/N ≪ 1 in the log (1 − k/N) term of (21), combined with the simplification, test length can be approximated accurately as

    L ≈ ⌈−(N/k) log_e (1 − c_L)⌉.    (22)

From [8], the expected test length E[L_i] for a particular fault of detectability k is

    E[L_i] = Σ_{t=1}^{∞} t · p_t = N/k    (23)

and when faults are equiprobable, the average value of test length for all faults in the fault set is

    E[L] = (1/M) Σ_{i=1}^{M} E[L_i] = (1/M) Σ_{k=1}^{N} (N/k) h_k.    (24)

To include the entire fault set F in the test length calculation, the "weighted sum" solution proposed in [14] is shown in (25). The weighted test confidence T_L is the weighted average of the test confidence values associated with each fault in the circuit fault set. Similar to the proof in Appendix D, it can be shown that the weighted test confidence is identical to the expected fault coverage when faults are equiprobable.

    T_L = Σ_{i:F_i∈F} c_{Li} Pr{F_i|F}
        = Σ_{i:F_i∈F} [1 − (1 − k_i/N)^L] Pr{F_i|F}    (25)

where

    Σ_{i:F_i∈F} Pr{F_i|F} = 1  and  L ≥ 1.

As in (13), k_i is the detectability of fault F_i, Pr{F_i|F} is the probability that the fault F_i has occurred given a fault is present, and c_{Li} is the test confidence of detecting the fault in a test of length L from (20).
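The test length expressions (21) and (22) can be checked numerically; a sketch, with illustrative parameter values:

```python
from math import ceil, log

def random_test_length(N, k, c):
    # Eq. (21): smallest integer L with 1 - (1 - k/N)^L >= c.
    return ceil(log(1.0 - c) / log(1.0 - k / N))

def random_test_length_approx(N, k, c):
    # Eq. (22): L ~ ceil(-(N/k) * ln(1 - c)), accurate for k/N << 1.
    return ceil(-(N / k) * log(1.0 - c))
```

Since −log(1 − k/N) ≥ k/N, the approximation never underestimates the exact value, and higher detectability always shortens the required test.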

Comparison of Random and Pseudorandom Testing

It is instructive to compare the test length requirements of pseudorandom and random testing. For pseudorandom testing, the test length is bounded by the exhaustive test length N (the application of all possible inputs). No such limit exists in a random test.

Comparison of Fig. 6 to Fig. 10 shows that the random model can provide reasonable results for the expected fault coverage since its fault coverage loss components are similar to those of pseudorandom testing. However, binomial expansions of (5) and (17), shown in (6) and (18), reveal that the random test model loss components slowly diverge from the correct results of the pseudorandom model as test lengths or fault detectabilities increase.

The random test length can be used as a baseline to evaluate the efficiency of pseudorandom testing. To detect a particular


Fig. 11. Test length ratio (curves for test confidence c = 0.9 to 0.999; horizontal axis: detectability k).

fault of detectability k with test confidence c, a pseudorandom test of length L and a random test of length L′ are required. Assuming N − L ≫ k, from the approximations (10) and (22), the ratio of test lengths is

    L/L′ ≈ −k[1 − (1 − c)^{1/k}]/log_e (1 − c).    (26)

As observed by [3] in numerical simulations of particular cases, this test length ratio is independent of N for N ≫ 1. Fig. 11, derived from (26), shows a plot of the ratio of pseudorandom test length to random test length for different levels of test confidence. Pseudorandom test length never exceeds random test length (L/L′ ≤ 1).

As noted by [4], when k = 1, the ratio (26) effectively reduces to

    L/L′ ≈ −c/log_e (1 − c).    (27)

When k approaches N,

    L/L′ ≈ 1.    (28)

The ratio is smallest for low detectability faults, where the pseudorandom test property of never repeating the same test vector provides the most benefit. This is precisely where the most accuracy is required in calculating test length. Thus random testing analysis is a poor indicator of pseudorandom testing behavior.
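The behavior of (26)-(28) is easy to confirm numerically; a sketch:

```python
from math import log

def length_ratio(k, c):
    # Eq. (26): ratio L/L' of pseudorandom to random test length for a fault of
    # detectability k at test confidence c; independent of N for N >> 1.
    return -k * (1.0 - (1.0 - c) ** (1.0 / k)) / log(1.0 - c)
```

At k = 1 the ratio equals −c/log_e(1 − c) as in (27), and it rises toward 1 as k grows, as in (28), so the pseudorandom advantage is largest for hard-to-detect faults.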

VII. UNMATCHED PATTERN GENERATION

Consider the test setup shown in Fig. 1. It is not always the case that the number of outputs m of the pseudorandom test pattern generator is equal to the number of inputs n of the CUT. Cases where m ≠ n are not uncommon, especially the case m > n. For instance, the same LFSR may be used to test multiple circuits of different sizes in parallel, where it is large enough to accommodate the largest of the circuits. This "unmatched" pattern generation cannot be analyzed directly using the results of Sections III and IV.

Case 1: m < n

This is the simpler case to analyze. The CUT requires N = 2^n patterns for exhaustive test and only 2^m patterns are available (some test generator outputs must be connected to multiple circuit inputs). Several approaches are possible, all of which modify the detectability profile. Whichever approach is taken, the analyses of Sections III and IV can then be used.

1) With a fault simulator, the actual fault detectabilities could be found under the new set of 2^m < N possible distinct input vectors.

2) As a simpler alternative, the dependency of each fault on the CUT primary inputs could be found and used to revise the detectability profile.

3) To avoid CUT analysis completely, all faults with detectability k could be assigned the new detectability (2^m/2^n)k = 2^{m−n}k. The modified profile would be a left-shifted version of the original, where an element h_k in position k is moved to position 2^{m−n}k. Since 2^{m−n}k may not be an integer, it should be replaced by ⌊2^{m−n}k⌋ as a lower bound. As a consequence, hard-to-detect faults become even more difficult to detect and may be assigned modified detectabilities of zero (undetectable). From (5), each fault of this type will then cause an expected fault coverage loss of (1/M) × 100 percent, regardless of test length.
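Approach 3) amounts to a simple transformation of the detectability profile; a sketch, using a hypothetical profile:

```python
def remap_profile(profile, m, n):
    # Case m < n: assign each fault of detectability k the lower-bound
    # detectability floor(2^(m-n) * k). Faults mapped to 0 become undetectable.
    assert m < n
    new_profile = {}
    for k, h in profile.items():
        new_k = (k * 2 ** m) // 2 ** n      # floor(2^(m-n) * k), exact in integers
        new_profile[new_k] = new_profile.get(new_k, 0) + h
    return new_profile
```

Any entry landing in position 0 contributes a permanent expected coverage loss of (1/M) × 100 percent, per (5).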

Case 2: m > n

In this case test vectors are sampled with limited replacement. Each test vector can be applied at most 2^{m−n} times. This value is called x, the replacement factor. The required test length associated with a particular fault ranges between that of pseudorandom test (no replacement) as a lower bound, and (2^m − 2^{m−n} + 1) vector exhaustive test as an upper bound. As x increases, the test pattern generation approaches that of the random test model. Since only a subset of LFSR outputs are used, all possible vectors are generated with no special modifications needed to include the all-zero pattern.

There is probability p_t that t test vectors must be applied until a particular fault is first detected. To find p_t, assume that a test vector must be removed from the pool of available vectors after x test vectors are applied that do not detect the fault. Since in general the same vector was not sampled all x times, the following expression is an approximation,

    p_t = [(N − k)/N]^x [(N − k − 1)/(N − 1)]^x ··· [(N − k − z)/(N − z)]^{x′}
          × [(N − k − z − 1)/(N − z − 1)]^y · k/(N − ⌊(t − 0.5)/x⌋)¹

        = {(N − k)!(N − z − 1)!/[(N − k − z − 1)!N!]}^{x′}
          × [(N − k − z − 1)/(N − z − 1)]^y · k/(N − ⌊(t − 0.5)/x⌋).    (29)

¹ ⌊x⌋ denotes the greatest integer less than or equal to x.


TABLE III
APPROXIMATE TEST LENGTH IN UNMATCHED PATTERN GENERATION (m ≥ n)

                        m=n=8   m=9   m=10   m=11   m=12   m=13   m=14
Replacement factor x:       1     2      4      8     16     32     64

        k =  1            251   440    639    793    889    943    972
        k =  2            220   319    396    444    471    485    493
        k =  3            186   245    284    307    320    326    330
        k =  4            159   197    221    235    242    246    247
        k =  6            122   142    153    159    162    164    165
        k =  8             98   110    117    120    122    123    123
        k = 16             54    58     59     60     61     61     61
        k = 32             28    29     29     30     30     30     30

(Test confidence = 0.98)

In this equation,

    y = (t − 1) mod x,  z = ⌊(t − 1)/x⌋ − 1,

and x′ = x if t ≥ x, else x′ = 0.

Unfortunately, no simple formulae have been found to provide c_L and E[C_L] for this case. Numerical techniques can be used on (29) when it is substituted into the general expression in (8) for c_L. The p_t values are accumulated for 1 ≤ t ≤ L until the desired test confidence is obtained at test length L. To illustrate the effect of unmatched pattern generation, Table III was constructed for n = 8, 8 ≤ m ≤ 14 when c_L = 0.98. The table provides test length versus detectability for a single fault of detectability k. The upper bound on test length is L = 2^m − x + 1 = x(2^n − 1) + 1 = 255x + 1 for test confidence c_L = 1.

The effect of unmatched pattern generation for this case becomes less noticeable as the fault detectability increases, i.e., test length converges to the sampling without replacement (x = 1) value as k increases. For instance, column 4 of the table describes the behavior at 98 percent test confidence with a pattern generator that has 3 outputs not connected to the CUT (x = 2³). A fault of detectability k = 1 requires a test length 3.16 times greater than that of the matched case (where m = n and thus x = 1), but this reduces to a factor of 1.07 for a fault of detectability k = 32.

To avoid random test behavior, LFSR's should be reconfigurable so that their size matches the CUT as closely as possible. Otherwise, test lengths can easily exceed those required for exhaustive test. If multiple circuits are tested in parallel, concatenation of appropriately-sized LFSR's (whose periods must be relatively prime) is preferable to the use of a subset of the outputs of one large LFSR. For instance, to test circuits of size n = 5, 7, and 12, two maximal length LFSR's of sizes m = 5 and m = 7 should be constructed. These test the n = 5 and n = 7 circuits, respectively. The n = 12 input circuit can be tested concurrently with the smaller circuits by concatenating the two small LFSR's; the resulting period of the concatenated LFSR is (2^5 − 1)(2^7 − 1) = 3937 ≈ 2^12. To ensure that the periods of two maximal length LFSR's are relatively prime, it is sufficient to modify one of them to include the all-zero state. The period of the modified LFSR becomes even and the period of the unmodified LFSR remains odd, so their periods are guaranteed to be relatively prime.

VIII. SUMMARY AND CONCLUSIONS

Pseudorandom testing is an important alternative to algorithmic test pattern generation. Also, it is the common method of pattern generation in systems using built-in self test.

This research provides the tools to analyze pseudorandom test. Exact solutions and accurate approximations can be made quickly for measures of interest such as test length and fault coverage in pseudorandom testing. Extensive logic or numerical simulation to derive these measures is avoided, although finding the detectability profile of the CUT may be an obstacle. The equations allow the general relationships between test parameters to be explored, leading to a better understanding of the efficiency of pseudorandom testing. The pseudorandom test set can be directly compared to algorithmic test sets using the fault coverage measure.

The results show that the circuit fault set rather than hard-to-detect faults should be considered when evaluating test length requirements. Each fault's influence on fault coverage can be evaluated easily once its detectability is known, so only those nonzero elements of the detectability profile where the collective fault influence is nonnegligible need to be considered. These are generally the leading elements of the CUT detectability profile, since the influence of each fault on the expected fault coverage decreases rapidly with increasing detectability. Faults with larger detectabilities are significant if they occur in sufficient numbers.

Results are also derived for the random test model and show that it can be a poor predictor of pseudorandom test behavior. Since this model is always less accurate, also requires the detectability profile, and is no easier to use than the pseudorandom test model, it should be discarded.

Finally, situations where the size of the pseudorandom generator does not match the size of the CUT are considered. Test measures can also be calculated for these cases, although closed form solutions for the more important case have not been found. Reducing the size mismatch between the generator and CUT is very important because test length is minimized when their sizes are equal.

APPENDIX

A. Expected Fault Coverage

Let Y be a random variable representing the number of test vectors that detect a particular fault in a pseudorandom test sequence of length t. Then Y has a hypergeometric distribution [15]. The probability that the fault is first detected by the tth test vector, or p_t of (1), is equivalent to Pr{Y = 1}/t. The random variable Y equals one because the fault is detected only once in the test sequence. Pr{Y = 1} is divided by t, since the hypergeometric distribution makes no distinction among the t possible positions in the sequence where the detecting vector could occur (for our purposes this vector must be in the tth position).

To calculate the expected fault coverage [8], consider a random variable d_{ti}, which is equal to one if fault F_i is first detected by the tth test vector applied and zero otherwise. Then G_t, the total number of "new" faults detected by applying the tth vector (i.e., the number of faults that are not detected by vectors in the test sequence positions 1, 2, ···, t − 1 and that are detected by the test vector in position t) is

    G_t = Σ_{i=1}^{M} d_{ti}    (30)

where M is the number of possible stuck-at faults in the CUT. From (30) and properties of the expected value, E[G_t], the expected number of "new" faults detected by the tth vector, is

    E[G_t] = E[Σ_{i=1}^{M} d_{ti}] = Σ_{i=1}^{M} E(d_{ti}) = Σ_{i=1}^{M} p_{ti} = Σ_{k=1}^{N} h_k p_t    (31)

where p_{ti} = Pr{d_{ti} = 1} and p_t is the probability that a fault of detectability k is first detected by the tth test vector. Substituting p_t from (1) and (15) results in

    E[G_t] = Σ_{k=1}^{N} h_k (1 − k/N)^{t−1}(k/N)    (32)

for random testing and

    E[G_t] = Σ_{k=1}^{N} h_k C(N−t, k−1)/C(N, k)    (33)

for pseudorandom testing, where C(n, r) denotes the binomial coefficient "n choose r."

The expected fault coverage E[C_L] is the expected number of faults that can be detected in a test of length L, divided by the total number of possible circuit faults M. Observe that

    E[C_L] = (1/M) Σ_{t=1}^{L} E[G_t].    (34)

Substituting E[G_t] from (32) and interchanging the order of summation as in [8], for random testing

    E[C_L] = 1 − (1/M) Σ_{k=1}^{N} (1 − k/N)^L h_k.    (35)

However, obtaining a simple closed form solution for the expected fault coverage of pseudorandom test requires a number of extra steps. Substituting (33) in (34) and switching the index limit from L to t,

    E[C_t] = (1/M) Σ_{k=1}^{N} h_k Σ_{j=1}^{t} C(N−j, k−1)/C(N, k).    (36)

Examining this result, it can be proved by induction on (36), together with the identity relation

    E[C_{N−t−1}] = E[C_{N−t}] − (E[G_{N−t}])/M,    (37)

that

    E[C_{N−t}] = 1 − (1/M) Σ_{k=1}^{N} [C(t, k)/C(N, k)] h_k.    (38)

Thus, substituting L for N − t in (38), for pseudorandom testing

    E[C_L] = 1 − (1/M) Σ_{k=1}^{N} [C(N−L, k)/C(N, k)] h_k.    (39)

A reformulation of (39) produces ΔE[C_L], the marginal expected fault coverage, i.e., the expected coverage change with the application of the Lth pseudorandom vector, after L − 1 vectors have been applied. From the identity relation (37) and (33),

    ΔE[C_L] = E[C_L] − E[C_{L−1}] = (E[G_L])/M
            = (1/M) Σ_{k=1}^{N} [k/(N − L − k + 1)][C(N−L, k)/C(N, k)] h_k.    (40)

Using the approximation technique of (3) with the assumption N − L ≫ k,

    ΔE[C_L] ≈ (1/M) Σ_{k=1}^{N−L+1} [k/(N − L)](1 − L/N)^k h_k.    (41)

B. Test Confidence

The test length required to detect a particular fault in a pseudorandom test is derived in [3] but its form is not readily manipulable. Instead, alternate forms of solution as well as an accurate approximation are derived here. These forms of the general equations allow the relationships between test length, test confidence, and fault detectability to be fully examined for the first time.

The test confidence c_L, or probability that the fault will be detected in a test of length L, can be found using (1)

    c_L = Σ_{t=1}^{L} p_t = Σ_{t=1}^{L} C(N−t, k−1)/C(N, k)
        = [1/C(N, k)] Σ_{i=N−L}^{N−1} C(i, k−1)
        = 1 − C(N−L, k)/C(N, k).    (42)

An alternate way of expressing the same relation, equivalent to (42), is to interpret c_L as one minus the probability that a test of length L does not detect the fault [3], i.e.,

    c_L = 1 − Pr{Y = 0} = 1 − C(N−k, L)/C(N, L).    (43)

C. Expected Test Length

Using (1), the expected test length E[L_i] for a particular fault F_i of detectability k in pseudorandom testing is

    E[L_i] = Σ_{t=1}^{N} t · p_t = [1/C(N, k)] Σ_{t=1}^{N} t · C(N−t, k−1)
           = [1/C(N, k)] Σ_{i=k−1}^{N−1} (N − i) C(i, k−1)
           = (N + 1) − [1/C(N, k)] Σ_{i=k−1}^{N−1} (i + 1) C(i, k−1)
           = (N + 1) − k C(N+1, k+1)/C(N, k)
           = (N + 1)/(k + 1).    (44)

D. Expected Fault Coverage = Weighted Test Confidence

Using (2) and (13), the relationship between expected fault coverage and weighted test confidence for pseudorandom testing can be found. From (2),

    E[C_L] = 1 − (1/M) Σ_{k=1}^{N} [C(N−L, k)/C(N, k)] h_k
           = (1/M) Σ_{k=1}^{N} [1 − C(N−L, k)/C(N, k)] h_k
           = (1/M) Σ_{i:F_i∈F} c_{Li}
           = T_L.    (45)

The derivation of expected fault coverage implicitly assumes all faults in the fault set are equiprobable since the p_{ti}'s in (31) are given equal weight. Thus, the expected fault coverage E[C_L] and the weighted test confidence T_L over the fault set are identical when all faults have equal probability of occurrence, i.e., where Pr{F_i|F} = 1/M.

ACKNOWLEDGMENTS

The authors would like to thank Prof. J. Wakerly and Prof. R. David for their comments and suggestions concerning this research. The authors also gratefully acknowledge the assistance of Dr. J. Hughes of the Center for Reliable Computing and the helpful comments of the referees.

REFERENCES

[1] F. Brglez, P. Pownall, and R. Hum, "Accelerated ATPG and fault grading via testability analysis," in Proc. Int. Symp. Circuits and Syst. (ISCAS), Kyoto, Japan, 1985, pp. 695-698.

[2] C. Chin and E. J. McCluskey, "Weighted pattern generation for built-in self test," Stanford Univ., Center for Reliable Comput., Tech. Rep. 84-249, Aug. 1984.

[3] C. Chin and E. J. McCluskey, "Test length for pseudorandom testing," IEEE Trans. Comput., vol. C-36, pp. 252-256, Feb. 1987.

[4] W. H. Debany, Jr., "Probability expressions, with applications to fault testing in digital networks," M.S. thesis, Rome Air Development Center, Griffiss Air Force Base, NY, RADC-TR-83-83, Mar. 1983.

[5] E. M. Eichelberger and E. Lindbloom, "Random pattern coverage enhancement for LSSD logic self-test," IBM J. Res. Develop., vol. 27, pp. 265-272, May 1983.

[6] S. W. Golomb, Shift Register Sequences, rev. ed. Laguna Hills, CA: Aegean Park, 1982.

[7] J. L. Hughes and E. J. McCluskey, "Multiple stuck-at fault coverage of single stuck-at fault test sets," in Proc. IEEE Int. Test Conf., Nov. 1986.

[8] Y. K. Malaiya and S. Yang, "The coverage problem for random testing," in Proc. IEEE Int. Test Conf., Nov. 1984, pp. 237-245.

[9] E. J. McCluskey, Logic Design Principles: With Emphasis on Testable Semicustom Circuits. Englewood Cliffs, NJ: Prentice-Hall, 1986, ch. 10.

[10] J. P. Roth, W. G. Bouricius, and P. R. Schneider, "Programmed algorithms to compute tests to detect and distinguish between failures in logic circuits," IEEE Trans. Comput., vol. C-16, pp. 567-589, Oct. 1967.

[11] J. Savir, G. Ditlow, and P. Bardell, "Random pattern testability," in Dig. Papers, IEEE 13th Ann. Int. Symp. Fault-Tolerant Comput., June 1983, pp. 80-89.

[12] J. Savir and P. Bardell, "On random pattern test length," IEEE Trans. Comput., vol. C-33, pp. 467-474, June 1984.

[13] J. J. Shedletsky and E. J. McCluskey, "The error latency of a fault in a combinational digital circuit," in Dig. 1975 Int. Symp. Fault-Tolerant Comput., June 1975, pp. 210-214.

[14] J. J. Shedletsky, "Random testing: Practicality versus verified effectiveness," in Proc. 7th Int. Conf. Fault-Tolerant Comput., June 1977, pp. 175-179.

[15] K. S. Trivedi, Probability and Statistics with Reliability, Queuing, and Computer Science Applications. Englewood Cliffs, NJ: Prentice-Hall, 1982.

[16] T. Williams, "Test length in a self-testing environment," IEEE Des. Test, pp. 59-63, Apr. 1985.

Kenneth D. Wagner received the B.Eng. (Honors) degree in electrical engineering from McGill University, Montreal, P.Q., Canada, in 1979, and the MSEE and Ph.D. degrees from Stanford University, Stanford, CA, in 1980 and 1986, respectively.

He received a 4-year NSERC postgraduate scholarship from the Government of Canada, and worked for Stanford University's Center for Reliable Computing from 1983 to 1986. His research interests are the timing and testing of high-speed systems, including clock distribution and tuning, random testing, and design for testability. From 1981 through 1983 he worked for Amdahl Corporation as a Systems Design Engineer. He is currently an Advisory Engineer with the EDS VLSI Design Rules Control Department, IBM, Poughkeepsie, NY.

Dr. Wagner is a member of the IEEE Computer Society and Sigma Xi.

Cary K. Chin received the BSEE (with distinction) and MSEE degrees in 1982 and 1984, respectively, from Stanford University, Stanford, CA.

Currently he is a Senior Engineer at Integrated CMOS Systems, Sunnyvale, CA. His areas of specialty include design for testability and computer-aided testing. Prior to joining ICS he was a Senior Test Engineer at Data General Corporation, Sunnyvale, CA. He is also currently continuing research in random and pseudorandom testing at the Center for Reliable Computing, Stanford University, Stanford, CA.

Edward J. McCluskey (S'51-M'55-SM'59-F'65) received the A.B. degree (summa cum laude) in mathematics and physics from Bowdoin College, Brunswick, ME, in 1953 and the B.S., M.S., and Sc.D. degrees in electrical engineering in 1953, 1953, and 1956, respectively, from the Massachusetts Institute of Technology, Cambridge, MA.

He worked on electronic switching systems at the Bell Telephone Laboratories from 1955 to 1959. In 1959, he moved to Princeton University, Princeton, NJ, where he was Professor of Electrical Engineering and Director of the University Computer Center. In 1966, he joined Stanford University, Stanford, CA, where he is Professor of Electrical Engineering and Computer Science, as well as Director of the Center for Reliable Computing. He has published several books and book chapters. His most recent book is Logic Design Principles with Emphasis on Testable Semicustom Circuits (Prentice-Hall, 1986). Book chapters include Design for Testability in Fault-Tolerant Computing, D. K. Pradhan, Ed., and chapters on logic design in the Van Nostrand Reinhold Encyclopedia of Computer Science and Engineering and in Reference Data for Engineers, E. C. Jordan, Ed. He is President of Stanford Logical Systems Institute, which provides consulting services on fault-tolerant computing, testing, and design for testability.

Dr. McCluskey served as the first President of the IEEE Computer Society and as a member of the AFIPS Executive Committee. He has been General Chairman of the Computer Architecture Symposium, the Fault-Tolerant Computing Symposium, and the Operating Systems Symposium. He was Associate Editor of the IEEE TRANSACTIONS ON COMPUTERS and the Journal of the ACM. He is a member of the Editorial Board of the IEEE Design and Test Magazine. In 1984, he received the IEEE Centennial Medal and the IEEE Computer Society Technical Achievement Award in Testing.
