
Review of built-in test methodologies for gate arrays

K.A.E. Totton, M.A., M.Sc.

Indexing terms: Circuit theory and design, Integrated circuits, Testing, Semicustom gate arrays

Paper 3697E (C2, E3) received 22nd June 1984

The author is with the British Telecom Research Laboratories, Martlesham Heath, Ipswich IP5 7RE, England

Abstract: The paper presents a review of current and proposed test methodologies for semicustom gate arrays. The necessity of high quality testing is emphasised by considering some of the hazards and penalties associated with poor testability. The usefulness and limitations of testability analysis programs are then considered. A built-in test is introduced as an attractive alternative to conventional approaches based on automatic test pattern generation for highly structured circuits. This test technique is shown to offer significant benefits, including reduced test data volume, improved test quality, and easier maintenance testing. The advantages and disadvantages of three built-in test implementations for gate arrays are discussed. First, an architecture which combines an ad hoc design for testability with a comprehensive on-chip maintenance system is reviewed. This is followed by a presentation of an LSSD-based pseudorandom self-test and the associated test problems. Finally, an exhaustive test based on a similar architecture is shown to achieve a high quality test with guaranteed fault coverage. In conclusion, the future direction of test strategy development is predicted, in the context of increasing integration density and the convergence of 'semicustom' and 'full-custom' design styles.

1 Introduction

The essence of semicustom design is that it provides the digital systems designer with a fast, low-cost route to silicon. It might therefore be expected that computer-aided design (CAD) would play a vital role at all stages of semicustom chip development, especially in the context of the continually increasing integration complexity made possible by advances in silicon processing technology. In practice, however, with the exception of some design systems developed by large computer manufacturers with captive processing capabilities, the penetration of CAD has been patchy. The most notable benefits have been in the areas of automated layout, and design verification by logic simulation. In the case of gate arrays, the tasks of placement and routing are facilitated by the regular replication of array cells and routing channels. The type of design usually implemented on a gate array has closely resembled a printed circuit board (PCB) design using standard TTL integrated circuits (ICs). The major challenge to test program generation arises from the greatly reduced access to internal circuit nodes as one moves from a PCB to a single chip. The traditional approach has been to utilise the design verification stimulus as an input to a fault simulator and to add extra tests to detect outstanding faults. With increasing complexity, this tedious manual process has tended to become a major bottleneck in the gate array development cycle, diminishing the advantages inherent in semicustom design. The alternative, automatic test pattern generation (ATPG), has long been recognised as one of the most intractable areas of design automation (DA). Certainly, test generation programs based on Roth's D-algorithm [1] have been in use for many years, but their success has been limited to highly structured designs, with the generator operating on the combinational portion of the circuit.

Since VLSI technology offers an abundance of gates capable of high speed operation, the use of dedicated hardware on-chip to facilitate testing is an attractive proposition, especially for low-volume application-specific circuits. In particular, built-in self test (BIST) offers considerable promise of easing many chip test problems; further benefits accrue from the use of such testable components at the higher levels of assembly.

In this paper, attention will be focused on some recently announced BIST strategies applicable to gate arrays, and related testing issues will be discussed.

2 Testability problems

Tonge [2] has drawn attention to some of the penalties associated with semicustom IC development if the circuit testability is not given adequate consideration. The main points are as follows:

(a) long test program development timescale
(b) high test program development cost
(c) long program execution time at production
(d) inadequately tested, thus less reliable devices
(e) extreme difficulty in board or subsystem testing.

The cost of fault simulation is greatly influenced by the number of vectors in the test program, so that lengthy test programs are to be avoided, if possible. If this cost is likely to become excessive, one might compromise fault coverage, i.e. accept a test program which achieves less than 100% coverage of single stuck-at faults. There is then the likelihood that some devices will be accepted and enter service with faults which may be activated only under certain conditions, leading to troublesome intermittent errors. Since one study [3] has suggested that the costs of fault detection increase by roughly an order of magnitude at each step from chip to board to subsystem to field diagnosis, a compromise in test program quality might well be a false economy. The best solution, however, is to avoid the problems associated with lengthy test programs by adequate design for testability (DFT).

The use of increasingly sophisticated automatic test equipment (ATE) has been necessitated by the steady growth in the complexity of gate arrays. The cost of such ATE must be amortised over a fairly short lifetime, so that there is some motivation to design integrated circuits which are testable by fairly simple ATE. High pin count and lengthy, unstructured test programs place heavy demands on ATE since in the latter case the programming facilities, e.g. looping, may not be exploited. Wafer probing problems also arise with circuits having high pin count; the probe card becomes congested and reliable probing of a large number of pads is difficult.


3 Testability improvement

The need to address the issue of testability in digital circuits has motivated the development of a number of testability analysis tools [4, 5]. For a small computational effort one may obtain useful information regarding the testability of a given design and, in particular, draw on a range of DFT techniques [2] to remove any testability blackspots. Some programs, e.g. CAMELOT [5], will automatically insert observation points with a view to optimising the overall testability of the circuit.

Most testability analysers operate on a gate level description of the logic design. To achieve run times which are small in comparison with test generation, testability analysis algorithms usually sacrifice accuracy, aiming for a complexity which is approximately linear in the number of gates. Agrawal and Mercer [6] have drawn attention to the fact that one must exercise discretion in interpreting the data from a testability measure, due to the approximate nature of the analysis.

The accurate analysis of circuit testability is, in general, a potentially complex (NP-complete) problem even in combinational circuits, due to the presence of reconvergent fanout and redundancy [7]. To simplify the computations, most measures do not handle reconvergent fanout accurately; this will adversely affect the credibility of controllability (Cy) and observability (Oy) ratings for such circuit nodes. Savir [7] has recently pointed out that good Cy and Oy do not guarantee good testability, the point being that the testability of a fault is related to the cardinality of the intersection of the Cy and Oy test sets, i.e. the proportion of possible test vectors which simultaneously control and observe the node. Most measures assume some simple relationship between Cy, Oy and testability.
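Savir's point is easily demonstrated by exhaustive enumeration on a small example. The following sketch (a hypothetical three-input circuit, not drawn from the referenced work) computes the set C of vectors which control an internal node n to 1, the set O of vectors which make n observable at the output, and their intersection; the random-pattern detection probability of n stuck-at-0 is |C ∩ O|/2^k, which may be small even when |C| and |O| are individually large.

```python
from itertools import product

# Hypothetical 3-input circuit used only to illustrate the point:
# internal node n = a XOR b, primary output y = n AND c.
def node(a, b, c):
    return a ^ b

def output(a, b, c):
    return node(a, b, c) & c

control = set()   # vectors setting n = 1 (required to expose n stuck-at-0)
observe = set()   # vectors for which a change at n is visible at y
for v in product((0, 1), repeat=3):
    if node(*v) == 1:
        control.add(v)
    # observable if flipping n changes the output (fault-injection check)
    if output(*v) != (node(*v) ^ 1) & v[2]:
        observe.add(v)

both = control & observe
print(len(control), len(observe), len(both))     # 4 4 2
print("detection probability:", len(both) / 8)   # 0.25
```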

In spite of the above limitations, testability analysis tools have given useful service in highlighting the major potential obstacles to successful test program generation.

4 Test generation for sequential circuits

It was mentioned above that ATPG has been successfully applied to combinational logic circuits. Indeed, a number of refinements to the basic concepts in the D-algorithm have been made, leading to newer and faster algorithms such as PODEM [8] and FAN [9].

Little success, however, has been enjoyed with general sequential circuits. Early efforts to generate tests for synchronous circuits, using iterative arrays to model the circuit states at consecutive time intervals [10], were hampered by several major problems. For example, the length of a sequential test is not known a priori, and can grow exponentially with the number of flip-flops present. Also, to keep computation time within reasonable limits, the searching processes associated with error propagation and consistency checking were often constrained by user-imposed limits. These limits resulted in failure to generate tests for certain faults. The situation with asynchronous circuits was even worse: because of the possibility of unstable intermediate states in transitions between stable states, each time frame might require expansion into several time frames to account for the unstable states.

By contrast, the competent test programmer understands how a circuit functions macroscopically. He recognises hierarchy and discovers how to control and observe the major subcircuits. He draws upon standard techniques for testing memory, counters, shifters and ALUs, based upon their function, rather than structure. If only a gate level description of a circuit is presented to the test generator, these important constraints on the test search procedure are lost.

The recently announced HITEST test generation system [11] tackles the problem of representation and utilisation of high level knowledge of circuit function to expedite test generation. The circuit is partitioned into its combinational and sequential parts: the test generation algorithms operate on the combinational part, thus avoiding searches through multiple time frames. A knowledge base holds information relevant to the control and observation of the sequential elements so that the test generator can generate waveforms to propagate fault effects to primary outputs. Parameterised knowledge items, pertinent to functional testing of circuit building blocks such as counters and shifters, are also held in the knowledge base. Mechanisms also exist to add supplementary test generation objectives if a substantial region of the circuit is in the unassigned state, and a fault simulator handles the propagation of fault effects. The design of the system permits the addition of new algorithms and heuristics for test generation.

Interactive test generation using a system such as HITEST as an 'expert' assistant probably holds the greatest promise for economic test generation for general sequential circuits.

5 Structured approach

The adoption of a methodology which will guarantee a testable design is an alternative to ad hoc design for testability. It is noteworthy that the large computer manufacturers have been leaders in this field, with IBM's level sensitive scan design (LSSD) among the most publicised [12]. Structured design for testability has significant advantages in the design automation environment. The test generation problem is reduced to that of generating tests for combinational logic, and Cy and Oy of all internal nodes are ensured by the scan path feature, whereby internal latches are configured into one or more shift registers for the purposes of loading stimuli and observing responses. The 'level sensitive' attribute ensures that the logic is intrinsically race- and hazard-free, thus removing any need for accurate timing simulation. The dependency on AC parameters such as clock edge rise and fall times and minimum propagation delays is greatly reduced. These parameters are difficult to simulate in the design environment and difficult to monitor in the manufacturing environment.
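The scan path mechanism is easily captured in a behavioural model. The sketch below is illustrative only (real LSSD uses two-phase, level-sensitive shift register latches rather than the simple register modelled here); it shows the three test phases: serially loading a stimulus, pulsing the system clock to capture the combinational response, and serially unloading that response for comparison.

```python
# Minimal behavioural model of a scan path (illustrative only).
class ScanChain:
    def __init__(self, length):
        self.latches = [0] * length

    def shift(self, scan_in_bit):
        """One shift clock: a bit enters one end, another leaves the other."""
        scan_out = self.latches[-1]
        self.latches = [scan_in_bit] + self.latches[:-1]
        return scan_out

    def load(self, vector):
        """Serially load a full stimulus (len(vector) shift clocks)."""
        for bit in reversed(vector):
            self.shift(bit)

    def capture(self, comb_logic):
        """One system clock: latches capture the combinational response."""
        self.latches = list(comb_logic(self.latches))

    def unload(self):
        """Serially shift out the captured response, last latch first."""
        return [self.shift(0) for _ in range(len(self.latches))]

# Example: a 4-bit chain around a toy combinational block.
chain = ScanChain(4)
chain.load([1, 0, 1, 1])
chain.capture(lambda s: [1 - s[-1]] + s[:-1])   # rotate and invert bit 0
print(chain.unload())
```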

There are, however, certain costs and penalties associated with this great improvement in the testability of complex systems. Logic designs must be synchronous, and no feedback is permitted in logic between the retiming elements (latches). Depending on the technology, an LSSD latch may be two to three times more complex than the basic latch it replaces, leading to a penalty in silicon area of up to 20%. This problem is acute in gate arrays if a design is highly sequential. One approach to reducing this overhead is to replicate custom-designed shift register latches on the gate array [13]: the scan path connections may then be prewired so as to avoid interference with the customising routing.

Although the LSSD methodology simplifies on-chip timing, a problem sometimes arises when interfacing to asynchronous circuits, particularly if narrow pulses must be detected. As a possible solution, a special 'glitch-catching' input primitive might be designed to expand narrow pulses so that they can be detected by the slower synchronous circuit. Alternatively, oversampling could be performed by running the receiving chip at a higher clock frequency.

The volume of test data for complex designs places heavy demands on automatic test equipment (ATE) in terms of both storage and speed. If costly ATE must be used, this will be reflected in the final price of the parts. The problem is aggravated in LSSD because there is a very large depth of data associated with the scan-in and scan-out pins and comparatively little for the normal I/O pins. Special testers have been designed for LSSD circuits; the design of a hardware enhancement for a general-purpose ATE would be a less costly alternative.

6 Built-in test for gate arrays

6.1 Built-in test
The alternative to testing with an ATE operating under stored program control is to provide hardware on-chip to stimulate the logic and to observe the response. This approach is particularly desirable for VLSI systems testing since there is no shortage of gates and test access is greatly reduced.

Built-in test (BIT) aims to solve a number of problems associated with conventional ATE-based testing.

6.1.1 ATPG costs: Goel [14] has investigated the costs associated with ATPG for LSSD designs in the context of increasing gate count G. His findings may be summarised as follows:

(a) test volume grows linearly with gate count G
(b) parallel fault simulation costs grow as G^3
(c) deductive fault simulation costs grow as G^2
(d) minimum test pattern generation costs grow as G^2
(e) total test application time for LSSD structures grows as G^2.

Note that in (d) there is the supposition of a linear growth in the complexity of the logic cones feeding each shift register latch input: this is probably unrealistic. On the other hand, it is assumed that an ideal TPG algorithm exists, which can determine tests without backtracking.

One way of reducing test generation costs is to exhaustively test subcircuits [15]. The advantage of this technique is that the logic of the subcircuit may be ignored completely; the main disadvantage is that the test data volume greatly exceeds that required to detect all single stuck-at faults in the subcircuit.

6.1.2 Test data volume: The difficulties associated with test data volume have been mentioned above: with built-in test, the volume of test data crossing the ATE interface can be greatly reduced if test vectors are generated and circuit responses compressed on-chip. The former function might be performed using linear feedback shift registers (LFSRs) and the latter by signature analysis [16] techniques. In this case, the functions required of the ATE are very simple, namely clock generation, control and final signature comparison.
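As an illustration of the pattern generation side, the following sketch models a small Fibonacci-type LFSR (the 4-bit width, seed and feedback taps are illustrative choices, not taken from any referenced design). With taps realising a primitive polynomial, the register cycles through all 2^4 - 1 nonzero states before repeating, so a free-running clock is essentially the only stimulus the ATE need supply.

```python
def lfsr_patterns(seed=0b1001, width=4, taps=(3, 0)):
    """Generate pseudorandom patterns from a Fibonacci-type LFSR.

    This tap set realises a primitive polynomial of degree 4, giving a
    maximal-length (2^4 - 1 state) sequence."""
    mask = (1 << width) - 1
    state = seed
    while True:
        yield state
        feedback = 0
        for t in taps:                 # XOR of the tapped bits
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & mask

gen = lfsr_patterns()
seq = [next(gen) for _ in range(15)]
print([format(v, "04b") for v in seq])
print(len(set(seq)))   # 15: every nonzero state appears before repetition
```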

6.1.3 Test quality: Since the tester technology frequently lags that of the device under test (DUT), it is often impossible to test the DUT at its full operating speed. This has led to the concept of DC testing, and inevitably to the expectation that some devices, subjected to a test at reduced clock rate, may fail during system operation. Built-in test may be much faster than ATE based testing if the test vectors are generated and responses compressed on-chip, thus permitting the detection of certain delay faults as well as permanent stuck faults.

6.1.4 Maintenance testing: The inclusion of self-test hardware on a chip considerably eases the problem of system maintenance testing. If the chip can be logically isolated from the rest of the subsystem, it may be tested at any subsequent stage, e.g. by an on-board test processor. A board containing such self-testable chips could be put into a concurrent self-test mode, thus drastically reducing the total test time required.

6.1.5 Boundary scan: To make a chip self-testable usually requires the inclusion of primary inputs and outputs in a scan register, 'boundary scan' [17]. This feature facilitates the testing of logic situated between the internal latches of the circuit and primary I/O. Note that a boundary latch might be bypassed during normal operation, if desired, using suitable multiplexing. Fig. 1 illustrates the typical architecture.

Fig. 1 Boundary scan

Certain additional benefits accrue from boundary scan, notably simplified parametric testing, because the states of the output pins are readily controlled. For board-level test, the boundary latches may be used to stimulate other logic, observe responses, and check interconnect integrity [18]. Wafer probe is also facilitated, since the number of pins required to test the complete circuit is greatly reduced, a significant advantage in large semicustom chips.

6.2 Evaluation of built-in test schemes
In evaluating any proposed BIT scheme, McCluskey has raised the following considerations [15]:

(a) Design difficulty: if the implementation of BIT circuitry is difficult, the technique will probably not gain acceptance.
(b) Is the system automatable? To guarantee generality, a formal technique suitable for a DA system should be specified.
(c) Performance penalty: BIT circuitry should not have a significant adverse effect on the performance of the function to which it is added.
(d) Cost penalty: BIT circuitry should require minimal additional circuitry or chip area.

7 Self-testing gate arrays

In this Section a number of approaches to built-in self test for gate arrays are reviewed.


7.1 CDC 6k gate array (VLSI-6000)
Control Data Corp (CDC) has published information on a novel 6000 gate CMOS gate array which is specifically designed for ease of testability [19]. This description concentrates on the test strategy adopted and the capabilities of the on-chip maintenance system (OCMS). Fig. 2 gives a high-level view of the main functional components of the array.

Fig. 2 CDC 6k gate array (inputs, outputs, test strobe, test clock, clock holdoff, additional diagnostic information)

7.1.1 Functional self test: Random pattern testing is used to test the logic array. The source of these patterns is a type of BILBO register [20] (built-in logic block observation) whose length is 159 bits. Fig. 3 shows an example of a BILBO.

Fig. 3 BILBO register (mode control, parallel data inputs Z, clock CLK, serial data out SDO, parallel data outputs Q)

This test generator is a multimode device which can be controlled to operate as an LFSR sourcing pseudorandom patterns and may be initialised with a suitable starting vector or 'seed'. It may also accept data on its parallel input and shift serially. In the self-test mode, the inputs to the logic array are stimulated by different bits in the BILBO register, so that a new random vector is applied on each clock cycle. The test may proceed at full system speed. The response from the outputs of the logic array is compressed using an 89-bit output register in the check-sum mode, see Fig. 4.

Fig. 4 Check-sum register (data and clock enable from the logic network; test output)

Before self test can commence, the various registers and the array must be initialised, the correct seed value being loaded into the input register.

In general, sequential logic is not particularly susceptible to random pattern testing, so that the chip logic must be subjected to testability analysis and, if necessary, logic added to increase the Cy and Oy of internal nodes [2]. The designer must demonstrate, using the CAD tools, that at least 95% coverage of stuck faults has been obtained using 100 random patterns [34]. If this is done, then there is every expectation that coverage will approach 100% when the full self-test (utilising of the order of 10^6 vectors) is run.

Since a new test vector is applied on each clock cycle, this test will typically take less than 1 second. By contrast, in LSSD the vectors must be serially shifted into position before the test can be applied. Nevertheless, considerable overhead in added logic may be required to gain the necessary increase in random pattern testability in highly sequential designs. Spare latches in the input and output registers may be used to control and observe the internal nodes in the array.

7.1.2 System check-sum mode: A check sum may be generated by the output register while the system runs normally, since the maintenance registers are in parallel with the data path. Thus a diagnostic program could be run and the check sum evaluated to make sure that the chip is functioning normally.

7.1.3 Interconnect testing: The provision of input and output registers associated with the signal pins facilitates interchip wiring checks. A test operand is loaded into the output register of one chip and the signals are clocked into the input registers of receiving chips according to the board interconnection list. Very few operands are required to test for opens, shorts and stuck faults in the board wiring [18].

7.1.4 Other OCMS features: The OCMS is designed to support board-level and system self test. The various modes of operation are controlled by a serial-in parallel-out (SIPO) control register which may be loaded by a test processor. The functional self test may be run at any stage from wafer probe to field maintenance. Wafer probe is particularly facilitated since only 18 of the possible 172 pins are required to run this test. Furthermore, I/O parametric testing is simplified since the output buffers have an associated register which may be loaded serially.

The OCMS occupies approximately 12% of the chip area [19]. Note that there is also a design-specific overhead associated with Cy and Oy improvement, in terms of added gating and wiring. This could be quite significant for a circuit of high sequential depth.

7.2 LSSD-based pseudorandom self-test
The combination of an LSSD architecture and signature analysis techniques yields a self-test scheme suitable for gate arrays [17]. The configuration used is shown in Fig. 5.

Fig. 5 LSSD self test (random pattern generation feeding an LSSD network with boundary scan; signature analyser with diagnostic output; PI primary input; PO primary output; SDI/O scan data in/out)

The random pattern generator is a linear feedback shift register (LFSR) of suitable length. The actual number of bits required depends on the number of random test vectors to be applied. For a maximal length sequence generator of N bits, a sequence of length 2^N - 1 bits is available before the pattern repeats.

To apply a test, the shift clocks are activated to scan in a vector from the pattern generator. The system clocks are then cycled and the response is scanned into the signature analyser, while the next test vector is scanned in. The signature analyser is typically an LFSR of the type described by Hewlett Packard [16]. With all such data compression techniques, some faulty sequences may yield the good machine signature, but the probability of this occurrence may be made arbitrarily small by increasing the number of bits in the signature analyser.
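The compression side is equally simple to model. The sketch below folds a response bit stream into an n-bit single-input signature register (the width and tap set are illustrative, not those of a particular instrument). Because the register is linear and its state update invertible, any single-bit error in the stream is guaranteed to change the signature; for long streams, a random faulty stream aliases to the good signature with probability of about 2^-n.

```python
def signature(bitstream, n=16, taps=(15, 10, 8, 3)):
    """Compress a serial response stream into an n-bit signature.

    Single-input signature register: each response bit is XORed with
    the feedback taps and shifted in. The tap set is illustrative."""
    mask = (1 << n) - 1
    sig = 0
    for bit in bitstream:
        feedback = bit
        for t in taps:
            feedback ^= (sig >> t) & 1
        sig = ((sig << 1) | feedback) & mask
    return sig

good = [1, 0, 1, 1, 0, 0, 1, 0] * 100   # stand-in for a fault-free response
faulty = list(good)
faulty[37] ^= 1                          # single-bit error in the response
print(hex(signature(good)), hex(signature(faulty)))   # signatures differ
```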

7.3 Random pattern testability
The effectiveness of testing LSSD gate arrays using random patterns has been demonstrated [12]. However, random pattern testing (RPT) encounters difficulties when a definite fault coverage must be guaranteed. Shedletsky [33] has shown that, in the absence of expensive fault simulation, random testing must rely on statistical measures of fault coverage which, theoretically, are less efficient than exhaustive testing. The following questions must be answered if test quality is to be guaranteed when using this method:

(a) Can random pattern resistant faults be located for a modest computational effort?
(b) How may a design be modified to make it RP testable?
(c) How long should the test sequence be, in order to essentially guarantee the coverage of all postulated faults?
(d) Can diagnostic techniques be devised which circumvent the loss of information due to data compression?

Each of these issues will be considered in turn.


7.3.1 Hard fault location: A fault in a combinational circuit may be considered 'hard' if its detection probability for RPT is less than some predetermined threshold. This detection probability threshold may be calculated by considering the test confidence required, e.g. 99%, and the maximum permissible test time. The latter parameter will depend on the normal chip operating frequency, the maximum shift speed and the number of LSSD latches in the scan loop.

Conventional testability measures such as SCOAP [4] are insufficiently accurate in their handling of logic networks to be useful for this purpose. They offer no solution to the complication of reconvergent fanout, so that if this feature is present in a logic network the ratings will be erroneous.

An analytic method of determining the location of hard faults has been published by Savir et al. [22]. Their approach is based on some earlier work by Parker and McCluskey on signal probabilities [23]. As shown in Fig. 6, the detection probability of a target fault may be shown to be equivalent to a signal probability on an auxiliary gate.

The faults considered for a given circuit are those on each logic cone input and fanout branch, since these dominate all the other faults in the circuit. A cone is a combinational network controlling the input of a shift register latch (SRL).

Fig. 6 Calculating detection probability using auxiliary gate

For each cone input the 1-probability (i.e. the probability that the logic value is 1) is 1/2, given unbiased random tests. Accurate signal probabilities may then be calculated for all tree nodes (no reconvergent fanout). The next step is to turn the circuit into a tree by cutting reconvergent fanout branches. Signal probability bounds (e.g. [0, 1]) are assigned to the cut branches and propagated to all nontree nodes using tree formulae. In each case, the bounds enclose the true value, but sometimes tighter bounds are possible if the inversion parity of the reconvergent paths is exploited. Depending on the ordering of the cuts, various values will be computed for the bounds on a particular node and so a final tight bound may be obtained.
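The tree-formula step is simple to state in code. The sketch below (an illustrative cone, not an example from Reference 22) composes exact 1-probabilities bottom-up through fanout-free logic, where input independence makes the formulae exact, and then evaluates the detection probability of an internal stuck-at fault as the probability of the auxiliary condition "node controlled AND fault effect observed".

```python
from math import prod

# Tree formulae: exact on fanout-free logic with independent inputs.
def p_and(*ps):
    return prod(ps)

def p_or(*ps):
    return 1 - prod(1 - p for p in ps)

def p_not(p):
    return 1 - p

# Illustrative cone: n = a AND b, y = n OR (NOT c); unbiased inputs.
pa = pb = pc = 0.5
pn = p_and(pa, pb)            # P(n = 1) = 0.25
py = p_or(pn, p_not(pc))      # P(y = 1) = 0.625

# Detection probability of "n stuck-at-0": n must be 1, and the OR gate
# propagates n only when its other input (NOT c) is 0, i.e. c = 1.
# a, b and c are independent, so the conjunction is a simple product.
p_detect = p_and(pn, pc)
print(py, p_detect)           # 0.625 0.125
```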

7.3.2 Improving random pattern testability: If a fault is listed as hard, it is likely that it will require a test sequence approaching the length of an exhaustive test for a reasonable detection confidence level. Hard faults may be made more testable using certain circuit modifications.

Observation points: It is generally found that the hardest faults to detect are those associated with the longest sensitised paths [24]. The detectability of these faults may be increased by the judicious placement of extra observability test points (in LSSD, shift register latch inputs). A number of such nodes may be fed to the inputs of a parity tree circuit to minimise the circuit overhead.

Partitioning: Simple large cones, such as N-input AND trees, constitute a particular problem for random testing. The probability of detecting a stuck-at-1 fault on a particular input of an AND function having 2N inputs is 2^-2N. For large N, an unacceptably large number of random vectors might be required.

To improve the detection probabilities, the partitioning modification shown in Fig. 7 has been suggested [25].

Fig. 7 Large cone partitioning

The costs involved include the extra NAND gates and the SRL, and the introduction of a new mode of operation via the control signal. However, the probability of detecting input faults has now been substantially increased: each partition presents only N inputs to the random patterns, so the detection probability becomes of the order of 2^-N rather than 2^-2N.

Contiguous inputs: As mentioned previously, the placement of logic cone inputs contiguously on the scan path is one simple method of ensuring an exhaustive test. Note that the 'all zero' vector will not appear, since this corresponds to the LFSR lock-up state. This technique would be useful when testing a circuit such as a multiplexor, where a random test is theoretically less efficient than an exhaustive test. The number of cone inputs must not be greater than the number of bits in the LFSR.

7.3.3 Test length: The required length of random test to detect all faults in a combinational circuit with high probability is dictated by the presence of a small number of hard faults [26]. If an analytical method is used to determine these hard faults, a testability profile of the least detectable faults may be obtained. Savir and Bardell [26] introduce the following simplifying assumptions:

(a) All test sets for the hard faults are disjoint, so that the test length computed represents an upper bound.
(b) For practical purposes, faults with detection probabilities greater than twice that of the worst fault can be ignored.
(c) All hard fault detection probabilities are equal to the worst fault detection probability.

With these pessimistic simplifications, a simple relationship can be derived yielding L, the upper bound test sequence length, in terms of the test confidence (confidence of testing the worst-case fault), the number of hard faults and the detection probability of the hardest fault; a sketch of this style of calculation follows.
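One simple form of such a relationship, sketched here under the stated assumptions and not necessarily the exact expression of Reference 26: if each of k hard faults has worst-case detection probability p, and their detections are treated as independent events, then demanding confidence c that all k are detected gives (1 - (1 - p)^L)^k >= c, i.e. L >= ln(1 - c^(1/k)) / ln(1 - p).

```python
from math import ceil, log

def random_test_length(c, k, p):
    """Upper-bound random test length: confidence c of detecting all k
    hard faults, each with worst-case detection probability p."""
    return ceil(log(1 - c ** (1 / k)) / log(1 - p))

# e.g. 10 hard faults with detection probability 2^-12, 99% confidence:
print(random_test_length(0.99, 10, 2 ** -12))   # roughly 28000 vectors
```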

7.3.4 Diagnosis and good signature derivation: The provision of boundary scan facilitates the location of faults to the failing chip. It may, however, be necessary to call on further diagnostic procedures during chip debugging. One reported approach involves the storage of intermediate signatures during the good circuit simulation to determine the final known good device (KGD) signature [27]. If signatures are stored at intervals of 100 test vectors, one may step through the blocks to find the block containing the first failure. Thereafter standard LSSD diagnostic techniques may be applied, such as post-test fault simulation [28].

All self-test schemes which rely on the application of large numbers of test vectors exacerbate the problems of KGD signature derivation. Zero-delay good circuit simulation over the full test set is possible, particularly if special purpose software and even hardware is available, although this may not be an affordable option. Alternatively, one might opt for the derivation of a signature from a simulated subset of the test vectors: due to the nature of random testing, fault coverage quickly reaches 90%, so that a short test should eliminate most failing dice. Thereafter the full test may be performed with the confidence that a number of dice yielding the same signature are good, and hence the KGD signature is obtained.

7.3.5 Observations: IBM have carried out some interesting comparisons between the test quality of deterministic and pseudorandom test patterns, for a wide range of LSSD gate arrays [27]. The results showed that the latter detects faults which escaped the conventional test, and further analysis of these led to a classification into three categories:

(a) net-to-net short faults not modelled for conventional ATPG
(b) pattern dependent node loading faults
(c) stuck faults not tested by deterministic patterns but covered by pseudorandom test.

Some designs were found to be random pattern resistant, i.e. they were not fully tested by a feasible number of random patterns. This is considered to be a function of design rather than chip density: as noticed above, techniques are available for logic cone testability enhancement.

7.4 Exhaustive self test
Recently, researchers have studied methods of guaranteeing exhaustive testing in LSSD logic circuits [29, 30]. Essentially, an LFSR test generator is selected which will yield an exhaustive test over each logic cone, even if its inputs are noncontiguously located on the scan path. Fig. 8 shows a typical configuration found in LSSD gate arrays.

Fig. 8 Noncontiguous cone inputs

The self-test architecture is identical to that shown in Fig. 5, except that the LFSR will be programmable in terms of the number of bits used and the feedback taps.

7.4.1 Theory: While LFSRs have long been exploited as sources of pseudorandom patterns, their sequences possess certain properties which may preclude an exhaustive test over arbitrarily placed cone inputs. This is easily demonstrated by reference to Fig. 9.

Fig. 9 Cone input dependency

Since the data in the scan register is merely a direct shift of that in the LFSR, the following relationship must hold between the data in latches a, b and e:

a = b ⊕ e

This constraint means that the logic cone in Fig. 9 cannot be exhaustively tested. One solution might be to alter the input positions in the scan loop, an unattractive solution since routing and placement would require alteration. The alternative is to choose an LFSR with different feedback taps, thus removing the dependency. In this way a search may be carried out until an LFSR is found which satisfies the requirement for an exhaustive test on each logic cone in the circuit [30].
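The search is straightforward to mechanise. The sketch below is illustrative (it steps the window of scan data by one bit per test, whereas a real scheme steps by the scan length; the positions and tap sets are not taken from Reference 30): it generates the LFSR bit stream, projects it onto a set of noncontiguous cone input positions, and reports whether every input combination, excepting possibly the all-zero vector, is produced. A linear dependency such as a = b ⊕ e shows up as a shortfall in the count.

```python
def bit_stream(taps, degree, length, seed=1):
    """Serial output of a Fibonacci LFSR with the given feedback taps."""
    state, out = seed, []
    for _ in range(length):
        out.append(state & 1)
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = (state >> 1) | (feedback << (degree - 1))
    return out

def covers_exhaustively(taps, degree, positions):
    """Do the given scan positions see every input combination
    (bar, possibly, the all-zero vector) over one LFSR period?"""
    period = 2 ** degree - 1
    stream = bit_stream(taps, degree, period + max(positions))
    seen = {tuple(stream[i + p] for p in positions) for i in range(period)}
    return len(seen) >= 2 ** len(positions) - 1

# Cone inputs at noncontiguous scan positions 0, 1 and 4 (cf. Fig. 9):
for taps in [(0, 1), (0, 3)]:          # two degree-4 feedback choices
    print(taps, covers_exhaustively(taps, 4, (0, 1, 4)))
# (0, 1) imposes a = b XOR e on positions 0, 1 and 4 -> False;
# (0, 3) removes the dependency                      -> True.
```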

Peterson [31] gives tables of irreducible polynomials up to degree 34. To yield a maximal length sequence, an LFSR must implement a primitive polynomial: the number of applicable polynomials increases significantly with the degree. The search begins with an LFSR of degree N, where N is the number of inputs to the widest cone to be exhaustively tested. The chances of success are significantly increased by trying polynomials of degree N + 1, but the exhaustive test time is doubled.

7.4.2 Test times: The test time increases exponentially with the number of bits in the LFSR generator. The problem is aggravated by the serial shift required to scan each vector into place in an LSSD architecture. For complex designs multiple scan loops may be necessary to reduce the test time.

7.4.3 Wide cones: Those cones having more inputs than the number of bits in the LFSR generator will not be exhaustively tested. Instead, they will be pseudorandomly tested as in the previous Section. The testability enhancement techniques mentioned would therefore apply. A simpler solution is to ban the use of wide cones by introducing further latches and accepting a performance penalty; alternatively one could perform fault simulation to evaluate the coverage and, if necessary, generate supplementary patterns using conventional ATPG. Unfortunately, the latter introduces a second test mode and tends to diminish the advantages inherent in self test. The best approach is a combination of knowledgeable design and suitable testability enhancement.

7.4.4 Advantages: In spite of the difficulties noticed above, this self-test technique has significant advantages over other methods. Exhaustive testing eliminates any necessity to evaluate fault coverage, since 100% stuck fault coverage is guaranteed: the logic within a cone is ignored completely.

A simple ATE is all that is necessary to support this type of self test. It should be capable of generating the correct clock sequences algorithmically, providing control signals to initialise the registers and scanning out the final signature. These functions are readily performed by a microprocessor, thus permitting board level self test under the control of a test processor.

8 Future test strategy

As semicustom chip complexities increase beyond 10 000 gates, the cell-based design style becomes preferable to the use of gate arrays. The former approach ensures a more economic use of silicon area and encounters fewer wireability problems than a gate array solution. The area required to implement a given function is particularly important in the context of the unavoidable overheads associated with on-chip test hardware. Thus, the future will probably see some convergence of the semicustom and full-custom design approaches.

Significant improvements in system performance are possible if memory arrays and PLAs can be integrated on a single chip, due to the removal of the delay penalties associated with interfacing to the outside world. However, if the development of such application specific circuits is to be economically feasible, new tools and test methodologies must be developed. For such circuits, the application of the usual DFT techniques, e.g. adequate partitioning and good bus access, will be mandatory if the problem of test program development is to be tractable.

Certain generic structures could carry parameterised tests or built-in tests. A RAM, for instance, might have an associated finite state machine, implemented as a PLA, to generate tests for pattern sensitivity and cell stuck faults according to some efficient algorithm [32]. On the other hand, in certain circumstances, e.g. a data path, this technique might lead to unnecessary duplication of test circuitry, i.e. the application of self-test techniques at a higher level might yield a high quality test with less overhead.
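As an indication of the kind of sequence such an on-chip state machine might generate, the sketch below implements a simple march-style test (illustrative; this is not the specific algorithm of Reference 32). Every cell is read in both states, which detects all single cell stuck faults, and the ascending/descending address ordering also exposes many address decoder faults.

```python
def march_test(read, write, size):
    """March-style RAM test: up(w0); up(r0, w1); down(r1, w0); up(r0)."""
    for addr in range(size):
        write(addr, 0)
    for addr in range(size):
        if read(addr) != 0:
            return False
        write(addr, 1)
    for addr in reversed(range(size)):
        if read(addr) != 1:
            return False
        write(addr, 0)
    return all(read(addr) == 0 for addr in range(size))

# A fault-free memory model passes; a cell stuck at 0 fails.
good = {}
print(march_test(good.get, good.__setitem__, 64))    # True
faulty = {}
def write_stuck(addr, value):                        # cell 17 stuck at 0
    faulty[addr] = 0 if addr == 17 else value
print(march_test(faulty.get, write_stuck, 64))       # False
```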

Self-test schemes such as LSSD-based exhaustive testing have the advantage that a known fault coverage is guaranteed, avoiding costly fault simulation. It is important, however, that some degree of diagnostic resolution is preserved to facilitate chip debugging: if sufficient scan access to the main functional blocks is available, standard diagnostic techniques might then be employed, assuming the problems can be traced to the offending modules. Perhaps complete self test might be unrealistic for some designs: in such circumstances, self-testing subcircuits would greatly reduce the complexity of test generation for the overall circuit.

In practice the test-conscious VLSI designer draws upon a rich variety of techniques and expertise to optimise the testability of a given design. He may decide, for example, that certain circuit blocks may be adequately tested using BIT techniques, while others may require bus access to the I/O pins. Since test time should be minimised, possible concurrency will be exploited and sequential depth will be reduced using scannable registers. Different generic blocks, e.g. ROM, RAM or PLA, demand specialised fault models and test sequences, for example those designed to detect pattern sensitive faults in RAMs.

If the expertise of the VLSI test engineer could be encapsulated in an expert system, the VLSI designer could develop a sound test strategy as the system design evolves. Numerous support tools would be necessary, such as high level testability analysers, programs for generating optimal partitions, tools to support built-in self test, and others to recognise the applicability of these techniques.

9 Conclusion

Increasing integration densities, made possible by advances in wafer fabrication technology, have stretched the capabilities of conventional IC test techniques. The problems are becoming acute on the more complex semicustom ICs, and even the adoption of structured design techniques enforced by rigid design rules leaves some test issues unresolved.

To exploit the denser technologies to produce application-specific circuits cost effectively, new test methodologies will be required which utilise dedicated on-chip logic to reduce the complexity of the test generation task. Built-in self-test techniques will play an important role in reducing the volume of test data, while maintaining high test quality.

The major challenges to design automation include the development of high level testability analysis aids, chip partitioning algorithms, and tools to support the application of BIST techniques. The creation of such an environment should ensure that test is considered as an integral part of the design process, rather than a costly 'retrofit' operation.

10 Acknowledgment

Acknowledgment is made to the Director of Research of British Telecom for permission to publish this paper.

11 References

1 ROTH, J.P.: 'Diagnosis of automata failures: a calculus and a method', IBM J. Res. & Dev., 1966, pp. 278-281
2 TONGE, J.D.: 'Designing testable digital integrated circuits'. Proc. of 2nd Semicustom ICs Conf., London, 1982
3 WILLIAMS, T.W., and PARKER, K.P.: 'Design for testability: a survey', IEEE Trans., 1982, C-31, (1), pp. 2-15
4 GOLDSTEIN, L.H., and THIGPEN, E.L.: 'SCOAP: Sandia controllability/observability analysis program'. Proc. of 17th DA Conf., Minneapolis, MN, June 1980, pp. 190-196
5 BENNETTS, R.G., MAUNDER, C.M., and ROBINSON, G.D.: 'CAMELOT: a computer-aided measure for logic testability'. Proc. of ICCC, Port Chester, NY, Oct. 1980, pp. 1162-1165; also IEE Proc. E, Comput. & Digital Tech., 1981, 128, pp. 177-189
6 AGRAWAL, V.D., and MERCER, M.R.: 'Testability measures: what do they tell us?'. Proc. 1982 IEEE Int. Test Conf., Cherry Hill, Nov. 1982, pp. 391-396
7 SAVIR, J.: 'Good controllability and observability do not guarantee good testability', IEEE Trans., 1983, C-32, (12), pp. 1198-1200
8 GOEL, P.: 'An implicit enumeration algorithm to generate tests for combinational logic circuits', ibid., 1981, C-30, (3), pp. 215-222
9 FUJIWARA, H., and SHIMONO, T.: 'On the acceleration of test generation algorithms', ibid., 1983, C-32, (12), pp. 1137-1144
10 BREUER, M.A., and FRIEDMAN, A.D.: 'Diagnosis and reliable design of digital systems' (Computer Science Press, Potomac, Md., 1976)
11 ROBINSON, G.D.: 'HITEST: intelligent test generation'. Proc. IEEE Int. Test Conf., Oct. 1983, pp. 311-322
12 EICHELBERGER, E.B., and WILLIAMS, T.W.: 'A logic design structure for LSI testability', J. Design Autom. & Fault-Tolerant Comput., 1978, 2, pp. 165-178
13 GRIERSON, J.R., COSGROVE, B., DANIEL, R., HALLIWELL, R.E., KIRK, I.H., KNIGHT, J.C., McLEAN, J.A., McGRAIL, J.M., and NEWTON, C.O.: 'The UK5000: successful collaborative development of an integrated design system for a 5000 gate CMOS array with built-in test'. Proc. of 20th DA Conf., June 1983, pp. 629-636
14 GOEL, P.: 'Test generation costs, analysis and projections'. Proc. of 17th DA Conf., Minneapolis, MN, 1980, pp. 77-84
15 McCLUSKEY, E.J., and BOZORGUI-NESBAT, S.: 'Design for autonomous test', IEEE Trans., 1981, C-30, (11), pp. 866-875
16 FROHWERK, R.A.: 'Signature analysis: a new digital field service method', Hewlett-Packard J., 1977, pp. 2-8
17 KOMONYTSKY, D.: 'LSI self-test using level sensitive scan design and signature analysis'. Proc. IEEE Int. Test Conf., Cherry Hill, 1982, pp. 414-424
18 GOEL, P., and McMAHON, M.T.: 'Electronic chip-in-place test'. Proc. of 19th DA Conf., June 1982, pp. 482-488
19 RESNICK, D.R.: 'Testability and maintainability with a new 6k gate array', VLSI Des., Mar./Apr. 1983, pp. 34-38
20 KONEMANN, B., MUCHA, J., and ZWEIHOFF, G.: 'Built-in logic block observation technique'. Proc. IEEE Test Conf., 1979, pp. 37-41
21 WILLIAMS, T.W.: 'Random patterns within a structured sequential logic design', ibid., pp. 19-26
22 SAVIR, J., DITLOW, G., and BARDELL, P.H.: 'Random pattern testability', IEEE Trans., 1984, C-33, (1), pp. 79-90
23 PARKER, K.P., and McCLUSKEY, E.J.: 'Analysis of logic circuits with faults using input signal probabilities', ibid., 1975, C-24, pp. 573-578
24 BOYCE, A.H., and WATT, W.: 'The design and testing of logic circuits', Electron. Tech., 1982, 16, pp. 25-28
25 EICHELBERGER, E.B., and LINDBLOOM, E.: 'Random-pattern coverage enhancement and diagnosis for LSSD logic self-test', IBM J. Res. & Dev., 1983, 27, (3), pp. 265-272
26 SAVIR, J., and BARDELL, P.H.: 'On random pattern test length'. Tech. Report TR 00.3224, IBM Poughkeepsie, NY, 1983
27 MOTIKA, F., WAICUKAUSKI, J.A., and LINDBLOOM, E.: 'An LSSD pseudo random pattern test system'. Proc. IEEE Int. Test Conf., Oct. 1983, pp. 283-288
28 ARZOUMANIAN, Y., and WAICUKAUSKI, J.: 'Fault diagnosis in an LSSD environment', ibid., Philadelphia, PA, 1981, pp. 362-370
29 TANG, D.T., and WOO, L.S.: 'Exhaustive test pattern generation with constant weight vectors', IEEE Trans., 1983, C-32, (12), pp. 1145-1150
30 BARZILAI, Z., COPPERSMITH, D., and ROSENBERG, A.L.: 'Exhaustive generation of bit patterns with applications to VLSI self-testing', ibid., 1983, C-32, (2), pp. 190-194
31 PETERSON, W.W.: 'Error correcting codes' (MIT Press, Cambridge, MA, 1961)
32 MARINESCU, M.: 'Simple and efficient algorithms for functional RAM testing'. Proc. IEEE Int. Test Conf., Cherry Hill, Nov. 1982, pp. 236-239
33 SHEDLETSKY, J.J.: 'Random testing: practicality versus verified effectiveness'. Digest FTCS-7, 7th Annual Int. Conf. on Fault-Tolerant Computing, 1977, pp. 175-179
34 HOWELL, S.K.: 'Testable system design using VLSI-6000 semicustom CMOS'. Proc. of 3rd Semicustom ICs Conf., London, 1983
