
Chips and the systems they reside in are becoming increasingly complex, presenting challenges to device designers and manufacturers to produce devices that are both more reliable and more fault-tolerant.

WORKSHOP REPORT

Computer Elements for the 80's
Matthew F. Slana, Bell Telephone Laboratories

The keynote panel discussion for the June 1978 Computer Elements Workshop focused on the implications of increased scale of integration in devices for the computer business. Sessions at the workshop, sponsored by the IEEE Computer Society's Technical Committee on Computer Elements, examined trends in memory, LSI design, emerging logic technologies, packaging, and testing.

Device designers see a continuing trend to larger scales of integration. Simultaneously, speed-power improvements are forecast. The challenge to the device people is to curtail the costs of designing and testing ever more complex chips. There is a concern that reliability might suffer as the chips become more complex. The increased scale of integration is being exploited rapidly by the designers of terminals, minicomputers, and other microprocessor applications. Of course, high-density semiconductor memory is being exploited widely.

Designers of large central processors are less certain of how to exploit the increasing scale of integration. Since the major objective for a central processor is speed, the emphasis is on high-performance devices, possibly using several microprocessors in parallel to achieve high throughput. But since the power of this technique is unproven, CPU designers are exploiting modular gate-array or master-slice techniques in high-performance configurations. These modular techniques allow a variety of configurations with modest design and testing costs.

Memory trends

When one talks about semiconductor memory trends, three technologies appear that both complement and compete with each other: MOS or bipolar RAMs, bulk memories (such as disks), and bubble and CCD memories. The architecture inherent in each technology plays a large role in the present and projected use of the technology. In memory hierarchies, the internal memories will generally be MOS or bipolar RAMs, with TTL or ECL interfaces, that will mate up to the large bulk memories via the "gap-filler" CCD and bubble memories.

RAMs. Generally, the application will determine the RAM technology used, with the prime considerations being access time (a 3:1 or 4:1 factor of higher speed in favor of bipolar) and power per bit (a 10:1 factor of lower power in favor of DMOS). Rapid advances in printing technology, from the contact and projection methods to E-beam, X-ray, and direct-step-on-wafer methods in the 1979-1981 period, show area-geometry factors decreasing by ratios of 5:1 to 7:1, allowing dynamic RAM availability to proceed from the 16K chips prevalent in 1978 (with 4-5-micron line widths and 350-450-square-micron cell sizes) to 64K chips in 1981 and to 256K chips (with 1.0-1.5-micron line widths and 50-75-square-micron cell sizes) by 1985 to 1987. The apparent evolution of memory-chip architectures to byte-wide arrangements has a variety of advantages, including larger packaging densities, reduced testing times for microprocessors, and ease of providing on-chip error correction.


Testing costs are directly related to testing time, and testing time is strongly dependent on chip architecture, as is shown in one study, in which the "wider" architectures had significantly lower costs for the same total bits per chip (see table).

The general trends in semiconductor RAMs in the future will be toward faster and smaller cells, using a single power supply, and with all overhead functions performed on-chip. Byte-wide memories will become prevalent, with on-chip error correction allowing the use of partially good memories.
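To make the comparison in the accompanying table concrete, the x1-versus-x8 test-time ratios can be read straight off the quoted figures. The short sketch below (Python, purely illustrative; the report itself gives no code) simply restates the table's numbers.

```python
# Test-time advantage of byte-wide ("x8") organizations over x1 parts
# of the same total capacity, taken directly from the table.
test_seconds = {
    ("16K x 1", "2K x 8"):   (8.42, 4.64),
    ("64K x 1", "8K x 8"):   (38.47, 7.32),
    ("256K x 1", "32K x 8"): (276.11, 23.20),
}

for (narrow, wide), (t_narrow, t_wide) in test_seconds.items():
    print(f"{narrow:>8} vs {wide:>7}: {t_narrow / t_wide:.1f}x shorter test")
# Prints roughly 1.8x, 5.3x, and 11.9x: the larger the capacity, the bigger
# the test-time (and hence test-cost) advantage of the wide organization.
```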

Bubbles. Bubble memories are under intensive development in a number of laboratories, with million-bit chips having been fabricated in research labs. Block organizations are seen as the only way to increase yield for chips of over 100,000 bits.

Relationship between chip width and test time.

CHIP SIZE    TEST TIME (SECONDS)    TEST COST
16K x 1             8.42              $0.17
2K x 8              4.64              $0.09
64K x 1            38.47              $0.77
8K x 8              7.32              $0.15
256K x 1          276.11              $5.52
32K x 8            23.20              $0.96


Redundant loops are being implemented for fault tolerance in operations requiring high reliability. Higher yields and faster access times are projected for the future with the more complex permalloy circuits now under development. Quarter-million-bit devices, containing 1024 blocks of 256 bits each and costing 30 to 50 millicents per bit, should be on the market soon.

CCDs. Now on the market in small numbers, CCDs show some promise of filling the gap between semiconductor RAMs and bubbles. CCD die densities are approximately three times those of the semiconductor RAMs, while power dissipation per cell may be one half to one tenth that of semiconductor RAMs. However, RAM applications remain more pervasive, and production rates for MOS RAMs remain much greater than those for CCDs. The technology appears to be available to provide a 256K CCD module for $3.00 in the 1982-1983 period.

A soft bit error problem, observed in both CCDs and dynamic RAMs, has been under study; this random, nonrecurring error is recovered from in the following write cycle, with no physical defects, and has been identified as being caused by alpha-particle interaction inside the package. The problem can be eliminated by die shielding or by cell design using very large charge packets. The ultimate minimum error level may be limited by cosmic protons.

LSI design systems

The tools and techniques used for computer-assisted layout of VLSI circuits range from fully automated placement and routing systems without editing facilities to systems where the user is at the mercy of commercially available software on a vintage 1972 turnkey system. The accomplishments in this area, while very respectable, still indicate that much needs to be developed to properly and cost-effectively process VLSI circuits of the 80's.

Digitized drawings. The most prevalent method available today for generating mask-tooling tapes is to digitize hand-drawn representations of the circuit topology into a geometric data base on a turnkey system. These systems typically consist of a 16-bit minicomputer controlling various storage-CRT editing stations, a digitizer, and one or more digital plotters. In addition to these manual interface terminals, the CPU controls a moving-head disk, a tape drive, a card reader, and a line printer.

The layout of a 20K-transistor PMOS circuit has been done on such a system. As in most designs of VLSI circuits, a large amount of the designer's time was spent at the graphics terminal, after the entire circuit was digitized, to make alterations to the mask topology. Modifying data on the screen required a substantial amount of time, with some screen regenerations requiring up to two minutes. After major editing sessions had been completed, a multicolor plot was generated so that adherence to layout rules could be checked in detail. There appears to be a lack of commercially available programs to perform these intrafigure and interfigure spacing, overlap, and area checks. The manpower invested, from circuit specification to a mask-tooling tape, exceeded 25 man-months; the size of the chip was 6.8 x 6.8 mm.
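The spacing, overlap, and area checks mentioned above reduce, at their core, to simple geometric tests on mask rectangles. The following is a minimal, purely illustrative sketch of such a rule checker; the rectangle representation and the rule values are assumptions for the example, not those of any commercial system of the period.

```python
from typing import NamedTuple

class Rect(NamedTuple):
    """Axis-aligned mask rectangle in microns (illustrative representation)."""
    x1: float
    y1: float
    x2: float
    y2: float

def area(r: Rect) -> float:
    return (r.x2 - r.x1) * (r.y2 - r.y1)

def overlap(a: Rect, b: Rect) -> bool:
    """Interfigure overlap check: do two rectangles intersect?"""
    return a.x1 < b.x2 and b.x1 < a.x2 and a.y1 < b.y2 and b.y1 < a.y2

def spacing(a: Rect, b: Rect) -> float:
    """Edge-to-edge spacing between two non-overlapping rectangles."""
    dx = max(b.x1 - a.x2, a.x1 - b.x2, 0.0)
    dy = max(b.y1 - a.y2, a.y1 - b.y2, 0.0)
    return (dx * dx + dy * dy) ** 0.5

# Hypothetical layout rules, loosely in line with the 4-5-micron
# line widths quoted earlier in the report.
MIN_SPACING = 4.0   # microns
MIN_AREA = 25.0     # square microns

def check(rects: list[Rect]) -> list[str]:
    """Return human-readable violations of the area, overlap, and spacing rules."""
    errors = [f"rect {i}: area {area(r):.1f} below minimum"
              for i, r in enumerate(rects) if area(r) < MIN_AREA]
    for i, a in enumerate(rects):
        for j, b in enumerate(rects[i + 1:], start=i + 1):
            if overlap(a, b):
                errors.append(f"rects {i},{j}: overlap")
            elif spacing(a, b) < MIN_SPACING:
                errors.append(f"rects {i},{j}: spacing {spacing(a, b):.1f} below minimum")
    return errors

print(check([Rect(0, 0, 10, 10), Rect(12, 0, 22, 10), Rect(9, 9, 11, 11)]))
```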

Logical input. Another method involves the generation of an I2L circuit layout through a combination of automated and manual techniques, using a logical description of the circuit.

The data base for this system was first used to confirm correct operation by exercising the data through logic simulation programs. The data were then partitioned into groups of tightly interconnected transistors, which the program assembled along a common injector in such a manner that it minimized intra-injector interconnect. Routing within an injector string was performed using a variation of the Lee algorithm. Connections that went between these clusters were routed to the edge of a cluster. The routed clusters were then assembled and placed onto what would become the final chip layout. A channel router then completed the routing between clusters. Since the efficiency of the final layout depends heavily on the actual routing within a cluster, manual editing was used to redirect part of the initial transistor routing.
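The Lee algorithm mentioned above is, at heart, a breadth-first expansion over a routing grid that is guaranteed to find a shortest path between two cells if one exists. A minimal sketch follows; the grid representation is an assumption for illustration, and production routers of the period used many refinements on this basic wavefront expansion.

```python
from collections import deque

def lee_route(grid, source, target):
    """Breadth-first (Lee) maze routing on a grid of 0 = free, 1 = blocked.

    Returns the list of cells on a shortest source-to-target path,
    or None if the target cannot be reached.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {source: None}                # wavefront bookkeeping: cell -> predecessor
    frontier = deque([source])
    while frontier:
        cell = frontier.popleft()
        if cell == target:
            path = []                    # trace back from target to source
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

# Tiny example: route around a one-cell obstruction.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(lee_route(grid, (0, 0), (2, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)] -- one shortest path, length 5
```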

The actual amount of editing on a system of this type can be substantial if the circuit designer demands that the last square micron of real estate be utilized. In practice, an acceptable layout, which falls within 30 percent of the area of a manual layout, can be generated within half the time of a manual layout. All layout data were checked by software for adherence to both the original logic description and the topological layout rules.

Full automation. In the design approaches described above, it is apparent that the largest amount of time in the layout process is consumed by either the laborious drafting, digitizing, and editing of circuit topologies or the adjustment of computer-generated routing to optimize silicon real estate. A fully automatic system was described that uses seven CMOS primitive cells for the layout of complex VLSI circuits of the 8-bit microprocessor class.

The fully automatic system was designed in such a way that no editing of topologies is possible; if a change in topology is required (due to a logic change, for example), a complete new layout is generated. Although this is a radical departure from conventional layout approaches, it has the distinct advantage of extremely fast cycle time: an 800-gate circuit was taken from logical description to mask-tooling tape in 24 hours; an 8-bit ALU required one week of effort, major portions of this time being required to execute the placement and routing of over 4000 gates. A unique feature of this system is the reduction of the logic into a schematic using the seven primitive cells; most conventional standard-cell systems allow for the entry of complex structures into the logic and layout data base. Area utilization is poor compared to an equivalent manual approach (1.5 to 5 times larger), but this system provides an exceptionally fast cycle time in return. Parametric performance of the CMOS circuits is only slightly degraded by the large amount of routing used for implementation.

A massive amount of work still needs to be done in order to cost-effectively process layouts of VLSI circuits with more than 10K gates each. Major cycle-time delays will be encountered unless more phases of the layout-to-mask-tooling process are automated. As always, the challenge remains of accomplishing this automation without a severe penalty in silicon real estate.


Emerging logic technologies

With the increasing emphasis on LSI and VLSI, the standard silicon technology is being reevaluated, and a variety of new techniques are being studied for either adaptation to or replacement of current technologies. Two improvements in silicon technology were discussed.

The first dealt with the personalization of programmable logic arrays by use of lasers, which are used to make a connection between two 10,000-angstrom metal layers separated by two microns of silicon without damaging the silicon underneath. This is done by selectively "zapping" the connection crossover areas with a controlled-energy laser pulse. Five or six vias per second can be produced, providing engineering hardware for early system development models and PLA macros for modeling VLSI hardware. The master-slice PLA used was 5.65 x 5.65 mm, with 34 latched inputs, 44 latched outputs, 8960 "and" array bits, and 7360 "or" array bits. The slice uses +5 V and ground, and dissipates 1-3/4 watts maximum. Designer-controlled options include the bit pattern, 1-bit or 2-bit partitioning, input and output latch gating options, and normal or inverted data out. Software-controlled options include unused-circuit depowering.
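For readers less familiar with PLA personalization, the "and" and "or" array bits mentioned above simply select which inputs feed each product term and which product terms feed each output; the laser zapping chooses those bits on finished hardware. A small, purely illustrative model follows; the array sizes here are toy values, not the 8960/7360-bit arrays of the slice described.

```python
def pla_eval(inputs, and_array, or_array):
    """Evaluate a programmable logic array.

    and_array[p] gives, for product term p, the required value of each input
    (0, 1, or None for "don't care"); or_array[o] lists which product terms
    are ORed into output o.  Personalization (by mask or, as in the report,
    by laser) amounts to choosing these two tables.
    """
    products = [all(req is None or bit == req
                    for bit, req in zip(inputs, term))
                for term in and_array]
    return [any(products[p] for p in terms) for terms in or_array]

# Toy personalization: two inputs, two product terms, one output
# implementing XOR (out = a'b + ab').
and_array = [(0, 1), (1, 0)]
or_array = [[0, 1]]
for a in (0, 1):
    for b in (0, 1):
        print((a, b), pla_eval((a, b), and_array, or_array))
# (0,0)->[False], (0,1)->[True], (1,0)->[True], (1,1)->[False]
```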

Multi-junction LSI wafers formed by this method may be used to provide higher yields by use of mirror-image wafer designs, A and B, which, when processed and tested, may be divided into two classes: I (mostly good) and II (mostly bad). The bad "islands" are removed from the mostly good (Class I) wafers, and the good islands are removed and saved from the mostly bad (Class II) wafers. The Class II islands that are saved from the A wafers are then used to repair the Class I B wafers where bad islands have been removed, and vice versa.

The second improvement in current technology addresses the wireability limit on random logic. The popular view of VLSI is that it provides more components per chip, meaning larger, higher-density chips, costing less and combining larger functional modules on a single chip. The lengthy custom design required, and the large volume needed to amortize development cost, may slow the introduction of VLSI, except for certain specialized functions.

New opportunities for VLSI can occur, however, since VLSI can provide sufficient components and connectivity to allow area to be traded for reduced design time, so that less time will be needed from the custom chip designer, giving more power to the logic designer. Capitalizing on VLSI for traditional logic will involve interactions among the pin/gate ratio, design time, the regular structure of gates or cells, the need for a sufficient number of metal interconnection tracks, and delay per gate.

VLSI, if planned for, can provide cost-effective master-slice gate arrays even with the area losses, and can allow new LSI logic applications. Since interconnections dominate both the area and the line lengths, careful planning will be needed.

It has been claimed that gallium arsenide technology is capable of superseding silicon. With recent advances in GaAs technology, promise has turned to demonstrated performance: 120-ps delay gates dissipating 0.040 picojoules, and 83-ps gates dissipating 0.23 picojoules. A wide variety of devices (photodetectors, light emitters, varactor diodes, IMPATT diodes, and microwave GaAs MESFETs) are being developed, and some are already on the market. The limits to GaAs technology remain the compound semiconductor technology, the lack of a diffusion process, and the lack of an established MOS technology. Available GaAs processes include ion implantation (low power, high speed), epitaxy (both liquid and vapor phase), and Schottky barriers. Current capability includes 0.8-1.0-micron resolution, 0.25-0.5-micron accuracy, 1-micron gate lengths, and 1000-angstrom implanted gate depths.
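If the picojoule figures above are read as speed-power products (switching energy = gate delay x average gate power), which is how such figures were commonly quoted, the implied per-gate power levels work out as below. That reading is an assumption here; the report quotes only the delay and energy numbers.

```python
# Rough arithmetic, assuming the picojoule figures are speed-power products
# (energy = delay x power); the implied average power per gate is then:
def implied_power_mw(delay_ps: float, energy_pj: float) -> float:
    return energy_pj / delay_ps * 1e3   # pJ/ps equals watts, so x1000 gives mW

print(implied_power_mw(120, 0.040))   # ~0.33 mW per gate
print(implied_power_mw(83, 0.23))     # ~2.8 mW per gate
```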

Packaging for performance

Circuit-packaging techniques can make the difference between a system's meeting its performance requirements or not working at all. Various methods of optimizing performance, using a variety of parameters, have been studied and reported upon.

Packing density of circuitry has traditionally been related to power-distribution and power-dissipation characteristics. However, an analysis of the effect of circuit-packing densities on performance leads to some new insights into the optimization of signal-propagation delay in large partitioned digital systems. This analysis, which starts from a power-dissipation limit per unit area, indicates that the circuitry will, of necessity, be slower as that power-density limit is approached. The analysis leads to circuit-packing techniques in which delay is minimized by adjustments among system logic partition size, package configuration, cooling technology, and circuit technology. The results indicate that system performance does not depend strongly upon circuit-packing density and that the trend to larger circuit-packing densities must be supported by reductions in the cost of building assemblies.

One example used was a commercial computer for which a major attempt has been made to maximize performance with SSI piece parts. The system was constructed in a circular fashion around a central core, with the backplane wiring on the interior, toward the central core, and the circuit cards mounted horizontally in modules facing outward from the central core. The cards are arranged in columns, with up to 144 two-card modules per column. The 6 x 8-inch cards are composed of a 12 x 12 array of IC positions, spaced on a 0.375-inch center-to-center grid, on a five-layer board, with two such boards attached to opposite sides of a cold/ground plate to form a module. Cooling is accomplished by providing a good thermal connection between the cold/ground plate and a freon-cooled vertical card guide via a set of screw-activated inclined planes. Power planes and supplies limit supply-voltage noise to less than 200 mV peak-to-peak. With an average dissipation of 38 watts per board, the mechanical assembly provides a 60°C maximum junction temperature with no air flow.

All backplane wires are in 1-foot increments, from a minimum of 1 foot to a maximum of 4 feet, with a 1.5-ns/ft propagation delay; wires have a 70-ohm characteristic impedance and a 70-ohm termination, providing 140-ohm differential impedances for drivers and receivers. Memory columns, constructed in the same manner, contain 132,000 72-bit words; eight memory columns provide the million-word main-memory capability. Sixteen-pin flat packs were used throughout. The SSI delays were held to 1 ns per gate plus 0.5 ns per interconnection, with a 12.5-ns clock period achieved through eight gates plus 12 to 22 inches of foil.

Another method of high-performance packaging, using LSI gates, was also presented. Since higher packaging densities are one of the essential factors in achieving high-speed, large-scale computers, methods of packaging for future large-scale CPUs with 150,000 or more gates are needed. Low-power, high-speed, emitter-coupled-logic LSI chips, in the range of 1-2 pJ, with 400 to 500 gates per chip, have led to a three-level hierarchy. The first level is the LSI chip. This is incorporated into a second level, a printed wiring board containing 80 LSI chips and 35,000 to 40,000 gates. These are then placed in the third (system) level package: a frame containing six PWBs, holding 150,000 gates plus the associated buffer stores. This philosophy is being studied as a strong candidate for the high-speed, large-scale computer of the early 1980s. The key to its success will be finding effective means of logic partitioning, system cooling, and reasonably simple reworkability and testability.
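The gate counts at the three levels are roughly consistent with one another, as this back-of-the-envelope check shows; the 450-gates-per-chip figure is simply the midpoint of the quoted 400-500 range, not a number from the report.

```python
# Back-of-the-envelope check of the three-level packaging hierarchy.
gates_per_chip = 450         # assumed midpoint of the quoted 400-500 gates/chip
chips_per_board = 80
boards_per_frame = 6

gates_per_board = gates_per_chip * chips_per_board
print(gates_per_board)                     # 36,000 -- within the quoted 35,000-40,000
print(gates_per_board * boards_per_frame)  # 216,000 -- covers the 150,000 logic gates
                                           # plus the associated buffer stores
```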

In a slightly different vein, other technologies that offer significant attributes for high-performance computer packaging problems have radically different packaging. Josephson superconducting technology, for example, has very fast circuits with very low speed-power products, a completely terminated transmission-line system (both on-chip and off-chip), lossless superconducting transmission lines that can be fabricated at high density using thin-film techniques, and a unique power distribution system that simplifies clock distribution. An exploratory effort is being made to fabricate a small special-purpose processor in this technology. This computer, using gates with delays of less than 100 ps, and dissipating less than 10 microwatts per gate, will include 5000 gates, and 150,000 bits of cache memory, in a volume of 0.6 x 1 x 1.2 inches, using thin-film evaporation techniques. Only three logic circuits (an OR gate, an AND gate, and an INVERT gate) are used. The technology uses five-micron line widths and allows 400 circuits per chip. Special requirements for the high-performance package included dense electronics, good transmission characteristics, good cooling capacity, good power distribution, and dense interconnection capability.


Test aids

With the advent of LSI, testing strategies came into play as a significant method of reducing costs, since the costs of classical test methods tended to increase geometrically as circuit complexity increased. With the future introduction of VLSI, the problem will become increasingly complex. In fact, two questions can be raised: "Are we testing enough or too much?" and "Can we test the ICs that we make in the next 10 years?" Simulation, test generators, and full fault coverage can probably be performed, but technological considerations and costs may lead to wasted effort and money from too much testing on the one hand, or potential disaster from too little on the other, if not adequately planned for.

The first question we must ask is whether we can test VLSI. The answer must be "yes," but we must design the circuits for testability. The VLSI environment is multi-technology/mixed-technology with open part-number sets, and requires that the quality of the parts be high. Under these conditions, test-pattern generation must be automatic. One method of doing this is level-sensitive scan design, which places strobed shift registers between logic blocks to provide functional partitioning. Advantages are hazard-free design, very high test coverage, and the capability of automatic test-pattern generation for large networks. Disadvantages are silicon and wiring overhead, the need for extra I/O pins, the need for top-down design for best use, and the need for externally generated clock signals.
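A highly simplified model of the scan idea: the latches between combinational blocks are chained into a shift register, so a test pattern can be shifted in, one functional clock applied, and the captured result shifted back out for comparison. Everything in the sketch below (the block chosen, the function names) is illustrative and is not a description of any specific scan implementation.

```python
def scan_test(comb_logic, pattern):
    """Very simplified scan-path test of one combinational block.

    pattern    -- bits to shift into the input-side latches
    comb_logic -- pure function mapping a tuple of latch bits to the tuple
                  of bits captured by the output-side latches
    Returns the bits shifted back out for comparison against the
    expected (simulated) response.
    """
    input_latches = list(pattern)                              # 1. shift pattern in
    output_latches = list(comb_logic(tuple(input_latches)))    # 2. one functional clock
    return output_latches                                      # 3. shift response out

# Illustrative combinational block: a 1-bit full adder.
def full_adder(bits):
    a, b, cin = bits
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return (s, cout)

for pattern in [(0, 0, 0), (1, 0, 1), (1, 1, 1)]:
    print(pattern, "->", scan_test(full_adder, pattern))
# Each pattern exercises the block directly through the scan latches,
# which is what makes automatic test-pattern generation tractable.
```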

Unit-in-place testing is another alternative, allowing reuse of chip tests on the board, high quality (if the chip tests are good), technology independence, and good fault diagnostics. Disadvantages include the need for extra circuits, extra pins and test points, possible requirements for special test features, and the fact that it does not solve the problem of testing very dense chips.

A combination of both approaches may be the best solution.

Major VLSI testing problems are redundancy, embedded RAM/ROM arrays, and ultralarge networks on a chip. Logic simulation and simulators have come into play as a method of solving the testing problem. Both gate-model and functional-model simulators have been studied, but general results have been inconclusive as to which model is better overall. For LSI testability, it is desirable to have a system that can work at all levels of manufacturing and field support and that minimizes test cost when there are many types of chips. One method currently being implemented makes use of pseudo-random testing, where all the inputs to a given chip are driven by a pseudo-random shift register, which is started in a fixed pattern and clocked a fixed number of counts. The output signature (the sum of all output states) is unique for each chip.
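A minimal sketch of that scheme follows: a pseudo-random shift register (here a small software LFSR, with a feedback polynomial and seed chosen purely for illustration) drives the chip's inputs for a fixed number of clocks, and the outputs are compacted into a signature that is compared against the signature of a known-good device.

```python
def lfsr_patterns(seed, taps, width, count):
    """Generate `count` pseudo-random input patterns from a simple LFSR.

    The feedback taps and seed are illustrative; any maximal-length
    polynomial would serve the same purpose.
    """
    state = seed
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

def signature(chip, width=8, count=255):
    """Drive `chip` (a function from input word to output word) with a fixed
    pseudo-random sequence and return a compacted output signature.

    Here the signature is simply the sum of all output states, as described
    above; real testers typically compact into a second shift register.
    """
    return sum(chip(p) for p in lfsr_patterns(seed=1, taps=(7, 5, 4, 3),
                                              width=width, count=count))

# Compare a "good" chip against one with a stuck output bit.
good = lambda x: (x * 3 + 1) & 0xFF          # stand-in for the device under test
faulty = lambda x: good(x) | 0x10            # output bit 4 stuck at 1
print(signature(good), signature(faulty))    # differing signatures flag the fault
```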

Living with faulty elements

With the increased complexity of chips and the systems they reside in, the overall concerns of reliability force designers to look first at providing the most reliable devices possible, and then at developing architectures that tolerate faults as they occur.

A number of programs exist in government and industry to assure the reliability of LSI products. In the past, standards have been set for military LSI failure rates, over the temperature range of -55°C to +125°C, of 0.01 percent per 1000 hours; comparable commercial failure rates have been set at 0.1 percent per 1000 hours. These failure rates are generally set through product evaluation, electrical characterization, and reliability assurance programs.
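For scale, the military figure converts to more familiar reliability units as follows; the conversion is standard arithmetic, not something stated in the report.

```python
# 0.01 percent per 1000 hours, expressed in other common reliability units.
rate_per_hour = 0.01 / 100 / 1000      # fraction of devices failing per device-hour
print(rate_per_hour * 1e9)             # 100 FITs (failures per 10^9 device-hours)
print(1 / rate_per_hour)               # 10,000,000-hour MTBF for a single device
# The commercial figure of 0.1 percent per 1000 hours is ten times higher: 1000 FITs.
```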

Product evaluation is mainly directed toward assessing the potential reliability of newly introduced solid-state devices. It involves review of processing steps, evaluation of device construction, checking for material compatibility, and preparing product evaluation reports. The current emphasis is on LSI microprocessors, bubble memories, and similar devices. Selection criteria include device usage, device technology, and availability of related reliability data.

Electrical characterization verifies the functional design, determines the critical electrical parameters, and establishes optimum test sequences. In this way, a minimum parts list is formed, and system parts and second-sourcing plans may be developed.

Finally, part-reliability assurance identifies the basic failure modes and mechanisms as functions of time and stress, by means of high-stress, short-term testing and failure analysis. This then identifies faulty materials, processes, and designs, leading to effective and efficient device screening.

Even the increased reliability may not be sufficient for the availability requirements of large-scale systems, however; in this case, fault-tolerant designs may be proposed. Even though the complexity of each chip is increasing, the failure rates increase at a slower rate than the complexity increases, and the reduction in interconnections provides an additional factor in increasing reliability. In one experiment, based on five billion device-hours of experience, SSI chips with four to eight logic circuits per chip were found to have approximately half the failure rate of LSI chips with 70 to 100 circuits per chip, giving an effective LSI circuit failure rate of one seventh that of the SSI.
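The "one seventh" figure follows directly from the chip-level numbers; a quick check, taking midpoints of the quoted ranges (an assumption made here purely for illustration):

```python
# Per-circuit failure rates implied by the chip-level observation above,
# using midpoints of the quoted ranges.
ssi_circuits, lsi_circuits = 6, 85        # ~4-8 and ~70-100 circuits per chip
ssi_chip_rate = 1.0                       # arbitrary units; SSI chips fail at
lsi_chip_rate = 2.0                       # roughly half the LSI chip rate

ssi_per_circuit = ssi_chip_rate / ssi_circuits
lsi_per_circuit = lsi_chip_rate / lsi_circuits
print(ssi_per_circuit / lsi_per_circuit)  # ~7: each LSI circuit is about
                                          # one seventh as failure-prone
```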

In a system sense, improving reliability provides for increased availability; improving diagnosability provides for increased maintainability. However, in addition to "hard" faults, noise and intermittent or "soft" faults can cause system degradation. Some measure of protection can be provided by self-checking and self-testing; additional protection can be obtained, for a price, by error detection/correction techniques such as parity and/or error-detecting/correcting codes or circuitry. Some methods that have been implemented in commercial systems include duplication, duplication with matching, n-tuple voting, error checking with spare switching, and decoder checking. However, all systems, unless structured for resolution, may give false indications as to where failures exist. Use of "cheap" microprocessors may provide a revolution in the development of reliable large-scale systems.
