Automated Volume Diagnostics


Page 1: Automated Volume  Diagnostics

© Synopsys 2013 1

Automated Volume Diagnostics
Accelerated yield learning in 40nm and below technologies

John Kim, Yield Explorer Applications Consultant
Synopsys, Inc.
June 19, 2013

Page 2: Automated Volume  Diagnostics

© Synopsys 2013 2 Korea Test Conference 2013

Agenda

Current Challenges

Diagnostics vs Volume Diagnostics

Analysis Flows with Volume Diagnostics

Collaboration between Fab/Fabless

Conclusions

Page 3: Automated Volume  Diagnostics

© Synopsys 2013 3 Korea Test Conference 2013

Systematic Issues Rising Dramatically

• The systematic contribution to initial yield loss is worsening at newer technology nodes

• Random Defect issues are also increasing but can be managed with existing methods and infrastructure

• Different methods are needed to address these new mechanisms (chart data source: IBS)

[Chart: Trend of Initial Yield Loss by Technology Node: yield loss (%) vs. technology node (nm), broken out into design-based, litho-based, and defect-based (systematic vs. random) yield issues]

Page 4: Automated Volume  Diagnostics

© Synopsys 2013 4 Korea Test Conference 2013

How do we address these systematics?

• Traditional yield learning methods can address random defectivity sources:
– Inline inspections
– Technology structural and IP test chips
– Single production-volume yield learning vehicle
– Memory array based detection and FA localization
– Various EFA visualization techniques
– Litho/DFM simulation
– Legacy learning

• But what about product- and technology-specific design and layout systematics?

Page 5: Automated Volume  Diagnostics

© Synopsys 2013 5 Korea Test Conference 2013

ATPG Diagnostics-Based Yield Learning

• ATPG diagnostics-based yield learning gives us an enhanced level of analysis and characterization capability

• Most logic products already use ATPG for automated high test coverage pattern generation

• Diagnostics provides very high localization of the likely defective region, often down to a few square microns when physical diagnostics is used

• Volume diagnostics adds statistical confidence to identify root cause

Page 6: Automated Volume  Diagnostics

© Synopsys 2013 6 Korea Test Conference 2013

Agenda

Current Challenges

Diagnostics vs Volume Diagnostics

Analysis Flows with Volume Diagnostics

Collaboration between Fab/Fabless

Conclusions

Page 7: Automated Volume  Diagnostics

© Synopsys 2013 7 Korea Test Conference 2013

How Logic Diagnostics Work

• Assumptions:
– Many ATPG patterns
– ATE failures recorded from all of those patterns

• Most faults produce a unique test response signature

• Find the fault which most closely matches the defect signature from the ATE

Test patterns:          P1: 11001010   P2: 00011101   P3: 10100011
Signature for fault A:  P1: PPPPFPP    P2: PPPPPPP    P3: PFFPPPP
Signature for fault B:  P1: PPPPPPP    P2: PPFFFPF    P3: PFPPPPP

The following pages provide the basics of how scan diagnostics works.
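As a rough illustration of the matching step described above, here is a minimal sketch in Python that scores simulated per-fault pass/fail signatures against the signature observed on the ATE and ranks the candidate faults. The data layout and the simple overlap score are assumptions for illustration; this is not the TetraMAX matching algorithm.

# Illustrative only: rank candidate faults by how well their simulated
# pass/fail signatures match the signature observed on the ATE.
# The scoring rule (overlap of failing positions) is an assumption.

def failing_positions(signature):
    """Return the set of (pattern, position) pairs marked 'F'."""
    return {(pat, i)
            for pat, resp in signature.items()
            for i, val in enumerate(resp) if val == "F"}

def match_score(observed, candidate):
    """Jaccard similarity between observed and simulated fail sets."""
    obs, cand = failing_positions(observed), failing_positions(candidate)
    union = obs | cand
    return len(obs & cand) / len(union) if union else 0.0

def rank_faults(observed, fault_signatures):
    """Return candidate faults sorted by descending match score."""
    return sorted(((match_score(observed, sig), fault)
                   for fault, sig in fault_signatures.items()),
                  reverse=True)

# Observed ATE response and simulated fault signatures (toy data).
ate_observed = {"P1": "PPPPFPP", "P2": "PPPPPPP", "P3": "PFFPPPP"}
simulated = {
    "fault_A": {"P1": "PPPPFPP", "P2": "PPPPPPP", "P3": "PFFPPPP"},
    "fault_B": {"P1": "PPPPPPP", "P2": "PPFFFPF", "P3": "PFPPPPP"},
}
for score, fault in rank_faults(ate_observed, simulated):
    print(f"{fault}: score {score:.2f}")   # fault_A scores highest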

Page 8: Automated Volume  Diagnostics

© Synopsys 2013 8 Korea Test Conference 2013

[Diagram: Good Scan Operation. Two scan chains of D/SI flip-flops feed combinational logic. The ATE loads data into the chains, pulses the system clock, and unloads data that matches the expected values.]

Page 9: Automated Volume  Diagnostics

© Synopsys 2013 9 Korea Test Conference 2013

[Diagram: Scan Operation with Defect. The same scan chains and combinational logic, now with a defect: after loading, pulsing the system clock, and unloading, the captured values miscompare against the expected data at the scan cells affected by the defect.]

Page 10: Automated Volume  Diagnostics

© Synopsys 2013 10 Korea Test Conference 2013

[Diagram: Scan Diagnosis. The miscomparing scan cells are traced back through the combinational logic to localize the defect.]

Page 11: Automated Volume  Diagnostics

© Synopsys 2013 11 Korea Test Conference 2013

Diagnostics

• Subnet diagnosis enables even further localization of open defects

[Diagram: a driver net fanning out to several receivers; branches observed as Fail vs. Pass narrow the open defect down to the failing region of the subnet]

Page 12: Automated Volume  Diagnostics

© Synopsys 2013 12 Korea Test Conference 2013

What is Volume Diagnostics?

• Performs statistical analysis of diagnostics results from multiple failing chips
• Identifies systematic, yield-limiting issues by using design data
• Provides actionable information on high-value candidates for Physical Failure Analysis (PFA)
• Can apply to both chain and logic diagnostics

[Chart: Prioritizing the Systematic Yield Issues: relative yield fallout by defect type, Categories 1-4]

So why volume diagnostics vs. single diagnostics?

Page 13: Automated Volume  Diagnostics

© Synopsys 2013 13 Korea Test Conference 2013

Why Volume Diagnostics

• To explain why volume diagnostics are important, let's first consider BINSORT data
• What can be concluded from one die of BINSORT data?
• Can anything be concluded from this failing die, Bin 88?
• How important are Bin 88 failures on this wafer?
• Is it a systematic failure?

Page 14: Automated Volume  Diagnostics

© Synopsys 2013 14 Korea Test Conference 2013

Why Volume Diagnostics

• To understand its importance and characteristics, we need more data to draw a conclusion
• With the inclusion of other dies on this wafer map, it becomes clearer that:
1. Bin 88 is unlikely to be a systematic, nor is it an important failing bin on this wafer
2. Bin 68 is the most important issue here and shows a strong systematic signature

Analysis of a statistically significant volume of data provides a better level of understanding of the failing population.

Page 15: Automated Volume  Diagnostics

© Synopsys 2013 15 Korea Test Conference 2013

Why Volume Diagnostics

• Similar to the BINSORT example, volume diagnostic analysis of multiple dies/wafers/lots provides a clearer picture of the most important systematics in a sample

[Figure: with 1-die diagnostics, no systematics are observable (a single failing net). Increasing the analysis sample to 10-die diagnostics, the systematic becomes observable with increased volume (Failing Nets 1, 2, and 3).]

Page 16: Automated Volume  Diagnostics

© Synopsys 2013 16 Korea Test Conference 2013

What is Volume Diagnostics?

• Volume diagnostics can describe any statistical treatment of diagnostic data (both chain and logic)
• It can range from the simple (sketched below) to the extremely sophisticated

Basic Volume Diagnostics
• Manual parsing of diagnostic datalogs and data manipulation
• Simple summing, sorting and filtering to identify strong systematic signals
• Manual inspection of results
• Manual generation of coordinates for the FA team to localize defects

Fully Automated Volume Diagnostics
• Automatic/semi-automatic pre-filtering of bad diagnostic data
• Analyzes data from multiple directions, with single or multiple variable combinations
• Applies statistical tests and intelligent heuristics to interpret and quantify results
• Aligns non-diagnostic data sources to enrich understanding
• Generates tool files to drive FA equipment to the likely source of defects
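To make the basic end of this range concrete, here is a minimal Python sketch of the manual-style flow: parse per-die diagnostic callouts into records, then sum, sort, and filter to surface nets that repeat across dies. The record fields and the repeat threshold are assumptions for illustration, not a defined datalog format.

import pandas as pd

# Illustrative only: each row is one diagnostic callout
# (die identifier, suspect net, suspected layer). Field names are assumed.
callouts = pd.DataFrame([
    {"die": "W01-D03", "net": "u_core/n1234", "layer": "M3"},
    {"die": "W01-D07", "net": "u_core/n1234", "layer": "M3"},
    {"die": "W02-D11", "net": "u_core/n1234", "layer": "M3"},
    {"die": "W02-D15", "net": "u_io/n0042",   "layer": "M5"},
    {"die": "W03-D02", "net": "u_core/n1234", "layer": "M3"},
])

# Simple summing and sorting: how many distinct dies implicate each net?
repeaters = (callouts.groupby("net")["die"]
                     .nunique()
                     .sort_values(ascending=False)
                     .rename("failing_dies"))

# Simple filtering: flag nets seen on several dies as potential systematics.
MIN_DIES = 3  # assumed threshold for this toy example
print(repeaters[repeaters >= MIN_DIES])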

Page 17: Automated Volume  Diagnostics

© Synopsys 2013 17 Korea Test Conference 2013

Considerations during analysis

• Some important details should be considered in volume diagnostics:
– Should any data be removed prior to analysis?
– Is normalization required to interpret the data?
– How important are the findings, in terms of overall yield impact and statistical significance?
– Is there supporting data to validate the findings?
– Is the problem new, or pre-existing?
– Are the results something that FA can reasonably isolate?

Page 18: Automated Volume  Diagnostics

© Synopsys 2013 18 Korea Test Conference 2013

Automated Volume Diagnostics

• With volume diagnostics, we are usually trying to answer specific questions. For example:
– Is there a systematic metal or via location that is repeatedly failing?
– Are there standard cells that are failing above their entitlement?
– Are there scan chains that are consistently failing?
– Is there a design or IP block that is failing above its entitlement?
– What is the highest-yield-impact systematic in the analyzed dataset?
– Is there a systematic lithography weakpoint associated with a significant number of fails?
– Were any of the failures observable inline?

• There are a large number of possible questions that can be asked.
• A comprehensive and flexible system to quickly configure and analyze large amounts of data, and to direct the analysts to next steps, is necessary for a production volume diagnostic flow.

Page 19: Automated Volume  Diagnostics

© Synopsys 2013 19 Korea Test Conference 2013

Volume Diagnostics – Analysis

• An effective volume diagnostics flow should minimally provide:
– Identification of the systematic observation down to its smallest resolvable element
– Quantification of the systematic in terms of yield impact
– Statistical significance of the systematic
– Output information sufficient for failure analysis (wafer die X/Y and within-die coordinates) in a format easily consumed by FA labs
– Additional information to help FA teams isolate defects and/or test/design/process teams to investigate possible fixes

Page 20: Automated Volume  Diagnostics

© Synopsys 2013 20 Korea Test Conference 2013

Agenda

Current Challenges

Diagnostics vs Volume Diagnostics

Analysis Flows with Volume Diagnostics

Collaboration between Fab/Fabless

Conclusions

Page 21: Automated Volume  Diagnostics

© Synopsys 2013 21 Korea Test Conference 2013

Volume Diagnostics

• What are some examples of volume diagnostic analysis results?

– Design based:
  – Repeating nets or instances
  – Std cell systematics
  – Design/IP block sensitivity
  – Routing pattern dependency
  – Scan chain failures
  – Timing slack analysis
  – Voltage/temperature sensitivity

– Process based:
  – Spatial systematics
  – FEOL, metal or via layer systematic opens/shorts
  – Lot-to-lot, wafer-to-wafer variability
  – Process equipment/history dependency

– Test based:
  – Test pattern dependency
  – Tester/probecard dependency

• Or combinations of any of the above

Page 22: Automated Volume  Diagnostics

© Synopsys 2013 22 Korea Test Conference 2013

Use Case: Which Nets Fail Systematically?

A net is a unique element of a design: it occurs only once among the tens or hundreds of millions of nets in the design. Repetitive failures on a net therefore indicate a strong systematic signature.

What is the probability of a randomly occurring 5-die repeater on one net in a 1000-die sample, in a 10-million-net design?

p(1000 dies, 10 million nets, 5 coincident) ≈ 8e-16, roughly a 7.9-sigma event

The likelihood of this being a random event is extremely small.
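The ~8e-16 figure can be reproduced, to order of magnitude, under a simple model that the slide does not spell out: assume each of the 1000 failing dies implicates one net drawn uniformly at random from the 10 million nets, and compute the expected number of nets implicated by 5 or more dies. The sketch below encodes that assumption; it is one plausible reading, not the presenter's stated derivation.

from math import comb

N_DIES = 1_000        # failing dies in the sample
N_NETS = 10_000_000   # nets in the design
K = 5                 # dies implicating the same net (a "5-die repeater")

# Assumed model: each failing die implicates one net chosen uniformly at random.
p_hit = 1.0 / N_NETS  # chance a given failing die lands on one specific net

# Probability that one specific net is implicated by at least K dies
# (the K-th term dominates; a few terms of the binomial tail suffice).
p_one_net = sum(comb(N_DIES, k) * p_hit**k * (1 - p_hit)**(N_DIES - k)
                for k in range(K, K + 10))

# Expected number of such repeating nets across all nets in the design.
expected_repeaters = N_NETS * p_one_net
print(f"~{expected_repeaters:.1e} expected random 5-die repeaters")  # ~8e-16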

Page 23: Automated Volume  Diagnostics

© Synopsys 2013 23 Korea Test Conference 2013

Use Case – Are any std cells failing systematically?

• Early in technology development, FEOL issues are prominent

• It is important to evaluate std cell failures to characterize FEOL systematics

Page 24: Automated Volume  Diagnostics

© Synopsys 2013 24 Korea Test Conference 2013

Use Case – Are any std cells failing systematically?

• It is important to use design data to understand fail entitlement when interpreting results

[Chart annotation: the #1 cell is actually failing at its random baseline entitlement; what appeared to be the #2 item is actually the worst when comparing the gap versus entitlement]

Page 25: Automated Volume  Diagnostics

© Synopsys 2013 25 Korea Test Conference 2013

Entitlement Gap Discussion

• What is an entitlement gap?
– It means that failures aren't evaluated on an absolute basis
– Unfortunately, there is no 100% yield
– There is always some baseline amount of failures expected. Observed failures need to be compared against the expected amounts to properly conclude that a mechanism is systematic

Page 26: Automated Volume  Diagnostics

© Synopsys 2013 26 Korea Test Conference 2013

Some basic concepts

• How should we assess the effect of a factor?
• Let's consider the following general case

[Chart: Yield Loss for Factor X vs. Item Number. What is the amount of yield loss for item X? Is it ~30%?]

Page 27: Automated Volume  Diagnostics

© Synopsys 2013 27 Korea Test Conference 2013

Some basic concepts (cont'd)

• What if we had additional information about item X?
• For example, a comparison against the yield loss for the other elements of that variable?

[Chart: Yield Loss for Factor X vs. Item Number. Now, for item 20, what is the interesting quantity? We can say that item 20 has a 20% yield loss above the baseline entitlement of 10% loss for mechanism X.]

Page 28: Automated Volume  Diagnostics

© Synopsys 2013 28 Korea Test Conference 2013

Some basic concepts (cont'd)

• Is this a reasonable way to look at yield loss mechanisms?
• Actually, yield/product engineers do this regularly
• Consider the familiar bin loss Pareto

A bin Pareto by itself isn't that useful, but adding a reference for what the bin losses should be provides a baseline entitlement for each bin.

From the Pareto alone, it would appear that Bins 68, 6, and 41 are problematic at ~20% yield loss. But with the inclusion of the baseline entitlement, it is clear that only Bin 68 is the excursion, and the amount is ~15%.

Page 29: Automated Volume  Diagnostics

© Synopsys 2013 29 Korea Test Conference 2013

Entitled Bin Value

• From the previous example, what could explain why Bins 6 and 41 are high, yet not necessarily unexpected?
• Consider a situation where binning is done by major functional block within the design

[Figure: Bin 6 and Bin 41 cover functional blocks spanning large portions of the chip, while Bin 68 covers a much smaller portion. In this case, if the three bins are failing at the same rate, we would suspect that the Bin 68 failures have some unique systematic.]

Page 30: Automated Volume  Diagnostics

© Synopsys 2013 30 Korea Test Conference 2013

Gap Metrics

• General formula for the gap of a mechanism:

Gap_i = Observed_i - Expected_i

• In this case:
– Observed_i = measured from test, extracted by diagnostics, expressed as % of total dies
– Expected_i = the entitlement quantity, also expressed as % of total dies

• Why gap?
– The gap cannot exceed the observed fail %
– i.e., if the observed loss is 1%, then even if the fail rate is very high, the gap cannot exceed 1%. This ensures that the focus stays on high-yield-impact issues.
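A direct transcription of the formula, with both quantities expressed as a percentage of total dies as described above; the mechanism names and numbers are invented for illustration.

# Gap_i = Observed_i - Expected_i, both expressed in % of total dies.
# Mechanism names and values below are illustrative only.
mechanisms = {
    #                observed %  expected (entitlement) %
    "via12_open":      (1.0,       0.2),
    "cell_NAND2X1":    (0.6,       0.5),
    "m2_short":        (0.3,       0.3),
}

gaps = {name: obs - exp for name, (obs, exp) in mechanisms.items()}

# Because the gap is a difference of die percentages, it can never exceed
# the observed loss itself, which keeps the focus on high-impact issues.
for name, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: gap {gap:.2f}% of dies")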

Page 31: Automated Volume  Diagnostics

© Synopsys 2013 31

Gap to Model – Basics

• Let us consider another familiar example

Korea Test Conference 2013

[Figure: Device A and Device B are both designed and manufactured in the same technology (e.g. 28nm) in the same foundry; the area of Device A is half the area of Device B.]

Do you expect the yields to be the same or different? Is Y_A <, =, or > Y_B? We know intuitively that the larger die should yield less.

Page 32: Automated Volume  Diagnostics

© Synopsys 2013 32

Gap to Model – Basics

• Let's look at another example of this concept
• Imagine we are a foundry running 8 different products in the same fab, in the same process, during the same time period
• The yield summary per device is as follows

Korea Test Conference 2013

What conclusion can we make? Is there some device here that is not behaving properly? What is the missing information?

Page 33: Automated Volume  Diagnostics

© Synopsys 2013 33

Gap to Model – Basics

• Let's include the area of each device to see if that helps you come to a conclusion

Korea Test Conference 2013

Page 34: Automated Volume  Diagnostics

© Synopsys 2013 34

Gap to Model – Basics

• Based on the area of each device, we can estimate an expected yield using a yield model, a defectivity rate, and the device area

Korea Test Conference 2013

Now it is clearer that Device E is misbehaving.
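One common way to generate such an expected-yield column is a Poisson yield model, Y_expected = exp(-A * D0), using each device's area A and a defect density D0. The slide does not name its model, so the model choice, the areas, and the observed yields below are assumptions for illustration only.

from math import exp

D0 = 0.25  # assumed defect density in defects per cm^2 (illustrative)

# (area in cm^2, observed yield) per device -- all values invented.
devices = {
    "A": (0.50, 0.89), "B": (1.00, 0.78), "C": (0.70, 0.84),
    "D": (0.30, 0.93), "E": (0.40, 0.72), "F": (0.90, 0.80),
}

print(f"{'dev':>3} {'expected':>9} {'observed':>9} {'gap':>7}")
for name, (area, observed) in devices.items():
    expected = exp(-area * D0)   # Poisson yield model
    gap = expected - observed    # positive gap -> yielding below its model
    print(f"{name:>3} {expected:9.3f} {observed:9.3f} {gap:7.3f}")
# In this invented data only device E shows a large positive gap to model.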

Page 35: Automated Volume  Diagnostics

© Synopsys 2013 35 Korea Test Conference 2013

Use Case – Are any std cells failing systematically?

• It is important to use design data to understand fail entitlement when interpreting results

[Chart annotation: the #1 cell is actually failing at its random baseline entitlement; what appeared to be the #2 item is actually the worst when comparing the gap versus entitlement]

A volume diagnostic analysis tool should be able to use design normalizations and generate expected entitlements for proper interpretation.

Page 36: Automated Volume  Diagnostics

© Synopsys 2013 36 Korea Test Conference 2013

Volume Diagnostics – Yield Normalization

In addition to design normalization, it is important to normalize results to wafer yield.

Case A: 14/21 dies systematic. On this wafer, the effect of the systematic has a very large yield impact.
Case B: 14/21 dies systematic. On this wafer, the effect of the systematic has a very small yield impact.

Does the systematic have the same yield impact on both wafers?

Volume diagnostic analysis should consider the overall yield data on the wafer to understand the true yield impact of the systematic.
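A small arithmetic sketch of this normalization, with invented wafer counts: the same 14-of-21 systematic signature translates into very different wafer-level impact depending on how many dies the wafer has and how it yields overall.

# Illustrative numbers only: same systematic count, different wafer context.
wafers = {
    # (total dies on wafer, diagnosed failing dies, dies showing the systematic)
    "Case A (low-yield wafer)":  (30,  21, 14),
    "Case B (high-yield wafer)": (600, 21, 14),
}

for name, (total_dies, diagnosed, systematic) in wafers.items():
    share_of_fails = systematic / diagnosed   # share of the diagnosed failures
    wafer_impact   = systematic / total_dies  # impact expressed in % of all dies
    print(f"{name}: {share_of_fails:.0%} of diagnosed fails, "
          f"{wafer_impact:.1%} of the wafer's dies")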

Page 37: Automated Volume  Diagnostics

© Synopsys 2013 37 Korea Test Conference 2013

Physical Verification

• Use cases:
1. Overlay hotspots onto failing diagnostic nets or instances
– Localize the failure to a small point on long failing nets

[Figure: overlaying a net onto a litho weakpoint simulation hotspot narrows the failure location down to a very specific point on one layer; this net would be too long for FA without any additional information]

Page 38: Automated Volume  Diagnostics

© Synopsys 2013 38 Korea Test Conference 2013

DFM Hotspot Correlation

• In addition to helping FA, statistical analysis is also important to quantify the effect of different hotspot rules on diagnostic failures
• Various metrics, such as hotspot fail rate and candidate hit rate, are calculated and visualized (see the sketch below)
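A minimal sketch of the two metrics named above, computed from a naive overlay of hotspot locations and diagnostic candidate locations. The bounding-box match rule, the data layout, and the coordinates are assumptions for illustration.

# Illustrative only: hotspot fail rate and candidate hit rate from a
# simple layer-aware bounding-box overlay.

def overlaps(box_a, box_b):
    """Axis-aligned bounding boxes (xmin, ymin, xmax, ymax)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

# (layer, bounding box) tuples -- coordinates are invented.
hotspots   = [("M3", (10.0, 5.00, 10.20, 5.10)), ("M5", (40.0, 7.0, 40.1, 7.3))]
candidates = [("M3", (10.1, 5.05, 10.15, 5.08)), ("M2", (3.0, 3.0, 3.2, 3.1))]

def hit_count(items, others):
    return sum(any(layer == l2 and overlaps(box, b2) for l2, b2 in others)
               for layer, box in items)

hotspot_fail_rate  = hit_count(hotspots, candidates) / len(hotspots)    # hotspots that failed
candidate_hit_rate = hit_count(candidates, hotspots) / len(candidates)  # fails explained by a hotspot
print(f"hotspot fail rate:  {hotspot_fail_rate:.0%}")
print(f"candidate hit rate: {candidate_hit_rate:.0%}")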

Page 39: Automated Volume  Diagnostics

© Synopsys 2013 39 Korea Test Conference 2013

DFM Hotspot Correlation

[Figure: the hotspot location from the hotspot file and the fault location from the diagnostic log; the reported failing cell matches the sensitive via-bar hotspot location]

Page 40: Automated Volume  Diagnostics

© Synopsys 2013 40 Korea Test Conference 2013

Inline Defect Correlation

• Correlate inline defects with diagnostic candidates
• Various metrics, such as hotspot fail rate and candidate hit rate, are calculated and visualized

Page 41: Automated Volume  Diagnostics

© Synopsys 2013 41 Korea Test Conference 2013

Inline Defect Correlation

1. Use inline observed defects to narrow down the source of a diagnostic failure
– For long nets, FA might be difficult. If the net is overlaid with an inline defect, the FA team can go directly to that location on that layer to localize the defect
– For FEOL instances, the layer that may be the source of the defect can be identified

2. Use inline observed defects to disqualify candidates from FA
– If the source was already identified inline, it does not need additional FA characterization; it is better for the FA lab to spend time finding new defects. Skip FA on this candidate.

Page 42: Automated Volume  Diagnostics

© Synopsys 2013 42 Korea Test Conference 2013

Case Study: Large Fallout at Vddmin

• Problem: large Vddmin fallout observed
• Solution: an automated DFT-to-parametric correlation study was performed
• Considerations:
– 1000 cells x 100 parameters ~ 100,000 possible data pairs
– An automated algorithm is needed that searches through all pairs to find the most significant ones; a statistical test automatically finds the significant pair of results (cell and parameter), as sketched below

• Follow-on validation of this hypothesis by:
– Analyzing split lots (transistor skew lots to validate the finding) and historical trends
– Performing simulations (to verify whether this parametric behavior could be related to the diagnostic signal)
– Performing FA (construction analysis to validate the signal)
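As referenced above, a minimal sketch of the automated pair search: for every (failing cell, parametric measurement) pair, compare the parameter on dies where that cell was called out against the remaining dies with a statistical test, and rank the pairs by p-value. The data layout, the Mann-Whitney test, and the Bonferroni-style cutoff are assumptions for illustration, not the flow used in the case study.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Invented data: per-die parametric measurements and per-die cell callouts.
n_die = 400
params = {f"param_{j}": rng.normal(size=n_die) for j in range(20)}      # e.g. E-test values
cell_fail = {f"cell_{i}": rng.random(n_die) < 0.05 for i in range(50)}  # die fails this cell?

# Plant one real relationship so the scan has something to find.
cell_fail["cell_7"] = params["param_3"] < -1.0

results = []
for cell, failed in cell_fail.items():
    if failed.sum() < 5:            # skip cells with too few failing dies
        continue
    for pname, values in params.items():
        stat, p = mannwhitneyu(values[failed], values[~failed])
        results.append((p, cell, pname))

alpha = 0.05 / (len(cell_fail) * len(params))   # crude multiple-testing correction
for p, cell, pname in sorted(results)[:5]:
    flag = "SIGNIFICANT" if p < alpha else ""
    print(f"{cell:10s} vs {pname:10s}  p={p:.2e}  {flag}")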

Page 43: Automated Volume  Diagnostics

© Synopsys 2013 43 Korea Test Conference 2013

Physical Verification

• Use cases:
1. STA data alignment with failing instances
– Use static timing analysis results to assign a timing slack to failing transition faults

[Figure annotations: candidates with large slack are unlikely to be timing issues and are better candidates for FA; candidates with small slack are likely slow-path related and may have no visible defect]

Without binning transition candidates by slack, it is possible to confuse mechanisms and generate many NDF (no defect found) results; see the sketch below. (*Nelly Feldman, STMicroelectronics, Silicon Debug and Diagnostics Conference 2012)
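A small sketch of the slack binning referenced above: join transition-fault candidates with STA slack on the failing path, then separate small-slack (likely slow path, likely NDF) from large-slack (better FA target) candidates. The field names and the 100 ps threshold are assumptions for illustration.

# Illustrative only: split transition-fault candidates by STA slack.
candidates = [
    {"net": "u_alu/n87",  "slack_ps": 12.0},   # near-critical path
    {"net": "u_dma/n410", "slack_ps": 650.0},  # lots of timing margin
    {"net": "u_mem/n22",  "slack_ps": 35.0},
    {"net": "u_io/n9",    "slack_ps": 900.0},
]

SLACK_THRESHOLD_PS = 100.0  # assumed cutoff

small_slack = [c for c in candidates if c["slack_ps"] < SLACK_THRESHOLD_PS]
large_slack = [c for c in candidates if c["slack_ps"] >= SLACK_THRESHOLD_PS]

# Small-slack candidates are likely marginal timing rather than visible defects;
# large-slack candidates are better physical failure analysis targets.
print("likely slow-path (deprioritize for FA):", [c["net"] for c in small_slack])
print("better FA candidates:                  ", [c["net"] for c in large_slack])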

Page 44: Automated Volume  Diagnostics

© Synopsys 2013 44 Korea Test Conference 2013

Use Case – Correlation to Memories

• Modern SoCs give us the opportunity to use other product data to help explain diagnostics
• Leveraging correlated results from bitmap classification versus logic diagnostics gives us the ability to better understand correlated failures

Page 45: Automated Volume  Diagnostics

© Synopsys 2013 45 Korea Test Conference 2013

Use Case – Correlation to Memories

• Using the correlation of bit classifications with cell fail results from diagnostics, we can attain a better understanding of correlated failures
• In this example, these diagnosed FADDX1 cell failures can be investigated via FA of single-bit failures

Page 46: Automated Volume  Diagnostics

© Synopsys 2013 46 Korea Test Conference 2013

Use Case – Via Analysis

• In this experiment, failures on Via12C were injected above a background random via fail rate on all other vias

Note: vias that do not have a significant effect on yield will not show up in the results of this method, due to the statistical significance validation.

Page 47: Automated Volume  Diagnostics

© Synopsys 2013 47 Korea Test Conference 2013

Use Case – Via Analysis

• Finally, via fail rate values are converted through a yield model into overall yield impact

A yield model transformation is necessary to understand the significance of a result: a via may have a high fail rate but low usage in the design, in which case the yield impact may be small even with a high fail rate.
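A minimal sketch of that conversion, assuming a simple model in which each via of a given type fails independently, so that the die-level yield impact of via type i is approximately 1 - (1 - p_i)^n_i for per-via fail rate p_i and n_i vias per die. The model and all numbers are assumptions; the presentation does not specify its yield model.

# Illustrative only: convert per-via fail rates into die-level yield impact
# assuming independent via failures: impact = 1 - (1 - p) ** count_per_die.
via_stats = {
    #                per-via fail rate   vias per die
    "Via12C":         (2.0e-9,            5_000_000),
    "Via23":          (1.0e-10,          20_000_000),
    "Via12B (rare)":  (5.0e-8,                1_000),  # high rate, low usage
}

for via, (p_fail, count) in via_stats.items():
    yield_impact = 1.0 - (1.0 - p_fail) ** count
    print(f"{via:14s} fail rate {p_fail:.1e}  usage {count:>10,d}  "
          f"yield impact {yield_impact:.2%}")
# A via type with a high fail rate but low usage (Via12B here) still has a
# small yield impact, while a modest rate on a heavily used via dominates.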

Page 48: Automated Volume  Diagnostics

© Synopsys 2013 48 Korea Test Conference 2013

Diagnostic Considerations

• Some things to consider when analyzing diagnostics:
– Equivalent faults
– Correlated failures
– Diagnostics are heavily resource constrained
– Need to make more intelligent use of upstream data to make diagnostics more targeted: the biggest bang for the buck

Page 49: Automated Volume  Diagnostics

© Synopsys 2013 49 Korea Test Conference 2013

Agenda

Current Challenges

Diagnostics vs Volume Diagnostics

Analysis Flows with Volume Diagnostics

Collaboration between Fab/Fabless

Conclusions

Page 50: Automated Volume  Diagnostics

© Synopsys 2013 50 Korea Test Conference 2013

Volume Diagnostics Methodology

• Statistically prioritize the candidates from multiple failing dies

• Localize likely failure sites by mask layer and segment/Via using correlations

[Figure: diagnostics, timing, inline, LRC, DRC, and LEF/DEF and layout data feed the volume diagnostics flow, reducing thousands of likely FA sites down to a list of the top 10 sites for PFA. More data into volume diagnostics enables better characterization.]

Page 51: Automated Volume  Diagnostics

© Synopsys 2013 51 Korea Test Conference 2013

Data Used in Volume Diagnostics

[Diagram: data used in volume diagnostics, marked as required or optional and as coming from design or from the fab: LEF/DEF, diagnostics callouts, STA, DRC/hotspot, GDS, OPC verification, WET/WAT/E-test, in-line defect/CFM, in-line CD metrology, and BIN and parametric test data]

Page 52: Automated Volume  Diagnostics

© Synopsys 2013 52 Korea Test Conference 2013

Scenario 1: Independent – Access to LEF/DEF is Assured

[Diagram: all of the design and fab data sources listed on the previous page are available to a single organization]

At an IDM, or at a foundry for its own test chip.

Page 53: Automated Volume  Diagnostics

© Synopsys 2013 53 Korea Test Conference 2013

Scenario 2: Foundry-Fabless – Fabless Customers Don't Give LEF/DEF to the Foundry

[Diagram: the same data sources, now split between the two parties: design data from the fabless company and fab data from the foundry]

Page 54: Automated Volume  Diagnostics

© Synopsys 2013 54 Korea Test Conference 2013

Foundry-Fabless Collaboration

[Diagram: the design-side and fab-side data sources are brought together through a Yield Explorer Secure Snapshot]

Secure Snapshots protect the privacy of sensitive data on either side.

Page 55: Automated Volume  Diagnostics

© Synopsys 2013 55 Korea Test Conference 2013

TetraMAX + Yield Explorer: Faster Root Cause Analysis for Yield Ramp

• Enables analysis of silicon defects to accelerate product ramp and increase yield
– TetraMAX diagnoses individual failing die for defect locations
– Yield Explorer correlates these defects across many failing die with physical design and test data

• Easy to deploy
– Support for industry-standard formats (LEF/DEF and STDF)
– Direct interface between TetraMAX and Yield Explorer

[Diagram: patterns and STDF feed TetraMAX (diagnostics), which passes candidates and physical data, together with LEF/DEF, to Yield Explorer]

Page 56: Automated Volume  Diagnostics

© Synopsys 2013 56 Korea Test Conference 2013

Agenda

Current Challenges

Diagnostics vs Volume Diagnostics

Analysis Flows with Volume Diagnostics

Collaboration between Fab/Fabless

Conclusions

Page 57: Automated Volume  Diagnostics

© Synopsys 2013 57 Korea Test Conference 2013

Conclusions

• Design/process systematics are becoming worse at advanced nodes

• Volume Diagnostics enables better and faster analysis and FA turnaround

• Many analysis flows are enabled with volume diagnostics
• Collaboration between fabless and foundry is required for complete analysis
• Yield Explorer with TetraMAX provides a complete platform for volume diagnostic analysis