 
Adaptive Testing
Written by Peter M O'Neill   

 

I. Adaptive Test

A. Definition

Adaptive test is concerned with making predictions about the behavior of the devices being tested from the statistical distributions of their measurements.  It is based on two main concepts.

 

The first is the concept of a statistical outlier: if a measurement on a part differs significantly from the expected pattern of behavior, that part is probably bad.  The part is assumed to be bad even if the measurement is within its specification limits, because something unexpected, a random defect, must have caused it to differ from the similarly processed members of its population.  Outlier detection offers two benefits: it increases the defect detection sensitivity (effectiveness) of a measurement without improving how the measurement is made, and it can detect a defect with fewer or easier-to-make measurements (increasing test efficiency).  Outlier methods apply to value (analog) measurements or to counts of pass/fail tests.  Outlier measurements should be chosen for broad defect coverage, such as IDDQ or minimum VDD.
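The outlier concept can be sketched in a few lines.  The example below uses a robust z-score (median and median absolute deviation) so the outlier itself does not distort the estimate of "expected" behavior; the IDDQ values, the function name, and the 3.5 threshold are illustrative assumptions, not part of any standard method.

```python
import statistics

def mad_outliers(values, threshold=3.5):
    """Flag values whose robust z-score exceeds the threshold.

    Uses the median and MAD (median absolute deviation) so that the
    outliers themselves do not distort the estimate of expected
    behavior, unlike a mean/standard-deviation test would.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return [False] * len(values)
    # 0.6745 scales the MAD to be comparable to a standard deviation
    # for normally distributed data.
    return [abs(0.6745 * (v - med) / mad) > threshold for v in values]

# Hypothetical IDDQ readings (uA) from similarly processed dice: all are
# well inside a 200 uA spec limit, but one differs sharply from its peers.
iddq = [21.0, 20.5, 22.1, 21.7, 95.0, 20.9, 21.3]
print(mad_outliers(iddq))  # → [False, False, False, False, True, False, False]
```

Note that the 95 uA part would pass a conventional spec-limit test; only its departure from the population marks it as a likely defect.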

 

The second concept is adapting the test flow based on prior results, so that the test most likely to fail is run next.  The most common form of flow adaptation is sampling: if a measurement is in statistical control, it does not need to be made on every part to verify, with an acceptable probability, that every part is in compliance.  This concept comes straight from statistical process control theory as practiced in control charting, process capability, and metrology (i.e., gauge repeatability and reproducibility) studies.  Sampling improves test efficiency by testing only as much as is required to meet the outgoing quality target given the incoming (manufactured) quality level.  It applies to tests affected by smoothly varying trends, i.e., those with short-range predictability, such as the effect of normal process variation on a circuit performance parameter.  Sampling cannot be used for tests intended to detect outliers, because outliers are caused by random defects which, by definition, are not predictable.
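A minimal sketch of a control-chart-driven sampling decision follows.  The Shewhart-style ±3σ rule is standard SPC practice; the function names, the ring-oscillator example, and the specific full/reduced rates are illustrative assumptions.

```python
def in_control(history, mean, sigma, n_sigma=3.0):
    """Return True if every recent measurement stays within the control
    limits (mean +/- n_sigma * sigma) established during
    characterization -- the basic Shewhart control-chart rule."""
    lo, hi = mean - n_sigma * sigma, mean + n_sigma * sigma
    return all(lo <= x <= hi for x in history)

def next_sample_rate(history, mean, sigma, full_rate=1.0, reduced_rate=0.1):
    """Test every part while the parameter is out of control; drop to a
    reduced sampling rate once it is back in statistical control."""
    return reduced_rate if in_control(history, mean, sigma) else full_rate

# Hypothetical ring-oscillator frequencies (MHz) with a characterized
# mean of 100 and sigma of 1: all within limits, so sample at 10%.
recent = [99.2, 100.4, 101.1, 98.9, 100.0]
print(next_sample_rate(recent, mean=100.0, sigma=1.0))  # → 0.1
```

A production flow would also apply run rules (trends, shifts) and tie the reduced rate to the outgoing quality target, but the branch point is the same: sample only while the process is predictable.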

B. Variations

1. Adaptation

Outlier methods rely on modeling the expected behavior, but going from a model to deciding whether a part is good requires four steps: choosing a model form (equation) from a family of candidate models, determining the values of the model's parameters to fit the specific behavior, computing the test limit for the part in question from this model, and finally comparing the measured value to the limit to make the decision.  Each of these operations can take place in characterization prior to test execution, during test execution, or be assumed, depending on the adaptability (effectiveness) required of the test.  This choice in turn determines the complexity of the test and result-processing flow.
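The four steps can be sketched end to end.  This example is an assumption-laden illustration, not a production method: the candidate family is a set of polynomial forms, the selection criterion is a crude AIC-like penalty, the limit is ±3 residual standard deviations, and the temperature/leakage data are invented.

```python
import numpy as np

def fit_best_form(x, y, degrees=(0, 1, 2)):
    """Steps 1 and 2: try each candidate polynomial form, fit its
    parameters by least squares, and keep the form with the lowest
    residual penalized for extra parameters (a crude AIC-like score)."""
    best = None
    for d in degrees:
        coeffs = np.polyfit(x, y, d)
        resid = y - np.polyval(coeffs, x)
        score = len(x) * np.log(np.mean(resid**2) + 1e-12) + 2 * (d + 1)
        if best is None or score < best[0]:
            best = (score, coeffs)
    return best[1]

def is_outlier(x, y, x_new, y_new, k=3.0):
    """Steps 3 and 4: compute the limit predicted by the fitted model
    and flag the new measurement if it sits more than k residual
    standard deviations from the expected value."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    coeffs = fit_best_form(x, y)
    resid_sd = np.std(y - np.polyval(coeffs, x))
    expected = np.polyval(coeffs, x_new)
    return abs(y_new - expected) > k * max(resid_sd, 1e-9)

# Hypothetical leakage trending roughly linearly with temperature:
temps = [25, 35, 45, 55, 65]
leak  = [1.0, 1.6, 1.9, 2.6, 3.0]
print(is_outlier(temps, leak, x_new=75, y_new=9.0))  # far off the trend
```

Running any of these steps in characterization rather than at test time trades adaptability for a simpler execution flow, as the figure below summarizes.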

A conventional fixed threshold has no adaptation.  A known model, with both its equation and parameter values given, can make a prediction from values measured on only one sample (vector, test, or device, as related to the model) and can handle the range of variation of a failure mechanism.  A known model form requires measuring multiple samples before its parameters can be fitted and a prediction computed, giving it the ability to adapt to drift in the characteristics of the failure mechanism.  Finally, choosing from a family of model forms allows adaptation to the switching of failures among mechanisms, but requires making even more measurements to identify the best-fitting candidate model before evaluating it to give the expected value.

 

 

                      Degree of adaptation →

Step in making        Fixed        Known        Known        Automatic    Machine
decision ↓            Threshold    Model        Model Form   Model        Learning
                      (no          (simple      (complex     Discovery    (no idea what
                      adaptation)  adaptation)  adaptation)  (high        to look for)
                                                             adaptation)

Decision              E            E            E            E            E
Limit                 C            E            E            E            E
Parameter Values      A            C            E            E            E
Model Form            A            A            C            E            E
Model Form Family     A            A            A            A            E

Figure 1 – Where aspects of an outlier identification approach are performed:  C – Characterization, E – Test Execution, A – Assumed or given

2. Scope

Another important consideration is the scope over which the threshold should be adapted, e.g. test, die, wafer, or lot.  This can be determined experimentally by variance components analysis, which is related to the analysis of variance (ANOVA) method.
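A method-of-moments sketch of such a variance-components split is shown below.  It is a simplification under stated assumptions: readings are grouped as wafers within lots, the invented data are balanced, and no degrees-of-freedom corrections are applied; a real study would use a proper ANOVA or REML fit.

```python
import statistics

def variance_components(lots):
    """Crude variance-components split for measurements grouped as
    wafers within lots: average within-wafer variance vs. variance of
    wafer means vs. variance of lot means.  The scope whose component
    dominates is where the adaptive limit should be recomputed.
    (Method-of-moments sketch only; a production study would use ANOVA
    with degrees-of-freedom corrections.)"""
    all_wafer_means, lot_means, within = [], [], []
    for lot in lots:  # lot = list of wafers; wafer = list of readings
        wafer_means = [statistics.fmean(w) for w in lot]
        within.extend(statistics.pvariance(w) for w in lot)
        all_wafer_means.extend(wafer_means)
        lot_means.append(statistics.fmean(wafer_means))
    return {
        "within_wafer": statistics.fmean(within),
        "wafer_to_wafer": statistics.pvariance(all_wafer_means),
        "lot_to_lot": statistics.pvariance(lot_means),
    }

# Hypothetical data: two lots of two wafers each.  Wafer-to-wafer
# variation dominates, so a per-wafer adaptive threshold is indicated.
lots = [[[10.0, 10.1], [12.0, 12.1]], [[10.2, 10.3], [11.9, 12.0]]]
print(variance_components(lots))
```

Here the wafer-to-wafer component dwarfs the others, so adapting the limit per wafer captures most of the systematic variation without recomputing per die.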

II. Reporting Adaptive Decision Results

A. Introduction

In adaptive test, test results and decisions are recorded for these uses:

1. Sorting – Identifying acceptance or grade bin.  This is usually done with an electronic wafer map at probe, and part trays at package test.

2. Yield analysis – Diagnosing the cause of bad parts and how to detect them more efficiently.  This is usually done by recording selected test decision and measurement values in a database.

3. Decision-making – Accepting the part or branching its test flow.  Making decisions beyond the immediate comparison of a measurement to an expected value is the heart of adaptive test.  Reporting on how these decisions were made is as essential to yield analysis as reporting which tests failed is to traditional test.

 

STDF records are sufficient to report the raw results of each test for further analysis and the decision (initial or final) on each part for sorting.  The soft bin in the PRR and the good count in the WRR can be changed by the decision algorithm together with writing a corresponding ATR to note that this algorithm was run.  This would still enable downstream processes to select the proper parts.

However, STDF is not sufficient to record how all types of acceptance decisions are made, which is needed to analyze yield and test (decision algorithm) effectiveness.  Reporting on some decisions requires new fields, related to the decision algorithm, within records scoped to the data used to make the decision.  This could be done either by extending STDF or by creating a parallel report structure.  Since STDF is the industry standard, it is preferable to use it to report adaptive test results and, as the following shows, STDF is amenable to this new use.

B. Interim Use of Existing STDF Records

The raw measurement and setting information needed for decision-making is contained in the FTR, PTR, and MPR records.  Existing STDF offers some ability to report real-time and post-processing decisions made on these records.  As mentioned above, decision-making software can change the SOFT_BIN field in a part's PRR and note that it made the change by adding an ATR.  Besides the date and time of the modification, the ATR provides only a variable-length character field which, lacking structure, is sufficient only to record the name of the program that made the modification and perhaps the options with which it was run.

Outlier identification is equivalent to adapting the test limits, so yield analysis requires knowing the limit applied to each part.  For simple univariate outlier methods that apply to data scoped within a test, e.g. IDDQ at multiple vectors or a spec search, it is possible to do this by rewriting the LO_LIMIT and HI_LIMIT fields in the PTR or MPR.  But there is no such one-to-one correspondence between measurements and limits for wider data scoping, e.g. die or wafer, or for multivariate methods.  In these cases, transformations or combinations of measurements, not the individual measurements themselves, are compared to the adapted limits.  These types of decisions require the generation of new records to report their results.
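The multivariate case can be made concrete with a Mahalanobis-distance screen, a common choice for correlated test pairs.  The (IDDQ, Fmax) data and the threshold are invented for illustration; the point is that the limit applies to a combination of measurements, so there is no single LO_LIMIT/HI_LIMIT per test to rewrite.

```python
import numpy as np

def mahalanobis_outliers(X, threshold):
    """Flag parts whose vector of measurements lies far from the
    population centroid in Mahalanobis distance.  The decision is made
    on a combination of measurements, so no per-test LO_LIMIT/HI_LIMIT
    rewrite in a PTR could record how it was reached."""
    X = np.asarray(X, float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.pinv(cov)  # pseudo-inverse tolerates collinearity
    d = np.sqrt(np.einsum("ij,jk,ik->i", X - mu, inv, X - mu))
    return d > threshold

# Hypothetical (IDDQ, Fmax) pairs: the last part is inside each
# univariate range but off the correlation between the two tests.
X = [(20, 100), (22, 104), (24, 108), (26, 112), (21, 111)]
print(mahalanobis_outliers(X, threshold=1.7))  # flags only the last part
```

The flagged part would pass both univariate screens; only the joint limit catches it, which is exactly the information a new record type would need to carry.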

It is possible to add anything to an STDF file (or stream) as a character string in a DTR, but this has no structure, other than record type and sub-type, that other tools can automatically interpret.  It would be better to use the Generic Data Record (GDR) because it is divided into user-defined fields.  The GDR's REC_TYP could designate its data scope, as this field is already defined in STDF, while its REC_SUB (record sub-type) could designate the decision algorithm.  The remaining fields would then be the parameters of the model used in the algorithm.
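A sketch of assembling such a record follows.  The scope and algorithm codes are purely hypothetical (the STDF specification does not define them; it fixes the GDR's own type codes), and serialization to STDF binary, including the GDR's typed-field encoding, is left to an STDF writer library.

```python
# Hypothetical codes illustrating the proposal above: REC_TYP carries
# the data scope and REC_SUB the decision algorithm.  These values are
# NOT defined by the STDF specification.
SCOPE_WAFER = 2
ALG_ROBUST_ZSCORE = 101

def make_decision_gdr(scope, algorithm, params):
    """Assemble the logical fields of a Generic Data Record (GDR)
    carrying an adaptive-test decision: record type = data scope,
    record sub-type = algorithm, generic data = the model parameters
    used.  "R*8" is STDF's notation for an 8-byte real."""
    return {
        "REC_TYP": scope,
        "REC_SUB": algorithm,
        "GEN_DATA": [("R*8", v) for v in params],  # typed value pairs
    }

# e.g. a wafer-scoped robust z-score limit: median, MAD, and k.
rec = make_decision_gdr(SCOPE_WAFER, ALG_ROBUST_ZSCORE, [21.3, 0.4, 3.5])
print(rec["GEN_DATA"])
```

Because every field is typed, a downstream yield-analysis tool could reconstruct the limit applied to any part from the record alone, which the free-text DTR cannot guarantee.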

Finally, part counts in the wafer’s WRR and bin counts in the test lot’s SBR can be updated by decision-making software but this information would be more useful in a report designed specifically for summarizing adaptive tests.

 

 