
EURACHEM / CITAC Guide CG 4

Quantifying Uncertainty in Analytical Measurement

Second Edition

QUAM:2000.1


EURACHEM/CITAC Guide*

Quantifying Uncertainty in Analytical Measurement

Second Edition

Editors
S L R Ellison (LGC, UK)
M Rösslein (EMPA, Switzerland)
A Williams (UK)

Composition of the Working Group

EURACHEM members
A Williams (Chairman), UK
S Ellison (Secretary), LGC, Teddington, UK
M Berglund, Institute for Reference Materials and Measurements, Belgium
W Haesselbarth, Bundesanstalt für Materialforschung und Prüfung, Germany
K Hedegaard, EUROM II
R Kaarls, Netherlands Measurement Institute, The Netherlands
M Månsson, SP Swedish National Testing and Research Institute, Sweden
M Rösslein, EMPA St. Gallen, Switzerland
R Stephany, National Institute of Public Health and the Environment, The Netherlands
A van der Veen, Netherlands Measurement Institute, The Netherlands
W Wegscheider, University of Mining and Metallurgy, Leoben, Austria
H van de Wiel, National Institute of Public Health and the Environment, The Netherlands
R Wood, Food Standards Agency, UK

CITAC members
Pan Xiu Rong, Director, NRCCRM, China
M Salit, National Institute of Standards and Technology, USA
A Squirrell, NATA, Australia
K Yasuda, Hitachi Ltd, Japan

AOAC representatives
R Johnson, Agricultural Analytical Services, Texas State Chemist, USA
Jung-Keun Lee, U.S. F.D.A., Washington
D Mowrey, Eli Lilly & Co., Greenfield, USA

IAEA representatives
P De Regge, IAEA, Vienna
A Fajgelj, IAEA, Vienna

EA representative
D Galsworthy, UKAS, UK

Acknowledgements
This document has been produced primarily by a joint EURACHEM/CITAC Working Group with the composition shown above. The editors are grateful to all these individuals and organisations and to others who have contributed comments, advice and assistance.

Production of this Guide was in part supported under contract with the UK Department of Trade and Industry as part of the National Measurement System Valid Analytical Measurement (VAM) Programme.

*CITAC Reference: This Guide constitutes CITAC Guide number 4.


Quantifying Uncertainty in Analytical Measurement

English edition

Second edition 2000

ISBN 0 948926 15 5

Copyright 2000
Copyright in this document is vested in the contributing members of the working group.


CONTENTS

FOREWORD TO THE SECOND EDITION

1. SCOPE AND FIELD OF APPLICATION

2. UNCERTAINTY
   2.1. DEFINITION OF UNCERTAINTY
   2.2. UNCERTAINTY SOURCES
   2.3. UNCERTAINTY COMPONENTS
   2.4. ERROR AND UNCERTAINTY

3. ANALYTICAL MEASUREMENT AND UNCERTAINTY
   3.1. METHOD VALIDATION
   3.2. CONDUCT OF EXPERIMENTAL STUDIES OF METHOD PERFORMANCE
   3.3. TRACEABILITY

4. THE PROCESS OF MEASUREMENT UNCERTAINTY ESTIMATION

5. STEP 1. SPECIFICATION OF THE MEASURAND

6. STEP 2. IDENTIFYING UNCERTAINTY SOURCES

7. STEP 3. QUANTIFYING UNCERTAINTY
   7.1. INTRODUCTION
   7.2. UNCERTAINTY EVALUATION PROCEDURE
   7.3. RELEVANCE OF PRIOR STUDIES
   7.4. EVALUATING UNCERTAINTY BY QUANTIFICATION OF INDIVIDUAL COMPONENTS
   7.5. CLOSELY MATCHED CERTIFIED REFERENCE MATERIALS
   7.6. USING PRIOR COLLABORATIVE METHOD DEVELOPMENT AND VALIDATION STUDY DATA
   7.7. USING IN-HOUSE DEVELOPMENT AND VALIDATION STUDIES
   7.8. EVALUATION OF UNCERTAINTY FOR EMPIRICAL METHODS
   7.9. EVALUATION OF UNCERTAINTY FOR AD-HOC METHODS
   7.10. QUANTIFICATION OF INDIVIDUAL COMPONENTS
   7.11. EXPERIMENTAL ESTIMATION OF INDIVIDUAL UNCERTAINTY CONTRIBUTIONS
   7.12. ESTIMATION BASED ON OTHER RESULTS OR DATA
   7.13. MODELLING FROM THEORETICAL PRINCIPLES
   7.14. ESTIMATION BASED ON JUDGEMENT
   7.15. SIGNIFICANCE OF BIAS

8. STEP 4. CALCULATING THE COMBINED UNCERTAINTY
   8.1. STANDARD UNCERTAINTIES
   8.2. COMBINED STANDARD UNCERTAINTY
   8.3. EXPANDED UNCERTAINTY


9. REPORTING UNCERTAINTY
   9.1. GENERAL
   9.2. INFORMATION REQUIRED
   9.3. REPORTING STANDARD UNCERTAINTY
   9.4. REPORTING EXPANDED UNCERTAINTY
   9.5. NUMERICAL EXPRESSION OF RESULTS
   9.6. COMPLIANCE AGAINST LIMITS

APPENDIX A. EXAMPLES
   INTRODUCTION
   EXAMPLE A1: PREPARATION OF A CALIBRATION STANDARD
   EXAMPLE A2: STANDARDISING A SODIUM HYDROXIDE SOLUTION
   EXAMPLE A3: AN ACID/BASE TITRATION
   EXAMPLE A4: UNCERTAINTY ESTIMATION FROM IN-HOUSE VALIDATION STUDIES. DETERMINATION OF ORGANOPHOSPHORUS PESTICIDES IN BREAD
   EXAMPLE A5: DETERMINATION OF CADMIUM RELEASE FROM CERAMIC WARE BY ATOMIC ABSORPTION SPECTROMETRY
   EXAMPLE A6: THE DETERMINATION OF CRUDE FIBRE IN ANIMAL FEEDING STUFFS
   EXAMPLE A7: DETERMINATION OF THE AMOUNT OF LEAD IN WATER USING DOUBLE ISOTOPE DILUTION AND INDUCTIVELY COUPLED PLASMA MASS SPECTROMETRY

APPENDIX B. DEFINITIONS

APPENDIX C. UNCERTAINTIES IN ANALYTICAL PROCESSES

APPENDIX D. ANALYSING UNCERTAINTY SOURCES
   D.1 INTRODUCTION
   D.2 PRINCIPLES OF APPROACH
   D.3 CAUSE AND EFFECT ANALYSIS
   D.4 EXAMPLE

APPENDIX E. USEFUL STATISTICAL PROCEDURES
   E.1 DISTRIBUTION FUNCTIONS
   E.2 SPREADSHEET METHOD FOR UNCERTAINTY CALCULATION
   E.3 UNCERTAINTIES FROM LINEAR LEAST SQUARES CALIBRATION
   E.4 DOCUMENTING UNCERTAINTY DEPENDENT ON ANALYTE LEVEL

APPENDIX F. MEASUREMENT UNCERTAINTY AT THE LIMIT OF DETECTION/LIMIT OF DETERMINATION
   F1. INTRODUCTION
   F2. OBSERVATIONS AND ESTIMATES
   F3. INTERPRETED RESULTS AND COMPLIANCE STATEMENTS

APPENDIX G. COMMON SOURCES AND VALUES OF UNCERTAINTY

APPENDIX H. BIBLIOGRAPHY


Foreword to the Second Edition

Many important decisions are based on the results of chemical quantitative analysis; the results are used, for example, to estimate yields, to check materials against specifications or statutory limits, or to estimate monetary value. Whenever decisions are based on analytical results, it is important to have some indication of the quality of the results, that is, the extent to which they can be relied on for the purpose in hand. Users of the results of chemical analysis, particularly in those areas concerned with international trade, are coming under increasing pressure to eliminate the replication of effort frequently expended in obtaining them. Confidence in data obtained outside the user’s own organisation is a prerequisite to meeting this objective. In some sectors of analytical chemistry it is now a formal (frequently legislative) requirement for laboratories to introduce quality assurance measures to ensure that they are capable of, and are, providing data of the required quality. Such measures include: the use of validated methods of analysis; the use of defined internal quality control procedures; participation in proficiency testing schemes; accreditation based on ISO 17025 [H.1]; and establishing traceability of the results of the measurements.

In analytical chemistry, there has been great emphasis on the precision of results obtained using a specified method, rather than on their traceability to a defined standard or SI unit. This has led to the use of “official methods” to fulfil legislative and trading requirements. However, as there is now a formal requirement to establish the confidence of results, it is essential that a measurement result is traceable to a defined reference such as an SI unit, reference material or, where applicable, a defined or empirical method (see section 5.2.). Internal quality control procedures, proficiency testing and accreditation can be an aid in establishing evidence of traceability to a given standard.

As a consequence of these requirements, chemists are, for their part, coming under increasing pressure to demonstrate the quality of their results, and in particular to demonstrate their fitness for purpose by giving a measure of the confidence that can be placed on the result. This is expected to include the degree to which a result would be expected to agree with other results, normally irrespective of the analytical methods used. One useful measure of this is measurement uncertainty.

Although the concept of measurement uncertainty has been recognised by chemists for many years, it was the publication in 1993 of the “Guide to the Expression of Uncertainty in Measurement” [H.2] by ISO in collaboration with BIPM, IEC, IFCC, IUPAC, IUPAP and OIML, which formally established general rules for evaluating and expressing uncertainty in measurement across a broad spectrum of measurements. This EURACHEM document shows how the concepts in the ISO Guide may be applied in chemical measurement. It first introduces the concept of uncertainty and the distinction between uncertainty and error. This is followed by a description of the steps involved in the evaluation of uncertainty, with the processes illustrated by worked examples in Appendix A.

The evaluation of uncertainty requires the analyst to look closely at all the possible sources of uncertainty. However, although a detailed study of this kind may require a considerable effort, it is essential that the effort expended should not be disproportionate. In practice a preliminary study will quickly identify the most significant sources of uncertainty and, as the examples show, the value obtained for the combined uncertainty is almost entirely controlled by the major contributions. A good estimate of uncertainty can be made by concentrating effort on the largest contributions. Further, once evaluated for a given method applied in a particular laboratory (i.e. a particular measurement procedure), the uncertainty estimate obtained may be reliably applied to subsequent results obtained by the method in the same laboratory, provided that this is justified by the relevant quality control data. No further effort should be necessary unless the procedure itself or the equipment used is changed, in which case the uncertainty estimate would be reviewed as part of the normal re-validation.

The first edition of the EURACHEM Guide for “Quantifying Uncertainty in Analytical Measurement” [H.3] was published in 1995 based on the ISO Guide.


This second edition of the EURACHEM Guide has been prepared in the light of practical experience of uncertainty estimation in chemistry laboratories and the even greater awareness of the need to introduce formal quality assurance procedures by laboratories. The second edition stresses that the procedures introduced by a laboratory to estimate its measurement uncertainty should be integrated with existing quality assurance measures, since these measures frequently provide much of the information required to evaluate the measurement uncertainty. The guide therefore provides explicitly for the use of validation and related data in the construction of uncertainty estimates in full compliance with formal ISO Guide principles. The approach is also consistent with the requirements of ISO 17025:1999 [H.1].

NOTE Worked examples are given in Appendix A. A numbered list of definitions is given at Appendix B. The convention is adopted of printing defined terms in bold face upon their first occurrence in the text, with a reference to Appendix B enclosed in square brackets. The definitions are, in the main, taken from the International vocabulary of basic and general terms in metrology (VIM) [H.4], the Guide [H.2] and ISO 3534 (Statistics - Vocabulary and symbols) [H.5]. Appendix C shows, in general terms, the overall structure of a chemical analysis leading to a measurement result. Appendix D describes a general procedure which can be used to identify uncertainty components and plan further experiments as required; Appendix E describes some statistical operations used in uncertainty estimation in analytical chemistry. Appendix F discusses measurement uncertainty near detection limits. Appendix G lists many common uncertainty sources and methods of estimating the value of the uncertainties. A bibliography is provided at Appendix H.


1. Scope and Field of Application

1.1. This Guide gives detailed guidance for the evaluation and expression of uncertainty in quantitative chemical analysis, based on the approach taken in the ISO “Guide to the Expression of Uncertainty in Measurement” [H.2]. It is applicable at all levels of accuracy and in all fields - from routine analysis to basic research and to empirical and rational methods (see section 5.3.). Some common areas in which chemical measurements are needed, and in which the principles of this Guide may be applied, are:

• Quality control and quality assurance in manufacturing industries.

• Testing for regulatory compliance.

• Testing utilising an agreed method.

• Calibration of standards and equipment.

• Measurements associated with the development and certification of reference materials.

• Research and development.

1.2. Note that additional guidance will be required in some cases. In particular, reference material value assignment using consensus methods (including multiple measurement methods) is not covered, and the use of uncertainty estimates in compliance statements and the expression and use of uncertainty at low levels may require additional guidance. Uncertainties associated with sampling operations are not explicitly treated.

1.3. Since formal quality assurance measures have been introduced by laboratories in a number of sectors, this second EURACHEM Guide is now able to illustrate how data from the following procedures may be used for the estimation of measurement uncertainty:

• Evaluation of the effect of the identified sources of uncertainty on the analytical result for a single method implemented as a defined measurement procedure [B.8] in a single laboratory.

• Results from defined internal quality control procedures in a single laboratory.

• Results from collaborative trials used to validate methods of analysis in a number of competent laboratories.

• Results from proficiency test schemes used to assess the analytical competency of laboratories.

1.4. It is assumed throughout this Guide that, whether carrying out measurements or assessing the performance of the measurement procedure, effective quality assurance and control measures are in place to ensure that the measurement process is stable and in control. Such measures normally include, for example, appropriately qualified staff, proper maintenance and calibration of equipment and reagents, use of appropriate reference standards, documented measurement procedures and use of appropriate check standards and control charts. Reference [H.6] provides further information on analytical QA procedures.

NOTE: This paragraph implies that all analytical methods are assumed in this guide to be implemented via fully documented procedures. Any general reference to analytical methods accordingly implies the presence of such a procedure. Strictly, measurement uncertainty can only be applied to the results of such a procedure and not to a more general method of measurement [B.9].


2. Uncertainty

2.1. Definition of uncertainty

2.1.1. The definition of the term uncertainty (of measurement) used in this protocol and taken from the current version adopted for the International Vocabulary of Basic and General Terms in Metrology [H.4] is:

“A parameter associated with the result of a measurement, that characterises the dispersion of the values that could reasonably be attributed to the measurand.”

NOTE 1 The parameter may be, for example, a standard deviation [B.23] (or a given multiple of it), or the width of a confidence interval.

NOTE 2 Uncertainty of measurement comprises, in general, many components. Some of these components may be evaluated from the statistical distribution of the results of series of measurements and can be characterised by standard deviations. The other components, which also can be characterised by standard deviations, are evaluated from assumed probability distributions based on experience or other information. The ISO Guide refers to these different cases as Type A and Type B estimations respectively.
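As an illustration of the distinction drawn in NOTE 2, the short sketch below (Python, with invented figures) evaluates one component statistically from replicate results (Type A) and another from a stated tolerance assumed to follow a rectangular distribution (Type B), using the a/√3 conversion described for such distributions in Appendix E.1.

    import math
    import statistics

    # Type A: evaluated from the statistical distribution of repeated results
    replicates = [10.13, 10.08, 10.11, 10.14, 10.09, 10.12]   # invented replicate observations
    u_type_a = statistics.stdev(replicates)                    # standard uncertainty of a single determination

    # Type B: evaluated from an assumed probability distribution
    half_width = 0.05                       # e.g. a +/- 0.05 tolerance quoted on a certificate (assumed value)
    u_type_b = half_width / math.sqrt(3)    # standard uncertainty for a rectangular distribution, a/sqrt(3)

    print(u_type_a, u_type_b)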

2.1.2. In many cases in chemical analysis, the measurand [B.6] will be the concentration* of an analyte. However chemical analysis is used to measure other quantities, e.g. colour, pH, etc., and therefore the general term "measurand" will be used.

2.1.3. The definition of uncertainty given above focuses on the range of values that the analyst believes could reasonably be attributed to the measurand.

2.1.4. In general use, the word uncertainty relates to the general concept of doubt. In this guide, the word uncertainty, without adjectives, refers either to a parameter associated with the definition above, or to the limited knowledge about a particular value. Uncertainty of measurement does not imply doubt about the validity of a measurement; on the contrary, knowledge of the uncertainty implies increased confidence in the validity of a measurement result.

* In this guide, the unqualified term “concentration” applies to any of the particular quantities mass concentration, amount concentration, number concentration or volume concentration unless units are quoted (e.g. a concentration quoted in mg l⁻¹ is evidently a mass concentration). Note also that many other quantities used to express composition, such as mass fraction, substance content and mole fraction, can be directly related to concentration.

2.2. Uncertainty sources

2.2.1. In practice the uncertainty on the result may arise from many possible sources, including examples such as incomplete definition, sampling, matrix effects and interferences, environmental conditions, uncertainties of masses and volumetric equipment, reference values, approximations and assumptions incorporated in the measurement method and procedure, and random variation (a fuller description of uncertainty sources is given in section 6.7.).

2.3. Uncertainty components

2.3.1. In estimating the overall uncertainty, it may be necessary to take each source of uncertainty and treat it separately to obtain the contribution from that source. Each of the separate contributions to uncertainty is referred to as an uncertainty component. When expressed as a standard deviation, an uncertainty component is known as a standard uncertainty [B.13]. If there is correlation between any components then this has to be taken into account by determining the covariance. However, it is often possible to evaluate the combined effect of several components. This may reduce the overall effort involved and, where components whose contribution is evaluated together are correlated, there may be no additional need to take account of the correlation.

2.3.2. For a measurement result y, the total uncertainty, termed combined standard uncertainty [B.14] and denoted by uc(y), is an estimated standard deviation equal to the positive square root of the total variance obtained by combining all the uncertainty components, however evaluated, using the law of propagation of uncertainty (see section 8.).
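A minimal numerical sketch of this combination, assuming a purely multiplicative model y = x1·x2/x3 with independent inputs so that relative standard uncertainties add in quadrature; the quantities and values are invented, and the general rules are those of Chapter 8.

    import math

    # Invented input estimates and their standard uncertainties for y = x1 * x2 / x3
    x = {"x1": 10.0, "x2": 2.50, "x3": 5.00}
    u = {"x1": 0.05, "x2": 0.02, "x3": 0.03}

    y = x["x1"] * x["x2"] / x["x3"]

    # For independent inputs in a multiplicative model, relative uncertainties combine in quadrature
    relative_uc = math.sqrt(sum((u[k] / x[k]) ** 2 for k in x))
    u_c = y * relative_uc        # combined standard uncertainty uc(y)
    print(y, u_c)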


2.3.3. For most purposes in analytical chemistry, an expanded uncertainty [B.15] U, should be used. The expanded uncertainty provides an interval within which the value of the measurand is believed to lie with a higher level of confidence. U is obtained by multiplying uc(y), the combined standard uncertainty, by a coverage factor [B.16] k. The choice of the factor k is based on the level of confidence desired. For an approximate level of confidence of 95%, k is 2.

NOTE The coverage factor k should always be stated so that the combined standard uncertainty of the measured quantity can be recovered for use in calculating the combined standard uncertainty of other measurement results that may depend on that quantity.
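Continuing the same invented numbers, the expanded uncertainty is obtained by a single multiplication, and quoting k alongside U lets a later user recover uc(y), exactly as the note requires.

    k = 2                  # coverage factor for an approximate 95 % level of confidence
    U = k * u_c            # expanded uncertainty
    u_recovered = U / k    # knowing k, uc(y) can be recovered for further propagation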

2.4. Error and uncertainty

2.4.1. It is important to distinguish between error and uncertainty. Error [B.19] is defined as the difference between an individual result and the true value [B.3] of the measurand. As such, error is a single value. In principle, the value of a known error can be applied as a correction to the result.

NOTE Error is an idealised concept and errors cannot be known exactly.

2.4.2. Uncertainty, on the other hand, takes the form of a range, and, if estimated for an analytical procedure and defined sample type, may apply to all determinations so described. In general, the value of the uncertainty cannot be used to correct a measurement result.

2.4.3. To illustrate further the difference, the result of an analysis after correction may by chance be very close to the value of the measurand, and hence have a negligible error. However, the uncertainty may still be very large, simply because the analyst is very unsure of how close that result is to the value.

2.4.4. The uncertainty of the result of a measurement should never be interpreted as representing the error itself, nor the error remaining after correction.

2.4.5. An error is regarded as having two components, namely, a random component and a systematic component.

2.4.6. Random error [B.20] typically arises from unpredictable variations of influence quantities. These random effects give rise to variations in repeated observations of the measurand. The random error of an analytical result cannot be compensated for, but it can usually be reduced by increasing the number of observations.

NOTE 1 The experimental standard deviation of the arithmetic mean [B.22] or average of a series of observations is not the random error of the mean, although it is so referred to in some publications on uncertainty. It is instead a measure of the uncertainty of the mean due to some random effects. The exact value of the random error in the mean arising from these effects cannot be known.
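The quantity the note refers to is the familiar s/√n; a short illustration with invented data follows, and it is this standard uncertainty of the mean, not the unknowable random error itself, that would enter an uncertainty budget.

    import math
    import statistics

    observations = [4.98, 5.03, 5.01, 4.99, 5.02]       # invented replicate results
    s = statistics.stdev(observations)                   # experimental standard deviation
    u_mean = s / math.sqrt(len(observations))            # experimental standard deviation of the mean, s/sqrt(n)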

2.4.7. Systematic error [B.21] is defined as a component of error which, in the course of a number of analyses of the same measurand, remains constant or varies in a predictable way. It is independent of the number of measurements made and cannot therefore be reduced by increasing the number of analyses under constant measurement conditions.

2.4.8. Constant systematic errors, such as failing to make an allowance for a reagent blank in an assay, or inaccuracies in a multi-point instrument calibration, are constant for a given level of the measurement value but may vary with the level of the measurement value.

2.4.9. Effects which change systematically in magnitude during a series of analyses, caused, for example, by inadequate control of experimental conditions, give rise to systematic errors that are not constant.

EXAMPLES:

1. A gradual increase in the temperature of a set of samples during a chemical analysis can lead to progressive changes in the result.

2. Sensors and probes that exhibit ageing effects over the time-scale of an experiment can also introduce non-constant systematic errors.

2.4.10. The result of a measurement should be corrected for all recognised significant systematic effects.

NOTE Measuring instruments and systems are often adjusted or calibrated using measurement standards and reference materials to correct for systematic effects. The uncertainties associated with these standards and materials and the uncertainty in the correction must still be taken into account.

2.4.11. A further type of error is a spurious error, or blunder. Errors of this type invalidate a measurement and typically arise through human failure or instrument malfunction. Transposing digits in a number while recording data, an air bubble lodged in a spectrophotometer flow-through cell, or accidental cross-contamination of test items are common examples of this type of error.

2.4.12. Measurements for which errors such as these have been detected should be rejected and no attempt should be made to incorporate the errors into any statistical analysis. However, errors such as digit transposition can be corrected (exactly), particularly if they occur in the leading digits.

2.4.13. Spurious errors are not always obvious and, where a sufficient number of replicate measurements is available, it is usually appropriate to apply an outlier test to check for the presence of suspect members in the data set. Any positive result obtained from such a test should be considered with care and, where possible, referred back to the originator for confirmation. It is generally not wise to reject a value on purely statistical grounds.

2.4.14. Uncertainties estimated using this guide are not intended to allow for the possibility of spurious errors/blunders.


3. Analytical Measurement and Uncertainty

3.1. Method validation

3.1.1. In practice, the fitness for purpose of analytical methods applied for routine testing is most commonly assessed through method validation studies [H.7]. Such studies produce data on overall performance and on individual influence factors which can be applied to the estimation of uncertainty associated with the results of the method in normal use.

3.1.2. Method validation studies rely on the determination of overall method performance parameters. These are obtained during method development and interlaboratory study or following in-house validation protocols. Individual sources of error or uncertainty are typically investigated only when significant compared to the overall precision measures in use. The emphasis is primarily on identifying and removing (rather than correcting for) significant effects. This leads to a situation in which the majority of potentially significant influence factors have been identified, checked for significance compared to overall precision, and shown to be negligible. Under these circumstances, the data available to analysts consists primarily of overall performance figures, together with evidence of insignificance of most effects and some measurements of any remaining significant effects.

3.1.3. Validation studies for quantitative analytical methods typically determine some or all of the following parameters:

Precision. The principal precision measures include repeatability standard deviation sr, reproducibility standard deviation sR (ISO 3534-1), and intermediate precision, sometimes denoted sZi, with i denoting the number of factors varied (ISO 5725-3:1994). The repeatability sr indicates the variability observed within a laboratory, over a short time, using a single operator, item of equipment etc. sr may be estimated within a laboratory or by inter-laboratory study. Interlaboratory reproducibility standard deviation sR for a particular method may only be estimated directly by interlaboratory study; it shows the variability obtained when different laboratories analyse the same sample. Intermediate precision relates to the variation in results observed when one or more factors, such as time, equipment and operator, are varied within a laboratory; different figures are obtained depending on which factors are held constant. Intermediate precision estimates are most commonly determined within laboratories but may also be determined by interlaboratory study. The observed precision of an analytical procedure is an essential component of overall uncertainty, whether determined by combination of individual variances or by study of the complete method in operation.
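The sketch below indicates how a repeatability figure and a simple intermediate precision figure might be extracted from a small nested data set; the duplicate-per-day layout and the numbers are invented, and a formal treatment would follow ISO 5725-3 rather than this simplified pooling.

    import math
    import statistics

    # Invented duplicate determinations of the same material on each of five days
    days = [[10.2, 10.3], [10.5, 10.4], [10.1, 10.2], [10.4, 10.4], [10.3, 10.5]]

    # Repeatability standard deviation s_r: pooled within-day variability (equal group sizes)
    s_r = math.sqrt(sum(statistics.variance(d) for d in days) / len(days))

    # A rough intermediate precision: spread of all results with day allowed to vary
    s_intermediate = statistics.stdev([result for day in days for result in day])
    print(s_r, s_intermediate)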

Bias. The bias of an analytical method is usually determined by study of relevant reference materials or by spiking studies. The determination of overall bias with respect to appropriate reference values is important in establishing traceability [B.12] to recognised standards (see section 3.3). Bias may be expressed as analytical recovery (value observed divided by value expected). Bias should be shown to be negligible or corrected for, but in either case the uncertainty associated with the determination of the bias remains an essential component of overall uncertainty.
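For instance, a recovery estimated against a certified value carries an uncertainty of its own, which can be combined from the two relative contributions; the figures below are invented, and the quadrature combination assumes the observed mean and the certified value are independent.

    import math

    c_observed, u_observed = 9.60, 0.10      # invented mean observed value and its standard uncertainty
    c_certified, u_certified = 10.00, 0.05   # invented certified value and its standard uncertainty

    recovery = c_observed / c_certified      # bias expressed as analytical recovery
    u_recovery = recovery * math.sqrt((u_observed / c_observed) ** 2 +
                                      (u_certified / c_certified) ** 2)
    # Whether the bias is corrected for or shown to be negligible, u_recovery
    # remains a component of the overall uncertainty.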

Linearity. Linearity is an important property of methods used to make measurements at a range of concentrations. The linearity of the response to pure standards and to realistic samples may be determined. Linearity is not generally quantified, but is checked for by inspection or using significance tests for non-linearity. Significant non-linearity is usually corrected for by use of non-linear calibration functions or eliminated by choice of more restricted operating range. Any remaining deviations from linearity are normally sufficiently accounted for by overall precision estimates covering several concentrations, or within any uncertainties associated with calibration (Appendix E.3).

Detection limit. During method validation, the detection limit is normally determined only to establish the lower end of the practical operating range of a method. Though uncertainties near the detection limit may require careful consideration and special treatment (Appendix F), the detection limit, however determined, is not of direct relevance to uncertainty estimation.


Robustness or ruggedness. Many method development or validation protocols require that sensitivity to particular parameters be investigated directly. This is usually done by a preliminary ‘ruggedness test’, in which the effect of one or more parameter changes is observed. If significant (compared to the precision of the ruggedness test) a more detailed study is carried out to measure the size of the effect, and a permitted operating interval chosen accordingly. Ruggedness test data can therefore provide information on the effect of important parameters.

Selectivity/specificity. Though loosely defined, both terms relate to the degree to which a method responds uniquely to the required analyte. Typical selectivity studies investigate the effects of likely interferents, usually by adding the potential interferent to both blank and fortified samples and observing the response. The results are normally used to demonstrate that the practical effects are not significant. However, since the studies measure changes in response directly, it is possible to use the data to estimate the uncertainty associated with potential interferences, given knowledge of the range of interferent concentrations.

3.2. Conduct of experimental studies of method performance

3.2.1. The detailed design and execution of method validation and method performance studies is covered extensively elsewhere [H.7] and will not be repeated here. However, the main principles as they affect the relevance of a study applied to uncertainty estimation are pertinent and are considered below.

3.2.2. Representativeness is essential. That is, studies should, as far as possible, be conducted to provide a realistic survey of the number and range of effects operating during normal use of the method, as well as covering the concentration ranges and sample types within the scope of the method. Where a factor has been representatively varied during the course of a precision experiment, for example, the effects of that factor appear directly in the observed variance and need no additional study unless further method optimisation is desirable.

3.2.3. In this context, representative variation means that an influence parameter must take a distribution of values appropriate to the uncertainty in the parameter in question. For continuous parameters, this may be a permitted range or stated uncertainty; for discontinuous factors such as sample matrix, this range corresponds to the variety of types permitted or encountered in normal use of the method. Note that representativeness extends not only to the range of values, but to their distribution.

3.2.4. In selecting factors for variation, it is important to ensure that the larger effects are varied where possible. For example, where day-to-day variation (perhaps arising from recalibration effects) is substantial compared to repeatability, two determinations on each of five days will provide a better estimate of intermediate precision than five determinations on each of two days. Ten single determinations on separate days will be better still, subject to sufficient control, though this will provide no additional information on within-day repeatability.

3.2.5. It is generally simpler to treat data obtained from random selection than from systematic variation. For example, experiments performed at random times over a sufficient period will usually include representative ambient temperature effects, while experiments performed systematically at 24-hour intervals may be subject to bias due to regular ambient temperature variation during the working day. The former experiment needs only evaluate the overall standard deviation; in the latter, systematic variation of ambient temperature is required, followed by adjustment to allow for the actual distribution of temperatures. Random variation is, however, less efficient. A small number of systematic studies can quickly establish the size of an effect, whereas it will typically take well over 30 determinations to establish an uncertainty contribution to better than about 20% relative accuracy. Where possible, therefore, it is often preferable to investigate small numbers of major effects systematically.
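The "well over 30 determinations" figure can be checked against the common approximation that the relative standard uncertainty of an experimental standard deviation from n results is about 1/√(2(n−1)); the short calculation below is offered only as a plausibility check, and shows that even 30 ideal determinations give roughly 13 %, so reliably doing better than about 20 % under less ideal, real conditions needs numbers of at least this order.

    import math

    def relative_uncertainty_of_sd(n):
        # Approximate relative standard uncertainty of a standard deviation estimated from n results
        return 1.0 / math.sqrt(2 * (n - 1))

    for n in (10, 15, 30, 50):
        print(n, round(100 * relative_uncertainty_of_sd(n), 1), "%")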

3.2.6. Where factors are known or suspected to interact, it is important to ensure that the effect of interaction is accounted for. This may be achieved either by ensuring random selection from different levels of interacting parameters, or by careful systematic design to obtain both variance and covariance information.

3.2.7. In carrying out studies of overall bias, it is important that the reference materials and values are relevant to the materials under routine test.

3.2.8. Any study undertaken to investigate and test for the significance of an effect should have sufficient power to detect such effects before they become practically significant.


3.3. Traceability

3.3.1. It is important to be able to compare results from different laboratories, or from the same laboratory at different times, with confidence. This is achieved by ensuring that all laboratories are using the same measurement scale, or the same ‘reference points’. In many cases this is achieved by establishing a chain of calibrations leading to primary national or international standards, ideally (for long-term consistency) the Système International (SI) units of measurement. A familiar example is the case of analytical balances; each balance is calibrated using reference masses which are themselves checked (ultimately) against national standards and so on to the primary reference kilogram. This unbroken chain of comparisons leading to a known reference value provides ‘traceability’ to a common reference point, ensuring that different operators are using the same units of measurement. In routine measurement, the consistency of measurements between one laboratory (or time) and another is greatly aided by establishing traceability for all relevant intermediate measurements used to obtain or control a measurement result. Traceability is therefore an important concept in all branches of measurement.

3.3.2. Traceability is formally defined [H.4] as:

“The property of the result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties.”

The reference to uncertainty arises because the agreement between laboratories is limited, in part, by uncertainties incurred in each laboratory’s traceability chain. Traceability is accordingly intimately linked to uncertainty. Traceability provides the means of placing all related measurements on a consistent measurement scale, while uncertainty characterises the ‘strength’ of the links in the chain and the agreement to be expected between laboratories making similar measurements.

3.3.3. In general, the uncertainty on a result which is traceable to a particular reference will be the uncertainty on that reference together with the uncertainty on making the measurement relative to that reference.

3.3.4. Traceability of the result of the complete analytical procedure should be established by a combination of the following procedures:

1. Use of traceable standards to calibrate the measuring equipment.

2. By using, or by comparison to the results of, a primary method.

3. By using a pure substance RM.

4. By using an appropriate matrix Certified Reference Material (CRM).

5. By using an accepted, closely defined procedure.

Each procedure is discussed in turn below.

3.3.5. Calibration of measuring equipment

In all cases, the calibration of the measuring equipment used must be traceable to appropriate standards. The quantification stage of the analytical procedure is often calibrated using a pure substance reference material, whose value is traceable to the SI. This practice provides traceability of the results to SI for this part of the procedure. However, it is also necessary to establish traceability for the results of operations prior to the quantification stage, such as extraction and sample clean-up, using additional procedures.

3.3.6. Measurements using Primary Methods

A primary method is currently described as follows:

“A primary method of measurement is a method having the highest metrological qualities, whose operation is completely described and understood in terms of SI units and whose results are accepted without reference to a standard of the same quantity.”

The result of a primary method is normally traceable directly to the SI, and is of the smallest achievable uncertainty with respect to this reference. Primary methods are normally implemented only by National Measurement Institutes and are rarely applied to routine testing or calibration. Where applicable, traceability to the results of a primary method is achieved by direct comparison of measurement results between the primary method and the test or calibration method.

3.3.7. Measurements using a pure substance Reference Material (RM)

Traceability can be demonstrated by measurement of a sample composed of, or containing, a known quantity of a pure substance RM. This may be achieved, for example, by spiking or by standard additions. However, it is always necessary to evaluate the difference in response of the measurement system to the standard used and the sample under test. Unfortunately, for many chemical analyses and in the particular case of spiking or standard additions, both the correction for the difference in response and its uncertainty may be large. Thus, although the traceability of the result to SI units can in principle be established, in practice, in all but the most simple cases, the uncertainty on the result may be unacceptably large or even unquantifiable. If the uncertainty is unquantifiable then traceability has not been established.

3.3.8. Measurement on a Certified Reference Material (CRM)

Traceability may be demonstrated through comparison of measurement results on a certified matrix CRM with the certified value(s). This procedure can reduce the uncertainty compared to the use of a pure substance RM where there is a suitable matrix CRM available. If the value of the CRM is traceable to SI, then these measurements provide traceability to SI units, and the evaluation of the uncertainty utilising reference materials is discussed in 7.5. However, even in this case, the uncertainty on the result may be unacceptably large or even unquantifiable, particularly if there is not a good match between the composition of the sample and the reference material.

3.3.9. Measurement using an accepted procedure

Adequate comparability can often only be achieved through use of a closely defined and generally accepted procedure. The procedure will normally be defined in terms of input parameters; for example, a specified set of extraction times, particle sizes etc. The results of applying such a procedure are considered traceable when the values of these input parameters are traceable to stated references in the usual way. The uncertainty on the results arises both from uncertainties in the specified input parameters and from the effects of incomplete specification and variability in execution (see section 7.8.1.). Where the results of an alternative method or procedure are expected to be comparable to the results of such an accepted procedure, traceability to the accepted values is achieved by comparing the results obtained by accepted and alternative procedures.


4. The Process of Measurement Uncertainty Estimation

4.1. Uncertainty estimation is simple in principle. The following paragraphs summarise the tasks that need to be performed in order to obtain an estimate of the uncertainty associated with a measurement result. Subsequent chapters provide additional guidance applicable in different circumstances, particularly relating to the use of data from method validation studies and the use of formal uncertainty propagation principles. The steps involved are:

Step 1. Specify measurand
Write down a clear statement of what is being measured, including the relationship between the measurand and the input quantities (e.g. measured quantities, constants, calibration standard values etc.) upon which it depends. Where possible, include corrections for known systematic effects. The specification information should be given in the relevant Standard Operating Procedure (SOP) or other method description.

Step 2. Identify uncertainty sources
List the possible sources of uncertainty. This will include sources that contribute to the uncertainty on the parameters in the relationship specified in Step 1, but may include other sources and must include sources arising from chemical assumptions. A general procedure for forming a structured list is suggested at Appendix D.

Step 3. Quantify uncertainty components
Measure or estimate the size of the uncertainty component associated with each potential source of uncertainty identified. It is often possible to estimate or determine a single contribution to uncertainty associated with a number of separate sources. It is also important to consider whether available data accounts sufficiently for all sources of uncertainty, and plan additional experiments and studies carefully to ensure that all sources of uncertainty are adequately accounted for.

Step 4. Calculate combined uncertainty
The information obtained in step 3 will consist of a number of quantified contributions to overall uncertainty, whether associated with individual sources or with the combined effects of several sources. The contributions have to be expressed as standard deviations, and combined according to the appropriate rules, to give a combined standard uncertainty. The appropriate coverage factor should be applied to give an expanded uncertainty.

Figure 1 shows the process schematically.

4.2. The following chapters provide guidance on the execution of all the steps listed above and show how the procedure may be simplified depending on the information that is available about the combined effect of a number of sources.
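The four steps can also be pictured as a small bookkeeping exercise; the sketch below is organisational only, with invented component names and values, and the combination shown (root sum of squares of standard uncertainties already expressed in the units of the result) is the simplest case of the rules given in Chapter 8.

    import math

    # Step 1: the measurand and its model are specified in the SOP (represented here only by a label)
    measurand = "analyte concentration in the sample, mg kg-1"

    # Steps 2 and 3: identified sources, each quantified as a standard uncertainty
    # in the units of the result (invented values)
    components = {"precision": 0.8, "recovery": 0.5, "calibration": 0.3, "volumes": 0.1}

    # Step 4: combine, then expand with a coverage factor
    u_c = math.sqrt(sum(value ** 2 for value in components.values()))
    U = 2 * u_c            # k = 2 for an approximate 95 % level of confidence
    print(measurand, u_c, U)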


Figure 1: The Uncertainty Estimation Process

[Flow chart: START → Step 1: Specify measurand → Step 2: Identify uncertainty sources → Step 3: Simplify by grouping sources covered by existing data; quantify grouped components; quantify remaining components; convert components to standard deviations → Step 4: Calculate combined standard uncertainty; calculate expanded uncertainty, reviewing and if necessary re-evaluating large components → END]


5. Step 1. Specification of the Measurand

5.1. In the context of uncertainty estimation, “specification of the measurand” requires both a clear and unambiguous statement of what is being measured, and a quantitative expression relating the value of the measurand to the parameters on which it depends. These parameters may be other measurands, quantities which are not directly measured, or constants. It should also be clear whether a sampling step is included within the procedure or not. If it is, estimation of uncertainties associated with the sampling procedure needs to be considered. All of this information should be in the Standard Operating Procedure (SOP).

5.2. In analytical measurement, it is particularly important to distinguish between measurements intended to produce results which are independent of the method used, and those which are not so intended. The latter are often referred to as empirical methods. The following examples may clarify the point further.

EXAMPLES:

1. Methods for the determination of the amount of nickel present in an alloy are normally expected to yield the same result, in the same units, usually expressed as a mass or mole fraction. In principle, any systematic effect due to method bias or matrix would need to be corrected for, though it is more usual to ensure that any such effect is small. Results would not normally need to quote the particular method used, except for information. The method is not empirical.

2. Determinations of “extractable fat” may differ substantially, depending on the extraction conditions specified. Since “extractable fat” is entirely dependent on choice of conditions, the method used is empirical. It is not meaningful to consider correction for bias intrinsic to the method, since the measurand is defined by the method used. Results are generally reported with reference to the method, uncorrected for any bias intrinsic to the method. The method is considered empirical.

3. In circumstances where variations in the substrate, or matrix, have large and unpredictable effects, a procedure is often developed with the sole aim of achieving comparability between laboratories measuring the same material. The procedure may then be adopted as a local, national or international standard method on which trading or other decisions are taken, with no intent to obtain an absolute measure of the true amount of analyte present. Corrections for method bias or matrix effect are ignored by convention (whether or not they have been minimised in method development). Results are normally reported uncorrected for matrix or method bias. The method is considered to be empirical.

5.3. The distinction between empirical and non-empirical (sometimes called rational) methods is important because it affects the estimation of uncertainty. In examples 2 and 3 above, because of the conventions employed, uncertainties associated with some quite large effects are not relevant in normal use. Due consideration should accordingly be given to whether the results are expected to be dependent upon, or independent of, the method in use and only those effects relevant to the result as reported should be included in the uncertainty estimate.


6. Step 2. Identifying Uncertainty Sources

6.1. A comprehensive list of relevant sources of uncertainty should be assembled. At this stage, it is not necessary to be concerned about the quantification of individual components; the aim is to be completely clear about what should be considered. In Step 3, the best way of treating each source will be considered.

6.2. In forming the required list of uncertainty sources it is usually convenient to start with the basic expression used to calculate the measurand from intermediate values. All the parameters in this expression may have an uncertainty associated with their value and are therefore potential uncertainty sources. In addition there may be other parameters that do not appear explicitly in the expression used to calculate the value of the measurand, but which nevertheless affect the measurement results, e.g. extraction time or temperature. These are also potential sources of uncertainty. All these different sources should be included. Additional information is given in Appendix C (Uncertainties in Analytical Processes).

6.3. The cause and effect diagram described in Appendix D is a very convenient way of listing the uncertainty sources, showing how they relate to each other and indicating their influence on the uncertainty of the result. It also helps to avoid double counting of sources. Although the list of uncertainty sources can be prepared in other ways, the cause and effect diagram is used in the following chapters and in all of the examples in Appendix A. Additional information is given in Appendix D (Analysing uncertainty sources).

6.4. Once the list of uncertainty sources is assembled, their effects on the result can, in principle, be represented by a formal measurement model, in which each effect is associated with a parameter or variable in an equation. The equation then forms a complete model of the measurement process in terms of all the individual factors affecting the result. This function may be very complicated and it may not be possible to write it down explicitly. Where possible, however, this should be done, as the form of the expression will generally determine the method of combining individual uncertainty contributions.
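As a concrete illustration of such an expression, the hypothetical model below is of the kind used for a simple calibration standard (Example A1 in Appendix A treats a case of this general form): the result follows from a weighed mass, a purity and a volume, each of which carries its own uncertainty, while effects that do not appear in the formula, such as temperature, would still have to be listed as separate sources.

    def standard_concentration(mass_mg, purity, volume_ml):
        # Illustrative measurement model: c = 1000 * m * P / V, giving mg l-1
        # (the factor 1000 converts mg/ml to mg/l); every parameter in the
        # expression is a potential uncertainty source in the sense of this chapter.
        return 1000.0 * mass_mg * purity / volume_ml

    c = standard_concentration(mass_mg=100.28, purity=0.9999, volume_ml=100.0)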

6.5. It may additionally be useful to consider a measurement procedure as a series of discrete operations (sometimes termed unit operations), each of which may be assessed separately to obtain estimates of uncertainty associated with them. This is a particularly useful approach where similar measurement procedures share common unit operations. The separate uncertainties for each operation then form contributions to the overall uncertainty.

6.6. In practice, it is more usual in analytical measurement to consider uncertainties associated with elements of overall method performance, such as observable precision and bias measured with respect to appropriate reference materials. These contributions generally form the dominant contributions to the uncertainty estimate, and are best modelled as separate effects on the result. It is then necessary to evaluate other possible contributions only to check their significance, quantifying only those that are significant. Further guidance on this approach, which applies particularly to the use of method validation data, is given in section 7.2.1.

6.7. Typical sources of uncertainty are

• Sampling

Where in-house or field sampling form part of the specified procedure, effects such as random variations between different samples and any potential for bias in the sampling procedure form components of uncertainty affecting the final result.

• Storage Conditions

Where test items are stored for any period prior to analysis, the storage conditions may affect the results. The duration of storage as well as conditions during storage should therefore be considered as uncertainty sources.

• Instrument effects

Instrument effects may include, for example, the limits of accuracy on the calibration of an analytical balance; a temperature controller that may maintain a mean temperature which differs (within specification) from its indicated set-point; an auto-analyser that could be subject to carry-over effects.

• Reagent purity

The concentration of a volumetric solution will not be known exactly even if the parent material has been assayed, since some uncertainty related to the assaying procedure remains. Many organic dyestuffs, for instance, are not 100% pure and can contain isomers and inorganic salts. The purity of such substances is usually stated by manufacturers as being not less than a specified level. Any assumptions about the degree of purity will introduce an element of uncertainty.

• Assumed stoichiometry

Where an analytical process is assumed to follow a particular reaction stoichiometry, it may be necessary to allow for departures from the expected stoichiometry, or for incomplete reaction or side reactions.

• Measurement conditions

For example, volumetric glassware may be used at an ambient temperature different from that at which it was calibrated. Gross temperature effects should be corrected for, but any uncertainty in the temperature of liquid and glass should be considered. Similarly, humidity may be important where materials are sensitive to possible changes in humidity.

• Sample effects

The recovery of an analyte from a complex matrix, or an instrument response, may be affected by composition of the matrix. Analyte speciation may further compound this effect.

The stability of a sample/analyte may change during analysis because of a changing thermal regime or photolytic effect.

When a ‘spike’ is used to estimate recovery, the recovery of the analyte from the sample may differ from the recovery of the spike, introducing an uncertainty which needs to be evaluated.

• Computational effects

Selection of the calibration model, e.g. using a straight line calibration on a curved response, leads to poorer fit and higher uncertainty.

Truncation and round-off can lead to inaccuracies in the final result. Since these are rarely predictable, an uncertainty allowance may be necessary.

• Blank Correction

There will be an uncertainty on both the value and the appropriateness of the blank correction. This is particularly important in trace analysis.

• Operator effects

Possibility of reading a meter or scale consistently high or low.

Possibility of making a slightly different interpretation of the method.

• Random effects

Random effects contribute to the uncertainty in all determinations. This entry should be included in the list as a matter of course.

NOTE: These sources are not necessarily independent.


7. Step 3. Quantifying Uncertainty

7.1. Introduction

7.1.1. Having identified the uncertainty sources as explained in Step 2 (Chapter 6), the next step is to quantify the uncertainty arising from these sources. This can be done by

• evaluating the uncertainty arising from each individual source and then combining them as described in Chapter 8. Examples A1 to A3 illustrate the use of this procedure.

or

• by determining directly the combined contribution to the uncertainty on the result from some or all of these sources using method performance data. Examples A4 to A6 represent applications of this procedure.

In practice, a combination of these is usually necessary and convenient.

7.1.2. Whichever of these approaches is used, most of the information needed to evaluate the uncertainty is likely to be already available from the results of validation studies, from QA/QC data and from other experimental work that has been carried out to check the performance of the method. However, data may not be available to evaluate the uncertainty from all of the sources and it may be necessary to carry out further work as described in sections 7.10. to 7.14.

7.2. Uncertainty evaluation procedure

7.2.1. The procedure used for estimating the overall uncertainty depends on the data available about the method performance. The stages involved in developing the procedure are

• Reconcile the information requirements with the available data

First, the list of uncertainty sources should be examined to see which sources of uncertainty are accounted for by the available data, whether by explicit study of the particular contribution or by implicit variation within the course of whole-method experiments. These sources should be checked against the list prepared in Step 2 and any remaining sources should be listed to provide an auditable record of which contributions to the uncertainty have been included.

• Plan to obtain the further data required

For sources of uncertainty not adequately covered by existing data, either seek additional information from the literature or standing data (certificates, equipment specifications etc.), or plan experiments to obtain the required additional data. Additional experiments may take the form of specific studies of a single contribution to uncertainty, or the usual method performance studies conducted to ensure representative variation of important factors.

7.2.2. It is important to recognise that not all of the components will make a significant contribution to the combined uncertainty; indeed, in practice it is likely that only a small number will. Unless there is a large number of them, components that are less than one third of the largest need not be evaluated in detail. A preliminary estimate of the contribution of each component or combination of components to the uncertainty should be made and those that are not significant eliminated.
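As an illustration only of this preliminary screening, the short Python sketch below keeps only those contributions that are at least one third of the largest; the component names and values are hypothetical, not taken from this Guide.

```python
# Illustrative screening of preliminary uncertainty contributions (cf. 7.2.2).
# Component names and values are hypothetical.
contributions = {
    "precision": 0.12,
    "bias check": 0.09,
    "temperature": 0.03,
    "balance calibration": 0.01,
}

threshold = max(contributions.values()) / 3.0

detailed = {k: v for k, v in contributions.items() if v >= threshold}
screened_out = {k: v for k, v in contributions.items() if v < threshold}

print("Evaluate in detail:", detailed)
print("Not significant (need not be evaluated in detail):", screened_out)
```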

7.2.3. The following sections provide guidance on the procedures to be adopted, depending on the data available and on the additional information required. Section 7.3. presents requirements for the use of prior experimental study data, including validation data. Section 7.4. briefly discusses evaluation of uncertainty solely from individual sources of uncertainty. This may be necessary for all, or for very few of the sources identified, depending on the data available, and is consequently also considered in later sections. Sections 7.5. to 7.9. describe the evaluation of uncertainty in a range of circumstances. Section 7.5. applies when using closely matched reference materials. Section 7.6. covers the use of collaborative study data and 7.7. the use of in-house validation data. 7.8. describes special considerations for empirical methods and 7.9. covers ad-hoc methods. Methods for quantifying individual components of uncertainty, including experimental studies, documentary and other data, modelling, and professional judgement are covered in more detail in sections 7.10. to 7.14. Section 7.15. covers the treatment of known bias in uncertainty estimation.


7.3. Relevance of prior studies

7.3.1. When uncertainty estimates are based at least partly on prior studies of method performance, it is necessary to demonstrate the validity of applying prior study results. Typically, this will consist of:

• Demonstration that a comparable precision to that obtained previously can be achieved.

• Demonstration that the use of the bias data obtained previously is justified, typically through determination of bias on relevant reference materials (see, for example, ISO Guide 33 [H.8]), by appropriate spiking studies, or by satisfactory performance on relevant proficiency schemes or other laboratory intercomparisons.

• Continued performance within statistical control as shown by regular QC sample results and the implementation of effective analytical quality assurance procedures.

7.3.2. Where the conditions above are met, and the method is operated within its scope and field of application, it is normally acceptable to apply the data from prior studies (including validation studies) directly to uncertainty estimates in the laboratory in question.

7.4. Evaluating uncertainty by quantification of individual components

7.4.1. In some cases, particularly when little or no method performance data is available, the most suitable procedure may be to evaluate each uncertainty component separately.

7.4.2. The general procedure used in combining individual components is to prepare a detailed quantitative model of the experimental procedure (cf. sections 5. and 6., especially 6.4.), assess the standard uncertainties associated with the individual input parameters, and combine them using the law of propagation of uncertainties as described in Section 8.

7.4.3. In the interests of clarity, detailed guidance on the assessment of individual contributions by experimental and other means is deferred to sections 7.10. to 7.14. Examples A1 to A3 in Appendix A provide detailed illustrations of the procedure. Extensive guidance on the application of this procedure is also given in the ISO Guide [H.2].

7.5. Closely matched certified reference materials

7.5.1. Measurements on certified reference materials are normally carried out as part of method validation or re-validation, effectively constituting a calibration of the whole measurement procedure against a traceable reference. Because this procedure provides information on the combined effect of many of the potential sources of uncertainty, it provides very good data for the assessment of uncertainty. Further details are given in section 7.7.4.

NOTE: ISO Guide 33 [H.8] gives a useful account of the use of reference materials in checking method performance.

7.6. Uncertainty estimation using prior collaborative method development and validation study data

7.6.1. A collaborative study carried out to validate a published method, for example according to the AOAC/IUPAC protocol [H.9] or ISO 5725 standard [H.10], is a valuable source of data to support an uncertainty estimate. The data typically include estimates of reproducibility standard deviation, sR, for several levels of response, a linear estimate of the dependence of sR on level of response, and may include an estimate of bias based on CRM studies. How this data can be utilised depends on the factors taken into account when the study was carried out. During the ‘reconciliation’ stage indicated above (section 7.2.), it is necessary to identify any sources of uncertainty that are not covered by the collaborative study data. The sources which may need particular consideration are:

• Sampling. Collaborative studies rarely include a sampling step. If the method used in-house involves sub-sampling, or the measurand (see Specification) is estimating a bulk property from a small sample, then the effects of sampling should be investigated and their effects included.

• Pre-treatment. In most studies, samples are homogenised, and may additionally be stabilised, before distribution. It may be necessary to investigate and add the effects of the particular pre-treatment procedures applied in-house.

• Method bias. Method bias is often examined prior to or during interlaboratory study, where possible by comparison with reference methods or materials. Where the bias itself, the uncertainty in the reference values used, and the precision associated with the bias check, are all small compared to sR, no additional allowance need be made for bias uncertainty. Otherwise, it will be necessary to make additional allowances.

• Variation in conditions. Laboratories participating in a study may tend towards the means of allowed ranges of experimental conditions, resulting in an underestimate of the range of results possible within the method definition. Where such effects have been investigated and shown to be insignificant across their full permitted range, however, no further allowance is required.

• Changes in sample matrix. The uncertainty arising from matrix compositions or levels of interferents outside the range covered by the study will need to be considered.

7.6.2. Each significant source of uncertainty not covered by the collaborative study data should be evaluated in the form of a standard uncertainty and combined with the reproducibility standard deviation sR in the usual way (section 8.).

7.6.3. For methods operating within their defined scope, when the reconciliation stage shows that all the identified sources have been included in the validation study or when the contributions from any remaining sources such as those discussed in section 7.6.1. have been shown to be negligible, then the reproducibility standard deviation sR, adjusted for concentration if necessary, may be used as the combined standard uncertainty.

7.6.4. The use of this procedure is shown in example A6 (Appendix A).

7.7. Uncertainty estimation using in-house development and validation studies

7.7.1. In-house development and validation studies consist chiefly of the determination of the method performance parameters indicated in section 3.1.3. Uncertainty estimation from these parameters utilises:

• The best available estimate of overall precision.

• The best available estimate(s) of overall bias and its uncertainty.

• Quantification of any uncertainties associated with effects incompletely accounted for in the above overall performance studies.

Precision study

7.7.2. The precision should be estimated as far as possible over an extended time period, and chosen to allow natural variation of all factors affecting the result. This can be obtained from

• The standard deviation of results for a typical sample analysed several times over a period of time, using different analysts and equipment where possible (the results of measurements on QC check samples can provide this information).

• The standard deviation obtained from replicate analyses performed on each of several samples.

NOTE: Replicates should be performed at materially different times to obtain estimates of intermediate precision; within-batch replication provides estimates of repeatability only.

• From formal multi-factor experimental designs, analysed by ANOVA to provide separate variance estimates for each factor (a minimal sketch follows below).
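For illustration only, the following Python sketch shows one way of extracting within-day and between-day variance components from a balanced one-way design of the kind described above; the data, group labels and replicate numbers are invented.

```python
import statistics

# Invented replicate results grouped by day (balanced design).
groups = {
    "day 1": [10.2, 10.4, 10.3],
    "day 2": [10.6, 10.7, 10.5],
    "day 3": [10.1, 10.0, 10.2],
}

n = len(next(iter(groups.values())))   # replicates per group

# One-way ANOVA mean squares
ms_within = statistics.mean(statistics.variance(g) for g in groups.values())
ms_between = n * statistics.variance([statistics.mean(g) for g in groups.values()])

# Variance components: repeatability (within-day) and between-day
s2_within = ms_within
s2_between = max((ms_between - ms_within) / n, 0.0)

print(f"within-day standard deviation:  {s2_within ** 0.5:.3f}")
print(f"between-day standard deviation: {s2_between ** 0.5:.3f}")
```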

7.7.3. Note that precision frequently varies significantly with the level of response. For example, the observed standard deviation often increases significantly and systematically with analyte concentration. In such cases, the uncertainty estimate should be adjusted to allow for the precision applicable to the particular result. Appendix E.4 gives additional guidance on handling level-dependent contributions to uncertainty.

Bias study

7.7.4. Overall bias is best estimated by repeated analysis of a relevant CRM, using the complete measurement procedure. Where this is done, and the bias found to be insignificant, the uncertainty associated with the bias is simply the combination of the standard uncertainty on the CRM value with the standard deviation associated with the bias.

NOTE: Bias estimated in this way combines bias in laboratory performance with any bias intrinsic to the method in use. Special considerations may apply where the method in use is empirical; see section 7.8.1.

• When the reference material is only approximately representative of the test materials, additional factors should be considered, including (as appropriate) differences in composition and homogeneity; reference materials are frequently more homogeneous than test samples. Estimates based on professional judgement should be used, if necessary, to assign these uncertainties (see section 7.14.).

• Any effects following from different concentrations of analyte; for example, it is not uncommon to find that extraction losses differ between high and low levels of analyte.

7.7.5. Bias for a method under study can also be determined by comparison of the results with those of a reference method. If the results show that the bias is not statistically significant, the standard uncertainty is that for the reference method (if applicable; see section 7.8.1.), combined with the standard uncertainty associated with the measured difference between methods. The latter contribution to uncertainty is given by the standard deviation term used in the significance test applied to decide whether the difference is statistically significant, as explained in the example below.

EXAMPLE

A method (method 1) for determining the concentration of Selenium is compared with a reference method (method 2). The results (in mg kg⁻¹) from each method are as follows:

            x      s     n
Method 1   5.40   1.47   5
Method 2   4.76   2.75   5

The standard deviations are pooled to give a pooled standard deviation s_c:

s_c = \sqrt{\frac{1.47^2 \times (5-1) + 2.75^2 \times (5-1)}{5+5-2}} = 2.205

and a corresponding value of t:

t = \frac{(5.40 - 4.76)}{2.205 \times \sqrt{\frac{1}{5} + \frac{1}{5}}} = \frac{0.64}{1.4} = 0.46

tcrit is 2.3 for 8 degrees of freedom, so there is no significant difference between the means of the results given by the two methods. However, the difference (0.64) is compared with a standard deviation term of 1.4 above. This value of 1.4 is the standard deviation associated with the difference, and accordingly represents the relevant contribution to uncertainty associated with the measured bias.
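For readers who wish to check the arithmetic, the calculation in the example above can be reproduced with a few lines of Python; the figures are those given in the example.

```python
import math

# Figures from the example in 7.7.5 (mg kg-1)
x1, s1, n1 = 5.40, 1.47, 5    # method 1
x2, s2, n2 = 4.76, 2.75, 5    # reference method 2

s_c = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
sd_diff = s_c * math.sqrt(1 / n1 + 1 / n2)   # sd associated with the difference
t = (x1 - x2) / sd_diff

print(f"pooled standard deviation: {s_c:.3f}")    # ~2.205
print(f"sd of the difference:      {sd_diff:.1f}")  # ~1.4, the uncertainty contribution
print(f"t value:                   {t:.2f}")      # ~0.46, below t_crit = 2.3
```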

7.7.6. Overall bias can also be estimated by the addition of analyte to a previously studied material. The same considerations apply as for the study of reference materials (above). In addition, the differential behaviour of added material and material native to the sample should be considered and due allowance made. Such an allowance can be made on the basis of:

• Studies of the distribution of the bias observed for a range of matrices and levels of added analyte.

• Comparison of result observed in a reference material with the recovery of added analyte in the same reference material.

• Judgement on the basis of specific materials with known extreme behaviour. For example, oyster tissue, a common marine tissue reference, is well known for a tendency to co-precipitate some elements with calcium salts on digestion, and may provide an estimate of ‘worst case’ recovery on which an uncertainty estimate can be based (e.g. by treating the worst case as an extreme of a rectangular or triangular distribution).

• Judgement on the basis of prior experience.

7.7.7. Bias may also be estimated by comparison of the particular method with a value determined by the method of standard additions, in which known quantities of the analyte are added to the test material, and the correct analyte concentration inferred by extrapolation. The uncertainty associated with the bias is then normally dominated by the uncertainties associated with the extrapolation, combined (where appropriate) with any significant contributions from the preparation and addition of stock solution.

NOTE: To be directly relevant, the additions should be made to the original sample, rather than a prepared extract.

7.7.8. It is a general requirement of the ISO Guide that corrections should be applied for all recognised and significant systematic effects. Where a correction is applied to allow for a significant overall bias, the uncertainty associated with the bias is estimated as described in paragraph 7.7.5. for the case of insignificant bias.

7.7.9. Where the bias is significant, but is nonetheless neglected for practical purposes, additional action is necessary (see section 7.15.).


Additional factors

7.7.10. The effects of any remaining factors should be estimated separately, either by experimental variation or by prediction from established theory. The uncertainty associated with such factors should be estimated, recorded and combined with other contributions in the normal way.

7.7.11. Where the effect of these remaining factors is demonstrated to be negligible compared to the precision of the study (i.e. statistically insignificant), it is recommended that an uncertainty contribution equal to the standard deviation associated with the relevant significance test be associated with that factor.

EXAMPLE

The effect of a permitted 1-hour extraction time variation is investigated by a t-test on five determinations each on the same sample, for the normal extraction time and a time reduced by 1 hour. The means and standard deviations (in mg l⁻¹) were: Standard time: mean 1.8, standard deviation 0.21; alternate time: mean 1.7, standard deviation 0.17. A t-test uses the pooled variance of

\frac{(5-1) \times 0.21^2 + (5-1) \times 0.17^2}{(5-1) + (5-1)} = 0.037

to obtain

t = \frac{(1.8 - 1.7)}{\sqrt{0.037 \times \left(\frac{1}{5} + \frac{1}{5}\right)}} = 0.82

This is not significant compared to tcrit = 2.3. But note that the difference (0.1) is compared with a calculated standard deviation term of \sqrt{0.037 \times (1/5 + 1/5)} = 0.12. This value is the contribution to uncertainty associated with the effect of permitted variation in extraction time.

7.7.12. Where an effect is detected and is statistically significant, but remains sufficiently small to neglect in practice, the provisions of section 7.15. apply.

7.8. Evaluation of uncertainty for empirical methods

7.8.1. An ‘empirical method’ is a method agreed upon for the purposes of comparative measurement within a particular field of application where the measurand characteristically depends upon the method in use. The method accordingly defines the measurand. Examples include methods for leachable metals in ceramics and dietary fibre in foodstuffs (see also section 5.2. and example A5).

7.8.2. Where such a method is in use within its defined field of application, the bias associated with the method is defined as zero. In such circumstances, bias estimation need relate only to the laboratory performance and should not additionally account for bias intrinsic to the method. This has the following implications.

7.8.3. Reference material investigations, whether to demonstrate negligible bias or to measure bias, should be conducted using reference materials certified using the particular method, or for which a value obtained with the particular method is available for comparison.

7.8.4. Where reference materials so characterised are unavailable, overall control of bias is associated with the control of method parameters affecting the result; typically such factors as times, temperatures, masses, volumes etc. The uncertainty associated with these input factors must accordingly be assessed and either shown to be negligible or quantified (see example A6).

7.8.5. Empirical methods are normally subjected to collaborative studies and hence the uncertainty can be evaluated as described in section 7.6.

7.9. Evaluation of uncertainty for ad-hoc methods

7.9.1. Ad-hoc methods are methods established to carry out exploratory studies in the short term, or for a short run of test materials. Such methods are typically based on standard or well-established methods within the laboratory, but are adapted substantially (for example to study a different analyte) and will not generally justify formal validation studies for the particular material in question.

7.9.2. Since limited effort will be available to establish the relevant uncertainty contributions, it is necessary to rely largely on the known performance of related systems or blocks within these systems. Uncertainty estimation should accordingly be based on known performance on a related system or systems. This performance information should be supported by any study necessary to establish the relevance of the information. The following recommendations assume that such a related system is available and has been examined sufficiently to obtain a reliable uncertainty estimate, or that the method consists of blocks from other methods and that the uncertainty in these blocks has been established previously.

7.9.3. As a minimum, it is essential that an estimate of overall bias and an indication of precision be available for the method in question. Bias will ideally be measured against a reference material, but will in practice more commonly be assessed from spike recovery. The considerations of section 7.7.4. then apply, except that spike recoveries should be compared with those observed on the related system to establish the relevance of the prior studies to the ad-hoc method in question. The overall bias observed for the ad-hoc method, on the materials under test, should be comparable to that observed for the related system, within the requirements of the study.

7.9.4. A minimum precision experiment consists of a duplicate analysis. It is, however, recommended that as many replicates as practical are performed. The precision should be compared with that for the related system; the standard deviation for the ad-hoc method should be comparable.

NOTE: It is recommended that the comparison be based on inspection. Statistical significance tests (e.g. an F-test) will generally be unreliable with small numbers of replicates and will tend to lead to the conclusion that there is ‘no significant difference’ simply because of the low power of the test.

7.9.5. Where the above conditions are met unequivocally, the uncertainty estimate for the related system may be applied directly to results obtained by the ad-hoc method, making any adjustments appropriate for concentration dependence and other known factors.

7.10. Quantification of individual components

7.10.1. It is nearly always necessary to consider some sources of uncertainty individually. In some cases, this is only necessary for a small number of sources; in others, particularly when little or no method performance data is available, every source may need separate study (see examples 1, 2 and 3 in Appendix A for illustrations). There are several general methods for establishing individual uncertainty components:

• Experimental variation of input variables

• From standing data such as measurement and calibration certificates

• By modelling from theoretical principles

• Using judgement based on experience or informed by modelling of assumptions

These different methods are discussed briefly below.

7.11. Experimental estimation of individual uncertainty contributions

7.11.1. It is often possible and practical to obtain estimates of uncertainty contributions from experimental studies specific to individual parameters.

7.11.2. The standard uncertainty arising from random effects is often measured from repeatability experiments and is quantified in terms of the standard deviation of the measured values. In practice, no more than about fifteen replicates need normally be considered, unless a high precision is required.

7.11.3. Other typical experiments include:

• Study of the effect of a variation of a single parameter on the result. This is particularly appropriate in the case of continuous, controllable parameters, independent of other effects, such as time or temperature. The rate of change of the result with the change in the parameter can be obtained from the experimental data. This is then combined directly with the uncertainty in the parameter to obtain the relevant uncertainty contribution (a minimal sketch is given after this list).

NOTE: The change in parameter should be sufficient to change the result substantially compared to the precision available in the study (e.g. by five times the standard deviation of replicate measurements).

• Robustness studies, systematically examining the significance of moderate changes in parameters. This is particularly appropriate for rapid identification of significant effects, and commonly used for method optimisation. The method can be applied in the case of discrete effects, such as change of matrix, or small equipment configuration changes, which have unpredictable effects on the result. Where a factor is found to be significant, it is normally necessary to investigate further. Where insignificant, the associated uncertainty is (at least for initial estimation) that obtained from the robustness study.

• Systematic multifactor experimental designs intended to estimate factor effects and interactions. Such studies are particularly useful where a categorical variable is involved. A categorical variable is one in which the value of the variable is unrelated to the size of the effect; laboratory numbers in a study, analyst names, or sample types are examples of categorical variables. For example, the effect of changes in matrix type (within a stated method scope) could be estimated from recovery studies carried out in a replicated multiple-matrix study. An analysis of variance would then provide within- and between-matrix components of variance for observed analytical recovery. The between-matrix component of variance would provide a standard uncertainty associated with matrix variation.
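The single-parameter variation approach mentioned in the first bullet above can be sketched as follows; the temperatures, results and temperature uncertainty are invented for illustration, and a simple least-squares slope is used as the sensitivity coefficient.

```python
# Invented data: the result is measured at three deliberately varied
# temperatures, and the least-squares slope is used as the sensitivity
# coefficient c = d(result)/d(temperature).
temperatures = [20.0, 25.0, 30.0]
results = [10.05, 10.20, 10.36]

mean_t = sum(temperatures) / len(temperatures)
mean_r = sum(results) / len(results)
slope = (
    sum((t - mean_t) * (r - mean_r) for t, r in zip(temperatures, results))
    / sum((t - mean_t) ** 2 for t in temperatures)
)

u_temperature = 0.5                        # assumed standard uncertainty of temperature
u_contribution = abs(slope) * u_temperature

print(f"sensitivity coefficient: {slope:.4f} per degree")
print(f"uncertainty contribution: {u_contribution:.4f}")
```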

7.12. Estimation based on other results or data

7.12.1. It is often possible to estimate some of the standard uncertainties using whatever relevant information is available about the uncertainty on the quantity concerned. The following paragraphs suggest some sources of information.

7.12.2. Proficiency testing (PT) schemes. A laboratory’s results from participation in PT schemes can be used as a check on the evaluated uncertainty, since the uncertainty should be compatible with the spread of results obtained by that laboratory over a number of proficiency test rounds. Further, in the special case where

• the compositions of samples used in the scheme cover the full range analysed routinely

• the assigned values in each round are traceable to appropriate reference values, and

• the uncertainty on the assigned value is small compared to the observed spread of results

then the dispersion of the differences between the reported values and the assigned values obtained in repeated rounds provides a basis for a good estimate of the uncertainty arising from those parts of the measurement procedure within the scope of the scheme. For example, for a scheme operating with similar materials and analyte levels, the standard deviation of differences would give the standard uncertainty. Of course, systematic deviation from traceable assigned values and any other sources of uncertainty (such as those noted in section 7.6.1.) must also be taken into account.
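A minimal sketch of this use of PT data is shown below, assuming the special conditions listed above are met; the reported and assigned values are invented.

```python
import statistics

# Invented (reported, assigned) pairs from successive PT rounds.
reported = [4.8, 5.3, 5.1, 4.6, 5.0, 5.4, 4.9]
assigned = [5.0, 5.2, 5.0, 4.8, 5.1, 5.2, 5.0]

differences = [r - a for r, a in zip(reported, assigned)]

mean_difference = statistics.mean(differences)   # check for systematic deviation
u_estimate = statistics.stdev(differences)       # standard deviation of differences

print(f"mean difference:                   {mean_difference:+.2f}")
print(f"standard deviation of differences: {u_estimate:.2f}")
```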

7.12.3. Quality Assurance (QA) data. As noted previously it is necessary to ensure that the quality criteria set out in standard operating procedures are achieved, and that measurements on QA samples show that the criteria continue to be met. Where reference materials are used in QA checks, section 7.5. shows how the data can be used to evaluate uncertainty. Where any other stable material is used, the QA data provides an estimate of intermediate precision (Section 7.7.2.). QA data also forms a continuing check on the value quoted for the uncertainty. Clearly, the combined uncertainty arising from random effects cannot be less than the standard deviation of the QA measurements.

7.12.4. Suppliers' information. For many sources of uncertainty, calibration certificates or suppliers' catalogues provide information. For example, the tolerance of volumetric glassware may be obtained from the manufacturer’s catalogue or a calibration certificate relating to a particular item in advance of its use.

7.13. Modelling from theoretical principles

7.13.1. In many cases, well-established physical theory provides good models for effects on the result. For example, temperature effects on volumes and densities are well understood. In such cases, uncertainties can be calculated or estimated from the form of the relationship using the uncertainty propagation methods described in section 8.

7.13.2. In other circumstances, it may be necessary to use approximate theoretical models combined with experimental data. For example, where an analytical measurement depends on a timed derivatisation reaction, it may be necessary to assess uncertainties associated with timing. This might be done by simple variation of elapsed time. However, it may be better to establish an approximate rate model from brief experimental studies of the derivatisation kinetics near the concentrations of interest, and assess the uncertainty from the predicted rate of change at a given time.

7.14. Estimation based on judgement

7.14.1. The evaluation of uncertainty is neither a routine task nor a purely mathematical one; it depends on detailed knowledge of the nature of the measurand and of the measurement method and procedure used. The quality and utility of the uncertainty quoted for the result of a measurement therefore ultimately depends on the understanding, critical analysis, and integrity of those who contribute to the assignment of its value.

7.14.2. Most distributions of data can be interpreted in the sense that it is less likely to observe data in the margins of the distribution than in the centre. The quantification of these distributions and their associated standard deviations is done through repeated measurements.

7.14.3. However, other assessments of intervals may be required in cases when repeated measurements cannot be performed or do not provide a meaningful measure of a particular uncertainty component.

7.14.4. There are numerous instances in analytical chemistry when the latter prevails, and judgement is required. For example:

• An assessment of recovery and its associated uncertainty cannot be made for every single sample. Instead, an assessment is made for classes of samples (e.g. grouped by type of matrix), and the results applied to all samples of similar type. The degree of similarity is itself an unknown, thus this inference (from type of matrix to a specific sample) is associated with an extra element of uncertainty that has no frequentistic interpretation.

• The model of the measurement as defined by the specification of the analytical procedure is used for converting the measured quantity to the value of the measurand (analytical result). This model is - like all models in science - subject to uncertainty. It is only assumed that nature behaves according to the specific model, but this can never be known with ultimate certainty.

• The use of reference materials is highly encouraged, but there remains uncertainty regarding not only the true value, but also regarding the relevance of a particular reference material for the analysis of a specific sample. A judgement is required of the extent to which a proclaimed standard substance reasonably resembles the nature of the samples in a particular situation.

• Another source of uncertainty arises when the measurand is insufficiently defined by the procedure. Consider the determination of "permanganate oxidizable substances" that are undoubtedly different whether one analyses ground water or municipal waste water. Not only factors such as oxidation temperature, but also chemical effects such as matrix composition or interference, may have an influence on this specification.

• A common practice in analytical chemistry calls for spiking with a single substance, such as a close structural analogue or isotopomer, from which either the recovery of the respective native substance or even that of a whole class of compounds is judged. Clearly, the associated uncertainty is experimentally assessable provided the analyst is prepared to study the recovery at all concentration levels and ratios of measurands to the spike, and all "relevant" matrices. But frequently this experimentation is avoided and substituted by judgements on

• the concentration dependence of recoveries of measurand,

• the concentration dependence of recoveries of spike,

• the dependence of recoveries on (sub)type of matrix,

• the identity of binding modes of native and spiked substances.

7.14.5. Judgement of this type is not based on immediate experimental results, but rather on a subjective (personal) probability, an expression which here can be used synonymously with "degree of belief", "intuitive probability" and "credibility" [H.11]. It is also assumed that a degree of belief is not based on a snap judgement, but on a well considered mature judgement of probability.

7.14.6. Although it is recognised that subjective probabilities vary from one person to another, and even from time to time for a single person, they are not arbitrary as they are influenced by common sense, expert knowledge, and by earlier experiments and observations.

7.14.7. This may appear to be a disadvantage, but need not lead in practice to worse estimates than those from repeated measurements. This applies particularly if the true, real-life, variability in experimental conditions cannot be simulated and the resulting variability in data thus does not give a realistic picture.

7.14.8. A typical problem of this nature arises if long-term variability needs to be assessed when no collaborative study data are available. A scientist who dismisses the option of substituting subjective probability for an actually measured one (when the latter is not available) is likely to ignore important contributions to combined uncertainty, thus being ultimately less objective than one who relies on subjective probabilities.

7.14.9. For the purpose of estimation of combined uncertainties two features of degree of belief estimations are essential:

• degree of belief is regarded as interval valued which is to say that a lower and an upper bound similar to a classical probability distribution is provided,

• the same computational rules apply in combining 'degree of belief' contributions of uncertainty to a combined uncertainty as for standard deviations derived by other methods.

7.15. Significance of bias

7.15.1. It is a general requirement of the ISO Guide that corrections should be applied for all recognised and significant systematic effects.

7.15.2. In deciding whether a known bias can reasonably be neglected, the following approach is recommended:

i) Estimate the combined uncertainty without considering the relevant bias.

ii) Compare the bias with the combined uncertainty.

iii) Where the bias is not significant compared to the combined uncertainty, the bias may be neglected.

iv) Where the bias is significant compared to the combined uncertainty, additional action is required. Appropriate actions might be to:

• Eliminate or correct for the bias, making due allowance for the uncertainty of the correction.

• Report the observed bias and its uncertainty in addition to the result.

NOTE: Where a known bias is uncorrected by convention, the method should be considered empirical (see section 7.8).


8. Step 4. Calculating the Combined Uncertainty

8.1. Standard uncertainties

8.1.1. Before combination, all uncertainty contributions must be expressed as standard uncertainties, that is, as standard deviations. This may involve conversion from some other measure of dispersion. The following rules give some guidance for converting an uncertainty component to a standard deviation.

8.1.2. Where the uncertainty component was evaluated experimentally from the dispersion of repeated measurements, it can readily be expressed as a standard deviation. For the contribution to uncertainty in single measurements, the standard uncertainty is simply the observed standard deviation; for results subjected to averaging, the standard deviation of the mean [B.24] is used.

8.1.3. Where an uncertainty estimate is derived from previous results and data, it may already be expressed as a standard deviation. However, where a confidence interval is given with a level of confidence (in the form ±a at p%), then divide the value a by the appropriate percentage point of the Normal distribution for the level of confidence given to calculate the standard deviation.

EXAMPLE

A specification states that a balance reading is within ±0.2 mg with 95% confidence. From standard tables of percentage points on the normal distribution, a 95% confidence interval is calculated using a value of 1.96σ. Using this figure gives a standard uncertainty of (0.2/1.96) ≈ 0.1 mg.

8.1.4. If limits of ±a are given without a confidence level and there is reason to expect that extreme values are likely, it is normally appropriate to assume a rectangular distribution, with a standard deviation of a/√3 (see Appendix E).

EXAMPLE

A 10 ml Grade A volumetric flask is certified to within ±0.2 ml. The standard uncertainty is 0.2/√3 ≈ 0.12 ml.

8.1.5. If limits of ±a are given without a confidence level, but there is reason to expect that extreme values are unlikely, it is normally appropriate to assume a triangular distribution, with a standard deviation of a/√6 (see Appendix E).

EXAMPLE

A 10 ml Grade A volumetric flask is certified to within ±0.2 ml, but routine in-house checks show that extreme values are rare. The standard uncertainty is 0.2/√6 ≈ 0.08 ml.

8.1.6. Where an estimate is to be made on the basis of judgement, it may be possible to estimate the component directly as a standard deviation. If this is not possible then an estimate should be made of the maximum deviation which could reasonably occur in practice (excluding simple mistakes). If a smaller value is considered substantially more likely, this estimate should be treated as descriptive of a triangular distribution. If there are no grounds for believing that a small error is more likely than a large error, the estimate should be treated as characterising a rectangular distribution.

8.1.7. Conversion factors for the most commonly used distribution functions are given in Appendix E.1.
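The conversions in 8.1.3 to 8.1.5 can be summarised in a small helper function; the function name and interface below are illustrative only.

```python
import math

def standard_uncertainty(a: float, distribution: str) -> float:
    """Convert a quoted interval +/-a into a standard uncertainty (cf. 8.1.3-8.1.5)."""
    if distribution == "normal95":      # +/-a quoted at 95 % confidence
        return a / 1.96
    if distribution == "rectangular":   # limits only, extreme values likely
        return a / math.sqrt(3)
    if distribution == "triangular":    # limits only, extreme values unlikely
        return a / math.sqrt(6)
    raise ValueError(f"unknown distribution: {distribution}")

print(standard_uncertainty(0.2, "normal95"))     # balance example, ~0.10 mg
print(standard_uncertainty(0.2, "rectangular"))  # flask example,   ~0.12 ml
print(standard_uncertainty(0.2, "triangular"))   # flask example,   ~0.08 ml
```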

8.2. Combined standard uncertainty

8.2.1. Following the estimation of individual or groups of components of uncertainty and expressing them as standard uncertainties, the next stage is to calculate the combined standard uncertainty using one of the procedures described below.

8.2.2. The general relationship between the combined standard uncertainty uc(y) of a value y and the uncertainty of the independent parameters x1, x2, ...xn on which it depends is

u_c(y(x_1, x_2, \ldots)) = \sqrt{\sum_{i=1}^{n} c_i^2\, u(x_i)^2} = \sqrt{\sum_{i=1}^{n} u(y, x_i)^2} *

where y(x1,x2,...) is a function of several parameters x1, x2..., ci is a sensitivity coefficient evaluated as ci = ∂y/∂xi, the partial differential of y with respect to xi, and u(y,xi) denotes the uncertainty in y arising from the uncertainty in xi. Each variable's contribution u(y,xi) is just the square of the associated uncertainty expressed as a standard deviation multiplied by the square of the relevant sensitivity coefficient. These sensitivity coefficients describe how the value of y varies with changes in the parameters x1, x2 etc.

NOTE: Sensitivity coefficients may also be evaluated directly by experiment; this is particularly valuable where no reliable mathematical description of the relationship exists.

* The ISO Guide uses the shorter form ui(y) instead of u(y,xi).

8.2.3. Where variables are not independent, the relationship is more complex:

u(y(x_{i,j,\ldots})) = \sqrt{\sum_{i=1}^{n} c_i^2\, u(x_i)^2 + \sum_{\substack{i,k=1 \\ i \neq k}}^{n} c_i c_k \cdot u(x_i, x_k)}

where u(xi,xk) is the covariance between xi and xk and ci and ck are the sensitivity coefficients as described and evaluated in 8.2.2. The covariance is related to the correlation coefficient rik by

u(x_i, x_k) = u(x_i) \cdot u(x_k) \cdot r_{ik}

where -1 ≤ rik ≤ 1.

8.2.4. These general procedures apply whether the uncertainties are related to single parameters, grouped parameters or to the method as a whole. However, when an uncertainty contribution is associated with the whole procedure, it is usually expressed as an effect on the final result. In such cases, or when the uncertainty on a parameter is expressed directly in terms of its effect on y, the sensitivity coefficient ∂y/∂xi is equal to 1.0.

EXAMPLE

A result of 22 mg l⁻¹ shows a measured standard deviation of 4.1 mg l⁻¹. The standard uncertainty u(y) associated with precision under these conditions is 4.1 mg l⁻¹. The implicit model for the measurement, neglecting other factors for clarity, is

y = (Calculated result) + ε

where ε represents the effect of random variation under the conditions of measurement. ∂y/∂ε is accordingly 1.0.

8.2.5. Except for the case above, when the sensitivity coefficient is equal to one, and for the special cases given in Rule 1 and Rule 2 below, the general procedure, requiring the generation of partial differentials or the numerical equivalent, must be employed. Appendix E gives details of a numerical method, suggested by Kragten [H.12], which makes effective use of spreadsheet software to provide a combined standard uncertainty from input standard uncertainties and a known measurement model. It is recommended that this method, or another appropriate computer-based method, be used for all but the simplest cases.
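A minimal sketch of a Kragten-style numerical propagation is shown below, assuming the measurement model is available as a function; the model and numerical values are those of Example 2 in section 8.2.8, so the result can be compared with the analytical calculation given there.

```python
import math

def model(o, p, q, r):
    # Measurement model taken from Example 2 in 8.2.8
    return (o * p) / (q * r)

values = {"o": 2.46, "p": 4.32, "q": 6.38, "r": 2.99}
uncerts = {"o": 0.02, "p": 0.13, "q": 0.11, "r": 0.07}

y = model(**values)

# Perturb each input in turn by its standard uncertainty; the change in y
# approximates c_i * u(x_i), and the changes are combined in quadrature.
contributions = {}
for name, u in uncerts.items():
    shifted = dict(values)
    shifted[name] += u
    contributions[name] = model(**shifted) - y

u_y = math.sqrt(sum(c ** 2 for c in contributions.values()))
print(f"y = {y:.3f}, u(y) = {u_y:.3f}")   # approximately 0.56 and 0.024
```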

8.2.6. In some cases, the expressions for combining uncertainties reduce to much simpler forms. Two simple rules for combining standard uncertainties are given here.

Rule 1

For models involving only a sum or difference of quantities, e.g. y = (p + q + r + ...), the combined standard uncertainty uc(y) is given by

u_c(y(p, q, \ldots)) = \sqrt{u(p)^2 + u(q)^2 + \ldots}

Rule 2

For models involving only a product or quotient, e.g. y = (p × q × r × ...) or y = p / (q × r × ...), the combined standard uncertainty uc(y) is given by

\frac{u_c(y)}{y} = \sqrt{\left(\frac{u(p)}{p}\right)^2 + \left(\frac{u(q)}{q}\right)^2 + \ldots}

where (u(p)/p) etc. are the uncertainties in the parameters, expressed as relative standard deviations.

NOTE: Subtraction is treated in the same manner as addition, and division in the same way as multiplication.

8.2.7. For the purposes of combining uncertainty components, it is most convenient to break the original mathematical model down to expressions which consist solely of operations covered by one of the rules above. For example, the expression

\frac{(o + p)}{(q + r)}

should be broken down to the two elements (o+p) and (q+r). The interim uncertainties for each of these can then be calculated using rule 1 above; these interim uncertainties can then be combined using rule 2 to give the combined standard uncertainty.
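A short sketch of this two-stage combination for y = (o + p)/(q + r) follows; the input values and standard uncertainties are invented for illustration.

```python
import math

# Invented input values and standard uncertainties
o, p, q, r = 2.0, 3.0, 4.0, 1.0
u_o, u_p, u_q, u_r = 0.1, 0.1, 0.2, 0.1

numerator, denominator = o + p, q + r
u_numerator = math.sqrt(u_o**2 + u_p**2)       # Rule 1 on (o + p)
u_denominator = math.sqrt(u_q**2 + u_r**2)     # Rule 1 on (q + r)

y = numerator / denominator
u_y = y * math.sqrt((u_numerator / numerator) ** 2
                    + (u_denominator / denominator) ** 2)   # Rule 2 on the quotient

print(f"y = {y:.3f}, u(y) = {u_y:.3f}")
```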

8.2.8. The following examples illustrate the use of the above rules:

EXAMPLE 1

y = (p - q + r). The values are p = 5.02, q = 6.45 and r = 9.04 with standard uncertainties u(p) = 0.13, u(q) = 0.05 and u(r) = 0.22.

y = 5.02 - 6.45 + 9.04 = 7.61

u(y) = \sqrt{0.13^2 + 0.05^2 + 0.22^2} = 0.26


EXAMPLE 2

y = (op/qr). The values are o = 2.46, p = 4.32, q = 6.38 and r = 2.99, with standard uncertainties of u(o) = 0.02, u(p) = 0.13, u(q) = 0.11 and u(r) = 0.07.

y = (2.46 × 4.32) / (6.38 × 2.99) = 0.56

u(y) = 0.56 \times \sqrt{\left(\frac{0.02}{2.46}\right)^2 + \left(\frac{0.13}{4.32}\right)^2 + \left(\frac{0.11}{6.38}\right)^2 + \left(\frac{0.07}{2.99}\right)^2}

⇒ u(y) = 0.56 × 0.043 = 0.024

8.2.9. There are many instances in which the magnitudes of components of uncertainty vary with the level of analyte. For example, uncertainties in recovery may be smaller for high levels of material, or spectroscopic signals may vary randomly on a scale approximately proportional to intensity (constant coefficient of variation). In such cases, it is important to take account of the changes in the combined standard uncertainty with level of analyte. Approaches include:

• Restricting the specified procedure or uncertainty estimate to a small range of analyte concentrations.

• Providing an uncertainty estimate in the form of a relative standard deviation.

• Explicitly calculating the dependence and recalculating the uncertainty for a given result.

Appendix E4 gives additional information on these approaches.

8.3. Expanded uncertainty

8.3.1. The final stage is to multiply the combined standard uncertainty by the chosen coverage factor in order to obtain an expanded uncertainty. The expanded uncertainty is required to provide an interval which may be expected to encompass a large fraction of the distribution of values which could reasonably be attributed to the measurand.

8.3.2. In choosing a value for the coverage factor k, a number of issues should be considered. These include:

• The level of confidence required

• Any knowledge of the underlying distributions

• Any knowledge of the number of values used to estimate random effects (see 8.3.3 below).

8.3.3. For most purposes it is recommended that k is set to 2. However, this value of k may be insufficient where the combined uncertainty is based on statistical observations with relatively few degrees of freedom (less than about six). The choice of k then depends on the effective number of degrees of freedom.

8.3.4. Where the combined standard uncertainty is dominated by a single contribution with fewer than six degrees of freedom, it is recommended that k be set equal to the two-tailed value of Student’s t for the number of degrees of freedom associated with that contribution, and for the level of confidence required (normally 95%). Table 1 (below) gives a short list of values for t.

EXAMPLE:

A combined standard uncertainty for a weighing operation is formed from contributions ucal = 0.01 mg arising from calibration uncertainty and sobs = 0.08 mg based on the standard deviation of five repeated observations. The combined standard uncertainty uc is equal to

u_c = \sqrt{0.01^2 + 0.08^2} = 0.081 \text{ mg}

This is clearly dominated by the repeatability contribution sobs, which is based on five observations, giving 5 - 1 = 4 degrees of freedom. k is accordingly based on Student’s t. The two-tailed value of t for four degrees of freedom and 95% confidence is, from tables, 2.8; k is accordingly set to 2.8 and the expanded uncertainty U = 2.8 × 0.081 = 0.23 mg.
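The example can be reproduced in a few lines of Python; the figures are those given above, and the t value is taken from Table 1.

```python
import math

u_cal = 0.01   # mg, calibration contribution
s_obs = 0.08   # mg, sd of five repeated observations (4 degrees of freedom)

u_c = math.sqrt(u_cal**2 + s_obs**2)   # combined standard uncertainty, ~0.081 mg
k = 2.8                                # two-tailed Student's t, 95 %, 4 degrees of freedom
U = k * u_c

print(f"u_c = {u_c:.3f} mg")
print(f"U   = {U:.2f} mg")
```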

8.3.5. The Guide [H.2] gives additional guidance on choosing k where a small number of measurements is used to estimate large random effects, and should be referred to when estimating degrees of freedom where several contributions are significant.

8.3.6. Where the distributions concerned are normal, a coverage factor of 2 (or chosen according to paragraphs 8.3.3.-8.3.5. using a level of confidence of 95%) gives an interval containing approximately 95% of the distribution of values. It is not recommended that this interval is taken to imply a 95% confidence interval without a knowledge of the distribution concerned.


Table 1: Student’s t for 95% confidence (2-tailed)

Degrees of freedom ν        t
1                        12.7
2                         4.3
3                         3.2
4                         2.8
5                         2.6
6                         2.5


9. Reporting Uncertainty

9.1. General

9.1.1. The information necessary to report the result of a measurement depends on its intended use. The guiding principles are:

• present sufficient information to allow the result to be re-evaluated if new information or data become available

• it is preferable to err on the side of providing too much information rather than too little.

9.1.2. When the details of a measurement, including how the uncertainty was determined, depend on references to published documentation, it is imperative that the documentation to hand is kept up to date and consistent with the methods in use.

9.2. Information required

9.2.1. A complete report of a measurement result should include or refer to documentation containing:

• a description of the methods used to calculate the measurement result and its uncertainty from the experimental observations and input data

• the values and sources of all corrections and constants used in both the calculation and the uncertainty analysis

• a list of all the components of uncertainty with full documentation on how each was evaluated

9.2.2. The data and analysis should be presented in such a way that its important steps can be readily followed and the calculation of the result repeated if necessary.

9.2.3. Where a detailed report including intermediate input values is required, the report should

• give the value of each input value, its standard uncertainty and a description of how each was obtained

• give the relationship between the result and the input values and any partial derivatives, covariances or correlation coefficients used to account for correlation effects

• state the estimated number of degrees of freedom for the standard uncertainty of each input value (methods for estimating degrees of freedom are given in the ISO Guide [H.2]).

NOTE: Where the functional relationship is extremely complex or does not exist explicitly (for example, it may only exist as a computer program), the relationship may be described in general terms or by citation of appropriate references. In such cases, it must be clear how the result and its uncertainty were obtained.

9.2.4. When reporting the results of routine analysis, it may be sufficient to state only the value of the expanded uncertainty and the value of k.

9.3. Reporting standard uncertainty

9.3.1. When uncertainty is expressed as the combined standard uncertainty uc (that is, as a single standard deviation), the following form is recommended:

"(Result): x (units) [with a] standard uncertainty of uc (units) [where standard uncertainty is as defined in the International Vocabulary of Basic and General terms in Metrology, 2nd ed., ISO 1993 and corresponds to one standard deviation.]"

NOTE: The use of the symbol ± is not recommended when using standard uncertainty as the symbol is commonly associated with intervals corresponding to high levels of confidence.

Terms in parentheses [] may be omitted or abbreviated as appropriate.

EXAMPLE:

Total nitrogen: 3.52 %w/w

Standard uncertainty: 0.07 %w/w *

*Standard uncertainty corresponds to one standard deviation.

9.4. Reporting expanded uncertainty

9.4.1. Unless otherwise required, the result x should be stated together with the expanded uncertainty U calculated using a coverage factor k=2 (or as described in section 8.3.3.). The following form is recommended:

"(Result): (x ± U) (units)

[where] the reported uncertainty is [an expanded uncertainty as defined in the International Vocabulary of Basic and General terms in metrology, 2nd ed., ISO 1993,] calculated using a coverage factor of 2, [which gives a level of confidence of approximately 95%]"

Terms in parentheses [] may be omitted or abbreviated as appropriate. The coverage factor should, of course, be adjusted to show the value actually used.

EXAMPLE:

Total nitrogen: (3.52 ± 0.14) %w/w *

*The reported uncertainty is an expanded uncertainty calculated using a coverage factor of 2 which gives a level of confidence of approximately 95%.

9.5. Numerical expression of results

9.5.1. The numerical values of the result and its uncertainty should not be given with an excessive number of digits. Whether expanded uncertainty U or a standard uncertainty u is given, it is seldom necessary to give more than two significant digits for the uncertainty. Results should be rounded to be consistent with the uncertainty given.
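A small helper of the following kind can be used to apply this rounding convention; it is an illustration only, rounding the uncertainty to two significant digits and the result to the same decimal place.

```python
import math

def round_result(x: float, u: float, sig: int = 2):
    """Round the uncertainty to `sig` significant digits and the result to match."""
    decimals = -(math.floor(math.log10(abs(u))) - (sig - 1))
    return round(x, decimals), round(u, decimals)

print(round_result(3.5217, 0.0683))   # -> (3.522, 0.068)
```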

9.6. Compliance against limits

9.6.1. Regulatory compliance often requires that a measurand, such as the concentration of a toxic substance, be shown to be within particular limits. Measurement uncertainty clearly has implications for interpretation of analytical results in this context. In particular:

• The uncertainty in the analytical result may need to be taken into account when assessing compliance.

• The limits may have been set with some allowance for measurement uncertainties.

Consideration should be given to both factors in any assessment. The following paragraphs give examples of common practice.

9.6.2. Assuming that limits were set with no allowance for uncertainty, four situations are apparent for the case of compliance with an upper limit (see Figure 2):

i) The result exceeds the limit value plus the expanded uncertainty.

ii) The result exceeds the limiting value by less than the expanded uncertainty.

iii) The result is below the limiting value by less than the expanded uncertainty.

iv) The result is less than the limiting value minus the expanded uncertainty.

Case i) is normally interpreted as demonstrating clear non-compliance. Case iv) is normally interpreted as demonstrating compliance. Cases ii) and iii) will normally require individual consideration in the light of any agreements with the user of the data. Analogous arguments apply in the case of compliance with a lower limit.
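For illustration, the four situations can be expressed as a simple classification; the function below is a sketch only and assumes an upper limit set with no allowance for uncertainty.

```python
def compliance_case(result: float, U: float, upper_limit: float) -> str:
    """Classify a result against an upper limit using its expanded uncertainty U."""
    if result - U > upper_limit:
        return "i: non-compliance"
    if result > upper_limit:
        return "ii: above limit, limit within uncertainty - needs consideration"
    if result + U > upper_limit:
        return "iii: below limit, limit within uncertainty - needs consideration"
    return "iv: compliance"

for res in (1.30, 1.05, 0.95, 0.70):
    print(res, "->", compliance_case(res, U=0.15, upper_limit=1.0))
```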

9.6.3. Where it is known or believed that limits have been set with some allowance for uncertainty, a judgement of compliance can reasonably be made only with knowledge of that allowance. An exception arises where compliance is set against a stated method operating in defined circumstances. Implicit in such a requirement is the assumption that the uncertainty, or at least reproducibility, of the stated method is small enough to ignore for practical purposes. In such a case, provided that appropriate quality control is in place, compliance is normally reported only on the value of the particular result. This will normally be stated in any standard taking this approach.


Figure 2: Uncertainty and compliance limits (diagram not reproduced; it shows four results against an upper control limit: (i) result plus uncertainty above limit, (ii) result above limit but limit within uncertainty, (iii) result below limit but limit within uncertainty, (iv) result minus uncertainty below limit)


Appendix A. Examples

Introduction

General introduction

These examples illustrate how the techniques for evaluating uncertainty, described in sections 5-7, can be applied to some typical chemical analyses. They all follow the procedure shown in the flow diagram (Figure 1 on page 12). The uncertainty sources are identified and set out in a cause and effect diagram (see Appendix D). This helps to avoid double counting of sources and also assists in the grouping together of components whose combined effect can be evaluated. Examples 1-6 illustrate the use of the spreadsheet method of Appendix E.2 for calculating the combined uncertainties from the calculated contributions u(y,xi).*

Each of examples 1-6 has an introductory summary. This gives an outline of the analytical method, a table of the uncertainty sources and their respective contributions, a graphical comparison of the different contributions, and the combined uncertainty.

Examples 1-3 illustrate the evaluation of the uncertainty by the quantification of the uncertainty arising from each source separately. Each gives a detailed analysis of the uncertainty associated with the measurement of volumes using volumetric glassware and masses from difference weighings. The detail is for illustrative purposes, and should not be taken as a general recommendation as to the level of detail required or the approach taken. For many analyses, the uncertainty associated with these operations will not be significant and such a detailed evaluation will not be necessary. It would be sufficient to use typical values for these operations with due allowance being made for the actual values of the masses and volumes involved.

Example A1

Example A1 deals with the very simple case of the preparation of a calibration standard of cadmium in HNO3 for AAS. Its purpose is to show how to evaluate the components of uncertainty arising from the basic operations of volume measurement and weighing and how these components are combined to determine the overall uncertainty.

* Section 8.2.2. explains the theory behind the calculated contributions u(y,xi).

Example A2

This deals with the preparation of a standardised solution of sodium hydroxide (NaOH) which is standardised against the titrimetric standard potassium hydrogen phthalate (KHP). It includes the evaluation of uncertainty on simple volume measurements and weighings, as described in example A1, but also examines the uncertainty associated with the titrimetric determination.

Example A3

Example A3 expands on example A2 by including the titration of an HCl solution against the prepared NaOH solution.

Example A4

This illustrates the use of in-house validation data, as described in section 7.7., and shows how the data can be used to evaluate the uncertainty arising from the combined effect of a number of sources. It also shows how to evaluate the uncertainty associated with method bias.

Example A5

This shows how to evaluate the uncertainty on results obtained using a standard or “empirical” method to measure the amount of heavy metals leached from ceramic ware using a defined procedure, as described in sections 7.2.-7.8. Its purpose is to show how, in the absence of collaborative trial data or ruggedness testing results, it is necessary to consider the uncertainty arising from the range of the parameters (e.g. temperature, etching time and acid strength) allowed in the method definition. This process is considerably simplified when collaborative study data is available, as is shown in the next example.

Example A6

The sixth example is based on an uncertainty estimate for a crude (dietary) fibre determination.


Since the analyte is defined only in terms of the standard method, the method is empirical. In this case, collaborative study data, in-house QA checks and literature study data were available, permitting the approach described in section 7.6. The in-house studies verify that the method is performing as expected on the basis of the collaborative study. The example shows how the use of collaborative study data backed up by in-house method performance checks can substantially reduce the number of different contributions required to form an uncertainty estimate under these circumstances.

Example A7

This gives a detailed description of the evaluation of uncertainty on the measurement of the lead content of a water sample using IDMS. In addition to identifying the possible sources of uncertainty and quantifying them by statistical means, the example shows how it is also necessary to include the evaluation of components based on judgement, as described in section 7.14. Use of judgement is a special case of Type B evaluation as described in the ISO Guide [H.2].


Example A1: Preparation of a Calibration Standard

Summary

Goal

A calibration standard is prepared from a high purity metal (cadmium) with a concentration of ca. 1000 mg l-1.

Measurement procedure

The surface of the high purity metal is cleaned to remove any metal-oxide contamination. Afterwards the metal is weighed and then dissolved in nitric acid in a volumetric flask. The stages in the procedure are shown in the flow chart in Figure A1.1.

Figure A1.1: Preparation of cadmium standard (flow chart: clean metal surface → weigh metal → dissolve and dilute → result)

Measurand

cCd = 1000 × m × P / V  [mg l-1]

where

cCd : concentration of the calibration standard [mg l-1]
1000 : conversion factor from [ml] to [l]
m : mass of the high purity metal [mg]
P : purity of the metal given as mass fraction
V : volume of the liquid of the calibration standard [ml]

Identification of the uncertainty sources:

The relevant uncertainty sources are shown in the cause and effect diagram below:

(Cause and effect diagram: main branches Purity, m and V leading to c(Cd); the m branch splits into m(tare) and m(gross), each with repeatability, readability and calibration (linearity, sensitivity) contributions; the V branch carries repeatability, calibration and temperature contributions. Diagram not reproduced.)

Quantification of the uncertainty components

The values and their uncertainties are shown in the Table below.

Combined Standard Uncertainty

The combined standard uncertainty for the preparation of a 1002.7 mg l-1 Cd calibration standard is 0.9 mg l-1.

The different contributions are shown diagrammatically in Figure A1.2.

Table A1.1: Values and uncertainties

      Description                                 Value           Standard uncertainty   Relative standard uncertainty u(x)/x
P     Purity of the metal                         0.9999          0.000058               0.000058
m     Mass of the metal                           100.28 mg       0.05 mg                0.0005
V     Volume of the flask                         100.0 ml        0.07 ml                0.0007
cCd   Concentration of the calibration standard  1002.7 mg l-1   0.9 mg l-1             0.0009


Figure A1.2: Uncertainty contributions in cadmium standard preparation (bar chart of |u(y,xi)| in mg l-1 for Purity, m, V and the combined c(Cd); chart not reproduced). The values of u(y,xi) = (∂y/∂xi)·u(xi) are taken from Table A1.3.


Example A1: Preparation of a calibration standard. Detailed discussion

A1.1 Introduction

This first introductory example discusses the preparation of a calibration standard for atomic absorption spectroscopy (AAS) from the corresponding high purity metal (in this example ≈1000 mg l-1 Cd in dilute HNO3). Even though the example does not represent an entire analytical measurement, the use of calibration standards is part of nearly every determination, because modern routine analytical measurements are relative measurements, which need a reference standard to provide traceability to the SI.

A1.2 Step 1: Specification

The goal of this first step is to write down a clear statement of what is being measured. This specification includes a description of the preparation of the calibration standard and the mathematical relationship between the measurand and the parameters upon which it depends.

Procedure

The specific information on how to prepare a calibration standard is normally given in a Standard Operating Procedure (SOP). The preparation consists of the following stages.

Figure A1.3: Preparation of cadmium standard (flow chart: clean metal surface → weigh metal → dissolve and dilute → result)

The separate stages are:

i) The surface of the high purity metal is treated with an acid mixture to remove any metal-oxide contamination. The cleaning method is provided by the manufacturer of the metal and needs to be carried out to obtain the purity quoted on the certificate.

ii) The volumetric flask (100 ml) is weighed without and with the purified metal inside. The balance used has a resolution of 0.01 mg.

iii) 1 ml of nitric acid (65% m/m) and 3 ml of ion-free water are added to the flask to dissolve the cadmium (approximately 100 mg, weighed accurately). Afterwards the flask is filled with ion-free water up to the mark and mixed by inverting the flask at least thirty times.

Calculation:

The measurand in this example is the concentration of the calibration standard solution, which depends upon the weighing of the high purity metal (Cd), its purity and the volume of the liquid in which it is dissolved. The concentration is given by

cCd = 1000 × m × P / V  [mg l-1]

where
cCd : concentration of the calibration standard [mg l-1]
1000 : conversion factor from [ml] to [l]
m : mass of the high purity metal [mg]
P : purity of the metal given as mass fraction
V : volume of the liquid of the calibration standard [ml]

A1.3 Step 2: Identifying and analysing uncertainty sources

The aim of this second step is to list all the uncertainty sources for each of the parameters which affect the value of the measurand.


Purity

The purity of the metal (Cd) is quoted in the supplier's certificate as 99.99 ±0.01%. P is therefore 0.9999 ±0.0001. These values depend on the effectiveness of the surface cleaning of the high purity metal. If the manufacturer's procedure is strictly followed, no additional uncertainty due to the contamination of the surface with metal-oxide needs to be added to the value given in the certificate. There is no information available that 100% of the metal dissolves. Therefore one has to check with a repeated preparation experiment that this contribution can be neglected.

Mass m

The second stage of the preparation involves weighing the high purity metal. A 100 ml quantity of a 1000 mg l-1 cadmium solution is to be prepared.

The relevant mass of cadmium is determined by a tared weighing, giving m = 0.10028 g.

The manufacturer’s literature identifies three uncertainty sources for the tared weighing: the repeatability; the readability (digital resolution) of the balance scale; and the contribution due to the uncertainty in the calibration function of the scale. This calibration function has two potential uncertainty sources, identified as the sensitivity of the balance and its linearity. The sensitivity can be neglected because the mass by difference is done on the same balance over a very narrow range.

NOTE: Buoyancy correction is not considered because all weighing results are quoted on the conventional basis for weighing in air [H.19]. The remaining uncertainties are too small to consider. Note 1 in Appendix G refers.

Volume V

The volume of the solution contained in the volumetric flask is subject to three major sources of uncertainty:

• The uncertainty in the certified internal volume of the flask.

• Variation in filling the flask to the mark.

• The flask and solution temperatures differing from the temperature at which the volume of the flask was calibrated.

The different effects and their influences are shown as a cause and effect diagram in Figure A1.4 (see Appendix D for description).

Figure A1.4: Uncertainties in Cd standard preparation (cause and effect diagram: branches Purity, m and V leading to c(Cd); m splits into m(tare) and m(gross) with repeatability, readability and calibration (linearity, sensitivity) contributions; V carries repeatability, calibration and temperature contributions. Diagram not reproduced.)

A1.4 Step 3: Quantifying the uncertainty components

In step 3 the size of each identified potential source of uncertainty is either directly measured, estimated using previous experimental results or derived from theoretical analysis.

Purity

The purity of the cadmium is given on the certificate as 0.9999 ±0.0001. Because there is no additional information about the uncertainty value, a rectangular distribution is assumed. To obtain the standard uncertainty u(P) the value of 0.0001 has to be divided by √3 (see Appendix E1.1):

u(P) = 0.0001/√3 = 0.000058

Mass m

The uncertainty associated with the mass of the cadmium is estimated, using the data from the calibration certificate and the manufacturer’s recommendations on uncertainty estimation, as 0.05 mg. This estimate takes into account the three contributions identified earlier (Section A1.3).

NOTE: Detailed calculations for uncertainties in mass can be very intricate, and it is important to refer to manufacturer’s literature where mass uncertainties are dominant. In this example, the calculations are omitted for clarity.


Volume V

The volume has three major influences: calibration, repeatability and temperature effects.

i) Calibration: The manufacturer quotes a volume for the flask of 100 ml ±0.1 ml measured at a temperature of 20 °C. The value of the uncertainty is given without a confidence level or distribution information, so an assumption is necessary. Here, the standard uncertainty is calculated assuming a triangular distribution:

0.1 ml/√6 = 0.04 ml

NOTE: A triangular distribution was chosen because, in an effective production process, the nominal value is more likely than extremes. The resulting distribution is better represented by a triangular distribution than a rectangular one.

ii) Repeatability: The uncertainty due to variations in filling can be estimated from a repeatability experiment on a typical example of the flask used. A series of ten fill and weigh experiments on a typical 100 ml flask gave a standard deviation of 0.02 ml. This can be used directly as a standard uncertainty.

iii) Temperature: According to the manufacturer the flask has been calibrated at a temperature of 20 °C, whereas the laboratory temperature varies between the limits of ±4 °C. The uncertainty from this effect can be calculated from the estimate of the temperature range and the coefficient of the volume expansion. The volume expansion of the liquid is considerably larger than that of the flask, so only the former needs to be considered. The coefficient of volume expansion for water is 2.1×10–4 °C–1, which leads to a volume variation of

±(100 × 4 × 2.1×10–4) = ±0.084 ml

The standard uncertainty is calculated using the assumption of a rectangular distribution for the temperature variation, i.e.

0.084 ml/√3 = 0.05 ml

The three contributions are combined to give the standard uncertainty u(V) of the volume V:

u(V) = √(0.04² + 0.02² + 0.05²) = 0.07 ml
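The same combination can be checked with a few lines of Python (an editorial sketch using the values quoted above; it is not part of the worked example):

    import math

    u_cal = 0.1 / math.sqrt(6)                 # +/-0.1 ml, triangular distribution -> 0.04 ml
    u_rep = 0.02                               # standard deviation of ten fill/weigh trials, ml
    u_temp = 100 * 4 * 2.1e-4 / math.sqrt(3)   # +/-4 degC, rectangular distribution -> 0.05 ml

    u_V = math.sqrt(u_cal**2 + u_rep**2 + u_temp**2)
    print(round(u_V, 2))                       # -> 0.07 ml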

A1.5 Step 4: Calculating the combined standard uncertainty

cCd is given by

cCd = 1000 × m × P / V  [mg l-1]

The intermediate values, their standard uncertainties and their relative standard uncertainties are summarised overleaf (Table A1.2).

Using those values, the concentration of the calibration standard is

cCd = 1000 × 100.28 × 0.9999 / 100.0 = 1002.7 mg l-1

For this simple multiplicative expression, the uncertainties associated with each component are combined as follows:

uc(cCd)/cCd = √( (u(P)/P)² + (u(m)/m)² + (u(V)/V)² )
            = √(0.000058² + 0.0005² + 0.0007²) = 0.0009

uc(cCd) = cCd × 0.0009 = 1002.7 mg l-1 × 0.0009 = 0.9 mg l-1

It is preferable to derive the combined standard uncertainty (uc(cCd)) using the spreadsheet method given in Appendix E, since this can be utilised even for complex expressions. The completed spreadsheet is shown in Table A1.3.

Table A1.2: Values and Uncertainties

Description                   Value x   u(x)       u(x)/x
Purity of the metal P         0.9999    0.000058   0.000058
Mass of the metal m (mg)      100.28    0.05 mg    0.0005
Volume of the flask V (ml)    100.0     0.07 ml    0.0007

The contributions of the different parameters are shown in Figure A1.5. The contribution of the uncertainty on the volume of the flask is the largest, and that from the weighing procedure is similar. The uncertainty on the purity of the cadmium has virtually no influence on the overall uncertainty.

The expanded uncertainty U(cCd) is obtained by multiplying the combined standard uncertainty by a coverage factor of 2, giving

U(cCd) = 2 × 0.9 mg l-1 = 1.8 mg l-1

Table A1.3: Spreadsheet calculation of uncertainty

     A                B           C           D           E
1                     P           m           V
2    Value            0.9999      100.28      100.00
3    Uncertainty      0.000058    0.05        0.07
5    P                0.9999      0.999958    0.9999      0.9999
6    m                100.28      100.28      100.33      100.28
7    V                100.0       100.00      100.00      100.07
9    c(Cd)            1002.69972  1002.75788  1003.19966  1001.99832
10   u(y,xi)                      0.05816     0.49995     -0.70140
11   u(y)², u(y,xi)²  0.74529     0.00338     0.24995     0.49196
13   u(c(Cd))         0.9

The values of the parameters are entered in the second row from C2 to E2. Their standard uncertainties are in the row below (C3-E3). The spreadsheet copies the values from C2-E2 into the second column from B5 to B7. The result (c(Cd)) using these values is given in B9. C5 shows the value of P from C2 plus its uncertainty given in C3. The result of the calculation using the values C5-C7 is given in C9. The columns D and E follow a similar procedure. The values shown in row 10 (C10-E10) are the differences of row (C9-E9) minus the value given in B9. In row 11 (C11-E11) the values of row 10 (C10-E10) are squared and summed to give the value shown in B11. B13 gives the combined standard uncertainty, which is the square root of B11.
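The same finite-difference procedure can be scripted instead of using a spreadsheet. The Python sketch below is an editorial illustration of the Appendix E.2 approach applied to the model of section A1.2; the function name and layout are the editor's, not the Guide's.

    import math

    def combined_uncertainty(model, values, uncertainties):
        # Recalculate the model with each input increased by its standard
        # uncertainty and combine the resulting differences u(y,xi) in
        # quadrature, as in the spreadsheet of Appendix E.2.
        y0 = model(**values)
        contributions = {}
        for name, u in uncertainties.items():
            shifted = dict(values)
            shifted[name] += u
            contributions[name] = model(**shifted) - y0   # u(y,xi)
        return y0, math.sqrt(sum(c**2 for c in contributions.values()))

    c_cd = lambda P, m, V: 1000 * m * P / V               # model from A1.2
    values = {"P": 0.9999, "m": 100.28, "V": 100.0}
    uncerts = {"P": 0.000058, "m": 0.05, "V": 0.07}

    y0, u_y = combined_uncertainty(c_cd, values, uncerts)
    print(round(y0, 1), round(u_y, 1))                    # -> 1002.7 0.9 (mg/l)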

Figure A1.5: Uncertainty contributions in cadmium standard preparation (bar chart of |u(y,xi)| in mg l-1 for Purity, m, V and the combined c(Cd); chart not reproduced). The values of u(y,xi) = (∂y/∂xi)·u(xi) are taken from Table A1.3.


Example A2: Standardising a Sodium Hydroxide Solution

Summary

Goal

A solution of sodium hydroxide (NaOH) is standardised against the titrimetric standard potassium hydrogen phthalate (KHP).

Measurement procedure

The titrimetric standard (KHP) is dried and weighed. After the preparation of the NaOH solution the sample of the titrimetric standard (KHP) is dissolved and then titrated using the NaOH solution. The stages in the procedure are shown in the flow chart Figure A2.1.

Measurand:

cNaOH = 1000 × mKHP × PKHP / (MKHP × VT)  [mol l-1]

where
cNaOH : concentration of the NaOH solution [mol l-1]
1000 : conversion factor [ml] to [l]
mKHP : mass of the titrimetric standard KHP [g]
PKHP : purity of the titrimetric standard given as mass fraction
MKHP : molar mass of KHP [g mol-1]
VT : titration volume of NaOH solution [ml]

Figure A2.1: Standardising NaOH (flow chart: weighing KHP → preparing NaOH → titration → result)

Identification of the uncertainty sources:

The relevant uncertainty sources are shown as a cause and effect diagram in Figure A2.2.

Quantification of the uncertainty components

The different uncertainty contributions are given in Table A2.1, and shown diagrammatically in Figure A2.3. The combined standard uncertainty for the 0.10214 mol l-1 NaOH solution is 0.00010 mol l-1.

Table A2.1: Values and uncertainties in NaOH standardisation

       Description                        Value x           Standard uncertainty u   Relative standard uncertainty u(x)/x
rep    Repeatability                      1.0               0.0005                   0.0005
mKHP   Mass of KHP                        0.3888 g          0.00013 g                0.00033
PKHP   Purity of KHP                      1.0               0.00029                  0.00029
MKHP   Molar mass of KHP                  204.2212 g mol-1  0.0038 g mol-1           0.000019
VT     Volume of NaOH for KHP titration   18.64 ml          0.013 ml                 0.0007
cNaOH  NaOH solution                      0.10214 mol l-1   0.00010 mol l-1          0.00097


Figure A2.2: Cause and effect diagram for titration (branches m(KHP), P(KHP), M(KHP) and V(T) leading to c(NaOH), with a combined repeatability branch; V(T) carries calibration, temperature, end-point and bias contributions; m(KHP) carries m(tare) and m(gross) with calibration (linearity, sensitivity) contributions. Diagram not reproduced.)

Figure A2.3: Contributions to titration uncertainty (bar chart of |u(y,xi)| in mmol l-1 for Repeatability, m(KHP), P(KHP), M(KHP), V(T) and the combined c(NaOH); chart not reproduced). The values of u(y,xi) = (∂y/∂xi)·u(xi) are taken from Table A2.3.


Example A2: Standardising a sodium hydroxide solution. Detailed discussion

A2.1 Introduction

This second introductory example discusses an experiment to determine the concentration of a solution of sodium hydroxide (NaOH). The NaOH is titrated against the titrimetric standard potassium hydrogen phthalate (KHP). It is assumed that the NaOH concentration is known to be of the order of 0.1 mol l-1. The end-point of the titration is determined by an automatic titration system using a combined pH-electrode to measure the shape of the pH-curve. The functional composition of the titrimetric standard potassium hydrogen phthalate (KHP), which is the number of free protons in relation to the overall number of molecules, provides traceability of the concentration of the NaOH solution to the SI system.

A2.2 Step 1: Specification

The aim of the first step is to describe the measurement procedure. This description consists of a listing of the measurement steps and a mathematical statement of the measurand and the parameters upon which it depends.

Procedure:

The measurement sequence to standardise the NaOH solution has the following stages.

Figure A2.4: Standardisation of a solution of sodium hydroxide (flow chart: weighing KHP → preparing NaOH → titration → result)

The separate stages are:

i) The primary standard potassium hydrogen phthalate (KHP) is dried according to the supplier’s instructions. The instructions are given in the supplier's catalogue, which also states the purity of the titrimetric standard and its uncertainty. A titration volume of approximately 19 ml of a 0.1 mol l-1 solution of NaOH entails weighing out an amount of KHP as close as possible to

204.2212 × 0.1 × 19 / (1000 × 1.0) = 0.388 g

The weighing is carried out on a balance with a last digit of 0.1 mg.

ii) A 0.1 mol l-1 solution of sodium hydroxide is prepared. In order to prepare 1 l of solution, it is necessary to weigh out ≈4 g NaOH. However, since the concentration of the NaOH solution is to be determined by assay against the primary standard KHP and not by direct calculation, no information on the uncertainty sources connected with the molecular weight or the mass of NaOH taken is required.

iii) The weighed quantity of the titrimetric standard KHP is dissolved with ≈50 ml of ion-free water and then titrated using the NaOH solution. An automatic titration system controls the addition of NaOH and records the pH-curve. It also determines the end-point of the titration from the shape of the recorded curve.

Calculation:

The measurand is the concentration of the NaOH solution, which depends on the mass of KHP, its purity, its molecular weight and the volume of NaOH at the end-point of the titration:

cNaOH = 1000 × mKHP × PKHP / (MKHP × VT)  [mol l-1]

where
cNaOH : concentration of the NaOH solution [mol l-1]
1000 : conversion factor [ml] to [l]
mKHP : mass of the titrimetric standard KHP [g]
PKHP : purity of the titrimetric standard given as mass fraction
MKHP : molar mass of KHP [g mol-1]
VT : titration volume of NaOH solution [ml]

A2.3 Step 2: Identifying and analysing uncertainty sources

The aim of this step is to identify all major uncertainty sources and to understand their effect on the measurand and its uncertainty. This has been shown to be one of the most difficult steps in evaluating the uncertainty of analytical measurements, because there is a risk of neglecting uncertainty sources on the one hand and of double-counting them on the other. The use of a cause and effect diagram (Appendix D) is one possible way to help prevent this happening. The first step in preparing the diagram is to draw the four parameters of the equation of the measurand as the main branches.

Afterwards, each step of the method is considered and any further influence quantity is added as a factor to the diagram, working outwards from the main effect. This is carried out for each branch until effects become sufficiently remote, that is, until effects on the result are negligible.

Mass mKHP

Approximately 388 mg of KHP are weighed to standardise the NaOH solution. The weighing procedure is a weighing by difference. This means that a branch for the determination of the tare (mtare) and another branch for the gross weight (mgross) have to be drawn in the cause and effect diagram. Each of the two weighings is subject to run to run variability and the uncertainty of the calibration of the balance. The calibration itself has two possible uncertainty sources: the sensitivity and the linearity of the calibration function. If the weighing is done on the same scale and over a small range of weight then the sensitivity contribution can be neglected.

All these uncertainty sources are added into the cause and effect diagram (see Figure A2.6).

Purity PKHP

The purity of KHP is quoted in the supplier's catalogue to be within the limits of 99.95% and 100.05%. PKHP is therefore 1.0000 ±0.0005. There is no other uncertainty source if the drying procedure was performed according to the supplier's specification.

Figure A2.5: First step in setting up a cause and effect diagram (main branches m(KHP), P(KHP), M(KHP) and V(T) leading to c(NaOH); diagram not reproduced)

Figure A2.6: Cause and effect diagram with added uncertainty sources for the weighing procedure (m(tare) and m(gross), each with repeatability and calibration (linearity, sensitivity) contributions; diagram not reproduced)

Molar mass MKHP

Potassium hydrogen phthalate (KHP) has the empirical formula C8H5O4K.

The uncertainty in the molar mass of the compound can be determined by combining the uncertainty in the atomic weights of its constituent elements. A table of atomic weights including uncertainty estimates is published biennially by IUPAC in the Journal of Pure and Applied Chemistry. The molar mass can be calculated directly from these; the cause and effect diagram (Figure A2.7) omits the individual atomic masses for clarity.

Volume VT

The titration is accomplished using a 20 ml piston burette. The delivered volume of NaOH from the piston burette is subject to the same three uncertainty sources as the filling of the volumetric flask in the previous example. These uncertainty sources are the repeatability of the delivered volume, the uncertainty of the calibration of that volume and the uncertainty resulting from the difference between the temperature in the laboratory and that of the calibration of the piston burette. In addition there is the contribution of the end-point detection, which has two uncertainty sources.

1. The repeatability of the end-point detection, which is independent of the repeatability of the volume delivery.

2. The possibility of a systematic difference between the determined end-point and the equivalence point (bias), due to carbonate absorption during the titration and inaccuracy in the mathematical evaluation of the end-point from the titration curve.

These items are included in the cause and effect diagram shown in Figure A2.7.

A2.4 Step 3: Quantifying uncertainty components

In step 3, the uncertainty from each source identified in step 2 has to be quantified and then converted to a standard uncertainty. All experiments always include at least the repeatability of the volume delivery of the piston burette and the repeatability of the weighing operation. Therefore it is reasonable to combine all the repeatability contributions into one contribution for the overall experiment and to use the values from the method validation to quantify its size, leading to the revised cause and effect diagram in Figure A2.8.

The method validation shows a repeatability for the titration experiment of 0.05%. This value can be directly used for the calculation of the combined standard uncertainty.

Mass mKHP

The relevant weighings are:
container and KHP: 60.5450 g (observed)
container less KHP: 60.1562 g (observed)
KHP: 0.3888 g (calculated)

Because of the combined repeatability term identified above, there is no need to take into account the weighing repeatability. Any systematic offset across the scale will also cancel. The uncertainty therefore arises solely from the balance linearity uncertainty.

Figure A2.7: Cause and effect diagram (all sources) (as Figure A2.2, but with separate repeatability contributions on each branch; diagram not reproduced)

Linearity: The calibration certificate of the balance quotes ±0.15 mg for the linearity. This value is the maximum difference between the actual mass on the pan and the reading of the scale. The balance manufacturer's own uncertainty evaluation recommends the use of a rectangular distribution to convert the linearity contribution to a standard uncertainty.

The balance linearity contribution is accordingly

0.15 mg/√3 = 0.09 mg

This contribution has to be counted twice, once for the tare and once for the gross weight, because each is an independent observation and the linearity effects are not correlated.

This gives, for the standard uncertainty u(mKHP) of the mass mKHP, a value of

u(mKHP)² = 2 × (0.09)²  ⇒  u(mKHP) = 0.13 mg

NOTE 1: Buoyancy correction is not considered because all weighing results are quoted on the conventional basis for weighing in air [H.19]. The remaining uncertainties are too small to consider. Note 1 in Appendix G refers.

NOTE 2: There are other difficulties when weighing a titrimetric standard. A temperature difference of only 1 °C between the standard and the balance causes a drift in the same order of magnitude as the repeatability contribution. The titrimetric standard has been completely dried, but the weighing procedure is carried out at a humidity of around 50 % relative humidity, so adsorption of some moisture is expected.

Purity PKHP

PKHP is 1.0000 ±0.0005. The supplier gives no further information concerning the uncertainty in the catalogue. Therefore this uncertainty is taken as having a rectangular distribution, so the standard uncertainty u(PKHP) is

u(PKHP) = 0.0005/√3 = 0.00029

Molar mass MKHP

From the latest IUPAC table, the atomic weights and listed uncertainties for the constituent elements of KHP (C8H5O4K) are:

Element   Atomic weight   Quoted uncertainty   Standard uncertainty
C         12.0107         ±0.0008              0.00046
H         1.00794         ±0.00007             0.000040
O         15.9994         ±0.0003              0.00017
K         39.0983         ±0.0001              0.000058

For each element, the standard uncertainty is found by treating the IUPAC quoted uncertainty as forming the bounds of a rectangular distribution. The corresponding standard uncertainty is therefore obtained by dividing those values by √3.

Figure A2.8: Cause and effect diagram (repeatabilities combined) (as Figure A2.7, but with a single combined repeatability branch; diagram not reproduced)


The separate element contributions to the molar mass, together with the uncertainty contribution for each, are:

      Calculation   Result    Standard uncertainty
C8    8×12.0107     96.0856   0.0037
H5    5×1.00794     5.0397    0.00020
O4    4×15.9994     63.9976   0.00068
K     1×39.0983     39.0983   0.000058

The uncertainty in each of these values is calculated by multiplying the standard uncertainty in the previous table by the number of atoms.

This gives a molar mass for KHP of

MKHP = 96.0856 + 5.0397 + 63.9976 + 39.0983 = 204.2212 g mol-1

As this expression is a sum of independent values, the standard uncertainty u(MKHP) is a simple square root of the sum of the squares of the contributions:

u(MKHP) = √(0.0037² + 0.0002² + 0.00068² + 0.000058²) = 0.0038 g mol-1

NOTE: Since the element contributions to MKHP are simply the sum of the single atom contributions, it might be expected from the general rule for combining uncertainty contributions that the uncertainty for each element contribution would be calculated from the sum of squares of the single atom contributions, that is, for carbon, u(MC) = √(8 × 0.00037²) = 0.001. Recall, however, that this rule applies only to independent contributions, that is, contributions from separate determinations of the value. In this case, the total is obtained by multiplying a single value by 8. Notice that the contributions from different elements are independent, and will therefore combine in the usual way.
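For readers who wish to reproduce the molar mass calculation, the short Python sketch below recalculates MKHP and u(MKHP) from the atomic weights and quoted bounds in the table above (an editorial illustration; the data layout and names are the editor's).

    import math

    elements = {   # symbol: (atomic weight, quoted half-width, atoms in C8H5O4K)
        "C": (12.0107, 0.0008, 8),
        "H": (1.00794, 0.00007, 5),
        "O": (15.9994, 0.0003, 4),
        "K": (39.0983, 0.0001, 1),
    }

    M = sum(aw * n for aw, _, n in elements.values())
    # each element total is one atomic weight multiplied by n, so its
    # uncertainty is n * u(atomic weight); the element totals themselves
    # are independent and combine in quadrature
    u_M = math.sqrt(sum((n * a / math.sqrt(3))**2 for _, a, n in elements.values()))

    print(round(M, 4), round(u_M, 4))   # -> 204.2212 0.0038 (g/mol)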

Volume VT

1. Repeatability of the volume delivery: As before, the repeatability has already been taken into account via the combined repeatability term for the experiment.

2. Calibration: The limits of accuracy of the delivered volume are indicated by the manufacturer as a ± figure. For a 20 ml piston burette this number is typically ±0.03 ml. Assuming a triangular distribution gives a standard uncertainty of 0.03/√6 = 0.012 ml.

Note: The ISO Guide (F.2.3.3) recommends adoption of a triangular distribution if there are reasons to expect values in the centre of the range being more likely than those near the bounds. For the glassware in examples A1 and A2, a triangular distribution has been assumed (see the discussion under Volume uncertainties in example A1).

3. Temperature: The uncertainty due to the lack of temperature control is calculated in the same way as in the previous example, but this time taking a possible temperature variation of ±3 °C (with a 95% confidence). Again using the coefficient of volume expansion for water as 2.1×10–4 °C–1 gives a value of

19 × 2.1×10–4 × 3 / 1.96 = 0.006 ml

Thus the standard uncertainty due to incomplete temperature control is 0.006 ml.

NOTE: When dealing with uncertainties arising from incomplete control of environmental factors such as temperature, it is essential to take account of any correlation in the effects on different intermediate values. In this example, the dominant effect on the solution temperature is taken as the differential heating effects of different solutes, that is, the solutions are not equilibrated to ambient temperature. Temperature effects on each solution concentration at STP are therefore uncorrelated in this example, and are consequently treated as independent uncertainty contributions.

4. Bias of the end-point detection: The titration is performed under a layer of Argon to exclude any bias due to the absorption of CO2 in the titration solution. This approach follows the principle that it is better to prevent any bias than to correct for it. There are no other indications that the end-point determined from the shape of the pH-curve does not correspond to the equivalence-point, because a strong acid is titrated with a strong base. Therefore it is assumed that the bias of the end-point detection and its uncertainty are negligible.

VT is found to be 18.64 ml and combining the remaining contributions to the uncertainty u(VT) of the volume VT gives a value of

u(VT) = √(0.012² + 0.006²) = 0.013 ml

A2.5 Step 4: Calculating the combined standard uncertainty

cNaOH is given by

cNaOH = 1000 × mKHP × PKHP / (MKHP × VT)  [mol l-1]

The values of the parameters in this equation, their standard uncertainties and their relative standard uncertainties are collected in Table A2.2.

Using the values given above:

cNaOH = 1000 × 0.3888 × 1.0 / (204.2212 × 18.64) = 0.10214 mol l-1

For a multiplicative expression (as above) the standard uncertainties are used as follows:

uc(cNaOH)/cNaOH = √( (u(rep)/rep)² + (u(mKHP)/mKHP)² + (u(PKHP)/PKHP)² + (u(MKHP)/MKHP)² + (u(VT)/VT)² )
               = √(0.0005² + 0.00033² + 0.00029² + 0.000019² + 0.00070²) = 0.00097

uc(cNaOH) = cNaOH × 0.00097 = 0.00010 mol l-1
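An editorial cross-check of this combination in Python (using the rounded relative uncertainties from Table A2.2; not part of the worked example) is:

    import math

    c_naoh = 1000 * 0.3888 * 1.0 / (204.2212 * 18.64)   # mol/l

    relative_u = (0.0005, 0.00033, 0.00029, 0.000019, 0.0007)  # rep, m(KHP), P(KHP), M(KHP), V(T)
    rel_combined = math.sqrt(sum(r**2 for r in relative_u))

    print(round(c_naoh, 5), round(rel_combined, 5), round(c_naoh * rel_combined, 5))
    # -> 0.10214 0.00097 0.0001 (mol/l)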

Spreadsheet software is used to simplify the above calculation of the combined standard uncertainty (see Appendix E.2). The spreadsheet filled in with the appropriate values is shown as Table A2.3, which appears with additional explanation.

It is instructive to examine the relative contributions of the different parameters. The contributions can easily be visualised using a histogram. Figure A2.9 shows the calculated values |u(y,xi)| from Table A2.3.

The contribution of the uncertainty of the titration volume VT is by far the largest, followed by the repeatability. The weighing procedure and the purity of the titrimetric standard show the same order of magnitude, whereas the uncertainty in the molar mass is again nearly an order of magnitude smaller.

Table A2.2: Values and uncertainties for titration

       Description                        Value x           Standard uncertainty u(x)   Relative standard uncertainty u(x)/x
rep    Repeatability                      1.0               0.0005                      0.0005
mKHP   Weight of KHP                      0.3888 g          0.00013 g                   0.00033
PKHP   Purity of KHP                      1.0               0.00029                     0.00029
MKHP   Molar mass of KHP                  204.2212 g mol-1  0.0038 g mol-1              0.000019
VT     Volume of NaOH for KHP titration   18.64 ml          0.013 ml                    0.0007

Figure A2.9: Uncertainty contributions in NaOH standardisation (bar chart of |u(y,xi)| in mmol l-1 for Repeatability, m(KHP), P(KHP), M(KHP), V(T) and the combined c(NaOH); chart not reproduced)


A2.6 Step 5: Re-evaluate the significant components

The contribution of V(T) is the largest one. The volume of NaOH for titration of KHP (V(T)) itself is affected by four influence quantities: the repeatability of the volume delivery, the calibration of the piston burette, the difference between the operation and calibration temperature of the burette, and the repeatability of the end-point detection. Checking the size of each contribution, the calibration is by far the largest. Therefore this contribution needs to be investigated more thoroughly.

The standard uncertainty of the calibration of V(T) was calculated from the data given by the manufacturer assuming a triangular distribution. The influence of the choice of the shape of the distribution is shown in Table A2.4.

According to the ISO Guide 4.3.9 Note 1:

“For a normal distribution with expectation µ and standard deviation σ, the interval µ ±3σ encompasses approximately 99.73 percent of the distribution. Thus, if the upper and lower bounds a+ and a- define 99.73 percent limits rather than 100 percent limits, and Xi can be assumed to be approximately normally distributed rather than there being no specific knowledge about Xi [between the bounds], then u²(xi) = a²/9. By comparison, the variance of a symmetric rectangular distribution of the half-width a is a²/3 ... and that of a symmetric triangular distribution of the half-width a is a²/6 ... The magnitudes of the variances of the three distributions are surprisingly similar in view of the differences in the assumptions upon which they are based.”

Thus the choice of the distribution function of this influence quantity has little effect on the value of the combined standard uncertainty (uc(cNaOH)) and it is adequate to assume that it is triangular.

Table A2.3: Spreadsheet calculation of titration uncertainty

     A                B          C          D          E          F          G
1                     Rep        m(KHP)     P(KHP)     M(KHP)     V(T)
2    Value            1.0        0.3888     1.0        204.2212   18.64
3    Uncertainty      0.0005     0.00013    0.00029    0.0038     0.013
5    rep              1.0        1.0005     1.0        1.0        1.0        1.0
6    m(KHP)           0.3888     0.3888     0.38893    0.3888     0.3888     0.3888
7    P(KHP)           1.0        1.0        1.0        1.00029    1.0        1.0
8    M(KHP)           204.2212   204.2212   204.2212   204.2212   204.2250   204.2212
9    V(T)             18.64      18.64      18.64      18.64      18.64      18.653
11   c(NaOH)          0.102136   0.102187   0.102170   0.102166   0.102134   0.102065
12   u(y,xi)                     0.000051   0.000034   0.000030   -0.000002  -0.000071
13   u(y)², u(y,xi)²  9.72E-9    2.62E-9    1.16E-9    9E-10      4E-12      5.041E-9
15   u(c(NaOH))       0.000099

The values of the parameters are given in the second row from C2 to G2. Their standard uncertainties are entered in the row below (C3-G3). The spreadsheet copies the values from C2-G2 into the second column from B5 to B9. The result (c(NaOH)) using these values is given in B11. C5 shows the value of the repeatability from C2 plus its uncertainty given in C3. The result of the calculation using the values C5-C9 is given in C11. The columns D to G follow a similar procedure. The values shown in row 12 (C12-G12) are the differences of row (C11-G11) minus the value given in B11. In row 13 (C13-G13) the values of row 12 (C12-G12) are squared and summed to give the value shown in B13. B15 gives the combined standard uncertainty, which is the square root of B13.


The expanded uncertainty U(cNaOH) is obtained by multiplying the combined standard uncertainty by a coverage factor of 2.

U(cNaOH) = 0.00010 × 2 = 0.0002 mol l-1

Thus the concentration of the NaOH solution is (0.1021 ±0.0002) mol l-1.

Table A2.4: Effect of different distribution assumptions

Distribution      Factor   u(V(T;cal)) (ml)   u(V(T)) (ml)   uc(cNaOH)
Rectangular       √3       0.017              0.019          0.00011 mol l-1
Triangular        √6       0.012              0.015          0.00009 mol l-1
Normal (Note 1)   √9       0.010              0.013          0.000085 mol l-1

Note 1: The factor of 9 arises from the factor of 3 in Note 1 of ISO Guide 4.3.9 (see page 48 for details).
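The u(V(T;cal)) column of Table A2.4 can be reproduced with the following Python fragment (an editorial sketch; the divisors 3, 6 and 9 are the variance factors a²/3, a²/6 and a²/9 quoted from the ISO Guide above):

    import math

    a = 0.03   # half-width of the manufacturer's calibration figure, ml
    for name, divisor in (("rectangular", 3), ("triangular", 6), ("normal (99.73 %)", 9)):
        print(f"{name:16s} u(V(T;cal)) = {a / math.sqrt(divisor):.3f} ml")
    # -> 0.017, 0.012 and 0.010 ml respectively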


Example A3: An Acid/Base Titration

Summary

Goal

A solution of hydrochloric acid (HCl) is standardised against a solution of sodium hydroxide (NaOH) with known content.

Measurement procedure

A solution of hydrochloric acid (HCl) is titrated against a solution of sodium hydroxide (NaOH), which has been standardised against the titrimetric standard potassium hydrogen phthalate (KHP), to determine its concentration. The stages of the procedure are shown in Figure A3.1.

Measurand:

cHCl = 1000 × mKHP × PKHP × VT2 / (VT1 × MKHP × VHCl)  [mol l-1]

where the symbols are as given in Table A3.1 and the value of 1000 is a conversion factor from ml to litres.

Identification of the uncertainty sources:

The relevant uncertainty sources are shown in Figure A3.2.

Quantification of the uncertainty components

The final uncertainty is estimated as 0.00016 mol l-1. Table A3.1 summarises the values and their uncertainties; Figure A3.3 shows the values diagrammatically.

Figure A3.1: Titration procedure (flow chart: weighing KHP → titrate KHP with NaOH → take aliquot of HCl → titrate HCl with NaOH → result)

Figure A3.2: Cause and effect diagram for acid-base titration (branches m(KHP), P(KHP), M(KHP), V(T1), V(T2) and V(HCl) leading to c(HCl), with a combined repeatability branch; the volume branches carry calibration, temperature, end-point and bias contributions, and m(KHP) carries m(tare) and m(gross) weighed on the same balance with calibration (linearity, sensitivity) contributions. Diagram not reproduced.)


Table A3.1: Acid-base titration values and uncertainties

       Description                        Value x           Standard uncertainty u(x)   Relative standard uncertainty u(x)/x
rep    Repeatability                      1                 0.001                       0.001
mKHP   Weight of KHP                      0.3888 g          0.00013 g                   0.00033
PKHP   Purity of KHP                      1.0               0.00029                     0.00029
VT2    Volume of NaOH for HCl titration   14.89 ml          0.015 ml                    0.0010
VT1    Volume of NaOH for KHP titration   18.64 ml          0.016 ml                    0.00086
MKHP   Molar mass of KHP                  204.2212 g mol-1  0.0038 g mol-1              0.000019
VHCl   HCl aliquot for NaOH titration     15 ml             0.011 ml                    0.00073
cHCl   HCl solution concentration         0.10139 mol l-1   0.00016 mol l-1             0.0016

Figure A3.3: Uncertainty contributions in acid-base titration (bar chart of |u(y,xi)| in mmol l-1 for Repeatability, m(KHP), P(KHP), V(T2), V(T1), M(KHP), V(HCl) and the combined c(HCl); chart not reproduced). The values of u(y,xi) = (∂y/∂xi)·u(xi) are taken from Table A3.3.


Example A3: An acid/base titration. Detailed discussion

A3.1 Introduction

This example discusses a sequence of experiments to determine the concentration of a solution of hydrochloric acid (HCl). In addition, a number of special aspects of the titration technique are highlighted. The HCl is titrated against a solution of sodium hydroxide (NaOH), which was freshly standardised with potassium hydrogen phthalate (KHP). As in the previous example (A2) it is assumed that the HCl concentration is known to be of the order of 0.1 mol l-1 and that the end-point of the titration is determined by an automatic titration system using the shape of the pH-curve. This evaluation gives the measurement uncertainty in terms of the SI units of measurement.

A3.2 Step 1: Specification

A detailed description of the measurement procedure is given in the first step. It comprises a listing of the measurement steps and a mathematical statement of the measurand.

Procedure

The determination of the concentration of the HCl solution consists of the following stages (see also Figure A3.4):

i) The titrimetric standard potassium hydrogen phthalate (KHP) is dried to ensure the purity quoted in the supplier's certificate. Approximately 0.388 g of the dried standard is then weighed to achieve a titration volume of 19 ml NaOH.

ii) The KHP titrimetric standard is dissolved with ≈50 ml of ion free water and then titrated using the NaOH solution. A titration system controls automatically the addition of NaOH and samples the pH-curve. The end-point is evaluated from the shape of the recorded curve.

iii) 15 ml of the HCl solution is transferred by means of a volumetric pipette. The HCl solution is diluted with de-ionised water to give ≈50 ml solution in the titration vessel.

iv) The same automatic titrator performs the measurement of HCl solution.

Figure A3.4: Determination of the concentration of an HCl solution (flow chart: weighing KHP → titrate KHP with NaOH → take aliquot of HCl → titrate HCl with NaOH → result)

Calculation:

The measurand is the concentration of the HCl solution, cHCl. It depends on the mass of KHP, its purity, its molecular weight, the volumes of NaOH at the end-point of the two titrations and the aliquot of HCl:

cHCl = 1000 × mKHP × PKHP × VT2 / (VT1 × MKHP × VHCl)  [mol l-1]

where
cHCl : concentration of the HCl solution [mol l-1]
1000 : conversion factor [ml] to [l]
mKHP : mass of KHP taken [g]
PKHP : purity of KHP given as mass fraction
VT2 : volume of NaOH solution to titrate HCl [ml]
VT1 : volume of NaOH solution to titrate KHP [ml]
MKHP : molar mass of KHP [g mol-1]
VHCl : volume of HCl titrated with NaOH solution [ml]


A3.3 Step 2: Identifying and analysing uncertainty sources

The different uncertainty sources and their influence on the measurand are best analysed by visualising them first in a cause and effect diagram (Figure A3.5).

Because a repeatability estimate is available from validation studies for the procedure as a whole, there is no need to consider all the repeatability contributions individually. They are therefore grouped into one contribution (shown in the revised cause and effect diagram in Figure A3.5).

The influences on the parameters VT2, VT1, mKHP, PKHP and MKHP have been discussed extensively in the previous example, therefore only the new influence quantities of VHCl will be dealt with in more detail in this section.

Volume VHCl

15 ml of the investigated HCl solution is to be transferred by means of a volumetric pipette. The delivered volume of the HCl from the pipette is subject to the same three sources of uncertainty as all the volumetric measuring devices.

1. The variability or repeatability of the delivered volume.

2. The uncertainty in the stated volume of the pipette.

3. The solution temperature differing from the calibration temperature of the pipette.

A3.4 Step 3: Quantifying uncertainty components

The goal of this step is to quantify each uncertainty source analysed in step 2. The quantification of the branches, or rather of the different components, was described in detail in the previous two examples. Therefore only a summary for each of the different contributions will be given.

repeatability

The method validation shows a repeatability for the determination of 0.1% (as %rsd). This value can be used directly for the calculation of the combined standard uncertainty associated with the different repeatability terms.

Mass mKHP

Calibration/linearity: The balance manufacturer quotes ±0.15 mg for the linearity contribution. This value represents the maximum difference between the actual mass on the pan and the reading of the scale. The linearity contribution is assumed to show a rectangular distribution and is converted to a standard uncertainty:

0.15/√3 = 0.087 mg

The contribution for the linearity has to be accounted for twice, once for the tare and once for the gross mass, leading to an uncertainty u(mKHP) of

u(mKHP)² = 2 × (0.087)²  ⇒  u(mKHP) = 0.12 mg

NOTE 1: The contribution is applied twice because no assumptions are made about the form of the non-linearity. The non-linearity is accordingly treated as a systematic effect on each weighing, which varies randomly in magnitude across the measurement range.

Figure A3.5: Final cause and effect diagram (branches m(KHP), P(KHP), M(KHP), V(T1), V(T2) and V(HCl) leading to c(HCl), with a single combined repeatability branch; the volume branches carry calibration, temperature, end-point and bias contributions, and m(KHP) carries m(tare) and m(gross) weighed on the same balance with calibration (linearity, sensitivity) contributions. Diagram not reproduced.)


NOTE 2: Buoyancy correction is not considered because all weighing results are quoted on the conventional basis for weighing in air [H.19]. The remaining uncertainties are too small to consider. Note 1 in Appendix G refers.

P(KHP)

P(KHP) is given in the supplier's certificate as 100% ±0.05%. The quoted uncertainty is taken as a rectangular distribution, so the standard uncertainty u(PKHP) is

u(PKHP) = 0.0005/√3 = 0.00029

V(T2)

i) Calibration: Figure given by the manufacturer (±0.03 ml) and approximated to a triangular distribution: 0.03/√6 = 0.012 ml.

ii) Temperature: The possible temperature variation is within the limits of ±4 °C and approximated to a rectangular distribution: 15 × 2.1×10–4 × 4/√3 = 0.007 ml.

iii) Bias of the end-point detection: A bias between the determined end-point and the equivalence-point due to atmospheric CO2 can be prevented by performing the titration under Argon. No uncertainty allowance is made.

VT2 is found to be 14.89 ml and combining the two contributions to the uncertainty u(VT2) of the volume VT2 gives a value of

u(VT2) = √(0.012² + 0.007²) = 0.014 ml

Volume VT1

All contributions except the one for the temperature are the same as for VT2.

i) Calibration: 0.03/√6 = 0.012 ml

ii) Temperature: The approximate volume for the titration of 0.3888 g KHP is 19 ml NaOH, therefore its uncertainty contribution is 19 × 2.1×10–4 × 4/√3 = 0.009 ml.

iii) Bias: Negligible

VT1 is found to be 18.64 ml with a standard uncertainty u(VT1) of

u(VT1) = √(0.012² + 0.009²) = 0.015 ml

Molar mass MKHP

Atomic weights and listed uncertainties (from current IUPAC tables) for the constituent elements of KHP (C8H5O4K) are:

Element   Atomic weight   Quoted uncertainty   Standard uncertainty
C         12.0107         ±0.0008              0.00046
H         1.00794         ±0.00007             0.000040
O         15.9994         ±0.0003              0.00017
K         39.0983         ±0.0001              0.000058

For each element, the standard uncertainty is found by treating the IUPAC quoted uncertainty as forming the bounds of a rectangular distribution. The corresponding standard uncertainty is therefore obtained by dividing those values by √3.

The molar mass MKHP for KHP and its uncertainty u(MKHP) are, respectively:

MKHP = 8×12.0107 + 5×1.00794 + 4×15.9994 + 39.0983 = 204.2212 g mol-1

u(MKHP) = √((8×0.00046)² + (5×0.00004)² + (4×0.00017)² + 0.000058²) = 0.0038 g mol-1

NOTE: The single atom contributions are not independent. The uncertainty for the atom contribution is therefore calculated by multiplying the standard uncertainty of the atomic weight by the number of atoms.

Volume VHCl

i) Calibration: Uncertainty stated by the manufacturer for a 15 ml pipette as ±0.02 ml and approximated with a triangular distribution: 0.02/√6 = 0.008 ml.

ii) Temperature: The temperature of the laboratory is within the limits of ±4 °C. Using a rectangular temperature distribution gives a standard uncertainty of 15 × 2.1×10⁻⁴ × 4/√3 = 0.007 ml.

Combining these contributions gives

u(V_HCl) = √(0.0037² + 0.008² + 0.007²) = 0.011 ml


A3.5 Step 4: Calculating the combined standard uncertainty

c_HCl is given by

c_HCl = (1000 × m_KHP × P_KHP × V_T2) / (V_T1 × M_KHP × V_HCl)

NOTE: The repeatability estimate is, in this example, treated as a relative effect; the complete model equation is therefore

c_HCl = [(1000 × m_KHP × P_KHP × V_T2) / (V_T1 × M_KHP × V_HCl)] × rep

All the intermediate values of the two-step experiment and their standard uncertainties are collected in Table A3.2. Using these values:

c_HCl = (1000 × 0.3888 × 1.0 × 14.89) / (18.64 × 204.2212 × 15) = 0.10139 mol l-1

The uncertainties associated with each component are combined accordingly:

u_c(c_HCl)/c_HCl = √[ (u(m_KHP)/m_KHP)² + (u(P_KHP)/P_KHP)² + (u(V_T2)/V_T2)² + (u(V_T1)/V_T1)² + (u(M_KHP)/M_KHP)² + (u(V_HCl)/V_HCl)² + u(rep)² ]

                 = √(0.00031² + 0.00029² + 0.00094² + 0.00080² + 0.000019² + 0.00073² + 0.001²) = 0.0018

⇒ u_c(c_HCl) = c_HCl × 0.0018 = 0.00018 mol l-1
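A minimal Python sketch (illustrative only, not part of the guide) reproducing this relative combination from the values in Table A3.2:

```python
import math

c_HCl = 0.10139  # mol/l, from the model equation above

# relative standard uncertainties u(x)/x from Table A3.2
rel = {
    "m(KHP)": 0.00031, "P(KHP)": 0.00029, "V(T2)": 0.00094,
    "V(T1)": 0.00080, "M(KHP)": 0.000019, "V(HCl)": 0.00073,
    "rep": 0.001,
}

rel_combined = math.sqrt(sum(v ** 2 for v in rel.values()))
print(f"u(c)/c = {rel_combined:.4f}")                 # about 0.0018
print(f"u(c)   = {rel_combined * c_HCl:.5f} mol/l")   # about 0.00018 mol/l
```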

A spreadsheet method (see Appendix E) can be used to simplify the above calculation of the combined standard uncertainty. The spreadsheet filled in with the appropriate values is shown in Table A3.3, with an explanation.

The sizes of the different contributions can be compared using a histogram. Figure A3.6 shows the values of the contributions |u(y,xi)| from Table A3.3.

Figure A3.6: Uncertainties in acid-base titration
[histogram of |u(y,xi)| in mmol l-1 for c(HCl), Repeatability, m(KHP), P(KHP), V(T2), V(T1), M(KHP) and V(HCl)]

The expanded uncertainty U(c_HCl) is calculated by multiplying the combined standard uncertainty by a coverage factor of 2:

U(c_HCl) = 0.00018 × 2 = 0.0004 mol l-1

The concentration of the HCl solution is

(0.1014 ±0.0004) mol l-1

Table A3.2: Acid-base titration values and uncertainties (2-step procedure)

       Description                          Value x            Standard uncertainty u(x)   Relative standard uncertainty u(x)/x
rep    Repeatability                        1                  0.001                       0.001
mKHP   Mass of KHP                          0.3888 g           0.00012 g                   0.00031
PKHP   Purity of KHP                        1.0                0.00029                     0.00029
VT2    Volume of NaOH for HCl titration     14.89 ml           0.014 ml                    0.00094
VT1    Volume of NaOH for KHP titration     18.64 ml           0.015 ml                    0.00080
MKHP   Molar mass of KHP                    204.2212 g mol-1   0.0038 g mol-1              0.000019
VHCl   HCl aliquot for NaOH titration       15 ml              0.011 ml                    0.00073


A3.6 Special aspects of the titration example

Three special aspects of the titration experiment will be dealt with in this second part of the example. It is interesting to see what effect changes in the experimental set-up or in the implementation of the titration would have on the final result and its combined standard uncertainty.

Influence of a mean room temperature of 25 °C

For routine analysis, analytical chemists rarely correct for the systematic effect of the temperature in the laboratory on the volume. This question considers the uncertainty introduced by the corrections required.

The volumetric measuring devices are calibrated at a temperature of 20 °C. But rarely does any analytical laboratory have a temperature controller to keep the room temperature at that level. For illustration, consider correction for a mean room temperature of 25 °C.

The final analytical result is calculated using the corrected volumes and not the calibrated volumes at 20 °C. A volume is corrected for the temperature effect according to

V' = V[1 − α(T − 20)]

where
V' : actual volume at the mean temperature T
V  : volume calibrated at 20 °C
α  : expansion coefficient of an aqueous solution [°C-1]
T  : observed temperature in the laboratory [°C]

The equation of the measurand has to be rewritten:

c_HCl = (1000 × m_KHP × P_KHP × V'_T2) / (M_KHP × V'_T1 × V'_HCl)

Table A3.3: Acid-base titration – spreadsheet calculation of uncertainty

    A             B          C          D          E          F          G           H           I
1                            rep        m(KHP)     P(KHP)     V(T2)      V(T1)       M(KHP)      V(HCl)
2   value                    1.0        0.3888     1.0        14.89      18.64       204.2212    15
3   uncertainty              0.001      0.00012    0.00029    0.014      0.015       0.0038      0.011
4
5   rep           1.0        1.001      1.0        1.0        1.0        1.0         1.0         1.0
6   m(KHP)        0.3888     0.3888     0.38892    0.3888     0.3888     0.3888      0.3888      0.3888
7   P(KHP)        1.0        1.0        1.0        1.00029    1.0        1.0         1.0         1.0
8   V(T2)         14.89      14.89      14.89      14.89      14.904     14.89       14.89       14.89
9   V(T1)         18.64      18.64      18.64      18.64      18.64      18.655      18.64       18.64
10  M(KHP)        204.2212   204.2212   204.2212   204.2212   204.2212   204.2212    204.2250    204.2212
11  V(HCl)        15         15         15         15         15         15          15          15.011
12
13  c(HCl)        0.101387   0.101489   0.101418   0.101417   0.101482   0.101306    0.101385    0.101313
14  u(y, xi)                 0.000101   0.000031   0.000029   0.000095   -0.000082   -0.0000019  -0.000074
15  u(y)², u(y, xi)²  3.34E-8  1.03E-8  9.79E-10   8.64E-10   9.09E-9    6.65E-9     3.56E-12    5.52E-9
16
17  u(c(HCl))     0.00018

The values of the parameters are given in the second row from C2 to I2. Their standard uncertainties are entered in the row below (C3:I3). The spreadsheet copies the values from C2:I2 into the second column from B5 to B11. The result (c(HCl)) using these values is given in B13. C5 shows the value of the repeatability from C2 plus its uncertainty given in C3. The result of the calculation using the values C5:C11 is given in C13. The columns D to I follow a similar procedure. The values shown in row 14 (C14:I14) are the differences of the row (C13:I13) minus the value given in B13. In row 15 (C15:I15) the values of row 14 (C14:I14) are squared and summed to give the value shown in B15. B17 gives the combined standard uncertainty, which is the square root of B15.
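The same difference-based (spreadsheet) procedure can be sketched in Python. The following is an illustrative implementation only, not part of the guide; the model function and input values are those of Table A3.3:

```python
import math

def c_hcl(rep, m_khp, p_khp, v_t2, v_t1, m_mol, v_hcl):
    """Model equation: c(HCl) = 1000*m*P*V_T2 / (V_T1*M*V_HCl) * rep."""
    return 1000 * m_khp * p_khp * v_t2 / (v_t1 * m_mol * v_hcl) * rep

# values and standard uncertainties as in rows 2 and 3 of Table A3.3
x = {"rep": 1.0, "m(KHP)": 0.3888, "P(KHP)": 1.0, "V(T2)": 14.89,
     "V(T1)": 18.64, "M(KHP)": 204.2212, "V(HCl)": 15.0}
u = {"rep": 0.001, "m(KHP)": 0.00012, "P(KHP)": 0.00029, "V(T2)": 0.014,
     "V(T1)": 0.015, "M(KHP)": 0.0038, "V(HCl)": 0.011}

y0 = c_hcl(*x.values())
contributions = {}
for name in x:
    shifted = dict(x)
    shifted[name] += u[name]                              # perturb one input by its uncertainty
    contributions[name] = c_hcl(*shifted.values()) - y0   # u(y, x_i), as in row 14

u_c = math.sqrt(sum(d ** 2 for d in contributions.values()))
print(f"c(HCl) = {y0:.6f} mol/l, u(c) = {u_c:.5f} mol/l")  # ~0.101387 and ~0.00018
```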


Including the temperature correction terms gives:

c_HCl = (1000 × m_KHP × P_KHP × V'_T2) / (M_KHP × V'_T1 × V'_HCl)

      = (1000 × m_KHP × P_KHP × V_T2[1 − α(T − 20)]) / (M_KHP × V_T1[1 − α(T − 20)] × V_HCl[1 − α(T − 20)])

This expression can be simplified by assuming that the mean temperature T and the expansion coefficient of an aqueous solution α are the same for all three volumes:

c_HCl = (1000 × m_KHP × P_KHP × V_T2) / (M_KHP × V_T1 × V_HCl × [1 − α(T − 20)])

This gives a slightly different result for the HCl concentration at 20 °C:

c_HCl = (1000 × 0.3888 × 1.0 × 14.89) / (204.2212 × 18.64 × 15 × [1 − 2.1×10⁻⁴ × (25 − 20)]) = 0.10149 mol l-1

The figure is still within the range given by the combined standard uncertainty of the result at a mean temperature of 20 °C, so the result is not significantly affected. Nor does the change affect the evaluation of the combined standard uncertainty, because a temperature variation of ±4 °C at the mean room temperature of 25 °C is still assumed.
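A one-line check of this temperature-corrected result (illustrative only; the values are those used above):

```python
alpha = 2.1e-4   # 1/degC, expansion coefficient of an aqueous solution
T = 25.0         # degC, mean room temperature

corr = 1 - alpha * (T - 20)   # common volume correction factor for all three volumes
c_HCl_25 = 1000 * 0.3888 * 1.0 * 14.89 / (204.2212 * 18.64 * 15 * corr)
print(f"c(HCl) = {c_HCl_25:.5f} mol/l")   # about 0.10149 mol/l
```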

Visual end-point detection

A bias is introduced if the indicator phenolphthalein is used for visual end-point detection, instead of an automatic titration system extracting the equivalence-point from the pH curve. The change of colour from transparent to red/purple occurs between pH 8.2 and 9.8, leading to an excess volume, introducing a bias compared to the end-point detection employing a pH meter. Investigations have shown that the excess volume is around 0.05 ml, with a standard uncertainty for the visual detection of the end-point of approximately 0.03 ml. The bias arising from the excess volume has to be considered in the calculation of the final result. The actual volume for the visual end-point detection is given by

V_T1;Ind = V_T1 + V_Excess

where
V_T1;Ind : volume from a visual end-point detection
V_T1     : volume at the equivalence-point
V_Excess : excess volume needed to change the colour of phenolphthalein

The volume correction quoted above leads to the following changes in the equation of the measurand:

c_HCl = (1000 × m_KHP × P_KHP × (V_T2;Ind − V_Excess)) / (M_KHP × (V_T1;Ind − V_Excess) × V_HCl)

The standard uncertainties u(V_T2) and u(V_T1) have to be recalculated using the standard uncertainty of the visual end-point detection as the uncertainty component of the repeatability of the end-point detection.

u(V_T1) = u(V_T1;Ind − V_Excess) = √(0.004² + 0.012² + 0.009² + 0.03²) = 0.034 ml

u(V_T2) = u(V_T2;Ind − V_Excess) = √(0.004² + 0.012² + 0.007² + 0.03²) = 0.033 ml

The combined standard uncertainty

u_c(c_HCl) = 0.0003 mol l-1

is considerably larger than before.

Triple determination to obtain the final result

The two-step experiment is performed three times to obtain the final result. The triple determination is expected to reduce the contribution from repeatability, and hence reduce the overall uncertainty.

As shown in the first part of this example, all the run to run variations are combined into one single component, which represents the overall experimental repeatability as shown in the cause and effect diagram (Figure A3.5).

The uncertainty components are quantified in the following way:

Mass mKHP

Linearity: 0.15/√3 = 0.087 mg
⇒ u(m_KHP) = √(2 × 0.087²) = 0.12 mg

Purity PKHP

Purity: 0.0005/√3 = 0.00029


Volume VT2

calibration: 0.03/√6 = 0.012 ml
temperature: 15 × 2.1×10⁻⁴ × 4/√3 = 0.007 ml
⇒ u(V_T2) = √(0.012² + 0.007²) = 0.014 ml

Repeatability

The quality log of the triple determination shows a mean long term standard deviation of the experiment of 0.001 (as RSD). It is not recommended to use the actual standard deviation obtained from the three determinations, because this value has itself an uncertainty of 52%. The standard deviation of 0.001 is divided by the square root of 3 to obtain the standard uncertainty of the triple determination (three independent measurements):

Rep = 0.001/√3 = 0.00058 (as RSD)

Volume VHCl

calibration: 0.02/√6 = 0.008 ml
temperature: 15 × 2.1×10⁻⁴ × 4/√3 = 0.007 ml
⇒ u(V_HCl) = √(0.008² + 0.007²) = 0.01 ml

Molar mass MKHP

u(M_KHP) = 0.0038 g mol-1

Volume VT1

calibration: 0.03/√6 = 0.012 ml
temperature: 19 × 2.1×10⁻⁴ × 4/√3 = 0.009 ml
⇒ u(V_T1) = √(0.012² + 0.009²) = 0.015 ml

All the values of the uncertainty components are summarised in Table A3.4. The combined standard uncertainty is 0.00016 mol l-1, which is a very modest reduction due to the triple determination. The comparison of the uncertainty contributions in the histogram, shown in Figure A3.7, highlights some of the reasons for that result. Though the repeatability contribution is much reduced, the volumetric uncertainty contributions remain, limiting the improvement.

Figure A3.7: Replicated acid-base titration values and uncertainties
[histogram of |u(y,xi)| in mmol l-1 comparing the replicated and single-run contributions for c(HCl), Repeatability, m(KHP), P(KHP), V(T2), V(T1), M(KHP) and V(HCl)]

Table A3.4: Replicated acid-base titration values and uncertainties

       Description                            Value x            Standard uncertainty u(x)   Relative standard uncertainty u(x)/x
Rep    Repeatability of the determination     1.0                0.00058                     0.00058
mKHP   Mass of KHP                            0.3888 g           0.00013 g                   0.00033
PKHP   Purity of KHP                          1.0                0.00029                     0.00029
VT2    Volume of NaOH for HCl titration       14.90 ml           0.014 ml                    0.00094
VT1    Volume of NaOH for KHP titration       18.65 ml           0.015 ml                    0.0008
MKHP   Molar mass of KHP                      204.2212 g mol-1   0.0038 g mol-1              0.000019
VHCl   HCl aliquot for NaOH titration         15 ml              0.01 ml                     0.00067
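As a cross-check (illustrative only, not part of the guide), the combined standard uncertainty for the triple determination can be recomputed from the relative uncertainties in Table A3.4; the concentration value used below is the approximate result of the replicated determination:

```python
import math

c_HCl = 0.1014  # mol/l, approximate replicated result (assumed for illustration)

# relative standard uncertainties u(x)/x from Table A3.4
rel = [0.00058, 0.00033, 0.00029, 0.00094, 0.0008, 0.000019, 0.00067]

u_rel = math.sqrt(sum(v ** 2 for v in rel))
print(f"u(c)/c = {u_rel:.5f}")                 # about 0.0016
print(f"u(c)   = {u_rel * c_HCl:.5f} mol/l")   # about 0.00016 mol/l
```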


Example A4: Uncertainty Estimation from In-House Validation Studies. Determination of Organophosphorus Pesticides in Bread.

Summary

Goal

The amount of an organophosphorus pesticide residue in bread is determined employing an extraction and a GC procedure.

Measurement procedure

The stages needed to determine the amount of organophosphorus pesticide residue are shown in Figure A4.1.

Measurand:

P_op = (I_op × c_ref × V_op) / (I_ref × Rec × m_sample × F_hom)    mg kg-1

where
P_op : Level of pesticide in the sample [mg kg-1]
I_op : Peak intensity of the sample extract
c_ref : Mass concentration of the reference standard [µg ml-1]
V_op : Final volume of the extract [ml]
I_ref : Peak intensity of the reference standard
Rec : Recovery
m_sample : Mass of the investigated sub-sample [g]
F_hom : Correction factor for sample inhomogeneity

Identification of the uncertainty sources:

The relevant uncertainty sources are shown in the cause and effect diagram in Figure A4.2.

Quantification of the uncertainty components:

Based on in-house validation data, the three major contributions are listed in Table A4.1 and shown diagrammatically in Figure A4.3 (values are from Table A4.5).

Figure A4.1: Organophosphorus pesticides analysis
[flow chart: Homogenise → Extraction → Clean-up → 'Bulk up' → GC determination → Result, with preparation of the calibration standard and GC calibration feeding the determination]

Table A4.1: Uncertainties in pesticide analysis

Description                       Value x   Standard uncertainty u(x)   Relative standard uncertainty u(x)/x   Comments
Repeatability (1)                 1.0       0.27                        0.27                                   Based on duplicate tests of different types of samples
Bias (Rec) (2)                    0.9       0.043                       0.048                                  Spiked samples
Other sources (3) (Homogeneity)   1.0       0.2                         0.2                                    Estimation based on model assumptions
Pop                               —         —                           0.34                                   Relative standard uncertainty


Figure A4.2: Uncertainty sources in pesticide analysis
[cause and effect diagram: main branches I(op), c(ref), V(op), I(ref), m(sample), Recovery, F(hom) and Repeatability feeding P(op), each carrying calibration, temperature, linearity and dilution sub-branches as appropriate]

Figure A4.3: Uncertainties in pesticide analysis
[histogram of |u(y,xi)| in mg kg-1 for P(op), Repeatability, Bias and Homogeneity]

The values of u(y,xi) = (∂y/∂xi)·u(xi) are taken from Table A4.5.


Example A4: Determination of organophosphorus pesticides in bread. Detailed discussion.

A4.1 Introduction

This example illustrates the way in which in-house validation data can be used to quantify the measurement uncertainty. The aim of the measurement is to determine the amount of an organophosphorus pesticide residue in bread. The validation scheme and experiments establish traceability by measurements on spiked samples. It is assumed that the uncertainty due to any difference in response of the measurement to the spike and to the analyte in the sample is small compared with the total uncertainty on the result.

A4.2 Step 1: Specification

The specification of the measurand for more extensive analytical methods is best done by a comprehensive description of the different stages of the analytical method and by providing the equation of the measurand.

Procedure

The measurement procedure is illustrated schematically in Figure A4.4. The separate stages are:

i) Homogenisation: The complete sample is divided into small (approx. 2 cm) fragments, a random selection is made of about 15 of these, and the sub-sample homogenised. Where extreme inhomogeneity is suspected, proportional sampling is used before blending.

ii) Weighing of the sub-sample for analysis gives mass m_sample.

iii) Extraction: Quantitative extraction of the analyte with organic solvent, decanting and drying through a sodium sulphate column, and concentration of the extract using a Kuderna-Danish apparatus.

iv) Liquid-liquid extraction:

v) Acetonitrile/hexane liquid partition, washing the acetonitrile extract with hexane, drying the hexane layer through a sodium sulphate column.

vi) Concentration of the washed extract by gas blow-down of the extract to near dryness.

vii) Dilution to standard volume V_op (approx. 2 ml) in a 10 ml graduated tube.

viii) Measurement: Injection and GC measurement of 5 µl of sample extract to give the peak intensity I_op.

ix) Preparation of an approximately 5 µg ml-1 standard (actual mass concentration c_ref).

x) GC calibration using the prepared standard, and injection and GC measurement of 5 µl of the standard to give a reference peak intensity I_ref.

Figure A4.4: Organophosphorus pesticides analysis
[flow chart as Figure A4.1: Homogenise → Extraction → Clean-up → 'Bulk up' → GC determination → Result, with preparation of the calibration standard and GC calibration feeding the determination]


Calculation

The mass concentration c_op in the final sample is given by

c_op = c_ref × I_op / I_ref    µg ml-1

and the estimate P_op of the level of pesticide in the bulk sample (in mg kg-1) is given by

P_op = (c_op × V_op) / (Rec × m_sample)    mg kg-1

or, substituting for c_op,

P_op = (I_op × c_ref × V_op) / (I_ref × Rec × m_sample)    mg kg-1

where
P_op : Level of pesticide in the sample [mg kg-1]
I_op : Peak intensity of the sample extract
c_ref : Mass concentration of the reference standard [µg ml-1]
V_op : Final volume of the extract [ml]
I_ref : Peak intensity of the reference standard
Rec : Recovery
m_sample : Mass of the investigated sub-sample [g]

Scope

The analytical method is applicable to a small range of chemically similar pesticides at levels between 0.01 and 2 mg kg-1, with different kinds of bread as matrix.

A4.3 Step 2: Identifying and analysing uncertainty sources

The identification of all relevant uncertainty sources for such a complex analytical procedure is best done by drafting a cause and effect diagram. The parameters in the equation of the measurand are represented by the main branches of the diagram. Further factors are added to the diagram, considering each step in the analytical procedure (A4.2), until the contributory factors become sufficiently remote.

The sample inhomogeneity is not a parameter in the original equation of the measurand, but it appears to be a significant effect in the analytical procedure. A new branch, F(hom), representing the sample inhomogeneity is accordingly added to the cause and effect diagram (Figure A4.5).

Finally, the uncertainty branch due to the inhomogeneity of the sample has to be included in the calculation of the measurand. To show the effect of uncertainties arising from that source clearly, it is useful to write

Figure A4.5: Cause and effect diagram with added main branch for sample inhomogeneity
[as Figure A4.2, with precision and calibration factors on each main branch and the new F(hom) branch added]


P_op = F_hom × (I_op × c_ref × V_op) / (I_ref × Rec × m_sample)    [mg kg-1]

where F_hom is a correction factor assumed to be unity in the original calculation. This makes it clear that the uncertainties in the correction factor must be included in the estimation of the overall uncertainty. The final expression also shows how the uncertainty will apply.

NOTE: Correction factors: This approach is quite general, and may be very valuable in highlighting hidden assumptions. In principle, every measurement has associated with it such correction factors, which are normally assumed unity. For example, the uncertainty in c_op can be expressed as a standard uncertainty for c_op, or as the standard uncertainty which represents the uncertainty in a correction factor. In the latter case, the value is identically the uncertainty for c_op expressed as a relative standard deviation.

A4.4 Step 3: Quantifying uncertainty components

In accordance with section 7.7., the quantification of the different uncertainty components utilises data from the in-house development and validation studies:

• The best available estimate of the overall run to run variation of the analytical process.

• The best possible estimation of the overall bias (Rec) and its uncertainty.

• Quantification of any uncertainties associated with effects incompletely accounted for in the overall performance studies.

Some rearrangement of the cause and effect diagram is useful to make the relationship and coverage of these input data clearer (Figure A4.6).

NOTE: In normal use, samples are run in small batches, each batch including a calibration set, a recovery check sample to control bias and a random duplicate to check precision. Corrective action is taken if these checks show significant departures from the performance found during validation. This basic QC fulfils the main requirements for use of the validation data in uncertainty estimation for routine testing.

Having inserted the extra effect 'Repeatability' into the cause and effect diagram, the implied model for calculating P_op becomes

P_op = F_hom × (I_op × c_ref × V_op) / (I_ref × Rec × m_sample) × F_Rep    mg kg-1        Eq. A4.1

That is, the repeatability is treated as a multiplicative factor F_Rep like the homogeneity. This form is chosen for convenience in calculation, as will be seen below.

Figure A4.6: Cause and effect diagram after rearrangement to accommodate the data of the validation study
[as Figure A4.5, with the run-to-run precision effects gathered into a single Repeatability branch alongside I(op), c(ref), V(op), I(ref), m(sample), Recovery and F(hom)]


The evaluation of the different effects is now considered.

1. Precision study

The overall run to run variation (precision) of the analytical procedure was evaluated with a number of duplicate tests (same homogenised sample, complete extraction/determination procedure) for typical organophosphorus pesticides found in different bread samples. The results are collected in Table A4.2.

The normalised difference data (the difference divided by the mean) provide a measure of the overall run to run variability. To obtain the estimated relative standard uncertainty for single determinations, the standard deviation of the normalised differences is taken and divided by √2, to correct from a standard deviation for pairwise differences to the standard uncertainty for the single values. This gives a value for the standard uncertainty due to run to run variation of the overall analytical process, including run to run recovery variation but excluding homogeneity effects, of 0.382/√2 = 0.27.

NOTE: At first sight, it may seem that duplicate tests provide insufficient degrees of freedom. But it is not the goal to obtain very accurate numbers for the precision of the analytical process for one specific pesticide in one special kind of bread. It is more important in this study to test a wide variety of different materials and sample levels, giving a representative selection of typical organophosphorus pesticides. This is done in the most efficient way by duplicate tests on many materials, providing (for the repeatability estimate) approximately one degree of freedom for each material studied in duplicate.

2. Bias study

The bias of the analytical procedure was investigated during the in-house validation study using spiked samples (homogenised samples were split and one portion spiked). Table A4.3 collects the results of a long term study of spiked samples of various types.

The relevant line (marked with grey colour) is the "bread" entry, which shows a mean recovery for forty-two samples of 90%, with a standard deviation (s) of 28%. The standard uncertainty was calculated as the standard deviation of the mean: u(Rec) = 0.28/√42 = 0.0432.

A significance test is used to determine whether the mean recovery is significantly different from

Table A4.2: Results of duplicate pesticide analysis

Residue               D1 [mg kg-1]   D2 [mg kg-1]   Mean [mg kg-1]   Difference D1−D2   Difference/mean
Malathion             1.30           1.30           1.30             0.00               0.000
Malathion             1.30           0.90           1.10             0.40               0.364
Malathion             0.57           0.53           0.55             0.04               0.073
Malathion             0.16           0.26           0.21             -0.10              -0.476
Malathion             0.65           0.58           0.62             0.07               0.114
Pirimiphos Methyl     0.04           0.04           0.04             0.00               0.000
Chlorpyrifos Methyl   0.08           0.09           0.085            -0.01              -0.118
Pirimiphos Methyl     0.02           0.02           0.02             0.00               0.000
Chlorpyrifos Methyl   0.01           0.02           0.015            -0.01              -0.667
Pirimiphos Methyl     0.02           0.01           0.015            0.01               0.667
Chlorpyrifos Methyl   0.03           0.02           0.025            0.01               0.400
Chlorpyrifos Methyl   0.04           0.06           0.05             -0.02              -0.400
Pirimiphos Methyl     0.07           0.08           0.075            -0.01              -0.133
Chlorpyrifos Methyl   0.01           0.01           0.01             0.00               0.000
Pirimiphos Methyl     0.06           0.03           0.045            0.03               0.667


1.0. The test statistic t is calculated using the following equation:

t = |1 − Rec| / u(Rec) = (1 − 0.9) / 0.0432 = 2.31

This value is compared with the 2-tailed critical value t_crit for n−1 degrees of freedom at 95% confidence (where n is the number of results used to estimate Rec). If t is greater than or equal to the critical value t_crit, then Rec is significantly different from 1.

t = 2.31 ≥ t_crit;41 ≅ 2.021

In this example a correction factor (1/Rec) is being applied, and therefore Rec is explicitly included in the calculation of the result.
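A short Python sketch of this significance test (illustrative only; SciPy is used here to obtain the two-tailed critical value instead of a printed t-table):

```python
import math
from scipy import stats   # only used for the critical value of the t-distribution

n, mean_rec, s = 42, 0.90, 0.28
u_rec = s / math.sqrt(n)                   # standard deviation of the mean, ~0.043

t = abs(1.0 - mean_rec) / u_rec            # test statistic, ~2.31
t_crit = stats.t.ppf(0.975, df=n - 1)      # two-tailed, 95% confidence, ~2.02

print(f"t = {t:.2f}, t_crit = {t_crit:.3f}, significant: {t >= t_crit}")
```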

3. Other sources of uncertainty

The cause and effect diagram in Figure A4.7 shows which other sources of uncertainty are (1) adequately covered by the precision data, (2) covered by the recovery data, or (3) have to be further examined and eventually considered in the calculation of the measurement uncertainty.

All balances and the important volumetric measuring devices are under regular control. Precision and recovery studies take into account the influence of the calibration of the different volumetric measuring devices, because during the investigation various volumetric flasks and pipettes have been used. The extensive variability studies, which lasted for more than half a year, also cover influences of the environmental temperature on the result. This leaves only the reference material purity, possible non-linearity in GC response (represented by the 'calibration' terms for I_ref and I_op in the diagram), and the sample homogeneity as additional components requiring study.

The purity of the reference standard is given by the manufacturer as 99.53% ±0.06%. The purity is potentially an additional uncertainty source, with a standard uncertainty of 0.0006/√3 = 0.00035 (rectangular distribution). But the contribution is so small (compared, for example, to the precision estimate) that it is clearly safe to neglect it.

Linearity of response to the relevant organophosphorus pesticides within the given concentration range is established during validation studies. In addition, with multi-level studies of the kind indicated in Table A4.2 and Table A4.3, non-linearity would contribute to the observed precision. No additional allowance is

Table A4.3: Results of pesticide recovery studies

Substrate                     Residue type   Conc. [mg kg-1]   N(1)   Mean(2) [%]   s(2) [%]
Waste Oil                     PCB            10.0              8      84            9
Butter                        OC             0.65              33     109           12
Compound Animal Feed I        OC             0.325             100    90            9
Animal & Vegetable Fats I     OC             0.33              34     102           24
Brassicas 1987                OC             0.32              32     104           18
Bread                         OP             0.13              42     90            28
Rusks                         OP             0.13              30     84            27
Meat & Bone Feeds             OC             0.325             8      95            12
Maize Gluten Feeds            OC             0.325             9      92            9
Rape Feed I                   OC             0.325             11     89            13
Wheat Feed I                  OC             0.325             25     88            9
Soya Feed I                   OC             0.325             13     85            19
Barley Feed I                 OC             0.325             9      84            22

(1) The number of experiments carried out.
(2) The mean and sample standard deviation s are given as percentage recoveries.


required. The in-house validation study has proven that this is not the case.

The homogeneity of the bread sub-sample is the last remaining other uncertainty source. No literature data were available on the distribution of trace organic components in bread products, despite an extensive literature search (at first sight this is surprising, but most food analysts attempt homogenisation rather than evaluate inhomogeneity separately). Nor was it practical to measure homogeneity directly. The contribution has therefore been estimated on the basis of the sampling method used.

To aid the estimation, a number of feasible pesticide residue distribution scenarios were considered, and a simple binomial statistical distribution was used to calculate the standard uncertainty for the total included in the analysed sample (see section A4.6). The scenarios, and the calculated relative standard uncertainties in the amount of pesticide in the final sample, were:

• Scenario (a): Residue distributed on the top surface only: 0.58.

• Scenario (b): Residue distributed evenly over the surface only: 0.20.

• Scenario (c): Residue distributed evenly through the sample, but reduced in concentration by evaporative loss or decomposition close to the surface: 0.05-0.10 (depending on the "surface layer" thickness).

Scenario (a) is specifically catered for by proportional sampling or complete homogenisation: it would arise in the case of decorative additions (whole grains) added to one surface. Scenario (b) is therefore considered the likely worst case. Scenario (c) is considered the most probable, but cannot be readily distinguished from (b). On this basis, the value of 0.20 was chosen.

NOTE: For more details on modelling inhomogeneity see the last section of this example.

A4.5 Step 4: Calculating the combined standard uncertainty

During the in-house validation study of the analytical procedure the repeatability, the bias and all other feasible uncertainty sources had been thoroughly investigated. Their values and uncertainties are collected in Table A4.4.

The relative values are combined because the model (equation A4.1) is entirely multiplicative:

Figure A4.7: Evaluation of other sources of uncertainty
[the rearranged cause and effect diagram of Figure A4.6, with each contribution annotated (1), (2) or (3) as below]

(1) Repeatability (F_Rep in equation A4.1), considered during the variability investigation of the analytical procedure.

(2) Considered during the bias study of the analytical procedure.

(3) To be considered during the evaluation of the other sources of uncertainty.


u_c(P_op)/P_op = √(0.27² + 0.048² + 0.2²) = 0.34

⇒ u_c(P_op) = 0.34 × P_op

The spreadsheet for this case takes the form shown in Table A4.5. Note that the spreadsheet calculates an absolute uncertainty (0.377) for a nominal corrected result of 1.1111, giving a relative value of 0.377/1.111 = 0.34.

The relative sizes of the three different contributions can be compared by employing a histogram. Figure A4.8 shows the values |u(y,xi)| taken from Table A4.5.

The repeatability is the largest contribution to the measurement uncertainty. Since this component is derived from the overall variability in the method, further experiments would be needed to show where improvements could be made. For example, the uncertainty could be reduced significantly by homogenising the whole loaf before taking a sample.

The expanded uncertainty U(P_op) is calculated by multiplying the combined standard uncertainty by a coverage factor of 2, to give:

U(P_op) = 0.34 × P_op × 2 = 0.68 × P_op
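For illustration only (not part of the guide), the combination of the three relative contributions and the expanded uncertainty can be written as:

```python
import math

# relative standard uncertainties from Table A4.4
rel = {"repeatability": 0.27, "bias (Rec)": 0.048, "homogeneity": 0.20}

u_rel = math.sqrt(sum(v ** 2 for v in rel.values()))   # combined relative standard uncertainty
U_rel = 2 * u_rel                                      # expanded uncertainty, coverage factor k = 2

print(f"u(Pop)/Pop = {u_rel:.2f}")    # about 0.34
print(f"U(Pop)/Pop = {U_rel:.2f}")    # about 0.68
```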

Table A4.4: Uncertainties in pesticide analysis

Description                       Value x   Standard uncertainty u(x)   Relative standard uncertainty u(x)/x   Remark
Repeatability (1)                 1.0       0.27                        0.27                                   Duplicate tests of different types of samples
Bias (Rec) (2)                    0.9       0.043                       0.048                                  Spiked samples
Other sources (3) (Homogeneity)   1.0       0.2                         0.2                                    Estimations founded on model assumptions
Pop                               —         —                           0.34                                   Relative standard uncertainty

Figure A4.8: Uncertainties in pesticide analysis
[histogram of |u(y,xi)| in mg kg-1 for P(op), Repeatability, Bias and Homogeneity]

The values of u(y,xi) = (∂y/∂xi)·u(xi) are taken from Table A4.5.


A4.6 Special aspect: Modelling inhomogeneity for organophosphorus pesticide uncertainty

Assuming that all of the material of interest in a sample can be extracted for analysis irrespective of its state, the worst case for inhomogeneity is the situation where some part or parts of a sample contain all of the substance of interest. A more general, but closely related, case is that in which two levels, say L1 and L2, of the material are present in different parts of the whole sample. The effect of such inhomogeneity in the case of random sub-sampling can be estimated using binomial statistics. The values required are the mean µ and the standard deviation σ of the amount of material in n equal portions selected randomly after separation.

These values are given by

µ = n·(p1·l1 + p2·l2) ⇒ µ = n·p1·(l1 − l2) + n·l2        [1]

σ² = n·p1·(1 − p1)·(l1 − l2)²                            [2]

where l1 and l2 are the amounts of substance in portions from regions of the sample containing total fractions L1 and L2, respectively, of the total amount X, and p1 and p2 are the probabilities of selecting portions from those regions (n must be small compared to the total number of portions from which the selection is made).

The figures shown above were calculated as follows, assuming that a typical sample loaf is approximately 12 × 12 × 24 cm, using a portion size of 2 × 2 × 2 cm (total of 432 portions) and assuming 15 such portions are selected at random and homogenised.
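Equations [1] and [2] are easily evaluated numerically. The following Python sketch (illustrative only, not part of the guide) implements them and reproduces the relative standard deviations quoted for the first two scenarios below, using the portion counts described in the text; the total amount X is set to an arbitrary value since only the ratio σ/µ matters:

```python
import math

def inhomogeneity_rsd(n, p1, l1, l2=0.0):
    """Mean, standard deviation and RSD of the analyte amount in n randomly selected
    portions, for two regions containing l1 and l2 per portion, with probability p1
    of drawing a portion from region 1 (equations [1] and [2])."""
    mu = n * p1 * (l1 - l2) + n * l2
    sigma = math.sqrt(n * p1 * (1 - p1) * (l1 - l2) ** 2)
    return mu, sigma, sigma / mu

X = 1.0   # total amount of pesticide in the loaf (arbitrary units)

# Scenario (a): residue on the top face only; 72 "top" portions out of 432
print(inhomogeneity_rsd(n=15, p1=72 / 432, l1=X / 72))      # RSD ~0.58

# Scenario (b): residue spread over the whole surface; 272 outer portions out of 432
print(inhomogeneity_rsd(n=15, p1=272 / 432, l1=X / 272))    # RSD ~0.20
```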

Scenario (a)

The material is confined to a single large face (the top) of the sample. L2 is therefore zero, as is l2, and L1 = 1. Each portion including part of the top surface will contain an amount l1 of the material. For the dimensions given, clearly one in six (2/12) of the portions meets this criterion; p1 is

Table A4.5: Uncertainties in pesticide analysis

    A                  B        C               D         E
1                               Repeatability   Bias      Homogeneity
2   value                       1.0             0.9       1.0
3   uncertainty                 0.27            0.043     0.2
4
5   Repeatability      1.0      1.27            1.0       1.0
6   Bias               0.9      0.9             0.943     0.9
7   Homogeneity        1.0      1.0             1.0       1.2
8
9   Pop                1.1111   1.4111          1.0604    1.333
10  u(y, xi)                    0.30            -0.0507   0.222
11  u(y)², u(y, xi)²   0.1420   0.09            0.00257   0.04938
12
13  u(Pop)             0.377    (0.377/1.111 = 0.34 as a relative standard uncertainty)

The values of the parameters are entered in the second row from C2 to E2. Their standard uncertainties are in the row below (C3:E3). The spreadsheet copies the values from C2:E2 into the second column from B5 to B7. The result using these values is given in B9 (=B5×B7/B6, based on equation A4.1). C5 shows the value of the repeatability from C2 plus its uncertainty given in C3. The result of the calculation using the values C5:C7 is given in C9. The columns D and E follow a similar procedure. The values shown in row 10 (C10:E10) are the differences of the row (C9:E9) minus the value given in B9. In row 11 (C11:E11) the values of row 10 (C10:E10) are squared and summed to give the value shown in B11. B13 gives the combined standard uncertainty, which is the square root of B11.


therefore 1/6, or 0.167, and l1 is X/72 (i.e. there are 72 "top" portions).

This gives

µ = 15 × 0.167 × l1 = 2.5·l1

σ² = 15 × 0.167 × (1 − 0.17) × l1² = 2.08·l1²

⇒ σ = √(2.08·l1²) = 1.44·l1

⇒ RSD = σ/µ = 0.58

NOTE: To calculate the level X in the entire sample, µ is multiplied back up by 432/15, giving a mean estimate of X of

2.5·l1 × 432/15 = 72·l1 = 72 × X/72 = X

This result is typical of random sampling; the expectation value of the mean is exactly the mean value of the population. For random sampling, there is thus no contribution to overall uncertainty other than the run to run variability, expressed as σ or RSD here.

Scenario (b)

The material is distributed evenly over the whole surface. Following similar arguments, and assuming that all surface portions contain the same amount l1 of material, l2 is again zero, and p1 is, using the dimensions above, given by

p1 = [(12×12×24) − (8×8×20)] / (12×12×24) = 0.63

i.e. p1 is that fraction of the sample in the "outer" 2 cm. Using the same assumptions, then l1 = X/272.

NOTE: Note the change in value from scenario (a).

This gives:

µ = 15 × 0.63 × l1 = 9.5·l1

σ² = 15 × 0.63 × (1 − 0.63) × l1² = 3.5·l1²

⇒ σ = √(3.5·l1²) = 1.87·l1

⇒ RSD = σ/µ = 0.2

Scenario (c)

The amount of material near the surface is reduced to zero by evaporative or other loss. This case can be examined most simply by considering it as the inverse of scenario (b), with p1 = 0.37 and l1 equal to X/160. This gives

µ = 15 × 0.37 × l1 = 5.6·l1

σ² = 15 × 0.37 × (1 − 0.37) × l1² = 3.5·l1²

⇒ σ = √(3.5·l1²) = 1.87·l1

⇒ RSD = σ/µ = 0.33

However, if the loss extends to a depth less than the size of the portion removed, as would be expected, each portion contains some material; l1 and l2 would therefore both be non-zero. Taking the case where all outer portions contain 50% "centre" and 50% "outer" parts of the sample,

l1 = 2·l2 ⇒ l1 = X/296

µ = 15 × 0.37 × (l1 − l2) + 15·l2 = 15 × 0.37 × l2 + 15·l2 = 20.6·l2

σ² = 15 × 0.37 × (1 − 0.37) × (l1 − l2)² = 3.5·l2²

giving an RSD of 1.87/20.6 = 0.09.

In the current model, this corresponds to a depth of 1 cm through which material is lost. Examination of typical bread samples shows a crust thickness of typically 1 cm or less, and taking this to be the depth to which the material of interest is lost (crust formation itself inhibits loss below this depth), it follows that realistic variants on scenario (c) will give values of σ/µ not above 0.09.

NOTE: In this case, the reduction in uncertainty arises because the inhomogeneity is on a smaller scale than the portion taken for homogenisation. In general, this will lead to a reduced contribution to uncertainty. It follows that no additional modelling need be done for cases where larger numbers of small inclusions (such as grains incorporated in the bulk of a loaf) contain disproportionate amounts of the material of interest. Provided that the probability of such an inclusion being incorporated into the portions taken for homogenisation is large enough, the contribution to uncertainty will not exceed any already calculated in the scenarios above.


Example A5: Determination of Cadmium Release from Ceramic Ware by Atomic Absorption Spectrometry

Summary

Goal

The amount of released cadmium from ceramic ware is determined using atomic absorption spectrometry. The procedure employed is the empirical method BS 6748.

Measurement procedure

The different stages in determining the amount of cadmium released from ceramic ware are given in the flow chart (Figure A5.1).

Measurand:

r = (c0 · VL · d / aV) · facid · ftime · ftemp    mg dm-2

The variables are described in Table A5.1.

Identification of the uncertainty sources:

The relevant uncertainty sources are shown in the cause and effect diagram at Figure A5.2.

Quantification of the uncertainty sources:

The sizes of the different contributions are given in Table A5.1 and shown diagrammatically in Figure A5.3.

Figure A5.1: Extractable metal procedure
[flow chart: Preparation → Surface conditioning → Fill with 4% v/v acetic acid → Leaching → Homogenise leachate → AAS determination → Result, with preparation of calibration standards and AAS calibration feeding the determination]

Table A5.1: Uncertainties in extractable cadmium determination

        Description                                      Value x          Standard uncertainty u(x)   Relative standard uncertainty u(x)/x
c0      Content of cadmium in the extraction solution    0.26 mg l-1      0.018 mg l-1                0.069
d       Dilution factor (if used)                        1.0 (Note 1)     0 (Note 1)                  0 (Note 1)
VL      Volume of the leachate                           0.332 l          0.0018 l                    0.0054
aV      Surface area of the vessel                       2.37 dm²         0.06 dm²                    0.025
facid   Influence of the acid concentration              1.0              0.0008                      0.0008
ftime   Influence of the duration                        1.0              0.001                       0.001
ftemp   Influence of temperature                         1.0              0.06                        0.06
r       Mass of cadmium leached per unit area            0.036 mg dm-2    0.0033 mg dm-2              0.09

Note 1: No dilution was applied in the present example; d is accordingly exactly 1.0.


Figure A5.2: Uncertainty sources in leachable cadmium determination
[cause and effect diagram: branches c(0), V(L), a(V) and d feeding the result r; c(0) carries reading, calibration-curve, f(acid), f(time) and f(temperature) effects; V(L) carries filling, temperature, reading and calibration; a(V) carries the two length measurements and the area estimate]

Figure A5.3: Uncertainties in leachable Cd determination
[histogram of |u(y,xi)| in mg dm-2 (×10⁻⁴) for r, c(0), V(L), a(V), f(acid), f(time) and f(temp)]

The values of u(y,xi) = (∂y/∂xi)·u(xi) are taken from Table A5.4.


Example A5: Determination of cadmium release from ceramic ware by atomic absorption spectrometry. Detailed discussion.

A5.1 Introduction

This example demonstrates the uncertainty evaluation of an empirical method; in this case (BS 6748), the determination of metal release from ceramic ware, glassware, glass-ceramic ware and vitreous enamel ware. The test is used to determine by atomic absorption spectroscopy (AAS) the amount of lead or cadmium leached from the surface of ceramic ware by a 4% (v/v) aqueous solution of acetic acid. The results obtained with this analytical method are only expected to be comparable with other results obtained by the same method.

A5.2 Step 1: Specification

The complete procedure is given in British Standard BS 6748:1986 "Limits of metal release from ceramic ware, glass ware, glass ceramic ware and vitreous enamel ware", and this forms the specification for the measurand. Only a general description is given here.

A5.2.1 Apparatus and Reagent specifications

The reagent specifications affecting the uncertainty study are:

• A freshly prepared solution of 4% v/v glacial acetic acid in water, made up by dilution of 40 ml glacial acetic acid to 1 l.

• A (1000 ±1) mg l-1 standard lead solution in 4% (v/v) acetic acid.

• A (500 ±0.5) mg l-1 standard cadmium solution in 4% (v/v) acetic acid.

Laboratory glassware is required to be of at least class B and incapable of releasing detectable levels of lead or cadmium in 4% acetic acid during the test procedure. The atomic absorption spectrophotometer is required to have detection limits of at most 0.2 mg l-1 for lead and 0.02 mg l-1 for cadmium.

A5.2.2 Procedure

The general procedure is illustrated schematically in Figure A5.4. The specifications affecting the uncertainty estimation are:

i) The sample is conditioned to (22 ±2) °C. Where appropriate ('category 1' articles), the surface area of the article is determined. For this example, a surface area of 2.37 dm² was obtained (Table A5.1 and Table A5.3 include the experimental values for the example).

ii) The conditioned sample is filled with 4% v/v acid solution at (22 ±2) °C to within 1 mm from the overflow point, measured from the upper rim of the sample, or to within 6 mm from the extreme edge of a sample with a flat or sloping rim.

iii) The quantity of 4% v/v acetic acid required or used is recorded to an accuracy of ±2% (in this example, 332 ml of acetic acid was used).

iv) The sample is allowed to stand at (22 ±2) °C for 24 hours (in darkness if cadmium is determined), with due precaution to prevent evaporation loss.

v) After standing, the solution is stirred sufficiently for homogenisation, and a test portion is removed, diluted by a factor d if necessary, and analysed by AA, using

Figure A5.4: Extractable metal procedure
[flow chart as Figure A5.1: Preparation → Surface conditioning → Fill with 4% v/v acetic acid → Leaching → Homogenise leachate → AAS determination → Result, with preparation of calibration standards and AAS calibration feeding the determination]


appropriate wavelengths and, in this example, a least squares calibration curve.

vi) The result is calculated (see below) and reported as the amount of lead and/or cadmium in the total volume of the extracting solution, expressed in milligrams of lead or cadmium per square decimetre of surface area for category 1 articles, or milligrams of lead or cadmium per litre of the volume for category 2 and 3 articles.

NOTE: Complete copies of BS 6748:1986 can be obtained by post from BSI Customer Services, 389 Chiswick High Road, London W4 4AL, England. Tel. +44 (0)208 996 9001.

A5.3 Step 2: Identifying and analysing uncertainty sources

Step 1 describes an 'empirical method'. If such a method is used within its defined field of application, the bias of the method is defined as zero. Therefore bias estimation relates to the laboratory performance and not to the bias intrinsic to the method. Because no reference material certified for this standardised method is available, overall control of bias is related to the control of the method parameters influencing the result. Such influence quantities are time, temperature, mass and volumes, etc.

The concentration c0 of lead or cadmium in the acetic acid after dilution is determined by atomic absorption spectrometry and calculated using

c0 = (A0 − B0) / B1    mg l-1

where
c0 : concentration of lead or cadmium in the extraction solution [mg l-1]
A0 : absorbance of the metal in the sample extract
B0 : intercept of the calibration curve
B1 : slope of the calibration curve

For vessels that can be filled, the result r' is then

r' = c0 · d

where d is the dilution factor employed. Otherwise, the empirical method calls for the result to be expressed as the mass r of lead or cadmium leached per unit area. r is given by

r = (c0 · VL · d) / aV = (VL · d · (A0 − B0)) / (aV · B1)    mg dm-2

where the additional parameters are
r  : mass of Cd or Pb leached per unit area [mg dm-2]
VL : the volume of the leachate [l]
aV : the surface area of the vessel [dm²]
d  : factor by which the sample was diluted

The first part of the above equation of the measurand is used to draft the basic cause and effect diagram (Figure A5.5).

Figure A5.5: Initial cause and effect diagram
[branches c(0), V(L), a(V) and d feeding the result r, with reading and calibration-curve effects on c(0), filling, temperature, reading and calibration effects on V(L), and length and area effects on a(V)]

There is no reference material certified for this empirical method with which to assess the laboratory performance. All the feasible influence quantities, such as temperature, time of the leaching process and acid concentration, therefore have to be considered. To accommodate the additional influence quantities, the equation is expanded by the respective correction factors, leading to

r = (c0 · VL · d / aV) · facid · ftime · ftemp

These additional factors are also included in the revised cause and effect diagram (Figure A5.6). They are shown there as effects on c0.

NOTE: The latitude in temperature permitted by the standard is a case of an uncertainty arising as a result of incomplete specification of the measurand. Taking the effect of temperature into account allows estimation of the range of results which could be reported whilst complying with the empirical method as well as is practically possible. Note particularly that variations in the result caused by different operating temperatures within the range


cannot reasonably be described as bias, as they represent results obtained in accordance with the specification.

A5.4 Step 3: Quantifying uncertainty sources

The aim of this step is to quantify the uncertainty arising from each of the previously identified sources. This can be done either by using experimental data or from well based assumptions.

Dilution factor d

For the current example, no dilution of the leaching solution is necessary, therefore no uncertainty contribution has to be accounted for.

Volume VL

Filling: The empirical method requires the vessel to be filled 'to within 1 mm from the brim'. For a typical drinking or kitchen utensil, 1 mm will represent about 1% of the height of the vessel. The vessel will therefore be 99.5 ±0.5% filled (i.e. VL will be approximately 0.995 ±0.005 of the vessel's volume).

Temperature: The temperature of the acetic acid has to be 22 ±2 °C. This temperature range leads to an uncertainty in the determined volume, due to a considerably larger volume expansion of the liquid compared with the vessel. The standard uncertainty of a volume of 332 ml, assuming a rectangular temperature distribution, is

2.1×10⁻⁴ × 332 × 2/√3 = 0.08 ml

Reading: The volume VL used is to be recorded to within 2%; in practice, use of a measuring cylinder allows an inaccuracy of about 1% (i.e. 0.01·VL). The standard uncertainty is calculated assuming a triangular distribution.

Calibration: The volume is calibrated according to the manufacturer's specification, within the range of ±2.5 ml for a 500 ml measuring cylinder. The standard uncertainty is obtained assuming a triangular distribution.

For this example a volume of 332 ml is used and the four uncertainty components are combined accordingly:

u(VL) = √[ ((0.005 × 332)/√6)² + (0.08)² + ((0.01 × 332)/√6)² + (2.5/√6)² ] = 1.83 ml
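This combination can be reproduced with a short Python calculation (illustrative only, not part of the guide); the divisors √6 reflect the triangular distributions assumed above, and the temperature term of 0.08 ml is already a standard uncertainty:

```python
import math

V = 332.0  # ml, volume of leachate used

contributions_ml = {
    "filling":     0.005 * V / math.sqrt(6),   # vessel 99.5 +/- 0.5 % filled
    "temperature": 0.08,                        # already expressed as a standard uncertainty
    "reading":     0.01 * V / math.sqrt(6),    # ~1 % reading inaccuracy, triangular
    "calibration": 2.5 / math.sqrt(6),         # +/- 2.5 ml cylinder tolerance, triangular
}

u_VL = math.sqrt(sum(c ** 2 for c in contributions_ml.values()))
print(f"u(VL) = {u_VL:.2f} ml")   # about 1.8 ml (0.0018 l, as in Table A5.1)
```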

Cadmium concentration c0

The amount of leached cadmium is calculated using a manually prepared calibration curve. For this purpose five calibration standards, with concentrations of 0.1 mg l-1, 0.3 mg l-1, 0.5 mg l-1, 0.7 mg l-1 and 0.9 mg l-1, were prepared from a 500 ±0.5 mg l-1 cadmium reference standard. The linear least squares fitting procedure used

Figure A5.6: Cause and effect diagram with added hidden assumptions (correction factors)
[as Figure A5.5, with f(acid), f(time) and f(temperature) added as effects on c(0)]


assumes that the uncertainties of the values of the abscissa are considerably smaller than the uncertainty on the values of the ordinate. Therefore the usual uncertainty calculation procedures for c0 only reflect the uncertainty in the absorbance, and not the uncertainty of the calibration standards, nor the inevitable correlations induced by successive dilution from the same stock. In this case, however, the uncertainty of the calibration standards is sufficiently small to be neglected.

The five calibration standards were measured three times each, providing the results in Table A5.2.

The calibration curve is given by

A_j = B1 · c_i + B0

where
A_j : jth measurement of the absorbance of the ith calibration standard
c_i : concentration of the ith calibration standard
B1  : slope
B0  : intercept

and the results of the linear least squares fit are

      Value    Standard deviation
B1    0.2410   0.0050
B0    0.0087   0.0029

with a correlation coefficient r of 0.997. The fitted line is shown in Figure A5.7. The residual standard deviation S is 0.005486.

The actual leach solution was measured twice, leading to a concentration c0 of 0.26 mg l-1. The calculation of the uncertainty u(c0) associated with the linear least squares fitting procedure is described in detail in Appendix E3. Therefore only a short description of the different calculation steps is given here.

u(c0) is given by

u(c0) = (S/B1) · √( 1/p + 1/n + (c0 − c̄)²/Sxx )

      = (0.005486/0.241) · √( 1/2 + 1/15 + (0.26 − 0.5)²/1.2 )

⇒ u(c0) = 0.018 mg l-1

with the residual standard deviation S given by

Table A5.2: Calibration results

Concentration [mg l-1]    1       2       3
0.1                       0.028   0.029   0.029
0.3                       0.084   0.083   0.081
0.5                       0.135   0.131   0.133
0.7                       0.180   0.181   0.183
0.9                       0.215   0.230   0.216

Figure A5.7: Linear least squares fit and uncertainty interval for duplicate determinations
[plot of absorbance against cadmium concentration (0 to 1.0 mg/l) showing the fitted calibration line]


S = √( Σ_{j=1..n} [A_j − (B0 + B1·c_j)]² / (n − 2) ) = 0.005486

and

Sxx = Σ_{j=1..n} (c_j − c̄)² = 1.2

where
B1  : slope
p   : number of measurements to determine c0
n   : number of measurements for the calibration
c0  : determined cadmium concentration of the leached solution
c̄   : mean value of the different calibration standards (n measurements)
i   : index for the number of calibration standards
j   : index for the number of measurements to obtain the calibration curve
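The calibration fit and the u(c0) formula above can be reproduced with a short NumPy sketch (illustrative only, not part of the guide), using the data of Table A5.2 and p = 2 measurements of the leach solution:

```python
import numpy as np

# Table A5.2: five standards, three absorbance readings each
conc = np.repeat([0.1, 0.3, 0.5, 0.7, 0.9], 3)
absb = np.array([0.028, 0.029, 0.029, 0.084, 0.083, 0.081,
                 0.135, 0.131, 0.133, 0.180, 0.181, 0.183,
                 0.215, 0.230, 0.216])

B1, B0 = np.polyfit(conc, absb, 1)              # slope and intercept
n = len(conc)                                   # 15 calibration measurements
S = np.sqrt(np.sum((absb - (B0 + B1 * conc)) ** 2) / (n - 2))   # residual standard deviation
Sxx = np.sum((conc - conc.mean()) ** 2)         # = 1.2

p, c0 = 2, 0.26                                 # duplicate measurement of the leachate
u_c0 = (S / B1) * np.sqrt(1 / p + 1 / n + (c0 - conc.mean()) ** 2 / Sxx)

print(f"B1 = {B1:.4f}, B0 = {B0:.4f}, S = {S:.6f}, u(c0) = {u_c0:.3f} mg/l")
# expected: B1 ~0.241, B0 ~0.0087, S ~0.005486, u(c0) ~0.018 mg/l
```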

Area aV

Length measurement: The total surface area of the sample vessel was calculated, from measured dimensions, to be 2.37 dm². Since the item is approximately cylindrical but not perfectly regular, measurements are estimated to be within 2 mm at 95% confidence. Typical dimensions are between 1.0 dm and 2.0 dm, leading to an estimated dimensional measurement uncertainty of 1 mm (after dividing the 95% figure by 1.96). Area measurements typically require two length measurements, height and width respectively (i.e. 1.45 dm and 1.64 dm).

Area: Since the item does not have a perfect geometric shape, there is also an uncertainty in any area calculation; in this example, this is estimated to contribute an additional 5% at 95% confidence.

The uncertainty contributions of the length measurements and of the area estimate itself are combined in the usual way:

u(aV) = √(0.01² + 0.01² + (0.05 × 2.37/1.96)²) = 0.06 dm²

Temperature effect ftemp

A number of studies of the effect of temperature on metal release from ceramic ware have been undertaken (1-5). In general, the temperature effect is substantial and a near-exponential increase in metal release with temperature is observed until limiting values are reached. Only one study (1) has given an indication of effects in the range of 20-25 °C. From the graphical information presented, the change in metal release with temperature near 25 °C is approximately linear, with a gradient of approximately 5 % °C-1. For the ±2 °C range allowed by the empirical method this leads to a factor ftemp of 1±0.1. Converting this to a standard uncertainty, assuming a rectangular distribution, gives:

$$u(f_{temp}) = \frac{0.1}{\sqrt{3}} = 0.06$$

Time effect ftime

For a relatively slow process such as leaching, the amount leached will be approximately proportional to time for small changes in the time. Krinitz and Franco (1) found a mean change in concentration over the last six hours of leaching of approximately 1.8 mg l-1 in 86 mg l-1, that is, about 0.3 %/h. For a time of (24±0.5) h, c0 will therefore need correction by a factor ftime of 1±(0.5×0.003) = 1±0.0015. This is a rectangular distribution, leading to the standard uncertainty

$$u(f_{time}) = \frac{0.0015}{\sqrt{3}} \approx 0.001$$

Acid concentration facid

One study of the effect of acid concentration on lead release showed that changing the concentration from 4 to 5 % v/v increased the lead released from a particular ceramic batch from 92.9 to 101.9 mg l-1, i.e. a change in facid of (101.9 − 92.9)/92.9 = 0.097, or close to 0.1. Another study, using a hot leach method, showed a comparable change (50 % change in lead extracted on a change from 2 to 6 % v/v) (3). Assuming this effect is approximately linear with acid concentration gives an estimated change in facid of approximately 0.1 per % v/v change in acid concentration. In a separate experiment the concentration and its standard uncertainty have been established using titration with a standardised NaOH titre (3.996 % v/v, u = 0.008 % v/v). Taking the uncertainty of 0.008 % v/v on the acid concentration suggests an uncertainty for facid of 0.008×0.1 = 0.0008. As the uncertainty on the acid concentration is already expressed as a standard uncertainty, this value can be used directly as the uncertainty associated with facid.


NOTE: In principle, the uncertainty value would need correcting for the assumption that the single study above is sufficiently representative of all ceramics. The present value does, however, give a reasonable estimate of the magnitude of the uncertainty.

A5.5 Step 4: Calculating the combined standard uncertainty

The amount of leached cadmium per unit area, assuming no dilution, is given by

$$r = \frac{c_0 \cdot V_L}{a_V}\cdot f_{acid}\cdot f_{time}\cdot f_{temp} \quad [\mathrm{mg\ dm^{-2}}]$$

The intermediate values and their standard uncertainties are collected in Table A5.3. Employing those values:

$$r = \frac{0.26 \times 0.332}{2.37}\times 1.0\times 1.0\times 1.0 = 0.036~\mathrm{mg\ dm^{-2}}$$

In order to calculate the combined standard uncertainty of a multiplicative expression (as above), the standard uncertainties of each component are used as follows:

$$\frac{u_c(r)}{r} = \sqrt{\left(\frac{u(c_0)}{c_0}\right)^2+\left(\frac{u(V_L)}{V_L}\right)^2+\left(\frac{u(a_V)}{a_V}\right)^2+\left(\frac{u(f_{acid})}{f_{acid}}\right)^2+\left(\frac{u(f_{time})}{f_{time}}\right)^2+\left(\frac{u(f_{temp})}{f_{temp}}\right)^2}$$

$$= \sqrt{0.069^2+0.0054^2+0.025^2+0.0008^2+0.001^2+0.06^2} = 0.095$$

$$\Rightarrow u_c(r) = r \times 0.095 = 0.0034~\mathrm{mg\ dm^{-2}}$$
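A minimal numeric sketch of this combination, using the relative standard uncertainties collected in Table A5.3 (the variable names are illustrative only):

```python
import math

# Relative standard uncertainties u(x)/x from Table A5.3
rel_u = {
    "c0": 0.069,       # cadmium content of the extraction solution
    "VL": 0.0054,      # volume of the leachate
    "aV": 0.025,       # surface area of the vessel
    "f_acid": 0.0008,  # influence of acid concentration
    "f_time": 0.001,   # influence of leaching duration
    "f_temp": 0.06,    # influence of temperature
}

r = 0.26 * 0.332 / 2.37   # mg dm-2; all f-factors are 1.0

# For a purely multiplicative model the relative uncertainties add in quadrature
rel_uc = math.sqrt(sum(v ** 2 for v in rel_u.values()))
print(f"r = {r:.3f} mg dm-2, uc(r)/r = {rel_uc:.3f}, uc(r) = {r * rel_uc:.4f} mg dm-2")
# expected: uc(r)/r ≈ 0.095; the guide quotes uc(r) = 0.0034 mg dm-2 (values agree within rounding)
```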

The simpler spreadsheet approach to calculate the combined standard uncertainty is shown in Table A5.4. A description of the method is given in Appendix E.

The contributions of the different parameters and influence quantities to the measurement uncertainty are illustrated in Figure A5.8, comparing the size of each of the contributions (C13:H13 in Table A5.4) with the combined uncertainty (B16).

The expanded uncertainty U(r) is obtained by applying a coverage factor of 2:

U(r) = 0.0034 × 2 = 0.007 mg dm-2

Thus the amount of released cadmium measured according to BS 6748:1986 is

(0.036 ± 0.007) mg dm-2

where the stated uncertainty is calculated using a coverage factor of 2.

A5.6 References for Example 5

1. B. Krinitz, V. Franco, J. AOAC 56, 869-875 (1973)
2. B. Krinitz, J. AOAC 61, 1124-1129 (1978)
3. J. H. Gould, S. W. Butler, K. W. Boyer, E. A. Steele, J. AOAC 66, 610-619 (1983)
4. T. D. Seht, S. Sircar, M. Z. Hasan, Bull. Environ. Contam. Toxicol. 10, 51-56 (1973)
5. J. H. Gould, S. W. Butler, E. A. Steele, J. AOAC 66, 1112-1116 (1983)

Table A5.3: Intermediate values and uncertainties for leachable cadmium analysis

         Description                                    Value         Standard uncertainty u(x)   Relative standard uncertainty u(x)/x
  c0     Content of cadmium in the extraction solution  0.26 mg l-1   0.018 mg l-1                0.069
  VL     Volume of the leachate                         0.332 l       0.0018 l                    0.0054
  aV     Surface area of the vessel                     2.37 dm2      0.06 dm2                    0.025
  facid  Influence of the acid concentration            1.0           0.0008                      0.0008
  ftime  Influence of the duration                      1.0           0.001                       0.001
  ftemp  Influence of temperature                       1.0           0.06                        0.06


Table A5.4: Spreadsheet calculation of uncertainty for leachable cadmium analysis

       A             B          C          D          E          F          G          H
  1                             c0         VL         aV         facid      ftime      ftemp
  2    value                    0.26       0.332      2.37       1.0        1.0        1.0
  3    uncertainty              0.018      0.0018     0.06       0.0008     0.001      0.06
  4
  5    c0            0.26       0.278      0.26       0.26       0.26       0.26       0.26
  6    VL            0.332      0.332      0.3338     0.332      0.332      0.332      0.332
  7    aV            2.37       2.37       2.37       2.43       2.37       2.37       2.37
  8    facid         1.0        1.0        1.0        1.0        1.0008     1.0        1.0
  9    ftime         1.0        1.0        1.0        1.0        1.0        1.001      1.0
  10   ftemp         1.0        1.0        1.0        1.0        1.0        1.0        1.06
  11
  12   r             0.036422   0.038943   0.036619   0.035523   0.036451   0.036458   0.038607
  13   u(y,xi)                  0.002521   0.000197   -0.000899  0.000029   0.000036   0.002185
  14   u(y)2, u(y,xi)2  1.199E-5  6.36E-6   3.90E-8    8.09E-7    8.49E-10   1.33E-9    4.78E-6
  15
  16   uc(r)         0.0034

The values of the parameters are entered in the second row from C2 to H2, and their standard uncertainties in the row below (C3:H3). The spreadsheet copies the values from C2:H2 into the second column (B5:B10). The result (r) using these values is given in B12. C5 shows the value of c0 from C2 plus its uncertainty given in C3. The result of the calculation using the values C5:C10 is given in C12. The columns D to H follow a similar procedure. Row 13 (C13:H13) shows the differences of the row (C12:H12) minus the value given in B12. In row 14 (C14:H14) the values of row 13 (C13:H13) are squared and summed to give the value shown in B14. B16 gives the combined standard uncertainty, which is the square root of B14.

Figure A5.8: Uncertainties in leachable Cd determination

[Bar chart, not reproduced here: the contributions u(y,xi) (mg dm-2, ×10000, scale 0 to 4) for c(0), V(L), a(V), f(acid), f(time) and f(temp), compared with the combined uncertainty on r.]

The values of u(y,xi) = (∂y/∂xi)·u(xi) are taken from Table A5.4.


Example A6: The Determination of Crude Fibre in Animal Feeding Stuffs

Summary

Goal

The determination of crude fibre by a regulatory standard method.

Measurement procedure

The measurement procedure is a standardised procedure involving the general steps outlined in Figure A6.1. These are repeated for a blank sample to obtain a blank correction.

Measurand

The fibre content as a percentage of the sample by weight, Cfibre, is given by:

$$C_{fibre} = \frac{(b-c)\times 100}{a}$$

where:
  a is the mass (g) of the sample (approximately 1 g);
  b is the loss of mass (g) after ashing during the determination;
  c is the loss of mass (g) after ashing during the blank test.

Identification of uncertainty sources

A full cause and effect diagram is provided as Figure A6.9.

Quantification of uncertainty components

Laboratory experiments showed that the method was performing in house in a manner that fully justified adoption of collaborative study reproducibility data. No other contributions were significant in general. At low levels it was necessary to add an allowance for the specific drying procedure used. Typical resulting uncertainty estimates are tabulated below (as standard uncertainties) (Table A6.1).

Figure A6.1: Fibre determination.

[Flow diagram, not reproduced here: Grind and weigh sample → Acid digestion → Alkaline digestion → Dry and weigh residue → Ash and weigh residue → RESULT.]

Table A6.1: Combined standard uncertainties

  Fibre content   Standard uncertainty          Relative standard uncertainty
  (% w/w)         uc(Cfibre) (% w/w)            uc(Cfibre)/Cfibre
  2.5             √(0.29² + 0.115²) = 0.31      0.12
  5               0.4                           0.08
  10              0.6                           0.06


Example A6: The determination of crude fibre in animal feeding stuffs. Detailed discussion

A6.1 Introduction

Crude fibre is defined in the method scope as the amount of fat-free organic substances which are insoluble in acid and alkaline media. The procedure is standardised and its results used directly. Changes in the procedure change the measurand; this is accordingly an example of an empirical method.

Collaborative trial data (repeatability and reproducibility) were available for this statutory method. The precision experiments described were planned as part of the in-house evaluation of the method performance. There is no suitable reference material (i.e. certified by the same method) available for this method.

A6.2 Step 1: Specification

The specification of the measurand for more extensive analytical methods is best done by a comprehensive description of the different stages of the analytical method and by providing the equation of the measurand.

Procedure

The procedure, a complex digestion, filtration, drying, ashing and weighing procedure, which is also repeated for a blank crucible, is summarised in Figure A6.2. The aim is to digest most components, leaving behind all the undigested material. The organic material is ashed, leaving an inorganic residue. The difference between the dry organic/inorganic residue weight and the ashed residue weight is the "fibre content". The main stages are:

i) Grind the sample to pass through a 1 mm sieve.
ii) Weigh 1 g of the sample into a weighed crucible.
iii) Add a set of acid digestion reagents at stated concentrations and volumes. Boil for a stated, standardised time, filter and wash the residue.
iv) Add standard alkali digestion reagents and boil for the required time, filter, wash and rinse with acetone.
v) Dry to constant weight at a standardised temperature ("constant weight" is not defined within the published method; nor are other drying conditions such as air circulation or dispersion of the residue).
vi) Record the dry residue weight.
vii) Ash at a stated temperature to "constant weight" (in practice realised by ashing for a set time decided after in-house studies).
viii) Weigh the ashed residue and calculate the fibre content by difference, after subtracting the residue weight found for the blank crucible.

Measurand

The fibre content as a percentage of the sample by weight, Cfibre, is given by:

$$C_{fibre} = \frac{(b-c)\times 100}{a}$$

where:
  a is the mass (g) of the sample. Approximately 1 g of sample is taken for analysis.
  b is the loss of mass (g) after ashing during the determination.
  c is the loss of mass (g) after ashing during the blank test.
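As a minimal sketch of the measurand equation (the masses below are hypothetical values chosen only to illustrate the arithmetic; they are not data from this example):

```python
def crude_fibre_percent(a, b, c):
    """Cfibre (% w/w): a = sample mass (g), b = loss of mass on ashing for the
    determination (g), c = loss of mass on ashing for the blank test (g)."""
    return (b - c) * 100 / a

# Hypothetical masses, for illustration only
print(crude_fibre_percent(a=1.0021, b=0.0528, c=0.0004))   # ≈ 5.2 % w/w
```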

A6.3 Step 2: Identifying and analysing uncertainty sources

A range of sources of uncertainty was identified. These are shown in the cause and effect diagram for the method (see Figure A6.9). This diagram was simplified to remove duplication following the procedures in Appendix D; this, together with removal of insignificant components, leads to the simplified cause and effect diagram in Figure A6.10.

Since prior collaborative and in-house study data were available for the method, the use of these data is closely related to the evaluation of different contributions to uncertainty and is accordingly discussed further below.


A6.4 Step 3: Quantifying uncertainty components

Collaborative trial results

The method has been the subject of a collaborative trial. Five different feeding stuffs representing typical fibre and fat concentrations were analysed in the trial. Participants in the trial carried out all stages of the method, including grinding of the samples. The repeatability and reproducibility estimates obtained from the trial are presented in Table A6.2.

As part of the in-house evaluation of the method, experiments were planned to evaluate the repeatability (within batch precision) for feeding stuffs with fibre concentrations similar to those of the samples analysed in the collaborative trial. The results are summarised in Table A6.2. Each estimate of in-house repeatability is based on 5 replicates.

Figure A6.2: Flow diagram illustrating the stages in the regulatory method for the determination of fibre in animal feeding stuffs

[Flow diagram steps:]
- Grind sample to pass through 1 mm sieve
- Weigh 1 g of sample into crucible (weigh a second crucible for the blank test)
- Add filter aid, anti-foaming agent followed by 150 ml boiling H2SO4
- Boil vigorously for 30 mins
- Filter and wash with 3×30 ml boiling water
- Add anti-foaming agent followed by 150 ml boiling KOH
- Boil vigorously for 30 mins
- Filter and wash with 3×30 ml boiling water
- Apply vacuum, wash with 3×25 ml acetone
- Dry to constant weight at 130 °C
- Ash to constant weight at 475-500 °C
- Calculate the % crude fibre content


The estimates of repeatability obtained in-house were comparable to those obtained from the collaborative trial. This indicates that the method precision in this particular laboratory is similar to that of the laboratories which took part in the collaborative trial. It is therefore acceptable to use the reproducibility standard deviation from the collaborative trial in the uncertainty budget for the method. To complete the uncertainty budget we need to consider whether there are any other effects not covered by the collaborative trial which need to be addressed. The collaborative trial covered different sample matrices and the pre-treatment of samples, as the participants were supplied with samples which required grinding prior to analysis. The uncertainties associated with matrix effects and sample pre-treatment do not therefore require any additional consideration. Other parameters which affect the result relate to the extraction and drying conditions used in the method. These were investigated separately to ensure the laboratory bias was under control (i.e., small compared to the reproducibility standard deviation). The parameters considered are discussed below.

Loss of mass on ashing

As there is no appropriate reference material for this method, in-house bias has to be assessed by considering the uncertainties associated with individual stages of the method. Several factors will contribute to the uncertainty associated with the loss of mass after ashing:

- acid concentration;
- alkali concentration;
- acid digestion time;
- alkali digestion time;
- drying temperature and time;
- ashing temperature and time.

Reagent concentrations and digestion times

The effects of acid concentration, alkali concentration, acid digestion time and alkali digestion time have been studied in previously published papers. In these studies, the effect of changes in the parameter on the result of the analysis was evaluated. For each parameter the sensitivity coefficient (i.e., the rate of change in the final result with changes in the parameter) and the uncertainty in the parameter were calculated.

The uncertainties given in Table A6.3 are small compared to the reproducibility figures presented in Table A6.2. For example, the reproducibility standard deviation for a sample containing 2.3 % w/w fibre is 0.293 % w/w. The uncertainty associated with variations in the acid digestion time is estimated as 0.021 % w/w (i.e., 2.3 × 0.009). We can therefore safely neglect the uncertainties associated with variations in these method parameters.

Drying temperature and time

No prior data were available. The method states that the sample should be dried at 130 °C to "constant weight". In this case the sample is dried for 3 hours at 130 °C and then weighed. It is then dried for a further hour and re-weighed. Constant

Table A6.2: Summary of results from collaborative trial of the method and in-house repeatability check

           Collaborative trial results
  Sample   Mean fibre content   Reproducibility standard   Repeatability standard   In-house repeatability
           (% w/w)              deviation (sR)             deviation (sr)           standard deviation
  A        2.3                  0.293                      0.198                    0.193
  B        12.1                 0.563                      0.358                    0.312
  C        5.4                  0.390                      0.264                    0.259
  D        3.4                  0.347                      0.232                    0.213
  E        10.1                 0.575                      0.391                    0.327


weight is defined in this laboratory as a change of less than 2 mg between successive weighings. In an in-house study, replicate samples of four feeding stuffs were dried at 110, 130 and 150 °C and weighed after 3 and 4 hours drying time. In the majority of cases, the weight change between 3 and 4 hours was less than 2 mg. This was therefore taken as the worst case estimate of the uncertainty in the weight change on drying. The range ±2 mg describes a rectangular distribution, which is converted to a standard uncertainty by dividing by √3. The uncertainty in the weight recorded after drying to constant weight is therefore 0.00115 g. The method specifies a sample weight of 1 g. For a 1 g sample, the uncertainty in drying to constant weight corresponds to a standard uncertainty of 0.115 % w/w in the fibre content. This source of uncertainty is independent of the fibre content of the sample. There will therefore be a fixed contribution of 0.115 % w/w to the uncertainty budget for each sample, regardless of the concentration of fibre in the sample. At all fibre concentrations, this uncertainty is smaller than the reproducibility standard deviation, and for all but the lowest fibre concentrations is less than 1/3 of the sR value. Again, this source of uncertainty can usually be neglected. However, for low fibre concentrations, this uncertainty is more than 1/3 of the sR value, so an additional term should be included in the uncertainty budget (see Table A6.4).
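Written out as a worked calculation (this restates the figures above and the entry for 2.5 % w/w fibre in Table A6.4; it adds no new data):

$$u_{dry} = \frac{0.002~\mathrm{g}}{\sqrt{3}} = 0.00115~\mathrm{g} \;\equiv\; \frac{0.00115~\mathrm{g}}{1~\mathrm{g}}\times 100 = 0.115~\%~w/w$$

$$u_c(C_{fibre},\ 2.5~\%~w/w) = \sqrt{s_R^2 + u_{dry}^2} = \sqrt{0.29^2 + 0.115^2} = 0.31~\%~w/w$$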

Ashing temperature and time

The method requires the sample to be ashed at 475 to 500 °C for at least 30 mins. A published study on the effect of ashing conditions involved determining fibre content at a number of different ashing temperature/time combinations, ranging from 450 °C for 30 minutes to 650 °C for 3 hours. No significant difference was observed between the fibre contents obtained under the different conditions. The effect on the final result of small variations in ashing temperature and time can therefore be assumed to be negligible.

Loss of mass after blank ashing

No experimental data were available for this parameter. However, as discussed above, the effects of variations in this parameter are likely to be small.

A6.5 Step 4: Calculating the combined standard uncertainty

This is an example of an empirical method for which collaborative trial data were available. The in-house repeatability was evaluated and found to be comparable to that predicted by the collaborative trial. It is therefore appropriate to use the sR values from the collaborative trial. The discussion presented in Step 3 leads to the

Table A6.3: Uncertainties associated with method parameters

  Parameter               Sensitivity coefficient Note 1   Uncertainty in parameter   Uncertainty in final result as RSD Note 4
  acid concentration      0.23 (mol l-1)-1                 0.0013 mol l-1 Note 2      0.00030
  alkali concentration    0.21 (mol l-1)-1                 0.0023 mol l-1 Note 2      0.00048
  acid digestion time     0.0031 min-1                     2.89 mins Note 3           0.0090
  alkali digestion time   0.0025 min-1                     2.89 mins Note 3           0.0072

Note 1. The sensitivity coefficients were estimated by plotting the normalised change in fibre content against reagent strength or digestion time. Linear regression was then used to calculate the rate of change of the result of the analysis with changes in the parameter.

Note 2. The standard uncertainties in the concentrations of the acid and alkali solutions were calculated from estimates of the precision and trueness of the volumetric glassware used in their preparation, temperature effects etc. See examples A1-A3 for further examples of calculating uncertainties for the concentrations of solutions.

Note 3. The method specifies a digestion time of 30 minutes. The digestion time is controlled to within ±5 minutes. This is a rectangular distribution which is converted to a standard uncertainty by dividing by √3.

Note 4. The uncertainty in the final result, as a relative standard deviation, is calculated by multiplying the sensitivity coefficient by the uncertainty in the parameter.


conclusion that, with the exception of the effect of drying conditions at low fibre concentrations, the other sources of uncertainty identified are all small in comparison to sR. In cases such as this, the uncertainty estimate can be based on the reproducibility standard deviation, sR, obtained from the collaborative trial. For samples with a fibre content of 2.5 % w/w, an additional term has been included to take account of the uncertainty associated with the drying conditions.

Standard uncertainty

Typical standard uncertainties for a range of fibre concentrations are given in Table A6.4 below.

Expanded uncertainty

Typical expanded uncertainties are given in Table A6.5 below. These were calculated using a coverage factor k of 2, which gives a level of confidence of approximately 95 %.

Table A6.4: Combined standard uncertainties

  Fibre content   Standard uncertainty          Relative standard uncertainty
  (% w/w)         uc(Cfibre) (% w/w)            uc(Cfibre)/Cfibre
  2.5             √(0.29² + 0.115²) = 0.31      0.12
  5               0.4                           0.08
  10              0.6                           0.06

Table A6.5: Expanded uncertainties

  Fibre content   Expanded uncertainty   Expanded uncertainty
  (% w/w)         U(Cfibre) (% w/w)      (% of fibre content)
  2.5             0.62                   25
  5               0.8                    16
  10              1.2                    12


Figure A6.9: Cause and effect diagram for the determination of fibre in animal feeding stuffs

[Diagram, not reproduced here: the main branches feeding into Crude Fibre (%) are Loss of mass after ashing (b), Loss of mass after blank ashing (c), Mass of sample (a) and Precision (sample weight precision, weighing precision, extraction precision, ashing precision). Contributing factors on the branches include the weights of the crucible and of sample plus crucible before and after ashing (with balance calibration and balance linearity), drying temperature and time, ashing temperature and time, and the acid digest and alkali digest steps (acid/alkali volume and concentration, digest conditions, boiling rate, extraction time).]

*The branches feeding into these "acid digest" and "alkali digest" branches have been omitted for clarity. The same factors affect them as for the sample (i.e., digest conditions, acid conc. etc.).


Figure A6.10: Simplified cause and effect diagram

[Diagram, not reproduced here: Figure A6.9 after removal of duplication and insignificant components. The branches feeding into Crude Fibre (%) are Loss of mass after ashing (b), Loss of mass after blank ashing (c) and Precision (sample weight precision, weighing precision, extraction precision, ashing precision), with contributions from drying temperature and time, ashing temperature and time, the weights of sample and crucible before and after ashing, and the acid digest and alkali digest steps (acid/alkali volume and concentration, digest conditions, boiling rate, extraction time).]

*The branches feeding into these "acid digest" and "alkali digest" branches have been omitted for clarity. The same factors affect them as for the sample (i.e., digest conditions, acid conc. etc.).


Example A7: Determination of the Amount of Lead in Water Using Double Isotope Dilution and Inductively Coupled Plasma Mass Spectrometry

A7.1 Introduction

This example illustrates how the uncertainty concept can be applied to a measurement of the amount content of lead in a water sample using Isotope Dilution Mass Spectrometry (IDMS) and Inductively Coupled Plasma Mass Spectrometry (ICP-MS).

General introduction to Double IDMS

IDMS is one of the techniques that is recognised by the Comité consultatif pour la quantité de matière (CCQM) to have the potential to be a primary method of measurement, and therefore a well defined expression which describes how the measurand is calculated is available. In the simplest case of isotope dilution using a certified spike, which is an enriched isotopic reference material, isotope ratios in the spike, the sample and a blend b of known masses of sample and spike are measured. The element amount content cx in the sample is given by:

$$c_x = c_y\cdot\frac{m_y}{m_x}\cdot\frac{K_{y1}R_{y1}-K_bR_b}{K_bR_b-K_{x1}R_{x1}}\cdot\frac{\sum_i K_{xi}R_{xi}}{\sum_i K_{yi}R_{yi}} \qquad (1)$$

where cx and cy are the element amount contents in the sample and the spike respectively (the symbol c is used here instead of k for amount content (1) to avoid confusion with K-factors and coverage factors k). mx and my are the masses of sample and spike respectively. Rx, Ry and Rb are the isotope amount ratios. The indexes x, y and b represent the sample, the spike and the blend respectively. One isotope, usually the most abundant in the sample, is selected and all isotope amount ratios are expressed relative to it. A particular pair of isotopes, the reference isotope and preferably the most abundant isotope in the spike, is then selected as monitor ratio, e.g. n(208Pb)/n(206Pb). Rxi and Ryi are all the possible isotope amount ratios in the sample and the spike respectively. For the reference isotope, this ratio is unity. Kxi, Kyi and Kb are the correction factors for mass discrimination, for a particular isotope amount ratio, in sample, spike and blend respectively. The K-factors are measured using a certified isotopic reference material according to equation (2):

$$K = K_0 + K_{bias}\,;\quad \text{where } K_0 = \frac{R_{certified}}{R_{observed}} \qquad (2)$$

where K0 is the mass discrimination correction factor at time 0, and Kbias is a bias factor coming into effect as soon as the K-factor is applied to correct a ratio measured at a different time during the measurement. The Kbias also includes other possible sources of bias such as multiplier dead time correction, matrix effects etc. Rcertified is the certified isotope amount ratio taken from the certificate of an isotopic reference material and Robserved is the observed value of this isotopic reference material. In IDMS experiments using Inductively Coupled Plasma Mass Spectrometry (ICP-MS), mass fractionation will vary with time, which requires that all isotope amount ratios in equation (1) be individually corrected for mass discrimination.

Certified material enriched in a specific isotope is often unavailable. To overcome this problem, 'double' IDMS is frequently used. The procedure uses a less well characterised, isotopically enriched spiking material in conjunction with a certified material (denoted z) of natural isotopic composition. The certified, natural composition material acts as the primary assay standard. Two blends are used; blend b is a blend between sample and enriched spike, as in equation (1). To perform double IDMS a second blend, b', is prepared from the primary assay standard with amount content cz, and the enriched material y. This gives a similar expression to equation (1):

$$c_z = c_y\cdot\frac{m'_y}{m_z}\cdot\frac{K_{y1}R_{y1}-K'_bR'_b}{K'_bR'_b-K_{z1}R_{z1}}\cdot\frac{\sum_i K_{zi}R_{zi}}{\sum_i K_{yi}R_{yi}} \qquad (3)$$

where cz is the element amount content of the primary assay standard solution and mz the mass of the primary assay standard when preparing the new blend. m'y is the mass of the enriched spike solution; K'b, R'b, Kz1 and Rz1 are the K-factor and the ratio for the new blend and the assay standard respectively. The index z represents the assay


standard. Dividing equation (1) by equation (3) gives

$$\frac{c_x}{c_z} = \frac{\displaystyle c_y\cdot\frac{m_y}{m_x}\cdot\frac{K_{y1}R_{y1}-K_bR_b}{K_bR_b-K_{x1}R_{x1}}\cdot\frac{\sum_i K_{xi}R_{xi}}{\sum_i K_{yi}R_{yi}}}{\displaystyle c_y\cdot\frac{m'_y}{m_z}\cdot\frac{K_{y1}R_{y1}-K'_bR'_b}{K'_bR'_b-K_{z1}R_{z1}}\cdot\frac{\sum_i K_{zi}R_{zi}}{\sum_i K_{yi}R_{yi}}} \qquad (4)$$

Simplifying this equation and introducing a procedure blank, cblank, we get:

$$c_x = c_z\cdot\frac{m_y\cdot m_z}{m_x\cdot m'_y}\cdot\frac{K_{y1}R_{y1}-K_bR_b}{K_bR_b-K_{x1}R_{x1}}\cdot\frac{K'_bR'_b-K_{z1}R_{z1}}{K_{y1}R_{y1}-K'_bR'_b}\cdot\frac{\sum_i K_{xi}R_{xi}}{\sum_i K_{zi}R_{zi}} - c_{blank} \qquad (5)$$

This is the final equation, from which cy has been eliminated. In this measurement the number index on the amount ratios, R, represents the following actual isotope amount ratios:

  R1 = n(208Pb)/n(206Pb)    R2 = n(206Pb)/n(206Pb)
  R3 = n(207Pb)/n(206Pb)    R4 = n(204Pb)/n(206Pb)

For reference, the parameters are summarised in Table A7.1.

A7.2 Step 1: Specification

The general procedure for the measurements is shown in Table A7.2. The calculations and measurements involved are described below.

Calculation procedure for the amount content cx

For this determination of lead in water, four blends each of b' (assay + spike) and b (sample + spike) were prepared. This gives a total of 4 values for cx. One of these determinations will be described in detail following Table A7.2, steps 1 to 4. The reported value for cx will be the average of the four replicates.

Table A7.1. Summary of IDMS parameters

  Parameter   Description
  mx          mass of sample in blend b [g]
  my          mass of enriched spike in blend b [g]
  m'y         mass of enriched spike in blend b' [g]
  mz          mass of primary assay standard in blend b' [g]
  cx          amount content of the sample x [mol g-1 or µmol g-1] Note 1
  cz          amount content of the primary assay standard z [mol g-1 or µmol g-1] Note 1
  cy          amount content of the spike y [mol g-1 or µmol g-1] Note 1
  cblank      observed amount content in procedure blank [mol g-1 or µmol g-1] Note 1
  Rb          measured ratio of blend b, n(208Pb)/n(206Pb)
  Kb          mass bias correction of Rb
  R'b         measured ratio of blend b', n(208Pb)/n(206Pb)
  K'b         mass bias correction of R'b
  Ry1         measured ratio of enriched isotope to reference isotope in the enriched spike
  Ky1         mass bias correction of Ry1
  Rzi         all ratios in the primary assay standard, Rz1, Rz2 etc.
  Kzi         mass bias correction factors for Rzi
  Rxi         all ratios in the sample
  Kxi         mass bias correction factors for Rxi
  Rx1         measured ratio of enriched isotope to reference isotope in the sample x
  Rz1         as Rx1 but in the primary assay standard

Note 1: Units for amount content are always specified in the text.


Table A7.2. General procedure

  Step   Description
  1      Preparing the primary assay standard
  2      Preparation of blends: b' and b
  3      Measurement of isotope ratios
  4      Calculation of the amount content of Pb in the sample, cx
  5      Estimating the uncertainty in cx

Calculation of the Molar Mass

Due to natural variations in the isotopic composition of certain elements, e.g. Pb, the molar mass, M, for the primary assay standard has to be determined, since this will affect the amount content cz. Note that this is not the case when cz is expressed in mol g-1. The molar mass, M(E), for an element E, is numerically equal to the atomic weight of element E, Ar(E). The atomic weight can be calculated according to the general expression:

$$A_r(\mathrm{E}) = \frac{\sum_{i=1}^{p} R_i\cdot M(^i\mathrm{E})}{\sum_{i=1}^{p} R_i} \qquad (6)$$

where the values Ri are all true isotope amount ratios for the element E and M(iE) are the tabulated nuclide masses.

Note that the isotope amount ratios in equation (6) have to be absolute ratios, that is, they have to be corrected for mass discrimination. With the use of proper indexes, this gives equation (7). For the calculation, nuclide masses, M(iE), were taken from literature values (2), while ratios, Rzi, and K0-factors, K0(zi), were measured (see Table A7.8). These values give

$$M(\mathrm{Pb,\ Assay\ 1}) = \frac{\sum_{i=1}^{p} K_{zi}\cdot R_{zi}\cdot M(^i\mathrm{E})}{\sum_{i=1}^{p} K_{zi}\cdot R_{zi}} = 207.21034~\mathrm{g\ mol^{-1}} \qquad (7)$$

Measurement of K-factors and isotope amount ratios

To correct for mass discrimination, a correction factor, K, is used as specified in equation (2). The K0-factor can be calculated using a reference material certified for isotopic composition. In this case, the isotopically certified reference material NIST SRM 981 was used to monitor a possible change in the K0-factor. The K0-factor is measured before and after the ratio it will correct. A typical sample sequence is: 1. (blank), 2. (NIST SRM 981), 3. (blank), 4. (blend 1), 5. (blank), 6. (NIST SRM 981), 7. (blank), 8. (sample), etc.

The blank measurements are not only used for blank correction, they are also used for monitoring the number of counts for the blank. No new measurement run was started until the blank count rate was stable and back to a normal level. Note that sample, blends, spike and assay standard were diluted to an appropriate amount content prior to the measurements. The results of ratio measurements, calculated K0-factors and Kbias are summarised in Table A7.8.

Preparing the primary assay standard and calculating the amount content, cz

Two primary assay standards were produced, each from a different piece of metallic lead with a chemical purity of w = 99.999 %. The two pieces came from the same batch of high purity lead. The pieces were dissolved in about 10 ml of 1:3 w/w HNO3:water under gentle heating and then further diluted. Two blends were prepared from each of these two assay standards. The values from one of the assays are described hereafter.

0.36544 g lead, m1, was dissolved and diluted in aqueous HNO3 (0.5 mol l-1) to a total of d1 = 196.14 g. This solution is named Assay 1. A more diluted solution was needed, and m2 = 1.0292 g of Assay 1 was diluted in aqueous HNO3 (0.5 mol l-1) to a total mass of d2 = 99.931 g. This solution is named Assay 2. The amount content of Pb in Assay 2, cz, is then calculated according to equation (8):

$$c_z = \frac{m_1\cdot m_2\cdot w}{d_1\cdot d_2\cdot M(\mathrm{Pb,\ Assay\ 1})} = 9.2605\times 10^{-8}~\mathrm{mol\ g^{-1}} = 0.092605~\mathrm{\mu mol\ g^{-1}} \qquad (8)$$
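A small numeric sketch of equation (8), using the masses and purity given above and the molar mass from equation (7) (variable names are illustrative only):

```python
# Equation (8): amount content of Pb in Assay 2
m1 = 0.36544      # g, mass of the lead piece
w = 0.99999       # mass fraction purity
d1 = 196.14       # g, total mass of the first dilution (Assay 1)
m2 = 1.0292       # g, aliquot of Assay 1 taken
d2 = 99.931       # g, total mass of the second dilution (Assay 2)
M = 207.21034     # g/mol, molar mass of Pb in the assay material

c_z = (m1 * m2 * w) / (d1 * d2 * M)                          # mol/g
print(f"c_z = {c_z:.4e} mol/g = {c_z * 1e6:.6f} umol/g")     # ≈ 0.092605 umol/g
```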

Preparation of the blends

The mass fraction of the spike is known to be roughly 20 µg Pb per g solution, and the mass fraction of Pb in the sample is also known to be in this range. Table A7.3 shows the weighing data for the two blends used in this example.


Measurement of the procedure blank cBlank

In this case, the procedure blank was measured using external calibration. A more exhaustive procedure would be to add an enriched spike to a blank and process it in the same way as the samples. In this example, only high purity reagents were used, which would lead to extreme ratios in the blends and consequent poor reliability for the enriched spiking procedure. The externally calibrated procedure blank was measured four times, and cBlank found to be 4.5×10-7 µmol g-1, with standard uncertainty 4.0×10-7 µmol g-1 evaluated as type A.

Calculation of the unknown amount content cx

Inserting the measured and calculated data (Table A7.8) into equation (5) gives cx = 0.053738 µmol g-1. The results from all four replicates are given in Table A7.4.
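As a cross-check, equation (5) can be evaluated directly from the replicate 1 values collected in Table A7.3 and Table A7.8. The sketch below does this (function and argument names are illustrative; Kbias is set to zero, as in the example, and the small difference from the quoted 0.053738 µmol g-1 comes from rounding of the tabulated values):

```python
def cx_double_idms(cz, mx, my, m_y2, mz, Kb, Rb, Kb2, Rb2,
                   Ky1, Ry1, Kx, Rx, Kz, Rz, c_blank):
    """Equation (5): amount content of Pb in the sample (same units as cz)."""
    mass_term = (my * mz) / (mx * m_y2)
    blend_term = ((Ky1 * Ry1 - Kb * Rb) / (Kb * Rb - Kx[1] * Rx[1])
                  * (Kb2 * Rb2 - Kz[1] * Rz[1]) / (Ky1 * Ry1 - Kb2 * Rb2))
    sum_term = sum(Kx[i] * Rx[i] for i in Rx) / sum(Kz[i] * Rz[i] for i in Rz)
    return cz * mass_term * blend_term * sum_term - c_blank

# Replicate 1 values from Tables A7.3 and A7.8 (K for the reference ratios R2 set to 1)
cx = cx_double_idms(
    cz=0.092605, mx=1.0440, my=1.1360, m_y2=1.0654, mz=1.1029,
    Kb=0.9987, Rb=0.29360, Kb2=0.9983, Rb2=0.5050,
    Ky1=0.9999, Ry1=0.00064,
    Kx={1: 0.9992, 2: 1.0, 3: 1.0004, 4: 1.001},
    Rx={1: 2.1402, 2: 1.0, 3: 0.9142, 4: 0.05901},
    Kz={1: 0.9989, 2: 1.0, 3: 0.9993, 4: 1.0002},
    Rz={1: 2.1429, 2: 1.0, 3: 0.9147, 4: 0.05870},
    c_blank=4.5e-7)
print(f"cx = {cx:.6f} umol/g")   # ≈ 0.0537 umol/g
```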

A7.3 Steps 2 and 3: Identifying and quantifying uncertainty sources

Strategy for the uncertainty calculation

If equations (2), (7) and (8) were to be included in the final IDMS equation (5), the sheer number of parameters would make the equation almost impossible to handle. To keep it simpler, the K0-factors and the amount content of the standard assay solution, together with their associated uncertainties, are treated separately and then introduced into the IDMS equation (5). In this case it will not affect the final combined uncertainty of cx, and it is advisable to simplify for practical reasons.

For calculating the combined standard uncertainty, uc(cx), the values from one of the measurements, as described in A7.2, will be used. The combined uncertainty of cx will be calculated using the spreadsheet method described in Appendix E.

Uncertainty on the K-factors

i) Uncertainty on K0

K is calculated according to equation (2); using the values of Kx1 as an example gives for K0:

$$K_0(x1) = \frac{R_{certified}}{R_{observed}} = \frac{2.1681}{2.1699} = 0.9992 \qquad (9)$$

To calculate the uncertainty on K0 we first look at the certificate, where the certified ratio, 2.1681, has a stated uncertainty of 0.0008 based on a 95 % confidence interval. To convert an uncertainty based on a 95 % confidence interval to a standard uncertainty we divide by 2. This gives a standard uncertainty of u(Rcertified) = 0.0004. The observed amount ratio, Robserved = n(208Pb)/n(206Pb), has a standard uncertainty of 0.0025 (as RSD). For the K-factor, the combined uncertainty can be calculated as:

$$u_c(K_0(x1)) = K_0(x1)\times\sqrt{\left(\frac{0.0004}{2.1681}\right)^2+\left(0.0025\right)^2} = 0.002507 \qquad (10)$$

This clearly points out that the uncertainty contributions from the certified ratios are negligible. Henceforth, the uncertainties on the measured ratios, Robserved, will be used for the uncertainties on K0.

Uncertainty on Kbias

This bias factor is introduced to account for possible deviations in the value of the mass discrimination factor. As can be seen in the cause and effect diagram above, and in equation (2), there is a bias associated with every K-factor. The values of these biases are in our case not known, and a value of 0 is applied. An uncertainty is, of

Table A7.3

  Blend            b                     b'
  Solutions used   Spike     Sample      Spike     Assay 2
  Parameter        my        mx          m'y       mz
  Mass (g)         1.1360    1.0440      1.0654    1.1029

Table A7.4

                                         cx (µmol g-1)
  Replicate 1 (our example)              0.053738
  Replicate 2                            0.053621
  Replicate 3                            0.053610
  Replicate 4                            0.053822
  Average                                0.05370
  Experimental standard deviation (s)    0.0001


course, associated with every bias and this has to be taken into consideration when calculating the final uncertainty. In principle, a bias would be applied as in equation (11), using an excerpt from equation (5) and the parameters Ky1 and Ry1 to demonstrate this principle:

$$c_x = \ldots\times\frac{\big(K_0(y1)+K_{bias}(y1)\big)\cdot R_{y1}-\ldots}{\ldots}\times\ldots \qquad (11)$$

The values of all biases, Kbias(yi, xi, zi), are (0 ± 0.001). This estimation is based on long experience of lead IDMS measurements. The Kbias(yi, xi, zi) parameters are not all included in detail in Table A7.5, Table A7.8 or in equation (5), but they are used in all uncertainty calculations.

Uncertainty of the weighed masses

In this case, a dedicated mass metrology laboratory performed the weighings. The procedure applied was a bracketing technique using calibrated weights and a comparator. The bracketing technique was repeated at least six times for every sample mass determination. Buoyancy correction was applied. Stoichiometry and impurity corrections were not applied in this case. The uncertainties from the weighing certificates were treated as standard uncertainties and are given in Table A7.8.

Uncertainty in the amount content of the Standard Assay Solution, cz

i) Uncertainty in the atomic weight of Pb

First, the combined uncertainty of the molar mass of the assay solution, Assay 1, will be calculated. The values in Table A7.5 are known or have been measured.

According to equation (7), the calculation of the molar mass takes this form:

$$M(\mathrm{Pb,\ Assay\ 1}) = \frac{K_{z1}R_{z1}M_1 + R_{z2}M_2 + K_{z3}R_{z3}M_3 + K_{z4}R_{z4}M_4}{K_{z1}R_{z1} + R_{z2} + K_{z3}R_{z3} + K_{z4}R_{z4}} \qquad (12)$$

To calculate the combined standard uncertainty of the molar mass of Pb in the standard assay solution, the spreadsheet model described in Appendix E was used. There were eight measurements of every ratio and K0. This gave a molar mass M(Pb, Assay 1) = 207.2103 g mol-1, with uncertainty 0.0010 g mol-1 calculated using the spreadsheet method.

ii) Calculation of the combined standard uncertainty in determining cz

To calculate the uncertainty on the amount content of Pb in the standard assay solution, cz, the data from A7.2 and equation (8) are used. The uncertainties were taken from the weighing certificates, see A7.3. All parameters used in equation (8) are given with their uncertainties in Table A7.6.

The amount content, cz, was calculated using equation (8). Following Appendix D.5, the combined standard uncertainty in cz is calculated to be uc(cz) = 0.000028. This gives cz = 0.092606 µmol g-1 with a standard uncertainty of 0.000028 µmol g-1 (0.03 % as RSD).

To calculate uc(cx) for replicate 1, the spreadsheet model was applied (Appendix E). The uncertainty budget for replicate 1 will be representative for the measurement. Due to the number of parameters in equation (5), the spreadsheet will not be displayed. The values of the parameters and their uncertainties, as well as the combined uncertainty of cx, can be seen in Table A7.8.

Table A7.5

              Value         Standard uncertainty   Type Note 1
  Kbias(zi)   0             0.001                  B
  Rz1         2.1429        0.0054                 A
  K0(z1)      0.9989        0.0025                 A
  K0(z3)      0.9993        0.0035                 A
  K0(z4)      1.0002        0.0060                 A
  Rz2         1             0                      A
  Rz3         0.9147        0.0032                 A
  Rz4         0.05870       0.00035                A
  M1          207.976636    0.000003               B
  M2          205.974449    0.000003               B
  M3          206.975880    0.000003               B
  M4          203.973028    0.000003               B

Note 1. Type A (statistical evaluation) or Type B (other).


A7.4 Step 4: Calculating the combined standard uncertainty

The average and the experimental standard deviation of the four replicates are displayed in Table A7.7. The numbers are taken from Table A7.4 and Table A7.8.

Table A7.7

                            Replicate 1           Mean of replicates 1-4
  cx (µmol g-1)             0.05374               0.05370
  uncertainty (µmol g-1)    uc(cx) = 0.00018      s = 0.00010 Note 1

Note 1. This is the experimental standard deviation, not the standard deviation of the mean.

In IDMS, and in many non-routine analyses, a complete statistical control of the measurement procedure would require limitless resources and time. A good way then to check whether some source of uncertainty has been forgotten is to compare the uncertainties from the type A evaluations with the experimental standard deviation of the four replicates. If the experimental standard deviation is higher than the contributions from the uncertainty sources evaluated as type A, it could indicate that the measurement process is not fully understood. As an approximation, using data from Table A7.8, the sum of the type A evaluated experimental uncertainties can be calculated by taking 92.2 % of the total experimental uncertainty, which is 0.00041 µmol g-1. This value is then clearly higher than the experimental standard deviation of 0.00010 µmol g-1, see Table A7.7. This indicates that the experimental standard deviation is covered by the contributions from the type A evaluated uncertainties and that no further type A evaluated uncertainty contribution, due to the preparation of the blends, needs to be considered. There could, however, be a bias associated with the preparations of the blends. In this example, a possible bias in the preparation of the blends is judged to be insignificant in comparison to the major sources of uncertainty.

The amount content of lead in the water sample is then:

cx = (0.05370 ± 0.00036) µmol g-1

The result is presented with an expanded uncertainty using a coverage factor of 2.

References for Example 7

1. T. Cvitaš, Metrologia, 1996, 33, 35-39
2. G. Audi and A. H. Wapstra, Nuclear Physics, A565 (1993)

Table A7.6

                                                         Value      Uncertainty
  Mass of lead piece, m1 (g)                             0.36544    0.00005
  Total mass of first dilution, d1 (g)                   196.14     0.03
  Aliquot of first dilution, m2 (g)                      1.0292     0.0002
  Total mass of second dilution, d2 (g)                  99.931     0.01
  Purity of the metallic lead piece, w (mass fraction)   0.99999    0.000005
  Molar mass of Pb in the Assay Material, M (g mol-1)    207.2104   0.0010


Table A7.8

  Parameter   Uncertainty   Value      Experimental    Contribution    Final           Contribution
              evaluation               uncertainty     to total        uncertainty     to total
                                       (Note 1)        uc (%)          (Note 2)        uc (%)
  ΣKbias      B             0          0.001 Note 3    7.2             0.001 Note 3    37.6
  cz          B             0.092605   0.000028        0.2             0.000028        0.8
  K0(b)       A             0.9987     0.0025          14.4            0.00088         9.5
  K0(b')      A             0.9983     0.0025          18.3            0.00088         11.9
  K0(x1)      A             0.9992     0.0025          4.3             0.00088         2.8
  K0(x3)      A             1.0004     0.0035          1               0.0012          0.6
  K0(x4)      A             1.001      0.006           0               0.0021          0
  K0(y1)      A             0.9999     0.0025          0               0.00088         0
  K0(z1)      A             0.9989     0.0025          6.6             0.00088         4.3
  K0(z3)      A             0.9993     0.0035          1               0.0012          0.6
  K0(z4)      A             1.0002     0.006           0               0.0021          0
  mx          B             1.0440     0.0002          0.1             0.0002          0.3
  my1         B             1.1360     0.0002          0.1             0.0002          0.3
  my2         B             1.0654     0.0002          0.1             0.0002          0.3
  mz          B             1.1029     0.0002          0.1             0.0002          0.3
  Rb          A             0.29360    0.00073         14.2            0.00026 Note 4  9.5
  R'b         A             0.5050     0.0013          19.3            0.00046         12.7
  Rx1         A             2.1402     0.0054          4.4             0.0019          2.9
  Rx2         Cons.         1          0                               0
  Rx3         A             0.9142     0.0032          1               0.0011          0.6
  Rx4         A             0.05901    0.00035         0               0.00012         0
  Ry1         A             0.00064    0.00004         0               0.000014        0
  Rz1         A             2.1429     0.0054          6.7             0.0019          4.4
  Rz2         Cons.         1          0                               0
  Rz3         A             0.9147     0.0032          1               0.0011          0.6
  Rz4         A             0.05870    0.00035         0               0.00012         0
  cBlank      A             4.5×10-7   4.0×10-7        0               2.0×10-7        0

  cx                        0.05374    0.00041                         0.00018

                                       ΣA contrib. = 92.2              ΣA contrib. = 60.4
                                       ΣB contrib. = 7.8               ΣB contrib. = 39.6

Notes to the table follow.


Notes to Table A7.8

Note 1. The experimental uncertainty is calculated without taking the number of measurements on each parameter into account.

Note 2. In the final uncertainty the number of measurements has been taken into account. In this case all type A evaluated parameters have been measured 8 times. Their standard uncertainties have been divided by √8.

Note 3. This value is for one single Kbias. The parameter ΣKbias is used instead of listing all Kbias(zi, xi, yi), which all have the same value (0 ± 0.001).

Note 4. Rb has been measured 8 times per blend, giving a total of 32 observations. When there is no blend to blend variation, as in this example, all these 32 observations could be accounted for by implementing all four blend replicates in the model. This can be very time consuming and since, in this case, it does not affect the uncertainty noticeably, it is not done.


Appendix B. Definitions

General

B.1 Accuracy of measurement

The closeness of the agreement between the result of a measurement and a true value of the measurand [H.4].

NOTE 1 "Accuracy" is a qualitative concept.

NOTE 2 The term "precision" should not be used for "accuracy".

B.2 Precision

The closeness of agreement between independent test results obtained under stipulated conditions [H.5].

NOTE 1 Precision depends only on the distribution of random errors and does not relate to the true value or the specified value.

NOTE 2 The measure of precision is usually expressed in terms of imprecision and computed as a standard deviation of the test results. Less precision is reflected by a larger standard deviation.

NOTE 3 "Independent test results" means results obtained in a manner not influenced by any previous result on the same or similar test object. Quantitative measures of precision depend critically on the stipulated conditions. Repeatability and reproducibility conditions are particular sets of extreme stipulated conditions.

B.3 True value

Value consistent with the definition of a given particular quantity [H.4].

NOTE 1 This is a value that would be obtained by a perfect measurement.

NOTE 2 True values are by nature indeterminate.

NOTE 3 The indefinite article "a" rather than the definite article "the" is used in conjunction with "true value" because there may be many values consistent with the definition of a given particular quantity.

B.4 Conventional true value

Value attributed to a particular quantity and accepted, sometimes by convention, as having an uncertainty appropriate for a given purpose [H.4].

EXAMPLES

a) At a given location, the value assigned to the quantity realised by a reference standard may be taken as a conventional true value.

b) The CODATA (1986) recommended value for the Avogadro constant, NA: 6.0221367×10^23 mol-1.

NOTE 1 "Conventional true value" is sometimes called assigned value, best estimate of the value, conventional value or reference value.

NOTE 2 Frequently, a number of results of measurements of a quantity is used to establish a conventional true value.

B.5 Influence quantity

A quantity that is not the measurand but that affects the result of the measurement [H.4].

EXAMPLES

1. Temperature of a micrometer used to measure length;

2. Frequency in the measurement of an alternating electric potential difference;

3. Bilirubin concentration in the measurement of haemoglobin concentration in human blood plasma.


Measurement

B.6 Measurand

Particular quantity subject to measurement [H.4].

NOTE The specification of a measurand may require statements about quantities such as time, temperature and pressure.

B.7 Measurement

Set of operations having the object of determining a value of a quantity [H.4].

B.8 Measurement procedure

Set of operations, described specifically, used in the performance of measurements according to a given method [H.4].

NOTE A measurement procedure is usually recorded in a document that is sometimes itself called a "measurement procedure" (or a measurement method) and is usually in sufficient detail to enable an operator to carry out a measurement without additional information.

B.9 Method of measurement

A logical sequence of operations, described generically, used in the performance of measurements [H.4].

NOTE Methods of measurement may be qualified in various ways such as:
- substitution method
- differential method
- null method

B.10 Result of a measurement

Value attributed to a measurand, obtained by measurement [H.4].

NOTE 1 When the term "result of a measurement" is used, it should be made clear whether it refers to:
- the indication;
- the uncorrected result;
- the corrected result;
and whether several values are averaged.

NOTE 2 A complete statement of the result of a measurement includes information about the uncertainty of measurement.

Uncertainty

B.11 Uncertainty (of measurement)

Parameter associated with the result of a measurement, that characterises the dispersion of the values that could reasonably be attributed to the measurand [H.4].

NOTE 1 The parameter may be, for example, a standard deviation (or a given multiple of it), or the width of a confidence interval.

NOTE 2 Uncertainty of measurement comprises, in general, many components. Some of these components may be evaluated from the statistical distribution of the results of a series of measurements and can be characterised by experimental standard deviations. The other components, which can also be characterised by standard deviations, are evaluated from assumed probability distributions based on experience or other information.

NOTE 3 It is understood that the result of the measurement is the best estimate of the value of the measurand and that all components of uncertainty, including those arising from systematic effects, such as components associated with corrections and reference standards, contribute to the dispersion.

B.12 Traceability

The property of the result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties [H.4].

B.13 Standard uncertainty

u(xi) Uncertainty of the result xi of a measurement expressed as a standard deviation [H.2].

B.14 Combined standard uncertainty

uc(y) Standard uncertainty of the result y of a measurement when the result is obtained from the values of a number of other quantities, equal to the positive square root of a sum of terms, the terms being the variances or covariances of these other quantities weighted according to how the measurement result varies with these quantities [H.2].

B.15 Expanded uncertainty

U Quantity defining an interval about the result of a measurement that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand [H.2].

NOTE 1 The fraction may be regarded as the coverage probability or level of confidence of the interval.

NOTE 2 To associate a specific level of confidence with the interval defined by the expanded uncertainty requires explicit or implicit assumptions regarding the probability distribution characterised by the measurement result and its combined standard uncertainty. The level of confidence that may be attributed to this interval can be known only to the extent to which such assumptions can be justified.

NOTE 3 An expanded uncertainty U is calculated from a combined standard uncertainty uc and a coverage factor k using

U = k × uc

B.16 Coverage factor

k Numerical factor used as a multiplier of the combined standard uncertainty in order to obtain an expanded uncertainty [H.2].

NOTE A coverage factor is typically in the range 2 to 3.

B.17 Type A evaluation (of uncertainty)

Method of evaluation of uncertainty by the statistical analysis of series of observations [H.2].

B.18 Type B evaluation (of uncertainty)

Method of evaluation of uncertainty by means other than the statistical analysis of series of observations [H.2].

Error

B.19 Error (of measurement)

The result of a measurement minus a true value of the measurand [H.4].

NOTE 1 Since a true value cannot be determined, in practice a conventional true value is used.

B.20 Random error

Result of a measurement minus the mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions [H.4].

NOTE 1 Random error is equal to error minus systematic error.

NOTE 2 Because only a finite number of measurements can be made, it is possible to determine only an estimate of random error.

B.21 Systematic error

Mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions minus a true value of the measurand [H.4].

NOTE 1: Systematic error is equal to error minus random error.

NOTE 2: Like true value, systematic error and its causes cannot be known.

Statistical terms

B.22 Arithmetic mean

x̄ Arithmetic mean value of a sample of n results.

x̄ = (Σ(i=1,n) xi) / n

B.23 Sample Standard Deviation

s An estimate of the population standard deviation σ from a sample of n results.

s = √[ Σ(i=1,n) (xi − x̄)² / (n − 1) ]


B.24 Standard deviation of the mean

sx̄ The standard deviation of the mean x̄ of n values taken from a population is given by

sx̄ = s / √n

The terms "standard error" and "standard error of the mean" have also been used to describe the same quantity.

B.25 Relative Standard Deviation (RSD)

RSD An estimate of the standard deviation of a population from a sample of n results divided by the mean of that sample. Often known as coefficient of variation (CV). Also frequently stated as a percentage.

RSD = s / x̄
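As an informal illustration of definitions B.22 to B.25 (not part of the original Guide), the short Python sketch below computes the arithmetic mean, sample standard deviation, standard deviation of the mean and relative standard deviation for a small set of replicate results; the data values are invented purely for illustration.

```python
import math

# Hypothetical replicate results (invented values, for illustration only)
results = [10.13, 10.02, 10.08, 10.11, 10.14, 10.09]

n = len(results)
mean = sum(results) / n                                            # B.22 arithmetic mean
s = math.sqrt(sum((x - mean) ** 2 for x in results) / (n - 1))     # B.23 sample standard deviation
s_mean = s / math.sqrt(n)                                          # B.24 standard deviation of the mean
rsd = s / mean                                                     # B.25 relative standard deviation

print(f"mean = {mean:.4f}")
print(f"s = {s:.4f}")
print(f"s(mean) = {s_mean:.4f}")
print(f"RSD = {rsd:.4%}")
```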


Appendix C. Uncertainties in Analytical Processes

C.1 In order to identify the possible sources of uncertainty in an analytical procedure it is helpful to break down the analysis into a set of generic steps:

1. Sampling
2. Sample preparation
3. Presentation of Certified Reference Materials to the measuring system
4. Calibration of Instrument
5. Analysis (data acquisition)
6. Data processing
7. Presentation of results
8. Interpretation of results

C.2 These steps can be further broken down by contributions to the uncertainty for each. The following list, though not necessarily comprehensive, provides guidance on factors which should be considered.

1. Sampling
   - Homogeneity.
   - Effects of specific sampling strategy (e.g. random, stratified random, proportional etc.)
   - Effects of movement of bulk medium (particularly density selection)
   - Physical state of bulk (solid, liquid, gas)
   - Temperature and pressure effects.
   - Does sampling process affect composition? E.g. differential adsorption in sampling system.

2. Sample preparation
   - Homogenisation and/or sub-sampling effects.
   - Drying.
   - Milling.
   - Dissolution.
   - Extraction.
   - Contamination.
   - Derivatisation (chemical effects)
   - Dilution errors.
   - (Pre-)Concentration.
   - Control of speciation effects.

3. Presentation of Certified Reference Materials to the measuring system
   - Uncertainty for CRM.
   - CRM match to sample

4. Calibration of instrument
   - Instrument calibration errors using a Certified Reference Material.
   - Reference material and its uncertainty.
   - Sample match to calibrant
   - Instrument precision

5. Analysis
   - Carry-over in auto analysers.
   - Operator effects, e.g. colour blindness, parallax, other systematic errors.
   - Interferences from the matrix, reagents or other analytes.
   - Reagent purity.
   - Instrument parameter settings, e.g. integration parameters
   - Run-to-run precision

6. Data Processing
   - Averaging.
   - Control of rounding and truncating.
   - Statistics.
   - Processing algorithms (model fitting, e.g. linear least squares).

7. Presentation of Results
   - Final result.
   - Estimate of uncertainty.
   - Confidence level.

8. Interpretation of Results
   - Against limits/bounds.
   - Regulatory compliance.
   - Fitness for purpose.


Appendix D. Analysing Uncertainty Sources

D.1 Introduction

It is commonly necessary to develop and record a list of sources of uncertainty relevant to an analytical method. It is often useful to structure this process, both to ensure comprehensive coverage and to avoid over-counting. The following procedure (based on a previously published method [H.14]) provides one possible means of developing a suitable, structured analysis of uncertainty contributions.

D.2 Principles of approach

D.2.1 The strategy has two stages:

• Identifying the effects on a result. In practice, the necessary structured analysis is effected using a cause and effect diagram (sometimes known as an Ishikawa or 'fishbone' diagram) [H.15].

• Simplifying and resolving duplication. The initial list is refined to simplify presentation and ensure that effects are not unnecessarily duplicated.

D.3 Cause and effect analysis

D.3.1 The principles of constructing a cause and effect diagram are described fully elsewhere. The procedure employed is as follows:

1. Write the complete equation for the result. The parameters in the equation form the main branches of the diagram. It is almost always necessary to add a main branch representing a nominal correction for overall bias, usually as recovery, and this is accordingly recommended at this stage if appropriate.

2. Consider each step of the method and add any further factors to the diagram, working outwards from the main effects. Examples include environmental and matrix effects.

3. For each branch, add contributory factors until effects become sufficiently remote, that is, until effects on the result are negligible.

4. Resolve duplications and re-arrange to clarify contributions and group related causes. It is convenient to group precision terms at this stage on a separate precision branch.

D.3.2 The final stage of the cause and effect analysis requires further elucidation. Duplications arise naturally in detailing contributions separately for every input parameter. For example, a run-to-run variability element is always present, at least nominally, for any influence factor; these effects contribute to any overall variance observed for the method as a whole and should not be added in separately if already so accounted for. Similarly, it is common to find the same instrument used to weigh materials, leading to over-counting of its calibration uncertainties. These considerations lead to the following additional rules for refinement of the diagram (though they apply equally well to any structured list of effects):

• Cancelling effects: remove both. For example, in a weight by difference, two weights are determined, both subject to the balance 'zero bias'. The zero bias will cancel out of the weight by difference, and can be removed from the branches corresponding to the separate weighings.

• Similar effect, same time: combine into a single input. For example, run-to-run variation on many inputs can be combined into an overall run-to-run precision 'branch'. Some caution is required; specifically, variability in operations carried out individually for every determination can be combined, whereas variability in operations carried out on complete batches (such as instrument calibration) will only be observable in between-batch measures of precision.

• Different instances: re-label. It is common to find similarly named effects which actually refer to different instances of similar measurements. These must be clearly distinguished before proceeding.

D.3.3 This form of analysis does not lead to uniquely structured lists. In the present example, temperature may be seen as either a direct effect on the density to be measured, or as an effect on the measured mass of material contained in a density bottle; either could form the initial structure. In practice this does not affect the utility of the method. Provided that all significant effects appear once, somewhere in the list, the overall methodology remains effective.

D.3.4 Once the cause-and-effect analysis is complete, it may be appropriate to return to the original equation for the result and add any new terms (such as temperature) to the equation.

D.4 Example

D.4.1 The procedure is illustrated by reference to a simplified direct density measurement. Consider the case of direct determination of the density d(EtOH) of ethanol by weighing a known volume V in a suitable volumetric vessel of tare weight mtare and gross weight including ethanol mgross. The density is calculated from

d(EtOH) = (mgross − mtare) / V

For clarity, only three effects will be considered: equipment calibration, temperature, and the precision of each determination. Figures D1-D3 illustrate the process graphically.

D.4.2 A cause and effect diagram consists of a hierarchical structure culminating in a single outcome. For the present purpose, this outcome is a particular analytical result ('d(EtOH)' in Figure D1). The 'branches' leading to the outcome are the contributory effects, which include both the results of particular intermediate measurements and other factors, such as environmental or matrix effects. Each branch may in turn have further contributory effects. These 'effects' comprise all factors affecting the result, whether variable or constant; uncertainties in any of these effects will clearly contribute to uncertainty in the result.

D.4.3 Figure D1 shows a possible diagram obtained directly from application of steps 1-3. The main branches are the parameters in the equation, and effects on each are represented by subsidiary branches. Note that there are two 'temperature' effects, three 'precision' effects and three 'calibration' effects.

D.4.4 Figure D2 shows precision and temperature effects each grouped together following the second rule (same effect/time); temperature may be treated as a single effect on density, while the individual variations in each determination contribute to variation observed in replication of the entire method.

D.4.5 The calibration bias on the two weighings cancels, and can be removed (Figure D3) following the first refinement rule (cancellation).

D.4.6 Finally, the remaining 'calibration' branches would need to be distinguished as two (different) contributions owing to possible non-linearity of balance response, together with the calibration uncertainty associated with the volumetric determination.

Figure D1: Initial list — cause and effect diagram for d(EtOH) with main branches m(gross), m(tare) and Volume; the mass branches carry Calibration (Linearity, Bias) and Precision sub-branches, the Volume branch carries Calibration, Precision and Temperature sub-branches, and a second Temperature effect appears on a mass branch. (Lin. = Linearity)

Figure D2: Combination of similar effects — the individual Precision sub-branches are gathered onto a single Precision branch, and the Temperature effects are combined into a single Temperature branch acting on the result.

Figure D3: Cancellation — the balance Bias contributions to m(gross) and m(tare) cancel (same balance: bias cancels) and are removed, leaving Calibration (Linearity) on the two mass branches, Calibration on the Volume branch, and the combined Precision and Temperature branches.

Page 107: Quam2000 (measurement - UM - for - micro Managers) - to help your question: How can you manage a technologist team if you not really know what they are doing and what they should do?

Quantifying Uncertainty Appendix E – Statistical Procedures

QUAM:2000.1 Page 102

Appendix E. Useful Statistical Procedures

E.1 Distribution functions

The following table shows how to calculate a standard uncertainty from the parameters of the two most important distribution functions, and gives an indication of the circumstances in which each should be used.

EXAMPLE

A chemist estimates a contributory factor as not less than 7 or more than 10, but feels that the value could be anywhere in between, with no idea of whether any part of the range is more likely than another. This is a description of a rectangular distribution function with a range 2a = 3 (semi range of a = 1.5). Using the function below for a rectangular distribution, an estimate of the standard uncertainty can be calculated. Using the above range, a = 1.5, results in a standard uncertainty of (1.5/√3) = 0.87.

Rectangular distribution

Form: a distribution of constant height 1/2a over a total width 2a (i.e. ±a) about x.

Use when:
• A certificate or other specification gives limits without specifying a level of confidence (e.g. 25 ml ± 0.05 ml)
• An estimate is made in the form of a maximum range (±a) with no knowledge of the shape of the distribution.

Uncertainty: u(x) = a/√3

Triangular distribution

Form: a symmetric triangular distribution of peak height 1/a, with total width 2a (i.e. ±a) about x.

Use when:
• The available information concerning x is less limited than for a rectangular distribution. Values close to x are more likely than near the bounds.
• An estimate is made in the form of a maximum range (±a) described by a symmetric distribution.

Uncertainty: u(x) = a/√6


Normal distribution

Form: the familiar bell-shaped (Gaussian) curve centred on x.

Use when:
• An estimate is made from repeated observations of a randomly varying process.
  Uncertainty: u(x) = s
• An uncertainty is given in the form of a standard deviation s, a relative standard deviation s/x̄, or a coefficient of variance CV% without specifying the distribution.
  Uncertainty: u(x) = s; u(x) = x·(s/x̄); u(x) = (CV%/100)·x
• An uncertainty is given in the form of a 95% (or other) confidence interval x ± c without specifying the distribution.
  Uncertainty: u(x) = c/2 (for c at 95%); u(x) = c/3 (for c at 99.7%)
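The conversions summarised in section E.1 can be collected into a small helper; the following Python sketch (illustrative only, not part of the Guide) converts a quoted half-range or interval half-width into a standard uncertainty according to the assumed distribution, and reproduces the worked rectangular example above.

```python
import math

def standard_uncertainty(a: float, distribution: str = "rectangular") -> float:
    """Convert a quoted half-range or interval half-width 'a' into a
    standard uncertainty, using the divisors given in section E.1."""
    if distribution == "rectangular":      # limits +/-a, no stated confidence level
        return a / math.sqrt(3)
    if distribution == "triangular":       # +/-a, values near the centre more likely
        return a / math.sqrt(6)
    if distribution == "normal95":         # a is the half-width of a 95 % interval
        return a / 2
    if distribution == "normal99.7":       # a is the half-width of a 99.7 % interval
        return a / 3
    raise ValueError("unknown distribution")

# Worked example from E.1: a factor known only to lie between 7 and 10
a = (10 - 7) / 2                                            # semi-range a = 1.5
print(round(standard_uncertainty(a, "rectangular"), 2))     # -> 0.87
```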


E.2 Spreadsheet method for uncertainty calculation

E.2.1 Spreadsheet software can be used to simplify the calculations shown in Section 8. The procedure takes advantage of an approximate numerical method of differentiation, and requires knowledge only of the calculation used to derive the final result (including any necessary correction factors or influences) and of the numerical values of the parameters and their uncertainties. The description here follows that of Kragten [H.12].

E.2.2 In the expression for u(y(x1, x2...xn))

$$u(y(x_1,\ldots,x_n)) = \sqrt{\sum_{i=1,n}\left(\frac{\partial y}{\partial x_i}\, u(x_i)\right)^2 \; + \; \sum_{\substack{i,k=1,n \\ i \neq k}}\frac{\partial y}{\partial x_i}\cdot\frac{\partial y}{\partial x_k}\cdot u(x_i,x_k)}$$

provided that either y(x1, x2...xn) is linear in xi or u(xi) is small compared to xi, the partial differentials (∂y/∂xi) can be approximated by:

$$\frac{\partial y}{\partial x_i} \approx \frac{y(x_i + u(x_i)) - y(x_i)}{u(x_i)}$$

Multiplying by u(xi) to obtain the uncertainty u(y,xi) in y due to the uncertainty in xi gives

u(y,xi) ≈ y(x1, x2, ..(xi + u(xi)).. xn) − y(x1, x2, ..xi.. xn)

Thus u(y,xi) is just the difference between the values of y calculated for [xi + u(xi)] and xi respectively.

E.2.3 The assumption of linearity or small values of u(xi)/xi will not be closely met in all cases. Nonetheless, the method does provide acceptable accuracy for practical purposes when considered against the necessary approximations made in estimating the values of u(xi). Reference H.12 discusses the point more fully and suggests methods of checking the validity of the assumption.

E.2.4 The basic spreadsheet is set up as follows, assuming that the result y is a function of the four parameters p, q, r, and s:

i) Enter the values of p, q, etc. and the formula for calculating y in column A of the spreadsheet. Copy column A across the following columns once for every variable in y (see Figure E2.1). It is convenient to place the values of the uncertainties u(p), u(q) and so on in row 1 as shown.

ii) Add u(p) to p in cell B3, u(q) to q in cell C4 etc., as in Figure E2.2. On recalculating the spreadsheet, cell B8 then becomes f(p+u(p), q, r..) (denoted by f(p', q, r, ..) in Figures E2.2 and E2.3), cell C8 becomes f(p, q+u(q), r,..) etc.

iii) In row 9 enter row 8 minus A8 (for example, cell B9 becomes B8-A8). This gives the values of u(y,p) as u(y,p) = f(p+u(p), q, r ..) − f(p, q, r ..) etc.

iv) To obtain the standard uncertainty on y, these individual contributions are squared, added together and then the square root taken, by entering u(y,p)² in row 10 (Figure E2.3) and putting the square root of their sum in A10. That is, cell A10 is set to the formula

SQRT(SUM(B10+C10+D10+E10))

which gives the standard uncertainty on y.

E.2.5 The contents of the cells B10, C10 etc. show the squared contributions u(y,xi)² = (ci·u(xi))² of the individual uncertainty components to the uncertainty on y and hence it is easy to see which components are significant.

E.2.6 It is straightforward to allow updated calculations as individual parameter values change or uncertainties are refined. In step i) above, rather than copying column A directly to columns B-E, copy the values p to s by reference, that is, cells B3 to E3 all reference A3, B4 to E4 reference A4 etc. The horizontal arrows in Figure E2.1 show the referencing for row 3. Note that cells B8 to E8 should still reference the values in columns B to E respectively, as shown for column B by the vertical arrows in Figure E2.1. In step ii) above, add the references to row 1 by reference (as shown by the arrows in Figure E2.1). For example, cell B3 becomes A3+B1, cell C4 becomes A4+C1 etc. Changes to either parameters or uncertainties will then be reflected immediately in the overall result at A8 and the combined standard uncertainty at A10.

E.2.7 If any of the variables are correlated, the necessary additional term is added to the SUM in A10. For example, if p and q are correlated, with a correlation coefficient r(p,q), then the extra term 2×r(p,q)×u(y,p)×u(y,q) is added to the calculated sum before taking the square root. Correlation can therefore easily be included by adding suitable extra terms to the spreadsheet.


Figure E2.1

       A            B            C            D            E
  1                 u(p)         u(q)         u(r)         u(s)
  2
  3    p            p            p            p            p
  4    q            q            q            q            q
  5    r            r            r            r            r
  6    s            s            s            s            s
  7
  8    y=f(p,q,..)  y=f(p,q,..)  y=f(p,q,..)  y=f(p,q,..)  y=f(p,q,..)
  9
  10
  11

Figure E2.2

       A            B            C            D            E
  1                 u(p)         u(q)         u(r)         u(s)
  2
  3    p            p+u(p)       p            p            p
  4    q            q            q+u(q)       q            q
  5    r            r            r            r+u(r)       r
  6    s            s            s            s            s+u(s)
  7
  8    y=f(p,q,..)  y=f(p',...)  y=f(..q',..) y=f(..r',..) y=f(..s',..)
  9                 u(y,p)       u(y,q)       u(y,r)       u(y,s)
  10
  11

Figure E2.3

       A            B            C            D            E
  1                 u(p)         u(q)         u(r)         u(s)
  2
  3    p            p+u(p)       p            p            p
  4    q            q            q+u(q)       q            q
  5    r            r            r            r+u(r)       r
  6    s            s            s            s            s+u(s)
  7
  8    y=f(p,q,..)  y=f(p',...)  y=f(..q',..) y=f(..r',..) y=f(..s',..)
  9                 u(y,p)       u(y,q)       u(y,r)       u(y,s)
  10   u(y)         u(y,p)²      u(y,q)²      u(y,r)²      u(y,s)²
  11
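The spreadsheet procedure of Figures E2.1 to E2.3 can equally be carried out in a few lines of code. The Python sketch below is a minimal illustration of the same numerical (Kragten) approximation for uncorrelated inputs; the function f and the parameter values are arbitrary placeholders, not taken from the Guide.

```python
import math

def kragten(f, values, uncertainties):
    """Numerical combined standard uncertainty of y = f(*values).

    Mirrors the spreadsheet of Figures E2.1-E2.3: each parameter in turn is
    shifted by its standard uncertainty and the change in y is recorded as
    u(y, x_i); the combined uncertainty is the root sum of squares.
    Assumes uncorrelated inputs (see E.2.7 for the correlated case).
    """
    y0 = f(*values)
    contributions = []
    for i, u_i in enumerate(uncertainties):
        shifted = list(values)
        shifted[i] += u_i
        contributions.append(f(*shifted) - y0)          # u(y, x_i), cf. row 9
    u_y = math.sqrt(sum(c ** 2 for c in contributions)) # cf. cell A10
    return y0, u_y, contributions

# Placeholder example: y = p * q / (r + s) with invented values and uncertainties
f = lambda p, q, r, s: p * q / (r + s)
y, u_y, contribs = kragten(f, [10.0, 2.5, 3.0, 1.0], [0.1, 0.05, 0.02, 0.02])
print(y, u_y)
print([round(c ** 2, 6) for c in contribs])             # squared contributions, cf. row 10
```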


E.3 Uncertainties from linear least squares calibration

E.3.1 An analytical method or instrument is often calibrated by observing the responses, y, to different levels of the analyte, x. In most cases this relationship is taken to be linear, viz:

y = b0 + b1·x    Eq. E3.1

This calibration line is then used to obtain the concentration xpred of the analyte from a sample which produces an observed response yobs from

xpred = (yobs − b0)/b1    Eq. E3.2

It is usual to determine the constants b1 and b0 by weighted or un-weighted least squares regression on a set of n pairs of values (xi, yi).

E.3.2 There are four main sources of uncertainty to consider in arriving at an uncertainty on the estimated concentration xpred:

• Random variations in measurement of y, affecting both the reference responses yi and the measured response yobs.
• Random effects resulting in errors in the assigned reference values xi.
• Values of xi and yi may be subject to a constant unknown offset, for example arising when the values of x are obtained from serial dilution of a stock solution.
• The assumption of linearity may not be valid.

Of these, the most significant for normal practice are random variations in y, and methods of estimating uncertainty for this source are detailed here. The remaining sources are also considered briefly to give an indication of methods available.

E.3.3 The uncertainty u(xpred, y) in a predicted value xpred due to variability in y can be estimated in several ways:

From calculated variance and covariance.

If the values of b1 and b0, their variances var(b1), var(b0) and their covariance, covar(b1,b0), are determined by the method of least squares, the variance on x, var(x), obtained using the formula in Chapter 8 and differentiating the normal equations, is given by

$$\mathrm{var}(x_{pred}) = \frac{\mathrm{var}(y_{obs}) + \mathrm{var}(b_0) + 2\,x_{pred}\,\mathrm{covar}(b_1,b_0) + x_{pred}^2\,\mathrm{var}(b_1)}{b_1^2} \qquad \text{Eq. E3.3}$$

and the corresponding uncertainty u(xpred, y) is √var(xpred).

From the calibration data.

The above formula for var(xpred) can be written in terms of the set of n data points, (xi, yi), used to determine the calibration function:

$$\mathrm{var}(x_{pred}) = \frac{\mathrm{var}(y_{obs})}{b_1^2} + \frac{S^2}{b_1^2}\left[\frac{1}{\sum_i w_i} + \frac{(x_{pred}-\bar{x})^2\,\sum_i w_i}{\sum_i w_i \sum_i (w_i x_i^2) - \left(\sum_i w_i x_i\right)^2}\right] \qquad \text{Eq. E3.4}$$

where

$$S^2 = \frac{\sum_i w_i\,(y_i - y_{fi})^2}{n-2},$$

(yi − yfi) is the residual for the ith point, n is the number of data points in the calibration, b1 the calculated best fit gradient, wi the weight assigned to yi, and (xpred − x̄) the difference between xpred and the mean x̄ of the n values x1, x2....

For unweighted data and where var(yobs) is based on p measurements, equation E3.4 becomes

$$\mathrm{var}(x_{pred}) = \frac{S^2}{b_1^2}\left[\frac{1}{p} + \frac{1}{n} + \frac{(x_{pred}-\bar{x})^2}{\sum_i x_i^2 - \left(\sum_i x_i\right)^2/n}\right] \qquad \text{Eq. E3.5}$$

This is the formula which is used in example 5 with

$$S_{xx} = \sum_i (x_i - \bar{x})^2 = \sum_i x_i^2 - \left(\sum_i x_i\right)^2/n .$$

From information given by software used to derive calibration curves.

Some software gives the value of S, variously described for example as RMS error or residual standard error. This can then be used in equation E3.4 or E3.5. However, some software may also give the standard deviation s(yc) on a value of y calculated from the fitted line for some new value of x, and this can be used to calculate var(xpred) since, for p = 1,

$$s(y_c)^2 = S^2\left[1 + \frac{1}{n} + \frac{(x_{pred}-\bar{x})^2}{\sum_i x_i^2 - \left(\sum_i x_i\right)^2/n}\right]$$


giving, on comparison with equation E3.5,

var(xpred) = [s(yc)/b1]²    Eq. E3.6

E.3.4 The reference values xi may each have uncertainties which propagate through to the final result. In practice, uncertainties in these values are usually small compared to uncertainties in the system responses yi, and may be ignored. An approximate estimate of the uncertainty u(xpred, xi) in a predicted value xpred due to uncertainty in a particular reference value xi is

u(xpred, xi) ≈ u(xi)/n    Eq. E3.7

where n is the number of xi values used in the calibration. This expression can be used to check the significance of u(xpred, xi).

E.3.5 The uncertainty arising from the assumption of a linear relationship between y and x is not normally large enough to require an additional estimate. Providing the residuals show that there is no significant systematic deviation from this assumed relationship, the uncertainty arising from this assumption (in addition to that covered by the resulting increase in y variance) can be taken to be negligible. If the residuals show a systematic trend then it may be necessary to include higher terms in the calibration function. Methods of calculating var(x) in these cases are given in standard texts. It is also possible to make a judgement based on the size of the systematic trend.

E.3.6 The values of x and y may be subject to a constant unknown offset (e.g. arising when the values of x are obtained from serial dilution of a stock solution which has an uncertainty on its certified value). If the standard uncertainties on y and x from these effects are u(y, const) and u(x, const), then the uncertainty on the interpolated value xpred is given by:

u(xpred)² = u(x, const)² + (u(y, const)/b1)² + var(x)    Eq. E3.8

E.3.7 The four uncertainty components described in E.3.2 can be calculated using equations Eq. E3.3 to Eq. E3.8. The overall uncertainty arising from calculation from a linear calibration can then be calculated by combining these four components in the normal way.
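For the common unweighted case, equations E3.2 and E3.5 can be applied directly. The Python sketch below is an illustrative implementation (not a prescribed procedure of the Guide) using invented calibration data; it fits the straight line, predicts xpred from an observed response and evaluates the standard uncertainty u(xpred, y) from the square root of Eq. E3.5.

```python
import math

def calibration_uncertainty(x, y, y_obs, p=1):
    """Predict x_pred from an unweighted straight-line calibration (Eq. E3.2)
    and estimate u(x_pred, y) as sqrt(var(x_pred)) from Eq. E3.5."""
    n = len(x)
    x_mean = sum(x) / n
    y_mean = sum(y) / n
    s_xx = sum((xi - x_mean) ** 2 for xi in x)      # = sum(x^2) - (sum x)^2 / n
    b1 = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) / s_xx
    b0 = y_mean - b1 * x_mean
    # residual standard deviation S (n - 2 degrees of freedom)
    S = math.sqrt(sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2))
    x_pred = (y_obs - b0) / b1                      # Eq. E3.2
    var_x = (S / b1) ** 2 * (1 / p + 1 / n + (x_pred - x_mean) ** 2 / s_xx)  # Eq. E3.5
    return x_pred, math.sqrt(var_x)

# Invented five-point calibration and a single observed response y_obs (p = 1)
x_ref = [0.1, 0.3, 0.5, 0.7, 0.9]
y_ref = [0.028, 0.084, 0.135, 0.180, 0.215]
print(calibration_uncertainty(x_ref, y_ref, y_obs=0.118, p=1))
```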


E.4: Documenting uncertainty dependent on analyte level

E.4.1 Introduction

E.4.1.1 It is often observed in chemical measurement that, over a large range of analyte levels, dominant contributions to the overall uncertainty vary approximately proportionately to the level of analyte, that is u(x) ∝ x. In such cases it is often sensible to quote uncertainties as relative standard deviations or, for example, coefficient of variation (%CV).

E.4.1.2 Where the uncertainty is unaffected by level, for example at low levels, or where a relatively narrow range of analyte level is involved, it is generally most sensible to quote an absolute value for the uncertainty.

E.4.1.3 In some cases, both constant and proportional effects are important. This section sets out a general approach to recording uncertainty information where variation of uncertainty with analyte level is an issue and reporting as a simple coefficient of variation is inadequate.

E.4.2 Basis of approach

E.4.2.1 To allow for both proportionality of uncertainty and the possibility of an essentially constant value with level, the following general expression is used:

u(x) = √( s0² + (x·s1)² )    [1]

where

u(x) is the combined standard uncertainty in the result x (that is, the uncertainty expressed as a standard deviation)

s0 represents a constant contribution to the overall uncertainty

s1 is a proportionality constant.

The expression is based on the normal method of combining of two contributions to overall uncertainty, assuming one contribution (s0) is constant and one (x·s1) proportional to the result. Figure E.4.1 shows the form of this expression.

NOTE: The approach above is practical only where it is possible to calculate a large number of values. Where experimental study is employed, it will not often be possible to establish the relevant parabolic relationship. In such circumstances, an adequate approximation can be obtained by simple linear regression through four or more combined uncertainties obtained at different analyte concentrations. This procedure is consistent with that employed in studies of reproducibility and repeatability according to ISO 5725:1994. The relevant expression is then u(x) ≈ s0′ + x·s1′.
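Expression [1] is straightforward to evaluate once s0 and s1 have been recorded; the short Python sketch below (with arbitrary values of s0 and s1 chosen only for illustration) shows how the two parameters give a standard uncertainty for any reported result, and how the constant and proportional terms dominate at low and high levels respectively.

```python
import math

def u_of_x(x, s0, s1):
    """Standard uncertainty from expression [1]: u(x) = sqrt(s0^2 + (x*s1)^2)."""
    return math.sqrt(s0 ** 2 + (x * s1) ** 2)

s0, s1 = 0.05, 0.02          # illustrative constant and proportional terms
for x in (0.0, 1.0, 5.0, 20.0):
    print(x, round(u_of_x(x, s0, s1), 4))
# At low x the uncertainty is close to s0; at high x it approaches x*s1
# (compare regions A and C of Figure E.4.1).
```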

E.4.2.2 The figure can be divided into approximate regions (A to C on the figure):

A: The uncertainty is dominated by the term s0, and is approximately constant and close to s0.

B: Both terms contribute significantly; the resulting uncertainty is significantly higher than either s0 or x·s1, and some curvature is visible.

C: The term x·s1 dominates; the uncertainty rises approximately linearly with increasing x and is close to x·s1.

E.4.2.3 Note that in many experimental cases the complete form of the curve will not be apparent. Very often, the whole reporting range of analyte level permitted by the scope of the method falls within a single chart region; the result is a number of special cases dealt with in more detail below.

E.4.3 Documenting level-dependent uncertainty data

E.4.3.1 In general, uncertainties can be documented in the form of a value for each of s0 and s1. The values can be used to provide an uncertainty estimate across the scope of the method. This is particularly valuable when calculations for well characterised methods are implemented on computer systems, where the general form of the equation can be implemented independently of the values of the parameters (one of which may be zero - see below). It is accordingly recommended that, except in the special cases outlined below or where the dependence is strong but not linear*, uncertainties are documented in the form of values for a constant term represented by s0 and a variable term represented by s1.

* An important example of non-linear dependence is the effect of instrument noise on absorbance measurement at high absorbances near the upper limit of the instrument capability. This is particularly pronounced where absorbance is calculated from transmittance (as in infrared spectroscopy). Under these circumstances, baseline noise causes very large uncertainties in high absorbance figures, and the uncertainty rises much faster than a simple linear estimate would predict. The usual approach is to reduce the absorbance, typically by dilution, to bring the absorbance figures well within the working range; the linear model used here will then normally be adequate. Other examples include the 'sigmoidal' response of some immunoassay methods.

E.4.4. Special cases

E.4.4.1. Uncertainty not dependent on level of analyte (s0 dominant)

The uncertainty will generally be effectively independent of observed analyte concentration when:

• The result is close to zero (for example, within the stated detection limit for the method). Region A in Figure E.4.1.
• The possible range of results (stated in the method scope or in a statement of scope for the uncertainty estimate) is small compared to the observed level.

Under these circumstances, the value of s1 can be recorded as zero. s0 is normally the calculated standard uncertainty.

E.4.4.2. Uncertainty entirely dependent on analyte (s1 dominant)

Where the result is far from zero (for example, above a 'limit of determination') and there is clear evidence that the uncertainty changes proportionally with the level of analyte permitted within the scope of the method, the term x·s1 dominates (see Region C in Figure E.4.1). Under these circumstances, and where the method scope does not include levels of analyte near zero, s0 may reasonably be recorded as zero and s1 is simply the uncertainty expressed as a relative standard deviation.

E.4.4.3. Intermediate dependence

In intermediate cases, and in particular where the situation corresponds to region B in Figure E.4.1, two approaches can be taken:

a) Applying variable dependence

The more general approach is to determine, record and use both s0 and s1. Uncertainty estimates, when required, can then be produced on the basis of the reported result. This remains the recommended approach where practical.

NOTE: See the note to section E.4.2.

b) Applying a fixed approximation

An alternative which may be used in general testing and where

• the dependence is not strong (that is, evidence for proportionality is weak), or
• the range of results expected is moderate,

leading in either case to uncertainties which do not vary by more than about 15% from an average uncertainty estimate, it will often be reasonable to calculate and quote a fixed value of uncertainty for general use, based on the mean value of results expected. That is,

either a mean or typical value for x is used to calculate a fixed uncertainty estimate, and this is used in place of individually calculated estimates,

or a single standard deviation has been obtained, based on studies of materials covering the full range of analyte levels permitted (within the scope of the uncertainty estimate), and there is little evidence to justify an assumption of proportionality. This should generally be treated as a case of zero dependence, and the relevant standard deviation recorded as s0.

E.4.5. Determining s0 and s1

E.4.5.1. In the special cases in which one term dominates, it will normally be sufficient to use the uncertainty as standard deviation or relative standard deviation respectively as values of s0 and s1. Where the dependence is less obvious, however, it may be necessary to determine s0 and s1 indirectly from a series of estimates of uncertainty at different analyte levels.

E.4.5.2. Given a calculation of combined uncertainty from the various components, some of which depend on analyte level while others do not, it will normally be possible to investigate the dependence of overall uncertainty on analyte level by simulation. The procedure is as follows (a brief illustrative sketch follows the list):


1. Calculate (or obtain experimentally) uncertainties u(xi) for at least ten levels xi of analyte, covering the full range permitted.

2. Plot u(xi)² against xi².

3. By linear regression, obtain estimates of m and c for the line u(x)² = m·x² + c.

4. Calculate s0 and s1 from s0 = √c, s1 = √m.

5. Record s0 and s1.
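A minimal Python sketch of steps 1 to 5 above is given below, using simulated (invented) uncertainty estimates at ten analyte levels; it fits u(x)² = m·x² + c by unweighted linear regression and recovers s0 = √c and s1 = √m. It is offered only as an illustration of the procedure, not as a prescribed implementation.

```python
import math

def fit_s0_s1(levels, uncertainties):
    """Steps 2-4 of E.4.5.2: regress u(x)^2 on x^2 and return (s0, s1)."""
    X = [x ** 2 for x in levels]            # step 2: work with u^2 vs x^2
    Y = [u ** 2 for u in uncertainties]
    n = len(X)
    x_mean = sum(X) / n
    y_mean = sum(Y) / n
    m = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(X, Y)) / \
        sum((xi - x_mean) ** 2 for xi in X)  # step 3: slope m
    c = y_mean - m * x_mean                  # step 3: intercept c
    return math.sqrt(c), math.sqrt(m)        # step 4: s0 = sqrt(c), s1 = sqrt(m)

# Step 1: invented uncertainty estimates u(x_i) at ten analyte levels x_i
x_levels = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
u_values = [0.051, 0.054, 0.058, 0.064, 0.071, 0.078, 0.086, 0.095, 0.104, 0.113]
s0, s1 = fit_s0_s1(x_levels, u_values)
print(round(s0, 3), round(s1, 3))            # step 5: record s0 and s1
```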

E.4.6. Reporting

E.4.6.1. The approach outlined here permits estimation of a standard uncertainty for any single result. In principle, where uncertainty information is to be reported, it will be in the form of

[result] ± [uncertainty]

where the uncertainty as standard deviation is calculated as above, and if necessary expanded (usually by a factor of two) to give increased confidence. Where a number of results are reported together, however, it may be possible, and is perfectly acceptable, to give an estimate of uncertainty applicable to all results reported.

E.4.6.2. Table E.4.1 gives some examples. The uncertainty figures for a list of different analytes may usefully be tabulated following similar principles.

NOTE: Where a 'detection limit' or 'reporting limit' is used to give results in the form "<x" or "nd", it will normally be necessary to quote the limits used in addition to the uncertainties applicable to results above reporting limits.

Table E.4.1: Summarising uncertainty for several samples

Situation: Uncertainty essentially constant across all results
  Dominant term: s0, or fixed approximation (sections E.4.4.1. or E.4.4.3.a)
  Reporting example(s): standard deviation; expanded uncertainty; 95% confidence interval

Situation: Uncertainty generally proportional to level
  Dominant term: x·s1 (see section E.4.4.2.)
  Reporting example(s): relative standard deviation; coefficient of variance (%CV)

Situation: Mixture of proportionality and lower limiting value for uncertainty
  Dominant term: intermediate case (section E.4.4.3.)
  Reporting example(s): quote %CV or rsd together with lower limit as standard deviation


Figure E.4.1: Variation of uncertainty with observed result

[The figure plots uncertainty u(x) against result x, showing the curve u(x) together with its two asymptotes s0 and x·s1, divided into three regions: A, where the uncertainty is approximately equal to s0; B, where the uncertainty is significantly greater than either s0 or x·s1; and C, where the uncertainty is approximately equal to x·s1.]


Appendix F. Measurement Uncertainty at the Limit of Detection/Limit of Determination

F.1. Introduction

F.1.1. At low concentrations, an increasing variety of effects becomes important, including, for example,

• the presence of noise or unstable baseline,
• the contribution of interferences to the (gross) signal,
• the influence of any analytical blank used, and
• losses during extraction, isolation or clean-up.

Because of such effects, as analyte concentrations drop, the relative uncertainty associated with the result tends to increase, first to a substantial fraction of the result and finally to the point where the (symmetric) uncertainty interval includes zero. This region is typically associated with the practical limit of detection for a given method.

NOTE: The terminology and conventions associated with measuring and reporting low levels of analyte have been widely discussed elsewhere (see Bibliography [H.16, H.17, H.18] for examples and definitions). Here, the term 'limit of detection' only implies a level at which detection becomes problematic, and is not associated with any specific definition.

F.1.2. It is widely accepted that the most important use of the 'limit of detection' is to show where method performance becomes insufficient for acceptable quantitation, so that improvements can be made. Ideally, therefore, quantitative measurements should not be made in this region. Nonetheless, so many materials are important at very low levels that it is inevitable that measurements must be made, and results reported, in this region.

F.1.3. The ISO Guide on Measurement Uncertainty [H.2] does not give explicit instructions for the estimation of uncertainty when the results are small and the uncertainties large compared to the results. Indeed, the basic form of the 'law of propagation of uncertainties', described in chapter 8 of this guide, may cease to apply accurately in this region; one assumption on which the calculation is based is that the uncertainty is small relative to the value of the measurand. An additional, if philosophical, difficulty follows from the definition of uncertainty given by the ISO Guide: though negative observations are quite possible, and even common in this region, an implied dispersion including values below zero cannot be "... reasonably ascribed to the value of the measurand" when the measurand is a concentration, because concentrations themselves cannot be negative.

F.1.4. These difficulties do not preclude the application of the methods outlined in this guide, but some caution is required in interpretation and reporting the results of measurement uncertainty estimation in this region. The purpose of the present Appendix is to provide limited guidance to supplement that already available from other sources.

NOTE: Similar considerations may apply to other regions; for example, mole or mass fractions close to 100% may lead to similar difficulties.

F.2. Observations and estimates

F.2.1. A fundamental principle of measurement science is that results are estimates of true values. Analytical results, for example, are available initially in units of the observed signal, e.g. mV, absorbance units etc. For communication to a wider audience, particularly to the customers of a laboratory or to other authorities, the raw data need to be converted to a chemical quantity, such as concentration or amount of substance. This conversion typically requires a calibration procedure (which may include, for example, corrections for observed and well characterised losses). Whatever the conversion, however, the figure generated remains an observation, or signal. If the experiment is properly carried out, this observation remains the 'best estimate' of the value of the measurand.

F.2.2. Observations are not often constrained by the same fundamental limits that apply to real concentrations. For example, it is perfectly sensible to report an 'observed concentration', that is, an estimate, below zero. It is equally sensible to speak of a dispersion of possible observations which extends into the same region. For example, when performing an unbiased measurement on a sample with no analyte present, one should see about half of the observations falling below zero. In other words, reports like

observed concentration = 2.4 ± 8 mg l-1

observed concentration = -4.2 ± 8 mg l-1

are not only possible; they should be seen as valid statements.

F.2.3. The methods of uncertainty estimation described in this guide apply well to the estimation of uncertainties on observations. It follows that while reporting observations and their associated uncertainties to an informed audience, there is no barrier to, or contradiction in, reporting the best estimate and its associated uncertainty even where the result implies an impossible physical situation. Indeed, in some circumstances (for example, when reporting a value for an analytical blank which will subsequently be used to correct other results) it is absolutely essential to report the observation and its uncertainty (however large).

F.2.4. This remains true wherever the end use of the result is in doubt. Since only the observation and its associated uncertainty can be used directly (for example, in further calculations, in trend analysis or for re-interpretation), the uncensored observation should always be available.

F.2.5. The ideal is accordingly to report valid observations and their associated uncertainty regardless of the values.

F.3. Interpreted results and compliance statements

F.3.1. Despite the foregoing, it must be accepted that many reports of analysis and statements of compliance include some interpretation for the end user's benefit. Typically, such an interpretation would include any relevant inference about the levels of analyte which could reasonably be present in a material. Such an interpretation is an inference about the real world, and consequently would be expected (by the end user) to conform to real limits. So, too, would any associated estimate of uncertainty in 'real' values.

F.3.2. Under such circumstances, where the end use is well understood, and where the end user cannot realistically be informed of the nature of measurement observations, the general guidance provided elsewhere (for example in references H.16, H.17, H.18) on the reporting of low level results may reasonably apply.

F.3.3. One further caution is, however, pertinent. Much of the literature on capabilities of detection relies heavily on the statistics of repeated observations. It should be clear to readers of the current guide that observed variation is only rarely a good guide to the full uncertainty of results. Just as with results in any other region, careful consideration should accordingly be given to all the uncertainties affecting a given result before reporting the values.


Appendix G. Common Sources and Values of Uncertainty

The following tables summarise some typical examples of uncertainty components. The tables give:

• The particular measurand or experimental procedure (determining mass, volume etc.)
• The main components and sources of uncertainty in each case
• A suggested method of determining the uncertainty arising from each source
• An example of a typical case

The tables are intended only to summarise the examples and to indicate general methods of estimating uncertainties in analysis. They are not intended to be comprehensive, nor should the values given be used directly without independent justification. The values may, however, help in deciding whether a particular component is significant.


Determination: Mass — uncertainty components, causes, methods of determination and typical values.

• Balance calibration uncertainty
  Cause: Limited accuracy in calibration.
  Method of determination: Stated on calibration certificate, converted to standard deviation.
  Typical value: 4-figure balance: 0.5 mg.

• Linearity
  Method of determination: i) Experiment, with range of certified weights; ii) Manufacturer's specification.
  Typical value: ca. 0.5 × last significant digit.

• Readability
  Cause: Limited resolution on display or scale.
  Method of determination: From last significant digit.
  Typical value: 0.5 × last significant digit/√3.

• Daily drift
  Cause: Various, including temperature.
  Method of determination: Standard deviation of long term check weighings. Calculate as RSD if necessary.
  Typical value: ca. 0.5 × last significant digit.

• Run to run variation
  Cause: Various.
  Method of determination: Standard deviation of successive sample or check weighings.
  Typical value: ca. 0.5 × last significant digit.

• Density effects (conventional basis) (Note 1)
  Cause: Calibration weight/sample density mismatch causes a difference in the effect of atmospheric buoyancy.
  Method of determination: Calculated from known or assumed densities and typical atmospheric conditions.
  Typical values: Steel, Nickel: 1 ppm; Aluminium: 20 ppm; Organic solids: 50-100 ppm; Water: 65 ppm; Hydrocarbons: 90 ppm.

• Density effects (in vacuo basis) (Note 1)
  Cause: As above.
  Method of determination: Calculate atmospheric buoyancy effect and subtract buoyancy effect on calibration weight.
  Typical values: 100 g water: +0.1 g (effect); 10 g nickel: <1 mg (effect).

Note 1. For fundamental constants or SI unit definitions, mass determinations by weighing are usually corrected to the weight in vacuum. In most other practical situations, weight is quoted on a conventional basis as defined by OIML [H.19]. The convention is to quote weights at an air density of 1.2 kg m-3 and a sample density of 8000 kg m-3, which corresponds to weighing steel at sea level in normal atmospheric conditions. The buoyancy correction to conventional mass is zero when the sample density is 8000 kg m-3 or the air density is 1.2 kg m-3. Since the air density is usually very close to the latter value, correction to conventional weight can normally be neglected. The standard uncertainty values given for density-related effects on a conventional weight basis in the table above are sufficient for preliminary estimates for weighing on a conventional basis without buoyancy correction at sea level. Mass determined on the conventional basis may, however, differ from the 'true mass' (in vacuo) by 0.1% or more (see the effects in the bottom line of the table above).

Determination: Volume (liquid) — uncertainty components, causes, methods of determination and typical values.

• Calibration uncertainty
  Cause: Limited accuracy in calibration.
  Method of determination: Stated on manufacturer's specification, converted to standard deviation. For ASTM class A glassware of volume V, the limit is approximately V^0.6/200.
  Typical value: 10 ml (Grade A): 0.02/√3 = 0.01 ml*.

• Temperature
  Cause: Temperature variation from the calibration temperature causes a difference in the volume at the standard temperature.
  Method of determination: ΔT·α/(2√3) gives the relative standard deviation, where ΔT is the possible temperature range and α the coefficient of volume expansion of the liquid. α is approximately 2 × 10-4 K-1 for water and 1 × 10-3 K-1 for organic liquids.
  Typical value: 100 ml water: 0.03 ml for operating within 3 °C of the stated operating temperature.

• Run to run variation
  Cause: Various.
  Method of determination: Standard deviation of successive check deliveries (found by weighing).
  Typical value: 25 ml pipette: replicate fill/weigh, s = 0.0092 ml.

* Assuming rectangular distribution
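The temperature entry above can be reproduced directly. The Python sketch below is illustrative only; it assumes that ΔT in the table means the full possible temperature range (here 6 °C for operation within ±3 °C of the calibration temperature) and uses the quoted α for water.

```python
import math

def volume_temp_uncertainty(volume_ml, delta_t, alpha):
    """Standard uncertainty in a delivered volume due to temperature, using the
    relative standard deviation delta_t * alpha / (2 * sqrt(3)) from the table
    (delta_t taken as the full possible temperature range in K)."""
    return volume_ml * delta_t * alpha / (2 * math.sqrt(3))

# Table example: 100 ml of water, operation within +/-3 degC (full range 6 degC),
# alpha(water) ~ 2e-4 per K
print(round(volume_temp_uncertainty(100.0, 6.0, 2e-4), 2))   # -> ~0.03 ml
```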


Determination: Reference material concentration — uncertainty components, causes, methods of determination and typical values.

• Purity
  Cause: Impurities reduce the amount of reference material present. Reactive impurities may interfere with the measurement.
  Method of determination: Stated on manufacturer's certificate. Reference certificates usually give unqualified limits; these should accordingly be treated as rectangular distributions and divided by √3. Note: where the nature of the impurities is not stated, additional allowance or checks may need to be made to establish limits for interference etc.
  Typical value: Reference potassium hydrogen phthalate certified as 99.9 ± 0.1%: 0.1/√3 = 0.06%.

• Concentration (certified)
  Cause: Certified uncertainty in reference material concentration.
  Method of determination: Stated on manufacturer's certificate. Reference certificates usually give unqualified limits; these should accordingly be treated as rectangular distributions and divided by √3.
  Typical value: Cadmium acetate in 4% acetic acid, certified as (1000 ± 2) mg l-1: 2/√3 = 1.2 mg l-1 (0.0012 as RSD)*.

• Concentration (made up from certified material)
  Cause: Combination of uncertainties in reference values and intermediate steps.
  Method of determination: Combine values for prior steps as RSD throughout.
  Typical value: Cadmium acetate after three dilutions from 1000 mg l-1 to 0.5 mg l-1: √(0.0012² + 0.0017² + 0.0021² + 0.0017²) = 0.0034 as RSD.

* Assuming rectangular distribution


Determination: Absorbance — uncertainty components, causes, methods of determination and typical values.

• Instrument calibration
  (Note: this component relates to absorbance reading versus reference absorbance, not to the calibration of concentration against absorbance reading.)
  Cause: Limited accuracy in calibration.
  Method of determination: Stated on calibration certificate as limits, converted to standard deviation.

• Run to run variation
  Cause: Various.
  Method of determination: Standard deviation of replicate determinations, or QA performance.
  Typical value: Mean of 7 absorbance readings with s = 1.63: 1.63/√7 = 0.62.

Determination: Sampling — uncertainty components, causes, methods of determination and typical values.

• Homogeneity
  Cause: Sub-sampling from inhomogeneous material will not generally represent the bulk exactly. Note: random sampling will generally result in zero bias. It may be necessary to check that sampling is actually random.
  Method of determination: i) Standard deviation of separate sub-sample results (if the inhomogeneity is large relative to analytical accuracy). ii) Standard deviation estimated from known or assumed population parameters.
  Typical value: Sampling from bread of assumed two-valued inhomogeneity (see Example A4); for 15 portions from 72 contaminated and 360 uncontaminated bulk portions: RSD = 0.58.


Determination: Extraction recovery — uncertainty components, causes, methods of determination and typical values.

• Mean recovery
  Cause: Extraction is rarely complete and may add or include interferents.
  Method of determination: Recovery calculated as percentage recovery from comparable reference material or representative spiking. Uncertainty obtained from standard deviation of mean of recovery experiments. Note: recovery may also be calculated directly from previously measured partition coefficients.
  Typical value: Recovery of pesticide from bread; 42 experiments, mean 90%, s = 28% (see Example A4): 28/√42 = 4.3% (0.048 as RSD).

• Run to run variation in recovery
  Cause: Various.
  Method of determination: Standard deviation of replicate experiments.
  Typical value: Recovery of pesticides from bread from paired replicate data (see Example A4): 0.31 as RSD.


Appendix H. Bibliography

H.1. ISO/IEC 17025:1999. General Requirements for the Competence of Calibration and Testing Laboratories. ISO, Geneva (1999).

H.2. Guide To The Expression Of Uncertainty In Measurement. ISO, Geneva (1993). (ISBN 92-67-10188-9) (Reprinted 1995)

H.3. EURACHEM, Quantifying Uncertainty in Analytical Measurement. Laboratory of the Government Chemist, London (1995). (ISBN 0-948926-08-2)

H.4. International Vocabulary of Basic and General Terms in Metrology. ISO, Geneva (1993). (ISBN 92-67-10175-1)

H.5. ISO 3534:1993. Statistics - Vocabulary and Symbols. ISO, Geneva, Switzerland (1993).

H.6. Analytical Methods Committee, Analyst (London), 120, 29-34 (1995).

H.7. EURACHEM, The Fitness for Purpose of Analytical Methods (1998). (ISBN 0-948926-12-0)

H.8. ISO/IEC Guide 33:1989, "Uses of Certified Reference Materials". ISO, Geneva (1989).

H.9. International Union of Pure and Applied Chemistry, Pure Appl. Chem., 67, 331-343 (1995).

H.10. ISO 5725:1994 (Parts 1-4 and 6). "Accuracy (trueness and precision) of measurement methods and results". ISO, Geneva (1994). See also ISO 5725-5:1998 for alternative methods of estimating precision.

H.11. I. J. Good, "Degree of Belief", in Encyclopaedia of Statistical Sciences, Vol. 2, Wiley, New York (1982).

H.12. J. Kragten, "Calculating standard deviations and confidence intervals with a universally applicable spreadsheet technique", Analyst, 119, 2161-2166 (1994).

H.13. British Standard BS 6748:1986. Limits of metal release from ceramic ware, glassware, glass ceramic ware and vitreous enamel ware.

H.14. S. L. R. Ellison, V. J. Barwick, Accred. Qual. Assur., 3, 101-105 (1998).

H.15. ISO 9004-4:1993, Total Quality Management. Part 2. Guidelines for quality improvement. ISO, Geneva (1993).

H.16. H. Kaiser, Anal. Chem., 42, 24A (1970).

H.17. L. A. Currie, Anal. Chem., 40, 583 (1968).

H.18. IUPAC, "Limit of Detection", Spectrochim. Acta, 33B, 242 (1978).

H.19. OIML Recommendations IR 33, Conventional value of the result of weighing in air.
