
Bei Feng

ROBUST INFERENCE FOR LOCATION PARAMETERS: ONE- AND TWO-SAMPLE PROBLEMS

Mémoire présenté à la Faculté des études supérieures de l'Université Laval pour l'obtention du grade de maître ès sciences (M.Sc.)

Département de mathématiques et de statistique
FACULTÉ DES SCIENCES ET DE GÉNIE
UNIVERSITÉ LAVAL
AOÛT 2003

© Bei Feng, 2003


Abstract

Estimating measures of location is a fundamental statistical problem. The sample mean is not always a good choice for estimating location because it is not resistant to the influence of outliers. To treat this problem in a precise manner when nonnormality is present, we may use robust location estimates. In this work, we present a number of robust location functionals and determine their breakdown points and influence functions. We study robust location estimates such as the sample trimmed mean, the sample Winsorized mean and estimates based on symmetric quantiles. Confidence interval estimation and hypothesis testing are examined from a robust perspective. Both the one-sample case and the two-sample case are considered, the latter under two situations: independence and dependence. A few practical examples illustrate the study.

———————————————– ———————————————–
Jean-Claude Massé Bei Feng
Directeur de recherche Étudiante


Résumé

Estimer la localisation est un problème statistique fondamental. Il est naturel de le faire au moyen de la moyenne échantillonnale, mais cet estimateur a le défaut d'être très sensible aux observations aberrantes. Ce mémoire aborde ce problème en utilisant l'estimation dite robuste.

En premier lieu, nous considérons quelques fonctionnelles de localisation dont nous évaluons la robustesse par deux mesures : le point de rupture et la fonction d'influence. Ces fonctionnelles nous amènent à étudier des estimateurs de localisation robustes tels que la moyenne tronquée, la moyenne Winsorisée ainsi que deux estimateurs basés sur des quantiles symétriques. À l'aide de ces estimateurs robustes, on examine ensuite l'inférence par intervalle de confiance et tests d'hypothèses. Nous nous concentrons sur deux types d'applications : les problèmes à un échantillon et les problèmes à deux échantillons, ce dernier cas comprenant les échantillons indépendants aussi bien que les échantillons dépendants. Quelques exemples illustrent les méthodes.



Acknowledgements

I would like to thank my family, particularly my daughter Chen Long, for her understanding and her tolerance. During this period of more than two years, I had to be away from her. I was able to make up for her absence by studying hard.

I would also like to thank my Québécois friends. Special thanks to Mme Gabrielle Veilleux, M. Bédard and Mme Bédard, for their continuous encouragement and support during my graduate studies. Joy Francisco, my friend and classmate, gave me much-needed help at the beginning, when I did not speak French.

I express my deep appreciation to Mr. Jean-Claude Massé, Professor in the Département de mathématiques et de statistique and my director of research, for his precious comments and his patience. His knowledgeable insights and his optimism made a strong impression on me.

"Bei, you have to be optimistic." I will always remember these words of M. Massé.


Table of contents

Abstract ii
Résumé iii
Acknowledgements v
List of figures xi
List of tables xii
Introduction 1

Chapter 1 Robust location functionals 4
1.1 Measures of robustness 4
1.2 Some measures of location 6
1.2.1 Mean 6
1.2.2 Average of symmetric quantiles 6
1.2.3 The two-sided (symmetric) trimmed mean 7
1.2.4 The (symmetric) Winsorized mean 7
1.3 Influence function 8
1.3.1 Influence function of the mean 8
1.3.2 Influence function of an average of symmetric quantiles 8
1.3.3 Influence function of the two-sided trimmed mean 10


1.3.4 Influence function of the Winsorized mean 14
1.4 Breakdown point 15
1.4.1 Breakdown point of the mean 15
1.4.2 Breakdown point of an average of symmetric quantiles 16
1.4.3 Breakdown point of the trimmed mean 16
1.4.4 Breakdown point of the Winsorized mean 17

Chapter 2 Estimating measures of location 18
2.1 Properties of estimators 18
2.1.1 Consistency 18
2.1.2 Asymptotic normality 19
2.1.3 The finite sample breakdown point 20
2.2 The trimmed mean 21
2.2.1 Estimating the trimmed mean functional 21
2.2.2 Estimating the standard error of the trimmed mean 22
2.3 The Winsorized mean 23
2.3.1 Estimating the Winsorized mean 23
2.3.2 Estimating the standard error of the Winsorized mean 25
2.4 The average of symmetric quantiles 25
2.4.1 Estimating the average of symmetric quantiles 25
2.4.2 Estimating standard errors for the average of symmetric quantiles 27
2.5 Estimating standard errors with the bootstrap 27
2.5.1 Bootstrap estimate of the standard error of the sample trimmed mean 29
2.5.2 Bootstrap estimate of the standard error of the sample Winsorized mean 29


2.5.3 Bootstrap estimate of the standard error of the average of symmetric quantiles 30

Chapter 3 Robust inference in the one-sample problem 31
3.1 Introduction 31
3.2 Confidence intervals with the bootstrap 31
3.2.1 The technique of the percentile bootstrap 32
3.2.2 The technique of the bootstrap-t 32
3.3 Inference on the trimmed mean functional 34
3.3.1 The bootstrap percentile interval for the 2γ trimmed mean 34
3.3.2 The bootstrap-t interval for µ_t 35
3.3.3 The 2γ-trimmed t test 36
3.4 Confidence interval estimation for an average of symmetric quantiles 37
3.4.1 Introduction 37
3.4.2 The bootstrap percentile interval for θ_γ 38
3.4.3 The bootstrap-t interval for θ_γ 38
3.5 Comparison and application 40

Chapter 4 Robust inference in the two-sample problem 45
4.1 Introduction 45
4.2 Two independent samples 46
4.2.1 Student's test 46
4.2.2 The two-sample Yuen-Welch trimmed mean test 47
4.2.3 Confidence interval estimation based on trimmed means 49
4.2.4 Example and application 52


4.3 Two dependent samples 57
4.3.1 The paired t test 57
4.3.2 The two-sample Yuen-Welch trimmed mean test 58
4.3.3 Confidence interval estimation for the difference of trimmed means 60
4.3.4 Example and application 61

General conclusion 69

Appendix S-Plus programs 71
1.1 Estimation of functions 71
1.1.1 Definitions of functions to calculate T1n and T2n 71
1.2 Estimation of standard error 75
1.2.1 Estimation of standard error with the bootstrap 75
1.3 Confidence interval estimation 76
1.3.1 Confidence intervals with Student's distribution 76
1.3.2 Estimation of confidence intervals with the percentile bootstrap method 77
1.3.3 Confidence interval estimation with the bootstrap-t method 81
1.4 Confidence interval estimation in the two-sample case 84
1.4.1 Estimation of the Winsorized covariance 84
1.4.2 Confidence interval estimation for independent samples based on Student's distribution 86
1.4.3 Confidence interval estimation for independent samples with the percentile bootstrap method 87


1.4.4 Confidence interval estimation for independent samples with the bootstrap-t method 89
1.5 Estimation for confidence intervals in dependent samples based on Student's distribution 94
1.6 Confidence interval estimation for dependent samples with the percentile bootstrap method 95
1.7 Estimation for confidence intervals in dependent samples with the bootstrap-t method 97

References 101


List of figures

3.1 Data point cloud and boxplot 41
4.2 Boxplot of renal and heart disease example 55
4.3 Data point cloud and boxplot for Libby and Newgate data 66


List of tables

3.1 Monthly payments in 1979 40
3.2 Estimates of location, of standard errors and 95% confidence bounds with Student's t distribution in the case of one sample 43
3.3 Estimates of location, of standard errors and 95% confidence bounds with respect to various robust methods 44
4.4 Renal and heart disease measurements 52
4.5 Point estimate and 95% confidence intervals for difference of means for the renal and heart disease example 56
4.6 Yuen-Welch's statistic and 95% confidence intervals for difference of trimmed means for the renal and heart disease example 56
4.7 Water flow measurements on the Kootenay river 62
4.8 Point estimate and 95% confidence intervals for difference of means for the Kootenay river example 67
4.9 Yuen-Welch's statistic and 95% confidence intervals for difference of trimmed means for the Kootenay river example 67


Introduction

In classical statistical inference, to compare two populations in terms of their means, one usually assumes that observations are randomly sampled from normal distributions. When comparing two independent populations, it is further required that the populations have equal variances. However, under normality and when σ_1/σ_2 > √3, Student's approach to tests of hypotheses and confidence intervals cannot be considered robust (Wilcox, Charlin and Thompson, 1986). Yuen (1974) designed a robust method to obtain more satisfactory results. In the one-sample case, Tukey and McLaughlin (1963) presented a robust approach to testing location. Statistical functionals that describe a distribution, such as measures of location and scale, are said to be robust if slight changes in a distribution have a relatively small effect on their value (Wilcox, 1997, p. 11).

In Chapter 1, we begin by presenting three approaches to judging whether a statistical functional has robustness properties. We then describe some location functionals such as the mean, the 2γ trimmed mean, the Winsorized mean and functionals based on an average of symmetric quantiles. Further, breakdown points and influence functions are derived and interpreted for these location functionals. The three latter functionals are seen to be robust.

Chapter 2 describes methods for estimating the robust location functionals. We focus on four robust estimates: the sample trimmed mean, the sample Winsorized mean and two estimates based on an average of symmetric quantiles. Two asymptotic properties of these estimates are then presented, namely consistency and asymptotic normality. Estimating the standard error of these estimates is then considered. For that purpose, the bootstrap technique invented by Efron (1979) is recalled; it will be extensively applied in Chapters 3 and 4. Next, a finite-sample measure of robustness, the breakdown point, is introduced and determined for our four estimates.

Chapter 3 explains procedures for constructing robust confidence intervals in the one-sample case, all based on the sample 2γ-trimmed mean and on two estimates based on an average of symmetric quantiles. Tukey and McLaughlin's test is also introduced. The percentile bootstrap method and the bootstrap-t method are used for that purpose. To illustrate these methods, an example is presented in detail.

Finally, Chapter 4 treats the two-sample problem, in both the independent and dependent cases. In this respect, the two-sample Yuen-Welch trimmed mean test is introduced to determine whether two populations have the same location functionals. Depending on the type of relationship between the two populations, independence or dependence, different methods are used to calculate confidence intervals. For independent as well as dependent samples, the bootstrap methodology is again used to construct confidence intervals for a difference of locations. In conclusion, two practical examples are presented.

Chapter 1

Robust Location Functionals

1.1 Measures of robustness

In classical statistics, we often assume that observations are randomly sampled from normal distributions. Under this assumption, methods for computing confidence intervals and testing hypotheses about means and regression parameters form the backbone of the field. But if the number of observations is small, or if the distribution is skewed or otherwise nonnormal, practical problems appear when we use these methods: a) low power; b) the probability coverage of confidence intervals can differ greatly from the nominal value; c) confidence intervals can be relatively long; d) the variance can be relatively large. Robust statistical methods provide an attractive approach to dealing with these problems.

Define a statistical functional T(F) as a real-valued function on a set of distribution functions F. A statistical functional is understood to be robust if slight changes in the distribution have a relatively small effect on its value. There are three basic approaches to judging whether a statistical functional has good robustness properties, called: 1) qualitative robustness; 2) quantitative robustness; 3) infinitesimal robustness.

If F is slightly changed and T(F) is relatively unaffected, then T(F) is said to have qualitative robustness. In practice, given that a sequence F_n tends to F in some sense, it is difficult to check that T(F_n) tends to T(F) in a corresponding sense. In this work, we do not discuss this form of robustness further.

Let ∆_x denote the distribution that puts probability one on the value x; its distribution function is

∆_x(y) = 0 if y < x,   ∆_x(y) = 1 if y ≥ x.

Let F_{x,ε} = (1 − ε)F + ε∆_x, where 0 < ε ≤ 1/2. Suppose that the following limit exists:

IF(x) = lim_{ε↓0} [T(F_{x,ε}) − T(F)] / ε.

Then IF is called the influence function of T at F. If IF(x) is bounded, T(F) is said to have infinitesimal robustness. Since IF depends on F and T, we also write IF(x; T, F). T(F_{x,ε}) can be seen as the effect on T(F) of a contamination at a "bad point" x.

The lower bound of the ε's for which T(F_{x,ε}) goes to infinity as x gets large is called the breakdown point. We write

ε* = inf{ε > 0 : sup_x |T(F_{x,ε})| = ∞}

and view ε* as a measure of quantitative robustness.
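These definitions can be tried numerically. The sketch below is ours, in Python rather than the S-Plus of the thesis's appendix, and approximates IF(x; T, F) by the finite-ε quotient for the mean and the median of a large standard normal sample; all helper names are our own.

```python
import random
import statistics

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # stand-in for F = N(0,1)

def contaminated(data, x, eps):
    """Empirical analogue of F_{x,eps}: replace a fraction eps of the data by x."""
    n = len(data)
    k = int(round(eps * n))
    return data[:n - k] + [x] * k

def empirical_if(functional, data, x, eps=0.01):
    """Finite-eps quotient (T(F_{x,eps}) - T(F)) / eps approximating IF(x)."""
    return (functional(contaminated(data, x, eps)) - functional(data)) / eps

# Mean: IF(x) = x - mu is unbounded in x.
print(empirical_if(statistics.fmean, sample, 5.0))    # roughly 5
print(empirical_if(statistics.fmean, sample, 50.0))   # roughly 50
# Median: IF(x) = 1/(2 f(x_{1/2})) for any x above the median, about 1.25 here.
print(empirical_if(statistics.median, sample, 50.0))
```

The quotient for the mean grows linearly with the contamination point, while the median's stays near 1/(2f(x_{1/2})), which is exactly the boundedness criterion for infinitesimal robustness.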


In robust statistical inference, it is helpful to use various measures of location. Here we describe some measures of location; their influence functions are derived afterwards.

1.2 Some measures of location

Let X be a random variable with distribution F. A measure of location for F is a statistical functional θ(F), which we identify with θ(X), such that:

1. θ(X + b) = θ(X) + b for any constant b;
2. θ(−X) = −θ(X);
3. if X ≥ 0, then θ(X) ≥ 0;
4. θ(aX) = aθ(X) for any constant a > 0.

1.2.1 Mean

The mean is defined to be

T(F) = E(X) = ∫ x dF(x),

whenever this integral exists. It is straightforward to verify that the mean is a measure of location.

1.2.2 Average of symmetric quantiles

For any random variable X with distribution F, the γth quantile is defined by x_γ = F^{-1}(γ) = inf{x : F(x) ≥ γ}. It satisfies F(x_γ) = γ whenever the equation F(x) = γ has a solution. The γth quantile is in general not a measure of location. For example, let F be the discrete distribution that puts probability 1/6 on each of the points −3, −2, −1, 1, 2, 3. When γ = 1/2, x_{1/2} = −1 = θ(X) and θ(−X) = −1, so θ(−X) ≠ −θ(X). In this case x_{1/2} is not a measure of location because it does not satisfy property 2 above. It can be seen that if F(x) = 1/2 has a unique solution and F is continuous, then x_{1/2} is a measure of location. We call x_{1/2} the median of F.

Consider the average

T(F) = θ_γ = [F^{-1}(γ) + F^{-1}(1 − γ)] / 2,

where 0 < γ < 1/2. It is easy to check that θ_γ is a measure of location if F is increasing and continuous at x_γ and x_{1−γ}.
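As a quick numerical illustration (a Python sketch of ours, using the left-continuous empirical quantile inf{x : F_n(x) ≥ γ}), θ_γ behaves as a measure of location under shifts and positive rescaling, while property 2 can fail at jumps of the distribution, exactly as in the discrete example above.

```python
import math

def quantile(data, gamma):
    """Left-continuous empirical quantile inf{x : F_n(x) >= gamma}."""
    xs = sorted(data)
    k = max(math.ceil(gamma * len(xs)) - 1, 0)
    return xs[k]

def theta(data, gamma=0.25):
    """Average of symmetric quantiles theta_gamma."""
    return 0.5 * (quantile(data, gamma) + quantile(data, 1.0 - gamma))

x = [2.0, 3.0, 5.0, 7.0, 11.0, 13.0, 17.0, 19.0]
print(theta(x))                                   # 8.0, the average of 3.0 and 13.0
print(theta([xi + 4.0 for xi in x]) - theta(x))   # property 1: shifts by b = 4
print(theta([3.0 * xi for xi in x]) / theta(x))   # property 4: scales by a = 3
print(-theta([-xi for xi in x]))                  # 11.0, not theta(x): property 2 fails at jumps
```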

1.2.3 The two-sided (symmetric) trimmed mean

Let F be any distribution and 0 < γ < 1/2. Suppose that F is increasing and continuous at x_γ and x_{1−γ}. The γ (symmetric) trimmed mean is defined to be

T_t(F) = µ_t = ∫_{x_γ}^{x_{1−γ}} x/(1 − 2γ) dF(x).

This statistical functional is a measure of location; see Staudte and Sheather (1990, p. 103).
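To see the functional at work, here is a sketch of ours in Python (the choice of N(3, 4) and the midpoint integration scheme are assumptions for illustration): µ_t is evaluated by integrating x dF(x) between the γ and 1 − γ quantiles of a normal distribution, and by symmetry the trimmed mean reproduces the center.

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def Phi_inv(p):
    """Standard normal quantile by bisection; accurate enough for a sketch."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def trimmed_mean_functional(mu, sigma, gamma, steps=200_000):
    """mu_t = ∫_{x_gamma}^{x_{1-gamma}} x/(1-2*gamma) dF(x) for F = N(mu, sigma^2)."""
    a = mu + sigma * Phi_inv(gamma)          # x_gamma
    b = mu + sigma * Phi_inv(1.0 - gamma)    # x_{1-gamma}
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h
        f = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        total += x * f * h
    return total / (1.0 - 2.0 * gamma)

print(trimmed_mean_functional(3.0, 2.0, 0.1))   # close to 3.0: trimming a symmetric F keeps its center
```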

1.2.4 The (symmetric) Winsorized mean

Let F be any distribution and 0 < γ < 1/2. Suppose that F is increasing and continuous at x_γ and x_{1−γ}. The γ Winsorized mean is then defined as

T_w(F) = µ_w = ∫_{x_γ}^{x_{1−γ}} x dF(x) + γ(x_γ + x_{1−γ}).

We note that µ_w = (1 − 2γ)µ_t + γ(x_γ + x_{1−γ}), where µ_t is the two-sided trimmed mean. Thus µ_w is a linear combination of µ_t and the γth and (1 − γ)th quantiles; equivalently, µ_w = (1 − 2γ)µ_t + 2γθ_γ is a convex combination of the measures of location µ_t and θ_γ, so it is easy to see that the Winsorized mean is itself a measure of location.
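On the sample side (anticipating Chapter 2), a short Python sketch of the sample analogues shows why these functionals matter; the helper names and the toy data are ours. One gross outlier drags the mean far away while the trimmed and Winsorized means barely move.

```python
def trimmed_mean(data, gamma):
    """Sample gamma-trimmed mean: drop g = int(gamma*n) points at each end."""
    xs = sorted(data)
    g = int(gamma * len(xs))
    kept = xs[g:len(xs) - g]
    return sum(kept) / len(kept)

def winsorized_mean(data, gamma):
    """Sample gamma-Winsorized mean: pull the g extreme points at each end
    back to the nearest retained order statistic before averaging."""
    xs = sorted(data)
    g = int(gamma * len(xs))
    ws = [xs[g]] * g + xs[g:len(xs) - g] + [xs[len(xs) - g - 1]] * g
    return sum(ws) / len(ws)

x = [1.0, 2.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 9.0, 120.0]   # one gross outlier
print(sum(x) / len(x))          # 15.9: the mean follows the outlier
print(trimmed_mean(x, 0.1))     # 4.75: 1.0 and 120.0 are discarded
print(winsorized_mean(x, 0.1))  # 4.9: 1.0 -> 2.0 and 120.0 -> 9.0 before averaging
```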


1.3 Influence function

We defined the influence function above. In what follows, we derive the influence functions of some measures of location.

1.3.1 Influence function of the mean

If T(F) = E(X) = µ, it is easy to check that IF(x) = x − µ. Because this influence function is not bounded in x, the mean µ does not have infinitesimal robustness.

1.3.2 Influence function of an average of symmetric quantiles

Assume that F has a density f which is continuous and positive at x_γ = F^{-1}(γ). Then the influence function of the γth quantile is

IF_γ(x) =
  (γ − 1)/f(x_γ)   if x < x_γ,
  0                if x = x_γ,        (1.1)
  γ/f(x_γ)         if x > x_γ.

Proof:

1) For the case x < x_γ, see Staudte and Sheather (1990, p. 59). Even though these authors consider the case where X is positive, the general proof is similar.


2) Suppose that x > x_γ. We note that

F^{-1}_{x,ε}(γ) =
  F^{-1}(γ/(1 − ε))        if γ < (1 − ε)F(x),
  x                        if (1 − ε)F(x) ≤ γ < (1 − ε)F(x) + ε,        (1.2)
  F^{-1}((γ − ε)/(1 − ε))  if (1 − ε)F(x) + ε ≤ γ.

Let γ(ε) = T(F_{x,ε}) = F^{-1}_{x,ε}(γ). By definition of the influence function, IF_γ(x) = lim_{ε↓0} γ′(ε). If x > x_γ, then γ < (1 − ε)F(x) when ε is small enough, so F^{-1}_{x,ε}(γ) = F^{-1}(γ/(1 − ε)). Then

γ′(ε) = d/dε [F^{-1}_{x,ε}(γ)]
      = d/dε [F^{-1}(γ/(1 − ε))]
      = [d/dε (γ/(1 − ε))] / f{F^{-1}(γ/(1 − ε))}
      = [γ/(1 − ε)²] / f{F^{-1}(γ/(1 − ε))},

so that

IF_γ(x) = lim_{ε↓0} γ′(ε) = lim_{ε↓0} [γ/(1 − ε)²] / f{F^{-1}(γ/(1 − ε))} = γ/f(x_γ).

3) If x = x_γ, equation (1.2) gives F^{-1}_{x,ε}(γ) = x, so that

γ′(ε) = d/dε [F^{-1}_{x,ε}(γ)] = d/dε (x) = 0

as ε → 0.

Corollary 1:

Under the same hypotheses as above at γ = 1/2, the influence function of x_{1/2} is

IF_{1/2}(x) =
  −1/(2f(x_{1/2}))   if x < x_{1/2},
  0                  if x = x_{1/2},        (1.3)
  1/(2f(x_{1/2}))    if x > x_{1/2}.

Corollary 2:

Under the same hypotheses as above at γ and 1 − γ, the influence function of an average of symmetric quantiles is

IF(x) =
  (1/2) [(γ − 1)/f(x_γ) + (−γ)/f(x_{1−γ})]    if x < x_γ,
  (1/2) [γ/f(x_γ) + (−γ)/f(x_{1−γ})]          if x_γ ≤ x ≤ x_{1−γ},        (1.4)
  (1/2) [γ/f(x_γ) + (1 − γ)/f(x_{1−γ})]       if x > x_{1−γ}.

When 0 < γ < 1/2, the influence function of an average of symmetric quantiles is bounded, so it has infinitesimal robustness.
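Equation (1.4) can be checked numerically. The sketch below (Python, our own helper names) builds F^{-1}_{x,ε} for F = N(0,1) directly from equation (1.2) and compares the finite-ε quotient with the closed form in the case x > x_{1−γ}.

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi_inv(p):
    """Standard normal quantile by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def quantile_contaminated(p, x, eps):
    """F_{x,eps}^{-1}(p) for F = N(0,1), following equation (1.2)."""
    if p < (1 - eps) * Phi(x):
        return Phi_inv(p / (1 - eps))
    if p < (1 - eps) * Phi(x) + eps:
        return x
    return Phi_inv((p - eps) / (1 - eps))

gamma, x, eps = 0.25, 3.0, 1e-4
T0 = 0.5 * (Phi_inv(gamma) + Phi_inv(1 - gamma))     # theta_gamma(F) = 0 by symmetry
T_eps = 0.5 * (quantile_contaminated(gamma, x, eps)
               + quantile_contaminated(1 - gamma, x, eps))
empirical = (T_eps - T0) / eps
x_g = Phi_inv(gamma)                                 # x_gamma < 0, and f(x_{1-gamma}) = phi(-x_g)
closed_form = 0.5 * (gamma / phi(x_g) + (1 - gamma) / phi(-x_g))
print(empirical, closed_form)                        # both close to 1.57
```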

1.3.3 Influence function of the two-sided trimmed mean

Assume that F has a density f which is continuous and positive at x_γ = F^{-1}(γ) and x_{1−γ} = F^{-1}(1 − γ). For the two-sided trimmed mean, the influence function is the following:

IF_t(x) =
  (x_γ − µ_w)/(1 − 2γ)      if x < x_γ,
  (x − µ_w)/(1 − 2γ)        if x_γ ≤ x ≤ x_{1−γ},        (1.5)
  (x_{1−γ} − µ_w)/(1 − 2γ)  if x > x_{1−γ},

where µ_w = ∫_{x_γ}^{x_{1−γ}} x dF(x) + γ(x_γ + x_{1−γ}) is the Winsorized mean.

Proof:

Define

g(ε) = ∫_{F^{-1}_{x,ε}(γ)}^{F^{-1}_{x,ε}(1−γ)} y/(1 − 2γ) dF_{x,ε}(y)
     = ∫_{F^{-1}_{x,ε}(γ)}^{F^{-1}_{x,ε}(1−γ)} y/(1 − 2γ) d[(1 − ε)F(y) + ε∆_x(y)]
     = ∫_{F^{-1}_{x,ε}(γ)}^{F^{-1}_{x,ε}(1−γ)} y/(1 − 2γ) dF(y) + ε ∫_{F^{-1}_{x,ε}(γ)}^{F^{-1}_{x,ε}(1−γ)} y/(1 − 2γ) d(∆_x − F)(y).

Because

d/dε ∫_{F^{-1}_{x,ε}(γ)}^{F^{-1}_{x,ε}(1−γ)} y/(1 − 2γ) dF(y)
  = [F^{-1}_{x,ε}(1 − γ)/(1 − 2γ)] f(F^{-1}_{x,ε}(1 − γ)) ∂/∂ε F^{-1}_{x,ε}(1 − γ)
  − [F^{-1}_{x,ε}(γ)/(1 − 2γ)] f(F^{-1}_{x,ε}(γ)) ∂/∂ε F^{-1}_{x,ε}(γ),

we have

g′(ε) = [F^{-1}_{x,ε}(1 − γ)/(1 − 2γ)] f(F^{-1}_{x,ε}(1 − γ)) ∂/∂ε F^{-1}_{x,ε}(1 − γ)
      − [F^{-1}_{x,ε}(γ)/(1 − 2γ)] f(F^{-1}_{x,ε}(γ)) ∂/∂ε F^{-1}_{x,ε}(γ)
      + ∫_{F^{-1}_{x,ε}(γ)}^{F^{-1}_{x,ε}(1−γ)} y/(1 − 2γ) d(∆_x − F)(y)
      + ε d/dε ∫_{F^{-1}_{x,ε}(γ)}^{F^{-1}_{x,ε}(1−γ)} y/(1 − 2γ) d(∆_x − F)(y).

Now

lim_{ε↓0} ∂/∂ε F^{-1}_{x,ε}(1 − γ) = IF_{1−γ}(x),   lim_{ε↓0} ∂/∂ε F^{-1}_{x,ε}(γ) = IF_γ(x)

for all x, and

lim_{ε↓0} ε d/dε ∫_{F^{-1}_{x,ε}(γ)}^{F^{-1}_{x,ε}(1−γ)} y/(1 − 2γ) d(∆_x − F)(y) = 0,

so that

lim_{ε↓0} g′(ε) = [x_{1−γ}/(1 − 2γ)] f(x_{1−γ}) IF_{1−γ}(x) − [x_γ/(1 − 2γ)] f(x_γ) IF_γ(x)
                + ∫_{x_γ}^{x_{1−γ}} y/(1 − 2γ) d(∆_x − F)(y)
                = [x_{1−γ}/(1 − 2γ)] f(x_{1−γ}) IF_{1−γ}(x) − [x_γ/(1 − 2γ)] f(x_γ) IF_γ(x)
                − ∫_{x_γ}^{x_{1−γ}} y/(1 − 2γ) dF(y) + ∫_{x_γ}^{x_{1−γ}} y/(1 − 2γ) d∆_x(y).

Because µ_w = ∫_{x_γ}^{x_{1−γ}} x dF(x) + γ(x_γ + x_{1−γ}), we have

µ_t = ∫_{x_γ}^{x_{1−γ}} x/(1 − 2γ) dF(x) = −(γx_γ + γx_{1−γ} − µ_w)/(1 − 2γ).

Using the indicator function, it is seen that

∫_{x_γ}^{x_{1−γ}} x/(1 − 2γ) d∆_x(x) = [x/(1 − 2γ)] I{x_γ ≤ x ≤ x_{1−γ}}.

Substituting these into the expression above, we get

lim_{ε↓0} g′(ε) = [x_{1−γ}/(1 − 2γ)] f(x_{1−γ}) IF_{1−γ}(x) − [x_γ/(1 − 2γ)] f(x_γ) IF_γ(x)
                + (γx_γ + γx_{1−γ} − µ_w)/(1 − 2γ) + x I{x_γ ≤ x ≤ x_{1−γ}}/(1 − 2γ).

1) Let x < x_γ. Then IF_γ(x) = (γ − 1)/f(x_γ) and IF_{1−γ}(x) = −γ/f(x_{1−γ}), so

lim_{ε↓0} g′(ε) = [x_{1−γ} f(x_{1−γ})(−γ)] / [(1 − 2γ) f(x_{1−γ})] − [x_γ f(x_γ)(γ − 1)] / [(1 − 2γ) f(x_γ)]
                + (γx_γ + γx_{1−γ} − µ_w)/(1 − 2γ) + 0
                = [−γx_{1−γ} − (γ − 1)x_γ + γx_γ + γx_{1−γ} − µ_w]/(1 − 2γ)
                = (x_γ − µ_w)/(1 − 2γ).

2) Let x_γ ≤ x ≤ x_{1−γ}. Then IF_γ(x) = γ/f(x_γ) and IF_{1−γ}(x) = −γ/f(x_{1−γ}), so

lim_{ε↓0} g′(ε) = [−γx_{1−γ} − γx_γ + γx_γ + γx_{1−γ} − µ_w + x]/(1 − 2γ)
                = (x − µ_w)/(1 − 2γ).

3) Finally, assume x > x_{1−γ}. Then IF_γ(x) = γ/f(x_γ) and IF_{1−γ}(x) = (1 − γ)/f(x_{1−γ}), so

lim_{ε↓0} g′(ε) = [(1 − γ)x_{1−γ} − γx_γ + γx_γ + γx_{1−γ} − µ_w]/(1 − 2γ)
                = (x_{1−γ} − µ_w)/(1 − 2γ).

Since the influence function of the trimmed mean is bounded, the trimmed mean has infinitesimal robustness.
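Equation (1.5) admits the same kind of numerical check. The sketch below (our Python code, with our own helper names) uses the quantile form µ_t = (1 − 2γ)^{-1} ∫_γ^{1−γ} F^{-1}(p) dp of the trimmed mean together with equation (1.2) for F = N(0,1); since µ_w = 0 by symmetry, the predicted influence at a point x > x_{1−γ} is x_{1−γ}/(1 − 2γ).

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def Phi_inv(p):
    """Standard normal quantile by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def quantile_contaminated(p, x, eps):
    """F_{x,eps}^{-1}(p) for F = N(0,1), following equation (1.2)."""
    if p < (1 - eps) * Phi(x):
        return Phi_inv(p / (1 - eps))
    if p < (1 - eps) * Phi(x) + eps:
        return x
    return Phi_inv((p - eps) / (1 - eps))

def trimmed_mean_of(x, eps, gamma=0.25, steps=10_000):
    """mu_t(F_{x,eps}) via the quantile form, midpoint rule in p."""
    h = (1 - 2 * gamma) / steps
    total = sum(quantile_contaminated(gamma + (i + 0.5) * h, x, eps)
                for i in range(steps))
    return total * h / (1 - 2 * gamma)

gamma, x, eps = 0.25, 5.0, 1e-3
empirical = (trimmed_mean_of(x, eps) - trimmed_mean_of(x, 0.0)) / eps
closed_form = Phi_inv(1 - gamma) / (1 - 2 * gamma)   # (x_{1-gamma} - mu_w)/(1-2*gamma) with mu_w = 0
print(empirical, closed_form)                        # both close to 1.35
```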


1.3.4 Influence function of the Winsorized mean

Assume that the hypotheses stated at the beginning of Section 1.3.3 hold. We have

IF_w(x) =
  x_γ − γ/f(x_γ) − C          if x < x_γ,
  x − C                       if x_γ ≤ x ≤ x_{1−γ},        (1.6)
  x_{1−γ} + γ/f(x_{1−γ}) − C  if x > x_{1−γ},

where C = µ_w − γ²/f(x_γ) + γ²/f(x_{1−γ}).

Proof:

The two-sided Winsorized mean

µ_w = (1 − 2γ)µ_t + γ(x_γ + x_{1−γ})

is a linear combination of the two-sided trimmed mean µ_t and the γ and (1 − γ) quantiles. By the definition, the influence function of the Winsorized mean can therefore be derived from those of the two-sided trimmed mean and of the γ and (1 − γ) quantiles:

IF_w(x) = (1 − 2γ) IF_t(x) + γ [IF_γ(x) + IF_{1−γ}(x)].

1) Let x < x_γ. Then

IF_w(x) = (1 − 2γ)(x_γ − µ_w)/(1 − 2γ) + γ[(γ − 1)/f(x_γ) + (−γ)/f(x_{1−γ})]
        = x_γ − µ_w + (γ² − γ)/f(x_γ) − γ²/f(x_{1−γ})
        = x_γ − γ/f(x_γ) − [µ_w − γ²/f(x_γ) + γ²/f(x_{1−γ})]
        = x_γ − γ/f(x_γ) − C.

2) Let x_γ ≤ x ≤ x_{1−γ}. Then, in the same way,

IF_w(x) = (1 − 2γ)(x − µ_w)/(1 − 2γ) + γ[γ/f(x_γ) + (−γ)/f(x_{1−γ})]
        = x − µ_w + γ²/f(x_γ) − γ²/f(x_{1−γ})
        = x − [µ_w − γ²/f(x_γ) + γ²/f(x_{1−γ})]
        = x − C.

3) Finally, suppose x > x_{1−γ}. Then

IF_w(x) = (1 − 2γ)(x_{1−γ} − µ_w)/(1 − 2γ) + γ[γ/f(x_γ) + (1 − γ)/f(x_{1−γ})]
        = x_{1−γ} − µ_w + γ²/f(x_γ) + γ/f(x_{1−γ}) − γ²/f(x_{1−γ})
        = x_{1−γ} + γ/f(x_{1−γ}) − [µ_w − γ²/f(x_γ) + γ²/f(x_{1−γ})]
        = x_{1−γ} + γ/f(x_{1−γ}) − C.

Since the influence function of the Winsorized mean is bounded, that functional has infinitesimal robustness.

1.4 Breakdown point

1.4.1 Breakdown point of the mean

Since T(∆_x) = x, we have T(F_{x,ε}) = (1 − ε)T(F) + εx. For any ε > 0, T(F_{x,ε}) is unbounded in x; hence the breakdown point of the mean is 0.


1.4.2 Breakdown point of an average of symmetric quantiles

For the γth quantile functional with 0 < γ < 1/2, T(F_{x,ε}) = F^{-1}_{x,ε}(γ) and ε* = γ. For details, see Staudte and Sheather (1990, p. 56). Clearly, if 1/2 ≤ γ < 1, then ε* = 1 − γ. For an average of symmetric quantiles, it is easy to see that ε* = γ.

1.4.3 Breakdown point of the trimmed mean

For the two-sided trimmed mean, the breakdown point ε∗ is γ.

Proof : For the two-sided trimmed mean µt,

\[
\begin{aligned}
T(F_{x,\varepsilon})
&= \int_{F^{-1}_{x,\varepsilon}(\gamma)}^{F^{-1}_{x,\varepsilon}(1-\gamma)} \frac{y}{1-2\gamma}\,dF_{x,\varepsilon}(y)
 = \int_{F^{-1}_{x,\varepsilon}(\gamma)}^{F^{-1}_{x,\varepsilon}(1-\gamma)} \frac{y}{1-2\gamma}\,d\big((1-\varepsilon)F(y)+\varepsilon\Delta_x(y)\big) \\
&= (1-\varepsilon)\int_{F^{-1}_{x,\varepsilon}(\gamma)}^{F^{-1}_{x,\varepsilon}(1-\gamma)} \frac{y}{1-2\gamma}\,dF(y)
 + \varepsilon\int_{F^{-1}_{x,\varepsilon}(\gamma)}^{F^{-1}_{x,\varepsilon}(1-\gamma)} \frac{y}{1-2\gamma}\,d\Delta_x(y) \\
&= (1-\varepsilon)\int_{0}^{F^{-1}_{x,\varepsilon}(1-\gamma)} \frac{y}{1-2\gamma}\,dF(y)
 - (1-\varepsilon)\int_{0}^{F^{-1}_{x,\varepsilon}(\gamma)} \frac{y}{1-2\gamma}\,dF(y) \\
&\qquad + \varepsilon\int_{0}^{F^{-1}_{x,\varepsilon}(1-\gamma)} \frac{y}{1-2\gamma}\,d\Delta_x(y)
 - \varepsilon\int_{0}^{F^{-1}_{x,\varepsilon}(\gamma)} \frac{y}{1-2\gamma}\,d\Delta_x(y).
\end{aligned}
\]

1) If ε < γ, we have (1 − γ)/(1 − ε) < 1, so that (1 − γ)/(1 − ε) < F(x) for x large enough. According to equation (1.2), $F^{-1}\!\big(\tfrac{1-\gamma}{1-\varepsilon}\big) = F^{-1}_{x,\varepsilon}(1-\gamma)$ and $F^{-1}\!\big(\tfrac{\gamma}{1-\varepsilon}\big) = F^{-1}_{x,\varepsilon}(\gamma)$.


We have

\[
\begin{aligned}
T(F_{x,\varepsilon})
&= (1-\varepsilon)\int_{0}^{F^{-1}\left(\frac{1-\gamma}{1-\varepsilon}\right)} \frac{y}{1-2\gamma}\,dF(y)
 - (1-\varepsilon)\int_{0}^{F^{-1}\left(\frac{\gamma}{1-\varepsilon}\right)} \frac{y}{1-2\gamma}\,dF(y) + 0 + 0 \\
&= (1-\varepsilon)\int_{0}^{F^{-1}\left(\frac{1-\gamma}{1-\varepsilon}\right)} \frac{y}{1-2\gamma}\,dF(y)
 - (1-\varepsilon)\int_{0}^{F^{-1}\left(\frac{\gamma}{1-\varepsilon}\right)} \frac{y}{1-2\gamma}\,dF(y). \qquad (1.7)
\end{aligned}
\]

Therefore, T(F_{x,ε}) remains bounded in x, and so ε∗ ≥ γ.

2) If ε > γ, we have (1 − ε)F(x) < 1 − γ < (1 − ε)F(x) + ε for x large enough. According to equation (1.2), $F^{-1}_{x,\varepsilon}(1-\gamma) = x$. Then

\[
T(F_{x,\varepsilon})
= (1-\varepsilon)\int_{0}^{x} \frac{y}{1-2\gamma}\,dF(y)
 - \varepsilon\int_{0}^{F^{-1}\left(\frac{\gamma}{1-\varepsilon}\right)} \frac{y}{1-2\gamma}\,dF(y)
 + \frac{\varepsilon x}{1-2\gamma}
 - \frac{\varepsilon F^{-1}\left(\frac{\gamma}{1-\varepsilon}\right)}{1-2\gamma}. \qquad (1.8)
\]

Therefore T(F_{x,ε}) is unbounded in x, because of the term εx/(1 − 2γ), and so ε∗ = γ.

1.4.4 Breakdown point of the Winsorized mean

For the two-sided Winsorized mean, the breakdown point ε∗ is γ.

Proof : Since µw = (1 − 2γ)µt + γ(xγ + x1−γ), we have

\[
T_w(F_{x,\varepsilon}) = (1-2\gamma)\,T_t(F_{x,\varepsilon}) + \gamma\big(F^{-1}(\gamma) + F^{-1}(1-\gamma)\big).
\]

1) If ε < γ, Tt(F_{x,ε}) is bounded in x as x gets large. Since F−1(γ) and F−1(1 − γ) are fixed, the breakdown point ε∗ ≥ γ.

2) If ε > γ, Tt(F_{x,ε}) is unbounded in x as x gets large. As above, it follows that ε∗ is γ.


Chapter 2

Estimating Measures of Location

2.1 Properties of estimators

2.1.1 Consistency

Assume the empirical distribution function Fn is based on a sample of size n from F, where F belongs to a given distribution family $\mathcal{F}$. Then a statistical functional T(F) generates a natural sequence of estimates (T(Fn)). Under mild regularity conditions, (T(Fn)) is consistent, that is,

\[
T(F_n) \xrightarrow{p} T(F), \qquad F \in \mathcal{F},
\]

as n → ∞, for every F in the family $\mathcal{F}$, where $\xrightarrow{p}$ denotes convergence in probability. Recall that this means that (Serfling, 1980, p. 6)

\[
\lim_{n\to\infty} P\big(|T(F_n) - T(F)| < \varepsilon\big) = 1, \qquad \text{for every } \varepsilon > 0.
\]

If F is continuous, it is often the case that (T(Fn)) is consistent for T(F) (Staudte and Sheather, 1990, p. 66).


2.1.2 Asymptotic normality

We say that (T(Fn)) is asymptotically normal for T(F) (Staudte and Sheather, 1990, p. 51) if

\[
n^{1/2}\big[T(F_n) - T(F)\big] \xrightarrow{d} N(0, V), \qquad F \in \mathcal{F},
\]

where $\xrightarrow{d}$ denotes convergence in distribution and V = V(T, F) > 0. Recall that $Y_n \xrightarrow{d} Y$ means that (Serfling, 1980, p. 8)

\[
\lim_{n\to\infty} G_n(t) = G(t) \quad \text{at each continuity point } t \text{ of } G,
\]

where $G_n = F_{Y_n}$ and $G = F_Y$. V is often called the asymptotic variance. The above can also be written as

\[
\left(\frac{n}{V}\right)^{1/2}\big[T(F_n) - T(F)\big] \xrightarrow{d} N(0, 1),
\]

where N(0, 1) is the standard normal distribution. Furthermore, in many situations we have (Staudte and Sheather, 1990, section 3.6)

\[
n\,\mathrm{Var}\big[T(F_n)\big] \longrightarrow V(T, F),
\]

and so in those cases V(T, F)/n gives an approximation to the variance of T(Fn). It is often the case that (Staudte and Sheather, 1990, p. 63 and p. 79)

\[
V = V(T, F) = E\big[IF^2_{T,F}(X)\big].
\]

In the following, assume that

\[
n\,\mathrm{Var}\big[T(F_n)\big] \longrightarrow V(T, F).
\]

Furthermore, suppose that (Staudte and Sheather, 1990, p. 62-63)

\[
n^{1/2}\Big\{\big[T(F_n) - T(F)\big] - \int IF_{T,\hat{F}_n}(x)\, dF_n(x)\Big\}
\]


converges to zero in probability, so that $V(T, F) = E_F\big[IF^2_{T,F}(X)\big]$. In this case V(T, F) can be estimated by

\[
E_{\hat{F}_n}\big[IF^2_{T,\hat{F}_n}(X)\big] = \frac{1}{n}\sum_{i=1}^{n} IF^2_{T,\hat{F}_n}(X_i).
\]

Using the notation

\[
SE\big[T(F_n)\big] = \sqrt{V(T, F)/n},
\]

this yields the influence function estimate of the standard error, defined by (Staudte and Sheather, 1990, p. 79) :

\[
\widehat{SE}\big[T(F_n)\big] = \frac{1}{n}\sqrt{\sum_{i=1}^{n} IF^2_{T,\hat{F}_n}(X_i)}.
\]

2.1.3 The finite sample breakdown point

The finite sample breakdown point concept was introduced by Hodges (1967). Donoho and Huber (1983) described it as a measure of the sensitivity of an estimator to outliers. For a sample (x1, x2, · · · , xn), let the sample (z1, z2, · · · , zn) be obtained by replacing the m data points $x_{i_1}, x_{i_2}, \cdots, x_{i_m}$ by arbitrary values y1, y2, · · · , ym, where 1 ≤ m ≤ n. The finite sample breakdown point ε∗n of the estimate T(Fn) at the sample (x1, x2, · · · , xn) is defined as follows :

\[
\varepsilon^*_n = \inf\Big\{\frac{m}{n} : \sup_{z_1, z_2, \cdots, z_n} \big|T(F^m_n)\big| = \infty\Big\},
\]

where $T(F^m_n)$ is the estimate of T(F) based on the sample (z1, z2, · · · , zn). Note that the finite sample breakdown point usually does not depend on the values xi, i = 1, 2, · · · , n, from the sample, but depends slightly on the sample size n.


2.2 The trimmed mean

2.2.1 Estimating the trimmed mean functional

Let X1, · · · , Xn be a random sample and let X(1) ≤ X(2) ≤ · · · ≤ X(n) be the observations written in ascending order, where X(i) denotes the ith order statistic. Let 0 < γ < 1/2 and put g = [γn], where the notation stands for the largest integer ≤ γn. The sample γ-trimmed mean, which we write $\bar{X}_t$, is defined as follows :

\[
\bar{X}_t = \frac{X_{(g+1)} + \cdots + X_{(n-g)}}{n - 2g}.
\]

Thus, the sample trimmed mean is a linear combination of the order statistics. It has some advantages over the sample mean when F has heavy tails ; it removes the g smallest and the g largest observations (2g in all), thus reducing the influence of the tails.
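The definition translates directly into code. A minimal sketch of ours (Python with numpy assumed ; not from the thesis), using g = [γn] :

```python
import numpy as np

def trimmed_mean(x, gamma):
    """Sample gamma-trimmed mean: average of X_(g+1), ..., X_(n-g),
    where g = [gamma * n] is the integer part of gamma * n."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    g = int(gamma * n)
    return x[g:n - g].sum() / (n - 2 * g)

data = [3.0, 1.0, 2.0, 100.0, 4.0, 5.0, 6.0, 7.0, 8.0, -50.0]
# g = [0.1 * 10] = 1: the smallest (-50.0) and the largest (100.0)
# observations are dropped before averaging the remaining eight.
print(trimmed_mean(data, 0.10))   # 4.5
```

With γ = 0 the function reduces to the ordinary sample mean, which makes the trimmed mean easy to compare against it.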

For every F ∈ $\mathcal{F}$, assume that F is continuous and strictly increasing at xγ and x1−γ. Then the sample trimmed mean satisfies

\[
\bar{X}_t \xrightarrow{p} \mu_t, \qquad F \in \mathcal{F} ;
\]

in other words, it is consistent.

Under the same hypotheses on F, asymptotic normality also holds (Staudte and Sheather, 1990, p. 106) :

\[
n^{1/2}\big[\bar{X}_t - \mu_t\big] \xrightarrow{d} N(0, V(T, F)), \qquad F \in \mathcal{F},
\]

where $V(T, F) = E\big[IF^2_{\gamma,F}(X)\big]$, according to the influence function obtained


in section 3.4 of Chapter 1. Thus

\[
V(T, F) = \frac{1}{1-2\gamma}\left[\sigma^2_{\gamma,w} + \frac{\gamma}{1-2\gamma}\Big((x_\gamma - \mu_w)^2 + (x_{1-\gamma} - \mu_w)^2\Big)\right],
\]

where µw is the Winsorized mean functional and

\[
\sigma^2_{\gamma,w} = \int_{x_\gamma}^{x_{1-\gamma}} \frac{(x - \mu_w)^2}{1-2\gamma}\, dF(x).
\]

It is easily seen that the finite sample breakdown point of the γ-trimmed mean is ([nγ] + 1)/n. Indeed, if a fraction γ of the observations becomes large, the trimmed mean cuts away that fraction, which prevents the sample trimmed mean from going to infinity. The same holds if we let a fraction γ of the observations tend to −∞. In many cases, the following limit is correct (Staudte and Sheather, 1990, p. 56) :

\[
\varepsilon^* = \lim_{n\to\infty} \varepsilon^*_n,
\]

where ε∗ is the breakdown point of the statistical functional. In this case, we have that

\[
\lim_{n\to\infty} \frac{[n\gamma] + 1}{n} = \gamma.
\]

2.2.2 Estimating the standard error of the trimmed mean

The influence function estimate of the standard error is derived as follows. Let X(1) ≤ X(2) ≤ · · · ≤ X(n) be the order statistics and let

\[
W_i =
\begin{cases}
X_{(g+1)} & \text{if } i \le g \\
X_{(i)} & \text{if } g < i \le n - g \\
X_{(n-g)} & \text{if } n - g < i
\end{cases}
\qquad (2.9)
\]


Define

\[
\bar{W} = \frac{1}{n}\sum W_i
\qquad \text{and} \qquad
s^2_w = \frac{1}{n-1}\sum (W_i - \bar{W})^2,
\]

respectively the so-called Winsorized sample mean and sample Winsorized variance. For the two-sided trimmed mean, it can be checked that

\[
E_{\hat{F}_n}\big[IF^2_{T,\hat{F}_n}(X)\big] = \frac{\sum (W_i - \bar{W})^2}{(1-2\gamma)^2}.
\]

Then, for n large enough,

\[
\Big(\widehat{SE}\big[\bar{X}_t\big]\Big)^2
= \frac{1}{n^2(1-2\gamma)^2}\sum (W_i - \bar{W})^2
\approx \frac{1}{n(n-1)(1-2\gamma)^2}\sum (W_i - \bar{W})^2
= \frac{s^2_w}{(1-2\gamma)^2 n}. \qquad (2.10)
\]

For this reason, the influence function estimate of the standard error of the trimmed mean is taken in practice to be :

\[
\widehat{SE}\big[\bar{X}_t\big] = \sqrt{\frac{s^2_w}{(1-2\gamma)^2 n}} = \frac{s_w}{(1-2\gamma)\sqrt{n}}. \qquad (2.11)
\]
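Formulas (2.9)–(2.11) can be sketched in a few lines (Python with numpy assumed ; the function name is ours) :

```python
import numpy as np

def trimmed_mean_se(x, gamma):
    """Influence-function estimate of the standard error of the
    gamma-trimmed mean: s_w / ((1 - 2*gamma) * sqrt(n)), where s_w
    is the standard deviation of the Winsorized sample (2.9)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    g = int(gamma * n)
    w = x.copy()
    w[:g] = x[g]              # W_i = X_(g+1) for i <= g
    w[n - g:] = x[n - g - 1]  # W_i = X_(n-g) for i > n - g
    s2w = w.var(ddof=1)       # sample Winsorized variance, divisor n - 1
    return np.sqrt(s2w) / ((1 - 2 * gamma) * np.sqrt(n))
```

For γ = 0 no observation is Winsorized and the expression reduces to the familiar s/√n of the sample mean.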

2.3 The Winsorized mean

2.3.1 Estimating the Winsorized mean

The Winsorized mean functional is estimated by the Winsorized sample mean

\[
\bar{X}_w = \bar{W},
\]


where W1, W2, · · · , Wn is the Winsorized sample defined in section 2.2.

Again, assume that for every F ∈ $\mathcal{F}$, F is continuous and strictly increasing at the quantiles xγ and x1−γ. Then

\[
\bar{X}_w \xrightarrow{p} \mu_w, \qquad F \in \mathcal{F}.
\]

Under the same assumption (Serfling, 1980, p. 282),

\[
n^{1/2}\big[\bar{X}_w - \mu_w\big] \xrightarrow{d} N(0, V(T, F)), \qquad F \in \mathcal{F}.
\]

It is seen below that the asymptotic normal distribution depends on the parameters µt, µw, σγ,w. According to the influence function of the Winsorized mean (see section 3.4, Chapter 1),

\[
\begin{aligned}
V(T, F) &= \int_{-\infty}^{x_\gamma}\left(x_\gamma - \frac{\gamma}{f(x_\gamma)} - C\right)^2 dF(x)
 + \int_{x_\gamma}^{x_{1-\gamma}} (x - C)^2\, dF(x)
 + \int_{x_{1-\gamma}}^{\infty}\left(x_{1-\gamma} + \frac{\gamma}{f(x_{1-\gamma})} - C\right)^2 dF(x) \\
&= \gamma\left[\left(x_\gamma - \frac{\gamma}{f(x_\gamma)} - C\right)^2 + \left(x_{1-\gamma} + \frac{\gamma}{f(x_{1-\gamma})} - C\right)^2\right]
 + \int_{x_\gamma}^{x_{1-\gamma}} (x - C)^2\, dF(x) \\
&= \gamma\left[\left(x_\gamma - \frac{\gamma}{f(x_\gamma)} - C\right)^2 + \left(x_{1-\gamma} + \frac{\gamma}{f(x_{1-\gamma})} - C\right)^2\right] \\
&\qquad + \int_{x_\gamma}^{x_{1-\gamma}}\left[(x - \mu_w)^2 + 2(x - \mu_w)\left(\frac{\gamma^2}{f(x_\gamma)} - \frac{\gamma^2}{f(x_{1-\gamma})}\right)\right] dF(x)
 + \int_{x_\gamma}^{x_{1-\gamma}}\left(\frac{\gamma^2}{f(x_\gamma)} - \frac{\gamma^2}{f(x_{1-\gamma})}\right)^2 dF(x) \\
&= \gamma\left[\left(x_\gamma - \frac{\gamma}{f(x_\gamma)} - C\right)^2 + \left(x_{1-\gamma} + \frac{\gamma}{f(x_{1-\gamma})} - C\right)^2\right]
 + (1-2\gamma)\left(\frac{\gamma^2}{f(x_\gamma)} - \frac{\gamma^2}{f(x_{1-\gamma})}\right)^2 \\
&\qquad + (1-2\gamma)\,\sigma^2_{\gamma,w}
 + 2\left(\frac{\gamma^2}{f(x_\gamma)} - \frac{\gamma^2}{f(x_{1-\gamma})}\right)\big[(1-2\gamma)\mu_t - \gamma\mu_w\big] \qquad (2.12)
\end{aligned}
\]


where

\[
C = \mu_w - \frac{\gamma^2}{f(x_\gamma)} + \frac{\gamma^2}{f(x_{1-\gamma})}.
\]

Note that V may be difficult to obtain because f(xγ) is often unknown.

Proceeding in the same manner as for the trimmed mean, it can be checked that the finite sample breakdown point of the two-sided Winsorized mean is ([nγ] + 1)/n.

2.3.2 Estimating the standard error of the Winsorized mean

We use two methods for estimating the standard error of the Winsorized mean $\bar{X}_w$. The first method is based on the influence function of µw, introduced in section 1.3.4. The influence function estimate of the standard error of the Winsorized mean has a somewhat complicated form ; it depends on the values of f(xγ) and f(x1−γ). The other method uses the bootstrap ; see section 2.5.2.

2.4 The average of symmetric quantiles

2.4.1 Estimating the average of symmetric quantiles

Let θγ denote the average of the γ and 1 − γ quantiles. For 0 < γ < 1/2, the sample quantile

\[
\hat{x}_\gamma = X_{(m)}
\]

is often used as an estimate of the quantile xγ, where m = [γn]. A first estimate of θγ is defined by

\[
T_{1n} = \tfrac{1}{2}(\hat{x}_\gamma + \hat{x}_{1-\gamma}).
\]


Another estimate of xγ is the Harrell-Davis estimate (Harrell and Davis, 1982). Let Y have a beta distribution with parameters α = (n + 1)γ and β = (n + 1)(1 − γ), and let

\[
U_i = P\left(\frac{i-1}{n} \le Y \le \frac{i}{n}\right).
\]

Then the Harrell-Davis estimate of the γth quantile is defined to be

\[
\hat{\theta}_\gamma = \sum_{i=1}^{n} U_i X_{(i)}.
\]

Again, this is a linear combination of order statistics. In the same way, $\hat{\theta}_{1-\gamma} = \sum_{i=1}^{n} V_i X_{(i)}$, where $V_i = P\big(\tfrac{i-1}{n} \le G \le \tfrac{i}{n}\big)$ and G has a beta distribution with parameters a = (n + 1)(1 − γ) and b = (n + 1)γ. A second estimate of θγ is then obtained as

\[
T_{2n} = \tfrac{1}{2}\big(\hat{\theta}_\gamma + \hat{\theta}_{1-\gamma}\big)
= \tfrac{1}{2}\left(\sum_{i=1}^{n} U_i X_{(i)} + \sum_{i=1}^{n} V_i X_{(i)}\right)
= \tfrac{1}{2}\sum_{i=1}^{n} (U_i + V_i) X_{(i)}. \qquad (2.13)
\]
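The Harrell-Davis weights and T2n can be sketched as follows (Python with numpy and scipy assumed ; the function names are ours, not from the thesis) :

```python
import numpy as np
from scipy.stats import beta

def harrell_davis(x, gamma):
    """Harrell-Davis estimate of the gamma-th quantile: a weighted sum
    of the order statistics, the weight U_i being the probability a
    Beta((n+1)*gamma, (n+1)*(1-gamma)) variable falls in [(i-1)/n, i/n]."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    a, b = (n + 1) * gamma, (n + 1) * (1 - gamma)
    u = beta.cdf(i / n, a, b) - beta.cdf((i - 1) / n, a, b)
    return np.dot(u, x)

def t2n(x, gamma):
    """T_2n of (2.13): average of the Harrell-Davis estimates of the
    gamma and (1 - gamma) quantiles."""
    return 0.5 * (harrell_davis(x, gamma) + harrell_davis(x, 1 - gamma))
```

For a sample that is symmetric about a point, both estimates return the center of symmetry, which is a quick sanity check on the weights.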

Assuming that for every F ∈ $\mathcal{F}$, F has a continuous, symmetric, nonzero density at xγ and at x1−γ, consistency holds for the two previous estimates, that is,

\[
T_{in} \xrightarrow{p} T(F), \qquad F \in \mathcal{F}, \quad i = 1, 2.
\]

Asymptotic normality for T1n takes the form :

\[
n^{1/2}\big[T_{1n} - T(F)\big] \xrightarrow{d} N(0, V(T, F)), \qquad F \in \mathcal{F},
\]

where $V(T, F) = \dfrac{\gamma}{2 f(x_\gamma)^2}$ (Staudte and Sheather, 1990, p. 103). Most of the time, f is unknown, so we have to find an estimate of f(xγ), which is not convenient. Under mild assumptions on F (David, 1981, p. 273),

\[
n^{1/2}\big[T_{2n} - T(F)\big] \xrightarrow{d} N(0, V(T, F)), \qquad F \in \mathcal{F},
\]

also holds.

The finite sample breakdown point of T1n is easy to obtain, its value being ([γn] + 1)/n ; that of T2n appears to be unknown.

2.4.2 Estimating standard errors for the average of symmetric quantiles

The bootstrap method will be used ; it is presented in the next section.

2.5 Estimating standard errors with the bootstrap

The concept of the bootstrap was introduced by Efron (1979a). It is one of a large class of statistical methods that resample from the original data. The method is easy to apply with a computer and avoids complicated mathematical computation. In this chapter, we concentrate on the bootstrap estimate of the standard error. Let F be an unknown distribution. The notation

\[
F \longrightarrow (x_1, x_2, \cdots, x_n)
\]

stands for an independent and identically distributed sample drawn from F (Efron & Tibshirani, 1993, p. 9). We want to estimate a statistical functional,


which we write θ = θ(F). This could be any of the location functionals of Chapter 1 or, in this section, the standard error of an estimate. Let an estimate of θ be written $\hat{\theta} = s(\mathbf{x})$, where $\mathbf{x} = (x_1, x_2, \cdots, x_n)$. In general, there is no formula to compute the standard error of the estimate of a statistical functional, except for the mean. The bootstrap aims at solving this problem. A bootstrap sample is obtained by randomly sampling n times, with replacement, from $\hat{F}_n$. Let us write such a sample as $\mathbf{x}^* = (x^*_1, x^*_2, \cdots, x^*_n)$ :

\[
\hat{F}_n \longrightarrow \mathbf{x}^* = (x^*_1, x^*_2, \cdots, x^*_n).
\]

For each bootstrap sample, we calculate $\hat{\theta}^* = s(\mathbf{x}^*)$. Repeating this process B times, we get $s(\mathbf{x}^{*1}), s(\mathbf{x}^{*2}), \cdots, s(\mathbf{x}^{*B})$. Let

\[
s(\cdot) = \frac{\sum_{i=1}^{B} s(\mathbf{x}^{*i})}{B}.
\]

The above B bootstrap replications provide an estimate of the standard error of $\hat{\theta}$ :

\[
\widehat{se}_B = \left\{\frac{1}{B-1}\sum_{i=1}^{B}\big(s(\mathbf{x}^{*i}) - s(\cdot)\big)^2\right\}^{1/2},
\]

which we call the bootstrap estimate of the standard error. In summary, the bootstrap estimate is obtained in three stages (Efron & Tibshirani, 1993, p. 47) :

1. Generate B bootstrap samples $\mathbf{x}^{*1}, \mathbf{x}^{*2}, \cdots, \mathbf{x}^{*B}$ from the original observations, where in practice 25 ≤ B ≤ 200 ;

2. Calculate the corresponding estimate for each bootstrap sample, $s(\mathbf{x}^{*1}), s(\mathbf{x}^{*2}), \cdots, s(\mathbf{x}^{*B})$ ;


3. Estimate the standard error by using the formula

\[
\widehat{se}_B = \left\{\frac{1}{B-1}\sum_{i=1}^{B}\big(s(\mathbf{x}^{*i}) - s(\cdot)\big)^2\right\}^{1/2},
\qquad \text{where } s(\cdot) = \frac{\sum_{i=1}^{B} s(\mathbf{x}^{*i})}{B}.
\]
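The three stages above can be sketched as one generic function (Python with numpy assumed ; `bootstrap_se` is our name for it) :

```python
import numpy as np

def bootstrap_se(x, stat, B=200, seed=0):
    """Bootstrap estimate of the standard error of stat(x): draw B
    samples of size n with replacement from the data, apply stat to
    each, and take the standard deviation (divisor B - 1) of the
    B replications."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    reps = [stat(rng.choice(x, size=len(x), replace=True))
            for _ in range(B)]
    return np.std(reps, ddof=1)

# Example: bootstrap standard error of the sample mean, which can be
# compared with the exact formula s / sqrt(n).
rng = np.random.default_rng(1)
sample = rng.normal(size=50)
print(bootstrap_se(sample, np.mean, B=200))
print(sample.std(ddof=1) / np.sqrt(50))
```

Because `stat` is an arbitrary function, the same routine serves the trimmed mean, the Winsorized mean and the symmetric-quantile averages of the following subsections.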

2.5.1 Bootstrap estimate of the standard error of the sample trimmed mean

Following the lines described above, we generate B bootstrap samples according to the empirical distribution $\hat{F}_n$ corresponding to the original observations. We calculate B sample trimmed means, denoted

\[
\bar{X}^{*1}_t, \bar{X}^{*2}_t, \cdots, \bar{X}^{*B}_t.
\]

From the above, the bootstrap estimate of the standard error is

\[
\widehat{se}_B = \left\{\frac{1}{B-1}\sum_{i=1}^{B}\big(\bar{X}^{*i}_t - s(\cdot)\big)^2\right\}^{1/2},
\qquad \text{where } s(\cdot) = \frac{\sum_{i=1}^{B} \bar{X}^{*i}_t}{B}.
\]

2.5.2 Bootstrap estimate of the standard error of the sample Winsorized mean

In the same way, we select B bootstrap samples, then calculate B sample Winsorized means

\[
\bar{X}^{*1}_w, \bar{X}^{*2}_w, \cdots, \bar{X}^{*B}_w.
\]

The bootstrap estimate of the standard error of $\bar{X}_w$ is then

\[
\widehat{se}_B = \left\{\frac{1}{B-1}\sum_{i=1}^{B}\big(\bar{X}^{*i}_w - s(\cdot)\big)^2\right\}^{1/2},
\]


where $s(\cdot) = \dfrac{\sum_{i=1}^{B} \bar{X}^{*i}_w}{B}$.

2.5.3 Bootstrap estimate of the standard error of the average of symmetric quantiles

We have two bootstrap estimates of the standard error of the average of symmetric quantiles. The two procedures are as follows :

1) For the first estimate T1n, after selecting B bootstrap samples, we calculate B times $T^*_{1n} = \tfrac{1}{2}(\hat{x}^*_\gamma + \hat{x}^*_{1-\gamma})$, where $\hat{x}^*_\gamma = X^*_{(m)}$ and $\hat{x}^*_{1-\gamma} = X^*_{(p)}$, p = [(1 − γ)n], yielding

\[
T^{*1}_{1n}, T^{*2}_{1n}, \cdots, T^{*B}_{1n}.
\]

The bootstrap estimate of the standard error of T1n is then

\[
\widehat{se}_B = \left\{\frac{1}{B-1}\sum_{i=1}^{B}\big(T^{*i}_{1n} - s(\cdot)\big)^2\right\}^{1/2},
\qquad \text{where } s(\cdot) = \frac{\sum_{i=1}^{B} T^{*i}_{1n}}{B}.
\]

2) For the second estimate T2n, the process starts in the same fashion as above. We calculate B times $T^*_{2n} = \tfrac{1}{2}\big(\hat{\theta}^*_\gamma + \hat{\theta}^*_{1-\gamma}\big)$, where $\hat{\theta}^*_\gamma = \sum_{i=1}^{n} U_i X^*_{(i)}$ and $\hat{\theta}^*_{1-\gamma} = \sum_{i=1}^{n} V_i X^*_{(i)}$, which yields

\[
T^{*1}_{2n}, T^{*2}_{2n}, \cdots, T^{*B}_{2n}.
\]

The bootstrap estimate of the standard error of the Harrell-Davis estimate is then

\[
\widehat{se}_B = \left\{\frac{1}{B-1}\sum_{i=1}^{B}\big(T^{*i}_{2n} - s(\cdot)\big)^2\right\}^{1/2},
\qquad \text{where } s(\cdot) = \frac{\sum_{i=1}^{B} T^{*i}_{2n}}{B}.
\]


Chapter 3

Robust Inference in the One-Sample Problem

3.1 Introduction

Let θ = T(F) be a statistical location functional and let $\hat{\theta}$ be an estimate of θ. This statistical functional θ = T(F) could be any of the location functionals of Chapter 1, but in this section we only consider the trimmed mean and an average of symmetric quantiles. Let $\widehat{se}(\hat{\theta})$ denote an estimate of the standard error of $\hat{\theta}$. Efron (1993) gave a clear presentation of bootstrap confidence interval estimation for θ. Throughout this chapter, a great deal of the inference is based on the bootstrap.

3.2 Confidence intervals with the bootstrap

Let $\hat{\theta}^* = s(\mathbf{x}^*)$ be a bootstrap estimate of θ based on a bootstrap sample and let $\widehat{se}(\hat{\theta}^*)$ be an estimate of the standard error of $\hat{\theta}$ based on a bootstrap sample, as in Chapter 2.


3.2.1 The technique of the percentile bootstrap

Following the lines described in Chapter 2, we generate B bootstrap samples according to the empirical distribution $\hat{F}_n$ corresponding to the original observations. We calculate B bootstrap estimates, denoted $\hat{\theta}^{*i} = s(\mathbf{x}^{*i})$, 1 ≤ i ≤ B. Efron & Tibshirani (1993, p. 170) describe percentile bootstrap confidence intervals for θ. Computation goes as follows :

1. generate B bootstrap samples $\mathbf{x}^{*1}, \mathbf{x}^{*2}, \cdots, \mathbf{x}^{*B}$ from the original observations, where in practice B ≤ 1000 ;

2. calculate the corresponding estimate for each bootstrap sample : $s(\mathbf{x}^{*1}), s(\mathbf{x}^{*2}), \cdots, s(\mathbf{x}^{*B})$ ;

3. put the values $s(\mathbf{x}^{*1}), s(\mathbf{x}^{*2}), \cdots, s(\mathbf{x}^{*B})$ in ascending order, yielding $s(\mathbf{x}^{*(1)}), s(\mathbf{x}^{*(2)}), \cdots, s(\mathbf{x}^{*(B)})$ ;

4. set l = [αB/2] and u = [(1 − α/2)B], where 0 < α < 1/2.

The percentile two-sided (1 − α)-bootstrap confidence interval for θ is defined as

\[
\big(s(\mathbf{x}^{*(l)}),\; s(\mathbf{x}^{*(u)})\big). \qquad (3.14)
\]
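Steps 1–4 can be sketched as follows (Python with numpy assumed ; the function name is ours) :

```python
import numpy as np

def percentile_ci(x, stat, alpha=0.05, B=1000, seed=0):
    """Percentile bootstrap (1 - alpha) confidence interval: sort the
    B bootstrap replications and take the l = [alpha*B/2]-th and
    u = [(1 - alpha/2)*B]-th order statistics."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    reps = np.sort([stat(rng.choice(x, size=len(x), replace=True))
                    for _ in range(B)])
    l = int(alpha * B / 2)
    u = int((1 - alpha / 2) * B)
    return reps[l - 1], reps[u - 1]   # l-th and u-th smallest replications

rng = np.random.default_rng(2)
sample = rng.normal(loc=5.0, size=50)
low, up = percentile_ci(sample, np.mean)
print(low, up)
```

No standard-error estimate is needed here, which is the main practical appeal of the percentile method.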

3.2.2 The technique of the bootstrap-t

This is another approach to obtaining a confidence interval for θ. It is performed as follows (Efron & Tibshirani, 1993, p. 160) :

1. for the B bootstrap samples, calculate the estimates $s(\mathbf{x}^{*1}), s(\mathbf{x}^{*2}), \cdots, s(\mathbf{x}^{*B})$ and the corresponding estimates of the standard error :

\[
\widehat{se}\big(s(\mathbf{x}^{*1})\big), \widehat{se}\big(s(\mathbf{x}^{*2})\big), \cdots, \widehat{se}\big(s(\mathbf{x}^{*B})\big) ;
\]


2. define

\[
T^{*(i)} = \frac{s(\mathbf{x}^{*i}) - \hat{\theta}}{\widehat{se}\big(s(\mathbf{x}^{*i})\big)}, \qquad i = 1, 2, \cdots, B,
\]

and obtain $T^{*(1)}, T^{*(2)}, \cdots, T^{*(B)}$ ;

3. put $T^* = (T^{*(1)}, T^{*(2)}, \cdots, T^{*(B)})$, and find the percentiles of $T^*$. Given 0 < α < 1/2, if Bα/2 is an integer, the (α/2)th percentile of $T^*$ is the value $t^{(\alpha/2)}$ such that

\[
\#\{T^{*(i)} \le t^{(\alpha/2)}\}/B = \alpha/2,
\]

and the (1 − α/2)th percentile of $T^*$ is the value $t^{(1-\alpha/2)}$ such that

\[
\#\{T^{*(i)} \le t^{(1-\alpha/2)}\}/B = 1 - \alpha/2.
\]

In case Bα/2 is not an integer, let k = [(B + 1)α/2]. Let the empirical α/2 and 1 − α/2 quantiles be the kth largest and (B + 1 − k)th largest components of $T^*$, respectively.

The bootstrap-t two-sided (1 − α) confidence interval for θ is defined by

\[
\big(\hat{\theta} - t^{(1-\alpha/2)}\,\widehat{se}(\hat{\theta}),\; \hat{\theta} - t^{(\alpha/2)}\,\widehat{se}(\hat{\theta})\big). \qquad (3.15)
\]

In general, we have both computational and interpretive problems when using the bootstrap-t procedure. The first problem is that we require $\widehat{se}\big(s(\mathbf{x}^{*i})\big)$ for each bootstrap sample. Most of the time, there is no exact formula allowing us to calculate these standard errors. We can, however, use the bootstrap method as in Chapter 2 : draw B1 samples to calculate the $s(\mathbf{x}^{*i})$ and then, for each such bootstrap sample $\mathbf{x}^{*i}$, draw B2 new samples to compute $\widehat{se}\big(s(\mathbf{x}^{*i})\big)$. This is hard work, but it will be done next. The second problem is that when the sample size is small, the bootstrap-t procedure may give inaccurate results (Efron & Tibshirani, 1993, p. 162). We do not discuss this topic further.

The quantity $T = \dfrac{\hat{\theta} - \theta}{\widehat{se}(\hat{\theta})}$ is called an approximate pivot. Its distribution is approximately the same for each value of θ (Efron & Tibshirani, 1993, p. 161). This is what allows us to construct the confidence interval for θ from the distribution of the $T^{*(i)}$, i = 1, 2, · · · , B.
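The recipe can be sketched as follows (Python with numpy ; the names are ours). To sidestep the nested bootstrap, this illustration studentizes the sample mean with the explicit formula s/√n ; for the trimmed mean one would plug in the Winsorized-variance formula of Chapter 2 instead.

```python
import numpy as np

def bootstrap_t_ci(x, stat, se, alpha=0.05, B=999, seed=0):
    """Bootstrap-t (1 - alpha) confidence interval: studentize each
    bootstrap replication with its own standard-error estimate and
    invert the empirical quantiles of T* = (stat(x*) - stat(x)) / se(x*)."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    theta, se_hat = stat(x), se(x)
    t_star = np.sort([(stat(xs) - theta) / se(xs)
                      for xs in (rng.choice(x, size=len(x), replace=True)
                                 for _ in range(B))])
    k = int((B + 1) * alpha / 2)
    t_lo, t_hi = t_star[k - 1], t_star[B - k]  # empirical alpha/2, 1 - alpha/2 quantiles
    return theta - t_hi * se_hat, theta - t_lo * se_hat

rng = np.random.default_rng(3)
sample = rng.normal(loc=10.0, size=40)
low, up = bootstrap_t_ci(sample, np.mean,
                         lambda z: z.std(ddof=1) / np.sqrt(len(z)))
print(low, up)
```

Note that the lower endpoint uses the upper t-quantile and vice versa, exactly as in (3.15).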

3.3 Inference on the trimmed mean functional

3.3.1 The bootstrap percentile interval for the 2γ-trimmed mean

For the 2γ-trimmed mean, written θ = µt and estimated by $\hat{\theta} = s(\mathbf{x}) = \bar{X}_t$, the bootstrap percentile interval is obtained as follows :

1. calculate the sample trimmed mean $\bar{X}^*_t$ for each of the B bootstrap samples, yielding $\bar{X}^{*j}_t$, j = 1, 2, · · · , B ;

2. then the approximate two-sided (1 − α) bootstrap percentile interval for µt is

\[
\big(\bar{X}^{*(l)}_t,\; \bar{X}^{*(u)}_t\big), \qquad (3.16)
\]

where l = [αB/2] and u = [(1 − α/2)B].


3.3.2 The bootstrap-t interval for µt

Recall the sample Winsorized variance defined by $s^2_w = \frac{1}{n-1}\sum (W_i - \bar{W})^2$. In Chapter 2, we saw that

\[
\widehat{se}(\bar{X}_t) = \frac{s_w}{(1-2\gamma)\sqrt{n}}.
\]

Let $(s^*_w)^2$ be the estimate based on a bootstrap sample, so that

\[
\widehat{se}(\bar{X}^*_t) = \frac{s^*_w}{(1-2\gamma)\sqrt{n}}. \qquad (3.17)
\]

Let

\[
T_t = \frac{\hat{\theta} - \theta}{\widehat{se}(\hat{\theta})}
= \frac{\bar{X}_t - \mu_t}{\widehat{se}(\bar{X}_t)}
= \frac{(1-2\gamma)\sqrt{n}\,(\bar{X}_t - \mu_t)}{s_w}.
\]

The confidence interval estimate is obtained as follows (Wilcox, 1997, p. 79) :

1. calculate the sample trimmed mean $\bar{X}^*_t$ for each bootstrap sample, and then

\[
T^*_t = \frac{\bar{X}^*_t - \bar{X}_t}{\widehat{se}(\bar{X}^*_t)}
= \frac{(1-2\gamma)\sqrt{n}\,(\bar{X}^*_t - \bar{X}_t)}{s^*_w} ;
\]

2. find the α/2 and 1 − α/2 percentiles of the $T^{*(i)}_t$, i = 1, 2, · · · , B, estimated by $T^{*(\alpha/2)}_t$ and $T^{*(1-\alpha/2)}_t$ respectively. Then an approximate two-sided (1 − α) confidence interval for µt is

\[
\big(\bar{X}_t - T^{*(1-\alpha/2)}_t\,\widehat{se}(\bar{X}_t),\; \bar{X}_t - T^{*(\alpha/2)}_t\,\widehat{se}(\bar{X}_t)\big)
= \left(\bar{X}_t - \frac{T^{*(1-\alpha/2)}_t s_w}{(1-2\gamma)\sqrt{n}},\; \bar{X}_t - \frac{T^{*(\alpha/2)}_t s_w}{(1-2\gamma)\sqrt{n}}\right) \qquad (3.18)
\]


We can use the correspondence between confidence intervals and tests of hypotheses to derive a two-sided test of

\[
H_0 : \mu_t = \mu_0 \quad \text{versus} \quad H_1 : \mu_t \ne \mu_0,
\]

where µ0 is known. It goes as follows : for a given significance level α, the null hypothesis is rejected if $T_t < T^{*(\alpha/2)}_t$ or $T_t > T^{*(1-\alpha/2)}_t$. Equivalently, we reject H0 if

\[
\mu_0 \notin \left(\bar{X}_t - \frac{T^{*(1-\alpha/2)}_t s_w}{(1-2\gamma)\sqrt{n}},\; \bar{X}_t - \frac{T^{*(\alpha/2)}_t s_w}{(1-2\gamma)\sqrt{n}}\right).
\]

3.3.3 2γ-trimmed t test

There is another method to obtain a confidence interval for µt which does not use the bootstrap. Let

\[
T_t = \frac{\bar{X}_t - \mu_t}{\widehat{se}(\bar{X}_t)} = \frac{(1-2\gamma)\sqrt{n}\,(\bar{X}_t - \mu_t)}{s_w},
\]

where again $s^2_w$ is the sample Winsorized variance. If we assume that, as n → ∞,

\[
T_t \xrightarrow{d} N(0, 1),
\]

then an approximate (1 − α) confidence interval for µt is given by (Staudte and Sheather, 1990, p. 190)

\[
\bar{X}_t \pm \frac{z^{(1-\alpha/2)} s_w}{(1-2\gamma)\sqrt{n}}. \qquad (3.19)
\]

However, Tukey and McLaughlin (1963) observed that

\[
T_t = \frac{(1-2\gamma)\sqrt{n}\,(\bar{X}_t - \mu_t)}{s_w}
\]

is approximately distributed as a Student t distribution having n − 2g − 1 degrees of freedom, where g = [γn]. This yields an approximate (1 − α) confidence interval for µt, that is,

\[
\bar{X}_t \pm \frac{t^{(1-\alpha/2)} s_w}{(1-2\gamma)\sqrt{n}}, \qquad (3.20)
\]

where $t^{(1-\alpha/2)}$ is the 1 − α/2 quantile of Student's t distribution with n − 2g − 1 degrees of freedom.

According to the usual correspondence between confidence intervals and tests of hypotheses, when testing

\[
H_0 : \mu_t = \mu_0 \quad \text{versus} \quad H_1 : \mu_t \ne \mu_0,
\]

we reject H0 if and only if

\[
\mu_0 \notin \bar{X}_t \pm \frac{t^{(1-\alpha/2)} s_w}{(1-2\gamma)\sqrt{n}}.
\]
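The Tukey-McLaughlin interval (3.20) can be sketched in code (Python with numpy and scipy assumed ; the function name is ours) :

```python
import numpy as np
from scipy.stats import t as student_t

def trimmed_t_ci(x, gamma, alpha=0.05):
    """Two-sided (1 - alpha) interval for the trimmed mean:
    X_t plus/minus t_(1-alpha/2) * s_w / ((1 - 2*gamma) * sqrt(n)),
    Student's t with n - 2g - 1 degrees of freedom, g = [gamma*n]."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    g = int(gamma * n)
    xbar_t = x[g:n - g].mean()          # sample trimmed mean
    w = np.clip(x, x[g], x[n - g - 1])  # Winsorized sample, as in (2.9)
    sw = w.std(ddof=1)                  # Winsorized standard deviation
    tq = student_t.ppf(1 - alpha / 2, n - 2 * g - 1)
    half = tq * sw / ((1 - 2 * gamma) * np.sqrt(n))
    return xbar_t - half, xbar_t + half
```

The returned interval is symmetric about the sample trimmed mean, in contrast to the generally asymmetric bootstrap-t interval (3.18).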

3.4 Confidence interval estimation for an average of symmetric quantiles

3.4.1 Introduction

Recall that two estimates were introduced for studying the average of symmetric quantiles

\[
T(F) = \theta_\gamma = \tfrac{1}{2}(x_\gamma + x_{1-\gamma}), \qquad 0 < \gamma < \tfrac{1}{2}.
\]

These are

\[
T_{1n} = \tfrac{1}{2}(\hat{x}_\gamma + \hat{x}_{1-\gamma})
\qquad \text{and} \qquad
T_{2n} = \tfrac{1}{2}\sum_{i=1}^{n} (U_i + V_i)\, X_{(i)}.
\]

Let $\widehat{se}(T_{in})$, i = 1, 2, be an estimate of the standard error of Tin, and let $\widehat{se}(T^*_{in})$, i = 1, 2, be the corresponding estimate based on a bootstrap sample.

3.4.2 The bootstrap percentile interval for θγ

Based on each of the estimates Tin, i = 1, 2, of θγ, a bootstrap percentile interval for θγ takes the form :

\[
\big(T^{*(l)}_{in},\; T^{*(u)}_{in}\big), \qquad i = 1, 2,
\]

where we recall that l = [αB/2], u = [(1 − α/2)B] and B is the number of bootstrap replications.

Again, when testing

\[
H_0 : \theta_\gamma = \theta_0 \quad \text{versus} \quad H_1 : \theta_\gamma \ne \theta_0,
\]

where θ0 is known, one rejects H0 if and only if

\[
\theta_0 \notin \big(T^{*(l)}_{in},\; T^{*(u)}_{in}\big), \qquad i = 1, 2.
\]

3.4.3 The bootstrap-t interval for θγ

For each of the estimates Tin, i = 1, 2, the bootstrap-t interval for θγ is based on the calculation of

\[
Z^*_i = \frac{T^*_{in} - T_{in}}{\widehat{se}(T^*_{in})}, \qquad i = 1, 2,
\]

for each bootstrap replication. An approximate two-sided (1 − α) confidence interval for θγ is then given by

\[
\big(T_{in} - z^i_{(1-\alpha/2)}\,\widehat{se}(T_{in}),\; T_{in} - z^i_{(\alpha/2)}\,\widehat{se}(T_{in})\big), \qquad i = 1, 2, \qquad (3.21)
\]

where $z^i_{(\alpha/2)}$ and $z^i_{(1-\alpha/2)}$ are the α/2 and 1 − α/2 percentiles of the values $Z^{*j}_i$, j = 1, 2, · · · , B1, i = 1, 2, respectively.

Wilcox (1997, p. 79) presented a modified equation for the 2γ-trimmed mean. We will use a similar method for Tin, i = 1, 2. The modified equation is as follows :

\[
\big(T_{in} + z^i_{(\alpha/2)}\,\widehat{se}(T_{in}),\; T_{in} + z^i_{(1-\alpha/2)}\,\widehat{se}(T_{in})\big), \qquad i = 1, 2 ; \qquad (3.22)
\]

it can be used since $z^i_{(\alpha/2)}$ is negative. The difference between equations (3.21) and (3.22) is that the corresponding term is added to, not subtracted from, Tin, i = 1, 2.

The bootstrap estimate of the standard error of Tin, i = 1, 2, is, as in Chapter 2 (section 5.1), defined by

\[
\widehat{se}(T^*_{in}) = \left\{\frac{1}{B_2-1}\sum_{j=1}^{B_2}\big(T^{*j}_{in} - s(\cdot)\big)^2\right\}^{1/2},
\qquad \text{where } s(\cdot) = \frac{\sum_{j=1}^{B_2} T^{*j}_{in}}{B_2}, \quad i = 1, 2.
\]

When testing

\[
H_0 : \theta_\gamma = \theta_0 \quad \text{versus} \quad H_1 : \theta_\gamma \ne \theta_0,
\]

we reject H0 if and only if

\[
\theta_0 \notin \big(T_{in} + z^i_{(\alpha/2)}\,\widehat{se}(T_{in}),\; T_{in} + z^i_{(1-\alpha/2)}\,\widehat{se}(T_{in})\big), \qquad i = 1, 2.
\]


3.5 Comparison and application

Applying the previous notions, an example is now given. A large Belgian insurance company provided the following data, which show monthly payments made in 1979 at the end of the period of life-insurance contracts. The payments are given as a percentage of the total amount in 1979. This data set is taken from Rousseeuw et al. (1984a, p. 19). There are 12 observations (x, y), where x = month and y = payment.

x    1     2     3     4     5     6     7      8     9     10    11    12
y    3.22  9.62  4.50  4.94  4.02  4.20  11.24  4.53  3.05  3.76  4.23  42.69

Tab. 3.1 – Monthly payments in 1979
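The basic location measures reported below can be reproduced with a few lines of Python (numpy assumed ; a check of ours, not part of the thesis). Only December's payment (42.69) separates the mean from the resistant median.

```python
import numpy as np

# Monthly life-insurance payments for 1979, in percent of the yearly
# total (Rousseeuw et al., 1984a); the twelve values sum to 100.
payments = np.array([3.22, 9.62, 4.50, 4.94, 4.02, 4.20,
                     11.24, 4.53, 3.05, 3.76, 4.23, 42.69])

print("mean   =", payments.mean())      # 8.333..., pulled up by December
print("median =", np.median(payments))  # 4.365, resistant to the outlier
```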

Figure 3.1 represents the data point cloud and boxplot. In Figure 3.2, the boxplot shows that there are three outlying observations.


[Figure : scatterplot of payment versus month, together with a boxplot of the payments.]

Fig. 3.1 – Data point cloud and boxplot

Applying SAS yields :

Basic Statistical Measures

    Location                  Variability
    Mean     8.333333         Std Deviation    11.11371
    Median   4.365000         Variance        123.51464

    Quantile        Estimate
    100% Max          42.690
     99%              42.690
     95%              42.690
     90%              11.240
     75% Q3            7.280
     50% Median        4.365
     25% Q1            3.890
     10%               3.220
      5%               3.050
      1%               3.050
      0% Min           3.050

Tests for Normality

    Test                  --Statistic---     -----p Value------
    Shapiro-Wilk          W      0.494751    Pr < W      <0.0001
    Kolmogorov-Smirnov    D      0.369942    Pr > D      <0.0100
    Cramer-von Mises      W-Sq   0.49542     Pr > W-Sq   <0.0050
    Anderson-Darling      A-Sq   2.606234    Pr > A-Sq   <0.0050

[Normal probability plot : the largest observation, 42.69, lies far above the normal reference line.]


According to the tests, the distribution is nonnormal, and this is confirmed by the normal probability plot.

The results of the various methods are listed in the following tables. In Table 3.2, (L, U) represents a two-sided 95% confidence interval using Student's t distribution with n − 1 degrees of freedom, where n is the number of observations. In Table 3.3, (L, U) represents an approximate two-sided 95% confidence interval using an approximate Student's t distribution with n − 2g − 1 degrees of freedom, where g = [γn] ; (Lb, Ub) is an approximate two-sided 95% percentile bootstrap confidence interval and (Lbt, Ubt) is an approximate two-sided 95% bootstrap-t confidence interval.

Method        θ̂      SE(θ̂)    L      U       Lb     Ub      Lbt    Ubt
Student's t   8.33    3.21     1.27   15.39   4.36   15.31   3.68   33.83

Tab. 3.2 – Estimates of location, of standard errors and 95% confidence bounds with Student's t distribution in the case of one sample.

From Tables 3.2 and 3.3, it is seen that Student's t method is much more affected by the presence of outliers than the robust methods. Using 2 × 12.5% trimming, the trimmed mean is the most robust procedure among these methods. Its variance is the smallest and its confidence intervals are relatively short. For Tin, i = 1, 2, the values of γ have to be bigger than those used for trimmed means ; in this example, we took γ = 0.3, 0.35 and 0.4. In Table 3.3, all standard errors are smaller than that of the mean. The above methods thus appear robust as compared with the standard method based on the sample mean.


Method              θ     SE(θ)  L     U      Lb    Ub     Lbt   Ubt
2 × 5% tr. mean     5.43  1.11   2.92  7.93   3.96  13.71  3.77  14.85
2 × 10% tr. mean    4.98  1.19   2.15  7.80   3.89  11.33  3.24  12.79
2 × 12.5% tr. mean  4.41  0.24   3.80  5.01   3.86  7.83   4.01  5.17
30% T1n             4.14  1.25   0.92  7.36   3.62  7.14   0.62  4.75
35% T1n             4.26  0.98   1.14  7.38   3.62  7.06   0.43  5.13
40% T1n             4.26  0.98   1.14  7.38   3.62  7.06   0.43  5.13
30% T2n             5.60  2.43   0.25  10.95  3.99  13.36  4.10  12.86
35% T2n             5.05  1.88   0.91  9.19   3.98  11.06  4.13  10.19
40% T2n             4.74  1.54   1.35  8.12   3.95  9.45   4.12  8.39

Tab. 3.3 – Estimates of location, standard errors and 95% confidence bounds with respect to various robust methods.


Chapter 4

Robust Inference in the Two-Sample Problem

4.1 Introduction

Suppose that we have two populations: the X1-population having distribution F and the X2-population having distribution G. Assume that θ1 = T(F) is a location functional for the X1-population and let θ̂1 be an estimate of θ1. In the same way, assume that θ2 = T(G) is a location functional for the X2-population and let θ̂2 be an estimate of θ2. We want to test whether these two location functionals differ or not. The hypotheses are:

H0 : θ1 = θ2 versus H1 : θ1 ≠ θ2.

We will also be interested in constructing two-sided (1 − α)-confidence intervals for θ1 − θ2. Such confidence intervals can be constructed when a Studentized estimate θ̂ has a known t distribution or is approximately normal (Staudte and Sheather, 1990, p. 190). In this chapter, after recalling the classical approach to these problems, we discuss methods based on trimmed means and the bootstrap. Both independent samples and dependent samples are considered.

4.2 Two independent samples

4.2.1 Student’s test

It is helpful to review some basic results about Student's test. Let X1 = (X11, · · · , Xn1,1) and X2 = (X12, · · · , Xn2,2) be two independent random samples, where Xi1 ∼ N(µ1, σ1²), 1 ≤ i ≤ n1, and Xi2 ∼ N(µ2, σ2²), 1 ≤ i ≤ n2. Assuming σ1² = σ2² = σ² unknown, the statistic

T = [X̄1 − X̄2 − (µ1 − µ2)] / √{(1/n1 + 1/n2) · [(n1 − 1)s1² + (n2 − 1)s2²]/(n1 + n2 − 2)}

has Student's t distribution with v = n1 + n2 − 2 degrees of freedom, where si², i = 1, 2, are the sample variances. Under the null hypothesis H0 : µ1 = µ2, at size α we reject H0 if |T| > t(1−α/2),v.

A two-sided (1 − α)-confidence interval for µ1 − µ2 is given by

X̄1 − X̄2 ± t(1−α/2),v √{(1/n1 + 1/n2) · [(n1 − 1)s1² + (n2 − 1)s2²]/(n1 + n2 − 2)}.

According to the usual correspondence between confidence intervals and tests of hypotheses, when testing

H0 : µ1 = µ2 versus H1 : µ1 ≠ µ2,

H0 will be rejected if and only if 0 does not belong to this interval.
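As a concrete sketch of this pooled-variance computation, here is a minimal Python illustration (not part of the thesis, whose programs are in S-PLUS; the function names are mine, and the critical value t_{(1−α/2),v} must be supplied externally, e.g. from a t table, since it is not computed here):

```python
import math

def pooled_two_sample_t(x, y):
    """Two-sample pooled-variance t statistic, its standard error and df."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    s1 = sum((v - m1) ** 2 for v in x) / (n1 - 1)   # sample variances
    s2 = sum((v - m2) ** 2 for v in y) / (n2 - 1)
    pooled = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
    se = math.sqrt((1 / n1 + 1 / n2) * pooled)
    t = (m1 - m2) / se                  # statistic under H0: mu1 = mu2
    return t, se, n1 + n2 - 2

def t_interval(x, y, t_crit):
    """Two-sided CI for mu1 - mu2; t_crit = t_{(1-alpha/2),v} is supplied."""
    t, se, df = pooled_two_sample_t(x, y)
    diff = sum(x) / len(x) - sum(y) / len(y)
    return diff - t_crit * se, diff + t_crit * se
```

H0 is then rejected at size α exactly when 0 falls outside the returned interval, matching the correspondence described above.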


Under the same normality assumptions, if n1 = n2 = n ≥ 15 and the variances are not equal, T approximately follows Student's t distribution with 2(n − 1) degrees of freedom (Wilcox, 1997, p. 107). However, under nonnormality with heteroscedastic variances, Student's test provides inaccurate confidence intervals. In fact, as n → ∞ it is no longer true that T converges in distribution to N(0, 1) (Cressie and Whitford, 1986). Under heteroscedasticity, the Yuen-Welch test introduced next gives better results than Student's test over a wide range of situations (Wilcox, 1997, p. 109).

4.2.2 The two-sample Yuen-Welch trimmed mean test

Yuen-Welch’s method was derived by Yuen and Dixon (1973) and fur-

ther developped by Yuen (1974). Starting from two independent samples, this

method considers the hypothesis of equality of the trimmed means. Let µt1

be the population 2γ-trimmed mean of the X1-population and let µt2 be the

population 2γ-trimmed mean of the X2-population. The null hypothesis is :

H0 : µt1 = µt2,

whether the variances are equal or not. Suppose we label the sample trimmed

means for these two samples as X ti, i = 1, 2. After 2γ-trimming, we calculate

the Winsorized sample variances s2wi, i = 1, 2. According to section 2.2 of

Chapter 2, an estimate of the variance of X ti iss2

wi

(1−2γ)2ni. Under the assumption

47

Page 60: ROBUST INFERENCE FOR LOCATION PARAMETERS : ONE- AND … · estimates such as the sample trimmed mean, the sample Winsorized mean and estimates based on symmetric quantiles. Con dence

of independence, we have

V ar(Xt1 − X t2) = V ar(X t1) + V ar(X t2),

and so S2(Xt1 − X t2) can be taken to be

s2w1

(1 − 2γ)2n1+

s2w2

(1 − 2γ)2n2.

Let gi = [γni] be the number of observations that are removed on each side for the ith sample. Further, let mi = ni − 2gi, i = 1, 2, be the number of observations remaining after trimming. Yuen's estimates are alternate estimates for Var(X̄ti), i = 1, 2, defined respectively to be

Aγ = (n1 − 1)s²w1/[m1(m1 − 1)]   and   Bγ = (n2 − 1)s²w2/[m2(m2 − 1)].

The above pairs of estimates give similar results. However, simulations show that Yuen's estimates provide better results about type I error probabilities and probability coverage (Wilcox, 1997, p. 110).

Yuen's test statistic is defined as

Ty = (X̄t1 − X̄t2)/√(Aγ + Bγ).

Under the null hypothesis

H0 : µt1 = µt2,

Yuen's test statistic Ty follows approximately Student's distribution with νy degrees of freedom (Wilcox, 1997, p. 110), where

νy = (Aγ + Bγ)² / [A²γ/(m1 − 1) + B²γ/(m2 − 1)].
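As an illustration only (the thesis's own implementation is the S-PLUS function `yuen` given in the Appendix), the Yuen-Welch computation can be sketched in Python; the function names are mine, and g = [γn] is taken as the floor:

```python
import math

def trim_mean(x, gamma):
    """2*gamma-trimmed mean: drop the g = [gamma*n] smallest and largest values."""
    y = sorted(x)
    g = int(gamma * len(y))
    z = y[g:len(y) - g]
    return sum(z) / len(z)

def win_var(x, gamma):
    """Winsorized sample variance: pull the g extreme values on each side
    back to x_(g+1) and x_(n-g), then take the usual sample variance."""
    y = sorted(x)
    n = len(y)
    g = int(gamma * n)
    w = [y[g]] * g + y[g:n - g] + [y[n - g - 1]] * g
    wbar = sum(w) / n
    return sum((v - wbar) ** 2 for v in w) / (n - 1)

def yuen(x, y, gamma):
    """Yuen-Welch statistic Ty and its approximate degrees of freedom nu_y."""
    n1, n2 = len(x), len(y)
    m1 = n1 - 2 * int(gamma * n1)       # observations left after trimming
    m2 = n2 - 2 * int(gamma * n2)
    A = (n1 - 1) * win_var(x, gamma) / (m1 * (m1 - 1))   # Yuen's A_gamma
    B = (n2 - 1) * win_var(y, gamma) / (m2 * (m2 - 1))   # Yuen's B_gamma
    ty = (trim_mean(x, gamma) - trim_mean(y, gamma)) / math.sqrt(A + B)
    nu = (A + B) ** 2 / (A ** 2 / (m1 - 1) + B ** 2 / (m2 - 1))
    return ty, nu
```

For example, yuen([1, 2, 3, 4, 100], [0, 1, 2, 3, 50], 0.2) gives Ty ≈ 0.866 with νy = 4; |Ty| would then be compared with t(1−α/2),νy.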


At size α, we then reject the null hypothesis if

|Ty| > t(1−α/2),νy,

where t(1−α/2),νy is the 1 − α/2 quantile of Student's distribution with νy degrees of freedom.

4.2.3 Confidence interval estimation based on trimmed means

Confidence interval based on the Yuen-Welch test for µt1 − µt2

On the basis of the Yuen-Welch test, an approximate two-sided (1 − α)-confidence interval for µt1 − µt2 is given by

X̄t1 − X̄t2 ± t(1−α/2),νy √(Aγ + Bγ).

Then, when testing

H0 : µt1 = µt2 versus H1 : µt1 ≠ µt2,

H0 will be rejected if and only if

0 ∉ X̄t1 − X̄t2 ± t(1−α/2),νy √(Aγ + Bγ).

The percentile bootstrap interval for µt1 − µt2

Again, let θ1 = µt1 be the 2γ-trimmed mean of the X1-population, from which we draw a sample X1 = (X11, · · · , Xn1,1), and let θ2 = µt2 be the 2γ-trimmed mean of the X2-population, from which we draw a sample X2 = (X12, · · · , Xn2,2). Suppose that these two samples are independent.

Starting from B bootstrap samples Xi*b, i = 1, 2, calculate the B bootstrap trimmed means X̄ti*b, i = 1, 2. For the estimate

D = X̄t1 − X̄t2,

a bootstrap replication is obtained by

D*b = X̄t1*b − X̄t2*b, b = 1, 2, · · · , B.

An approximate two-sided (1 − α)-percentile bootstrap interval for µt1 − µt2 is obtained by

(D*(l), D*(u)),

where again l = [Bα/2] and u = [(1 − α/2)B].
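A minimal Python sketch of this percentile bootstrap follows (illustrative only; the function names and the fixed seed are my choices, not the thesis's, and g = [γn] is taken as the floor):

```python
import random

def trim_mean(x, gamma):
    """2*gamma-trimmed mean: drop the g = [gamma*n] extreme values on each side."""
    y = sorted(x)
    g = int(gamma * len(y))
    z = y[g:len(y) - g]
    return sum(z) / len(z)

def percentile_boot_diff(x, y, gamma, alpha=0.05, B=1000, seed=2):
    """Percentile bootstrap interval (D*(l), D*(u)) for the difference of
    2*gamma-trimmed means of two independent samples."""
    rng = random.Random(seed)
    reps = []
    for _ in range(B):
        xb = [rng.choice(x) for _ in x]     # resample each sample separately
        yb = [rng.choice(y) for _ in y]
        reps.append(trim_mean(xb, gamma) - trim_mean(yb, gamma))
    reps.sort()
    l = int(B * alpha / 2)                  # l = [B*alpha/2]
    u = int((1 - alpha / 2) * B)            # u = [(1 - alpha/2)B]
    return reps[l], reps[u - 1]
```

The interval is simply the pair of empirical quantiles of the sorted bootstrap replications D*b, exactly as described above.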

The bootstrap-t interval for µt1 − µt2

Efron (1993, p. 224) describes a procedure to generate a bootstrap interval for the difference of two means, µ1 − µ2, when given two independent samples. We will use a similar method for the difference of two 2γ-trimmed means, µt1 − µt2. The details are as follows:

1. Let F̃ be the empirical distribution based on the points X̃i1 = Xi1 − X̄t1 + Z̄t, i = 1, 2, · · · , n1, and G̃ the empirical distribution based on the points X̃j2 = Xj2 − X̄t2 + Z̄t, j = 1, 2, · · · , n2, where Z̄t is the sample 2γ-trimmed mean of the combined sample X1 and X2. Put

U = (X̄t1 − X̄t2)/√(Aγ + Bγ),

where X̄ti is the sample trimmed mean obtained from Xi = (X1i, · · · , Xni,i), i = 1, 2, and Aγ, Bγ are Yuen's estimates of the corresponding variances.

2. Calculate the B bootstrap sample trimmed means X̄t1*b based on the values X̃i1, i = 1, 2, · · · , n1; in the same way, get B bootstrap sample trimmed means X̄t2*b based on the values X̃j2, j = 1, 2, · · · , n2.

3. A bootstrap estimate is defined by

U*b = (X̄t1*b − X̄t2*b)/√(Aγ*b + Bγ*b), b = 1, 2, · · · , B,

where Aγ*b and Bγ*b are Yuen's estimates of the variance of the 2γ-trimmed mean statistics based on the bootstrap sample. More precisely,

Aγ*b = (n1 − 1)s²w1*b/[m1(m1 − 1)]   and   Bγ*b = (n2 − 1)s²w2*b/[m2(m2 − 1)], b = 1, 2, · · · , B.

Denote the α/2 quantile of the U*b's by U*(l) and the corresponding 1 − α/2 quantile by U*(u), where l = [Bα/2] and u = [(1 − α/2)B] as before.

Then an approximate two-sided (1 − α)-bootstrap-t interval for µt1 − µt2 is obtained by (Wilcox, 1997, p. 113)

(X̄t1 − X̄t2 − U*(u)√(Aγ + Bγ), X̄t1 − X̄t2 − U*(l)√(Aγ + Bγ)).   (4.23)

We can also obtain an equal-tailed two-sided (1 − α)-bootstrap-t interval for µt1 − µt2 by using

|U*b| = |X̄t1*b − X̄t2*b|/√(Aγ*b + Bγ*b).

Indeed, an equal-tailed two-sided (1 − α)-bootstrap-t interval for µt1 − µt2 is then given by

(X̄t1 − X̄t2 − |U*(a)|√(Aγ + Bγ), X̄t1 − X̄t2 + |U*(a)|√(Aγ + Bγ)),   (4.24)

where a = [(1 − α/2)B].
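The three steps above can be sketched in Python (purely illustrative, not the thesis's S-PLUS implementation; the null-centering by the combined trimmed mean Z̄t follows the description in step 1, and a guard skips the rare degenerate resample whose estimated variance is zero):

```python
import math
import random

def tmean(x, g):
    """2g-trimmed mean, with [g*n] taken as the floor."""
    y = sorted(x)
    k = int(g * len(y))
    z = y[k:len(y) - k]
    return sum(z) / len(z)

def yuen_var(x, g):
    """Yuen's estimate (n-1)*s_w^2 / [m(m-1)] of the trimmed mean's variance."""
    y = sorted(x)
    n = len(y)
    k = int(g * n)
    w = [y[k]] * k + y[k:n - k] + [y[n - k - 1]] * k   # Winsorized values
    wbar = sum(w) / n
    swin = sum((v - wbar) ** 2 for v in w) / (n - 1)
    m = n - 2 * k
    return (n - 1) * swin / (m * (m - 1))

def boot_t_diff(x, y, g, alpha=0.05, B=599, seed=2):
    """Bootstrap-t interval (4.23) for the difference of 2g-trimmed means."""
    rng = random.Random(seed)
    zt = tmean(list(x) + list(y), g)               # combined trimmed mean
    xc = [v - tmean(x, g) + zt for v in x]         # samples shifted to a common
    yc = [v - tmean(y, g) + zt for v in y]         # trimmed mean (step 1)
    us = []
    for _ in range(B):                             # steps 2 and 3
        xb = [rng.choice(xc) for _ in xc]
        yb = [rng.choice(yc) for _ in yc]
        se = math.sqrt(yuen_var(xb, g) + yuen_var(yb, g))
        if se > 0:                                 # skip degenerate resamples
            us.append((tmean(xb, g) - tmean(yb, g)) / se)
    us.sort()
    nb = len(us)
    l, u = int(nb * alpha / 2), int((1 - alpha / 2) * nb)
    d = tmean(x, g) - tmean(y, g)
    se = math.sqrt(yuen_var(x, g) + yuen_var(y, g))
    return d - us[u - 1] * se, d - us[l] * se
```

Note that the endpoints invert the studentized bootstrap quantiles, so the upper quantile U*(u) produces the lower endpoint, as in (4.23).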


4.2.4 Example and application

Perrotta and Finch reported a study concerning 16 patients with severe renal disease and 10 patients with functional heart disease (Gibbon, J. D., 1997, p. 216). The observations on percentage shift for hematocrit and reticulocyte cell counts were measured. Let x represent a measurement for renal disease and y a measurement for heart disease.

Renal          2.20 1.52 1.54 0.77 0.34 0.45 0.39 0.29 0.18 0.16
               0.23 0.24 0.17 0.08 0.02 0.02
Heart disease  1.84 0.44 0.30 0.06 0.20 0.14 0.10 0.09 0.06 0.04

Tab. 4.4 – Renal and heart disease measurements

Applying SAS to the x-sample and the y-sample yields the following results:

For the Renal disease-sample :

Basic Statistical Measures

Location Variability

Mean 0.537500 Std Deviation 0.64556

Median 0.265000 Variance 0.41675

Quantile Estimate

100% Max 2.200

99% 2.200


95% 2.200

90% 1.540

75% Q3 0.610

50% Median 0.265

25% Q1 0.165

10% 0.020

5% 0.020

1% 0.020

0% Min 0.020

Test for Normality

Test --Statistic--- -----p Value------

Shapiro-Wilk W 0.741174 Pr < W 0.0005

Normal Probability Plot

2.25+ * +++

| * * ++++++++

1.25+ +++++++

| ++++++++*

0.25+ * * * *+*+*+** ** * *

+----+----+----+----+----+----+----+----+----+----+

-2 -1 0 +1 +2

For the Heart disease-sample :

Basic Statistical Measures

Location Variability

Mean 0.327000 Std Deviation 0.54634


Median 0.120000 Variance 0.29849

Quantile Estimate

100% Max 1.84

99% 1.84

95% 1.84

90% 1.14

75% Q3 0.30

50% Median 0.12

25% Q1 0.06

10% 0.05

5% 0.04

1% 0.04

0% Min 0.04

Test for Normality

Test --Statistic--- -----p Value------

Shapiro-Wilk W 0.557914 Pr < W <0.0001

Normal Probability Plot

1.75+ * ++++

| +++++++++

| +++++++++

0.25+ * * +*++*+*++* * *

+----+----+----+----+----+----+----+----+----+----+

-2 -1 0 +1 +2


[Boxplots omitted]

Fig. 4.2 – Boxplot of renal and heart disease example

The standard two-sample t-test for the equality of means gives t = 0.8557 with p-value = 0.4006; the difference of the means of x and y is 0.21, and a 95% confidence interval for µ1 − µ2 is (−0.297218, 0.718218). The standard error of the estimated difference is

se(µ̂1 − µ̂2) = √{(1/n1 + 1/n2) · [(n1 − 1)s1² + (n2 − 1)s2²]/(n1 + n2 − 2)}.

These are two independent samples with nonnormality. One may compare the classical Student's t method with the results obtained using Yuen's statistic. The results of the various methods are listed in Table 4.5 and Table 4.6.

The symbols L, U, Lb, Ub, Lbt, Ubt are explained in section 3.5

of Chapter 3. It is clear that we do not reject the null hypothesis, because 0 lies in the confidence intervals; these two populations appear to have the same location. Again, note that Table 4.5 uses Student's t distribution with n1 + n2 − 2 degrees of freedom, whereas Table 4.6 uses the approximate Student's t distribution with νy = (Aγ + Bγ)²/[A²γ/(m1 − 1) + B²γ/(m2 − 1)] degrees of freedom. Yuen-Welch's statistic Ty with 2 × 12.5% trimming is the most robust: its confidence intervals are the shortest. Under nonnormality, Yuen-Welch's method gave satisfactory results; Student's t, on the other hand, offered no advantage.

Method       µ1 − µ2  se(µ1 − µ2)  L       U      Lb      Ub     Lbt     Ubt
Student's t  0.211    0.246        -0.297  0.718  -0.265  0.623  -0.427  0.508

Tab. 4.5 – Point estimate and 95% confidence intervals for difference of means for the renal and heart disease example.

γ         Ty     se(Ty)  L        U      Lb      Ub     Lbt     Ubt
γ = 0     0.890  0.237   -0.280   0.701  -0.265  0.623  -0.283  0.684
γ = .05   1.483  0.190   -0.127   0.691  -0.252  0.651  0.004   0.987
γ = .1    1.479  0.117   -0.0795  0.427  -0.237  0.611  -0.021  0.525
γ = .125  1.602  0.086   -0.049   0.326  -0.246  0.540  -0.011  0.349

Tab. 4.6 – Yuen-Welch's statistic and 95% confidence intervals for difference of trimmed means for the renal and heart disease example.


4.3 Two dependent samples

Throughout this section X1 = (X11, · · · , Xn,1) and X2 = (X12, · · · , Xn,2)

denote paired random samples that will be used to compare two location

parameters.

4.3.1 The paired t test

Let Xi1 ∼ N(µ1, σ1²), Xi2 ∼ N(µ2, σ2²), i = 1, 2, · · · , n, and suppose the differences di = Xi1 − Xi2, i = 1, 2, · · · , n, are independent of each other. Denote the population mean and variance of d by µd and σd², respectively. Then, under the assumption of normality, we note that

d̄ ∼ N(µ1 − µ2, σd²/n).

Estimating σd² by sd², we obtain

Td = [d̄ − (µ1 − µ2)]/(sd/√n) ∼ tn−1.

Testing the hypotheses

H0 : µ1 = µ2 versus H1 : µ1 ≠ µ2

is equivalent to testing

H0 : µd = 0 versus H1 : µd ≠ 0.

If |Td| > t(1−α/2),n−1, H0 is rejected.

According to the usual correspondence between tests of hypotheses and confidence intervals, H0 is rejected at size α if and only if

0 ∉ d̄ ± t(1−α/2),n−1 · sd/√n.


4.3.2 The two-sample Yuen-Welch trimmed mean test

For j = 1, 2, let X(1)j ≤ X(2)j ≤ · · · ≤ X(n)j be the n values in the jth sample, written in ascending order. Let

Wij = X(g+1)j  if i ≤ g + 1,
      X(i)j    if g + 1 < i < n − g,
      X(n−g)j  if i ≥ n − g,

where g = [γn] as before. The Winsorized sample mean for the jth sample is defined by

W̄j = (1/n) Σi=1..n Wij, j = 1, 2,

and the corresponding Winsorized sample variance is

s²wj = [1/(n − 1)] Σi=1..n (Wij − W̄j)², j = 1, 2.

Under the assumption of dependence, we now have

Var(X̄t1 − X̄t2) = Var(X̄t1) + Var(X̄t2) − 2 Cov(X̄t1, X̄t2).

Let g(X) be any function of X. Following Wilcox (1997, p. 27), the γ-Winsorized expected value of g(X) is defined to be

Ew[g(X)] = ∫ from xγ to x1−γ of g(x) dF(x) + γ[g(xγ) + g(x1−γ)].

In particular, the Winsorized covariance between Xi1 and Xi2 is defined as (Wilcox, 1997, p. 124)

σw12 = Ew[(Xi1 − µw1)(Xi2 − µw2)],

where Ew indicates the γ-Winsorized expected value and µwi, i = 1, 2, are the population Winsorized means. From the influence function of the trimmed mean (Wilcox, 1997, p. 124), it follows that

Var(X̄t1 − X̄t2) = [1/((1 − 2γ)²n)] {σ²w1 + σ²w2 − 2σw12},

where σ²wi, i = 1, 2, are the population Winsorized variances.

Naturally, the covariance term is estimated with the sample covariance between the Wi1 and Wi2 values. It is defined as

sw12 = [1/(n − 1)] Σi=1..n (Wi1 − W̄1)(Wi2 − W̄2).

Let

d12 = [1/(m(m − 1))] Σi=1..n (Wi1 − W̄1)(Wi2 − W̄2) = (n − 1)sw12/[m(m − 1)].

Consider Yuen's estimates for the variances of the sample trimmed means, Var(X̄tj), j = 1, 2. For m = n − 2g, these are

d1 = (n − 1)s²w1/[m(m − 1)]   and   d2 = (n − 1)s²w2/[m(m − 1)].

An estimate of Var(X̄t1 − X̄t2) is then obtained by

d1 + d2 − 2d12.

Thus, Yuen's test statistic in the dependent case is taken to be

Tyd = (X̄t1 − X̄t2)/√(d1 + d2 − 2d12).


Under the null hypothesis

H0 : µt1 = µt2,

Tyd approximately follows Student's t distribution with m − 1 degrees of freedom (Wilcox, 1997, p. 125). If |Tyd| > t(1−α/2),m−1, we reject H0.
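The paired version can be sketched in Python as follows (illustrative only; the thesis's own S-PLUS function `yuend` is given in the Appendix, and the function names here are mine). Winsorizing clamps each value to the marginal caps while preserving the pairing, so the Winsorized covariance can be computed directly:

```python
import math

def winsorize(x, gamma):
    """Clamp the g = [gamma*n] extreme values on each side to the caps
    x_(g+1) and x_(n-g), keeping the original (paired) order."""
    y = sorted(x)
    n = len(y)
    g = int(gamma * n)
    lo, hi = y[g], y[n - g - 1]
    return [min(max(v, lo), hi) for v in x]

def tmean(x, gamma):
    """2*gamma-trimmed mean."""
    y = sorted(x)
    g = int(gamma * len(y))
    z = y[g:len(y) - g]
    return sum(z) / len(z)

def yuend(x, y, gamma):
    """Paired Yuen statistic T_yd and its degrees of freedom m - 1;
    the Winsorized covariance enters through d12."""
    n = len(x)
    m = n - 2 * int(gamma * n)
    wx, wy = winsorize(x, gamma), winsorize(y, gamma)
    mx, my = sum(wx) / n, sum(wy) / n
    sxx = sum((a - mx) ** 2 for a in wx) / (n - 1)             # s_w1^2
    syy = sum((b - my) ** 2 for b in wy) / (n - 1)             # s_w2^2
    sxy = sum((a - mx) * (b - my) for a, b in zip(wx, wy)) / (n - 1)
    c = (n - 1) / (m * (m - 1))        # d_i = c * s_wi^2, d12 = c * s_w12
    se = math.sqrt(c * (sxx + syy - 2 * sxy))
    return (tmean(x, gamma) - tmean(y, gamma)) / se, m - 1
```

With γ small enough that [γn] = 0, Winsorizing and trimming are no-ops and Tyd reduces to the ordinary paired t statistic.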

4.3.3 Confidence interval estimation for the difference of trimmed means

Confidence interval derived from the Yuen-Welch test

According to the above, under the null hypothesis H0 : µt1 = µt2, an approximate two-sided (1 − α)-confidence interval for µt1 − µt2 is given by

X̄t1 − X̄t2 ± t(1−α/2),m−1 √(d1 + d2 − 2d12).

Again, the null hypothesis is rejected at size α if and only if

0 ∉ X̄t1 − X̄t2 ± t(1−α/2),m−1 √(d1 + d2 − 2d12).

The percentile bootstrap interval

We proceed as in section 2.3.2, except that we now have dependent samples. We obtain n pairs of observations by randomly sampling with replacement pairs of observations from the observed data, which we write (Xi1, Xi2), i = 1, · · · , n. Next, we calculate the paired sample trimmed means (X̄t1*b, X̄t2*b) based on the bootstrap samples.

Bootstrap replications for

D = X̄t1 − X̄t2

are obtained by

D*b = X̄t1*b − X̄t2*b, b = 1, 2, · · · , B.

An approximate two-sided (1 − α)-confidence interval for µt1 − µt2 is then obtained by

(D*(l), D*(u)),

where l = [Bα/2] and u = [(1 − α/2)B] as before.

The bootstrap-t interval

Let

Ud*b = (X̄t1*b − X̄t2*b)/√(d1*b + d2*b − 2d12*b), b = 1, 2, · · · , B,

where d1*b, d2*b are Yuen's estimates of the variance of X̄ti, i = 1, 2, d12*b is an estimate of the covariance between X̄t1 and X̄t2 based on the bootstrap sample, and X̄ti*b, i = 1, 2, have the same meaning as in section 4.2.3.

Using the above notation, an approximate two-sided (1 − α)-bootstrap-t interval for µt1 − µt2 is given by

(X̄t1 − X̄t2 − Ud*(u)√(d1 + d2 − 2d12), X̄t1 − X̄t2 − Ud*(l)√(d1 + d2 − 2d12)).

An equal-tailed two-sided (1 − α)-bootstrap-t interval for µt1 − µt2 is

(X̄t1 − X̄t2 − |Ud*(a)|√(d1 + d2 − 2d12), X̄t1 − X̄t2 + |Ud*(a)|√(d1 + d2 − 2d12)),

where a = [(1 − α/2)B].

4.3.4 Example and application

The data represent water flow measurements on the Kootenay river in January at Libby, Montana, and Newgate, British Columbia, for the years 1931-1943 (Rousseeuw and Leroy, 1987, p. 64). There are two dependent variables: x, the percentage measurement for Libby, and y, the same measurement for Newgate.


Libby    27.1 20.9 33.4 77.6 37.0 21.6 17.6 35.1 32.6 26.0
         27.6 38.7 27.8
Newgate  19.7 18.0 26.1 44.9 26.1 19.9 15.7 27.6 24.9 23.4
         23.1 31.3 23.8

Tab. 4.7 – Water flow measurements on the Kootenay river

A SAS analysis of the two samples gives :

For Libby-sample :

Basic Statistical Measures

Location Variability

Mean 32.53846 Std Deviation 14.98096

Median 27.80000 Variance 224.42923

Quantile Estimate

100% Max 77.6

99% 77.6

95% 77.6

90% 38.7

75% Q3 35.1

50% Median 27.8

25% Q1 26.0

10% 20.9

5% 17.6


1% 17.6

0% Min 17.6

Test for Normality

Test --Statistic--- -----p Value------

Shapiro-Wilk W 0.728611 Pr < W 0.0011

Normal Probability Plot

75+ *

| +++++++

| +++++++

45+ +++++++

| +++*+* * * *

| * *+*+*+* *

15+ * +++++++

+----+----+----+----+----+----+----+----+----+----+

-2 -1 0 +1 +2

For Newgate-sample :

Basic Statistical Measures

Location Variability

Mean 24.96154 Std Deviation 7.31477

Median 23.80000 Variance 53.50590

Quantile Estimate


100% Max 44.9

99% 44.9

95% 44.9

90% 31.3

75% Q3 26.1

50% Median 23.8

25% Q1 19.9

10% 18.0

5% 15.7

1% 15.7

0% Min 15.7

Test for Normality

Test --Statistic--- -----p Value------

Shapiro-Wilk W 0.855967 Pr < W 0.0341

Normal Probability Plot

47.5+

| * +++++

| +++++++

32.5+ +++++*+

| +++*+* *

| ++*+*+* *

17.5+ * +*+++*+*

+----+----+----+----+----+----+----+----+----+----+

-2 -1 0 +1 +2


Applying SAS for the difference of x and y yields the following results :

Basic Statistical Measures

Location Variability

Mean 7.576923 Std Deviation 8.05214

Median 7.300000 Variance 64.83692

Quantile Estimate

100% Max 32.7

99% 32.7

95% 32.7

90% 10.9

75% Q3 7.5

50% Median 7.3

25% Q1 2.9

10% 1.9

5% 1.7

1% 1.7

0% Min 1.7

Test for Normality

Test --Statistic--- -----p Value------

Shapiro-Wilk W 0.630544 Pr < W <0.0001


[Scatter plot and boxplots omitted]

Fig. 4.3 – Data point cloud and boxplot for Libby and Newgate data

Normal Probability Plot

32.5+ *

| ++++

| ++++++

17.5+ ++++++

| ++++++ *

| +++*+*+ * *

2.5+ * * +*+*+* *

+----+----+----+----+----+----+----+----+----+----+

-2 -1 0 +1 +2

For the paired t-test, the calculated t is 3.3928 with p-value = 0.0053; the sample estimate of µ1 − µ2 is 7.576923, and a 95% confidence interval for µ1 − µ2 is (2.711065, 12.442781). The distribution of x − y is nonnormal, since the p-value of the Shapiro-Wilk test is less than 0.0001; the normal probability plot gives the same information. The Spearman correlation coefficient between x and y is 0.96286, with Prob > |r| < 0.0001. The differences x − y form a single sample, so the one-sample methods of Chapter 3 can be used. The results are listed in Table 4.8.

Method       x̄ − ȳ  se(x̄ − ȳ)  L      U       Lb     Ub      Lbt    Ubt
Student's t  7.577  2.233      2.711  12.442  4.215  11.846  4.512  18.929

Tab. 4.8 – Point estimate and 95% confidence intervals for difference of means for the Kootenay river example.

The results of the various methods are listed in Table 4.9.

γ         Tyd    se(Tyd)  L      U       Lb     Ub      Lbt    Ubt
γ = 0     3.428  2.210    2.761  12.393  4.215  11.846  4.536  19.772
γ = .05   7.024  0.827    3.966  7.652   4.027  10.873  4.115  7.767
γ = .1    5.768  1.033    3.429  8.481   3.756  9.411   1.745  9.581
γ = .125  5.853  1.032    3.517  8.569   3.529  8.528   1.923  10.197

Tab. 4.9 – Yuen-Welch's statistic and 95% confidence intervals for difference of trimmed means for the Kootenay river example.

Again, note that Table 4.8 uses Student's t distribution with n − 1 degrees of freedom, where n is the common number of observations in x and y, whereas Table 4.9 uses the approximate Student's t distribution with m − 1 degrees of freedom, where m = n − 2[γn]. From the above results, Yuen-Welch's method without trimming behaves much like the paired t-test under nonnormality. However, with appropriate trimming, Yuen-Welch's method gave better results; it is more robust than Student's. In this example, we take 0.1 ≤ γ ≤ 0.25. All estimated standard errors of Yuen's statistic with appropriate trimming are much smaller than that of the mean or that of the statistic without trimming, and the confidence intervals are short. Since 0 is not in the confidence intervals, we reject the null hypothesis: these two populations have different locations, µt1 ≠ µt2.


General Conclusion

Robust inference started becoming an important field of study about forty years ago, mainly from the needs of practitioners. With the development of computers, the robust approach to statistics has attracted more and more attention, not only from researchers but also from applied statisticians.

We had two main interests when we started this work. The first one was to study in some detail robust location functionals and their estimates. The second one was to study the variability of robust estimates through the bootstrap technique invented by Efron (1979). This allowed us to become more familiar with the S-PLUS software and language and, in particular, to develop several S-PLUS programs that have been included in the Appendix.

Perhaps the most original part of this work has to do with our study of two robust location estimates based on the average of symmetric quantiles, one of which is the little-known Harrell-Davis quantile estimate. In addition to presenting results of asymptotic normality, consistency and robustness for these estimates, we developed S-PLUS programs for the calculation of confidence intervals.


Among these robust methods, the results in the applied examples tend to indicate that the trimmed mean is perhaps the most satisfying estimate under nonnormality in one- and two-sample cases. However, it is certainly not possible to assert that this estimate is overall the best as regards variability. A better evaluation of the performance of the trimmed mean could be done through a simulation study.

Monte Carlo simulation could also have been used to study the power of the tests. Moreover, we could also have compared our robust methods with those derived from the non-parametric approach. In any case, it is clear that it is still possible to improve the methods mentioned in this work. In fact, both theory and simulations indicate that robust methods offer an advantage over standard methods when distributions are skewed (Wilcox, 1997).


Appendix

S-PLUS Programs

In what follows, we list all the S programs that were used for the calculation of the various statistics. For each S function, x denotes a vector and gamma is a number between 0 and 0.5. Several of the programs are inspired by Wilcox (1997).

• Function to calculate T1n = (xγ + x1−γ)/2. This is our first estimate for the average of symmetric quantiles.

t1n <- function(x, gamma)
{
        # gamma is a number between 0 and 0.5.
        if(gamma <= 0 || gamma >= 0.5)
                stop("gamma must be between 0 and 0.5")
        y <- sort(x)
        m1 <- floor(gamma * length(x))  # assumes gamma * length(x) >= 1
        m2 <- floor((1 - gamma) * length(x))
        t1n <- (y[m1] + y[m2])/2
        t1n
}


• Function to calculate T2n = (1/2) Σi=1..n (Ui + Vi)X(i). This is the estimate of the average of symmetric quantiles based on the Harrell-Davis estimate.

t2n <- function(x, gamma)
{
        # gamma is a number between 0 and 0.5.
        if(gamma <= 0 || gamma >= 0.5)
                stop("gamma must be between 0 and 0.5")
        if(length(x) != length(x[!is.na(x)]))
                stop("Remove missing values from x")
        n <- length(x)
        m1 <- (n + 1) * gamma
        m2 <- (n + 1) * (1 - gamma)
        vec <- seq(along = x)
        w <- pbeta(vec/n, m1, m2) - pbeta((vec - 1)/n, m1, m2)  # W sub i values
        y <- sort(x)
        hd1 <- sum(w * y)  # Harrell-Davis estimate of the (gamma)th quantile
        p1 <- (n + 1) * (1 - gamma)
        p2 <- (n + 1) * gamma
        V <- pbeta(vec/n, p1, p2) - pbeta((vec - 1)/n, p1, p2)  # V sub i values
        hd2 <- sum(V * y)  # Harrell-Davis estimate of the (1-gamma)th quantile
        t2n <- (hd1 + hd2)/2  # average of the two Harrell-Davis estimates
        t2n
}

• Function to calculate Yuen’s statistic Ty = Xt1−Xt2√Aγ+Bγ

based on inde-

72

Page 85: ROBUST INFERENCE FOR LOCATION PARAMETERS : ONE- AND … · estimates such as the sample trimmed mean, the sample Winsorized mean and estimates based on symmetric quantiles. Con dence

pendent samples, where Aγ =(n1−1)s2

w1

m1(m1−1)and Bγ =

(n2−1)s2

w2

m2(m2−1).

yuen<-function(x, y, gamma) {

#

# Compute Yuen’s statistic based on independent

# samples.

if(gamma <= 0 || gamma >= 0.5) stop("gamma must be between

0 and 0.5")

x <- x[!is.na(x)]

# Remove missing values in x

y <- y[!is.na(y)]

# Remove missing values in y

n1 <- length(x)

n2 <- length(y)

m1 <- n1 - 2 * round(gamma * n1)

m2 <- n2 - 2 * round(gamma * n2)

A <- winvar(x, gamma) * (n1 - 1)/(m1 * (m1 - 1))

# Calculate estimate of the variance of the sample trimmed

mean of x.

B <- winvar(y, gamma) * (n2 - 1)/(m2 * (m2 - 1))

# Calculate estimate of the variance of the sample trimmed

mean of y.

se <- sqrt(A + B)

md <- mean(x, gamma) - mean(y, gamma)

yuen <- md/se

yuen }

• Function to calculate Yuen’s statistic
$T_{yd} = \frac{\bar{X}_{t1} - \bar{X}_{t2}}{\sqrt{d_1 + d_2 - 2d_{12}}}$
based on paired samples, where
$d_1 = \frac{(n - 1)s_{w1}^2}{m(m - 1)}$, $d_2 = \frac{(n - 1)s_{w2}^2}{m(m - 1)}$ and
$d_{12} = \frac{1}{m(m - 1)} \sum_{i=1}^{n} (W_{i1} - \bar{W}_1)(W_{i2} - \bar{W}_2)$.

yuend<-function(x, y, gamma) {

#

# Compute Yuen’s statistic based on paired

# samples.

if(gamma <= 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

x <- x[!is.na(x)]

# Remove missing values in x

y <- y[!is.na(y)]

# Remove missing values in y

n1 <- length(x)

n2 <- length(y)

if(length(x) != length(y)) stop(

"The number of observations must be equal")

m <- n1 - 2 * round(gamma * n1)

d1 <- winvar(x, gamma)*(n1 - 1)/(m * (m - 1))

d2 <- winvar(y, gamma)*(n2 - 1)/(m * (m - 1))

d12 <- wincov(x, y, gamma)*(n1 - 1)/(m * (m - 1))

se <- sqrt(d1 + d2 - 2 * d12)

md <- mean(x, gamma) - mean(y, gamma)

yuend<- md/se

yuend }


• Function to calculate standard errors of estimates of a location $\theta$:
$se_b(\hat{\theta}) = \sqrt{\frac{1}{B - 1} \sum_{b=1}^{B} \left(\hat{\theta}^{*b} - s(\cdot)\right)^2},$
where $s(\cdot) = \sum_{b=1}^{B} \hat{\theta}^{*b} / B$ is the mean of the bootstrap replications of $\hat{\theta}$. In
what follows, “fun” represents a bootstrap replication $\hat{\theta}^{*b}$: for example, $\bar{X}^{*b}$
(mean), $\bar{X}_t^{*b}$ (2γ-trimmed mean), $\bar{X}_w^{*b}$ (γ-Winsorized mean), $T_{in}^{*b}$, $i = 1, 2$ (two
estimates of the average of symmetric quantiles), and so on. The last two estimates
are described in more detail below.

seb <-function(x, gamma, nboot = 100) {

# In the following program, fun may be an estimate of location.

# The number of bootstrap samples is nboot=100.

if(gamma < 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

set.seed(2) # set seed of random number generator so that

# results can be duplicated.

data <- matrix(sample(x, size = length(x) * nboot, replace = T),

nrow= nboot)

bvec <- apply(data, 1, fun, gamma)

# for example, if gamma=0, "bvec" calculates sample means.

seb <- sqrt(var(bvec))

seb }
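To make explicit what seb computes, here is a short Python analogue (the names are ours; statistics.mean plays the role of fun, and an outlier is included to show that the bootstrap standard error of the mean is not resistant).

```python
import random
import statistics

def boot_se(x, stat, nboot=100, seed=2):
    # Bootstrap standard error: the sample standard deviation (1/(B-1)
    # divisor, as in seb) of nboot replications of the statistic, each
    # computed on a resample drawn with replacement.
    rng = random.Random(seed)
    reps = [stat([rng.choice(x) for _ in x]) for _ in range(nboot)]
    return statistics.stdev(reps)

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 50.0]
se_mean = boot_se(x, statistics.mean, nboot=500)
# For the mean, se_mean should land near sd(x)/sqrt(n), roughly 4.6 here;
# the single outlier 50 inflates it badly, which is what motivates the
# robust estimates treated in this appendix.
```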

• Function to calculate standard errors of the estimates $T_{in}$, $i = 1, 2$. This is
a particular case of the previous function:
$se_b(T_{in}) = \sqrt{\frac{1}{B - 1} \sum_{b=1}^{B} \left(T_{in}^{*b} - \sum_{b=1}^{B} T_{in}^{*b} / B\right)^2},$
where the $T_{in}^{*b}$, $b = 1, 2, \dots, B$, are estimates of $T_{in}$, $i = 1, 2$, based on bootstrap samples.


tinseb<-function(x, gamma, nboot = 100) {

# for tin, i=1,2.

# The number of bootstrap samples is nboot=100.

if(gamma <= 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

set.seed(2) # set seed of random number generator so that

# results can be duplicated.

data <- matrix(sample(x, size = length(x) * nboot,

replace = T), nrow = nboot)

bvec <- apply(data, 1, tin, gamma)

tinseb <- sqrt(var(bvec))

tinseb }

• Function to calculate an approximate 95% confidence interval for the
average of symmetric quantiles $\theta_\gamma$:
$T_{in} \pm t_{(1 - \alpha/2, df)} \, se_b(T_{in}),$
where $df = n - 2[\gamma n] - 1$ and $se_b(T_{in})$ is the bootstrap estimate of the standard
error of $T_{in}$. To get $se_b(T_{in})$ in S-Plus, we use the function tinseb.

tinci<-function(x, gamma, alpha = 0.05) {

# Compute a 1-alpha confidence interval for the tin, i=1,2.

#

# The default amount of trimming is gamma. For example,

# gamma=0.1, 0.2 and 0.25.

#

if(gamma <= 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

tinci <- vector(mode = "numeric", length = 2)

df <- length(x) - 2 * floor(gamma * length(x)) - 1

tinci[1] <- tin(x, gamma) - qt(1 - alpha/2, df) * tinseb(x, gamma)

tinci[2] <- tin(x, gamma) + qt(1 - alpha/2, df) * tinseb(x, gamma)

tinci }

• Function to calculate a 95% confidence interval for the 2γ-trimmed mean
$\mu_t$:
$\bar{X}_t \pm t_{(1 - \alpha/2, df)} \frac{s_w}{(1 - 2\gamma)\sqrt{n}},$
where $df = n - 2[\gamma n] - 1$. We recall that $s_w^2$ is the Winsorized sample variance.
In S-Plus, we use the function winvar to calculate $s_w^2$.

trimci<-function(x, gamma, alpha = 0.05) {

# Compute a 1-alpha confidence interval for the trimmed mean

#

# The default amount of trimming is gamma (gamma=0.1, 0.2, and 0.25).

#

if(gamma <= 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

se <- sqrt(winvar(x, gamma))/((1 - 2 * gamma) * sqrt(length(x)))

trimci <- vector(mode = "numeric", length = 2)

df <- length(x) - 2 * floor(gamma * length(x)) - 1

trimci[1] <- mean(x, gamma) - qt(1 - alpha/2, df) * se

trimci[2] <- mean(x, gamma) + qt(1 - alpha/2, df) * se

trimci }


• Function to calculate a 95% percentile bootstrap confidence interval for the
2γ-trimmed mean $\mu_t$:
$\left(\bar{X}_t^{*(l)}, \bar{X}_t^{*(u)}\right),$
where $l = [\alpha B / 2]$, $u = [(1 - \alpha/2)B]$ and $B = nboot = 599$. In what
follows, $l$ and $u$ are the same as above.

trimcib<-function(x, gamma, alpha = 0.05, nboot = 599) {

#

# Compute a 1-alpha confidence interval for the trimmed

# mean using a bootstrap percentile method.

#

# The default amount of trimming is gamma (gamma=0.1, 0.2, and 0.25).

#

if(gamma <= 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

trimcib <- vector(mode = "numeric", length = 2)

set.seed(2)

# set seed of random number generator so that

# results can be duplicated.

print("Taking bootstrap samples. Please wait.")

data <- matrix(sample(x, size = length(x) * nboot,

replace = T), nrow = nboot)

tval <- apply(data, 1, mean, gamma)

tval <- sort(tval)

low <- round((alpha * nboot)/2)

up <- round((1 - alpha/2) * nboot)


trimcib[1] <- tval[low]

trimcib[2] <- tval[up]

trimcib }
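The percentile method can be sketched compactly in Python (the names are ours; the 1-based tval[low] of the S code corresponds to index low − 1 in 0-based Python).

```python
import random

def trim_mean(x, gamma):
    # 2*gamma-trimmed mean; g = round(gamma*n) values cut from each end.
    xs = sorted(x)
    g = round(gamma * len(xs))
    core = xs[g:len(xs) - g]
    return sum(core) / len(core)

def trim_mean_pctl_ci(x, gamma, alpha=0.05, nboot=599, seed=2):
    # Percentile bootstrap: sort the nboot trimmed means and read off the
    # [alpha*B/2]-th and [(1-alpha/2)*B]-th order statistics.
    rng = random.Random(seed)
    reps = sorted(trim_mean([rng.choice(x) for _ in x], gamma)
                  for _ in range(nboot))
    low = round(alpha * nboot / 2)
    up = round((1 - alpha / 2) * nboot)
    return reps[low - 1], reps[up - 1]

x = [2.0, 3.1, 4.5, 5.0, 5.2, 6.3, 7.7, 8.1, 9.4, 40.0]
lo, hi = trim_mean_pctl_ci(x, 0.2)
```

With 20% trimming, the outlier 40.0 is discarded from every resampled trimmed mean, so the interval stays tight around the bulk of the data.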

• Function to calculate a 95% percentile bootstrap confidence interval for
the population mean $\mu$:
$\left(\bar{X}^{*(l)}, \bar{X}^{*(u)}\right),$
where $\bar{X}$ is the sample mean.

meancib<-function(x, alpha = 0.05, nboot = 599) {

#

# Compute a 1-alpha confidence interval for the mean

# using a percentile bootstrap method.

#

meancib <- vector(mode = "numeric", length = 2)

set.seed(2)

# set seed of random number generator so that

# results can be duplicated.

print("Taking bootstrap samples. Please wait.")

data <- matrix(sample(x, size = length(x) * nboot,

replace = T), nrow = nboot)

tval <- apply(data, 1, mean)

tval <- sort(tval)

low <- round((alpha * nboot)/2)

up <- round((1 - alpha/2) * nboot)

meancib[1] <- tval[low]

meancib[2] <- tval[up]

meancib }


• Function to calculate a 95% percentile bootstrap confidence interval for
$\theta_\gamma$:
$\left(T_{in}^{*(l)}, T_{in}^{*(u)}\right), \quad i = 1, 2.$

tincib<-function(x, gamma, alpha = 0.05, nboot = 599) {

#

# Compute a 1-alpha confidence interval for tin, i=1,2,

# using the bootstrap percentile method.

#

# The default amount of trimming is gamma (0 < gamma < 0.5).

#

if(gamma <= 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

tincib <- vector(mode = "numeric", length = 2)

set.seed(2)

# set seed of random number generator so that

# results can be duplicated.

print("Taking bootstrap samples. Please wait.")

data <- matrix(sample(x, size = length(x) * nboot,

replace = T), nrow = nboot)

tval <- apply(data, 1, tin, gamma)

tval <- sort(tval)

low <- round((alpha * nboot)/2)

up <- round((1 - alpha/2) * nboot)


tincib[1] <- tval[low]

tincib[2] <- tval[up]

tincib }

• Function to calculate a 95% bootstrap-t interval. With argument side = F:
$\left(T_{in} + z_i^{(\alpha/2)} se_b(T_{in}),\; T_{in} + z_i^{(1 - \alpha/2)} se_b(T_{in})\right), \quad i = 1, 2, \qquad (1)$
noting that $z_i^{(\alpha/2)}$ is negative. With argument side = T:
$\left(T_{in} - z_i^{(1 - \alpha/2)} se_b(T_{in}),\; T_{in} + z_i^{(1 - \alpha/2)} se_b(T_{in})\right), \quad i = 1, 2. \qquad (2)$

tincibt<-function(x, gamma, alpha = 0.05, nboot1 = 599,

nboot2 = 100, side = F) {

#

# Compute a 1-alpha confidence interval for tin,i=1,2.

# using the bootstrap percentile t method.

#

# The default amount of trimming is gamma (0 < gamma < 0.5).

# The default is side=F yielding an equal-tailed confidence

# interval given by equation (1) as above.

#

# side=T, for true, indicates the symmetric two-sided method

# given by equation (2) as above.

#

if(gamma <= 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

side <- as.logical(side)


tincibt <- vector(mode = "numeric", length = 2)

set.seed(2)

print("Taking bootstrap samples. Please wait.")

data1 <- matrix(sample(x, size = length(x) * nboot1, replace = T),

nrow = nboot1)

# get $nboot1=599$ bootstrap samples.

data2 <- matrix(sample(x, size = length(x) * nboot2, replace = T),

nrow = nboot2)

# get $nboot2=100$ bootstrap samples.

bot <- apply(data1, 1, tin, gamma)

top <- bot - tin(x, gamma)

bvec <- sqrt(var(bot))

tval <- top/bvec

# Calculate bootstrap estimate.

if(side)

tval <- abs(tval)

tval <- sort(tval)

low <- round((alpha * nboot1)/2)

up <- round((1 - alpha/2) * nboot1)

bot1 <- apply(data2, 1, tin, gamma)

tinseb <- sqrt(var(bot1))

tincibt[1] <- tin(x, gamma) + tval[low] * tinseb

tincibt[2] <- tin(x, gamma) + tval[up] * tinseb

# note that "tval[low]" will be negative: the corresponding term

# in equation (1) is added, not subtracted.

if(side)

tincibt[1] <- tin(x, gamma) - tval[up] * tinseb

if(side)

tincibt[2] <- tin(x, gamma) + tval[up] * tinseb

tincibt }

• Function to calculate a 95% bootstrap-t confidence interval for the 2γ-trimmed
mean $\mu_t$:
$\left(\bar{X}_t - T^{*(u)} \frac{s_w}{(1 - 2\gamma)\sqrt{n}},\; \bar{X}_t - T^{*(l)} \frac{s_w}{(1 - 2\gamma)\sqrt{n}}\right), \qquad (3)$
or
$\left(\bar{X}_t - T^{*(u)} \frac{s_w}{(1 - 2\gamma)\sqrt{n}},\; \bar{X}_t + T^{*(u)} \frac{s_w}{(1 - 2\gamma)\sqrt{n}}\right). \qquad (4)$

trimcibt<-function(x, gamma, alpha = 0.05, nboot = 599, side = F) {

#

# Compute a 1-alpha confidence interval for the trimmed mean

# using a bootstrap percentile t method.

#

# The default amount of trimming is gamma (gamma=0.1, 0.2, 0.25).

#

# The default is side=F, yielding an equal-tailed confidence

# interval given by equation (3) as above;

# side=T, for true, indicates the symmetric two-sided method

# given by equation (4) as above.

#

if(gamma < 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

side <- as.logical(side)


trimcibt <- vector(mode = "numeric", length = 2)

set.seed(2)

# set seed of random number generator so that

# results can be duplicated.

print("Taking bootstrap samples. Please wait.")

data <- matrix(sample(x, size = length(x) * nboot,

replace = T), nrow = nboot)

top <- apply(data, 1, mean, gamma)- mean(x, gamma)

bot <- apply(data, 1, trimse, gamma)

tval <- top/bot

if(side)

tval <- abs(tval)

tval <- sort(tval)

low <- round((alpha * nboot)/2)

up <- round((1 - alpha/2) * nboot)

trimcibt[1] <- mean(x, gamma) - tval[up] * trimse(x, gamma)

trimcibt[2] <- mean(x, gamma) - tval[low] * trimse(x, gamma)

if(side)

trimcibt[1] <- mean(x, gamma) - tval[up] * trimse(x, gamma)

if(side)

trimcibt[2] <- mean(x, gamma) + tval[up] * trimse(x, gamma)

trimcibt }
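The percentile-t recipe above can be sketched in Python for the untrimmed mean, which keeps the helpers short (the names are ours); note that the equal-tailed interval uses −tval[up] on the left endpoint and −tval[low] on the right, as in equation (3).

```python
import math
import random
import statistics

def mean_boot_t_ci(x, alpha=0.05, nboot=599, seed=2):
    # Equal-tailed bootstrap-t interval: studentize each bootstrap mean
    # around xbar, then invert the resampled quantiles of the pivot.
    n = len(x)
    xbar = statistics.mean(x)
    se = statistics.stdev(x) / math.sqrt(n)
    rng = random.Random(seed)
    tvals = []
    for _ in range(nboot):
        b = [rng.choice(x) for _ in range(n)]
        tvals.append((statistics.mean(b) - xbar) /
                     (statistics.stdev(b) / math.sqrt(n)))
    tvals.sort()
    low = round(alpha * nboot / 2)
    up = round((1 - alpha / 2) * nboot)
    # tvals[up] multiplies the LEFT endpoint, tvals[low] the right one.
    return xbar - tvals[up - 1] * se, xbar - tvals[low - 1] * se

x = [1.2, 1.9, 2.6, 3.3, 4.1, 4.8, 5.5, 6.0, 7.2, 30.0]
lo, hi = mean_boot_t_ci(x)
```

Because the skewed outlier makes the bootstrap t distribution asymmetric, the two endpoints are not symmetric around the sample mean, unlike the Student-t interval.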

• Function to calculate a Winsorized covariance between x and y. In the

program, x and y are vectors having an equal number of components.

$s_{w12} = \frac{1}{n - 1} \sum_{i=1}^{n} (W_{i1} - \bar{W}_1)(W_{i2} - \bar{W}_2).$


wincov<-function(x, y, gamma) {

#

# Compute a Winsorized covariance between x and y.

# The default amount of trimming is gamma (gamma=0.1, 0.2 and 0.25).

#

x<-x[!is.na(x)] # Remove missing values in x

y<-y[!is.na(y)] # Remove missing values in y

if(length(x)!=length(y))

stop("The number of observations must be equal")

xsort <- sort(x)

ib <- round(gamma * length(x)) + 1

it <- length(x) - ib + 1

w1 <- pmin(pmax(x, xsort[ib]), xsort[it])

# Winsorization of the sample x: each value is clamped to the order

# statistics x(ib) and x(it), so the (x, y) pairing is preserved,

# as required by the paired terms (Wi1, Wi2) in the formula above.

ysort <- sort(y)

w2 <- pmin(pmax(y, ysort[ib]), ysort[it])

# Winsorization of the sample y.

wincov<-(sum(w1*w2)-length(x)*mean(w1)*mean(w2))/(length(x)-1)

wincov }
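The formula pairs $W_{i1}$ with $W_{i2}$. Here is a quick Python sanity check (the names are ours) of a Winsorized covariance that clamps each sample at its own order statistics while keeping the original $(x_i, y_i)$ pairing; the covariance of a sample with itself then reduces to its Winsorized variance.

```python
def winsorize(x, gamma):
    # Clamp each value to the order statistics x(g+1) and x(n-g), with
    # g = round(gamma*n), preserving the original positions.
    xs = sorted(x)
    g = round(gamma * len(xs))
    lo, hi = xs[g], xs[len(xs) - g - 1]
    return [min(max(v, lo), hi) for v in x]

def wincov(x, y, gamma):
    # Winsorized covariance s_w12 with the 1/(n-1) divisor, computed on
    # the paired Winsorized observations (W_i1, W_i2).
    wx, wy = winsorize(x, gamma), winsorize(y, gamma)
    n = len(x)
    mx, my = sum(wx) / n, sum(wy) / n
    return sum((a - mx) * (b - my) for a, b in zip(wx, wy)) / (n - 1)

x = [4.0, 1.0, 9.0, 2.0, 7.0, 3.0, 8.0, 5.0, 6.0, 10.0]
same = wincov(x, x, 0.2)                  # the Winsorized variance of x
anti = wincov(x, [-v for v in x], 0.2)    # minus that variance
```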

• Function to calculate a 95% confidence interval for a difference of trimmed
means $\mu_{t1} - \mu_{t2}$:
$\bar{X}_{t1} - \bar{X}_{t2} \pm t_{(1 - \alpha/2, \hat{\nu}_y)} \sqrt{A_\gamma + B_\gamma},$
where
$\hat{\nu}_y = \frac{(A_\gamma + B_\gamma)^2}{\frac{A_\gamma^2}{m_1 - 1} + \frac{B_\gamma^2}{m_2 - 1}}.$
In the S function, x and y are vectors that may have a different number of
components.

yuenci<-function(x, y, gamma, alpha = 0.05) {

#

# Compute an approximate (1-alpha) confidence interval

# for the difference between two independent trimmed means

# with Student’s distribution.

#

# The default amount of trimming is gamma (gamma=0.1, 0.2, 0.25).

#

if(gamma <= 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

yuenci <- vector(mode = "numeric", length = 2)

x<-x[!is.na(x)] # Remove missing values in x

y<-y[!is.na(y)] # Remove missing values in y

n1 <- length(x)


n2 <- length(y)

m1 <- n1 - 2 * round(gamma * n1)

m2 <- n2 - 2 * round(gamma * n2)

A <- winvar(x,gamma) * (n1 - 1)/(m1 * (m1 - 1))

# Calculate estimate of the variance of the sample trimmed mean of x.

B <- winvar(y,gamma) * (n2 - 1)/(m2 * (m2 - 1))

# Calculate estimate of the variance of the sample trimmed mean of y.

df <- (A + B)^2/(A^2/(m1 - 1) + B^2/(m2 - 1))

# Calculate the degrees of freedom.

se <- sqrt(A + B)

md <- mean(x, gamma) - mean(y, gamma)

yuenci[1] <- md - qt(1 - alpha/2,df) * se

yuenci[2] <- md + qt(1 - alpha/2,df) * se

yuenci }
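The estimated degrees of freedom $\hat{\nu}_y$ always fall between $\min(m_1, m_2) - 1$ and $m_1 + m_2 - 2$, which is easy to verify numerically (a Python sketch with illustrative numbers; the names are ours).

```python
def yuen_df(A, B, m1, m2):
    # Welch-Satterthwaite style degrees of freedom used by yuenci.
    return (A + B) ** 2 / (A ** 2 / (m1 - 1) + B ** 2 / (m2 - 1))

# Illustrative values (not from the thesis): A_gamma = 0.5 with m1 = 8
# effective observations, B_gamma = 1.2 with m2 = 12.
df = yuen_df(0.5, 1.2, 8, 12)
```

The maximum $m_1 + m_2 - 2$ is attained when $A_\gamma/(m_1 - 1) = B_\gamma/(m_2 - 1)$; when one term dominates, the value approaches that sample's $m - 1$.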

• Function to calculate a percentile bootstrap confidence interval for the
difference between two independent trimmed means $\mu_{t1} - \mu_{t2}$:
$\left(D^{*(l)}, D^{*(u)}\right),$
where $D^* = \bar{X}_{t1}^* - \bar{X}_{t2}^*$.

yuencib<-function(x, y, gamma, alpha = 0.05, nboot = 599,

side = F) {

#

# Compute a (1-alpha) confidence interval for the difference

# between two independent trimmed means with the percentile

# bootstrap method.

# The default is side=F, yielding an equal-tailed confidence

# interval given by the equation above.

#

# The number of bootstrap replications is nboot=599.

#

if(gamma <= 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

side <- as.logical(side)

yuencib <- vector(mode = "numeric", length = 2)

set.seed(2)

# set seed of random number generator so that

# results can be duplicated.

x <- x[!is.na(x)]

# Remove missing values in x

y <- y[!is.na(y)]

# Remove missing values in y

n1 <- length(x)

n2 <- length(y)

print("Taking bootstrap samples. Please wait.")

data1 <- matrix(sample(x, size = length(x) * nboot,

replace = T), nrow = nboot)

data2 <- matrix(sample(y, size = length(y) * nboot,

replace = T), nrow = nboot)

xbot <- apply(data1, 1, mean, gamma)

ybot <- apply(data2, 1, mean, gamma)


tval<- xbot- ybot

# Calculate the bootstrap statistics.

if(side)

tval <- abs(tval)

tval <- sort(tval)

low <- round((alpha * nboot)/2)

up <- round((1 - alpha/2) * nboot)

yuencib[1] <- tval[low]

yuencib[2] <- tval[up]

yuencib }

• Function to calculate a bootstrap-t confidence interval for the difference
between two independent trimmed means $\mu_{t1} - \mu_{t2}$:
$\left(\bar{X}_{t1} - \bar{X}_{t2} - U^{*(u)} \sqrt{A_\gamma + B_\gamma},\; \bar{X}_{t1} - \bar{X}_{t2} - U^{*(l)} \sqrt{A_\gamma + B_\gamma}\right); \qquad (5)$
$\left(\bar{X}_{t1} - \bar{X}_{t2} - U^{*(u)} \sqrt{A_\gamma + B_\gamma},\; \bar{X}_{t1} - \bar{X}_{t2} + U^{*(u)} \sqrt{A_\gamma + B_\gamma}\right). \qquad (6)$

yuencibt<-function(x, y, gamma, alpha = 0.05, nboot = 599, side = F) {

#

# Compute a (1-alpha) confidence interval for the difference

# between two independent trimmed means with bootstrap-t method.

#

# The default is side=F, yielding an equal-tailed confidence

# interval given by equation (5) as above.

# side=T, for true, indicates the symmetric two-sided method

# given by equation (6) as above.

#

# The number of bootstrap replications is nboot=599.


#

if(gamma <= 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

side <- as.logical(side)

yuencibt <- vector(mode = "numeric", length = 2)

set.seed(2)

# set seed of random number generator so that

# results can be duplicated.

x <- x[!is.na(x)]

# Remove missing values in x

y <- y[!is.na(y)]

# Remove missing values in y

n1 <- length(x)

n2 <- length(y)

m1 <- n1 - 2 * round(gamma * n1)

m2 <- n2 - 2 * round(gamma * n2)

z <- c(x,y)

xstar <- x - mean(x, gamma) + mean(z, gamma)

ystar <- y - mean(y, gamma) + mean(z, gamma)

print("Taking bootstrap samples. Please wait.")

data1 <- matrix(sample(xstar, size = length(xstar) * nboot,

replace = T), nrow = nboot)

data2 <- matrix(sample(ystar, size = length(ystar) * nboot,

replace = T), nrow = nboot)

top <- apply(data1, 1, mean, gamma) - apply(data2, 1, mean, gamma)

xbot <- apply(data1, 1, winvar, gamma)


ybot <- apply(data2, 1, winvar, gamma)

xbot1 <- xbot * (n1 - 1)/(m1 * (m1 - 1))

ybot1 <- ybot * (n2 - 1)/(m2 * (m2 - 1))

tval <- top/sqrt(xbot1 + ybot1)

if(side)

tval <- abs(tval)

tval <- sort(tval)

low <- round((alpha * nboot)/2)

up <- round((1 - alpha/2) * nboot)

A <- winvar(x, gamma) * (n1 - 1)/(m1 * (m1 - 1))

B <- winvar(y, gamma) * (n2 - 1)/(m2 * (m2 - 1))

se <- sqrt(A + B)

md <- mean(x, gamma) - mean(y, gamma)

yuencibt[1] <- md - tval[up] * se

yuencibt[2] <- md - tval[low] * se

if(side)

yuencibt[1] <- md - tval[up] * se

if(side)

yuencibt[2] <- md + tval[up] * se

yuencibt }

• Function to calculate a bootstrap-t confidence interval for the difference
between two independent means $\mu_1 - \mu_2$, based on Student’s t statistic:
$\left(\bar{X}_1 - \bar{X}_2 - T^{*(u)} \sqrt{n(A + B)},\; \bar{X}_1 - \bar{X}_2 - T^{*(l)} \sqrt{n(A + B)}\right), \qquad (7)$
$\left(\bar{X}_1 - \bar{X}_2 - T^{*(u)} \sqrt{n(A + B)},\; \bar{X}_1 - \bar{X}_2 + T^{*(u)} \sqrt{n(A + B)}\right), \qquad (8)$
where, as in the code, $n = \frac{1}{n_1} + \frac{1}{n_2}$, $A = (n_1 - 1)s_1^2/(n_1 + n_2 - 2)$,
$B = (n_2 - 1)s_2^2/(n_1 + n_2 - 2)$, and
$T = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\left(\frac{1}{n_1} + \frac{1}{n_2}\right) \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}}.$

stbt<-function(x, y,alpha = 0.05, nboot = 599, side = F) {

#

# Compute a (1-alpha) confidence interval for the difference

# between two independent means with bootstrap-t method.

#

# The default is side=F yielding an equal-tailed confidence

# interval given by equation (7) as above.

# side=T, for true, indicates the symmetric two-sided method

# given by equation (8) as above.

# The number of bootstrap replications is nboot=599.

#

side <- as.logical(side)

stbt <- vector(mode = "numeric", length = 2)

set.seed(2)

# set seed of random number generator so that

# results can be duplicated.

x <- x[!is.na(x)]

# Remove missing values in x

y <- y[!is.na(y)]

# Remove missing values in y

n1 <- length(x)

n2 <- length(y)

data1 <- matrix(sample(x, size = length(x) * nboot,

replace = T), nrow = nboot)

data2 <- matrix(sample(y, size = length(y) * nboot,

replace = T), nrow = nboot)

top <- apply(data1, 1, mean) - apply(data2, 1, mean)

xbot <- apply(data1, 1, var)

ybot <- apply(data2, 1, var)

n <- (1/n1 + 1/n2)

d1 <- (n1-1)*xbot/(n1+n2-2)

d2 <- (n2-1)*ybot/(n1+n2-2)

sehat <- sqrt(n*(d1+d2))

tval <- top/sehat

if(side)

tval <- abs(tval)

tval <- sort(tval)

low <- round((alpha * nboot)/2)

up <- round((1 - alpha/2) * nboot)

A <- var(x) * (n1 - 1)/(n1+n2-2)

B <- var(y) * (n2 - 1)/(n1+n2-2)

se <- sqrt(n*(A + B))

diff <- mean(x) - mean(y)

stbt[1] <- diff - tval[up]*se

stbt[2] <- diff - tval[low]*se

if(side)

stbt[1] <- diff - tval[up]*se

if(side)

stbt[2] <- diff + tval[up]*se


stbt }

• Function to calculate a 95% confidence interval for the difference of two
dependent 2γ-trimmed means $\mu_{t1} - \mu_{t2}$:
$\bar{X}_{t1} - \bar{X}_{t2} \pm t_{(1 - \alpha/2, m - 1)} \sqrt{d_1 + d_2 - 2d_{12}},$
where $df = m - 1$.

yuendci<-function(x, y, gamma, alpha = 0.05) {

#

# Compute an approximate (1-alpha) confidence interval for

# the difference between two paired-trimmed means

# based on Student’s distribution.

#

# The default amount of trimming is gamma (gamma=0.1,0.2,0.25).

#

if(gamma <= 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

x<-x[!is.na(x)] # Remove missing values in x

y<-y[!is.na(y)] # Remove missing values in y

if(length(x)!=length(y)) stop("The number of observations must be equal")

set.seed(2)

yuendci <- vector(mode = "numeric", length = 2)

m<-length(x)-2*round(gamma*length(x))

d1 <- winvar(x, gamma)*(length(x)-1)/(m*(m-1))

d2 <- winvar(y, gamma)*(length(y)-1)/(m*(m-1))

d12 <- wincov(x, y, gamma)*(length(x)-1) /(m * (m - 1))


df <- m - 1

se <- sqrt(d1 + d2 - 2 * d12)

md <- mean(x, gamma) - mean(y, gamma)

yuendci[1] <- md - qt(1 - alpha/2, df) * se

yuendci[2] <- md + qt(1 - alpha/2, df) * se

yuendci }

• Function to calculate a 95% percentile bootstrap confidence interval for a
difference of dependent 2γ-trimmed means $\mu_{t1} - \mu_{t2}$:
$\left(D^{*(l)}, D^{*(u)}\right).$

yuendcib<-function(x, y, gamma, alpha = 0.05, nboot = 599, side = F) {

#

# Compute a (1-alpha) confidence interval for the difference

# between two paired-trimmed means with percentile bootstrap.

#

# side=T, for true, indicates the symmetric two-sided method.

#

# The number of bootstrap replications is nboot=599.

#

side <- as.logical(side)

yuendcib <- vector(mode = "numeric", length = 2)


x <- x[!is.na(x)]

# Remove missing values in x


y <- y[!is.na(y)]

# Remove missing values in y

if(length(x) != length(y)) stop("Must have equal sample sizes.")

set.seed(2)

# set seed of random number generator so that

# results can be duplicated.

print("Taking bootstrap samples. Please wait.")

data <- matrix(sample(length(y), size = length(y) * nboot,

replace = T), nrow = nboot)

boot1 <- matrix(x[data], nrow = nboot, ncol = length(x))

bvec1 <- apply(boot1, 1, mean,gamma)

# Vector containing the bootstrap

# estimates for the first group, of size 1 by nboot.

boot2 <- matrix(y[data], nrow = nboot, ncol = length(y))

bvec2 <- apply(boot2, 1, mean,gamma)

# Vector containing the bootstrap

# estimates for the second group, of size 1 by nboot.

#

# Compute the nboot=599 bootstrap statistics.

tval<-bvec1- bvec2

if(side)

tval <- abs(tval)

tval <- sort(tval)

low <- round((alpha * nboot)/2)

up <- round((1 - alpha/2) * nboot)

yuendcib[1] <- tval[low]


yuendcib[2] <- tval[up]

yuendcib }

• Function to calculate a bootstrap-t interval as follows:
$\left(\bar{X}_{t1} - \bar{X}_{t2} - U_d^{*(u)} \sqrt{d_1 + d_2 - 2d_{12}},\; \bar{X}_{t1} - \bar{X}_{t2} - U_d^{*(l)} \sqrt{d_1 + d_2 - 2d_{12}}\right); \qquad (9)$
$\left(\bar{X}_{t1} - \bar{X}_{t2} - U_d^{*(u)} \sqrt{d_1 + d_2 - 2d_{12}},\; \bar{X}_{t1} - \bar{X}_{t2} + U_d^{*(u)} \sqrt{d_1 + d_2 - 2d_{12}}\right). \qquad (10)$

yuendcibt<-function(x, y, gamma, alpha = 0.05, nboot = 599, side = F) {

#

# Compute a 1-alpha confidence interval for the difference

# between two dependent trimmed means with bootstrap-t method.

# The default is side=F yielding an equal-tailed confidence

# interval given by equation (9) as above.

#

# side=T, for true, indicates the symmetric two-sided method

# given by equation (10) as above.

#

# The number of bootstrap replications is nboot.

if(gamma <= 0 || gamma >= 0.5) stop("gamma must be between 0 and 0.5")

side <- as.logical(side)

yuendcibt <- vector(mode = "numeric", length = 2)

wincovxy <- vector(mode = "numeric", length = nboot)

xbot <- vector(mode = "numeric", length = nboot)

ybot <- vector(mode = "numeric", length = nboot)

set.seed(2)

x <- x[!is.na(x)]


# Remove missing values in x

y <- y[!is.na(y)]

# Remove missing values in y

if(length(x) != length(y)) stop(

"The number of observations must be equal")

n1 <- length(x)

n2 <- length(y)

m <- n1 - 2 * round(gamma * n1)

z <- c(x,y)

xstar <- x - mean(x, gamma) + mean(z, gamma)

ystar <- y - mean(y, gamma) + mean(z, gamma)

data <- matrix(sample(length(xstar), size = length(xstar) * nboot,

replace = T), nrow = nboot)

print("Taking bootstrap samples. Please wait.")

boot1 <- matrix(xstar[data], nrow = nboot, ncol = length(xstar))

boot2 <- matrix(ystar[data], nrow = nboot, ncol = length(ystar))

top <- apply(boot1, 1, mean, gamma) - apply(boot2, 1, mean, gamma)

for (i in 1:nboot)

wincovxy[i] <- wincov(boot1[i,], boot2[i,], gamma)

for (i in 1:nboot)

xbot[i] <- winvar(boot1[i,], gamma)

for (i in 1:nboot)

ybot[i] <- winvar(boot2[i,], gamma)

# We could also use the "apply" function:

# xbot <- apply(boot1, 1, winvar, gamma)

# ybot <- apply(boot2, 1, winvar, gamma)

xbot1 <- xbot * (n1 - 1)/(m * (m - 1))

ybot1 <- ybot * (n2 - 1)/(m * (m - 1))

bet <- wincovxy * (n2 - 1)/(m * (m - 1))

tval <- top/sqrt(xbot1 + ybot1 - 2 * bet)

if(side)

tval <- abs(tval)

tval <- sort(tval)


low <- round((alpha * nboot)/2)

up <- round((1 - alpha/2) * nboot)

d1 <- winvar(x, gamma)*(n1 - 1)/(m * (m - 1))

d2 <- winvar(y, gamma)*(n2 - 1)/(m * (m - 1))

d12 <- wincov(x, y, gamma)*(n1 - 1)/(m * (m - 1))

se <- sqrt(d1 + d2 - 2 * d12)

md <- mean(x, gamma) - mean(y, gamma)

yuendcibt[1] <- md - tval[up] * se

yuendcibt[2] <- md - tval[low] * se


if(side)

yuendcibt[1] <- md - tval[up] * se

if(side)

yuendcibt[2] <- md + tval[up] * se

yuendcibt }


References

Basu, S., and DasGupta, A. (1995). Robustness of standard confidence intervals for location parameters under departures from normality. Annals of Statistics 23, 1433-1442.

Boos, D. D., and Monahan, J. F. (1982). The bootstrap for robust Bayesian analysis: an adventure in computing. Computer Science and Statistics: Proceedings of the 14th Symposium on the Interface, Springer-Verlag, New York.

Chernick, M. R. (1999). Bootstrap Methods, Wiley, New York.

Dielman, T., Lowry, C., and Pfaffenberger, R. (1994). A comparison of quantile estimators. Communications in Statistics - Simulation and Computation 23, 355-371.

Efron, B., and Tibshirani, R. J. (1993). An Introduction to the Bootstrap, Chapman and Hall, New York, London.

Gibbons, J. D. (1997). Nonparametric Methods for Quantitative Analysis (Third Edition), American Sciences Press, Columbus, Ohio.

Hall, P. (1986). On the bootstrap and confidence intervals. Annals of Statistics 14, 1431-1452.

Hampel, F. R., Ronchetti, E. M., Rousseeuw, P. J., and Stahel, W. A. (1986). Robust Statistics, Wiley, New York.

Harrell, F. E., and Davis, C. E. (1982). A new distribution-free quantile estimator. Biometrika 69, 635-640.

Huber, P. J. (1964). Robust estimation of a location parameter. Annals of Mathematical Statistics 35, 73-101.

Huber, P. J. (1981). Robust Statistics, Wiley, New York.

Patel, K. R., Mudholkar, G. S., and Fernando, J. L. I. (1988). Student's t approximations for three simple robust estimators. Journal of the American Statistical Association 83, 1203-1210.

Rousseeuw, P. J., and Leroy, A. (1987). Robust Regression and Outlier Detection, Wiley, New York.

Serfling, R. J. (1980). Approximation Theorems of Mathematical Statistics, Wiley, New York.

Staudte, R. G., and Sheather, S. J. (1990). Robust Estimation and Testing, Wiley, New York.

Welsh, A. H. (1987). The trimmed mean in the linear model. Annals of Statistics 15, 20-36.

Wilcox, R. R. (1997). Introduction to Robust Estimation and Hypothesis Testing, Academic Press, San Diego.

Yohai, V. J. (1987). High breakdown-point and high efficiency robust estimates for regression. Annals of Statistics 15, 642-656.