
ADAPTIVE NONPARAMETRIC INSTRUMENTAL VARIABLES ESTIMATION: EMPIRICAL CHOICE OF THE REGULARIZATION PARAMETER

by

Joel L. Horowitz Department of Economics Northwestern University

Evanston, IL 60208 USA

November 2011

ABSTRACT

In nonparametric instrumental variables estimation, the mapping that identifies the function of interest, $g$ say, is discontinuous and must be regularized (that is, modified) to make consistent estimation possible. The amount of modification is controlled by a regularization parameter. The optimal value of this parameter depends on unknown population characteristics and cannot be calculated in applications. Theoretically justified methods for choosing the regularization parameter empirically in applications are not yet available. This paper presents, apparently for the first time, such a method for use in series estimation, where the regularization parameter is the number of terms in a series approximation to $g$. The method does not require knowledge of the smoothness of $g$ or of other unknown functions. It adapts to their unknown smoothness. The estimator of $g$ based on the empirically selected regularization parameter converges in probability at a rate that is at least as fast as the asymptotically optimal rate multiplied by $(\log n)^{1/2}$, where $n$ is the sample size. The asymptotic integrated mean-square error (AIMSE) of the estimator is within a specified factor of the optimal AIMSE.

Key words: Ill-posed inverse problem, regularization, sieve estimation, series estimation, nonparametric estimation JEL Classification: C13, C14, C21 _____________________________________________________________________ I thank Xiaohong Chen, Joachim Freyberger, Sokbae Lee, and Vladimir Spokoiny for helpful discussions. This research was supported in part by NSF grant SES-0817552.


ADAPTIVE NONPARAMETRIC INSTRUMENTAL VARIABLES ESTIMATION: EMPIRICAL CHOICE OF THE REGULARIZATION PARAMETER

1. INTRODUCTION

This paper is about estimating the unknown function $g$ in the model

(1.1) $\quad Y = g(X) + U; \quad E(U \mid W = w) = 0$

for almost every $w$ or, equivalently,

(1.2) $\quad E[Y - g(X) \mid W = w] = 0$

for almost every $w$. In this model, $g$ is a function that satisfies regularity conditions but is otherwise unknown, $Y$ is a scalar dependent variable, $X$ is a continuously distributed explanatory variable that may be correlated with $U$ (that is, $X$ may be endogenous), $W$ is a continuously distributed instrument for $X$, and $U$ is an unobserved random variable. The data are an independent random sample of $(Y, X, W)$. The paper presents, apparently for the first time, a theoretically justified, empirical method for choosing the regularization parameter that is needed for estimation of $g$.

Existing nonparametric estimators of $g$ in (1.1)-(1.2) can be divided into two main classes: sieve (or series) estimators and kernel estimators. Sieve estimators have been developed by Ai and Chen (2003); Newey and Powell (2003); Blundell, Chen, and Kristensen (2007); and Horowitz (2011). Kernel estimators have been developed by Hall and Horowitz (2005) and Darolles, Florens, and Renault (2006). Florens and Simoni (2010) describe a quasi-Bayesian estimator based on kernels. Hall and Horowitz (2005) and Chen and Reiss (2011) found the optimal rate of convergence of an estimator of $g$. Horowitz (2007) gave conditions for asymptotic normality of the estimator of Hall and Horowitz (2005). Horowitz and Lee (2010) showed how to use the sieve estimator of Horowitz (2011) to construct uniform confidence bands for $g$. Newey, Powell, and Vella (1999) present a control function approach to estimating $g$ in a model that is different from (1.1)-(1.2) but allows endogeneity of $X$ and achieves identification through an instrument. The control function model is non-nested with (1.1)-(1.2) and is not discussed further in this paper. Chernozhukov, Imbens, and Newey (2007); Horowitz and Lee (2007); and Chernozhukov, Gagliardini, and Scaillet (2008) have developed methods for estimating a quantile-regression version of model (1.1)-(1.2). Chen and Pouzo (2008, 2009) developed a method for estimating a large class of nonparametric and semiparametric conditional moment models with possibly non-smooth moments. This class includes the quantile-regression version of (1.1)-(1.2).


As is explained further in Section 2 of this paper, the relation that identifies $g$ in (1.1)-(1.2) creates an ill-posed inverse problem. That is, the mapping from the population distribution of $(Y, X, W)$ to $g$ is discontinuous. Consequently, $g$ cannot be estimated consistently by replacing unknown population quantities in the identifying relation with consistent estimators. To achieve a consistent estimator it is necessary to regularize (or modify) the mapping that identifies $g$. The amount of modification is controlled by a parameter called the regularization parameter. The optimal value of the regularization parameter depends on unknown population characteristics and, therefore, cannot be calculated in applications. Although there have been proposals of informal rules-of-thumb for choosing the regularization parameter in applications, theoretically justified empirical methods are not yet available.

This paper presents an empirical method for choosing the regularization parameter in sieve or series estimation, where the regularization parameter is the number of terms in the series approximation to $g$. The method consists of optimizing a sample analog of a weighted version of the integrated mean-square error of a series estimator of $g$. The method does not require a priori knowledge of the smoothness of $g$ or of other unknown functions. It adapts to their unknown smoothness. The estimator of $g$ based on the empirically selected regularization parameter also adapts to unknown smoothness. It converges in probability at a rate that is at least as fast as the asymptotically optimal rate multiplied by $(\log n)^{1/2}$, where $n$ is the sample size. Moreover, its asymptotic integrated mean-square error (AIMSE) is within a specified factor of the optimal AIMSE. The paper does not address the question of whether the factor of $(\log n)^{1/2}$ can be removed or is an unavoidable price that must be paid for adaptation. This question is left for future research.

Section 2 provides background on the estimation problem and the series estimator that is used with the adaptive estimation procedure. This section also reviews the relevant mathematics and statistics literature. The problems treated in that literature are simpler than (1.1)-(1.2). Section 3 describes the proposed method for selecting the regularization parameter. Section 4 presents the results of Monte Carlo experiments that explore the finite-sample performance of the method. Section 5 presents concluding comments. All proofs are in the supplemental appendix.

2. BACKGROUND

This section explains the estimation problem and the need for regularization, outlines the sieve estimator that is used with the adaptive estimation procedure, and reviews the statistics literature on selecting the regularization parameter.


2.1 The Estimation Problem and the Need for Regularization

Let $X$ and $W$ be continuously distributed random variables. Assume that the supports of $X$ and $W$ are $[0,1]$. This assumption does not entail a loss of generality, because it can be satisfied by, if necessary, carrying out monotone increasing transformations of $X$ and $W$. Let $f_{XW}$ and $f_W$, respectively, denote the probability density functions of $(X, W)$ and $W$. Define $m(w) = E(Y \mid W = w) f_W(w)$.

Let $L^2[0,1]$ be the space of real-valued, square-integrable functions on $[0,1]$. Define the operator $A: L^2[0,1] \to L^2[0,1]$ by

$$(A\nu)(w) = \int_{[0,1]} \nu(x) f_{XW}(x, w)\, dx,$$

where $\nu$ is any function in $L^2[0,1]$. Then $g$ in (1.1)-(1.2) satisfies $Ag = m$.

Assume that $A$ is one-to-one, which is necessary for identification of $g$. Then $g = A^{-1}m$. If $f_{XW}^2$ is integrable on $[0,1]^2$, then zero is a limit point (and the only limit point) of the singular values of $A$. Consequently, the singular values of $A^{-1}$ are unbounded, and $A^{-1}$ is a discontinuous operator. This is the ill-posed inverse problem. Because of this problem, $g$ could not be estimated consistently by replacing $m$ in $g = A^{-1}m$ with a consistent estimator, even if $A$ were known. To estimate $g$ consistently, it is necessary to regularize (or modify) $A$ so as to remove the discontinuity of $A^{-1}$. A variety of regularization methods have been developed. See, for example, Engl, Hanke, and Neubauer (1996); Kress (1999); and Carrasco, Florens, and Renault (2007), among many others. The regularization method used in this paper is series truncation, which is a modification of the Petrov-Galerkin method that is well known in the theory of integral equations. See, for example, Kress (1999, pp. 240-245). It amounts to approximating $A$ with a finite-dimensional matrix. The singular values of this matrix are bounded away from zero, so the inverse of the approximating matrix is a continuous operator. The details of the method are described further in Section 2.2.
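The instability described above, and the stabilizing effect of truncation, can be seen in a small numerical sketch. This is illustrative only (not the paper's estimator): a diagonal operator with singular values $2^{-j}$ stands in for $A$, and all numbers are hypothetical.

```python
# Illustrative sketch: a diagonal operator whose singular values accumulate at
# zero. Inverting the full operator amplifies even a tiny error in m, while
# inverting a truncated (regularized) version does not.
import numpy as np

J = 20
sing = 2.0 ** -np.arange(1, J + 1)
A = np.diag(sing)               # discretized operator with decaying singular values
b = np.ones(J)                  # generalized Fourier coefficients of a "true" g
m = A @ b                       # m = Ag
m_hat = m + 1e-6                # m measured with a tiny (deterministic) error

b_full = np.linalg.solve(A, m_hat)                                 # unregularized inverse
J_trunc = 8
b_trunc = np.linalg.solve(A[:J_trunc, :J_trunc], m_hat[:J_trunc])  # series truncation

err_full = np.linalg.norm(b_full - b)                # ~1.2: the error blows up
err_trunc = np.linalg.norm(b_trunc - b[:J_trunc])    # ~3e-4: stable
```

The $10^{-6}$ perturbation in $m$ is magnified by a factor of roughly $2^{20}$ in the unregularized solution, while the truncated system, whose smallest singular value is bounded away from zero, barely feels it.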

2.2 Sieve Estimation and Regularization by Series Truncation

The adaptive estimation procedure uses a modified version of Horowitz’s (2011) sieve

estimator of g . The estimator is defined in terms of series expansions of g , , and . Let m A

{ : 1,2,...}j jψ = be a complete, orthonormal basis for . The expansions are 2[0,1]L

1( ) ( )j jj

g x b xψ∞

==∑ 1

( ) ( )k kkm w m wψ

==∑, , and

1 1( , ) ( ) ( )XW jk j kj k

f x w c x wψ ψ∞ ∞

= ==∑ ∑ ,


where

$$b_j = \int_{[0,1]} g(x) \psi_j(x)\, dx, \quad m_k = \int_{[0,1]} m(w) \psi_k(w)\, dw, \quad \text{and} \quad c_{jk} = \int_{[0,1]^2} f_{XW}(x, w) \psi_j(x) \psi_k(w)\, dx\, dw.$$

To estimate $g$, we need estimators of $m_k$, $c_{jk}$, $m$, and $f_{XW}$. Denote the data by $\{Y_i, X_i, W_i : i = 1, \ldots, n\}$, where $n$ is the sample size. The estimators of $m_k$ and $c_{jk}$, respectively, are

$$\hat{m}_k = n^{-1} \sum_{i=1}^{n} Y_i \psi_k(W_i) \quad \text{and} \quad \hat{c}_{jk} = n^{-1} \sum_{i=1}^{n} \psi_j(X_i) \psi_k(W_i).$$

The estimators of $m$ and $f_{XW}$, respectively, are

$$\hat{m}(w) = \sum_{k=1}^{J_n} \hat{m}_k \psi_k(w) \quad \text{and} \quad \hat{f}_{XW}(x, w) = \sum_{j=1}^{J_n} \sum_{k=1}^{J_n} \hat{c}_{jk} \psi_j(x) \psi_k(w),$$

where $J_n$ is a series truncation point that, for now, is assumed to be non-stochastic. It is assumed that $J_n \to \infty$ as $n \to \infty$ at a rate that is specified in Section 3.1. Section 3.3 describes an empirical method for choosing $J_n$. Define the operator $\hat{A}$ that estimates $A$ by

$$(\hat{A}\nu)(w) = \int_{[0,1]} \nu(x) \hat{f}_{XW}(x, w)\, dx.$$
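The sample moments $\hat{m}_k$ and $\hat{c}_{jk}$ are simple averages and can be sketched in a few lines. The cosine basis and the helper names below are illustrative assumptions (the paper's Monte Carlo experiments use Legendre polynomials).

```python
# Hypothetical sketch of the sample analogs m_hat_k and c_hat_jk, using an
# orthonormal cosine basis on [0,1]: psi_1 = 1, psi_j = sqrt(2) cos((j-1) pi t).
import numpy as np

def psi(j, t):
    """j-th orthonormal cosine basis function on [0,1]."""
    t = np.asarray(t, dtype=float)
    return np.ones_like(t) if j == 1 else np.sqrt(2.0) * np.cos((j - 1) * np.pi * t)

def sample_coeffs(Y, X, W, Jn):
    """m_hat[k] = n^{-1} sum_i Y_i psi_k(W_i); c_hat[j,k] = n^{-1} sum_i psi_j(X_i) psi_k(W_i)."""
    n = len(Y)
    PX = np.column_stack([psi(j, X) for j in range(1, Jn + 1)])   # n x Jn
    PW = np.column_stack([psi(k, W) for k in range(1, Jn + 1)])
    m_hat = PW.T @ Y / n
    c_hat = PX.T @ PW / n
    return m_hat, c_hat

# Deterministic sanity check on a midpoint grid with X = W and Y = psi_2(X):
# then m_k = delta_{2k} and c_jk = delta_{jk} up to quadrature error.
n = 2000
t = (np.arange(n) + 0.5) / n
m_hat, c_hat = sample_coeffs(psi(2, t), t, t, Jn=5)
```

With this exogenous toy design, `c_hat` is close to the identity matrix and `m_hat` is close to the second unit vector, as the population formulas predict.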

For any function $\nu: [0,1] \to \mathbb{R}$, define $D^j \nu(x) = d^j \nu(x)/dx^j$ whenever the derivative exists. Define $D^0 \nu = \nu$. Define the norm

$$\| \nu \|_s^2 = \sum_{0 \le j \le s} \int_{[0,1]} [D^j \nu(x)]^2\, dx.$$

Define the function space

$$\mathcal{H}_s = \{ \nu : [0,1] \to \mathbb{R} : \| \nu \|_s \le C_0 \},$$

where $C_0 > 0$ is a finite constant. The value of $C_0$ is unknown in applications, and the method for choosing the truncation parameter that is presented in this paper does not require knowledge of $C_0$. The method requires knowing only that a $C_0 < \infty$ exists. For any integer $J > 0$, define the subset of $\mathcal{H}_s$

$$\mathcal{H}_{Js} = \left\{ \nu = \sum_{j=1}^{J} \nu_j \psi_j : \| \nu \|_s \le C_0 \right\},$$

where $\{\nu_j : j = 1, \ldots, J\}$ are finite constants. Horowitz's (2011) sieve estimator of $g$ is defined as


(2.2) $\quad \hat{g} = \arg\min_{\nu \in \mathcal{H}_{J_n s}} \| \hat{A}\nu - \hat{m} \|,$

where $\| \cdot \|$ is the norm on $L^2[0,1]$. Under the assumptions of Section 3, $P(\hat{A}\hat{g} = \hat{m}) \to 1$ as $n \to \infty$. Therefore, $\hat{g} = \hat{A}^{-1}\hat{m}$ with probability approaching 1 as $n \to \infty$. See Proposition A.1 of the appendix for a proof. One consequence of this result is that, with probability approaching 1 as $n \to \infty$, $\hat{g}$ does not depend on $C_0$.

To obtain the modified estimator that is used with this paper's adaptive estimation procedure, let $\langle \cdot, \cdot \rangle$ denote the inner product in $L^2[0,1]$. For $j = 1, \ldots, J_n$, define $\hat{b}_j = \langle \hat{g}, \psi_j \rangle$. The $\hat{b}_j$'s are the generalized Fourier coefficients of $\hat{g}$ with the basis functions $\{\psi_j\}$. Let $J \le J_n$ be a positive integer. The modified estimator of $g$ is

(2.3) $\quad \hat{g}_J = \sum_{j=1}^{J} \hat{b}_j \psi_j.$

If $J$ is chosen to minimize the AIMSE of $\hat{g}_J$, then $\| \hat{g}_J - g \|$ converges in probability to 0 at the optimal rate of Chen and Reiss (2011). See Proposition A.2 of the appendix. However, this choice of $J$ is not available in applications because it depends on unknown population parameters. The adaptive estimation procedure consists of choosing $J$ in (2.3) to minimize a sample analog of a weighted version of the AIMSE of $\hat{g}_J$. This procedure is explained in Section 3.1. Let $\hat{J}$ denote the resulting value of $J$. The adaptive estimator of $g$ is $\hat{g}_{\hat{J}}$. Under the regularity conditions of Section 3.2, $\| \hat{g}_{\hat{J}} - g \|$ converges in probability to 0 at a rate that is at least as fast as the asymptotically optimal rate times $(\log n)^{1/2}$. Moreover, the AIMSE of $\hat{g}_{\hat{J}}$ is within a factor of $2 + (4/3)\log n$ of the AIMSE of the infeasible estimator that minimizes the AIMSE of $\hat{g}_J$. Achieving these results does not require knowledge of $s$, $C_0$, or the rate of convergence of the singular values of $A$.

An alternative to the estimator (2.3) consists of using the estimator (2.2) with $\hat{J}$ in place of $J_n$. However, replacing $J_n$ with $\hat{J}$ causes the lengths of the series in $\hat{A}$ and $\hat{m}$ to be random variables. The methods of proof of this paper do not apply when the lengths of these series are random. The asymptotic properties of $\hat{g}$ with $\hat{J}$ in place of $J_n$ are unknown.
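Since $\hat{A}\hat{g} = \hat{m}$ holds with probability approaching 1, computing $\hat{g}_J$ reduces to solving a finite linear system in the generalized Fourier coefficients and keeping the first $J$ of them. The sketch below is a hypothetical coefficient-form implementation, with a toy diagonal $[\hat{c}_{jk}]$ for illustration.

```python
# Sketch of the modified estimator (2.3) in coefficient form. If
# nu = sum_j b_j psi_j, then the psi_k-coefficient of A_hat nu is
# sum_j c_hat_jk b_j, so A_hat g_hat = m_hat reads C' b = m_hat with
# C = [c_hat_jk].
import numpy as np

def g_hat_J_coeffs(m_hat, c_hat, J):
    """Return the first J generalized Fourier coefficients of g_hat."""
    b_hat = np.linalg.solve(c_hat.T, m_hat)   # g_hat = A_hat^{-1} m_hat, in coefficients
    return b_hat[:J]

# Toy check with a diagonal c_hat: b_hat = (1, 1, 1), truncated to J = 2.
c = np.diag([1.0, 0.5, 0.25])
m = np.array([1.0, 0.5, 0.25])
print(g_hat_J_coeffs(m, c, 2))    # → [1. 1.]
```

Truncating the coefficient vector after solving, rather than re-solving a smaller system, is exactly what distinguishes (2.3) from the alternative discussed above.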


2.3 Review of Related Mathematics and Statistics Literature

Ill-posed inverse problems in models that are similar to but simpler than (1.1)-(1.2) have long been studied in mathematics and statistics. Two important characteristics of (1.1)-(1.2) are that (1) the operator $A$ is unknown and must be estimated from the data and (2) the distribution of $V \equiv Y - E[g(X) \mid W]$ is unknown. The mathematics and statistics literatures contain no methods for choosing the regularization parameter in (1.1)-(1.2) under these conditions.

A variety of ways to choose regularization parameters are known in mathematics and numerical analysis. Engl, Hanke, and Neubauer (1996); Mathé and Pereverzev (2003); Bauer and Hohage (2005); Wahba (1977); and Lukas (1993, 1998) describe many. Most of these methods assume that $A$ is known and that the "data" are deterministic or that $\mathrm{Var}(Y \mid X = x)$ is known and independent of $x$. Such methods are not suitable for econometric applications.

Spokoiny and Vial (2011) describe a method for choosing the regularization parameter in an estimation problem in which $A$ is known and $V$ is normally distributed. Loubes and Ludeña (2008) also consider a setting in which $A$ is known. Efromovich and Koltchinskii (2001), Cavalier and Hengartner (2005), Hoffmann and Reiss (2008), and Marteau (2006, 2009) consider settings in which $A$ is known up to a random perturbation and, possibly, a truncation error but is not estimated from the data. Johannes and Schwarz (2010) treat estimation of $g$ when the eigenfunctions of $A^*A$ are known to be trigonometric functions, where $A^*$ is the adjoint of $A$. Loubes and Marteau (2009) treat estimation of $g$ when the eigenfunctions of $A^*A$ are known but are not necessarily trigonometric functions and the eigenvalues must be estimated from data. Among the settings in the mathematics and statistics literature, this is the closest to the one considered here. Loubes and Marteau obtain non-asymptotic oracle inequalities for the risk of their estimator and show that, for a suitable $p > 4s$, their estimator's rate of convergence is within a factor of $(\log n)^p$ of the asymptotically optimal rate. However, the eigenfunctions of $A^*A$ are not known in econometric applications. Section 3 describes a method for selecting $J$ empirically when neither the eigenvalues nor the eigenfunctions of $A^*A$ are known. In contrast to Loubes and Marteau (2009), the results of this paper are asymptotic. However, Monte Carlo experiments that are reported in Section 4 indicate that the adaptive procedure works well with samples of practical size. Parts of the proofs in this paper are similar to parts of the proofs of Loubes and Marteau (2009).


3. MAIN RESULTS

This section begins with an informal description of the method for choosing $J$. Section 3.2 presents the formal results.

3.1 Description of the Method for Choosing J

Define the asymptotically optimal $J$ as the value that minimizes the asymptotic integrated mean-square error (AIMSE) of $\hat{g}_J$ as an estimator of $g$. The AIMSE is $E_A \| \hat{g}_J - g \|^2$, where $E_A(\cdot)$ denotes the expectation of the leading term of the asymptotic expansion of $(\cdot)$. Denote the asymptotically optimal value of $J$ by $J_{opt}$. It is shown in Proposition A.2 of the appendix that under the regularity conditions of Section 3.2, $E_A \| \hat{g}_{J_{opt}} - g \|^2$ converges to zero at the fastest possible rate (Chen and Reiss 2011). However, $E_A \| \hat{g}_J - g \|^2$ depends on unknown population parameters, so it cannot be minimized in applications. We replace the unknown parameters with sample analogs, thereby obtaining a feasible estimator of a weighted version of $E_A \| \hat{g}_J - g \|^2$. Let $\hat{J}$ denote the value of $J$ that minimizes the feasible estimator. Note that $\hat{J}$ is a random variable. The adaptive estimator of $g$ is $\hat{g}_{\hat{J}}$. The AIMSE of the adaptive estimator is $E_A \| \hat{g}_{\hat{J}} - g \|^2$. Under the regularity conditions of Section 3.2,

(3.1) $\quad E_A \| \hat{g}_{\hat{J}} - g \|^2 \le [2 + (4/3)\log(n)]\, E_A \| \hat{g}_{J_{opt}} - g \|^2.$

Thus, the rate of convergence in probability of $\| \hat{g}_{\hat{J}} - g \|$ is within a factor of $(\log n)^{1/2}$ of the asymptotically optimal rate. Moreover, the AIMSE of $\hat{g}_{\hat{J}}$ is within a factor of $2 + (4/3)\log n$ of the AIMSE of the infeasible optimal estimator $\hat{g}_{J_{opt}}$.

The following notation is used in addition to that already defined. For any positive integer $J$, let $A_J$ be the operator on $L^2[0,1]$ that is defined by

$$(A_J \nu)(w) = \int_{[0,1]} \nu(x) a_J(x, w)\, dx,$$

where

$$a_J(x, w) = \sum_{j=1}^{J} \sum_{k=1}^{J} c_{jk} \psi_j(x) \psi_k(w).$$

Let $J_n$ be the series truncation point defined in Section 2.2. For any $x \in [0,1]$, define


$$\delta_n(x, Y, X, W) = [Y - g_{J_n}(X)] \sum_{k=1}^{J_n} \psi_k(W) (A_{J_n}^{-1} \psi_k)(x),$$

$$S_n(x) = n^{-1} \sum_{i=1}^{n} \delta_n(x, Y_i, X_i, W_i),$$

and

$$g_J = \sum_{j=1}^{J} b_j \psi_j.$$

The following proposition is proved in the appendix.

Proposition 1: Let Assumptions 1-6 of Section 3.2 hold. Then, as $n \to \infty$,

$$\hat{g}_J - g_J = \sum_{j=1}^{J} \langle S_n, \psi_j \rangle \psi_j + r_n,$$

where

$$r_n = o_p\!\left( \left\| \sum_{j=1}^{J} \langle S_n, \psi_j \rangle \psi_j \right\| \right)$$

for all $J \le J_n$. ■

Now for any $J \le J_n$,

$$E_A \| \hat{g}_J - g \|^2 = E_A \| \hat{g}_J - g_J + g_J - g \|^2 = E_A \| \hat{g}_J - g_J \|^2 + \| g_J - g \|^2.$$

Therefore, it follows from Proposition 1 that

$$E_A \| \hat{g}_J - g \|^2 = \sum_{j=1}^{J} E_A \langle S_n, \psi_j \rangle^2 + \| g_J - g \|^2.$$

Define

$$T_n(J) = \sum_{j=1}^{J} E_A \langle S_n, \psi_j \rangle^2 - \| g_J \|^2.$$

Assume that $J_n \ge J_{opt}$. Then, because $\| g_J - g \|^2 = \| g \|^2 - \| g_J \|^2$ by orthonormality of the $\psi_j$ and $\| g \|^2$ does not depend on $J$,

$$J_{opt} = \arg\min_{J > 0} T_n(J).$$

We now put $T_n(J)$ into an equivalent form that is more convenient for the analysis that follows. Observe that


$$(A_{J_n}^{-1} \psi_k)(x) = \sum_{j=1}^{J_n} c^{jk} \psi_j(x),$$

where $c^{jk}$ is the $(j, k)$ element of the inverse of the $J_n \times J_n$ matrix $[c_{jk}]$. This inverse exists under the assumptions of Section 3.2. Therefore,

$$\sum_{k=1}^{J_n} \psi_k(W) (A_{J_n}^{-1} \psi_k)(x) = \sum_{j=1}^{J_n} \sum_{k=1}^{J_n} c^{jk} \psi_k(W) \psi_j(x) = \sum_{j=1}^{J_n} \psi_j(x) [(A_{J_n}^{-1})^* \psi_j](W),$$

where $*$ denotes the adjoint operator. It follows that

$$\delta_n(x, Y, X, W) = [Y - g_{J_n}(X)] \sum_{j=1}^{J_n} \psi_j(x) [(A_{J_n}^{-1})^* \psi_j](W)$$

and

$$\langle \delta_n(\cdot, Y, X, W), \psi_j \rangle = [Y - g_{J_n}(X)]\, [(A_{J_n}^{-1})^* \psi_j](W).$$

Therefore,

$$T_n(J) = E_A \sum_{j=1}^{J} \left\{ n^{-1} \sum_{i=1}^{n} [Y_i - g_{J_n}(X_i)] [(A_{J_n}^{-1})^* \psi_j](W_i) \right\}^2 - \| g_J \|^2.$$

It follows from Lemma 3 of the appendix and the assumptions of Section 3.2 that

$$E_A \sum_{j=1}^{J} \left\{ n^{-1} \sum_{i=1}^{n} [Y_i - g_{J_n}(X_i)] [(A_{J_n}^{-1})^* \psi_j](W_i) \right\}^2 = n^{-1} \sum_{j=1}^{J} E_A [Y - g_{J_n}(X)]^2 \{ [(A_{J_n}^{-1})^* \psi_j](W) \}^2.$$

Therefore,

$$T_n(J) = n^{-1} \sum_{j=1}^{J} E_A [Y - g_{J_n}(X)]^2 \{ [(A_{J_n}^{-1})^* \psi_j](W) \}^2 - \| g_J \|^2.$$

This is the desired form of $T_n(J)$.

$T_n(J)$ depends on the unknown parameters $g_{J_n}$ and $A_{J_n}$ and on the operator $E_A$. Therefore, $T_n(J)$ must be replaced by an estimator for use in applications. One possibility is to


replace $g_{J_n}$, $A_{J_n}$, $g_J$, and $E_A$ with $\hat{g}$, $\hat{A}$, $\hat{g}_J$, and the empirical expectation, respectively. This gives the estimator

$$\tilde{T}_n(J) \equiv n^{-2} \sum_{i=1}^{n} \sum_{j=1}^{J} [Y_i - \hat{g}(X_i)]^2 \{ [(\hat{A}^{-1})^* \psi_j](W_i) \}^2 - \| \hat{g}_J \|^2.$$

However, $\tilde{T}_n$ is unsatisfactory for two reasons. First, it does not account for the effect on $E_A \| \hat{g}_{\hat{J}} - g \|^2$ of the randomness of $\hat{J}$. This randomness is the source of the factor of $\log n$ on the right-hand side of (3.1). Second, some algebra shows that

(3.2) $\quad \| \hat{g}_J - g_J \|^2 = \| \hat{g}_J \|^2 - \| g_J \|^2 + 2 \langle g_J - \hat{g}_J, g_J \rangle.$

The right-hand side of (3.2) is asymptotically non-negligible, so the estimator of $T_n$ must compensate for its effect.

It is shown in the appendix that these problems can be overcome by using the estimator

(3.3) $\quad \hat{T}_n(J) \equiv (2/3)(\log n)\, n^{-2} \sum_{i=1}^{n} \sum_{j=1}^{J} [Y_i - \hat{g}(X_i)]^2 \{ [(\hat{A}^{-1})^* \psi_j](W_i) \}^2 - \| \hat{g}_J \|^2.$

We obtain $\hat{J}$ by solving the problem

(3.4) $\quad \text{minimize}_{J:\, 1 \le J \le J_n}\ \hat{T}_n(J),$

where $J_n$ is the truncation parameter used to obtain $\hat{g}$, $J_n$ satisfies $\rho_{J_n}^3 J_n / n^{1/2} \to 0$ and $\rho_{J_n}^4 J_n / n^{1/2} \to \infty$ as $n \to \infty$, and $\rho_{J_n}$ is defined as

(3.5) $\quad \rho_{J_n} = \sup_{\nu \in \mathcal{H}_{J_n s}} \frac{\| \nu \|}{\| (A^* A)^{1/2} \nu \|}.$

Section 3.2 gives conditions under which $\hat{g}_{\hat{J}}$ satisfies inequality (3.1). The problem of choosing $J_n$ in applications is discussed in Section 3.3.
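The selection rule of (3.3)-(3.4) can be sketched numerically: compute $\hat{T}_n(J)$ for each $J \le J_n$ and take the minimizer. The cosine basis, the coefficient-form algebra, and the function names below are illustrative assumptions, not the paper's implementation (its experiments use Legendre polynomials).

```python
# Hypothetical sketch of (3.4). In coefficient form, [(A_hat^{-1})* psi_j](W_i)
# is the (i, j) entry of PW @ inv(C), where C = [c_hat_jk] and PW holds the
# basis functions evaluated at the W_i.
import numpy as np

def psi(j, t):
    """Orthonormal cosine basis on [0,1] (an assumption for this sketch)."""
    t = np.asarray(t, dtype=float)
    return np.ones_like(t) if j == 1 else np.sqrt(2.0) * np.cos((j - 1) * np.pi * t)

def select_J(Y, X, W, Jn):
    n = len(Y)
    PX = np.column_stack([psi(j, X) for j in range(1, Jn + 1)])
    PW = np.column_stack([psi(k, W) for k in range(1, Jn + 1)])
    m_hat = PW.T @ Y / n                      # sample analogs of m_k
    C = PX.T @ PW / n                         # sample analogs of c_jk
    b_hat = np.linalg.solve(C.T, m_hat)       # coefficients of g_hat
    U_hat = Y - PX @ b_hat                    # residuals Y_i - g_hat(X_i)
    G = PW @ np.linalg.inv(C)                 # G[i, j] = [(A_hat^{-1})* psi_j](W_i)
    var_j = (U_hat[:, None] ** 2 * G ** 2).sum(axis=0) / n ** 2
    T_hat = (2.0 / 3.0) * np.log(n) * np.cumsum(var_j) - np.cumsum(b_hat ** 2)
    return int(np.argmin(T_hat)) + 1          # J_hat in {1, ..., Jn}

# Toy exogenous design on a deterministic grid with g = psi_2:
t = (np.arange(500) + 0.5) / 500
J_hat = select_J(psi(2, t), t, t, Jn=5)
```

Because $g = \psi_2$ contributes $-\hat{b}_2^2 \approx -1$ to $\hat{T}_n(J)$ for every $J \ge 2$ while the variance term is negligible in this noiseless toy design, the rule never selects $J = 1$ here.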

3.2 Formal Results

This section begins with the assumptions under which $\hat{g}_{\hat{J}}$ is shown to satisfy (3.1). A theorem that states the result formally follows the presentation of the assumptions.


Let $\| (x_1, w_1) - (x_2, w_2) \|_E$ denote the Euclidean distance between $(x_1, w_1)$ and $(x_2, w_2)$. Let $D_j f_{XW}$ denote any $j$'th partial or mixed partial derivative of $f_{XW}$. Let $D_0 f_{XW}(x, w) = f_{XW}(x, w)$. Let $A^*$ denote the adjoint operator of $A$. Define $U = Y - g(X)$ and

$$\rho_J = \sup_{\nu \in \mathcal{H}_{Js}} \frac{\| \nu \|}{\| (A^* A)^{1/2} \nu \|}.$$

Blundell, Chen, and Kristensen (2007) call $\rho_J$ the sieve measure of ill-posedness and discuss its relation to the eigenvalues of $A^* A$.

The assumptions are as follows.

Assumption 1: (i) The supports of $X$ and $W$ are $[0,1]$. (ii) $(X, W)$ has a bounded probability density function $f_{XW}$ with respect to Lebesgue measure. The probability density function of $W$, $f_W$, is non-zero almost everywhere on $[0,1]$. (iii) There are an integer $r \ge 2$ and a constant $C_f < \infty$ such that $|D_j f_{XW}(x, w)| \le C_f$ for all $(x, w) \in [0,1]^2$ and $j = 0, 1, \ldots, r$.

Assumption 2: (i) There is a finite constant $C_Y$ such that $E(Y^2 \mid W = w) \le C_Y$ for each $w \in [0,1]$. (ii) There are finite constants $C_U > 0$, $c_{U1} > 0$, and $c_{U2} > 0$ such that $E(|U|^j \mid W = w) \le C_U^{j-2}\, j!\, E(U^2 \mid W = w)$ and $c_{U1} \le E(U^2 \mid W = w) \le c_{U2}$ for every integer $j \ge 2$ and $w \in [0,1]$.

Assumption 3: (i) Equation (1.1) has a solution $g \in \mathcal{H}_s$ with $\| g \|_s < C_0$ for some $s \ge 3$. (ii) The estimators $\hat{g}$ and $\hat{g}_J$ are as defined in (2.2)-(2.3). (iii) The function $m$ has $r + s$ square-integrable derivatives. The function $m$ has infinitely many derivatives if $r = \infty$.

Assumption 4: (i) The basis functions $\{\psi_j\}$ are orthonormal, complete on $L^2[0,1]$, and have at least $s$ square-integrable derivatives, where $s$ is as in Assumption 3. Moreover, there are constants $C_\psi$ and $v$ with $0 < C_\psi < \infty$ and $0 \le v < (s - 2)/2$ such that $\sup_{0 \le x \le 1} |\psi_j(x)| \le C_\psi j^v$ for each $j = 1, 2, \ldots$. (ii) For $r$ as in Assumption 3, $\| A - A_J \| = O(J^{-r})$ if $r < \infty$ and $\| A - A_J \| = O(e^{-cJ})$ for some $c > 0$ if $r = \infty$. (iii) For any $\nu \in L^2[0,1]$ with $d < \infty$ square-integrable derivatives, there are coefficients $\nu_j$ ($j = 1, 2, \ldots$) and a constant $C < \infty$ that does not depend on $\nu$ such that

$$\left\| \nu - \sum_{j=1}^{J} \nu_j \psi_j \right\| \le C J^{-d}.$$

Assumption 5: (i) The operator $A$ is one-to-one. (ii)

$$\rho_J \sup_{\nu \in \mathcal{H}_{Js}} \frac{\| (A - A_J)\nu \|}{\| \nu \|} \le C_A J^{-s}$$

for some constant $C_A < \infty$ that does not depend on $J$.

Assumption 6: As $n \to \infty$, (i) $\rho_{J_n}^3 J_n / n^{1/2} \to 0$, (ii) $\rho_{J_n}^4 J_n / n^{1/2} \to \infty$, and (iii) $J_n^{1 + 4v} \rho_{J_n}^2 / n \to 0$.

Assumptions 1 and 2 are smoothness and boundedness conditions. At the cost of slower convergence of $\| \hat{g}_{\hat{J}} - g \|$, Assumption 2(ii) can be relaxed to allow $U$ to have only finitely many moments. Assumption 3 defines the estimator of $g$ and ensures that $m$ is sufficiently smooth. The assumption requires $\| g \|_s < C_0$ (strict inequality) to avoid complications that arise when $g$ is on the boundary of $\mathcal{H}_s$. Assumption 4 is satisfied by trigonometric functions ($v = 0$), Legendre polynomials that are shifted and scaled to be orthonormal on $[0,1]$ ($v = 1/2$), and cubic B-splines that have been orthonormalized by, say, the Gram-Schmidt procedure ($v = 3/2$) if $s$ is large enough. Legendre polynomials require $s > 3$, and cubic B-splines require $s > 5$. Assumption 5(i) is required for identification of $g$. Assumption 5(ii) ensures that $A_J$ is a sufficiently accurate approximation to $A$ on $\mathcal{H}_{Js}$. This assumption complements Assumption 4(ii), which specifies the accuracy of $A_J$ as an approximation to $A$ on the larger set $\mathcal{H}_s$. Assumption 5(ii) can be interpreted as an additional smoothness restriction on $f_{XW}$. It also can be interpreted as a restriction on the sizes of the values of $c_{jk}$ for $j \ne k$. Hall and Horowitz (2005) used a similar diagonality restriction. Assumption 6 restricts the rate of increase of $J_n$ as $n \to \infty$ and further restricts the size of $v$. Assumption 6(iii) holds automatically if Assumptions 1(iii), 4(ii), and 6(i) hold.

For sequences of positive numbers $\{a_n\}$ and $\{b_n\}$, define $a_n \asymp b_n$ if $a_n / b_n$ is bounded away from 0 and $\infty$ as $n \to \infty$. The problem of estimating $g$ is said to be mildly ill-posed if $\rho_J \asymp J^\beta$ for some finite $\beta > 0$. In this case, $J_{opt} \propto n^{1/(2r + 2s + 1)}$ and $\| \hat{g}_{J_{opt}} - g \| = O_p[n^{-s/(2r + 2s + 1)}]$ (Blundell, Chen, and Kristensen 2007; Chen and Reiss 2011). The estimation problem is severely ill-posed if $\rho_J \asymp e^{\beta J}$ for some finite $\beta > 0$. In this case, $J_{opt} = O(\log n)$ and $\| \hat{g}_{J_{opt}} - g \| = O_p[(\log n)^{-s}]$. The results of this section hold in both the mildly and severely ill-posed cases.
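To get a feel for the mildly ill-posed rates, here is a hypothetical numerical reading with illustrative smoothness values $r = 2$ and $s = 3$ (so $J_{opt}$ grows like $n^{1/11}$ and the optimal error shrinks like $n^{-3/11}$); the values are assumptions, not results from the paper.

```python
# Illustrative arithmetic for the mildly ill-posed rates quoted above.
r, s = 2, 3
for n in (10 ** 3, 10 ** 6):
    J_opt_scale = n ** (1.0 / (2 * r + 2 * s + 1))   # n^{1/11}: grows very slowly
    rate = n ** (-s / (2 * r + 2 * s + 1))           # n^{-3/11}: slow convergence
    print(n, round(J_opt_scale, 2), round(rate, 4))
```

Even at $n = 10^6$ the optimal truncation level scales only like 3 or 4 terms, which is why data-driven selection of $J$ matters so much in practice.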


We now have the following theorem.

Theorem 3.1: Let Assumptions 1-6 hold. Then $\hat{g}_{\hat{J}}$ satisfies inequality (3.1). ■

3.3 Choosing nJ in Applications

nJ in problem (3.4) depends on Jρ and, therefore, is not known in applications. This

section describes a way to choose nJ empirically. It is shown that inequality (3.1) holds with the

empirically selected nJ . The method for choosing nJ has two parts. The first part consists of

specifying a value for nJ that satisfies assumption 6. This value depends on the unknown

quantity Jρ . The second step consists of replacing Jρ with an estimator.1

To take the first step, define 0nJ by

(3.6) . 2 3.5 2 3.50 1,2,...

arg min { / : / 1 0}n J JJJ J n J nρ ρ

== − ≥

Because 2 3.5 /J J nρ is an increasing function of J , 0nJ is the smallest integer for which

2 3.5 /J 1J nρ ≥ . For example, if J J βρ = for some 0β > , then 0nJ is the integer that is closest to

and at least as large as 1/(3.5 2 )n β+ . If JJ eβρ = for some 0β > , then 0 (log )nJ O n= .

J_{n0} satisfies Assumption 6 if v is not too large, but it is not feasible because it depends on ρ_J. We obtain a feasible estimator of J_{n0} by replacing ρ_J² in (3.6) with an estimator. To specify the estimator, let Â_J (J = 1, 2, ...) be the operator on L²[0,1] whose kernel is

â_J(x, w) = Σ_{j=1}^{J} Σ_{k=1}^{J} ĉ_{jk} ψ_j(x) ψ_k(w);  x, w ∈ [0,1].

The estimator of ρ_J² is denoted by ρ̂_J² and is defined by

ρ̂_J^{−2} = inf_{h ∈ H_{Js}} ‖Â_J* Â_J h‖ / ‖h‖.

Thus, ρ̂_J^{−2} is the smallest eigenvalue of Â_J* Â_J. The estimator of J_{n0} is

Ĵ_{n0} = arg min_{J=1,2,...} { ρ̂_J² J^{3.5}/n : ρ̂_J² J^{3.5}/n − 1 ≥ 0 }.
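In the series setting, Â_J* Â_J acts on span{ψ_1, ..., ψ_J} through the J × J matrix Ĉ_J'Ĉ_J, where Ĉ_J = (ĉ_{jk}), so its smallest eigenvalue is the squared smallest singular value of Ĉ_J. A minimal numerical sketch of the feasible rule follows; the function names are mine, and the coefficient matrix is a made-up stand-in for estimated Fourier coefficients, not data from the paper.

```python
import numpy as np

def rho_hat(C):
    """rho_hat_J: reciprocal of the smallest singular value of the J x J
    coefficient matrix C, so rho_hat_J**(-2) is the smallest eigenvalue
    of the matrix form of A_J* A_J restricted to span{psi_1, ..., psi_J}."""
    return 1.0 / np.linalg.svd(C, compute_uv=False).min()

def J_hat_n0(C_full, n):
    """Feasible analogue of (3.6): smallest J with rho_hat_J**2 * J**3.5 / n >= 1."""
    for J in range(1, C_full.shape[0] + 1):
        if rho_hat(C_full[:J, :J]) ** 2 * J ** 3.5 / n >= 1.0:
            return J
    return C_full.shape[0]

# Made-up coefficient matrix with decaying singular values.
C = np.diag([1.0, 0.6, 0.3, 0.1, 0.03])
print(rho_hat(C), J_hat_n0(C, n=1000))
```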

The main result of this paper is given by the following theorem.

1 Blundell, Chen, and Kristensen (2007) proposed estimating the rate of increase of ρ_J with J by regressing an estimator of log(ρ_J) on log J or on J in the mildly and severely ill-posed cases, respectively. They did not, however, explain how to use this result to select a specific value of J for use in estimating g.


Theorem 3.2: Let Assumptions 1-6 hold. Assume that either ρ_J ∝ J^β (mildly ill-posed case) or ρ_J ∝ e^{βJ} (severely ill-posed case) for some finite β > 0. Then (i) P(Ĵ_{n0} = J_{n0}) → 1 as n → ∞. (ii) Let ĝ_Ĵ be the estimator of g that is obtained by replacing J_n with Ĵ_{n0} in (3.4). Then

E_A ‖ĝ_Ĵ − g‖² ≤ [2 + (4/3) log(n)] E_A ‖ĝ_{J_opt} − g‖². ■

Thus, the estimator ĝ_Ĵ that is based on the estimator Ĵ_{n0} satisfies the same inequality as the estimator ĝ_{J_n} that is based on the non-stochastic but infeasible J_n.

4. MONTE CARLO EXPERIMENTS

This section describes the results of a Monte Carlo study of the finite-sample performance of ĝ_Ĵ. There are 1000 Monte Carlo replications in each experiment. The basis functions {ψ_j} are Legendre polynomials that are centered and scaled to be orthonormal on [0,1]. J_n is chosen using the empirical method described in Section 3.3.
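Such a basis can be built from the standard Legendre polynomials by shifting them from [−1, 1] to [0, 1] and scaling by √(2j+1) so that ∫₀¹ ψ_j(x)² dx = 1. The sketch below is my own construction (indexing from 0 is a convenience), with a numerical check of orthonormality:

```python
import numpy as np
from numpy.polynomial import legendre

def psi(j, x):
    """Shifted Legendre polynomial of degree j, orthonormal on [0, 1]."""
    coef = np.zeros(j + 1)
    coef[j] = 1.0
    return np.sqrt(2 * j + 1) * legendre.legval(2 * np.asarray(x) - 1, coef)

# Check orthonormality numerically with Gauss-Legendre quadrature on [0, 1].
nodes, weights = legendre.leggauss(50)
x, w = (nodes + 1) / 2, weights / 2   # map the rule from [-1, 1] to [0, 1]
G = np.array([[np.sum(w * psi(j, x) * psi(k, x)) for k in range(5)]
              for j in range(5)])
print(np.allclose(G, np.eye(5)))   # → True
```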

The Monte Carlo experiments use two different designs. There are 5 experiments with Design 1 and 2 experiments with Design 2. In Design 1, the sample size is 1000. Realizations of (X, W) were generated from the model

(4.1)  f_XW(x, w) = 1 + 2 Σ_{j=1}^{∞} c_j cos(jπx) cos(jπw),

where c_j = 0.7 j^{−1} in experiment 1, c_j = 0.6 j^{−2} in experiment 2, c_j = 0.52 j^{−4} in experiment 3, c_j = 1.3 exp(−0.5 j) in experiment 4, and c_j = 2 exp(−1.5 j) in experiment 5. In all experiments, the marginal distributions of X and W are U[0,1], and the conditional distributions are unimodal with an arch-like shape. The estimation problem is mildly ill-posed in experiments 1-3 and severely ill-posed in experiments 4-5.
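The paper does not say how realizations of (X, W) were drawn from (4.1). One simple possibility, shown purely as a sketch, is rejection sampling against the uniform proposal on [0,1]², using experiment 1's coefficients and the same truncation at 100 terms; the bound M and all names are my choices.

```python
import numpy as np

rng = np.random.default_rng(0)
j = np.arange(1, 101)          # series truncated at 100 terms, as in the paper
c = 0.7 / j                    # experiment 1 coefficients c_j = 0.7 j^(-1)

def f_xw(x, w):
    """Joint density (4.1) of (X, W), truncated at 100 terms."""
    return 1.0 + 2.0 * np.sum(c * np.cos(j * np.pi * x) * np.cos(j * np.pi * w))

M = 1.0 + 2.0 * np.sum(c)      # crude upper bound on f_xw for rejection sampling

def draw_xw():
    """One draw of (X, W) by accept-reject with a uniform proposal on [0,1]^2."""
    while True:
        x, w, u = rng.uniform(size=3)
        if u * M <= f_xw(x, w):
            return x, w

x, w = draw_xw()
```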

The function g is

(4.2)  g(x) = b_0 + 2 Σ_{j=1}^{∞} b_j cos(jπx),

where b_0 = 0.5 and b_j = j^{−4} for j ≥ 1. This function is plotted in Figure 1. The series in (4.1) and (4.2) were truncated at j = 100 for computational purposes. Realizations of Y were generated from


Y = E[g(X) | W] + V,

where V ~ N(0, 0.01).
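Because the marginal distribution of W under (4.1) is U[0,1] and the cosines are orthogonal on [0,1], the conditional mean needed to generate Y has a closed form under the truncation at 100 terms: E[g(X) | W = w] = b_0 + 2 Σ_j b_j c_j cos(jπw), since only matching cosine frequencies survive the integration. The sketch below (my own derivation and naming, using experiment 1's c_j) uses this to simulate Y:

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.arange(1, 101)
b0, b = 0.5, k ** -4.0      # coefficients of g in (4.2)
c = 0.7 / k                 # Design 1, experiment 1

def g(x):
    return b0 + 2.0 * np.sum(b * np.cos(k * np.pi * x))

def cond_mean_g(w):
    # E[g(X) | W = w]: only matching cosine frequencies survive integration
    return b0 + 2.0 * np.sum(b * c * np.cos(k * np.pi * w))

def draw_y(w):
    return cond_mean_g(w) + rng.normal(0.0, 0.1)   # V ~ N(0, 0.01)
```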

Monte Carlo Design 2 mimics estimation of an Engel curve from real data and is similar

to one of the experiments in Blundell, Chen, and Kristensen (2007). The real data consist of

household-level observations from the British Family Expenditure Survey (FES), which is a

popular data source for studying consumer behavior. See Blundell, Pashardes, and Weber (1993),

for example. We use a subsample of 1516 married couples with one or two children and an

employed head of household. In these data, X denotes the logarithm of total income, and W denotes the logarithm of annual income from wages and salaries of the head of household. Blundell, Chen, and Kristensen (2007) and Blundell and Horowitz (2007) discuss the validity of W as an instrument for X. In the Monte Carlo experiment, the dependent variable Y is generated as described below.

The experiment mimics repeated sampling from the population that generated the FES data. The data {X_i, W_i : i = 1, ..., 1516} were transformed to be in [0,1]² by using the transformations

X_i → (X_i − min_{1≤j≤1516} X_j) / (max_{1≤j≤1516} X_j − min_{1≤j≤1516} X_j)

and

W_i → (W_i − min_{1≤j≤1516} W_j) / (max_{1≤j≤1516} W_j − min_{1≤j≤1516} W_j).

The transformed data were kernel smoothed using the kernel K(v) = (15/16)(1 − v²)² I(|v| ≤ 1) to produce a density f_FES(x, w; σ), where σ is the bandwidth parameter. The bandwidths are σ = 0.05 and σ = 0.10, respectively, in experiments 6 and 7. This range contains the cross-validation estimate, σ = 0.07, of the optimal bandwidth for estimating the density of the FES data. Numerical evaluation of ρ_J showed that ρ_J ∝ e^{βJ} with β = 3.1 when σ = 0.05 and β = 3.4 when σ = 0.10. Thus, the estimation problem is severely ill-posed in the Design 2 experiments. The experiments use g(x) = Φ[(x − 0.5)/0.24], which mimics the share of household expenditures on some good or service. The dependent variable Y is

(4.3)  Y = E[g(X) | W] + V,

where (X, W) is randomly distributed with the density f_FES(x, w; σ) and V ~ N(0, 0.01) independently of (X, W).
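A minimal sketch of the smoothing step follows. The function names are mine, and boundary effects on [0,1]² are ignored here (the paper does not state how they were handled):

```python
import numpy as np

def K(v):
    """Biweight kernel (15/16)(1 - v^2)^2 on |v| <= 1, as used for Design 2."""
    v = np.asarray(v, dtype=float)
    return np.where(np.abs(v) <= 1.0, (15.0 / 16.0) * (1.0 - v ** 2) ** 2, 0.0)

def f_fes(x, w, X, W, sigma):
    """Product-kernel density estimate of the rescaled data (X_i, W_i)."""
    n = len(X)
    return np.sum(K((x - X) / sigma) * K((w - W) / sigma)) / (n * sigma ** 2)
```

With σ = 0.05 or 0.10 this plays the role of the f_FES(x, w; σ) from which the Design 2 samples are drawn.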

Each experiment consisted of repeating the following procedure 1000 times. A sample of n = 1516 observations of (X, W) was generated by independent random sampling from the distribution whose density is f_FES(x, w; σ). Then 1516 corresponding observations of Y were generated from (4.3). Finally, g was estimated using the methods described in this paper.

The results of the experiments are displayed in Table 1, which shows the empirical means of ‖ĝ_{J_opt} − g‖² and ‖ĝ_Ĵ − g‖², the ratio

R ≡ (Empirical mean of ‖ĝ_Ĵ − g‖²) / (Empirical mean of ‖ĝ_{J_opt} − g‖²),

and B ≡ 2 + (4/3) log n, which is the theoretical asymptotic upper bound on R from inequality (3.1). The results show that the differences between the empirical means of ‖ĝ_{J_opt} − g‖² and ‖ĝ_Ĵ − g‖² are very small in the experiments with Design 1. The differences are larger in the experiments with Design 2 but still small. The ratio R is well within the theoretical bound B in all of the experiments.
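The bound column of Table 1 can be reproduced directly from Theorem 3.2; this is a trivial check, and the rounding to one decimal is mine:

```python
import math

def B(n):
    """Theoretical asymptotic upper bound 2 + (4/3) log n on the ratio R."""
    return 2.0 + (4.0 / 3.0) * math.log(n)

print(round(B(1000), 1))   # Design 1, n = 1000 → 11.2
print(round(B(1516), 1))   # Design 2, n = 1516 → 11.8
```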

5. CONCLUSIONS

This paper has presented, for what appears to be the first time, a theoretically justified,

empirical method for choosing the regularization parameter in nonparametric instrumental

variables estimation. The method does not require a priori knowledge of smoothness or other

unknown population parameters. The method and the resulting estimator of the unknown

function g adapt to the unknown smoothness of g and the density of ( , )X W . The results of

Monte Carlo experiments indicate that the method performs well with samples of practical size.

It is likely that the ideas in this paper can be applied to the multivariate model Y = g(X, Z) + U, E(U | W, Z) = 0, where Z is a continuously distributed, exogenous explanatory variable or vector and W is an instrument for the endogenous variable X. This model is more difficult than (1.1)-(1.2) because it requires selecting at least two regularization parameters, one for X and one or more for the components of Z. The multivariate model will be addressed in future research.


Figure 1: Graph of g(x)


TABLE 1: RESULTS OF MONTE CARLO EXPERIMENTS

Exp't No.  Design  Empirical mean of   Empirical mean of   Ratio of empirical  Theoretical asymptotic
                   ‖ĝ_{J_opt} − g‖²    ‖ĝ_Ĵ − g‖²          means, R            upper bound on R

1          1       0.09600             0.09601             1.0001              11.2
2          1       0.0980              0.1011              1.032               11.2
3          1       0.1005              0.1028              1.023               11.2
4          1       0.0947              0.0950              1.046               11.2
5          1       0.1033              0.1081              1.046               11.2
6          2       0.1206              0.1832              1.518               11.8
7          2       0.1207              0.2642              2.189               11.8


REFERENCES

Ai, C. and X. Chen (2003). Efficient estimation of models with conditional moment restrictions containing unknown functions, Econometrica, 71, 1795-1843.

Bauer, F. and T. Hohage (2005). A Lepskij-type stopping rule for regularized Newton methods, Inverse Problems, 21, 1975-1991.

Blundell, R., X. Chen, and D. Kristensen (2007). Semi-nonparametric IV estimation of shape-invariant Engel curves, Econometrica, 75, 1613-1669.

Blundell, R. and J.L. Horowitz (2007). A non-parametric test of exogeneity, Review of Economic Studies, 74, 1035-1058.

Blundell, R., P. Pashardes, and G. Weber (1993). What do we learn about consumer demand patterns from micro data? American Economic Review, 83, 570-597.

Carrasco, M., J.-P. Florens, and E. Renault (2007). Linear inverse problems in structural econometrics: estimation based on spectral decomposition and regularization, in Handbook of Econometrics, Vol. 6, ed. by E.E. Leamer and J.J. Heckman. Amsterdam: North-Holland, pp. 5634-5751.

Cavalier, L. and N.W. Hengartner (2005). Adaptive estimation for inverse problems with noisy operators, Inverse Problems, 21, 1345-1361.

Chen, X. and D. Pouzo (2008). Estimation of nonparametric conditional moment models with possibly nonsmooth moments, working paper, Department of Economics, Yale University, New Haven, CT.

Chen, X. and D. Pouzo (2009). Efficient estimation of semiparametric conditional moment models with possibly nonsmooth residuals, Journal of Econometrics, 152, 46-60.

Chen, X. and M. Reiss (2011). On rate optimality for ill-posed inverse problems in econometrics, Econometric Theory, 27, 472-521.

Chernozhukov, V., P. Gagliardini, and O. Scaillet (2008). Nonparametric instrumental variable estimation of quantile structural effects, working paper, Department of Economics, Massachusetts Institute of Technology, Cambridge, MA.

Chernozhukov, V., G.W. Imbens, and W.K. Newey (2007). Instrumental variable identification and estimation of nonseparable models via quantile conditions, Journal of Econometrics, 139, 4-14.

Darolles, S., J.-P. Florens, and E. Renault (2006). Nonparametric instrumental regression, working paper, GREMAQ, University of Social Science, Toulouse, France.

Efromovich, S. and V. Koltchinskii (2001). On inverse problems with unknown operators, IEEE Transactions on Information Theory, 47, 2876-2894.

Engl, H.W., M. Hanke, and A. Neubauer (1996). Regularization of Inverse Problems. Dordrecht: Kluwer Academic Publishers.

Florens, J.-P. and A. Simoni (2010). Nonparametric estimation of an instrumental regression: a quasi-Bayesian approach based on regularized posterior, working paper, Department of Decision Sciences, Bocconi University, Milan, Italy.

Hall, P. and J.L. Horowitz (2005). Nonparametric methods for inference in the presence of instrumental variables, Annals of Statistics, 33, 2904-2929.

Horowitz, J.L. (2007). Asymptotic normality of a nonparametric instrumental variables estimator, International Economic Review, 48, 1329-1349.

Horowitz, J.L. (2011). Specification testing in nonparametric instrumental variables estimation, Journal of Econometrics, in press.

Horowitz, J.L. and S. Lee (2007). Nonparametric instrumental variables estimation of a quantile regression model, Econometrica, 75, 1191-1208.

Horowitz, J.L. and S. Lee (2010). Uniform confidence bands for functions estimated nonparametrically with instrumental variables, Cemmap working paper cwp1809, Department of Economics, University College London.

Johannes, J. and M. Schwarz (2010). Adaptive nonparametric instrumental regression by model selection, working paper, Université Catholique de Louvain.

Kress, R. (1999). Linear Integral Equations, 2nd edition, New York: Springer-Verlag.

Loubes, J.-M. and C. Ludeña (2008). Adaptive complexity regularization for inverse problems, Electronic Journal of Statistics, 2, 661-677.

Loubes, J.-M. and C. Marteau (2009). Oracle inequality for instrumental variable regression, working paper, Institute of Mathematics, University of Toulouse 3, France.

Lukas, M.A. (1993). Asymptotic optimality of generalized cross-validation for choosing the regularization parameter, Numerische Mathematik, 66, 41-66.

Lukas, M.A. (1998). Comparisons of parameter choice methods for regularization with discrete, noisy data, Inverse Problems, 14, 161-184.

Marteau, C. (2006). Regularization of inverse problems with unknown operator, Mathematical Methods of Statistics, 15, 415-443.

Marteau, C. (2009). On the stability of the risk hull method, Journal of Statistical Planning and Inference, 139, 1821-1835.

Mathé, P. and S.V. Pereverzev (2003). Geometry of ill posed problems in variable Hilbert scales, Inverse Problems, 19, 789-803.

Newey, W.K. and J.L. Powell (2003). Instrumental variables estimation of nonparametric models, Econometrica, 71, 1565-1578.

Newey, W.K., J.L. Powell, and F. Vella (1999). Nonparametric estimation of triangular simultaneous equations models, Econometrica, 67, 565-603.