
LOCALIZATION FOR LINEARLY EDGE REINFORCED RANDOM WALKS

OMER ANGEL, NICHOLAS CRAWFORD, and GADY KOZMA

Abstract
We prove that the linearly edge reinforced random walk (LRRW) on any graph with bounded degrees is recurrent for sufficiently small initial weights. In contrast, we show that for nonamenable graphs the LRRW is transient for sufficiently large initial weights, thereby establishing a phase transition for the LRRW on nonamenable graphs. While we rely on the equivalence of the LRRW to a mixture of Markov chains, the proof does not use the so-called magic formula which is central to most work on this model. We also derive analogous results for the vertex reinforced jump process.

1. Introduction
The linearly edge reinforced random walk (LRRW) is a model of a self-interacting (and hence non-Markovian) random walk, proposed by Coppersmith and Diaconis [1], and defined as follows. Each edge $e$ of a graph $G = (V,E)$ has an initial weight $a_e > 0$. A starting vertex $v_0$ is given. The walker starts at $v_0$. It examines the weights on the edges around it, normalizes them to be probabilities, and then chooses an edge with these probabilities. The weight of the edge traversed is then increased by 1. (The edge is reinforced.) The process then repeats with the new weights.

The process is called linearly reinforced because the reinforcement is linear in the number of times the edge was crossed. Of course one can imagine many other reinforcement schemes, and those have been studied (see, e.g., [19] for a survey). Linear reinforcement is special because the resulting process is partially exchangeable. This means that if $\alpha$ and $\beta$ are two finite paths such that every edge is crossed exactly the same number of times by $\alpha$ and $\beta$, then they have the same probability (to be the beginning of an LRRW). Only linear reinforcement has this property.
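Partial exchangeability can be checked directly from the reinforcement rule. The following sketch (our own illustration; all names are ours, and the integer weight $a = 2$ keeps the arithmetic exact) computes the probability of a fixed finite path on a three-vertex star and verifies that two paths with identical edge-crossing counts are equally likely:

```python
from fractions import Fraction

def lrrw_path_prob(adj, a, path):
    """Exact probability that the LRRW with constant integer initial
    weight a takes exactly the steps in `path` (path[0] is the start)."""
    N = {}  # undirected edge -> number of crossings so far
    p = Fraction(1)
    for v, w in zip(path, path[1:]):
        num = a + N.get(frozenset((v, w)), 0)
        den = sum(a + N.get(frozenset((v, u)), 0) for u in adj[v])
        p *= Fraction(num, den)
        N[frozenset((v, w))] = N.get(frozenset((v, w)), 0) + 1
    return p

# Star with center 0.  Both paths cross edge {0,1} four times and
# edge {0,2} twice, so partial exchangeability predicts equal probability.
adj = {0: [1, 2], 1: [0], 2: [0]}
pA = lrrw_path_prob(adj, 2, [0, 1, 0, 2, 0, 1, 0])
pB = lrrw_path_prob(adj, 2, [0, 1, 0, 1, 0, 2, 0])
```

Both probabilities come out to $1/12$: the multiset of numerators and denominators appearing along the two paths is the same, which is exactly the mechanism behind partial exchangeability.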

DUKE MATHEMATICAL JOURNAL Vol. 163, No. 5, © 2014 DOI 10.1215/00127094-2644357
Received 8 April 2013. Accepted 6 September 2013.
2010 Mathematics Subject Classification. Primary 60K35; Secondary 60K37.
Angel's work supported in part by NSERC and the Sloan Foundation. Crawford's work supported in part at the Technion by a Landau Fellowship. Kozma's work supported in part by the Israel Science Foundation.


Diaconis and Freedman [3, Theorem 7] showed that a recurrent partially exchangeable process is a mixture of Markov chains. Today the name random walk in random environment is more popular than mixture of Markov chains, but they mean the same thing: there is some measure $\mu$ on the space of Markov chains (known as the "mixing measure") such that the process first picks a Markov chain by using $\mu$ and then walks according to this Markov chain. In particular, this result applies to the LRRW whenever it is recurrent. Here "recurrent" means that it returns to its starting vertex infinitely often. We find this result, despite its simple proof (it follows from de Finetti's theorem for exchangeable processes, itself not a very difficult theorem), to be quite deep. Even for LRRW on a graph with three vertices it gives nontrivial information. For general partially exchangeable processes recurrence is necessary; see [3, Example 19c] for an example of a partially exchangeable process which is not a mixture of Markov chains. For LRRW this cannot happen: it is a mixture of Markov chains even when it is not recurrent (see Theorem 4 below).

On finite graphs, the mixing measure $\mu$ has an explicit expression (see [10]), known fondly as the "magic formula." See [13] for a survey of the formula and the history of its discovery. During the last decade significant effort was invested to understand the magic formula, with the main target the recurrence of the process in two dimensions, a conjecture dating back to the 1980s (see, e.g., [18, Section 6]). Notably, Merkl and Rolles [16] showed, for any fixed $a$, that LRRW on certain "dilute" two-dimensional graphs is recurrent, though the amount of dilution needed increases with $a$. Their approach does not work for $\mathbb{Z}^2$ itself, but requires stretching each edge of the lattice to a path of length 130 (or more). The proof uses the explicit form of the mixing measure and relative entropy arguments, which also lead to the Mermin–Wagner theorem [12]. The latter connection suggests that the methods should not apply in higher dimensions. See also [14] and [17] for further applications of the magic formula.

An interesting variation on this theme is when each directed edge has a weight. When one crosses an edge one increases only the weight in the direction that one has crossed. This process is also partially exchangeable and is also described by a random walk in a random environment. On the plus side, the environment is now independent and identically distributed (i.i.d.), unlike the magic formula, which introduces dependencies between the weights of different edges at all distances. On the minus side, the Markov chain is not reversible, while the Markov chains that come out of the undirected case are reversible. These models are quite different. One of the key (expected) features of LRRW, the existence of a phase transition on $\mathbb{Z}^d$ for $d \ge 3$ from recurrence to transience as $a$ varies, is absent in the directed model (see [20], [9]). In this paper we deal only with the reversible one.

Around 2007 it was noted that the magic formula is remarkably similar to the formulae that appear in supersymmetric hyperbolic $\sigma$-models, which arise in the study of quantum disordered systems; see Disertori, Spencer, and Zirnbauer [5], and further see Efetov [8] for an account of the utility of supersymmetry in disordered systems. Very recently, Sabot and Tarrès [21] managed to make this connection explicit. Since recurrence at high temperature was known for the hyperbolic $\sigma$-model (see [4]), this led to a proof that LRRW is recurrent in any dimension when $a$ is sufficiently small (high temperature in the $\sigma$-model corresponds to small $a$). We will return to Sabot and Tarrès's work later in the introduction and describe it in more detail. However, our approach is very different and does not rely on the magic formula in any way.

Our first result is a general recurrence result.

THEOREM 1
For any $K$ there exists some $a_0 > 0$ such that if $G$ is a graph with all degrees bounded by $K$, then the LRRW on $G$ with all initial weights $a \in (0, a_0)$ is almost surely (a.s.) positive recurrent.

Positive recurrence here means that the walk asymptotically spends a positive fraction of time at any given vertex and has a stationary distribution. In fact, the LRRW is equivalent to a random walk in a certain reversible, dependent random environment (RWRE), as discussed below. We show that this random environment is a.s. positively recurrent. We formulate and prove this theorem for constant initial weights. However, our proof also works if the initial weights are unequal as long as for each edge $e$, the initial weight $a_e$ is at most $a_0$. With minor modifications, our argument can be adapted to the case of random independent $a_e$'s with sufficient control of their moments. See the discussion after the proof of Lemma 8 for details.

Let us stress again that we do not use the explicit form of the magic formula in the proof. We do use that the process has a reversible RWRE description, but we do not need any details of the measure. The main results in [21] are formulated for the graphs $\mathbb{Z}^d$, but Remark 5 of that paper explains how to extend their proof to all bounded degree graphs. Further, even though [21] claims only recurrence, positive recurrence follows from their methods. Thus the main innovation here is the proof technique.

It goes back to [3] that in the recurrent case the weights are uniquely defined after normalizing, say by $W_{v_0} = 1$ (see there also an example of a transient partially exchangeable process where the weights are not uniquely defined). Hence with Theorem 1 the weights are well defined, and we may investigate them. Our next result is that the weights decay exponentially in the graph distance from the starting point. Denote graph distance by $\mathrm{dist}(\cdot,\cdot)$. Also, let $\mathrm{dist}(e, v_0)$ denote the minimal number of edges in any path from $v_0$ to either endpoint of $e$.


THEOREM 2
Let $G$ be a graph with all degrees bounded by $K$, and let $s \in (0, 1/4)$. Then there exists $a_0 = a_0(s, K) > 0$ such that for all $a \in (0, a_0)$, if $e_1$ is the first edge crossed by $X$,
$$\mathbb{E}(W_e)^s \le \mathbb{E}\Big[\Big(\frac{W_e}{W_{e_1}}\Big)^s\Big] \le 2K \big(C(s,K)\sqrt{a}\,\big)^{\mathrm{dist}(e, v_0)}. \tag{1}$$

Note that $a_0$ does not depend on the graph except through the maximal degree. The factor $2K$ on the right-hand side is only relevant, of course, when $\mathrm{dist}(e, v_0) = 0$; otherwise it can be incorporated into the constant inside the power.

The parameter $s$ deserves some explanation. It is interesting to note that despite the exponential spatial decay, each $W_i$ does not have arbitrary moments. Examine, for example, the graph with three vertices and two edges. The case when the initial weight of each edge is 2 and the process starts from the center vertex is particularly famous as it is equivalent to the standard Pólya urn. In this case the weights are distributed uniformly on $[0,1]$. This means that the ratio of the weights of the two edges does not even have a first moment. Of course, this is not quite the quantity we are interested in, as we are interested in the ratio $W_e/W_f$, where $f$ is the first edge crossed. This, though, is the same as an LRRW with initial weights 1, 2 and starting from the side. For initial weights $a$, $1 + a$ for general $a$ one can apply the magic formula to show that the ratio has $1 + a/2$ moments, but not more. It is also known directly that in this generalized Pólya urn the weights have a Beta distribution. (We will not do these calculations here, but they are straightforward.) Hence care is needed with the moments of these ratios.
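The two-edge example can be checked by simulation (our own sketch, not from the paper; names and parameters are ours). On the two-edge star started at the center, each departure through an edge is followed by a return through the same edge, so every decision adds 2 to the chosen edge; with initial weights $(2,2)$ the normalized weight of one edge should converge to a Uniform$[0,1]$ limit, i.e., mean $1/2$ and variance $1/12$:

```python
import random

def polya_fraction(steps, rng):
    """LRRW on a two-edge star started at the center: every decision
    adds 2 to the chosen edge's weight (the walker returns through the
    same edge before the next decision).  With initial weights (2, 2)
    the limiting fraction is Uniform[0, 1]."""
    w = [2.0, 2.0]
    for _ in range(steps):
        # choose an edge proportionally to its current weight
        i = 0 if rng.random() < w[0] / (w[0] + w[1]) else 1
        w[i] += 2.0
    return w[0] / (w[0] + w[1])

rng = random.Random(0)
samples = [polya_fraction(500, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

The empirical mean and variance land close to $1/2$ and $1/12 \approx 0.083$, as the uniform limit predicts.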

Our methods can be easily improved to give a similar bound for $s < 1/3$, with $\sqrt{a}$ replaced by a suitably smaller power. See the discussion after Lemma 8 for details. Our proof can probably be modified to give a $1/2$ moment, and depending on $a$, a bit more. Going beyond that seems to require a new idea.

The most interesting part of the proof of Theorems 2 and 1 is to show Theorem 2 given that the process is already known to be recurrent, and we will present this proof first, in Section 2. Theorem 1 then follows easily by approximating the graph with finite subgraphs, where the LRRW is of course recurrent. This is done in Section 3.

1.1. Transience
An exciting aspect of LRRW is that on some graphs it undergoes a phase transition in the parameter $a$. LRRW on trees was analyzed by Pemantle [18]. He showed that there is a critical $a_c$ such that for initial weights $a < a_c$, the process is positively recurrent, while for $a > a_c$ it is transient. We are not aware of another example where a phase transition was proved, nor even of another case where the process was shown to be transient for any initial weights. The proof of Pemantle relies critically on the tree structure: in that case, when you know that you have exited a vertex through a certain edge, you know that you will return through the very same edge, if you return at all. This decomposes the process into a series of i.i.d. Pólya urns, one for each vertex. Clearly this kind of analysis can only work on a tree.

The next result will show the existence of a transient regime in the case of nonamenable graphs. Recall that a graph $G$ is nonamenable if for some $\epsilon > 0$ and any finite set $A \subset G$,
$$|\partial A| \ge \epsilon |A|,$$
where $\partial A$ is the external vertex boundary of $A$; that is, $\partial A = \{x : \mathrm{dist}(x, A) = 1\}$. The largest constant $\epsilon$ for which this holds is called the Cheeger constant of the graph $G$. (Note that since our graphs have bounded degrees, this is equivalent to nonamenability defined using the edge boundaries, except for the value of the Cheeger constant.)

THEOREM 3
For any $K, c_0 > 0$ there exists $a_0$ so that the following holds. Let $G$ be a graph with Cheeger constant at least $c_0$ and degree bound $K$. Then for $a > a_0$ the LRRW on $G$ with initial weight $a$ on all edges is transient.

Theorem 3 will be proved in Section 4. As with Theorem 1, our proof works with nonequal initial weights, provided that $a_e > a_0$ for all $e$. It is tempting to push for stronger results by considering graphs $G$ with intermediate expansion properties such that the simple random walk on $G$ is transient. In this context, let us give some examples where LRRW has no transient regime.

(1) The canopy graph is the graph $\mathbb{Z}_+$ with a finite binary tree of depth $n$ attached at vertex $n$. It is often poetically described as "an infinite binary tree seen from the leaves." Since the process must leave each finite tree eventually, the process on the "backbone" is identical to an LRRW on $\mathbb{Z}_+$, which is recurrent for all $a$ (say from [2] or from the techniques of [18]).

(2) Let $T$ be the infinite binary tree. Replace each edge on the $n$th level by a path of length $n^2$. The random walk on the resulting graph is transient (by a simple resistance calculation; see, e.g., [7]). Nevertheless, LRRW is recurrent for any value of $a$. This is because LRRW on $\mathbb{Z}_+$ has the expected weights decaying exponentially (again from the techniques of [18]), and this decay wins the fight against the stretched exponential growth of the levels. Recurrence in this case also follows from the results of Lyons and Pemantle [11] regarding RWRE on trees.


(3) These two examples can be combined (a stretched binary tree with finite decorations) to give a transient graph with exponential growth on which LRRW is recurrent for any $a$.

We will not give more details on any of these examples as that would take us off topic, but they are all more-or-less easy to do. The proof of Theorem 3 again uses that the process has a dual representation, as both a self-interacting random walk and as an RWRE. This might be a good point to reiterate that the LRRW is a mixture of Markov chains even in the transient case (the counterexample of Diaconis and Freedman is a partially exchangeable process, but that process is not an LRRW). This was proved by Merkl and Rolles [15], who developed a tightness result for this purpose which will also be useful for us. Let us state a version of their result in the form we will need here, along with the original result of Diaconis and Freedman.

THEOREM 4 (see [3], [15])
Fix a finite graph $G = (V,E)$ and initial weights $a = (a_e)_{e \in E}$ with $a_e > 0$. Then $X$ is a mixture of Markov chains in the following sense: there exists a unique probability measure $\mu$ on $\Omega$ so that
$$\mathbb{P} = \int P_W \, d\mu(W)$$
is an identity of measures on infinite paths in $G$. With respect to (w.r.t.) $\mu$, $W(e) > 0$ a.s. for all $e$.

Moreover, the $W_v$'s form a tight family of random variables: if we set $W_v = \sum_{e \ni v} W_e$ and $a_v = \sum_{e \ni v} a_e$, there are constants $c_1, c_2$ depending only on $a_v, a_e$ so that
$$\mu(W_e/W_v \le \varepsilon) \le c_1 \varepsilon^{a_e/2} \quad\text{and}\quad \mu(W_e/W_v \ge 1 - \varepsilon) \le c_2 \varepsilon^{(a_v - a_e)/2}.$$

As noted, we mainly need the existence of such a representation of the LRRW, as well as the tightness of the $W$'s, but not the explicit bounds.

It is natural to conjecture that in fact a phase transition exists on $\mathbb{Z}^d$ for all $d \ge 3$, analogously to the phase transition for the supersymmetric $\sigma$-model. What happens in $d = 2$? We believe it is recurrent for all $a$, but dare not guess whether it enjoys a Kosterlitz–Thouless transition or not.

1.2. Back to Sabot and Tarrès
The main objects of study for Sabot and Tarrès [21] are vertex reinforced jump processes (VRJP). Unlike LRRW, this is a continuous time process, with reinforcement acting through the local times of vertices. One begins with a positive function $J = (J_e)_{e \in E}$ on the edges of $G$. These are the initial rates for the VRJP $Y_t$ and are analogous to the initial weights $a$ of LRRW. Let $(L_x(t))_{x \in V}$ be the local times for $Y$ at time $t$ and vertex $x$. If $Y_t = x$, then $Y$ jumps to a neighbor $y$ with rate $J_{xy}(1 + L_y(t))$ (see [21] for history and additional references for this process).
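The jump rule just described is easy to simulate exactly (an illustrative sketch of our own; the graph and parameters are arbitrary). A useful observation: while the process sits at $x$, only $L_x$ grows, and $L_x$ does not enter the rates out of $x$, so the rates are constant between jumps and the waiting times are genuinely exponential:

```python
import random

def simulate_vrjp(J, T, x0=0, seed=0):
    """Simulate the VRJP on vertices 0..n-1 up to time T, where J[x][y]
    is the initial rate J_xy (0 if x and y are not neighbors).
    From x, the process jumps to y with rate J_xy * (1 + L_y(t)).
    Returns the vector of local times at time T."""
    rng = random.Random(seed)
    n = len(J)
    L = [0.0] * n
    x, t = x0, 0.0
    while True:
        # rates out of x are constant until the next jump (see above)
        rates = [J[x][y] * (1.0 + L[y]) for y in range(n)]
        total = sum(rates)
        wait = rng.expovariate(total)
        if t + wait >= T:
            L[x] += T - t
            return L
        L[x] += wait
        t += wait
        # pick the target vertex proportionally to the rates
        u, acc = rng.random() * total, 0.0
        for y in range(n):
            acc += rates[y]
            if rates[y] > 0 and u <= acc:
                x = y
                break

# three-vertex path with unit initial rates; local times must sum to T
J_path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
L = simulate_vrjp(J_path, 10.0)
```

A basic invariant to check is that the local times always add up to the elapsed time.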

The VRJP shares a key property with the LRRW: a certain form of partial exchangeability after applying a certain time change. This suggests that it too has a RWRE description. (Diaconis and Freedman [3] only consider discrete time processes, but their ideas should carry over.) Such a RWRE description indeed exists and was found by Sabot and Tarrès by other methods. The existence of such a form (though not the formula) is fundamental for our proof. Their main results are the following: first, the law of LRRW with initial weight $a$ is identical to the time-discretization of $Y_t$ when $J$ is i.i.d. with marginal distribution $\Gamma(a, 1)$. Second, after a time change, $Y_t$ is identical to a mixture of continuous-time reversible Markov chains. Moreover, the mixing measure is exactly the marginal of the supersymmetric hyperbolic $\sigma$-models studied in [4] and [5]. This is analogous to the magic formula for the LRRW. Finally, as already mentioned, this allowed them to harness the techniques of [4] and [5] to prove that LRRW is recurrent for small $a$ in all dimensions.

Since the VRJP has both a dynamic and a RWRE representation, our methods apply to this model too. Thus we show the following.

THEOREM 5
Let $G$ be a fixed graph with degree bound $K$. Let $J = (J_e)_{e \in E}$ be a family of independent initial rates with
$$\mathbb{E} J_e^{1/5} < c(K),$$
where $c(K)$ is a constant depending only on $K$. Then (a.s. with respect to $J$), $Y_t$ is recurrent.

In particular this holds for any fixed, sufficiently small $J$. We formulate this result for random $J$ because of the relation to LRRW explained above; we could have also proved the LRRW results for random $a$, but we do not see a clear motivation to do so. We will prove this in Section 5, where we will also give more precise information about the dependency of $c$ on $K$.

Next we wish to present a result on the exponential decay of the weights, an analogue of Theorem 2. To make the statement more meaningful, let us describe the RWRE (which, we remind, is not $Y_t$ but a time change of it). We are given a random function $W$ on the vertices of our graph $G$. The process is then a standard continuous-time random walk which moves from vertex $i$ to vertex $j$ with rate $\frac{1}{2} J_{ij} W_j / W_i$.


THEOREM 6
If the $J_e$ are independent, with $\mathbb{E} J_e^{1/5} < c(K)$, then for almost every (a.e.) $J$, the infinite volume mixing measure exists and under the joint measure, for any vertex $v \in G$,
$$\mathbb{E} W_v^{1/5} < 2K \cdot 5^{-\mathrm{dist}(v_0, v)}.$$

In particular, this implies that the time-discretization of $Y$ is positively recurrent and not just recurrent. For any $s < 1/5$, bounds on $\mathbb{E} J^s$ yield an estimate on $\mathbb{E} W_v^s$ and Theorem 5.

It is interesting to note that the proofs of Theorems 5 and 6 are simpler than those of Theorems 1 and 2, even though the techniques were developed to tackle LRRW. The reader will note that for VRJP, each edge can be handled without interference from adjoining edges at the same vertex, halving the length of the proof. Is there some inherent reason? Is VRJP (or the supersymmetric $\sigma$-model) more basic in some sense? We are not sure.

1.3. Notations
In this paper, $G$ will always denote a graph with bounded degrees and $K$ a bound on these degrees. The set of edges of $G$ will be denoted by $E$. The initial weights will always be denoted by $a_e$, and when the notation $a$ is used, it is implicitly assumed that $a_e = a$ for all edges $e \in E$.

Let us define the LRRW again, this time using the notations we will use for the proofs. Suppose we have constructed the first $k$ steps of the walk, $x_0, \dots, x_k$. For each edge $e \in E$, let
$$N_k(e) = \big|\{ j < k : e = \langle X_j, X_{j+1} \rangle \}\big|$$
be the number of times that the undirected edge $e$ has been traversed up to time $k$. Then each edge $e$ incident on $X_k$ is used for the next step with probability proportional to $a + N_k(e)$; that is, if $X_k = v$, then
$$\mathbb{P}(X_{k+1} = w \mid X_0, \dots, X_k) = \frac{a + N_k((v,w))}{d_v a + N_k(v)} \mathbf{1}\{v \sim w\},$$
where $d_v$ is the degree of $v$, $N_k(v)$ denotes the sum of $N_k(e)$ over edges incident to $v$, and $\sim$ is the neighborhood relation; that is, $v \sim w \iff \langle v, w \rangle \in E$. It is crucial to remember that $N_k$ counts traversals of the undirected edges. We stress this now because at some points in the proof we will count oriented traversals of certain edges.
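The transition rule above translates directly into a simulation (our own illustrative sketch; names are ours). Note that the traversal counts are keyed by undirected edges, matching the definition of $N_k$:

```python
import random

def lrrw(adj, a, v0, steps, seed=0):
    """Run `steps` steps of the LRRW on the graph given by the adjacency
    list `adj`, with constant initial weight a, started at v0.
    N counts traversals of *undirected* edges, as stressed above."""
    rng = random.Random(seed)
    N = {}  # frozenset({v, w}) -> number of traversals
    v, path = v0, [v0]
    for _ in range(steps):
        # edge e = {v, w} is chosen with probability prop. to a + N_k(e)
        weights = [a + N.get(frozenset((v, w)), 0) for w in adj[v]]
        total = sum(weights)
        u, acc = rng.random() * total, 0.0
        for w, wt in zip(adj[v], weights):
            acc += wt
            if u <= acc:
                break
        N[frozenset((v, w))] = N.get(frozenset((v, w)), 0) + 1
        v = w
        path.append(v)
    return path

adj = {0: [1], 1: [0, 2], 2: [1]}
path = lrrw(adj, 1.0, 0, 100)
```

Every consecutive pair in the returned trajectory is an edge of the graph, and reinforcement is applied to the undirected edge regardless of the direction of the step.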

While all graphs we use are undirected, it is sometimes convenient to think of edges as being directed. Each edge $e$ has a head $e^+$ and tail $e^-$. The reverse of the edge is denoted $e^{-1}$. A path of length $n$ in $G$ may then be defined as a sequence of edges $e_1, \dots, e_n$ such that $e_i^+ = e_{i+1}^-$ for all $i$. Vice versa, if $v$ and $w$ are two vertices, $\langle v, w \rangle$ will denote the edge whose two vertices are $v$ and $w$.

By $W$ we will denote a function from $E$ to $[0, \infty)$ ("the weights"). We will write $W_e$ instead of $W(e)$, and for a vertex $v$ we will denote
$$W_v := \sum_{e \ni v} W_e.$$
The space of all such $W$ will be denoted by $\Omega$, and measures on it typically by $\mu$. The $\mu$ which describes our process (whether unique or not) will be called "the mixing measure."

Given $W$ we define a Markov process with law $P_W$ on the vertices of $G$ as follows. The probability to transition from $v$ to $w$, denoted $P_W(v,w)$, is $W_{\langle v,w \rangle}/W_v$. For a given $\mu$ on $\Omega$, the RWRE corresponding to $\mu$ is a process on the vertices of $G$ given by
$$\mathbb{P}(X_0 = v_0, \dots, X_n = v_n) = \int \prod_{i=1}^n P_W(X_{i-1}, X_i) \, d\mu(W).$$
This process will always be denoted by $X$.
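For a fixed environment $W$, the integrand above is just a product of weighted-walk transition probabilities; a minimal sketch (our own names) computes it:

```python
def path_probability(W, adj, path):
    """P_W-probability of a fixed vertex path: the product of
    W_{<v,w>} / W_v over consecutive steps.  Here the environment W is
    fixed; integrating over the mixing measure would give the RWRE law."""
    p = 1.0
    for v, w in zip(path, path[1:]):
        W_v = sum(W[frozenset((v, u))] for u in adj[v])  # sum of incident weights
        p *= W[frozenset((v, w))] / W_v
    return p

# three-vertex path graph: P_W(0,1) = 1 and P_W(1,0) = 2/3
adj = {0: [1], 1: [0, 2], 2: [1]}
W = {frozenset((0, 1)): 2.0, frozenset((1, 2)): 1.0}
p = path_probability(W, adj, [0, 1, 0])
```

Here the chain is reversible with respect to the vertex weights $W_v$, which is the reversibility used throughout the paper.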

Definition
A process is recurrent if it returns to every vertex infinitely many times.

This is the most convenient definition of recurrence for us. It is formally different from the definition of [3] we quoted earlier, but by the results of [15] quoted above they are in fact equivalent for LRRW.

We also use the following standard notation: For two vertices $v$ and $w$ we define $\mathrm{dist}(v, w)$ as the length of the shortest path connecting them (or 0 if $v = w$). For an edge $e$ we define $\mathrm{dist}(v, e) = \min\{\mathrm{dist}(v, e^-), \mathrm{dist}(v, e^+)\}$. For a set of vertices $A$ we define $\partial A = \{x : \mathrm{dist}(x, A) = 1\}$, that is, the external vertex boundary. By $X \overset{D}{=} Y$ we denote that the variables $X$ and $Y$ have the same distribution. $\mathrm{Bern}(p)$ will denote a Bernoulli variable with probability $p$ to take the value 1 and $1 - p$ to take the value 0, and $\mathrm{Exp}(\lambda)$ will denote an exponential random variable with expectation $1/\lambda$. We will denote constants whose precise value is not so important by $c$ and $C$, where $c$ will be used for constants sufficiently small, and $C$ for constants sufficiently large. The values of $c$ and $C$ (even if they are not absolute but depend on parameters, such as $C(K)$) might change from one place to another. By $x \approx y$ we denote $cx \le y \le Cx$.


2. Proof of Theorem 2
In this section we give the most interesting part of the proof of Theorem 2, showing exponential decay assuming a priori that the process is recurrent. We give an upper bound which depends only on the maximal degree $K$. In the next section we apply this result to a sequence of finite approximations of $G$ to prove recurrence for the whole graph and complete the proof.

Before we begin, we need to introduce a few notations. For any directed edge $e = (e^-, e^+)$ that is traversed by the walk, let $e'$ be the edge through which $e^-$ is first reached. In particular, $e'$ is traversed before $e$. If $e$ is traversed before its inverse $e^{-1}$, then $e'$ is distinct from $e$ as an undirected edge. Iterating this construction to get $e'', e'''$, and so forth yields a path $\gamma = \{\dots, e'', e', e\}$ starting with the last departure from $v_0$ before reaching $e$, and terminating with $e$. We call $\gamma$ the path of domination of $e$. This path is either a simple path, or a simple loop in the case that $e^+$ is the starting vertex $v_0$. In the former case $\gamma$ is the backward loop erasure of the LRRW. All edges in the path are traversed before their corresponding inverses. Let $D_\gamma$ be the event that the deterministic path $\gamma$ is the path of domination corresponding to the final edge of $\gamma$.

For an edge $e$ with $e' \ne e^{-1}$, let $Q(e)$ be an estimate for $W_e/W_{e'}$ defined as follows: if $e$ is crossed before $f := (e')^{-1}$, then set $M_e$ to be the number of times $e$ is crossed before $f$, and set $M_f = 1$. If $f$ is crossed before $e$, then set $M_f$ to be the number of times $f$ is crossed before $e$, and set $M_e = 1$. In both cases we count crossings as directed edges. In other words, we only count crossings that start from $e^-$, the common vertex of $e$ and $f$ (in which case the walker chooses between them). Then
$$Q(e) := \frac{M_e}{M_f}$$
is our estimate for $W_e/W_{e'}$. Thus to find $Q(e)$ we wait until the LRRW has left $e^-$ along both $e$ and $f$, and take the ratio of the number of departures along $e$ and along $f$ at that time. Note again that we do not include transitions along $e^{-1}$ or $f^{-1} = e'$, so that by definition one of the two numbers $M_e$ and $M_f$ must be 1.

With $Q$ defined we can start the calculation. Recall that $e_1$ is the first edge crossed by the walk. Suppose $x$ is some edge of $G$ and we want to estimate $W_x/W_{e_1}$. We fix $x$ for the remainder of the proof. Let $\Gamma = \Gamma_x$ denote the set of possible values for the path of domination, that is, the set of simple paths or loops whose first edge is one of the edges coming out of $v_0$ and whose last edge is either $x$ or $x^{-1}$ (depending on which is crossed first).

Split the probability space according to the value of the path of domination:
$$\mathbb{E}\Big[\Big(\frac{W_x}{W_{e_1}}\Big)^s\Big] = \sum_{\gamma \in \Gamma} \mathbb{E}\Big[\Big(\frac{W_x}{W_{e_1}}\Big)^s \mathbf{1}\{D_\gamma\}\Big].$$


Naturally, under $D_\gamma$, $e_1$ must be the first edge of $\gamma$. We remind the reader that we assume a priori that our process is recurrent. This has two implications: first, $x$ will a.s. be visited eventually, and so the path of domination is well defined. Second, the weights are unique, so $W_x/W_{e_1}$ is a well-defined variable on our probability space.

Given the weights $W$, let $R$ be the actual ratios along the path of domination, so for $e \in \gamma \setminus \{e_1\}$,
$$R(e) = \frac{W_e}{W_{e'}}.$$

On the event $D_\gamma$ for $\gamma$ fixed, we may telescope $W_x/W_{e_1}$ via
$$\frac{W_x}{W_{e_1}} = \prod_{e \in \gamma \setminus e_1} R(e) = \prod_{e \in \gamma \setminus e_1} \frac{R(e)}{Q(e)} \prod_{e \in \gamma \setminus e_1} Q(e).$$

An application of the Cauchy–Schwarz inequality then gives
$$\mathbb{E}\Big[\Big(\frac{W_x}{W_{e_1}}\Big)^s \mathbf{1}\{D_\gamma\}\Big] \le \mathbb{E}\Big[\prod_{e \in \gamma \setminus e_1}\Big(\frac{R(e)}{Q(e)}\Big)^{2s} \mathbf{1}\{D_\gamma\}\Big]^{1/2} \, \mathbb{E}\Big[\prod_{e \in \gamma \setminus e_1} Q(e)^{2s} \mathbf{1}\{D_\gamma\}\Big]^{1/2}. \tag{2}$$

Theorem 2 then essentially boils down to the following two lemmas.

LEMMA 7
For any graph $G$, any starting vertex $v_0$, and any $a \in (0, \infty)$ such that the LRRW on $G$ starting from $v_0$ with initial weights $a$ is recurrent, for any edge $x \in E$, any $\gamma \in \Gamma_x$, and any $s \in (0, 1)$,
$$\mathbb{E}\Big[\prod_{e \in \gamma \setminus e_1}\Big(\frac{R(e)}{Q(e)}\Big)^s \mathbf{1}\{D_\gamma\}\Big] \le C(s)^{|\gamma| - 1},$$
where $C(s)$ is some constant that depends on $s$ but not on $G$, $v_0$, $a$, or anything else.

LEMMA 8
For any graph $G$ with degrees bounded by $K$, any starting vertex $v_0$, and any $a \in (0, \infty)$ such that LRRW on $G$ starting from $v_0$ with initial weights $a$ is recurrent, for any edge $x \in G$, any $\gamma \in \Gamma_x$, and any $s \in (0, 1/2)$,
$$\mathbb{E}\Big[\prod_{e \in \gamma \setminus e_1} Q(e)^s \mathbf{1}\{D_\gamma\}\Big] \le \big(C(s,K) a\big)^{|\gamma| - 1},$$
where $C(s,K)$ is a constant depending only on $s, K$.


In these two lemmas, it is not difficult to make $C(s)$ and $C(s,K)$ explicit. Following the proof gives $C(s) = O(\frac{1}{1-s})$ and $C(s,K) = O(\frac{K}{1-2s} + \frac{1}{s})$. However, there seems to be little reason to be interested in the $s$-dependency. We will apply the lemmas with some fixed $s$, say $1/4$. We do not have particularly strong feelings about the $K$-dependency either. It is worth noting that Lemma 7 is proved by using the random environment point of view on the LRRW, while Lemma 8 is proved by considering the reinforcements. Thus both views are central to our proof. One may wonder whether $s$ is needed in Theorem 2, and if higher moments of $W_e$ also decay exponentially. We note that Theorem 2 can be extended to $s < 1/3$ simply by using Hölder's inequality instead of the Cauchy–Schwarz inequality. In this vein, removing the assumption $s < 1$ in Lemma 7 would give exponential decay of $\mathbb{E} W_e^s$ for any $s < 1/2$. However, that is the limit of our approach, since $s = 1/2$ is the best possible in Lemma 8.

Proof of Lemma 7
For this lemma the RWRE point of view is used, as it must be, since the weights $W$ appear in the statement, via $R$. Our first step is to throw away the event $\mathbf{1}\{D_\gamma\}$, that is, to write
$$\mathbb{E}\Big[\prod_{e \in \gamma \setminus e_1}\Big(\frac{R(e)}{Q(e)}\Big)^s \mathbf{1}\{D_\gamma\}\Big] \le \mathbb{E}\Big[\prod_{e \in \gamma \setminus e_1}\Big(\frac{R_\gamma(e)}{Q_\gamma(e)}\Big)^s\Big], \tag{3}$$

where the terms on the right-hand side are as follows: for an edge $e \in \gamma$ preceded by $f \in \gamma$, set $R_\gamma(e) = W_e/W_f$. For such $e, f$, let $M_{\gamma,e}$ and $M_{\gamma,f}$ be the number of exits from $e^-$ until $e$ and $f^{-1}$ are both used, and $Q_\gamma(e) = M_{\gamma,e}/M_{\gamma,f}$. Clearly, on $D_\gamma$ we have $R(e) = R_\gamma(e)$ and $Q(e) = Q_\gamma(e)$, and so (3) is justified.

This step seems rather wasteful, as heuristically one can expect to lose a factor of $K^{|\gamma|}$ from simply ignoring a condition like $D_\gamma$. But because our eventual result (Theorem 2) has a $C(s,K)^{|\gamma|}$ term, this will not matter. Since $\gamma$ is fixed, from this point until the end of the proof of the lemma we will denote $R = R_\gamma$ and $Q = Q_\gamma$.

At this point, and until the end of the lemma, we fix one realization of the weights $W$ and condition on it being chosen. This conditioning makes the $R(e)$'s just numbers, while the $Q(e)$'s become independent. Indeed, given $W$, the random walk in the random environment can be constructed by associating with each vertex $v$ a sequence $Z_n^v$ of i.i.d. edges incident to $v$ with law $W_e / \sum_{e \ni v} W_e$. If the walk is recurrent, then $v$ is reached infinitely often, and the entire sequence is used in the construction of the walk. If we fix an edge $e$ and let $f = (e')^{-1}$, then $M_e$ (resp., $M_f$) is the number of appearances of $e$ (resp., $f$) in the sequence $\{Z^v\}$ up to the first time that $e$ and $f$ have both appeared. As a consequence, since the sequences for different vertices are independent, we get that conditioned on the environment $W$, the estimates $Q(e)$ for $W_e/W_f$ for pairs incident to different vertices are all independent.


Thus to prove our lemma it suffices to show that for any two edges $e,f$ leaving some vertex $v$, with $M_e$ and $M_f$ defined as above, we have
$$\mathbb{E}\Big[\Big(\frac{W_e}{W_f}\cdot\frac{M_f}{M_e}\Big)^s\,\Big|\,W\Big]\le C(s).$$

We now show that this holds uniformly in the environment. First, observe that entries other than $e,f$ in the sequence of i.i.d. edges at $v$ have no effect on the law of $M_e$ and $M_f$, so we may assume without loss of generality that $e$ and $f$ are the only edges coming out of $e^-$. Denote the probability of $e$ by $p$ and of $f$ by $q=1-p$ (for some $p\in(0,1)$). Now we have for $n\ge1$ that the probability that $e$ appears $n$ times before $f$ is $p^nq$, and similarly for $f$ before $e$ with the roles of $p$ and $q$ reversed. Thus

$$\mathbb{E}\Big[\Big(\frac{W_e}{W_f}\cdot\frac{M_f}{M_e}\Big)^s\,\Big|\,W\Big]=\underbrace{\Big(\frac{p}{q}\Big)^s}_{(W_e/W_f)^s}\Big[\underbrace{\sum_{k\ge1}k^s p q^k}_{f\text{ first}}+\underbrace{\sum_{k\ge1}k^{-s} q p^k}_{e\text{ first}}\Big].$$

It is a straightforward calculation to see that this is bounded for $|s|<1$. The first term is the $s$-moment of a $\mathrm{Geom}(p)$ random variable, which is of order $p^{-s}q$, and with the prefactor comes to $q^{1-s}$. The second term is the $(-s)$-moment of a $\mathrm{Geom}(q)$ random variable, which is, for $s\in(0,1)$, of order $pq^s$, and with the prefactor gives $p^{1+s}$. Thus

$$\mathbb{E}\Big[\Big(\frac{W_e}{W_f}\cdot\frac{M_f}{M_e}\Big)^s\,\Big|\,W\Big]\le q^{1-s}+p^{1+s}\le C(s)$$

(again using $s\in(0,1)$). This finishes the lemma.
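The uniformity of this bound in $p$ is easy to check numerically. The following sketch is ours, not from the paper (the truncation level and the grid of $p$ values are arbitrary choices); it evaluates the series for $s=1/4$:

```python
def moment_bound(p, s=0.25, terms=20000):
    """Truncated evaluation of (p/q)^s * [ sum_k k^s p q^k + sum_k k^(-s) q p^k ]."""
    q = 1.0 - p
    f_first = sum(k ** s * p * q ** k for k in range(1, terms))     # f appears first
    e_first = sum(k ** (-s) * q * p ** k for k in range(1, terms))  # e appears first
    return (p / q) ** s * (f_first + e_first)

vals = [moment_bound(p) for p in (0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99)]
```

In line with the estimate $q^{1-s}+p^{1+s}$, the values stay bounded (below 2.5, say) over the whole range of $p$, including the endpoints where one of the two series dominates.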

Now we move on to the proof of Lemma 8.

Proof of Lemma 8
Fix a path $\sigma$. We shall construct a coupling of the LRRW together with a collection of i.i.d. random variables $\tilde Q(e)$, associated with the edges of $\sigma$ (except $e_1$), such that on the event $D_\sigma$, for every edge $e\in\sigma\setminus e_1$ we have $Q(e)\le\tilde Q(e)$, and such that for $s<1/2$,
$$\mathbb{E}\tilde Q(e)^s\le C(s,K)a.$$

The claim would follow immediately, because
$$\mathbb{E}\prod Q(e)^s\mathbf{1}\{D_\sigma\}\le \mathbb{E}\prod \tilde Q(e)^s\mathbf{1}\{D_\sigma\}\le \mathbb{E}\prod \tilde Q(e)^s=\prod \mathbb{E}\tilde Q(e)^s\le\prod C(s,K)a=\big(C(s,K)a\big)^{|\sigma|-1}.$$


The remarks in the previous proof about "waste" are just as applicable here, since in the second inequality we also threw away the event $D_\sigma$. Note that we cannot start by eliminating the restriction, since we only prove $Q\le\tilde Q$ on the event $D_\sigma$.

Let us first describe the random variables $\tilde Q$. Estimating their moments is then a straightforward exercise. Next we will construct the coupling, and finally we shall verify that $Q(e)\le\tilde Q(e)$. For an edge $e=(e^-,e^+)$ of $\sigma$, we construct two sequences of Bernoulli random variables (both implicitly depending on $e$). For $j\ge0$, consider Bernoulli random variables

$$Y_j=\mathrm{Bern}\Big(\frac{a}{j+1+2a}\Big),\qquad Y'_j=\mathrm{Bern}\Big(\frac{1+a}{2j+1+Ka}\Big),$$

where $\mathrm{Bern}(p)$ is a random variable that takes the value 1 with probability $p$ and 0 with probability $1-p$. All $Y$ and $Y'$ variables are independent of each other and of those associated with other edges in $\sigma$. In the context of the event $D_\sigma$, we think of $Y'_0$ as the event that decides which of $e$ and $f$ is crossed first. For $j\ge1$, think of $Y_j$ as telling us whether on the $j$th visit to $e^-$ we depart along $e$ (or, more precisely, an upper bound for this event). Similarly, $Y'_j$ is a lower bound, telling us whether we depart along $f=(e')^{-1}$. If the origin of these probabilities is unclear, we ask the reader to bear with us until we provide Cases 4 and 5 below. This leads to the definition

$$\tilde Q=\tilde M_e/\tilde M_f,$$
where
$$\tilde M_e=\min\{j\ge1: Y'_j=1\}\quad\text{and}\quad\tilde M_f=1,\qquad\text{if }Y'_0=0;$$
$$\tilde M_f=\min\{j\ge1: Y_j=1\}\quad\text{and}\quad\tilde M_e=1,\qquad\text{if }Y'_0=1.$$

Moment estimation. To estimate $\mathbb{E}\tilde Q^s$ we note that
$$\mathbb{P}\big(Y'_0=0,\ \tilde M_e=n\big)=\frac{(K-1)a}{1+Ka}\cdot\frac{1+a}{2n+1+Ka}\prod_{j=1}^{n-1}\Big(1-\frac{1+a}{2j+1+Ka}\Big).$$

The first two terms we estimate by
$$\frac{(K-1)a}{1+Ka}\cdot\frac{1+a}{2n+1+Ka}\le\frac{Ka}{1+Ka}\cdot\frac{1+Ka}{2n}=\frac{Ka}{2n},$$

while for the product we note that for any $a>0$,
$$\frac{1+a}{2j+1+Ka}\ge\min\Big\{\frac{1}{2j+1},\frac{1}{K}\Big\}.$$

Putting these together we get


$$\mathbb{P}\big(Y'_0=0,\ \tilde M_e=n\big)\le\frac{Ka}{2n}\prod_{j=1}^{n-1}\exp\Big(-\frac{1}{2j}+O(j^{-2})\Big)=\frac{Ka}{2n}\exp\Big(-\frac12\log n+O(1)\Big)\le C(K)\,an^{-3/2}.$$

Here, the implicit constants depend only on $K$. Thus for $s<1/2$,
$$\mathbb{E}\big(\tilde Q(e)^s\mathbf{1}\{Y'_0=0\}\big)\le\sum_{n\ge1}n^s\,\mathbb{P}\big(Y'_0=0,\ \tilde M_e=n\big)\le\sum_{n\ge1}C(K)\,an^{s-3/2}\le C(s,K)a.$$

(This is the main place where the assumption $s<1/2$ is used.) For the case $Y'_0=1$ we write
$$\mathbb{P}\big(Y'_0=1,\ \tilde M_f=n\big)\le\mathbb{P}(Y_n=1)\le\frac{a}{n},$$

and so
$$\mathbb{E}\big(\tilde Q(e)^s\mathbf{1}\{Y'_0=1\}\big)\le\sum_{n\ge1}an^{-1-s}<C(s)a.$$

Together we find $\mathbb{E}\tilde Q(e)^s\le C(s,K)a$ as claimed.

The coupling. Here we use the linear reinforcement point of view of the walk. We consider the Bernoulli variables as already given, and construct the LRRW as a function of them (and some additional randomness). Suppose we have already constructed the first $t$ steps of the LRRW, are at some vertex $v$, and need to select the next edge of the walk. There are several cases to consider.

Case 0. $v\notin\sigma$: In this case we choose the next edge according to the reinforced edge weights, independently of all the $Y$ and $Y'$ variables.

Case 1. $v\in\sigma$ and the LRRW so far is not consistent with $D_\sigma$: We may safely disregard the $Y$ variables, as nothing is claimed in this case. This case occurs if some edge $e\in\sigma$ is traversed only after its inverse is, or if the first arrival at $e^-$ was not through the edge preceding $e$ in $\sigma$.

Case 2. $v\in\sigma$, and $Q$ is already determined: If $v=e^-$ for $e\in\sigma$, and both $e$ and $(e')^{-1}$ have already been traversed, then $Q(e)$ is determined by the path so far. Again, we disregard the $Y$ variables.

For the remaining cases, we may assume that $D_\sigma$ is consistent with the LRRW path so far, and that $v=e^-$ for some $e\in\sigma\setminus e_1$. As before, let $f=(e')^{-1}$. If we are not in Case 2, then one of $e,f$ has not yet been traversed.


Case 3. This is the first arrival to $v$. In this case the weights of all edges incident to $v$ are still $a$, except for $f$, which has weight $1+a$. Thus the probability of exiting along $f$ is
$$\frac{1+a}{1+d_va}\ge\frac{1+a}{1+Ka},$$
where as usual $d_v$ is the degree of the vertex $v$. Thus we can couple the LRRW step so that if $Y'_0=1$, then the walk exits along $f$ (and occasionally also if $Y'_0=0$).

Case 4. Later visits to $v$ when $Y'_0=0$. Suppose this is the $n$th visit to $v$. The weight of $f$ is at least $1+a$, since we first entered $v$ through $f^{-1}$. The total weight of edges at $v$ is $2n-1+d_va$. Thus the probability of the LRRW exiting through $f$ is
$$\frac{N_t(f)+a}{2n-1+d_va}\ge\frac{1+a}{2n-1+Ka},$$
where $t$ is the time of this visit to $v$, and $N_t(f)$ is, as usual, the number of crossings of the edge $f$ up to time $t$. Thus we can couple the LRRW step so that if $Y'_{n-1}=1$, then the walk exits along $f$ (and occasionally also if $Y'_{n-1}=0$).

Case 5. Later visits to $v$ with $Y'_0=1$. Here, $f$ was traversed on the first visit to $v$. Since we are not in Case 2, $e$ has not yet been traversed. Since we are not in Case 1, neither has $e^{-1}$, and so $e$ still has weight $a$. In this case, we first decide with the appropriate probabilities, independently of the $Y$'s, whether to use one of $\{e,f\}$ or another edge from $v$. If we decide to use another edge, we ignore the $Y$ variables. If we decide to use one of $\{e,f\}$, and this is the $n$th time this occurs, then $N_t(f)\ge n$ (since we have so far only chosen $f$ in these cases, and have also used $f^{-1}$ at least once). Thus the probability that we select $e$ from $\{e,f\}$ is
$$\frac{a}{N_t(f)+2a}\le\frac{a}{n+2a}.$$

Thus we can couple the LRRW step so that if $Y_{n-1}=0$, then we select $f$.

Domination of $\tilde Q(e)$. We now check that on $D_\sigma$ we have $Q(e)\le\tilde Q(e)$. Assume $D_\sigma$ occurs. When $Y'_0=0$ the coupling considers only the $Y'$ variables, one at each visit to $v$ until $f$ is used. If $n$ is minimal such that $Y'_n=1$, then $f$ is used on the $(n+1)$st visit to $v$ or earlier. Thus
$$Q(e)\le M_e\le n=\tilde Q(e).$$

Note that if $Y'_0=0$, it is possible that the walk uses $f$ before $e$, and then $Q(e)\le1$. If $Y'_0=1$, then the coupling considers only the $Y$'s, one at each time that either $e$ or $f$ is traversed (and not at every visit to $v$). If $n$ is minimal such that $Y_n=1$, then $f$ is used at least $n$ times before $e$. Thus


$$Q(e)=\frac{1}{M_f}\le\frac{1}{n}=\tilde Q(e),$$
and in both cases $Q(e)\le\tilde Q(e)$, completing the proof.
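The moment estimation in the proof above can be checked by summing the two branches of the law of $\tilde Q$ directly. The sketch below is illustrative only (the function name, the choices $K=3$ and $s=1/4$, and the truncation level are our own):

```python
def EQs(a, K=3, s=0.25, N=200_000):
    """Truncated evaluation of E[Qtilde^s] from the two branches of the coupling."""
    # Branch Y'_0 = 0: Qtilde = Mtilde_e = min{j >= 1 : Y'_j = 1}.
    p0 = (K - 1) * a / (1 + K * a)            # P(Y'_0 = 0)
    surv, s0 = 1.0, 0.0
    for n in range(1, N):
        pn = (1 + a) / (2 * n + 1 + K * a)    # P(Y'_n = 1)
        s0 += surv * pn * n ** s
        surv *= 1.0 - pn
    # Branch Y'_0 = 1: Qtilde = 1 / Mtilde_f, with Mtilde_f = min{j >= 1 : Y_j = 1}.
    p1 = (1 + a) / (1 + K * a)                # P(Y'_0 = 1)
    surv, s1 = 1.0, 0.0
    for n in range(1, N):
        yn = a / (n + 1 + 2 * a)              # P(Y_n = 1)
        s1 += surv * yn * n ** (-s)
        surv *= 1.0 - yn
    return p0 * s0 + p1 * s1
```

Evaluating at a few values of $a$ shows $\mathbb{E}\tilde Q^s$ increasing in $a$ and of order $a$, as the lemma asserts.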

Let us remark briefly on how this argument changes if the initial weights $a_e$ are not all equal, or possibly random. To get the domination $Q(e)\le\tilde Q(e)$ the variables $Y_j$ and $Y'_j$ are defined differently. Setting $a_v=\sum_{e\ni v}a_e$, we use
$$Y_j=\mathrm{Bern}\Big(\frac{a_e}{j+1+a_e+a_f}\Big),\qquad Y'_j=\mathrm{Bern}\Big(\frac{1+a_f}{2j+1+a_v}\Big).$$

This changes the moment estimation, and instead of $Ca$ we get $C(a_v+a_v^{1+s})$. If the $a_e$'s are all sufficiently small there is no further difficulty. If the $a_e$'s are random, this introduces dependencies between edges, and some higher moments must be controlled.

COROLLARY

Theorem 2 holds on graphs where the LRRW is recurrent.

Proof
This is just an aggregation of the results of this section:
$$\mathbb{E}\Big(\frac{W_x}{W_{e_1}}\Big)^s=\sum_\sigma\mathbb{E}\Big[\Big(\frac{W_x}{W_{e_1}}\Big)^s\mathbf{1}\{D_\sigma\}\Big]$$
which, by (2), is at most
$$\sum_\sigma\Big[\mathbb{E}\prod_{e\in\sigma\setminus e_1}\Big(\frac{R(e)}{Q(e)}\Big)^{2s}\mathbf{1}\{D_\sigma\}\Big]^{1/2}\Big[\mathbb{E}\prod_{e\in\sigma\setminus e_1}Q(e)^{2s}\mathbf{1}\{D_\sigma\}\Big]^{1/2}$$
and, by Lemmas 7 and 8, at most
$$\sum_\sigma\big(C(2s)^{|\sigma|-1}\big)^{1/2}\big((C(2s,K)a)^{|\sigma|-1}\big)^{1/2}=\sum_\sigma\big(C_0\sqrt{a}\big)^{|\sigma|-1}$$
for $C_0=\sqrt{C(2s)C(2s,K)}$. Now, the number of paths of length $\ell$ is at most $K^\ell$, and the shortest path to $e$ has length $\mathrm{dist}(e,v_0)+1$. If we take $a_0$ such that $KC_0\sqrt{a_0}=1/2$, then for $a<a_0$ longer paths give at most a factor of 2, and so
$$\mathbb{E}\Big(\frac{W_x}{W_{e_1}}\Big)^s\le 2K\big(KC_0\sqrt{a}\big)^{\mathrm{dist}(e,v_0)}.$$


3. Recurrence on bounded degree graphs for small values of a
We are now ready to prove Theorem 1. As noted above, the main idea is to approximate the LRRW on $G$ by the LRRW on finite balls in $G$. Let $R>0$, and let $X^{(R)}$ be the LRRW in the finite volume ball $B_R(v_0)$. By Theorem 4, for each $R$ the full distribution of $X^{(R)}$ is given by a mixture of random conductance models with edge weights denoted by $(W^{(R)}_e)_{e\in B_R(v_0)}$ and mixing measure denoted by $\mu^{(R)}$. Recall that $\Omega=\mathbb{R}_+^E$ is the configuration space of edge weights on the entire graph $G$. For fixed $a$, the measures $\mu^{(R)}$ form a sequence of measures on $\Omega$, equipped with the product Borel $\sigma$-algebra.

Proof of Theorem 1
Theorem 2 (or 4 if you prefer) implies that the measures $\mu^{(R)}$ are tight, and so have subsequential limits. Let $\mu$ be one such subsequential limit. Clearly the law of the random walk in environment $W$ is continuous in the weights. Moreover, the first $R$ steps of the LRRW on $B_R(v_0)$ have the same law as the first $R$ steps of the LRRW on $G$. Thus the LRRW on $G$ is the mixture of random walks on weighted graphs with the mixing measure being $\mu$.

Fix some $s<1/4$ for the remainder of the proof. For any edge $e\in B_R(v_0)$ we have, by Markov's inequality, that
$$\mu^{(R)}\big(W^{(R)}_e>Q\big)\le\frac{2K(C\sqrt{a})^{\mathrm{dist}(e,v_0)}}{Q^s},$$

where $C=C(s,K)$ comes from Theorem 2, which we have already proved for recurrent graphs. Taking $Q=(2K)^{-\mathrm{dist}(e,v_0)}$, and since the number of edges at distance $\ell$ from $v_0$ is at most $K^{\ell+1}$, we find that the probability of having an edge at distance $\ell$ with $W_e>(2K)^{-\ell}$ is at most
$$K^{\ell+1}(2K)^{s\ell}\cdot2K(C\sqrt{a})^\ell=2K^2\big(2^sK^{1+s}C\sqrt{a}\big)^\ell.$$

Let $a_0$ be such that $2^sK^{1+s}C\sqrt{a_0}=1/2$. Then for $a<a_0$ this last probability is at most $K^22^{1-\ell}$. Crucially, this bound holds uniformly for all $\mu^{(R)}$, and hence also for $\mu$. By the Borel–Cantelli lemma, it follows that $\mu$-a.s., for all but a finite number of edges we have the bound $W_e\le(2K)^{-\mathrm{dist}(e,v_0)}$. On this event the random network $(G,W)$ has finite total edge weight $W=\sum_eW_e$, and therefore the random walk on the network is positive recurrent, with stationary measure $\pi(v)=W_v/2W$.
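The final claim can be illustrated concretely: when the total edge weight $W=\sum_eW_e$ is finite, the measure $\pi(v)=W_v/2W$ (where $W_v$ sums the weights of edges at $v$) is stationary for the weighted walk. A quick sanity check on a small graph with hypothetical weights (our own toy example, not from the paper):

```python
# Stationary measure pi(v) = W_v / (2W) for the random walk with edge weights,
# checked on a small triangle-with-tail graph with illustrative weights.
edges = {(0, 1): 0.5, (1, 2): 0.25, (0, 2): 0.125, (2, 3): 0.0625}
verts = {v for e in edges for v in e}
Wv = {v: sum(w for e, w in edges.items() if v in e) for v in verts}
W = sum(edges.values())
pi = {v: Wv[v] / (2 * W) for v in verts}

def P(u, v):
    """One-step transition probability of the weighted walk."""
    w = edges.get((u, v)) or edges.get((v, u)) or 0.0
    return w / Wv[u]

assert abs(sum(pi.values()) - 1.0) < 1e-12
for v in verts:                       # check stationarity: sum_u pi(u) P(u,v) = pi(v)
    flow = sum(pi[u] * P(u, v) for u in verts)
    assert abs(flow - pi[v]) < 1e-12
```

The check is exact: $\pi(u)P(u,v)=w_{uv}/2W$, and summing over the neighbors $u$ of $v$ returns $W_v/2W=\pi(v)$.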

3.1. Trapping and return times on graphs
With the main recurrence results, Theorems 1 and 2, proved, we wish to make a remark about the notions of "localization" and "exponential decay." We would like to point out that one should be careful when translating them from disordered quantum systems into the language of hyperbolic nonlinear $\sigma$-models and the LRRW. To


illustrate this point, let $G$ be a graph with degrees bounded by $K$, and consider the behavior of the tail of the return time to the initial vertex $v_0$. Let
$$\tau=\min\{t>0: X_t=v_0\}.$$

PROPOSITION 9
Let $K$ be fixed. Suppose that $G$ is a connected graph with degree bound $K$ and some edge not incident to $v_0$. Then there is a constant $c(a,K)>0$ so that
$$\mathbb{P}(\tau>M)\ge c(a,K)\,M^{-(K-1)a-1/2}.$$

Thus despite the exponential decay of the weights, the return time has a heavy tail, and has at most $(K-1)a+1/2$ finite moments.

Proof
There is some edge $e$ at distance 1 from $v_0$. Consider the event $E_M$ that the LRRW moves towards $e$ and then crosses $e$ back and forth $2M$ times. On this event, $\tau>2M$. The probability of this event is at least
$$\mathbb{P}(E_M)\ge\frac{1}{K}\Big[\prod_{0\le j<M}\frac{2j+a}{2j+1+Ka}\Big]\Big[\prod_{0\le j<M}\frac{2j+1+a}{2j+1+Ka}\Big]=\frac{1}{K}\prod_{0\le j<M}\Big(1-\frac{2(K-1)a+1}{2j+1}+O\Big(\frac{1}{j^2}\Big)\Big)\ge c(a,K)\,M^{-(K-1)a-1/2}.$$

One might claim that we have only shown this for the first return time. But in fact, moving to the RWRE point of view shows that this is a phenomenon that can occur at any time. Indeed, if $\mathbb{P}(\tau>M)\ge\varepsilon$, then this means that with probability $\ge\varepsilon/2$ the environment $W$ satisfies $\mathbb{P}(\tau>M\,|\,W)\ge\varepsilon/2$. But of course, once one conditions on the environment, one has stationarity, so the probability that the $k$th excursion length (call it $\tau_k$) is bigger than $M$ is also $\ge\varepsilon/2$. Returning to unconditioned results gives
$$\mathbb{P}(\tau_k>M)\ge\tfrac14\varepsilon^2=c(a,K)\,M^{-2(K-1)a-1}\qquad\text{for all }k.$$

This is quite crude, of course, but shows that the phenomenon of polynomial decay of the return times is preserved for all times.
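The polynomial decay rate in Proposition 9 can be observed numerically by evaluating the product bound for $\mathbb{P}(E_M)$ and estimating its logarithmic slope. The code below is an illustration, not part of the proof; the values $a=1$, $K=3$ are arbitrary:

```python
import math

def log_PEM(M, a, K):
    """log of the lower bound (1/K) * prod_{0<=j<M} (2j+a)(2j+1+a) / (2j+1+Ka)^2."""
    s = -math.log(K)
    for j in range(M):
        s += math.log(2 * j + a) + math.log(2 * j + 1 + a) - 2 * math.log(2 * j + 1 + K * a)
    return s

a, K = 1.0, 3
slope = (log_PEM(4000, a, K) - log_PEM(2000, a, K)) / math.log(2)
# the slope should approach the predicted exponent -((K-1)a + 1/2)
```

Doubling $M$ and measuring the change of the logarithm on a base-2 scale recovers the exponent $-(K-1)a-1/2$ up to an $O(1/M)$ error.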

3.2. Reminiscences
Let us remark on how we arrived at the proofs of these results. We started with a very simple heuristic picture, explained to us by Tom Spencer: when the initial weights are very


small, the process gets stuck on the first edge for a very long time. A simple calculation shows that this does not hold forever, but that at some point the walk breaks out to a new edge. It then runs over these two edges, roughly preserving the weight ratio, until breaking out to a third edge, roughly uniform, and so on.

Thus our initial approach to the problem was to try to show, using only the LRRW picture, that the weights stabilize. This is really the point about linearity: only linear reinforcement causes the ratios of weights of different edges to converge to some value. One direction that failed was to show that this is really independent of the rest of the graph. We tried to prove that "whatever the rest of the graph does, the ratio of the weights of two edges is unlikely to move far, once both have already been visited enough times." So we emulated the rest of the graph by an adversary (which, essentially, decides through which edge to return to the common vertex), and tried to prove that no adversary can change the weight ratio. This, however, is not true. A simple calculation shows that an adversary which simply always returns the walker through edge $e$ will beat any initial advantage the other edge, $f$, might have had.

Trying to somehow move away from a general adversary, we had the idea that the RWRE picture allows us to restrict the abilities of the adversary, which finally led to the proof above (which has no adversary, of course). We still think it would be interesting to construct a pure LRRW proof, as it might be relevant to phase transitions for other self-interacting random walks, which are currently wide open.

4. Transience on nonamenable graphs
In this section we prove Theorem 3. The proof is ideologically similar to that of Theorem 2, and in particular we prove the main lemmas under the assumption that the process is recurrent. There this seemed natural; here it might seem strange, but after reading Section 3 perhaps not so much: in the end we will apply these lemmas to finite subgraphs of our graph. We will then use compactness, that is, Theorem 4, to find a subsequential limit on $G$ which also describes the law of the LRRW on $G$.

Let us therefore begin with these lemmas, which lead to a kind of "local stability" of the environment $W$ when the initial weights $a$ are uniformly large.

As in the case of small $a$, our stability argument has two parts. We use the dynamic description to argue that the walk typically uses all edges leaving a vertex roughly the same number of times (assuming it enters the vertex sufficiently many times). Such vertices are called balanced. We then use the random conductance viewpoint to argue that exits from a vertex typically give a good approximation for the conductances of the edges containing the vertex, and call such vertices faithful. We deduce that the random weights are typically close to each other. Finally, we use a percolation argument based on the geometry of the graph to deduce a.s. transience.


4.1. Local stability if the LRRW is recurrent
Let us assume throughout this subsection that the graph $G$ and the weights $a$ are given, and that they are such that the LRRW is recurrent.

Let $L$ be some parameter which we will fix later (and which will depend on the graph and on $\varepsilon,\delta$, but not on $a$). The main difference between the proofs here and those in Section 2 is that here we will examine the process until it leaves a given vertex at least $L$ times through each edge, rather than once. Let therefore $\tau=\tau(L,v)$ denote the number of visits to $v$ until the LRRW has exited $v$ at least $L$ times along each edge $e\ni v$. Note that since we assume that the LRRW is recurrent, these stopping times are a.s. finite. Let $M(e)=M(L,v,e)$ denote the total number of outgoing crossings from $v$ along $e$ up to and including the $\tau$th exit. Given the environment $W$ and a random walk in that environment, call a vertex $v$ $\varepsilon$-faithful if $M(e)/\tau$ is close to its asymptotic value of $W_e/W_v$; that is,
$$\frac{M(e)/\tau}{W_e/W_v}\in[1-\varepsilon,1+\varepsilon]\qquad\text{for all }e\ni v.$$

LEMMA 10
Let the degree bound $K$ be fixed. Then for any $\varepsilon,\delta>0$ there exists $L=L(K,\varepsilon,\delta)$ so that
$$\mathbb{P}_W(v\text{ is not }\varepsilon\text{-faithful})<\delta.$$
Moreover, these events are independent under $\mathbb{P}_W$.

A crucial aspect of this lemma is that $L$ does not depend on the environment $W$. A consequence is that for this $L$, the $\varepsilon$-faithful vertices stochastically dominate $1-\delta$ site percolation on $G$ for any $\mathbb{P}_W$, and hence also for the mixture $\mathbb{P}$. (Recall that one random set $A$ stochastically dominates another random set $B$ if it is possible to construct the two in the same probability space so that a.s. $B\subseteq A$. It is straightforward that if $\{x\in A\}$ are independent events with probability at least $p$, then $A$ stochastically dominates Bernoulli site percolation with parameter $p$. It is also easy to see that if $A$ dominates $B$ conditioned on some information, then it also dominates $B$ unconditionally.)

The second step is to use the dynamical description of the LRRW to show that if $a$ is large, then the $M(L,v,e)$ are likely to be roughly equal. Let $S(L,v)=\max_{e\ni v}M(L,v,e)$ be the number of exits from $v$ along the most commonly used edge. Given the walk, we call a vertex $\varepsilon$-balanced if $S(L,v)\le(1+\varepsilon)L$. Note that by definition, $S(L,v)\ge M(L,v,e)\ge L$ for all $e,v$, so for balanced vertices $S(L,v)$ is fairly constrained.


LEMMA 11
For any $K,\varepsilon,\delta$ there is an $L_0=L_0(K,\varepsilon,\delta)$ such that for any $L>L_0$ there is some $a_0$ so that for any $a>a_0$ for which the LRRW on $G$ with initial weight $a$ is recurrent, we have
$$\mathbb{P}(v\text{ is not }\varepsilon\text{-balanced})<\delta.$$
Moreover, for such $a$, the $\varepsilon$-balanced vertices stochastically dominate $1-\delta$ site percolation.

As with Lemmas 7 and 8, these lemmas are proved by considering, respectively, the random environment and the reinforced walk representations. Finally, call a vertex $\varepsilon$-good (or just good) if it is both $\varepsilon$-faithful and $\varepsilon$-balanced. Otherwise, call the vertex bad. Note that if $v$ is good, then the weights of edges leaving $v$ differ by a factor of at most $\frac{1+\varepsilon}{1-\varepsilon}\le(1-\varepsilon)^{-2}$.

COROLLARY 12
For any $K,\varepsilon,\delta$, for any large enough $L$, and for any large enough $a$, the set of $\varepsilon$-good vertices stochastically dominates the intersection (vertices contained in both) of two Bernoulli site percolation configurations with parameter $1-\delta$.

Unfortunately, these two percolation processes are not independent of each other, so we cannot merge them into one $(1-\delta)^2$ percolation. Indeed, even conditioning on the state of a single vertex in one of the two makes the other configuration hopelessly far from independent.

Proof of Lemma 10
Given $W$, the exits from $v$ are a sequence of i.i.d. random variables taking $d_v\le K$ different values with probabilities $p(e)=W_e/W_v$. Let $A_n(e)$ be the location of the $n$th appearance of $e$ in this sequence, so that $A_n$ is the cumulative sum of independent $\mathrm{Geom}(p)$ random variables. By the strong law of large numbers, there is some $L$ so that with arbitrarily high probability (say, $1-\delta$), we have
$$\Big|\frac{pA_n}{n}-1\Big|<\varepsilon\quad\text{for all }n\ge L.\qquad(4)$$

We now claim that $L$ can be chosen uniformly in $p$ (for $\varepsilon$ and $\delta$ fixed). This can be proved by using a quantified form of the law of large numbers, say based on the Laplace transform, which leads to a straightforward but messy computation which we leave to the reader. Instead we may use continuity in $p$: indeed, the only possible discontinuity is at zero, and $pA_n$ converges as $p\to0$ to a sum of i.i.d. exponentials,


and so with this interpretation the same claim holds also for $p=0$. Since the probability of a large deviation at some large time for a sum of i.i.d. random variables is upper semicontinuous in the distribution of the steps, and since upper semicontinuous functions on $[0,1]$ are bounded from above, we get the necessary uniform bound.

We now move back from the language of $A_n$ to counting exits at a specific time $t$. Denote by $M_t(e)$ the number of exits from $v$ through $e$ in the first $t$ visits to $v$. If $t\in[A_n,A_{n+1})$, then $M_t=n$ and hence, for $t\ge A_L$ and with probability $>1-\delta$,
$$\frac{M_t(e)}{pt}\in\Big(\frac{n}{pA_{n+1}},\frac{n}{pA_n}\Big]\overset{(4)}{\subseteq}[1-\varepsilon',1+\varepsilon']$$
(where $\varepsilon'$ is some function of $\varepsilon$ with $\varepsilon'\to0$ as $\varepsilon\to0$). Since $\tau\ge A_L(e)$ for all edges $e$ coming out of $v$, we get $M(e)/p\tau\in[1-\varepsilon',1+\varepsilon']$ for all $e$, with probability $>1-K\delta$. Replacing $\varepsilon'$ with $\varepsilon$ and $K\delta$ with $\delta$ proves the lemma.

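The law of large numbers step (4) is easy to visualize in simulation: sums of i.i.d. geometric variables concentrate so that $pA_n/n\approx1$ for all large $n$ simultaneously. A rough Monte Carlo sketch (our own parameters and thresholds, for illustration only):

```python
import random

def geom(p, rng):
    """Number of i.i.d. Bernoulli(p) trials until the first success."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(0)
p, L, trials = 0.3, 2000, 200
bad = 0
for _ in range(trials):
    A, ok = 0, True
    for n in range(1, L + 1):
        A += geom(p, rng)
        if n >= 500 and abs(p * A / n - 1.0) >= 0.15:  # check (4) from n = 500 on
            ok = False
    bad += 0 if ok else 1
failure_rate = bad / trials
```

The failure rate is small for this particular $p$; the uniformity of $L$ in $p$, which is the real content of the claim, is of course not established by a simulation.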
Proof of Lemma 11
Here we rely on the reinforced random walk point of view. When $a$ is large, the reinforcement cannot change the exit probabilities by much before a vertex has been visited many times, which allows us to deduce that the first few exits are close to uniform.

Instead of considering only a single vertex, we prove the stochastic domination directly. To this end, we show that any $v$ is likely to be $\varepsilon$-balanced even if we condition on arbitrary events occurring at other vertices. At the $i$th visit to $v$, the probability of exiting via any edge $e$ is at least $a/(d_va+2i-1)$. Throughout the first $T$ visits to $v$, this is at least $1/d_v-2T/(d_v^2a)$. Since this bound is uniform in the trajectory of the walk, we find that the number of exits along an edge $e$ stochastically dominates a $\mathrm{Bin}\big(T,\frac{1}{d_v}-\frac{2T}{d_v^2a}\big)$ random variable, even if we condition on any event outside of $v$.

We take $T=\lceil(1+\varepsilon)d_vL\rceil$. If $a$ is large enough ($CL/\varepsilon$ suffices), then the binomial has expectation at least $L+\frac12\varepsilon L$. Since it has variance at most $T\le2KL$, if $L$ is sufficiently large then the binomial is very likely to be at least $L$. In summary, given $\delta$ and $\varepsilon$ we can find some large $L$ so that for any large enough $a$, with probability at least $1-\delta$ there are at least $L$ exits along any given edge $e$ up to time $T$. This occurs for all edges $e\ni v$ with probability at least $1-K\delta$.

Finally, notice that if all edges are exited at least $L$ times, then this accounts for $d_vL$ exits, and only $T-d_vL$ exits remain unaccounted for. Even if they all happen at the same edge, that edge would still have only $L+\varepsilon d_vL$ exits, hence $S\le L+\varepsilon d_vL$. Therefore $v$ is $\varepsilon d_v$-balanced with probability at least $1-K\delta$. Changing $\varepsilon$ and $\delta$ gives the lemma.
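The binomial estimate in this proof can be made concrete. In the sketch below, all parameter values are our own illustrative choices, with $C=100$ standing in for the unspecified constant in "$a\ge CL/\varepsilon$ suffices"; we check that the expectation exceeds $L+\varepsilon L/2$ and that the lower tail below $L$ is tiny:

```python
import math

def binom_tail_below(T, p, L):
    """P(Bin(T, p) < L), computed exactly via log-binomial coefficients."""
    logp, log1p = math.log(p), math.log(1.0 - p)
    total = 0.0
    for k in range(L):
        total += math.exp(math.lgamma(T + 1) - math.lgamma(k + 1) - math.lgamma(T - k + 1)
                          + k * logp + (T - k) * log1p)
    return total

eps, d, L = 0.5, 4, 200
T = math.ceil((1 + eps) * d * L)      # number of visits considered
a = 100 * L / eps                     # initial weight; a >= C L / eps with C = 100
p_hat = 1.0 / d - 2.0 * T / (d * d * a)
mean = T * p_hat
```

With these values the mean sits several standard deviations above $L$, so the probability that some fixed edge receives fewer than $L$ exits is negligible.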


4.2. Application to infinite graphs
Let $G_n$ denote the ball of radius $n$ in $G=(V,E)$ around the initial vertex $v_0$. Let $\mu_n$ denote the mixing measure guaranteed by the first half of Theorem 4. Further, let $P_n$ be the sequence of coupling measures guaranteed by Corollary 12.

According to the second half of Theorem 4, the measures $\mu_n$ are tight. Quite obviously the remaining marginals of $P_n$ are also tight. Therefore $P_n$ is tight. Thus we can always pass to a subsequential limit $P$ so that the first marginal is a mixing measure for the LRRW on $G$ and the conclusion of Corollary 12 holds. We record this in a proposition.

PROPOSITION 13
For any $K,\varepsilon,\delta$, for any large enough $L$, and for any large enough $a$, the following holds. For any weak limit $\mu$ of finite volume mixing measures there is a coupling so that the set of $\varepsilon$-good vertices (with respect to $\mu$) stochastically dominates the intersection of two Bernoulli site percolation configurations with parameter $1-\delta$.

We now use a Peierls argument to deduce transience for large enough $a$. We shall use two results concerning nonamenable graphs. The standard literature uses edge boundaries, so let us give the necessary definitions: we define the edge boundary of a set by $\partial_E(A)=\{(x,y)\in E: x\in A,\ y\notin A\}$, $\mathrm{Vol}\,A=\sum_{v\in A}d_v$, and
$$\iota_E=\inf_{A\subseteq G,\,\mathrm{Vol}\,A<\infty}\frac{|\partial_E(A)|}{\mathrm{Vol}\,A}.$$

Clearly
$$\iota=\inf\frac{|\partial A|}{|A|}\ge\inf\frac{|\partial_EA|/K}{K\,\mathrm{Vol}\,A}=\frac{1}{K^2}\,\iota_E,$$
and similarly $\iota\le K^2\iota_E$. The first result that we will use is Cheeger's inequality.

THEOREM 14
If $G$ is nonamenable, then the return probabilities for the random walk on $G$ satisfy $p_n(0,0)\le Ce^{-\beta n}$, where $\beta>0$ depends only on the Cheeger constant $\iota_E(G)$.

Cheeger proved this for manifolds (see, e.g., [6] for a proof in the case of graphs). A second result we use is due to Virág [22, Proposition 3.3]. Recall that the anchored expansion constant is defined as
$$\alpha(G)=\lim_{n\to\infty}\inf_{|A|\ge n}\frac{|\partial_EA|}{\mathrm{Vol}(A)},$$
where the infimum ranges over connected sets $A$ containing a fixed vertex $v$. It is easy to see that $\alpha$ is independent of the choice of $v$.


PROPOSITION 15 (see [22])
Every infinite graph $G$ with anchored expansion constant $\alpha$ contains, for every $\varepsilon>0$, an infinite subgraph with edge Cheeger constant at least $\alpha-\varepsilon$.

Proof of Theorem 3
The main idea is to use Lemmas 10 and 11 to show that for almost all weights $W$, $G$ contains a nonamenable subgraph where the weights of any two intersecting edges are close. We then deduce that the random walk on the subgraph with weights $W$ is transient, and by Rayleigh monotonicity, so is the walk on all of $G$.

There are several parameters that need to be set in the proof. To help the reader follow the logic, let us briefly summarize them and the order in which their values must be chosen. The graph $G$ determines the degree bound $K$ and the Cheeger constant $\iota(G)$. The probability $\delta$ that a vertex is balanced (or faithful) is chosen so as to get large enough anchored expansion in the set of good vertices. This determines, via Theorem 14, the value of $\beta$ (controlling the heat kernel decay) for the subgraph. This in turn determines how small $\varepsilon$ (the level of balance and fidelity) needs to be for the random walk to be transient. Finally, given $\varepsilon$ and $\delta$ we take $L$ large enough to satisfy Lemmas 10 and 11, and the minimal $a$ is determined from Lemma 11. We now proceed to the actual proof.

Let $\varepsilon$ be some small number to be determined later, and let $G_\varepsilon$ denote the set of $\varepsilon$-good vertices (and also the induced subgraph). For any set $F$ with boundary $\partial F$, the $\varepsilon$-bad vertices in $\partial F$ are a union of two Bernoulli percolations, each of which is exponentially unlikely to have size greater than $\frac14|\partial F|$, provided that $\delta<1/4$. Specifically,
$$\mathbb{P}\Big(|G_\varepsilon\cap\partial F|\le\frac12|\partial F|\Big)\le e^{-c|\partial F|}\le e^{-c\iota|F|},$$

where $c=c(\delta)$ tends to $\infty$ as $\delta\to0$, and $\iota>0$ is the Cheeger constant of the graph. The number of connected sets $F$ of size $n$ containing the fixed origin $o$ is at most $2^{Kn}$. (This is true in any graph with degree bound $K$; the maximum is easily seen to happen on a $K$-regular tree, on which the set can be identified by its boundary, which is just $Kn$ choices of whether to go up or down.) By the Borel–Cantelli lemma, if $\delta$ is small, so that $c(\delta)>K(\log2)/\iota$, then a.s. only finitely many such sets $F$ have bad vertices for half their boundary.

Taking such a $\delta$, we find that $G_\varepsilon$ contains an infinite cluster $C$ which has "vertex anchored expansion" at least $\iota(G)/2$. Moving to edge anchored expansion costs a factor of $K^2$, and we get $\alpha\ge\iota/2K^2$. By Proposition 15 we find that $C$ contains a subgraph $H$ with $\iota_E(H)>\iota(G)/3K^2$.

By Theorem 14, the simple random walk on $H$ has exponentially decaying return probabilities. However, since all vertices of $H$ are $\varepsilon$-good, edges incident on vertices


of $H$ have weights within a factor $(1-\varepsilon)^{-2}$ of equal. Thus the random walk on the weighted graph $(H,W)$ is close to the simple random walk on $H$. Specifically, letting $p^W$ denote the heat kernel for the $W$-weighted random walk restricted to $H$, we find that
$$p^W_n(0,0)\le(1-\varepsilon)^{-2n}p_n(0,0)\le C(1-\varepsilon)^{-2n}e^{-\beta n},$$
where $\beta$ is some constant depending only on $G$. In particular, for $\varepsilon$ small enough, these return probabilities are summable and the walk is transient.

Finally, by Rayleigh monotonicity (see, e.g., [7]), $(G,W)$ is transient as well. Thus if the conclusions of Lemmas 10 and 11 hold for some $\varepsilon,\delta$, then the process is transient. We now take $L$ large enough to satisfy Lemmas 10 and 11, and the minimal $a$ is determined from Lemma 11.

5. Vertex reinforced jump process
In this section we apply our basic methods to the VRJP models mentioned in the introduction. This demonstrates the flexibility of the approach and gives a second proof of the recurrence of the LRRW, based on the embedding of the LRRW in the VRJP with initial rates $J$ i.i.d. with marginal distribution $\Gamma(a,1)$.

5.1. Times they are a-changin'
As explained in the introduction, the VRJP has a dynamic and an RWRE description, related by a time change. Let us give the details. The dynamic version we will denote by $Y_t$, and its local time at a vertex $x$ by $L_x(t)$, so that $t=\sum_xL_x(t)$. Recall that $Y_t$ moves from $x$ to $y$ with rate $J_{x,y}(1+L_y(t))$.

The RWRE picture is defined in terms of a positive function $W=(W_v)_{v\in V}$ on the vertices. Given $W$ we will denote by $Z=(Z^W_s)$ the random walk in continuous time that jumps from $x$ to $y$ with rate $\frac12J_{xy}W_y/W_x$. We will denote the local time of $Z$ by $M_x(s)$, and again $s=\sum_xM_x(s)$. When some random choice of $W$ is clear from the context, we will denote by $Z_s$ the mixed process.

The time change relating $s$ and $t$ is then given by the relation $M_x=L_x^2+2L_x$, or equivalently $L_x=\sqrt{1+M_x}-1$. Summing over all vertices gives a relation between $s$ and $t$. Since the local times increase only at the presently occupied vertex, this gives the equivalent relations $ds=2(1+L_{Y_t}(t))\,dt$ and $dt=ds/2\sqrt{1+M_{Z_s}(s)}$.
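The two time scales are tied together by elementary identities, which can be sanity-checked numerically (illustrative code, not part of the argument):

```python
import math

# M = L^2 + 2L  <=>  L = sqrt(1 + M) - 1, and dM/dL = 2(1 + L),
# matching ds = 2(1 + L) dt and dt = ds / (2 sqrt(1 + M)).
for L in (0.0, 0.5, 1.0, 3.7):
    M = L * L + 2 * L
    assert abs(math.sqrt(1 + M) - 1 - L) < 1e-12       # inverse relation
    h = 1e-6
    dM_dL = ((L + h) ** 2 + 2 * (L + h) - M) / h       # numerical derivative
    assert abs(dM_dL - 2 * (1 + L)) < 1e-4
    assert abs(2 * math.sqrt(1 + M) - 2 * (1 + L)) < 1e-12  # the two rates agree
```

This is just the chain rule for the time change; the point is that $2\sqrt{1+M_x}$ and $2(1+L_x)$ are the same quantity expressed on the two clocks.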

THEOREM 16 (see [21])
On any finite graph there exists a random environment $W$ so that $(Z_s)$ is the time change of $(Y_t)$ given above.

For the convenience of the reader, here is a little table comparing our notation with that of [21]:


Here: $J$ | $W$ | $L$
[21]: $W$ | $e^U$ | $L-1$

In fact, Sabot and Tarrès also give an explicit formula for the law of the environment $W$. However, as with the LRRW, we do not require the formula for this law, but only that it exists.

One more way to think of all this is that the process has two clocks measuring the occupation time at each vertex. From the VRJP viewpoint, the process jumps from $i$ to $j$ at rate $J_{ij}(1+L_j(t))\,dt$ (i.e., with respect to the $L$'s). From the RWRE viewpoint, the process jumps at rate $\frac{1}{2} J_{ij} W_j/W_i\,ds$. Theorem 16 states that with the above relation between $ds$ and $dt$, these two descriptions give the same law for the trajectories.

It is interesting to also describe the process in terms of both the local times and reinforcement. Using the $L_x$ local times, and given the environment, we find that the process jumps from $i$ to $j$ at rate $J_{ij}(W_j/W_i)(1+L_i)\,dt$. Note that here it is $L_i$ and not $L_j$ that controls the jump rates. Similarly, we can get a reinforcement description for $Z$: given the local times $M_x$, the process jumps from $i$ to $j$ at rate $J_{ij}\sqrt{(1+M_j)/(1+M_i)}\,ds$. This gives a description of $Z$ with no random environment, but using reinforcement. Nevertheless, it will be most convenient to use $Z$ for the RWRE side of the proof and $Y$ for the dynamical part.

A final observation is that conditional on $J$ and $W$, the discretization of $Z^W_s$ (i.e., the sequence of vertices visited by the process) has precisely the same law as the random walk on $G$ with conductances given by $C_{ij} = J_{ij} W_i W_j$.
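To make the discretization concrete, here is a small sketch (with made-up values for $J$ and $W$, not taken from the paper) computing the jump distribution of the discretized chain from the conductances $C_{ij} = J_{ij} W_i W_j$:

```python
# Transition probabilities of the discretized walk: from i, jump to a
# neighbor j with probability proportional to C_ij = J_ij * W_i * W_j.
# The graph, J and W below are illustrative made-up values.

def transition_probs(i, neighbors, J, W):
    weights = {j: J[frozenset((i, j))] * W[i] * W[j] for j in neighbors}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

# A path graph 0 - 1 - 2 with arbitrary rates and environment.
J = {frozenset((0, 1)): 2.0, frozenset((1, 2)): 1.0}
W = {0: 1.0, 1: 0.5, 2: 0.25}

p = transition_probs(1, [0, 2], J, W)
# C_10 = 2.0*0.5*1.0 = 1.0 and C_12 = 1.0*0.5*0.25 = 0.125
assert abs(p[0] - 1.0 / 1.125) < 1e-12
assert abs(p[2] - 0.125 / 1.125) < 1e-12
```

Note that the common factor $W_i$ cancels in the normalization, so from $i$ the jump distribution depends only on $J_{ij}W_j$ over the neighbors $j$, as expected for a reversible network.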

5.2. Guessing the environment

As with the LRRW, the main idea is to extract from the process some estimate for the environment, and show that it is reasonably close to the actual environment on the one hand, and behaves well on the other.

For neighboring vertices $i$, $j$, let $S_{ij}$ be the first time at which $Z$ jumps from $i$ to $j$, and let $\tau_{ij} = M_i(S_{ij})$ be the local time for $Z$ at $i$ up to that time. Given the environment $W$, we have that $\tau_{ij} \sim \operatorname{Exp}(\frac{1}{2} J_{ij} W_j/W_i)$. Thus we can use $Q_{ij} := \sqrt{\tau_{ji}/\tau_{ij}}$ as an estimator for $R_{ij} = W_j/W_i$.

For a vertex $v$, we consider as before the random simple path $\gamma$ from $v_0$ to $v$, where each vertex $x$ is preceded by the vertex from which $x$ is entered for the first time. This is just the backward loop erasure of the process up to the hitting time of $v$. As for the LRRW, we need two estimates. First we provide an analogue of Lemma 7.
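For concreteness, the first-entry path can be computed from a visit sequence as follows. This is an illustrative sketch (the function name is ours): each vertex's predecessor is the vertex from which it was first entered, and the path is read off backward from $v$.

```python
def backward_loop_erasure(visits, v):
    """First-entry path from visits[0] to v, as described in the text.

    visits: sequence of vertices visited by the (discretized) process,
    assumed to contain v. Each vertex's predecessor is the vertex from
    which it was entered for the first time.
    """
    start = visits[0]
    pred = {}
    for a, b in zip(visits, visits[1:]):
        if b != start and b not in pred:
            pred[b] = a  # first entry into b was from a
    # Walk back from v to the start along first-entry predecessors.
    path = [v]
    while path[-1] != start:
        path.append(pred[path[-1]])
    return path[::-1]

# The walk 0 -> 1 -> 0 -> 2 -> 1 -> 3 first enters 1 from 0, 2 from 0,
# and 3 from 1, so the path to 3 is [0, 1, 3].
assert backward_loop_erasure([0, 1, 0, 2, 1, 3], 3) == [0, 1, 3]
```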

LEMMA 17
For any simple path $\gamma$ in a finite graph $G$, any environment $W$, and any $s < 1$ we have
\[
\mathbb{E}_W \prod_{e\in\gamma} \Bigl(\frac{R_e}{Q_e}\Bigr)^{2s} = \Bigl(\frac{\pi s}{\sin \pi s}\Bigr)^{|\gamma|}.
\]

Second, we need an analogue of Lemma 8. Recall from Section 2 that $D_\gamma$ denotes the event that the backward loop erasure from $v$ is a given path $\gamma$. We use the same notation here.

LEMMA 18
For any $0 < s < 1/4$ there exists some $C(s) > 0$ such that the following holds. For any finite graph $G$, any conductances $J$, and any simple path $\gamma$ starting at $v_0$,
\[
\mathbb{E} \prod_{e\in\gamma} Q_e^{2s} \mathbf{1}_{\{D_\gamma\}} \le C(s)^{|\gamma|} \prod_{e\in\gamma} J_e^{2s}.
\]

Proof of Lemma 17
Given the environment, the time $Z^W$ spends at $i$ before jumping to $j$ is $\operatorname{Exp}(\frac{1}{2} J_{ij} W_j/W_i)$, which may be written as $(2W_i/J_{ij}W_j)X_{ij}$, where $X_{ij} \sim \operatorname{Exp}(1)$. Crucially, given the environment the variables $X_{ij}$ are all independent. For an edge $e = (i,j)$ this gives $R_e/Q_e = \sqrt{X_{ij}/X_{ji}}$. Therefore
\[
\mathbb{E}_W \prod_{e\in\gamma} \Bigl(\frac{R_e}{Q_e}\Bigr)^{2s}
= \bigl(\Gamma(1+s)\Gamma(1-s)\bigr)^{|\gamma|}
= \Bigl(\frac{\pi s}{\sin \pi s}\Bigr)^{|\gamma|},
\]
by the reflection identity for the $\Gamma$ function.
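The reflection identity, and the resulting per-edge factor, can be checked numerically. This is only an illustrative sketch, not part of the proof; the Monte Carlo part uses the fact, derived above, that $(R_e/Q_e)^2$ has the law of a ratio of two independent $\operatorname{Exp}(1)$ variables.

```python
import math
import random

# Exact check of the reflection identity used in the proof:
# Gamma(1+s) * Gamma(1-s) = pi*s / sin(pi*s), for 0 < s < 1.
for s in [0.1, 0.2, 0.25, 0.4]:
    lhs = math.gamma(1.0 + s) * math.gamma(1.0 - s)
    rhs = math.pi * s / math.sin(math.pi * s)
    assert abs(lhs - rhs) < 1e-12

# Monte Carlo check of the single-edge factor: with X, X' ~ Exp(1)
# independent, (R_e/Q_e)^2 has the law of X/X', so
# E[(R_e/Q_e)^{2s}] = E[(X/X')^s] = pi*s / sin(pi*s).
random.seed(0)
s, n = 0.2, 200_000
acc = 0.0
for _ in range(n):
    acc += (random.expovariate(1.0) / random.expovariate(1.0)) ** s
est = acc / n
assert abs(est - math.pi * s / math.sin(math.pi * s)) < 0.05
```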

LEMMA 19
Suppose $0 < s < 1/4$, and let $J > 0$ be fixed. Let $U \sim \operatorname{Exp}(J)$ and, conditioned on $U$, let $V \sim \operatorname{Exp}(J(1+U))$. Then
\[
\mathbb{E}\Bigl(\frac{2V + V^2}{2U + U^2}\Bigr)^s \le \frac{C}{1-4s}\,J^{2s},
\]
where $C$ is some universal constant.

Proof
We can reparameterize $U = X/J$ and $V = Y/(X+J)$, where $X$ and $Y$ are independent $\operatorname{Exp}(1)$ random variables. In terms of $X$, $Y$ we have
\[
\frac{2V + V^2}{2U + U^2}
= \frac{2J^2 Y(J+X) + J^2 Y^2}{X(J+X)^2(2J+X)}
< \frac{2J^2 Y}{X^3} + \frac{J^2 Y^2}{X^4}.
\]
We now calculate, using $(a+b)^s \le a^s + b^s$ for $0 < s < 1$,
\[
\mathbb{E}\Bigl(\frac{2J^2 Y}{X^3} + \frac{J^2 Y^2}{X^4}\Bigr)^s
\le J^{2s}\bigl(\mathbb{E}(2YX^{-3})^s + \mathbb{E}(Y^2 X^{-4})^s\bigr)
= J^{2s}\bigl(2^s\,\mathbb{E}Y^s\,\mathbb{E}X^{-3s} + \mathbb{E}Y^{2s}\,\mathbb{E}X^{-4s}\bigr)
\le \frac{C}{1-4s}\,J^{2s},
\]
where the equality is by independence and the last step uses $\mathbb{E}X^a \le \frac{C}{1+a}$. (The inequality $\mathbb{E}X^a \le C/(1+a)$ holds for $|a| < 1$, the relevant range here.)
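The pointwise inequality in the middle of the display above can be spot-checked numerically. The following sketch evaluates both sides on a grid of arbitrary positive test points (illustrative only, not a proof):

```python
# With U = X/J and V = Y/(X+J), the proof bounds
#   (2V + V^2) / (2U + U^2)  <  2*J^2*Y/X^3 + J^2*Y^2/X^4
# for all positive J, X, Y. Spot check the inequality on a grid.

def lhs(J, X, Y):
    U = X / J
    V = Y / (X + J)
    return (2.0 * V + V * V) / (2.0 * U + U * U)

def rhs(J, X, Y):
    return 2.0 * J * J * Y / X**3 + J * J * Y * Y / X**4

for J in [0.1, 1.0, 5.0]:
    for X in [0.05, 0.5, 2.0, 10.0]:
        for Y in [0.05, 0.5, 2.0, 10.0]:
            assert lhs(J, X, Y) < rhs(J, X, Y)
```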

Proof of Lemma 18
Consider an edge $e = (i,j) \in \gamma$, and let $T_{ij}$ be the first time $t$ at which $Y_t$ jumps from $i$ to $j$ (and similarly define $T_{ji}$). Since this can be confusing, we recall that there are two clocks measuring the local time at each vertex, and note that here we are using the $t$ times (governing the VRJP).

On the event $D_\gamma$ the process does not visit $j$ before the first jump from $i$ to $j$, and so $L_j(t) = 0$ for all $t < T_{ij}$. Hence the jump $i \to j$ occurs at rate $J_{ij}$ whenever $Y$ is at $i$, and so $U := L_i(T_{ij})$ has law $\operatorname{Exp}(J_{ij})$. More precisely, the statement about the jump rate implies that we can couple the process $Y$ with an $\operatorname{Exp}(J_{ij})$ random variable, so that on the event $D_\gamma$ it equals $U$.

Let $V := L_j(T_{ji})$ be the time spent at $j$ before the first jump back to $i$. Since $L_i(t) \ge U$ from the time we first enter $j$, the rate of such jumps is always at least $J_{ij}(1+U)$, and we find that $V$ is stochastically dominated by an $\operatorname{Exp}(J_{ij}(1+U))$ random variable.

All statements above concerning rates of jumps along the edge $e$ hold (on the event $D_\gamma$) uniformly in anything that the process does anywhere else. Thus it is possible to construct such exponential random variables for every edge $e \in \gamma$, independent of all other edges, so that on $D_\gamma$ the first equals $U_{ij}$ and $V_{ji}$ is dominated by the second. The claim then follows by Lemma 19, since $\tau_{ij} = 2U_{ij} + U_{ij}^2$ and $\tau_{ji} = 2V_{ji} + V_{ji}^2$ by their definitions and the time change formulae.

5.3. Exponential decay and Theorem 6

Let $G_R$ denote the ball of radius $R$ around $v_0$. Denote by $\mu^{(R)}$ the VRJP measure on $G_R$ and the corresponding expectation by $\mathbb{E}^{(R)}$. Note that $\mu^{(R)}$ depends also on the random weights $J$, but we will suppress this in the notation. In the proof, $\mathbb{E}_J$ denotes expectation with respect to $J$. Theorem 6 follows from the following.

THEOREM 20
There is a universal constant $c > 0$ such that the following holds. Let $G$ be a fixed graph with degree bound $K$. Let $J = (J_e)_{e\in E}$ be a family of independent initial rates with
\[
\mathbb{E} J_e^{1/5} < cK^{-8}.
\]
Then (a.s. with respect to $J$) the measures $\mu^{(R)}$ are a tight family and converge to a limit $\mu$ on $\mathbb{R}_+^E$ so that the VRJP is a time change of the process $Z$ in the environment $W$ given by $\mu$. The limit process is positive recurrent, and the environment satisfies
\[
\mathbb{E} W_v^{1/5} < 2K^{-5\operatorname{dist}(v_0,v)},
\]
and hence the stationary measure decays exponentially.

The moment condition on $J_e$ is trivially satisfied in the case that all $J_e$'s are bounded by some sufficiently small $J_0$. The particular condition comes from specializing to $s = 1/5$, and can be easily changed by taking other values of $s$ or by using Hölder's inequality in place of Cauchy–Schwarz in the proof below. The dependence on $K$ may be similarly improved.

Proof of Theorem 20
Combining Lemmas 17 and 18 with the Cauchy–Schwarz inequality, for any $v$, any radius $R > \operatorname{dist}(v_0,v)$, and any path $\gamma : v_0 \to v$ in $G_R$ we have (recall that $W_{v_0} = 1$):
\[
\mathbb{E}^{(R)} W_v^s \mathbf{1}_{\{D_\gamma\}}
\le \Bigl(\mathbb{E}^{(R)} \prod_{e\in\gamma}\Bigl(\frac{R_e}{Q_e}\Bigr)^{2s}\Bigr)^{1/2}
\Bigl(\mathbb{E}^{(R)} \prod_{e\in\gamma} Q_e^{2s}\,\mathbf{1}_{\{D_\gamma\}}\Bigr)^{1/2}
\le C_1^{|\gamma|} \prod_{e\in\gamma} J_e^s,
\]
where $C_1 = C_1(s)$ depends only on $s$. Let the $c$ from the statement of the theorem be $1/C_1(1/5)$. Then with $s = 1/5$ we get
\[
\mathbb{E}_J\, C_1^{|\gamma|} \prod_{e\in\gamma} J_e^s \le K^{-8|\gamma|}.
\]
By Markov's inequality we get that
\[
\mathbb{P}_J\Bigl(C_1^{|\gamma|} \prod_{e\in\gamma} J_e^s > K^{-6|\gamma|}\Bigr) \le K^{-2|\gamma|}.
\]
Since the number of paths of length $n$ is at most $K^n$, we get from Borel–Cantelli that, $J$-a.s.,
\[
\mathbb{E}^{(R)} W_v^s \mathbf{1}_{\{D_\gamma\}} \le K^{-6|\gamma|}
\]
for all but finitely many $\gamma$. Since the number of paths of length $n$ is at most $K^n$ and no path to $v$ is shorter than $\operatorname{dist}(v_0,v)$, this implies
\[
\mathbb{E}^{(R)} W_v^{1/5} < 2K^{-5\operatorname{dist}(v_0,v)}
\]
for all but a random finite set of $v$ which is independent of $R$. Since this bound is uniform in $R$, this implies that the measures $\mu^{(R)}$ are tight and thus that they have subsequential limits $J$-a.s. Let $\mu$ be any such subsequential limit. (We later deduce that $\mu$ is unique.) It is easy to see that the weak convergence of $\mu^{(R)}$ to $\mu$ implies a convergence of $Z^{(R)}$, $Y^{(R)}$ and the time change between them to $Z$, $Y$ and the time change between them corresponding to the infinite measure. (All convergences are along the chosen subsequence.) However, from the reinforcement viewpoint, $Y_t$ has the same law on all $G_R$ until the first time it reaches the boundary of the ball. Thus $\mu$ yields the VRJP on the infinite graph $G$.

As noted above, the discretized $Z_s$ is just a random walk on $G$ with conductances $C_{ij} = J_{ij} W_i W_j$. By Markov's inequality, $\mathbb{P}(W_v > K^{-3\operatorname{dist}(v_0,v)}) \le 2K^{-2\operatorname{dist}(v_0,v)}$. By the Borel–Cantelli lemma, it follows that a.s. $W_v \le K^{-3\operatorname{dist}(v_0,v)}$ for all but finitely many $v$, and therefore $C_e \le J_e K^{-6\operatorname{dist}(e,v_0)}$ for all but finitely many edges $e$. However, the number of edges at distance $n$ is at most $K^n$ and we assumed $J_e$ has a finite $1/5$ moment, so yet another application of the Borel–Cantelli lemma ensures that $\sum_e J_e K^{-6\operatorname{dist}(v_0,e)} < \infty$. Thus $\sum_e C_e$ is a.s. finite, and the total weight outside $G_R$ decays exponentially. This implies the positive recurrence.

Finally, since the process is a.s. recurrent, $Z$ visits each vertex infinitely often, and the environment can be deduced from the observed jump frequencies along edges, so the subsequential limit $\mu$ is in fact unique. Together with tightness, this implies convergence of the $\mu^{(R)}$.

Acknowledgments. We wish to thank Tom Spencer for suggesting to us that a result such as Theorem 2 might be true. Part of this work was performed during the XV Brazilian School of Probability in Mambucaba. We wish to thank the organizers of the school for their hospitality. We thank Long Teng Toinh for spotting a few mistakes in one of the drafts.

References

[1] D. COPPERSMITH and P. DIACONIS, Random walks with reinforcement, preprint, 1986.

[2] B. DAVIS, Reinforced random walk, Probab. Theory Related Fields 84 (1990), 203–229. MR 1030727. DOI 10.1007/BF01197845.

[3] P. DIACONIS and D. FREEDMAN, De Finetti's theorem for Markov chains, Ann. Probab. 8 (1980), 115–130. MR 0556418.

[4] M. DISERTORI and T. SPENCER, Anderson localization for a supersymmetric sigma model, Commun. Math. Phys. 300 (2010), 659–671. MR 2736958. DOI 10.1007/s00220-010-1124-6.

[5] M. DISERTORI, T. SPENCER, and M. ZIRNBAUER, Quasi-diffusion in a 3D supersymmetric hyperbolic sigma model, Commun. Math. Phys. 300 (2010), 435–486. MR 2728731. DOI 10.1007/s00220-010-1117-5.

[6] J. DODZIUK, Difference equations, isoperimetric inequality and transience of certain random walks, Trans. Amer. Math. Soc. 284 (1984), 787–794. MR 0743744. DOI 10.2307/1999107.

[7] P. G. DOYLE and J. L. SNELL, Random Walks and Electric Networks, Carus Math. Monogr. 22, Math. Assoc. Amer., Washington, DC, 1984. MR 0920811.

[8] K. EFETOV, Supersymmetry in Disorder and Chaos, Cambridge Univ. Press, Cambridge, 1997. MR 1628498.

[9] N. ENRIQUEZ and C. SABOT, Edge oriented reinforced random walks and RWRE, C. R. Math. Acad. Sci. Paris 335 (2002), 941–946. MR 1952554. DOI 10.1016/S1631-073X(02)02580-3.

[10] M. KEANE and S. ROLLES, "Edge-reinforced random walk on finite graphs" in Infinite Dimensional Stochastic Analysis, Koninklijke Nederlandse Akademie van Wetenschappen, 2000, 217–234. MR 1832379.

[11] R. LYONS and R. PEMANTLE, Random walk in a random environment and first-passage percolation on trees, Ann. Probab. 20 (1992), 125–136. MR 1143414.

[12] N. D. MERMIN and H. WAGNER, Absence of ferromagnetism or antiferromagnetism in one- or two-dimensional isotropic Heisenberg models, Phys. Rev. Lett. 17 (1966), 1133–1136.

[13] F. MERKL, A. ÖRY, and S. ROLLES, The "magic formula" for linearly edge-reinforced random walks, Statistica Neerlandica 62 (2008), 345–363. MR 2441859. DOI 10.1111/j.1467-9574.2008.00402.x.

[14] F. MERKL and S. ROLLES, Asymptotic behavior of edge-reinforced random walks, Ann. Probab. 35 (2007), 115–140. MR 2303945. DOI 10.1214/009117906000000674.

[15] F. MERKL and S. ROLLES, A random environment for linearly edge-reinforced random walks on infinite graphs, Probab. Theory Related Fields 138 (2007), 157–176. MR 2288067. DOI 10.1007/s00440-006-0016-3.

[16] F. MERKL and S. ROLLES, Recurrence of edge-reinforced random walk on a two-dimensional graph, Ann. Probab. 37 (2009), 1679–1714. MR 2561431. DOI 10.1214/08-AOP446.

[17] F. MERKL and S. ROLLES, Correlation inequalities for edge-reinforced random walk, Electron. Commun. Probab. 16 (2011), 753–763. MR 2861439. DOI 10.1214/ECP.v16-1683.

[18] R. PEMANTLE, Phase transition in reinforced random walk and RWRE on trees, Ann. Probab. 16 (1988), 1229–1241. MR 0942765.

[19] R. PEMANTLE, A survey of random processes with reinforcement, Probab. Surveys 4 (2007), 1–79. MR 2282181. DOI 10.1214/07-PS094.

[20] C. SABOT, Random walks in random Dirichlet environment are transient in dimension $d \ge 3$, Probab. Theory Related Fields 151 (2011), 297–317. MR 2834720. DOI 10.1007/s00440-010-0300-0.

[21] C. SABOT and P. TARRÈS, Edge-reinforced random walk, vertex-reinforced jump process and the supersymmetric hyperbolic sigma model, preprint, arXiv:1111.3991v4 [math.PR].

[22] B. VIRÁG, Anchored expansion and random walk, Geom. Funct. Anal. 10 (2000), 1588–1605. MR 1810755. DOI 10.1007/PL00001663.

Angel
Department of Mathematics, University of British Columbia, Vancouver, British Columbia V6T 1Z2, Canada; [email protected]

Crawford
Department of Mathematics, Technion–Israel Institute of Technology, Haifa 32000, Israel; [email protected]

Kozma
Department of Mathematics, Weizmann Institute of Science, Rehovot 76100, Israel; [email protected]