· renormalization operator for affine dissipative lorenz maps m´arcio alves∗ and eduardo...

43
Renormalization operator for affine dissipative Lorenz maps M´arcioAlves and Eduardo Colli March 14, 2008 Abstract We study properties of the renormalization operator arising in a three-dimensional family of affine dissipative Lorenz maps. With the explicit expression of the operator, which does not have a fixed point, we prove that the set of infinitely renormalizable maps is a lamination of C leaves. We also prove that the holonomies are H¨ older continuous, but not more than H¨ older, in general. Finally, we show that transversal fibers to the lamination intersect it in a zero Hausdorff dimension set. Math Subject Classification: Primary 37E05, 37E20 1 Introduction A broad understanding of the behavior of smooth dynamical systems is still far from being reached, and has been a challenge for mathematicians and physicists. General theories are often developed for sufficiently differentiable (or even analytic) systems, and continuity is almost always a basic assumption. Still discontinuous maps arise naturally in dynamics. A set of examples comes from three-dimensional flows, where Poincar´ e sections intersecting a stable (or un- stable) manifold of a singularity generate discontinuous return maps, and these maps often reduce to a one-dimensional dynamics because of the existence of an invariant stable foliation. These maps are often called Lorenz maps since they explain some features of the dynamics first discovered by Lorenz (see, for example, [7], [9] and [10]). * Partially supported by the brazilian agency CAPES. Partially supported by the brazilian agency FAPESP. 1

Upload: others

Post on 24-Oct-2020

1 views

Category:

Documents


0 download

TRANSCRIPT

  • Renormalization operator for affine dissipative

    Lorenz maps

    Márcio Alves∗ and Eduardo Colli†

    March 14, 2008

    Abstract

    We study properties of the renormalization operator arising in a three-dimensionalfamily of affine dissipative Lorenz maps. With the explicit expression of the operator,which does not have a fixed point, we prove that the set of infinitely renormalizablemaps is a lamination of C∞ leaves. We also prove that the holonomies are Höldercontinuous, but not more than Hölder, in general. Finally, we show that transversalfibers to the lamination intersect it in a zero Hausdorff dimension set.

    Math Subject Classification: Primary 37E05, 37E20

    1 Introduction

    A broad understanding of the behavior of smooth dynamical systems is still far frombeing reached, and has been a challenge for mathematicians and physicists. Generaltheories are often developed for sufficiently differentiable (or even analytic) systems,and continuity is almost always a basic assumption.

    Still discontinuous maps arise naturally in dynamics. A set of examples comesfrom three-dimensional flows, where Poincaré sections intersecting a stable (or un-stable) manifold of a singularity generate discontinuous return maps, and these mapsoften reduce to a one-dimensional dynamics because of the existence of an invariantstable foliation. These maps are often called Lorenz maps since they explain somefeatures of the dynamics first discovered by Lorenz (see, for example, [7], [9] and[10]).

    ∗Partially supported by the brazilian agency CAPES.†Partially supported by the brazilian agency FAPESP.

    1

  • What is striking about discontinuities is that intricate dynamics may appear evenwithout any expansion or criticality. This work is devoted to studying and charac-terizing a certain family of interval maps, which can be classified as Lorenz maps,described by three parameters inside the cube (0, 1)3. To be specific, each map of thefamily is affine, and with derivative positive and smaller than one outside its singlediscontinuity point. One parameter (b) describes the position of the discontinuitypoint into the interval and the two other parameters (here α and β) are the slopesto the left and to the right of the discontinuity point.

    In this family, the dynamical behaviour alternates between hyperbolic, i.e. exhibit-ing an attracting hyperbolic periodic point, and neutral, where there is an invariantset which is the complement of the orbit of a wandering interval, as in the Denjoycounter-examples for some circle diffeomorphisms with irrational rotation numbers(see [5] for a general presentation).

    A key element in the study of the family is the renormalization operator R,defined for maps coming from parameters in a certain subset of the cube. Thesemaps are called renormalizable. A map f is said to be N–renormalizable, N ≥ 1,if f,Rf, . . . ,RN−1f belong to the domain of R, and infinitely renormalizable if N–renormalizable for all N ≥ 1. It turns out that the neutral maps are exactly theinfinitely renormalizable maps, and all the other maps (non–renormalizable or finitelyrenormalizable) are hyperbolic.

    We identify the cube (0, 1)3 with the image of the one-to-one parametrization(α, β, b) 7→ fα,β,b. In this way, we may say that a parameter is renormalizable ora parameter is infinitely renormalizable, meaning that the corresponding functionshave this or the other property. The notation R will be also used for the operatorinduced in the parametrization cube.

    Renormalization operators in dynamical systems came into the scene in the sev-enties, with the works of Feigenbaum ([6]) and Coullet-Tresser ([3]) on the quadraticfamily. Since then, intense research has been made aiming at proving the conjecturesposed by these and other authors.

    The main advantage of working with this three-parameter family resides in thefinite dimensionality of the domain of the renormalization operator, allied to thepossibility of explicitly writing its expression. In general, as a consequence of the notso good properties of the composition operator, the renormalization operator is noteven differentiable (see [4] for a recent account in the context of unimodal maps).

    We will prove the following.

    Theorem 1. The set of infinitely renormalizable parameters (α, β, b) ∈ (0, 1)3 is alamination, with C∞ two-dimensional leaves, where each leaf is the graph of somefunction of (α, β). The leaves vary continuously in the Cr topology, for any r ≥ 0.The holonomies are Hölder continuous, but not uniformly Hölder continuous in (0, 1)3

    2

  • and not more than Hölder continuous, in general. Moreover, transversal fibers to thelamination intersect it in a zero Hausdorff dimension set.

    Differently from the renormalization operator of unimodal families, this one doesnot present an attracting set: for every triple (α, β, b) in the domain of RN for allN ≥ 1, the orbit {RN (α, β, b)} accumulates in the set {(0, 0, 0), (0, 0, 1)}, which isoutside the open cube and represents trivial dynamics.

    In this work we are concerned mostly with the parameter space. Some resultsabout the dynamics in the configuration space have been reported in previous works([1, 2, 8, 12, 11]). Our explanation on the configuration space will be restricted tothe assertions that are necessary to understand the parameter space, and this doesnot include, for example, involved combinatorial aspects.

    Some of this works also study the renormalization operators for families like this([11, 14, 13]). But they are concerned more with the combinatorics, instead of dif-ferentiable structures of the parameter space.

    In Section 2, we establish the dynamical dichotomy of this kind of maps: hiper-bolicity or wandering interval. Every assertion about the configuration space is con-tained in this Section. In Section 3, we list the main facts about the renormalizationoperator and its domain, explicitly writing its formula and calculating its derivative.The assertion that the set of infinitely renormalizable maps is a lamination of contin-uous graphs is proved in Section 4. There we also show that this lamination may beextended to a foliation of the whole parameter space and that vertical fibers intersectthe lamination in a zero Hausdorff dimension set. Section 5 is dedicated to studyingholonomies between vertical fibers, which easily extend to arbitrary transversal fiberswith the results of the remaining sections. In Section 6 we show that the operatorhas a hyperbolic structure when (α, β) is not so far from (0, 0), but uniform invariantunstable cone families containing the vertical direction are obtained for the wholecube. In Section 7 we prove that each leaf is Cr, for any r ≥ 1, and as consequenceof the proof that leaves vary continuously in the Cr topology, for every r ≥ 1.

    2 Attracting periodic orbits versus wandering

    intervals

    Let {fα,β,b}α,β,b with (α, β, b) ∈ (0, 1)3 be the family of functions of the interval

    [b− 1, b] given by

    fα,β,b(x) =

    {b+ αx , x < 0b− 1 + βx , x > 0

    (1)

    (see Figure 1). This expression has the advantage of dealing with a unit intervaland putting the discontinuity point always at the origin. Up to a translation of the

    3

  • x-coordinate this is equivalent to a map of [0, 1] into itself where the discontinuitypoint is 1 − b and the slopes are the same numbers α and β.

    α

    β

    b

    bb − 1b − 1

    0

    0

    Figure 1: A typical function of the three-parameter family {fα,β,b}α,β,b.

    We aim at describing the parameter space of this family accordingly to the dy-namical behavior, which is characterized by a dichotomy, as the following argumentsshow.

    Accordingly to the value of b, take the smaller interval of [b − 1, 0) and (0, b].Without loss of generality, we may assume b ≤ 12 and look at (0, b]. Similar argumentswill be clearly valid for b > 12 with the interval [b− 1, 0) (one may think that if b >

    12

    then the conjugate map g = τ−1 ◦ f ◦ τ , where τ(x) = −x, falls in the case “b < 12”).Assuming b ≤ 12 , the first consequence is that f((0, b]) ⊂ [b− 1, 0), since f

    ′ < 1.If f has a negative root z then [z, 0) is a fundamental domain for f restricted to

    the negative side. For all x ∈ (z, 0), f(x) > 0, and for every x ∈ [b − 1, 0) there isi = i(x) ≥ 0 such that f i(x) ∈ [z, 0). In fact, [0, b) is also a fundamental domain,since f([z, 0)) = [0, b).

    If f has no negative root then there are two possibilities, although the first isoutside the parameter space: (i) b = 0, where f i(x) → 0− for every x, or (ii) b > 0but f(x) > 0 for all x ∈ [b − 1, 0). In the latter case, f(x) ∈ (b − 1, 0) for everyx ∈ (0, b] and f(x) ∈ (0, b) for every x ∈ [b − 1, 0). This implies that the closureof f2((0, b]) is contained in (0, b] and f2|(0,b] is a contraction. Hence f

    2|(0,b] has a(unique) fixed point, and the original map f has a unique periodic orbit of periodtwo, which attracts every orbit.

    Note that f ′ < 1 implies that (ii) happens whenever b = 12 . In fact, for every(α, β) ∈ (0, 1)2 there is ǫ = ǫ(α, β) > 0 such that if b ∈ (12 − ǫ,

    12 ] then the same

    happens for fα,β,b.If z = b− 1 is a root of f then the return map to (0, b] is also a contraction, but

    the closure of f2|(0,b] is not anymore contained in (0, b]. If we extend f2 to [0, b] then

    4

  • 0 is fixed point, attracting all orbits. For the original map f , the orbits approach thepair of points {0, b − 1} as if they constituted a period two attracting periodic orbit.Note that 0 (and consequently also b− 1) are approximated only by the right side.

    We will never be free of this kind of behavior: a first return map to an openinterval which is a contraction but the fixed point is in one of the boundary points.But, under the dynamical point of view, this situation may be regarded as periodichyperbolic, even with no true attracting periodic orbit. We will call this kind of orbitas a false attracting periodic orbit, see below for a better characterization.

    Now suppose again that f has a root z ∈ (b − 1, 0). Then f(b − 1) < 0 and[b − 1, f(b − 1)) is another fundamental domain. Moreover, as f is not onto (sincef ′ < 1) the order b− 1 < f(b) < f(b− 1)) is always respected. We denote by W theinterval (f(b), f(b− 1)), exactly the maximal interval outside the image of f .

    In order to obtain the return map to (0, b) we follow the orbit of this interval.The first iterate is (b− 1, f(b)), and let k be the first integer such that

    fk((b− 1, f(b))) ∩ (0, b) 6= ∅ (2)

    (if f has no negative root then k = 1, but k may be equal to 1 even if f has a negativeroot). As each iterate of (b− 1, f(b)) is contained in some fundamental domain, and[0, b) is also a fundamental domain then there are only two possibilities (see Figures 2and 3):

    1. fk((b− 1, f(b))) ⊂ (0, b)

    2. 0 ∈ fk((b− 1, f(b)))

    b − 1 0 b

    W

    f(b)

    f(b − 1) f 2(b − 1)

    fk(b − 1)

    fk−1(W )

    Figure 2: First intersection of an iterate of (0, b) with itself - first case.

    In case 1., the return map to (0, b) is simply the restriction of fk+1 to this interval.Since fk+1 is a contraction, it has a unique fixed point (at least for the extended map

    5

  • b − 1 0 b

    W

    f(b)

    f(b − 1) f 2(b − 1)

    fk(b − 1)

    fk(W )

    Figure 3: First intersection of an iterate of (0, b) with itself - second case.

    to [0, b]), and we easily conclude that f has an attracting periodic orbit (true or false)of period k + 1, attracting all orbits except the pre-orbit of 0.

    To be more precise, there are essentially two subcases here. If the closure offk((b − 1, f(b))) is contained in (0, b) then there is a true attracting periodic orbit.In this case, 0 ∈ fk−1(W ) implies that the pre-orbit of 0 is finite, since there are nopre-images of points of W .

    The other possibility is that 0 or b are in the closure of fk((b − 1, f(b))). In theformer case, the false attracting periodic orbit is the set of points {b − 1, f(b1),. . ., fk−1(b − 1), 0}. The cycle is interrupted at 0 since f is not well definedat this point, but still b − 1 = f(0+). In the latter, the false orbit is the set{b, f(b), f2(b), . . . , fk−1(b), 0}, and now b = f(0−). Once again the pre-orbit of 0is finite.

    In case 2., there is c ∈ (0, b) such that fk+1(c) = 0. The return map R to (0, b), forpoints x ∈ (c, b), is given by R(x) = fk+1(x) . But as the iterates of (b− 1, f(b)) arealways contained inside fundamental domains then R(x) = fk+2(x) for all x ∈ (0, c).Then we observe that limx→c+ R(x) = 0 and limx→c− R(x) = b, so that R has theaspect shown in Figure 4.

    By an affine change of coordinates we rescale R to a function g defined in [̃b−1, b̃]with discontinuity at 0 and extend it to the boundary points b̃−1 and b̃. In this casewe say that f is renormalizable and call g the renormalization of f . The functionthat takes f to g will be called the renormalization operator R.

    The renormalization reduces the dynamics to the new map Rf , since, except forthe pre-orbit of 0 in the negative side, for every x ∈ [b− 1, 0) there is n = n(x) suchthat fnx ∈ (0, b). Now the same analysis can be done for Rf , leading to the samedichotomy: either there is an attracting periodic orbit or Rf is renormalizable. Weconclude that either f is N times renormalizable (N ≥ 0) and RNf has an attracting

    6

  • 00

    b

    bc

    R

    fk+1

    fk+2

    Figure 4: Return map when 0 ∈ fk((b − 1, f(b)).

    periodic orbit, true or false and attracting all orbits except the finite pre-orbit of 0,or f is infinitely (many times) renormalizable.

    In order to characterize in detail this dichotomy we introduce some new notationsand definitions. The return maps will not be rescaled, since we need to relate themto the original map f , but we will speak about “renormalization” even so.

    Suppose that f is N times renormalizable, N ≥ 0. Let I0 ≡ [a0, b0] ≡ [b − 1, b],c0 ≡ 0 and R0 ≡ f . Let also W0 be the interval W described above, which isthe maximal (non-trivial) interval inside [b − 1, b] such that f(x) 6∈ W0 for all x ∈[b − 1, b]. This property immediately implies that the forward iterates of W0 arepairwise disjoint.

    Now suppose that for all 0 < i ≤ N the interval Ii−1 = (ai−1, bi−1), ci−1 ∈ Ii−1,Ri−1 : Ii−1 \ ci−1 → Ii−1 (belonging, up to rescaling, to the three-parameter family{fα,β,b}), and Wi−1, the maximal interval outside the image of Ri−1, have beendefined. By induction, define Ii as the smaller interval of (ai−1, ci−1) and (ci−1, bi−1)(if these intervals were of equal size then, as remarked above, in the beginning ofthis Section, Ri−1 would not be renormalizable, contradicting the hypothesis on N).

    Then let ki−1 be the smallest integer such that ci−1 ∈ Rki−1+1i−1 (Ii) (see Figure 5). The

    point ci is defined as the point of Ii that satisfies Rki−1+1i−1 (ci) = ci−1. The map Ri is

    the first return map of Ri−1 to Ii, continuously extended to the boundary points of Ii.

    Finally Wi = Rki−1i−1 (Wi−1) is the (maximal) interval inside Ii such that Ri(x) 6∈ Wi

    for any x ∈ Ii.These definitions could be partially carried on one step further, even if f was

    not N + 1 times renormalizable. The interval IN+1 is the smallest of (aN , cN ) and(cN , bN ) and RN+1 is the first return map of RN to IN+1. But if RN+1 is a continuousmap (a contraction), then cN+1 and WN+1 are not defined.

    We are ready to prove the following results. First we notice that every point yin the interval has at most one pre-image, but possibly none, since f is injective but

    7

  • ai−1 bi−1cici−1

    Wi−1

    Wi = Rki−1i−1 (Wi−1)

    Figure 5: Induction scheme for return maps.

    not onto. Hence every point y has a well defined past orbit y, f−1y, f−2y, . . ., butthis orbit may or not be finite.

    Proposition 1. If there is n ≥ 0 such that 0 ∈ fnW0 then the pre-orbit of 0 is finite.

    Proof. The hypothesis implies f−n(0) ∈ W0. If f−n(0) ∈ W0 then f

    −(n+1)(0) is notdefined. Otherwise, f−n(0) = f(b) or f−n(0) = f(b − 1), hence f−(n+1)(0) = b orf−(n+1)(0) = b− 1, and this implies that f−(n+2)(0) is not defined.

    Proposition 2. If f is N but not N + 1 times renormalizable, for N < ∞, thenthere is n ≥ 0 such that 0 ∈ fn(W0), implying that 0 has a finite past orbit.

    Proof. The hypothesis implies that RN is not renormalizable (accordingly to the

    notations and definitions above). Then there is k ≥ 0 such that cN ∈ RkN (WN ). Nowwe observe, by the inductive construction above, that RkN (WN ) = f

    l(W0), for somel ≥ 0, and that f j(cN ) = 0, for some j ≥ 0. The assertion follows with n = j+ l.

    Proposition 3. If f is infinitely renormalizable then 0 6∈ fn(W0) for any n ≥ 0.

    Proof. If N = ∞ then the points ci are defined for every i ≥ 0. As each ci is a pre-image of ci−1 then the past orbit of c0 = 0 is infinite. If 0 belonged to fn(W0), for somen ≥ 0, then the past orbit of 0 would be finite, by Proposition 1, contradiction.

    Proposition 4. If f is N but not N + 1 times renormalizable, N < ∞, then the(true or false) attracting periodic orbit is unique and attracts every orbit except thefinite past orbit of 0.

    8

  • Proof. First we observe that RN+1 is a contraction, with domain in IN+1. Thenthere is a unique fixed point for the extension of RN+1 to the closure of IN+1. Butthis implies that there is a unique attracting periodic orbit of RN inside IN , sinceevery orbit of IN intersects IN+1, except the pre-orbit of cN . The same reasoningimplies that, except for the pre-orbit of cN (under RN ), every orbit in IN is attractedto this orbit. By backward induction, we conclude the same for i = N − 1, N − 2, . . .,proving the Proposition.

    Proposition 5. If f has an attracting periodic orbit (true or false) then f is notinfinitely renormalizable.

    Proof. Let O(x) be a true attracting periodic orbit. If f is N times renormalizable,0 ≤ N ≤ ∞, we get, for all 0 < i ≤ N , that O(x) ∩ (ai−1, ci−1) 6= ∅ and O(x) ∩(ci−1, bi−1) 6= ∅, with O(x) ∩ Ii an attracting periodic orbit of Ri. Then #O(x) ∩Ii < #O(x) ∩ Ii−1. If f was infinitely renormalizable then this would lead to acontradiction.

    If there is a false periodic orbit then its ciclicity is interrupted at 0, the only pointwhere f is not well defined, and the point following 0 must be b or b− 1, the laterallimits of f(x) at x = 0. But this also implies that the pre-orbit of 0 must be finite.As for infinitely renormalizable maps the pre-orbit of 0 is infinite, the Propositionfollows.

    This last Proposition shows that if f is infinitely renormalizable then W = W0 isa wandering interval, since neither its orbit intersects itself nor it is attracted to aperiodic orbit.

    3 A formula for the renormalization operator

    3.1 Description of the domain

    Although parameters of the family are taken inside the open cube (0, 1)3, it is instruc-tive to look at parameters of the faces and edges of the cube. For example, if b = 0 orb = 1 then every orbit converges to 0, that is, the boundary of the domain of f . If α(or β) equals zero then f is not injective, and there is always an attracting periodicorbit, whose period depends on b and β (or α). To be more precise, suppose thatb < 12 and the slope β in (0, b] is zero. Take k as the smallest positive integer suchthat fk(b − 1) ∈ (0, b]. Then p = fk(b − 1) is an attracting periodic point of periodk + 1. If the other slope was zero (α, in this case), then b would be an attractingperiodic point of period two. Another degenerate situation occurs for α = β = 1,which corresponds to a circle rotation. By varying b along [0, 1] we obtain the fullfamily of rotations. These extremal cases may be visualized in Figure 6.

    9

  • b

    αβ

    β = 0:α = 0:

    α = β = 1:b = 0:

    b = 1:

    attracting periodic orbitattracting periodic orbit

    orbits → boundary

    orbits → boundary

    circle rotations

    Figure 6: Dynamical behaviour for parameters in the boundary of the cube.

    The cases {α = 1, β 6= 1} and {α 6= 1, β = 1} are not as degenerate as the others,and indeed could be treated as the interior parameters.

    We first explore a family {fb}b = {fα,β,b}b for some fixed pair (α, β) ∈ (0, 1)2. In

    fact, the reasoning will extend continuously to [0, 1]2. The parameter b is allowed tovary along [0, 1]. As in the preceding Section, we look at the first return map to thesmaller interval of [b − 1, 0) and (0, b], which is the former if b > 12 and the latter ifb < 12 . We already know that for b =

    12 there is an attracting periodic orbit of period

    two.For the calculations we assume b < 12 , since the case b >

    12 will be easily deduced

    by symmetry (as the involution τ(x) = −xmakes the conjugacy τ ◦fα,β,b◦τ = fβ,α,1−bthen it suffices to interchange the roles of α and β, and take 1− b instead of b, in theformulas that will appear from now on). Let f = fb and consider iterations of theinterval (b− 1, f(b)), defining k (as before, in Section 2) as the first positive integersuch that

    fk((b− 1, f(b))) ∩ (0, b) 6= ∅ . (3)

    If b = 12 then 0 < f(b−1) < f(f(b)). As b decreases then also f(b−1) decreases, andthere is some b such that f(b− 1) = 0. The next situation is f(b− 1) < 0 < f(f(b)),which implies that f is renormalizable. Decreasing b, this situation ends up withf(f(b)) = 0, but now 0 < f2(b− 1), and so on.

    The overall picture is the following. For b < 12 there are disjoint open intervals

    10

  • I−1 , I−2 , . . ., accumulating on b = 0, such that b ∈ I

    −k implies

    fk(b− 1) < 0 < fk(f(b)) . (4)

    We denote the boundary points of I−k by bE−,k and b

    I−,k, where b

    I−,k is the nearest to

    b = 12 (I from “internal”, E from “external”). Therefore the parameter bI−,k is the

    solution of the equationfk(b− 1) = 0 (5)

    and bE−,k is the solution of the equation

    fk(f(b)) = 0 . (6)

    0

    12

    b

    I−1

    I−2

    I−3

    I−4I−5

    Figure 7: Renormalizable parameters for fixed α and β.

    These two equations can be explicitly stated for the piecewise linear family. Asb − 1 and f(b) = b − 1 + βb are negative, the iterates are done only in the left side(with respect to the discontinuity point). It is easy to prove by induction that fk(x)(k ≥ 1), in the left side, is given by the expression

    b

    k−1∑

    i=0

    αi + αkx . (7)

    The solution of Equation (5) is then given by

    bI−,k = αk

    (k∑

    i=0

    αi

    )−1(8)

    11

  • and of Equation (6) by

    bE−,k = αk

    (k∑

    i=0

    αi + βαk

    )−1. (9)

    The intervals where f is renormalizable, for b > 12 , are named I+1 , I

    +2 , . . ., with

    boundary points bI+,k and bE+,k. The value of the boundary points are obtained by

    interchanging α and β, and subtracting the result from 1. Therefore

    bI+,k = 1 − βk

    (k∑

    i=0

    βi

    )−1(10)

    and

    bE+,k = 1 − βk

    (k∑

    i=0

    βi + αβk

    )−1. (11)

    Now we consider the domain of the renormalization operator as a whole. Let

    D±k =⋃

    (α,β)∈(0,1)2

    {α} × {β} × I±k (α, β) , (12)

    where I±k (α, β) denotes the dependence of I±k on (α, β), as given in Equations (8),

    (9), (10), and (11). These are the connected components of the domain of R. Un-derstanding this set of connected components helps a lot in future estimates. InFigure 8, we show bI−,1, b

    E−,1, b

    I+,1 and b

    E+,1.

    Below we list a collection of facts about these leaves that can be easily deducedfrom the formula. We let to the reader’s imagination the complete drawing of allleaves.

    1. bI−,k does not depend on β, as well as bI+,k does not depend on α.

    2. bI−,k is an increasing function of α, equal to 0 for α = 0 and equal to1

    k+1 for

    α = 1; bI+,k is a decreasing function of β, equal to 1 for β = 0 and equal to

    1 − 1k+1 for β = 1.

    3. The derivative of bI,E−,k (resp. bI,E+,k) at α = 0 (resp. β = 0) is equal to zero for

    all k ≥ 2 and equal to 1 (resp. −1) for k = 1.

    4. bI−,1 and bI+,1 coincide at (α, β) = (1, 1) (and only there).

    5. For all k ≥ 1, bE±,k and bI±,k+1 coincide at (α, β) = (1, 1) (and only there).

    6. For all k ≥ 1, bI±,k and bE±,k coincide at (α, 0) and (0, β) (like two square sheets

    of paper glued together by two adjacent edges).

    12

  • Figure 8: Boundaries of D−1 and D+1 .

    7. The size of I±k attains its maximum value

    1

    (k + 1)(k + 2)(13)

    at (α, β) = (1, 1).

    The assertions above show, in particular, that the parameters regions for whichthere is an attracting periodic orbit disappear as (α, β) tends to (1, 1). This is inagreement with the fact that rotations do not have attracting periodic orbits.

    3.2 Distance between components

    In Section 5 we shall need some lower and upper bounds on the distance betweenlayers of the renormalization operator’s domain. The distances are estimated alongvertical fibers.

    13

  • Lemma 1. There is a positive function C(α, β), monotonically increasing in botharguments, such that

    C(α, β)−1 ≤|b2 − b1|

    αl2≤ C(α, β) , (14)

    whenever (α, β, b1) ∈ D−l1, (α, β, b2) ∈ D

    −l2

    and l1 > l2 ≥ 1, and

    C(α, β)−1 ≤|b2 − b1|

    βl2≤ C(α, β) , (15)

    whenever (α, β, b1) ∈ D+l1, (α, β, b2) ∈ D

    +l2

    and l1 > l2 ≥ 1. Moreover, if l1, l2 ≥ 1,

    (α, β, b1) ∈ D−l1, and (α, β, b2) ∈ D

    +l2

    then

    C(α, β)−1 ≤ |b1 − b2| ≤ 1 (16)

    Proof. A straightforward calculation with Formulas (8), (9), (10), and (11) showsthat

    1.∣∣bE−,l2 − b

    I−,l1

    ∣∣ ≥(

    (1 − αβ)(1 − α)

    1 + 11−α

    )αl2 , (17)

    2.∣∣bE+,l2 − b

    I+,l1

    ∣∣ ≥(

    (1 − αβ)(1 − β)

    1 + 11−β

    )βl2 , (18)

    3. ∣∣bI−,l2 − bE−,l1

    ∣∣ ≤(

    1

    1 − α+ αβ

    )αl2 , (19)

    4. ∣∣bI+,l2 − bE+,l1

    ∣∣ ≤(

    1

    1 − β+ αβ

    )βl2 , (20)

    and Inequalities (14) and (15) follow. For Inequality (16), |b1 − b2| is at least thedifference |bI+,1 − b

    I−,1|, which is equal to

    1 −α

    1 + α−

    β

    1 + β. (21)

    14

  • 3.3 Formula and derivative

    First we consider renormalization for a fixed pair (α, β), for b inside an interval I±k ,for some fixed k ≥ 1. The renormalization Rf is nothing more than the rescalingof the first return map to the smaller interval of [b − 1, 0) and (0, b]. The rescaledfunction must have discontinuity point at 0 and domain of size equal to one. As f ispiecewise linear then so is Rf . The new slopes (α′, β′) are (βαk+1, βαk), in the caseof b ∈ I−k , and (αβ

    k, αβk+1), in the case of b ∈ I+k .In fact, it is easy to see that R sends the set {α}×{β}×I±k onto {α

    ′}×{β′}×[0, 1].In order to obtain the explicit formula b 7→ b′ one must calculate the discontinuitypoint c1 of the return map. In the case that b ∈ I

    ±k , c1 is the solution of the

    implicit equation fk(f(x)) = 0, for x > 0, where f(x) = b− 1 + βx, and fk(f(x)) =b∑k−1

    i=0 αi + αk(b− 1 + βx), accordingly to Equation 7. Then b′ = b−c1

    b, which after

    manipulation gives

    b′ = 1 − β−1[b−1 − (bI−,k)

    −1]

    =

    (1 + (βαk)−1

    k∑

    i=0

    αi

    )− β−1b−1 . (22)

    If b ∈ I+k an analogous reasoning gives

    b′ = α−1[(1 − b)−1 − (1 − bI+,k)

    −1]

    = −(αβk)−1k∑

    i=0

    βi + α−1(1 − b)−1 (23)

    In both cases the transformation is orientation-preserving, but not affine!

    Definition 1. For σ = ±, k ≥ 1 and (α, β) ∈ (0, 1)2 we call Γσ,kα,β the one-variablefunction that sends b ∈ Iσk (α, β) to b

    ′ ∈ (0, 1), as above, where Iσk (α, β) indicates thedependence of Iσk on α and β.

    The derivative of R depends on the domain D±k to which ω belongs. For D−k we

    have

    DR(ω) =

    (k + 1)βαk αk+1 0kβαk−1 αk 0

    β−1 ∂∂α

    (bI−,k)−1 β−2(b−1 − (bI−,k)

    −1) β−1b−2

    . (24)

    By using symmetry, we will skip writing the explicit expression of DR(ω) for ω ∈ D+k .It must be remarked that the derivative with respect to b of the third component

    of R gives the expansion along vertical fibers. Then DΓ−,kα,β , the derivative of R

    restricted to {(α, β)} × I−k , is bounded between β−1(bI−,k)

    −2 and β−1(bE−,k)−2 (in

    particular greater than 4, since β < 1 and bI−,k ≤12). By Equations (8) and (9), this

    gives

    β−1α−2k ≤ DΓ−,kα,β ≤ β−1α−2k

    (2

    1 − α

    )2. (25)

    15

  • This also implies (1 − α

    2

    )2≤

    |I−k |

    βα2k≤ 1 . (26)

    Similar results for I+k come by simply interchanging α and β.A last remark is the following Lemma.

    Lemma 2. 0 ≤ ∂∂αbI−,k ≤ 1. As a consequence,

    (bI−,k)−2 ≥ −

    ∂α(bI−,k)

    −1 (27)

    Proof. The consequence comes from differentiating (bI−,k)−1. With respect to the

    first assertion,

    ∂αbI−,k =

    ∑ki=0(k − i)α

    k+i−1

    (∑k

    i=0 αi)2

    , (28)

    and the result follows from comparing the terms appearing in the sums of the nu-merator and the denominator.

    3.4 Iterations

    Whenever Rn(α0, β0, b0) is defined, some conclusions may be derived. For this pur-pose, let (αm, βm, bm) = R

    m(α0, β0, b0), m = 1, . . . , n. As (αm+1, βm+1) is equal to(βmα

    km+1m , βmα

    kmm ) or (αmβ

    kmm , αmβ

    km+1m ), for some km ≥ 1, m = 0, . . . , n, so αn (or

    βn) may be written as a product of the kind αl0β

    j0, where l and j are bigger or equal

    than n. In particularαn , βn ≤ (α0β0)

    n . (29)

    This shows that infinitely renormalizable parameters have orbits, under R, whoseprojections in the (α, β)-plane go to (0, 0). On the other hand, for (α, β) near (0, 0),the values of b such that (α, β, b) is in the domain of the renormalization operatorare near 0 and 1. Therefore, (αn, βn, bn) accumulates on the set {(0, 0, 0), (1, 1, 1)}.

    But the inequality shown in (29) may be still more precise. By induction, it iseasy to see that l and j must be greater than 2n−1, hence

    αn , βn ≤ (α0β0)2n−1 . (30)

    andαnβn ≤ (α0β0)

    2n (31)

    This can be used to estimate the thickness of the connected components of thedomain of Rn, by giving a lower bound for the derivative at each iterate. Let{(α0, β0)}×J be the intersection of this connected component with {(α0, β0)}×(0, 1)

    16

  • and assume J ⊂ I−k (α0, β0). By repeated application of the lower bound at Inequa-tion (25), we get

    |J | ≤ (β0α2k0 ) · (α1β1) · · · (αn−1βn−1) , (32)

    since this is the best we can do without knowing the layers of the future iterates.Then, using Inequation (31),

    |J | ≤ (β0α2k0 ) · (α0β0)

    2n−2 . (33)

    One further improvement can be done since as one knows that J ⊂ I−k then α1 =

    β0αk+10 and β1 = β0α

    k. Now Inequation (31) also implies αmβm ≤ (α1β1)2m−1 , for

    m ≥ 1, hence|J | ≤ (β0α

    2k0 ) · (β

    20α

    2k+10 )

    2n−1−1 . (34)

    Moreover, since 2n−1 − 1 ≥ n− 1 for all n ≥ 1 then in particular

    |J | ≤ (β0α2k0 )

    n . (35)

    4 Lamination and foliation

    4.1 Invariant lamination

    We are searching a lamination which is invariant under the action of the renormal-ization operator. This is the set of infinitely renormalizable parameters (α, β, b).

    As in the case of the boundary leaves of the domain of R the invariant leaves maybe found as two-variable graphs. This is done as follows. First, we fix a sequenceof symbols s = {(σ0, k0), (σ1, k1), (σ2, k2), . . .}, where σm = ± and km ≥ 1, for m =0, 1, 2, . . .. We look for the set of points (α0, β0, b0) such that (αm, βm, bm) ∈ D

    σmkm

    ,where (αm, βm, bm) is defined by induction by (αm, βm, bm) = R(αm−1, βm−1, bm−1),m ≥ 1, and call L = L(s) such a set. We assert that L intersects each fiber {α} ×{β} × (0, 1) at one and only one point, hence it is a graph of some function L = Ls.

    Fix the point (α0, β0). We want to show that there is one and only one point(α0, β0, b0) in L. As a first approach, we know that

    b0 ∈ Iσ0k0

    (α0, β0) ≡ J0 . (36)

    But we also know that

    R(α0, β0, b0) ∈ {α1} × {β1} × Iσ1k1

    (α1, β1) , (37)

    hence b0 is inside an interval J1 ⊂ J0 obtained by pulling back this interval Iσ1k1

    (α1, β1).In other words,

    J1 =(Γσ0,k0α0,β0

    )−1(Iσ1k1 (α1, β1)) (38)

    17

  • (see Definition 1). In this way, we construct a sequence of nested intervals J0 ⊃ J1 ⊃J2 ⊃ . . ., where

    Jn+1 =(Γσn,knαn,βn ◦ · · · ◦ Γ

    σ0,k0α0,β0

    )−n(I

    σn+1kn+1

    (αn+1, βn+1)) . (39)

    But by Inequation ( 33), |Jn| → 0, and the claim is proved.In particular, the intersection of {α0}×{β0}×(0, 1) with the invariant lamination

    is a totally disconnected set.

    4.2 Foliation of the whole parameter space

    Now we aim at proving that the lamination just described can be extended to afoliation in the whole cube (0, 1)3. Roughly speaking, the foliation will be defined bythe relative position of the fixed point inside the interval which is the domain of thereturn map.

    Let us first define the foliation outside the domain of the renormalization oper-ator. We start by the region comprised between the layers D−1 and D

    +1 . For these

    parameters there is an attracting periodic orbit of period 2. The position of therightmost point of this orbit is the solution of the equation

    b+ α(b− 1 + βx) = x . (40)

    Dividing the solution x by b leads to the relative position of this point inside [0, b]:

    x

    b=

    (1 + α) − αb−1

    1 − αβ. (41)

    We note that xb

    = 0 for b = bI−,1 andxb

    = 1 for b = bI+,1. Moreover,xb

    is strictly

    increasing for b ∈ [bI−,1, bI+,1], hence for fixed (α, β) and given 0 ≤ t ≤ 1 there is one

    and only one b in this interval such that xb

    = t, defining, thus, for each t, a graphbt(α, β).

    Now we do the same for the region comprised between D−k and D−k+1, for k ≥ 1.

    For these parameters there is an attracting periodic orbit of period k + 2, and theunique point x of this orbit in [0, b] satisfies

    x

    b= (1 − βαk+1)−1

    {k+1∑

    i=0

    αi − αk+1b−1

    }. (42)

    The same reasoning applies in this case (and also for the regions between D+k andD+k+1).

    Inside the domain of R one proceeds as in the construction of the invariant lam-ination, by pulling back the foliation above under a finite number of iterates of R.This leaves are analytic since R is analytic.

    18

  • 4.3 Bounded distortion

    We recall that Γσ,kα,β is always expanding (see Subsection 3.3). This fact will be usedto show that the composition of such functions has uniformly bounded distortion,where “uniformly” means independent of the composition. The bounded distortionwill be used in Subsection 4.4 and Section 5.

    Here one must be careful because in general expansion almost automatically im-plies bounded distortion. But this is so since in many situations the logarithm of thederivative is Lipschitz or Hölder continuous. In the present case, however, both arefalse.

    In the next Lemma, we let Γm = Γσm,kmαm,βm

    and Im = Iσmkm

    , m ≥ 1, and Γ =Γn ◦ · · · ◦ Γ0, and show that the distortion of Γ is uniformly bounded.

    Lemma 3. Let Γ = Γn ◦ · · · ◦Γ0, as above, in such a way that Γ maps Jn onto (0, 1).Then ∣∣∣∣∣log

    DΓ(b)

    DΓ(̃b)

    ∣∣∣∣∣ < 6 , (43)

    for any b, b̃ ∈ Jn.

    Proof. Let b0 = b, b̃0 = b̃, bm = Γm−1 ◦ · · · ◦ Γ0(b) and b̃m = Γm−1 ◦ · · · ◦ Γ0(̃b), form = 1, . . . , n+ 1. We have

    ∣∣∣∣∣logDΓ(b)

    DΓ(̃b)

    ∣∣∣∣∣ ≤n∑

    m=0

    ∣∣∣logDΓm(bm) − logDΓm(̃bm)∣∣∣

    ≤n∑

    m=0

    (maxIm

    |D logDΓm|

    )· |bm − b̃m|

    By the Mean Value Theorem there is, for each m, a point ξm ∈ (bm, b̃m) such that

    |bm−b̃m| =|bn+1 − b̃n+1|

    D(Γn ◦ · · · ◦ Γm)(ξm)≤

    (minIm+1

    D(Γn ◦ · · · ◦ Γm+1) · minIm

    DΓm

    )−1. (44)

    As remarked in Subsection 3.3, the derivative of each Γm is always greater than 4,thus

    minIm+1

    D(Γn ◦ · · · ◦ Γm+1) ≥ 4n−m . (45)

    Moreover if σm = − (the case “+” is analogous) then

    minIm

    DΓm = β−1m (b

    I−,km)

    −2 . (46)

    19

  • On the other hand,maxIm

    |D logDΓm(b)| = 2(bE−,km)

    −1 , (47)

    By comparing Equations (8) and (9), we see that bI−,km ≤ 2bE−,km

    , in such a way that

    maxIm |D logDΓm(b)|

    minIm DΓm≤ 8βbE−,k ≤ 4β ≤ 4 . (48)

    Therefore ∣∣∣∣∣logDΓ(b)

    DΓ(̃b)

    ∣∣∣∣∣ ≤n∑

    m=0

    4 ·1

    4n−m≤

    16

    3< 6 . (49)

    4.4 Zero Hausdorff dimension fiber intersections

    As a consequence of the previous development (Subsection 4.1) we know that for each(α0, β0), the fiber {α0}× {β0}× (0, 1) intersects the lamination of invariant leaves ina totally disconnected set. We will show that this set has zero Hausdorff dimension(in particular zero Lebesgue measure).

    Let K = K(α0, β0) be the intersection between the fiber and the lamination. Theset K can also be written as a nested intersection of sets K(0) ⊃ K(1) ⊃ K(2) ⊃ . . .,each K(n) being a countable union U (n) of pairwise disjoint intervals. The first set,K(0), is just the union of all intervals I±k (α0, β0). The sets K

    (m), m ≥ 1, are definedas in Subsection 4.1, by pulling back under the renormalization operator.

    The Hausdorff dimension of K is defined as follows. First one defines, for somecountable open cover U of K the number Hρ(U) =

    ∑U∈U |U |

    ρ, for ρ > 0. TheHausdorff ρ-measure of K is given by mρ(K) = limǫ→0

    (infdiam(U) HD(K) implies mρ(K) = 0.

    If we want to show that K has zero Hausdorff dimension we must prove thatmρ(K) = 0 for all ρ > 0. It suffices to show that there is a sequence U

    (n) of countableopen coverings of K such that limn→∞Hρ(U

    (n)) = 0 for each ρ > 0. As the namesuggests, U (n) will be the collection of components of K(n). The next Lemma assuresthat the above limit is indeed zero for this choice of a coverings’ sequence.

    Lemma 4. There is n0 = n0(α0, β0, ρ) such that Hρ(U(n+1)) ≤ 12Hρ(U

    (n)) for n ≥ n0.

    Proof. Let J ∈ U (n) and Γ : J → (0, 1) as in Subsection 4.3. Each J ′ ∈ U (n+1) is apre-image under Γ of some I = I±k (αn+1, βn+1). By the Mean Value Theorem, thereare ξ, ξ′ ∈ J such that

    |I|

    |(0, 1)|=DΓ(ξ′)|J ′|

    DΓ(ξ)|J |(50)

    20

  • hence, by Lemma 3, |J ′| ≤ c|J | · |I|, for some uniform constant c > 0 (Lemma 3assures c = e6, but the value does not matter). Therefore

    Hρ(U(n+1)) =

    J ′∈U(n+1)

    |J ′|ρ ≤ cρ∑

    J∈U(n)

    |J |ρ∑

    I=I±k

    (αn+1,βn+1)

    |I|ρ (51)

    The pair (αn+1, βn+1) depends on (α0, β0) (the position of the fiber) and J (whichimplictly gives the combonatorics). But Inequation (29) gives the uniform estimate(independent of J) max{αm, βm} ≤ (α0β0)

    m, for all m = 1, . . . , n + 1. This impliesthat given ǫ > 0 there is n0 = n0(α0, β0) such that n ≥ n0 implies that αn+1 andβn+1 are smaller than ǫ (independently of J).

    On the other hand, if I±k = I±k (α, β) then Inequation (26) implies |I

    −k | ≤ βα

    2k

    and |I+k | ≤ αβ2k. Therefore

    ∑{|I|ρ; I = I±k (α, β)} ≤

    ∞∑

    k=1

    (βα2k)ρ +∞∑

    k=1

    (αβ2k)ρ ≤βρα2ρ

    1 − α2ρ+

    αρβ2ρ

    1 − β2ρ. (52)

    Now there is ǫ > 0 such that if α and β are smaller than ǫ then this sum is smalleror equal than 12c

    −ρ. Therefore if n ≥ n0 then

    Hρ(Un+1) ≤

    1

    2

    J∈U(n)

    |J |ρ =1

    2Hρ(U

    (n)) . (53)

    5 Holonomies

    Let D∞ be the set of infinitely renormalizable parameters in the open cube, whichwe know to be formed by a collection of graphs of two-variable functions L(s)(α, β),where each s is the combinatorics that defines the future orbit under iterations ofthe renormalization operator R. Let Σ and Σ̃ be two (in principle topologically)transversal sections to the invariant lamination, with the additional condition thatthey intersect each leaf at one and only one point. The holonomy map from Σ∩D∞to Σ̃ ∩ D∞ is the function that takes (α, β, b) in (α̃, β̃, b̃), with the following rule:take the combinatorics s such that b = L(s)(α, β) and let (α̃, β̃, b̃) be the uniqueintersection point between Σ̃ and the graph of L(s).

    Transversality is not well defined at this point of the exposition, since we havenot proved that the functions L(α, β) are differentiable. But the essential aspectsof this section can be discussed with vertical sections, which will be the most natu-ral transversal sections, since the vertical direction is invariant (it is the expandingdirection of the hyperbolic structure that will be constructed in Section 6).

    21

  • Hence we will discuss Hölder continuity for vertical sections, and in fact it is notso difficulty, by using arguments of a general nature, to generalize these results togeneral transversal sections, assuming for grant the results of Section 7.

    5.1 Hölder continuity of holonomies

    The following Lemma says that holonomies between two given vertical fibers areHl̈der continuous.

    Proposition 6. Let (α0, β0), (α̃0, β̃0) ∈ (0, 1)2 be fixed. There are constants M and

    γ, depending on α0, β0, α̃0, and β̃0 such that the following is true. For any pair ofcombinatorics s1 and s2, if b1 = L(s1)(α0, β0), b2 = L(s2)(α0, β0), b̃1 = L(s1)(α̃0, β̃0),and b̃2 = L(s2)(α̃0, β̃0), then

    |̃b2 − b̃1| ≤ C|b2 − b1|γ . (54)

    Proof. Let n ≥ 0 be such that s1 and s2 coincide up to the n-th position and let Γ =Γn ◦· · ·◦Γ0 (resp. Γ̃ = Γ̃n ◦· · ·◦ Γ̃0) be the function that sends J ∈ U

    (n)(α0, β0) (resp.J̃ ∈ U (n)(α̃0, β̃0)) onto (0, 1), following the notation of Section 4. Let (αm, βm) (resp.(α̃m, β̃m)) be the projection of the m-th iteration of J (resp. J̃), m = 0, 1, . . . , n+ 1,and (σm, km) the common symbols of the trajectories of J and J̃ , m = 0, 1, . . . , n.

    There is still the case where s1 and s2 differ already in the first symbol, i.e. thepoints belong to different layers of the renormalization operator’s domain. In thiscase the reasoning is completely analogous and still much simpler. This will be calledthe “n = −1 case”.

    Let γ > 0 be such that α̃0 ≤ α2γ0 and β̃0 ≤ β

    2γ0 . We claim that the same relation

    is valid for the iterations: α̃m ≤ α2γm and β̃m ≤ β

    2γm . But this comes directly by

    induction from the inductive definition of αm, βm, α̃m, and β̃m.We want to analyse

    |̃b1 − b̃2|

    |b1 − b2|γ(55)

    and prove that this quotient is bounded by a constantM depending only on α0, β0, α̃0,and β̃0. Observe that |Γ(b1) − Γ(b2)| = |DΓ(ξ)| · |b1 − b2|, for some ξ ∈ (b1, b2), and|Γ̃(̃b1)− Γ̃(̃b2)| = |DΓ̃(ξ)| · |̃b1− b̃2|, for some ξ̃ ∈ (̃b1, b̃2). Then, using the Chain Rule,

    |̃b1 − b̃2|

    |b1 − b2|γ≤

    |Γ̃(̃b2) − Γ̃(̃b1)|

    |Γ(b2) − Γ(b1)|γ·(max |DΓn|)

    γ . . . (max |DΓ0|)γ

    (min |DΓ̃n|) . . . (min |DΓ̃0|)(56)

    (in the n = −1 case the second factor of the right hand side disappears and the firstfactor is exactly equal to the left hand side).

    22

  • The derivative of each Γm is the b–partial derivative of the third component ofthe renormalization operator, see Subsection 3.3. Thus, in the case that σm = − andkm = k,

    (max |DΓm|)γ

    min |DΓ̃m|=

    (β−1m (b

    E−,k)

    −2)γ

    β̃−1m (bI−,k)−2

    , (57)

    (where in this equation bE−,k = bE−,k(αm, βm) and b

    I−,k = b

    I−,k(α̃m, β̃m)), and this, in

    turn, is bounded by

    β̃m

    βγm

    (α̃m

    αγm

    )2k ( 11 − αm

    + βmαm

    )2γ, (58)

    using Formulas (8) and (9). If we call

    M1 = M1(α0, β0) =((1 − α0)

    −1 + (1 − β0)−1 + α0β0

    )2(59)

    and pay attention to the choice of γ then we obtain the bound Mγ1 (βmα2m)

    γ , whichis still bounded by

    Mγ1 (α0β0)

    mγ , (60)

    by the remarks of Subsection 3.4. The fact that M1 works also comes from theproperty, discussed in that Subsection, that αm and βm are smaller than α0 and β0.Moreover, the choice of M1 is symmetric in α and β, in such a way that for σm = +we obtain exactly the same bounds.

    Now there is m0 ≥ 1 such that if m ≥ m0 then Mγ1 (α0β0)

    mγ ≤ 1, and m0 dependsonly on α0, β0 and γ. This implies that the second factor of the right hand side ofInequation (56) is bounded by a constant that depends only on α0, β0, α̃0, and β̃0.

    The first factor of the right hand side of Inequation (56) has to be analysedaccordingly to the three following possibilities: (i) Γ(b1) <

    12 and Γ(b2) <

    12 ; (ii)

    Γ(b1) >12 and Γ(b2) >

    12 ; (iii)

    (Γ(b1) −

    12

    )·(Γ(b2) −

    12

    )< 0. The position with

    respect to 12 of Γ̃(̃bi) follows the position of Γ(bi), i = 1, 2. In the first case, as Γ(b1)

    and Γ̃(̃b1) are in a layer that is different from the layer to which Γ(b2) and Γ̃(̃b2)belong, Lemma 1 implies that there is l ≥ 1 such that

    |Γ̃(̃b2) − Γ̃(̃b2)|

    |Γ(b2) − Γ(b1)|γ≤

    C(α̃m+1, β̃m+1) · α̃lm+1

    C(αm+1, βm+1)−γ · αlγm+1

    (61)

    By the choice of γ, α̃lm+1 ≤ α2lγm+1. Therefore, using the monotonicity property of

    C(α, β) stated in Lemma 1,

    |Γ̃(̃b2) − Γ̃(̃b2)|

    |Γ(b2) − Γ(b1)|γ≤ C(α̃0, β̃0) · C(α0, β0)

    γ . (62)

    In the second and third cases this bound works as well.

    23

  • In fact, the following is a corollary of the proof above.

    Corollary 1. For any set S ⊂ (0, 1)2 such that the closure of S is contained in (0, 1)2

    there are constants γ = γ(S) > 0 and M = M(S) > 0 such that the same statementof Lemma 6 holds for any pair of base points (α0, β0) and (α̃0, β̃0).

    Observe that there is no chance of obtaining uniform Hölder continuity amongholonomies with base points in (0, 1)2. Even uniform continuity is not allowed. Forexample, if (α0, β0) is very near (1, 1) and (α̃0, β̃0) is very near (0, 0), take b1 ∈ I

    −1 and

    b2 ∈ I+1 . Given ǫ > 0, there is a choice of (α0, β0) and b1, b2 such that |b1 − b2| < ǫ.

    But if (α̃0, β̃0) is sufficiently near (0, 0), then |̃b1 − b̃2| > 1 − ǫ.In Section 7 we will show that each leaf of the lamination of infinitely renormal-

    izable parameters is of class C1, and also that leaves vary continuously in the C1

    topology. The former assertion allows considering transversal sections to the lamina-tion, from the differentiable point of view, while the latter says that near a transversalsection every crossing is also transversal.

    As we will see in Section 6, the vertical fibers are always transversal to the lamina-tion, since the unstable cone field may be constructed in order to contain the verticaldirection, so they constitute a particular case of transversal sections.

    With this is in mind, assuming for grant the results of Section 7, it is an exer-cise to show that the Hölder continuity of holonomies may be extended to any pairof transversal sections. However, the statement must be done only locally, in theneighbourhood of the crossing.

    In the same spirit, the result of Subsection 4.4 can be generalized to generaltransversal sections.

    5.2 Not more than Hölder

    The results of the previous Subsection may be proved to be the best possible for thislamination. We will show that there are plenty of fiber pairs where there is 0 < γ < 1such that the holonomy or its inverse is not (γ,C)-Hölder, for any constant C > 0.

    For that purpose, we need lower bounds, in opposition to the previous Subsection.For example,

    |̃b1 − b̃2|

    |b1 − b2|γ≥

    |Γ̃(̃b2) − Γ̃(̃b1)|

    |Γ(b2) − Γ(b1)|γ·(min |DΓn|)

    γ . . . (min |DΓ0|)γ

    (max |DΓ̃n|) . . . (max |DΓ̃0|)(63)

    is the analogous of Inequation (56). We have

    (min |DΓm|)γ

    max |DΓ̃m|≥β̃m

    βγm

    (α̃m

    αγm

    )2km·

    1(

    11−α̃m

    + α̃mβ̃m

    )2 . (64)

    24

  • using Formulas (8) and (9). With the same constant M1 there defined, we get

    (min |DΓm|)γ

    max |DΓ̃m|≥M−11

    β̃m

    βγm

    (α̃m

    αγm

    )2km. (65)

    Once more, using Lemma 1,

    |Γ̃(̃b2) − Γ̃(̃b1)|

    |Γ(b2) − Γ(b1)|γ≥ C−1 · P (66)

    where C−1 = C(α̃n+1, β̃n+1)−1C(αn+1, βn+1)

    −γ and P is equal to 1, to α̃ln+1α−γln+1 for

    some l ≥ 1, or to β̃ln+1β−γln+1, for some l ≥ 1. By putting everything together, taking

    into account the remarks of Subsection 3.4, we conclude that there are powers l andj such that

    |̃b1 − b̃2|

    |b1 − b2|γ≥ C−1M

    −(n+1)1

    (α̃0

    αγ0

    )l(β̃0

    βγ0

    )j. (67)

    Also by Subsection 3.4 the powers l and j must be greater or equal than 2n−1,

    therefore a uniform upper bound for |̃b1−b̃2||b1−b2|γ cannot be found if, for example, α̃0 > αγ0

    and β̃0 > βγ0 . These conditions can be achieved whenever α̃0 > α0, β̃0 > β0 and γ is

    sufficiently near 1.

    6 Invariant cones and hyperbolic structure

    Now we show that the renormalization operator has an invariant cone family withexpansion in the unstable cones, and for a certain restricted region (which is in thefuture of every orbit) there is in fact a hyperbolic structure, also with contraction inthe stable cones.

    For each ω = (α, β, b) belonging to the domain of the renormalization operatorwe associate a cone

    Cc(ω) = {(v1, v2, v3); |v3| ≥ cmax{|v1|, |v2|}} , (68)

    where c is a constant to be chosen. In fact the cones are equal everywhere, but wekeep this notation to remember that Cc(ω) is in the tangent space of ω.

    We will show that, for a suitable value of c, DR(ω) ·Cc(ω) is strictly contained inCc(Rω), meaning that if |v3| ≥ cmax{|v1|, |v2|} and (v

    ′1, v

    ′2, v

    ′3) = DR(ω) · (v1, v2, v3)

    then |v′3| > cmax{|v′1|, |v

    ′2|}.

    Lemma 5. For any ω in the domain of R, if v belongs to Cc(ω) with c ≥ 6 thenv′ = DR(ω) · v belongs to the interior of Cc(Rω).

    25

  • Proof. Firstly we observe that

    max{|v′1|, |v′2|} ≤ (k + 2)max{|v1|, |v2|} (69)

    (this estimate could not be much better without using α and β). Then

    |v′3| ≥ β−1b−2|v3| − β

    −1

    ∣∣∣∣∂

    ∂α(bI−,k)

    −1

    ∣∣∣∣ |v1| − β−2∣∣b−1 − (bI−,k)−1

    ∣∣ · |v2| , (70)

    that is to say,

    |v′3| ≥

    {β−1b−2c− β−1

    ∣∣∣∣∂

    ∂α(bI−,k)

    −1

    ∣∣∣∣− β−2∣∣b−1 − (bI−,k)−1

    ∣∣}·max{|v′1|, |v

    ′2|}

    k + 2. (71)

    As b ∈ I−k we have

    ∣∣b−1 − (bI−,k)−1∣∣ ≤

    ∣∣(bE−,k)−1 − (bI−,k)−1∣∣ = β , (72)

    using Equations (8) and (9). Now

    |v′3| ≥ β−1

    {b−2c−

    ∣∣∣∣∂

    ∂α(bI−,k)

    −1

    ∣∣∣∣− 1}·

    1

    k + 2max{|v′1|, |v

    ′2|} , (73)

    but the factor β−1 may be ignored, since β < 1. Lemma 2 says that b−2 ≥∣∣∣ ∂∂α(bI−,k)−1∣∣∣, therefore it is enough to find c such that b−2(c − 1) − 1 > c(k + 2),

    for all k ≥ 1. Now b ∈ I−k implies b ≤1

    k+1 , so that it suffices to find c such that

    (k+ 1)2(c− 1)− 1 > c(k+ 2), for all k ≥ 1. As any c ≥ 6 works well, this proves theLemma.

    Now we examine how the derivative of the renormalization operator acts on vec-tors of the cones.

    Lemma 6. For any ω in the domain of R and v ∈ Cc(ω), with c ≥ 6,

    ‖v′‖ > 3‖v‖ . (74)

    Proof. For c ≥ 6, if v ∈ Cc(ω) then ‖v‖ = |v3| and as v′ ∈ Cc(Rω), by Lemma 5,

    then ‖v′‖ = |v′3|. By Inequation (70) and Lemma 2,

    |v′3| ≥β−1

    c

    (b−2(c− 1) − 1

    )· |v3| , (75)

    using that |v1| and |v2| are smaller than |v3|. In particular, |v′3| > 3|v3|, since b ≤

    12 .

    26

  • If we restrict our attention only to the intersection of the domain of R with theregion

    Q ≡ {(α, β, b);α, β <1

    10} (76)

    a smaller value of c is allowed. This region is clearly forward invariant (see for exampleSubsection 3.4). In this region, we are able to get something better than (69).

    Lemma 7. If ω belongs to Q and to the domain of R, and if v belongs to Cc(ω) withc ≥ 1.1, then v′ = DR(ω) · v belongs to the interior of Cc(Rω).

    Proof. Now we have

    max{|v′1|, |v′2|} ≤ (βkα

    k−1 + αk)max{|v1|, |v2|} , (77)

    by already considering α, β < 110 , hence

    max{|v′1|, |v′2|} ≤

    1

    5max{|v1|, |v2|} , (78)

    for every k ≥ 1. As in the previous development, we arrive at Inequality (73), butwith 15 in the place of k + 2. Then, using Lemma 2, we have to find c such that

    5β−1{(c− 1)b−2 − 1} > c . (79)

    In this region b is at most equal to bI−,1 =α

    1+α , which is smaller than110 , thus

    b−2 > 100. It is sufficient to take c such that 4999c > 5050, for example c ≥ 1.1 isenough.

    The next Lemma assures that inverse branches expand vectors in the comple-ment of the unstable cone, in the restricted region Q, characterizing the hyperbolicstructure.

    Lemma 8. If ω ∈ Q and v′ = DR(ω) · v belongs to the complement of Cc(Rω), withc = 1.1, then

    ‖v′‖ ≤2

    5‖v‖ . (80)

    Proof. As v′ belongs to the complement of Cc(Rω), c = 1.1, then

    ‖v′‖ < 1.1 max{|v′1|, |v′2|} <

    2

    5max{|v1|, |v2|} ≤

    2

    5‖v‖ , (81)

    by Inequality (78).

    27

  • 7 Smoothness of leaves

    7.1 Preliminaires

    In Section 4 we proved that each leaf of the lamination is the graph of a continuousfunction Ls = Ls(α, β), which depends on the combinatorics s. Now we intend toshow that this function is Cr, for any r ≥ 1. As a consequence of the proof, wewill show that such leaves vary continuously in the Cr topology, for any r ≥ 1, incompact domains of (0, 1)2.

    First, L may be approximated by a sequence {φn} of analytic functions with adynamical meaning. These functions come from the construction done in Section 4.For the sake of clarity, we let G0 be the set {φ : (0, 1)

    2 → [0, 1];φ ≡ 0 or φ ≡ 1}. Byinduction, define Gn+1 as the set of analytic functions which come from pre-imagesof Gn under the renormalization operator. More precisely, φ is an element of Gn+1if there is φ̃ ∈ Gn such that R(α, β, φ(α, β)) belongs to the graph of φ̃ for every(α, β) ∈ (0, 1)2 (considering the domain of R extended to the boundary of eachcomponent). With this definition, G1 is the set of functions whose graphs form theboundaries of the components of the renormalization operator domain and Gn is theset of functions whose graphs constitute the boundaries of the components of thedomain of Rn.

    We also introduce the notion of subordination of two such functions.

    Definition 2. A function φ ∈ Gm is subordinated to another function ψ ∈ Gn(denoted by φ n and the graph of φ is contained in the component ofthe domain of Rn that is adherent to the graph of ψ.

    Definition 3. We say that {φn : (0, 1)2 → [0, 1]}n≥0 is a dynamically approximating

    sequence to Ls : (0, 1)2 → (0, 1) if (i) φn ∈ Gn}, ∀n ≥ 0, (ii) φn+1

  • the maximum of the absolute values of all partial derivatives of order r evaluated at(α, β). The precise result is the following.

    Lemma 9. Let r ≥ 1 and 0 < α0, β0 < 1. Given ǫ > 0 there is n0 = n0(ǫ, α0, β0, r)such that for every combinatorics s, every dynamically approximating sequence {φn}nto Ls, every n,m ≥ n0, and every (α, β) ∈ (0, α0] × (0, β0], one has ‖Drφn(α, β) −Drφm(α, β)‖ < ǫ.

    As a Corollary of Lemma 9, we have the continuous dependence on the Cr topol-ogy, for any r ≥ 1.

    Corollary 2. Let r ≥ 1 and 0 < α0, β0 < 1. Given δ > 0 there is n1 = n1(δ, α0, β0, r)such that if s and s̃ are two combinatorics coinciding up until the entry n1, then‖DrLs(α, β) −DrLs̃(α, β)‖ < δ.

Proof. Let n_1 = n_0(δ/3, α_0, β_0, r) as in Lemma 9 and let s and s̃ be two combinatorics coinciding up until the n_1-th entry. Let {φ_n}_n and {φ̃_n}_n be dynamically approximating sequences to L_s and L_{s̃}, respectively, such that φ_n = φ̃_n for every n ≤ n_1 (which is possible by the hypothesis on s and s̃). By Lemma 9, ‖D^rL_s − D^rφ_{n_1}‖ ≤ δ/3 and ‖D^rL_{s̃} − D^rφ̃_{n_1}‖ ≤ δ/3. But as φ_{n_1} = φ̃_{n_1} then ‖D^rL_s − D^rL_{s̃}‖ ≤ 2δ/3 < δ.

    The rest of this Section will be devoted to the proof of Lemma 9.

    7.2 Notation for partial derivatives

We adopt the following notation for partial derivatives: w^j indicates a word of α's and β's, of size j, and a subscript w^j indicates the respective partial derivative of order j. More precisely, writing w^j = w_1 w_2 … w_j, if A is an expression depending on α and β then

A_{w^j} = A_{w_1…w_j} = (… (((A_{w_1})_{w_2})_{w_3} …)_{w_j} . (82)

Let W_j be the set of all words of size j, j ≥ 0. The set W_0 has, by convention, one element called w^0, which means no differentiation. The cardinality of W_j is 2^j, for all j ≥ 0.

We may write w^j = w w^{j−1}, meaning that w = w_1 for some w_1 and w^{j−1} = w_2 … w_j. This notation may seem confusing at first sight, but it is economical when there is no explicit need of knowing which derivatives enter in the formation of w^j.

Let w^j = w_1 w_2 … w_j ∈ W_j. We say that w^i, 1 ≤ i ≤ j, is extracted from w^j, denoting w^i ⊂ w^j, if w^i = w_{m_1} … w_{m_i}, where 1 ≤ m_1 < m_2 < … < m_i ≤ j. We also say that w^{ρ_1}, …, w^{ρ_l} disjointly decompose w^j, writing w^{ρ_1} ∪̇ … ∪̇ w^{ρ_l} = w^j, if (i) ρ_1 + … + ρ_l = j, (ii) w^{ρ_i} = w_{k_{i,1}} … w_{k_{i,m_i}}, and (iii) ∪{k_{i,r}; 1 ≤ i ≤ l, 1 ≤ r ≤ m_i} = {1, 2, …, j}.


For instance, let us write down the product rule for partial derivatives of any order. If A and B are two expressions both depending on α and β, and w^j = w_1 … w_j, then we write

(AB)_{w^j} = Σ_{i=0}^{j} Σ_{w^i ∪̇ w^{j−i} = w^j} A_{w^i} B_{w^{j−i}} . (83)

For example, if j = 3 and i = 2 then the sum among w^i ∪̇ w^{j−i} = w^j will be

A_{w_1w_2} B_{w_3} + A_{w_1w_3} B_{w_2} + A_{w_2w_3} B_{w_1} . (84)

The number of terms in this sum obeys the rule of the binomial, being given by (j choose i) = j!/(i!(j−i)!). Note that the right side of Equation (83) contains the term A_{w^j} B (and also A B_{w^j}). Then we may write, for later use,

(AB)_{w^j} − A_{w^j} B = Σ_{i=0}^{j−1} Σ_{w^i ∪̇ w^{j−i} = w^j} A_{w^i} B_{w^{j−i}} . (85)
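
Equation (83) is simply the iterated Leibniz rule written word by word. As an illustration only, the following sympy snippet checks it for the word w^3 = ααβ; the particular expressions chosen for A and B are arbitrary assumptions.

# Sanity check of the product rule (83) for the word w^3 = alpha alpha beta.
# The expressions A and B below are arbitrary smooth choices (assumptions).
import sympy as sp
from itertools import combinations

al, be = sp.symbols('alpha beta', positive=True)
A = sp.exp(al) * sp.cos(be)
B = sp.sin(al) + al * be**2

def deriv(expr, word):
    return sp.diff(expr, *word) if word else expr

word = (al, al, be)                      # w_1 w_2 w_3
lhs = deriv(A * B, word)                 # (AB)_{w^3}
rhs = 0
for i in range(len(word) + 1):           # distribute the letters of w^3 between A and B
    for idx in combinations(range(len(word)), i):
        wA = tuple(word[t] for t in idx)
        wB = tuple(word[t] for t in range(len(word)) if t not in idx)
        rhs += deriv(A, wA) * deriv(B, wB)
assert sp.simplify(lhs - rhs) == 0       # the 2^3 = 8 terms reproduce (AB)_{w^3}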

    7.3 Induction

Let #α(w^j) (resp. #β(w^j)) be the number of α's (resp. β's) in the word w^j. Let α_0, β_0 ∈ (0, 1). We are going to prove that, for any r ≥ 1, there is a function p_r(k) = p_{r,α_0,β_0}(k), positive for k ≥ 1, which is the maximum of a finite number of polynomial functions in k (for the sake of simplicity, we call it a function of polynomial growth), such that if s is an arbitrary combinatorics with k = k_0, {φ_n}_{n≥0} is a dynamically approximating sequence to L_s, w^r ∈ W_r, and n ≥ 0, then

|φ_{n+1,w^r}(α, β) − φ_{n,w^r}(α, β)| ≤ (βα^k)^n p_r(k) α^{k−r_α} β^{−r_β} , (86)

if σ_0 = −, and if σ_0 = + then

|φ_{n+1,w^r}(α, β) − φ_{n,w^r}(α, β)| ≤ (αβ^k)^n p_r(k) α^{−r_α} β^{k−r_β} , (87)

for any (α, β) ∈ (0, α_0] × (0, β_0], where r_α = #α(w^r) and r_β = #β(w^r). We call this assertion A[r, w^r, k, n].

This immediately implies Lemma 9, as desired. The proof is done by induction on r and n, in this order. We assume that A[j, w^j, k, m] is true for all j < r, w^j ∈ W_j, k ≥ 1 and m ≥ 0, with the functions p_1(k), …, p_{r−1}(k). Then, by induction on n, we prove A[r, w^r, k, n] with p_{r,n}(k) in the place of p_r(k), where p_{r,n}(k) is also a function of polynomial growth, positive for k ≥ 1. Finally, we show that there is a function of polynomial growth p_r(k) such that p_{r,n}(k) ≤ p_r(k) for all n ≥ 0.


We observe that it will not be necessary to separate the cases r = 1 and r > 1. But for the induction on n we must first treat the case n = 0 and then the case n ≥ 1.

We will inductively define the constant C_r > 0 such that

(α_0β_0)^{k−1} p_j(k) ≤ C_r (88)

for all k ≥ 1 and j ≤ r.

    7.4 Implicit equation

Our aim is to obtain a formula for φ_{n+1,w^r} − φ_{n,w^r}. The first symbol s_0 = (σ_0, k_0) of the combinatorics s determines the layer of the domain of R in which (α, β, φ_n(α, β)) is contained, for all n ≥ 1. We denote k = k_0 for simplicity. Remark that, once the first symbol s_0 is fixed and n ≥ 1, the point (α, β, φ_n(α, β)) is sent to the point (α̃, β̃, φ̃_{n−1}(α̃, β̃)), where α̃ = βα^{k+1} (resp. α̃ = αβ^k) and β̃ = βα^k (resp. β̃ = αβ^{k+1}), for σ_0 = − (resp. σ_0 = +). Note that the symbol s_1 = (σ_1, k_1) = (σ̃, k̃) may be anything, whatever the assumption on s_0. If n ≥ 2 then φ̃_{n−1}(α̃, β̃) ∈ I^{σ̃}_{k̃}(α̃, β̃).

From now on we adopt the notation

R(α, β, b) = (R_1(α, β, b), R_2(α, β, b), R_3(α, β, b)) . (89)

This means that

α̃ = R_1(α, β, φ(α, β)) (90)
β̃ = R_2(α, β, φ(α, β)) , (91)

but as the right hand side functions are independent of the third coordinate we may write

(α̃, β̃) = (R_1, R_2) = (R_1(α, β), R_2(α, β)) . (92)

Therefore the fundamental equation to begin with is the relation

φ̃_{n−1}(R_1(α, β), R_2(α, β)) = R_3(α, β, φ_n(α, β)) . (93)

In order to prove Inequalities (86) and (87) we need to estimate |φ_{n+1,w^r} − φ_{n,w^r}|, for every w^r ∈ W_r. The starting point will be Equation (93).

We denote φ = φ_{n+1} and φ̃ = φ̃_n, ψ = φ_n and ψ̃ = φ̃_{n−1}, and recall Equation (93):

φ̃(R_1(α, β), R_2(α, β)) = R_3(α, β, φ(α, β)) (94)
ψ̃(R_1(α, β), R_2(α, β)) = R_3(α, β, ψ(α, β)) (95)


For the sake of convention let the left sides be written as

(φ̃)(α, β) = φ̃(R_1(α, β), R_2(α, β)) (96)
(ψ̃)(α, β) = ψ̃(R_1(α, β), R_2(α, β)) . (97)

By differentiating both sides of Equation (94) with respect to α and β we obtain

(φ̃)_α = β^{−1}φ^{−2}φ_α − β^{−1}(b_{I^−,k})^{−2}(b_{I^−,k})_α (98)

and

(φ̃)_β = β^{−1}φ^{−2}φ_β + β^{−2}φ^{−1} − β^{−2}(b_{I^−,k})^{−1} . (99)

These equations can be rewritten as

β(φ̃)_w = φ^{−2}φ_w + θ(w)β^{−1}φ^{−1} + Ξ , (100)

where Ξ = −(b_{I^−,k})^{−2}(b_{I^−,k})_α and θ(w) = 0 if w = α, or Ξ = −β^{−1}(b_{I^−,k})^{−1} and θ(w) = 1 if w = β. As φ^{−2}φ_w = −(φ^{−1})_w, then

β(φ̃)_w − β(ψ̃)_w = (ψ^{−1} − φ^{−1})_w + θ(w)(β^{−1}φ^{−1} − β^{−1}ψ^{−1}) . (101)

The difference φ_{w^r} − ψ_{w^r} will appear after writing w^r = w w^{r−1}, differentiating both sides of Equation (101) with respect to w^{r−1}, and isolating the desired term. The differentiation gives

(β(φ̃)_w − β(ψ̃)_w)_{w^{r−1}} = (ψ^{−1} − φ^{−1})_{w^r} + θ(w^r)((β^{−1}φ^{−1})_{w^{r−1}} − (β^{−1}ψ^{−1})_{w^{r−1}}) , (102)

where θ(w^r) = 0 if w^r = αw_2…w_r and θ(w^r) = 1 if w^r = βw_2…w_r.

Understanding (φ^{−1})_{w^j} is a crucial step. We do not give a formula, but rather describe what it looks like. By induction, it may be easily proved that

(φ^{−1})_{w^j} = Σ_{l=1}^{j} φ^{−(l+1)} Σ_{w^{ρ_1} ∪̇ … ∪̇ w^{ρ_l} = w^j} N(w^j; w^{ρ_1}, …, w^{ρ_l}) φ_{w^{ρ_1}} … φ_{w^{ρ_l}} , (103)

where N(w^j; w^{ρ_1}, …, w^{ρ_l}) is an integer and the indices ρ_1, …, ρ_l are all strictly positive. When l = 1 the decomposition of w^j is unique and N(w^j; w^j) = −1, i.e. the corresponding term in the sum is

−φ^{−2}φ_{w^j} . (104)

In the subtraction (ψ^{−1} − φ^{−1})_{w^r} this term (for j = r) gives

φ^{−2}φ_{w^r} − ψ^{−2}ψ_{w^r} = φ^{−2}(φ_{w^r} − ψ_{w^r}) + ψ_{w^r}(φ^{−2} − ψ^{−2}) . (105)
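
As an illustration only, the structure claimed in (103) can be checked symbolically in a small case; in the sympy snippet below φ is an arbitrary function and the word is w^2 = αβ, both of which are assumptions made just for this check.

# Spot check of the structure (103) for j = 2 and w^2 = alpha beta,
# with phi an arbitrary symbolic function of alpha and beta.
import sympy as sp

al, be = sp.symbols('alpha beta', positive=True)
phi = sp.Function('phi')(al, be)

lhs = sp.diff(1 / phi, al, be)
# Expected from (103): integer combinations of phi^{-(l+1)} times products of
# derivatives of phi; the l = 1 term is -phi^{-2} phi_{alpha beta}.
expected = 2 * phi**-3 * sp.diff(phi, al) * sp.diff(phi, be) - phi**-2 * sp.diff(phi, al, be)
assert sp.simplify(lhs - expected) == 0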


Isolating φ_{w^r} − ψ_{w^r} we obtain

φ_{w^r} − ψ_{w^r} = T_1 + T_2 + T_3 + T_4 , (106)

where

T_1 = φ^2 (β(φ̃)_w − β(ψ̃)_w)_{w^{r−1}}

T_2 = φ^2 ψ_{w^r} (ψ^{−2} − φ^{−2})

T_3 = φ^2 Σ_{l=2}^{r} Σ_{w^{ρ_1} ∪̇ … ∪̇ w^{ρ_l} = w^r} N(w^r; w^{ρ_1}, …, w^{ρ_l}) (ψ^{−(l+1)} ψ_{w^{ρ_1}} … ψ_{w^{ρ_l}} − φ^{−(l+1)} φ_{w^{ρ_1}} … φ_{w^{ρ_l}})

T_4 = θ(w^r) φ^2 {(β^{−1}ψ^{−1})_{w^{r−1}} − (β^{−1}φ^{−1})_{w^{r−1}}}

The term T_3 is zero when r = 1.

    7.5 Proof of the induction

The next step is to give an estimate of |φ_{w^r} − ψ_{w^r}|, based on Equation (106), using the inductive hypotheses described in Subsection 7.3.

The proof will be done in the case σ_0 = −, the other situation being analogous. However, the induction hypotheses must be taken in the complete form, that is, with Inequalities (86) and (87), since s_1 = (σ_1, k_1) = (σ̃, k̃) may be anything, as previously remarked.

The case n = 0. In this case ψ = φ_0 is identically equal to 0 or 1, hence ψ_{w^r} = 0, for all w^r ∈ W_r, r ≥ 1. Moreover φ = φ_1 ∈ G_1 and

|φ_{w^r} − ψ_{w^r}| = |φ_{w^r}| . (107)

The hypothesis φ(α, β) ∈ I^−_k(α, β) implies φ = b_{I^−,k} or φ = b_{E^−,k}, that is,

φ(α, β) = α^k g_k(α, β) , (108)

where g_k(α, β) = (1 + α + … + α^k)^{−1} or g_k(α, β) = (1 + α + … + α^k + βα^k)^{−1}, according to Equations (8) and (9).

The series Σ_{i=0}^{∞} α^i has convergence radius equal to 1 and converges to (1 − α)^{−1}. It is a known fact from the elementary theory of power series that the derivative of a power series is the series of the derivatives and has the same convergence radius. This implies, in particular, that ∂^ρ/∂α^ρ Σ_{i=0}^{k} α^i converges, uniformly in [0, α_0], to ∂^ρ/∂α^ρ (1 − α)^{−1}, for all ρ ≥ 1. Also βα^k converges uniformly to 0 in the C^r topology, for any r ≥ 1, therefore in either case g_k(α, β)^{−1} converges uniformly to (1 − α)^{−1} in the C^r topology. This implies that g_k(α, β) converges uniformly to (1 − α) in the C^r topology.

From this, we conclude that there is a constant M_r = M_{r,α_0,β_0} such that

|g_{k,w^j}(α, β)| ≤ M_r , (109)

for all w^j ∈ W_j, 0 ≤ j ≤ r, k ≥ 1, and (α, β) ∈ [0, α_0] × [0, β_0].

Now we evaluate the derivatives of φ, using the product rule given by Equation (83). We have

|φ_{w^r}| ≤ M_r Σ_{i=0}^{r} Σ_{w^i ⊂ w^r} (α^k)_{w^i} . (110)

If w^i has a β then (α^k)_{w^i} = 0 (this surely happens if i > r_α). The same conclusion is true if i > k.

We examine the terms where w^i is only made of α's and i ≤ k, dividing into two cases. First, if k ≤ r_α then we see

(α^k)_{w^i} ≤ k(k − 1) … (k − i + 1) α^{k−i} ≤ k! α^{k−r_α} . (111)

On the other hand, if k > r_α then

(α^k)_{w^i} ≤ k(k − 1) … (k − i + 1) α^{k−i} ≤ k(k − 1) … (k − r_α + 1) α^{k−r_α} . (112)

Then (α^k)_{w^i} ≤ q_{r_α}(k) α^{k−r_α}, where

q_j(k) = k! if k ≤ j , and q_j(k) = k(k − 1) … (k − j + 1) if k > j . (113)

We observe that r_α ≤ r implies q_{r_α}(k) ≤ q_r(k) and that there is a total of 2^r terms in the sum (the sum of the coefficients of the binomial), in such a way that

|φ_{w^r}| ≤ 2^r M_r q_r(k) α^{k−r_α} . (114)

We call

p_{r,0}(k) ≡ 2^r M_r q_r(k) . (115)
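
As an aside, the bound (114) can be spot-checked numerically; in the script below the values k = 5, r = 2, the test point, and the way M_r is simply measured from g_k rather than derived are all assumptions made only for illustration.

# Numerical spot check of (114) for k = 5, r = 2, g_k = (1 + ... + alpha^k)^{-1}.
import sympy as sp
from itertools import product

al, be = sp.symbols('alpha beta', positive=True)
k, r = 5, 2
gk = 1 / sum(al**i for i in range(k + 1))
phi = al**k * gk
pt = {al: sp.Rational(1, 3), be: sp.Rational(1, 4)}    # one test point in (0, alpha_0] x (0, beta_0]

def deriv(expr, word):
    return sp.diff(expr, *word) if word else expr

q_r = sp.factorial(k) if k <= r else sp.ff(k, r)       # q_r(k) as in (113)
# crude stand-in for M_r: the largest derivative of g_k, up to order r, at the test point
Mr = max(abs(deriv(gk, w).subs(pt)) for j in range(r + 1) for w in product((al, be), repeat=j))
for word in product((al, be), repeat=r):               # all words w^r of size r
    r_alpha = word.count(al)
    assert abs(deriv(phi, word).subs(pt)) <= 2**r * Mr * q_r * pt[al]**(k - r_alpha)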

Inductive step: common estimates. If φ(α, β) ∈ I^−_k(α, β) then

φ ≤ α^k , (116)

which directly comes from Equation (8). If in addition ψ(α, β) ∈ I^−_k(α, β) then, by Equations (8) and (9),

b_{I^−,k}/φ , b_{I^−,k}/ψ , φ/ψ , ψ/φ ≤ b_{I^−,k}/b_{E^−,k} ≤ 1 + βα^k ≤ 2 . (117)

On the other hand, φ(α, β), ψ(α, β) ∈ I^−_k(α, β) implies φ, ψ ≥ b_{E^−,k}, hence

φ^{−1}, ψ^{−1} ≤ α^{−k}(1 + α + … + α^k + βα^k) ≤ (k + 2)α^{−k} (118)

(taking into account that α ≤ α_0 and β ≤ β_0, we could obtain φ^{−1}, ψ^{−1} ≤ Cα^{−k}, where C = C(α_0, β_0) = β_0α_0 + (1 − α_0)^{−1}, but this would not make any difference in our arguments). If moreover φ

where Σ_{θ_1} denotes Σ_{θ_1=1}^{2} and Σ_{θ_1,θ_2} denotes Σ_{θ_1=1}^{2} Σ_{θ_2=1}^{2}. It may be easily proved by induction that (φ̃)_{w_1…w_r} is given by

Σ_{l=1}^{r} Σ_{θ_1,…,θ_l} φ̃_{θ_1…θ_l} ( Σ_{w^{ρ_1} ∪̇ … ∪̇ w^{ρ_l} = w^r} Ñ(w^r; w^{ρ_1}, …, w^{ρ_l}) R_{θ_1,w^{ρ_1}} … R_{θ_l,w^{ρ_l}} ) , (125)

where each Ñ(w^r; w^{ρ_1}, …, w^{ρ_l}) is an integer and each ρ_i is strictly positive. The term with l = r can be more explicitly written as

Σ_{θ_1,…,θ_r} φ̃_{θ_1…θ_r} R_{θ_1,w_1} R_{θ_2,w_2} … R_{θ_r,w_r} . (126)

As each term R_{θ_1,w^{ρ_1}} … R_{θ_l,w^{ρ_l}} depends only on α and β, it follows that

(φ̃)_{w^r} − (ψ̃)_{w^r} = Σ_{l=1}^{r} Σ_{θ_1,…,θ_l} Σ_{w^{ρ_1} ∪̇ … ∪̇ w^{ρ_l} = w^r} Ñ(w^r; w^{ρ_1}, …, w^{ρ_l}) (φ̃ − ψ̃)_{θ_1…θ_l} R_{θ_1,w^{ρ_1}} … R_{θ_l,w^{ρ_l}} , (127)

which leads us to estimating the terms R_{θ_1,w^{ρ_1}} … R_{θ_l,w^{ρ_l}}, in conjunction with (φ̃ − ψ̃)_{θ_1…θ_l}, bounded by the induction hypotheses. Recalling the notation r_α = #α(w^r), r_β = #β(w^r), we add the (local) notation θ = θ_1 … θ_l and #α(θ) (resp. #β(θ)) for the number of α's (resp. β's) in the word θ.

In our notation, R_1(α, β) = α̃ = βα^{k+1} and R_2(α, β) = β̃ = βα^k. We have R_{1,w^{ρ_i}} = 0 and R_{2,w^{ρ_i}} = 0 if w^{ρ_i} has at least two β's. If each w^{ρ_i} has at most one β, then exactly r_β of the w^{ρ_i}'s have a β and l − r_β do not. Then R_{θ_1,w^{ρ_1}} … R_{θ_l,w^{ρ_l}} has β^{l−r_β} in its expression. The power of α, in turn, has the following contributions: kl, coming from each one of the l terms; #α(θ), coming from the θ_i's that are equal to 1 (or α); −r_α, coming from the r_α differentiations with respect to α. Each differentiation with respect to α also carries a multiplicative factor that is at most k + 1. Therefore we obtain

R_{θ_1,w^{ρ_1}} … R_{θ_l,w^{ρ_l}} ≤ (k + 1)^r β^{l−r_β} α^{kl+#α(θ)−r_α} . (128)
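
A concrete instance may help to follow this bookkeeping. In the sympy lines below we take k = 3, l = 2, the decomposition w^{ρ_1} = α, w^{ρ_2} = β and θ = (1, 2); these choices are arbitrary assumptions and the check is not part of the argument.

# Spot check of (128) in the case k = 3, l = 2, w^2 = alpha beta, theta = (1, 2).
import sympy as sp

al, be = sp.symbols('alpha beta', positive=True)
k = 3
R1, R2 = be * al**(k + 1), be * al**k           # sigma_0 = - branch
term = sp.diff(R1, al) * sp.diff(R2, be)        # R_{1,alpha} R_{2,beta}
# here r = 2, r_alpha = r_beta = 1 and #alpha(theta) = 1:
bound = (k + 1)**2 * be**(2 - 1) * al**(k * 2 + 1 - 1)
# bound - term = 12 beta alpha^6 >= 0 on (0, 1)^2, so the term is dominated by (128)
assert sp.simplify(bound - term - 12 * be * al**6) == 0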

The estimate of |(φ̃ − ψ̃)_{θ_1…θ_l}| comes from induction. There are two options, depending on σ̃. If σ̃ = −, then it is

p(k̃)(β̃α̃^{k̃})^{n−1} α̃^{k̃−#α(θ)} β̃^{−#β(θ)} , (129)

and if σ̃ = + then it is

p(k̃)(α̃β̃^{k̃})^{n−1} α̃^{−#α(θ)} β̃^{k̃−#β(θ)} , (130)


where p(k̃) = p_l(k̃) if l < r and p(k̃) = p_{r,n−1}(k̃) if l = r. The terms α̃^{k̃−#α(θ)}β̃^{−#β(θ)} and α̃^{−#α(θ)}β̃^{k̃−#β(θ)} are bounded by

β^{k̃}α^{kk̃}β^{−l}α^{−kl}α^{−#α(θ)} = βα^k(βα^k)^{k̃−1}β^{−l}α^{−kl}α^{−#α(θ)} . (131)

The terms (β̃α̃^{k̃})^{n−1} and (α̃β̃^{k̃})^{n−1} are bounded by

(βα^k)^{n−1}(βα^k)^{n−1} , (132)

using k̃ ≥ 1. Multiplying the estimates coming from Inequalities (128), (131), and (132), we get

|(φ̃ − ψ̃)_{θ_1…θ_l}| R_{θ_1,w^{ρ_1}} … R_{θ_l,w^{ρ_l}} ≤ β · (βα^k)^{n−1}(k + 1)^r · p(k̃)(βα^k)^{k̃−1} · (βα^k)^{n−1}α^{k−r_α}β^{−r_β} . (133)

If l < r then p(k̃) = p_l(k̃), hence p(k̃)(αβ)^{k̃−1} ≤ C_{r−1} (recall the definition of this constant through Inequality (88)). Therefore Inequality (133) is independent of l, if l < r. The case l = r is treated separately.

Now we remember that

T_1 = φ^2 (β(φ̃)_w − β(ψ̃)_w)_{w^{r−1}} . (134)

Using Equation (83) we see a number of terms of the kind

φ^2 β_{w^i} [(φ̃)_{w^j} − (ψ̃)_{w^j}] , (135)

with 0 ≤ i ≤ r − 1 and 1 ≤ j ≤ r, in particular including the term

φ^2 β [(φ̃)_{w^r} − (ψ̃)_{w^r}] . (136)

Each β_{w^i}, i = 0, …, r, is equal to 0, β or 1, so it is bounded by 1. And every (φ̃)_{w^j} − (ψ̃)_{w^j}, j < r, gives rise to its own terms of the form (φ̃ − ψ̃)_{θ_1…θ_l} R_{θ_1,w^{ρ_1}} … R_{θ_l,w^{ρ_l}}, with l ≤ j < r, where ρ_1 + … + ρ_l = j and w^{ρ_1} ∪̇ … ∪̇ w^{ρ_l} = w^j. These terms are bounded by Inequality (133), except that r_α = r_α(w^r) and r_β = r_β(w^r) must be substituted by r_α(w^j) and r_β(w^j). But w^j ⊂ w^r implies r_α(w^j) ≤ r_α and r_β(w^j) ≤ r_β, hence Inequality (133) may be used as it is, with p(k̃) = p_l(k̃).

Recalling φ ≤ α^k, we then have a number N_r of terms with the estimate

(βα^k)^{n−1} · (k + 1)^r C_{r−1} · (βα^k)^n α^{k−r_α}β^{−r_β} , (137)

and one term with the estimate

(βα^k)^{n−1} · (k + 1)^r (βα^k)^{k̃−1} p_{r,n−1}(k̃) · (βα^k)^n α^{k−r_α}β^{−r_β} . (138)


Inductive step: T_2. By Inequalities (120) and (122), we have

|T_2| ≤ 16 ( Σ_{m=0}^{n−1} (βα^k)^m p_{r,m}(k) ) α^{−k}(βα^{2k})^n α^{k−r_α}β^{−r_β} . (139)

But n ≥ 1 implies

α^{−k}(βα^{2k})^n ≤ (βα^k)^n . (140)

Inductive step: T_3. We are left to estimate a sum of an r-dependent number of terms, each one of the form

φ^2 |φ^{−(l+1)} φ_{w^{ρ_1}} ⋯ φ_{w^{ρ_l}} − ψ^{−(l+1)} ψ_{w^{ρ_1}} ⋯ ψ_{w^{ρ_l}}| , (141)

with l ≥ 2, which is bounded by

φ^2 |φ^{−(l+1)} − ψ^{−(l+1)}| · |φ_{w^{ρ_1}} … φ_{w^{ρ_l}}| + (142)

+ φ^2 ψ^{−(l+1)} Σ_{i=1}^{l} |ψ_{w^{ρ_1}} … ψ_{w^{ρ_{i−1}}}| · |φ_{w^{ρ_i}} − ψ_{w^{ρ_i}}| · |φ_{w^{ρ_{i+1}}} … φ_{w^{ρ_l}}| . (143)

By Inequalities (116) and (120),

φ^2 |φ^{−(l+1)} − ψ^{−(l+1)}| ≤ (l + 1)(k + 2)^{l+2} α^{−kl}(βα^{2k})^n ≤ (r + 1)(k + 2)^{r+2}(βα^k)^n α^k α^{−kl} , (144)

where in the last inequality we used n ≥ 1. Moreover, as each ρ_t, t = 1, …, l, is strictly smaller than r, because l ≥ 2, we bound |φ_{w^{ρ_t}}| and |ψ_{w^{ρ_t}}| with Inequality (121). Using also that

Σ_{t=1}^{l} #α(w^{ρ_t}) = #α(w^r) = r_α , Σ_{t=1}^{l} #β(w^{ρ_t}) = #β(w^r) = r_β , (145)

Expression (142) is bounded by

[(r + 1)(k + 2)^{r+2} / (1 − β_0α_0)^r] ( Π_{t=1}^{l} p_{ρ_t}(k) ) (βα^k)^n α^{k−r_α}β^{−r_β} . (146)

This bound fits Expression (143) in a similar way. Hence Expression (141) is bounded by (146) multiplied by l + 1, which is in turn smaller than or equal to r + 1:

|T_3| ≤ [N_r(r + 1)^2(k + 2)^{r+2} / (1 − β_0α_0)^r] ( Π_{t=1}^{l} p_{ρ_t}(k) ) (βα^k)^n α^{k−r_α}β^{−r_β} , (147)

where N_r is a number exclusively depending on r.


Inductive step: T_4. First, we remark that θ(w^r) is equal to zero if w^r = αw^{r−1} and equal to one if w^r = βw^{r−1}. This implies that if T_4 is not zero then

#β(w^r) = #β(w^{r−1}) + 1 , #α(w^r) = #α(w^{r−1}) , (148)

a relation that will be used below. By Equation (83),

T_4 = θ(w^r) φ^2 (β^{−1})_{w^{r−1}}(ψ^{−1} − φ^{−1}) + θ(w^r) φ^2 Σ_{i=0}^{r−2} Σ_{w^i ∪̇ w^j = w^{r−1}} (β^{−1})_{w^i}(ψ^{−1} − φ^{−1})_{w^j} . (149)

And by Equation (103), the second term is bounded by

Σ_{i=0}^{r−2} Σ_{w^i ∪̇ w^j = w^{r−1}} Σ_{l=1}^{j} Σ_{w^{ρ_1} ∪̇ … ∪̇ w^{ρ_l} = w^j} N(w^j; w^{ρ_1}, …, w^{ρ_l}) · (β^{−1})_{w^i} φ^2 |ψ^{−(l+1)} ψ_{w^{ρ_1}} ⋯ ψ_{w^{ρ_l}} − φ^{−(l+1)} φ_{w^{ρ_1}} ⋯ φ_{w^{ρ_l}}| . (150)

Each term of this sum is bounded by an analogue of Expression (147), remembering to substitute r_α and r_β by #α(w^j) and #β(w^j). This gives

[(r + 1)^2(k + 2)^{r+2} / (1 − β_0α_0)^r] ( Π_{t=1}^{l} p_{ρ_t}(k) ) (βα^k)^n α^{k−#α(w^j)} β^{−#β(w^j)} (β^{−1})_{w^i} . (151)

The derivative of β^{−1} with respect to w^i is either 0, if there is some α in w^i, or (−1)^i i! β^{−(i+1)} otherwise, which is in fact the only case that matters. In this case, #β(w^i) = i, hence #β(w^{r−1}) = i + #β(w^j). This implies, by (148), that i + 1 + #β(w^j) = #β(w^r) = r_β, and this is the negative power of β in the expression. On the other hand, still under the hypothesis that w^i has no α and by (148), we have #α(w^j) = #α(w^r) = r_α, and we obtain the same bound as in (147).

The first term of Equation (149) is easier (remark that it is the only term for r = 1). Inequality (120) with m = 1 gives |ψ^{−1} − φ^{−1}| ≤ (k + 2)^2 α^{−2k}(βα^{2k})^n; together with (β^{−1})_{w^{r−1}} = (−1)^{r−1}(r − 1)! β^{−r} if w^{r−1} = β^{r−1}, zero otherwise, and φ^2 ≤ α^{2k} (see (116)), this implies the bound (k + 2)^2(r − 1)! β^{−r}(βα^{2k})^n if w^{r−1} = β^{r−1} and zero otherwise. As θ(w^r) = 1 and #β(w^{r−1}) = r − 1 is the only case where the term is not zero, in this case r_β = r and r_α = 0. Moreover, n ≥ 1 allows taking out α^k from (βα^{2k})^n, and everything together results in the bound

(k + 2)^2(r − 1)! (βα^k)^n α^{k−r_α}β^{−r_β} . (152)

Hence

|T_4| ≤ {(k + 2)^2(r − 1)! + [N_r(r + 1)^2(k + 2)^{r+2} / (1 − β_0α_0)^r] Π_{t=1}^{l} p_{ρ_t}(k)} (βα^k)^n α^{k−r_α}β^{−r_β} . (153)


n-uniform boundedness. Now we aim at eliminating the n-dependence of the functions p_{r,n}(k), showing that there is a function of polynomial growth p_r(k) bounding them all.

We recall (see (88)) that C_{r−1} is such that

(α_0β_0)^{k−1} p_j(k) ≤ C_{r−1} , (154)

for all k ≥ 1 and j = 1, …, r − 1, and now, for all m = 0, 1, 2, …, n − 1, we define C_{r,m} such that

(α_0β_0)^{k−1} p_{r,m}(k) ≤ C_{r,m} , (155)

for all k ≥ 1.

1. First of all we put all estimates together and inductively define p_{r,n}. Recall that p_{r,0} was defined previously, in the discussion of the case n = 0, see (115). We got that |φ_{n+1,w^r} − φ_{n,w^r}| is bounded by (βα^k)^n α^{k−r_α}β^{−r_β} multiplied by an expression that includes: (i) the k̃-dependent term given in (138),

(βα^k)^{n−1} · (k + 1)^r (βα^k)^{k̃−1} p_{r,n−1}(k̃) , (156)

whose k̃-dependence can be eliminated by remembering that, by definition, C_{r,n−1} bounds the term (βα^k)^{k̃−1} p_{r,n−1}(k̃) over k̃; (ii) a number N_r of terms

(βα^k)^{n−1}(k + 1)^r C_{r−1} , (157)

coming from (137); (iii) the sum

16 Σ_{m=0}^{n−1} (βα^k)^m p_{r,m}(k) , (158)

coming from T_2; and (iv) a function of polynomial growth, which is the sum of the estimates coming from (147) and (153); each term is a product of the p_j(k)'s with j < r multiplied by an r-dependent constant. We call p̂_r(k) the sum of this expression with N_r(k + 1)^r C_{r−1}, from (ii). Therefore the expression obtained by adding (i)-(iv) is bounded by

p_{r,n}(k) = (β_0α_0)^{n−1}(k + 1)^r C_{r,n−1} + 16 Σ_{m=0}^{n−1} (β_0α_0)^m p_{r,m}(k) + p̂_r(k) , (159)

which is also a function of polynomial growth.


2. Once p_{r,n}(k) is defined, we are tasked with choosing a constant C_{r,n} such that

(α_0β_0)^{k−1} p_{r,n}(k) ≤ C_{r,n} (160)

as a function of the previous constants C_{r,m}, m ≤ n − 1. Let Ĉ_r be such that

(α_0β_0)^{k−1} p̂_r(k) ≤ Ĉ_r (161)

and C*_r be such that

(α_0β_0)^{k−1}(k + 1)^r ≤ C*_r . (162)

Multiplying (α_0β_0)^{k−1} by the expression of p_{r,n}(k), given in (159), we get something smaller than

C_{r,n} ≡ (α_0β_0)^{n−1} C*_r C_{r,n−1} + 16 Σ_{m=0}^{n−1} (α_0β_0)^m C_{r,m} + Ĉ_r . (163)

3. Now assume that p_{r,n}(k) and C_{r,n} have been inductively defined by the expressions above, for all n. We will show that there is a constant C′_r such that

C_{r,n} ≤ C′_r , ∀n ≥ 0 . (164)

First, take n_0 ≥ 2 such that

(α_0β_0)^{n_0−1} max{C*_r , 16α_0β_0/(1 − α_0β_0)} < 1/4 . (165)

Take C′_r such that (i) Ĉ_r < (1/4)C′_r and (ii) 16 Σ_{m=0}^{n_0−1} C_{r,m} < (1/4)C′_r. We claim that this choice of C′_r implies C_{r,n} ≤ C′_r, ∀n ≥ 0. If n ≤ n_0 − 1, this directly comes from (ii). If n ≥ n_0 we assume by induction that C_{r,m} ≤ C′_r for all m ≤ n − 1 and prove the same for n. From (163),

C_{r,n} ≤ Ĉ_r + (α_0β_0)^{n_0−1} C*_r C′_r + 16 Σ_{m=0}^{n_0−1} (α_0β_0)^m C_{r,m} + 16 Σ_{m=n_0}^{n−1} (α_0β_0)^m C_{r,m} . (166)

Each one of the first three terms is bounded by (1/4)C′_r. The last (which may be an empty sum if n = n_0) is bounded by [16(α_0β_0)^{n_0}/(1 − α_0β_0)] C′_r, which is also smaller than (1/4)C′_r.
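
To see the mechanism behind this boundedness, one can simply iterate the recursion (163) numerically; the toy run below uses made-up constants (α_0β_0 = 0.05, C*_r = 3, Ĉ_r = 2 and the seed C_{r,0} = 1 are assumptions for illustration only) and observes that C_{r,n} stabilizes.

# Toy iteration of the recursion (163) with made-up constants, illustrating
# the boundedness proved in step 3; none of these numbers come from the paper.
ab = 0.05                      # stands for alpha_0 * beta_0 < 1
C_star, C_hat = 3.0, 2.0       # stand-ins for C*_r and the hat constant
C = [1.0]                      # C_{r,0}, an arbitrary positive seed
for n in range(1, 200):
    C.append(ab**(n - 1) * C_star * C[-1] + 16 * sum(ab**m * C[m] for m in range(n)) + C_hat)
print(max(C), C[-1])           # the sequence stays bounded (here it settles near 36)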

4. From (159) and the conclusion in 3.,

p_{r,n}(k) ≤ (k + 1)^r C′_r + p̂_r(k) + 16 Σ_{m=0}^{n−1} (α_0β_0)^m p_{r,m}(k) . (167)


Therefore define

p_r(k) = 2[(k + 1)^r C′_r + p̂_r(k) + 16(p_{r,0}(k) + … + p_{r,n_0−1}(k))] . (168)

We claim that p_{r,n}(k) ≤ p_r(k) for all n ≥ 0. For n < n_0 it is automatic from the definition of p_r(k) (remembering that all the functions involved are positive for k ≥ 1). If n ≥ n_0 we assume by induction that p_{r,m}(k) ≤ p_r(k), for all m ≤ n − 1. From (167) and (168),

p_{r,n}(k) ≤ (1/2)p_r(k) + Σ_{m=n_0}^{n−1} 16(α_0β_0)^m p_r(k) ≤ (1/2)p_r(k) + [16(α_0β_0)^{n_0}/(1 − α_0β_0)] p_r(k) < (3/4)p_r(k) < p_r(k) ,

and now the proof is complete.

    References

[1] J. F. Alves, J. L. Fachada, and J. S. Ramos, A condition for transitivity of Lorenz maps, in: "Proceedings of the Eighth International Conference on Difference Equations and Applications", (2005), 7-13, Chapman & Hall/CRC, Boca Raton, Florida.

[2] Y. Choi, Attractors from one dimensional Lorenz-like maps, Discrete Contin. Dyn. Syst., 11 (2-3) (2004), 715-730.

[3] P. Coullet and C. Tresser, Itérations d'endomorphismes et groupe de renormalisation, J. Phys. Colloque C5, 39 (1978), C5-25.

[4] E. de Faria, W. de Melo, and A. Pinto, Global hyperbolicity of renormalization, Ann. of Math., 164 (2006), to appear.

[5] W. de Melo and S. van Strien, "One-dimensional dynamics", vol. 25 of Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], Springer-Verlag, Berlin, 1993.

[6] M. J. Feigenbaum, Quantitative universality for a class of nonlinear transformations, J. Statist. Phys., 19 (1978), 25-52.

[7] J. Guckenheimer, Sensitive dependence to initial conditions for one-dimensional maps, Comm. Math. Phys., 70 (2) (1979), 133-160.

[8] R. Labarca and C. Moreira, Bifurcations of the essential dynamics of Lorenz maps on the real line and the bifurcation scenario for the linear family, Sci. Ser. A Math. Sci. (N.S.), 7 (2001), 13-29.

[9] E. N. Lorenz, Deterministic non-periodic flow, J. Atmos. Sci., 20 (1963), 130-141.

[10] A. Rovella, The dynamics of perturbations of the contracting Lorenz attractor, Bol. Soc. Brasil. Mat. (N.S.), 24 (2) (1993), 233-259.

[11] L. Silva and J. S. Ramos, Two-parameter families of discontinuous one-dimensional maps, Ann. Math. Sil., 13 (1999), 257-270, European Conference on Iteration Theory (Muszyna-Złockie, 1998).

[12] L. Silva and J. S. Ramos, Topological invariants and renormalization of Lorenz maps, Phys. D, 162 (3-4) (2002), 233-243.

[13] L. Silva and J. S. Ramos, A unified renormalization scheme for two-piecewise monotonous maps of the interval, Int. J. Bifur. Chaos Appl. Sci. Engrg., 13 (7) (2003), 1711-1719.

[14] L. Silva and J. S. Ramos, Renormalization and topological invariants of 2-pcm maps, Grazer Math. Ber., 346 (2004), 413-436, European Conference on Iteration Theory (Muszyna-Złockie, 2002).

