
arXiv:0903.2974v1 [math.RA] 17 Mar 2009

    Bicrossproducts of multiplier Hopf algebras

    L. Delvaux1, A. Van Daele2 and S.H. Wang3

    Abstract

In this paper, we generalize Majid's bicrossproduct construction. We start with a pair (A, B) of two regular multiplier Hopf algebras. We assume that B is a right A-module algebra and that A is a left B-comodule coalgebra. We recall and discuss these two notions in the first sections of the paper. The right action of A on B gives rise to the smash product A#B. The left coaction of B on A gives a possible coproduct ∆# on A#B. We will discuss in detail the necessary compatibility conditions between the action and the coaction for ∆# to be a proper coproduct on A#B. The result is again a regular multiplier Hopf algebra. Majid's construction is obtained when we have Hopf algebras.

We also look at the dual case, constructed from a pair (C, D) of regular multiplier Hopf algebras where now C is a left D-module algebra while D is a right C-comodule coalgebra. We will show that indeed, these two constructions are dual to each other in the sense that a natural pairing of A with C and of B with D will yield a duality between A#B and the smash product C#D.

We show that the bicrossproduct of algebraic quantum groups is again an algebraic quantum group (i.e. a regular multiplier Hopf algebra with integrals). The *-algebra case will also be considered. Some special cases will be treated and they will be related with other constructions available in the literature.

Finally, the basic example, coming from a (not necessarily finite) group G with two subgroups H and K such that G = KH and H ∩ K = {e} (where e is the identity of G), will be used throughout the paper for motivation and illustration of the different notions and results. The cases where either H or K is a normal subgroup will get special attention.

    March 2009 (Version 2.1)

1 Department of Mathematics, Hasselt University, Agoralaan, B-3590 Diepenbeek (Belgium). E-mail: [email protected]

2 Department of Mathematics, K.U. Leuven, Celestijnenlaan 200B (bus 2400), B-3001 Heverlee (Belgium). E-mail: [email protected]

3 Department of Mathematics, Southeast University, Nanjing 210096, China. E-mail: [email protected]


    0. Introduction

Consider a group G and assume that it has two subgroups H and K such that G = KH and H ∩ K = {e}, where e denotes the identity element in G. For any two elements h ∈ H and k ∈ K, there is a unique way to write hk as a product k′h′ with h′ ∈ H and k′ ∈ K.

We will write h ▷ k for k′ and h ◁ k for h′. As a consequence of the associativity of the product in G, we get, among other formulas, that

    (hh′) ▷ k = h ▷ (h′ ▷ k)

    h ◁ (kk′) = (h ◁ k) ◁ k′

for all h, h′ ∈ H and k, k′ ∈ K. We get a left action of the group H on the set K and a right action of the group K on the set H. This justifies the notations we have used. These actions are related in a very specific way: we have a (right-left) matched pair (K, H) of groups as defined by Majid (see e.g. Definition 6.2.10 in [M3]). We will use such a matched pair

throughout the paper, mainly for motivational reasons. Therefore, we will recall more details about such a pair at various places in our paper.
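For a concrete instance of such a matched pair, one can take G = S₃ with K = A₃ and H generated by a transposition (this particular choice is our own illustration, not taken from the paper). The sketch below computes h ▷ k and h ◁ k from the unique factorization hk = (h ▷ k)(h ◁ k) and checks the two formulas above.

```python
# Permutations of {0,1,2} as tuples; p[i] is the image of i.
def mul(p, q):
    """Group product: first apply q, then p."""
    return tuple(p[q[i]] for i in range(3))

e = (0, 1, 2)
H = [e, (1, 0, 2)]                 # generated by the transposition (0 1)
K = [e, (1, 2, 0), (2, 0, 1)]      # the subgroup A3; here G = S3 = KH and H ∩ K = {e}

def factor(g):
    """The unique factorization g = k'h' with k' in K and h' in H."""
    for kp in K:
        for hp in H:
            if mul(kp, hp) == g:
                return kp, hp

def lact(h, k):   # h ▷ k : the K-part of hk
    return factor(mul(h, k))[0]

def ract(h, k):   # h ◁ k : the H-part of hk
    return factor(mul(h, k))[1]

# The two compatibility formulas stated above:
for h in H:
    for h2 in H:
        for k in K:
            assert lact(mul(h, h2), k) == lact(h, lact(h2, k))       # (hh') ▷ k = h ▷ (h' ▷ k)
for h in H:
    for k in K:
        for k2 in K:
            assert ract(h, mul(k, k2)) == ract(ract(h, k), k2)       # h ◁ (kk') = (h ◁ k) ◁ k'
print("matched pair conditions verified")
```

Note that ▷ and ◁ here are genuinely nontrivial: since A₃ is normal but H is not, h ▷ k is conjugation-like while h ◁ k still depends on k.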

Given a pair of groups (K, H), we can associate two Hopf algebras. The first one is the group algebra CH. We will use A to denote this group algebra. The second one is the group algebra CK. We will use D to denote this group algebra. On the other hand, we can also consider the algebras F(H) and F(K) of complex functions with finite support on H and K respectively (with pointwise operations). We will use B for F(K) and C for F(H). If H and K are finite, also B and C are Hopf algebras and of course, C is the dual of A while D is dual to B. In general, B and C are regular multiplier Hopf algebras with integrals, i.e. algebraic quantum groups (as defined and studied in [VD1] and [VD2]). Still, we have that A and C on the one hand, and B and D on the other hand, are each other's duals as algebraic quantum groups.

If moreover the pair (K, H) is a matched pair of groups as above, the left action of H on K will induce a right action of A on B (in the sense of [Dr-VD-Z]). It is denoted and defined by

    (f ◁ h)(k) = f(h ▷ k)

whenever h ∈ H, k ∈ K and f ∈ B. This action makes B into a right A-module algebra (see Section 1). Similarly, the right action of K on H will induce a left action of D on C, denoted and defined by

    (k ▷ f)(h) = f(h ◁ k)

whenever h ∈ H, k ∈ K and f ∈ C. It makes C into a left D-module algebra.

As in [Dr-VD-Z], we can define the smash products A#B and C#D. We will recall these notions in detail in Section 1 of this paper. In the case considered here in the introduction, the algebra A#B is the space A ⊗ B with the product defined by

    (h ⊗ f)(h′ ⊗ f′) = hh′ ⊗ (f ◁ h′)f′


when h, h′ ∈ H and f, f′ ∈ B. Observe that we consider elements of the group as sitting in the group algebra. Similarly, the algebra C#D here is the space C ⊗ D with product defined by

    (f ⊗ k)(f′ ⊗ k′) = f(k ▷ f′) ⊗ kk′

whenever k, k′ ∈ K and f, f′ ∈ C. See Example 1.8 in Section 1 for more details.
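In the finite case, this product is easy to test numerically. On basis elements h ⊗ δ_k of A#B (with δ_k the indicator function of k), the formula reduces to (h ⊗ δ_k)(h′ ⊗ δ_{k′}) = hh′ ⊗ δ_{k′} when h′ ▷ k′ = k, and 0 otherwise, since δ_k ◁ h′ is the indicator of {x : h′ ▷ x = k}. The following sketch (our own illustration, reusing the S₃ matched pair; not from the paper) implements A#B this way and checks associativity.

```python
def mul(p, q):
    """Product in S3: permutations of {0,1,2} as tuples, first q then p."""
    return tuple(p[q[i]] for i in range(3))

e = (0, 1, 2)
H = [e, (1, 0, 2)]
K = [e, (1, 2, 0), (2, 0, 1)]

def factor(g):
    for kp in K:
        for hp in H:
            if mul(kp, hp) == g:
                return kp, hp

def lact(h, k):                      # h ▷ k
    return factor(mul(h, k))[0]

# Elements of A#B as dicts {(h, k): coefficient}, where (h, k) stands for h ⊗ δ_k.
def smash_mul(x, y):
    out = {}
    for (h, k), c in x.items():
        for (h2, k2), c2 in y.items():
            if lact(h2, k2) == k:    # (h ⊗ δ_k)(h' ⊗ δ_{k'}) = hh' ⊗ δ_{k'} iff h' ▷ k' = k
                key = (mul(h, h2), k2)
                out[key] = out.get(key, 0) + c * c2
    return {b: c for b, c in out.items() if c != 0}

basis = [{(h, k): 1} for h in H for k in K]
for x in basis:
    for y in basis:
        for z in basis:
            assert smash_mul(smash_mul(x, y), z) == smash_mul(x, smash_mul(y, z))
print("A#B is associative on this example")
```

The associativity check is exactly where the compatibility of the two actions enters; with an arbitrary pair of maps in place of ▷ the assertions would fail.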

In the case of a pair of finite groups, the spaces are finite-dimensional. Then the multiplications on A ⊗ B and C ⊗ D induce comultiplications on C ⊗ D and A ⊗ B respectively by duality. It turns out that these coproducts make A#B and C#D into Hopf algebras, dual to each other. The compatibility of the actions of H on K and of K on H, as mentioned earlier in this introduction, is crucial for this (rather remarkable) result. This property was shown by Majid and a proof can be found in [M3] (see Examples 6.2.11 and 6.2.12 in [M3]).

It is rather straightforward to show that this result of Majid remains valid when the groups are no longer assumed to be finite. Of course, in this more general case, we have to work with multiplier Hopf algebras, as the spaces are no longer finite-dimensional, so that the products on A ⊗ B and C ⊗ D will not induce ordinary coproducts on the dual spaces. However, apart from this, let us say, technical problem, the extension of this result from the case of finite groups to the general case is, as said, mostly straightforward. This case has been considered in [VD-W]. Here, we will conclude it from general results on bicrossproducts for multiplier Hopf algebras, as obtained in this paper.

Indeed, the notion of a matched pair of groups is the underlying idea of Majid's bicrossproduct construction. For his construction, the starting point is a pair (A, B) of Hopf algebras with a right action of A on B and a left coaction of B on A. Roughly speaking, such a left coaction is the dual concept of a left action of D (the dual of B) on C (the dual of A) (see Section 2 in this paper). Again, there are compatibility conditions. These allow us to show that the smash product A#B carries a coproduct (induced by the coaction), making it into a Hopf algebra.

The main purpose of this paper is to extend Majid's bicrossproduct construction to the case of multiplier Hopf algebras. We refer to the summary of the content of this paper, further in this introduction, for a more detailed account of what we do.

Still, the notion of a matched pair of groups (now possibly infinite) will be used as a motivation throughout our paper. And one might expect that the passage from Hopf algebras to regular multiplier Hopf algebras is again straightforward (as it is for the passage above from finite to infinite groups). To a certain extent, this is correct, as essentially the formulas are the same. However, there are some difficulties as a consequence of the fact that, in the case of multiplier Hopf algebras, the coproducts (as well as the coactions) no longer take values in the tensor product of the algebras but rather in the multiplier algebras (see further in the introduction where we recall this notion very briefly).

There are essentially two ways to deal with this problem. One way is to keep using the same formulas, work with the Sweedler notation, both for the coproducts and the coactions, and verify carefully that everything is well-covered. The technique of covering the legs


of coproducts with values in the multiplier algebra (together with the use of the Sweedler notation) was first introduced in [Dr-VD]. However, in this context, we encounter a greater complexity with the use of the Sweedler notation. In [VD4] we have developed a more fundamental way to deal with these problems and we refer the reader to this article for a better theoretical basis for all this. In fact, in that paper, we have various examples related to this work on bicrossproducts.

Certainly in this setting, the covering technique requires some care, but it has the advantage of being more transparent. Another possible way to study these bicrossproducts is by using linear maps between tensor products of spaces. It is like treating the coproduct for a multiplier Hopf algebra (A, ∆) by using the associated linear maps T₁, T₂ from A ⊗ A to A ⊗ A given by

    T₁(a ⊗ a′) = ∆(a)(1 ⊗ a′)

    T₂(a ⊗ a′) = (a ⊗ 1)∆(a′).

    The two approaches look very different, but are in fact two forms of the same underlyingidea. Roughly speaking one could say that the covering technique is a more transparentway of treating the correct formulas involving linear operators.

For all these reasons, we have chosen to write this paper with the following underlying point of view. We will spend time on motivation and, in doing so, we will focus on the use of the covering technique. On the other hand, we will also indicate how, in certain cases, this is all translated into rigorous formulas using linear maps. We do get important new results because we work in a different and more general setting, but the results are expected. By focusing more on the new techniques, we hope to make the paper also more interesting to read. Moreover, with this point of view, we move our paper slightly in the direction of a paper with a more expository nature. We feel this is also important because it can help the reader to get more familiar with this generalization of Hopf algebras which, after all, turns out to give nice and more general results. The bicrossproduct construction of Majid, treated in this paper, seems to be a suitable subject for this purpose.

Before we come to the content of the paper, we should mention in passing that Majid's bicrossproduct construction has also been obtained within the theory of locally compact quantum groups. The final result is obtained by Vaes and Vainerman [V-V2], see also [V1], [V2] and [V-V1]. Their work is greatly inspired by the paper by Baaj and Skandalis on multiplicative unitaries [B-S]. But before all this, we have the work of Majid on bicrossproducts for Hopf von Neumann algebras and Kac algebras [M1].

    It is worthwhile mentioning that the theory of multiplier Hopf algebras and algebraicquantum groups has been developed before the present theory of locally compact quantumgroups and that it has served as a source of inspiration for the latter. So, it should notcome as a surprise that the present work is related with the earlier and more recent work onbicrossproducts in Hopf-von Neumann algebra theory. In fact, the formulas used in theseanalytical versions of the bicrossproduct construction are essentially the same as the onesthat we use in this paper when we treat the matter with the linear operator technique.We will say more about this at the appropriate place in the paper. Note however that our


case cannot be seen as a special case of this analytical theory because we work in a purely algebraic context. We will also come back to this statement later. See some remarks in Section 4.

We also should mention here that many interesting cases of bicrossproducts in the setting of locally compact quantum groups (as treated by Vaes and Vainerman in [V-V2]) do not fit into this algebraic approach. Nevertheless, there are interesting examples that cannot be treated using only Hopf algebras but where multiplier Hopf algebras are needed (and sufficient). We will say something more about this in Section 5 of this paper. We also refer to our second paper on this subject [De-VD-W], where we include more examples. And observe also that the theory of multiplier Hopf algebras and algebraic quantum groups is, from a didactical point of view, a good intermediate step towards the more general, but also technically far more complicated, theory of locally compact quantum groups. See e.g. reference [VD3] (Multiplier Hopf *-algebras with positive integrals: A laboratory for locally compact quantum groups).

    Content of the paper

In Section 1, we recall briefly the notion of a unital action and the smash product. We first consider a right action of a regular multiplier Hopf algebra A on an algebra B and we assume that B is a right A-module algebra. We recall the twist map (or braiding) and the notion of the smash product. The case of a left module algebra is also discussed. We explain a basic procedure to pass from one case to the other and to find the formulas in the case of a left module algebra from those of a right module algebra. We look at the *-algebra cases as well.

For the sake of completeness, because we will treat these also in the coming sections, we consider the trivial cases (when the actions are trivial). The example of two subgroups of a group (as we described earlier in this introduction) is the basic illustration of the notions. And also for this example, we look at the special case where one of the subgroups is normal.

    We use this well-known and relatively simple situation to explain more precisely how todeal with problems that arise from the fact that we work with multiplier Hopf algebraswhere the coproduct has its range in the multiplier algebra of the tensor product and notin the tensor product itself. In particular, we recall how the Sweedler notation is used inthis context and how to work with the covering technique properly. In [VD4], we haveexplained the theoretical background for these methods.

Actions of multiplier Hopf algebras and module algebras were introduced in [Dr-VD-Z]. Also the smash coproduct has been considered in several papers before (see also [De1] and [De3]). This first section does not really contain new ideas and certainly no new results. It is just included for the convenience of the reader and, as we mentioned, to illustrate the use of the Sweedler notation and the covering technique (as treated in [VD4]).

In Section 2, we recall definitions and results about coactions and smash coproducts. The basic ingredient is that of a left coaction of a regular multiplier Hopf algebra B on a vector space A. If A is also an algebra, it is an injective linear map Γ : A → M(B ⊗ A), satisfying


certain properties. In general, one must be a little more careful. One of the problems arises because the coaction has its range in the multiplier algebra of B ⊗ A and not in B ⊗ A itself. This causes the need for a new type of covering. Further, the associated cotwist map (or cobraiding) T on A ⊗ B is introduced (when also A is a regular multiplier Hopf algebra). Formally, it gives rise to the coproduct on A ⊗ B given by (ι_A ⊗ T ⊗ ι_B)(∆_A ⊗ ∆_B).

    Again, there is a complication because the coproduct is supposed to map to the multiplieralgebra of the tensor product and therefore, the definition of this coproduct requires, insome sense, already the product. Also this problem is discussed.

The notion of a coaction was introduced in [VD-Z2] and it is studied further in [De2] (see also [De4]). In [De2], the cotwist map is considered and the notion of a comodule coalgebra is given. In this paper, we recall (and slightly generalize) these various notions and results. We also add some observations and provide some deeper insight into the problems that make this dual case much more involved than the easier case of an action and a smash product as in Section 1.

We consider the case of a right coaction as well. Again, we can use the basic technique, as explained already in Section 1, to pass from one case to the other. Now, there is also a second possibility to do this, based on duality. This is also explained in this section. The two techniques are used further in Sections 3 and 4 to obtain the right formulas in one case from those in the other case. Also here, we treat the *-algebra cases.

Finally, in this section, we again consider the obvious trivial cases and we illustrate all of this with the group example as introduced before.

Section 3 is the most important section of the paper. In this section, we consider the candidate for the smash coproduct ∆# on the smash product A#B. We explain step by step what the natural conditions are for ∆# to be an algebra map. The conditions involve different connections between the right action of A on B and the left coaction of B on A. We formulate these conditions in a beautiful and symmetrical way, different from the classical formulations in Hopf algebra theory. And of course, we prove that under these conditions, we get indeed that ∆# is a coproduct on the smash product.

    The other case of a left action of D on C and a right coaction of C on D is obtained asbefore, using the techniques we described in Section 1 and Section 2.

The *-algebra case is treated, as well as the various special cases. The basic example is used to illustrate the conditions and the results.

In Section 4, we obtain the main results of this paper. We show that the smash product A#B, as reviewed in the first section, endowed with the smash coproduct ∆#, as defined in the second section, is actually a regular multiplier Hopf algebra when the conditions discussed in the third section are fulfilled. Again also the dual case, the *-algebra case and the case of algebraic quantum groups are considered, as well as the obvious special cases. In this section, the treatment of the basic example is completed. We also refer to a forthcoming paper where we complete the case of algebraic quantum groups in the sense that we also obtain formulas for the various objects like the modular elements, the modular automorphism groups, etc., associated with the bicrossproduct and its dual. See [De-VD-W].


In the last section, Section 5, we conclude the paper and we discuss some possible further research on this subject. In particular, we consider some remaining difficulties related to aspects of covering for coactions. And we discuss the problem of finding more examples that fit into this framework.

    We do not treat many examples in this paper. On the other hand, in our second paperon the subject [De-VD-W], where we consider integrals on bicrossproducts, we do considervarious non-trivial and interesting examples.

    Notations, conventions and basic references

Throughout the paper, we work with (associative) algebras over the complex numbers C, as often we are also interested in the *-algebra case. We do not require the algebras to have an identity, but we do expect that the product is non-degenerate (as a bilinear map). For an algebra A, we use M(A) to denote the multiplier algebra. It is the largest unital algebra containing A as a dense two-sided ideal. If A is a *-algebra, then so is M(A). If A and B are algebras and if π : A → M(B) is an algebra homomorphism, then it is called non-degenerate if π(A)B = B and Bπ(A) = B. In that case, π has a unique extension to a unital homomorphism from M(A) to M(B). This extension is still denoted by π.

We use 1_A for the identity in M(A) and simply 1 if no confusion is possible. Sometimes however, we will even then use 1_A for clarity. The identity element in a group is denoted by e. We use ι_A for the identity map from A to itself and again, we simply write ι when appropriate. Similarly, we use ∆_A and ∆ for a coproduct on A.

A multiplier Hopf algebra is an algebra A with a coproduct ∆, satisfying certain assumptions. The coproduct is a non-degenerate homomorphism from A to M(A ⊗ A). It is assumed to be coassociative, i.e. we have (∆ ⊗ ι)∆ = (ι ⊗ ∆)∆. If A is a *-algebra, we require that ∆ is a *-homomorphism. A multiplier Hopf algebra is called regular if the opposite coproduct ∆op still makes A into a multiplier Hopf algebra. For a multiplier Hopf *-algebra, this is automatic. A regular multiplier Hopf algebra that carries integrals is called an algebraic quantum group. We refer to [VD1] for the theory of multiplier Hopf algebras and to [VD2] for the theory of algebraic quantum groups, see also [VD-Z1]. For the use of the Sweedler notation, as introduced in [Dr-VD], we refer to a special paper about this subject, see [VD4]. For pairings of multiplier Hopf algebras, the main reference is also [Dr-VD]. More (basic) references will be given further in the paper.

    Acknowledgements

    We would like to thank the referee who reviewed an earlier version of this paper for manyvaluable remarks. They have inspired us to rewrite the paper more fundamentally.

    This work was partially supported by the FNS of CHINA (10871042). The third authoralso thanks the K.U. Leuven for the research fellowships he received in 2006 and 2009.


    1. Actions and smash products

    In this section, we consider a regular multiplier Hopf algebra A and any other algebra B.Recall the following notion (see [Dr-VD] and [Dr-VD-Z]).

1.1 Definition Suppose that we have a unital right action of A on B, denoted b ◁ a. Then B is called a right A-module algebra if also

    (bb′) ◁ a = ∑(a) (b ◁ a(1))(b′ ◁ a(2))

for all a ∈ A and b, b′ ∈ B.
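In the basic example B = F(K) with A = CH, the group elements h ∈ H are group-like (∆(h) = h ⊗ h), so for them the defining formula reduces to (ff′) ◁ h = (f ◁ h)(f′ ◁ h). A quick sketch (our own illustration, reusing the S₃ matched-pair data from the introduction) checks this pointwise:

```python
def mul(p, q):
    """Product in S3: permutations of {0,1,2} as tuples, first q then p."""
    return tuple(p[q[i]] for i in range(3))

e = (0, 1, 2)
H = [e, (1, 0, 2)]
K = [e, (1, 2, 0), (2, 0, 1)]

def factor(g):
    for kp in K:
        for hp in H:
            if mul(kp, hp) == g:
                return kp, hp

def lact(h, k):                          # h ▷ k
    return factor(mul(h, k))[0]

# Functions on K as dicts k -> value; (f ◁ h)(k) = f(h ▷ k).
def act(f, h):
    return {k: f[lact(h, k)] for k in K}

def ptmul(f, g):                         # pointwise product in F(K)
    return {k: f[k] * g[k] for k in K}

# Module algebra condition for group-like h: (ff') ◁ h = (f ◁ h)(f' ◁ h).
deltas = [{k: (1 if k == k0 else 0) for k in K} for k0 in K]
for h in H:
    for f in deltas:
        for f2 in deltas:
            assert act(ptmul(f, f2), h) == ptmul(act(f, h), act(f2, h))
print("module algebra condition holds for group-likes")
```

For a general element of CH (a linear combination of group elements), the full Sweedler-sum formula of Definition 1.1 is needed; the group-like case above is the building block.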

This notion has been studied before and is now well understood. However, because it is one of the basic ingredients of the construction in this paper and because of the scope of this paper, as outlined in the introduction, we will here discuss some aspects of this notion. In particular, we will focus on the technique of covering. Being a relatively simple concept, the notion of a module algebra is well adapted to explain something more about the covering technique. It should enable the reader to become more familiar with it.

That A acts from the right on B means in the first place that b ◁ (aa′) = (b ◁ a) ◁ a′ for all a, a′ ∈ A. The action is assumed to be unital. This means that elements of the form b ◁ a with a ∈ A and b ∈ B span all of B. Because in any regular multiplier Hopf algebra there exist local units, it follows that for any b ∈ B, there exists an element e ∈ A such that b ◁ e = b. Then, in the defining formula of the above definition, a(1) will be covered by b and a(2) will be covered by b′. In Example 2.8.ii of [VD4], this case is used to illustrate the basic ideas behind the covering technique.

    As we also mentioned in the introduction, one way to avoid these coverings is by usinglinear maps on tensor product spaces. We will now again do this in greater detail for thenotion of a module algebra as in Definition 1.1. For this purpose, a twist map is associatedwith the action. We recall the definition and the first main properties in the followingproposition (see e.g. [Dr-VD-Z] where the case of a left module algebra is treated).

1.2 Proposition Assume that B is a right A-module algebra. Define a linear map R : B ⊗ A → A ⊗ B by

    R(b ⊗ a) = ∑(a) a(1) ⊗ (b ◁ a(2)).

Then R is bijective and

    R⁻¹(a ⊗ b) = ∑(a) (b ◁ S⁻¹(a(2))) ⊗ a(1)

for all a ∈ A and b ∈ B. Here, S is the antipode on A.


Remark that in both equations, a(2) is covered by b through the action. Again, the reader can look at Example 2.9.i in [VD4] for more details if desired.

Some authors use a different terminology. In [B-M] e.g. this map is called a generalized braiding. We prefer our terminology because the map is used to twist the usual product on A ⊗ B (see Proposition 1.4 below). Also, it does not really satisfy the standard braid relation but rather the similar relation in Proposition 1.3 below.

Indeed, using this linear map R, it is possible to translate the basic conditions of Definition 1.1 in terms of linear maps. These equations are given in the following proposition. See also e.g. [VD-VK] where these equalities appear in a natural way.

1.3 Proposition We have

    R(ι_B ⊗ m_A) = (m_A ⊗ ι_B)(ι_A ⊗ R)(R ⊗ ι_A) on B ⊗ A ⊗ A

    R(m_B ⊗ ι_A) = (ι_A ⊗ m_B)(R ⊗ ι_B)(ι_B ⊗ R) on B ⊗ B ⊗ A

where, as mentioned before, ι_A and ι_B are the identity maps on A and B respectively, and m_A and m_B are the multiplication maps.

The first formula follows from the fact that B is an A-module. The second one comes from the A-module algebra condition. This is easy to show (see e.g. [De3]). It can also be shown that these two equations in turn imply that B is a right A-module algebra. If e.g. we apply the first equation to b ⊗ a ⊗ a′ (with a, a′ ∈ A and b ∈ B) and then apply ε_A ⊗ ι_B, we obtain that b ◁ (aa′) = (b ◁ a) ◁ a′. Similarly, we can get the other requirement from the second equation.
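Written out in Sweedler notation, this computation goes as follows (a sketch under the stated conventions, using the counit property; ε_A denotes the counit of A):

```latex
\begin{align*}
(\varepsilon_A \otimes \iota_B)\,R(b \otimes aa')
  &= \sum_{(aa')} \varepsilon_A\bigl((aa')_{(1)}\bigr)\,\bigl(b \triangleleft (aa')_{(2)}\bigr)
   = b \triangleleft (aa'),\\
(\varepsilon_A \otimes \iota_B)\,(m_A \otimes \iota_B)(\iota_A \otimes R)(R \otimes \iota_A)(b \otimes a \otimes a')
  &= \sum_{(a),(a')} \varepsilon_A\bigl(a_{(1)}a'_{(1)}\bigr)\,\bigl((b \triangleleft a_{(2)}) \triangleleft a'_{(2)}\bigr)
   = (b \triangleleft a) \triangleleft a'.
\end{align*}
```

Comparing the two right-hand sides gives b ◁ (aa′) = (b ◁ a) ◁ a′.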

Next we consider the smash product. There are different ways to treat this concept. We start by recalling the usual approach (see [Dr-VD-Z] for the case of a left module algebra, and [De3]). Of course, related work has been done elsewhere in the literature as well.

1.4 Proposition Define a linear map m : (A ⊗ B) ⊗ (A ⊗ B) → A ⊗ B by

    m = (m_A ⊗ m_B)(ι_A ⊗ R ⊗ ι_B).

This map makes A ⊗ B into an (associative) algebra (with a non-degenerate product).

    The associativity can be proven by using the two formulas for R obtained in Proposition1.3.

First, we will follow the common convention and write A#B for the space A ⊗ B, endowed with this product. Similarly, we will use a#b for the element a ⊗ b when considered as an element of this algebra.

Using Sweedler's notation, the product can be written as

    (a#b)(a′#b′) = ∑(a′) aa′(1)#(b ◁ a′(2))b′


for all a, a′ ∈ A and b, b′ ∈ B. In this formula, a′(1) is covered by a (through multiplication) and a′(2) is covered by b (through the action).

We are also interested in the other way to treat this smash product because it sometimes makes formulas more transparent. It is based on the following result (see Proposition 5.10 in [Dr-VD-Z]).

1.5 Proposition There exist injective non-degenerate homomorphisms γ_A : A → M(A#B) and γ_B : B → M(A#B) defined by

    γ_A(a)(a′#b) = (aa′)#b

    (a#b)γ_B(b′) = a#(bb′)

for all a, a′ ∈ A and b, b′ ∈ B. We also have

    γ_A(a)γ_B(b) = a#b

    γ_B(b)γ_A(a) = ∑(a) a(1)#(b ◁ a(2))

for all a ∈ A and b ∈ B.

Observe the presence of the map R in the right-hand side of the last formula.

If we identify A and B with their images in M(A#B), we can replace a#b by ab and we find that ba = ∑(a) a(1)(b ◁ a(2)) for all a ∈ A and b ∈ B. This takes us to the following result that provides an alternative approach to the smash product.

1.6 Proposition The smash product algebra A#B is isomorphic with the algebra generated by A and B, subject to the commutation rules ba = ∑(a) a(1)(b ◁ a(2)) for all a ∈ A and b ∈ B.

It makes sense to denote this algebra by AB and it requires a small argument (bijectivity of the map R) to show that also AB = BA. More precisely, the two maps a ⊗ b ↦ ab and b ⊗ a ↦ ba are linear bijections from the spaces A ⊗ B and B ⊗ A respectively to the space AB. In the remaining part of the paper, we will use AB more often than A#B to denote the smash product.

The algebra AB acts faithfully on itself by right multiplication. A simpler action is obtained on B by letting A act in the original way and B again by right multiplication. Also this action will be used further in the paper although, in general, it need not be faithful anymore (e.g. when the action of A on B is trivial).

If the action is trivial, that is if b ◁ a = ε(a)b for all a ∈ A and b ∈ B, then the twist map is nothing else but the flip and the smash product is simply the tensor product algebra. In the other approach, it means that A and B commute in the smash product AB.

    10

  • 8/3/2019 L. Delvaux, A. Van Daele and S.H. Wang- Bicrossproducts of multiplier Hopf algebras

    11/50

If A is a multiplier Hopf *-algebra and if B is a *-algebra, we require the extra condition that

    (b ◁ a)* = b* ◁ S(a)*

for all a ∈ A and b ∈ B. This is a natural condition. One easily verifies e.g. that it is compatible both with the action property (as S(aa′)* = S(a)*S(a′)* for all a, a′ ∈ A) and with the module algebra property (as (bb′)* = b′*b* for all b, b′ ∈ B and ∆(S(a)*) = ((S ⊗ S)∆op(a))* for all a ∈ A). One can show that with this extra assumption, the smash product A#B can be made into a *-algebra simply by letting (ab)* = b*a* for all a ∈ A and b ∈ B.

Before we consider the basic example, we also look (very briefly) at the case of a left D-module algebra C. We have made it clear in the introduction why we also need this case. Definitions and results can be taken directly from [Dr-VD-Z].

1.7 Definition Let D be a regular multiplier Hopf algebra and C an algebra. Assume that D acts from the left on C (denoted d ▷ c). Then C is called a left D-module algebra if also

    d ▷ (cc′) = ∑(d) (d(1) ▷ c)(d(2) ▷ c′)

for all c, c′ ∈ C and d ∈ D.

The relevant twist map R is now a linear map from D ⊗ C to C ⊗ D and it is given by

    R(d ⊗ c) = ∑(d) (d(1) ▷ c) ⊗ d(2).

The smash product C#D is the algebra generated by C and D, subject to the commutation rules

    dc = ∑(d) (d(1) ▷ c)d(2)

for all c ∈ C and d ∈ D. It is denoted by CD. Again, the maps c ⊗ d ↦ cd and d ⊗ c ↦ dc are bijective from the spaces C ⊗ D and D ⊗ C respectively to the space CD.
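In the basic example (C = F(H), D = CK), the corresponding product from the introduction reduces on basis elements δ_h ⊗ k to (δ_h ⊗ k)(δ_{h′} ⊗ k′) = δ_h ⊗ kk′ when h ◁ k = h′, and 0 otherwise. The following sketch (our own illustration on the S₃ matched pair, dual to the A#B check given earlier in the introduction) verifies associativity of this product:

```python
def mul(p, q):
    """Product in S3: permutations of {0,1,2} as tuples, first q then p."""
    return tuple(p[q[i]] for i in range(3))

e = (0, 1, 2)
H = [e, (1, 0, 2)]
K = [e, (1, 2, 0), (2, 0, 1)]

def factor(g):
    for kp in K:
        for hp in H:
            if mul(kp, hp) == g:
                return kp, hp

def ract(h, k):                          # h ◁ k
    return factor(mul(h, k))[1]

# Elements of C#D as dicts {(h, k): coefficient}, where (h, k) stands for delta_h ⊗ k.
def smash_mul(x, y):
    out = {}
    for (h, k), c in x.items():
        for (h2, k2), c2 in y.items():
            if ract(h, k) == h2:         # (delta_h ⊗ k)(delta_{h'} ⊗ k') = delta_h ⊗ kk' iff h ◁ k = h'
                key = (h, mul(k, k2))
                out[key] = out.get(key, 0) + c * c2
    return {b: c for b, c in out.items() if c != 0}

basis = [{(h, k): 1} for h in H for k in K]
for x in basis:
    for y in basis:
        for z in basis:
            assert smash_mul(smash_mul(x, y), z) == smash_mul(x, smash_mul(y, z))
print("C#D is associative on this example")
```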

Remark that there is some possible confusion because we use the same notations as in the case of a right A-module algebra B. A similar problem will occur on later occasions. However, we will be consistent in the use of A and B in the first case and C and D in the second case. This should help to distinguish the cases.

As for a right module algebra, also here we have the *-algebra case. Then we assume that D is a multiplier Hopf *-algebra, that C is a *-algebra and we make the extra assumption that (d ▷ c)* = S(d)* ▷ c* for all c ∈ C and d ∈ D. Now the smash product C#D is again a *-algebra with the involution defined by (cd)* = d*c*.

    There is a procedure to convert formulas from one case, say the right module algebras,to the other, now the left module algebras. Indeed, if B is a right A-module algebra (as


in Definition 1.1) then we obtain a left D-module algebra C if we let C be the algebra B endowed with the opposite product and if we take for the multiplier Hopf algebra D the algebra A but with the opposite product and coproduct. If we also flip the notation for the action, as well as the tensor products, we see that the formula in Definition 1.1 becomes the one in Definition 1.7, that the formula for R in Proposition 1.2 converts to the formula for R as above in the other case, etc. Remark however that the formulas in Proposition 1.3 look the same, but that they are in fact interchanged.

    This is an important technical tool for this paper because we need to work with both cases.

    Now we consider the main motivating example, as announced already in the introduction.

1.8 Example Let H and K be groups.

i) Assume first that H acts on K from the left (as a group on a set). We use h ◮ k to denote the action of an element h ∈ H on a point k ∈ K. We assume that the action is unital in the sense that e ◮ k = k for all k ∈ K, where e denotes the unit in the group H. Let A be the group algebra CH of H. It is a Hopf algebra (and in particular, a regular multiplier Hopf algebra). Let B denote the algebra F(K) of complex functions on K with finite support, endowed with pointwise operations. The left action of H on K induces a right action of A on B by

(f ◭ h)(k) = f(h ◮ k)

whenever h ∈ H, k ∈ K and f ∈ B. Remark that we consider elements of the group H as sitting in the group algebra CH as usual. Then B is a right A-module algebra in the sense of Definition 1.1.

The smash product AB is spanned by elements of the form hf with h ∈ H and f ∈ B and the product is given by

(hf)(h'f') = (hh')((f ◭ h')f')

whenever h, h' ∈ H and f, f' ∈ B.

ii) Now assume that K acts on H from the right and use h ◭ k to denote the action of k ∈ K on h ∈ H. Again assume that the action is unital. Let C be the function algebra F(H) and let D be the group algebra CK. The right action of K on H gives a left action of D on C defined by

(k ◮ f)(h) = f(h ◭ k)

for all h ∈ H, k ∈ K and f ∈ C, making C into a left D-module algebra. In this case, the smash product CD is spanned by elements of the form fk with f ∈ C and k ∈ K and the product is given by

(fk)(f'k') = (f(k ◮ f'))(kk')

for all k, k' ∈ K and f, f' ∈ C.
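The multiplication rule of the smash product in part i) can be checked numerically for small finite groups. The following sketch is ours, not from the paper (all function names are hypothetical): it takes H = Z_2 acting on K = Z_3 by inversion, realizes the basis elements h·δ_k of the smash product AB, and verifies associativity.

```python
# Numerical check (not from the paper) of the smash product AB of
# Example 1.8.i, for H = Z_2 acting on K = Z_3 by inversion.
# A = CH, B = F(K); basis of AB: pairs (h, k) standing for h . delta_k.

from itertools import product

H = [0, 1]       # Z_2, written additively
K = [0, 1, 2]    # Z_3, written additively

def act(h, k):
    """Left action H x K -> K: the nontrivial element of H inverts Z_3."""
    return (-k) % 3 if h == 1 else k

def mult(x, y):
    """Multiply two elements of AB given as dicts {(h, k): coeff}.
    Basis rule: (h . delta_k1)(h' . delta_k2) = (hh') . delta_k2 when
    (delta_k1 <| h')(k2) = delta_k1(h' |> k2) is nonzero, i.e. act(h', k2) == k1."""
    out = {}
    for (h1, k1), c1 in x.items():
        for (h2, k2), c2 in y.items():
            if act(h2, k2) == k1:
                key = ((h1 + h2) % 2, k2)
                out[key] = out.get(key, 0) + c1 * c2
    return {k: v for k, v in out.items() if v != 0}

def basis(h, k):
    return {(h, k): 1}

# associativity of the smash product on all triples of basis elements
for (h1, k1), (h2, k2), (h3, k3) in product(product(H, K), repeat=3):
    a, b, c = basis(h1, k1), basis(h2, k2), basis(h3, k3)
    assert mult(mult(a, b), c) == mult(a, mult(b, c))
print("smash product is associative on this example")
```

The check relies only on the fact that h ↦ (f ↦ f ◭ h) is a right action, which in turn follows from h ◮ (h' ◮ k) = (hh') ◮ k; any other finite group action could be substituted for `act`.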


Observe that in the above examples, the algebras A and D are Hopf algebras and we do not need to think about coverings.

Also, when we endow these algebras with the obvious *-algebra structures, the actions satisfy the requirements. We have e.g. that f* = f̄, the complex conjugate of f, when f ∈ F(K), and h* = h⁻¹ so that S(h)* = h for h ∈ H. This will give the equality (f ◭ h)* = f* ◭ S(h)*.

Finally, if the group actions are trivial on the sets, they also are trivial on the algebras (remark that the counit maps each group element to 1). Then the smash products are simply the tensor products.

    2. Coactions and smash coproducts

    In this section, we will consider the objects, dual to actions and smash products as reviewedin the previous section. We will state and discuss notions and results as they have appearedelsewhere in the literature (see [VD-Z2] and [De2]). However, we start the treatment ina more general (and perhaps also more natural) setting. Further, we focus on motivationand on those aspects that are important for the main construction of the bicrossproductin the next two sections. In particular, as we also have done in the previous section, wewill be concerned with the proper coverings of our formulas and expressions therein.

    As we will see, the situation is somewhat more involved than for actions and smash prod-ucts.

    Coactions

It is natural to start in this case with the notion of a coaction. It has been introduced in [VD-Z2] and studied further in [De2]. In this paper, we will treat a slightly more general case. In Hopf algebra theory, it is possible to define a coaction of a coalgebra on a vector space. In the setting of multiplier Hopf algebras however, more structure is needed. In [VD-Z2], the setting is that of an algebra A and a regular multiplier Hopf algebra B. Then, a left coaction of B on A is an injective linear map Γ : A → M(B ⊗ A) satisfying

i) Γ(A)(B ⊗ 1) ⊆ B ⊗ A and (B ⊗ 1)Γ(A) ⊆ B ⊗ A,

ii) (ι_B ⊗ Γ)Γ = (Δ_B ⊗ ι_A)Γ.

The algebra structures of A and B are needed to be able to consider the multiplier algebra M(B ⊗ A). It would be too restrictive to assume that the coaction has range in the tensor product itself (see e.g. the example at the end of this section). Condition i) is used to give a meaning to the left hand side of the equation in the second condition. And to give a meaning to the right hand side of the second equation, the non-degeneracy of Δ_B ⊗ ι_A is used in order to extend it to M(B ⊗ A).

    For the new notion here, we begin with the following notation. We refer to Section 1 in[VD4] for more details. In particular, see Definition 1.5 and Proposition 1.6 in [VD4] forthe notion of an extended module of a bimodule.


2.1 Notation Let A be a vector space and let B be an algebra. Consider the B-bimodule B ⊗ A with B acting as multiplication left and right on the first factor in the tensor product. Denote by M₀(B ⊗ A) the completion of this module. Similarly, consider B ⊗ B ⊗ A as a (B ⊗ B)-bimodule with multiplication left and right on the first two factors. Denote by M₀(B ⊗ B ⊗ A) the completion of this module.

We will write the actions of B as (b ⊗ 1)y and y(b ⊗ 1) when b ∈ B and y is an element of B ⊗ A or M₀(B ⊗ A). By definition, these elements belong to B ⊗ A, also for y ∈ M₀(B ⊗ A). Similarly, we write the actions of B ⊗ B as (b ⊗ b' ⊗ 1)z and z(b ⊗ b' ⊗ 1) when b, b' ∈ B and z is an element of either B ⊗ B ⊗ A or M₀(B ⊗ B ⊗ A). Again by definition, we get elements in B ⊗ B ⊗ A, also for z ∈ M₀(B ⊗ B ⊗ A). This convention is in agreement with the notions that are commonly used when both A and B are algebras, so that M₀(B ⊗ A) and M₀(B ⊗ B ⊗ A) are natural subspaces of M(B ⊗ A) and M(B ⊗ B ⊗ A) respectively.

    Then we work with the following definition.

2.2 Definition Let A be a vector space and B a regular multiplier Hopf algebra. A left coaction of B on A is an injective linear map Γ : A → M₀(B ⊗ A) such that (ι_B ⊗ Γ)Γ = (Δ_B ⊗ ι_A)Γ on A. We call A a left B-comodule.

It is easy to show that the map ι_B ⊗ Γ has a natural extension to M₀(B ⊗ A). We simply multiply with an element of B in the first factor. Also Δ_B ⊗ ι_A can be extended to M₀(B ⊗ A) in a natural way. Now multiply with b ⊗ b', left or right. In the first case, use that b ⊗ b' ∈ (B ⊗ 1)Δ(B) and in the other case that b ⊗ b' ∈ Δ(B)(B ⊗ 1). These extensions will map into M₀(B ⊗ B ⊗ A) and the condition (ι_B ⊗ Γ)Γ(a) = (Δ_B ⊗ ι_A)Γ(a) is a well-defined equation in this space. For more details, again we refer to Section 1 in [VD4].

It does not seem natural to weaken the assumptions about B (as we can see above: we use that it is a regular multiplier Hopf algebra). On the other hand, remark that in the case where A is an algebra, the notion coincides with the one originally given in [VD-Z2] and recalled in the beginning of this section. It is clear that the assumption i) in the original definition above has become part of the definition by the use of the space M₀(B ⊗ A).

It is an easy consequence of the assumptions that (ε_B ⊗ ι_A)Γ(a) = a for all a ∈ A. The formula is given a meaning in the obvious way and the proof is the same as in the more restricted situation (see [VD-Z2] and [De2]). See also Example 2.10.i in [VD4].

We will also use the adapted version of the Sweedler notation for such a coaction. We will write Γ(a) = Σ_(a) a_(-1) ⊗ a_(0) so that coassociativity is implicit in the formula (ι_B ⊗ Γ)Γ(a) = Σ_(a) a_(-2) ⊗ a_(-1) ⊗ a_(0), when it is understood that Δ_B(a_(-1)) is replaced by Σ a_(-2) ⊗ a_(-1). We have more or less the same rules for covering (due to the two inclusions in i) above or, in the more general case, because of the definition). We need to cover the factor a_(-1) and possibly also a_(-2) (and so on) by elements in B, left or right. In this case however, one can not cover the factor a_(0).

    One might expect that now, we can immediately move to the definition of a comodulecoalgebra. Remember that in the previous section, there was no problem to define a modulealgebra already in the first definition. Here however, things are again more complicated.


    The cotwist map T

First, we will make a remark that explains why we can consider coactions as dual to actions. This point of view will be used several times for motivation and later, in Sections 3 and 4, it will really be used to prove results about duality. We will see how this gives the natural candidate for the cotwist map.

2.3 Remark Assume that we have regular multiplier Hopf algebras A, B, C and D and that A is paired with C and that B is paired with D (in the sense of [Dr-VD]). Use ⟨ · , · ⟩ to denote these pairings. Assume that we have a left action of D on C.

If all spaces are finite-dimensional, we can define a map Γ : A → B ⊗ A by

⟨Γ(a), d ⊗ c⟩ = ⟨a, d ◮ c⟩

for all a ∈ A, c ∈ C and d ∈ D. It is easily verified that (dd') ◮ c = d ◮ (d' ◮ c) will be the same as the property (ι_B ⊗ Γ)Γ = (Δ_B ⊗ ι_A)Γ.

Let us also look at the map R, now defined from D ⊗ C to C ⊗ D by

R(d ⊗ c) = Σ_(d) (d_(1) ◮ c) ⊗ d_(2).

If again we assume our spaces to be finite-dimensional, we can look at the adjoint map T : A ⊗ B → B ⊗ A. We find that

⟨a ⊗ b, R(d ⊗ c)⟩ = Σ_(d) ⟨a, d_(1) ◮ c⟩⟨b, d_(2)⟩
                 = Σ_(d) ⟨Γ(a), d_(1) ⊗ c⟩⟨b, d_(2)⟩
                 = ⟨Γ(a)(b ⊗ 1), d ⊗ c⟩

for all a ∈ A, b ∈ B, c ∈ C and d ∈ D, and so T(a ⊗ b) = Γ(a)(b ⊗ 1) for all a ∈ A and b ∈ B.

    Later in the paper, we will also have this type of duality in the more general case ofpairings between regular multiplier Hopf algebras (see Section 4).

The above argument suggests to define the cotwist map T. It will indeed play the role of R in this dual setting. We have the following result. It is just as in the more restrictive case, considered in [De2].

2.4 Proposition Consider a left coaction Γ as in Definition 2.2. Define a linear map T : A ⊗ B → B ⊗ A by T(a ⊗ b) = Γ(a)(b ⊗ 1). Then T is bijective and T⁻¹ is given by

T⁻¹(b ⊗ a) = Σ_(a) a_(0) ⊗ S⁻¹(a_(-1))b


for a ∈ A and b ∈ B (and where we use the Sweedler notation for Γ(a) as explained earlier). Moreover,

(Δ_B ⊗ ι_A)T = (ι_B ⊗ T)(T ⊗ ι_B)(ι_A ⊗ Δ_B)

on A ⊗ B.

There is no problem with the interpretation of the right hand side of the first formula as a_(-1) will be covered through b (in fact by S(b)). To prove that indeed we get the inverse of T, we need to show that e.g.

Σ_(a) a_(-1)S⁻¹(a_(-2))b ⊗ a_(0) = Σ_(a) ε_B(a_(-1))b ⊗ a_(0)

for all a ∈ A and b ∈ B. In order to do this properly, we need to multiply in the first factor with an element in B from the left. Doing so, we have covered the first two factors in the expression (Δ_B ⊗ ι_A)Γ(a), we get something in B ⊗ B ⊗ A and we can replace the expression a_(-1)S⁻¹(a_(-2)) with ε_B(a_(-1)). See e.g. Example 2.10.ii in [VD4].

In the second equation, when applied to a ⊗ b, we obviously can cover the left hand side of the equation with b' ⊗ 1 ⊗ 1. Also the right hand side will be covered if we multiply with this element from the right. We use that

((T ⊗ ι_B)(a ⊗ Δ_B(b)))(b' ⊗ 1 ⊗ 1) = (T ⊗ ι_B)((a ⊗ Δ_B(b))(1 ⊗ b' ⊗ 1)).

The equality is then easily obtained from the coassociativity rule in Definition 2.2.

This last formula, involving T and Δ_B, is completely expected when we think of T as the adjoint of the map R in the setting of duality. It is simply the adjoint of the equation

R(m_D ⊗ ι_C) = (ι_C ⊗ m_D)(R ⊗ ι_D)(ι_D ⊗ R) on D ⊗ D ⊗ C,

a formula which is the counterpart of the first formula in Proposition 1.3, but now for a left D-module C.

Here, as you might expect, this formula is essentially equivalent with the coassociativity of the coaction, just as the first formula in Proposition 1.3 is essentially equivalent with the module property. But observe again that the formula for T is of greater complexity than the formula for R, simply because in the last case, we do not have to worry about coverings.

We call this map a cotwist (as it will be used to twist the coproduct). Sometimes, the name (generalized) cobraiding is used (see e.g. [B-M]).
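To make the first statement of Proposition 2.4 concrete, here is the purely formal Sweedler-notation computation showing that the given map is a one-sided inverse of T; it is a sketch that deliberately ignores the covering issues discussed above.

```latex
% Formal sketch: T T^{-1} = identity on B \otimes A.
% Conventions: \Gamma(a) = \sum a_{(-1)} \otimes a_{(0)} and
% (\Delta_B \otimes \iota_A)\Gamma(a) = \sum a_{(-2)} \otimes a_{(-1)} \otimes a_{(0)}.
\begin{aligned}
T(T^{-1}(b \otimes a))
  &= \sum \Gamma(a_{(0)})\bigl(S^{-1}(a_{(-1)})b \otimes 1\bigr)\\
  &= \sum a_{(-1)}S^{-1}(a_{(-2)})b \otimes a_{(0)}
      && \text{(coassociativity of } \Gamma\text{)}\\
  &= \sum \varepsilon_B(a_{(-1)})b \otimes a_{(0)}
      && \text{(since } \textstyle\sum x_{(2)}S^{-1}(x_{(1)}) = \varepsilon(x)1\text{)}\\
  &= b \otimes a.
\end{aligned}
```

The other composition is handled in the same way.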

    The B-comodule coalgebra A

    Now we look for the right notion of a comodule coalgebra. For this, we assume that alsoA is a regular multiplier Hopf algebra.


It would be most natural to require, just as in the case of Hopf algebras, that

(ι_B ⊗ Δ_A)Γ(a) = Σ_(a) Γ₁₂(a_(1))Γ₁₃(a_(2)),

where we use the leg numbering notation. So, in this case, for p ∈ A, we have Γ₁₂(p) = Γ(p) ⊗ 1 and Γ₁₃(p) = (ι_B ⊗ σ_A)(Γ(p) ⊗ 1) where σ_A is the flip on A ⊗ A. When using the Sweedler notation, this equation reads as

(ι_B ⊗ Δ_A)Γ(a) = Σ_(a) a_(1)(-1)a_(2)(-1) ⊗ a_(1)(0) ⊗ a_(2)(0)

for all a ∈ A. In this last expression, the Sweedler notation is used, first for Δ_A and then for Γ. However, there is a serious problem with the coverings here. We can multiply with an element b in the first factor and this will cover the Γ's. Still, it seems not obvious how to cover a_(1) and a_(2) in the right hand side of the equation at the next stage. Because of this problem, we will not look at details here. We will consider this matter later.

Another, still more or less obvious choice (for defining the notion of a comodule coalgebra) is to look at the adjoint of the second formula in Proposition 1.3 (for a left action). Then we would require

(ι_B ⊗ Δ_A)T = (T ⊗ ι_A)(ι_A ⊗ T)(Δ_A ⊗ ι_B) on A ⊗ B.

This means that we essentially have used the covering with the element in B as before and so it is expected that the problem remains. This is indeed the case. There is no problem with the left hand side of the equation, but it is still not obvious how to treat the right hand side.

However, it is now easier to see how this problem can be overcome. The solution will be found from the result in the following lemma.

2.5 Lemma First consider A ⊗ B ⊗ A as an A-bimodule where the action of A is given by multiplication in the first factor. Denote the completed module by M₀¹(A ⊗ B ⊗ A). Then, one can define the map (ι_A ⊗ T)(Δ_A ⊗ ι_B) from A ⊗ B to M₀¹(A ⊗ B ⊗ A) in a natural way. Next, consider A ⊗ B ⊗ A as an A-bimodule with multiplication in the third factor and denote the completed module by M₀³(A ⊗ B ⊗ A). Then there is a natural map (T⁻¹ ⊗ ι_A)(ι_B ⊗ Δ_A) from B ⊗ A to M₀³(A ⊗ B ⊗ A).

In the first case, define e.g.

(a' ⊗ 1 ⊗ 1)((ι_A ⊗ T)(Δ_A(a) ⊗ b)) = (ι_A ⊗ T)(((a' ⊗ 1)Δ_A(a)) ⊗ b)

for all a, a' ∈ A and b ∈ B. Similarly on the other side. This proves the first statement. The second one is proven in the same way.


Given this result, it is possible to impose the required condition in the following form. Indeed, it makes sense to assume that

(ι_A ⊗ T)(Δ_A ⊗ ι_B) = (T⁻¹ ⊗ ι_A)(ι_B ⊗ Δ_A)T

on A ⊗ B. The left hand side gives an element in M₀¹(A ⊗ B ⊗ A) while the right hand side gives an element in M₀³(A ⊗ B ⊗ A). Because there is a natural intersection of these two spaces, the following assumption would be an implicit consequence of this equality. Therefore, let us make this assumption:

2.6 Assumption We assume that the elements of the form

((ι_A ⊗ T)(Δ_A(a) ⊗ b))(1 ⊗ 1 ⊗ a')
(1 ⊗ 1 ⊗ a')((ι_A ⊗ T)(Δ_A(a) ⊗ b))

are in A ⊗ B ⊗ A for all a, a' ∈ A and b ∈ B.

    We see from the discussion before, that this is a natural condition. We will say more aboutthis condition after we have stated the following definition (cf. Definition 1.6 in [De2]).

2.7 Definition Let A and B be regular multiplier Hopf algebras and assume that Γ : A → M₀(B ⊗ A) is a left coaction of B on A (as in Definition 2.2). Then we call A a left B-comodule coalgebra if also

(ι_B ⊗ Δ_A)T = (T ⊗ ι_A)(ι_A ⊗ T)(Δ_A ⊗ ι_B)

on A ⊗ B.

Compare this equality with the one in Proposition 2.4. It is very similar, but there, the covering problem was more easily solved. It is however also possible to treat the covering of the formula in Proposition 2.4 in a similar way as we did for the formula in Definition 2.7. It will not be necessary to impose an extra condition as in Assumption 2.6, as the analogue of this assumption will be automatic.

    Before we proceed, let us make a few more remarks about the Assumption 2.6.

First, it is easy to reformulate the assumption in terms of the map Γ itself. We simply have that the expressions

((ι_A ⊗ Γ)Δ_A(a))(1 ⊗ b ⊗ 1),

when multiplied with elements of A in the third factor, left or right, give a result in the tensor product A ⊗ B ⊗ A.

Next, fix a' ∈ A and b ∈ B and look at the map F : A → B ⊗ A given by F(a) = Γ(a)(b ⊗ a'). If in this map, the variable a is covered from the right, then the first part of the assumption is fulfilled. Similarly, take the map G : A → B ⊗ A given by G(a) = (1 ⊗ a')Γ(a)(b ⊗ 1) and assume that the variable a is covered from the left (or from the right). Then, the second part of the assumption is fulfilled.


There are reasons to believe that also the converse is true. Assume e.g. that A is an algebraic quantum group of discrete type so that there is a left cointegral h. We know that the right leg of Δ(h) is all of A. Then, from the assumption, with a = h, it will follow that the maps F and G above have variables that are covered. As the result is trivially true for algebraic quantum groups of compact type, one might expect that it is true for all algebraic quantum groups, possibly for all regular multiplier Hopf algebras.

Remark also that in the special case when Γ is a homomorphism, then the existence of local units for A, together with the fact that B ⊗ A = Γ(A)(B ⊗ 1), will imply that these variables are covered and hence that Assumption 2.6 is automatic.

In fact, it would be convenient to assume the stronger assumption because then it would be possible to cover the right hand side in the original equations

(ι_B ⊗ Δ_A)Γ(a) = Σ_(a) Γ₁₂(a_(1))Γ₁₃(a_(2)) = Σ_(a) a_(1)(-1)a_(2)(-1) ⊗ a_(1)(0) ⊗ a_(2)(0)

by multiplying with b ⊗ 1 ⊗ a' where a' ∈ A and b ∈ B. But for the moment, we see no good reason why we should assume this stronger property. We will just require Assumption 2.6. Then, if we multiply the above expressions with elements of the form b ⊗ 1 ⊗ a', from the right, where b ∈ B and a' ∈ A, we do get everything well-defined in B ⊗ A ⊗ A. And although this does not really fit in the covering approach, this will be sufficient for our purposes. We will come back to this point in the next section where this will be used (see e.g. the proof of Proposition 3.11). We refer to Section 5 for some further discussion on this problem.

Before we proceed to the next item, just remark that it easily follows from the assumptions that (ι_B ⊗ ε_A)Γ(a) = ε_A(a)1_B for all a ∈ A (see e.g. Proposition 1.8 in [De2]).

The twisted coproduct on A ⊗ B

    The next logical step is the introduction of the twisted coproduct.

As explained before (in the introduction), in this context, we can not define a coproduct without referring to the algebra structure. This means that we expect that it is not possible to associate a twisted coproduct on A ⊗ B when we (only) know that A is a left B-comodule coalgebra. The problem of course does not exist in the framework of Hopf algebras.

If we would consider the ordinary tensor product algebra structure on A ⊗ B, we can rely on the work done in [De2]. However, this is not the algebra structure on A ⊗ B that we are interested in. Nevertheless, let us try and see how far we can get. We will therefore recall some of the results from [De2].

Indeed, there is of course a natural candidate for the coproduct. Formally, we should define Δ on A ⊗ B by the formula

Δ(a ⊗ b) = (ι_A ⊗ T ⊗ ι_B)(Δ_A(a) ⊗ Δ_B(b)).


This is indeed dual to the formula of the product in the case of a module algebra (as given in Definition 1.4). Still formally, using the Sweedler notation, we can write

Δ(a ⊗ b) = Σ_(a)(b) a_(1) ⊗ Γ(a_(2))(b_(1) ⊗ 1) ⊗ b_(2)

for a ∈ A and b ∈ B.

The right hand side is easily covered when multiplying with elements of A in the first factor, left or right, and with elements of B in the last factor, also left or right. Unfortunately, this is not the type of covering that is needed to properly define a coproduct on the smash product. However, we also have the following possibility to cover this expression.

2.8 Lemma Take A ⊗ B with the tensor product algebra structure. Consider A ⊗ B ⊗ A ⊗ B as an (A ⊗ B)-bimodule where the actions are given by multiplication in the first two factors from the left and in the last two factors from the right. Then, there is a natural map (ι_A ⊗ T ⊗ ι_B)(Δ_A ⊗ Δ_B) from A ⊗ B to the completed module M₀(A ⊗ B ⊗ A ⊗ B).

Proof: First multiply from the left with such elements. For all a, a' ∈ A and b ∈ B, we have

(a' ⊗ b ⊗ 1 ⊗ 1) Σ_(a) (a_(1) ⊗ Γ(a_(2)) ⊗ 1) ∈ (1 ⊗ b ⊗ 1 ⊗ 1)(A ⊗ Γ(A) ⊗ 1) ⊆ A ⊗ B ⊗ A ⊗ 1.

The second factor contains elements of B and this will cover the factor b_(1) in the expression

Δ(a ⊗ b) = Σ_(a)(b) a_(1) ⊗ Γ(a_(2))(b_(1) ⊗ 1) ⊗ b_(2).

Next, we multiply with an element of the form 1 ⊗ 1 ⊗ a' ⊗ b' from the right. Clearly, the element b' will take care of the covering of the factor b_(2). We get a linear combination of elements of the form

Σ_(a) a_(1) ⊗ Γ(a_(2))(p ⊗ a') ⊗ q

with p, q ∈ B. Then this belongs to A ⊗ B ⊗ A ⊗ B by Assumption 2.6.

Similarly, we can multiply with elements of A ⊗ B from the right in the first two factors and we will get elements in A ⊗ B ⊗ A ⊗ B. However, if we multiply with such elements from the left in the last two factors, we would need an assumption similar to Assumption 2.6, but for the opposite map Tᵒᵖ, defined by Tᵒᵖ(a ⊗ b) = (b ⊗ 1)Γ(a). We will not need this for the moment.

It is an easy consequence of this result that the coproduct can be properly defined on A ⊗ B with the above formula, in the sense that we have a well-defined linear map Δ : A ⊗ B → M((A ⊗ B) ⊗ (A ⊗ B)) when A ⊗ B is considered with the tensor product algebra structure. Also showing that this coproduct is coassociative is straightforward. It follows from the formulas for


T, given in Proposition 2.4 and Definition 2.7. This coproduct will be regular when also the assumption for Tᵒᵖ is fulfilled. See [De2] for more details.

However, as we mentioned before, this is not (really) what we want. Eventually, we want to make the smash product algebra A#B into a multiplier Hopf algebra and so we need to define this coproduct on the smash product. Now, it takes again little effort to show that the result in the lemma will allow us to define the linear map Δ from A#B to the multiplier algebra M((A#B) ⊗ (A#B)) (no matter what the right action of A on B is) and to show that it is coassociative.

It is fair to say that this coproduct needs an algebra structure on A ⊗ B, but that it is essentially independent of the choice of this algebra structure.

    It is in the next section that we will see what kind of compatibility conditions betweenthe action and the coaction are needed for this coproduct to be an algebra homomorphismand not merely a coassociative linear map. Then, the approach will become different fromwhat is obtained in [De2]. This will be explained.

Also the behaviour of this coproduct with respect to the *-operation (in the case of multiplier Hopf *-algebras) can not be treated here properly.

If we look at the condition on the action necessary to have that the smash product A#B becomes a *-algebra with the obvious *-algebra structure (see a remark following Proposition 1.6 in the previous section), if we translate this to the case of a left D-module algebra C and finally, if we dualize this condition, we arrive at the natural requirement that

Γ(S_A(a)*) = ((ι_B ⊗ S_A)Γ(a))*

for all a ∈ A.

This condition however is not sufficient to ensure that Δ is a *-map on A#B. We need one of the extra conditions on Γ (namely the one that gives Γ(aa') in terms of Γ(a) and Γ(a') for a, a' ∈ A; see Section 3).

The only thing we can obtain is that the map σT(S_A( · )* ⊗ S_B( · )*) is involutive from A ⊗ B to itself (where σ is the flip). This is no surprise because in the previous section, the condition relating the *-structure with the action implies that R(( · )* ⊗ ( · )*)σ is involutive on B ⊗ A. And whereas this last property is sufficient to get that A#B is a *-algebra, the first one is not sufficient to have that Δ is a *-map. This is related with the fact that the antipode on A#B that we will get in Section 4 is not simply given by the antipode on A and the antipode on B.

    We come back to this problem in Section 3.

    Other cases and examples

    Now, we will do a few more things in this section, as we also did in the previous one. Wewill (very briefly) consider the case of a right C-comodule coalgebra D, we consider someobvious special cases and we will look at the example.


    Here is the definition of a right coaction. We will use the same symbols for the objectsassociated with this case as explained already earlier.

2.9 Definition Let C be a regular multiplier Hopf algebra and D a vector space. Now consider the vector space M₀(D ⊗ C) of elements x such that x(1 ⊗ c) and (1 ⊗ c)x are in D ⊗ C for all c ∈ C. Assume that Γ : D → M₀(D ⊗ C) is an injective linear map so that (Γ ⊗ ι_C)Γ = (ι_D ⊗ Δ_C)Γ. Then Γ is called a right coaction of C on D.

We have the same remarks as about left coactions. Now, the concept is in duality with that of a right action of A on B (when A is paired with C and B with D as before). The relevant formula is

⟨b ⊗ a, Γ(d)⟩ = ⟨b ◭ a, d⟩

for all a ∈ A and b ∈ B.

The other formulas can be obtained from the ones for a left coaction using the transformation tool described earlier. They can also be obtained by dualizing formulas about a right action.

The cotwist map T associated with this coaction is the map from C ⊗ D to D ⊗ C, defined by T(c ⊗ d) = (1 ⊗ c)Γ(d). In terms of this map, coassociativity is expressed as

(ι_D ⊗ Δ_C)T = (T ⊗ ι_C)(ι_C ⊗ T)(Δ_C ⊗ ι_D) on C ⊗ D.

2.10 Definition If also D is a regular multiplier Hopf algebra and if

(Δ_D ⊗ ι_C)T = (ι_D ⊗ T)(T ⊗ ι_D)(ι_C ⊗ Δ_D)

on C ⊗ D, then D is called a right C-comodule coalgebra.

Again, we have to assume the natural equivalent of Assumption 2.6 for this last formula, as discussed before.

The reader should be aware of the similarity of the two formulas for the cotwist map in the case of a left comodule coalgebra and that of a right comodule coalgebra. The order of the equations is different, just as in the case of left and right module algebras (see a remark in Section 1).

The formula for the coproduct is the same. In this case, it is defined formally on C ⊗ D by

Δ(c ⊗ d) = (ι_C ⊗ T ⊗ ι_D)(Δ_C(c) ⊗ Δ_D(d)).

The same remarks as for the case of a left B-comodule coalgebra A hold here.

Before we come to the example, let us say a few words about the case of a trivial coaction. The left coaction Γ of B on A is trivial if Γ(a) = 1_B ⊗ a for all a ∈ A. This implies that the cotwist map T is simply the flip map and that the coproduct is nothing else but the tensor coproduct. A similar remark holds for a trivial right coaction of C on D.


2.11 Example Let H and K be groups.

i) Now first assume that K acts on H from the right (as a group on a set) and use h ◭ k to denote the action of the element k ∈ K on the element h ∈ H. Assume that the action is unital. Let A be the group algebra CH of H and let B be the function algebra F(K) of complex functions with finite support on K. Define a map Γ : A → M(B ⊗ A) by

Γ(h)(δ_k ⊗ 1) = δ_k ⊗ (h ◭ k)

where h ∈ H, k ∈ K and where δ_k denotes the function on K that is 1 in k and 0 everywhere else. One easily verifies that Γ is a left coaction of B on A (in the sense of Definition 2.2) and that it makes A into a left B-comodule coalgebra (as in Definition 2.7). It is in duality with the left D-module algebra C as constructed in Example 1.8.ii (in the sense of Remark 2.3).

The expression for the coproduct on A ⊗ B is given by

Δ(h ⊗ δ_k) = Σ_{k'k''=k} (h ⊗ δ_{k'}) ⊗ ((h ◭ k') ⊗ δ_{k''})

when h ∈ H and k ∈ K. The summation is taken over elements k', k'' in K.

ii) Next, assume that H acts on K from the left and use h ◮ k to denote the action. Again assume that the action is unital. Let C be the function algebra F(H) and D the group algebra CK. Define Γ : D → M(D ⊗ C) by

(1 ⊗ δ_h)Γ(k) = (h ◮ k) ⊗ δ_h

for all h ∈ H and k ∈ K. Then we have a right coaction of C on D (as in Definition 2.9), making D into a right C-comodule coalgebra (as in Definition 2.10). It is in duality with the example in 1.8.i.

Now the coproduct on C ⊗ D is given by the expression

Δ(δ_h ⊗ k) = Σ_{h'h''=h} (δ_{h'} ⊗ (h'' ◮ k)) ⊗ (δ_{h''} ⊗ k)

for h ∈ H, k ∈ K and where again the summation is taken over elements h', h'' ∈ H.

We also have natural *-algebra structures and they are compatible with the coactions. These two cases are much simpler than the general case because A and D are Hopf algebras. Finally, if the group actions are trivial, then also the coactions are trivial and the cotwist maps are simply the flip maps. The coproduct becomes the tensor coproduct.
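As a small numerical sanity check (ours, not from the paper), the coproduct formula in part i) can be implemented for finite groups and tested for coassociativity. The sketch below takes K = Z_2 acting on H = Z_3 from the right by inversion; basis elements (h, k) stand for h ⊗ δ_k in A ⊗ B, and tensors are stored as dictionaries from tuples of basis elements to coefficients.

```python
# Numerical check (not from the paper) of the smash coproduct of
# Example 2.11.i, for K = Z_2 acting on H = Z_3 from the right.

H = [0, 1, 2]    # Z_3
K = [0, 1]       # Z_2

def act(h, k):
    """Right action H x K -> H: the nontrivial element of K inverts Z_3."""
    return (-h) % 3 if k == 1 else h

def delta(x):
    """Coproduct of a basis element (h, k):
    sum over k'k'' = k of (h, k') tensor (h <| k', k'')."""
    h, k = x
    out = {}
    for k1 in K:
        k2 = (k - k1) % 2          # k1 k2 = k in Z_2
        key = ((h, k1), (act(h, k1), k2))
        out[key] = out.get(key, 0) + 1
    return out

def delta_left(t):
    """(delta tensor id) on a dict keyed by pairs of basis elements."""
    out = {}
    for (x, y), c in t.items():
        for (u, v), c2 in delta(x).items():
            out[(u, v, y)] = out.get((u, v, y), 0) + c * c2
    return out

def delta_right(t):
    """(id tensor delta) on a dict keyed by pairs of basis elements."""
    out = {}
    for (x, y), c in t.items():
        for (u, v), c2 in delta(y).items():
            out[(x, u, v)] = out.get((x, u, v), 0) + c * c2
    return out

# coassociativity on all basis elements
for h in H:
    for k in K:
        t = delta((h, k))
        assert delta_left(t) == delta_right(t)
print("smash coproduct is coassociative on this example")
```

Coassociativity here boils down to the right action property (h ◭ k)◭ k' = h ◭ (kk'), mirroring the role the module property played for associativity of the smash product.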


    3. The smash coproduct on the smash product

    In this section, we start with a pair of two regular multiplier Hopf algebras A and B. Weassume that B is a right A-module algebra (as in Section 1) and that A is a left B-comodulecoalgebra (as in Section 2). We will freely use the notations of the two previous sections.

We consider the smash product AB (as reviewed in Section 1) and we will consider the coproduct on AB as already discussed in Section 2. We have seen that the algebra structure on AB is needed because the coproduct does not map AB into AB ⊗ AB, but rather into M(AB ⊗ AB). On the other hand, we also saw that this coproduct is not really dependent on the algebra structure.

In the previous section, we only considered this coproduct as a coassociative linear map. In this section, we will consider it as a coproduct on the smash product. We will see what kind of compatibility conditions are needed between the action and the coaction for this coproduct to be an algebra homomorphism. It then will take little effort to see that we get a regular multiplier Hopf algebra, but this will be done in the next section. The conditions are of course the natural generalizations of the well-known conditions imposed by Majid (see e.g. Theorems 6.2.2 and 6.2.3 in [M3]). Following the scope of this paper, we will spend some more time to motivate these conditions, discuss different forms of them and again concentrate on the problem of coverings.

Before we start this in a systematic way, let us first rewrite the formula for the coproduct, as given in the previous section (see the two formulas before Lemma 2.8). Using AB to denote the smash product, we can write formally

Δ(ab) = Σ_(a) (a_(1) ⊗ 1)Γ(a_(2))Δ(b)

whenever a ∈ A and b ∈ B. Now, we will first consider this coproduct on B, then on A and finally, we will consider the commutation rules so that, when defined on the algebra AB, we get a homomorphism. In the process of constructing the coproduct in this way, we will be concerned with the various conditions that are necessary and sufficient for this map to be an algebra map. In order to avoid notational conflicts, we will use Δ# for the coproduct on AB instead of merely Δ as we did in the previous section.
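When A and B are genuine Hopf algebras, there are no covering issues and the formula above can be written out completely in Sweedler notation. The following is a sketch, assuming the convention of the previous section that the coaction is written Γ(a) = Σ a_(-1) ⊗ a_(0) with a_(-1) ∈ B and a_(0) ∈ A:

```latex
% Hopf algebra case of the smash coproduct (a sketch, assuming
% \Gamma(a) = \sum a_{(-1)} \otimes a_{(0)} in B \otimes A):
\Delta_{\#}(ab)
   \;=\; \sum_{(a),(b)} a_{(1)}\,(a_{(2)})_{(-1)}\,b_{(1)}
          \;\otimes\; (a_{(2)})_{(0)}\,b_{(2)}
```

Both legs indeed live in AB: in the first leg, the element a_(1) of A is multiplied with the element (a_(2))_(-1) b_(1) of B. This is the coproduct of Majid's bicrossproduct.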

The coproduct Δ# on the algebras A and B

    We begin with the easiest part.

3.1 Proposition There is a non-degenerate homomorphism Δ# : B → M(AB ⊗ AB) given by

Δ#(b) = Δ(b).

This is a triviality. We know e.g. that (b′ ⊗ 1)Δ(b) and Δ(b)(1 ⊗ b′) are in B ⊗ B for all b, b′ ∈ B and also, because AB = BA, this easily implies that Δ#(b) ∈ M(AB ⊗ AB) for

all b. Observe that the injection of B ⊗ B in AB ⊗ AB is a non-degenerate homomorphism so that M(B ⊗ B) is sitting inside M(AB ⊗ AB) (see e.g. the results in Proposition 1.5 in Section 1). And because Δ is a non-degenerate homomorphism from B to M(B ⊗ B), it follows that Δ# will be a non-degenerate homomorphism from B into M(AB ⊗ AB).

Coassociativity of Δ# on B is an immediate consequence of coassociativity of Δ on B. We also get that

Δ#(B)(1 ⊗ B) = B ⊗ B and (B ⊗ 1)Δ#(B) = B ⊗ B

as sitting inside the multiplier algebra M(AB ⊗ AB).

The next step is already less obvious. First, we define Δ# on A and we prove some elementary properties.

3.2 Proposition There is a linear map Δ# : A → M(AB ⊗ AB) defined by

Δ#(a) = Σ_(a) (a_(1) ⊗ 1)Γ(a_(2)).

We have

(AB ⊗ 1)Δ#(A) = AB ⊗ A and Δ#(A)(B ⊗ A) = AB ⊗ A.

Proof: i) First we will show that Δ#(a), as in the formulation of the proposition, is a well-defined element in M(AB ⊗ AB). We follow an argument that we gave already in the previous section. First multiply with a′ ⊗ 1 from the left (with a′ ∈ A). We get

Σ_(a) (a′a_(1) ⊗ 1)Γ(a_(2)) ∈ (A ⊗ 1)Γ(A)

and because (B ⊗ 1)Γ(A) ⊆ B ⊗ A, we see already that

(AB ⊗ 1)Δ#(a) ⊆ (AB ⊗ 1)Γ(A) ⊆ AB ⊗ A.

We have looked closer at this argument in Example 3.5 of [VD4].

Next, multiply with b ⊗ a′ from the right (with a′ ∈ A and b ∈ B). We get

Σ_(a) (a_(1) ⊗ 1)Γ(a_(2))(b ⊗ a′) ∈ AB ⊗ A

because of Assumption 2.6 (as discussed in the previous section). So we see that also Δ#(a)(b ⊗ a′) ∈ AB ⊗ A. The two results together give that Δ#(a) is well-defined in M(AB ⊗ AB). This proves the first part of the proposition.

ii) We will now show that this map satisfies the equalities. The arguments are close to the ones used in i). Indeed, if we look at the different steps in the first part of the

proof above, we see that actually (AB ⊗ 1)Δ#(A) = AB ⊗ A. We need to use that B ⊗ A = (B ⊗ 1)Γ(A) and that A ⊗ A = (A ⊗ 1)Δ(A). On the other hand, when we look at the other side, we need the equality

A ⊗ B ⊗ A = ((ι_A ⊗ Γ)Δ_A(A))(1 ⊗ B ⊗ A)

because then we have AB ⊗ A = Δ#(A)(B ⊗ A). Now, this equality is an easy consequence of the basic Assumption 2.6 and the one in Definition 2.7.

Whereas coassociativity of Δ# on B was more or less obvious, this is not the case for Δ# on A. It is possible to argue (or verify) that Δ# is coassociative on A, but for this, we need Δ# on AB. This has been argued already in the previous section. We will come back to this problem later in this section (see Theorem 3.14) and in the next section (see a remark following the proof of Proposition 4.2).

We now look for necessary and sufficient conditions to ensure that Δ#(aa′) = Δ#(a)Δ#(a′) for all a, a′ ∈ A. We will first prove the following result.

3.3 Proposition Define a linear map P : B ⊗ A → B ⊗ A by

P(b ⊗ a) = Σ_(a) ((b ◁ a_(1)) ⊗ 1)Γ(a_(2)).

Then Δ#(aa′) = Δ#(a)Δ#(a′) for all a, a′ ∈ A if and only if

(ι_B ⊗ m_A)P13P12 = P(ι_B ⊗ m_A)

on B ⊗ A ⊗ A.

Before we prove this proposition, let us first make a few remarks about this map P and the condition imposed on it.

First notice that the map P can be written as a composition Top ∘ Rop where

Rop(b ⊗ a) = Σ_(a) a_(2) ⊗ (b ◁ a_(1))

Top(a ⊗ b) = (b ⊗ 1)Γ(a).

Compare with the maps T and R, introduced earlier, to understand the notation. It follows from earlier arguments that P is well-defined. We also have a case of iterated coverings (cf. Example 3.5 in [VD4]).

Also compare the equation satisfied by P with the first formula in Proposition 1.3. It is very similar but slightly different. The difference is mainly due to the fact that P maps B ⊗ A to itself whereas R maps this space into A ⊗ B. We will also come back to this later (see a remark after Assumption 3.9 below).

For a better understanding of the result and the proof, as well as for results to come, it turns out to be useful to introduce the following.

3.4 Notation Consider the right action of AB on B, given before in Section 1 (see a remark following Proposition 1.6), by

y ◁ (ab) = (y ◁ a)b

whenever a ∈ A and b, y ∈ B. Combine it with right multiplication of A on itself to get a right action of AB ⊗ A on B ⊗ A. So

(y ⊗ x) ◁ (c ⊗ a) = (y ◁ c) ⊗ xa

when a, x ∈ A, y ∈ B and c ∈ AB.

This action is not necessarily faithful, but it is unital and so we can extend it to the multiplier algebra M(AB ⊗ A). In particular, we can consider the action of the elements Δ#(a) for any a ∈ A. Then the following can be shown.

3.5 Lemma We have P(b ⊗ a) = (b ⊗ 1) ◁ Δ#(a) for all a ∈ A and b ∈ B.

Proof: The formula has to be read correctly as

(1 ⊗ x)P(b ⊗ a) = (b ⊗ x) ◁ Δ#(a)

for all x ∈ A. As we clearly have

(b ⊗ x) ◁ Δ#(a) = Σ_(a) ((b ◁ a_(1)) ⊗ x)Γ(a_(2))

whenever a, x ∈ A and b ∈ B, the result is true.

With this result, it is fairly easy to show one direction of Proposition 3.3. This is the content of the following lemma.

3.6 Lemma If Δ#(aa′) = Δ#(a)Δ#(a′) for all a, a′ ∈ A then

(ι_B ⊗ m_A)P13P12 = P(ι_B ⊗ m_A) on B ⊗ A ⊗ A.

Proof: For all a, a′ ∈ A and b ∈ B we have

(ι_B ⊗ m_A)P13P12(b ⊗ a ⊗ a′) = (ι_B ⊗ m_A)P13(((b ⊗ 1) ◁ Δ#(a)) ⊗ a′)
  = ((b ⊗ 1) ◁ Δ#(a)) ◁ Δ#(a′)
  = (b ⊗ 1) ◁ (Δ#(a)Δ#(a′))

and clearly also

P(b ⊗ aa′) = (b ⊗ 1) ◁ Δ#(aa′).

Strictly speaking, we should multiply these equations all the time with an element of A from the left in the second factor.

If on the other hand we have the condition on P, we see from the proof that

(b ⊗ 1) ◁ Δ#(aa′) = (b ⊗ 1) ◁ (Δ#(a)Δ#(a′))

for all b ∈ B. We cannot conclude that Δ#(aa′) = Δ#(a)Δ#(a′) for all a, a′ ∈ A as the action of AB ⊗ A on B ⊗ A is not necessarily faithful. To prove the converse in Proposition 3.3, we have to use the action of AB on AB, given by right multiplication. This is faithful as the product in AB is non-degenerate. This will be done in the next lemma.

3.7 Lemma If (ι_B ⊗ m_A)P13P12 = P(ι_B ⊗ m_A) then Δ#(aa′) = Δ#(a)Δ#(a′) for all a, a′ ∈ A.

Proof: Let a, x ∈ A and y ∈ B. Then

(xy ⊗ 1)Δ#(a) = Σ_(a) (xya_(1) ⊗ 1)Γ(a_(2))
  = Σ_(a) (xa_(1) ⊗ 1)((y ⊗ 1) ◁ Δ#(a_(2))).

If we replace a in this equation by the product aa′, and if we assume the condition on P, it follows from the remark made just before this lemma that

(xy ⊗ 1)Δ#(aa′) = Σ_(a),(a′) (xa_(1)a′_(1) ⊗ 1)((y ⊗ 1) ◁ Δ#(a_(2)a′_(2)))
  = Σ_(a),(a′) (xa_(1)a′_(1) ⊗ 1)((y ⊗ 1) ◁ Δ#(a_(2)) ◁ Δ#(a′_(2)))
  = Σ_(a) (xa_(1) ⊗ 1)((y ⊗ 1) ◁ Δ#(a_(2)))Δ#(a′)
  = (xy ⊗ 1)Δ#(a)Δ#(a′).

As the action of AB on AB is faithful, we get Δ#(aa′) = Δ#(a)Δ#(a′).

    Combining Lemma 3.6 with Lemma 3.7 we obtain Proposition 3.3.

Before we continue, let us look at another form of the condition that looks more like the one used in Hopf algebra theory. We can also prove the following result, but the precise coverings are quite involved. Moreover, the result is not really important for our approach. Nevertheless, let us consider it briefly. It is a good illustration of how complicated these coverings can be.

3.8 Proposition We have Δ#(aa′) = Δ#(a)Δ#(a′) for all a, a′ ∈ A if and only if

Γ(aa′) = Σ_(a),(a′) ((a_(-1) ◁ a′_(1)) ⊗ a_(0))Γ(a′_(2))

for all a, a′ ∈ A.

Proof: Of course, one first has to give a meaning to the second formula by using the right coverings. There are several possibilities. The easiest one is obtained if we multiply with an element of B in the first factor from the left.

Then the left hand side is alright. For the right hand side, we have a part of the formula like

b(a_(-1) ◁ a′_(1)) = Σ_(a′) ((b ◁ S(a′_(1)))a_(-1)) ◁ a′_(2)

and we see that first b will cover a′_(1) (through the action), then a_(-1) will be covered (by multiplication) and finally, a′_(2) will again be covered (through the action). So, a modified form of the right hand side of this formula for Γ(aa′) is covered (by multiplication with an element of B from the left in the first factor). Another way to fully cover the expression is by also multiplying from the right with elements of the form b ⊗ a where a ∈ A and b ∈ B. Then Assumption 2.6 is used, but still, things get quite complicated.

To prove the result, we actually need to use this last covering together with the following arguments.

The formula can be rewritten as Γ(aa′) = Γ(a) ◁ Δ#(a′). This is correct for all a, a′ if and only if

Σ_(a),(a′) (a_(1)a′_(1) ⊗ 1)Γ(a_(2)a′_(2)) = Σ_(a),(a′) (a_(1)a′_(1) ⊗ 1)(Γ(a_(2)) ◁ Δ#(a′_(2)))

for all a, a′ ∈ A. But the right hand side of this equation is equal to

Σ_(a) (a_(1) ⊗ 1)Γ(a_(2))Δ#(a′) = Δ#(a)Δ#(a′)

and the result follows.

We see that things get quite complicated if we want to do this completely rigorously. In any case, when we have Hopf algebras, no problems occur and the proof of the result in Proposition 3.8 becomes more or less obvious.

The coproduct Δ# on AB

Next, we want to have that Δ# is a homomorphism on the smash product AB. Since we have already shown that it is a homomorphism on A and on B (provided we assume the

right conditions), it remains to verify that the commutation rules ba = Σ_(a) a_(1)(b ◁ a_(2)) are respected. In other words, we need to have

Δ#(b)Δ#(a) = Σ_(a) Δ#(a_(1))Δ#(b ◁ a_(2))

in M(AB ⊗ AB) for all a ∈ A and b ∈ B. This requires three steps. We will need a way to commute elements of B with the first factor of Δ#(a), a way to commute elements of B with the second factor of Δ#(a) and finally, we will need a formula for Δ(b ◁ a).

We begin with the last property. Indeed, we have seen before how the coaction behaves with respect to the product in A. What we need here is how the coproduct of B relates with the action. This is precisely the dual situation. We know how to deduce one from the other.

We start with the formula given in Proposition 3.3. We had

(ι_B ⊗ m_A)P13P12 = P(ι_B ⊗ m_A)

on B ⊗ A ⊗ A where P is defined from B ⊗ A to itself by

P(b ⊗ a) = Σ_(a) ((b ◁ a_(1)) ⊗ 1)Γ(a_(2)).

Next, we transform these formulas to the case of a left action of D on C and a right coaction of C on D (cf. Definitions 2.9 and 2.10). Then we get the map P′ from D ⊗ C to itself, given by

P′(d ⊗ c) = Σ_(d) Γ(d_(1))(1 ⊗ (d_(2) ▷ c)).

It should satisfy the equation

(m_D ⊗ ι_C)P′13P′23 = P′(m_D ⊗ ι_C)

on D ⊗ D ⊗ C.

    on D D C. Finally, if we look for the adjoint of this map P, we find

    b a, P(d c) =

    (d)

    b a, (d(1))(1 (d(2) c))

    =

    (a),(d)

    (b a(1)) a(2), d(1) (d(2) c)

    = P(b a), d c

    for all a,b,c and d in A , B , C and D respectively. We see that P and P are each othersadjoints in this situation. Now, if we take the adjoint of the condition on P, we arrive atthe following natural assumption about P.

3.9 Assumption P23P13(Δ_B ⊗ ι_A) = (Δ_B ⊗ ι_A)P on B ⊗ A.

This formula should be compared with the formulas in Proposition 2.4 and Definition 2.7. The covering problem can be solved in two ways. Either we write the assumption as P13(Δ_B ⊗ ι_A) = P23^(-1)(Δ_B ⊗ ι_A)P on B ⊗ A. The left hand side is now covered if we multiply with elements of B in the second factor (left or right). The right hand side is covered with elements of B in the first factor (again left or right). This is similar as for the equation in Definition 2.7.

The other possibility goes as follows. Given a ∈ A and b ∈ B, we see that the map q ↦ (b ⊗ 1)P(q ⊗ a) from B to B ⊗ A has the variable covered from the left. Indeed,

(b ⊗ 1)P(q ⊗ a) = Σ_(a) (b(q ◁ a_(1)) ⊗ 1)Γ(a_(2))

and we have seen before that b and a will eventually cover q from the left. It follows that the equation in Assumption 3.9 is also well-covered if we multiply with an element of B from the left, either in the first or in the second factor.

Now, we are interested in another form of this equation. For this purpose, we let AB act from the right on B as we did before, but now we also consider AB ⊗ AB as acting on B ⊗ B. We get the following:

3.10 Proposition The assumption in 3.9 is fulfilled if and only if Δ(b ◁ a) = Δ(b) ◁ Δ#(a) for all a ∈ A and b ∈ B. This equation will be well-covered if we multiply from the left in the first factor with any element of B.

Proof: We start with the equation

(Δ_B ⊗ ι_A)P(b ⊗ a) = P23P13(Δ_B(b) ⊗ a)

with a ∈ A and b ∈ B. We apply ι_B ⊗ ι_B ⊗ ε_A. On the left hand side, we get Δ_B(b ◁ a) because (ι_B ⊗ ε_A)Γ(a) = ε_A(a)1. For the right hand side we get

Σ_(b) (ι_B ⊗ ι_B ⊗ ε_A)P23P13(b_(1) ⊗ b_(2) ⊗ a)
  = Σ_(a),(b) (ι_B ⊗ ι_B ⊗ ε_A)P23(((b_(1) ◁ a_(1)) ⊗ b_(2) ⊗ 1)Γ13(a_(2)))
  = Δ(b) ◁ Δ#(a)

(where we again have used (ι_B ⊗ ε_A)P(b′ ⊗ a′) = b′ ◁ a′). Remark that everything is well covered if we multiply with an element of B from the left, either in the first or in the second factor.

Also conversely, if Δ(b ◁ a) = Δ(b) ◁ Δ#(a) for all a, b, the assumption in 3.9 will be satisfied.
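In the Hopf algebra case, the condition of Proposition 3.10 can be made completely explicit. The sketch below unwinds Δ(b) ◁ Δ#(a) with Δ#(a) = Σ (a_(1) ⊗ 1)Γ(a_(2)), using the rule y ◁ (a′b′) = (y ◁ a′)b′ from Notation 3.4 and the (assumed) Sweedler convention Γ(a) = Σ a_(-1) ⊗ a_(0):

```latex
% Hopf algebra case of Proposition 3.10:
% \Delta(b \lhd a) = \Delta(b) \lhd \Delta_{\#}(a) becomes
\Delta(b \lhd a)
   \;=\; \sum_{(a),(b)} \bigl(b_{(1)} \lhd a_{(1)}\bigr)\,(a_{(2)})_{(-1)}
          \;\otimes\; \bigl(b_{(2)} \lhd (a_{(2)})_{(0)}\bigr)
```

This is the familiar compatibility between the action of A on B and the coproduct of B in Majid's construction.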

Now, we have taken care of the third step as mentioned before. Another step in the whole procedure is to commute elements of B in the second factor with the elements Δ#(a). In

other words, we look at a formula for (1 ⊗ b)Δ#(a) and we try to move b to the other side. Because this amounts to commuting B with A, there is no need for an extra condition here. This is what we get:

3.11 Proposition For all a ∈ A and b ∈ B we have

(1 ⊗ b)Δ#(a) = Σ_(a) Δ#(a_(1))((1 ⊗ b) ◁ Γ(a_(2))).

Here, we use ◁ to denote the right action of B ⊗ A on B ⊗ B, obtained from right multiplication by B in the first factor and the right action of A in the second factor. Again the action is extended to the multiplier algebras.

Proof: We first present a formal argument and we discuss the details about the necessary coverings later.

We know that (ι_B ⊗ Δ_A)Γ(a) = Σ_(a) Γ12(a_(1))Γ13(a_(2)). Therefore

(1 ⊗ b)Γ(a) = Σ_(a) a_(-1) ⊗ ba_(0)
  = Σ_(a) a_(-1) ⊗ (a_(0))_(1)(b ◁ (a_(0))_(2))
  = Σ_(a) (a_(1))_(-1)(a_(2))_(-1) ⊗ (a_(1))_(0)(b ◁ (a_(2))_(0))
  = Σ_(a) Γ(a_(1))((1 ⊗ b) ◁ Γ(a_(2))).

If we replace a by a_(2) and multiply with a_(1) from the left in the first factor, we get

Σ_(a) (a_(1) ⊗ b)Γ(a_(2)) = Σ_(a) (a_(1) ⊗ 1)Γ(a_(2))((1 ⊗ b) ◁ Γ(a_(3)))

and this is the required formula.

In the last formula, there is no problem with covering a_(1). We simply multiply with any element of A in the first factor from the left. This takes care of the last step in the argument above.

Next, we multiply all expressions in the first series of formulas with elements of the form b′ ⊗ a′ from the right, where a′ ∈ A and b′ ∈ B. Then we know from earlier discussions that at all stages in the calculation, everything is well-defined in B ⊗ A and that all equations hold.

Finally, we need to take b to the other side in the formula (b ⊗ 1)Δ#(a). This is fundamentally different from the case before, with b in the second factor, because the first factor of Δ#(a) is in AB. Indeed, we need an extra assumption for this:

3.12 Assumption We assume that T ∘ R = Top ∘ Rop.

Recall the formulas for the maps R and T (see Proposition 1.2 and Proposition 2.4):

R(b ⊗ a) = Σ_(a) a_(1) ⊗ (b ◁ a_(2))

T(a ⊗ b) = Γ(a)(b ⊗ 1)

where a ∈ A and b ∈ B. Similarly Rop and Top are defined:

Rop(b ⊗ a) = Σ_(a) a_(2) ⊗ (b ◁ a_(1))

Top(a ⊗ b) = (b ⊗ 1)Γ(a)

with a ∈ A and b ∈ B. The composition Top ∘ Rop is nothing else but the operator P as introduced in Proposition 3.3, while the composition of T with R is given by

(T ∘ R)(b ⊗ a) = Σ_(a) Γ(a_(1))((b ◁ a_(2)) ⊗ 1).

Then we have the following. It is essentially a reformulation of this condition and we see from it that this condition indeed allows us to move elements b in the expression (b ⊗ 1)Δ#(a) to the other side.
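Since both compositions map B ⊗ A to itself, Assumption 3.12 can be spelled out directly from the two displayed formulas. Formally (ignoring the coverings for a moment), it asks that for all a ∈ A and b ∈ B:

```latex
% Assumption 3.12 (T \circ R = T^{op} \circ R^{op}) written out on B \otimes A:
\sum_{(a)} \Gamma(a_{(1)})\bigl((b \lhd a_{(2)}) \otimes 1\bigr)
   \;=\;
   \sum_{(a)} \bigl((b \lhd a_{(1)}) \otimes 1\bigr)\,\Gamma(a_{(2)})
```

In other words, the two natural ways of combining the coaction Γ with the action on b must agree.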

3.13 Lemma For all a ∈ A and b ∈ B we have

(b ⊗ 1)Δ#(a) = Σ_(a) Δ#(a_(1))((b ◁ a_(2)) ⊗ 1).

Proof: For all a ∈ A and b ∈ B we get

(b ⊗ 1)Δ#(a) = Σ_(a) (ba_(1) ⊗ 1)Γ(a_(2))
  = Σ_(a) (a_(1)(b ◁ a_(2)) ⊗ 1)Γ(a_(3))
  = Σ_(a) (a_(1) ⊗ 1)P(b ⊗ a_(2)).

In this case, we can multiply with elements of A in the first factor from the left to cover every expression properly.

Now we use the assumption and we get

(b ⊗ 1)Δ#(a) = Σ_(a) (a_(1) ⊗ 1)Γ(a_(2))((b ◁ a_(3)) ⊗ 1)
  = Σ_(a) Δ#(a_(1))((b ◁ a_(2)) ⊗ 1).

Again, we multiply with an element of A on the left in the first factor and everything will be covered. Observe that b will cover a_(2) and a_(3) respectively through the action.

    Now, we are ready to conclude with the main result of this section.

3.14 Theorem Assume that we have two regular multiplier Hopf algebras A and B, that B is a right A-module algebra and that A is a left B-comodule coalgebra as before. Consider the associated maps and assume that we have

P(ι_B ⊗ m_A) = (ι_B ⊗ m_A)P13P12 on B ⊗ A ⊗ A

(Δ_B ⊗ ι_A)P = P23P13(Δ_B ⊗ ι_A) on B ⊗ A

as well as T ∘ R = Top ∘ Rop (i.e. P = T ∘ R) on B ⊗ A.

Then the coproduct Δ# is a (well-defined) homomorphism on the smash product AB.

Proof: We have defined Δ# on A and on B and we know that Δ#(ab) is equal to Δ#(a)Δ#(b). We know from Section 2 that this map is coassociative. From the results in Propositions 3.1 and 3.2, it follows that it is non-degenerate. Because of Proposition 3.3, we know that the first condition implies that Δ# is a homomorphism on A. And if we combine the results obtained using the other two conditions, we find for all a ∈ A and b ∈ B that

Δ#(b)Δ#(a) = Σ_(b) (b_(1) ⊗ b_(2))Δ#(a)
  = Σ_(a),(b) (b_(1) ⊗ 1)Δ#(a_(1))((1 ⊗ b_(2)) ◁ Γ(a_(2)))
  = Σ_(a) Δ#(a_(1))(Δ(b) ◁ Δ#(a_(2)))
  = Σ_(a) Δ#(a_(1))Δ#(b ◁ a_(2)).

We can cover properly if we multiply with an element of AB in the first factor.

This proves that Δ# is a homomorphism on the smash product AB.

The reader should compare this result with Theorem 6.2.3 of [M3]. Remark that it follows from the second condition that ε_B(b ◁ a) = ε_B(b)ε_A(a) and so, there is no need to add this condition as an assumption. When dealing with Hopf algebras, the first condition will imply that Γ(1_A) = 1_B ⊗ 1_A (because Δ#(1) = 1). In our setting however, we can still say that Δ#(1) = 1 as it makes sense to extend Δ# to the multiplier algebra. However, it is not obvious how to extend Γ itself to the multiplier algebra M(A). See also a remark in the proof of Theorem 4.3 and in Section 5. Nevertheless, this explains why we do not need to impose an extra condition of this type on Γ as is done in [M3].

    Other cases and examples

At this stage, we can complete the results considered in the previous sections about the *-algebra case. We get the following.

3.15 Theorem Let A and B be multiplier Hopf *-algebras. Assume that B is a right A-module algebra as before and also that

(b ◁ a)* = b* ◁ S(a)*

for all a ∈ A and b ∈ B. Then AB is a *-algebra for the involution defined by (ab)* = b*a*. Assume also as before that A is a left B-comodule coalgebra and that

Γ(S_A(a)*) = ((ι_B ⊗ S_A)Γ(a))*

for all a ∈ A. If furthermore, we have the assumptions as in Theorem 3.14, then Δ# is a *-homomorphism on the *-algebra AB.

Proof: We have seen already in Section 1 that AB becomes a *-algebra with the obvious involution. We also have seen in Section 2 that the condition on the coaction is a natural one, but that it is not sufficient to ensure that Δ# is a *-map. We will now see however that this condition, together with the property that Δ# is a homomorphism, will imply that it is a *-homomorphism.

In order to show that this is the case, we just have to verify that we have a *-map on the components A and B of AB. There is obviously no problem on B because Δ# coincides with Δ_B on B and by assumption, Δ_B is a *-homomorphism. So, we just have to verify that Δ#(a*) = Δ#(a)* for all a ∈ A.

Using the formula for Γ(S(a)*) we get

Δ#(S(a)*) = Σ_(a) (S(a_(2))* ⊗ 1)Γ(S(a_(1))*)
  = Σ_(a) ((ι ⊗ S)(Γ(a_(1)))(S(a_(2)) ⊗ 1))*

for all a. So, if we want to show that Δ#(S(a)*) = Δ#(S(a))* for all a ∈ A, it will be sufficient to prove that

Σ_(a) (S(a_(2)) ⊗ 1)Γ(S(a_(1))) = Σ_(a) (ι ⊗ S)(Γ(a_(1)))(S(a_(2)) ⊗ 1)

for all a. Observe that this equation does not involve the involutions anymore.

To prove this result, it is allowed to multiply with Δ#(a_(3)) from the right and prove the resulting equation. For the left hand side, we get

Σ_(a) Δ#(S(a_(1)))Δ#(a_(2)) = ε(a)Δ#(1) = ε(a)1

while for the right hand side, we obtain

Σ_(a) ((ι ⊗ S)(Γ(a_(1))))Γ(a_(2)).

And indeed, these two expressions coincide for, if we start with the formula

(ι_B ⊗ Δ_A)Γ(a) = Σ_(a) Γ12(a_(1))Γ13(a_(2)),

apply m(S ⊗ ι) on the last two factors and use that (ι_B ⊗ ε_A)Γ(a) = ε_A(a)1, we get the desired equality. We can cover by multiplying from the right with an element of B in the first factor and an element of A in the last factor.

Observe that the extra condition needed here to show that Δ# is a *-map is just that it is an algebra homomorphism on A. This is the condition in Proposition 3.3.

As we have done in Section 1 and Section 2, also here we will briefly consider the dual case.

So, we have again two regular multiplier Hopf algebras C and D. We assume that C is a left D-module algebra (as in Section 1) and that D is a right C-comodule coalgebra (as in Section 2). As before, we will also use Γ to denote this coaction. In this case, the twist maps R, Rop : D ⊗ C → C ⊗ D and the cotwist maps T, Top : C ⊗ D → D ⊗ C are given by

R(d ⊗ c) = Σ_(d) (d_(1) ▷ c) ⊗ d_(2)

Rop(d ⊗ c) = Σ_(d) (d_(2) ▷ c) ⊗ d_(1)

T(c ⊗ d) = (1 ⊗ c)Γ(d)

Top(c ⊗ d) = Γ(d)(1 ⊗ c).

Similarly, the map P : D ⊗ C → D ⊗ C, defined as Top ∘ Rop, in this case is given by

P(d ⊗ c) = Σ_(d) Γ(d_(1))(1 ⊗ (d_(2) ▷ c)).
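For comparison with Assumption 3.12 in this dual setting, the composition T ∘ R can be computed directly from the formulas above (a routine unwinding, with coverings formally ignored):

```latex
% Dual setting: T \circ R on D \otimes C, to be compared with
% P = T^{op} \circ R^{op} given above.
(T \circ R)(d \otimes c)
   \;=\; \sum_{(d)} \bigl(1 \otimes (d_{(1)} \rhd c)\bigr)\,\Gamma(d_{(2)})
```

The analogue of Assumption 3.12 then requires this to coincide with P(d ⊗ c) = Σ_(d) Γ(d_(1))(1 ⊗ (d_(2) ▷ c)).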

    The dual version of Theorem 3.14 is now the following:

3.16 Theorem Let C and D be two regular multiplier Hopf algebras. Assume that C is a left D-module algebra and that D is a right C-comodule coalgebra. With the notations as above, assume furthermore that

P(m_D ⊗ ι_C) = (m_D ⊗ ι_C)P13P23 on D ⊗ D ⊗ C

(ι_D ⊗ Δ_C)P