
Understanding Information Infrastructure

Ole Hanseth and Eric Monteiro

Manuscript 27. Aug., 1998


CHAPTER 1 Introduction

The emergence of information infrastructures
Infrastructures are different
New trends, new research
Our focus, approach and contribution
Brief outline of the book
Earlier papers

CHAPTER 2 Cases: The Internet and Norwegian Health Care

Internet
International Standardization Organization (ISO)
EDI and EDIFACT
Health care information infrastructures
EDI infrastructure in the Norwegian health care sector
Methodological issues

CHAPTER 3 Defining information infrastructures

Introduction
Roots
Defining II - identifying the aspects
Sliding boundaries
State of the art

CHAPTER 4 Standards and standardization processes

Introduction
The economy of scale argument
Types of standards
Internet standards
Standards for health care information infrastructures
Standardisation processes and strategies
OSI
EDIFACT
Internet
Standardization of health care IIs
Standards engineering
State of the art in research into standardisation

CHAPTER 5 Openness and flexibility

Introduction
Openness
The wider environment - a more rapidly changing world
Implications: the need for flexible infrastructures
Enabling flexibility to change
Hampering flexibility to change
The interdependencies between standardization and flexibility

CHAPTER 6 Socio-technical webs and actor network theory

Introduction
Technology and society: a brief outline
The duality of technology
Actor network theory - a minimalistic vocabulary
Actor networks meet structuration theory

CHAPTER 7 Inscribing behaviour

Introduction
Translations and design
Inscriptions and materials
Accumulating the strength of an inscription
Moving on

CHAPTER 8 Dreaming about the universal

Introduction
What is universalism?
Universalism at large: a few illustrations
The origin of universalism
How to create universals
Universals in practice
If universalism is dead, what then?

CHAPTER 9 Installed base cultivation

Introduction
Installed base
The irreversibility of the installed base
Information infrastructure as actor
Effects
Some examples
Strategy
From design to cultivation
Cultivating the installed base


CHAPTER 10 Changing infrastructures: The case of IPv6

Introduction
The IP protocol
Late 80s - July 1992
July 1992 - July 1994
July 1994 - today
Discussion
Conclusion

CHAPTER 11 Changing networks: Strategies and tools

Introduction
NORDUnet
On the notion of a gateway
Illustrating the roles of gateways
The political incorrectness of gateways


CHAPTER 1 Introduction

The emergence of information infrastructures

During the last three to four years a truly amazing transformation has taken place. From being the toy of square-eyed nerds confined to esoteric enclaves, the interest around the Internet and the communicative use of IT has exploded. That advertisements would contain Web home pages, that newspapers would include a special section on the Internet in addition to sports and finance, and that it would be possible to tell your mother that you are “on the Net” or “surfing,” is nothing short of astonishing.

In parallel with the increase in popular and media attention directed towards the Internet, the establishment of information infrastructures has been heavily promoted by political actors. The term “information infrastructure” (II) has increasingly been used to refer to integrated solutions based on the ongoing fusion of information and communication technologies. The term became popular after the US plan for National Information Infrastructures (NII) was launched. Following that, the term has been widely used to describe national and global communication networks like the Internet as well as more specialized solutions for communications within specific business sectors. The European Union has followed up, or rather copied, the ideas in its Bangemann report (Bangemann et al. 1994). In this way, the shape and use of information infrastructures has been transformed into an issue of industrial policy (Mansell, Mulgan XX).


The launching of ambitious political action plans like those mentioned above is to a large extent an effect of the success of the Internet - the great potential of such technologies has been disclosed to us through the use of the Internet. But the plans are, over and above that, responses to a much broader trend in technological development and diffusion of which the Internet is also a part: the development of information and communication technologies. As a result of this trend, one (in particular technology policy and strategy makers) prefers to talk about ICT rather than about IT or communication technologies separately. This integrated view of ICT is the result of a long-term trend of integrating telecommunication and information technologies - leading to a convergence of these technologies.

This convergence may be described as two parallel and, in principle, independent processes: information technologies “creeping into” telecommunication technologies, and telecommunication technologies “creeping into” information systems. IT has crept into telecommunication first of all through the “digitalization” of telecommunication, i.e. traditional telecommunication components like switches and telephones are changed to digital rather than analog technologies. The latter process has evolved as information systems have been enhanced through the use of telecommunication technologies. Remote users have been given access to systems independent of physical location, and independent systems have been integrated through information exchange based on services like electronic data interchange (EDI). In addition, some integration processes which might be located somewhere in the middle between these two have unfolded. This includes, for instance, the development of technologies enabling the public telephone infrastructure to be used as an interface to information systems. One example is solutions allowing bank customers to check their accounts directly using the telephone.

In total, the convergence of ICT has opened up a vast array of new uses of the technologies. The “informatization” of telecommunication has opened up lots of new, enhanced telecommunication services, and similarly, the “telecommunicatization” of information systems has opened up an equally large range of new information systems supporting information sharing and integrating processes at a global level. The range of new solutions that seem useful and that may be developed and installed may be just as large as the number of traditional information systems developed.

Some examples of solutions of this kind, which may be available to us not too far into the future, are:


• All goods might be bought through electronic commerce services. This implies that the order, inventory, invoicing and accounting systems of all companies in the world, and all systems controlling customers’ bank accounts in the world, will be integrated.

• Global companies are integrating all their information systems globally, at the same time as production control and logistics systems are integrated with those of all their suppliers and customers.

• Individuals can access all information services from their mobile phone: the information systems they are authorized users of, and of course the Internet - which implies that they can get access to all newspapers as well as TV channels.

The infrastructures mentioned so far can be seen as mostly shared by the “global community.” However, information infrastructures have also been developed along a different, a third, path. Inside individual corporations, the number of information systems has been growing continuously. At the same time, existing systems have become increasingly integrated with each other. Most companies do not have just a collection of independent systems. The integration of and interdependence between the systems imply that they should rather be seen as an infrastructure - independent of their geographical distribution and the use of telecommunication. But this shift is of course strengthened by the fact that companies are integrating their growing number of systems at the same time as they (drawing on the technological development sketched above) are becoming more geographically distributed (global) and integrating their systems more closely with others’.

There is also a fourth path, which concerns the increasing degree to which ICT is penetrating and becoming embedded into our everyday lives (Silverstone and Hirsch 1992; Sørensen and Lie 1996). As individuals, we are using an increasing number of tools, systems, and services in an increasing part of our lives. ICT is not just represented in a limited set of tools we use for specific purposes. We are rather using ICT, in one form or another, whatever we are doing and wherever we are. In this way ICT is becoming a basic infrastructure for “all” our dealings in the world. We are “domesticating” the technology.

All these four trends are taking place in parallel. Just as information and communication technologies are converging, information systems and telecommunication services are increasingly becoming linked to each other. The number of links is increasing.

Infrastructures are different

The broad trends outlined above, we argue, require a re-conceptualization of the I(C)T solutions we are developing. If one is concerned with the development of the “information systems of the future,” the phenomena we are dealing with should be seen as information infrastructures - not systems. This reconceptualization is required because the nature of the new ICT solutions is qualitatively different from what is captured by the concepts of information systems underlying the development of IT solutions so far. It is also significantly different from traditional telecommunication systems - and even from the combination of the two. This reconceptualization is what we are striving for with this book. We will here briefly motivate the necessity of such a shift.

Understanding information infrastructures requires a holistic perspective - an infrastructure is more than the individual components. Successful development and deployment of information infrastructures requires more than a combination of traditional approaches and strategies for the development of telecommunications solutions and information systems. Although these approaches complement each other, they also contain important contradictions, and accordingly brand new approaches are required.

The infrastructures we are addressing can still to some extent be seen as information systems, as they contain everything you find in an IS. But an infrastructure is something more than an IS. In line with the brief sketch of the emergence of ICT, we can say that, through the “telecommunicatization” process, information infrastructures are traditional information systems plus telecommunication technology. This is of course correct, but still a bit too simple. A global infrastructure where any goods might be purchased is more than just an IS plus telecommunication. And seen from the perspective of the “informatization” of telecommunication, an information infrastructure is certainly more than telecom - or telecom with information systems added.

Traditional approaches to information systems development are implicitly based on assumptions where the information systems are closed, stand-alone systems used within closed organizational limits. They are assumed to be developed within a hierarchical structure - a project (managed by a project leader and a steering group) - which is part of a larger hierarchical structure - the user organization (or the vendor organization in the case of a commercial product). Telecommunication systems, on the other hand, are global. The most important design work - or the decisions at least - is taken care of by standardization bodies (CCITT and ITU). When developing infrastructures, the focus on closed, stand-alone systems has to be replaced by one focusing on infrastructures as open and global, as is the case for the development of telecommunication technologies. However, there are other parts of the telecommunication approach which are more problematic.

Characteristic of traditional telecommunication solutions has been their stability. This is in particular true of their functionality and user interface. To put it simply, the basic functionality has been stable for more than a hundred years. A telephone service has one function: the user can dial a number, talk to the person at the other end, and hang up when finished. As telecommunication got “informationalized,” however, this started to change, and new functions (for instance, you can transfer your phone “virtually” to another number, you may use it as an alarm clock, etc.) have been added. But the stability of the functionality is a basic precondition for the approaches and strategies followed in developing telecommunication technologies.

What has been in focus has been the improvement of the technologies invisible to the users. At the national level, the telecommunication infrastructures have been built and operated by national monopolies since about the turn of the century (Schneider and Mayntz). Monopolies have dominated this technology as its size and interconnectedness make this “natural,” and most investments have been made based on long time horizons, usually 30 years (ibid.). All technologies are designed and specified as global (universal) standards. Such standards have been seen as absolutely required to enable smooth operation and use of the infrastructure. However, the development of such standards takes time - usually about ten years for each standard. And under the existing conditions - the simple user interface with one (or maybe three: dial, talk, hang up) operations not modified for 100 years, the long-term planning of the infrastructure, and the limited number of actors, one for each country - this approach has worked pretty well.

Information systems development, however, has very different - or rather opposite - characteristics. While telephone functionality and user interface have been extremely stable and simple, information systems are characterized by very rich and highly dynamic functionality. Information systems are closely tied to the working processes they support. These processes are inscribed into the systems, making them unique and local - not universal. The environments of information systems are highly dynamic. And the information systems are in themselves a driving force in changing the very work practices and other environmental elements they are designed to support.

In the case of telecommunication technologies, the stability of the functionality and user interface has made users irrelevant in the design process. The user interface has been given, and one could concentrate on technological issues only. For information systems the situation is the opposite. As the links between the technology and the users are so rich and dynamic, dealing with the interaction between technological and human, social, and organizational issues has been of utmost importance. (This fact is not challenged by the strong technological bias of most engineers involved in IS development, struggling to turn IS development into an ordinary technical engineering discipline.)

Future information infrastructures will be just as dynamic as our information systems have been so far. They will also be very heterogeneous (cf. the combined vertical and horizontal integration of information systems).

So, can we find an approach to infrastructure development which accounts for the dynamics and non-technical elements of information systems while at the same time delivering the standards enabling the required integration - an approach which works for technologies more heterogeneous and dynamic than telephone services have been up to now? This is all too big a question to be answered in one book. However, that is the question we want to inquire into. Our main focus is the tension between standardization and flexibility, which we believe is an important element of the bigger question. And in discussing this tension we will in particular be concerned with the interplay and interdependencies between technological issues on the one hand and social, organizational, and human issues on the other.

New trends, new research

“Everybody” observing the trends briefly described above has concluded that brand new technologies are on their way. However, large groups of IS and IT developers seem to conclude that the new technology is still software and information systems, so that good old software engineering and information systems development approaches still do.1

1. These assumptions are very visible in the methodological framework developed for engineering European-wide telematics applications (..).

Among researchers, however, the announcement of the Clinton/Gore and Bangemann reports triggered new activities. Several researchers soon concluded that the solutions envisioned in the reports were new, and that, as we did not have any experience in developing such solutions, we did not have the required knowledge either (several papers in Branscomb and Kahin 1996). The issue identified as probably the key one was standardization. It was considered obvious that the new infrastructures require lots of standards. Equally obvious, existing standardization practices and approaches are not at all sufficient to deliver the standards needed. An early conclusion was that the Internet experience is the most valuable source of new knowledge about these issues (Branscomb and Kahin 1996).

Science and Technology Studies has been a rapidly growing field during the last 10-20 years. Neither standards nor infrastructures have been much in focus there. However, there are a number of studies within this field which give us important knowledge about standards, useful when developing information infrastructures, and which we will draw upon in this book (Geof, Marc, Leigh, ..).

IT infrastructure has also become a popular topic within information systems research focusing on the situation within individual business organizations (Broadbent, Weill XX).

Our focus, approach and contribution

II is a vast field. It covers all kinds of technologies and all kinds of use and use areas. It involves lots of political, social, organizational, and human aspects and issues - from the development of industrial policy at the national, regional (EU), or even global level within the G7 forum, to the micro politics in the everyday activities between people involved in the design and use of the technology. And all these issues interact; they are interdependent and intertwined. Lots of research into all of these issues and all combinations is important. In this context, this book will just cover a minor aspect of infrastructures. But an important one, we believe. Our focus is on standardization, and in particular the tension between standardization and flexibility. In inquiring into this tension we will in particular be concerned with the interplay between technical and non-technical (social, organizational, human, etc. (Latour, xx)) issues.

We focus on information infrastructures. By this term we mean IT-based infrastructures at the application level, not lower-level IT-based telecommunication networks like, for instance, ATM networks or wireless networks. We see information infrastructures as “next generation” information systems. An information infrastructure can be described as an information system, except that it is shared by a large user community across large geographical areas, such that it might more appropriately be seen as an infrastructure than as a system.

Our motivation behind this book is to contribute to a firmer understanding of the challenges involved in establishing a working information infrastructure. Without such a basis, political action seems futile. The potential and probable impacts of information infrastructures in our everyday lives and at work, in terms of regional development and (lack of) distribution, urge us to develop our ability to make informed judgements. Closing your eyes or resorting to political slogans cannot make a sound strategy.

As the information infrastructures of the future have yet to materialise — they are currently in the making — we are necessarily aimed at a moving target. Our analysis in this book is dominated by continuity: we expect and argue that future information infrastructures will be an extension, combination, substitution and superimposition of the bits and pieces that already exist (Smarr and Catlett 1992). Hence, the experiences acquired so far are relevant. The aim of this book, then, is to paint the emerging picture of information infrastructures based on a critical interpretation of the fragments of existing experience.

Our intention in writing the book is to address what we see as (some of) the core characteristics of information infrastructures, i.e. those aspects making information infrastructures different from information systems while at the same time being critical in their development and use. These are aspects which our research and experience from the information systems field cannot tell us how to deal with properly.

As will be elaborated at length later, establishing a working information infrastructure is a highly involved socio-technical endeavour. Developing a firmer understanding, then, amounts to developing a reasonable account of these socio-technical processes: the actors, institutions and technologies that play a role. Seemingly, we have translated design of information infrastructures from the relatively manageable task of specifying a suitable family of technical standards and protocols to the quite unmanageable task of aligning innovations, national technology policies, conditions for commercially viable experimentation, governmental intervention and so forth. A perfectly reasonable question, then, is to inquire whether this is doable at all: is not design of information infrastructures a comforting, but ultimately naive, illusion?


Kraemer and King (1996), for instance, take a long stride in this direction in their appropriation of the NII initiative in the United States. They basically present NII as a political-economic negotiation where “there is no design (...) [only] order without design” (ibid., p. 139). Against this background, there seems to be little space for influencing the shaping of an information infrastructure through design related decisions.

Our handle on this is slightly different. We seek to make visible, not side-step, the issues of the political-economic negotiations. By making the inscriptions explicit, the intention is to pave the road for subsequent scrutiny and discussion. In this sense, we pursue a slightly, hopefully not overly, naive approach where we argue for making the most out of the available options rather than falling back into apathy or disillusion.

The fact that information infrastructures are established through complex and vast processes implies that the notion of “designing” them needs to be critically reassessed. The connotations and assumptions of design are too much biased towards being in control of the situation. In relation to information infrastructures, we argue that it is more reasonable to think of design in terms of strategies of intervention and cultivation.

Brief outline of the book

Chapter 2 presents two examples of information infrastructures: the Internet and the information infrastructure for the Norwegian health care sector. These two cases illustrate two different types of infrastructures. They also illustrate two different development approaches. The Internet is based on an evolutionary, prototype-oriented approach. The Norwegian health care infrastructure has been pursued through a specification-driven approach where the focus has been on the specification of European standards for health care information exchange. These two development efforts are also different in the sense that the Internet has been growing rapidly throughout its life from the very beginning - an indisputable success. The development of the Norwegian health care infrastructure has followed a different pattern. It started with a tremendous success for one actor building the first limited network for transmission of lab reports to GPs. That solution was soon copied by several others. After that, however, the infrastructure has developed very, very slowly.


Chapter 3 critically analyses the notion of information infrastructure. The concept is not strictly defined, but rather characterized by six key aspects. These are the aspects, we argue, that make infrastructures qualitatively different from other information systems. We arrive at these aspects by presenting and analysing a number of infrastructure definitions provided by others, including the one used in the official documents presenting the US Government plan for the building of the National Information Infrastructure. The six aspects are: enabling, shared, open, socio-technical, heterogeneous, and installed base.

Infrastructures have a supporting or enabling function. This is opposed to being especially designed to support one way of working within a specific application field.

An infrastructure is one irreducible unit shared by a larger community (or collection of users and user groups). An infrastructure is irreducible in the sense that it is the same “thing” used by all its users (although it may appear differently to them); it cannot be split into separate parts being used by different groups independently. However, an infrastructure may of course be decomposed into separate units for analytical or design purposes. The fact that infrastructures are shared implies that their parts are linked, and the links are defined by shared standards. This means that standards are not only economically important but a necessary constituting element.

IIs are more than “pure” technology; they are rather socio-technical networks. This is true for ISs in general, as they will not work without support people and without the users using them properly. For instance, a flight booking system does not work for one particular user unless all booked seats are registered in the system. But this fact is largely ignored in thinking about the design of information systems as well as infrastructures.

Infrastructures are open. They are open in the sense that there are no limits to the number of users, stakeholders, vendors involved, nodes in the network and other technological components, application areas, network operators, etc. This defining characteristic does not necessarily imply the extreme position that absolutely everything is included in every infrastructure. However, it does imply that one cannot draw a strict border saying that there is one infrastructure for what is on one side of the border and others for the other side, and that these infrastructures have no connections.

Infrastructures are heterogeneous. They are so in different ways. For instance, they are connected into ecologies of infrastructures as illustrated above; they are layered upon each other as in the OSI model; they are heterogeneous as they include elements of different qualities like humans and computers; etc. They are also heterogeneous in the sense that the seemingly same function might be implemented in several different ways.

Building large infrastructures takes time. All elements are connected. As time passes, new requirements appear which the infrastructure has to adapt to. The whole infrastructure cannot be changed instantly - the new has to be connected to the old. In this way the old - the installed base - heavily influences how the new can be designed. Infrastructures are not designed from scratch; they rather evolve through the “cultivation” of a shared, open, socio-technical, heterogeneous installed base.

The remainder of the book views information infrastructures through this lens. We subsequently look deeper into the above mentioned aspects of infrastructures, and then gradually move towards design related strategies.

Based on the assumption that standards play crucial roles in relation to infrastructures, chapter 4 spells out the different kinds of standards defining the Internet, and the standards defined through the standardization effort organized by CEN (CEN TC/251), the organization given the authority to set European standards for health care information exchange. Different types of standards are identified, and the processes through which standards are worked out are described. The organisation of the standardisation processes varies significantly.

The openness of infrastructures is addressed in chapter 5 and might be illustrated by an example from health care: a hospital is exchanging information with other medical institutions, even in other countries. It is exchanging information with social insurance offices and other public sector institutions, it is ordering goods from a wide range of companies, etc. These companies are exchanging information with other companies and institutions. Hospital doctors might be involved in international research programmes. Accordingly, a hospital is sharing information with virtually any other sector in society. Drawing a strict line between, for instance, a medical or health care infrastructure and an electronic commerce infrastructure is impossible. However wide an infrastructure’s user groups or application areas are defined, there will always be something outside which the infrastructure should be connected to.

Openness implies heterogeneity. An infrastructure grows by adding new layers or sub-infrastructures. Over time, what is considered to be separate or part of the same will change. Infrastructures initially developed separately will be linked together. These evolving processes make infrastructures heterogeneous in the sense that they are composed of different kinds of components.


Open worlds, like those of standards and infrastructures, are dynamic and changing. To adapt to such change, infrastructures and standards must be flexible. They also need to be flexible to allow some kind of experimentation and improvement as users gain experience.

Standards are neither easily made nor easily changed once widely implemented. Standardization means stability. The openness of infrastructures implies that the range and scope of standards must change over time, and so will their relationships to other standards. This chapter inquires into the implications of considering infrastructures open: the need for flexibility, and the rather intricate relationships between flexibility and change on the one hand and standardization and stability on the other.

Chapter 6 outlines a theoretical framework we argue is relevant for appropriating the socio-technical aspect of information infrastructures. The framework is actor-network theory (ANT), borrowed from the field of science and technology studies (STS) (REF XX Latour, Bijker, Law). This chapter motivates the relevance of ANT by comparing and contrasting it with alternative theoretical frameworks, in particular structuration theory (Orlikowski, Walsham XX). Other scholars within information systems research have also used ANT (Walsham ifp8.2, Leigh, Bloomfield et al. XX).

One key concept in actor network theory is that of “inscriptions.” This is explored at length in chapter 7. The concept explains how designers’ assumptions about the future use of a technology, described as programs of action, are inscribed into its design. Whether the technology in fact will impose its inscribed program of action depends on the extent to which the actual program of action is also inscribed into other elements like, for instance, documentation, training programs, support functions, etc., i.e. into a larger network of social and technological elements (humans and non-humans).

This chapter analyses in detail what kinds of programs of action are inscribed into two specific standardized EDIFACT message definitions for health care. We look at which elements - ranging from the atomic units of the message definitions to the overall organization of the standardization work - the various programs of action are inscribed into, as well as how these elements are aligned with each other.

So far we have inquired into the core aspects (as we see them) of information infrastructures. Starting in chapter 8, we turn more towards design related issues. In chapter 8 we look into the basic assumptions underlying most infrastructure development work, in particular standardization activities.


Most information infrastructure standardization work is based on a set of beliefs and assumptions about what a good standard is. These beliefs are strong - but are indeed beliefs, as they are not based on any empirical evidence concerning their soundness. They have strong implications for what kinds of standards are defined and their characteristics, as well as for the choice of strategies for developing them. Beliefs of this kind are, in other contexts, often called ideologies. Hence, the focus of this chapter is on the dominant standardization ideology: its content, its history, how one tries to apply it, what really happens, and its shortcomings. We will argue that these shortcomings are serious. In fact, the dominant standardization approach does not work for the development of future information infrastructures. New approaches based on different ideologies must be followed to succeed in the implementation of the envisioned networks. Chapters 9 through 11 are devoted to spelling out viable alternative design strategies to this dominating, mainstream one influenced by universalism.

The focus on infrastructure as an evolving “installed base” in chapter 9 implies that infrastructures are considered as always already existing; they are NEVER developed from scratch. When “designing” a “new” infrastructure, it will always be integrated into, and thereby extend, others, or it will replace one part of another infrastructure.

Within the field of institutional economics, some scholars have studied standards as part of a more general phenomenon labelled “self-reinforcing mechanisms” and “network externalities” (REFS XX). Self-reinforcing mechanisms appear when the value of a particular product or technology for individual adopters increases as the number of adopters increases. The term “network externalities” is used to denote the fact that such a phenomenon appears when the value of a product or technology also depends on aspects external to the product or technology itself. Chapter 9 briefly reviews these conceptions of standards and the phenomenon we call the irreversibility of the installed base - the causes of this phenomenon as well as its effects. Furthermore, we look at how the irreversibility problem appears in relation to information infrastructures, its negative implications, and the need for flexibility and change. This leads to a re-conceptualisation of the very notion of design of information infrastructure to something closer to a cultivation strategy.
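To make the self-reinforcing dynamic concrete, consider the following toy simulation - our illustration, not taken from the literature reviewed in chapter 9, and with all parameter values purely hypothetical. Each potential adopter has a private threshold and adopts once the network’s value, crudely modelled as proportional to the current number of adopters, exceeds that threshold:

    # Toy model of a self-reinforcing mechanism: the value of adopting grows
    # with the number of adopters, so adoption either stalls or snowballs.
    # All parameter values are hypothetical illustration values.
    import random

    def simulate(population=1000, value_per_adopter=0.002, seed_adopters=10):
        random.seed(1)
        thresholds = [random.random() for _ in range(population)]
        adopters = seed_adopters
        history = [adopters]
        while True:
            value = value_per_adopter * adopters  # network value rises with adoption
            adopters = min(population,
                           seed_adopters + sum(t < value for t in thresholds))
            history.append(adopters)
            if adopters == history[-2]:           # fixed point: no further adoption
                return history

    print(simulate())                             # snowballs towards the whole population
    print(simulate(value_per_adopter=0.0005))     # stalls near the seed: no take-off

The tipping behaviour - where the size of the early installed base decides whether the technology takes off at all - is precisely the irreversibility mechanism the chapter examines.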

The practical implications of the argument of chapter 9 - that the installed base has a strong and partly neglected influence - are illustrated in chapter 10 by one specific case presented in detail. It describes the revision of the IP protocol in the Internet. IP forms the core of the Internet and has accordingly acquired a considerable degree of irreversibility as a result of its widespread use. Changes to IP have to be evolutionary and small-step, i.e. take the form of a transition strategy. Transition strategies illustrate the cultivation approach to the design of information infrastructures.

By way of conclusion, chapter 11 explores alternative cultivation-based approaches to infrastructure development. These alternatives allow more radical changes than the basically conservative character of cultivation. They are based on the use of (generalised) gateways which link previously unrelated networks together. In conjunction with cultivation, these gateway-based approaches make up the intervention strategies for information infrastructures.

Earlier papers

This book is to a large extent based on work we have reported in earlier publications. Some material is new. And the format of a book allows us to combine, structure and elaborate, in a completely new way, the various threads and arguments spread out in the earlier publications. These publications are listed below.

Monteiro, E. and Hanseth, O. 1995. Social shaping of information infrastructure: on being specific about the technology. In Orlikowski, W., Walsham, G., Jones, M.R., and DeGross, J.I., editors, Information Technology and Changes in Organisational Work, pages 325-343. Chapman & Hall.

Hanseth, O., Monteiro, E. and Hatling, M. 1996. Developing information infrastructure standards: the tension between standardisation and flexibility. Science, Technology & Human Values, 21(4):407-426.

Hanseth, O. 1996. Information technology as infrastructure. PhD thesis, Gøteborg University.

Hanseth, O. and Monteiro, E. 1997. Inscribing behaviour in information infrastructure standards. Accounting, Management & Information Technologies, 7(4):183-211.

Hanseth, O. and Monteiro, E. 1998. Changing irreversible networks. In Proceedings of ECIS ’98.

Monteiro, E. 1998. Scaling information infrastructure: the case of the next generation IP in the Internet. The Information Society, 14(3):229-245.


CHAPTER 2 Cases: The Internet and Norwegian Health Care

Throughout this book, a number of examples will be used to discuss and illustrate various issues. These examples will primarily be selected from two cases - the building of two different information infrastructures: the Internet, and an infrastructure for the exchange of form-like information in the Norwegian health care sector. The building of these infrastructures is presented in this chapter. We also discuss methodological issues regarding how reasonable it is to draw general conclusions about information infrastructures from these cases. Our approach is pragmatic. We present an emerging picture of information infrastructure standardisation and development based on the empirical material at hand. This picture will be adjusted as more experience with information infrastructures is gained. The two cases exhibit, we believe, a number of salient features of information infrastructure building.

In order to make our use of the two cases as clear as possible, we identify the important lessons to be learned as well as point out the more accidental, less reproducible aspects.

Internet

“The Internet has revolutionized the computer and communications world like nothing before” (Leiner et al., 1997, p. 102).

“The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure” (ibid., p. 102).

As indicated by these quotes, the Internet is widely held to be the primary successful example to learn from when trying to realize the envisioned information infrastructures (Kahin and Branscomb 1995, Digital lib. 1995). We share this view, and will accordingly present the development of the Internet from its very beginning up to today. We will in this section give a brief overview of this development, pointing to what we believe to be the important steps and events that indicate what could be included in strategies for building future information infrastructures. This presentation draws heavily on (Leiner et al., 1997).

We also include a few cautionary remarks about the danger of idolizing the Internet. It is a lot easier to acknowledge its historical success than to feel confident about its future.

The beginning

The first notes related to the work leading to the Internet are a few papers on packet switching as a basis for computer networking, written in 1962. The first long distance connections between computers based on this principle were set up in 1965. In 1969 the first nodes of the ARPANET were linked together, and in 1971-72 the NCP1 protocol was implemented on this network, finally offering network users the possibility of developing network applications. In 1972 e-mail was introduced, motivated by the ARPANET developers’ need for easy coordination. From there, e-mail took off as the most popular network application.

1. NCP, Network Control Protocol

The TCP/IP core

The original ARPANET grew into the Internet. The Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, ground-based packet radio networks and other networks. The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level “Internetworking Architecture”.

The idea of open-architecture networking was guided by four critical ground rules:

• Each distinct network had to stand on its own, and no internal changes could be required of any such network before being connected to the Internet.

• Communication would be on a best-effort basis. If a packet didn’t make it to the final destination, it would quickly be retransmitted from the source (see the sketch following this list).

• Black boxes (later called gateways and routers) would be used to connect the networks. No information would be retained by the gateways about individual flows of packets passing through them, keeping them simple and avoiding complicated adaptation and recovery from various failure modes.

• There would be no global control at the operations level.
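As a concrete reading of the second ground rule, the following sketch shows best-effort delivery with end-to-end recovery: the network may silently drop packets, and it is the source that retransmits. This is our illustration, not part of the original design documents; the loss rate and function names are hypothetical.

    import random

    LOSS_RATE = 0.3  # hypothetical probability that the network drops a packet

    def lossy_send(packet):
        """Best-effort network: delivers the packet, or loses it (returns None)."""
        return None if random.random() < LOSS_RATE else packet

    def reliable_send(packet, max_retries=10):
        """End-to-end recovery: the source retransmits until the packet gets through."""
        for attempt in range(1, max_retries + 1):
            if lossy_send(packet) is not None:
                return attempt
        raise RuntimeError("gave up: destination unreachable")

    print(f"delivered after {reliable_send(b'hello')} attempt(s)")

Note that the network elements themselves keep no state about the exchange; all recovery intelligence sits at the endpoints, in line with the third and fourth ground rules.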

The original Cerf and Kahn (1974) paper on the Internet described one protocol, called TCP, which provided all the transport and forwarding services in the Internet. Kahn had intended that the TCP protocol support a range of different transport services, from the totally reliable, sequenced delivery of data (the virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which might imply occasional lost, corrupted or reordered packets.

Although Ethernet was under development at Xerox PARC at that time, the proliferation of LANs was not envisioned then, much less PCs and workstations. The original model was national-level networks like the ARPANET, of which only a relatively small number were expected to exist. Thus a 32-bit IP address was used, of which the first 8 bits signified the network and the remaining 24 bits designated the host on that network. This assumption, that 256 networks would be sufficient for the foreseeable future, was clearly in need of reconsideration when LANs began to appear in the late 1970s.
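The original addressing scheme is easy to state in code. The small sketch below is our illustration of the 8-bit network / 24-bit host split; the example address is arbitrary.

    def split_original_ip(address: str) -> tuple[int, int]:
        """Split a dotted-quad IP address under the original 8/24 scheme."""
        a, b, c, d = (int(x) for x in address.split("."))
        packed = (a << 24) | (b << 16) | (c << 8) | d  # the full 32-bit address
        network = packed >> 24       # first 8 bits: room for only 256 networks
        host = packed & 0xFFFFFF     # remaining 24 bits: up to ~16.7M hosts each
        return network, host

    print(split_original_ip("10.0.1.42"))  # -> (10, 298): network 10, host 298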

However, the initial effort to implement TCP resulted in a version that only allowed for virtual circuits. This model worked fine for file transfer and remote login applications, but some of the early work on advanced network applications, in particular packet voice in the 1970s, made clear that in some cases packet losses should not be corrected by TCP, but should be left to the application to deal with. This led to a reorganization of the original TCP into two protocols: the simple IP, which provided only for addressing and forwarding of individual packets, and the separate TCP, which was concerned with service features such as flow control and recovery from lost packets. For those applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP.
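The division of labour that resulted from this split is still visible in the socket API. The sketch below - our illustration, with a hypothetical endpoint that assumes a server is listening - shows the two services an application can choose between: TCP’s reliable byte stream, or UDP’s direct, best-effort access to IP.

    import socket

    HOST, PORT = "127.0.0.1", 9999  # hypothetical endpoint; assumes a listening server

    # TCP: connection-oriented; the stack provides ordering, flow control
    # and retransmission on top of IP's packet forwarding.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
        tcp.connect((HOST, PORT))
        tcp.sendall(b"reliable, ordered byte stream")

    # UDP: each datagram rides directly on IP's best-effort service; loss,
    # duplication and reordering are left to the application to handle
    # (the trade-off the packet voice work asked for).
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
        udp.sendto(b"best-effort datagram", (HOST, PORT))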

The ARPANET replaced NCP with TCP/IP in 1983. This version of TCP/IP was officially given version number four.

New applications - new protocols

A major initial motivation for both the ARPANET and the Internet was resource sharing - for instance, allowing users on the packet radio networks to access the time-sharing systems attached to the ARPANET. Connecting the two networks was far more economical than duplicating these very expensive computers. However, while file transfer (the FTP protocol) and remote login (Telnet) were very important applications, electronic mail has probably had the most significant impact of the innovations from that era. E-mail provided a new model of how people could communicate with each other, and changed the nature of collaboration, first in the building of the Internet itself (as discussed below) and later for much of society.

In addition to e-mail, file transfer, and remote login, other applications were proposed in the early days of the Internet, including packet-based voice communication (the precursor of Internet telephony), various models of file and disk sharing, and early “worm” programs illustrating the concept of agents (and viruses). The Internet was not designed for just one application but as a general infrastructure on which new applications could be conceived, as exemplified later by the emergence of the Web. It is the general-purpose nature of the service provided by TCP and IP that makes this possible.

As TCP moved to new platforms, new challenges were met. For instance, the early implementations were done for large time-sharing systems. When desktop computers first appeared, it was thought by some that TCP was too big and complex to run on a personal computer. However, well-working implementations were developed, showing that such small computers could be connected to the Internet as well.

As the number of computers connected increased, new addressing challenges appeared. For example, the Domain Name System was developed to provide a scalable mechanism for resolving hierarchical host names (like ifi.uio.no) into Internet addresses. The requirement for scalable routing approaches led to a hierarchical model of routing, with an Interior Gateway Protocol (IGP) used inside each region of the Internet and an Exterior Gateway Protocol (EGP) used to tie the regions together.
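What the Domain Name System provides can be seen directly from the standard library resolver. A minimal sketch - our illustration, whose output depends on live DNS at the time of running:

    import socket

    def resolve(hostname: str) -> list[str]:
        """Resolve a hierarchical host name into its IPv4 addresses via DNS."""
        infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
        return sorted({info[4][0] for info in infos})

    print(resolve("ifi.uio.no"))  # the actual addresses returned will vary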

Diffusion

For quite some time, the Internet (or at that time the ARPANET) was primarily used by its developers and within the computer networking research community. As the next major step, the technology was adopted by groups of computer science researchers. An important use area was the development of basic support services for distributed computing in environments based on workstations connected to LANs, like distributed (or networked) file systems. A crucial event in this respect was the incorporation of TCP/IP into the Unix operating system. An additional important element was the fact that the code was freely distributed. And lastly, the on-line availability of the protocols’ documentation.

In 1985 the Internet was established as a technology supporting a broad community of researchers and developers.

The evolution of the organization of the Internet

“The Internet is as much a collection of communities as it is a collection of technologies” (ibid., p. 106).

It started with the ARPANET researchers working as a tight-knit community, the ARPANET Working Group (Kahn 1994). In the late 1970s, the growth of the Internet was accompanied by the growth of the interested research community, and accordingly also an increased need for more powerful coordination mechanisms. The role of the government was initially to finance the project - to pay for the leased lines, gateways and development contracts. In 1979, participation from a wider segment of the research community was opened up by setting up the Internet Configuration Control Board (ICCB) to oversee the evolution of the Internet. The ICCB was chaired by ARPA (Kahn 1994, p. 16). In 1980 the US Department of Defence adopted TCP/IP as one of several standards. In 1983 the ICCB was replaced by the IAB, the Internet Activities Board. The IAB delegated problems to task forces; there were initially 10 such task forces. The chair of the IAB was selected from the research community supported by ARPA. In 1983 TCP/IP was chosen as the standard for the Internet, and ARPA delegated the responsibility for certain aspects of the standardisation process to the IAB.


During the period 1983-1989 there was a steady growth in the number of task forces under the IAB. This gave rise to a reorganisation in 1989, where the IAB was split into two parts: (i) the IETF (Internet Engineering Task Force), which was to consider “near-future” problems, and (ii) the IRTF (Internet Research Task Force) for more long-term issues.

The Internet was significantly stimulated by the High Performance Computing initiative during the mid-80s, where a number of supercomputing centres were connected by high-speed links. In the early 90s the IAB was forced to charge nominal fees from its members to pay for the growing administration of the standardisation process.

In 1992 a new reorganisation took place. The Internet Society was established as a professional society, and the IAB was constituted as part of it. While keeping the abbreviation, the name was changed from Internet Activities Board to Internet Architecture Board. The IAB delegated the responsibility for the Internet Standards to the top-level organization within the IETF, which is called the IESG (Internet Engineering Steering Group). The IETF as such remained outside of the Internet Society, to function as a “mixing bowl” for experimenting with new standards.

The Web’s recent and widespread development brings in a new community. Therefore, in 1995, a new coordination organization was formed - the World Wide Web Consortium (W3C). Today, the W3C is responsible for the development (evolution) of the various protocols and standards associated with the Web. W3C is formally outside the IETF but is closely related to it. This is partly due to formal arrangements, but more important are the similarity between the standardisation processes of the two (see chapter 11), and the fact that many of those active in W3C have been involved in other Internet activities for a long time, being members of the larger Internet community and sharing the Internet culture (Hannemyr 1998, ++).

Future

The Internet has constantly been changing, seemingly at an increasing speed. The phasing out of IP version 4, adopted by the whole ARPANET in 1983, is now at its beginning. The development of the new version, IP version 6, started in 1991, and it was made a draft standard in 1996 (see further explanation in chapter 4). The transition of the Internet to the new version is at its very beginning. The details of this evolution are an important and instructive case of changing a large infrastructure and will be described in detail in chapter 10.
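To give a rough sense of the scale at stake in this transition, the following minimal Python sketch (our own illustration, not part of the IPv6 design work) contrasts the 32-bit IPv4 address space with the 128-bit IPv6 one; the example addresses are taken from the reserved documentation ranges, not from real hosts:

    import ipaddress

    # Address space sizes: 32-bit IPv4 versus 128-bit IPv6.
    print(2 ** 32)    # 4294967296 possible IPv4 addresses
    print(2 ** 128)   # roughly 3.4e38 possible IPv6 addresses

    # The two address formats side by side (documentation-range examples).
    v4 = ipaddress.ip_address("192.0.2.1")
    v6 = ipaddress.ip_address("2001:db8::1")
    print(v4.version, v6.version)  # prints: 4 6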


The development of this new version turned out to be far more complicated than anticipated - both technologically and organizationally. Technologically due to the complexity of the Internet, organizationally due to the number of users and user groups involved. The Internet is supposed to be the underlying basis of a wide range of new services, from new interactive media to electronic commerce. To play this future role the Internet has to change significantly. New services such as real-time transport, supporting, for instance, audio and video streams, have to be provided. It also has to be properly adapted to new lower level services such as broadband networks like ATM and Frame Relay, portable computers (laptops, PDAs, cellular phones) and wireless networks enabling a new paradigm of nomadic computing, etc. The required changes confront us with challenging technological as well as organizational issues.

“The most pressing question for the future of the Internet is not how the technology will change, but how the process of change and evolution itself will be managed” (Leiner et al., p. 108).

Highlights

We will here point to what we consider the key lessons to be learned from the Internet experience (so far) and which will be in focus throughout this book.

First of all, the Internet has constantly changed. It has changed in many ways, including:

1. The Internet’s character has changed.

It started as a research project, developing new basic network technologies. As it developed, it also became a network providing services for its developers, then a network providing services to research communities, and lastly a network supporting all of us. As the network changed, its organization had to adapt - and it has done so.

2. It has constantly been growing.

It has grown in terms of number of nodes (hosts and routers) connected, in number of users and use areas, and in terms of protocols and applications/services.

3. The Internet has had to change.

It has had to change to accommodate its own growth as well as its changing environment. Among the first changes are the introduction of the Domain Name System and the change from the initial 32 bit address to the one used in IPv4, and now the definition of IPv6 to allow continued growth. Among the latter are the changes necessary when PCs and LANs were developed and diffused, and the above mentioned ongoing changes to adapt to broadband and wireless networks etc.

Basic principles for the development have been:

1. Establishing a general basis for experimental development of applications and services. This general basis was a packet switched computer communications network, itself being subject to experimentation.

2. Applications and services have been implemented to support specific local needs. Widely used services are based on applications that turned out to be generally useful. E-mail and the World Wide Web are examples of applications originally developed for such specific local needs.

3. When an application has proved to be of general interest, its specification is approved as a standard.

4. The Internet has always been a heterogeneous network.

It has been heterogeneous as it from its very beginning was designed to integrate, and run across, various basic networks like telephone, radio, satellite, etc. It has also been heterogeneous by accepting two alternative protocols, TCP and UDP, on the same level (see the sketch below). Today the Internet is also heterogeneous as it has integrated various different networks on higher levels, like America Online, Prodigy, etc., with their own protocols and services, and e-mail networks based on other protocols like X.400, cc:mail, etc.2

2. This is a somewhat controversial standpoint. The Internet community has always stressed that “reality” out there is heterogeneous, and accordingly a useful running network has to run across different basic network technologies. This has been a crucial argument in the “religious war” against OSI. However, the Internet community strongly believes in “perfect” technical solutions (Hannemyr 1997), and accordingly refuses to accept gateways between not perfectly compatible solutions (Stefferud 199x). We return to this when discussing gateways in chapter 11.
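The point in item 4 about TCP and UDP can be made concrete with a minimal Python sketch (purely illustrative, not drawn from the Internet specifications themselves): an application chooses between the two transport protocols, but both sockets sit on top of the very same IP layer.

    import socket

    # Two alternative transport protocols on top of the same IP layer:
    # SOCK_STREAM gives reliable byte streams (TCP), while SOCK_DGRAM
    # gives unreliable datagrams (UDP).
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    print(tcp.type, udp.type)  # two socket kinds, one underlying IP layer
    tcp.close()
    udp.close()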

Historical coincidences

Given the present, overwhelming success of the Internet, there is a pronounced danger of idealizing the Internet experience. It is important to develop a sense of how far the Internet experience is relevant as a basis for generalized lessons, and what should be regarded as more or less historically contingent, irreproducible decisions. The Internet has, of course, a number of historically contingent features which distinguish it and make it difficult, if not impossible, to intentionally reproduce. Among these are:

• For a long time the developers were also the users.

This was important for the experimental development and early use of new services. These factors can hardly be replicated when developing services for, say, health care and transport.

• The inclusion of the technology into Berkeley Unix.

• The free distribution of protocol implementations in general, as well as of Berkeley Unix including the Internet technology in particular.

• The development of killer applications, in particular e-mail and the World Wide Web.

• The small boys’ club atmosphere which prevailed during the early years, and to some extent right up till today, was important for the way the work was organized and spread.

In addition, as in all cases, there have been a number of coincidences where independent events have happened at the same time, opening up possibilities and opportunities creating a success story.

International Standardization Organization (ISO)

ISO has a long history of developing standards in all kinds of areas, ranging from the length of a meter to nuts and bolts (REF). Due to the way it is intended to mimic quasi-democratic decision processes with representative voting (see chapter 4 for further details), ISO has a unique position within national technology policy. Member nations agree to make ISO standards official, national standards. This implies that ISO standards are automatically promoted by key, public actors in the areas they cover. And in some countries (like Norway), ISO standards are granted the status of laws.

When the need for open (i.e. non-proprietary) computer communication standards was gaining wide acceptance, ISO was a quite obvious choice of body to be responsible for the standardization work. The development of the OSI model and its protocol suite, covering everything from the coding of physical signals to applications like e-mail and secure transactions, started in XXXX. The standards are specified in terms of a so-called reference model, the Open Systems Interconnection (OSI) model. It specifies seven layers which in sum make up what was supposed to become the basis of what we call information infrastructures.

For several years there was a “religious war” between the Internet and OSI supporters (ref.). Beyond the religious aspects, the Internet and OSI work reflected different visions about what our future world of computer communications should look like, giving different actors different roles. Accordingly, the battles over technological design alternatives were also battles over positions in future markets (Abbate 1995). In addition, the war had a general political content, as the Internet technology was developed by Americans, accordingly giving them a competitive advantage. For the Europeans, then, it was important to make OSI different from the Internet technology, putting them in an equal position.

The OSI approach was indeed different. It negates virtually every element in the list of “highlights” above. The protocols were to be developed by everybody coming together and agreeing on the protocol specifications. No experimentation, no implementation before standardization, no evolution, no change, no heterogeneity.

Now the war is over, although there might still be some lonely soldiers left in the jungle whom this message has not yet reached.3

3. OSI has been strongly supported by public agencies, and most governments have specified their GOSIPs (Government OSI Profiles), which have been mandatory in government sectors. These GOSIPs are still mandatory, and Internet protocols are only accepted to a limited extent.

EDI and EDIFACT

One important form of computer communication is EDI (Electronic Data Interchange). This form of communication covers the exchange of information between different organizations, typically information that has traditionally been exchanged as paper forms, often even formally standardized forms. Such forms include orders, invoices, consignment notes and freight bills, customs declaration documents, etc. In the business world, EDI infrastructures have been built for quite some time, and bodies taking care of the standardization have been set up. The seemingly largest and most important activity is related to EDIFACT.4 EDIFACT is a multifaceted creature. It is a format, or language, for defining data structures, combined with rules for how such structures should be encoded into character streams to be exchanged. Further, it includes a set of standardized data structures and a large bureaucratic organization controlling the EDIFACT standards and standardization processes.

4. EDIFACT is an abbreviation for Electronic Data Interchange For Administration, Commerce and Transport.

The primary items to be standardized in the EDIFACT world are “messages.” An EDIFACT message is typically an electronic equivalent of a paper form.
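To illustrate the encoding idea (with invented segment names and values, a simplified syntax that ignores EDIFACT’s release characters, and no claim of conformance to any actual message standard): segments are terminated by an apostrophe, data elements are separated by “+”, and components within an element by “:”. A minimal Python sketch:

    # A toy EDIFACT-like message; the segment names and values below are
    # invented for illustration, and escape handling is omitted.
    message = (
        "UNH+1+MEDRPT:D:93A:UN'"    # message header: reference and message type
        "PNA+PAT+12345'"            # a party segment identifying a patient
        "RES+GLU+5.4+mmol/L'"       # a result segment: analyte, value, unit
        "UNT+4+1'"                  # message trailer: segment count, reference
    )

    # Decode the character stream back into segments, elements and components.
    for segment in filter(None, message.split("'")):
        tag, *elements = segment.split("+")
        print(tag, [element.split(":") for element in elements])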

The EDIFACT standardization organization is a part of the United Nations system. This was a deliberate choice by those starting the EDIFACT standardization work, believing that the United Nations would give the activities the best possible legitimation.

The EDIFACT format was defined in the early seventies, while the formal standardization activities started in XX.

Health care information infrastructures

Health care information infrastructures have been developed and used over a period of more than ten years and have taken different shapes over time. Two main forms of use are transmission of form-like information and of (possibly real-time) multi-media information. Typical examples of the former include: lab orders and reports exchanged between general practitioners, hospitals and labs; admission and discharge letters between general practitioners, specialists, and hospitals; prescriptions from general practitioners to pharmacies; and exchange of non-medical information like ordering of equipment and food, and invoices from hospitals and general practitioners to health insurance offices for reimbursement.

Typical examples of the latter type include: telemedicine services, that is, computer based services which usually include real-time multi-media conferencing systems supporting a physician requesting advice from another physician at another institution; access to databases and Web servers containing medical information; and PACS (picture archive systems for X-rays). In this book, we focus on the former type, i.e. transmission of form-like information.



The various forms of information exchange are overlapping and interconnected. The same piece of information may be exchanged as part of different transactions, for instance, by transmission of a digital X-ray image either using a multi-media conference system or attached to an e-mail. Furthermore, any organisational unit may engage in transactions with several other units. A lab, for instance, may communicate with a number of general practitioners, other labs, and other hospital wards. This is what distinguishes such systems from stand-alone applications and turns them into infrastructure.

The development of health information infrastructure standards — not to mention their implementation in products and adoption by user organisations — has been slow. Based on personal experience, it seems that the more formal the standardisation process is, the slower the adoption becomes. Industrial consortia seem so far to be the most successful. As, to the best of our knowledge, there does not exist any systematic evaluation, this is difficult to “prove.” But studies in sectors other than health care exist. The evaluation of the European Union’s programme for the diffusion of EDI in trade, the TEDIS programme, lends support to the view that formal standardisation is incredibly slow - design as well as diffusion (Graham et al. 1996). An evaluation of EDIFACT on behalf of the United Nations concludes similarly (UN 1996).

EDI Infrastructure in the Norwegian health care sector

Fürst

The development of electronic information exchange between health care institutions in Norway started when a private lab, Dr. Fürst’s Medisinske Laboratorium in Oslo, developed a system for lab report transmission to general practitioners in 1987. The system was very simple — the development time was only 3 weeks for one person (Fiskerud 1996). The interest of Dr. Fürst’s laboratory was simply to make a profit by attracting new customers. It was based on the assumption that the system would help general practitioners save much time otherwise spent on manual entry of lab report data, and that the general practitioners would find the possibility of saving this time attractive. Each general practitioner receives on average approximately 20 reports a day, which take quite some time to register manually in their medical record systems.


The system proved to be a commercial success and brought them lots of general practitioners as new customers. This implied less profit for the other labs. Within a couple of years, several non-private labs (in hospitals) developed or bought systems with similar functionality in order to be competitive. Alongside the growing number of labs adopting systems for the exchange of reports, an increasing number of actors saw a wider range of applications of similar technology in other areas. These actors were represented within the health sector as well as among possible vendors of such technology. For all of them it was perceived as important that the technologies should be shared among as many groups as possible in order to reduce costs and enable interconnection of a wide range of institutions.

The network Fürst established is still in use. Currently Fürst delivers the reports to about 2000 regular customers through the network.

Telenor - standardization

After an initial experiment, Telenor (the former Norwegian Telecom) launched the project “Telemedicine in Northern Norway” in 1987, which ran until 1993. Standardisation has always been considered important within the telecommunication sector. This attitude, combined with Telenor’s interest in the largest possible markets, made them take it for granted that the new health information infrastructure standards should be like any other telecommunication standard: “open” and developed according to the procedures of formal standardisation bodies.

As there was a general consensus about the need for standards, the fight about what these standards should look like and how they should be developed started. Based on their interests in general solutions and rooted in the telecommunication tradition of international standardisation, Telenor searched for international activities aiming at developing “open” standards. The IEEE (Institute of Electrical and Electronics Engineers) P1157 committee, usually called Medix, did exactly that. This work was the result of an initiative to develop open, international standards taken at the MEDINFO conference in 1986. Medix, which was dominated by IT professionals working in large companies like Hewlett Packard and Telenor, and some standardisation specialists working for health care authorities, adopted the dominating approach at that time, namely that standards should be as open, general and universal as possible.

The idea of open, universal standards underlying the Medix effort implied using existing OSI (Open Systems Interconnection) protocols defined by the ISO (International Standardisation Organisation) as the underlying basis. The Medix effort adopted a standardisation approach — perfectly in line with text books in information systems development — that the development should be based on an information model being a “true” description of the relevant part of reality, that is, the health care sector, independent of existing as well as future technology. (More on this in later chapters, particularly 8 and 12.) Individual messages would be derived from the model more or less automatically.

While the focus was directed towards a comprehensive information model, lab reports were still the single most important area. However, for those involved in Medix the task of developing a Norwegian standardised lab report message had around 1990 been translated into the development of a proper object-oriented data model of the world-wide health care sector.

In addition to the information model, the protocols and formats to be used had to be specified. In line with the general strategy, as few and as general protocols and formats as possible should be included. Medix first focused on the ISO standard for the exchange of multimedia documents, ODA (Open Document Architecture), believing it covered all needs for document-like information. However, around 1990 most agreed that EDIFACT should be included as well. The Europeans who advocated EDIFACT most strongly had already established a new body, EMEDI (European Medical EDI), to promote EDIFACT in the health sector. In Norway, a driving force behind the EDIFACT movement was the “Infrastructure programme” run by a governmental agency (Statskonsult) during 1989 - 92. Promoting Open Systems Interconnection standards, and EDIFACT systems based on Open Systems Interconnection, were key goals for the whole public sector (Statskonsult 1992).

Several other standardization activities were considered and promoted by various actors (vendors). Andersen Consulting, for instance, promoted the HL-7 standard.5 The Ministry of Health hired a consultant making their own proposal - which immediately was killed by the other actors in this arena. The dispute was settled in 1990 when the Commission of the European Community delegated to CEN (Comité Européen de Normalisation, the European branch of ISO) the responsibility for working out European standards within the health care domain, in order to facilitate the economic benefits of a European inner market. CEN established a so-called technical committee (TC 251) on the 23rd of March 1990 dedicated to the development of standards within health care informatics. From this time Medix disappeared from the European scene. However, the people involved moved to CEN, and CEN’s work to a large extent continued along the lines of Medix.

5. HL-7 is an abbreviation of Health Level 7, the number “7” referring to level 7 in the OSI model.


Both CEN and Medix were closely linked to the OSI and ISO ways of thinking concerning standards and standardization. Accordingly, the work has been based on the same specification driven approach - and with seemingly the same lack of results.

Isolated networks for lab report exchange

As mentioned above, a large number of labs bought systems similar to Fürst’s. Some of these were based on early drafts of standardized EDIFACT messages, later updated to the standardized versions. These networks were, however, not connected to each other. They remained isolated islands - each GP connected could only receive reports from one lab.

Each of the networks got a significant number of GPs connected within a short period after the network technology was put in place. From then on, the growth has been very, very slow.

Pilots

In other areas, a number of pilot solutions have been implemented and tested in parallel with the standardization activities. This has been the case for prescriptions, lab orders, physicians’ invoices, and reports from X-ray clinics and other labs like pathology and micro-biology, etc. However, none of these systems has been adopted in regular use.

Cost containment in the public sector

All activities mentioned above have been driven by an interest in the improvement of medical services. Some overlapping activities were started under Statskonsult’s “Infrastructure programme.” The overall objective of this programme was to improve productivity, service quality, and cost containment in the public sector.

It is widely accepted that physicians get too much money for their work from the social insurance offices. Through improved control the government could save maybe more than a hundred billion Norwegian kroner a year.6 Spending on pharmaceuticals is high, and is accordingly another important area for cost containment. In addition, the health care authorities wanted enhanced control concerning the use of drugs by patients, as well as the prescription practices of physicians concerning habit-forming drugs.

6. These numbers are of course controversial and debatable.

As part of the Infrastructure programme, KITH (Kompetansesenteret for IT i helsesektoren) was hired by Statskonsult to work out a preliminary specification of an EDIFACT message for prescriptions (KITH 1992). A project organization was established, also involving representatives of the pharmacies and the GPs.

The interests of the pharmacies were primarily improved logistics and the elimination of unnecessary retyping of information (Statskonsult 1992). By integrating the system receiving prescriptions with the existing system for electronic ordering of drugs, the pharmacies would essentially have a just-in-time production scheme established. In addition, the pharmacies viewed it as an opportunity for improving the quality of service to their customers. A survey had documented that as many as 80% of their customers were favourable to reducing waiting time at the pharmacies as a result of electronic transmission of prescriptions (cited in Pedersen 1996).

As the standardization activities proceeded, the project drifted (Ciborra 1996; Berg 1997a, b) away from the focus on cost containment. The improved logistics of the pharmacies became the more focused benefit. The technological aspects of the standardization work contributed to this, as such an objective appeared to be easier to obtain through a standard application of EDI technology.

This drifting might have been an unhappy event, as the potential reduction in public spending could have significantly helped raise the required funding for developing a successful system.

Highlights

We will here, as we did for the Internet, point to some lessons to be learnt from this experience:

1. Simple solutions designed and deployed under a strong focus on the specific benefits to be obtained have been very successful.

2. Health care is a sector with a wide range of different overlapping forms of communication, and communication between many different overlapping areas.

3. Focusing on general, universal standards makes things very complex, organizationally as well as technologically. The specification of the data format used by Fürst takes one A4 page. The specification of the CEN standardized EDIFACT message for lab reports takes 500 (!) pages (ref. CEN). Where the CEN message is used, the data being exchanged are exactly the same as in the Fürst solution. The focus on general solutions also makes the benefits rather abstract, and the solutions are hard to sell to those who have the money. Furthermore, there is a long way to go from such general standards to useful, profitable solutions.

4. A strong focus on standards makes success unlikely.

Methodological issues

As pointed out above and elaborated further in chapter 3, there is a wide variety of information infrastructure standards produced within bodies like ISO/CEN, EDIFACT, and the Internet Society. These standards are at different levels and deal with issues like message definitions, syntax specification, protocols, file type formats, etc. Some standards are general purpose, others are sector specific (for instance, for health care), and some are global while others are regional. Most of them are currently in-the-making. Our study does not provide evidence for drawing far-reaching conclusions regarding all types of information infrastructure standards. We believe, however, that the health care standards we have studied are representative of crucial parts of the standards of the information infrastructures envisioned in, for instance, the Bangemann report (1994), and that the picture of standardisation emerging from our analysis contains important features.

Studying the development of information infrastructures is not straightforward. There are at least two reasons for this which have immediate methodological implications worth reflecting upon.

First, the size of an information infrastructure makes detailed studies of all its elements practically prohibitive. The Internet, for instance, consists of an estimated 10 million nodes with an unknown number of users, and more than 200 standards which have been, and still are being, extended and modified over a period of 25 years within a large, geographically dispersed organisation where also a number of vendors (Sun, IBM, Microsoft), commercial interests (MasterCard, Visa) and consortia (W3C) attempt to exercise influence. This implies that the notion of an actor in connection with information infrastructure standardisation is a fairly general one, in the sense that an actor is sometimes an individual, a group, an organisation or a governmental institution. An actor may also be a technological artifact — small and simple, or a large system or network like the Internet or EDIFACT.


Actor network theory has a scalable notion of an actor, as Callon and Latour (1981, p. 286) explain: “[M]acro-actors are micro-actors sitting on top of many (leaky) black boxes”. In other words, actor network theory does not distinguish between a macro- and a micro-actor because, opening one (macro) black-box, there is always a new actor-network. It is not a question of principle but of convenience, then, which black-boxes are opened and which are not. To account for information infrastructure standardisation within reasonable space limits, it is necessary to rely on such a scalable notion of an actor. A systematic, comprehensive empirical study is prohibitive. In our study, we have opened some, but far from every, black-box. Several black-boxes have been left unopened for different reasons: some due to constraints on writing space, some due to lack of data access and some due to constraints on research resources. It might have been desirable to open more black-boxes than we have done. We believe, however, that we have opened enough to be able to present a reasonable picture of standardisation.

Our empirical evidence is partly drawn from the standardisation of EDI messages within the health care sector in Norway. A method of historical reconstruction from reports, minutes and standards documents, together with semi- and unstructured interviews, has been employed, partly based on (Pedersen 1996). One of the authors was for a period of three years engaged in the development of the standards by two of the companies involved (Telenor and Fearnley Data). Our account of the case has been presented, discussed and validated with one of the key actors (KITH, Norwegian: Kompetansesenteret for IT i Helsesektoren A/S).

Second, the fact that information infrastructures are currently being developed and established implies that there is only limited material about the practical experience of which solutions “survive” and which “die”, i.e. which inscriptions are actually strong enough to enforce the desired pattern of use. Hence, we are caught in a dilemma. On the one hand, the pressure for grounding an empirical study suggests that we need to await the situation, let the dust settle before inquiring closer. On the other hand, we are strongly motivated by a desire to engage in the ongoing process of developing information infrastructures in order to influence them (Hanseth, Hatling and Monteiro 1996).

Methodologically, the use of the Internet as a case, in particular the IPng case described in chapter 10, is a historical reconstruction based on several sources. The Internet keeps a truly extensive written record of most of its activities, an ideal source for empirical studies related to the design of the Internet. We have used the archives for (see chapter 4 for an explanation of the abbreviations) IETF meetings including BOFs, working group presentations at IETF meetings (ftp://ds.internic.net/ietf/ and http://www.ietf.org), RFCs (ftp://ds.internic.net/rfc/), minutes from IPng directorate meetings (ftp://Hsdndev.harvard.edu/pub/ipng/directorate.minutes/), the e-mail list for big-internet (ftp://munnari.oz.au/big-internet/list-archive/) and several working groups (ftp://Hsdndev.harvard.edu/pub/ipng/archive/), internet drafts (ftp://ds.internic.net/internet-drafts/), IESG membership (http://ietf.org/iesg.html#members), IAB minutes (http://info.internet.isi.edu:80/IAB), IAB membership (http://www.iab.org/iab/members.html) and information about IAB activities (http://www.iab.org/iab/connexions.html). The archives are vast, many thousand pages of documentation in total. The big-internet e-mail list, for instance, receives on average about 200 e-mails every month. As a supplement, we have conducted in-depth semi-structured interviews lasting about two hours with two persons involved in the design of the Internet (Alvestrand 1996; Eidnes 1996). One of them is an area director within the IETF and hence a member of the IESG.


CHAPTER 3 Defining information infrastructures

Introduction

An important part of the problem with information infrastructures (IIs) is exactly how to look at them: what kind of creatures are they really, what are the similarities and differences compared to other classes of information systems, how should IIs be characterised, and is it possible to draw a clear line between IIs and other information systems, or is this rather a matter of degree, perspective or, indeed, inclination? And assuming that it is reasonable to talk about IIs, why do they — and should they, we want to argue — attract attention from media, politicians and scholars? These are highly relevant, but difficult, questions which we address in this chapter.

The term II has been widely used only during the last couple of years. It gains its rhetorical thrust from certain so-called visions. These visions were initiated by the Gore/Clinton plans and followed up by the European Union’s plan for a Pan-European II. The visions for an II are argued as a means for “blazing the trail (...) to launch the information society” (Bangemann et al. 1994, 23). The Bangemann commission proposed ten applications around which this effort should be organised within the European Union: teleworking, distance learning, university and research networks, telematics services for small and medium sized enterprises, road traffic management, air traffic control, health care networks, electronic tendering, a trans-European public administration network and city information highways. The proposal is in line with the projects proposed by the Group of Seven (G7) in Brussels in March 1995.

The Bangemann commission (ibid.) states that by building a European II we can expect:

• A more caring European society with a significantly higher quality of life and a wider choice of services and entertainment for its citizens and consumers.

• New ways to exercise creativity for content providers as the information society calls into being new products and services.

• New opportunities for European regions to express their cultural traditions and identities and, for those standing on the geographical periphery of the Union, a minimising of distance and remoteness.

• More efficient, transparent and responsive public services, closer to the citizen and at lower cost.

• Enterprises will get more effective management and organisation, improved access to training and other services, and data links with customers and suppliers generating greater competitiveness.

• Europe’s telecommunications operators will get the capacity to supply an ever wider range of new high value-added services.

• The equipment and software suppliers and the computer and consumer electronics industries will get new and strongly-growing markets for their products at home and abroad.

Less speculative than citing political manifestoes, it is fairly safe to expect that a future II will consist of an elaboration, extension and combination of existing computer networks with associated services (Smarr and Catlett 1992). It is likely to consist of an inter-connected collection of computer networks, but with a heterogeneity, size and complexity extending beyond what exists today. New services will be established, for instance, by developing today’s more experimentally motivated services for electronic commerce, video-on-demand and electronic publishing. These new services subsequently accumulate pressure for further development of the II to accommodate them.

There exist today a number of embryonic manifestations of IIs. For many years, we have had application specific networks. Services provided include flight booking and bank networks supporting automatic teller machines and other economic transactions. Electronic data interchange (EDI), that is, electronic transmission of form-like business and trade information, is an illustration of an existing technology related to II (Graham et al. 1995; Webster 1995). The rapid diffusion of the World Wide Web is the basis of a general II for information exchange, as well as of more specialised IIs implementing open electronic marketplaces where products may be ordered, paid for and possibly delivered (if they exist in electronic form like books, newspapers, software or stock market information).

Basic data communication technology may be described as communication standards and the software and hardware implementing them. This technology is in many respects what comes closest to an existing, general purpose II. Two such basic communication technologies exist, Open Systems Interconnection (OSI) and Internet (Tanenbaum 1989). OSI is developed by the International Standardization Organization (ISO).

There is today no clear-cut conception of what an information infrastructure is, and even less of how to design one. We approach this in a somewhat indirect fashion. We trace the origin of the concept, discuss proposed definitions, compare IIs with other infrastructure technologies and outline the role and contents of standards.

Roots

The concept of II may be seen as a combination, or merger, of information and infrastructure technologies. IIs can be seen as a step in the development of information technologies as well as a step in the development of infrastructure technologies. IIs share a number of aspects with other kinds of information technologies while having some unique aspects making them different. The term “infrastructure” has been used in relation to information technology to denote basic support systems like operating systems, file servers, communication protocols, printers, etc. The term was introduced to separate between such underlying support services and the applications using them as the complexity of computing in organizations rose. The II examples mentioned above can also be seen as an evolution of computer networks, interorganizational systems and distributed information systems. IIs as envisioned are similar to examples of these concepts, except that they are larger, more complex, more diversified, etc. It accordingly makes better sense to talk about information systems as having degrees of infrastructural aspects.

Various forms of infrastructure like railways, roads, telecommunication networks, electricity supply systems, water supply, etc. have been analysed under the label “large technical systems” (Mayntz and Hughes 1988; La Porte 1989; Summerton 1994). Some scholars have focused on what they find to be important characteristics of such technologies, talking about networking (David 1987; Mangematin and Callon 1995) and systemic technologies (Beckman 1994).


Our approach to the study of the characteristics of IIs is to focus on what is found to be the primary characteristics of other infrastructure technologies in general, and analyse how these characteristics appear in IIs.

FIGURE 1. The conceptual heritage of IIs. [The figure shows II as the meeting point of two lineages: information technology (applications/infrastructure, computer networks, IOS & DIS) and infrastructure (large technical systems, networking technologies, systemic technologies).]

Defining II - identifying the aspects

We will now identify what we consider to be the key aspects of (information) infrastructures, and in particular what makes them different from information systems. These aspects will be identified by presenting and discussing a number of definitions proposed by others.

Enabling, shared and open

In Webster’s dictionary infrastructure is defined as

“a substructure or underlying foundation; esp., the basic installations and facilities on which the continuance and growth of a community, state, etc. depends, as roads, schools, power plants, transportation and communication systems, etc.” (Guralnik 1970).


This definition is also in perfect harmony with how IIs are presented in the policy documents mentioned above. Some elements may be emphasized, leading to the identification of the first three core aspects:

Aspect 1: Infrastructures have a supporting or enabling function.

This means that an infrastructure is designed to support a wide range of activities, not especially tailored to one. It is enabling in the sense that it is a technology intended to open up a field of new activities, not just improving or automating something existing. This is opposed to being especially designed to support one way of working within a specific application field. This enabling feature of infrastructures plays an important role in policy documents like those mentioned above. The enabling and constraining character of II technologies will be discussed in chapter 7.

Aspect 2: An infrastructure is shared by a larger community (or collection of users and user groups).

An infrastructure is shared by the members of a community in the sense that it is the one and the same single object used by all of them (although it may appear differently). In this way infrastructures should be seen as irreducible; they cannot be split into separate parts being used by different groups independently. An e-mail infrastructure is one such shared irreducible unit, while various installations of a word processor may be used completely independently of each other. However, an infrastructure may of course be decomposed into separate units for analytical or design purposes.

The different elements of an infrastructure are integrated through standardized interfaces. Often it is argued that such standards are important because the alternative, bilateral arrangements, is all too expensive. As we see it, standards are not only economically important but also a necessary constituting element. If an “infrastructure” is built on the basis of bilateral arrangements only, it is no real infrastructure, but just a collection of independent connections. We return to this in the next chapter.

The shared and enabling aspects of infrastructures have made the concept increasingly popular in later years. Just as in the case of IIs, the role of infrastructures is believed to be important as their enabling character points to what may be kept as a stable basis in an increasingly more complex and dynamic world. Håkon With-Andersen (1997) documents how the Norwegian ship building sector has stayed competitive through major changes from sailing ships, through steam boats and tankers, to offshore oil drilling supply boats, due to the crucial role of an infrastructure of competence centres and supporting institutions like shipbuilding research centres, a ship classification company and specialized financial institutions. In the same way, Ed Steinmueller (1996) illustrates how the shared character of infrastructures is used to help understand the growing importance of knowledge as public goods and infrastructure in societies where innovation and technological development are crucial for the economy. Different II standards and standardization processes will be presented in chapter 4. However, aspects of standards will be an underlying theme throughout the whole book.

Aspect 3: Infrastructures are open.

They are open in the sense that there are no limits to the number of users, stakeholders, vendors involved, nodes in the network and other technological components, application areas or network operators. This defining characteristic does not necessarily imply the extreme position that absolutely everything is included in every II. However, it does imply that one cannot draw a strict border saying that there is one infrastructure for what is on one side of the border and others for the other side, and that these infrastructures have no important or relevant connections.

This might be illustrated by an example from health care: A hospital exchanges information with other medical institutions, even in other countries. It exchanges information with social insurance offices and other public sector institutions, and it orders goods from a wide range of companies. These companies exchange information with other companies and institutions, etc. Hospital doctors might be involved in international research programmes. Accordingly, a hospital is sharing information with virtually any other sector in society. And the information exchanged among different partners is overlapping. Drawing a strict line between, for instance, a health care and an electronic commerce infrastructure is impossible. However wide an infrastructure’s user groups, application areas, designers and manufacturers, network operators or service providers are defined, there will always be something outside which the infrastructure should be connected to.

Unlimited numbers of users, developers, stakeholders, components and use areas imply:

• several activities with varying relations over time

• varying constellations and alliances

• changing and unstable conditions for development

• changing requirements

In sum, all this implies heterogeneity.

Unix systems and networks based on OSI protocols are closed according to this definition, although they are declared open by their names. The openness of IIs will be discussed further in chapter 5.


Heterogeneity

In the Clinton/Gore report, the envisioned NII is meant to include more than just the physical facilities used to transmit, store, process, and display voice, data, and images. It is considered to encompass:

• A wide and ever-expanding range of equipment including cameras, scanners, keyboards, telephones, fax machines, computers, switches, compact disks, video and audio tape, cable, wire, satellites, optical fibre transmission lines, microwave nets, televisions, monitors, printers, and much more.

• The information itself, which may be in the form of video programming, scientific or business databases, images, sound recordings, library archives, other media, etc.

• Applications and software that allow users to access, manipulate, organize, and digest the proliferating mass of information that the NII’s facilities will put at their fingertips.

• The network standards and transmission codes that facilitate interconnection and interoperation between networks.

• The people who create the information, develop applications and services, construct the facilities, and train others to tap its potential.

It is said that every component of the information infrastructure must be developed and integrated if America is to capture the promise of the Information Age.

The Bangemann report does not contain any definition. However, the report is by and large a European response to - or rather a copy of - the US NII report. It seems to share all its basic assumptions, and accordingly we can say that it implicitly also uses the same II definition.

This definition also sees infrastructures as enabling, shared and open. Further, it points to some other crucial features which we now will turn to. Infrastructures are heterogeneous phenomena, and they are so along many different dimensions.

Aspect 4: IIs are more than “pure” technology; they are rather socio-technical networks.

Infrastructures are heterogeneous concerning the qualities of their constituencies. They encompass technological components, humans, organizations, and institutions. This fact is most clearly expressed in the last bullet paragraph above. This is true for information technologies in general, as they will not work without support people. Neither does an information system work if the users are not using it properly. For instance, flight booking systems do not work unless all booked seats are registered in the systems.


The socio-technical aspects of IIs will be explored and discussed in chapter 6 and chapter 7.

Aspect 5: Infrastructures are connected and interrelated, constituting ecologies of networks.

They are so in different ways. For instance, different technologies are brought together, as illustrated in the NII definitions. Here we will concentrate on the different ways (sub-)infrastructures are connected into ecologies of infrastructures. Infrastructures are:

• layered;

• linking logically related networks; and

• integrating independent components, making them interdependent.

Infrastructures are layered upon each other just as software components are layered upon each other in all kinds of information systems. This is an important aspect of infrastructures, but one that is easily grasped as it is so well known.

Infrastructures are also heterogeneous in the sense that the same logical function might be implemented in several different ways. We will discuss heterogeneity caused by two forms of infrastructure development:

1. When one standardized part (protocol) of an infrastructure is replaced over time by a new one. In such transition periods, an infrastructure will consist of two interconnected networks running different versions. A paradigm example of this phenomenon is the transition of the Internet from IPv4 to IPv6 (Hanseth, Monteiro and Hatling 1996; Monteiro 1998; chapter 10).

2. Larger infrastructures will often be developed by interconnecting two existing different ones, as has typically happened when networks like America Online and Prodigy have been connected to the Internet through gateways.

The third form of heterogeneity we are addressing is what happens when larger components or infrastructures are built from existing smaller, independent components. When these components are brought together into a larger unit, they become interdependent. When one of them is changed, for whatever reason, this will often require the others to be changed as well. An example of this phenomenon is how various formats for representing text, video, sound, images and graphics are brought together in MIME to enable the transfer of multimedia information on the Web/Internet.
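As a small illustration of such bundling of independent formats (a sketch using Python’s standard email library, with invented content), a plain text and an HTML rendering of the same information are wrapped in a single MIME container, each part carrying its own content type:

    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    # A multipart MIME message bundling two independently defined formats.
    msg = MIMEMultipart("alternative")
    msg["Subject"] = "Lab report"  # invented content, for illustration only
    msg.attach(MIMEText("Glucose: 5.4 mmol/L", "plain"))
    msg.attach(MIMEText("<p>Glucose: 5.4 mmol/L</p>", "html"))

    # Each part declares its own Content-Type; the container keeps them
    # together, making the formats interdependent within one message.
    print(msg.as_string()[:300])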

Different networks — some compatible and closely aligned, others incompatible and poorly aligned — are superimposed, one on top of the other, to produce an ecology of networks. The collection of the different, superimposed communication infrastructures in a city provides a vivid illustration of the situation: the different telephone operators, the mobile phones, data communication and the cable-TV networks have to a lesser or larger extent evolved independently, but are getting more and more entangled (REF BYPLAN BOKA). We return to and elaborate on this aspect of an II in chapter 11.

While heterogeneity is indeed present in the NII definition of II, this seems not to be the case in mere engineering-inspired definitions. We see the following definition proposed by McGarty (1992, pp. 235-236) as representative for communities designing computer communication technologies. He says that an infrastructure resource is:

• Shareable. The resource must be able to be used by any set of users in any context consistent with its overall goals.

• Common. The resource must present a common and consistent interface to all users, accessible by standard means. Thus common may be synonymous with the term standard.

• Enabling. The resource must provide the basis for any user or set of users to create, develop, and implement any applications, utilities, or services consistent with its goals.

• Physical embodiment of an architecture. The infrastructure is the physical expression of an underlying architecture. It expresses a world-view. This world view must be balanced with all the other elements of the infrastructure.

• Enduring. The resource must be capable of lasting for an extensive period of time. It must have the capability of changing incrementally, and in an economically feasible fashion, to meet the slight changes of the environment, but must be consistent with the world view. In addition it must change in a fashion that is transparent to the users.

• Scale. The resource can add any number of users or uses and can by its very nature expand in a structured manner in order to ensure consistent levels of service.

• Economically sustainable. The resource must have economic viability. It must meet the needs of both customers and providers of information products. It must provide for all elements of a distribution channel, bringing the product from the point of creation to the point of consumption. It must have all the elements of a food chain.

Infrastructures corresponding to this definition will to a large extent embody the first three infrastructure aspects mentioned above. Shareable and common in McGarty’s definition are literally the same as shared in ours; scale is close to openness. There is, however, one important difference between our definitions: McGarty defines infrastructure as largely homogeneous. There are no non-technical elements involved, and even the technology is required to be homogeneous. This is connected to his notion of “physical embodiment of an architecture.” The requirement that the whole infrastructure should be based on one architecture is an expression of traditional engineering design ideals like uniformity, simplicity and consistency. (These ideals also underlie McGarty’s requirements about how an II should scale.) If it is a requirement that an II is based on one single architecture, it would be impossible to make a larger II by linking together two existing ones through a gateway if they are based upon different architectures (which different IIs regularly will be), even though doing so might be desirable as well as feasible. This requirement implies a form of homogeneity in direct conflict with the essentially heterogeneous nature of infrastructures. It implies a strategy for making closed systems. In this way, this definition is only open as far as one uniform coherent architecture is applicable and acceptable. It is the main argument of this book that this closed world thinking, inherent in virtually all engineering activity (at least as presented in text books), is a main obstacle to successfully developing the II visions in documents like the US NII plan and the Bangemann report, an issue we now will turn to. The situation may be illustrated graphically as in figure 2.

FIGURE 2. The ecology of networks which give IIs their non-homogenous character


The ecologies of infrastructures will be explored further in chapters 8 and 11.

Installed base

Building large infrastructures takes time. All elements are connected. As time passes, new requirements appear which the infrastructure has to adapt to. The whole infrastructure cannot be changed instantly - the new has to be connected to the old. The new version must be designed in a way that links the old and the new together and makes them "interoperable" in one way or another. In this way the old - the installed base - heavily influences how the new can be designed.

Aspect 6: Infrastructures develop through extending and improving the installed base.

The focus on infrastructure as "installed base" implies that infrastructures are considered as always already existing; they are NEVER developed from scratch. When "designing" a "new" infrastructure, it will always be integrated into, or replace a part of, an existing one. This has been the case in the building of all transport infrastructures: Every single road - even the first one, if it makes sense to speak about such a thing - has been built in this way; when air traffic infrastructures have been built, they have been tightly interwoven with road and railway networks - one needed these other infrastructures to travel between airports and the end points of the travel. Further, powerful telecommunications infrastructures are required to operate the air traffic infrastructure safely. Pure air traffic infrastructures can only be used for one part of a travel, and without infrastructures supporting the rest, isolated air traffic infrastructures would be useless.

In McGarty's definition, his notion of enduring includes the requirement that infrastructures have to change incrementally to meet the changes of the environment. Such changes will be very, very modest if they always have to take place within the constraints of the original architecture. So we think it is fair to say that McGarty does not pay any attention to the importance of the installed base whatsoever.

The same kind of engineering ideals as found in McGarty's definition played an important role in the development of the OSI suite of protocols. This effort failed - according to Einar Stefferud and Marshall T. Rose (ref.) - due to the protocols' "installed base hostility." All protocols were designed without any consideration of how an OSI network should interoperate with others. It was assumed that OSI protocols would replace all others. However, as there was an installed base of other communication protocols, the OSI ones were never adopted as they could not (easily enough) be linked to these.


The notion of installed base does to a large extent include all aspects of infrastructure mentioned above - an infrastructure is an evolving shared, open, and heterogeneous installed base.

Star and Ruhleder (1996, p. 113) give a definition sharing some of the features of McGarty's and the NII one. However, in their definition Star and Ruhleder put more emphasis on the social relations constituting infrastructures. They characterize II by holding that it is "fundamentally and always a relation," and that "infrastructure emerge with the following dimensions:"

• Embeddedness. Infrastructure is "sunk" into, inside of, other structures, social arrangements and technologies;

• Transparency. Infrastructure is transparent in use, in the sense that it does not have to be reinvented each time or assembled for each task, but invisibly supports those tasks;

• Reach or scope. This may be either spatial or temporal - infrastructure has reach beyond a single event or one-site practice;

• Learned as part of membership. The taken-for-grantedness of artifacts and organizational arrangements is a sine qua non of membership in a community of practice. Strangers and outsiders encounter infrastructure as a target object to be learned about. New participants acquire a naturalized familiarity with its objects as they become members;

• Links with conventions of practice. Infrastructure both shapes and is shaped by the conventions of a community of practice, e.g. the way that cycles of day-night work are affected by and affect electrical power rates and needs.

• Embodiment of standards. Modified by scope and often by conflicting conventions, infrastructure takes on transparency by plugging into other infrastructures and tools in a standardized fashion.

• Built on an installed base. Infrastructure does not grow de novo; it wrestles with the "inertia of the installed base" and inherits strengths and limitations from that base.

• Becomes visible upon breakdown. The normally invisible quality of working infrastructure becomes visible when it breaks.

The configuration of these dimensions forms "an infrastructure," which is without absolute boundary or a priori definition (ibid.).

As opposed to McGarty's, this definition stresses the heterogeneous character of infrastructures, as expressed in the notion of embeddedness, as well as their socio-technical nature, by being linked to conventions of practice. Although the importance of the installed base is emphasised, Star and Ruhleder do not address design related issues or implications. Further, learned as part of membership and links with conventions of practice have important implications: IIs must support today's conventions at the same time as they must stimulate their change if significant benefits are to be gained. These aspects of infrastructures also mean that although IIs are enabling and generic, they are not completely independent of their use. The interdependencies and possible co-evolution of IIs and conventions of practice will be explored in chapter yy.

Although having much in common, an interesting difference between the two definitions given by Star and Ruhleder and McGarty respectively is the fact that the first stresses the importance of the installed base and its inertia while the latter requires that an infrastructure must have the capability of being incrementally changed to meet new needs, and that this change must be transparent to users. The combination of - or tension between - these two elements is the core of IIs as seen in this book: understanding the nature of the installed base and how to cultivate it.

Seeing II and II development as "installed base cultivation" captures most aspects of (information) infrastructures as described above. IIs are larger and more complex systems, involving large numbers of independent actors as developers as well as users. This fact makes standards a crucial part of IIs. Further, IIs grow and develop over a long period of time; new parts are added to what exists and existing parts are replaced by improved ones. An II is built through extensions and improvements of what exists - never from scratch. It is open in the sense that any project, independent of how big it is, will just cover a part of an II. The rest exists already and will be developed by others, being out of reach for the project and its control. What is developed by a defined, closed activity will have to be hooked into an existing II. What exists has significant influence on the design of the new. In sum: IIs are developed through the cultivation of the installed base.

IIs and II development may to a large extent be described as specification and implementation of standards. Cultivating IIs may, accordingly, be considered as cultivating standards.

The role of the installed base in the development of infrastructures and standardized technologies will be discussed in chapter 9, while design strategies for infrastructures having the characteristics presented above will be the theme of chapters 10 and 11.


Sliding boundaries

The focus on IIs as open systems raises some questions - among these: Are the borders between open (IIs) and closed (ISs) systems absolute and predetermined in some sense? Is the crucial role played by the installed base a unique feature of infrastructure and systemic technologies, or is it a more general one? Focusing on IIs as an open installed base means that IIs are never developed from scratch - they always already exist. If so - when and how is an II born?

Seeing IIs as an open installed base focuses on what makes them different from ISs. The latter may be described as closed in the sense that one project or defined activity may design and control the development of all its parts. Whether ISs should be considered as open or closed systems depends on the time frame chosen. Considering their life from initial design and development throughout the whole maintenance phase, no IS can properly be understood as closed (Kling and Iacono 1984).

Whether a system is open or closed is not always an a priori aspect of the system. It depends on the perspective chosen. Whether an open or closed perspective is most appropriate in a specific situation is often not an easy question. And the choice between an open and a closed perspective touches our basic conception of reality. Each perspective is linked to a wide range of other perspectives, theories, tools and techniques, etc. The complexity and difficulties related to this issue can be illustrated by the "discussions" and disagreements between Popper and Feyerabend about the nature of science and proper scientific work, and the fights between positions like positivism and dialectics. This issue has also been involved in discussions about the approaches followed by the OSI and Internet standardization efforts. Einar Stefferud (1994) summarizes the "religious war" (Drake 1993) as a discussion between two competing and incommensurable paradigms, one being an open internetworking paradigm, the other a closed networking one.

ISs and IIs are not totally disjunct - there is a border area. An IS, e.g. an interorganizational system (IOS) or distributed IS (DIS), may "travel" through this area, change, and become an II. IIs are born when

1. new and independent actors become involved in the development of an IOS or DIS so that the development is not controlled by one actor any more;

2. existing IOSs are linked together and the development of the linked networks is not controlled by one actor;

3. an IOS/DIS may be considered an II when the goal is that it shall grow and become an II (or part of one) in the future and this is an important design objective.


Where to draw the borderline between for instance an IOS and an II also depends on which aspect of an infrastructure definition is emphasized. Taking flight booking systems as an example, they may be considered infrastructure as they are large, complex and shared by a large user community. However, they are specialized applications rather than generic, enabling substructures.

Installed base is not a unique II feature. In general the issue is the importance of history, history's imprint on the future. History is everywhere, in IS design as well as II cultivation. However, the issue needs to be dealt with in another sense when building IIs. In the social sciences, similar issues are studied under the label "new institutionalism" (Scott 1995; North 1990; Powell and DiMaggio 1991; March and Olsen 1989), and in philosophy by Heidegger in his Being and Time.

State of the art

The growing interest in information infrastructure has produced a rich variety of studies and analyses of information infrastructures. There do not, surprisingly enough, exist many studies of the Internet which try to spell out in some detail how the design process actually takes place. There exist several overviews of the historical development of the Internet, but they contain little or no evidence of how or why various design decisions came about (see for instance Hauben and Hauben 1996; LoC 1996). Abbate (1994) represents an exception. Here the underlying design visions of two competing alternatives for networking, namely IP and the one developed within the telecommunication community (called X.25 by the CCITT1), are uncovered. Hanseth, Monteiro and Hatling (1996) discuss the structure of the tension between change and stability in information infrastructures with illustrations from the Internet and the Open Systems Interconnection (OSI) of the International Standardisation Organisation (ISO). Hanseth (1996) analyses the nature of the installed base through illustrations of a variety of cases, including the Internet. A highly relevant area of research regarding the Internet is to unwrap its design culture. This, however, seems to be a completely neglected area. The few studies related to cultural aspects of the Internet focus on others than the designers, for instance, Turkle (1995) on MUD users and Baym (1995) on Usenet users.

1. CCITT is the international standardisation body for standards within telecommunication.


Our approach to the study of standardisation resembles those applied by Marc Berg, Geoffrey Bowker, Leigh Star and Stefan Timmermans (Bowker and Star 1994; Bowker, Timmermans, and Star 1995; Star and Ruhleder 1996; Timmermans and Berg 1997). In their studies of classification schemes and infrastructures, Bowker and Star identify a number of issues which are closely related to those we are focusing on. In a study of the evolution of the classification of diseases maintained by the World Health Organisation, they illustrate how coding and classification — essential tasks in the establishment of an information infrastructure — are anything but neutral. Interests are inscribed into coding schemes (Bowker and Star 1994). Bowker, Timmermans, and Star (1995) study how some aspects of work are made more visible than others by inscribing them into a classification scheme. Star and Ruhleder (1996) discuss key characteristics of infrastructure based on a study of the introduction and use of an information infrastructure.

Within the field of social studies of technology, there are some contributions relevant to a study of information infrastructure standardisation. Some focus on conceptual issues, for instance, the work by Fujimura (1992) on standardising procedures and interpretations across geographical distances. Others explore empirically how universal standards are appropriated to local contexts (Berg 1995) or how the interplay between stability and change is played out (Hanseth, Monteiro and Hatling 1996). Similarly, Jewett and Kling (1991) develop a notion of infrastructure which is to capture the many hidden resources which need to be mobilised to get an information system to actually be used.

Other studies of the standardisation process of information infrastructure tend to focus on issues rather different from ours and those mentioned above. These include the economic significance of standards (David 1987; OECD 1991), technical challenges (Rose 1992), the use of information infrastructures (Ciborra 1992), the political nature of standardisation bodies (Webster 1995) or cultural differences (Trauth, Derksen and Mevissen 1993). The predominant view on information infrastructure standardisation is that it is straightforward. An exception to this is (Williams 1997). Standardisation is commonly portrayed either as (i) a fairly unproblematic application of mainstream techniques for solving the technical difficulties of software engineering, or (ii) it is simply glossed over or taken as given (Ciborra 1992; David 1987; OECD 1991). Those involved in the design of information infrastructure have so far been very close to (i). These studies shed little light on the socio-technical complexity of establishing an information infrastructure.

Lehr (1992) points to the bureaucratic and procedural differences in the way standardisation bodies organise their work. These are argued to play an important role for the outcome, namely the standards. For instance, the OSI effort represents a clear-cut alternative to the evolutionary approach underlying an emphasis on transition strategies. OSI is designed monolithically from scratch, that is, with a total disregard for existing information infrastructures, the installed base. It has been fiercely criticised for exactly this (Hanseth, Monteiro and Hatling 1996; Rose 1992).

The literature on large technical systems is illuminating in describing how infrastructures are established, but tends to bypass how to facilitate changes (Summerton 1994). A particularly relevant contribution is (Hughes 1983), which gives a historical account of the electrification of the Western world around the turn of the century. Hughes' work is important but it does not address the dilemmas of scaling explicitly. It provides, however, rich empirical material containing lots of illustrations of transition strategies, the role of the installed base and gateway technologies, etc.

Recently, there has been attention to development strategies suitable for information infrastructures (Kahin and Abbate 1995). These strategies do not deal with scaling but address issues such as the role of government intervention and industrial consortia.

Grindley (1995) argues for the importance of the installed base of products. This emphasises the need for products to be backwards compatible, that is, to interoperate with earlier versions of the product. In other words, this protects the installed base of earlier versions of the product. Backward compatibility plays the same role for products as transition strategies do for information infrastructures (Hinden 1996).


CHAPTER 4 Standards and standardization processes

Introduction

The world is full of standards. Standards regulate, simplify and make possible an extensive division of labour, which should be recognized as a necessary basis for far-reaching modernization processes (REFs noe STS).

Also in the world of computers there is a rich variety of standards. The conventional wisdom, however, is that standards are either simple and straightforward to define or purely technical (REFs). One, if not the, key theme running right through this book is that the development of an information infrastructure, necessarily including the standards, should instead be recognized as a highly complex socio-technical negotiation process. There is a pressing need to develop our understanding of how "social, economic, political and technical institutions (...) interact in the overall design of electronic communication systems" (Hawkins 1996, p. 158). There is accordingly a need to classify and conceptualize in order to grasp the role of standards in the development of information infrastructures.

This chapter first outlines the basic argument for standards for communication technology. Strictly speaking, "Communication systems cannot function without standards" (Hawkins 1996, p. 157). We then provide a taxonomy of different types of standards. Standards for information infrastructures are worked out within quite distinct institutional frameworks. We describe the most influential, international institutions aiming at open information infrastructures. Lastly, we briefly review the literature on standardization.

The economy of scale argument

Standardized technologies abound and make perfectly good sense. They simplify otherwise complicated choices, enable large scale integration, and form the basis for a division of labour. The standardization of the design of cars created such a division of labour between car manufacturers and suppliers of standardized parts, ranging from roller bearings and lamps to complete motors (when? ref). For communication technology there is in addition a very influential, brute force argument. It is simple and is typically illustrated by the following figure.

FIGURE 3. The number of different links as a function of the number of nodes.

The figure shows how the number of communication connections, or protocols, (the edges) rapidly increases as the number of communicating partners (the nodes) rises. In the left hand case, every pair of communicating partners needs a protocol to communicate. With 4 partners the required number of protocols is 6. The number of protocols increases rapidly: with 5 partners, the number of required protocols is 10, with 6 partners 15, etc. The number of protocols is given by the formula n(n-1)/2, where n is the number of nodes. In the right hand situation, all communication is based on one single shared, standardized protocol. What is needed is establishing a link between each partner (node) and the standard. As a consequence, the number of required links to be established and maintained increases linearly with the number of partners — given the existence of a shared standard. With 4 partners, 4 links are required; with 5 partners, 5 links are required, etc. Already with a relatively small community of communicating partners, the only feasible strategy is to use a shared, standardized protocol, an Esperanto language which everyone reads and writes. A solution with pairwise protocols among the communicating partners simply does not scale; it works in principle but not in practise. Larger communication networks are simply impossible to build and manage if not based on standards.
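As a simple illustration of the two growth rates, the following Python sketch (our own, purely illustrative) computes the number of links required under the two regimes:

    def pairwise_links(n: int) -> int:
        # Bilateral protocols among n partners: n(n-1)/2 (left hand case).
        return n * (n - 1) // 2

    def shared_standard_links(n: int) -> int:
        # One link per partner to the shared standard (right hand case).
        return n

    for n in (4, 5, 6, 50):
        print(n, pairwise_links(n), shared_standard_links(n))

Running it reproduces the numbers above - 6, 10 and 15 pairwise protocols for 4, 5 and 6 partners against 4, 5 and 6 links with a shared standard - and shows how quickly the gap widens: 50 partners already require 1225 pairwise protocols.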

As mentioned in the previous chapter, we consider standards not only important from a practical and economic perspective, but also as an essential and necessary constituting element. If an "infrastructure" is built only on the basis of bilateral arrangements, it is no real infrastructure. It is then just a collection of separate, independent connections.

The brute force argument sketched above makes a lot of sense. The ideal of establishing a shared standard is easy to understand and support. The problem, however, is how to decide which areas should be covered by one standard, how different standards should relate to each other and how to change them as their environment changes, i.e. how to pragmatically balance the idealized picture of everybody sharing the same standard against the messy, heterogeneous and irreversible character of an information infrastructure. This act of balancing — as opposed to dogmatically insisting on establishing one, shared standard — is very close to the heart of the design of an information infrastructure. It is an issue we explore in greater depth in the subsequent chapters.

Types of standards

Standards abound. David and Greenstein (1990, p. 4) distinguish among three kinds of standards: reference, minimum quality and compatibility standards. II standards belong to the last category, that is, standards which ensure that one component may successfully be incorporated into a larger system given an adherence to the interface specification of the standard (ibid., p. 4). One may also classify standards according to the processes whereby they emerge. A distinction is often made between formal, de facto and de jure standards. Formal standards are worked out by standardisation bodies. Both OSI and Internet standards are formal according to such a classification.1 De facto standards are technologies standardised through market mechanisms, and de jure standards are imposed by law.

De facto standards are often developed by industrial consortia or vendors. Examples are the new version of the HTML format for the WorldWideWeb currently being developed by the W3 consortium, IBM's SNA protocol, CORBA, a common object oriented repository for distributed computing, the new version of Unix being developed by X/Open, and the Health Level 7 standard2 for health care communication. Some of these consortia operate independently of the international standardisation bodies, others align their activities more closely. For instance, the W3 consortium is independent of, but closely linked to, the standardisation process of the IETF (see further below).

Internet standards

The Internet is built of components implementing standardized communication protocols. Among these are well known ones such as TCP, IP, SMTP (email), HTTP (World Wide Web), FTP (file transfer) and TELNET (remote login). But the Internet includes many more standards - in June 1997 there were in fact 569 officially registered Internet standards (RFC 2200). These standards are split into different categories.

Maturity levels and status

There are two independent categorizations of protocols. The first is the "maturity level," which in the Internet terminology is called the state of standardization. The state of a protocol is either "standard", "draft standard", "proposed standard", "experimental", "informational" or "historic". The second categorization is the "requirement level" or status of a protocol. The status is either "required", "recommended", "elective", "limited use", or "not recommended".

1. This is the source of some controversy. Some prefer to only regard OSI as "formal" due to properties of the standardisation process described later. This disagreement is peripheral to our endeavour and is not pursued in this book.

2. Health Level 7 is a standard worked out by an ad-hoc formation of a group of smaller vendors in the United States, later on affiliated to the American National Standards Institute, ANSI (see http://www.mcis.duke.edu/standards/HL7/hl7.htm).

When a protocol is advanced to proposed standard or draft standard, it is labelledwith a current status.

In the Internet terminology, computers attached to or otherwise a part of the network are called "systems." There are two kinds of systems - hosts and gateways. Some protocols are particular to hosts and some to gateways; a few protocols are used in both. It should be clear from the context of the particular protocol which types of systems are intended.

Protocol states are defined as follows:

• Standard Protocol: The IESG (Internet Engineering Steering Group) has established this as an official standard protocol for the Internet. These protocols are assigned STD numbers (see RFC 1311). They are separated into two groups: (1) IP protocol and above, protocols that apply to the whole Internet; and (2) network-specific protocols, generally specifications of how to do IP on particular types of networks.

• Draft Standard Protocol: The IESG is actively considering this protocol as a possible Standard Protocol. Substantial and widespread testing and comment are desired. Comments and test results should be submitted to the IESG. There is a possibility that changes will be made in a Draft Standard Protocol before it becomes a Standard Protocol.

• Proposed Standard Protocol: These are protocol proposals that may be considered by the IESG for standardization in the future. Implementation and testing by several groups is desirable. Revision of the protocol specification is likely.

• Experimental Protocol: A system should not implement an experimental protocol unless it is participating in the experiment and has coordinated its use of the protocol with the developer of the protocol.

• Informational Protocol: Protocols developed by other standard organizations, or vendors, or that are for other reasons outside the purview of the IESG, may be published as RFCs for the convenience of the Internet community as informational protocols.

• Historic Protocol: These are protocols that are unlikely to ever become standards in the Internet, either because they have been superseded by later developments or due to lack of interest.


Typically, experimental protocols are those that are developed as part of an ongoing research project not related to an operational service offering. While they may be proposed as a service protocol at a later stage, and thus become proposed standard, draft standard, and then standard protocols, the designation of a protocol as experimental may sometimes be meant to suggest that the protocol, although perhaps mature, is not intended for operational use.

Protocol status is defined as follows:

• Required Protocol: A system must implement the required protocols.

• Recommended Protocol: A system should implement the recommended protocols.

• Elective Protocol: A system may or may not implement an elective protocol. The general notion is that if you are going to do something like this, you must do exactly this. There may be several elective protocols in a general area; for example, there are several electronic mail protocols, and several routing protocols.

• Limited Use Protocol: These protocols are for use in limited circumstances. This may be because of their experimental state, specialized nature, limited functionality, or historic state.

• Not Recommended Protocol: These protocols are not recommended for general use. This may be because of their limited functionality, specialized nature, or experimental or historic state.
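Since state and status are independent dimensions, it can be helpful to model them as two separate enumerations. The following Python sketch is our own illustration - the enumeration values follow the RFC terminology, but the modelling itself (the names and the standards-track list) is ours:

    from enum import Enum

    class State(Enum):
        # Maturity level ("state of standardization").
        STANDARD = "standard"
        DRAFT = "draft standard"
        PROPOSED = "proposed standard"
        EXPERIMENTAL = "experimental"
        INFORMATIONAL = "informational"
        HISTORIC = "historic"

    class Status(Enum):
        # Requirement level ("status").
        REQUIRED = "required"
        RECOMMENDED = "recommended"
        ELECTIVE = "elective"
        LIMITED_USE = "limited use"
        NOT_RECOMMENDED = "not recommended"

    # The standards track proper, in order of increasing maturity.
    STANDARDS_TRACK = (State.PROPOSED, State.DRAFT, State.STANDARD)

    # Example: TCP as listed in RFC 2200 - a full standard, required.
    tcp = (State.STANDARD, Status.REQUIRED)
    print(tcp[0] in STANDARDS_TRACK)  # True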

TABLE 1. Internet standards - numbers and growth

                 94/7 (a)  Added (b)  97/6 (c)  New (d)  Removed (e)
  Standard             58          2        60        7            5
  Draft                21         28        49       45           17
  Proposed            161         99       260      150           51
  Experimental         53         45        98       55           10
  Informational        23         34        57       54           20
  Historic             35         10        45       10            0
  Total               351        218       569      321          103

a. Number of Internet standards in July 1994 (RFC 1610).
b. Increase in the number of Internet standards from July 1994 to June 1997 (RFCs 1720, 1780, 1800, 1880, 1920, 2000, 2200).
c. Number of Internet standards in June 1997 (RFC 2200).
d. Total number of new Internet standards from July 1994 to June 1997 (RFCs 1720, 1780, 1800, 1880, 1920, 2000, 2200).
e. The number of Internet standards deleted from the official list from July 1994 to June 1997 (ibid.).


Among the 569 registered Internet standards there are 60 standard protocols,3 49 draft standards, 260 proposed, 98 experimental, 57 informational and 45 historic. The growth in standards from July 1994 is, as illustrated in Table 1, quite significant - about 62%. However, the growth has been rather uneven among the various standards categories - 3.5% growth in full standards, 133% in draft and 60% in proposed standards. Looking at the numbers in Table 1, we see that the number of proposed and experimental standards is growing rapidly. Lots of new standards are popping up, while close to none are advancing to the top level. An important explanation for this is the increasing growth of the Internet, both in the number of computers (and users) connected and in the complexity of the technology. Within this complex, unstable environment, developing stable and accepted standards gets increasingly more difficult (Alvestrand 1996). This points to the more general question of whether the organisation of Internet standardisation, which so far has proven so successful, has reached its limit, i.e. whether there is a need for a more formal, more ISO-like organisation (see also earlier remarks in chapter 2). Alternatively, the size the Internet is approaching represents a limit for how far that kind of network can grow.
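These figures can be recomputed directly from Table 1. The small Python check below (ours, purely illustrative) reproduces them to within rounding; the "about 60%" quoted for proposed standards comes out as 61.5%:

    counts_94 = {"Standard": 58, "Draft": 21, "Proposed": 161,
                 "Experimental": 53, "Informational": 23, "Historic": 35}
    counts_97 = {"Standard": 60, "Draft": 49, "Proposed": 260,
                 "Experimental": 98, "Informational": 57, "Historic": 45}

    def growth(old: int, new: int) -> float:
        # Percentage growth from the 1994 count to the 1997 count.
        return 100.0 * (new - old) / old

    total_94, total_97 = sum(counts_94.values()), sum(counts_97.values())
    print(f"total: {growth(total_94, total_97):.1f}%")            # 62.1%
    for cat in ("Standard", "Draft", "Proposed"):
        print(f"{cat}: {growth(counts_94[cat], counts_97[cat]):.1f}%")
        # Standard: 3.4%, Draft: 133.3%, Proposed: 61.5%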

Types of Internet standards

The Internet standards specify lots of different kinds of communication (sub-)technologies. We will here mention some of them just to give a flavour of what all these protocols are about. Interested readers should consult the truly vast electronic archive the Internet community keeps of its reports and discussions (see the list cited in the section on methodological issues in chapter 2).4


3. There are 53 Internet standards assigned a number. We have here counted the number of RFCs included in the official list of standards.

4. A good place to start looking for RFCs and information about Internet standards is http://ds.internic.net.


First of all there are protocol specifications like TCP, IP, SMTP, HTTP, RTP (Transport Protocol for Real-Time Applications), FTP, etc. These are the standards most often associated with the Internet.

In addition, there are lots of auxiliary protocols, protocols offering services to the others. These include PPP (Point-to-Point Protocol), SNMP (for network management), the Echo protocol, DNS (Domain Name System, a distributed database system mapping a symbolic Internet address (like diagnostix.ifi.uio.no) to a numerical one (like 129.240.68.33)), tools for DNS debugging, Time Server Protocol, etc. There are further lots of standards defining security systems (Kerberos authentication service, encryption control protocol, MIME object security services, signed and encrypted MIME, etc.).
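The DNS mapping just described can be tried from any networked machine with a few lines of Python using the standard library resolver. The host name below is the example from the text and may no longer resolve; substitute any reachable name:

    import socket

    name = "diagnostix.ifi.uio.no"  # the text's example host; may be gone
    try:
        # Ask the local resolver to map the symbolic name to a numerical
        # IP address, exactly the service DNS standardizes.
        print(name, "->", socket.gethostbyname(name))
    except socket.gaierror as err:
        print("lookup failed:", err)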

The Internet runs across many different physical networks; accordingly there are standards defining how to implement one protocol on top of these networks, typically IP on ARPANET, Wideband Network, Ethernet networks, IEEE 802, transmission of IP traffic over serial lines, NETBIOS and FDDI. There is also a significant number of standards defining gateways between protocols, like FTP - FTAM Gateway Specifications and X.400 - MIME Body Equivalencies. One group of standards defines how to construct various identifiers (including addresses) like IP addresses and URL specifications.

SNMP is the Internet network management protocol, defining the general rules for management of any part of the Internet. However, to set up a management system, additional information about the specific networks and protocols is required. Accordingly, there is one standard, SMI, defining how this information should be specified, as well as standards (MIBs, Management Information Bases) defining how to manage specific parts of the Internet using for instance ATM, Ethernet, IP, TCP, UDP, DECNET and X.500.

Lastly, there are standardized data formats like MAIL (Format of Electronic Mail Messages), MIME (Multipurpose Internet Mail Extensions) and MIME extensions (MIME Media Types, MIME message header extensions for non-ASCII), HTML, SGML Media Types, Using Unicode with MIME, NETFAX (file format for exchange of images), MIME encapsulation of Macintosh files and Serial Number Arithmetic.


Standards for health care information infrastructures

We give a brief outline of the different II standards related to the health care sector. A comprehensive overview of various international standardization efforts can be found in (De Moor, McDonald and van Goor 1993).

CEN TC/251

In chapter 2 we presented CEN TC/251 as the standardisation body most widely accepted as the authoritative one. We will in this section give a brief overview of what kinds of standards this body is defining. CEN TC/251 has split its work into seven different subfields, which are called:5

1. Healthcare Terminology, Semantics and Knowledge Bases

2. Healthcare Information Modelling and Medical Records

3. Healthcare Communications and Messages

4. Medical Imaging and Multimedia

5. Medical Device Communication in Integrated Healthcare

6. Healthcare Security and Privacy, Quality and Safety

7. Intermittently Connected Devices

Within these areas standards of very different types are defined. We will here illustrate this with the standards defined by xx 1996 within four areas. Within "Healthcare Terminology, Semantics and Knowledge Bases," standards with the following titles were defined:

• Structure for Nomenclature, Classification and Coding of Properties in Clinical Laboratory Sciences

• Structure for Classification and Coding of Surgical Procedures

• Categorical Structures of Systems of Concepts - Model for the Representation of Semantics

• Medicinal Product Identification Standard

• Structure of Concept Systems - Medical Devices

5. Information about the work of CEN TC/251 is found at http://


Within the second area, "Healthcare Information Modelling and Medical Records," standards were defined with the following titles:

• Medical Informatics - Vocabulary

• Healthcare Information Systems Framework

• Electronic Healthcare Records Architecture

• Standard Architecture for Healthcare Information Systems

Within “Healthcare Communications and Messages” the standards defined were

• Procedures for the Registration of Coding Schemes related to Healthcare

• Messages for Exchange of Laboratory Information

• Recording Data Sets used for Information Interchange in Healthcare

• Request and Report Messages for Diagnostic Services Departments

• Messages for Exchange of Healthcare Administrative Information

• Messages for Patient Referral and Discharge

• Methodology for the Development of Healthcare Messages

Within “Medical Imaging and Multimedia” the following standards were defined:

• Medical Imaging and Related Data Interchange Format Standards

• Medical Image Management Standard

• Medical Data Interchange: HIS/RIS-PACS and HIS/RIS - Modality Interfaces

• Media Interchange in Medical Imaging Communications

Each of these standards is also given a status similar to those used in the Internet community, reflecting its maturity level (pre-standard, standard).

The standards cover a wide range of different health care related phenomena. They are also related to very different aspects of information infrastructure development and use. Several specify "messages," i.e. structures of information to be exchanged. These standards also specify the institutions (or partners) the information should be exchanged between and when (for instance that a lab report should be sent from the lab to the ordering unit when the ordered analyses are finished). For these messages, a separate group of standards defines the formats and protocols to be used when exchanging them. Another group of standards defines the semantics of important data fields in terms of nomenclatures and classification systems to be used. Further, there are standards defining the overall architectures of health care information systems and infrastructures, and even methodologies for developing various types of standards.

While CEN standards are those considered the most important in Europe, there are lots of others. Some of these are overlapping, and in some cases a standard defined by one body is simply adopted by another. For instance, the DICOM standard for medical images developed by ACR/NEMA has broad acceptance in this area and is accordingly more or less adopted by CEN as it is. Similarly, the EDIFACT messages defined by CEN are also given official status within the EDIFACT bodies.

EDI

EDI denotes electronic exchange of form-like information between different organizations. It is often used to transfer electronic equivalents of already standardized paper forms. Examples are orders, invoices, customs declaration documents, bank (payment) transactions, etc. Within health care there is a vast range of such forms, including lab orders and reports, admission and discharge letters and drug prescriptions.

Within EDI, the EDIFACT format is the most widely accepted standard (among standardization bodies, maybe not among users). Accordingly, EDIFACT has been chosen as the basis for exchange of form-like information in health care as well.

In the EDIFACT community, there are three forms of standards: messages, segments and data elements. The electronic equivalent of a paper form is a message. A message is, in EDIFACT terms, composed of segments and segment groups, where a segment group is (recursively) defined by segment groups and/or segments. A segment is composed of data elements - single or composite, the latter being composed of a number of single ones.
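This recursive structure lends itself to a direct representation as data types. The Python sketch below is our own illustration; the class names, and the example message and segment tags, are ours and not normative EDIFACT definitions:

    from dataclasses import dataclass, field
    from typing import List, Union

    @dataclass
    class DataElement:           # a single data element
        tag: str
        value: str = ""

    @dataclass
    class CompositeDataElement:  # composed of a number of single ones
        tag: str
        components: List[DataElement] = field(default_factory=list)

    @dataclass
    class Segment:               # composed of single or composite elements
        tag: str
        elements: List[Union[DataElement, CompositeDataElement]] = field(default_factory=list)

    @dataclass
    class SegmentGroup:          # recursively: segments and/or groups
        tag: str
        members: List[Union[Segment, "SegmentGroup"]] = field(default_factory=list)

    @dataclass
    class Message:               # the electronic equivalent of a paper form
        name: str
        body: List[Union[Segment, SegmentGroup]] = field(default_factory=list)

    # A toy lab-report-like message (tags illustrative only).
    msg = Message("LABRPT", [Segment("UNH"),
                             SegmentGroup("SG1", [Segment("PNA")])])
    print(msg.name, len(msg.body))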

EDIFACT messages are defined internationally. Such international messages are often accompanied by specifications of national or sectorwise subsets, as well as "exchange agreements" between pairs of communicating partners. Such specifications define in more detail how a general message should be used within a region or sector or between specific partners.


Standardisation processes and strategies

Information infrastructures, like many other kinds of large technical systems (Hughes 1987), are standardised by formal, quasi-democratic, international standardisation bodies (Lehr 1992). These standardisation bodies have to follow predefined procedures and rules regulating the status, organisation and process of developing standards. In recognition of the limits of both market forces and hierarchical control, formal standardisation is a key strategy for developing an information infrastructure (OECD 1991).

Different standardization institutions organise the process of standardisation quite differently along several important dimensions, including the way participation in the process is regulated, how voting procedures are organised, the requirements proposed standards have to meet at different stages in the process, the manner in which information about ongoing standardisation is made public, and the bureaucratic arrangements of how work on one specific standard is aligned with other efforts, etc.

Standardization processes have only recently become a topic for research and debate. Branscomb and Kahin (1995) discuss three possible models for NII standards development, quite close to David's three categories mentioned above, which they have given the following names:

1. The Applications Model: Intense Competition and Ad Hoc Consortia.

2. The Internet Model: A Cooperative Platform for Fostering Competition.

3. The Telecommunications Model: From National-Level Management to Open Competition

We will now present OSI and EDIFACT as examples of the telecommunications model, and the Internet as an example of the Internet model. We will also present the standardization processes and strategies adopted in health care, which in fact combine all the models to some extent.

OSI

OSI is short for the ISO Open Systems Interconnection reference model. OSI was worked out by the International Standardization Organization (ISO) in the early 80s. ISO is a voluntary, non-treaty organization that was founded just after World War II and produces international standards of a wide range of types. The members of ISO are national standardization bureaus rather than individuals or organizations. Members include national standardization organisations like ANSI (from the US), BSI (from Great Britain), AFNOR (from France) and DIN (from Germany) (Tanenbaum 1989).

ISO is divided into a number of technical committees according to whether the focus is on specification of nuts and bolts for construction work, paint quality or computer software. There are several hundred technical committees within ISO. The "real" work is done by the several thousand non-paid volunteers in the working groups. A number of working groups belong to a subcommittee which, again, belongs to a technical committee. To refer to the members of the working groups as non-paid volunteers merely implies that they are not paid or employed by ISO as such. They are typically employed by large vendors, consumer organizations or governmental institutions. Historically, vendors have dominated standards committees (Jakobs 1998; Lehr 1992).

The development of OSI protocols follows (in formal terms) democratic procedures with representative participation under the supervision of the ISO (Lehr 1992). The standardization process aims at achieving as broad a consensus as possible. Voting is representative, that is, each member (representatives of national bureaus of standardization) is given a specified weight.

An ISO standard passes through certain stages. It starts as a draft proposal worked out by a working group in response to one representative's suggestion. The draft proposal is circulated for six months and may be criticized. Given that a substantial majority is favourable, criticism is incorporated to produce a revised document called a draft international standard. This is circulated for both comments and voting. This then is fed into the final document, an international standard, which gets approved and published. When faced with controversial issues, the process may back-track in order to work out compromises that mobilize sufficient support in the voting. In this way, as was the case with OSI, the standardization process may stretch over several years.

OSI protocols are developed by first reaching a consensus about a specification of the protocol. The protocol specifications are assumed to be implemented as software products by vendors. The implementation is independent of the standardisation process. Because of the formal and political status of OSI protocols, most Western governments have decided that IIs in the public sector should be based on OSI protocols.

The implementation and diffusion of OSI protocols have not proceeded as anticipated by those involved in the standardisation processes. One of the main reasons is that they have been developed by large groups of people who have been specifying the protocols without any required implementation and without considering compatibility with non-OSI protocols (Rose 1992). This results in very complex protocols and serious unforeseen problems. The protocols cannot run alongside other networks, only within closed OSI environments. The protocols are big, complex and ambiguous, making them very difficult to implement in compatible ways by different vendors. The definition of profiles mentioned earlier is an attempt to deal with this problem.

EDIFACT

EDIFACT, short for Electronic Data Interchange for Administration, Commerce and Transport, is a standardization body within the United Nations. This has not always been the case. EDIFACT has during the last couple of decades transformed dramatically, both in content and institutionally. It started as an informal body of about 30 people world-wide (Graham et al. 1996, p. 9). Since then it has grown to a huge, global bureaucracy involving several hundred participants. The small group of people who established EDIFACT chose the United Nations as their overarching organization because they expected the perceived neutrality of the United Nations to contribute favourably to the diffusion of EDIFACT messages (ibid., p. 9). The representation in EDIFACT is, as is usual within the United Nations, through national governments rather than the national standards bodies, as would have been the case if EDIFACT had chosen to align with ISO instead.

During 1987, EDIFACT reorganized into three geographically defined units: North America, Western Europe and East Europe. This has later been extended to cover Australia/New Zealand and Japan/Singapore. These areas are subsequently required to coordinate their activity. Although these areas in conjunction cover a significant part of the world, the vast majority of the work has taken place within the Western European EDIFACT board (ibid., p. 9). This is due to the close alignment with the TEDIS programme of the Commission of the European Community. In 1988, the EDIFACT syntax rules were recognized by ISO (LIASON???XXXX).

EDIFACT messages pass through different phases before reaching the status of a proper standard. A draft is first circulated to the secretariat and all other regional secretariats to be assessed purely technically before being classified as status level 0. Moving up to status 1 requires a consensus from all the geographical areas. After status 1 is achieved, the proposal has to be delayed for at least a year to allow implementations and assessments of use. If and when it reaches status 2, it has become a United Nations standard and is published.

The EDIFACT organization also specifies rules for the design of messages. For instance, if an existing message can be used, that one should be adopted rather than a new one designed. In the same way the rules specify that existing segments and data elements should be reused as far as possible.

Internet

As illustrated in chapter 2, the organization of the development of the Internet has changed several times throughout its history. The organizational structure has changed from that of a research project into one very close to that of a standard standardization body. Along with the changes in organizational structure, the rules have also changed.

The Internet is open to participation for anyone interested, but without ensuring representative participation.6 The development process of Internet protocols follows a pattern different from that of OSI (RFC 1994; Rose 1992). A protocol will normally be expected to remain in a temporary state for several months (minimum six months for proposed standard, minimum four months for draft standard). A protocol may be in a long term state for many years.

A protocol may enter the standards track only on the recommendation of the IESG, and may move from one state to another along the track only on the recommendation of the IESG. That is, it takes action by the IESG either to start a protocol on the track or to move it along.

Generally, as the protocol enters the standards track a decision is made as to the eventual STATUS, requirement level or applicability (elective, recommended, or required) the protocol will have, although a somewhat less stringent current status may be assigned; the protocol is then placed in the proposed standard STATE with that status. So the initial placement of a protocol is into the state of proposed standard. At any time the STATUS decision may be revisited.

The transition from proposed standard to draft standard can only be by action of the IESG, and only after the protocol has been a proposed standard for at least six months.

The transition from draft standard to standard can only be by action of the IESG, and only after the protocol has been a draft standard for at least four months.
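The minimum-residence part of these rules is easy to capture in code; the Python sketch below (ours) only checks eligibility, since the IESG action itself is an institutional decision, not a computation:

    MIN_MONTHS = {"proposed standard": 6, "draft standard": 4}
    NEXT_STATE = {"proposed standard": "draft standard",
                  "draft standard": "standard"}

    def eligible_to_advance(state: str, months_in_state: int) -> bool:
        # True if the protocol has been in `state` long enough for the
        # IESG to be allowed to advance it to NEXT_STATE[state].
        return state in NEXT_STATE and months_in_state >= MIN_MONTHS[state]

    print(eligible_to_advance("proposed standard", 7))  # True
    print(eligible_to_advance("draft standard", 2))     # False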


Occasionally, the decision may be that the protocol is not ready for standardization and will be assigned to the experimental state. This is off the standards track, and the protocol may be resubmitted to enter the standards track after further work. There are other paths into the experimental and historic states that do not involve IESG action.

Sometimes one protocol is replaced by another and thus becomes historic, or it may happen that a protocol on the standards track is in a sense overtaken by another protocol (or other events) and becomes historic.

Standards develop through phases which explicitly aim at interleaving the development of the standard with practical use and evaluation (RFC 1994, 5). During the first phase (a Proposed Standard), known design problems should be resolved but no practical use is required. In the second phase (a Draft Standard), at least two independent implementations need to be developed and evaluated before it may pass on to the final phase, that is, to be certified as a full Internet Standard. This process is intended to ensure that several features are improved, the protocols are lean and simple, and they are compatible with the already installed base of networks. Internet standards are to function in a multi-vendor environment, that is, achieve "interoperability by multiple independent parties" (ibid., p. 5).

6. The term "Internet" may denote either (i) the set of standards which facilitate the technology, (ii) the social and bureaucratic procedures which govern the process of developing the standards, or (iii) the physical network itself (Krol 1992; RFC 1994). This might create some confusion because a version of the Internet in the sense of (i) or (iii) has existed for many years whereas (ii) is still at work. We employ the term in the sense of (ii) in this context. To spell out the formal organisation of the Internet in slightly more detail (RFC 1994), anyone with access to the Internet (that is, in the sense of (iii)!) may participate in any of the task forces (called IETF) which are dynamically established and dismantled to address technical issues. The IETF nominates candidates to both the Internet Advisory Board (IAB, responsible for the overall architecture) and the Internet Engineering Steering Group (IESG, responsible for the management and approval of the standards). The IAB and IESG issue all the official reports which bear the name "Requests For Comments" (RFC). This archive was established along with the conception of the Internet some 25 years ago. It contains close to 2000 documents, including all the formal, proposed, draft and experimental standards together with a description of their intended use. The RFCs also record a substantial part of the technical controversies as played out within working groups established by the IETF or in independent comments. Minutes from working group meetings are sometimes published as RFCs. In short, the RFCs constitute a rich archive which sheds light on the historic and present controversies surrounding the Internet. It seems to be a rather neglected source of information and accordingly an ideal subject matter for an informed STS project providing us with the social construction of the Internet. It is an electronic archive which may be reached by WorldWideWeb using http://ds.internic.net.

A key source for identifying design principles shared by the vast majority of the Internet community is the set of principles embedded in the procedural arrangements for developing Internet standards. The standards pass through three phases which explicitly aim at interleaving the development of the standard with practical use and evaluation:

These procedures are explicitly aimed at recognizing and adopting generally-accepted practices. Thus, a candidate specification is implemented and tested for correct operation and interoperability by multiple independent parties and utilized in increasingly demanding environments, before it can be adopted as an Internet Standard.

(RFC 1994, p. 5)
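
The procedural rules just quoted can be read as a small state machine. The following sketch is our own illustration (hypothetical Python, not code from any RFC): the minimum durations from the transition rules cited at the beginning of this section, and the requirement of independent implementations for a Draft Standard, become explicit preconditions for advancing a protocol.

    # Our sketch of the Internet standards track as a state machine.
    # Transitions happen only by IESG action and only after a minimum time
    # in the current state; 'experimental' and 'historic' lie off this
    # track and are not modelled here.
    TRACK = {
        # state    -> (next state, min. months, min. independent implementations)
        "proposed":  ("draft",     6,           0),  # no practical use required yet
        "draft":     ("standard",  4,           2),  # two implementations evaluated
    }

    def advance(state, months_in_state, implementations, iesg_approves):
        """Return the next state if the rules allow a transition, else the same state."""
        if state not in TRACK or not iesg_approves:
            return state
        next_state, min_months, min_impls = TRACK[state]
        if months_in_state >= min_months and implementations >= min_impls:
            return next_state
        return state

    print(advance("proposed", months_in_state=7, implementations=0, iesg_approves=True))
    # -> 'draft'
    print(advance("draft", months_in_state=2, implementations=3, iesg_approves=True))
    # -> 'draft' (too early: four months required)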

The Internet community consists, in principle, of everybody with access to Internet (in the sense of (iii) above) (RFC 1994). Participation in the e-mail discussions, either general ones or those devoted to specific topics, is open to anyone who submits an e-mail request in the way specified (see http://www.ietf.org). The Internet community may participate in the three yearly meetings of the Internet Engineering Task Force (IETF). The IETF dynamically decides to establish and dismantle working groups devoted to specific topics. These working groups do much of the actual work of developing proposals. At the IETF meetings design issues are debated. It is furthermore possible to organise informal forums called BOFs (“birds of a feather”) at these meetings.

IETF nominates candidates to both the 13-member Internet Advisory Board (IAB) and the 10-member Internet Engineering Steering Group (IESG). The IETF, the IESG and the IAB constitute the core institutions for the design of Internet. Their members are part-time volunteers. In principle, they have distinct roles: the IETF is responsible for actually working out the proposals, the IESG for managing the standardisation process and the IAB for the overall architecture of the Internet together with the editorial management of the report series within Internet called Requests For Comments (RFC). In practise, however, the “boundaries of the proper role for the IETF, the IESG and the IAB are somewhat fuzzy” as the current chair of the IAB admits (Carpenter 1996). It has proven particularly difficult, as vividly illustrated in the case further below, to negotiate how the IAB should exercise its role and extend advice to the IESG and the IETF about the overall architecture of the Internet protocols.

Standardization of health care IIs

The organization of CEN standardization work

CEN, the European branch of ISO, follows the same rules and organizational principles as ISO in the definition of OSI protocols. Standards and work programmes are approved in meetings where each European country has a fixed number of votes. As presented above, the work is organized in eight so-called work groups, from WG1 up to WG8. Each group is responsible for one area. The tasks demanding “real work”, for instance the development of a proposal for a standardized message, are carried out in project teams. More than 1500 experts have been involved (De Moor 1993).

In areas where EDIFACT is used, the definition of the EDIFACT message is delegated to the EDIFACT standardization body, WEEB (Western European EDIFACT Board), which has established a so-called “message design group,” MD9, for the health sector. The two bodies have a liaison agreement regulating their cooperation. This alignment is furthermore strengthened by the fact that a number of the members of WG3 within CEN TC 251 are also members of WEEB MD9. As of July 1995, the secretariat of WEEB was moved to CEN.

On the national level, standardization work is organized to mirror that on the European level. Typical work tasks include specifying national requirements for a European standard, validating proposed European standard messages and adapting European standardized messages to national needs.

CEN/TC 251 thus follows the ISO/OSI model.

Industrial consortia

To the best of our knowledge, there are no standardization efforts related to health care that can be said to follow the Internet model. Industrial consortia, on the other hand, play a significant role. The HL-7 standardization mentioned earlier is organized by one such consortium. This organization has, or at least had in its early days, some rules (maybe only informal agreements) saying that the larger companies were not allowed to participate.


Within the field of medical imaging, the standardization work organized by ACR/NEMA (American College of Radiology/National Electrical Manufacturers' Association), developing the DICOM standard, seems by far to be the most influential. ACR/NEMA is an organization that could be seen as a rather traditional standardization body. However, in this field it is operating as an industrial consortium, and the work is dominated by large companies like General Electric, Siemens and Philips.

Standards engineering

Expositions like the one in this chapter of the contents and operations of the various standardisation bodies involved in information infrastructure development might easily become overwhelming: the number and complexity of abbreviations, institutions and arrangements is impressive (or alarming). This book deals with the design — broadly conceived — of information infrastructure and hence needs to come to grips with the practical and institutional setting of standardisation. Still, it is fruitful to underscore a couple of themes of particular relevance to us that might otherwise drown in the many details surrounding standardisation.

Specification or prototyping

The three models presented by Branscomb and Kahin (REF) categorize standardization processes primarily according to organizational and governmental principles. We will here draw attention to important differences between the OSI and Internet processes seen as technological design strategies. Even if never made explicit, we believe that the two approaches followed by, on the one hand, OSI (and EDIFACT and CEN) and, on the other hand, Internet could be presented as two archetypical approaches to the development of II. To explain this, we attempt to make explicit some of the underlying assumptions and beliefs.

The principal, underlying assumption of OSI's approach is that standards should be developed in much the same way as traditional software engineering, namely by first specifying the system's design, then implementing it as software products and finally putting it into use (Pressman 1992). Technical considerations dominate. As for traditional software engineering (Pressman 1992, 771), OSI relies on a simplistic, linear model of technological diffusion, in this case for the adoption of formal standards. The standardisation of Internet protocols is based on different assumptions. The process is close to an approach to software development much less widely applied than the traditional software engineering approach explained above, namely one stressing prototyping, evolutionary development, learning and user involvement (Schuler and Namioka 1993).

In the Internet approach the standardisation process unifies the development of formal standards and their establishment as de facto ones. There is currently an interesting and relevant discussion going on about whether Internet's approach has reached its limits (see Eidnes 1994, p. 52; Steinberg 1995, p. 144). This is due to the fact that not only the technology changes: as the number of users grows, the organisation of the standardisation work also changes (Kahn 1994).

Contingency approaches

Due to the vast range of different “things” being standardized, it is unlikely that there is one best approach for all cases. Rather, one needs a contingency approach in the sense that one needs to identify different approaches and criteria for choosing among them (David 1985?). Such criteria include the general maturity of the technology to be standardized, its range of diffusion, the number of actors involved, etc. This also implies that the approach used may change dynamically over time as the conditions for it change. The evolution of the standardization approaches followed as the Internet has evolved illustrates such a contingency approach.

Branscomb and Kahin (op. cit.) hold that Internet demonstrates the remarkable potential (although perhaps the outer limits) for standards development and implementation in concert with rapid technological change. However, their view is that interoperability becomes harder to achieve as the functionality of the NII expands and draws in more and more vendors. They also say that it remains uncertain how well the Internet-style standards processes scale beyond their current reach. The difficulties in defining and implementing a new version of IP support this view. This issue will be analysed extensively throughout this book.

Branscomb and Kahin expect that the U.S. Congress (and governments in other countries) will proceed with progressive deregulation of the telecommunication industry. This means that the importance of standardization will grow, while the government's authority to dictate standards will weaken. Yet these standards must not only accommodate the market failures of a deregulated industry; they must support the much more complex networks of the NII.

According to Branscomb and Kahin, all face the paradox that standards are critical to market development but, once accepted by the market, standards may threaten innovation, inhibit change, and retard the development of future markets. They conclude that these risks require standards processes to be future oriented. They consider Internet practices to be breaking down the dichotomy between anticipatory versus reactive standards by promoting an iterative standards development with concurrent implementation.

State of the art in research into standardisation

In activities aiming at implementing the NII and Bangemann reports, standards are identified as the key elements (ref.). However, it is becoming increasingly accepted that current standardization approaches will not deliver (ref.). “Current approaches” means here the telecommunication model (including ISO's). This approach is experienced as all too slow and inflexible. Some believe the democratic decision process is the problem, and that it should be replaced by more managerial governance models (ref.).

Although the success of the Internet approach implies that it should be more widely adopted, as pointed out by Branscomb and Kahin, this approach has its limitations as well. This rather miserable state of affairs is motivating a growing research activity into standardization and standardization approaches. We will here briefly point to some of the major recent and ongoing activities - activities this book also is intended to contribute to:

• STS-like contributions

• digital libraries

• the Harvard project

• Hawkins & Mansell - political institutional economy

• diffusion theorists


CHAPTER 5 Openness and flexibility

Introduction

We will in this chapter look more closely at the open character of information infrastructures underscored in chapter 3. We will discuss and illustrate how this character is materialized and spell out (some of) its implications. We will in particular look at how openness generates requirements for flexible infrastructures.

At the core of this lies a dilemma. On the one hand, the expanding II supporting a growing population of users and new services accumulates pressure to make changes, but, on the other hand, this has to be balanced against the conservative influence of the huge, already installed base of elements of the II. There is simply no way to accomplish abrupt changes to the whole II requiring any kind of overall coordination (for instance, so-called flag-days) because it is “too large for any kind of controlled rollout to be successful” (Hinden 1996, p. 63).

Standardization and flexibility are opposites. However, both are required. This fact makes the tension, interdependencies and interaction between these two issues crucial.


Openness

Defining openness

Infrastructures, as explained in chapter 3, are open in the sense that there are no limits to the number of users, stakeholders, vendors involved, nodes in the network and other technological components, application areas, network operators, etc. This defining characteristic does not necessarily imply the extreme position that absolutely everything is included in every II. However, it does imply that one cannot draw a strict border saying that there is one infrastructure for what is on one side of the border and others for the other side, and that these infrastructures have no connections.

The enabling character of infrastructures means that their use areas should not be predetermined or specified ahead of design and implementation. Enabling implies that infrastructures should be used where and as needed, as needs change and use opportunities are discovered. This aspect of infrastructures is well illustrated by the growth in the number of fields where the Internet is used, from the initial small group of researchers to virtually everybody.

Each use area has its categories and groups of users. If the number of use areas is unlimited, so is the number of users, which further makes it impossible to define strict limits for the number of components included in the infrastructure and the number of vendors and other stakeholders involved in its design and operation.

In the discussion about open systems (XXX Carl Hewitt), the term open usually means that a system is embedded in an environment and that the system cannot be properly understood without being considered as a part of its environment. Our use of the term open includes this meaning as well.

Infrastructures are embedded in environments, upon which they are dependent. Change in the environment might require change in the infrastructure. Infrastructures are parts of their environments in a way that makes it difficult to define a strict border between infrastructure and environment. The infrastructure is a part of its environment just as the environment is a part of the infrastructure.

The Internet, for instance, is running over various basic telecommunication infrastructures like ATM and telephone networks. Are these networks a part of the Internet or not? In the same way, health care infrastructures (for instance patient record and telemedicine infrastructures) are integrated with medical instruments (MR, CT, X-ray, ultrasound, ECG, EEG or endoscope). Are they a part of the infrastructure?


When using a specific information infrastructure, say an EDI network for transmission of lab reports or prescriptions, it will in fact be used in combination with other infrastructures like the ordinary telephone. In the case of lab reports, the telephone is used in emergency cases. In the case of prescriptions, pharmacies will call the general practitioner when there is something that requires clarification. The EDI network is not working unless it is combined with others in this way. The overall lab report and prescription transmission infrastructure also includes the telephone infrastructure.

Unlimited use

As mentioned above, the Internet is a good example of the impossibility of defining the areas and ways of using an infrastructure. Email was initially designed to support the coordination and management of the first version of the Internet. Today it is used in virtually all kinds of activities. The enabling character of infrastructures makes defining their use areas impossible. This implies further that defining user requirements is impossible.

Unlimited number of stakeholders

Unlimited use necessarily implies unlimited users as well as other stakeholders. The stakeholders around the design and use of an information infrastructure include at least:

• different categories of end-users;

• a collection of managers from involved user organizations;

• numerous vendors for various components of an information infrastructure;

• regulatory regimes (often governmental);

• standardisation bodies with professional standardisation experts and bureaucrats;

• telecom operators and service providers;

• political leaders (including Clinton) and institutions (including the European Union) rhetorically promoting visions of desirable changes in work and leisure.

The present dog-fight over Internet provides ample evidence for the crossing interests of the many stakeholders (Monteiro 1998). The last years have witnessed a transformation of Internet from an esoteric toy nurtured by a small, US based boys' club to an infrastructure supporting a rich spectrum of users and interests. The traditional Internet community was small and cosy, consisting of academics and a collection of designers from large software vendors in the US. Only in 1992 did this traditional community become a minority user group in Internet. Today, there are a number of new stakeholders on the scene. The picture is highly dynamic and changes every day. With the widespread use of the WorldWideWeb, Internet opened up to a new, large and heterogeneous user community of young and old, experienced and non-experienced, and technical and non-technical. Important elements of the design moved outside the control of the IETF. Especially the establishment of the W3C consortium and efforts to promote electronic transactions are influential. Key actors here are large software vendors (including IBM, Sun, Microsoft, Apple and HP) and financial institutions (including Visa and MasterCard). It is interesting, but regrettably beyond the present scope, to study the strategies of these vendors towards open standards. Microsoft, for instance, has traditionally been extremely “closed” in the sense that the different Microsoft products were hardwired into each other, thus keeping competitors out (as did IBM during the golden age of mainframes). Within Internet technology in general and the Web in particular, areas where Microsoft has been lagging considerably behind, they are perfectly happy to be daring and explorative (XXX source: Haakon Lie 1996). They promote continued experimentation rather than early adoption of an existing solution, a solution developed by a competitor.

Another slow-starting, but presumably influential, group of stakeholders are the telecom operators. Coming from a different paradigm altogether (Abbate 1994), telecom operators are increasingly trying to align with Internet rather than work against it. In Europe, for instance, telecom operators are key actors in the area of Internet access provision (REF Knut HS, Jarle B). The media, especially TV and newspapers, struggle to come to grips with their role and use of Internet. Until now, however, they have played a marginal role.

A more technologically biased, but historically important, sense in which information infrastructures involve numerous stakeholders concerns interoperability and portability. An information infrastructure is to avoid too tight a connection with a specific vendor. It should be multi-platform, ideally allowing a rich selection of operating systems, hardware architectures, applications and programming languages. (XX find a suitable quote.) Needless to say, this is more easily stated than practised, as illustrated in chapter 10.

Unlimited size of an information infrastructure

It is not given from the outset how big, that is, how widespread, an information infrastructure will be. It is an open, empirical question. Note that despite the apparent similarities, the situation is distinct from that in the product world. It is certainly true that nobody can tell for sure exactly how many copies of MS Word will be sold, that the “size” of MS Word is unknown beforehand. But this has little or no impact on the design (beyond, of course, the increased pressure in terms of appealing functionality and appearance). There is no intrinsic connection between different stand-alone copies of MS Word. This is fundamentally different for an information infrastructure. As all nodes need to function together — regardless of their number — the design needs to cater for this. It must be designed to be open to allow for an indefinite number of nodes to hook up.

An information infrastructure has to be designed to support a large, but unknown, population of users. So if hardwired constraints to further expansion surface, changes have to be made. These changes also take place during rapid diffusion. The very diffusion is in itself an important reason for the need for change. The number of hosts connected to Internet grew from about 1000 to over 300.000 during 1985-1991 (Smarr and Catlett 1992). The Matrix Information and Directory Services estimated the number at about 10 million in July 1995 (McKinney 1995).

A network with just a few hosts has to be designed differently from one with millions. When the number of hosts is open, the design has to be so as well.

The wider environment - a more rapidly changing world

Dominating accounts of today's business and organizational reality are full of concepts like “globalization,” “competitiveness,” “flexibility,” and change. The “reality” depicted by these concepts is not isolated to the business world; it is rather creeping into virtually all of our social life. Such accounts are found in the popular press as well as the scientific literature.

We are witnessing a rapidly increasing number of theoretical, speculative or empirical accounts dealing with the background, contents and implications of a restructuring of private and public organisations. The sources of these accounts mirror the complex and many-faceted issues raised, of economical (OECD 1992), social (Clegg 1990), political (Mulgan 1991) and technological (Malone and Rockart 1993; Malone, Yates and Benjamin 1991) nature. A comprehensive account of this is practically prohibitive; the only feasible strategy is to focus attention on a restricted set of issues.

New organisational forms are assumed important in order to achieve enhanced productivity, competitiveness, flexibility, etc. New organisational forms are usually of a network type positioned between markets and hierarchies. The discussions about new organisational forms borrow from a number of sources.

From economics, basically relying on transaction-cost considerations, there is a growing pressure to accommodate to the “information economy” (Ciborra 1992; OECD 1992). Transaction-cost considerations fail to do justice to the dynamically changing division of labour and functions, which are two important aspects of new organisational forms (Ciborra 1992). Within the business policy literature the arguments focus on issues of innovation processes as facilitated through strategic alliances and globalization, which emerge pragmatically from concerns about maintaining competitiveness in a turbulent environment (Porter 1990; von Hippel 1988). In organisational theory, one emphasises the weaknesses of centralised, bureaucratic control in terms of responsiveness to new situations (Clegg 1990). Ciborra (1992) sees new organisational forms as rational, institutional arrangements to meet the increased need for organisational learning. Technological development within information and communication technology is identified by some scholars as the driving force for the restructuring of organisations (Malone, Yates and Benjamin 1991; Malone and Rockart 1993).

Even such a brief exposition of theoretical considerations should make it evident that the issue of new organisational forms is vast. When we turn to what exists of empirical evidence, the picture gets even more complicated. This is because the empirical material documents a far less clear-cut picture as it contains numerous contradicting trends (Applegate 1994; Capello and Williams 1992; Orlikowski 1991; Whitaker 1992).

A basic underlying theme in all these accounts is the view that our world is becoming increasingly more open as we are becoming more integrated. The integration is primarily due to improved transport and communication technology. As we are more tightly interwoven in a larger environment, our life is more open to influence by others - which again implies increased instability and unpredictability. The organizational response to this is increased flexibility and communication to more effectively adapt to and interoperate with the environment. In this sense, information and communication technology is both a cause and an effect of this trend, generating a continuously faster spinning spiral.


Implications: the need for flexible infrastructures

Systems and infrastructures

Information infrastructures are, of course, in a sense information systems. As such they share lots of properties. There is a general and fairly well-known argument that the use of (or at least the requirements for) an information system evolves over time. Boehm (198xx), for instance, includes this in his spiral model for systems development when explaining that systems development is really like aiming at a moving target. Requirements are neither complete nor stable. They are only gradually uncovered and they are dynamic. For larger systems it is also the case that it is impossible to foresee all relevant issues and problems; they are discovered as we go along, and the technology must be changed accordingly (Parnas and Clements 1986).

Hence, it is hardly news that requirements about use evolve. Still, mainstream systems development is biased towards early and fairly stable specifications (Pressman 1992). There is an alternative, much less widely applied, approach stressing prototyping, evolutionary development, learning and user involvement (Schuler and Namioka 1993). Systems development is viewed, in principle, as a mutual learning process where designers learn about the context of use and end-users about the technical possibilities (REF noe blautfisk greier). The first versions of a system are poor in quality compared to later ones. They are improved as users gain experience in using them and discover what is needed as well as how the technology may be adapted to improved ways of working. For users it is impossible to tell in advance what kind of technology will suit their needs best. User influence is an illusion unless it is based on experience of use.

The rationale for the importance of enabling a mutual learning process associated with systems development is, at least in part, an ideologically biased one (REF empirical studies). A less dogmatic approach, and one more immediately relevant to the development of information infrastructures, would be to inquire empirically into actual changes that have been implemented in information infrastructures in response to evolving patterns of use. This may be illustrated by a few of the changes of some OSI and Internet standards during their lifetime up till now.


OSI

OSI protocols have in fact been quite stable after their formal approval. This stability may to a large extent be explained by the fact that most OSI protocols did not diffuse. The OSI standard for e-mail, however, was approved in 1984. Four years later a new version came. It differed so much that a number of its features were incompatible with the earlier version (Rose 1992).

Internet

Internet has so far proved remarkably flexible, adaptable and extendable. It has undergone a substantial transformation — constantly changing, elaborating or rejecting its constituting standards — during its history. To keep track of all the changes, a special report giving all the latest updates is issued approximately quarterly (RFC 1995). These changes also take place during rapid diffusion. The very diffusion is an important reason for the need for change. The number of hosts connected to Internet grew from about 1000 to over 300.000 during 1985-1991 (Smarr and Catlett 1992). The Matrix Information and Directory Services estimated the number at about 10 million in July 1995 (McKinney 1995).

The need for an II to continue to change alongside its diffusion is recognised by the designers themselves, as expressed in an internal document describing the organisation of the Internet standardisation process: “From its conception, the Internet has been, and is expected to remain, an evolving system whose participants regularly factor new requirements and technology into its design and implementation” (RFC 1994, p. 6).

The IETF has launched a series of working groups which, after 4-5 years, are still struggling with different aspects of these problems. Some are due to new requirements stemming from new services or applications. Examples are asynchronous transmission mode, video and audio transmission, mobile computers, high speed networks (ATM) and financial transactions (safe credit card purchases). Other problems, for instance routing, addressing and net topology, are intrinsically linked to and fuelled by the diffusion itself of Internet (RFC 1995). As commercial actors have become involved, the “triple A-s” - authentication, authorization, and accounting - have appeared as important issues. Until recently, when the Internet was a non-commercial network, these issues were non-existing. For commercial service providers and network operators, they are absolutely necessary. This commercial turn requires that new features be added to a wide range of existing protocols.


The above illustrations clearly indicate that there is nothing to suggest that the pace of, or need for, flexibility to change Internet will cease, quite the contrary (Smarr and Catlett 1992; RFC 1994, 1995).

During the period between 1974 and 1978, four versions of the bottom-most layer of the Internet, that is, the IP protocol, were developed and tested out (Kahn 1994). For almost 15 years it has been practically stable. It forms in many respects the core of the Internet by providing the basic services which all others build upon (cf. our earlier description). An anticipated revision of IP is today the subject of “spirited discussions” (RFC 1995, p. 5). The discussions are heated because the stakes are high. The problems with the present version of IP are acknowledged to be so grave that Internet, in its present form, cannot evolve for more than an estimated 10 years without ceasing to be a globally inter-connected network (ibid., pp. 6-7; Eidnes 1994, p. 46). This situation is quite distinct from the more popular conception of an inevitable continued development of Internet. There is a whole set of serious and still unresolved problems. Among the more pressing ones, there is the problem that the “address space” will run out in a few years. The Internet is based on the fact that all nodes (computers, terminals and printers) are uniquely identified by an address. The size of this space is finite and determined by how one represents and assigns addresses. The problem with exhausting the current address space is serious as it will block any further diffusion of Internet, for the simple reason that there will not be any free addresses to assign to new nodes wishing to hook up. The difficulty is that if one switches to a completely different way of addressing, one cannot communicate with the “old” Internet. One is accordingly forced to find solutions which allow the “old” (that is, the present) version of IP to function alongside the new and as yet non-existing IP.
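
A back-of-the-envelope calculation (ours, not the book's) makes the scarcity argument concrete. An IPv4 address is a 32-bit number, which bounds the network at roughly four billion distinct nodes; the revision then under discussion (IPng, later IPv6) widens the field to 128 bits:

    # Why a fixed-width address field caps the growth of the network.
    # The host counts are the figures cited in this chapter; in practice
    # hierarchical assignment wastes much of the space, so exhaustion
    # threatens long before 2**32 hosts are actually connected.
    ipv4_space = 2 ** 32     # addresses representable in 32 bits
    ipng_space = 2 ** 128    # the proposed widening (IPng, later IPv6)

    print(f"IPv4 address space: {ipv4_space:,}")    # 4,294,967,296
    print(f"IPng address space: {ipng_space:.2e}")  # about 3.40e+38

    for year, hosts in [(1985, 1000), (1991, 300_000), (1995, 10_000_000)]:
        print(year, f"{hosts / ipv4_space:.5%} of the IPv4 space")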

This case is elaborated at greater length in chapter 10 (with a full account in (Monteiro 1998)). A core dilemma is how to balance the urge for making changes against the conservative influence of the installed base (see chapter 9). As a standard is implemented and put into widespread use, the effort of changing it increases accordingly, simply because any changes need to be propagated to a growing population of geographically and organisationally dispersed users, as captured by the notion of “network externalities” (Antonelli 1993; Callon 1994, p. 408) or the creation of lock-ins and self-reinforcing effects (Cowan 1992, pp. 282-283).

As the components of IIs are inter-connected, standardisation sometimes requires flexibility in the sense that to keep one component standardised and stable, others must change. Enabling network connections for mobile computers, for instance, requires new features to be added to IIs (Teraoka et al. 1994). These may be implemented either as extensions to the protocols at the network, transport or application level of the OSI model. If one wants to keep one layer stable, others must change.


Enabling flexibility to change

The primary principle for enabling flexibility is modularization, i.e. “black-boxing.” Modularization as a strategy for coping with design is employed by most engineers, not only those involved with II (Hård 1994). It could, however, be maintained that in the case of computer science (including the development of II) modularization is systematically supported through a large and expanding body of tools, computer language constructs and design methodologies. Elaborating this would carry us well beyond the scope of this book, but it is indeed possible to present the historical development of a core element of computer science, namely the evolution of programming languages, as very much concerned with exactly how to find constructs which support flexibility to change in the long run by pragmatically deciding how to restrict or discipline local flexibility. The interested reader might want to recast, say, the controversy over structured programming along these lines, that is, recognising the call for structured constructs as a means for enabling flexibility in the long run by sacrificing local flexibility of the kind the GOTO statement offers. (The GOTO statement offers great flexibility in how to link micro level modules together at the cost of diminishing the flexibility to change the modules later on.)

Decomposition and modularization are at the same time a basis for flexibility in II: flexibility presupposes modularization. The reason for this, at least on a conceptual level, is quite simple. The effect of black-boxing is that only the interface (the outside) of the box matters. The inside does not matter and may accordingly be changed without disturbing the full system, provided the interface looks the same. As long as a box is black, it is stable and hence standardised. In this sense standardisation is a precondition for flexibility.
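
A minimal sketch can make the point concrete (ours, with invented names; the transports echo the networks mentioned earlier in this chapter). The caller depends only on the stable interface of the box, so the inside can be replaced without disturbing the rest of the system:

    # Black-boxing: only the interface (the outside) of the box matters.
    from abc import ABC, abstractmethod

    class Transport(ABC):
        """The stable, standardised 'outside' of the black box."""
        @abstractmethod
        def send(self, payload: bytes) -> None: ...

    class TelephoneTransport(Transport):
        def send(self, payload: bytes) -> None:
            print(f"modem line: {len(payload)} bytes")

    class AtmTransport(Transport):
        def send(self, payload: bytes) -> None:
            print(f"ATM network: {len(payload)} bytes")

    def deliver_lab_report(report: bytes, via: Transport) -> None:
        # The caller never looks inside the box; replacing TelephoneTransport
        # with AtmTransport requires no change here, since the interface is stable.
        via.send(report)

    deliver_lab_report(b"HbA1c: 5.1%", AtmTransport())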

Two forms of this modularization need to be distinguished. Firstly, it may give rise to a layered or hierarchical system. OSI's seven-layered communication model provides a splendid example of this. Each layer is uniquely determined through its three interfaces: the services it offers to the layer immediately above, the services it uses in the layer immediately below and the services a pair of sender and receiver on the same level make use of.
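
As a toy illustration of such layering (again our own construction, not OSI code), each layer offers a service interface upward and consumes only the interface of the layer immediately below, so any layer can be reimplemented behind its interfaces:

    # Hierarchical modularization: each layer sees only the service
    # interface of the layer immediately below.
    class Physical:
        def transmit(self, bits: bytes) -> None:
            print("wire:", bits)

    class Network:
        def __init__(self, below: Physical) -> None:
            self.below = below
        def send_packet(self, payload: bytes) -> None:
            self.below.transmit(b"HDR" + payload)   # adds its own header

    class Application:
        def __init__(self, below: Network) -> None:
            self.below = below
        def send_message(self, text: str) -> None:
            self.below.send_packet(text.encode())

    # Swapping Physical for any implementation offering transmit()
    # leaves Network and Application untouched.
    Application(Network(Physical())).send_message("lab report")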

Secondly, modularization may avoid coupling or overlap between modules by keeping them “lean”. One way this modularization principle is applied is by defining mechanisms for adding new features without changing the existing ones. In the new version of IP, for instance, a new mechanism is introduced to make it easier to define new options (RFC 1995). Another example is the WorldWideWeb, which is currently both diffusing and changing very fast. This is possible, among other reasons, because it is based on a format defined such that an implementation simply may skip, or read as plain text, elements it does not understand. In this way, new features can be added so that old and new implementations can work together.
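
The “skip what you do not understand” rule can be sketched as follows (our reconstruction of the principle, not actual browser code): an old implementation that meets a tag introduced after it was written ignores the tag but keeps the text.

    import re

    KNOWN_TAGS = {"p", "b", "i"}   # all this (old) implementation understands

    def render(html: str) -> str:
        out = []
        # Split into tags and text, keeping both (the parentheses keep delimiters).
        for token in re.split(r"(<[^>]+>)", html):
            if token.startswith("<"):
                name = token.strip("</>").split()
                if name and name[0].lower() in KNOWN_TAGS:
                    out.append(token)      # a feature we know how to handle
                # unknown tag: silently skipped rather than treated as an error
            else:
                out.append(token)          # plain text always survives
        return "".join(out)

    # A newer document using a tag this implementation has never seen:
    print(render("<p>Lab report <blink>ready</blink></p>"))
    # -> <p>Lab report ready</p>   (old and new can still work together)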

Hampering flexibility to change

There are three principal ways in which the flexibility to change an II is hampered. Breaking either of the two forms of modularization enabling flexibility described above accounts for two of the three ways flexibility is hampered. To illustrate how lack of hierarchical modularization may hamper flexibility, consider the following instance of a violation found in OSI. In the application level standard for e-mail, the task of uniquely identifying a person is not kept apart from the conceptually different task of implementing the way a person is located. This hampers the flexibility because if an organisation changes the way its e-mail system locates a person (for instance, by changing its network provider), all the unique identifications of the persons belonging to the organisation have to be changed as well. Most OSI protocols are good illustrations of violations of the “lean-ness” principle. Although the OSI model is an excellent example of hierarchical modularization, each OSI protocol is so packed with features that they are hardly possible to implement and even harder to change (Rose 1992). The reason is simply that it is easier to change a small and simple component than a large and complex one. Internet protocols are much simpler, that is, leaner, than OSI ones, and accordingly easier to change.
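
The e-mail example is easiest to see against a sketch of the separation that, on this account, OSI failed to maintain (our illustration; the identifiers and addresses are invented): a stable identifier is kept apart from the mutable record of how its owner is currently located.

    # Keeping 'who someone is' apart from 'how to reach them' (our sketch).
    locator = {
        # stable identifier       ->  current route (free to change)
        "gp-4711@health.example":     "x400://old-operator/gw1",
    }

    def route_to(person_id: str) -> str:
        """Resolve a stable identifier to its current location."""
        return locator[person_id]

    # The organisation switches network provider: only the locator entry
    # changes; every identifier stored by others stays valid.
    locator["gp-4711@health.example"] = "x400://new-operator/gw7"
    print(route_to("gp-4711@health.example"))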

The third source of hampered flexibility is the diffusion of the II. As a standard is implemented and put into widespread use, the effort of changing it increases accordingly, simply because any changes need to be propagated to a growing population of geographically and organisationally dispersed users, as captured by the notion of “network externalities” (Antonelli 1993; Callon 1994, p. 408) or the creation of lock-ins and self-reinforcing effects (Cowan 1992, pp. 282-283).

At the moment, Internet appears to be approaching a state of irreversibility. Consider the development of a new version of IP described earlier. One reason for the difficulties in developing a new version of IP is the size of the installed base of IP protocols, which must be replaced while the network is running (cf. the rate of diffusion cited earlier). Another major difficulty stems from the inter-connectivity of standards: a large number of other technical components depend on IP. An internal report assesses the situation more precisely: “Many current IETF standards are affected by [the next version of] IP. At least 27 of the 51 full Internet Standards must be revised (...) along with at least 6 of the 20 Draft Standards and at least 25 of the 130 Proposed Standards.” (RFC 1995, p. 38).


The irreversibility of II has not only a technical basis. An II turns irreversible as it grows, due to the numbers of and relations between the actors, organisations and institutions involved. In the case of Internet, this is perhaps most evident in relation to new, commercial services promoted by organisations with different interests and backgrounds. The transition to the new version of IP will require coordinated actions from all of these parties. There is a risk that “everybody” will await “the others”, making it hard to be an early adopter. As the number of users as well as the types of users grow, reaching agreement on changes becomes more difficult (Steinberg 1995).

The interdependencies between standardization and flexibility

We have sketched, drawing on both conceptual arguments and empirical illustrations, in what sense an information infrastructure needs to be open-ended and flexible. However, the major difficulty may be to replace one working version with another working one, as change will introduce some kind of incompatibility which may cause a lock-in situation. As the components of information infrastructures are inter-connected, standardisation requires flexibility in the sense that to keep one component standardised and stable, others must change. Enabling network connections for mobile computers, for instance, requires new features to be added to IIs (Teraoka et al. 1994). These may be implemented either as extensions to the protocols at the network, transport or application level of the OSI model. If one wants to keep one layer stable, others must change. This aspect of information infrastructure we might call anticipated and alternating flexibility.

There are, as we have seen in earlier chapters, lots of Internet standards. These standards do not all fit into a tidy, monolithic form. Their inter-relationships are highly complex. Some are organised in a hierarchical fashion, as illustrated by the bulk of standards from OSI and Internet outlined above. Others are partly overlapping as, for instance, in the case where application specific or regional standards share some but not all features. Yet others are replaced, wholly or only in part, by newer standards, creating a genealogy of standards. This implies that the inter-dependencies of the totality of standards related to II form a complex network. The heterogeneity of II standards, the fact that one standard includes, encompasses or is intertwined with a number of others, is an important aspect of II. It has, we argue, serious implications for how the tension between standardisation and flexibility unfolds in II.


We have in this chapter focused on just one type of flexibility — flexibility to change. Another type of flexibility is flexibility in use. This means that the information infrastructure may be used in many different ways, serving different purposes as it is — without being changed. This is exactly what gives infrastructures their enabling function. Stressing this function makes use flexibility crucial. Flexibility of use and flexibility of change are linked in the sense that increased flexibility of use decreases the need for flexibility of change and vice versa. An aspect of both flexibility of use and change is to provide possibilities for adaptation to different local needs and practices, avoiding unacceptable constraints being imposed by some centralized authority. Flexibility of use is the topic of chapter 7, where we develop the notion of an “inscription” to talk about the way material (or non-material) artefacts (attempt to) shape behaviour. To do so, however, it is necessary to frame this within a broader understanding of the interplay between artefacts and human behaviour. To this end, we develop in the following chapter 6 a theoretical framework called actor-network theory (ANT).


CHAPTER 6 Socio-technical webs and actor network theory

Introduction

We now turn to the socio-technical nature of information infrastructures. As outlined in chapters 3 and 4 above, the development of an information infrastructure needs to be recognised as an ongoing socio-technical negotiation. An analysis of it accordingly presumes a suitable vehicle. This chapter contains a presentation of one such vehicle, namely actor-network theory. It is intended to pave the road for describing and analysing how issues of flexibility and standardisation outlined in chapters 4 and 5 unfold in relation to information infrastructure. This chapter accordingly functions as a stepping stone for the chapters that follow.

Technology and society: a brief outline

The relationship between technology and society may be conceptualised in many ways. We embrace the fairly widespread belief that IT is a, perhaps the, crucial factor as it simultaneously enables and amplifies the currently dominating trends for restructuring of organisations (Applegate 1994; Orlikowski 1991). The problem, however, is that this belief does not carry us very far; it is close to becoming a cliche. To be instructive in an inquiry concerning current organisational transformations, one has to supplement it with a grasp of the interplay between IT and organisations in more detail. We need to know more about how IT shapes, enables and constrains organisational changes. Two extreme end points of a continuum of alternatives are, on the one hand, technological determinism, holding that the development of technology follows its own logic and that the technology determines its use (Winner 1977), and, on the other hand, social reductionism or constructionism (Woolgar 1991) (which comes close to technological somnambulism (Pfaffenberger 1988; Winner 1977)), holding that society and its actors develop the technology they “want” and use it as they want, implying that technology in itself plays no role. A series of Braverman-inspired studies appeared in the late 70s and early 80s, biased towards a technological determinist position, arguing that the use of IT was but the latest way of promoting management's interests regarding deskilling and control of labour (Sandberg 1979). Later, a number of studies belonging close to the social constructivist end of the continuum were produced which focused on diversity of use among a group of users, displaying use far beyond what was anticipated by the designers (Henderson and Kyng 1991; Woolgar 1991b).

A more satisfactory account of the interwoven relationship between IT and organisational transformations is lacking. More specifically, we argue that we need to learn more about how this interplay works, not only that it exists. This implies that it is vital to be more concrete with respect to the specifics of the technology. As an information system (IS) consists of a large number of modules and inter-connections, it may be approached with a varying degree of granularity. We cannot indiscriminately refer to it as IS, IT or computer systems. Kling (1991, p. 356) characterises this lack of precision as a “convenient fiction” which “deletes nuances of technical differences”. It is accordingly less than prudent to discuss IS at the granularity of an artefact (Pfaffenberger 1988), the programming language (Orlikowski 1992), the overall architecture (Applegate 1994) or a medium for communication (Feldman 1987). To advance our understanding of the interplay it would be quite instructive to be as concrete about which aspects, modules or functions of an IS enable or constrain which organisational changes — without collapsing this into a deterministic account (Monteiro, Hanseth and Hatling 1994).

Today, the majority of scholars in the field adhere to an intermediate position somewhere between the two extreme positions outlined above. The majority of accounts end up with the very important, but all too crude, insight that “information technology has both restricting and enabling implications” (Orlikowski and Robey 1991, p. 154). This insight — that IT enables and constrains — is reached using a rich variety of theoretical frameworks including structuration theory (Orlikowski and Robey 1991), phenomenology (Boland and Greenberg 1992), hermeneutics (Klein and Lyytinen 1992) and Habermas' theory of communicative action (Gustavsen and Engelstad 1990).


Hence, there can hardly be said to be a lack of suggestions for suitable theoretical frameworks (Kling 1991; Monteiro and Hanseth 1995). We will, however, introduce yet another one, actor network theory, which we believe will bring us one step further towards a more detailed understanding of the relationships between information technology and its use (Akrich 1992; Akrich and Latour 1992; Callon 1991, 1994; Latour 1987). This choice is motivated by the way actor network theory, especially in the minimalistic variant we employ, offers a language for describing the many small, concrete technical and non-technical mechanisms which go into the building and use of information infrastructures. (We will particularly look at the negotiations of standards.) Actor network theory accordingly goes a long way in describing which actions are enabled and constrained, and how.

In this chapter we develop a minimalistic vocabulary of ANT intended to be a manageable, working way of talking about use and design of IT (see (Walsham 1997 ifip8.2) for a survey of the use of ANT in our field). As a frame of reference, we first sketch and compare alternatives to ANT. We do not want to be seen as too dogmatic. There certainly exist fruitful alternatives, and placing ANT in a wider context of related thinking is relevant.

The duality of technology

The problem of how to conceptualise and account for the relationship between, on the one hand, IT development and use and, on the other hand, organisational changes is complex — to say the least. A principal reason for the difficulty is the contingent, interwoven and dynamic nature of the relationship. There exists a truly overwhelming body of literature devoted to this problem. We will discuss a selection of contributions which are fairly widely cited and which we consider important. (Consult, for instance, (Coombs, Knights and Willmott 1992; Kling 1991; Orlikowski and Robey 1991; Walsham 1993) for a broader discussion.) Our purpose is to motivate a need to incorporate into such accounts a more thorough description and understanding of the minute, grey and seemingly technical properties of the technology and how these are translated into non-technical ones.

The selection of contributions we consider all acknowledge the need to incorporate, in one way or another, that subjects interpret, appropriate and establish a social construction of reality (Galliers 1992; Kling 1991; Orlikowski 1991; Orlikowski and Robey 1991; Smithson, Baskerville and Ngwenyama 1994; Walsham 1993). This alone enables us to avoid simple-minded, deterministic accounts. The potential problem with a subjectivist stance is how to avoid the possibility that, say, an IS could be interpreted and appropriated completely freely, that one interpretation would be just as reasonable as any other. This position obviously neglects the constraining effects the IS has on the social process of interpretation (Akrich 1992; Bijker 1993; Orlikowski and Robey 1991). In other words, it is absolutely necessary to recognise the “enabling and constraining” abilities of IT. A particularly skilful and appealing elaboration of this insight is the work done by Orlikowski, Walsham and others building on Giddens' structuration theory (Orlikowski 1991, 1992; Orlikowski and Robey 1991; Walsham 1993).

Despite the fact that these accounts, in our view, are among the most convincing conceptualisations, they have certain weaknesses. These weaknesses have implications for the way we will later on approach the question of the relationship between information infrastructure and new organisational forms. Our principal objection to conceptualisations like (Orlikowski 1991, 1992; Orlikowski and Robey 1991; Walsham 1993) is that they are not fine-grained enough with respect to the technology to form an appropriate basis for understanding or to really inform design. Before substantiating this claim, it should be noted that the studies do underline an important point, namely that “information technology has both restricting and enabling implications” (Orlikowski and Robey 1991, p. 154). We acknowledge this, but are convinced that it is necessary to push further: to describe in some detail how and where IT restricts and enables action. At the same time, we prepare the ground for the alternative framework of ANT, describing the social construction of technology. To this end, we briefly sketch the position using structuration theory.

The aim of structuration theory is to account for the interplay between human action and social structures. The notion of “structure” is abstract; it need not have a material basis. The two key elements of structuration theory according to Walsham (1993, p. 68) are: (i) the manner in which the two levels of action and structure are captured through the duality of structure, and (ii) the identification of modalities as the vehicle which links the two levels. One speaks of the duality of structure because structure constrains actions but, at the same time, structures are produced (or more precisely: reproduced and transformed) through human action. This mutual interplay is mediated through a linking device called modalities. As modalities are what link action and structure, and their relationship is mutual, it follows that these modalities operate both ways.

There are three modalities: interpretative scheme, facility and norm. An interpretative scheme deals with how agents understand and how this understanding is exhibited. It denotes the shared stock of knowledge which humans draw upon when interpreting situations; it enables shared meaning and hence communication. It may also be the reason why communication processes are inhibited. In applying this framework to IT, Orlikowski and Robey (1991, p. 155) note that “software technology conditions certain social practices, and through its use the meanings embodied in the technology are themselves reinforced”. The second modality, facility, refers to the mobilisation of resources of domination, that is, it comprises the media through which power is exercised. IT, more specifically, “constitutes a system of domination” (ibid., p. 155). The third modality, norms, guides action through the mobilisation of sanctions. As a result, norms define the legitimacy of interaction. They are created through continuous use of sanctions. The way this works for IT is that IT “codifies” and “conveys” norms (ibid., pp. 155-156).

Given this admittedly brief outline of the use of structuration theory for grasping IT, we will proceed by documenting in some detail how these accounts fail to pay proper attention to the specifics of IT. Orlikowski and Robey (1991, p. 160) point out how “tools, languages, and methodologies” constrain the design process. The question is whether this lumps too much together, whether this is a satisfactory level of precision with regard to the specifics of IT. There is, after all, quite a number of empirical studies which document how, say, a methodology fails to constrain design practice to any extent; it is almost never followed (Curtis, Krasner and Iscoe 1988; Ciborra and Lanzara 1994). Referring to Orlikowski (1991), Walsham (1993, p. 67) notes that “the ways in which action and structure were linked are only briefly outlined”. This is hardly an overstatement as (Orlikowski 1991), as opposed to what one might expect from a study examining “in detail the world of systems development” (ibid., p. 10), maintains only that the CASE tool — which is never described, despite the fact that such tools exhibit a substantial degree of diversity (Vessey, Jarvenpaa and Tractinsky 1992) — was the “most visible manifestation” of a strategy to “streamline” the process (Orlikowski 1991, p. 14). (Orlikowski 1992) suffers from exactly the same problem: organisational issues are discussed based on the introduction of Lotus Notes, but we are never told in any detail, beyond references to “the technology” or “Notes”, what the functions of the applications are. This is particularly upsetting considering the fact that Lotus Notes is a versatile, flexible application-level programming environment.

In instructive, in-depth case studies, Walsham (1993) does indeed follow up his criticism cited above by describing in more detail than Orlikowski (1991, 1992) how the modalities operate. But this increased level of precision does not apply to the specifics of the technology. The typical level of granularity is to discuss the issue of IBM vs. non-IBM hardware (ibid., pp. 92-94), centralised vs. decentralised systems architecture (ibid., p. 105) or top-down, hierarchical control vs. user control (ibid., pp. 136-138).

Not distinguishing more closely between different parts and variants of the elements of the IS is an instance of the aforementioned “convenient fiction” (Kling 1991, p. 356). An unintended consequence of not being fine-grained enough is that it removes social responsibility from the designers (ibid., p. 343). It removes social responsibility in the sense that a given designer in a given organisation obliged to use, say, a CASE tool may hold that it is irrelevant how she uses the tool, since it is still a tool embodying a certain rationale beyond her control.

What is required, as already mentioned, is a more detailed and fine-grained analysis of the many mechanisms, some technical and some not, which are employed in shaping social action. We are not claiming that structuration theory cannot deliver this (cf. Walsham 1993, p. 67). But we are suggesting that most studies conducted so far (Korpela 1994; Orlikowski 1991, 1992; Orlikowski and Robey 1991; Walsham 1993) lack a description, at a satisfactory level of precision, of how specific elements and functions of an IS relate to organisational issues. We also suggest that the framework provided by actor-network theory (ANT) is more promising in this regard. We proceed to give an outline of the basics of ANT based on (Akrich 1992; Akrich and Latour 1992; Callon 1991; Latour 1987) before discussing what distinguishes it from the position outlined above.

Actor network theory - a minimalistic vocabulary

ANT was born out of ongoing efforts within the field called social studies of science and technology. It was not intended to conceptualise information technology as such, and certainly not the ongoing design of a technology like information infrastructure, that is, a technology in the process of being developed.

The field of social studies of technology in general and ANT in particular are evolving rapidly (REFS). It is a task in itself to keep up with the latest developments. Our aim is modest: to extract a small, manageable and useful vocabulary suited to an adequate understanding of the challenges of developing information infrastructure. To this end, we simplify and concentrate on only those aspects of ANT most relevant to our endeavour.

What is an actor network, anyway?

The term “actor network”, the A and N in ANT, is not very illuminating. It is hardly obvious what the term implies. The idea, however, is fairly simple. When going about doing your business — driving your car or writing a document using a word-processor — there are a lot of things that influence how you do it. For instance, when driving a car, you are influenced by traffic regulations, prior driving experience and the car’s manoeuvring abilities; the use of a word-processor is influenced by earlier experience using it, the functionality of the word-processor and so forth. All of these factors are related or connected to how you act. You do not go about doing your business in a total vacuum but rather under the influence of a wide range of surrounding factors. The act you are carrying out and all of these influencing factors should be considered together. This is exactly what the term actor network accomplishes. An actor network, then, is the act linked together with all of its influencing factors (which again are linked), producing a network.1

1. One way of putting this is that the actor network spells out the contents of the context or situatedness of an action (Suchman 1987).

An actor network consists of and links together both technical and non-technical elements. Not only the car’s motor capacity, but also your driving training, influences your driving. Hence, ANT talks about the heterogeneous nature of actor networks. In line with its semiotic origin, actor network theory grants all entities of such a heterogeneous network the same explanatory status, as “semiotics is the study of order building (...) and may be applied to settings, machines, bodies, and programming languages as well as text (...) [because] semiotics is not limited to signs” (Akrich and Latour 1992, p. 259). It might perhaps seem a radical move to grant artefacts the same explanatory status as human actors: does this not reduce human actors to mere objects and social science to natural science? We intend to bracket this rather dogmatic issue. Interested readers should consult (Callon and Latour 1992; Collins and Yearley 1992). For our purposes, what is important is that this move has the potential for increasing the level of detail and precision. More specifically, allowing oneself not to distinguish a priori between social and technical elements of a socio-technical web encourages a detailed description of the concrete mechanisms at work which glue the network together — without being distracted by the means, technical or non-technical, of actually achieving this. If we are really interested in discovering the factors influencing the way you drive, we should focus on what turns out to be actually influential, be it technical (the motor’s capacity) or non-technical (your training).

Inscription and translation

Two concepts from actor network theory are of particular relevance for our inquiry: inscription (Akrich 1992; Akrich and Latour 1992) and translation (Callon 1991, 1994; Latour 1987). The notion of inscription refers to the way technical artefacts embody patterns of use: “Technical objects thus simultaneously embody and measure a set of relations between heterogeneous elements” (Akrich 1992, p. 205). The term inscription might sound somewhat deterministic by suggesting that action is inscribed, grafted or hard-wired into an artefact. This, however, is a misinterpretation. Balancing the tight-rope between, on the one hand, an objectivistic stance where artefacts determine the use and, on the other hand, a subjectivistic stance holding that an artefact is always interpreted and appropriated flexibly, the notion of an inscription may be used to describe how concrete anticipations and restrictions of future patterns of use are involved in the development and use of a technology. Akrich (1992, p. 208, emphasis added) explains the notion of inscription in the following way:

Designers thus define actors with specific tastes, competencies, motives, aspirations, political prejudices, and the rest, and they assume that morality, technology, science, and economy will evolve in particular ways. A large part of the work of innovators is that of “inscribing” this vision of (or prediction about) the world in the technical content of the new object. (...) The technical realization of the innovator’s beliefs about the relationship between an object and its surrounding actors is thus an attempt to predetermine the settings that users are asked to imagine (...).

Stability and social order, according to actor network theory, are continually negotiated as a social process of aligning interests. As actors from the outset have a diverse set of interests, stability rests crucially on the ability to translate, that is, re-interpret, re-present or appropriate, others’ interests to one’s own. In other words, with a translation one and the same interest or anticipation may be presented in different ways, thereby mobilising broader support. A translation presupposes a medium or a “material into which it is inscribed”, that is, translations are “embodied in texts, machines, bodily skills [which] become their support, their more or less faithful executive” (Callon 1991, p. 143).

In ANT terms, design is translation: “users’” and others’ interests may, according to typical ideal models, be translated into specific “needs”, the specific needs are further translated into more general and unified needs, so that these needs might be translated into one and the same solution. When the solution (system) is running, it will be adopted by the users by translating the system into the context of their specific work tasks and situations.

In such a translation, or design, process, the designer works out a scenario for how the system will be used. This scenario is inscribed into the system. The inscription includes programs of action for the users, and it defines roles to be played by users and the system. In doing this she is also making implicit or explicit assumptions about what competencies are required of the users as well as of the system. In ANT terminology, she delegates roles and competencies to the components of the socio-technical network, including users as well as the components of the system (Latour 1991). By inscribing programs of action into a piece of technology, the technology becomes an actor2 imposing its inscribed program of action on its users.

2. Or “actant” as would be the more precise term in actor network theory (Akrich and Latour 1992).

The inscribed patterns of use may not succeed because the actual use deviates from them. Rather than following its assigned program of action, a user may use the system in an unanticipated way; she may follow an anti-program (Latour 1991). When studying the use of technical artefacts, one necessarily shifts back and forth “between the designer’s projected user and the real user” in order to describe this dynamic negotiation process of design (Akrich 1992, p. 209).

Some technologies inscribe weak/flexible programs of action while others inscribe strong/inflexible programs. Examples of the former are tools, the hammer being a classic example, while the assembly line of Chaplin’s “Modern Times” is a standard illustration of the latter.

Inscriptions are given a concrete content because they represent interests inscribed into a material. The flexibility of inscriptions varies: some structure the pattern of use strongly, others weakly. The strength of inscriptions, whether they must be followed or can be avoided, depends on the irreversibility of the actor-network they are inscribed into. It is never possible to know beforehand, but by studying the sequence of attempted inscriptions we learn more about exactly how and which inscriptions were needed to achieve a given aim. To exemplify, consider what it takes to establish a specific work routine. One could, for instance, try to inscribe the routine into required skills through training. Or, if this inscription were too weak, one could inscribe the routine into a textual description in the form of manuals. Or, if this were still too weak, one could inscribe the work routine by supporting it with an information system. Hence, through a process of translation, an attempt may be made to inscribe one and the same work routine into components of different materials, components being linked together into a socio-technical network. By adding and superimposing these inscriptions, they accumulate strength.

Latour (1991) provides an illuminating illustration of this aspect of actor network theory. It is an example intended for pedagogic purposes. Hotels, from the point of view of management, want to ensure that the guests leave their keys at the front desk when leaving. The way this objective may be accomplished, according to actor network theory, is to inscribe the desired pattern of behaviour into an actor-network. The question then becomes how to inscribe it and into what. This is impossible to know for sure beforehand, so management had to make a sequence of trials to test the strength of different inscriptions. In Latour’s story, management first tried to inscribe it into an artifact in the form of a sign behind the counter requesting all guests to return the key when leaving. This inscription, however, was not strong enough. Then they tried having a manual door-keeper — with the same result. Management then inscribed it into a key with a metal knob of some weight. By stepwise increasing the weight of the knob, the desired behaviour was finally achieved. Hence, through a succession of translations, the hotel’s interests were finally inscribed into a network strong enough to impose the desired behaviour on the guests.

Four key aspects of inscriptions

There are four aspects of the notions of inscription and translation which are particularly relevant and which we emphasise in our study: (i) the identification of explicit anticipations (or scenarios) of use held by the various actors during design (that is, standardisation), (ii) how these anticipations are translated and inscribed into the standards (that is, the materials of the inscriptions), (iii) who inscribes them, and (iv) the strength of these inscriptions, that is, the effort it takes to oppose or work around them.

Irreversibility

A key feature of information infrastructure, outlined in chapter 5, is the difficulty of making changes. Using and extending the core ANT vocabulary developed above, this vital aspect may be lifted forward to occupy centre stage. In ways to be elaborated in greater detail in subsequent chapters, an information infrastructure is an aligned actor network. The constitutive elements of an information infrastructure — the collection of standards and protocols, user expectations and experience, bureaucratic procedures for passing standards — inscribe patterns of use. But is it not possible to express this more precisely, to somehow “measure” the net effects (a dangerous expression, but let it pass) to which these superimposed inscriptions actually succeed in shaping the pattern of use, to “measure” the strength of an inscription?

Callon’s concept of the (possible) irreversibility of an aligned network captures this accumulated resistance against change quite nicely (Callon 1991, 1992, 1994). It describes how translations between actor-networks are made durable, how they can resist assaults from competing translations. Callon (1991, p. 159) states that the degree of irreversibility depends on (i) the extent to which it is subsequently impossible to go back to a point where that translation was only one amongst others and (ii) the extent to which it shapes and determines subsequent translations.

The notions which, at the present stage in our analysis, do most adequate justice to the accumulating resistance against change and the tight inter-connection between different parts of an II are alignment, irreversibility and, accordingly, momentum (Hughes and Callon each underline the similarities with the other; see Callon 1987, p. 101; Hughes 1994, p. 102). Despite their ability to account for the anticipated and interleaved flexibility of an II, these notions down-play this phenomenon to the point of disappearance. To make this point more precise, consider the notion of momentum which Hughes (1994) discusses as a possible candidate for conceptualising the development of infrastructure technologies.

The crucial difference between Hughes and Callon is connected with how the dynamics of momentum unfold. Hughes describes momentum as very much a self-reinforcing process gaining force as the technical system grows “larger and more complex” (ibid., p. 108). It is reasonable to take the rate of diffusion of Internet during recent years as an indication of its considerable momentum. Major changes which seriously interfere with the momentum are, according to Hughes, only conceivable in extraordinary instances: “Only a historic event of large proportions could deflect or break the momentum [of the example he refers to], the Great Depression being a case in point” (ibid., p. 108) or, in a different example, the “oil crises” (ibid., p. 112). This, however, is not the case with II. As illustrated by the issue of the next version of IP in Internet, radical changes are regularly required and are to a certain extent anticipated. Momentum and irreversibility are accordingly contradictory aspects of II in the sense that if momentum results in actual — not only potential — irreversibility, then changes are impossible and the II will collapse. Whether the proposed changes in Internet are adequate and manageable remains to be seen.

Actor networks meet structuration theory

Having given an outline of ANT, let us turn to what is achieved vis-à-vis structuration theory. The principal improvement, as we see it, is the ability ANT provides to be more specific and concrete with respect to the functions of an IS. It is not the case, in our view, that ANT is an improvement over structuration theory in every respect. We argue only that it is so on the issue of being specific about the technology. For instance, we consider the important issue of the structuring abilities of institutions to be better framed within structuration theory than within ANT. Let us explain why we consider it so. We first compare the two theories on a general level, partly relying on pedagogic examples. Then we attempt to reinterpret (Orlikowski 1991) in terms of ANT.

Recall from the previous section that inscriptions are given a concrete content because they represent interests inscribed into a material, that their flexibility varies — some structure the pattern of use strongly, others quite weakly — and that their power, whether they must be followed or can be avoided, depends on the irreversibility of the actor-network they are inscribed into. Recall also the example of establishing a specific work routine by inscribing it, with increasing strength, into training, into manuals or into an IS.

ANT’s systematic blurring of the distinction between the technical and the non-technical extends beyond the duality of Orlikowski and Robey (1991) and Walsham (1993). The whole idea is to treat situations as essentially equal regardless of the means; the objective is still the same. Within ANT, technology receives exactly the same (explanatory!) status as human actors; the distinction between human and non-human actors is systematically removed. ANT takes very seriously the fact that, in a number of situations, technical artefacts in practice play the same role as human actors: the glue which keeps a social order in place is a heterogeneous network of human and non-human actors. A theoretical framework which makes an a priori distinction between the two is less likely to manage to keep its focus on the aim of a social arrangement regardless of whether the means for achieving it are technical or non-technical. The consequence of this is that ANT supports an inquiry which traces the social process of negotiating, redefining and appropriating interests back and forth between an articulate, explicit form and a form in which they are inscribed within a technical artefact. With reference to the small example above, the inscriptions attempting to establish the work routine were inscribed in both technical and non-technical materials. They provide a collection of inscriptions — all aimed at achieving the same effect — with varying power. In any given situation, one would stack the necessary number of inscriptions which together seem to do the job.

We believe that the empirical material presented by Orlikowski (1991) may, at least partially, be reinterpreted in light of ANT.3 Her primary example is the development and use of a CASE tool in an organisation she calls SCC. The control (and productivity) interests of management are inscribed into the tool. The inscriptions are so strong that the consultants do as intended, down to a rather detailed level. The only exceptions reported are some senior consultants saying that in some rare instances they do not do as the tool requires.

3. Doing this in any detail would, of course, demand access to the empirical material beyond the form in which it is presented in the article. We have no such access.

What is missing, then, in comparison with ANT, is a portrayal of this as a stepwise alignment rather than the all-in-one account of (ibid.). In ANT terms, management’s control interests are inscribed into the CASE tool in the form of detailed inscriptions of the consultants’ behaviour. The inscriptions are very strong in the sense that there is hardly any room for interpretive flexibility. The CASE tool is the result of a long process where management’s control and productivity interests have been translated into a larger heterogeneous actor-network encompassing career paths, work guidelines, methodologies and, finally, the CASE tool. Together these elements form an actor-network into which the consultants’ behaviour is inscribed. Just as in Latour’s example presented above, the inscriptions become stronger as they are inscribed into a larger network. This network is developed through successive steps where inscriptions are tested out and improved until the desired outcome is reached. It is only as the result of a long sequence of testing and superimposing of inscriptions that one ends up in situations like the one presented by Orlikowski (1991). If one succeeds in aligning the whole actor-network, the desired behaviour is established. Analytically, it follows from this that if any one (or a few) of the elements of such an actor-network is not aligned, then the behaviour will not be as presented by Orlikowski (ibid.). Empirically, we know that more often than not the result is different from that of Orlikowski’s case (Curtis, Krasner and Iscoe 1988; Vessey, Jarvenpaa and Tractinsky 1992).

We end this section by merely pointing out another issue we find problematic with (Orlikowski 1992). She states that “[t]he greater the spatial and temporal distance between the construction of a technology and its application, the greater the likelihood that the technology will be interpreted and used with little flexibility. Where technology developers consult with or involve future users in the construction and trial stages of a technology, there is an increased likelihood that it will be interpreted and used more flexibly” (ibid., p. 421). We agree on the importance of user participation in design. According to ANT, however, the interpretive flexibility of a technology may increase as the distance between designers and users increases. Interpretive flexibility means unintended use, i.e. using the technology differently from what is inscribed into it. When the designers are close to the users, the network into which the intended user behaviour is inscribed will be stronger, and it will accordingly be harder for the users not to follow it. An important aspect of ANT is its potential to account for how restricted interpretative flexibility across great distances can be obtained (Law 1986).

The chapters which follow build upon this minimalistic vocabulary of ANT. The notion of an actor-network will implicitly be assumed throughout. The notion of an inscription is closely linked to the (potential lack of) flexibility of an information infrastructure and will be elaborated in chapter 7. It also plays a central role when describing and critically assessing the prevailing approaches to the design of information infrastructures in chapter 8. The notion of irreversibility and the strength of an inscription is the subject matter of chapters 9 (a conceptual analysis) and 10 (an empirical case).

CHAPTER 7 Inscribing behaviour

Introduction

The two key notions of ANT from the previous chapter are inscription (which presupposes translation and program of action) and irreversibility (presupposing alignment). In this chapter we focus on the notion of inscription. Four aspects of inscriptions were identified in the previous chapter: the scenario, the material, who inscribes and the strength of an inscription. Inscriptions invite us to talk about how the various kinds of materials — artifacts, work routines, legal documents, prevailing norms and habits, written manuals, institutional and organizational arrangements and procedures — attempt to inscribe patterns of use (which may or may not succeed). Inscribing patterns of use is a way to confine the flexibility of use of an information infrastructure (see chapter 5).

So much for the underlying ideas of ANT and our minimal vocabulary. It remains to show how ANT may be employed to tease out important characteristics relevant to the design of information infrastructure. This chapter focuses on the notion of an inscription by illustrating what inscriptions in information infrastructures look like. They have many forms, quite a few of which are not easily spotted. We are accordingly particularly concerned with uncovering the different materials for inscriptions, that is, how and where patterns of use are inscribed. But first it is necessary to study how interests get translated, that is, how they are inscribed into one material before getting re-presented by inscribing them in a different material.

Focusing on inscriptions also means focusing on flexibility, as the inscriptions of a technology are the links between the technology itself and its use. There are two aspects of the flexibility of a technology: use flexibility and change flexibility. Use flexibility means that the technology may be used in many different ways without changing the technology as such; this is its enabling character. In particular, it means that the technology may be used differently from what was intended, i.e. without following the inscribed program of action. Change flexibility means that the technology is more or less easy to change in itself when changed requirements go beyond its use flexibility, i.e. that it can be changed according to new programs of action.

All empirical material used in this chapter is from the establishment of an informa-tion infrastructure for EDI in health care in Norway.

Translations and design

Lab reports

The development of electronic information exchange between health care institutions in Norway started when a private lab, Dr. Fürst’s Medisinske Laboratorium in Oslo, developed a system for lab report transmission to general practitioners in 1987. The interest of Dr. Fürst’s laboratory was simply to make a profit by attracting new customers. The system was based on the assumption that it would help general practitioners save much of the time otherwise spent on manually registering lab reports, and that the general practitioners would find this attractive. Each general practitioner receives on average approximately 20 reports a day, which take quite some time to register manually in their medical record systems.

Fürst translated its own interests as well as those of the general practitioners (GPs) into a system. The system was very simple. It was implemented on top of a terminal emulator package, and the development time was only three weeks for one person.1 Further, Fürst enrolled the vendors of medical record systems for GPs into their network as well by paying them for adapting their systems to Fürst’s module for receiving lab reports. The interests of the GPs were further translated, to enrol them into the network, by giving them for free the modems needed to communicate electronically with Fürst.

1. Interview with Fiskerud (1996).

Lab orders

Labs also have an economic interest in receiving orders electronically, as they could save a lot of labour-intensive registration work. The ordering general practitioners, however, do not enjoy the same kind of immediate and tangible advantage. In order to enrol the general practitioners, electronic lab communication needs to translate the general practitioners’ interests into the system. So far, this has not been resolved. Several attempts are being made, some of which will be presented here.

A crucial aspect of ordering tests is to ensure that an order and the specimen it belongs to are not mixed up with others. A procedure followed by some general practitioners and labs today is the following: each copy of the paper order is given a unique number. This number is printed in two different places on the form, including on one adhesive label that is to be removed from the order and glued onto the specimen container. In addition, the paper order is connected to the specimen container. Reproducing this level of security in the scenario where the order is transmitted electronically has turned out to be rather challenging, and will certainly include the design of specific technological as well as organisational arrangements. The design of a solution for lab orders invariably involves the alignment of the complete heterogeneous network of associated work routines as well as computer systems.

A possible solution that has been discussed is based on a collection of label-producing machines (bar code printers), label-reading machines, manual routines and new computer applications. Each time an order is filled out, a bar code label will be printed by the general practitioner’s system and subsequently glued to the specimen container. The unique number represented by the bar code is also part of the specimen identifier in the order message. When the lab receives a specimen, a machine must read the bar code on the label and ensure that the specimen is attached to its proper order (already received electronically by the lab). The standardised message will inscribe the work routines: for instance, the kind of information necessary for carrying out the control routines depends on how these routines are defined. However, as the general practitioners do not have any obvious advantages from electronic ordering, it is reasonable to expect that they are not interested in investing in bar code printers and the other technological components this proposal demands.
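
To make the matching step concrete, consider the following small sketch. It is not code from any of the systems discussed; all names and identifier formats are invented. It merely shows how a lab application might reunite a scanned specimen with its electronically received order via the shared bar-coded number.

```python
# Illustrative sketch only (hypothetical names and identifiers): matching
# incoming specimens to electronically received orders via the unique
# number that is both in the order message and printed as a bar code.

class OrderRegistry:
    def __init__(self):
        self.orders = {}  # specimen identifier -> order message

    def receive_order(self, specimen_id: str, order: dict) -> None:
        """Store an order received electronically, keyed by the unique
        number also printed on the specimen label."""
        self.orders[specimen_id] = order

    def receive_specimen(self, scanned_barcode: str) -> dict:
        """Called when the bar code reader scans an arriving container;
        fails loudly if no matching order is on file, mirroring the
        manual safeguard against mixing specimens and orders."""
        try:
            return self.orders.pop(scanned_barcode)
        except KeyError:
            raise LookupError(f"no order on file for specimen {scanned_barcode}")

registry = OrderRegistry()
registry.receive_order("NO-1996-004711", {"tests": ["Hb", "CRP"]})
order = registry.receive_specimen("NO-1996-004711")  # specimen and order reunited
```

Note that even this trivial mechanism presupposes the aligned network described above: bar code printers at the GPs, readers at the lab, and routines for handling the failure case.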

During 1996, two different solutions have been tested out in Norway, each involving one lab and just a few general practitioners. One of them is based on what is called two-dimensional bar codes: the complete order information is represented by a two-dimensional bar code and printed on a label glued onto the specimen container. The other solution is based on electronic transmission of the order using the standard European EDIFACT message, while a paper form is also printed and sent together with the specimen as in the current practice.

When orders are sent electronically, some new possibilities and advantages open up for the general practitioners as well. One idea, which Dr. Fürst’s lab wants to implement, is to take advantage of the possibility of ordering new tests of a specimen when the results of those ordered first are available. Usually a general practitioner orders several tests of the same specimen. Which combination of tests is most interesting depends on the results of the analysis. Accordingly, it would be useful to order some tests, study the results and then decide which additional tests are relevant. When both orders and results are transmitted electronically, this possibility may become reality. It is, however, easier to implement this functionality using the on-line connection in Dr. Fürst’s laboratory’s original, non-standardised solution. Such new services might be attractive to general practitioners and enable labs to enrol them into the networks necessary for making electronic ordering work. However, the programs of action inscribed into the standardised solutions based on EDIFACT and the ISO e-mail standards make it impossible to implement such services within that framework. Dr. Fürst’s laboratory is interested in experimenting with communication technology to develop new or improved services for the general practitioners and their patients. They have so far judged EDIFACT technology too complex and inflexible and intend to wait until simpler and more flexible technology is accepted.2 Web technology might fulfil the technical requirements for such technology, but this remains to be seen.

2. Interview with IT director Sten Tore Fiskerud, Feb. 1996.

This example illustrates the wide range of different interests involved, and how some of them might be translated into aligned networks representing interesting and promising solutions, while no actor has succeeded in building an aligned actor network strong enough to put the technology into real use.

Prescriptions

The idea of electronic transmission of prescriptions grew out of a feasibility study as part of Statskonsult’s Infrastructure programme. This area was also identified as an interesting one in Telenor’s Telemedicine project (Statskonsult 1992). Establishing an infrastructure for electronic exchange of prescriptions requires the alignment of a wide range of different interests, including general practitioners, pharmacies, patients, the government and social insurance offices.

Unlike the case of lab messages, little has up till now been done at the international level regarding electronic prescriptions. The effort in Norway we report on accordingly represents an early attempt to standardise prescription messages. As will become evident further below, the institutional arrangements of the standardisation process, which link national and international efforts tightly, have resulted in a proposed international standard for prescriptions heavily influenced by the Norwegian project.

The overall objectives of Statskonsult’s Infrastructure programme were to improve productivity, service quality and cost containment in the public sector. Spending on pharmaceuticals is high, and accordingly an important area for cost containment. In addition, the health care authorities wanted enhanced control of patients’ use of drugs as well as of physicians’ prescription practices concerning habit-forming drugs.

The interests of the pharmacies were primarily improved logistics and the elimination of unnecessary retyping of information (Statskonsult 1992). By integrating the system receiving prescriptions with the existing system for electronic ordering of drugs, the pharmacies would essentially have a just-in-time production scheme established. In addition, the pharmacies viewed it as an opportunity for improving the quality of service to their customers. A survey had documented that as many as 80% of their customers were favourable to reducing waiting time at the pharmacies as a result of electronic transmission of prescriptions (cited in Pedersen 1996).

As part of the Infrastructure programme, KITH worked out a preliminary specification of an EDIFACT message for prescriptions (KITH 1992). The pharmacies also wanted to include information about so-called bonus arrangements (Norwegian: frikort) in this message. Certain categories of patients get a bonus (of up to 100%) on their drugs. This bonus is subsidised by the health insurance authorities on the basis of special reports from the pharmacies.

The interests of general practitioners in the project had different sources. Electronic prescriptions would eliminate the retyping of a lot of information which was already stored in the medical record system. It would also greatly support the reports the general practitioners send to the health insurance authorities, some of which are the basis for their payment. More importantly, however, electronic prescriptions were viewed as an element of the general practitioners’ association’s ongoing programme on quality assurance (cited in Pedersen 1996). Electronic prescriptions allow automatic cross-checking to be performed (for instance, that all fields are filled in properly). The general practitioners were also attracted by the prospect of getting access to the pharmacies’ drug item list. This list is provided to the pharmacies by their provider of drugs through the pharmacies’ application supplier (NAF-Data). The list contains information useful also for the general practitioners, for instance about prices and synonymous drugs. It is updated on a monthly basis. As we will spell out in more detail in the last section of this chapter, this list turned out to become the source of much controversy.

A key challenge in the prescription project was to find a way to align the interests of the involved actors, most importantly the pharmacies and the general practitioners. According to actor network theory, this takes place by translating these interests and inscribing them into a material. The drug item list plays the role of such a material. Today, the list of drugs accessible to the general practitioners’ medical record systems is either manually entered and updated or provided through the vendors of medical record systems at a substantial cost.

The failure of building standardized infrastructure

The development and adoption of the network designed by Fürst was a smooth and effective process, and the solution has been very useful. The same was more or less the case with the copied solutions installed by other labs. However, later efforts aiming at developing similar solutions for other areas have failed. One important explanatory factor, in terms of actor network theory, is that the prime actors have not managed to translate the interests of those to whom they have delegated roles in their design into that very design, in a way that makes the whole actor network aligned. In particular, they have failed in translating the rather general design of their EDI systems into the working situations of their anticipated users.

Another important factor is that the general, universal solutions arrived at generate a very large and complex actor network, so complex that it cannot be aligned in any proper way in a changing world.

Inscriptions and materials

It is close to a cliché to maintain that technology, including information infrastructure, is never neutral, that certain patterns of use are encouraged while others are discouraged. By studying the material of inscriptions, this may be pushed further by specifying more precisely how and where attempts are made to inscribe behaviour. The variety of the materials of inscriptions is indicated in a top-down fashion by illustrating inscriptions at a high, organisational level, at an architectural level, in the messages, as well as in the tiny, grey details contained within a specific field in a message.

Inscribed in the bureaucratic organisation

The Norwegian efforts at developing communication between labs and ordering hospitals and general practitioners were quickly and tightly aligned with international efforts, especially those of CEN, the European counterpart of ISO. Translating what started as a relatively down-to-earth, practical endeavour in Norway into a European arena inscribed unintended behaviour and consequences. From the very beginning, there was a fairly close dialogue between the Norwegian designers of lab communication and their users. When the design was translated into a European effort to promote the European Union’s visions for an open market, the users were unable to follow it. The problem had been moved to an arena with many and unknown bureaucratic rules of the game, with circulation of technical descriptions rather than practical problems, and heaps of paper rather than operative solutions. It is time to take a closer look at the inscriptions of the EDIFACT-based bureaucracy. To be involved in the CEN standardization, the ongoing Norwegian effort had to be translated and aligned with this bureaucracy.

EDIFACT is not a self-contained piece of technology. It is a heterogeneous actor-network which includes: a syntax for defining data structures; tools like converters and databases for definitions of messages and message elements; a hierarchy of standardisation bodies at global, regional (i.e. European, American, etc.) and national levels; prevailing conceptions and established practices for how to define and implement messages; an EDIFACT industry of vendors and consultants; and artifacts like manuals, other forms of documentation and educational material about EDIFACT.

The size and complexity of this network make its inscriptions strong and difficultto work against when one is enrolled into it. We will first look at programs ofaction related to the standardisation process of EDIFACT, then we turn to patternsof use inscribed in the EDIFACT technology itself.

EDIFACT technology and the organisation of the EDIFACT standardisation processes make it virtually impossible for users to be involved in, not to say influence, the standards-setting processes. The standards are controlled by a group of more or less professional standardisation people who work for large companies or bureaucracies. Inspired by MacKenzie’s (1990) notion of the “gyro mafia”, this group may be dubbed the “EDIFACT mafia”. This mafia’s control is neither a feature of the EDIFACT format itself nor of the organisation of the standardisation process, but a result of the interplay between the elements of the EDIFACT actor-network outlined above.

An unintended consequence of the complexity and non-transparency of the EDIFACT actor-network is that it inscribes barriers to end-user involvement through its requirements on the level of competence. To be involved in the standardisation work, one needs to know all the rules of the game: the technological details of EDIFACT, the formal rules of the standardisation bodies, as well as all the informal practices. There are formal and informal rules for how a message should be composed as well as for how the processes should run. An essential EDIFACT rule is that existing standardised messages and message elements should be used as far as possible when defining new ones. This implies that in order to make lab standards, one also has to be familiar with standards within virtually all other sectors. The effect, unanticipated we assume, is that it preserves and professionalises the mafia’s control over the process.

In addition, the tendency within EDIFACT to emphasise the technical aspects delegates an even less central role to end-users. The specification of the data format used in the first proprietary systems literally fits on one page of paper and is easily understood by those who need it. The specification of the European standardised EDIFACT message, however, is a voluminous document of 500 (!) pages (CEN 1994a, 1994b). Where this message is used, the information exchanged is almost exactly the same as when using the old systems (CEN 1992b, 1993a, 1993b; KITH 1994)! The bias in lab communication standardisation towards technical and general issues at the expense of practical ones is shared by other EDIFACT efforts, as documented by the evaluation of the European Union’s programme on diffusion of EDI in the trade sector, the TEDIS programme (Graham et al. 1996). In this sense, the bureaucratic and procedural arrangements of EDIFACT inscribe few and only indirect opportunities for user influence.

Inscriptions in the systems architecture

A system’s architecture is its overall organising principle. This is a somewhat vague notion. In our case it can be used to distinguish among architectural categories like transaction-oriented solutions, event-driven ones, message-oriented systems, client-server-oriented ones, etc.

The choice of such a systems architecture is not neutral; it is a material for inscriptions. We will illustrate this by looking more closely at the inscriptions of the EDIFACT bias towards message-oriented systems architectures.

EDIFACT inscribes certain patterns of use. This is partly inscribed in the broadly established view that EDIFACT mimics today’s physical exchange of paper forms, orders and invoices being paradigm examples. This view is also translated into the choice of communication carrier for exchanging EDIFACT messages, i.e. using e-mail as standardised by ISO. Using e-mail implies that the receivers get information when the senders want to provide it, and not when the receivers themselves want it. For clinical-chemical laboratories, for instance, the results will be sent to the ordering general practitioner when the ordered tests are completely analysed, or at predefined intermediate points in the analysis process. This inscribes a behaviour which blocks what is possible with some existing, non-standardised systems. The Fürst laboratory in Norway and its customers use a system where the general practitioners at any time may access the results produced in the analysis process up to that very moment. This function will not be provided by the standardised, e-mail-based solution.
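
The architectural difference may be illustrated with a small sketch. This is hypothetical code, not taken from Fürst’s system or from any standard: a message-oriented lab pushes a report only when the sender decides it is ready, while an on-line service lets the general practitioner pull whatever partial results exist at the moment of asking.

```python
# Illustrative sketch only (hypothetical names): push (store-and-forward
# message) vs. pull (on-line query), the two inscriptions contrasted above.

from dataclasses import dataclass, field

@dataclass
class LabAnalysis:
    ordered: list          # tests the GP asked for
    completed: dict = field(default_factory=dict)  # test -> value so far

    def is_finished(self) -> bool:
        return set(self.completed) == set(self.ordered)

def push_report(analysis: LabAnalysis, send) -> None:
    """Message-oriented inscription: nothing leaves the lab until the
    sender (the lab) decides the report is complete."""
    if analysis.is_finished():
        send(analysis.completed)

def query_results(analysis: LabAnalysis) -> dict:
    """On-line inscription (as in the original Fürst-style solution):
    the GP may read the results produced up to this very moment."""
    return dict(analysis.completed)

analysis = LabAnalysis(ordered=["Hb", "CRP"], completed={"Hb": 14.1})
push_report(analysis, send=print)   # sends nothing: CRP still pending
print(query_results(analysis))      # {'Hb': 14.1} is available immediately
```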

Inscriptions in the message syntax

Compared to modern programming language constructs, the EDIFACT syntax - or data definition mechanisms - is quite primitive. These shortcomings inscribe centralised control and barriers to flexible appropriation in local contexts of use (Hanseth, Thoresen and Winner 1993). The non-transparency of the overall EDIFACT actor-network tends to make these inscriptions invisible and hence unanticipated.

Technically speaking, the EDIFACT syntax lacks constructs for subtyping (or inheritance), pointers and recursive data structures. The ability to subtype would come in very handy when defining standards covering different geographical areas and different disciplines, as subtyping provides a mechanism for defining a standard as a collection of modular building blocks. The lab messages have been defined in order to serve the purposes of a large number of labs (for instance, clinical-chemical labs, micro-biological labs and X-ray labs). In addition, there are geographical differences. Using EDIFACT, a number of different subsets or specialisations of the message have to be worked out. As support for subtyping is lacking, the only way of enabling this is to define a European message covering all local variations as optional elements. Local specialisations are then defined by specifying which of the optional elements are mandatory and which ones should not be used. With subtyping, local modifications would be contained within one module, leaving all others unchanged.
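
The difference may be sketched in a deliberately simplified way (hypothetical field names; EDIFACT messages are of course not Python classes): in the first style, every locally varying field is optional in one all-covering European message and each local subset is an external rule set; in the second style, local variation is contained in a subtype.

```python
# Illustrative sketch only: the two ways of handling local variation.

from typing import Optional

# --- EDIFACT-style: one message, everything locally varying is optional ---
class EuropeanLabReport:
    def __init__(self, patient_id: str,
                 result: Optional[str] = None,
                 price: Optional[float] = None,           # used where GPs pay per test
                 microbe_culture: Optional[str] = None):  # used by microbiology labs
        self.patient_id = patient_id
        self.result = result
        self.price = price
        self.microbe_culture = microbe_culture

# A local specialisation is an external rule set, not a type of its own:
NORWEGIAN_SUBSET = {"mandatory": ("result", "price"),
                    "forbidden": ("microbe_culture",)}

# --- Subtyping-style: a common core ... ---
class LabReport:
    def __init__(self, patient_id: str, result: str):
        self.patient_id = patient_id
        self.result = result

# ... with local variation contained in one module, leaving others unchanged:
class NorwegianLabReport(LabReport):
    def __init__(self, patient_id: str, result: str, price: float):
        super().__init__(patient_id, result)
        self.price = price
```

In the first style, every local change touches the shared European message and its accompanying rules; in the second, it touches only the local subtype.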

In this way the EDIFACT syntax inscribes centralized control of the standards and the standardization process, inhibiting user participation. The syntax is well aligned with the bureaucratic organization, embodying the same inscriptions. Together they make these inscriptions stronger.

Inscriptions in the message itself

An important part of the definition of standardised messages is deciding which dataelements should be included in the messages and which should not. These elementsare also material for inscriptions.

In the system Dr. Fürst’s laboratory developed, only basic result data were included. The Health Level 7 message, used later on as a prototype, included more information. Reflecting the organisation of the health sector in the United States, with private financing, economic information was included. Some economic information may be relevant in Norway as well, especially if the message is seen in the context of the overall economic organisation of the sector, that is, who is paying for what, who is responsible for quality control and cost containment, which institutions are involved in the payment and what kind of information they need.

Based on use scenarios worked out in the Norwegian lab messages working group during 1991-92, it was concluded that the data set in the Health Level 7 message did not satisfy the needs (KITH 1991). The message proposal was distributed together with a request for comments. It was, however, decided that economic information should not be included in the first official message standard, for reasons of simplicity. This was controversial. The association of pharmacies, for instance, expressed in their comments that the areas of use should be expanded to include information exchange between labs, general practitioners and institutions outside health care such as social insurance and banks.

In some European countries, the patients (through the general practitioners) pay part of the costs of the tests, but not in Norway. For this reason, the price the general practitioners pay for each test is included in the European report message. The general practitioners are invoiced periodically. The price information is important in order to check that the invoiced amount is correct. Accordingly, the European standard message includes this economic information, and so does the Norwegian subset.

Another open issue was whether the information in a lab order should be included in the result message as well. Usually the result is returned to the ordering physician, who knows the order specification already. Accordingly, in most cases the order information would be unnecessary. In some situations, however, the result is returned to another general practitioner than the ordering one. This is the case in ambulatory care, where the general practitioner visiting the patient orders a test while the result should be returned to the patient’s ordinary general practitioner. In hospitals, the ordering general practitioner may have left work and a new one may have taken over the responsibility for the patient when the result arrives. In these cases, the result should include the order information as well. If this information is not available, the general practitioner may try to guess (which in many cases would work pretty well), or call the lab and ask.

The arguments against including the order information are the increasing complexity and size of the messages and message specifications it leads to. One proposal put forth was to send the order as a separate message when needed. This solution needed a reference in the result message to its corresponding order message to avoid confusion. Such references, however, are not a part of EDIFACT as it is used. Technically, it would be very simple to find a working solution. The problem was that it would not follow the “rules of the game” of defining EDIFACT messages. It worked against deeply inscribed practices of specific ways to use EDIFACT. Accordingly it was ruled out. It was instead decided that the order information could be included in the result message.
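A minimal sketch, again in TypeScript with hypothetical names, of the two alternatives that were negotiated (the reference-based design that was ruled out, and the embedding that was adopted):

    // The rejected proposal: send the order separately and let the result
    // message point back to it by an identifier.
    interface OrderMessage {
      orderId: string; // unique reference to this order
      orderingPhysician: string;
      requestedTests: string[];
    }

    interface ResultMessage {
      orderRef: string; // refers to OrderMessage.orderId
      results: { testCode: string; value: string }[];
    }

    // The adopted solution: embed the order information in the result
    // itself, at the cost of larger messages and specifications.
    interface ResultWithOrder {
      order?: OrderMessage; // included when the receiver is not the orderer
      results: { testCode: string; value: string }[];
    }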

These examples illustrate that the inclusion or not of a data element in a message is a negotiation over which programs of action should or should not be inscribed into the standard. In these negotiations, EDIFACT acts as a powerful actor in the sense that most alternatives are close to the intended and customary way of using EDIFACT.

Inscribed into a field in the message

EDI based information infrastructures usually need various identification services. Lab report messages must identify the order, the ordering unit (GP, hospital department, another lab, etc.), the patient, etc. Prescription messages must identify the prescribed drug. Registers of such identifiers must be available to the II users. This requires an II for maintaining and distributing updated versions of the registers. Such IIs may be rather complex technologically as well as organizationally. At the European level, there has been much discussion about a European standardized system of codes for identifying the object analysed (blood, skin, urine, etc.), where on the body it is taken from, the test performed and its result. It has so far turned out to be too difficult to reach an agreement. The problem is partly to find a system serving all needs. But it is maybe more difficult to find a new system which may replace the installed base of existing ones at reasonable costs. For more information on these issues see (Hanseth and Monteiro 1996) and (Bowker and Star, 1994).

Another example of inscriptions into single elements is given in the following section.


Accumulating the strength of an inscription

We have described how actors seek to inscribe their interests into heterogeneous materials. Through processes of translation, these interests may be inscribed into a variety of materials. There is more than one way to get the job done.

But inscriptions are often unsuccessful in the sense that their scenarios are not followed, their intentions not fulfilled. To increase the likelihood that an inscription may succeed, it is necessary to increase its strength. A key insight is that, in principle, it is impossible to know beforehand whether an inscription is strong enough to actually work — it remains an open, empirical question that needs to be addressed by a strategy of trial and error.

The question, then, is how do you increase the strength of an inscription? We illustrate two strategies. The first is the superimposing of inscriptions. Rather than merely observing that one and the same scenario may be inscribed into this material or translated into that one, you “add” the inscriptions. Instead of an either-or kind of logic you adopt the both this-and-that kind of Winnie-the-Pooh logic. The second strategy is to expand the network (in ANT terms, enrol some actors and technologies) and look for new, as yet unused, material to faithfully inscribe your scenario into.

According to actor network theory, inscribing behaviour into actor networks is how an actor might reach an objective. Two forms of objectives are:

1. To muster the necessary support to make a favourable decision and implement it.

2. To succeed in the design and implementation of a system.

In general, building alliances is a useful strategy for realizing one’s will. This is one of the main ideas behind actor network theory. An alliance is built by enrolling allies into an aligned network. However, to obtain both objectives mentioned above, the network of allies will include humans as well as technologies (REFXX Latour, “The Prince”). To get support for a decision, you have to tailor technologies to fit the whole alliance you are building. When designing technology, you are also designing roles for humans, for instance users, support people, etc. To make the technology work, you have to make them play the roles you have designed.

The two examples in this section illustrate how inscriptions are made stronger through both of these ways of alliance building.


Technology as ally

As there was a general consensus — the interests were aligned — about the need for standards, the fight about what these standards should look like and how they should be developed started. This race was a seemingly neutral and technical discussion about which technology fitted the needs best. In reality, however, it was a race between different actors trying to manoeuvre themselves into key positions as “gatekeepers” or “obligatory passage points” (Latour 1987). In this race, most of them chose the same generic strategy, namely to first look for the technology which seemed most beneficial for them and subsequently enrol this technology into their own actor-network as an ally. Appealing to the symbolic character of technology makes it possible to make non-technical interests appear as technical arguments. We will here present some actors and how they selected technologies as allies or strategic partners. The general strategy was first to find an appropriate technology which each actor was well “equipped” to represent, and second, to make the allied technology a powerful actor by socially constructing it as the best choice of standard.

Based on their interests in general solutions and rooted in the telecommunication tradition of international standardisation, Telenor searched for international activities aiming at developing “open” standards. The IEEE (Institute of Electrical and Electronics Engineers) P1157 committee, usually called Medix, did exactly this. This work was the result of an initiative to develop open, international standards taken at the MEDINFO conference in 1986. Medix, which was dominated by IT professionals working in large companies like Hewlett Packard and Telenor and some standardisation specialists working for health care authorities, adopted the dominating approach at that time, namely that standards should be as open, general and universal as possible.

The appeal for open, universal standards inscribed in the Medix effort implied using existing OSI (Open Systems Interconnection) protocols defined by the ISO (International Standardisation Organisation) as the underlying basis. The Medix effort adopted a standardisation approach — perfectly in line with textbooks in information systems development — holding that the development should be based on an information model being a “true” description of the relevant part of reality, that is, the health care sector, independent of existing as well as future technology. This case will be presented in more detail in the next chapter. Individual messages would be derived from the model more or less automatically.
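The idea of deriving messages from a shared model can be sketched in a few lines of TypeScript (our illustration of the modelling ideal, with invented names; Medix worked with other notations entirely):

    // A fragment of the assumed "true" information model of the sector.
    interface HealthCareModel {
      patient: { id: string; name: string };
      order: { id: string; test: string };
      result: { orderId: string; value: string };
    }

    // Each concrete message is then just a projection of the shared model,
    // "derived more or less automatically".
    type LabResultMessage = Pick<HealthCareModel, "patient" | "result">;
    type LabOrderMessage = Pick<HealthCareModel, "patient" | "order">;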

While the focus was directed towards a comprehensive information model, lab reports were still the single most important area. However, for those involved in Medix the task of developing a Norwegian standardised lab report message had around 1990 been translated into the development of a proper object-oriented data model of the world-wide health care sector.

In addition to the information model, protocols and formats to be used had to be specified. In line with the general strategy, as few and as general protocols and formats as possible should be included. Medix first focused on Open Document Architecture, believing it covered all needs for document-like information. However, around 1990 most agreed that EDIFACT should be included as well. The Europeans who most strongly advocated EDIFACT had already established a new body, EMEDI (European Medical EDI), to promote EDIFACT in the health sector. In Norway, a driving force behind the EDIFACT movement was the “Infrastructure programme” run by a governmental agency (Statskonsult) during 1989-92. Promoting Open Systems Interconnection standards and EDIFACT systems based on Open Systems Interconnection were key goals for the whole public sector (Statskonsult 1992).

The Norwegian branch of Andersen Consulting, pursuing clear economic interests, was marketing a product manufactured in the United States based on the so-called Health Level 7 standard. To promote their product, they pushed Health Level 7 as a standard in Norway even though it was evident that a substantial modification to make it fit a Norwegian context was required.

A second vendor, Fearnley Data, decided during 1989 to develop products supporting information exchange within the health care sector. They followed the Medix as well as the Health Level 7 activities. In early 1990, they initiated activities aiming at developing Health Level 7 based Norwegian standards. They organised a series of meetings and tried to enrol the necessary actors into a network aligned around Health Level 7 with themselves as the main gatekeeper, while at the same time keeping Andersen Consulting outside the network by focusing on the amount of work required to modify their product in Norway.

In 1990 the Ministry of Health decided that standards should be developed. Responsible for this work was Gudleik Alvik. He hired a consultant, Bjørn Brevik, for doing the technical work. He specified a coherent set of data structures and exchange formats along the same lines as those of Dr. Fürst’s and similar systems (ref.). The proposal was distributed for comments. The procedure followed by the ministry was the general one used for all kinds of decision making concerning new rules to be followed by health care institutions. This procedure was — of course — very different from those of international telecommunication standardisation. It delegated power and competencies to actors within the health care sector and not the telecommunication world. Actors from the telecommunication world mobilised and easily killed this proposal (ref).3


KITH was established in 1991 and was delegated the responsibility for standardisation by the Ministry of Health. KITH’s director Bjørn Engum was the former head of Telenor’s telemedicine project, and was accordingly enrolled into their standardisation activities. He aligned closely with Statskonsult and likewise argued in favour of EDIFACT and OSI. As was the case with Telenor’s role, key public institutions made heavy use of their perceived neutrality to advocate their interests.

Late in 1990, Fearnley Data started the development of a communication system for health care. At this time they had given up the Health Level 7 based standardisation approach because EDIFACT was gaining momentum. They decided to ally with EDIFACT rather than Health Level 7. They furthermore aligned with other standardisation bodies and activities, including European Medical EDI, KITH and Statskonsult. At the same time, another company (Profdoc) started the development of a product paying less attention to standardisation and rather more to the experiences with existing systems.

Fearnley Data decided that their product should follow standards as far as possible. When they started, no formal decision about Norwegian or international standards had been made. However, a “rough consensus”4 had been reached that EDIFACT should be the basis for the exchange of structured, form-like information. Accordingly, Fearnley Data considered it safe to start the implementation of an EDIFACT based solution. One of their employees, Edgar Glück (educated as a doctor, practising as a systems designer), designed a first version of a lab report message in EDIFACT based on the Health Level 7 message. Fearnley Data’s strategy was to install their solutions for communication between hospital labs and general practitioners’ offices in parallel with promoting the message as a proposed standard within national and international bodies. This strategy turned out to be very successful. The existence of a specified message and “running code” had similar effects as Dr. Fürst’s system. As Fearnley Data had one of the very rare existing EDIFACT implementations, they were highly successful in mobilising support for their solution. Having a concrete solution, as opposed to merely playing with paper sketches, proved to be an effective way of enrolling others. With minor changes the message was accepted by both KITH and EMEDI. EMEDI sent the message specification to the Western European EDIFACT Board as a proposal for a message to be formally approved by the international EDIFACT standardisation authorities. The message was quickly approved.

3. One of the authors was involved in this killing.

4. The Internet slogan “We believe in rough consensus and running code” is indeed a precise description of their successful strategy (Hanseth, Monteiro, Hatling 1996). This slogan nicely captures the successful strategy behind the development of the Internet.

The alignment of interests and technologies around EDIFACT established a very powerful actor-network. EDIFACT technology in general and running EDIFACT solutions in particular were powerful actors in this alliance. Profdoc reported that it was “impossible” from 1992 to market their product as it was not based on EDIFACT and standards from the ISO. The rhetoric of “open” standards was quite effective.

In 1990 the Commission of the European Community delegated to CEN (Comité Européen de Normalisation, the European branch of ISO) the responsibility for working out European standards within the health care domain in order to facilitate the economic benefits of a European inner market. CEN established a so-called technical committee (TC 251) on 23 March 1990 dedicated to the development of standards within health care informatics. From this time Medix disappeared from the European scene. However, the people involved moved to CEN, and CEN’s work to a large extent continued along the lines of Medix.

When CEN started their work on lab reports, some proposals existed already. For this reason, they wanted to build upon one of these (CEN 1991). They hoped the message specification worked out by Edgar Glück and approved by European Medical EDI could be proposed as a pre-standard. If so, a pre-standard for lab information could be ready as early as April 1992. There was great pressure to produce results rapidly. However, groups allied with other technologies than EDIFACT opposed this. Among these was a group consisting of just a few persons involved in the Euclides project under the first, preparatory phase of the European Union’s health care telematics programme.

The Euclides project developed a prototype of a system for lab report exchange based on their own non-standard format. After the project was completed, a company was set up in Belgium to continue the work. Being a European Union project, Euclides was well known in the European networks which the CEN work was a part of. As the CEN work was financed by the European Union, the Euclides project was perceived as more important and relevant than its size and achievements would imply. An additional, important factor was the fact that the head of the health care committee of CEN (TC 251), George De Moor, was also the manager of the Euclides project.

The Euclides group realised that they would not succeed in trying to make their format and message the only European standard. Accordingly, they made an alliance with the information modelling approach, proposing to develop an information model for lab first, and that this model would be the primary standard. Based on this model the information could be exchanged using EDIFACT as well as other formats. This proposal was inherited from earlier Medix work, channelled to CEN by former Medix people. As more countries participated in the health care committee of CEN (TC 251) than in the EMEDI group, it was decided to adopt the information modelling approach instead of modifying the EDIFACT message approved by EMEDI. This work was extended by a specification for how the information should be exchanged using EDIFACT. To our knowledge, how to exchange the information using Euclides or other messages or formats has not been specified.

In this process of searching for technologies as powerful allies, these were found among the general and well established ones. As EDIFACT was gaining momentum in Norway as well as Europe at this time (the early 90s), EDIFACT — together with the most closely aligned standardisation bodies — occupied centre stage. The strategy first adopted by Dr. Fürst’s laboratory (accumulating practical experience from various contexts of use within the health care sector) was abandoned in favour of a strategy focusing on modelling techniques. This did not inscribe definite programs of action, but it did inscribe a shift in the delegation from competence about health care to competence in software engineering. This delay in gaining practical experience, caused by aligning with international standardisation bodies, inscribed fewer and less direct channels for end-user input from the health care sector.

Expanding the network to accumulate strength

In chapter 6 we explained how, according to actor network theory, inscriptions have to be linked to larger actor-networks in order to give them sufficient strength. Exactly what it takes to make an inscription strong enough is not possible to know beforehand; it is a question of practical trial and error. A program of action is inscribed into an increasingly larger actor-network until the necessary strength is reached. This aspect of actor network theory is nicely illustrated, we believe, by the attempts presented below to inscribe a desired behaviour of general practitioners into the definition of the semantics of one single data element in the prescription message. The question, then, is how to accumulate enough strength for this inscription to actually enforce the desired behaviour of general practitioners. Most examples presented above illustrate in a similar way how EDIFACT inscriptions have accumulated their considerable strength.

A principal reason for the interest in prescriptions from the point of view of the pharmacies was the prospect of improved logistics by integrating the existing electronic ordering of drugs from the drug depot (Norwegian: Norsk Medisinal Depot) (Statskonsult 1992; KITH 1993a). To exploit the economically interesting possibilities of establishing this kind of just-in-time distribution scheme, there had to be a way for the pharmacies to uniquely identify a prescribed drug with the drug which subsequently was ordered from the drug depot. In the electronic ordering of drugs from the drug depot, the pharmacies made use of an existing drug list with a coding scheme for drug identifiers as a six digit article number.5 This drug list was updated and maintained by the drug depot.

The pharmacies’ interest in improved logistics was accordingly translated into a proposal to include this six digit drug identifier in the electronic prescription message. This way of inscribing their interests into the semantics of one data element in the message was proposed by the representative of the pharmacies early in the pre-project (KITH 1992)6.

No one seems to have objected to this proposal from the pharmacies despite (or maybe because of) the fact that the scenario of use which was inscribed was not spelled out in any detail. In particular, the pre-project did not spell out exactly how the general practitioners should provide this drug identification number when making a prescription. The general practitioners do not make any use of this number. They identify drugs by their type or brand names, not their identification numbers. It is not feasible to increase the workload of general practitioners by demanding that they provide the identifier manually. In that case, electronic transmission would require more work than the paper prescriptions and the general practitioners would have no incentive to change.
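What was implicitly assumed is a lookup along the following lines, sketched here in TypeScript (our hypothetical illustration; the names and the mapping are invented for the example):

    // The GP keeps working with brand names; software silently resolves
    // the six digit article number the pharmacies' ordering system needs.
    type DrugList = Map<string, string>; // brand name -> article number

    function resolveArticleNumber(list: DrugList, brandName: string): string | null {
      return list.get(brandName.trim().toLowerCase()) ?? null;
    }

    const drugList: DrugList = new Map([["paracet 500 mg", "123456"]]);
    resolveArticleNumber(drugList, "Paracet 500 mg"); // -> "123456"
    // A null result corresponds to the pilot's later fallback: the pharmacy
    // telephones the general practitioner to clarify the prescription.

The inscription struggle described below is precisely about who should provide, maintain and pay for the drug list this lookup presupposes.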

Rather than working out a detailed program of action, the representative from the general practitioners’ associations suggested that this somehow could be solved if the general practitioners were granted access to the list of drug identifiers the pharmacies had, the list maintained by the drug depot. Gaining access to this list was appealing to the general practitioners for two reasons. Besides the drug identifiers, the list contains other information useful for the general practitioners, such as prices and synonymous drugs. The fact that the list is continuously updated was also considered favourable. When the pre-project ended in 1992, what remained was to translate the general practitioners’ interests in accessing the drug list into a suitable (but unspecified) inscription and align this with the already agreed upon inscriptions in the prescription message. In actor network theory terms, the inscription which demanded that general practitioners provide the drug identifier was to be strengthened by aligning it with a larger (but unknown) actor-network inscribing access to the drug list for general practitioners.

5. Reaching agreement on article numbers has been an important and challenging part of the establishment of EDIFACT networks in several business sectors.

6. This should not be taken to imply that the pharmacies had their way in every respect. At the same meeting the pharmacies also suggested including a data segment for bonus arrangements which would have substantially improved their reporting routines to the health insurance authorities. This suggestion was declined, mainly for reasons of simplicity (KITH 1992).

The proposals from the pre-project (KITH 1992) were circulated for comments. Profdoc, the sceptical vendor of electronic medical record systems (see above), was also critical of how the issue of drug identification numbers should be solved (Profdoc 1993). The solution Profdoc suggested was to extract the identification number from another source, the so-called “Common Catalogue” (Norwegian: Felleskatalogen), instead of the pharmacies’ drug list. The “Common Catalogue” is a paper based catalogue which all general practitioners have. It contains information about all registered drugs in Norway including their identification numbers. In addition, it contains information about treatment of acute poisoning, drugs that interact with each other, and a register of drug producers and pharmacies in Norway. The catalogue is printed once a year, while additions regarding new or obsolete drugs are printed and distributed continuously. The “Common Catalogue” is produced by a publisher (Fabritius) and was recently made available electronically in the form of a CD-ROM. This solution based on the “Common Catalogue” delegates a very different set of roles to the involved actors. The required integration work between the electronic medical record system and the prescription module would now involve the publisher but neither the pharmacies nor the drug depot. Besides simply pointing out a, technically speaking, perfectly feasible alternative to a solution based on the drug list from the pharmacies, Profdoc also had a more self-centred interest in promoting it. During the period after the pre-project was completed, Profdoc had a series of meetings with the publisher of the “Common Catalogue.” Profdoc explored the possibility, independently of the prescription project, of integrating their medical record system with the “Common Catalogue.” They had never taken a pro-active part in the prescription project. When the issue of drug identification numbers surfaced, they apparently seized the opportunity of trying to design a solution delegating a role to themselves and their allies in the prescription project.

The alternative suggested by Profdoc was not pursued in the main project. Instead, the project continued to work on how to make the drug list available. This soon turned out to be a lot more complicated than they had imagined. The heart of the matter was that the list belonged to an actor outside the project, namely the drug depot. As the list contained information which was confidential, for instance about profit margins on pharmaceutical products, the drug depot had commercial interests in it and refused to hand it over free of charge. Hence, the attempts to accumulate strength for the inscriptions demanding that general practitioners provide the drug identifier were faced with serious, unforeseen problems. It was necessary to translate the commercial interests of the drug depot, a non-project actor, into an inscription. This would involve inscribing roles and obligations for (at least) the following issues: how to obtain the list from the drug depot, how to “wash” the list to make it appropriate for use by general practitioners, and who should do — and pay for — the work. The fact that the participants in the project had to finance their activities themselves made negotiations difficult. The problems with working out an agreement with the drug depot dragged on. In a coordination meeting in January 1994 it was stated that an agreement was to be reached.

Late in 1995, the testing of the system for electronic transmission of prescriptions started at a pilot site (one general practitioner and one pharmacy). In this first version of the system, drugs are identified by their ordinary brand names. Employees at the pharmacy will map this name to its identification number manually. When the name is incomplete or misspelled, as it is assumed will quite often be the case, they will call the general practitioner by telephone. This version will not be used for reiterated prescriptions either.

Due to the European Economic Area treaty, the earlier monopoly status of the drug depot was dismantled as of 1 January 1995. This paved the way for several distributors of drugs to pharmacies beside the drug depot. Each would have their own drug identification number scheme, as no “global” identification coding scheme exists. This makes the drug depot’s earlier situation a lot more vulnerable. To the project leader, the drug depot has stated that they are now willing to give general practitioners free access to their drug list (Yang 1995). During 1996, the provider of applications for the pharmacies, NAF-Data, has been setting up a database covering all distributors of drugs in Norway including the drug depot. This database is intended to be made accessible to the general practitioners. It has been decided that a new institution will be established and delegated the responsibility for giving each drug its unique identification number.

Moving on

The notion of an inscription is, as we have argued both analytically and by drawing upon empirical material, a fruitful and instructive notion when analysing information infrastructures. It may be employed in order to underscore the (potential lack of) flexibility in an information infrastructure. In the chapter that follows we analyse the contents of and the background for the lack of flexibility in many of the information infrastructure initiatives. They are based on an (ill-conceived) assumption that the existing, heterogeneous bits and pieces of an infrastructure do not inscribe any behaviour, and that it is more or less straightforward to sweep alternatives aside and establish a new, universal solution covering all patterns of use.


In chapter 9 we explore further the way inscriptions accumulate strength by “adding” or superimposing one on top of another in the way we have explained in this and the previous chapters. The total effect of such an “adding” and aligning of inscriptions is irreversibility. This process is particularly relevant to come to grips with when analysing information infrastructures due to the long time it takes to establish an infrastructure. It is never established from scratch, only by gradually expanding, substituting and superimposing new elements. This entails that the actor-network in total — the installed base of existing components and patterns of use — inscribes and hence influences the further development and use of the information infrastructure.


CHAPTER 8 Dreaming about the universal

Introduction

Most information infrastructure standardization work is based on a set of beliefs and assumptions about what a good standard is. These beliefs are strong — but not based on any empirical evidence concerning their soundness. They have strong implications for what kinds of standards are defined and their characteristics, as well as for the choice of strategies for developing them. Beliefs of this kind are often called ideologies. Hence, the focus of this chapter is on the dominant standardization ideology, which we dub “universalism”: its content, its history, how attempts are made to apply it, what really happens, and its shortcomings. We will argue that it has serious shortcomings. In fact, dominant standardization approaches do not work for the development of future information infrastructures. New approaches based on different ideologies must be followed to succeed in the implementation of the envisioned networks. Basically, the heterogeneous nature of information infrastructures pointed out in chapter 3 needs to be acknowledged. Furthermore, the way the different components enable and hamper flexibility through their inscriptions needs to be emphasised much more strongly. To this end, we discuss how visions, patterns of use and strategies for change are inscribed into standardisation efforts dominated by the ideology of universalism.


What is universalism?

Universalism is the dream about the perfect, “universal solution,” the “seamless web” where everything fits together perfectly and where any information may be exchanged between anybody connected without any loss of meaning or other obstacles. This seamless web is believed to be realized through the implementation of consistent and non-redundant standards. Dreaming about such a technology seems perfectly sound. These seem like innocent and desirable characteristics of any well designed information system, including information infrastructures. This is, of course, true in a sense. This is a goal to strive for in most design efforts, including design related to IIs. But striving for this goal will often also cause serious trouble. Exactly because universalism is immediately appealing, it is an extremely strong rhetorical device, and it is correspondingly difficult to see what is wrong with it. After all, who would rather have a messy, complex solution than an elegant and coherent one? It does not, after all, make much sense to invent a lot of wheels in parallel. A much better idea is to just make one and let everybody use it.

There is only one major drawback with this scheme: it just does not work this way. Universal solutions of this kind presuppose an overarching design comprising the information infrastructure as a whole, which is not attainable. They are based on a “closed world” set of assumptions. As the world of infrastructures is an open one without limits, universalism implies trying to make a complete, consistent formal specification of the whole unlimited world - which is simply impossible. Universalism also implies homogeneity, as opposed to the heterogeneous character of information infrastructures emphasized in this book (see chapter 3).

In practice universalism leads to big, complex, incomprehensible and unmanageable standards and infrastructures. There cannot be any tidy, overarching design. An infrastructure is a bricolage of components sometimes developed for different purposes, at different times. Hence, the inclination towards universal solutions, although understandable, needs closer scrutiny. Its benefits are greatly exaggerated and its problems vastly downplayed (Graham et al. 1996; UN 1996; Williams 1997).

Universalism is not unique to standardization work. In fact, it is a strong ideal for virtually all technical and scientific work. In this chapter we will look at how universalism is imprinted on health care standardization work in particular, and on other information infrastructures and technical and scientific work in general. We will further look at the arguments given for the ideals of universalism, how attempts are made to implement them, and what really happens when following the ideology. Finally we will analyse the experiences and identify its shortcomings.


Universalism at large: a few illustrations

Universalism is expressed in a variety of ways and in numerous situations. Our ambition is neither to be systematic nor comprehensive, but merely to provide enough illustrations to make our point, namely that the bulk of standardisation has been dominated by this ideology.

Health care

CEN TC 251 clearly expresses universalism as its dominant ideology. Hence, despite the fact that an information infrastructure will evolve, we have to pay the price of “freezing” it into one, given, universal solution:

“in case of moving technologies ... standards could possibly impede the development. On the other hand, it may be desirable to make sure that unsuitable circumstances (e.g. proliferation of incompatible solutions) are not allowed to take root and in that case, standardization must be started as soon as possible in order to set the development on the right track” (De Moor 1993, p. 4).

The assumed need for a coherent, consistent and non-redundant set of global standards is even more clearly expressed by HISPP,1 the US coordination committee collaborating closely with CEN. The great fear of universalism, fragmentation and hence mess, is warned against:

“the efforts of these groups have been somewhat fragmented and redundant. Parallel efforts in Europe and the Pacific Rim threaten to further fragment standardization efforts” (McDonald 1993).

“If we could eliminate the overlap and differences, we could greatly magnify the utility and accelerate the usage of these standards” (McDonald 1993).

The danger, it is maintained, is that of barriers to the free flow of commodities and services:

1. HISPP (...) is coordinating the ongoing standardization activities in different standardization bodies in the US.


“The establishment of a ‘Fortress Europe’ and the creations of barriers of trade have to be avoided” (De Moor 1993, p. 6).

Universalism goes deep. In the very statement of principles for the work of the health care standardization committee, CEN TC 251, it is explicitly pointed out that redundancy is not acceptable because “duplication of work must be avoided” (De Moor 1993, p. 6).

Similar concerns are voiced by the work coordinated in the US by HISPP, which holds that there is:

“too much emphasis on short term benefits while ignoring the long term benefits that would come through the analysis of large data bases collected in uniform fashion over large patient populations” (McDonald 1993).

The belief in one universal standard is explicitly expressed:

“[The goal is] achieving ... a unified set of non-redundant, non-conflicting standard” (McDonald 1993, p. 16, emphasis added).

This is a quite strong expression as it refers to the work of all groups developing health care information infrastructure standards in the US as well as in the rest of the world, and to standards outside the health care sector, like EDI standards in general.

Striving for maximally applicable solutions in a world without clear borders has its implications. As different use areas are linked together and overlapping, developing coherent solutions for subfields implies making a coherent solution for everything. The same inscriptions and patterns of use are accordingly assumed to be equally reasonable everywhere.

The difficulties in defining strict borders and hence the need for all-encompassing solutions were acknowledged in the Medix effort. The objective of this effort was defined as the development of one single global standard, or one coherent set of standards, covering any need for information exchange within health care:

“The eventual scope of P1157 is all of healthcare communications, both in the medical centre, between medical centres, and between individual providers and medical centres” (ibid.).


This was carried over to CEN. CEN considers standards the key to successful health care II development, and holds that standards should be developed at an international, hopefully global, level:

“More alarming is the ascertainment that many of these telematics-experiments [in US, Europe (typically projects sponsored by EU), ..] in Health Care have been conducted on regional scales [i.e. Europe, US, ..] and inevitably have resulted in the proliferation of incompatible solutions, hence the importance of standardization now” (De Moor 1993, p. 1).

which implies that

“Consensus standards are needed urgently” (De Moor 1993, p. 2).

Computer communications

Universalism has an equally strong position within telecommunication and computer communication in general. The OSI model and the OSI protocols are clear examples of this. The protocols are defined under the assumption that they will be accepted by everybody, and accordingly be the only ones in use. Not addressing how they should interoperate with already existing network protocols, i.e. their “installed base hostility,” has been put forth as the major explanatory factor behind their lack of adoption.

The Internet has often been presented as the opposite alternative to OSI. It may serve that role here as well (Abbate 1995).

The Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, ground-based packet radio networks and other networks. The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level “Internetworking Architecture”.

In an open-architecture network, the individual networks may be separately designed and developed, and each may have its own unique interface which it may offer to users and/or other providers, including other Internet providers. Each network can be designed in accordance with the specific environment and user requirements of that network. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations will dictate what makes sense to offer.

In this way the Internet was developed under the assumption that there would be no one single, universal network, but rather a heterogeneous infrastructure composed of any number of networks and network technologies. These assumptions are even more explicitly expressed in the four ground rules that were critical to Robert Kahn's early thinking, which later on led to the design of TCP/IP:

• Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the Internet.

• Communications would be on a best effort basis. If a packet didn't make it to the final destination, it would shortly be retransmitted from the source (see the sketch after this list).

• Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.

• There would be no global control at the operations level.
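The second ground rule can be sketched in a few lines of TypeScript (our illustration only, not ARPANET code; the transmit callback is hypothetical):

    // Best effort delivery: the network keeps no state about the packet,
    // so only the source can detect a loss and retransmit.
    async function sendBestEffort(
      transmit: (packet: string) => Promise<boolean>, // true if acknowledged
      packet: string,
      maxAttempts = 5
    ): Promise<boolean> {
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        if (await transmit(packet)) return true; // delivered
        // No acknowledgement: retry from the source, since the gateways
        // remember nothing about the packet.
      }
      return false; // give up; higher layers or the user must cope
    }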

Whether an information infrastructure should be based on a universal network design or allow the degree of heterogeneity underlying the Internet technology was the source of heated debates throughout the seventies (and eighties) in the discussions about standards for such infrastructures. In this debate, one alternative was the Internet and its TCP/IP suite, the other the X.25 standard proposed by CCITT and the telecom operators (Abbate 1995). X.25 was put forth as the universalistic alternative to TCP/IP.

The telecom operators preferred X.25 as it was designed according to the ideology of universalism, which had a strong tradition since the very early days of telecommunication. However, such a homogeneous, universalistic solution was also well aligned with their interest in extending their monopoly from telephone communication into computer communications (ibid.).

However, the Internet community is far from consistent in its thinking related to universalism. The supporters of the Internet technology, including TCP/IP, have always argued for TCP/IP as the universal network/transport level protocol. The rationale behind TCP/IP is an example of basic assumptions about a heterogeneous, pluralistic world with diverging needs, requiring different network technologies (like radio, satellite and telephone lines). However, this description of a heterogeneous, open world is put forth as an argument for one specific universal solution, TCP/IP. From the level of TCP/IP and up there is no heterogeneous world any more, only consistent and coherent user needs to be satisfied by universal solutions. This view is maybe strongest and most explicitly expressed by Einar Stefferud when arguing against e-mail gateways (Stefferud 1994). He argues that gateways translating between different e-mail (address and content) formats and protocols should not be “allowed” as such translations cannot in principle be perfect. In such translations there are, at least in principle, possibilities for loss of information.

Stefferud proposes another solution for integrating enclaves of users using separate e-mail systems. He calls this principle tunnelling, meaning that an e-mail message generated by one e-mail system, say cc:mail, might be tunnelled through an enclave of e-mail systems of another kind, say X.400 systems, by enveloping the original e-mail in an e-mail handled by the second e-mail system. However, this technique can only be used to send an e-mail between users of the same kind of e-mail system. A user connected to a cc:mail system might send a mail to another cc:mail user through, for instance, an X.400 network. However, it will not allow communication between a user of a cc:mail system and an X.400 user. If tunnelling is a universal solution, as Stefferud seems to believe, it presupposes a world built up of separate sub-worlds between which there is no need for communication. Universalism and closed world thinking thus have strong positions in the Internet community as well.
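The difference between tunnelling and gatewaying can be sketched as follows in TypeScript (our hypothetical illustration; the types are invented and greatly simplified):

    // Tunnelling: the original message is carried opaquely inside an
    // envelope of the transit system. Nothing is translated, so nothing
    // can be lost; but only a cc:mail endpoint can open it again.
    interface CcMailMessage { ccMailAddress: string; body: string }
    interface X400Envelope { x400Address: string; payload: string }

    function tunnel(msg: CcMailMessage, transitAddress: string): X400Envelope {
      // The X.400 network sees only its own envelope; the payload
      // passes through untouched.
      return { x400Address: transitAddress, payload: JSON.stringify(msg) };
    }

    function untunnel(env: X400Envelope): CcMailMessage {
      return JSON.parse(env.payload) as CcMailMessage;
    }

    // A gateway, by contrast, would rewrite CcMailMessage itself into the
    // X.400 format: the translation Stefferud rejects as inherently lossy.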

Science: universal facts and laws

Universalism has its strongest position in science. Universalism in technological development is usually the result of seeing scientific ideals as the ultimate ones, and accordingly trying to apply scientific methods in technological development.

The traditional view on science is that science is simply the discovery of objective, universal facts, laws and theories about nature (and possibly other worlds, like the social). In this sense, universalism corresponds to a kind of heliocentrism, namely the (implicit) assumption that your own position is privileged, that you are the origo around which everyone circles. It would carry us well beyond the scope of this book to pursue universalism within science. For our purposes it suffices to observe the many forms and appearances of universalism as well as its far-reaching influence.

Standards are found everywhere, and as such they have been a focus of study. Standards — in a wide sense — are indeed an issue addressed in STS. Not specifically technological standards, but rather standards in the form of universal scientific facts and theories. These studies also, we believe, have something to tell us about information infrastructure standards.


Constructing scientific facts, theories, technologies, standards

Universality, actor network theorists have argued, is not a transcendent, a priori quality of a body of knowledge or a set of procedures. Rather, it is an acquired quality; it is the effect produced through binding heterogeneous elements together into a tightly coupled, widely extended network. In his elegant study on the creation of universality, Joseph O'Connell discusses the history of electrical units.[2] Laboratory scientists, US war planes and consumers buying new TVs do not simply plug into some pre-given, natural Universal called the Volt. Rather, the Volt is a complex historical construct, whose maintenance has required and still requires legions of technicians, Acts of Congress, a Bureau of Standards, cooling devices, precisely designed portable batteries, and so forth.

Other theoretical traditions within STS likewise question these rhetorics. Social constructivist analyses, for example, also argue that the universality of technology or knowledge is an emergent property: to them, a fact or a technology becomes universal when the relevant social actors defining it share a common definition.

Obtaining universality

• deleting context

• Geof

The maybe most basic finding within STS is the local and situated nature of all knowledge - including scientific knowledge. Latour and Woolgar (1986) describe how scientific results are obtained within specific local contexts and how the context is deleted as the results are constructed as universal. Universals in general (theories, facts, technologies) are constructed as the context is deleted, basically by being taken as given. This construction process has its opposite in a deconstruction process when universals are found not to be true. In such cases the universal is deconstructed by re-introducing its context to explain why it is not valid in the context at hand (Latour and Woolgar, 1979).

In spite of the fact that the context of origin and the interests of its originators are “deleted” when universals are created, these elements are still embedded in the universals. They are shaped by their history and do not just objectively reflect some reality (in the case of scientific facts or theories) or act as neutral tools (in the case of universal technologies). They embed social and political elements.

In the same way as other universals, infrastructure standards are in fact “local” (Bowker and Star 1994, Timmermans and Berg 1997). They are not pure technical artifacts, but complex heterogeneous actor-networks (Star and Ruhleder 1996, Hanseth and Monteiro 1997). When a classification and coding system like ICD (International Classification for Diseases)2 is used, it is embedded into local practices. The meaning of the codes “in use” depends on that practice (Bowker and Star 1994). The ICD classification system, developed and maintained by WHO in order to enable a uniform registration of death causes globally (to enable the generation of statistics for research and health care management), reflects its origin in the Western modern world. “Values, opinions, and rhetoric are frozen into codes” (Bowker and Star 1994, p. 187). Common diseases in the third world are less well covered, and the coding system is badly suited for the needs of a third world health care system.

Implementation - making it work

• using a theory

• implementing/using a standard

In parallel with showing how universals are constructed, STS studies have addressed, maybe more extensively, how they are used, i.e. how they are made to work when applied in spite of the seemingly paradoxical fact that all knowledge is local. This is explained by describing how the construction of universals, the process of universalization, also has its opposite, the process of localization. The meaning of universals in specific situations, and within specific fields, is not given. It is rather something that has to be worked out, a problem to be solved, in each situation and context. If we want to apply a universal (theory) in a new field, how to do that properly might often be a rather difficult problem to solve; in other words, working out the relations between the universal and the local setting is a challenging design issue. As a universal is used repeatedly within a field (i.e. community of practice), a shared practice is established, within which the meaning and use of the universal is taken as given.

boundary objects

practical accomplishment

Lucy plans

Just as the development of universals is not a neutral activity, social and political issues are involved in the use of universals. As their use is not given, “designing” (or “constructing”) the use of universals is a social activity like any other, taking place within a local context where social and political issues are involved.

2. ICD has been defined under the authority of WHO, and is used in health care institutions in most of the world. ......

Marc Berg and Stefan Timmermans argue that studies in the STS field tend to reject the whole notion of universals (Timmermans and Berg 1997, Berg and Timmermans 1998). They disagree, saying that universals exist, but are always embedded into local networks and infrastructures. Universals exist - but as local universals. “The core of universalities lies in the changes built up on local infrastructures.” They argue further that there are always multiplicities of universalities. Some of these will be in conflict. Each universal primarily defines an order it is meant to establish. Implicitly it at the same time defines dis-order - that which does not match the standard. When a multiplicity of standards are involved in an area - which is “always” the case - one standard’s order will be another’s dis-order. Further, Berg and Timmermans show how a standard even contains, builds upon, and presupposes dis-order.

In this chapter we will use these theories and concepts to discuss the definition, implementation and use of corporate infrastructure standards. We will do this by first showing how the need for a universal solution - a standard - was constructed, then the decision to define and implement a corporate standard called Hydro Bridge, and finally the definition of its content.

The main part of the chapter concentrates on the implementation of the standard. The most characteristic aspect of this implementation process is the repeated discovery of the incompleteness of the standard, in spite of all efforts to extend it to solve this very incompleteness problem. This is a continuous process of enrolling new actors and technological solutions to stabilize the network constituting the standard. The stabilization process never terminates - partly due to the open nature of infrastructures, but maybe more importantly because the standard creates disorder within exactly the domain it is designed and implemented to bring into order.

The origin of universalism

Constructing the need for universal standards

It is not at all obvious that solutions should be maximally applicable, so where does the idea stem from? We illustrate this by drawing on the experience of developing a Norwegian health information infrastructure and tracing its initial, international efforts.


The choice of a standardisation model was not given from the outset. The general Zeitgeist, however, was that of working out as universal and open standards as possible, as explained earlier for lab communication (see chapters 2 and 7). Adopting EDIFACT as the basis for electronic prescriptions seemed inevitable even though alternatives were proposed. These alternatives inscribe quite different interests and delegate completely different roles and competencies to the involved actors, especially the EDIFACT mafia.

Several alternative standardisation and information infrastructure development strategies, or models, were promoted originally. These models are all more or less based on deep-seated convictions about how technology development takes place. They inscribe quite different spheres of authoritative competence and steps to proceed in the design. The range of technically feasible standardisation models was practically unlimited. This implied that deciding on one model was less a question of the technical superiority of any one model and more a question of who should be allowed to function as a gatekeeper in defining the problem.

The development of electronic information exchange between health care institutions in Norway started when a private lab, Dr. Fürst’s Medisinske Laboratorium in Oslo, developed a system for lab report transmission to general practitioners in 1987. The system was very simple - the development time was only 3 weeks for one person. The interest of Dr. Fürst’s laboratory was simply to make a profit by attracting new customers. It was based on the assumption that the system would help general practitioners save much time otherwise spent on manually registering lab reports, and that the general practitioners would find this attractive. Each general practitioner receives on average approximately 20 reports a day, which take quite some time to register manually in their medical record system.

The system proved to be a commercial success and brought the lab lots of general practitioners as new customers. This implied less profit for the other labs. Within a couple of years, several non-private labs (in hospitals) developed or bought systems with similar functionality in order to stay competitive. Although these systems were more or less blueprints of that of Dr. Fürst’s laboratory, there were differences which inscribed extra work for the vendors of electronic medical record systems for the general practitioners. This gave these vendors incentives for working out one, shared solution.

Alongside the growing number of labs adopting systems for exchange of reports, an increasing number of actors saw a wider range of applications of similar technology in other areas. These actors were represented within the health sector as well as among possible vendors of such technology. For all of them it was perceived as important that the technologies should be shared among as many groups as possible in order to reduce costs and enable interconnection of a wide range of institutions.

Telenor (the former Norwegian Telecom) had strong economic interests in promoting extensive use of tele- and data communication based services. As telecommunication technology became more integrated with IT, Telenor searched for candidates for extensive use of new and advanced services. The health sector was selected as the potentially most promising one. After an initial experiment, Telenor launched the project “Telemedicine in Northern Norway” in 1987, which ran until 1993. Although Telenor realised that the services and products developed for a specific sector like health care could never be as general as the telephone, Telenor had a strong economic incentive to make its market as large as possible. This strategy presupposes that the standards are as general as possible in order to cover as many sectors as possible.

Standardisation has always been considered important within the telecommunication sector. Hence, Telenor took it for granted that the new health information infrastructure standards should be like any other telecommunication standard: “open” and developed according to the procedures of formal standardisation bodies. Telenor effectively acted as a standardisation “partisan”. Its perceived neutrality, together with the investments in the telemedicine project, made Telenor a very influential actor within information infrastructure standardisation in Norway in the 80s.

The historical legacy of telecom

Universalism within telecom has a long-standing history. It was first coined in 1907 by the president of AT&T, Theodore Vail, and amounted to “one policy, one system and universal service” (Mueller 1993, cited in Taylor and Webster 1996, p. 219). The notion of universalism in telecom is not well defined. It started out as a tidiness principle, namely the principle of a unified, non-fragmented service. It has since come to include also issues of coverage and reach. The heritage of universalism in telecom mirrors the deeply felt obligation of early telecom providers to avoid fragmentation and inequity. It thus inscribed clear, political goals. Universalism in telecom was heavily influenced by - arguably even crucially dependent upon - the prevailing monopoly situation (Taylor and Webster 1996, p. 220):

“The key to this construction of universal service was that it linked political goals, such as universal service, to a particular system of economic organisation, a monopoly which sustained itself through revenue-pooling arrangements.”

The question, then, is how universalism may, or indeed should, unfold in the current situation of increasing deregulation.

Within a region controlled by a telecom operator, the ideal of equity was pronounced (XXREF soc hist of tele). Telephone service should be as general as possible; everyone should be granted the same opportunities of access and service. Abbate (1995) shows how this biased the telecom world towards centralization as a means to achieve coherence, consistency and non-redundancy.

In the ongoing deregulation of the telecom sector, “universal service” is a key term. The term means that deregulation needs to happen in a way guaranteeing universal service, i.e. all services provided to anybody in a country should be provided to everybody (at the same price) all over the country (OECD).

How to create universals

Having sketched the background for universalism as well as some of its expressions, let us explore next how this maps onto practical design efforts. How, then, do designers proceed when (implicitly) influenced by universalism?

Making universal health care standards

There is a strong tendency to aim at solutions that, implicitly or explicitly, are to be fairly stable. The drive is to get it right once and for all, to really capture “it”:

“The complete model... will be very big, very complex and expensive to make. In the process of coming there, a collection of smaller, internally consistent sub-models, coordinated via the most common medical objects, and specialized for states and countries, have a value of their own” (MedixINFO).

Object-oriented techniques were supposed to provide the tools necessary to develop this all-encompassing information model while all the time keeping it consistent and backwards compatible (Harrington 1993). The requirements for the model were described as follows:


“The MEDIX framework requires a medical information model. It must be a conceptual model, describing how actual people and medical systems share and communicate information. It is not a model describing how an automated version of the health care environment distributes and communicates technical representations of medical documents. No computers or communication nodes will be present in the model, only real world objects like patients, physicians, beds, and tests.

Making a common information model for use in MEDIX is a necessary task, involving many people and work-months. On one hand, the model must be precise enough to be used as the basis of operational, communicating health care systems. On the other hand, it must capture the reality as it is perceived by many people” (ibid., p. 5).

The information model was intended to be developed and specified according to Coad and Yourdon’s “Object-Oriented Analysis” (Coad and Yourdon 1991).

The computational/communication model is an implementation of the information model, which is assumed to be derived more or less automatically from the specification of the information model. An automated health care IT system is assumed to represent “the underlying health care reality in terms of a computational model of that reality” (Harrington 1990a).

“The information model serves two purposes. First, it represents the information flow patterns in the health care environment. In this representation, there are no computers or communicating computer processes, only real world entities like patients, physicians and service requests...

Once the information model has been defined and validated, however, it takes on a new mission in which the model must relate to information technology, and in particular to communication standards and profiles. For the model to serve this mission, it must be translated into precise languages which can be used by automated processes to define the messages and trigger events of the computer systems. Therefore a set of translating (or mapping) rules and methods has been developed for translating selected aspects of the real world model into communication profiles” (ibid.)

“By extracting a small subset of the model, and relating it to a single medical scenario, it is possible to “prove” the model's suitability and correctness with respect to this tiny subset, if a set of medical experts agree that it is a “correct” representation of the real world” (ibid., p. 12).

The MEDIX framework is a strict and straightforward application of the classical information modelling approach to information systems development. Some weaknesses of this approach are mentioned. However, these weaknesses are implicitly assumed to be irrelevant, as there is no attempt to deal with them.
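To make the model-to-profile translation described above concrete, consider the following minimal sketch. It is our illustration, not MEDIX code: all class, field and segment names are invented, and the mapping rule stands in for the “translating rules and methods” the quotes refer to.

# A hypothetical fragment of a "real world" information model and a
# hand-written mapping rule that flattens it into message-like segments.
from dataclasses import dataclass


@dataclass
class Patient:
    national_id: str
    name: str


@dataclass
class LabTest:
    code: str        # an analysis code the partners would have to agree on
    result: float
    unit: str


@dataclass
class LabReport:
    patient: Patient
    tests: list[LabTest]


def to_message_segments(report: LabReport) -> list[str]:
    """Mapping rule: translate model objects into flat message segments.

    Note that the model itself says nothing about segment order, field
    lengths or optional fields - exactly the detail a communication
    profile must add on top of the "real world" model.
    """
    segments = [f"PAT+{report.patient.national_id}+{report.patient.name}"]
    for test in report.tests:
        segments.append(f"RES+{test.code}+{test.result}+{test.unit}")
    return segments


report = LabReport(Patient("01017012345", "Ola Nordmann"),
                   [LabTest("HB", 14.2, "g/dL")])
print("\n".join(to_message_segments(report)))

The sketch makes visible how much is left open by the “real world” model alone, which is precisely the gap the following paragraphs discuss.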

The key tool and strategy for creating universal standards is information modelling. This tool and strategy directly mirrors a naive realist position (Hanseth and Monteiro 1994 (XXSJIS)) within the theory of science: the belief that the objectively true world is discovered if we use proper methods, and that the world discovered using such a method is a consistent one of manageable complexity. These assumptions are used extensively in the rhetoric of information modelling, in particular when arguing its advantages over alternative positions.

As mentioned, some weaknesses of the approach are acknowledged. However, they are treated as non-serious: either they do not really matter, or they can be solved. The complexity of the models searched for is one such possible problem. But it is perceived as a problem only to the extent it can be solved; i.e. the only problems seen are problems that really aren’t problems, or problems become visible only when a solution appears. The complexity problems exist only to the extent that object-oriented techniques can solve them (Harrington 1993).

Universals in practise

So far our analysis and critique of universalism has been of a conceptual nature. If we bracket this theoretically biased view for a moment, what is the practical experience with standardisation dominated by universalism? Is the critique of universalism but high-flying theoretical mumbling, void of any practical implications?

Evaluation at a quick glance

The massively dominant approach has to date been met with surprisingly few objections. The heritage from telecommunication standardisation and information modelling (see above) is evident in the thinking and actions of the EDIFACT mafia. It was, for instance, simply “obvious” that the problem of developing lab messages in Norway should be translated from acquiring practical experience from situations of use in Norway to aligning the specification with perceived European requirements. The EDIFACT mafia had a gatekeeping role which allowed them to define the problem. And their definition of the problem was accepted. Proponents of alternatives (for instance, Profdoc’s bar codes) were unable to market their solutions to users. The statement from EDIFACT cited in (Graham et al. 1996, p. 10, emphasis added) illustrates how problems are downplayed and benefits exaggerated: “It should be understood that the benefits of having a single international standard outweigh the drawbacks of the occasional compromise”.

The diffusion, in line with (Graham et al. 1996), has been very slow. The non-standardised lab message systems developed and adopted by users in the period 1987 to 1992 are still in use, although their further diffusion has stopped. The installations of systems based on standardised lab messages seem to be used as described by the scenarios worked out as part of the standardisation work. Similarly, the EDIFACT messages implemented adhere to the practice inscribed into the actor-network constituting EDIFACT technology. There are no implementations of the standards based on re-interpretations of some of the design assumptions. Dr. Fürst’s lab considers implementing a system providing services beyond what can be offered by a standardised one as too difficult at the moment. This would require cooperation with other actors, and establishing such an arrangement is too difficult as it is based on an anti-program compared to that inscribed into the standards.

Learning from experience?

CEN’s own judgement is that “CEN/TC 251 has so far been a successful Technical Committee” (CEN 1996, p. 3). This is obviously true from a purely political point of view, in the sense that it has established itself as the most authoritative standardization committee on the European level within the health care area. Looking at the implementation (diffusion) of the standards defined, the judgement may be different. So far, ten years of standardization work within Medix and CEN has hardly had any effect on information exchange within health care. Maybe the most significant effect has been that the standardization work has made everybody await the ultimate standards rather than implement simple, useful solutions. Within the health care sector in Norway, the simple solutions for lab report exchange diffused very fast around 1990. The standardization efforts seem to have stopped rather than accelerated the development and diffusion of IIs.

The CEN approach is an example of those criticized for being all too slow and complex, and for not satisfying user needs. The Medix work started in a period with limited experience of the kind of work it was undertaking. In the CEN case, however, more experience is available. This experience does not seem to influence CEN’s approach; neither have the discussions about strategies for implementing the NII and the Bangemann plan.

Page 148: Understanding Information Infrastructureheim.ifi.uio.no/~oleha/Publications/bok.pdfUnderstanding Information Infrastructure 3 • All goods might be bought through electronic commerce

Universals in practise

Understanding Information Infrastructure 143

CEN considers the development of consistent, non-redundant global standards most important for building IIs. User influence is also considered mandatory. However, it believes user participation in the information modelling work will do the job. It does not consider changing the standards to adapt to future needs to be of any importance. The issue is mentioned but not addressed. Even the brief comments made in Medix about the need for evolution, and how object-oriented methods would enable this, have disappeared. CEN’s view of its own work is in strong contrast to how it looks at ongoing local activities implementing IIs for specific needs. This work is considered a great danger, as it will lead to permanently incompatible IIs. Accordingly, CEN’s position rests on a belief that these smaller and simpler IIs cannot be changed and evolve into larger interconnected networks. How its own standards, and the IIs implementing them, are going to avoid this very problem - the difficulty of being changed to accommodate future needs - is hard to see.

The view on standards found in the health care standardization communities mentioned above is quite common among those involved in standardization. For instance, within the CEC’s Telematics Programme a similar method for engineering trans-European telematics applications has been developed (Howard 1995).

The illusion of universalism

Universalism is an illusion, at least in the form of universal information infrastructure standards. We will illustrate this by the openness of the use and use areas of infrastructures, the unavoidable duplication, and the incompleteness of any information infrastructure standard.

Openness: However universal the solutions are intended to be, there are always more integration and interconnection needs that are not addressed. At the time of writing, a popular Norwegian IT journal raised criticism against the government for lacking the control needed to ensure compatibility (ComputerWorld Norge 1997). The problem, it is said, is that two projects are developing solutions for overlapping areas without being coordinated. One is developing a system, including a module for digital signatures, for the transmission of GPs’ invoices. The (digital signature) security system is intended to be used in all areas of communication between social insurance offices and the health sector, as well as for internal health sector communication. It is designed according to CEN specifications as far as possible, to be compatible with future European II needs within health care. The other project is developing a solution for document exchange with the government, including a system for digital signatures supposed to be the “universal system” for the government sector - of which the social insurance offices are part.


This example illustrates a dilemma which can neither be solved nor escaped: Should CEN specify security system standards not only for the health care sector in Europe, but for the whole public sector as well? Or should the development of a security system for the government sector also include the solution for the whole health sector - including trans-national information exchange, and accordingly be compatible with the needs of all European countries? And what about the other sectors anybody involved communicates with?

Incompleteness: When developing a “universal solution,” irrespective of its completeness it will always be incomplete in the sense that its specification has to be extended and made more specific when it is implemented. For instance, when two partners agree on using a specific EDIFACT message, they must specify exactly how to use it. Although a standard EDIFACT message is specified, and so is the use of X.400 as the standard for the transport infrastructure, other parts are missing. Such parts may include security systems. If a security system is specified, for instance one requiring a TTP (a “trusted third party”), the security system has to be adapted to the services supported by the TTP, etc. All these “missing links” introduce seams into the web. This makes the implemented solution locally situated and specific - not universal. The universal solution is not universal any more when it is implemented in reality.
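The incompleteness point can be illustrated with a small, hypothetical sketch: the “standard” below deliberately leaves several options open, and no message can flow until a bilateral profile has closed every one of them - at which point the resulting solution is local, not universal. The option names are invented for illustration.

# Sketch of a standard that leaves implementation choices open, and of
# the bilateral "profile" two partners must agree on before exchanging
# anything. All names and option values are hypothetical.
STANDARD_OPTIONS = {
    "transport": {"X.400", "FTP"},
    "security": {"none", "signature", "signature+encryption"},
    "character_set": {"UNOA", "UNOB"},
}


def make_profile(**choices: str) -> dict:
    """A bilateral profile: one concrete choice per open option."""
    for option, value in choices.items():
        if value not in STANDARD_OPTIONS[option]:
            raise ValueError(f"{value!r} not allowed for {option}")
    missing = STANDARD_OPTIONS.keys() - choices.keys()
    if missing:
        # The standard alone is not implementable: something is still open.
        raise ValueError(f"profile incomplete, still open: {missing}")
    return choices


# Two perfectly standard-conformant profiles that cannot interoperate:
lab_profile = make_profile(transport="X.400", security="signature",
                           character_set="UNOA")
invoice_profile = make_profile(transport="X.400",
                               security="signature+encryption",
                               character_set="UNOB")
print(lab_profile == invoice_profile)  # False: locally situated, not universal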

Duplication: When implementing a solution, “real” systems are required, not just abstract specifications. Usually the nodes in an II will be based on commercially available implementations of the chosen standards. However, to work together, commercial products have to be adapted to each other. In an EDIFACT based solution requiring security services (encryption, digital signatures, etc.), the so-called EDIFACT converter and the security system must fit together. Usually each converter manufacturer adapts their product to one or a few security systems providing some security functions. This means that when an organization is running two different EDIFACT based services with different security systems, it will often have to install not only two security systems, but two EDIFACT converters as well, since the security systems are not integrated with the same EDIFACT converter. This problem is found in the implementation of the EDIFACT solutions for lab reports and GPs’ invoices respectively. The GPs using both kinds of services must install two separate systems, duplicating each other’s functionality completely: two EDIFACT converters, two security systems, two X.400 application clients and even access to two separate X.400 systems and networks! (Stenvik 1996).

The problems the development of “universal solutions” meets are basically exactly those the believers in universal solutions associate with heterogeneous solutions, and which the idea of the “universal solution” is proposed to solve. In some cases the solutions developed are even worse than “non-standard” ones, because the assumed standardized solutions are equally heterogeneous and in addition much more complex.

This kind of inconsistency is also found within CEN concerning installed base issues. Those involved appear to be well aware of the irreversibility phenomenon as far as existing local solutions are concerned (De Moor 1993, McClement 1993). The irreversibility of these incompatible solutions is one of CEN’s most important arguments in favour of universal, consistent and non-redundant standards. However, it is a bit strange that they apparently believe that the phenomenon does not apply to their own technology.

Linking variants

Universalism faces serious and greatly under-estimated problems. In practice, especially for working solutions, the design of consistent, non-redundant solutions inherent in universalism has been abandoned. Instead, duplication, redundancy and inconsistency are allowed. The situation is kept at a manageable level by linking networks through gateway mechanisms.

Successful information infrastructure building efforts have not followed the ideology of universalism, but rather an opposite “small is beautiful” approach, concentrating on solving more urgent needs and limited problems. The system developed by Fürst and later copied by lots of other labs is an example of this. To our knowledge, there is no larger effort in the health sector explicitly trying to implement larger IIs based on explicitly chosen transition and interconnection strategies and gateway technologies. However, smaller networks are connected based on shorter term perspectives and practical approaches. Fürst, for instance, has connected its system for lab report transmission to GPs in Norway to the system used by three UK and US based pharmaceutical companies (receiving copies of lab reports for patients using any of their drugs being tested out). The network interconnections are based on “dual stack” solutions. Interfacing a new network and message format takes about one person-week of work (Fiskerud 1996).

The Fürst experience indicates that building larger IIs by linking together smaller networks is rather plain and simple. The difference in complexity between the Fürst solution and the CEN effort is striking. The specification of the data format representing lab reports used in the Fürst system covers one page. The CEN specification of the EDIFACT message for lab reports covers 500 pages, and its specification work lasted for 4 to 5 years. According to one Norwegian manufacturer of medical record systems for GPs, for two partners that want to start using a standardized solution, ensuring that they are interpreting the CEN standardized message consistently demands just as much work as developing a complete system like Fürst’s from scratch.

If universalism is dead, what then?

Based on the experiences mentioned above, two kinds of gateways seem to be particularly relevant. One is gateways linking together different heterogeneous transport infrastructures into a seamless web. The other is “dual stack” solutions for using different message formats when communicating with different partners. This is pursued further in chapter 11.

Experience so far indicates that implementing and running dual stack solutions is a viable strategy, as sketched below. If a strategy like the one outlined here is followed, implementing tools for “gateway-building” seems to be a task of manageable complexity.
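As an illustration of the dual stack idea - not of Fürst’s actual software - the following sketch dispatches the same lab report to different partners in partner-specific wire formats. Both formats and partner names are invented.

# A minimal dual-stack sketch: the sender keeps one encoding (and, in a
# full system, one transport) per communication partner, and a dispatcher
# picks the right stack. All identifiers are hypothetical.
from typing import Callable


def encode_simple(report: dict) -> str:
    # stand-in for a simple, one-page in-house format
    return f"{report['patient']};{report['test']};{report['value']}"


def encode_edifact_like(report: dict) -> str:
    # stand-in for a standardized message produced by a converter
    return (f"UNH+1'PAT+{report['patient']}'"
            f"RES+{report['test']}+{report['value']}'UNT'")


STACKS: dict[str, Callable[[dict], str]] = {
    "gp_in_norway": encode_simple,
    "pharma_partner": encode_edifact_like,
}


def send(partner: str, report: dict) -> str:
    """Gateway-style dispatch: same report, partner-specific format."""
    return STACKS[partner](report)


report = {"patient": "01017012345", "test": "HB", "value": 14.2}
print(send("gp_in_norway", report))
print(send("pharma_partner", report))

Interfacing a new partner then amounts to registering one more encoder in the table, which is consistent with the modest one-person-week effort reported above.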

In the chapter that follows, we explore how the fallacy of universalism can be avoided by appropriating the inscriptions of the existing bits and pieces of an infrastructure.


CHAPTER 9 Installed base cultivation

Introduction

We have outlined how the elements of an information infrastructure inscribe future patterns of use (chapters 6 and 7), and how the hitherto dominant approach inscribes completely unrealistic scenarios of use (chapter 8). This leads us to explore more realistic ways to intervene in, i.e. design, an information infrastructure. This implies a closer analysis of the way behaviour is inscribed in the already existing elements of an infrastructure - the installed base. Our analysis points towards a radically different approach to the “design” of infrastructure, an approach dubbed “cultivation”.

Installed base

The building of large infrastructures takes time. All elements are connected. As time passes, new requirements appear which the infrastructure has to adapt to, as explained in chapter 5. The whole infrastructure cannot be changed instantly - the new has to be connected to the old. The new version must be designed in a way that links the old and the new together, making them “interoperable” in one way or another. In this way the old - the installed base - heavily influences how the new can be designed. Infrastructures develop through extending and improving the installed base.

The focus on infrastructure as “installed base” implies that infrastructures are considered as always already existing; they are NEVER developed from scratch. When “designing” a “new” infrastructure, it will always be integrated into, and thereby extend, others, or it will replace one part of another infrastructure. This has been the case in the building of all transport infrastructures: every single road - even the first one, if it makes sense to speak of such a thing - has been built in this way; when air traffic infrastructures have been built, they have been tightly interwoven with road and railway networks - one needed these other infrastructures to travel between airports and the journeys’ end points. Or even more strongly: air traffic infrastructures can only be used for one part of a journey, and without infrastructures supporting the rest, isolated air traffic infrastructures would be useless.

The irreversibility of the installed base

Information infrastructures are large actor-networks including systems architectures, message definitions, individual data elements, standardisation bodies, existing implementations of the technology being included in a standard, users and user organisations, software vendors, text books and specifications. Programs of action are inscribed into every element of such networks. To reach agreement and succeed in the implementation of a standard, its whole actor-network must be aligned.

In the vocabulary of actor-network theory (presented in chapter 6), this insight corresponds to recognising that the huge actor-network of Internet - the immense installed base of routers, users’ experience and practice, backbones, hosts, software and specifications - is well-aligned and to a large extent irreversible. To change it, one must change it into another equally well-aligned actor-network. To do this, only one (or very few) components of the actor-network can be changed at a time. This component then has to be aligned with the rest of the actor-network before anything else can be changed. This gives rise to an alternation over time between stability and change for the various components of the information infrastructure (Hanseth, Monteiro and Hatling 1996).

Within the Internet community, the importance of the installed base and its heterogeneous, i.e. socio-technical, character are to some extent acknowledged:


“I would strongly urge the customer/user community to think about costs, training efforts, and operational impacts of the various proposals and PLEASE contribute those thoughts to the technical process.”

(Crocker 1992)

“Key to understanding the notion of transition and coexistence is the idea that any scheme has associated with it a cost-distribution. That is, some parts of the system are going to be affected more than other parts. Sometimes there will be a lot of changes; sometimes a few. Sometimes the changes will be spread out; sometimes they will be concentrated. In order to compare transition schemes, you “must” compare their respective cost-distributions and then balance that against their benefits.”

(Rose 1992b)

In the next chapter we will look at installed base issues involved in the design of the new version of IP, IPv6.

Information infrastructure as actor

A large information infrastructure is not just hard to change. It might also be a powerful actor influencing its own future life - its extension and size as well as its form.

Within the field of institutional economics, some scholars have studied standards as part of a more general phenomenon labelled “self-reinforcing mechanisms” (Arthur 1988, 1989, 1990) and “network externalities” (Katz and Shapiro 1986). We will here briefly review these scholars’ conception of standards and their self-reinforcing character as the installed base grows. We will look at the cause of this phenomenon as well as its effects.

Self-reinforcing mechanisms appear when the value of a particular product or technology for individual adopters increases as the number of adopters increases. The term “network externalities” denotes the fact that such a phenomenon appears when the value of a product or technology also depends on aspects external to the product or technology itself.

A standard which builds up an installed base ahead of its competitors becomes cumulatively more attractive, making the choice of standards “path dependent” and highly influenced by a small advantage gained in the early stages (Grindley 1995, p. 2). The development and diffusion of standards and “infrastructural” technologies are determined by

“the overriding importance of standards and the installed base compared to conventional strategies concentrating on programme quality and other promotional efforts.” (ibid., p. 7)

The basic mechanism is that a large installed base attracts complementary production and makes the standard cumulatively more attractive. A larger base with more complementary products also increases the credibility of the standard. Together these make a standard more attractive to new users. This brings in more adoptions, which further increases the size of the installed base, and so on (ibid., p. 27).

FIGURE 4. Standards reinforcement mechanism (Grindley 1995): a feedback cycle in which a larger installed base leads to more complements produced and greater credibility of the standard, which reinforces the value to users and brings further adoptions - enlarging the installed base again.

Self-reinforcing mechanisms are, according to Arthur (1990), outside the scope of traditional, neo-classical economics, which focuses on diminishing returns on investment. Typical examples of diminishing returns are found within resource-based economics: assuming that the sources (of, for instance, hydro power or oil) most easily available are used first, the costs increase as more is consumed (Arthur 1990).

In general, the parts of the economy that are knowledge-based are largely subject to increasing returns: they require large investments in research, incremental production is relatively cheap, increased production means more experience and greater understanding of how to produce additional units even more cheaply, and the benefits of using them increase (ibid.).



When network externalities are significant, so too are the benefits of having compatible products, and accordingly of establishing common standards (Katz and Shapiro 1986).

Arthur (1988) identifies four sources of self-reinforcing processes: large set-up or fixed costs; learning effects (improvement through experience); coordination effects (advantages in going along with others); and adaptive expectations. Katz and Shapiro (1985) present three possible sources of network externalities: a direct physical effect of the number of purchases on the quality of the product (for instance the telephone); indirect effects, for instance that the more users buy a particular computer, the more software will be available; and the dependence of post-purchase service on the experience and size of the service network.

The self-reinforcing effects of the installed base correspond to Thomas Hughes’ concept of momentum. This concept is one of the important results of Hughes’ study of electricity in Western societies in the period 1880-1930 (Hughes 1983), certainly the most important and influential study within the field of LTS studies.

Hughes describes momentum as very much a self-reinforcing process gaining force as the technical system grows “larger and more complex” (Hughes 1987, 108). Major changes which seriously interfere with the momentum are, according to Hughes, only conceivable in extraordinary instances: “Only a historic event of large proportions could deflect or break the momentum [of the example he refers to], the Great Depression being a case in point” (ibid., 108) or, in a different example, the “oil crises” (ibid., 112). As Hughes describes it, momentum is the result of a larger actor-network including the technology and its users, manufacturers, educational institutions, etc. In particular, the establishment of professions, educational institutions and programs is crucial in this respect. We will give a more comprehensive presentation of Hughes’ study later in this chapter as an example of an evolving infrastructure.

Effects

Arthur (1988) points to some effects of self-reinforcing mechanisms:

1. Path-dependence; i.e. past events will have large impacts on future development, and in principle irrelevant events may turn out to have tremendous effects.


2. Lock-in; i.e. when a technology has been adopted it will be impossible to develop competing technologies. “Once random economic events select a particular path, the choice become locked-in regardless of the advantages of alternatives” (Arthur 1990).

3. Possible inefficiency; i.e. the best solution will not necessarily win.

The most widely known example of this phenomenon is the QWERTY layout of typewriter and computer keyboards (David 1986). Other examples documented by economists include the VHS standard for VCRs, water-cooling systems in nuclear power plants, and FORTRAN. Technological standards in general tend to become locked in by positive feedback (Farrel and Saloner 1985, 1986, 1992).

In the cases of VCR standards and nuclear power plant cooling, Betamax and gas cooling respectively were considered technologically superior, but became losers in the “competition” because the alternatives, VHS and water cooling, got an early “advantage” that was reinforced through positive feedback.
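These dynamics can be made concrete with a toy simulation in the spirit of Arthur’s increasing-returns models. This is our sketch with illustrative parameters, not Arthur’s own model: adopters pick between two standards, the payoff grows with the installed base, and an early random advantage can lock in the intrinsically inferior technology.

# Toy model of path dependence and lock-in under increasing returns.
import random


def simulate(adopters: int = 10_000, seed: int = 1) -> dict:
    random.seed(seed)
    base = {"A": 0, "B": 0}            # installed base of each standard
    intrinsic = {"A": 1.0, "B": 1.2}   # B is the "better" technology
    network_weight = 0.01              # value added per existing adopter
    for _ in range(adopters):
        # Each adopter values intrinsic quality, network size, and has
        # an idiosyncratic taste drawn at random.
        value = {t: intrinsic[t] + network_weight * base[t]
                 + random.uniform(0, 1)
                 for t in base}
        base[max(value, key=value.get)] += 1
    return base


# Across many seeds, whichever standard happens to take an early lead
# tends to win, regardless of intrinsic quality - Arthur's "possible
# inefficiency" effect.
print(simulate())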

IIs and communication technologies are paradigm examples of phenomena where “network externalities” and positive feedback (increasing returns on adoption) are crucial, and where technologies accordingly easily become “locked in” and turn irreversible. All the factors mentioned above apply. The positive feedback from new adopters (users) is strong. The usefulness is not only dependent on the number of users; in the case of e-mail, for instance, the usefulness to a large extent is its number of users. The technology becomes hard to change, as successful changes need to be compatible with the installed base. As the number of users grows, reaching agreement about new features as well as coordinating transitions becomes increasingly difficult. Vendors develop products implementing a standard, and new technologies are built on top of it. As the installed base grows, institutions like standardization bodies are established, and the interests vested in the technology grow.

An actor-network becomes irreversible when it is practically impossible to change it into another aligned one. At the moment, Internet appears to be approaching a state of irreversibility. Consider the development of a new version of IP mentioned earlier. One reason for the difficulty of developing a new version of IP is the size of the installed base of IP protocols, which must be replaced while the network is running. Another major difficulty stems from the inter-connectivity of standards: a large number of other technical components depend on IP. An internal report assesses the situation more precisely: “Many current IETF standards are affected by [the next version of] IP. At least 27 of the 51 full Internet Standards must be revised (...) along with at least 6 of the 20 Draft Standards and at least 25 of the 130 Proposed Standards.” (RFC 1995, 38).


The irreversibility of an II has not only a technical basis. An II turns irreversible as it grows due to the number of, and relations between, the actors, organisations and institutions involved. In the case of Internet, this is perhaps most evident in relation to new, commercial services promoted by organisations with different interests and backgrounds. The transition to the new version of IP will require coordinated actions from all of these parties. There is a risk that “everybody” will await “the others”, making it hard to be an early adopter. As the number of users as well as the types of users grow, reaching agreement on changes becomes more difficult (Steinberg 1995).

Some examples

Installed base hostility: The OSI experience

For a long period there was a fight between OSI and Internet - sometimes called a religious war (Drake 1993). Now the war is obviously over and OSI is the loser. Einar Stefferud (1992, 1994) and Marshall Rose (1992) have claimed that OSI would be a failure due to its “installed base hostility.” They argued that OSI was a failure as the protocols did not pay any attention to existing networks. They were specified in a way causing great difficulties when trying to make them interwork with corresponding Internet services, i.e. link them to any existing installed bases. The religious war can just as well be interpreted as a fight between an installed base and improved design alternatives. While OSI protocols were being discussed, specified and pushed through the committees in ISO, Internet protocols were implemented and deployed. The installed base won, in spite of tremendous support for OSI from numerous governments and the CEC specifying and enforcing GOSIPs (Governmental OSI Profiles).

The Internet and Web

The Internet has been a rapidly growing installed base - and the larger the installed base, the more rapid the growth. The rapid diffusion of the World Wide Web is an example of how to successfully build new services on an existing installed base.

The growth of the Internet is a contrast to the failed OSI effort. However, there are other contrasting examples as well. Leigh Star and Karen Ruhleder (1994, 1996) document how a nicely tailored system supporting a world-wide research community “lost” to the much less sophisticated (i.e. less specialized, less tailored to users’ needs) gopher and later Web services. As we see this case, the specialized system lost due to its requirement for an installed base of new technological infrastructure. The lack of an installed base of knowledge and systems support infrastructure was also important.

Strategy

We will now look at some implications that the importance and role of the installed base have for choosing infrastructure design strategies.

Dilemmas

David (1987) points out three strategy dilemmas one usually faces when developing networking technologies, all caused by installed base issues:

1. Narrow policy window. There may be only brief and uncertain “windows in time” during which effective public policy interventions can be made at moderate resource costs.

2. Blind giants. Governmental agencies (or other institutions making decisions) are likely to have the greatest power to influence the future trajectories of network technologies just when the informational basis on which to make socially optimal choices among alternatives is most lacking. The actors in question, then, resemble “blind giants” - whose vision we would wish to improve before their power dissipates.

3. Angry orphans. Some groups of users will be left “orphaned”: they will have sunk investments in systems whose maintenance and further elaboration are going to be discontinued. Encouraging the development of gateway devices linking otherwise incompatible systems can help to minimize the static economic losses incurred by orphans.

One strategy David (ibid.) finds worth considering is that of “counter-action”, i.e. preventing the “policy window” from slamming shut before the policy makers are better able to perceive the shape of their relevant future options. This requires positive action to maintain leverage over the systems rivalry, preventing any of the presently available variants from becoming too deeply entrenched as a standard, and thus gathering more information about technological opportunities even at the cost of immediate losses in operational efficiency. In “the race for the installed base”, governments could subsidize only the second-place system.


Solutions - enabling flexibility

According to Arthur (1988), exit from an inferior lock-in in economics depends very much on the source of the self-reinforcing mechanism. It depends on the degree to which the advantages accrued by the inferior “equilibrium” are reversible or transferable to an alternative one. Arthur (1988) claims that when learning effects and specialized fixed costs are the source of reinforcement, advantages are not transferable to an alternative equilibrium. Where coordination effects are the source of lock-in, however, he says that advantages are often transferable. As an illustration he mentions that users of a particular technological standard may agree that an alternative would be superior, provided everybody switched. If the current standard is not embodied in specialized equipment and its advantage-in-use is mainly that of convention (for instance the use of colors in traffic lights), then a negotiated or mandated changeover to a superior collective choice can provide exit into the new equilibrium at negligible cost.

Maybe the most important remedy to help overcome the negative effects of positive feedback and network externalities, i.e. lock-in and inefficiency, is the construction of gateways and adapters (Katz and Shapiro 1985, David and Bunn 1988). Gateways may connect heterogeneous networks built independently or based on different versions of the same standards.

Based on an analysis of the circumstances and consequences of the development of the rotary converter, which permitted conversion between alternating and direct current (and vice versa), David and Bunn (1988) argue that “.. in addition to short-run resource saving effects, the evolution of a network technology can be strongly influenced by the availability of a gateway innovation.” In this case, the gateway technology enabled the interconnection of previously incompatible networks, and enabled the evolution from a dominating inferior technology to a superior alternative.

On a general level, two elements are necessary for developing flexible IIs. First, the standards and IIs themselves must be flexible and easy to adapt to new requirements. Second, strategies for changing the existing II into the new one must be developed together with the necessary gateway technologies linking the old and the new. These elements are often interdependent.

The basic principles for providing flexibility are modularization and encapsulation (Parnas 1972). Another important principle is leanness, meaning that any module should be as simple as possible, based on the simple fact that it is easier to change something small and simple than something large and complex. Formalization increases complexity; accordingly, less formalization means greater flexibility. A sketch of these principles follows below.
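The following minimal sketch illustrates modularization and encapsulation in this sense. It is our illustration, not drawn from any of the systems discussed: the interface and transport names are invented, and the point is only that the rest of the infrastructure depends on a small, lean interface rather than on any particular implementation.

# Modularization and encapsulation as flexibility mechanisms: other
# modules depend only on the Transport interface, so an implementation
# can be replaced without rippling change through the infrastructure.
from abc import ABC, abstractmethod


class Transport(ABC):
    """Lean interface: the only thing other modules may depend on."""

    @abstractmethod
    def deliver(self, address: str, payload: bytes) -> None: ...


class X400Transport(Transport):
    def deliver(self, address: str, payload: bytes) -> None:
        print(f"X.400 delivery to {address}: {len(payload)} bytes")


class SmtpTransport(Transport):
    def deliver(self, address: str, payload: bytes) -> None:
        print(f"SMTP delivery to {address}: {len(payload)} bytes")


def send_report(transport: Transport, address: str, report: str) -> None:
    # Encapsulation: callers cannot see, and so cannot come to depend
    # on, anything behind the interface.
    transport.deliver(address, report.encode())


send_report(X400Transport(), "gp@example.no", "HB;14.2;g/dL")
send_report(SmtpTransport(), "gp@example.no", "HB;14.2;g/dL")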


Techniques that may be used include designing a new version as an extension of the existing one, guaranteeing backward compatibility, and “dual stacks”, meaning that a user interested in communicating with users on different networks is connected to both. This is explored further in chapter 11.
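A minimal sketch of the backward-compatible extension technique, assuming a hypothetical semicolon-separated report format: version 2 only appends an optional field, so old messages still parse and readers of either version keep working.

# Backward-compatible extension: v2 appends an optional lab identifier.
def parse_report(line: str) -> dict:
    fields = line.split(";")
    report = {"patient": fields[0], "test": fields[1], "value": fields[2]}
    # v2 extension: an optional accredited-lab id appended at the end;
    # v1 messages simply lack it, so the installed base keeps working.
    report["lab_id"] = fields[3] if len(fields) > 3 else None
    return report


print(parse_report("01017012345;HB;14.2"))          # old installed base
print(parse_report("01017012345;HB;14.2;LAB-007"))  # extended version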

From design to cultivation

Accepting the general thrust of our argument - that the elements of an infrastructure inscribe behaviour, and that such elements always already exist - implies a radical rethinking of the very notion of design. Traditional design (implicitly) assumes a degree of freedom that simply does not exist (cf. chapter 8 on design dominated by universalism). This leads us to explore alternative notions of design.

From design to cultivation

When describing our efforts and strategies for developing technological systems, we usually characterize these efforts as design or engineering. Bo Dahlbom and Lars Erik Janlert (1996) use the notion of construction to denote a more general concept including design as well as engineering. They further use the notion of cultivation to characterize a fundamentally different approach to shaping technology. They characterize the two concepts in the following way:

[When we] “engage in cultivation, we interfere with, support and control, a natural process.” [When] “we are doing construction, [we are] selecting, putting together, and arranging, a number of objects to form a system.....

[Cultivation means that] ...we .. have to rely on a process in the material: the tomatoes themselves must grow, just as the wound itself must heal,..

Construction and cultivation give us two different versions of systems thinking. Construction is a radical belief in our power to, once and for all, shape the world in accordance with our rationally founded goals. Cultivation is a conservative belief in the power of natural systems to withstand our effort at design, either by disarming them or by ruining them by breakdown.”

(ibid. p. 6-7)

The concept of cultivation turns our focus to the limits of rational, human control. Considering technological systems as organisms with a life of their own implies that we focus on the role of existing technology itself as an actor in the development process. This perspective focuses on socio-technical networks where objects usually considered social or technological are linked together into networks. The “development organization” as well as the “product” being developed are considered unified socio-technical networks.

Improvisation and drifting

Claudio Ciborra proposes “improvisation” as a concept for understanding what is going on in organizations when they adopt information technology. He holds that this concept is much more grounded in individual and organizational processes than planned decision making (XXCiborra ICIS96). He describes improvisation as situated performance where thinking and action seem to occur simultaneously and on the spur of the moment. It is purposeful human behavior which seems to be ruled at the same time by chance, intuition, competence and outright design.

Wanda Orlikowski uses the same concept in her “improvisational model for managing change” (XXOrlikowski Sloan, ISR). In this model, organizational transformation is seen as an ongoing improvisation enacted by organizational actors trying to make sense of, and act coherently in, the world. The model rests on two major assumptions which differentiate it from traditional models of change: first, the changes associated with technology implementations constitute an ongoing process rather than an event with an end point after which the organization can expect to return to a reasonably steady state; and second, the various technological and organizational changes made during the ongoing process cannot, by definition, all be anticipated ahead of time.

Through a series of ongoing and situated accommodations, adaptations, and alterations (that draw on previous variations and mediate future ones), sufficient modifications may be enacted over time that fundamental changes are achieved. There is no deliberate orchestration of change here, no technological inevitability, no dramatic discontinuity, just recurrent and reciprocal variations of practice over time. Each shift in practice creates the conditions for further breakdowns, unanticipated outcomes, and innovations, which in turn are responded to with more variations. And such variations are ongoing; there is no beginning or end point in this change process.

Given these assumptions, the improvisational change model recognizes three different types of change: anticipated, emergent, and opportunity-based. Orlikowski distinguishes between anticipated changes - changes that are planned ahead of time and occur as intended - and emergent changes - changes that arise spontaneously out of local innovation and which are not originally anticipated or intended. An example of an anticipated change would be the implementation of electronic mail software which accomplishes its intended aim of facilitating increased and quicker communication among organizational members. An example of an emergent change would be the use of the electronic mail network as an informal grapevine disseminating rumors throughout an organization. This use of e-mail is typically not planned or anticipated when the network is implemented, but often emerges tacitly over time in particular organizational contexts.

Orlikowski further differentiates these two types of change from opportunity-based changes - changes that are not anticipated ahead of time but are introduced purposefully and intentionally during the change process in response to an unexpected opportunity, event, or breakdown. For example, as companies gain experience with the World Wide Web, they find opportunities to apply and leverage its capabilities in ways that were not anticipated or planned before the introduction of the Web. Both anticipated and opportunity-based changes involve deliberate action, in contrast to emergent changes, which arise spontaneously and usually tacitly out of people's practices with the technology over time (Orlikowski 1996).

These three types of change build on each other over time in an iterative fashion (see Figure 1). While there is no predefined sequence in which the different types of change occur, the deployment of new technology often entails an initial anticipated organizational change associated with the installation of the new hardware and software. Over time, however, use of the new technology will typically involve a series of opportunity-based, emergent, and further anticipated changes, the order of which cannot be determined in advance because the changes interact with each other in response to outcomes, events, and conditions arising through experimentation and use.

Similarly, an improvisational model for managing technological change in organizations is not a predefined program of change charted by management ahead of time. Rather, it recognizes that technological change is an iterative series of different changes, many unpredictable at the start, that evolve out of practical experience with the new technologies. Using such a model to manage change requires a set of processes and mechanisms to recognize the different types of change as they occur and to respond effectively to them. The illustrative case presented below suggests that where an organization is open to the capabilities offered by a new technological platform and willing to embrace an improvisational change model, innovative organizational changes can be achieved.

Emergent change also covers cases where a series of smaller changes adds up to a larger whole rather different from what one was striving for. Some researchers see organizational structures as well as computerized information systems as emergent rather than designed (Ngwenjama 1997). Defined in this way, the concept of emergent change comes close to what Claudio Ciborra (1996a, 1996b, 1997) calls
“drifting”. When studying groupware, he found that the technology tends to drift when put to use. By drifting he means a slight or significant shift of the role and function in concrete situations of usage that the technology is called to play, compared to the predefined and assigned objectives and requirements (irrespective of who plans or defines them: users, sponsors, specialists, vendors or consultants). The drifting phenomenon also captures the sequence of ad hoc adjustments. Drifting can be looked at as the outcome of two intertwined processes. One is given by the openness of the technology, its plasticity in response to the re-inventions carried out by the users and specialists, who gradually learn to discover and exploit features and potentials of groupware. On the other hand, there is the sheer unfolding of the actors’ “being-in-the-workflow” and the continuous stream of bricolage and improvisations that “color” the entire system lifecycle.

Drifting seems to lie outside the scope of control of the various actors; it consists of small and big surprises, discoveries and blockages, opportunistic turns and vicious circles.

Marc Berg (1997) describes drifting in terms of actor network theory. He sees drifting of actor-networks as a ubiquitous phenomenon. A network drifts by being unconsciously changed as the effect of a conscious change to another network whose elements are also parts of others.

Who is controlling whom? Technology or humans?

In most discussions about conceptions of and approaches to technological development and change, a key issue is to what extent and how humans can control this process. The concepts of design and construction implicitly assume that the technology is under complete human control. Improvisation also sees humans as being in control, although less so, as the design process cannot be planned. When Orlikowski sees improvisation as “situated change,” one might say that “situations” influence the design. However, she does not discuss (or describe) the “nature” of “situations” or whatever might determine what “situations” we will meet in a design or change activity. She notes, however, that “more research is needed to investigate how the nature of the technology used influences the change process and shapes the possibilities for ongoing organizational change” (Orlikowski xx, p. yy), implying that she assumes that the technology may be an influential actor.

The notion of “drifting” implies that there is no conscious human control over the change process at all. Ciborra does not say anything about whether there is any kind of “control” or whether what happens is completely random. However, he is using the notion of technologies being “out of control,” which is often used
as synonymous with “autonomous technology,” i.e. technological determinism (Winner 1977). Technological determinism is the opposite extreme of the engineers’ conception of design and engineering, assuming that it is not humans that design the technology, but rather the technology and its inner logic that determine, i.e. “design,” its own use and, in the end, the whole society - including the technology’s own future development. In this extreme position the technology is the only actor.

These two extreme positions are an example of the old dichotomy and discussions within the social sciences between agency and structure. The notion of cultivation is here seen as a middle position where technology is considered shaped by humans although the humans are not in complete control. The technology is also changed by “others,” including the technology itself.

FIGURE 5. Concepts for technological development: a spectrum running from design, via improvisation, cultivation and drifting, to determinism - that is, from complete human control (technology as material to be shaped) to no human control (technology as actor).

Cultivating the installed base

The middle ground

Acknowledging the importance of the installed base implies that traditional notions of design have to be rejected. However, denying humans any role at all is at least equally wrong. Cultivation as a middle position captures quite nicely the roles of both humans and technology. It is the concept that provides the best basis for developing strategies for infrastructure development. The installed base is a powerful actor. Its future cannot be consciously designed, but designers do have influence - they might cultivate it.

The installed base acts in two ways. It may be considered an actor involved in each single II development activity, but perhaps more importantly, it plays a crucial role as mediator and coordinator between the independent non-technological actors and development activities.


If humans strive for control, i.e. making our world appropriate for engineering tasks, strategies for cultivating IIs may be considered strategies for fighting against the power of the installed base.

Technological determinism is an extreme position where the installed base is the only actor having power. An actor-network perspective where II development is seen as installed base cultivation is a middle position between technological determinism and social reductionism.

Cultivation might be related to the discussion about the limits of and relationships between design and maintenance, and someone might be tempted to see cultivation as just another term for maintenance. This is not the case. First, maintenance is related to design in the way that what has come into being through design is maintained for a while, i.e. until being replaced by a newly designed system. Cultivation replaces design in the sense that there is nothing being designed which later on will be maintained. Technology comes about and is changed by being cultivated. However, it might still be useful to talk about maintenance as minor changes of minor parts of a larger system or infrastructure.

Transition strategy as cultivation

In terms of actor-network theory, a transition strategy for an information infrastructure corresponds to a situation where one well-aligned actor-network is modified into another well-aligned actor-network. Only one (or a few) of the nodes of the actor-network is modified at a time. At each step in a sequence of modifications, the actor-network is well-aligned. In the more conventional vocabulary of the product world, a transition strategy corresponds to “backwards compatibility” (Grindley 1995). Backwards compatibility denotes the case when a new version of a product — an application, a protocol, a module, a piece of hardware — functions also in conjunction with older versions of associated products. Intel’s microprocessor chips, the 80x86 processor family, are examples of backwards compatible products. An Intel 80486 processor may run any program that an 80386 processor may run, and an 80386 processor may run any program an 80286 may run. Microsoft’s Word application is another example of a backwards compatible product: newer versions of Word may read files produced by older versions of Word (but not the other way around; that would be forward compatibility).
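
To make the notion concrete, consider a minimal sketch (our illustration, not drawn from the sources; the record layout and field names are invented) of a backwards compatible reader in the style of the Word example: a version-2 reader that still accepts version-1 records, while rejecting records newer than itself.

```python
# A version-2 document reader that stays backwards compatible: it still
# accepts records written by the older version-1 software, much as newer
# versions of Word read files produced by older versions.
def read_record(record: dict) -> dict:
    version = record.get("version", 1)
    if version == 1:
        # Upgrade old records on the fly; version 1 had no "styles" field.
        return {"version": 2, "text": record["text"], "styles": []}
    if version == 2:
        return record
    # No forward compatibility: records from a newer version are rejected.
    raise ValueError(f"record version {version} is newer than this reader")

print(read_record({"text": "hello"}))  # a version-1 record, still readable
print(read_record({"version": 2, "text": "hi", "styles": ["bold"]}))
```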

To appreciate what changing an information infrastructure is all about, it is necessary to identify underlying assumptions, assumptions which go back to the essentially open character of an information infrastructure outlined earlier in chapter 5. It is illuminating to do so by contrasting it with the way changes are made in another infrastructure technology, namely the telephone system. Despite a number of
similarities between a telephone infrastructure and an information infrastructure, the issue of how changes are made underlines vital and easily forgotten differences.

During the early 90s the Norwegian Telecom implemented a dramatic change in their telephone infrastructure: all telephones were assigned a new 8-digit number instead of the old 6-digit one. The unique digit sequence of a telephone is a core element of any telephone infrastructure. It is the key to identifying the geographical and logical location of the phone. Both routing of traffic and billing rely on the unique number sequence. Hence changing from 6 to 8 digits was a major change. The way it was implemented, however, was simple. At a given time and date the old numbers ceased to function and the new ones came into effect. The crucial aspect of this example of a changing infrastructure technology is the strong assumption about the existence of a central authority (the Norwegian Telecom) with absolute powers. None were allowed to hang on to the old numbers longer than the others; absolutely everyone had to move in concert. This situation might prevail in the world of telecommunication, but it is fundamentally antagonistic to the kind of openness hardwired into the notion of an information infrastructure. Making changes in the style of telecommunication is simply not an option for an information infrastructure. There is simply no way to accomplish abrupt changes to the whole information infrastructure requiring any kind of overall coordination (for instance, so-called flag-days) because it is “too large for any kind of controlled rollout to be successful” (Hinden 1996, p. 63). It is accordingly vital to explore alternatives. Transition strategies are one of the most important of these alternatives. To develop a firmer understanding of exactly how large changes can be made, when they are appropriate and where and in which sequence they are to be implemented, is of vital concern when establishing a National Information Infrastructure (IITA 1995).

Strategies for scaling information infrastructures, which necessarily include changes more generally, are recognized as central to the NII initiative. In a report by the Information Infrastructure Technology and Application working group, the highest level NII technical committee, it is pointed out:

We don’t know how to approach scaling as a research question, other than to build upon experience with the Internet. However, attention to scaling as a research theme is essential and may help in further clarifying infrastructure needs and priorities (...). It is clear that limited deployment of prototype systems will not suffice (...) (IITA 1995, emphasis added)

Contributing to the problems of making changes to an information infrastructure is the fact that it is not a self-contained artefact. It is a huge, tightly interconnected yet geographically dispersed collection of both technical and non-technical elements. Because the different elements of an information infrastructure are so tightly
interconnected, it becomes increasingly difficult to make changes as it expands. The inertia of the installed base increases as the information infrastructure scales, as is the case with the Internet: “The fact that the Internet is doubling in size every 11 months means that the cost of transition (...) (in terms of equipment and manpower) is also increasing.” (IPDECIDE 1993). But changes, also significant ones, are called for.
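
The arithmetic behind that inertia is stark. A back-of-the-envelope calculation (ours, derived only from the doubling figure quoted above) of what doubling every 11 months implies for the growth of the installed base, and hence of transition costs:

```python
# If the Internet doubles in size every 11 months, the installed base --
# and with it the cost of any transition -- grows by this factor per year:
annual_factor = 2 ** (12 / 11)
print(f"{annual_factor:.2f}x per year")              # about 2.13x
print(f"{annual_factor ** 5:.0f}x over five years")  # about 44x
```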

The scaling of an information infrastructure is accordingly caught in a dilemma. It is a process where the pressure for making changes which ensure the scaling has to be pragmatically negotiated against the conservative forces of the economic, technical and organisational investments in the existing information infrastructure, the installed base (Grindley 1995; Hanseth, Monteiro and Hatling 1996; Neumann and Star 1996; Star and Ruhleder 1996). A feasible way to deal with this is for the information infrastructure to evolve in a small-step, near-continuous fashion respecting the inertia of the installed base. Between each of these evolutionary steps there has to be a transition strategy, a plan which outlines how to evolve from one stage to another.

In the next chapter, we empirically describe an effort to design according to the principles of cultivating the installed base, namely the revision of the IP protocol in the Internet. That chapter fleshes out some of the programmatically stated principles outlined in this chapter.


CHAPTER 10 Changing infrastructures: The case of IPv6

Introduction

As expectations and patterns of use of an information infrastructure tend to evolve during its life span (see chapter 5), changes are called for. Both minor and major changes are required. This creates a dilemma. The pressure for making changes has to be pragmatically negotiated against the conservative forces of the economic, technical and organisational investments in the existing information infrastructure, the installed base (see chapter 9). A feasible way to deal with this is for the information infrastructure to evolve in a small-step, near-continuous fashion respecting the inertia of the installed base (Grindley 1995; Hanseth, Monteiro and Hatling 1996; Neumann and Star 1996; Star and Ruhleder 1996). Between each of these evolutionary steps there has to be a transition strategy, a plan which outlines how to evolve from one stage to another. The controversies over a transition strategy are negotiations about how big changes can — or have to — be made, where to make them, and when and in which sequence to deploy them.

A transition strategy is a conservative strategy. Rather than trying anything adventurous, it plays it safe. Only modest changes are possible within a transition strategy. A transition strategy is an instance of a cultivation strategy for information infrastructures. In the next chapter we explore other, non-cultivation based approaches to establishing information infrastructures. These approaches facilitate
more radical changes to the infrastructure than cultivation-based ones like the transition strategy described in this chapter.

The IP protocol

The revision of the internet protocol (IP) in the Internet was a direct response to the problems of scaling the Internet: “Growth is the basic issue that created the need for a next-generation IP” (Hinden 1996, p. 62). The IP protocol forms the core of the Internet in the sense that most services, including the WorldWideWeb, e-mail, ftp, telnet, archie and WAIS, build upon and presuppose IP.

It is fair to say that it has never been more difficult to make changes to the Internet than in the revision of IP. This is because the dilemma outlined above has never been more pressing. The explosive growth of the Internet is generating a tremendous pressure for making changes, changes which are so fundamental that they need to be made at the core, that is, in IP. At the same time, these changes are likely to have repercussions on an Internet which has never been as huge, and has never exhibited a stronger inertia of the installed base. Revising IP is the most difficult and involved change ever made to the Internet during its near 30 years of existence. Accordingly, it provides a critical case when studying the problems of changing large information infrastructures.

Our intention is to spell out some of the pragmatics played out within a transition strategy. Adopting a transition strategy is not straightforward. There are a number of socio-technical negotiations that need to be settled. To learn how to adopt transition strategies accordingly extends well beyond merely stating a conservative attitude. It is necessary to inquire closer into what this amounts to. Changes are always relative. When close, pressing your nose against them, all changes seem big. What, or indeed who, is to tell “small” (and safe) changes from “big” (and daring) ones?


Late 80s - July 1992

Framing the problem

There was during the late 80s a growing concern that the success of the Internet, its accelerating adoption, diffusion and development, was generating a problem (RFC 1995, p. 4). No one had ever anticipated the growth rate of the Internet. The design of the Internet was not capable of handling this kind of growth for very long.

The Internet is designed so that every node (for instance, a server, PC, printer or router) has a unique address. The core of the problem was considered to be that IPv4 has a 32 bit, fixed length address. Even though 32 bits might theoretically produce 2^32 different identifiers, which is a very significant number, the actual number of available identifiers is dramatically lower. This is because the address space is hierarchically structured: users, organisations or geographical regions wanting to hook onto the Internet are assigned a set of unique identifiers (a subnetwork) of predetermined size. There are only three available sizes to choose from, so-called class A, B or C networks. The problem, then, is that class B networks are too popular. For a large group of users, class C is too small. Even though a few times the class C size would suffice, they are assigned the next size up, class B, which is 256 times larger than class C.
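
A minimal sketch (ours, in plain Python; the class sizes follow the scheme described above, while the example site of 2,000 hosts is invented) of the mismatch built into the classful layout:

```python
# Classful IPv4 layout: the class fixes the number of host bits; host
# counts subtract 2 for the reserved network and broadcast addresses.
CLASS_HOST_BITS = {"A": 24, "B": 16, "C": 8}

def hosts(cls: str) -> int:
    return 2 ** CLASS_HOST_BITS[cls] - 2

for cls in "ABC":
    print(f"class {cls}: {hosts(cls):>10,} hosts per network")

# A hypothetical site with 2,000 hosts: class C (254 hosts) is too small,
# so it is assigned a class B and leaves most of the block unused.
site = 2_000
print(f"unused share of a class B: {1 - site / hosts('B'):.0%}")  # 97%
```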

In this way, the problem of fixed length IPv4 addresses gradually got reframed as the problem of exhausting class B networks. At the August 1990 IETF meeting it was projected that the class B space would be exhausted by 1994, that is, fairly soon (ibid., p. 4). This scenario produced a profound sense of urgency. Something had to be done quickly. The easy solution of simply assigning several class C networks to users requiring somewhat more than class C size but much less than class B was immediately recognised to cause another, equally troublesome, problem. As the backbone routers in the Internet, the nodes which decide which node to forward traffic to next, need to keep tables of the subnets, this explosion of the number of class C networks would dramatically increase the size of the routing tables, tables which were already growing disturbingly fast (ibid.). Even without this explosion of class C networks, the size of routing tables was causing severe problems, as they grew 50% faster than hardware advances in memory technology.

During the early 1990s, there was a growing awareness of the problems associated with the continued growth of the Internet. It was also recognised that this was not an isolated problem but rather involved issues including assignment policies for networks, routing algorithms and addressing schemes. There was accordingly a fairly clear conception that there was a problem complex, but with a poor sense of how the different problems related to each other, not to mention their
relative importance or urgency. In response, the IETF in November 1991 formed a working group called “Routing and addressing (ROAD)” to inquire closer into these matters.

Appropriating the problem

The ROAD group had by November 1992 identified two of the problems (class B exhaustion, routing table explosion) as the most pressing and IP address exhaustion as less urgent:

Therefore, we will consider interim measures to deal with Class B address exhaustion and routing table explosion (together), and to deal with IP address exhaustion (separately).

(RFC 1992, p. 10)

The two most pressing problems required quick action. But the ROAD group recognised that for swift action to be feasible, changes had to be limited, as the total installed base cannot change quickly. This exemplifies a, if not the, core dilemma when extending infrastructure technologies. There is pressure for changes — some immediate, others more long-term, some well understood, others less so — which need to be pragmatically balanced against the conservative influence of the inertia of the installed base. This dilemma is intrinsic to the development of infrastructure technology and is accordingly impossible to resolve once and for all. On the one hand, one wants to explore a number of different approaches to make sure the potential problems are encountered, but on the other hand one needs at some stage to settle for a solution in order to make further progress. It makes more sense to study specific instances of the dilemma and see how it is pragmatically negotiated in each case. A necessary prerequisite for this kind of judgement is a deep appreciation and understanding of exactly how the inertia of the installed base operates.

In the discussions around IPng, the Internet community exhibited a rich understanding of the inertia of the installed base. It was clearly stated that the installed base was not only technical but included “systems, software, training, etc.” (Crocker 1992) and that:

The large and growing installed base of IP systems comprises people, as well as software and machines. The proposal should describe changes in understanding and procedures that are used by the people involved in internetworking. This should include new and/or changes in concepts, terminology, and organization.

(RFC 1992, p. 19)


Furthermore, the need to order the required changes in a sequence was repeatedly stated. To be realistic, only small changes can be employed quickly. More substantial ones need to be sugared through a gradual transition.

The [currently unknown] long-term solution will require replacement and/or extension of the Internet layer. This will be a significant trauma for vendors, operators, and for users. Therefore, it is particularly important that we either minimize the trauma involved in deploying the short- and mid-term solutions, or we need to assure that the short- and mid-term solutions will provide a smooth transition path for the long-term solutions.

(RFC 1992, p. 11)

So much for the problem in general. How does this unfold in specific instances? Is it always clear-cut what a “small” as opposed to a “large” change is, or what a “short-term” rather than a “mid-” or “long-term” solution is? The controversy over CIDR and C# illustrates the problem.

CIDR vs. C#

Instead of rigid network sizes (such as class A, B and C), the ROAD working group proposed employing CIDR (“Class-less Inter-Domain Routing”). CIDR supports variable-sized networks (Eidnes 1994). It was argued that it solved many of the problems and that the disruptions to the installed base were known:

CIDR solves the routing table explosion problem (for the current IP addressing scheme), makes the Class B exhaustion problem less important, and buys time for the crucial address exhaustion problem.

(...) CIDR will require policy changes, protocol specification changes, implementation, and deployment of new router software, but it does not call for changes to host software.

(RFC 1992, p. 12)

At this stage, the CIDR solution to the most pressing problems was not well known, as Fuller’s (1992) question to the big-internet mailing list illustrates: “but what is ‘CIDR´?”. Nor was the support for it unanimous (Chiappa 1992).

Furthermore, alternatives to CIDR existed that had several proponents. One was C#, which supported a different kind of variable-sized networks. The thrust of the argument for C#, perfectly in line with fidelity to the installed base, was that it required fewer changes:


I feel strongly that we should be doing C# right now. It’s not new, and it’s not great, but its very easy - there’s nothing involved that takes any research, any developments, or any agreements not made already - just say “go” and the developers can start getting this into the production systems, and out into the field. I don’t think that CIDR can be done quite that quickly.

(Elz 1992)

The discussions surrounding the different short-term solutions for the IP related problems show broad consensus on paying respect to the installed base. The CIDR vs. C# debate amounts to a judgement about exactly how much change to the installed base is feasible within a certain time-frame. This judgement varied, producing disagreement and personal frustration. At the same time, the closing down of the controversy and the decision on CIDR illustrate the widespread belief that the need to move on overrides “smaller” disagreements:

I do feel strongly that it is far more important that we decide on one, and *DO IT*, than continue to debate the merits for an extended period. Lead-times are long, even for the simplest fix, and needs are becoming pressing. So, I want to see us *quickly* decide (agreement is probably too much to ask for :-) on *one* of the three options and *get on with it*!

(...) I will say that I am extremely, deeply, personally, upset with the process that encouraged the creation of the C# effort, then stalled it for months while the Road group educated themselves, leaving the C# workers in the dark, etc., etc.

(Chiappa 1992)

The immediate steps, including the deployment of CIDR, were to buy some badly needed time to address the big problem of IP address exhaustion. How to solve that problem was a lot less clear, and the consequences were expected to be a lot bigger and cause “significant trauma for vendors, operators, and for users” (RFC 1992, p. 11).

The big heat

At this stage in late 1992, four solutions to the problem had already been proposed. One solution, called CLNP, was acknowledged to have a certain amount of support but was not accepted (RFC 1992, p. 13). Unable to vouch for any one specific solution, the IESG only outlined a process of exploration which, hopefully, would lead to a solution. Central to this decision was a judgement about exactly how urgent it was to find a solution. As will become clear further below, this was a highly controversial issue. The IESG position was that there still was some time:


[T]he IESG felt that if a decision had to be made *immediately*, then “Simple CLNP” might be their choice. However, they would feel much more comfortable if more detailed information was part of the decision.

The IESG felt there needed to be an open and thorough evaluation of any proposed new routing and addressing architecture. The Internet community must have a thorough understanding of the impact of changing from the current IP architecture to a new one. The community needs to be confident that we all understand which approach has the most benefits for long-term internet growth and evolution, and the least impact on the current Internet.

(RFC 1992, p. 14)

In parallel with the work of the ROAD group, and apparently poorly aligned with it, the IAB proposed its own plan for the next generation IP (IAB 1992). It was dubbed version 7, written IPv7. This plan of July 1992 opposed the recommendations of the ROAD group and the IESG regarding the long-term problem of exhausting the IPv4 address space. It produced an unprecedentedly heated debate during the summer of 1992. The debate focused both on the contents of the IAB’s solution and on the decision process producing the plan.

The crucial element of the IAB plan for IPv7 was the endorsement of one of the four available solutions, namely CLNP. The thrust of the argument appealed to the ideals of Internet design: CLNP existed and people had some experience with it, so why not build upon it? Again, the controversy is not about abstract principles — they are unanimously accepted — but about how to apply the principles to a difficult situation. Hence, the IAB (1992, p. 14) argues that:

Delaying by a few more months in order to gather more information would be very unlikely to help us make a decision, and would encourage people to spend their time crafting arguments for why CLNP is or is not a better solution than some alternative, rather than working on the detailed specification of how CLNP can be used as the basis for IPv7 (...).

The IAB plan for IPv7 thus makes a different judgement about the time available for the Internet community to search for alternatives than the IESG IPng plan (RFC 1992).

The decisive measures taken by the IAB, settling for a solution rather than continuing to quarrel, were praised by a number of people (Braun 1992; Rekhter and Knopper 1992), particularly those close to the commercial interests of the Internet. This support for swift action rather than smooth talk was mixed with a discontent over letting the fate of the Internet be left to designers with little or no interest in or insight into
“reality”. A particularly crisp formulation of this position was submitted to the big-internet mailing list shortly after the IAB’s decision (Rekhter and Knopper 1992):

We would like to express our strong support for the decision made by the IAB with respect to adopting CLNP as the basis for V7 of the Internet Protocol.

It is high time to acknowledge that the Internet involves significant investment from the computer industry (both within the US and abroad), and provides production services to an extremely large and diverse population of users. Such an environment dictates that decisions about critical aspects of the Internet should lean towards conservatism, and should clearly steer away from proposals whose success is predicated on some future research.

While other than CLNP proposals may on the surface sound tempting, the Internet community should not close its eyes to plain reality -- namely that at the present moment these proposals are nothing more than just proposals; with no implementations, no experience, and in a few cases strong dependencies on future research and funding. Resting the Internet future on such a foundation creates an unjustifiable risk for the whole Internet community.

The decision made by the IAB clearly demonstrated that the IAB was able to go beyond parochial arguments (TCP/IP vs. CLNP), and make its judgements based on practical and pragmatic considerations.

Yakov Rekhter (IBM Corporation)

Mark Knopper (Merit Network)

One of the founding fathers of the Internet, Vint Cerf (1992), agreed initially with the IAB that in this case one should organise the efforts rather than fragment them:

The CLNP specification is proposed as the starting point for the IPv7 both to lend concreteness to the ensuing discussion (I hope this does NOT result in concrete brickbats being hurled through MIME mail....!!) and to take advantage of whatever has already been learned by use of this particular packet format.

But the majority of the Internet community was appalled. In the heated debate on the big-internet mailing list, a number of people spoke about “shocked disbelief”, “a disastrous idea”, “shocked”, “dismayed”, “strongly disagree” and “irresponsible”. The general feeling was clear. The frustration with the decision was obviously very much
influenced by the oblique way the IAB had reached its decision, thus breaching deep-seated concerns for participatory, quasi-democratic decision processes in the Internet.

Bracketing the frustration about the decision process itself, the controversies circled around different views and interpretations of praised design principles.1 In other words, even though there was near full consensus among the Internet community regarding concerns about continuity, installed base, transition etc. (see above), their application to specific contexts is regularly contested. The debate over the IAB’s IPv7 illustrates this in a striking way.

1. Alvestrand (1996) suggests that had it not been for the clumsy way the IAB announced its decision, many more would probably have gone along with the CLNP solution.

Abstract design principles meet the real world

The main reason, the IAB argued, why it favoured CLNP was that it was necessary for the Internet to find a solution very soon (IAB 1992, p. 14). CLNP is a protocol which “is already specified, and several implementations exist” so it “will avoid design of a new protocol from scratch, a process that would consume valuable time and delay testing and deployment.” (ibid., p. 10).

The concern for practical experience is deep, and the CLNP solution of the IAB appealed to this. Furthermore, it paves the road for interoperability, another key principle of the Internet. Interoperability is recognised to be the end-result of a process of stabilisation:

I think that relying on highly independent and distributed development and support groups (i.e., a competitive product environment) means that we need a production, multi-vendor environment operating for awhile, before interoperability can be highly stable. It simply takes time for the engineering, operations and support infrastructure to develop a common understanding of a technology.

(Crocker 1992)

While acknowledging this design principle, the IAB (1992b) in its Kobe declaration of June 1992 explained its IPv7 decision and argued that for IP an exception had to be made:

[W]e believe that the normal IETF process of “let a thousand (proposals) bloom”, in which the “right choice” emerges gradually and naturally from a dialectic of deployment and experimentation, would in this case expose
the community to too great a risk that the Internet will drown in its own explosive success before the process had run its course.

The principal difference was the pragmatic judgement of the amount of time and resources available to work out a revised IP protocol. The IESG’s judgement was a head-on disagreement with the IAB’s. In addition, more indirect strategies for challenging the IAB were employed. One important line of argument aimed at questioning the experience with CLNP: did it really represent a sufficiently rich source of experience?

There does exist some pieces of an CLNP infrastructure, but not only is it much smaller than the IP infrastructure (by several orders of magnitude), but important pieces of that infrastructure are not deployed. For example the CLNP routing protocols IS-IS and IDRP are not widely deployed. ISIS (Intra-Domain routing protocol) is starting to become available from vendors, but IDRP (the ISO inter-domain routing protocol) is just coming out of ANSI. As far as I know there aren’t any implementations yet.

(Tsuchiya 1992)

And, more specifically, whether the amount and types of experience were enough to ensure interoperability:

While there certainly are some implementations and some people using [CLNP], I have no feel for the scale of the usage or -- more importantly -- the amount of multi-vendor interoperability that is part of production-level usage. Since we have recently been hearing repeated reference to the reliance upon and the benefits of CLNP’s installed base, I’d like to hear much more concrete information about the nature of the system-level shakeout that it has _already_ received. Discussion about deployment history, network configuration and operation experience, and assorted user-level items would also seem appropriate to flesh out the assertion that CLNP has a stable installed base upon which the Internet can rely.

(Crocker 1992).

Interoperability resulting from experience in stable environments presupposes a variety of vendors. CLNP was associated with one specific vendor, DEC, as succinctly coined by Crowcroft (1992): “IPv7 = DECNET Phase 5?” (DECNET is DEC’s proprietary communication protocols). Hence, the substance of the experience with CLNP was undermined, as Crocker (1992) illustrates:

So, when we start looking at making changes to the Internet, I hope we constantly ask about the _real_ experience that is already widely available and the _real_ effort it will take to make each and every piece of every
change we require. (...) [R]eferences to the stability of CLNP leave me somewhat confused.

Gaining experience from keeping certain parts stable is a design principle (see above). But some started challenging the very notion of stability. They started questioning exactly what it took for some part to be considered “stable”. An important and relevant instance of this dispute was IPv4. Seemingly, IPv4 had been stable for a number of years, as the protocol was passed as an Internet Standard in 1981 without subsequent changes. But even if the isolated protocol itself has been unchanged for 15 years, have there not been a number of changes in associated and tightly coupled elements? Is it, then, reasonable to maintain that IPv4 has been stable?

How long do we think IP has been stable? It turns out that one can give honestly different answers. The base spec hasn’t changed in a very long time. On the other hand, people got different implementations of some of the options and it was not until relatively recently that things stabilized. (TCP Urgent Pointer handling was another prize. I think we got stable, interoperable implementations universally somewhere around 1988 or 89.)

(Crocker 1992)

I still don’t see how you can say things have been stable that long. There are still algorithms and systems that don’t do variable length subnets. When were variable length subnets finally decided on? Are they in the previous router requirements? (...). So things are STILL unstable.

(Tsuchiya 1992)

This is an important argument. It will also be addressed later. In effect, it states that the IP protocol cannot be considered an isolated artefact. It is but one element of a tightly intertwined collection of artefacts. It is this collection of artefacts — this infrastructure — which is to be changed. A shift of focus from the artefact to the infrastructure has far-reaching repercussions on what design is all about.

A highly contested issue was exactly which problems CLNP allegedly solved and whether these were in fact the right ones. A well-known figure in the Internet (and OSI) community, Marshall Rose, was among those voicing concern that it “is less clear that IPv7 will be able to achieve route-aggregation without significant administrative overhead and/or total deployment.” (Rose 1992a).

After the storm of protests against the IAB, combining objections against CLNP with objections against the IAB’s decision process, one of the Internet’s grand old men, Vint Cerf, reversed the IAB decision at the IETF meeting in July 1992:


Vint Cerf Monday morning basically retracted the IAB position. They are now supporting the IESG position, and he said that the IAB has learned not to try and enforce stuff from above. (...) Apparently Vint did a strip tease until he took off his shirt to reveal an “IP over everything” T-shirt underneath.

(Medin 1992)

The overall result of the hot summer of 1992 was that a plan to explore and evaluate proposals was worked out (RFC 1992). By this time it was clear that “forcing premature closure of a healthy debate, in the name of ‘getting things done’, is *exactly* the mistake the IAB made.” (Chiappa 1992).

July 1992 - July 1994

Let the thousand blossoms bloom, or: negotiating the available time

The situation by July 1992 was this. The IESG recommendation (RFC 1992) of June 1992 calling for proposals drowned in the subsequent controversy over the IAB’s IPv7 plan. As the dramatic July 1992 IETF meeting led by Vint Cerf decided to reject the IAB plan, the IESG plan (RFC 1992) was accepted, and so a call for proposals for IPng was made at the meeting itself.

The problem now was to organise the effort. Central to this was, again, the issue of time: how urgent were the changes, how many different approaches should be pursued, and at which stage should one move towards a closing?

The plan by the IESG formulated in June 1992 and revised a month later at the IETF meeting was shaped by a definite sense of urgency. But it was far from panic. The IESG declined to accept the problem as one merely of timing. So even though “[a]t first the question seemed to be one of timing” (RFC 1992, p. 14), the IESG was calm enough to hold that “additional information and criteria were needed to choose between approaches” (ibid., p. 14). Still, the suggested timetables and milestones clearly mirror a sense of urgency. The plan outlines phases of exploring alternatives, elaborating requirements for IPng and a pluralistic decision process — all to be completed within 5 months, by December 1992 (ibid., p. 15). As it turned out, this timetable underestimated the effort by a factor of more than four. It eventually took more than two years to reach the milestone the IESG had originally scheduled for late 1992.


The IESG feared fragmenting the effort too much by spending an excessive amount of time exploring many different proposals. This argument, as illustrated above, was what led Vint Cerf to initially go along with the IAB IPv7 plan which focused on CLNP. At this stage in July 1992, four proposals existed (called “CNAT”, “IP Encaps”, “Nimrod” and “Simple CLNP”, see (RFC 1995, p. 11)). This was, according to the IESG, more than sufficient as “in fact, our biggest problem is having too many possible solutions rather than too few” (RFC 1992, p. 2).

Following the call for proposals in July, three additional proposals were submitted during the autumn of 1992, namely “The P Internet Protocol (PIP)”, “The simple Internet protocol (SIP)” and “TP/IX” (RFC 1995, p. 11). So by the time the IESG had planned to close down on a single solution, the Internet community was facing a wider variety of proposals than ever. Seven proposed solutions existed by December 1992.

Preparing selection criteria

In parallel with, and fuelled by, the submission of proposals, there were efforts and discussions about the criteria for selecting proposals. As it was evident that there would be several to choose from, there was a natural need to identify a set of criteria which, ideally, would function as a vehicle for making a reasonable and open decision.

The process of working out these criteria evolved in conjunction with, rather than prior to, the elaboration of the solutions themselves. From the early sketch in 1992, the set of criteria did not stabilise into its final form as an RFC until the IPng decision was already made in July 1994 (RFC 1994c). It accordingly makes better sense to view the process of defining a set of selection criteria as an expression of the gradual understanding and articulation of the challenges of an evolving infrastructure technology like the Internet.

Neither working on the proposals themselves nor settling on selection criteria was straightforward. The efforts spanned more than two years and involved a significant number of people. The work and discussions took place in a variety of forms and arenas including IETF meetings and BOFs, several e-mail lists, working groups and teleconferencing. In tandem with the escalating debate and discussion, the institutional organisation of the efforts was changed. This underscores an important but neglected aspect of developing infrastructure technology, namely that there has to be significant flexibility in the institutional framework, not only (the more well-known challenge of) flexibility in the technology. It would carry us well beyond the scope of this chapter to pursue this issue in any detail, but let us indicate a few aspects. The Internet establishes and dismantles working groups dynamically. To
establish a working group, the group only has to have its charter mandated by the IETF. In relation to IPng, several working groups were established (including ALE, ROAD, SIPP, TUBA, TACIT and NGTRANS, see ftp://Hsdndev.harvard.edu/pub/ipng/archive/). As the explorative process unfolded during 1993, there was a sense of an escalating rather than diminishing degree of diversity:

The [IPDECIDE] BOF [about criteria at the July 1993 IETF] was held in a productive atmosphere, but did not achieve what could be called a clear consensus among the assembled attendees. In fact, despite its generally productive spirit, it did more to highlight the lack of a firm direction than to create it.

(RFC 1994b, p. 2)

In response to this situation, Gross, chair of the IESG, called for the establishment of an IPng “area”, an ad-hoc constellation of the collection of relevant working groups with a directorate (the leaders of which he suggested himself). At a critical time of escalating diversity, the IESG thus institutionalised a concerting of efforts. The changes in the institutional framework for the design of the Internet are elaborated further below.

Returning to the heart of the matter, the contents of the solutions and the criteria, there was much variation. The rich and varied set of criteria mirrors the fact that many participants in the Internet community felt that they were at a critical point in time, that important and consequential decisions had to be made in response to a rapidly changing outside world. Hence, the natural first aim of formulating a tight and orderly set of criteria was not possible:

This set of criteria originally began as an ordered list, with the goal of ranking the importance of various criteria. Eventually, (...) each criterion was presented without weighting (...)

(RFC 1994c, p. 2)

The goal was to provide a yardstick against which the various proposals could be objectively measured to point up their relative strengths and weaknesses. Needless to say, this goal was far too ambitious to actually be achievable (...)

(SELECT 1992)

To get a feeling for the kinds of considerations, types of arguments and level of reflection about the problem, a small selection of issues is elaborated below, issues which relate to this chapter’s core question of how to make changes to infrastructure technology in order to scale.


Market-orienting Internet

One issue concerned the role and extent to which market forces, big organisations and user groups should be involved. Of course, none objected to their legitimate role. But exactly how influential these concerns should be was debated. Partly, this issue had to do with the fact that historically the Internet has been dominated by individuals with a primary interest in design. There has until fairly recently not been much attention to the commercial potential of the Internet within the community itself. This is clearly changing now (Hinden 1996). The economic and commercial repercussions of the Internet were debated as, for instance, the IPDECIDE BOF at the July 1993 IETF confirmed that “IETF decisions now have an enormous potential economic impact on suppliers of equipment and services.” (IPDECIDE 1993). There was widespread agreement that the (near) future would witness a number of influential actors, both in terms of new markets as well as participants in the future development of the Internet:

Remember, we are at the threshold of a market driven environment. (...) Large scale phone companies, international PTTs and such, for example, as they discover that there is enough money in data networking worth their attention. A major point here is that the combination of the IETF and the IAB really has to deliver here, in order to survive.

(Braun 1992)

Market forces were recognised to play an important, complementary role:

[The] potential time frame of transition, coexistence and testing processes will be greatly influenced through the interplay of market forces within the Internet, and that any IPng transition plan should recognize these motivations (...)

(AREA 1994)

Still, there was broad consensus that the Internet community should take the lead. At one of the earliest broad, open hearings regarding selection criteria, the IPDECIDE BOF at the July 1993 IETF, it was forcefully stated that “‘letting the market decide’ (whatever that may mean) was criticised on several grounds [including the fact that the] decision was too complicated for a rational market-led solution.” (IPDECIDE 1993).

Nevertheless, the increasing tension between the traditional Internet community of designers and commercial interests surfaced. Several pointed out that the Internet designers were not in close enough contact with the “real” world. The “Internet community should not close its eyes to plain reality” (Rekhter and Knopper 1992).


This tension between users, broadly conceived, and designers did not die out. It was repeatedly voiced:

Concerns were expressed by several service providers that the developers had little appreciation of the real-world networking complexities that transition would force people to cope with.

(IPDECIDE 1993)

More bluntly, I find it rather peculiar to be an end user saying: we end users desperately need [a certain feature] and then sitting back and hearing non-end-users saying “No you don’t”.

(Fleichman 1993)

Stick or carrot?

Still, the core problem with IPng concerned how large changes could (or ought to) be made, and where, how and when to make them — in other words, the transition strategy broadly conceived.

On the one hand, there were good reasons for making substantial changes to IPv4. A number of new services and patterns of use were expected, including: real-time, multimedia, Asynchronous Transfer Mode, routing policy and mobile computing. On the other hand, there was the pressure for playing it reasonably safe by focusing on only what was absolutely required, namely solving the address space and routing problems. This was recognised as a dilemma:

There was no consensus about how to resolve this dilemma, since both smooth transition and [new services like for instance] multimedia support are musts.

(IPDECIDE 1993)

It was pointed out above that balancing the pressure for changes against the need to protect the installed base is an intrinsic dilemma of infrastructure technology. In the case of IPng, this was amplified by the fact that the core requirements for IPng, namely solving the routing and address space problems, were invisible to most users. They were taken for granted. Hence, there were few incentives for users to change. Why would anyone bother to change to something with little perceived added value?

In the final version of the selection criteria, addressing this dilemma is used to guide all other requirements:


[W]e have had two guiding principles. First, IPng must offer an internetwork service akin to that of IPv4, but improved to handle the well-known and widely-understood problems of scaling the Internet architecture to more end-points and an ever increasing range of bandwidths. Second, it must be desirable for users and network managers to upgrade their equipment to support IPng. At a minimum, this second point implies that there must be a straightforward way to transition systems from IPv4 to IPng. But it also strongly suggests that IPng should offer features that IPv4 does not; new features provide a motivation to deploy IPng more quickly.

(RFC 1994c, pp. 3-4)

It was argued that the incentives should be easily recognisable for important user groups. Hence, it was pointed out that network operators were so vital that they should be offered tempting features such as controlling "load-shedding and balancing, switching to backup routers" (NGREQS 1994). Similarly, the deep-seated aversion to Application Platform Interfaces, that is, tailor-made interfaces for specific platforms, was questioned. Despite the fact that "the IETF does not 'do' [Application Platform Interfaces]" (RFC 1995, p. 39), the IESG finally recommended that an exception be made in the case of IPng, because it met the pressing need for tangible incentives for a transition to IPng (ibid., p. 5).

Internet is an infrastructure, not an artifact

A large number of requirements were suggested and debated, including topological flexibility, mobile communication, security, architectural simplicity, unique identifiers, risk assessment, network management, variable-length addresses and performance (RFC 1994c). Besides addressing perceived and anticipated needs, these requirements might have repercussions on the whole infrastructure, not only IPng.

It was repeatedly pointed out that IPng was not only about revising one, self-contained element of the Internet. It was about changing a core element of an infrastructure with tight and oblique coupling to a host of other elements in the infrastructure:

Matt Mathis pointed out that different proposals may differ in how the pain of deployment is allocated among the levels of the networking food chain (backbones, midlevels, campus nets, end users) (...).

(SELECT 1992)


I would strongly urge the customer/user community to think about costs, training efforts, and operational impacts of the various proposals and PLEASE contribute those thoughts to the technical process.

(Crocker 1992)

This well-developed sense of trying to grasp how one component, here IPng, relates to the surrounding components of the information infrastructure is a principal reason for Internet's success up till now.

New features are included to tempt key users to change. But the drive towards conservatism is linked to one of the most important design principles of Internet, namely to protect the installed base. It is of overriding importance:

[T]he transition and interoperation aspects of any IPng is *the* key first element, without which any other significant advantage won't be able to be integrated into the user's network environment.

(e-mail from B. Fink to sipp mailing list, cited by Hinden 1996)

This appeal for conservatism is repeated ad nauseam. The very first sentence of (RFC 1996), describing the transition mechanisms of IPv6, reads: "The key to a successful IPv6 transition is compatibility with the large installed base of IPv4 hosts and routers" (ibid., p. 1). The pressure for holding back and declining features which might disturb the installed base is tremendous.

“Applying” the principles

A rich and varied set of proposed requirements was worked out. Still, it is not reasonable to hold that the decision was made by simply "applying" the abstract selection criteria to the different proposals for IPng. Despite the fact that the resulting requirements (RFC 1994c), with 17 criteria, were "presented without weighting" (ibid., p. 3), a few themes were of overriding importance (IPDECIDE 1993). At this stage, draft requirements had been suggested for more than one year and seven candidates existed, but the requirements were "too general to support a defensible choice on the grounds of technical adequacy" and "had so far not gelled enough to eliminate any candidate" (ibid.). The concern for sharper criteria prevailed. It was repeated as late as March 1994, only two months before the decision was made:

One important improvement that seemed to have great support from the community was that the requirements should be strengthened and made firmer -- fewer "should allows" and the like and more "musts."

(AREA 1994)


The core concern focused on making the transition from IPv4 to IPv6 as smooth, simple and inexpensive as possible. A few carrots were considered crucial as incentives for a transition, primarily security:

What is the trade-off between time (getting the protocol done quickly) versus getting autoconfiguration and security into the protocol? Autoconfiguration and security are important carrots to get people to use IPng. The trade-off between making IPng better than IP (so people will use it) versus keeping IPv4 to be as good as it can be.

(NGDIR 1994)

Other requirements were to a large extent subordinate or related to these. For instance, autoconfiguration, that is, "plug and play" functionality, may be viewed as an incentive for transition.

The collection of proposed IPng solutions had evolved, joined forces or died. As explained earlier, there was tight interplay between the development of the solutions and the criteria. The real closing down on one solution took place during May-July 1994. In this period, there were extensive e-mail discussions, but more importantly, the IPng Directorate organised a two-day retreat 19.-20. May 1994 at BigTen with the aim of evaluating and reworking the proposals (Knopper 1994). Through this and the subsequent IETF in July 1994, an IPng solution was decided upon.

Showdown

By the spring of 1994, three candidates for IPng existed, namely "CATNIP" (evolving from TP/IX), "SIPP" (an alliance between IPAE, SIP and PIP) and "TUBA" (evolving from Simple CLNP). A fourth proposal, Nimrod, was more or less immediately rejected for being too unfinished and too much of a research project.

CATNIP was "to provide common ground between the Internet, OSI, and the Novell protocols" (RFC 1995, p. 12). The basic idea of CATNIP for ensuring this was to have the Internet, OSI and Novell transport layer protocols (for instance, TCP, TP4 and SPX) run on top of any of the network layer protocols (IPv4, CLNP, IPX — or CATNIP). The addressing scheme was borrowed from OSI.

A primary objection against CATNIP, which surfaced during the BigTen retreat, was that it was not completely specified (Knopper 1994; RFC 1995, pp. 14-15). Beyond the obvious problems with evaluating an incomplete proposal, this illustrates a more general point made earlier and illustrated by Alvestrand (1996), area director within the IETF: "The way to get something done in the Internet is to work and write down the proposal". Despite appreciation for the "innovative" solution, there was scepticism towards the "complexity of trying to be the union of a number of existing network protocols" (RFC 1995, p. 15).

The TUBA solution was explicitly conservative. Its principal aim was to "minimize the risk associated with the migration to a new IP address space" (ibid., p. 13). This would mean "only replacing IP with CLNP" (ibid., p. 13) and letting "existing Internet transport and application protocols continue to operate unchanged, except for the replacement of 32-bit IP[v4] addresses with larger addresses" (ibid., p. 13). CLNP is, as outlined above, OSI's already existing network layer protocol. Hence, the core idea is simply to encapsulate, that is, wrap up, TCP in CLNP packets.

The evaluation of TUBA acknowledged the benefits of a solution making use of the "significant deployment of CLNP-routers throughout the Internet" (ibid., p. 16), that is, a solution paying respect to an installed base. Similar to the arguments outlined above regarding the IAB's IPv7 plan to build IPng on CLNP, "[t]here was considerably less agreement that there was significant deployment of CLNP-capable hosts or actual networks running CLNP" (RFC 1995, p. 16). The worries — "including prejudice in a few cases" (ibid., p. 16) — about the prospects of losing control of the Internet by aligning IPng with an OSI protocol were deep-seated.

SIPP was to be "an evolutionary step from IPv4 (...) not (...) a radical step" (ibid., p. 12). SIPP doubles the address size of IP from 32 to 64 bits to support more levels of addressing hierarchy and a much greater number of addressable nodes. SIPP does not, in the same way as CATNIP or TUBA, relate to non-Internet protocols.

The reviews of SIPP were favourable. SIPP was praised for its "aesthetically beautiful protocol well tailored to compactly satisfy today's known network requirement" (ibid., p. 15). It was furthermore pointed out that the SIPP working group had been the most dynamic one in the previous year, producing close to a complete specification.

Still, it was definitely not a satisfactory solution. In particular, the transition plans (based on the encapsulation suggestion originally in IPAE) were viewed as "fatally flawed" (Knopper 1994). A number of reviewers also felt that the routing problems were not really addressed, partly because there was no way to deal with topological information and aggregation of information about areas of the network.

In sum, there were significant problems with all three proposals. Because CATNIP was so incomplete, the real choice was between TUBA and SIPP. Following the BigTen evaluation retreat, Deering and Francis (1994), co-chairs of the SIPP working group, summarised the BigTen retreat to the sipp e-mail list and proposed to build upon suggestions which came out of it. Particularly important, they suggested to "change address size from 8 bytes [=64 bits, the original SIPP proposal] to 16 bytes [=128 bits] (fixed-length)" (ibid.). This increase in address length would buy flexibility to find better solutions for autoconfiguration, more akin to the TUBA solution. These suggestions were accepted by the SIPP working group, which submitted the revised SIPP (version 128 bits) to the IPng Directorate together with a new but incomplete transition plan inspired by TUBA. This was accepted in July 1994 as the solution for IPng, finally ready to be put on the ordinary standards track of Internet.
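To get a feel for what this change meant, consider the raw sizes of the address spaces involved. The following back-of-the-envelope calculation is our illustration, not part of the historical record:

    # Rough comparison of the address spaces discussed during the IPng
    # selection (illustration only).
    ipv4 = 2 ** 32    # IPv4: 32-bit addresses
    sipp = 2 ** 64    # original SIPP proposal: 64-bit addresses
    ipv6 = 2 ** 128   # revised SIPP / IPv6: 128-bit addresses

    print(f"IPv4 (32 bit):  {ipv4:.1e}")   # ~4.3e+09
    print(f"SIPP (64 bit):  {sipp:.1e}")   # ~1.8e+19
    print(f"IPv6 (128 bit): {ipv6:.1e}")   # ~3.4e+38

The jump from 64 to 128 bits was thus not motivated by a shortage of addresses as such, but, as noted above, by the flexibility the extra bits bought for autoconfiguration and addressing hierarchy.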

July 1994 - today

Finished at last — or are we?

By the summer of 1994, a recommended candidate for IPng was found. It was called IPv6. It has been put on the standards track (see chapter 4) and made a Proposed Standard in November 1994. One could accordingly be tempted to think that it was all over, that one had found a way which secured the future of Internet. This, however, is not quite the case, not even today. There is a fairly well-founded doubt "whether IPv6 is in fact the right solution to the right problem" (Eidnes 1996). There are two reasons for this, both to be elaborated later:

• There was — and still is — a considerable degree of uncertainty about how to conduct full-scale testing;

• Even if the IPng protocol itself was completed, a number of tightly related issues were still unresolved, most importantly a transition strategy.

Full-scale testing

A core element of the Internet design principles, what could be said to be the realisation of the Internet pragmatism, is the emphasis on practical experience and testing of any solutions (RFC 1994). Although this principle is universally accepted within the Internet community, the point is that as the installed base of Internet expands, so do the difficulties of actually accomplishing large-scale, realistic testing. So again, how should the principle of realistic testing be implemented for IPng? This worry was voiced fairly early on:

It is unclear how to prove that any proposal truly scales to a billion nodes. (...) Concern was expressed about the feasibility of conducting reasonably-sized trials of more than one selected protocol and of the confusing signals this would send the market.

(IPDECIDE 1993)

The problem of insufficient testing is important because it undermines the possibility of establishing interoperability (ibid.):

It is also difficult to estimate the time taken to implement, test and then deploy any chosen solution: it was not clear who was best placed to do this.

Current deployment of IPv6 is very slow. Implementations of IPv6 segments, even on an experimental basis, hardly exist (Eidnes 1996). Even though the phases a standard undergoes before becoming a full Internet Standard may be as swift as 10 months, a more realistic projection for IPv6 is 5 years (Alvestrand 1996). The upgrading of IPv6 to a Draft Standard requires testing well beyond what has so far been conducted.

As Internet expands, full-scale testing becomes more cumbersome. Some within the IETF see an increasingly important role for non-commercial actors, for instance research networks, to function as early test-beds for future Internet Standards (Alvestrand 1996). The US Naval Research Laboratory had implemented an experimental IPv6 segment by June 1, 1996 as part of their internetworking research. The Norwegian research network, which traditionally has been fairly up-front, expects to start deployment of IPv6 during 1997.

Unresolved issues

At the time when the IPng protocol was accepted on the standards track, several crucial issues were still not settled. At the November 1994 IETF, immediately following the IPng decision, it was estimated that 10-20 specifications were required (AREA 1994b). Most importantly, a transition strategy was not in place. This illustrates the point made earlier, namely that the actual design decisions are not derived in any straightforward sense from abstract principles. Besides a transition strategy, the security mechanisms related to key management were not — and, indeed, still are not — completed.

A core requirement for IPng was to have a clear transition strategy (RFC 1995). SIPP (version 128 bits) was accepted as IPng without formally having produced a clear transition strategy because the concerns for facilitating a smooth transition were interwoven with the whole process, as outlined earlier. There was a feeling that it would be feasible to work out the details of the transition mechanisms based on the IPng protocol. It was accordingly decided by the IPng Directorate, just prior to the BigTen retreat, to separate transition from the protocol.

In response to the lack of a complete transition strategy, informal BOFs (NGTRANS and TACIT) were held at the November 1994 IETF. TACIT was a working group formed during the spring of 1994; NGTRANS was established as a working group shortly after the November 1994 IETF. Both TACIT and NGTRANS were to address the issue of a transition strategy, but with slightly different foci. NGTRANS was to develop and specify the actual, short-term transition mechanisms, leaving TACIT to deal with deployment plans and operational policies (NGTRANS 1994). The transition was to be "complete before IPv4 routing and addressing break down" (Hinden 1996, p. 62). As a result of the deployment of CIDR, it was now estimated that "IPv4 addresses would be depleted around 2008, give or take three years" (AREA 1994b).

From drafts sketched prior to the establishment of NGTRANS and TACIT, the work on the transition strategy was completed to the stage of an RFC only by April 1996 (RFC 1996).

The transition mechanisms evolved gradually. It was recognised early on that a cornerstone of the transition strategy was a "dual-stack" node, that is, a host or router implementing both IPv4 and IPv6 and thus functioning as a gateway between IPv4 and IPv6 segments. Dual-stack nodes have the capability to send and receive both IPv4 and IPv6 packets. They enforce no special ordering on the sequence of nodes to be upgraded to IPv6, as dual-stack nodes "can directly interoperate with IPv4 nodes using IPv4 packets, and also directly interoperate with IPv6 nodes using IPv6 packets" (RFC 1996, p. 4).
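The dual-stack idea can be illustrated with present-day tools. The sketch below uses modern Python sockets (our illustration, not code from the period) and assumes a platform that allows an IPv6 socket to accept IPv4 connections when the IPV6_V6ONLY option is cleared:

    # Minimal sketch of a dual-stack node in modern Python.
    import socket

    # An IPv6 server socket; clearing IPV6_V6ONLY lets the same socket
    # accept IPv4 clients too (they appear as mapped ::ffff:a.b.c.d
    # addresses), so one node serves both protocol families.
    server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    server.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    server.bind(("::", 8080))   # "::" = all interfaces, v4 and v6
    server.listen(1)

    # A dual-stack node can also originate traffic over either family:
    # getaddrinfo returns both IPv4 and IPv6 addresses for a peer, and
    # the node simply uses whichever family the peer supports.
    for family, _, _, _, addr in socket.getaddrinfo(
            "localhost", 8080, proto=socket.IPPROTO_TCP):
        print(socket.AddressFamily(family).name, addr)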

Progress was also made on closely related elements of an IPv6 infrastructure. The bulk of the IPv4 routing algorithms were reported to be working also for IPv6 routers, a piece of pleasant news in November 1994 (AREA 1994a, p. 4).

The additional key transition mechanism besides dual-stack nodes was IPv6 over IPv4 "tunnelling". This is the encapsulation, or wrapping up, of an IPv6 packet within an IPv4 header in order to carry it across IPv4 segments of the infrastructure. A key element to facilitate this is to assign IPv6 addresses which are compatible with IPv4 addresses in a special way. (The IPv4-compatible IPv6 address has its first 96 bits set to zero and the remaining 32 bits equalling the IPv4 address.)
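The address mapping itself is simple enough to show directly. A minimal sketch in Python, using today's ipaddress module and a documentation example address of our choosing:

    # The IPv4-compatible IPv6 address format described above:
    # first 96 bits zero, last 32 bits equal to the IPv4 address.
    import ipaddress

    v4 = ipaddress.IPv4Address("192.0.2.1")   # example IPv4 address
    v6 = ipaddress.IPv6Address(int(v4))       # zero-extend to 128 bits
    print(v6)                                 # ::c000:201 (= ::192.0.2.1)

    # Recovering the IPv4 address is just the low 32 bits:
    low32 = int(v6) & 0xFFFFFFFF
    print(ipaddress.IPv4Address(low32))       # 192.0.2.1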


Discussion

Rule following vs. reflective practitioners

A striking aspect of the IPng effort is the difference between abstract design principles and the application of these to situated contexts. A considerable body of literature has, both on a theoretical and an empirical basis, pointed out how human action always involves a significant element of situated interpretation extending well beyond predefined rules, procedures, methods or principles (Suchman 1987). That designers deviate from codified methods and text-books is likewise not news (Curtis, Krasner and Iscoe 1988; Vincenti 1990). Still, the manner in which deviations from, applications of, and exceptions to design principles are made the subject of a fairly open and pluralistic discussion is rare. It is not merely the case that the actual design of Internet does not adhere strictly to any design principles. This should not surprise anyone. More surprising is the extent to which the situated interpretations of the design principles are openly and explicitly discussed among a significant portion of the community of designers.

When outlining different approaches to systems design or interdisciplinarity, the engineering or technically inclined approach is commonly portrayed as quite narrow-minded (Lyytinen 1987). The Internet community is massively dominated by designers with a background, experience and identity stemming from technically inclined systems design. The design process of IPng, however, illustrates an impressively high degree of reflection among the designers. It is not at all narrow-minded. As outlined earlier, there are numerous examples of this, including crucial ones such as: how the installed base constrains and facilitates further changes, the new role of market forces, and the balance between exploring alternatives and closing down.

Aligning actor-networks

The majority of the Internet community has a well-developed sense of what they are designing. They are not designing artefacts but tightly related collections of artefacts, that is, an infrastructure. When changes are called for (and they often are), they do not change isolated elements of the infrastructure. They facilitate a transition of the infrastructure from one state to another.

Key to understanding the notion of transition and coexistence is the idea that any scheme has associated with it a cost-distribution. That is, some parts of the system are going to be affected more than other parts. Sometimes there will be a lot of changes; sometimes a few. Sometimes the changes will be spread out; sometimes they will be concentrated. In order to compare transition schemes, you *must* compare their respective cost-distribution and then balance that against their benefits.

(Rose 1992b)

In the vocabulary of actor-network theory (Callon 1991; Latour 1992), this insight corresponds to recognising that the huge actor-network of Internet — the immense installed base of routers, users' experience and practice, backbones, hosts, software and specifications — is well-aligned and to a large extent irreversible. To change it, one must change it into another equally well-aligned actor-network. To do this, only one (or very few) components of the actor-network can be changed at a time. This component then has to be aligned with the rest of the actor-network before anything else can be changed. This gives rise to an alternation over time between stability and change for the various components of the information infrastructure (Hanseth, Monteiro and Hatling 1996).

This crucial but often neglected insight of infrastructure design is well developed in the Internet community, as the IPng case illustrates at several points: the difference between short-term and long-term solutions, the debate over CIDR vs. C#, and concerns regarding transition mechanisms. The failure to really appreciate this is probably the key reason why the otherwise similar and heavily sponsored OSI efforts have yet to produce anything close to an information infrastructure of Internet's character (Rose 1992). Hanseth, Monteiro and Hatling (1996) compare the OSI and Internet efforts more closely.

An actor-network may become almost impossible to change when its components accumulate too much irreversibility and become too well aligned with each other (Hughes 1983). The components of the actor-network become, so to speak, locked into one another in a deadly dance where none succeeds in breaking out. This is not seldom the case with infrastructure technologies. Grindley (1995) describes the collapse of closed operating systems along these lines, without employing the language of actor-network theory. The operating systems were too conservative. They were locked into each other by insisting that new versions were backwards compatible with earlier ones and by tailoring a large family of applications to run only on one operating system. The danger that something similar should happen to Internet is increasing as the infrastructure expands, because the "longer it takes to reach a decision, the more costly the process of transition and the more difficult it is to undertake" (IPDECIDE 1993).

Obviously, there are no generic answers to how much one should open an infrastructure technology to further changes, when to close down on a solution which addresses at least fairly well-understood problems — or when simply to keep the old solution without changes for the time being. Internet has pursued and developed what seems a reasonably sound, pragmatic sense of this problem:

Making a reasonable well-founded decision earlier was preferred over taking longer to decide and allowing major deployment of competing proposals.

(IPDECIDE 1993)

Striking a balance between stability and change has to date been fairly successful. Whether this level of openness and willingness to be innovative suffices to meet future challenges remains to be seen. It is anything but obvious.

But what about the future?

The institutionalised framework of Internet is under a tremendous — and a completely new kind of — pressure. This is partly due to the fact that the majority of users come from other sectors than the traditional ones. The crucial challenge is to preserve the relatively pluralistic decision process, involving a significant fraction of the community, when confronted with situations calling for pragmatic judgement.

So there it is: politics, compromise, struggle, technical problems to solve, personality clashes to overcome, no guarantee that we'll get the best result, no guarantee that we'll get any result. The worst decision making system in the world except for all the others.

(Smart 1992)

But only a minority of today's Internet community has acquired the required sense of Internet pragmatism. There are signs which indicate a growing gulf between the traditional design culture and the more commercially motivated ones (Rekhter and Knopper 1992).

The core institutions of Internet are the IETF, the IESG and the IAB. Despite the fact that the IAB members are appointed from the IETF, the IAB was — especially during the heated debate over the Kobe declaration — poorly aligned with the IESG and the IETF. How, then, can the interests of the IAB seemingly differ so much from those of the IESG and the IETF? I point out a couple of issues I believe are relevant to working out an explanation.

Even if the IAB today is recruited from basically the same population as the IESG and the IETF, this has not always been the case (Kahn 1994). The bulk of the current members of the IAB come from the computer and telecommunication industry (8), two from universities, one from a research institute and one from manufacturing industry. Seven are based in the United States and one in each of Australia, Britain, Canada, the Netherlands and Switzerland (Carpenter 1996). The IAB struggled until fairly recently, however, with a reputation of being too closed (IAB 1990). The minutes of the IAB were not published until 1990. In addition, the IAB was for some time "regarded as a closed body dominated by representatives of the United States Government" rather than by the traditional designers of the IETF and the IESG (Carpenter 1996). In connection with the Kobe declaration, this legacy of the IAB was made rhetorical use of and hence kept alive: "Let's face it: in general, these guys [from IAB] do little design, they don't code, they don't deploy, they don't deal with users, etc., etc., etc." (Rose 1992b).

The programmatically stated role of the IAB to advise and stimulate action — rather than direct — has to be constantly adjusted. As Carpenter (1996), the IAB chair, states: "the IAB has often discussed what this means (...) and how to implement it". It seems that the IAB during recent years has become more careful when extending advice in order not to have it misconstrued as direction. The controversy over the Kobe declaration was an important adjustment of what it means for the IAB to provide advice: "the most important thing about the IAB IPv7 controversy [in the summer of 1992] was not to skip CLNP. It was to move the power from the IAB to the IESG and the IETF" (Alvestrand 1996).

The last few years have witnessed a manyfold increase in IETF attendance, even if it seems to have stabilised during the last year or so. Many important elements of the future Internet, most notably related to Web technology, are developed outside the Internet community in industrial consortia dealing with the HTML protocol family, HTTP, web-browsers and electronic payment. It is not clear that all of the standards these consortia develop will ever get on the Internet standards track. These consortia might decide to keep them proprietary. Still, a key consortium like the WorldWideWeb consortium led by Tim Berners-Lee has gained widespread respect within the Internet community for the way its standardisation process mimics that of Internet (see http://www.w3.org/pub/WWW). As the organisation of Internet standardisation activities grows, so does the perceived need to introduce more formal, bureaucratic procedures closer to those employed within OSI: "the IETF might be able to learn from ISO about how to run a large organization: 'mutual cultural infection' might be positive" (IAB 1993).

An important design principle within Internet is the iterative development of standards which combines practical testing and deployment with the standardisation process. This principle is getting increasingly more difficult to meet, as the IP revision makes painfully clear. There is a growing danger that the Internet standardisation process may degenerate into a more traditional, specification driven approach.


Non-commercial actors, for instance research networks, have an important role to play in functioning as test-beds for future standards (Alvestrand 1996).

Conclusion

To learn about the problems of scaling information infrastructure, we should study Internet. With the escalating use of Internet, making the changes required for scaling becomes increasingly difficult. Internet has never faced a more challenging task regarding scaling than its revision of IP. After years of hard work, most people reckon that IPv6 will enhance further scaling of Internet. But even today, there is reasonably well-founded doubt about this. We have yet to see documented testing of IPv6 segments.

The real asset of Internet is its institutionalised practise of pragmatically and fairly pluralistically negotiating design issues. Whether this will survive the increasing pressure from new users, interest groups, commercial actors and industrial consortia remains to be seen.

Having argued conceptually for cultivation-based strategies for the establishment of information infrastructures (chapter 9) and illustrated an instance of it in some detail in the case of IPv6, we now turn to alternative, more radical approaches to the "design" of information infrastructures. Together with the cultivation-based approach, these alternative approaches make up the repertoire of strategies we have available when establishing information infrastructures.


CHAPTER 11 Changing networks: Strategies and tools

THIS CHAPTER IS IN A PRE-PRE-PRE-DRAFT STATUS

Introduction

Our analysis of information infrastructures has led to a rephrasing of the notion of design. The "design" of an infrastructure is the purposeful intervention in and cultivation of an already existing, well-aligned actor-network. This immediately prompts the question of which strategies exist for such purposeful intervention. One may distinguish between three generic strategies:

• an evolutionary one: a slow, incremental process where each step is short and conservative;

• a more daring one: a faster process where each step is longer and more daring;

• a radical one: fast changes which are a radical break with the past.

We have elaborated the first of these conceptually (chapter 9) as well as empirically (chapter 10). The change represents a (limited) number of new features added to the existing ones. The new is just an extension of the existing. Its successful application within Internet is in itself a warrant for its relevance. It corresponds to "backwards compatibility" - a well known phenomenon in the world of products (Grindley 1995).

More daring changes imply "jumping" between disconnected networks: building a new network from scratch that is unrelated (that is, completely un-aligned) to the existing one, and requiring users to jump from one network to the other. This kind of change is illustrated by e-mail users subscribing to America Online who "jumped" to Internet.

Changing a network through this kind of abrupt change, however, is often difficult to implement due to the important role of the installed base. Connection to the first network gives access to a large community of communicating partners, while the new one initially gives access to none, making it unattractive to be among the first movers. In spite of this fact, this jumping strategy is really the one implicitly assumed in the definition of the OSI protocols, and is claimed to be the main explanation of their failure (Stefferud 1994; Hanseth, Monteiro, and Hatling 1996). Such a jumping strategy might, however, be made more realistic if combined with organization and coordination activities. A simple strategy is to decide on a so-called "flag day" where everybody is expected to jump from the old to the new. Still, this strategy requires that the communicating community has a well defined, central authority and that the change is simple enough to be made in one single step. Changing the Norwegian telephone system in 1994 from 6 to 8 digit numbers was done in this way by the Norwegian Telecom, which at that time enjoyed monopoly status.

Radical changes are often advocated, for instance within the business process reengineering (BPR) literature (ref.). Empirically, however, such radical changes of larger networks are rather rare. Hughes (1987) concluded that large networks only change in the chaos of dramatic crises (like the oil crises in the early 70s) or in case of some external shock.

As a strategy for changing information infrastructures of the kind we discuss in this book, relying on abrupt changes is ill-suited and will not be pursued further. Still, the former approach, the evolutionary one, needs to be supplemented. There are important and partly neglected situations where this strategy simply is not sufficient: what we (for lack of a better name) dub "gateway-based" strategies are required for more radical changes. The slow, evolutionary approach involves only modest changes to a well aligned network.

This chapter explains the background for and contents of these supplementary strategies to the evolutionary approach. We will first illustrate this strategy "in action" through an example, namely the establishment of NORDUnet, a research network in Scandinavia in the 80s. Afterwards, we turn to a more general analysis of the notion of gateways and their role in future information infrastructures.

NORDUnet

Status 83 - 85: networks (and) actors

In the late seventies and early eighties, most Nordic universities started to build computer networks. Different groups at the universities got involved in various international network building efforts. Around 1984 lots of fragmented solutions were in use and the level of use was growing. Obtaining interoperable services between the universities was emerging as desirable - and (technologically) possible.

The networks already in use - including their designers, users, and operating personnel - were influential actors and stakeholders in the design and negotiations of future networks - including Nordunet. We will here briefly describe some of them.

The EARN network was established based on proprietary IBM technology, using the RSCS protocols. (Later on these protocols were redesigned and became well known as the SNA protocol suite.) The network was built and operated by IBM. It was connected to BITnet in the US. Most large European universities were connected.

The EARN network was linked to the EDP centres. It was based on a "star" topology. The Nordic countries were linked to the European network through a node at the Royal Technical University in Stockholm. In Norway, the central node was located in Trondheim. The network was based on 2.4 - 4.8 kbit/s lines in the Nordic countries.

The EARN network was used by many groups within the universities in their collaboration with colleagues at other universities. The main services were e-mail and file transfer. A chat service was also used to some extent.

HEPnet was established to support collaboration among physics researchers around the world (?), and in particular among researchers collaborating with CERN outside Geneva in Switzerland. This network was based on DECnet protocols. This community represented "big science"; they had lots of money and were a strong and influential group in the discussions about future academic networks. The EDP department at CERN was also very active in developing systems that were established as regular services for this community.

EUnet is (was) a network of Unix computers based on UUCP protocols. EUnet has always been weak in Norway, but was used to some extent in computer science communities. Its main node was located at Kongsberg until 1986 when it was moved to USIT.

EUnet was mostly used by Unix users (doing software development), within academic institutions as well as private IT enterprises.

Norway was the first country outside the US linked to ARPANET. A node was set up at NDRE (Norwegian Defence Research Establishment) at Kjeller outside Oslo by Pål Spilling when he returned in 198? from a research visit at.... The second node was established by Tor Sverre Lande at the department of informatics at the University of Oslo in.... This happened when he also returned from a one year (??) research visit at .. Lande brought with him a copy of the Berkeley Unix operating system, which included software implementing all the ARPANET protocols. The software was installed on a VAX 11/780 computer and linked to ARPANET through a connection to the node at Kjeller. Later on more ARPANET nodes were set up.

NDRE was using the net for research within computer communications in collaboration with ARPA. Lande was working within hardware design, and wanted to use the net to continue the collaboration with the people he had visited in the US, all using VLSI design software on Unix machines linked to ARPANET.

At that time (??), ARPANET was widely used among computer science researchers in the US, and computer science researchers in Norway very much wanted to get access to the same network to strengthen their ties to the US research communities.

At this time Unix was diffusing rapidly. All Unix systems contained the ARPANET protocols, and most Unix computers were in fact communicating using these protocols in the local area networks they were connected to. Accordingly there were lots of isolated IP islands in Norway and the other Nordic countries. Linking these IP islands would produce a huge network.

In Norway the development of one network connecting all universities started in the early eighties. The objective was one network linking every user and providing the same services to all. With this goal at hand, it was felt quite natural to link up with the OSI standardization effort and build a network based on what would come out of that. Those involved tried to set up an X.25 network. First, they tried to build this based on an X.25 product developed by a Spanish company. The quality of this product was low, and it seemed out of reach to get the network up and running.1

Using this product was given up, and it was replaced by an English product called Camtech. Running DECnet over X.25 was considered. Based on the English product one managed to keep the network running, and an e-mail service was established in 84/85 based on the EAN system.

1. The product got the nick-name SPANTAX, after the Spanish airline with the same name. At that time "Spantax" had become an almost generic term for low quality services in Norway due to lots of tourists having bad experiences with that airline when going to Spanish tourist resorts, combined with one specific event where a flight came close to crashing when the pilot thought a large area with lots of football fields was the airport.

Universal solutions

As the networks described above were growing, the need for communication between users of different networks appeared. And the same was happening "everywhere," leading to a generally acknowledged need for one universal network providing the same universal services to everybody. Such a universal network required universal standards. So far so good - everybody agreed on this. But what the universal standards should look like was another issue.

This was a time of ideologies, and the strongest ideology seemed to be the ISO/OSI model, protocols and approach. In general there was a religious atmosphere. Everybody agreed that proprietary protocols were bad, and that "open systems" were mandatory. The Americans pushed IP based technologies. They did so because they already had an extensive IP based network running, and extensive experience from the design, operations, and use of this network. The network worked very well (at least compared to others), and lots of application protocols were already developed and in use (ftp, telnet, e-mail,....???).

As the IP based network (Arpanet, later Internet) was growing, the protocols were improved and tuned. New ones were developed as it was discovered that they were urgently needed to make the network work smoothly, or as new ideas developed from using the existing services. An example of the first is the development of the Domain Name Service, DNS, mapping symbolic names to numeric IP addresses. This service made the network scalable. Further, the decision to build the network on a connectionless transport service made the network flexible, robust, and simple, as no management of connections was required during communication sessions.
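What DNS contributes to scalability can be made concrete with a small sketch (a modern Python equivalent of the lookup; the hostname is our example):

    # Users and software deal in stable symbolic names; the mapping to
    # numeric addresses is resolved on demand through DNS.
    import socket

    infos = socket.getaddrinfo("www.example.org", 80,
                               proto=socket.IPPROTO_TCP)
    for family, _, _, _, sockaddr in infos:
        print(socket.AddressFamily(family).name, sockaddr[0])

    # Renumbering a host then changes only its DNS record, not every
    # program and configuration file that refers to the name.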

American research and university communities pushed IP, while both European researchers within the computer communications field and telecom operators pushed ISO. The role of the telecom operators had the effect that the whole of OSI is based on telephone thinking.2 The Europeans wanted a non-IP based solution, believing that would close the technological gap between Europe and the US.

2. For more on this, see (Abbate 1995).

The OSI idea was.

The IP (Internet) idea was

alliances

links between technology and non-technology: for instance, the telecom operators' intention to expand their monopoly was embedded into the design of X.25 (Abbate 1995).

Nordunet

The Nordunet initiative was taken by the top managers of the EDP centres at the universities in the capitals of the Nordic countries. They had met at least once a year for some time to discuss experiences and ideas. Most of them had a strong belief in computer communication. In Norway the director of the EDP department at the University of Oslo, Rolf Nordhagen, was a strong believer in the importance of computer network technology. He had pushed the development of the BRU network at the university, linking all terminals to all computers. He also worked eagerly for establishing new projects with wider scopes, and he was an important actor in the events leading to the conception of the idea of building a network linking together all Nordic universities. When the idea was accepted, funding was the next issue. The Ministry of the Nordic Council was considered the proper funding organization. They had money; an application was written and funding granted.

Arild Jansen, a former employee at the EDP department in Oslo, was now working at the Ministry for Public affairs in Norway and played the role of bridge between the technical community on the one hand and the political and funding communities on the other. He was also the one writing the application for funding. Later he became a member of the steering group.

The idea was one shared network for research and education for all the Nordic countries. This objective almost automatically led to "openness" as a primary objective. "Openness" was also important for the politicians.

Strategy one: Universal solution, i.e. OSI

The Nordunet project was established in 1985. Einar Løvdal and Mats Brunell were appointed project coordinators. When the project started, they had hardly the slightest idea about what to do. Just as in the larger computer communications community, those involved in the project easily agreed about the need for a universal solution - agreeing on what this should look like was a different matter.

The people from the EDP centres, who had the idea for the project, all believed in the OSI "religion." Next they made an alliance with the public authorities responsible for the field that computer networks for research and education would fall into, and with the funding institution (which was also closely linked to the authorities). Obtaining "universal service" was an important objective for them; accordingly they all supported the ideas behind OSI. This alliance easily agreed that an important element in the strategy was to unify all forces, i.e. enrolling the computer communications researchers into the project. And so it happened. As they were already involved in OSI related activities, they were already committed to the "universal solution" objective and the OSI strategy to reach it.

However, products implementing OSI protocols were lacking. So the choice of strategy, and in particular short term plans, was not at all obvious. Løvdal was indeed a true believer in the OSI religion. Mats Brunell, on the other hand, believed in EARN. To provide a proper basis for taking decisions, a number of studies looking at various alternative technologies for building a Nordic network were carried out:

1. IP and other ARPANET protocols like SMTP (e-mail), ftp, and telnet.

2. Calibux protocols used in JANET in the UK.

3. EAN, an X.400 system developed in Canada.

All these technologies were considered only as possible candidates for intermediate solutions. The main rationale behind the studies was to find the best currently available technology. The most important criterion was the number of platforms (computers and operating systems) the protocols could run on.


Neither IP (and the ARPANET) nor the Calibux protocols were found acceptable. The arguments against IP and ARPANET were in general that the technology had all too limited functionality. Ftp had limited functionality compared to OSI's FTAM protocol (and also compared to the Calibux file transfer protocol on which FTAM's design to a large extent was based). The Nordunet project group, in line with the rest of the OSI community, found the IP alternative "ridiculous," considering the technology all too simple and not offering the required services. There were in particular hard discussions about whether the transport level services should be based on connection-oriented or connectionless services.3 The OSI camp argued that connection-oriented services were the most important. IP is based on a connectionless datagram service, which the IP camp considered one of the strengths of the ARPANET technology.

3. Connection-oriented communication models the telephone: a connection is set up before data flows and torn down afterwards. Connectionless communication models ordinary mail (or telegram) services: each item is delivered independently.
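The contrast the two camps argued over can be sketched in a few lines of modern socket code (our illustration; the address is a documentation example and the snippet is not meant to reach a live server):

    import socket

    # Connection-oriented (telephone-like): an explicit connection is
    # set up and torn down; both endpoints keep per-connection state.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("192.0.2.10", 7))   # handshake establishes the connection
    tcp.sendall(b"hello")
    tcp.close()                      # connection state released

    # Connectionless (telegram-like): each datagram is self-contained
    # and carries its own destination; no setup, no teardown, no state.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello", ("192.0.2.10", 7))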

JANET was at that time a large and heavily used network linking almost all English universities. The network was based on X.25. In addition it provided e-mail, file transfer, and remote job entry services. The protocols were developed and implemented by academic communities in the UK. That this large network was built in the UK was to a large extent due to the fact that the institution funding UK universities required that the universities buy computers that could run these protocols. JANET was also linked to ARPANET through gateways. The gateways were implemented between service/application protocols. The people involved in the development of the Calibux protocols were also active in and had significant influence on the definition of the OSI protocols. The main argument against Calibux was that the protocols did not run on all required platforms (computers and operating systems).

One important constraint put on the Nordunet project was that the solutions should be developed in close cooperation with similar European activities. This made it almost impossible to go for the ARPANET protocols, and also for Calibux, although the latter were closer to the OSI protocols unanimously preferred by those building academic networks and doing computer communications research throughout Europe.

The IP camp believed that IP (and the other ARPANET protocols) was the universal solution needed, and that the success of ARPANET had proved this.

The users were not directly involved in the project, but their views were important to make the project legitimate. They were mostly concerned about services. They wanted better services - now! But in line with this they also argued that more effort should be put into the extensions and improvements of the networks and services they were already using, and less into the long term objectives. The HEPnet users expressed this most clearly. They were using DECnet protocols and DEC computers (in particular VAX). DEC computers were popular at most Nordic universities; accordingly they argued that a larger DECnet could easily be established and that this would be very useful for large groups. The physicists argued for a DEC solution, and so did Norsk Romsenter. Nobody argued for a "clean" DECnet solution as a long term objective.

On the Nordic as well as on the global scene (Abbate 1995), the main fight was between the IP and OSI camps. This fight involved several elements and reached far beyond technical considerations related to computer communications. At all universities there was a fight and deep mistrust between the EDP centres and the computer science departments. The EDP centres were concerned about delivering ("universal") services to the whole university as efficiently as possible. They thought this could best be done through one shared and centralized service. The computer science departments most of the time found the services provided by the EDP centres lagging behind and unsatisfactory in relation to their requirements. They saw themselves as rather different from the other departments, as computers were their subject. They had different requirements and would be much better off if they were allowed to run their own computers. But the EDP centres were very afraid of losing control if there were any computers outside the domain they ruled.

The computer science departments also disagreed with the EDP centres about what should be in focus when building communication services and networks. The EDP departments focused first on their own territory, then on the neighboring area. This meant first establishing networks across the university, and second, extending and enhancing these so that they became linked to the networks at the other universities in Norway, then in the Nordic countries. The computer science departments, however, were not interested in communicating with other departments at the same university. They wanted to communicate and collaborate with fellow researchers at other computer science departments - not primarily in Norway or other Nordic countries, but in the US. They wanted Unix computers to run the same software as their colleagues in the US, and they wanted a connection to ARPANET to communicate with them.

The EDP centres would not support Unix as long as it was not considered feasible as the single, "universal" operating system for the whole university. And they would not support IP, for the same reason. And thirdly, they wanted complete control and would not let the computer science departments do it on their own either. To get money to buy their own computers, the computer science departments had to hide this in applications for funding of research projects within VLSI and other fields. The fight over OSI (X.25) and IP was deeply embedded in networks of people, institutions, and technologies like these.

Tor Sverre Lande and Spilling participated in some meetings in the early phase of the project. They were sceptical about Calibux and wanted an IP based solution. They did not have much influence on the Nordunet project and decided to go for the establishment of the IP connections they wanted outside the Nordunet project. And most of those involved in the Nordunet project were happy when they did not have to deal with the IP camp. At this time there was a war going on between the camps, with lots of bad feelings.

As all intermediate solutions were dismissed, it was decided to go directly for an OSI based solution. The first version of the network was to be built on X.25 and the EAN system providing e-mail services. This solution was very expensive, and the project leaders soon realized that it did not scale. X.25 was full of trouble. The problems were mostly related to the fact that the X.25 protocol specification is quite extensive, easily leading to incompatible implementations. Computers from several vendors were used within the Nordunet community, and there were several incompatibilities among the vendors' implementations. But maybe more trouble was caused by the fact that lots of parameters have to be set when installing and configuring an X.25 protocol. To make the protocol implementations interoperate smoothly, the parameter settings have to be coordinated. In fact, the protocols required coordination beyond what turned out to be possible.
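The coordination problem can be caricatured in a few lines. The parameter names below are illustrative rather than a real X.25 configuration schema, though packet and window sizes were indeed among the facilities that had to match:

    # Two independently configured endpoints interoperate only if every
    # negotiable parameter has been set compatibly (hypothetical sketch).
    site_a = {"packet_size": 128, "window_size": 2, "modulo": 8}
    site_b = {"packet_size": 256, "window_size": 2, "modulo": 8}

    mismatches = {k: (site_a[k], site_b[k])
                  for k in site_a if site_a[k] != site_b[k]}
    if mismatches:
        print("cannot interoperate:", mismatches)
        # {'packet_size': (128, 256)}

With many vendors, many sites and many such parameters, the number of pairwise agreements that had to hold grew quickly, which is the coordination burden described above.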

The project worked on the implementation of the network as specified for about a year or so without any significant progress. The standardization of OSI protocols was also (constantly) discovered to be more difficult, and its progress slower, than expected, pushing the long term objectives further into the future. The Ministry of the Nordic Council was seriously discussing stopping the project because there were no results. New approaches were desperately needed.

Strategy 2: Intermediate, short-term solutions

At the same time other things happened. IBM wanted to transfer the operations of its EARN network to the universities. Einar Løvdal and Mats Brunell then together, over some time, developed the idea of using EARN as the backbone of a multi-protocol network. They started to realize that OSI would take a long time - one had to provide services before that. OSI remained ideologically important all along, but one had to become more (and more) pragmatic. The idea of "The NORDUNET Plug" was developed. This idea meant that there should be one "plug", common for everybody, that would hook up to the Nordunet network. The plug should have 4 "pins": one for each of the network protocols to be supported: X.25, EARN, DEC, and IP. The idea was presented as if the plug implemented a gateway between all the networks, as illustrated by fig. 6. That was, however, not the case.

Fig 6. The Nordunet Plug, seen as a gateway

The plug only provided access to a shared backbone network, as illustrated by fig. 7. An IBM computer running EARN/RSCS protocols could communicate only with another computer also running the same protocols. There was no gateway enabling communication between, say, an RSCS and an IP based network.

Fig 7. The NORDUNET Plug, as a shared backbone
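The distinction between the two figures can be captured in a few lines of code. This is a minimal sketch under our own naming - the hosts and the backbone_delivers function are hypothetical - showing that a shared backbone carries everything but translates nothing.

# Minimal sketch (hypothetical names): the "plug" as a shared backbone.
# Two hosts can exchange traffic only if they run the same protocol stack;
# the backbone itself performs no translation between stacks.

class Host:
    def __init__(self, name: str, protocol: str):
        self.name, self.protocol = name, protocol

def backbone_delivers(sender: Host, receiver: Host) -> bool:
    # A shared backbone carries any traffic but converts nothing.
    return sender.protocol == receiver.protocol

ibm = Host("IBM mainframe", "RSCS")
vax = Host("VAX", "DECnet")
unix_a = Host("Unix workstation A", "IP")
unix_b = Host("Unix workstation B", "IP")

assert backbone_delivers(unix_a, unix_b)   # same stack: communication works
assert not backbone_delivers(ibm, vax)     # different stacks: no gateway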

The EARN idea received strong support. Løvdal and Brunell got hold of the EARN lines through a “coup”, and the implementation of a Nordic network based on the “Nordunet plug” idea started. They succeeded in finding products that made the implementation of the “plug” quite straightforward. First, Vitalink Ethernet bridges were connected to the EARN lines. This meant that Nordunet was essentially an Ethernet. To these Vitalink boxes the project linked IP routers, X.25 switches and EARN “routers.” For all these protocols there were high-quality products available that could be linked to the Vitalink Ethernet bridges.

This solution had implications beyond enabling communication across the backbone. The EARN network was originally designed by a centralized IBM unit and was based on a coherent line structure and network topology. Such a coherent topology would have been difficult to design for an organization containing as many conflicting interests as the Nordunet project. The EARN topology thus meant that the Nordunet network was designed in a way well prepared for further growth.

Further, the EARN backbone also included a connection to the rest of the global EARN network. A shared Nordic line to ARPANET was established and connected to the central EARN node in Stockholm. 64 kbit/s lines to CERN for HEPnet were also connected.

Fig 8. The Nordunet topology (nodes: KTH, Runit, the Norwegian universities, Finland, Denmark, Iceland; external links to the Internet, EARN, and CERN/HEPnet)

Having established a shared backbone, the important next step was of course the establishment of higher level services like e-mail, file transfer, and remote job entry (considered very important at that time for sharing computing resources for number crunching). As most of the networks in use had such services based on proprietary protocols, the task for the Nordunet project was to establish gateways between these, and a large activity aiming at exactly that was set up. When gateways at the application level were established, interoperability would be achieved. A gateway at the transport level would have done the job if there had been products available at the application level (e-mail, file transfer, etc.) on all platforms implementing the same protocols. Such products did not exist.

Before this, users in the Nordic countries used gateways in the US to transfer e-mail between computers running different e-mail systems. That meant that when sending an e-mail between two computers standing next to each other, the message had to be transferred across the Atlantic, converted by the gateway in the US, and finally transferred back again. Nordunet established a gateway service between the major e-mail systems in use. The service was based on gateway software developed at CERN.

File transfer gateways are difficult to develop as they require conversion on the fly. CERN had a file transfer gateway, called GIFT (General Interface for File Transfer), running on VAX/VMS computers. An operational service was established at CERN. It linked the file transfer services developed within Calibux (Blue book), DECnet, the Internet (ftp), and EARN. The gateway worked very well at CERN. Within Nordunet the Finnish partners were delegated the task of establishing an operational gateway service based on the same software. This effort was, however, given up as the negotiations about the conditions for getting access to the software failed.

A close collaboration emerged between the Nordunet project and people at CERN. They were “friends in spirit” (“åndsfrender”) - having OSI as the primary long term objective, but at the same time concentrating on delivering operational services to the users.

From intermediate to permanent solution

When the “Nordunet Plug” was in operation, a new situation was created. The network services had to be maintained and operated. Users started to use the network, and users’ experiences and interests had to be accounted for when making decisions about future changes to the network. Both the maintenance and operation work and the use of the network were influenced by the way the network - and in particular the “plug” as its core - was designed. The “plug” also became an actor playing a central role in the future of the network.

Most design activities were directed towards the minor, but important and necessary, improvements of the net that its use disclosed. Fewer resources were left for working on long term issues. However, in the NORDUnet community this topic was still considered important, and the researchers involved continued their work on OSI protocols and their standardization. The war between IP and X.25 continued. The OSI “priests” believed as strongly as ever that OSI, including X.25, was the ultimate solution. Among these were Bringsrud at the EDP centre in Oslo, Alf Hansen and Olav Kvittem in Trondheim, and Terje Grimstad at NR. Einar Løvdal was fighting for making bridges to the IP communities, having meetings with Spilling.

Maybe the most important task in this period was the definition and implementation of a unified address structure for the whole of NORDUnet. This task was carried out successfully.

In parallel with the implementation and early use phase of the “Plug,” Unix diffused fast in academic institutions, ARPANET was growing fast, and ARPANET protocols were implemented on more platforms and created more local IP communities (in LANs), while there was in practical terms no progress within the OSI project.

The increased availability of IP on more platforms led to an increase in the use of “dual stack” solutions, i.e. installing more than one protocol stack on a computer, linking it to more than one network. Each protocol stack is then used to communicate with specific communities. This phenomenon was particularly common among users of DEC computers. Initially they were using DECnet protocols to communicate with local users or, for instance, fellow researchers using HEPnet, and IP to communicate with ARPANET users.
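A minimal sketch of the idea (all names are ours, not taken from any actual system): a dual-stack host simply holds one protocol stack per community and selects among them according to whom it is talking to.

# Sketch of a "dual stack" host (hypothetical names): one installed
# protocol stack per community, selected by the communication partner.

class DualStackHost:
    def __init__(self, stacks: dict):
        self.stacks = stacks  # community -> protocol stack on this host

    def stack_for(self, community: str) -> str:
        return self.stacks[community]

# The DEC user described in the text: DECnet towards local and HEPnet
# peers, IP towards ARPANET users.
host = DualStackHost({"local": "DECnet", "HEPnet": "DECnet", "ARPANET": "IP"})
print(host.stack_for("HEPnet"))   # -> DECnet
print(host.stack_for("ARPANET"))  # -> IP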

The shared backbone, the e-mail gateway, and the “dual stack” solutions created a high degree of interoperability among NORDUnet users. Individual users could, for most purposes, choose which protocols they preferred - they could switch from one to another based on personal preferences. And as IP and ARPANET were diffusing fast, more and more users found it most convenient to use IP. This led to a smooth, unplanned, and uncoordinated transition of NORDUnet into an IP based network.

One important element behind the rapid growth in the use of IP inside NORDUnet was the fact that ARPANET’s DNS service made it easy to scale up an IP network. In fact, this could be done by just giving a new computer an address, hooking it up, and entering its address and connection point into DNS. No change is required in the rest of the network; all the network needs to know about the existence of the new node is taken care of by DNS. For this reason, the IP network could grow without requiring any work by the network operators. And the OSI enthusiasts could not do anything to stop it either.
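The scaling property can be made concrete with a small sketch. This is our own simplification - a single dictionary standing in for the distributed DNS database, with made-up host names and example addresses - but it captures the point: adding a node touches only the name service, never the existing nodes.

# Sketch (our simplification): DNS as the only thing that must change
# when a node is added to an IP network.

dns = {}  # name -> address; stands in for the distributed DNS database

def register(name: str, address: str) -> None:
    dns[name] = address   # the single update the network needs

def resolve(name: str) -> str:
    return dns[name]      # every node looks up names on demand

register("host-a.example.no", "192.0.2.1")    # made-up names and addresses
register("host-b.example.se", "192.0.2.2")
register("new-host.example.fi", "192.0.2.3")  # new node: one entry, done

# No reconfiguration of host-a or host-b was needed to reach the new node.
print(resolve("new-host.example.fi"))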

The coherent network topology and the unified addressing structure also made the network scalable.

Nordunet and Europe

From the very beginning, participating in European activities was important for the NORDUnet project. The NORDUnet project also meant that the Nordic countries acted as one actor at the European level. This also helped them come into an influential position: they were considered a “great power” in line with the UK, France and (at that time West) Germany. However, the relationships changed when NORDUnet decided to implement the “plug.” This meant that the project was no longer going for the pure OSI strategy, and for this reason the Nordic countries were seen as traitors in Europe.

This made the collaboration difficult for some time. But as OSI continued not to deliver and the pragmatic NORDUnet strategy proved to be very successful, more people got interested in similar pragmatic approaches. The collaboration with the CERN community has already been mentioned. Further, the academic network communities in the Netherlands and Switzerland moved towards the same approach.

Through its pragmatic strategy and practical success, NORDUnet had a significant influence on what happened in Europe as a whole. The project thus contributed in important ways to the diffusion of IP and the ARPA/Internet in Europe - and reduced the possibilities for OSI to succeed.

On the notion of a gateway

The term “gateway” has strong connotations. It has traditionally been used in a technical context to denote an artefact that is able to translate back and forth between two different communication networks (Saleh and Jaragh 1998). A gateway in this sense is also called a “converter” and operates by inputting data in one format and converting it to another. In this way a gateway may translate between two different communication protocols that would otherwise be incompatible, as a protocol converter “accepts messages from either protocol, interprets them and delivers appropriate messages to the other protocol” (ibid., p. 106).
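A protocol converter in this sense can be sketched in a few lines. The two message formats and field names below are invented for illustration; the point is only the shape of the mechanism: accept from either side, interpret, and re-emit for the other.

# Sketch of a protocol converter (invented formats): it accepts a message
# from either protocol, interprets it, and delivers the corresponding
# message in the other protocol's format.

def a_to_b(msg: dict) -> dict:
    return {"dest": msg["to"], "payload": msg["body"]}

def b_to_a(msg: dict) -> dict:
    return {"to": msg["dest"], "body": msg["payload"]}

def convert(msg: dict) -> dict:
    # Interpret which world the message comes from, then translate.
    return a_to_b(msg) if "to" in msg else b_to_a(msg)

print(convert({"to": "x", "body": "hello"}))    # A-format in, B-format out
print(convert({"dest": "y", "payload": "hi"}))  # B-format in, A-format out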


The function of a gateway, as known from infrastructure technologies for communication and transport, is to translate back and forth between networks which would otherwise be incompatible. A well-known and important example is the AC/DC adapter (Dunn xxx; Hughes 1983). At the turn of the century, it was still an open and controversial issue whether electricity supply should be based on AC or DC. The two alternatives were incompatible and the “battle of systems” unfolded. As a user of electrical lighting, you would have to choose between the two. There were strong proponents and interests behind both, and both had their distinct technical virtues: AC was more cost-effective for long-distance transportation (because the voltage level could be higher), whereas the DC based electrical motor preceded the AC based one by many years. As described by Hughes (1983) and emphasized by Dunn (198xx), the introduction of the converter made it possible to couple the two networks. It accordingly became feasible to combine the two networks and hence draw upon their respective virtues.

Potentially confusing perhaps, but we generalise this technically biased notion of a gateway as an artefact that converts between incompatible formats. In line with ANT, we subsequently use “gateway” to denote the coupling or linking of two distinct actor-networks. Compared to the conventional use of the term, our use is a generalization along two dimensions:

• the coupling is not restricted to being an artefact but may more generally be an actor-network itself, e.g. a manual work routine;

• the coupling is between actor-networks, not only communication networks.

This needs some unpacking to see that it is not just a play with words. To show that this generalized notion of a gateway actually contributes something substantial, we will spell out the roles gateways play.

Other scholars have developed notions related to this notion of a gateway. Star and Griesemer’s (1992) concept of boundary objects may also be seen as gateways enabling communication between different communities of practice. The same is the case for Cussins’ (1996) objectification strategies. These strategies may be seen as constituting different networks, each of them connected to the networks constituted by the different practices through gateways translating the relevant information according to the needs of the “objectification networks.”


The roles and functions of (generalized) gateways

Generalized gateways (or simply “gateways” from now on) fill important roles in a number of situations during all phases of an information infrastructure development. The listing of these roles should be recognised as an analytic vehicle; in practice, a gateway may perform several of these roles simultaneously.

Side-stepping confrontation

The key effect of traditional converters is that they side-step - either by postponing or by altogether avoiding - a confrontation. The AC/DC adapter is a classic example. The adapter bought time so that the battle between AC and DC could be postponed, hence avoiding a premature decision. Instead, the two alternatives could co-exist and the decision be delayed until more experience had been acquired.

Side-stepping a confrontation is particularly important during the early phases of an infrastructure development, as there is still a considerable amount of uncertainty about how the infrastructure will evolve. And this uncertainty cannot be settled up front; it has to unfold gradually.

But side-stepping confrontation is not only vital during the early phases. It is also important in a situation where there already exist a number of alternatives, none of which is strong enough to “conquer” the others. We illustrate this further below, drawing upon e-mail gateways.

When one of the networks is larger than the other, this strategy might also be used to buy time to expand and mobilise the smaller network in a fairly sheltered environment (David and Bunn 1988).

Modularisation and decomposition

A more neglected role of gateways is the way they support modularisation. The modularisation of an information infrastructure is intimately linked to its heterogeneous character (see chapter 5). As we argued in chapter 8, the impossibility of developing an information infrastructure monolithically forces a more patch-like and dynamic approach. In terms of actual design, this entails decomposition and modularization. The role of a gateway, then, is that it encourages this required decomposition by decoupling the efforts of developing the different elements of the infrastructure and only coupling them in the end. This allows a maximum of independence and autonomy.


Modularisation, primarily through black-boxing and interface specification, is of course an old and acknowledged design virtue for all kinds of information systems, including information infrastructures (refs: Dijkstra, Parnas). But the modularisation of an information infrastructure supported by gateways has another, essential driving force that is less obvious. As the development is more likely to take ten years than one, the content is bound to evolve or “drift” (see chapters 5 and 9). This entails that previously unrelated features and functions need to be aligned as a result of this “drifting”. The coupling of two (or more) of these might be the result of a highly contingent, techno-economical process, a process which is difficult to design and cater for. Figure XX illustrates this. Cable-TV and telephone have a long-standing history as distinctly different networks. They were conceived of, designed and appropriated in quite distinct ways. Only as a result of technological development (XXX) and legislative de-regulation has it become reasonable to link them (ref. Mansell, Mulgan). This gives rise to an ecology of networks that later may be linked together by gateways.

Broadbent (XXX) describes an information infrastructure along two dimensions, reach and range, implying that changes can be made along the same dimensions. Changes along the reach dimension amount to adding nodes (or users) to the network, while changes to the range amount to adding new functions. Changes to the latter, the range, often take place through the drifting and subsequent coupling of two initially independent networks. Further below we use the case of MIME (Multipurpose Internet Mail Extension, RFC 1341) to highlight this.

Forging compromises

Gateways may play a crucial, political role in forging a compromise in an otherwise locked situation. This is due to the way a gateway may allow alternative interests to be translated and subsequently inscribed into the same (gateway-based) solution. The key thing is that the initial interests may be faithfully translated and inscribed into one and the same material, namely the gateway. In this way all the alternatives are enrolled and sufficient support is mobilized to stabilise a solution. This is important in dead-lock situations where no alternative is able to “win”. Mobilising the support of two or more alternatives through the use of gateways could very well be what it takes to tip the balance.


The polyvalent role of gateways

The way a gateway allows interaction with multiple actor-networks makes the context of use more robust, in the sense that a user may move fluently between different actor-networks. Examples include “dual stack” solutions and the IPng transition mechanisms (see chapter 10), as well as the coexistence of paper-based and electronic patient records. This polyvalent character of the gateway provides a flexibility that adds to the robustness of the use of the infrastructure.

Illustrating the roles of gateways

E-mail

E-mail is one of the oldest services provided by the Internet. The current version of the e-mail standard dates back to 1982 and developed through revisions spanning three years; an earlier version of the formats for e-mail goes back to 1977. The Internet e-mail service consists of two standards, both from 1982: one specifying the format of a single e-mail (RFC 822) and one protocol for the transmission of e-mails (Simple Mail Transfer Protocol, SMTP, RFC 821).

The e-mail boom in the US preceded that of Europe and the rest of the world by many years. Already in the 70s, there was a considerable amount of e-mail traffic in the US. There existed several independent e-mail services in addition to the Internet one, the most important being UUCP (a Unix-based e-mail service) and NJE within BITNET (RFC 1506, p. 3). The problem, however, was that all of these were mutually incompatible. Accordingly there was a growing awareness of the need to develop a uniform standard. This recognition spawned CCITT and ISO efforts to work out a shared e-mail standard that could cater for all by providing a “superset of the existing systems” (ibid., p. 3) - note that the first attempt was thus a universalist solution (see chapter 8), before gateways were resorted to. These efforts are known as the X.400 standards.


The X.400 initiative enjoyed heavy backing as it aligned and allied with the official, international standardization bodies (recall chapter 4). Especially the lobbying by West Germany was influential (ibid., p. 3). Promoting X.400 in Europe made a lot more sense than a corresponding move in the US, because the installed base of (Internet and other) e-mail services in Europe was insignificant (see chapter 9). X.400 based e-mail in Europe was fuelled by the free distribution of the EAN e-mail product to research and university institutions.

During the 80s, this created a situation where there were really two candidates for e-mail, namely Internet e-mail and X.400. The large and growing installed base of Internet e-mail in the US (and elsewhere) implied that one would need to live with both for many years to come. After the overwhelming diffusion of the Internet in the last few years, it is easily forgotten that during the 80s even the US Department of Defense anticipated a migration to ISO standards. As a result, the Internet community was very eager to develop gateway solutions between the ISO world and the Internet.


An e-mail gateway between X.400 and Internet has accordingly been perceived as important within the Internet community. It provides an excellent illustration of the underlying motivation and challenges of gatewaying. Even today, though, “mail gatewaying remains a complicated subject” (RFC 1506, p. 34). The fact that X.400 is really two different standards complicated matters even more. The X.400 from 1984 (written X.400(84)) was originally developed within IFIP Working Group 6.5 and adopted by CCITT. Only in 1988 did CCITT and ISO align their efforts in a revised X.400(88) version.

The challenge, then, for an e-mail gateway is to receive a mail from one world, translate it into the formats of the other world, and send it out again using the routing rules and protocols of that other world. There are two principal difficulties with this scheme.

First, there is the problem of translating between basically incompatible formats, that is, a thing from one world that simply has no counterpart in the other world. Second, there is the problem of coordinating different, independent e-mail gateways. In principle, e-mail “gatewaying” can only function perfectly if all gateways operate according to the same translation rules, that is, the different gateways need to synchronize and coordinate their operations. We comment on both of these problems in turn.


With X.400(88), an e-mail may be confirmed upon receipt. In other words, the mechanism is intended to reassure the sender that the e-mail did indeed reach its destination. This has no corresponding feature in Internet e-mail and is hence impossible to translate. The solution, necessarily imperfect, is to interpret the X.400/Internet gateway as the final destination; that is, the receipt is generated as the mail reaches the gateway, not the intended recipient. A bigger and more complicated example of essential incompatibilities between X.400 and Internet is the translation of addresses. This is in practice the most important and pressing problem for e-mail gatewaying, because the logical structures of the two addressing schemes differ. The details of this we leave out, but interested readers may consult (RFC 1506, pp. 11, 14-29).
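The receipt example can be sketched as follows. The field names are invented and the logic is our own condensation of the workaround just described: when the X.400 side asks for delivery confirmation, the gateway itself - not the real recipient - generates the receipt, so some of the original meaning is lost in the translation.

# Sketch (invented field names) of a necessarily imperfect translation:
# X.400's delivery confirmation has no Internet e-mail counterpart, so
# the gateway generates the receipt itself when the message passes through.

def x400_to_internet(msg: dict):
    internet_msg = {"To": msg["recipient"], "Body": msg["content"]}
    receipt = None
    if msg.get("confirm_delivery"):
        # Lossy workaround: confirm at the gateway, not at the destination.
        receipt = {"To": msg["sender"],
                   "Body": "Delivered - but only as far as the gateway"}
    return internet_msg, receipt

out_msg, receipt = x400_to_internet({
    "sender": "alice", "recipient": "bob",
    "content": "hello", "confirm_delivery": True,
})
print(receipt["Body"])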

The coordination of translation rules for e-mail gatewaying is attempted through the establishment of a special institution, the Message Handling System Co-ordination Service located in Switzerland. This institution registers, updates and distributes translation rules. As far as we know, there exists no survey of the penetration of these rules in currently operating gateways.

MIME

Independent actor-networks that have evolved gradually in different settings, serving different purposes, may need to be aligned and linked through a gateway because they somehow have “drifted” together. An illustration of this is the current developments around e-mail.

The conceptually self-contained function of providing an e-mail service gets increasingly caught up and entangled with an array of previously unrelated issues. An illustration is the current discussion about how to support new multi-media applications requiring other kinds of data than just the plain text of an ordinary e-mail message.

Conceptually - as well as historically - e-mail functionality and multi-media file types belong to quite distinct actor-networks. It was anything but obvious or “natural” that the two needed to be more closely aligned.

There were three underlying reasons for the pressure to somehow allow the 1982 version of Internet e-mail to cater for more than unstructured, US-ASCII text e-mails. First and foremost, the growing interest in multi-media applications - the storing, manipulation and communication of video, audio, graphics, bit maps, and voice - increased the relevance of a decent handling of the corresponding file formats for these data types. Secondly, the growth and spreading of the Internet prompted the need for a richer alphabet than US-ASCII; the majority of European languages, for instance, require a richer alphabet. Thirdly, the ISO and CCITT e-mail standard X.400 allows for non-text e-mails, and with an increasing concern for smooth X.400/Internet gatewaying, there was a growing need for non-text Internet e-mail.

The problem, of course, was the immense installed base of text-based Internet e-mail (RFC 822). As has always been the Internet policy, “compatibility was always favored over elegance” (RFC 1341, p. 2). The gateway or link between text-based e-mail on the one hand and multi-media data types and rich alphabets on the other was carefully designed as an extension of, not a substitute for, the 1982 e-mail. The designers happily agreed that the solution was “ugly” (Alvestrand 1995).

The gateway is MIME, the Multipurpose Internet Mail Extension (RFC 1341), and dates back to 1992, ten years after Internet e-mail. What MIME does is fairly straightforward. The relevant information about the multi-media data types included in the e-mail is encoded in US-ASCII. Basically, MIME adds two fields to the e-mail header: one specifying the data type (from a given set of available options including video, audio and image) and one specifying the encoding of the data (again, from a given set of encoding rules). Exactly because the need to include different data types in e-mails is recognized to be open-ended, the lists of available options for data types and encoding rules are continuously updated. A specific institution, the Internet Assigned Numbers Authority, keeps a central archive of these lists.
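The mechanism survives essentially unchanged in today’s software, so a minimal sketch can use Python’s standard email package (the image bytes below are a fake placeholder): the data type and its US-ASCII-safe encoding travel as ordinary header fields on top of the old text-based message format.

# Sketch using Python's standard email package: MIME describes non-text
# content through two header fields, leaving the 1982-style message
# structure intact. The image bytes here are a fake placeholder.

from email.mime.image import MIMEImage

img = MIMEImage(b"...pretend PNG bytes...", _subtype="png")
print(img["Content-Type"])               # image/png  <- the data type
print(img["Content-Transfer-Encoding"])  # base64     <- the encoding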

The political incorrectness of gateways

No one likes gateways. They are regarded as second-class citizens that are only tolerated for a while, as they “should be considered as a short to mid-term solution in contrast to the long term solution involving the standardization of network interfaces over which value-added services can be provided” (Saleh and Jaragh 1998, p. 105). A similar sentiment dominates within the Internet community.4 An intriguing question - beyond the scope of our analysis - is where these negative reactions are grounded. One reason is that gateways lose information and hence are “imperfect”. Infrastructure design, also within the Internet, seems to be driven towards “purity” (Eidnes 1996). As this purity is likely to be increasingly difficult to maintain in the future, it would be interesting to investigate more closely the role of, and attitudes towards, gateways within the Internet.


Based on the experiences outlined in earlier chapters, two kinds of gateways seem particularly relevant in health care information infrastructures. One is gateways linking different heterogeneous transport infrastructures together into a seamless web. The other is “dual stack” solutions for using different message formats when communicating with different partners.

E-mail is considered the best carrier of EDI messages. There exist gateways between most available products and protocols. These gateways work fine in most cases, but they cause trouble when features specific to one product or protocol are used. When X.400 systems are used in the way specified by most GOSIPs, which require that the notification mechanisms unique to X.400 be used, one cannot use gateways between X.400 systems and others lacking compatible mechanisms (Hanseth 1996b).

Experiences so far indicate that implementing and running dual stack solutions is a viable strategy. If a strategy like the one sketched here is followed, implementing tools for “gateway-building” seems to be a task of manageable complexity (ibid.).


4. The notion of a gateway is, perhaps surprisingly, not clear; it is used in different ways. In particular, it may be used as a mechanism to implement a transition strategy (Stefferud and Pliskin 1994). It is then crucial that the gateway translates back and forth between two infrastructures in such a way that no information is lost. Dual-stack nodes and “tunneling” (see chapter 10) are illustrations of such gateways. But gateways more generally might lose information, as does, for instance, the gateway between the ISO X.400 e-mail protocol and the e-mail protocol in the Internet. Within the Internet community, however, only gateways of the latter type are referred to as “gateways”; the former type is regarded as a transition mechanism. And it is the latter type of gateways which is not seriously considered within the Internet community.
