
TECHNOLOGICAL ISSUES

The Evolving Internet: Applications and Network Service Infrastructure

Clifford Lynch
Coalition for Networked Information, 21 Dupont Circle, NW, Washington, DC 20036. E-mail: [email protected]

This article has three goals. It begins by reviewing the current state of the Internet as a data transport system and some of its emerging problems. Then it discusses two major research initiatives—Internet 2 and the Next Generation Internet (NGI) program—and some of the new network-level services that may first see substantial deployment in these new research testbeds. The article concludes with a few speculations about future changes to the Internet as a data transport system.

While focusing on the Internet as a data transport system, rather than on its broader role as a communications and information distribution medium or as a social force, the perspective here is really driven by current and planned applications. Throughout the article, connections are highlighted between developments in network services, the implications of these developments for the support of current applications, and the design and deployment of next-generation applications. However, this article does not cover important developments in broader Internet service infrastructure, such as work on Uniform Resource Names1 or the efforts to develop a public key infrastructure to support authentication and electronic commerce,2 which will be important building blocks for applications.

As a survey, this article can only provide an overview of most of the topics discussed and a few references to additional information. Many areas are very active on technical, operational, business, and policy levels. While the content should be reasonably current as of this writing (January 1998), the status of some issues changes literally daily, and some of it may be out of date by the time it reaches print. The Next Generation Internet (NGI) program, for example, has been the subject of continuing negotiation among the Clinton administration, participating federal agencies, and Congress. In the last few months, the formation of the American Registry of Internet Numbers (ARIN)3 has effected major changes in the way fundamental components, such as the assignment of network number address space, are managed. Important issues about the administration of the domain naming system (DNS) still remain unresolved.4 The structure of the commercial

1 See the various RFCs and Internet Drafts on Uniform Resource Names (URNs); see also the articles on Digital Object Identifiers in the December 1997/January 1998 issue of the Bulletin of the American Society for Information Science.

2 See the various RFCs and Internet Drafts issued by the Public Key Infrastructure IETF Working Group for more information. For a broader survey, see Secure Electronic Commerce: Building the Infrastructure for Digital Signatures and Encryption by Warwick Ford and Michael S. Baum (Prentice Hall, 1997).

3 See www.arin.net. ARIN is a new not-for-profit organization that will take over the administration and registration of Internet Network Numbers, formerly handled by Network Solutions, Inc. under contract to the National Science Foundation. ARIN is responsible for the Americas, the Caribbean, and for sub-Saharan Africa; there are similar organizations that handle other geographic regions.

4 The policy debate about the management of the domain naming system, and in particular the creation of additional top-level domain names to supplement the familiar ".com," ".org," and the like, has had good coverage in the press, notably The New York Times. The essence of the issue is this: Historically, these have been managed by the Internet Assigned Number Authority (IANA) and implemented by a company called Network Solutions, Inc. under contract to the National Science Foundation. As the U.S. Government has disengaged from support of the operations of the Internet infrastructure, one proposal is to shift this responsibility to a consortium of registries which includes heavy international participation and organizations such as the World Intellectual Property Organization (WIPO). Other proposals that seem to have some support from the Clinton administration retain U.S. control over these for the next few years. See a January 1998 administration "green paper" available at www.ntia.doc.gov for a summary of these proposals. Administration of the assignment of domain names has been problematic as the Internet has commercialized, because of the interactions between domain names and trademarks. Trademarks often have a geographic scope, while there is no geography on the net. There are also other interesting subsidiary problems; for example, the familiar ".edu" domain for educational institutions has been subject to some debate about what organizations are eligible for listing within the domain. Historically these have been mainly institutions of higher education. Now some commercial educational ventures want to be included. EDUCOM, an association of higher education institutions concerned with the role of information technology in higher education, has recently proposed that it take over management of the ".edu" domain and is awaiting a response from the National Science Foundation.

© 1998 John Wiley & Sons, Inc.

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE. 49(11):961–972, 1998


for Comments (RFCs) and Internet Drafts can be found through the www.ietf.org site.

The Applications Perspective

From the perspective of production applications, the Internet is showing severe strains in terms of performance and reliability due to the rapidly growing demand for bandwidth, the rapidly increasing number of hosts and networks participating in the Internet, and shifts in the business model and structure of the Internet Service Provider (ISP) industry. In a sense, the Internet is now in danger of becoming a victim of its own success. The vision of networks and networked information to support access, commerce, and collaboration has been so compelling that organizations have adopted it wholesale, creating both massive increases in load and very high expectations for performance and reliability to support critical applications. These performance and reliability problems are now starting to call into question the commitment to the networked information model, using the current Internet as the access vehicle, as an organizational strategy. Corporations are looking at creating private subnets to ensure high-quality, high-reliability access to their key business partners, for example. Libraries that have licensed access to information resources on the net for their user communities are beginning to look at ways to move the information in-house or to gain dedicated access to remote resources. System architectures are appearing that explicitly include caching and mirroring as a means of circumventing problems with capacity and reliability of network services.

Supporting the current range of production applications and a wide range of emerging applications does not really require any additions to the Internet's present data transport service interface. It requires only improvements in available bandwidth, transport capacity, and reliability and robustness of the network. And certainly the experience of the past few years proves that the range of new applications that can be supported on the net is far from exhausted.

But there are new applications currently under development, still largely in laboratory settings, which do require substantial extensions to the basic network services infrastructure to deploy at a reasonable level of scale and reliability. These applications are driving the development of the new network services that differentiate the academic Internet 2 effort—and, somewhat, the NGI—from near-term developments in the broad "commodity" Internet.

Supporting Production Applications: State of the Internet in 1997

Within the United States, the Internet has evolved to an almost completely commercial operation9 dominated

9 The only remaining areas that have not shifted to commercial operations are the domain name system, which is still operated under contract to NSF (though this is an area of active debate, as discussed earlier),

Internet Service Provider marketplace was extremely volatile in 1997.5 The Telecommunications Act of 1996 paved the way for acquisitions involving MCI, Cerfnet, Uunet, and BBN Planet, to name a few, and stimulated the emergence of alternative service providers such as @home6 and Metricom,7 and Microsoft's investments in Internet access via cable television. Early 1998 developments, such as the announcement of an alliance to stimulate the deployment of Digital Subscriber Loop (DSL) technology, which includes Microsoft, Intel, and Compaq, suggest that this volatility will continue.

I use the terms "network services" and "the Internet as a data transport system" to refer to the set of services that an Internet host can access at the Internet Protocol (IP) level. In today's Internet, this is an unreliable best-effort datagram delivery service called unicast transmission, where packets are sent from a source host to a single destination host. There is no guarantee that packets will arrive; they may be discarded due to transmission errors, equipment failures, or congestion. They may arrive in a different order from which they are sent; and duplicate copies may be delivered. In practical terms, this IP service interface defines the most basic and universal set of Internet data transport services.8 The Transmission Control Protocol (TCP) is not a network service; it is a protocol implemented by Internet hosts which makes use of IP services and provides a reliable bi-directional byte stream over an unreliable datagram network service. Other familiar protocols, such as the file transfer protocol (FTP) or the World Wide Web's Hypertext Transfer Protocol (HTTP) are application-level protocols that are also implemented on Internet hosts. These protocols use TCP connections as a service and are layered on top of it. Other components of the Internet's infrastructure, such as the domain name system, are also accessed by interacting with various network hosts through the IP protocol.
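
To make this layering concrete, here is a minimal sketch (mine, not the article's; the host name and port numbers are hypothetical) contrasting the best-effort datagram service that IP exposes through a UDP socket with the reliable byte stream that TCP builds on top of it:

    # Illustrative sketch only; "example.org" and the ports are hypothetical.
    import socket

    # Datagram (UDP over IP): best effort -- a sent packet may be lost,
    # duplicated, or arrive out of order, and the sender is never told.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"status ping", ("example.org", 9999))   # no delivery guarantee

    # Stream (TCP): the host-side TCP implementation retransmits, reorders,
    # and acknowledges so the application sees a reliable byte stream.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.org", 80))
    tcp.sendall(b"HEAD / HTTP/1.0\r\n\r\n")             # delivered in order, or the call fails
    print(tcp.recv(200))
    tcp.close()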

Many of the references in this article are to Web sites where more current or detailed information can be found rather than to more traditional printed sources. Requests

5 These events are well covered in the national business press and also in a range of more specialized newsletters and magazines such as Inter@ctive Week. More analytic coverage can be found through publications like the Cook Report (see www.cookreport.com).

6 @Home is a for-profit public corporation (backed by several of the major cable TV operators, among others) that offers Internet connectivity to the home via cable television infrastructure.

7 Metricom is a for-profit public corporation that provides wireless Internet access. Currently it has coverage in the Washington, DC and greater San Francisco Bay areas. The company supplies small wireless modems which can be attached to laptop (or desktop) computers for mobile Internet access.

8 See RFC 791 for a definition of the IP protocol service interface. RFC 1122 and RFC 1123 may also be helpful to readers in understanding how host application software relates to the data transport services on the network.


lower speeds between T1 speed Frame Relay service and clear channel T1—shared communications infrastructure versus dedicated facilities.10) Basically, a growing investment in network connectivity is becoming part of the fundamental operating expenses of institutions. These institutions are not simply benefiting from the ability to buy much more bandwidth each year for relatively constant costs due to price reductions. Unless some of the new ventures developing massive fiber optic networks such as Qwest or Level 3 can introduce intensive pressure on very high bandwidth connection pricing, thus completely reshaping the economics of networking, we will see a cost crisis emerging by the end of the decade. Large organizations, both higher education and corporate, will be increasingly unable to afford (and in some cases even obtain, at any price) the capacity they need to connect to their backbone Internet service providers.

Connectivity to the network by individuals and small businesses has also become an increasingly serious bottleneck. Large corporations or educational institutions can afford very high-speed, dedicated links to an Internet service provider. This capacity is then distributed to individual users within the organization through private local area networks, which are now moving from Ethernet speeds (10 Mbits/second) to higher speeds using technologies such as 100 Mbits/second switched Ethernet. But the gap between the capacity of such institutional networks and the bandwidth affordable and available to a small business or an individual grows ever wider. Practically speaking, individuals and small businesses still have only very limited alternatives: Inexpensive dialup modems (operating at 33.6 Kbits/second or the new asymmetric 56 Kbits/second services); ISDN, a common carrier offering which is still expensive (prohibitively in some regions), has limited availability, and is stunningly difficult to install and configure, and which offers only a very modest improvement over the bandwidth available via dialup modems; inexpensive cable TV access at high speeds, but only available in very limited areas; wireless solutions with limited availability and often at low speeds; and high-speed DSL technologies, which are still more

10 One can argue that this is a symptom of a fundamental conflict in business models. The common carriers historically have believed that they should be providing shared public network infrastructure—X.25 networks being a good case in point—and that organizations should simply connect to the public net as provided by the common carriers. However, the common-carrier-provided nets have been a massive failure throughout the 1970s, 1980s, and early 1990s. They have been too expensive, slow, and inflexible for marketplace needs. So a class of businesses—ISPs—emerged that purchased raw bandwidth from the common carriers and used this bandwidth along with their own routers to construct networks. They sold connectivity to these networks to end customers. First with Switched Multimegabit Data Service (SMDS) and more recently with ATM, we have seen the common carriers deploy service offerings and pricing which try to encourage a return to "intelligent" common-carrier-provided networks and to move the common carriers away from simply selling raw bandwidth which other organizations use to build networks. Telephone companies are also buying up ISPs.

by a handful of major ISPs such as Sprint, MCI (apparently being acquired by Worldcom, although this acquisition is not yet complete and is subject to regulatory approval, at least in the U.S. and perhaps elsewhere), BBN Planet (recently acquired by GTE), and Uunet (already acquired by Worldcom). These providers operate major national backbones and offer backbone connectivity to many regional and local ISPs who resell access to the Internet. They also provide access directly to customers in competition with these same smaller local and regional providers. A number of government backbones are also part of the national Internet and support federal agency missions (such as ESNET for the Department of Energy and the NASA Science Internet), although even they are increasingly contracted out to the ISPs for implementation.

The major ISPs are spending enormous amounts of money and energy upgrading their backbone networks to support both the growth in traffic and the increasing number of customers they service. Just a few years ago, most customers were connected to the Internet by T1 (1.544 Mbits/second) links; and the primary national backbone (the NSFNET) was built from DS3 (45 Mbits/second) trunks. Today large organizations commonly use multiple DS3s to connect to the Internet; and many such organizations are rapidly demanding faster links, typically OC3 (155 Mbits/second) or above. The major ISPs are using OC3 trunks in their backbones and phasing in OC12 facilities (622 Mbits/second). Several of the major ISPs expect to be using at least some trunks at OC24 (1.2 Gbits/second) or OC48 (2.4 Gbits/second) rates this year. Note that trunks above DS3 speeds in this context assume the availability of fiber optic facilities. Both the common carriers and other private fiber providers are making huge investments in laying fiber to meet this demand.
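
As a rough sense of scale for these line rates (my own back-of-the-envelope arithmetic, not figures from the article), the following computes how long a one-gigabyte transfer would take at each nominal rate, ignoring protocol overhead, congestion, and host limitations:

    # Nominal line rates in bits per second; transfer time for 1 GB of data,
    # ignoring protocol overhead, congestion, and host limitations.
    rates = {"T1": 1.544e6, "DS3": 45e6, "OC-3": 155e6, "OC-12": 622e6, "OC-48": 2.4e9}
    payload_bits = 1e9 * 8  # one gigabyte

    for name, bps in rates.items():
        print(f"{name:>5}: {payload_bits / bps / 60:6.1f} minutes")
    # Roughly 86 minutes at T1, under a minute at OC-3, a few seconds at OC-48.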

The economics of this rapid expansion in bandwidth deserve some comment. In the late 1980s and early 1990s, the price of leased T1 circuits and T1 connectivity to the Internet dropped steadily. An organization could obtain T1 connectivity for a price of about $25K/year. Today prices for high-speed Internet connectivity are going up rather than down, as the ISPs develop a better appreciation of the capital and operating costs of supporting these traffic demands within their own infrastructure. DS3 Internet connections can cost hundreds of thousands of dollars each year, and costs of OC-3 or OC-12 connections may surpass the million-dollar-a-year barrier, particularly if the customer wants clear channel SONET instead of shared ATM fabric access (much like the distinction at

and the management of what are essentially registries of one sort or another that are part of infrastructure maintenance: Network numbers (being shifted to ARIN), certain routing databases that are still being operated in part as research projects, and some registries being managed by the IANA in support of the Internet Engineering Task Force (IETF). It is unclear how this last area will be supported once government underwriting is phased out. However, the amounts of money involved in federal support of the infrastructure at this point are minuscule.


if they do not involve other very large ISPs. On the one hand, ISPs need to cooperate with each other to make the Internet work for the overall customer base. On the other hand, there is some competitive advantage to offering better service to customers who all use the same ISP, or at least to a small set of major ISPs who are comfortable dividing the market amongst themselves.

An increasingly common customer response to performance and reliability problems is multihoming: The purchase of Internet connectivity from two or more ISPs. Institutions are doing this for both reliability and performance reasons. The technical issues involved in multihoming are complex, and space does not permit a full discussion of these issues. But the growth of multihoming is adding substantially to the overall problems of scaling routing in the Internet.11

Another problem that has emerged recently is the assignment and management of network numbers. Historically, these were "owned" by end-user organizations, and an organization could move from one ISP to another, or contract with multiple ISPs to carry the organization's network connectivity to the broader Internet. Several years ago, it became clear that the Internet was running out of network numbers and that until the new IP V6, with a much larger address space, could be deployed to replace the existing IP V4, strategies would be needed to manage the existing number space more parsimoniously. Large blocks of address space were assigned to the major ISPs, who then provided sub-blocks to their customers as needed. Problems developed in two scenarios: When a customer wants to switch ISPs, a vastly expensive and disruptive readdressing of all the hosts must occur within the end-user organization, since network numbers are not necessarily portable between them. If a customer wants to multihome with several ISPs, technical issues arise when one ISP must carry a sub-block of another's address block.
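
A small sketch of the aggregation issue may help (my illustration; the prefixes are made up): a customer's addresses are a sub-block of its ISP's allocation, so the ISP can advertise a single aggregate route covering many customers; if the customer moves to another ISP and keeps its addresses, the prefix no longer falls inside the new provider's aggregate and must either be advertised separately everywhere or be renumbered.

    # Hypothetical prefixes for illustration only.
    from ipaddress import ip_network

    isp_a_block = ip_network("10.32.0.0/13")     # block allocated to ISP A
    isp_b_block = ip_network("10.96.0.0/13")     # block allocated to ISP B
    customer    = ip_network("10.34.16.0/22")    # sub-block ISP A delegated to a customer

    # ISP A can cover the customer with its single aggregate advertisement.
    print(customer.subnet_of(isp_a_block))   # True

    # If the customer switches to ISP B but keeps its addresses, the prefix is
    # outside ISP B's aggregate, so it must be advertised separately everywhere,
    # or the customer must renumber into ISP B's block.
    print(customer.subnet_of(isp_b_block))   # False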

Internet connectivity has been expanding more rapidly outside the United States than within it as the explosive growth of the network in the U.S. has become an international phenomenon. The net is now being used heavily to support international access to information resources, international collaboration, and to some extent international commerce. But the business models for funding international connectivity remain problematic. In the days when the Internet primarily supported the research and education communities, most of the international links were jointly funded by organizations such as NSF in the U.S. and various scientific organizations in other nations.

11 In essence, if organizations are connected to the Internet via a single ISP, route advertisement does not need to be highly dynamic, and routes to multiple organizations can be aggregated together when advertised beyond the ISP providing service. In cases where an organization is multihomed and served by several ISPs, route advertisement throughout the Internet needs to be dynamic since connectivity to the organization can change rather than merely fail. In addition, aggregation is problematic since the organization's network numbers will be part of, at most, one ISP's address block.

promised than actually available. This last-mile bottleneck will increasingly limit the deployment of advanced Internet applications, particularly those involving multimedia. There has been a massive market failure here, especially on the part of the common carriers.

It is impossible to make meaningful general statements about the performance of the Internet. One can only talk about the performance of a network path from one specific host to another. Suppose we have two hosts, A and B, that need to communicate. If the organizations owning the two hosts are both customers of the same ISP, then there is a single party responsible for the performance of the backbone network, and the organizations can reasonably contract for various performance guarantees—although most ISPs are still reluctant to enter into such contracts. If A and B are served by different ISPs, however, then no single party takes responsibility for performance, and the quality of the connection is determined by three factors: The performance of the ISP for A; the performance of the ISP for B; and how effectively traffic can move between these two ISPs—which may involve additional transit ISPs. Inter-ISP connections are emerging as a major complication in the Internet, as it appears that many performance problems occur at the interchange points between ISPs, rather than within the network run by any given ISP.

ISPs are said to "peer"—to exchange routing information and traffic—at various interchange points. The establishment of a peering relationship between two ISPs requires a commitment by both parties. Historically, peering was done largely at public interchange points such as MAE-EAST and MAE-WEST (on the east and west coasts, respectively), or at one of the handful of Network Access Points (NAPs) established as part of the phase-out of NSFNET. These public interchange points run extraordinarily heavy loads and are often congested, leading to high delay and packet loss. The MAEs, in particular, seem to be constantly pushing the state of the art to its limits. Partially in response to the problems with the public interchange points, some of the major ISPs, such as Sprint and MCI, have been establishing systems of "private" peering between the ISPs through specially engineered private interconnect points. This peering allows traffic between the networks to bypass the congestion at the MAEs. The criteria for establishing peering between ISPs, both at public exchange points and in private peering contexts, are a controversial business issue and are driving the stratification of the ISP industry. Very large ISPs only want to peer with other very large ISPs, and would prefer to do it privately for performance reasons. Smaller ISPs are being blocked from peering arrangements with the suggestion that they buy backbone access from one of the very large ISPs, and thus pay to use the peering relationships already established by the large ISPs.

The restructuring of the ISP industry based on peering relationships raises a number of interesting practical issues. Large ISPs have only a limited ability or incentive to invest in resolving inter-ISP interchange problems, particularly


by software, configuration or operational errors, the sheer scale of the ever-expanding Internet, and the growing trend of intentional attacks on the routing infrastructure. These cannot be fixed simply by throwing capacity at the problem.

In 1997, there were incidents in which routing information from incorrectly configured routers at customer sites propagated widely through the Internet, making large numbers of networks inaccessible for several hours. These problems have provoked complex technical and administrative debates about the responsibilities of Internet service providers at various levels of the service hierarchy to validate and filter routing information that they transfer across their boundaries to other providers. With more and more organizations (frequently inexperienced and without competent technical staff) running networks, it seems inevitable that these configuration errors will multiply and that we will need to adopt policies, procedures, and technical approaches that treat routing information with a much higher degree of skepticism. Current protocols and router implementations are limited in this regard, and a high degree of expertise is required to establish and operate robust configurations. And considerable administrative overhead is required to manage routing configurations which are designed to localize and quarantine the effects of customer configuration errors.
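
The "skepticism" being argued for can be pictured as a prefix filter at the provider edge: accept from a customer only routes that fall within the address space registered to that customer, and drop everything else. The following is a simplified sketch of the idea, using reserved documentation prefixes; it is not a rendering of any particular router's configuration language.

    # Simplified route-filtering sketch; prefixes are reserved documentation space.
    from ipaddress import ip_network

    # Address space the provider has registered for this customer.
    allowed = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/24")]

    def accept(advertised_prefix):
        """Accept a customer route only if it lies within the registered space."""
        prefix = ip_network(advertised_prefix)
        return any(prefix.subnet_of(a) for a in allowed)

    print(accept("192.0.2.128/25"))   # True: within the customer's allocation
    print(accept("0.0.0.0/0"))        # False: a default route leaked by mistake
    print(accept("203.0.113.0/24"))   # False: someone else's network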

Convergence in the routing infrastructure—that is, the ability of all of the components within the routing infrastructure to share a common view of reachability of the component networks of the Internet—is becoming a serious scaling issue. Basically, when routes to a given network change (perhaps due to a trunk or router failure), not only does it take time for the new routing information to propagate through the Internet, but it takes considerable bandwidth and processing overhead to propagate the route updates. As the number of routers and routes grows, the amount of time for an update to propagate increases. While the information is moving through the routers, the network is in an anomalous state, and routing loops or other problems can occur, leading to delayed or lost traffic. These problems correct themselves automatically once convergence reoccurs. But the network has become vulnerable to "routing flaps." For instance, a series of intermittent failures might cause the reachability of a given network to change repeatedly in a very short period of time. Internet service providers face tradeoffs between the implementation of route damping functions—which allow them to act as "good citizens" within the routing infrastructure by discarding notifications of frequent connectivity changes—and the promise of quality of service to individual customers who may be connected by a trunk that suffers occasional intermittent failures. The network was originally designed to reconfigure dynamically around failures of all kinds. It now is so large that it must inhibit its ability to respond to short-term dynamic connectivity changes in the interests of large-scale stability. Already, reachability rather than best-path selection
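
Route damping, one of the mechanisms alluded to above, can be modeled as a per-route penalty that grows with each reachability change and decays exponentially over time; while the penalty exceeds a threshold, the route is suppressed. The parameter values in this sketch are arbitrary and chosen only to illustrate the behavior.

    # Toy route-flap-damping model; thresholds and half-life are arbitrary.
    import math

    PENALTY_PER_FLAP = 1000
    SUPPRESS_ABOVE   = 2000      # stop advertising the route past this penalty
    REUSE_BELOW      = 750       # re-advertise once the penalty decays below this
    HALF_LIFE        = 900.0     # seconds for the penalty to halve

    class DampedRoute:
        def __init__(self):
            self.penalty = 0.0
            self.suppressed = False

        def decay(self, seconds):
            self.penalty *= math.exp(-math.log(2) * seconds / HALF_LIFE)
            if self.suppressed and self.penalty < REUSE_BELOW:
                self.suppressed = False

        def flap(self):
            self.penalty += PENALTY_PER_FLAP
            if self.penalty > SUPPRESS_ABOVE:
                self.suppressed = True

    route = DampedRoute()
    for _ in range(3):            # three quick flaps push the route into suppression
        route.flap()
    print(route.suppressed)        # True
    route.decay(35 * 60)           # after about half an hour of stability...
    print(route.suppressed)        # False: the route is usable again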

As the network has commercialized in the U.S., the framework for funding the growth of the international links has not kept pace. This situation is greatly aggravated by the ridiculously high costs of many of the international links (caused by the pricing established by national telephone companies in other nations). From the network user's point of view, the problem is clear: Links across the Atlantic and the Pacific tend to be so heavily congested that they are often useless; and it is increasingly necessary to mirror data in Europe and Asia rather than use the network to fetch it on demand from the U.S. Another response to congestion and bandwidth constraints is the use of caching servers, which are now being deployed on a large scale within the U.K.'s higher education community.

There are really two distinct issues. One is funding capacity through some sort of viable economic model. International telephony largely resolved this through a system of settlements; but there is no similar system in place today for the Internet. The second problem is high price, a result of monopoly pricing by various national telephone companies—particularly in the less-developed countries—which use international telecommunications as a major source of foreign currency into the local economy. It continues to be a problem both for traditional voice telephony as well as for trunks to support the Internet. Recently the U.S. Federal Communications Commission has launched initiatives to reduce the costs of international telecommunications.12

The other area which requires comment is the overall reliability and robustness of the Internet. Organizations are increasingly relying on the Internet to support critical elements of their day-to-day operations. Events in 1997 highlighted how vulnerable the Internet really is. There have been repeated physical disruptions—such as fiber cuts—that have taken many trunks out of service, both inside the ISPs' networks and those trunks that link customers to ISPs. These events have become so frequent that some network operators refer to 1997 as the "year of the backhoe." Such accidental physical disruptions will always occur, but their impact can be minimized by route diversity planning. (This can be difficult, particularly with fiber capacity shortages developing in some areas.) But the increasing capacity of optical fiber implies a growing concentration of traffic, and thus vulnerability. Technologies such as SONET rings, which include "self-healing" capabilities that reconfigure automatically around fiber cuts, will help if carriers are willing to deploy them. In addition, while interchange points like the MAEs are replicated (as well as incorporating massive internal engineering for redundancy, backup power, and the like), traffic loading has reached the point where problems at any one of these interchange points have widespread effects.

But of even greater concern are the problems generated

12 See the material on the FCC Web site, www.fcc.gov, concerning IB Docket 96-261 on international settlement rates.


of scholarly communication: Many professional and scholarly societies are offering large amounts of content on the net. The major commercial scholarly publishers are offering their journals for licensed access to libraries through the net as well. And community-wide electronic databases are changing the nature of disciplines such as molecular biology.

The realization of the networked information vision has not been without problems, however. As discussed earlier, chronic performance and reliability issues have plagued the commodity Internet and have provoked some institutions to question the wisdom of depending on the net to reach critical information resources. Some crucial infrastructure components, such as interorganizational authentication and access management, are still largely undeveloped, and the lack of these facilities is a serious barrier to networked information access.14

Today's new applications are predominantly content-oriented. They generally do not involve new protocols but rather new content and services layered above the existing protocol suite that is already established on the network. New key applications that have emerged on the Internet recently, such as the various Web indexing services, follow the same pattern. One exception is Internet Relay Chat (IRC), a text-oriented communications service that has some parallels with citizens band (CB) radio and that does come with its own protocol structure. Because this service is text-based, however, its demands and impacts on the network are negligible (except that people fight wars over the control of IRC channels using the kinds of denial-of-service attacks described earlier).

A few applications have emerged that try to push the envelope of existing network services, most notably Internet telephony, streaming audio and video services, and "push" technology. Each of these deserves brief discussion, because none of them work reliably and satisfactorily on the commodity network, and they set the stage for some of the new service requirements that are being met by Internet 2 and NGI.

Internet telephony replaces costly long-distance (usually international) voice telephone calls with digitized, packetized voice carried over the Internet. Sometimes Internet telephony is conducted end-to-end between two computers connected directly to the Internet. In other variations, gateways bridge an Internet telephony service and the voice phone network so that, for example, the Internet carries the international part of a call and the gateway computer uses an inexpensive local dial voice circuit to reach one of the parties that has only a telephone, rather than an Internet-connected computer. The computers digitize incoming voice, break it into packets, and transmit it across the Internet. The quality of the connection is dependent primarily on the packet delivery rate that the

14 See Clifford A. Lynch (1997). The changing role of authentication and authorization in a networked information environment. Library Hi-Tech, 15(1–2), 30–38. See also the CNI White Paper on Authentication and Access at www.cni.org.

based on dynamic loading serves as the underpinning of Internet routing.

Computers have been the subject of attacks for years. Networked computers, because they are easily accessed from remote sites, have been particular targets of such attacks. The traditional philosophy has been that each network host is responsible for maintaining its own security, and to some extent network operators have not paid too much attention to security issues. In the past couple of years, however, the network infrastructure itself has become both a victim of attack (for example, router break-ins, or the injection of falsified routing information to hijack traffic destined for specific networks) as well as a tool to mount denial-of-service attacks against hosts attached to the network. A current major denial-of-service attack strategy is based on the use of directed IP broadcast, sometimes called a "smurf" attack, after a program called smurf that made the rounds on the net and can be used to trigger such an attack.13 A relatively anonymous user on a low-speed dialup line can use this attack to generate megabits per second of traffic aimed at a target host, saturating routers and trunks along the way. An extensive effort is required to upgrade router configurations and routing protocols to resist these types of attacks, and to help isolate and track them when they do occur. The Internet today still reflects design assumptions which presume a benign, friendly environment of cooperation. These assumptions are no longer true, and the infrastructure needs to evolve to survive in a more hostile world.
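
The amplification behind the directed-broadcast attack is easy to see in miniature: a single packet addressed to a subnet's broadcast address is answered by every host on that subnet. The sketch below (using a reserved documentation prefix) shows only how a filter could recognize such destinations; the operational fixes involve router configuration and ingress filtering, which are beyond a few lines of code.

    # Illustration of the amplification risk behind "smurf": a single packet sent to
    # a subnet's broadcast address elicits replies from every host on that subnet.
    from ipaddress import ip_address, ip_network

    subnet = ip_network("192.0.2.0/24")   # documentation prefix, for illustration

    def is_directed_broadcast(destination):
        """A filter could drop (or refuse to forward) packets addressed this way."""
        return ip_address(destination) == subnet.broadcast_address

    print(is_directed_broadcast("192.0.2.255"))  # True: would be amplified by ~254 hosts
    print(is_directed_broadcast("192.0.2.10"))   # False: ordinary unicast destination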

The Shifting Nature of Applications on the Commodity Internet

For all of its problems, however, the Internet has been hugely successful. The number of users continues to grow rapidly, both in the U.S. and internationally. The cultural effects of the Internet are enormous: Increased access to government information and disintermediation in commercial practices (for example, massively increased access to investment information, and the growth of airline ticketing and automobile shopping by individuals through the net). At least in a few sectors, electronic commerce is becoming established. Consider the success of amazon.com in the book trade and the considerable growth in purchasing over the Internet during the 1997 Christmas shopping season.

By the early 1990s, the higher education community in the United States had largely accepted a vision of electronic access to shared, network-based information resources and had incorporated this vision into its strategic planning. Many of the components of this vision are finally becoming real. The Internet is affecting the practices

13 For details on this attack, see the material on "CERT Advisory CA-98.01 - smurf" at www.cert.org; see also "The Latest in Denial of Service Attacks: 'Smurfing': Description and Information to Minimize Effects" by Craig Huegen, http://www.quadrunner.com/~chuegen/smurf.txt.


information delivery to interest users in near-real-time access. Less-intrusive E-mail based approaches seem to work at least as well. Software updates are a promising potential application, but they are large and demand a lot of bandwidth if pushed to a large population of users.

All of these emerging applications offer tantalizing glimpses of what we may find on the Internet of the future. But they do not work well today for many network users, and are perhaps of more interest to highly dedicated Internet "hobbyists" than to the general public.

Next-Generation Applications and New Networking Initiatives

The current Internet cannot reliably support the existing massive volume of primarily character-based, interactive traffic or Web access. Yet there is a whole selection of new, highly interactive, and multimedia-rich Internet applications that are of great interest to the research and education community. They seem to be far beyond the support capacity of the current commodity Internet, but they are of compelling importance to those who are focused on ways in which networks and networked information can potentially transform teaching, learning, and scholarly communication. Sketches of these applications can be found both in the reports on the NGI program at www.ngi.gov and in the report of the Applications Committee for the Internet 2 initiative available at www.internet2.edu. The higher education and research communities are eager to explore and gain experience with these new applications.

While the major Internet service providers are certainly interested in this new class of applications, the higher education and research communities no longer drive the agenda or priorities for the growth and development of the commodity Internet. Commercial and consumer interests are in control, and the ISPs are having enough trouble stabilizing the existing commodity Internet and meeting demands for capacity growth. The introduction of poorly understood, near-research-level technology, with inherent problematic economic and business issues, is not their priority. Thus it makes sense that large-scale testbed networks are being developed outside the commodity commercial Internet to address the needs of the research and education community and to allow the ISPs to gain experience with next-generation applications.

Two major initiatives are emerging to establish such testbed network environments: The Internet 2 initiative, driven by the higher education community, and the NGI program, driven largely by the federal government and executive branch agencies concerned with high-performance computing and communications technologies.

Internet 2

The Internet 2 program was developed during 1997 by the Networking and Telecommunications Task Force of EDUCOM, an association of higher education institutions

Internet can sustain in a timely and reliable way between the endpoint computers (and, to a lesser extent, whether the endpoint computers have sufficient capacity to digitize the incoming voice signals). As the destination machine receives packets, it converts them back to analog signals and plays them through its speakers. If packets are delayed or lost, parts of the conversation will be disrupted. The network will discard packets in some cases when it is overloaded. On today's heavily loaded Internet, one cannot count on any particular data rate or level of reliability. At present, therefore, Internet telephony is much like amateur short-wave conversations: Sometimes they work and sometimes they do not. It is appealing primarily to people who are extremely price-sensitive and prepared to endure poor quality and reliability, or to hobbyists.
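
To put rough numbers on this packetization (my own arithmetic, not figures from the article): telephone-quality audio sampled at 8 kHz with 8-bit samples is 64 Kbits/second; cut into 20-millisecond pieces, that is 50 packets per second of 160 bytes each, and every packet that fails to arrive in time erases 20 milliseconds of speech.

    # Back-of-the-envelope packetization arithmetic for uncompressed telephone audio.
    sample_rate_hz   = 8000      # telephone-quality sampling
    bits_per_sample  = 8
    frame_ms         = 20        # audio carried in each packet

    bit_rate = sample_rate_hz * bits_per_sample              # 64,000 bits/second
    payload_bytes = bit_rate * frame_ms // 1000 // 8         # 160 bytes per packet
    packets_per_second = 1000 // frame_ms                    # 50 packets/second

    print(bit_rate, payload_bytes, packets_per_second)
    # Losing 5% of packets means roughly two or three 20-millisecond gaps every second,
    # which is why quality tracks the packet delivery rate the network can sustain.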

Streaming audio and video technologies, perhaps the best-known being the RealAudio system, are used to deliver audio and/or video content across the Internet. Basically, they use the same packetizing approach that is used for Internet telephony. These technologies try to send enough information before play starts at the recipient's machine so that buffering will cover network delays and occasional packet loss (interpolation techniques are also used to compensate for occasional lost packets). Fidelity on these services is often fairly low in order to keep the bandwidth requirements within reasonable bounds. If some part of a path, such as the last-mile connection to the user, is bandwidth-limited, no amount of buffering will enable it to transmit content that requires more than the limited bandwidth available on a continued basis. Good quality video simply will not fit down a 28.8 Kbits/second dialup connection. The options are to send lower-quality video or to move the whole high-quality video file down to the user's machine and then play it locally. Not only do streaming audio and video suffer from bandwidth constraints and packet loss; they are also an extremely inefficient way to provide "broadcast" material to a large number of network users at once, since they require a copy of the content to be transmitted individually to each user.
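
The point about buffering versus raw bandwidth reduces to a simple inequality: a buffer absorbs jitter and brief outages, but if the stream's average bit rate exceeds the link's sustained capacity, buffering only postpones the stall. A small sketch, with invented stream and link rates:

    # Why buffering cannot overcome a sustained bandwidth shortfall.
    # Rates below are illustrative, not measurements.

    def seconds_until_stall(stream_kbps, link_kbps, buffer_seconds):
        """How long playback lasts before the buffer drains (None = indefinitely)."""
        if stream_kbps <= link_kbps:
            return None                       # link keeps up; buffering only hides jitter
        deficit = stream_kbps - link_kbps     # kbits of shortfall accumulated per second
        return buffer_seconds * stream_kbps / deficit

    print(seconds_until_stall(stream_kbps=20,  link_kbps=28.8, buffer_seconds=5))   # None
    print(seconds_until_stall(stream_kbps=300, link_kbps=28.8, buffer_seconds=10))  # about 11 s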

Push technology, popularized by Pointcast and Marimba, releases us from navigating the Web for information. Instead, the user establishes an interest profile and then pertinent information is sent by a network server on an ongoing basis. Users can receive news headlines or stock quotes that fit that profile. Push technology has received limited acceptance, however. It is relatively inefficient, suffering from the same information broadcast limitations that affect streaming audio and video. Each user gets his or her own copy of each bit of information being distributed (imagine the release of a set of updates to a popular program like Windows 95 through a push channel). Users with bandwidth-constrained net connections, such as those using dialup, generally find that they do not want the overhead of having information pushed at them when they are trying to use the network for other purposes. And, finally, aside from stock quotes or sports scores, many of the services offering push information have not been able to offer sufficiently compelling personalized


that will drive Internet 2. There is also an assumption that higher-level infrastructure services (for example, authentication) will be consistently deployed throughout Internet 2 sites. While this higher-level infrastructure does not require the network services that will characterize Internet 2, it will have to support Internet 2 applications; and the assumption is that the relatively small number of Internet 2 sites (presumably sharing a common commitment to the applications goals driving Internet 2 development) will be able to deploy this infrastructure to a point of ubiquity much more quickly than the commodity commercial Internet.

Internet 2 is really an unfortunate name for this effort. It is at best confusing and at worst actively counterproductive. The intent here is not to develop a separate replacement for the Internet and then to disconnect the participating institutions from the existing worldwide Internet. The goal is to create a testbed for advanced applications. Internet 2 sites will continue to be connected to the current Internet. Nonparticipants will be disadvantaged only in that their ability to experiment with, and participate in, the development of certain next-generation network applications will be constrained. They will not lose connectivity to the Internet 2 participants. Use of the Internet 2 experimental network will likely be limited by an acceptable use policy that, for example, would forbid transit traffic between non-Internet 2 sites being carried across the Internet 2 experimental network, and perhaps even forbid traffic between an Internet 2 site and a non-Internet 2 site from transiting Internet 2 en route. Nonacademic connectivity to Internet 2 will likely be limited to commercial organizations that are partners in the development of advanced applications in partnership with the higher education community and that exploit the new services offered by the experimental Internet 2 network, although these policies are still under development.

Internet 2 is evolving rapidly. The most current information can be found at the UCAID Web site, www.ucaid.edu, and the Internet 2 Web site, www.internet2.edu.

The Federal Next Generation Internet Program

The NGI is a federal program proposed by the Clinton administration to advance the state of the art in networking within the federal government, and between government and academic sites. It figured prominently in many of the administration's public pronouncements during the 1996 election, and subsequently in the President's January 1998 State of the Union address. The NGI program will develop a network of some 1000 sites running at 100 times current network bandwidths (presumably, in the OC-3 range) and 10 sites connected at 1,000 times current network bandwidth (approaching gigabit speeds). The NGI and Internet 2 share many goals and approaches.

A discussion of the politics of NGI is beyond the scope of this article. The initial proposals received a lukewarm reception in Congress because the NGI did not plan to reach some primarily rural states, and because of concerns

concerned with information technology applications. The effort culminated in the establishment in late 1997 of the not-for-profit University Corporation for Advanced Internet Development (UCAID). Participating institutions pay substantial membership fees to UCAID. These fees plus contributions from industrial partners, local institutional investments, and NSF grants will provide the funding for the development of Internet 2 and prototyping of advanced applications that will run on it. There are four major components to the Internet 2 initiative:

1. The development of a high-speed backbone among a group of perhaps 100 institutions of higher education that can support and serve as testbeds for advanced applications. This backbone would be at least OC-3 and perhaps OC-12 in some areas, with a commitment to ongoing bandwidth upgrades. At least initially, the NSF vBNS15 experimental network is expected to provide much of the backbone connectivity.

2. High-speed institutional connectivity to the Internet 2 backbone. At a minimum, this will be DS3 (45 Mbits/second) service, with an expectation that OC-3 (155 Mbits/second) will be more typical, and that some sites will connect at still higher bandwidths. Internet 2 also introduces the idea of "gigapops"—interconnect points between regional groupings of Internet 2 participants and the backbone. These points will function as extended metropolitan area networks for local traffic within a region, and as interconnect points to move traffic between a regional group of Internet 2 institutions and ISPs offering commodity commercial Internet services, as well as providing connectivity to the Internet 2 backbones.

3. Commitments to upgrade local infrastructure to support high-speed networking. There is an implied move here from 10 Mbits/second Ethernet to new technologies such as 100 Mbits/second Ethernet, gigabit Ethernet, or ATM so that capabilities within a campus are consistent with those linking the campus to other Internet 2 sites. Certainly, all parts of participating campuses will not be immediately networked at Internet 2-consistent bandwidth. But this very costly part of the program (which is funded locally by institutional funds at each participating university, over and beyond the investment in the shared Internet 2 infrastructure) is essential for a meaningful deployment of advanced applications.

4. Commitments to the deployment of new network services, both on an on-campus and wide-area basis, which support advanced capabilities like quality of service guarantees and multicasting (discussed below). The ubiquitous availability of these capabilities will be required by the new generation of applications

15 For details on the NSF Very High Speed Backbone Network, see www.vbns.net. At present, this network runs at a mix of OC-3 and OC-12 speeds. It was originally created to link supercomputer centers and to support research in very high-speed networking applications; its role has recently been broadened to support the Internet 2 program on a broader basis. Added in proof: See the announcement of the Abilene backbone at www.ucaid.edu for information about a new, faster second backbone for Internet 2.


explicit target of Internet 2, some attention to middleware will clearly be needed to support advanced applications. Internet 2 may also provide some leverage for exploring middleware through early deployment, because the small number of sites represents a tractable community.

An excellent report on defining the architectural and service environments for Internet 2 and NGI, particularly regarding middleware services, can be found in the report of a 1997 workshop sponsored by the Computing Research Association at http://www.cra.org/main/research_chall.pdf.

The remainder of this section focuses on the common basic network services of NGI and Internet 2, and the applications they facilitate.

New Network Services

At the network services level, there is real convergence between the thinking of the Internet 2 and NGI programs: The two key new areas are quality of service (QoS) and multicasting. At higher software levels, there is less consensus and greater ambiguity about what is in the scope of each initiative. While there is a general recognition that "middleware" will play a fundamental role in facilitating applications development, there is little consensus on how to define or scope it precisely. Middleware might, for example, reasonably be considered to include persistent naming, security and authentication services, rendezvous, and coordination services for multisite interactions. Yet most of these functions do not depend on Internet 2 or NGI network-level services; they will be needed on the commodity Internet as well. Many of these middleware services have been the subject of active research within the computer networking research community, and have even seen preliminary standardization and limited deployment through work within the Internet Engineering Task Force. But unlike multicast and QoS, where there is substantial consensus about the fundamentals of the new services, agreement on middleware architectures has been elusive. The NGI program does include funding for some middleware research.

While middleware is not, as I understand it, an explicit target of Internet 2, some attention to it will clearly be needed to support advanced applications. Internet 2 may also provide some leverage for exploring middleware through early deployment, because the small number of sites represents a tractable community.

An excellent treatment of the architectural and service environments for Internet 2 and NGI, particularly regarding middleware services, can be found in the report of a 1997 workshop sponsored by the Computing Research Association at http://www.cra.org/main/research_chall.pdf.

The remainder of this section focuses on the common basic network services of NGI and Internet 2, and the applications they facilitate.

Quality of Service/Bandwidth Reservation

The basic idea behind QoS is that an application should be able to tell the network that it requires certain agreed-upon service guarantees in its communication with a remote site. For example, it should be possible to transmit a specified amount of data per second; packet loss should remain below certain thresholds; and the transit delay introduced by the network in the transmission of the data stream should be bounded by specific constants. The network would either guarantee the requested level of service, reserving the required network resources in order to meet the guarantees, or refuse the request because it is unable to meet the performance guarantees (perhaps with some diagnostic information indicating what levels of guarantees it is able to confirm).
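The exchange just described can be sketched schematically. The following Python fragment is purely illustrative: the request fields and the admit() routine are hypothetical and are not drawn from RSVP or from any other actual protocol.

    from dataclasses import dataclass

    @dataclass
    class QoSRequest:              # what an application might ask of the network
        bandwidth_bps: int         # sustained data rate to be transmitted
        max_delay_ms: float        # upper bound on transit delay
        max_loss_rate: float       # acceptable fraction of packets lost

    @dataclass
    class PathEstimate:            # what the network believes it can commit on a path
        spare_bps: int
        best_delay_ms: float
        expected_loss_rate: float

    def admit(req, path):
        """Either reserve resources for the request or refuse it, returning
        the levels the network could actually confirm."""
        if (req.bandwidth_bps <= path.spare_bps
                and req.max_delay_ms >= path.best_delay_ms
                and req.max_loss_rate >= path.expected_loss_rate):
            path.spare_bps -= req.bandwidth_bps      # commit the reserved capacity
            return {"granted": True}
        return {"granted": False,                    # refusal, with diagnostics
                "achievable_bps": path.spare_bps,
                "achievable_delay_ms": path.best_delay_ms,
                "achievable_loss_rate": path.expected_loss_rate}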

Clearly, QoS is essential to supporting all kinds of real-time applications, including the delivery of streaming video or audio, conferencing, and Internet telephony. But its applications are in fact much broader and encompass control of experimental apparatus, collection of time-sensitive telemetry (possibly at low data rates), and even high-performance, distributed simulations among multiple sites. Advanced applications may require multiple connections, each with different QoS parameters and involving multiple sites. There may develop a need for setup/rendezvous protocols that can be used to negotiate a possibly interdependent, parallel set of QoS guarantees.

While QoS is generally considered a critically important service, the definition of the precise parameters that characterize a communications path (such as bandwidth, loss rate, delay, and jitter) is still under debate. Part of the problem is defining which parameters can practically be subject to specific guarantees in a large, complex Internet. Another problem is what to do when guarantees must be altered because of changes in the network infrastructure, such as link failure, link quality degradation, or network load changes. There are some additional complexities in specifying acceptable loss rates. In some applications, interpolation can compensate for the loss of a limited number of packets; forward error correction can also be used to trade off bandwidth for protection against packet loss.


Consequently, there is an argument to be made that it is not sufficient simply to specify acceptable packet loss rates, but that it is also necessary to describe behavior in the face of packet loss: whether simply to drop lost packets in the interests of maintaining delay guarantees for other packets that are part of the data stream, or to perform some type of expedited retransmission of lost packets. Similarly, questions arise about packet delivery sequence: Is it desirable to ensure that packets are received in the same order that they are transmitted, even if a few are lost in transit? Or is it more important to maximize the number of packets that are successfully shipped from transmitter to receiver, even if this causes the packets to be received out of order?16

16 The problems of handling packet loss and sequencing as part of quality of service guarantees are used here to illustrate the overall scope of the QoS problem, as viewed from the applications perspective. In practice, the way QoS is being approached in the Internet partitions the problem a little differently. At the network services level, QoS is mainly about reserving bandwidth and router capacity so that guarantees about bandwidth and latency can be met. As a byproduct of reservation, there is also some assurance that packet loss due to congestion will be kept to a minimum. Dealing with other sorts of packet loss (for example, losses due to transient errors on the communications trunks) is left to the transport protocol. If TCP is used as the transport protocol, then ordering and data integrity are guaranteed, but at the cost of possible delays from the transport protocol when packets are lost. There are other, experimental transport protocols, more attuned to the needs of real-time applications, which can be used instead of TCP and which make other tradeoffs among delay, sequencing, and data integrity.
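The two extreme answers to the sequencing questions above can be illustrated with a toy receiver. This is not a real transport protocol; it simply shows, for a set of received sequence numbers, what a strictly ordered delivery policy and a delivery-maximizing policy would each hand to the application.

    def in_order_only(received_seqs):
        """Deliver packets strictly in transmission order; the first gap
        (a lost packet) blocks everything numbered after it."""
        expected, delivered = 0, []
        for seq in sorted(received_seqs):
            if seq != expected:
                break
            delivered.append(seq)
            expected += 1
        return delivered

    def maximize_delivery(received_seqs):
        """Deliver every packet that arrived, in arrival order, accepting
        gaps and reordering in exchange for lower delay."""
        return list(received_seqs)

    arrived = [0, 1, 3, 2, 5, 6]        # packet 4 lost; 2 and 3 reordered in transit
    print(in_order_only(arrived))       # [0, 1, 2, 3]
    print(maximize_delivery(arrived))   # [0, 1, 3, 2, 5, 6]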

Perhaps the simplest way to support quality of service guarantees is to massively overprovision network capacity. In such an environment, a best-efforts delivery approach, such as that used in the current commodity Internet, will simulate quality of service guarantees fairly effectively: If all traffic gets high-reliability, low-latency service, then QoS guarantees may be considered met by default. This was exactly the approach followed by some of the early experimentation using the network to support real-time traffic, and it may be a viable route for some early experimentation in the Internet 2 and NGI environments.

In more scalable and realistic implementations, resource reservation protocols are used. All routers in the path between source and sink for a QoS-guaranteed link reserve capacity, under the control of the resource reservation protocol, in order to meet the QoS parameters prior to actual application data transmission. In practice, however, how do you characterize the capacity to be reserved? What happens when routing changes cause the path between source and sink to shift? How is capacity administered: How does the network decide that a host or application is permitted to request reserved network capacity, and how is this request interfaced to authentication and resource management functions? This approach leads to complex and unprecedented couplings between very high-level authentication, resource management, and billing functions (which are often institutionally based rather than networkwide) and very low-level functions related to network services.

We do not yet know what sort of trust and authority models will be needed to extend institutional controls into a shared network environment.

QoS also raises complex business issues for the current Internet environment. How is billing accomplished, particularly when a path from source to sink may span multiple ISPs? How are priorities set for reserving capacity in a multiple-ISP environment? Under what conditions will one ISP be comfortable acting upon requests from a host serviced by another ISP which result in the reservation of substantial resources?

The networking research community has worked on these issues for a decade or more, resulting in the codification of RSVP, a resource reservation protocol, by the IETF. RSVP is still somewhat controversial within the Internet community. Many regard it as a valid approach to the problem of QoS guarantees; others are extremely skeptical about its ability to meet the requirements of practical, large-scale deployments.

RFCs 2205 through 2209 document the existing work on the RSVP protocol. Interested readers will also want to consult the Internet Drafts issued by the RSVP working group. Although it is the key element at the network service level, RSVP is only one component of supplying QoS to applications. Other components include a set of alternative transport protocols that can be used in place of TCP and are better tuned to the needs of various real-time applications.

Protocol definition is one issue; implementation practice is another. Routers along a path are requested to reserve resources. The protocol allows these resources to be defined and the action taken by the router to be conveyed back to the requester. We still need to define a baseline of engineering practice for reservation: Should routers reserve for the worst case, or attempt to use statistical principles to overcommit their resources, assuming that this can be masked by multiplexing?
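The two engineering postures just mentioned might be caricatured as follows; the overbooking factor is an arbitrary number chosen for illustration, not a recommendation.

    def admit_worst_case(existing_bps, new_bps, link_bps):
        """Admit a reservation only if the arithmetic sum of all
        reservations fits within the link capacity."""
        return sum(existing_bps) + new_bps <= link_bps

    def admit_statistical(existing_bps, new_bps, link_bps, overbook=2.0):
        """Admit against inflated capacity, betting that reserved flows
        rarely all burst at once and that multiplexing masks the overcommit."""
        return sum(existing_bps) + new_bps <= link_bps * overbook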

Quality of service is an essential network function for the support of multimedia. As such, it is a cornerstone of network applications that involve multimedia, distance education, collaborative activities, and real-time control or support of experimental apparatus. As QoS finds its way into the commodity commercial Internet, it will be a key element in bringing streaming audio and video and Internet telephony into broad-based use.

Multicasting

The other fundamentally new service that both Internet 2 and NGI expect from the network infrastructure is support for multicasting. It should be possible to "broadcast" a packet to a group of intended recipients in such a way that the packet is duplicated only where the paths to the recipients diverge. To the extent that there is a common path between the source host and the recipients, one copy of each packet traverses the network over this common path, providing efficient distribution of information to groups of recipients.


In some applications, multicast really functions like a directed broadcast, with a single source multicasting to a group of receivers. In other applications, all members of a multicast group transmit as well as receive traffic. Included in this second class of applications are simulations: Membership in a given multicast group simulates activity that is correlated with some specific geographic neighborhood within the simulation.

Multicasting is actually a surprisingly complex and nuanced network service. If one characterizes the network as offering best-efforts, unreliable delivery of packets, then it is fairly clear how to implement multicasting: Packets are simply duplicated and addressed to different hosts at various points during their transport through the network. Progress of packets from one node to the next continues to be governed by the usual best-efforts delivery rules of the network. Where there are many recipients for multicast data, this kind of basic multicasting support promises substantial efficiency gains over multiple point-to-point transmissions.
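The efficiency claim can be made concrete with a small calculation over a hypothetical delivery tree: each link carries one copy of a multicast packet regardless of how many receivers lie beyond it, whereas repeated unicast sends one copy per receiver over every shared link. The topology below is invented for illustration.

    from collections import Counter

    def link_copies(paths_to_receivers):
        """Each path is the list of links from the source to one receiver.
        Returns (copies per link under multicast, copies per link under unicast)."""
        unicast = Counter()
        for path in paths_to_receivers:
            for link in path:
                unicast[link] += 1
        multicast = {link: 1 for link in unicast}    # one copy per link used
        return multicast, unicast

    # Three receivers share the first hop; two of them also share the second hop.
    paths = [["A-B", "B-C", "C-R1"],
             ["A-B", "B-C", "C-R2"],
             ["A-B", "B-D", "D-R3"]]
    mcast, ucast = link_copies(paths)
    print(sum(mcast.values()))   # 6 packet copies with multicast
    print(sum(ucast.values()))   # 9 packet copies with repeated unicast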

Complementing the mechanics of multicast routing, however, are protocols that allow hosts to affiliate themselves with a multicast group or to disestablish such an affiliation. These protocols can potentially interact with higher-level authentication and authorization functions (which hosts are allowed to join a given multicast group?) that are normally considered to be beyond the scope of network services. Complex protocols are used to distribute and update the routing information needed to support the transmission of multicast packets. To support the general use of multicast for information distribution applications, additional higher-level services, such as guides to available multicast "channels," will be needed.
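The host side of this affiliation step is visible in the standard IP multicast socket interface. The sketch below, using Python's socket module, joins a group, receives one datagram, and leaves; the group address and port are arbitrary examples, and any policy about who may join would have to be enforced elsewhere, as noted above.

    import socket
    import struct

    GROUP, PORT = "224.1.1.1", 5007        # arbitrary example group and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Ask the local stack to affiliate this host with the group; the stack in
    # turn uses the group membership protocol (IGMP) to inform nearby routers.
    membership = struct.pack("4s4s",
                             socket.inet_aton(GROUP),
                             socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

    data, sender = sock.recvfrom(1500)     # receive one multicast datagram
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, membership)
    sock.close()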

Even at its most basic level, multicasting is nontrivial in a multi-ISP environment. Consider the complex issue of billing (settlement) for traffic moving between one ISP and another: In unicast applications, the number of packets transiting between the ISPs is closely correlated with the load that one ISP places upon another. In multicast applications, however, a small number of packets entering an ISP's networks may be multiplied enormously because of the large number of recipients. ISPs are already having considerable difficulty managing the interchange of routing and reachability information to support normal unicast packet delivery; the routing protocols needed to support multicast add complexity and overhead that many ISPs are not eager to accept. Multicast has been deployed on the current commodity Internet for a number of years through the MBONE, which can be viewed as a special-purpose multicast overlay built on top of parts of the Internet. It is used for audio and video broadcasts of events such as IETF plenary meetings. But the MBONE is very bandwidth-constrained and far from pervasive. We are a long way from permitting general-purpose applications to make large-scale, casual use of multicasting capabilities.
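The settlement asymmetry described above is easy to quantify with invented numbers: a modest packet stream handed across an exchange point can translate into a very large delivery load inside the downstream ISP.

    packets_in_per_second = 100        # packets handed over at the exchange point
    receivers_inside_isp = 5000        # group members served by the downstream ISP

    internal_deliveries = packets_in_per_second * receivers_inside_isp
    print(internal_deliveries)         # 500000 packet deliveries per second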

TCP converts a path between two hosts, linked by a network offering unreliable packet delivery service, into a reliable byte stream between two endpoints. The definition, much less the actual implementation, of reliable multicast transmission remains a research problem. There are multiple directions in which the properties of reliable unicast transmission (defined by the behavior of TCP) can be generalized to the multicast environment. An active Internet Research Task Force working group is struggling with the definition of reliable multicast and with the development of efficient protocols to implement it.

Because both multicasting and quality of service are important for many multimedia and real-time applications, they will often be used together. One might want to multicast information to a group with guarantees appropriate for the transmission of video or audio. How should quality of service be characterized for a multicast group, particularly when a host joins or leaves that group and the achievable level of QoS for the group as a whole then changes? A host on a slow dialup link might join a multicast group supporting a real-time interactive game where all the other nodes are connected via Ethernet to the network backbone.17 Indeed, there is considerable variation from application to application in the way one wants controls on group membership to interact with QoS. The RSVP protocol has been designed, to some extent, to work in multicast environments. But highly dynamic multicast groups are likely to present problems, because it may be necessary to reissue reservations whenever group membership changes.18

17 With the deployment of multiplayer interactive games on the network today, even without multicasting, there are considerable problems caused by the disparity of connection speeds among the players. This has given rise to the practice in some gaming communities (mainly with "action" games) of excluding players on high-speed connections (for example, students playing from dormitory rooms that are directly wired to a university network) from some games, or, conversely, of limiting participation in games only to players on high-speed connections.

18 RSVP is designed so that reservations time out and have to be reissued periodically anyway. This is done in part to allow reservations along a path to adapt to routing changes that may take the path through different sets of routers and trunks over the course of its life. In practice, it is unclear how often it will be either necessary or practical to reissue reservations, or what the implications for application stability will be in heavily loaded networks where there is contention for resources.
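The periodic reissue described in footnote 18 is often called "soft state." The loop below is a schematic illustration of the idea only; the three callables are hypothetical hooks supplied by an application, not part of any real RSVP interface, and the refresh interval is simply an example value.

    import time

    def maintain_reservation(send_reservation, membership_changed, stop_requested,
                             refresh_interval=30.0):
        """Reissue the reservation on a timer so it survives router timeouts and
        adapts to routing changes, and reissue immediately when the multicast
        group membership changes."""
        last_refresh = 0.0
        while not stop_requested():
            now = time.time()
            if membership_changed() or now - last_refresh >= refresh_interval:
                send_reservation()
                last_refresh = now
            time.sleep(1.0)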

A good review of the state of the art and the open research issues in multicasting can be found in the April 1997 issue of the IEEE Journal on Selected Areas in Communications, devoted to network support for multipoint communication. Readers will also find the Internet Drafts developed by the IETF working group on Large Scale Multicast Applications (LSMA) to be of interest. In addition, there is an extensive set of RFCs about routing protocols and protocol extensions that support multicasting.

Like QoS, multicasting is clearly a major applications enabler, both in terms of functionality and efficiency. We are just beginning to understand where multicasting can be useful, and testbeds like Internet 2 will undoubtedly help direct its evolution. I would speculate that multicasting will find numerous unexpected roles.


The work of the IETF LSMA group demonstrates, for example, how multicasting can play a key role in making very large-scale, high-performance simulations possible.

Conclusions: Enabling Applications beyond High-Performance Networking

While NGI and Internet 2 both emphasize very high-performance network applications, there are some other technologies moving into deployment, both in the experimental environments and in the commodity commercial Internet. These new technologies will further expand the set of basic network services available and enable a broad range of new applications.

The first set of technologies is the incorporation of one-way broadcast channels. Hosts might obtain Internet service using a telephone line to transmit and a Direct Broadcast Satellite channel to receive. Many recipients would share the broadcast channel, picking out the packets addressed to them. While these technologies do not change the network services seen from the host perspective, they do require some modifications to routing protocols and other infrastructure components. Their most important potential impacts will be economic: They may provide yet another option for making high-bandwidth network access, at least for retrieving information, both more available and more affordable.

The second set of technologies, collectively known as Mobile IP, provides support at the network services level for nomadic hosts, or even entire networks (such as a ship or an airplane). These technologies will often be combined with radio-based or satellite-based trunking. The new generation of low-earth-orbit satellites now being deployed by various corporations may increase interest in this area, as will developments in digital wireless services. Many mobile hosts will be at the end of slow, relatively unreliable links, putting pressure on application developers to build applications that can scale intelligently across an ever broader range of performance parameters. Mobile IP involves a complex set of protocols and infrastructure changes that are beyond the scope of this article.19 Mobile IP has complicated interactions both with RSVP and with multicast support. It is likely that some of the Internet 2 and NGI participants will also explore Mobile IP services.

19 On Mobile IP, see Mobile IP: Design Principles and Practices by Charles E. Perkins (Addison-Wesley, 1997), and Mobile IP: The Internet Unplugged by James D. Solomon (Prentice-Hall, 1997).

A successor to the existing IP protocol (IP version 4), called IP version 6, is seeing some deployment. From a network services perspective, there are few major changes in IPv6.20 IPv6 contains much more extensive support for security. It is designed to support automatic configuration of hosts. Its support for mobile hosts is somewhat cleaner than that in IPv4. In addition, and perhaps most importantly, it moves from a 32-bit address space to a 128-bit address space, paving the way for a huge number of hosts and networks to become part of the global Internet. This explosion in connectivity is the final trend that deserves some comment.

20 There are several good tutorial books on IP version 6. See, for example, IPv6: The New Internet Protocol by Christian Huitema (Prentice-Hall, 1997).
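The significance of the move to 128-bit addresses is easy to see numerically; the fragment below uses Python's standard ipaddress module (which, of course, postdates this article) simply to count the two address spaces.

    import ipaddress

    ipv4_space = ipaddress.ip_network("0.0.0.0/0")   # the whole 32-bit space
    ipv6_space = ipaddress.ip_network("::/0")        # the whole 128-bit space

    print(ipv4_space.num_addresses)                  # 4294967296, i.e. 2**32
    print(ipv6_space.num_addresses)                  # 2**128, about 3.4e38
    print(ipv6_space.num_addresses // ipv4_space.num_addresses)   # 2**96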

Currently, we tend to think of the Internet as supporting only computers. Over the next few years we will see a proliferation of sensors and machines of various types connected to the network: cameras, badge trackers, intelligent highway systems, security systems, medical diagnosis and patient monitoring devices, and appliances. These new data sources and devices will stimulate tremendous growth in new applications. Many of these new applications will not be bandwidth-intensive (taking a blood pressure reading, turning on a washing machine, reading a utility meter, or measuring traffic congestion at a freeway on-ramp), but they will place a premium on reliability, security, and authentication. Over the next decade, sensing and control will develop as important categories of applications alongside the existing communications and information access and distribution applications.
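As a sketch of the kind of small, authenticated message such a device might emit, the fragment below attaches a keyed hash (HMAC) to a tiny telemetry report. The field names and the shared key are invented for illustration; real deployments would need genuine key management.

    import hashlib
    import hmac
    import json
    import time

    SHARED_KEY = b"example-key"    # invented; stands in for real key management

    def sensor_report(sensor_id, reading):
        """A low-bandwidth telemetry message with an integrity check so the
        receiver can verify its origin."""
        body = json.dumps({"sensor": sensor_id,
                           "reading": reading,
                           "timestamp": time.time()}).encode()
        tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return body, tag

    def verify(body, tag):
        expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    message, tag = sensor_report("meter-17", 42.5)
    print(verify(message, tag))    # True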

The legitimate role of advanced applications testbeds like Internet 2 goes beyond simply exploring the outer reaches of high-performance networking. The current priorities for the commodity Internet are dominated by commercial demands and business considerations. There is a broad array of scientific, educational, social, and public policy priorities that set a research agenda for new networked applications of all kinds, and it is important that we have testbeds that can support and respond to these applications agendas in a timely and agile fashion. Mobile IP and sensing and control applications are excellent examples. NGI and Internet 2 reaffirm a commitment to advanced application development that goes beyond commercial services. They represent a mechanism for allowing the research and education communities to regain control of the agenda and priorities for this development.
