
The network as a service: a primer on the fundamentals of network economics

Robin J. Bailey
Implied Logic Limited
Cambridge, UK
Phone: +44-1223-257717; fax: +44-1223-257800; e-mail: [email protected]

978-1-4244-6705-1/10/$26.00 ©2010 IEEE

Abstract—The current trend towards hosting and cloud computing is the latest popular embrace of the compelling and timeless case for sharing systems resources. The author imagines a world without the essential services of a network operator and examines the cost dimensions of creating an end-to-end fixed or wireless private network from scratch. Investors may query what will be a service provider’s most enduring asset.

We take for granted the ability to speak or send data around the globe, and the conventional wisdom is that connectivity has become a commodity. As computer users and business students lose interest in how it all works, there is a risk that investors may lose sight of where the indispensable value in a network business lies.

We present a short primer on the fundamentals of fixed and wireless networks, and readily demonstrate why the cost of building a wholly private network would be prohibitive for all but the most powerful institutions. Acknowledging the mathematical certainties which make this so, we also illustrate the earning power of well-utilised communications equipment.

Finally we examine the factors which are driving businesses to adopt hosting and cloud computing services. While the arguments are sometimes compelling, we question whether the value is in the connectivity or the computing. An investor needs to make a shrewd judgement about which elements of this value chain are really sustainable.

Index Terms— Network economics, Communication systems, Application hosting, Cloud computing, Modelling, Software tools.

I. INTRODUCTION

Here is a radical idea. Imagine a company with offices in several different cities, or even countries, trying to establish a mechanism for exchanging messages and data between those offices. Or consider a wealthy family trying to set up a similar system to stay in touch with cousins abroad.

The utilisation and geographical overhead of such an endeavour would be exorbitant. (Wouldn’t it?) But there is nothing new, unique or original in these requirements. On the contrary, there would be near universal demand for such a service, if there were sufficient economy of scale to tempt an investor or engineer to create a company which could offer such a communications network as a service.

The children of the 21st century are used to digital TV and radio, 3D cinema and two-way interactive media, and are more likely to use a laptop as a Facebook console or gaming device than as a programming environment. Software development and perhaps network design have both had their popular heyday and are now fast returning to the exclusive domain of the professional engineer. The majority simply expect the infrastructure around them to function, with little care or consideration for how it works. And if it doesn’t, well, they can always buy a new one!

This paper is written as a primer for those new to the economics of telecoms networks, or for those considering investments in traditional service providers compared to contemporary applications and systems hosting providers. We provide an insight into the end-to-end requirements of fixed or mobile network provision, and the geographical deployment considerations which many business plans seem to skirt over even today.

We look at the modern gold rush towards systems hosting, and observe that there is nothing new in the economic value of offering a slice of a large distributed system to a broad market of smaller players who could not afford to own such a system in their own right. (For cloud computing, previously it was too hard; and maybe soon it will be too easy.) Finally we speculate that, with few barriers to competition, many of these automation-based services are doomed to inevitable commoditisation and price erosion, compared to the inescapable logic of providing network access as a service.

II. THE COST OF A TRULY PRIVATE NETWORK

Imagine you are the CIO of a company with multiple, geographically removed offices, and that you don’t have the option of connecting through a network service provider. (This might be for reasons of extreme confidentiality or exceptional national security, through to the imagined situation where such a service is not already on offer.)

Just what kind of infrastructure would be required to connect to a common database, or make a secure phone call between company sites? There are two broad categories of communications transport: fixed, encompassing everything (including string) from copper pairs and coaxial through to modern fibre; and wireless, including point-to-point, 2G/3G/4G cellular, broadcast UHF and satellite technologies.

Given the number of IP-enabled voice and data devices on a modern company LAN, it is reasonable to assume that any newly-built custom network would have a strong IP flavour, but that does not alter the traditional split between access and core network.



If you cannot afford path-protection (even assuming you could afford the rest of it!), then as soon as you have more than two end points, it is near certain that the total distance and cost will be minimised by planning around a core network portion, with individual sites or clusters of sites connecting in through a number of peripheral access or feed networks.

Fig. 1. Minimising distance and cost between three sites.

In this regard, network planning is very much like transport planning, when you consider that motorways are never built to service individual addresses! The roadway metaphor is also a useful way to explain the concept of path-protection. If a road is closed by an accident, congestion, maintenance or disaster, then there will be alternative routes to all but the most remote destinations. Similarly, if there is a critical requirement for a communications network to be on 24×7, then there has to be a contingency for equipment or route failure.

Even with the high reliability of modern electronics, external factors such as civil works, electrical surges or water ingress (to name but a few), human error, or even occasional component failure can all lead to an interruption of service which may not be readily fixed. For high resilience, a network must be planned with redundant path connectivity so that no one element or link is critical.

Fig. 2. Connecting suburban offices on a peripheral ring provides path-protection and avoids high planning and building costs in the city centre.

This slightly changes the tipping point for a core network, in that a number of suburban offices around a city may be better and more efficiently served with a single ring structure than forcing the traffic to a central core (especially if the planning controls and building costs are more onerous further in). But it remains the case more generally that clusters of sites are still likely to be more efficiently connected to other clusters by a common core network.

Fig. 3. Local clusters of sites connecting around access rings onto a higher-bandwidth core network.

Access technologies abound, ranging from FTTH solutions such as GPON in dense urban situations through to satellite-based solutions in very rural areas. Core networks are predominantly fixed because of considerations of distance and overall throughput. The laws of physics impose certain limitations on radio-based solutions in terms of reach and spectral efficiency. (Don’t they?)

A. Getting a fixed network connection

The fastest and only scalable contender for a modern fixed network connection is fibre, but establishing your own private fibre connection would be a costly endeavour, when you consider how slow many incumbent operators have been to deploy it, except where forced to by competition from cable operators or by rare government intervention.

The cheapest path is ‘aerial’; i.e., suspended from a series of poles every 35m. If you have travelled to developing countries and observed the aerial delivery of copper networks from a single operator, just imagine the impact on the environment of multiple private fibre networks distributed in this way! Sadly the alternative of underground trenching and ducts is much more expensive, and highly dependent on municipal utility planning. It also requires inspection pits at least every 300m.

To cover longer distances, you would be looking for ways to avoid these local planning considerations. Large carriers have built trunk fibres alongside railways or highways, or even along waterways. Again it would be a planning nightmare if everyone wanted to run a private fibre along such isolated resources! Whereas a passive GPON network is designed for urban connections, these longer routes would instead be served by active optical transports with repeater amplification every few hundred kilometres. Where clusters of sites are aggregated onto a core DWDM system, Ethernet technology may be used to aggregate traffic from sites on one local ring to another.
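To give a feel for the scale involved, the following minimal Python sketch simply counts the physical elements implied by the spacings quoted above; the 1000km route length and the 200km repeater spacing are illustrative assumptions, not figures from any specific deployment.

import math

# Element counts implied by the spacings quoted in the text:
# poles every 35m (aerial), inspection pits at least every 300m (buried),
# and repeater amplification every few hundred km on long routes.
route_km = 1000                               # illustrative route length
route_m = route_km * 1000

poles_if_aerial = math.ceil(route_m / 35)     # roughly 28,600 poles
pits_if_buried = math.ceil(route_m / 300)     # roughly 3,300 inspection pits
repeaters = math.ceil(route_km / 200) - 1     # assuming ~200km repeater spacing

print(poles_if_aerial, pits_if_buried, repeaters)

Every one of those elements has to be planned, installed and maintained before a single bit is carried.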

Besides the sheer number and cost of these elements, it is the geographical extent and maintenance implications of the end-to-end deployment which really bring home the economic incentive to share such assets. In a yet more established metaphor, imagine ‘being your own postman’. If you had to supervise the delivery of every packet first hand, you would never do anything else, and you would become a full-time communications network of the oldest sort!


B. Getting a wireless network connection

Subject to various line-of-sight issues, depending on the spectrum used, a point-to-point radio link could be the most practical solution for a truly private network. By using a series of towers, which themselves can be leased without any concern about mixing network traffic, and which might be required every 50–200km depending on the terrain, you could connect multiple business sites across a nation.

This has been the traditional solution for connecting regional TV broadcast towers. Because of the point-to-point beam design of the radio signal, this technology is relatively immune to physical eavesdropping.

Very remote sites might be more effectively connected by satellite; but then are you really going to pay to put your own communications hardware in orbit (with a backup on standby to launch if the first fails)? Imagine the cost and inefficiency, never mind the potential congestion of geo-stationary orbits!

Multiple sites within one town or city might benefit from a more wide-area technology like WiMAX, but this still requires multiple towers and is wide open to physical eavesdropping, so you would then rely on encryption for ‘privacy’. (Of course this kind of technology is really designed for carriers to reach many subscribers as an FWA solution, or even as a mobility solution.)

C. Getting a wireless connection on the move

If your employees are travelling, then none of the above solutions are much good. Providing a cellular radio system for the possibility that one employee might pass through a given vicinity would be as expensive as implementing private networks to every base-station site in the country, never mind the eavesdropping angle.

Ironically, it would be even more expensive to roll out a network of private phone kiosks if they were to be present in every small town. (Just consider how many offices you could connect with the same network footprint.) It is no coincidence that the pay-phone has become much less prevalent since the advent of affordable mobile services. In rural areas in some developing countries, pay-phone services are actually provided by bolting a GSM handset to the wall of a container, as this avoids the expensive requirement of a fixed connection.

III. THE PAIN AND GAIN IN DISTRIBUTED CONNECTIVITY

If this DIY outline of fixed and wireless private networks is enough to deter all but the most powerful state or industrial organisations, then we should also take a moment to consider the core economic realities which face any would-be service provider which attempts to aggregate demand from individual users in order to achieve viable scale.

The good news is that, by creating a value chain of user and provider and offering the network as a service to users in many markets, the operator can share the cost of any common infrastructure between all of its customers. This is not just about using more of the ports on a network device, but also reducing the total distance overhead per user; not forgetting that it is ultimately distance which is being ‘delivered’.

The bad news is that, to reach this nirvana of sharing and trust, the operator must offer its services to as many users in as many markets as possible, and it will face at least two harsh realities along the way.

The first is a simple matter of utilisation. To revert to our transport metaphor, compare the cost per head of motoring alone with that of travelling as a family of five. A smaller car might cost half the price of the family car, but on a per-head basis it is still 2.5 times as expensive as the family saloon carrying all five.

The same applies to many network devices, such as routers, DSLAMs and optical termination equipment. These are usually modular, and come in different sizes; but if you are aiming to serve many customers, you will want the larger unit with the lower price per port in the end. This means that, until you achieve critical mass, you may be looking at low percentage utilisation of the device, equivalent to a 5× overhead on cost per head.

Fig. 4. Utilisation of any shared network element will gradually increase as customers are added to the network. This is a ‘signature’ chart which always dips back towards 1/2, then 2/3, 3/4, and so on.
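The shape of Fig. 4 is easy to reproduce. A minimal Python sketch, assuming an illustrative 48-port module size (the text does not specify one), shows utilisation collapsing back towards 1/2, then 2/3, then 3/4 each time an extra module has to be installed:

# Utilisation of a modular device as customers are added.
PORTS_PER_MODULE = 48                             # illustrative module size

def utilisation(customers, ports_per_module=PORTS_PER_MODULE):
    """Fraction of installed ports actually in use."""
    if customers == 0:
        return 0.0
    modules = -(-customers // ports_per_module)   # ceiling division
    return customers / (modules * ports_per_module)

# Just after the k-th additional module is installed, utilisation dips
# back towards k/(k+1): 1/2, 2/3, 3/4, ...
for n in (48, 49, 96, 97, 144, 145):
    print(n, round(utilisation(n), 2))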

The second is a slightly more subtle issue of geographical deployment which is often overlooked in simplistic back-of-envelope calculations or hurried spreadsheet analyses. If your target market encompasses many towns or cities, then it is very unlikely that your network will revolve around a single switch fabric. And while it would be convenient if all your customers were to sign up in one location first, so that you could achieve scale in utilisation there before offering the next location, it is unlikely you would want to defer the wider revenue opportunity in this way, or risk losing these customers to a competitor. So the practical impact is that you have to sustain sub-scale utilisation in many geographical locations to start with, and every location will continue to add an overhead of slack (i.e., unused capacity) on an indefinite basis.


Fig. 5. Mobile networks have to provide coverage even when there is low initial utilisation at all base-station sites; but natural variance in demand means that there will continue to be an overhead of unused capacity at each separate site.

You have to look beyond the family for a transport analogy, but consider an education authority providing buses for children to travel to a number of schools in a region. Each individual bus might have a capacity of 50 pupils; but each school will require its own bus if everyone is to get to school in time, even though there might be only 400 distant pupils across ten schools. Worse, the likely variance means that, while there may be some schools with fewer than the average of 40 pupils, there will then be some with enough to call for a second bus! Even with several busloads per school, it remains the case that there might be an average of half the seats empty on one bus for each separate school, due to the impossibility of sharing seats between schools with the same arrival time.

Fig. 6. Every school needs at least one bus, and the average bus is only 80% full. However, a larger school might have more extra pupils than the average 20% spare, and will actually need two buses.
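The bus arithmetic can be checked with a short simulation. This Python sketch follows the numbers in the text (400 pupils, ten schools, 50 seats per bus); the random assignment of pupils to schools is an illustrative assumption standing in for the ‘likely variance’:

import math
import random

random.seed(1)
SCHOOLS, PUPILS, SEATS = 10, 400, 50

# Spread 400 pupils unevenly across the ten schools.
pupils_per_school = [0] * SCHOOLS
for _ in range(PUPILS):
    pupils_per_school[random.randrange(SCHOOLS)] += 1

# Every school needs at least one bus; a school with more than 50 needs two.
buses = [max(1, math.ceil(p / SEATS)) for p in pupils_per_school]
occupancy = PUPILS / (sum(buses) * SEATS)

print(pupils_per_school)                 # how unevenly the 400 pupils land
print(sum(buses), round(occupancy, 2))   # at least 10 buses, so occupancy <= 80%

Even in the best case the ten buses are only 80% full on average; whenever the random spread pushes one school above 50 pupils, an eleventh bus is needed and the average falls further.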

Both of these phenomena apply equally to fixed and wireless networks. Geographical deployment is most often mentioned in the context of cellular networks, in relation to base-station roll-out and coverage, but what is sometimes missed is that the geographical overhead persists beyond the first radio carrier capacity and remains as a driver of slack capacity per site, irrespective of the variance or otherwise of per-site take-up across the network. Exactly the same issue applies to most distributed and shared network components, and the impact is a simple mathematical reality.

These technical, financial and economic issues are covered in depth in my training course for the STEM business-modelling software for networks, which is written in the context of a model comparing WiMAX and DSL as alternative technologies for the provision of broadband in rural areas [1].

IV. TAPPING INTO AN ALWAYS-ON REVENUE STREAM

As you will have realised from the preceding sections, building a communications network outside of a laboratory is a costly and time-consuming process requiring planning, approval, investment and manpower. But when the network is switched on and offered to fee-paying ‘passengers’, then it will hum with activity from day one.

Consider a modest DWDM system carrying 40 wavelengths (λ) at 40Gbit/s each, which might cost €1m for the optical interfaces and electronics, plus perhaps another €2m for a 1% share of a 1000km buried fibre, plus another €1m for repeater amplification along the way.

For €4m of capex plus 5% annual O&M, you have up to 1.6Tbit/s of throughput. Now consider the cost–benefit analysis. Assuming a modest asset lifetime of ten years, straight-line depreciation comes to €400,000 per annum; add €200,000 of O&M and the annualised cost is just €0.6m.
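As a quick check on this arithmetic, here is a minimal Python sketch using only the figures quoted above:

# Annualised cost of the 1000km DWDM link, using the figures quoted above.
capex = 1_000_000 + 2_000_000 + 1_000_000    # interfaces + fibre share + repeaters (EUR)
asset_lifetime_years = 10
annual_om_rate = 0.05                        # O&M at 5% of capex per annum

depreciation = capex / asset_lifetime_years  # EUR 400,000 per annum
o_and_m = capex * annual_om_rate             # EUR 200,000 per annum
print(depreciation + o_and_m)                # EUR 600,000 per annum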

Fig. 7. Connecting bandwidth over a 1000km DWDM communications link.

An end user might be happy to pay €10 for the end-to-end transit of 2.5GB of data, with reference to typical ADSL monthly usage limits, equivalent to €4 per GB. Our 1000km link might constitute only 1/25 of the commercial path of any one packet in transit, so we can only recognise 4% of the associated revenue; i.e., €0.16 per GB.

A transit link will not be massively utilised all of the time, but if it is part of an international network, it may handle an overlapping sequence of regional busy periods. Suppose it is only 1% utilised as an average over a 24-hour period. That implies an average delivery of 16Gbit/s:

– 16 billion bits per second
– 960 billion bits per minute
– 57,600 billion bits per hour
– 1,382,400 billion bits per day
– 504,576,000 billion bits per annum
– 63,072,000 billion bytes per annum
– 58,740,377 GB per annum.

At €0.16 per GB, this yields a jaw-dropping annual revenue of €9.4m, compared to the annualised cost of €0.6m. To spell it out, this is an €8.8m profit and a margin of 94%.
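The whole revenue calculation can be reproduced in a few lines of Python; the only subtlety is that the 58,740,377 figure uses binary gigabytes (2^30 bytes):

capacity_bps = 40 * 40e9                  # 40 wavelengths x 40Gbit/s = 1.6Tbit/s
utilisation = 0.01                        # 1% average over 24 hours
seconds_per_year = 365 * 24 * 3600

bits_per_year = capacity_bps * utilisation * seconds_per_year
gb_per_year = bits_per_year / 8 / 2**30   # ~58.7 million GB

revenue_per_gb = 4.00 * 0.04              # EUR 4/GB end-to-end, 4% attributable here
revenue = gb_per_year * revenue_per_gb    # ~EUR 9.4m
annualised_cost = 600_000                 # from the previous calculation
margin = (revenue - annualised_cost) / revenue

print(round(gb_per_year), round(revenue), round(margin, 2))   # margin ~0.94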

V. REAPING THE BENEFITS OF SERVICE-PROVIDER EXPERTISE

The economic potential of the network provider is being steadily eroded by the advent of regulation and competition. This is good for the consumer, but limits the appeal of the network business. So many carriers are looking to exploit their distributed network assets, and to cash in on their service articulation and delivery experience and their network-management and call-centre processes, by moving into the provision of IT services.

Some of the economic aspects of managing computing infrastructure look like the wider issues of maintaining a telecoms network in microcosm. For smaller companies, the cost overhead of always-on high-bandwidth connectivity may be an expensive luxury if demand arrives in isolated bursts, whereas a hosting service enables the cost to be shared among many clients with similar requirements, yielding a lower unit cost for the client while affording a margin to the operator.

Small and medium enterprises also benefit from leasing options for fully-managed computing solutions if they do not have the scale to employ their own round-the-clock support staff. Larger companies, and many Internet-based businesses, which may place wildly varying and unpredictable demands on their processing infrastructure, will increasingly benefit from cloud computing services which provide virtualised processor solutions on demand.

Fig. 8. One company uses computing capacity from the cloud during the day while another exploits it at night, thus increasing the overall utilisation of the asset. In this sense, the cloud is like a computing time-share.

The principal value comes in sharing the infrastructure: the laws of probability mean that proportionately less overhead is required when the processing power is all in one ‘farm’, compared to each client maintaining its own, separate computing facilities. Through higher utilisation, it is also practical to replace the equipment more frequently, so everyone benefits from the latest technology.
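The effect of these ‘laws of probability’ can be illustrated with a toy simulation. In the following Python sketch the number of clients, the time window and the uniformly random demand profiles are all invented for illustration; the point is simply that sizing one shared farm for the peak of the combined load requires far less capacity than every client sizing for its own peak:

import random

random.seed(0)
CLIENTS, HOURS = 20, 24 * 7               # one illustrative week

# Invented hourly demand profiles, one per client.
demand = [[random.uniform(0, 10) for _ in range(HOURS)] for _ in range(CLIENTS)]

# Each client sized for its own busiest hour:
separate_capacity = sum(max(profile) for profile in demand)

# One shared farm sized for the busiest hour of the combined load:
pooled_capacity = max(sum(profile[h] for profile in demand) for h in range(HOURS))

print(round(separate_capacity), round(pooled_capacity))   # pooled is much smaller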

In an era where speed-to-market is key, an on-demand solution enables rapid deployment of infrastructure, and moves the required investment into monthly payments (opex) instead of high up-front capex, which makes this especially attractive to start-ups or growing businesses.

A secondary benefit lies in the associated human resources. The overhead of training IT staff in best-practice security, reliability and disaster-recovery techniques becomes the preserve of the service provider, allowing the client organisation to focus on its own business imperatives.

If this all sounds too good to be true, then what is the likely end-game as other players join the rush to be market leaders in these areas? Competition is already fierce, and will become only more so, given the low barriers to market entry (compared to the enormous investment of creating a telecoms network). While the distinguishing feature of a market leader might be innovation today, the product is massively ripe for automation and is likely to become rapidly commoditised.

Also, IT is still improving, in terms of both ease of operation and reliability. In procuring the IT infrastructure for my new company, we established that we could own and operate our own servers, even in a hosted environment, far more economically than leasing in the cloud, even with a two-year replacement cycle. For the long tail of companies like ours with relatively modest computing requirements, a great majority may be happy to suffer a once-a-year server outage if it halves the IT bill all year round.

VI. CHOOSING BETWEEN INNOVATION AND NECESSITY

It always pays to analyse the value chain and assess which elements are vital (i.e., you can’t afford the alternative) and which are just convenient.

A hosting provider offers network connectivity, round-the-clock monitoring, and often multiple point-of-presence path-protection and resilience too. These are all attributes of a network service provider which we have already established are very expensive to operate on a private or dedicated basis.

A cloud provider certainly enables better utilisation of computing resources than would be achieved in-house by a business with heavy and widely varying demands. On the other hand, many businesses with significant, but more predictable, requirements may be better served by their own dedicated and optimised solution, which may take advantage of server virtualisation to create an internal or private cloud infrastructure.

Making the distinction between the retail service provider and the physical infrastructure operator, it seems likely that the former will stand to succeed or fail solely on the strength of its market proposition and customer loyalty, as the efficiency and standardisation of internal processes inevitably converge; whereas the distance ‘delivered’ by the network operator is an economic absolute. It is hard to escape the conclusion that it is the indispensable connectivity which will last out as its most enduring asset.

Although competition (in larger markets) and regulation will constrain the margin which may be earned from the infrastructure, it will be a secure and evergreen investment. In contrast, more lucrative services based on differentiated IT innovation will ride an unpredictable economic seesaw of competition, consolidation, and inevitable obsolescence as the technology evolves and market requirements change. Service-provider investors may insist on a balanced portfolio of services, given the certainty that our desire to communicate and transport data will only ever increase.

VII. STOP PRESS

Unless, of course, there is a market-changing event, such as the discovery of a new ultra-long-wavelength radio technology which could combine high-bandwidth information rates with the same capacity for long-distance communication enjoyed by whales. Then it might be practical to carry your own private network interface about with you, and the age of the network as a service would be over. But I am not holding my breath!

ACRONYMS

DIY     do-it-yourself
DSLAM   digital subscriber line access multiplexer
DWDM    dense wavelength-division multiplexing
FTTH    fibre to the home
FWA     fixed wireless access
GPON    Gigabit passive optical network
GSM     global system for mobile communications
IP      Internet protocol
IT      information technology
LAN     local area network
O&M     operations and maintenance
STEM    strategic telecoms evaluation model
UHF     ultra high frequency
WiMAX   worldwide interoperability for microwave access

REFERENCES

[1] R. J. Bailey, “Exercises in STEM modelling: The business case for WiMAX vs. DSL in rural areas,” 1st ed., Analysys Mason Limited, 2009; 2nd ed., 2010; 3rd ed., Implied Logic Limited, 2010.